diff --git a/content/00_2_dedication.html b/content/00_2_dedication.html
index 4b7a9291..07e2cd56 100644
--- a/content/00_2_dedication.html
+++ b/content/00_2_dedication.html
@@ -1,4 +1,4 @@
-
+

Dedication

For my grandmother, Bella Manel Greenfield (October 13, 1915–April 3, 2010) 

diff --git a/content/00_4_acknowledgments.html b/content/00_4_acknowledgments.html
index 78362f0d..0901bf3d 100644
--- a/content/00_4_acknowledgments.html
+++ b/content/00_4_acknowledgments.html
@@ -1,4 +1,4 @@
-
+

Acknowledgments

The world around us moves in complicated and wonderful ways. We spend the earlier parts of our lives learning about our environment through perception and interaction. We expect the physical world around us to behave consistently with our perceptual memory: if we drop a rock, it will fall due to gravity; if a gust of wind blows, lighter objects will be tossed farther by the wind. This class focuses on understanding, simulating, and incorporating motion-based elements of our physical world into the digital worlds that we create. Our hope is to create intuitive, rich, and more satisfying experiences by drawing from the perceptual memories of our users.

diff --git a/content/00_5_introduction.html b/content/00_5_introduction.html
index dba976d5..e49cd025 100644
--- a/content/00_5_introduction.html
+++ b/content/00_5_introduction.html
@@ -1,4 +1,4 @@
-
+

Introduction

Over a decade ago, I self-published The Nature of Code, an online resource and print book exploring the unpredictable evolutionary and emergent properties of nature in software via the creative coding framework Processing. It’s the understatement of the century to say that much has changed in the world of technology and creative media since then, and so here I am again, with a new and rebooted version of this book built around JavaScript and the p5.js library. The book has a few new coding tricks this time, but it’s the same old nature—birds still flap their wings, and apples still fall on our heads.

What Is This Book?

@@ -22,7 +22,7 @@

How Are You Reading This Book?

Are you reading this book on a Kindle? Printed paper? On your laptop in PDF form? On a tablet showing an animated HTML5 version? Are you strapped to a chair, absorbing the content directly into your brain via a series of electrodes, tubes, and cartridges?

My dream has always been to write this book in one single format (in this case, a collection of Notion documents) and then, after pressing a magic button (npm run build), out comes the book in any and all formats you might want—PDF, HTML5, printed hard copy, Kindle, and so on. This was largely made possible by the Magic Book project, which is an open source framework for self-publishing originally developed at ITP by Rune Madsen and Steve Klise. Everything has been designed and styled using CSS—no manual typesetting or layout.

The reality of putting this book together isn’t quite so clean as that, and the story of how it happened is long. If you’re interested in learning more, make sure to read the book’s acknowledgments, and then go hire the people I’ve thanked to help you publish a book! I’ll also include more details in the associated GitHub repository.

-

The bottom line is that no matter what format you’re reading it in, the material is all the same. The only difference will be in how you experience the code examples—more on that in “How to Read the Code”.

+

The bottom line is that no matter what format you’re reading it in, the material is all the same. The only difference will be in how you experience the code examples—more on that in “How to Read the Code”.

The Coding Train Connection

Personally, I still love an assembled amalgamation of cellulose pulp, meticulously bound together with a resilient spine, upon which pigmented compounds have been artfully deployed to convey words and ideas. Yet, ever since 2012, when I impulsively recorded my very first video lesson about programming in my office at ITP, I’ve discovered the tremendous value and joy in conveying ideas and lessons through moving pictures.

All this is to say, I have a YouTube channel called the Coding Train. I mentioned it earlier when discussing options for learning the prerequisite material for this book, and if you continue reading, you’ll find I continue to reference related videos. I might allude to one I made about a related algorithm or alternative technique for a particular coding example, or suggest a series on a tangential concept that could provide additional context to a topic I’m exploring.

diff --git a/content/00_randomness.html b/content/00_randomness.html
index 7f720775..3b3baa89 100644
--- a/content/00_randomness.html
+++ b/content/00_randomness.html
@@ -1,4 +1,4 @@
-
+

Chapter 0. Randomness

@@ -546,7 +546,7 @@

Noise Ranges

//{!1} Use map() to customize the range of Perlin noise.
let x = map(n, 0, 1, 0, width);
ellipse(x, 180, 16, 16);
-//{!1} Move forward in "time".
+//{!1} Move forward in time.
t += 0.01;
}

The same logic can be applied to the random walker, assigning both its x- and y-values according to Perlin noise. This creates a smoother, more organic random walk.
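That noise-driven walker can be sketched in plain JavaScript. Here `map()` re-creates p5.js's mapping function, and `noise()` is a tiny smoothed-lattice stand-in for p5.js's Perlin noise (an assumption for the sake of a runnable snippet, not the real implementation):

```javascript
// map() behaves like p5.js's: rescale n from [inLo, inHi] to [outLo, outHi].
function map(n, inLo, inHi, outLo, outHi) {
  return outLo + ((n - inLo) / (inHi - inLo)) * (outHi - outLo);
}

// A tiny 1D value-noise stand-in: smooth interpolation between fixed
// pseudorandom lattice values. NOT real Perlin noise, just smooth like it.
const lattice = [0.42, 0.87, 0.13, 0.56, 0.91, 0.28, 0.64, 0.05];
function noise(t) {
  const i = Math.floor(t) % lattice.length;
  const j = (i + 1) % lattice.length;
  const f = t - Math.floor(t);
  const u = f * f * (3 - 2 * f); // smoothstep easing
  return lattice[i] * (1 - u) + lattice[j] * u;
}

// The walker samples noise at two different "time" offsets, one per axis,
// so x and y wander independently but each changes smoothly.
class Walker {
  constructor(width, height) {
    this.width = width;
    this.height = height;
    this.tx = 0;    // time offset for x
    this.ty = 1000; // far-away offset for y
  }
  step() {
    this.x = map(noise(this.tx), 0, 1, 0, this.width);
    this.y = map(noise(this.ty), 0, 1, 0, this.height);
    this.tx += 0.01;
    this.ty += 0.01;
  }
}
```

Because each call to `step()` moves time forward only a little, consecutive positions stay close together, producing the smooth, organic path.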

diff --git a/content/01_vectors.html b/content/01_vectors.html
index 50571fa1..8b1bfa71 100644
--- a/content/01_vectors.html
+++ b/content/01_vectors.html
@@ -1,4 +1,4 @@
-
+

Chapter 1. Vectors

@@ -308,7 +308,7 @@

Example 1.2: Bouncing Ball with Vectors!

It may not always be obvious when to directly access an object’s properties versus when to reference the object as a whole or use one of its methods. The goal of this chapter (and most of this book) is to help you distinguish between these scenarios by providing a variety of examples and use cases.

Exercise 1.1

-

Take one of the walker examples from Chapter 0 and convert it to use vectors.

+

Take one of the walker examples from Chapter 0 and convert it to use vectors.

Exercise 1.2

@@ -674,7 +674,7 @@

Motion with Vectors

  • Add the velocity to the position.
  • Draw the object at the position.
-

    In the bouncing ball example, all this code happened within setup() and draw(). What I want to do now is move toward encapsulating all the logic for an object’s motion inside a class. This way, I can create a foundation for programming moving objects that I can easily reuse again and again. (See “The Random Walker Class” for a brief review of OOP basics.)

    +

    In the bouncing ball example, all this code happened within setup() and draw(). What I want to do now is move toward encapsulating all the logic for an object’s motion inside a class. This way, I can create a foundation for programming moving objects that I can easily reuse again and again. (See “The Random Walker Class” for a brief review of OOP basics.)

    To start, I’m going to create a generic Mover class that will describe a shape moving around the canvas. For that, I must consider the following two questions:

    1. What data does a mover have?
2. What functionality does it have?

@@ -788,7 +788,7 @@

      Example 1.7: Motion 101 (Velocity)

      }

      If OOP is at all new to you, one aspect here may seem a bit strange. I spent the beginning of this chapter discussing the p5.Vector class, and this class is the template for making the position object and the velocity object. So what are those objects doing inside yet another object, the Mover object?

      In fact, this is just about the most normal thing ever. An object is something that holds data (and functionality). That data can be numbers, or it can be other objects (arrays too)! You’ll see this over and over again in this book. In Chapter 4, for example, I’ll write a class to describe a system of particles. That ParticleSystem object will include a list of Particle objects . . . and each Particle object will have as its data several p5.Vector objects!
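This nesting can be sketched in a few lines of plain JavaScript. A bare-bones `Vector` class stands in for p5.Vector here (an assumption so the snippet runs on its own), but the structure is the point: the Mover's data is itself made of objects.

```javascript
// Minimal stand-in for p5.Vector: just enough to hold and add components.
class Vector {
  constructor(x, y) { this.x = x; this.y = y; }
  add(v) { this.x += v.x; this.y += v.y; }
}

// A Mover whose data members are other objects: two Vectors.
class Mover {
  constructor() {
    this.position = new Vector(100, 100);
    this.velocity = new Vector(1, -1);
  }
  update() {
    // One object (Mover) delegating work to the objects it contains.
    this.position.add(this.velocity);
  }
}
```

Calling `update()` repeatedly moves the position by the velocity each time, exactly the pattern every example in this book builds on.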

      -

      You may have also noticed in the Mover class that I’m setting the initial position and velocity directly within the constructor, without using any arguments. While this approach keeps the code simple for now, I’ll explore the benefits of adding arguments to the constructor in Chapter 2.

      +

      You may have also noticed in the Mover class that I’m setting the initial position and velocity directly within the constructor, without using any arguments. While this approach keeps the code simple for now, I’ll explore the benefits of adding arguments to the constructor in Chapter 2.

      At this point, you hopefully feel comfortable with two concepts: (1) what a vector is and (2) how to use vectors inside an object to keep track of its position and movement. This is an excellent first step and deserves a mild round of applause. Before standing ovations are in order, however, you need to make one more, somewhat bigger step forward. After all, watching the Motion 101 example is fairly boring. The circle never speeds up, never slows down, and never turns. For more sophisticated motion—the kind of motion that appears in the world around us—one more vector needs to be added to the class: acceleration.

      Acceleration

      Acceleration is the rate of change of velocity. Think about that definition for a moment. Is it a new concept? Not really. Earlier I defined velocity as the rate of change of position, so in essence I’m developing a trickle-down effect. Acceleration affects velocity, which in turn affects position. (To provide some brief foreshadowing, this point will become even more crucial in the next chapter, when I show how forces like friction affect acceleration, which affects velocity, which affects position.) In code, this trickle-down effect reads like this:
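A minimal stand-alone sketch of that trickle-down, with a bare-bones `Vector` in place of p5.Vector and made-up numbers for illustration:

```javascript
// Minimal stand-in for p5.Vector.
class Vector {
  constructor(x, y) { this.x = x; this.y = y; }
  add(v) { this.x += v.x; this.y += v.y; }
}

let position = new Vector(0, 0);
let velocity = new Vector(0, 0);
let acceleration = new Vector(0.1, 0);

// One frame of motion: acceleration changes velocity,
// then velocity changes position.
velocity.add(acceleration);
position.add(velocity);
```

Run once per frame, those two lines are the whole motion algorithm; everything else in the chapter is about deciding what `acceleration` should be.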

diff --git a/content/02_forces.html b/content/02_forces.html
index 30a94ce7..43f40134 100644
--- a/content/02_forces.html
+++ b/content/02_forces.html
@@ -1,4 +1,4 @@
-
      +

      Chapter 2. Forces

      @@ -74,7 +74,7 @@

      Weight vs. Mass

      In the world of p5.js, what is mass anyway? Aren’t we dealing with pixels? Let’s start simple and say that in a pretend pixel world, all objects have a mass equal to 1. Anything divided by 1 equals itself, and so, in this simple world, we have this:

      \vec{A} = \vec{F}
      -

      I’ve effectively removed mass from the equation, making the acceleration of an object equal to force. This is great news. After all, Chapter 1 described acceleration as the key to controlling the movement of objects in a canvas. I said that the position changes according to the velocity, and the velocity according to acceleration. Acceleration seemed to be where it all began. Now you can see that force is truly where it all begins.

      +

      I’ve effectively removed mass from the equation, making the acceleration of an object equal to force. This is great news. After all, Chapter 1 described acceleration as the key to controlling the movement of objects in a canvas. I said that the position changes according to the velocity, and the velocity according to acceleration. Acceleration seemed to be where it all began. Now you can see that force is truly where it all begins.

      Let’s take the Mover class, with position, velocity, and acceleration:

      class Mover {
         constructor() {
      @@ -143,7 +143,7 @@ 

      Factoring In Mass

      Units of Measurement

      Now that I’m introducing mass, it’s important to make a quick note about units of measurement. In the real world, things are measured in specific units: two objects are 3 meters apart, the baseball is moving at a rate of 90 miles per hour, or this bowling ball has a mass of 6 kilograms. Sometimes you do want to take real-world units into consideration. In this chapter, however, I’m going to stick with units of measurement in pixels (“These two circles are 100 pixels apart”) and frames of animation (“This circle is moving at a rate of 2 pixels per frame,” the aforementioned time step).

      In the case of mass, p5.js doesn’t have any unit of measurement to use. How much mass is in any given pixel? You might enjoy inventing your own p5.js unit of mass to associate with those values, like “10 pixeloids” or “10 yurkles.”

      -

      For demonstration purposes, I’ll tie mass to pixels (the larger a circle’s diameter, the larger the mass). This will allow me to visualize the mass of an object, albeit inaccurately. In the real world, size doesn’t indicate mass. A small metal ball could have a much higher mass than a large balloon because of its higher density. And for two circular objects with equal density, I’ll also note that mass should be tied to the formula for the area of a circle: \pi r^2. (This will be addressed in Exercise 2.11, and I’ll say more about \pi and circles in Chapter 3.)

      +

      For demonstration purposes, I’ll tie mass to pixels (the larger a circle’s diameter, the larger the mass). This will allow me to visualize the mass of an object, albeit inaccurately. In the real world, size doesn’t indicate mass. A small metal ball could have a much higher mass than a large balloon because of its higher density. And for two circular objects with equal density, I’ll also note that mass should be tied to the formula for the area of a circle: \pi r^2. (This will be addressed in Exercise 2.11, and I’ll say more about \pi and circles in Chapter 3.)

    Mass is a scalar, not a vector, as it’s just one number describing the amount of matter in an object. I could get fancy and compute the area of a shape as its mass, but it’s simpler to begin by saying, “Hey, the mass of this object is . . . um, I dunno . . . how about 10?”

    constructor() {
    @@ -197,7 +197,7 @@ 

    Units of Measurement

    Let’s take a moment to recap what I’ve covered so far. I’ve defined what a force is (a vector), and I’ve shown how to apply a force to an object (divide it by mass and add it to the object’s acceleration vector). What’s missing? Well, I have yet to figure out how to calculate a force in the first place. Where do forces come from?

    Exercise 2.2

    -

    You could write applyForce() in another way, using the static method div() instead of copy(). Rewrite applyForce() by using the static method. For help with this exercise, review static methods in “Static vs. Nonstatic Methods”.

    +

    You could write applyForce() in another way, using the static method div() instead of copy(). Rewrite applyForce() by using the static method. For help with this exercise, review static methods in “Static vs. Nonstatic Methods”.

    applyForce(force) {
       let f = p5.Vector.div(force, this.mass);
       this.acceleration.add(f);
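The two styles can be put side by side in a stand-alone sketch. The `Vector` class below is a stand-in that mimics the relevant slice of p5.Vector's API (instance `copy()`, `div()`, `add()` plus a static `div()`); the method names `applyForceCopy` and `applyForceStatic` are hypothetical, chosen just to contrast the two approaches:

```javascript
// Stand-in mimicking the relevant part of p5.Vector's API.
class Vector {
  constructor(x, y) { this.x = x; this.y = y; }
  copy() { return new Vector(this.x, this.y); }
  div(n) { this.x /= n; this.y /= n; return this; }
  add(v) { this.x += v.x; this.y += v.y; return this; }
  static div(v, n) { return new Vector(v.x / n, v.y / n); }
}

class Mover {
  constructor(mass) {
    this.mass = mass;
    this.acceleration = new Vector(0, 0);
  }
  // Style 1: copy the force, then divide the copy in place.
  applyForceCopy(force) {
    const f = force.copy().div(this.mass);
    this.acceleration.add(f);
  }
  // Style 2: the static method returns a new, scaled vector.
  applyForceStatic(force) {
    const f = Vector.div(force, this.mass);
    this.acceleration.add(f);
  }
}
```

Either way, the crucial property is the same: the original `force` vector is never mutated, so the same force can safely be applied to many movers.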
    @@ -228,7 +228,7 @@ 

    Example 2.1: Forces

    mover.applyForce(wind); }

    Now I have two forces, pointing in different directions and with different magnitudes, both applied to the object mover. I’m beginning to get somewhere. I’ve built a world, an environment with forces that act on objects!

    -

    Let’s look at what happens now when I add a second object with a variable mass. To do this, you’ll probably want to do a quick review of OOP. Again, I’m not covering all the basics of programming here (for that, you can check out any of the intro p5.js books or video tutorials listed in “The Coding Train Connection”). However, since the idea of creating a world filled with objects is fundamental to all the examples in this book, it’s worth taking a moment to walk through the steps of going from one object to many.

    +

    Let’s look at what happens now when I add a second object with a variable mass. To do this, you’ll probably want to do a quick review of OOP. Again, I’m not covering all the basics of programming here (for that, you can check out any of the intro p5.js books or video tutorials listed in “The Coding Train Connection”). However, since the idea of creating a world filled with objects is fundamental to all the examples in this book, it’s worth taking a moment to walk through the steps of going from one object to many.

    This is where I left the Mover class. Notice that it’s identical to the Mover class created in Chapter 1, with two additions, mass and a new applyForce() method:

    class Mover {
       constructor() {
    @@ -279,7 +279,7 @@ 

    Example 2.1: Forces

    } }
    -

    Now that the class is written, I can create more than one Mover object.

    +

    Now that the class is written, I can create more than one Mover object:

    let moverA = new Mover();
     let moverB = new Mover();
    @@ -371,7 +371,7 @@

    Modeling a Force

    Making up forces will actually get you quite far—after all, I just made up a pretty good approximation of Earth’s gravity. Ultimately, the world of p5.js is an orchestra of pixels, and you’re the conductor, so whatever you deem appropriate to be a force, well by golly, that’s the force it should be! Nevertheless, there may come a time when you find yourself wondering, “But how does it all really work?” That’s when modeling forces, instead of just making them up, enters the picture.

    Parsing Formulas

    -

    In a moment, I’m going to write out the formula for friction. This won’t be the first time you’ve seen a formula in this book; I just finished up the discussion of Newton’s second law, \vec{F} = M \times \vec{A} (or force equals mass times acceleration). You hopefully didn’t spend a lot of time worrying about that formula, because it’s just a few characters and symbols. Nevertheless, it’s a scary world out there. Just take a look at the equation for a normal distribution, which I covered (without presenting the formula) in “A Normal Distribution of Random Numbers”:

    +

    In a moment, I’m going to write out the formula for friction. This won’t be the first time you’ve seen a formula in this book; I just finished up the discussion of Newton’s second law, \vec{F} = M \times \vec{A} (or force equals mass times acceleration). You hopefully didn’t spend a lot of time worrying about that formula, because it’s just a few characters and symbols. Nevertheless, it’s a scary world out there. Just take a look at the equation for a normal distribution, which I covered (without presenting the formula) in “A Normal Distribution of Random Numbers”:

    \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}

    Formulas are regularly written with many symbols (often with letters from the Greek alphabet). Here’s the formula for friction (as indicated by \vec{f}):

    \vec{f} = -\mu N \hat{v}
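Translated term by term into code, the formula reads: take the unit vector of velocity (\hat{v}), flip it (the minus sign), and scale by \mu N. A sketch with a stand-in vector class (the function name `frictionForce` is hypothetical):

```javascript
// Stand-in for the handful of p5.Vector operations the formula needs.
class Vector {
  constructor(x, y) { this.x = x; this.y = y; }
  copy() { return new Vector(this.x, this.y); }
  mag() { return Math.hypot(this.x, this.y); }
  normalize() {
    const m = this.mag();
    if (m > 0) { this.x /= m; this.y /= m; }
    return this;
  }
  mult(n) { this.x *= n; this.y *= n; return this; }
}

// f = -1 * mu * N * v-hat
function frictionForce(velocity, mu, normalMag) {
  return velocity.copy()
    .normalize()          // v-hat: the direction of motion
    .mult(-1)             // friction points the opposite way
    .mult(mu * normalMag); // scaled by mu * N
}
```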
    @@ -924,7 +924,7 @@

Example 2.7: Attraction with Many Movers

movers[i].show(); } }

    -

    This is just a small taste of what’s possible with arrays of objects. Stay tuned for a more in-depth exploration of adding and removing multiple objects from the canvas in Chapter 4, which covers particle systems.

    +

    This is just a small taste of what’s possible with arrays of objects. Stay tuned for a more in-depth exploration of adding and removing multiple objects from the canvas in Chapter 4, which covers particle systems.

    Exercise 2.12

    In Example 2.7, there’s a system (an array) of Mover objects and one Attractor object. Build an example that has systems of both movers and attractors. What if you make the attractors invisible? Can you create a pattern/design from the trails of objects moving around attractors?

    @@ -1061,7 +1061,7 @@

    Example 2.9: n Bodies

    } }

    The nested loop solution in Example 2.9 leads to what’s called an n-squared algorithm, meaning the number of calculations is equal to the number of bodies squared. If I were to increase the number of bodies, the simulation would start to slow significantly because of the number of calculations required.
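The shape of that n-squared loop can be sketched by simply counting the pairwise checks, independent of any particular `Body` class (the function name is hypothetical):

```javascript
// Every body visits every other body, so the work grows as n * (n - 1):
// effectively n squared as n gets large.
function countPairwiseChecks(n) {
  let checks = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      if (i !== j) checks++; // a body doesn't attract itself
    }
  }
  return checks;
}
```

Doubling the number of bodies roughly quadruples the number of force calculations, which is why the slowdown appears so quickly.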

    -

    In Chapter 5, I’ll explore strategies for optimizing sketches like this one, with a particular focus on spatial subdivision algorithms. Spatial subdivision, in combination with the concept of quadtrees and an algorithm called Barnes-Hut, is particularly effective for improving efficiency in simulations such as the n-body one discussed here.

    +

    In Chapter 5, I’ll explore strategies for optimizing sketches like this one, with a particular focus on spatial subdivision algorithms. Spatial subdivision, in combination with the concept of quadtrees and an algorithm called Barnes-Hut, is particularly effective for improving efficiency in simulations such as the n-body one discussed here.

    Exercise 2.15

    Change the attraction force in Example 2.9 to a repulsion force. Can you create an example in which all the Body objects are attracted to the mouse but repel one another? Think about how you need to balance the relative strength of the forces and how to most effectively use distance in your force calculations.

diff --git a/content/03_oscillation.html b/content/03_oscillation.html
index 7ff19010..4a667253 100644
--- a/content/03_oscillation.html
+++ b/content/03_oscillation.html
@@ -1,4 +1,4 @@
-
    +

    Chapter 3. Oscillation

    @@ -16,9 +16,9 @@

    Chapter 3. Oscillation

    Gala by Bridget Riley, 1974; acrylic on canvas, 159.7 × 159.7 cm

    Bridget Riley, a celebrated British artist, was a driving force behind the Op Art movement of the 1960s. Her work features geometric patterns that challenge the viewer’s perceptions and evoke feelings of movement or vibration. Her 1974 piece Gala showcases a series of curvilinear forms that ripple across the canvas, evoking the natural rhythm of the sine wave.

    -

    In Chapters 1 and 2, I carefully worked out an object-oriented structure to animate a shape in a p5.js canvas, using a vector to represent position, velocity, and acceleration driven by forces in the environment. I could move straight from here into topics such as particle systems, steering forces, group behaviors, and more. However, doing so would mean skipping a fundamental aspect of motion in the natural world: oscillation, or the back-and-forth movement of an object around a central point or position.

    +

    In Chapters 1 and 2, I carefully worked out an object-oriented structure to animate a shape in a p5.js canvas, using a vector to represent position, velocity, and acceleration driven by forces in the environment. I could move straight from here into topics such as particle systems, steering forces, group behaviors, and more. However, doing so would mean skipping a fundamental aspect of motion in the natural world: oscillation, or the back-and-forth movement of an object around a central point or position.

    To model oscillation, you need to understand a little bit about trigonometry, the mathematics of triangles. Learning some trig will give you new tools to generate patterns and create new motion behaviors in a p5.js sketch. You’ll learn to harness angular velocity and acceleration to spin objects as they move. You’ll be able to use the sine and cosine functions to model nice ease-in, ease-out wave patterns. You’ll also learn to calculate the more complex forces at play in situations that involve angles, such as a pendulum swinging or a box sliding down an incline.

    -

    I’ll start with the basics of working with angles in p5.js, then cover several aspects of trigonometry. In the end, I’ll connect trigonometry with what you learned about forces in Chapter 2. This chapter’s content will pave the way for more sophisticated examples that require trig later in this book.

    +

    I’ll start with the basics of working with angles in p5.js, then cover several aspects of trigonometry. In the end, I’ll connect trigonometry with what you learned about forces in Chapter 2. This chapter’s content will pave the way for more sophisticated examples that require trig later in this book.

    Angles

    Before going any further, I need to make sure you understand how the concept of an angle fits into creative coding in p5.js. If you have experience with p5.js, you’ve undoubtedly encountered this issue while using the rotate() function to rotate and spin objects. You’re most likely to be familiar with the concept of an angle as measured in degrees (see Figure 3.1).

    @@ -63,7 +63,7 @@

    Exercise 3.1

    Angular Motion

    Another term for rotation is angular motion—that is, motion about an angle. Just as linear motion can be described in terms of velocity—the rate at which an object’s position changes over time—angular motion can be described in terms of angular velocity—the rate at which an object’s angle changes over time. By extension, angular acceleration describes changes in an object’s angular velocity.

    -

    Luckily, you already have all the math you need to understand angular motion. Remember the stuff I dedicated almost all of Chapters 1 and 2 to explaining?

    +

    Luckily, you already have all the math you need to understand angular motion. Remember the stuff I dedicated almost all of Chapters 1 and 2 to explaining?

    \overrightarrow{\text{velocity}} = \overrightarrow{\text{velocity}} + \overrightarrow{\text{acceleration}}
    \overrightarrow{\text{position}} = \overrightarrow{\text{position}} + \overrightarrow{\text{velocity}}
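Angular motion reuses that exact algorithm with plain numbers instead of vectors. A stand-alone sketch (variable names follow the chapter's convention; the starting acceleration is made up):

```javascript
// The same motion algorithm, applied to an angle rather than a position.
let angle = 0;
let angleVelocity = 0;
let angleAcceleration = 0.001;

// One frame of rotation:
function updateRotation() {
  angleVelocity += angleAcceleration; // velocity = velocity + acceleration
  angle += angleVelocity;             // position = position + velocity
}
```

Because the acceleration is constant here, each call makes the object spin a little faster than the last, just as constant linear acceleration makes a mover speed up.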
    @@ -168,7 +168,7 @@

    Exercise 3.2

    At this point, if you were to actually go ahead and create a Mover object, you wouldn’t see it behave any differently. This is because the angular acceleration is initialized to zero (this.angleAcceleration = 0;). For the object to rotate, it needs a nonzero acceleration! Certainly, one option is to hardcode a number in the constructor:

        this.angleAcceleration = 0.01;
    -

    You can produce a more interesting result, however, by dynamically assigning an angular acceleration in the update() method according to forces in the environment. This could be my cue to start researching the physics of angular acceleration based on the concepts of torque and moment of inertia, but at this stage, that level of simulation would be a bit of a rabbit hole. (I’ll cover modeling angular acceleration with a pendulum in more detail in “The Pendulum”, as well as look at how third-party physics libraries realistically model rotational motion in Chapter 6.)

    +

    You can produce a more interesting result, however, by dynamically assigning an angular acceleration in the update() method according to forces in the environment. This could be my cue to start researching the physics of angular acceleration based on the concepts of torque and moment of inertia, but at this stage, that level of simulation would be a bit of a rabbit hole. (I’ll cover modeling angular acceleration with a pendulum in more detail in “The Pendulum”, as well as look at how third-party physics libraries realistically model rotational motion in Chapter 6.)

    Instead, a quick-and-dirty solution that yields creative results will suffice. A reasonable approach is to calculate angular acceleration as a function of the object’s linear acceleration, its rate of change of velocity along a path vector, as opposed to its rotation. Here’s an example:

        // Use the x-component of the object’s linear acceleration to calculate angular acceleration.
         this.angleAcceleration = this.acceleration.x;
    @@ -487,7 +487,7 @@

    Example 3.5: Simple Harmonic Motion

    Exercise 3.7

    -

    Using the sine function, create a simulation of a weight (sometimes referred to as a bob) that hangs from a spring from the top of the window. Use the map() function to calculate the vertical position of the bob. In “Spring Forces”, I’ll demonstrate how to create this same simulation by modeling the forces of a spring according to Hooke’s law.

    +

    Using the sine function, create a simulation of a weight (sometimes referred to as a bob) that hangs from a spring from the top of the window. Use the map() function to calculate the vertical position of the bob. In “Spring Forces”, I’ll demonstrate how to create this same simulation by modeling the forces of a spring according to Hooke’s law.

    Oscillation with Angular Velocity

    An understanding of oscillation, amplitude, and period (or frequency) can be essential in the course of simulating real-world behaviors. However, there’s a slightly easier way to implement the simple harmonic motion from Example 3.5, one that achieves the same result with fewer variables. Take one more look at the oscillation formula:

    @@ -495,7 +495,7 @@

    Oscillation with Angular Velocity

    Now I’ll rewrite it in a slightly different way:

    let x = amplitude * sin( some value that increments slowly );

    If you care about precisely defining the period of oscillation in terms of frames of animation, you might need the formula as I first wrote it. If you don’t care about the exact period, however—for example, if you’ll be choosing it randomly—all you really need inside the sin() function is a value that increments slowly enough for the object’s motion to appear smooth from one frame to the next. Every time this value ticks past a multiple of 2\pi, the object will have completed one cycle of oscillation.

    -

    This technique mirrors what I did with Perlin noise in Chapter 0. In that case, I incremented an offset variable (which I called t or xoff) to sample various outputs from the noise() function, creating a smooth transition of values. Now, I’m going to increment a value (I’ll call it angle) that’s fed into the sin() function. The difference is that the output from sin() is a smoothly repeating sine wave, without any randomness.

    +

    This technique mirrors what I did with Perlin noise in Chapter 0. In that case, I incremented an offset variable (which I called t or xoff) to sample various outputs from the noise() function, creating a smooth transition of values. Now, I’m going to increment a value (I’ll call it angle) that’s fed into the sin() function. The difference is that the output from sin() is a smoothly repeating sine wave, without any randomness.

    You might be wondering why I refer to the incrementing value as angle, given that the object has no visible rotation. The term angle is used because the value is passed into the sin() function, and angles are the traditional inputs to trigonometric functions. With this in mind, I can reintroduce the concept of angular velocity (and acceleration) to rewrite the example to calculate the x position in terms of a changing angle. I’ll assume these global variables:

    let angle = 0;
     let angleVelocity = 0.05;
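Putting those globals to work, one oscillation step looks like this in a stand-alone sketch (plain `Math.sin` in place of p5.js's `sin()`; the amplitude value and the function name are made up for illustration, and the drawing calls are omitted):

```javascript
let angle = 0;
let angleVelocity = 0.05;
const amplitude = 100;

// One frame: compute the oscillating x, then advance the angle.
function oscillateStep() {
  const x = amplitude * Math.sin(angle);
  angle += angleVelocity;
  return x;
}
```

Each call nudges `angle` forward by `angleVelocity`; every time the angle ticks past a multiple of 2\pi, `x` has completed one full cycle between -100 and 100.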
    @@ -700,7 +700,7 @@

    Spring Forces

    Figure 3.14: A spring with an anchor and bob

    -

    Exploring the mathematics of triangles and waves has been lovely, but perhaps you’re starting to miss Newton’s laws of motion and vectors. After all, the core of this book is about simulating the physics of moving bodies. In “Properties of Oscillation”, I modeled simple harmonic motion by mapping a sine wave to a range of pixels on a canvas. Exercise 3.7 asked you to use this technique to create a simulation of a bob hanging from a spring with the sin() function. That kind of quick-and-dirty, one-line-of-code solution won’t do, however, if what you really want is a bob hanging from a spring that responds to other forces in the environment (wind, gravity, and so on). To achieve a simulation like that, you need to model the force of the spring by using vectors.

    +

    Exploring the mathematics of triangles and waves has been lovely, but perhaps you’re starting to miss Newton’s laws of motion and vectors. After all, the core of this book is about simulating the physics of moving bodies. In “Properties of Oscillation”, I modeled simple harmonic motion by mapping a sine wave to a range of pixels on a canvas. Exercise 3.7 asked you to use this technique to create a simulation of a bob hanging from a spring with the sin() function. That kind of quick-and-dirty, one-line-of-code solution won’t do, however, if what you really want is a bob hanging from a spring that responds to other forces in the environment (wind, gravity, and so on). To achieve a simulation like that, you need to model the force of the spring by using vectors.

    I’ll consider a spring to be a connection between a movable bob (or weight) and a fixed anchor point (see Figure 3.14).


    let restLength = 100;

    I’ll then use Hooke’s law to calculate the magnitude of the force. For that, I need k and x. Calculating k is easy; it’s just a constant, so I’ll make something up:

    let k = 0.1;
    Finding x is perhaps a bit more difficult. I need to know the difference between the current length and the rest length. The rest length is defined as the variable restLength. What’s the current length? The distance between the anchor and the bob. And how can I calculate that distance? How about the magnitude of a vector that points from the anchor to the bob? (Note that this is exactly the same process I employed to find the distance between objects for the purposes of calculating gravitational attraction in Chapter 2.)

// A vector pointing from the anchor to the bob gives you the current length of the spring.
let dir = p5.Vector.sub(bob, anchor);
let currentLength = dir.mag();
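Putting Hooke's law together with this vector can be sketched in plain JavaScript, with a minimal stand-in for the few p5.Vector operations used (the anchor, bob, and constants here are illustrative values, not the book's exact code):

```javascript
// Minimal stand-ins for the p5.Vector operations used below, so the
// spring math can run outside p5.js. All values are illustrative.
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y });
const mag = (v) => Math.sqrt(v.x * v.x + v.y * v.y);
const scale = (v, s) => ({ x: v.x * s, y: v.y * s });

const anchor = { x: 0, y: 0 };
const bob = { x: 0, y: 160 }; // the bob hangs 160 pixels below the anchor
const restLength = 100;
const k = 0.1;

// Hooke’s law: the force magnitude is -k * x, directed along the spring.
const dir = sub(bob, anchor);
const currentLength = mag(dir);
const stretch = currentLength - restLength;        // x in Hooke’s law (here, 60)
const unit = scale(dir, 1 / currentLength);        // normalized direction
const springForce = scale(unit, -1 * k * stretch); // pulls the bob back toward rest length
```

With a stretch of 60 pixels and k of 0.1, the force's y-component comes out to -6, pulling the bob back up toward the anchor.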

    The Pendulum

    this.r = 125;

    I also know the bob’s current angle relative to the pivot: it’s stored in the variable angle. Between the arm length and the angle, what I have is a polar coordinate for the bob: (r,\theta). What I really need is a Cartesian coordinate, but luckily I already know how to use sine and cosine to convert from polar to Cartesian. And so:

this.bob = createVector(this.r * sin(this.angle), this.r * cos(this.angle));
    Notice that I’m using sin(this.angle) for the x value and cos(this.angle) for the y. This is the opposite of what I showed you in “Polar vs. Cartesian Coordinates”. The reason is that I’m now looking for the top angle of a right triangle pointing down, as depicted in Figure 3.21. This angle lives between the y-axis and the hypotenuse, instead of between the x-axis and the hypotenuse, as you saw earlier in Figure 3.9.

    Right now, the value of this.bob is assuming that the pivot is at point (0, 0). To get the bob’s position relative to wherever the pivot actually happens to be, I can just add pivot to the bob vector:

    this.bob.add(this.pivot);
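As a quick plain-JavaScript check of the conversion (the pivot position, arm length, and angle here are illustrative values):

```javascript
// Polar-to-Cartesian conversion for the pendulum bob, checked outside
// p5.js. Pivot, arm length, and angle are illustrative.
const pivot = { x: 320, y: 0 };
const r = 125;
const angle = Math.PI / 4; // 45 degrees off the vertical

// sin() for x and cos() for y, because the angle hangs off the y-axis.
const bob = {
  x: pivot.x + r * Math.sin(angle),
  y: pivot.y + r * Math.cos(angle),
};
```

At 45 degrees, the bob sits the same distance to the right of the pivot as it does below it, which is exactly what the swapped sine and cosine produce.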

    Now all that remains is the little matter of drawing a line and a circle (you should be more creative, of course):


fill(127);
line(this.pivot.x, this.pivot.y, this.bob.x, this.bob.y);
circle(this.bob.x, this.bob.y, 16);
    Finally, a real-world pendulum is going to experience a certain amount of friction (at the pivot point) and air resistance. As it stands, the pendulum would swing forever with the given code. To make it more realistic, I can slow the pendulum with a damping trick. I say trick because rather than model the resistance forces with some degree of accuracy (as I did in Chapter 2), I can achieve a similar result simply by reducing the angular velocity by an arbitrary amount during each cycle. The following code reduces the velocity by 1 percent (or multiplies it by 0.99) for each frame of animation:

    this.angleVelocity *= 0.99;
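It's worth seeing how quickly this damping compounds. Since the velocity is multiplied by 0.99 every frame, after n frames only 0.99 to the nth power remains (the frame count below is an illustrative value):

```javascript
// How much does multiplying by 0.99 each frame actually damp the motion?
// After n frames, the angular velocity is scaled by 0.99^n.
let angleVelocity = 1.0;
for (let frame = 0; frame < 60; frame++) {
  angleVelocity *= 0.99;
}
// After 60 frames (about one second at 60 fps), roughly 55% remains.
```

So even a 1 percent reduction per frame brings the pendulum to a near standstill within a handful of seconds.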

    Putting everything together, I have the following example (with the pendulum beginning at a 45-degree angle).


    Exercise 3.17

    The Ecosystem Project

    Take one of your creatures and incorporate oscillation into its motion. You can use the Oscillator class from Example 3.7 as a model. The Oscillator object, however, oscillates around a single point (the middle of the window). Try oscillating around a moving point.

    In other words, design a creature that moves around the screen according to position, velocity, and acceleration. But that creature isn’t just a static shape; it’s an oscillating body. Consider tying the speed of oscillation to the speed of motion. Think of a butterfly’s flapping wings or the legs of an insect. Can you make it appear as though the creature’s internal mechanics (oscillation) drive its locomotion? See the book’s website for an additional example combining attraction from Chapter 2 with oscillation.


    Chapter 4. Particle Systems


    Why Particle Systems Matter

    No single particle is referenced in this code, and yet the result will be full of particles flying all over the canvas. This works because the details are hidden inside the ParticleSystem class, which holds references to lots of instances of the Particle class. Getting used to this technique of writing sketches with multiple classes, including classes that keep lists of instances of other classes, will prove useful as you get to later chapters in this book.

    Finally, working with particle systems is also an opportunity to tackle two other OOP techniques: inheritance and polymorphism. With the examples you’ve seen up until now, I’ve always used an array of a single type of object, like an array of movers or an array of oscillators. With inheritance and polymorphism, I’ll demonstrate a convenient way to use a single list to store objects of different types. This way, a particle system need not be a system of only one kind of particle.

    A Single Particle

    Before I can get rolling on coding the particle system, I need to write a class to describe a single particle. The good news: I’ve done this already! The Mover class from Chapter 2 serves as the perfect template. A particle is an independent body that moves about the canvas, so just like a mover, it has position, velocity, and acceleration variables; a constructor to initialize those variables; and methods to show() itself and update() its position.

    class Particle {
       // A Particle object is just another name for a mover. It has position, velocity, and acceleration.


    Exercise 4.4

    Building off Chapter 3’s Asteroids example, use a particle system to emit particles from the ship’s thrusters whenever a thrust force is applied. The particles’ initial velocity should be related to the ship’s current direction.

    A System of Emitters

    So far, I’ve described an individual particle and organized its code into a Particle class. I’ve also described a system of particles and organized the code into an Emitter class. This particle system is nothing more than a collection of independent Particle objects. But as an instance of the Emitter class, isn’t a particle system itself an object? If that’s the case (and it is), there’s no reason I couldn’t also build a collection of many particle emitters: a system of systems!
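The "system of systems" idea can be sketched in plain JavaScript. The class details here are illustrative stand-ins, not the book's exact Particle and Emitter code, but the shape is the same: each emitter owns its own array of particles, and the sketch owns an array of emitters.

```javascript
// A p5-free sketch of a system of emitters. Class details are
// illustrative stand-ins for the book's Particle and Emitter classes.
class Particle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.lifespan = 255;
  }
  update() {
    this.lifespan -= 2;
  }
  isDead() {
    return this.lifespan < 0;
  }
}

class Emitter {
  constructor(x, y) {
    this.origin = { x, y };
    this.particles = [];
  }
  addParticle() {
    this.particles.push(new Particle(this.origin.x, this.origin.y));
  }
  run() {
    // Iterate backward so splicing dead particles doesn’t skip elements.
    for (let i = this.particles.length - 1; i >= 0; i--) {
      this.particles[i].update();
      if (this.particles[i].isDead()) this.particles.splice(i, 1);
    }
  }
}

// A system of systems: three emitters, each spawning its own particles.
const emitters = [new Emitter(100, 50), new Emitter(200, 50), new Emitter(300, 50)];
for (const emitter of emitters) {
  emitter.addParticle();
  emitter.run();
}
```

Each emitter manages its own particles independently; the outer loop treats every emitter exactly the same way, no matter how many there are.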



    Inheritance and Polymorphism

    Up to now, all the particles in my systems have been identical, with the same basic appearance and behaviors. Who says this has to be the case? By harnessing two fundamental OOP principles, inheritance and polymorphism, I can create particle systems with significantly more variety and interest.

    Perhaps you’ve encountered these two terms in your programming life before this book. For example, my beginner text, Learning Processing, has close to an entire chapter (Chapter 22) dedicated to them. Still, perhaps you’ve learned about inheritance and polymorphism only in the abstract and never had a reason to really use them. If that’s true, you’ve come to the right place. Without these techniques, your ability to program diverse particles and particle systems is extremely limited. (In Chapter 6, I’ll also demonstrate how understanding these topics will help you use physics libraries.)

    Imagine it’s a Saturday morning. You’ve just gone out for a lovely jog, had a delicious bowl of cereal, and are sitting quietly at your computer with a cup of warm chamomile tea. It’s your old friend so-and-so’s birthday, and you’ve decided you’d like to make a greeting card with p5.js. How about simulating some confetti? Purple confetti, pink confetti, star-shaped confetti, square confetti, fast confetti, fluttery confetti—all kinds of confetti, all with different appearances and different behaviors, exploding onto the screen all at once.

    What you have is clearly a particle system: a collection of individual pieces (particles) of confetti. You might be able to cleverly redesign the Particle class to have variables that store color, shape, behavior, and more. To create a variety of particles, you might initialize those variables with random values. But what if some of your particles are drastically different? It could become very messy to have all sorts of code for different ways of being a particle in the same class. Another option might be to do the following:

    class HappyConfetti {

    Polymorphism Basics

    }

    This is polymorphism (from the Greek polymorphos, meaning “many forms”) in action. Although all the animals are grouped together in an array and processed in a single for loop, JavaScript can identify their true types and invoke the appropriate eat() method for each one. It’s that simple!
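Here's a compact, p5-free sketch of that mechanism (the class names and strings are illustrative): subclasses override a method, and a single loop over a mixed array still invokes the right version for each object.

```javascript
// Polymorphism in plain JavaScript: one loop, one method name,
// different behavior per object. Names and strings are illustrative.
class Animal {
  eat() {
    return "nom nom nom";
  }
}
class Dog extends Animal {
  eat() {
    return "woof woof chomp";
  }
}
class Cat extends Animal {
  eat() {
    return "meow chomp";
  }
}

// A mixed array, processed uniformly: each object’s own eat() runs.
const kingdom = [new Dog(), new Cat(), new Animal()];
const sounds = kingdom.map((animal) => animal.eat());
```

Even though `kingdom` is typed only as "an array of things," each element's true class determines which `eat()` is called.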

    Particles with Inheritance and Polymorphism

    Now that I’ve covered the theory and syntax behind inheritance and polymorphism, I’m ready to write a working example of them in p5.js, based on my Particle class. First, take another look at a basic Particle implementation, adapted from Example 4.1:

    class Particle {
       constructor(x, y) {
         this.acceleration = createVector(0, 0);

    square(this.position.x, this.position.y, 12);
  }
}

    Let’s make this a bit more sophisticated. Say I want to have each Confetti particle rotate as it flies through the air. One option is to model angular velocity and acceleration, as described in Chapter 3. For ease, however, I’ll implement something less formal.

    I know a particle has an x-position somewhere between 0 and the width of the canvas. What if I said that when the particle’s x-position is 0, its rotation should be 0; when its x-position is equal to the width, its rotation should be equal to 4\pi? Does this ring a bell? As discussed in Chapter 0, whenever a value has one range that you want to map to another range, you can use the map() function:

        let angle = map(this.position.x, 0, width, 0, TWO_PI * 2);
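If you'd like to verify the mapping outside p5.js, here is map()'s linear-rescaling formula written out in plain JavaScript (the canvas width here is an illustrative value):

```javascript
// p5’s map() linearly rescales a value from one range to another.
// This is its formula, written out so the math can be checked directly.
const map = (value, start1, stop1, start2, stop2) =>
  start2 + ((value - start1) / (stop1 - start1)) * (stop2 - start2);

const width = 640; // illustrative canvas width
const TWO_PI = Math.PI * 2;

// At x = 0 the rotation is 0; at x = width it’s 4π; halfway across, 2π:
// one full rotation by the middle of the canvas, two by the right edge.
const angle = map(320, 0, width, 0, TWO_PI * 2);
```

So a particle sweeping across the canvas completes exactly two full rotations, tied directly to its x position.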

    Here’s how this code fits into the show() method:


Example 4.7: A Particle System with a Repeller

    return force;
  }
}

    Notice the addition of the power variable in the Repeller class, which controls the strength of the repulsion force exerted. This property becomes especially interesting when you have multiple attractors and repellers, each with different power values. For example, strong attractors and weak repellers might result in particles clustering around the attractors, while more powerful repellers might reveal patterns reminiscent of paths or channels between them. These are hints of what’s to come in Chapter 5, where I’ll further explore the concept of a complex system.
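As a sketch of how a power property might scale the force (plain JavaScript; the class shape, distance constraints, and constants are illustrative, not the book's exact Repeller code):

```javascript
// A p5-free sketch of a repel force with a power setting: the force
// points from the repeller to the particle and falls off with distance
// squared. Class shape and constants are illustrative.
class Repeller {
  constructor(x, y, power) {
    this.x = x;
    this.y = y;
    this.power = power;
  }
  repel(particle) {
    const dx = particle.x - this.x;
    const dy = particle.y - this.y;
    let d = Math.sqrt(dx * dx + dy * dy);
    // Constrain d so the force doesn’t explode at tiny distances.
    d = Math.min(Math.max(d, 5), 50);
    const strength = this.power / (d * d);
    return { x: (dx / d) * strength, y: (dy / d) * strength };
  }
}

const weak = new Repeller(0, 0, 50);
const strong = new Repeller(0, 0, 500);
const particle = { x: 10, y: 0 };
// Ten times the power produces ten times the push on the same particle.
```

Varying `power` per repeller is what creates the clustering and channeling patterns described above.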

    Exercise 4.9

    Expand Example 4.7 to include multiple repellers and attractors. How might you use inheritance and polymorphism to create separate Repeller and Attractor classes without duplicating code?

    Exercise 4.10

    Create a particle system in which each particle responds to every other particle. (I’ll explain how to do this in detail in Chapter 5.)

    Image Textures and Additive Blending

    Even though this book is almost exclusively focused on behaviors and algorithms rather than computer graphics and design, I don’t think I would be able to live with myself if I finished a discussion of particle systems without presenting an example of texturing each particle with an image. After all, the way you render a particle is a key piece of the puzzle in designing certain types of visual effects. For example, compare the two smoke simulations shown in Figure 4.7.


Example 4.8: An Image-Texture Particle System

    tint(255, this.lifespan);
    image(img, this.position.x, this.position.y);
  }

    This smoke example is also a nice excuse to revisit the Gaussian distributions from “A Normal Distribution of Random Numbers”. Instead of launching the particles in a purely random direction, which produces a fountain-like effect, the result will appear more smokelike if the initial velocity vectors cluster mostly around a mean value, with a lower probability of outlying velocities. Using the randomGaussian() function, the particle velocities can be initialized as follows:

let vx = randomGaussian(0, 0.3);
let vy = randomGaussian(-1, 0.3);
this.velocity = createVector(vx, vy);
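If you're curious what a Gaussian generator is doing under the hood, here is a plain-JavaScript stand-in based on the Box–Muller transform (an assumption for illustration; p5.js's own randomGaussian() internals may differ):

```javascript
// A stand-in for randomGaussian(mean, sd) using the Box–Muller transform,
// to show why the initial velocities cluster around the mean.
function randomGaussian(mean, sd) {
  const u1 = 1 - Math.random(); // (0, 1], avoids log(0)
  const u2 = Math.random();
  const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
  return mean + z * sd;
}

// Sample many initial vy values; most land near the mean of -1,
// with outliers increasingly rare the farther they are from it.
const samples = [];
for (let i = 0; i < 10000; i++) {
  samples.push(randomGaussian(-1, 0.3));
}
const meanVy = samples.reduce((a, b) => a + b, 0) / samples.length;
```

Clustering the upward velocities around -1 is exactly what turns the fountain-like spray into a coherent plume of smoke.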

    Chapter 5. Autonomous Agents


    Why Vehicles?

  • Locomotion: For the most part, I’m going to ignore this third layer. In the case of fleeing from zombies, the locomotion could be described as “left foot, right foot, left foot, right foot, as fast as you can.” In a canvas, however, a rectangle, circle, or triangle’s actual movement across a window is irrelevant, given that the motion is all an illusion in the first place. This isn’t to say that you should ignore locomotion entirely, however. You’ll find great value in thinking about the locomotive design of your vehicle and how you choose to animate it. The examples in this chapter will remain visually bare; a good exercise would be to elaborate on the animation style. For example, could you add spinning wheels, oscillating paddles, or shuffling legs?
Ultimately, the most important layer for you to consider is the first one, action selection. What are the elements of your system, and what are their goals? In this chapter, I’m going to cover a series of steering behaviors (that is, actions): seeking, fleeing, following a path, following a flow field, flocking with your neighbors, and so on. As I’ve said in other chapters, however, the point isn’t that you should use these exact behaviors in all your projects. Rather, the point is to show you how to model a steering behavior—any steering behavior—in code, and to provide a foundation for designing and developing your own vehicles with new and exciting goals and behaviors.

    What’s more, even though the examples in this chapter are highly literal (follow that pixel!), you should allow yourself to think more abstractly (like Braitenberg). What would it mean for your vehicle to have “love” as its goal or “fear” as its driving force? Finally (and I’ll address this in “Combining Behaviors”), you won’t get very far by developing simulations with only one action. Yes, the first example’s action will be to seek a target. But by being creative—by making these steering behaviors your own—it will all come down to mixing and matching multiple actions within the same vehicle. View the coming examples not as singular behaviors to be emulated, but as pieces of a larger puzzle that you’ll eventually assemble.

    The Steering Force

    What exactly is a steering force? To answer, consider the following scenario: a vehicle with a current velocity is seeking a target. For fun, let’s think of the vehicle as a bug-like creature that desires to savor a delicious strawberry, as in Figure 5.1.

    Figure 5.1: A vehicle with a velocity and a target
    The vehicle’s goal and subsequent action is to seek the target. Thinking back to Chapter 2, you might begin by making the target an attractor and applying a gravitational force that pulls the vehicle to the target. This would be a perfectly reasonable solution, but conceptually it’s not what I’m looking for here.

    I don’t want to simply calculate a force that pushes the vehicle toward its target; rather, I want to ask the vehicle to make an intelligent decision to steer toward the target based on its perception of its own state (its speed and the direction in which it’s currently moving) and its environment (the location of the target). The vehicle should consider how it desires to move (a vector pointing to the target), compare that goal with how it’s currently moving (its velocity), and apply a force accordingly. That’s exactly what Reynolds’s steering force formula says:

    \text{steering force} = \text{desired velocity} - \text{current velocity}

    Or, as you might write in p5.js:
let steer = p5.Vector.sub(desired, velocity);


    Figure 5.3: The magnitude of the vehicle’s desired velocity is max speed.

    The concept of maximum speed was introduced in Chapter 1 to ensure that a mover’s speed remained within a reasonable range. However, I didn’t always use it in the subsequent chapters. In Chapter 2, other forces such as friction and drag kept the speed in check, while in Chapter 3, oscillation was caused by opposing forces that kept the speed limited. In this chapter, maximum speed is a key parameter for controlling the behavior of a steering agent, so I’ll include it in all the examples.

    While I encourage you to consider how other forces such as friction and drag could be combined with steering behaviors, I’m going to focus only on steering forces for the time being. As such, I can include the concept of maximum speed as a limiting factor in the force calculation. First, I need to add a property to the Vehicle class setting the maximum speed:

    class Vehicle {

    // to the object’s acceleration.
    this.applyForce(steer);
  }
    Notice that I finish the method by passing the steering force into applyForce(). This assumes that the code is built on top of the foundation I developed in Chapter 2.
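To make the numbers concrete, here is a p5-free sketch of a single seek calculation (the max speed, max force, positions, and velocity are illustrative values):

```javascript
// One step of Reynolds’s seek: steer = desired − velocity, with desired
// scaled to max speed and the steering force limited to max force.
// All values are illustrative.
const limit = (v, max) => {
  const m = Math.sqrt(v.x * v.x + v.y * v.y);
  return m > max ? { x: (v.x / m) * max, y: (v.y / m) * max } : v;
};

const maxSpeed = 4;
const maxForce = 0.1;
const position = { x: 0, y: 0 };
const velocity = { x: 4, y: 0 }; // currently moving right at top speed
const target = { x: 0, y: 100 }; // target is directly below

// Desired velocity points from the vehicle to the target, at max speed.
let desired = { x: target.x - position.x, y: target.y - position.y };
const d = Math.sqrt(desired.x * desired.x + desired.y * desired.y);
desired = { x: (desired.x / d) * maxSpeed, y: (desired.y / d) * maxSpeed };

// Steering force = desired − velocity, limited to max force.
let steer = { x: desired.x - velocity.x, y: desired.y - velocity.y };
steer = limit(steer, maxForce);
```

Note how the force points both backward (against the rightward velocity) and downward (toward the target): the steering force corrects the vehicle's current motion rather than ignoring it.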

    To see why Reynolds’s steering formula works so well, take a look at Figure 5.4. It shows what the steering force looks like relative to the vehicle and target positions.

Figure 5.4: The vehicle applies a steering force equal to its desired velocity minus its current velocity.


    Figure 5.5: The path for a stronger maximum force (left) versus a weaker one (right)
    Here’s the full Vehicle class, incorporating the rest of the elements from the Chapter 2 Mover class.

    Example 5.1: Seeking a Target


    pop();
  }
}

    Note that, unlike the circles used to represent movers and particles in previous chapters, the Vehicle object is drawn as a triangle, defined as three custom vertices set with beginShape() and endShape(). This allows the vehicle to be represented in a way that indicates its direction, determined using the heading() method, as demonstrated in Chapter 3.

    Exercise 5.1

    Implement a fleeing steering behavior (the desired velocity is the same as seek, but pointed in the opposite direction).


    Example 5.2: Arriving at a Target

    this.applyForce(steer);
  }
    The arrive behavior is a great demonstration of an autonomous agent’s perception of the environment—including its own state. This model differs from the inanimate forces of Chapter 2: a celestial body attracted to another body doesn’t know it is experiencing gravity, whereas a cheetah chasing its prey knows it’s chasing.

    The key is in the way the forces are calculated. For instance, in the gravitational attraction sketch (Example 2.6), the force always points directly from the object to the target—the exact direction of the desired velocity. Here, by contrast, the vehicle perceives its distance to the target and adjusts its desired speed accordingly, slowing as it gets closer. The force on the vehicle itself is therefore based not just on the desired velocity but also on the desired velocity relative to its current velocity. The vehicle accounts for its own state as part of its assessment of the environment.

    Put another way, the magic of Reynolds’s desired minus velocity equation is that it essentially makes the steering force a manifestation of the current velocity’s error: “I’m supposed to be going this fast in this direction, but I’m actually going this fast in another direction. My error is the difference between where I want to go and where I’m currently going.” Sometimes this can lead to seemingly unexpected results, as in Figure 5.10.
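The arrive logic that produces this slowing-down behavior can be distilled into a tiny p5-free function (the slowdown radius and max speed are illustrative values):

```javascript
// The heart of arrive: inside a slowdown radius, the desired speed
// scales with distance; outside it, it’s simply max speed.
// Radius and max speed are illustrative.
const maxSpeed = 4;
const slowRadius = 100;

function desiredSpeed(distance) {
  if (distance < slowRadius) {
    // Map distance 0..slowRadius onto speed 0..maxSpeed.
    return (distance / slowRadius) * maxSpeed;
  }
  return maxSpeed;
}
```

Far from the target the vehicle wants to go full speed; at the target the desired speed is zero, so the steering force actively brakes.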


    Flow Fields

  }
  xoff += 0.1;
}

    Now I’m getting somewhere. Calculating the direction of the vectors by using Perlin noise is a great way to simulate a variety of natural effects, such as irregular gusts of wind or the meandering path of a river. I’ll note, however, that this noise mapping generates a field that prefers flowing left. Since Perlin noise has a Gaussian-like distribution, angles near \pi are more likely to be selected. For Figure 5.16, I used a range of 0 to 4\pi to counteract this tendency, similarly to the way I applied 4\pi in Chapter 4 to represent a range of angles for spinning confetti particles. Ultimately, of course, there’s no one correct way to calculate the vectors of a flow field; it’s up to you to decide what you’re looking to simulate.

    Exercise 5.6

    Write the code to calculate a flow field so that the vectors swirl in circles around the center of the canvas. 


    Example 5.4: Flow-Field Following 

    Notice that lookup() is a method of the FlowField class, rather than of Vehicle. While you certainly could place lookup() within the Vehicle class instead, from my perspective, placing it in FlowField aligns best with the OOP principle of encapsulation. The lookup task, which retrieves a vector based on a position from the flow field, is inherently tied to the data of the FlowField object.
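A p5-free sketch of that encapsulation (the grid resolution, dimensions, and field contents are illustrative stand-ins, not the book's exact FlowField code):

```javascript
// lookup() lives on the flow field: it converts a position into grid
// indices and returns the vector stored there. Details are illustrative.
class FlowField {
  constructor(resolution, cols, rows) {
    this.resolution = resolution;
    this.cols = cols;
    this.rows = rows;
    // Fill the field with a uniform rightward vector for illustration.
    // (fill() shares one object per row; fine here since it’s read-only.)
    this.field = [];
    for (let i = 0; i < cols; i++) {
      this.field.push(new Array(rows).fill({ x: 1, y: 0 }));
    }
  }
  lookup(position) {
    // Constrain the indices so off-canvas positions stay in bounds.
    const column = Math.min(Math.max(Math.floor(position.x / this.resolution), 0), this.cols - 1);
    const row = Math.min(Math.max(Math.floor(position.y / this.resolution), 0), this.rows - 1);
    return this.field[column][row];
  }
}

const flow = new FlowField(10, 64, 48);
const v = flow.lookup({ x: 125, y: 300 });
```

A vehicle never touches the grid directly; it hands its position to the field and gets back a desired direction, which is the encapsulation point made above.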

    You may also notice some familiar elements from Chapter 4, such as the use of an array of vehicles. Although the vehicles here operate independently, this is a great first step toward thinking about the group behaviors that I’ll introduce later in this chapter.

    Exercise 5.7

    Adapt the flow-field example so the vectors change over time. (Hint: Try using the third dimension of Perlin noise!)



    Path Following

    The next steering behavior formulated by Reynolds that I’d like to explore is path following. But let me quickly clarify something first: the behavior here is path following, not path finding. Pathfinding refers to an algorithm that solves for the shortest distance between two points, often in a maze. With path following, a predefined route, or path, already exists, and the vehicle simply tries to follow it.

    In this section, I will work through the algorithm, including the corresponding mathematics and code. However, before doing so, it’s important to cover a key concept in vector math that I skipped over in Chapter 1: the dot product. I haven’t needed it yet, but it’s necessary here and likely will prove quite useful for you beyond just this example.

    The Dot Product


    Remember all the vector math covered in Chapter 1? Add, subtract, multiply, and divide? Figure 5.17 has a recap of some of these operations.

Figure 5.17: Adding vectors and multiplying a vector by a scalar
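As a quick preview of what's coming, here is the dot product in a nutshell (a plain-JavaScript sketch; the formula is standard vector math, not the book's code):

```javascript
// Dot product of two 2D vectors: a · b = ax * bx + ay * by.
// It also equals |a| * |b| * cos(theta), where theta is the angle between them.
function dot(a, b) {
  return a.x * b.x + a.y * b.y;
}

const a = { x: 3, y: 4 };
const b = { x: 4, y: -3 };
console.log(dot(a, b)); // 0 — these two vectors are perpendicular
```

A dot product of 0 means the vectors are perpendicular; a positive result means they point in broadly the same direction, a negative result in broadly opposite directions.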

    Complex Systems

    • Nonlinearity: This aspect of complex systems is often casually referred to as the butterfly effect, coined by mathematician and meteorologist Edward Norton Lorenz, a pioneer in the study of chaos theory. In 1961, Lorenz was running a computer weather simulation for the second time and, perhaps to save a little time, typed in a starting value of 0.506 instead of 0.506127. The end result was completely different from the first result of the simulation. Stated more evocatively, the theory is that a single butterfly flapping its wings on the other side of the world could cause a massive weather shift and ruin your weekend at the beach. It’s called nonlinear because there isn’t a linear relationship between a change in initial conditions and a change in outcome. A small change in initial conditions can have a massive effect on the outcome. Nonlinear systems are a superset of chaotic systems. In Chapter 7, you’ll see how even in a system of many 0s and 1s, if you change just one bit, the result will be completely different.
    • Competition and cooperation: One ingredient that often makes a complex system tick is the presence of both competition and cooperation among the elements. The upcoming flocking system will have three rules: alignment, cohesion, and separation. Alignment and cohesion will ask the elements to “cooperate” by trying to stay together and move together. Separation, however, will ask the elements to “compete” for space. When the time comes, try taking out just the cooperation or just the competition, and you’ll see how the system loses its complexity. Competition and cooperation are found together in living complex systems, but not in nonliving complex systems like the weather.

    Complexity will serve as a key theme for much of the remainder of the book. In this section, I’ll begin by introducing an additional feature to the Vehicle class: the ability to perceive neighboring vehicles. This enhancement will pave the way for a culminating example of a complex system in which the interplay of simple individual behaviors results in an emergent behavior: flocking.

    Implementing Group Behaviors (or: Let’s Not Run Into Each Other)


    Managing a group of objects is certainly not a new concept. You’ve seen this before—in Chapter 4, where I developed the Emitter class to represent an overall particle system. There, I used an array to store a list of individual particles. I’ll start with the same technique here and store Vehicle objects in an array:

    // Declare an array of Vehicle objects.
     let vehicles;
     
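A hedged sketch of populating that array in plain JavaScript (the property names and canvas dimensions here are assumptions, and `Math.random()` stands in for p5.js's `random()`):

```javascript
// Fill the array with 100 vehicles, each starting at a random position.
let vehicles = [];
for (let i = 0; i < 100; i++) {
  vehicles.push({
    position: { x: Math.random() * 640, y: Math.random() * 240 },
    velocity: { x: 0, y: 0 },
  });
}
console.log(vehicles.length); // 100
```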

    Exercise 5.13

    Combining Behaviors

    The most exciting and intriguing group behaviors come from mixing and matching multiple steering forces. After all, how could I even begin to simulate emergence in a complex system through a sketch that has only one rule?


    When multiple steering forces are at play, I need a mechanism for managing them all. You may be thinking, “This is nothing new. We juggle multiple forces all the time.” You would be right. In fact, this technique appeared as early as Chapter 2:

      let wind = createVector(0.001, 0);
       let gravity = createVector(0, 0.1);
       mover.applyForce(wind);

    Exercise 5.15

    Can you rewrite the align() method so that boids see only other boids that fall within a direct line of sight?


    The code for cohesion is quite similar to that for alignment. The only difference is that instead of calculating the average velocity of the boid’s neighbors, I want to calculate the average position of the boid’s neighbors (and use that as a target to seek).

      cohesion(boids) {
        let neighborDistance = 50;
        let sum = createVector(0, 0);
        let count = 0;
        for (let other of boids) {
          let d = p5.Vector.dist(this.position, other.position);
          if (this !== other && d < neighborDistance) {
            // Add up all the neighbors’ positions.
            sum.add(other.position);
            count++;
          }
        }
        // Seek the average position; with no neighbors, apply no steering force.
        return count > 0 ? this.seek(sum.div(count)) : createVector(0, 0);
      }

    It’s also worth taking the time to write a class called Flock that manages the whole group of boids. It will be virtually identical to the ParticleSystem class from Chapter 4, with only one tiny change: when I call run() on each Boid object (as I did to each Particle object), I’ll pass in a reference to the entire array of boids:

    class Flock {
       constructor() {
         this.boids = [];
  }

      run() {
        for (let boid of this.boids) {
          // Each boid receives a reference to the entire array.
          boid.run(this.boids);
        }
      }
    }

    Example 5.11: Flocking

    function draw() {
      background(255);
      flock.run();
    }

    Just as with the particle systems from Chapter 4, you can see the elegance of OOP in simplifying the setup() and draw() functions.

    Exercise 5.16

    Combine flocking with other steering behaviors.


    Exercise 5.19

    Algorithmic Efficiency (or: Why Does My Sketch Run So Slowly?)

    Group behaviors are wonderful, but it’s with a heavy heart that I must admit that they can also be slow. In fact, the bigger the group, the slower the sketch can be. I’d love to hide this dark truth from you, because I’d like you to be happy and live a fulfilling and meaningful life, free from concerns about the efficiency of your code. But I’d also like to be able to sleep at night without worrying about your inevitable disappointment when you try to run your flocking simulation with too many boids.


    Usually, when I talk about p5.js sketches running slowly, it’s because drawing to the canvas can be slow—the more you draw, the slower your sketch runs. As you may recall from Chapter 4, switching to a different renderer like WebGL can sometimes alleviate this issue, allowing for faster drawing of larger particle systems. With something like a flocking simulation, however, the slowness derives from the algorithm. Computer scientists put this problem in terms of something called big O notation, where the O stands for order. This is shorthand for describing the efficiency of an algorithm: How many computational cycles does the algorithm require to complete?

    Consider a simple search problem. You have a basket containing 100 chocolate treats, only one of which is pure dark chocolate. That’s the one you want to eat. To find it, you pick the chocolates out of the basket one by one. You might be lucky and find it on the first try, but in the worst-case scenario, you have to check all 100 before you find the dark chocolate. To find one thing in 100, you have to check 100 things (or to find one thing in N things, you have to check N times). The big O notation here is O(N). This, incidentally, is also the big O notation that describes a simple particle system. If you have N particles, you have to run and display those particles N times.

    Now, let’s think about a group behavior such as flocking. For every Boid object, you have to check the velocity and position of every other Boid object before you can calculate its steering force. Let’s say you have 100 boids. For boid 1, you need to check 100 boids; for boid 2, you need to check 100 boids; and so on. In all, for 100 boids, you need to perform 10,000 checks (100 × 100 = 10,000).
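The quadratic growth is easy to make concrete with a toy function (illustrative only, not part of the flocking code itself):

```javascript
// Counting the checks a naive flocking update performs:
// each of the n boids must examine all n boids, so the work grows as n * n — O(N^2).
function pairwiseChecks(n) {
  return n * n;
}

console.log(pairwiseChecks(100)); // 10000
```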

    You might be thinking, “No problem. Computers are fast. They can do 10,000 things pretty easily.” But what if there are 1,000 boids? Then you have this:


    Example 5.13: Quadtree


    The quadtree data structure is key to the Barnes-Hut algorithm, which I referenced briefly when building an n-body simulation in Chapter 2. This method uses a quadtree to approximate groups of bodies into a single one when calculating gravitational forces. This drastically reduces the number of calculations needed, allowing simulations with large numbers of bodies to run more efficiently. You can learn more about building a quadtree and applying it to a flocking system as part of Coding Challenge #98 on the Coding Train website.

    Exercise 5.20

    Expand the bin-lattice spatial subdivision flocking sketch from Example 5.12 to use a quadtree.


    Use the Magnitude Squared

    return sqrt(x * x + y * y); }

    Magnitude requires the square-root operation. And so it should! After all, if you want the magnitude of a vector, you have to break out the Pythagorean theorem (we did this in Chapter 1). However, if you could somehow skip taking the square root, your code would run faster.

    Say you just want to know the relative magnitude of a vector v. For example, is the magnitude greater than 10?

    if (v.mag() > 10) {
      /* Do something! */
    }
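To see the savings, here are plain-JavaScript stand-ins for the two approaches (the helpers mirror p5.js's `mag()` and `magSq()` methods, but these are illustrative, not the library source). Comparing the squared magnitude against the squared threshold answers the same question without the square root:

```javascript
// Magnitude requires a square root...
function mag(v) {
  return Math.sqrt(v.x * v.x + v.y * v.y);
}
// ...but the squared magnitude doesn't.
function magSq(v) {
  return v.x * v.x + v.y * v.y;
}

let v = { x: 6, y: 8 }; // magnitude is exactly 10
// These two tests are equivalent, but the second one skips the square root:
console.log(mag(v) > 10);        // false
console.log(magSq(v) > 10 * 10); // false
```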
    diff --git a/content/06_libraries.html b/content/06_libraries.html
    index 8ce09d97..39b5fc58 100644
    --- a/content/06_libraries.html
    +++ b/content/06_libraries.html
    @@ -1,4 +1,4 @@
    -
    +

    Chapter 6. Physics Libraries


    Living root bridges

    These activities have yielded a set of motion simulations, allowing you to creatively define the physics of the worlds you build (whether realistic or fantastical). But, of course, you and I aren’t the first or only people to do this. The world of computer graphics and programming is full of prewritten code libraries dedicated to physics simulations.

    Just try searching open source physics engine and you could spend the rest of your day poring over a host of rich and complex codebases. This begs the question: If an existing code library takes care of physics simulation, why should you bother learning how to write any of the algorithms yourself? Here’s where the philosophy behind this book comes into play. While many libraries provide out-of-the-box physics to experiment with (super-awesome, sophisticated, and robust physics at that), there are several good reasons for learning the fundamentals from scratch before diving into such libraries.

    First, without an understanding of vectors, forces, and trigonometry, it’s easy to get lost just reading the documentation of a library, let alone using it. Second, even though a library may take care of the math behind the scenes, it won’t necessarily simplify your code. A great deal of overhead may be required in understanding how a library works and what it expects from you code-wise. Finally, as wonderful as a physics engine might be, if you look deep down into your heart, you’ll likely see that you seek to create worlds and visualizations that stretch the limits of the imagination. A library may be great, but it provides only a limited set of features. It’s important to know when to live within those limitations in the pursuit of a creative coding project and when those limits will prove to be confining.


    This chapter is dedicated to examining two open source physics libraries for JavaScript: Matter.js and Toxiclibs.js. I don’t mean to imply that these are the only libraries you should use for any and all creative coding projects that could benefit from a physics engine (see “Other Physics Libraries” for alternatives, and check the book’s website for ports of the chapter’s examples to other libraries). However, both libraries integrate nicely with p5.js and will allow me to demonstrate the fundamental concepts behind physics engines and how they relate to and build upon the material I’ve covered so far.

    Ultimately, the aim of this chapter isn’t to teach you the details of a specific physics library, but to provide you with a foundation for working with any physics library. The skills you acquire here will enable you to navigate and understand documentation, opening the door for you to expand your abilities with any library you choose.

    Why Use a Physics Library?

    I’ve made the case for writing your own physics simulations (as you’ve learned to do in the previous chapters), but why use a physics library? After all, adding any external framework or library to a project introduces complexity and extra code. Is that additional overhead worth it? If you just want to simulate a circle falling down because of gravity, for example, do you really need to import an entire physics engine and learn its API? As the early chapters of this book hopefully demonstrated, probably not. Lots of scenarios like this are simple enough for you to get by writing the code yourself.


    Importing the Matter.js Library

    <script src="https://cdnjs.cloudflare.com/ajax/libs/matter-js/0.19.0/matter.min.js"></script>

    At the time of this writing, the most recent version of Matter.js is 0.19.0, and that’s what I’ve referenced in this snippet. As Matter.js updates and new versions are released, it’s often a good idea to upgrade, but by referencing a specific version that you know works with your sketch, you don’t have to worry about new features of the library breaking your existing code.

    Matter.js Overview


    When you use Matter.js (or any physics engine) in p5.js, your code ends up looking a bit different. Here’s a pseudocode generalization of all the examples in Chapters 1 through 5:

    setup()

    1. Create all the objects in the world.
    2.
      • Bodies: The primary elements in the world, corresponding to the physical objects being simulated. A body has a position and a velocity. Sound familiar? It’s basically another version of the class I’ve been building throughout Chapters 1 through 5. It also has geometry to define its shape. It’s important to note that body is a generic term that physics engines use to describe a thing in the world (similarly to the term particle); it isn’t related to an anthropomorphic body.
      • Composite: A container that allows for the creation of complex entities (made up of multiple bodies). The world itself is an example of a composite, and every body created has to be added to the world.
      • Constraints: Act as connections between bodies.

      Engine

      Object Destructuring

      Object destructuring in JavaScript is a technique for extracting properties from an object and assigning them to variables. In the case of Matter.js, the Matter object contains the Engine property. Normally, an alias for this property can be set with let Engine = Matter.Engine, but with destructuring, the alias can be created more concisely:

      const { Engine } = Matter;

      Hold on. Did you catch that I snuck in a const here? I know I said back in Chapter 0 that I would use only let for variable declarations throughout this book. However, working with an external library is a really good time to dip your toe in the const waters. In JavaScript, const is used for declaring variables whose values should never be reassigned after initialization. In this case, I want to protect myself from accidentally overwriting the Engine variable later in the code, which would likely break everything!

      With that out of the way, let’s look at how the destructuring syntax really shines when you need to create aliases to multiple properties of the same object:

      // Use object destructuring to extract aliases for Engine and Vector.
       const { Engine, Vector } = Matter;
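Here is the same destructuring pattern with a stand-in object in place of the real Matter global, so you can see exactly what the syntax does on its own:

```javascript
// A stand-in object (NOT the real Matter.js global), just to demonstrate the syntax.
const Matter = { Engine: "EngineModule", Vector: "VectorModule", Bodies: "BodiesModule" };

// One line creates two aliases; any property not listed is simply ignored.
const { Engine, Vector } = Matter;
console.log(Engine); // "EngineModule"
console.log(Vector); // "VectorModule"
```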

      Render

      One more critical order of business remains: physics engines must be told to step forward in time. Since I’m using the built-in renderer, I can also use the built-in runner, which runs the engine at a default frame rate of 60 frames per second. The runner is also customizable, but the details aren’t terribly important since the goal here is to move toward using p5.js’s draw() loop instead (coming in the next section):

      // Run the engine!
       Runner.run(engine);

      Here’s the Matter.js code all together, with an added ground object—another rectangular body. Note the use of the { isStatic: true } option in the creation of the ground body to ensure that it remains in a fixed position. I’ll cover more details about static bodies in “Static Matter.js Bodies”.

      Example 6.1: Matter.js Default Render and Runner


      Matter.js with p5.js

      square(this.x, this.y, this.w); } }

      Now I’ll write a sketch.js file that creates a new Box whenever the mouse is clicked and stores all the Box objects in an array. (This is the same approach I took in the particle system examples from Chapter 4.)

      Example 6.2: A Comfortable and Cozy p5.js Sketch That Needs a Little Matter.js


      Step 3: Draw the Box Body

      square(0, 0, this.w); pop(); }

      It’s important to note here that if you delete a Box object from the boxes array—perhaps when it moves outside the boundaries of the canvas or reaches the end of its life span, as demonstrated in Chapter 4—you must also explicitly remove the body associated with that Box object from the Matter.js world. This can be done with a removeBody() method on the Box class:

        // This function removes a body from the Matter.js world.
         removeBody() {
           Composite.remove(engine.world, this.body);

      Exercise 6.2

      Static Matter.js Bodies


      In the example I just created, the Box objects appear at the mouse position and fall downward because of the default gravity force. What if I want to add immovable boundaries to the world that will block the path of the falling Box objects? Matter.js makes this easy with the isStatic property:

      // Create a fixed (static) boundary body.
       let options = { isStatic: true };
       let boundary = Bodies.rectangle(x, y, w, h, options);

      Distance Constraints

      Figure 6.10: A constraint is a connection between two bodies at an anchor point for each body.

      A distance constraint is a connection of fixed length between two bodies, similar to a spring force connecting two shapes in Chapter 3. The constraint is attached to each body at a specified anchor, a point relative to the body’s center (see Figure 6.10). Depending on the constraint’s stiffness property, the “fixed” length can exhibit variability, much as a spring can be more or less rigid.

      Defining a constraint uses a similar methodology as creating bodies, only you need to have two bodies ready to go. Let’s assume that two Particle objects each store a reference to a Matter.js body in a property called body. I’ll call them particleA and particleB:

      let particleA = new Particle();
       let particleB = new Particle();

      let constraint = Constraint.create(options);
       //{!1} Don’t forget to add the constraint to the world!
       Composite.add(engine.world, constraint);

      I can add a constraint to a class to encapsulate and manage the relationships among multiple bodies. Here’s an example of a class that represents a swinging pendulum (mirroring Example 3.11 from Chapter 3).

      Example 6.6: Matter.js Pendulum


      Example 6.8: MouseConstraint

      In this example, you’ll see that the stiffness property of the constraint is set to 0.7, giving a bit of elasticity to the imaginary mouse string. Other properties such as angularStiffness and damping can also influence the mouse’s interaction. What happens if you adjust these values?

      Adding More Forces


      In Chapter 2, I covered how to build an environment with multiple forces at play. An object might respond to gravitational attraction, wind, air resistance, and so on. Clearly, forces are at work in Matter.js as rectangles and circles spin and fly around the screen! But so far, I’ve demonstrated how to manipulate only a single global force: gravity.

        let engine = Engine.create();
         // Change the engine’s gravity to point horizontally.
         engine.gravity.x = 1;
         engine.gravity.y = 0;

      If I want to use any of the Chapter 2 techniques with Matter.js, I need look no further than the trusty applyForce() method. In Chapter 2, I wrote this method as part of the Mover class. It received a vector, divided it by mass, and accumulated it into the mover’s acceleration. With Matter.js, the same method exists, so I no longer need to write all the details myself! I can call it with the static Body.applyForce(). Here’s what that looks like in what’s now the Box class:

      class Box {
         applyForce(force) {
           //{!1} Call Body’s applyForce().
           Body.applyForce(this.body, this.body.position, force);
         }
       }

      Here, the Box class’s applyForce() method receives a force vector and simply passes it along to Matter.js’s applyForce() method to apply it to the corresponding body. The key difference with this approach is that Matter.js is a more sophisticated engine than the examples from Chapter 2. The earlier examples assumed that the force was always applied at the mover’s center. Here, I’ve specified the exact position on the body where the force is applied. In this case, I’ve just applied it to the center as before by asking the body for its position, but this could be adjusted—for example, a force pushing at the edge of a box, causing it to spin across the canvas, much like dice tumbling when thrown.

      How can I bring forces into a Matter.js-driven sketch? Say I want to use a gravitational attraction force. Remember the code from Example 2.6 in the Attractor class?

        attract(mover) {
           let force = p5.Vector.sub(this.position, mover.position);

      Example 6.9: Attraction with Matter

      return force; } }


      In addition to writing a custom attract() method for Example 6.9, two other key elements are required for the sketch to behave more like the example from Chapter 2. First, remember that a Matter.js Engine has a default gravity pointing down. I need to disable it in setup() with a (0, 0) vector:

      engine = Engine.create();
       //{!1} Disable the default gravity.
       engine.gravity = Vector.create(0, 0);

      Exercise 6.7

      Exercise 6.8


      Convert any of the steering behavior examples from Chapter 5 to Matter.js. What does flocking look like with collisions?

      Collision Events

      This book isn’t called The Nature of Matter.js, so I’m not going to cover every possible feature of the Matter.js library. At this point, I’ve gone over the basics of creating bodies and constraints, and shown you some of what the library can do. With the skills you’ve gained, hopefully the learning process will be considerably less painful when it comes time to use an aspect of Matter.js that I haven’t addressed here. Before moving on, however, one more feature of the library is worth covering: collision events.


      Particles

      circle(this.particle.x, this.particle.y, this.r * 2); } }

      Looking over this code, you might first notice that drawing the particle is as simple as grabbing the x and y properties and using them with circle(). Second, you might notice that this Particle class doesn’t do much beyond storing a reference to a VerletParticle2D object. This hints at something important. Think back to the discussion of inheritance in Chapter 4, and then ask yourself: What is a Particle object other than an augmented VerletParticle2D object? Why bother making two objects—a Particle and a VerletParticle2D—for every one particle in the world, when I could simply extend the VerletParticle2D class to include the extra code needed to draw the particle?

      class Particle extends VerletParticle2D {
         constructor(x, y, r) {
           //{!1} Call super() with (x, y) so the object is initialized properly.
           super(x, y);
           this.r = r;
         }
       }

      Springs

      particle1.y = mouseY; particle1.unlock(); }

      And with that, I’m ready to put all these elements together in a simple sketch with two particles connected by a spring. One particle is permanently locked in place, and the other can be moved by dragging the mouse. This example is virtually identical to Example 3.11 from Chapter 3.

      Example 6.11: Simple Spring with Toxiclibs.js
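The Toxiclibs.js spring manages its force internally, but for reference, the underlying calculation follows Hooke’s law. This library-free sketch (my own illustration, not Toxiclibs.js code) computes the force on one particle from the spring’s stretch beyond its rest length:

```javascript
// Hooke's law sketch: the spring force is proportional to how far the
// spring is stretched (or compressed) from its rest length.
function springForce(a, b, restLength, k) {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const distance = Math.hypot(dx, dy);
  const stretch = distance - restLength;
  // Force on particle a, pointing toward b when the spring is stretched.
  const magnitude = k * stretch;
  return { x: (dx / distance) * magnitude, y: (dy / distance) * magnitude };
}

// A spring stretched to twice its rest length pulls a toward b.
const f = springForce({ x: 0, y: 0 }, { x: 10, y: 0 }, 5, 0.1);
console.log(f); // { x: 0.5, y: 0 }
```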


      Attraction and Repulsion Behaviors

      I can now remake the attraction example from Chapter 2 with a single Attractor object that exerts an attraction behavior anywhere on the canvas. Even though the attractor is centered, I’m using a distance threshold of the full width to account for any movement of the attractor, and for particles located outside the canvas boundaries.

      Example 6.15: Attraction (and Repulsion) Behaviors


      circle(this.x, this.y, this.r * 2); } }

      Just as discussed in “Spatial Subdivisions”, Toxiclibs.js projects with large numbers of particles interacting with one another can run very slowly because of the N^2 nature of the algorithm (every particle checking every other particle). To speed up the simulation, you could use the manual addForce() method in conjunction with a binning algorithm. Keep in mind, this would also require you to manually calculate the attraction force, as the built-in AttractionBehavior would no longer apply.
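As a rough sketch of the binning idea (with hypothetical helper names, not part of Toxiclibs.js), particles can be hashed into grid cells so that each one checks only nearby cells instead of all N - 1 others:

```javascript
// Spatial subdivision sketch: hash each particle into a grid cell.
function buildBins(particles, cellSize) {
  const bins = new Map();
  for (const p of particles) {
    const key = `${Math.floor(p.x / cellSize)},${Math.floor(p.y / cellSize)}`;
    if (!bins.has(key)) bins.set(key, []);
    bins.get(key).push(p);
  }
  return bins;
}

// Gather only the particles in this cell and the eight surrounding cells.
function neighbors(p, bins, cellSize) {
  const col = Math.floor(p.x / cellSize);
  const row = Math.floor(p.y / cellSize);
  const found = [];
  for (let i = -1; i <= 1; i++) {
    for (let j = -1; j <= 1; j++) {
      const cell = bins.get(`${col + i},${row + j}`);
      if (cell) found.push(...cell.filter((other) => other !== p));
    }
  }
  return found;
}
```

A force calculation (such as a manually computed attraction passed to addForce()) would then loop over only the returned neighbors.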

      Exercise 6.14

      Use AttractionBehavior in conjunction with spring forces.


      Chapter 7. Cellular Automata


      Kente cloth (photo by ZSM)

      Originating from the Akan people of Ghana, kente cloth is a woven fabric celebrated for its vibrant colors and intricate patterns. Woven in narrow strips, each design is unique, and when joined, the strips form a tapestry of complex and emergent patterns that tell a story or carry a message. The image shows three typical Ewe kente stripes, highlighting the diverse weaving traditions that reflect the rich cultural tapestry of Ghana.


      In Chapter 5, I defined a complex system as a network of elements with short-range relationships, operating in parallel, that exhibit emergent behavior. I created a flocking simulation to demonstrate how a complex system adds up to more than the sum of its parts. In this chapter, I’m going to turn to developing other complex systems known as cellular automata.

      In some respects, this shift may seem like a step backward. No longer will the individual elements of my systems be members of a physics world, driven by forces and vectors to move around the canvas. Instead, I’ll build systems out of the simplest digital element possible: a single bit. This bit is called a cell, and its value (0 or 1) is called its state. Working with such simple elements will help reveal how complex systems operate, and will offer an opportunity to elaborate on some programming techniques that apply to code-based projects. Building cellular automata will also set the stage for the rest of the book, where I’ll increasingly focus on systems and algorithms rather than vectors and motion—albeit systems and algorithms that I can and will apply to moving bodies.

      What Is a Cellular Automaton?

      A cellular automaton (cellular automata plural, or CA for short) is a model of a system of cell objects with the following characteristics:


      Elementary Cellular Automata

      Figure 7.12: Translating a grid of 0s and 1s to white and black squares


      The low-resolution shape that emerges in Figure 7.12 is the Sierpiński triangle. Named after the Polish mathematician Wacław Sierpiński, it’s a famous example of a fractal. I’ll examine fractals more closely in Chapter 8, but briefly, they’re patterns in which the same shapes repeat themselves at different scales. To give you a better sense of this, Figure 7.13 shows the CA over several more generations and with a wider grid size.

      Figure 7.13: Wolfram elementary CA 
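The generation-by-generation computation behind figures like these can be sketched in a few lines of plain JavaScript. This is an illustrative stand-alone version, not the chapter’s p5.js example; rule 90 is used here because it produces the Sierpiński pattern:

```javascript
// Elementary CA sketch: each cell's next state is looked up from its
// three-cell neighborhood (left, self, right), read as a 3-bit number.
function nextGeneration(cells, ruleset) {
  const next = new Array(cells.length).fill(0);
  // Skip the edge cells, leaving them at 0.
  for (let i = 1; i < cells.length - 1; i++) {
    const value = cells[i - 1] * 4 + cells[i] * 2 + cells[i + 1];
    // ruleset[0] corresponds to neighborhood 111, ruleset[7] to 000.
    next[i] = ruleset[7 - value];
  }
  return next;
}

// Rule 90 as 8 bits, from neighborhood 111 down to 000.
const rule90 = [0, 1, 0, 1, 1, 0, 1, 0];
let cells = new Array(11).fill(0);
cells[5] = 1; // a single "on" cell in the middle, as in Figure 7.12
for (let gen = 0; gen < 5; gen++) {
  console.log(cells.map((c) => (c ? "#" : ".")).join(""));
  cells = nextGeneration(cells, rule90);
}
```

Printing each generation as a row of `#` and `.` characters reveals the triangle growing downward, one generation per line.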

      Class 4: Complexity

      Figure 7.25: Rule 110 

      In Chapter 5, I introduced the concept of a complex system and used flocking to demonstrate how simple rules can result in emergent behaviors. Class 4 CAs remarkably exhibit the characteristics of complex systems and are the key to simulating phenomena such as forest fires, traffic patterns, and the spread of diseases. Research and applications of CA consistently emphasize the importance of class 4 as the bridge between CA and nature.

      The Game of Life

      The next step is to move from a 1D CA to a 2D one: the Game of Life. This will introduce additional complexity—each cell will have a bigger neighborhood—but with the complexity comes a wider range of possible applications. After all, most of what happens in computer graphics lives in two dimensions, and this chapter demonstrates how to apply CA thinking to a 2D p5.js canvas.

      In 1970, Martin Gardner wrote a Scientific American article that documented mathematician John Conway’s new Game of Life, describing it as recreational mathematics: “To play life you must have a fairly large checkerboard and a plentiful supply of flat counters of two colors. It is possible to work with pencil and graph paper but it is much easier, particularly for beginners, to use counters and a board.”

      The Game of Life has become something of a computational cliché, as myriad projects display the game on LEDs, screens, projection surfaces, and so on. But practicing building the system with code is still valuable for a few reasons.

      For one, the Game of Life provides a good opportunity to practice skills with 2D arrays, nested loops, and more. Perhaps more important, however, this CA’s core principles are tied directly to a core goal of this book: simulating the natural world with code. The Game of Life algorithm and technical implementation will provide you with the inspiration and foundation to build simulations that exhibit the characteristics and behaviors of biological systems of reproduction.


      Unlike von Neumann, who created an extraordinarily complex system of states and rules, Conway wanted to achieve a similar lifelike result with the simplest set of rules possible. Gardner outlined Conway’s goals as follows:

      1. There should be no initial pattern for which there is a simple proof that the population can grow without limit.
      2. There should be initial patterns that apparently do grow without limit.
      3. There should be simple initial patterns that grow and change for a considerable period of time before coming to an end in three possible ways: fading away completely (from overcrowding or from becoming too sparse), settling into a stable configuration that remains unchanged thereafter, or entering an oscillating phase in which they repeat an endless cycle of two or more periods.
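The rules Conway settled on (a dead cell with exactly three live neighbors is born; a live cell survives with two or three live neighbors, and otherwise dies) can be sketched as a stand-alone function on a 2D array. This is an illustration, not the chapter’s object-oriented example; edge cells are simply left unchanged:

```javascript
// One Game of Life step: count the eight neighbors of each interior cell.
function lifeStep(grid) {
  const rows = grid.length;
  const cols = grid[0].length;
  const next = grid.map((row) => row.slice());
  for (let i = 1; i < rows - 1; i++) {
    for (let j = 1; j < cols - 1; j++) {
      let liveNeighbors = 0;
      for (let di = -1; di <= 1; di++) {
        for (let dj = -1; dj <= 1; dj++) {
          if (di !== 0 || dj !== 0) liveNeighbors += grid[i + di][j + dj];
        }
      }
      // Birth with exactly 3 neighbors; survival with 2 or 3.
      if (grid[i][j] === 0 && liveNeighbors === 3) next[i][j] = 1;
      else if (grid[i][j] === 1 && (liveNeighbors < 2 || liveNeighbors > 3)) next[i][j] = 0;
    }
  }
  return next;
}
```

Running this on a horizontal row of three live cells (the classic blinker) flips it to a vertical row, and back again on the next step.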

        Exercise 7.11

        Create a CA in which each pixel is a cell and the pixel’s color is its state.

      Historical


      In the object-oriented Game of Life example, I used two variables to keep track of a cell’s current and previous states. What if you use an array to keep track of a cell’s state history over a longer period? This relates to the idea of a complex adaptive system, one that has the ability to change its rules over time by learning from its history. (Stay tuned for more on this concept in Chapters 9 and 10.)
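One way to sketch the history idea (a hypothetical class, not code from the book’s example) is to push each outgoing state onto an array during the transition:

```javascript
// A cell that records its full state history instead of a single
// "previous" variable.
class Cell {
  constructor(state) {
    this.state = state;
    this.history = [];
  }

  setState(next) {
    // Remember the outgoing state before transitioning.
    this.history.push(this.state);
    this.state = next;
  }

  // How many of the last n recorded generations was this cell alive?
  aliveCount(n) {
    return this.history.slice(-n).filter((s) => s === 1).length;
  }
}
```

A method like aliveCount() could then drive a color mapping, or even feed back into the rules themselves.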

      Exercise 7.12

      Visualize the Game of Life by coloring each cell according to the amount of time it has been alive or dead. Can you also use the cell’s history to inform the rules?


      Exercise 7.13

      Use CA rules in a flocking system. What if each boid has a state (that perhaps informs its steering behaviors), and its neighborhood changes from frame to frame as it moves closer to or farther from other boids?

      Nesting


      As discussed in Chapter 5, a feature of complex systems is that they can be nested. A city is a complex system of people, a person is a complex system of organs, an organ is a complex system of cells, and so on. How could this be applied to a CA?

      Exercise 7.14

      Design a CA in which each cell is a smaller CA.


      Chapter 8. Fractals


      The Monster Curve

      line(this.start.x, this.start.y, this.end.x, this.end.y); } }

      Now that I have the KochLine class, I can get started on setup() and draw(). I’ll need a data structure to keep track of what will eventually become many KochLine objects, and a JavaScript array will do just fine (see Chapter 4 for a review of arrays):

      let segments = [];

      In setup(), I’ll want to add the first line segment to the array, a line that stretches from 0 to the width of the canvas:

      function setup() {

      } }

      This is my foundation for the sketch. I have a KochLine class that keeps track of a line from point start to point end, and I have an array that keeps track of all the KochLine objects. Given these elements, how and where should I apply the Koch rules and the principles of recursion?


      Remember the Game of Life cellular automaton from Chapter 7? In that simulation, I always kept track of two generations: current and next. When I was finished calculating the next generation, next became current, and I moved on to computing the new next generation. I’m going to apply a similar technique here. I have a segments array listing the current set of line segments (at the start of the program, there’s only one). Now I need a second array (I’ll call it next), where I can place all the new KochLine objects generated from applying the Koch rules. For every single KochLine in the current array, four new line segments will be added to next. When I’m done, the next array becomes the new segments array (see Figure 8.13).

      Figure 8.13: The next generation of the fractal is calculated from the current generation. Then next becomes the new current in the transition from one generation to another.

      segments = next; }

      By calling generate() over and over, the Koch curve rules will be recursively applied to the existing set of KochLine segments. But, of course, I’ve skipped over the real work of the function: How do I actually break one line segment into four as described by the rules? I need a way to calculate the start and end points of each line.


      Because the KochLine class uses p5.Vector objects to store the start and end points, this is a wonderful opportunity to practice all that vector math from Chapter 1, along with some trigonometry from Chapter 3. First, I should establish the scope of the problem: How many points do I need to compute for each KochLine object? Figure 8.14 shows the answer.

      Figure 8.14: Two points become five points.
      As the figure illustrates, I need to turn the two points (start, end) into five (a, b, c, d, e) to generate the four new line segments (ab, bc, cd, de):

          next.add(new KochLine(a, b));
           next.add(new KochLine(b, c));
           next.add(new KochLine(c, d));

      Wait, let’s take a look at this one line of code a little bit more closely:

          // This is object destructuring, but for an array!
           let [a, b, c, d, e] = segment.kochPoints();

      As you may recall, in Chapter 6 I explained object destructuring as a means of extracting properties from an object and assigning them to individual variables. Guess what? You can do the same with arrays! Here, as long as the kochPoints() method returns an array of five elements, I can conveniently unpack and assign them, each to its respective variables: a, b, c, d, and e. It’s a lovely way to handle multiple return values. Just as with objects, array destructuring keeps the code neat and tidy.

      Now I just need to write a new kochPoints() method in the KochLine class that returns an array of p5.Vector objects representing the points a through e in Figure 8.15. I’ll knock off a and e first, which are the easiest—they’re just copies of the start and end points of the original line:

        kochPoints() {
           //{!2} Points a and e are copies of the line's start and end.
           let a = this.start.copy();
           let e = this.end.copy();
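For reference, the full five-point computation can be sketched library-free, with plain objects standing in for p5.Vector (my own illustration; the rotation by -60 degrees lifts point c above the line):

```javascript
// Koch subdivision sketch: turn (start, end) into the five points
// a through e of Figure 8.14.
function kochPoints(start, end) {
  // One-third of the vector from start to end.
  const vx = (end.x - start.x) / 3;
  const vy = (end.y - start.y) / 3;
  const a = { x: start.x, y: start.y };
  const b = { x: start.x + vx, y: start.y + vy };
  const d = { x: start.x + 2 * vx, y: start.y + 2 * vy };
  // Point c is b plus the one-third segment rotated by -60 degrees.
  const angle = -Math.PI / 3;
  const c = {
    x: b.x + vx * Math.cos(angle) - vy * Math.sin(angle),
    y: b.y + vx * Math.sin(angle) + vy * Math.cos(angle),
  };
  const e = { x: end.x, y: end.y };
  return [a, b, c, d, e];
}
```

Points b and d divide the line into thirds, and c forms the triangular bump between them.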

      Exercise 8.4

      Exercise 8.5


      Use recursion to draw the Sierpiński triangle (as seen in Chapter 7’s Wolfram elementary CA).

       
       

      The Deterministic Version

      Figure 8.17: Each generation of a fractal tree, following the given production rules. The final tree is several generations later.

      Once again, I have a nice fractal with a recursive definition: a branch is a line with two branches connected to it. What makes this fractal a bit more difficult than the previous ones is the use of the word rotate in the fractal’s rules. Each new branch must rotate relative to the previous branch, which is rotated relative to all its previous branches. Luckily, p5.js has a mechanism to keep track of rotations: transformations.


      I touched on transformations in Chapter 3. They’re a set of functions, such as translate(), rotate(), scale(), push(), and pop(), that allow you to change the position, orientation, and scale of shapes in your sketch. The translate() function moves the coordinate system, rotate() rotates it, and push() and pop() help save and restore the current transformation state. If you aren’t familiar with these functions, I have a set of videos on transformations in p5.js available at the Coding Train website.
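If it helps to see the mechanics, here’s a tiny stand-alone model of a transformation state (illustrative only, not p5.js’s implementation): translate() moves the origin along the current axes, rotate() turns them, and push()/pop() save and restore the state.

```javascript
// A miniature transformation stack: position plus rotation angle.
const stack = [];
let state = { x: 0, y: 0, angle: 0 };

function translateBy(dx, dy) {
  // Move along the current (rotated) axes.
  state.x += dx * Math.cos(state.angle) - dy * Math.sin(state.angle);
  state.y += dx * Math.sin(state.angle) + dy * Math.cos(state.angle);
}
function rotateBy(theta) { state.angle += theta; }
function push() { stack.push({ ...state }); }
function pop() { state = stack.pop(); }

// Walk up a "trunk," branch to the right, then return.
translateBy(0, -100);  // origin now sits at the top of the trunk
push();
rotateBy(Math.PI / 6); // rotate for a right branch
translateBy(0, -50);   // walk out along the branch
pop();                 // back at the top of the trunk, unrotated
```

After the pop(), the state is exactly where it was at the top of the trunk, which is what lets each branch draw its children and then hand control back.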

      I’ll begin by drawing a single branch, the trunk of the tree. Since I’m going to be using the rotate() function, I need to make sure I’m continuously translating along the branches while drawing. Remember, when you rotate in p5.js, you’re always rotating around the origin, or point (0, 0), so here the origin must always be translated to the start of the next branch being drawn (equivalent to the end of the previous branch). Since the trunk starts at the bottom of the window, I first have to translate to that spot:

      translate(width / 2, height);

      Then I can draw the trunk as a line upward:


      Chapter 9. Evolutionary Computing

      Genetic Algorithms: Inspired by Actual Events
      • Traditional genetic algorithm: I’ll begin with the traditional, textbook GA. This algorithm was developed to solve problems in computer science for which the solution space is so vast that a brute-force algorithm would take too long. Here’s an example: I’m thinking of a number between one and one billion. How long will it take you to guess it? With a brute-force approach, you’d have to check every possible solution. Is it one? Is it two? Is it three? Is it four? . . . Luck plays a factor here (maybe I happened to pick five!), but on average, you would end up spending years counting up from one before hitting the correct answer. However, what if I could tell you whether your answer was good or bad? Warm or cold? Very warm? Hot? Ice frigid? If you could evaluate how close (or fit) your guesses are, you could start picking numbers accordingly and arrive at the answer more quickly. Your answer would evolve.
      • Interactive selection: After exploring the traditional computer science version, I’ll examine other applications of GAs in the visual arts. Interactive selection refers to the process of evolving something (often a computer-generated image) through user interaction. Let’s say you walk into a museum gallery and see 10 paintings. With interactive selection, you might pick your favorites and allow an algorithmic process to generate (or evolve) new paintings based on your preferences.
      • Ecosystem simulation: The traditional computer science GA and interactive selection technique are what you’ll likely find if you search online or read a textbook about artificial intelligence. But as you’ll soon see, they don’t really simulate the process of evolution as it happens in the physical world. In this chapter, I’ll also explore techniques for simulating evolution in an ecosystem of artificial creatures. How can the objects that move about a canvas meet each other, mate, and pass their genes on to a new generation? This could apply directly to the Ecosystem Project outlined at the end of each chapter. It will also be particularly relevant as I explore neuroevolution in Chapter 11.

      Why Use Genetic Algorithms?

      To help illustrate the utility of the traditional GA, I’m going to start with cats. No, not just your everyday feline friends. I’m going to start with some purr-fect cats that paw-sess a talent for typing, with the goal of producing the complete works of Shakespeare (Figure 9.1).


      This is my meow-velous twist on the infinite monkey theorem, which is stated as follows: a monkey hitting keys randomly on a typewriter will eventually type the complete works of Shakespeare, given an infinite amount of time. It’s only a theory because in practice the number of possible combinations of letters and words makes the likelihood of the monkey actually typing Shakespeare minuscule. To put it in perspective, even if the monkey had started typing at the beginning of the universe, the probability that by now it would have produced just Hamlet, to say nothing of the entire works of Shakespeare, is still absurdly small.

      Consider a cat named Clawdius. Clawdius types on a reduced typewriter containing only 27 characters: the 26 English letters plus the spacebar. The probability of Clawdius hitting any given key is 1 in 27.


      Next, consider the phrase “to be or not to be that is the question” (for simplicity, I’m ignoring capitalization and punctuation). The phrase is 39 characters long, including spaces. If Clawdius starts typing, the chance he’ll get the first character right is 1 in 27. Since the probability he’ll get the second character right is also 1 in 27, he has a 1 in 729 (27 \times 27) chance of landing the first two characters in correct order. (This follows directly from our discussion of probability in Chapter 0.) Therefore, the probability that Clawdius will type the full phrase is 1 in 27 multiplied by itself 39 times, or (1/27)^{39}. That equals a probability of . . .

      1 \text{ in } \text{66,555,937,033,867,822,607,895,549,241,096,482,953,017,615,834,735,226,163}

      Needless to say, even hitting just this one phrase, let alone an entire play, let alone all 38 Shakespeare plays (yes, even The Two Noble Kinsmen) is highly unlikely. Even if Clawdius were a computer simulation and could type a million random phrases per second, for Clawdius to have a 99 percent probability of eventually getting just the one phrase right, he would have to type for 9,719,096,182,010,563,073,125,591,133,903,305,625,605,017 years. (For comparison, the universe is estimated to be a mere 13,750,000,000 years old.)

      The point of all these unfathomably large numbers isn’t to give you a headache, but to demonstrate that a brute-force algorithm (typing every possible random phrase) isn’t a reasonable strategy for arriving randomly at “to be or not to be that is the question.” Enter GAs, which start with random phrases and swiftly find the solution through simulated evolution, leaving plenty of time for Clawdius to savor a cozy catnap.
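If you’d like to verify the arithmetic yourself, JavaScript’s BigInt can compute the exact number of 39-character combinations:

```javascript
// 27 multiplied by itself 39 times, computed exactly with BigInt.
let combinations = 1n;
for (let i = 0; i < 39; i++) {
  combinations *= 27n;
}
console.log(combinations.toString());
// 66555937033867822607895549241096482953017615834735226163
```

Regular JavaScript numbers would lose precision long before 56 digits, which is why BigInt (the trailing n) is needed here.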


      Step 1: Population Creation

      Sure, these phrases have variety, but try to mix and match the characters every which way and you’ll never get cat. There isn’t enough variety here to evolve the optimal solution. However, if I had a population of thousands of phrases, all generated randomly, chances are that at least one phrase would have a c as the first character, one would have an a as the second, and one a t as the third. A large population will most likely provide enough variety to generate the desired phrase. (In step 3 of the algorithm, I’ll also demonstrate another mechanism to introduce more variation in case there isn’t enough in the first place.) Step 1 can therefore be described as follows:

      Create a population of randomly generated elements.


      Element is perhaps a better, more general-purpose term than creature. But what is the element? As you move through the examples in this chapter, you’ll see several scenarios; you might have a population of images or a population of vehicles à la Chapter 5. The part that’s new in this chapter is that each element, each member of the population, has virtual DNA, a set of properties (you could also call them genes) that describe how a given element looks or behaves. For the typing cats, for example, the DNA could be a string of characters. With this in mind, I can be even more specific and describe step 1 of the GA as follows:

      Create a population of N elements, each with randomly generated DNA.
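As a concrete sketch of step 1 in JavaScript: the 27-character set, the helper names, and the population size below are my own illustrative choices (with Math.random() standing in for p5.js's random()), not code from later in the chapter.

```javascript
// Pick a random character: the letters a through z plus the space character
// (27 possibilities in total)
function newChar() {
  const index = Math.floor(Math.random() * 27);
  return index === 26 ? " " : String.fromCharCode(97 + index);
}

// Step 1: create a population of N elements, each with randomly generated DNA
function createPopulation(n, phraseLength) {
  const population = [];
  for (let i = 0; i < n; i++) {
    let dna = "";
    for (let j = 0; j < phraseLength; j++) {
      dna += newChar();
    }
    population.push(dna);
  }
  return population;
}

// A large population of three-character phrases, aiming for "cat"
const population = createPopulation(1000, 3);
```

With 1,000 random three-character phrases, the odds are good that each letter of *cat* shows up in the right position somewhere in the population.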

      The field of genetics makes an important distinction between the concepts of genotype and phenotype. The actual genetic code—the particular sequence of molecules in the DNA—is an organism’s genotype. This is what gets passed down from generation to generation. The phenotype, by contrast, is the expression of that data—this cat will be big, that cat will be small, that other cat will be a particularly fast and effective typist.

      The genotype/phenotype distinction is key to creatively using GAs. What are the objects in your world? How will you design the genotype for those objects—the data structure to store each object’s properties, and the values those properties take on? And how will you use that information to design the phenotype? That is, what do you want these variables to actually express?

      Step 2: Selection

      Once the fitness scores have been computed, the next step is to build the mating pool for the reproduction process. The mating pool is a data structure from which two parents are repeatedly selected. Recalling the description of the selection process, the goal is to pick parents with probabilities calculated according to fitness. The members of the population with the highest fitness scores should be the most likely to be selected; those with the lowest scores, the least likely.

      In Chapter 0, I covered the basics of probability and generating a custom distribution of random numbers. I’m going to use the same techniques here to assign a probability to each member of the population, picking parents by spinning the wheel of fortune. Revisiting Figure 9.2, your mind might immediately go back to Chapter 3 and contemplate coding a simulation of an actual spinning wheel. As fun as this might be (and you should make one!), it’s quite unnecessary.

Figure 9.7: A bucket full of letters A, B, C, D, and E. The higher the fitness, the more instances of the letter in the bucket.
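The bucket analogy translates almost directly into code: add each member to an array some number of times in proportion to its fitness. Here's a rough sketch (my own illustration, assuming each member carries a fitness score normalized between 0 and 1):

```javascript
// Build a mating pool: the higher a member's fitness,
// the more copies of it end up in the pool
function buildMatingPool(population) {
  const matingPool = [];
  for (const member of population) {
    // A fitness of 0.66 yields 66 entries; a fitness of 0 yields none
    const n = Math.floor(member.fitness * 100);
    for (let i = 0; i < n; i++) {
      matingPool.push(member);
    }
  }
  return matingPool;
}

const population = [
  { dna: "cat", fitness: 1.0 },
  { dna: "car", fitness: 0.66 },
  { dna: "box", fitness: 0.0 },
];
const matingPool = buildMatingPool(population);
```

Once the pool is built, picking a parent is just a matter of pulling out a random entry.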

let parentA = random(matingPool);
let parentB = random(matingPool);
      This method of building a mating pool and choosing parents from it works, but it isn’t the only way to perform selection. Other, more memory-efficient techniques don’t require an additional array full of multiple references to each element. For example, think back to the discussion of nonuniform distributions of random numbers in Chapter 0. There, I implemented the accept-reject method. If applied here, the approach would be to randomly pick an element from the original population array, and then pick a second, qualifying random number to check against the element’s fitness value. If the fitness is less than the qualifying number, start again and pick a new element. Keep going until two parents are deemed fit enough.
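A sketch of that accept-reject idea (my own illustration, with Math.random() in place of p5.js's random() and fitness values assumed to be normalized between 0 and 1):

```javascript
// Keep picking random candidates until one "qualifies":
// a candidate is accepted only if its fitness beats a random number
function acceptReject(population) {
  while (true) {
    const index = Math.floor(Math.random() * population.length);
    const candidate = population[index];
    // The qualifying random number
    const qualifier = Math.random();
    // Low-fitness members are usually rejected; high-fitness ones usually pass
    if (candidate.fitness > qualifier) {
      return candidate;
    }
  }
}

const population = [
  { dna: "cat", fitness: 0.9 },
  { dna: "cab", fitness: 0.5 },
  { dna: "box", fitness: 0.1 },
];
const parentA = acceptReject(population);
const parentB = acceptReject(population);
```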

      Yet another excellent alternative is worth exploring that similarly capitalizes on the principle of fitness-proportionate selection. To understand how it works, imagine a relay race in which each member of the population runs a given distance tied to its fitness. The higher the fitness, the farther they run. Let’s also assume that the fitness values have been normalized to all add up to 1 (just as with the wheel of fortune). The first step is to pick a starting line—a random distance from the finish. This distance is a random number from 0 to 1. (You’ll see in a moment that the finish line is assumed to be at 0.)

      let start = random(1);

      Then the relay race begins at the starting line with the first member of the population:
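The rest of the race can be sketched as follows: each member in turn "runs" a distance equal to its fitness, and whichever member crosses the finish line at 0 is selected. (This is my own sketch of the idea just described, not code from elsewhere in the chapter: the example population is made up, and Math.random() stands in for p5.js's random().)

```javascript
// An example population with fitness values normalized to add up to 1
const population = [
  { dna: "cat", fitness: 0.5 },
  { dna: "car", fitness: 0.3 },
  { dna: "box", fitness: 0.2 },
];

function weightedSelection() {
  let index = 0;
  // The starting line: a random distance from the finish at 0
  let start = Math.random();
  // Each member in turn runs a distance equal to its fitness...
  do {
    start -= population[index].fitness;
    index++;
  } while (start > 0);
  // ...and the member that crossed the finish line is selected
  index--;
  return population[index];
}

const parent = weightedSelection();
```

Higher-fitness members cover more of the course, so they're more likely to be the one that crosses 0: over many calls, cat should be picked about half the time.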

      Depending on the specific requirements and constraints of your application of GAs, one approach might prove more suitable than the other. I’ll alternate between them in the examples outlined in this chapter.

      Exercise 9.2

      Revisit the accept-reject algorithm from Chapter 0 and rewrite the weightedSelection() function to use accept-reject instead. Like the relay race method, this technique can also end up being computationally intensive, since several potential parents may be rejected as unfit before one is finally chosen.

      Exercise 9.3

      Evolving Forces: Smart Rockets

      In this section, I’m going to evolve my own simplified smart rockets, inspired by Thorp’s. When I get to the end of the section, I’ll leave implementing some of Thorp’s additional advanced features as an exercise.

      My rockets will have only one thruster, which will be able to fire in any direction with any strength for every frame of animation. This isn’t particularly realistic, but it will make building out the example a little easier. (You can always make the rocket and its thrusters more advanced and realistic later.)

      Developing the Rockets

      To implement my evolving smart rockets, I’ll start by taking the Mover class from Chapter 2 and renaming it Rocket:

class Rocket {
  constructor(x, y) {
    // A rocket has three vectors: position, velocity, and acceleration.
    this.position = createVector(x, y);
    this.velocity = createVector();
    this.acceleration = createVector();
  }
}

      Figure 9.11: Vectors created with random x and y values (left) and using p5.Vector.random2D() (right)
      As you may recall from Chapter 3, a better choice is to pick a random angle and create a vector of length 1 from that angle. This produces results that form a circle (see the right of Figure 9.11) and can be achieved with polar-to-Cartesian conversion or the trusty p5.Vector.random2D() method:

for (let i = 0; i < length; i++) {
  //{!1} A random unit vector
  this.genes[i] = p5.Vector.random2D();
}
      Exercise 9.13

      Exercise 9.14

      Another of Karl Sims’s seminal works in the field of GAs is “Evolved Virtual Creatures.” In this project, a population of digital creatures in a simulated physics environment is evaluated for their ability to perform tasks, such as swimming, running, jumping, following, and competing for a green cube. The project uses a node-based genotype: the creature’s DNA isn’t a linear list of vectors or numbers, but a map of nodes (much like the soft-body simulation in Chapter 6). The phenotype is the creature’s body itself, a network of limbs connected with muscles.

      Ecosystem Simulation

      So far, I’m just rehashing the particle systems from Chapter 4. I have an entity called Bloop that moves around the canvas, and a class called World that manages a variable quantity of these entities. To turn this into a system that evolves, I need to add two additional features to my world:

      • Bloops die.
      • Bloops are born.
        This is a good first step, but I haven’t really achieved anything. After all, if all bloops start with 100 health points and lose health at the same rate, then all bloops will live for the exact same amount of time and die together. If every single bloop lives the same amount of time, each one has an equal chance of reproducing, and therefore no evolutionary change will occur.

        You can achieve variable life spans in several ways with a more sophisticated world. One approach is to introduce predators that eat bloops. Faster bloops would be more likely to escape being eaten, leading to the evolution of increasingly faster bloops. Another option is to introduce food. When a bloop eats food, its health points increase, extending its life.

Let’s assume I have an array of vector positions called food. I could test each bloop’s proximity to each food position. If the bloop is close enough, it eats the food (which is then removed from the world) and increases its health:

  eat(food) {
    // Check all the food vectors.
    for (let i = food.length - 1; i >= 0; i--) {
      // How far away is this food?
      let d = p5.Vector.dist(this.position, food[i]);
      // If the food is close enough, gain health and remove it from the world.
      if (d < this.r) {
        this.health += 100;
        food.splice(i, 1);
      }
    }
  }

        Chapter 10. Neural Networks

        Figure 10.1: A neuron with dendrites and an axon connected to another neuron

        Fortunately, as you’ve seen throughout this book, developing engaging animated systems with code doesn’t require scientific rigor or accuracy. Designing a smart rocket isn’t rocket science, and neither is designing an artificial neural network brain science. It’s enough to simply be inspired by the idea of brain function.

        In this chapter, I’ll begin with a conceptual overview of the properties and features of neural networks and build the simplest possible example of one, a network that consists of a single neuron. I’ll then introduce you to more complex neural networks by using the ml5.js library. This will serve as a foundation for Chapter 11, the grand finale of this book, where I’ll combine GAs with neural networks for physics simulation.

        Introducing Artificial Neural Networks

        Computer scientists have long been inspired by the human brain. In 1943, Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, developed the first conceptual model of an artificial neural network. In their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity,” they describe a neuron as a single computational cell living in a network of cells that receives inputs, processes those inputs, and generates an output.

        Their work, and the work of many scientists and researchers who followed, wasn’t meant to accurately describe how the biological brain works. Rather, an artificial neural network (hereafter referred to as just a neural network) was intended as a computational model based on the brain, designed to solve certain kinds of problems that were traditionally difficult for computers.

        How Neural Networks Work

        • Supervised learning: Essentially, this strategy involves a teacher that’s smarter than the network itself. Take the case of facial recognition. The teacher shows the network a bunch of faces, and the teacher already knows the name associated with each face. The network makes its guesses; then the teacher provides the network with the actual names. The network can compare its answers to the known correct ones and make adjustments according to its errors. The neural networks in this chapter follow this model.
        • Unsupervised learning: This technique is required when you don’t have an example dataset with known answers. Instead, the network works on its own to uncover hidden patterns in the data. An application of this is clustering: a set of elements is divided into groups according to an unknown pattern. I won’t be showing any instances of unsupervised learning, as the strategy is less relevant to the book’s examples.
        • Reinforcement learning: This strategy is built on observation: a learning agent makes decisions and looks to its environment for the results. It’s rewarded for good decisions and penalized for bad decisions, such that it learns to make better decisions over time. I’ll discuss this strategy in more detail in Chapter 11.

        The ability of a neural network to learn, to make adjustments to its structure over time, is what makes it so useful in the field of machine learning. This term can be traced back to the 1959 paper “Some Studies in Machine Learning Using the Game of Checkers,” in which computer scientist Arthur Lee Samuel outlines a “self-learning” program for playing checkers. The concept of an algorithm enabling a computer to learn without explicit programming is the foundation of machine learning.

        Think about what you’ve been doing throughout this book: coding! In traditional programming, a computer program takes inputs and, based on the rules you’ve provided, produces outputs. Machine learning, however, turns this approach upside down. Instead of you writing the rules, the system is given example inputs and outputs, and generates the rules itself! Many algorithms can be used to implement machine learning, and a neural network is just one of them.

        Machine learning is part of the broad, sweeping field of artificial intelligence (AI), although the terms are sometimes used interchangeably. In their thoughtful and friendly primer A People’s Guide to AI, Mimi Onuoha and Diana Nucera (aka Mother Cyborg) define AI as “the theory and development of computer systems able to perform tasks that normally require human intelligence.” Machine learning algorithms are one approach to these tasks, but not all AI systems feature a self-learning component.

        Machine Learning Libraries

        Today, leveraging machine learning in creative coding and interactive media isn’t only feasible but increasingly common, thanks to third-party libraries that handle a lot of the neural network implementation details under the hood. While the vast majority of machine learning development and research is done in Python, the world of web development has seen the emergence of powerful JavaScript-based tools. Two libraries of note are TensorFlow.js and ml5.js.

        TensorFlow.js is an open source library that lets you define, train, and run neural networks directly in the browser using JavaScript, without the need to install or configure complex environments. It’s part of the TensorFlow ecosystem, which is maintained and developed by Google. TensorFlow.js is a powerful tool, but its low-level operations and highly technical API can be intimidating to beginners. Enter ml5.js, a library built on top of TensorFlow.js and designed specifically for use with p5.js. Its goal is to be beginner friendly and make machine learning approachable for a broad audience of artists, creative coders, and students. I’ll demonstrate how to use ml5.js in “Machine Learning with ml5.js”.

        A benefit of libraries like TensorFlow.js and ml5.js is that you can use them to run pretrained models. A machine learning model is a specific setup of neurons and connections, and a pretrained model is one that has already been prepared for a particular task. For example, popular pretrained models are used for classifying images, identifying body poses, recognizing facial landmarks or hand positions, and even analyzing the sentiment expressed in a text. You can use such a model as is or treat it as a starting point for additional learning (commonly referred to as transfer learning).

        Before I get to exploring the ml5.js library, however, I’d like to try my hand at building the simplest of all neural networks from scratch, using only p5.js, to illustrate how the concepts of neural networks and machine learning are implemented in code.

        The Perceptron

Simple Pattern Recognition

Figure 10.4: A collection of points in 2D space divided by a line, representing plant categories according to their water and sunlight intake
        In truth, I don’t need a neural network—not even a simple perceptron—to tell me whether a point is above or below a line. I can see the answer for myself with my own eyes, or have my computer figure it out with simple algebra. But just like solving a problem with a known answer—“to be or not to be”—was a convenient first test for the GA in Chapter 9, training a perceptron to categorize points as being on one side of a line versus the other will be a valuable way to demonstrate the algorithm of the perceptron and verify that it’s working properly.

        To solve this problem, I’ll give my perceptron two inputs: x_0 is the x-coordinate of a point, representing a plant’s amount of sunlight, and x_1 is the y-coordinate of that point, representing the plant’s amount of water. The perceptron then guesses the plant’s classification according to the sign of the weighted sum of these inputs. If the sum is positive, the perceptron outputs a +1, signifying a hydrophyte (above the line). If the sum is negative, it outputs a –1, signifying a xerophyte (below the line). Figure 10.5 shows this perceptron (note the shorthand of w_0 and w_1 for the weights).

Figure 10.5: A perceptron with two inputs (x_0 and x_1), a weight for each input (w_0 and w_1), and a processing neuron that generates the output

        The Perceptron Code

        This process can be packaged into a method on the Perceptron class, but before I can write it, I need to examine steps 3 and 4 in more detail. How do I define the perceptron’s error? And how should I adjust the weights according to this error?

        The perceptron’s error can be defined as the difference between the desired answer and its guess:

        \text{error} = \text{desired output} - \text{guess output}
        Does this formula look familiar? Think back to the formula for a vehicle’s steering force that I worked out in Chapter 5:

        \text{steering} = \text{desired velocity} - \text{current velocity}

        This is also a calculation of an error! The current velocity serves as a guess, and the error (the steering force) indicates how to adjust the velocity in the correct direction. Adjusting a vehicle’s velocity to follow a target is similar to adjusting the weights of a neural network toward the correct answer.

        For the perceptron, the output has only two possible values: +1 or –1. Therefore, only three errors are possible. If the perceptron guesses the correct answer, the guess equals the desired output and the error is 0. If the correct answer is –1 and the perceptron guessed +1, then the error is –2. If the correct answer is +1 and the perceptron guessed –1, then the error is +2. Here’s that process summarized in a table:

Desired    Guess    Error
  +1        +1        0
  –1        –1        0
  +1        –1       +2
  –1        +1       –2

//{!1} The answer becomes +1 if y is above the line.
  desired = 1;
}

        I can then make an input array to go with the desired output:

        // Don’t forget to include the bias!
         let trainingInputs = [x, y, 1];
        Assuming that I have a perceptron variable, I can train it by providing the inputs along with the desired answer:

        perceptron.train(trainingInputs, desired);

        If I train the perceptron on a new random point (and its answer) for each cycle through draw(), it will gradually get better at classifying the points as above or below the line.
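Putting all the pieces together, here's a compact sketch of the whole perceptron (a consolidation of my own: the sign activation, the fixed learning rate of 0.01, and the plain for loop standing in for draw() are illustrative choices, and Math.random() replaces p5.js's random()):

```javascript
class Perceptron {
  constructor(n, learningRate) {
    // Start with one random weight per input (the last input is the bias)
    this.weights = Array.from({ length: n }, () => Math.random() * 2 - 1);
    this.learningRate = learningRate;
  }

  // Activation: +1 if the weighted sum is positive, -1 otherwise
  feedForward(inputs) {
    let sum = 0;
    for (let i = 0; i < this.weights.length; i++) {
      sum += inputs[i] * this.weights[i];
    }
    return sum > 0 ? 1 : -1;
  }

  // error = desired output - guess output, and each weight
  // is nudged in proportion to the error and its input
  train(inputs, desired) {
    const guess = this.feedForward(inputs);
    const error = desired - guess;
    for (let i = 0; i < this.weights.length; i++) {
      this.weights[i] += error * inputs[i] * this.learningRate;
    }
  }
}

// Train against a known answer: +1 if the point is above the line y = x
const perceptron = new Perceptron(3, 0.01);
for (let i = 0; i < 20000; i++) {
  const x = Math.random() * 2 - 1;
  const y = Math.random() * 2 - 1;
  const desired = y > x ? 1 : -1;
  // Don't forget to include the bias!
  perceptron.train([x, y, 1], desired);
}
```

After thousands of training points, the perceptron's guesses agree with the line for nearly every point except those hugging the boundary.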

Putting the “Network” in Neural Networks

        The solution to optimizing the weights of a multilayered network is backpropagation. This process takes the error and feeds it backward through the network so it can adjust the weights of all the connections in proportion to how much they’ve contributed to the total error. The details of backpropagation are beyond the scope of this book. The algorithm uses a variety of activation functions (one classic example is the sigmoid function) as well as some calculus. If you’re interested in continuing down this road and learning more about how backpropagation works, you can find my “Toy Neural Network” project at the Coding Train website with accompanying video tutorials. They go through all the steps of solving XOR using a multilayered feed-forward network with backpropagation. For this chapter, however, I’d instead like to get some help and phone a friend.

        Machine Learning with ml5.js

        That friend is ml5.js. This machine learning library can manage the details of complex processes like backpropagation so you and I don’t have to worry about them. As I mentioned earlier in the chapter, ml5.js aims to provide a friendly entry point for those who are new to machine learning and neural networks, while still harnessing the power of Google’s TensorFlow.js behind the scenes.

        To use ml5.js in a sketch, you must import it via a <script> element in your index.html file, much as you did with Matter.js and Toxiclibs.js in Chapter 6:

        <script src="https://unpkg.com/ml5@latest/dist/ml5.min.js"></script>
        My goal for the rest of this chapter is to introduce ml5.js by developing a system that can recognize mouse gestures. This will prepare you for Chapter 11, where I’ll add a neural network “brain” to an autonomous steering agent and tie machine learning back into the story of the book. First, however, I’d like to talk more generally through the steps of training a multilayered neural network model using supervised learning. Outlining these steps will highlight important decisions you’ll have to make before developing a learning model, introduce the syntax of the ml5.js library, and provide you with the context you’ll need before training your own machine learning models.

        The Machine Learning Life Cycle

        The life cycle of a machine learning model is typically broken into seven steps:

          Building a Gesture Classifier

          Figure 10.20: A single mouse gesture as a vector between a start and end point

          Each gesture could be recorded as a vector extending from the start to the end point of a mouse movement. The x- and y-components of the vector will be the model’s inputs. The model’s task could be to predict one of four possible labels for the gesture: up, down, left, or right. With a discrete set of possible outputs, this sounds like a classification problem. The four labels will be the model’s outputs.

          Much like some of the GA demonstrations in Chapter 9—and like the simple perceptron example earlier in this chapter—the problem I’m selecting here has a known solution and could be solved more easily and efficiently without a neural network. The direction of a vector can be classified with the heading() function and a series of if statements! However, by using this seemingly trivial scenario, I hope to explain the process of training a machine learning model in an understandable and friendly way. Additionally, this example will make it easy to check that the code is working as expected. When I’m done, I’ll provide some ideas about how to expand the classifier to a scenario that couldn’t use simple if statements.
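For comparison, here's what that non-neural-network solution might look like (a sketch of my own: Math.atan2() plays the role of p5.js's heading(), and the 45-degree boundaries between labels are an arbitrary choice; note that in canvas coordinates, y increases downward, so a positive y-component means a downward gesture):

```javascript
// Classify a gesture vector as up, down, left, or right from its angle alone
function classifyGesture(x, y) {
  // The vector's heading, in radians from -PI to PI
  const angle = Math.atan2(y, x);
  const quarter = Math.PI / 4;
  if (angle > -quarter && angle <= quarter) return "right";
  if (angle > quarter && angle <= 3 * quarter) return "down";
  if (angle >= -3 * quarter && angle <= -quarter) return "up";
  return "left";
}
```

A trained classifier should agree with this function on clear-cut gestures; the payoff of the neural network comes later, when the inputs are too messy for handwritten rules.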

          Collecting and Preparing the Data

          With the problem established, I can turn to steps 1 and 2: collecting and preparing the data. In the real world, these steps can be tedious, especially when the raw data you collect is messy and needs a lot of initial processing. You can think of this like having to organize, wash, and chop all your ingredients before you can start cooking a meal from scratch.

          For simplicity, I’d instead like to take the approach of ordering a machine learning “meal kit,” with the ingredients (data) already portioned and prepared. This way, I’ll get straight to the cooking itself, the process of training the model. After all, this is really just an appetizer for what will be the ultimate meal in Chapter 11, when I apply neural networks to steering agents.

          With that in mind, I’ll handcode some example data and manually keep it normalized within a range of –1 and +1. I’ll organize the data into an array of objects, pairing the x- and y-components of a vector with a string label. I’m picking values that I feel clearly point in a specific direction and assigning the appropriate label—two examples per label:

          let data = [
             { x: 0.99, y: 0.02, label: "right" },

          Training the Model

The second argument to train() is optional, but it’s good to include one. It specifies a callback function that runs when the training process is complete—in this case, finishedTraining(). (See the “Callbacks” box for more on callback functions.) This is useful for knowing when you can proceed to the next steps in your code. Another optional callback, which I usually name whileTraining(), is triggered after each epoch. However, for my purposes, knowing when the training is done is plenty!

          Callbacks

          A callback function in JavaScript is a function you don’t actually call yourself. Instead, you provide it as an argument to another function, intending for it to be called back automatically at a later time (typically associated with an event, like a mouse click). You’ve seen this before when working with Matter.js in Chapter 6, where you specified a function to call whenever a collision was detected.

          Callbacks are needed for asynchronous operations, when you want your code to continue along with animating or doing other things while waiting for another task (like training a machine learning model) to finish. A classic example of this in p5.js is loading data into a sketch with loadJSON().

          JavaScript also provides a more recent approach for handling asynchronous operations known as promises. With promises, you can use keywords like async and await to make your asynchronous code look more like traditional synchronous code. While ml5.js also supports this style, I’ll stick to using callbacks to stay aligned with p5.js style.
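To make the pattern concrete outside p5.js, here’s a tiny plain-JavaScript sketch. The function names are invented for illustration: repeat() receives a function as an argument and is the one that actually calls it:

```javascript
// A function that accepts another function and invokes it later, once per step.
function repeat(times, callback) {
  for (let i = 0; i < times; i++) {
    // The caller's function is "called back" here, not at the call site.
    callback(i);
  }
}

// The caller hands over the callback but never invokes it directly.
let results = [];
repeat(3, (i) => results.push(i * 2));
console.log(results); // [ 0, 2, 4 ]
```

The same shape applies to train(): you hand the library a function, and the library decides when to call it.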


          The Ecosystem Project

          Incorporate machine learning into your ecosystem to enhance the behavior of creatures. How could classification or regression be applied?

          • Can you classify the creatures of your ecosystem into multiple categories? What if you use an initial population as a training dataset, and as new creatures are born, the system classifies them according to their features? What are the inputs and outputs for your system?
          • Can you use a regression to predict the life span of a creature based on its properties? Think about how size and speed affected the life span of the bloops from Chapter 9. Could you analyze how well the regression model’s predictions align with the actual outcomes?

          Chapter 11. Neuroevolution


          Throughout this book, you’ve explored the fundamental principles of interactive physics simulations with p5.js, dived into the complexities of agent and other rule-based behaviors, and dipped your toe into the exciting realm of machine learning. You’ve become a natural!

However, Chapter 10 merely scratched the surface of working with data and neural network–based machine learning—a vast landscape that would require countless sequels to this book to cover comprehensively. My goal was never to go deep into neural networks, but simply to establish the core concepts in preparation for a grand finale, where I find a way to integrate machine learning into the world of animated, interactive p5.js sketches and bring together as many of our new Nature of Code friends as possible for one last hurrah.

The path forward passes through the field of neuroevolution, a style of machine learning that combines the GAs from Chapter 9 with the neural networks from Chapter 10. A neuroevolutionary system uses Darwinian principles to evolve the weights (and in some cases, the structure itself) of a neural network over generations of trial-and-error learning. In this chapter, I’ll demonstrate how to use neuroevolution with a familiar example from the world of gaming. I’ll then finish off by varying Craig Reynolds’s steering behaviors from Chapter 5 so that they are learned through neuroevolution.

          Reinforcement Learning

Neuroevolution shares many similarities with another machine learning methodology that I briefly referenced in Chapter 10, reinforcement learning, which incorporates machine learning into a simulated environment. A neural network–backed agent learns by interacting with the environment and receiving feedback about its decisions in the form of rewards or penalties. It’s a strategy built around observation.

          Think of a little mouse running through a maze. If it turns left, it gets a piece of cheese; if it turns right, it receives a little shock. (Don’t worry, this is just a pretend mouse.) Presumably, the mouse will learn over time to turn left. Its biological neural network makes a decision with an outcome (turn left or right) and observes its environment (yum or ouch). If the observation is negative, the network can adjust its weights in order to make a different decision the next time.

          In the real world, reinforcement learning is commonly used not for tormenting rodents but rather for developing robots. At time t, the robot performs a task and observes the results. Did it crash into a wall or fall off a table, or is it unharmed? As time goes on, the robot learns to interpret the signals from its environment in the optimal way to accomplish its tasks and avoid harm.

          Instead of a mouse or a robot, now think about any of the example objects from earlier in this book (walker, mover, particle, vehicle). Imagine embedding a neural network into one of these objects and using it to calculate a force or another action. The neural network could receive its inputs from the environment (such as distance to an obstacle) and output some kind of decision. Perhaps the network chooses from a set of discrete options (move left or right) or picks a set of continuous values (the magnitude and direction of a steering force).

Is this starting to sound familiar? It’s no different from the way a neural network performed after training in the Chapter 10 examples, receiving inputs and predicting a classification or regression! Actually training one of these objects to make a good decision is where the reinforcement learning process diverges from the supervised learning approach. To better illustrate, let’s start with a hopefully easy-to-understand and possibly familiar scenario, the game Flappy Bird (see Figure 11.1).

          The game is deceptively simple. You control a small bird that continually moves horizontally across the screen. With each tap or click, the bird flaps its wings and rises upward. The challenge? A series of vertical pipes spaced apart at irregular intervals emerge from the right. The pipes have gaps, and your primary objective is to navigate the bird safely through these gaps. If you hit a pipe, it’s game over. As you progress, the game’s speed increases, and the more pipes you navigate, the higher your score.

Figure 11.1: The Flappy Bird game

  task: "classification",
};
let birdBrain = ml5.neuralNetwork(options);
What next? If I were following the steps I laid out in Chapter 10, I’d have to go back to steps 1 and 2 of the machine learning process: data collection and preparation. How exactly would that work here? One idea could be to scour the earth for the greatest Flappy Bird player of all time and record them playing for hours. I could log the input features for every moment of gameplay along with whether the player flapped or not. Feed all that data into the model, train it, and I can see the headlines already: “Artificial Intelligence Bot Defeats Flappy Bird.”

          But wait a second; has a computerized agent really learned to play Flappy Bird on its own, or has it simply learned to mirror the gameplay of a human? What if that human missed a key aspect of Flappy Bird strategy? The automated player would never discover it. Not to mention that collecting all that data would be incredibly tedious.

The problem here is that I’ve reverted to a supervised learning scenario like the ones from Chapter 10, but this is supposed to be a section about reinforcement learning. Unlike supervised learning, in which the correct answers are provided by a training dataset, the agent in reinforcement learning learns the answers—the optimal decisions—through trial and error by interacting with the environment and receiving feedback. In the case of Flappy Bird, the agent could receive a positive reward every time it successfully navigates a pipe, but a negative reward if it hits a pipe or the ground. The agent’s goal is to figure out which actions lead to the most cumulative rewards over time.

          At the start, the Flappy Bird agent won’t know the best time to flap its wings, leading to many crashes. As it accrues more and more feedback from countless play-throughs, however, it will begin to refine its actions and develop the optimal strategy to navigate the pipes without crashing, maximizing its total reward. This process of learning by doing and optimizing based on feedback is the essence of reinforcement learning.

          As the chapter goes on, I’ll explore the principles I’m outlining here, but with a twist. Traditional techniques in reinforcement learning involve defining a strategy (called a policy) and a corresponding reward function to provide feedback for adjusting the policy. Instead of going down this road, however, I’m going to turn toward the star of this chapter, neuroevolution.

          Evolving Neural Networks Is NEAT!


          The Bird Brain

  }
}
The neural network’s prediction is in the same format as the gesture classifier from Chapter 10, and the decision can be made by checking the first element of the results array. If the output label is "flap", then call flap().

Now that I’ve finished the think() method, the real challenge can begin: teaching the bird to win the game by consistently flapping its wings at the right moment. This is where the GA comes back into the picture. Recalling the discussion from Chapter 9, three key principles underpin Darwinian evolution: variation, selection, and heredity. I’ll revisit each of these principles in turn as I implement the steps of the GA in this new context of neural networks.

        Variation: A Flock of Flappy Birds

        A single bird with a randomly initialized neural network isn’t likely to have any success at all. That lone bird will most likely jump incessantly and fly way off-screen, or sit perched at the bottom of the canvas awaiting collision after collision with the pipes. This erratic and nonsensical behavior is a reminder: a randomly initialized neural network lacks any knowledge or experience. The bird is essentially making wild guesses for its actions, so success is going to be rare.

        This is where the first key principle of GAs comes in: variation. The hope is that by introducing as many different neural network configurations as possible, a few might perform slightly better than the rest. The first step toward variation is to add an array of many birds (Figure 11.4).
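As a standalone sketch, building that population might look like the following. The Bird class here is a bare-bones stand-in of my own (in the actual example, each bird wraps an ml5.js neural network), and the population size of 250 is an arbitrary choice:

```javascript
// Stand-in Bird: a random "brain" (a flat weight array), a fitness score, and an alive flag.
class Bird {
  constructor() {
    // Random weights in the range -1 to +1 stand in for a freshly initialized network.
    this.brain = Array.from({ length: 8 }, () => Math.random() * 2 - 1);
    this.fitness = 0;
    this.alive = true;
  }
}

// Variation: many birds, each with a different randomly initialized brain.
let birds = [];
for (let i = 0; i < 250; i++) {
  birds.push(new Bird());
}
```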


        Selection: Flappy Bird Fitness

    //{!1} Is the bird alive or not?
    this.alive = true;
  }

I’ll assign the fitness a numeric value that increases by one every cycle through draw(), as long as the bird remains alive. The birds that survive longer should have a higher fitness value. This mechanism mirrors the reinforcement learning technique of rewarding good decisions. In reinforcement learning, however, an agent receives immediate feedback for every decision it makes, allowing it to adjust its policy accordingly. Here, the bird’s fitness is a cumulative measure of its overall success and will be applied only during the selection step of the GA:

          update() {
             //{!1} Increment the fitness each time through update().
             this.fitness++;
           }
The alive property is a Boolean flag that’s initially set to true. When a bird collides with a pipe, this property is set to false. Only birds that are still alive are updated and drawn to the canvas:

        function draw() {
           // There’s now an array of birds!
           for (let bird of birds) {

    }
  }
}
In Chapter 9, I demonstrated two techniques for running an evolutionary simulation. In the smart rockets example, the population lived for a fixed amount of time each generation. The same approach could likely work here as well, but I want to allow the birds to accumulate the highest fitness value possible and not arbitrarily stop them based on a time limit. The second technique, demonstrated with the bloops example, eliminated the fitness score entirely and set a random probability for cloning any living creature. For Flappy Bird, this approach could become messy and risks overpopulation or all the birds dying out completely.

        I propose combining elements of both approaches. I’ll allow a generation to continue as long as at least one bird is still alive. When all the birds have died, I’ll select parents for the reproduction step and start anew. I’ll begin by writing a function to check whether all the birds have died:

function allBirdsDead() {
  for (let bird of birds) {
    //{!1} If any bird is still alive, they are not all dead!
    if (bird.alive) {
      return false;
    }
  }
  // No living bird was found, so the generation is over.
  return true;
}

        Once normalized, each bird’s fitness is equal to its probability of being selected.
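The selection step itself can reuse the weighted-sampling technique from Chapter 9. Here’s a self-contained sketch of that idea, using Math.random() in place of p5.js’s random() and assuming the fitness values have already been normalized to sum to 1:

```javascript
// Spin a virtual wheel: subtract normalized fitness values until it lands on a bird.
function weightedSelection(birds) {
  let index = 0;
  let start = Math.random();
  while (start > 0) {
    start -= birds[index].fitness;
    index++;
  }
  // Step back to the bird whose fitness tipped the total over.
  index--;
  return birds[Math.max(index, 0)];
}
```

A bird with a normalized fitness of 0.5 is picked roughly half the time; a bird with fitness 0 is (almost) never picked.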

        Heredity: Baby Birds

Only one step is left in the GA—reproduction. In Chapter 9, I explored in great detail the two-step process for generating a child element: crossover and mutation. Crossover is where the third key principle of heredity arrives: the DNA from the two selected parents is combined to form the child’s DNA.

        At first glance, the idea of inventing a crossover algorithm for two neural networks might seem daunting, and yet it’s quite straightforward. Think of the individual “genes” of a bird’s brain as the weights within the neural network. Mixing two such brains boils down to creating a new neural network with each weight chosen by a virtual coin flip—the weight comes from either the first or the second parent:

        // Pick two parents and create a child with crossover.
         let parentA = weightedSelection();
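At the level of raw numbers, that virtual coin flip is only a few lines. Here’s a standalone sketch of my own operating on flat arrays of weights (ml5.js performs the equivalent work inside its own crossover implementation):

```javascript
// Coin-flip crossover: each child weight is copied from one parent or the other.
function crossoverWeights(weightsA, weightsB) {
  let child = [];
  for (let i = 0; i < weightsA.length; i++) {
    // Heads takes parent A's weight; tails takes parent B's.
    child[i] = Math.random() < 0.5 ? weightsA[i] : weightsB[i];
  }
  return child;
}
```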

        Example 11.2: Flappy Bird w

        Note the addition of a new resetPipes() function. If I don’t remove the pipes before starting a new generation, the birds may immediately restart at a position colliding with a pipe, in which case even the best bird won’t have a chance to fly! The full online code for Example 11.2 also adjusts the behavior of the birds so that they die when they leave the canvas, either by crashing into the ground or soaring too high above the top.

        Exercise 11.2

It takes a very long time for Example 11.2 to produce any results. Could you “speed up time” by skipping the drawing of every single frame of the game to reach an optimal bird faster? (A solution is presented in “Speeding Up Time”.) Additionally, could you add an overlay that displays information about the simulation’s status, such as the number of birds still in play, the current generation, and the life span of the best bird?

        Exercise 11.3

        To avoid starting the neuroevolution process from scratch every time, try using ml5.js’s neural network save() and load() methods. How might you add a feature that saves the best bird model as well as an option to load a previously saved model?

        Steering the Neuroevolutionary Way

Having explored neuroevolution with Flappy Bird, I’d like to shift the focus back to the realm of simulation, specifically the steering agents introduced in Chapter 5. What if, instead of me dictating the rules for an algorithm to calculate a steering force, a simulated creature could evolve its own strategy? Drawing inspiration from Reynolds’s aim of lifelike and improvisational behaviors, my goal isn’t to use neuroevolution to engineer the perfect creature that can flawlessly execute a task. Instead, I hope to create a captivating world of simulated life, where the quirks, nuances, and happy accidents of evolution unfold in the canvas.

I’ll begin by adapting the smart rockets example from Chapter 9. In that example, the genes for each rocket were an array of vectors:

this.genes = [];
for (let i = 0; i < lifeSpan; i++) {
  //{!2} Each gene is a vector with random direction and magnitude.
  this.genes[i] = p5.Vector.random2D();
  this.genes[i].mult(random(0, maxforce));
}

        The neural network brain outputs two values: one for the angle of the vector and one for the magnitude. You might think to instead use these outputs for the vector’s x- and y-components. The default output range for an ml5.js neural network is from 0 to 1, however, and I want the forces to be capable of pointing in both positive and negative directions. Mapping the first output to an angle by multiplying it by TWO_PI offers the full range.
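That remapping can be expressed as a small standalone function. In this sketch, outputsToForce() and maxForce are names of my own invention:

```javascript
// Convert two network outputs in the range 0 to 1 into a steering force vector.
function outputsToForce(out0, out1, maxForce) {
  // Map the first output across the full circle of directions.
  let angle = out0 * Math.PI * 2;
  // Map the second output onto the force's magnitude.
  let magnitude = out1 * maxForce;
  return { x: Math.cos(angle) * magnitude, y: Math.sin(angle) * magnitude };
}
```

Because the angle covers the full circle, the resulting force can point in both positive and negative directions even though each raw output is nonnegative.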

        You may have noticed that the code includes a variable called inputs that I have yet to declare or initialize. Defining the inputs to the neural network is where you, as the designer of the system, can be the most creative. You have to consider the nature of the environment and the simulated biology and capabilities of your creatures, and then decide which features are most important.

As a first try, I’ll assign something basic for the inputs and see if it works. Since the smart rockets’ environment is static, with fixed obstacles and targets, what if the brain could learn and estimate a flow field to navigate toward its goal? As I demonstrated in Chapter 5, a flow field receives a position and returns a vector, so the neural network can mirror this functionality and use the rocket’s current x- and y-position as input. I just have to normalize the values according to the canvas dimensions:

        let inputs = [this.position.x / width, this.position.y / height];

        That’s it! Virtually everything else from the original example can remain unchanged: the population, the fitness function, and the selection process.


        Responding to Change

    circle(this.position.x, this.position.y, this.r * 2);
  }
}

As the glow moves, the creature should take the glow’s position into account in its decision-making process, as an input to its brain. However, it isn’t sufficient to know only the light’s position; it’s the position relative to the creature’s own that’s key. A nice way to synthesize this information as an input feature is to calculate a vector that points from the creature to the glow. Essentially, I’m reinventing the seek() method from Chapter 5, using a neural network to estimate the steering force:

          seek(target) {
             //{!1} Calculate a vector from the position to the target.

  }

Example 11.4: Dynamic Ne

        It’s hard to believe, but this book has been a journey well over 10 years in the making. Thank you, dear reader, for sticking with it. I promise it’s not an infinite loop. However meandering it might have seemed, like a random walk, I’m finally using an arrival steering behavior to reach the final piece of the puzzle, an attempt to bring together all my past explorations in my own version of the Ecosystem Project.

        A Neuroevolutionary Ecosystem

A few elements in this chapter’s examples don’t quite fit with my dream of simulating a natural ecosystem. The first goes back to an issue I raised in Chapter 9 with the introduction of the bloops. A system of creatures that all live and die together, starting completely over with each subsequent generation—that isn’t how the biological world works! I’d like to revisit this dilemma in this chapter’s context of neuroevolution.

Second, and perhaps more important, a major flaw exists in the way I’m extracting features from a scene to train a model. The creatures in Example 11.4 are all-knowing. Sure, it’s reasonable to assume that a creature is aware of its own current velocity, but I’ve also allowed each creature to know the glow’s exact location, regardless of how far away it is or what might be blocking the creature’s vision or senses. This is a bridge too far. It flies in the face of one of the main tenets of autonomous agents I introduced in Chapter 5: an agent should have a limited ability to perceive its environment.

        Sensing the Environment

        A common approach to simulating how a real-world creature (or robot) would have a limited awareness of its surroundings is to attach sensors to an agent. Think back to that mouse in the maze from the beginning of the chapter (hopefully it’s been thriving on the cheese it’s been getting as a reward), and now imagine it has to navigate the maze in the dark. Its whiskers might act as proximity sensors to detect walls and turns. The mouse whiskers can’t see the entire maze, but only sense the immediate surroundings. Another example of sensors is a bat using echolocation to navigate, or a car on a winding road where the driver can see only what’s projected in front of the car’s headlights.

        I’d like to build on this idea of the whiskers (or more formally the vibrissae) found in mice, cats, and other mammals. In the real world, animals use their vibrissae to navigate and detect nearby objects, especially in dark or obscured environments (see Figure 11.5). How can I attach whisker-like sensors to my neuroevolutionary-seeking creatures?
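One way to arrange such sensors is to space them evenly around the creature’s center. Here’s a sketch of my own that generates unit direction vectors, with a sensor count of 8 chosen arbitrarily for illustration:

```javascript
// Create evenly spaced unit vectors, one per whisker-like sensor.
const totalSensors = 8;
let sensors = [];
for (let i = 0; i < totalSensors; i++) {
  // Divide the full circle into equal slices.
  let angle = (i * Math.PI * 2) / totalSensors;
  sensors.push({ x: Math.cos(angle), y: Math.sin(angle) });
}
```

Scaling each unit vector by a sensor length and adding it to the creature’s position gives the tip of each whisker.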


        Figure 11.5: Clawdius the cat sensing his environment with his vibrissae
        -

        I’ll keep the generic class name Creature but think of them now as the amoeba-like bloops from Chapter 9, enhanced with whisker-like sensors that emanate from their center in all directions:


        class Creature {
          constructor(x, y) {
            // The creature has a position and radius.
            this.position = createVector(x, y);
            this.r = 8;
          }
        }
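Here's a framework-free sketch of how the sensors themselves might be attached (names assumed; the book's version would use p5.Vector): evenly spaced offsets from the creature's center, each extending a bit beyond its radius.

```javascript
// Build totalSensors whisker offsets, evenly spaced around a circle,
// each reaching 1.5 times the creature's radius r from its center.
function makeSensors(totalSensors, r) {
  const sensors = [];
  for (let i = 0; i < totalSensors; i++) {
    const angle = (i / totalSensors) * 2 * Math.PI;
    sensors.push({
      x: 1.5 * r * Math.cos(angle),
      y: 1.5 * r * Math.sin(angle),
    });
  }
  return sensors;
}
```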

        Learning from the Sensors

          force.setMag(magnitude);
          this.applyForce(force);
        }
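The data flow above can be sketched without any machine learning library (this single-layer stand-in is purely illustrative; it is not the model the book trains): sensor readings go in, two outputs come out, and those outputs are interpreted as the angle and magnitude of a steering force.

```javascript
// Each row of weights produces one output as a plain weighted sum of
// the sensor values; the two outputs become an angle and a magnitude.
function steeringFrom(sensorValues, weights) {
  const outputs = weights.map((row) =>
    row.reduce((sum, w, i) => sum + w * sensorValues[i], 0)
  );
  const [angle, magnitude] = outputs;
  // Convert (angle, magnitude) into x and y force components.
  return { x: magnitude * Math.cos(angle), y: magnitude * Math.sin(angle) };
}
```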

        The logical next step might be to incorporate all the usual parts of the GA, writing a fitness function (how much food did each creature eat?) and performing selection after a fixed generational time period. But this is a great opportunity to revisit the principles of a continuous ecosystem and aim for a more sophisticated environment and set of potential behaviors for the creatures themselves. Instead of a fixed life span cycle for each generation, I’ll bring back Chapter 9’s health score for each creature. For every cycle through draw() that a creature lives, its health deteriorates a little bit:


        class Creature {
          constructor() {
            // Each creature begins with full health.
            this.health = 100;
          }
        }
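With no generational reset, dead creatures simply need to be removed as they expire. A minimal sketch of that cleanup step (assuming each creature exposes a health property), looping backward so that splicing one creature out doesn't skip the next:

```javascript
// Remove every creature whose health has run out, iterating from the
// end of the array so splice() doesn't shift unvisited elements.
function removeDead(creatures) {
  for (let i = creatures.length - 1; i >= 0; i--) {
    if (creatures[i].health <= 0) {
      creatures.splice(i, 1);
    }
  }
  return creatures;
}
```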

        Appendix: Creature Design

        This guide is by Zannah Marsh, who created all the illustrations you see in this book.

        If you aren’t sure how to start the creature design task for your Ecosystem Project, or if the thought of populating a multi-creature ecosystem feels daunting, don’t worry! You can start developing creatures by using a few visual building blocks, like basic shapes and lines, and reuse them for various results. This design task is similar to programming by reusing and repurposing code.


        Image Credits

        All emojis in the book are from OpenMoji, the open source emoji and icon project, and licensed under CC BY-SA 4.0.

        Chapter 0: Pages 314–315 from A Million Random Digits with 100,000 Normal Deviates, RAND Corporation, MR-1418-RC, 2001. As of October 17, 2023: https://www.rand.org/pubs/monograph_reports/MR1418.html.
