diff --git a/content/00_randomness.html b/content/00_randomness.html
index b4403dda..0c0b792a 100644
--- a/content/00_randomness.html
+++ b/content/00_randomness.html
@@ -9,8 +9,8 @@

Chapter 0. Randomness

- Image Credit TBD
- Image Credit TBD
+ Photo from A Million Random Digits with 100,000 Normal Deviates, RAND Corporation, MR-1418-RC, 2001. As of October 17, 2023: https://www.rand.org/pubs/monograph_reports/MR1418.html
+ Photo from A Million Random Digits with 100,000 Normal Deviates, RAND Corporation, MR-1418-RC, 2001. As of October 17, 2023: https://www.rand.org/pubs/monograph_reports/MR1418.html

A Million Random Digits with 100,000 Normal Deviates, RAND Corporation

In 1947, the RAND Corporation produced a peculiar book titled A Million Random Digits with 100,000 Normal Deviates. The book wasn’t a work of literature or a philosophical treatise on randomness. Rather, it was a table of random numbers generated using an electronic simulation of a roulette wheel. This book was one of the last in a series of random number tables produced from the mid-1920s to the 1950s. With the development of high-speed computers, generating pseudorandom numbers became faster than reading them from tables, and so this era of printed random number tables ultimately came to an end.

@@ -670,13 +670,15 @@

Exercise 0.10

Just as you can overuse randomness, however, it’s easy to fall into the trap of overusing Perlin noise. How should an object move? Perlin noise! What color should it be? Perlin noise! How fast should it grow? Perlin noise! If that becomes your answer to every question, then keep reading! My goal here is to introduce you to a universe of new possibilities for defining the rules of your systems. After all, those rules are yours to define, and the more possibilities at your disposal, the more you’ll be able to make thoughtful, informed choices. Randomness and Perlin noise are just the first stars in a vast creative cosmos that we’ll explore in this book.

The Ecosystem Project

-

As mentioned in the Introduction, one way to use this book is to build a single project over the course of reading it, incorporating elements from each chapter as you go. One idea for this is a simulation of an ecosystem. Imagine a population of computational creatures swimming around a digital pond, interacting with each other according to various rules.

+

As mentioned in the Introduction, one way to use this book is to build a single project over the course of reading it, incorporating elements from each chapter as you go. One idea for this is a simulation of an ecosystem. Imagine a population of computational creatures swimming around a digital pond, interacting with each other according to various rules.

Step 0 Exercise:

Develop a set of rules for simulating the real-world behavior of a creature, building on top of principles from the “random walk” or other noise-driven motions. Can you simulate a jittery bug that flies in unpredictable ways, or perhaps a floating leaf carried by an inconsistent breeze? Start by exploring the boundaries of how much you can express a creature’s personality purely through its behavior. Then you can think about its visual characteristics.

Here’s an illustration to help you generate ideas for building an ecosystem based on the topics covered in this book. Watch how the illustration evolves as new concepts and techniques are introduced with each subsequent chapter. The goal of this book is to demonstrate algorithms and behaviors, so my examples will almost always only include a single primitive shape, such as a circle. However, I fully expect that there are creative sparks within you, and encourage you to challenge yourself with the designs of the elements you draw on the canvas. If drawing with code is new to you, the book’s illustrator, Zannah Marsh, has written a helpful guide that you can find in the book’s Appendix.

+

+

\ No newline at end of file
diff --git a/content/08_fractals.html b/content/08_fractals.html
index cd145368..7a42b8ac 100644
--- a/content/08_fractals.html
+++ b/content/08_fractals.html
@@ -373,8 +373,8 @@

The “Monster” Curve

Figure 8.15: The original line expressed as a vector \vec{v} can be divided by 3 to find the positions of the points for the next generation.

Here’s how that looks in code:

-
    ...
-    // Create a vector from start to end
+
+
    // Create a vector from start to end
     let v = p5.Vector.sub(this.end, this.start);    
     // One-third the length
     v.div(3);
@@ -383,8 +383,8 @@ 

The “Monster” Curve

 let b = p5.Vector.add(a, v);
 // d is just another 1/3 of the way past b!
-    let d = p5.Vector.add(b, v);
-    ...
+ let d = p5.Vector.add(b, v);
+
Figure 8.16: The vector \vec{v} is rotated by 60° to find the third point.

@@ -392,17 +392,18 @@

The “Monster” Curve

The last point, c, is the most difficult one to compute. However, if you consider that the angles of an equilateral triangle are all 60 degrees, the task suddenly becomes much easier. If you know how to find the new b with a vector one-third the length of the line, what if you rotate that same vector 60 degrees (or \pi/3 radians) and add it to b, as in Figure 8.16? You’d arrive at c!

-
    ...
-    //{!1} Rotate by -PI/3 radians (negative angle so it rotates “up”).
+
+
    //{!1} Rotate by -PI/3 radians (negative angle so it rotates “up”).
     v.rotate(-PI / 3);    
     //{!1} Move along from b by v to get to point c.
-    let c = p5.Vector.add(b, v);
-    ...
+ let c = p5.Vector.add(b, v);
+

Finally, after calculating the five points, I can return them all together in an array. This will match the code for destructuring the array into five separate variables, as previously outlined.

-
    ...
-    // Return all five points in an array
+
+
    // Return all five points in an array
     return [a, b, c, d, e];
   }
+

Now all that remains is to call generate() a certain number of times (say, five) in setup() to calculate the Koch line segments up to that generation.
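For instance, a minimal sketch of that setup() call, assuming the KochLine class and generate() function from this section (the canvas size and starting positions here are placeholder choices of mine):

let segments = [];

function setup() {
  createCanvas(640, 240);
  // Begin with a single horizontal segment spanning the canvas.
  segments.push(new KochLine(createVector(0, 160), createVector(width, 160)));
  // Apply the Koch generation rules five times.
  for (let i = 0; i < 5; i++) {
    generate();
  }
}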

Example 8.5: The Koch Curve

@@ -868,7 +869,8 @@

Example 8.8: Simple L-sy

Assuming I’ve generated a sentence from the L-system, I can iterate through the sentence character by character and execute the appropriate code for that character.

-
for (let i = 0; i < sentence.length; i++) {
+
+
for (let i = 0; i < sentence.length; i++) {
 
   //{!1} Looking at each character one at a time.
   let c = sentence.charAt(i);
@@ -892,6 +894,7 @@ 

Example 8.8: Simple L-sy

    pop();
  }
}

+

With this code and the right L-system conditions, I can draw incredibly elaborate, plantlike structures. For the next example, here’s the L-system I’ll use:

diff --git a/content/09_ga.html b/content/09_ga.html
index 7e6cee17..caa98621 100644
--- a/content/09_ga.html
+++ b/content/09_ga.html
@@ -361,11 +361,13 @@

Step 1: Initialization

  }
}

To generate a random character for each individual gene, I’ll write a helper function called randomCharacter().

-
// Return a random character (letter, number, symbol, space, etc).
+
+
// Return a random character (letter, number, symbol, space, etc).
 function randomCharacter() {
   let c = floor(random(32, 127));
   return String.fromCharCode(c);
 }
+

The random numbers picked correspond to a specific character according to a standard known as ASCII (American Standard Code for Information Interchange), and String.fromCharCode() is a native JavaScript method that converts a number into its corresponding character based on that standard. The range I’ve specified encompasses upper- and lowercase letters, numbers, punctuation marks, and special characters. An alternative approach could involve using the Unicode standard, which includes emojis and characters from various world languages, providing a more extensive range of characters for a different target string.
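As a quick illustration of that alternative, here’s a hedged sketch of a Unicode-based helper. The randomEmoji() name and the code-point range (a small slice of the emoji block) are my own choices for demonstration:

// Return a random emoji by picking a code point from the emoji block.
function randomEmoji() {
  let c = floor(random(0x1F600, 0x1F650));
  // String.fromCodePoint() handles characters beyond the basic ASCII range.
  return String.fromCodePoint(c);
}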

Now that I have the constructor, I can return to setup() and initialize each DNA object in the population array.

let population = [];
@@ -421,7 +423,8 @@ 

Step 2: Selection

One solution that could work here is to pick from the five options depicted in Figure 9.2 (ABCDE) according to their probabilities by filling an array with multiple instances of each parent. In other words, imagine you have a bucket of wooden letters, as in Figure 9.7. Based on the earlier probabilities, it should contain 30 As, 40 Bs, 5 Cs, 10 Ds, and 15 Es. If you were to pick a random letter out of that bucket, there’s a 30 percent chance you’ll get an A, a 5 percent chance you’ll get a C, and so on.

For the genetic algorithm code, that bucket could be an array, and each wooden letter a potential parent DNA object. The mating pool is therefore created by adding each parent to the array a certain number of times, scaled according to that parent’s fitness score.

-
  //{!1} Start with an empty mating pool.
+
+
  //{!1} Start with an empty mating pool.
   let matingPool = [];
 
   for (let phrase of population) {
@@ -433,26 +436,36 @@ 

Step 2: Selection

      matingPool.push(phrase);
    }
  }
+

With the mating pool ready to go, it’s time to select two parents! It’s somewhat of an arbitrary decision to pick two parents for each child. It certainly mirrors human reproduction and is the standard means in the textbook genetic algorithm, but in terms of creative applications, there really aren’t restrictions here. You could choose only one parent for “cloning,” or devise a reproduction methodology for picking three or four parents from which to generate child DNA. For the demonstration here, I’ll stick to two parents and call them parentA and parentB.

I can select two random instances of DNA from the mating pool using the p5.js random() function. When an array is passed as an argument to random(), the function returns a single random element from the array.

-
  let parentA = random(matingPool);
+
+
  let parentA = random(matingPool);
   let parentB = random(matingPool);
+

This method of building a mating pool and choosing parents from it works, but it isn’t the only way to perform selection. There are other, more memory-efficient techniques that don’t require an additional array full of multiple references to each element. For example, think back to the discussion of non-uniform distributions of random numbers in Chapter 0. There, I implemented the “accept-reject” method. If applied here, the approach would be to randomly pick an element from the original population array, and then pick a second, “qualifying” random number to check against the element’s fitness value. If the fitness is less than the qualifying number, start again and pick a new element. Keep going until two parents are deemed fit enough.
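Here’s a minimal sketch of what that accept-reject selection might look like, assuming fitness scores normalized to the range 0 to 1 (the function name is mine, not part of the example code). Calling it twice would yield the two parents.

function acceptRejectSelection() {
  while (true) {
    // Pick a random candidate from the population.
    let candidate = random(population);
    // Pick a second, “qualifying” random number.
    let qualifier = random(1);
    // If the fitness is less than the qualifying number, try again.
    if (candidate.fitness >= qualifier) {
      return candidate;
    }
  }
}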

There’s also yet another excellent alternative worth exploring that similarly capitalizes on the principle of fitness-proportionate selection. To understand how it works, imagine a relay race where each member of the population runs a given distance tied to its fitness. The higher the fitness, the farther they run. Let’s also assume that the fitness values have been normalized to all add up to 1 (just like with the “wheel of fortune”). The first step is to pick a “starting line”: a random distance from the finish. This distance is a random number between 0 and 1. (You’ll see in a moment that the “finish line” is assumed to be at 0.)

-
let start = random(1);
+
+
let start = random(1);
+

Then the relay race begins at the starting line with the first member of the population.

-
let index = 0;
+
+
let index = 0;
+

The runner travels a distance defined by its normalized fitness score, then hands the baton to the next runner.

-
+
+
 while (start > 0) {
   // Move a distance according to fitness.
   start = start - population[index].fitness;
   // Pass the baton to the next element.
   index++;
 }
+

The steps are repeated over and over again in a while loop until the race ends (start is less than or equal to 0, the “finish line”). The runner that crosses the finish threshold is selected as a parent.

Here’s the algorithm all together in a function that returns the selected element.

-
function weightedSelection() {
+
+
function weightedSelection() {
   // Start with the first element.
   let index = 0;
   // Pick a starting point.
@@ -468,6 +481,7 @@ 

Step 2: Selection

  index--;
  return population[index];
}
+

This works well for selection because each and every member has a shot at crossing the finish line (the elements’ fitness scores all add up to 1), but those who run longer distances (that is, those with higher fitness scores) have a better chance of making it there. However, while this method is more memory efficient, it can be more computationally demanding, especially for large populations, as it requires iterating through the population for each selection. By contrast, the original matingPool array method only needs a single random lookup into the array per parent.

Depending on the specific requirements and constraints of your application of genetic algorithms, one approach might prove more suitable than the other. I’ll alternate between them in the examples outlined in this chapter.

@@ -535,12 +549,15 @@

Exercise 9.4

Step 3: Reproduction (Crossover and Mutation)

Once I have the two parents, the next step is to perform a crossover operation to generate child DNA, followed by mutation.

-
// A function for crossover
+
+
// A function for crossover
 let child = parentA.crossover(parentB);
 // A function for mutation
 child.mutate();
+

Of course, the crossover() and mutate() methods don’t magically exist in the DNA class; I have to write them. The way I’ve called crossover() indicates that it should receive an instance of DNA as an argument (parentB) and return a new instance of DNA, the child.

-
crossover(partner) {
+
+
crossover(partner) {
   // The child is a new instance of DNA.
   // (Note that the genes are generated randomly in the DNA constructor,
   // but the crossover method will override the array.)
@@ -560,19 +577,23 @@ 

Step 3: Reproduction (Crosso

  }
  return child;
}

+

This implementation uses the “random midpoint” method of crossover, in which the first section of genes is taken from parent A and the second from parent B.

Exercise 9.5

Rewrite the crossover function to use the “coin flipping” method instead, in which each gene has a 50 percent chance of coming from parent A and a 50 percent chance of coming from parent B.

The mutate() method is even simpler to write than crossover(). All I need to do is loop through the array of genes and randomly pick a new character according to the defined mutation rate. With a mutation rate of 1 percent, for example, a new character would only be generated 1 out of 100 times.

-
let mutationRate = 0.01;
+
+
let mutationRate = 0.01;
 
 if (random(1) < mutationRate) {
   /* Any code here would be executed 1% of the time. */
 }
+

The entire method therefore reads:

-
mutate(mutationRate) {
+
+
mutate(mutationRate) {
   //{!1} Look at each gene in the array.
   for (let i = 0; i < this.genes.length; i++) {
     //{!1} Check a random number against the mutation rate.
@@ -582,6 +603,7 @@ 

Exercise 9.5

      }
    }
  }
+

Once again, I’m able to use the randomCharacter() helper function to simplify the mutation process.

Putting It All Together

I’ve now walked through the steps of the genetic algorithm twice—once describing the algorithm in narrative form, and another time with code snippets implementing each of the steps. Now I’m ready to put it all together and show you the complete code alongside the basic steps of the algorithm.

@@ -1007,7 +1029,8 @@

Developing the Rockets

Key #1 was to define the right global variables for the population size and mutation rate. I’m going to hold off on worrying too much about these variables for now and arbitrarily choose some reasonable-sounding numbers—perhaps a population of 50 rockets and a mutation rate of 1 percent. Once I’ve built the system out and have my sketch up and running, I can experiment with these numbers.

Key #2 was to develop an appropriate fitness function. In this case, the goal of a rocket is to reach its target. In other words, the closer a rocket gets to the target, the higher its fitness. Fitness is therefore inversely proportional to distance: the smaller the distance, the greater the fitness, and the greater the distance, the smaller the fitness.

To put this into practice, I first need to add a property to the Rocket class to store its fitness.

-
class Rocket {
+
+
class Rocket {
   constructor(x, y) {
     //{!1} A Rocket has fitness.
     this.fitness = 0;
@@ -1017,19 +1040,24 @@ 

Developing the Rockets

    this.acceleration = createVector();
  }
+

Next, I need to add a method to the Rocket class to calculate the fitness. After all, only a Rocket object knows how to compute its distance to the target, so the fitness function should live in this class. Assuming I have a target vector, I can write the following:

-
  calculateFitness() {
+
+
  calculateFitness() {
     // How close did the rocket get?
     let distance = p5.Vector.dist(this.position, target);
     //{!1} Fitness is inversely proportional to distance.
     this.fitness = 1 / distance;
   }
+

This is perhaps the simplest fitness function I could write. By dividing 1 by the distance, large distances become small numbers and small distances become large. If I wanted to use my quadratic trick from the previous section, I could divide 1 by the distance squared instead.

-
calculateFitness() {
+
+
calculateFitness() {
  let distance = p5.Vector.dist(this.position, target);
   //{!1} 1 divided by distance squared
   this.fitness = 1 / (distance * distance);
 }
+

There are several additional improvements I’ll want to make to the fitness function, but this is a good start.

Finally, Key #3 was to think about the relationship between the genotype and the phenotype. I’ve stated that each rocket has a thruster that fires in a variable direction with a variable magnitude—in other words, a vector! The genotype, the data required to encode the rocket’s behavior, is therefore an array of vectors, one for each frame of the animation.

class DNA {
@@ -1042,7 +1070,9 @@ 

Developing the Rockets

}

The happy news here is that I don’t really have to do anything else to the DNA class. All of the functionality for the typing cat (crossover and mutation) still applies. The one difference I do have to consider is how to initialize the array of genes. With the typing cat, I had an array of characters and picked a random character for each element of the array. Now I’ll do exactly the same thing and initialize a DNA sequence as an array of random vectors.

Your instinct in creating a random vector might be as follows:

-
let v = createVector(random(-1, 1), random(-1, 1));
+
+
let v = createVector(random(-1, 1), random(-1, 1));
+

This is perfectly fine and will likely do the trick. However, if I were to draw every single possible vector that could be picked, the result would fill a square (see Figure 9.11, left). In this case, it probably doesn’t matter, but there’s a slight bias to the diagonals given that a vector from the center of a square to a corner is longer than a purely vertical or horizontal one.

@@ -1056,10 +1086,12 @@

Developing the Rockets

Figure 9.11: On the left, vectors created with random x and y values. On the right, using p5.Vector.random2D().

As you may recall from Chapter 3, a better choice is to pick a random angle and create a vector of length 1 from that angle. This produces results that form a circle (see the right of Figure 9.11) and can be achieved with polar-to-Cartesian conversion or the trusty p5.Vector.random2D() method.

-
for (let i = 0; i < length; i++) {
+
+
for (let i = 0; i < length; i++) {
   //{!1} A random unit vector
   this.genes[i] = p5.Vector.random2D();
 }
+
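For comparison, here’s a sketch of the loop body using the manual polar-to-Cartesian route mentioned above; it yields the same circular distribution as p5.Vector.random2D():

  // Pick a random angle and build a unit vector from it by hand.
  let angle = random(TWO_PI);
  this.genes[i] = createVector(cos(angle), sin(angle));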

A vector of length 1 would actually create quite a large force. Remember, forces are applied to acceleration, which accumulates into velocity 30 times per second (or whatever the frame rate is). Therefore, for this example, I’ll add another variable to the DNA class, a maximum force, and randomly scale all the vectors to be somewhere between 0 and the maximum. This will control the thruster power.

class DNA {
   constructor() {
@@ -1076,7 +1108,8 @@ 

Developing the Rockets

}

Notice that I’m using lifeSpan to set the length of genes, the array of vectors. This global variable stores the total number of frames in each generation’s life cycle, allowing me to create a vector for each frame of the rocket’s life.
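Putting those pieces together, the constructor described might read as in the following sketch (the specific maximum force value is arbitrary, chosen only for illustration):

class DNA {
  constructor() {
    this.genes = [];
    // The maximum thruster force (the value here is an assumption).
    this.maxForce = 0.1;
    for (let i = 0; i < lifeSpan; i++) {
      // A random unit vector...
      this.genes[i] = p5.Vector.random2D();
      // ...scaled to a random magnitude between 0 and the maximum force.
      this.genes[i].mult(random(0, this.maxForce));
    }
  }
}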

The expression of this array of vectors, the phenotype, is my Rocket class. To cement the connection, I need to add an instance of a DNA object to the class.

-
class Rocket {
+
+
class Rocket {
   constructor(x, y, dna) {
     // A Rocket has DNA.
     this.dna = dna;
@@ -1088,6 +1121,7 @@ 

Developing the Rockets

    this.acceleration = createVector();
  }
+

What am I using this.dna for? As the rocket launches, it marches through the array of vectors and applies them one at a time as a force. To achieve this, I’ll need to include a variable this.geneCounter to help step through the array.

class Rocket {
   constructor(x, y, dna) {
@@ -1165,17 +1199,21 @@ 

Managing the Population

    this.population = newPopulation;
  }

There’s one more fairly significant change, however. With typing cats, a random phrase was evaluated as soon as it was created. The string of characters had no lifespan; it existed purely for the purpose of calculating its fitness. The rockets, however, need to live for a period of time before they can be evaluated—that is, they need to be given a chance to make their attempt at reaching the target. Therefore, I need to add one more method to the Population class that runs the physics simulation itself. This is identical to what I did in the run() method of a particle system: update all the particle positions and draw them.

-
  live() {
+
+
  live() {
     for (let rocket of this.population) {
       //{!1} The run method takes care of the simulation, updates the rocket’s
       // position, and draws it to the canvas.
       rocket.run();
     }
   }
+

Finally, I’m ready for setup() and draw(). Here, my primary responsibility is to implement the steps of the genetic algorithm in the appropriate order by calling the methods from the Population class.

-
    population.fitness();
+
+
    population.fitness();
     population.selection();
     population.reproduction();
+

However, unlike the Shakespeare example, I don’t want to do this every frame. Rather, my steps work as follows:

  1. Create a population of rockets
  2.

@@ -1239,14 +1277,17 @@

    Example 9.2: Smart Rockets

    Making Improvements

    My smart rockets work, but they aren’t particularly exciting yet. After all, the rockets simply evolve toward having DNA with a bunch of vectors that point straight at the target. To make things more interesting, I’m going to suggest two improvements for the example. For starters, when I first introduced the smart rocket scenario, I said the rockets should evolve the ability to avoid obstacles. Adding this feature will make the system more complex and demonstrate the power of the evolutionary algorithm more effectively.

    To evolve obstacle avoidance, I need some obstacles to avoid. I can easily create rectangular, stationary obstacles by implementing a class of Obstacle objects that store their own position and dimensions.

    -
    class Obstacle {
    +
    +
    class Obstacle {
       constructor(x, y, w, h) {
         this.position = createVector(x, y);
         this.w = w;
         this.h = h;
       }
    +

    I’ll add a contains() method to the Obstacle class that returns true if a rocket has hit the obstacle, or false otherwise.

    -
      contains(spot) {
    +
    +
      contains(spot) {
         return (
           spot.x > this.position.x &&
           spot.x < this.position.x + this.w &&
    @@ -1254,8 +1295,10 @@ 

    Making Improvements

      spot.y < this.position.y + this.h
    );
  }
    +

    If I create an array of Obstacle objects, I can then have each rocket check to see if it’s collided with each obstacle. If a collision occurs, the rocket can set a boolean flag hitObstacle to true. To achieve this, I need to add a method to the Rocket class.

    -
      // This new method lives in the Rocket class and checks if a rocket has
    +
    +
      // This new method lives in the Rocket class and checks if a rocket has
       // hit an obstacle.
       checkObstacles(obstacles) {
         for (let obstacle of obstacles) {
    @@ -1264,8 +1307,10 @@ 

    Making Improvements

      }
    }
  }
    +

    If the rocket hits an obstacle, I’ll stop it from updating its position. The revised run() method now receives an obstacles array as an argument.

    -
      run(obstacles) {
    +
    +
      run(obstacles) {
         // Stop the rocket if it’s hit an obstacle.
         if (!this.hitObstacle) {
           this.applyForce(this.dna.genes[this.geneCounter]);
    @@ -1276,8 +1321,10 @@ 

    Making Improvements

    }
    this.show();
  }
    +

    I also have an opportunity to adjust the fitness of the rocket. If the rocket hits an obstacle, the fitness should be penalized and greatly reduced.

    -
      calculateFitness() {
    +
    +
      calculateFitness() {
         let distance = p5.Vector.dist(this.position, target);
         this.fitness = 1 / (distance * distance);
         // {.bold !3}
    @@ -1285,28 +1332,36 @@ 

    Making Improvements

      this.fitness *= 0.1;
    }
  }
    +

    With that, the rockets should be able to evolve to avoid obstacles. But I won’t stop now. There’s another improvement I’d like to make.

    If you look closely at Example 9.2, you’ll notice that the rockets aren’t rewarded for getting to the target faster. The only variable in the fitness calculation is the distance to the target at the end of the generation’s life. In fact, in the event that a rocket gets very close to the target but overshoots it and flies past, it may actually be penalized for getting to the target faster. Slow and steady wins the race in this case.

    There are several ways in which I could improve the algorithm to optimize for speed to reach the target. First, I could calculate a rocket’s fitness based on the closest it comes to the target at any point during its life, instead of using its distance to the target at the end of the generation. I’ll call this variable the rocket’s recordDistance and update it as part of a checkTarget() method on the Rocket class.

    -
      checkTarget() {
    +
    +
      checkTarget() {
         let distance = p5.Vector.dist(this.position, target);
         // Check if the distance is closer than the “record” distance. If it is, set a new record.
         if (distance < this.recordDistance) {
           this.recordDistance = distance;
         }
    +

Additionally, a rocket deserves a reward based on the speed with which it reaches its target. For that, I need a way of knowing when a rocket has hit the target. Actually, I already have one: the Obstacle class has a contains() method, and there’s no reason why the target can’t also be implemented as an obstacle. It’s just an obstacle that the rocket wants to hit! I can use the contains() method to set a new hitTarget flag on each Rocket object. A rocket will stop if it hits the target, just like it stops if it hits an obstacle.

    -
        // If the object reaches the target, set a boolean flag to true.
    +
    +
        // If the object reaches the target, set a boolean flag to true.
         if (target.contains(this.position)) {
           this.hitTarget = true;
         }
    +

    Remember, I also want the rocket to have a higher fitness the faster it reaches the target. Conversely, the slower it reaches the target, the lower its fitness score. To implement this, a finishCounter can be incremented every cycle of the rocket’s life until it reaches the target. At the end of its life, the counter will equal the amount of time the rocket took to reach the target.

    -
        // Increase the finish counter if it hasn’t hit the target
    +
    +
        // Increase the finish counter if it hasn’t hit the target
         if (!this.hitTarget) {      
           this.finishCounter++;
         }
       }
    +

    I want the fitness to be inversely proportional to finishCounter as well. To achieve this, I can improve the fitness function with the following changes:

    -
      calculateFitness() {
    +
    +
      calculateFitness() {
         // Reward finishing faster and getting close.
     this.fitness = 1 / (this.finishCounter * this.recordDistance);
     
    @@ -1322,6 +1377,7 @@ 

    Making Improvements

      this.fitness *= 2;
    }
  }
    +

    These improvements are both incorporated into the code for Example 9.3.

    Example 9.3: Smarter Rockets

    @@ -1366,15 +1422,18 @@

    Interactive Selection

  }
}

    The phenotype is a Flower class that includes an instance of a DNA object.

    -
    class Flower {
    +
    +
    class Flower {
       constructor(dna) {
         // Flower DNA
         this.dna = dna;
         // How “fit” is this flower?
         this.fitness = 1; 
       }
    +

    When it comes time to draw the flower, I’ll use p5.js’s map() function to convert any gene value to the appropriate range for pixel dimensions or color values. (I’ll also use colorMode() to set the RGB ranges between 0 and 1.)

    -
      show() {
    +
    +
      show() {
         //{.offset-top}
         // The DNA values are assigned to flower properties
         // such as petal color, petal size, number of petals, etc.
    @@ -1387,6 +1446,7 @@ 

    Interactive Selection

    let centerSize = map(genes[9], 0, 1, 24, 48);
    let stemColor = color(genes[10], genes[11], genes[12]);
    let stemLength = map(genes[13], 0, 1, 50, 100);
    +

    Up to this point, I haven’t done anything new. This is the same process I’ve followed in every GA example so far. What’s different is that I won’t be writing a fitness() function that computes the score based on a mathematical formula. Instead, I’ll ask the user to assign the fitness.

    How exactly to ask a user to assign fitness is best approached as an interaction design problem and isn’t really within the scope of this book. I’m not going to launch into an elaborate discussion of how to program sliders or build your own hardware dials or create a web app where people can submit online scores. How you choose to acquire fitness scores is up to you and the particular application you’re developing. For this demonstration, I'll take inspiration from Sims’s Galapagos installation and simply increase a flower’s fitness whenever the mouse is over it. Then the next generation of flowers is created when an “evolve next generation” button is pressed.

    Look at how the steps of the genetic algorithm—selection and reproduction—are applied in the nextGeneration() function, which is triggered by the mousePressed() event attached to the p5.js button element. Fitness is increased as part of the Population class’s rollover() method, which detects the presence of the mouse over any given flower design. More details about the sketch can be found in the accompanying example code on the book’s website.

    @@ -1482,7 +1542,8 @@

    Ecosystem Simulation

  }
}

    As usual, the population of bloops can be stored in an array, which in turn can be managed by a class called World.

    -
    class World {
    +
    +
    class World {
       //{!1} A list of bloops
       constructor(populationSize) {
         // An array of bloops
    @@ -1492,6 +1553,7 @@ 

    Ecosystem Simulation

      this.bloops.push(new Bloop(random(width), random(height)));
    }
  }
    +

    So far, I’m just rehashing the particle systems from Chapter 4. I have an entity called Bloop that moves around the canvas, and a class called World that manages a variable quantity of these entities. To turn this into a system that evolves, I need to add two additional features to my world:

    • Bloops die.
  •

@@ -1506,20 +1568,25 @@

      Ecosystem Simulation

  }
}

    Each time through update(), a bloop loses some health.

    -
      update() {
    +
    +
      update() {
         // Death is always looming.
         this.health -= 0.2;
         // All the rest of update()
       }
    +

    If health drops below 0, the bloop dies.

    -
      // A method to test if the bloop is alive or dead.
    +
    +
      // A method to test if the bloop is alive or dead.
       dead() {
         return (this.health < 0.0);
       }
    +

    This is a good first step, but I haven’t really achieved anything. After all, if all bloops start with 100 health points and lose health at the same rate, then all bloops will live for the exact same amount of time and die together. If every single bloop lives the same amount of time, each one has an equal chance of reproducing, and therefore no evolutionary change will occur.

    There are several ways to achieve variable lifespans with a more sophisticated world. One approach is to introduce predators that eat bloops. Faster bloops would be more likely to escape being eaten, leading to the evolution of increasingly faster bloops. Another option is to introduce food. When a bloop eats food, its health points increase, extending its life.

    Let’s assume there’s an array of vector positions called food. I could test each bloop’s proximity to each food position. If the bloop is close enough, it eats the food (which is then removed from the world) and increases its health.

    -
      eat(food) {
    +
    +
      eat(food) {
         // Check all the food vectors.
         for (let i = food.length - 1; i >= 0; i--) {
           // How far away is the bloop?
    @@ -1532,6 +1599,7 @@ 

    Ecosystem Simulation

      }
    }
  }
    +

    In this scenario, bloops that eat more food are expected to live longer and have a greater likelihood of reproducing. As a result, the system should evolve bloops with an optimal ability to find and consume food.

    Now that the world has been built, it’s time to add the components necessary for evolution. The first step is to establish the genotype and phenotype.

    Genotype and Phenotype

    @@ -1543,7 +1611,8 @@

    Genotype and Phenotype

    The ability for a bloop to find food is tied to two variables—size and speed (see Figure 9.13). Bigger bloops will find food more easily simply because their size will allow them to intersect with food positions more often. And faster bloops will find more food because they can cover more ground in a shorter period of time.

    Since size and speed are inversely related (large bloops are slow, small bloops are fast), I only need a genotype with a single number.

    -
    class DNA {
    +
    +
    class DNA {
       constructor() {
         // The genetic sequence is a single value!
         // It may seem absurd to use an array for just one number, but this will
    @@ -1553,8 +1622,10 @@ 

    Genotype and Phenotype

      this.genes[i] = random(0, 1);
    }
  }
    +

    The phenotype is the bloop itself, whose size and speed are assigned by adding an instance of a DNA object to the Bloop class.

    -
    class Bloop {
    +
    +
    class Bloop {
       constructor(x, y, dna) {
         this.dna = dna;
         // DNA will determine size and maxspeed.
    @@ -1564,21 +1635,25 @@ 

    Genotype and Phenotype

    // All the rest of the bloop initialization
  }
    +

    Note that the maxSpeed property is mapped to a range between 15 and 0. This means that a bloop with a gene value of 0 will move at a speed of 15, while a bloop with a gene value of 1 won’t move at all (speed of 0).
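In code, that mapping inside the Bloop constructor might read like the following sketch (the speed range comes from the text above; the size range is an assumption of mine):

    // Gene value 0 maps to the fastest speed (15); gene value 1 maps to no movement (0).
    this.maxSpeed = map(this.dna.genes[0], 0, 1, 15, 0);
    // The same gene drives size in the opposite direction.
    this.r = map(this.dna.genes[0], 0, 1, 0, 25);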

    Selection and Reproduction

    Now that I have the genotype and phenotype, I need to move on to devising a method for selecting bloops as parents. I stated before that the longer a bloop lives, the more chances it has to reproduce. The length of a bloop’s life is its fitness.

    One option would be to say that whenever two bloops come into contact with each other, they make a new bloop. The longer a bloop lives, the more likely it is to come into contact with another bloop. This would also affect the evolutionary outcome, since the likelihood of giving birth, in addition to eating food, depends upon a bloop’s ability to locate other bloops.

    A simpler option would be for bloops to “clone” themselves without needing a partner bloop, creating another bloop with the same genetic makeup instantly. For example, what if I said that at any given moment, a bloop has a 1 percent chance of reproducing? With this selection algorithm, the longer a bloop lives, the more likely it will clone itself. This is equivalent to saying the more times you play the lottery, the greater the likelihood you’ll win (though I’m sorry to say your chances of winning the lottery are still essentially zero).

    To implement this selection algorithm, I can write a method in the Bloop class that picks a random number every frame. If the number is less than 0.01 (1 percent), a new bloop is born.

    -
      // This method will return a new “child” bloop.
    +
    +
      // This method will return a new “child” bloop.
       reproduce() {
         // A 1% chance of executing the code inside the if statement
         if (random(1) < 0.01) {
           /* A Bloop baby! */
         }
       }
    +

How does a bloop reproduce? In previous examples, the reproduction process involved calling the crossover() method in the DNA class and creating a new object from the resulting array of genes. However, in this case, since I’m making a child from a single parent, I’ll call a copy() method instead.

    -
      reproduce() {
    +
    +
      reproduce() {
         if (random(1) < 0.005) {
      // A child is an exact copy of a single parent.
           let childDNA = this.dna.copy();
    @@ -1588,6 +1663,7 @@ 

    Selection and Reproduction

      return new Bloop(this.position.copy(), childDNA);
    }
  }
    +

Note that I’ve lowered the probability of reproduction from 1 percent to 0.5 percent (the 0.005 in the code). This change makes a significant difference; with a high reproduction probability, the system will rapidly become overpopulated. Too low a probability and everything will likely die out quickly.

Writing the copy() method into the DNA class is easy with slice(), a standard JavaScript array method that makes a new array by copying elements from an existing one.

    class DNA {
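  // The diff truncates here. As a hedged sketch, the copy() method described
  // above might read as follows: slice() produces a fresh copy of the genes array.
  copy() {
    let newDNA = new DNA();
    newDNA.genes = this.genes.slice();
    return newDNA;
  }
}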
    diff --git a/content/10_nn.html b/content/10_nn.html
    index 59d25dfc..96e7700d 100644
    --- a/content/10_nn.html
    +++ b/content/10_nn.html
    @@ -215,12 +215,15 @@ 

    Simple Pattern Recognitio

    The output is then the sum of the weighted results: 0 + 0 + w_\text{bias}. Therefore, the bias by itself answers the question of where (0,0) is in relation to the line. If the bias’s weight is positive, then (0,0) is above the line; if negative, it’s below. The extra input and its weight bias the perceptron’s understanding of the line’s position relative to (0,0)!

    The Perceptron Code

    I’m now ready to assemble the code for a Perceptron class. The perceptron only needs to track the input weights, which I can store using an array.

    -
    class Perceptron {
    +
    +
    class Perceptron {
       constructor() {
         this.weights = [];
       }
    +

    The constructor can receive an argument indicating the number of inputs (in this case, three: x_0, x_1, and a bias) and size the weights array accordingly, filling it with random values to start.

    -
      // The argument n determines the number of inputs (including the bias).
    +
    +
      // The argument n determines the number of inputs (including the bias).
       constructor(n) {
         this.weights = [];
         for (let i = 0; i < n; i++) {
    @@ -228,8 +231,10 @@ 

    The Perceptron Code

      this.weights[i] = random(-1, 1);
    }
  }
    +

    A perceptron’s job is to receive inputs and produce an output. These requirements can be packaged together in a feedForward() method. In this example, the perceptron’s inputs are an array (which should be the same length as the array of weights), and the output is a number, +1 or –1, as returned by the activation function based on the sign of the sum.

    -
      feedForward(inputs) {
    +
    +
      feedForward(inputs) {
         let sum = 0;
         for (let i = 0; i < this.weights.length; i++) {
           sum += inputs[i] * this.weights[i];
    @@ -239,18 +244,21 @@ 

    The Perceptron Code

    // Is it on one side of the line or the other?
    return this.activate(sum);
  }
    +
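The activate() method itself isn’t shown in this hunk. Based on the description, a minimal sketch would return +1 or -1 according to the sign of the sum:

  // The activation function: +1 if the sum is positive, -1 otherwise.
  activate(sum) {
    if (sum > 0) {
      return 1;
    } else {
      return -1;
    }
  }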

    Presumably, I could now create a Perceptron object and ask it to make a guess for any given point, as in Figure 10.7.

Figure 10.7: An (x, y) coordinate from the two-dimensional space is the input to the perceptron.

    Here’s the code to generate a guess:

    -
    // Create the perceptron.
    +
    +
    // Create the perceptron.
     let perceptron = new Perceptron(3);
     // The input is 3 values: x, y, and bias.
     let inputs = [50, -12, 1];
     // The answer!
     let guess = perceptron.feedForward(inputs);
    +

    Did the perceptron get it right? Maybe yes, maybe no. At this point, the perceptron has no better than a 50/50 chance of arriving at the correct answer, since each weight starts out as a random value. A neural network isn’t a magic tool that can automatically guess things correctly on its own. I need to teach it how to do so!

    To train a neural network to answer correctly, I’ll use the supervised learning method I described earlier in the chapter. Remember, this technique involves giving the network inputs with known answers. This enables the network to check if it has made a correct guess. If not, the network can learn from its mistake and adjust its weights. The process is as follows:

      @@ -309,7 +317,8 @@

      The Perceptron Code

      \text{new weight} = \text{weight} + (\text{error} \times \text{input}) \times \text{learning constant}

      A high learning constant causes the weight to change more drastically. This may help the perceptron arrive at a solution more quickly, but it also increases the risk of overshooting the optimal weights. A small learning constant will adjust the weights more slowly and require more training time, but it will allow the network to make small adjustments that could improve overall accuracy.

Assuming the addition of a learningConstant property to the Perceptron class, I can now write a training method for the perceptron following the steps I outlined earlier.

      -
        // Step 1: Provide the inputs and known answer.
      +
      +
        // Step 1: Provide the inputs and known answer.
         // These are passed in as arguments to train().
         train(inputs, desired) {
           // Step 2: Guess according to those inputs.
      @@ -323,8 +332,10 @@ 

      The Perceptron Code

      this.weights[i] = this.weights[i] + error * inputs[i] * this.learningConstant;
    }
  }
      +

      Here’s the Perceptron class as a whole.

      -
      class Perceptron {
      +
      +
      class Perceptron {
         constructor(totalInputs) {
           //{!2} The Perceptron stores its weights and learning constants.
           this.weights = [];
      @@ -362,6 +373,7 @@ 

      The Perceptron Code

      }
    }
  }
      +

      To train the perceptron, I need a set of inputs with known answers. However, I don’t happen to have a real-world dataset (or time to research and collect one) for the xerophytes and hydrophytes scenario. In truth, though, the purpose of this demonstration isn’t to show you how to classify plants. It’s about how a perceptron can learn whether points are above or below a line on a graph, and so any set of points will do. In other words, I can just make the data up.

      What I’m describing is an example of synthetic data, artificially generated data that’s often used in machine learning to create controlled scenarios for training and testing. In this case, my synthetic data will consist of a set of random input points, each with a known answer indicating whether the point is above or below a line. To define the line and generate the data, I’ll use simple algebra. This approach allows me to clearly demonstrate the training process and show how the perceptron learns.

      The question therefore becomes, how do I pick a point and know whether it’s above or below a line (without a neural network, that is)? A line can be described as a collection of points, where each point’s y coordinate is a function of its x coordinate:

      @@ -375,38 +387,52 @@

      The Perceptron Code

Figure 10.8: A graph of y = \frac{1}{2}x - 1

      I’ll arbitrarily choose that as the equation for my line, and write a function accordingly.

      -
      // A function to calculate y based on x along a line
      +
      +
      // A function to calculate y based on x along a line
       function f(x) {
         return 0.5 * x - 1;
       }
      +

      Now there’s the matter of the p5.js canvas defaulting to (0,0) in the top-left corner with the y-axis pointing down. For this discussion, I’ll assume I’ve built the following into the code to reorient the canvas to match a more traditional Cartesian space.

      -
      // Move the origin (0,0) to the center.
      +
      +
      // Move the origin (0,0) to the center.
       translate(width / 2, height / 2);
       // Flip the y-axis orientation (positive points up!).
       scale(1, -1);
      +

      I can now pick a random point in the 2D space.

      -
      let x = random(-100, 100);
      +
      +
      let x = random(-100, 100);
       let y = random(-100, 100);
      +

      How do I know if this point is above or below the line? The line function f(x) returns the y value on the line for that x position. I’ll call that y_\text{line}.

      -
      // The y position on the line
      +
      +
      // The y position on the line
       let yline = f(x);
      +

      If the y value I’m examining is above the line, it will be greater than y_\text{line}, as in Figure 10.9.

Figure 10.9: If y_\text{line} is less than y, then the point is above the line.

      Here’s the code for that logic:

      -
      // Start with a value of -1.
      +
      +
      // Start with a value of -1.
       let desired = -1;
       if (y > yline) {
         //{!1} The answer becomes +1 if y is above the line.
         desired = 1;
       }
      +

      I can then make an inputs array to go with the desired output.

      -
      // Don’t forget to include the bias!
      +
      +
      // Don’t forget to include the bias!
       let trainingInputs = [x, y, 1];
      +

      Assuming that I have a perceptron variable, I can train it by providing the inputs along with the desired answer.

      -
      perceptron.train(trainingInputs, desired);
      +
      +
      perceptron.train(trainingInputs, desired);
      +

      If I train the perceptron on a new random point (and its answer) each cycle through draw(), it will gradually get better at classifying the points as above or below the line.
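That per-frame training loop might look like the following sketch, assuming the perceptron variable, the f() line function, and the canvas reorientation shown earlier:

function draw() {
  // Reorient the canvas to Cartesian space.
  translate(width / 2, height / 2);
  scale(1, -1);
  // Pick a new random point each frame.
  let x = random(-100, 100);
  let y = random(-100, 100);
  // Determine the known answer: is the point above or below the line?
  let desired = y > f(x) ? 1 : -1;
  // Train with the point (plus bias) and its known answer.
  perceptron.train([x, y, 1], desired);
}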

      Example 10.1: The Perceptron

      @@ -830,16 +856,20 @@

      Tuning the Parameters

      Deploying the Model

      It’s finally time to deploy the model and see the payoff of all that hard work. This typically involves integrating the model into a separate application to make predictions or decisions based on new, previously unseen data. For this, ml5.js offers the convenience of a save() function to download the trained model to a file from one sketch and a load() function to load it for use in a completely different sketch. This saves you from having to retrain the model from scratch every single time you need it.
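As a rough sketch of that round trip (file names and exact callback signatures vary across ml5.js versions, so treat this as an approximation rather than the definitive API):

// In the training sketch, once training has finished:
classifier.save();

// In the deployment sketch, load the saved model before classifying:
classifier.load("model.json", modelLoaded);

function modelLoaded() {
  console.log("Model loaded and ready to classify!");
}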

While a model would typically be deployed to a different sketch from the one where it was trained, I’m going to deploy the model in the same sketch for the sake of simplicity. In fact, once the training process is complete, the resulting model is, in essence, already deployed in the current sketch. It’s saved in the classifier variable and can be used to make predictions by passing the model new data through the classify() method. The shape of the data sent to classify() should match that of the input data used in training—in this case, two floating-point numbers, representing the x and y components of a direction vector.

      -
      // Manually creating a vector
      +
      +
      // Manually creating a vector
       let direction = createVector(1, 0);
       // Converting the x and y components into an input array
       let inputs = [direction.x, direction.y];
       // Asking the model to classify the inputs
       classifier.classify(inputs, gotResults);
      +

      The second argument to classify() is another callback function where the results can be accessed.

      -
      function gotResults(results) {
      +
      +
      function gotResults(results) {
         console.log(results);
       }
      +

      The model’s prediction arrives in the argument to the callback, which I’m calling results in the code. Inside, you’ll find an array of the possible labels, sorted by confidence, a probability value that the model assigns to each label. These probabilities represent how sure the model is of that particular prediction. They range from 0 to 1, with values closer to 1 indicating higher confidence and values near 0 suggesting lower confidence.

      [
         {
      @@ -883,7 +913,8 @@ 

      Example 10.2: Gesture Classifier

      -
      // Store the start of a gesture when the mouse is pressed.
      +
      +
      // Store the start of a gesture when the mouse is pressed.
       function mousePressed() {
         start = createVector(mouseX, mouseY);
       }
      @@ -907,6 +938,7 @@ 

      Example 10.2: Gesture Classifier

function gotResults(error, results) {
  status = results[0].label;
}
      +

      Since the results array is sorted by confidence, if I just want to use a single label as the prediction, I can access the first element of the array with results[0].label, as in the gotResults() function in Example 10.2. This label is passed to the status variable to be displayed on the canvas.

      Exercise 10.5

diff --git a/content/11_nn_ga.html b/content/11_nn_ga.html
index d809ab07..81fe987c 100644
--- a/content/11_nn_ga.html
+++ b/content/11_nn_ga.html
@@ -15,7 +15,10 @@

      Star-Nosed Mole

The star-nosed mole (Condylura cristata), found mainly in the northeastern United States and eastern Canada, has a unique and highly specialized nasal organ. Evolved over numerous generations, its “nose” consists of 22 tentacles with over 25,000 minute sensory receptors. Despite the moles being functionally blind, these tentacles allow them to create a detailed spatial map of their surroundings. They can navigate their dark underground habitat with astonishing precision and speed, quickly identifying and consuming edible items in a matter of milliseconds.

      Congratulations! You’ve made it to the final act of this book. Take a moment to celebrate all that you’ve learned.

      -

      [ what do you think about having a little illustration with all of the friends, dot, triangle, cats, etc. applauding the reader?]

      +
+
+
      +

      Throughout this book, you’ve explored the fundamental principles of interactive physics simulations with p5.js, dived into the complexities of agent and other rule-based behaviors, and dipped your toe into the exciting realm of machine learning. You’ve become a natural!

However, Chapter 10 merely scratched the surface of working with data and neural network–based machine learning—a vast landscape that would require countless sequels to this book to cover comprehensively. My goal was never to go deep into neural networks, but simply to establish the core concepts in preparation for a grand finale, where I find a way to integrate machine learning into the world of animated, interactive p5.js sketches and bring together as many of our new Nature of Code friends as possible for one last hurrah.

      The path forward passes through the field of neuroevolution, a style of machine learning that combines the genetic algorithms from Chapter 9 with the neural networks from Chapter 10. A neuroevolutionary system uses Darwinian principles to evolve the weights (and in some cases, the structure itself) of a neural network over generations of trial-and-error learning. In this chapter, I’ll demonstrate how to use neuroevolution with a familiar example from the world of gaming. I’ll then finish off with some variations on Craig Reynolds’s steering behaviors from Chapter 5, where the behaviors are learned through neuroevolution.

      @@ -26,7 +29,7 @@

      Reinforcement Learning

Instead of a mouse or a robot, now think about any of the example objects from earlier in this book (walker, mover, particle, vehicle). Imagine embedding a neural network into one of these objects and using it to calculate a force or some other action. The neural network could receive its inputs from the environment (such as distance to an obstacle) and output some kind of decision. Perhaps the network chooses from a set of discrete options (move left or right) or picks a set of continuous values (the magnitude and direction of a steering force). Is this starting to sound familiar? It’s no different from how a neural network performed after training in the Chapter 10 examples, receiving inputs and predicting a classification or regression!

      Actually training one of these objects to make a good decision is where the process diverges from the supervised learning approach. To better illustrate, let’s start with a hopefully easy to understand and possibly familiar scenario, the game Flappy Bird (see Figure 11.1). The game is deceptively simple. You control a small bird that continually moves horizontally across the screen. With each tap or click, the bird flaps its wings and rises upward. The challenge? A series of vertical pipes spaced apart at irregular intervals emerge from the right. The pipes have gaps, and your primary objective is to navigate the bird safely through these gaps. If you hit a pipe, it’s game over. As you progress, the game’s speed increases, and the more pipes you navigate, the higher your score.

- Figure 11.1: The Flappy Bird game
+ Figure 11.1: The Flappy Bird game
      Figure 11.1: The Flappy Bird game

      Suppose you wanted to automate the gameplay, and instead of a human tapping, a neural network will make the decision as to whether to flap or not. Could machine learning work here? Skipping over the initial “data” steps in the machine learning lifecycle for a moment, let’s think about how to choose a model. What are the inputs and outputs of the neural network?

      @@ -41,7 +44,7 @@

      Reinforcement Learning

    These features are illustrated in Figure 11.2.

- Figure 11.2: The Flappy Bird input features for a neural network
+ Figure 11.2: The Flappy Bird input features for a neural network
    Figure 11.2: The Flappy Bird input features for a neural network

    The neural network will have five inputs, one for each feature, but what about the outputs? Is this a classification problem or a regression problem? This may seem like an odd question to ask in the context of a game like Flappy Bird, but it’s actually quite important and relates to how the game is controlled. Tapping the screen, pressing a button, or using keyboard controls are all examples of classification. After all, there’s only a discrete set of choices: tap or not; press W, A, S, or D on the keyboard. On the other hand, using an analog controller like a joystick leans toward regression. A joystick can be tilted in varying degrees in any direction, translating to continuous output values for both its horizontal and vertical axes.

    @@ -52,7 +55,7 @@

    Reinforcement Learning

This means the network should have two outputs, suggesting an overall network architecture like the one in Figure 11.3.

- Figure 11.3: The neural network for Flappy Bird as ml5.js might design it
+ Figure 11.3: The neural network for Flappy Bird as ml5.js might design it
Figure 11.3: The neural network for Flappy Bird as ml5.js might design it

I now have all the information necessary to configure a model and let ml5.js build it.

@@ -141,7 +144,8 @@

Coding Flappy Bird

To be clear, the “reality” depicted in the game is a bird flying through pipes—the bird is moving along two dimensions while the pipes remain stationary. However, it’s simpler to code the game as if the bird is stationary in its horizontal position and the pipes are moving.

With a Bird and Pipe class written, I’m almost set to run the game. However, there remains a key missing piece: collisions. The whole game rides on the bird attempting to avoid the pipes! Fortunately, this is nothing new. You’ve seen many examples of objects checking their positions against others throughout this book. There’s a design choice to make, though. A method to check collisions could logically be placed in either the Bird class (to check if the bird hits a pipe) or in the Pipe class (to check if a pipe hits the bird). Either can be justified depending on your point of view.

I’ll place the method in the Pipe class and call it collides(). It’s a little trickier than you might think on first glance, as the method needs to check both the top and bottom rectangles of a pipe against the position of the bird. There are a variety of ways to approach this. One way is to first check if the bird is vertically within the bounds of either rectangle (either above the bottom of the top pipe or below the top of the bottom one). But it’s only actually colliding with the pipe if the bird is also horizontally within the boundaries of the pipe’s width. An elegant way to write this is to combine each of these checks with a logical “and.”

  collides(bird) {
    // Is the bird within the vertical range of the top or bottom pipe?
    let verticalCollision = bird.y < this.top || bird.y > this.bottom;
    // Is the bird within the horizontal range of the pipes?
    let horizontalCollision = bird.x > this.x && bird.x < this.x + this.w;
    //{!1} If it’s both a vertical and horizontal hit, it’s a hit!
    return verticalCollision && horizontalCollision;
  }

The algorithm currently treats the bird as a single point and doesn’t take into account its size. This is something that should be improved for a more realistic version of the game.
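One possible improvement (a sketch of my own, assuming the bird gains a radius property r) is to pad each check by that radius:

// Pad the vertical and horizontal checks by the bird’s radius.
let verticalCollision = bird.y - bird.r < this.top || bird.y + bird.r > this.bottom;
let horizontalCollision = bird.x + bird.r > this.x && bird.x - bird.r < this.x + this.w;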

All that’s left is to write setup() and draw(). I need a single variable for the bird and an array for a list of pipes. The interaction is just a single press of the mouse, which triggers the bird’s flap() method. Rather than build a fully functional game with a score, end screen, and other usual elements, I’ll just make sure things are working by drawing the text “OOPS!” near any pipe when a collision occurs. The code also assumes an additional offscreen() method on the Pipe class for when a pipe has moved beyond the left edge of the canvas.
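Here’s a minimal sketch of how setup() and draw() might fit together, assuming the Bird and Pipe classes described above; the canvas size and pipe-spacing interval are placeholders, and the OOPS! text and offscreen() check follow the description:

let bird;
let pipes = [];

function setup() {
  createCanvas(640, 240);
  bird = new Bird();
  pipes.push(new Pipe());
}

// A single mouse press triggers the bird’s flap() method.
function mousePressed() {
  bird.flap();
}

function draw() {
  background(255);
  // Add a new pipe at a regular interval (spacing is a placeholder).
  if (frameCount % 100 === 0) {
    pipes.push(new Pipe());
  }
  for (let i = pipes.length - 1; i >= 0; i--) {
    pipes[i].update();
    pipes[i].show();
    // Signal a collision rather than ending the game.
    if (pipes[i].collides(bird)) {
      text("OOPS!", pipes[i].x, pipes[i].top + 20);
    }
    // Remove pipes that have moved beyond the left edge.
    if (pipes[i].offscreen()) {
      pipes.splice(i, 1);
    }
  }
  bird.update();
  bird.show();
}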


The Bird Brain

The bird’s neural network will receive four inputs describing the state of the game:

• The y position of the bird.
• The y velocity of the bird.
• The y position of the next pipe’s top opening.
• The x distance to the next pipe.

There are two outputs representing the bird’s two options: to flap or not to flap. With the inputs and outputs set, I can add a brain property to the bird’s constructor holding an ml5.js neural network with the appropriate configuration. Just to demonstrate a different coding style here, I’ll skip including a separate options variable and pass the properties as an object literal directly into the ml5.neuralNetwork() function. Note the addition of a neuroEvolution property set to true. This is necessary to enable some of the features I’ll be using later in the code.

  constructor() {
    this.brain = ml5.neuralNetwork({
      // A bird’s brain receives four inputs and classifies them into one of two labels.
      inputs: 4,
      outputs: ["flap", "no flap"],
      task: "classification",
      // A special mode for neuroevolution
      neuroEvolution: true,
    });
  }

    Next, I’ll add a new method called think() to the Bird class where all of the necessary inputs for the bird are calculated at each moment in time. The first two inputs are easy—they’re simply the y and velocity properties of the bird itself. However, for inputs 3 and 4, I need to determine which pipe is the “next” pipe.

At first glance, it might seem that the next pipe is always the first one in the array, since the pipes are added one at a time to the end of the array. However, once a pipe passes the bird, it’s no longer relevant, and there’s still some time between when this happens and when that pipe exits the canvas and is removed from the beginning of the array. I therefore need to find the first pipe in the array whose right edge (x position plus width) is greater than the bird’s x position.

  think(pipes) {
    let nextPipe = null;
    for (let pipe of pipes) {
      //{!4} The next pipe is the one that hasn’t passed the bird yet.
      if (pipe.x + pipe.w > this.x) {
        nextPipe = pipe;
        break;
      }
    }

    Once I have the next pipe, I can create the four inputs:

    let inputs = [
      // y position of bird
      this.y,
      // y velocity of bird
      this.velocity,
      // Top opening of the next pipe
      nextPipe.top,
      //{!1} Distance to the next pipe
      nextPipe.x - this.x,
    ];

    This is close, but I’ve forgotten a critical step. The range of all input values is determined by the dimensions of the canvas, but a neural network expects values in a standardized range, such as 0 to 1. One method to normalize these values is to divide the inputs related to vertical properties by height, and those related to horizontal ones by width.

    let inputs = [
      //{!4} All of the inputs are now normalized by width and height.
      this.y / height,
      this.velocity / height,
      nextPipe.top / height,
      (nextPipe.x - this.x) / width,
    ];

    With the inputs in hand, I’m ready to pass them to the neural network’s classify() method. There’s another small problem, however: classify() is asynchronous, meaning I’d have to implement a callback inside the Bird class to process the model’s decision. This would add a significant level of complexity to the code, but luckily, it’s entirely unnecessary in this case. Asynchronous callbacks with ml5.js’s machine learning functions are typically needed due to the time required to process the large amount of data in the model. Without a callback, the code might have to wait a long time to get a result, and if the model is running as part of a p5.js sketch, that delay could severely impact the smoothness of the animation. The neural network here, however, only has four floating point inputs and two output labels! It’s tiny and can run fast enough that there’s no reason to use asynchronous code.

    For completeness, I’ll include a version of the example on the book’s website that implements neuroevolution with asynchronous callbacks. For the discussion here, however, I’m going to use a feature of ml5.js that allows me to take a shortcut. The method classifySync() is identical to classify(), but it runs synchronously, meaning the code stops and waits for the results before moving on. You should be very careful when using this version of the method as it can cause problems in other contexts, but it will work well for this simple scenario. Here’s the end of the think() method with classifySync().

    let results = this.brain.classifySync(inputs);
    if (results[0].label === "flap") {
      this.flap();
    }
  }

    The neural network’s prediction is in the same format as the gesture classifier from the previous chapter, and the decision can be made by checking the first element of the results array. If the output label is "flap", then call flap().

Now that I’ve finished the think() method, the real challenge can begin: teaching the bird to win the game by consistently flapping its wings at the right moment. This is where the genetic algorithm comes back into the picture. Recalling the discussion from Chapter 9, there are three key principles that underpin Darwinian evolution: variation, selection, and heredity. I’ll revisit each of these principles in turn as I implement the steps of the genetic algorithm in this new context of neural networks.

    Variation: A Flock of Flappy Birds

    A single bird with a randomly initialized neural network isn’t likely to have any success at all. That lone bird will most likely jump incessantly and fly way offscreen, or sit perched at the bottom of the canvas awaiting collision after collision with the pipes. This erratic and nonsensical behavior is a reminder: a randomly initialized neural network lacks any knowledge or experience. The bird is essentially making wild guesses for its actions, so success is going to be very rare.

    This is where the first key principle of genetic algorithms comes in: variation. The hope is that by introducing as many different neural network configurations as possible, a few might perform slightly better than the rest. The first step toward variation is to add an array of many birds.

Figure 11.4: A population of birds, each with unique neural networks, navigating through the pipes in the neuroevolution process.

// Population size
let populationSize = 200;
// Array of birds
let birds = [];

Selection: Flappy Bird Fitness

    Once I have a diverse population of birds, each with its own neural network, the next step in the genetic algorithm is selection. Which birds should pass on their genes (in this case, neural network weights) to the next generation? In the world of Flappy Bird, the measure of success is the ability to stay alive the longest by avoiding the pipes. This is the bird’s “fitness.” A bird that dodges many pipes is considered more fit than one that crashes into the first one it encounters.

    To track each bird’s fitness, I’ll add two properties to the Bird class: fitness and alive.

  constructor() {
    // The bird’s fitness
    this.fitness = 0;
    //{!1} Is the bird alive or not?
    this.alive = true;
  }

    I’ll assign the fitness a numeric value that increases by one every cycle through draw(), as long as the bird remains alive. The birds that survive longer should have a higher fitness. This mechanism mirrors the reinforcement learning technique of rewarding good decisions. In reinforcement learning, however, an agent receives immediate feedback for every decision it makes, allowing it to adjust its policy accordingly. Here, the bird’s fitness is a cumulative measure of its overall success and will only be applied during the selection step of the genetic algorithm.

  update() {
    //{!1} Incrementing the fitness each time through update
    this.fitness++;
  }

    The alive property is a boolean flag that’s initially set to true. When a bird collides with a pipe, it’s set to false. Only birds that are still alive are updated and drawn to the canvas.

function draw() {
  // There’s now an array of birds!
  for (let bird of birds) {
    //{!1} Only operate on the birds that are still alive.
    if (bird.alive) {
      // Make a decision and update the bird.
      bird.think(pipes);
      bird.update();
      bird.show();
      // Has the bird hit a pipe? If so, it’s no longer alive.
      for (let pipe of pipes) {
        if (pipe.collides(bird)) {
          bird.alive = false;
        }
      }
    }
  }
}

    In Chapter 9, I demonstrated two techniques for running an evolutionary simulation. In the smart rockets example, the population lived for a fixed amount of time each generation. The same approach could likely work here as well, but I want to allow the birds to accumulate the highest fitness possible and not arbitrarily stop them based on a time limit. The second technique, demonstrated with the “bloops” example, involved eliminating the fitness score entirely and setting a random probability for cloning any living creature. For Flappy Bird, this approach could become messy and risks overpopulation or all the birds dying out completely.

    I propose combining elements of both approaches. I’ll allow a generation to continue as long as at least one bird is still alive. When all the birds have died, I’ll select parents for the reproduction step and start anew. I’ll begin by writing a function to check if all the birds have died.

function allBirdsDead() {
  for (let bird of birds) {
    //{!3} If a single bird is alive, they are not all dead!
    if (bird.alive) {
      return false;
    }
  }
  //{!1} If the loop completes without finding a living bird, they are all dead.
  return true;
}

    When all the birds have died, it’s time for selection! In the previous genetic algorithm examples, I demonstrated a “relay race” technique for giving a fair shot to all members of a population, while still increasing the chances of selection for those with higher fitness scores. I’ll use that same weightedSelection() function here.

//{!1} See Chapter 9 for a detailed explanation of this algorithm.
function weightedSelection() {
  let index = 0;
  let start = random(1);
  while (start > 0) {
    start = start - birds[index].fitness;
    index++;
  }
  index--;
  //{!1} Instead of returning the entire Bird object, just the brain is returned.
  return birds[index].brain;
}

    For this algorithm to function properly, I need to first normalize the fitness values of the birds so that they collectively add up to 1.

function normalizeFitness() {
  // Sum the total fitness of all birds.
  let sum = 0;
  for (let bird of birds) {
    sum += bird.fitness;
  }
  //{!3} Divide each bird’s fitness by the total sum.
  for (let bird of birds) {
    bird.fitness = bird.fitness / sum;
  }
}

    Once normalized, each bird’s fitness is equal to its probability of being selected.

    Heredity: Baby Birds

    There’s only one step left in the genetic algorithm—reproduction. In Chapter 9, I explored in great detail the two-step process for generating a “child” element: crossover and mutation. Crossover is where the third key principle of heredity arrives: the DNA from the two selected parents is combined to form the child’s DNA.

    At first glance, the idea of inventing a crossover algorithm for two neural networks might seem daunting, and yet it’s actually quite straightforward. Think of the individual “genes” of a bird’s brain as the weights within the neural network. Mixing two such brains boils down to creating a new neural network where each weight is chosen by a virtual coin flip—it comes either from the first or second parent.

// Picking two parents and creating a child with crossover
let parentA = weightedSelection();
let parentB = weightedSelection();
let child = parentA.crossover(parentB);

    Wow, today’s my lucky day! It turns out ml5.js includes a crossover() method that manages the algorithm for mixing the two neural networks. I can happily move on to the mutation step.

// Mutating the child
child.mutate(0.01);

    My luck continues! The ml5.js library also provides a mutate() method that accepts a mutation rate as its primary argument. The rate determines how often a weight will be altered. For example, a rate of 0.01 indicates a 1 percent chance that any given weight will mutate. During mutation, ml5.js adjusts the weight slightly by adding a small random number to it, rather than selecting a completely new random value. This behavior mimics real-world genetic mutations, which typically introduce minor changes rather than entirely new traits. Although this default approach works for many cases, ml5.js offers more control over the process by allowing the use of a custom mutation function as an optional second argument to mutate().
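As a sketch of what that might look like (the callback signature here is my assumption: a function that receives a weight value and returns its replacement; randomGaussian() is p5.js):

// Hypothetical custom mutation: nudge a weight with Gaussian noise.
child.mutate(0.01, (value) => value + randomGaussian(0, 0.1));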

    The crossover and mutation steps need to be repeated for the size of the population to create an entire new generation of birds. This is accomplished by populating an empty local array nextBirds with the new birds. Once the population is full, the global birds array is then updated to this fresh generation.

function reproduction() {
  //{!1} Start with a new empty array.
  let nextBirds = [];
  for (let i = 0; i < populationSize; i++) {
    // Pick two parents and create a child with crossover and mutation.
    let parentA = weightedSelection();
    let parentB = weightedSelection();
    let child = parentA.crossover(parentB);
    child.mutate(0.01);
    // Create a new bird with the child brain.
    nextBirds[i] = new Bird(child);
  }
  //{!1} The next generation is now the current one!
  birds = nextBirds;
}

    If you look closely at the reproduction() function, you may notice that I’ve slipped in another new feature of the Bird class: an argument to the constructor. When I first introduced the idea of a bird “brain,” each new Bird object was created with a brand-new brain—a fresh neural network courtesy of ml5.js. However, I now want the new birds to “inherit” a child brain that was generated through the processes of crossover and mutation. To make this possible, I’ll subtly change the Bird constructor to look for an optional argument named, of course, brain.

  constructor(brain) {
    //{!1} Check if a brain was passed in.
    if (brain) {
      this.brain = brain;
    //{!1} If not, make a new one.
    } else {
      this.brain = ml5.neuralNetwork({
        inputs: 4,
        outputs: ["flap", "no flap"],
        task: "classification",
        neuroEvolution: true,
      });
    }
  }

If no brain is provided when a new bird is created, the brain argument remains undefined. In JavaScript, undefined is a “falsy” value: it evaluates to false in a boolean context. The if (brain) test will therefore fail, so the code will move on to the else statement and call ml5.neuralNetwork(). On the other hand, if an existing neural network is passed in, brain evaluates to true and is assigned directly to this.brain. This elegant trick allows a single constructor to handle both scenarios.
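A quick usage sketch of the two scenarios (childBrain here stands in for a network produced by crossover and mutation):

// First generation: no argument, so a fresh network is created.
let bird = new Bird();
// Later generations: pass in an inherited brain.
let babyBird = new Bird(childBrain);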

    With that, the example is complete. All that’s left to do is call normalizeFitness() and reproduction() in draw() at the end of each generation, when all the birds have died out.

Example 11.2: Flappy Bird with Neuroevolution

function draw() {
  /* all the rest of draw */

  //{!4} Create the next generation when all the birds have died.
  if (allBirdsDead()) {
    normalizeFitness();
    reproduction();
    resetPipes();
  }
}

function resetPipes() {
  // Remove all the pipes but the very latest one.
  pipes.splice(0, pipes.length - 1);
}

    Note the addition of a new resetPipes() function. If I don’t remove the pipes before starting a new generation, the birds may immediately restart at a position colliding with a pipe, in which case even the best bird won’t have a chance to fly! The full online code for Example 11.2 also adjusts the behavior of the birds so that they die when they leave the canvas, either by crashing into the ground or soaring too high above the top.
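A sketch of that adjustment, assuming it lives in the bird’s update() method, per the description:

// The bird dies if it leaves the canvas, top or bottom.
if (this.y > height || this.y < 0) {
  this.alive = false;
}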

    Exercise 11.2


    Exercise 11.3

    Steering the Neuroevolutionary Way

Having explored neuroevolution with Flappy Bird, I’d like to shift the focus back to the realm of simulation, specifically the steering agents introduced in Chapter 5. What if, instead of me dictating the rules for an algorithm to calculate a steering force, a simulated creature could evolve its own strategy? Drawing inspiration from Craig Reynolds’s aim of “life-like and improvisational” behaviors, my goal isn’t to use neuroevolution to engineer the “perfect” creature that can flawlessly execute a task. Instead, I hope to create a captivating world of simulated life, where the quirks, nuances, and happy accidents of evolution unfold on the canvas.

    I’ll begin by adapting the smart rockets example from Chapter 9. In that example, the genes for each rocket were an array of vectors.

this.genes = [];
for (let i = 0; i < lifeSpan; i++) {
  //{!2} Each gene is a vector with random direction and magnitude.
  this.genes[i] = p5.Vector.random2D();
  this.genes[i].mult(random(0, this.maxforce));
}

    I propose adapting this code to instead use a neural network to predict the vector or steering force, transforming the genes into a brain. Vectors can have a continuous range of values, so this is a regression task.

this.brain = ml5.neuralNetwork({
  inputs: 2,
  outputs: 2,
  task: "regression",
  neuroEvolution: true,
});

    In the original example, the vectors from the genes array were applied sequentially, querying the array with a counter variable.

this.applyForce(this.genes[this.counter]);

    Now, instead of an array lookup, I want the neural network to return a new vector for each frame of the animation. For regression tasks with ml5.js, the output of the neural network is received from the predict() method rather than classify(). And here, I’ll use the predictSync() variant to keep things simple and allow for synchronous output data from the model in the rocket’s run() method.

run() {
  // Get the outputs from the neural network.
  let outputs = this.brain.predictSync(inputs);
  // Use one output for an angle.
  let angle = outputs[0].value * TWO_PI;
  // Use the other output for the force’s magnitude.
  let magnitude = outputs[1].value * this.maxforce;
  // Combine angle and magnitude into a steering force and apply it.
  let force = p5.Vector.fromAngle(angle);
  force.setMag(magnitude);
  this.applyForce(force);
}

    The neural network brain outputs two values: one for the angle of the vector and one for the magnitude. You might think to instead use these outputs for the vector’s x and y components. The default output range for an ml5.js neural network is between 0 and 1, however, and I want the forces to be capable of pointing in both positive and negative directions. Mapping the first output to an angle by multiplying it by TWO_PI offers the full range.

    You may have noticed that the code includes a variable called inputs that I have yet to declare or initialize. Defining the inputs to the neural network is where you as the designer of the system can be the most creative. You have to consider the nature of the environment and the simulated biology and capabilities of your creatures, and decide what features are most important.

    As a first try, I’ll assign something very basic for the inputs and see if it works. Since the smart rockets’ environment is static, with fixed obstacles and targets, what if the brain could learn and estimate a flow field to navigate toward its goal? As I demonstrated in Chapter 5, a flow field receives a position and returns a vector, so the neural network can mirror this functionality and use the rocket’s current x and y position as input. I just have to normalize the values according to the canvas dimensions.

let inputs = [this.position.x / width, this.position.y / height];

    That’s it! Virtually everything else from the original example can remain unchanged: the population, the fitness function, and the selection process.

    Example 11.3: Smart Rockets with Neuroevolution


  reproduction() {
    let nextPopulation = [];
    // Create the next population.
    for (let i = 0; i < this.population.length; i++) {
      // Pick two parents via weighted selection and create a child brain.
      let parentA = this.weightedSelection();
      let parentB = this.weightedSelection();
      let child = parentA.crossover(parentB);
      child.mutate(0.01);
      //{inline} Create the next rocket with the child brain.
    }
    this.population = nextPopulation;
    this.generations++;
  }

    Notice how, now that I’m using ml5.js, there’s no longer a need for a separate DNA class with implementations of crossover() and mutate(). Instead, those methods are built into the ml5.neuralNetwork itself and can be called directly.

    Exercise 11.4

Responding to Change

As the glow moves, the creature should take the glow’s position into account in its decision-making process, as an input to its brain. However, it isn’t sufficient to know only the light’s position; it’s the position relative to the creature’s own that’s key. A nice way to synthesize this information as an input feature is to calculate a vector that points from the creature to the glow. Essentially I’m reinventing the seek() method from Chapter 5, this time using a neural network to estimate the steering force.

  seek(target) {
    //{!1} Calculate a vector from the position to the target.
    let v = p5.Vector.sub(target.position, this.position);

    This is a good start, but the components of the vector don’t fall within a normalized input range. I could divide v.x by width and v.y by height, but since my canvas isn’t a perfect square, this may skew the data. Another solution is to normalize the vector, but while this would retain information about the direction from the creature to the glow, it would eliminate any measure of the distance. This won’t do either—if the creature is sitting on top of the glow, it should steer differently than if it were very far away. As a solution, I’ll save the distance in a separate variable before normalizing the vector. For it to work as an input feature, though, I still have to normalize the range. While not a perfect normalization between 0 and 1, I’ll divide it by the canvas width, which will provide a “practical” normalization that retains the relative magnitude.

  seek(target) {
    let v = p5.Vector.sub(target.position, this.position);
    // Save the distance in a variable and normalize according to width (one input).
    let distance = v.mag() / width;
    // Normalize the vector pointing from position to target (two inputs).
    v.normalize();

    If you recall, a key element of Reynolds’s steering formula involved comparing the desired velocity to the current velocity. How the vehicle is currently moving plays a significant role in how it should steer! For the creature to consider its own velocity as part of its decision-making, I can include the velocity vector in the inputs to the neural network as well. To normalize these values, it works beautifully to divide the vector’s components by the maxspeed property. This retains both the direction and relative magnitude of the vector. The rest of the seek() method follows the same logic as the previous example, with the outputs of the neural network synthesized into a force to be applied to the creature.

  seek(target) {
    let v = p5.Vector.sub(target.position, this.position);
    let distance = v.mag() / width;
    v.normalize();
    // Compile the inputs: direction to the target, distance, and the creature’s velocity.
    let inputs = [
      v.x,
      v.y,
      distance,
      this.velocity.x / this.maxspeed,
      this.velocity.y / this.maxspeed,
    ];
    // Predict a force from the network’s outputs, as in the previous example.
    let outputs = this.brain.predictSync(inputs);
    let angle = outputs[0].value * TWO_PI;
    let magnitude = outputs[1].value * this.maxforce;
    let force = p5.Vector.fromAngle(angle);
    force.setMag(magnitude);
    this.applyForce(force);
  }

    Enough has changed in the transition from rockets to creatures that it’s also worth reconsidering the fitness function. Previously, fitness was calculated based on the rocket’s “record” distance from the target at the end of each generation. Since the target is now moving, I’d prefer to accumulate the amount of time the creature is able to catch the glow as the measure of fitness. This can be achieved by checking the distance between the creature and the glow in the update() method and incrementing a fitness value when they’re intersecting.

  update(target) {
    /* The usual updating of position, velocity, acceleration */

    //{!4} Increase the fitness whenever the creature reaches the glow.
    let d = p5.Vector.dist(this.position, target.position);
    if (d < this.r + target.r) {
      this.fitness++;
    }
  }

    Both the Glow and Creature classes include a radius property r, which I’m using to determine intersection.

    Speeding Up Time

    One thing you may have noticed about evolutionary computing is that testing the code is a delightful exercise in patience. You have to watch the slow crawl of the simulation play out generation after generation. This is part of the point—I want to watch the process! It’s also a nice excuse to take a break, which is to be encouraged. Head outside and enjoy some non-simulated nature for a while, or perhaps a soothing cup of tea. Then check back in on your creatures and see how they’re progressing. Take comfort in the fact that you only have to wait billions of milliseconds rather than the billions of years required for actual biological evolution.
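One common way to shorten the wait, sketched here under the assumption of a global speed variable and a creatures array (not this section’s exact code), is to run the simulation’s update logic several times per drawn frame:

// Assumed global: how many simulation steps to run per frame.
let speed = 10;

function draw() {
  // Run the physics multiple times before rendering once.
  for (let n = 0; n < speed; n++) {
    for (let creature of creatures) {
      creature.update();
    }
  }
  // Draw the current state just once per frame.
  background(255);
  for (let creature of creatures) {
    creature.show();
  }
}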


    Sensing the Environment

A common approach to simulating how a real-world creature (or robot) would have a limited awareness of its surroundings is to attach sensors to an agent. Think back to that mouse in the maze from the beginning of the chapter (hopefully it’s been thriving on the cheese it’s been getting as a reward), and now imagine it has to navigate the maze in the dark. Its whiskers might act as proximity sensors to detect walls and turns. The mouse’s whiskers can’t “see” the entire maze, only the immediate surroundings. Other examples of sensing include a bat using echolocation to navigate, or a car on a winding road that can see only what’s illuminated by its headlights.

    I’d like to build on this idea of the whiskers (or more formally the vibrissae) found in mice, cats, and other mammals. In the real world, animals use their vibrissae to navigate and detect nearby objects, especially in dark or obscured environments (see Figure 11.4). How can I attach whisker-like sensors to my neuroevolutionary seeking creatures?

Figure 11.4: Clawdius the Cat sensing his environment with his vibrissae

    I’ll keep the generic class name Creature but think of them now as the amoeba-like “bloops” from Chapter 9, enhanced with whisker-like sensors that emanate from their center in all directions.
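Here’s a hedged sketch of what that construction might look like, assuming a Sensor class that stores a direction vector (the names are placeholders, not the chapter’s final code):

// Create evenly spaced sensors pointing in all directions.
this.sensors = [];
let totalSensors = 8;
for (let i = 0; i < totalSensors; i++) {
  // Map each sensor index to an angle around the circle.
  let angle = map(i, 0, totalSensors, 0, TWO_PI);
  this.sensors[i] = new Sensor(p5.Vector.fromAngle(angle));
}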


    How can I determine if a creature’s sensor is touching the food? One approach could be to use a technique called raycasting. This method is commonly employed in computer graphics to project straight lines (often representing beams of light) from an origin point in a scene to determine what objects they intersect with. Raycasting is useful for visibility and collision checks, exactly what I’m doing here!

    While raycasting would provide a robust solution, it requires more mathematics than I’d like to delve into here. For those interested, an explanation and implementation are available in Coding Challenge #145 on thecodingtrain.com. For this example, I’ll opt for a more straightforward approach and check whether the endpoint of a sensor lies inside the food circle (see Figure 11.5).

Figure 11.5: The endpoint of a sensor is inside or outside of the food based on its distance to the center of the food.
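A sketch of that distance check, assuming the sensor knows the position of its endpoint (end here is a placeholder for that position) and that value holds the sensor’s reading, consistent with the visualization described below:

// Check whether the sensor’s endpoint falls inside the food circle.
let d = p5.Vector.dist(end, food.position);
if (d < food.r) {
  // A stronger reading the deeper the endpoint is inside the food.
  this.value = map(d, 0, food.r, 1, 0);
} else {
  this.value = 0;
}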

    @@ -797,7 +858,8 @@

    Example 11.5: A Bloop with Sensors

    In the example, the creature’s sensors are drawn as lines from its center. When a sensor detects something (when value is greater than 0), a circle appears. To visualize the strength of the sensor reading, I use value to set its transparency.

    Learning from the Sensors

    Are you thinking what I’m thinking? What if the values of a creature’s sensors are the inputs to a neural network?! Assuming I give the creatures control of their own movements again, I could write a new think() method that processes the sensor values through the neural network “brain” and outputs a steering force, just like in the last two steering examples.

  think() {
    // Build an input array from the sensor values.
    let inputs = [];
    for (let i = 0; i < this.sensors.length; i++) {
      inputs[i] = this.sensors[i].value;
    }
    // Predict a steering force from the sensor readings.
    let outputs = this.brain.predictSync(inputs);
    let angle = outputs[0].value * TWO_PI;
    let magnitude = outputs[1].value * this.maxforce;
    let force = p5.Vector.fromAngle(angle);
    force.setMag(magnitude);
    this.applyForce(force);
  }

The logical next step might be to incorporate all the usual parts of the genetic algorithm: writing a fitness function (how much food did each creature eat?) and performing selection after a fixed generational time period. But this is a great opportunity to revisit the principles of a “continuous” ecosystem and aim for a more sophisticated environment and set of potential behaviors for the creatures themselves. Instead of a fixed lifespan cycle for each generation, I’ll bring back Chapter 9’s concept of a health score for each creature. For every cycle through draw() that a creature lives, its health deteriorates a little bit.

class Creature {
  constructor() {
    //{inline} All of the creature’s properties

    // The health starts at 100.
    this.health = 100;
  }

  update() {
    //{inline} The usual updating of position, velocity, acceleration

    // Losing some health!
    this.health -= 0.25;
  }
}

    In draw(), if any bloop’s health drops below 0, it dies and is deleted from the bloops array. And for reproduction, instead of performing the usual crossover and mutation all at once, each bloop (with a health greater than 0) will have a 0.1 percent chance of reproducing.

  function draw() {
    for (let i = bloops.length - 1; i >= 0; i--) {
      if (bloops[i].health < 0) {
        bloops.splice(i, 1);
      //{!3} With a 0.1 percent chance, a living bloop reproduces.
      } else if (random(1) < 0.001) {
        let child = bloops[i].reproduce();
        bloops.push(child);
      }
    }
  }

In reproduce(), I’ll use the copy() method (cloning) instead of the crossover() method (mating), with a higher than usual mutation rate to help introduce variation. (I encourage you to consider ways to incorporate crossover instead; one possible sketch follows the code below.)

  reproduce() {
    //{!2} Copy and mutate rather than crossover and mutate
    let brain = this.brain.copy();
    brain.mutate(0.1);
    return new Creature(this.position.x, this.position.y, brain);
  }
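As one possible take on the crossover variant, here’s a sketch of my own (it assumes access to the global bloops array and picks a random partner; this is not the chapter’s code):

  reproduce() {
    // Pick a random partner from the population (a crude stand-in for mate selection).
    let partner = random(bloops);
    // Mix the two brains, then mutate the result.
    let brain = this.brain.crossover(partner.brain);
    brain.mutate(0.1);
    return new Creature(this.position.x, this.position.y, brain);
  }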

    For this to work, some bloops should live longer than others. By consuming food, their health increases, giving them extra time to reproduce. I’ll manage this in an eat() method of the Creature class.

  eat(food) {
    // If the bloop is close to the food, increase its health!
    let d = p5.Vector.dist(this.position, food.position);
    if (d < this.r + food.r) {
      this.health += 0.5;
    }
  }

    Is this enough for the system to evolve and find its equilibrium? I could dive deeper, tweaking parameters and behaviors in pursuit of the ultimate evolutionary system. The allure of this infinite rabbit hole is one I cannot easily escape, but I’ll explore it on my own time. For the purpose of this book, I invite you to run the example, experiment, and draw your own conclusions.

    Example 11.6: A Neuroevolutionary Ecosystem


    The Ecosystem Project

    The End

    If you’re still reading, thank you! You’ve reached the end of the book. But for as much material as this book contains, I’ve barely scratched the surface of the physical world we inhabit and of techniques for simulating it. It’s my intention for this book to live as an ongoing project, and I hope to continue adding new tutorials and examples to the book’s website, as well as expand and update the accompanying video tutorials on thecodingtrain.com. Your feedback is truly appreciated, so please get in touch via email at daniel@shiffman.net or by contributing to the GitHub repository at github.com/nature-of-code, in keeping with the open source spirit of the project. Share your work. Stay in touch. Let’s be two with nature.
