diff --git a/content/07_ca.html b/content/07_ca.html index 2a6bb130..d44c056b 100644 --- a/content/07_ca.html +++ b/content/07_ca.html @@ -9,8 +9,8 @@

Chapter 7. Cellular Automata

-Image caption TBD
-Image caption TBD
+Photo by ZSM, CC BY-SA 4.0
+Photo by ZSM, CC BY-SA 4.0

Kente Cloth

Originating from the Akan people of Ghana, kente cloth is a woven fabric celebrated for its vibrant colors and intricate patterns. Each strip of cloth features a distinct design, and when they are stitched together, they create a complex and emergent pattern. Typically, each kente design carries its own story or message.

diff --git a/content/10_nn.html b/content/10_nn.html index 96e7700d..7edcd69d 100644 --- a/content/10_nn.html +++ b/content/10_nn.html @@ -233,7 +233,7 @@

The Perceptron Code

}

A perceptron’s job is to receive inputs and produce an output. These requirements can be packaged together in a feedForward() method. In this example, the perceptron’s inputs are an array (which should be the same length as the array of weights), and the output is a number, +1 or –1, as returned by the activation function based on the sign of the sum.

-
+
  feedForward(inputs) {
     let sum = 0;
     for (let i = 0; i < this.weights.length; i++) {
@@ -243,7 +243,8 @@ 

The Perceptron Code

 // Here the perceptron is making a guess.
 // Is it on one side of the line or the other?
 return this.activate(sum);
-}
+}
+}

Presumably, I could now create a Perceptron object and ask it to make a guess for any given point, as in Figure 10.7.

@@ -251,14 +252,12 @@

The Perceptron Code

Figure 10.7: An (x, y) coordinate from the two-dimensional space is the input to the perceptron.

Here’s the code to generate a guess:

-
-
// Create the perceptron.
+
// Create the perceptron.
 let perceptron = new Perceptron(3);
 // The input is 3 values: x, y, and bias.
 let inputs = [50, -12, 1];
 // The answer!
 let guess = perceptron.feedForward(inputs);
-

Did the perceptron get it right? Maybe yes, maybe no. At this point, the perceptron has no better than a 50/50 chance of arriving at the correct answer, since each weight starts out as a random value. A neural network isn’t a magic tool that can automatically guess things correctly on its own. I need to teach it how to do so!

To train a neural network to answer correctly, I’ll use the supervised learning method I described earlier in the chapter. Remember, this technique involves giving the network inputs with known answers. This enables the network to check if it has made a correct guess. If not, the network can learn from its mistake and adjust its weights. The process is as follows:

    @@ -317,8 +316,7 @@

    The Perceptron Code

    \text{new weight} = \text{weight} + (\text{error} \times \text{input}) \times \text{learning constant}

    A high learning constant causes the weight to change more drastically. This may help the perceptron arrive at a solution more quickly, but it also increases the risk of overshooting the optimal weights. A small learning constant will adjust the weights more slowly and require more training time, but it will allow the network to make small adjustments that could improve overall accuracy.
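To make the tradeoff concrete, here's a small sketch of the update rule in plain JavaScript. The specific numbers (a weight of 0.3, an error of 2, an input of 0.5) are invented purely for illustration:

```javascript
// The update rule from the formula above:
// new weight = weight + (error * input) * learning constant
function updateWeight(weight, error, input, learningConstant) {
  return weight + error * input * learningConstant;
}

// Invented example values: the desired answer was +1, the guess was -1,
// so the error is (+1) - (-1) = 2.
let weight = 0.3;
let error = 2;
let input = 0.5;

// A high learning constant takes a drastic step (roughly 0.3 -> 0.8).
let big = updateWeight(weight, error, input, 0.5);
// A small learning constant nudges gently (roughly 0.3 -> 0.31).
let small = updateWeight(weight, error, input, 0.01);
console.log(big, small);
```

The same error and input produce a step fifty times larger with the bigger constant, which is exactly why it can overshoot the optimal weights.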

    Assuming the addition of a learningConstant property to the Perceptron class, I can now write a training method for the perceptron following the steps I outlined earlier.

    -
    -
      // Step 1: Provide the inputs and known answer.
    +
      // Step 1: Provide the inputs and known answer.
       // These are passed in as arguments to train().
       train(inputs, desired) {
         // Step 2: Guess according to those inputs.
    @@ -332,10 +330,8 @@ 

    The Perceptron Code

    this.weights[i] = this.weights[i] + error * inputs[i] * this.learningConstant;
      }
    }
    -

    Here’s the Perceptron class as a whole.

    -
    -
    class Perceptron {
    +
    class Perceptron {
       constructor(totalInputs) {
         //{!2} The Perceptron stores its weights and learning constants.
         this.weights = [];
    @@ -373,7 +369,6 @@ 

    The Perceptron Code

        }
      }
    }
    -

    To train the perceptron, I need a set of inputs with known answers. However, I don’t happen to have a real-world dataset (or time to research and collect one) for the xerophytes and hydrophytes scenario. In truth, though, the purpose of this demonstration isn’t to show you how to classify plants. It’s about how a perceptron can learn whether points are above or below a line on a graph, and so any set of points will do. In other words, I can just make the data up.

    What I’m describing is an example of synthetic data, artificially generated data that’s often used in machine learning to create controlled scenarios for training and testing. In this case, my synthetic data will consist of a set of random input points, each with a known answer indicating whether the point is above or below a line. To define the line and generate the data, I’ll use simple algebra. This approach allows me to clearly demonstrate the training process and show how the perceptron learns.
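As a sketch of that synthetic-data idea in plain JavaScript: the helper below generates one labeled training example, using Math.random() as a stand-in for p5.js's random(-100, 100) and the line y = 0.5x − 1 that this section goes on to use:

```javascript
// The line used to label the synthetic data.
function f(x) {
  return 0.5 * x - 1;
}

// Generate one synthetic training example: a random point in the
// 2D space plus its known answer (+1 above the line, -1 below).
function makeTrainingExample() {
  // Math.random() stands in for p5.js's random(-100, 100).
  let x = Math.random() * 200 - 100;
  let y = Math.random() * 200 - 100;
  let desired = y > f(x) ? 1 : -1;
  // Don't forget to include the bias input!
  return { inputs: [x, y, 1], desired };
}
```

Each call yields a fresh point with a known answer, which is all the supervised learning process needs.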

    The question therefore becomes, how do I pick a point and know whether it’s above or below a line (without a neural network, that is)? A line can be described as a collection of points, where each point’s y coordinate is a function of its x coordinate:

    @@ -387,52 +382,38 @@

    The Perceptron Code

    Figure 10.8: A graph of y = \frac{1}{2}x - 1

    I’ll arbitrarily choose that as the equation for my line, and write a function accordingly.

    -
    -
    // A function to calculate y based on x along a line
    +
    // A function to calculate y based on x along a line
     function f(x) {
       return 0.5 * x - 1;
     }
    -

    Now there’s the matter of the p5.js canvas defaulting to (0,0) in the top-left corner with the y-axis pointing down. For this discussion, I’ll assume I’ve built the following into the code to reorient the canvas to match a more traditional Cartesian space.

    -
    -
    // Move the origin (0,0) to the center.
    +
    // Move the origin (0,0) to the center.
     translate(width / 2, height / 2);
     // Flip the y-axis orientation (positive points up!).
     scale(1, -1);
    -

    I can now pick a random point in the 2D space.

    -
    -
    let x = random(-100, 100);
    +
    let x = random(-100, 100);
     let y = random(-100, 100);
    -

    How do I know if this point is above or below the line? The line function f(x) returns the y value on the line for that x position. I’ll call that y_\text{line}.

    -
    -
    // The y position on the line
    +
    // The y position on the line
     let yline = f(x);
    -

    If the y value I’m examining is above the line, it will be greater than y_\text{line}, as in Figure 10.9.

    Figure 10.9: If y_\text{line} is less than y, then the point is above the line.

    Here’s the code for that logic:

    -
    -
    // Start with a value of -1.
    +
    // Start with a value of -1.
     let desired = -1;
     if (y > yline) {
       //{!1} The answer becomes +1 if y is above the line.
       desired = 1;
     }
    -

    I can then make an inputs array to go with the desired output.

    -
    -
    // Don’t forget to include the bias!
    +
    // Don’t forget to include the bias!
     let trainingInputs = [x, y, 1];
    -

    Assuming that I have a perceptron variable, I can train it by providing the inputs along with the desired answer.

    -
    -
    perceptron.train(trainingInputs, desired);
    -
    +
    perceptron.train(trainingInputs, desired);

    If I train the perceptron on a new random point (and its answer) each cycle through draw(), it will gradually get better at classifying the points as above or below the line.
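Outside p5.js, that one-point-per-cycle training can be simulated with a plain loop. The sketch below re-creates a minimal version of the chapter's Perceptron class (random initial weights, a sign-based activation, the same update rule), trains it on 10,000 random points, and estimates its accuracy on fresh points; exact numbers will vary from run to run:

```javascript
// The line the perceptron should learn.
function f(x) {
  return 0.5 * x - 1;
}

// A minimal re-creation of the chapter's Perceptron class.
class Perceptron {
  constructor(totalInputs, learningConstant) {
    this.weights = [];
    this.learningConstant = learningConstant;
    for (let i = 0; i < totalInputs; i++) {
      // Start with random weights (stand-in for p5.js's random(-1, 1)).
      this.weights[i] = Math.random() * 2 - 1;
    }
  }

  activate(sum) {
    return sum > 0 ? 1 : -1;
  }

  feedForward(inputs) {
    let sum = 0;
    for (let i = 0; i < this.weights.length; i++) {
      sum += inputs[i] * this.weights[i];
    }
    return this.activate(sum);
  }

  train(inputs, desired) {
    let guess = this.feedForward(inputs);
    let error = desired - guess;
    for (let i = 0; i < this.weights.length; i++) {
      this.weights[i] += error * inputs[i] * this.learningConstant;
    }
  }
}

// A random point with its known answer (and the bias input).
function randomPoint() {
  let x = Math.random() * 200 - 100;
  let y = Math.random() * 200 - 100;
  return { inputs: [x, y, 1], desired: y > f(x) ? 1 : -1 };
}

let perceptron = new Perceptron(3, 0.01);

// Stand-in for draw(): one new random training point per cycle.
for (let cycle = 0; cycle < 10000; cycle++) {
  let { inputs, desired } = randomPoint();
  perceptron.train(inputs, desired);
}

// Estimate accuracy on fresh, unseen points.
let correct = 0;
const trials = 1000;
for (let i = 0; i < trials; i++) {
  let { inputs, desired } = randomPoint();
  if (perceptron.feedForward(inputs) === desired) correct++;
}
console.log(`accuracy: ${correct / trials}`);
```

After training, the perceptron classifies the vast majority of new points correctly; the stragglers tend to sit very close to the line, where small weight imperfections still matter.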

    Example 10.1: The Perceptron

    @@ -856,20 +837,16 @@

    Tuning the Parameters

    Deploying the Model

    It’s finally time to deploy the model and see the payoff of all that hard work. This typically involves integrating the model into a separate application to make predictions or decisions based on new, previously unseen data. For this, ml5.js offers the convenience of a save() function to download the trained model to a file from one sketch and a load() function to load it for use in a completely different sketch. This saves you from having to retrain the model from scratch every single time you need it.

    While a model would typically be deployed to a different sketch from the one where it was trained, I’m going to deploy the model in the same sketch for the sake of simplicity. In fact, once the training process is complete, the resulting model is, in essence, already deployed in the current sketch. It’s saved in the classifier variable and can be used to make predictions by passing the model new data through the classify() method. The shape of the data sent to classify() should match that of the input data used in training—in this case, two floating-point numbers, representing the x and y components of a direction vector.

    -
    -
    // Manually creating a vector
    +
    // Manually creating a vector
     let direction = createVector(1, 0);
     // Converting the x and y components into an input array
     let inputs = [direction.x, direction.y];
     // Asking the model to classify the inputs
     classifier.classify(inputs, gotResults);
    -

    The second argument to classify() is another callback function where the results can be accessed.

    -
    -
    function gotResults(results) {
    +
    function gotResults(results) {
       console.log(results);
     }
    -

    The model’s prediction arrives in the argument to the callback, which I’m calling results in the code. Inside, you’ll find an array of the possible labels, sorted by confidence, a probability value that the model assigns to each label. These probabilities represent how sure the model is of that particular prediction. They range from 0 to 1, with values closer to 1 indicating higher confidence and values near 0 suggesting lower confidence.

    [
       {
    @@ -913,8 +890,7 @@ 

    Example 10.2: Gesture Classifier

    -
    -
    // Store the start of a gesture when the mouse is pressed.
    +
    // Store the start of a gesture when the mouse is pressed.
     function mousePressed() {
       start = createVector(mouseX, mouseY);
     }
    @@ -938,7 +914,6 @@ 

    Example 10.2: Gesture Classifier

    function gotResults(error, results) {
      status = results[0].label;
    }
    -

    Since the results array is sorted by confidence, if I just want to use a single label as the prediction, I can access the first element of the array with results[0].label, as in the gotResults() function in Example 10.2. This label is passed to the status variable to be displayed on the canvas.

    Exercise 10.5

    diff --git a/content/11_nn_ga.html b/content/11_nn_ga.html index 81fe987c..fb3c61c1 100644 --- a/content/11_nn_ga.html +++ b/content/11_nn_ga.html @@ -879,14 +879,14 @@

    Learning from the Sensors

    class Creature {  
       constructor() {
    -    //{inline} All of the creature’s properties
    +    /* All of the creature’s properties */
         
         // The health starts at 100.
         this.health = 100;
       } 
     
       update() {
    -    //{inline} The usual updating position, velocity, acceleration
    +    /* The usual updating position, velocity, acceleration */
     
         // Losing some health!
         this.health -= 0.25;
    diff --git a/content/images/07_ca/07_ca_1.jpg b/content/images/07_ca/07_ca_1.jpg
    new file mode 100644
    index 00000000..dabefc57
    Binary files /dev/null and b/content/images/07_ca/07_ca_1.jpg differ
    diff --git a/content/images/07_ca/07_ca_1.png b/content/images/07_ca/07_ca_1.png
    deleted file mode 100644
    index 3b2edaf7..00000000
    Binary files a/content/images/07_ca/07_ca_1.png and /dev/null differ