From 44690cff8dc2a0ff5d6491a7c843d0e9e4e7e1b2 Mon Sep 17 00:00:00 2001
From: shiffman
Date: Tue, 27 Feb 2024 02:52:52 +0000
Subject: [PATCH] Notion - Update docs

---
 content/05_steering.html | 1 +
 content/08_fractals.html | 3 ---
 content/10_nn.html       | 2 +-
 3 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/content/05_steering.html b/content/05_steering.html
index dd0fb10f..11de0e8c 100644
--- a/content/05_steering.html
+++ b/content/05_steering.html
@@ -1165,6 +1165,7 @@

Combining Behaviors

  seek(target) {
     let desired = p5.Vector.sub(target, this.position);
     desired.setMag(this.maxspeed);
+
     let steer = p5.Vector.sub(desired, this.velocity);
     steer.limit(this.maxforce);
 
diff --git a/content/08_fractals.html b/content/08_fractals.html
index 82f9488f..a3443283 100644
--- a/content/08_fractals.html
+++ b/content/08_fractals.html
@@ -531,10 +531,8 @@ 

Exercise 8.6

function branch(len) {
  line(0, 0, 0, -len);
  translate(0, -len);
- //{!1} Each branch’s length shrinks by one-third.
  len *= 0.67;
- //{!1} Exit condition for the recursion!
  if (len > 2) {
    push();

@@ -567,7 +565,6 @@

Example 8.6: A Recursive Tree

background(255);
// Map the angle to range from 0° to 90° (HALF_PI) according to mouseX.
angle = map(mouseX, 0, width, 0, HALF_PI);
- // Start the tree from the bottom of the canvas.
translate(width / 2, height);
stroke(0);

diff --git a/content/10_nn.html b/content/10_nn.html
index f3b4aa2d..d0720c41 100644
--- a/content/10_nn.html
+++ b/content/10_nn.html
@@ -531,11 +531,11 @@

Putting the “Network” in Neural Network

The fact that a perceptron can’t even solve something as simple as XOR may seem extremely limiting. But what if I made a network out of two perceptrons? If one perceptron can solve the linearly separable OR and one perceptron can solve the linearly separable NOT AND, then two perceptrons combined can solve the nonlinearly separable XOR.
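
To make that combination concrete, here’s a rough sketch in plain JavaScript (my own illustration with made-up helper names, not code from the book) of XOR built from OR and NOT AND:

function logicalOr(a, b) {
  return a || b;
}

function logicalNotAnd(a, b) {
  return !(a && b);
}

// XOR is true only when OR is true and NOT AND is also true.
function logicalXor(a, b) {
  return logicalOr(a, b) && logicalNotAnd(a, b);
}

console.log(logicalXor(false, false)); // false
console.log(logicalXor(true, false));  // true
console.log(logicalXor(true, true));   // false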

When you combine multiple perceptrons, you get a multilayered perceptron, a network of many neurons (see Figure 10.13). Some are input neurons and receive the initial inputs, some are part of what’s called a hidden layer (as they’re connected to neither the inputs nor the outputs of the network directly), and then there are the output neurons, from which the results are read.

+

Up until now, I’ve been visualizing a singular perceptron with one circle representing a neuron processing its input signals. Now, as I move on to larger networks, it’s more typical to represent all the elements (inputs, neurons, outputs) as circles, with arrows that indicate the flow of data. In Figure 10.13, you can see the inputs and bias flowing into the hidden layer, which then flows to the output.

Figure 10.13: A multilayered perceptron has the same inputs and output as the simple perceptron, but now it includes a hidden layer of neurons.
-


Training a simple perceptron is pretty straightforward: you feed the data through and evaluate how to change the input weights according to the error. With a multilayered perceptron, however, the training process becomes more complex. The overall output of the network is still generated in essentially the same manner as before: the inputs multiplied by the weights are summed and fed forward through the various layers of the network. And you still use the network’s guess to calculate the error (desired result – guess). But now there are many more connections between the layers of the network, each with its own weight. How do you know how much each neuron or connection contributed to the overall error of the network, and how it should be adjusted?
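
Here’s a loose sketch of that forward pass and error calculation (the function names, layer sizes, weights, and sigmoid activation below are illustrative assumptions, not the book’s code):

function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

function feedForward(inputs, hiddenWeights, hiddenBiases, outputWeights, outputBias) {
  // Each hidden neuron sums its weighted inputs plus a bias, then applies the activation function.
  let hidden = hiddenWeights.map((weights, i) => {
    let sum = hiddenBiases[i];
    for (let j = 0; j < inputs.length; j++) {
      sum += inputs[j] * weights[j];
    }
    return sigmoid(sum);
  });
  // The output neuron repeats the process with the hidden layer's activations.
  let total = outputBias;
  for (let i = 0; i < hidden.length; i++) {
    total += hidden[i] * outputWeights[i];
  }
  return sigmoid(total);
}

// The error is still (desired result - guess), but responsibility for that error
// is now spread across every weight in both layers.
let guess = feedForward([1, 0], [[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2], [0.7, -0.6], 0.05);
let error = 1 - guess;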

The solution to optimizing the weights of a multilayered network is backpropagation. This process takes the error and feeds it backward through the network so it can adjust the weights of all the connections in proportion to how much they’ve contributed to the total error. The details of backpropagation are beyond the scope of this book. The algorithm uses a variety of activation functions (one classic example is the sigmoid function) as well as some calculus. If you’re interested in continuing down this road and learning more about how backpropagation works, you can find my “Toy Neural Network” project at the Coding Train website with accompanying video tutorials. They go through all the steps of solving XOR using a multilayered feed-forward network with backpropagation. For this chapter, however, I’d instead like to get some help and phone a friend.
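
Just to give a flavor of “in proportion to how much they’ve contributed” (and leaving out the calculus the book defers to those tutorials), a hypothetical adjustment of only the hidden-to-output weights could look like this, reusing the sigmoid and feedForward() sketch above; the names and learning rate are mine:

function sigmoidDerivative(y) {
  // For the sigmoid, the derivative can be written in terms of the output value itself.
  return y * (1 - y);
}

// Nudge each hidden-to-output weight in proportion to its share of the error:
// a larger hidden activation and a larger error produce a larger adjustment.
function adjustOutputWeights(outputWeights, hidden, guess, error, learningRate) {
  let gradient = error * sigmoidDerivative(guess) * learningRate;
  return outputWeights.map((w, i) => w + gradient * hidden[i]);
}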

Machine Learning with ml5.js