
Commit

Merge pull request #870 from nature-of-code/notion-update-docs
5,8, Figure 10.13
shiffman committed Feb 27, 2024
2 parents 825db03 + 44690cf commit 7d2f0cb
Showing 3 changed files with 2 additions and 4 deletions.
1 change: 1 addition & 0 deletions content/05_steering.html
@@ -1165,6 +1165,7 @@ <h3 id="combining-behaviors">Combining Behaviors</h3>
<pre class="codesplit" data-code-language="javascript"> seek(target) {
let desired = p5.Vector.sub(target, this.position);
desired.setMag(this.maxspeed);

let steer = p5.Vector.sub(desired, this.velocity);
steer.limit(this.maxforce);

3 changes: 0 additions & 3 deletions content/08_fractals.html
@@ -531,10 +531,8 @@ <h3 id="exercise-86">Exercise 8.6</h3>
function branch(len) {
line(0, 0, 0, -len);
translate(0, -len);

//{!1} Each branch’s length shrinks by one-third.
len *= 0.67;

//{!1} Exit condition for the recursion!
if (len > 2) {
push();
@@ -567,7 +565,6 @@ <h3 id="example-86-a-recursive-tree">Example 8.6: A Recursive Tree</h3>
background(255);
// Map the angle to range from 0° to 90° (<code>HALF_PI</code>) according to <code>mouseX</code>.
angle = map(mouseX, 0, width, 0, HALF_PI);

// Start the tree from the bottom of the canvas.
translate(width / 2, height);
stroke(0);
2 changes: 1 addition & 1 deletion content/10_nn.html
@@ -531,11 +531,11 @@ <h2 id="putting-the-network-in-neural-network">Putting the “Network” in Neural Network</h2>
</figure>
<p>The fact that a perceptron can’t even solve something as simple as XOR may seem extremely limiting. But what if I made a network out of two perceptrons? If one perceptron can solve the linearly separable OR and one perceptron can solve the linearly separable NOT AND, then two perceptrons combined can solve the nonlinearly separable XOR.</p>
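To make that combination concrete, here is a small editorial sketch (not code from the book): XOR can be computed by feeding the same two inputs into an OR operation and a NOT AND operation, then combining their results with AND, mirroring how two perceptrons plus an output neuron could divide the work.

// Sketch (assumption, not the book's code): XOR built from OR and NOT AND.
function xor(a, b) {
  let orResult = a || b;           // OR is linearly separable
  let notAndResult = !(a && b);    // NOT AND is linearly separable
  return orResult && notAndResult; // together they yield XOR
}

// xor(true, false) and xor(false, true) return true;
// xor(true, true) and xor(false, false) return false.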
<p>When you combine multiple perceptrons, you get a <strong>multilayered perceptron</strong>, a network of many neurons (see Figure 10.13). Some are input neurons and receive the initial inputs, some are part of what’s called a <strong>hidden layer</strong> (as they’re connected to neither the inputs nor the outputs of the network directly), and then there are the output neurons, from which the results are read.</p>
<p>Up until now, I’ve been visualizing a singular perceptron with one circle representing a neuron processing its input signals. Now, as I move on to larger networks, it’s more typical to represent all the elements (inputs, neurons, outputs) as circles, with arrows that indicate the flow of data. In Figure 10.13, you can see the inputs and bias flowing into the hidden layer, which then flows to the output.</p>
<figure>
<img src="images/10_nn/10_nn_14.png" alt="Figure 10.13: A multilayered perceptron has the same inputs and output as the simple perceptron, but now it includes a hidden layer of neurons.">
<figcaption>Figure 10.13: A multilayered perceptron has the same inputs and output as the simple perceptron, but now it includes a hidden layer of neurons.</figcaption>
</figure>
<p>Up until now, I’ve been visualizing a singular perceptron with one circle representing a neuron processing its input signals. Now, as I move on to larger networks, it’s more typical to represent all the elements (inputs, neurons, outputs) as circles, with arrows that indicate the flow of data. In Figure 10.13, you can see the inputs and bias flowing into the hidden layer, which then flows to the output.</p>
<p>Training a simple perceptron is pretty straightforward: you feed the data through and evaluate how to change the input weights according to the error. With a multilayered perceptron, however, the training process becomes more complex. The overall output of the network is still generated in essentially the same manner as before: the inputs multiplied by the weights are summed and fed forward through the various layers of the network. And you still use the network’s guess to calculate the error (desired result – guess). But now many more connections exist between the layers of the network, each with its own weight. How do you know how much each neuron or connection contributed to the overall error of the network, and how should each one be adjusted?</p>
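As a rough illustration of that feed-forward step (a sketch with assumed names and shapes, not the book's or ml5.js's implementation), each hidden neuron sums its weighted inputs and applies an activation function, and the output neuron repeats the process on the hidden layer's results.

// Hypothetical feed-forward pass through one hidden layer;
// sigmoid() is one possible activation function.
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

function feedForward(inputs, hiddenWeights, outputWeights) {
  // Each hidden neuron: weighted sum of the inputs, then the activation.
  let hidden = hiddenWeights.map((weights) => {
    let sum = 0;
    for (let i = 0; i < inputs.length; i++) {
      sum += inputs[i] * weights[i];
    }
    return sigmoid(sum);
  });
  // The output neuron: weighted sum of the hidden values, then the activation.
  let total = 0;
  for (let i = 0; i < hidden.length; i++) {
    total += hidden[i] * outputWeights[i];
  }
  return sigmoid(total);
}

// The error is still calculated the same way as before:
// let error = desired - guess;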
<p>The solution to optimizing the weights of a multilayered network is <strong>backpropagation</strong>. This process takes the error and feeds it backward through the network so it can adjust the weights of all the connections in proportion to how much they’ve contributed to the total error. The details of backpropagation are beyond the scope of this book. The algorithm uses a variety of activation functions (one classic example is the sigmoid function) as well as some calculus. If you’re interested in continuing down this road and learning more about how backpropagation works, you can find my <a href="https://thecodingtrain.com/neural-network">“Toy Neural Network” project at the Coding Train website with accompanying video tutorials</a>. They go through all the steps of solving XOR using a multilayered feed-forward network with backpropagation. For this chapter, however, I’d instead like to get some help and phone a friend.</p>
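For a flavor of what backpropagation does with that error (a sketch for illustration only, not the Toy Neural Network code), here is the core update for a single output-layer weight, using the sigmoid's derivative to scale the adjustment.

// Assumed names for illustration: error is (desired - guess), output is the
// neuron's sigmoid output, hiddenValue is the value carried by this connection.
const learningRate = 0.1;

function updateOutputWeight(weight, error, output, hiddenValue) {
  // The sigmoid's derivative, written in terms of its own output: y * (1 - y)
  let gradient = error * output * (1 - output);
  // Nudge the weight in proportion to the value this connection carried.
  return weight + learningRate * gradient * hiddenValue;
}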
<h2 id="machine-learning-with-ml5js">Machine Learning with ml5.js</h2>
