diff --git a/content/09_ga.html b/content/09_ga.html index e9c4d5b6..f52c841a 100644 --- a/content/09_ga.html +++ b/content/09_ga.html @@ -236,6 +236,8 @@

B) Create a mating pool.

Figure 9.2: A “wheel of fortune” where each slice of the wheel is sized according to a fitness value.
+

+

Spin the wheel and you’ll notice that Element B has the highest chance of being selected, followed by A, then E, then D, and finally C. This probability-based selection according to fitness is an excellent approach. One, it guarantees that the highest-scoring elements will be most likely to reproduce. Two, it does not entirely eliminate any variation from the population. Unlike with the elitist method, even the lowest-scoring element (in this case C) has a chance to pass its information down to the next generation. It’s quite possible (and often the case) that even low-scoring elements have a tiny nugget of genetic code that is truly useful and should not entirely be eliminated from the population. For example, in the case of evolving “to be or not to be”, we might have the following elements.
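Translated into code, this “wheel of fortune” is often called fitness-proportionate (or “roulette wheel”) selection. Here’s a sketch of the idea in plain JavaScript; the fitness values are made up purely to match the ordering described above.

```javascript
// Hypothetical fitness scores for elements A through E,
// chosen so that B > A > E > D > C
let population = [
  { name: "A", fitness: 3 },
  { name: "B", fitness: 4 },
  { name: "C", fitness: 0.5 },
  { name: "D", fitness: 1 },
  { name: "E", fitness: 1.5 },
];

// Spin the wheel: pick an element with probability proportional to fitness
function weightedSelection(population) {
  // The total fitness is the full circumference of the wheel
  let totalFitness = 0;
  for (let element of population) {
    totalFitness += element.fitness;
  }
  // Pick a random point along the wheel
  let spin = Math.random() * totalFitness;
  // Walk through the slices until the spin lands inside one
  for (let element of population) {
    spin -= element.fitness;
    if (spin <= 0) {
      return element;
    }
  }
  // Fallback for floating-point edge cases
  return population[population.length - 1];
}
```

Note that even C, with the smallest slice, is occasionally selected, so its genetic material is never entirely eliminated from the pool.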

diff --git a/content/10_nn.html b/content/10_nn.html index e5d98d82..ddd8c8a4 100644 --- a/content/10_nn.html +++ b/content/10_nn.html @@ -283,6 +283,13 @@

Coding the Perceptron

+
+

Example 10.1: The Perceptron

+
+
+
+
+

The error is the determining factor in how the perceptron’s weights should be adjusted. For any given weight, what I am looking to calculate is the change in weight, often called \Delta\text{weight} (or “delta” weight, delta being the Greek letter \Delta).

\text{new weight} = \text{weight} + \Delta\text{weight}

\Delta\text{weight} is calculated as the error multiplied by the input.
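As a sketch in code, applying this update rule to each weight might look like the following. The specific weights, inputs, and error value here are made up for illustration.

```javascript
// Hypothetical weights and inputs for a perceptron with two inputs
let weights = [0.5, -0.2];
let inputs = [12, 4];

// Adjust each weight according to: delta weight = error * input
function adjustWeights(weights, inputs, error) {
  for (let i = 0; i < weights.length; i++) {
    let deltaWeight = error * inputs[i];
    // new weight = weight + delta weight
    weights[i] += deltaWeight;
  }
}
```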

@@ -384,13 +391,6 @@

Coding the Perceptron

Now, it’s important to remember that this is just a demonstration. Remember the Shakespeare-typing monkeys? I asked the genetic algorithm to solve for “to be or not to be”—an answer I already knew. I did this to make sure the genetic algorithm worked properly. The same reasoning applies to this example. I don’t need a perceptron to tell me whether a point is above or below a line; I can do that with simple math. By using an example that I can easily solve without a perceptron, I can both demonstrate the algorithm of the perceptron and verify that it is working properly.
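That “simple math” is a single comparison against the line’s equation. For a hypothetical line described by y = 2x + 1, the known answer might be computed like this (keep in mind that in a p5.js canvas the y-axis points downward, so “above” on screen corresponds to a smaller y value; I’m ignoring that detail in this sketch):

```javascript
// A hypothetical line: y = 2x + 1
function lineY(x) {
  return 2 * x + 1;
}

// The known answer the perceptron should reproduce:
// +1 if the point is above the line, -1 if it is below
function desiredOutput(x, y) {
  if (y > lineY(x)) {
    return 1;
  } else {
    return -1;
  }
}
```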

Let’s look at the perceptron trained with an array of many points.

-
-

Example 10.1: The Perceptron

-
-
-
-
-
// The Perceptron
 let perceptron;
 //{!1} 2,000 training points
@@ -631,8 +631,160 @@ 

Choosing a Model

I’ll also point out that ml5.js is able to infer the inputs and outputs from the data itself, so those properties are not strictly necessary to include here in the options object. However, for the sake of clarity (and since I’ll need to specify them for later examples), I’m including them here.

The debug property, when set to true, enables a visual interface for the training process. It’s a helpful tool for spotting potential issues during training and for getting a better understanding of what's happening behind the scenes.

Training

-

What is NEAT (“NeuroEvolution of Augmenting Topologies”)?

-

+

Now that I have the data and a neural network initialized in the classifier variable, I’m ready to train the model! The thing is, I’m not really done with the data. In the “Data Collection and Preparation” section, I organized the data neatly into an array of objects, representing the x,y components of a vector paired with a string label. This format, while typical, isn't directly consumable by ml5.js for training. I need to be more specific about what the inputs are and what the outputs are for training the model. I certainly could have originally organized the data into a format that ml5.js recognizes, but I’m including this extra step as it’s much more likely to be what happens when you are using a “real” dataset that you’ve collected or sourced elsewhere.

+

ml5.js offers a fair amount of flexibility in the kinds of formats it will accept; the one I’ll use here involves two arrays—one for the inputs and one for the outputs.

+
for (let i = 0; i < data.length; i++) {
+  let item = data[i];
+  // An array of 2 numbers for the inputs
+  let inputs = [item.x, item.y];
+  // A single string "label" for the output
+  let outputs = [item.label];
+  //{!1} Add the training data to the classifier
+  classifier.addData(inputs, outputs);
+}
+

A term you will often hear when talking about data in machine learning is “shape.” What is the “shape” of your data?

+

The "shape" of data in machine learning describes its dimensions and structure. It indicates how the data is organized in terms of rows, columns, and potentially even deeper, into additional dimensions. In the context of machine learning, understanding the shape of your data is crucial because it determines how the model should be structured.

+

Here, the input data's shape is a one-dimensional array containing 2 numbers (representing x and y). The output data, similarly, is an array but instead contains a single string label. While this is a very small and simple example, it nicely mirrors many real-world scenarios where input features are numerically represented in an array, and outputs are string labels.

+

Oh dear, another term to unpack—features! In machine learning, the individual pieces of information used to make predictions are often called features. The term “feature” is chosen because it underscores the idea that distinct characteristics of the data are the most salient for the prediction. This will come into focus more clearly in future examples in this chapter.

+

Once the data has been passed into the classifier, ml5.js offers a helper function to normalize it.

+
// Normalize the data
+classifier.normalizeData();
+

As I’ve mentioned, normalizing data (adjusting the scale to a standard range) is a critical step in the machine learning process. However, if you recall from the data collection process, the hand-coded data was written with values that already range between -1 and 1. So, while calling normalizeData() here is likely redundant, it's important to demonstrate. While normalizing your data as part of the pre-processing step will absolutely work, the auto-normalization feature of ml5.js is a quite convenient alternative.

+

Ok, this subsection is called training. So now it’s time to train! Here’s the code:

+
+// The "train" method initiates the training process
+classifier.train(finishedTraining);
+
+// A callback function for when the training is complete
+function finishedTraining() {
+  console.log("Training complete!");
+}
+

Yes, that’s it! After all, the hard work has already been completed! The data was collected, prepared, and fed into the model. However, if I were to run the above code and then test the model, the results would probably be inadequate. Here is where it’s important to introduce another key term in machine learning: the epoch. The train() method tells the neural network to start the learning process. But how long should it train for? You can think of an epoch as one round of practice, one cycle of using the entire dataset to update the weights of the neural network. Generally speaking, the longer you train, the better the network will perform, but at a certain point there are diminishing returns. You can specify the number of epochs with an options object passed into train().

+
+//{!1} Setting the number of epochs for training
+let options = { epochs: 25 };
+classifier.train(options, finishedTraining);
+

There are other “hyperparameters” you can set in the options variable (the learning rate is one such example!), but I’m going to stick with the defaults. You can read more about customization options in the ml5.js reference. The second argument, finishedTraining(), is optional, but good to include as it’s a callback that runs when the training process has completed. This is useful for knowing when you can begin the next steps in your code. There is also an additional optional callback, typically named whileTraining(), that is triggered after each epoch, but for my purposes, just knowing when the training is done is plenty.

+
+

Callbacks

+

If you've worked with p5.js, you're already familiar with the concept of a callback even if you don't know it by that name. Think of the mousePressed() function. You define what should happen inside it, and p5.js takes care of calling it at the right moment, when the mouse is pressed.

+

A callback function in JavaScript operates on a similar principle. It's a function that you provide as an argument to another function, intending for it to be “called back” at a later time. They are needed for “asynchronous” operations, where you want your code to continue along with animating or doing other things while waiting for another task to finish. A classic example of this in p5.js is loading data into a sketch with loadJSON().
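Here’s a bare-bones illustration of the pattern in plain JavaScript, separate from any p5.js or ml5.js specifics. The loader below is a made-up stand-in that invokes its callback immediately; a real one would do so later, once a network request finishes.

```javascript
// A made-up stand-in for a function that does some work and then
// hands the result to the callback it was given
function loadDataMock(callback) {
  let data = { x: 0.99, y: 0.02, label: "right" };
  // A real loader would invoke the callback asynchronously,
  // once its work finishes; this mock calls it right away
  callback(data);
}

// The callback: I define what happens, the loader decides when
function gotData(data) {
  console.log("Loaded label: " + data.label);
}

// Pass the function itself (no parentheses!) as an argument
loadDataMock(gotData);
```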

+

In JavaScript, there's also a more recent approach for handling asynchronous operations known as "Promises." With Promises, you can use keywords like async and await to make your asynchronous code look more like traditional synchronous code. While ml5.js also supports this style, I’ll stick to using callbacks to stay aligned with p5.js style.

+
+

Evaluation

+

With debug set to true as part of the original call to ml5.neuralNetwork(), as soon as train() is called, a visual interface will appear, covering most of the p5.js page and canvas.

+
+
+

This panel or “Visor” represents the evaluation step, as shown in Figure X.X. The “visor” is part of TensorFlow.js and includes a graph that provides real-time feedback on the progress of the training. I’d like to focus on the “loss” plotted on the y-axis against the number of epochs along the x-axis.

+

So, what exactly is this "loss"? Loss is a measure of how far off the model's predictions are from the “correct” outputs provided by the training data. It quantifies the model’s total error. When training begins, it's common for the loss to be high because the model has yet to learn anything. As the model trains through more epochs, it should, ideally, get better at its predictions, and the loss should decrease. If the graph goes down as the epochs increase, this is a good sign!
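To make “loss” more concrete, here is a sketch of one common loss function, mean squared error. This isn’t necessarily the exact function used behind the scenes for classification (which typically involves a “cross-entropy” loss), but the idea of quantifying distance from the correct answers is the same.

```javascript
// A sketch of one common loss function: mean squared error.
// The loss is the average of the squared differences between
// the model's predictions and the correct target values.
function meanSquaredError(predictions, targets) {
  let sum = 0;
  for (let i = 0; i < predictions.length; i++) {
    let error = predictions[i] - targets[i];
    sum += error * error;
  }
  return sum / predictions.length;
}
```

A perfect model yields a loss of 0; the further off the predictions, the larger the value.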

+

Running the training for 200 epochs might strike you as a bit excessive, especially for such a tiny dataset. In a real-world scenario with more extensive data, I would probably use fewer epochs. However, because the dataset here is limited, the higher number of epochs ensures that our model gets enough "practice" with the data. Remember, this is a "toy" example, aiming to make the concepts clear rather than to produce a sophisticated machine learning model.

+

Below the graph, you will also see a "model summary" table. This provides details on the lower-level TensorFlow.js model architecture that ml5.js created behind the scenes. This summary details default layer names, neuron counts per layer, and an aggregate "parameters" count, referring to weights connecting the neurons.

+

Now, before moving on, I’d like to refer back to the data preparation step. There I mentioned the idea of splitting the data between “training” and “testing.” In truth, a full machine learning workflow would split the data into three categories:

+
  1. training: the primary dataset used to train the model
  2. validation: a subset of the data used to check the model during training
  3. testing: additional untouched data, never considered during the training process, used to determine the model’s final performance

With ml5.js, it’s possible to incorporate all three categories of data. However, I’m simplifying things here and focusing only on the training dataset. After all, my dataset has only 8 records in it; it’s much too small to divide into separate stages. For a more rigorous demonstration, this would be a terrible idea! Working only with training data risks the model “overfitting” the data. Overfitting is a term that describes when a machine learning model has learned the training data too well. In this case, it’s become so “tuned” to the specific details, peculiarities, and noise in that data that it is much less effective when working with new, unseen data. The best way to combat overfitting is to use validation data during the training process! If the model performs well on the training data but poorly on the validation data, it's a strong indicator that overfitting might be occurring.
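With a larger dataset, a manual split might look something like the following sketch. The splitData() function and the 80/20 ratio are my own arbitrary (though common) choices, and the sketch assumes the data has already been shuffled so the split isn’t biased by the order of collection.

```javascript
// Split a dataset into training and validation portions.
// Assumes the data is already shuffled; otherwise shuffle first.
function splitData(data, trainingFraction) {
  let cutoff = Math.floor(data.length * trainingFraction);
  return {
    training: data.slice(0, cutoff),
    validation: data.slice(cutoff),
  };
}

// For example, an 80/20 split:
// let { training, validation } = splitData(data, 0.8);
```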

+

ml5.js provides some automatic features to employ validation data; if you are inclined to go further, you can explore the full set of neural network examples at ml5js.org.

+

Parameter Tuning

+

After the evaluation step, there is typically an iterative process of adjusting "hyperparameters" to achieve the best performance from the model. The ml5.js library is designed to provide a higher-level, user-friendly interface to machine learning. So while it does offer some capabilities for parameter tuning (which you can explore in the ml5.js reference), it is not as geared toward low-level, fine-grained adjustments as some other frameworks might be. If you need that control, TensorFlow.js might ultimately be your best bet, since it offers a broader suite of tools and allows for lower-level control over the training process. For this demonstration—seeing a loss all the way down to 0.1 on the evaluation graph—I am satisfied with the result and happy to move on to deployment!

+

Deployment

+

This is it, all that hard work has paid off! Now it’s time to deploy the model. This typically involves integrating it into a separate application to make predictions or decisions based on new, unseen data. For this, ml5.js offers the convenience of save() and load() functions. After all, there’s no reason to re-train a model every single time you use it! You can download the model to a file in one sketch and then load it for use in a completely different one. However, in this tiny, toy example, I’m going to demonstrate deploying and utilizing the model in the same sketch where it was trained.

+

The model is saved in the classifier variable, so, in essence, it is already deployed. I know when training is complete because of the finishedTraining() callback, so I can use a boolean or other logic to engage the prediction stage of the code. In this example, I’ll create a global variable called status, which will display the progress of training and, ultimately, the predicted label on the canvas.

+
// When the sketch starts, it will show a status of "training"
+let status = "training";
+
+function draw() {
+  background(255);
+  textAlign(CENTER, CENTER);
+  textSize(64);
+  text(status, width / 2, height / 2);
+}
+
+// This is the callback for when training is complete, and the message changes to "ready"
+function finishedTraining() {
+  status = "ready";
+}
+

Once the model is trained, the classify() function can be used to send new data into the model for prediction. The format of the data sent to classify() should match the format of the data used in training, in this case two floating point numbers, representing the x and y components of a direction vector.

+
// Manually creating a vector
+let direction = createVector(1, 0);
+// Converting the x and y components into an input array
+let inputs = [direction.x, direction.y];
+// Asking the model to classify the inputs
+classifier.classify(inputs, gotResults);
+

The second argument of the classify() function is a callback. While it would be more convenient to receive the results back immediately and move on to the next line of code, just as with model loading and training, the results come back at a later time via a separate callback event.

+
function gotResults(error, results) {
+  console.log(results);
+}
+

The model’s prediction arrives in the form of an argument to the callback. Inside, you’ll find an array of the labels, sorted by “confidence.” Confidence refers to the probability assigned by the model to each label, representing how sure it is of that particular prediction. It ranges from 0 to 1, with values closer to 1 indicating higher confidence and values near 0 suggesting lower confidence.

+
[
+  {
+    "label": "right",
+    "confidence": 0.9669702649116516
+  },
+  {
+    "label": "up",
+    "confidence": 0.01878807507455349
+  },
+  {
+    "label": "down",
+    "confidence": 0.013948931358754635
+  },
+  {
+    "label": "left",
+    "confidence": 0.00029277068097144365
+  }
+]
+

In the example output here, the model is highly confident (approximately 96.7%) that the correct label is "right," while it has minimal confidence (0.03%) in the "left" label. The confidence values are normalized so that they add up to 100%.
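You can check that normalization yourself by summing the confidence values across the array; the total should come out to approximately 1. Here’s a quick sketch using the example output above:

```javascript
// The example results array from the output above
let results = [
  { label: "right", confidence: 0.9669702649116516 },
  { label: "up", confidence: 0.01878807507455349 },
  { label: "down", confidence: 0.013948931358754635 },
  { label: "left", confidence: 0.00029277068097144365 },
];

// Sum the confidence values across all labels
function totalConfidence(results) {
  let total = 0;
  for (let result of results) {
    total += result.confidence;
  }
  return total;
}
```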

+
+

Example 10.2: Gesture Classifier

+
+
+
+
+
+
+// Storing the start of a gesture when the mouse is pressed
+function mousePressed() {
+  start = createVector(mouseX, mouseY);
+}
+
+// Updating the end of a gesture as the mouse is dragged
+function mouseDragged() {
+  end = createVector(mouseX, mouseY);
+}
+
+// The gesture is complete when the mouse is released
+function mouseReleased() {
+  // Calculate and normalize a direction vector
+  let dir = p5.Vector.sub(end, start);
+  dir.normalize();
+  // Convert to an inputs array and classify
+  let inputs = [dir.x, dir.y];
+  classifier.classify(inputs, gotResults);
+}
+
+// Store the resulting label in the status variable for showing in the canvas
+function gotResults(error, results) {
+  status = results[0].label;
+}
+

Since the array is sorted by confidence, if I just want to use a single label as the prediction, I can access the first element of the array with results[0].label as in the gotResults() function in Example 10.2.

+
+

Exercise 10.4
Divide Example 10.2 into three different sketches: one for collecting data, one for training, and one for deployment. Use the ml5.neuralNetwork functions save() and load() to save the model to a file and load it back in.

+
+
+

Exercise 10.5
Expand the gesture recognition to classify a sequence of vectors, capturing the path of a longer mouse movement more accurately. Remember, your input data must have a consistent shape! You’ll have to decide how many vectors represent a gesture and store exactly that many, no more and no less, for each data point. While this approach can work, other machine learning models (such as recurrent neural networks) are specifically designed to handle sequential data and may offer more flexibility and accuracy.

+
+

What is NEAT (“NeuroEvolution of Augmenting Topologies”)?

flappy bird scenario (classification) vs. steering force (regression)?

features?

NeuroEvolution Steering

diff --git a/content/examples/10_nn/gesture_classifier/index.html b/content/examples/10_nn/gesture_classifier/index.html new file mode 100644 index 00000000..83eff963 --- /dev/null +++ b/content/examples/10_nn/gesture_classifier/index.html @@ -0,0 +1,14 @@ + + + + + + + + + + +
+ + + diff --git a/content/examples/10_nn/gesture_classifier/sketch.js b/content/examples/10_nn/gesture_classifier/sketch.js new file mode 100644 index 00000000..6768f4df --- /dev/null +++ b/content/examples/10_nn/gesture_classifier/sketch.js @@ -0,0 +1,80 @@ +// Step 1: load data or create some data +let data = [ + { x: 0.99, y: 0.02, label: "right" }, + { x: 0.76, y: -0.1, label: "right" }, + { x: -1.0, y: 0.12, label: "left" }, + { x: -0.9, y: -0.1, label: "left" }, + { x: 0.02, y: 0.98, label: "down" }, + { x: -0.2, y: 0.75, label: "down" }, + { x: 0.01, y: -0.9, label: "up" }, + { x: -0.1, y: -0.8, label: "up" }, +]; +let classifier; +let status = "training"; + +let start, end; + +function setup() { + createCanvas(640, 240); + // Step 2: set your neural network options + let options = { + task: "classification", + debug: true, + }; + + // Step 3: initialize your neural network + classifier = ml5.neuralNetwork(options); + + // Step 4: add data to the neural network + for (let i = 0; i < data.length; i++) { + let item = data[i]; + let inputs = [item.x, item.y]; + let outputs = [item.label]; + classifier.addData(inputs, outputs); + } + + // Step 5: normalize your data + classifier.normalizeData(); + + // Step 6: train your neural network + classifier.train({ epochs: 200 }, finishedTraining); +} +// Step 7: use the trained model +function finishedTraining() { + status = "ready"; +} + +// Step 8: make a classification + +function draw() { + background(255); + textAlign(CENTER, CENTER); + textSize(64); + text(status, width / 2, height / 2); + if (start && end) { + strokeWeight(8); + line(start.x, start.y, end.x, end.y); + } +} + +function mousePressed() { + start = createVector(mouseX, mouseY); +} + +function mouseDragged() { + end = createVector(mouseX, mouseY); +} + +function mouseReleased() { + let dir = p5.Vector.sub(end, start); + dir.normalize(); + let inputs = [dir.x, dir.y]; + console.log(inputs); + classifier.classify(inputs, gotResults); +} + +// Step 9: 
define a function to handle the results of your classification +function gotResults(error, results) { + status = results[0].label; + console.log(JSON.stringify(results,null,2)); +} diff --git a/content/examples/10_nn/gesture_classifier/style.css b/content/examples/10_nn/gesture_classifier/style.css new file mode 100644 index 00000000..9386f1c2 --- /dev/null +++ b/content/examples/10_nn/gesture_classifier/style.css @@ -0,0 +1,7 @@ +html, body { + margin: 0; + padding: 0; +} +canvas { + display: block; +} diff --git a/content/images/10_nn/10_nn_14.png b/content/images/10_nn/10_nn_14.png new file mode 100644 index 00000000..62aae538 Binary files /dev/null and b/content/images/10_nn/10_nn_14.png differ