I had been using these concepts informally in my own projects but had never taken the time to closely examine the science behind the algorithms or learn object-oriented techniques to formalize their implementation. That very semester, I also enrolled in Foundations of Generative Art Systems, a course taught by Philip Galanter that focused on the theory and practice of generative art, covering topics such as chaos, cellular automata, genetic algorithms, neural networks, and fractals. Both Tu’s and Galanter’s courses opened my eyes to the world of simulation algorithms—techniques that carried me through the next several years of work and teaching and that served as the foundation and inspiration for this book.
But another piece of the puzzle is missing from this story.
Galanter’s course was mostly theory based, while Tu’s was taught using Macromedia Director and the Lingo programming language. That semester, I learned many of the algorithms by translating them into C++ (the language I was using quite awkwardly at the time, well before C++ creative coding environments like openFrameworks and Cinder had arrived). Toward the end of the semester, however, I discovered something called Processing. Processing was in alpha then (version 0055), and having had some experience with Java, I was intrigued enough to ask the question, Could this open source, artist-friendly programming language and environment be the right place to develop a suite of tutorials and examples about programming and simulation? With the support of the ITP and Processing communities, I embarked on what has now been an almost 20-year journey of teaching coding.
I’d like to first thank Red Burns, who led ITP through its first 30 years and passed away in 2013. Red supported and encouraged me in my work for well over 10 years. Dan O’Sullivan, the associate dean of Emerging Media at the Tisch School of the Arts, has been a mentor and was the first to suggest that I try teaching a course on Processing, giving me a reason to start assembling programming tutorials in the first place. Shawn Van Every, current chair of the department, was my officemate during my first year of teaching full-time and has been a rich source of help and inspiration over the years. I am grateful for the support and encouragement of ITP professor Luisa Pereira. Her work on her upcoming book, Code of Music, was deeply inspiring. Her innovative approach to interactive educational materials helped me rethink and redefine my own writing and publishing process.
The vibrant and nurturing environment of ITP has been shaped by so many incredible individuals. Whether they were colleagues from the early days of this book’s conception or newer faces bringing fresh waves of inspiration, I’m so thankful to the full-time faculty of ITP/IMA: Ali Santana, Allison Parrish, Blair Simmons, Clay Shirky, Craig Protzel, Danny Rozin, David Rios, Gabe Barcia-Colombo, Katherine Dillon, Marianne Petit, Marina Zurkow, Matt Romein, Mimi Yin, Nancy Hechinger, Pedro Galvao Cesar de Oliveira, Sarah Rothberg, Sharon De La Cruz, Tom Igoe, and Yeseul Song.
The dedicated and tireless staff at ITP and NYU’s Interactive Media Arts (IMA) play such a vital role in keeping the ecosystem thriving and making everything possible. My thanks go to the many people I’ve worked with over the years: Adrian Mandeville, Brian Kim, Daniel Tsadok, Dante Delgiacco, Edward Gordon, Emma Asumeng, George Agudow, John Duane, Lenin Compres, Luke Bunn, Marlon Evans, Matt Berger, Megan Demarest, Midori Yasuda, Phil Caridi, Rob Ryan, Scott Broussard, and Shirley Lin.
A special note of thanks goes to ITP adjunct faculty members Ellen Nickles and Nuntinee Tansrisakul, who co-taught an online, asynchronous version of the Nature of Code course with me in 2021, amid the peak of a global pandemic. Their contributions and the ideas from that semester greatly enriched the course materials.
The students of ITP and IMA, too numerous to mention, have been an amazing source of feedback throughout this process. Much of the material in this book comes from my course of the same title, which I’ve now taught 17 times. I have stacks of draft printouts of the book with notes scrawled in the margins, as well as a vast archive of student emails with corrections, comments, and generous words of encouragement.
I would like to spotlight several students who worked as graduate associates on the Nature of Code materials. Through their work with the ITP/IMA Equitable Syllabus project, Briana Jones and Chaski No provided extraordinary research support that expanded the book’s concepts and references. As the graduate assistant for the inaugural undergraduate version of the Nature of Code class, Gracy Whelihan offered invaluable support and feedback, and always reminded me of the wonder of random numbers.
Jason Gao and Stuti Mohgaonkar worked on the build systems for the book materials, inventing new workflows for writing and editing. Elias Jarzombek also warrants a mention for his advice and technical support, stemming from the Code of Music book project.
After graduating, Jason Gao continued to design and develop the book’s build system and website. If you head there now, you will see the fruits of his many talents: a full version of the book that seamlessly integrates with the p5.js web editor. It’s a realization far beyond my initial vision.
The interior of the book and the website were meticulously designed by Tuan Huang. Tuan began developing layout ideas while taking the Nature of Code class in the spring of 2023. After graduating, Tuan further refined the design, working to develop a consistent visual language across the many elements of the book. Her minimal and elegant aesthetics greatly enhanced the book’s visual appeal and accessibility. A special thanks also to the OpenMoji project—the open source emoji and icon project (Creative Commons license CC BY-SA 4.0)—for providing a delightful and comprehensive set of emojis used throughout this book for various elements.
I’m also indebted to the energetic and supportive creative coding community and the Processing Foundation. I wouldn’t be writing this book if it weren’t for Casey Reas and Ben Fry, who created Processing in 2001 and co-founded the Processing Foundation. They’ve dedicated over 20 years to building and maintaining the software and its community. I’ve learned half of what I know simply from reading through the Processing source code and documentation; the elegant simplicity of the Processing language, website, and IDE is the original source of inspiration for all my work and teaching.
Lauren Lee McCarthy, the creator of p5.js, planted the seed that made this book’s transformation to JavaScript possible. She’s a tireless champion for inclusion and access in open source, and her approach to community building has been profoundly inspiring to me. Cassie Tarakajian invented the p5.js web editor, a heroic undertaking that has made it possible to collect and organize all the example code in the book.
My heartfelt thanks extend to the other current and former members (along with Casey, Ben, and Lauren) of the Processing board of directors: Dorothy Santos, Johanna Hedva, Kate Hollenbach, and Xin Xin. A special acknowledgment to the project leads, staff, and alumni of the foundation, who have each played a pivotal role in shaping and propelling the community and its projects: Andres Colubri, Charles Reinhardt, evelyn masso, Jesse C. Thompson, Jonathan Feinberg, Moira Turner, Qianqian Ye, Rachel Lim, Raphaël de Courville, Saber Khan, Suhyun (Sonia) Choi, Toni Pizza, Tsige Tafesse, and Xiaowei R. Wang.
In Chapter 10, I introduce the ml5.js project, a companion library to p5.js that aims to bring machine learning capabilities to creative coders in a friendly and approachable manner. Thank you to the numerous researchers and students at ITP/IMA who contributed to its development: Apoorva Avadhana, Ashley Lewis, Bomani McClendon, Christina Dacanay, Cristóbal Valenzuela, Lydia Jessup, Miaoye Que, Micaelle Lages, Michael Weinberg, Orpheas Kofinakos, Ozi Chukwukeme, Sam Krystal, Yining Shi, and Ziyuan (Peter) Lin. Thank you to Professor J.H. Moon, Professor Gottfried Haider, and Quinn Fangqing He from NYU Shanghai, who additionally supported the library’s development and graciously read early drafts of the neural network chapters. Linda Paiste deserves a mention for her volunteer efforts in improving the codebase. Finally, I’d like to especially thank Joey K. Lee, who provided valuable encouragement and feedback on the Nature of Code book itself in tandem with developing ml5.js.
I would also like to thank AI researcher David Ha, whose research on neuroevolution (see “Additional Resources” on the book’s website) inspired me to create examples implementing the technique with ml5.js and add a new chapter to this book.
For the last 10 years, I’ve spent the bulk of my time making video tutorials on my YouTube channel, the Coding Train. I’m incredibly grateful for the immense support and collaboration from so many people in keeping the engines running and on the tracks (as much as I work very hard to veer off), including Chloe Desaulles, Cy X, David Snyder, Dusk Virkus, Elizabeth Perez, Jason Heglund, Katie Chan, Kline Gareth, Kobe Liesenborgs, and Mathieu Blanchette. A special thanks to Melissa Rodriguez, who helped research and secure permissions for the images you see at the start of each chapter.
My thanks also extend to the Nebula streaming platform and its CEO, Dave Wiskus, for their unwavering support, and to Nebula creator Grady Hillhouse, who recommended I collaborate with No Starch Press to actually print this darn thing. I wouldn’t be able to reach such a wide audience without the YouTube platform itself; a special thanks goes to my illustrious YouTube partner manager, Dean Kowalski, as well as to Doreen Tran, who helps lead YouTube Skilling for North America.
I have many thoughtful, smart, generous, and kind viewers. I’d like to especially thank Dipam Sen, Francis Turmel, Kathy McGuiness, and Simon Tiger, who offered advice, feedback, corrections, technical support, and more. The book is so much better because of them.
I also would like to thank many people who collaborated with me over 10 years ago on the 2012 edition: David Wilson (book cover and design), Rune Madsen and Steve Klise (build system and website), Shannon Fry (editing), Evan Emolo, Miguel Bermudez, and all the Kickstarter backers who helped fund the work.
A special mention goes to Zannah Marsh, who worked tirelessly to create over 100 illustrations for the 2012 version of this book and by some miracle agreed to do it all again for this new edition. I especially want to thank her for her patience and willingness to go with the flow as I changed my mind on certain illustrations way too many times. And the cats! I smile from ear to ear every time I see them typing.
Now, the real reason we’re all here. If it weren’t for No Starch Press, I’m almost certain you’d never be reading these words. Sure, you might be perusing updated tutorials on the website, but the collaboration, support, and thoughtful and kind deadline setting of the team was the thing that really pushed me over the hump. I want to express my gratitude to editor Nathan Heidelberger, who is responsible for this book making any sense at all, not to mention for all the legitimately funny jokes. (The blame for any bad puns lies squarely with me.) Thank you to Jasper Palfree, the technical editor, who patiently explained to me, as many times as it took for me to grok, the difference between linear and rotational motion (and clarified countless other science and code concepts). I also want to extend special thanks to copyeditor Sharon Wilkey, whose meticulous attention to detail polished every sentence and provided the perfect finishing touches. Additionally, thank you to Audrey Doyle for her keen eye in proofreading. Thank you to the founder of No Starch, Bill Pollock, who taught me everything I need to know about shopping at Trader Joe’s; managing editor Jill Franklin, for her kind and patient support; and the production team, led by senior production editor Jennifer Kepler and production manager Sabrina Plomitallo-González, who accommodated my unusual Notion → GitHub → PDF workflow.
Finally, a heartfelt thank-you to my wife, Aliki Caloyeras, who is always right. Seriously, it’s like a superpower at this point. I love you. And to my children, Elias, who graciously allows me to maintain a semblance of dignity by not utterly obliterating me at basketball and video games, and Olympia, who reminds me “I’m feeling 22” when we play backgammon and cards and laugh together. I’d also like to thank my father, Bernard Shiffman, who generously lent his mathematical expertise and provided feedback along the way, as well as my mother, Doris Yaffe Shiffman, and brother, Jonathan Shiffman, who were always tremendously supportive in asking the question, “How is the book coming along?”
Context-Free Code
Occasionally, you’ll find lines of code hanging out on the page without a surrounding function or context. These snippets are there to illustrate a point, not necessarily to be run as is. They might represent a concept, a tiny piece of an algorithm, or a coding technique.
// RGB values to make the circles pink
fill(240, 99, 164);
Notice that this context-free snippet matches the indentation of fill(255) in the previous “complete” snippet. I’ll do this when the code has been established to be part of something demonstrated previously. Admittedly, this won’t always work out so cleanly or perfectly, but I’m doing my best!
Snipped Code
Be on the lookout for the scissors! This design element indicates that a code snippet is a continuation of a previous piece or will be continued after some explanatory text. Sometimes it’s not actually being continued but is just cut off because all the code isn’t relevant to the discussion at hand. The scissors are there to say, “Hey, there might be more to this code above or below, or at the very least, this is a part of something bigger!” Here’s how this might play out with some surrounding body text.
The first step to building a p5.js sketch is to create a canvas:
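// One possible starting point (the canvas size here is arbitrary)
function setup() {
  createCanvas(640, 240);
}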
Getting Help and Submitting Feedback
Coding can be tough and frustrating, and the ideas in this book aren’t always straightforward. You don’t have to go it alone. There’s probably someone else reading right now who would love to co-organize a study group or a book club where you can meet, chat, and share your struggles and successes. If you don’t find a local community for traveling this journey together, what about an online one? Two places I’d suggest are the official Processing forums and the Coding Train Discord server.
More important, I want to see what you make! You can share your ideas by submitting to the passenger showcase on the Coding Train website or in the channels on the aforementioned Discord. A hello in a YouTube comment is always welcome (although to be honest, it’s often best not to read the comments on YouTube), and feel free to tag me on whatever platform the future of social media has to offer—whichever one is the friendliest and least toxic! I want to enjoy all the bloops that swim in your ecosystem. Whether they leap triumphantly over a wave of creativity or make a tiny splash in a pond of learning, let’s bask in the ripples they send through the nature of coding!
Random Walks
The random walk instigates the two questions that I’ll ask over and over again throughout this book: “How do you define the rules that govern the behavior of your objects?” and then, “How do you implement these rules in code?”
You’ll periodically need a basic understanding of randomness, probability, and Perlin noise for this book’s projects. The random walk will allow me to demonstrate key points that will come in handy later.
An object in JavaScript is an entity that has both data and functionality. In this case, a Walker object should have data about its position on the canvas and functionality such as the capability to draw itself or take a step.
A class is the template for building actual instances of objects. Think of a class as the cookie cutter and objects as the cookies themselves. To create a Walker object, I’ll begin by defining the Walker class—what it means to be a walker.
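A minimal sketch of such a class might look like this (the exact stepping rule is one of many possibilities):
class Walker {
  constructor() {
    // Start in the middle of the canvas.
    this.x = width / 2;
    this.y = height / 2;
  }

  show() {
    stroke(0);
    point(this.x, this.y);
  }

  step() {
    // One possible rule: move a small random amount in each direction.
    this.x += random(-1, 1);
    this.y += random(-1, 1);
  }
}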
A Normal Distribution of Random Numbers
Another way to create a nonuniform distribution of random numbers is to use a normal distribution, where the numbers cluster around an average value. To see why this is useful, let’s go back to that population of simulated monkeys and assume your sketch generates a thousand Monkey objects, each with a random height value of 200 to 300 (as this is a world of monkeys that have heights of 200 to 300 pixels):
let h = random(200, 300);
Is this an accurate algorithm for creating a population of monkey heights? Think of a crowded sidewalk in New York City. Pick any person off the street, and it may appear that their height is random. Nevertheless, it’s not the kind of random that random() produces by default. People’s heights aren’t uniformly distributed; there are many more people of about average height than there are very tall or very short ones. To accurately reflect this population, random heights close to the mean (another word for average) should be more likely to be chosen, while outlying heights (very short or very tall) should be rarer.
That’s exactly how a normal distribution (sometimes called a Gaussian distribution, after mathematician Carl Friedrich Gauss) works. A graph of this distribution is informally known as a bell curve. The curve is generated by a mathematical function that defines the probability of any given value occurring as a function of the mean (often written as μ, the Greek letter mu) and standard deviation (σ, the Greek letter sigma).
In the case of height values from 200 to 300, you probably have an intuitive sense of the mean (average) as 250. However, what if I were to say that the standard deviation is 3? Or 15? What does this mean for the numbers? The graphs depicted in Figure 0.2 should give you a hint.
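In p5.js, the randomGaussian() function produces numbers with exactly this kind of distribution. For example, monkey heights with a mean of 250 and a standard deviation of 15 could be generated like this:
// A normally distributed height: mean 250, standard deviation 15
let h = randomGaussian(250, 15);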
Example 0.5: An Accept-Reject Distribution
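The heart of the example can be sketched as a function that keeps picking candidate values until one qualifies:
function acceptReject() {
  // Repeat until a qualifying random value is found.
  while (true) {
    // Pick a candidate value.
    let r1 = random(1);
    // Use the value itself as its probability of acceptance.
    let probability = r1;
    // Pick a second random value to test against.
    let r2 = random(1);
    // Does the candidate qualify? If so, the search is over!
    if (r2 < probability) {
      return r1;
    }
  }
}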
While the accept-reject algorithm does work for generating custom distributions of random numbers, this technique is not particularly efficient. It can lead to a considerable amount of wasted computation when a large number of random values are rejected, especially when the qualifying probability is very low. When I get to genetic algorithms in Chapter 9, I’ll take a different, more optimal approach.
Exercise 0.6
Use a custom probability distribution to vary the size of the random walker’s steps. The step size can be determined by influencing the range of values picked with a qualifying random value. Can you map the probability to a quadratic function by making the likelihood that a value is picked equal to the value squared?
let position = createVector(100, 100);
let velocity = createVector(1, 3.3);
Notice that the position and velocity vector objects aren’t created as you might expect, by invoking a constructor function. Instead of writing new p5.Vector(x, y), I’ve called createVector(x, y). The createVector() function is included in p5.js as a helper function to take care of details behind the scenes upon creation of the vector. Except in special circumstances, you should always create p5.Vector objects with createVector(). I should note that p5.js functions such as createVector() can’t be executed outside of setup() or draw(), since the library won’t yet be loaded. I’ll demonstrate how to address this in Example 1.2.
Now that I have two vector objects (position and velocity), I’m ready to implement the vector-based algorithm for motion: position = position + velocity. In Example 1.1, without vectors, the code reads as follows:
// Add each speed to each position.
x = x + xspeed;
y = y + yspeed;
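With vectors, the same update becomes a single call to the add() method:
// Add the velocity vector to the position vector.
position.add(velocity);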
Exercise 1.3
Extend Example 1.2 into 3D. Can you get a sphere to bounce around a box?
More Vector Math
Addition was really just the first step. Many mathematical operations are commonly used with vectors. Here’s a comprehensive table of the operations available as methods in the p5.Vector class. Remember, these are not stand-alone functions, but rather methods associated with the p5.Vector class. When you see the word this in the following table, it refers to the specific vector the method is operating on.
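As a small taste of that table, here are a few of the methods in action:
let v = createVector(3, 4);
// Multiply the vector by a scalar: v is now (6, 8).
v.mult(2);
// The magnitude (length) of (6, 8) is 10.
print(v.mag());
// Normalize the vector to a length of 1: (0.6, 0.8).
v.normalize();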
Vector Subtraction
Having already covered addition, I’ll now turn to subtraction. This one’s not so bad; just take the plus sign and replace it with a minus! Before tackling subtraction itself, however, consider what it means for a vector \vec{v} to become -\vec{v}. The negative version of the scalar 3 is –3. A negative vector is similar: the polarity of each of the vector’s components is inverted. So if \vec{v} has the components (x, y), then -\vec{v} is (–x, –y). Visually, this results in an arrow of the same length as the original vector pointing in the opposite direction, as depicted in Figure 1.7.
Subtraction, then, is the same as addition, only with the second vector in the equation treated as a negative version of itself:
\vec{u} - \vec{v} = \vec{u} + -\vec{v}
Just as vectors are added by placing them “tip to tail”—that is, aligning the tip (or endpoint) of one vector with the tail (or start point) of the next—vectors are subtracted by reversing the direction of the second vector and placing it at the end of the first, as in Figure 1.8.
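In p5.js, the static sub() method performs this operation, returning a new vector without modifying either operand:
let u = createVector(5, 2);
let v = createVector(3, 4);
// w is (2, -2), the vector pointing from v toward u.
let w = p5.Vector.sub(u, v);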
Notice in particular that Toxiclibs.js vectors are created by calling the Vec2D constructor with the new keyword, rather than by using a factory method like Matter.Vector.create() or createVector().
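For example:
// The Vec2D constructor is invoked directly with the new keyword.
let v = new Vec2D(200, 100);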
The Physics World
The classes to describe the world and its particles and springs in Toxiclibs.js are found in toxi.physics2d. I’m also going to use a Rect object (to describe a generic rectangle boundary) and GravityBehavior to apply a global gravity force to the world. Including Vec2D, I now have all the following class aliases:
// The necessary geometry classes: vectors, rectangles
let { Vec2D, Rect } = toxi.geom;
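And a sketch of the remaining aliases the text describes (the namespaces follow Toxiclibs.js conventions; VerletSpring2D is included because springs appear shortly):
// The physics classes: the world, particles, and springs
let { VerletPhysics2D, VerletParticle2D, VerletSpring2D } = toxi.physics2d;
// Behaviors such as global gravity
let { GravityBehavior } = toxi.physics2d.behaviors;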
Particles
class Particle {
  constructor(x, y, r) {
    this.body = Bodies.circle(x, y, r);
  }
}
This technique was somewhat redundant, since Matter.js keeps track of the bodies in its world. However, it allowed me to manage which body is which (and therefore how each body should be drawn) without having to rely on iterating through Matter.js’s internal lists. I might take the same approach with Toxiclibs.js, making my own Particle class that stores a reference to a VerletParticle2D object. This way, I’ll be able to give the particles custom properties and draw them however I want. I’d probably write the code as follows:
class Particle extends VerletParticle2D {
  constructor(x, y, r) {
    //{!1} A VerletParticle needs an initial (x, y) position, but it has no geometry, so the r is used only for drawing.
    super(x, y);
    this.r = r;
  }
}
Example 6.13: Soft-Body Character
physics = new VerletPhysics2D();
physics.setWorldBounds(new Rect(0, 0, width, height));
physics.addBehavior(new GravityBehavior(new Vec2D(0, 0.5)));
// Particles at vertices of the character
particles.push(new Particle(200, 25));
particles.push(new Particle(400, 25));
particles.push(new Particle(350, 125));
particles.push(new Particle(400, 225));
particles.push(new Particle(200, 225));
particles.push(new Particle(250, 125));
// Springs connecting vertices of the character
springs.push(new Spring(particles[0], particles[1]));
springs.push(new Spring(particles[1], particles[2]));
springs.push(new Spring(particles[2], particles[3]));
springs.push(new Spring(particles[3], particles[4]));
springs.push(new Spring(particles[4], particles[5]));
springs.push(new Spring(particles[5], particles[0]));
Attraction and Repulsion Behaviors
When it came time to create an attraction example for Matter.js, I showed how the Matter.Body class includes an applyForce() method. All I then needed to do was calculate the attraction force F_g = (G \times m_1 \times m_2) \div d^2 as a vector and apply it to the body. Similarly, the Toxiclibs.js VerletParticle2D class also includes a method called addForce() that can apply any calculated force to a particle.
However, Toxiclibs.js takes this idea one step further by offering built-in functionality for common forces (called behaviors) such as attraction! For example, if you add an AttractionBehavior object to a particular VerletParticle2D object, all other particles in the physics world will experience an attraction force toward that particle.
Say I create an instance of my Particle class (which extends the VerletParticle2D class):
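// A sketch of creating the particle: an (x, y) position plus a radius for drawing
let particle = new Particle(320, 180, 8);
physics.addParticle(particle);
Attaching an AttractionBehavior to that particle might then look like this, where 100 and 0.1 are illustrative values for the behavior’s radius and strength:
physics.addBehavior(new AttractionBehavior(particle, 100, 0.1));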
The low-resolution shape that emerges in Figure 7.12 is the Sierpiński triangle. Named after the Polish mathematician Wacław Sierpiński, it’s a famous example of a fractal. I’ll examine fractals more closely in Chapter 8, but briefly, they’re patterns in which the same shapes repeat themselves at different scales. To give you a better sense of this, Figure 7.13 shows the CA over several more generations and with a wider grid size.
Figure 7.13: The Wolfram elementary CA
And Figure 7.14 shows the CA again, this time with cells that are just a single pixel wide so the resolution is much higher.
Figure 7.14: The Wolfram elementary CA at higher resolution
Take a moment to let the enormity of what you’ve just seen sink in. Using an incredibly simple system of 0s and 1s, with little neighborhoods of three cells, I was able to generate a shape as sophisticated and detailed as the Sierpiński triangle. This is the beauty of complex systems.
Of course, this particular result didn’t happen by accident. I picked the set of rules in Figure 7.8 because I knew the pattern it would generate. The mere act of defining a ruleset doesn’t guarantee visually exciting results. In fact, for a 1D CA in which each cell can have two possible states, there are exactly 256 possible rulesets to choose from, and only a handful are on par with the Sierpiński triangle. How do I know there are 256 possible rulesets? It comes down to a little more binary math: a neighborhood of three cells, each with two possible states, has 2^3 = 8 possible configurations, and each configuration can map to either a 0 or a 1, giving 2^8 = 256 possible rulesets.
Defining Rulesets
The ruleset in Figure 7.16 could be called rule 01011010, but Wolfram instead refers to it as rule 90. Where does 90 come from? To make ruleset naming even more concise, Wolfram uses decimal (or base 10) representations rather than binary. To name a rule, you convert its 8-bit binary number to its decimal counterpart. The binary number 01011010 translates to the decimal number 90, and therefore it’s named rule 90.
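JavaScript can confirm the conversion:
// Interpreting the ruleset name as a binary (base 2) number
parseInt("01011010", 2); // 90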
Since there are 256 possible combinations of eight 0s and 1s, there are also 256 unique rulesets. Let’s check out another one. How about rule 11011110, or more commonly, rule 222? Figure 7.17 shows how it looks.
Figure 7.17: The Wolfram elementary CA, rule 222
Programming an Elementary CA
function rules(a, b, c) {
  if (a === 1 && b === 1 && c === 1) return ruleset[0];
  else if (a === 1 && b === 1 && c === 0) return ruleset[1];
  // . . . the remaining configurations follow the same pattern . . .
  else if (a === 0 && b === 0 && c === 1) return ruleset[6];
  else if (a === 0 && b === 0 && c === 0) return ruleset[7];
}
I like writing the rules() function this way because it describes line by line exactly what’s happening for each neighborhood configuration. However, it’s not a great solution. After all, what if a CA has four possible states (0 through 3) instead of two? Suddenly there are 64 possible neighborhood configurations. And with 10 possible states, 1,000 configurations. And just imagine programming von Neumann’s 29 possible states. I’d be stuck typing out thousands upon thousands of else...if statements!
Another solution, though not quite as transparent, is to convert the neighborhood configuration (a 3-bit number) into a regular integer and use that value as the index into the ruleset array. This can be done as follows, using JavaScript’s built-in parseInt() function:
function rules(a, b, c) {
  // A quick way to concatenate three numbers into a string
  let s = "" + a + b + c;
  // Parse the string as a binary (base 2) number.
  let index = parseInt(s, 2);
  return ruleset[index];
}
This is great, but one more piece is still missing: What good is a CA if you can’t see it?
Drawing an Elementary CA
The standard technique for drawing an elementary CA is to stack the generations one on top of the other, and to draw each cell as a square that’s black (for state 1) or white (for state 0), as in Figure 7.21. Before implementing this particular visualization, however, I’d like to point out two things.
Figure 7.21: Rule 90 visualized as a stack of generations
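Here’s a sketch of that approach, assuming a cells array holding the current generation, a cell width w, and a generation counter that increments each frame:
for (let i = 0; i < cells.length; i++) {
  // State 1 is drawn black; state 0 is drawn white.
  fill(cells[i] === 1 ? 0 : 255);
  // Each new generation is drawn one row below the previous one.
  square(i * w, generation * w, w);
}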
Class 4: Complexity
In Chapter 5, I introduced the concept of a complex system and used flocking to demonstrate how simple rules can result in emergent behaviors. Class 4 CAs remarkably exhibit the characteristics of complex systems and are the key to simulating phenomena such as forest fires, traffic patterns, and the spread of diseases. Research and applications of CA consistently emphasize the importance of class 4 as the bridge between CA and nature.
The Game of Life
The next step is to move from a 1D CA to a 2D one: the Game of Life. This will introduce additional complexity—each cell will have a bigger neighborhood—but with the complexity comes a wider range of possible applications. After all, most of what happens in computer graphics lives in two dimensions, and this chapter demonstrates how to apply CA thinking to a 2D p5.js canvas.
In 1970, Martin Gardner wrote a Scientific American article that documented mathematician John Conway’s new Game of Life, describing it as recreational mathematics: “To play life you must have a fairly large checkerboard and a plentiful supply of flat counters of two colors. It is possible to work with pencil and graph paper but it is much easier, particularly for beginners, to use counters and a board.”
The Game of Life has become something of a computational cliché, as myriad projects display the game on LEDs, screens, projection surfaces, and so on. But practicing building the system with code is still valuable for a few reasons.
For one, the Game of Life provides a good opportunity to practice skills with 2D arrays, nested loops, and more. Perhaps more important, however, this CA’s core principles are tied directly to a core goal of this book: simulating the natural world with code. The Game of Life algorithm and technical implementation will provide you with the inspiration and foundation to build simulations that exhibit the characteristics and behaviors of biological systems of reproduction.
Unlike von Neumann, who created an extraordinarily complex system of states and rules, Conway wanted to achieve a similar lifelike result with the simplest set of rules possible. Let’s look at how Gardner outlined Conway’s goals.
There should be no initial pattern for which there is a simple proof that the population can grow without limit.
There should be initial patterns that apparently do grow without limit.
Exercise 7.7
The code in Example 7.2 is convenient but not particularly memory efficient. It creates a new 2D array for every frame of animation! This matters very little for a p5.js application, but if you were implementing the Game of Life on a microcontroller or mobile device, you’d want to be more careful. One solution is to have only two arrays and constantly swap them, writing the next set of states into whichever one isn’t the current array. Implement this particular solution.
Object-Oriented Cells
Over the course of this book, I’ve built examples of systems of objects that have properties and move about the canvas. In this chapter, although I’ve been talking about a cell as if it were an object, I haven’t used the principles of object orientation in the code. This has worked because a cell is such an enormously simple object; its only property is its state, a single 0 or 1. However, I could further develop CA systems in plenty of ways beyond the simple models discussed here, and often these may involve keeping track of multiple properties for each cell. For example, what if a cell needs to remember its history of states? Or what if you want to apply motion and physics to a CA and have the cells move about the canvas, dynamically changing their neighbors from frame to frame?
To accomplish any of these ideas (and more), it would be helpful to see how to treat each cell as an object, rather than as a single 0 or 1 in an array. In a Game of Life simulation, for example, I’ll no longer want to initialize each cell like this:
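// Each cell as a bare number: 0 or 1
board[i][j] = floor(random(2));
Instead, each cell can be initialized as an object (a sketch; the Cell constructor and its arguments are one possible design):
board[i][j] = new Cell(floor(random(2)), i, j);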
Notice that the tree has a single trunk with branches connected at its end. Each one of those branches has branches at its end, and those branches have branches, and so on. And what if you were to pluck one branch from the tree and examine it more closely on its own, as in Figure 8.3?
Figure 8.3: Zooming in on one branch of the fractal tree
The Monster Curve
// For every segment . . .
for (let segment of segments) {
//{!4} . . . add four new lines. How do you calculate the start and end points of each?
  next.push(new KochLine(????, ????));
  next.push(new KochLine(????, ????));
  next.push(new KochLine(????, ????));
  next.push(new KochLine(????, ????));
}
// The next segments!
segments = next;
Why Use Genetic Algorithms?
Figure 9.1: Infinite cats typing at infinite keyboards
This is my meow-velous twist on the infinite monkey theorem, which is stated as follows: a monkey hitting keys randomly on a typewriter will eventually type the complete works of Shakespeare, given an infinite amount of time. It’s only a theory because in practice the number of possible combinations of letters and words makes the likelihood of the monkey actually typing Shakespeare minuscule. To put it in perspective, even if the monkey had started typing at the beginning of the universe, the probability that by now it would have produced just Hamlet, to say nothing of the entire works of Shakespeare, is still absurdly unlikely.
Consider a cat named Clawdius. Clawdius types on a reduced typewriter containing only 27 characters: the 26 English letters plus the spacebar. The probability of Clawdius hitting any given key is 1 in 27.
Next, consider the phrase “to be or not to be that is the question” (for simplicity, I’m ignoring capitalization and punctuation). The phrase is 39 characters long, including spaces. If Clawdius starts typing, the chance he’ll get the first character right is 1 in 27. Since the probability he’ll get the second character right is also 1 in 27, he has a 1 in 729 (27 \times 27) chance of landing the first two characters in correct order. (This follows directly from our discussion of probability in Chapter 0.) Therefore, the probability that Clawdius will type the full phrase is 1 in 27 multiplied by itself 39 times, or (1/27)^{39}. That equals a probability of . . .
1 \text{ in } \text{66,555,937,033,867,822,607,895,549,241,096,482,953,017,615,834,735,226,163}
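You can verify this number with JavaScript’s BigInt arithmetic:
// 27 possible characters in each of 39 positions: 27^39 combinations
const combinations = 27n ** 39n;
// Prints 66555937033867822607895549241096482953017615834735226163
console.log(combinations.toString());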
Needless to say, even hitting just this one phrase, let alone an entire play, let alone all 38 Shakespeare plays (yes, even The Two Noble Kinsmen) is highly unlikely. Even if Clawdius were a computer simulation and could type a million random phrases per second, for Clawdius to have a 99 percent probability of eventually getting just the one phrase right, he would have to type for 9,719,096,182,010,563,073,125,591,133,903,305,625,605,017 years. (For comparison, the universe is estimated to be a mere 13,750,000,000 years old.)
The point of all these unfathomably large numbers isn’t to give you a headache, but to demonstrate that a brute-force algorithm (typing every possible random phrase) isn’t a reasonable strategy for arriving randomly at “to be or not to be that is the question.” Enter GAs, which start with random phrases and swiftly find the solution through simulated evolution, leaving plenty of time for Clawdius to savor a cozy catnap.
To be fair, this particular problem (to arrive at the phrase “to be or not to be that is the question”) is a ridiculous one. Since you know the answer already, all you need to do is type it. Here’s a p5.js sketch that solves the problem:
let s = "to be or not to be that is the question";
Example 9.1: Genetic Algorithm: Evolving Shakespeare
// Mutation rate
let mutationRate = 0.01;
// Population size
let populationSize = 150;
// Population array
let population = [];
// Target phrase
let target = "to be or not to be";

function setup() {
  createCanvas(640, 360);
  // Step 1: Initialization
  for (let i = 0; i < populationSize; i++) {
    population[i] = new DNA(target.length);
  }
}

function draw() {
  //{!0} Step 2: Selection
  //{!3} Step 2a: Calculate fitness.
  for (let phrase of population) {
    phrase.calculateFitness(target);
  }

  //{!1} Step 2b: Build the mating pool.
  let matingPool = [];
  for (let phrase of population) {
    //{!4} Add each member n times according to its fitness score.
    let n = floor(phrase.fitness * 100);
    for (let j = 0; j < n; j++) {
      matingPool.push(phrase);
    }
  }

  // Step 3: Reproduction
  for (let i = 0; i < population.length; i++) {
    let parentA = random(matingPool);
    let parentB = random(matingPool);
    // Step 3a: Crossover
    let child = parentA.crossover(parentB);
    // Step 3b: Mutation
    child.mutate(mutationRate);
    //{!1} Note that you are overwriting the population with the new
    // children. When draw() loops, you will perform all the same
    // steps with the new population of children.
    population[i] = child;
  }
  // Step 4: Repetition. Go back to the beginning of draw()!
}
Putting the “Network” in Neural Network
The fact that a perceptron can’t even solve something as simple as XOR may seem extremely limiting. But what if I made a network out of two perceptrons? If one perceptron can solve the linearly separable OR and one perceptron can solve the linearly separable NOT AND, then two perceptrons combined can solve the nonlinearly separable XOR.
When you combine multiple perceptrons, you get a multilayered perceptron, a network of many neurons (see Figure 10.13). Some are input neurons and receive the initial inputs, some are part of what’s called a hidden layer (as they’re connected to neither the inputs nor the outputs of the network directly), and then there are the output neurons, from which the results are read.
Up until now, I’ve been visualizing a singular perceptron with one circle representing a neuron processing its input signals. Now, as I move on to larger networks, it’s more typical to represent all the elements (inputs, neurons, outputs) as circles, with arrows that indicate the flow of data. In Figure 10.13, you can see the inputs and bias flowing into the hidden layer, which then flows to the output.
Figure 10.13: A multilayered perceptron has the same inputs and output as the simple perceptron, but now it includes a hidden layer of neurons.
Training a simple perceptron is pretty straightforward: you feed the data through and evaluate how to change the input weights according to the error. With a multilayered perceptron, however, the training process becomes more complex. The overall output of the network is still generated in essentially the same manner as before: the inputs multiplied by the weights are summed and fed forward through the various layers of the network. And you still use the network’s guess to calculate the error (desired result – guess). But now so many connections exist between layers of the network, each with its own weight. How do you know how much each neuron or connection contributed to the overall error of the network, and how it should be adjusted?
The solution to optimizing the weights of a multilayered network is backpropagation. This process takes the error and feeds it backward through the network so it can adjust the weights of all the connections in proportion to how much they’ve contributed to the total error. The details of backpropagation are beyond the scope of this book. The algorithm uses a variety of activation functions (one classic example is the sigmoid function) as well as some calculus. If you’re interested in continuing down this road and learning more about how backpropagation works, you can find my “Toy Neural Network” project at the Coding Train website with accompanying video tutorials. They go through all the steps of solving XOR using a multilayered feed-forward network with backpropagation. For this chapter, however, I’d instead like to get some help and phone a friend.
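For example, the sigmoid function mentioned here squashes any input into the range (0, 1):
// The classic sigmoid activation function
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}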
Machine Learning with ml5.js
That friend is ml5.js. This machine learning library can manage the details of complex processes like backpropagation so you and I don’t have to worry about them. As I mentioned earlier in the chapter, ml5.js aims to provide a friendly entry point for those who are new to machine learning and neural networks, while still harnessing the power of Google’s TensorFlow.js behind the scenes.
To use ml5.js in a sketch, you must import it via a <script> element in your index.html file, much as you did with Matter.js and Toxiclibs.js in Chapter 6:
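<!-- The URL is a placeholder; check the ml5.js website for the current version and path -->
<script src="https://unpkg.com/ml5@latest/dist/ml5.min.js"></script>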
Classification and Regression
Rather than picking from a discrete set of output options, the goal of the neural network is now to guess a number—any number. Will the house use 30.5 kilowatt-hours of electricity that day? Or 48.7 kWh? Or 100.2 kWh? The output prediction could be any value from a continuous range.
Network Design
Knowing what problem you’re trying to solve (step 0) also has a significant bearing on the design of the neural network—in particular, on its input and output layers. I’ll demonstrate with another classic “Hello, world!” classification example from the field of data science and machine learning: the iris dataset. This dataset, which can be found in the Machine Learning Repository at the University of California, Irvine, originated from the work of American botanist Edgar Anderson.
Anderson collected flower data over many years across multiple regions of the United States and Canada. For more on the origins of this famous dataset, see “The Iris Data Set: In Search of the Source of Virginica” by Antony Unwin and Kim Kleinman. After carefully analyzing the data, Anderson built a table to classify iris flowers into three distinct species: Iris setosa, Iris versicolor, and Iris virginica (see Figure 10.17).
Figure 10.17: Three distinct species of iris flowers
You might also notice the absence of explicit bias nodes in this diagram. While biases play an important role in the output of each neuron, they’re often left out of visual representations to keep the diagrams clean and focused on the primary data flow. (The ml5.js library will ultimately manage the biases for me internally.)
The neural network’s goal is to “activate” the correct output for the input data, just as the perceptron would output a +1 or –1 for its single binary classification. In this case, the output values are like signals that help the network decide which iris species label to assign. The highest computed value activates to signify the network’s best guess about the classification.
The key takeaway here is that a classification network should have as many inputs as there are values for each item in the dataset, and as many outputs as there are categories. As for the hidden layer, the design is much less set in stone. The hidden layer in Figure 10.18 has five nodes, but this number is entirely arbitrary. Neural network architectures can vary greatly, and the number of hidden nodes is often determined through trial and error or other educated guessing methods (called heuristics). In the context of this book, I’ll be relying on ml5.js to automatically configure the architecture based on the input and output data.
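To make this concrete, here’s a minimal sketch of how I might configure such a classification network with ml5.js. The four input property names are hypothetical stand-ins for the dataset’s measurements:
let options = {
  task: "classification",
  // One input per value in the dataset (hypothetical property names)
  inputs: ["sepalLength", "sepalWidth", "petalLength", "petalWidth"],
  // A single output label; ml5.js expands this into one output
  // neuron per category it finds in the data (three species here)
  outputs: ["species"],
};
let irisClassifier = ml5.neuralNetwork(options);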
What about the inputs and outputs in a regression scenario, like the household electricity consumption example I mentioned earlier? I’ll go ahead and make up a dataset for this scenario, with values representing the occupants and size of the house, the day’s temperature, and the corresponding electricity usage. This is much like a synthetic dataset, given that it’s not data collected for a real-world scenario—but whereas synthetic data is generated automatically, here I’m manually inputting numbers from my own imagination:
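In code, such a handmade dataset might be represented as an array of objects, each entry pairing the three inputs with the electricity usage to be predicted. Every value here is invented, echoing the kilowatt-hour figures from earlier:
let data = [
  // occupants, square footage, temperature (°F) -> usage (kWh), all made up
  { occupants: 2, size: 1200, temperature: 70, kwh: 30.5 },
  { occupants: 4, size: 2000, temperature: 95, kwh: 48.7 },
  { occupants: 5, size: 2500, temperature: 100, kwh: 100.2 },
];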
ml5.js Syntax
let energyPredictor = ml5.neuralNetwork(options);
You can set many other properties of the model through the options object. For example, you could specify the number of hidden layers between the inputs and outputs (there are typically several), the number of neurons in each layer, which activation functions to use, and more. In most cases, however, you can leave out these extra settings and let ml5.js make its best guess on how to design the model based on the task and data at hand.
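For instance, a configuration that spells out the layers explicitly might look something like the following sketch. The layers option follows TensorFlow.js layer conventions, and the specific values are illustrative guesses, not recommendations:
let options = {
  task: "regression",
  // Hypothetical architecture: one hidden layer of 16 neurons, then the output layer
  layers: [
    { type: "dense", units: 16, activation: "relu" },
    { type: "dense", activation: "sigmoid" },
  ],
};
let energyPredictor = ml5.neuralNetwork(options);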
Building a Gesture Classifier
I’ll now walk through the steps of the machine learning life cycle with an example problem well suited for p5.js, building all the code for each step along the way using ml5.js. I’ll begin at step 0 by articulating the problem. Imagine for a moment that you’re working on an interactive application that responds to gestures. Maybe the gestures are ultimately meant to be recorded via body tracking, but you want to start with something much simpler—a single stroke of the mouse (see Figure 10.20).
Figure 10.20: A single mouse gesture as a vector between a start and end point
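Before bringing in ml5.js, here’s a rough p5.js sketch (my own illustrative stand-in, not the chapter’s final code) for capturing that stroke as a vector:
let start;

function setup() {
  createCanvas(640, 240);
}

function mousePressed() {
  // Remember where the stroke begins
  start = createVector(mouseX, mouseY);
}

function mouseReleased() {
  // The gesture is the vector pointing from the start of the stroke to its end
  let end = createVector(mouseX, mouseY);
  let gesture = p5.Vector.sub(end, start);
  console.log(gesture.x, gesture.y);
}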
The Bird Brain
Once I have the next pipe, I can create the four inputs:
let inputs = [
  // y-position of the bird
  this.y,
  // y-velocity of the bird
  this.velocity,
  // top of the next pipe’s opening and its x-position (assumed property names)
  nextPipe.top,
  nextPipe.x,
];
GPU vs. CPU
Graphics processing unit (GPU): Originally designed to accelerate the rendering of graphics, a GPU specializes in performing enormous numbers of calculations in parallel, which suits the repetitive matrix math that neural networks require.
Central processing unit (CPU): Often considered the brain or general-purpose heart of a computer, a CPU handles a wider variety of tasks than the specialized GPU, but it isn’t built to do as many tasks simultaneously.
But there’s a catch! Transferring data to and from the GPU introduces overhead. In most cases, the gains from the GPU’s parallel processing more than offset this overhead, but for a tiny model like the one here, copying data to the GPU and back actually slows the neural network. Calling ml5.setBackend("cpu") tells ml5.js to run the neural network computations on the CPU instead. At least in this simple case of tiny bird brains, this is the more efficient choice.
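In a p5.js sketch, one natural place for that call is setup(), before any neural networks are created:
function setup() {
  createCanvas(640, 240);
  // The bird brains are tiny, so skip the GPU and its data-transfer overhead
  ml5.setBackend("cpu");
}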
Selection: Flappy Bird Fitness
Once I have a diverse population of birds, each with its own neural network, the next step in the GA is selection. Which birds should pass on their genes (in this case, neural network weights) to the next generation? In the world of Flappy Bird, the measure of success is the ability to stay alive the longest by avoiding the pipes. This is the bird’s fitness. A bird that dodges many pipes is considered more fit than one that crashes into the first pipe it encounters.
To track each bird’s fitness, I’ll add two properties to the Bird class, fitness and alive:
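Here’s a minimal sketch of that addition. The constructor’s other properties (position, velocity, the neural network “brain”) are assumed from the earlier Bird code:
class Bird {
  constructor() {
    // ...existing properties such as this.y, this.velocity, and this.brain...
    // Fitness accumulates the longer the bird stays alive
    this.fitness = 0;
    // Tracks whether the bird is still in play
    this.alive = true;
  }
}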