diff --git a/notebooks/ConcolicFuzzer.ipynb b/notebooks/ConcolicFuzzer.ipynb
index 278359440..4381d08fd 100644
--- a/notebooks/ConcolicFuzzer.ipynb
+++ b/notebooks/ConcolicFuzzer.ipynb
@@ -5813,7 +5813,7 @@
    "source": [
     "## Concolic Grammar Fuzzing\n",
     "\n",
-    "The concolic framework can be used directly in grammar-based fuzzing. We implement a class `ConcolicGrammarFuzzer` wihich does this."
+    "The concolic framework can be used directly in grammar-based fuzzing. We implement a class `ConcolicGrammarFuzzer` which does this."
    ]
   },
   {
@@ -6814,7 +6814,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Exercise 1: Implment a Concolic Float Proxy Class\n"
+    "### Exercise 1: Implement a Concolic Float Proxy Class\n"
    ]
   },
   {
diff --git a/notebooks/ControlFlow.ipynb b/notebooks/ControlFlow.ipynb
index 9162bf35e..c6bdf3f47 100644
--- a/notebooks/ControlFlow.ipynb
+++ b/notebooks/ControlFlow.ipynb
@@ -712,7 +712,7 @@
     "        myparents[0].add_calls(mid)\n",
     "\n",
     "        # these need to be unlinked later if our module actually defines these\n",
-    "        # functions. Otherwsise we may leave them around.\n",
+    "        # functions. Otherwise we may leave them around.\n",
     "        # during a call, the direct child is not the next\n",
     "        # statement in text.\n",
     "        for c in p:\n",
diff --git a/notebooks/DynamicInvariants.ipynb b/notebooks/DynamicInvariants.ipynb
index 4e92e2827..7ff430e5d 100644
--- a/notebooks/DynamicInvariants.ipynb
+++ b/notebooks/DynamicInvariants.ipynb
@@ -1586,7 +1586,7 @@
    "source": [
     "### All-in-one Annotation\n",
     "\n",
-    "Let us bring all of this together in a single class `TypeAnnotator` that first tracks calls of functions and then allows accesing the AST (and the source code form) of the tracked functions annotated with types. The method `typed_functions()` returns the annotated functions as a string; `typed_functions_ast()` returns their AST."
+    "Let us bring all of this together in a single class `TypeAnnotator` that first tracks calls of functions and then allows accessing the AST (and the source code form) of the tracked functions annotated with types. The method `typed_functions()` returns the annotated functions as a string; `typed_functions_ast()` returns their AST."
    ]
   },
   {
@@ -2853,7 +2853,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Now for the actual annotation. `preconditions()` returns the preconditions from the mined invariants (i.e., those propertes that do not depend on the return value) as a string with annotations:"
+    "Now for the actual annotation. `preconditions()` returns the preconditions from the mined invariants (i.e., those properties that do not depend on the return value) as a string with annotations:"
    ]
   },
   {
diff --git a/notebooks/Fuzzer.ipynb b/notebooks/Fuzzer.ipynb
index 65ff79350..bce588c5a 100644
--- a/notebooks/Fuzzer.ipynb
+++ b/notebooks/Fuzzer.ipynb
@@ -895,7 +895,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The above code block precludes the possiblity of removing `~` (your home directory), this is because the probability of generating the character '~' is not 1/32; it is 0/32. The characters are created by calling chr(random.randrange(char_start, char_start + char_range)), where the default value of char_start is 32 and the default value of char_range is 32. The documentation for chr reads, \"[r]eturn the string representing a character whose Unicode code point is the integer i.\" The Unicode code point for '~' is 126 and therefore, not in the interval [32, 64). \n",
+    "The above code block precludes the possibility of removing `~` (your home directory), because the probability of generating the character '~' is not 1/32; it is 0/32. The characters are created by calling chr(random.randrange(char_start, char_start + char_range)), where the default value of char_start is 32 and the default value of char_range is 32. The documentation for chr reads, \"[r]eturn the string representing a character whose Unicode code point is the integer i.\" The Unicode code point for '~' is 126 and therefore not in the interval [32, 64). \n",
     "\n",
     "If the code were to be changed so that char_range = 95 then the probability of obtaining the character '~' would be 1/94 , thus resulting in the probability of the event of deleting all files being equal to 0.000332\n",
     "\n",
@@ -920,7 +920,7 @@
     "\n",
     "For the space the probability is 1 out of 32.\n",
     "\n",
-    "We have to include the term for the probability of obtaining at least 2 characters which is required for the scenario of obtaining a space as the second character. This probability is 99/101 because it is calculated as (1 - probabilty of obtaining a single character or no character at all), so it is equal to 1-(2/101).\n",
+    "We have to include the term for the probability of obtaining at least 2 characters, which is required for the scenario of obtaining a space as the second character. This probability is 99/101 because it is calculated as (1 - probability of obtaining a single character or no character at all), so it is equal to 1-(2/101).\n",
     "\n",
     "Therefore, the probability calculation for the event of deleting all files in the case of having a space for the second character is:\n",
     "\n",
diff --git a/notebooks/FuzzingWithConstraints.ipynb b/notebooks/FuzzingWithConstraints.ipynb
index 54550d784..adcb59d3a 100644
--- a/notebooks/FuzzingWithConstraints.ipynb
+++ b/notebooks/FuzzingWithConstraints.ipynb
@@ -420,7 +420,7 @@
     "One very general solution to this problem would be to use _unrestricted_ grammars rather than the _context-free_ grammars we have used so far.\n",
     "In an unrestricted grammar, one can have multiple symbols also on the left-hand side of an expansion rule, making them very flexible.\n",
     "In fact, unrestricted grammars are _Turing-universal_, meaning that they can express any feature that could also be expressed in program code; and they could thus check and produce arbitrary strings with arbitrary features. (If they finish, that is – unrestricted grammars also suffer from the halting problem.)\n",
-    "The downside is that there is literally no programming support for unrestricted grammars – we'd have to implement all arithmetics, strings, and other functionality from scratch in a grammar, which is - well - not fun."
+    "The downside is that there is literally no programming support for unrestricted grammars – we'd have to implement all arithmetic, strings, and other functionality from scratch in a grammar, which is – well – not fun."
    ]
   },
   {
@@ -445,7 +445,7 @@
    "metadata": {},
    "source": [
     "In recent work, _Dominic Steinhöfel_ and _Andreas Zeller_ (one of the authors of this book) have presented an infrastructure that allows producing inputs with _arbitrary properties_, but without having to go through the trouble of implementing producers or checkers.\n",
-    "Instead, they suggest a dedicated _language_ for specifiying inputs, named [ISLa](https://rindphi.github.io/isla/) (for input specification language).\n",
+    "Instead, they suggest a dedicated _language_ for specifying inputs, named [ISLa](https://rindphi.github.io/isla/) (for input specification language).\n",
     "_ISLa_ combines a standard context-free _grammar_ with _constraints_ that express _semantic_ properties of the inputs and their elements.\n",
     "ISLa can be used as a _fuzzer_ (producing inputs that satisfy the constraints) as well as a _checker_ (checking inputs whether they satisfy the given constraints)."
    ]
diff --git a/notebooks/Guide_for_Authors.ipynb b/notebooks/Guide_for_Authors.ipynb
index 13fab4cc2..0b4c9ec75 100644
--- a/notebooks/Guide_for_Authors.ipynb
+++ b/notebooks/Guide_for_Authors.ipynb
@@ -153,7 +153,7 @@
     "The derived material for the book ends up in the `docs/` folder, from where it is eventually pushed to the [fuzzingbook website](http://www.fuzzingbook.org/). This site allows\n",
     "* reading the chapters online,\n",
     "* launching interactive Jupyter notebooks using the binder service, and\n",
-    "* accesssing code and slide formats.\n",
+    "* accessing code and slide formats.\n",
     "\n",
     "Use `make publish` to create and update the site."
    ]
diff --git a/notebooks/InformationFlow.ipynb b/notebooks/InformationFlow.ipynb
index 5b58daacd..0779c0319 100644
--- a/notebooks/InformationFlow.ipynb
+++ b/notebooks/InformationFlow.ipynb
@@ -2214,7 +2214,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "More complex origining such as *bitmap origins* are possible where a single character may result from multiple origined character indexes (such as *checksum* operations on strings). We do not consider these in this chapter."
+    "More complex origin tracking such as *bitmap origins* is possible, where a single character may result from multiple origined character indexes (such as *checksum* operations on strings). We do not consider these in this chapter."
    ]
   },
   {
diff --git a/notebooks/Parser.ipynb b/notebooks/Parser.ipynb
index f92b7281b..7674dec90 100644
--- a/notebooks/Parser.ipynb
+++ b/notebooks/Parser.ipynb
@@ -4111,7 +4111,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Using it is as folows:"
+    "It is used as follows:"
    ]
   },
   {
diff --git a/notebooks/ProbabilisticGrammarFuzzer.ipynb b/notebooks/ProbabilisticGrammarFuzzer.ipynb
index f9cba9a3a..81b136414 100644
--- a/notebooks/ProbabilisticGrammarFuzzer.ipynb
+++ b/notebooks/ProbabilisticGrammarFuzzer.ipynb
@@ -1895,7 +1895,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "To have our probabilistic grammar fuzzer focus on _uncommon_ features, we _change the learned probabilities_ such that commonly occuring features (i.e., those with a high learned probability) get a low probability, and vice versa: The last shall be first, and the first last. A particularly simple way to achieve such an _inversion_ of probabilities is to _swap_ them: The alternatives with the highest and lowest probability swaps their probabilities, as so the alternatives with the second-highest and second-lowest probability, the alternatives with the third highest and lowest, and so on."
+ "To have our probabilistic grammar fuzzer focus on _uncommon_ features, we _change the learned probabilities_ such that commonly occurring features (i.e., those with a high learned probability) get a low probability, and vice versa: The last shall be first, and the first last. A particularly simple way to achieve such an _inversion_ of probabilities is to _swap_ them: The alternatives with the highest and lowest probability swaps their probabilities, as so the alternatives with the second-highest and second-lowest probability, the alternatives with the third highest and lowest, and so on." ] }, { diff --git a/notebooks/Project_MutationFuzzing.ipynb b/notebooks/Project_MutationFuzzing.ipynb index b014d99c6..e27b515db 100644 --- a/notebooks/Project_MutationFuzzing.ipynb +++ b/notebooks/Project_MutationFuzzing.ipynb @@ -27,7 +27,7 @@ "\n", "While fuzzers can run for days in a row to cover considerable behavior, the goal of this project is to utilize mutation fuzzing to cover as much code as possible during a specified number of generations. \n", "\n", - "Our target is the [svglib](https://pypi.org/project/svglib/) SVG rendering library written in python. For an easier integration with the library we provide a wrapped function __parse_svg(string)__, which receives a string with the SVG content and invokes the parsing library. To ensure that all converted elements are correct, the wrapper function internally converts the parsed SVG into PDF and PNG formats. Finally, the wrapper function returns an _RLG Drawing_ object if the conversion was successfull or None if it wasn't." + "Our target is the [svglib](https://pypi.org/project/svglib/) SVG rendering library written in python. For an easier integration with the library we provide a wrapped function __parse_svg(string)__, which receives a string with the SVG content and invokes the parsing library. To ensure that all converted elements are correct, the wrapper function internally converts the parsed SVG into PDF and PNG formats. Finally, the wrapper function returns an _RLG Drawing_ object if the conversion was successful or None if it wasn't." ] }, { @@ -386,7 +386,7 @@ "source": [ "## Obtaining the population coverage\n", "\n", - "In order to obtain the overal coverage achieved by the fuzzer's population we will adapt the [population_coverage](Coverage.ipynb) function from the lecture.\n", + "In order to obtain the overall coverage achieved by the fuzzer's population we will adapt the [population_coverage](Coverage.ipynb) function from the lecture.\n", "\n", "The following code calculates the overall coverage from a fuzzer's population:" ] diff --git a/notebooks/Project_Search_Based_WebFuzzer.ipynb b/notebooks/Project_Search_Based_WebFuzzer.ipynb index 834fa0bf7..b32591b95 100644 --- a/notebooks/Project_Search_Based_WebFuzzer.ipynb +++ b/notebooks/Project_Search_Based_WebFuzzer.ipynb @@ -769,7 +769,7 @@ "## Coverage of Web App \n", "In this project, the coverage of the web app is defined as _web reachability_, i.e. the number of reached pages on the site. \n", "\n", - "We reachability is a proxy measurement of the number of validation schemes passed/failed and also a proxy measurement of brach coverage in the `handle_order()` method. " + "We reachability is a proxy measurement of the number of validation schemes passed/failed and also a proxy measurement of branch coverage in the `handle_order()` method. 
" ] }, { @@ -1206,7 +1206,7 @@ "metadata": {}, "source": [ "## Your Tasks\n", - "For each input, you are epxected to produce a set of urls to reach a targeted web page, i.e. generate inputs that will fulfill the input validation requirements necessary to reach a specific web page. Overall, all web pages will be targets, i.e. both error and normal pages. \n", + "For each input, you are expected to produce a set of urls to reach a targeted web page, i.e. generate inputs that will fulfill the input validation requirements necessary to reach a specific web page. Overall, all web pages will be targets, i.e. both error and normal pages. \n", "\n", "Your task is to implement your own custom selection, fitness and mutation functions for the genetic algorithm, in order to fulfill the input validation and web reachabilty requirements." ] @@ -1223,7 +1223,7 @@ "* Ensure that your implementation accounts for any arbitrary regex and any random initial population of inputs.\n", "* For debugging purposes: unmute the `webbrowser()` to obtain logging information, i.e. set `webbrowser(url, mute=False)` \n", "* Do not implement in any other section except the section below. \n", - "* Gracefully handle exceptions and errors resulting from your impelementation.\n", + "* Gracefully handle exceptions and errors resulting from your implementation.\n", "* __Remember the input validation regex and initial input population could be arbitrary, do not hard code for a specific regex or input__." ] }, @@ -1308,7 +1308,7 @@ "source": [ "# Evaluation code\n", "\n", - "The code in the following section will be used to evaluate your impelementation." + "The code in the following section will be used to evaluate your implementation." ] }, { @@ -1596,7 +1596,7 @@ "source": [ "## Scoring\n", "\n", - "For each URL input in the population and its corresponding target, your implementaton should generate a list of test inputs, __maximum of 10 inputs__. These inputs will be executed on the server and graded based on:\n", + "For each URL input in the population and its corresponding target, your implementation should generate a list of test inputs, __maximum of 10 inputs__. These inputs will be executed on the server and graded based on:\n", "\n", "* Number of iterations of your GA algorithm (less is better)\n", "* Number of reached target pages , i.e. error pages, confirmation page and page not found\n", diff --git a/notebooks/WhenToStopFuzzing.ipynb b/notebooks/WhenToStopFuzzing.ipynb index 07605a638..4a14c088e 100644 --- a/notebooks/WhenToStopFuzzing.ipynb +++ b/notebooks/WhenToStopFuzzing.ipynb @@ -808,7 +808,7 @@ "plt.subplot(1, 2, 1)\n", "plt.hist(frequencies, range=[1, 21], bins=numpy.arange(1, 21) - 0.5) # type: ignore\n", "plt.xticks(range(1, 21)) # type: ignore\n", - "plt.xlabel('# of occurances (e.g., 1 represents singleton trigrams)')\n", + "plt.xlabel('# of occurrences (e.g., 1 represents singleton trigrams)')\n", "plt.ylabel('Frequency of occurances')\n", "plt.title('Figure 1. Frequency of Rare Trigrams')\n", "\n", diff --git a/notebooks/shared/ClassDiagram.ipynb b/notebooks/shared/ClassDiagram.ipynb index 3be01710f..160bf3954 100644 --- a/notebooks/shared/ClassDiagram.ipynb +++ b/notebooks/shared/ClassDiagram.ipynb @@ -235,7 +235,7 @@ "outputs": [], "source": [ "class D_Class(D_Class):\n", - " pass # An incremental addiiton that should not impact D's semantics" + " pass # An incremental addition that should not impact D's semantics" ] }, {