
codespell #847

Merged 1 commit on Jul 24, 2024
2 changes: 1 addition & 1 deletion experimental/ee_genie.ipynb
@@ -635,7 +635,7 @@
"\n",
"Make sure you have enough justification to definitively declare the analysis\n",
"relevant - it's better to give a false negative than a false positive. However,\n",
-"the image analysis identtifies specific matching landmarks (eg, the\n",
+"the image analysis identifies specific matching landmarks (eg, the\n",
"the outlines of Manhattan island for a request to show NYC), believe it.\n",
"\n",
"Do not assume too much (eg, that the presence of green doesn't by itself mean the\n",
4 changes: 2 additions & 2 deletions guides/linked/Earth_Engine_AutoML_Vertex_AI.ipynb
@@ -97,7 +97,7 @@
"\n",
"REGION = \"us-central1\" # @param {type: \"string\"}\n",
"\n",
-"# The diplay name of your model (this can be any string).\n",
+"# The display name of your model (this can be any string).\n",
"MODEL_NAME = \"[model-name]\" # @param {type: \"string\"}"
],
"metadata": {
@@ -163,7 +163,7 @@
"\n",
"Creating data is a long-running operation. This next step can take a while. The `create()` method waits for the operation to complete, outputting statements as the operation progresses. The statements contain the full name of the dataset that you use in the following section.\n",
"\n",
-"**Note**: You can close the noteboook while you wait for this operation to complete."
+"**Note**: You can close the notebook while you wait for this operation to complete."
],
"metadata": {
"id": "A1ZdO3ueKLsd"
2 changes: 1 addition & 1 deletion guides/linked/Earth_Engine_PyTorch_Vertex_AI.ipynb
@@ -468,7 +468,7 @@
{
"cell_type": "markdown",
"source": [
-"Now we need to specify a handler for our model. We could use a Torchserve default handler or write a custom one. Here, our model returns per-class probabilities, so we'll write a custom handler to call argmax on the probabilites and return the highest-probability class value to Earth Engine."
+"Now we need to specify a handler for our model. We could use a Torchserve default handler or write a custom one. Here, our model returns per-class probabilities, so we'll write a custom handler to call argmax on the probabilities and return the highest-probability class value to Earth Engine."
],
"metadata": {
"id": "STWSevy7gJga"
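The custom handler described in this hunk boils down to an argmax over the model's per-class probabilities. A minimal sketch of that post-processing step using NumPy rather than the actual Torchserve handler API (the probability values here are made up for illustration):

```python
import numpy as np

# Hypothetical per-pixel class probabilities returned by the model
# (rows are pixels, columns are classes).
probabilities = np.array([
    [0.1, 0.7, 0.2],
    [0.6, 0.3, 0.1],
])

# The handler's core step: pick the highest-probability class per pixel.
class_values = np.argmax(probabilities, axis=1)
print(class_values.tolist())  # [1, 0]
```

A real handler would wrap this step in Torchserve's preprocess/inference/postprocess lifecycle; only the argmax itself is shown here.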
2 changes: 1 addition & 1 deletion guides/linked/Earth_Engine_TensorFlow_AI_Platform.ipynb
@@ -426,7 +426,7 @@
"source": [
"## Create the Keras model\n",
"\n",
-"Before we create the model, there's still a wee bit of pre-processing to get the data into the right input shape and a format that can be used with cross-entropy loss. Specifically, Keras expects a list of inputs and a one-hot vector for the class. (See [the Keras loss function docs](https://keras.io/losses/), [the TensorFlow categorical identity docs](https://www.tensorflow.org/guide/feature_columns#categorical_identity_column) and [the `tf.one_hot` docs](https://www.tensorflow.org/api_docs/python/tf/one_hot) for details).\n",
+"Before we create the model, there's still a small bit of pre-processing to get the data into the right input shape and a format that can be used with cross-entropy loss. Specifically, Keras expects a list of inputs and a one-hot vector for the class. (See [the Keras loss function docs](https://keras.io/losses/), [the TensorFlow categorical identity docs](https://www.tensorflow.org/guide/feature_columns#categorical_identity_column) and [the `tf.one_hot` docs](https://www.tensorflow.org/api_docs/python/tf/one_hot) for details).\n",
"\n",
"Here we will use a simple neural network model with a 64 node hidden layer. Once the dataset has been prepared, define the model, compile it, fit it to the training data. See [the Keras `Sequential` model guide](https://keras.io/getting-started/sequential-model-guide/) for more details."
]
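The one-hot encoding this hunk refers to (the role `tf.one_hot` plays in the notebook) can be illustrated without TensorFlow. A minimal NumPy sketch with made-up integer labels:

```python
import numpy as np

labels = np.array([0, 2, 1])  # hypothetical integer class labels
num_classes = 3

# Equivalent of tf.one_hot(labels, num_classes): each label indexes a
# row of the identity matrix, giving one 1.0 per class vector.
one_hot = np.eye(num_classes)[labels]
print(one_hot.tolist())
# [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```

This one-hot form is what categorical cross-entropy loss expects as the target.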
@@ -278,7 +278,7 @@
"source": [
"# Generate training data\n",
"\n",
-"This is a multi-step process. First, export the image that contains the prediction bands. When that export completes (several hours in this example), it can be reloaded and sampled to generate training and testing datasets. The second step is to export the traning and testing tables to TFRecord files in Cloud Storage (also several hours)."
+"This is a multi-step process. First, export the image that contains the prediction bands. When that export completes (several hours in this example), it can be reloaded and sampled to generate training and testing datasets. The second step is to export the training and testing tables to TFRecord files in Cloud Storage (also several hours)."
]
},
{
@@ -222,7 +222,7 @@
"source": [
"## Image retrieval functions\n",
"\n",
-"This section includes functions to compute a Sentinel-2 median composite and get a pacth of pixels from the composite, centered on the provided coordinates, as either a numpy array or a JPEG thumbnail (for visualization). The functions that request patches are retriable and you can do that automatically by decorating the functions with [Retry](https://googleapis.dev/python/google-api-core/latest/retry.html)."
+"This section includes functions to compute a Sentinel-2 median composite and get a patch of pixels from the composite, centered on the provided coordinates, as either a numpy array or a JPEG thumbnail (for visualization). The functions that request patches are retriable and you can do that automatically by decorating the functions with [Retry](https://googleapis.dev/python/google-api-core/latest/retry.html)."
],
"metadata": {
"id": "vbEM4nlUOmQn"
2 changes: 1 addition & 1 deletion tutorials/imad-tutorial-pt1/index.ipynb
@@ -586,7 +586,7 @@
"id": "qksWAxsrIV4g"
},
"source": [
-"The next cell codes the MAD transformation itself in the funcion *mad_run()*, taking as input two multiband images and returning the _canonical variates_\n",
+"The next cell codes the MAD transformation itself in the function *mad_run()*, taking as input two multiband images and returning the _canonical variates_\n",
"\n",
"$$\n",
"U_i, \\ V_i, \\quad i=1\\dots N,\n",
6 changes: 3 additions & 3 deletions tutorials/imad-tutorial-pt2/index.ipynb
@@ -575,7 +575,7 @@
" rhos = ee.String.encodeJSON(ee.List(result.get('allrhos')).get(-1))\n",
" Z = ee.Image(result.get('Z'))\n",
" niter = result.getNumber('niter')\n",
-    " # Export iMAD and Z as a singe image, including rhos and number of iterations in properties.\n",
+    " # Export iMAD and Z as a single image, including rhos and number of iterations in properties.\n",
" iMAD_export = ee.Image.cat(iMAD, Z).rename(imadnames).set('rhos', rhos, 'niter', niter)\n",
" assexport = ee.batch.Export.image.toAsset(iMAD_export,\n",
" description='assetExportTask',\n",
@@ -718,7 +718,7 @@
},
"source": [
"Gray pixels point to no change, while the wide range of color in the iMAD variates\n",
-"indicates a good discrimination of the types of change occuring.\n",
+"indicates a good discrimination of the types of change occurring.\n",
"\n",
"**Aside:** We are of course primarily interested in extracting the changes in the iMAD\n",
"image, especially those which mark clear cutting, and we'll come back to them in a moment.\n",
@@ -783,7 +783,7 @@
"id": "22554e72"
},
"source": [
-"Here we display the four clusters overlayed onto the two Sentinel 2 images:"
+"Here we display the four clusters overlaid onto the two Sentinel 2 images:"
]
},
{