minor fix (#1086)
* minor fix

* minor fix

* Process tutorial notebooks

---------

Co-authored-by: GitHub Action <[email protected]>
spirosChv and actions-user authored Oct 25, 2023
1 parent 3d638d0 commit 061f96b
Showing 6 changed files with 34 additions and 34 deletions.
22 changes: 11 additions & 11 deletions tutorials/W1D3_GeneralizedLinearModels/W1D3_Tutorial2.ipynb
@@ -205,7 +205,7 @@
" best_C = C_values[np.argmax(accuracies)]\n",
" ax.set(\n",
" xticks=C_values,\n",
" xlabel=\"$C$\",\n",
" xlabel=\"C\",\n",
" ylabel=\"Cross-validated accuracy\",\n",
" title=f\"Best C: {best_C:1g} ({np.max(accuracies):.2%})\",\n",
" )\n",
@@ -219,7 +219,7 @@
" ax.plot(C_values, non_zero_l1, marker=\"o\")\n",
" ax.set(\n",
" xticks=C_values,\n",
" xlabel=\"$C$\",\n",
" xlabel=\"C\",\n",
" ylabel=\"Number of non-zero coefficients\",\n",
" )\n",
" ax.axhline(n_voxels, color=\".1\", linestyle=\":\")\n",
@@ -640,7 +640,7 @@
"\n",
"*Estimated timing to here from start of tutorial: 30 min*\n",
"\n",
"Now we need to evaluate the model's predictions. We'll do that with an *accuracy* score. The accuracy of the classifier is the proportion of trials where the predicted label matches the true label.\n"
"Now, we need to evaluate the predictions of the model. We will use an *accuracy* score for this purpose. The accuracy of a classifier is determined by the proportion of correct trials, where the predicted label matches the true label, out of the total number of trials."
]
},
{
@@ -726,7 +726,7 @@
"\n",
" y_pred = model.predict(X)\n",
"\n",
" accuracy = (y == y_pred).mean()\n",
" accuracy = (y == y_pred).sum() / len(y)\n",
"\n",
" return accuracy\n",
"\n",
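(Editor's note on the hunk above: the two accuracy formulations are numerically identical on NumPy arrays, since a boolean `True` averages as 1. A minimal sketch, with the signature simplified to take label arrays directly rather than the tutorial's `(X, y, model)`:)

```python
import numpy as np

def compute_accuracy(y, y_pred):
    """Proportion of trials where the predicted label matches the true label."""
    y, y_pred = np.asarray(y), np.asarray(y_pred)
    # (y == y_pred) is a boolean array; True counts as 1 when averaged,
    # so .mean() equals .sum() / len(y) -- the two sides of this diff agree.
    return (y == y_pred).mean()

y_true = np.array([0, 1, 1, 0, 1])
y_hat = np.array([0, 1, 0, 0, 1])
assert (y_true == y_hat).mean() == (y_true == y_hat).sum() / len(y_true)
print(compute_accuracy(y_true, y_hat))  # 0.8
```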
@@ -841,7 +841,7 @@
},
"outputs": [],
"source": [
"X.shape"
"print(X.shape)"
]
},
{
@@ -1089,10 +1089,10 @@
" penalized_models[log_C] = m.fit(X, y)\n",
"\n",
"@widgets.interact\n",
"def plot_observed(log_C = widgets.FloatSlider(value=1, min=1, max=10, step=1)):\n",
"def plot_observed(log_C = widgets.IntSlider(value=1, min=1, max=10, step=1)):\n",
" models = {\n",
" \"No regularization\": log_reg,\n",
" f\"$L_2$ (C = $10^{log_C}$)\": penalized_models[log_C]\n",
" f\"$L_2$ (C = $10^{{{log_C}}}$)\": penalized_models[log_C]\n",
" }\n",
" plot_weights(models)"
]
@@ -1103,7 +1103,7 @@
"execution": {}
},
"source": [
"Recall from above that $C=\\frac1\\beta$ so larger `C` is less regularization. The top panel corresponds to $C=\\infty$."
"Recall from above that $C=\\frac1\\beta$ so larger $C$ is less regularization. The top panel corresponds to $C \\rightarrow \\infty$."
]
},
{
@@ -1347,7 +1347,7 @@
"execution": {}
},
"source": [
"Smaller `C` (bigger $\\beta$) leads to sparser solutions.\n",
"Smaller $C$ (bigger $\\beta$) leads to sparser solutions.\n",
"\n",
"**Link to neuroscience**: When is it OK to assume that the parameter vector is sparse? Whenever it is true that most features don't affect the outcome. One use-case might be decoding low-level visual features from whole-brain fMRI: we may expect only voxels in V1 and thalamus should be used in the prediction.\n",
"\n",
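(Editor's note: the sparsity claim in the cell above is easy to verify with scikit-learn's `LogisticRegression`, where `C` is the inverse regularization strength. A sketch on synthetic data, not taken from the tutorial:)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
# Only the first 5 of 50 features carry signal, mimicking a sparse ground truth.
y = (X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=200) > 0).astype(int)

for C in (0.01, 0.1, 1.0, 100.0):
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    model.fit(X, y)
    # Smaller C (stronger L1 penalty) should zero out more coefficients.
    print(f"C={C:g}: {np.count_nonzero(model.coef_)} non-zero coefficients")
```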
@@ -1565,7 +1565,7 @@
"\n",
"**The logistic link function**\n",
"\n",
"You've seen $\\theta^T x_i$ before, but the $\\sigma$ is new. It's the *sigmoidal* or *logistic* link function that \"squashes\" $\\theta^T x_i$ to keep it between $0$ and $1$:\n",
"You've seen $\\theta^T x_i$ before, but the $\\sigma$ is new. It's the *sigmoidal* or *logistic* link function that \"squashes\" $\\theta^\\top x_i$ to keep it between $0$ and $1$:\n",
"\n",
"\\begin{equation}\n",
"\\sigma(z) = \\frac{1}{1 + \\textrm{exp}(-z)}\n",
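(Editor's note: the link function defined above can be checked numerically. A minimal sketch:)

```python
import numpy as np

def sigmoid(z):
    """Logistic link: squashes any real z into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))                      # 0.5
print(sigmoid(np.array([-10.0, 10.0])))  # values near 0 and near 1
```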
@@ -1661,7 +1661,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.17"
"version": "3.9.18"
}
},
"nbformat": 4,
@@ -205,7 +205,7 @@
" best_C = C_values[np.argmax(accuracies)]\n",
" ax.set(\n",
" xticks=C_values,\n",
" xlabel=\"$C$\",\n",
" xlabel=\"C\",\n",
" ylabel=\"Cross-validated accuracy\",\n",
" title=f\"Best C: {best_C:1g} ({np.max(accuracies):.2%})\",\n",
" )\n",
@@ -219,7 +219,7 @@
" ax.plot(C_values, non_zero_l1, marker=\"o\")\n",
" ax.set(\n",
" xticks=C_values,\n",
" xlabel=\"$C$\",\n",
" xlabel=\"C\",\n",
" ylabel=\"Number of non-zero coefficients\",\n",
" )\n",
" ax.axhline(n_voxels, color=\".1\", linestyle=\":\")\n",
@@ -642,7 +642,7 @@
"\n",
"*Estimated timing to here from start of tutorial: 30 min*\n",
"\n",
"Now we need to evaluate the model's predictions. We'll do that with an *accuracy* score. The accuracy of the classifier is the proportion of trials where the predicted label matches the true label.\n"
"Now, we need to evaluate the predictions of the model. We will use an *accuracy* score for this purpose. The accuracy of a classifier is determined by the proportion of correct trials, where the predicted label matches the true label, out of the total number of trials."
]
},
{
@@ -730,7 +730,7 @@
"\n",
" y_pred = model.predict(X)\n",
"\n",
" accuracy = (y == y_pred).mean()\n",
" accuracy = (y == y_pred).sum() / len(y)\n",
"\n",
" return accuracy\n",
"\n",
@@ -845,7 +845,7 @@
},
"outputs": [],
"source": [
"X.shape"
"print(X.shape)"
]
},
{
@@ -1093,10 +1093,10 @@
" penalized_models[log_C] = m.fit(X, y)\n",
"\n",
"@widgets.interact\n",
"def plot_observed(log_C = widgets.FloatSlider(value=1, min=1, max=10, step=1)):\n",
"def plot_observed(log_C = widgets.IntSlider(value=1, min=1, max=10, step=1)):\n",
" models = {\n",
" \"No regularization\": log_reg,\n",
" f\"$L_2$ (C = $10^{log_C}$)\": penalized_models[log_C]\n",
" f\"$L_2$ (C = $10^{{{log_C}}}$)\": penalized_models[log_C]\n",
" }\n",
" plot_weights(models)"
]
@@ -1107,7 +1107,7 @@
"execution": {}
},
"source": [
"Recall from above that $C=\\frac1\\beta$ so larger `C` is less regularization. The top panel corresponds to $C=\\infty$."
"Recall from above that $C=\\frac1\\beta$ so larger $C$ is less regularization. The top panel corresponds to $C \\rightarrow \\infty$."
]
},
{
@@ -1353,7 +1353,7 @@
"execution": {}
},
"source": [
"Smaller `C` (bigger $\\beta$) leads to sparser solutions.\n",
"Smaller $C$ (bigger $\\beta$) leads to sparser solutions.\n",
"\n",
"**Link to neuroscience**: When is it OK to assume that the parameter vector is sparse? Whenever it is true that most features don't affect the outcome. One use-case might be decoding low-level visual features from whole-brain fMRI: we may expect only voxels in V1 and thalamus should be used in the prediction.\n",
"\n",
@@ -1573,7 +1573,7 @@
"\n",
"**The logistic link function**\n",
"\n",
"You've seen $\\theta^T x_i$ before, but the $\\sigma$ is new. It's the *sigmoidal* or *logistic* link function that \"squashes\" $\\theta^T x_i$ to keep it between $0$ and $1$:\n",
"You've seen $\\theta^T x_i$ before, but the $\\sigma$ is new. It's the *sigmoidal* or *logistic* link function that \"squashes\" $\\theta^\\top x_i$ to keep it between $0$ and $1$:\n",
"\n",
"\\begin{equation}\n",
"\\sigma(z) = \\frac{1}{1 + \\textrm{exp}(-z)}\n",
@@ -1669,7 +1669,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.17"
"version": "3.9.18"
}
},
"nbformat": 4,
@@ -13,7 +13,7 @@ def compute_accuracy(X, y, model):

y_pred = model.predict(X)

accuracy = (y == y_pred).mean()
accuracy = (y == y_pred).sum() / len(y)

return accuracy

22 changes: 11 additions & 11 deletions tutorials/W1D3_GeneralizedLinearModels/student/W1D3_Tutorial2.ipynb
@@ -205,7 +205,7 @@
" best_C = C_values[np.argmax(accuracies)]\n",
" ax.set(\n",
" xticks=C_values,\n",
" xlabel=\"$C$\",\n",
" xlabel=\"C\",\n",
" ylabel=\"Cross-validated accuracy\",\n",
" title=f\"Best C: {best_C:1g} ({np.max(accuracies):.2%})\",\n",
" )\n",
@@ -219,7 +219,7 @@
" ax.plot(C_values, non_zero_l1, marker=\"o\")\n",
" ax.set(\n",
" xticks=C_values,\n",
" xlabel=\"$C$\",\n",
" xlabel=\"C\",\n",
" ylabel=\"Number of non-zero coefficients\",\n",
" )\n",
" ax.axhline(n_voxels, color=\".1\", linestyle=\":\")\n",
@@ -633,7 +633,7 @@
"\n",
"*Estimated timing to here from start of tutorial: 30 min*\n",
"\n",
"Now we need to evaluate the model's predictions. We'll do that with an *accuracy* score. The accuracy of the classifier is the proportion of trials where the predicted label matches the true label.\n"
"Now, we need to evaluate the predictions of the model. We will use an *accuracy* score for this purpose. The accuracy of a classifier is determined by the proportion of correct trials, where the predicted label matches the true label, out of the total number of trials."
]
},
{
@@ -702,7 +702,7 @@
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main/tutorials/W1D3_GeneralizedLinearModels/solutions/W1D3_Tutorial2_Solution_bfe654b0.py)\n",
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main/tutorials/W1D3_GeneralizedLinearModels/solutions/W1D3_Tutorial2_Solution_1485808c.py)\n",
"\n"
]
},
@@ -811,7 +811,7 @@
},
"outputs": [],
"source": [
"X.shape"
"print(X.shape)"
]
},
{
@@ -1059,10 +1059,10 @@
" penalized_models[log_C] = m.fit(X, y)\n",
"\n",
"@widgets.interact\n",
"def plot_observed(log_C = widgets.FloatSlider(value=1, min=1, max=10, step=1)):\n",
"def plot_observed(log_C = widgets.IntSlider(value=1, min=1, max=10, step=1)):\n",
" models = {\n",
" \"No regularization\": log_reg,\n",
" f\"$L_2$ (C = $10^{log_C}$)\": penalized_models[log_C]\n",
" f\"$L_2$ (C = $10^{{{log_C}}}$)\": penalized_models[log_C]\n",
" }\n",
" plot_weights(models)"
]
@@ -1073,7 +1073,7 @@
"execution": {}
},
"source": [
"Recall from above that $C=\\frac1\\beta$ so larger `C` is less regularization. The top panel corresponds to $C=\\infty$."
"Recall from above that $C=\\frac1\\beta$ so larger $C$ is less regularization. The top panel corresponds to $C \\rightarrow \\infty$."
]
},
{
@@ -1281,7 +1281,7 @@
"execution": {}
},
"source": [
"Smaller `C` (bigger $\\beta$) leads to sparser solutions.\n",
"Smaller $C$ (bigger $\\beta$) leads to sparser solutions.\n",
"\n",
"**Link to neuroscience**: When is it OK to assume that the parameter vector is sparse? Whenever it is true that most features don't affect the outcome. One use-case might be decoding low-level visual features from whole-brain fMRI: we may expect only voxels in V1 and thalamus should be used in the prediction.\n",
"\n",
@@ -1465,7 +1465,7 @@
"\n",
"**The logistic link function**\n",
"\n",
"You've seen $\\theta^T x_i$ before, but the $\\sigma$ is new. It's the *sigmoidal* or *logistic* link function that \"squashes\" $\\theta^T x_i$ to keep it between $0$ and $1$:\n",
"You've seen $\\theta^T x_i$ before, but the $\\sigma$ is new. It's the *sigmoidal* or *logistic* link function that \"squashes\" $\\theta^\\top x_i$ to keep it between $0$ and $1$:\n",
"\n",
"\\begin{equation}\n",
"\\sigma(z) = \\frac{1}{1 + \\textrm{exp}(-z)}\n",
@@ -1561,7 +1561,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.17"
"version": "3.9.18"
}
},
"nbformat": 4,