From 5df742a15f71bff59816f17e03f930beab53d2c2 Mon Sep 17 00:00:00 2001 From: Jason Davies Date: Wed, 24 Jan 2024 15:11:56 +0000 Subject: [PATCH] Fix typos. --- .../cmgf_logistic_regression_demo.ipynb | 2 +- .../cmgf_mlp_classification_demo.ipynb | 4 ++-- .../generalized_gaussian_ssm/cmgf_poisson_demo.ipynb | 4 ++-- docs/notebooks/hmm/gaussian_hmm.ipynb | 4 ++-- docs/notebooks/linear_gaussian_ssm/kf_linreg.ipynb | 2 +- docs/notebooks/linear_gaussian_ssm/lgssm_hmc.ipynb | 8 ++++---- 6 files changed, 12 insertions(+), 12 deletions(-) diff --git a/docs/notebooks/generalized_gaussian_ssm/cmgf_logistic_regression_demo.ipynb b/docs/notebooks/generalized_gaussian_ssm/cmgf_logistic_regression_demo.ipynb index b7b6a5ae..ac030dc0 100644 --- a/docs/notebooks/generalized_gaussian_ssm/cmgf_logistic_regression_demo.ipynb +++ b/docs/notebooks/generalized_gaussian_ssm/cmgf_logistic_regression_demo.ipynb @@ -37,7 +37,7 @@ "\n", "This is a generalized Gaussian SSM, where the observation model is non-Gaussian.\n", "\n", - "To perform approximate inferece, using the conditional moments Gaussian filter (CMGF).\n", + "To perform approximate inference, we use the conditional moments Gaussian filter (CMGF).\n", "We approximate the relevant integrals using 3 different methods: linearization (extended Kalman filter),\n", "sigma point approximation (unscented kalman filter), and Gauss hermite integration (order 5).\n", "We compare results with the offline (batch) Laplace approximation, and see that GHKF converges fastest to the batch solution,\n", diff --git a/docs/notebooks/generalized_gaussian_ssm/cmgf_mlp_classification_demo.ipynb b/docs/notebooks/generalized_gaussian_ssm/cmgf_mlp_classification_demo.ipynb index f525d13c..bf96a6b5 100644 --- a/docs/notebooks/generalized_gaussian_ssm/cmgf_mlp_classification_demo.ipynb +++ b/docs/notebooks/generalized_gaussian_ssm/cmgf_mlp_classification_demo.ipynb @@ -39,7 +39,7 @@ "\n", "This is a generalized Gaussian SSM, where the observation model is non-linear and non-Gaussian.\n", "\n", - "To perform approximate inferece, using the conditional moments Gaussian filter (CMGF).\n", + "To perform approximate inference, we use the conditional moments Gaussian filter (CMGF).\n", "We approximate the relevant integrals using the extended Kalman filter.\n", "For more details, see sec 8.7.7 of [Probabilistic Machine Learning: Advanced Topics](https://probml.github.io/pml-book/book2.html).\n", "\n", @@ -395,7 +395,7 @@ }, "outputs": [], "source": [ - "# Some model parameters and helper funciton\n", + "# Some model parameters and helper function\n", "state_dim, emission_dim = flat_params.size, output_dim\n", "sigmoid_fn = lambda w, x: jax.nn.sigmoid(apply_fn(w, x))\n", "\n", diff --git a/docs/notebooks/generalized_gaussian_ssm/cmgf_poisson_demo.ipynb b/docs/notebooks/generalized_gaussian_ssm/cmgf_poisson_demo.ipynb index 0b0725c8..14caae38 100644 --- a/docs/notebooks/generalized_gaussian_ssm/cmgf_poisson_demo.ipynb +++ b/docs/notebooks/generalized_gaussian_ssm/cmgf_poisson_demo.ipynb @@ -9,7 +9,7 @@ "source": [ "# Fitting an LDS with Poisson Likelihood using conditional moments Gaussian filter\n", "\n", - "Adapted fom [https://github.com/lindermanlab/ssm-jax/blob/main/notebooks/poisson-lds-example.ipynb](https://github.com/lindermanlab/ssm-jax/blob/main/notebooks/poisson-lds-example.ipynb )" + "Adapted from [https://github.com/lindermanlab/ssm-jax/blob/main/notebooks/poisson-lds-example.ipynb](https://github.com/lindermanlab/ssm-jax/blob/main/notebooks/poisson-lds-example.ipynb)" ] }, { 
@@ -221,7 +221,7 @@ "id": "0DV-H8hRYJbK" }, "source": [ - "First, we define a helper random rotation functionto use as our dynamics function." + "First, we define a helper random rotation function to use as our dynamics function." ] }, { diff --git a/docs/notebooks/hmm/gaussian_hmm.ipynb b/docs/notebooks/hmm/gaussian_hmm.ipynb index 10cff484..430877a9 100644 --- a/docs/notebooks/hmm/gaussian_hmm.ipynb +++ b/docs/notebooks/hmm/gaussian_hmm.ipynb @@ -237,7 +237,7 @@ "\n", "This function fits the data into _folds_ where each fold consists of all\n", "but one of the training sequences. It fits the model to each fold in \n", - "parallel, and then computs the log likelihood of the held-out sequence for\n", + "parallel, and then computes the log likelihood of the held-out sequence for\n", "each fold. The average held-out log likelihood is what we will use for \n", "determining the number of discrete states." ] @@ -312,7 +312,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Plot the individual and average validation log likelihods as a function of number of states" + "### Plot the individual and average validation log likelihoods as a function of the number of states" ] }, { diff --git a/docs/notebooks/linear_gaussian_ssm/kf_linreg.ipynb b/docs/notebooks/linear_gaussian_ssm/kf_linreg.ipynb index ea705f19..4f93869c 100644 --- a/docs/notebooks/linear_gaussian_ssm/kf_linreg.ipynb +++ b/docs/notebooks/linear_gaussian_ssm/kf_linreg.ipynb @@ -140,7 +140,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Offline inferenece\n", + "## Offline inference\n", "\n", "We compute the offline posterior given all the data using Bayes rule for linear regression.\n", "This should give the same results as the final step of online inference." diff --git a/docs/notebooks/linear_gaussian_ssm/lgssm_hmc.ipynb b/docs/notebooks/linear_gaussian_ssm/lgssm_hmc.ipynb index 32a25a2a..8705f093 100644 --- a/docs/notebooks/linear_gaussian_ssm/lgssm_hmc.ipynb +++ b/docs/notebooks/linear_gaussian_ssm/lgssm_hmc.ipynb @@ -203,7 +203,7 @@ "source": [ "\n", "\n", - "# Initilize parameters by fitting EM algorithm\n", + "# Initialize parameters by fitting with the EM algorithm\n", "num_iters = 100\n", "test_model = LinearGaussianSSM(state_dim, emission_dim)\n", "initial_params, param_props = test_model.initialize(next(keys))\n", @@ -539,7 +539,7 @@ "source": [ "## Use HMC to infer posterior over a subset of the parameters\n", "\n", - "We freeze the transition parameters and inital parameters, so that only covariance matrices are learned.\n", + "We freeze the transition parameters and initial parameters, so that only covariance matrices are learned.\n", "This is useful for structural time series models (see e.g., [sts-jax](https://github.com/probml/sts-jax) library, \n", "which builds on dynamax.).\n" ] @@ -550,7 +550,7 @@ "metadata": {}, "outputs": [], "source": [ - "# Freeze transition parameters and inital parameters, so that only covariance matrices are learned\n", + "# Freeze transition parameters and initial parameters, so that only covariance matrices are learned\n", "\n", "test_model = LinearGaussianSSM(state_dim, emission_dim)\n", "test_params, test_param_props = test_model.initialize(next(keys),\n", @@ -559,7 +559,7 @@ " emission_weights=true_params.emissions.weights,\n", " emission_bias=true_params.emissions.bias)\n", "\n", - "# Set transition parameters and inital parameters to true values and mark as frozen\n", + "# Set transition parameters and initial parameters to true values and mark as frozen\n", 
"test_param_props.dynamics.weights.trainable = False\n", "test_param_props.dynamics.bias.trainable = False\n", "test_param_props.emissions.weights.trainable = False\n",