diff --git a/docs/src/example_defining_problems.md b/docs/src/example_defining_problems.md
index 23fba8d2..85eb3019 100644
--- a/docs/src/example_defining_problems.md
+++ b/docs/src/example_defining_problems.md
@@ -7,7 +7,7 @@ There is a large variety of problems that can be expressed as MDPs and POMDPs an
 For the examples, we will use the CryingBaby problem from [Algorithms for Decision Making](https://algorithmsbook.com/) by Mykel J. Kochenderfer, Tim A. Wheeler, and Kyle H. Wray.
 
 !!! note
-    This craying baby problem follows the description in Algorithsm for Decision Making and is different than `BabyPOMDP` defined in [POMDPModels.jl](https://github.com/JuliaPOMDP/POMDPModels.jl).
+    This crying baby problem follows the description in Algorithms for Decision Making and is different from `BabyPOMDP` defined in [POMDPModels.jl](https://github.com/JuliaPOMDP/POMDPModels.jl).
 
 From [Appendix F](https://algorithmsbook.com/files/appendix-f.pdf) of Algorithms for Decision Making:
 > The crying baby problem is a simple POMDP with two states, three actions, and two observations. Our goal is to care for a baby, and we do so by choosing at each time step whether to feed the baby, sing to the baby, or ignore the baby.
diff --git a/docs/src/example_gridworld_mdp.md b/docs/src/example_gridworld_mdp.md
index 29f7ff8c..4dfa4e48 100644
--- a/docs/src/example_gridworld_mdp.md
+++ b/docs/src/example_gridworld_mdp.md
@@ -189,7 +189,7 @@ If your problem is very large we probably do not want to store all of our states
     # Define the length of the state space, number of grid locations plus the terminal state
     Base.length(mdp::GridWorldMDP) = mdp.size_x * mdp.size_y + 1
 
-    # `states` now returns the mdp, which we will constructur our iterator from
+    # `states` now returns the mdp, which we will construct our iterator from
     POMDPs.states(mdp::GridWorldMDP) = mdp
 
     function Base.getindex(mdp::GridWorldMDP, si::Int) # Enables mdp[si]
diff --git a/docs/src/example_solvers.md b/docs/src/example_solvers.md
index 99690a99..069053a7 100644
--- a/docs/src/example_solvers.md
+++ b/docs/src/example_solvers.md
@@ -37,7 +37,7 @@ end
 ```
 
 ## Offline (SARSOP)
-In this example, we will use the [NativeSARSOP](https://github.com/JuliaPOMDP/NativeSARSOP.jl) solver. We are generating the policy offline, so we will also save the policy to a file so we can use it at a later time without having to recompute it.
+In this example, we will use the [NativeSARSOP](https://github.com/JuliaPOMDP/NativeSARSOP.jl) solver. The process for generating offline policies is similar for all offline solvers. First, we define the solver with the desired parameters. Then, we call `POMDPs.solve` with the solver and the problem. We can query the policy using the `action` function.
 ```@example crying_sim
 using NativeSARSOP
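
For reference, the offline workflow that the rewritten paragraph describes (define the solver, call `POMDPs.solve`, query the policy with `action`) looks roughly like the sketch below. This is an illustration rather than the docs' actual `@example crying_sim` block: `TigerPOMDP` from POMDPModels.jl stands in for the CryingBaby problem defined elsewhere in the docs, and the `SARSOPSolver` keyword values are illustrative, not tuned.

```julia
using POMDPs
using POMDPTools    # DiscreteUpdater / initialize_belief for belief handling
using POMDPModels   # TigerPOMDP, used here as a stand-in problem
using NativeSARSOP

# Stand-in POMDP; the docs build this section around the CryingBaby problem instead.
pomdp = TigerPOMDP()

# First, define the solver with the desired parameters (values here are illustrative).
solver = SARSOPSolver(precision=1e-3, max_time=10.0)

# Then, call `POMDPs.solve` with the solver and the problem to compute a policy offline.
policy = solve(solver, pomdp)

# Finally, query the policy with `action`; POMDP policies act on beliefs,
# so initialize a discrete belief from the initial state distribution.
up = DiscreteUpdater(pomdp)
b = initialize_belief(up, initialstate(pomdp))
a = action(policy, b)
```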