Merge pull request #201 from ArnoStrouwen/formatter
reapply formatter
ChrisRackauckas authored Feb 24, 2024
2 parents 9aceec3 + c109091 commit fd5e45d
Showing 14 changed files with 225 additions and 158 deletions.
3 changes: 2 additions & 1 deletion .JuliaFormatter.toml
@@ -1,2 +1,3 @@
 style = "sciml"
-format_markdown = true
+format_markdown = true
+format_docstrings = true
8 changes: 5 additions & 3 deletions docs/pages.jl
@@ -1,9 +1,11 @@
 pages = [
     "ReservoirComputing.jl" => "index.md",
-    "General Settings" => Any["Changing Training Algorithms" => "general/different_training.md",
+    "General Settings" => Any[
+        "Changing Training Algorithms" => "general/different_training.md",
         "Altering States" => "general/states_variation.md",
         "Generative vs Predictive" => "general/predictive_generative.md"],
-    "Echo State Network Tutorials" => Any["Lorenz System Forecasting" => "esn_tutorials/lorenz_basic.md",
+    "Echo State Network Tutorials" => Any[
+        "Lorenz System Forecasting" => "esn_tutorials/lorenz_basic.md",
         #"Mackey-Glass Forecasting on GPU" => "esn_tutorials/mackeyglass_basic.md",
         "Using Different Layers" => "esn_tutorials/change_layers.md",
         "Using Different Reservoir Drivers" => "esn_tutorials/different_drivers.md",
@@ -17,5 +19,5 @@ pages = [
     "Echo State Networks" => "api/esn.md",
     "ESN Layers" => "api/esn_layers.md",
     "ESN Drivers" => "api/esn_drivers.md",
-    "ReCA" => "api/reca.md"],
+    "ReCA" => "api/reca.md"]
 ]
2 changes: 1 addition & 1 deletion docs/src/esn_tutorials/change_layers.md
@@ -76,7 +76,7 @@ using ReservoirComputing, StatsBase
 res_size = 300
 input_layer = [
     MinimumLayer(0.85, IrrationalSample()),
-    MinimumLayer(0.95, IrrationalSample()),
+    MinimumLayer(0.95, IrrationalSample())
 ]
 reservoirs = [SimpleCycleReservoir(res_size, 0.7),
     CycleJumpsReservoir(res_size, cycle_weight = 0.7, jump_weight = 0.2, jump_size = 5)]
2 changes: 1 addition & 1 deletion src/ReservoirComputing.jl
@@ -65,7 +65,7 @@ end
 """
     Predictive(prediction_data)
-Given a set of labels as ```prediction_data```, this method of prediction will return the corresponding labels in a standard Machine Learning fashion.
+Given a set of labels as `prediction_data`, this method of prediction will return the corresponding labels in a standard Machine Learning fashion.
 """
 function Predictive(prediction_data)
     prediction_len = size(prediction_data, 2)
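The `Predictive` docstring touched in this hunk can be illustrated with a short sketch; the data shape and variable names here are illustrative only and not part of the commit:

```julia
using ReservoirComputing  # assumes the package is installed

# Toy labels: 3 features over 50 time steps.
prediction_data = rand(3, 50)

# Wrap the labels for supervised-style prediction; the prediction
# length is read from the time (second) dimension, matching
# `size(prediction_data, 2)` in the function body shown above.
pred = Predictive(prediction_data)
```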
67 changes: 38 additions & 29 deletions src/esn/echostatenetwork.jl
@@ -34,20 +34,23 @@ end
     Hybrid(prior_model, u0, tspan, datasize)
 Constructs a `Hybrid` variation of Echo State Networks (ESNs) integrating a knowledge-based model
 (`prior_model`) with ESNs for advanced training and prediction in chaotic systems.
 # Parameters
-- `prior_model`: A knowledge-based model function for integration with ESNs.
-- `u0`: Initial conditions for the model.
-- `tspan`: Time span as a tuple, indicating the duration for model operation.
-- `datasize`: The size of the data to be processed.
+  - `prior_model`: A knowledge-based model function for integration with ESNs.
+  - `u0`: Initial conditions for the model.
+  - `tspan`: Time span as a tuple, indicating the duration for model operation.
+  - `datasize`: The size of the data to be processed.
 # Returns
-- A `Hybrid` struct instance representing the combined ESN and knowledge-based model.
+  - A `Hybrid` struct instance representing the combined ESN and knowledge-based model.
 This method is effective for chaotic processes as highlighted in [^Pathak].
 Reference:
 [^Pathak]: Jaideep Pathak et al.
     "Hybrid Forecasting of Chaotic Processes:
     Using Machine Learning in Conjunction with a Knowledge-Based Model" (2018).
@@ -67,27 +70,30 @@ end
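A minimal construction sketch for the `Hybrid` variation documented in the hunk above, assuming a placeholder `prior_model` function (the exact calling convention the package expects of `prior_model` is not shown in this diff):

```julia
using ReservoirComputing

# Placeholder knowledge-based model: returns 3 state variables per time step.
prior_model(u0, tspan, tsteps) = rand(3, length(tsteps))

u0 = [1.0, 0.0, 0.0]    # initial conditions
tspan = (0.0, 100.0)    # duration of model operation
datasize = 1000         # number of data points to produce

hybrid = Hybrid(prior_model, u0, tspan, datasize)
```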
 Creates an Echo State Network (ESN) using specified parameters and training data, suitable for various machine learning tasks.
 # Parameters
-- `train_data`: Matrix of training data (columns as time steps, rows as features).
-- `variation`: Variation of ESN (default: `Default()`).
-- `input_layer`: Input layer of ESN (default: `DenseLayer()`).
-- `reservoir`: Reservoir of the ESN (default: `RandSparseReservoir(100)`).
-- `bias`: Bias vector for each time step (default: `NullLayer()`).
-- `reservoir_driver`: Mechanism for evolving reservoir states (default: `RNN()`).
-- `nla_type`: Non-linear activation type (default: `NLADefault()`).
-- `states_type`: Format for storing states (default: `StandardStates()`).
-- `washout`: Initial time steps to discard (default: `0`).
-- `matrix_type`: Type of matrices used internally (default: type of `train_data`).
+  - `train_data`: Matrix of training data (columns as time steps, rows as features).
+  - `variation`: Variation of ESN (default: `Default()`).
+  - `input_layer`: Input layer of ESN (default: `DenseLayer()`).
+  - `reservoir`: Reservoir of the ESN (default: `RandSparseReservoir(100)`).
+  - `bias`: Bias vector for each time step (default: `NullLayer()`).
+  - `reservoir_driver`: Mechanism for evolving reservoir states (default: `RNN()`).
+  - `nla_type`: Non-linear activation type (default: `NLADefault()`).
+  - `states_type`: Format for storing states (default: `StandardStates()`).
+  - `washout`: Initial time steps to discard (default: `0`).
+  - `matrix_type`: Type of matrices used internally (default: type of `train_data`).
 # Returns
-- An initialized ESN instance with specified parameters.
+  - An initialized ESN instance with specified parameters.
 # Examples
 ```julia
 using ReservoirComputing
 train_data = rand(10, 100) # 10 features, 100 time steps
-esn = ESN(train_data, reservoir=RandSparseReservoir(200), washout=10)
+esn = ESN(train_data, reservoir = RandSparseReservoir(200), washout = 10)
 ```
 """
function ESN(train_data;
@@ -159,16 +165,16 @@ function obtain_layers(in_size,

if input_layer isa Array
input_matrix = [create_layer(input_layer[j], input_res_sizes[j], in_sizes[j],
matrix_type = matrix_type) for j in 1:esn_depth]
matrix_type = matrix_type) for j in 1:esn_depth]
else
_input_layer = fill(input_layer, esn_depth)
input_matrix = [create_layer(_input_layer[k], input_res_sizes[k], in_sizes[k],
matrix_type = matrix_type) for k in 1:esn_depth]
matrix_type = matrix_type) for k in 1:esn_depth]
end

res_sizes = [get_ressize(input_matrix[j]) for j in 1:esn_depth]
reservoir_matrix = [create_reservoir(reservoir[k], res_sizes[k],
matrix_type = matrix_type) for k in 1:esn_depth]
matrix_type = matrix_type) for k in 1:esn_depth]

if bias isa Array
bias_vector = [create_layer(bias[j], res_sizes[j], 1, matrix_type = matrix_type)
@@ -212,36 +218,39 @@ end
 Trains an Echo State Network (ESN) using the provided target data and a specified training method.
 # Parameters
-- `esn::AbstractEchoStateNetwork`: The ESN instance to be trained.
-- `target_data`: Supervised training data for the ESN.
-- `training_method`: The method for training the ESN (default: `StandardRidge(0.0)`).
+  - `esn::AbstractEchoStateNetwork`: The ESN instance to be trained.
+  - `target_data`: Supervised training data for the ESN.
+  - `training_method`: The method for training the ESN (default: `StandardRidge(0.0)`).
 # Returns
-- The trained ESN model. Its type and structure depend on `training_method` and the ESN's implementation.
+  - The trained ESN model. Its type and structure depend on `training_method` and the ESN's implementation.
 # Returns
 The trained ESN model. The exact type and structure of the return value depends on the
 `training_method` and the specific ESN implementation.
 ```julia
 using ReservoirComputing
 # Initialize an ESN instance and target data
-esn = ESN(train_data, reservoir=RandSparseReservoir(200), washout=10)
+esn = ESN(train_data, reservoir = RandSparseReservoir(200), washout = 10)
 target_data = rand(size(train_data, 2))
 # Train the ESN using the default training method
 trained_esn = train(esn, target_data)
 # Train the ESN using a custom training method
-trained_esn = train(esn, target_data, training_method=StandardRidge(1.0))
+trained_esn = train(esn, target_data, training_method = StandardRidge(1.0))
 ```
 # Notes
-- When using a `Hybrid` variation, the function extends the state matrix with data from the
+  - When using a `Hybrid` variation, the function extends the state matrix with data from the
     physical model included in the `variation`.
-- The training is handled by a lower-level `_train` function which takes the new state matrix
+  - The training is handled by a lower-level `_train` function which takes the new state matrix
     and performs the actual training using the specified `training_method`.
 """
function train(esn::AbstractEchoStateNetwork,
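Putting the `ESN` constructor and `train` from this file together, a hedged end-to-end sketch; the `esn(Generative(n), output_layer)` call form is assumed from the package's documented workflow and does not appear in this diff:

```julia
using ReservoirComputing

train_data = rand(3, 500)  # toy data: 3 features, 500 time steps

# Fit the readout on one-step-ahead targets.
esn = ESN(train_data[:, 1:(end - 1)], reservoir = RandSparseReservoir(200))
output_layer = train(esn, train_data[:, 2:end], training_method = StandardRidge(0.0))

# Autonomous forecasting for 100 steps (assumed call form).
output = esn(Generative(100), output_layer)
```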
77 changes: 50 additions & 27 deletions src/esn/esn_input_layers.jl
@@ -14,12 +14,15 @@ elements distributed uniformly within the range [-`scaling`, `scaling`],
following the approach in [^Lu].
 # Parameters
-- `scaling`: The scaling factor for the weight distribution (default: 0.1).
+  - `scaling`: The scaling factor for the weight distribution (default: 0.1).
 # Returns
-- A `WeightedInput` instance to be used for initializing the input layer of an ESN.
+  - A `WeightedInput` instance to be used for initializing the input layer of an ESN.
 Reference:
 [^Lu]: Lu, Zhixin, et al.
     "Reservoir observers: Model-free inference of unmeasured variables in chaotic systems."
     Chaos: An Interdisciplinary Journal of Nonlinear Science 27.4 (2017): 041102.
@@ -59,10 +62,12 @@ This scaling factor can be provided either as an argument or a keyword argument.
The `DenseLayer` is the default input layer in `ESN` construction.
 # Parameters
-- `scaling`: The scaling factor for weight distribution (default: 0.1).
+  - `scaling`: The scaling factor for weight distribution (default: 0.1).
 # Returns
-- A `DenseLayer` instance for initializing the ESN's input layer.
+  - A `DenseLayer` instance for initializing the ESN's input layer.
"""
struct DenseLayer{T} <: AbstractLayer
scaling::T
@@ -78,12 +83,14 @@ end
Generates a matrix layer of size `res_size` x `in_size`, constructed according to the specifications of the `input_layer`.
 # Parameters
-- `input_layer`: An instance of `AbstractLayer` determining the layer construction.
-- `res_size`: The number of rows (reservoir size) for the layer.
-- `in_size`: The number of columns (input size) for the layer.
+  - `input_layer`: An instance of `AbstractLayer` determining the layer construction.
+  - `res_size`: The number of rows (reservoir size) for the layer.
+  - `in_size`: The number of columns (input size) for the layer.
 # Returns
-- A matrix representing the constructed layer.
+  - A matrix representing the constructed layer.
"""
function create_layer(input_layer::DenseLayer,
res_size,
@@ -104,11 +111,13 @@ The layer is initialized with weights distributed within [-`scaling`, `scaling`]
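Per the `create_layer` docstring above, the returned matrix is `res_size` x `in_size`; a small sketch (shapes only, values are random):

```julia
using ReservoirComputing

input_layer = DenseLayer(0.1)             # default-style dense input layer
W_in = create_layer(input_layer, 300, 3)  # 300 reservoir nodes, 3 input features

# Per the docstring, rows = reservoir size and columns = input size,
# so size(W_in) should be (300, 3).
```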
and a specified `sparsity` level. Both `scaling` and `sparsity` can be set as arguments or keyword arguments.
 # Parameters
-- `scaling`: Scaling factor for weight distribution (default: 0.1).
-- `sparsity`: Sparsity level of the layer (default: 0.1).
+  - `scaling`: Scaling factor for weight distribution (default: 0.1).
+  - `sparsity`: Sparsity level of the layer (default: 0.1).
 # Returns
-- A `SparseLayer` instance for initializing ESN's input layer with sparse connections.
+  - A `SparseLayer` instance for initializing ESN's input layer with sparse connections.
"""
struct SparseLayer{T} <: AbstractLayer
scaling::T
@@ -151,14 +160,17 @@ The parameter `p` sets the probability of a weight being positive, as per the `D
This method of sign weight determination for input layers is based on the approach in [^Rodan].
 # Parameters
-- `p`: Probability of a positive weight (default: 0.5).
+  - `p`: Probability of a positive weight (default: 0.5).
 # Returns
-- A `BernoulliSample` instance for generating sign weights in `MinimumLayer`.
+  - A `BernoulliSample` instance for generating sign weights in `MinimumLayer`.
 Reference:
 [^Rodan]: Rodan, Ali, and Peter Tino.
-"Minimum complexity echo state network."
+    "Minimum complexity echo state network."
 IEEE Transactions on Neural Networks 22.1 (2010): 131-144.
"""
function BernoulliSample(; p = 0.5)
@@ -180,13 +192,16 @@ The `start` parameter sets the starting point in the decimal sequence.
The signs are assigned based on the thresholding of each decimal digit against 4.5, as described in [^Rodan].
 # Parameters
-- `irrational`: An irrational number for weight sign determination (default: π).
-- `start`: Starting index in the decimal expansion (default: 1).
+  - `irrational`: An irrational number for weight sign determination (default: π).
+  - `start`: Starting index in the decimal expansion (default: 1).
 # Returns
-- An `IrrationalSample` instance for generating sign weights in `MinimumLayer`.
+  - An `IrrationalSample` instance for generating sign weights in `MinimumLayer`.
Reference:
[^Rodan]: Rodan, Ali, and Peter Tiňo.
"Simple deterministically constructed cycle reservoirs with regular jumps."
Neural Computation 24.7 (2012): 1822-1852.
@@ -211,13 +226,16 @@ weight determined by the `sampling` method. This approach, as detailed in [^Roda
allows for controlled weight distribution in the layer.
 # Parameters
-- `weight`: Absolute value of weights in the layer.
-- `sampling`: Method for determining the sign of weights (default: `BernoulliSample(0.5)`).
+  - `weight`: Absolute value of weights in the layer.
+  - `sampling`: Method for determining the sign of weights (default: `BernoulliSample(0.5)`).
 # Returns
-- A `MinimumLayer` instance for initializing the ESN's input layer.
+  - A `MinimumLayer` instance for initializing the ESN's input layer.
References:
[^Rodan1]: Rodan, Ali, and Peter Tino.
"Minimum complexity echo state network."
IEEE Transactions on Neural Networks 22.1 (2010): 131-144.
@@ -291,23 +309,27 @@ end
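The `MinimumLayer` and sampling docstrings above combine as follows; the weight value is illustrative, and `BernoulliSample` takes `p` as a keyword per its signature in this file:

```julia
using ReservoirComputing

# All input weights share absolute value 0.1; only the sign varies.
ml_random = MinimumLayer(0.1, BernoulliSample(p = 0.5))   # random signs
ml_deterministic = MinimumLayer(0.1, IrrationalSample())  # signs from digits of π
```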
Creates an `InformedLayer` initializer for Echo State Networks (ESNs) that generates
 a weighted input layer matrix. The matrix contains random non-zero elements drawn from
-the range [-```scaling```, ```scaling```]. This initializer ensures that a fraction (`gamma`)
+the range [-`scaling`, `scaling`]. This initializer ensures that a fraction (`gamma`)
 of reservoir nodes are exclusively connected to the raw inputs, while the rest are
 connected to the outputs of a prior knowledge model, as described in [^Pathak].
 # Arguments
-- `model_in_size`: The size of the prior knowledge model's output,
+  - `model_in_size`: The size of the prior knowledge model's output,
     which determines the number of columns in the input layer matrix.
 # Keyword Arguments
-- `scaling`: The absolute value of the weights (default: 0.1).
-- `gamma`: The fraction of reservoir nodes connected exclusively to raw inputs (default: 0.5).
+  - `scaling`: The absolute value of the weights (default: 0.1).
+  - `gamma`: The fraction of reservoir nodes connected exclusively to raw inputs (default: 0.5).
 # Returns
-- An `InformedLayer` instance for initializing the ESN's input layer matrix.
+  - An `InformedLayer` instance for initializing the ESN's input layer matrix.
 Reference:
 [^Pathak]: Jaideep Pathak et al.
 "Hybrid Forecasting of Chaotic Processes: Using Machine Learning in Conjunction with a Knowledge-Based Model" (2018).
"""
function InformedLayer(model_in_size; scaling = 0.1, gamma = 0.5)
@@ -359,7 +381,8 @@ end
Creates a `NullLayer` initializer for Echo State Networks (ESNs) that generates a vector of zeros.
 # Returns
-- A `NullLayer` instance for initializing the ESN's input layer matrix.
+  - A `NullLayer` instance for initializing the ESN's input layer matrix.
"""
struct NullLayer <: AbstractLayer end

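A short sketch tying together two layer types documented in this file (`InformedLayer` for `Hybrid` setups, `NullLayer` for a zero contribution); sizes are illustrative:

```julia
using ReservoirComputing

# 3 raw input features; gamma = 0.5 wires half the reservoir
# nodes exclusively to the raw inputs.
informed = InformedLayer(3; scaling = 0.1, gamma = 0.5)

# No bias contribution: per its docstring, this layer generates zeros.
bias = NullLayer()
```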