
'encoder_activation' parameter listed in LSTM model documentation is not included in LSTM initialization #426

Open
PotosnakW opened this issue Jan 31, 2023 · 2 comments · May be fixed by #1175
Comments

@PotosnakW

Describe the bug

  • The 'encoder_activation' parameter listed in the LSTM model documentation is not included in LSTM initialization.
  • The 'activation' and 'shared_weights' parameters listed in the TFT model documentation are not included in TFT initialization.

Model training fails when these parameters are included.

To Reproduce
Steps to reproduce the behavior:

  1. Run the example code below
  2. See the error (e.g. Trainer.__init__() got an unexpected keyword argument 'shared_weights')

Expected behavior
Parameters are recognized and model training is successful.

Desktop (please complete the following information):

  • OS: Springdale Open Enterprise Linux (based on Red Hat Enterprise Linux)
  • Browser: N/A
  • Version: 8.7 Moderna

Code Example

import torch
import pandas as pd
from datasetsforecast.long_horizon import LongHorizon

from ray import tune
from ray.tune.search.hyperopt import HyperOptSearch

from neuralforecast.auto import AutoLSTM, AutoTFT
from neuralforecast.core import NeuralForecast
from neuralforecast.losses.pytorch import MAE

import logging
logging.getLogger("pytorch_lightning").setLevel(logging.WARNING)

def main():
    Y_df = pd.read_csv('/home/scratch/wpotosna/longhorizon/datasets/ili/M/df_y.csv')
    Y_df['ds'] = pd.to_datetime(Y_df['ds'])

    n_time = len(Y_df.ds.unique())
    val_size = int(.2 * n_time)
    test_size = int(.2 * n_time)
    horizon = 96

    lstm_config = {
        "learning_rate": tune.choice([1e-3]),
        "max_steps": tune.choice([4]),
        "val_check_steps": tune.choice([2]),
        "input_size": tune.choice([2 * horizon]),
        "encoder_activation": tune.choice(['tanh']),
        "encoder_n_layers": tune.choice([2]),
        "encoder_hidden_size": tune.choice([128]),
        "context_size": tune.choice([10]),
        "decoder_hidden_size": tune.choice([128]),
        "decoder_layers": tune.choice([2]),
        "random_seed": tune.choice([1, 2, 3, 4, 5]),
    }

    tft_config = {
        "learning_rate": tune.choice([1e-3]),
        "max_steps": tune.choice([4]),
        "val_check_steps": tune.choice([2]),
        "input_size": tune.choice([2 * horizon]),
        "hidden_size": tune.choice([256, 512]),
        "dropout": tune.choice([0]),
        "attn_dropout": tune.choice([0]),
        "shared_weights": tune.choice([False]),
        "activation": tune.choice(['ReLU']),
        "random_seed": tune.choice([1, 2, 3, 4, 5]),
    }

    models = [AutoLSTM(h=horizon,
                       loss=MAE(),
                       config=lstm_config,
                       search_alg=HyperOptSearch(),
                       num_samples=1),
              AutoTFT(h=horizon,
                      loss=MAE(),
                      config=tft_config,
                      search_alg=HyperOptSearch(),
                      num_samples=1),
             ]

    fcst = NeuralForecast(models=models, freq='15min') 

    fcst_df = fcst.cross_validation(df=Y_df, val_size=val_size, test_size=test_size, n_windows=None)

if __name__ == '__main__':
    main()


wasf84 commented Feb 17, 2024

Has this already been fixed?
I'm using the latest version (v1.6.4, from Oct 2023) and I'm still getting this error message.

Thanks in advance.

@marcopeix
Contributor

encoder_activation is not used in LSTM and has already been removed from the docstring. shared_weights is not used in TFT. A PR will add the choice of activation function in the GRN module of TFT.
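
Until that PR lands, one workaround is to drop the unsupported keys from the tuning configs before passing them to the Auto models (the unknown keys appear to be forwarded to the PyTorch Lightning Trainer, which is why the error surfaces as Trainer.__init__() rejecting them). Below is a minimal sketch; the UNSUPPORTED mapping and prune_config helper are illustrative, not part of the neuralforecast API, and the key names come from the comment above:

# Illustrative workaround: strip the keys that the current LSTM/TFT
# signatures do not accept before handing the configs to AutoLSTM / AutoTFT.
# 'UNSUPPORTED' and 'prune_config' are hypothetical helpers, not library API.
UNSUPPORTED = {
    "AutoLSTM": {"encoder_activation"},
    "AutoTFT": {"shared_weights", "activation"},
}

def prune_config(config, model_name):
    """Return a copy of a tuning config without the keys the model rejects."""
    drop = UNSUPPORTED.get(model_name, set())
    return {k: v for k, v in config.items() if k not in drop}

# Applied to the configs from the reproduction script above:
# lstm_config = prune_config(lstm_config, "AutoLSTM")   # drops 'encoder_activation'
# tft_config = prune_config(tft_config, "AutoTFT")      # drops 'shared_weights', 'activation'

With those keys removed, the remaining entries (learning_rate, max_steps, hidden sizes, etc.) match the model signatures and the run should no longer hit the Trainer.__init__() keyword error.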

@marcopeix linked a pull request (#1175) on Oct 10, 2024 that will close this issue.