ForecastBenchmark

Libra, a forecasting benchmark, automatically evaluates forecasting methods based on their performance in a diverse set of evaluation scenarios. The benchmark comprises four different use cases, each covering 100 heterogeneous time series taken from different domains. A more detailed description of Libra can be found in the Description.

A live demo for using the benchmark is hosted at CodeOcean.

The assembled dataset for this benchmark, comprising 400 time series, is included in this package and is also publicly available at Zenodo.

Check out our Getting Started Guide for information on how to use Libra.

Quick Example

The following example shows how to execute the ForecastBenchmark:

library(ForecastBenchmark)

# A simple forecaster: fit a naive model and return the h point forecasts.
forecaster <- function(ts, h) {
  model <- naive(ts)
  values <- forecast(model, h = h)$mean
  return(values)
}

# Evaluate the forecaster on the economics use case with evaluation type "one".
benchmark(forecaster, usecase = "economics", type = "one")

The installation process, requirements, and options for this benchmark are described in the Getting Started Guide. More detailed documentation can be found in the Documentation.
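As the quick example shows, a forecaster is simply a function that takes a time series `ts` and a horizon `h` and returns the next `h` forecast values. As a sketch of how another method can be plugged in, the wrapper below uses `ets()` from the `forecast` package (assumed to be available, as in the quick example); the function name `ets_forecaster` is chosen here for illustration.

```r
library(ForecastBenchmark)

# Hypothetical wrapper: fit an exponential smoothing (ETS) model
# and return the h point forecasts as a numeric vector.
ets_forecaster <- function(ts, h) {
  model <- ets(ts)
  as.numeric(forecast(model, h = h)$mean)
}

# Evaluate it the same way as the naive forecaster above.
benchmark(ets_forecaster, usecase = "economics", type = "one")
```

Any model that can produce `h` point forecasts from a univariate time series can be adapted to this interface in the same way.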

Cite Us

The forecast benchmark was first published in the Proceedings of the 12th ACM/SPEC International Conference on Performance Engineering (ICPE '21). If you use the forecast benchmark, please cite the following:

@inproceedings{Bauer2021Libra,
  author = {Andr{\'e} Bauer and Marwin Z{\"u}fle and Simon Eismann and Johannes Grohmann and Nikolas Herbst and Samuel Kounev},
  title = {{Libra: A Benchmark for Time Series Forecasting Methods}},
  booktitle = {Proceedings of the 12th ACM/SPEC International Conference on Performance Engineering},
  series = {ICPE '21},
  year = {2021},
  month = {April},
  location = {{Rennes, France}},
}
