
Releases: Grid2op/grid2op

Release 1.6.5

19 Jan 16:26
1f67644

This release focuses on grid2op improvements, especially addressing some enhancements requested in the github issues.

Breaking changes

  • the function "env.reset()" now resets the underlying pseudo random number generators
    of all the environment subclasses (eg. observation space, action space, etc.). This change has been made to
    ensure reproducibility between episodes: if env.seed(...) is called once, then regardless of what happens
    (basically the number of "env.step()" calls between calls to "env.reset()"),
    each episode started by "env.reset()" will be generated with the same prng (drawn from the environment).
    This affects the opponent and the chronics (when maintenance is generated "on the fly").
  • the names of the python files for the "Chronics" module are now lowercase (compliant with PEP). If you
    did things like from grid2op.Chronics.ChangeNothing import ChangeNothing you need to change it like
    from grid2op.Chronics.changeNothing import ChangeNothing or even better, and this is the preferred way to include
    them: from grid2op.Chronics import ChangeNothing. It should not affect lots of code (more refactoring of this kind
    is to be expected in following versions).
  • same as above for the "Observation" module. It should not affect lots of code (more refactoring of this kind
    is to be expected in following versions).
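The new "env.reset()" semantics can be pictured without grid2op at all. The toy class below is purely illustrative (it is not the grid2op implementation); it only shows why reseeding the episode prng at each reset makes episodes reproducible:

```python
import random

class TinyEnv:
    """Toy stand-in (NOT grid2op) for the new reset() semantics:
    reset() re-derives the episode prng from the environment seed, so an
    episode replays identically no matter how many step() calls happened
    in the previous one."""

    def __init__(self):
        self._seed = 0
        self._episode = 0
        self._rng = random.Random()

    def seed(self, s):
        self._seed = s
        self._episode = 0

    def reset(self):
        # the episode prng is drawn deterministically from the env seed
        self._rng = random.Random(self._seed * 100003 + self._episode)
        self._episode += 1
        return self._rng.random()

    def step(self):
        return self._rng.random()

env = TinyEnv()
env.seed(42)
first = env.reset()
env.step(); env.step()        # any number of steps in between...
env.seed(42)
assert env.reset() == first   # ...the episode replays identically
```

In grid2op itself the same pattern applies to the opponent and to on-the-fly maintenance generation: seed once, and each call to "env.reset()" restarts from a reproducible prng state.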

Fixed issues

  • a bug in the EpisodeData that prevented saving the first observation when
    "add_detailed_output" was set to True and the data were not saved on disk.
  • an issue when copying the environment with the opponent (see issue #274)
  • a bug leading to the wrong "backend.get_action_to_set()" when there were storage units on the grid.
  • a bug in the "BackendConverter" when there are storage units on the grid
  • issue #265
  • issue #261

New features

  • possibility to use "env.set_id" by giving only the folder of the chronics and not the whole path.
  • function "env.chronics_handler.available_chronics()" to return the list of available chronics
    for a given environment
  • possibility, through the Parameters class, to limit the number of possible calls to obs.simulate(...)
    see param.MAX_SIMULATE_PER_STEP and param.MAX_SIMULATE_PER_EPISODE (see issue #273)
  • a class to generate a "Chronics" readable by grid2op from numpy arrays (see #271)
  • an attribute delta_time in the observation that tells the time (in minutes) between two consecutive steps.
  • a method of the action space to show a list of actions to get back to the original topology
    (see #275)
    env.action_space.get_back_to_ref_state(obs)
  • a method of the action to store it in a grid2op independent fashion (using json and dictionaries), see act.as_serializable_dict()
  • possibility to generate a gym DiscreteActSpace from a given list of actions (see #277)
  • a class that outputs a noisy observation to the agent (see NoisyObservation): the agent sees
    the real values of the environment with some noise; this can be used to model inaccurate
    sensors.
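The simulate budget introduced through the Parameters class can be sketched with a tiny counter. Only the parameter names param.MAX_SIMULATE_PER_STEP and param.MAX_SIMULATE_PER_EPISODE come from the release notes; everything else below is an illustrative stand-in, not the real grid2op code:

```python
class SimulateBudget:
    """Toy stand-in for the MAX_SIMULATE_PER_STEP /
    MAX_SIMULATE_PER_EPISODE limits: once either budget is exhausted,
    further simulate() calls raise."""

    def __init__(self, max_per_step, max_per_episode):
        self.max_per_step = max_per_step
        self.max_per_episode = max_per_episode
        self._this_step = 0
        self._this_episode = 0

    def on_new_step(self):
        # the per-step counter resets at each env.step()
        self._this_step = 0

    def simulate(self):
        if self._this_step >= self.max_per_step:
            raise RuntimeError("MAX_SIMULATE_PER_STEP exceeded")
        if self._this_episode >= self.max_per_episode:
            raise RuntimeError("MAX_SIMULATE_PER_EPISODE exceeded")
        self._this_step += 1
        self._this_episode += 1

budget = SimulateBudget(max_per_step=2, max_per_episode=3)
budget.simulate(); budget.simulate()   # 2 calls this step: allowed
budget.on_new_step()
budget.simulate()                      # 3rd call overall: allowed
try:
    budget.simulate()                  # 4th call: episode budget spent
except RuntimeError as exc:
    print(exc)                         # MAX_SIMULATE_PER_EPISODE exceeded
```

Such a cap is useful to keep agents from leaning too heavily on look-ahead via obs.simulate(...) during a competition.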

Improvements

  • the observation now raises Grid2OpException instead of RuntimeError
  • docs (and notebooks) for the "split_train_val" #269
  • the "split_train_val" function can now also generate a test dataset, see #276
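The point of act.as_serializable_dict() (listed under the new features above) is that the action becomes plain dictionaries and lists, so it survives a json round-trip with no grid2op dependency. The keys below are purely illustrative, not the actual serialization schema:

```python
import json

# Hypothetical payload: these keys are illustrative only, NOT the
# actual schema produced by act.as_serializable_dict().
action_dict = {"set_line_status": [["line_0_1", -1]],
               "redispatch": [["gen_2", 5.0]]}

# Because the dict contains only plain types, it can be stored or
# exchanged as text and reloaded without importing grid2op.
restored = json.loads(json.dumps(action_dict))
assert restored == action_dict
```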

Release 1.6.4

08 Nov 09:44
9685e76

Some quality of life features and minor speed improvements

Breaking changes

  • the names of the python files for the "Agent" module are now lowercase (compliant with PEP). If you
    did things like from grid2op.Agent.BaseAgent import BaseAgent you need to change it like
    from grid2op.Agent.baseAgent import BaseAgent or even better, and this is the preferred way to include
    them: from grid2op.Agent import BaseAgent. It should not affect lots of code.

Fixed issues

  • a bug where a disconnected shunt still had a voltage when using the pandapower backend
  • a bug that prevented printing the action space if some "part" of it had no size (empty action space)
  • a bug that prevented copying an action properly (especially for the alarm)
  • a bug that did not "close" the backend of the observation space when the environment was closed. This
    might be related to #255

New features

  • serialization of current_iter and max_iter in the observation.
  • the possibility to use the runner only on certain episode id
    (see runner.run(..., episode_id=[xxx, yyy, ...]))
  • a function that tells whether an action has any chance to modify the grid, see act.can_affect_something()
  • a type of agent that performs predefined actions from a given list
  • basic support for logging in environment and runner (more coming soon)
  • possibility to make an environment with an implementation of a reward, instead of relying on a reward class.
  • a possible implementation of a N-1 reward
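The "predefined actions from a given list" agent can be sketched in a few lines. The class name, constructor, and do-nothing fallback below are assumptions for illustration, not the real grid2op class:

```python
class ReplayAgent:
    """Illustrative agent (NOT the grid2op class) that plays actions
    from a fixed list, then falls back to a default (do-nothing) action
    once the list is exhausted."""

    def __init__(self, actions, do_nothing=None):
        self.actions = list(actions)
        self.do_nothing = do_nothing
        self._idx = 0

    def act(self, observation, reward=0.0, done=False):
        if self._idx < len(self.actions):
            action = self.actions[self._idx]
            self._idx += 1
            return action
        return self.do_nothing

agent = ReplayAgent(["a1", "a2"], do_nothing="noop")
assert agent.act(None) == "a1"
assert agent.act(None) == "a2"
assert agent.act(None) == "noop"   # list spent: do nothing
```

This kind of agent is handy to replay a recorded episode or to benchmark a hand-crafted action sequence.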

Improvements

  • the right time stamp is now set in the observation after a game over.
  • the current number of steps is now correct when the observation is set to a game over state.
  • documentation to clearly state that the action_class should not be modified.
  • possibility to tell which chronics to use with the result of env.chronics_handler.get_id() (this is also
    compatible with the runner)
  • it is no longer possible to call "env.reset()" or "env.step()" after an environment has been closed: a clean error
    is raised in this case.
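The closed-environment guard can be sketched as follows. This is a toy stand-in, not the grid2op implementation; the error message is illustrative:

```python
class ClosableEnv:
    """Toy sketch (NOT grid2op) of the new behaviour: once close() has
    been called, reset() and step() raise a clean, explicit error
    instead of failing in an obscure way."""

    def __init__(self):
        self._closed = False

    def close(self):
        self._closed = True

    def _assert_open(self, what):
        if self._closed:
            raise RuntimeError(
                f"Impossible to call '{what}': the environment is closed")

    def reset(self):
        self._assert_open("reset")
        return "obs"

    def step(self, action):
        self._assert_open("step")
        return "obs", 0.0, False, {}

env = ClosableEnv()
env.reset()
env.close()
try:
    env.step(None)
except RuntimeError as exc:
    print(exc)   # Impossible to call 'step': the environment is closed
```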

Version 1.6.3

21 Aug 17:10
e2f8af9

This version focuses on performance when using env.copy and fixes some bugs in obs.simulate.

  • [FIXED] a bug that wrongly allowed the use of the function backend.get_action_to_set() even when the backend
    had diverged (which should not be possible)
  • [FIXED] a bug leading to incorrect consideration of the status of powerlines right after the activation
    of some protections (see issue #245)
  • [IMPROVED] the PandaPowerBackend is now able to load a grid with a distributed slack bus. When loaded though,
    the grid will be converted to one with a single slack bus (the first slack among the distributed ones)
  • [IMPROVED] massive speed-ups when copying environment or using obs.simulate (sometimes higher than 30x speed up)
  • [IMPROVED] experimental compatibility with different frameworks thanks to the possibility to serialize, as text
    files the class created "on the fly" (should solve most of the "pickle" error). See env.generate_classes()
    for an example usage. Every feedback is appreciated.

Version 1.6.2 (hotfix)

18 Aug 12:00
de8588e

A bug present since version 1.6.0 prevented, in some specific cases, the use of "obs.simulate" if the "alarm" / "attention budget" features were used.

This hotfix (which will not be ported back to previous grid2op versions) fixes that issue.

Version 1.6.2

18 Aug 11:57
5620aec

Adding complete support for pickling grid2op classes. This is a major feature that makes it much easier to use grid2op
with multiprocessing and ensures compatibility with more recent versions of some RL packages
(eg ray / rllib). Note that full compatibility with "multiprocessing" and "pickle" is not completely done yet.
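Why pickling grid2op classes is non-trivial: grid2op builds environment classes "on the fly", and Python's pickle cannot serialize classes created at runtime because it stores them by importable name. The generic failure mode can be reproduced without grid2op:

```python
import pickle

def make_dynamic_class():
    # Classes created "on the fly" (here, inside a function) cannot be
    # pickled by reference: this is the kind of error grid2op used to
    # hit, since it builds environment classes dynamically.
    class Dynamic:
        pass
    return Dynamic()

try:
    pickle.dumps(make_dynamic_class())
    picklable = True
except (pickle.PicklingError, AttributeError):
    picklable = False

# The dynamically created class cannot be found by its qualified name,
# so pickling its instances fails.
assert not picklable
```

This release adds the pickle support on the grid2op side; serializing the generated classes to importable files (env.generate_classes(), mentioned under version 1.6.3 above) addresses the same problem from another angle.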

Release 1.6.1

27 Jul 08:46
db48d12

This will be the grid2op version used to rank the submissions of the ICAPS 2021 competition.

Fixed issues

  • a bug in the "env.get_path_env()" in case env was a multimix (it returned the path of the current mix
    instead of the path of the multimix environment)
  • a bug in backend.get_action_to_set() and backend.update_from_obs() in case of a disconnected shunt
    with backends that support shunts (values for p and q were set even if the shunt was disconnected, which
    could lead to undefined behaviour)

Improvements

  • now grid2op is able to check if an environment needs to be updated when calling grid2op.update_env()
    thanks to the use of registered hash values.
  • now grid2op will check if an update is available when an environment is being downloaded for the
    first time.
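The hash-based update check boils down to comparing a locally computed fingerprint with a registered upstream one. The sketch below is illustrative only; the real grid2op mechanism (which files are hashed, which algorithm) may differ:

```python
import hashlib

def dataset_fingerprint(chunks):
    """Illustrative only (NOT the grid2op implementation): hash the
    dataset content chunk by chunk. Comparing the local hash with a
    registered upstream hash tells whether an update is needed."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

local = dataset_fingerprint([b"chronics v1"])
upstream = dataset_fingerprint([b"chronics v2"])
needs_update = local != upstream
assert needs_update   # content changed upstream, so an update is due
```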

Release v1.6.0 (hotfix)

23 Jun 08:39

Fix the issue #235

This can be installed with:

pip install -U grid2op

Version 1.6.0

22 Jun 08:51
abcddb4

EDIT: a bug (#235) slipped through our tests and is present in this release. This will be updated as soon as possible.

Minor breaking changes

  • (but transparent for everyone): the disc_lines attribute is now part of the environment, and now
    contains integers (representing the "order" in which the lines are disconnected due to protections) rather
    than just booleans.
  • now the observation stores the information related to shunts by default. This means old logs computed with
    the runner might not work with this new version.
  • the "Runner.py" file has been renamed "runner.py", following the pep convention. You should rename your
    import from from grid2op.Runner.Runner import Runner to from grid2op.Runner.runner import Runner
    (NB we highly recommend importing the Runner like from grid2op.Runner import Runner though!)

Fixed issues

  • the L2RPN_2020 score has been updated to reflect the score used during these competitions (there was an
    error between DoNothingAgent and RecoPowerlineAgent)
    [see #228]
  • some bugs in the action_space.get_all_unitary_redispatch and action_space.get_all_unitary_curtail
  • some bugs in the GreedyAgent and TopologyGreedy
  • #220: flow_bus_matrix did not take into
    account disconnected powerlines, making it impossible to compute this matrix in some cases.
  • #223: now able to plot a grid even
    if there is nothing controllable by grid2op present in it.
  • an issue where the parameters were not completely saved in json format (the alarm feature was
    absent) (related to #224)
  • an error caused by the observation not being copied when a game over occurred, which caused issues in
    some cases (related to #226)
  • a bug in the opponent space where the "previous_fail" kwarg was not updated properly and was wrongly sent
    to the opponent
  • a bug in the geometric opponent when an attack it performed failed.
  • #229: a typo in the AlarmReward class when reset.

Addition

  • retrieval of the max_step (ie the maximum number of steps that can be performed in the current episode)
    from the observation
  • some handy arguments in the action_space.get_all_unitary_redispatch and
    action_space.get_all_unitary_curtail (see doc)
  • a utils function to compute the score used for the ICAPS 2021 competition (see
    from grid2op.utils import ScoreICAPS2021 and the associated documentation for more information)
  • a first version of the "l2rpn_icaps_2021" environment (accessible with
    grid2op.make("l2rpn_icaps_2021", test=True))

Improvements

  • prevent the use of the same instance of a backend in different environments
  • #217: no more errors when trying to
    load a grid with elements unsupported by PandaPowerBackend (eg. 3-winding transformers or static generators)
  • #215: warnings are issued when elements
    present in the pandapower grid will not be modified on the grid2op side.
  • #214 : adding the shunt information
    in the observation documentation.
  • documentation on how to use the env.change_parameters function.

Release version 1.5.2

10 May 08:57
c1bb757

Minor breaking changes

  • Allow the opponent to choose the duration of its attack. This breaks the previous "Opponent.attack(...)"
    signature by adding an object to the return value. All code provided with grid2op is compatible with this
    change. (For previously coded opponents, the only thing you have to do to make them compliant with
    the new interface is, in the opponent.attack(...) function, to return whatever_you_returned_before, None instead
    of simply whatever_you_returned_before.)
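In code, the one-line adaptation described above looks like this. The class name and the attack payload are purely illustrative; only the "return a pair" convention comes from the release notes:

```python
class MyOpponent:
    """Sketch of adapting a pre-1.5.2 opponent to the new interface.
    The attack payload below is illustrative only."""

    def attack(self, observation):
        attack_action = "disconnect line 3"   # whatever you returned before
        # pre-1.5.2: `return attack_action`
        # from 1.5.2: also return the attack duration; returning None
        # keeps the previous default behaviour
        return attack_action, None

action, duration = MyOpponent().attack(observation=None)
assert duration is None   # default duration: unchanged behaviour
```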

Fixed issues

  • #196: an issue related to the
    low / high of the observation when using the gym_compat module. Some more protections
    are enforced now.
  • #196: an issue related to the scaling when negative
    numbers are used (in these cases low / max would be mixed up)
  • an issue with the IncreasingFlatReward reward type
  • a bug due to the conversion of int to float in the range of the BoxActionSpace for the gym_compat module
  • a bug in the BoxGymActSpace, BoxGymObsSpace, MultiDiscreteActSpace and DiscreteActSpace
    where the order of the attributes for the conversion
    was encoded in a set. A sorted list is now enforced. We did not manage to find a bug caused by this issue, but
    it is definitely possible. This has been fixed now.
  • a bug where, when an observation was set to a "game over" state, some of its attributes were below the
    maximum values allowed in the BoxGymObsSpace

Addition

  • A reward EpisodeDurationReward that is always 0 except at the end of an episode, where it returns a float
    proportional to the number of steps taken since the beginning of the episode.
  • in the Observation the possibility to retrieve the current number of steps
  • easier functions to manipulate the maximum number of iterations we want to perform, directly from the environment
  • function to retrieve the maximum duration of the current episode.
  • a new kind of opponent that is able to attack at "more random" times with "more random" duration.
    See the GeometricOpponent.
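The EpisodeDurationReward idea from the list above can be sketched in a few lines. This is a toy version, not the grid2op class; in particular, normalising by the episode's maximum length is an assumption made here for illustration:

```python
class EpisodeDurationRewardSketch:
    """Toy version (NOT the grid2op class) of EpisodeDurationReward:
    0 at every step, and at the end of the episode a float proportional
    to the number of steps taken (the max_step normalisation is an
    assumption of this sketch)."""

    def __init__(self, max_step):
        self.max_step = max_step

    def __call__(self, n_steps_done, is_done):
        if not is_done:
            return 0.0
        return n_steps_done / self.max_step

reward = EpisodeDurationRewardSketch(max_step=100)
assert reward(10, False) == 0.0   # mid-episode: always 0
assert reward(50, True) == 0.5    # game over after 50 of 100 steps
```

A reward of this shape directly encourages agents that keep the grid alive longer.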

Improvements

  • on Windows at least, grid2op does not work with gym < 0.17.2. Checks are performed in order to make sure
    the installed open ai gym package meets this requirement (see issue
    https://github.com/rte-france/Grid2Op/issues/185)
  • the seed of openAI gym for composed action spaces (see issue https://github.com/openai/gym/issues/2166):
    while waiting for an official fix, grid2op uses the solution proposed in
    openai/gym#2166 (comment)

Release v1.5.1 (hotfix)

20 Apr 13:34
580e4be

A file had been named "platform.py" in grid2op, which could lead to bugs during the import of the package (see https://stackoverflow.com/questions/22438609/attributeerror-module-object-has-no-attribute-python-implementation-running or https://www.programmersought.com/article/52577306429/)

This file has been renamed to avoid this issue. This is the only fix provided.