From 39c5413b216e6a1d669051e2bca9b3a77edfbf8e Mon Sep 17 00:00:00 2001 From: Leo Heidweiler Date: Mon, 8 Jul 2024 11:05:26 +0100 Subject: [PATCH] Deployed 57a694a to develop with MkDocs 1.6.0 and mike 2.0.0 --- develop/search/search_index.json | 2 +- develop/tutorials/R/overview/index.html | 32 +-- develop/tutorials/python/overview/index.html | 222 +++++++++---------- 3 files changed, 128 insertions(+), 128 deletions(-) diff --git a/develop/search/search_index.json b/develop/search/search_index.json index ef8e7a6..37ee4d3 100644 --- a/develop/search/search_index.json +++ b/develop/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to POSTED's documentation","text":"

POSTED (the Potsdam Open-Source Techno-Economic Database) is a public database of techno-economic data on energy and climate-mitigation technologies, along with a framework for consistent data handling and an open-source toolbox for techno-economic assessments (TEA). In particular, it provides a structure for and contains data on investment cost, energy and feedstock demand, other fixed and variable costs, emissions intensities, and other characteristics of conversion, storage, and transportation technologies in the energy and related sectors. The accompanying software code is intended for consistent maintenance of this data and for deriving straightforward results from it, such as levelised cost, greenhouse-gas emission intensities, or marginal abatement cost.

"},{"location":"installation/","title":"How to work with and contribute to POSTED","text":"

If you want to use POSTED or contribute to its development, you first need to obtain and install POSTED in one of two ways:

"},{"location":"installation/#installing-posted-as-a-package","title":"Installing POSTED as a package","text":"PythonR

You can install the posted Python package via:

# when using poetry\npoetry add git+https://github.com/PhilippVerpoort/posted.git\n\n# when using pip\npip install git+https://github.com/PhilippVerpoort/posted.git\n
A PyPI package will be made available at a later stage.

You can install the posted R package using install_github from devtools via:

install_github('PhilippVerpoort/posted')\n
A CRAN package will be made available at a later stage.

This will allow you to use the data contained in POSTED's public database and general-purpose functions from the NOSLAG and TEAM frameworks.

"},{"location":"installation/#cloning-posted-from-source","title":"Cloning POSTED from source","text":""},{"location":"methodology/","title":"Methodology of POSTED","text":""},{"location":"methodology/#purpose-and-aims","title":"Purpose and aims","text":"

The development of the POSTED framework pursues several goals:

Obtain a comprehensive collection of techno-economic data. The data needed for techno-economic assessments and various types of modelling is often scattered across many sources and formats. One aim of POSTED is to collect all required data in one place in a consistent format. This data will forever be publicly available under a permissive licence in order to overcome existing barriers to collaboration.

Make data easily available for manipulation and techno-economic assessments. Techno-economic data often comes in the form of Excel spreadsheets, which are difficult to work with when performing assessments. Calculating the levelised cost of production or comparing parameters across different sources should be easy and straightforward with a few lines of code.

Make data sources traceable and transparent. When working with techno-economic data, the origin and underlying assumptions are often opaque and hard to trace. By being explicit about sources and reporting data only in the original units, the process of data curation becomes more open and transparent. Through the development and explication of clear standards, misunderstandings can be avoided.

Be extendible to meet users' requirements. The POSTED database can be extended, allowing users to meet the requirements of their own projects.

"},{"location":"methodology/#database-format","title":"Database format","text":"

POSTED is extensible via public and private databases. The public database is part of the public GitHub repository and is located in the inst/extdata subdirectory. Private project-specific databases can be added to POSTED by adding a respective database path to the databases dictionary of the path module before executing any other POSTED code.

PythonR
from pathlib import Path\nimport posted.path\ndatabases |= {'database_name': Path('/path/to/database/')}\n
library(POSTED)\ndatabases$database_name <- \"/path/to/database/\"\n

The public database is intended for the curation of a comprehensive set of general-purpose resources that should suit the needs of most users. Private databases may be used for low-threshold extensibility, for more project-specific work that is not in the interest of a broad audience, or for confidential data that cannot be made available publicly.

The format mandates the following components for all databases. If these components have content, they should be placed as subdirectories in the database directory (see here: https://github.com/PhilippVerpoort/posted/tree/refactoring/inst/extdata/database).

TEDFs, fields, and masks are organised in a hierarchical system of variable definitions. This means that the file .../database/tedfs/Tech/Electrolysis.csv defines entries for variables Tech|Electrolysis|..., and so on. The values in the variable and reference_variable columns of a TEDF are appended to the parent variable defined by the file's path: for example, a variable entry CAPEX in that file would define the full variable Tech|Electrolysis|CAPEX.

"},{"location":"methodology/#flow-types","title":"Flow types","text":"

POSTED defines flow types, which are used throughout the TEDF format and the NOSLAG and unit frameworks. Flow types may be energy carriers (electricity, heat, fossil gas, hydrogen, etc), feedstocks (naphtha, ethylene, carbon dioxide), or materials (steel, cement, etc).

They are defined in the flow_types.csv file in each database. Flow types may be overridden by other databases in the order in which the databases are added to POSTED (i.e. private databases will normally override the public database). Flow types are also automatically loaded as tags for the variable definitions.

Flow types come with a unique ID, the so-called flow_id, which is used throughout POSTED (Electricity, Hydrogen, Ammonia, etc). Moreover, the following attributes may be defined for them:

Attributes can be assigned a source by adding the respective BibTeX handle (see below) in the source column.

"},{"location":"methodology/#technology-types","title":"Technology types","text":"

POSTED defines technology types, which are used throughout the TEDF format and the NOSLAG framework. Technology types should represent generic classes of technologies (electrolysis, electric-arc furnaces, direct-air capture, etc).

Technologies are defined in the tech_types.csv file in each database. Technology types may be overridden by other databases in the order in which the databases are added to POSTED (i.e. private databases will normally override the public database). Technology types are also automatically loaded as tags for the variable definitions.

Technology types come with a unique ID, the so-called tech_id, which is used throughout POSTED (Electrolysis for water electrolysis, Haber-Bosch with ASU for Haber-Bosch synthesis with an air-separation unit, Iron Direct Reduction for direct reduction of iron ore based on either fossil gas or hydrogen, etc). Moreover, the following attributes may be defined for them in separate columns:

"},{"location":"methodology/#sources","title":"Sources","text":""},{"location":"methodology/#techno-economic-data-files-tedfs","title":"Techno-economic data files (TEDFs)","text":""},{"location":"methodology/#base-format","title":"Base format","text":"

The TEDF base format contains the following columns:

PythonR

The base columns in Python are defined here.

The base columns in R are defined here.

Columns that are not found in a CSV file will be added by POSTED and set to the default value of the column type.

If one wants to specify additional columns, these need to be defined as fields in one of the databases.

By placing an asterisk (*) in the period, source, or any field column, POSTED expands these rows across all possible values of that column in the harmonise method of the NOSLAG framework (e.g. a row with * in the source column is expanded into one row per available source).

"},{"location":"methodology/#fields","title":"Fields","text":"

Fields can create additional columns for specific variables. A field can currently be one of three types:

"},{"location":"methodology/#masks","title":"Masks","text":"

To be written.

"},{"location":"methodology/#variable-definitions","title":"Variable definitions","text":"

To be written.

See IAMC format: https://github.com/IAMconsortium/common-definitions

"},{"location":"methodology/#units","title":"Units","text":"

To be written.

See pint: https://pint.readthedocs.io/en/stable/

See IAMC units: https://github.com/IAMconsortium/units/

"},{"location":"methodology/#the-normalise-select-aggregate-noslag-framework","title":"The Normalise-Select-Aggregate (NOSLAG) framework","text":"

To be written.

"},{"location":"methodology/#the-techno-economic-assessment-and-manipulation-team-framework","title":"The Techno-economic Assessment and Manipulation (TEAM) framework","text":""},{"location":"methodology/#levelised-cost-of-x-lcox","title":"Levelised Cost of X (LCOX)","text":"

The levelised cost of activity X can be calculated via POSTED based on the following convention: $$ \\mathrm{LCOX} = \\frac{\\mathrm{Capital~Cost} + \\mathrm{OM~Fixed} + \\mathrm{OM~Variable} + \\sum_f \\mathrm{Input~Cost}_f - \\sum_f \\mathrm{Output~Revenue}_f}{\\mathrm{Activity}_X} $$

This is based on the following cost components:

\\[ \\mathrm{Capital~Cost} = \\frac{\\mathrm{ANF} \\times \\mathrm{CAPEX}}{\\mathrm{OCF} \\times \\mathrm{Reference~Capacity}} \\times \\mathrm{Reference~Flow} \\] \\[ \\mathrm{OM~Fixed} = \\frac{\\mathrm{OPEX~Fixed}}{\\mathrm{OCF} \\times \\mathrm{Reference~Capacity}} \\times \\mathrm{Reference~Flow} \\] \\[ \\mathrm{OM~Variable} = \\mathrm{OPEX~Variable} \\] \\[ \\mathrm{Input~Cost}_f = \\mathrm{Price}_f \\times \\mathrm{Input}_f \\] \\[ \\mathrm{Output~Revenue}_f = \\mathrm{Price}_f \\times \\mathrm{Output}_f \\]

with \\(\\mathrm{ANF} = \\frac{\\mathrm{IR} * (1 + \\mathrm{IR})^\\mathrm{BL} / ((1 + \\mathrm{IR})^\\mathrm{BL} - 1)}{\\mathrm{yr}}\\) based on the Interest Rate (IR) and Book Lifetime (BL). The \\(\\mathrm{Reference~Capacity}\\) is the capacity that the CAPEX and OPEX Fixed variables are defined in reference to (e.g. Input Capacity|Electricity or Output Capacity|Methanol), and the \\(\\mathrm{Reference~Flow}\\) is the associated flow. Moreover, \\(\\mathrm{Activity}_X\\) is one of Output|X (with X being Hydrogen, Methanol, Ammonia, etc), Input|X (with X being e.g. Waste), or Service|X (with X being e.g. Passenger Kilometers).

"},{"location":"methodology/#process-chains","title":"Process chains","text":"

Process chains, i.e. combinations of processes that feed inputs and outputs into each other, can be defined in POSTED before performing an LCOX analysis.

For a process chain consisting of processes \(P = \{p_1, p_2, \ldots\}\), we can define feeds \(C^\mathrm{Flow}_{p_1\rightarrow p_2}\) for the Flow being fed from process \(p_1\) to process \(p_2\). Moreover, we can define a demand \(\mathrm{Demand|Flow}_{p_1}\) for the Flow demanded from process \(p_1\). This results in a linear balance equation for the functional process units \(\mathrm{Functional~Unit}_{p_1}\), stated formally after the sketch below.
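
As a concrete illustration of this balance (all coefficients are invented for illustration), consider a hypothetical two-process chain in which process p1 produces an intermediate flow consumed by process p2; the functional units then follow from solving a small linear system:

## Hypothetical two-process chain: p1 feeds an intermediate flow to p2\noutput_p1 <- 1.0   # intermediate output per functional unit of p1\ninput_p2  <- 0.6   # intermediate input per functional unit of p2\noutput_p2 <- 1.0   # final output per functional unit of p2\ndemand_p2 <- 100   # external demand for the final flow\n\n## Flow balances as a linear system A %*% fu = b\nA <- rbind(c(output_p1, -input_p2),  # intermediate flow balance\n           c(0,          output_p2)) # final flow balance\nb <- c(0, demand_p2)\nfu <- solve(A, b)                    # functional units: fu[1] = 60, fu[2] = 100\n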

\\[ \\mathrm{Output|Flow}_{p_1} \\times \\mathrm{Functional~Unit}_{p_1} = \\sum_{p_2} \\mathrm{Input|Flow}_{p_2} \\times \\mathrm{Functional~Unit}_{p_2} \\times C^\\mathrm{Flow}_{p_1\\rightarrow p_2} + \\mathrm{Demand|Flow}_{p_1} \\]"},{"location":"code/R/functions/AbstractColumnDefinition/","title":"AbstractColumnDefinition","text":""},{"location":"code/R/functions/AbstractColumnDefinition/#abstractcolumndefinition","title":"AbstractColumnDefinition","text":""},{"location":"code/R/functions/AbstractColumnDefinition/#description","title":"Description","text":"

Abstract class to store columns

"},{"location":"code/R/functions/AbstractColumnDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/AbstractColumnDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/AbstractColumnDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the AbstractColumnDefinition class

Usage

AbstractColumnDefinition$new(col_type, name, description, dtype, required)\n

Arguments:

"},{"location":"code/R/functions/AbstractColumnDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

AbstractColumnDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/AbstractColumnDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

AbstractColumnDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/AbstractFieldDefinition/","title":"AbstractFieldDefinition","text":""},{"location":"code/R/functions/AbstractFieldDefinition/#abstractfielddefinition","title":"AbstractFieldDefinition","text":""},{"location":"code/R/functions/AbstractFieldDefinition/#description","title":"Description","text":"

Abstract class to store fields

"},{"location":"code/R/functions/AbstractFieldDefinition/#examples","title":"Examples","text":"
### ------------------------------------------------\n### Method `AbstractFieldDefinition$select_and_expand`\n### ------------------------------------------------\n\n## Example usage:\n## select_and_expand(df, \"col_id\", field_vals = NULL)\n
"},{"location":"code/R/functions/AbstractFieldDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/AbstractFieldDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/AbstractFieldDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the AbstractFieldDefinition Class

Usage

AbstractFieldDefinition$new(\n  field_type,\n  name,\n  description,\n  dtype,\n  coded,\n  codes = NULL\n)\n

Arguments:

"},{"location":"code/R/functions/AbstractFieldDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

AbstractFieldDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/AbstractFieldDefinition/#method-select_and_expand","title":"Method select_and_expand()","text":"

Select and expand fields which are valid for multiple periods or other field vals

Usage

AbstractFieldDefinition$select_and_expand(df, col_id, field_vals = NA, ...)\n

Arguments:

Example:

## Example usage:\n## select_and_expand(df, \"col_id\", field_vals = NULL)\n

Returns:

DataFrame where fields are selected and expanded

"},{"location":"code/R/functions/AbstractFieldDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

AbstractFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/CommentDefinition/","title":"CommentDefinition","text":""},{"location":"code/R/functions/CommentDefinition/#commentdefinition","title":"CommentDefinition","text":""},{"location":"code/R/functions/CommentDefinition/#description","title":"Description","text":"

Class to store comment columns

"},{"location":"code/R/functions/CommentDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/CommentDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/CommentDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the CommentDefinition Class

Usage

CommentDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/functions/CommentDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

CommentDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/CommentDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

CommentDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/CustomFieldDefinition/","title":"CustomFieldDefinition","text":""},{"location":"code/R/functions/CustomFieldDefinition/#customfielddefinition","title":"CustomFieldDefinition","text":""},{"location":"code/R/functions/CustomFieldDefinition/#description","title":"Description","text":"

Class to store Custom fields

"},{"location":"code/R/functions/CustomFieldDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/CustomFieldDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/CustomFieldDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the CustomFieldDefinition class

Usage

CustomFieldDefinition$new(field_specs)\n

Arguments:

"},{"location":"code/R/functions/CustomFieldDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

CustomFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/DataSet/","title":"DataSet","text":""},{"location":"code/R/functions/DataSet/#dataset","title":"DataSet","text":""},{"location":"code/R/functions/DataSet/#description","title":"Description","text":"

This class provides methods to store, normalise, select, and aggregate DataSets.

"},{"location":"code/R/functions/DataSet/#examples","title":"Examples","text":"
### ------------------------------------------------\n### Method `DataSet$normalise`\n### ------------------------------------------------\n\n## Example usage:\ndataset$normalise(override = list(\"variable1\" = \"value1\"), inplace = FALSE)\n\n\n### ------------------------------------------------\n### Method `DataSet$select`\n### ------------------------------------------------\n\n## Example usage:\ndataset$select(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, field1 = \"value1\")\n\n\n### ------------------------------------------------\n### Method `DataSet$aggregate`\n### ------------------------------------------------\n\n## Example usage:\ndataset$aggregate(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, agg = \"field\", masks = list(mask1, mask2), masks_database = TRUE)\n
"},{"location":"code/R/functions/DataSet/#methods","title":"Methods","text":""},{"location":"code/R/functions/DataSet/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/DataSet/#method-new","title":"Method new()","text":"

Create new instance of the DataSet class

Usage

DataSet$new(\n  parent_variable,\n  include_databases = NULL,\n  file_paths = NULL,\n  check_inconsistencies = FALSE,\n  data = NULL\n)\n

Arguments:

"},{"location":"code/R/functions/DataSet/#method-normalise","title":"Method normalise()","text":"

Normalise data: convert to default reference units, set the reference value equal to 1.0, and use default reported units

Usage

DataSet$normalise(override = NULL, inplace = FALSE)\n

Arguments:

Example:

## Example usage:\ndataset$normalise(override = list(\"variable1\" = \"value1\"), inplace = FALSE)\n

Returns:

DataFrame. If inplace is FALSE, returns the normalised dataframe.

"},{"location":"code/R/functions/DataSet/#method-select","title":"Method select()","text":"

Select desired data from the dataframe

Usage

DataSet$select(\n  override = NULL,\n  drop_singular_fields = TRUE,\n  extrapolate_period = TRUE,\n  ...\n)\n

Arguments:

Example:

## Example usage:\ndataset$select(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, field1 = \"value1\")\n

Returns:

DataFrame. DataFrame with selected values.

"},{"location":"code/R/functions/DataSet/#method-aggregate","title":"Method aggregate()","text":"

Aggregates data based on specified parameters, applies masks, and cleans up the resulting DataFrame.

Usage

DataSet$aggregate(\n  override = NULL,\n  drop_singular_fields = TRUE,\n  extrapolate_period = TRUE,\n  agg = NULL,\n  masks = NULL,\n  masks_database = TRUE,\n  ...\n)\n

Arguments:

Example:

## Example usage:\ndataset$aggregate(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, agg = \"field\", masks = list(mask1, mask2), masks_database = TRUE)\n

Returns:

DataFrame. The aggregate method returns a DataFrame that has been cleaned up and aggregated based on the specified parameters and input data. The method aggregates over component fields and case fields, applies weights based on masks, drops rows with NaN weights, aggregates with weights, inserts reference variables, sorts columns and rows, rounds values, and inserts units before returning the final DataFrame.

"},{"location":"code/R/functions/DataSet/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

DataSet$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/Mask/","title":"Mask","text":""},{"location":"code/R/functions/Mask/#mask","title":"Mask","text":""},{"location":"code/R/functions/Mask/#description","title":"Description","text":"

Class to define masks with conditions and weights to apply to DataFiles

"},{"location":"code/R/functions/Mask/#methods","title":"Methods","text":""},{"location":"code/R/functions/Mask/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/Mask/#method-new","title":"Method new()","text":"

Create a new mask object

Usage

Mask$new(where = NULL, use = NULL, weight = NULL, other = NaN, comment = \"\")\n

Arguments:

"},{"location":"code/R/functions/Mask/#method-matches","title":"Method matches()","text":"

Check if a mask matches a dataframe by verifying if all 'where' conditions match across all rows.

Usage

Mask$matches(df)\n

Arguments:

Returns:

Logical. If the mask matches the dataframe.

"},{"location":"code/R/functions/Mask/#method-get_weights","title":"Method get_weights()","text":"

Apply weights to the dataframe

Usage

Mask$get_weights(df)\n

Arguments:

Returns:

Dataframe. Dataframe with applied weights

"},{"location":"code/R/functions/Mask/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

Mask$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/PeriodFieldDefinition/","title":"PeriodFieldDefinition","text":""},{"location":"code/R/functions/PeriodFieldDefinition/#periodfielddefinition","title":"PeriodFieldDefinition","text":""},{"location":"code/R/functions/PeriodFieldDefinition/#description","title":"Description","text":"

Class to store Period fields

"},{"location":"code/R/functions/PeriodFieldDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/PeriodFieldDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/PeriodFieldDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the PeriodFieldDefinition Class

Usage

PeriodFieldDefinition$new(name, description)\n

Arguments:

"},{"location":"code/R/functions/PeriodFieldDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

PeriodFieldDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/PeriodFieldDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

PeriodFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/RegionFieldDefinition/","title":"RegionFieldDefinition","text":""},{"location":"code/R/functions/RegionFieldDefinition/#regionfielddefinition","title":"RegionFieldDefinition","text":""},{"location":"code/R/functions/RegionFieldDefinition/#description","title":"Description","text":"

Class to store Region fields

"},{"location":"code/R/functions/RegionFieldDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/RegionFieldDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/RegionFieldDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the RegionFieldDefinition class

Usage

RegionFieldDefinition$new(name, description)\n

Arguments:

"},{"location":"code/R/functions/RegionFieldDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

RegionFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/SourceFieldDefinition/","title":"SourceFieldDefinition","text":""},{"location":"code/R/functions/SourceFieldDefinition/#sourcefielddefinition","title":"SourceFieldDefinition","text":""},{"location":"code/R/functions/SourceFieldDefinition/#description","title":"Description","text":"

Class to store Source fields

"},{"location":"code/R/functions/SourceFieldDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/SourceFieldDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/SourceFieldDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the SourceFieldDefinition class

Usage

SourceFieldDefinition$new(name, description)\n

Arguments:

"},{"location":"code/R/functions/SourceFieldDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

SourceFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/TEBase/","title":"TEBase","text":""},{"location":"code/R/functions/TEBase/#tebase","title":"TEBase","text":""},{"location":"code/R/functions/TEBase/#description","title":"Description","text":"

This is the base class for technoeconomic data.

"},{"location":"code/R/functions/TEBase/#examples","title":"Examples","text":"
## Example usage:\nbase_technoeconomic_data <- TEBase$new(\"variable_name\")\n
"},{"location":"code/R/functions/TEBase/#methods","title":"Methods","text":""},{"location":"code/R/functions/TEBase/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/TEBase/#method-new","title":"Method new()","text":"

Create new instance of TEBase class. Set parent variable and technology specifications (var_specs) from input

Usage

TEBase$new(parent_variable)\n

Arguments:

"},{"location":"code/R/functions/TEBase/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

TEBase$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/TEDF/","title":"TEDF","text":""},{"location":"code/R/functions/TEDF/#tedf","title":"TEDF","text":""},{"location":"code/R/functions/TEDF/#description","title":"Description","text":"

This class is used to store Technoeconomic DataFiles.

"},{"location":"code/R/functions/TEDF/#examples","title":"Examples","text":"
## Example usage:\ntedf <- TEDF$new(\"variable_name\")\ntedf$load()\ntedf$read(\"file_path.csv\")\ntedf$write(\"output_file_path.csv\")\ntedf$check()\ntedf$check_row()\n\n\n### ------------------------------------------------\n### Method `TEDF$load`\n### ------------------------------------------------\n\n## Example usage:\ntedf$load()\n\n\n### ------------------------------------------------\n### Method `TEDF$read`\n### ------------------------------------------------\n\n## Example usage:\ntedf$read()\n\n\n### ------------------------------------------------\n### Method `TEDF$write`\n### ------------------------------------------------\n\n## Example usage:\ntedf$write()\n\n\n### ------------------------------------------------\n### Method `TEDF$check`\n### ------------------------------------------------\n\n## Example usage:\ntedf$check(raise_exception = TRUE)\n
"},{"location":"code/R/functions/TEDF/#methods","title":"Methods","text":""},{"location":"code/R/functions/TEDF/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/TEDF/#method-new","title":"Method new()","text":"

Create new instance of TEDF class. Initialise parent class and object fields

Usage

TEDF$new(\n  parent_variable,\n  database_id = \"public\",\n  file_path = NULL,\n  data = NULL\n)\n

Arguments:

"},{"location":"code/R/functions/TEDF/#method-load","title":"Method load()","text":"

Load TEDataFile (only if it has not been read yet)

Usage

TEDF$load()\n

Example:

## Example usage:\ntedf$load()\n

Returns:

TEDF. Returns the TEDF object it is called on.

"},{"location":"code/R/functions/TEDF/#method-read","title":"Method read()","text":"

This method reads TEDF from a CSV file.

Usage

TEDF$read()\n

Example:

## Example usage:\ntedf$read()\n

"},{"location":"code/R/functions/TEDF/#method-write","title":"Method write()","text":"

Write the TEDF to a CSV file.

Usage

TEDF$write()\n

Example:

## Example usage:\ntedf$write()\n

"},{"location":"code/R/functions/TEDF/#method-check","title":"Method check()","text":"

Check that TEDF is consistent and add inconsistencies to internal parameter

Usage

TEDF$check(raise_exception = TRUE)\n

Arguments:

Example:

## Example usage:\ntedf$check(raise_exception = TRUE)\n

"},{"location":"code/R/functions/TEDF/#method-check_row","title":"Method check_row()","text":"

Checks if a row of the dataframe has issues - NOT IMPLEMENTED YET

Usage

TEDF$check_row(row_id, raise_exception = TRUE)\n

Arguments:

"},{"location":"code/R/functions/TEDF/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

TEDF$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/TEDFInconsistencyException/","title":"TEDFInconsistencyException","text":""},{"location":"code/R/functions/TEDFInconsistencyException/#tedfinconsistencyexception","title":"TEDFInconsistencyException","text":""},{"location":"code/R/functions/TEDFInconsistencyException/#description","title":"Description","text":"

This is a class to store inconsistencies in the TEDFs

"},{"location":"code/R/functions/TEDFInconsistencyException/#methods","title":"Methods","text":""},{"location":"code/R/functions/TEDFInconsistencyException/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/TEDFInconsistencyException/#method-new","title":"Method new()","text":"

Create instance of TEDFInconsistencyException class

Usage

TEDFInconsistencyException$new(\n  message = \"Inconsistency detected\",\n  row_id = NULL,\n  col_id = NULL,\n  file_path = NULL\n)\n

Arguments:

"},{"location":"code/R/functions/TEDFInconsistencyException/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

TEDFInconsistencyException$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/UnitDefinition/","title":"UnitDefinition","text":""},{"location":"code/R/functions/UnitDefinition/#unitdefinition","title":"UnitDefinition","text":""},{"location":"code/R/functions/UnitDefinition/#description","title":"Description","text":"

Class to store Unit columns

"},{"location":"code/R/functions/UnitDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/UnitDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/UnitDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the UnitDefinition class

Usage

UnitDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/functions/UnitDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

UnitDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/UnitDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

UnitDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/ValueDefinition/","title":"ValueDefinition","text":""},{"location":"code/R/functions/ValueDefinition/#valuedefinition","title":"ValueDefinition","text":""},{"location":"code/R/functions/ValueDefinition/#description","title":"Description","text":"

Class to store Value columns

"},{"location":"code/R/functions/ValueDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/ValueDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/ValueDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the ValueDefinition class

Usage

ValueDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/functions/ValueDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

ValueDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/ValueDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

ValueDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/VariableDefinition/","title":"VariableDefinition","text":""},{"location":"code/R/functions/VariableDefinition/#variabledefinition","title":"VariableDefinition","text":""},{"location":"code/R/functions/VariableDefinition/#description","title":"Description","text":"

Class to store variable columns

"},{"location":"code/R/functions/VariableDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/VariableDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/VariableDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the VariableDefinition class

Usage

VariableDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/functions/VariableDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

VariableDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/VariableDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

VariableDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/apply_cond/","title":"Apply cond","text":""},{"location":"code/R/functions/apply_cond/#apply_cond","title":"apply_cond","text":"

apply_cond

"},{"location":"code/R/functions/apply_cond/#description","title":"Description","text":"

Takes a DataFrame and a condition, which can be a string, dictionary, or callable, and applies the condition to the DataFrame using eval or apply accordingly.

"},{"location":"code/R/functions/apply_cond/#usage","title":"Usage","text":"
apply_cond(df, cond)\n
"},{"location":"code/R/functions/apply_cond/#arguments","title":"Arguments","text":"Argument Description df DataFrame. A pandas DataFrame containing the data on which the condition will be applied. cond MaskCondition. The condition to be applied on the dataframe. Can be either a string, a dictionary, or a callable function."},{"location":"code/R/functions/apply_cond/#return-value","title":"Return Value","text":"

DataFrame. Dataframe evaluated at the mask condition.
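
"},{"location":"code/R/functions/apply_cond/#examples","title":"Examples","text":"
A minimal hypothetical call, assuming a dataframe df with a period column; the condition string is invented for illustration:

## Example usage (hypothetical):\napply_cond(df, \"period == 2030\")\n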

"},{"location":"code/R/functions/collect_files/","title":"Collect files","text":""},{"location":"code/R/functions/collect_files/#collect_files","title":"collect_files","text":"

collect_files

"},{"location":"code/R/functions/collect_files/#description","title":"Description","text":"

Takes a parent variable and an optional list of databases to include, checks that they exist, and collects files and directories based on the parent variable.

"},{"location":"code/R/functions/collect_files/#usage","title":"Usage","text":"
collect_files(parent_variable, include_databases = NULL)\n
"},{"location":"code/R/functions/collect_files/#arguments","title":"Arguments","text":"Argument Description parent_variable Character. Variable to collect files on. include_databases Optional listCharacter. List of Database IDs to collect files from."},{"location":"code/R/functions/collect_files/#return-value","title":"Return Value","text":"

List of tuples. List of tuples containing the parent variable and the database ID for each file found in the specified directories.

"},{"location":"code/R/functions/collect_files/#examples","title":"Examples","text":"
## Example usage:\ncollect_files(\"variable_name\", c(\"db1\", \"db2\"))\n
"},{"location":"code/R/functions/combine_units/","title":"Combine units","text":""},{"location":"code/R/functions/combine_units/#combine_units","title":"combine_units","text":"

combine_units

"},{"location":"code/R/functions/combine_units/#description","title":"Description","text":"

Combine fraction of two units into an updated unit string

"},{"location":"code/R/functions/combine_units/#usage","title":"Usage","text":"
combine_units(numerator, denominator)\n
"},{"location":"code/R/functions/combine_units/#arguments","title":"Arguments","text":"Argument Description numerator Character. Numerator of the fraction. denominator Character. Denominator of the fraction."},{"location":"code/R/functions/combine_units/#return-value","title":"Return Value","text":"

Character. Updated unit string after simplification.

"},{"location":"code/R/functions/combine_units/#examples","title":"Examples","text":"
## Example usage:\ncombine_units(\"m\", \"s\")\n
"},{"location":"code/R/functions/is_float/","title":"Is float","text":""},{"location":"code/R/functions/is_float/#is_float","title":"is_float","text":"

is_float

"},{"location":"code/R/functions/is_float/#description","title":"Description","text":"

Checks if a given string can be converted to a floating-point number.

"},{"location":"code/R/functions/is_float/#usage","title":"Usage","text":"
is_float(string)\n
"},{"location":"code/R/functions/is_float/#arguments","title":"Arguments","text":"Argument Description string Character. String to check."},{"location":"code/R/functions/is_float/#return-value","title":"Return Value","text":"

Logical. TRUE if conversion was successful, FALSE if not.

"},{"location":"code/R/functions/is_float/#examples","title":"Examples","text":"
## Example usage:\nis_float(\"3.14\")\n
"},{"location":"code/R/functions/normalise_units/","title":"Normalise units","text":""},{"location":"code/R/functions/normalise_units/#normalise_units","title":"normalise_units","text":"

normalise_units

"},{"location":"code/R/functions/normalise_units/#description","title":"Description","text":"

Takes a DataFrame with reported or reference data, along with dictionaries mapping variable units and flow IDs, and normalises the units of the variables in the DataFrame based on the provided mappings.

"},{"location":"code/R/functions/normalise_units/#usage","title":"Usage","text":"
normalise_units(df, level, var_units, var_flow_ids)\n
"},{"location":"code/R/functions/normalise_units/#arguments","title":"Arguments","text":"Argument Description df DataFrame. Dataframe to be normalized. level Character. Specifies whether the data should be normalized on the reported or reference values. Possible values are 'reported' or 'reference'. var_units List. Dictionary that maps a combination of parent variable and variable to its corresponding unit. The keys in the dictionary are in the format \"{parent_variable} var_flow_ids List. Dictionary that maps a combination of parent variable and variable to a specific flow ID. This flow ID is used for unit conversion in the normalize_units function."},{"location":"code/R/functions/normalise_units/#return-value","title":"Return Value","text":"

DataFrame. Normalised dataframe.

"},{"location":"code/R/functions/normalise_units/#examples","title":"Examples","text":"
## Example usage:\nnormalise_units(df, \"reported\", var_units, var_flow_ids)\n
"},{"location":"code/R/functions/normalise_values/","title":"Normalise values","text":""},{"location":"code/R/functions/normalise_values/#normalise_values","title":"normalise_values","text":"

normalise_values

"},{"location":"code/R/functions/normalise_values/#description","title":"Description","text":"

Takes a DataFrame as input, normalises the 'value' and 'uncertainty' columns by the reference value, and updates the 'reference_value' column accordingly.

"},{"location":"code/R/functions/normalise_values/#usage","title":"Usage","text":"
normalise_values(df)\n
"},{"location":"code/R/functions/normalise_values/#arguments","title":"Arguments","text":"Argument Description df DataFrame. Dataframe to be normalized."},{"location":"code/R/functions/normalise_values/#return-value","title":"Return Value","text":"

DataFrame. Returns a modified DataFrame where the 'value' column has been divided by the 'reference_value' column (or 1.0 if 'reference_value' is null), the 'uncertainty' column has been divided by the 'reference_value' column, and the 'reference_value' column has been replaced with 1.0 if it was not null.

"},{"location":"code/R/functions/normalise_values/#examples","title":"Examples","text":"
## Example usage:\nnormalised_df <- normalise_values(df)\n
"},{"location":"code/R/functions/read_csv_file/","title":"Read csv file","text":""},{"location":"code/R/functions/read_csv_file/#read_csv_file","title":"read_csv_file","text":"

read_csv_file

"},{"location":"code/R/functions/read_csv_file/#description","title":"Description","text":"

Read a CSV data file

"},{"location":"code/R/functions/read_csv_file/#usage","title":"Usage","text":"
read_csv_file(fpath)\n
"},{"location":"code/R/functions/read_csv_file/#arguments","title":"Arguments","text":"Argument Description fpath path of the csv file"},{"location":"code/R/functions/read_definitions/","title":"Read definitions","text":""},{"location":"code/R/functions/read_definitions/#read_definitions","title":"read_definitions","text":"

read_definitions

"},{"location":"code/R/functions/read_definitions/#description","title":"Description","text":"

Reads YAML files from definitions directory, extracts tags, inserts tags into definitions, replaces tokens in definitions, and returns the updated definitions.

"},{"location":"code/R/functions/read_definitions/#usage","title":"Usage","text":"
read_definitions(definitions_dir, flows, techs)\n
"},{"location":"code/R/functions/read_definitions/#arguments","title":"Arguments","text":"Argument Description definitions_dir Character. Path leading to the definitions. flows List. Dictionary containing the different flow types. Each key represents a flow type, the corresponding value is a dictionary containing key value pairs of attributes like density, energy content and their values. techs List. Dictionary containing information about different technologies. Each key in the dictionary represents a unique technology ID, and the corresponding value is a dictionary containing various specifications for that technology, like 'description', 'class', 'primary output' etc."},{"location":"code/R/functions/read_definitions/#return-value","title":"Return Value","text":"

List. Dictionary containing the definitions after processing and replacing tags and tokens.
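
"},{"location":"code/R/functions/read_definitions/#examples","title":"Examples","text":"
A minimal hypothetical call; the directory path and the flows and techs dictionaries are placeholders:

## Example usage (hypothetical):\ndefinitions <- read_definitions(\"path/to/definitions\", flows, techs)\n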

"},{"location":"code/R/functions/read_masks/","title":"Read masks","text":""},{"location":"code/R/functions/read_masks/#read_masks","title":"read_masks","text":"

read_masks

"},{"location":"code/R/functions/read_masks/#description","title":"Description","text":"

Reads YAML files containing mask specifications from multiple databases and returns a list of Mask objects.

"},{"location":"code/R/functions/read_masks/#usage","title":"Usage","text":"
read_masks(variable)\n
"},{"location":"code/R/functions/read_masks/#arguments","title":"Arguments","text":"Argument Description variable Character. Variable to be read."},{"location":"code/R/functions/read_masks/#return-value","title":"Return Value","text":"

List. List with masks for the variable.
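
"},{"location":"code/R/functions/read_masks/#examples","title":"Examples","text":"
A minimal hypothetical call for a technology variable:

## Example usage (hypothetical):\nmasks <- read_masks(\"Tech|Electrolysis\")\n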

"},{"location":"code/R/functions/read_yml_file/","title":"Read yml file","text":""},{"location":"code/R/functions/read_yml_file/#read_yml_file","title":"read_yml_file","text":"

read_yml_file

"},{"location":"code/R/functions/read_yml_file/#description","title":"Description","text":"

Read a YAML config file

"},{"location":"code/R/functions/read_yml_file/#usage","title":"Usage","text":"
read_yml_file(fpath)\n
"},{"location":"code/R/functions/read_yml_file/#arguments","title":"Arguments","text":"Argument Description fpath path of the YAML file"},{"location":"code/R/functions/replace_tags/","title":"Replace tags","text":""},{"location":"code/R/functions/replace_tags/#replace_tags","title":"replace_tags","text":"

replace_tags

"},{"location":"code/R/functions/replace_tags/#description","title":"Description","text":"

Replaces specified tags in dictionary keys and values with corresponding items from another dictionary.

"},{"location":"code/R/functions/replace_tags/#usage","title":"Usage","text":"
replace_tags(definitions, tag, items)\n
"},{"location":"code/R/functions/replace_tags/#arguments","title":"Arguments","text":"Argument Description definitions List. Dictionary containing the definitions, where the tags should be replaced by the items. tag Character. String to identify where replacements should be made in the definitions. Specifies the placeholder that needs to be replaced with actual values from the items dictionary. items List. Dictionary containing the items from which to replace the definitions."},{"location":"code/R/functions/replace_tags/#return-value","title":"Return Value","text":"

List. Dictionary containing the definitions with replacements based on the provided tag and items.
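
"},{"location":"code/R/functions/replace_tags/#examples","title":"Examples","text":"
A minimal hypothetical call; the tag name flow_id and the flows dictionary are assumptions for illustration:

## Example usage (hypothetical):\ndefinitions <- replace_tags(definitions, \"flow_id\", flows)\n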

"},{"location":"code/R/functions/unit_convert/","title":"Unit convert","text":""},{"location":"code/R/functions/unit_convert/#unit_convert","title":"unit_convert","text":"

unit_convert

"},{"location":"code/R/functions/unit_convert/#description","title":"Description","text":"

Converts units with optional flow-context handling based on specified variants and flow ID. The function checks that the input units are not NaN, then handles the different cases depending on the presence of a flow context and unit variants.

"},{"location":"code/R/functions/unit_convert/#usage","title":"Usage","text":"
unit_convert(unit_from, unit_to, flow_id = NULL)\n
"},{"location":"code/R/functions/unit_convert/#arguments","title":"Arguments","text":"Argument Description unit_from Character or numeric. Unit to convert from. unit_to Character or numeric. Unit to convert to. flow_id Character or NULL. Identifier for the specific flow or process."},{"location":"code/R/functions/unit_convert/#return-value","title":"Return Value","text":"

Numeric. Conversion factor between unit_from and unit_to.

"},{"location":"code/R/functions/unit_convert/#examples","title":"Examples","text":"
## Example usage:\nunit_convert(\"m\", \"km\", flow_id = NULL)\n
"},{"location":"code/R/functions/unit_token_func/","title":"Unit token func","text":""},{"location":"code/R/functions/unit_token_func/#unit_token_func","title":"unit_token_func","text":"

unit_token_func

"},{"location":"code/R/functions/unit_token_func/#description","title":"Description","text":"

Takes a unit component type and a dictionary of flows, and returns a function that extracts the default unit based on the specified component type from the flow dictionary.

"},{"location":"code/R/functions/unit_token_func/#usage","title":"Usage","text":"
unit_token_func(unit_component, flows)\n
"},{"location":"code/R/functions/unit_token_func/#arguments","title":"Arguments","text":"Argument Description unit_component Character. Specifies the type of unit token to be returned. Possible values are 'full', 'raw', 'variant'. flows List. Dictionary containing the flows."},{"location":"code/R/functions/unit_token_func/#return-value","title":"Return Value","text":"

Function. A function that takes a dictionary def_specs as input and returns different values based on the unit_component parameter.
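
"},{"location":"code/R/functions/unit_token_func/#examples","title":"Examples","text":"
A minimal hypothetical call returning a function that extracts the full default unit from flow specifications:

## Example usage (hypothetical):\nget_unit <- unit_token_func(\"full\", flows)\n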

"},{"location":"code/R/modules/columns/","title":"columns","text":""},{"location":"code/R/modules/columns/#is_float","title":"is_float","text":"

is_float

"},{"location":"code/R/modules/columns/#description","title":"Description","text":"

Checks if a given string can be converted to a floating-point number.

"},{"location":"code/R/modules/columns/#usage","title":"Usage","text":"
is_float(string)\n
"},{"location":"code/R/modules/columns/#arguments","title":"Arguments","text":"Argument Description string Character. String to check."},{"location":"code/R/modules/columns/#return-value","title":"Return Value","text":"

Logical. TRUE if conversion was successful, FALSE if not.

"},{"location":"code/R/modules/columns/#examples","title":"Examples","text":"
## Example usage:\nis_float(\"3.14\")\n
"},{"location":"code/R/modules/columns/#abstractcolumndefinition","title":"AbstractColumnDefinition","text":""},{"location":"code/R/modules/columns/#description_1","title":"Description","text":"

Abstract class to store columns

"},{"location":"code/R/modules/columns/#methods","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new","title":"Method new()","text":"

Creates a new instance of the AbstractColumnDefinition class

Usage

AbstractColumnDefinition$new(col_type, name, description, dtype, required)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

AbstractColumnDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

AbstractColumnDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#variabledefinition","title":"VariableDefinition","text":""},{"location":"code/R/modules/columns/#description_2","title":"Description","text":"

Class to store variable columns

"},{"location":"code/R/modules/columns/#methods_1","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_1","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_1","title":"Method new()","text":"

Creates a new instance of the VariableDefinition class

Usage

VariableDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed_1","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

VariableDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_1","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

VariableDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#unitdefinition","title":"UnitDefinition","text":""},{"location":"code/R/modules/columns/#description_3","title":"Description","text":"

Class to store Unit columns

"},{"location":"code/R/modules/columns/#methods_2","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_2","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_2","title":"Method new()","text":"

Creates a new instance of the UnitDefinition class

Usage

UnitDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed_2","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

UnitDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_2","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

UnitDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#valuedefinition","title":"ValueDefinition","text":""},{"location":"code/R/modules/columns/#description_4","title":"Description","text":"

Class to store Value columns

"},{"location":"code/R/modules/columns/#methods_3","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_3","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_3","title":"Method new()","text":"

Creates a new instance of the ValueDefinition class

Usage

ValueDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed_3","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

ValueDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_3","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

ValueDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#commentdefinition","title":"CommentDefinition","text":""},{"location":"code/R/modules/columns/#description_5","title":"Description","text":"

Class to store comment columns

"},{"location":"code/R/modules/columns/#methods_4","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_4","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_4","title":"Method new()","text":"

Creates a new instance of the CommentDefinition Class

Usage

CommentDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed_4","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

CommentDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_4","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

CommentDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#abstractfielddefinition","title":"AbstractFieldDefinition","text":""},{"location":"code/R/modules/columns/#description_6","title":"Description","text":"

Abstract class to store fields

"},{"location":"code/R/modules/columns/#examples_1","title":"Examples","text":"
### ------------------------------------------------\n### Method `AbstractFieldDefinition$select_and_expand`\n### ------------------------------------------------\n\n## Example usage:\n## select_and_expand(df, \"col_id\", field_vals = NULL)\n
"},{"location":"code/R/modules/columns/#methods_5","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_5","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_5","title":"Method new()","text":"

Creates a new instance of the AbstractFieldDefinition Class

Usage

AbstractFieldDefinition$new(\n  field_type,\n  name,\n  description,\n  dtype,\n  coded,\n  codes = NULL\n)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed_5","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

AbstractFieldDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-select_and_expand","title":"Method select_and_expand()","text":"

Select and expand fields which are valid for multiple periods or other field vals

Usage

AbstractFieldDefinition$select_and_expand(df, col_id, field_vals = NA, ...)\n

Arguments:

Example:

## Example usage:\n## select_and_expand(df, \"col_id\", field_vals = NULL)\n

Returns:

DataFrame where fields are selected and expanded

"},{"location":"code/R/modules/columns/#method-clone_5","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

AbstractFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#regionfielddefinition","title":"RegionFieldDefinition","text":""},{"location":"code/R/modules/columns/#description_7","title":"Description","text":"

Class to store Region fields

"},{"location":"code/R/modules/columns/#methods_6","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_6","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_6","title":"Method new()","text":"

Creates a new instance of the RegionFieldDefinition class

Usage

RegionFieldDefinition$new(name, description)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_6","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

RegionFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#periodfielddefinition","title":"PeriodFieldDefinition","text":""},{"location":"code/R/modules/columns/#description_8","title":"Description","text":"

Class to store Period fields

"},{"location":"code/R/modules/columns/#methods_7","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_7","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_7","title":"Method new()","text":"

Creates a new instance of the PeriodFieldDefinition class

Usage

PeriodFieldDefinition$new(name, description)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed_6","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

PeriodFieldDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_7","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

PeriodFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#sourcefielddefinition","title":"SourceFieldDefinition","text":""},{"location":"code/R/modules/columns/#description_9","title":"Description","text":"

Class to store Source fields

"},{"location":"code/R/modules/columns/#methods_8","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_8","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_8","title":"Method new()","text":"

Creates a new instance of the SourceFieldDefinition class

Usage

SourceFieldDefinition$new(name, description)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_8","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

SourceFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#customfielddefinition","title":"CustomFieldDefinition","text":""},{"location":"code/R/modules/columns/#description_10","title":"Description","text":"

Class to store Custom fields

"},{"location":"code/R/modules/columns/#methods_9","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_9","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_9","title":"Method new()","text":"

Creates a new instance of the CustomFieldDefinition class

Usage

CustomFieldDefinition$new(field_specs)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_9","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

CustomFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/definitions/","title":"definitions","text":""},{"location":"code/R/modules/definitions/#read_definitions","title":"read_definitions","text":"

read_definitions

"},{"location":"code/R/modules/definitions/#description","title":"Description","text":"

Reads YAML files from the definitions directory, extracts tags, inserts tags into definitions, replaces tokens in definitions, and returns the updated definitions.

"},{"location":"code/R/modules/definitions/#usage","title":"Usage","text":"
read_definitions(definitions_dir, flows, techs)\n
"},{"location":"code/R/modules/definitions/#arguments","title":"Arguments","text":"Argument Description definitions_dir Character. Path leading to the definitions. flows List. Dictionary containing the different flow types. Each key represents a flow type, the corresponding value is a dictionary containing key value pairs of attributes like density, energy content and their values. techs List. Dictionary containing information about different technologies. Each key in the dictionary represents a unique technology ID, and the corresponding value is a dictionary containing various specifications for that technology, like 'description', 'class', 'primary output' etc."},{"location":"code/R/modules/definitions/#return-value","title":"Return Value","text":"

List. Dictionary containing the definitions after processing and replacing tags and tokens.

"},{"location":"code/R/modules/definitions/#replace_tags","title":"replace_tags","text":"

replace_tags

"},{"location":"code/R/modules/definitions/#description_1","title":"Description","text":"

Replaces specified tags in dictionary keys and values with corresponding items from another dictionary.

"},{"location":"code/R/modules/definitions/#usage_1","title":"Usage","text":"
replace_tags(definitions, tag, items)\n
"},{"location":"code/R/modules/definitions/#arguments_1","title":"Arguments","text":"Argument Description definitions List. Dictionary containing the definitions, where the tags should be replaced by the items. tag Character. String to identify where replacements should be made in the definitions. Specifies the placeholder that needs to be replaced with actual values from the items dictionary. items List. Dictionary containing the items from which to replace the definitions."},{"location":"code/R/modules/definitions/#return-value_1","title":"Return Value","text":"

List. Dictionary containing the definitions with replacements based on the provided tag and items.

"},{"location":"code/R/modules/definitions/#unit_token_func","title":"unit_token_func","text":"

unit_token_func

"},{"location":"code/R/modules/definitions/#description_2","title":"Description","text":"

Takes a unit component type and a dictionary of flows, and returns a function that extracts the default unit based on the specified component type from the flow dictionary.

"},{"location":"code/R/modules/definitions/#usage_2","title":"Usage","text":"
unit_token_func(unit_component, flows)\n
"},{"location":"code/R/modules/definitions/#arguments_2","title":"Arguments","text":"Argument Description unit_component Character. Specifies the type of unit token to be returned. Possible values are 'full', 'raw', 'variant'. flows List. Dictionary containing the flows."},{"location":"code/R/modules/definitions/#return-value_2","title":"Return Value","text":"

Function. A function that takes a dictionary def_specs as input and returns different values based on the unit_component parameter.

"},{"location":"code/R/modules/masking/","title":"masking","text":""},{"location":"code/R/modules/masking/#apply_cond","title":"apply_cond","text":"

apply_cond

"},{"location":"code/R/modules/masking/#description","title":"Description","text":"

Takes a DataFrame and a condition, which can be a string, a dictionary, or a callable function, and applies the condition to the DataFrame using eval or apply accordingly.

"},{"location":"code/R/modules/masking/#usage","title":"Usage","text":"
apply_cond(df, cond)\n
"},{"location":"code/R/modules/masking/#arguments","title":"Arguments","text":"Argument Description df DataFrame. A pandas DataFrame containing the data on which the condition will be applied. cond MaskCondition. The condition to be applied on the dataframe. Can be either a string, a dictionary, or a callable function."},{"location":"code/R/modules/masking/#return-value","title":"Return Value","text":"

DataFrame. Dataframe evaluated at the mask condition.

"},{"location":"code/R/modules/masking/#mask","title":"Mask","text":""},{"location":"code/R/modules/masking/#description_1","title":"Description","text":"

Class to define masks with conditions and weights to apply to DataFiles

"},{"location":"code/R/modules/masking/#methods","title":"Methods","text":""},{"location":"code/R/modules/masking/#public-methods","title":"Public Methods","text":""},{"location":"code/R/modules/masking/#method-new","title":"Method new()","text":"

Create a new mask object

Usage

Mask$new(where = NULL, use = NULL, weight = NULL, other = NaN, comment = \"\")\n

Arguments:

"},{"location":"code/R/modules/masking/#method-matches","title":"Method matches()","text":"

Check if a mask matches a dataframe by verifying if all 'where' conditions match across all rows.

Usage

Mask$matches(df)\n

Arguments:

Returns:

Logical. If the mask matches the dataframe.

"},{"location":"code/R/modules/masking/#method-get_weights","title":"Method get_weights()","text":"

Apply weights to the dataframe

Usage

Mask$get_weights(df)\n

Arguments:

Returns:

Dataframe. Dataframe with applied weights

"},{"location":"code/R/modules/masking/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

Mask$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/masking/#read_masks","title":"read_masks","text":"

read_masks

"},{"location":"code/R/modules/masking/#description_2","title":"Description","text":"

Reads YAML files containing mask specifications from multiple databases and returns a list of Mask objects.

"},{"location":"code/R/modules/masking/#usage_1","title":"Usage","text":"
read_masks(variable)\n
"},{"location":"code/R/modules/masking/#arguments_1","title":"Arguments","text":"Argument Description variable Character. Variable to be read."},{"location":"code/R/modules/masking/#return-value_1","title":"Return Value","text":"

List. List with masks for the variable.

"},{"location":"code/R/modules/noslag/","title":"noslag","text":""},{"location":"code/R/modules/noslag/#collect_files","title":"collect_files","text":"

collect_files

"},{"location":"code/R/modules/noslag/#description","title":"Description","text":"

Takes a parent variable and optional list of databases to include, checks for their existence, and collects files and directories based on the parent variable.

"},{"location":"code/R/modules/noslag/#usage","title":"Usage","text":"
collect_files(parent_variable, include_databases = NULL)\n
"},{"location":"code/R/modules/noslag/#arguments","title":"Arguments","text":"Argument Description parent_variable Character. Variable to collect files on. include_databases Optional listCharacter. List of Database IDs to collect files from."},{"location":"code/R/modules/noslag/#return-value","title":"Return Value","text":"

List. List of (parent variable, database ID) pairs, one for each file found in the specified directories.

"},{"location":"code/R/modules/noslag/#examples","title":"Examples","text":"
## Example usage:\ncollect_files(\"variable_name\", c(\"db1\", \"db2\"))\n
"},{"location":"code/R/modules/noslag/#normalise_units","title":"normalise_units","text":"

normalise_units

"},{"location":"code/R/modules/noslag/#description_1","title":"Description","text":"

Takes a DataFrame with reported or reference data, along with dictionaries mapping variable units and flow IDs, and normalizes the units of the variables in the DataFrame based on the provided mappings.

"},{"location":"code/R/modules/noslag/#usage_1","title":"Usage","text":"
normalise_units(df, level, var_units, var_flow_ids)\n
"},{"location":"code/R/modules/noslag/#arguments_1","title":"Arguments","text":"Argument Description df DataFrame. Dataframe to be normalized. level Character. Specifies whether the data should be normalized on the reported or reference values. Possible values are 'reported' or 'reference'. var_units List. Dictionary that maps a combination of parent variable and variable to its corresponding unit. The keys in the dictionary are in the format \"{parent_variable} var_flow_ids List. Dictionary that maps a combination of parent variable and variable to a specific flow ID. This flow ID is used for unit conversion in the normalize_units function."},{"location":"code/R/modules/noslag/#return-value_1","title":"Return Value","text":"

DataFrame. Normalized dataframe.

"},{"location":"code/R/modules/noslag/#examples_1","title":"Examples","text":"
## Example usage:\nnormalise_units(df, \"reported\", var_units, var_flow_ids)\n
"},{"location":"code/R/modules/noslag/#normalise_values","title":"normalise_values","text":"

normalise_values

"},{"location":"code/R/modules/noslag/#description_2","title":"Description","text":"

Takes a DataFrame as input, normalizes the 'value' and 'uncertainty' columns by the reference value, and updates the 'reference_value' column accordingly.

"},{"location":"code/R/modules/noslag/#usage_2","title":"Usage","text":"
normalise_values(df)\n
"},{"location":"code/R/modules/noslag/#arguments_2","title":"Arguments","text":"Argument Description df DataFrame. Dataframe to be normalized."},{"location":"code/R/modules/noslag/#return-value_2","title":"Return Value","text":"

DataFrame. Returns a modified DataFrame where the 'value' column has been divided by the 'reference_value' column (or 1.0 if 'reference_value' is null), the 'uncertainty' column has been divided by the 'reference_value' column, and the 'reference_value' column has been replaced with 1.0 if it was not null.

"},{"location":"code/R/modules/noslag/#examples_2","title":"Examples","text":"
## Example usage:\nnormalised_df <- normalise_values(df)\n
"},{"location":"code/R/modules/noslag/#dataset","title":"DataSet","text":""},{"location":"code/R/modules/noslag/#description_3","title":"Description","text":"

This class provides methods to store, normalize, select, and aggregate DataSets.

"},{"location":"code/R/modules/noslag/#examples_3","title":"Examples","text":"
### ------------------------------------------------\n### Method `DataSet$normalise`\n### ------------------------------------------------\n\n## Example usage:\ndataset$normalise(override = list(\"variable1\" = \"value1\"), inplace = FALSE)\n\n\n### ------------------------------------------------\n### Method `DataSet$select`\n### ------------------------------------------------\n\n## Example usage:\ndataset$select(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, field1 = \"value1\")\n\n\n### ------------------------------------------------\n### Method `DataSet$aggregate`\n### ------------------------------------------------\n\n## Example usage:\ndataset$aggregate(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, agg = \"field\", masks = list(mask1, mask2), masks_database = TRUE)\n
"},{"location":"code/R/modules/noslag/#methods","title":"Methods","text":""},{"location":"code/R/modules/noslag/#public-methods","title":"Public Methods","text":""},{"location":"code/R/modules/noslag/#method-new","title":"Method new()","text":"

Create new instance of the DataSet class

Usage

DataSet$new(\n  parent_variable,\n  include_databases = NULL,\n  file_paths = NULL,\n  check_inconsistencies = FALSE,\n  data = NULL\n)\n

Arguments:

"},{"location":"code/R/modules/noslag/#method-normalise","title":"Method normalise()","text":"

Normalize data: default reference units, reference value equal to 1.0, default reported units

Usage

DataSet$normalise(override = NULL, inplace = FALSE)\n

Arguments:

Example:

## Example usage:\ndataset$normalise(override = list(\"variable1\" = \"value1\"), inplace = FALSE)\n

Returns:

DataFrame. If inplace is FALSE, returns normalized dataframe.

"},{"location":"code/R/modules/noslag/#method-select","title":"Method select()","text":"

Select desired data from the dataframe

Usage

DataSet$select(\n  override = NULL,\n  drop_singular_fields = TRUE,\n  extrapolate_period = TRUE,\n  ...\n)\n

Arguments:

Example:

## Example usage:\ndataset$select(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, field1 = \"value1\")\n

Returns:

DataFrame. DataFrame with selected values.

"},{"location":"code/R/modules/noslag/#method-aggregate","title":"Method aggregate()","text":"

Aggregates data based on specified parameters, applies masks, and cleans up the resulting DataFrame.

Usage

DataSet$aggregate(\n  override = NULL,\n  drop_singular_fields = TRUE,\n  extrapolate_period = TRUE,\n  agg = NULL,\n  masks = NULL,\n  masks_database = TRUE,\n  ...\n)\n

Arguments:

Example:

## Example usage:\ndataset$aggregate(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, agg = \"field\", masks = list(mask1, mask2), masks_database = TRUE)\n

Returns:

DataFrame. The aggregate method returns a DataFrame that has been cleaned up and aggregated based on the specified parameters and input data. It aggregates over component and case fields, applies weights based on masks, drops rows with NaN weights, aggregates with weights, inserts reference variables, sorts columns and rows, rounds values, and inserts units before returning the final DataFrame.

"},{"location":"code/R/modules/noslag/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

DataSet$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/read/","title":"read","text":""},{"location":"code/R/modules/read/#read_csv_file","title":"read_csv_file","text":"

read_csv_file

"},{"location":"code/R/modules/read/#description","title":"Description","text":"

Read a CSV data file

"},{"location":"code/R/modules/read/#usage","title":"Usage","text":"
read_csv_file(fpath)\n
"},{"location":"code/R/modules/read/#arguments","title":"Arguments","text":"Argument Description fpath path of the csv file"},{"location":"code/R/modules/tedf/","title":"tedf","text":""},{"location":"code/R/modules/tedf/#tedfinconsistencyexception","title":"TEDFInconsistencyException","text":""},{"location":"code/R/modules/tedf/#description","title":"Description","text":"

This is a class to store inconsistencies in the TEDFs

"},{"location":"code/R/modules/tedf/#methods","title":"Methods","text":""},{"location":"code/R/modules/tedf/#public-methods","title":"Public Methods","text":""},{"location":"code/R/modules/tedf/#method-new","title":"Method new()","text":"

Create instance of TEDFInconsistencyException class

Usage

TEDFInconsistencyException$new(\n  message = \"Inconsistency detected\",\n  row_id = NULL,\n  col_id = NULL,\n  file_path = NULL\n)\n

Arguments:

"},{"location":"code/R/modules/tedf/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

TEDFInconsistencyException$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/tedf/#tebase","title":"TEBase","text":""},{"location":"code/R/modules/tedf/#description_1","title":"Description","text":"

This is the base class for technoeconomic data.

"},{"location":"code/R/modules/tedf/#examples","title":"Examples","text":"
## Example usage:\nbase_technoeconomic_data <- TEBase$new(\"variable_name\")\n
"},{"location":"code/R/modules/tedf/#methods_1","title":"Methods","text":""},{"location":"code/R/modules/tedf/#public-methods_1","title":"Public Methods","text":""},{"location":"code/R/modules/tedf/#method-new_1","title":"Method new()","text":"

Create new instance of TEBase class. Set parent variable and technology specifications (var_specs) from input

Usage

TEBase$new(parent_variable)\n

Arguments:

"},{"location":"code/R/modules/tedf/#method-clone_1","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

TEBase$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/tedf/#tedf","title":"TEDF","text":""},{"location":"code/R/modules/tedf/#description_2","title":"Description","text":"

This class is used to store Technoeconomic DataFiles.

"},{"location":"code/R/modules/tedf/#examples_1","title":"Examples","text":"
## Example usage:\ntedf <- TEDF$new(\"variable_name\")\ntedf$load()\ntedf$read()\ntedf$write()\ntedf$check()\ntedf$check_row(row_id = 1)\n\n\n### ------------------------------------------------\n### Method `TEDF$load`\n### ------------------------------------------------\n\n## Example usage:\ntedf$load()\n\n\n### ------------------------------------------------\n### Method `TEDF$read`\n### ------------------------------------------------\n\n## Example usage:\ntedf$read()\n\n\n### ------------------------------------------------\n### Method `TEDF$write`\n### ------------------------------------------------\n\n## Example usage:\ntedf$write()\n\n\n### ------------------------------------------------\n### Method `TEDF$check`\n### ------------------------------------------------\n\n## Example usage:\ntedf$check(raise_exception = TRUE)\n
"},{"location":"code/R/modules/tedf/#methods_2","title":"Methods","text":""},{"location":"code/R/modules/tedf/#public-methods_2","title":"Public Methods","text":""},{"location":"code/R/modules/tedf/#method-new_2","title":"Method new()","text":"

Create new instance of TEDF class. Initialise parent class and object fields

Usage

TEDF$new(\n  parent_variable,\n  database_id = \"public\",\n  file_path = NULL,\n  data = NULL\n)\n

Arguments:

"},{"location":"code/R/modules/tedf/#method-load","title":"Method load()","text":"

Load TEDataFile (only if it has not been read yet)

Usage

TEDF$load()\n

Example:

## Example usage:\ntedf$load()\n

Returns:

TEDF. Returns the TEDF object it is called on.

"},{"location":"code/R/modules/tedf/#method-read","title":"Method read()","text":"

This method reads TEDF from a CSV file.

Usage

TEDF$read()\n

Example:

## Example usage:\ntedf$read()\n

"},{"location":"code/R/modules/tedf/#method-write","title":"Method write()","text":"

Write the TEDF to a CSV file.

Usage

TEDF$write()\n

Example:

## Example usage:\ntedf$write()\n

"},{"location":"code/R/modules/tedf/#method-check","title":"Method check()","text":"

Check that the TEDF is consistent and add any inconsistencies to an internal field

Usage

TEDF$check(raise_exception = TRUE)\n

Arguments:

Example:

## Example usage:\ntedf$check(raise_exception = TRUE)\n

"},{"location":"code/R/modules/tedf/#method-check_row","title":"Method check_row()","text":"

Checks if a row of the dataframe has issues - NOT IMPLEMENTED YET

Usage

TEDF$check_row(row_id, raise_exception = TRUE)\n

Arguments:

"},{"location":"code/R/modules/tedf/#method-clone_2","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

TEDF$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/units/","title":"units","text":""},{"location":"code/R/modules/units/#unit_convert","title":"unit_convert","text":"

unit_convert

"},{"location":"code/R/modules/units/#description","title":"Description","text":"

Converts units with optional flow context handling based on specified variants and flow ID. The function checks if the input units are not NaN, then it proceeds to handle different cases based on the presence of a flow context and unit variants.

"},{"location":"code/R/modules/units/#usage","title":"Usage","text":"
unit_convert(unit_from, unit_to, flow_id = NULL)\n
"},{"location":"code/R/modules/units/#arguments","title":"Arguments","text":"Argument Description unit_from Character or numeric. Unit to convert from. unit_to Character or numeric. Unit to convert to. flow_id Character or NULL. Identifier for the specific flow or process."},{"location":"code/R/modules/units/#return-value","title":"Return Value","text":"

Numeric. Conversion factor between unit_from and unit_to.

"},{"location":"code/R/modules/units/#examples","title":"Examples","text":"
## Example usage:\nunit_convert(\"m\", \"km\", flow_id = NULL)\n
"},{"location":"code/python/columns/","title":"columns","text":""},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition","title":"AbstractColumnDefinition","text":"

Abstract class to store columns

Parameters:

Name Type Description Default col_type str

Type of the column

required name str

Name of the column

required description str

Description of the column

required dtype str

Data type of the column

required required bool

Bool that specifies if the column is required

required

Methods:

Name Description is_allowed

Check if cell is allowed

Source code in python/posted/columns.py
class AbstractColumnDefinition:\n    '''\n    Abstract class to store columns\n\n    Parameters\n    ----------\n    col_type: str\n        Type of the column\n    name: str\n        Name of the column\n    description: str\n        Description of the column\n    dtype:\n        Data type of the column\n    required: bool\n        Bool that specifies if the column is required\n\n    Methods\n    -------\n        is_allowed\n            Check if cell is allowed\n    '''\n    def __init__(self, col_type: str, name: str, description: str, dtype: str, required: bool):\n        if col_type not in ['field', 'variable', 'unit', 'value', 'comment']:\n            raise Exception(f\"Columns must be of type field, variable, unit, value, or comment but found: {col_type}\")\n        if not isinstance(name, str):\n            raise Exception(f\"The 'name' must be a string but found type {type(name)}: {name}\")\n        if not isinstance(description, str):\n            raise Exception(f\"The 'name' must be a string but found type {type(description)}: {description}\")\n        if not (isinstance(dtype, str) and dtype in ['float', 'str', 'category']):\n            raise Exception(f\"The 'dtype' must be a valid data type but found: {dtype}\")\n        if not isinstance(required, bool):\n            raise Exception(f\"The 'required' argument must be a bool but found: {required}\")\n\n        self._col_type: str = col_type\n        self._name: str = name\n        self._description: str = description\n        self._dtype: str = dtype\n        self._required: bool = required\n\n    @property\n    def col_type(self):\n        '''Get col type'''\n        return self._col_type\n\n    @property\n    def name(self):\n        '''Get name of the column'''\n        return self._name\n\n    @property\n    def description(self):\n        '''Get description of the column'''\n        return self._description\n\n    @property\n    def dtype(self):\n        '''Get data type of the column'''\n        return self._dtype\n\n    @property\n    def required(self):\n        '''Return if column is required'''\n        return self._required\n\n    @property\n    def default(self):\n        '''Get default value of the column'''\n        return np.nan\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        '''Check if Cell is allowed\n\n        Parameters\n        ----------\n            cell: str | float | int\n                Cell to check\n        Returns\n        -------\n            bool\n                If the cell is allowed\n        '''\n        return True\n
"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.col_type","title":"col_type property","text":"

Get col type

"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.default","title":"default property","text":"

Get default value of the column

"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.description","title":"description property","text":"

Get description of the column

"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.dtype","title":"dtype property","text":"

Get data type of the column

"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.name","title":"name property","text":"

Get name of the column

"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.required","title":"required property","text":"

Return if column is required

"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.is_allowed","title":"is_allowed(cell)","text":"

Check if cell is allowed

Returns:

Type Description bool

If the cell is allowed

Source code in python/posted/columns.py
def is_allowed(self, cell: str | float | int) -> bool:\n    '''Check if Cell is allowed\n\n    Parameters\n    ----------\n        cell: str | float | int\n            Cell to check\n    Returns\n    -------\n        bool\n            If the cell is allowed\n    '''\n    return True\n
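A minimal usage sketch (the column name and description are made up); the base-class check accepts every cell:

col = AbstractColumnDefinition(col_type='comment', name='Comment', description='Free-text comment', dtype='str', required=False)\nassert col.is_allowed('anything')  # the abstract base imposes no restrictions\n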
"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition","title":"AbstractFieldDefinition","text":"

Bases: AbstractColumnDefinition

Abstract class to store fields

Parameters:

Name Type Description Default field_type str

Type of the field

required name str

Name of the field

required description str

Description of the field

required dtype str

Data type of the field

required coded bool

If the field is coded

required codes Optional[dict[str, str]]

Codes for the field

None

Methods:

Name Description is_allowed

Check if cell is allowed

select_and_expand

Select and expand fields

Source code in python/posted/columns.py
class AbstractFieldDefinition(AbstractColumnDefinition):\n    '''\n    Abstract class to store fields\n\n    Parameters\n    ----------\n    field_type: str\n        Type of the field\n    name: str\n        Name of the field\n    description: str\n        Description of the field\n    dtype: str\n        Data type of the field\n    coded: bool\n        If the field is coded\n    coded: Optional[dict[str,str]], optional\n        Codes for the field\n\n\n    Methods\n    -------\n    is_allowed\n        Check if cell is allowed\n    select_and_expand\n        Select and expand fields\n\n    '''\n    def __init__(self, field_type: str, name: str, description: str, dtype: str, coded: bool,\n                 codes: Optional[dict[str, str]] = None):\n        if field_type not in ['case', 'component']:\n            raise Exception('Fields must be of type case or component.')\n        super().__init__(\n            col_type='field',\n            name=name,\n            description=description,\n            dtype=dtype,\n            required=True,\n        )\n\n        self._field_type: str = field_type\n        self._coded: bool = coded\n        self._codes: None | dict[str, str] = codes\n\n    @property\n    def field_type(self) -> str:\n        '''Get field type'''\n        return self._field_type\n\n    @property\n    def coded(self) -> bool:\n        '''Return if field is coded'''\n        return self._coded\n\n    @property\n    def codes(self) -> None | dict[str, str]:\n        '''Get field codes'''\n        return self._codes\n\n    @property\n    def default(self):\n        '''Get symbol for default value'''\n        return '*' if self._field_type == 'case' else '#'\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        ''' Chek if cell is allowed'''\n        if pd.isnull(cell):\n            return False\n        if self._coded:\n            return cell in self._codes or cell == '*' or (cell == '#' and self.col_type == 'component')\n        else:\n            return True\n\n    def _expand(self, df: pd.DataFrame, col_id: str, field_vals: list, **kwargs) -> pd.DataFrame:\n        # Expand fields\n        return pd.concat([\n            df[df[col_id].isin(field_vals)],\n            df[df[col_id] == '*']\n            .drop(columns=[col_id])\n            .merge(pd.DataFrame.from_dict({col_id: field_vals}), how='cross'),\n        ])\n\n    def _select(self, df: pd.DataFrame, col_id: str, field_vals: list, **kwargs):\n        # Select fields\n        return df.query(f\"{col_id}.isin({field_vals})\").reset_index(drop=True)\n\n\n    def select_and_expand(self, df: pd.DataFrame, col_id: str, field_vals: None | list, **kwargs) -> pd.DataFrame:\n        '''\n        Select and expand fields which are valid for multiple periods or other field vals\n\n        Parameters\n        ----------\n        df: pd.DataFrame\n            DataFrame where fields should be selected and expanded\n        col_id: str\n            col_id of the column to be selected and expanded\n        field_vals: None | list\n            field_vals to select and expand\n        **kwargs\n            Additional keyword arguments\n\n        Returns\n        -------\n        pd.DataFrame\n            Dataframe where fields are selected and expanded\n\n        '''\n        # get list of selected field values\n        if field_vals is None:\n            if col_id == 'period':\n                field_vals = default_periods\n            elif self._coded:\n                field_vals = list(self._codes.keys())\n            
else:\n                field_vals = [v for v in df[col_id].unique() if v != '*' and not pd.isnull(v)]\n        else:\n            # ensure that field values is a list of elements (not tuple, not single value)\n            if isinstance(field_vals, tuple):\n                field_vals = list(field_vals)\n            elif not isinstance(field_vals, list):\n                field_vals = [field_vals]\n            # check that every element is of allowed type\n            for val in field_vals:\n                if not self.is_allowed(val):\n                    raise Exception(f\"Invalid type selected for field '{col_id}': {val}\")\n            if '*' in field_vals:\n                raise Exception(f\"Selected values for field '{col_id}' must not contain the asterisk.\"\n                                f\"Omit the '{col_id}' argument to select all entries.\")\n\n\n        df = self._expand(df, col_id, field_vals, **kwargs)\n        df = self._select(df, col_id, field_vals, **kwargs)\n\n        return df\n
"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition.coded","title":"coded: bool property","text":"

Return if field is coded

"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition.codes","title":"codes: None | dict[str, str] property","text":"

Get field codes

"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition.default","title":"default property","text":"

Get symbol for default value

"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition.field_type","title":"field_type: str property","text":"

Get field type

"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition.is_allowed","title":"is_allowed(cell)","text":"

Check if cell is allowed

Source code in python/posted/columns.py
def is_allowed(self, cell: str | float | int) -> bool:\n    '''Check if cell is allowed'''\n    if pd.isnull(cell):\n        return False\n    if self._coded:\n        return cell in self._codes or cell == '*' or (cell == '#' and self.col_type == 'component')\n    else:\n        return True\n
"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition.select_and_expand","title":"select_and_expand(df, col_id, field_vals, **kwargs)","text":"

Select and expand fields which are valid for multiple periods or other field vals

Parameters:

Name Type Description Default df DataFrame

DataFrame where fields should be selected and expanded

required col_id str

col_id of the column to be selected and expanded

required field_vals None | list

field_vals to select and expand

required **kwargs

Additional keyword arguments

{}

Returns:

Type Description DataFrame

Dataframe where fields are selected and expanded

Source code in python/posted/columns.py
def select_and_expand(self, df: pd.DataFrame, col_id: str, field_vals: None | list, **kwargs) -> pd.DataFrame:\n    '''\n    Select and expand fields which are valid for multiple periods or other field vals\n\n    Parameters\n    ----------\n    df: pd.DataFrame\n        DataFrame where fields should be selected and expanded\n    col_id: str\n        col_id of the column to be selected and expanded\n    field_vals: None | list\n        field_vals to select and expand\n    **kwargs\n        Additional keyword arguments\n\n    Returns\n    -------\n    pd.DataFrame\n        Dataframe where fields are selected and expanded\n\n    '''\n    # get list of selected field values\n    if field_vals is None:\n        if col_id == 'period':\n            field_vals = default_periods\n        elif self._coded:\n            field_vals = list(self._codes.keys())\n        else:\n            field_vals = [v for v in df[col_id].unique() if v != '*' and not pd.isnull(v)]\n    else:\n        # ensure that field values is a list of elements (not tuple, not single value)\n        if isinstance(field_vals, tuple):\n            field_vals = list(field_vals)\n        elif not isinstance(field_vals, list):\n            field_vals = [field_vals]\n        # check that every element is of allowed type\n        for val in field_vals:\n            if not self.is_allowed(val):\n                raise Exception(f\"Invalid type selected for field '{col_id}': {val}\")\n        if '*' in field_vals:\n            raise Exception(f\"Selected values for field '{col_id}' must not contain the asterisk.\"\n                            f\"Omit the '{col_id}' argument to select all entries.\")\n\n\n    df = self._expand(df, col_id, field_vals, **kwargs)\n    df = self._select(df, col_id, field_vals, **kwargs)\n\n    return df\n
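As a rough sketch of the expansion logic (the field and data are hypothetical): rows marked '*' are duplicated for every selected code before the selection is applied:

import pandas as pd\n\nfield = CustomFieldDefinition(type='case', name='Size', description='Plant size', coded=True, codes={'small': 'Small', 'large': 'Large'})\ndf = pd.DataFrame({'size': ['small', '*'], 'value': [1.0, 2.0]})\nfield.select_and_expand(df, 'size', field_vals=['small', 'large'])  # the '*' row now appears once per code\n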
"},{"location":"code/python/columns/#python.posted.columns.CommentDefinition","title":"CommentDefinition","text":"

Bases: AbstractColumnDefinition

Class to store comment columns

Parameters:

Name Type Description Default col_type

Type of the column

required name str

Name of the column

required description str

Description of the column

required required bool

Bool that specifies if the column is required

required

Methods:

Name Description is_allowed

Check if cell is allowed

Source code in python/posted/columns.py
class CommentDefinition(AbstractColumnDefinition):\n    '''\n    Class to store comment columns\n\n    Parameters\n    ----------\n    col_type: str\n        Type of the column\n    name: str\n        Name of the column\n    description: str\n        Description of the column\n    required: bool\n        Bool that specifies if the column is required\n\n    Methods\n    -------\n    is_allowed\n        Check if cell is allowed\n    '''\n    def __init__(self, name: str, description: str, required: bool):\n        super().__init__(\n            col_type='comment',\n            name=name,\n            description=description,\n            dtype='str',\n            required=required,\n        )\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        return True\n
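A brief sketch (hypothetical column): comment columns accept any content:

comment_col = CommentDefinition(name='comment', description='Free-text comment', required=False)\nassert comment_col.is_allowed('values converted from 2019 EUR')  # always True\n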
"},{"location":"code/python/columns/#python.posted.columns.CustomFieldDefinition","title":"CustomFieldDefinition","text":"

Bases: AbstractFieldDefinition

Class to store Custom fields

Parameters:

Name Type Description Default **field_specs

Specs of the custom fields

{} Source code in python/posted/columns.py
class CustomFieldDefinition(AbstractFieldDefinition):\n    '''\n    Class to store Custom fields\n\n    Parameters\n    ----------\n    **field_specs:\n        Specs of the custom fields\n    '''\n    def __init__(self, **field_specs):\n        '''Check if the field specs are of the required type and format,\n        initialize parent class'''\n        if not ('type' in field_specs and isinstance(field_specs['type'], str) and\n                field_specs['type'] in ['case', 'component']):\n            raise Exception(\"Field type must be provided and equal to 'case' or 'component'.\")\n        if not ('name' in field_specs and isinstance(field_specs['name'], str)):\n            raise Exception('Field name must be provided and of type string.')\n        if not ('description' in field_specs and isinstance(field_specs['description'], str)):\n            raise Exception('Field description must be provided and of type string.')\n        if not ('coded' in field_specs and isinstance(field_specs['coded'], bool)):\n            raise Exception('Field coded must be provided and of type bool.')\n        if field_specs['coded'] and not ('codes' in field_specs and isinstance(field_specs['codes'], dict)):\n            raise Exception('Field codes must be provided and contain a dict of possible codes.')\n\n        super().__init__(\n            field_type=field_specs['type'],\n            name=field_specs['name'],\n            description=field_specs['description'],\n            dtype='category',\n            coded=field_specs['coded'],\n            codes=field_specs['codes'] if 'codes' in field_specs else None,\n        )\n
"},{"location":"code/python/columns/#python.posted.columns.CustomFieldDefinition.__init__","title":"__init__(**field_specs)","text":"

Check if the field specs are of the required type and format, initialize parent class

Source code in python/posted/columns.py
def __init__(self, **field_specs):\n    '''Check if the field specs are of the required type and format,\n    initialize parent class'''\n    if not ('type' in field_specs and isinstance(field_specs['type'], str) and\n            field_specs['type'] in ['case', 'component']):\n        raise Exception(\"Field type must be provided and equal to 'case' or 'component'.\")\n    if not ('name' in field_specs and isinstance(field_specs['name'], str)):\n        raise Exception('Field name must be provided and of type string.')\n    if not ('description' in field_specs and isinstance(field_specs['description'], str)):\n        raise Exception('Field description must be provided and of type string.')\n    if not ('coded' in field_specs and isinstance(field_specs['coded'], bool)):\n        raise Exception('Field coded must be provided and of type bool.')\n    if field_specs['coded'] and not ('codes' in field_specs and isinstance(field_specs['codes'], dict)):\n        raise Exception('Field codes must be provided and contain a dict of possible codes.')\n\n    super().__init__(\n        field_type=field_specs['type'],\n        name=field_specs['name'],\n        description=field_specs['description'],\n        dtype='category',\n        coded=field_specs['coded'],\n        codes=field_specs['codes'] if 'codes' in field_specs else None,\n    )\n
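A brief sketch of constructing a coded case field from specs (all values hypothetical):

field_specs = {'type': 'case', 'name': 'Capture rate', 'description': 'Assumed capture rate', 'coded': True, 'codes': {'low': 'Low', 'high': 'High'}}\nfield = CustomFieldDefinition(**field_specs)\nassert field.is_allowed('high') and not field.is_allowed('medium')\n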
"},{"location":"code/python/columns/#python.posted.columns.PeriodFieldDefinition","title":"PeriodFieldDefinition","text":"

Bases: AbstractFieldDefinition

Class to store Period fields

Parameters:

Name Type Description Default name str

Name of the field

required description str

Description of the field

required

Methods:

Name Description is_allowed

Checks if cell is allowed

Source code in python/posted/columns.py
class PeriodFieldDefinition(AbstractFieldDefinition):\n    '''\n    Class to store Period fields\n\n    Parameters\n    ----------\n    name: str\n        Name of the field\n    description: str\n        Description of the field\n\n    Methods\n    -------\n    is_allowed\n        Checks if cell is allowed\n    '''\n    def __init__(self, name: str, description: str):\n        '''Initialize parent class'''\n        super().__init__(\n            field_type='case',\n            name=name,\n            description=description,\n            dtype='float',\n            coded=False,\n        )\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        '''Check if cell is a flowat or *'''\n        return is_float(cell) or cell == '*'\n\n    def _expand(self, df: pd.DataFrame, col_id: str, field_vals: list, **kwargs) -> pd.DataFrame:\n        return pd.concat([\n            df[df[col_id] != '*'],\n            df[df[col_id] == '*']\n            .drop(columns=[col_id])\n            .merge(pd.DataFrame.from_dict({col_id: field_vals}), how='cross'),\n        ]).astype({'period': 'float'})\n\n\n    def _select(self, df: pd.DataFrame, col_id: str, field_vals: list[int | float], **kwargs) -> pd.DataFrame:\n        # group by identifying columns and select periods/generate time series\n        # get list of groupable columns\n        group_cols = [\n            c for c in df.columns\n            if c not in [col_id, 'value']\n        ]\n\n        # perform groupby and do not drop NA values\n        grouped = df.groupby(group_cols, dropna=False)\n\n        # create return list\n        ret = []\n\n        # loop over groups\n        for keys, rows in grouped:\n            # get rows in group\n            rows = rows[[col_id, 'value']]\n\n            # get a list of periods that exist\n            periods_exist = rows[col_id].unique()\n\n            # create dataframe containing rows for all requested periods\n            req_rows = pd.DataFrame.from_dict({\n                f\"{col_id}\": field_vals,\n                f\"{col_id}_upper\": [min([ip for ip in periods_exist if ip >= p], default=np.nan) for p in field_vals],\n                f\"{col_id}_lower\": [max([ip for ip in periods_exist if ip <= p], default=np.nan) for p in field_vals],\n            })\n\n            # set missing columns from group\n            req_rows[group_cols] = keys\n\n            # check case\n            cond_match = req_rows[col_id].isin(periods_exist)\n            cond_extrapolate = (req_rows[f\"{col_id}_upper\"].isna() | req_rows[f\"{col_id}_lower\"].isna())\n\n            # match\n            rows_match = req_rows.loc[cond_match] \\\n                .merge(rows, on=col_id)\n\n            # extrapolate\n            rows_extrapolate = (\n                req_rows.loc[~cond_match & cond_extrapolate]\n                    .assign(\n                        period_combined=lambda x: np.where(\n                            x.notna()[f\"{col_id}_upper\"],\n                            x[f\"{col_id}_upper\"],\n                            x[f\"{col_id}_lower\"],\n                        )\n                    )\n                    .merge(rows.rename(columns={col_id: f\"{col_id}_combined\"}), on=f\"{col_id}_combined\")\n                if 'extrapolate_period' not in kwargs or kwargs['extrapolate_period'] else\n                pd.DataFrame()\n            )\n\n            # interpolate\n            rows_interpolate = req_rows.loc[~cond_match & ~cond_extrapolate] \\\n                .merge(rows.rename(columns={c: f\"{c}_upper\" 
for c in rows.columns}), on=f\"{col_id}_upper\") \\\n                .merge(rows.rename(columns={c: f\"{c}_lower\" for c in rows.columns}), on=f\"{col_id}_lower\") \\\n                .assign(value=lambda row: row['value_lower'] + (row[f\"{col_id}_upper\"] - row[col_id]) /\n                                          (row[f\"{col_id}_upper\"] - row[f\"{col_id}_lower\"]) * (row['value_upper'] - row['value_lower']))\n\n            # combine into one dataframe and drop unused columns\n            rows_to_concat = [df for df in [rows_match, rows_extrapolate, rows_interpolate] if not df.empty]\n            if rows_to_concat:\n                rows_append = pd.concat(rows_to_concat)\n                rows_append.drop(columns=[\n                        c for c in [f\"{col_id}_upper\", f\"{col_id}_lower\", f\"{col_id}_combined\", 'value_upper', 'value_lower']\n                        if c in rows_append.columns\n                    ], inplace=True)\n\n                # add to return list\n                ret.append(rows_append)\n\n        # convert return list to dataframe and return\n        return pd.concat(ret).reset_index(drop=True) if ret else df.iloc[[]]\n
"},{"location":"code/python/columns/#python.posted.columns.PeriodFieldDefinition.__init__","title":"__init__(name, description)","text":"

Initialize parent class

Source code in python/posted/columns.py
def __init__(self, name: str, description: str):\n    '''Initialize parent class'''\n    super().__init__(\n        field_type='case',\n        name=name,\n        description=description,\n        dtype='float',\n        coded=False,\n    )\n
"},{"location":"code/python/columns/#python.posted.columns.PeriodFieldDefinition.is_allowed","title":"is_allowed(cell)","text":"

Check if cell is a float or *

Source code in python/posted/columns.py
def is_allowed(self, cell: str | float | int) -> bool:\n    '''Check if cell is a float or *'''\n    return is_float(cell) or cell == '*'\n
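A toy sketch of period selection (data made up): a requested period lying between two reported periods is linearly interpolated. Note the extra region column, since selection groups over all columns besides the period and value:

import pandas as pd\n\nperiod = PeriodFieldDefinition(name='Period', description='Reporting period')\ndf = pd.DataFrame({'region': ['World', 'World'], 'period': [2020.0, 2030.0], 'value': [100.0, 80.0]})\nperiod.select_and_expand(df, 'period', field_vals=[2025])  # interpolates to a value of 90.0 for 2025\n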
"},{"location":"code/python/columns/#python.posted.columns.RegionFieldDefinition","title":"RegionFieldDefinition","text":"

Bases: AbstractFieldDefinition

Class to store Region fields

Parameters:

Name Type Description Default name str

Name of the field

required description str

Description of the field

required Source code in python/posted/columns.py
class RegionFieldDefinition(AbstractFieldDefinition):\n    '''\n    Class to store Region fields\n\n    Parameters\n    ----------\n    name: str\n        Name of the field\n    description: str\n        Description of the field\n    '''\n    def __init__(self, name: str, description: str):\n        '''Initialize parent class'''\n        super().__init__(\n            field_type='case',\n            name=name,\n            description=description,\n            dtype='category',\n            coded=True,\n            codes={'World': 'World'},  # TODO: Insert list of country names here.\n        )\n
"},{"location":"code/python/columns/#python.posted.columns.RegionFieldDefinition.__init__","title":"__init__(name, description)","text":"

Initialize parent class

Source code in python/posted/columns.py
def __init__(self, name: str, description: str):\n    '''Initialize parent class'''\n    super().__init__(\n        field_type='case',\n        name=name,\n        description=description,\n        dtype='category',\n        coded=True,\n        codes={'World': 'World'},  # TODO: Insert list of country names here.\n    )\n
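A brief sketch: region fields are coded, and 'World' is currently the only accepted code:

region = RegionFieldDefinition(name='Region', description='Region')\nassert region.is_allowed('World') and not region.is_allowed('Europe')\n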
"},{"location":"code/python/columns/#python.posted.columns.SourceFieldDefinition","title":"SourceFieldDefinition","text":"

Bases: AbstractFieldDefinition

Class to store Source fields

Parameters:

Name Type Description Default name str

Name of the field

required description str

Description of the field

required Source code in python/posted/columns.py
class SourceFieldDefinition(AbstractFieldDefinition):\n    '''\n    Class to store Source fields\n\n    Parameters\n    ----------\n    name: str\n        Name of the field\n    description: str\n        Description of the field\n    '''\n    def __init__(self, name: str, description: str):\n        '''Initialize parent class'''\n        super().__init__(\n            field_type='case',\n            name=name,\n            description=description,\n            dtype='category',\n            coded=False,  # TODO: Insert list of BibTeX identifiers here.\n        )\n
"},{"location":"code/python/columns/#python.posted.columns.SourceFieldDefinition.__init__","title":"__init__(name, description)","text":"

Initialize parent class

Source code in python/posted/columns.py
def __init__(self, name: str, description: str):\n    '''Initialize parent class'''\n    super().__init__(\n        field_type='case',\n        name=name,\n        description=description,\n        dtype='category',\n        coded=False,  # TODO: Insert list of BibTeX identifiers here.\n    )\n
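A brief sketch (the source key is a made-up BibTeX identifier): source fields are currently uncoded, so any non-null value passes:

source = SourceFieldDefinition(name='Source', description='Source of the data')\nassert source.is_allowed('Smith2020')  # accepted while sources are uncoded\n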
"},{"location":"code/python/columns/#python.posted.columns.UnitDefinition","title":"UnitDefinition","text":"

Bases: AbstractColumnDefinition

Class to store Unit columns

Parameters:

Name Type Description Default col_type

Type of the column

required name str

Name of the column

required description str

Description of the column

required required bool

Bool that specifies if the column is required

required

Methods:

Name Description is_allowed

Check if cell is allowed

Source code in python/posted/columns.py
class UnitDefinition(AbstractColumnDefinition):\n    '''\n    Class to store Unit columns\n\n    Parameters\n    ----------\n    col_type: str\n        Type of the column\n    name: str\n        Name of the column\n    description: str\n        Description of the column\n    required: bool\n        Bool that specifies if the column is required\n\n    Methods\n    -------\n    is_allowed\n        Check if cell is allowed\n    '''\n    def __init__(self, name: str, description: str, required: bool):\n        super().__init__(\n            col_type='unit',\n            name=name,\n            description=description,\n            dtype='category',\n            required=required,\n        )\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        if pd.isnull(cell):\n            return not self._required\n        if not isinstance(cell, str):\n            return False\n        tokens = cell.split(';')\n        if len(tokens) == 1:\n            return cell in ureg\n        elif len(tokens) == 2:\n            return tokens[0] in ureg and tokens[1] in unit_variants\n        else:\n            return False\n
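A brief sketch of unit validation: a plain unit must be known to the unit registry, and a 'unit;variant' token additionally requires a known unit variant (assuming here that an LHV variant is defined):

unit_col = UnitDefinition(name='Unit', description='Unit of the reported value', required=True)\nunit_col.is_allowed('kWh')      # True if 'kWh' is in the unit registry\nunit_col.is_allowed('MWh;LHV')  # True only if the 'LHV' variant is also defined\n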
"},{"location":"code/python/columns/#python.posted.columns.ValueDefinition","title":"ValueDefinition","text":"

Bases: AbstractColumnDefinition

Class to store Value columns

Parameters:

Name Type Description Default col_type

Type of the column

required name str

Name of the column

required description str

Description of the column

required required bool

Bool that specifies if the column is required

required

Methods:

Name Description is_allowed

Check if cell is allowed

Source code in python/posted/columns.py
class ValueDefinition(AbstractColumnDefinition):\n    '''\n    Class to store Value columns\n\n    Parameters\n    ----------\n    col_type: str\n        Type of the column\n    name: str\n        Name of the column\n    description: str\n        Description of the column\n    required: bool\n        Bool that specifies if the column is required\n\n    Methods\n    -------\n    is_allowed\n        Check if cell is allowed\n    '''\n    def __init__(self, name: str, description: str, required: bool):\n        super().__init__(\n            col_type='value',\n            name=name,\n            description=description,\n            dtype='float',\n            required=required,\n        )\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        if pd.isnull(cell):\n            return not self._required\n        return isinstance(cell, float | int)\n
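A brief sketch: value cells must be numeric, and empty cells are only allowed for non-required columns:

value_col = ValueDefinition(name='Value', description='Reported value', required=True)\nassert value_col.is_allowed(42.0) and not value_col.is_allowed('42')  # strings are rejected\n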
"},{"location":"code/python/columns/#python.posted.columns.VariableDefinition","title":"VariableDefinition","text":"

Bases: AbstractColumnDefinition

Class to store variable columns

Parameters:

Name Type Description Default col_type

Type of the column

required name str

Name of the column

required description str

Description of the column

required required bool

Bool that specifies if the column is required

required

Methods:

Name Description is_allowed

Check if cell is allowed

Source code in python/posted/columns.py
class VariableDefinition(AbstractColumnDefinition):\n    '''\n    Class to store variable columns\n\n    Parameters\n    ----------\n    col_type: str\n        Type of the column\n    name: str\n        Name of the column\n    description: str\n        Description of the column\n    required: bool\n        Bool that specifies if the column is required\n\n    Methods\n    -------\n    is_allowed\n        Check if cell is allowed\n    '''\n    def __init__(self, name: str, description: str, required: bool):\n        super().__init__(\n            col_type='variable',\n            name=name,\n            description=description,\n            dtype='category',\n            required=required,\n        )\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        if pd.isnull(cell):\n            return not self._required\n        return isinstance(cell, str) and cell in variables\n
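A brief sketch (the variable name is hypothetical): a cell passes only if it is a string contained in the list of defined variables:

var_col = VariableDefinition(name='Variable', description='Reported variable', required=True)\nvar_col.is_allowed('CAPEX')  # True only if 'CAPEX' is among the defined variables\n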
"},{"location":"code/python/columns/#python.posted.columns.is_float","title":"is_float(string)","text":"

Checks if a given string can be converted to a floating-point number in Python.

Parameters:

Name Type Description Default string str

String to check

required

Returns:

Type Description bool

True if conversion was successful, False if not

Source code in python/posted/columns.py
def is_float(string: str) -> bool:\n    '''Checks if a given string can be converted to a floating-point number in\n    Python.\n\n    Parameters\n    ----------\n    string : str\n        String to check\n\n    Returns\n    -------\n        bool\n            True if conversion was successful, False if not\n    '''\n    try:\n        float(string)\n        return True\n    except ValueError:\n        return False\n
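For instance:

assert is_float('3.14')\nassert not is_float('n/a')\n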
"},{"location":"code/python/columns/#python.posted.columns.read_fields","title":"read_fields(variable)","text":"

Read the fields of a variable

Parameters:

Name Type Description Default variable str

Variable to read

required

Returns:

Type Description dict

Dictionary containing the fields

comments

Dictionary containing the comments

Source code in python/posted/columns.py
def read_fields(variable: str):\n    '''\n    Read the fields of a variable\n\n    Parameters\n    ----------\n        variable: str\n            Variable to read\n\n    Returns\n    -------\n        dict\n            Dictionary containing the fields\n        comments\n            Dictionary containing the comments\n\n    '''\n    fields: dict[str, CustomFieldDefinition] = {}\n    comments: dict[str, CommentDefinition] = {}\n\n    for database_id in databases:\n        fpath = databases[database_id] / 'fields' / ('/'.join(variable.split('|')) + '.yml')\n        if fpath.exists():\n            if not fpath.is_file():\n                raise Exception(f\"Expected YAML file, but not a file: {fpath}\")\n\n            for col_id, field_specs in read_yml_file(fpath).items():\n                if field_specs['type'] in ('case', 'component'):\n                    fields[col_id] = CustomFieldDefinition(**field_specs)\n                elif field_specs['type'] == 'comment':\n                    comments[col_id] = CommentDefinition(\n                        **{k: v for k, v in field_specs.items() if k != 'type'},\n                        required=False,\n                    )\n                else:\n                    raise Exception(f\"Unkown field type: {col_id}\")\n\n    # make sure the field ID is not the same as for a base column\n    for col_id in fields:\n        if col_id in base_columns:\n            raise Exception(f\"Field ID cannot be equal to a base column ID: {col_id}\")\n\n    return fields, comments\n
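A minimal usage sketch (the variable name is hypothetical); both returned dictionaries are empty if no database defines fields for the variable:

fields, comments = read_fields('Tech|Electrolysis')\n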
"},{"location":"code/python/definitions/","title":"definitions","text":""},{"location":"code/python/definitions/#python.posted.definitions.read_definitions","title":"read_definitions(definitions_dir, flows, techs)","text":"

Reads YAML files from the definitions directory, extracts tags, inserts them into the definitions, replaces tokens, and returns the updated definitions.

Parameters:

definitions_dir (Path, required): Path leading to the definitions
flows (dict, required): Dictionary containing the different flow types. Each key represents a flow type, and the corresponding value is a dictionary of attribute key-value pairs, such as density or energy content.
techs (dict, required): Dictionary containing information about different technologies. Each key represents a unique technology ID, and the corresponding value is a dictionary of specifications for that technology, such as 'description', 'class', or 'primary output'.

Returns:

dict: Dictionary containing the definitions after processing and replacing tags and tokens

Source code in python/posted/definitions.py
def read_definitions(definitions_dir: Path, flows: dict, techs: dict):\n    '''\n    Reads YAML files from the definitions directory, extracts tags, inserts tags into\n    definitions, replaces tokens in definitions, and returns the updated definitions.\n\n    Parameters\n    ----------\n    definitions_dir : Path\n        Path leading to the definitions\n    flows : dict\n        Dictionary containing the different flow types. Each key represents a flow type, the corresponding\n        value is a dictionary containing key value pairs of attributes like density, energy content and their\n        values.\n    techs : dict\n        Dictionary containing information about different technologies. Each key in the\n        dictionary represents a unique technology ID, and the corresponding value is a dictionary containing\n        various specifications for that technology, like 'description', 'class', 'primary output' etc.\n\n    Returns\n    -------\n        dict\n            Dictionary containing the definitions after processing and replacing tags and tokens\n    '''\n    # check that the definitions directory exists and is a directory\n    if not definitions_dir.exists():\n        return {}\n    if not definitions_dir.is_dir():\n        raise Exception(f\"Should be a directory but is not: {definitions_dir}\")\n\n    # read all definitions and tags\n    definitions = {}\n    tags = {}\n    for file_path in definitions_dir.rglob('*.yml'):\n        if file_path.name.startswith('tag_'):\n            tags |= read_yml_file(file_path)\n        else:\n            definitions |= read_yml_file(file_path)\n\n    # read tags from flows and techs\n    tags['Flow IDs'] = {\n        flow_id: {}\n        for flow_id, flow_specs in flows.items()\n    }\n    tags['Tech IDs'] = {\n        tech_id: {\n            k: v\n            for k, v in tech_specs.items()\n            if k in ['primary_output']\n        }\n        for tech_id, tech_specs in techs.items()\n    }\n\n    # insert tags\n    for tag, items in tags.items():\n        definitions = replace_tags(definitions, tag, items)\n\n    # remove definitions where tags could not be replaced\n    if any('{' in key for key in definitions):\n        warnings.warn('Tokens could not be replaced correctly.')\n        definitions = {k: v for k, v in definitions.items() if '{' not in k}\n\n    # insert tokens\n    tokens = {\n        'default currency': lambda def_specs: default_currency,\n        'primary output': lambda def_specs: def_specs['primary_output'],\n    } | {\n        f\"default flow unit {unit_component}\": unit_token_func(unit_component, flows)\n        for unit_component in ('full', 'raw', 'variant')\n    }\n    for def_key, def_specs in definitions.items():\n        for def_property, def_value in def_specs.items():\n            for token_key, token_func in tokens.items():\n                if isinstance(def_value, str) and f\"{{{token_key}}}\" in def_value:\n                    def_specs[def_property] = def_specs[def_property].replace(f\"{{{token_key}}}\", token_func(def_specs))\n\n    return definitions\n
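To illustrate the tag and token mechanism, a definition entry might look as follows before processing (the entry is hypothetical; the '{Tech IDs}' tag is first expanded once per technology, then tokens such as '{default currency}' are resolved by the token functions):

definitions = {
    'Tech|{Tech IDs}|CAPEX': {
        'description': 'Capital cost of {Tech IDs}',
        'unit': '{default currency}',  # resolved to the default currency
    },
}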
"},{"location":"code/python/definitions/#python.posted.definitions.replace_tags","title":"replace_tags(definitions, tag, items)","text":"

Replaces specified tags in dictionary keys and values with corresponding items from another dictionary.

Parameters:

definitions (dict, required): Dictionary containing the definitions in which the tags should be replaced by the items
tag (str, required): String identifying where replacements should be made in the definitions; specifies the placeholder that is replaced with actual values from the items dictionary
items (dict[str, dict], required): Dictionary containing the items with which to replace the tags in the definitions

Returns:

dict: Dictionary containing the definitions with replacements based on the provided tag and items

Source code in python/posted/definitions.py
def replace_tags(definitions: dict, tag: str, items: dict[str, dict]):\n    '''\n    Replaces specified tags in dictionary keys and values with corresponding\n    items from another dictionary.\n\n    Parameters\n    ----------\n    definitions : dict\n        Dictionary containing the definitions, where the tags should be replaced by the items\n    tag : str\n        String to identify where replacements should be made in the definitions. Specifies\n        the placeholder that needs to be replaced with actual values from the `items` dictionary.\n    items : dict[str, dict]\n        Dictionary containing the items with which to replace the tags in the definitions\n\n    Returns\n    -------\n        dict\n            Dictionary containing the definitions with replacements based on the provided tag and items.\n    '''\n\n    definitions_with_replacements = {}\n    for def_name, def_specs in definitions.items():\n        if f\"{{{tag}}}\" not in def_name:\n            definitions_with_replacements[def_name] = def_specs\n        else:\n            for item_name, item_specs in items.items():\n                item_desc = item_specs['description'] if 'description' in item_specs else item_name\n                def_name_new = def_name.replace(f\"{{{tag}}}\", item_name)\n                def_specs_new = copy.deepcopy(def_specs)\n                def_specs_new |= item_specs\n\n                # replace tags in description\n                def_specs_new['description'] = def_specs['description'].replace(f\"{{{tag}}}\", item_desc)\n\n                # replace tags in other specs\n                for k, v in def_specs_new.items():\n                    if k == 'description' or not isinstance(v, str):\n                        continue\n                    def_specs_new[k] = def_specs_new[k].replace(f\"{{{tag}}}\", item_name)\n                    def_specs_new[k] = def_specs_new[k].replace('{parent variable}', def_name[:def_name.find(f\"{{{tag}}}\")-1])\n                definitions_with_replacements[def_name_new] = def_specs_new\n\n    return definitions_with_replacements\n
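A worked example of the replacement logic (the definition entry and the flow item are hypothetical):

from posted.definitions import replace_tags  # assumed import path

definitions = {
    'Import|{Flow IDs}': {'description': 'Import of {Flow IDs}'},
}
items = {'h2': {'description': 'hydrogen'}}  # hypothetical flow ID

replace_tags(definitions, 'Flow IDs', items)
# {'Import|h2': {'description': 'Import of hydrogen'}}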
"},{"location":"code/python/definitions/#python.posted.definitions.unit_token_func","title":"unit_token_func(unit_component, flows)","text":"

Takes a unit component type and a dictionary of flows, and returns a lambda function that extracts the default unit based on the specified component type from the flow dictionary.

Parameters:

unit_component (Literal['full', 'raw', 'variant'], required): Specifies the type of unit token to be returned
flows (dict, required): Dictionary containing the flows

Returns:

lambda function: Function that takes a dictionary def_specs as input and returns different values based on the unit_component parameter and the contents of the flows dictionary

Source code in python/posted/definitions.py
def unit_token_func(unit_component: Literal['full', 'raw', 'variant'], flows: dict):\n    '''\n    Takes a unit component type and a dictionary of flows, and returns a lambda function\n    that extracts the default unit based on the specified component type from the flow\n    dictionary.\n\n    Parameters\n    ----------\n    unit_component : Literal['full', 'raw', 'variant']\n        Specifies the type of unit token to be returned.\n    flows : dict\n        Dictionary containing the flows\n\n    Returns\n    -------\n        lambda function\n            lambda function that takes a dictionary `def_specs` as input. The lambda function\n            will return different values based on the `unit_component` parameter and\n            the contents of the `flows` dictionary.\n    '''\n    return lambda def_specs: (\n        'ERROR'\n        if 'flow_id' not in def_specs or def_specs['flow_id'] not in flows else\n        (\n            flows[def_specs['flow_id']]['default_unit']\n            if unit_component == 'full' else\n            flows[def_specs['flow_id']]['default_unit'].split(';')[0]\n            if unit_component == 'raw' else\n            ';'.join([''] + flows[def_specs['flow_id']]['default_unit'].split(';')[1:2])\n            if unit_component == 'variant' else\n            'UNKNOWN'\n        )\n    )\n
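The behaviour for the three component types, sketched with a hypothetical flow entry whose default_unit carries a variant after the semicolon:

from posted.definitions import unit_token_func  # assumed import path

flows = {'h2': {'default_unit': 'MWh;LHV'}}  # hypothetical flow entry

unit_token_func('full', flows)({'flow_id': 'h2'})     # 'MWh;LHV'
unit_token_func('raw', flows)({'flow_id': 'h2'})      # 'MWh'
unit_token_func('variant', flows)({'flow_id': 'h2'})  # ';LHV'
unit_token_func('raw', flows)({})                     # 'ERROR' (no flow_id)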
"},{"location":"code/python/masking/","title":"masking","text":""},{"location":"code/python/masking/#python.posted.masking.Mask","title":"Mask","text":"

Class to define masks with conditions and weights to apply to DataFiles

Parameters:

where (MaskCondition | list[MaskCondition], default None): Where the mask should be applied
use (MaskCondition | list[MaskCondition], default None): Condition on where to use the masks
weight (None | float | str | list[float | str], default None): Weights to apply
other (float, default nan)
comment (str, default ''): Comment

Source code in python/posted/masking.py
class Mask:\n    '''Class to define masks with conditions and weights to apply to DataFiles\n\n    Parameters\n    ----------\n    where: MaskCondition | list[MaskCondition], optional\n        Where the mask should be applied\n    use:  MaskCondition | list[MaskCondition], optional\n        Condition on where to use the masks\n    weight: None | float | str | list[float | str], optional\n        Weights to apply\n    other: float, optional\n\n    comment: str, optional\n            Comment\n    '''\n    def __init__(self,\n                 where: MaskCondition | list[MaskCondition] = None,\n                 use: MaskCondition | list[MaskCondition] = None,\n                 weight: None | float | str | list[float | str] = None,\n                 other: float = np.nan,\n                 comment: str = ''):\n        '''set fields from constructor arguments, perform consistency checks on fields,\n        set default weight to 1 if not set otherwise'''\n        self._where: list[MaskCondition] = [] if where is None else where if isinstance(where, list) else [where]\n        self._use: list[MaskCondition] = [] if use is None else use if isinstance(use, list) else [use]\n        self._weight: list[float] = (\n            None\n            if weight is None else\n            [float(w) for w in weight]\n            if isinstance(weight, list) else\n            [float(weight)]\n        )\n        self._other: float = other\n        self._comment: str = comment\n\n        # perform consistency checks on fields\n        if self._use and self._weight and len(self._use) != len(self._weight):\n            raise Exception(f\"Must provide same length of 'use' conditions as 'weight' values.\")\n\n        # set default weight to 1 if not set otherwise\n        if not self._weight:\n            self._weight = len(self._use) * [1.0]\n\n\n    def matches(self, df: pd.DataFrame):\n        '''Check if a mask matches a dataframe (all 'where' conditions match across all rows)\n\n        Parameters\n        ----------\n        df: pd.Dataframe\n            Dataframe to check for matches\n        Returns\n        -------\n            bool\n                If the mask matches the dataframe'''\n        for w in self._where:\n            if not apply_cond(df, w).all():\n                return False\n        return True\n\n\n    def get_weights(self, df: pd.DataFrame):\n        '''Apply weights to the dataframe\n\n        Parameters\n        ----------\n        df: pd.Dataframe\n            Dataframe to apply weights on\n\n        Returns\n        -------\n            pd.DataFrame\n                Dataframe with applied weights'''\n        ret = pd.Series(index=df.index, data=np.nan)\n\n        # apply weights where the use condition matches\n        for u, w in zip(self._use, self._weight):\n            ret.loc[apply_cond(df, u)] = w\n\n        return ret\n
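A minimal sketch of constructing and applying a mask (the condition values are hypothetical; conditions may be dictionaries, query strings, or callables):

import pandas as pd
from posted.masking import Mask  # assumed import path

df = pd.DataFrame({
    'variable': ['CAPEX', 'CAPEX'],
    'region': ['World', 'Europe'],
    'value': [100.0, 120.0],
})

mask = Mask(
    where={'variable': 'CAPEX'},                     # mask applies only to CAPEX rows
    use=[{'region': 'World'}, {'region': 'Europe'}],
    weight=[2.0, 1.0],
)

if mask.matches(df):                 # all 'where' conditions hold on every row
    weights = mask.get_weights(df)   # 2.0 for World, 1.0 for Europe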
"},{"location":"code/python/masking/#python.posted.masking.Mask.__init__","title":"__init__(where=None, use=None, weight=None, other=np.nan, comment='')","text":"

Set fields from constructor arguments, perform consistency checks on the fields, and set the default weight to 1 if not set otherwise

Source code in python/posted/masking.py
def __init__(self,\n             where: MaskCondition | list[MaskCondition] = None,\n             use: MaskCondition | list[MaskCondition] = None,\n             weight: None | float | str | list[float | str] = None,\n             other: float = np.nan,\n             comment: str = ''):\n    '''set fields from constructor arguments, perform consistency checks on fields,\n    set default weight to 1 if not set otherwise'''\n    self._where: list[MaskCondition] = [] if where is None else where if isinstance(where, list) else [where]\n    self._use: list[MaskCondition] = [] if use is None else use if isinstance(use, list) else [use]\n    self._weight: list[float] = (\n        None\n        if weight is None else\n        [float(w) for w in weight]\n        if isinstance(weight, list) else\n        [float(weight)]\n    )\n    self._other: float = other\n    self._comment: str = comment\n\n    # perform consistency checks on fields\n    if self._use and self._weight and len(self._use) != len(self._weight):\n        raise Exception(f\"Must provide same length of 'use' conditions as 'weight' values.\")\n\n    # set default weight to 1 if not set otherwise\n    if not self._weight:\n        self._weight = len(self._use) * [1.0]\n
"},{"location":"code/python/masking/#python.posted.masking.Mask.get_weights","title":"get_weights(df)","text":"

Apply weights to the dataframe

Parameters:

df (pd.DataFrame, required): Dataframe to apply weights on

Returns:

pd.Series: Series of weights aligned to the dataframe's index (NaN where no 'use' condition matches)

Source code in python/posted/masking.py
def get_weights(self, df: pd.DataFrame):\n    '''Apply weights to the dataframe\n\n    Parameters\n    ----------\n    df: pd.Dataframe\n        Dataframe to apply weights on\n\n    Returns\n    -------\n        pd.DataFrame\n            Dataframe with applied weights'''\n    ret = pd.Series(index=df.index, data=np.nan)\n\n    # apply weights where the use condition matches\n    for u, w in zip(self._use, self._weight):\n        ret.loc[apply_cond(df, u)] = w\n\n    return ret\n
"},{"location":"code/python/masking/#python.posted.masking.Mask.matches","title":"matches(df)","text":"

Check if a mask matches a dataframe (all 'where' conditions match across all rows)

Parameters:

df (pd.DataFrame, required): Dataframe to check for matches

Returns:

bool: True if the mask matches the dataframe (i.e. all 'where' conditions match across all rows), False otherwise

Source code in python/posted/masking.py
def matches(self, df: pd.DataFrame):\n    '''Check if a mask matches a dataframe (all 'where' conditions match across all rows)\n\n    Parameters\n    ----------\n    df: pd.Dataframe\n        Dataframe to check for matches\n    Returns\n    -------\n        bool\n            If the mask matches the dataframe'''\n    for w in self._where:\n        if not apply_cond(df, w).all():\n            return False\n    return True\n
"},{"location":"code/python/masking/#python.posted.masking.apply_cond","title":"apply_cond(df, cond)","text":"

Takes a pandas DataFrame and a condition, which can be a string, dictionary, or callable, and applies the condition to the DataFrame using eval or apply accordingly.

Parameters:

df (pd.DataFrame, required): A pandas DataFrame containing the data on which the condition will be applied
cond (MaskCondition, required): The condition to be applied on the dataframe; can be a string, a dictionary, or a callable function

Returns:

pd.DataFrame: Dataframe evaluated at the mask condition

Source code in python/posted/masking.py
def apply_cond(df: pd.DataFrame, cond: MaskCondition):\n    '''Takes a pandas DataFrame and a condition, which can be a string, dictionary,\n    or callable, and applies the condition to the DataFrame using `eval` or `apply`\n    accordingly.\n\n    Parameters\n    ----------\n    df : pd.DataFrame\n        A pandas DataFrame containing the data on which the condition will be applied.\n    cond : MaskCondition\n        The condition to be applied on the dataframe. Can be either a string, a dictionary, or a\n        callable function.\n\n    Returns\n    -------\n        pd.DataFrame\n            Dataframe evaluated at the mask condition\n\n    '''\n    if isinstance(cond, str):\n        return df.eval(cond)\n    elif isinstance(cond, dict):\n        cond = ' & '.join([f\"{key}=='{val}'\" for key, val in cond.items()])\n        return df.eval(cond)\n    elif isinstance(cond, Callable):\n        return df.apply(cond)\n
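The condition types in action (illustrative data; note that a callable is passed to df.apply as-is):

import pandas as pd
from posted.masking import apply_cond  # assumed import path

df = pd.DataFrame({'region': ['World', 'Europe'], 'value': [1.0, 2.0]})

apply_cond(df, 'value > 1.5')        # string: evaluated via df.eval
apply_cond(df, {'region': 'World'})  # dict: joined into region=='World' and evaluated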
"},{"location":"code/python/masking/#python.posted.masking.read_masks","title":"read_masks(variable)","text":"

Reads YAML files containing mask specifications from multiple databases and returns a list of Mask objects.

Parameters:

variable (str, required): Variable to be read

Returns:

list: List with masks for the variable

Source code in python/posted/masking.py
def read_masks(variable: str):\n    '''Reads YAML files containing mask specifications from multiple databases\n    and returns a list of Mask objects.\n\n    Parameters\n    ----------\n    variable : str\n        Variable to be read\n\n    Returns\n    -------\n        list\n            List with masks for the variable\n\n    '''\n    ret: list[Mask] = []\n\n    for database_id in databases:\n        fpath = databases[database_id] / 'masks' / ('/'.join(variable.split('|')) + '.yml')\n        if fpath.exists():\n            if not fpath.is_file():\n                raise Exception(f\"Expected YAML file, but not a file: {fpath}\")\n\n            ret += [\n                Mask(**mask_specs)\n                for mask_specs in read_yml_file(fpath)\n            ]\n\n    return ret\n
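A usage sketch (the variable is hypothetical; for a variable 'A|B', each database is searched for <database>/masks/A/B.yml, and every entry in the file becomes a Mask):

from posted.masking import read_masks  # assumed import path

masks = read_masks('Tech|Electrolysis')  # [] if no mask files exist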
"},{"location":"code/python/noslag/","title":"noslag","text":""},{"location":"code/python/noslag/#python.posted.noslag.DataSet","title":"DataSet","text":"

Bases: TEBase

Class to store, normalise, select and aggregate DataSets

Parameters:

parent_variable (str, required): Variable to collect data on
include_databases (Optional[list[str] | tuple[str]], optional): Databases to load from
file_paths (Optional[list[Path]], optional): Paths to load data from
check_inconsistencies (bool, optional): Whether to check for inconsistencies
data (Optional[pd.DataFrame], optional): Specific data to include in the dataset

Source code in python/posted/noslag.py
class DataSet(TEBase):\n    '''Class to store, normalise, select and aggregate DataSets\n\n    Parameters\n    ----------\n    parent_variable: str\n        Variable to collect data on\n    include_databases: Optional[list[str] | tuple[str]], optional\n        Databases to load from\n    file_paths: Optional[list[Path]], optional\n        Paths to load data from\n    check_inconsistencies: bool, optional\n        Whether to check for inconsistencies\n    data: Optional[pd.DataFrame], optional\n        Specific data to include in the dataset\n    '''\n    _df: None | pd.DataFrame\n    _columns: dict[str, AbstractColumnDefinition]\n    _fields: dict[str, AbstractFieldDefinition]\n    _masks: list[Mask]\n\n    # initialise\n    def __init__(self,\n                 parent_variable: str,\n                 include_databases: Optional[list[str] | tuple[str]] = None,\n                 file_paths: Optional[list[Path]] = None,\n                 check_inconsistencies: bool = False,\n                 data: Optional[pd.DataFrame] = None,\n                 ):\n        '''Initialise parent class and fields, load data from specified databases and files'''\n        TEBase.__init__(self, parent_variable)\n\n        # initialise fields\n        self._df = None\n        self._columns = base_columns\n        self._fields = {\n            col_id: field\n            for col_id, field in self._columns.items()\n            if isinstance(field, AbstractFieldDefinition)\n        }\n        self._masks = []\n\n        # Load data if provided, otherwise load from TEDataFiles\n        if data is not None:\n            self._df = data\n        else:\n            # read TEDataFiles and combine into dataset\n            include_databases = list(include_databases) if include_databases is not None else list(databases.keys())\n            self._df = self._load_files(include_databases, file_paths or [], check_inconsistencies)\n\n\n    @property\n    def data(self):\n        '''pd.DataFrame: Get or set the dataframe'''\n        return self._df\n\n    def set_data(self, df: pd.DataFrame):\n        self._df = df\n\n\n    def _load_files(self, include_databases: list[str], file_paths: list[Path], check_inconsistencies: bool):\n        # Load TEDFs and compile into NSHADataSet\n\n        files: list[TEDF] = []\n\n        # collect TEDF and append to list\n        collected_files = collect_files(parent_variable=self._parent_variable, include_databases=include_databases)\n        for file_variable, file_database_id in collected_files:\n            files.append(TEDF(parent_variable=file_variable, database_id=file_database_id))\n        for file_path in file_paths:\n            files.append(TEDF(parent_variable=self._parent_variable, file_path=file_path))\n\n        # raise exception if no TEDF can be loaded\n        if not files:\n            raise Exception(f\"No TEDF to load for variable '{self._parent_variable}'.\")\n\n        # get fields and masks from databases\n        files_vars: set[str] = {f.parent_variable for f in files}\n        for v in files_vars:\n            new_fields, new_comments = read_fields(v)\n            for col_id in new_fields | new_comments:\n                if col_id in self._columns:\n                    raise Exception(f\"Cannot load TEDFs due to multiple columns with same ID defined: {col_id}\")\n            self._fields = new_fields | self._fields\n            self._columns = new_fields | self._columns | new_comments\n            self._masks += read_masks(v)\n\n        # load all TEDFs: load from 
file, check for inconsistencies (if requested), expand cases and variables\n        file_dfs: list[pd.DataFrame] = []\n        for f in files:\n            # load\n            f.load()\n\n            # check for inconsistencies\n            if check_inconsistencies:\n                f.check()\n\n            # obtain dataframe and insert column parent_variable\n            df_tmp = f.data.copy()\n            df_tmp.insert(0, 'parent_variable', f.parent_variable)\n\n            # append to dataframe list\n            file_dfs.append(df_tmp)\n\n        # compile dataset from the dataframes loaded from the individual files\n        data = pd.concat(file_dfs)\n\n        # query relevant variables\n        data = data.query(f\"parent_variable=='{self._parent_variable}'\")\n\n        # drop entries with unknown variables and warn\n        for var_type in ('variable', 'reference_variable'):\n            cond = (data[var_type].notnull() &\n                    data.apply(lambda row: f\"{row['parent_variable']}|{row[var_type]}\" not in self._var_specs, axis=1))\n            if cond.any():\n                warnings.warn(f\"Unknown {var_type}, so dropping rows:\\n{data.loc[cond, var_type]}\")\n                data = data.loc[~cond].reset_index(drop=True)\n\n        # return\n        return data\n\n\n    def normalise(self, override: Optional[dict[str, str]] = None, inplace: bool = False) -> pd.DataFrame | None:\n        '''\n        Normalise data: default reference units, reference value equal to 1.0, default reported units\n\n        Parameters\n        ----------\n        override: Optional[dict[str,str]], optional\n            Dictionary with key, value pairs of variables to override\n        inplace: bool, optional\n            Whether to do the normalisation in place\n\n        Returns\n        -------\n        pd.DataFrame\n            If inplace is False, returns the normalised dataframe'''\n        normalised, _ = self._normalise(override)\n        if inplace:\n            self._df = normalised\n            return\n        else:\n            return normalised\n\n    def _normalise(self, override: Optional[dict[str, str]]) -> tuple[pd.DataFrame, dict[str, str]]:\n        if override is None:\n            override = {}\n\n        # get overridden var specs\n        var_flow_ids = {\n            var_name: var_specs['flow_id'] if 'flow_id' in var_specs else np.nan\n            for var_name, var_specs in self._var_specs.items()\n        }\n        var_units = {\n            var_name: var_specs['default_unit']\n            for var_name, var_specs in self._var_specs.items()\n        } | override\n\n        # normalise reference units, normalise reference values, and normalise reported units\n        normalised = self._df \\\n            .pipe(normalise_units, level='reference', var_units=var_units, var_flow_ids=var_flow_ids) \\\n            .pipe(normalise_values) \\\n            .pipe(normalise_units, level='reported', var_units=var_units, var_flow_ids=var_flow_ids)\n\n        # return normalised data and variable units\n        return normalised, var_units\n\n    # prepare data for selection\n    def select(self,\n               override: Optional[dict[str, str]] = None,\n               drop_singular_fields: bool = True,\n               extrapolate_period: bool = True,\n               **field_vals_select) -> pd.DataFrame:\n        '''Select desired data from the dataframe\n\n        Parameters\n        ----------\n        override: Optional[dict[str, str]]\n            Dictionary with key, value pairs 
of variables to override\n        drop_singular_fields: bool, optional\n            If True, drop custom fields with only one value\n        extrapolate_period: bool, optional\n            If True, extrapolate values if no value for this period is given\n        **field_vals_select\n            IDs of values to select\n\n        Returns\n        -------\n        pd.DataFrame\n            DataFrame with selected values\n            '''\n        selected, var_units, var_references = self._select(\n            override,\n            drop_singular_fields,\n            extrapolate_period,\n            **field_vals_select,\n        )\n        selected.insert(selected.columns.tolist().index('variable'), 'reference_variable', np.nan)\n        selected['reference_variable'] = selected['variable'].map(var_references)\n        return self._cleanup(selected, var_units)\n\n    def _select(self,\n                override: Optional[dict[str, str]],\n                drop_singular_fields: bool,\n                extrapolate_period: bool,\n                **field_vals_select) -> tuple[pd.DataFrame, dict[str, str], dict[str, str]]:\n        # start from normalised data\n        normalised, var_units = self._normalise(override)\n        selected = normalised\n\n        # drop unit columns and reference value column\n        selected.drop(columns=['unit', 'reference_unit', 'reference_value'], inplace=True)\n\n        # drop columns containing comments and uncertainty field (which is currently unsupported)\n        selected.drop(\n            columns=['uncertainty'] + [\n                col_id for col_id, field in self._columns.items()\n                if field.col_type == 'comment'\n            ],\n            inplace=True,\n        )\n\n        # add parent variable as prefix to other variable columns\n        selected['variable'] = selected['parent_variable'] + '|' + selected['variable']\n        selected['reference_variable'] = selected['parent_variable'] + '|' + selected['reference_variable']\n        selected.drop(columns=['parent_variable'], inplace=True)\n\n        # raise exception if unknown fields are listed in the arguments\n        for field_id in field_vals_select:\n            if not any(field_id == col_id for col_id in self._fields):\n                raise Exception(f\"Field '{field_id}' does not exist and cannot be used for selection.\")\n\n        # order fields for selection: period must be expanded last due to the interpolation\n        fields_select = ({col_id: self._fields[col_id] for col_id in field_vals_select} |\n                         {col_id: field for col_id, field in self._fields.items() if col_id != 'period' and col_id not in field_vals_select} |\n                         {'period': self._fields['period']})\n\n        # select and expand fields\n        for col_id, field in fields_select.items():\n            field_vals = field_vals_select[col_id] if col_id in field_vals_select else None\n            selected = field.select_and_expand(selected, col_id, field_vals, extrapolate_period=extrapolate_period)\n\n        # drop custom fields with only one value if specified in method argument\n        if drop_singular_fields:\n            selected.drop(columns=[\n                col_id for col_id, field in self._fields.items()\n                if isinstance(field, CustomFieldDefinition) and selected[col_id].nunique() < 2\n            ], inplace=True)\n\n        # apply mappings\n        selected = self._apply_mappings(selected, var_units)\n\n        # drop rows with 
failed mappings\n        selected.dropna(subset='value', inplace=True)\n\n        # get map of variable references\n        var_references = selected \\\n            .filter(['variable', 'reference_variable']) \\\n            .drop_duplicates() \\\n            .set_index('variable')['reference_variable']\n\n        # Check for multiple reference variables per reported variable\n        if not var_references.index.is_unique:\n            raise Exception(f\"Multiple reference variables per reported variable found: {var_references}\")\n        var_references = var_references.to_dict()\n\n        # Remove 'reference_variable' column\n        selected.drop(columns=['reference_variable'], inplace=True)\n\n        # strip off unit variants\n        var_units = {\n            variable: unit.split(';')[0]\n            for variable, unit in var_units.items()\n        }\n\n        # return\n        return selected, var_units, var_references\n\n\n    def _apply_mappings(self, expanded: pd.DataFrame, var_units: dict) -> pd.DataFrame:\n        # apply mappings between entry types\n        # list of columns to group by\n        group_cols = [\n            c for c in expanded.columns\n            if c not in ['variable', 'reference_variable', 'value']\n        ]\n\n        # perform groupby and do not drop NA values\n        grouped = expanded.groupby(group_cols, dropna=False)\n\n        # create return list\n        ret = []\n\n        # loop over groups\n        for keys, ids in grouped.groups.items():\n            # get rows in group\n            rows = expanded.loc[ids, [c for c in expanded if c not in group_cols]].copy()\n\n            # 1. convert FLH to OCF\n            cond = rows['variable'].str.endswith('|FLH')\n            if cond.any():\n\n                # Multiply 'value' by conversion factor\n                rows.loc[cond, 'value'] *= rows.loc[cond].apply(\n                    lambda row: unit_convert(\n                        var_units[row['variable']] + '/a',\n                        var_units[row['variable'].replace('|FLH', '|OCF')],\n                    ),\n                    axis=1,\n                )\n\n                # Replace '|FLH' with '|OCF' in 'variable'\n                rows.loc[cond, 'variable'] = rows.loc[cond, 'variable'] \\\n                    .str.replace('|FLH', '|OCF', regex=False)\n\n            # 2. 
convert OPEX Fixed Relative to OPEX Fixed\n            cond = rows['variable'].str.endswith('|OPEX Fixed Relative')\n            if cond.any():\n\n                # Define a function to calculate the conversion factor\n                def calculate_conversion(row):\n                    conversion_factor = unit_convert(var_units[row['variable']], 'dimensionless') * unit_convert(\n                        var_units[row['variable'].replace('|OPEX Fixed Relative', '|CAPEX')] + '/a',\n                        var_units[row['variable'].replace('|OPEX Fixed Relative', '|OPEX Fixed')]\n                    ) * (rows.query(\n                        f\"variable=='{row['variable'].replace('|OPEX Fixed Relative', '|CAPEX')}'\"\n                    ).pipe(\n                        lambda df: df['value'].iloc[0] if not df.empty else np.nan,\n                    ))\n                    return conversion_factor\n\n                # Calculate the conversion factor and update 'value' for rows satisfying the condition\n                rows.loc[cond, 'value'] *= rows.loc[cond].apply(\n                    lambda row: calculate_conversion(row),\n                    axis=1,\n                )\n\n                # Replace '|OPEX Fixed Relative' with '|OPEX Fixed' in 'variable'\n                rows.loc[cond, 'variable'] = rows.loc[cond, 'variable'] \\\n                    .str.replace('|OPEX Fixed Relative', '|OPEX Fixed')\n\n                # Assign 'reference_variable' based on modified 'variable'\n                rows.loc[cond, 'reference_variable'] = rows.loc[cond].apply(\n                    lambda row: rows.query(\n                        f\"variable=='{row['variable'].replace('|OPEX Fixed', '|CAPEX')}'\"\n                    ).pipe(\n                        lambda df: df['reference_variable'].iloc[0] if not df.empty else np.nan,\n                    ),\n                    axis=1,\n                )\n\n                # Check if there are rows with null 'value' after the operation\n                if (cond & rows['value'].isnull()).any():\n                    warnings.warn(HarmoniseMappingFailure(\n                        expanded.loc[ids].loc[cond & rows['value'].isnull()],\n                        'No CAPEX value matching an OPEX Fixed Relative value found.',\n                    ))\n\n            # 3. 
convert OPEX Fixed Specific to OPEX Fixed\n            cond = rows['variable'].str.endswith('|OPEX Fixed Specific')\n            if cond.any():\n\n                # Define a function to calculate the conversion factor\n                def calculate_conversion(row):\n                    conversion_factor = unit_convert(\n                        var_units[row['variable']] + '/a',\n                        var_units[row['variable'].replace('|OPEX Fixed Specific', '|OPEX Fixed')]\n                    ) / unit_convert(\n                        var_units[row['reference_variable']] + '/a',\n                        var_units[re.sub(r'(Input|Output)', r'\\1 Capacity', row['reference_variable'])],\n                        self._var_specs[row['reference_variable']]['flow_id'] if 'flow_id' in self._var_specs[row['reference_variable']] else np.nan,\n                    ) * unit_convert(\n                        var_units[row['variable'].replace('|OPEX Fixed Specific', '|OCF')],\n                        'dimensionless'\n                    ) * (rows.query(\n                        f\"variable=='{row['variable'].replace('|OPEX Fixed Specific', '|OCF')}'\"\n                    ).pipe(\n                        lambda df: df['value'].iloc[0] if not df.empty else np.nan,\n                    ))\n                    return conversion_factor\n\n                # Calculate the conversion factor and update 'value' for rows satisfying the condition\n                rows.loc[cond, 'value'] *= rows.loc[cond].apply(\n                    lambda row: calculate_conversion(row),\n                    axis=1,\n                )\n\n                # Replace '|OPEX Fixed Specific' with '|OPEX Fixed' in 'variable'\n                rows.loc[cond, 'variable'] = rows.loc[cond, 'variable'] \\\n                    .str.replace('|OPEX Fixed Specific', '|OPEX Fixed')\n\n                # Assign 'reference_variable' by replacing 'Input' or 'Output' with 'Input Capacity' or 'Output Capacity'\n                rows.loc[cond, 'reference_variable'] = rows.loc[cond, 'reference_variable'].apply(\n                    lambda cell: re.sub(r'(Input|Output)', r'\\1 Capacity', cell),\n                )\n\n                # Check if there are any rows with null 'value' after the operation\n                if (cond & rows['value'].isnull()).any():\n                    warnings.warn(HarmoniseMappingFailure(\n                        expanded.loc[ids].loc[cond & rows['value'].isnull()],\n                        'No OCF value matching an OPEX Fixed Specific value found.',\n                    ))\n\n            # 4. convert efficiencies (Output over Input) to demands (Input over Output)\n            cond = (rows['variable'].str.contains(r'\\|Output(?: Capacity)?\\|') &\n                    (rows['reference_variable'].str.contains(r'\\|Input(?: Capacity)?\\|')\n                    if rows['reference_variable'].notnull().any() else False))\n            if cond.any():\n                rows.loc[cond, 'value'] = 1.0 / rows.loc[cond, 'value']\n                rows.loc[cond, 'variable_new'] = rows.loc[cond, 'reference_variable']\n                rows.loc[cond, 'reference_variable'] = rows.loc[cond, 'variable']\n                rows.loc[cond, 'variable'] = rows.loc[cond, 'variable_new']\n                rows.drop(columns=['variable_new'], inplace=True)\n\n            # 5. 
convert all references to primary output\n            cond = (((rows['reference_variable'].str.contains(r'\\|Output(?: Capacity)?\\|') |\n                    rows['reference_variable'].str.contains(r'\\|Input(?: Capacity)?\\|'))\n                    if rows['reference_variable'].notnull().any() else False) &\n                    rows['variable'].map(lambda var: 'default_reference' in self._var_specs[var]) &\n                    (rows['variable'].map(\n                        lambda var: self._var_specs[var]['default_reference']\n                        if 'default_reference' in self._var_specs[var] else np.nan\n                    ) != rows['reference_variable']))\n            if cond.any():\n                regex_find = r'\\|(Input|Output)(?: Capacity)?\\|'\n                regex_repl = r'|\\1|'\n                rows.loc[cond, 'reference_variable_new'] = rows.loc[cond, 'variable'].map(\n                    lambda var: self._var_specs[var]['default_reference'],\n                )\n\n                # Define function to calculate the conversion factor\n                def calculate_conversion(row):\n                    conversion_factor =  unit_convert(\n                        ('a*' if 'Capacity' in row['reference_variable'] else '') + var_units[row['reference_variable_new']],\n                        var_units[re.sub(regex_find, regex_repl, row['reference_variable_new'])],\n                        row['reference_variable_new'].split('|')[-1]\n                    ) / unit_convert(\n                        ('a*' if 'Capacity' in row['reference_variable'] else '') + var_units[row['reference_variable']],\n                        var_units[re.sub(regex_find, regex_repl, row['reference_variable'])],\n                        row['reference_variable'].split('|')[-1]\n                    ) * rows.query(\n                        f\"variable=='{re.sub(regex_find, regex_repl, row['reference_variable'])}' & \"\n                        f\"reference_variable=='{re.sub(regex_find, regex_repl, row['reference_variable_new'])}'\"\n                    ).pipe(\n                        lambda df: df['value'].iloc[0] if not df.empty else np.nan,\n                    )\n                    return conversion_factor\n\n                # Calculate the conversion factor and update 'value' for rows satisfying the condition\n                rows.loc[cond, 'value'] *= rows.loc[cond].apply(\n                    lambda row: calculate_conversion(row),\n                    axis=1,\n                )\n                rows.loc[cond, 'reference_variable'] = rows.loc[cond, 'reference_variable_new']\n                rows.drop(columns=['reference_variable_new'], inplace=True)\n                if (cond & rows['value'].isnull()).any():\n                    warnings.warn(HarmoniseMappingFailure(\n                        expanded.loc[ids].loc[cond & rows['value'].isnull()],\n                        'No appropriate mapping found to convert row reference to primary output.',\n                    ))\n\n            # set missing columns from group\n            rows[group_cols] = keys\n\n            # add to return list\n            ret.append(rows)\n\n        # convert return list to dataframe and return\n        return pd.concat(ret).reset_index(drop=True) if ret else expanded.iloc[[]]\n\n    # select data\n    def aggregate(self, override: Optional[dict[str, str]] = None,\n                  drop_singular_fields: bool = True,\n                  extrapolate_period: bool = True,\n                  agg: Optional[str | list[str] | 
tuple[str]] = None,\n                  masks: Optional[list[Mask]] = None,\n                  masks_database: bool = True,\n                  **field_vals_select) -> pd.DataFrame:\n        '''Aggregates data based on specified parameters, applies masks,\n        and cleans up the resulting DataFrame.\n\n        Parameters\n        ----------\n        override: Optional[dict[str, str]]\n            Dictionary with key, value pairs of variables to override\n        drop_singular_fields: bool, optional\n            If True, drop custom fields with only one value\n        extrapolate_period: bool, optional\n            If True, extrapolate values if no value for this period is given\n        agg : Optional[str | list[str] | tuple[str]]\n            Specifies which fields to aggregate over.\n        masks : Optional[list[Mask]]\n            Specifies a list of Mask objects that will be applied to the data during aggregation.\n            These masks can be used to filter or weight the\n            data based on certain conditions defined in the Mask objects.\n        masks_database : bool, optional\n            Determines whether to include masks from databases in the aggregation process.\n            If set to `True`, masks from databases will be included along with any masks provided as function arguments.\n            If set to `False`, only the masks provided as function arguments will be applied\n\n        Returns\n        -------\n        pd.DataFrame\n            The `aggregate` method returns a pandas DataFrame that has been cleaned up and aggregated based\n            on the specified parameters and input data. The method performs aggregation over component\n            fields and cases fields, applies weights based on masks, drops rows with NaN weights, aggregates\n            with weights, inserts reference variables, sorts columns and rows, rounds values, and inserts\n            units before returning the final cleaned and aggregated DataFrame.\n\n        '''\n\n        # get selection\n        selected, var_units, var_references = self._select(override,\n                                                           drop_singular_fields,\n                                                           extrapolate_period,\n                                                           **field_vals_select)\n\n        # compile masks from databases and function argument into one list\n        if masks is not None and any(not isinstance(m, Mask) for m in masks):\n            raise Exception(\"Function argument 'masks' must contain a list of posted.masking.Mask objects.\")\n        masks = (self._masks if masks_database else []) + (masks or [])\n\n        # aggregation\n        component_fields = [\n            col_id for col_id, field in self._fields.items()\n            if field.field_type == 'component'\n        ]\n        if agg is None:\n            agg = component_fields + ['source']\n        else:\n            if isinstance(agg, tuple):\n                agg = list(agg)\n            elif not isinstance(agg, list):\n                agg = [agg]\n            for a in agg:\n                if not isinstance(a, str):\n                    raise Exception(f\"Field ID in argument 'agg' must be a string but found: {a}\")\n                if not any(a == col_id for col_id in self._fields):\n                    raise Exception(f\"Field ID in argument 'agg' is not a valid field: {a}\")\n\n        # aggregate over component fields\n        group_cols = [\n            c for c in 
selected.columns\n            if not (c == 'value' or (c in agg and c in component_fields))\n        ]\n        aggregated = selected \\\n            .groupby(group_cols, dropna=False) \\\n            .agg({'value': 'sum'}) \\\n            .reset_index()\n\n        # aggregate over cases fields\n        group_cols = [\n            c for c in aggregated.columns\n            if not (c == 'value' or c in agg)\n        ]\n        ret = []\n        for keys, rows in aggregated.groupby(group_cols, dropna=False):\n            # set default weights to 1.0\n            rows = rows.assign(weight=1.0)\n\n            # update weights by applying masks\n            for mask in masks:\n                if mask.matches(rows):\n                    rows['weight'] *= mask.get_weights(rows)\n\n            # drop all rows with weights equal to nan\n            rows.dropna(subset='weight', inplace=True)\n\n            if not rows.empty:\n                # aggregate with weights\n                out = rows \\\n                    .groupby(group_cols, dropna=False)[['value', 'weight']] \\\n                    .apply(lambda cols: pd.Series({\n                        'value': np.average(cols['value'], weights=cols['weight']),\n                    }))\n\n                # add to return list\n                ret.append(out)\n        aggregated = pd.concat(ret).reset_index()\n\n        # insert reference variables\n        var_ref_unique = {\n            var_references[var]\n            for var in aggregated['variable'].unique()\n            if not pd.isnull(var_references[var])\n        }\n        agg_append = []\n        for ref_var in var_ref_unique:\n            agg_append.append(pd.DataFrame({\n                'variable': [ref_var],\n                'value': [1.0],\n            } | {\n                col_id: ['*']\n                for col_id, field in self._fields.items() if col_id in aggregated\n            }))\n        if agg_append:\n            agg_append = pd.concat(agg_append).reset_index(drop=True)\n            for col_id, field in self._fields.items():\n                if col_id not in aggregated:\n                    continue\n                agg_append = field.select_and_expand(agg_append, col_id, aggregated[col_id].unique().tolist())\n        else:\n            agg_append = None\n\n        # convert return list to dataframe, reset index, and clean up\n        return self._cleanup(pd.concat([aggregated, agg_append]), var_units)\n\n    # clean up: sort columns and rows, round values, insert units\n    def _cleanup(self, df: pd.DataFrame, var_units: dict[str, str]) -> pd.DataFrame:\n        # sort columns and rows\n        cols_sorted = (\n            [col_id for col_id, field in self._fields.items() if isinstance(field, CustomFieldDefinition)] +\n            ['source', 'variable', 'reference_variable', 'region', 'period', 'value']\n        )\n        cols_sorted = [c for c in cols_sorted if c in df.columns]\n        df = df[cols_sorted]\n        df = df \\\n            .sort_values(by=[c for c in cols_sorted if c in df and c != 'value']) \\\n            .reset_index(drop=True)\n\n        # round values\n        df['value'] = df['value'].apply(\n            lambda cell: cell if pd.isnull(cell) else round(cell, sigfigs=4, warn=False)\n        )\n\n        # insert column containing units\n        df.insert(df.columns.tolist().index('value'), 'unit', np.nan)\n        if 'reference_variable' in df:\n            df['unit'] = df.apply(\n                lambda row: combine_units(var_units[row['variable']], 
var_units[row['reference_variable']])\n                            if not pd.isnull(row['reference_variable']) else\n                            var_units[row['variable']],\n                axis=1,\n            )\n        else:\n            df['unit'] = df['variable'].map(var_units)\n\n        return df\n
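A minimal end-to-end sketch of working with a DataSet (the parent variable and field values are hypothetical; the available fields depend on the loaded databases):

from posted.noslag import DataSet  # assumed import path

ds = DataSet('Tech|Electrolysis')                  # load TEDFs for this variable

normalised = ds.normalise()                        # default units, reference value 1.0
selected = ds.select(period=2030, region='World')  # expand fields and pick values
aggregated = ds.aggregate(agg=['source'])          # weighted average over sources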
"},{"location":"code/python/noslag/#python.posted.noslag.DataSet.data","title":"data property","text":"

pd.DataFrame: Get or set the dataframe

"},{"location":"code/python/noslag/#python.posted.noslag.DataSet.__init__","title":"__init__(parent_variable, include_databases=None, file_paths=None, check_inconsistencies=False, data=None)","text":"

Initialise parent class and fields, load data from specified databases and files

Source code in python/posted/noslag.py
def __init__(self,\n             parent_variable: str,\n             include_databases: Optional[list[str] | tuple[str]] = None,\n             file_paths: Optional[list[Path]] = None,\n             check_inconsistencies: bool = False,\n             data: Optional[pd.DataFrame] = None,\n             ):\n    '''Initialise parent class and fields, load data from specified databases and files\n\n\n    '''\n    TEBase.__init__(self, parent_variable)\n\n    # initialise fields\n    self._df = None\n    self._columns = base_columns\n    self._fields = {\n        col_id: field\n        for col_id, field in self._columns.items()\n        if isinstance(field, AbstractFieldDefinition)\n    }\n    self._masks = []\n\n    # Load data if provided, otherwise load from TEDataFiles\n    if data is not None:\n        self._df = data\n    else:\n        # read TEDataFiles and combine into dataset\n        include_databases = list(include_databases) if include_databases is not None else list(databases.keys())\n        self._df = self._load_files(include_databases, file_paths or [], check_inconsistencies)\n
"},{"location":"code/python/noslag/#python.posted.noslag.DataSet.aggregate","title":"aggregate(override=None, drop_singular_fields=True, extrapolate_period=True, agg=None, masks=None, masks_database=True, **field_vals_select)","text":"

Aggregates data based on specified parameters, applies masks, and cleans up the resulting DataFrame.

Parameters:

override (Optional[dict[str, str]], default None): Dictionary with key, value pairs of variables to override
drop_singular_fields (bool, default True): If True, drop custom fields with only one value
extrapolate_period (bool, default True): If True, extrapolate values if no value is given for a period
agg (Optional[str | list[str] | tuple[str]], default None): Specifies which fields to aggregate over
masks (Optional[list[Mask]], default None): List of Mask objects applied to the data during aggregation; these masks can be used to filter or weight the data based on conditions defined in the Mask objects
masks_database (bool, default True): Determines whether to include masks from databases in the aggregation process; if True, masks from databases are included along with any masks provided as function arguments, and if False, only the masks provided as function arguments are applied

Returns:

pd.DataFrame: A pandas DataFrame that has been cleaned up and aggregated based on the specified parameters and input data. The method aggregates over component and case fields, applies weights based on masks, drops rows with NaN weights, aggregates with weights, inserts reference variables, sorts columns and rows, rounds values, and inserts units before returning the final DataFrame.

Source code in python/posted/noslag.py
def aggregate(self, override: Optional[dict[str, str]] = None,\n              drop_singular_fields: bool = True,\n              extrapolate_period: bool = True,\n              agg: Optional[str | list[str] | tuple[str]] = None,\n              masks: Optional[list[Mask]] = None,\n              masks_database: bool = True,\n              **field_vals_select) -> pd.DataFrame:\n    '''Aggregates data based on specified parameters, applies masks,\n    and cleans up the resulting DataFrame.\n\n    Parameters\n    ----------\n    override: Optional[dict[str, str]]\n        Dictionary with key, value pairs of variables to override\n    drop_singular_fields: bool, optional\n        If True, drop custom fields with only one value\n    extrapolate_period: bool, optional\n        If True, extrapolate values if no value for this period is given\n    agg : Optional[str | list[str] | tuple[str]]\n        Specifies which fields to aggregate over.\n    masks : Optional[list[Mask]]\n        Specifies a list of Mask objects that will be applied to the data during aggregation.\n        These masks can be used to filter or weight the\n        data based on certain conditions defined in the Mask objects.\n    masks_database : bool, optional\n        Determines whether to include masks from databases in the aggregation process.\n        If set to `True`, masks from databases will be included along with any masks provided as function arguments.\n        If set to `False`, only the masks provided as function arguments will be applied\n\n    Returns\n    -------\n    pd.DataFrame\n        The `aggregate` method returns a pandas DataFrame that has been cleaned up and aggregated based\n        on the specified parameters and input data. The method performs aggregation over component\n        fields and cases fields, applies weights based on masks, drops rows with NaN weights, aggregates\n        with weights, inserts reference variables, sorts columns and rows, rounds values, and inserts\n        units before returning the final cleaned and aggregated DataFrame.\n\n    '''\n\n    # get selection\n    selected, var_units, var_references = self._select(override,\n                                                       drop_singular_fields,\n                                                       extrapolate_period,\n                                                       **field_vals_select)\n\n    # compile masks from databases and function argument into one list\n    if masks is not None and any(not isinstance(m, Mask) for m in masks):\n        raise Exception(\"Function argument 'masks' must contain a list of posted.masking.Mask objects.\")\n    masks = (self._masks if masks_database else []) + (masks or [])\n\n    # aggregation\n    component_fields = [\n        col_id for col_id, field in self._fields.items()\n        if field.field_type == 'component'\n    ]\n    if agg is None:\n        agg = component_fields + ['source']\n    else:\n        if isinstance(agg, tuple):\n            agg = list(agg)\n        elif not isinstance(agg, list):\n            agg = [agg]\n        for a in agg:\n            if not isinstance(a, str):\n                raise Exception(f\"Field ID in argument 'agg' must be a string but found: {a}\")\n            if not any(a == col_id for col_id in self._fields):\n                raise Exception(f\"Field ID in argument 'agg' is not a valid field: {a}\")\n\n    # aggregate over component fields\n    group_cols = [\n        c for c in selected.columns\n        if not (c == 
'value' or (c in agg and c in component_fields))\n    ]\n    aggregated = selected \\\n        .groupby(group_cols, dropna=False) \\\n        .agg({'value': 'sum'}) \\\n        .reset_index()\n\n    # aggregate over cases fields\n    group_cols = [\n        c for c in aggregated.columns\n        if not (c == 'value' or c in agg)\n    ]\n    ret = []\n    for keys, rows in aggregated.groupby(group_cols, dropna=False):\n        # set default weights to 1.0\n        rows = rows.assign(weight=1.0)\n\n        # update weights by applying masks\n        for mask in masks:\n            if mask.matches(rows):\n                rows['weight'] *= mask.get_weights(rows)\n\n        # drop all rows with weights equal to nan\n        rows.dropna(subset='weight', inplace=True)\n\n        if not rows.empty:\n            # aggregate with weights\n            out = rows \\\n                .groupby(group_cols, dropna=False)[['value', 'weight']] \\\n                .apply(lambda cols: pd.Series({\n                    'value': np.average(cols['value'], weights=cols['weight']),\n                }))\n\n            # add to return list\n            ret.append(out)\n    aggregated = pd.concat(ret).reset_index()\n\n    # insert reference variables\n    var_ref_unique = {\n        var_references[var]\n        for var in aggregated['variable'].unique()\n        if not pd.isnull(var_references[var])\n    }\n    agg_append = []\n    for ref_var in var_ref_unique:\n        agg_append.append(pd.DataFrame({\n            'variable': [ref_var],\n            'value': [1.0],\n        } | {\n            col_id: ['*']\n            for col_id, field in self._fields.items() if col_id in aggregated\n        }))\n    if agg_append:\n        agg_append = pd.concat(agg_append).reset_index(drop=True)\n        for col_id, field in self._fields.items():\n            if col_id not in aggregated:\n                continue\n            agg_append = field.select_and_expand(agg_append, col_id, aggregated[col_id].unique().tolist())\n    else:\n        agg_append = None\n\n    # convert return list to dataframe, reset index, and clean up\n    return self._cleanup(pd.concat([aggregated, agg_append]), var_units)\n
"},{"location":"code/python/noslag/#python.posted.noslag.DataSet.normalise","title":"normalise(override=None, inplace=False)","text":"

normalise data: default reference units, reference value equal to 1.0, default reported units

Parameters:

Name Type Description Default override Optional[dict[str, str]]

Dictionary with key, value pairs of variables to override

None inplace bool

Whether to do the normalisation in place

False

Returns:

Type Description DataFrame

If inplace is false, returns the normalised dataframe

Source code in python/posted/noslag.py
def normalise(self, override: Optional[dict[str, str]] = None, inplace: bool = False) -> pd.DataFrame | None:\n    '''\n    normalise data: default reference units, reference value equal to 1.0, default reported units\n\n    Parameters\n    ----------\n    override: Optional[dict[str,str]], optional\n        Dictionary with key, value pairs of variables to override\n    inplace: bool, optional\n        Whether to do the normalisation in place\n\n    Returns\n    -------\n    pd.DataFrame\n        If inplace is false, returns the normalised dataframe'''\n    normalised, _ = self._normalise(override)\n    if inplace:\n        self._df = normalised\n        return\n    else:\n        return normalised\n
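The R tutorial further down performs the same call; a Python counterpart, using the override keys from that tutorial, would look like:

    from posted.noslag import DataSet

    normalised = DataSet('Tech|Electrolysis').normalise(
        override={'Tech|Electrolysis|Input Capacity|elec': 'kW',
                  'Tech|Electrolysis|Output Capacity|h2': 'kW;LHV'},
    )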
"},{"location":"code/python/noslag/#python.posted.noslag.DataSet.select","title":"select(override=None, drop_singular_fields=True, extrapolate_period=True, **field_vals_select)","text":"

Select desired data from the dataframe

Parameters:

Name Type Description Default override Optional[dict[str, str]]

Dictionary with key, value pairs of variables to override

None drop_singular_fields bool

If True, drop custom fields with only one value

True extrapolate_period bool

If True, extrapolate values when no value is given for a period

True **field_vals_select

IDs of values to select

{}

Returns:

Type Description DataFrame

DataFrame with selected values

Source code in python/posted/noslag.py
def select(self,\n           override: Optional[dict[str, str]] = None,\n           drop_singular_fields: bool = True,\n           extrapolate_period: bool = True,\n           **field_vals_select) -> pd.DataFrame:\n    '''Select desired data from the dataframe\n\n    Parameters\n    ----------\n    override: Optional[dict[str, str]]\n        Dictionary with key, value pairs of variables to override\n    drop_singular_fields: bool, optional\n        If True, drop custom fields with only one value\n    extrapolate_period: bool, optional\n        If True, extrapolate values when no value is given for a period\n    **field_vals_select\n        IDs of values to select\n\n    Returns\n    -------\n    pd.DataFrame\n        DataFrame with selected values\n        '''\n    selected, var_units, var_references = self._select(\n        override,\n        drop_singular_fields,\n        extrapolate_period,\n        **field_vals_select,\n    )\n    selected.insert(selected.columns.tolist().index('variable'), 'reference_variable', np.nan)\n    selected['reference_variable'] = selected['variable'].map(var_references)\n    return self._cleanup(selected, var_units)\n
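A minimal sketch; the field and value IDs (subtech, source) follow the tutorial data shown below and should be treated as illustrative:

    from posted.noslag import DataSet

    ds = DataSet('Tech|Electrolysis')
    # keyword arguments are passed through **field_vals_select
    selected = ds.select(subtech='AEL', source='Vartiainen22')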
"},{"location":"code/python/noslag/#python.posted.noslag.HarmoniseMappingFailure","title":"HarmoniseMappingFailure","text":"

Bases: Warning

Warning raised for rows in TEDataSets where mappings fail.

Parameters:

Name Type Description Default row_data DataFrame

Contains the data of the rows to map

required message str

Contains the message of the failure

'Failure when selecting from dataset.'

Attributes:

Name Type Description row_data DataFrame

the data of the row that causes the failure

message str

explanation of the error

Source code in python/posted/noslag.py
class HarmoniseMappingFailure(Warning):\n    \"\"\"Warning raised for rows in TEDataSets where mappings fail.\n\n    Parameters\n    ----------\n    row_data: pd.DataFrame\n        Contains the Data on the rows to map\n    message: str, optional\n        Contains the message of the failure\n\n    Attributes\n    ----------\n    row_data\n        the data of the row that causes the failure\n    message\n        explanation of the error\n    \"\"\"\n    def __init__(self, row_data: pd.DataFrame, message: str = \"Failure when selecting from dataset.\"):\n        '''Save constructor arguments as public fields, compose warning message, call super constructor'''\n        # save constructor arguments as public fields\n        self.row_data: pd.DataFrame = row_data\n        self.message: str = message\n\n        # compose warning message\n        warning_message: str = message + f\"\\n{row_data}\"\n\n        # call super constructor\n        super().__init__(warning_message)\n
"},{"location":"code/python/noslag/#python.posted.noslag.HarmoniseMappingFailure.__init__","title":"__init__(row_data, message='Failure when selecting from dataset.')","text":"

Save constructor arguments as public fields, compose warning message, call super constructor

Source code in python/posted/noslag.py
def __init__(self, row_data: pd.DataFrame, message: str = \"Failure when selecting from dataset.\"):\n    '''Save constructor arguments as public fields, compose warning message, call super constructor'''\n    # save constructor arguments as public fields\n    self.row_data: pd.DataFrame = row_data\n    self.message: str = message\n\n    # compose warning message\n    warning_message: str = message + f\"\\n{row_data}\"\n\n    # call super constructor\n    super().__init__(warning_message)\n
"},{"location":"code/python/noslag/#python.posted.noslag.collect_files","title":"collect_files(parent_variable, include_databases=None)","text":"

Takes a parent variable and optional list of databases to include, checks for their existence, and collects files and directories based on the parent variable.

Parameters:

Name Type Description Default parent_variable str

Variable to collect files on

required include_databases Optional[list[str]]

List of Database IDs to collect files from

None

Returns:

Type Description list[tuple]
List of tuples containing the parent variable and the database ID for each file found in the specified directories.

Source code in python/posted/noslag.py
def collect_files(parent_variable: str, include_databases: Optional[list[str]] = None):\n    '''Takes a parent variable and optional list of databases to include,\n    checks for their existence, and collects files and directories based on the parent variable.\n\n    Parameters\n    ----------\n    parent_variable : str\n        Variable to collect files on\n    include_databases : Optional[list[str]]\n        List of Database IDs to collect files from\n\n    Returns\n    -------\n        list[tuple]\n            List of tuples containing the parent variable and the\n            database ID for each file found in the specified directories.\n\n    '''\n    if not parent_variable:\n        raise Exception('Variable may not be empty.')\n\n    # check that the requested databases to include can be found\n    if include_databases is not None:\n        for database_id in include_databases:\n            if not (database_id in databases and databases[database_id].exists()):\n                raise Exception(f\"Could not find database '{database_id}'.\")\n\n    ret = []\n    for database_id, database_path in databases.items():\n        # skip ted paths not requested to include\n        if include_databases is not None and database_id not in include_databases: continue\n\n        # find top-level file and directory\n        top_path = '/'.join(parent_variable.split('|'))\n        top_file = database_path / 'tedfs' / (top_path + '.csv')\n        top_directory = database_path / 'tedfs' / top_path\n\n        # add top-level file if it exists\n        if top_file.exists() and top_file.is_file():\n            ret.append((parent_variable, database_id))\n\n        # add all files contained in top-level directory\n        if top_directory.exists() and top_directory.is_dir():\n            for sub_file in top_directory.rglob('*.csv'):\n                sub_variable = parent_variable + '|' + sub_file.relative_to(top_directory).name.removesuffix('.csv')\n                ret.append((sub_variable, database_id))\n\n        # loop over levels\n        levels = parent_variable.split('|')\n        for l in range(0, len(levels)):\n            # find top-level file and directory\n            top_path = '/'.join(levels[:l])\n            parent_file = database_path / 'tedfs' / (top_path + '.csv')\n\n            # add parent file if it exists\n            if parent_file.exists() and parent_file.is_file():\n                parent_variable = '|'.join(levels[:l])\n                ret.append((parent_variable, database_id))\n\n    return ret\n
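A brief sketch of consuming the returned (variable, database ID) tuples:

    from posted.noslag import collect_files

    for variable, database_id in collect_files('Tech|Electrolysis'):
        print(f"{variable} (database '{database_id}')")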
"},{"location":"code/python/noslag/#python.posted.noslag.combine_units","title":"combine_units(numerator, denominator)","text":"

Combine fraction of two units into updated unit string

Parameters:

Name Type Description Default numerator str

numerator of the fraction

required denominator str

denominator of the fraction

required

Returns:

Type Description str

updated unit string after simplification

Source code in python/posted/noslag.py
def combine_units(numerator: str, denominator: str):\n    '''Combine fraction of two units into updated unit string\n\n    Parameters\n    ----------\n    numerator: str\n        numerator of the fraction\n    denominator: str\n        denominator of the fraction\n\n    Returns\n    -------\n        str\n            updated unit string after simplification\n    '''\n\n\n    ret = ureg(f\"{numerator}/({denominator})\").u\n    # check if ret is dimensionless; if not, return ret, else return the explicit quotient\n    if not ret.dimensionless:\n        return str(ret)\n    else:\n        return (f\"{numerator}/({denominator})\"\n                if '/' in denominator else\n                f\"{numerator}/{denominator}\")\n
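Two quick calls illustrating both branches (the exact spelling of the first result follows pint's canonical unit names):

    from posted.noslag import combine_units

    combine_units('kWh', 'kg')   # not dimensionless -> simplified pint string, e.g. 'kilowatt_hour / kilogram'
    combine_units('MWh', 'MWh')  # dimensionless -> the explicit quotient 'MWh/MWh' is kept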
"},{"location":"code/python/noslag/#python.posted.noslag.normalise_units","title":"normalise_units(df, level, var_units, var_flow_ids)","text":"

Takes a DataFrame with reported or reference data, along with dictionaries mapping variable units and flow IDs, and normalizes the units of the variables in the DataFrame based on the provided mappings.

Parameters:

Name Type Description Default df DataFrame

Dataframe to be normalised

required level Literal['reported', 'reference']

Specifies whether the data should be normalised on the reported or reference values

required var_units dict[str, str]

Dictionary that maps a combination of parent variable and variable to its corresponding unit. The keys in the dictionary are in the format \"{parent_variable}|{variable}\", and the values are the units associated with that variable.

required var_flow_ids dict[str, str]

Dictionary that maps a combination of parent variable and variable to a specific flow ID. This flow ID is used for unit conversion in the normalise_units function.

required

Returns:

Type Description pd.DataFrame

Normalised dataframe

Source code in python/posted/noslag.py
def normalise_units(df: pd.DataFrame, level: Literal['reported', 'reference'], var_units: dict[str, str],\n                       var_flow_ids: dict[str, str]):\n    '''\n    Takes a DataFrame with reported or reference data, along with\n    dictionaries mapping variable units and flow IDs, and normalizes the units of the variables in the\n    DataFrame based on the provided mappings.\n\n    Parameters\n    ----------\n    df : pd.DataFrame\n        Dataframe to be normalised\n    level : Literal['reported', 'reference']\n        Specifies whether the data should be normalised on the reported or reference values\n    var_units : dict[str, str]\n        Dictionary that maps a combination of parent variable and variable\n        to its corresponding unit. The keys in the dictionary are in the format \"{parent_variable}|{variable}\",\n        and the values are the units associated with that variable.\n    var_flow_ids : dict[str, str]\n        Dictionary that maps a combination of parent variable and variable to a\n        specific flow ID. This flow ID is used for unit conversion in the `normalise_units` function.\n\n    Returns\n    -------\n        pd.DataFrame\n            Normalised dataframe\n\n    '''\n\n    prefix = '' if level == 'reported' else 'reference_'\n    var_col_id = prefix + 'variable'\n    value_col_id = prefix + 'value'\n    unit_col_id = prefix + 'unit'\n    df_tmp = pd.concat([\n        df,\n        df.apply(\n            lambda row: var_units[f\"{row['parent_variable']}|{row[var_col_id]}\"]\n            if isinstance(row[var_col_id], str) else np.nan,\n            axis=1,\n        )\n        .to_frame('target_unit'),\n        df.apply(\n            lambda row: var_flow_ids[f\"{row['parent_variable']}|{row[var_col_id]}\"]\n            if isinstance(row[var_col_id], str) else np.nan,\n            axis=1,\n        )\n        .to_frame('target_flow_id'),\n    ], axis=1)\n\n    # Apply unit conversion\n    conv_factor = df_tmp.apply(\n        lambda row: unit_convert(row[unit_col_id], row['target_unit'], row['target_flow_id'])\n        if not np.isnan(row[value_col_id]) else 1.0,\n        axis=1,\n    )\n\n    # Update value column with conversion factor\n    df_tmp[value_col_id] *= conv_factor\n\n    # If level is 'reported', update uncertainty column with conversion factor\n    if level == 'reported':\n        df_tmp['uncertainty'] *= conv_factor\n\n    # Update unit columns\n    df_tmp[unit_col_id] = df_tmp['target_unit']\n\n    # Drop unnecessary columns and return\n    return df_tmp.drop(columns=['target_unit', 'target_flow_id'])\n
"},{"location":"code/python/noslag/#python.posted.noslag.normalise_values","title":"normalise_values(df)","text":"

Takes a DataFrame as input, normalizes the 'value' and 'uncertainty' columns by the reference value, and updates the 'reference_value' column accordingly.

Parameters:

Name Type Description Default df DataFrame

Dataframe to be normalised

required

Returns:

Type Description pd.DataFrame

Returns a modified DataFrame where the 'value' column has been divided by the 'reference_value' column (or 1.0 if 'reference_value' is null), the 'uncertainty' column has been divided by the 'reference_value' column, and the 'reference_value' column has been replaced with 1.0 where it was not null and with NaN otherwise.

Source code in python/posted/noslag.py
def normalise_values(df: pd.DataFrame):\n    '''Takes a DataFrame as input, normalizes the 'value' and 'uncertainty'\n    columns by the reference value, and updates the 'reference_value' column accordingly.\n\n    Parameters\n    ----------\n    df : pd.DataFrame\n        Dataframe to be normalised\n\n    Returns\n    -------\n        pd.DataFrame\n            Returns a modified DataFrame where the 'value' column has been\n            divided by the 'reference_value' column (or 1.0 if 'reference_value' is null), the 'uncertainty'\n            column has been divided by the 'reference_value' column, and the 'reference_value' column has been\n            replaced with 1.0 where it was not null and with NaN otherwise\n\n    '''\n    # Calculate reference value\n    reference_value = df.apply(\n        lambda row:\n            row['reference_value']\n            if not pd.isnull(row['reference_value']) else\n            1.0,\n        axis=1,\n    )\n    # Calculate new value, reference value and uncertainty\n    value_new = df['value'] / reference_value\n    uncertainty_new = df['uncertainty'] / reference_value\n    reference_value_new = df.apply(\n        lambda row:\n            1.0\n            if not pd.isnull(row['reference_value']) else\n            np.nan,\n        axis=1,\n    )\n    # Assign new values to dataframe and return\n    return df.assign(value=value_new, uncertainty=uncertainty_new, reference_value=reference_value_new)\n
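A small worked example on a hand-built frame with the three columns the function expects:

    import numpy as np
    import pandas as pd
    from posted.noslag import normalise_values

    df = pd.DataFrame({
        'value': [200.0, 5.0],
        'uncertainty': [20.0, np.nan],
        'reference_value': [2.0, np.nan],
    })
    print(normalise_values(df))
    # row 0: value 100.0, uncertainty 10.0, reference_value 1.0
    # row 1: value 5.0 (missing reference treated as 1.0), reference_value stays NaN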
"},{"location":"code/python/read/","title":"read","text":""},{"location":"code/python/read/#python.posted.read.read_csv_file","title":"read_csv_file(fpath)","text":"

Read CSV data file

Parameters:

Name Type Description Default fpath str

Path of the file to read

required

Returns:

Type Description pd.DataFrame

DataFrame containing the data of the CSV

Source code in python/posted/read.py
def read_csv_file(fpath: str):\n    \"\"\"\n    Read CSV data file\n\n    Parameters\n    ----------\n    fpath: str\n        Path of the file to read\n    Returns\n    -------\n        pd.DataFrame\n            DataFrame containing the data of the CSV\n    \"\"\"\n    return pd.read_csv(fpath)\n
"},{"location":"code/python/read/#python.posted.read.read_yml_file","title":"read_yml_file(fpath)","text":"

Read YAML config file

Parameters:

Name Type Description Default fpath Path

Path of the file to read

required

Returns:

Type Description dict

Dictionary containing config info

Source code in python/posted/read.py
def read_yml_file(fpath: Path):\n    \"\"\"\n    Read YAML config file\n\n    Parameters\n    ----------\n    fpath: Path\n        Path of the file to read\n    Returns\n    -------\n        dict\n            Dictionary containing config info\n    \"\"\"\n    with open(fpath, 'r', encoding='utf-8') as fhandle:\n        return yaml.load(stream=fhandle, Loader=yaml.FullLoader)\n
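A usage sketch; the file path is hypothetical:

    from pathlib import Path
    from posted.read import read_yml_file

    config = read_yml_file(Path('definitions/flows.yml'))  # hypothetical path
    print(sorted(config))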
"},{"location":"code/python/sources/","title":"sources","text":""},{"location":"code/python/sources/#python.posted.sources.dump_sources","title":"dump_sources(file_path)","text":"

Parses BibTeX files, formats the data, and exports it into a CSV or Excel file using pandas.

Parameters:

Name Type Description Default file_path str | Path

Path to the file where the formatted sources should be exported to. It can be either a string representing the file path or a Path object from the pathlib module.

required Source code in python/posted/sources.py
def dump_sources(file_path: str | Path):\n    '''Parses BibTeX files, formats the data, and exports it into a CSV or Excel\n    file using pandas.\n\n    Parameters\n    ----------\n    file_path : str | Path\n        Path to the file where the formatted sources should be exported to.\n         It can be either a string representing the file path or a `Path` object\n        from the `pathlib` module.\n\n    '''\n    # convert string to pathlib.Path if necessary\n    if isinstance(file_path, str):\n        file_path = Path(file_path)\n\n    # define styles and formats\n    style = find_plugin('pybtex.style.formatting', 'apa')()\n    # format_html = find_plugin('pybtex.backends', 'html')()\n    format_plain = find_plugin('pybtex.backends', 'plaintext')()\n\n    # parse bibtex file\n    parser = bibtex.Parser()\n\n    # loop over databases\n    formatted = []\n    for database_path in databases.values():\n        bib_data = parser.parse_file(database_path / 'sources.bib')\n        formatted += format_sources(bib_data, style, format_plain)\n\n    # convert to dataframe\n    df = pd.DataFrame.from_records(formatted)\n\n    # dump dataframe with pandas to CSV or Excel spreadsheet\n    if file_path.suffix == '.csv':\n        df.to_csv(Path(file_path))\n    elif file_path.suffix in ['.xls', '.xlsx']:\n        df.to_excel(Path(file_path))\n    else:\n        raise Exception('Unknown file suffix!')\n
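The export format follows from the file suffix; both calls below are sketches assuming the POSTED databases are set up:

    from posted.sources import dump_sources

    dump_sources('sources.csv')    # CSV export
    dump_sources('sources.xlsx')   # Excel export; any other suffix raises an exception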
"},{"location":"code/python/sources/#python.posted.sources.format_sources","title":"format_sources(bib_data, style, form, exclude_fields=None)","text":"

Takes bibliographic data, a citation style, a citation form, and optional excluded fields, and returns a formatted list of sources based on the specified style and form.

Parameters:

Name Type Description Default bib_data

Contains bibliographic information, such as author, title, references or citations.

required style

Specifies the formatting style for the bibliography entries.

required form

Specifies the format in which the citation should be rendered. It determines how the citation information will be displayed or structured in the final output.

required exclude_fields

Specifies a list of fields that should be excluded from the final output. These fields will be removed from the entries before formatting and returning the citation data.

None

Returns:

Type Description list[dict]

A list of dictionaries containing the identifier, citation, DOI, and URL information for each entry in the bibliography data, formatted according to the specified style and form, with any excluded fields removed.

Source code in python/posted/sources.py
def format_sources(bib_data, style, form, exclude_fields = None):\n    '''\n    Takes bibliographic data, a citation style, a citation form, and\n    optional excluded fields, and returns a formatted list of sources based on the specified style and\n    form.\n\n    Parameters\n    ----------\n    bib_data\n        Contains bibliographic information, such as author, title, references or citations.\n    style\n        Specifies the formatting style for the bibliography entries.\n    form\n        Specifies the format in which the citation should be rendered. It determines how the citation information will be displayed or\n        structured in the final output.\n    exclude_fields\n        Specifies a list of fields that should be excluded from the final output. These fields will be removed from the entries before\n    formatting and returning the citation data.\n\n    Returns\n    -------\n        list[dict]\n            A list of dictionaries containing the identifier, citation, DOI, and URL information for each entry\n            in the bibliography data, formatted according to the specified style and form, with any excluded\n            fields removed.\n\n    '''\n    exclude_fields = exclude_fields or []\n\n    if exclude_fields:\n        for entry in bib_data.entries.values():\n            for ef in exclude_fields:\n                if ef in entry.fields.__dict__['_dict']:\n                    del entry.fields.__dict__['_dict'][ef]\n\n    ret = []\n    for identifier in bib_data.entries:\n        entry = bib_data.entries[identifier]\n        fields = entry.fields.__dict__['_dict']\n        ret.append({\n            'identifier': identifier,\n            'citation': next(style.format_entries([entry])).text.render(form),\n            'doi': fields['doi'] if 'doi' in fields else '',\n            'url': fields['url'] if 'url' in fields else '',\n        })\n\n    return ret\n
"},{"location":"code/python/tedf/","title":"tedf","text":""},{"location":"code/python/tedf/#python.posted.tedf.TEBase","title":"TEBase","text":"

Base Class for Technoeconomic Data

Parameters:

Name Type Description Default parent_variable str

Variable from which Data should be collected

required Source code in python/posted/tedf.py
class TEBase:\n    \"\"\"\n    Base Class for Technoeconomic Data\n\n    Parameters\n    ----------\n    parent_variable: str\n        Variable from which Data should be collected\n    \"\"\"\n    # initialise\n    def __init__(self, parent_variable: str):\n        \"\"\" Set parent variable and technology specifications (var_specs) from input\"\"\"\n        self._parent_variable: str = parent_variable\n        self._var_specs: dict = {key: val for key, val in variables.items() if key.startswith(self._parent_variable)}\n\n    @property\n    def parent_variable(self) -> str:\n        \"\"\" Get parent variable\"\"\"\n        return self._parent_variable\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEBase.parent_variable","title":"parent_variable: str property","text":"

Get parent variable

"},{"location":"code/python/tedf/#python.posted.tedf.TEBase.__init__","title":"__init__(parent_variable)","text":"

Set parent variable and technology specifications (var_specs) from input

Source code in python/posted/tedf.py
def __init__(self, parent_variable: str):\n    \"\"\" Set parent variable and technology specifications (var_specs) from input\"\"\"\n    self._parent_variable: str = parent_variable\n    self._var_specs: dict = {key: val for key, val in variables.items() if key.startswith(self._parent_variable)}\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF","title":"TEDF","text":"

Bases: TEBase

Class to store Technoeconomic DataFiles

Parameters:

Name Type Description Default parent_variable str

Variable from which Data should be collected

required database_id str

Database from which to load data

'public' file_path Optional[Path]

File Path from which to load file

None data Optional[DataFrame]

Specific Technoeconomic data

None

Methods:

Name Description load

Load TEDataFile if it has not been read yet

read

Read TEDF from CSV file

write

Write TEDF to CSV file

check

Check if TEDF is consistent

check_row

Check that row in TEDF is consistent and return all inconsistencies found for row

Source code in python/posted/tedf.py
class TEDF(TEBase):\n    \"\"\"\n    Class to store Technoeconomic DataFiles\n\n    Parameters\n    ----------\n    parent_variable: str\n        Variable from which Data should be collected\n    database_id: str, default: public\n        Database from which to load data\n    file_path: Path, optional\n        File Path from which to load file\n    data: pd.DataFrame, optional\n        Specific Technoeconomic data\n\n    Methods\n    ----------\n    load\n        Load TEDataFile if it has not been read yet\n    read\n        Read TEDF from CSV file\n    write\n        Write TEDF to CSV file\n    check\n        Check if TEDF is consistent\n    check_row\n        Check that row in TEDF is consistent and return all inconsistencies found for row\n    \"\"\"\n\n    # typed declarations\n    _df: None | pd.DataFrame\n    _inconsistencies: dict\n    _file_path: None | Path\n    _fields: dict[str, AbstractFieldDefinition]\n    _columns: dict[str, AbstractColumnDefinition]\n\n\n    def __init__(self,\n                 parent_variable: str,\n                 database_id: str = 'public',\n                 file_path: Optional[Path] = None,\n                 data: Optional[pd.DataFrame] = None,\n                 ):\n        \"\"\" Initialise parent class and object fields\"\"\"\n        TEBase.__init__(self, parent_variable)\n\n        self._df = data\n        self._inconsistencies = {}\n        self._file_path = (\n            None if data is not None else\n            file_path if file_path is not None else\n            databases[database_id] / 'tedfs' / ('/'.join(self._parent_variable.split('|')) + '.csv')\n        )\n        self._fields, comments = read_fields(self._parent_variable)\n        self._columns = self._fields | base_columns | comments\n\n    @property\n    def file_path(self) -> Path:\n        \"\"\" Get or set the file path\"\"\"\n        return self._file_path\n\n    @file_path.setter\n    def file_path(self, file_path: Path):\n        self._file_path = file_path\n\n\n    def load(self):\n        \"\"\"\n        load TEDataFile (only if it has not been read yet)\n\n        Warns\n        ----------\n        warning\n            Warns if TEDF is already loaded\n        Returns\n        --------\n            TEDF\n                Returns the TEDF object it is called on\n        \"\"\"\n        if self._df is None:\n            self.read()\n        else:\n            warnings.warn('TEDF is already loaded. Please execute .read() if you want to load from file again.')\n\n        return self\n\n    def read(self):\n        \"\"\"\n        read TEDF from CSV file\n\n        Raises\n        ------\n        Exception\n            If there is no file path from which to read\n        \"\"\"\n\n        if self._file_path is None:\n            raise Exception('Cannot read from file, as this TEDF object has been created from a dataframe.')\n\n        # read CSV file\n        self._df = pd.read_csv(\n            self._file_path,\n            sep=',',\n            quotechar='\"',\n            encoding='utf-8',\n        )\n\n        # check column IDs match base columns and fields\n        if not all(c in self._columns for c in self._df.columns):\n            raise Exception(f\"Column IDs used in CSV file do not match columns definition: {self._df.columns.tolist()}\")\n\n        # adjust row index to start at 1 instead of 0\n        self._df.index += 1\n\n        # insert missing columns and reorder via reindexing, then update dtypes\n        df_new = self._df.reindex(columns=list(self._columns.keys()))\n        for col_id, col in self._columns.items():\n            if col_id in self._df:\n                continue\n            df_new[col_id] = df_new[col_id].astype(col.dtype)\n            df_new[col_id] = col.default\n        self._df = df_new\n\n    def write(self):\n        \"\"\"\n        Write TEDF to CSV file\n\n        Raises\n        ------\n        Exception\n            If there is no file path that specifies where to write\n        \"\"\"\n        if self._file_path is None:\n            raise Exception('Cannot write to file, as this TEDataFile object has been created from a dataframe. Please '\n                            'first set a file path on this object.')\n\n        self._df.to_csv(\n            self._file_path,\n            index=False,\n            sep=',',\n            quotechar='\"',\n            encoding='utf-8',\n            na_rep='',\n        )\n\n\n    @property\n    def data(self) -> pd.DataFrame:\n        \"\"\"Get data, i.e. access dataframe\"\"\"\n        return self._df\n\n    @property\n    def inconsistencies(self) -> dict[int, TEDFInconsistencyException]:\n        \"\"\"Get inconsistencies\"\"\"\n        return self._inconsistencies\n\n    def check(self, raise_exception: bool = True):\n        \"\"\"\n        Check that TEDF is consistent and add inconsistencies to internal parameter\n\n        Parameters\n        ----------\n        raise_exception: bool, default: True\n            If exception is to be raised\n        \"\"\"\n        self._inconsistencies = {}\n\n        # check row consistency for each row individually\n        for row_id in self._df.index:\n            self._inconsistencies[row_id] = self.check_row(row_id, raise_exception=raise_exception)\n\n    def check_row(self, row_id: int, raise_exception: bool) -> list[TEDFInconsistencyException]:\n        \"\"\"\n        Check that row in TEDF is consistent and return all inconsistencies found for row\n\n        Parameters\n        ----------\n        row_id: int\n            Index of the row to check\n        raise_exception: bool\n            If exception is to be raised\n\n        Returns\n        -------\n            list\n                List of inconsistencies\n        \"\"\"\n        row = self._df.loc[row_id]\n        ikwargs = {'row_id': row_id, 'file_path': self._file_path, 'raise_exception': raise_exception}\n        ret = []\n\n        # check whether fields are among those defined in the technology specs\n        for col_id, col in self._columns.items():\n            cell = row[col_id]\n            if col.col_type == 'variable':\n                cell = cell if pd.isnull(cell) else self.parent_variable + '|' + cell\n            if not col.is_allowed(cell):\n                ret.append(new_inconsistency(\n                    message=f\"Invalid cell for column of type '{col.col_type}': {cell}\", col_id=col_id, **ikwargs,\n                ))\n\n        # check that reported and reference units match variable definition\n        for col_prefix in ['', 'reference_']:\n            raw_variable = row[col_prefix + 'variable']\n            col_id = col_prefix + 'unit'\n            unit = row[col_id]\n            if pd.isnull(raw_variable) and pd.isnull(unit):\n                continue\n            if pd.isnull(raw_variable) or pd.isnull(unit):\n                ret.append(new_inconsistency(\n                    message=f\"Variable and unit must either both be set or both be unset: {raw_variable} -- {unit}\",\n                    col_id=col_id, **ikwargs,\n                ))\n            variable = self.parent_variable + '|' + raw_variable\n            var_specs = variables[variable]\n            if 'dimension' not in var_specs:\n                if unit is not np.nan:\n                    ret.append(new_inconsistency(\n                        message=f\"Unexpected unit '{unit}' for {col_id}.\", col_id=col_id, **ikwargs,\n                    ))\n                continue\n            dimension = var_specs['dimension']\n\n            flow_id = var_specs['flow_id'] if 'flow_id' in var_specs else None\n            allowed, message = unit_allowed(unit=unit, flow_id=flow_id, dimension=dimension)\n            if not allowed:\n                ret.append(new_inconsistency(message=message, col_id=col_id, **ikwargs))\n\n        return ret\n
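The Python counterpart of the R call TEDF$new("Tech|Electrolysis")$load() shown in the tutorial below:

    from posted.tedf import TEDF

    tedf = TEDF('Tech|Electrolysis').load()
    tedf.check(raise_exception=False)  # collect inconsistencies instead of raising
    print(tedf.inconsistencies)
    print(tedf.data.head())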
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.data","title":"data: pd.DataFrame property","text":"

Get data, i.e. access dataframe

"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.file_path","title":"file_path: Path property writable","text":"

Get or set the file path

"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.inconsistencies","title":"inconsistencies: dict[int, TEDFInconsistencyException] property","text":"

Get inconsistencies

"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.__init__","title":"__init__(parent_variable, database_id='public', file_path=None, data=None)","text":"

Initialise parent class and object fields

Source code in python/posted/tedf.py
def __init__(self,\n             parent_variable: str,\n             database_id: str = 'public',\n             file_path: Optional[Path] = None,\n             data: Optional[pd.DataFrame] = None,\n             ):\n    \"\"\" Initialise parent class and object fields\"\"\"\n    TEBase.__init__(self, parent_variable)\n\n    self._df = data\n    self._inconsistencies = {}\n    self._file_path = (\n        None if data is not None else\n        file_path if file_path is not None else\n        databases[database_id] / 'tedfs' / ('/'.join(self._parent_variable.split('|')) + '.csv')\n    )\n    self._fields, comments = read_fields(self._parent_variable)\n    self._columns = self._fields | base_columns | comments\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.check","title":"check(raise_exception=True)","text":"

Check that TEDF is consistent and add inconsistencies to internal parameter

Parameters:

Name Type Description Default raise_exception bool

If exception is to be raised

True Source code in python/posted/tedf.py
def check(self, raise_exception: bool = True):\n    \"\"\"\n    Check that TEDF is consistent and add inconsistencies to internal parameter\n\n    Parameters\n    ----------\n    raise_exception: bool, default: True\n        If exception is to be raised\n    \"\"\"\n    self._inconsistencies = {}\n\n    # check row consistency for each row individually\n    for row_id in self._df.index:\n        self._inconsistencies[row_id] = self.check_row(row_id, raise_exception=raise_exception)\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.check_row","title":"check_row(row_id, raise_exception)","text":"

Check that row in TEDF is consistent and return all inconsistencies found for row

Parameters:

Name Type Description Default row_id int

Index of the row to check

required raise_exception bool

If exception is to be raised

required

Returns:

Type Description list

List of inconsistencies

Source code in python/posted/tedf.py
def check_row(self, row_id: int, raise_exception: bool) -> list[TEDFInconsistencyException]:\n    \"\"\"\n    Check that row in TEDF is consistent and return all inconsistencies found for row\n\n    Parameters\n    ----------\n    row_id: int\n        Index of the row to check\n    raise_exception: bool\n        If exception is to be raised\n\n    Returns\n    -------\n        list\n            List of inconsistencies\n    \"\"\"\n    row = self._df.loc[row_id]\n    ikwargs = {'row_id': row_id, 'file_path': self._file_path, 'raise_exception': raise_exception}\n    ret = []\n\n    # check whether fields are among those defined in the technology specs\n    for col_id, col in self._columns.items():\n        cell = row[col_id]\n        if col.col_type == 'variable':\n            cell = cell if pd.isnull(cell) else self.parent_variable + '|' + cell\n        if not col.is_allowed(cell):\n            ret.append(new_inconsistency(\n                message=f\"Invalid cell for column of type '{col.col_type}': {cell}\", col_id=col_id, **ikwargs,\n            ))\n\n    # check that reported and reference units match variable definition\n    for col_prefix in ['', 'reference_']:\n        raw_variable = row[col_prefix + 'variable']\n        col_id = col_prefix + 'unit'\n        unit = row[col_id]\n        if pd.isnull(raw_variable) and pd.isnull(unit):\n            continue\n        if pd.isnull(raw_variable) or pd.isnull(unit):\n            ret.append(new_inconsistency(\n                message=f\"Variable and unit must either both be set or both be unset: {raw_variable} -- {unit}\",\n                col_id=col_id, **ikwargs,\n            ))\n        variable = self.parent_variable + '|' + raw_variable\n        var_specs = variables[variable]\n        if 'dimension' not in var_specs:\n            if unit is not np.nan:\n                ret.append(new_inconsistency(\n                    message=f\"Unexpected unit '{unit}' for {col_id}.\", col_id=col_id, **ikwargs,\n                ))\n            continue\n        dimension = var_specs['dimension']\n\n        flow_id = var_specs['flow_id'] if 'flow_id' in var_specs else None\n        allowed, message = unit_allowed(unit=unit, flow_id=flow_id, dimension=dimension)\n        if not allowed:\n            ret.append(new_inconsistency(message=message, col_id=col_id, **ikwargs))\n\n    return ret\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.load","title":"load()","text":"

load TEDataFile (only if it has not been read yet)

Warns:

Type Description warning

Warns if TEDF is already loaded

Returns:

Type Description TEDF

Returns the TEDF object it is called on

Source code in python/posted/tedf.py
def load(self):\n    \"\"\"\n    load TEDataFile (only if it has not been read yet)\n\n    Warns\n    ----------\n    warning\n        Warns if TEDF is already loaded\n    Returns\n    --------\n        TEDF\n            Returns the TEDF object it is called on\n    \"\"\"\n    if self._df is None:\n        self.read()\n    else:\n        warnings.warn('TEDF is already loaded. Please execute .read() if you want to load from file again.')\n\n    return self\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.read","title":"read()","text":"

read TEDF from CSV file

Raises:

Type Description Exception

If there is no file path from which to read

Source code in python/posted/tedf.py
def read(self):\n    \"\"\"\n    read TEDF from CSV file\n\n    Raises\n    ------\n    Exception\n        If there is no file path from which to read\n    \"\"\"\n\n    if self._file_path is None:\n        raise Exception('Cannot read from file, as this TEDF object has been created from a dataframe.')\n\n    # read CSV file\n    self._df = pd.read_csv(\n        self._file_path,\n        sep=',',\n        quotechar='\"',\n        encoding='utf-8',\n    )\n\n    # check column IDs match base columns and fields\n    if not all(c in self._columns for c in self._df.columns):\n        raise Exception(f\"Column IDs used in CSV file do not match columns definition: {self._df.columns.tolist()}\")\n\n    # adjust row index to start at 1 instead of 0\n    self._df.index += 1\n\n    # insert missing columns and reorder via reindexing, then update dtypes\n    df_new = self._df.reindex(columns=list(self._columns.keys()))\n    for col_id, col in self._columns.items():\n        if col_id in self._df:\n            continue\n        df_new[col_id] = df_new[col_id].astype(col.dtype)\n        df_new[col_id] = col.default\n    self._df = df_new\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.write","title":"write()","text":"

Write TEDF to CSV file

Raises:

Type Description Exception

If there is no file path that specifies where to write

Source code in python/posted/tedf.py
def write(self):\n    \"\"\"\n    Write TEDF to CSV file\n\n    Raises\n    ------\n    Exception\n        If there is no file path that specifies where to write\n    \"\"\"\n    if self._file_path is None:\n        raise Exception('Cannot write to file, as this TEDataFile object has been created from a dataframe. Please '\n                        'first set a file path on this object.')\n\n    self._df.to_csv(\n        self._file_path,\n        index=False,\n        sep=',',\n        quotechar='\"',\n        encoding='utf-8',\n        na_rep='',\n    )\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDFInconsistencyException","title":"TEDFInconsistencyException","text":"

Bases: Exception

Exception raised for inconsistencies in TEDFs.

Attributes: message -- message explaining the inconsistency row_id -- row where the inconsistency occurs col_id -- column where the inconsistency occurs file_path -- path to the file where the inconsistency occurs

Source code in python/posted/tedf.py
class TEDFInconsistencyException(Exception):\n    \"\"\"Exception raised for inconsistencies in TEDFs.\n\n    Attributes:\n        message -- message explaining the inconsistency\n        row_id -- row where the inconsistency occurs\n        col_id -- column where the inconsistency occurs\n        file_path -- path to the file where the inconsistency occurs\n    \"\"\"\n    def __init__(self, message: str = \"Inconsistency detected\", row_id: None | int = None,\n                 col_id: None | str = None, file_path: None | Path = None):\n        self.message: str = message\n        self.row_id: None | int = row_id\n        self.col_id: None | str = col_id\n        self.file_path: None | Path = file_path\n\n        # add tokens at the end of the error message\n        message_tokens = []\n        if file_path is not None:\n            message_tokens.append(f\"file \\\"{file_path}\\\"\")\n        if row_id is not None:\n            message_tokens.append(f\"line {row_id}\")\n        if col_id is not None:\n            message_tokens.append(f\"in column \\\"{col_id}\\\"\")\n\n        # compose error message from tokens\n        exception_message: str = message\n        if message_tokens:\n            exception_message += f\"\\n    \" + (\", \".join(message_tokens)).capitalize()\n\n        super().__init__(exception_message)\n
"},{"location":"code/python/tedf/#python.posted.tedf.new_inconsistency","title":"new_inconsistency(raise_exception, **kwargs)","text":"

Create new inconsistency object based on kwargs

Source code in python/posted/tedf.py
def new_inconsistency(raise_exception: bool, **kwargs) -> TEDFInconsistencyException:\n    \"\"\"\n    Create new inconsistency object based on kwargs\n\n    Parameters\n    ----------\n    raise_exception: bool\n        If True, raise the inconsistency as an exception instead of returning it\n    **kwargs\n        Keyword arguments passed on to the TEDFInconsistencyException constructor\n    \"\"\"\n    exception = TEDFInconsistencyException(**kwargs)\n    if raise_exception:\n        raise exception\n    else:\n        return exception\n
"},{"location":"code/python/units/","title":"units","text":""},{"location":"code/python/units/#python.posted.units.ctx_kwargs_for_variants","title":"ctx_kwargs_for_variants(variants, flow_id)","text":"

Generates a dictionary of context keyword arguments for unit conversion from the flow specs

Parameters:

Name Type Description Default variants list[str | None]

A list of variant names or None values.

required flow_id str

Identifier for the specific flow or process.

required

Returns:

Type Description dict

Dictionary containing default conversion parameters for energy content and density, overridden by the values implied by the given variants.

Source code in python/posted/units.py
def ctx_kwargs_for_variants(variants: list[str | None], flow_id: str):\n    '''\n    Generates a dictionary of context keyword arguments for unit conversion from the flow specs\n\n\n    Parameters\n    ----------\n    variants : list[str | None]\n        A list of variant names or None values.\n    flow_id : str\n        Identifier for the specific flow or process.\n\n\n    Returns\n    -------\n        dict\n            Dictionary containing default conversion parameters for energy content and density,\n            overridden by the values implied by the given variants.\n\n    '''\n    # set default conversion parameters to NaN, such that conversion fails with a meaningful error message in their\n    # absence. when this is left out, the conversion will throw a division-by-zero error instead.\n    ctx_kwargs = {'energycontent': np.nan, 'density': np.nan}\n    ctx_kwargs |= {\n        unit_variants[v]['param']: flows[flow_id][unit_variants[v]['value']]\n        for v in variants if v is not None\n    }\n    return ctx_kwargs\n
"},{"location":"code/python/units/#python.posted.units.split_off_variant","title":"split_off_variant(unit)","text":"

Takes a unit string and splits it into a pure unit and a variant, if present, based on a semicolon separator, e.g. MWh;LHV into MWh and LHV.

Parameters:

Name Type Description Default unit str

String that may contain a unit and its variant separated by a semicolon.

required

Returns:

Type Description tuple

Returns a tuple containing the pure unit and the variant (if present) after splitting the input unit string by semi-colons.

Source code in python/posted/units.py
def split_off_variant(unit: str):\n    '''\n    Takes a unit string and splits it into a pure unit and a variant,\n    if present, based on a semicolon separator, e.g. MWh;LHV into MWh and LHV.\n\n    Parameters\n    ----------\n    unit : str\n        String that may contain a unit and its variant separated by a semicolon.\n\n    Returns\n    -------\n        tuple\n            Returns a tuple containing the pure unit and the variant (if\n            present) after splitting the input unit string by semi-colons.\n\n    '''\n    tokens = unit.split(';')\n    if len(tokens) == 1:\n        pure_unit = unit\n        variant = None\n    elif len(tokens) > 2:\n        raise Exception(f\"Too many semi-colons in unit '{unit}'.\")\n    else:\n        pure_unit, variant = tokens\n    if variant is not None and variant not in unit_variants:\n        raise Exception(f\"Cannot find unit variant '{variant}'.\")\n    return pure_unit, variant\n
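The docstring example, spelled out:

    from posted.units import split_off_variant

    split_off_variant('MWh;LHV')  # -> ('MWh', 'LHV')
    split_off_variant('kg')       # -> ('kg', None)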
"},{"location":"code/python/units/#python.posted.units.unit_allowed","title":"unit_allowed(unit, flow_id, dimension)","text":"

Checks if a given unit is allowed for a specific dimension and flow ID, handling unit variants and compatibility checks.

Parameters:

Name Type Description Default unit str

The unit to check

required flow_id None | str

Identifier for the specific flow or process.

required dimension str

Expected dimension of the unit.

required

Returns:

Type Description tuple(bool, str)

Tuple with a boolean value and a message. The boolean value indicates whether the unit is allowed based on the provided conditions, and the message provides additional information or an error message related to the unit validation process.

Source code in python/posted/units.py
def unit_allowed(unit: str, flow_id: None | str, dimension: str):\n    '''Checks if a given unit is allowed for a specific dimension and flow ID,\n    handling unit variants and compatibility checks.\n\n    Parameters\n    ----------\n    unit : str\n        The unit to check\n    flow_id : None | str\n        Identifier for the specific flow or process.\n    dimension : str\n        Expected dimension of the unit.\n\n    Returns\n    -------\n        tuple(bool, str)\n            Tuple with a boolean value and a message. The boolean value indicates\n            whether the unit is allowed based on the provided conditions, and the message\n            provides additional information or an error message related to the unit validation process.\n    '''\n    if not isinstance(unit, str):\n        raise Exception('Unit to check must be string.')\n\n    # split unit into pure unit and variant\n    try:\n        unit, variant = split_off_variant(unit)\n    except:\n        return False, f\"Inconsistent unit variant format in '{unit}'.\"\n\n    try:\n        unit_registered = ureg(unit)\n    except:\n        return False, f\"Unknown unit '{unit}'.\"\n\n    if flow_id is None:\n        if '[flow]' in dimension:\n            return False, f\"No flow_id provided even though [flow] is in dimension.\"\n        if variant is not None:\n            return False, f\"Unexpected unit variant '{variant}' for dimension [{dimension}].\"\n        if (dimension == 'dimensionless' and unit_registered.dimensionless) or unit_registered.check(dimension):\n            return True, ''\n        else:\n            return False, f\"Unit '{unit}' does not match expected dimension [{dimension}].\"\n    else:\n        if '[flow]' not in dimension:\n            if (dimension == 'dimensionless' and unit_registered.dimensionless) or unit_registered.check(dimension):\n                return True, ''\n        else:\n            check_dimensions = [\n                (dimension.replace(\n                    '[flow]', f\"[{dimension_base}]\"), dimension_base, base_unit)\n                for dimension_base, base_unit in [('mass', 'kg'), ('energy', 'kWh'), ('volume', 'm**3')]\n            ]\n            for check_dimension, check_dimension_base, check_base_unit in check_dimensions:\n                if unit_registered.check(check_dimension):\n                    if variant is None:\n                        if any(\n                            (check_dimension_base == variant_specs['dimension']) and\n                            flows[flow_id][variant_specs['value']] is not np.nan\n                            for variant, variant_specs in unit_variants.items()\n                        ):\n                            return False, (f\"Missing unit variant for dimension [{check_dimension_base}] for unit \"\n                                           f\"'{unit}'.\")\n                    elif unit_variants[variant]['dimension'] != check_dimension_base:\n                        return False, f\"Variant '{variant}' incompatible with unit '{unit}'.\"\n\n                    default_unit, default_variant = split_off_variant(\n                        flows[flow_id]['default_unit'])\n                    ctx_kwargs = ctx_kwargs_for_variants(\n                        [variant, default_variant], flow_id)\n\n                    if ureg(check_base_unit).is_compatible_with(default_unit, 'flocon', **ctx_kwargs):\n                        return True, ''\n                    else:\n                        return False, f\"Unit '{unit}' not compatible with flow '{flow_id}'.\"\n\n        return False, f\"Unit '{unit}' is not compatible with dimension [{dimension}].\"\n
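A hedged illustration; the flow ID 'h2' and the exact dimension strings are assumptions about how POSTED encodes them:

    from posted.units import unit_allowed

    unit_allowed(unit='kWh', flow_id=None, dimension='[energy]')    # plain dimension check
    unit_allowed(unit='MWh;LHV', flow_id='h2', dimension='[flow]')  # variant checked against the flow's specs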
"},{"location":"code/python/units/#python.posted.units.unit_convert","title":"unit_convert(unit_from, unit_to, flow_id=None)","text":"

Converts units with optional flow context handling based on specified variants and flow ID. The function checks if the input units are not NaN, then it proceeds to handle different cases based on the presence of a flow context and unit variants.

Parameters:

Name Type Description Default unit_from str | float

Unit to convert from.

required unit_to str | float

Unit to convert to.

required flow_id None | str

Identifier for the specific flow or process.

None

Returns:

Type Description float

Conversion factor between unit_from and unit_to

Source code in python/posted/units.py
def unit_convert(unit_from: str | float, unit_to: str | float, flow_id: None | str = None) -> float:\n    '''\n    Converts units with optional flow context handling based on\n    specified variants and flow ID. The function checks if the input units are not NaN,\n    then it proceeds to handle different cases based on the presence of a flow context and unit\n    variants.\n\n    Parameters\n    ----------\n    unit_from : str | float\n        Unit to convert from.\n    unit_to : str | float\n        Unit to convert to.\n    flow_id : None | str\n        Identifier for the specific flow or process.\n\n    Returns\n    -------\n        float\n            Conversion factor between unit_from and unit_to\n\n    '''\n    # return nan if unit_from or unit_to is nan\n    if unit_from is np.nan or unit_to is np.nan:\n        return np.nan\n\n    # replace \"No Unit\" by \"Dimensionless\"\n    if unit_from == 'No Unit':\n        unit_from = 'dimensionless'\n    if unit_to == 'No Unit':\n        unit_to = 'dimensionless'\n\n    # skip flow conversion if no flow_id specified\n    if flow_id is None or pd.isna(flow_id):\n        return ureg(unit_from).to(unit_to).magnitude\n\n    # get variants from units\n    pure_units = []\n    variants = []\n    for u in (unit_from, unit_to):\n        pure_unit, variant = split_off_variant(u)\n        pure_units.append(pure_unit)\n        variants.append(variant)\n\n    unit_from, unit_to = pure_units\n\n    # if no variants are specified, we may proceed without a flow context\n    if not any(variants):\n        return ureg(unit_from).to(unit_to).magnitude\n\n    # if both variants refer to the same dimension, we need to manually calculate the conversion factor and proceed\n    # without a flow context\n    if len(variants) == 2:\n        variant_params = {\n            unit_variants[v]['param'] if v is not None else None\n            for v in variants\n        }\n        if len(variant_params) == 1:\n            param = next(iter(variant_params))\n            value_from, value_to = (\n                flows[flow_id][unit_variants[v]['value']] for v in variants)\n\n            conv_factor = (ureg(value_from) / ureg(value_to)\n                           if param == 'energycontent' else\n                           ureg(value_to) / ureg(value_from))\n\n            return conv_factor.magnitude * ureg(unit_from).to(unit_to).magnitude\n\n    # perform the actual conversion step with all required variants\n    ctx_kwargs = ctx_kwargs_for_variants(variants, flow_id)\n    return ureg(unit_from).to(unit_to, 'flocon', **ctx_kwargs).magnitude\n
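Two quick calls; the first needs no flow context, and the flow ID in the second is an assumption:

    from posted.units import unit_convert

    unit_convert('MWh', 'kWh')                  # -> 1000.0
    unit_convert('MWh;LHV', 't', flow_id='h2')  # energy-to-mass conversion via the flow's LHV energy content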
"},{"location":"tutorials/R/overview/","title":"Overview","text":"In\u00a0[1]: Copied!
devtools::load_all()\n
devtools::load_all()
\u2139 Loading posted\n\nAttaching package: \u2018dplyr\u2019\n\n\nThe following objects are masked from \u2018package:stats\u2019:\n\n    filter, lag\n\n\nThe following objects are masked from \u2018package:base\u2019:\n\n    intersect, setdiff, setequal, union\n\n\n\nAttaching package: \u2018docstring\u2019\n\n\nThe following object is masked from \u2018package:utils\u2019:\n\n    ?\n\n\n
In\u00a0[2]: Copied!
par(bg = \"white\")\nplot(1:10)\n
par(bg = \"white\") plot(1:10) In\u00a0[3]: Copied!
tedf <- TEDF$new(\"Tech|Electrolysis\")$load()\ntedf$data\n
A data.frame: 95 × 14

subtech  size          region  period  variable           reference_variable          value  uncertainty  unit      reference_value  reference_unit  comment                        source        source_detail
AEL      100 MW        *       2020    CAPEX              Input Capacity|Electricity    400            0  EUR_2020  1                kW                                             Vartiainen22  Page 4, Figure 4
AEL      100 MW        *       2030    CAPEX              Input Capacity|Electricity    240           50  EUR_2020  1                kW                                             Vartiainen22  Page 4, Figure 4
SOEC     130000 Nm³/h  *       2020    CAPEX              Input Capacity|Electricity   2000           NA  EUR_2019  1                kW              Provided as a lower threshold  Tenhumberg20  Table 2, Page 1588 (3/10)
AEL      1 MW          *       2020    CAPEX              Output Capacity|Hydrogen     1566          918  EUR_2020  1                kg/d                                           DEARF23       Sheet 86 AEC 1MW, Row 24
AEL      *             *       2020    Input|Electricity  Output|Hydrogen             48.30           NA  kWh       1.0              kg                                             Holst21       Table 3-2, Page 42
AEL      *             *       2020    Input|Water        Output|Hydrogen             10.00         1.00  L;norm    1.0              kg                                             Yates20       Table 3, Page 10
⋮ (95 rows in total)
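Since tedf$data is returned as a plain data.frame, it can be inspected with standard data-frame tooling before any normalisation. A minimal sketch, assuming dplyr is attached (the pipe-based calls below rely on it anyway), that narrows the raw entries to a single source:

# peek at the raw, un-normalised entries from one source
tedf$data %>%
    filter(source == 'Vartiainen22') %>%
    select(subtech, period, variable, value, unit)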
DataSet$new('Tech|Electrolysis')$normalise(
    override=list('Tech|Electrolysis|Input Capacity|elec' = 'kW',
                  'Tech|Electrolysis|Output Capacity|h2' = 'kW;LHV')) %>%
  filter(source=='Vartiainen22')
A data.frame: 12 × 15

parent_variable    subtech  size    region  period  variable         reference_variable          value        uncertainty  unit        reference_value  reference_unit  comment                                                                 source        source_detail
Tech|Electrolysis  AEL      100 MW  *       2020    CAPEX            Input Capacity|Electricity  45.40385963     0.000000  USD_2005    1                MWh/a                                                                                   Vartiainen22  Page 4, Figure 4
Tech|Electrolysis  AEL      100 MW  *       2050    CAPEX            Input Capacity|Electricity   9.08077193     8.513224  USD_2005    1                MWh/a                                                                                   Vartiainen22  Page 4, Figure 4
Tech|Electrolysis  AEL      100 MW  *       2020    OPEX Fixed       Input Capacity|Electricity   0.68105789           NA  USD_2005/a  1                MWh/a           1.5% of CAPEX; reported in units of electric capacity                   Vartiainen22  Page 4
Tech|Electrolysis  AEL      100 MW  *       2020    Output|Hydrogen  Input|Electricity            0.67000000           NA  MWh;LHV     1                MWh             1.5% of CAPEX; reported in units of electric capacity; 67% assumes the LHV  Vartiainen22  Page 4
⋮ (12 rows in total)
DataSet$new('Tech|Electrolysis')$normalise(
    override=list('Tech|Electrolysis|Output Capacity|h2' = 'kW;LHV'))
A data.frame: 95 × 15

parent_variable    subtech  size           region  period  variable                          reference_variable          value         uncertainty  unit      reference_value  reference_unit  comment  source        source_detail
Tech|Electrolysis  AEL      100 MW         *       2020    CAPEX                             Input Capacity|Electricity    45.403860      0.000000  USD_2005  1                MWh/a                    Vartiainen22  Page 4, Figure 4
Tech|Electrolysis  PEM      5 MW           *       2020    CAPEX                             Input Capacity|Electricity   111.012437            NA  USD_2005  1                MWh/a                    Holst21       Appendix A
Tech|Electrolysis  AEL      1 MW           *       2020    CAPEX                             Output Capacity|Hydrogen     127.998521     75.033616  USD_2005  1                MWh/a;LHV                DEARF23       Sheet 86 AEC 1MW, Row 24
Tech|Electrolysis  AEL      *              *       2020    Input|Water                       Output|Hydrogen             2.994960e-01    0.02994960  t        1                MWh;LHV                  Yates20       Table 3, Page 10
Tech|Electrolysis  *        130000 Nm³/h   *       *       Total Output Capacity|Hydrogen    NA                          3.178875e+06            NA  MWh/a;LHV  NA              NA                       *
⋮ (95 rows in total)
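A quick way to confirm what normalise() did is to list the distinct unit combinations that remain; after normalisation each variable should report one harmonised unit (here USD_2005 for costs and MWh/a for capacities). A sketch, again assuming dplyr; the variable name normalised is hypothetical:

normalised <- DataSet$new('Tech|Electrolysis')$normalise(
    override=list('Tech|Electrolysis|Output Capacity|h2' = 'kW;LHV'))

# one row per remaining (variable, unit, reference_unit) combination
normalised %>% distinct(variable, unit, reference_unit)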
DataSet$new('Tech|Electrolysis')$select(
    period=2020, subtech='AEL', size='100 MW',
    override=list('Tech|Electrolysis|Output Capacity|h2' = 'kW;LHV'))
A data.frame: 21 × 7

source        variable                                            reference_variable                          region  period  unit              value
DEARF23       Tech|Electrolysis|CAPEX                             Tech|Electrolysis|Output Capacity|Hydrogen  World   2020    a * USD_2005/MWh  1.665e+02
DEARF23       Tech|Electrolysis|Input|Electricity                 Tech|Electrolysis|Output|Hydrogen           World   2020    MWh/MWh           2.250e+00
Vartiainen22  Tech|Electrolysis|CAPEX                             Tech|Electrolysis|Output Capacity|Hydrogen  World   2020    a * USD_2005/MWh  6.777e+01
Yates20       Tech|Electrolysis|Input|Water                       Tech|Electrolysis|Output|Hydrogen           World   2020    t/MWh             4.852e-01
Yates20       Tech|Electrolysis|Total Input Capacity|Electricity  NA                                          World   2020    MWh/a             8.760e+05
⋮ (21 rows in total)
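The long format that select() returns is convenient for further processing, but for a side-by-side comparison of sources a wide layout can be easier to read. A sketch using tidyr (not otherwise used in this tutorial, so an extra dependency):

library(tidyr)

# one column per source, one row per variable
DataSet$new('Tech|Electrolysis')$select(
        period=2020, subtech='AEL', size='100 MW',
        override=list('Tech|Electrolysis|Output Capacity|h2' = 'kW;LHV')) %>%
    pivot_wider(id_cols=variable, names_from=source, values_from=value)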
DataSet$new('Tech|Electrolysis')$select(
    period=2030, source='Yates20', subtech='AEL', size='100 MW',
    override=list('Tech|Electrolysis|Output Capacity|h2' = 'kW;LHV'),
    extrapolate_period=FALSE)
A data.frame: 1 × 7

source   variable                                            reference_variable  region  period  unit   value
Yates20  Tech|Electrolysis|Total Input Capacity|Electricity  NA                  World   2030    MWh/a  876000
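With extrapolate_period=FALSE, only entries actually reported for the requested period are kept; Yates20 reports its electrolysis data for 2020, so the 2030 query above returns nothing but the generic capacity row. Dropping the flag restores the default behaviour, under which values are carried across periods (the un-filtered call in the next cell shows Yates20 entries for 2030 through 2050). A sketch of the same query with the default:

# same query with period extrapolation left at its default
DataSet$new('Tech|Electrolysis')$select(
    period=2030, source='Yates20', subtech='AEL', size='100 MW',
    override=list('Tech|Electrolysis|Output Capacity|h2' = 'kW;LHV'))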
DataSet$new('Tech|Electrolysis')$select(
    subtech=c('AEL', 'PEM'), size='100 MW',
    override=list('Tech|Electrolysis|Input Capacity|Electricity' = 'kW'))
A data.frame: 114 × 8

subtech  source        variable                                            reference_variable                          region  period  unit              value
AEL      Al-Qahtani21  Tech|Electrolysis|Total Input Capacity|Electricity  NA                                          World   2030    MWh/a             8.760e+05
AEL      DEARF23       Tech|Electrolysis|CAPEX                             Tech|Electrolysis|Output Capacity|Hydrogen  World   2030    a * USD_2005/MWh  1.105e+02
AEL      DEARF23       Tech|Electrolysis|Input|Electricity                 Tech|Electrolysis|Output|Hydrogen           World   2030    MWh/MWh           2.163e+00
PEM      Holst21       Tech|Electrolysis|CAPEX                             Tech|Electrolysis|Output Capacity|Hydrogen  World   2030    a * USD_2005/MWh  7.813e+01
PEM      Yates20       Tech|Electrolysis|Total Input Capacity|Electricity  NA                                          World   2050    MWh/a             8.760e+05
⋮ (114 rows in total)
DataSet$new('Tech|Electrolysis')$aggregate(
    subtech='AEL', size='100 MW', agg='subtech',
    override=list('Tech|Electrolysis|Output Capacity|Hydrogen' = 'kW;LHV'))
A data.frame: 99 × 6

source        variable                                    region  period  unit        value
DEARF23       Tech|Electrolysis|CAPEX                     World   2030    USD_2005    1.105e+02
DEARF23       Tech|Electrolysis|Input|Electricity         World   2030    MWh         2.163e+00
DEARF23       Tech|Electrolysis|OPEX Fixed                World   2030    USD_2005/a  2.210e+00
Vartiainen22  Tech|Electrolysis|OPEX Fixed                World   2050    USD_2005/a  3.734e-02
Yates20       Tech|Electrolysis|Output Capacity|Hydrogen  World   2030    MWh/a       1.000e+00
⋮ (99 rows in total)
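The aggregate() call above has already collapsed the subtech dimension; reducing further, e.g. to a single value per variable and period across sources, is ordinary data-frame work. A sketch assuming dplyr (an unweighted mean, and the variable name agg is hypothetical):

agg <- DataSet$new('Tech|Electrolysis')$aggregate(
    subtech='AEL', size='100 MW', agg='subtech',
    override=list('Tech|Electrolysis|Output Capacity|Hydrogen' = 'kW;LHV'))

# unweighted mean across sources; units are consistent within each variable
agg %>%
    group_by(variable, period) %>%
    summarise(mean_value = mean(value), .groups = 'drop')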
# DataSet$new('Tech|Methane Reforming')$aggregate(period=2030) %>% filter(grepl('OM Cost', variable))
# DataSet$new('Tech|Methane Reforming')$aggregate(period=2030) %>% filter(grepl('Demand', variable))
DataSet$new('Tech|Methane Reforming')$aggregate(period=2030) %>% arrange(variable)
A data.frame: 77 × 7

subtech  capture_rate  variable                                               region  period  unit      value
ATR      94.50%        Tech|Methane Reforming|CAPEX                           World   2030    USD_2005  1.148e+01
SMR      0.00%         Tech|Methane Reforming|CAPEX                           World   2030    USD_2005  7.169e+01
SMR      90.00%        Tech|Methane Reforming|Input|Fossil Gas                World   2030    MWh       2.414e+00
SMR      0.00%         Tech|Methane Reforming|Output|Electricity              World   2030    MWh       5.025e-02
ATR      94.50%        Tech|Methane Reforming|Total Output Capacity|Hydrogen  World   2030    MWh/a     8.029e+06
⋮ (77 rows in total)
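The two commented lines in the cell above hint at narrowing the aggregated table to particular variables; spelled out with dplyr and base R's grepl, a sketch restricted to the cost entries might look like this:

# keep only CAPEX and OPEX rows of the aggregated table
DataSet$new('Tech|Methane Reforming')$aggregate(period=2030) %>%
    filter(grepl('CAPEX|OPEX', variable)) %>%
    arrange(subtech, capture_rate, variable)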
DataSet$new('Tech|Direct Air Capture')$normalise()
A data.frame: 90 × 15

parent_variable          subtech  component    region  period  variable           reference_variable            value        uncertainty  unit      reference_value  reference_unit  comment                                                                                                             source   source_detail
Tech|Direct Air Capture  HT-DAC   #            *       2018    CAPEX              Output Capacity|Captured CO2  951.5012518           NA  USD_2005  1                t/a             Early plant. Value reported for a capacity of 0.98 Mt-CO2/year. Corresponding to the Carbon Engineering pilot plant.  Keith18  Table 3
Tech|Direct Air Capture  HT-DAC   #            *       2018    Input|Electricity  Output|Captured CO2             0.3660000           NA  MWh       1                t               Corresponding to the Carbon Engineering pilot plant (scenario C).                                                   Keith18  Table 2
Tech|Direct Air Capture  LT-DAC   #            *       2020    CAPEX              Output Capacity|Captured CO2  725.8715040           NA  USD_2005  1                t/a             No explicit cost basis given, assumed to be EUR2020 as this is the time of cost. Author's own assumption.           Fasihi19  Table 4
Tech|Direct Air Capture  LT-DAC   CO2 capture  *       2021    Input|Electricity  Output|Captured CO2             0.1800000           NA  MWh       1                t               Reference case.                                                                                                     Madhu21  Table 2
Tech|Direct Air Capture  LT-DAC   Steam        *       2019    OPEX Variable      Output|Captured CO2             2.4826177           NA  USD_2005  1                t               Case 4: high cost case.                                                                                             NASEM19  Table 5.10
⋮ (90 rows in total)
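Before selecting from the direct-air-capture data, it can help to see how the 90 normalised entries are distributed over subtechnologies and sources. A sketch assuming dplyr (the variable name dac is hypothetical):

dac <- DataSet$new('Tech|Direct Air Capture')$normalise()

# number of entries per (subtech, source) combination
dac %>% count(subtech, source)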
DataSet$new('Tech|Direct Air Capture')$select()
Warning message in private$..apply_mappings(selected, var_units):
"No appropriate mapping found to convert row reference to primary output: c(2030, 2030)
 No appropriate mapping found to convert row reference to primary output: c("HT-DAC", "HT-DAC")
 No appropriate mapping found to convert row reference to primary output: c("#", "#")
 No appropriate mapping found to convert row reference to primary output: c("World", "World")
 No appropriate mapping found to convert row reference to primary output: c("Tech|Direct Air Capture|CAPEX", "Tech|Direct Air Capture|CAPEX")
 No appropriate mapping found to convert row reference to primary output: c("Tech|Direct Air Capture|Output Capacity|Captured CO2", "Tech|Direct Air Capture|Output Capacity|Captured CO2")
 No appropriate mapping found to convert row reference to primary output: c("NASEM19", "NASEM19")
 No appropriate mapping found to convert row reference to primary output: c(NA, NA)"
⋮ (the same warning is raised for every affected combination of period (2030, 2040, 2050), subtech (HT-DAC, LT-DAC), component (#, Adsorbent, Blower, Vacuum pump), and source (NASEM19, Okzan22))
A data.frame: 240 × 9 (first and last 30 rows shown; 180 rows omitted in the middle)

subtech | component | source | variable | reference_variable | region | period | unit | value
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|CAPEX | Tech|Direct Air Capture|Output Capacity|Captured CO2 | World | 2030 | a * USD_2005/t | 1244.0000
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|CAPEX | Tech|Direct Air Capture|Output Capacity|Captured CO2 | World | 2040 | a * USD_2005/t | 1244.0000
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|CAPEX | Tech|Direct Air Capture|Output Capacity|Captured CO2 | World | 2050 | a * USD_2005/t | 1244.0000
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | MWh/t | 2.3560
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | MWh/t | 2.3560
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | MWh/t | 2.3560
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|Input|Heat | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | MWh/t | 0.0000
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|Input|Heat | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | MWh/t | 0.0000
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|Input|Heat | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | MWh/t | 0.0000
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|Lifetime | NA | World | 2030 | a | 25.0000
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|Lifetime | NA | World | 2040 | a | 25.0000
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|Lifetime | NA | World | 2050 | a | 25.0000
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|OPEX Fixed | Tech|Direct Air Capture|Output Capacity|Captured CO2 | World | 2030 | USD_2005/t | 46.0300
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|OPEX Fixed | Tech|Direct Air Capture|Output Capacity|Captured CO2 | World | 2040 | USD_2005/t | 46.0300
HT-DAC | # | Fasihi19 | Tech|Direct Air Capture|OPEX Fixed | Tech|Direct Air Capture|Output Capacity|Captured CO2 | World | 2050 | USD_2005/t | 46.0300
HT-DAC | # | IEA-DAC22 | Tech|Direct Air Capture|Input|Heat | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | MWh/t | 2.1670
HT-DAC | # | IEA-DAC22 | Tech|Direct Air Capture|Input|Heat | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | MWh/t | 2.1670
HT-DAC | # | IEA-DAC22 | Tech|Direct Air Capture|Input|Heat | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | MWh/t | 2.1670
HT-DAC | # | Keith18 | Tech|Direct Air Capture|CAPEX | Tech|Direct Air Capture|Output Capacity|Captured CO2 | World | 2030 | a * USD_2005/t | 348.2000
HT-DAC | # | Keith18 | Tech|Direct Air Capture|CAPEX | Tech|Direct Air Capture|Output Capacity|Captured CO2 | World | 2030 | a * USD_2005/t | 240.9000
HT-DAC | # | Keith18 | Tech|Direct Air Capture|CAPEX | Tech|Direct Air Capture|Output Capacity|Captured CO2 | World | 2040 | a * USD_2005/t | 348.2000
HT-DAC | # | Keith18 | Tech|Direct Air Capture|CAPEX | Tech|Direct Air Capture|Output Capacity|Captured CO2 | World | 2040 | a * USD_2005/t | 240.9000
HT-DAC | # | Keith18 | Tech|Direct Air Capture|CAPEX | Tech|Direct Air Capture|Output Capacity|Captured CO2 | World | 2050 | a * USD_2005/t | 348.2000
HT-DAC | # | Keith18 | Tech|Direct Air Capture|CAPEX | Tech|Direct Air Capture|Output Capacity|Captured CO2 | World | 2050 | a * USD_2005/t | 240.9000
HT-DAC | # | Keith18 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | MWh/t | 0.1340
HT-DAC | # | Keith18 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | MWh/t | 0.1340
HT-DAC | # | Keith18 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | MWh/t | 0.1340
HT-DAC | # | Keith18 | Tech|Direct Air Capture|Input|Heat | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | MWh/t | 0.5338
HT-DAC | # | Keith18 | Tech|Direct Air Capture|Input|Heat | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | MWh/t | 0.5338
HT-DAC | # | Keith18 | Tech|Direct Air Capture|Input|Heat | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | MWh/t | 0.5338
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮
LT-DAC | CO2 capture | Madhu21 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | MWh/t | 3.240e-02
LT-DAC | CO2 capture | Madhu21 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | MWh/t | 2.340e-02
LT-DAC | CO2 capture | Madhu21 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | MWh/t | 6.300e-02
LT-DAC | CO2 capture | Madhu21 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | MWh/t | 3.240e-02
LT-DAC | CO2 capture | Madhu21 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | MWh/t | 2.340e-02
LT-DAC | CO2 capture | Madhu21 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | MWh/t | 6.300e-02
LT-DAC | CO2 compression | IEA-DAC22 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | MWh/t | 1.929e-02
LT-DAC | CO2 compression | IEA-DAC22 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | MWh/t | 1.929e-02
LT-DAC | CO2 compression | IEA-DAC22 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | MWh/t | 1.929e-02
LT-DAC | CO2 compression | Madhu21 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | MWh/t | 1.232e-02
LT-DAC | CO2 compression | Madhu21 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | MWh/t | 1.232e-02
LT-DAC | CO2 compression | Madhu21 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | MWh/t | 1.232e-02
LT-DAC | CO2 injection | Madhu21 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | MWh/t | 4.900e-05
LT-DAC | CO2 injection | Madhu21 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | MWh/t | 4.900e-05
LT-DAC | CO2 injection | Madhu21 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | MWh/t | 4.900e-05
LT-DAC | Non-compression | IEA-DAC22 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | MWh/t | 2.500e-01
LT-DAC | Non-compression | IEA-DAC22 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | MWh/t | 2.500e-01
LT-DAC | Non-compression | IEA-DAC22 | Tech|Direct Air Capture|Input|Electricity | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | MWh/t | 2.500e-01
LT-DAC | O&M | Okzan22 | Tech|Direct Air Capture|OPEX Variable | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | USD_2005/t | 2.500e+01
LT-DAC | O&M | Okzan22 | Tech|Direct Air Capture|OPEX Variable | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | USD_2005/t | 2.500e+02
LT-DAC | O&M | Okzan22 | Tech|Direct Air Capture|OPEX Variable | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | USD_2005/t | 2.500e+01
LT-DAC | O&M | Okzan22 | Tech|Direct Air Capture|OPEX Variable | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | USD_2005/t | 2.500e+02
LT-DAC | O&M | Okzan22 | Tech|Direct Air Capture|OPEX Variable | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | USD_2005/t | 2.500e+01
LT-DAC | O&M | Okzan22 | Tech|Direct Air Capture|OPEX Variable | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | USD_2005/t | 2.500e+02
LT-DAC | Steam | NASEM19 | Tech|Direct Air Capture|OPEX Variable | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | USD_2005/t | 3.315e+00
LT-DAC | Steam | NASEM19 | Tech|Direct Air Capture|OPEX Variable | Tech|Direct Air Capture|Output|Captured CO2 | World | 2030 | USD_2005/t | 4.520e+00
LT-DAC | Steam | NASEM19 | Tech|Direct Air Capture|OPEX Variable | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | USD_2005/t | 3.315e+00
LT-DAC | Steam | NASEM19 | Tech|Direct Air Capture|OPEX Variable | Tech|Direct Air Capture|Output|Captured CO2 | World | 2040 | USD_2005/t | 4.520e+00
LT-DAC | Steam | NASEM19 | Tech|Direct Air Capture|OPEX Variable | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | USD_2005/t | 3.315e+00
LT-DAC | Steam | NASEM19 | Tech|Direct Air Capture|OPEX Variable | Tech|Direct Air Capture|Output|Captured CO2 | World | 2050 | USD_2005/t | 4.520e+00
In [13]:
TEDF$new('Tech|Haber-Bosch with ASU')$load()  # $check()\nDataSet$new('Tech|Haber-Bosch with ASU')$normalise()\n
<TEDF>\n  Inherits from: <TEBase>\n  Public:\n    check: function (raise_exception = TRUE) \n    check_row: function (row_id, raise_exception = TRUE) \n    clone: function (deep = FALSE) \n    data: active binding\n    file_path: active binding\n    inconsistencies: active binding\n    initialize: function (parent_variable, database_id = \"public\", file_path = NULL, \n    load: function () \n    parent_variable: active binding\n    read: function () \n    write: function () \n  Private:\n    ..columns: list\n    ..df: data.frame\n    ..fields: list\n    ..file_path: ./inst/extdata/database/tedfs/Tech/Haber-Bosch with ASU.csv\n    ..inconsistencies: list\n    ..parent_variable: Tech|Haber-Bosch with ASU\n    ..var_specs: list
Warning message in readLines(file, warn = readLines.warn):\n\u201cincomplete final line found in './inst/extdata/database/masks/Tech/Haber-Bosch with ASU.yml'\u201d\n
A data.frame: 23 × 14 (parent_variable is 'Tech|Haber-Bosch with ASU' in every row; that column is omitted below)

component | region | period | variable | reference_variable | value | uncertainty | unit | reference_value | reference_unit | comment | source | source_detail
# | * | 2022 | CAPEX | Output Capacity|Ammonia | 383.1801978 | NA | USD_2005 | 1 | t/a | As stated in Section 2.4.1, the ammonia output of the plant is measured in units of its lower heating value. | ArnaizdelPozo22 | Supplementary Information, Table 2
# | * | 2018 | CAPEX | Output Capacity|Ammonia | 539.0569917 | NA | USD_2005 | 1 | t/a | Cost basis not given in the paper, assumed to be 2018. | Ikaheimo18 | Page 7, Table 3
# | * | 2025 | CAPEX | Output Capacity|Ammonia | 849.4034361 | NA | USD_2005 | 1 | t/a | Error in Table 7, units are given in EUR/MW; Table 3 contradicts this. | Grahn22 | Table 7
# | * | 2040 | CAPEX | Output Capacity|Ammonia | 515.7092291 | NA | USD_2005 | 1 | t/a | Error in Table 7, units are given in EUR/MW; Table 3 contradicts this. | Grahn22 | Table 7
# | * | 2022 | OPEX Fixed Relative |  | 3.0000000 | NA | pct | NA | NA |  | ArnaizdelPozo22 | Supplementary Information, Table 2
# | * | 2025 | OPEX Fixed Relative |  | 4.0000000 | NA | pct | NA | NA |  | Grahn22 | Table 7
# | * | 2040 | OPEX Fixed Relative |  | 4.0000000 | NA | pct | NA | NA |  | Grahn22 | Table 7
# | * | 2018 | OPEX Fixed | Output Capacity|Ammonia | 10.5332975 | NA | USD_2005/a | 1 | t/a | Cost basis not given in the paper, assumed to be 2018. | Ikaheimo18 | Page 7, Table 3
# | * | 2022 | OPEX Variable | Output|Ammonia | 3132.1883891 | NA | USD_2005 | 1 | t | As stated in Section 2.4.1, the ammonia output of the plant is measured in units of its lower heating value. | ArnaizdelPozo22 | Supplementary Information, Table 2
# | * | 2015 | Input|Hydrogen | Output|Ammonia | 5.9326787 | NA | MWh;LHV | 1 | t |  | Matzen15 | Page 9 / Table 13
# | * | 2018 | Input|Hydrogen | Output|Ammonia | 6.0270060 | NA | MWh;LHV | 1 | t | Found in supplementary information | Stolz22 | Table 5
Air-separation unit | * | 2015 | Input|Electricity | Output|Ammonia | 0.7236234 | NA | MWh | 1 | t | This source claims 3.1 MJ/kg electricity demand for the ASU. Assuming a nitrogen demand of 0.84 tonnes per tonne of ammonia, this amounts to an electricity demand of 3.1 × 0.85 GJ per tonne of ammonia. | Matzen15 | Page 8 / Table 9
Air-separation unit | * | 2016 | Input|Electricity | Output|Ammonia | 0.0500000 | NA | MWh | 1 | t |  | GrinbergDana16 | Supplementary Table 4
Synthesis process | * | 2016 | Input|Electricity | Output|Ammonia | 0.4444444 | NA | MWh | 1 | t |  | GrinbergDana16 | Supplementary Table 4
Air-separation unit | * | 2014 | Input|Electricity | Output|Ammonia | 0.0907563 | NA | MWh | 1 | t | This line is masked later down the line, as not found in the original source (input error?) | Morgan14 |
Synthesis process | * | 2014 | Input|Electricity | Output|Ammonia | 0.4000000 | NA | MWh | 1 | t | This line is masked later down the line, as not found in the original source (input error?) | Morgan14 |
# | * | 2018 | Input|Electricity | Output|Ammonia | 0.6400000 | NA | MWh | 1 | t |  | Ikaheimo18 | Page 4
Compressors | * | 2017 | Input|Electricity | Output|Ammonia | 1.4444444 | 0.3611111114 | MWh | 1 | t |  | Bazzanella17 | Page 56, Section 4.2.2
Air-separation unit | * | 2017 | Input|Electricity | Output|Ammonia | 0.3300000 | NA | MWh | 1 | t |  | Bazzanella17 | Page 57, Section 4.2.3
H2 compression and others | * | 2018 | Input|Electricity | Output|Ammonia | 0.3307503 | NA | MWh | 1 | t | Found in supplementary information | Stolz22 | Table 5
Air-separation unit | * | 2018 | Input|Electricity | Output|Ammonia | 0.1354501 | 0.0002625003 | MWh | 1 | t | Data found in supplementary information. Manually calculated from 0.162 kWh/kg nitrogen and 0.159 kg nitrogen/kWh ammonia. | Stolz22 | Table 5
# | * | 2025 | Output|Ammonia | Input|Hydrogen | 0.1504760 | NA | t | 1 | MWh;LHV |  | Grahn22 | Table 7
# | * | 2040 | Output|Ammonia | Input|Hydrogen | 0.1504760 | NA | t | 1 | MWh;LHV |  | Grahn22 | Table 7
In [14]:
DataSet$new('Tech|Haber-Bosch with ASU')$select(period=2020)\n
Warning message in readLines(file, warn = readLines.warn):\n\u201cincomplete final line found in './inst/extdata/database/masks/Tech/Haber-Bosch with ASU.yml'\u201d\n
A data.frame: 20 × 8

component | source | variable | reference_variable | region | period | unit | value
# | ArnaizdelPozo22 | Tech|Haber-Bosch with ASU|CAPEX | Tech|Haber-Bosch with ASU|Output Capacity|Ammonia | World | 2020 | a * USD_2005/t | 1.200e+06
# | ArnaizdelPozo22 | Tech|Haber-Bosch with ASU|OPEX Fixed | Tech|Haber-Bosch with ASU|Output Capacity|Ammonia | World | 2020 | USD_2005/t | 3.601e+04
# | ArnaizdelPozo22 | Tech|Haber-Bosch with ASU|OPEX Variable | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | USD_2005/t | 9.811e+06
# | Grahn22 | Tech|Haber-Bosch with ASU|CAPEX | Tech|Haber-Bosch with ASU|Output Capacity|Ammonia | World | 2020 | a * USD_2005/t | 5.645e+03
# | Grahn22 | Tech|Haber-Bosch with ASU|Input|Hydrogen | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 4.416e+01
# | Grahn22 | Tech|Haber-Bosch with ASU|OPEX Fixed | Tech|Haber-Bosch with ASU|Output Capacity|Ammonia | World | 2020 | USD_2005/t | 2.258e+02
# | Ikaheimo18 | Tech|Haber-Bosch with ASU|CAPEX | Tech|Haber-Bosch with ASU|Output Capacity|Ammonia | World | 2020 | a * USD_2005/t | 3.450e+02
# | Ikaheimo18 | Tech|Haber-Bosch with ASU|Input|Electricity | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 4.096e-01
# | Ikaheimo18 | Tech|Haber-Bosch with ASU|OPEX Fixed | Tech|Haber-Bosch with ASU|Output Capacity|Ammonia | World | 2020 | USD_2005/t | 6.741e+00
# | Matzen15 | Tech|Haber-Bosch with ASU|Input|Hydrogen | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 3.520e+01
# | Stolz22 | Tech|Haber-Bosch with ASU|Input|Hydrogen | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 3.632e+01
Air-separation unit | Bazzanella17 | Tech|Haber-Bosch with ASU|Input|Electricity | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 1.089e-01
Air-separation unit | GrinbergDana16 | Tech|Haber-Bosch with ASU|Input|Electricity | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 2.500e-03
Air-separation unit | Matzen15 | Tech|Haber-Bosch with ASU|Input|Electricity | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 5.236e-01
Air-separation unit | Morgan14 | Tech|Haber-Bosch with ASU|Input|Electricity | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 8.237e-03
Air-separation unit | Stolz22 | Tech|Haber-Bosch with ASU|Input|Electricity | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 1.835e-02
Compressors | Bazzanella17 | Tech|Haber-Bosch with ASU|Input|Electricity | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 2.086e+00
H2 compression and others | Stolz22 | Tech|Haber-Bosch with ASU|Input|Electricity | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 1.094e-01
Synthesis process | GrinbergDana16 | Tech|Haber-Bosch with ASU|Input|Electricity | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 1.975e-01
Synthesis process | Morgan14 | Tech|Haber-Bosch with ASU|Input|Electricity | Tech|Haber-Bosch with ASU|Output|Ammonia | World | 2020 | MWh/t | 1.600e-01
In [15]:
DataSet$new('Tech|Haber-Bosch with ASU')$aggregate(period=2020)\n
Warning message in readLines(file, warn = readLines.warn):\n\u201cincomplete final line found in './inst/extdata/database/masks/Tech/Haber-Bosch with ASU.yml'\u201d\n
\nError in eval(parse(text = cond)): object 'variable' not found\nTraceback:\n\n1. DataSet$new(\"Tech|Haber-Bosch with ASU\")$aggregate(period = 2020)\n2. mask$matches(rows)   # at line 910-912 of file /Users/gmax/Documents/PIK_Job/posted_philipp/R/noslag.R\n3. apply_cond(df, w)   # at line 81-83 of file /Users/gmax/Documents/PIK_Job/posted_philipp/R/masking.R\n4. filter(eval(parse(text = cond)))   # at line 20 of file /Users/gmax/Documents/PIK_Job/posted_philipp/R/masking.R\n5. eval(parse(text = cond))\n6. eval(parse(text = cond))
"},{"location":"tutorials/python/overview/","title":"Main POSTED tutorial for python","text":"

First, we import some general-purpose libraries. The Python side of posted depends on pandas for working with dataframes. Here we also use plotly and itables for plotting and inspecting data, but posted does not depend on them, and other tools could be used instead. The igraph package is an optional dependency used for representing the interlinkages in value chains, and matplotlib is only used for plotting igraph graphs, which is again optional.

In [1]:
import pandas as pd\n\nimport plotly\npd.options.plotting.backend = \"plotly\"\n\n# for process chains only\nimport igraph as ig\nimport matplotlib.pyplot as plt\n
Intel MKL WARNING: Support of Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) enabled only processors has been deprecated. Intel oneAPI Math Kernel Library 2025.0 will require Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.\n

The posted package has to be installed in the Python environment. If it is not installed yet, you can install it directly from the GitHub source code using pip.

In [2]:
try:\n    import posted\nexcept ImportError:\n    ! pip install git+https://github.com/PhilippVerpoort/posted.git@develop\n

Import specific functions and classes from POSTED that will be used later.

In [3]:
from posted.tedf import TEDF\nfrom posted.noslag import DataSet\nfrom posted.units import Q, U\nfrom posted.team import CalcVariable, LCOX, FSCP, ProcessChain, annuity_factor\n
\n---------------------------------------------------------------------------\nImportError                               Traceback (most recent call last)\nCell In[3], line 4\n      2 from posted.noslag import DataSet\n      3 from posted.units import Q, U\n----> 4 from posted.team import CalcVariable, LCOX, FSCP, ProcessChain, annuity_factor\n\nImportError: cannot import name 'ProcessChain' from 'posted.team' (/Users/gmax/Documents/PIK_Job/posted_leo/python/posted/team.py)
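The ImportError above shows that ProcessChain (and, in the failing cells further below, annuity_factor) is not available in this build of posted, which is why several later cells raise NameError. To follow along anyway, a minimal stand-in for the annuity factor can be defined by hand. This is a sketch based on the standard annuity formula with plain floats; POSTED's own annuity_factor works with pint quantities such as Q('5 pct') and Q('18 a'), so this is an assumption, not the package's implementation:

# Hypothetical fallback, only needed while the import above fails.
# Standard annuity formula: ANF = r / (1 - (1 + r)**(-n)), in units of 1/a.
def annuity_factor(rate: float, lifetime_years: float) -> float:
    # rate: interest rate as a plain fraction (e.g. 0.05);
    # lifetime_years: book lifetime in years
    return rate / (1 - (1 + rate) ** -lifetime_years)

annuity_factor(0.05, 18)  # roughly 0.0855 per year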

Next, we use some basic plotly and pandas functions for plotting and output analysis.

Let's compare CAPEX data for electrolysis for the years 2020–2050 for alkaline (AEL) and PEM electrolysers across different sources (Danish Energy Agency, Breyer, Fraunhofer, IRENA) and different electrolyser plant sizes.

In [4]:
# select data from TEDFs\ndf_elh2 = DataSet('Tech|Electrolysis').select(\n        period=[2020, 2030, 2040, 2050],\n        subtech=['AEL', 'PEM'],\n        override={'Tech|Electrolysis|Output Capacity|Hydrogen': 'kW;LHV'},\n        source=['DEARF23', 'Vartiainen22', 'Holst21', 'IRENA22'],\n        size=['1 MW', '5 MW', '100 MW'],\n        extrapolate_period=False\n    ).query(f\"variable=='Tech|Electrolysis|CAPEX'\")\n\n# display a few examples\ndisplay(df_elh2.sample(15).sort_index())\n\n# sort data and plot\ndf_elh2.assign(size_sort=lambda df: df['size'].str.split(' ', n=1, expand=True).iloc[:, 0].astype(int)) \\\n       .sort_values(by=['size_sort', 'period']) \\\n       .plot.line(x='period', y='value', color='source', facet_col='size', facet_row='subtech')\n
  | subtech | size | source | variable | reference_variable | region | period | unit | value
0 | AEL | 1 MW | DEARF23 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2020 | USD_2005/kW | 1121.0
2 | AEL | 1 MW | DEARF23 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2040 | USD_2005/kW | 625.8
3 | AEL | 1 MW | DEARF23 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2050 | USD_2005/kW | 464.0
25 | AEL | 1 MW | IRENA22 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2050 | USD_2005/kW | 236.2
40 | AEL | 100 MW | DEARF23 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2050 | USD_2005/kW | 331.5
54 | AEL | 100 MW | Holst21 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2030 | USD_2005/kW | 604.0
64 | AEL | 100 MW | IRENA22 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2050 | USD_2005/kW | 236.2
74 | AEL | 100 MW | Vartiainen22 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2040 | USD_2005/kW | 190.7
75 | AEL | 100 MW | Vartiainen22 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2050 | USD_2005/kW | 104.7
107 | AEL | 5 MW | IRENA22 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2050 | USD_2005/kW | 236.2
119 | PEM | 1 MW | DEARF23 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2020 | USD_2005/kW | 1586.0
122 | PEM | 1 MW | DEARF23 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2050 | USD_2005/kW | 564.2
152 | PEM | 100 MW | DEARF23 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2040 | USD_2005/kW | 658.0
191 | PEM | 5 MW | Holst21 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2020 | USD_2005/kW | 1488.0
192 | PEM | 5 MW | Holst21 | Tech|Electrolysis|CAPEX | Tech|Electrolysis|Output Capacity|Hydrogen | World | 2030 | USD_2005/kW | 978.9

Based on those many sources and cases (size and subtechnology), we can now aggregate the data for further use.

In [5]:
DataSet('Tech|Electrolysis').aggregate(\n        period=[2020, 2030, 2040, 2050],\n        subtech=['AEL', 'PEM'],\n        override={'Tech|Electrolysis|Output Capacity|Hydrogen': 'kW;LHV'},\n        source=['DEARF23', 'Vartiainen22', 'Holst21', 'IRENA22'],\n        size=['1 MW', '5 MW', '100 MW'],\n        agg=['subtech', 'size', 'source'],\n        extrapolate_period=False,\n    ).team.varsplit('Tech|Electrolysis|*variable') \\\n    .query(f\"variable.isin({['CAPEX', 'Output Capacity|Hydrogen']})\")\n
Out[5]:

  | variable | region | period | unit | value
0 | CAPEX | World | 2020 | USD_2005 | 1046.0
1 | CAPEX | World | 2030 | USD_2005 | 737.4
2 | CAPEX | World | 2040 | USD_2005 | 586.1
3 | CAPEX | World | 2050 | USD_2005 | 499.3
12 | Output Capacity|Hydrogen | World | 2020 | kW | 1.0
13 | Output Capacity|Hydrogen | World | 2030 | kW | 1.0
14 | Output Capacity|Hydrogen | World | 2040 | kW | 1.0
15 | Output Capacity|Hydrogen | World | 2050 | kW | 1.0

Next, let's compare the energy demand of methane reforming (for blue hydrogen) and different types of electrolysis (for green hydrogen).

In [6]:
pd.concat([\n        DataSet('Tech|Methane Reforming').aggregate(period=2030, source='Lewis22'),\n        DataSet('Tech|Electrolysis').aggregate(period=2030, agg=['source', 'size']),\n    ]) \\\n    .reset_index(drop=True) \\\n    .team.varsplit('Tech|?tech|Input|?fuel') \\\n    .assign(tech=lambda df: df.apply(lambda row: f\"{row['tech']}<br>({row['subtech']})\" if pd.isnull(row['capture_rate']) else f\"{row['tech']}<br>({row['subtech']}, {row['capture_rate']} CR)\", axis=1)) \\\n    .plot.bar(x='tech', y='value', color='fuel') \\\n    .update_layout(\n        xaxis_title='Technologies',\n        yaxis_title='Energy demand  ( MWh<sub>LHV</sub> / MWh<sub>LHV</sub> H<sub>2</sub> )',\n        legend_title='Energy carriers',\n    )\n

Next, let's compare the energy demand of iron direct reduction (production of low-carbon crude iron) across sources.

In [7]:
DataSet('Tech|Iron Direct Reduction') \\\n    .aggregate(period=2030, mode='h2', agg=[]) \\\n    .team.varsplit('Tech|Iron Direct Reduction|Input|?fuel') \\\n    .query(f\"fuel != 'Iron Ore'\") \\\n    .team.varcombine('{fuel} ({component})') \\\n    .plot.bar(x='source', y='value', color='variable') \\\n    .update_layout(\n        xaxis_title='Sources',\n        yaxis_title='Energy demand  ( MWh<sub>LHV</sub> / t<sub>DRI</sub> )',\n        legend_title='Energy carriers'\n    )\n

We can also compare the energy demand for operation with hydrogen versus fossil gas based on a single source.

In [8]:
DataSet('Tech|Iron Direct Reduction') \\\n    .select(period=2030, source='Jacobasch21') \\\n    .team.varsplit('Tech|Iron Direct Reduction|Input|?fuel') \\\n    .query(f\"fuel.isin({['Electricity', 'Fossil Gas', 'Hydrogen']})\") \\\n    .plot.bar(x='mode', y='value', color='fuel') \\\n    .update_layout(\n        xaxis_title='Mode of operation',\n        yaxis_title='Energy demand  ( MWh<sub>LHV</sub> / t<sub>DRI</sub> )',\n        legend_title='Energy carriers'\n    )\n

Finally, let's compare the energy demand of Haber-Bosch synthesis between an integrated SMR plant and a plant running on green hydrogen.

In [9]:
pd.concat([\n        DataSet('Tech|Haber-Bosch with ASU').aggregate(period=2024, agg='component'),\n        DataSet('Tech|Haber-Bosch with Reforming').aggregate(period=2024, agg='component')\n    ]) \\\n    .reset_index(drop=True) \\\n    .team.varsplit('Tech|?tech|*variable') \\\n    .query(f\"variable.str.startswith('Input|')\") \\\n    .plot.bar(x='source', y='value', color='variable') \\\n    .update_layout(\n        xaxis_title='Sources',\n        yaxis_title='Energy demand  ( MWh<sub>LHV</sub> / t<sub>NH<sub>3</sub></sub> )',\n        legend_title='Energy carriers'\n    )\n

New variables can be calculated manually via the CalcVariable class. The next example demonstrates this for calculating the levelised cost of hydrogen.
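Written out as formulas, the three cost components computed in the next cell are (ANF denotes the annuity factor, OCF the operating capacity factor; this merely restates the lambdas below, it is not an additional POSTED API):

$$
\mathrm{LCOX}_{\text{capital}} = \frac{\mathrm{ANF}\cdot\mathrm{CAPEX}}{\mathrm{OCF}\cdot \mathrm{Cap}_{\mathrm{H_2}}}, \qquad
\mathrm{LCOX}_{\text{OM,fix}} = \frac{\mathrm{OPEX}_{\text{fix}}}{\mathrm{OCF}\cdot \mathrm{Cap}_{\mathrm{H_2}}}, \qquad
\mathrm{LCOX}_{\text{el}} = p_{\text{el}}\cdot\frac{E_{\text{el,in}}}{E_{\mathrm{H_2},\text{out}}},
$$

where $\mathrm{Cap}_{\mathrm{H_2}}$ is the hydrogen output capacity, $E_{\text{el,in}}$ the electricity input, and $E_{\mathrm{H_2},\text{out}}$ the hydrogen output.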

In [10]:
assumptions = pd.DataFrame.from_records([\n    {'elec_price_case': f\"Case {i}\", 'variable': 'Price|Electricity', 'unit': 'EUR_2020/MWh', 'value': 30 + (i-1)*25}\n    for i in range(1, 4)\n] + [\n    {'variable': 'Tech|Electrolysis|OCF', 'value': 50, 'unit': 'pct'},\n    {'variable': 'Annuity Factor', 'value': annuity_factor(Q('5 pct'), Q('18 a')).m, 'unit': '1/a'},\n])\ndisplay(assumptions)\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[10], line 6\n      1 assumptions = pd.DataFrame.from_records([\n      2     {'elec_price_case': f\"Case {i}\", 'variable': 'Price|Electricity', 'unit': 'EUR_2020/MWh', 'value': 30 + (i-1)*25}\n      3     for i in range(1, 4)\n      4 ] + [\n      5     {'variable': 'Tech|Electrolysis|OCF', 'value': 50, 'unit': 'pct'},\n----> 6     {'variable': 'Annuity Factor', 'value': annuity_factor(Q('5 pct'), Q('18 a')).m, 'unit': '1/a'},\n      7 ])\n      8 display(assumptions)\n\nNameError: name 'annuity_factor' is not defined
In [11]:
df_calc = pd.concat([\n        DataSet('Tech|Electrolysis').aggregate(period=[2030, 2040, 2050], subtech=['AEL', 'PEM'], agg=['size', 'source']),\n        assumptions,\n    ]).team.perform(CalcVariable(**{\n        'LCOX|Green Hydrogen|Capital Cost': lambda x: (x['Annuity Factor'] * x['Tech|Electrolysis|CAPEX'] / x['Tech|Electrolysis|Output Capacity|Hydrogen'] / x['Tech|Electrolysis|OCF']),\n        'LCOX|Green Hydrogen|OM Cost Fixed': lambda x: x['Tech|Electrolysis|OPEX Fixed'] / x['Tech|Electrolysis|Output Capacity|Hydrogen'] / x['Tech|Electrolysis|OCF'],\n        'LCOX|Green Hydrogen|Input Cost|Electricity': lambda x: x['Price|Electricity'] * x['Tech|Electrolysis|Input|Electricity'] / x['Tech|Electrolysis|Output|Hydrogen'],\n    }), only_new=True) \\\n    .team.unit_convert(to='EUR_2020/MWh')\n\ndisplay(df_calc.sample(15).sort_index())\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[11], line 3\n      1 df_calc = pd.concat([\n      2         DataSet('Tech|Electrolysis').aggregate(period=[2030, 2040, 2050], subtech=['AEL', 'PEM'], agg=['size', 'source']),\n----> 3         assumptions,\n      4     ]).team.perform(CalcVariable(**{\n      5         'LCOX|Green Hydrogen|Capital Cost': lambda x: (x['Annuity Factor'] * x['Tech|Electrolysis|CAPEX'] / x['Tech|Electrolysis|Output Capacity|Hydrogen'] / x['Tech|Electrolysis|OCF']),\n      6         'LCOX|Green Hydrogen|OM Cost Fixed': lambda x: x['Tech|Electrolysis|OPEX Fixed'] / x['Tech|Electrolysis|Output Capacity|Hydrogen'] / x['Tech|Electrolysis|OCF'],\n      7         'LCOX|Green Hydrogen|Input Cost|Electricity': lambda x: x['Price|Electricity'] * x['Tech|Electrolysis|Input|Electricity'] / x['Tech|Electrolysis|Output|Hydrogen'],\n      8     }), only_new=True) \\\n      9     .team.unit_convert(to='EUR_2020/MWh')\n     11 display(df_calc.sample(15).sort_index())\n\nNameError: name 'assumptions' is not defined
In [12]:
df_calc.team.varsplit('LCOX|Green Hydrogen|?component') \\\n    .sort_values(by=['elec_price_case', 'value']) \\\n    .plot.bar(x='period', y='value', color='component', facet_col='elec_price_case', facet_row='subtech')\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[12], line 1\n----> 1 df_calc.team.varsplit('LCOX|Green Hydrogen|?component') \\\n      2     .sort_values(by=['elec_price_case', 'value']) \\\n      3     .plot.bar(x='period', y='value', color='component', facet_col='elec_price_case', facet_row='subtech')\n\nNameError: name 'df_calc' is not defined

POSTED uses the pandas pivot mechanism (exposed as the team.pivot_wide accessor method) to bring the data into a wide format with one column per variable.
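For readers unfamiliar with pivoting, here is a minimal plain-pandas sketch of the same idea (the values are illustrative only; team.pivot_wide is POSTED's own, unit-aware variant):

import pandas as pd

# long format: one row per (period, variable) observation
long = pd.DataFrame({
    'period':   [2030, 2030, 2050, 2050],
    'variable': ['Tech|Electrolysis|CAPEX', 'Price|Electricity',
                 'Tech|Electrolysis|CAPEX', 'Price|Electricity'],
    'value':    [740.0, 30.0, 500.0, 30.0],
})

# wide format: one column per variable, ready for row-wise arithmetic
wide = long.pivot(index='period', columns='variable', values='value')
print(wide)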

In [13]:
pd.concat([\n        DataSet('Tech|Electrolysis').aggregate(period=[2030, 2040, 2050], subtech=['AEL', 'PEM'], agg=['size', 'source']),\n        assumptions,\n    ]).team.pivot_wide().pint.dequantify()\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[13], line 3\n      1 pd.concat([\n      2         DataSet('Tech|Electrolysis').aggregate(period=[2030, 2040, 2050], subtech=['AEL', 'PEM'], agg=['size', 'source']),\n----> 3         assumptions,\n      4     ]).team.pivot_wide().pint.dequantify()\n\nNameError: name 'assumptions' is not defined

POSTED also contains predefined methods for calculating LCOX. Here we apply it to blue and green hydrogen.

In [14]:
df_lcox_bluegreen = pd.concat([\n        pd.DataFrame.from_records([\n            {'elec_price_case': f\"Case {i}\", 'variable': 'Price|Electricity', 'unit': 'EUR_2020/MWh', 'value': 30 + (i-1)*25}\n            for i in range(1, 4)\n        ]),\n        pd.DataFrame.from_records([\n            {'ng_price_case': 'High' if i-1 else 'Low', 'variable': 'Price|Fossil Gas', 'unit': 'EUR_2020/MWh', 'value': 40 if i-1 else 20}\n            for i in range(1, 3)\n        ]),\n        DataSet('Tech|Electrolysis').aggregate(period=2030, subtech=['AEL', 'PEM'], agg=['size', 'subtech', 'source']),\n        DataSet('Tech|Methane Reforming').aggregate(period=2030, capture_rate=['55.70%', '94.50%'])\n            .team.varsplit('Tech|Methane Reforming|*comp')\n            .team.varcombine('{variable} {subtech} ({capture_rate})|{comp}')\n    ]) \\\n    .team.perform(\n        LCOX('Output|Hydrogen', 'Electrolysis', name='Green Hydrogen', interest_rate=0.1, book_lifetime=18),\n        LCOX('Output|Hydrogen', 'Methane Reforming SMR (55.70%)', name='Blue Hydrogen (Low CR)', interest_rate=0.1, book_lifetime=18),\n        LCOX('Output|Hydrogen', 'Methane Reforming ATR (94.50%)', name='Blue Hydrogen (High CR)', interest_rate=0.1, book_lifetime=18),\n        only_new=True,\n    ) \\\n    .team.unit_convert(to='EUR_2022/MWh')\n\ndisplay(df_lcox_bluegreen)\n
\n---------------------------------------------------------------------------\nTypeError                                 Traceback (most recent call last)\nCell In[14], line 16\n      1 df_lcox_bluegreen = pd.concat([\n      2         pd.DataFrame.from_records([\n      3             {'elec_price_case': f\"Case {i}\", 'variable': 'Price|Electricity', 'unit': 'EUR_2020/MWh', 'value': 30 + (i-1)*25}\n      4             for i in range(1, 4)\n      5         ]),\n      6         pd.DataFrame.from_records([\n      7             {'ng_price_case': 'High' if i-1 else 'Low', 'variable': 'Price|Fossil Gas', 'unit': 'EUR_2020/MWh', 'value': 40 if i-1 else 20}\n      8             for i in range(1, 3)\n      9         ]),\n     10         DataSet('Tech|Electrolysis').aggregate(period=2030, subtech=['AEL', 'PEM'], agg=['size', 'subtech', 'source']),\n     11         DataSet('Tech|Methane Reforming').aggregate(period=2030, capture_rate=['55.70%', '94.50%'])\n     12             .team.varsplit('Tech|Methane Reforming|*comp')\n     13             .team.varcombine('{variable} {subtech} ({capture_rate})|{comp}')\n     14     ]) \\\n     15     .team.perform(\n---> 16         LCOX('Output|Hydrogen', 'Electrolysis', name='Green Hydrogen', interest_rate=0.1, book_lifetime=18),\n     17         LCOX('Output|Hydrogen', 'Methane Reforming SMR (55.70%)', name='Blue Hydrogen (Low CR)', interest_rate=0.1, book_lifetime=18),\n     18         LCOX('Output|Hydrogen', 'Methane Reforming ATR (94.50%)', name='Blue Hydrogen (High CR)', interest_rate=0.1, book_lifetime=18),\n     19         only_new=True,\n     20     ) \\\n     21     .team.unit_convert(to='EUR_2022/MWh')\n     23 display(df_lcox_bluegreen)\n\nTypeError: LCOX.__init__() got multiple values for argument 'name'
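The TypeError indicates that one of the two positional arguments to LCOX already binds to its name parameter, so additionally passing name= as a keyword clashes. A hypothetical repair, under the assumption that the second positional parameter is the name (the actual signature is not documented in this notebook, so treat this purely as an illustration):

# hypothetical: if LCOX's second positional parameter is `name`, dropping the
# separate name= keyword removes the clash; how the process ('Electrolysis')
# is then selected depends on the real signature and is not verified here.
LCOX('Output|Hydrogen', 'Green Hydrogen', interest_rate=0.1, book_lifetime=18)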
In [15]:
df_lcox_bluegreen.team.varsplit('LCOX|?fuel|*comp') \\\n    .plot.bar(x='fuel', y='value', color='comp', facet_col='elec_price_case', facet_row='ng_price_case')\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[15], line 1\n----> 1 df_lcox_bluegreen.team.varsplit('LCOX|?fuel|*comp') \\\n      2     .plot.bar(x='fuel', y='value', color='comp', facet_col='elec_price_case', facet_row='ng_price_case')\n\nNameError: name 'df_lcox_bluegreen' is not defined

Let's calculate the levelised cost of green methanol (from electrolytic hydrogen). First we can do this simply based on a hydrogen price (i.e. without accounting for electrolysis).

In [16]:
df_lcox_meoh = pd.concat([\n        DataSet('Tech|Methanol Synthesis').aggregate(period=[2030, 2050]),\n        pd.DataFrame.from_records([\n            {'period': 2030, 'variable': 'Price|Hydrogen', 'unit': 'EUR_2022/MWh', 'value': 120},\n            {'period': 2050, 'variable': 'Price|Hydrogen', 'unit': 'EUR_2022/MWh', 'value': 80},\n            {'period': 2030, 'variable': 'Price|Captured CO2', 'unit': 'EUR_2022/t', 'value': 150},\n            {'period': 2050, 'variable': 'Price|Captured CO2', 'unit': 'EUR_2022/t', 'value': 100},\n        ]),\n    ]) \\\n    .team.perform(LCOX(\n        'Output|Methanol', 'Methanol Synthesis', name='Green Methanol',\n        interest_rate=0.1, book_lifetime=10.0), only_new=True,\n    ) \\\n    .team.unit_convert('EUR_2022/MWh')\n\ndisplay(df_lcox_meoh)\n
/Users/gmax/Documents/PIK_Job/posted_leo/python/posted/noslag.py:368: UserWarning:\n\nUnknown variable, so dropping rows:\n36    Emissions|CO2\n37    Emissions|CO2\nName: variable, dtype: object\n\n
\n---------------------------------------------------------------------------\nTypeError                                 Traceback (most recent call last)\nCell In[16], line 10\n      1 df_lcox_meoh = pd.concat([\n      2         DataSet('Tech|Methanol Synthesis').aggregate(period=[2030, 2050]),\n      3         pd.DataFrame.from_records([\n      4             {'period': 2030, 'variable': 'Price|Hydrogen', 'unit': 'EUR_2022/MWh', 'value': 120},\n      5             {'period': 2050, 'variable': 'Price|Hydrogen', 'unit': 'EUR_2022/MWh', 'value': 80},\n      6             {'period': 2030, 'variable': 'Price|Captured CO2', 'unit': 'EUR_2022/t', 'value': 150},\n      7             {'period': 2050, 'variable': 'Price|Captured CO2', 'unit': 'EUR_2022/t', 'value': 100},\n      8         ]),\n      9     ]) \\\n---> 10     .team.perform(LCOX(\n     11         'Output|Methanol', 'Methanol Synthesis', name='Green Methanol',\n     12         interest_rate=0.1, book_lifetime=10.0), only_new=True,\n     13     ) \\\n     14     .team.unit_convert('EUR_2022/MWh')\n     16 display(df_lcox_meoh)\n\nTypeError: LCOX.__init__() got multiple values for argument 'name'
In [17]:
df_lcox_meoh.team.varsplit('LCOX|Green Methanol|*component') \\\n    .plot.bar(x='period', y='value', color='component')\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[17], line 1\n----> 1 df_lcox_meoh.team.varsplit('LCOX|Green Methanol|*component') \\\n      2     .plot.bar(x='period', y='value', color='component')\n\nNameError: name 'df_lcox_meoh' is not defined

Next, we can calculate the LCOX of green methanol for the value chain consisting of electrolysis, low-temperature direct air capture, and methanol synthesis. The heat for the direct air capture is provided by an industrial heat pump.
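The chain itself is specified as a single string. Judging from the examples in this tutorial (an inferred reading, not formal documentation), 'A -> X' declares that process A produces flow X, 'X => B' feeds flow X into process B, and ';' separates parallel branches:

# inferred chain syntax, annotated:
#   'A -> X'   process A produces flow X
#   'X => B'   flow X is fed into process B
#   ';'        separates parallel branches of the chain
chain = (
    'Heatpump for DAC -> Heat => Direct Air Capture -> Captured CO2 => Methanol Synthesis;'
    'Electrolysis -> Hydrogen => Methanol Synthesis -> Methanol'
)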

In [18]:
pc_green_meoh = ProcessChain(\n    'Green Methanol',\n    {'Methanol Synthesis': {'Methanol': Q('1 MWh')}},\n    'Heatpump for DAC -> Heat => Direct Air Capture -> Captured CO2 => Methanol Synthesis;Electrolysis -> Hydrogen => Methanol Synthesis -> Methanol',\n)\n\ng, lay = pc_green_meoh.igraph()\nfig, ax = plt.subplots()\nax.set_title(pc_green_meoh.name)\nig.plot(g, target=ax, layout=lay, vertex_label=[n.replace(' ', '\\n') for n in g.vs['name']], edge_label=[n.replace(' ', '\\n') for n in g.es['name']], vertex_label_size=8, edge_label_size=6)\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[18], line 1\n----> 1 pc_green_meoh = ProcessChain(\n      2     'Green Methanol',\n      3     {'Methanol Synthesis': {'Methanol': Q('1 MWh')}},\n      4     'Heatpump for DAC -> Heat => Direct Air Capture -> Captured CO2 => Methanol Synthesis;Electrolysis -> Hydrogen => Methanol Synthesis -> Methanol',\n      5 )\n      7 g, lay = pc_green_meoh.igraph()\n      8 fig, ax = plt.subplots()\n\nNameError: name 'ProcessChain' is not defined
In [19]:
df_lcox_meoh_pc = pd.concat([\n        DataSet('Tech|Electrolysis').aggregate(period=[2030, 2050], subtech=['AEL', 'PEM'], size=['1 MW', '100 MW'], agg=['subtech', 'size', 'source']),\n        DataSet('Tech|Direct Air Capture').aggregate(period=[2030, 2050], subtech='LT-DAC'),\n        DataSet('Tech|Heatpump for DAC').aggregate(period=[2030, 2050]),\n        DataSet('Tech|Methanol Synthesis').aggregate(period=[2030, 2050]),\n        pd.DataFrame.from_records([\n            {'period': 2030, 'variable': 'Price|Electricity', 'unit': 'EUR_2022/MWh', 'value': 50},\n            {'period': 2050, 'variable': 'Price|Electricity', 'unit': 'EUR_2022/MWh', 'value': 30},\n        ]),\n    ]) \\\n    .team.perform(pc_green_meoh) \\\n    .team.perform(LCOX(\n        'Methanol Synthesis|Output|Methanol', process_chain='Green Methanol',\n        interest_rate=0.1, book_lifetime=10.0,\n    ), only_new=True) \\\n    .team.unit_convert('EUR_2022/MWh')\n\ndisplay(df_lcox_meoh_pc)\n
/Users/gmax/Documents/PIK_Job/posted_leo/python/posted/noslag.py:368: UserWarning:\n\nUnknown variable, so dropping rows:\n36    Emissions|CO2\n37    Emissions|CO2\nName: variable, dtype: object\n\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[19], line 11\n      1 df_lcox_meoh_pc = pd.concat([\n      2         DataSet('Tech|Electrolysis').aggregate(period=[2030, 2050], subtech=['AEL', 'PEM'], size=['1 MW', '100 MW'], agg=['subtech', 'size', 'source']),\n      3         DataSet('Tech|Direct Air Capture').aggregate(period=[2030, 2050], subtech='LT-DAC'),\n      4         DataSet('Tech|Heatpump for DAC').aggregate(period=[2030, 2050]),\n      5         DataSet('Tech|Methanol Synthesis').aggregate(period=[2030, 2050]),\n      6         pd.DataFrame.from_records([\n      7             {'period': 2030, 'variable': 'Price|Electricity', 'unit': 'EUR_2022/MWh', 'value': 50},\n      8             {'period': 2050, 'variable': 'Price|Electricity', 'unit': 'EUR_2022/MWh', 'value': 30},\n      9         ]),\n     10     ]) \\\n---> 11     .team.perform(pc_green_meoh) \\\n     12     .team.perform(LCOX(\n     13         'Methanol Synthesis|Output|Methanol', process_chain='Green Methanol',\n     14         interest_rate=0.1, book_lifetime=10.0,\n     15     ), only_new=True) \\\n     16     .team.unit_convert('EUR_2022/MWh')\n     18 display(df_lcox_meoh_pc)\n\nNameError: name 'pc_green_meoh' is not defined
In [20]:
df_lcox_meoh_pc.team.varsplit('LCOX|Green Methanol|?process|*component') \\\n    .plot.bar(x='period', y='value', color='component', hover_data='process')\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[20], line 1\n----> 1 df_lcox_meoh_pc.team.varsplit('LCOX|Green Methanol|?process|*component') \\\n      2     .plot.bar(x='period', y='value', color='component', hover_data='process')\n\nNameError: name 'df_lcox_meoh_pc' is not defined
In [21]:
pc_green_ethylene = ProcessChain(\n    'Green Ethylene',\n    {'Electric Arc Furnace': {'Ethylene': Q('1t')}},\n    'Electrolysis -> Hydrogen => Methanol Synthesis; Heatpump for DAC -> Heat => Direct Air Capture -> Captured CO2 => Methanol Synthesis -> Methanol => Methanol to Olefines -> Ethylene',\n)\n\ng, lay = pc_green_ethylene.igraph()\nfig, ax = plt.subplots()\nax.set_title(pc_green_ethylene.name)\nig.plot(g, target=ax, layout=lay, vertex_label=[n.replace(' ', '\\n') for n in g.vs['name']], edge_label=[n.replace(' ', '\\n') for n in g.es['name']], vertex_label_size=8, edge_label_size=6)\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[21], line 1\n----> 1 pc_green_ethylene = ProcessChain(\n      2     'Green Ethylene',\n      3     {'Electric Arc Furnace': {'Ethylene': Q('1t')}},\n      4     'Electrolysis -> Hydrogen => Methanol Synthesis; Heatpump for DAC -> Heat => Direct Air Capture -> Captured CO2 => Methanol Synthesis -> Methanol => Methanol to Olefines -> Ethylene',\n      5 )\n      7 g, lay = pc_green_ethylene.igraph()\n      8 fig, ax = plt.subplots()\n\nNameError: name 'ProcessChain' is not defined
In [22]:
pc_green_steel = ProcessChain(\n    'Green Steel (H2-DR)',\n    {'Steel Hot Rolling': {'Steel Hot-rolled Coil': Q('1t')}},\n    'Electrolysis -> Hydrogen => Iron Direct Reduction -> Directly Reduced Iron => Electric Arc Furnace -> Steel Liquid => Steel Casting -> Steel Slab => Steel Hot Rolling -> Steel Hot-rolled Coil',\n)\n\ng, lay = pc_green_steel.igraph()\nfig, ax = plt.subplots()\nax.set_title(pc_green_steel.name)\nig.plot(g, target=ax, layout=lay, vertex_label=[n.replace(' ', '\\n') for n in g.vs['name']], edge_label=[n.replace(' ', '\\n') for n in g.es['name']], vertex_label_size=8, edge_label_size=6)\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[22], line 1\n----> 1 pc_green_steel = ProcessChain(\n      2     'Green Steel (H2-DR)',\n      3     {'Steel Hot Rolling': {'Steel Hot-rolled Coil': Q('1t')}},\n      4     'Electrolysis -> Hydrogen => Iron Direct Reduction -> Directly Reduced Iron => Electric Arc Furnace -> Steel Liquid => Steel Casting -> Steel Slab => Steel Hot Rolling -> Steel Hot-rolled Coil',\n      5 )\n      7 g, lay = pc_green_steel.igraph()\n      8 fig, ax = plt.subplots()\n\nNameError: name 'ProcessChain' is not defined
In\u00a0[23]:
df_lcox_green_steel = pd.concat([\n        DataSet('Tech|Electrolysis').aggregate(period=2030, subtech=['AEL', 'PEM'], size=['1 MW', '100 MW'], agg=['subtech', 'size', 'source'], override={'Tech|ELH2|Output Capacity|Hydrogen': 'kW;LHV'}),\n        DataSet('Tech|Iron Direct Reduction').aggregate(period=2030, mode='h2'),\n        DataSet('Tech|Electric Arc Furnace').aggregate(period=2030, mode='Primary'),\n        DataSet('Tech|Steel Casting').aggregate(period=2030),\n        DataSet('Tech|Steel Hot Rolling').aggregate(period=2030),\n        pd.DataFrame({'price_case': range(30, 60, 10), 'variable': 'Price|Electricity', 'unit': 'EUR_2020/MWh', 'value': range(30, 60, 10)}),\n    ]) \\\n    .team.perform(pc_green_steel) \\\n    .team.perform(LCOX(\n        'Steel Hot Rolling|Output|Steel Hot-rolled Coil', process_chain='Green Steel (H2-DR)',\n        interest_rate=0.1, book_lifetime=10.0,\n    ), only_new=True) \\\n    .team.unit_convert('EUR_2022/t')\n\ndisplay(df_lcox_green_steel)\n
In\u00a0[24]:
df_lcox_green_steel.team.varsplit('LCOX|Green Steel (H2-DR)|?process|*component') \\\n    .plot.bar(x='price_case', y='value', color='component', hover_data='process', facet_col='reheating')\n
In\u00a0[25]:
df_lcox_cement = pd.concat([\n        DataSet('Tech|Cement Production').aggregate(period=2030),\n        pd.DataFrame.from_records([\n            {'variable': 'Price|Electricity', 'unit': 'EUR_2022/MWh', 'value': 50},\n            {'variable': 'Price|Coal', 'unit': 'EUR_2022/GJ', 'value': 3},\n            {'variable': 'Price|Oxygen', 'unit': 'EUR_2022/t', 'value': 30},\n            {'variable': 'Price|Captured CO2', 'unit': 'EUR_2022/t', 'value': -30},\n        ]),\n    ]) \\\n    .team.perform(LCOX(\n        'Output|Cement', 'Cement Production',\n        interest_rate=0.1, book_lifetime=10.0,\n    ), only_new=True) \\\n    .team.unit_convert('EUR_2022/t')\n\ndisplay(df_lcox_cement)\n

We first sort the dataframe by total LCOX for each subtech.

In\u00a0[26]:
df_lcox_cement.team.varsplit('?variable|?process|*component') \\\n    .groupby('subtech') \\\n    .apply(lambda df: df.assign(order=df['value'].sum()), include_groups=False) \\\n    .sort_values(by='order') \\\n    .reset_index() \\\n    .plot.bar(x='subtech', y='value', color='component', hover_data='process')\n
"},{"location":"tutorials/python/overview/#main-posted-tutorial-for-python","title":"Main POSTED tutorial for python\u00b6","text":""},{"location":"tutorials/python/overview/#prerequisits","title":"Prerequisits\u00b6","text":""},{"location":"tutorials/python/overview/#dependencies","title":"Dependencies\u00b6","text":""},{"location":"tutorials/python/overview/#importing-posted","title":"Importing POSTED\u00b6","text":""},{"location":"tutorials/python/overview/#noslag","title":"NOSLAG\u00b6","text":""},{"location":"tutorials/python/overview/#electrolysis-capex","title":"Electrolysis CAPEX\u00b6","text":""},{"location":"tutorials/python/overview/#energy-demand-of-green-vs-blue-hydrogen-production","title":"Energy demand of green vs. blue hydrogen production\u00b6","text":""},{"location":"tutorials/python/overview/#energy-demand-of-iron-direct-reduction","title":"Energy demand of iron direct reduction\u00b6","text":""},{"location":"tutorials/python/overview/#energy-demand-of-haber-bosch-synthesis","title":"Energy demand of Haber-Bosch synthesis\u00b6","text":""},{"location":"tutorials/python/overview/#team","title":"TEAM\u00b6","text":""},{"location":"tutorials/python/overview/#calcvariable","title":"CalcVariable\u00b6","text":""},{"location":"tutorials/python/overview/#pivot","title":"Pivot\u00b6","text":""},{"location":"tutorials/python/overview/#lcox-of-blue-and-green-hydrogen","title":"LCOX of blue and green hydrogen\u00b6","text":""},{"location":"tutorials/python/overview/#lcox-of-methanol","title":"LCOX of methanol\u00b6","text":""},{"location":"tutorials/python/overview/#lcox-of-green-ethylene-from-green-methanol","title":"LCOX of green ethylene (from green methanol)\u00b6","text":""},{"location":"tutorials/python/overview/#lcox-of-green-steel","title":"LCOX of green steel\u00b6","text":""},{"location":"tutorials/python/overview/#lcox-of-cement-w-and-wo-cc","title":"LCOX of cement w/ and w/o CC\u00b6","text":""}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to POSTED's documentation","text":"

POSTED (the Potsdam Open-Source Techno-Economic Database) is a public database of techno-economic data on energy and climate-mitigation technologies, along with a framework for consistent data handling and an open-source toolbox for techno-economic assessments (TEA). In particular, it provides a structure for and contains data on investment cost, energy and feedstock demand, other fixed and variable costs, emissions intensities, and other characteristics of conversion, storage, and transportation technologies in the energy and related sectors. The accompanying software code is intended for consistent maintenance of this data and for deriving straight-forward results from them, such as levelised cost, greenhouse-gas emission intensities, or marginal abatement cost.

"},{"location":"installation/","title":"How to work with and contribute to POSTED","text":"

If you want to use POSTED or even contribute to its development, you first need to get/install POSTED in one of two ways:

"},{"location":"installation/#installing-posted-as-a-package","title":"Installing POSTED as a package","text":"PythonR

You can install the posted Python package via:

# when using poetry\npoetry add git+https://github.com/PhilippVerpoort/posted.git\n\n# when using pip\npip install git+https://github.com/PhilippVerpoort/posted.git\n
A PyPI package will be made available at a later stage.

You can install the posted R package using install_github from devtools via:

install_github('PhilippVerpoort/posted')\n
A CRAN package will be made available at a later stage.

This will allow you to use the data contained in POSTED's public database and general-purpose functions from the NOSLAG and TEAM frameworks.

"},{"location":"installation/#cloning-posted-from-source","title":"Cloning POSTED from source","text":""},{"location":"methodology/","title":"Methodology of POSTED","text":""},{"location":"methodology/#purpose-and-aims","title":"Purpose and aims","text":"

The development of the POSTED framework pursues several goals:

Obtain a comprehensive collection of techno-economic data. The data needed for techno-economic assessments and various types of modelling is often scattered across many sources and formats. One aim of POSTED is to collect all required data in one place with a consistent format. This data will forever be publicly available under a permissive licence in order to overcome existing barriers of collaboration.

Make data easily available for manipulation and techno-economic assessments. Techno-economic data often comes in the form of Excel Spreadsheets, which is difficult to work with when performing assessments. Calculating the levelized cost of production or comparing parameters across different sources should be easy and straightforward with a few lines of code.

Make data sources traceable and transparent. When working with techno-economic data, the origin and underlying assumptions are often opaque and hard to trace. By being explicit about sources and reporting data only in the original units, the process of data curation becomes more open and transparent. Developing and explicating clear standards also helps avoid misunderstandings.

Be extensible to meet users\u2019 requirements. The POSTED database can be extended, allowing users to meet the requirements of their own projects.

"},{"location":"methodology/#database-format","title":"Database format","text":"

POSTED is extensible via public and private databases. The public database is part of the public GitHub repository and is located in the inst/extdata subdirectory. Private project-specific databases can be added to POSTED by adding a respective database path to the databases dictionary of the path module before executing any other POSTED code.

PythonR
from pathlib import Path\nfrom posted.path import databases\ndatabases |= {'database_name': Path('/path/to/database/')}\n
library(POSTED)\ndatabases$database_name <- \"/path/to/database/\"\n

The public database is intended for the curation of a comprehensive set of general-purpose resources that should suit the needs of most users. Private databases may be used for low-threshold extensibility, for more project-specific work that is not in the interest of a broad audience, or for confidential data that cannot be made available publicly.

The format mandates the following components for all databases. If these components have contents, they should be placed as subdirectories in the database directory (see here: https://github.com/PhilippVerpoort/posted/tree/refactoring/inst/extdata/database).

TEDFs, fields, and masks are organised in a hierarchical system of variable definitions. This means that the file .../database/tedfs/Tech/Electrolysis.csv defines entries for variables Tech|Electrolysis|..., and so on. The values in the variable and reference_variable columns of a TEDF are appended to the parent variable that is defined by the file's path.
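As a minimal sketch of this composition (the column value below is hypothetical, chosen only for illustration):

parent_variable = 'Tech|Electrolysis'  # implied by .../database/tedfs/Tech/Electrolysis.csv\nvariable_col = 'CAPEX'  # hypothetical entry in the 'variable' column\nfull_variable = f'{parent_variable}|{variable_col}'  # -> 'Tech|Electrolysis|CAPEX'\n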

"},{"location":"methodology/#flow-types","title":"Flow types","text":"

POSTED defines flow types, which are used throughout the TEDF format and the NOSLAG and unit frameworks. Flow types may be energy carriers (electricity, heat, fossil gas, hydrogen, etc.), feedstocks (naphtha, ethylene, carbon dioxide), or materials (steel, cement, etc.).

They are defined in the flow_types.csv file in each database. Flow types may be overridden by other databases in the order in which the databases are added to POSTED (i.e. private databases will normally override the public database). Flow types are also automatically loaded as tags for the variable definitions.

Flow types come with a unique ID, the so-called flow_id, which is used throughout POSTED (Electricity, Hydrogen, Ammonia, etc.). Moreover, the following attributes may be defined for them:

Attributes can be assigned a source by adding the respective BibTeX handle (see below) in the source column.

"},{"location":"methodology/#technology-types","title":"Technology types","text":"

POSTED defines technology types, which are used throughout the TEDF format and the NOSLAG framework. Technology types should represent generic classes of technologies (electrolysis, electric-arc furnaces, direct-air capture, etc.).

Technologies are defined in the tech_types.csv file in each database. Technology types may be overridden by other databases in the order in which the databases are added to POSTED (i.e. private databases will normally override the public database). Technology types are also automatically loaded as tags for the variable definitions.

Technology types come with a unique ID, the so-called tech_id, which is used throughout POSTED (Electrolysis for water electrolysis, Haber-Bosch with ASU for Haber-Bosch synthesis with an air-separation unit, Iron Direct Reduction for direct reduction of iron ore based on either fossil gas or hydrogen, etc.). Moreover, the following attributes may be defined for them in separate columns:

"},{"location":"methodology/#sources","title":"Sources","text":""},{"location":"methodology/#techno-economic-data-files-tedfs","title":"Techno-economic data files (TEDFs)","text":""},{"location":"methodology/#base-format","title":"Base format","text":"

The TEDF base format contains the following columns:

PythonR

The base columns in Python are defined here.

The base columns in R are defined here.

Columns that are not found in a CSV file will be added by POSTED and set to the default value of the column type.

If one wants to specify additional columns, these need to be defined as fields in one of the databases.

By placing an asterisk (*) in the period, source, or any field column, POSTED expands these rows across all possible values of these columns in the harmonise method of the NOSLAG framework, as sketched below.
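A minimal hypothetical sketch of such rows (the column subset, source handles, and values are invented for illustration; the actual columns are the base columns described above):

variable,region,period,source,value,unit\nCAPEX,World,*,SourceA,450,EUR/kW\nCAPEX,World,2030,SourceB,400,EUR/kW\n

Here, the asterisk in the first row's period column would be expanded across all period values present for this variable during harmonisation.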

"},{"location":"methodology/#fields","title":"Fields","text":"

Fields can create additional columns for specific variables. Fields can currently be one of three types:

"},{"location":"methodology/#masks","title":"Masks","text":"

To be written.

"},{"location":"methodology/#variable-definitions","title":"Variable definitions","text":"

To be written.

See IAMC format: https://github.com/IAMconsortium/common-definitions

"},{"location":"methodology/#units","title":"Units","text":"

To be written.

See pint: https://pint.readthedocs.io/en/stable/

See IAMC units: https://github.com/IAMconsortium/units/

"},{"location":"methodology/#the-normalise-select-aggregate-noslag-framework","title":"The Normalise-Select-Aggregate (NOSLAG) framework","text":"

To be written.

"},{"location":"methodology/#the-techno-economic-assessment-and-manipulation-team-framework","title":"The Techno-economic Assessment and Manipulation (TEAM) framework","text":""},{"location":"methodology/#levelised-cost-of-x-lcox","title":"Levelised Cost of X (LCOX)","text":"

The levelised cost of activity X can be calculated via POSTED based on the following convention: $$ \\mathrm{LCOX} = \\frac{\\mathrm{Capital~Cost} + \\mathrm{OM~Fixed} + \\mathrm{OM~Variable} + \\sum_f \\mathrm{Input~Cost}_f - \\sum_f \\mathrm{Output~Revenue}_f}{\\mathrm{Activity}_X} $$

This is based on the following cost components:

\\[ \\mathrm{Capital~Cost} = \\frac{\\mathrm{ANF} \\times \\mathrm{CAPEX}}{\\mathrm{OCF} \\times \\mathrm{Reference~Capacity}} \\times \\mathrm{Reference~Flow} \\] \\[ \\mathrm{OM~Fixed} = \\frac{\\mathrm{OPEX~Fixed}}{\\mathrm{OCF} \\times \\mathrm{Reference~Capacity}} \\times \\mathrm{Reference~Flow} \\] \\[ \\mathrm{OM~Variable} = \\mathrm{OPEX~Variable} \\] \\[ \\mathrm{Input~Cost}_f = \\mathrm{Price}_f \\times \\mathrm{Input}_f \\] \\[ \\mathrm{Output~Revenue}_f = \\mathrm{Price}_f \\times \\mathrm{Output}_f \\]

with \\(\\mathrm{ANF} = \\frac{\\mathrm{IR} * (1 + \\mathrm{IR})^\\mathrm{BL} / ((1 + \\mathrm{IR})^\\mathrm{BL} - 1)}{\\mathrm{yr}}\\) based on the Interest Rate (IR) and Book Lifetime (BL). The \\(\\mathrm{Reference~Capacity}\\) is the capacity that the CAPEX and OPEX Fixed variables are defined in reference to (e.g. Input Capacity|Electricity or Output Capacity|Methanol), and the \\(\\mathrm{Reference~Flow}\\) is the associated flow. Moreover, \\(\\mathrm{Activity}_X\\) is one of Output|X (with X being Hydrogen, Methanol, Ammonia, etc), Input|X (with X being e.g. Waste), or Service|X (with X being e.g. Passenger Kilometers).

"},{"location":"methodology/#process-chains","title":"Process chains","text":"

Process chains, i.e. combinations of processes that feed inputs and outputs into each other, can be defined in POSTED before performing an LCOX analysis, as in the sketch below.
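A minimal sketch, reusing the green-steel chain from the Python tutorial above (the import paths are assumptions; in the chain string, Process -> Flow names an output flow of a process, => feeds that flow into the next process, and ; separates branches):

from posted.team import ProcessChain  # assumed import path\nfrom posted.units import Q  # assumed import path\n\npc_green_steel = ProcessChain(\n    'Green Steel (H2-DR)',\n    {'Steel Hot Rolling': {'Steel Hot-rolled Coil': Q('1t')}},\n    'Electrolysis -> Hydrogen => Iron Direct Reduction -> Directly Reduced Iron => Electric Arc Furnace -> Steel Liquid => Steel Casting -> Steel Slab => Steel Hot Rolling -> Steel Hot-rolled Coil',\n)\n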

For a process chain consisting of processes \\(P = \\{p_1, p_2, \\ldots\\}\\) we can define feeds \\(C^\\mathrm{Flow}_{p_1\\rightarrow p_2}\\) for Flow being fed from process \\(p_1\\) to process \\(p_2\\). Moreover, we can define demand \\(\\mathrm{Demand|Flow}_{p_1}\\) for the Flow demanded from process \\(p_1\\). This results in the following linear equation for functional process units \\(\\mathrm{Functional~Unit}_{p_1}\\):

\\[ \\mathrm{Output|Flow}_{p_1} \\times \\mathrm{Functional~Unit}_{p_1} = \\sum_{p_2} \\mathrm{Input|Flow}_{p_2} \\times \\mathrm{Functional~Unit}_{p_2} \\times C^\\mathrm{Flow}_{p_1\\rightarrow p_2} + \\mathrm{Demand|Flow}_{p_1} \\]"},{"location":"code/R/functions/AbstractColumnDefinition/","title":"AbstractColumnDefinition","text":""},{"location":"code/R/functions/AbstractColumnDefinition/#abstractcolumndefinition","title":"AbstractColumnDefinition","text":""},{"location":"code/R/functions/AbstractColumnDefinition/#description","title":"Description","text":"

Abstract class to store columns

"},{"location":"code/R/functions/AbstractColumnDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/AbstractColumnDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/AbstractColumnDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the AbstractColumnDefinition class

Usage

AbstractColumnDefinition$new(col_type, name, description, dtype, required)\n

Arguments:

"},{"location":"code/R/functions/AbstractColumnDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

AbstractColumnDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/AbstractColumnDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

AbstractColumnDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/AbstractFieldDefinition/","title":"AbstractFieldDefinition","text":""},{"location":"code/R/functions/AbstractFieldDefinition/#abstractfielddefinition","title":"AbstractFieldDefinition","text":""},{"location":"code/R/functions/AbstractFieldDefinition/#description","title":"Description","text":"

Abstract class to store fields

"},{"location":"code/R/functions/AbstractFieldDefinition/#examples","title":"Examples","text":"
### ------------------------------------------------\n### Method `AbstractFieldDefinition$select_and_expand`\n### ------------------------------------------------\n\n## Example usage:\n## select_and_expand(df, \"col_id\", field_vals = NULL)\n
"},{"location":"code/R/functions/AbstractFieldDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/AbstractFieldDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/AbstractFieldDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the AbstractFieldDefinition Class

Usage

AbstractFieldDefinition$new(\n  field_type,\n  name,\n  description,\n  dtype,\n  coded,\n  codes = NULL\n)\n

Arguments:

"},{"location":"code/R/functions/AbstractFieldDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

AbstractFieldDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/AbstractFieldDefinition/#method-select_and_expand","title":"Method select_and_expand()","text":"

Select and expand fields which are valid for multiple periods or other field vals

Usage

AbstractFieldDefinition$select_and_expand(df, col_id, field_vals = NA, ...)\n

Arguments:

Example:

## Example usage:\n## select_and_expand(df, \"col_id\", field_vals = NULL)\n

Returns:

DataFrame where fields are selected and expanded

"},{"location":"code/R/functions/AbstractFieldDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

AbstractFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/CommentDefinition/","title":"CommentDefinition","text":""},{"location":"code/R/functions/CommentDefinition/#commentdefinition","title":"CommentDefinition","text":""},{"location":"code/R/functions/CommentDefinition/#description","title":"Description","text":"

Class to store comment columns

"},{"location":"code/R/functions/CommentDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/CommentDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/CommentDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the CommentDefinition Class

Usage

CommentDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/functions/CommentDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

CommentDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/CommentDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

CommentDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/CustomFieldDefinition/","title":"CustomFieldDefinition","text":""},{"location":"code/R/functions/CustomFieldDefinition/#customfielddefinition","title":"CustomFieldDefinition","text":""},{"location":"code/R/functions/CustomFieldDefinition/#description","title":"Description","text":"

Class to store Custom fields

"},{"location":"code/R/functions/CustomFieldDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/CustomFieldDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/CustomFieldDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the CustomFieldDefinition class

Usage

CustomFieldDefinition$new(field_specs)\n

Arguments:

"},{"location":"code/R/functions/CustomFieldDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

CustomFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/DataSet/","title":"DataSet","text":""},{"location":"code/R/functions/DataSet/#dataset","title":"DataSet","text":""},{"location":"code/R/functions/DataSet/#description","title":"Description","text":"

This class provides methods to store, normalize, select, and aggregate DataSets.

"},{"location":"code/R/functions/DataSet/#examples","title":"Examples","text":"
### ------------------------------------------------\n### Method `DataSet$normalise`\n### ------------------------------------------------\n\n## Example usage:\ndataset$normalise(override = list(\"variable1\" = \"value1\"), inplace = FALSE)\n\n\n### ------------------------------------------------\n### Method `DataSet$select`\n### ------------------------------------------------\n\n## Example usage:\ndataset$select(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, field1 = \"value1\")\n\n\n### ------------------------------------------------\n### Method `DataSet$aggregate`\n### ------------------------------------------------\n\n## Example usage:\ndataset$aggregate(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, agg = \"field\", masks = list(mask1, mask2), masks_database = TRUE)\n
"},{"location":"code/R/functions/DataSet/#methods","title":"Methods","text":""},{"location":"code/R/functions/DataSet/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/DataSet/#method-new","title":"Method new()","text":"

Create new instance of the DataSet class

Usage

DataSet$new(\n  parent_variable,\n  include_databases = NULL,\n  file_paths = NULL,\n  check_inconsistencies = FALSE,\n  data = NULL\n)\n

Arguments:

"},{"location":"code/R/functions/DataSet/#method-normalise","title":"Method normalise()","text":"

Normalize data: default reference units, reference value equal to 1.0, default reported units

Usage

DataSet$normalise(override = NULL, inplace = FALSE)\n

Arguments:

Example:

## Example usage:\ndataset$normalise(override = list(\"variable1\" = \"value1\"), inplace = FALSE)\n

Returns:

DataFrame. If inplace is FALSE, returns normalized dataframe.

"},{"location":"code/R/functions/DataSet/#method-select","title":"Method select()","text":"

Select desired data from the dataframe

Usage

DataSet$select(\n  override = NULL,\n  drop_singular_fields = TRUE,\n  extrapolate_period = TRUE,\n  ...\n)\n

Arguments:

Example:

## Example usage:\ndataset$select(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, field1 = \"value1\")\n

Returns:

DataFrame. DataFrame with selected values.

"},{"location":"code/R/functions/DataSet/#method-aggregate","title":"Method aggregate()","text":"

Aggregates data based on specified parameters, applies masks, and cleans up the resulting DataFrame.

Usage

DataSet$aggregate(\n  override = NULL,\n  drop_singular_fields = TRUE,\n  extrapolate_period = TRUE,\n  agg = NULL,\n  masks = NULL,\n  masks_database = TRUE,\n  ...\n)\n

Arguments:

Example:

## Example usage:\ndataset$aggregate(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, agg = \"field\", masks = list(mask1, mask2), masks_database = TRUE)\n

Returns:

DataFrame. The aggregate method returns a DataFrame that has been cleaned up and aggregated based on the specified parameters and input data. The method aggregates over component and case fields, applies weights based on masks, drops rows with NaN weights, aggregates with weights, inserts reference variables, sorts columns and rows, rounds values, and inserts units before returning the final cleaned and aggregated DataFrame.

"},{"location":"code/R/functions/DataSet/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

DataSet$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/Mask/","title":"Mask","text":""},{"location":"code/R/functions/Mask/#mask","title":"Mask","text":""},{"location":"code/R/functions/Mask/#description","title":"Description","text":"

Class to define masks with conditions and weights to apply to DataFiles

"},{"location":"code/R/functions/Mask/#methods","title":"Methods","text":""},{"location":"code/R/functions/Mask/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/Mask/#method-new","title":"Method new()","text":"

Create a new mask object

Usage

Mask$new(where = NULL, use = NULL, weight = NULL, other = NaN, comment = \"\")\n

Arguments:

"},{"location":"code/R/functions/Mask/#method-matches","title":"Method matches()","text":"

Check if a mask matches a dataframe by verifying if all 'where' conditions match across all rows.

Usage

Mask$matches(df)\n

Arguments:

Returns:

Logical. TRUE if the mask matches the dataframe, FALSE otherwise.

"},{"location":"code/R/functions/Mask/#method-get_weights","title":"Method get_weights()","text":"

Apply weights to the dataframe

Usage

Mask$get_weights(df)\n

Arguments:

Returns:

Dataframe. Dataframe with applied weights

"},{"location":"code/R/functions/Mask/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

Mask$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/PeriodFieldDefinition/","title":"PeriodFieldDefinition","text":""},{"location":"code/R/functions/PeriodFieldDefinition/#periodfielddefinition","title":"PeriodFieldDefinition","text":""},{"location":"code/R/functions/PeriodFieldDefinition/#description","title":"Description","text":"

Class to store Period fields

"},{"location":"code/R/functions/PeriodFieldDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/PeriodFieldDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/PeriodFieldDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the PeriodFieldDefinition Class

Usage

PeriodFieldDefinition$new(name, description)\n

Arguments:

"},{"location":"code/R/functions/PeriodFieldDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

PeriodFieldDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/PeriodFieldDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

PeriodFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/RegionFieldDefinition/","title":"RegionFieldDefinition","text":""},{"location":"code/R/functions/RegionFieldDefinition/#regionfielddefinition","title":"RegionFieldDefinition","text":""},{"location":"code/R/functions/RegionFieldDefinition/#description","title":"Description","text":"

Class to store Region fields

"},{"location":"code/R/functions/RegionFieldDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/RegionFieldDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/RegionFieldDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the RegionFieldDefinition class

Usage

RegionFieldDefinition$new(name, description)\n

Arguments:

"},{"location":"code/R/functions/RegionFieldDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

RegionFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/SourceFieldDefinition/","title":"SourceFieldDefinition","text":""},{"location":"code/R/functions/SourceFieldDefinition/#sourcefielddefinition","title":"SourceFieldDefinition","text":""},{"location":"code/R/functions/SourceFieldDefinition/#description","title":"Description","text":"

Class to store Source fields

"},{"location":"code/R/functions/SourceFieldDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/SourceFieldDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/SourceFieldDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the SourceFieldDefinition class

Usage

SourceFieldDefinition$new(name, description)\n

Arguments:

"},{"location":"code/R/functions/SourceFieldDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

SourceFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/TEBase/","title":"TEBase","text":""},{"location":"code/R/functions/TEBase/#tebase","title":"TEBase","text":""},{"location":"code/R/functions/TEBase/#description","title":"Description","text":"

This is the base class for technoeconomic data.

"},{"location":"code/R/functions/TEBase/#examples","title":"Examples","text":"
## Example usage:\nbase_technoeconomic_data <- TEBase$new(\"variable_name\")\n
"},{"location":"code/R/functions/TEBase/#methods","title":"Methods","text":""},{"location":"code/R/functions/TEBase/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/TEBase/#method-new","title":"Method new()","text":"

Create new instance of TEBase class. Set parent variable and technology specifications (var_specs) from input

Usage

TEBase$new(parent_variable)\n

Arguments:

"},{"location":"code/R/functions/TEBase/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

TEBase$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/TEDF/","title":"TEDF","text":""},{"location":"code/R/functions/TEDF/#tedf","title":"TEDF","text":""},{"location":"code/R/functions/TEDF/#description","title":"Description","text":"

This class is used to store Technoeconomic DataFiles.

"},{"location":"code/R/functions/TEDF/#examples","title":"Examples","text":"
## Example usage:\ntedf <- TEDF$new(\"variable_name\")\ntedf$load()\ntedf$read(\"file_path.csv\")\ntedf$write(\"output_file_path.csv\")\ntedf$check()\ntedf$check_row()\n\n\n### ------------------------------------------------\n### Method `TEDF$load`\n### ------------------------------------------------\n\n## Example usage:\ntedf$load()\n\n\n### ------------------------------------------------\n### Method `TEDF$read`\n### ------------------------------------------------\n\n## Example usage:\ntedf$read()\n\n\n### ------------------------------------------------\n### Method `TEDF$write`\n### ------------------------------------------------\n\n## Example usage:\ntedf$write()\n\n\n### ------------------------------------------------\n### Method `TEDF$check`\n### ------------------------------------------------\n\n## Example usage:\ntedf$check(raise_exception = TRUE)\n
"},{"location":"code/R/functions/TEDF/#methods","title":"Methods","text":""},{"location":"code/R/functions/TEDF/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/TEDF/#method-new","title":"Method new()","text":"

Create new instance of TEDF class. Initialise parent class and object fields

Usage

TEDF$new(\n  parent_variable,\n  database_id = \"public\",\n  file_path = NULL,\n  data = NULL\n)\n

Arguments:

"},{"location":"code/R/functions/TEDF/#method-load","title":"Method load()","text":"

Load TEDataFile (only if it has not been read yet)

Usage

TEDF$load()\n

Example:

## Example usage:\ntedf$load()\n

Returns:

TEDF. Returns the TEDF object it is called on.

"},{"location":"code/R/functions/TEDF/#method-read","title":"Method read()","text":"

This method reads TEDF from a CSV file.

Usage

TEDF$read()\n

Example:

## Example usage:\ntedf$read()\n

"},{"location":"code/R/functions/TEDF/#method-write","title":"Method write()","text":"

Write TEDF to a CSV file.

Usage

TEDF$write()\n

Example:

## Example usage:\ntedf$write()\n

"},{"location":"code/R/functions/TEDF/#method-check","title":"Method check()","text":"

Check that TEDF is consistent and add inconsistencies to internal parameter

Usage

TEDF$check(raise_exception = TRUE)\n

Arguments:

Example:

## Example usage:\ntedf$check(raise_exception = TRUE)\n

"},{"location":"code/R/functions/TEDF/#method-check_row","title":"Method check_row()","text":"

Checks whether a row of the dataframe has issues (NOT IMPLEMENTED YET)

Usage

TEDF$check_row(row_id, raise_exception = TRUE)\n

Arguments:

"},{"location":"code/R/functions/TEDF/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

TEDF$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/TEDFInconsistencyException/","title":"TEDFInconsistencyException","text":""},{"location":"code/R/functions/TEDFInconsistencyException/#tedfinconsistencyexception","title":"TEDFInconsistencyException","text":""},{"location":"code/R/functions/TEDFInconsistencyException/#description","title":"Description","text":"

This is a class to store inconsistencies in the TEDFs

"},{"location":"code/R/functions/TEDFInconsistencyException/#methods","title":"Methods","text":""},{"location":"code/R/functions/TEDFInconsistencyException/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/TEDFInconsistencyException/#method-new","title":"Method new()","text":"

Create instance of TEDFInconsistencyException class

Usage

TEDFInconsistencyException$new(\n  message = \"Inconsistency detected\",\n  row_id = NULL,\n  col_id = NULL,\n  file_path = NULL\n)\n

Arguments:

"},{"location":"code/R/functions/TEDFInconsistencyException/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

TEDFInconsistencyException$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/UnitDefinition/","title":"UnitDefinition","text":""},{"location":"code/R/functions/UnitDefinition/#unitdefinition","title":"UnitDefinition","text":""},{"location":"code/R/functions/UnitDefinition/#description","title":"Description","text":"

Class to store Unit columns

"},{"location":"code/R/functions/UnitDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/UnitDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/UnitDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the UnitDefinition class

Usage

UnitDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/functions/UnitDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

UnitDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/UnitDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

UnitDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/ValueDefinition/","title":"ValueDefinition","text":""},{"location":"code/R/functions/ValueDefinition/#valuedefinition","title":"ValueDefinition","text":""},{"location":"code/R/functions/ValueDefinition/#description","title":"Description","text":"

Class to store Value columns

"},{"location":"code/R/functions/ValueDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/ValueDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/ValueDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the ValueDefinition class

Usage

ValueDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/functions/ValueDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

ValueDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/ValueDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

ValueDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/VariableDefinition/","title":"VariableDefinition","text":""},{"location":"code/R/functions/VariableDefinition/#variabledefinition","title":"VariableDefinition","text":""},{"location":"code/R/functions/VariableDefinition/#description","title":"Description","text":"

Class to store variable columns

"},{"location":"code/R/functions/VariableDefinition/#methods","title":"Methods","text":""},{"location":"code/R/functions/VariableDefinition/#public-methods","title":"Public Methods","text":""},{"location":"code/R/functions/VariableDefinition/#method-new","title":"Method new()","text":"

Creates a new instance of the VariableDefinition class

Usage

VariableDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/functions/VariableDefinition/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

VariableDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/functions/VariableDefinition/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

VariableDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/functions/apply_cond/","title":"Apply cond","text":""},{"location":"code/R/functions/apply_cond/#apply_cond","title":"apply_cond","text":"

apply_cond

"},{"location":"code/R/functions/apply_cond/#description","title":"Description","text":"

Takes a DataFrame and a condition, which can be a string, dictionary, or callable, and applies the condition to the DataFrame using eval or apply accordingly.

"},{"location":"code/R/functions/apply_cond/#usage","title":"Usage","text":"
apply_cond(df, cond)\n
"},{"location":"code/R/functions/apply_cond/#arguments","title":"Arguments","text":"Argument Description df DataFrame. A pandas DataFrame containing the data on which the condition will be applied. cond MaskCondition. The condition to be applied on the dataframe. Can be either a string, a dictionary, or a callable function."},{"location":"code/R/functions/apply_cond/#return-value","title":"Return Value","text":"

DataFrame. Dataframe evaluated at the mask condition.

"},{"location":"code/R/functions/collect_files/","title":"Collect files","text":""},{"location":"code/R/functions/collect_files/#collect_files","title":"collect_files","text":"

collect_files

"},{"location":"code/R/functions/collect_files/#description","title":"Description","text":"

Takes a parent variable and optional list of databases to include, checks for their existence, and collects files and directories based on the parent variable.

"},{"location":"code/R/functions/collect_files/#usage","title":"Usage","text":"
collect_files(parent_variable, include_databases = NULL)\n
"},{"location":"code/R/functions/collect_files/#arguments","title":"Arguments","text":"Argument Description parent_variable Character. Variable to collect files on. include_databases Optional listCharacter. List of Database IDs to collect files from."},{"location":"code/R/functions/collect_files/#return-value","title":"Return Value","text":"

List of tuples. List of tuples containing the parent variable and the database ID for each file found in the specified directories.

"},{"location":"code/R/functions/collect_files/#examples","title":"Examples","text":"
## Example usage:\ncollect_files(\"variable_name\", c(\"db1\", \"db2\"))\n
"},{"location":"code/R/functions/combine_units/","title":"Combine units","text":""},{"location":"code/R/functions/combine_units/#combine_units","title":"combine_units","text":"

combine_units

"},{"location":"code/R/functions/combine_units/#description","title":"Description","text":"

Combine fraction of two units into an updated unit string

"},{"location":"code/R/functions/combine_units/#usage","title":"Usage","text":"
combine_units(numerator, denominator)\n
"},{"location":"code/R/functions/combine_units/#arguments","title":"Arguments","text":"Argument Description numerator Character. Numerator of the fraction. denominator Character. Denominator of the fraction."},{"location":"code/R/functions/combine_units/#return-value","title":"Return Value","text":"

Character. Updated unit string after simplification.

"},{"location":"code/R/functions/combine_units/#examples","title":"Examples","text":"
## Example usage:\ncombine_units(\"m\", \"s\")\n
"},{"location":"code/R/functions/is_float/","title":"Is float","text":""},{"location":"code/R/functions/is_float/#is_float","title":"is_float","text":"

is_float

"},{"location":"code/R/functions/is_float/#description","title":"Description","text":"

Checks if a given string can be converted to a floating-point number.

"},{"location":"code/R/functions/is_float/#usage","title":"Usage","text":"
is_float(string)\n
"},{"location":"code/R/functions/is_float/#arguments","title":"Arguments","text":"Argument Description string Character. String to check."},{"location":"code/R/functions/is_float/#return-value","title":"Return Value","text":"

Logical. TRUE if conversion was successful, FALSE if not.

"},{"location":"code/R/functions/is_float/#examples","title":"Examples","text":"
## Example usage:\nis_float(\"3.14\")\n
"},{"location":"code/R/functions/normalise_units/","title":"Normalise units","text":""},{"location":"code/R/functions/normalise_units/#normalise_units","title":"normalise_units","text":"

normalise_units

"},{"location":"code/R/functions/normalise_units/#description","title":"Description","text":"

Takes a DataFrame with reported or reference data, along with dictionaries mapping variable units and flow IDs, and normalizes the units of the variables in the DataFrame based on the provided mappings.

"},{"location":"code/R/functions/normalise_units/#usage","title":"Usage","text":"
normalise_units(df, level, var_units, var_flow_ids)\n
"},{"location":"code/R/functions/normalise_units/#arguments","title":"Arguments","text":"Argument Description df DataFrame. Dataframe to be normalized. level Character. Specifies whether the data should be normalized on the reported or reference values. Possible values are 'reported' or 'reference'. var_units List. Dictionary that maps a combination of parent variable and variable to its corresponding unit. The keys in the dictionary are in the format \"{parent_variable} var_flow_ids List. Dictionary that maps a combination of parent variable and variable to a specific flow ID. This flow ID is used for unit conversion in the normalize_units function."},{"location":"code/R/functions/normalise_units/#return-value","title":"Return Value","text":"

DataFrame. Normalized dataframe.

"},{"location":"code/R/functions/normalise_units/#examples","title":"Examples","text":"
## Example usage:\nnormalise_units(df, \"reported\", var_units, var_flow_ids)\n
"},{"location":"code/R/functions/normalise_values/","title":"Normalise values","text":""},{"location":"code/R/functions/normalise_values/#normalise_values","title":"normalise_values","text":"

normalise_values

"},{"location":"code/R/functions/normalise_values/#description","title":"Description","text":"

Takes a DataFrame as input, normalizes the 'value' and 'uncertainty' columns by the reference value, and updates the 'reference_value' column accordingly.

"},{"location":"code/R/functions/normalise_values/#usage","title":"Usage","text":"
normalise_values(df)\n
"},{"location":"code/R/functions/normalise_values/#arguments","title":"Arguments","text":"Argument Description df DataFrame. Dataframe to be normalized."},{"location":"code/R/functions/normalise_values/#return-value","title":"Return Value","text":"

DataFrame. Returns a modified DataFrame where the 'value' column has been divided by the 'reference_value' column (or 1.0 if 'reference_value' is null), the 'uncertainty' column has been divided by the 'reference_value' column, and the 'reference_value' column has been replaced with 1.0 if it was not null.

"},{"location":"code/R/functions/normalise_values/#examples","title":"Examples","text":"
## Example usage:\nnormalised_df <- normalise_values(df)\n
"},{"location":"code/R/functions/read_csv_file/","title":"Read csv file","text":""},{"location":"code/R/functions/read_csv_file/#read_csv_file","title":"read_csv_file","text":"

read_csv_file

"},{"location":"code/R/functions/read_csv_file/#description","title":"Description","text":"

Read a csv datafile

"},{"location":"code/R/functions/read_csv_file/#usage","title":"Usage","text":"
read_csv_file(fpath)\n
"},{"location":"code/R/functions/read_csv_file/#arguments","title":"Arguments","text":"Argument Description fpath path of the csv file"},{"location":"code/R/functions/read_definitions/","title":"Read definitions","text":""},{"location":"code/R/functions/read_definitions/#read_definitions","title":"read_definitions","text":"

read_definitions

"},{"location":"code/R/functions/read_definitions/#description","title":"Description","text":"

Reads YAML files from definitions directory, extracts tags, inserts tags into definitions, replaces tokens in definitions, and returns the updated definitions.

"},{"location":"code/R/functions/read_definitions/#usage","title":"Usage","text":"
read_definitions(definitions_dir, flows, techs)\n
"},{"location":"code/R/functions/read_definitions/#arguments","title":"Arguments","text":"Argument Description definitions_dir Character. Path leading to the definitions. flows List. Dictionary containing the different flow types. Each key represents a flow type, the corresponding value is a dictionary containing key value pairs of attributes like density, energy content and their values. techs List. Dictionary containing information about different technologies. Each key in the dictionary represents a unique technology ID, and the corresponding value is a dictionary containing various specifications for that technology, like 'description', 'class', 'primary output' etc."},{"location":"code/R/functions/read_definitions/#return-value","title":"Return Value","text":"

List. Dictionary containing the definitions after processing and replacing tags and tokens.

"},{"location":"code/R/functions/read_masks/","title":"Read masks","text":""},{"location":"code/R/functions/read_masks/#read_masks","title":"read_masks","text":"

read_masks

"},{"location":"code/R/functions/read_masks/#description","title":"Description","text":"

Reads YAML files containing mask specifications from multiple databases and returns a list of Mask objects.

"},{"location":"code/R/functions/read_masks/#usage","title":"Usage","text":"
read_masks(variable)\n
"},{"location":"code/R/functions/read_masks/#arguments","title":"Arguments","text":"Argument Description variable Character. Variable to be read."},{"location":"code/R/functions/read_masks/#return-value","title":"Return Value","text":"

List. List with masks for the variable.

"},{"location":"code/R/functions/read_yml_file/","title":"Read yml file","text":""},{"location":"code/R/functions/read_yml_file/#read_yml_file","title":"read_yml_file","text":"

read_yml_file

"},{"location":"code/R/functions/read_yml_file/#description","title":"Description","text":"

Read a YAML config file

"},{"location":"code/R/functions/read_yml_file/#usage","title":"Usage","text":"
read_yml_file(fpath)\n
"},{"location":"code/R/functions/read_yml_file/#arguments","title":"Arguments","text":"Argument Description fpath path of the YAML file"},{"location":"code/R/functions/replace_tags/","title":"Replace tags","text":""},{"location":"code/R/functions/replace_tags/#replace_tags","title":"replace_tags","text":"

replace_tags

"},{"location":"code/R/functions/replace_tags/#description","title":"Description","text":"

Replaces specified tags in dictionary keys and values with corresponding items from another dictionary.

"},{"location":"code/R/functions/replace_tags/#usage","title":"Usage","text":"
replace_tags(definitions, tag, items)\n
"},{"location":"code/R/functions/replace_tags/#arguments","title":"Arguments","text":"Argument Description definitions List. Dictionary containing the definitions, where the tags should be replaced by the items. tag Character. String to identify where replacements should be made in the definitions. Specifies the placeholder that needs to be replaced with actual values from the items dictionary. items List. Dictionary containing the items from which to replace the definitions."},{"location":"code/R/functions/replace_tags/#return-value","title":"Return Value","text":"

List. Dictionary containing the definitions with replacements based on the provided tag and items.
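
"},{"location":"code/R/functions/replace_tags/#examples","title":"Examples","text":"
## Example usage (a minimal sketch; the tag and items shown are hypothetical):\n## defs <- list(\"Demand|{flow}\" = list(description = \"Demand for {flow}\"))\n## items <- list(\"Hydrogen\" = list(description = \"Hydrogen\"))\n## replace_tags(defs, \"flow\", items)\n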

"},{"location":"code/R/functions/unit_convert/","title":"Unit convert","text":""},{"location":"code/R/functions/unit_convert/#unit_convert","title":"unit_convert","text":"

unit_convert

"},{"location":"code/R/functions/unit_convert/#description","title":"Description","text":"

Converts units with optional flow context handling based on specified variants and flow ID. The function first checks that the input units are not NaN and then handles the different cases depending on whether a flow context and unit variants are present.

"},{"location":"code/R/functions/unit_convert/#usage","title":"Usage","text":"
unit_convert(unit_from, unit_to, flow_id = NULL)\n
"},{"location":"code/R/functions/unit_convert/#arguments","title":"Arguments","text":"Argument Description unit_from Character or numeric. Unit to convert from. unit_to Character or numeric. Unit to convert to. flow_id Character or NULL. Identifier for the specific flow or process."},{"location":"code/R/functions/unit_convert/#return-value","title":"Return Value","text":"

Numeric. Conversion factor between unit_from and unit_to.

"},{"location":"code/R/functions/unit_convert/#examples","title":"Examples","text":"
## Example usage:\nunit_convert(\"m\", \"km\", flow_id = NULL)\n
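## A flow-context sketch (the flow ID \"h2\" and the mass-to-energy conversion are hypothetical):\n## unit_convert(\"t\", \"MWh\", flow_id = \"h2\")\n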
"},{"location":"code/R/functions/unit_token_func/","title":"Unit token func","text":""},{"location":"code/R/functions/unit_token_func/#unit_token_func","title":"unit_token_func","text":"

unit_token_func

"},{"location":"code/R/functions/unit_token_func/#description","title":"Description","text":"

Takes a unit component type and a dictionary of flows, and returns a lambda function that extracts the default unit based on the specified component type from the flow dictionary.

"},{"location":"code/R/functions/unit_token_func/#usage","title":"Usage","text":"
unit_token_func(unit_component, flows)\n
"},{"location":"code/R/functions/unit_token_func/#arguments","title":"Arguments","text":"Argument Description unit_component Character. Specifies the type of unit token to be returned. Possible values are 'full', 'raw', 'variant'. flows List. Dictionary containing the flows."},{"location":"code/R/functions/unit_token_func/#return-value","title":"Return Value","text":"

Function. Lambda function that takes a dictionary def_specs as input. The lambda function will return different values based on the unit_component parameter.
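
"},{"location":"code/R/functions/unit_token_func/#examples","title":"Examples","text":"
## Example usage (a minimal sketch; the flows list and the def_specs argument are hypothetical):\n## get_unit <- unit_token_func(\"full\", flows)\n## get_unit(list(flow_id = \"h2\"))\n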

"},{"location":"code/R/modules/columns/","title":"columns","text":""},{"location":"code/R/modules/columns/#is_float","title":"is_float","text":"

is_float

"},{"location":"code/R/modules/columns/#description","title":"Description","text":"

Checks if a given string can be converted to a floating-point number.

"},{"location":"code/R/modules/columns/#usage","title":"Usage","text":"
is_float(string)\n
"},{"location":"code/R/modules/columns/#arguments","title":"Arguments","text":"Argument Description string Character. String to check."},{"location":"code/R/modules/columns/#return-value","title":"Return Value","text":"

Logical. TRUE if conversion was successful, FALSE if not.

"},{"location":"code/R/modules/columns/#examples","title":"Examples","text":"
## Example usage:\nis_float(\"3.14\")\n
"},{"location":"code/R/modules/columns/#abstractcolumndefinition","title":"AbstractColumnDefinition","text":""},{"location":"code/R/modules/columns/#description_1","title":"Description","text":"

Abstract class to store columns

"},{"location":"code/R/modules/columns/#methods","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new","title":"Method new()","text":"

Creates a new instance of the AbstractColumnDefinition class

Usage

AbstractColumnDefinition$new(col_type, name, description, dtype, required)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

AbstractColumnDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

AbstractColumnDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#variabledefinition","title":"VariableDefinition","text":""},{"location":"code/R/modules/columns/#description_2","title":"Description","text":"

Class to store variable columns

"},{"location":"code/R/modules/columns/#methods_1","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_1","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_1","title":"Method new()","text":"

Creates a new instance of the VariableDefinition class

Usage

VariableDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed_1","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

VariableDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_1","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

VariableDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#unitdefinition","title":"UnitDefinition","text":""},{"location":"code/R/modules/columns/#description_3","title":"Description","text":"

Class to store Unit columns

"},{"location":"code/R/modules/columns/#methods_2","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_2","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_2","title":"Method new()","text":"

Creates a new instance of the UnitDefinition class

Usage

UnitDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed_2","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

UnitDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_2","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

UnitDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#valuedefinition","title":"ValueDefinition","text":""},{"location":"code/R/modules/columns/#description_4","title":"Description","text":"

Class to store Value columns

"},{"location":"code/R/modules/columns/#methods_3","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_3","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_3","title":"Method new()","text":"

Creates a new instance of the ValueDefinition class

Usage

ValueDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed_3","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

ValueDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_3","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

ValueDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#commentdefinition","title":"CommentDefinition","text":""},{"location":"code/R/modules/columns/#description_5","title":"Description","text":"

Class to store comment columns

"},{"location":"code/R/modules/columns/#methods_4","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_4","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_4","title":"Method new()","text":"

Creates a new instance of the CommentDefinition Class

Usage

CommentDefinition$new(name, description, required)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed_4","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

CommentDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_4","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

CommentDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#abstractfielddefinition","title":"AbstractFieldDefinition","text":""},{"location":"code/R/modules/columns/#description_6","title":"Description","text":"

Abstract class to store fields

"},{"location":"code/R/modules/columns/#examples_1","title":"Examples","text":"
### ------------------------------------------------\n### Method `AbstractFieldDefinition$select_and_expand`\n### ------------------------------------------------\n\n## Example usage:\n## select_and_expand(df, \"col_id\", field_vals = NULL)\n
"},{"location":"code/R/modules/columns/#methods_5","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_5","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_5","title":"Method new()","text":"

Creates a new instance of the AbstractFieldDefinition Class

Usage

AbstractFieldDefinition$new(\n  field_type,\n  name,\n  description,\n  dtype,\n  coded,\n  codes = NULL\n)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed_5","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

AbstractFieldDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-select_and_expand","title":"Method select_and_expand()","text":"

Select and expand fields which are valid for multiple periods or other field vals

Usage

AbstractFieldDefinition$select_and_expand(df, col_id, field_vals = NA, ...)\n

Arguments:

Example:

## Example usage:\n## select_and_expand(df, \"col_id\", field_vals = NULL)\n

Returns:

DataFrame where fields are selected and expanded

"},{"location":"code/R/modules/columns/#method-clone_5","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

AbstractFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#regionfielddefinition","title":"RegionFieldDefinition","text":""},{"location":"code/R/modules/columns/#description_7","title":"Description","text":"

Class to store Region fields

"},{"location":"code/R/modules/columns/#methods_6","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_6","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_6","title":"Method new()","text":"

Creates a new instance of the RegionFieldDefinition class

Usage

RegionFieldDefinition$new(name, description)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_6","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

RegionFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#periodfielddefinition","title":"PeriodFieldDefinition","text":""},{"location":"code/R/modules/columns/#description_8","title":"Description","text":"

Class to store Period fields

"},{"location":"code/R/modules/columns/#methods_7","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_7","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_7","title":"Method new()","text":"

Creates a new instance of the PeriodFieldDefinition Class

Usage

PeriodFieldDefinition$new(name, description)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-is_allowed_6","title":"Method is_allowed()","text":"

Tests if cell is allowed

Usage

PeriodFieldDefinition$is_allowed(cell)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_7","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

PeriodFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#sourcefielddefinition","title":"SourceFieldDefinition","text":""},{"location":"code/R/modules/columns/#description_9","title":"Description","text":"

Class to store Source fields

"},{"location":"code/R/modules/columns/#methods_8","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_8","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_8","title":"Method new()","text":"

Creates a new instance of the SourceFieldDefinition class

Usage

SourceFieldDefinition$new(name, description)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_8","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

SourceFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/columns/#customfielddefinition","title":"CustomFieldDefinition","text":""},{"location":"code/R/modules/columns/#description_10","title":"Description","text":"

Class to store Custom fields

"},{"location":"code/R/modules/columns/#methods_9","title":"Methods","text":""},{"location":"code/R/modules/columns/#public-methods_9","title":"Public Methods","text":""},{"location":"code/R/modules/columns/#method-new_9","title":"Method new()","text":"

Creates a new instance of the CustomFieldDefinition class

Usage

CustomFieldDefinition$new(field_specs)\n

Arguments:

"},{"location":"code/R/modules/columns/#method-clone_9","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

CustomFieldDefinition$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/definitions/","title":"definitions","text":""},{"location":"code/R/modules/definitions/#read_definitions","title":"read_definitions","text":"

read_definitions

"},{"location":"code/R/modules/definitions/#description","title":"Description","text":"

Reads YAML files from definitions directory, extracts tags, inserts tags into definitions, replaces tokens in definitions, and returns the updated definitions.

"},{"location":"code/R/modules/definitions/#usage","title":"Usage","text":"
read_definitions(definitions_dir, flows, techs)\n
"},{"location":"code/R/modules/definitions/#arguments","title":"Arguments","text":"Argument Description definitions_dir Character. Path leading to the definitions. flows List. Dictionary containing the different flow types. Each key represents a flow type, the corresponding value is a dictionary containing key value pairs of attributes like density, energy content and their values. techs List. Dictionary containing information about different technologies. Each key in the dictionary represents a unique technology ID, and the corresponding value is a dictionary containing various specifications for that technology, like 'description', 'class', 'primary output' etc."},{"location":"code/R/modules/definitions/#return-value","title":"Return Value","text":"

List. Dictionary containing the definitions after processing and replacing tags and tokens.

"},{"location":"code/R/modules/definitions/#replace_tags","title":"replace_tags","text":"

replace_tags

"},{"location":"code/R/modules/definitions/#description_1","title":"Description","text":"

Replaces specified tags in dictionary keys and values with corresponding items from another dictionary.

"},{"location":"code/R/modules/definitions/#usage_1","title":"Usage","text":"
replace_tags(definitions, tag, items)\n
"},{"location":"code/R/modules/definitions/#arguments_1","title":"Arguments","text":"Argument Description definitions List. Dictionary containing the definitions, where the tags should be replaced by the items. tag Character. String to identify where replacements should be made in the definitions. Specifies the placeholder that needs to be replaced with actual values from the items dictionary. items List. Dictionary containing the items from which to replace the definitions."},{"location":"code/R/modules/definitions/#return-value_1","title":"Return Value","text":"

List. Dictionary containing the definitions with replacements based on the provided tag and items.

"},{"location":"code/R/modules/definitions/#unit_token_func","title":"unit_token_func","text":"

unit_token_func

"},{"location":"code/R/modules/definitions/#description_2","title":"Description","text":"

Takes a unit component type and a dictionary of flows, and returns a lambda function that extracts the default unit based on the specified component type from the flow dictionary.

"},{"location":"code/R/modules/definitions/#usage_2","title":"Usage","text":"
unit_token_func(unit_component, flows)\n
"},{"location":"code/R/modules/definitions/#arguments_2","title":"Arguments","text":"Argument Description unit_component Character. Specifies the type of unit token to be returned. Possible values are 'full', 'raw', 'variant'. flows List. Dictionary containing the flows."},{"location":"code/R/modules/definitions/#return-value_2","title":"Return Value","text":"

Function. Lambda function that takes a dictionary def_specs as input. The lambda function will return different values based on the unit_component parameter.

"},{"location":"code/R/modules/masking/","title":"masking","text":""},{"location":"code/R/modules/masking/#apply_cond","title":"apply_cond","text":"

apply_cond

"},{"location":"code/R/modules/masking/#description","title":"Description","text":"

Takes a data frame and a condition, which can be a string, a dictionary, or a callable, and applies the condition to the data frame using eval or apply accordingly.

"},{"location":"code/R/modules/masking/#usage","title":"Usage","text":"
apply_cond(df, cond)\n
"},{"location":"code/R/modules/masking/#arguments","title":"Arguments","text":"Argument Description df DataFrame. A pandas DataFrame containing the data on which the condition will be applied. cond MaskCondition. The condition to be applied on the dataframe. Can be either a string, a dictionary, or a callable function."},{"location":"code/R/modules/masking/#return-value","title":"Return Value","text":"

DataFrame. Dataframe evaluated at the mask condition.
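
"},{"location":"code/R/modules/masking/#examples","title":"Examples","text":"
## Example usage (a minimal sketch; the column name and source ID are hypothetical):\n## apply_cond(df, \"source == 'Smith2020'\")                # string condition\n## apply_cond(df, list(source = \"Smith2020\"))             # dictionary condition\n## apply_cond(df, function(df) df$source == \"Smith2020\")  # callable condition\n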

"},{"location":"code/R/modules/masking/#mask","title":"Mask","text":""},{"location":"code/R/modules/masking/#description_1","title":"Description","text":"

Class to define masks with conditions and weights to apply to DataFiles

"},{"location":"code/R/modules/masking/#methods","title":"Methods","text":""},{"location":"code/R/modules/masking/#public-methods","title":"Public Methods","text":""},{"location":"code/R/modules/masking/#method-new","title":"Method new()","text":"

Create a new mask object

Usage

Mask$new(where = NULL, use = NULL, weight = NULL, other = NaN, comment = \"\")\n

Arguments:

"},{"location":"code/R/modules/masking/#method-matches","title":"Method matches()","text":"

Check if a mask matches a dataframe by verifying if all 'where' conditions match across all rows.

Usage

Mask$matches(df)\n

Arguments:

Returns:

Logical. If the mask matches the dataframe.

"},{"location":"code/R/modules/masking/#method-get_weights","title":"Method get_weights()","text":"

Apply weights to the dataframe

Usage

Mask$get_weights(df)\n

Arguments:

Returns:

Dataframe. Dataframe with applied weights
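
Example:

## Example usage (a minimal sketch; the condition and weight are hypothetical):\n## m <- Mask$new(where = list(variable = \"CAPEX\"), weight = 1.0)\n## if (m$matches(df)) weights <- m$get_weights(df)\n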

"},{"location":"code/R/modules/masking/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

Mask$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/masking/#read_masks","title":"read_masks","text":"

read_masks

"},{"location":"code/R/modules/masking/#description_2","title":"Description","text":"

Reads YAML files containing mask specifications from multiple databases and returns a list of Mask objects.

"},{"location":"code/R/modules/masking/#usage_1","title":"Usage","text":"
read_masks(variable)\n
"},{"location":"code/R/modules/masking/#arguments_1","title":"Arguments","text":"Argument Description variable Character. Variable to be read."},{"location":"code/R/modules/masking/#return-value_1","title":"Return Value","text":"

List. List with masks for the variable.

"},{"location":"code/R/modules/noslag/","title":"noslag","text":""},{"location":"code/R/modules/noslag/#collect_files","title":"collect_files","text":"

collect_files

"},{"location":"code/R/modules/noslag/#description","title":"Description","text":"

Takes a parent variable and optional list of databases to include, checks for their existence, and collects files and directories based on the parent variable.

"},{"location":"code/R/modules/noslag/#usage","title":"Usage","text":"
collect_files(parent_variable, include_databases = NULL)\n
"},{"location":"code/R/modules/noslag/#arguments","title":"Arguments","text":"Argument Description parent_variable Character. Variable to collect files on. include_databases Optional listCharacter. List of Database IDs to collect files from."},{"location":"code/R/modules/noslag/#return-value","title":"Return Value","text":"

List of tuples. List of tuples containing the parent variable and the database ID for each file found in the specified directories.

"},{"location":"code/R/modules/noslag/#examples","title":"Examples","text":"
## Example usage:\ncollect_files(\"variable_name\", c(\"db1\", \"db2\"))\n
"},{"location":"code/R/modules/noslag/#normalise_units","title":"normalise_units","text":"

normalise_units

"},{"location":"code/R/modules/noslag/#description_1","title":"Description","text":"

Takes a DataFrame with reported or reference data, along with dictionaries mapping variable units and flow IDs, and normalizes the units of the variables in the DataFrame based on the provided mappings.

"},{"location":"code/R/modules/noslag/#usage_1","title":"Usage","text":"
normalise_units(df, level, var_units, var_flow_ids)\n
"},{"location":"code/R/modules/noslag/#arguments_1","title":"Arguments","text":"Argument Description df DataFrame. Dataframe to be normalized. level Character. Specifies whether the data should be normalized on the reported or reference values. Possible values are 'reported' or 'reference'. var_units List. Dictionary that maps a combination of parent variable and variable to its corresponding unit. The keys in the dictionary are in the format \"{parent_variable} var_flow_ids List. Dictionary that maps a combination of parent variable and variable to a specific flow ID. This flow ID is used for unit conversion in the normalize_units function."},{"location":"code/R/modules/noslag/#return-value_1","title":"Return Value","text":"

DataFrame. Normalized dataframe.

"},{"location":"code/R/modules/noslag/#examples_1","title":"Examples","text":"
## Example usage:\nnormalise_units(df, \"reported\", var_units, var_flow_ids)\n
"},{"location":"code/R/modules/noslag/#normalise_values","title":"normalise_values","text":"

normalise_values

"},{"location":"code/R/modules/noslag/#description_2","title":"Description","text":"

Takes a DataFrame as input, normalizes the 'value' and 'uncertainty' columns by the reference value, and updates the 'reference_value' column accordingly.

"},{"location":"code/R/modules/noslag/#usage_2","title":"Usage","text":"
normalise_values(df)\n
"},{"location":"code/R/modules/noslag/#arguments_2","title":"Arguments","text":"Argument Description df DataFrame. Dataframe to be normalized."},{"location":"code/R/modules/noslag/#return-value_2","title":"Return Value","text":"

DataFrame. Returns a modified DataFrame where the 'value' column has been divided by the 'reference_value' column (or 1.0 if 'reference_value' is null), the 'uncertainty' column has been divided by the 'reference_value' column, and the 'reference_value' column has been replaced with 1.0 if it was not null.

"},{"location":"code/R/modules/noslag/#examples_2","title":"Examples","text":"
## Example usage:\nnormalised_df <- normalise_values(df)\n
"},{"location":"code/R/modules/noslag/#dataset","title":"DataSet","text":""},{"location":"code/R/modules/noslag/#description_3","title":"Description","text":"

This class provides methods to store, normalize, select, and aggregate DataSets.

"},{"location":"code/R/modules/noslag/#examples_3","title":"Examples","text":"
### ------------------------------------------------\n### Method `DataSet$normalise`\n### ------------------------------------------------\n\n## Example usage:\ndataset$normalise(override = list(\"variable1\" = \"value1\"), inplace = FALSE)\n\n\n### ------------------------------------------------\n### Method `DataSet$select`\n### ------------------------------------------------\n\n## Example usage:\ndataset$select(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, field1 = \"value1\")\n\n\n### ------------------------------------------------\n### Method `DataSet$aggregate`\n### ------------------------------------------------\n\n## Example usage:\ndataset$aggregate(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, agg = \"field\", masks = list(mask1, mask2), masks_database = TRUE)\n
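### ------------------------------------------------\n### Method `DataSet$new` (a minimal sketch; the variable name is hypothetical)\n### ------------------------------------------------\n\n## Example usage:\n## ds <- DataSet$new(\"Tech|Electrolysis\")\n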
"},{"location":"code/R/modules/noslag/#methods","title":"Methods","text":""},{"location":"code/R/modules/noslag/#public-methods","title":"Public Methods","text":""},{"location":"code/R/modules/noslag/#method-new","title":"Method new()","text":"

Create new instance of the DataSet class

Usage

DataSet$new(\n  parent_variable,\n  include_databases = NULL,\n  file_paths = NULL,\n  check_inconsistencies = FALSE,\n  data = NULL\n)\n

Arguments:

"},{"location":"code/R/modules/noslag/#method-normalise","title":"Method normalise()","text":"

Normalise data: convert to default reference units, set the reference value equal to 1.0, and convert to default reported units

Usage

DataSet$normalise(override = NULL, inplace = FALSE)\n

Arguments:

Example:

## Example usage:\ndataset$normalise(override = list(\"variable1\" = \"value1\"), inplace = FALSE)\n

Returns:

DataFrame. If inplace is FALSE, returns normalized dataframe.

"},{"location":"code/R/modules/noslag/#method-select","title":"Method select()","text":"

Select desired data from the dataframe

Usage

DataSet$select(\n  override = NULL,\n  drop_singular_fields = TRUE,\n  extrapolate_period = TRUE,\n  ...\n)\n

Arguments:

Example:

## Example usage:\ndataset$select(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, field1 = \"value1\")\n

Returns:

DataFrame. DataFrame with selected values.

"},{"location":"code/R/modules/noslag/#method-aggregate","title":"Method aggregate()","text":"

Aggregates data based on specified parameters, applies masks, and cleans up the resulting DataFrame.

Usage

DataSet$aggregate(\n  override = NULL,\n  drop_singular_fields = TRUE,\n  extrapolate_period = TRUE,\n  agg = NULL,\n  masks = NULL,\n  masks_database = TRUE,\n  ...\n)\n

Arguments:

Example:

## Example usage:\ndataset$aggregate(override = list(\"variable1\" = \"value1\"), drop_singular_fields = TRUE, extrapolate_period = FALSE, agg = \"field\", masks = list(mask1, mask2), masks_database = TRUE)\n

Returns:

DataFrame. A DataFrame that has been cleaned up and aggregated based on the specified parameters and input data. The method aggregates over component fields and case fields, applies weights based on masks, drops rows with NaN weights, aggregates with weights, inserts reference variables, sorts columns and rows, rounds values, and inserts units before returning the final DataFrame.

"},{"location":"code/R/modules/noslag/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

DataSet$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/read/","title":"read","text":""},{"location":"code/R/modules/read/#read_csv_file","title":"read_csv_file","text":"

read_csv_file

"},{"location":"code/R/modules/read/#description","title":"Description","text":"

Reads a CSV data file.

"},{"location":"code/R/modules/read/#usage","title":"Usage","text":"
read_csv_file(fpath)\n
"},{"location":"code/R/modules/read/#arguments","title":"Arguments","text":"Argument Description fpath path of the csv file"},{"location":"code/R/modules/tedf/","title":"tedf","text":""},{"location":"code/R/modules/tedf/#tedfinconsistencyexception","title":"TEDFInconsistencyException","text":""},{"location":"code/R/modules/tedf/#description","title":"Description","text":"

This is a class to store inconsistencies in the TEDFs

"},{"location":"code/R/modules/tedf/#methods","title":"Methods","text":""},{"location":"code/R/modules/tedf/#public-methods","title":"Public Methods","text":""},{"location":"code/R/modules/tedf/#method-new","title":"Method new()","text":"

Create instance of TEDFInconsistencyException class

Usage

TEDFInconsistencyException$new(\n  message = \"Inconsistency detected\",\n  row_id = NULL,\n  col_id = NULL,\n  file_path = NULL\n)\n

Arguments:

"},{"location":"code/R/modules/tedf/#method-clone","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

TEDFInconsistencyException$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/tedf/#tebase","title":"TEBase","text":""},{"location":"code/R/modules/tedf/#description_1","title":"Description","text":"

This is the base class for technoeconomic data.

"},{"location":"code/R/modules/tedf/#examples","title":"Examples","text":"
## Example usage:\nbase_technoeconomic_data <- TEBase$new(\"variable_name\")\n
"},{"location":"code/R/modules/tedf/#methods_1","title":"Methods","text":""},{"location":"code/R/modules/tedf/#public-methods_1","title":"Public Methods","text":""},{"location":"code/R/modules/tedf/#method-new_1","title":"Method new()","text":"

Create new instance of TEBase class. Set parent variable and technology specifications (var_specs) from input

Usage

TEBase$new(parent_variable)\n

Arguments:

"},{"location":"code/R/modules/tedf/#method-clone_1","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

TEBase$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/tedf/#tedf","title":"TEDF","text":""},{"location":"code/R/modules/tedf/#description_2","title":"Description","text":"

This class is used to store Technoeconomic DataFiles.

"},{"location":"code/R/modules/tedf/#examples_1","title":"Examples","text":"
## Example usage:\ntedf <- TEDF$new(\"variable_name\")\ntedf$load()\ntedf$read(\"file_path.csv\")\ntedf$write(\"output_file_path.csv\")\ntedf$check()\ntedf$check_row()\n\n\n### ------------------------------------------------\n### Method `TEDF$load`\n### ------------------------------------------------\n\n## Example usage:\ntedf$load()\n\n\n### ------------------------------------------------\n### Method `TEDF$read`\n### ------------------------------------------------\n\n## Example usage:\ntedf$read()\n\n\n### ------------------------------------------------\n### Method `TEDF$write`\n### ------------------------------------------------\n\n## Example usage:\ntedf$write()\n\n\n### ------------------------------------------------\n### Method `TEDF$check`\n### ------------------------------------------------\n\n## Example usage:\ntedf$check(raise_exception = TRUE)\n
"},{"location":"code/R/modules/tedf/#methods_2","title":"Methods","text":""},{"location":"code/R/modules/tedf/#public-methods_2","title":"Public Methods","text":""},{"location":"code/R/modules/tedf/#method-new_2","title":"Method new()","text":"

Create new instance of TEDF class. Initialise parent class and object fields

Usage

TEDF$new(\n  parent_variable,\n  database_id = \"public\",\n  file_path = NULL,\n  data = NULL\n)\n

Arguments:

"},{"location":"code/R/modules/tedf/#method-load","title":"Method load()","text":"

Load TEDataFile (only if it has not been read yet)

Usage

TEDF$load()\n

Example:

## Example usage:\ntedf$load()\n

Returns:

TEDF. Returns the TEDF object it is called on.

"},{"location":"code/R/modules/tedf/#method-read","title":"Method read()","text":"

This method reads TEDF from a CSV file.

Usage

TEDF$read()\n

Example:

## Example usage:\ntedf$read()\n

"},{"location":"code/R/modules/tedf/#method-write","title":"Method write()","text":"

Write TEDF to CSV file.

Usage

TEDF$write()\n

Example:

## Example usage:\ntedf$write()\n

"},{"location":"code/R/modules/tedf/#method-check","title":"Method check()","text":"

Check that TEDF is consistent and add inconsistencies to internal parameter

Usage

TEDF$check(raise_exception = TRUE)\n

Arguments:

Example:

## Example usage:\ntedf$check(raise_exception = TRUE)\n

"},{"location":"code/R/modules/tedf/#method-check_row","title":"Method check_row()","text":"

Checks if a row of the dataframe has issues. NOT IMPLEMENTED YET.

Usage

TEDF$check_row(row_id, raise_exception = TRUE)\n

Arguments:

"},{"location":"code/R/modules/tedf/#method-clone_2","title":"Method clone()","text":"

The objects of this class are cloneable with this method.

Usage

TEDF$clone(deep = FALSE)\n

Arguments:

"},{"location":"code/R/modules/units/","title":"units","text":""},{"location":"code/R/modules/units/#unit_convert","title":"unit_convert","text":"

unit_convert

"},{"location":"code/R/modules/units/#description","title":"Description","text":"

Converts units with optional flow context handling based on specified variants and flow ID. The function first checks that the input units are not NaN and then handles the different cases depending on whether a flow context and unit variants are present.

"},{"location":"code/R/modules/units/#usage","title":"Usage","text":"
unit_convert(unit_from, unit_to, flow_id = NULL)\n
"},{"location":"code/R/modules/units/#arguments","title":"Arguments","text":"Argument Description unit_from Character or numeric. Unit to convert from. unit_to Character or numeric. Unit to convert to. flow_id Character or NULL. Identifier for the specific flow or process."},{"location":"code/R/modules/units/#return-value","title":"Return Value","text":"

Numeric. Conversion factor between unit_from and unit_to.

"},{"location":"code/R/modules/units/#examples","title":"Examples","text":"
## Example usage:\nunit_convert(\"m\", \"km\", flow_id = NULL)\n
"},{"location":"code/python/columns/","title":"columns","text":""},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition","title":"AbstractColumnDefinition","text":"

Abstract class to store columns

Parameters:

Name Type Description Default col_type str

Type of the column

required name str

Name of the column

required description str

Description of the column

required dtype str

Data type of the column

required required bool

Bool that specifies if the column is required

required

Methods:

Name Description is_allowed

Check if cell is allowed

Source code in python/posted/columns.py
class AbstractColumnDefinition:\n    '''\n    Abstract class to store columns\n\n    Parameters\n    ----------\n    col_type: str\n        Type of the column\n    name: str\n        Name of the column\n    description: str\n        Description of the column\n    dtype:\n        Data type of the column\n    required: bool\n        Bool that specifies if the column is required\n\n    Methods\n    -------\n        is_allowed\n            Check if cell is allowed\n    '''\n    def __init__(self, col_type: str, name: str, description: str, dtype: str, required: bool):\n        if col_type not in ['field', 'variable', 'unit', 'value', 'comment']:\n            raise Exception(f\"Columns must be of type field, variable, unit, value, or comment but found: {col_type}\")\n        if not isinstance(name, str):\n            raise Exception(f\"The 'name' must be a string but found type {type(name)}: {name}\")\n        if not isinstance(description, str):\n            raise Exception(f\"The 'name' must be a string but found type {type(description)}: {description}\")\n        if not (isinstance(dtype, str) and dtype in ['float', 'str', 'category']):\n            raise Exception(f\"The 'dtype' must be a valid data type but found: {dtype}\")\n        if not isinstance(required, bool):\n            raise Exception(f\"The 'required' argument must be a bool but found: {required}\")\n\n        self._col_type: str = col_type\n        self._name: str = name\n        self._description: str = description\n        self._dtype: str = dtype\n        self._required: bool = required\n\n    @property\n    def col_type(self):\n        '''Get col type'''\n        return self._col_type\n\n    @property\n    def name(self):\n        '''Get name of the column'''\n        return self._name\n\n    @property\n    def description(self):\n        '''Get description of the column'''\n        return self._description\n\n    @property\n    def dtype(self):\n        '''Get data type of the column'''\n        return self._dtype\n\n    @property\n    def required(self):\n        '''Return if column is required'''\n        return self._required\n\n    @property\n    def default(self):\n        '''Get default value of the column'''\n        return np.nan\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        '''Check if Cell is allowed\n\n        Parameters\n        ----------\n            cell: str | float | int\n                Cell to check\n        Returns\n        -------\n            bool\n                If the cell is allowed\n        '''\n        return True\n
"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.col_type","title":"col_type property","text":"

Get col type

"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.default","title":"default property","text":"

Get default value of the column

"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.description","title":"description property","text":"

Get description of the column

"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.dtype","title":"dtype property","text":"

Get data type of the column

"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.name","title":"name property","text":"

Get name of the column

"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.required","title":"required property","text":"

Return if column is required

"},{"location":"code/python/columns/#python.posted.columns.AbstractColumnDefinition.is_allowed","title":"is_allowed(cell)","text":"

Check if Cell is allowed

Returns:

Type Description bool

If the cell is allowed

Source code in python/posted/columns.py
def is_allowed(self, cell: str | float | int) -> bool:\n    '''Check if Cell is allowed\n\n    Parameters\n    ----------\n        cell: str | float | int\n            Cell to check\n    Returns\n    -------\n        bool\n            If the cell is allowed\n    '''\n    return True\n
"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition","title":"AbstractFieldDefinition","text":"

Bases: AbstractColumnDefinition

Abstract class to store fields

Parameters:

Name Type Description Default field_type str

Type of the field

required name str

Name of the field

required description str

Description of the field

required dtype str

Data type of the field

required coded bool

If the field is coded

required codes Optional[dict[str, str]]

Codes for the field

None

Methods:

Name Description is_allowed

Check if cell is allowed

select_and_expand

Select and expand fields

Source code in python/posted/columns.py
class AbstractFieldDefinition(AbstractColumnDefinition):\n    '''\n    Abstract class to store fields\n\n    Parameters\n    ----------\n    field_type: str\n        Type of the field\n    name: str\n        Name of the field\n    description: str\n        Description of the field\n    dtype: str\n        Data type of the field\n    coded: bool\n        If the field is coded\n    coded: Optional[dict[str,str]], optional\n        Codes for the field\n\n\n    Methods\n    -------\n    is_allowed\n        Check if cell is allowed\n    select_and_expand\n        Select and expand fields\n\n    '''\n    def __init__(self, field_type: str, name: str, description: str, dtype: str, coded: bool,\n                 codes: Optional[dict[str, str]] = None):\n        if field_type not in ['case', 'component']:\n            raise Exception('Fields must be of type case or component.')\n        super().__init__(\n            col_type='field',\n            name=name,\n            description=description,\n            dtype=dtype,\n            required=True,\n        )\n\n        self._field_type: str = field_type\n        self._coded: bool = coded\n        self._codes: None | dict[str, str] = codes\n\n    @property\n    def field_type(self) -> str:\n        '''Get field type'''\n        return self._field_type\n\n    @property\n    def coded(self) -> bool:\n        '''Return if field is coded'''\n        return self._coded\n\n    @property\n    def codes(self) -> None | dict[str, str]:\n        '''Get field codes'''\n        return self._codes\n\n    @property\n    def default(self):\n        '''Get symbol for default value'''\n        return '*' if self._field_type == 'case' else '#'\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        ''' Chek if cell is allowed'''\n        if pd.isnull(cell):\n            return False\n        if self._coded:\n            return cell in self._codes or cell == '*' or (cell == '#' and self.col_type == 'component')\n        else:\n            return True\n\n    def _expand(self, df: pd.DataFrame, col_id: str, field_vals: list, **kwargs) -> pd.DataFrame:\n        # Expand fields\n        return pd.concat([\n            df[df[col_id].isin(field_vals)],\n            df[df[col_id] == '*']\n            .drop(columns=[col_id])\n            .merge(pd.DataFrame.from_dict({col_id: field_vals}), how='cross'),\n        ])\n\n    def _select(self, df: pd.DataFrame, col_id: str, field_vals: list, **kwargs):\n        # Select fields\n        return df.query(f\"{col_id}.isin({field_vals})\").reset_index(drop=True)\n\n\n    def select_and_expand(self, df: pd.DataFrame, col_id: str, field_vals: None | list, **kwargs) -> pd.DataFrame:\n        '''\n        Select and expand fields which are valid for multiple periods or other field vals\n\n        Parameters\n        ----------\n        df: pd.DataFrame\n            DataFrame where fields should be selected and expanded\n        col_id: str\n            col_id of the column to be selected and expanded\n        field_vals: None | list\n            field_vals to select and expand\n        **kwargs\n            Additional keyword arguments\n\n        Returns\n        -------\n        pd.DataFrame\n            Dataframe where fields are selected and expanded\n\n        '''\n        # get list of selected field values\n        if field_vals is None:\n            if col_id == 'period':\n                field_vals = default_periods\n            elif self._coded:\n                field_vals = list(self._codes.keys())\n            
else:\n                field_vals = [v for v in df[col_id].unique() if v != '*' and not pd.isnull(v)]\n        else:\n            # ensure that field values is a list of elements (not tuple, not single value)\n            if isinstance(field_vals, tuple):\n                field_vals = list(field_vals)\n            elif not isinstance(field_vals, list):\n                field_vals = [field_vals]\n            # check that every element is of allowed type\n            for val in field_vals:\n                if not self.is_allowed(val):\n                    raise Exception(f\"Invalid type selected for field '{col_id}': {val}\")\n            if '*' in field_vals:\n                raise Exception(f\"Selected values for field '{col_id}' must not contain the asterisk.\"\n                                f\"Omit the '{col_id}' argument to select all entries.\")\n\n\n        df = self._expand(df, col_id, field_vals, **kwargs)\n        df = self._select(df, col_id, field_vals, **kwargs)\n\n        return df\n
"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition.coded","title":"coded: bool property","text":"

Return if field is coded

"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition.codes","title":"codes: None | dict[str, str] property","text":"

Get field codes

"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition.default","title":"default property","text":"

Get symbol for default value

"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition.field_type","title":"field_type: str property","text":"

Get field type

"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition.is_allowed","title":"is_allowed(cell)","text":"

Check if cell is allowed

Source code in python/posted/columns.py
def is_allowed(self, cell: str | float | int) -> bool:\n    ''' Check if cell is allowed'''\n    if pd.isnull(cell):\n        return False\n    if self._coded:\n        return cell in self._codes or cell == '*' or (cell == '#' and self.col_type == 'component')\n    else:\n        return True\n
"},{"location":"code/python/columns/#python.posted.columns.AbstractFieldDefinition.select_and_expand","title":"select_and_expand(df, col_id, field_vals, **kwargs)","text":"

Select and expand fields which are valid for multiple periods or other field vals

Parameters:

Name Type Description Default df DataFrame

DataFrame where fields should be selected and expanded

required col_id str

col_id of the column to be selected and expanded

required field_vals None | list

field_vals to select and expand

required **kwargs

Additional keyword arguments

{}

Returns:

Type Description DataFrame

Dataframe where fields are selected and expanded

Source code in python/posted/columns.py
def select_and_expand(self, df: pd.DataFrame, col_id: str, field_vals: None | list, **kwargs) -> pd.DataFrame:\n    '''\n    Select and expand fields which are valid for multiple periods or other field vals\n\n    Parameters\n    ----------\n    df: pd.DataFrame\n        DataFrame where fields should be selected and expanded\n    col_id: str\n        col_id of the column to be selected and expanded\n    field_vals: None | list\n        field_vals to select and expand\n    **kwargs\n        Additional keyword arguments\n\n    Returns\n    -------\n    pd.DataFrame\n        Dataframe where fields are selected and expanded\n\n    '''\n    # get list of selected field values\n    if field_vals is None:\n        if col_id == 'period':\n            field_vals = default_periods\n        elif self._coded:\n            field_vals = list(self._codes.keys())\n        else:\n            field_vals = [v for v in df[col_id].unique() if v != '*' and not pd.isnull(v)]\n    else:\n        # ensure that field values is a list of elements (not tuple, not single value)\n        if isinstance(field_vals, tuple):\n            field_vals = list(field_vals)\n        elif not isinstance(field_vals, list):\n            field_vals = [field_vals]\n        # check that every element is of allowed type\n        for val in field_vals:\n            if not self.is_allowed(val):\n                raise Exception(f\"Invalid type selected for field '{col_id}': {val}\")\n        if '*' in field_vals:\n            raise Exception(f\"Selected values for field '{col_id}' must not contain the asterisk.\"\n                            f\"Omit the '{col_id}' argument to select all entries.\")\n\n\n    df = self._expand(df, col_id, field_vals, **kwargs)\n    df = self._select(df, col_id, field_vals, **kwargs)\n\n    return df\n
"},{"location":"code/python/columns/#python.posted.columns.CommentDefinition","title":"CommentDefinition","text":"

Bases: AbstractColumnDefinition

Class to store comment columns

Parameters:

Name Type Description Default col_type

Type of the column

required name str

Name of the column

required description str

Description of the column

required required bool

Bool that specifies if the column is required

required

Methods:

Name Description is_allowed

Check if cell is allowed

Source code in python/posted/columns.py
class CommentDefinition(AbstractColumnDefinition):\n    '''\n    Class to store comment columns\n\n    Parameters\n    ----------\n    col_type: str\n        Type of the column\n    name: str\n        Name of the column\n    description: str\n        Description of the column\n    required: bool\n        Bool that specifies if the column is required\n\n    Methods\n    -------\n    is_allowed\n        Check if cell is allowed\n    '''\n    def __init__(self, name: str, description: str, required: bool):\n        super().__init__(\n            col_type='comment',\n            name=name,\n            description=description,\n            dtype='str',\n            required=required,\n        )\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        return True\n
"},{"location":"code/python/columns/#python.posted.columns.CustomFieldDefinition","title":"CustomFieldDefinition","text":"

Bases: AbstractFieldDefinition

Class to store Custom fields

Parameters:

Name Type Description Default **field_specs

Specs of the custom fields

{} Source code in python/posted/columns.py
class CustomFieldDefinition(AbstractFieldDefinition):\n    '''\n    Class to store Custom fields\n\n    Parameters\n    ----------\n    **field_specs:\n        Specs of the custom fields\n    '''\n    def __init__(self, **field_specs):\n        '''Check if the field specs are of the required type and format,\n        initialize parent class'''\n        if not ('type' in field_specs and isinstance(field_specs['type'], str) and\n                field_specs['type'] in ['case', 'component']):\n            raise Exception(\"Field type must be provided and equal to 'case' or 'component'.\")\n        if not ('name' in field_specs and isinstance(field_specs['name'], str)):\n            raise Exception('Field name must be provided and of type string.')\n        if not ('description' in field_specs and isinstance(field_specs['description'], str)):\n            raise Exception('Field description must be provided and of type string.')\n        if not ('coded' in field_specs and isinstance(field_specs['coded'], bool)):\n            raise Exception('Field coded must be provided and of type bool.')\n        if field_specs['coded'] and not ('codes' in field_specs and isinstance(field_specs['codes'], dict)):\n            raise Exception('Field codes must be provided and contain a dict of possible codes.')\n\n        super().__init__(\n            field_type=field_specs['type'],\n            name=field_specs['name'],\n            description=field_specs['description'],\n            dtype='category',\n            coded=field_specs['coded'],\n            codes=field_specs['codes'] if 'codes' in field_specs else None,\n        )\n
"},{"location":"code/python/columns/#python.posted.columns.CustomFieldDefinition.__init__","title":"__init__(**field_specs)","text":"

Check if the field specs are of the required type and format, initialize parent class

Source code in python/posted/columns.py
def __init__(self, **field_specs):\n    '''Check if the field specs are of the required type and format,\n    initialize parent class'''\n    if not ('type' in field_specs and isinstance(field_specs['type'], str) and\n            field_specs['type'] in ['case', 'component']):\n        raise Exception(\"Field type must be provided and equal to 'case' or 'component'.\")\n    if not ('name' in field_specs and isinstance(field_specs['name'], str)):\n        raise Exception('Field name must be provided and of type string.')\n    if not ('description' in field_specs and isinstance(field_specs['description'], str)):\n        raise Exception('Field description must be provided and of type string.')\n    if not ('coded' in field_specs and isinstance(field_specs['coded'], bool)):\n        raise Exception('Field coded must be provided and of type bool.')\n    if field_specs['coded'] and not ('codes' in field_specs and isinstance(field_specs['codes'], dict)):\n        raise Exception('Field codes must be provided and contain a dict of possible codes.')\n\n    super().__init__(\n        field_type=field_specs['type'],\n        name=field_specs['name'],\n        description=field_specs['description'],\n        dtype='category',\n        coded=field_specs['coded'],\n        codes=field_specs['codes'] if 'codes' in field_specs else None,\n    )\n
"},{"location":"code/python/columns/#python.posted.columns.PeriodFieldDefinition","title":"PeriodFieldDefinition","text":"

Bases: AbstractFieldDefinition

Class to store Period fields

Parameters:

Name Type Description Default name str

Name of the field

required description str

Description of the field

required

Methods:

Name Description is_allowed

Checks if cell is allowed

Source code in python/posted/columns.py
class PeriodFieldDefinition(AbstractFieldDefinition):\n    '''\n    Class to store Period fields\n\n    Parameters\n    ----------\n    name: str\n        Name of the field\n    description: str\n        Description of the field\n\n    Methods\n    -------\n    is_allowed\n        Checks if cell is allowed\n    '''\n    def __init__(self, name: str, description: str):\n        '''Initialize parent class'''\n        super().__init__(\n            field_type='case',\n            name=name,\n            description=description,\n            dtype='float',\n            coded=False,\n        )\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        '''Check if cell is a flowat or *'''\n        return is_float(cell) or cell == '*'\n\n    def _expand(self, df: pd.DataFrame, col_id: str, field_vals: list, **kwargs) -> pd.DataFrame:\n        return pd.concat([\n            df[df[col_id] != '*'],\n            df[df[col_id] == '*']\n            .drop(columns=[col_id])\n            .merge(pd.DataFrame.from_dict({col_id: field_vals}), how='cross'),\n        ]).astype({'period': 'float'})\n\n\n    def _select(self, df: pd.DataFrame, col_id: str, field_vals: list[int | float], **kwargs) -> pd.DataFrame:\n        # group by identifying columns and select periods/generate time series\n        # get list of groupable columns\n        group_cols = [\n            c for c in df.columns\n            if c not in [col_id, 'value']\n        ]\n\n        # perform groupby and do not drop NA values\n        grouped = df.groupby(group_cols, dropna=False)\n\n        # create return list\n        ret = []\n\n        # loop over groups\n        for keys, rows in grouped:\n            # get rows in group\n            rows = rows[[col_id, 'value']]\n\n            # get a list of periods that exist\n            periods_exist = rows[col_id].unique()\n\n            # create dataframe containing rows for all requested periods\n            req_rows = pd.DataFrame.from_dict({\n                f\"{col_id}\": field_vals,\n                f\"{col_id}_upper\": [min([ip for ip in periods_exist if ip >= p], default=np.nan) for p in field_vals],\n                f\"{col_id}_lower\": [max([ip for ip in periods_exist if ip <= p], default=np.nan) for p in field_vals],\n            })\n\n            # set missing columns from group\n            req_rows[group_cols] = keys\n\n            # check case\n            cond_match = req_rows[col_id].isin(periods_exist)\n            cond_extrapolate = (req_rows[f\"{col_id}_upper\"].isna() | req_rows[f\"{col_id}_lower\"].isna())\n\n            # match\n            rows_match = req_rows.loc[cond_match] \\\n                .merge(rows, on=col_id)\n\n            # extrapolate\n            rows_extrapolate = (\n                req_rows.loc[~cond_match & cond_extrapolate]\n                    .assign(\n                        period_combined=lambda x: np.where(\n                            x.notna()[f\"{col_id}_upper\"],\n                            x[f\"{col_id}_upper\"],\n                            x[f\"{col_id}_lower\"],\n                        )\n                    )\n                    .merge(rows.rename(columns={col_id: f\"{col_id}_combined\"}), on=f\"{col_id}_combined\")\n                if 'extrapolate_period' not in kwargs or kwargs['extrapolate_period'] else\n                pd.DataFrame()\n            )\n\n            # interpolate\n            rows_interpolate = req_rows.loc[~cond_match & ~cond_extrapolate] \\\n                .merge(rows.rename(columns={c: f\"{c}_upper\" 
for c in rows.columns}), on=f\"{col_id}_upper\") \\\n                .merge(rows.rename(columns={c: f\"{c}_lower\" for c in rows.columns}), on=f\"{col_id}_lower\") \\\n                .assign(value=lambda row: row['value_lower'] + (row[f\"{col_id}_upper\"] - row[col_id]) /\n                                          (row[f\"{col_id}_upper\"] - row[f\"{col_id}_lower\"]) * (row['value_upper'] - row['value_lower']))\n\n            # combine into one dataframe and drop unused columns\n            rows_to_concat = [df for df in [rows_match, rows_extrapolate, rows_interpolate] if not df.empty]\n            if rows_to_concat:\n                rows_append = pd.concat(rows_to_concat)\n                rows_append.drop(columns=[\n                        c for c in [f\"{col_id}_upper\", f\"{col_id}_lower\", f\"{col_id}_combined\", 'value_upper', 'value_lower']\n                        if c in rows_append.columns\n                    ], inplace=True)\n\n                # add to return list\n                ret.append(rows_append)\n\n        # convert return list to dataframe and return\n        return pd.concat(ret).reset_index(drop=True) if ret else df.iloc[[]]\n
"},{"location":"code/python/columns/#python.posted.columns.PeriodFieldDefinition.__init__","title":"__init__(name, description)","text":"

Initialize parent class

Source code in python/posted/columns.py
def __init__(self, name: str, description: str):\n    '''Initialize parent class'''\n    super().__init__(\n        field_type='case',\n        name=name,\n        description=description,\n        dtype='float',\n        coded=False,\n    )\n
"},{"location":"code/python/columns/#python.posted.columns.PeriodFieldDefinition.is_allowed","title":"is_allowed(cell)","text":"

Check if cell is a float or *

Source code in python/posted/columns.py
def is_allowed(self, cell: str | float | int) -> bool:\n    '''Check if cell is a float or *'''\n    return is_float(cell) or cell == '*'\n
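For illustration, a hedged sketch of how this check behaves (the constructor arguments are arbitrary): is_allowed accepts anything that is_float can parse, plus the '*' wildcard that _expand later replaces with all requested periods.

field = PeriodFieldDefinition(name='period', description='Period of the reported value')
field.is_allowed(2030)      # True: integers and floats pass is_float
field.is_allowed('2030.5')  # True: strings parsable as float pass as well
field.is_allowed('*')       # True: wildcard, expanded to all requested periods
field.is_allowed('2030s')   # False: not parsable as float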
"},{"location":"code/python/columns/#python.posted.columns.RegionFieldDefinition","title":"RegionFieldDefinition","text":"

Bases: AbstractFieldDefinition

Class to store Region fields

Parameters:

Name Type Description Default name str

Name of the field

required description str

Description of the field

required Source code in python/posted/columns.py
class RegionFieldDefinition(AbstractFieldDefinition):\n    '''\n    Class to store Region fields\n\n    Parameters\n    ----------\n    name: str\n        Name of the field\n    description: str\n        Description of the field\n    '''\n    def __init__(self, name: str, description: str):\n        '''Initialize parent class'''\n        super().__init__(\n            field_type='case',\n            name=name,\n            description=description,\n            dtype='category',\n            coded=True,\n            codes={'World': 'World'},  # TODO: Insert list of country names here.\n        )\n
"},{"location":"code/python/columns/#python.posted.columns.RegionFieldDefinition.__init__","title":"__init__(name, description)","text":"

Initialize parent class

Source code in python/posted/columns.py
def __init__(self, name: str, description: str):\n    '''Initialize parent class'''\n    super().__init__(\n        field_type='case',\n        name=name,\n        description=description,\n        dtype='category',\n        coded=True,\n        codes={'World': 'World'},  # TODO: Insert list of country names here.\n    )\n
"},{"location":"code/python/columns/#python.posted.columns.SourceFieldDefinition","title":"SourceFieldDefinition","text":"

Bases: AbstractFieldDefinition

Class to store Source fields

Parameters:

Name Type Description Default name str

Name of the field

required description str

Description of the field

required Source code in python/posted/columns.py
class SourceFieldDefinition(AbstractFieldDefinition):\n    '''\n    Class to store Source fields\n\n    Parameters\n    ----------\n    name: str\n        Name of the field\n    description: str\n        Description of the field\n    '''\n    def __init__(self, name: str, description: str):\n        '''Initialize parent class'''\n        super().__init__(\n            field_type='case',\n            name=name,\n            description=description,\n            dtype='category',\n            coded=False,  # TODO: Insert list of BibTeX identifiers here.\n        )\n
"},{"location":"code/python/columns/#python.posted.columns.SourceFieldDefinition.__init__","title":"__init__(name, description)","text":"

Initialize parent class

Source code in python/posted/columns.py
def __init__(self, name: str, description: str):\n    '''Initialize parent class'''\n    super().__init__(\n        field_type='case',\n        name=name,\n        description=description,\n        dtype='category',\n        coded=False,  # TODO: Insert list of BibTeX identifiers here.\n    )\n
"},{"location":"code/python/columns/#python.posted.columns.UnitDefinition","title":"UnitDefinition","text":"

Bases: AbstractColumnDefinition

Class to store Unit columns

Parameters:

Name Type Description Default col_type

Type of the column

required name str

Name of the column

required description str

Description of the column

required required bool

Bool that specifies if the column is required

required

Methods:

Name Description is_allowed

Check if cell is allowed

Source code in python/posted/columns.py
class UnitDefinition(AbstractColumnDefinition):\n    '''\n    Class to store Unit columns\n\n    Parameters\n    ----------\n    col_type: str\n        Type of the column\n    name: str\n        Name of the column\n    description: str\n        Description of the column\n    required: bool\n        Bool that specifies if the column is required\n\n    Methods\n    -------\n    is_allowed\n        Check if cell is allowed\n    '''\n    def __init__(self, name: str, description: str, required: bool):\n        super().__init__(\n            col_type='unit',\n            name=name,\n            description=description,\n            dtype='category',\n            required=required,\n        )\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        if pd.isnull(cell):\n            return not self._required\n        if not isinstance(cell, str):\n            return False\n        tokens = cell.split(';')\n        if len(tokens) == 1:\n            return cell in ureg\n        elif len(tokens) == 2:\n            return tokens[0] in ureg and tokens[1] in unit_variants\n        else:\n            return False\n
"},{"location":"code/python/columns/#python.posted.columns.ValueDefinition","title":"ValueDefinition","text":"

Bases: AbstractColumnDefinition

Class to store Value columns

Parameters:

Name Type Description Default col_type

Type of the column

required name str

Name of the column

required description str

Description of the column

required required bool

Bool that specifies if the column is required

required

Methods:

Name Description is_allowed

Check if cell is allowed

Source code in python/posted/columns.py
class ValueDefinition(AbstractColumnDefinition):\n    '''\n    Class to store Value columns\n\n    Parameters\n    ----------\n    col_type: str\n        Type of the column\n    name: str\n        Name of the column\n    description: str\n        Description of the column\n    required: bool\n        Bool that specifies if the column is required\n\n    Methods\n    -------\n    is_allowed\n        Check if cell is allowed\n    '''\n    def __init__(self, name: str, description: str, required: bool):\n        super().__init__(\n            col_type='value',\n            name=name,\n            description=description,\n            dtype='float',\n            required=required,\n        )\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        if pd.isnull(cell):\n            return not self._required\n        return isinstance(cell, float | int)\n
"},{"location":"code/python/columns/#python.posted.columns.VariableDefinition","title":"VariableDefinition","text":"

Bases: AbstractColumnDefinition

Class to store variable columns

Parameters:

Name Type Description Default col_type

Type of the column

required name str

Name of the column

required description str

Description of the column

required required bool

Bool that specifies if the column is required

required

Methods:

Name Description is_allowed

Check if cell is allowed

Source code in python/posted/columns.py
class VariableDefinition(AbstractColumnDefinition):\n    '''\n    Class to store variable columns\n\n    Parameters\n    ----------\n    col_type: str\n        Type of the column\n    name: str\n        Name of the column\n    description: str\n        Description of the column\n    required: bool\n        Bool that specifies if the column is required\n\n    Methods\n    -------\n    is_allowed\n        Check if cell is allowed\n    '''\n    def __init__(self, name: str, description: str, required: bool):\n        super().__init__(\n            col_type='variable',\n            name=name,\n            description=description,\n            dtype='category',\n            required=required,\n        )\n\n    def is_allowed(self, cell: str | float | int) -> bool:\n        if pd.isnull(cell):\n            return not self._required\n        return isinstance(cell, str) and cell in variables\n
"},{"location":"code/python/columns/#python.posted.columns.is_float","title":"is_float(string)","text":"

Checks if a given string can be converted to a floating-point number in Python.

Parameters:

Name Type Description Default string str

String to check

required

Returns:

Type Description bool

True if conversion was successful, False if not

Source code in python/posted/columns.py
def is_float(string: str) -> bool:\n    '''Checks if a given string can be converted to a floating-point number in\n    Python.\n\n    Parameters\n    ----------\n    string : str\n        String to check\n\n    Returns\n    -------\n        bool\n            True if conversion was successful, False if not\n    '''\n    try:\n        float(string)\n        return True\n    except ValueError:\n        return False\n
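A few hedged examples of what this helper accepts and rejects:

is_float('3.14')   # True
is_float('1e-3')   # True: anything float() parses, including scientific notation
is_float('*')      # False: float('*') raises ValueError
is_float('abc')    # False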
"},{"location":"code/python/columns/#python.posted.columns.read_fields","title":"read_fields(variable)","text":"

Read the fields of a variable

Returns:

Type Description dict
Dictionary containing the fields

comments Dictionary containing the comments

Source code in python/posted/columns.py
def read_fields(variable: str):\n    '''\n    Read the fields of a variable\n\n    Parameters\n    ----------\n        variable: str\n            Variable to read\n\n    Returns\n    -------\n        dict\n            Dictionary containing the fields\n        comments\n            Dictionary containing the comments\n\n    '''\n    fields: dict[str, CustomFieldDefinition] = {}\n    comments: dict[str, CommentDefinition] = {}\n\n    for database_id in databases:\n        fpath = databases[database_id] / 'fields' / ('/'.join(variable.split('|')) + '.yml')\n        if fpath.exists():\n            if not fpath.is_file():\n                raise Exception(f\"Expected YAML file, but not a file: {fpath}\")\n\n            for col_id, field_specs in read_yml_file(fpath).items():\n                if field_specs['type'] in ('case', 'component'):\n                    fields[col_id] = CustomFieldDefinition(**field_specs)\n                elif field_specs['type'] == 'comment':\n                    comments[col_id] = CommentDefinition(\n                        **{k: v for k, v in field_specs.items() if k != 'type'},\n                        required=False,\n                    )\n                else:\n                    raise Exception(f\"Unknown field type: {col_id}\")\n\n    # make sure the field ID is not the same as for a base column\n    for col_id in fields:\n        if col_id in base_columns:\n            raise Exception(f\"Field ID cannot be equal to a base column ID: {col_id}\")\n\n    return fields, comments\n
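A brief usage sketch; the variable name is a hypothetical example, and fields are looked up in fields/&lt;variable path&gt;.yml within each configured database:

fields, comments = read_fields('Tech|Electrolysis')
for col_id, field in fields.items():
    print(col_id, type(field).__name__)  # custom case/component fields defined for this variable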
"},{"location":"code/python/definitions/","title":"definitions","text":""},{"location":"code/python/definitions/#python.posted.definitions.read_definitions","title":"read_definitions(definitions_dir, flows, techs)","text":"

Reads YAML files from definitions directory, extracts tags, inserts tags into definitions, replaces tokens in definitions, and returns the updated definitions.

Parameters:

Name Type Description Default definitions_dir Path

Path leading to the definitions

required flows dict

Dictionary containing the different flow types. Each key represents a flow type, the corresponding value is a dictionary containing key-value pairs of attributes like density, energy content and their values.

required techs dict

Dictionary containing information about different technologies. Each key in the dictionary represents a unique technology ID, and the corresponding value is a dictionary containing various specifications for that technology, like 'description', 'class', 'primary output' etc.

required

Returns:

Type Description dict

Dictionary containing the definitions after processing and replacing tags and tokens

Source code in python/posted/definitions.py
def read_definitions(definitions_dir: Path, flows: dict, techs: dict):\n    '''\n    Reads YAML files from definitions directory, extracts tags, inserts tags into\n    definitions, replaces tokens in definitions, and returns the updated definitions.\n\n    Parameters\n    ----------\n    definitions_dir : Path\n        Path leading to the definitions\n    flows : dict\n        Dictionary containing the different flow types. Each key represents a flow type, the corresponding\n        value is a dictionary containing key-value pairs of attributes like density, energy content and their\n        values.\n    techs : dict\n        Dictionary containing information about different technologies. Each key in the\n        dictionary represents a unique technology ID, and the corresponding value is a dictionary containing\n        various specifications for that technology, like 'description', 'class', 'primary output' etc.\n\n    Returns\n    -------\n        dict\n            Dictionary containing the definitions after processing and replacing tags and tokens\n    '''\n    # check that the definitions directory exists and is a directory\n    if not definitions_dir.exists():\n        return {}\n    if not definitions_dir.is_dir():\n        raise Exception(f\"Should be a directory but is not: {definitions_dir}\")\n\n    # read all definitions and tags\n    definitions = {}\n    tags = {}\n    for file_path in definitions_dir.rglob('*.yml'):\n        if file_path.name.startswith('tag_'):\n            tags |= read_yml_file(file_path)\n        else:\n            definitions |= read_yml_file(file_path)\n\n    # read tags from flows and techs\n    tags['Flow IDs'] = {\n        flow_id: {}\n        for flow_id, flow_specs in flows.items()\n    }\n    tags['Tech IDs'] = {\n        tech_id: {\n            k: v\n            for k, v in tech_specs.items()\n            if k in ['primary_output']\n        }\n        for tech_id, tech_specs in techs.items()\n    }\n\n    # insert tags\n    for tag, items in tags.items():\n        definitions = replace_tags(definitions, tag, items)\n\n    # remove definitions where tags could not be replaced\n    if any('{' in key for key in definitions):\n        warnings.warn('Tokens could not be replaced correctly.')\n        definitions = {k: v for k, v in definitions.items() if '{' not in k}\n\n    # insert tokens\n    tokens = {\n        'default currency': lambda def_specs: default_currency,\n        'primary output': lambda def_specs: def_specs['primary_output'],\n    } | {\n        f\"default flow unit {unit_component}\": unit_token_func(unit_component, flows)\n        for unit_component in ('full', 'raw', 'variant')\n    }\n    for def_key, def_specs in definitions.items():\n        for def_property, def_value in def_specs.items():\n            for token_key, token_func in tokens.items():\n                if isinstance(def_value, str) and f\"{{{token_key}}}\" in def_value:\n                    def_specs[def_property] = def_specs[def_property].replace(f\"{{{token_key}}}\", token_func(def_specs))\n\n    return definitions\n
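A hedged sketch of a call to this function; the directory layout and the flows/techs contents are illustrative assumptions, not actual database content:

from pathlib import Path

flows = {'Hydrogen': {'default_unit': 'MWh;LHV'}}
techs = {'ELH2': {'description': 'Electrolysis', 'primary_output': 'Hydrogen'}}

# reads all *.yml files below 'definitions', expands the '{Flow IDs}' and '{Tech IDs}'
# tags built from flows/techs, and substitutes tokens such as '{default currency}'
definitions = read_definitions(Path('definitions'), flows=flows, techs=techs)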
"},{"location":"code/python/definitions/#python.posted.definitions.replace_tags","title":"replace_tags(definitions, tag, items)","text":"

Replaces specified tags in dictionary keys and values with corresponding items from another dictionary.

Parameters:

Name Type Description Default definitions dict

Dictionary containing the definitions, where the tags should be replaced by the items

required tag str

String to identify where replacements should be made in the definitions. Specifies the placeholder that needs to be replaced with actual values from the items dictionary.

required items dict[str, dict]

Dictionary containing the items with which to replace the tags in the definitions

required

Returns:

Type Description dict

Dictionary containing the definitions with replacements based on the provided tag and items.

Source code in python/posted/definitions.py
def replace_tags(definitions: dict, tag: str, items: dict[str, dict]):\n    '''\n    Replaces specified tags in dictionary keys and values with corresponding\n    items from another dictionary.\n\n    Parameters\n    ----------\n    definitions : dict\n        Dictionary containing the definitions, where the tags should be replaced by the items\n    tag : str\n        String to identify where replacements should be made in the definitions. Specifies\n        the placeholder that needs to be replaced with actual values from the `items` dictionary.\n    items : dict[str, dict]\n        Dictionary containing the items with which to replace the tags in the definitions\n\n    Returns\n    -------\n        dict\n            Dictionary containing the definitions with replacements based on the provided tag and items.\n    '''\n\n    definitions_with_replacements = {}\n    for def_name, def_specs in definitions.items():\n        if f\"{{{tag}}}\" not in def_name:\n            definitions_with_replacements[def_name] = def_specs\n        else:\n            for item_name, item_specs in items.items():\n                item_desc = item_specs['description'] if 'description' in item_specs else item_name\n                def_name_new = def_name.replace(f\"{{{tag}}}\", item_name)\n                def_specs_new = copy.deepcopy(def_specs)\n                def_specs_new |= item_specs\n\n                # replace tags in description\n                def_specs_new['description'] = def_specs['description'].replace(f\"{{{tag}}}\", item_desc)\n\n                # replace tags in other specs\n                for k, v in def_specs_new.items():\n                    if k == 'description' or not isinstance(v, str):\n                        continue\n                    def_specs_new[k] = def_specs_new[k].replace(f\"{{{tag}}}\", item_name)\n                    def_specs_new[k] = def_specs_new[k].replace('{parent variable}', def_name[:def_name.find(f\"{{{tag}}}\")-1])\n                definitions_with_replacements[def_name_new] = def_specs_new\n\n    return definitions_with_replacements\n
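To make the mechanics concrete, a small hypothetical example: a tag occurring in a definition key is expanded once per item, and the tag in the description is replaced by the item's description:

definitions = {'Tech|{Flow IDs}|Demand': {'description': 'Demand for {Flow IDs}'}}
items = {'Hydrogen': {'description': 'hydrogen'}, 'Electricity': {'description': 'electricity'}}

expanded = replace_tags(definitions, 'Flow IDs', items)
# expanded == {'Tech|Hydrogen|Demand': {'description': 'Demand for hydrogen'},
#              'Tech|Electricity|Demand': {'description': 'Demand for electricity'}}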
"},{"location":"code/python/definitions/#python.posted.definitions.unit_token_func","title":"unit_token_func(unit_component, flows)","text":"

Takes a unit component type and a dictionary of flows, and returns a lambda function that extracts the default unit based on the specified component type from the flow dictionary.

Parameters:

Name Type Description Default unit_component Literal['full', 'raw', 'variant']

Specifies the type of unit token to be returned.

required flows dict

Dictionary containing the flows

required

Returns:

Type Description lambda function

lambda function that takes a dictionary def_specs as input. The lambda function will return different values based on the unit_component parameter and the contents of the flows dictionary.

Source code in python/posted/definitions.py
def unit_token_func(unit_component: Literal['full', 'raw', 'variant'], flows: dict):\n    '''\n    Takes a unit component type and a dictionary of flows, and returns a lambda function\n    that extracts the default unit based on the specified component type from the flow\n    dictionary.\n\n    Parameters\n    ----------\n    unit_component : Literal['full', 'raw', 'variant']\n        Specifies the type of unit token to be returned.\n    flows : dict\n        Dictionary containing the flows\n\n    Returns\n    -------\n        lambda function\n            lambda function that takes a dictionary `def_specs` as input. The lambda function\n            will return different values based on the `unit_component` parameter and\n            the contents of the `flows` dictionary.\n    '''\n    return lambda def_specs: (\n        'ERROR'\n        if 'flow_id' not in def_specs or def_specs['flow_id'] not in flows else\n        (\n            flows[def_specs['flow_id']]['default_unit']\n            if unit_component == 'full' else\n            flows[def_specs['flow_id']]['default_unit'].split(';')[0]\n            if unit_component == 'raw' else\n            ';'.join([''] + flows[def_specs['flow_id']]['default_unit'].split(';')[1:2])\n            if unit_component == 'variant' else\n            'UNKNOWN'\n        )\n    )\n
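A brief sketch of the returned lambda in action (the flow specs are illustrative):

flows = {'Hydrogen': {'default_unit': 'MWh;LHV'}}

unit_token_func('full', flows)({'flow_id': 'Hydrogen'})     # 'MWh;LHV'
unit_token_func('raw', flows)({'flow_id': 'Hydrogen'})      # 'MWh' (part before the ';')
unit_token_func('variant', flows)({'flow_id': 'Hydrogen'})  # ';LHV'
unit_token_func('raw', flows)({'flow_id': 'Methane'})       # 'ERROR' (flow_id not in flows)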
"},{"location":"code/python/masking/","title":"masking","text":""},{"location":"code/python/masking/#python.posted.masking.Mask","title":"Mask","text":"

Class to define masks with conditions and weights to apply to DataFiles

Parameters:

Name Type Description Default where MaskCondition | list[MaskCondition]

Where the mask should be applied

None use MaskCondition | list[MaskCondition]

Condition on where to use the masks

None weight None | float | str | list[float | str]

Weights to apply

None other float nan comment str
Comment
'' Source code in python/posted/masking.py
class Mask:\n    '''Class to define masks with conditions and weights to apply to DataFiles\n\n    Parameters\n    ----------\n    where: MaskCondition | list[MaskCondition], optional\n        Where the mask should be applied\n    use:  MaskCondition | list[MaskCondition], optional\n        Condition on where to use the masks\n    weight: None | float | str | list[float | str], optional\n        Weights to apply\n    other: float, optional\n\n    comment: str, optional\n        Comment\n    '''\n    def __init__(self,\n                 where: MaskCondition | list[MaskCondition] = None,\n                 use: MaskCondition | list[MaskCondition] = None,\n                 weight: None | float | str | list[float | str] = None,\n                 other: float = np.nan,\n                 comment: str = ''):\n        '''set fields from constructor arguments, perform consistency checks on fields,\n        set default weight to 1 if not set otherwise'''\n        self._where: list[MaskCondition] = [] if where is None else where if isinstance(where, list) else [where]\n        self._use: list[MaskCondition] = [] if use is None else use if isinstance(use, list) else [use]\n        self._weight: list[float] = (\n            None\n            if weight is None else\n            [float(w) for w in weight]\n            if isinstance(weight, list) else\n            [float(weight)]\n        )\n        self._other: float = other\n        self._comment: str = comment\n\n        # perform consistency checks on fields\n        if self._use and self._weight and len(self._use) != len(self._weight):\n            raise Exception(f\"Must provide same length of 'use' conditions as 'weight' values.\")\n\n        # set default weight to 1 if not set otherwise\n        if not self._weight:\n            self._weight = len(self._use) * [1.0]\n\n\n    def matches(self, df: pd.DataFrame):\n        '''Check if a mask matches a dataframe (all 'where' conditions match across all rows)\n\n        Parameters\n        ----------\n        df: pd.DataFrame\n            Dataframe to check for matches\n\n        Returns\n        -------\n            bool\n                Whether the mask matches the dataframe'''\n        for w in self._where:\n            if not apply_cond(df, w).all():\n                return False\n        return True\n\n\n    def get_weights(self, df: pd.DataFrame):\n        '''Apply weights to the dataframe\n\n        Parameters\n        ----------\n        df: pd.DataFrame\n            Dataframe to apply weights on\n\n        Returns\n        -------\n            pd.Series\n                Series with the weights to apply'''\n        ret = pd.Series(index=df.index, data=np.nan)\n\n        # apply weights where the use condition matches\n        for u, w in zip(self._use, self._weight):\n            ret.loc[apply_cond(df, u)] = w\n\n        return ret\n
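A hedged usage sketch; the variable and source names below are hypothetical:

mask = Mask(
    where={'variable': 'Tech|Electrolysis|CAPEX'},     # rows the mask applies to
    use=[{'source': 'Smith21'}, {'source': 'Lee23'}],  # conditions that receive weights
    weight=[2.0, 1.0],
)

if mask.matches(df):  # True if every row of df satisfies all 'where' conditions
    weights = mask.get_weights(df)  # 2.0 or 1.0 where a 'use' condition matches, NaN elsewhere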
"},{"location":"code/python/masking/#python.posted.masking.Mask.__init__","title":"__init__(where=None, use=None, weight=None, other=np.nan, comment='')","text":"

set fields from constructor arguments, perform consistency checks on fields, set default weight to 1 if not set otherwise

Source code in python/posted/masking.py
def __init__(self,\n             where: MaskCondition | list[MaskCondition] = None,\n             use: MaskCondition | list[MaskCondition] = None,\n             weight: None | float | str | list[float | str] = None,\n             other: float = np.nan,\n             comment: str = ''):\n    '''set fields from constructor arguments, perform consistency checks on fields,\n    set default weight to 1 if not set otherwise'''\n    self._where: list[MaskCondition] = [] if where is None else where if isinstance(where, list) else [where]\n    self._use: list[MaskCondition] = [] if use is None else use if isinstance(use, list) else [use]\n    self._weight: list[float] = (\n        None\n        if weight is None else\n        [float(w) for w in weight]\n        if isinstance(weight, list) else\n        [float(weight)]\n    )\n    self._other: float = other\n    self._comment: str = comment\n\n    # perform consistency checks on fields\n    if self._use and self._weight and len(self._use) != len(self._weight):\n        raise Exception(f\"Must provide same length of 'use' conditions as 'weight' values.\")\n\n    # set default weight to 1 if not set otherwise\n    if not self._weight:\n        self._weight = len(self._use) * [1.0]\n
"},{"location":"code/python/masking/#python.posted.masking.Mask.get_weights","title":"get_weights(df)","text":"

Apply weights to the dataframe

Parameters:

Name Type Description Default df DataFrame

Dataframe to apply weights on

required

Returns:

Type Description pd.Series

Series with the weights to apply

Source code in python/posted/masking.py
def get_weights(self, df: pd.DataFrame):\n    '''Apply weights to the dataframe\n\n    Parameters\n    ----------\n    df: pd.DataFrame\n        Dataframe to apply weights on\n\n    Returns\n    -------\n        pd.Series\n            Series with the weights to apply'''\n    ret = pd.Series(index=df.index, data=np.nan)\n\n    # apply weights where the use condition matches\n    for u, w in zip(self._use, self._weight):\n        ret.loc[apply_cond(df, u)] = w\n\n    return ret\n
"},{"location":"code/python/masking/#python.posted.masking.Mask.matches","title":"matches(df)","text":"

Check if a mask matches a dataframe (all 'where' conditions match across all rows)

Parameters:

Name Type Description Default df DataFrame

Dataframe to check for matches

required

Returns:

Type Description bool

Whether the mask matches the dataframe

Source code in python/posted/masking.py
def matches(self, df: pd.DataFrame):\n    '''Check if a mask matches a dataframe (all 'where' conditions match across all rows)\n\n    Parameters\n    ----------\n    df: pd.DataFrame\n        Dataframe to check for matches\n\n    Returns\n    -------\n        bool\n            Whether the mask matches the dataframe'''\n    for w in self._where:\n        if not apply_cond(df, w).all():\n            return False\n    return True\n
"},{"location":"code/python/masking/#python.posted.masking.apply_cond","title":"apply_cond(df, cond)","text":"

Takes a pandas DataFrame and a condition, which can be a string, dictionary, or callable, and applies the condition to the DataFrame using eval or apply accordingly.

Parameters:

Name Type Description Default df DataFrame

A pandas DataFrame containing the data on which the condition will be applied.

required cond MaskCondition

The condition to be applied on the dataframe. Can be either a string, a dictionary, or a callable function.

required

Returns:

Type Description pd.Series

Boolean series indicating where the condition holds on the dataframe

Source code in python/posted/masking.py
def apply_cond(df: pd.DataFrame, cond: MaskCondition):\n    '''Takes a pandas DataFrame and a condition, which can be a string, dictionary,\n    or callable, and applies the condition to the DataFrame using `eval` or `apply`\n    accordingly.\n\n    Parameters\n    ----------\n    df : pd.DataFrame\n        A pandas DataFrame containing the data on which the condition will be applied.\n    cond : MaskCondition\n        The condition to be applied on the dataframe. Can be either a string, a dictionary, or a\n        callable function.\n\n    Returns\n    -------\n        pd.Series\n            Boolean series indicating where the condition holds on the dataframe\n\n    '''\n    if isinstance(cond, str):\n        return df.eval(cond)\n    elif isinstance(cond, dict):\n        cond = ' & '.join([f\"{key}=='{val}'\" for key, val in cond.items()])\n        return df.eval(cond)\n    elif isinstance(cond, Callable):\n        return df.apply(cond)\n
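The string and dictionary forms, sketched on a small hypothetical dataframe (callables are simply handed to df.apply):

import pandas as pd

df = pd.DataFrame({'source': ['A', 'B'], 'period': [2030, 2050]})

apply_cond(df, 'period > 2040')  # string: evaluated via df.eval -> [False, True]
apply_cond(df, {'source': 'A'})  # dict: becomes the query source=='A' -> [True, False]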
"},{"location":"code/python/masking/#python.posted.masking.read_masks","title":"read_masks(variable)","text":"

Reads YAML files containing mask specifications from multiple databases and returns a list of Mask objects.

Parameters:

Name Type Description Default variable str

Variable to be read

required

Returns:

Type Description list

List with masks for the variable

Source code in python/posted/masking.py
def read_masks(variable: str):\n    '''Reads YAML files containing mask specifications from multiple databases\n    and returns a list of Mask objects.\n\n    Parameters\n    ----------\n    variable : str\n        Variable to be read\n\n    Returns\n    -------\n        list\n            List with masks for the variable\n\n    '''\n    ret: list[Mask] = []\n\n    for database_id in databases:\n        fpath = databases[database_id] / 'masks' / ('/'.join(variable.split('|')) + '.yml')\n        if fpath.exists():\n            if not fpath.is_file():\n                raise Exception(f\"Expected YAML file, but not a file: {fpath}\")\n\n            ret += [\n                Mask(**mask_specs)\n                for mask_specs in read_yml_file(fpath)\n            ]\n\n    return ret\n
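A short sketch: for a given variable, the mask files are looked up per database under masks/&lt;variable path&gt;.yml (the variable name here is hypothetical):

masks = read_masks('Tech|Electrolysis')  # reads masks/Tech/Electrolysis.yml where present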
"},{"location":"code/python/noslag/","title":"noslag","text":""},{"location":"code/python/noslag/#python.posted.noslag.DataSet","title":"DataSet","text":"

Bases: TEBase

Class to store, normalise, select and aggregate DataSets

Parameters

parent_variable: str Variable to collect data on include_databases: Optional[list[str] | tuple[str]], optional Databases to load from file_paths: Optional[list[Path]], optional Paths to load data from check_inconsistencies: bool, optional Whether to check for inconsistencies data: Optional[pd.DataFrame], optional Specific data to include in the dataset

Source code in python/posted/noslag.py
class DataSet(TEBase):\n    '''Class to store, normalise, select and aggregate DataSets\n\n    Parameters\n    ----------\n    parent_variable: str\n        Variable to collect data on\n    include_databases: Optional[list[str] | tuple[str]], optional\n        Databases to load from\n    file_paths: Optional[list[Path]], optional\n        Paths to load data from\n    check_inconsistencies: bool, optional\n        Whether to check for inconsistencies\n    data: Optional[pd.DataFrame], optional\n        Specific data to include in the dataset\n    '''\n    _df: None | pd.DataFrame\n    _columns: dict[str, AbstractColumnDefinition]\n    _fields: dict[str, AbstractFieldDefinition]\n    _masks: list[Mask]\n\n    # initialise\n    def __init__(self,\n                 parent_variable: str,\n                 include_databases: Optional[list[str] | tuple[str]] = None,\n                 file_paths: Optional[list[Path]] = None,\n                 check_inconsistencies: bool = False,\n                 data: Optional[pd.DataFrame] = None,\n                 ):\n        '''Initialise parent class and fields, load data from specified databases and files'''\n        TEBase.__init__(self, parent_variable)\n\n        # initialise fields\n        self._df = None\n        self._columns = base_columns\n        self._fields = {\n            col_id: field\n            for col_id, field in self._columns.items()\n            if isinstance(field, AbstractFieldDefinition)\n        }\n        self._masks = []\n\n        # Load data if provided, otherwise load from TEDataFiles\n        if data is not None:\n            self._df = data\n        else:\n            # read TEDataFiles and combine into dataset\n            include_databases = list(include_databases) if include_databases is not None else list(databases.keys())\n            self._df = self._load_files(include_databases, file_paths or [], check_inconsistencies)\n\n\n    @property\n    def data(self):\n        '''pd.DataFrame: Get or set dataframe'''\n        return self._df\n\n    def set_data(self, df: pd.DataFrame):\n        self._df = df\n\n\n    def _load_files(self, include_databases: list[str], file_paths: list[Path], check_inconsistencies: bool):\n        # Load TEDFs and compile into NSHADataSet\n\n        files: list[TEDF] = []\n\n        # collect TEDF and append to list\n        collected_files = collect_files(parent_variable=self._parent_variable, include_databases=include_databases)\n        for file_variable, file_database_id in collected_files:\n            files.append(TEDF(parent_variable=file_variable, database_id=file_database_id))\n        for file_path in file_paths:\n            files.append(TEDF(parent_variable=self._parent_variable, file_path=file_path))\n\n        # raise exception if no TEDF can be loaded\n        if not files:\n            raise Exception(f\"No TEDF to load for variable '{self._parent_variable}'.\")\n\n        # get fields and masks from databases\n        files_vars: set[str] = {f.parent_variable for f in files}\n        for v in files_vars:\n            new_fields, new_comments = read_fields(v)\n            for col_id in new_fields | new_comments:\n                if col_id in self._columns:\n                    raise Exception(f\"Cannot load TEDFs due to multiple columns with same ID defined: {col_id}\")\n            self._fields = new_fields | self._fields\n            self._columns = new_fields | self._columns | new_comments\n            self._masks += read_masks(v)\n\n        # load all TEDFs: load from 
file, check for inconsistencies (if requested), expand cases and variables\n        file_dfs: list[pd.DataFrame] = []\n        for f in files:\n            # load\n            f.load()\n\n            # check for inconsistencies\n            if check_inconsistencies:\n                f.check()\n\n            # obtain dataframe and insert column parent_variable\n            df_tmp = f.data.copy()\n            df_tmp.insert(0, 'parent_variable', f.parent_variable)\n\n            # append to dataframe list\n            file_dfs.append(df_tmp)\n\n        # compile dataset from the dataframes loaded from the individual files\n        data = pd.concat(file_dfs)\n\n        # query relevant variables\n        data = data.query(f\"parent_variable=='{self._parent_variable}'\")\n\n        # drop entries with unknown variables and warn\n        for var_type in ('variable', 'reference_variable'):\n            cond = (data[var_type].notnull() &\n                    data.apply(lambda row: f\"{row['parent_variable']}|{row[var_type]}\" not in self._var_specs, axis=1))\n            if cond.any():\n                warnings.warn(f\"Unknown {var_type}, so dropping rows:\\n{data.loc[cond, var_type]}\")\n                data = data.loc[~cond].reset_index(drop=True)\n\n        # return\n        return data\n\n\n    def normalise(self, override: Optional[dict[str, str]] = None, inplace: bool = False) -> pd.DataFrame | None:\n        '''\n        Normalise data: default reference units, reference value equal to 1.0, default reported units\n\n        Parameters\n        ----------\n        override: Optional[dict[str,str]], optional\n            Dictionary with key, value pairs of variables to override\n        inplace: bool, optional\n            Whether to do the normalisation in place\n\n        Returns\n        -------\n        pd.DataFrame\n            If inplace is False, returns the normalised dataframe'''\n        normalised, _ = self._normalise(override)\n        if inplace:\n            self._df = normalised\n            return\n        else:\n            return normalised\n\n    def _normalise(self, override: Optional[dict[str, str]]) -> tuple[pd.DataFrame, dict[str, str]]:\n        if override is None:\n            override = {}\n\n        # get overridden var specs\n        var_flow_ids = {\n            var_name: var_specs['flow_id'] if 'flow_id' in var_specs else np.nan\n            for var_name, var_specs in self._var_specs.items()\n        }\n        var_units = {\n            var_name: var_specs['default_unit']\n            for var_name, var_specs in self._var_specs.items()\n        } | override\n\n        # normalise reference units, normalise reference values, and normalise reported units\n        normalised = self._df \\\n            .pipe(normalise_units, level='reference', var_units=var_units, var_flow_ids=var_flow_ids) \\\n            .pipe(normalise_values) \\\n            .pipe(normalise_units, level='reported', var_units=var_units, var_flow_ids=var_flow_ids)\n\n        # return normalised data and variable units\n        return normalised, var_units\n\n    # prepare data for selection\n    def select(self,\n               override: Optional[dict[str, str]] = None,\n               drop_singular_fields: bool = True,\n               extrapolate_period: bool = True,\n               **field_vals_select) -> pd.DataFrame:\n        '''Select desired data from the dataframe\n\n        Parameters\n        ----------\n        override: Optional[dict[str, str]]\n            Dictionary with key, value pairs 
of variables to override\n        drop_singular_fields: bool, optional\n            If True, drop custom fields with only one value\n        extrapolate_period: bool, optional\n            If True, fill missing values by extrapolation if no value for the requested period is given\n        **field_vals_select\n            IDs of values to select\n\n        Returns\n        -------\n        pd.DataFrame\n            DataFrame with selected values\n            '''\n        selected, var_units, var_references = self._select(\n            override,\n            drop_singular_fields,\n            extrapolate_period,\n            **field_vals_select,\n        )\n        selected.insert(selected.columns.tolist().index('variable'), 'reference_variable', np.nan)\n        selected['reference_variable'] = selected['variable'].map(var_references)\n        return self._cleanup(selected, var_units)\n\n    def _select(self,\n                override: Optional[dict[str, str]],\n                drop_singular_fields: bool,\n                extrapolate_period: bool,\n                **field_vals_select) -> tuple[pd.DataFrame, dict[str, str], dict[str, str]]:\n        # start from normalised data\n        normalised, var_units = self._normalise(override)\n        selected = normalised\n\n        # drop unit columns and reference value column\n        selected.drop(columns=['unit', 'reference_unit', 'reference_value'], inplace=True)\n\n        # drop columns containing comments and uncertainty field (which is currently unsupported)\n        selected.drop(\n            columns=['uncertainty'] + [\n                col_id for col_id, field in self._columns.items()\n                if field.col_type == 'comment'\n            ],\n            inplace=True,\n        )\n\n        # add parent variable as prefix to other variable columns\n        selected['variable'] = selected['parent_variable'] + '|' + selected['variable']\n        selected['reference_variable'] = selected['parent_variable'] + '|' + selected['reference_variable']\n        selected.drop(columns=['parent_variable'], inplace=True)\n\n        # raise exception if unknown fields are listed in the arguments\n        for field_id in field_vals_select:\n            if not any(field_id == col_id for col_id in self._fields):\n                raise Exception(f\"Field '{field_id}' does not exist and cannot be used for selection.\")\n\n        # order fields for selection: period must be expanded last due to the interpolation\n        fields_select = ({col_id: self._fields[col_id] for col_id in field_vals_select} |\n                         {col_id: field for col_id, field in self._fields.items() if col_id != 'period' and col_id not in field_vals_select} |\n                         {'period': self._fields['period']})\n\n        # select and expand fields\n        for col_id, field in fields_select.items():\n            field_vals = field_vals_select[col_id] if col_id in field_vals_select else None\n            selected = field.select_and_expand(selected, col_id, field_vals, extrapolate_period=extrapolate_period)\n\n        # drop custom fields with only one value if specified in method argument\n        if drop_singular_fields:\n            selected.drop(columns=[\n                col_id for col_id, field in self._fields.items()\n                if isinstance(field, CustomFieldDefinition) and selected[col_id].nunique() < 2\n            ], inplace=True)\n\n        # apply mappings\n        selected = self._apply_mappings(selected, var_units)\n\n        # drop rows with 
failed mappings\n        selected.dropna(subset='value', inplace=True)\n\n        # get map of variable references\n        var_references = selected \\\n            .filter(['variable', 'reference_variable']) \\\n            .drop_duplicates() \\\n            .set_index('variable')['reference_variable']\n\n        # Check for multiple reference variables per reported variable\n        if not var_references.index.is_unique:\n            raise Exception(f\"Multiple reference variables per reported variable found: {var_references}\")\n        var_references = var_references.to_dict()\n\n        # Remove 'reference_variable' column\n        selected.drop(columns=['reference_variable'], inplace=True)\n\n        # strip off unit variants\n        var_units = {\n            variable: unit.split(';')[0]\n            for variable, unit in var_units.items()\n        }\n\n        # return\n        return selected, var_units, var_references\n\n\n    def _apply_mappings(self, expanded: pd.DataFrame, var_units: dict) -> pd.DataFrame:\n        # apply mappings between entry types\n        # list of columns to group by\n        group_cols = [\n            c for c in expanded.columns\n            if c not in ['variable', 'reference_variable', 'value']\n        ]\n\n        # perform groupby and do not drop NA values\n        grouped = expanded.groupby(group_cols, dropna=False)\n\n        # create return list\n        ret = []\n\n        # loop over groups\n        for keys, ids in grouped.groups.items():\n            # get rows in group\n            rows = expanded.loc[ids, [c for c in expanded if c not in group_cols]].copy()\n\n            # 1. convert FLH to OCF\n            cond = rows['variable'].str.endswith('|FLH')\n            if cond.any():\n\n                # Multiply 'value' by conversion factor\n                rows.loc[cond, 'value'] *= rows.loc[cond].apply(\n                    lambda row: unit_convert(\n                        var_units[row['variable']] + '/a',\n                        var_units[row['variable'].replace('|FLH', '|OCF')],\n                    ),\n                    axis=1,\n                )\n\n                # Replace '|FLH' with '|OCF' in 'variable'\n                rows.loc[cond, 'variable'] = rows.loc[cond, 'variable'] \\\n                    .str.replace('|FLH', '|OCF', regex=False)\n\n            # 2. 
convert OPEX Fixed Relative to OPEX Fixed\n            cond = rows['variable'].str.endswith('|OPEX Fixed Relative')\n            if cond.any():\n\n                # Define a function to calculate the conversion factor\n                def calculate_conversion(row):\n                    conversion_factor = unit_convert(var_units[row['variable']], 'dimensionless') * unit_convert(\n                        var_units[row['variable'].replace('|OPEX Fixed Relative', '|CAPEX')] + '/a',\n                        var_units[row['variable'].replace('|OPEX Fixed Relative', '|OPEX Fixed')]\n                    ) * (rows.query(\n                        f\"variable=='{row['variable'].replace('|OPEX Fixed Relative', '|CAPEX')}'\"\n                    ).pipe(\n                        lambda df: df['value'].iloc[0] if not df.empty else np.nan,\n                    ))\n                    return conversion_factor\n\n                # Calculate the conversion factor and update 'value' for rows satisfying the condition\n                rows.loc[cond, 'value'] *= rows.loc[cond].apply(\n                    lambda row: calculate_conversion(row),\n                    axis=1,\n                )\n\n                # Replace '|OPEX Fixed Relative' with '|OPEX Fixed' in 'variable'\n                rows.loc[cond, 'variable'] = rows.loc[cond, 'variable'] \\\n                    .str.replace('|OPEX Fixed Relative', '|OPEX Fixed')\n\n                # Assign 'reference_variable' based on modified 'variable'\n                rows.loc[cond, 'reference_variable'] = rows.loc[cond].apply(\n                    lambda row: rows.query(\n                        f\"variable=='{row['variable'].replace('|OPEX Fixed', '|CAPEX')}'\"\n                    ).pipe(\n                        lambda df: df['reference_variable'].iloc[0] if not df.empty else np.nan,\n                    ),\n                    axis=1,\n                )\n\n                # Check if there are rows with null 'value' after the operation\n                if (cond & rows['value'].isnull()).any():\n                    warnings.warn(HarmoniseMappingFailure(\n                        expanded.loc[ids].loc[cond & rows['value'].isnull()],\n                        'No CAPEX value matching an OPEX Fixed Relative value found.',\n                    ))\n\n            # 3. 
convert OPEX Fixed Specific to OPEX Fixed\n            cond = rows['variable'].str.endswith('|OPEX Fixed Specific')\n            if cond.any():\n\n                # Define a function to calculate the conversion factor\n                def calculate_conversion(row):\n                    conversion_factor = unit_convert(\n                        var_units[row['variable']] + '/a',\n                        var_units[row['variable'].replace('|OPEX Fixed Specific', '|OPEX Fixed')]\n                    ) / unit_convert(\n                        var_units[row['reference_variable']] + '/a',\n                        var_units[re.sub(r'(Input|Output)', r'\\1 Capacity', row['reference_variable'])],\n                        self._var_specs[row['reference_variable']]['flow_id'] if 'flow_id' in self._var_specs[row['reference_variable']] else np.nan,\n                    ) * unit_convert(\n                        var_units[row['variable'].replace('|OPEX Fixed Specific', '|OCF')],\n                        'dimensionless'\n                    ) * (rows.query(\n                        f\"variable=='{row['variable'].replace('|OPEX Fixed Specific', '|OCF')}'\"\n                    ).pipe(\n                        lambda df: df['value'].iloc[0] if not df.empty else np.nan,\n                    ))\n                    return conversion_factor\n\n                # Calculate the conversion factor and update 'value' for rows satisfying the condition\n                rows.loc[cond, 'value'] *= rows.loc[cond].apply(\n                    lambda row: calculate_conversion(row),\n                    axis=1,\n                )\n\n                # replace '|OPEX Fixed Specific' with '|OPEX Fixed' in 'variable'\n                rows.loc[cond, 'variable'] = rows.loc[cond, 'variable'] \\\n                    .str.replace('|OPEX Fixed Specific', '|OPEX Fixed')\n\n                # Assign 'reference_variable' by replacing 'Input' or 'Output' with 'Input Capacity' or 'Output Capacity'\n                rows.loc[cond, 'reference_variable'] = rows.loc[cond, 'reference_variable'].apply(\n                    lambda cell: re.sub(r'(Input|Output)', r'\\1 Capacity', cell),\n                )\n\n                # Check if there are any rows with null 'value' after the operation\n                if (cond & rows['value'].isnull()).any():\n                    warnings.warn(HarmoniseMappingFailure(\n                        expanded.loc[ids].loc[cond & rows['value'].isnull()],\n                        'No OCF value matching an OPEX Fixed Specific value found.',\n                    ))\n\n            # 4. convert efficiencies (Output over Input) to demands (Input over Output)\n            cond = (rows['variable'].str.contains(r'\\|Output(?: Capacity)?\\|') &\n                    (rows['reference_variable'].str.contains(r'\\|Input(?: Capacity)?\\|')\n                    if rows['reference_variable'].notnull().any() else False))\n            if cond.any():\n                rows.loc[cond, 'value'] = 1.0 / rows.loc[cond, 'value']\n                rows.loc[cond, 'variable_new'] = rows.loc[cond, 'reference_variable']\n                rows.loc[cond, 'reference_variable'] = rows.loc[cond, 'variable']\n                rows.loc[cond, 'variable'] = rows.loc[cond, 'variable_new']\n                rows.drop(columns=['variable_new'], inplace=True)\n\n            # 5. 
convert all references to primary output\n            cond = (((rows['reference_variable'].str.contains(r'\\|Output(?: Capacity)?\\|') |\n                    rows['reference_variable'].str.contains(r'\\|Input(?: Capacity)?\\|'))\n                    if rows['reference_variable'].notnull().any() else False) &\n                    rows['variable'].map(lambda var: 'default_reference' in self._var_specs[var]) &\n                    (rows['variable'].map(\n                        lambda var: self._var_specs[var]['default_reference']\n                        if 'default_reference' in self._var_specs[var] else np.nan\n                    ) != rows['reference_variable']))\n            if cond.any():\n                regex_find = r'\\|(Input|Output)(?: Capacity)?\\|'\n                regex_repl = r'|\\1|'\n                rows.loc[cond, 'reference_variable_new'] = rows.loc[cond, 'variable'].map(\n                    lambda var: self._var_specs[var]['default_reference'],\n                )\n\n                # Define function to calculate the conversion factor\n                def calculate_conversion(row):\n                    conversion_factor =  unit_convert(\n                        ('a*' if 'Capacity' in row['reference_variable'] else '') + var_units[row['reference_variable_new']],\n                        var_units[re.sub(regex_find, regex_repl, row['reference_variable_new'])],\n                        row['reference_variable_new'].split('|')[-1]\n                    ) / unit_convert(\n                        ('a*' if 'Capacity' in row['reference_variable'] else '') + var_units[row['reference_variable']],\n                        var_units[re.sub(regex_find, regex_repl, row['reference_variable'])],\n                        row['reference_variable'].split('|')[-1]\n                    ) * rows.query(\n                        f\"variable=='{re.sub(regex_find, regex_repl, row['reference_variable'])}' & \"\n                        f\"reference_variable=='{re.sub(regex_find, regex_repl, row['reference_variable_new'])}'\"\n                    ).pipe(\n                        lambda df: df['value'].iloc[0] if not df.empty else np.nan,\n                    )\n                    return conversion_factor\n\n                # Calculate the conversion factor and update 'value' for rows satisfying the condition\n                rows.loc[cond, 'value'] *= rows.loc[cond].apply(\n                    lambda row: calculate_conversion(row),\n                    axis=1,\n                )\n                rows.loc[cond, 'reference_variable'] = rows.loc[cond, 'reference_variable_new']\n                rows.drop(columns=['reference_variable_new'], inplace=True)\n                if (cond & rows['value'].isnull()).any():\n                    warnings.warn(HarmoniseMappingFailure(\n                        expanded.loc[ids].loc[cond & rows['value'].isnull()],\n                        'No appropriate mapping found to convert row reference to primary output.',\n                    ))\n\n            # set missing columns from group\n            rows[group_cols] = keys\n\n            # add to return list\n            ret.append(rows)\n\n        # convert return list to dataframe and return\n        return pd.concat(ret).reset_index(drop=True) if ret else expanded.iloc[[]]\n\n    # select data\n    def aggregate(self, override: Optional[dict[str, str]] = None,\n                  drop_singular_fields: bool = True,\n                  extrapolate_period: bool = True,\n                  agg: Optional[str | list[str] | 
tuple[str]] = None,\n                  masks: Optional[list[Mask]] = None,\n                  masks_database: bool = True,\n                  **field_vals_select) -> pd.DataFrame:\n        '''Aggregates data based on specified parameters, applies masks,\n        and cleans up the resulting DataFrame.\n\n        Parameters\n        ----------\n        override: Optional[dict[str, str]]\n            Dictionary with key, value pairs of variables to override\n        drop_singular_fields: bool, optional\n            If True, drop custom fields with only one value\n        extrapolate_period: bool, optional\n            If True, fill missing values by extrapolation if no value for the requested period is given\n        agg : Optional[str | list[str] | tuple[str]]\n            Specifies which fields to aggregate over.\n        masks : Optional[list[Mask]]\n            Specifies a list of Mask objects that will be applied to the data during aggregation.\n            These masks can be used to filter or weight the\n            data based on certain conditions defined in the Mask objects.\n        masks_database : bool, optional\n            Determines whether to include masks from databases in the aggregation process.\n            If set to `True`, masks from databases will be included along with any masks provided as function arguments.\n            If set to `False`, only the masks provided as function arguments will be applied\n\n        Returns\n        -------\n        pd.DataFrame\n            The `aggregate` method returns a pandas DataFrame that has been cleaned up and aggregated based\n            on the specified parameters and input data. The method performs aggregation over component\n            fields and cases fields, applies weights based on masks, drops rows with NaN weights, aggregates\n            with weights, inserts reference variables, sorts columns and rows, rounds values, and inserts\n            units before returning the final cleaned and aggregated DataFrame.\n\n        '''\n\n        # get selection\n        selected, var_units, var_references = self._select(override,\n                                                           drop_singular_fields,\n                                                           extrapolate_period,\n                                                           **field_vals_select)\n\n        # compile masks from databases and function argument into one list\n        if masks is not None and any(not isinstance(m, Mask) for m in masks):\n            raise Exception(\"Function argument 'masks' must contain a list of posted.masking.Mask objects.\")\n        masks = (self._masks if masks_database else []) + (masks or [])\n\n        # aggregation\n        component_fields = [\n            col_id for col_id, field in self._fields.items()\n            if field.field_type == 'component'\n        ]\n        if agg is None:\n            agg = component_fields + ['source']\n        else:\n            if isinstance(agg, tuple):\n                agg = list(agg)\n            elif not isinstance(agg, list):\n                agg = [agg]\n            for a in agg:\n                if not isinstance(a, str):\n                    raise Exception(f\"Field ID in argument 'agg' must be a string but found: {a}\")\n                if not any(a == col_id for col_id in self._fields):\n                    raise Exception(f\"Field ID in argument 'agg' is not a valid field: {a}\")\n\n        # aggregate over component fields\n        group_cols = [\n            c for c in 
selected.columns\n            if not (c == 'value' or (c in agg and c in component_fields))\n        ]\n        aggregated = selected \\\n            .groupby(group_cols, dropna=False) \\\n            .agg({'value': 'sum'}) \\\n            .reset_index()\n\n        # aggregate over cases fields\n        group_cols = [\n            c for c in aggregated.columns\n            if not (c == 'value' or c in agg)\n        ]\n        ret = []\n        for keys, rows in aggregated.groupby(group_cols, dropna=False):\n            # set default weights to 1.0\n            rows = rows.assign(weight=1.0)\n\n            # update weights by applying masks\n            for mask in masks:\n                if mask.matches(rows):\n                    rows['weight'] *= mask.get_weights(rows)\n\n            # drop all rows with weights equal to nan\n            rows.dropna(subset='weight', inplace=True)\n\n            if not rows.empty:\n                # aggregate with weights\n                out = rows \\\n                    .groupby(group_cols, dropna=False)[['value', 'weight']] \\\n                    .apply(lambda cols: pd.Series({\n                        'value': np.average(cols['value'], weights=cols['weight']),\n                    }))\n\n                # add to return list\n                ret.append(out)\n        aggregated = pd.concat(ret).reset_index()\n\n        # insert reference variables\n        var_ref_unique = {\n            var_references[var]\n            for var in aggregated['variable'].unique()\n            if not pd.isnull(var_references[var])\n        }\n        agg_append = []\n        for ref_var in var_ref_unique:\n            agg_append.append(pd.DataFrame({\n                'variable': [ref_var],\n                'value': [1.0],\n            } | {\n                col_id: ['*']\n                for col_id, field in self._fields.items() if col_id in aggregated\n            }))\n        if agg_append:\n            agg_append = pd.concat(agg_append).reset_index(drop=True)\n            for col_id, field in self._fields.items():\n                if col_id not in aggregated:\n                    continue\n                agg_append = field.select_and_expand(agg_append, col_id, aggregated[col_id].unique().tolist())\n        else:\n            agg_append = None\n\n        # convert return list to dataframe, reset index, and clean up\n        return self._cleanup(pd.concat([aggregated, agg_append]), var_units)\n\n    # clean up: sort columns and rows, round values, insert units\n    def _cleanup(self, df: pd.DataFrame, var_units: dict[str, str]) -> pd.DataFrame:\n        # sort columns and rows\n        cols_sorted = (\n            [col_id for col_id, field in self._fields.items() if isinstance(field, CustomFieldDefinition)] +\n            ['source', 'variable', 'reference_variable', 'region', 'period', 'value']\n        )\n        cols_sorted = [c for c in cols_sorted if c in df.columns]\n        df = df[cols_sorted]\n        df = df \\\n            .sort_values(by=[c for c in cols_sorted if c in df and c != 'value']) \\\n            .reset_index(drop=True)\n\n        # round values\n        df['value'] = df['value'].apply(\n            lambda cell: cell if pd.isnull(cell) else round(cell, sigfigs=4, warn=False)\n        )\n\n        # insert column containing units\n        df.insert(df.columns.tolist().index('value'), 'unit', np.nan)\n        if 'reference_variable' in df:\n            df['unit'] = df.apply(\n                lambda row: combine_units(var_units[row['variable']], 
var_units[row['reference_variable']])\n                            if not pd.isnull(row['reference_variable']) else\n                            var_units[row['variable']],\n                axis=1,\n            )\n        else:\n            df['unit'] = df['variable'].map(var_units)\n\n        return df\n
"},{"location":"code/python/noslag/#python.posted.noslag.DataSet.data","title":"data property","text":"

str: Get or set dataframe

"},{"location":"code/python/noslag/#python.posted.noslag.DataSet.__init__","title":"__init__(parent_variable, include_databases=None, file_paths=None, check_inconsistencies=False, data=None)","text":"

Initialise parent class and fields, load data from specified databases and files

Source code in python/posted/noslag.py
def __init__(self,\n             parent_variable: str,\n             include_databases: Optional[list[str] | tuple[str]] = None,\n             file_paths: Optional[list[Path]] = None,\n             check_inconsistencies: bool = False,\n             data: Optional[pd.DataFrame] = None,\n             ):\n    '''Initialise parent class and fields, load data from specified databases and files\n\n\n    '''\n    TEBase.__init__(self, parent_variable)\n\n    # initialise fields\n    self._df = None\n    self._columns = base_columns\n    self._fields = {\n        col_id: field\n        for col_id, field in self._columns.items()\n        if isinstance(field, AbstractFieldDefinition)\n    }\n    self._masks = []\n\n    # Load data if provided, otherwise load from TEDataFiles\n    if data is not None:\n        self._df = data\n    else:\n        # read TEDataFiles and combine into dataset\n        include_databases = list(include_databases) if include_databases is not None else list(databases.keys())\n        self._df = self._load_files(include_databases, file_paths or [], check_inconsistencies)\n
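For illustration, a minimal usage sketch (not part of the package source): the parent variable 'Tech|Electrolysis' is a hypothetical placeholder; any variable with TEDFs in the included databases works.

from posted.noslag import DataSet

# load all TEDFs for the (hypothetical) parent variable from all registered databases
ds = DataSet('Tech|Electrolysis')

# or restrict to the public database and check files for inconsistencies while reading
ds = DataSet('Tech|Electrolysis', include_databases=['public'], check_inconsistencies=True)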
"},{"location":"code/python/noslag/#python.posted.noslag.DataSet.aggregate","title":"aggregate(override=None, drop_singular_fields=True, extrapolate_period=True, agg=None, masks=None, masks_database=True, **field_vals_select)","text":"

Aggregates data based on specified parameters, applies masks, and cleans up the resulting DataFrame.

Parameters:

Name Type Description Default override Optional[dict[str, str]]

Dictionary with key, value pairs of variables to override

None drop_singular_fields bool

If True, drop custom fields with only one value

True extrapolate_period bool

If True, extrapolate values if no value is given for the requested period

True agg Optional[str | list[str] | tuple[str]]

Specifies which fields to aggregate over.

None masks Optional[list[Mask]]

Specifies a list of Mask objects that will be applied to the data during aggregation. These masks can be used to filter or weight the data based on certain conditions defined in the Mask objects.

None masks_database bool

Determines whether to include masks from databases in the aggregation process. If set to True, masks from databases will be included along with any masks provided as function arguments. If set to False, only the masks provided as function arguments will be applied.

True

Returns:

Type Description DataFrame

The aggregate method returns a pandas DataFrame that has been cleaned up and aggregated based on the specified parameters and input data. The method performs aggregation over component fields and cases fields, applies weights based on masks, drops rows with NaN weights, aggregates with weights, inserts reference variables, sorts columns and rows, rounds values, and inserts units before returning the final cleaned and aggregated DataFrame.

Source code in python/posted/noslag.py
def aggregate(self, override: Optional[dict[str, str]] = None,\n              drop_singular_fields: bool = True,\n              extrapolate_period: bool = True,\n              agg: Optional[str | list[str] | tuple[str]] = None,\n              masks: Optional[list[Mask]] = None,\n              masks_database: bool = True,\n              **field_vals_select) -> pd.DataFrame:\n    '''Aggregates data based on specified parameters, applies masks,\n    and cleans up the resulting DataFrame.\n\n    Parameters\n    ----------\n    override: Optional[dict[str, str]]\n        Dictionary with key, value pairs of variables to override\n    drop_singular_fields: bool, optional\n        If True, drop custom fields with only one value\n    extrapolate_period: bool, optional\n        If True, extrapolate values if no value is given for the requested period\n    agg : Optional[str | list[str] | tuple[str]]\n        Specifies which fields to aggregate over.\n    masks : Optional[list[Mask]]\n        Specifies a list of Mask objects that will be applied to the data during aggregation.\n        These masks can be used to filter or weight the\n        data based on certain conditions defined in the Mask objects.\n    masks_database : bool, optional\n        Determines whether to include masks from databases in the aggregation process.\n        If set to `True`, masks from databases will be included along with any masks provided as function arguments.\n        If set to `False`, only the masks provided as function arguments will be applied.\n\n    Returns\n    -------\n    pd.DataFrame\n        The `aggregate` method returns a pandas DataFrame that has been cleaned up and aggregated based\n        on the specified parameters and input data. The method performs aggregation over component\n        fields and cases fields, applies weights based on masks, drops rows with NaN weights, aggregates\n        with weights, inserts reference variables, sorts columns and rows, rounds values, and inserts\n        units before returning the final cleaned and aggregated DataFrame.\n\n    '''\n\n    # get selection\n    selected, var_units, var_references = self._select(override,\n                                                       extrapolate_period,\n                                                       drop_singular_fields,\n                                                       **field_vals_select)\n\n    # compile masks from databases and function argument into one list\n    if masks is not None and any(not isinstance(m, Mask) for m in masks):\n        raise Exception(\"Function argument 'masks' must contain a list of posted.masking.Mask objects.\")\n    masks = (self._masks if masks_database else []) + (masks or [])\n\n    # aggregation\n    component_fields = [\n        col_id for col_id, field in self._fields.items()\n        if field.field_type == 'component'\n    ]\n    if agg is None:\n        agg = component_fields + ['source']\n    else:\n        if isinstance(agg, tuple):\n            agg = list(agg)\n        elif not isinstance(agg, list):\n            agg = [agg]\n        for a in agg:\n            if not isinstance(a, str):\n                raise Exception(f\"Field ID in argument 'agg' must be a string but found: {a}\")\n            if not any(a == col_id for col_id in self._fields):\n                raise Exception(f\"Field ID in argument 'agg' is not a valid field: {a}\")\n\n    # aggregate over component fields\n    group_cols = [\n        c for c in selected.columns\n        if not (c == 
'value' or (c in agg and c in component_fields))\n    ]\n    aggregated = selected \\\n        .groupby(group_cols, dropna=False) \\\n        .agg({'value': 'sum'}) \\\n        .reset_index()\n\n    # aggregate over cases fields\n    group_cols = [\n        c for c in aggregated.columns\n        if not (c == 'value' or c in agg)\n    ]\n    ret = []\n    for keys, rows in aggregated.groupby(group_cols, dropna=False):\n        # set default weights to 1.0\n        rows = rows.assign(weight=1.0)\n\n        # update weights by applying masks\n        for mask in masks:\n            if mask.matches(rows):\n                rows['weight'] *= mask.get_weights(rows)\n\n        # drop all rows with weights equal to nan\n        rows.dropna(subset='weight', inplace=True)\n\n        if not rows.empty:\n            # aggregate with weights\n            out = rows \\\n                .groupby(group_cols, dropna=False)[['value', 'weight']] \\\n                .apply(lambda cols: pd.Series({\n                    'value': np.average(cols['value'], weights=cols['weight']),\n                }))\n\n            # add to return list\n            ret.append(out)\n    aggregated = pd.concat(ret).reset_index()\n\n    # insert reference variables\n    var_ref_unique = {\n        var_references[var]\n        for var in aggregated['variable'].unique()\n        if not pd.isnull(var_references[var])\n    }\n    agg_append = []\n    for ref_var in var_ref_unique:\n        agg_append.append(pd.DataFrame({\n            'variable': [ref_var],\n            'value': [1.0],\n        } | {\n            col_id: ['*']\n            for col_id, field in self._fields.items() if col_id in aggregated\n        }))\n    if agg_append:\n        agg_append = pd.concat(agg_append).reset_index(drop=True)\n        for col_id, field in self._fields.items():\n            if col_id not in aggregated:\n                continue\n            agg_append = field.select_and_expand(agg_append, col_id, aggregated[col_id].unique().tolist())\n    else:\n        agg_append = None\n\n    # convert return list to dataframe, reset index, and clean up\n    return self._cleanup(pd.concat([aggregated, agg_append]), var_units)\n
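For illustration, a hedged usage sketch of the aggregation behaviour described above (the parent variable is a hypothetical placeholder):

from posted.noslag import DataSet

ds = DataSet('Tech|Electrolysis')  # hypothetical parent variable

# default: aggregate over all component fields and over sources
df_default = ds.aggregate()

# aggregate over sources only, keeping component fields disaggregated
df_by_source = ds.aggregate(agg='source')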
"},{"location":"code/python/noslag/#python.posted.noslag.DataSet.normalise","title":"normalise(override=None, inplace=False)","text":"

normalise data: default reference units, reference value equal to 1.0, default reported units

Parameters:

Name Type Description Default override Optional[dict[str, str]]

Dictionary with key, value pairs of variables to override

None inplace bool

Whether to do the normalisation in place

False

Returns:

Type Description DataFrame

If inplace is False, returns the normalised dataframe

Source code in python/posted/noslag.py
def normalise(self, override: Optional[dict[str, str]] = None, inplace: bool = False) -> pd.DataFrame | None:\n    '''\n    normalise data: default reference units, reference value equal to 1.0, default reported units\n\n    Parameters\n    ----------\n    override: Optional[dict[str,str]], optional\n        Dictionary with key, value pairs of variables to override\n    inplace: bool, optional\n        Whether to do the normalisation in place\n\n    Returns\n    -------\n    pd.DataFrame\n        If inplace is False, returns the normalised dataframe'''\n    normalised, _ = self._normalise(override)\n    if inplace:\n        self._df = normalised\n        return\n    else:\n        return normalised\n
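A short sketch of the two calling conventions (the parent variable is a hypothetical placeholder):

from posted.noslag import DataSet

ds = DataSet('Tech|Electrolysis')  # hypothetical parent variable

normalised = ds.normalise()  # returns a new, normalised dataframe
ds.normalise(inplace=True)   # or overwrites the dataset's internal dataframe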
"},{"location":"code/python/noslag/#python.posted.noslag.DataSet.select","title":"select(override=None, drop_singular_fields=True, extrapolate_period=True, **field_vals_select)","text":"

Select desired data from the dataframe

Parameters:

Name Type Description Default override Optional[dict[str, str]]

Dictionary with key, value pairs of variables to override

None drop_singular_fields bool

If True, drop custom fields with only one value

True extrapolate_period bool

If True, extrapolate values if no value is given for the requested period

True **field_vals_select

IDs of values to select

{}

Returns:

Type Description DataFrame

DataFrame with selected values

Source code in python/posted/noslag.py
def select(self,\n           override: Optional[dict[str, str]] = None,\n           drop_singular_fields: bool = True,\n           extrapolate_period: bool = True,\n           **field_vals_select) -> pd.DataFrame:\n    '''Select desired data from the dataframe\n\n    Parameters\n    ----------\n    override: Optional[dict[str, str]]\n        Dictionary with key, value pairs of variables to override\n    drop_singular_fields: bool, optional\n        If True, drop custom fields with only one value\n    extrapolate_period: bool, optional\n        If True, extrapolate values if no value is given for the requested period\n    **field_vals_select\n        IDs of values to select\n\n    Returns\n    -------\n    pd.DataFrame\n        DataFrame with selected values\n        '''\n    selected, var_units, var_references = self._select(\n        override,\n        drop_singular_fields,\n        extrapolate_period,\n        **field_vals_select,\n    )\n    selected.insert(selected.columns.tolist().index('variable'), 'reference_variable', np.nan)\n    selected['reference_variable'] = selected['variable'].map(var_references)\n    return self._cleanup(selected, var_units)\n
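A usage sketch; the field ID 'period' and its value are illustrative assumptions, as the valid field IDs depend on the fields defined for the parent variable:

from posted.noslag import DataSet

ds = DataSet('Tech|Electrolysis')  # hypothetical parent variable
df = ds.select(period=2030, extrapolate_period=True)  # 'period=2030' is illustrative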
"},{"location":"code/python/noslag/#python.posted.noslag.HarmoniseMappingFailure","title":"HarmoniseMappingFailure","text":"

Bases: Warning

Warning raised for rows in TEDataSets where mappings fail.

Parameters:

Name Type Description Default row_data DataFrame

Contains the data on the rows to map

required message str

Contains the message of the failure

'Failure when selecting from dataset.'

Attributes:

Name Type Description row_data DataFrame

the data of the row that causes the failure

message str

explanation of the error

Source code in python/posted/noslag.py
class HarmoniseMappingFailure(Warning):\n    \"\"\"Warning raised for rows in TEDataSets where mappings fail.\n\n    Parameters\n    ----------\n    row_data: pd.DataFrame\n        Contains the Data on the rows to map\n    message: str, optional\n        Contains the message of the failure\n\n    Attributes\n    ----------\n    row_data\n        the data of the row that causes the failure\n    message\n        explanation of the error\n    \"\"\"\n    def __init__(self, row_data: pd.DataFrame, message: str = \"Failure when selecting from dataset.\"):\n        '''Save constructor arguments as public fields, compose warning message, call super constructor'''\n        # save constructor arguments as public fields\n        self.row_data: pd.DataFrame = row_data\n        self.message: str = message\n\n        # compose warning message\n        warning_message: str = message + f\"\\n{row_data}\"\n\n        # call super constructor\n        super().__init__(warning_message)\n
"},{"location":"code/python/noslag/#python.posted.noslag.HarmoniseMappingFailure.__init__","title":"__init__(row_data, message='Failure when selecting from dataset.')","text":"

Save constructor arguments as public fields, compose warning message, call super constructor

Source code in python/posted/noslag.py
def __init__(self, row_data: pd.DataFrame, message: str = \"Failure when selecting from dataset.\"):\n    '''Save constructor arguments as public fields, compose warning message, call super constructor'''\n    # save constructor arguments as public fields\n    self.row_data: pd.DataFrame = row_data\n    self.message: str = message\n\n    # compose warning message\n    warning_message: str = message + f\"\\n{row_data}\"\n\n    # call super constructor\n    super().__init__(warning_message)\n
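A minimal sketch of how this warning is typically emitted; the row data shown is purely illustrative:

import warnings
import pandas as pd
from posted.noslag import HarmoniseMappingFailure

row_data = pd.DataFrame({'variable': ['Input|Electricity'], 'value': [1.23]})  # illustrative rows
warnings.warn(HarmoniseMappingFailure(row_data, 'No harmonisation mapping found.'))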
"},{"location":"code/python/noslag/#python.posted.noslag.collect_files","title":"collect_files(parent_variable, include_databases=None)","text":"

Takes a parent variable and optional list of databases to include, checks for their existence, and collects files and directories based on the parent variable.

Parameters:

Name Type Description Default parent_variable str

Variable to collect files on

required include_databases Optional[list[str]]

List of Database IDs to collect files from

None

Returns:

Type Description list[tuple]
List of tuples containing the parent variable and the database ID for each file found in the specified directories.

Source code in python/posted/noslag.py
def collect_files(parent_variable: str, include_databases: Optional[list[str]] = None):\n    '''Takes a parent variable and optional list of databases to include,\n    checks for their existence, and collects files and directories based on the parent variable.\n\n    Parameters\n    ----------\n    parent_variable : str\n        Variable to collect files on\n    include_databases : Optional[list[str]]\n        List of database IDs to collect files from\n\n    Returns\n    -------\n        list[tuple]\n            List of tuples containing the parent variable and the\n            database ID for each file found in the specified directories.\n\n    '''\n    if not parent_variable:\n        raise Exception('Variable may not be empty.')\n\n    # check that the requested database to include can be found\n    if include_databases is not None:\n        for database_id in include_databases:\n            if not (database_id in databases and databases[database_id].exists()):\n                raise Exception(f\"Could not find database '{database_id}'.\")\n\n    ret = []\n    for database_id, database_path in databases.items():\n        # skip ted paths not requested to include\n        if include_databases is not None and database_id not in include_databases: continue\n\n        # find top-level file and directory\n        top_path = '/'.join(parent_variable.split('|'))\n        top_file = database_path / 'tedfs' / (top_path + '.csv')\n        top_directory = database_path / 'tedfs' / top_path\n\n        # add top-level file if it exists\n        if top_file.exists() and top_file.is_file():\n            ret.append((parent_variable, database_id))\n\n        # add all files contained in top-level directory\n        if top_directory.exists() and top_directory.is_dir():\n            for sub_file in top_directory.rglob('*.csv'):\n                sub_variable = parent_variable + '|' + sub_file.relative_to(top_directory).name.rstrip('.csv')\n                ret.append((sub_variable, database_id))\n\n        # loop over levels\n        levels = parent_variable.split('|')\n        for l in range(0, len(levels)):\n            # find top-level file and directory\n            top_path = '/'.join(levels[:l])\n            parent_file = database_path / 'tedfs' / (top_path + '.csv')\n\n            # add parent file if it exists\n            if parent_file.exists() and parent_file.is_file():\n                parent_variable = '|'.join(levels[:l])\n                ret.append((parent_variable, database_id))\n\n    return ret\n
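A usage sketch (the parent variable is a hypothetical placeholder; 'public' is the default database ID used elsewhere in POSTED):

from posted.noslag import collect_files

files = collect_files('Tech|Electrolysis', include_databases=['public'])
for variable, database_id in files:
    print(variable, database_id)  # one tuple per TEDF found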
"},{"location":"code/python/noslag/#python.posted.noslag.combine_units","title":"combine_units(numerator, denominator)","text":"

Combine fraction of two units into updated unit string

Parameters:

Name Type Description Default numerator str

numerator of the fraction

required denominator str

denominator of the fraction

required

Returns:

Type Description str

updated unit string after simplification

Source code in python/posted/noslag.py
def combine_units(numerator: str, denominator: str):\n    '''Combine fraction of two units into updated unit string\n\n    Parameters\n    ----------\n    numerator: str\n        numerator of the fraction\n    denominator: str\n        denominator of the fraction\n\n    Returns\n    -------\n        str\n            updated unit string after simplification\n    '''\n\n    ret = ureg(f\"{numerator}/({denominator})\").u\n    # check if ret is dimensionless; if not return ret, else return the explicit quotient\n    if not ret.dimensionless:\n        return str(ret)\n    else:\n        return (f\"{numerator}/({denominator})\"\n                if '/' in denominator else\n                f\"{numerator}/{denominator}\")\n
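Two illustrative calls, assuming both unit strings are known to POSTED's unit registry:

from posted.noslag import combine_units

combine_units('kWh', 'kg')   # dimensionful quotient, returned in simplified pint notation
combine_units('MWh', 'MWh')  # dimensionless quotient, returned verbatim as 'MWh/MWh'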
"},{"location":"code/python/noslag/#python.posted.noslag.normalise_units","title":"normalise_units(df, level, var_units, var_flow_ids)","text":"

Takes a DataFrame with reported or reference data, along with dictionaries mapping variable units and flow IDs, and normalizes the units of the variables in the DataFrame based on the provided mappings.

Parameters:

Name Type Description Default df DataFrame

Dataframe to be normalised

required level Literal['reported', 'reference']

Specifies whether the data should be normalised on the reported or reference values

required var_units dict[str, str]

Dictionary that maps a combination of parent variable and variable to its corresponding unit. The keys in the dictionary are in the format \"{parent_variable}|{variable}\", and the values are the units associated with that variable.

required var_flow_ids dict[str, str]

Dictionary that maps a combination of parent variable and variable to a specific flow ID. This flow ID is used for unit conversion in the normalise_units function.

required

Returns:

Type Description pd.DataFrame

Normalised dataframe

Source code in python/posted/noslag.py
def normalise_units(df: pd.DataFrame, level: Literal['reported', 'reference'], var_units: dict[str, str],\n                       var_flow_ids: dict[str, str]):\n    '''\n    Takes a DataFrame with reported or reference data, along with\n    dictionaries mapping variable units and flow IDs, and normalizes the units of the variables in the\n    DataFrame based on the provided mappings.\n\n    Parameters\n    ----------\n    df : pd.DataFrame\n        Dataframe to be normalised\n    level : Literal['reported', 'reference']\n        Specifies whether the data should be normalised on the reported or reference values\n    var_units : dict[str, str]\n        Dictionary that maps a combination of parent variable and variable\n        to its corresponding unit. The keys in the dictionary are in the format \"{parent_variable}|{variable}\",\n        and the values are the units associated with that variable.\n    var_flow_ids : dict[str, str]\n        Dictionary that maps a combination of parent variable and variable to a\n        specific flow ID. This flow ID is used for unit conversion in the `normalise_units` function.\n\n    Returns\n    -------\n        pd.DataFrame\n            Normalised dataframe\n\n    '''\n\n    prefix = '' if level == 'reported' else 'reference_'\n    var_col_id = prefix + 'variable'\n    value_col_id = prefix + 'value'\n    unit_col_id = prefix + 'unit'\n    df_tmp = pd.concat([\n        df,\n        df.apply(\n            lambda row: var_units[f\"{row['parent_variable']}|{row[var_col_id]}\"]\n            if isinstance(row[var_col_id], str) else np.nan,\n            axis=1,\n        )\n        .to_frame('target_unit'),\n        df.apply(\n            lambda row: var_flow_ids[f\"{row['parent_variable']}|{row[var_col_id]}\"]\n            if isinstance(row[var_col_id], str) else np.nan,\n            axis=1,\n        )\n        .to_frame('target_flow_id'),\n    ], axis=1)\n\n    # Apply unit conversion\n    conv_factor = df_tmp.apply(\n        lambda row: unit_convert(row[unit_col_id], row['target_unit'], row['target_flow_id'])\n        if not np.isnan(row[value_col_id]) else 1.0,\n        axis=1,\n    )\n\n    # Update value column with conversion factor\n    df_tmp[value_col_id] *= conv_factor\n\n    # If level is 'reported', update uncertainty column with conversion factor\n    if level == 'reported':\n        df_tmp['uncertainty'] *= conv_factor\n\n    # Update unit columns\n    df_tmp[unit_col_id] = df_tmp['target_unit']\n\n    # Drop unnecessary columns and return\n    return df_tmp.drop(columns=['target_unit', 'target_flow_id'])\n
"},{"location":"code/python/noslag/#python.posted.noslag.normalise_values","title":"normalise_values(df)","text":"

Takes a DataFrame as input, normalizes the 'value' and 'uncertainty' columns by the reference value, and updates the 'reference_value' column accordingly.

Parameters:

Name Type Description Default df DataFrame

Dataframe to be normalised

required

Returns:

Type Description pd.DataFrame

Returns a modified DataFrame where the 'value' column has been divided by the 'reference_value' column (or 1.0 if 'reference_value' is null), the 'uncertainty' column has been divided by the 'reference_value' column, and the 'reference_value' column has been replaced with 1.0 if it was not null, otherwise left as NaN.

Source code in python/posted/noslag.py
def normalise_values(df: pd.DataFrame):\n    '''Takes a DataFrame as input, normalizes the 'value' and 'uncertainty'\n    columns by the reference value, and updates the 'reference_value' column accordingly.\n\n    Parameters\n    ----------\n    df : pd.DataFrame\n        Dataframe to be normalised\n\n    Returns\n    -------\n        pd.DataFrame\n            Returns a modified DataFrame where the 'value' column has been\n            divided by the 'reference_value' column (or 1.0 if 'reference_value' is null), the 'uncertainty'\n            column has been divided by the 'reference_value' column, and the 'reference_value' column has been\n            replaced with 1.0 if it was not null, otherwise left as NaN.\n\n    '''\n    # Calculate reference value\n    reference_value = df.apply(\n        lambda row:\n            row['reference_value']\n            if not pd.isnull(row['reference_value']) else\n            1.0,\n        axis=1,\n    )\n    # Calculate new value, reference value and uncertainty\n    value_new = df['value'] / reference_value\n    uncertainty_new = df['uncertainty'] / reference_value\n    reference_value_new = df.apply(\n        lambda row:\n            1.0\n            if not pd.isnull(row['reference_value']) else\n            np.nan,\n        axis=1,\n    )\n    # Assign new values to dataframe and return\n    return df.assign(value=value_new, uncertainty=uncertainty_new, reference_value=reference_value_new)\n
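A self-contained sketch of the normalisation arithmetic on a toy dataframe:

import numpy as np
import pandas as pd
from posted.noslag import normalise_values

df = pd.DataFrame({
    'value': [20.0, 5.0],
    'uncertainty': [2.0, np.nan],
    'reference_value': [4.0, np.nan],
})
normalise_values(df)
# value -> [5.0, 5.0], uncertainty -> [0.5, NaN], reference_value -> [1.0, NaN]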
"},{"location":"code/python/read/","title":"read","text":""},{"location":"code/python/read/#python.posted.read.read_csv_file","title":"read_csv_file(fpath)","text":"

Read CSV data file

Parameters:

Name Type Description Default fpath str

Path of the file to read

required

Returns:

Type Description pd.DataFrame

DataFrame containing the data of the CSV

Source code in python/posted/read.py
def read_csv_file(fpath: str):\n    \"\"\"\n    Read CSV data file\n\n    Parameters\n    ----------\n    fpath: str\n        Path of the file to read\n    Returns\n    -------\n        pd.DataFrame\n            DataFrame containing the data of the CSV\n    \"\"\"\n    return pd.read_csv(fpath)\n
"},{"location":"code/python/read/#python.posted.read.read_yml_file","title":"read_yml_file(fpath)","text":"

Read YAML config file

Parameters:

Name Type Description Default fpath Path

Path of the file to read

required

Returns:

Type Description dict

Dictionary containing config info

Source code in python/posted/read.py
def read_yml_file(fpath: Path):\n    \"\"\"\n    Read YAML config file\n\n    Parameters\n    ----------\n    fpath: Path\n        Path of the file to read\n    Returns\n    -------\n        dict\n            Dictionary containing config info\n    \"\"\"\n    fhandle = open(fpath, 'r', encoding='utf-8')\n    ret = yaml.load(stream=fhandle, Loader=yaml.FullLoader)\n    fhandle.close()\n    return ret\n
"},{"location":"code/python/sources/","title":"sources","text":""},{"location":"code/python/sources/#python.posted.sources.dump_sources","title":"dump_sources(file_path)","text":"

Parses BibTeX files, formats the data, and exports it into a CSV or Excel file using pandas.

Parameters:

Name Type Description Default file_path str | Path

Path to the file where the formatted sources should be exported to. It can be either a string representing the file path or a Path object from the pathlib module.

required Source code in python/posted/sources.py
def dump_sources(file_path: str | Path):\n    '''Parses BibTeX files, formats the data, and exports it into a CSV or Excel\n    file using pandas.\n\n    Parameters\n    ----------\n    file_path : str | Path\n        Path to the file where the formatted sources should be exported to.\n         It can be either a string representing the file path or a `Path` object\n        from the `pathlib` module.\n\n    '''\n    # convert string to pathlib.Path if necessary\n    if isinstance(file_path, str):\n        file_path = Path(file_path)\n\n    # define styles and formats\n    style = find_plugin('pybtex.style.formatting', 'apa')()\n    # format_html = find_plugin('pybtex.backends', 'html')()\n    format_plain = find_plugin('pybtex.backends', 'plaintext')()\n\n    # parse bibtex file\n    parser = bibtex.Parser()\n\n    # loop over databases\n    formatted = []\n    for database_path in databases.values():\n        bib_data = parser.parse_file(database_path / 'sources.bib')\n        formatted += format_sources(bib_data, style, format_plain)\n\n    # convert to dataframe\n    df = pd.DataFrame.from_records(formatted)\n\n    # dump dataframe with pandas to CSV or Excel spreadsheet\n    if file_path.suffix == '.csv':\n        df.to_csv(Path(file_path))\n    elif file_path.suffix in ['.xls', '.xlsx']:\n        df.to_excel(Path(file_path))\n    else:\n        raise Exception('Unknown file suffix!')\n
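A minimal sketch; the output file names are illustrative, and the file suffix decides the export format:

from posted.sources import dump_sources

dump_sources('sources.csv')   # export as CSV
dump_sources('sources.xlsx')  # or as an Excel spreadsheet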
"},{"location":"code/python/sources/#python.posted.sources.format_sources","title":"format_sources(bib_data, style, form, exclude_fields=None)","text":"

Takes bibliographic data, a citation style, a citation form, and optional excluded fields, and returns a formatted list of sources based on the specified style and form.

Parameters:

Name Type Description Default bib_data

Contains bibliographic information, such as author, title, references or citations.

required style

Specifies the formatting style for the bibliography entries.

required form

Specifies the format in which the citation should be rendered. It determines how the citation information will be displayed or structured in the final output.

required exclude_fields

Specifies a list of fields that should be excluded from the final output. These fields will be removed from the entries before formatting and returning the citation data.

None

Returns:

Type Description list[dict]

A list of dictionaries containing the identifier, citation, DOI, and URL information for each entry in the bibliography data, formatted according to the specified style and form, with any excluded fields removed.

Source code in python/posted/sources.py
def format_sources(bib_data, style, form, exclude_fields = None):\n    '''\n    Takes bibliographic data, a citation style, a citation form, and\n    optional excluded fields, and returns a formatted list of sources based on the specified style and\n    form.\n\n    Parameters\n    ----------\n    bib_data\n        Contains bibliographic information, such as author, title, references or citations.\n    style\n        Specifies the formatting style for the bibliography entries.\n    form\n        Specifies the format in which the citation should be rendered. It determines how the citation information will be displayed or\n        structured in the final output.\n    exclude_fields\n        Specifies a list of fields that should be excluded from the final output. These fields will be removed from the entries before\n        formatting and returning the citation data.\n\n    Returns\n    -------\n        list[dict]\n            A list of dictionaries containing the identifier, citation, DOI, and URL information for each entry\n            in the bibliography data, formatted according to the specified style and form, with any excluded\n            fields removed.\n\n    '''\n    exclude_fields = exclude_fields or []\n\n    if exclude_fields:\n        for entry in bib_data.entries.values():\n            for ef in exclude_fields:\n                if ef in entry.fields.__dict__['_dict']:\n                    del entry.fields.__dict__['_dict'][ef]\n\n    ret = []\n    for identifier in bib_data.entries:\n        entry = bib_data.entries[identifier]\n        fields = entry.fields.__dict__['_dict']\n        ret.append({\n            'identifier': identifier,\n            'citation': next(style.format_entries([entry])).text.render(form),\n            'doi': fields['doi'] if 'doi' in fields else '',\n            'url': fields['url'] if 'url' in fields else '',\n        })\n\n    return ret\n
"},{"location":"code/python/tedf/","title":"tedf","text":""},{"location":"code/python/tedf/#python.posted.tedf.TEBase","title":"TEBase","text":"

Base Class for Technoeconomic Data

Parameters:

Name Type Description Default parent_variable str

Variable from which Data should be collected

required Source code in python/posted/tedf.py
class TEBase:\n    \"\"\"\n    Base Class for Technoeconomic Data\n\n    Parameters\n    ----------\n    parent_variable: str\n        Variable from which Data should be collected\n    \"\"\"\n    # initialise\n    def __init__(self, parent_variable: str):\n        \"\"\" Set parent variable and technology specifications (var_specs) from input\"\"\"\n        self._parent_variable: str = parent_variable\n        self._var_specs: dict = {key: val for key, val in variables.items() if key.startswith(self._parent_variable)}\n\n    @property\n    def parent_variable(self) -> str:\n        \"\"\" Get parent variable\"\"\"\n        return self._parent_variable\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEBase.parent_variable","title":"parent_variable: str property","text":"

Get parent variable

"},{"location":"code/python/tedf/#python.posted.tedf.TEBase.__init__","title":"__init__(parent_variable)","text":"

Set parent variable and technology specifications (var_specs) from input

Source code in python/posted/tedf.py
def __init__(self, parent_variable: str):\n    \"\"\" Set parent variable and technology specifications (var_specs) from input\"\"\"\n    self._parent_variable: str = parent_variable\n    self._var_specs: dict = {key: val for key, val in variables.items() if key.startswith(self._parent_variable)}\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF","title":"TEDF","text":"

Bases: TEBase

Class to store Technoeconomic DataFiles

Parameters:

Name Type Description Default parent_variable str

Variable from which Data should be collected

required database_id str

Database from which to load data

'public' file_path Optional[Path]

File Path from which to load file

None data Optional[DataFrame]

Specific Technoeconomic data

None

Methods:

Name Description load

Load TEDataFile if it has not been read yet

read

Read TEDF from CSV file

write

Write TEDF to CSV file

check

Check if TEDF is consistent

check_row

Check that row in TEDF is consistent and return all inconsistencies found for row

Source code in python/posted/tedf.py
class TEDF(TEBase):\n    \"\"\"\n    Class to store Technoeconomic DataFiles\n\n    Parameters\n    ----------\n    parent_variable: str\n        Variable from which Data should be collected\n    database_id: str, default: public\n        Database from which to load data\n    file_path: Path, optional\n        File Path from which to load file\n    data: pd.DataFrame, optional\n        Specific Technoeconomic data\n\n    Methods\n    ----------\n    load\n        Load TEDataFile if it has not been read yet\n    read\n        Read TEDF from CSV file\n    write\n        Write TEDF to CSV file\n    check\n        Check if TEDF is consistent\n    check_row\n        Check that row in TEDF is consistent and return all inconsistencies found for row\n    \"\"\"\n\n    # typed declarations\n    _df: None | pd.DataFrame\n    _inconsistencies: dict\n    _file_path: None | Path\n    _fields: dict[str, AbstractFieldDefinition]\n    _columns: dict[str, AbstractColumnDefinition]\n\n\n    def __init__(self,\n                 parent_variable: str,\n                 database_id: str = 'public',\n                 file_path: Optional[Path] = None,\n                 data: Optional[pd.DataFrame] = None,\n                 ):\n        \"\"\" Initialise parent class and object fields\"\"\"\n        TEBase.__init__(self, parent_variable)\n\n        self._df = data\n        self._inconsistencies = {}\n        self._file_path = (\n            None if data is not None else\n            file_path if file_path is not None else\n            databases[database_id] / 'tedfs' / ('/'.join(self._parent_variable.split('|')) + '.csv')\n        )\n        self._fields, comments = read_fields(self._parent_variable)\n        self._columns = self._fields | base_columns | comments\n\n    @property\n    def file_path(self) -> Path:\n        \"\"\" Get or set the file path\"\"\"\n        return self._file_path\n\n    @file_path.setter\n    def file_path(self, file_path: Path):\n        self._file_path = file_path\n\n\n    def load(self):\n        \"\"\"\n        load TEDataFile (only if it has not been read yet)\n\n        Warns\n        ----------\n        warning\n            Warns if TEDF is already loaded\n        Returns\n        --------\n            TEDF\n                Returns the TEDF object it is called on\n        \"\"\"\n        if self._df is None:\n            self.read()\n        else:\n            warnings.warn('TEDF is already loaded. 
Please execute .read() if you want to load from file again.')\n\n        return self\n\n    def read(self):\n        \"\"\"\n        read TEDF from CSV file\n\n        Raises\n        ------\n        Exception\n            If there is no file path from which to read\n        \"\"\"\n\n        if self._file_path is None:\n            raise Exception('Cannot read from file, as this TEDF object has been created from a dataframe.')\n\n        # read CSV file\n        self._df = pd.read_csv(\n            self._file_path,\n            sep=',',\n            quotechar='\"',\n            encoding='utf-8',\n        )\n\n        # check column IDs match base columns and fields\n        if not all(c in self._columns for c in self._df.columns):\n            raise Exception(f\"Column IDs used in CSV file do not match columns definition: {self._df.columns.tolist()}\")\n\n        # adjust row index to start at 1 instead of 0\n        self._df.index += 1\n\n        # insert missing columns and reorder via reindexing, then update dtypes\n        df_new = self._df.reindex(columns=list(self._columns.keys()))\n        for col_id, col in self._columns.items():\n            if col_id in self._df:\n                continue\n            df_new[col_id] = df_new[col_id].astype(col.dtype)\n            df_new[col_id] = col.default\n        self._df = df_new\n\n    def write(self):\n        \"\"\"\n        Write TEDF to CSV file\n\n        Raises\n        ------\n        Exception\n            If there is no file path that specifies where to write\n        \"\"\"\n        if self._file_path is None:\n            raise Exception('Cannot write to file, as this TEDataFile object has been created from a dataframe. Please '\n                            'first set a file path on this object.')\n\n        self._df.to_csv(\n            self._file_path,\n            index=False,\n            sep=',',\n            quotechar='\"',\n            encoding='utf-8',\n            na_rep='',\n        )\n\n\n    @property\n    def data(self) -> pd.DataFrame:\n        \"\"\"Get data, i.e. 
access dataframe\"\"\"\n        return self._df\n\n    @property\n    def inconsistencies(self) -> dict[int, TEDFInconsistencyException]:\n        \"\"\"Get inconsistencies\"\"\"\n        return self._inconsistencies\n\n    def check(self, raise_exception: bool = True):\n        \"\"\"\n        Check that TEDF is consistent and add inconsistencies to internal parameter\n\n        Parameters\n        ----------\n        raise_exception: bool, default: True\n            If exception is to be raised\n        \"\"\"\n        self._inconsistencies = {}\n\n        # check row consistency for each row individually\n        for row_id in self._df.index:\n            self._inconsistencies[row_id] = self.check_row(row_id, raise_exception=raise_exception)\n\n    def check_row(self, row_id: int, raise_exception: bool) -> list[TEDFInconsistencyException]:\n        \"\"\"\n        Check that row in TEDF is consistent and return all inconsistencies found for row\n\n        Parameters\n        ----------\n        row_id: int\n            Index of the row to check\n        raise_exception: bool\n            If exception is to be raised\n\n        Returns\n        -------\n            list\n                List of inconsistencies\n        \"\"\"\n        row = self._df.loc[row_id]\n        ikwargs = {'row_id': row_id, 'file_path': self._file_path, 'raise_exception': raise_exception}\n        ret = []\n\n        # check whether fields are among those defined in the technology specs\n        for col_id, col in self._columns.items():\n            cell = row[col_id]\n            if col.col_type == 'variable':\n                cell = cell if pd.isnull(cell) else self.parent_variable + '|' + cell\n            if not col.is_allowed(cell):\n                ret.append(new_inconsistency(\n                    message=f\"Invalid cell for column of type '{col.col_type}': {cell}\", col_id=col_id, **ikwargs,\n                ))\n\n        # check that reported and reference units match variable definition\n        for col_prefix in ['', 'reference_']:\n            raw_variable = row[col_prefix + 'variable']\n            col_id = col_prefix + 'unit'\n            unit = row[col_id]\n            if pd.isnull(raw_variable) and pd.isnull(unit):\n                continue\n            if pd.isnull(raw_variable) or pd.isnull(unit):\n                ret.append(new_inconsistency(\n                    message=f\"Variable and unit must either both be set or both be unset': {raw_variable} -- {unit}\",\n                    col_id=col_id, **ikwargs,\n                ))\n            variable = self.parent_variable + '|' + raw_variable\n            var_specs = variables[variable]\n            if 'dimension' not in var_specs:\n                if unit is not np.nan:\n                    ret.append(new_inconsistency(\n                        message=f\"Unexpected unit '{unit}' for {col_id}.\", col_id=col_id, **ikwargs,\n                    ))\n                continue\n            dimension = var_specs['dimension']\n\n            flow_id = var_specs['flow_id'] if 'flow_id' in var_specs else None\n            allowed, message = unit_allowed(unit=unit, flow_id=flow_id, dimension=dimension)\n            if not allowed:\n                ret.append(new_inconsistency(message=message, col_id=col_id, **ikwargs))\n\n        return ret\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.data","title":"data: pd.DataFrame property","text":"

Get data, i.e. access dataframe

"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.file_path","title":"file_path: Path property writable","text":"

Get or set the file path

"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.inconsistencies","title":"inconsistencies: dict[int, TEDFInconsistencyException] property","text":"

Get inconsistencies

"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.__init__","title":"__init__(parent_variable, database_id='public', file_path=None, data=None)","text":"

Initialise parent class and object fields

Source code in python/posted/tedf.py
def __init__(self,\n             parent_variable: str,\n             database_id: str = 'public',\n             file_path: Optional[Path] = None,\n             data: Optional[pd.DataFrame] = None,\n             ):\n    \"\"\" Initialise parent class and object fields\"\"\"\n    TEBase.__init__(self, parent_variable)\n\n    self._df = data\n    self._inconsistencies = {}\n    self._file_path = (\n        None if data is not None else\n        file_path if file_path is not None else\n        databases[database_id] / 'tedfs' / ('/'.join(self._parent_variable.split('|')) + '.csv')\n    )\n    self._fields, comments = read_fields(self._parent_variable)\n    self._columns = self._fields | base_columns | comments\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.check","title":"check(raise_exception=True)","text":"

Check that TEDF is consistent and add inconsistencies to internal parameter

Parameters:

Name Type Description Default raise_exception bool

If exception is to be raised

True Source code in python/posted/tedf.py
def check(self, raise_exception: bool = True):\n    \"\"\"\n    Check that TEDF is consistent and add inconsistencies to internal parameter\n\n    Parameters\n    ----------\n    raise_exception: bool, default: True\n        If exception is to be raised\n    \"\"\"\n    self._inconsistencies = {}\n\n    # check row consistency for each row individually\n    for row_id in self._df.index:\n        self._inconsistencies[row_id] = self.check_row(row_id, raise_exception=raise_exception)\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.check_row","title":"check_row(row_id, raise_exception)","text":"

Check that row in TEDF is consistent and return all inconsistencies found for row

Parameters:

Name Type Description Default row_id int

Index of the row to check

required raise_exception bool

If exception is to be raised

required

Returns:

Type Description list

List of inconsistencies

Source code in python/posted/tedf.py
def check_row(self, row_id: int, raise_exception: bool) -> list[TEDFInconsistencyException]:\n    \"\"\"\n    Check that row in TEDF is consistent and return all inconsistencies found for row\n\n    Parameters\n    ----------\n    row_id: int\n        Index of the row to check\n    raise_exception: bool\n        If exception is to be raised\n\n    Returns\n    -------\n        list\n            List of inconsistencies\n    \"\"\"\n    row = self._df.loc[row_id]\n    ikwargs = {'row_id': row_id, 'file_path': self._file_path, 'raise_exception': raise_exception}\n    ret = []\n\n    # check whether fields are among those defined in the technology specs\n    for col_id, col in self._columns.items():\n        cell = row[col_id]\n        if col.col_type == 'variable':\n            cell = cell if pd.isnull(cell) else self.parent_variable + '|' + cell\n        if not col.is_allowed(cell):\n            ret.append(new_inconsistency(\n                message=f\"Invalid cell for column of type '{col.col_type}': {cell}\", col_id=col_id, **ikwargs,\n            ))\n\n    # check that reported and reference units match variable definition\n    for col_prefix in ['', 'reference_']:\n        raw_variable = row[col_prefix + 'variable']\n        col_id = col_prefix + 'unit'\n        unit = row[col_id]\n        if pd.isnull(raw_variable) and pd.isnull(unit):\n            continue\n        if pd.isnull(raw_variable) or pd.isnull(unit):\n            ret.append(new_inconsistency(\n                message=f\"Variable and unit must either both be set or both be unset': {raw_variable} -- {unit}\",\n                col_id=col_id, **ikwargs,\n            ))\n        variable = self.parent_variable + '|' + raw_variable\n        var_specs = variables[variable]\n        if 'dimension' not in var_specs:\n            if unit is not np.nan:\n                ret.append(new_inconsistency(\n                    message=f\"Unexpected unit '{unit}' for {col_id}.\", col_id=col_id, **ikwargs,\n                ))\n            continue\n        dimension = var_specs['dimension']\n\n        flow_id = var_specs['flow_id'] if 'flow_id' in var_specs else None\n        allowed, message = unit_allowed(unit=unit, flow_id=flow_id, dimension=dimension)\n        if not allowed:\n            ret.append(new_inconsistency(message=message, col_id=col_id, **ikwargs))\n\n    return ret\n
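A sketch of collecting inconsistencies instead of raising them (hypothetical parent variable):

from posted.tedf import TEDF

tedf = TEDF('Tech|Electrolysis').load()  # hypothetical parent variable
tedf.check(raise_exception=False)        # record inconsistencies instead of raising
for row_id, problems in tedf.inconsistencies.items():
    for problem in problems:
        print(problem)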
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.load","title":"load()","text":"

load TEDataFile (only if it has not been read yet)

Warns:

Type Description warning

Warns if TEDF is already loaded

Returns:

Type Description TEDF

Returns the TEDF object it is called on

Source code in python/posted/tedf.py
def load(self):\n    \"\"\"\n    load TEDataFile (only if it has not been read yet)\n\n    Warns\n    ----------\n    warning\n        Warns if TEDF is already loaded\n    Returns\n    --------\n        TEDF\n            Returns the TEDF object it is called on\n    \"\"\"\n    if self._df is None:\n        self.read()\n    else:\n        warnings.warn('TEDF is already loaded. Please execute .read() if you want to load from file again.')\n\n    return self\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.read","title":"read()","text":"

read TEDF from CSV file

Raises:

Type Description Exception

If there is no file path from which to read

Source code in python/posted/tedf.py
def read(self):\n    \"\"\"\n    read TEDF from CSV file\n\n    Raises\n    ------\n    Exception\n        If there is no file path from which to read\n    \"\"\"\n\n    if self._file_path is None:\n        raise Exception('Cannot read from file, as this TEDF object has been created from a dataframe.')\n\n    # read CSV file\n    self._df = pd.read_csv(\n        self._file_path,\n        sep=',',\n        quotechar='\"',\n        encoding='utf-8',\n    )\n\n    # check column IDs match base columns and fields\n    if not all(c in self._columns for c in self._df.columns):\n        raise Exception(f\"Column IDs used in CSV file do not match columns definition: {self._df.columns.tolist()}\")\n\n    # adjust row index to start at 1 instead of 0\n    self._df.index += 1\n\n    # insert missing columns and reorder via reindexing, then update dtypes\n    df_new = self._df.reindex(columns=list(self._columns.keys()))\n    for col_id, col in self._columns.items():\n        if col_id in self._df:\n            continue\n        df_new[col_id] = df_new[col_id].astype(col.dtype)\n        df_new[col_id] = col.default\n    self._df = df_new\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDF.write","title":"write()","text":"

Write TEDF to CSV file

Raises:

Type Description Exception

If there is no file path that specifies where to write

Source code in python/posted/tedf.py
def write(self):\n    \"\"\"\n    Write TEDF to CSV file\n\n    Raises\n    ------\n    Exception\n        If there is no file path that specifies where to write\n    \"\"\"\n    if self._file_path is None:\n        raise Exception('Cannot write to file, as this TEDataFile object has been created from a dataframe. Please '\n                        'first set a file path on this object.')\n\n    self._df.to_csv(\n        self._file_path,\n        index=False,\n        sep=',',\n        quotechar='\"',\n        encoding='utf-8',\n        na_rep='',\n    )\n
"},{"location":"code/python/tedf/#python.posted.tedf.TEDFInconsistencyException","title":"TEDFInconsistencyException","text":"

Bases: Exception

Exception raised for inconsistencies in TEDFs.

Attributes: message -- message explaining the inconsistency row_id -- row where the inconsistency occurs col_id -- column where the inconsistency occurs file_path -- path to the file where the inconsistency occurs

Source code in python/posted/tedf.py
class TEDFInconsistencyException(Exception):\n    \"\"\"Exception raised for inconsistencies in TEDFs.\n\n    Attributes:\n        message -- message explaining the inconsistency\n        row_id -- row where the inconsistency occurs\n        col_id -- column where the inconsistency occurs\n        file_path -- path to the file where the inconsistency occurs\n    \"\"\"\n    def __init__(self, message: str = \"Inconsistency detected\", row_id: None | int = None,\n                 col_id: None | str = None, file_path: None | Path = None):\n        self.message: str = message\n        self.row_id: None | int = row_id\n        self.col_id: None | str = col_id\n        self.file_path: None | Path = file_path\n\n        # add tokens at the end of the error message\n        message_tokens = []\n        if file_path is not None:\n            message_tokens.append(f\"file \\\"{file_path}\\\"\")\n        if row_id is not None:\n            message_tokens.append(f\"line {row_id}\")\n        if col_id is not None:\n            message_tokens.append(f\"in column \\\"{col_id}\\\"\")\n\n        # compose error message from tokens\n        exception_message: str = message\n        if message_tokens:\n            exception_message += f\"\\n    \" + (\", \".join(message_tokens)).capitalize()\n\n        super().__init__(exception_message)\n
"},{"location":"code/python/tedf/#python.posted.tedf.new_inconsistency","title":"new_inconsistency(raise_exception, **kwargs)","text":"

Create new inconsistency object based on kwargs

Source code in python/posted/tedf.py
def new_inconsistency(raise_exception: bool, **kwargs) -> TEDFInconsistencyException:\n    \"\"\"\n    Create new inconsistency object based on kwargs\n\n    Parameters\n    ----------\n    raise_exception: bool\n        If True, the inconsistency is raised as an exception; otherwise it is returned\n    **kwargs\n        Keyword arguments passed on to the TEDFInconsistencyException constructor\n    \"\"\"\n    exception = TEDFInconsistencyException(**kwargs)\n    if raise_exception:\n        raise exception\n    else:\n        return exception\n
"},{"location":"code/python/units/","title":"units","text":""},{"location":"code/python/units/#python.posted.units.ctx_kwargs_for_variants","title":"ctx_kwargs_for_variants(variants, flow_id)","text":"

Generates a dictionary of context keyword arguments for unit conversion, derived from the flow specs

Parameters:

Name Type Description Default variants list[str | None]

A list of variant names or None values.

required flow_id str

Identifier for the specific flow or process.

required

Returns:

Type Description dict

Dictionary containing default conversion parameters for energy content and density, updated with the values found in the flow specs for the given variants.

Source code in python/posted/units.py
def ctx_kwargs_for_variants(variants: list[str | None], flow_id: str):\n    '''\n    Generates a dictionary of context keyword arguments for unit conversion, derived from the flow specs\n\n\n    Parameters\n    ----------\n    variants : list[str | None]\n        A list of variant names or None values.\n    flow_id : str\n        Identifier for the specific flow or process.\n\n\n    Returns\n    -------\n        dict\n            Dictionary containing default conversion parameters for energy content and density,\n            updated with the values found in the flow specs for the given variants.\n\n    '''\n    # set default conversion parameters to NaN, such that conversion fails with a meaningful error message in their\n    # absence. When this is left out, a failed conversion throws a division-by-zero error instead.\n    ctx_kwargs = {'energycontent': np.nan, 'density': np.nan}\n    ctx_kwargs |= {\n        unit_variants[v]['param']: flows[flow_id][unit_variants[v]['value']]\n        for v in variants if v is not None\n    }\n    return ctx_kwargs\n
"},{"location":"code/python/units/#python.posted.units.split_off_variant","title":"split_off_variant(unit)","text":"

Takes a unit string and splits it into a pure unit and a variant, if present, based on a semicolon separator, e.g. MWh;LHV into MWh and LHV.

Parameters:

Name Type Description Default unit str

String that may contain a unit and its variant separated by a semicolon.

required

Returns:

Type Description tuple

Returns a tuple containing the pure unit and the variant (if present) after splitting the input unit string by semi-colons.

Source code in python/posted/units.py
def split_off_variant(unit: str):\n    '''\n    Takes a unit string and splits it into a pure unit and a variant,\n    if present, based on a semicolon separator, e.g. MWh;LHV into MWh and LHV.\n\n    Parameters\n    ----------\n    unit : str\n        String that may contain a unit and its variant separated by a semicolon.\n\n    Returns\n    -------\n        tuple\n            Returns a tuple containing the pure unit and the variant (if\n            present) after splitting the input unit string by semi-colons.\n\n    '''\n    tokens = unit.split(';')\n    if len(tokens) == 1:\n        pure_unit = unit\n        variant = None\n    elif len(tokens) > 2:\n        raise Exception(f\"Too many semi-colons in unit '{unit}'.\")\n    else:\n        pure_unit, variant = tokens\n    if variant is not None and variant not in unit_variants:\n        raise Exception(f\"Cannot find unit variant '{variant}'.\")\n    return pure_unit, variant\n
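Two illustrative calls, following the example from the docstring ('LHV' is assumed to be a registered unit variant):

from posted.units import split_off_variant

split_off_variant('MWh;LHV')  # -> ('MWh', 'LHV')
split_off_variant('kg')       # -> ('kg', None)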
"},{"location":"code/python/units/#python.posted.units.unit_allowed","title":"unit_allowed(unit, flow_id, dimension)","text":"

Checks if a given unit is allowed for a specific dimension and flow ID, handling unit variants and compatibility checks.

Parameters:

Name Type Description Default unit str

The unit to check

required flow_id None | str

Identifier for the specific flow or process.

required dimension str

Expected dimension of the unit.

required

Returns:

Type Description tuple(bool, str)

Tuple with a boolean value and a message. The boolean value indicates whether the unit is allowed based on the provided conditions, and the message provides additional information or an error message related to the unit validation process.

Source code in python/posted/units.py
def unit_allowed(unit: str, flow_id: None | str, dimension: str):\n    '''Checks if a given unit is allowed for a specific dimension and flow ID,\n    handling unit variants and compatibility checks.\n\n    Parameters\n    ----------\n    unit : str\n        The unit to check.\n    flow_id : None | str\n        Identifier for the specific flow or process.\n    dimension : str\n        Expected dimension of the unit.\n\n    Returns\n    -------\n        tuple(bool, str)\n            Tuple with a boolean value and a message. The boolean value indicates\n            whether the unit is allowed based on the provided conditions, and the message\n            provides additional information or an error message related to the unit validation process.\n    '''\n    if not isinstance(unit, str):\n        raise Exception('Unit to check must be string.')\n\n    # split unit into pure unit and variant\n    try:\n        unit, variant = split_off_variant(unit)\n    except:\n        return False, f\"Inconsistent unit variant format in '{unit}'.\"\n\n    try:\n        unit_registered = ureg(unit)\n    except:\n        return False, f\"Unknown unit '{unit}'.\"\n\n    if flow_id is None:\n        if '[flow]' in dimension:\n            return False, f\"No flow_id provided even though [flow] is in dimension.\"\n        if variant is not None:\n            return False, f\"Unexpected unit variant '{variant}' for dimension [{dimension}].\"\n        if (dimension == 'dimensionless' and unit_registered.dimensionless) or unit_registered.check(dimension):\n            return True, ''\n        else:\n            return False, f\"Unit '{unit}' does not match expected dimension [{dimension}].\"\n    else:\n        if '[flow]' not in dimension:\n            if (dimension == 'dimensionless' and unit_registered.dimensionless) or unit_registered.check(dimension):\n                return True, ''\n        else:\n            check_dimensions = [\n                (dimension.replace(\n                    '[flow]', f\"[{dimension_base}]\"), dimension_base, base_unit)\n                for dimension_base, base_unit in [('mass', 'kg'), ('energy', 'kWh'), ('volume', 'm**3')]\n            ]\n            for check_dimension, check_dimension_base, check_base_unit in check_dimensions:\n                if unit_registered.check(check_dimension):\n                    if variant is None:\n                        if any(\n                            (check_dimension_base == variant_specs['dimension']) and\n                            flows[flow_id][variant_specs['value']] is not np.nan\n                            for variant, variant_specs in unit_variants.items()\n                        ):\n                            return False, (f\"Missing unit variant for dimension [{check_dimension_base}] for unit \"\n                                           f\"'{unit}'.\")\n                    elif unit_variants[variant]['dimension'] != check_dimension_base:\n                        return False, f\"Variant '{variant}' incompatible with unit '{unit}'.\"\n\n                    default_unit, default_variant = split_off_variant(\n                        flows[flow_id]['default_unit'])\n                    ctx_kwargs = ctx_kwargs_for_variants(\n                        [variant, default_variant], flow_id)\n\n                    if ureg(check_base_unit).is_compatible_with(default_unit, 'flocon', **ctx_kwargs):\n                        return True, ''\n                    else:\n                        return False, f\"Unit '{unit}' not compatible with flow '{flow_id}'.\"\n\n        return False, f\"Unit '{unit}' is not compatible with dimension [{dimension}].\"\n
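A minimal usage sketch (not part of the generated reference): it assumes the POSTED flow definitions are loaded, that 'h2' is a valid flow ID as suggested by the tutorials below, and that dimension strings follow the pint dimensionality syntax used in this module.

from posted.units import unit_allowed

# plain dimensional check without a flow context
ok, msg = unit_allowed('kW', flow_id=None, dimension='[power]')

# a dimension containing [flow] requires a flow_id; for hydrogen, energy
# units are ambiguous without a variant such as ';LHV' ('h2' is assumed here)
ok, msg = unit_allowed('MWh;LHV', flow_id='h2', dimension='[flow]')
if not ok:
    print(msg)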
"},{"location":"code/python/units/#python.posted.units.unit_convert","title":"unit_convert(unit_from, unit_to, flow_id=None)","text":"

Converts between units, optionally applying a flow context based on the specified variants and flow ID. The function first checks that the input units are not NaN and then handles the different cases depending on whether a flow context and unit variants are present.

Parameters:

unit_from : str | float
    Unit to convert from. (required)

unit_to : str | float
    Unit to convert to. (required)

flow_id : None | str
    Identifier for the specific flow or process. (default: None)

Returns:

float
    Conversion factor between unit_from and unit_to.

Source code in python/posted/units.py
def unit_convert(unit_from: str | float, unit_to: str | float, flow_id: None | str = None) -> float:\n    '''\n    Converts between units, optionally applying a flow context based on\n    the specified variants and flow ID. The function first checks that the input units\n    are not NaN and then handles the different cases depending on whether a flow\n    context and unit variants are present.\n\n    Parameters\n    ----------\n    unit_from : str | float\n        Unit to convert from.\n    unit_to : str | float\n        Unit to convert to.\n    flow_id : None | str\n        Identifier for the specific flow or process.\n\n    Returns\n    -------\n        float\n            Conversion factor between unit_from and unit_to\n\n    '''\n    # return nan if unit_from or unit_to is nan\n    if unit_from is np.nan or unit_to is np.nan:\n        return np.nan\n\n    # replace \"No Unit\" with 'dimensionless'\n    if unit_from == 'No Unit':\n        unit_from = 'dimensionless'\n    if unit_to == 'No Unit':\n        unit_to = 'dimensionless'\n\n    # skip flow conversion if no flow_id specified\n    if flow_id is None or pd.isna(flow_id):\n        return ureg(unit_from).to(unit_to).magnitude\n\n    # get variants from units\n    pure_units = []\n    variants = []\n    for u in (unit_from, unit_to):\n        pure_unit, variant = split_off_variant(u)\n        pure_units.append(pure_unit)\n        variants.append(variant)\n\n    unit_from, unit_to = pure_units\n\n    # if no variants are specified, we may proceed without a flow context\n    if not any(variants):\n        return ureg(unit_from).to(unit_to).magnitude\n\n    # if both variants refer to the same dimension, we need to manually calculate the conversion factor and proceed\n    # without a flow context\n    if len(variants) == 2:\n        variant_params = {\n            unit_variants[v]['param'] if v is not None else None\n            for v in variants\n        }\n        if len(variant_params) == 1:\n            param = next(iter(variant_params))\n            value_from, value_to = (\n                flows[flow_id][unit_variants[v]['value']] for v in variants)\n\n            conv_factor = (ureg(value_from) / ureg(value_to)\n                           if param == 'energycontent' else\n                           ureg(value_to) / ureg(value_from))\n\n            return conv_factor.magnitude * ureg(unit_from).to(unit_to).magnitude\n\n    # perform the actual conversion step with all required variants\n    ctx_kwargs = ctx_kwargs_for_variants(variants, flow_id)\n    return ureg(unit_from).to(unit_to, 'flocon', **ctx_kwargs).magnitude\n
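A minimal usage sketch (assumptions flagged in the comments): the flow ID 'h2' and the ';LHV' variant are taken from the tutorials and may differ in your local database.

from posted.units import unit_convert

# plain dimensional conversion; no flow context needed
factor = unit_convert('MWh', 'kWh')    # -> 1000.0

# converting a flow between mass and energy units requires a flow_id and,
# where the energy content is ambiguous, a variant ('h2' and ';LHV' assumed)
factor_h2 = unit_convert('t', 'MWh;LHV', flow_id='h2')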
"},{"location":"tutorials/R/overview/","title":"Overview","text":"In\u00a0[1]: Copied!
devtools::load_all()\n
devtools::load_all()
\u2139 Loading posted\n\nAttaching package: \u2018dplyr\u2019\n\n\nThe following objects are masked from \u2018package:stats\u2019:\n\n    filter, lag\n\n\nThe following objects are masked from \u2018package:base\u2019:\n\n    intersect, setdiff, setequal, union\n\n\n\nAttaching package: \u2018docstring\u2019\n\n\nThe following object is masked from \u2018package:utils\u2019:\n\n    ?\n\n\n
In\u00a0[2]:
par(bg = \"white\")\nplot(1:10)\n
par(bg = \"white\") plot(1:10) In\u00a0[3]: Copied!
tedf <- TEDF$new(\"Tech|Electrolysis\")$load()\ntedf$data\n
tedf <- TEDF$new(\"Tech|Electrolysis\")$load() tedf$data A data.frame: 95 \u00d7 14 subtechsizeregionperiodvariablereference_variablevalueuncertaintyunitreference_valuereference_unitcommentsourcesource_detail <chr><chr><chr><chr><chr><chr><dbl><dbl><chr><dbl><chr><chr><chr><chr> AEL 100 MW *2020CAPEXInput Capacity|Electricity 400 0EUR_20201kW Vartiainen22Page 4, Figure 4 AEL 100 MW *2030CAPEXInput Capacity|Electricity 240 50EUR_20201kW Vartiainen22Page 4, Figure 4 AEL 100 MW *2040CAPEXInput Capacity|Electricity 140 75EUR_20201kW Vartiainen22Page 4, Figure 4 AEL 100 MW *2050CAPEXInput Capacity|Electricity 80 75EUR_20201kW Vartiainen22Page 4, Figure 4 AEL 100 MW *2020CAPEXInput Capacity|Electricity 663 NAEUR_20201kW Holst21 Appendix A AEL 100 MW *2030CAPEXInput Capacity|Electricity 444 NAEUR_20201kW Holst21 Appendix A PEM 100 MW *2020CAPEXInput Capacity|Electricity 718 NAEUR_20201kW Holst21 Appendix A PEM 100 MW *2030CAPEXInput Capacity|Electricity 502 NAEUR_20201kW Holst21 Appendix A AEL 5 MW *2020CAPEXInput Capacity|Electricity 949 NAEUR_20201kW Holst21 Appendix A AEL 5 MW *2030CAPEXInput Capacity|Electricity 726 NAEUR_20201kW Holst21 Appendix A PEM 5 MW *2020CAPEXInput Capacity|Electricity 978 NAEUR_20201kW Holst21 Appendix A PEM 5 MW *2030CAPEXInput Capacity|Electricity 718 NAEUR_20201kW Holst21 Appendix A AEL * *2030CAPEXInput Capacity|Electricity 536152USD_20211kW IRENA22 Page 23 AEL * *2050CAPEXInput Capacity|Electricity 230 96USD_20211kW IRENA22 Page 23 AEL 130000 Nm\u00b3/h*2020CAPEXInput Capacity|Electricity1150350EUR_20191kW Tenhumberg20Table 2, Page 1588 (3/10) PEM 130000 Nm\u00b3/h*2020CAPEXInput Capacity|Electricity1750350EUR_20191kW Tenhumberg20Table 2, Page 1588 (3/10) SOEC130000 Nm\u00b3/h*2020CAPEXInput Capacity|Electricity2000 NAEUR_20191kW Provided as a lower thresholdTenhumberg20Table 2, Page 1588 (3/10) AEL 130000 Nm\u00b3/h*2030CAPEXInput Capacity|Electricity 450 NAEUR_20191kW Tenhumberg20Supplement, Table S2, Page 3 PEM 130000 Nm\u00b3/h*2030CAPEXInput Capacity|Electricity 810 NAEUR_20191kW Tenhumberg20Supplement, Table S2, Page 3 SOEC130000 Nm\u00b3/h*2030CAPEXInput Capacity|Electricity 800 NAEUR_20191kW Tenhumberg20Supplement, Table S2, Page 3 AEL 1 MW *2020CAPEXOutput Capacity|Hydrogen 1566918EUR_20201kg/d DEARF23 Sheet 86 AEC 1MW, Row 24 AEL 1 MW *2030CAPEXOutput Capacity|Hydrogen 1164 NAEUR_20201kg/d DEARF23 Sheet 86 AEC 1MW, Row 24 AEL 1 MW *2040CAPEXOutput Capacity|Hydrogen 874 NAEUR_20201kg/d DEARF23 Sheet 86 AEC 1MW, Row 24 AEL 1 MW *2050CAPEXOutput Capacity|Hydrogen 648438EUR_20201kg/d DEARF23 Sheet 86 AEC 1MW, Row 24 AEL 100 MW *2020CAPEXOutput Capacity|Hydrogen 1358374EUR_20201kg/d DEARF23 Sheet 86 AEC 100MW, Row 24 AEL 100 MW *2030CAPEXOutput Capacity|Hydrogen 919 NAEUR_20201kg/d DEARF23 Sheet 86 AEC 100MW, Row 24 AEL 100 MW *2040CAPEXOutput Capacity|Hydrogen 583 NAEUR_20201kg/d DEARF23 Sheet 86 AEC 100MW, Row 24 AEL 100 MW *2050CAPEXOutput Capacity|Hydrogen 463201EUR_20201kg/d DEARF23 Sheet 86 AEC 100MW, Row 24 PEM 1 MW *2020CAPEXOutput Capacity|Hydrogen 2215497EUR_20201kg/d DEARF23 Sheet 86 PEMEC 1MW, Row 24 PEM 1 MW *2030CAPEXOutput Capacity|Hydrogen 1378 NAEUR_20201kg/d DEARF23 Sheet 86 PEMEC 1MW, Row 24 \u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee SOEC* *2030Output|Hydrogen Input|Electricity 24.20 NAkg 1.0MWh DEARF23 Sheet 86 SOEC 1MW, Row 16 SOEC* *2040Output|Hydrogen Input|Electricity 24.60 NAkg 1.0MWh DEARF23 Sheet 86 SOEC 1MW, Row 16 SOEC* *2050Output|Hydrogen Input|Electricity 25.100.50kg 1.0MWh DEARF23 
Sheet 86 SOEC 1MW, Row 16 SOEC* *2020Input|Heat Input|Electricity 20.502.00pct MWh 79.5pct MWh DEARF23 Sheet 86 SOEC 1MW, Rows 9 and 10 SOEC* *2030Input|Heat Input|Electricity 19.50 NApct MWh 80.5pct MWh DEARF23 Sheet 86 SOEC 1MW, Rows 9 and 10 SOEC* *2040Input|Heat Input|Electricity 18.60 NApct MWh 81.4pct MWh DEARF23 Sheet 86 SOEC 1MW, Rows 9 and 10 SOEC* *2050Input|Heat Input|Electricity 18.601.00pct MWh 81.4pct MWh DEARF23 Sheet 86 SOEC 1MW, Rows 9 and 10 AEL * *2020Input|Electricity Output|Hydrogen 48.30 NAkWh 1.0kg Holst21 Table 3-2, Page 42 AEL * *2030Input|Electricity Output|Hydrogen 45.60 NAkWh 1.0kg Holst21 Table 3-2, Page 42 PEM * *2020Input|Electricity Output|Hydrogen 51.00 NAkWh 1.0kg Holst21 Table 3-7, Page 50 PEM * *2030Input|Electricity Output|Hydrogen 45.70 NAkWh 1.0kg Holst21 Table 3-7, Page 50 AEL * *2030Input|Electricity Output|Hydrogen 50.351.85kWh 1.0kg IRENA22 Footnote 12, Page 21 AEL * *2050Input|Electricity Output|Hydrogen 46.501.50kWh 1.0kg IRENA22 Footnote 12, Page 21 PEM * *2021Input|Electricity Output|Hydrogen 54.20 NAkWh 1.0kg Based on life cycle inventory, from Ecoinvent v3.4Al-Qahtani21Table 2, Page 3 AEL 130000 Nm\u00b3/h*2020Input|Electricity Output|Hydrogen 5.450.45kWh 1.0m\u00b3;normTotal system demand Tenhumberg20Table 2, Page 1588 (3/10) PEM 130000 Nm\u00b3/h*2020Input|Electricity Output|Hydrogen 5.750.75kWh 1.0m\u00b3;normTotal system demand Tenhumberg20Table 2, Page 1588 (3/10) SOEC130000 Nm\u00b3/h*2020Input|Electricity Output|Hydrogen 3.800.10kWh 1.0m\u00b3;normTotal system demand Tenhumberg20Table 2, Page 1588 (3/10) SOEC130000 Nm\u00b3/h*2020Input|Heat Output|Hydrogen 0.70 NAkWh 1.0m\u00b3;normTotal system demand Tenhumberg20Table 3, Page 1590 (5/10) AEL 130000 Nm\u00b3/h*2030Input|Electricity Output|Hydrogen 4.42 NAkWh 1.0m\u00b3;normTotal system demand Tenhumberg20Table 2, Page 1588 (3/10) PEM 130000 Nm\u00b3/h*2030Input|Electricity Output|Hydrogen 4.81 NAkWh 1.0m\u00b3;normTotal system demand Tenhumberg20Table 2, Page 1588 (3/10) SOEC130000 Nm\u00b3/h*2030Input|Electricity Output|Hydrogen 3.55 NAkWh 1.0m\u00b3;normTotal system demand Tenhumberg20Table 2, Page 1588 (3/10) SOEC130000 Nm\u00b3/h*2030Input|Heat Output|Hydrogen 0.70 NAkWh 1.0m\u00b3;normTotal system demand Tenhumberg20Table 3, Page 1590 (5/10) AEL * *2020Input|Electricity Output|Hydrogen 54.004.00kWh 1.0kg Yates20 Table 3, Page 10 PEM * *2020Input|Water Output|Hydrogen 10.00 NAkg 1.0kg Based on life cycle inventory, from Ecoinvent v3.4Al-Qahtani21Table 2, Page 3 * * *2021Input|Water Output|Hydrogen 9.00 NAkg 1.0kg IEA-GHR21 AEL * *2020Input|Water Output|Hydrogen 10.001.00L;norm 1.0kg Yates20 Table 3, Page 10 * 1 MW ** Total Input Capacity|Electricity 1.00 NAMW NA * * 5 MW ** Total Input Capacity|Electricity 5.00 NAMW NA * * 100 MW ** Total Input Capacity|Electricity 100.00 NAMW NA * * 130000 Nm\u00b3/h** Total Output Capacity|Hydrogen 130000.00 NAm\u00b3/h;norm NA * In\u00a0[4]: Copied!
DataSet$new('Tech|Electrolysis')$normalise(override=list('Tech|Electrolysis|Input Capacity|elec'= 'kW', 'Tech|Electrolysis|Output Capacity|h2'= 'kW;LHV'))  %>% filter(source=='Vartiainen22')\n
DataSet$new('Tech|Electrolysis')$normalise(override=list('Tech|Electrolysis|Input Capacity|elec'= 'kW', 'Tech|Electrolysis|Output Capacity|h2'= 'kW;LHV')) %>% filter(source=='Vartiainen22') A data.frame: 12 \u00d7 15 parent_variablesubtechsizeregionperiodvariablereference_variablevalueuncertaintyunitreference_valuereference_unitcommentsourcesource_detail <chr><chr><chr><chr><chr><chr><chr><dbl><dbl><chr><dbl><chr><chr><chr><chr> Tech|ElectrolysisAEL100 MW*2020CAPEX Input Capacity|Electricity45.403859630.000000USD_2005 1MWh/a Vartiainen22Page 4, Figure 4 Tech|ElectrolysisAEL100 MW*2030CAPEX Input Capacity|Electricity27.242315785.675482USD_2005 1MWh/a Vartiainen22Page 4, Figure 4 Tech|ElectrolysisAEL100 MW*2040CAPEX Input Capacity|Electricity15.891350878.513224USD_2005 1MWh/a Vartiainen22Page 4, Figure 4 Tech|ElectrolysisAEL100 MW*2050CAPEX Input Capacity|Electricity 9.080771938.513224USD_2005 1MWh/a Vartiainen22Page 4, Figure 4 Tech|ElectrolysisAEL100 MW*2020OPEX Fixed Input Capacity|Electricity 0.68105789 NAUSD_2005/a1MWh/a1.5% of CAPEX; reported in units of electric capacity Vartiainen22Page 4 Tech|ElectrolysisAEL100 MW*2030OPEX Fixed Input Capacity|Electricity 0.23837026 NAUSD_2005/a1MWh/a10% LR decrease Vartiainen22Page 4 Tech|ElectrolysisAEL100 MW*2040OPEX Fixed Input Capacity|Electricity 0.08286204 NAUSD_2005/a1MWh/a10% LR decrease Vartiainen22Page 4 Tech|ElectrolysisAEL100 MW*2050OPEX Fixed Input Capacity|Electricity 0.02837741 NAUSD_2005/a1MWh/a10% LR decrease Vartiainen22Page 4 Tech|ElectrolysisAEL100 MW*2020Output|HydrogenInput|Electricity 0.67000000 NAMWh;LHV 1MWh 1.5% of CAPEX; reported in units of electric capacity; 67% assumes the LHVVartiainen22Page 4 Tech|ElectrolysisAEL100 MW*2030Output|HydrogenInput|Electricity 0.70000000 NAMWh;LHV 1MWh 10% LR decrease Vartiainen22Page 4 Tech|ElectrolysisAEL100 MW*2040Output|HydrogenInput|Electricity 0.73000000 NAMWh;LHV 1MWh 10% LR decrease Vartiainen22Page 4 Tech|ElectrolysisAEL100 MW*2050Output|HydrogenInput|Electricity 0.76000000 NAMWh;LHV 1MWh 10% LR decrease Vartiainen22Page 4 In\u00a0[5]: Copied!
DataSet$new('Tech|Electrolysis')$normalise(override=list('Tech|Electrolysis|Output Capacity|h2'= 'kW;LHV'))\n
DataSet$new('Tech|Electrolysis')$normalise(override=list('Tech|Electrolysis|Output Capacity|h2'= 'kW;LHV')) A data.frame: 95 \u00d7 15 parent_variablesubtechsizeregionperiodvariablereference_variablevalueuncertaintyunitreference_valuereference_unitcommentsourcesource_detail <chr><chr><chr><chr><chr><chr><chr><dbl><dbl><chr><dbl><chr><chr><chr><chr> Tech|ElectrolysisAEL 100 MW *2020CAPEXInput Capacity|Electricity 45.403860 0.000000USD_20051MWh/a Vartiainen22Page 4, Figure 4 Tech|ElectrolysisAEL 100 MW *2030CAPEXInput Capacity|Electricity 27.242316 5.675482USD_20051MWh/a Vartiainen22Page 4, Figure 4 Tech|ElectrolysisAEL 100 MW *2040CAPEXInput Capacity|Electricity 15.891351 8.513224USD_20051MWh/a Vartiainen22Page 4, Figure 4 Tech|ElectrolysisAEL 100 MW *2050CAPEXInput Capacity|Electricity 9.080772 8.513224USD_20051MWh/a Vartiainen22Page 4, Figure 4 Tech|ElectrolysisAEL 100 MW *2020CAPEXInput Capacity|Electricity 75.256897 NAUSD_20051MWh/a Holst21 Appendix A Tech|ElectrolysisAEL 100 MW *2030CAPEXInput Capacity|Electricity 50.398284 NAUSD_20051MWh/a Holst21 Appendix A Tech|ElectrolysisPEM 100 MW *2020CAPEXInput Capacity|Electricity 81.499928 NAUSD_20051MWh/a Holst21 Appendix A Tech|ElectrolysisPEM 100 MW *2030CAPEXInput Capacity|Electricity 56.981844 NAUSD_20051MWh/a Holst21 Appendix A Tech|ElectrolysisAEL 5 MW *2020CAPEXInput Capacity|Electricity107.720657 NAUSD_20051MWh/a Holst21 Appendix A Tech|ElectrolysisAEL 5 MW *2030CAPEXInput Capacity|Electricity 82.408005 NAUSD_20051MWh/a Holst21 Appendix A Tech|ElectrolysisPEM 5 MW *2020CAPEXInput Capacity|Electricity111.012437 NAUSD_20051MWh/a Holst21 Appendix A Tech|ElectrolysisPEM 5 MW *2030CAPEXInput Capacity|Electricity 81.499928 NAUSD_20051MWh/a Holst21 Appendix A Tech|ElectrolysisAEL * *2030CAPEXInput Capacity|Electricity 45.03196212.770258USD_20051MWh/a IRENA22 Page 23 Tech|ElectrolysisAEL * *2050CAPEXInput Capacity|Electricity 19.323417 8.065426USD_20051MWh/a IRENA22 Page 23 Tech|ElectrolysisAEL 130000 Nm\u00b3/h*2020CAPEXInput Capacity|Electricity132.89985740.447783USD_20051MWh/a Tenhumberg20Table 2, Page 1588 (3/10) Tech|ElectrolysisPEM 130000 Nm\u00b3/h*2020CAPEXInput Capacity|Electricity202.23891340.447783USD_20051MWh/a Tenhumberg20Table 2, Page 1588 (3/10) Tech|ElectrolysisSOEC130000 Nm\u00b3/h*2020CAPEXInput Capacity|Electricity231.130187 NAUSD_20051MWh/a Provided as a lower thresholdTenhumberg20Table 2, Page 1588 (3/10) Tech|ElectrolysisAEL 130000 Nm\u00b3/h*2030CAPEXInput Capacity|Electricity 52.004292 NAUSD_20051MWh/a Tenhumberg20Supplement, Table S2, Page 3 Tech|ElectrolysisPEM 130000 Nm\u00b3/h*2030CAPEXInput Capacity|Electricity 93.607726 NAUSD_20051MWh/a Tenhumberg20Supplement, Table S2, Page 3 Tech|ElectrolysisSOEC130000 Nm\u00b3/h*2030CAPEXInput Capacity|Electricity 92.452075 NAUSD_20051MWh/a Tenhumberg20Supplement, Table S2, Page 3 Tech|ElectrolysisAEL 1 MW *2020CAPEXOutput Capacity|Hydrogen 127.99852175.033616USD_20051MWh/a;LHV DEARF23 Sheet 86 AEC 1MW, Row 24 Tech|ElectrolysisAEL 1 MW *2030CAPEXOutput Capacity|Hydrogen 95.140663 NAUSD_20051MWh/a;LHV DEARF23 Sheet 86 AEC 1MW, Row 24 Tech|ElectrolysisAEL 1 MW *2040CAPEXOutput Capacity|Hydrogen 71.437233 NAUSD_20051MWh/a;LHV DEARF23 Sheet 86 AEC 1MW, Row 24 Tech|ElectrolysisAEL 1 MW *2050CAPEXOutput Capacity|Hydrogen 52.96490535.800353USD_20051MWh/a;LHV DEARF23 Sheet 86 AEC 1MW, Row 24 Tech|ElectrolysisAEL 100 MW *2020CAPEXOutput Capacity|Hydrogen 110.99744030.569251USD_20051MWh/a;LHV DEARF23 Sheet 86 AEC 100MW, Row 24 Tech|ElectrolysisAEL 100 MW *2030CAPEXOutput 
Capacity|Hydrogen 75.115352 NAUSD_20051MWh/a;LHV DEARF23 Sheet 86 AEC 100MW, Row 24 Tech|ElectrolysisAEL 100 MW *2040CAPEXOutput Capacity|Hydrogen 47.652068 NAUSD_20051MWh/a;LHV DEARF23 Sheet 86 AEC 100MW, Row 24 Tech|ElectrolysisAEL 100 MW *2050CAPEXOutput Capacity|Hydrogen 37.84375216.428929USD_20051MWh/a;LHV DEARF23 Sheet 86 AEC 100MW, Row 24 Tech|ElectrolysisPEM 1 MW *2020CAPEXOutput Capacity|Hydrogen 181.04516340.622775USD_20051MWh/a;LHV DEARF23 Sheet 86 PEMEC 1MW, Row 24 Tech|ElectrolysisPEM 1 MW *2030CAPEXOutput Capacity|Hydrogen 112.632160 NAUSD_20051MWh/a;LHV DEARF23 Sheet 86 PEMEC 1MW, Row 24 \u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee Tech|ElectrolysisSOEC* *2030Output|Hydrogen Input|Electricity8.065777e-01 NAMWh;LHV 1MWh DEARF23 Sheet 86 SOEC 1MW, Row 16 Tech|ElectrolysisSOEC* *2040Output|Hydrogen Input|Electricity8.199095e-01 NAMWh;LHV 1MWh DEARF23 Sheet 86 SOEC 1MW, Row 16 Tech|ElectrolysisSOEC* *2050Output|Hydrogen Input|Electricity8.365744e-010.01666483MWh;LHV 1MWh DEARF23 Sheet 86 SOEC 1MW, Row 16 Tech|ElectrolysisSOEC* *2020Input|Heat Input|Electricity2.578616e-010.02515723MWh 1MWh DEARF23 Sheet 86 SOEC 1MW, Rows 9 and 10 Tech|ElectrolysisSOEC* *2030Input|Heat Input|Electricity2.422360e-01 NAMWh 1MWh DEARF23 Sheet 86 SOEC 1MW, Rows 9 and 10 Tech|ElectrolysisSOEC* *2040Input|Heat Input|Electricity2.285012e-01 NAMWh 1MWh DEARF23 Sheet 86 SOEC 1MW, Rows 9 and 10 Tech|ElectrolysisSOEC* *2050Input|Heat Input|Electricity2.285012e-010.01228501MWh 1MWh DEARF23 Sheet 86 SOEC 1MW, Rows 9 and 10 Tech|ElectrolysisAEL * *2020Input|Electricity Output|Hydrogen 1.449160e+00 NAMWh 1MWh;LHV Holst21 Table 3-2, Page 42 Tech|ElectrolysisAEL * *2030Input|Electricity Output|Hydrogen 1.368151e+00 NAMWh 1MWh;LHV Holst21 Table 3-2, Page 42 Tech|ElectrolysisPEM * *2020Input|Electricity Output|Hydrogen 1.530169e+00 NAMWh 1MWh;LHV Holst21 Table 3-7, Page 50 Tech|ElectrolysisPEM * *2030Input|Electricity Output|Hydrogen 1.371151e+00 NAMWh 1MWh;LHV Holst21 Table 3-7, Page 50 Tech|ElectrolysisAEL * *2030Input|Electricity Output|Hydrogen 1.510667e+000.05550612MWh 1MWh;LHV IRENA22 Footnote 12, Page 21 Tech|ElectrolysisAEL * *2050Input|Electricity Output|Hydrogen 1.395154e+000.04500496MWh 1MWh;LHV IRENA22 Footnote 12, Page 21 Tech|ElectrolysisPEM * *2021Input|Electricity Output|Hydrogen 1.626179e+00 NAMWh 1MWh;LHVBased on life cycle inventory, from Ecoinvent v3.4Al-Qahtani21Table 2, Page 3 Tech|ElectrolysisAEL 130000 Nm\u00b3/h*2020Input|Electricity Output|Hydrogen 1.952408e+000.16120799MWh 1MWh;LHVTotal system demand Tenhumberg20Table 2, Page 1588 (3/10) Tech|ElectrolysisPEM 130000 Nm\u00b3/h*2020Input|Electricity Output|Hydrogen 2.059880e+000.26867998MWh 1MWh;LHVTotal system demand Tenhumberg20Table 2, Page 1588 (3/10) Tech|ElectrolysisSOEC130000 Nm\u00b3/h*2020Input|Electricity Output|Hydrogen 1.361312e+000.03582400MWh 1MWh;LHVTotal system demand Tenhumberg20Table 2, Page 1588 (3/10) Tech|ElectrolysisSOEC130000 Nm\u00b3/h*2020Input|Heat Output|Hydrogen 2.507680e-01 NAMWh 1MWh;LHVTotal system demand Tenhumberg20Table 3, Page 1590 (5/10) Tech|ElectrolysisAEL 130000 Nm\u00b3/h*2030Input|Electricity Output|Hydrogen 1.583421e+00 NAMWh 1MWh;LHVTotal system demand Tenhumberg20Table 2, Page 1588 (3/10) Tech|ElectrolysisPEM 130000 Nm\u00b3/h*2030Input|Electricity Output|Hydrogen 1.723134e+00 NAMWh 1MWh;LHVTotal system demand Tenhumberg20Table 2, Page 1588 (3/10) Tech|ElectrolysisSOEC130000 Nm\u00b3/h*2030Input|Electricity Output|Hydrogen 1.271752e+00 NAMWh 
1MWh;LHVTotal system demand Tenhumberg20Table 2, Page 1588 (3/10) Tech|ElectrolysisSOEC130000 Nm\u00b3/h*2030Input|Heat Output|Hydrogen 2.507680e-01 NAMWh 1MWh;LHVTotal system demand Tenhumberg20Table 3, Page 1590 (5/10) Tech|ElectrolysisAEL * *2020Input|Electricity Output|Hydrogen 1.620179e+000.12001324MWh 1MWh;LHV Yates20 Table 3, Page 10 Tech|ElectrolysisPEM * *2020Input|Water Output|Hydrogen 3.000331e-01 NAt 1MWh;LHVBased on life cycle inventory, from Ecoinvent v3.4Al-Qahtani21Table 2, Page 3 Tech|Electrolysis* * *2021Input|Water Output|Hydrogen 2.700298e-01 NAt 1MWh;LHV IEA-GHR21 Tech|ElectrolysisAEL * *2020Input|Water Output|Hydrogen 2.994960e-010.02994960t 1MWh;LHV Yates20 Table 3, Page 10 Tech|Electrolysis* 1 MW ** Total Input Capacity|Electricity 8.760000e+03 NAMWh/a NANA * Tech|Electrolysis* 5 MW ** Total Input Capacity|Electricity 4.380000e+04 NAMWh/a NANA * Tech|Electrolysis* 100 MW ** Total Input Capacity|Electricity 8.760000e+05 NAMWh/a NANA * Tech|Electrolysis* 130000 Nm\u00b3/h** Total Output Capacity|Hydrogen 3.178875e+06 NAMWh/a;LHVNANA * In\u00a0[6]: Copied!
DataSet$new('Tech|Electrolysis')$select(period=2020, subtech='AEL', size='100 MW', override=list('Tech|Electrolysis|Output Capacity|h2'= 'kW;LHV'))\n
DataSet$new('Tech|Electrolysis')$select(period=2020, subtech='AEL', size='100 MW', override=list('Tech|Electrolysis|Output Capacity|h2'= 'kW;LHV')) A data.frame: 21 \u00d7 7 sourcevariablereference_variableregionperiodunitvalue <chr><chr><chr><chr><dbl><list><dbl> DEARF23 Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2020a * USD_2005/MWh1.665e+02 DEARF23 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2020MWh/MWh2.250e+00 DEARF23 Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2020USD_2005/MWh3.330e+00 DEARF23 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2020MWh/a8.760e+05 Holst21 Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2020a * USD_2005/MWh1.091e+02 Holst21 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2020MWh/MWh2.100e+00 Holst21 Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2020USD_2005/MWh3.290e+00 Holst21 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2020MWh/a8.760e+05 IEA-GHR21 Tech|Electrolysis|Input|Water Tech|Electrolysis|Output|Hydrogen World2020t/MWh7.292e-02 IEA-GHR21 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2020MWh/a8.760e+05 IRENA22 Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2020a * USD_2005/MWh6.803e+01 IRENA22 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2020MWh/MWh2.282e+00 IRENA22 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2020MWh/a8.760e+05 Vartiainen22Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2020a * USD_2005/MWh6.777e+01 Vartiainen22Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2020MWh/MWh2.228e+00 Vartiainen22Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2020USD_2005/MWh1.017e+00 Vartiainen22Tech|Electrolysis|Total Input Capacity|ElectricityNA World2020MWh/a8.760e+05 Yates20 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2020MWh/MWh2.625e+00 Yates20 Tech|Electrolysis|Input|Water Tech|Electrolysis|Output|Hydrogen World2020t/MWh4.852e-01 Yates20 Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2020USD_2005/MWh2.418e+00 Yates20 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2020MWh/a8.760e+05 In\u00a0[7]: Copied!
DataSet$new('Tech|Electrolysis')$select(period=2030, source='Yates20', subtech='AEL', size='100 MW', override=list('Tech|Electrolysis|Output Capacity|h2' = 'kW;LHV'), extrapolate_period=FALSE)\n
DataSet$new('Tech|Electrolysis')$select(period=2030, source='Yates20', subtech='AEL', size='100 MW', override=list('Tech|Electrolysis|Output Capacity|h2' = 'kW;LHV'), extrapolate_period=FALSE) A data.frame: 1 \u00d7 7 sourcevariablereference_variableregionperiodunitvalue <chr><chr><chr><chr><dbl><list><dbl> Yates20Tech|Electrolysis|Total Input Capacity|ElectricityNAWorld2030MWh/a876000 In\u00a0[8]:
DataSet$new('Tech|Electrolysis')$select(subtech=c('AEL', 'PEM'), size='100 MW', override=list('Tech|Electrolysis|Input Capacity|Electricity' = 'kW'))\n
DataSet$new('Tech|Electrolysis')$select(subtech=c('AEL', 'PEM'), size='100 MW', override={'Tech|Electrolysis|Input Capacity|Electricity'= 'kW'}) A data.frame: 114 \u00d7 8 subtechsourcevariablereference_variableregionperiodunitvalue <chr><chr><chr><chr><chr><dbl><list><dbl> AELAl-Qahtani21Tech|Electrolysis|Total Input Capacity|ElectricityNA World2030MWh/a8.760e+05 AELAl-Qahtani21Tech|Electrolysis|Total Input Capacity|ElectricityNA World2040MWh/a8.760e+05 AELAl-Qahtani21Tech|Electrolysis|Total Input Capacity|ElectricityNA World2050MWh/a8.760e+05 AELDEARF23 Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2030a * USD_2005/MWh1.105e+02 AELDEARF23 Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2040a * USD_2005/MWh6.650e+01 AELDEARF23 Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2050a * USD_2005/MWh5.046e+01 AELDEARF23 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2030MWh/MWh2.163e+00 AELDEARF23 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2040MWh/MWh1.947e+00 AELDEARF23 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2050MWh/MWh1.778e+00 AELDEARF23 Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2030USD_2005/MWh2.210e+00 AELDEARF23 Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2040USD_2005/MWh1.330e+00 AELDEARF23 Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2050USD_2005/MWh1.009e+00 AELDEARF23 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2030MWh/a8.760e+05 AELDEARF23 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2040MWh/a8.760e+05 AELDEARF23 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2050MWh/a8.760e+05 AELHolst21 Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2030a * USD_2005/MWh6.895e+01 AELHolst21 Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2040a * USD_2005/MWh6.895e+01 AELHolst21 Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2050a * USD_2005/MWh6.895e+01 AELHolst21 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2030MWh/MWh1.872e+00 AELHolst21 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2040MWh/MWh1.872e+00 AELHolst21 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2050MWh/MWh1.872e+00 AELHolst21 Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2030USD_2005/MWh3.106e+00 AELHolst21 Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2040USD_2005/MWh3.106e+00 AELHolst21 Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2050USD_2005/MWh3.106e+00 AELHolst21 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2030MWh/a8.760e+05 AELHolst21 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2040MWh/a8.760e+05 AELHolst21 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2050MWh/a8.760e+05 AELIEA-GHR21 Tech|Electrolysis|Input|Water Tech|Electrolysis|Output|Hydrogen World2030t/MWh7.292e-02 AELIEA-GHR21 Tech|Electrolysis|Input|Water Tech|Electrolysis|Output|Hydrogen World2040t/MWh7.292e-02 AELIEA-GHR21 Tech|Electrolysis|Input|Water Tech|Electrolysis|Output|Hydrogen World2050t/MWh7.292e-02 \u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee PEMDEARF23 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2030MWh/a8.760e+05 PEMDEARF23 Tech|Electrolysis|Total Input 
Capacity|ElectricityNA World2040MWh/a8.760e+05 PEMDEARF23 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2050MWh/a8.760e+05 PEMHolst21 Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2030a * USD_2005/MWh7.813e+01 PEMHolst21 Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2040a * USD_2005/MWh7.813e+01 PEMHolst21 Tech|Electrolysis|CAPEX Tech|Electrolysis|Output Capacity|HydrogenWorld2050a * USD_2005/MWh7.813e+01 PEMHolst21 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2030MWh/MWh1.880e+00 PEMHolst21 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2040MWh/MWh1.880e+00 PEMHolst21 Tech|Electrolysis|Input|Electricity Tech|Electrolysis|Output|Hydrogen World2050MWh/MWh1.880e+00 PEMHolst21 Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2030USD_2005/MWh2.335e+00 PEMHolst21 Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2040USD_2005/MWh2.335e+00 PEMHolst21 Tech|Electrolysis|OPEX Fixed Tech|Electrolysis|Output Capacity|HydrogenWorld2050USD_2005/MWh2.335e+00 PEMHolst21 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2030MWh/a8.760e+05 PEMHolst21 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2040MWh/a8.760e+05 PEMHolst21 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2050MWh/a8.760e+05 PEMIEA-GHR21 Tech|Electrolysis|Input|Water Tech|Electrolysis|Output|Hydrogen World2030t/MWh7.292e-02 PEMIEA-GHR21 Tech|Electrolysis|Input|Water Tech|Electrolysis|Output|Hydrogen World2040t/MWh7.292e-02 PEMIEA-GHR21 Tech|Electrolysis|Input|Water Tech|Electrolysis|Output|Hydrogen World2050t/MWh7.292e-02 PEMIEA-GHR21 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2030MWh/a8.760e+05 PEMIEA-GHR21 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2040MWh/a8.760e+05 PEMIEA-GHR21 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2050MWh/a8.760e+05 PEMIRENA22 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2030MWh/a8.760e+05 PEMIRENA22 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2040MWh/a8.760e+05 PEMIRENA22 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2050MWh/a8.760e+05 PEMVartiainen22Tech|Electrolysis|Total Input Capacity|ElectricityNA World2030MWh/a8.760e+05 PEMVartiainen22Tech|Electrolysis|Total Input Capacity|ElectricityNA World2040MWh/a8.760e+05 PEMVartiainen22Tech|Electrolysis|Total Input Capacity|ElectricityNA World2050MWh/a8.760e+05 PEMYates20 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2030MWh/a8.760e+05 PEMYates20 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2040MWh/a8.760e+05 PEMYates20 Tech|Electrolysis|Total Input Capacity|ElectricityNA World2050MWh/a8.760e+05 In\u00a0[9]: Copied!
DataSet$new('Tech|Electrolysis')$aggregate(subtech='AEL', size='100 MW', agg='subtech', override=list('Tech|Electrolysis|Output Capacity|Hydrogen' = 'kW;LHV'))\n
DataSet$new('Tech|Electrolysis')$aggregate(subtech='AEL', size='100 MW', agg='subtech', override={'Tech|Electrolysis|Output Capacity|Hydrogen'='kW;LHV'}) A data.frame: 99 \u00d7 6 sourcevariableregionperiodunitvalue <chr><chr><chr><dbl><named list><dbl> DEARF23Tech|Electrolysis|CAPEX World2030USD_20051.105e+02 DEARF23Tech|Electrolysis|CAPEX World2040USD_20056.650e+01 DEARF23Tech|Electrolysis|CAPEX World2050USD_20055.046e+01 DEARF23Tech|Electrolysis|Input|Electricity World2030MWh2.163e+00 DEARF23Tech|Electrolysis|Input|Electricity World2040MWh1.947e+00 DEARF23Tech|Electrolysis|Input|Electricity World2050MWh1.778e+00 DEARF23Tech|Electrolysis|OPEX Fixed World2030USD_2005/a2.210e+00 DEARF23Tech|Electrolysis|OPEX Fixed World2040USD_2005/a1.330e+00 DEARF23Tech|Electrolysis|OPEX Fixed World2050USD_2005/a1.009e+00 DEARF23Tech|Electrolysis|Output Capacity|Hydrogen World2030MWh/a1.000e+00 DEARF23Tech|Electrolysis|Output Capacity|Hydrogen World2040MWh/a1.000e+00 DEARF23Tech|Electrolysis|Output Capacity|Hydrogen World2050MWh/a1.000e+00 DEARF23Tech|Electrolysis|Output|Hydrogen World2030MWh1.000e+00 DEARF23Tech|Electrolysis|Output|Hydrogen World2040MWh1.000e+00 DEARF23Tech|Electrolysis|Output|Hydrogen World2050MWh1.000e+00 DEARF23Tech|Electrolysis|Total Input Capacity|ElectricityWorld2030MWh/a8.760e+05 DEARF23Tech|Electrolysis|Total Input Capacity|ElectricityWorld2040MWh/a8.760e+05 DEARF23Tech|Electrolysis|Total Input Capacity|ElectricityWorld2050MWh/a8.760e+05 Holst21Tech|Electrolysis|CAPEX World2030USD_20056.895e+01 Holst21Tech|Electrolysis|CAPEX World2040USD_20056.895e+01 Holst21Tech|Electrolysis|CAPEX World2050USD_20056.895e+01 Holst21Tech|Electrolysis|Input|Electricity World2030MWh1.872e+00 Holst21Tech|Electrolysis|Input|Electricity World2040MWh1.872e+00 Holst21Tech|Electrolysis|Input|Electricity World2050MWh1.872e+00 Holst21Tech|Electrolysis|OPEX Fixed World2030USD_2005/a3.106e+00 Holst21Tech|Electrolysis|OPEX Fixed World2040USD_2005/a3.106e+00 Holst21Tech|Electrolysis|OPEX Fixed World2050USD_2005/a3.106e+00 Holst21Tech|Electrolysis|Output Capacity|Hydrogen World2030MWh/a1.000e+00 Holst21Tech|Electrolysis|Output Capacity|Hydrogen World2040MWh/a1.000e+00 Holst21Tech|Electrolysis|Output Capacity|Hydrogen World2050MWh/a1.000e+00 \u22ee\u22ee\u22ee\u22ee\u22ee\u22ee Vartiainen22Tech|Electrolysis|OPEX Fixed World2030USD_2005/a3.405e-01 Vartiainen22Tech|Electrolysis|OPEX Fixed World2040USD_2005/a1.135e-01 Vartiainen22Tech|Electrolysis|OPEX Fixed World2050USD_2005/a3.734e-02 Vartiainen22Tech|Electrolysis|Output Capacity|Hydrogen World2030MWh/a1.000e+00 Vartiainen22Tech|Electrolysis|Output Capacity|Hydrogen World2040MWh/a1.000e+00 Vartiainen22Tech|Electrolysis|Output Capacity|Hydrogen World2050MWh/a1.000e+00 Vartiainen22Tech|Electrolysis|Output|Hydrogen World2030MWh1.000e+00 Vartiainen22Tech|Electrolysis|Output|Hydrogen World2040MWh1.000e+00 Vartiainen22Tech|Electrolysis|Output|Hydrogen World2050MWh1.000e+00 Vartiainen22Tech|Electrolysis|Total Input Capacity|ElectricityWorld2030MWh/a8.760e+05 Vartiainen22Tech|Electrolysis|Total Input Capacity|ElectricityWorld2040MWh/a8.760e+05 Vartiainen22Tech|Electrolysis|Total Input Capacity|ElectricityWorld2050MWh/a8.760e+05 Yates20 Tech|Electrolysis|Input|Electricity World2030MWh2.625e+00 Yates20 Tech|Electrolysis|Input|Electricity World2040MWh2.625e+00 Yates20 Tech|Electrolysis|Input|Electricity World2050MWh2.625e+00 Yates20 Tech|Electrolysis|Input|Water World2030t4.852e-01 Yates20 Tech|Electrolysis|Input|Water World2040t4.852e-01 Yates20 
Tech|Electrolysis|Input|Water World2050t4.852e-01 Yates20 Tech|Electrolysis|OPEX Fixed World2030USD_2005/a2.418e+00 Yates20 Tech|Electrolysis|OPEX Fixed World2040USD_2005/a2.418e+00 Yates20 Tech|Electrolysis|OPEX Fixed World2050USD_2005/a2.418e+00 Yates20 Tech|Electrolysis|Output Capacity|Hydrogen World2030MWh/a1.000e+00 Yates20 Tech|Electrolysis|Output Capacity|Hydrogen World2040MWh/a1.000e+00 Yates20 Tech|Electrolysis|Output Capacity|Hydrogen World2050MWh/a1.000e+00 Yates20 Tech|Electrolysis|Output|Hydrogen World2030MWh1.000e+00 Yates20 Tech|Electrolysis|Output|Hydrogen World2040MWh1.000e+00 Yates20 Tech|Electrolysis|Output|Hydrogen World2050MWh1.000e+00 Yates20 Tech|Electrolysis|Total Input Capacity|ElectricityWorld2030MWh/a8.760e+05 Yates20 Tech|Electrolysis|Total Input Capacity|ElectricityWorld2040MWh/a8.760e+05 Yates20 Tech|Electrolysis|Total Input Capacity|ElectricityWorld2050MWh/a8.760e+05 In\u00a0[10]: Copied!
# DataSet$new('Tech|Methane Reforming')$aggregate(period=2030) %>% filter(grepl('OM Cost', variable))\n# DataSet$new('Tech|Methane Reforming')$aggregate(period=2030) %>% filter(grepl('Demand', variable))\nDataSet$new('Tech|Methane Reforming')$aggregate(period=2030) %>% arrange(variable)\n
# DataSet$new('Tech|Methane Reforming')$aggregate(period=2030).query(\"variable.str.contains('OM Cost')\")) # display(DataSet('Tech|Methane Reforming').aggregate(period=2030).query(\"variable.str.contains('Demand')\")) DataSet$new('Tech|Methane Reforming')$aggregate(period=2030) %>% arrange(variable) A data.frame: 77 \u00d7 7 subtechcapture_ratevariableregionperiodunitvalue <chr><chr><chr><chr><dbl><named list><dbl> ATR94.50%Tech|Methane Reforming|CAPEX World2030USD_20051.148e+01 SMR0.00% Tech|Methane Reforming|CAPEX World2030USD_20057.169e+01 SMR55.70%Tech|Methane Reforming|CAPEX World2030USD_20051.741e+02 SMR90.00%Tech|Methane Reforming|CAPEX World2030USD_20052.797e+02 SMR96.20%Tech|Methane Reforming|CAPEX World2030USD_20057.395e+00 ATR94.50%Tech|Methane Reforming|Capture Rate World2030pct9.450e+01 SMR0.00% Tech|Methane Reforming|Capture Rate World2030pct0.000e+00 SMR55.70%Tech|Methane Reforming|Capture Rate World2030pct5.570e+01 SMR90.00%Tech|Methane Reforming|Capture Rate World2030pct9.000e+01 SMR96.20%Tech|Methane Reforming|Capture Rate World2030pct9.620e+01 ATR94.50%Tech|Methane Reforming|Input|ElectricityWorld2030MWh1.440e-02 SMR0.00% Tech|Methane Reforming|Input|ElectricityWorld2030MWh3.756e-04 SMR96.20%Tech|Methane Reforming|Input|ElectricityWorld2030MWh3.736e-03 ATR94.50%Tech|Methane Reforming|Input|Fossil Gas World2030MWh1.662e-01 SMR0.00% Tech|Methane Reforming|Input|Fossil Gas World2030MWh1.013e+00 SMR55.70%Tech|Methane Reforming|Input|Fossil Gas World2030MWh2.133e+00 SMR90.00%Tech|Methane Reforming|Input|Fossil Gas World2030MWh2.414e+00 SMR96.20%Tech|Methane Reforming|Input|Fossil Gas World2030MWh9.006e-02 ATR0.00% Tech|Methane Reforming|Lifetime World2030a3.000e+01 ATR55.70%Tech|Methane Reforming|Lifetime World2030a3.000e+01 ATR90.00%Tech|Methane Reforming|Lifetime World2030a3.000e+01 ATR94.50%Tech|Methane Reforming|Lifetime World2030a3.000e+01 ATR96.20%Tech|Methane Reforming|Lifetime World2030a3.000e+01 SMR0.00% Tech|Methane Reforming|Lifetime World2030a2.750e+01 SMR55.70%Tech|Methane Reforming|Lifetime World2030a2.750e+01 SMR90.00%Tech|Methane Reforming|Lifetime World2030a2.750e+01 SMR94.50%Tech|Methane Reforming|Lifetime World2030a2.750e+01 SMR96.20%Tech|Methane Reforming|Lifetime World2030a2.750e+01 ATR0.00% Tech|Methane Reforming|OCF World2030pct9.000e+01 ATR55.70%Tech|Methane Reforming|OCF World2030pct9.000e+01 \u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee SMR96.20%Tech|Methane Reforming|OPEX Variable World2030USD_20053.519e-01 ATR0.00% Tech|Methane Reforming|Output Capacity|Hydrogen World2030MWh/a1.000e+00 ATR55.70%Tech|Methane Reforming|Output Capacity|Hydrogen World2030MWh/a1.000e+00 ATR90.00%Tech|Methane Reforming|Output Capacity|Hydrogen World2030MWh/a1.000e+00 ATR94.50%Tech|Methane Reforming|Output Capacity|Hydrogen World2030MWh/a1.000e+00 ATR96.20%Tech|Methane Reforming|Output Capacity|Hydrogen World2030MWh/a1.000e+00 SMR0.00% Tech|Methane Reforming|Output Capacity|Hydrogen World2030MWh/a1.000e+00 SMR55.70%Tech|Methane Reforming|Output Capacity|Hydrogen World2030MWh/a1.000e+00 SMR90.00%Tech|Methane Reforming|Output Capacity|Hydrogen World2030MWh/a1.000e+00 SMR94.50%Tech|Methane Reforming|Output Capacity|Hydrogen World2030MWh/a1.000e+00 SMR96.20%Tech|Methane Reforming|Output Capacity|Hydrogen World2030MWh/a1.000e+00 SMR0.00% Tech|Methane Reforming|Output|Electricity World2030MWh5.025e-02 SMR55.70%Tech|Methane Reforming|Output|Electricity World2030MWh7.801e-03 SMR90.00%Tech|Methane Reforming|Output|Electricity World2030MWh2.371e-03 ATR0.00% Tech|Methane 
Reforming|Output|Hydrogen World2030MWh1.000e+00 ATR55.70%Tech|Methane Reforming|Output|Hydrogen World2030MWh1.000e+00 ATR90.00%Tech|Methane Reforming|Output|Hydrogen World2030MWh1.000e+00 ATR94.50%Tech|Methane Reforming|Output|Hydrogen World2030MWh1.000e+00 ATR96.20%Tech|Methane Reforming|Output|Hydrogen World2030MWh1.000e+00 SMR0.00% Tech|Methane Reforming|Output|Hydrogen World2030MWh1.000e+00 SMR55.70%Tech|Methane Reforming|Output|Hydrogen World2030MWh1.000e+00 SMR90.00%Tech|Methane Reforming|Output|Hydrogen World2030MWh1.000e+00 SMR94.50%Tech|Methane Reforming|Output|Hydrogen World2030MWh1.000e+00 SMR96.20%Tech|Methane Reforming|Output|Hydrogen World2030MWh1.000e+00 ATR94.50%Tech|Methane Reforming|Total Output Capacity|HydrogenWorld2030MWh/a8.029e+06 SMR0.00% Tech|Methane Reforming|Total Output Capacity|HydrogenWorld2030MWh/a4.161e+06 SMR55.70%Tech|Methane Reforming|Total Output Capacity|HydrogenWorld2030MWh/a4.161e+06 SMR90.00%Tech|Methane Reforming|Total Output Capacity|HydrogenWorld2030MWh/a4.161e+06 SMR94.50%Tech|Methane Reforming|Total Output Capacity|HydrogenWorld2030MWh/a4.161e+06 SMR96.20%Tech|Methane Reforming|Total Output Capacity|HydrogenWorld2030MWh/a4.161e+06 In\u00a0[11]: Copied!
DataSet$new('Tech|Direct Air Capture')$normalise()\n
DataSet$new('Tech|Direct Air Capture')$normalise() A data.frame: 90 \u00d7 15 parent_variablesubtechcomponentregionperiodvariablereference_variablevalueuncertaintyunitreference_valuereference_unitcommentsourcesource_detail <chr><chr><chr><chr><chr><chr><chr><dbl><dbl><chr><dbl><chr><chr><chr><chr> Tech|Direct Air CaptureHT-DAC# *2018CAPEX Output Capacity|Captured CO2951.5012518NAUSD_2005 1t/aEarly plant. Value reported for a capacity of 0.98 Mt-CO2/year. Corresponding to the Carbon Engineering pilot plant. Keith18 Table 3 Tech|Direct Air CaptureHT-DAC# *2018CAPEX Output Capacity|Captured CO2658.2314748NAUSD_2005 1t/aNth plant. Value reported for a capacity of 0.98 Mt-CO2/year. Keith18 Table 3 Tech|Direct Air CaptureHT-DAC# *2018OPEX Variable Output|Captured CO2 21.5160205NAUSD_2005 1t Corresponding to the Carbon Engineering pilot plant (scenario C). Keith18 Table 2 Tech|Direct Air CaptureHT-DAC# *2018Input|Electricity Output|Captured CO2 0.3660000NAMWh 1t Corresponding to the Carbon Engineering pilot plant (scenario C). Keith18 Table 2 Tech|Direct Air CaptureHT-DAC# *2018OCF 90.0000000NApct NANA Corresponding to the Carbon Engineering pilot plant (scenario C). Keith18 Table 2 Tech|Direct Air CaptureHT-DAC# *2018Input|Heat Output|Captured CO2 1.4583333NAMWh 1t Is assumed to be NG consumption in the paper, as high temperatures are needed (900\u2009\u00b0C). Corresponding to the Carbon Engineering pilot plant (scenario C).Keith18 Table 2 Tech|Direct Air CaptureHT-DAC# *2020CAPEX Output Capacity|Captured CO2810.3907887NAUSD_2005 1t/aNo explicit cost basis given, assumed to be EUR2020 as this is the time of cost. Author's own assumption. Fasihi19Table 4 Tech|Direct Air CaptureLT-DAC# *2020CAPEX Output Capacity|Captured CO2725.8715040NAUSD_2005 1t/aNo explicit cost basis given, assumed to be EUR2020 as this is the time of cost. Author's own assumption. Fasihi19Table 4 Tech|Direct Air CaptureHT-DAC# *2020OPEX Fixed Relative 3.7000000NApct NANA Author's own assumption. Fasihi19Table 4 Tech|Direct Air CaptureLT-DAC# *2020OPEX Fixed Relative 4.0000000NApct NANA Author's own assumption. Fasihi19Table 4 Tech|Direct Air CaptureHT-DAC# *2020Input|Electricity Output|Captured CO2 1.5350000NAMWh 1t Author's own assumption. Fasihi19Table 4 Tech|Direct Air CaptureHT-DAC# *2020Input|Heat Output|Captured CO2 0.0000000NAMWh 1t Author's own assumption. Fasihi19Table 4 Tech|Direct Air CaptureLT-DAC# *2020Input|Electricity Output|Captured CO2 0.2500000NAMWh 1t Author's own assumption. Fasihi19Table 4 Tech|Direct Air CaptureLT-DAC# *2020Input|Heat Output|Captured CO2 1.7500000NAMWh 1t Assuming low temperature (80\u2013100\u2009\u00b0C). Author's own assumption. 
Fasihi19Table 4 Tech|Direct Air CaptureHT-DAC# *2020Lifetime 25.0000000NAa NANA Fasihi19Table 4 Tech|Direct Air CaptureLT-DAC# *2020Lifetime 20.0000000NAa NANA Fasihi19Table 4 Tech|Direct Air Capture* CO2 injection *2021Input|Electricity Output|Captured CO2 0.0070000NAMWh 1t Madhu21 Table 1 and 2 Tech|Direct Air Capture* CO2 compression*2021Input|Electricity Output|Captured CO2 0.1110000NAMWh 1t Madhu21 Table 1 and 2 Tech|Direct Air CaptureHT-DACCO2 capture *2021Input|Electricity Output|Captured CO2 0.3450000NAMWh 1t Reference case Madhu21 Table 1 Tech|Direct Air CaptureHT-DACCO2 capture *2021Input|Electricity Output|Captured CO2 0.3370000NAMWh 1t Best case Madhu21 Table 1 Tech|Direct Air CaptureHT-DACCO2 capture *2021Input|Electricity Output|Captured CO2 0.4490000NAMWh 1t Worst case Madhu21 Table 1 Tech|Direct Air CaptureHT-DAC# *2021Input|Heat Output|Captured CO2 1.2416667NAMWh 1t Reference case. Assuming high temperature (850\u2013900\u2009\u00b0C). Madhu21 Table 1 Tech|Direct Air CaptureHT-DAC# *2021Input|Heat Output|Captured CO2 1.1250000NAMWh 1t Best case. Assuming high temperature (850\u2013900\u2009\u00b0C). Madhu21 Table 1 Tech|Direct Air CaptureHT-DAC# *2021Input|Heat Output|Captured CO2 1.2416667NAMWh 1t Worst case. Assuming high temperature (850\u2013900\u2009\u00b0C). Madhu21 Table 1 Tech|Direct Air CaptureLT-DACCO2 capture *2021Input|Electricity Output|Captured CO2 0.1800000NAMWh 1t Reference case. Madhu21 Table 2 Tech|Direct Air CaptureLT-DACCO2 capture *2021Input|Electricity Output|Captured CO2 0.1300000NAMWh 1t Best case. Madhu21 Table 2 Tech|Direct Air CaptureLT-DACCO2 capture *2021Input|Electricity Output|Captured CO2 0.3500000NAMWh 1t Worst case. Madhu21 Table 2 Tech|Direct Air CaptureLT-DAC# *2021Input|Heat Output|Captured CO2 0.7222222NAMWh 1t Reference case. Assuming low temperature (80\u2013120\u2009\u00b0C). Madhu21 Table 2 Tech|Direct Air CaptureLT-DAC# *2021Input|Heat Output|Captured CO2 0.6388889NAMWh 1t Best case. Assuming low temperature (80\u2013120\u2009\u00b0C). Madhu21 Table 2 Tech|Direct Air CaptureLT-DAC# *2021Input|Heat Output|Captured CO2 1.7222222NAMWh 1t Worst case. Assuming low temperature (80\u2013120\u2009\u00b0C). Madhu21 Table 2 \u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee\u22ee Tech|Direct Air CaptureHT-DAC# *2019CAPEX Output Capacity|Captured CO21146.0000000NAUSD_2005 1t/aNo cost basis given. See Supplementary information. Low assumption. Legend states that reference plant size is used here, which is 1Mt/a for both. Realmonte19Table S4 Tech|Direct Air CaptureHT-DAC# *2019CAPEX Output Capacity|Captured CO22060.0000000NAUSD_2005 1t/aNo cost basis given. See Supplementary information. High assumption. Legend states that reference plant size is used here, which is 1Mt/a for both.Realmonte19Table S4 Tech|Direct Air CaptureLT-DAC# *2019Input|ElectricityOutput|Captured CO2 0.1666667NAMWh 1t See Supplementary information. Low assumption. Realmonte19Table S4 Tech|Direct Air CaptureLT-DAC# *2019Input|ElectricityOutput|Captured CO2 0.3055556NAMWh 1t See Supplementary information. High assumption. Realmonte19Table S4 Tech|Direct Air CaptureLT-DAC# *2019Input|Heat Output|Captured CO2 1.2222222NAMWh 1t Low temperature (85\u00b0-120\u00b0C). See Supplementary information. Low assumption. Realmonte19Table S4 Tech|Direct Air CaptureLT-DAC# *2019Input|Heat Output|Captured CO2 2.0000000NAMWh 1t Low temperature (85\u00b0-120\u00b0C). See Supplementary information. High assumption. 
Realmonte19Table S4 Tech|Direct Air CaptureHT-DAC# *2019Input|ElectricityOutput|Captured CO2 0.3611111NAMWh 1t See Supplementary information. Low assumption. Realmonte19Table S4 Tech|Direct Air CaptureHT-DAC# *2019Input|ElectricityOutput|Captured CO2 0.5000000NAMWh 1t See Supplementary information. High assumption. Realmonte19Table S4 Tech|Direct Air CaptureHT-DAC# *2019Input|Heat Output|Captured CO2 1.4722222NAMWh 1t High temperature (T > 800\u00b0C). See Supplementary information. Low assumption. Realmonte19Table S4 Tech|Direct Air CaptureHT-DAC# *2019Input|Heat Output|Captured CO2 2.2500000NAMWh 1t High temperature (T > 800\u00b0C). See Supplementary information. High assumption. Realmonte19Table S4 Tech|Direct Air CaptureLT-DAC# *2019Lifetime 15.0000000NAa NANA See Supplementary information. Realmonte19Figure S8 Tech|Direct Air CaptureHT-DAC# *2019Lifetime 20.0000000NAa NANA See Supplementary information. Realmonte19Figure S8 Tech|Direct Air CaptureHT-DAC# *2019CAPEX Output Capacity|Captured CO21038.5617586NAUSD_2005 1t/aUpper bound. Second line, for slaker causticizer, and clarificator mentions that currencies are converted to USD 2016. NASEM19 Table 5.3 Tech|Direct Air CaptureHT-DAC# *2019CAPEX Output Capacity|Captured CO2 558.5889936NAUSD_2005 1t/aLower bound NASEM19 Table 5.3 Tech|Direct Air CaptureHT-DACOperation & Maintenance*2019OPEX Variable Output|Captured CO2 14.8957065NAUSD_2005 1t Lower bound NASEM19 Table 5.3 Tech|Direct Air CaptureHT-DACOperation & Maintenance*2019OPEX Variable Output|Captured CO2 27.3087952NAUSD_2005 1t Upper bound NASEM19 Table 5.3 Tech|Direct Air CaptureHT-DACLabour *2019OPEX Variable Output|Captured CO2 4.9652355NAUSD_2005 1t Lower bound NASEM19 Table 5.3 Tech|Direct Air CaptureHT-DACLabour *2019OPEX Variable Output|Captured CO2 8.2753925NAUSD_2005 1t Upper bound NASEM19 Table 5.3 Tech|Direct Air CaptureHT-DACOther *2019OPEX Variable Output|Captured CO2 4.1376962NAUSD_2005 1t Lower bound NASEM19 Table 5.3 Tech|Direct Air CaptureHT-DACOther *2019OPEX Variable Output|Captured CO2 5.7927747NAUSD_2005 1t Upper bound NASEM19 Table 5.3 Tech|Direct Air CaptureLT-DACAdsorbent *2019CAPEX Output Capacity|Captured CO2 57.9277475NAUSD_2005 1t/aCase 2: low cost case. NASEM19 Table 5.10 Tech|Direct Air CaptureLT-DACAdsorbent *2019CAPEX Output Capacity|Captured CO2 153.9223005NAUSD_2005 1t/aCase 4: high cost case. Costs assumed to be in USD2016, like the HT case NASEM19 Table 5.10 Tech|Direct Air CaptureLT-DACBlower *2019CAPEX Output Capacity|Captured CO2 1.7378324NAUSD_2005 1t/aCase 2: low cost case. NASEM19 Table 5.10 Tech|Direct Air CaptureLT-DACBlower *2019CAPEX Output Capacity|Captured CO2 5.5445130NAUSD_2005 1t/aCase 4: high cost case. NASEM19 Table 5.10 Tech|Direct Air CaptureLT-DACVacuum pump *2019CAPEX Output Capacity|Captured CO2 2.1516020NAUSD_2005 1t/aCase 2: low cost case. NASEM19 Table 5.10 Tech|Direct Air CaptureLT-DACVacuum pump *2019CAPEX Output Capacity|Captured CO2 7.0340836NAUSD_2005 1t/aCase 4: high cost case. NASEM19 Table 5.10 Tech|Direct Air CaptureLT-DACAdsorption *2019OPEX Variable Output|Captured CO2 7.4478532NAUSD_2005 1t Case 2: low cost case. NASEM19 Table 5.10 Tech|Direct Air CaptureLT-DACAdsorption *2019OPEX Variable Output|Captured CO2 15.7232457NAUSD_2005 1t Case 4: high cost case. NASEM19 Table 5.10 Tech|Direct Air CaptureLT-DACSteam *2019OPEX Variable Output|Captured CO2 1.8205863NAUSD_2005 1t Case 2: low cost case. 
NASEM19 Table 5.10 Tech|Direct Air CaptureLT-DACSteam *2019OPEX Variable Output|Captured CO2 2.4826177NAUSD_2005 1t Case 4: high cost case. NASEM19 Table 5.10 In\u00a0[12]: Copied!
DataSet$new('Tech|Direct Air Capture')$select()\n
DataSet$new('Tech|Direct Air Capture')$select()
Warning message in private$..apply_mappings(selected, var_units):\n\u201cNo appropriate mapping found to convert row reference to primary output: c(2030, 2030); c(\"HT-DAC\", \"HT-DAC\"); c(\"#\", \"#\"); c(\"World\", \"World\"); c(\"Tech|Direct Air Capture|CAPEX\", \"Tech|Direct Air Capture|CAPEX\"); c(\"Tech|Direct Air Capture|Output Capacity|Captured CO2\", \"Tech|Direct Air Capture|Output Capacity|Captured CO2\"); c(\"NASEM19\", \"NASEM19\"); c(NA, NA)\u201d\n
[Analogous warnings, omitted here, repeat for the periods 2030, 2040, and 2050; the subtechs HT-DAC and LT-DAC; the components #, Adsorbent, Blower, and Vacuum pump; and the sources NASEM19 and Okzan22.]
A data.frame: 240 × 9
(columns: subtech, component, source, variable, reference_variable, region, period, unit, value; region is World throughout, and the common variable prefix "Tech|Direct Air Capture" is abbreviated to "…" below)

subtech | component | source | variable | reference_variable | period | unit | value
HT-DAC | # | Fasihi19 | …|CAPEX | …|Output Capacity|Captured CO2 | 2030 | a * USD_2005/t | 1244.0000
HT-DAC | # | Fasihi19 | …|CAPEX | …|Output Capacity|Captured CO2 | 2040 | a * USD_2005/t | 1244.0000
HT-DAC | # | Fasihi19 | …|CAPEX | …|Output Capacity|Captured CO2 | 2050 | a * USD_2005/t | 1244.0000
HT-DAC | # | Fasihi19 | …|Input|Electricity | …|Output|Captured CO2 | 2030 | MWh/t | 2.3560
HT-DAC | # | Fasihi19 | …|Input|Electricity | …|Output|Captured CO2 | 2040 | MWh/t | 2.3560
HT-DAC | # | Fasihi19 | …|Input|Electricity | …|Output|Captured CO2 | 2050 | MWh/t | 2.3560
HT-DAC | # | Fasihi19 | …|Input|Heat | …|Output|Captured CO2 | 2030 | MWh/t | 0.0000
HT-DAC | # | Fasihi19 | …|Input|Heat | …|Output|Captured CO2 | 2040 | MWh/t | 0.0000
HT-DAC | # | Fasihi19 | …|Input|Heat | …|Output|Captured CO2 | 2050 | MWh/t | 0.0000
HT-DAC | # | Fasihi19 | …|Lifetime | NA | 2030 | a | 25.0000
HT-DAC | # | Fasihi19 | …|Lifetime | NA | 2040 | a | 25.0000
HT-DAC | # | Fasihi19 | …|Lifetime | NA | 2050 | a | 25.0000
HT-DAC | # | Fasihi19 | …|OPEX Fixed | …|Output Capacity|Captured CO2 | 2030 | USD_2005/t | 46.0300
HT-DAC | # | Fasihi19 | …|OPEX Fixed | …|Output Capacity|Captured CO2 | 2040 | USD_2005/t | 46.0300
HT-DAC | # | Fasihi19 | …|OPEX Fixed | …|Output Capacity|Captured CO2 | 2050 | USD_2005/t | 46.0300
HT-DAC | # | IEA-DAC22 | …|Input|Heat | …|Output|Captured CO2 | 2030 | MWh/t | 2.1670
HT-DAC | # | IEA-DAC22 | …|Input|Heat | …|Output|Captured CO2 | 2040 | MWh/t | 2.1670
HT-DAC | # | IEA-DAC22 | …|Input|Heat | …|Output|Captured CO2 | 2050 | MWh/t | 2.1670
HT-DAC | # | Keith18 | …|CAPEX | …|Output Capacity|Captured CO2 | 2030 | a * USD_2005/t | 348.2000
HT-DAC | # | Keith18 | …|CAPEX | …|Output Capacity|Captured CO2 | 2030 | a * USD_2005/t | 240.9000
HT-DAC | # | Keith18 | …|CAPEX | …|Output Capacity|Captured CO2 | 2040 | a * USD_2005/t | 348.2000
HT-DAC | # | Keith18 | …|CAPEX | …|Output Capacity|Captured CO2 | 2040 | a * USD_2005/t | 240.9000
HT-DAC | # | Keith18 | …|CAPEX | …|Output Capacity|Captured CO2 | 2050 | a * USD_2005/t | 348.2000
HT-DAC | # | Keith18 | …|CAPEX | …|Output Capacity|Captured CO2 | 2050 | a * USD_2005/t | 240.9000
HT-DAC | # | Keith18 | …|Input|Electricity | …|Output|Captured CO2 | 2030 | MWh/t | 0.1340
HT-DAC | # | Keith18 | …|Input|Electricity | …|Output|Captured CO2 | 2040 | MWh/t | 0.1340
HT-DAC | # | Keith18 | …|Input|Electricity | …|Output|Captured CO2 | 2050 | MWh/t | 0.1340
HT-DAC | # | Keith18 | …|Input|Heat | …|Output|Captured CO2 | 2030 | MWh/t | 0.5338
HT-DAC | # | Keith18 | …|Input|Heat | …|Output|Captured CO2 | 2040 | MWh/t | 0.5338
HT-DAC | # | Keith18 | …|Input|Heat | …|Output|Captured CO2 | 2050 | MWh/t | 0.5338
⋮
LT-DAC | CO2 capture | Madhu21 | …|Input|Electricity | …|Output|Captured CO2 | 2040 | MWh/t | 3.240e-02
LT-DAC | CO2 capture | Madhu21 | …|Input|Electricity | …|Output|Captured CO2 | 2040 | MWh/t | 2.340e-02
LT-DAC | CO2 capture | Madhu21 | …|Input|Electricity | …|Output|Captured CO2 | 2040 | MWh/t | 6.300e-02
LT-DAC | CO2 capture | Madhu21 | …|Input|Electricity | …|Output|Captured CO2 | 2050 | MWh/t | 3.240e-02
LT-DAC | CO2 capture | Madhu21 | …|Input|Electricity | …|Output|Captured CO2 | 2050 | MWh/t | 2.340e-02
LT-DAC | CO2 capture | Madhu21 | …|Input|Electricity | …|Output|Captured CO2 | 2050 | MWh/t | 6.300e-02
LT-DAC | CO2 compression | IEA-DAC22 | …|Input|Electricity | …|Output|Captured CO2 | 2030 | MWh/t | 1.929e-02
LT-DAC | CO2 compression | IEA-DAC22 | …|Input|Electricity | …|Output|Captured CO2 | 2040 | MWh/t | 1.929e-02
LT-DAC | CO2 compression | IEA-DAC22 | …|Input|Electricity | …|Output|Captured CO2 | 2050 | MWh/t | 1.929e-02
LT-DAC | CO2 compression | Madhu21 | …|Input|Electricity | …|Output|Captured CO2 | 2030 | MWh/t | 1.232e-02
LT-DAC | CO2 compression | Madhu21 | …|Input|Electricity | …|Output|Captured CO2 | 2040 | MWh/t | 1.232e-02
LT-DAC | CO2 compression | Madhu21 | …|Input|Electricity | …|Output|Captured CO2 | 2050 | MWh/t | 1.232e-02
LT-DAC | CO2 injection | Madhu21 | …|Input|Electricity | …|Output|Captured CO2 | 2030 | MWh/t | 4.900e-05
LT-DAC | CO2 injection | Madhu21 | …|Input|Electricity | …|Output|Captured CO2 | 2040 | MWh/t | 4.900e-05
LT-DAC | CO2 injection | Madhu21 | …|Input|Electricity | …|Output|Captured CO2 | 2050 | MWh/t | 4.900e-05
LT-DAC | Non-compression | IEA-DAC22 | …|Input|Electricity | …|Output|Captured CO2 | 2030 | MWh/t | 2.500e-01
LT-DAC | Non-compression | IEA-DAC22 | …|Input|Electricity | …|Output|Captured CO2 | 2040 | MWh/t | 2.500e-01
LT-DAC | Non-compression | IEA-DAC22 | …|Input|Electricity | …|Output|Captured CO2 | 2050 | MWh/t | 2.500e-01
LT-DAC | O&M | Okzan22 | …|OPEX Variable | …|Output|Captured CO2 | 2030 | USD_2005/t | 2.500e+01
LT-DAC | O&M | Okzan22 | …|OPEX Variable | …|Output|Captured CO2 | 2030 | USD_2005/t | 2.500e+02
LT-DAC | O&M | Okzan22 | …|OPEX Variable | …|Output|Captured CO2 | 2040 | USD_2005/t | 2.500e+01
LT-DAC | O&M | Okzan22 | …|OPEX Variable | …|Output|Captured CO2 | 2040 | USD_2005/t | 2.500e+02
LT-DAC | O&M | Okzan22 | …|OPEX Variable | …|Output|Captured CO2 | 2050 | USD_2005/t | 2.500e+01
LT-DAC | O&M | Okzan22 | …|OPEX Variable | …|Output|Captured CO2 | 2050 | USD_2005/t | 2.500e+02
LT-DAC | Steam | NASEM19 | …|OPEX Variable | …|Output|Captured CO2 | 2030 | USD_2005/t | 3.315e+00
LT-DAC | Steam | NASEM19 | …|OPEX Variable | …|Output|Captured CO2 | 2030 | USD_2005/t | 4.520e+00
LT-DAC | Steam | NASEM19 | …|OPEX Variable | …|Output|Captured CO2 | 2040 | USD_2005/t | 3.315e+00
LT-DAC | Steam | NASEM19 | …|OPEX Variable | …|Output|Captured CO2 | 2040 | USD_2005/t | 4.520e+00
LT-DAC | Steam | NASEM19 | …|OPEX Variable | …|Output|Captured CO2 | 2050 | USD_2005/t | 3.315e+00
LT-DAC | Steam | NASEM19 | …|OPEX Variable | …|Output|Captured CO2 | 2050 | USD_2005/t | 4.520e+00

In [13]:
TEDF$new('Tech|Haber-Bosch with ASU')$load()  # $check()\nDataSet$new('Tech|Haber-Bosch with ASU')$normalise()\n
<TEDF>\n  Inherits from: <TEBase>\n  Public:\n    check: function (raise_exception = TRUE) \n    check_row: function (row_id, raise_exception = TRUE) \n    clone: function (deep = FALSE) \n    data: active binding\n    file_path: active binding\n    inconsistencies: active binding\n    initialize: function (parent_variable, database_id = \"public\", file_path = NULL, \n    load: function () \n    parent_variable: active binding\n    read: function () \n    write: function () \n  Private:\n    ..columns: list\n    ..df: data.frame\n    ..fields: list\n    ..file_path: ./inst/extdata/database/tedfs/Tech/Haber-Bosch with ASU.csv\n    ..inconsistencies: list\n    ..parent_variable: Tech|Haber-Bosch with ASU\n    ..var_specs: list
Warning message in readLines(file, warn = readLines.warn):\n\u201cincomplete final line found on './inst/extdata/database/masks/Tech/Haber-Bosch with ASU.yml'\u201d\n
A data.frame: 23 × 14
(columns: parent_variable, component, region, period, variable, reference_variable, value, uncertainty, unit, reference_value, reference_unit, comment, source, source_detail; parent_variable is Tech|Haber-Bosch with ASU and region is * for all rows, so both are omitted below, and the comment is moved to the last column)

component | period | variable | reference_variable | value | uncertainty | unit | reference_value | reference_unit | source | source_detail | comment
# | 2022 | CAPEX | Output Capacity|Ammonia | 383.1801978 | NA | USD_2005 | 1 | t/a | ArnaizdelPozo22 | Supplementary Information, Table 2 | As stated in Section 2.4.1, the ammonia output of the plant is measured in units of its lower heating value.
# | 2018 | CAPEX | Output Capacity|Ammonia | 539.0569917 | NA | USD_2005 | 1 | t/a | Ikaheimo18 | Page 7, Table 3 | Cost basis not given in the paper, assumed to be 2018.
# | 2025 | CAPEX | Output Capacity|Ammonia | 849.4034361 | NA | USD_2005 | 1 | t/a | Grahn22 | Table 7 | Error in Table 7, units are given in EUR/MW. Table 3 contradicts this.
# | 2040 | CAPEX | Output Capacity|Ammonia | 515.7092291 | NA | USD_2005 | 1 | t/a | Grahn22 | Table 7 | Error in Table 7, units are given in EUR/MW. Table 3 contradicts this.
# | 2022 | OPEX Fixed Relative | NA | 3.0000000 | NA | pct | NA | NA | ArnaizdelPozo22 | Supplementary Information, Table 2 |
# | 2025 | OPEX Fixed Relative | NA | 4.0000000 | NA | pct | NA | NA | Grahn22 | Table 7 |
# | 2040 | OPEX Fixed Relative | NA | 4.0000000 | NA | pct | NA | NA | Grahn22 | Table 7 |
# | 2018 | OPEX Fixed | Output Capacity|Ammonia | 10.5332975 | NA | USD_2005/a | 1 | t/a | Ikaheimo18 | Page 7, Table 3 | Cost basis not given in the paper, assumed to be 2018.
# | 2022 | OPEX Variable | Output|Ammonia | 3132.1883891 | NA | USD_2005 | 1 | t | ArnaizdelPozo22 | Supplementary Information, Table 2 | As stated in Section 2.4.1, the ammonia output of the plant is measured in units of its lower heating value.
# | 2015 | Input|Hydrogen | Output|Ammonia | 5.9326787 | NA | MWh;LHV | 1 | t | Matzen15 | Page 9 / Table 13 |
# | 2018 | Input|Hydrogen | Output|Ammonia | 6.0270060 | NA | MWh;LHV | 1 | t | Stolz22 | Table 5 | Found in supplementary information.
Air-separation unit | 2015 | Input|Electricity | Output|Ammonia | 0.7236234 | NA | MWh | 1 | t | Matzen15 | Page 8 / Table 9 | This source claims 3.1 MJ/kg electricity demand for the ASU. Assuming a nitrogen demand of 0.84 tonnes per tonne of ammonia, this amounts to an electricity demand of 3.1×0.85 GJ per tonne of ammonia.
Air-separation unit | 2016 | Input|Electricity | Output|Ammonia | 0.0500000 | NA | MWh | 1 | t | GrinbergDana16 | Supplementary Table 4 |
Synthesis process | 2016 | Input|Electricity | Output|Ammonia | 0.4444444 | NA | MWh | 1 | t | GrinbergDana16 | Supplementary Table 4 |
Air-separation unit | 2014 | Input|Electricity | Output|Ammonia | 0.0907563 | NA | MWh | 1 | t | Morgan14 | | This line is masked later down the line, as it was not found in the original source (input error?).
Synthesis process | 2014 | Input|Electricity | Output|Ammonia | 0.4000000 | NA | MWh | 1 | t | Morgan14 | | This line is masked later down the line, as it was not found in the original source (input error?).
# | 2018 | Input|Electricity | Output|Ammonia | 0.6400000 | NA | MWh | 1 | t | Ikaheimo18 | Page 4 |
Compressors | 2017 | Input|Electricity | Output|Ammonia | 1.4444444 | 0.3611111114 | MWh | 1 | t | Bazzanella17 | Page 56, Section 4.2.2 |
Air-separation unit | 2017 | Input|Electricity | Output|Ammonia | 0.3300000 | NA | MWh | 1 | t | Bazzanella17 | Page 57, Section 4.2.3 |
H2 compression and others | 2018 | Input|Electricity | Output|Ammonia | 0.3307503 | NA | MWh | 1 | t | Stolz22 | Table 5 | Found in supplementary information.
Air-separation unit | 2018 | Input|Electricity | Output|Ammonia | 0.1354501 | 0.0002625003 | MWh | 1 | t | Stolz22 | Table 5 | Data found in supplementary information. Manually calculated from 0.162 kWh/kg nitrogen and 0.159 kg nitrogen/kWh ammonia.
# | 2025 | Output|Ammonia | Input|Hydrogen | 0.1504760 | NA | t | 1 | MWh;LHV | Grahn22 | Table 7 |
# | 2040 | Output|Ammonia | Input|Hydrogen | 0.1504760 | NA | t | 1 | MWh;LHV | Grahn22 | Table 7 |

In [14]:
DataSet$new('Tech|Haber-Bosch with ASU')$select(period=2020)\n
Warning message in readLines(file, warn = readLines.warn):\n\u201cincomplete final line found on './inst/extdata/database/masks/Tech/Haber-Bosch with ASU.yml'\u201d\n
A data.frame: 20 × 8
(columns: component, source, variable, reference_variable, region, period, unit, value; region is World and period is 2020 for all rows; the common variable prefix "Tech|Haber-Bosch with ASU" is abbreviated to "…" below)

component | source | variable | reference_variable | unit | value
# | ArnaizdelPozo22 | …|CAPEX | …|Output Capacity|Ammonia | a * USD_2005/t | 1.200e+06
# | ArnaizdelPozo22 | …|OPEX Fixed | …|Output Capacity|Ammonia | USD_2005/t | 3.601e+04
# | ArnaizdelPozo22 | …|OPEX Variable | …|Output|Ammonia | USD_2005/t | 9.811e+06
# | Grahn22 | …|CAPEX | …|Output Capacity|Ammonia | a * USD_2005/t | 5.645e+03
# | Grahn22 | …|Input|Hydrogen | …|Output|Ammonia | MWh/t | 4.416e+01
# | Grahn22 | …|OPEX Fixed | …|Output Capacity|Ammonia | USD_2005/t | 2.258e+02
# | Ikaheimo18 | …|CAPEX | …|Output Capacity|Ammonia | a * USD_2005/t | 3.450e+02
# | Ikaheimo18 | …|Input|Electricity | …|Output|Ammonia | MWh/t | 4.096e-01
# | Ikaheimo18 | …|OPEX Fixed | …|Output Capacity|Ammonia | USD_2005/t | 6.741e+00
# | Matzen15 | …|Input|Hydrogen | …|Output|Ammonia | MWh/t | 3.520e+01
# | Stolz22 | …|Input|Hydrogen | …|Output|Ammonia | MWh/t | 3.632e+01
Air-separation unit | Bazzanella17 | …|Input|Electricity | …|Output|Ammonia | MWh/t | 1.089e-01
Air-separation unit | GrinbergDana16 | …|Input|Electricity | …|Output|Ammonia | MWh/t | 2.500e-03
Air-separation unit | Matzen15 | …|Input|Electricity | …|Output|Ammonia | MWh/t | 5.236e-01
Air-separation unit | Morgan14 | …|Input|Electricity | …|Output|Ammonia | MWh/t | 8.237e-03
Air-separation unit | Stolz22 | …|Input|Electricity | …|Output|Ammonia | MWh/t | 1.835e-02
Compressors | Bazzanella17 | …|Input|Electricity | …|Output|Ammonia | MWh/t | 2.086e+00
H2 compression and others | Stolz22 | …|Input|Electricity | …|Output|Ammonia | MWh/t | 1.094e-01
Synthesis process | GrinbergDana16 | …|Input|Electricity | …|Output|Ammonia | MWh/t | 1.975e-01
Synthesis process | Morgan14 | …|Input|Electricity | …|Output|Ammonia | MWh/t | 1.600e-01

In [15]:
DataSet$new('Tech|Haber-Bosch with ASU')$aggregate(period=2020)\n
Warning message in readLines(file, warn = readLines.warn):\n\u201cincomplete final line found on './inst/extdata/database/masks/Tech/Haber-Bosch with ASU.yml'\u201d\n
\nError in eval(parse(text = cond)): object 'variable' not found\nTraceback:\n\n1. DataSet$new(\"Tech|Haber-Bosch with ASU\")$aggregate(period = 2020)\n2. mask$matches(rows)   # at line 910-912 of file /Users/gmax/Documents/PIK_Job/posted_philipp/R/noslag.R\n3. apply_cond(df, w)   # at line 81-83 of file /Users/gmax/Documents/PIK_Job/posted_philipp/R/masking.R\n4. filter(eval(parse(text = cond)))   # at line 20 of file /Users/gmax/Documents/PIK_Job/posted_philipp/R/masking.R\n5. eval(parse(text = cond))\n6. eval(parse(text = cond))
"},{"location":"tutorials/python/overview/","title":"Main POSTED tutorial for python","text":"

First, we import some general-purpose libraries. The Python side of POSTED depends on pandas for working with dataframes. Here we also use plotly and itables for plotting and inspecting data, but posted does not depend on them, and other tools could be used instead. The igraph package is an optional dependency used to represent the interlinkages in value chains, and matplotlib is only used for plotting igraph graphs, which is likewise optional.

In [1]:
import pandas as pd\n\nimport plotly\npd.options.plotting.backend = \"plotly\"\n\n# for process chains only\nimport igraph as ig\nimport matplotlib.pyplot as plt\n

The posted package has to be installed in your Python environment. If it is not installed yet, you can install it directly from the GitHub source code using pip.

In [2]:
try:\n    import posted\nexcept ImportError:\n    ! pip install git+https://github.com/PhilippVerpoort/posted.git@develop\n

Import specific functions and classes from POSTED that will be used later.

In [3]:
from posted.tedf import TEDF\nfrom posted.noslag import DataSet\nfrom posted.units import Q, U\nfrom posted.team import CalcVariable, LCOX, FSCP, ProcessChain, annuity_factor\n
\n---------------------------------------------------------------------------\nImportError                               Traceback (most recent call last)\nCell In[3], line 4\n      2 from posted.noslag import DataSet\n      3 from posted.units import Q, U\n----> 4 from posted.team import CalcVariable, LCOX, FSCP, ProcessChain, annuity_factor\n\nImportError: cannot import name 'ProcessChain' from 'posted.team' (/Users/gmax/Documents/PIK_Job/posted_leo/python/posted/team.py)
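The failed import above means that ProcessChain (and, as later cells show, annuity_factor) is not available in the posted build used to render this notebook. As a hedged workaround sketch (not part of the original tutorial), the optional names can be imported defensively, so that the cells relying on them can be skipped deliberately rather than failing with a NameError:

# Hedged workaround sketch; assumes only ProcessChain and annuity_factor
# are missing from posted.team in this particular build (see traceback above).
from posted.tedf import TEDF
from posted.noslag import DataSet
from posted.units import Q, U
from posted.team import CalcVariable, LCOX, FSCP

try:
    from posted.team import ProcessChain, annuity_factor
except ImportError as exc:
    ProcessChain = annuity_factor = None  # cells using these must be skipped
    print(f"Optional TEAM features unavailable: {exc}")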

We use some basic plotly and pandas functions for plotting and output analysis.

Let's compare CAPEX data for electrolysis for the years 2020–2050 for alkaline (AEL) and PEM electrolysers across different sources (Danish Energy Agency, Breyer, Fraunhofer, IRENA) and different electrolyser plant sizes.

In [4]:
# select data from TEDFs\ndf_elh2 = DataSet('Tech|Electrolysis').select(\n        period=[2020, 2030, 2040, 2050],\n        subtech=['AEL', 'PEM'],\n        override={'Tech|Electrolysis|Output Capacity|Hydrogen': 'kW;LHV'},\n        source=['DEARF23', 'Vartiainen22', 'Holst21', 'IRENA22'],\n        size=['1 MW', '5 MW', '100 MW'],\n        extrapolate_period=False\n    ).query(f\"variable=='Tech|Electrolysis|CAPEX'\")\n\n# display a few examples\ndisplay(df_elh2.sample(15).sort_index())\n\n# sort data and plot\ndf_elh2.assign(size_sort=lambda df: df['size'].str.split(' ', n=1, expand=True).iloc[:, 0].astype(int)) \\\n       .sort_values(by=['size_sort', 'period']) \\\n       .plot.line(x='period', y='value', color='source', facet_col='size', facet_row='subtech')\n
(sample of 15 rows; variable is Tech|Electrolysis|CAPEX, reference_variable is Tech|Electrolysis|Output Capacity|Hydrogen, region is World, and unit is USD_2005/kW for all rows)

index | subtech | size | source | period | value
0 | AEL | 1 MW | DEARF23 | 2020 | 1121.0
23 | AEL | 1 MW | IRENA22 | 2030 | 595.9
38 | AEL | 100 MW | DEARF23 | 2030 | 658.0
40 | AEL | 100 MW | DEARF23 | 2050 | 331.5
53 | AEL | 100 MW | Holst21 | 2020 | 955.4
54 | AEL | 100 MW | Holst21 | 2030 | 604.0
62 | AEL | 100 MW | IRENA22 | 2030 | 595.9
63 | AEL | 100 MW | IRENA22 | 2040 | 409.5
72 | AEL | 100 MW | Vartiainen22 | 2020 | 593.6
74 | AEL | 100 MW | Vartiainen22 | 2040 | 190.7
106 | AEL | 5 MW | IRENA22 | 2040 | 409.5
122 | PEM | 1 MW | DEARF23 | 2050 | 564.2
150 | PEM | 100 MW | DEARF23 | 2020 | 1586.0
166 | PEM | 100 MW | Holst21 | 2020 | 1092.0
192 | PEM | 5 MW | Holst21 | 2030 | 978.9

Based on these many sources and cases (sizes and subtechnologies), we can now aggregate the data for further use.

In [5]:
DataSet('Tech|Electrolysis').aggregate(\n        period=[2020, 2030, 2040, 2050],\n        subtech=['AEL', 'PEM'],\n        override={'Tech|Electrolysis|Output Capacity|Hydrogen': 'kW;LHV'},\n        source=['DEARF23', 'Vartiainen22', 'Holst21', 'IRENA22'],\n        size=['1 MW', '5 MW', '100 MW'],\n        agg=['subtech', 'size', 'source'],\n        extrapolate_period=False,\n    ).team.varsplit('Tech|Electrolysis|*variable') \\\n    .query(f\"variable.isin({['CAPEX', 'Output Capacity|Hydrogen']})\")\n
Out[5]:

index | variable | region | period | unit | value
0 | CAPEX | World | 2020 | USD_2005 | 1046.0
1 | CAPEX | World | 2030 | USD_2005 | 737.4
2 | CAPEX | World | 2040 | USD_2005 | 586.1
3 | CAPEX | World | 2050 | USD_2005 | 499.3
12 | Output Capacity|Hydrogen | World | 2020 | kW | 1.0
13 | Output Capacity|Hydrogen | World | 2030 | kW | 1.0
14 | Output Capacity|Hydrogen | World | 2040 | kW | 1.0
15 | Output Capacity|Hydrogen | World | 2050 | kW | 1.0

Next, let's compare the energy demand of methane reforming (for blue hydrogen) and different types of electrolysis (for green hydrogen).

In [6]:
pd.concat([\n        DataSet('Tech|Methane Reforming').aggregate(period=2030, source='Lewis22'),\n        DataSet('Tech|Electrolysis').aggregate(period=2030, agg=['source', 'size']),\n    ]) \\\n    .reset_index(drop=True) \\\n    .team.varsplit('Tech|?tech|Input|?fuel') \\\n    .assign(tech=lambda df: df.apply(lambda row: f\"{row['tech']}<br>({row['subtech']})\" if pd.isnull(row['capture_rate']) else f\"{row['tech']}<br>({row['subtech']}, {row['capture_rate']} CR)\", axis=1)) \\\n    .plot.bar(x='tech', y='value', color='fuel') \\\n    .update_layout(\n        xaxis_title='Technologies',\n        yaxis_title='Energy demand  ( MWh<sub>LHV</sub> / MWh<sub>LHV</sub> H<sub>2</sub> )',\n        legend_title='Energy carriers',\n    )\n

Next, let's compare the energy demand of iron direct reduction (production of low-carbon crude iron) across sources.

In [7]:
DataSet('Tech|Iron Direct Reduction') \\\n    .aggregate(period=2030, mode='h2', agg=[]) \\\n    .team.varsplit('Tech|Iron Direct Reduction|Input|?fuel') \\\n    .query(f\"fuel != 'Iron Ore'\") \\\n    .team.varcombine('{fuel} ({component})') \\\n    .plot.bar(x='source', y='value', color='variable') \\\n    .update_layout(\n        xaxis_title='Sources',\n        yaxis_title='Energy demand  ( MWh<sub>LHV</sub> / t<sub>DRI</sub> )',\n        legend_title='Energy carriers'\n    )\n

We can also compare the energy demand for operation with hydrogen versus fossil gas based on a single source.

In [8]:
DataSet('Tech|Iron Direct Reduction') \\\n    .select(period=2030, source='Jacobasch21') \\\n    .team.varsplit('Tech|Iron Direct Reduction|Input|?fuel') \\\n    .query(f\"fuel.isin({['Electricity', 'Fossil Gas', 'Hydrogen']})\") \\\n    .plot.bar(x='mode', y='value', color='fuel') \\\n    .update_layout(\n        xaxis_title='Mode of operation',\n        yaxis_title='Energy demand  ( MWh<sub>LHV</sub> / t<sub>DRI</sub> )',\n        legend_title='Energy carriers'\n    )\n

Finally, let's compare the energy demand of Haber-Bosch synthesis between an integrated SMR plant and a plant running on green hydrogen.

In [9]:
pd.concat([\n        DataSet('Tech|Haber-Bosch with ASU').aggregate(period=2024, agg='component'),\n        DataSet('Tech|Haber-Bosch with Reforming').aggregate(period=2024, agg='component')\n    ]) \\\n    .reset_index(drop=True) \\\n    .team.varsplit('Tech|?tech|*variable') \\\n    .query(f\"variable.str.startswith('Input|')\") \\\n    .plot.bar(x='source', y='value', color='variable') \\\n    .update_layout(\n        xaxis_title='Sources',\n        yaxis_title='Energy demand  ( MWh<sub>LHV</sub> / t<sub>NH<sub>3</sub></sub> )',\n        legend_title='Energy carriers'\n    )\n

New variables can be calculated manually via the CalcVariable class. The next example demonstrates this for calculating the levelised cost of hydrogen.

In [10]:
assumptions = pd.DataFrame.from_records([\n    {'elec_price_case': f\"Case {i}\", 'variable': 'Price|Electricity', 'unit': 'EUR_2020/MWh', 'value': 30 + (i-1)*25}\n    for i in range(1, 4)\n] + [\n    {'variable': 'Tech|Electrolysis|OCF', 'value': 50, 'unit': 'pct'},\n    {'variable': 'Annuity Factor', 'value': annuity_factor(Q('5 pct'), Q('18 a')).m, 'unit': '1/a'},\n])\ndisplay(assumptions)\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[10], line 6\n      1 assumptions = pd.DataFrame.from_records([\n      2     {'elec_price_case': f\"Case {i}\", 'variable': 'Price|Electricity', 'unit': 'EUR_2020/MWh', 'value': 30 + (i-1)*25}\n      3     for i in range(1, 4)\n      4 ] + [\n      5     {'variable': 'Tech|Electrolysis|OCF', 'value': 50, 'unit': 'pct'},\n----> 6     {'variable': 'Annuity Factor', 'value': annuity_factor(Q('5 pct'), Q('18 a')).m, 'unit': '1/a'},\n      7 ])\n      8 display(assumptions)\n\nNameError: name 'annuity_factor' is not defined
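The cell fails because annuity_factor could not be imported above. As a minimal stand-in sketch, assuming the textbook annuity formula a(r, n) = r / (1 - (1 + r)^(-n)); posted's own implementation may differ and works with pint quantities such as Q('5 pct') and Q('18 a'):

# Hypothetical stand-in for posted.team.annuity_factor; assumes the textbook
# annuity formula and plain floats instead of pint quantities.
def annuity_factor(interest_rate: float, book_lifetime: float) -> float:
    """Annuity factor a = r / (1 - (1 + r)**-n), in units of 1/a."""
    r, n = interest_rate, book_lifetime
    return r / (1.0 - (1.0 + r) ** -n)

annuity_factor(0.05, 18)  # roughly 0.0855 per year for 5% interest over 18 years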
In [11]:
df_calc = pd.concat([\n        DataSet('Tech|Electrolysis').aggregate(period=[2030, 2040, 2050], subtech=['AEL', 'PEM'], agg=['size', 'source']),\n        assumptions,\n    ]).team.perform(CalcVariable(**{\n        'LCOX|Green Hydrogen|Capital Cost': lambda x: (x['Annuity Factor'] * x['Tech|Electrolysis|CAPEX'] / x['Tech|Electrolysis|Output Capacity|Hydrogen'] / x['Tech|Electrolysis|OCF']),\n        'LCOX|Green Hydrogen|OM Cost Fixed': lambda x: x['Tech|Electrolysis|OPEX Fixed'] / x['Tech|Electrolysis|Output Capacity|Hydrogen'] / x['Tech|Electrolysis|OCF'],\n        'LCOX|Green Hydrogen|Input Cost|Electricity': lambda x: x['Price|Electricity'] * x['Tech|Electrolysis|Input|Electricity'] / x['Tech|Electrolysis|Output|Hydrogen'],\n    }), only_new=True) \\\n    .team.unit_convert(to='EUR_2020/MWh')\n\ndisplay(df_calc.sample(15).sort_index())\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[11], line 3\n      1 df_calc = pd.concat([\n      2         DataSet('Tech|Electrolysis').aggregate(period=[2030, 2040, 2050], subtech=['AEL', 'PEM'], agg=['size', 'source']),\n----> 3         assumptions,\n      4     ]).team.perform(CalcVariable(**{\n      5         'LCOX|Green Hydrogen|Capital Cost': lambda x: (x['Annuity Factor'] * x['Tech|Electrolysis|CAPEX'] / x['Tech|Electrolysis|Output Capacity|Hydrogen'] / x['Tech|Electrolysis|OCF']),\n      6         'LCOX|Green Hydrogen|OM Cost Fixed': lambda x: x['Tech|Electrolysis|OPEX Fixed'] / x['Tech|Electrolysis|Output Capacity|Hydrogen'] / x['Tech|Electrolysis|OCF'],\n      7         'LCOX|Green Hydrogen|Input Cost|Electricity': lambda x: x['Price|Electricity'] * x['Tech|Electrolysis|Input|Electricity'] / x['Tech|Electrolysis|Output|Hydrogen'],\n      8     }), only_new=True) \\\n      9     .team.unit_convert(to='EUR_2020/MWh')\n     11 display(df_calc.sample(15).sort_index())\n\nNameError: name 'assumptions' is not defined
In [12]:
df_calc.team.varsplit('LCOX|Green Hydrogen|?component') \\\n    .sort_values(by=['elec_price_case', 'value']) \\\n    .plot.bar(x='period', y='value', color='component', facet_col='elec_price_case', facet_row='subtech')\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[12], line 1\n----> 1 df_calc.team.varsplit('LCOX|Green Hydrogen|?component') \\\n      2     .sort_values(by=['elec_price_case', 'value']) \\\n      3     .plot.bar(x='period', y='value', color='component', facet_col='elec_price_case', facet_row='subtech')\n\nNameError: name 'df_calc' is not defined

POSTED uses the dataframe pivot method to bring the data into a usable wide format, as the sketch below illustrates.
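As a generic illustration (a sketch in plain pandas, not POSTED's actual implementation), pivoting turns the long format with one row per variable into a wide format with one column per variable:

import pandas as pd

# Made-up long-format data, analogous to POSTED's (period, variable, value) rows
df_long = pd.DataFrame({
    'period':   [2030, 2030, 2050, 2050],
    'variable': ['CAPEX', 'OPEX Fixed', 'CAPEX', 'OPEX Fixed'],
    'value':    [737.4, 28.0, 499.3, 21.0],
})

# One column per variable, one row per period
df_wide = df_long.pivot(index='period', columns='variable', values='value')
print(df_wide)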

In [13]:
pd.concat([\n        DataSet('Tech|Electrolysis').aggregate(period=[2030, 2040, 2050], subtech=['AEL', 'PEM'], agg=['size', 'source']),\n        assumptions,\n    ]).team.pivot_wide().pint.dequantify()\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[13], line 3\n      1 pd.concat([\n      2         DataSet('Tech|Electrolysis').aggregate(period=[2030, 2040, 2050], subtech=['AEL', 'PEM'], agg=['size', 'source']),\n----> 3         assumptions,\n      4     ]).team.pivot_wide().pint.dequantify()\n\nNameError: name 'assumptions' is not defined

POSTED also contains predefined methods for calculating LCOX. Here we apply them to blue and green hydrogen.

In [14]:
df_lcox_bluegreen = pd.concat([\n        pd.DataFrame.from_records([\n            {'elec_price_case': f\"Case {i}\", 'variable': 'Price|Electricity', 'unit': 'EUR_2020/MWh', 'value': 30 + (i-1)*25}\n            for i in range(1, 4)\n        ]),\n        pd.DataFrame.from_records([\n            {'ng_price_case': 'High' if i-1 else 'Low', 'variable': 'Price|Fossil Gas', 'unit': 'EUR_2020/MWh', 'value': 40 if i-1 else 20}\n            for i in range(1, 3)\n        ]),\n        DataSet('Tech|Electrolysis').aggregate(period=2030, subtech=['AEL', 'PEM'], agg=['size', 'subtech', 'source']),\n        DataSet('Tech|Methane Reforming').aggregate(period=2030, capture_rate=['55.70%', '94.50%'])\n            .team.varsplit('Tech|Methane Reforming|*comp')\n            .team.varcombine('{variable} {subtech} ({capture_rate})|{comp}')\n    ]) \\\n    .team.perform(\n        LCOX('Output|Hydrogen', 'Electrolysis', name='Green Hydrogen', interest_rate=0.1, book_lifetime=18),\n        LCOX('Output|Hydrogen', 'Methane Reforming SMR (55.70%)', name='Blue Hydrogen (Low CR)', interest_rate=0.1, book_lifetime=18),\n        LCOX('Output|Hydrogen', 'Methane Reforming ATR (94.50%)', name='Blue Hydrogen (High CR)', interest_rate=0.1, book_lifetime=18),\n        only_new=True,\n    ) \\\n    .team.unit_convert(to='EUR_2022/MWh')\n\ndisplay(df_lcox_bluegreen)\n
\n---------------------------------------------------------------------------\nTypeError                                 Traceback (most recent call last)\nCell In[14], line 16\n      1 df_lcox_bluegreen = pd.concat([\n      2         pd.DataFrame.from_records([\n      3             {'elec_price_case': f\"Case {i}\", 'variable': 'Price|Electricity', 'unit': 'EUR_2020/MWh', 'value': 30 + (i-1)*25}\n      4             for i in range(1, 4)\n      5         ]),\n      6         pd.DataFrame.from_records([\n      7             {'ng_price_case': 'High' if i-1 else 'Low', 'variable': 'Price|Fossil Gas', 'unit': 'EUR_2020/MWh', 'value': 40 if i-1 else 20}\n      8             for i in range(1, 3)\n      9         ]),\n     10         DataSet('Tech|Electrolysis').aggregate(period=2030, subtech=['AEL', 'PEM'], agg=['size', 'subtech', 'source']),\n     11         DataSet('Tech|Methane Reforming').aggregate(period=2030, capture_rate=['55.70%', '94.50%'])\n     12             .team.varsplit('Tech|Methane Reforming|*comp')\n     13             .team.varcombine('{variable} {subtech} ({capture_rate})|{comp}')\n     14     ]) \\\n     15     .team.perform(\n---> 16         LCOX('Output|Hydrogen', 'Electrolysis', name='Green Hydrogen', interest_rate=0.1, book_lifetime=18),\n     17         LCOX('Output|Hydrogen', 'Methane Reforming SMR (55.70%)', name='Blue Hydrogen (Low CR)', interest_rate=0.1, book_lifetime=18),\n     18         LCOX('Output|Hydrogen', 'Methane Reforming ATR (94.50%)', name='Blue Hydrogen (High CR)', interest_rate=0.1, book_lifetime=18),\n     19         only_new=True,\n     20     ) \\\n     21     .team.unit_convert(to='EUR_2022/MWh')\n     23 display(df_lcox_bluegreen)\n\nTypeError: LCOX.__init__() got multiple values for argument 'name'
In [15]:
df_lcox_bluegreen.team.varsplit('LCOX|?fuel|*comp') \\\n    .plot.bar(x='fuel', y='value', color='comp', facet_col='elec_price_case', facet_row='ng_price_case')\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[15], line 1\n----> 1 df_lcox_bluegreen.team.varsplit('LCOX|?fuel|*comp') \\\n      2     .plot.bar(x='fuel', y='value', color='comp', facet_col='elec_price_case', facet_row='ng_price_case')\n\nNameError: name 'df_lcox_bluegreen' is not defined

Let's calculate the levelised cost of green methanol (from electrolytic hydrogen). First we can do this simply based on a hydrogen price (i.e. without accounting for electrolysis).

In [16]:
df_lcox_meoh = pd.concat([\n        DataSet('Tech|Methanol Synthesis').aggregate(period=[2030, 2050]),\n        pd.DataFrame.from_records([\n            {'period': 2030, 'variable': 'Price|Hydrogen', 'unit': 'EUR_2022/MWh', 'value': 120},\n            {'period': 2050, 'variable': 'Price|Hydrogen', 'unit': 'EUR_2022/MWh', 'value': 80},\n            {'period': 2030, 'variable': 'Price|Captured CO2', 'unit': 'EUR_2022/t', 'value': 150},\n            {'period': 2050, 'variable': 'Price|Captured CO2', 'unit': 'EUR_2022/t', 'value': 100},\n        ]),\n    ]) \\\n    .team.perform(LCOX(\n        'Output|Methanol', 'Methanol Synthesis', name='Green Methanol',\n        interest_rate=0.1, book_lifetime=10.0), only_new=True,\n    ) \\\n    .team.unit_convert('EUR_2022/MWh')\n\ndisplay(df_lcox_meoh)\n
/Users/gmax/Documents/PIK_Job/posted_leo/python/posted/noslag.py:368: UserWarning:\n\nUnknown variable, so dropping rows:\n36    Emissions|CO2\n37    Emissions|CO2\nName: variable, dtype: object\n\n
\n---------------------------------------------------------------------------\nTypeError                                 Traceback (most recent call last)\nCell In[16], line 10\n      1 df_lcox_meoh = pd.concat([\n      2         DataSet('Tech|Methanol Synthesis').aggregate(period=[2030, 2050]),\n      3         pd.DataFrame.from_records([\n      4             {'period': 2030, 'variable': 'Price|Hydrogen', 'unit': 'EUR_2022/MWh', 'value': 120},\n      5             {'period': 2050, 'variable': 'Price|Hydrogen', 'unit': 'EUR_2022/MWh', 'value': 80},\n      6             {'period': 2030, 'variable': 'Price|Captured CO2', 'unit': 'EUR_2022/t', 'value': 150},\n      7             {'period': 2050, 'variable': 'Price|Captured CO2', 'unit': 'EUR_2022/t', 'value': 100},\n      8         ]),\n      9     ]) \\\n---> 10     .team.perform(LCOX(\n     11         'Output|Methanol', 'Methanol Synthesis', name='Green Methanol',\n     12         interest_rate=0.1, book_lifetime=10.0), only_new=True,\n     13     ) \\\n     14     .team.unit_convert('EUR_2022/MWh')\n     16 display(df_lcox_meoh)\n\nTypeError: LCOX.__init__() got multiple values for argument 'name'
In [17]:
df_lcox_meoh.team.varsplit('LCOX|Green Methanol|*component') \\\n    .plot.bar(x='period', y='value', color='component')\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[17], line 1\n----> 1 df_lcox_meoh.team.varsplit('LCOX|Green Methanol|*component') \\\n      2     .plot.bar(x='period', y='value', color='component')\n\nNameError: name 'df_lcox_meoh' is not defined

Next, we can calculate the LCOX of green methanol for the value chain consisting of electrolysis, low-temperature direct air capture, and methanol synthesis. The heat for the direct air capture will be provided by an industrial heat pump.

In [18]:
pc_green_meoh = ProcessChain(\n    'Green Methanol',\n    {'Methanol Synthesis': {'Methanol': Q('1 MWh')}},\n    'Heatpump for DAC -> Heat => Direct Air Capture -> Captured CO2 => Methanol Synthesis;Electrolysis -> Hydrogen => Methanol Synthesis -> Methanol',\n)\n\ng, lay = pc_green_meoh.igraph()\nfig, ax = plt.subplots()\nax.set_title(pc_green_meoh.name)\nig.plot(g, target=ax, layout=lay, vertex_label=[n.replace(' ', '\\n') for n in g.vs['name']], edge_label=[n.replace(' ', '\\n') for n in g.es['name']], vertex_label_size=8, edge_label_size=6)\n
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[18], line 1\n----> 1 pc_green_meoh = ProcessChain(\n      2     'Green Methanol',\n      3     {'Methanol Synthesis': {'Methanol': Q('1 MWh')}},\n      4     'Heatpump for DAC -> Heat => Direct Air Capture -> Captured CO2 => Methanol Synthesis;Electrolysis -> Hydrogen => Methanol Synthesis -> Methanol',\n      5 )\n      7 g, lay = pc_green_meoh.igraph()\n      8 fig, ax = plt.subplots()\n\nNameError: name 'ProcessChain' is not defined
In [19]:
df_lcox_meoh_pc = pd.concat([
        DataSet('Tech|Electrolysis').aggregate(period=[2030, 2050], subtech=['AEL', 'PEM'], size=['1 MW', '100 MW'], agg=['subtech', 'size', 'source']),
        DataSet('Tech|Direct Air Capture').aggregate(period=[2030, 2050], subtech='LT-DAC'),
        DataSet('Tech|Heatpump for DAC').aggregate(period=[2030, 2050]),
        DataSet('Tech|Methanol Synthesis').aggregate(period=[2030, 2050]),
        pd.DataFrame.from_records([
            {'period': 2030, 'variable': 'Price|Electricity', 'unit': 'EUR_2022/MWh', 'value': 50},
            {'period': 2050, 'variable': 'Price|Electricity', 'unit': 'EUR_2022/MWh', 'value': 30},
        ]),
    ]) \
    .team.perform(pc_green_meoh) \
    .team.perform(LCOX(
        'Methanol Synthesis|Output|Methanol', process_chain='Green Methanol',
        interest_rate=0.1, book_lifetime=10.0,
    ), only_new=True) \
    .team.unit_convert('EUR_2022/MWh')

display(df_lcox_meoh_pc)
/Users/gmax/Documents/PIK_Job/posted_leo/python/posted/noslag.py:368: UserWarning:

Unknown variable, so dropping rows:
36    Emissions|CO2
37    Emissions|CO2
Name: variable, dtype: object
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[19], line 11\n      1 df_lcox_meoh_pc = pd.concat([\n      2         DataSet('Tech|Electrolysis').aggregate(period=[2030, 2050], subtech=['AEL', 'PEM'], size=['1 MW', '100 MW'], agg=['subtech', 'size', 'source']),\n      3         DataSet('Tech|Direct Air Capture').aggregate(period=[2030, 2050], subtech='LT-DAC'),\n      4         DataSet('Tech|Heatpump for DAC').aggregate(period=[2030, 2050]),\n      5         DataSet('Tech|Methanol Synthesis').aggregate(period=[2030, 2050]),\n      6         pd.DataFrame.from_records([\n      7             {'period': 2030, 'variable': 'Price|Electricity', 'unit': 'EUR_2022/MWh', 'value': 50},\n      8             {'period': 2050, 'variable': 'Price|Electricity', 'unit': 'EUR_2022/MWh', 'value': 30},\n      9         ]),\n     10     ]) \\\n---> 11     .team.perform(pc_green_meoh) \\\n     12     .team.perform(LCOX(\n     13         'Methanol Synthesis|Output|Methanol', process_chain='Green Methanol',\n     14         interest_rate=0.1, book_lifetime=10.0,\n     15     ), only_new=True) \\\n     16     .team.unit_convert('EUR_2022/MWh')\n     18 display(df_lcox_meoh_pc)\n\nNameError: name 'pc_green_meoh' is not defined
In [20]:
df_lcox_meoh_pc.team.varsplit('LCOX|Green Methanol|?process|*component') \
    .plot.bar(x='period', y='value', color='component', hover_data='process')
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[20], line 1\n----> 1 df_lcox_meoh_pc.team.varsplit('LCOX|Green Methanol|?process|*component') \\\n      2     .plot.bar(x='period', y='value', color='component', hover_data='process')\n\nNameError: name 'df_lcox_meoh_pc' is not defined
In [21]:
pc_green_ethylene = ProcessChain(
    'Green Ethylene',
    # functional unit: 1 t of ethylene from the final process in the chain
    # (the original cell keyed this demand on 'Electric Arc Furnace', a
    # process that does not appear in the chain string below)
    {'Methanol to Olefines': {'Ethylene': Q('1t')}},
    'Electrolysis -> Hydrogen => Methanol Synthesis; Heatpump for DAC -> Heat => Direct Air Capture -> Captured CO2 => Methanol Synthesis -> Methanol => Methanol to Olefines -> Ethylene',
)

g, lay = pc_green_ethylene.igraph()
fig, ax = plt.subplots()
ax.set_title(pc_green_ethylene.name)
ig.plot(
    g, target=ax, layout=lay,
    vertex_label=[n.replace(' ', '\n') for n in g.vs['name']],
    edge_label=[n.replace(' ', '\n') for n in g.es['name']],
    vertex_label_size=8, edge_label_size=6,
)
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[21], line 1\n----> 1 pc_green_ethylene = ProcessChain(\n      2     'Green Ethylene',\n      3     {'Electric Arc Furnace': {'Ethylene': Q('1t')}},\n      4     'Electrolysis -> Hydrogen => Methanol Synthesis; Heatpump for DAC -> Heat => Direct Air Capture -> Captured CO2 => Methanol Synthesis -> Methanol => Methanol to Olefines -> Ethylene',\n      5 )\n      7 g, lay = pc_green_ethylene.igraph()\n      8 fig, ax = plt.subplots()\n\nNameError: name 'ProcessChain' is not defined
In [22]:
pc_green_steel = ProcessChain(
    'Green Steel (H2-DR)',
    {'Steel Hot Rolling': {'Steel Hot-rolled Coil': Q('1t')}},
    'Electrolysis -> Hydrogen => Iron Direct Reduction -> Directly Reduced Iron => Electric Arc Furnace -> Steel Liquid => Steel Casting -> Steel Slab => Steel Hot Rolling -> Steel Hot-rolled Coil',
)

g, lay = pc_green_steel.igraph()
fig, ax = plt.subplots()
ax.set_title(pc_green_steel.name)
ig.plot(
    g, target=ax, layout=lay,
    vertex_label=[n.replace(' ', '\n') for n in g.vs['name']],
    edge_label=[n.replace(' ', '\n') for n in g.es['name']],
    vertex_label_size=8, edge_label_size=6,
)
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[22], line 1\n----> 1 pc_green_steel = ProcessChain(\n      2     'Green Steel (H2-DR)',\n      3     {'Steel Hot Rolling': {'Steel Hot-rolled Coil': Q('1t')}},\n      4     'Electrolysis -> Hydrogen => Iron Direct Reduction -> Directly Reduced Iron => Electric Arc Furnace -> Steel Liquid => Steel Casting -> Steel Slab => Steel Hot Rolling -> Steel Hot-rolled Coil',\n      5 )\n      7 g, lay = pc_green_steel.igraph()\n      8 fig, ax = plt.subplots()\n\nNameError: name 'ProcessChain' is not defined
In [23]:
df_lcox_green_steel = pd.concat([
        DataSet('Tech|Electrolysis').aggregate(period=2030, subtech=['AEL', 'PEM'], size=['1 MW', '100 MW'], agg=['subtech', 'size', 'source'], override={'Tech|ELH2|Output Capacity|Hydrogen': 'kW;LHV'}),
        DataSet('Tech|Iron Direct Reduction').aggregate(period=2030, mode='h2'),
        DataSet('Tech|Electric Arc Furnace').aggregate(period=2030, mode='Primary'),
        DataSet('Tech|Steel Casting').aggregate(period=2030),
        DataSet('Tech|Steel Hot Rolling').aggregate(period=2030),
        pd.DataFrame({'price_case': range(30, 60, 10), 'variable': 'Price|Electricity', 'unit': 'EUR_2020/MWh', 'value': range(30, 60, 10)}),
    ]) \
    .team.perform(pc_green_steel) \
    .team.perform(LCOX(
        'Steel Hot Rolling|Output|Steel Hot-rolled Coil', process_chain='Green Steel (H2-DR)',
        interest_rate=0.1, book_lifetime=10.0,
    ), only_new=True) \
    .team.unit_convert('EUR_2022/t')

display(df_lcox_green_steel)
/Users/gmax/Documents/PIK_Job/posted_leo/python/posted/noslag.py:651: HarmoniseMappingFailure:

No OCF value matching a OPEX Fixed Specific value found.
    period     reheating component region  \
73    2030  w/ reheating    Labour  World   

                                         variable  \
73  Tech|Electric Arc Furnace|OPEX Fixed Specific   

                               reference_variable  source      value  
73  Tech|Electric Arc Furnace|Output|Steel Liquid  Vogl18  62.628147  

/Users/gmax/Documents/PIK_Job/posted_leo/python/posted/noslag.py:651: HarmoniseMappingFailure:

No OCF value matching a OPEX Fixed Specific value found.
    period      reheating component region  \
74    2030  w/o reheating    Labour  World   

                                         variable  \
74  Tech|Electric Arc Furnace|OPEX Fixed Specific   

                               reference_variable  source      value  
74  Tech|Electric Arc Furnace|Output|Steel Liquid  Vogl18  62.628147  
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[23], line 9\n      1 df_lcox_green_steel = pd.concat([\n      2         DataSet('Tech|Electrolysis').aggregate(period=2030, subtech=['AEL', 'PEM'], size=['1 MW', '100 MW'], agg=['subtech', 'size', 'source'], override={'Tech|ELH2|Output Capacity|Hydrogen': 'kW;LHV'}),\n      3         DataSet('Tech|Iron Direct Reduction').aggregate(period=2030, mode='h2'),\n      4         DataSet('Tech|Electric Arc Furnace').aggregate(period=2030, mode='Primary'),\n      5         DataSet('Tech|Steel Casting').aggregate(period=2030),\n      6         DataSet('Tech|Steel Hot Rolling').aggregate(period=2030),\n      7         pd.DataFrame({'price_case': range(30, 60, 10), 'variable': 'Price|Electricity', 'unit': 'EUR_2020/MWh', 'value': range(30, 60, 10)}),\n      8     ]) \\\n----> 9     .team.perform(pc_green_steel) \\\n     10     .team.perform(LCOX(\n     11         'Steel Hot Rolling|Output|Steel Hot-rolled Coil', process_chain='Green Steel (H2-DR)',\n     12         interest_rate=0.1, book_lifetime=10.0,\n     13     ), only_new=True) \\\n     14     .team.unit_convert('EUR_2022/t')\n     16 display(df_lcox_green_steel)\n\nNameError: name 'pc_green_steel' is not defined
In [24]:
df_lcox_green_steel.team.varsplit('LCOX|Green Steel (H2-DR)|?process|*component') \
    .plot.bar(x='price_case', y='value', color='component', hover_data='process', facet_col='reheating')
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[24], line 1\n----> 1 df_lcox_green_steel.team.varsplit('LCOX|Green Steel (H2-DR)|?process|*component') \\\n      2     .plot.bar(x='price_case', y='value', color='component', hover_data='process', facet_col='reheating')\n\nNameError: name 'df_lcox_green_steel' is not defined
In [25]:
df_lcox_cement = pd.concat([
        DataSet('Tech|Cement Production').aggregate(period=2030),
        pd.DataFrame.from_records([
            {'variable': 'Price|Electricity', 'unit': 'EUR_2022/MWh', 'value': 50},
            {'variable': 'Price|Coal', 'unit': 'EUR_2022/GJ', 'value': 3},
            {'variable': 'Price|Oxygen', 'unit': 'EUR_2022/t', 'value': 30},
            {'variable': 'Price|Captured CO2', 'unit': 'EUR_2022/t', 'value': -30},
        ]),
    ]) \
    .team.perform(LCOX(
        'Output|Cement', 'Cement Production',
        interest_rate=0.1, book_lifetime=10.0,
    ), only_new=True) \
    .team.unit_convert('EUR_2022/t')

display(df_lcox_cement)
/Users/gmax/Documents/PIK_Job/posted_leo/python/posted/noslag.py:368: UserWarning:

Unknown variable, so dropping rows:
37    Emissions|CO2
38    Emissions|CO2
39    Emissions|CO2
40    Emissions|CO2
41    Emissions|CO2
42    Emissions|CO2
43    Emissions|CO2
44    Emissions|CO2
Name: variable, dtype: object
\n---------------------------------------------------------------------------\nTypeError                                 Traceback (most recent call last)\nCell In[25], line 10\n      1 df_lcox_cement = pd.concat([\n      2         DataSet('Tech|Cement Production').aggregate(period=2030),\n      3         pd.DataFrame.from_records([\n      4             {'variable': 'Price|Electricity', 'unit': 'EUR_2022/MWh', 'value': 50},\n      5             {'variable': 'Price|Coal', 'unit': 'EUR_2022/GJ', 'value': 3},\n      6             {'variable': 'Price|Oxygen', 'unit': 'EUR_2022/t', 'value': 30},\n      7             {'variable': 'Price|Captured CO2', 'unit': 'EUR_2022/t', 'value': -30},\n      8         ]),\n      9     ]) \\\n---> 10     .team.perform(LCOX(\n     11         'Output|Cement', 'Cement Production',\n     12         interest_rate=0.1, book_lifetime=10.0,\n     13     ), only_new=True) \\\n     14     .team.unit_convert('EUR_2022/t')\n     16 display(df_lcox_cement)\n\nTypeError: LCOX.__init__() missing 1 required positional argument: 'reference'

We first sort the dataframe by total LCOX for each subtech.
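The sort-by-group-total idiom used in the next cell is easiest to see on toy data; the numbers below are made up purely for illustration:

import pandas as pd

# Toy data: group A totals 3.0, group B totals 5.5.
toy = pd.DataFrame({
    'subtech': ['A', 'A', 'B', 'B'],
    'value':   [1.0, 2.0, 5.0, 0.5],
})

# Attach each group's summed value as 'order', then sort rows by that total.
ordered = (
    toy.groupby('subtech')
       .apply(lambda df: df.assign(order=df['value'].sum()), include_groups=False)
       .sort_values(by='order')
       .reset_index()
)
print(ordered)  # rows of group A (total 3.0) now precede those of group B (5.5)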

In [26]:
df_lcox_cement.team.varsplit('?variable|?process|*component') \
    .groupby('subtech') \
    .apply(lambda df: df.assign(order=df['value'].sum()), include_groups=False) \
    .sort_values(by='order') \
    .reset_index() \
    .plot.bar(x='subtech', y='value', color='component', hover_data='process')
\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\nCell In[26], line 1\n----> 1 df_lcox_cement.team.varsplit('?variable|?process|*component') \\\n      2     .groupby('subtech') \\\n      3     .apply(lambda df: df.assign(order=df['value'].sum()), include_groups=False) \\\n      4     .sort_values(by='order') \\\n      5     .reset_index() \\\n      6     .plot.bar(x='subtech', y='value', color='component', hover_data='process')\n\nNameError: name 'df_lcox_cement' is not defined
"},{"location":"tutorials/python/overview/#main-posted-tutorial-for-python","title":"Main POSTED tutorial for python\u00b6","text":""},{"location":"tutorials/python/overview/#prerequisits","title":"Prerequisits\u00b6","text":""},{"location":"tutorials/python/overview/#dependencies","title":"Dependencies\u00b6","text":""},{"location":"tutorials/python/overview/#importing-posted","title":"Importing POSTED\u00b6","text":""},{"location":"tutorials/python/overview/#noslag","title":"NOSLAG\u00b6","text":""},{"location":"tutorials/python/overview/#electrolysis-capex","title":"Electrolysis CAPEX\u00b6","text":""},{"location":"tutorials/python/overview/#energy-demand-of-green-vs-blue-hydrogen-production","title":"Energy demand of green vs. blue hydrogen production\u00b6","text":""},{"location":"tutorials/python/overview/#energy-demand-of-iron-direct-reduction","title":"Energy demand of iron direct reduction\u00b6","text":""},{"location":"tutorials/python/overview/#energy-demand-of-haber-bosch-synthesis","title":"Energy demand of Haber-Bosch synthesis\u00b6","text":""},{"location":"tutorials/python/overview/#team","title":"TEAM\u00b6","text":""},{"location":"tutorials/python/overview/#calcvariable","title":"CalcVariable\u00b6","text":""},{"location":"tutorials/python/overview/#pivot","title":"Pivot\u00b6","text":""},{"location":"tutorials/python/overview/#lcox-of-blue-and-green-hydrogen","title":"LCOX of blue and green hydrogen\u00b6","text":""},{"location":"tutorials/python/overview/#lcox-of-methanol","title":"LCOX of methanol\u00b6","text":""},{"location":"tutorials/python/overview/#lcox-of-green-ethylene-from-green-methanol","title":"LCOX of green ethylene (from green methanol)\u00b6","text":""},{"location":"tutorials/python/overview/#lcox-of-green-steel","title":"LCOX of green steel\u00b6","text":""},{"location":"tutorials/python/overview/#lcox-of-cement-w-and-wo-cc","title":"LCOX of cement w/ and w/o CC\u00b6","text":""}]} \ No newline at end of file diff --git a/develop/tutorials/R/overview/index.html b/develop/tutorials/R/overview/index.html index 3fb103b..e690220 100644 --- a/develop/tutorials/R/overview/index.html +++ b/develop/tutorials/R/overview/index.html @@ -1401,7 +1401,7 @@

Overview