flixopt.commons
This module makes the commonly used classes and functions available in the flixopt framework.
Classes
AggregationParameters
AggregationParameters(hours_per_period: float, nr_of_periods: int, fix_storage_flows: bool, aggregate_data_and_fix_non_binary_vars: bool, percentage_of_period_freedom: float = 0, penalty_of_period_freedom: float = 0, time_series_for_high_peaks: list[TimeSeriesData] | None = None, time_series_for_low_peaks: list[TimeSeriesData] | None = None)
Initializes aggregation parameters for time series data.
Parameters:
Name | Type | Description | Default
---|---|---|---
hours_per_period | float | Duration of each period in hours. | required
nr_of_periods | int | Number of typical periods to use in the aggregation. | required
fix_storage_flows | bool | Whether to fix storage flows (load/unload) to the aggregated solution. If other flows are fixed, fixing storage flows is usually not required. | required
aggregate_data_and_fix_non_binary_vars | bool | Whether to aggregate all time series data, which allows fixing all time series variables (like flow_rate) instead of only the binary variables. If False, only binary variables are fixed and the time series data itself is not changed. If True, the mathematical problem is simplified even further. | required
percentage_of_period_freedom | float | Maximum percentage (0–100) of binary values within each period that may deviate as "free variables", chosen by the solver. This allows binary variables to be only partly equated between aggregated periods. | 0
penalty_of_period_freedom | float | The penalty associated with each "free variable"; added to the Penalty term. | 0
time_series_for_high_peaks | list[TimeSeriesData] \| None | List of TimeSeriesData used to explicitly select periods with high values. | None
time_series_for_low_peaks | list[TimeSeriesData] \| None | List of TimeSeriesData used to explicitly select periods with low values. | None
Functions
AggregatedCalculation
AggregatedCalculation(name: str, flow_system: FlowSystem, aggregation_parameters: AggregationParameters, components_to_clusterize: list[Component] | None = None, active_timesteps: Annotated[DatetimeIndex | None, 'DEPRECATED: Use flow_system.sel(time=...) or flow_system.isel(time=...) instead'] = None, folder: Path | None = None)
Bases: FullCalculation
AggregatedCalculation reduces computational complexity by clustering time series into typical periods.
This calculation approach aggregates time series data using clustering techniques (tsam) to identify representative time periods, significantly reducing computation time while maintaining solution accuracy.
Note
The quality of the solution depends on the choice of aggregation parameters. The optimal parameters depend on the specific problem and the characteristics of the time series data. For more information, refer to the tsam documentation.
Parameters:
Name | Type | Description | Default
---|---|---|---
name | str | Name of the calculation. | required
flow_system | FlowSystem | FlowSystem to be optimized. | required
aggregation_parameters | AggregationParameters | Parameters for aggregation. See the AggregationParameters class documentation. | required
components_to_clusterize | list[Component] \| None | List of Components to perform aggregation on. If None, all components are aggregated. This equalizes variables in the components according to the typical periods computed in the aggregation. | None
active_timesteps | DatetimeIndex \| None | DatetimeIndex of timesteps to use for the calculation. If None, all timesteps are used. Deprecated: use flow_system.sel(time=...) or flow_system.isel(time=...) instead. | None
folder | Path \| None | Folder where results should be saved. If None, the current working directory is used. | None
Attributes:
Name | Type | Description
---|---|---
aggregation | Aggregation \| None | Contains the clustered time series data.
aggregation_model | AggregationModel \| None | Contains the variables and constraints that equalize clusters of the time series data.
Functions
fix_sizes
Fix the sizes of the calculations to specified values.
Parameters:
Name | Type | Description | Default
---|---|---|---
ds | Dataset | The dataset that maps variable names to their sizes. If None, the dataset is loaded from the results. | required
decimal_rounding | int \| None | The number of decimal places to round the sizes to. If no rounding is applied, numerical errors might lead to infeasibility. | 5
FullCalculation
FullCalculation(name: str, flow_system: FlowSystem, active_timesteps: Annotated[DatetimeIndex | None, 'DEPRECATED: Use flow_system.sel(time=...) or flow_system.isel(time=...) instead'] = None, folder: Path | None = None, normalize_weights: bool = True)
Bases: Calculation
FullCalculation solves the complete optimization problem using all time steps.
This is the most comprehensive calculation type that considers every time step in the optimization, providing the most accurate but computationally intensive solution.
Parameters:
Name | Type | Description | Default
---|---|---|---
name | str | Name of the calculation. | required
flow_system | FlowSystem | FlowSystem to be optimized. | required
folder | Path \| None | Folder where results should be saved. If None, the current working directory is used. | None
normalize_weights | bool | Whether to automatically normalize the weights (periods and scenarios) to sum to 1 when solving. | True
active_timesteps | DatetimeIndex \| None | Deprecated. Use FlowSystem.sel(time=...) or FlowSystem.isel(time=...) instead. | None
Functions
fix_sizes
Fix the sizes of the calculations to specified values.
Parameters:
Name | Type | Description | Default
---|---|---|---
ds | Dataset | The dataset that maps variable names to their sizes. If None, the dataset is loaded from the results. | required
decimal_rounding | int \| None | The number of decimal places to round the sizes to. If no rounding is applied, numerical errors might lead to infeasibility. | 5
SegmentedCalculation
SegmentedCalculation(name: str, flow_system: FlowSystem, timesteps_per_segment: int, overlap_timesteps: int, nr_of_previous_values: int = 1, folder: Path | None = None)
Bases: Calculation
Solve large optimization problems by dividing the time horizon into (overlapping) segments.
This class addresses memory and computational limitations of large-scale optimization problems by decomposing the time horizon into smaller overlapping segments that are solved sequentially. Each segment uses final values from the previous segment as initial conditions, ensuring dynamic continuity across the solution.
Key Concepts
- Temporal Decomposition: Divides long time horizons into manageable segments
- Overlapping Windows: Segments share timesteps to improve storage dynamics
- Value Transfer: Final states of one segment become initial states of the next
- Sequential Solving: Each segment is solved independently but with coupling
Limitations and Constraints
Investment Parameters: InvestParameters are not supported in segmented calculations as investment decisions must be made for the entire time horizon, not per segment.
Global Constraints: Time-horizon-wide constraints (flow_hours_total_min/max, load_factor_min/max) may produce suboptimal results as they cannot be enforced globally across segments.
Storage Dynamics: While overlap helps, storage optimization may be suboptimal compared to full-horizon solutions due to limited foresight in each segment.
Parameters:
Name | Type | Description | Default
---|---|---|---
name | str | Unique identifier for the calculation, used in result files and logging. | required
flow_system | FlowSystem | The FlowSystem to optimize, containing all components, flows, and buses. | required
timesteps_per_segment | int | Number of timesteps in each segment (excluding overlap). Must be > 2 to avoid internal side effects. Larger values provide better optimization at the cost of memory and computation time. | required
overlap_timesteps | int | Number of additional timesteps added to each segment. Improves storage optimization by providing lookahead. Higher values improve solution quality but increase computational cost. | required
nr_of_previous_values | int | Number of previous timestep values to transfer between segments for initialization. Typically 1 is sufficient. | 1
folder | Path \| None | Directory for saving results. Defaults to current working directory + 'results'. | None
Examples:
Annual optimization with monthly segments:
# 8760 hours annual data with monthly segments (730 hours) and 48-hour overlap
segmented_calc = SegmentedCalculation(
name='annual_energy_system',
flow_system=energy_system,
timesteps_per_segment=730, # ~1 month
overlap_timesteps=48, # 2 days overlap
folder=Path('results/segmented'),
)
segmented_calc.do_modeling_and_solve(solver='gurobi')
Weekly optimization with daily overlap:
# Weekly segments for detailed operational planning
weekly_calc = SegmentedCalculation(
name='weekly_operations',
flow_system=industrial_system,
timesteps_per_segment=168, # 1 week (hourly data)
overlap_timesteps=24, # 1 day overlap
nr_of_previous_values=1,
)
Large-scale system with minimal overlap:
# Large system with minimal overlap for computational efficiency
large_calc = SegmentedCalculation(
name='large_scale_grid',
flow_system=grid_system,
timesteps_per_segment=100, # Shorter segments
overlap_timesteps=5, # Minimal overlap
)
Design Considerations
Segment Size: Balance between solution quality and computational efficiency. Larger segments provide better optimization but require more memory and time.
Overlap Duration: More overlap improves storage dynamics and reduces end-effects but increases computational cost. Typically 5-10% of segment length.
Storage Systems: Systems with large storage components benefit from longer overlaps to capture charge/discharge cycles effectively.
Investment Decisions: Use FullCalculation for problems requiring investment optimization, as SegmentedCalculation cannot handle investment parameters.
Common Use Cases
- Annual Planning: Long-term planning with seasonal variations
- Large Networks: Spatially or temporally large energy systems
- Memory-Limited Systems: When full optimization exceeds available memory
- Operational Planning: Detailed short-term optimization with limited foresight
- Sensitivity Analysis: Quick approximate solutions for parameter studies
Performance Tips
- Start with FullCalculation and use this class if memory issues occur
- Use longer overlaps for systems with significant storage
- Monitor solution quality at segment boundaries for discontinuities
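The overlapping-segment scheme described above can be sketched in plain Python (arithmetic only, not the flixopt API): segment starts advance by the segment length, while each solve sees extra lookahead timesteps. The numbers match the annual example (8760 hourly steps, 730 per segment, 48 overlap):

```python
def segment_bounds(n_steps: int, per_segment: int, overlap: int) -> list[tuple[int, int]]:
    """Return (start, end) index pairs; each segment gets `overlap` lookahead steps."""
    bounds = []
    start = 0
    while start < n_steps:
        # The segment is solved over per_segment + overlap steps (clipped at the horizon)...
        bounds.append((start, min(start + per_segment + overlap, n_steps)))
        # ...but only per_segment steps are kept; the overlap is re-solved next time.
        start += per_segment
    return bounds

bounds = segment_bounds(8760, 730, 48)
# 12 segments; each pair of consecutive segments shares the 48-step overlap
```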
Warning
The evaluation of the solution is a bit more complex than FullCalculation or AggregatedCalculation due to the overlapping individual solutions.
LinearConverter
LinearConverter(label: str, inputs: list[Flow], outputs: list[Flow], on_off_parameters: OnOffParameters | None = None, conversion_factors: list[dict[str, TemporalDataUser]] | None = None, piecewise_conversion: PiecewiseConversion | None = None, meta_data: dict | None = None)
Bases: Component
Converts input-Flows into output-Flows via linear conversion factors.
LinearConverter models equipment that transforms one or more input flows into one or more output flows through linear relationships. This includes heat exchangers, electrical converters, chemical reactors, and other equipment where the relationship between inputs and outputs can be expressed as linear equations.
The component supports two modeling approaches: simple conversion factors for straightforward linear relationships, or piecewise conversion for complex non-linear behavior approximated through piecewise linear segments.
Mathematical Formulation
See the complete mathematical model in the documentation: LinearConverter
Parameters:
Name | Type | Description | Default
---|---|---|---
label | str | The label of the Element. Used to identify it in the FlowSystem. | required
inputs | list[Flow] | List of input Flows that feed into the converter. | required
outputs | list[Flow] | List of output Flows produced by the converter. | required
on_off_parameters | OnOffParameters \| None | Information about the on/off state of the LinearConverter. The component is on/off if all connected Flows are on/off. This induces a binary On-variable in all Flows! If possible, use OnOffParameters in a single Flow instead to keep the number of binary variables low. | None
conversion_factors | list[dict[str, TemporalDataUser]] \| None | Linear relationships between flows expressed as a list of dictionaries. Each dictionary maps flow labels to their coefficients in one linear equation. The number of conversion factors must be less than the total number of flows to ensure degrees of freedom > 0. Either conversion_factors or piecewise_conversion can be used, but not both. For examples, also look into the linear_converters.py file. | None
piecewise_conversion | PiecewiseConversion \| None | Piecewise linear relationships between flow rates of different flows. Enables modeling of non-linear conversion behavior through linear approximation. Either conversion_factors or piecewise_conversion can be used, but not both. | None
meta_data | dict \| None | Used to store additional information about the Element. Not used internally, but saved in results. Only use Python native types. | None
Examples:
Simple 1:1 heat exchanger with 95% efficiency:
heat_exchanger = LinearConverter(
label='primary_hx',
inputs=[hot_water_in],
outputs=[hot_water_out],
conversion_factors=[{'hot_water_in': 0.95, 'hot_water_out': 1}],
)
Multi-input heat pump with COP=3:
heat_pump = LinearConverter(
label='air_source_hp',
inputs=[electricity_in],
outputs=[heat_output],
conversion_factors=[{'electricity_in': 3, 'heat_output': 1}],
)
Combined heat and power (CHP) unit with multiple outputs:
chp_unit = LinearConverter(
label='gas_chp',
inputs=[natural_gas],
outputs=[electricity_out, heat_out],
conversion_factors=[
{'natural_gas': 0.35, 'electricity_out': 1},
{'natural_gas': 0.45, 'heat_out': 1},
],
)
Electrolyzer with multiple conversion relationships:
electrolyzer = LinearConverter(
label='pem_electrolyzer',
inputs=[electricity_in, water_in],
outputs=[hydrogen_out, oxygen_out],
conversion_factors=[
{'electricity_in': 1, 'hydrogen_out': 50}, # 50 kWh/kg H2
{'water_in': 1, 'hydrogen_out': 9}, # 9 kg H2O/kg H2
{'hydrogen_out': 8, 'oxygen_out': 1}, # Mass balance
],
)
Complex converter with piecewise efficiency:
variable_efficiency_converter = LinearConverter(
label='variable_converter',
inputs=[fuel_in],
outputs=[power_out],
piecewise_conversion=PiecewiseConversion(
{
'fuel_in': Piecewise(
[
Piece(0, 10), # Low load operation
Piece(10, 25), # High load operation
]
),
'power_out': Piecewise(
[
Piece(0, 3.5), # Lower efficiency at part load
Piece(3.5, 10), # Higher efficiency at full load
]
),
}
),
)
Note
Each dictionary in conversion_factors defines one linear equation between flow rates: {flow1: a1, flow2: a2} yields a1×flow_rate1 = a2×flow_rate2.
The input format may be unintuitive. For example, {"electricity": 1, "H2": 50} implies 1×electricity = 50×H2, i.e., 50 units of electricity are consumed per unit of H2 produced.
The system must have fewer conversion factors than total flows (degrees of freedom > 0) to avoid over-constraining the problem. For n total flows, use at most n-1 conversion factors.
When using piecewise_conversion, the converter operates on one piece at a time, with binary variables determining which piece is active.
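The coefficient convention can be checked numerically with the electrolyzer figures from the example above (plain Python arithmetic, not the flixopt API):

```python
# Each dict {flow1: a1, flow2: a2} encodes a1 * rate1 = a2 * rate2.
# Electrolyzer coefficients from the example above:
#   {'electricity_in': 1, 'hydrogen_out': 50} -> 1*el  = 50*h2 (50 kWh per kg H2)
#   {'water_in': 1, 'hydrogen_out': 9}        -> 1*h2o =  9*h2 (9 kg water per kg H2)
#   {'hydrogen_out': 8, 'oxygen_out': 1}      -> 8*h2  =  1*o2 (8 kg O2 per kg H2)
hydrogen_rate = 2.0                    # kg/h of H2 produced
electricity_rate = 50 * hydrogen_rate  # kWh/h of electricity consumed
water_rate = 9 * hydrogen_rate         # kg/h of water consumed
oxygen_rate = 8 * hydrogen_rate        # kg/h of O2 produced
# The mass balance closes: water in = hydrogen out + oxygen out.
assert water_rate == hydrogen_rate + oxygen_rate
```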
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables; everything else goes to attrs.
It is recommended to only call this method on Interfaces whose numeric data is stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after connecting and transforming the FlowSystem.
Returns:
Type | Description
---|---
Dataset | xr.Dataset: Dataset containing all DataArrays, with basic objects only in attributes.
Raises:
Type | Description
---|---
ValueError | If serialization fails due to naming conflicts or invalid data.
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default
---|---|---|---
path | str \| Path | Path to save the NetCDF file. | required
compression | int | Compression level (0–9). | 0
Raises:
Type | Description
---|---
ValueError | If serialization fails.
IOError | If the file cannot be written.
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default
---|---|---|---
ds | Dataset | Dataset containing the object data. | required
Returns:
Type | Description
---|---
Interface | Interface instance.
Raises:
Type | Description
---|---
ValueError | If the dataset format is invalid or the class does not match.
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default
---|---|---|---
path | str \| Path | Path to the NetCDF file. | required
Returns:
Type | Description
---|---
Interface | Interface instance.
Raises:
Type | Description
---|---
IOError | If the file cannot be read.
ValueError | If the file format is invalid.
get_structure
Get the object structure as a dictionary.
Parameters:
Name | Type | Description | Default
---|---|---|---
clean | bool | If True, remove None values and empty dicts and lists. | False
stats | bool | If True, replace DataArray references with statistics. | False
Returns:
Type | Description
---|---
dict | Dictionary representation of the object structure.
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default
---|---|---|---
path | str \| Path | The path to the JSON file. | required
Raises:
Type | Description
---|---
IOError | If the file cannot be written.
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description
---|---
Interface | A new instance of the same class with copied data.
Sink
Sink(label: str, inputs: list[Flow] | None = None, meta_data: dict | None = None, prevent_simultaneous_flow_rates: bool = False, **kwargs)
Bases: Component
A Sink consumes energy or material flows from the system.
Sinks represent demand points like electrical loads, heat demands, material consumption, or any system boundary where flows terminate. They provide unlimited consumption capability subject to flow constraints, demand patterns and effects.
Parameters:
Name | Type | Description | Default
---|---|---|---
label | str | The label of the Element. Used to identify it in the FlowSystem. | required
inputs | list[Flow] \| None | Input flows into the sink. Can be a single flow or a list of flows for sinks consuming multiple commodities or services. | None
meta_data | dict \| None | Used to store additional information about the Element. Not used internally, but saved in results. Only use Python native types. | None
prevent_simultaneous_flow_rates | bool | If True, only one input flow can be active at a time. Useful for modeling mutually exclusive consumption options. | False
Examples:
Simple electrical demand:
Heat demand with time-varying profile:
heat_demand = Sink(
label='district_heating_load',
inputs=[
Flow(
label='heat_consumption',
bus=heat_bus,
fixed_relative_profile=hourly_heat_profile, # Demand profile
size=2000, # Peak demand of 2000 kW
)
],
)
Multi-energy building with switching capabilities:
flexible_building = Sink(
label='smart_building',
inputs=[electricity_heating, gas_heating, heat_pump_heating],
prevent_simultaneous_flow_rates=True, # Can only use one heating mode
)
Industrial process with variable demand:
factory_load = Sink(
label='manufacturing_plant',
inputs=[
Flow(
label='electricity_process',
bus=electricity_bus,
size=5000, # Base electrical load
effects_per_flow_hour={'cost': -0.1}, # Value of service (negative cost)
),
Flow(
label='steam_process',
bus=steam_bus,
size=3000, # Process steam demand
fixed_relative_profile=production_schedule,
),
],
)
Deprecated
The sink kwarg is accepted for backward compatibility but will be removed in a future release: if sink is provided, it is used as the single input flow and a DeprecationWarning is issued; specifying both inputs and sink raises a ValueError.
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables; everything else goes to attrs.
It is recommended to only call this method on Interfaces whose numeric data is stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after connecting and transforming the FlowSystem.
Returns:
Type | Description
---|---
Dataset | xr.Dataset: Dataset containing all DataArrays, with basic objects only in attributes.
Raises:
Type | Description
---|---
ValueError | If serialization fails due to naming conflicts or invalid data.
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default
---|---|---|---
path | str \| Path | Path to save the NetCDF file. | required
compression | int | Compression level (0–9). | 0
Raises:
Type | Description
---|---
ValueError | If serialization fails.
IOError | If the file cannot be written.
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default
---|---|---|---
ds | Dataset | Dataset containing the object data. | required
Returns:
Type | Description
---|---
Interface | Interface instance.
Raises:
Type | Description
---|---
ValueError | If the dataset format is invalid or the class does not match.
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default
---|---|---|---
path | str \| Path | Path to the NetCDF file. | required
Returns:
Type | Description
---|---
Interface | Interface instance.
Raises:
Type | Description
---|---
IOError | If the file cannot be read.
ValueError | If the file format is invalid.
get_structure
Get the object structure as a dictionary.
Parameters:
Name | Type | Description | Default
---|---|---|---
clean | bool | If True, remove None values and empty dicts and lists. | False
stats | bool | If True, replace DataArray references with statistics. | False
Returns:
Type | Description
---|---
dict | Dictionary representation of the object structure.
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default
---|---|---|---
path | str \| Path | The path to the JSON file. | required
Raises:
Type | Description
---|---
IOError | If the file cannot be written.
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description
---|---
Interface | A new instance of the same class with copied data.
Source
Source(label: str, outputs: list[Flow] | None = None, meta_data: dict | None = None, prevent_simultaneous_flow_rates: bool = False, **kwargs)
Bases: Component
A Source generates or provides energy or material flows into the system.
Sources represent supply points like power plants, fuel suppliers, renewable energy sources, or any system boundary where flows originate. They provide unlimited supply capability subject to flow constraints, demand patterns and effects.
Parameters:
Name | Type | Description | Default
---|---|---|---
label | str | The label of the Element. Used to identify it in the FlowSystem. | required
outputs | list[Flow] \| None | Output flows from the source. Can be a single flow or a list of flows for sources providing multiple commodities or services. | None
meta_data | dict \| None | Used to store additional information about the Element. Not used internally, but saved in results. Only use Python native types. | None
prevent_simultaneous_flow_rates | bool | If True, only one output flow can be active at a time. Useful for modeling mutually exclusive supply options. | False
Examples:
Simple electricity grid connection:
Natural gas supply with cost and capacity constraints:
gas_supply = Source(
label='gas_network',
outputs=[
Flow(
label='natural_gas_flow',
bus=gas_bus,
size=1000, # Maximum 1000 kW supply capacity
effects_per_flow_hour={'cost': 0.04}, # €0.04/kWh gas cost
)
],
)
Multi-fuel power plant with switching constraints:
multi_fuel_plant = Source(
label='flexible_generator',
outputs=[coal_electricity, gas_electricity, biomass_electricity],
prevent_simultaneous_flow_rates=True, # Can only use one fuel at a time
)
Renewable energy source with investment optimization:
solar_farm = Source(
label='solar_pv',
outputs=[
Flow(
label='solar_power',
bus=electricity_bus,
size=InvestParameters(
minimum_size=0,
maximum_size=50000, # Up to 50 MW
specific_effects={'cost': 800}, # €800/kW installed
fix_effects={'cost': 100000}, # €100k development costs
),
fixed_relative_profile=solar_profile, # Hourly generation profile
)
],
)
Deprecated
The source kwarg is accepted for backward compatibility but will be removed in a future release.
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables; everything else goes to attrs.
It is recommended to only call this method on Interfaces whose numeric data is stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after connecting and transforming the FlowSystem.
Returns:
Type | Description
---|---
Dataset | xr.Dataset: Dataset containing all DataArrays, with basic objects only in attributes.
Raises:
Type | Description
---|---
ValueError | If serialization fails due to naming conflicts or invalid data.
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default
---|---|---|---
path | str \| Path | Path to save the NetCDF file. | required
compression | int | Compression level (0–9). | 0
Raises:
Type | Description
---|---
ValueError | If serialization fails.
IOError | If the file cannot be written.
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default
---|---|---|---
ds | Dataset | Dataset containing the object data. | required
Returns:
Type | Description
---|---
Interface | Interface instance.
Raises:
Type | Description
---|---
ValueError | If the dataset format is invalid or the class does not match.
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default
---|---|---|---
path | str \| Path | Path to the NetCDF file. | required
Returns:
Type | Description
---|---
Interface | Interface instance.
Raises:
Type | Description
---|---
IOError | If the file cannot be read.
ValueError | If the file format is invalid.
get_structure
Get the object structure as a dictionary.
Parameters:
Name | Type | Description | Default
---|---|---|---
clean | bool | If True, remove None values and empty dicts and lists. | False
stats | bool | If True, replace DataArray references with statistics. | False
Returns:
Type | Description
---|---
dict | Dictionary representation of the object structure.
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default
---|---|---|---
path | str \| Path | The path to the JSON file. | required
Raises:
Type | Description
---|---
IOError | If the file cannot be written.
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description
---|---
Interface | A new instance of the same class with copied data.
SourceAndSink
SourceAndSink(label: str, inputs: list[Flow] | None = None, outputs: list[Flow] | None = None, prevent_simultaneous_flow_rates: bool = True, meta_data: dict | None = None, **kwargs)
Bases: Component
A SourceAndSink combines both supply and demand capabilities in a single component.
SourceAndSink components can both consume AND provide energy or material flows from and to the system, making them ideal for modeling markets, (simple) storage facilities, or bidirectional grid connections where buying and selling occur at the same location.
Parameters:
Name | Type | Description | Default
---|---|---|---
label | str | The label of the Element. Used to identify it in the FlowSystem. | required
inputs | list[Flow] \| None | Input flows into the SourceAndSink, representing the consumption/demand side. | None
outputs | list[Flow] \| None | Output flows from the SourceAndSink, representing the supply/generation side. | None
prevent_simultaneous_flow_rates | bool | If True, prevents simultaneous input and output flows, enforcing that the component operates as either a source or a sink at any given time, but not both. | True
meta_data | dict \| None | Used to store additional information about the Element. Not used internally, but saved in results. Only use Python native types. | None
Examples:
Electricity market connection (buy/sell to grid):
electricity_market = SourceAndSink(
label='grid_connection',
inputs=[electricity_purchase], # Buy from grid
outputs=[electricity_sale], # Sell to grid
prevent_simultaneous_flow_rates=True, # Can't buy and sell simultaneously
)
Natural gas storage facility:
gas_storage_facility = SourceAndSink(
label='underground_gas_storage',
inputs=[gas_injection_flow], # Inject gas into storage
outputs=[gas_withdrawal_flow], # Withdraw gas from storage
prevent_simultaneous_flow_rates=True, # Injection or withdrawal, not both
)
District heating network connection:
dh_connection = SourceAndSink(
label='district_heating_tie',
inputs=[heat_purchase_flow], # Purchase heat from network
outputs=[heat_sale_flow], # Sell excess heat to network
prevent_simultaneous_flow_rates=False, # May allow simultaneous flows
)
Industrial waste heat exchange:
waste_heat_exchange = SourceAndSink(
label='industrial_heat_hub',
inputs=[
waste_heat_input_a, # Receive waste heat from process A
waste_heat_input_b, # Receive waste heat from process B
],
outputs=[
useful_heat_supply_c, # Supply heat to process C
useful_heat_supply_d, # Supply heat to process D
],
prevent_simultaneous_flow_rates=False, # Multiple simultaneous flows allowed
)
Note
When prevent_simultaneous_flow_rates is True, binary variables are created to ensure mutually exclusive operation between input and output flows, which increases computational complexity but reflects realistic market or storage operation constraints.
SourceAndSink is particularly useful for modeling:
- Energy markets with bidirectional trading
- Storage facilities with injection/withdrawal operations
- Grid tie points with import/export capabilities
- Waste exchange networks with multiple participants
Deprecated
The deprecated sink and source kwargs are accepted for compatibility but will be removed in future releases.
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables, everything else goes to attrs.
It's recommended to call this method only on Interfaces whose numeric data is stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after connecting and transforming the FlowSystem.
Returns:
Type | Description |
---|---|
Dataset
|
xr.Dataset: Dataset containing all DataArrays with basic objects only in attributes |
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails due to naming conflicts or invalid data |
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to save the NetCDF file |
required |
compression
|
int
|
Compression level (0-9) |
0
|
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails |
IOError
|
If file cannot be written |
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds
|
Dataset
|
Dataset containing the object data |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
ValueError
|
If dataset format is invalid or class mismatch |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to the NetCDF file |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be read |
ValueError
|
If file format is invalid |
get_structure
Get object structure as a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean
|
bool
|
If True, remove None and empty dicts and lists. |
False
|
stats
|
bool
|
If True, replace DataArray references with statistics |
False
|
Returns:
Type | Description |
---|---|
dict
|
Dictionary representation of the object structure |
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the JSON file. |
required |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be written |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
Storage
Storage(label: str, charging: Flow, discharging: Flow, capacity_in_flow_hours: PeriodicDataUser | InvestParameters, relative_minimum_charge_state: TemporalDataUser = 0, relative_maximum_charge_state: TemporalDataUser = 1, initial_charge_state: PeriodicDataUser | Literal['lastValueOfSim'] = 0, minimal_final_charge_state: PeriodicDataUser | None = None, maximal_final_charge_state: PeriodicDataUser | None = None, relative_minimum_final_charge_state: PeriodicDataUser | None = None, relative_maximum_final_charge_state: PeriodicDataUser | None = None, eta_charge: TemporalDataUser = 1, eta_discharge: TemporalDataUser = 1, relative_loss_per_hour: TemporalDataUser = 0, prevent_simultaneous_charge_and_discharge: bool = True, balanced: bool = False, meta_data: dict | None = None)
Bases: Component
A Storage models the temporary storage and release of energy or material.
Storages have one incoming and one outgoing Flow, each with configurable efficiency factors. They maintain a charge state variable that represents the stored amount, bounded by capacity limits and evolving over time based on charging, discharging, and self-discharge losses.
The storage model handles complex temporal dynamics including initial conditions, final state constraints, and time-varying parameters. It supports both fixed-size and investment-optimized storage systems with comprehensive techno-economic modeling.
Mathematical Formulation
See the complete mathematical model in the documentation: Storage
- Equation (1): Charge state bounds
- Equation (3): Storage balance (charge state evolution)
Variable Mapping:
- capacity_in_flow_hours
→ C (storage capacity)
- charge_state
→ c(t_i) (state of charge at time t_i)
- relative_loss_per_hour
→ ċ_rel,loss (self-discharge rate)
- eta_charge
→ η_in (charging efficiency)
- eta_discharge
→ η_out (discharging efficiency)
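The variable mapping above can be made concrete with a simplified sketch of the storage balance, assuming a per-timestep update of the form c(t+1) = c(t)·(1 − ċ_rel,loss·Δt) + η_in·P_in·Δt − P_out·Δt/η_out. The exact discretization used by flixopt may differ; all names here are illustrative:

```python
def simulate_charge_state(
    initial_charge, charge_rates, discharge_rates,
    dt_hours=1.0, eta_charge=1.0, eta_discharge=1.0,
    relative_loss_per_hour=0.0,
):
    """Roll the charge state c(t) forward over given flow-rate profiles."""
    c = initial_charge
    states = [c]
    for p_in, p_out in zip(charge_rates, discharge_rates):
        c = (
            c * (1 - relative_loss_per_hour * dt_hours)  # self-discharge
            + p_in * eta_charge * dt_hours               # charging (with losses)
            - p_out / eta_discharge * dt_hours           # discharging (with losses)
        )
        states.append(c)
    return states

states = simulate_charge_state(
    initial_charge=50.0,
    charge_rates=[10.0, 0.0],     # kW into the storage
    discharge_rates=[0.0, 10.0],  # kW out of the storage
    eta_charge=0.95, eta_discharge=0.95,
)
# Charging 10 kW for 1 h at 95% efficiency adds 9.5 kWh; discharging
# 10 kW for 1 h at 95% efficiency removes ~10.53 kWh from the store.
```

Note how both efficiencies work against the stored amount: charging stores less than what flows in, discharging withdraws more than what flows out.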
Parameters:
Name | Type | Description | Default |
---|---|---|---|
label
|
str
|
Element identifier used in the FlowSystem. |
required |
charging
|
Flow
|
Incoming flow for loading the storage. |
required |
discharging
|
Flow
|
Outgoing flow for unloading the storage. |
required |
capacity_in_flow_hours
|
PeriodicDataUser | InvestParameters
|
Storage capacity in flow-hours (kWh, m³, kg). Scalar for fixed size or InvestParameters for optimization. |
required |
relative_minimum_charge_state
|
TemporalDataUser
|
Minimum charge state (0-1). Default: 0. |
0
|
relative_maximum_charge_state
|
TemporalDataUser
|
Maximum charge state (0-1). Default: 1. |
1
|
initial_charge_state
|
PeriodicDataUser | Literal['lastValueOfSim']
|
Charge at start. Numeric or 'lastValueOfSim'. Default: 0. |
0
|
minimal_final_charge_state
|
PeriodicDataUser | None
|
Minimum absolute charge required at end (optional). |
None
|
maximal_final_charge_state
|
PeriodicDataUser | None
|
Maximum absolute charge allowed at end (optional). |
None
|
relative_minimum_final_charge_state
|
PeriodicDataUser | None
|
Minimum relative charge at end. Defaults to last value of relative_minimum_charge_state. |
None
|
relative_maximum_final_charge_state
|
PeriodicDataUser | None
|
Maximum relative charge at end. Defaults to last value of relative_maximum_charge_state. |
None
|
eta_charge
|
TemporalDataUser
|
Charging efficiency (0-1). Default: 1. |
1
|
eta_discharge
|
TemporalDataUser
|
Discharging efficiency (0-1). Default: 1. |
1
|
relative_loss_per_hour
|
TemporalDataUser
|
Self-discharge per hour (0-0.1). Default: 0. |
0
|
prevent_simultaneous_charge_and_discharge
|
bool
|
Prevent charging and discharging simultaneously. Adds binary variables. Default: True. |
True
|
meta_data
|
dict | None
|
Additional information stored in results. Python native types only. |
None
|
Examples:
Battery energy storage system:
battery = Storage(
label='lithium_battery',
charging=battery_charge_flow,
discharging=battery_discharge_flow,
capacity_in_flow_hours=100, # 100 kWh capacity
eta_charge=0.95, # 95% charging efficiency
eta_discharge=0.95, # 95% discharging efficiency
relative_loss_per_hour=0.001, # 0.1% loss per hour
relative_minimum_charge_state=0.1, # Never below 10% SOC
relative_maximum_charge_state=0.9, # Never above 90% SOC
)
Thermal storage with cycling constraints:
thermal_storage = Storage(
label='hot_water_tank',
charging=heat_input,
discharging=heat_output,
capacity_in_flow_hours=500, # 500 kWh thermal capacity
initial_charge_state=250, # Start half full
# Impact of temperature on energy capacity
relative_maximum_charge_state=water_temperature_spread / rated_temperature_spread,
eta_charge=0.90, # Heat exchanger losses
eta_discharge=0.85, # Distribution losses
relative_loss_per_hour=0.02, # 2% thermal loss per hour
prevent_simultaneous_charge_and_discharge=True,
)
Pumped hydro storage with investment optimization:
pumped_hydro = Storage(
label='pumped_hydro',
charging=pump_flow,
discharging=turbine_flow,
capacity_in_flow_hours=InvestParameters(
minimum_size=1000, # Minimum economic scale
maximum_size=10000, # Site constraints
specific_effects={'cost': 150}, # €150/MWh capacity
fix_effects={'cost': 50_000_000}, # €50M fixed costs
),
eta_charge=0.85, # Pumping efficiency
eta_discharge=0.90, # Turbine efficiency
initial_charge_state='lastValueOfSim', # Ensuring no deficit compared to start
relative_loss_per_hour=0.0001, # Minimal evaporation
)
Material storage with inventory management:
fuel_storage = Storage(
label='natural_gas_storage',
charging=gas_injection,
discharging=gas_withdrawal,
capacity_in_flow_hours=10000, # 10,000 m³ storage volume
initial_charge_state=3000, # Start with 3,000 m³
minimal_final_charge_state=1000, # Strategic reserve
maximal_final_charge_state=9000, # Prevent overflow
eta_charge=0.98, # Compression losses
eta_discharge=0.95, # Pressure reduction losses
relative_loss_per_hour=0.0005, # 0.05% leakage per hour
prevent_simultaneous_charge_and_discharge=False, # Allow flow-through
)
Note
Mathematical formulation: See Storage for charge state evolution equations and balance constraints.
Efficiency parameters (eta_charge, eta_discharge) are dimensionless (0-1 range). The relative_loss_per_hour represents exponential decay per hour.
Binary variables: When prevent_simultaneous_charge_and_discharge is True, binary variables enforce mutual exclusivity, increasing solution time but preventing unrealistic simultaneous charging and discharging.
Units: Flow rates and charge states are related through 'flow hours' (= flow_rate × time). With flow rates in kW, the charge state is (usually) in kWh; with flow rates in m³/h, it is in m³.
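The self-discharge behaviour described in the note above is easy to check by hand. A small illustrative sketch (plain arithmetic, not the flixopt formulation) using the 500 kWh hot_water_tank from the examples:

```python
# With relative_loss_per_hour = 0.02, roughly 2% of the currently stored
# energy is lost in each hour (illustrative arithmetic only).
charge = 500.0  # kWh, a full hot_water_tank
for _ in range(24):
    charge *= 1 - 0.02
# After one day only about 62% of the initial charge remains,
# which is why high-loss thermal stores suit short-term cycling.
```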
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables, everything else goes to attrs.
It's recommended to call this method only on Interfaces whose numeric data is stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after connecting and transforming the FlowSystem.
Returns:
Type | Description |
---|---|
Dataset
|
xr.Dataset: Dataset containing all DataArrays with basic objects only in attributes |
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails due to naming conflicts or invalid data |
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to save the NetCDF file |
required |
compression
|
int
|
Compression level (0-9) |
0
|
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails |
IOError
|
If file cannot be written |
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds
|
Dataset
|
Dataset containing the object data |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
ValueError
|
If dataset format is invalid or class mismatch |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to the NetCDF file |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be read |
ValueError
|
If file format is invalid |
get_structure
Get object structure as a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean
|
bool
|
If True, remove None and empty dicts and lists. |
False
|
stats
|
bool
|
If True, replace DataArray references with statistics |
False
|
Returns:
Type | Description |
---|---|
dict
|
Dictionary representation of the object structure |
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the JSON file. |
required |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be written |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
Transmission
Transmission(label: str, in1: Flow, out1: Flow, in2: Flow | None = None, out2: Flow | None = None, relative_losses: TemporalDataUser | None = None, absolute_losses: TemporalDataUser | None = None, on_off_parameters: OnOffParameters = None, prevent_simultaneous_flows_in_both_directions: bool = True, balanced: bool = False, meta_data: dict | None = None)
Bases: Component
Models transmission infrastructure that transports flows between two locations with losses.
Transmission components represent physical infrastructure like pipes, cables, transmission lines, or conveyor systems that transport energy or materials between two points. They can model both unidirectional and bidirectional flow with configurable loss mechanisms and operational constraints.
The component supports complex transmission scenarios including relative losses (proportional to flow), absolute losses (fixed when active), and bidirectional operation with flow direction constraints.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
label
|
str
|
The label of the Element. Used to identify it in the FlowSystem. |
required |
in1
|
Flow
|
The primary inflow (side A). Pass InvestParameters here for capacity optimization. |
required |
out1
|
Flow
|
The primary outflow (side B). |
required |
in2
|
Flow | None
|
Optional secondary inflow (side B) for bidirectional operation. If in1 has InvestParameters, in2 will automatically have matching capacity. |
None
|
out2
|
Flow | None
|
Optional secondary outflow (side A) for bidirectional operation. |
None
|
relative_losses
|
TemporalDataUser | None
|
Proportional losses as fraction of throughput (e.g., 0.02 for 2% loss). Applied as: output = input × (1 - relative_losses) |
None
|
absolute_losses
|
TemporalDataUser | None
|
Fixed losses that occur when transmission is active. Automatically creates binary variables for on/off states. |
None
|
on_off_parameters
|
OnOffParameters
|
Parameters defining binary operation constraints and costs. |
None
|
prevent_simultaneous_flows_in_both_directions
|
bool
|
If True, prevents simultaneous flow in both directions. Increases binary variables but reflects physical reality for most transmission systems. Default is True. |
True
|
balanced
|
bool
|
Whether to equate the size of the in1 and in2 Flow. Needs InvestParameters in both Flows. |
False
|
meta_data
|
dict | None
|
Used to store additional information. Not used internally but saved in results. Only use Python native types. |
None
|
Examples:
Simple electrical transmission line:
power_line = Transmission(
label='110kv_line',
in1=substation_a_out,
out1=substation_b_in,
relative_losses=0.03, # 3% line losses
)
Bidirectional natural gas pipeline:
gas_pipeline = Transmission(
label='interstate_pipeline',
in1=compressor_station_a,
out1=distribution_hub_b,
in2=compressor_station_b,
out2=distribution_hub_a,
relative_losses=0.005, # 0.5% friction losses
absolute_losses=50, # 50 kW compressor power when active
prevent_simultaneous_flows_in_both_directions=True,
)
District heating network with investment optimization:
heating_network = Transmission(
label='dh_main_line',
in1=Flow(
label='heat_supply',
bus=central_plant_bus,
size=InvestParameters(
minimum_size=1000, # Minimum 1 MW capacity
maximum_size=10000, # Maximum 10 MW capacity
specific_effects={'cost': 200}, # €200/kW capacity
fix_effects={'cost': 500000}, # €500k fixed installation
),
),
out1=district_heat_demand,
relative_losses=0.15, # 15% thermal losses in distribution
)
Material conveyor with on/off operation:
conveyor_belt = Transmission(
label='material_transport',
in1=loading_station,
out1=unloading_station,
absolute_losses=25, # 25 kW motor power when running
on_off_parameters=OnOffParameters(
effects_per_switch_on={'maintenance': 0.1},
consecutive_on_hours_min=2, # Minimum 2-hour operation
switch_on_total_max=10, # Maximum 10 starts per day
),
)
Note
The transmission equation balances flows with losses: output_flow = input_flow × (1 - relative_losses) - absolute_losses
For bidirectional transmission, each direction has independent loss calculations.
When using InvestParameters on in1, the capacity automatically applies to in2 to maintain consistent bidirectional capacity without additional investment variables.
Absolute losses force the creation of binary on/off variables, which increases computational complexity but enables realistic modeling of equipment with standby power consumption.
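The transmission equation quoted in the note above can be checked with a short sketch (illustrative names, not the internal solver formulation):

```python
def transmitted(input_flow, relative_losses=0.0, absolute_losses=0.0, active=True):
    """output = input * (1 - relative_losses) - absolute_losses (when active)."""
    if not active:
        return 0.0
    return input_flow * (1 - relative_losses) - absolute_losses

# Gas pipeline from the example above: 0.5% friction losses plus a fixed
# 50 kW compressor demand while the line is active.
out = transmitted(10_000, relative_losses=0.005, absolute_losses=50)
# -> 9900.0: 10 MW in, 9.95 MW after friction, minus 50 kW compressor power
```

Because the absolute term only applies while the line is active, the solver needs a binary on/off variable whenever absolute_losses is set.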
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables, everything else goes to attrs.
It's recommended to call this method only on Interfaces whose numeric data is stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after connecting and transforming the FlowSystem.
Returns:
Type | Description |
---|---|
Dataset
|
xr.Dataset: Dataset containing all DataArrays with basic objects only in attributes |
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails due to naming conflicts or invalid data |
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to save the NetCDF file |
required |
compression
|
int
|
Compression level (0-9) |
0
|
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails |
IOError
|
If file cannot be written |
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds
|
Dataset
|
Dataset containing the object data |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
ValueError
|
If dataset format is invalid or class mismatch |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to the NetCDF file |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be read |
ValueError
|
If file format is invalid |
get_structure
Get object structure as a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean
|
bool
|
If True, remove None and empty dicts and lists. |
False
|
stats
|
bool
|
If True, replace DataArray references with statistics |
False
|
Returns:
Type | Description |
---|---|
dict
|
Dictionary representation of the object structure |
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the JSON file. |
required |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be written |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
CONFIG
Configuration for flixopt library.
Always call CONFIG.apply() after changes.
Attributes:
Name | Type | Description |
---|---|---|
Logging |
Logging configuration. |
|
Modeling |
Optimization modeling parameters. |
|
config_name |
str
|
Configuration name. |
Examples:
Load from YAML file:
Classes
Logging
Logging configuration.
Silent by default. Enable via console=True or file='path'.
Attributes:
Name | Type | Description |
---|---|---|
level |
Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']
|
Logging level. |
file |
str | None
|
Log file path for file logging. |
console |
bool | Literal['stdout', 'stderr']
|
Enable console output. |
rich |
bool
|
Use Rich library for enhanced output. |
max_file_size |
int
|
Max file size before rotation. |
backup_count |
int
|
Number of backup files to keep. |
date_format |
str
|
Date/time format string. |
format |
str
|
Log message format string. |
console_width |
int
|
Console width for Rich handler. |
show_path |
bool
|
Show file paths in messages. |
show_logger_name |
bool
|
Show logger name in messages. |
Colors |
bool
|
ANSI color codes for log levels. |
Examples:
# File logging with rotation
CONFIG.Logging.file = 'app.log'
CONFIG.Logging.max_file_size = 5_242_880 # 5MB
CONFIG.apply()
# Rich handler with stdout
CONFIG.Logging.console = True # or 'stdout'
CONFIG.Logging.rich = True
CONFIG.apply()
# Console output to stderr
CONFIG.Logging.console = 'stderr'
CONFIG.apply()
Classes
ANSI color codes for log levels.
Attributes:
Name | Type | Description |
---|---|---|
DEBUG |
str
|
ANSI color for DEBUG level. |
INFO |
str
|
ANSI color for INFO level. |
WARNING |
str
|
ANSI color for WARNING level. |
ERROR |
str
|
ANSI color for ERROR level. |
CRITICAL |
str
|
ANSI color for CRITICAL level. |
Examples:
CONFIG.Logging.Colors.INFO = '\033[32m' # Green
CONFIG.Logging.Colors.ERROR = '\033[1m\033[31m' # Bold red
CONFIG.apply()
Common ANSI codes
- '\033[30m' - Black
- '\033[31m' - Red
- '\033[32m' - Green
- '\033[33m' - Yellow
- '\033[34m' - Blue
- '\033[35m' - Magenta
- '\033[36m' - Cyan
- '\033[37m' - White
- '\033[90m' - Bright Black/Gray
- '\033[0m' - Reset to default
- '\033[1m\033[3Xm' - Bold (replace X with color code 0-7)
- '\033[2m\033[3Xm' - Dim (replace X with color code 0-7)
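The codes listed above can be combined into colored log prefixes; a minimal plain-Python demonstration, independent of flixopt's own logging setup:

```python
GREEN, BOLD_RED, RESET = '\033[32m', '\033[1m\033[31m', '\033[0m'

def colorize(text, code):
    """Wrap text in an ANSI color code and reset afterwards."""
    return f'{code}{text}{RESET}'

print(colorize('INFO: all good', GREEN))
print(colorize('ERROR: something failed', BOLD_RED))
```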
Modeling
Optimization modeling parameters.
Attributes:
Name | Type | Description |
---|---|---|
big |
int
|
Large number for big-M constraints. |
epsilon |
float
|
Tolerance for numerical comparisons. |
big_binary_bound |
int
|
Upper bound for binary constraints. |
Functions
load_from_file
classmethod
Load configuration from YAML file and apply it.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
config_file
|
str | Path
|
Path to the YAML configuration file. |
required |
Raises:
Type | Description |
---|---|
FileNotFoundError
|
If the config file does not exist. |
TimeSeriesData
TimeSeriesData(*args: Any, aggregation_group: str | None = None, aggregation_weight: float | None = None, agg_group: str | None = None, agg_weight: float | None = None, **kwargs: Any)
Bases: DataArray
Minimal TimeSeriesData that inherits from xr.DataArray with aggregation metadata.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
*args
|
Any
|
Arguments passed to DataArray |
()
|
aggregation_group
|
str | None
|
Aggregation group name |
None
|
aggregation_weight
|
float | None
|
Aggregation weight (0-1) |
None
|
agg_group
|
str | None
|
Deprecated, use aggregation_group instead |
None
|
agg_weight
|
float | None
|
Deprecated, use aggregation_weight instead |
None
|
**kwargs
|
Any
|
Additional arguments passed to DataArray |
{}
|
Functions
fit_to_coords
Fit the data to the given coordinates. Returns a new TimeSeriesData object if the current coords are different.
Effect
Effect(label: str, unit: str, description: str, meta_data: dict | None = None, is_standard: bool = False, is_objective: bool = False, share_from_temporal: TemporalEffectsUser | None = None, share_from_periodic: PeriodicEffectsUser | None = None, minimum_temporal: PeriodicEffectsUser | None = None, maximum_temporal: PeriodicEffectsUser | None = None, minimum_periodic: PeriodicEffectsUser | None = None, maximum_periodic: PeriodicEffectsUser | None = None, minimum_per_hour: TemporalDataUser | None = None, maximum_per_hour: TemporalDataUser | None = None, minimum_total: Scalar | None = None, maximum_total: Scalar | None = None, **kwargs)
Bases: Element
Represents system-wide impacts like costs, emissions, resource consumption, or other effects.
Effects capture the broader impacts of system operation and investment decisions beyond the primary energy/material flows. Each Effect accumulates contributions from Components, Flows, and other system elements. One Effect is typically chosen as the optimization objective, while others can serve as constraints or tracking metrics.
Effects support comprehensive modeling including operational and investment contributions, cross-effect relationships (e.g., carbon pricing), and flexible constraint formulation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
label
|
str
|
The label of the Element. Used to identify it in the FlowSystem. |
required |
unit
|
str
|
The unit of the effect (e.g., '€', 'kg_CO2', 'kWh_primary', 'm²'). This is informative only and does not affect optimization calculations. |
required |
description
|
str
|
Descriptive name explaining what this effect represents. |
required |
is_standard
|
bool
|
If True, this is a standard effect allowing direct value input without effect dictionaries. Used for simplified effect specification (and less boilerplate code). |
False
|
is_objective
|
bool
|
If True, this effect serves as the optimization objective function. Only one effect can be marked as objective per optimization. |
False
|
share_from_temporal
|
TemporalEffectsUser | None
|
Temporal cross-effect contributions. Maps temporal contributions from other effects to this effect |
None
|
share_from_periodic
|
PeriodicEffectsUser | None
|
Periodic cross-effect contributions. Maps periodic contributions from other effects to this effect. |
None
|
minimum_temporal
|
PeriodicEffectsUser | None
|
Minimum allowed total contribution across all timesteps. |
None
|
maximum_temporal
|
PeriodicEffectsUser | None
|
Maximum allowed total contribution across all timesteps. |
None
|
minimum_per_hour
|
TemporalDataUser | None
|
Minimum allowed contribution per hour. |
None
|
maximum_per_hour
|
TemporalDataUser | None
|
Maximum allowed contribution per hour. |
None
|
minimum_periodic
|
PeriodicEffectsUser | None
|
Minimum allowed total periodic contribution. |
None
|
maximum_periodic
|
PeriodicEffectsUser | None
|
Maximum allowed total periodic contribution. |
None
|
minimum_total
|
Scalar | None
|
Minimum allowed total effect (temporal + periodic combined). |
None
|
maximum_total
|
Scalar | None
|
Maximum allowed total effect (temporal + periodic combined). |
None
|
meta_data
|
dict | None
|
Used to store additional information. Not used internally but saved in results. Only use Python native types. |
None
|
Deprecated Parameters (for backwards compatibility):
minimum_operation: Use minimum_temporal instead.
maximum_operation: Use maximum_temporal instead.
minimum_invest: Use minimum_periodic instead.
maximum_invest: Use maximum_periodic instead.
minimum_operation_per_hour: Use minimum_per_hour instead.
maximum_operation_per_hour: Use maximum_per_hour instead.
Examples:
Basic cost objective:
cost_effect = Effect(
label='system_costs',
unit='€',
description='Total system costs',
is_objective=True,
)
CO2 emissions:
co2_effect = Effect(
label='CO2',
unit='kg_CO2',
description='Carbon dioxide emissions',
maximum_total=1_000_000, # 1000 t CO2 annual limit
)
Land use constraint:
land_use = Effect(
label='land_usage',
unit='m²',
description='Land area requirement',
maximum_total=50_000, # Maximum 5 hectares available
)
Primary energy tracking:
primary_energy = Effect(
label='primary_energy',
unit='kWh_primary',
description='Primary energy consumption',
)
Cost objective with carbon and primary energy pricing:
```python
cost_effect = Effect(
label='system_costs',
unit='€',
description='Total system costs',
is_objective=True,
share_from_temporal={
'primary_energy': 0.08, # 0.08 €/kWh_primary
'CO2': 0.2, # Carbon pricing: 0.2 €/kg_CO2 into costs if used on a cost effect
},
)
```
Water consumption with tiered constraints:
```python
water_usage = Effect(
label='water_consumption',
unit='m³',
description='Industrial water usage',
minimum_per_hour=10, # Minimum 10 m³/h for process stability
maximum_per_hour=500, # Maximum 500 m³/h capacity limit
maximum_total=100_000, # Annual permit limit: 100,000 m³
)
```
Note
Effect bounds can be None to indicate no constraint in that direction.
Cross-effect relationships enable sophisticated modeling like carbon pricing, resource valuation, or multi-criteria optimization with weighted objectives.
The unit field is purely informational - ensure dimensional consistency across all contributions to each effect manually.
Effects are accumulated as: Total = Σ(temporal contributions) + Σ(periodic contributions)
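The accumulation rule and the cross-effect shares from share_from_temporal can be sketched as plain arithmetic. The numbers and names below are illustrative only (mirroring the carbon-pricing example above), not the internal flixopt computation:

```python
# Cross-effect pricing as in the cost example: every kg of CO2 adds 0.2 €
# and every kWh of primary energy adds 0.08 € to the cost effect.
share_from_temporal = {'CO2': 0.2, 'primary_energy': 0.08}
temporal_totals = {'CO2': 1_000.0, 'primary_energy': 20_000.0}

own_temporal = 5_000.0   # direct operational costs in €
own_periodic = 12_000.0  # investment-related costs in €

cross_shares = sum(
    factor * temporal_totals[effect]
    for effect, factor in share_from_temporal.items()
)
total = own_temporal + cross_shares + own_periodic
# 5000 € direct + 1800 € priced-in CO2/primary energy + 12000 € invest
```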
Attributes
minimum_operation
property
writable
DEPRECATED: Use 'minimum_temporal' property instead.
maximum_operation
property
writable
DEPRECATED: Use 'maximum_temporal' property instead.
minimum_invest
property
writable
DEPRECATED: Use 'minimum_periodic' property instead.
maximum_invest
property
writable
DEPRECATED: Use 'maximum_periodic' property instead.
minimum_operation_per_hour
property
writable
DEPRECATED: Use 'minimum_per_hour' property instead.
maximum_operation_per_hour
property
writable
DEPRECATED: Use 'maximum_per_hour' property instead.
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables, everything else goes to attrs.
It's recommended to call this method only on Interfaces whose numeric data is stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after connecting and transforming the FlowSystem.
Returns:
Type | Description |
---|---|
Dataset
|
xr.Dataset: Dataset containing all DataArrays with basic objects only in attributes |
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails due to naming conflicts or invalid data |
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to save the NetCDF file |
required |
compression
|
int
|
Compression level (0-9) |
0
|
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails |
IOError
|
If file cannot be written |
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds
|
Dataset
|
Dataset containing the object data |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
ValueError
|
If dataset format is invalid or class mismatch |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to the NetCDF file |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be read |
ValueError
|
If file format is invalid |
get_structure
Get object structure as a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean
|
bool
|
If True, remove None and empty dicts and lists. |
False
|
stats
|
bool
|
If True, replace DataArray references with statistics |
False
|
Returns:
Type | Description |
---|---|
dict
|
Dictionary representation of the object structure |
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the JSON file. |
required |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be written |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
Bus
Bus(label: str, excess_penalty_per_flow_hour: TemporalDataUser | None = 100000.0, meta_data: dict | None = None)
Bases: Element
Buses represent nodal balances between flow rates, serving as connection points.
A Bus enforces energy or material balance constraints where the sum of all incoming flows must equal the sum of all outgoing flows at each time step. Buses represent physical or logical connection points for energy carriers (electricity, heat, gas) or material flows between different Components.
Mathematical Formulation
See the complete mathematical model in the documentation: Bus
Parameters:
Name | Type | Description | Default |
---|---|---|---|
label
|
str
|
The label of the Element. Used to identify it in the FlowSystem. |
required |
excess_penalty_per_flow_hour
|
TemporalDataUser | None
|
Penalty costs for bus balance violations. When None, no excess/deficit is allowed (hard constraint). When set to a value > 0, allows bus imbalances at penalty cost. Default is 1e5 (high penalty). |
100000.0
|
meta_data
|
dict | None
|
Used to store additional information. Not used internally but saved in results. Only use Python native types. |
None
|
Examples:
Electrical bus with strict balance:
electricity_bus = Bus(
label='main_electrical_bus',
excess_penalty_per_flow_hour=None, # No imbalance allowed
)
Heat network with penalty for imbalances:
heat_network = Bus(
label='district_heating_network',
excess_penalty_per_flow_hour=1000, # €1000/MWh penalty for imbalance
)
Material flow with time-varying penalties:
material_hub = Bus(
label='material_processing_hub',
excess_penalty_per_flow_hour=waste_disposal_costs, # Time series
)
Note
The bus balance equation enforced is: Σ(inflows) = Σ(outflows) + excess - deficit
When excess_penalty_per_flow_hour is None, excess and deficit are forced to zero. When a penalty cost is specified, the optimization can choose to violate the balance if economically beneficial, paying the penalty. The penalty is added to the objective directly.
Empty inputs and outputs lists are initialized and populated automatically by the FlowSystem during system setup.
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables, everything else goes to attrs.
It's recommended to call this method only on Interfaces with all numeric data stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after connecting and transforming the FlowSystem.
Returns:
Type | Description |
---|---|
Dataset
|
xr.Dataset: Dataset containing all DataArrays with basic objects only in attributes |
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails due to naming conflicts or invalid data |
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to save the NetCDF file |
required |
compression
|
int
|
Compression level (0-9) |
0
|
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails |
IOError
|
If file cannot be written |
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds
|
Dataset
|
Dataset containing the object data |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
ValueError
|
If dataset format is invalid or class mismatch |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to the NetCDF file |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be read |
ValueError
|
If file format is invalid |
get_structure
Get object structure as a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean
|
bool
|
If True, remove None and empty dicts and lists. |
False
|
stats
|
bool
|
If True, replace DataArray references with statistics |
False
|
Returns:
Type | Description |
---|---|
dict
|
Dictionary representation of the object structure |
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the JSON file. |
required |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be written |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
Flow
Flow(label: str, bus: str, size: Scalar | InvestParameters = None, fixed_relative_profile: TemporalDataUser | None = None, relative_minimum: TemporalDataUser = 0, relative_maximum: TemporalDataUser = 1, effects_per_flow_hour: TemporalEffectsUser | None = None, on_off_parameters: OnOffParameters | None = None, flow_hours_total_max: Scalar | None = None, flow_hours_total_min: Scalar | None = None, load_factor_min: Scalar | None = None, load_factor_max: Scalar | None = None, previous_flow_rate: Scalar | list[Scalar] | None = None, meta_data: dict | None = None)
Bases: Element
Define a directed flow of energy or material between bus and component.
A Flow represents the transfer of energy (electricity, heat, fuel) or material between a Bus and a Component in a specific direction. The flow rate is the primary optimization variable, with constraints and costs defined through various parameters. Flows can have fixed or variable sizes, operational constraints, and complex on/off behavior.
Key Concepts
- Flow Rate: The instantaneous rate of energy/material transfer (the optimization variable) [kW, m³/h, kg/h]
- Flow Hours: The amount of energy/material transferred per timestep [kWh, m³, kg]
- Flow Size: The maximum capacity or nominal rating of the flow [kW, m³/h, kg/h]
- Relative Bounds: Flow rate limits expressed as fractions of flow size
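The relationship between these quantities can be sketched numerically. This is a minimal illustration using plain NumPy; the variable names are hypothetical and not part of the flixopt API:

```python
import numpy as np

# Hypothetical per-timestep durations and optimized flow rates
hours_per_timestep = np.array([1.0, 1.0, 0.5, 0.5])  # h
flow_rate = np.array([80.0, 100.0, 60.0, 40.0])      # kW

size = 100.0                                # kW nominal capacity (flow size)
relative_minimum, relative_maximum = 0.0, 1.0

# Flow hours per timestep: the rate integrated over the step duration
flow_hours = flow_rate * hours_per_timestep  # kWh
total_flow_hours = flow_hours.sum()          # 230.0 kWh

# Relative bounds constrain the rate as fractions of size
assert np.all(flow_rate >= relative_minimum * size)
assert np.all(flow_rate <= relative_maximum * size)
```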
Integration with Parameter Classes
- InvestParameters: used for size when the flow size is an investment decision
- OnOffParameters: used for on_off_parameters when the flow has discrete on/off states
Mathematical Formulation
See the complete mathematical model in the documentation: Flow
Parameters:
Name | Type | Description | Default |
---|---|---|---|
label
|
str
|
Unique flow identifier within its component. |
required |
bus
|
str
|
Bus label this flow connects to. |
required |
size
|
Scalar | InvestParameters
|
Flow capacity. Scalar, InvestParameters, or None (uses CONFIG.Modeling.big). |
None
|
relative_minimum
|
TemporalDataUser
|
Minimum flow rate as fraction of size (0-1). Default: 0. |
0
|
relative_maximum
|
TemporalDataUser
|
Maximum flow rate as fraction of size. Default: 1. |
1
|
load_factor_min
|
Scalar | None
|
Minimum average utilization (0-1). Default: 0. |
None
|
load_factor_max
|
Scalar | None
|
Maximum average utilization (0-1). Default: 1. |
None
|
effects_per_flow_hour
|
TemporalEffectsUser | None
|
Operational costs/impacts per flow-hour. Dict mapping effect names to values (e.g., {'cost': 45, 'CO2': 0.8}). |
None
|
on_off_parameters
|
OnOffParameters | None
|
Binary operation constraints (OnOffParameters). Default: None. |
None
|
flow_hours_total_max
|
Scalar | None
|
Maximum cumulative flow-hours. Alternative to load_factor_max. |
None
|
flow_hours_total_min
|
Scalar | None
|
Minimum cumulative flow-hours. Alternative to load_factor_min. |
None
|
fixed_relative_profile
|
TemporalDataUser | None
|
Predetermined pattern as fraction of size. Flow rate = size × fixed_relative_profile(t). |
None
|
previous_flow_rate
|
Scalar | list[Scalar] | None
|
Initial flow state for on/off dynamics. Default: None (off). |
None
|
meta_data
|
dict | None
|
Additional info stored in results. Python native types only. |
None
|
Examples:
Basic power flow with fixed capacity:
generator_output = Flow(
label='electricity_out',
bus='electricity_grid',
size=100, # 100 MW capacity
relative_minimum=0.4, # Cannot operate below 40 MW
effects_per_flow_hour={'fuel_cost': 45, 'co2_emissions': 0.8},
)
Investment decision for battery capacity:
battery_flow = Flow(
label='electricity_storage',
bus='electricity_grid',
size=InvestParameters(
minimum_size=10, # Minimum 10 MWh
maximum_size=100, # Maximum 100 MWh
specific_effects={'cost': 150_000}, # €150k/MWh annualized
),
)
Heat pump with startup costs and minimum run times:
heat_pump = Flow(
label='heat_output',
bus='heating_network',
size=50, # 50 kW thermal
relative_minimum=0.3, # Minimum 15 kW output when on
effects_per_flow_hour={'electricity_cost': 25, 'maintenance': 2},
on_off_parameters=OnOffParameters(
effects_per_switch_on={'startup_cost': 100, 'wear': 0.1},
consecutive_on_hours_min=2, # Must run at least 2 hours
consecutive_off_hours_min=1, # Must stay off at least 1 hour
switch_on_total_max=200, # Maximum 200 starts per period
),
)
Fixed renewable generation profile:
solar_generation = Flow(
label='solar_power',
bus='electricity_grid',
size=25, # 25 MW installed capacity
fixed_relative_profile=np.array([0, 0.1, 0.4, 0.8, 0.9, 0.7, 0.3, 0.1, 0]),
effects_per_flow_hour={'maintenance_costs': 5}, # €5/MWh maintenance
)
Industrial process with annual utilization limits:
production_line = Flow(
label='product_output',
bus='product_market',
size=1000, # 1000 units/hour capacity
load_factor_min=0.6, # Must achieve 60% annual utilization
load_factor_max=0.85, # Cannot exceed 85% for maintenance
effects_per_flow_hour={'variable_cost': 12, 'quality_control': 0.5},
)
Design Considerations
Size vs Load Factors: Use flow_hours_total_min/max for absolute limits, load_factor_min/max for utilization-based constraints.
Relative Bounds: Set relative_minimum > 0 only when equipment cannot operate below that level. Use on_off_parameters for discrete on/off behavior.
Fixed Profiles: Use fixed_relative_profile for known exact patterns, relative_maximum for upper bounds on optimization variables.
Notes
- Default size (CONFIG.Modeling.big) is used when size=None
- list inputs for previous_flow_rate are converted to NumPy arrays
- Flow direction is determined by component input/output designation
Deprecated
Passing Bus objects to the bus parameter is deprecated. Use bus label strings instead.
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables, everything else goes to attrs.
It's recommended to call this method only on Interfaces with all numeric data stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after connecting and transforming the FlowSystem.
Returns:
Type | Description |
---|---|
Dataset
|
xr.Dataset: Dataset containing all DataArrays with basic objects only in attributes |
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails due to naming conflicts or invalid data |
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to save the NetCDF file |
required |
compression
|
int
|
Compression level (0-9) |
0
|
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails |
IOError
|
If file cannot be written |
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds
|
Dataset
|
Dataset containing the object data |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
ValueError
|
If dataset format is invalid or class mismatch |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to the NetCDF file |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be read |
ValueError
|
If file format is invalid |
get_structure
Get object structure as a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean
|
bool
|
If True, remove None and empty dicts and lists. |
False
|
stats
|
bool
|
If True, replace DataArray references with statistics |
False
|
Returns:
Type | Description |
---|---|
dict
|
Dictionary representation of the object structure |
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the JSON file. |
required |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be written |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
FlowSystem
FlowSystem(timesteps: DatetimeIndex, periods: Index | None = None, scenarios: Index | None = None, hours_of_last_timestep: float | None = None, hours_of_previous_timesteps: int | float | ndarray | None = None, weights: PeriodicDataUser | None = None, scenario_independent_sizes: bool | list[str] = True, scenario_independent_flow_rates: bool | list[str] = False)
Bases: Interface
A FlowSystem organizes the high level Elements (Components, Buses & Effects).
This is the main container class that users work with to build and manage their System.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
timesteps
|
DatetimeIndex
|
The timesteps of the model. |
required |
periods
|
Index | None
|
The periods of the model. |
None
|
scenarios
|
Index | None
|
The scenarios of the model. |
None
|
hours_of_last_timestep
|
float | None
|
The duration of the last time step. Uses the last time interval if not specified |
None
|
hours_of_previous_timesteps
|
int | float | ndarray | None
|
The duration of previous timesteps. If None, the first time increment of time_series is used. This is needed to calculate previous durations (for example consecutive_on_hours). If you use an array, make sure it is long enough to cover all previous values! |
None
|
weights
|
PeriodicDataUser | None
|
The weights of each period and scenario. If None, all scenarios have the same weight (normalized to 1). It's recommended to normalize the weights so they sum to 1. |
None
|
scenario_independent_sizes
|
bool | list[str]
|
Controls whether investment sizes are equalized across scenarios. - True: All sizes are shared/equalized across scenarios - False: All sizes are optimized separately per scenario - list[str]: Only specified components (by label_full) are equalized across scenarios |
True
|
scenario_independent_flow_rates
|
bool | list[str]
|
Controls whether flow rates are equalized across scenarios. - True: All flow rates are shared/equalized across scenarios - False: All flow rates are optimized separately per scenario - list[str]: Only specified flows (by label_full) are equalized across scenarios |
False
|
Notes
- Creates an empty registry for components and buses, an empty EffectCollection, and a placeholder for a SystemModel.
- The instance starts disconnected (self._connected_and_transformed == False) and is connected and transformed automatically when a calculation is solved.
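A minimal construction sketch. Only the timesteps argument is required; the FlowSystem calls themselves are shown as comments because the exact import path (e.g. from flixopt import FlowSystem) is an assumption:

```python
import pandas as pd
# from flixopt import FlowSystem  # assumed import path

# Hourly timesteps for one week (168 steps)
timesteps = pd.date_range('2023-01-01', periods=168, freq='h')

# flow_system = FlowSystem(timesteps=timesteps)
# flow_system.add_elements(...)  # Buses, Components, Effects added afterwards
```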
Attributes
scenario_independent_sizes
property
writable
Controls whether investment sizes are equalized across scenarios.
Returns:
Type | Description |
---|---|
bool | list[str]
|
bool or list[str]: Configuration for scenario-independent sizing |
scenario_independent_flow_rates
property
writable
Controls whether flow rates are equalized across scenarios.
Returns:
Type | Description |
---|---|
bool | list[str]
|
bool or list[str]: Configuration for scenario-independent flow rates |
Functions
transform_data
Transform the data of the interface to match the FlowSystem's dimensions.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
flow_system
|
FlowSystem
|
The FlowSystem containing timing and dimensional information |
required |
name_prefix
|
str
|
The prefix to use for the names of the variables. Defaults to '', which results in no prefix. |
''
|
Raises:
Type | Description |
---|---|
NotImplementedError
|
Must be implemented by subclasses |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to the NetCDF file |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be read |
ValueError
|
If file format is invalid |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
calculate_hours_per_timestep
staticmethod
Calculate duration of each timestep as a 1D DataArray.
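The idea behind this computation can be illustrated with consecutive index differences. This is a pandas sketch of the semantics, not the internal implementation; the fallback for the last step mirrors hours_of_last_timestep:

```python
import pandas as pd

timesteps = pd.date_range('2023-01-01', periods=4, freq='2h')

# Duration of each step = gap to the next timestep; the last step
# falls back to the previous interval (cf. hours_of_last_timestep)
deltas = timesteps.to_series().diff().shift(-1)
deltas.iloc[-1] = deltas.iloc[-2]
hours = deltas.dt.total_seconds() / 3600  # [2.0, 2.0, 2.0, 2.0]
```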
to_dataset
Convert the FlowSystem to an xarray Dataset. Ensures FlowSystem is connected before serialization.
Returns:
Type | Description |
---|---|
Dataset
|
xr.Dataset: Dataset containing all DataArrays with structure in attributes |
from_dataset
classmethod
Create a FlowSystem from an xarray Dataset. Handles FlowSystem-specific reconstruction logic.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds
|
Dataset
|
Dataset containing the FlowSystem data |
required |
Returns:
Type | Description |
---|---|
FlowSystem
|
FlowSystem instance |
to_netcdf
Save the FlowSystem to a NetCDF file. Ensures FlowSystem is connected before saving.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the netCDF file. |
required |
compression
|
int
|
The compression level to use when saving the file. |
0
|
get_structure
Get FlowSystem structure. Ensures FlowSystem is connected before getting structure.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean
|
bool
|
If True, remove None and empty dicts and lists. |
False
|
stats
|
bool
|
If True, replace DataArray references with statistics |
False
|
to_json
Save the flow system to a JSON file. Ensures FlowSystem is connected before saving.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the JSON file. |
required |
fit_to_model_coords
fit_to_model_coords(name: str, data: TemporalDataUser | PeriodicDataUser | None, dims: Collection[FlowSystemDimensions] | None = None) -> TemporalData | PeriodicData | None
Fit data to model coordinate system (currently time, but extensible).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
name
|
str
|
Name of the data |
required |
data
|
TemporalDataUser | PeriodicDataUser | None
|
Data to fit to model coordinates |
required |
dims
|
Collection[FlowSystemDimensions] | None
|
Collection of dimension names to use for fitting. If None, all dimensions are used. |
None
|
Returns:
Type | Description |
---|---|
TemporalData | PeriodicData | None
|
xr.DataArray aligned to model coordinate system. If data is None, returns None. |
fit_effects_to_model_coords
fit_effects_to_model_coords(label_prefix: str | None, effect_values: TemporalEffectsUser | PeriodicEffectsUser | None, label_suffix: str | None = None, dims: Collection[FlowSystemDimensions] | None = None, delimiter: str = '|') -> TemporalEffects | PeriodicEffects | None
Transform user-provided EffectValues into internal datatypes aligned with model coordinates.
connect_and_transform
Connect all elements and transform their data to match the FlowSystem's dimensions.
add_elements
Add Components (Storages, Boilers, Heatpumps, ...), Buses, or Effects to the FlowSystem
Parameters:
Name | Type | Description | Default |
---|---|---|---|
*elements
|
Element
|
Subclasses of Element, such as Boiler, HeatPump, Bus, ... modeling Elements |
()
|
create_model
Create a linopy model from the FlowSystem.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
normalize_weights
|
bool
|
Whether to automatically normalize the weights (periods and scenarios) to sum up to 1 when solving. |
True
|
plot_network
plot_network(path: bool | str | Path = 'flow_system.html', controls: bool | list[Literal['nodes', 'edges', 'layout', 'interaction', 'manipulation', 'physics', 'selection', 'renderer']] = True, show: bool = False) -> pyvis.network.Network | None
Visualizes the network structure of a FlowSystem using PyVis, saving it as an interactive HTML file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
bool | str | Path
|
Path to save the HTML visualization. |
'flow_system.html'
|
controls
|
bool | list[Literal['nodes', 'edges', 'layout', 'interaction', 'manipulation', 'physics', 'selection', 'renderer']]
|
UI controls to add to the visualization. |
True
|
show
|
bool
|
Whether to open the visualization in the web browser. |
False
|
Returns:
- 'pyvis.network.Network' | None: The Network instance representing the visualization, or None if pyvis is not installed.
Examples:
>>> flow_system.plot_network()
>>> flow_system.plot_network(show=False)
>>> flow_system.plot_network(path='output/custom_network.html', controls=['nodes', 'layout'])
Notes:
- This function requires pyvis. If not installed, the function prints a warning and returns None.
- Nodes are styled based on type (e.g., circles for buses, boxes for components) and annotated with node information.
start_network_app
Visualizes the network structure of a FlowSystem using Dash, Cytoscape, and networkx. Requires optional dependencies: dash, dash-cytoscape, dash-daq, networkx, flask, werkzeug.
sel
sel(time: str | slice | list[str] | Timestamp | DatetimeIndex | None = None, period: int | slice | list[int] | Index | None = None, scenario: str | slice | list[str] | Index | None = None) -> FlowSystem
Select a subset of the FlowSystem by coordinate labels (time, period, scenario).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
time
|
str | slice | list[str] | Timestamp | DatetimeIndex | None
|
Time selection (e.g., slice('2023-01-01', '2023-12-31'), '2023-06-15', or list of times) |
None
|
period
|
int | slice | list[int] | Index | None
|
Period selection (e.g., slice(2023, 2024), or list of periods) |
None
|
scenario
|
str | slice | list[str] | Index | None
|
Scenario selection (e.g., slice('scenario1', 'scenario2'), or list of scenarios) |
None
|
Returns:
Name | Type | Description |
---|---|---|
FlowSystem |
FlowSystem
|
New FlowSystem with selected data |
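sel follows the label-based selection semantics of xarray/pandas: slices are inclusive on both ends. The behavior can be illustrated on a plain pandas time series (the flow_system object itself is assumed):

```python
import pandas as pd

# Daily series over one year, standing in for time-indexed model data
s = pd.Series(range(365), index=pd.date_range('2023-01-01', periods=365, freq='D'))

# Label-based slicing is inclusive on both ends,
# as in flow_system.sel(time=slice('2023-01-01', '2023-01-31'))
jan = s.loc[slice('2023-01-01', '2023-01-31')]  # 31 entries
```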
isel
isel(time: int | slice | list[int] | None = None, period: int | slice | list[int] | None = None, scenario: int | slice | list[int] | None = None) -> FlowSystem
Select a subset of the FlowSystem by integer indices.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
time
|
int | slice | list[int] | None
|
Time selection by integer index (e.g., slice(0, 100), 50, or [0, 5, 10]) |
None
|
period
|
int | slice | list[int] | None
|
Period selection by integer index (e.g., slice(0, 100), 50, or [0, 5, 10]) |
None
|
scenario
|
int | slice | list[int] | None
|
Scenario selection by integer index (e.g., slice(0, 3), 50, or [0, 5, 10]) |
None
|
Returns:
Name | Type | Description |
---|---|---|
FlowSystem |
FlowSystem
|
New FlowSystem with selected data |
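In contrast to sel, isel uses integer positions with the usual half-open Python slicing convention. A pandas sketch of the distinction (flow_system itself is assumed):

```python
import pandas as pd

s = pd.Series(range(365), index=pd.date_range('2023-01-01', periods=365, freq='D'))

# Positional slicing is half-open (end index excluded),
# as in flow_system.isel(time=slice(0, 100))
first_100 = s.iloc[slice(0, 100)]  # 100 entries
```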
resample
resample(time: str, method: Literal['mean', 'sum', 'max', 'min', 'first', 'last', 'std', 'var', 'median', 'count'] = 'mean', **kwargs: Any) -> FlowSystem
Create a resampled FlowSystem by resampling data along the time dimension (like xr.Dataset.resample()). Only resamples data variables that have a time dimension.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
time
|
str
|
Resampling frequency (e.g., '3h', '2D', '1M') |
required |
method
|
Literal['mean', 'sum', 'max', 'min', 'first', 'last', 'std', 'var', 'median', 'count']
|
Resampling method. Recommended: 'mean', 'first', 'last', 'max', 'min' |
'mean'
|
**kwargs
|
Any
|
Additional arguments passed to xarray.resample() |
{}
|
Returns:
Name | Type | Description |
---|---|---|
FlowSystem |
FlowSystem
|
New FlowSystem with resampled data |
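The resampling behavior mirrors xarray/pandas resample along the time dimension. A minimal pandas sketch of what '3h' mean-resampling does to hourly data:

```python
import pandas as pd

hourly = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
                   index=pd.date_range('2023-01-01', periods=6, freq='h'))

# Aggregate to 3-hourly means, as flow_system.resample('3h', method='mean')
# does for every data variable with a time dimension
three_hourly = hourly.resample('3h').mean()  # [2.0, 5.0]
```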
InvestParameters
InvestParameters(fixed_size: PeriodicDataUser | None = None, minimum_size: PeriodicDataUser | None = None, maximum_size: PeriodicDataUser | None = None, mandatory: bool = False, effects_of_investment: PeriodicEffectsUser | None = None, effects_of_investment_per_size: PeriodicEffectsUser | None = None, effects_of_retirement: PeriodicEffectsUser | None = None, piecewise_effects_of_investment: PiecewiseEffects | None = None, linked_periods: PeriodicDataUser | tuple[int, int] | None = None, **kwargs)
Bases: Interface
Define investment decision parameters with flexible sizing and effect modeling.
This class models investment decisions in optimization problems, supporting both binary (invest/don't invest) and continuous sizing choices with comprehensive cost structures. It enables realistic representation of investment economics including fixed costs, scale effects, and divestment penalties.
Investment Decision Types
Binary Investments: Fixed size investments creating yes/no decisions (e.g., install a specific generator, build a particular facility)
Continuous Sizing: Variable size investments with minimum/maximum bounds (e.g., battery capacity from 10-1000 kWh, pipeline diameter optimization)
Cost Modeling Approaches
- Fixed Effects: One-time costs independent of size (permits, connections)
- Specific Effects: Linear costs proportional to size (€/kW, €/m²)
- Piecewise Effects: Non-linear relationships (bulk discounts, learning curves)
- Divestment Effects: Penalties for not investing (demolition, opportunity costs)
Mathematical Formulation
See the complete mathematical model in the documentation: InvestParameters
Parameters:
Name | Type | Description | Default |
---|---|---|---|
fixed_size
|
PeriodicDataUser | None
|
Creates binary decision at this exact size. None allows continuous sizing. |
None
|
minimum_size
|
PeriodicDataUser | None
|
Lower bound for continuous sizing. Default: CONFIG.Modeling.epsilon. Ignored if fixed_size is specified. |
None
|
maximum_size
|
PeriodicDataUser | None
|
Upper bound for continuous sizing. Default: CONFIG.Modeling.big. Ignored if fixed_size is specified. |
None
|
mandatory
|
bool
|
Controls whether investment is required. When True, forces investment to occur (useful for mandatory upgrades or replacement decisions). When False (default), optimization can choose not to invest. With multiple periods, at least one period has to have an investment. |
False
|
effects_of_investment
|
PeriodicEffectsUser | None
|
Fixed costs if investment is made, regardless of size. Dict: {'effect_name': value} (e.g., {'cost': 10000}). |
None
|
effects_of_investment_per_size
|
PeriodicEffectsUser | None
|
Variable costs proportional to size (per-unit costs). Dict: {'effect_name': value/unit} (e.g., {'cost': 1200}). |
None
|
piecewise_effects_of_investment
|
PiecewiseEffects | None
|
Non-linear costs using PiecewiseEffects. Combinable with effects_of_investment and effects_of_investment_per_size. |
None
|
effects_of_retirement
|
PeriodicEffectsUser | None
|
Costs incurred if NOT investing (demolition, penalties). Dict: {'effect_name': value}. |
None
|
Deprecated Args
fix_effects: Deprecated. Use effects_of_investment instead. Will be removed in version 4.0.
specific_effects: Deprecated. Use effects_of_investment_per_size instead. Will be removed in version 4.0.
divest_effects: Deprecated. Use effects_of_retirement instead. Will be removed in version 4.0.
piecewise_effects: Deprecated. Use piecewise_effects_of_investment instead. Will be removed in version 4.0.
optional: Deprecated. Use mandatory instead (the opposite of mandatory). Will be removed in version 4.0.
linked_periods: Describes which periods are linked: 1 means linked, 0 means size=0, None means no linked periods.
Cost Annualization Requirements
All cost values must be properly weighted to match the optimization model's time horizon. For long-term investments, the cost values should be annualized to the corresponding operation time (annuity).
- Use equivalent annual cost (capital cost / equipment lifetime)
- Apply appropriate discount rates for present value calculations
- Account for inflation, escalation, and financing costs
Example: €1M equipment with 20-year life → €50k/year fixed cost
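The worked figure above uses straight-line annualization (capital cost / lifetime). With a discount rate, the equivalent annual cost follows the standard annuity formula; a plain-Python sketch (not part of the flixopt API):

```python
def equivalent_annual_cost(capital_cost: float, lifetime_years: int, rate: float = 0.0) -> float:
    """Annualize a one-time investment over its lifetime (annuity method)."""
    if rate == 0.0:
        # Straight-line case: capital cost / equipment lifetime
        return capital_cost / lifetime_years
    annuity_factor = rate / (1 - (1 + rate) ** -lifetime_years)
    return capital_cost * annuity_factor

equivalent_annual_cost(1_000_000, 20)        # 50000.0, matching the example above
equivalent_annual_cost(1_000_000, 20, 0.05)  # ~80242.59 with a 5% discount rate
```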
Examples:
Simple binary investment (solar panels):
solar_investment = InvestParameters(
fixed_size=100, # 100 kW system (binary decision)
mandatory=False, # Investment is optional
effects_of_investment={
'cost': 25000, # Installation and permitting costs
'CO2': -50000, # Avoided emissions over lifetime
},
effects_of_investment_per_size={
'cost': 1200, # €1200/kW for panels (annualized)
'CO2': -800, # kg CO2 avoided per kW annually
},
)
Flexible sizing with economies of scale:
battery_investment = InvestParameters(
minimum_size=10, # Minimum viable system size (kWh)
maximum_size=1000, # Maximum installable capacity
mandatory=False, # Investment is optional
effects_of_investment={
'cost': 5000, # Grid connection and control system
'installation_time': 2, # Days for fixed components
},
piecewise_effects_of_investment=PiecewiseEffects(
piecewise_origin=Piecewise(
[
Piece(0, 100), # Small systems
Piece(100, 500), # Medium systems
Piece(500, 1000), # Large systems
]
),
piecewise_shares={
'cost': Piecewise(
[
Piece(800, 750), # High cost/kWh for small systems
Piece(750, 600), # Medium cost/kWh
Piece(600, 500), # Bulk discount for large systems
]
)
},
),
)
Mandatory replacement with retirement costs:
boiler_replacement = InvestParameters(
minimum_size=50,
maximum_size=200,
mandatory=False, # Can choose not to replace
effects_of_investment={
'cost': 15000, # Installation costs
'disruption': 3, # Days of downtime
},
effects_of_investment_per_size={
'cost': 400, # €400/kW capacity
'maintenance': 25, # Annual maintenance per kW
},
effects_of_retirement={
'cost': 8000, # Demolition if not replaced
'environmental': 100, # Disposal fees
},
)
Multi-technology comparison:
# Gas turbine option
gas_turbine = InvestParameters(
fixed_size=50, # MW
effects_of_investment={'cost': 2500000, 'CO2': 1250000},
effects_of_investment_per_size={'fuel_cost': 45, 'maintenance': 12},
)
# Wind farm option
wind_farm = InvestParameters(
minimum_size=20,
maximum_size=100,
effects_of_investment={'cost': 1000000, 'CO2': -5000000},
effects_of_investment_per_size={'cost': 1800000, 'land_use': 0.5},
)
Technology learning curve:
hydrogen_electrolyzer = InvestParameters(
minimum_size=1,
maximum_size=50, # MW
piecewise_effects_of_investment=PiecewiseEffects(
piecewise_origin=Piecewise(
[
Piece(0, 5), # Small scale: early adoption
Piece(5, 20), # Medium scale: cost reduction
Piece(20, 50), # Large scale: mature technology
]
),
piecewise_shares={
'capex': Piecewise(
[
Piece(2000, 1800), # Learning reduces costs
Piece(1800, 1400), # Continued cost reduction
Piece(1400, 1200), # Technology maturity
]
),
'efficiency': Piecewise(
[
Piece(65, 68), # Improving efficiency
Piece(68, 72), # with scale and experience
Piece(72, 75), # Best efficiency at scale
]
),
},
),
)
Common Use Cases
- Power generation: Plant sizing, technology selection, retrofit decisions
- Industrial equipment: Capacity expansion, efficiency upgrades, replacements
- Infrastructure: Network expansion, facility construction, system upgrades
- Energy storage: Battery sizing, pumped hydro, compressed air systems
- Transportation: Fleet expansion, charging infrastructure, modal shifts
- Buildings: HVAC systems, insulation upgrades, renewable integration
Attributes
optional
property
writable
DEPRECATED: Use 'mandatory' property instead. Returns the opposite of 'mandatory'.
fix_effects
property
Deprecated property. Use effects_of_investment instead.
specific_effects
property
Deprecated property. Use effects_of_investment_per_size instead.
divest_effects
property
Deprecated property. Use effects_of_retirement instead.
piecewise_effects
property
Deprecated property. Use piecewise_effects_of_investment instead.
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables, everything else goes to attrs.
It's recommended to call this method only on Interfaces with all numeric data stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after connecting and transforming the FlowSystem.
Returns:
Type | Description |
---|---|
Dataset
|
xr.Dataset: Dataset containing all DataArrays with basic objects only in attributes |
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails due to naming conflicts or invalid data |
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to save the NetCDF file |
required |
compression
|
int
|
Compression level (0-9) |
0
|
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails |
IOError
|
If file cannot be written |
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds
|
Dataset
|
Dataset containing the object data |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
ValueError
|
If dataset format is invalid or class mismatch |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to the NetCDF file |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be read |
ValueError
|
If file format is invalid |
get_structure
Get object structure as a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean
|
bool
|
If True, remove None and empty dicts and lists. |
False
|
stats
|
bool
|
If True, replace DataArray references with statistics |
False
|
Returns:
Type | Description |
---|---|
dict
|
Dictionary representation of the object structure |
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the JSON file. |
required |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be written |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
OnOffParameters
OnOffParameters(effects_per_switch_on: TemporalEffectsUser | None = None, effects_per_running_hour: TemporalEffectsUser | None = None, on_hours_total_min: int | None = None, on_hours_total_max: int | None = None, consecutive_on_hours_min: TemporalDataUser | None = None, consecutive_on_hours_max: TemporalDataUser | None = None, consecutive_off_hours_min: TemporalDataUser | None = None, consecutive_off_hours_max: TemporalDataUser | None = None, switch_on_total_max: int | None = None, force_switch_on: bool = False)
Bases: Interface
Define operational constraints and effects for binary on/off equipment behavior.
This class models equipment that operates in discrete states (on/off) rather than continuous operation, capturing realistic operational constraints and associated costs. It handles complex equipment behavior including startup costs, minimum run times, cycling limitations, and maintenance scheduling requirements.
Key Modeling Capabilities
- Switching Costs: One-time costs for starting equipment (fuel, wear, labor)
- Runtime Constraints: Minimum and maximum continuous operation periods
- Cycling Limits: Maximum number of starts to prevent excessive wear
- Operating Hours: Total runtime limits and requirements over the time horizon
Typical Equipment Applications
- Power Plants: Combined cycle units, steam turbines with startup costs
- Industrial Processes: Batch reactors, furnaces with thermal cycling
- HVAC Systems: Chillers, boilers with minimum run times
- Backup Equipment: Emergency generators, standby systems
- Process Equipment: Compressors, pumps with operational constraints
Mathematical Formulation
See the complete mathematical model in the documentation: OnOffParameters
Parameters:
Name | Type | Description | Default |
---|---|---|---|
effects_per_switch_on
|
TemporalEffectsUser | None
|
Costs or impacts incurred for each transition from off state (var_on=0) to on state (var_on=1). Represents startup costs, wear and tear, or other switching impacts. Dictionary mapping effect names to values (e.g., {'cost': 500, 'maintenance_hours': 2}). |
None
|
effects_per_running_hour
|
TemporalEffectsUser | None
|
Ongoing costs or impacts while equipment operates in the on state. Includes fuel costs, labor, consumables, or emissions. Dictionary mapping effect names to hourly values (e.g., {'fuel_cost': 45}). |
None
|
on_hours_total_min
|
int | None
|
Minimum total operating hours across the entire time horizon. Ensures equipment meets minimum utilization requirements or contractual obligations (e.g., power purchase agreements, maintenance schedules). |
None
|
on_hours_total_max
|
int | None
|
Maximum total operating hours across the entire time horizon. Limits equipment usage due to maintenance schedules, fuel availability, environmental permits, or equipment lifetime constraints. |
None
|
consecutive_on_hours_min
|
TemporalDataUser | None
|
Minimum continuous operating duration once started. Models minimum run times due to thermal constraints, process stability, or efficiency considerations. Can be time-varying to reflect different constraints across the planning horizon. |
None
|
consecutive_on_hours_max
|
TemporalDataUser | None
|
Maximum continuous operating duration in one campaign. Models mandatory maintenance intervals, process batch sizes, or equipment thermal limits requiring periodic shutdowns. |
None
|
consecutive_off_hours_min
|
TemporalDataUser | None
|
Minimum continuous shutdown duration between operations. Models cooling periods, maintenance requirements, or process constraints that prevent immediate restart after shutdown. |
None
|
consecutive_off_hours_max
|
TemporalDataUser | None
|
Maximum continuous shutdown duration before mandatory restart. Models equipment preservation, process stability, or contractual requirements for minimum activity levels. |
None
|
switch_on_total_max
|
int | None
|
Maximum number of startup operations across the time horizon. Limits equipment cycling to reduce wear, maintenance costs, or comply with operational constraints (e.g., grid stability requirements). |
None
|
force_switch_on
|
bool
|
When True, creates switch-on variables even without explicit switch_on_total_max constraint. Useful for tracking or reporting startup events without enforcing limits. |
False
|
Note
Time Series Boundary Handling: The final time period constraints for consecutive_on_hours_min/max and consecutive_off_hours_min/max are not enforced, allowing the optimization to end with ongoing campaigns that may be shorter than the specified minimums or longer than maximums.
Examples:
Combined cycle power plant with startup costs and minimum run time:
power_plant_operation = OnOffParameters(
effects_per_switch_on={
'startup_cost': 25000, # €25,000 per startup
'startup_fuel': 150, # GJ natural gas for startup
'startup_time': 4, # Hours to reach full output
'maintenance_impact': 0.1, # Fractional life consumption
},
effects_per_running_hour={
'fixed_om': 125, # Fixed O&M costs while running
'auxiliary_power': 2.5, # MW parasitic loads
},
consecutive_on_hours_min=8, # Minimum 8-hour run once started
consecutive_off_hours_min=4, # Minimum 4-hour cooling period
on_hours_total_max=6000, # Annual operating limit
)
Industrial batch process with cycling limits:
batch_reactor = OnOffParameters(
effects_per_switch_on={
'setup_cost': 1500, # Labor and materials for startup
'catalyst_consumption': 5, # kg catalyst per batch
'cleaning_chemicals': 200, # L cleaning solution
},
effects_per_running_hour={
'steam': 2.5, # t/h process steam
'electricity': 150, # kWh electrical load
'cooling_water': 50, # m³/h cooling water
},
consecutive_on_hours_min=12, # Minimum batch size (12 hours)
consecutive_on_hours_max=24, # Maximum batch size (24 hours)
consecutive_off_hours_min=6, # Cleaning and setup time
switch_on_total_max=200, # Maximum 200 batches per period
on_hours_total_max=4000, # Maximum production time
)
HVAC system with thermostat control and maintenance:
hvac_operation = OnOffParameters(
effects_per_switch_on={
'compressor_wear': 0.5, # Hours of compressor life per start
'inrush_current': 15, # kW peak demand on startup
},
effects_per_running_hour={
'electricity': 25, # kW electrical consumption
'maintenance': 0.12, # €/hour maintenance reserve
},
consecutive_on_hours_min=1, # Minimum 1-hour run to avoid cycling
consecutive_off_hours_min=0.5, # 30-minute minimum off time
switch_on_total_max=2000, # Limit cycling for compressor life
on_hours_total_min=2000, # Minimum operation for humidity control
on_hours_total_max=5000, # Maximum operation for energy budget
)
Backup generator with testing and maintenance requirements:
backup_generator = OnOffParameters(
effects_per_switch_on={
'fuel_priming': 50, # L diesel for system priming
'wear_factor': 1.0, # Start cycles impact on maintenance
'testing_labor': 2, # Hours technician time per test
},
effects_per_running_hour={
'fuel_consumption': 180, # L/h diesel consumption
'emissions_permit': 15, # € emissions allowance cost
'noise_penalty': 25, # € noise compliance cost
},
consecutive_on_hours_min=0.5, # Minimum test duration (30 min)
consecutive_off_hours_max=720, # Maximum 30 days between tests
switch_on_total_max=52, # Weekly testing limit
on_hours_total_min=26, # Minimum annual testing (0.5h × 52)
on_hours_total_max=200, # Maximum runtime (emergencies + tests)
)
Peak shaving battery with cycling degradation:
battery_cycling = OnOffParameters(
effects_per_switch_on={
'cycle_degradation': 0.01, # % capacity loss per cycle
'inverter_startup': 0.5, # kWh losses during startup
},
effects_per_running_hour={
'standby_losses': 2, # kW standby consumption
'cooling': 5, # kW thermal management
'inverter_losses': 8, # kW conversion losses
},
consecutive_on_hours_min=1, # Minimum discharge duration
consecutive_on_hours_max=4, # Maximum continuous discharge
consecutive_off_hours_min=1, # Minimum rest between cycles
switch_on_total_max=365, # Daily cycling limit
force_switch_on=True, # Track all cycling events
)
Common Use Cases
- Power generation: Thermal plant cycling, renewable curtailment, grid services
- Industrial processes: Batch production, maintenance scheduling, equipment rotation
- Buildings: HVAC control, lighting systems, elevator operations
- Transportation: Fleet management, charging infrastructure, maintenance windows
- Storage systems: Battery cycling, pumped hydro, compressed air systems
- Emergency equipment: Backup generators, safety systems, emergency lighting
Attributes
use_consecutive_on_hours
property
Whether a variable for consecutive on-hours is needed
use_consecutive_off_hours
property
Whether a variable for consecutive off-hours is needed
use_switch_on
property
Whether a switch-on variable is needed
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables, everything else goes to attrs.
It's recommended to call this method only on Interfaces whose numeric data is stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after the FlowSystem is connected and transformed.
Returns:
Type | Description |
---|---|
Dataset
|
xr.Dataset: Dataset containing all DataArrays with basic objects only in attributes |
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails due to naming conflicts or invalid data |
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to save the NetCDF file |
required |
compression
|
int
|
Compression level (0-9) |
0
|
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails |
IOError
|
If file cannot be written |
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds
|
Dataset
|
Dataset containing the object data |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
ValueError
|
If dataset format is invalid or class mismatch |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to the NetCDF file |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be read |
ValueError
|
If file format is invalid |
get_structure
Get object structure as a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean
|
bool
|
If True, remove None and empty dicts and lists. |
False
|
stats
|
bool
|
If True, replace DataArray references with statistics |
False
|
Returns:
Type | Description |
---|---|
dict
|
Dictionary representation of the object structure |
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the JSON file. |
required |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be written |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
Piece
Bases: Interface
Define a single linear segment with specified domain boundaries.
This class represents one linear segment that will be combined with other pieces to form complete piecewise linear functions. Each piece defines a domain interval [start, end] where a linear relationship applies.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
start
|
TemporalDataUser
|
Lower bound of the domain interval for this linear segment. Can be scalar values or time series arrays for time-varying boundaries. |
required |
end
|
TemporalDataUser
|
Upper bound of the domain interval for this linear segment. Can be scalar values or time series arrays for time-varying boundaries. |
required |
Examples:
Basic piece for equipment efficiency curve:
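The example this heading announces appears to have been lost in extraction. A minimal sketch of what it likely showed, using a hypothetical stand-in dataclass mirroring the documented `Piece(start, end)` signature so the snippet runs without flixopt installed (in real models, use flixopt's own `Piece`):

```python
from dataclasses import dataclass


# Stand-in mirroring the documented Piece(start, end) signature;
# import Piece from flixopt in actual models.
@dataclass
class Piece:
    start: float  # lower bound of the segment's domain
    end: float    # upper bound of the segment's domain


# One linear segment covering an operating range of 0-50 MW
efficiency_piece = Piece(start=0, end=50)
print(efficiency_piece.start, efficiency_piece.end)  # 0 50
```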
Piece with time-varying boundaries:
# Capacity limits that change seasonally
seasonal_piece = Piece(
start=np.array([10, 20, 30, 25]), # Minimum capacity by season
end=np.array([80, 100, 90, 70]), # Maximum capacity by season
)
Fixed operating point (start equals end):
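The example for this case also seems to be missing; a sketch under the same stand-in assumption (a hypothetical dataclass standing in for flixopt's `Piece`):

```python
from dataclasses import dataclass


# Stand-in for the documented Piece(start, end) interface;
# use flixopt's Piece in actual models.
@dataclass
class Piece:
    start: float
    end: float


# start == end collapses the segment to a single admissible point,
# e.g. equipment that, when selected, can only run at exactly 100 kW
fixed_point = Piece(start=100, end=100)
```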
Note
Individual pieces are building blocks that gain meaning when combined into Piecewise functions. See the Piecewise class for information about how pieces interact and relate to each other.
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables, everything else goes to attrs.
It's recommended to call this method only on Interfaces whose numeric data is stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after the FlowSystem is connected and transformed.
Returns:
Type | Description |
---|---|
Dataset
|
xr.Dataset: Dataset containing all DataArrays with basic objects only in attributes |
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails due to naming conflicts or invalid data |
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to save the NetCDF file |
required |
compression
|
int
|
Compression level (0-9) |
0
|
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails |
IOError
|
If file cannot be written |
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds
|
Dataset
|
Dataset containing the object data |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
ValueError
|
If dataset format is invalid or class mismatch |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to the NetCDF file |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be read |
ValueError
|
If file format is invalid |
get_structure
Get object structure as a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean
|
bool
|
If True, remove None and empty dicts and lists. |
False
|
stats
|
bool
|
If True, replace DataArray references with statistics |
False
|
Returns:
Type | Description |
---|---|
dict
|
Dictionary representation of the object structure |
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the JSON file. |
required |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be written |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
Piecewise
Bases: Interface
Define a Piecewise, consisting of a list of Pieces.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
pieces
|
list[Piece]
|
list of Piece objects defining the linear segments. The arrangement and relationships between pieces determine the function behavior: - Touching pieces (end of one = start of next) ensure continuity - Gaps between pieces create forbidden regions - Overlapping pieces provide an extra choice for the optimizer |
required |
Piece Relationship Patterns
Touching Pieces (Continuous Function): Pieces that share boundary points create smooth, continuous functions without gaps or overlaps.
Gaps Between Pieces (Forbidden Regions): Non-contiguous pieces with gaps represent forbidden operating regions, for example minimum load requirements or safety zones.
Overlapping Pieces (Flexible Operation): Pieces with overlapping domains provide optimization flexibility, allowing the solver to choose which segment to operate in.
Examples:
Continuous efficiency curve (touching pieces):
efficiency_curve = Piecewise(
[
Piece(start=0, end=25), # Low load: 0-25 MW
Piece(start=25, end=75), # Medium load: 25-75 MW (touches at 25)
Piece(start=75, end=100), # High load: 75-100 MW (touches at 75)
]
)
Equipment with forbidden operating range (gap):
turbine_operation = Piecewise(
[
Piece(start=0, end=0), # Off state (point operation)
Piece(start=40, end=100), # Operating range (gap: 0-40 forbidden)
]
)
Flexible operation with overlapping options:
flexible_operation = Piecewise(
[
Piece(start=20, end=60), # Standard efficiency mode
Piece(start=50, end=90), # High efficiency mode (overlap: 50-60)
]
)
Tiered pricing structure:
electricity_pricing = Piecewise(
[
Piece(start=0, end=100), # Tier 1: 0-100 kWh
Piece(start=100, end=500), # Tier 2: 100-500 kWh
Piece(start=500, end=1000), # Tier 3: 500-1000 kWh
]
)
Seasonal capacity variation:
seasonal_capacity = Piecewise(
[
Piece(start=[10, 15, 20, 12], end=[80, 90, 85, 75]), # Varies by time
]
)
Container Operations
The Piecewise class supports standard Python container operations:
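The illustration for these operations appears to be missing. A sketch of the behavior described, using a minimal stand-in class (the exact set of operations supported by flixopt's `Piecewise` should be checked against its API):

```python
from dataclasses import dataclass


@dataclass
class Piece:
    start: float
    end: float


# Minimal stand-in implementing the usual container protocol
# (__len__, __getitem__, __iter__), as the text describes.
class Piecewise:
    def __init__(self, pieces):
        self.pieces = list(pieces)

    def __len__(self):
        return len(self.pieces)

    def __getitem__(self, index):
        return self.pieces[index]

    def __iter__(self):
        return iter(self.pieces)


curve = Piecewise([Piece(0, 25), Piece(25, 75), Piece(75, 100)])
print(len(curve))                 # 3
print(curve[0].end)               # 25
print([p.start for p in curve])   # [0, 25, 75]
```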
Validation Considerations
- Pieces are typically ordered by their start values
- Check for unintended gaps that might create infeasible regions
- Consider whether overlaps provide desired flexibility or create ambiguity
- Ensure time-varying pieces have consistent dimensions
Common Use Cases
- Power plants: Heat rate curves, efficiency vs load, emissions profiles
- HVAC systems: COP vs temperature, capacity vs conditions
- Industrial processes: Conversion rates vs throughput, quality vs speed
- Financial modeling: Tiered rates, progressive taxes, bulk discounts
- Transportation: Fuel efficiency curves, capacity vs speed
- Storage systems: Efficiency vs state of charge, power vs energy
- Renewable energy: Output vs weather conditions, curtailment strategies
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables, everything else goes to attrs.
It's recommended to call this method only on Interfaces whose numeric data is stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after the FlowSystem is connected and transformed.
Returns:
Type | Description |
---|---|
Dataset
|
xr.Dataset: Dataset containing all DataArrays with basic objects only in attributes |
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails due to naming conflicts or invalid data |
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to save the NetCDF file |
required |
compression
|
int
|
Compression level (0-9) |
0
|
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails |
IOError
|
If file cannot be written |
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds
|
Dataset
|
Dataset containing the object data |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
ValueError
|
If dataset format is invalid or class mismatch |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to the NetCDF file |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be read |
ValueError
|
If file format is invalid |
get_structure
Get object structure as a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean
|
bool
|
If True, remove None and empty dicts and lists. |
False
|
stats
|
bool
|
If True, replace DataArray references with statistics |
False
|
Returns:
Type | Description |
---|---|
dict
|
Dictionary representation of the object structure |
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the JSON file. |
required |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be written |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
PiecewiseConversion
Bases: Interface
Define coordinated piecewise linear relationships between multiple flows.
This class models conversion processes where multiple flows (inputs, outputs, auxiliaries) have synchronized piecewise relationships. All flows change together based on the same operating point, enabling accurate modeling of complex equipment with variable performance characteristics.
Multi-Flow Coordination
All piecewise functions must have matching piece structures (same number of pieces with compatible domains) to ensure synchronized operation. When the equipment operates at a given point, ALL flows scale proportionally within their respective pieces.
Mathematical Formulation
See the complete mathematical model in the documentation: Piecewise
Parameters:
Name | Type | Description | Default |
---|---|---|---|
piecewises
|
dict[str, Piecewise]
|
Dictionary mapping flow labels to their Piecewise functions. Keys are flow identifiers (e.g., 'electricity_in', 'heat_out', 'fuel_consumed'). Values are Piecewise objects that define each flow's behavior. Critical Requirement: All Piecewise objects must have the same number of pieces with compatible domains to ensure consistent operation. |
required |
Examples:
Heat pump with coordinated efficiency changes:
heat_pump_pc = PiecewiseConversion(
{
'electricity_in': Piecewise(
[
Piece(0, 10), # Low load: 0-10 kW electricity
Piece(10, 25), # High load: 10-25 kW electricity
]
),
'heat_out': Piecewise(
[
Piece(0, 35), # Low load COP=3.5: 0-35 kW heat
Piece(35, 75), # High load COP=3.0: 35-75 kW heat
]
),
'cooling_water': Piecewise(
[
Piece(0, 2.5), # Low load: 0-2.5 m³/h cooling
Piece(2.5, 6), # High load: 2.5-6 m³/h cooling
]
),
}
)
# At 15 kW electricity (one third through the high-load piece) → ≈48.3 kW heat + ≈3.7 m³/h cooling water
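The synchronized behavior can be checked by hand: within one piece, every flow takes the same convex-combination weight λ between its own piece endpoints. A plain-Python sketch of this arithmetic for the heat pump at 15 kW electricity (independent of flixopt, which performs the equivalent computation inside the solver):

```python
# Within a piece, all flows share one interpolation weight lam:
# value = start + lam * (end - start)
def interpolate(lam: float, start: float, end: float) -> float:
    return start + lam * (end - start)


# High-load piece: electricity 10-25 kW, heat 35-75 kW, cooling 2.5-6 m³/h
elec = 15.0
lam = (elec - 10) / (25 - 10)           # one third through the piece
heat = interpolate(lam, 35, 75)         # ≈ 48.33 kW
cooling = interpolate(lam, 2.5, 6)      # ≈ 3.67 m³/h
print(round(heat, 2), round(cooling, 2))  # 48.33 3.67
```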
Combined cycle power plant with synchronized flows:
power_plant_pc = PiecewiseConversion(
{
'natural_gas': Piecewise(
[
Piece(150, 300), # Part load: 150-300 MW_th fuel
Piece(300, 500), # Full load: 300-500 MW_th fuel
]
),
'electricity': Piecewise(
[
Piece(60, 135), # Part load: 60-135 MW_e (45% efficiency)
Piece(135, 250), # Full load: 135-250 MW_e (50% efficiency)
]
),
'steam_export': Piecewise(
[
Piece(20, 35), # Part load: 20-35 MW_th steam
Piece(35, 50), # Full load: 35-50 MW_th steam
]
),
'co2_emissions': Piecewise(
[
Piece(30, 60), # Part load: 30-60 t/h CO2
Piece(60, 100), # Full load: 60-100 t/h CO2
]
),
}
)
Chemical reactor with multiple products and waste:
reactor_pc = PiecewiseConversion(
{
'feedstock': Piecewise(
[
Piece(10, 50), # Small batch: 10-50 kg/h
Piece(50, 200), # Large batch: 50-200 kg/h
]
),
'product_A': Piecewise(
[
Piece(7, 35), # Small batch: 70% yield
Piece(35, 140), # Large batch: 70% yield
]
),
'product_B': Piecewise(
[
Piece(2, 10), # Small batch: 20% yield
Piece(10, 45), # Large batch: 22.5% yield (improved)
]
),
'waste_stream': Piecewise(
[
Piece(1, 5), # Small batch: 10% waste
Piece(5, 15), # Large batch: 7.5% waste (efficiency)
]
),
}
)
Equipment with discrete operating modes:
compressor_pc = PiecewiseConversion(
{
'electricity': Piecewise(
[
Piece(0, 0), # Off mode: no consumption
Piece(45, 45), # Low mode: fixed 45 kW
Piece(85, 85), # High mode: fixed 85 kW
]
),
'compressed_air': Piecewise(
[
Piece(0, 0), # Off mode: no production
Piece(250, 250), # Low mode: 250 Nm³/h
Piece(500, 500), # High mode: 500 Nm³/h
]
),
}
)
Equipment with forbidden operating range:
steam_turbine_pc = PiecewiseConversion(
{
'steam_in': Piecewise(
[
Piece(0, 100), # Low pressure operation
Piece(200, 500), # High pressure (gap: 100-200 forbidden)
]
),
'electricity_out': Piecewise(
[
Piece(0, 30), # Low pressure: poor efficiency
Piece(80, 220), # High pressure: good efficiency
]
),
'condensate_out': Piecewise(
[
Piece(0, 100), # Low pressure condensate
Piece(200, 500), # High pressure condensate
]
),
}
)
Design Patterns
Forbidden Ranges: Use gaps between pieces to model equipment that cannot operate in certain ranges (e.g., minimum loads, unstable regions).
Discrete Modes: Use pieces with identical start/end values to model equipment with fixed operating points (e.g., on/off, discrete speeds).
Efficiency Changes: Coordinate input and output pieces to reflect changing conversion efficiency across operating ranges.
Common Use Cases
- Power generation: Multi-fuel plants, cogeneration systems, renewable hybrids
- HVAC systems: Heat pumps, chillers with variable COP and auxiliary loads
- Industrial processes: Multi-product reactors, separation units, heat exchangers
- Transportation: Multi-modal systems, hybrid vehicles, charging infrastructure
- Water treatment: Multi-stage processes with varying energy and chemical needs
- Energy storage: Systems with efficiency changes and auxiliary power requirements
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables, everything else goes to attrs.
It's recommended to call this method only on Interfaces whose numeric data is stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after the FlowSystem is connected and transformed.
Returns:
Type | Description |
---|---|
Dataset
|
xr.Dataset: Dataset containing all DataArrays with basic objects only in attributes |
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails due to naming conflicts or invalid data |
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to save the NetCDF file |
required |
compression
|
int
|
Compression level (0-9) |
0
|
Raises:
Type | Description |
---|---|
ValueError
|
If serialization fails |
IOError
|
If file cannot be written |
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds
|
Dataset
|
Dataset containing the object data |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
ValueError
|
If dataset format is invalid or class mismatch |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
Path to the NetCDF file |
required |
Returns:
Type | Description |
---|---|
Interface
|
Interface instance |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be read |
ValueError
|
If file format is invalid |
get_structure
Get object structure as a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean
|
bool
|
If True, remove None and empty dicts and lists. |
False
|
stats
|
bool
|
If True, replace DataArray references with statistics |
False
|
Returns:
Type | Description |
---|---|
dict
|
Dictionary representation of the object structure |
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path
|
str | Path
|
The path to the JSON file. |
required |
Raises:
Type | Description |
---|---|
IOError
|
If file cannot be written |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
PiecewiseEffects
Bases: Interface
Define how a single decision variable contributes to system effects with piecewise rates.
This class models situations where a decision variable (the origin) generates different types of system effects (costs, emissions, resource consumption) at rates that change non-linearly with the variable's operating level. Unlike PiecewiseConversion which coordinates multiple flows, PiecewiseEffects focuses on how one variable impacts multiple system-wide effects.
Key Concept - Origin vs. Effects:
- Origin: The primary decision variable (e.g., production level, capacity, size)
- Shares: The amounts this variable contributes to different system effects
Relationship to PiecewiseConversion
PiecewiseConversion: Models synchronized relationships between multiple flow variables (e.g., fuel_in, electricity_out, emissions_out all coordinated).
PiecewiseEffects: Models how one variable contributes to system-wide effects at variable rates (e.g., production_level → costs, emissions, resources).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
piecewise_origin
|
Piecewise
|
Piecewise function defining the behavior of the primary decision variable. This establishes the operating domain and ranges. |
required |
piecewise_shares
|
dict[str, Piecewise]
|
Dictionary mapping effect names to their rate functions. Keys are effect identifiers (e.g., 'cost_per_unit', 'CO2_intensity'). Values are Piecewise objects defining the contribution rate per unit of the origin variable at different operating levels. |
required |
Mathematical Relationship
For each effect: Total_Effect = Origin_Variable × Share_Rate(Origin_Level)
This enables modeling of:
- Economies of scale (decreasing unit costs with volume)
- Learning curves (improving efficiency with experience)
- Threshold effects (changing rates at different scales)
- Progressive pricing (increasing rates with consumption)
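The stated relationship can be worked through numerically. A sketch, assuming (per the rate interpretation above) that the share rate interpolates between the endpoints of whichever origin piece contains the operating point; the numbers follow the manufacturing example below:

```python
# Total_Effect = origin * share_rate(origin), with the share rate
# interpolated within the origin's piece. Hypothetical helper for
# illustration only; flixopt builds this into the optimization model.
def share_rate(origin: float, origin_piece: tuple, share_piece: tuple) -> float:
    o_start, o_end = origin_piece
    s_start, s_end = share_piece
    lam = (origin - o_start) / (o_end - o_start)
    return s_start + lam * (s_end - s_start)


origin = 3000  # units/month, inside the medium-scale piece (1000, 5000)
rate = share_rate(origin, (1000, 5000), (45, 35))  # unit cost falls 45 -> 35 €/unit
total_cost = origin * rate
print(rate, total_cost)  # 40.0 120000.0
```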
Examples:
Manufacturing with economies of scale:
production_effects = PiecewiseEffects(
piecewise_origin=Piecewise(
[
Piece(0, 1000), # Small scale: 0-1000 units/month
Piece(1000, 5000), # Medium scale: 1000-5000 units/month
Piece(5000, 10000), # Large scale: 5000-10000 units/month
]
),
piecewise_shares={
'unit_cost': Piecewise(
[
Piece(50, 45), # €50-45/unit (scale benefits)
Piece(45, 35), # €45-35/unit (bulk materials)
Piece(35, 30), # €35-30/unit (automation benefits)
]
),
'labor_hours': Piecewise(
[
Piece(2.5, 2.0), # 2.5-2.0 hours/unit (learning curve)
Piece(2.0, 1.5), # 2.0-1.5 hours/unit (efficiency gains)
Piece(1.5, 1.2), # 1.5-1.2 hours/unit (specialization)
]
),
'CO2_intensity': Piecewise(
[
Piece(15, 12), # 15-12 kg CO2/unit (process optimization)
Piece(12, 9), # 12-9 kg CO2/unit (equipment efficiency)
Piece(9, 7), # 9-7 kg CO2/unit (renewable energy)
]
),
},
)
Power generation with load-dependent characteristics:
generator_effects = PiecewiseEffects(
piecewise_origin=Piecewise(
[
Piece(50, 200), # Part load operation: 50-200 MW
Piece(200, 350), # Rated operation: 200-350 MW
Piece(350, 400), # Overload operation: 350-400 MW
]
),
piecewise_shares={
'fuel_rate': Piecewise(
[
Piece(12.0, 10.5), # Heat rate: 12.0-10.5 GJ/MWh (part load penalty)
Piece(10.5, 9.8), # Heat rate: 10.5-9.8 GJ/MWh (optimal efficiency)
Piece(9.8, 11.2), # Heat rate: 9.8-11.2 GJ/MWh (overload penalty)
]
),
'maintenance_factor': Piecewise(
[
Piece(0.8, 1.0), # Low stress operation
Piece(1.0, 1.0), # Design operation
Piece(1.0, 1.5), # High stress operation
]
),
'NOx_rate': Piecewise(
[
Piece(0.20, 0.15), # NOx: 0.20-0.15 kg/MWh
Piece(0.15, 0.12), # NOx: 0.15-0.12 kg/MWh (optimal combustion)
Piece(0.12, 0.25), # NOx: 0.12-0.25 kg/MWh (overload penalties)
]
),
},
)
Progressive utility pricing structure:
electricity_billing = PiecewiseEffects(
piecewise_origin=Piecewise(
[
Piece(0, 200), # Basic usage: 0-200 kWh/month
Piece(200, 800), # Standard usage: 200-800 kWh/month
Piece(800, 2000), # High usage: 800-2000 kWh/month
]
),
piecewise_shares={
'energy_rate': Piecewise(
[
Piece(0.12, 0.12), # Basic rate: €0.12/kWh
Piece(0.18, 0.18), # Standard rate: €0.18/kWh
Piece(0.28, 0.28), # Premium rate: €0.28/kWh
]
),
'carbon_tax': Piecewise(
[
Piece(0.02, 0.02), # Low carbon tax: €0.02/kWh
Piece(0.03, 0.03), # Medium carbon tax: €0.03/kWh
Piece(0.05, 0.05), # High carbon tax: €0.05/kWh
]
),
},
)
Data center with capacity-dependent efficiency:
datacenter_effects = PiecewiseEffects(
piecewise_origin=Piecewise(
[
Piece(100, 500), # Low utilization: 100-500 servers
Piece(500, 2000), # Medium utilization: 500-2000 servers
Piece(2000, 5000), # High utilization: 2000-5000 servers
]
),
piecewise_shares={
'power_per_server': Piecewise(
[
Piece(0.8, 0.6), # 0.8-0.6 kW/server (inefficient cooling)
Piece(0.6, 0.4), # 0.6-0.4 kW/server (optimal efficiency)
Piece(0.4, 0.5), # 0.4-0.5 kW/server (thermal limits)
]
),
'cooling_overhead': Piecewise(
[
Piece(0.4, 0.3), # 40%-30% cooling overhead
Piece(0.3, 0.2), # 30%-20% cooling overhead
Piece(0.2, 0.25), # 20%-25% cooling overhead
]
),
},
)
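The documented relationship can also be checked by hand against the utility-pricing example above, where rates are constant within each piece. Note that this plain-Python sketch applies the single rate of the active piece to the whole consumption, per Total_Effect = Origin × Share_Rate(Origin); it is not marginal tier billing, and tier_index is a hypothetical helper, not flixopt code:

```python
# Flat-rate tiers from the electricity_billing example above.
usage_pieces = [(0, 200), (200, 800), (800, 2000)]  # kWh/month
energy_rates = [0.12, 0.18, 0.28]                   # EUR/kWh, constant per piece
carbon_taxes = [0.02, 0.03, 0.05]                   # EUR/kWh, constant per piece

def tier_index(usage):
    """Return the index of the piece containing the given usage."""
    for i, (lo, hi) in enumerate(usage_pieces):
        if lo <= usage <= hi:
            return i
    raise ValueError(f'usage {usage} lies outside all pieces')

usage = 500  # kWh/month -> standard tier
i = tier_index(usage)
bill = usage * (energy_rates[i] + carbon_taxes[i])  # 500 * (0.18 + 0.03) = 105.0 EUR
```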
Design Patterns
- Economies of Scale: Decreasing unit costs/impacts with increased scale
- Learning Curves: Improving efficiency rates with experience/volume
- Threshold Effects: Step changes in rates at specific operating levels
- Progressive Pricing: Increasing rates for higher consumption levels
- Capacity Utilization: Optimal efficiency at design points, penalties at extremes
Common Use Cases
- Manufacturing: Production scaling, learning effects, quality improvements
- Energy systems: Generator efficiency curves, renewable capacity factors
- Logistics: Transportation rates, warehouse utilization, delivery optimization
- Utilities: Progressive pricing, infrastructure cost allocation
- Financial services: Risk premiums, transaction fees, volume discounts
- Environmental modeling: Pollution intensity, resource consumption rates
Functions
to_dataset
Convert the object to an xarray Dataset representation. All DataArrays become dataset variables; everything else goes to attrs.
It is recommended to call this method only on Interfaces whose numeric data is stored as xr.DataArrays. Interfaces inside a FlowSystem are automatically converted to this form after the FlowSystem is connected and transformed.
Returns:
Type | Description |
---|---|
Dataset | xr.Dataset containing all DataArrays, with basic objects only in attributes |
Raises:
Type | Description |
---|---|
ValueError | If serialization fails due to naming conflicts or invalid data |
to_netcdf
Save the object to a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path | str \| Path | Path to save the NetCDF file | required |
compression | int | Compression level (0-9) | 0 |
Raises:
Type | Description |
---|---|
ValueError | If serialization fails |
IOError | If file cannot be written |
from_dataset
classmethod
Create an instance from an xarray Dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds | Dataset | Dataset containing the object data | required |
Returns:
Type | Description |
---|---|
Interface | Interface instance |
Raises:
Type | Description |
---|---|
ValueError | If dataset format is invalid or class mismatch |
from_netcdf
classmethod
Load an instance from a NetCDF file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path | str \| Path | Path to the NetCDF file | required |
Returns:
Type | Description |
---|---|
Interface | Interface instance |
Raises:
Type | Description |
---|---|
IOError | If file cannot be read |
ValueError | If file format is invalid |
get_structure
Get object structure as a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
clean | bool | If True, remove None values and empty dicts and lists. | False |
stats | bool | If True, replace DataArray references with statistics. | False |
Returns:
Type | Description |
---|---|
dict | Dictionary representation of the object structure |
to_json
Save the object to a JSON file. This is meant for documentation and comparison, not for reloading.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path | str \| Path | The path to the JSON file. | required |
Raises:
Type | Description |
---|---|
IOError | If file cannot be written |
copy
Create a copy of the Interface object.
Uses the existing serialization infrastructure to ensure proper copying of all DataArrays and nested objects.
Returns:
Type | Description |
---|---|
Interface
|
A new instance of the same class with copied data. |
Functions
change_logging_level
Change the logging level for the flixopt logger and all its handlers.
.. deprecated:: 2.1.11
    Use CONFIG.Logging.level = level_name and CONFIG.apply() instead.
    This function will be removed in version 3.0.0.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
level_name | Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] | The logging level to set. | required |
Examples: