flixopt.results
Attributes
Classes
CalculationResults
CalculationResults(solution: Dataset, flow_system_data: Dataset, name: str, summary: dict, folder: Path | None = None, model: Model | None = None, **kwargs)
Comprehensive container for optimization calculation results and analysis tools.
This class provides unified access to all optimization results including flow rates, component states, bus balances, and system effects. It offers powerful analysis capabilities through filtering, plotting, and export functionality, making it the primary interface for post-processing optimization results.
Key Features
- Unified Access: Single interface to all solution variables and constraints
- Element Results: Direct access to component, bus, and effect-specific results
- Visualization: Built-in plotting methods for heatmaps, time series, and networks
- Persistence: Save/load functionality with compression for large datasets
- Analysis Tools: Filtering, aggregation, and statistical analysis methods
Result Organization
- Components: Equipment-specific results (flows, states, constraints)
- Buses: Network node balances and energy flows
- Effects: System-wide impacts (costs, emissions, resource consumption)
- Solution: Raw optimization variables and their values
- Metadata: Calculation parameters, timing, and system configuration
Attributes:
Name | Description |
---|---|
solution | Dataset containing all optimization variable solutions |
flow_system_data | Dataset with the complete system configuration and parameters; used to restore the FlowSystem for further analysis |
summary | Calculation metadata including solver status, timing, and statistics |
name | Unique identifier for this calculation |
model | Original linopy optimization model (if available) |
folder | Directory path for result storage and loading |
components | Dictionary mapping component labels to ComponentResults objects |
buses | Dictionary mapping bus labels to BusResults objects |
effects | Dictionary mapping effect names to EffectResults objects |
timesteps_extra | Extended time index including boundary conditions |
hours_per_timestep | Duration of each timestep, used for correct energy calculations |
Examples:
Load and analyze saved results:
```python
# Load results from file
results = CalculationResults.from_file('results', 'annual_optimization')

# Access specific component results
boiler_results = results['Boiler_01']
heat_pump_results = results['HeatPump_02']

# Plot component flow rates
results.plot_heatmap('Boiler_01(Natural_Gas)|flow_rate')
results['Boiler_01'].plot_node_balance()

# Access raw solution data arrays
electricity_flows = results.solution[['Generator_01(Grid)|flow_rate', 'HeatPump_02(Grid)|flow_rate']]

# Filter and analyze results
peak_demand_hours = results.filter_solution(variable_dims='time')
costs_solution = results.effects['cost'].solution
```
Advanced filtering and aggregation:
```python
# Filter by variable type
scalar_results = results.filter_solution(variable_dims='scalar')
time_series = results.filter_solution(variable_dims='time')

# Custom data analysis leveraging xarray
peak_power = results.solution['Generator_01(Grid)|flow_rate'].max()
avg_efficiency = (
    results.solution['HeatPump(Heat)|flow_rate'] / results.solution['HeatPump(Electricity)|flow_rate']
).mean()
```
Design Patterns
- Factory Methods: Use from_file() and from_calculation() for creation, or access directly from Calculation.results
- Dictionary Access: Use results[element_label] for element-specific results
- Lazy Loading: Results objects are created on demand for memory efficiency
- Unified Interface: Consistent API across different result types
Initialize CalculationResults with optimization data. Usually, this class is instantiated by the Calculation class, or by loading from file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
solution | Dataset | Optimization solution dataset. | required |
flow_system_data | Dataset | Flow system configuration dataset. | required |
name | str | Calculation name. | required |
summary | dict | Calculation metadata. | required |
folder | Path \| None | Results storage folder. | None |
model | Model \| None | Linopy optimization model. | None |
Deprecated: flow_system: Use flow_system_data instead.
Attributes
flow_system
property
The restored flow_system that was used to create the calculation. Contains all input parameters.
effects_per_component
property
Returns a dataset containing effect results for each mode, aggregated by Component
Returns:
Type | Description |
---|---|
Dataset | An xarray Dataset with an additional component dimension and effects as variables. |
Functions
from_file
classmethod
Load CalculationResults from saved files.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
folder | str \| Path | Directory containing saved files. | required |
name | str | Base name of saved files (without extensions). | required |
Returns:
Name | Type | Description |
---|---|---|
CalculationResults | CalculationResults | Loaded instance. |
from_calculation
classmethod
Create CalculationResults from a Calculation object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
calculation | Calculation | Calculation object with solved model. | required |
Returns:
Name | Type | Description |
---|---|---|
CalculationResults | CalculationResults | New instance with extracted results. |
filter_solution
filter_solution(variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None, element: str | None = None, timesteps: DatetimeIndex | None = None, scenarios: Index | None = None, contains: str | list[str] | None = None, startswith: str | list[str] | None = None) -> xr.Dataset
Filter solution by variable dimension and/or element.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_dims | Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] \| None | The dimensionality of variables to return: 'scalar' (variables without dimensions), 'time' (variables with a time dimension), 'scenario' (variables with a scenario dimension), 'timeonly' (variables with ONLY a time dimension), 'scenarioonly' (variables with ONLY a scenario dimension). | None |
element | str \| None | The element to filter for. | None |
timesteps | DatetimeIndex \| None | Optional time indexes to select. Can be a pd.DatetimeIndex (multiple timesteps) or a str/pd.Timestamp (single timestep). Defaults to all available timesteps. | None |
scenarios | Index \| None | Optional scenario indexes to select. Can be a pd.Index (multiple scenarios) or a str/int (single scenario; an int is treated as a label, not an index position). Defaults to all available scenarios. | None |
contains | str \| list[str] \| None | Filter variables that contain this string or strings. If a list is provided, variables must contain ALL strings in the list. | None |
startswith | str \| list[str] \| None | Filter variables that start with this string or strings. If a list is provided, variables must start with ANY of the strings in the list. | None |
flow_rates
flow_rates(start: str | list[str] | None = None, end: str | list[str] | None = None, component: str | list[str] | None = None) -> xr.DataArray
Returns a DataArray containing the flow rates of each Flow.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
start | str \| list[str] \| None | Optional source node(s) to filter by. Can be a single node name or a list of names. | None |
end | str \| list[str] \| None | Optional destination node(s) to filter by. Can be a single node name or a list of names. | None |
component | str \| list[str] \| None | Optional component(s) to filter by. Can be a single component name or a list of names. | None |
Further usage
- Convert the DataArray to a DataFrame: `results.flow_rates().to_pandas()`
- Get the max or min over time: `results.flow_rates().max('time')`
- Sum up the flow rates of flows with the same start and end: `results.flow_rates(end='Fernwärme').groupby('start').sum(dim='flow')`
- To recombine filtered DataArrays, use `xr.concat` with dim 'flow': `xr.concat([results.flow_rates(start='Fernwärme'), results.flow_rates(end='Fernwärme')], dim='flow')`
flow_hours
flow_hours(start: str | list[str] | None = None, end: str | list[str] | None = None, component: str | list[str] | None = None) -> xr.DataArray
Returns a DataArray containing the flow hours of each Flow.
Flow hours represent the total energy/material transferred over time, calculated by multiplying flow rates by the duration of each timestep.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
start | str \| list[str] \| None | Optional source node(s) to filter by. Can be a single node name or a list of names. | None |
end | str \| list[str] \| None | Optional destination node(s) to filter by. Can be a single node name or a list of names. | None |
component | str \| list[str] \| None | Optional component(s) to filter by. Can be a single component name or a list of names. | None |
Further usage
- Convert the DataArray to a DataFrame: `results.flow_hours().to_pandas()`
- Sum up the flow hours over time: `results.flow_hours().sum('time')`
- Sum up the flow hours of flows with the same start and end: `results.flow_hours(end='Fernwärme').groupby('start').sum(dim='flow')`
- To recombine filtered DataArrays, use `xr.concat` with dim 'flow': `xr.concat([results.flow_hours(start='Fernwärme'), results.flow_hours(end='Fernwärme')], dim='flow')`
sizes
sizes(start: str | list[str] | None = None, end: str | list[str] | None = None, component: str | list[str] | None = None) -> xr.DataArray
Returns a DataArray with the sizes of the Flows.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
start | str \| list[str] \| None | Optional source node(s) to filter by. Can be a single node name or a list of names. | None |
end | str \| list[str] \| None | Optional destination node(s) to filter by. Can be a single node name or a list of names. | None |
component | str \| list[str] \| None | Optional component(s) to filter by. Can be a single component name or a list of names. | None |
Further usage
- Convert the DataArray to a DataFrame: `results.sizes().to_pandas()`
- To recombine filtered DataArrays, use `xr.concat` with dim 'flow': `xr.concat([results.sizes(start='Fernwärme'), results.sizes(end='Fernwärme')], dim='flow')`
get_effect_shares
get_effect_shares(element: str, effect: str, mode: Literal['temporal', 'periodic'] | None = None, include_flows: bool = False) -> xr.Dataset
Retrieves individual effect shares for a specific element and effect, either for the temporal mode, the periodic mode, or both combined. Only includes the direct shares.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
element | str | The element identifier for which to retrieve effect shares. | required |
effect | str | The effect identifier for which to retrieve shares. | required |
mode | Literal['temporal', 'periodic'] \| None | Optional. The mode to retrieve shares for. Can be 'temporal', 'periodic', or None to retrieve both. Defaults to None. | None |
Returns:
Type | Description |
---|---|
Dataset | An xarray Dataset containing the requested effect shares. If mode is None, returns a merged Dataset containing both temporal and periodic shares. |
Raises:
Type | Description |
---|---|
ValueError | If the specified effect is not available or if mode is invalid. |
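A hedged usage sketch; element and effect labels are illustrative:

```python
# Direct temporal cost shares of one component
temporal = results.get_effect_shares('Boiler_01', 'cost', mode='temporal')

# Both temporal and periodic shares merged into one Dataset
all_shares = results.get_effect_shares('Boiler_01', 'cost')
```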
plot_heatmap
plot_heatmap(variable_name: str, heatmap_timeframes: Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'] = 'D', heatmap_timesteps_per_frame: Literal['W', 'D', 'h', '15min', 'min'] = 'h', color_map: str = 'portland', save: bool | Path = False, show: bool = True, engine: PlottingEngine = 'plotly', indexer: dict[FlowSystemDimensions, Any] | None = None) -> plotly.graph_objs.Figure | tuple[plt.Figure, plt.Axes]
Plots a heatmap of the solution of a variable.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_name | str | The name of the variable to plot. | required |
heatmap_timeframes | Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'] | The timeframes to use for the heatmap. | 'D' |
heatmap_timesteps_per_frame | Literal['W', 'D', 'h', '15min', 'min'] | The timesteps per frame to use for the heatmap. | 'h' |
color_map | str | The color map to use for the heatmap. | 'portland' |
save | bool \| Path | Whether to save the plot or not. If a path is provided, the plot will be saved at that location. | False |
show | bool | Whether to show the plot or not. | True |
engine | PlottingEngine | The engine to use for plotting. Can be either 'plotly' or 'matplotlib'. | 'plotly' |
indexer | dict[FlowSystemDimensions, Any] \| None | Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}. If None, uses first value for each dimension. If empty dict {}, uses all values. | None |
Examples:
Basic usage (uses first scenario, first period, all time):
Select specific scenario and period:
Time filtering (summer months only):
>>> results.plot_heatmap(
... 'Boiler(Qth)|flow_rate',
... indexer={
... 'scenario': 'base',
... 'time': results.solution.time[results.solution.time.dt.month.isin([6, 7, 8])],
... },
... )
Save to specific location:
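Hedged sketches for the usages named above; the variable label and output path are illustrative:

```python
from pathlib import Path

# Basic usage (first scenario, first period, all time)
results.plot_heatmap('Boiler_01(Natural_Gas)|flow_rate')

# Select a specific scenario and period via the indexer
results.plot_heatmap('Boiler_01(Natural_Gas)|flow_rate', indexer={'scenario': 'base', 'period': 2024})

# Save to a specific location without opening a window
results.plot_heatmap('Boiler_01(Natural_Gas)|flow_rate', save=Path('plots/boiler_heatmap.html'), show=False)
```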
plot_network
plot_network(controls: bool | list[Literal['nodes', 'edges', 'layout', 'interaction', 'manipulation', 'physics', 'selection', 'renderer']] = True, path: Path | None = None, show: bool = False) -> pyvis.network.Network | None
Plot interactive network visualization of the system.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
controls | bool \| list[Literal['nodes', 'edges', 'layout', 'interaction', 'manipulation', 'physics', 'selection', 'renderer']] | Enable/disable interactive controls. | True |
path | Path \| None | Save path for network HTML. | None |
show | bool | Whether to display the plot. | False |
to_file
to_file(folder: str | Path | None = None, name: str | None = None, compression: int = 5, document_model: bool = True, save_linopy_model: bool = False)
Save results to files.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
folder | str \| Path \| None | Save folder (defaults to calculation folder). | None |
name | str \| None | File name (defaults to calculation name). | None |
compression | int | Compression level 0-9. | 5 |
document_model | bool | Whether to document model formulations as yaml. | True |
save_linopy_model | bool | Whether to save the linopy model file. | False |
BusResults
BusResults(calculation_results: CalculationResults, label: str, variables: list[str], constraints: list[str], inputs: list[str], outputs: list[str], flows: list[str])
Bases: _NodeResults
Results container for energy/material balance nodes in the system.
Attributes
variables
property
Get element variables (requires linopy model).
Raises:
Type | Description |
---|---|
ValueError | If linopy model is unavailable. |
constraints
property
Get element constraints (requires linopy model).
Raises:
Type | Description |
---|---|
ValueError | If linopy model is unavailable. |
Functions
filter_solution
filter_solution(variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None, timesteps: DatetimeIndex | None = None, scenarios: Index | None = None, contains: str | list[str] | None = None, startswith: str | list[str] | None = None) -> xr.Dataset
Filter the solution to a specific variable dimension and element. If no element is specified, all elements are included.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_dims | Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] \| None | The dimensionality of variables to return: 'scalar' (variables without dimensions), 'time' (variables with a time dimension), 'scenario' (variables with a scenario dimension), 'timeonly' (variables with ONLY a time dimension), 'scenarioonly' (variables with ONLY a scenario dimension). | None |
timesteps | DatetimeIndex \| None | Optional time indexes to select. Can be a pd.DatetimeIndex (multiple timesteps) or a str/pd.Timestamp (single timestep). Defaults to all available timesteps. | None |
scenarios | Index \| None | Optional scenario indexes to select. Can be a pd.Index (multiple scenarios) or a str/int (single scenario; an int is treated as a label, not an index position). Defaults to all available scenarios. | None |
contains | str \| list[str] \| None | Filter variables that contain this string or strings. If a list is provided, variables must contain ALL strings in the list. | None |
startswith | str \| list[str] \| None | Filter variables that start with this string or strings. If a list is provided, variables must start with ANY of the strings in the list. | None |
plot_node_balance
plot_node_balance(save: bool | Path = False, show: bool = True, colors: ColorType = 'viridis', engine: PlottingEngine = 'plotly', indexer: dict[FlowSystemDimensions, Any] | None = None, mode: Literal['flow_rate', 'flow_hours'] = 'flow_rate', style: Literal['area', 'stacked_bar', 'line'] = 'stacked_bar', drop_suffix: bool = True) -> plotly.graph_objs.Figure | tuple[plt.Figure, plt.Axes]
Plots the node balance of the Component or Bus.
Args:
save: Whether to save the plot or not. If a path is provided, the plot will be saved at that location.
show: Whether to show the plot or not.
colors: The colors to use for the plot. See flixopt.plotting.ColorType for options.
engine: The engine to use for plotting. Can be either 'plotly' or 'matplotlib'.
indexer: Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}. If None, uses first value for each dimension (except time). If empty dict {}, uses all values.
mode: The mode to use for the dataset. Can be 'flow_rate' or 'flow_hours'.
- 'flow_rate': Returns the flow_rates of the Node.
- 'flow_hours': Returns the flow_hours of the Node. [flow_hours(t) = flow_rate(t) * dt(t)]. Renames suffixes to |flow_hours.
style: The plot style. Can be 'area', 'stacked_bar', or 'line'.
drop_suffix: Whether to drop the suffix from the variable names.
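A hedged sketch; the bus label is illustrative:

```python
# Stacked-bar balance of a district heating bus, in energy terms per timestep
results['Fernwärme'].plot_node_balance(mode='flow_hours', style='stacked_bar', engine='plotly')
```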
plot_node_balance_pie
plot_node_balance_pie(lower_percentage_group: float = 5, colors: ColorType = 'viridis', text_info: str = 'percent+label+value', save: bool | Path = False, show: bool = True, engine: PlottingEngine = 'plotly', indexer: dict[FlowSystemDimensions, Any] | None = None) -> plotly.graph_objs.Figure | tuple[plt.Figure, list[plt.Axes]]
Plot pie chart of flow hours distribution.
Args:
lower_percentage_group: Percentage threshold for "Others" grouping.
colors: Color scheme. Also see plotly.
text_info: Information to display on pie slices.
save: Whether to save plot.
show: Whether to display plot.
engine: Plotting engine ('plotly' or 'matplotlib').
indexer: Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}. If None, uses first value for each dimension. If empty dict {}, uses all values.
node_balance
node_balance(negate_inputs: bool = True, negate_outputs: bool = False, threshold: float | None = 1e-05, with_last_timestep: bool = False, mode: Literal['flow_rate', 'flow_hours'] = 'flow_rate', drop_suffix: bool = False, indexer: dict[FlowSystemDimensions, Any] | None = None) -> xr.Dataset
Returns a dataset with the node balance of the Component or Bus.
Args:
negate_inputs: Whether to negate the input flow_rates of the Node.
negate_outputs: Whether to negate the output flow_rates of the Node.
threshold: The threshold for small values. Variables with all values below the threshold are dropped.
with_last_timestep: Whether to include the last timestep in the dataset.
mode: The mode to use for the dataset. Can be 'flow_rate' or 'flow_hours'.
- 'flow_rate': Returns the flow_rates of the Node.
- 'flow_hours': Returns the flow_hours of the Node. [flow_hours(t) = flow_rate(t) * dt(t)]. Renames suffixes to |flow_hours.
drop_suffix: Whether to drop the suffix from the variable names.
indexer: Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}. If None, uses first value for each dimension. If empty dict {}, uses all values.
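A hedged usage sketch; the bus label is illustrative:

```python
# Net energy balance of a bus as a tidy DataFrame; with inputs negated, rows should sum to roughly zero
balance = results['Fernwärme'].node_balance(mode='flow_hours', negate_inputs=True)
df = balance.to_dataframe()
```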
ComponentResults
ComponentResults(calculation_results: CalculationResults, label: str, variables: list[str], constraints: list[str], inputs: list[str], outputs: list[str], flows: list[str])
Bases: _NodeResults
Results container for individual system components with specialized analysis tools.
Attributes
variables
property
Get element variables (requires linopy model).
Raises:
Type | Description |
---|---|
ValueError | If linopy model is unavailable. |
constraints
property
Get element constraints (requires linopy model).
Raises:
Type | Description |
---|---|
ValueError | If linopy model is unavailable. |
Functions
plot_charge_state
plot_charge_state(save: bool | Path = False, show: bool = True, colors: ColorType = 'viridis', engine: PlottingEngine = 'plotly', style: Literal['area', 'stacked_bar', 'line'] = 'stacked_bar', indexer: dict[FlowSystemDimensions, Any] | None = None) -> plotly.graph_objs.Figure
Plot storage charge state over time, combined with the node balance.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
save | bool \| Path | Whether to save the plot or not. If a path is provided, the plot will be saved at that location. | False |
show | bool | Whether to show the plot or not. | True |
colors | ColorType | Color scheme. Also see plotly. | 'viridis' |
engine | PlottingEngine | Plotting engine to use. Only 'plotly' is implemented at the moment. | 'plotly' |
style | Literal['area', 'stacked_bar', 'line'] | The plot style to use. | 'stacked_bar' |
indexer | dict[FlowSystemDimensions, Any] \| None | Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}. If None, uses first value for each dimension. If empty dict {}, uses all values. | None |
Raises:
Type | Description |
---|---|
ValueError | If component is not a storage. |
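A hedged sketch, assuming a storage component labeled 'Battery' exists in the system:

```python
# Charge state over time, combined with the storage's node balance
results['Battery'].plot_charge_state(style='area', engine='plotly')
```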
node_balance_with_charge_state
node_balance_with_charge_state(negate_inputs: bool = True, negate_outputs: bool = False, threshold: float | None = 1e-05) -> xr.Dataset
Get storage node balance including charge state.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
negate_inputs | bool | Whether to negate input flows. | True |
negate_outputs | bool | Whether to negate output flows. | False |
threshold | float \| None | Threshold for small values. | 1e-05 |
Returns:
Type | Description |
---|---|
Dataset | Node balance with charge state. |
Raises:
Type | Description |
---|---|
ValueError | If component is not a storage. |
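A hedged sketch; the storage label is illustrative:

```python
# Dataset with signed in-/outflows plus the charge state, ready for custom analysis
ds = results['Battery'].node_balance_with_charge_state()
df = ds.to_dataframe()
```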
filter_solution
filter_solution(variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None, timesteps: DatetimeIndex | None = None, scenarios: Index | None = None, contains: str | list[str] | None = None, startswith: str | list[str] | None = None) -> xr.Dataset
Filter the solution to a specific variable dimension and element. If no element is specified, all elements are included.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_dims | Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] \| None | The dimensionality of variables to return: 'scalar' (variables without dimensions), 'time' (variables with a time dimension), 'scenario' (variables with a scenario dimension), 'timeonly' (variables with ONLY a time dimension), 'scenarioonly' (variables with ONLY a scenario dimension). | None |
timesteps | DatetimeIndex \| None | Optional time indexes to select. Can be a pd.DatetimeIndex (multiple timesteps) or a str/pd.Timestamp (single timestep). Defaults to all available timesteps. | None |
scenarios | Index \| None | Optional scenario indexes to select. Can be a pd.Index (multiple scenarios) or a str/int (single scenario; an int is treated as a label, not an index position). Defaults to all available scenarios. | None |
contains | str \| list[str] \| None | Filter variables that contain this string or strings. If a list is provided, variables must contain ALL strings in the list. | None |
startswith | str \| list[str] \| None | Filter variables that start with this string or strings. If a list is provided, variables must start with ANY of the strings in the list. | None |
plot_node_balance
plot_node_balance(save: bool | Path = False, show: bool = True, colors: ColorType = 'viridis', engine: PlottingEngine = 'plotly', indexer: dict[FlowSystemDimensions, Any] | None = None, mode: Literal['flow_rate', 'flow_hours'] = 'flow_rate', style: Literal['area', 'stacked_bar', 'line'] = 'stacked_bar', drop_suffix: bool = True) -> plotly.graph_objs.Figure | tuple[plt.Figure, plt.Axes]
Plots the node balance of the Component or Bus.
Args:
save: Whether to save the plot or not. If a path is provided, the plot will be saved at that location.
show: Whether to show the plot or not.
colors: The colors to use for the plot. See flixopt.plotting.ColorType for options.
engine: The engine to use for plotting. Can be either 'plotly' or 'matplotlib'.
indexer: Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}. If None, uses first value for each dimension (except time). If empty dict {}, uses all values.
mode: The mode to use for the dataset. Can be 'flow_rate' or 'flow_hours'.
- 'flow_rate': Returns the flow_rates of the Node.
- 'flow_hours': Returns the flow_hours of the Node. [flow_hours(t) = flow_rate(t) * dt(t)]. Renames suffixes to |flow_hours.
style: The plot style. Can be 'area', 'stacked_bar', or 'line'.
drop_suffix: Whether to drop the suffix from the variable names.
plot_node_balance_pie
plot_node_balance_pie(lower_percentage_group: float = 5, colors: ColorType = 'viridis', text_info: str = 'percent+label+value', save: bool | Path = False, show: bool = True, engine: PlottingEngine = 'plotly', indexer: dict[FlowSystemDimensions, Any] | None = None) -> plotly.graph_objs.Figure | tuple[plt.Figure, list[plt.Axes]]
Plot pie chart of flow hours distribution.
Args:
lower_percentage_group: Percentage threshold for "Others" grouping.
colors: Color scheme. Also see plotly.
text_info: Information to display on pie slices.
save: Whether to save plot.
show: Whether to display plot.
engine: Plotting engine ('plotly' or 'matplotlib').
indexer: Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}. If None, uses first value for each dimension. If empty dict {}, uses all values.
node_balance
node_balance(negate_inputs: bool = True, negate_outputs: bool = False, threshold: float | None = 1e-05, with_last_timestep: bool = False, mode: Literal['flow_rate', 'flow_hours'] = 'flow_rate', drop_suffix: bool = False, indexer: dict[FlowSystemDimensions, Any] | None = None) -> xr.Dataset
Returns a dataset with the node balance of the Component or Bus.
Args:
negate_inputs: Whether to negate the input flow_rates of the Node.
negate_outputs: Whether to negate the output flow_rates of the Node.
threshold: The threshold for small values. Variables with all values below the threshold are dropped.
with_last_timestep: Whether to include the last timestep in the dataset.
mode: The mode to use for the dataset. Can be 'flow_rate' or 'flow_hours'.
- 'flow_rate': Returns the flow_rates of the Node.
- 'flow_hours': Returns the flow_hours of the Node. [flow_hours(t) = flow_rate(t) * dt(t)]. Renames suffixes to |flow_hours.
drop_suffix: Whether to drop the suffix from the variable names.
indexer: Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}. If None, uses first value for each dimension. If empty dict {}, uses all values.
EffectResults
EffectResults(calculation_results: CalculationResults, label: str, variables: list[str], constraints: list[str])
Bases: _ElementResults
Results for an Effect
Attributes
variables
property
Get element variables (requires linopy model).
Raises:
Type | Description |
---|---|
ValueError | If linopy model is unavailable. |
constraints
property
Get element constraints (requires linopy model).
Raises:
Type | Description |
---|---|
ValueError | If linopy model is unavailable. |
Functions
get_shares_from
Get effect shares from specific element.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
element | str | Element label to get shares from. | required |
Returns:
Type | Description |
---|---|
Dataset | Element shares to this effect. |
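A hedged sketch; the effect and element labels are illustrative:

```python
# Shares that one component contributes to the 'cost' effect
boiler_cost_shares = results.effects['cost'].get_shares_from('Boiler_01')
```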
filter_solution
filter_solution(variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None, timesteps: DatetimeIndex | None = None, scenarios: Index | None = None, contains: str | list[str] | None = None, startswith: str | list[str] | None = None) -> xr.Dataset
Filter the solution to a specific variable dimension and element. If no element is specified, all elements are included.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_dims | Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] \| None | The dimensionality of variables to return: 'scalar' (variables without dimensions), 'time' (variables with a time dimension), 'scenario' (variables with a scenario dimension), 'timeonly' (variables with ONLY a time dimension), 'scenarioonly' (variables with ONLY a scenario dimension). | None |
timesteps | DatetimeIndex \| None | Optional time indexes to select. Can be a pd.DatetimeIndex (multiple timesteps) or a str/pd.Timestamp (single timestep). Defaults to all available timesteps. | None |
scenarios | Index \| None | Optional scenario indexes to select. Can be a pd.Index (multiple scenarios) or a str/int (single scenario; an int is treated as a label, not an index position). Defaults to all available scenarios. | None |
contains | str \| list[str] \| None | Filter variables that contain this string or strings. If a list is provided, variables must contain ALL strings in the list. | None |
startswith | str \| list[str] \| None | Filter variables that start with this string or strings. If a list is provided, variables must start with ANY of the strings in the list. | None |
SegmentedCalculationResults
SegmentedCalculationResults(segment_results: list[CalculationResults], all_timesteps: DatetimeIndex, timesteps_per_segment: int, overlap_timesteps: int, name: str, folder: Path | None = None)
Results container for segmented optimization calculations with temporal decomposition.
This class manages results from SegmentedCalculation runs where large optimization problems are solved by dividing the time horizon into smaller, overlapping segments. It provides unified access to results across all segments while maintaining the ability to analyze individual segment behavior.
Key Features
- Unified Time Series: Automatically assembles results from all segments into continuous time series, removing overlaps and boundary effects
- Segment Analysis: Access individual segment results for debugging and validation
- Consistency Checks: Verify solution continuity at segment boundaries
- Memory Efficiency: Handles large datasets that exceed single-segment memory limits
Temporal Handling
The class manages the complex task of combining overlapping segment solutions into coherent time series, ensuring proper treatment of:
- Storage state continuity between segments
- Flow rate transitions at segment boundaries
- Aggregated results over the full time horizon
Examples:
Load and analyze segmented results:
```python
# Load segmented calculation results
results = SegmentedCalculationResults.from_file('results', 'annual_segmented')

# Access unified results across all segments
full_timeline = results.all_timesteps
total_segments = len(results.segment_results)

# Analyze individual segments
for i, segment in enumerate(results.segment_results):
    print(f'Segment {i + 1}: {len(segment.solution.time)} timesteps')
    segment_costs = segment.effects['cost'].total_value

# Check solution continuity at boundaries
segment_boundaries = results.get_boundary_analysis()
max_discontinuity = segment_boundaries['max_storage_jump']
```
Create from segmented calculation:
```python
# After running segmented calculation
segmented_calc = SegmentedCalculation(
    name='annual_system',
    flow_system=system,
    timesteps_per_segment=730,  # Monthly segments
    overlap_timesteps=48,  # 2-day overlap
)
segmented_calc.do_modeling_and_solve(solver='gurobi')

# Extract unified results
results = SegmentedCalculationResults.from_calculation(segmented_calc)

# Save combined results
results.to_file(compression=5)
```
Performance analysis across segments:
```python
# Compare segment solve times
solve_times = [seg.summary['durations']['solving'] for seg in results.segment_results]
avg_solve_time = sum(solve_times) / len(solve_times)

# Verify solution quality consistency
segment_objectives = [seg.summary['objective_value'] for seg in results.segment_results]

# Storage continuity analysis
if 'Battery' in results.segment_results[0].components:
    storage_continuity = results.check_storage_continuity('Battery')
```
Design Considerations
Boundary Effects: Monitor solution quality at segment interfaces where foresight is limited compared to full-horizon optimization.
Memory Management: Individual segment results are maintained for detailed analysis while providing unified access for system-wide metrics.
Validation Tools: Built-in methods to verify temporal consistency and identify potential issues from segmentation approach.
Common Use Cases
- Large-Scale Analysis: Annual or multi-period optimization results
- Memory-Constrained Systems: Results from systems exceeding hardware limits
- Segment Validation: Verifying segmentation approach effectiveness
- Performance Monitoring: Comparing segmented vs. full-horizon solutions
- Debugging: Identifying issues specific to temporal decomposition
Functions
from_file
classmethod
Load SegmentedCalculationResults from saved files.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
folder | str \| Path | Directory containing saved files. | required |
name | str | Base name of saved files. | required |
Returns:
Name | Type | Description |
---|---|---|
SegmentedCalculationResults | SegmentedCalculationResults | Loaded instance. |
solution_without_overlap
Get variable solution removing segment overlaps.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_name | str | Name of variable to extract. | required |
Returns:
Type | Description |
---|---|
DataArray | Continuous solution without overlaps. |
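A hedged sketch; the variable label is illustrative:

```python
# Continuous flow-rate series stitched together from all segments, overlaps removed
flow = results.solution_without_overlap('Boiler_01(Natural_Gas)|flow_rate')
flow.plot()  # xarray's built-in plotting
```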
plot_heatmap
plot_heatmap(variable_name: str, heatmap_timeframes: Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'] = 'D', heatmap_timesteps_per_frame: Literal['W', 'D', 'h', '15min', 'min'] = 'h', color_map: str = 'portland', save: bool | Path = False, show: bool = True, engine: PlottingEngine = 'plotly') -> plotly.graph_objs.Figure | tuple[plt.Figure, plt.Axes]
Plot heatmap of variable solution across segments.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_name | str | Variable to plot. | required |
heatmap_timeframes | Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'] | Time aggregation level. | 'D' |
heatmap_timesteps_per_frame | Literal['W', 'D', 'h', '15min', 'min'] | Timesteps per frame. | 'h' |
color_map | str | Color scheme. Also see plotly. | 'portland' |
save | bool \| Path | Whether to save plot. | False |
show | bool | Whether to display plot. | True |
engine | PlottingEngine | Plotting engine. | 'plotly' |
Returns:
Type | Description |
---|---|
Figure \| tuple[Figure, Axes] | Figure object. |
to_file
Save segmented results to files.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
folder | str \| Path \| None | Save folder (defaults to instance folder). | None |
name | str \| None | File name (defaults to instance name). | None |
compression | int | Compression level 0-9. | 5 |
Functions
plot_heatmap
plot_heatmap(dataarray: DataArray, name: str, folder: Path, heatmap_timeframes: Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'] = 'D', heatmap_timesteps_per_frame: Literal['W', 'D', 'h', '15min', 'min'] = 'h', color_map: str = 'portland', save: bool | Path = False, show: bool = True, engine: PlottingEngine = 'plotly', indexer: dict[str, Any] | None = None)
Plot heatmap of time series data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
dataarray | DataArray | Data to plot. | required |
name | str | Variable name for title. | required |
folder | Path | Save folder. | required |
heatmap_timeframes | Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'] | Time aggregation level. | 'D' |
heatmap_timesteps_per_frame | Literal['W', 'D', 'h', '15min', 'min'] | Timesteps per frame. | 'h' |
color_map | str | Color scheme. Also see plotly. | 'portland' |
save | bool \| Path | Whether to save plot. | False |
show | bool | Whether to display plot. | True |
engine | PlottingEngine | Plotting engine. | 'plotly' |
indexer | dict[str, Any] \| None | Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}. If None, uses first value for each dimension. If empty dict {}, uses all values. | None |
sanitize_dataset
sanitize_dataset(ds: Dataset, timesteps: DatetimeIndex | None = None, threshold: float | None = 1e-05, negate: list[str] | None = None, drop_small_vars: bool = True, zero_small_values: bool = False, drop_suffix: str | None = None) -> xr.Dataset
Clean dataset by handling small values and reindexing time.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds | Dataset | Dataset to sanitize. | required |
timesteps | DatetimeIndex \| None | Time index for reindexing (optional). | None |
threshold | float \| None | Threshold for small-value processing. | 1e-05 |
negate | list[str] \| None | Variables to negate. | None |
drop_small_vars | bool | Whether to drop variables below the threshold. | True |
zero_small_values | bool | Whether to zero values below the threshold. | False |
drop_suffix | str \| None | Drop the suffix of data variable names, split by the provided string. | None |
filter_dataset
filter_dataset(ds: Dataset, variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None, timesteps: DatetimeIndex | str | Timestamp | None = None, scenarios: Index | str | int | None = None, contains: str | list[str] | None = None, startswith: str | list[str] | None = None) -> xr.Dataset
Filter dataset by variable dimensions, indexes, and with string filters for variable names.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds | Dataset | The dataset to filter. | required |
variable_dims | Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] \| None | The dimensionality of variables to return: 'scalar' (variables without dimensions), 'time' (variables with a time dimension), 'scenario' (variables with a scenario dimension), 'timeonly' (variables with ONLY a time dimension), 'scenarioonly' (variables with ONLY a scenario dimension). | None |
timesteps | DatetimeIndex \| str \| Timestamp \| None | Optional time indexes to select. Can be a pd.DatetimeIndex (multiple timesteps) or a str/pd.Timestamp (single timestep). Defaults to all available timesteps. | None |
scenarios | Index \| str \| int \| None | Optional scenario indexes to select. Can be a pd.Index (multiple scenarios) or a str/int (single scenario; an int is treated as a label, not an index position). Defaults to all available scenarios. | None |
contains | str \| list[str] \| None | Filter variables that contain this string or strings. If a list is provided, variables must contain ALL strings in the list. | None |
startswith | str \| list[str] \| None | Filter variables that start with this string or strings. If a list is provided, variables must start with ANY of the strings in the list. | None |
filter_dataarray_by_coord
Filter flows by node and component attributes.
Filters are applied in the order they are specified. All filters must match for an edge to be included.
To recombine filtered DataArrays, use `xr.concat` with dim 'flow':
`xr.concat([res.sizes(start='Fernwärme'), res.sizes(end='Fernwärme')], dim='flow')`
Parameters:
Name | Type | Description | Default |
---|---|---|---|
da | DataArray | Flow DataArray with network metadata coordinates. | required |
**kwargs | str \| list[str] \| None | Coord filters as name=value pairs. | {} |
Returns:
Type | Description |
---|---|
DataArray | Filtered DataArray with matching edges. |
Raises:
Type | Description |
---|---|
AttributeError | If required coordinates are missing. |
ValueError | If specified nodes don't exist or no matches found. |
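A hedged sketch, using the flow-level DataArray returned by flow_rates(); the import path, coordinate names, and bus label are assumptions for illustration:

```python
from flixopt.results import filter_dataarray_by_coord  # assumed import path

# Keep only flow-rate series whose edge ends at the district heating bus
heat_inflows = filter_dataarray_by_coord(results.flow_rates(), end='Fernwärme')
```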