flixopt.results ¶
Attributes¶
Classes¶
CalculationResults ¶
CalculationResults(solution: Dataset, flow_system_data: Dataset, name: str, summary: dict, folder: Path | None = None, model: Model | None = None, **kwargs)
Comprehensive container for optimization calculation results and analysis tools.
This class provides unified access to all optimization results including flow rates, component states, bus balances, and system effects. It offers powerful analysis capabilities through filtering, plotting, and export functionality, making it the primary interface for post-processing optimization results.
Key Features
- Unified Access: Single interface to all solution variables and constraints
- Element Results: Direct access to component, bus, and effect-specific results
- Visualization: Built-in plotting methods for heatmaps, time series, and networks
- Persistence: Save/load functionality with compression for large datasets
- Analysis Tools: Filtering, aggregation, and statistical analysis methods
Result Organization
- Components: Equipment-specific results (flows, states, constraints)
- Buses: Network node balances and energy flows
- Effects: System-wide impacts (costs, emissions, resource consumption)
- Solution: Raw optimization variables and their values
- Metadata: Calculation parameters, timing, and system configuration
Attributes:
Name | Type | Description |
---|---|---|
solution | Dataset | Dataset containing all optimization variable solutions. |
flow_system_data | Dataset | Dataset with the complete system configuration and parameters; used to restore the FlowSystem for further analysis. |
summary | dict | Calculation metadata including solver status, timing, and statistics. |
name | str | Unique identifier for this calculation. |
model | Model | None | Original linopy optimization model (if available). |
folder | Path | None | Directory path for result storage and loading. |
components | dict[str, ComponentResults] | Dictionary mapping component labels to ComponentResults objects. |
buses | dict[str, BusResults] | Dictionary mapping bus labels to BusResults objects. |
effects | dict[str, EffectResults] | Dictionary mapping effect names to EffectResults objects. |
timesteps_extra | | Extended time index including boundary conditions. |
hours_per_timestep | | Duration of each timestep for proper energy calculations. |
Examples:
Load and analyze saved results:
# Load results from file
results = CalculationResults.from_file('results', 'annual_optimization')
# Access specific component results
boiler_results = results['Boiler_01']
heat_pump_results = results['HeatPump_02']
# Plot component flow rates
results.plot_heatmap('Boiler_01(Natural_Gas)|flow_rate')
results['Boiler_01'].plot_node_balance()
# Access raw solution dataarrays
electricity_flows = results.solution[['Generator_01(Grid)|flow_rate', 'HeatPump_02(Grid)|flow_rate']]
# Filter and analyze results
peak_demand_hours = results.filter_solution(variable_dims='time')
costs_solution = results.effects['cost'].solution
Advanced filtering and aggregation:
# Filter by variable type
scalar_results = results.filter_solution(variable_dims='scalar')
time_series = results.filter_solution(variable_dims='time')
# Custom data analysis leveraging xarray
peak_power = results.solution['Generator_01(Grid)|flow_rate'].max()
avg_efficiency = (
results.solution['HeatPump(Heat)|flow_rate'] / results.solution['HeatPump(Electricity)|flow_rate']
).mean()
Design Patterns
- Factory Methods: Use from_file() and from_calculation() for creation, or access directly from Calculation.results
- Dictionary Access: Use results[element_label] for element-specific results
- Lazy Loading: Results objects created on-demand for memory efficiency
- Unified Interface: Consistent API across different result types
Initialize CalculationResults with optimization data. Usually, this class is instantiated by the Calculation class, or by loading from file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
solution | Dataset | Optimization solution dataset. | required |
flow_system_data | Dataset | Flow system configuration dataset. | required |
name | str | Calculation name. | required |
summary | dict | Calculation metadata. | required |
folder | Path | None | Results storage folder. | None |
model | Model | None | Linopy optimization model. | None |
Deprecated: flow_system: Use flow_system_data instead.
Attributes¶
storages property
¶
Get all storage components in the results.
constraints property
¶
Get optimization constraints (requires linopy model).
flow_system property
¶
The restored flow_system that was used to create the calculation. Contains all input parameters.
effects_per_component property
¶
Returns a dataset containing effect results for each mode, aggregated by Component
Returns:
Type | Description |
---|---|
Dataset | An xarray Dataset with an additional component dimension and effects as variables. |
Functions¶
from_file classmethod
¶
Load CalculationResults from saved files.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
folder | str | Path | Directory containing saved files. | required |
name | str | Base name of saved files (without extensions). | required |
Returns:
Name | Type | Description |
---|---|---|
CalculationResults | CalculationResults | Loaded instance. |
from_calculation classmethod
¶
Create CalculationResults from a Calculation object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
calculation | Calculation | Calculation object with solved model. | required |
Returns:
Name | Type | Description |
---|---|---|
CalculationResults | CalculationResults | New instance with extracted results. |
filter_solution ¶
filter_solution(variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None, element: str | None = None, timesteps: DatetimeIndex | None = None, scenarios: Index | None = None, contains: str | list[str] | None = None, startswith: str | list[str] | None = None) -> xr.Dataset
Filter solution by variable dimension and/or element.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_dims | Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None | The dimension of which to get variables from. - 'scalar': Get scalar variables (without dimensions) - 'time': Get time-dependent variables (with a time dimension) - 'scenario': Get scenario-dependent variables (with ONLY a scenario dimension) - 'timeonly': Get time-dependent variables (with ONLY a time dimension) - 'scenarioonly': Get scenario-dependent variables (with ONLY a scenario dimension) | None |
element | str | None | The element to filter for. | None |
timesteps | DatetimeIndex | None | Optional time indexes to select. Can be: - pd.DatetimeIndex: Multiple timesteps - str/pd.Timestamp: Single timestep Defaults to all available timesteps. | None |
scenarios | Index | None | Optional scenario indexes to select. Can be: - pd.Index: Multiple scenarios - str/int: Single scenario (int is treated as a label, not an index position) Defaults to all available scenarios. | None |
contains | str | list[str] | None | Filter variables that contain this string or strings. If a list is provided, variables must contain ALL strings in the list. | None |
startswith | str | list[str] | None | Filter variables that start with this string or strings. If a list is provided, variables must start with ANY of the strings in the list. | None |
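Example usage (a minimal sketch; the element label and variable names are illustrative and depend on your model):
>>> # Time-dependent variables of a single element
>>> boiler_ts = results.filter_solution(variable_dims='time', element='Boiler_01')
>>> # All flow_rate variables related to the district heating bus
>>> heat_rates = results.filter_solution(contains=['Fernwärme', 'flow_rate'])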
flow_rates ¶
flow_rates(start: str | list[str] | None = None, end: str | list[str] | None = None, component: str | list[str] | None = None) -> xr.DataArray
Returns a DataArray containing the flow rates of each Flow.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
start | str | list[str] | None | Optional source node(s) to filter by. Can be a single node name or a list of names. | None |
end | str | list[str] | None | Optional destination node(s) to filter by. Can be a single node name or a list of names. | None |
component | str | list[str] | None | Optional component(s) to filter by. Can be a single component name or a list of names. | None |
Further usage
- Convert the DataArray to a DataFrame: results.flow_rates().to_pandas()
- Get the max or min over time: results.flow_rates().max('time')
- Sum up the flow rates of flows with the same start and end: results.flow_rates(end='Fernwärme').groupby('start').sum(dim='flow')
- To recombine filtered DataArrays, use xr.concat with dim 'flow': xr.concat([results.flow_rates(start='Fernwärme'), results.flow_rates(end='Fernwärme')], dim='flow')
flow_hours ¶
flow_hours(start: str | list[str] | None = None, end: str | list[str] | None = None, component: str | list[str] | None = None) -> xr.DataArray
Returns a DataArray containing the flow hours of each Flow.
Flow hours represent the total energy/material transferred over time, calculated by multiplying flow rates by the duration of each timestep.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
start | str | list[str] | None | Optional source node(s) to filter by. Can be a single node name or a list of names. | None |
end | str | list[str] | None | Optional destination node(s) to filter by. Can be a single node name or a list of names. | None |
component | str | list[str] | None | Optional component(s) to filter by. Can be a single component name or a list of names. | None |
Further usage
- Convert the DataArray to a DataFrame: results.flow_hours().to_pandas()
- Sum up the flow hours over time: results.flow_hours().sum('time')
- Sum up the flow hours of flows with the same start and end: results.flow_hours(end='Fernwärme').groupby('start').sum(dim='flow')
- To recombine filtered DataArrays, use xr.concat with dim 'flow': xr.concat([results.flow_hours(start='Fernwärme'), results.flow_hours(end='Fernwärme')], dim='flow')
sizes ¶
sizes(start: str | list[str] | None = None, end: str | list[str] | None = None, component: str | list[str] | None = None) -> xr.DataArray
Returns a DataArray with the sizes of the Flows.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
start | str | list[str] | None | Optional source node(s) to filter by. Can be a single node name or a list of names. | None |
end | str | list[str] | None | Optional destination node(s) to filter by. Can be a single node name or a list of names. | None |
component | str | list[str] | None | Optional component(s) to filter by. Can be a single component name or a list of names. | None |
Further usage
- Convert the DataArray to a DataFrame: results.sizes().to_pandas()
- To recombine filtered DataArrays, use xr.concat with dim 'flow': xr.concat([results.sizes(start='Fernwärme'), results.sizes(end='Fernwärme')], dim='flow')
get_effect_shares ¶
get_effect_shares(element: str, effect: str, mode: Literal['temporal', 'periodic'] | None = None, include_flows: bool = False) -> xr.Dataset
Retrieves individual effect shares for a specific element and effect, either for the temporal mode, the periodic mode, or both combined. Only includes the direct shares.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
element | str | The element identifier for which to retrieve effect shares. | required |
effect | str | The effect identifier for which to retrieve shares. | required |
mode | Literal['temporal', 'periodic'] | None | Optional. The mode to retrieve shares for. Can be 'temporal', 'periodic', or None to retrieve both. Defaults to None. | None |
Returns:
Type | Description |
---|---|
Dataset | An xarray Dataset containing the requested effect shares. If mode is None, returns a merged Dataset containing both temporal and periodic shares. |
Raises:
Type | Description |
---|---|
ValueError | If the specified effect is not available or if mode is invalid. |
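Example (a sketch; 'Boiler_01' and 'cost' are placeholder element and effect labels):
>>> temporal_shares = results.get_effect_shares('Boiler_01', 'cost', mode='temporal')
>>> all_shares = results.get_effect_shares('Boiler_01', 'cost')  # temporal and periodic combined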
plot_heatmap ¶
plot_heatmap(variable_name: str | list[str], save: bool | Path = False, show: bool = True, colors: ColorType = 'viridis', engine: PlottingEngine = 'plotly', select: dict[FlowSystemDimensions, Any] | None = None, facet_by: str | list[str] | None = 'scenario', animate_by: str | None = 'period', facet_cols: int = 3, reshape_time: tuple[Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'], Literal['W', 'D', 'h', '15min', 'min']] | Literal['auto'] | None = 'auto', fill: Literal['ffill', 'bfill'] | None = 'ffill', indexer: dict[FlowSystemDimensions, Any] | None = None, heatmap_timeframes: Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'] | None = None, heatmap_timesteps_per_frame: Literal['W', 'D', 'h', '15min', 'min'] | None = None, color_map: str | None = None) -> plotly.graph_objs.Figure | tuple[plt.Figure, plt.Axes]
Plots a heatmap visualization of a variable using imshow or time-based reshaping.
Supports multiple visualization features that can be combined:
- Multi-variable: Plot multiple variables on a single heatmap (creates a 'variable' dimension)
- Time reshaping: Converts the 'time' dimension into 2D (e.g., hours vs days)
- Faceting: Creates subplots for different dimension values
- Animation: Animates through dimension values (Plotly only)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_name | str | list[str] | The name of the variable to plot, or a list of variable names. When a list is provided, variables are combined into a single DataArray with a new 'variable' dimension. | required |
save | bool | Path | Whether to save the plot or not. If a path is provided, the plot will be saved at that location. | False |
show | bool | Whether to show the plot or not. | True |
colors | ColorType | Color scheme for the heatmap. See plotting.ColorType for options. | 'viridis' |
engine | PlottingEngine | The engine to use for plotting. Can be either 'plotly' or 'matplotlib'. | 'plotly' |
select | dict[FlowSystemDimensions, Any] | None | Optional data selection dict. Supports single values, lists, slices, and index arrays. Applied BEFORE faceting/animation/reshaping. | None |
facet_by | str | list[str] | None | Dimension(s) to create facets (subplots) for. Can be a single dimension name (str) or list of dimensions. Each unique value combination creates a subplot. Ignored if not found. | 'scenario' |
animate_by | str | None | Dimension to animate over (Plotly only). Creates animation frames that cycle through dimension values. Only one dimension can be animated. Ignored if not found. | 'period' |
facet_cols | int | Number of columns in the facet grid layout (default: 3). | 3 |
reshape_time | tuple[Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'], Literal['W', 'D', 'h', '15min', 'min']] | Literal['auto'] | None | Time reshaping configuration (default: 'auto'): - 'auto': Automatically applies ('D', 'h') when only 'time' dimension remains - Tuple: Explicit reshaping, e.g. ('D', 'h') for days vs hours, ('MS', 'D') for months vs days, ('W', 'h') for weeks vs hours - None: Disable auto-reshaping (will error if only 1D time data) Supported timeframes: 'YS', 'MS', 'W', 'D', 'h', '15min', 'min' | 'auto' |
fill | Literal['ffill', 'bfill'] | None | Method to fill missing values after reshape: 'ffill' (forward fill) or 'bfill' (backward fill). Default is 'ffill'. | 'ffill' |
Examples:
Direct imshow mode (default):
Facet by scenario:
>>> results.plot_heatmap('Boiler(Qth)|flow_rate', facet_by='scenario', facet_cols=2)
Animate by period:
>>> results.plot_heatmap('Boiler(Qth)|flow_rate', select={'scenario': 'base'}, animate_by='period')
Time reshape mode - daily patterns:
>>> results.plot_heatmap('Boiler(Qth)|flow_rate', select={'scenario': 'base'}, reshape_time=('D', 'h'))
Combined: time reshaping with faceting and animation:
>>> results.plot_heatmap(
... 'Boiler(Qth)|flow_rate', facet_by='scenario', animate_by='period', reshape_time=('D', 'h')
... )
Multi-variable heatmap (variables as one axis):
>>> results.plot_heatmap(
... ['Boiler(Q_th)|flow_rate', 'CHP(Q_th)|flow_rate', 'HeatStorage|charge_state'],
... select={'scenario': 'base', 'period': 1},
... reshape_time=None,
... )
Multi-variable with time reshaping:
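(A possible call, sketched with illustrative variable names:)
>>> results.plot_heatmap(
...     ['Boiler(Q_th)|flow_rate', 'CHP(Q_th)|flow_rate'],
...     facet_by='variable',
...     reshape_time=('D', 'h'),
... )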
plot_network ¶
plot_network(controls: bool | list[Literal['nodes', 'edges', 'layout', 'interaction', 'manipulation', 'physics', 'selection', 'renderer']] = True, path: Path | None = None, show: bool = False) -> pyvis.network.Network | None
Plot interactive network visualization of the system.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
controls | bool | list[Literal['nodes', 'edges', 'layout', 'interaction', 'manipulation', 'physics', 'selection', 'renderer']] | Enable/disable interactive controls. | True |
path | Path | None | Save path for network HTML. | None |
show | bool | Whether to display the plot. | False |
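Example (a sketch; the output path is illustrative):
>>> from pathlib import Path
>>> results.plot_network(controls=True, path=Path('results/network.html'), show=True)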
to_file ¶
to_file(folder: str | Path | None = None, name: str | None = None, compression: int = 5, document_model: bool = True, save_linopy_model: bool = False)
Save results to files.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
folder | str | Path | None | Save folder (defaults to calculation folder). | None |
name | str | None | File name (defaults to calculation name). | None |
compression | int | Compression level 0-9. | 5 |
document_model | bool | Whether to document model formulations as yaml. | True |
save_linopy_model | bool | Whether to save linopy model file. | False |
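Example (a sketch; folder and name are illustrative):
>>> results.to_file(folder='results', name='annual_optimization', compression=9)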
BusResults ¶
BusResults(calculation_results: CalculationResults, label: str, variables: list[str], constraints: list[str], inputs: list[str], outputs: list[str], flows: list[str])
Bases: _NodeResults
Results container for energy/material balance nodes in the system.
Attributes¶
variables property
¶
Get element variables (requires linopy model).
Raises:
Type | Description |
---|---|
ValueError | If linopy model is unavailable. |
constraints property
¶
Get element constraints (requires linopy model).
Raises:
Type | Description |
---|---|
ValueError | If linopy model is unavailable. |
Functions¶
filter_solution ¶
filter_solution(variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None, timesteps: DatetimeIndex | None = None, scenarios: Index | None = None, contains: str | list[str] | None = None, startswith: str | list[str] | None = None) -> xr.Dataset
Filter this element's solution by variable dimension and optional time, scenario, and name filters.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_dims | Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None | The dimension of which to get variables from. - 'scalar': Get scalar variables (without dimensions) - 'time': Get time-dependent variables (with a time dimension) - 'scenario': Get scenario-dependent variables (with ONLY a scenario dimension) - 'timeonly': Get time-dependent variables (with ONLY a time dimension) - 'scenarioonly': Get scenario-dependent variables (with ONLY a scenario dimension) | None |
timesteps | DatetimeIndex | None | Optional time indexes to select. Can be: - pd.DatetimeIndex: Multiple timesteps - str/pd.Timestamp: Single timestep Defaults to all available timesteps. | None |
scenarios | Index | None | Optional scenario indexes to select. Can be: - pd.Index: Multiple scenarios - str/int: Single scenario (int is treated as a label, not an index position) Defaults to all available scenarios. | None |
contains | str | list[str] | None | Filter variables that contain this string or strings. If a list is provided, variables must contain ALL strings in the list. | None |
startswith | str | list[str] | None | Filter variables that start with this string or strings. If a list is provided, variables must start with ANY of the strings in the list. | None |
plot_node_balance ¶
plot_node_balance(save: bool | Path = False, show: bool = True, colors: ColorType = 'viridis', engine: PlottingEngine = 'plotly', select: dict[FlowSystemDimensions, Any] | None = None, unit_type: Literal['flow_rate', 'flow_hours'] = 'flow_rate', mode: Literal['area', 'stacked_bar', 'line'] = 'stacked_bar', drop_suffix: bool = True, facet_by: str | list[str] | None = 'scenario', animate_by: str | None = 'period', facet_cols: int = 3, indexer: dict[FlowSystemDimensions, Any] | None = None) -> plotly.graph_objs.Figure | tuple[plt.Figure, plt.Axes]
Plots the node balance of the Component or Bus with optional faceting and animation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
save | bool | Path | Whether to save the plot or not. If a path is provided, the plot will be saved at that location. | False |
show | bool | Whether to show the plot or not. | True |
colors | ColorType | The colors to use for the plot. See plotting.ColorType for options. | 'viridis' |
engine | PlottingEngine | The engine to use for plotting. Can be either 'plotly' or 'matplotlib'. | 'plotly' |
select | dict[FlowSystemDimensions, Any] | None | Optional data selection dict. Supports: - Single values: {'scenario': 'base', 'period': 2024} - Multiple values: {'scenario': ['base', 'high', 'renewable']} - Slices: {'time': slice('2024-01', '2024-06')} - Index arrays: {'time': time_array} Note: Applied BEFORE faceting/animation. | None |
unit_type | Literal['flow_rate', 'flow_hours'] | The unit type to use for the dataset. Can be 'flow_rate' or 'flow_hours'. - 'flow_rate': Returns the flow_rates of the Node. - 'flow_hours': Returns the flow_hours of the Node. [flow_hours(t) = flow_rate(t) * dt(t)]. Renames suffixes to |flow_hours. | 'flow_rate' |
mode | Literal['area', 'stacked_bar', 'line'] | The plotting mode. Use 'stacked_bar' for stacked bar charts, 'line' for stepped lines, or 'area' for stacked area charts. | 'stacked_bar' |
drop_suffix | bool | Whether to drop the suffix from the variable names. | True |
facet_by | str | list[str] | None | Dimension(s) to create facets (subplots) for. Can be a single dimension name (str) or list of dimensions. Each unique value combination creates a subplot. Ignored if not found. Example: 'scenario' creates one subplot per scenario. Example: ['scenario', 'period'] creates a grid of subplots for each scenario-period combination. | 'scenario' |
animate_by | str | None | Dimension to animate over (Plotly only). Creates animation frames that cycle through dimension values. Only one dimension can be animated. Ignored if not found. | 'period' |
facet_cols | int | Number of columns in the facet grid layout (default: 3). | 3 |
Examples:
Basic plot (current behavior):
Facet by scenario:
Animate by period:
Facet by scenario AND animate by period:
>>> results['Boiler'].plot_node_balance(facet_by='scenario', animate_by='period')
Select single scenario, then facet by period:
>>> results['Boiler'].plot_node_balance(select={'scenario': 'base'}, facet_by='period')
Select multiple scenarios and facet by them:
>>> results['Boiler'].plot_node_balance(
... select={'scenario': ['base', 'high', 'renewable']}, facet_by='scenario'
... )
Time range selection (summer months only):
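(A sketch, assuming a time axis covering 2024:)
>>> results['Boiler'].plot_node_balance(select={'time': slice('2024-06', '2024-08')})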
plot_node_balance_pie ¶
plot_node_balance_pie(lower_percentage_group: float = 5, colors: ColorType = 'viridis', text_info: str = 'percent+label+value', save: bool | Path = False, show: bool = True, engine: PlottingEngine = 'plotly', select: dict[FlowSystemDimensions, Any] | None = None, indexer: dict[FlowSystemDimensions, Any] | None = None) -> plotly.graph_objs.Figure | tuple[plt.Figure, list[plt.Axes]]
Plot pie chart of flow hours distribution.
Note
Pie charts require scalar data (no extra dimensions beyond time). If your data has dimensions like 'scenario' or 'period', either:
- Use select to choose specific values: select={'scenario': 'base', 'period': 2024}
- Let auto-selection choose the first value (a warning will be logged)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
lower_percentage_group | float | Percentage threshold for "Others" grouping. | 5 |
colors | ColorType | Color scheme. Also see plotly. | 'viridis' |
text_info | str | Information to display on pie slices. | 'percent+label+value' |
save | bool | Path | Whether to save plot. | False |
show | bool | Whether to display plot. | True |
engine | PlottingEngine | Plotting engine ('plotly' or 'matplotlib'). | 'plotly' |
select | dict[FlowSystemDimensions, Any] | None | Optional data selection dict. Supports single values, lists, slices, and index arrays. Use this to select specific scenario/period before creating the pie chart. | None |
Examples:
Basic usage (auto-selects first scenario/period if present):
Explicitly select a scenario and period:
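(A sketch; scenario and period labels are illustrative:)
>>> results['Fernwärme'].plot_node_balance_pie(select={'scenario': 'base', 'period': 2024})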
node_balance ¶
node_balance(negate_inputs: bool = True, negate_outputs: bool = False, threshold: float | None = 1e-05, with_last_timestep: bool = False, unit_type: Literal['flow_rate', 'flow_hours'] = 'flow_rate', drop_suffix: bool = False, select: dict[FlowSystemDimensions, Any] | None = None, indexer: dict[FlowSystemDimensions, Any] | None = None) -> xr.Dataset
Returns a dataset with the node balance of the Component or Bus.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
negate_inputs | bool | Whether to negate the input flow_rates of the Node. | True |
negate_outputs | bool | Whether to negate the output flow_rates of the Node. | False |
threshold | float | None | The threshold for small values. Variables with all values below the threshold are dropped. | 1e-05 |
with_last_timestep | bool | Whether to include the last timestep in the dataset. | False |
unit_type | Literal['flow_rate', 'flow_hours'] | The unit type to use for the dataset. Can be 'flow_rate' or 'flow_hours'. - 'flow_rate': Returns the flow_rates of the Node. - 'flow_hours': Returns the flow_hours of the Node. [flow_hours(t) = flow_rate(t) * dt(t)]. Renames suffixes to |flow_hours. | 'flow_rate' |
drop_suffix | bool | Whether to drop the suffix from the variable names. | False |
select | dict[FlowSystemDimensions, Any] | None | Optional data selection dict. Supports single values, lists, slices, and index arrays. | None |
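Example (a sketch; 'Fernwärme' is a placeholder bus label):
>>> balance = results['Fernwärme'].node_balance(unit_type='flow_hours', drop_suffix=True)
>>> totals = balance.sum('time')  # total flow hours per flow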
ComponentResults ¶
ComponentResults(calculation_results: CalculationResults, label: str, variables: list[str], constraints: list[str], inputs: list[str], outputs: list[str], flows: list[str])
Bases: _NodeResults
Results container for individual system components with specialized analysis tools.
Attributes¶
variables property
¶
Get element variables (requires linopy model).
Raises:
Type | Description |
---|---|
ValueError | If linopy model is unavailable. |
constraints property
¶
Get element constraints (requires linopy model).
Raises:
Type | Description |
---|---|
ValueError | If linopy model is unavailable. |
Functions¶
plot_charge_state ¶
plot_charge_state(save: bool | Path = False, show: bool = True, colors: ColorType = 'viridis', engine: PlottingEngine = 'plotly', mode: Literal['area', 'stacked_bar', 'line'] = 'area', select: dict[FlowSystemDimensions, Any] | None = None, facet_by: str | list[str] | None = 'scenario', animate_by: str | None = 'period', facet_cols: int = 3, indexer: dict[FlowSystemDimensions, Any] | None = None) -> plotly.graph_objs.Figure
Plot storage charge state over time, combined with the node balance, with optional faceting and animation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
save | bool | Path | Whether to save the plot or not. If a path is provided, the plot will be saved at that location. | False |
show | bool | Whether to show the plot or not. | True |
colors | ColorType | Color scheme. Also see plotly. | 'viridis' |
engine | PlottingEngine | Plotting engine to use. Only 'plotly' is currently implemented. | 'plotly' |
mode | Literal['area', 'stacked_bar', 'line'] | The plotting mode. Use 'stacked_bar' for stacked bar charts, 'line' for stepped lines, or 'area' for stacked area charts. | 'area' |
select | dict[FlowSystemDimensions, Any] | None | Optional data selection dict. Supports single values, lists, slices, and index arrays. Applied BEFORE faceting/animation. | None |
facet_by | str | list[str] | None | Dimension(s) to create facets (subplots) for. Can be a single dimension name (str) or list of dimensions. Each unique value combination creates a subplot. Ignored if not found. | 'scenario' |
animate_by | str | None | Dimension to animate over (Plotly only). Creates animation frames that cycle through dimension values. Only one dimension can be animated. Ignored if not found. | 'period' |
facet_cols | int | Number of columns in the facet grid layout (default: 3). | 3 |
Raises:
Type | Description |
---|---|
ValueError | If component is not a storage. |
Examples:
Basic plot:
Facet by scenario:
Animate by period:
Facet by scenario AND animate by period:
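(Sketches for the cases above; 'Battery' is a placeholder storage label:)
>>> results['Battery'].plot_charge_state()
>>> results['Battery'].plot_charge_state(facet_by='scenario')
>>> results['Battery'].plot_charge_state(select={'scenario': 'base'}, animate_by='period')
>>> results['Battery'].plot_charge_state(facet_by='scenario', animate_by='period')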
node_balance_with_charge_state ¶
node_balance_with_charge_state(negate_inputs: bool = True, negate_outputs: bool = False, threshold: float | None = 1e-05) -> xr.Dataset
Get storage node balance including charge state.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
negate_inputs | bool | Whether to negate input flows. | True |
negate_outputs | bool | Whether to negate output flows. | False |
threshold | float | None | Threshold for small values. | 1e-05 |
Returns:
Type | Description |
---|---|
Dataset | xr.Dataset: Node balance with charge state. |
Raises:
Type | Description |
---|---|
ValueError | If component is not a storage. |
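Example (a sketch; 'Battery' is a placeholder storage label):
>>> results['Battery'].node_balance_with_charge_state()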
filter_solution ¶
filter_solution(variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None, timesteps: DatetimeIndex | None = None, scenarios: Index | None = None, contains: str | list[str] | None = None, startswith: str | list[str] | None = None) -> xr.Dataset
Filter this element's solution by variable dimension and optional time, scenario, and name filters.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_dims | Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None | The dimension of which to get variables from. - 'scalar': Get scalar variables (without dimensions) - 'time': Get time-dependent variables (with a time dimension) - 'scenario': Get scenario-dependent variables (with ONLY a scenario dimension) - 'timeonly': Get time-dependent variables (with ONLY a time dimension) - 'scenarioonly': Get scenario-dependent variables (with ONLY a scenario dimension) | None |
timesteps | DatetimeIndex | None | Optional time indexes to select. Can be: - pd.DatetimeIndex: Multiple timesteps - str/pd.Timestamp: Single timestep Defaults to all available timesteps. | None |
scenarios | Index | None | Optional scenario indexes to select. Can be: - pd.Index: Multiple scenarios - str/int: Single scenario (int is treated as a label, not an index position) Defaults to all available scenarios. | None |
contains | str | list[str] | None | Filter variables that contain this string or strings. If a list is provided, variables must contain ALL strings in the list. | None |
startswith | str | list[str] | None | Filter variables that start with this string or strings. If a list is provided, variables must start with ANY of the strings in the list. | None |
plot_node_balance ¶
plot_node_balance(save: bool | Path = False, show: bool = True, colors: ColorType = 'viridis', engine: PlottingEngine = 'plotly', select: dict[FlowSystemDimensions, Any] | None = None, unit_type: Literal['flow_rate', 'flow_hours'] = 'flow_rate', mode: Literal['area', 'stacked_bar', 'line'] = 'stacked_bar', drop_suffix: bool = True, facet_by: str | list[str] | None = 'scenario', animate_by: str | None = 'period', facet_cols: int = 3, indexer: dict[FlowSystemDimensions, Any] | None = None) -> plotly.graph_objs.Figure | tuple[plt.Figure, plt.Axes]
Plots the node balance of the Component or Bus with optional faceting and animation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
save | bool | Path | Whether to save the plot or not. If a path is provided, the plot will be saved at that location. | False |
show | bool | Whether to show the plot or not. | True |
colors | ColorType | The colors to use for the plot. See plotting.ColorType for options. | 'viridis' |
engine | PlottingEngine | The engine to use for plotting. Can be either 'plotly' or 'matplotlib'. | 'plotly' |
select | dict[FlowSystemDimensions, Any] | None | Optional data selection dict. Supports: - Single values: {'scenario': 'base', 'period': 2024} - Multiple values: {'scenario': ['base', 'high', 'renewable']} - Slices: {'time': slice('2024-01', '2024-06')} - Index arrays: {'time': time_array} Note: Applied BEFORE faceting/animation. | None |
unit_type | Literal['flow_rate', 'flow_hours'] | The unit type to use for the dataset. Can be 'flow_rate' or 'flow_hours'. - 'flow_rate': Returns the flow_rates of the Node. - 'flow_hours': Returns the flow_hours of the Node. [flow_hours(t) = flow_rate(t) * dt(t)]. Renames suffixes to |flow_hours. | 'flow_rate' |
mode | Literal['area', 'stacked_bar', 'line'] | The plotting mode. Use 'stacked_bar' for stacked bar charts, 'line' for stepped lines, or 'area' for stacked area charts. | 'stacked_bar' |
drop_suffix | bool | Whether to drop the suffix from the variable names. | True |
facet_by | str | list[str] | None | Dimension(s) to create facets (subplots) for. Can be a single dimension name (str) or list of dimensions. Each unique value combination creates a subplot. Ignored if not found. Example: 'scenario' creates one subplot per scenario. Example: ['scenario', 'period'] creates a grid of subplots for each scenario-period combination. | 'scenario' |
animate_by | str | None | Dimension to animate over (Plotly only). Creates animation frames that cycle through dimension values. Only one dimension can be animated. Ignored if not found. | 'period' |
facet_cols | int | Number of columns in the facet grid layout (default: 3). | 3 |
Examples:
Basic plot (current behavior):
Facet by scenario:
Animate by period:
Facet by scenario AND animate by period:
>>> results['Boiler'].plot_node_balance(facet_by='scenario', animate_by='period')
Select single scenario, then facet by period:
>>> results['Boiler'].plot_node_balance(select={'scenario': 'base'}, facet_by='period')
Select multiple scenarios and facet by them:
>>> results['Boiler'].plot_node_balance(
... select={'scenario': ['base', 'high', 'renewable']}, facet_by='scenario'
... )
Time range selection (summer months only):
plot_node_balance_pie ¶
plot_node_balance_pie(lower_percentage_group: float = 5, colors: ColorType = 'viridis', text_info: str = 'percent+label+value', save: bool | Path = False, show: bool = True, engine: PlottingEngine = 'plotly', select: dict[FlowSystemDimensions, Any] | None = None, indexer: dict[FlowSystemDimensions, Any] | None = None) -> plotly.graph_objs.Figure | tuple[plt.Figure, list[plt.Axes]]
Plot pie chart of flow hours distribution.
Note
Pie charts require scalar data (no extra dimensions beyond time). If your data has dimensions like 'scenario' or 'period', either:
- Use select to choose specific values: select={'scenario': 'base', 'period': 2024}
- Let auto-selection choose the first value (a warning will be logged)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
lower_percentage_group | float | Percentage threshold for "Others" grouping. | 5 |
colors | ColorType | Color scheme. Also see plotly. | 'viridis' |
text_info | str | Information to display on pie slices. | 'percent+label+value' |
save | bool | Path | Whether to save plot. | False |
show | bool | Whether to display plot. | True |
engine | PlottingEngine | Plotting engine ('plotly' or 'matplotlib'). | 'plotly' |
select | dict[FlowSystemDimensions, Any] | None | Optional data selection dict. Supports single values, lists, slices, and index arrays. Use this to select specific scenario/period before creating the pie chart. | None |
Examples:
Basic usage (auto-selects first scenario/period if present):
Explicitly select a scenario and period:
node_balance ¶
node_balance(negate_inputs: bool = True, negate_outputs: bool = False, threshold: float | None = 1e-05, with_last_timestep: bool = False, unit_type: Literal['flow_rate', 'flow_hours'] = 'flow_rate', drop_suffix: bool = False, select: dict[FlowSystemDimensions, Any] | None = None, indexer: dict[FlowSystemDimensions, Any] | None = None) -> xr.Dataset
Returns a dataset with the node balance of the Component or Bus.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
negate_inputs | bool | Whether to negate the input flow_rates of the Node. | True |
negate_outputs | bool | Whether to negate the output flow_rates of the Node. | False |
threshold | float | None | The threshold for small values. Variables with all values below the threshold are dropped. | 1e-05 |
with_last_timestep | bool | Whether to include the last timestep in the dataset. | False |
unit_type | Literal['flow_rate', 'flow_hours'] | The unit type to use for the dataset. Can be 'flow_rate' or 'flow_hours'. - 'flow_rate': Returns the flow_rates of the Node. - 'flow_hours': Returns the flow_hours of the Node. [flow_hours(t) = flow_rate(t) * dt(t)]. Renames suffixes to |flow_hours. | 'flow_rate' |
drop_suffix | bool | Whether to drop the suffix from the variable names. | False |
select | dict[FlowSystemDimensions, Any] | None | Optional data selection dict. Supports single values, lists, slices, and index arrays. | None |
EffectResults ¶
EffectResults(calculation_results: CalculationResults, label: str, variables: list[str], constraints: list[str])
Bases: _ElementResults
Results for an Effect
Attributes¶
variables property
¶
Get element variables (requires linopy model).
Raises:
Type | Description |
---|---|
ValueError | If linopy model is unavailable. |
constraints property
¶
Get element constraints (requires linopy model).
Raises:
Type | Description |
---|---|
ValueError | If linopy model is unavailable. |
Functions¶
get_shares_from ¶
get_shares_from(element: str) -> xr.Dataset
Get effect shares from a specific element.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
element | str | Element label to get shares from. | required |
Returns:
Type | Description |
---|---|
Dataset | xr.Dataset: Element shares to this effect. |
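Example (a sketch; effect and element labels are illustrative):
>>> results.effects['cost'].get_shares_from('Boiler_01')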
filter_solution ¶
filter_solution(variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None, timesteps: DatetimeIndex | None = None, scenarios: Index | None = None, contains: str | list[str] | None = None, startswith: str | list[str] | None = None) -> xr.Dataset
Filter this element's solution by variable dimension and optional time, scenario, and name filters.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_dims | Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None | The dimension of which to get variables from. - 'scalar': Get scalar variables (without dimensions) - 'time': Get time-dependent variables (with a time dimension) - 'scenario': Get scenario-dependent variables (with ONLY a scenario dimension) - 'timeonly': Get time-dependent variables (with ONLY a time dimension) - 'scenarioonly': Get scenario-dependent variables (with ONLY a scenario dimension) | None |
timesteps | DatetimeIndex | None | Optional time indexes to select. Can be: - pd.DatetimeIndex: Multiple timesteps - str/pd.Timestamp: Single timestep Defaults to all available timesteps. | None |
scenarios | Index | None | Optional scenario indexes to select. Can be: - pd.Index: Multiple scenarios - str/int: Single scenario (int is treated as a label, not an index position) Defaults to all available scenarios. | None |
contains | str | list[str] | None | Filter variables that contain this string or strings. If a list is provided, variables must contain ALL strings in the list. | None |
startswith | str | list[str] | None | Filter variables that start with this string or strings. If a list is provided, variables must start with ANY of the strings in the list. | None |
SegmentedCalculationResults ¶
SegmentedCalculationResults(segment_results: list[CalculationResults], all_timesteps: DatetimeIndex, timesteps_per_segment: int, overlap_timesteps: int, name: str, folder: Path | None = None)
Results container for segmented optimization calculations with temporal decomposition.
This class manages results from SegmentedCalculation runs where large optimization problems are solved by dividing the time horizon into smaller, overlapping segments. It provides unified access to results across all segments while maintaining the ability to analyze individual segment behavior.
Key Features
- Unified Time Series: Automatically assembles results from all segments into continuous time series, removing overlaps and boundary effects
- Segment Analysis: Access individual segment results for debugging and validation
- Consistency Checks: Verify solution continuity at segment boundaries
- Memory Efficiency: Handles large datasets that exceed single-segment memory limits
Temporal Handling
The class manages the complex task of combining overlapping segment solutions into coherent time series, ensuring proper treatment of:
- Storage state continuity between segments
- Flow rate transitions at segment boundaries
- Aggregated results over the full time horizon
Examples:
Load and analyze segmented results:
# Load segmented calculation results
results = SegmentedCalculationResults.from_file('results', 'annual_segmented')
# Access unified results across all segments
full_timeline = results.all_timesteps
total_segments = len(results.segment_results)
# Analyze individual segments
for i, segment in enumerate(results.segment_results):
print(f'Segment {i + 1}: {len(segment.solution.time)} timesteps')
segment_costs = segment.effects['cost'].total_value
# Check solution continuity at boundaries
segment_boundaries = results.get_boundary_analysis()
max_discontinuity = segment_boundaries['max_storage_jump']
Create from segmented calculation:
# After running segmented calculation
segmented_calc = SegmentedCalculation(
name='annual_system',
flow_system=system,
timesteps_per_segment=730, # Monthly segments
overlap_timesteps=48, # 2-day overlap
)
segmented_calc.do_modeling_and_solve(solver='gurobi')
# Extract unified results
results = SegmentedCalculationResults.from_calculation(segmented_calc)
# Save combined results
results.to_file(compression=5)
Performance analysis across segments:
# Compare segment solve times
solve_times = [seg.summary['durations']['solving'] for seg in results.segment_results]
avg_solve_time = sum(solve_times) / len(solve_times)
# Verify solution quality consistency
segment_objectives = [seg.summary['objective_value'] for seg in results.segment_results]
# Storage continuity analysis
if 'Battery' in results.segment_results[0].components:
storage_continuity = results.check_storage_continuity('Battery')
Design Considerations
Boundary Effects: Monitor solution quality at segment interfaces where foresight is limited compared to full-horizon optimization.
Memory Management: Individual segment results are maintained for detailed analysis while providing unified access for system-wide metrics.
Validation Tools: Built-in methods to verify temporal consistency and identify potential issues arising from the segmentation approach.
Common Use Cases
- Large-Scale Analysis: Annual or multi-period optimization results
- Memory-Constrained Systems: Results from systems exceeding hardware limits
- Segment Validation: Verifying segmentation approach effectiveness
- Performance Monitoring: Comparing segmented vs. full-horizon solutions
- Debugging: Identifying issues specific to temporal decomposition
Functions¶
from_file classmethod
¶
Load SegmentedCalculationResults from saved files.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
folder | str | Path | Directory containing saved files. | required |
name | str | Base name of saved files. | required |
Returns:
Name | Type | Description |
---|---|---|
SegmentedCalculationResults | SegmentedCalculationResults | Loaded instance. |
solution_without_overlap ¶
solution_without_overlap(variable_name: str) -> xr.DataArray
Get variable solution removing segment overlaps.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_name | str | Name of variable to extract. | required |
Returns:
Type | Description |
---|---|
DataArray | xr.DataArray: Continuous solution without overlaps. |
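Example (a sketch; the variable name is illustrative):
>>> continuous = results.solution_without_overlap('Boiler(Q_th)|flow_rate')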
plot_heatmap ¶
plot_heatmap(variable_name: str, reshape_time: tuple[Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'], Literal['W', 'D', 'h', '15min', 'min']] | Literal['auto'] | None = 'auto', colors: str = 'portland', save: bool | Path = False, show: bool = True, engine: PlottingEngine = 'plotly', facet_by: str | list[str] | None = None, animate_by: str | None = None, facet_cols: int = 3, fill: Literal['ffill', 'bfill'] | None = 'ffill', heatmap_timeframes: Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'] | None = None, heatmap_timesteps_per_frame: Literal['W', 'D', 'h', '15min', 'min'] | None = None, color_map: str | None = None) -> plotly.graph_objs.Figure | tuple[plt.Figure, plt.Axes]
Plot heatmap of variable solution across segments.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
variable_name | str | Variable to plot. | required |
reshape_time | tuple[Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'], Literal['W', 'D', 'h', '15min', 'min']] | Literal['auto'] | None | Time reshaping configuration (default: 'auto'): - 'auto': Automatically applies ('D', 'h') when only 'time' dimension remains - Tuple like ('D', 'h'): Explicit reshaping (days vs hours) - None: Disable time reshaping | 'auto' |
colors | str | Color scheme. See plotting.ColorType for options. | 'portland' |
save | bool | Path | Whether to save plot. | False |
show | bool | Whether to display plot. | True |
engine | PlottingEngine | Plotting engine. | 'plotly' |
facet_by | str | list[str] | None | Dimension(s) to create facets (subplots) for. | None |
animate_by | str | None | Dimension to animate over (Plotly only). | None |
facet_cols | int | Number of columns in the facet grid layout. | 3 |
fill | Literal['ffill', 'bfill'] | None | Method to fill missing values: 'ffill' or 'bfill'. | 'ffill' |
heatmap_timeframes | Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'] | None | (Deprecated) Use reshape_time instead. | None |
heatmap_timesteps_per_frame | Literal['W', 'D', 'h', '15min', 'min'] | None | (Deprecated) Use reshape_time instead. | None |
color_map | str | None | (Deprecated) Use colors instead. | None |
Returns:
Type | Description |
---|---|
Figure | tuple[Figure, Axes] | Figure object. |
to_file ¶
Functions¶
plot_heatmap ¶
plot_heatmap(data: DataArray | Dataset, name: str | None = None, folder: Path | None = None, colors: ColorType = 'viridis', save: bool | Path = False, show: bool = True, engine: PlottingEngine = 'plotly', select: dict[str, Any] | None = None, facet_by: str | list[str] | None = None, animate_by: str | None = None, facet_cols: int = 3, reshape_time: tuple[Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'], Literal['W', 'D', 'h', '15min', 'min']] | Literal['auto'] | None = 'auto', fill: Literal['ffill', 'bfill'] | None = 'ffill', indexer: dict[str, Any] | None = None, heatmap_timeframes: Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'] | None = None, heatmap_timesteps_per_frame: Literal['W', 'D', 'h', '15min', 'min'] | None = None, color_map: str | None = None)
Plot heatmap visualization with support for multi-variable, faceting, and animation.
This function provides a standalone interface to the heatmap plotting capabilities, supporting the same modern features as CalculationResults.plot_heatmap().
Parameters:
Name | Type | Description | Default |
---|---|---|---|
data | DataArray | Dataset | Data to plot. Can be a single DataArray or an xarray Dataset. When a Dataset is provided, all data variables are combined along a new 'variable' dimension. | required |
name | str | None | Optional name for the title. If not provided, uses the DataArray name or generates a default title for Datasets. | None |
folder | Path | None | Save folder for the plot. Defaults to current directory if not provided. | None |
colors | ColorType | Color scheme for the heatmap. See plotting.ColorType for options. | 'viridis' |
save | bool | Path | Whether to save the plot or not. If a path is provided, the plot will be saved at that location. | False |
show | bool | Whether to show the plot or not. | True |
engine | PlottingEngine | The engine to use for plotting. Can be either 'plotly' or 'matplotlib'. | 'plotly' |
select | dict[str, Any] | None | Optional data selection dict. Supports single values, lists, slices, and index arrays. | None |
facet_by | str | list[str] | None | Dimension(s) to create facets (subplots) for. Can be a single dimension name (str) or list of dimensions. Each unique value combination creates a subplot. | None |
animate_by | str | None | Dimension to animate over (Plotly only). Creates animation frames. | None |
facet_cols | int | Number of columns in the facet grid layout (default: 3). | 3 |
reshape_time | tuple[Literal['YS', 'MS', 'W', 'D', 'h', '15min', 'min'], Literal['W', 'D', 'h', '15min', 'min']] | Literal['auto'] | None | Time reshaping configuration (default: 'auto'): - 'auto': Automatically applies ('D', 'h') when only 'time' dimension remains - Tuple: Explicit reshaping, e.g. ('D', 'h') for days vs hours - None: Disable auto-reshaping | 'auto' |
fill | Literal['ffill', 'bfill'] | None | Method to fill missing values after reshape: 'ffill' (forward fill) or 'bfill' (backward fill). Default is 'ffill'. | 'ffill' |
Examples:
Single DataArray with time reshaping:
>>> plot_heatmap(data, name='Temperature', folder=Path('.'), reshape_time=('D', 'h'))
Dataset with multiple variables (facet by variable):
>>> dataset = xr.Dataset({'Boiler': data1, 'CHP': data2, 'Storage': data3})
>>> plot_heatmap(
... dataset,
... folder=Path('.'),
... facet_by='variable',
... reshape_time=('D', 'h'),
... )
Dataset with animation by variable:
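(A sketch, reusing the Dataset from the previous example:)
>>> plot_heatmap(dataset, folder=Path('.'), animate_by='variable', reshape_time=('D', 'h'))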
sanitize_dataset ¶
sanitize_dataset(ds: Dataset, timesteps: DatetimeIndex | None = None, threshold: float | None = 1e-05, negate: list[str] | None = None, drop_small_vars: bool = True, zero_small_values: bool = False, drop_suffix: str | None = None) -> xr.Dataset
Clean dataset by handling small values and reindexing time.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds | Dataset | Dataset to sanitize. | required |
timesteps | DatetimeIndex | None | Time index for reindexing (optional). | None |
threshold | float | None | Threshold for small values processing. | 1e-05 |
negate | list[str] | None | Variables to negate. | None |
drop_small_vars | bool | Whether to drop variables below threshold. | True |
zero_small_values | bool | Whether to zero values below threshold. | False |
drop_suffix | str | None | Drop suffix of data var names. Split by the provided str. | None |
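Example (a sketch; ds is assumed to be a solution Dataset, and the variable name and threshold are illustrative):
>>> cleaned = sanitize_dataset(ds, threshold=1e-3, negate=['Boiler(Q_th)|flow_rate'], drop_small_vars=True)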
filter_dataset ¶
filter_dataset(ds: Dataset, variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None, timesteps: DatetimeIndex | str | Timestamp | None = None, scenarios: Index | str | int | None = None, contains: str | list[str] | None = None, startswith: str | list[str] | None = None) -> xr.Dataset
Filter dataset by variable dimensions, indexes, and with string filters for variable names.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
ds | Dataset | The dataset to filter. | required |
variable_dims | Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None | The dimension of which to get variables from. - 'scalar': Get scalar variables (without dimensions) - 'time': Get time-dependent variables (with a time dimension) - 'scenario': Get scenario-dependent variables (with ONLY a scenario dimension) - 'timeonly': Get time-dependent variables (with ONLY a time dimension) - 'scenarioonly': Get scenario-dependent variables (with ONLY a scenario dimension) | None |
timesteps | DatetimeIndex | str | Timestamp | None | Optional time indexes to select. Can be: - pd.DatetimeIndex: Multiple timesteps - str/pd.Timestamp: Single timestep Defaults to all available timesteps. | None |
scenarios | Index | str | int | None | Optional scenario indexes to select. Can be: - pd.Index: Multiple scenarios - str/int: Single scenario (int is treated as a label, not an index position) Defaults to all available scenarios. | None |
contains | str | list[str] | None | Filter variables that contain this string or strings. If a list is provided, variables must contain ALL strings in the list. | None |
startswith | str | list[str] | None | Filter variables that start with this string or strings. If a list is provided, variables must start with ANY of the strings in the list. | None |
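Example (a sketch; assumes results.solution follows the 'Element(Flow)|variable' naming pattern shown above):
>>> flow_rate_vars = filter_dataset(results.solution, variable_dims='time', contains='flow_rate')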
filter_dataarray_by_coord ¶
filter_dataarray_by_coord(da: DataArray, **kwargs: str | list[str] | None) -> xr.DataArray
Filter flows by node and component attributes.
Filters are applied in the order they are specified. All filters must match for an edge to be included.
To recombine filtered DataArrays, use xr.concat: xr.concat([res.sizes(start='Fernwärme'), res.sizes(end='Fernwärme')], dim='flow')
Parameters:
Name | Type | Description | Default |
---|---|---|---|
da | DataArray | Flow DataArray with network metadata coordinates. | required |
**kwargs | str | list[str] | None | Coord filters as name=value pairs. | {} |
Returns:
Type | Description |
---|---|
DataArray | Filtered DataArray with matching edges. |
Raises:
Type | Description |
---|---|
AttributeError | If required coordinates are missing. |
ValueError | If specified nodes don't exist or no matches found. |