flixopt.io ¶
Attributes¶
Classes¶
ResultsPaths dataclass ¶
Container for all paths related to saving Results.
Functions¶
create_folders ¶
Ensure the folder exists.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| parents | bool | If True, create parent directories as needed. If False, the parent must exist. | False |
| exist_ok | bool | If True, do not raise an error if the folder already exists. If False, raise FileExistsError. | True |
Raises:
| Type | Description |
|---|---|
| FileNotFoundError | If parents=False and the parent directory doesn't exist. |
| FileExistsError | If exist_ok=False and the folder already exists. |
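These parameters mirror `pathlib.Path.mkdir`; the sketch below (a hypothetical stand-in, not flixopt's implementation) illustrates the documented behavior and error cases:

```python
import tempfile
from pathlib import Path


def create_folders(folder: Path, parents: bool = False, exist_ok: bool = True) -> None:
    """Ensure the folder exists, following the parameters documented above."""
    folder.mkdir(parents=parents, exist_ok=exist_ok)


with tempfile.TemporaryDirectory() as tmp:
    nested = Path(tmp) / 'results' / 'plots'
    try:
        create_folders(nested)  # parent 'results' missing and parents=False
    except FileNotFoundError:
        pass
    create_folders(nested, parents=True)  # creates both levels
    create_folders(nested)  # exist_ok=True: calling again raises nothing
    assert nested.is_dir()
```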
update ¶
Update name and/or folder and refresh all paths.
FlowSystemDatasetIO ¶
Unified I/O handler for FlowSystem dataset serialization and deserialization.
This class provides optimized methods for converting FlowSystem objects to/from xarray Datasets. It uses shared constants for variable prefixes and implements fast DataArray construction to avoid xarray's slow _construct_dataarray method.
Constants
SOLUTION_PREFIX: Prefix for solution variables ('solution|')
CLUSTERING_PREFIX: Prefix for clustering variables ('clustering|')
Example
Serialization (FlowSystem -> Dataset)¶
```python
ds = FlowSystemDatasetIO.to_dataset(flow_system, base_ds)
```
Deserialization (Dataset -> FlowSystem)¶
```python
fs = FlowSystemDatasetIO.from_dataset(ds)
```
Functions¶
from_dataset classmethod ¶
Create FlowSystem from dataset.
This is the main entry point for dataset restoration. Called by FlowSystem.from_dataset().
If the dataset contains solution data (variables prefixed with 'solution|'), the solution will be restored to the FlowSystem. Solution time coordinates are renamed back from 'solution_time' to 'time'.
Supports clustered datasets with (cluster, time) dimensions. When detected, creates a synthetic DatetimeIndex for compatibility and stores the clustered data structure for later use.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| ds | Dataset | Dataset containing the FlowSystem data | required |
Returns:
| Type | Description |
|---|---|
| FlowSystem | FlowSystem instance with all components, buses, effects, and solution restored |
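The 'solution|' prefix convention can be illustrated with plain dicts; this sketch shows how variables might be partitioned and the prefix stripped during restoration (the variable names are made up for illustration):

```python
SOLUTION_PREFIX = 'solution|'  # shared constant named in the Constants section above


def split_solution_variables(variables: dict) -> tuple[dict, dict]:
    """Partition variables into structure data and solution data, stripping the prefix."""
    solution = {
        name[len(SOLUTION_PREFIX):]: value
        for name, value in variables.items()
        if name.startswith(SOLUTION_PREFIX)
    }
    structure = {
        name: value
        for name, value in variables.items()
        if not name.startswith(SOLUTION_PREFIX)
    }
    return structure, solution


structure, solution = split_solution_variables(
    {'Boiler|size': 50, 'solution|Boiler|flow_rate': [1, 2, 3]}
)
```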
to_dataset classmethod ¶
```python
to_dataset(flow_system: FlowSystem, base_dataset: Dataset, include_solution: bool = True, include_original_data: bool = True) -> xr.Dataset
```
Convert FlowSystem-specific data to dataset.
This function adds FlowSystem-specific data (solution, clustering, metadata) to a base dataset created by the parent class's to_dataset() method.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| flow_system | FlowSystem | The FlowSystem to serialize | required |
| base_dataset | Dataset | Dataset from the parent class with basic structure | required |
| include_solution | bool | Whether to include the optimization solution | True |
| include_original_data | bool | Whether to include clustering.original_data | True |
Returns:
| Type | Description |
|---|---|
| Dataset | Complete dataset with all FlowSystem data |
Functions¶
remove_none_and_empty ¶
Recursively removes None values and empty dicts and lists from a dictionary or list.
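A minimal sketch of this behavior (a hypothetical re-implementation, not the library's code):

```python
def remove_none_and_empty(obj):
    """Recursively drop None values and empty dicts/lists, as described above."""
    if isinstance(obj, dict):
        cleaned = {k: remove_none_and_empty(v) for k, v in obj.items() if v is not None}
        return {k: v for k, v in cleaned.items() if v != {} and v != []}
    if isinstance(obj, list):
        cleaned = [remove_none_and_empty(item) for item in obj if item is not None]
        return [item for item in cleaned if item != {} and item != []]
    return obj


# 'a' is None, 'b' becomes an empty dict after cleaning, and the list loses
# its None and empty-dict entries; falsy scalars like 0 are kept.
cleaned = remove_none_and_empty({'a': None, 'b': {'c': None}, 'd': [1, None, {}]})
```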
round_nested_floats ¶
```python
round_nested_floats(obj: dict | list | float | int | Any, decimals: int = 2) -> dict | list | float | int | Any
```
Recursively round floating-point numbers in nested data structures and convert them to Python native types.
This function traverses nested data structures (dictionaries, lists) and rounds any floating point numbers to the specified number of decimal places. It handles various data types including NumPy arrays and xarray DataArrays by converting them to lists with rounded values.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | dict \| list \| float \| int \| Any | The object to process. Can be a dict, list, float, int, numpy.ndarray, xarray.DataArray, or any other type. | required |
| decimals | int | Number of decimal places to round to. Defaults to 2. | 2 |
Returns:
| Type | Description |
|---|---|
| dict \| list \| float \| int \| Any | The processed object with the same structure as the input, but with all floating-point numbers rounded to the specified precision. NumPy arrays and xarray DataArrays are converted to lists. |
Examples:
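A sketch of the rounding behavior on plain dicts and lists (hypothetical re-implementation; the real function additionally handles NumPy arrays and xarray DataArrays by converting them to lists):

```python
def round_nested_floats(obj, decimals: int = 2):
    """Recursively round floats inside nested dicts and lists."""
    if isinstance(obj, dict):
        return {key: round_nested_floats(value, decimals) for key, value in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [round_nested_floats(item, decimals) for item in obj]
    if isinstance(obj, float):
        return round(obj, decimals)
    return obj


rounded = round_nested_floats({'cost': 3.14159, 'flows': [0.123456, 2.0]}, decimals=3)
```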
load_json ¶
Load data from a JSON file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | str \| Path | Path to the JSON file. | required |
Returns:
| Type | Description |
|---|---|
| dict \| list | Loaded data (typically dict or list). |
Raises:
| Type | Description |
|---|---|
| FileNotFoundError | If the file does not exist. |
| JSONDecodeError | If the file is not valid JSON. |
save_json ¶
```python
save_json(data: dict | list, path: str | Path, indent: int = 4, ensure_ascii: bool = False, **kwargs: Any) -> None
```
Save data to a JSON file with consistent formatting.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | dict \| list | Data to save (dict or list). | required |
| path | str \| Path | Path to save the JSON file. | required |
| indent | int | Number of spaces for indentation (default: 4). | 4 |
| ensure_ascii | bool | If False, allow Unicode characters (default: False). | False |
| **kwargs | Any | Additional arguments to pass to json.dump(). | {} |
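These two helpers wrap the standard json module; a minimal round-trip sketch with the documented defaults (a simplified stand-in, not flixopt's exact code):

```python
import json
import tempfile
from pathlib import Path


def save_json(data, path, indent: int = 4, ensure_ascii: bool = False, **kwargs) -> None:
    """Write data as JSON with the defaults documented above (sketch)."""
    Path(path).write_text(
        json.dumps(data, indent=indent, ensure_ascii=ensure_ascii, **kwargs),
        encoding='utf-8',
    )


def load_json(path):
    """Read JSON back from disk (sketch)."""
    return json.loads(Path(path).read_text(encoding='utf-8'))


with tempfile.TemporaryDirectory() as tmp:
    target = Path(tmp) / 'config.json'
    save_json({'solver': 'highs', 'gap': 0.01}, target)
    restored = load_json(target)
```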
load_yaml ¶
Load data from a YAML file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | str \| Path | Path to the YAML file. | required |
Returns:
| Type | Description |
|---|---|
| dict \| list | Loaded data (typically dict or list), or empty dict if file is empty. |
Raises:
| Type | Description |
|---|---|
| FileNotFoundError | If the file does not exist. |
| YAMLError | If the file is not valid YAML. |

Note
Returns {} for empty YAML files instead of None.
save_yaml ¶
```python
save_yaml(data: dict | list, path: str | Path, indent: int = 4, width: int = 1000, allow_unicode: bool = True, sort_keys: bool = False, compact_numeric_lists: bool = False, **kwargs: Any) -> None
```
Save data to a YAML file with consistent formatting.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | dict \| list | Data to save (dict or list). | required |
| path | str \| Path | Path to save the YAML file. | required |
| indent | int | Number of spaces for indentation (default: 4). | 4 |
| width | int | Maximum line width (default: 1000). | 1000 |
| allow_unicode | bool | If True, allow Unicode characters (default: True). | True |
| sort_keys | bool | If True, sort dictionary keys (default: False). | False |
| compact_numeric_lists | bool | If True, format numeric lists inline for better readability (default: False). | False |
| **kwargs | Any | Additional arguments to pass to yaml.dump(). | {} |
format_yaml_string ¶
```python
format_yaml_string(data: dict | list, indent: int = 4, width: int = 1000, allow_unicode: bool = True, sort_keys: bool = False, compact_numeric_lists: bool = False, **kwargs: Any) -> str
```
Format data as a YAML string with consistent formatting.
This function provides the same formatting as save_yaml() but returns a string instead of writing to a file. Useful for logging or displaying YAML data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | dict \| list | Data to format (dict or list). | required |
| indent | int | Number of spaces for indentation (default: 4). | 4 |
| width | int | Maximum line width (default: 1000). | 1000 |
| allow_unicode | bool | If True, allow Unicode characters (default: True). | True |
| sort_keys | bool | If True, sort dictionary keys (default: False). | False |
| compact_numeric_lists | bool | If True, format numeric lists inline for better readability (default: False). | False |
| **kwargs | Any | Additional arguments to pass to yaml.dump(). | {} |
Returns:
| Type | Description |
|---|---|
| str | Formatted YAML string. |
load_config_file ¶
Load a configuration file, automatically detecting JSON or YAML format.
This function intelligently tries to load the file based on its extension, with fallback support if the primary format fails.
Supported extensions:
- .json: Tries JSON first, falls back to YAML
- .yaml, .yml: Tries YAML first, falls back to JSON
- Others: Tries YAML, then JSON
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | str \| Path | Path to the configuration file. | required |
Returns:
| Type | Description |
|---|---|
| dict | Loaded configuration as a dictionary. |
Raises:
| Type | Description |
|---|---|
| FileNotFoundError | If the file does not exist. |
| ValueError | If neither JSON nor YAML parsing succeeds. |
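The extension-based ordering described above can be sketched as a small helper (the function name parser_order is hypothetical):

```python
from pathlib import Path


def parser_order(path) -> list[str]:
    """Return the documented parse order based on the file extension."""
    suffix = Path(path).suffix.lower()
    if suffix == '.json':
        return ['json', 'yaml']  # JSON first, fall back to YAML
    if suffix in ('.yaml', '.yml'):
        return ['yaml', 'json']  # YAML first, fall back to JSON
    return ['yaml', 'json']      # other extensions: try YAML, then JSON
```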
document_linopy_model ¶
Convert all model variables and constraints to a structured string representation. This can take multiple seconds for large models. The output can be saved to a YAML file with readable formatting applied.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Path | Path to save the document. Defaults to None. | None |
save_dataset_to_netcdf ¶
```python
save_dataset_to_netcdf(ds: Dataset, path: str | Path, compression: int = 0, stack_vars: bool = True) -> None
```
Save a dataset to a netCDF file. All attrs are stored as JSON strings in 'attrs' attributes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| ds | Dataset | Dataset to save. | required |
| path | str \| Path | Path to save the dataset to. | required |
| compression | int | Compression level for the dataset (0-9). 0 means no compression. 5 is a good default. | 0 |
| stack_vars | bool | If True (default), stack variables with equal dims for faster I/O. Variables are automatically unstacked when loading with load_dataset_from_netcdf. | True |
Raises:
| Type | Description |
|---|---|
| ValueError | If the path has an invalid file extension. |
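netCDF attributes only hold flat, primitive values, which is why nested attrs are serialized to JSON strings; the encoding side can be sketched like this (function names are illustrative, not flixopt's API):

```python
import json


def encode_attrs(attrs: dict) -> str:
    """Serialize a possibly nested attrs dict into one JSON string."""
    return json.dumps(attrs)


def decode_attrs(encoded: str) -> dict:
    """Restore the attrs dict when loading the file back."""
    return json.loads(encoded)


attrs = {'version': '2.0', 'clustering': {'n_clusters': 8}}
roundtrip = decode_attrs(encode_attrs(attrs))
```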
load_dataset_from_netcdf ¶
Load a dataset from a netCDF file. All attrs are restored from the 'attrs' attributes written during saving.
Automatically unstacks variables that were stacked during saving with save_dataset_to_netcdf(stack_vars=True).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | str \| Path | Path to load the dataset from. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| Dataset | Dataset | Loaded dataset with restored attrs and unstacked variables. |
convert_old_dataset ¶
```python
convert_old_dataset(ds: Dataset, key_renames: dict[str, str] | None = None, value_renames: dict[str, dict] | None = None, reduce_constants: bool = True) -> xr.Dataset
```
Convert an old FlowSystem dataset to the current format.
This function performs two conversions:
1. Renames parameters in the reference structure to current naming conventions
2. Reduces constant arrays to minimal dimensions (e.g., broadcasted scalars back to scalars)
This is useful for loading FlowSystem files saved with older versions of flixopt.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| ds | Dataset | The dataset to convert | required |
| key_renames | dict[str, str] \| None | Custom key renames to apply. If None, uses PARAMETER_RENAMES. | None |
| value_renames | dict[str, dict] \| None | Custom value renames to apply. If None, uses VALUE_RENAMES. | None |
| reduce_constants | bool | If True (default), reduce constant arrays to minimal dimensions. Old files may have scalars broadcast to full (time, period, scenario) shape. | True |
Returns:
| Type | Description |
|---|---|
| Dataset | The converted dataset |
Examples:
Convert an old netCDF file to new format:
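The key-renaming step (conversion 1) amounts to a dictionary-key rewrite; a sketch with a made-up rename entry (the real PARAMETER_RENAMES mapping lives in flixopt and differs):

```python
PARAMETER_RENAMES = {'nominal_value': 'size'}  # hypothetical example entry


def rename_keys(data: dict, renames: dict) -> dict:
    """Replace old parameter names with their current equivalents."""
    return {renames.get(key, key): value for key, value in data.items()}


converted = rename_keys({'nominal_value': 50, 'label': 'Boiler'}, PARAMETER_RENAMES)
```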
convert_old_netcdf ¶
```python
convert_old_netcdf(input_path: str | Path, output_path: str | Path | None = None, compression: int = 0) -> xr.Dataset
```
Load an old FlowSystem netCDF file and convert to new parameter names.
This is a convenience function that combines loading, conversion, and optionally saving the converted dataset.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input_path | str \| Path | Path to the old netCDF file | required |
| output_path | str \| Path \| None | If provided, save the converted dataset to this path. If None, only returns the converted dataset without saving. | None |
| compression | int | Compression level (0-9) for saving. Only used if output_path is provided. | 0 |
Returns:
| Type | Description |
|---|---|
| Dataset | The converted dataset |
Examples:
Convert and save to new file:
```python
from flixopt import io

# Convert old file to new format
ds = io.convert_old_netcdf('old_system.nc4', 'new_system.nc')
```
Convert and load as FlowSystem:
numeric_to_str_for_repr ¶
Format value for display in repr methods.
For single values or uniform arrays, returns the formatted value. For arrays with variation, returns a range showing min-max.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| value | Numeric_TPS | Numeric value or container (DataArray, array, Series, DataFrame) | required |
| precision | int | Number of decimal places (default: 1) | 1 |
| atol | float | Absolute tolerance for considering values equal (default: 1e-10) | 1e-10 |
Returns:
| Type | Description |
|---|---|
| str | Formatted string representation: a single formatted value for scalars and uniform arrays, or a min-max range for arrays with variation. |
Raises:
| Type | Description |
|---|---|
| TypeError | If value cannot be converted to numeric format |
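The scalar-vs-range behavior can be sketched like this (a simplified, hypothetical re-implementation of the described formatting, without DataArray/Series handling):

```python
import math


def numeric_to_str(value, precision: int = 1, atol: float = 1e-10) -> str:
    """Format a scalar or sequence: one value if uniform, else a min-max range."""
    try:
        values = [float(v) for v in value]
    except TypeError:  # not iterable: treat as a single scalar
        values = [float(value)]
    lo, hi = min(values), max(values)
    if math.isclose(lo, hi, abs_tol=atol):
        return f'{lo:.{precision}f}'
    return f'{lo:.{precision}f}-{hi:.{precision}f}'
```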
build_repr_from_init ¶
```python
build_repr_from_init(obj: object, excluded_params: set[str] | None = None, label_as_positional: bool = True, skip_default_size: bool = False) -> str
```
Build a repr string from init signature, showing non-default parameter values.
This utility function extracts common repr logic used across flixopt classes. It introspects the init method to build a constructor-style repr showing only parameters that differ from their defaults.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | object | The object to create the repr for | required |
| excluded_params | set[str] \| None | Set of parameter names to exclude (e.g., {'self', 'inputs', 'outputs'}). Default excludes 'self', 'label', and 'kwargs'. | None |
| label_as_positional | bool | If True and a 'label' param exists, show it as the first positional arg | True |
| skip_default_size | bool | Deprecated. Previously skipped size=CONFIG.Modeling.big; now size=None is the default. | False |
Returns:
| Type | Description |
|---|---|
| str | Formatted repr string like: ClassName("label", param=value) |
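The introspection idea can be sketched with inspect.signature (a simplified stand-in for the documented behavior; the Flow class below is a toy example, not flixopt's):

```python
import inspect


def build_repr(obj, excluded=frozenset({'self', 'label', 'kwargs'})) -> str:
    """Show label positionally, then only parameters that differ from their defaults."""
    sig = inspect.signature(type(obj).__init__)
    parts = [repr(getattr(obj, 'label'))]  # label as first positional arg
    for name, param in sig.parameters.items():
        if name in excluded or param.default is inspect.Parameter.empty:
            continue
        value = getattr(obj, name, param.default)
        if value != param.default:
            parts.append(f'{name}={value!r}')
    return f'{type(obj).__name__}({", ".join(parts)})'


class Flow:
    def __init__(self, label, size=None, fixed=False):
        self.label, self.size, self.fixed = label, size, fixed
```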
format_flow_details ¶
Format inputs and outputs as indented bullet list.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Any | Object with 'inputs' and/or 'outputs' attributes | required |
| has_inputs | bool | Whether to check for inputs | True |
| has_outputs | bool | Whether to check for outputs | True |
Returns:
| Type | Description |
|---|---|
| str | Formatted string with flow details (including leading newline), or empty string if no flows |
format_title_with_underline ¶
format_sections_with_headers ¶
build_metadata_info ¶
suppress_output ¶
Suppress all console output including C-level output from solvers.
WARNING: Not thread-safe. Modifies global file descriptors. Use only with sequential execution or multiprocessing.
restore_flow_system_from_dataset ¶
Create FlowSystem from dataset.
This is the main entry point for dataset restoration. Called by FlowSystem.from_dataset().
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| ds | Dataset | Dataset containing the FlowSystem data | required |
Returns:
| Type | Description |
|---|---|
| FlowSystem | FlowSystem instance with all components, buses, effects, and solution restored |
See Also
FlowSystemDatasetIO: Class containing the implementation
flow_system_to_dataset ¶
```python
flow_system_to_dataset(flow_system: FlowSystem, base_dataset: Dataset, include_solution: bool = True, include_original_data: bool = True) -> xr.Dataset
```
Convert FlowSystem-specific data to dataset.
This function adds FlowSystem-specific data (solution, clustering, metadata) to a base dataset created by the parent class's to_dataset() method.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| flow_system | FlowSystem | The FlowSystem to serialize | required |
| base_dataset | Dataset | Dataset from the parent class with basic structure | required |
| include_solution | bool | Whether to include the optimization solution | True |
| include_original_data | bool | Whether to include clustering.original_data | True |
Returns:
| Type | Description |
|---|---|
| Dataset | Complete dataset with all FlowSystem data |
See Also
FlowSystemDatasetIO: Class containing the implementation