xhydro package

Hydrological analysis library built with xarray.

Submodules

xhydro.cc module

Module to compute climate change statistics using xscen functions.

xhydro.cc.climatological_op(ds: Dataset, *, op: str | dict = 'mean', window: int | None = None, min_periods: int | float | None = None, stride: int = 1, periods: list[str] | list[list[str]] | None = None, rename_variables: bool = True, to_level: str = 'climatology', horizons_as_dim: bool = False) Dataset[source]

Perform an operation “op” over time, for given time periods, respecting the temporal resolution of ds.

Parameters

ds : xr.Dataset

Dataset to use for the computation.

op : str or dict

Operation to perform over time. The operation can be any method name of xarray.core.rolling.DatasetRolling, “linregress”, or a dictionary. If “op” is a dictionary, the key is the operation name and the value is a dict of kwargs accepted by the equivalent NumPy function. See the Notes for more information. While other operations are technically possible, the following are recommended and tested: [“max”, “mean”, “median”, “min”, “std”, “sum”, “var”, “linregress”]. Operations beyond methods of xarray.core.rolling.DatasetRolling include:

  • “linregress” : Computes the linear regression over time, using scipy.stats.linregress and employing years as regressors. The output will have a new dimension “linreg_param” with coordinates: [“slope”, “intercept”, “rvalue”, “pvalue”, “stderr”, “intercept_stderr”].

Only one operation per call is supported, so len(op)==1 if a dict.

window : int, optional

Number of years to use for the rolling operation. If left at None and periods is given, window will be the size of the first period. Hence, if periods are of different lengths, the shortest period should be passed first. If left at None and periods is not given, the window will be the size of the input dataset.

min_periods : int or float, optional

For the rolling operation, minimum number of years required for a value to be computed. If left at None and the xrfreq is either QS or AS and doesn’t start in January, min_periods will be one less than window. Otherwise, if left at None, it will be deemed the same as “window”. If passed as a float value between 0 and 1, this will be interpreted as the floor of the fraction of the window size.

stride : int

Stride (in years) at which to provide an output from the rolling window operation.

periods : list of str or list of lists of str, optional

Either [start, end] or list of [start, end] of continuous periods to be considered. This is needed when the time axis of ds contains some jumps in time. If None, the dataset will be considered continuous.

rename_variables : bool

If True, “_clim_{op}” will be added to variable names.

to_level : str, optional

The processing level to assign to the output. If None, the processing level of the inputs is preserved.

horizons_as_dim : bool

If True, the output will have “horizon” and the frequency as “month”, “season” or “year” as dimensions and coordinates. The “time” coordinate will be unstacked to horizon and frequency dimensions. Horizons originate from periods and/or windows and their stride in the rolling operation.

Returns

xr.Dataset

Dataset with the results from the climatological operation.

Notes

xarray.core.rolling.DatasetRolling functions do not support kwargs other than “keep_attrs”. In order to pass additional arguments to the operation, we instead use the “reduce” method and pass the operation as a numpy function. If possible, a function that handles NaN values will be used (e.g. op=”mean” will use np.nanmean), as the “min_periods” argument already decides how many NaN values are acceptable.
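As a concrete illustration of the windowed behaviour described above (window, stride, min_periods, and the NaN-aware reduction), here is a minimal NumPy sketch over a yearly series; the helper name is illustrative, not part of the xhydro API:

```python
import math

import numpy as np

def rolling_clim_mean(values, window, stride=1, min_periods=None):
    """NaN-aware rolling mean over a yearly series: one output every `stride`
    years, NaN when fewer than `min_periods` valid years fall in the window.
    A float min_periods in (0, 1) is the floor of that fraction of the window."""
    if min_periods is None:
        min_periods = window
    elif isinstance(min_periods, float) and 0 < min_periods < 1:
        min_periods = math.floor(min_periods * window)
    out = []
    for start in range(0, len(values) - window + 1, stride):
        chunk = np.asarray(values[start:start + window], dtype=float)
        # np.nanmean plays the role of the NaN-handling reduction; min_periods
        # decides how many missing years are acceptable.
        valid = int(np.sum(~np.isnan(chunk)))
        out.append(np.nanmean(chunk) if valid >= min_periods else np.nan)
    return out

yearly = [1.0, 2.0, np.nan, 4.0, 5.0, 6.0]
print(rolling_clim_mean(yearly, window=3, min_periods=2))  # [1.5, 3.0, 4.5, 5.0]
```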

xhydro.cc.compute_deltas(ds: Dataset, reference_horizon: str | Dataset, *, kind: str | dict = '+', rename_variables: bool = True, to_level: str | None = 'deltas') Dataset[source]

Compute deltas in comparison to a reference time period, respecting the temporal resolution of ds.

Parameters

ds : xr.Dataset

Dataset to use for the computation.

reference_horizon : str or xr.Dataset

Either a YYYY-YYYY string corresponding to the “horizon” coordinate of the reference period, or a xr.Dataset containing the climatological mean.

kind : str or dict

One of [“+”, “/”, “%”]: whether to provide absolute, relative, or percentage deltas. Can also be a dictionary mapping variable names to delta kinds.

rename_variables : bool

If True, “_delta_YYYY-YYYY” will be added to variable names.

to_level : str, optional

The processing level to assign to the output. If None, the processing level of the inputs is preserved.

Returns

xr.Dataset

Returns a Dataset with the requested deltas.
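The three delta kinds reduce to simple arithmetic; a minimal sketch (the function name is illustrative, not the xhydro implementation):

```python
def delta(fut, ref, kind="+"):
    """Delta between a future value and a reference climatology:
    "+" -> absolute difference, "/" -> ratio, "%" -> percentage change."""
    if kind == "+":
        return fut - ref
    if kind == "/":
        return fut / ref
    if kind == "%":
        return (fut - ref) / ref * 100
    raise ValueError(f"Unknown delta kind: {kind!r}")

# Future mean of 250 against a reference climatology of 200:
print(delta(250.0, 200.0, "+"))  # 50.0
print(delta(250.0, 200.0, "/"))  # 1.25
print(delta(250.0, 200.0, "%"))  # 25.0
```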

xhydro.cc.ensemble_stats(datasets: dict | list[str | PathLike] | list[Dataset] | list[DataArray] | Dataset, statistics: dict, *, create_kwargs: dict | None = None, weights: DataArray | None = None, common_attrs_only: bool = True, to_level: str = 'ensemble') Dataset[source]

Create an ensemble and compute statistics on it.

Parameters

datasets : dict or list of [str, os.PathLike, Dataset or DataArray], or Dataset

List of file paths or xarray Dataset/DataArray objects to include in the ensemble. A dictionary can be passed instead of a list, in which case the keys are used as coordinates along the new realization axis. Tip: With a project catalog, you can do: datasets = pcat.search(**search_dict).to_dataset_dict(). If a single Dataset is passed, it is assumed to already be an ensemble and will be used as is. The “realization” dimension is required.

statistics : dict

xclim.ensembles statistics to be called. Dictionary in the format {function: arguments}. If a function requires “weights”, you can leave it out of this dictionary and it will be applied automatically if the “weights” argument is provided. See the Notes section for more details on robustness statistics, which are more complex in their usage.

create_kwargs : dict, optional

Dictionary of arguments for xclim.ensembles.create_ensemble.

weights : xr.DataArray, optional

Weights to apply along the “realization” dimension. This array cannot contain missing values.

common_attrs_only : bool

If True, keep only the global attributes that are the same for all datasets, and generate a new id. If False, keep the global attributes of the first dataset (same behaviour as xclim.ensembles.create_ensemble).

to_level : str

The processing level to assign to the output.

Returns

xr.Dataset

Dataset with ensemble statistics

Notes

  • The positive fraction in “change_significance” and “robustness_fractions” is calculated by xclim using “v > 0”, which is not appropriate for relative deltas. This function will attempt to detect relative deltas by using the “delta_kind” attribute (“rel.”, “relative”, “*”, or “/”) and will apply “v - 1” before calling the function.

  • The “robustness_categories” statistic requires the outputs of “robustness_fractions”. Thus, there are two ways to build the “statistics” dictionary:

    1. Having “robustness_fractions” and “robustness_categories” as separate entries in the dictionary. In this case, all outputs will be returned.

    2. Having “robustness_fractions” as a nested dictionary under “robustness_categories”. In this case, only the robustness categories will be returned.

  • A “ref” DataArray can be passed to “change_significance” and “robustness_fractions”, which will be used by xclim to compute deltas and perform some significance tests. However, this supposes that both “datasets” and “ref” are still timeseries (e.g. annual means), not climatologies where the “time” dimension represents the period over which the climatology was computed. Thus, using “ref” is only accepted if “robustness_fractions” (or “robustness_categories”) is the only statistic being computed.

  • If you want to compute a robustness statistic on a climatology, you should first compute the climatologies and deltas yourself, then leave “ref” as None and pass the deltas as the “datasets” argument. This will be compatible with other statistics.

See Also

xclim.ensembles._base.create_ensemble, xclim.ensembles._base.ensemble_percentiles, xclim.ensembles._base.ensemble_mean_std_max_min, xclim.ensembles._robustness.robustness_fractions, xclim.ensembles._robustness.robustness_categories, xclim.ensembles._robustness.robustness_coefficient
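The two ways of requesting robustness categories described in the Notes can be written as plain dictionaries; the argument names and values below are illustrative only (see xclim.ensembles for the actual options):

```python
# Option 1: separate entries -- outputs of both statistics are returned.
statistics_separate = {
    "robustness_fractions": {"test": None},
    "robustness_categories": {
        "categories": ["robust change", "no change", "no agreement"],
        "thresholds": [(0.66, 0.8), (0.66, 0.5), (0.34, 0.5)],
    },
}

# Option 2: "robustness_fractions" nested under "robustness_categories" --
# only the robustness categories are returned.
statistics_nested = {
    "robustness_categories": {
        "robustness_fractions": {"test": None},
        "categories": ["robust change", "no change", "no agreement"],
        "thresholds": [(0.66, 0.8), (0.66, 0.5), (0.34, 0.5)],
    },
}
```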

xhydro.cc.produce_horizon(ds: Dataset, *, indicators: str | PathLike | Sequence[Indicator] | Sequence[tuple[str, Indicator]] | ModuleType | None = None, periods: list[str] | list[list[str]] | None = None, warminglevels: dict | None = None, op: str | dict = 'mean', to_level: str | None = 'horizons') Dataset[source]

Compute indicators, then the climatological mean, and finally unstack dates in order to have a single dataset with all indicators of different frequencies.

Once this is done, the function drops “time” in favor of “horizon”. The function computes the indicators, takes an interannual mean, stacks seasons and months into separate dimensions, and adds a “horizon” dimension for the period or warming level, if given.

Parameters

ds : xr.Dataset

Input dataset with a time dimension. If “indicators” is None, the dataset should contain the precomputed indicators.

indicators : str | os.PathLike | Sequence[Indicator] | Sequence[Tuple[str, Indicator]] | ModuleType, optional

Indicators to compute. It will be passed to the indicators argument of xs.compute_indicators.

periods : list of str or list of lists of str, optional

Either [start, end] or list of [start_year, end_year] for the period(s) to be evaluated. If both periods and warminglevels are None, the full time series will be used.

warminglevels : dict, optional

Dictionary of arguments to pass to xscen.subset_warming_level. If “wl” is a list, the function will be called for each value and produce multiple horizons. If both periods and warminglevels are None, the full time series will be used.

op : str or dict

Operation to perform over the time dimension. See xscen.climatological_op for details. Default is “mean”.

to_level : str, optional

The processing level to assign to the output. If there is only one horizon, you can use “{wl}”, “{period0}” and “{period1}” in the string to dynamically include that information in the processing level.

Returns

xr.Dataset

Horizon dataset.

xhydro.cc.sampled_indicators(ds: Dataset, deltas: Dataset, *, delta_kind: str | dict | None = None, ds_weights: DataArray | None = None, delta_weights: DataArray | None = None, n: int = 50000, seed: int | None = None, return_dist: bool = False) Dataset | tuple[Dataset, Dataset, Dataset, Dataset][source]

Compute future indicators using a perturbation approach and random sampling.

Parameters

ds : xr.Dataset

Dataset containing the historical indicators. The indicators are expected to be represented by a distribution of pre-computed percentiles. The percentiles should be stored in either a dimension called “percentile” [0, 100] or “quantile” [0, 1].

deltas : xr.Dataset

Dataset containing the future deltas to apply to the historical indicators.

delta_kind : str or dict, optional

Type of delta provided. Recognized values are: [“absolute”, “abs.”, “+”], [“percentage”, “pct.”, “%”]. If a dict is provided, it should map the variable names to their respective delta type. If None, the variables should have a “delta_kind” attribute.

ds_weights : xr.DataArray, optional

Weights to use when sampling the historical indicators, for dimensions other than “percentile”/“quantile”. Dimensions not present in this DataArray, or all dimensions if None, will be sampled uniformly unless they are shared with “deltas”.

delta_weights : xr.DataArray, optional

Weights to use when sampling the deltas, such as along the “realization” dimension. Dimensions not present in this DataArray, or all dimensions if None, will be sampled uniformly unless they are shared with “ds”.

n : int

Number of samples to generate.

seed : int, optional

Seed to use for the random number generator.

return_dist : bool

Whether to return the full distributions (ds, deltas, fut) or only the percentiles.

Returns

fut_pct : xr.Dataset

Dataset containing the future percentiles.

ds_dist : xr.Dataset

The historical distribution, stacked along the “sample” dimension.

deltas_dist : xr.Dataset

The delta distribution, stacked along the “sample” dimension.

fut_dist : xr.Dataset

The future distribution, stacked along the “sample” dimension.

Notes

Weights along the “time” or “horizon” dimensions are supported, but behave differently than other dimensions. They will not be stacked alongside other dimensions in the new “sample” dimension; rather, a separate sampling is done for each time/horizon.

The future percentiles are computed as follows:

  1. Sample “n” values from the historical distribution, weighting the percentiles by their associated coverage.

  2. Sample “n” values from the delta distribution, using the provided weights.

  3. Create the future distribution by applying the sampled deltas to the sampled historical distribution, element-wise.

  4. Compute the percentiles of the future distribution.
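A minimal NumPy sketch of those four steps, for a single variable with absolute deltas and a uniform historical weighting (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# 1. Sample "n" values from the historical distribution: here, the indicator
#    is summarized by four percentile values with equal coverage.
hist_pct = np.array([10.0, 12.0, 15.0, 20.0])
hist_sample = rng.choice(hist_pct, size=n)

# 2. Sample "n" values from the delta distribution, using realization weights.
deltas = np.array([1.0, 2.0, 3.0])
delta_weights = np.array([0.5, 0.3, 0.2])
delta_sample = rng.choice(deltas, size=n, p=delta_weights)

# 3. Apply the sampled deltas element-wise (absolute deltas -> addition).
fut_dist = hist_sample + delta_sample

# 4. Compute the percentiles of the future distribution.
fut_pct = np.percentile(fut_dist, [20, 40, 60, 80])
print(fut_pct)
```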

xhydro.gis module

Module to compute geospatial operations useful in hydrology.

xhydro.gis.land_use_classification(gdf: GeoDataFrame, unique_id: str | None = None, output_format: str = 'geopandas', collection='io-lulc-annual-v02', year: str | int = 'latest') GeoDataFrame | Dataset[source]

Calculate land use classification.

Calculate land use classification for each polygon from a gpd.GeoDataFrame. By default, the classes are generated from the Planetary Computer’s STAC catalog using the 10m Annual Land Use Land Cover dataset.

Parameters

gdf : gpd.GeoDataFrame

GeoDataFrame containing watershed polygons. Must have a defined .crs attribute.

unique_id : str, optional

The column name in the GeoDataFrame that serves as a unique identifier.

output_format : str

One of either xarray (or xr.Dataset) or geopandas (or gpd.GeoDataFrame).

collection : str

Collection name from the Planetary Computer STAC Catalog.

year : str | int

Land use dataset year between 2017 and 2022.

Returns

gpd.GeoDataFrame or xr.Dataset

Output dataset containing the watershed properties.

Warnings

This function relies on the Microsoft Planetary Computer’s STAC Catalog to retrieve the land use data.

xhydro.gis.land_use_plot(gdf: GeoDataFrame, idx: int = 0, unique_id: str | None = None, collection: str = 'io-lulc-annual-v02', year: str | int = 'latest') None[source]

Plot a land use map for a specific year and watershed.

Parameters

gdf : gpd.GeoDataFrame

GeoDataFrame containing watershed polygons. Must have a defined .crs attribute.

idx : int

Index to select in gpd.GeoDataFrame.

unique_id : str, optional

The column name in the GeoDataFrame that serves as a unique identifier.

collection : str

Collection name from the Planetary Computer STAC Catalog.

year : str | int

Land use dataset year between 2017 and 2022.

Returns

None

Nothing to return.

Warnings

This function relies on the Microsoft Planetary Computer’s STAC Catalog to retrieve the land use data.

xhydro.gis.surface_properties(gdf: GeoDataFrame, unique_id: str | None = None, projected_crs: int = 6622, output_format: str = 'geopandas', operation: str = 'mean', dataset_date: str = '2021-04-22', collection: str = 'cop-dem-glo-90') GeoDataFrame | Dataset[source]

Surface properties for watersheds.

Surface properties are calculated using Copernicus’s GLO-90 Digital Elevation Model. By default, the dataset has a geographic coordinate system (EPSG: 4326) and this function expects a projected crs for more accurate results.

The calculated properties are:

  • elevation (meters)

  • slope (degrees)

  • aspect ratio (degrees)

Parameters

gdf : gpd.GeoDataFrame

GeoDataFrame containing watershed polygons. Must have a defined .crs attribute.

unique_id : str, optional

The column name in the GeoDataFrame that serves as a unique identifier.

projected_crs : int

The projected coordinate reference system (crs) to utilize for calculations, such as determining watershed area.

output_format : str

One of either xarray (or xr.Dataset) or geopandas (or gpd.GeoDataFrame).

operation : str

Aggregation statistics such as mean or sum.

dataset_date : str

Date (%Y-%m-%d) for which to select the imagery from the dataset. Date must be available.

collection : str

Collection name from the Planetary Computer STAC Catalog. Default is cop-dem-glo-90.

Returns

gpd.GeoDataFrame or xr.Dataset

Output dataset containing the surface properties.

Warnings

This function relies on the Microsoft Planetary Computer’s STAC Catalog to retrieve the Digital Elevation Model (DEM) data.
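To illustrate what these properties measure, here is a toy NumPy computation of mean elevation and slope on a synthetic 90 m grid (the actual function derives them from the DEM retrieved via the STAC catalog, and conventions, e.g. for aspect, vary between implementations):

```python
import numpy as np

# Toy 3x3 elevation grid (meters) on a 90 m cell size, mimicking GLO-90.
z = np.array([[100.0, 100.0, 100.0],
              [110.0, 110.0, 110.0],
              [120.0, 120.0, 120.0]])

dz_dy, dz_dx = np.gradient(z, 90.0)  # elevation change per meter, both axes
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

print(z.mean())                    # mean elevation: 110.0
print(round(slope_deg.mean(), 2))  # uniform slope of ~6.34 degrees
```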

xhydro.gis.watershed_delineation(*, coordinates: list[tuple] | tuple | None = None, map: Map | None = None) GeoDataFrame[source]

Calculate watershed delineation from pour point.

Watershed delineation can be computed at any location in North America using HydroBASINS (hybas_na_lev01-12_v1c). The process involves assessing all upstream sub-basins from a specified pour point and consolidating them into a unified watershed.

Parameters

coordinates : list of tuple or tuple, optional

Geographic coordinates (longitude, latitude) for the location where watershed delineation will be conducted.

map : leafmap.Map, optional

If the function receives both a map and coordinates as inputs, it will generate and display watershed boundaries on the map. Additionally, any markers present on the map will be transformed into corresponding watershed boundaries for each marker.

Returns

gpd.GeoDataFrame

GeoDataFrame containing the watershed boundaries.

Warnings

This function relies on an Amazon S3-hosted dataset to delineate watersheds.

xhydro.gis.watershed_properties(gdf: GeoDataFrame, unique_id: str | None = None, projected_crs: int = 6622, output_format: str = 'geopandas') GeoDataFrame | Dataset[source]

Watershed properties extracted from a gpd.GeoDataFrame.

The calculated properties are:

  • area

  • perimeter

  • gravelius

  • centroid coordinates

Parameters

gdf : gpd.GeoDataFrame

GeoDataFrame containing watershed polygons. Must have a defined .crs attribute.

unique_id : str, optional

The column name in the GeoDataFrame that serves as a unique identifier.

projected_crs : int

The projected coordinate reference system (crs) to utilize for calculations, such as determining watershed area.

output_format : str

One of either xarray (or xr.Dataset) or geopandas (or gpd.GeoDataFrame).

Returns

gpd.GeoDataFrame or xr.Dataset

Output dataset containing the watershed properties.
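Of these properties, the Gravelius coefficient is the least standard: it compares the watershed perimeter to the perimeter of a circle of equal area. A quick sketch following the usual definition (not the xhydro code):

```python
import math

def gravelius(perimeter, area):
    """Gravelius compactness coefficient: perimeter divided by the perimeter
    of a circle with the same area. A perfect circle gives 1.0; elongated
    watersheds give values well above 1."""
    return perimeter / (2.0 * math.sqrt(math.pi * area))

# A circle of radius 1 (perimeter 2*pi, area pi) is maximally compact:
print(round(gravelius(2 * math.pi, math.pi), 6))  # 1.0
print(round(gravelius(40.0, 25.0), 2))
```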

xhydro.utils module

Utility functions for xhydro.

xhydro.utils.health_checks(ds: Dataset | DataArray, *, structure: dict | None = None, calendar: str | None = None, start_date: str | None = None, end_date: str | None = None, variables_and_units: dict | None = None, cfchecks: dict | None = None, freq: str | None = None, missing: dict | str | list | None = None, flags: dict | None = None, flags_kwargs: dict | None = None, return_flags: bool = False, raise_on: list | None = None) None | Dataset[source]

Perform a series of health checks on the dataset. Be aware that missing data checks and flag checks can be slow.

Parameters

ds: xr.Dataset or xr.DataArray

Dataset to check.

structure: dict, optional

Dictionary with keys “dims” and “coords” containing the expected dimensions and coordinates. This check will fail if extra dimensions or coordinates are found.

calendar: str, optional

Expected calendar. Synonyms should be detected correctly (e.g. “standard” and “gregorian”).

start_date: str, optional

Check that the dataset starts no later than this date.

end_date: str, optional

Check that the dataset ends no earlier than this date.

variables_and_units: dict, optional

Dictionary containing the expected variables and units.

cfchecks: dict, optional

Dictionary where the key is the variable to check and the values are the cfchecks. The cfchecks themselves must be a dictionary with the keys being the cfcheck names and the values being the arguments to pass to the cfcheck. See xclim.core.cfchecks for more details.

freq: str, optional

Expected frequency, written as the result of xr.infer_freq(ds.time).

missing: dict or str or list of str, optional

String, list of strings, or dictionary where the key is the method to check for missing data and the values are the arguments to pass to the method. The methods are: “missing_any”, “at_least_n_valid”, “missing_pct”, “missing_wmo”. See xclim.core.missing() for more details.

flags: dict, optional

Dictionary where the key is the variable to check and the values are the flags. The flags themselves must be a dictionary with the keys being the data_flags names and the values being the arguments to pass to the data_flags. If None is passed instead of a dictionary, then xclim’s default flags for the given variable are run. See xclim.core.utils.VARIABLES. See also xclim.core.dataflags.data_flags() for the list of possible flags.

flags_kwargs: dict, optional

Additional keyword arguments to pass to the data_flags (“dims” and “freq”).

return_flags: bool

Whether to return the Dataset created by data_flags.

raise_on: list of str, optional

Whether to raise an error if a check fails; otherwise only a warning is issued. The possible values are the names of the checks. Use [“all”] to raise on all checks.

Returns

xr.Dataset or None

Dataset containing the flags, if return_flags is True and “flags” is not in raise_on. Otherwise, None.
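Put together, a call might pass arguments shaped like the following; the values are illustrative for a daily streamflow dataset, and the keys follow the parameter descriptions above:

```python
# Illustrative health_checks arguments for a daily streamflow dataset.
checks = {
    "calendar": "standard",
    "start_date": "1971-01-01",
    "end_date": "2100-12-31",
    "variables_and_units": {"streamflow": "m3 s-1"},
    "freq": "D",
    "missing": {"missing_any": {"freq": "YS"}},
    "raise_on": ["freq", "variables_and_units"],
}

# e.g. xhydro.utils.health_checks(ds, **checks)
print(sorted(checks))
```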

xhydro.utils.merge_attributes(attribute: str, *inputs_list: DataArray | Dataset, new_line: str = '\n', missing_str: str | None = None, **inputs_kws: DataArray | Dataset) str[source]

Merge attributes from several DataArrays or Datasets.

If more than one input is given, each input’s name (if available) is prepended as: “<input name> : <input attribute>”.

Parameters

attribute : str

The attribute to merge.

*inputs_list : xr.DataArray or xr.Dataset

The datasets or variables that were used to produce the new object. Inputs given that way will be prefixed by their name attribute if available.

new_line : str

The character to put between each instance of the attributes. Usually, in CF conventions, the “history” attribute uses “\n” while “cell_methods” uses “ ”.

missing_str : str, optional

A string that is printed if an input doesn’t have the attribute. Defaults to None, in which case the input is simply skipped.

**inputs_kws : xr.DataArray or xr.Dataset

Mapping from names to the datasets or variables that were used to produce the new object. Inputs given that way will be prefixed by the passed name.

Returns

str

The new attribute made from the combination of the ones from all the inputs.
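The merging behaviour can be sketched with plain dictionaries standing in for DataArrays; this is a simplified reimplementation for illustration, not the xhydro code:

```python
def merge_attrs(attribute, *inputs, new_line="\n", missing_str=None):
    """Join one attribute across inputs, prefixing each input's name when
    available; inputs lacking the attribute are skipped unless missing_str
    is given."""
    parts = []
    for obj in inputs:
        value = obj.get("attrs", {}).get(attribute)
        if value is None:
            if missing_str is None:
                continue  # the input is simply skipped
            value = missing_str
        name = obj.get("name")
        parts.append(f"{name} : {value}" if name else value)
    return new_line.join(parts)

a = {"name": "tas", "attrs": {"history": "converted to Celsius"}}
b = {"name": "pr", "attrs": {}}
print(merge_attrs("history", a, b, missing_str="(no history)"))
```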

xhydro.utils.update_history(hist_str: str, *inputs_list: DataArray | Dataset, new_name: str | None = None, **inputs_kws: DataArray | Dataset) str[source]

Return a history string with the timestamped message and the combination of the history of all inputs.

The new history entry is formatted as “[<timestamp>] <new_name>: <hist_str> - xhydro version: <xhydro.__version__>.”

Parameters

hist_str : str

The string describing what has been done on the data.

*inputs_list : xr.DataArray or xr.Dataset

The datasets or variables that were used to produce the new object. Inputs given that way will be prefixed by their “name” attribute if available.

new_name : str, optional

The name of the newly created variable or dataset, used to prefix hist_str.

**inputs_kws : xr.DataArray or xr.Dataset

Mapping from names to the datasets or variables that were used to produce the new object. Inputs given that way will be prefixed by the passed name.

Returns

str

The combined history of all inputs, starting with hist_str.
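The entry format alone is easy to reproduce; a sketch of the string assembly (the timestamp format and helper name are illustrative):

```python
from datetime import datetime, timezone

def format_history_entry(hist_str, new_name=None, version="0.0.0"):
    """Build "[<timestamp>] <new_name>: <hist_str> - xhydro version: <version>."."""
    timestamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    prefix = f"{new_name}: " if new_name else ""
    return f"[{timestamp}] {prefix}{hist_str} - xhydro version: {version}."

entry = format_history_entry("computed climatological mean", new_name="tas_clim")
print(entry)
```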