[Git][debian-gis-team/xsar][master] 4 commits: New upstream version 2024.12.03

Antonio Valentino (@antonio.valentino) gitlab@salsa.debian.org
Wed Dec 4 06:53:35 GMT 2024



Antonio Valentino pushed to branch master at Debian GIS Project / xsar


Commits:
09561a09 by Antonio Valentino at 2024-12-04T06:42:49+00:00
New upstream version 2024.12.03
- - - - -
a8e9297f by Antonio Valentino at 2024-12-04T06:42:58+00:00
Update upstream source from tag 'upstream/2024.12.03'

Update to upstream version '2024.12.03'
with Debian dir 3185ad858ca8d1ca6c67048eb5001caf5a12b67a
- - - - -
ee6feb4b by Antonio Valentino at 2024-12-04T06:45:07+00:00
New upstream release

- - - - -
c281c915 by Antonio Valentino at 2024-12-04T06:46:55+00:00
Set distribution to unstable

- - - - -


11 changed files:

- .git_archival.txt
- .github/workflows/ci.yaml
- .github/workflows/upstream-dev.yaml
- ci/requirements/environment.yaml
- debian/changelog
- environment.yml
- src/xsar/config.yml
- src/xsar/radarsat2_dataset.py
- src/xsar/raster_readers.py
- src/xsar/rcm_dataset.py
- src/xsar/sentinel1_dataset.py


Changes:

=====================================
.git_archival.txt
=====================================
@@ -1 +1 @@
-ref-names: tag: 2024.11.18
\ No newline at end of file
+ref-names: HEAD -> develop, tag: 2024.12.03
\ No newline at end of file


=====================================
.github/workflows/ci.yaml
=====================================
@@ -2,9 +2,9 @@ name: CI
 
 on:
   push:
-    branches: [main]
+    branches: [develop]
   pull_request:
-    branches: [main]
+    branches: [develop]
   workflow_dispatch:
 
 concurrency:
@@ -62,7 +62,7 @@ jobs:
           echo "CONDA_ENV_FILE=ci/requirements/environment.yaml" >> $GITHUB_ENV
 
       - name: Setup micromamba
-        uses: mamba-org/setup-micromamba@v1
+        uses: mamba-org/setup-micromamba@v2
         with:
           environment-file: ${{ env.CONDA_ENV_FILE }}
           environment-name: xsar-tests


=====================================
.github/workflows/upstream-dev.yaml
=====================================
@@ -61,7 +61,7 @@ jobs:
           fetch-depth: 0 # fetch all branches and tags
 
       - name: set up conda environment
-        uses: mamba-org/setup-micromamba@v1
+        uses: mamba-org/setup-micromamba@v2
         with:
           environment-file: ci/requirements/environment.yaml
           environment-name: tests


=====================================
ci/requirements/environment.yaml
=====================================
@@ -3,6 +3,7 @@ channels:
   - conda-forge
 dependencies:
   - ipython
+  - python
   - pre-commit
   - pytest
   - pytest-reportlog
@@ -17,3 +18,11 @@ dependencies:
   - dask
   - distributed
   - aiohttp
+  - rasterio
+  - cartopy
+  - pyproj
+  - scipy
+  - shapely
+  - geopandas
+  - lxml
+  - rioxarray


=====================================
debian/changelog
=====================================
@@ -1,3 +1,9 @@
+xsar (2024.12.03-1) unstable; urgency=medium
+
+  * New upstream release.
+
+ -- Antonio Valentino <antonio.valentino@tiscali.it>  Wed, 04 Dec 2024 06:46:39 +0000
+
 xsar (2024.11.18-1) unstable; urgency=medium
 
   * New upstream release.


=====================================
environment.yml
=====================================
@@ -14,6 +14,6 @@ dependencies:
   - xarray
   - rioxarray >=0.10
   - pandoc
-  - python >=3.8
+  - python
   - importlib_resources
   - lxml
\ No newline at end of file


=====================================
src/xsar/config.yml
=====================================
@@ -1,16 +1,4 @@
 # default data dir for tests and examples
 data_dir: /tmp
 auxiliary_dir:
-path_auxiliary_df: 
-
-auxiliary_names:    
-    S1B:     
-        v_IPF_36:
-            AUX_PP1: 'S1B_AUX_PP1_V20160422T000000_G20220323T140710.SAFE'
-            AUX_CAL: 'S1B_AUX_CAL_V20160422T000000_G20210104T140113.SAFE'
-            #AUX_INS: 'S1B_AUX_INS_V20160422T000000_G20211027T134314.SAFE'
-    S1A:
-        v_IPF_36:
-            AUX_PP1: 'S1A_AUX_PP1_V20190228T092500_G20220323T153041.SAFE'
-            AUX_CAL: 'S1A_AUX_CAL_V20190228T092500_G20210104T141310.SAFE'
-            #AUX_INS: 'S1A_AUX_INS_V20190228T092500_G20211103T111906.SAFE'}
\ No newline at end of file
+path_auxiliary_df:


=====================================
src/xsar/radarsat2_dataset.py
=====================================
@@ -16,7 +16,8 @@ logger = logging.getLogger("xsar.radarsat2_dataset")
 logger.addHandler(logging.NullHandler())
 
 # we know tiff has no geotransform : ignore warning
-warnings.filterwarnings("ignore", category=rasterio.errors.NotGeoreferencedWarning)
+warnings.filterwarnings(
+    "ignore", category=rasterio.errors.NotGeoreferencedWarning)
 
 # allow nan without warnings
 # some dask warnings are still not filtered: https://github.com/dask/dask/issues/3245
@@ -92,7 +93,8 @@ class RadarSat2Dataset(BaseDataset):
             # assert isinstance(sar_meta.coords2ll(100, 100),tuple)
         else:
             # we want self.sar_meta to be a dask actor on a worker
-            self.sar_meta = BlockingActorProxy(RadarSat2Meta.from_dict, dataset_id.dict)
+            self.sar_meta = BlockingActorProxy(
+                RadarSat2Meta.from_dict, dataset_id.dict)
         del dataset_id
         if self.sar_meta.multidataset:
             raise IndexError(
@@ -162,7 +164,8 @@ class RadarSat2Dataset(BaseDataset):
             "gamma0": "lutGamma",
             "beta0": "lutBeta",
         }
-        geoloc_vars = ["latitude", "longitude", "altitude", "incidence", "elevation"]
+        geoloc_vars = ["latitude", "longitude",
+                       "altitude", "incidence", "elevation"]
         for vv in skip_variables:
             if vv in geoloc_vars:
                 geoloc_vars.remove(vv)
@@ -221,7 +224,8 @@ class RadarSat2Dataset(BaseDataset):
                 dask.array.empty_like(
                     self._dataset.digital_number.isel(pol=0).drop("pol"),
                     dtype=np.int8,
-                    name="empty_var_tmpl-%s" % dask.base.tokenize(self.sar_meta.name),
+                    name="empty_var_tmpl-%s" % dask.base.tokenize(
+                        self.sar_meta.name),
                 ),
                 dims=("line", "sample"),
                 coords={
@@ -270,9 +274,11 @@ class RadarSat2Dataset(BaseDataset):
             self._dataset = xr.merge([self._dataset, rasters])
         self._dataset = xr.merge([self.interpolate_times, self._dataset])
         if "ground_heading" not in skip_variables:
-            self._dataset = xr.merge([self.load_ground_heading(), self._dataset])
+            self._dataset = xr.merge(
+                [self.load_ground_heading(), self._dataset])
         if "velocity" not in skip_variables:
-            self._dataset = xr.merge([self.get_sensor_velocity(), self._dataset])
+            self._dataset = xr.merge(
+                [self.get_sensor_velocity(), self._dataset])
         self._rasterized_masks = self.load_rasterized_masks()
         self._dataset = xr.merge([self._rasterized_masks, self._dataset])
         """a = self._dataset.copy()
@@ -399,7 +405,8 @@ class RadarSat2Dataset(BaseDataset):
                 )
                 typee = self.sar_meta.geoloc[varname_in_geoloc].dtype
                 if lazy_loading:
-                    da_var = map_blocks_coords(self._da_tmpl.astype(typee), interp_func)
+                    da_var = map_blocks_coords(
+                        self._da_tmpl.astype(typee), interp_func)
                 else:
                     da_val = interp_func(
                         self._dataset.digital_number.line,
@@ -471,7 +478,8 @@ class RadarSat2Dataset(BaseDataset):
         da_var = xr.DataArray(data=var, dims=['line', 'sample'],
                               coords={'line': self._dataset.digital_number.line,
                                       'sample': self._dataset.digital_number.sample})"""
-        da_var = map_blocks_coords(self._da_tmpl.astype(lut.dtype), interp_func)
+        da_var = map_blocks_coords(
+            self._da_tmpl.astype(lut.dtype), interp_func)
         return da_var
 
     @timing
@@ -510,7 +518,8 @@ class RadarSat2Dataset(BaseDataset):
         try:
             lut_name = self._map_var_lut_noise[var_name]
         except KeyError:
-            raise ValueError("can't find noise lut name for var '%s'" % var_name)
+            raise ValueError(
+                "can't find noise lut name for var '%s'" % var_name)
         try:
             lut = self.sar_meta.dt["radarParameters"][lut_name]
         except KeyError:
@@ -546,7 +555,8 @@ class RadarSat2Dataset(BaseDataset):
         noise_values = 10 ** (initial_lut / 10)
         lines = np.arange(self.sar_meta.geoloc.line[-1] + 1)
         noise_values_2d = np.tile(noise_values, (lines.shape[0], 1))
-        indexes = [first_pix + step * i for i in range(0, noise_values.shape[0])]
+        indexes = [first_pix + step *
+                   i for i in range(0, noise_values.shape[0])]
         interp_func = dask.delayed(RectBivariateSpline)(
             x=lines, y=indexes, z=noise_values_2d, kx=1, ky=1
         )
@@ -604,6 +614,18 @@ class RadarSat2Dataset(BaseDataset):
                     % (var_name, lut_name)
                 )
         self._dataset = self._add_denoised(self._dataset)
+
+        for var_name, lut_name in self._map_var_lut.items():
+            var_name_raw = var_name + "_raw"
+            if var_name_raw in self._dataset:
+                self._dataset[var_name_raw] = self._dataset[var_name_raw].where(
+                    self._dataset[var_name_raw] > 0, 0)
+            else:
+                logger.debug(
+                    "Skipping variable '%s' ('%s' lut is missing)"
+                    % (var_name, lut_name)
+                )
+
         self.datatree["measurement"] = self.datatree["measurement"].assign(
             self._dataset
         )
@@ -666,8 +688,6 @@ class RadarSat2Dataset(BaseDataset):
         # if self.resolution is not None:
         lut = self._resample_lut_values(lut)
         res = ((self._dataset.digital_number**2.0) + offset) / lut
-        # Have to know if we keep this line written by Olivier because it replaces 0 values by nan --> creates problems for wind inversion
-        res = res.where(res > 0)
         res.attrs.update(lut.attrs)
         return res.to_dataset(name=var_name + "_raw")
 
@@ -745,7 +765,8 @@ class RadarSat2Dataset(BaseDataset):
         interp_func = RectBivariateSpline(
             x=lines, y=samples, z=time_values_2d.astype(float), kx=1, ky=1
         )
-        da_var = map_blocks_coords(self._da_tmpl.astype("datetime64[ns]"), interp_func)
+        da_var = map_blocks_coords(
+            self._da_tmpl.astype("datetime64[ns]"), interp_func)
         return da_var.isel(sample=0).to_dataset(name="time")
 
     def get_sensor_velocity(self):
@@ -782,7 +803,8 @@ class RadarSat2Dataset(BaseDataset):
         vels = np.sqrt(np.sum(velos, axis=0))
         interp_f = interp1d(azimuth_times.astype(float), vels)
         _vels = interp_f(interp_times.astype(float))
-        res = xr.DataArray(_vels, dims=["line"], coords={"line": self.dataset.line})
+        res = xr.DataArray(_vels, dims=["line"], coords={
+                           "line": self.dataset.line})
         return xr.Dataset({"velocity": res})
 
     def _reconfigure_reader_datatree(self):
@@ -837,7 +859,8 @@ class RadarSat2Dataset(BaseDataset):
         new_dt["lut"] = dt["lut"].ds.rename(rename_lut)
 
         # extract noise_lut, rename and put these in a dataset
-        new_dt["noise_lut"] = dt["radarParameters"].ds.rename(rename_radarParameters)
+        new_dt["noise_lut"] = dt["radarParameters"].ds.rename(
+            rename_radarParameters)
         new_dt["noise_lut"].attrs = {}  # reset attributes
         delete_list = get_list_keys_delete(
             new_dt["noise_lut"], rename_radarParameters.values(), inside=False


=====================================
src/xsar/raster_readers.py
=====================================
@@ -73,12 +73,10 @@ def ecmwf_0100_1h(fname, **kwargs):
     xarray.Dataset
     """
     ecmwf_ds = xr.open_dataset(
-        fname, chunks={"Longitude": 1000, "Latitude": 1000}
-    ).isel(time=0)
-    ecmwf_ds.attrs["time"] = datetime.datetime.fromtimestamp(
-        ecmwf_ds.time.item() // 1000000000
-    )
-    if "time" in ecmwf_ds:
+        fname, chunks={'time': 1, 'latitude': 901, 'longitude': 1800}).isel(time=0)
+    ecmwf_ds.attrs['time'] = datetime.datetime.fromtimestamp(
+        ecmwf_ds.time.item() // 1000000000)
+    if 'time' in ecmwf_ds:
         ecmwf_ds = ecmwf_ds.drop("time")
     ecmwf_ds = ecmwf_ds[["Longitude", "Latitude", "10U", "10V"]].rename(
         {"Longitude": "x", "Latitude": "y", "10U": "U10", "10V": "V10"}
@@ -143,7 +141,7 @@ def hwrf_0015_3h(fname, **kwargs):
     -------
     xarray.Dataset
     """
-    hwrf_ds = xr.open_dataset(fname)
+    hwrf_ds = xr.open_dataset(fname, chunks={'t': 1, 'LAT': 700, 'LON': 700})
     try:
         hwrf_ds = hwrf_ds[["U", "V", "LON", "LAT"]]
         hwrf_ds = hwrf_ds.squeeze("t", drop=True)
@@ -186,10 +184,11 @@ def era5_0250_1h(fname, **kwargs):
     xarray.Dataset
     """
 
-    ds_era5 = xr.open_dataset(fname, chunks={"time": 1})
-    ds_era5 = ds_era5[["u10", "v10", "latitude025", "longitude025"]]
-    ds_era5 = ds_era5.sel(time=str(kwargs["date"]), method="nearest")
-    ds_era5 = ds_era5.drop("time")
+    ds_era5 = xr.open_dataset(
+        fname, chunks={'time': 6, 'latitude025': 721, 'longitude025': 1440})
+    ds_era5 = ds_era5[['u10', 'v10', 'latitude025', 'longitude025']]
+    ds_era5 = ds_era5.sel(time=str(kwargs['date']), method="nearest")
+    ds_era5 = ds_era5.drop('time')
 
     ds_era5 = ds_era5.rename(
         {"longitude025": "x", "latitude025": "y", "u10": "U10", "v10": "V10"}


=====================================
src/xsar/rcm_dataset.py
=====================================
@@ -21,7 +21,8 @@ logger = logging.getLogger("xsar.radarsat2_dataset")
 logger.addHandler(logging.NullHandler())
 
 # we know tiff has no geotransform : ignore warning
-warnings.filterwarnings("ignore", category=rasterio.errors.NotGeoreferencedWarning)
+warnings.filterwarnings(
+    "ignore", category=rasterio.errors.NotGeoreferencedWarning)
 
 # allow nan without warnings
 # some dask warnings are still not filtered: https://github.com/dask/dask/issues/3245
@@ -99,7 +100,8 @@ class RcmDataset(BaseDataset):
             # assert isinstance(rs2meta.coords2ll(100, 100),tuple)
         else:
             # we want self.rs2meta to be a dask actor on a worker
-            self.sar_meta = BlockingActorProxy(RcmMeta.from_dict, dataset_id.dict)
+            self.sar_meta = BlockingActorProxy(
+                RcmMeta.from_dict, dataset_id.dict)
         del dataset_id
 
         if self.sar_meta.multidataset:
@@ -146,7 +148,8 @@ class RcmDataset(BaseDataset):
             "beta0": "beta0",
         }
 
-        geoloc_vars = ["latitude", "longitude", "altitude", "incidence", "elevation"]
+        geoloc_vars = ["latitude", "longitude",
+                       "altitude", "incidence", "elevation"]
         for vv in skip_variables:
             if vv in geoloc_vars:
                 geoloc_vars.remove(vv)
@@ -168,8 +171,10 @@ class RcmDataset(BaseDataset):
                 value_res_sample = float(resolution.replace("m", ""))
                 value_res_line = value_res_sample
             elif isinstance(resolution, dict):
-                value_res_sample = self.sar_meta.pixel_sample_m * resolution["sample"]
-                value_res_line = self.sar_meta.pixel_line_m * resolution["line"]
+                value_res_sample = self.sar_meta.pixel_sample_m * \
+                    resolution["sample"]
+                value_res_line = self.sar_meta.pixel_line_m * \
+                    resolution["line"]
             else:
                 logger.warning(
                     "resolution type not handle (%s) should be str or dict -> sampleSpacing"
@@ -198,7 +203,8 @@ class RcmDataset(BaseDataset):
                 dask.array.empty_like(
                     self._dataset.digital_number.isel(pol=0).drop("pol"),
                     dtype=np.int8,
-                    name="empty_var_tmpl-%s" % dask.base.tokenize(self.sar_meta.name),
+                    name="empty_var_tmpl-%s" % dask.base.tokenize(
+                        self.sar_meta.name),
                 ),
                 dims=("line", "sample"),
                 coords={
@@ -234,7 +240,8 @@ class RcmDataset(BaseDataset):
 
         self._luts = self.lazy_load_luts()
         self._noise_luts = self.lazy_load_noise_luts()
-        self._noise_luts = self._noise_luts.drop(["pixelFirstNoiseValue", "stepSize"])
+        self._noise_luts = self._noise_luts.drop(
+            ["pixelFirstNoiseValue", "stepSize"])
         self.apply_calibration_and_denoising()
         self._dataset = xr.merge(
             [
@@ -249,9 +256,11 @@ class RcmDataset(BaseDataset):
         if rasters is not None:
             self._dataset = xr.merge([self._dataset, rasters])
         if "ground_heading" not in skip_variables:
-            self._dataset = xr.merge([self.load_ground_heading(), self._dataset])
+            self._dataset = xr.merge(
+                [self.load_ground_heading(), self._dataset])
         if "velocity" not in skip_variables:
-            self._dataset = xr.merge([self.get_sensor_velocity(), self._dataset])
+            self._dataset = xr.merge(
+                [self.get_sensor_velocity(), self._dataset])
         self._rasterized_masks = self.load_rasterized_masks()
         self._dataset = xr.merge([self._rasterized_masks, self._dataset])
         self.datatree["measurement"] = self.datatree["measurement"].assign(
@@ -410,7 +419,6 @@ class RcmDataset(BaseDataset):
         lut = self._get_lut(var_name)
         offset = lut.attrs["offset"]
         res = ((self._dataset.digital_number**2.0) + offset) / lut
-        res = res.where(res > 0)
         res.attrs.update(lut.attrs)
         return res.to_dataset(name=var_name + "_raw")
 
@@ -464,10 +472,22 @@ class RcmDataset(BaseDataset):
                     "Skipping variable '%s' ('%s' lut is missing)"
                     % (var_name, lut_name)
                 )
+
         self._dataset = self._add_denoised(self._dataset)
+
+        for var_name, lut_name in self._map_var_lut.items():
+            var_name_raw = var_name + "_raw"
+            if var_name_raw in self._dataset:
+                self._dataset[var_name_raw] = self._dataset[var_name_raw].where(
+                    self._dataset[var_name_raw] > 0, 0)
+            else:
+                logger.debug(
+                    "Skipping variable '%s' ('%s' lut is missing)"
+                    % (var_name, lut_name)
+                )
         self.datatree["measurement"] = self.datatree["measurement"].assign(
-            self._dataset
-        )
+            self._dataset)
+
         # self._dataset = self.datatree[
         #     'measurement'].to_dataset()  # test oct 22 to see if then I can modify variables of the dt
         return
@@ -496,7 +516,6 @@ class RcmDataset(BaseDataset):
         if vars is None:
             vars = ["sigma0", "beta0", "gamma0"]
         for varname in vars:
-
             varname_raw = varname + "_raw"
             noise = "ne%sz" % varname[0]
             if varname_raw not in ds:
@@ -511,6 +530,10 @@ class RcmDataset(BaseDataset):
                     else:
                         denoised.attrs["comment"] = "not clipped, some values can be <0"
                     ds[varname] = denoised
+                    
+                    ds[varname_raw].attrs[
+                        "denoising information"
+                    ] = f"product was not denoised"
 
                 else:
                     logging.debug(
@@ -518,7 +541,7 @@ class RcmDataset(BaseDataset):
                     )
                     denoised = ds[varname_raw]
                     denoised.attrs["denoising information"] = (
-                        "already denoised by Canadian Agency"
+                        "already denoised by Canadian Space Agency"
                     )
                     if clip:
                         denoised = denoised.clip(min=0)
@@ -530,7 +553,7 @@ class RcmDataset(BaseDataset):
                     ds[varname_raw] = denoised + ds[noise]
                     ds[varname_raw].attrs[
                         "denoising information"
-                    ] = "product was already denoised, noise added back"
+                    ] = "product was already denoised by Canadian Space Agency, noise added back"
 
         return ds
 
@@ -561,7 +584,7 @@ class RcmDataset(BaseDataset):
         # ds_lut_f_delayed.attrs = incidence.attrs
         return da
 
-    @timing
+    @ timing
     def _load_elevation_from_lut(self):
         """
         Load elevation from lut.
@@ -583,7 +606,7 @@ class RcmDataset(BaseDataset):
         inside = angle_rad * earth_radius / (earth_radius + satellite_height)
         return np.degrees(np.arcsin(inside))
 
-    @timing
+    @ timing
     def _get_offboresight_from_elevation(self):
         """
         Compute offboresight angle.
@@ -609,7 +632,7 @@ class RcmDataset(BaseDataset):
             "comment"
         ] = "built from elevation angle and latitude"
 
-    @timing
+    @ timing
     def load_from_geoloc(self, varnames, lazy_loading=True):
         """
         Interpolate (with RectBiVariateSpline) variables from `self.sar_meta.geoloc` to `self._dataset`
@@ -663,7 +686,8 @@ class RcmDataset(BaseDataset):
                 )
                 typee = self.sar_meta.geoloc[varname_in_geoloc].dtype
                 if lazy_loading:
-                    da_var = map_blocks_coords(self._da_tmpl.astype(typee), interp_func)
+                    da_var = map_blocks_coords(
+                        self._da_tmpl.astype(typee), interp_func)
                 else:
                     da_val = interp_func(
                         self._dataset.digital_number.line,
@@ -695,7 +719,7 @@ class RcmDataset(BaseDataset):
         ds = xr.merge(da_list)
         return ds
 
-    @property
+    @ property
     def interpolate_times(self):
         """
         Apply interpolation with RectBivariateSpline to the azimuth time extracted from `self.sar_meta.geoloc`
@@ -712,7 +736,8 @@ class RcmDataset(BaseDataset):
         interp_func = RectBivariateSpline(
             x=lines, y=samples, z=time_values_2d.astype(float), kx=1, ky=1
         )
-        da_var = map_blocks_coords(self._da_tmpl.astype("datetime64[ns]"), interp_func)
+        da_var = map_blocks_coords(
+            self._da_tmpl.astype("datetime64[ns]"), interp_func)
         return da_var.isel(sample=0).to_dataset(name="time")
 
     def get_sensor_velocity(self):
@@ -735,10 +760,11 @@ class RcmDataset(BaseDataset):
         vels = np.sqrt(np.sum(velos, axis=0))
         interp_f = interp1d(azimuth_times.astype(float), vels)
         _vels = interp_f(self.interpolate_times["time"].astype(float))
-        res = xr.DataArray(_vels, dims=["line"], coords={"line": self.dataset.line})
+        res = xr.DataArray(_vels, dims=["line"], coords={
+            "line": self.dataset.line})
         return xr.Dataset({"velocity": res})
 
-    @timing
+    @ timing
     def load_digital_number(
         self, resolution=None, chunks=None, resampling=rasterio.enums.Resampling.rms
     ):
@@ -898,8 +924,10 @@ class RcmDataset(BaseDataset):
                 (resolution["sample"] - 1) / 2, (resolution["line"] - 1) / 2
             )
             scale = Affine.scale(
-                rio.width // resolution["sample"] * resolution["sample"] / out_shape[1],
-                rio.height // resolution["line"] * resolution["line"] / out_shape[0],
+                rio.width // resolution["sample"] *
+                resolution["sample"] / out_shape[1],
+                rio.height // resolution["line"] *
+                resolution["line"] / out_shape[0],
             )
             sample, _ = translate * scale * (dn.sample, 0)
             _, line = translate * scale * (0, dn.line)
@@ -916,7 +944,8 @@ class RcmDataset(BaseDataset):
             "history": yaml.safe_dump(
                 {
                     var_name: get_glob(
-                        [p.replace(self.sar_meta.path + "/", "") for p in tiff_files]
+                        [p.replace(self.sar_meta.path + "/", "")
+                         for p in tiff_files]
                     )
                 }
             ),
@@ -935,7 +964,7 @@ class RcmDataset(BaseDataset):
             intro = "full coverage"
         return "<RcmDataset %s object>" % intro
 
-    @timing
+    @ timing
     def flip_sample_da(self, ds):
         """
         When a product is flipped, flip back data arrays (from a dataset) sample dimensions to respect the xsar
@@ -969,7 +998,7 @@ class RcmDataset(BaseDataset):
             new_ds = ds
         return new_ds
 
-    @timing
+    @ timing
     def flip_line_da(self, ds):
         """
         Flip dataArrays (from a dataset) that depend on line dimension when a product is ascending, in order to
@@ -1004,7 +1033,7 @@ class RcmDataset(BaseDataset):
         self.datatree.attrs |= self.sar_meta.dt.attrs
         return
 
-    @property
+    @ property
     def dataset(self):
         """
         `xarray.Dataset` representation of this `xsar.RcmDataset` object.
@@ -1015,7 +1044,7 @@ class RcmDataset(BaseDataset):
         res.attrs = self.datatree.attrs
         return res
 
-    @dataset.setter
+    @ dataset.setter
     def dataset(self, ds):
         if self.sar_meta.name == ds.attrs["name"]:
             # check if new ds has changed coordinates
@@ -1032,6 +1061,6 @@ class RcmDataset(BaseDataset):
         else:
             raise ValueError("dataset must be same kind as original one.")
 
-    @dataset.deleter
+    @ dataset.deleter
     def dataset(self):
         logger.debug("deleter dataset")


=====================================
src/xsar/sentinel1_dataset.py
=====================================
@@ -30,7 +30,8 @@ logger = logging.getLogger("xsar.sentinel1_dataset")
 logger.addHandler(logging.NullHandler())
 
 # we know tiff has no geotransform : ignore warning
-warnings.filterwarnings("ignore", category=rasterio.errors.NotGeoreferencedWarning)
+warnings.filterwarnings(
+    "ignore", category=rasterio.errors.NotGeoreferencedWarning)
 
 # allow nan without warnings
 # some dask warnings are still not filtered: https://github.com/dask/dask/issues/3245
@@ -126,7 +127,8 @@ class Sentinel1Dataset(BaseDataset):
             # assert isinstance(sar_meta.coords2ll(100, 100),tuple)
         else:
             # we want self.sar_meta to be a dask actor on a worker
-            self.sar_meta = BlockingActorProxy(Sentinel1Meta.from_dict, dataset_id.dict)
+            self.sar_meta = BlockingActorProxy(
+                Sentinel1Meta.from_dict, dataset_id.dict)
         del dataset_id
 
         if self.sar_meta.multidataset:
@@ -242,8 +244,7 @@ class Sentinel1Dataset(BaseDataset):
         ):
             self.apply_recalibration = False
             raise ValueError(
-                "Recalibration can't be done without roll angle. You probably work with an old file for which roll angle is not auxiliary file."
-            )
+                f"Recalibration can't be done without roll angle. You probably work with an old file for which roll angle is not in auxiliary file.")
 
         # self.datatree['measurement'].ds = .from_dict({'measurement':self._load_digital_number(resolution=resolution, resampling=resampling, chunks=chunks)
         # self._dataset = self.datatree['measurement'].ds #the two variables should be linked then.
@@ -277,10 +278,12 @@ class Sentinel1Dataset(BaseDataset):
                 value_res_line = value_res_sample
             elif isinstance(resolution, dict):
                 value_res_sample = (
-                    self.sar_meta.image["slantRangePixelSpacing"] * resolution["sample"]
+                    self.sar_meta.image["slantRangePixelSpacing"] *
+                    resolution["sample"]
                 )
                 value_res_line = (
-                    self.sar_meta.image["azimuthPixelSpacing"] * resolution["line"]
+                    self.sar_meta.image["azimuthPixelSpacing"] *
+                    resolution["line"]
                 )
             else:
                 logger.warning(
@@ -396,7 +399,7 @@ class Sentinel1Dataset(BaseDataset):
         # line_jump_noise = np.ravel(dt['noise_range']['line'][1:-1].data) # annotated line number of burst beginning, this one is corrupted for some S1 TOPS product
         # annotated line number of burst beginning
         line_jump_noise = np.ravel(
-            dt["noise_range"]["line"][1 : 1 + len(line_jump_meas)].data
+            dt["noise_range"]["line"][1: 1 + len(line_jump_meas)].data
         )
         burst_first_lineshift = line_jump_meas - line_jump_noise
         if len(np.unique(burst_first_lineshift)) == 1:
@@ -404,7 +407,8 @@ class Sentinel1Dataset(BaseDataset):
             logging.debug("line_shift: %s", line_shift)
         else:
             raise ValueError(
-                "Inconsistency in line shifting : {}".format(burst_first_lineshift)
+                "Inconsistency in line shifting : {}".format(
+                    burst_first_lineshift)
             )
         res = dt["noise_range"].ds.assign_coords(
             {"line": dt["noise_range"]["line"] + line_shift}
@@ -624,35 +628,43 @@ class Sentinel1Dataset(BaseDataset):
                     path_dataframe_aux = config["path_dataframe_aux"]
                     dataframe_aux = pd.read_csv(path_dataframe_aux)
 
-                    sel_cal = dataframe_aux.loc[
-                        (
-                            dataframe_aux.sat_name
-                            == self.sar_meta.manifest_attrs["satellite"]
-                        )
-                        & (dataframe_aux.aux_type == "CAL")
-                        & (
-                            dataframe_aux.icid
-                            == int(self.sar_meta.manifest_attrs["icid"])
-                        )
-                        & (dataframe_aux.validation_date <= self.sar_meta.start_date)
-                    ]
+                    sel_cal = dataframe_aux.loc[(dataframe_aux.sat_name == self.sar_meta.manifest_attrs['satellite']) &
+                                                (dataframe_aux.aux_type == "CAL") &
+                                                (dataframe_aux.icid == int(self.sar_meta.manifest_attrs['icid'])) &
+                                                (dataframe_aux.validation_date <= self.sar_meta.start_date)]
+                    # Check if sel_cal is empty
+                    if sel_cal.empty:
+                        sel_ = dataframe_aux.loc[(dataframe_aux.sat_name == self.sar_meta.manifest_attrs['satellite']) &
+                                                 (dataframe_aux.aux_type == "CAL") &
+                                                 (dataframe_aux.icid == int(self.sar_meta.manifest_attrs['icid']))]
+                        if sel_.empty:
+                            raise ValueError(
+                                f"Cannot recalibrate - No matching CAL data found for your data : start_date : {self.sar_meta.start_date}.")
+                        else:
+                            raise ValueError(f"Cannot recalibrate - No matching CAL data found for your data: start_date: {self.sar_meta.start_date} & \
+                                minimum validation date in active aux files: {sel_.sort_values(by=['validation_date'], ascending=False).validation_date.values[0]}.")
+
                     sel_cal = sel_cal.sort_values(
-                        by=["validation_date", "generation_date"], ascending=False
-                    )
+                        by=["validation_date", "generation_date"], ascending=False)
+
                     path_new_cal = sel_cal.iloc[0].aux_path
 
-                    sel_pp1 = dataframe_aux.loc[
-                        (
-                            dataframe_aux.sat_name
-                            == self.sar_meta.manifest_attrs["satellite"]
-                        )
-                        & (dataframe_aux.aux_type == "PP1")
-                        & (
-                            dataframe_aux.icid
-                            == int(self.sar_meta.manifest_attrs["icid"])
-                        )
-                        & (dataframe_aux.validation_date <= self.sar_meta.start_date)
-                    ]
+                    sel_pp1 = dataframe_aux.loc[(dataframe_aux.sat_name == self.sar_meta.manifest_attrs['satellite']) &
+                                                (dataframe_aux.aux_type == "PP1") &
+                                                (dataframe_aux.icid == int(self.sar_meta.manifest_attrs['icid'])) &
+                                                (dataframe_aux.validation_date <= self.sar_meta.start_date)]
+
+                    # Check if sel_pp1 is empty
+                    if sel_pp1.empty:
+                        sel_ = dataframe_aux.loc[(dataframe_aux.sat_name == self.sar_meta.manifest_attrs['satellite']) &
+                                                 (dataframe_aux.aux_type == "PP1") &
+                                                 (dataframe_aux.icid == int(self.sar_meta.manifest_attrs['icid']))]
+                        if sel_.empty:
+                            raise ValueError(
+                                f"Cannot recalibrate - No matching PP1 data found for your data : start_date : {self.sar_meta.start_date}.")
+                        else:
+                            raise ValueError(f"Cannot recalibrate - No matching PP1 data found for your data: start_date: {self.sar_meta.start_date} & \
+                                minimum validation date in active aux files: {sel_.sort_values(by=['validation_date'], ascending=False).validation_date.values[0]}.")
                     sel_pp1 = sel_pp1.sort_values(
                         by=["validation_date", "generation_date"], ascending=False
                     )
@@ -666,7 +678,8 @@ class Sentinel1Dataset(BaseDataset):
             if "velocity" not in skip_variables:
                 self._dataset = self._dataset.merge(self.get_sensor_velocity())
             if "range_ground_spacing" not in skip_variables:
-                self._dataset = self._dataset.merge(self._range_ground_spacing())
+                self._dataset = self._dataset.merge(
+                    self._range_ground_spacing())
 
             # set miscellaneous attrs
             for var, attrs in attrs_dict.items():
@@ -695,7 +708,8 @@ class Sentinel1Dataset(BaseDataset):
                     # self.land_mask_slc_per_bursts(
                     #    lazy_loading=lazy_loading)  # replace "GRD" like (Affine transform) land_mask by a burst-by-burst rasterised land mask
                 else:
-                    logger.debug("not a TOPS product -> land_mask already available.")
+                    logger.debug(
+                        "not a TOPS product -> land_mask already available.")
         return
 
     def add_swath_number(self):
@@ -744,7 +758,8 @@ class Sentinel1Dataset(BaseDataset):
 
             # Mark pixels already seen
             flag_tab = xr.where(
-                (flag_tab == 1) & condition & (swath_tab == swath_index), 2, flag_tab
+                (flag_tab == 1) & condition & (
+                    swath_tab == swath_index), 2, flag_tab
             )
 
             # Assign the current swath
@@ -798,7 +813,8 @@ class Sentinel1Dataset(BaseDataset):
         )
 
         self._dataset_recalibration = self._dataset_recalibration.assign(
-            rollAngle=(["line", "sample"], interp_roll(self._dataset.azimuth_time))
+            rollAngle=(["line", "sample"], interp_roll(
+                self._dataset.azimuth_time))
         )
         self._dataset_recalibration = self._dataset_recalibration.assign(
             offboresigthAngle=(
@@ -918,14 +934,17 @@ class Sentinel1Dataset(BaseDataset):
                         self._dataset_recalibration[keyf] = xr.DataArray(
                             np.nan,
                             dims=["pol"],
-                            coords={"pol": self._dataset_recalibration.coords["pol"]},
+                            coords={
+                                "pol": self._dataset_recalibration.coords["pol"]},
                         )
                         self._dataset_recalibration[keyf].attrs["aux_path"] = (
                             os.path.join(
                                 os.path.basename(
-                                    os.path.dirname(os.path.dirname(path_aux_pp1_old))
+                                    os.path.dirname(
+                                        os.path.dirname(path_aux_pp1_old))
                                 ),
-                                os.path.basename(os.path.dirname(path_aux_pp1_old)),
+                                os.path.basename(
+                                    os.path.dirname(path_aux_pp1_old)),
                                 os.path.basename(path_aux_pp1_old),
                             )
                         )
@@ -945,14 +964,17 @@ class Sentinel1Dataset(BaseDataset):
                         self._dataset_recalibration[keyf] = xr.DataArray(
                             np.nan,
                             dims=["pol"],
-                            coords={"pol": self._dataset_recalibration.coords["pol"]},
+                            coords={
+                                "pol": self._dataset_recalibration.coords["pol"]},
                         )
                         self._dataset_recalibration[keyf].attrs["aux_path"] = (
                             os.path.join(
                                 os.path.basename(
-                                    os.path.dirname(os.path.dirname(path_aux_pp1_new))
+                                    os.path.dirname(
+                                        os.path.dirname(path_aux_pp1_new))
                                 ),
-                                os.path.basename(os.path.dirname(path_aux_pp1_new)),
+                                os.path.basename(
+                                    os.path.dirname(path_aux_pp1_new)),
                                 os.path.basename(path_aux_pp1_new),
                             )
                         )
@@ -1004,8 +1026,10 @@ class Sentinel1Dataset(BaseDataset):
                     var_dB
                     + 10 * np.log10(self._dataset_recalibration["old_geap"])
                     - 10 * np.log10(self._dataset_recalibration["new_geap"])
-                    - 2 * 10 * np.log10(self._dataset_recalibration["old_gproc"])
-                    + 2 * 10 * np.log10(self._dataset_recalibration["new_gproc"])
+                    - 2 * 10 *
+                    np.log10(self._dataset_recalibration["old_gproc"])
+                    + 2 * 10 *
+                    np.log10(self._dataset_recalibration["new_gproc"])
                 )
 
                 self._dataset_recalibration[var + "__corrected"] = 10 ** (
@@ -1016,6 +1040,19 @@ class Sentinel1Dataset(BaseDataset):
             )
 
         self._dataset = self._add_denoised(self._dataset)
+        
+        for var_name, lut_name in self._map_var_lut.items():
+            var_name_raw = var_name + "_raw"
+            if var_name_raw in self._dataset:
+                self._dataset[var_name_raw] = self._dataset[var_name_raw].where(
+                    self._dataset[var_name_raw] > 0, 0)
+            else:
+                logger.debug(
+                    "Skipping variable '%s' ('%s' lut is missing)"
+                    % (var_name, lut_name)
+                )
+                
+                
         self.datatree["measurement"] = self.datatree["measurement"].assign(
             self._dataset
         )
@@ -1154,8 +1191,10 @@ class Sentinel1Dataset(BaseDataset):
                         (sub_a_min, sub_x_min, sub_a_max, sub_x_max) = map(
                             int, block.geometry.bounds
                         )
-                        sub_a = lines[(lines >= sub_a_min) & (lines <= sub_a_max)]
-                        sub_x = samples[(samples >= sub_x_min) & (samples <= sub_x_max)]
+                        sub_a = lines[(lines >= sub_a_min) &
+                                      (lines <= sub_a_max)]
+                        sub_x = samples[(samples >= sub_x_min)
+                                        & (samples <= sub_x_max)]
                         noise.loc[dict(line=sub_a, sample=sub_x)] = block.lut_f(
                             sub_a, sub_x
                         )
@@ -1220,7 +1259,8 @@ class Sentinel1Dataset(BaseDataset):
                 lines_start, lines_stop, samples, noiseLuts
             ):
                 lut_f = Lut_box_range(a_start, a_stop, x, lll)
-                block = pd.Series(dict([("lut_f", lut_f), ("geometry", lut_f.area)]))
+                block = pd.Series(
+                    dict([("lut_f", lut_f), ("geometry", lut_f.area)]))
                 blocks.append(block)
 
             # to geopandas
@@ -1297,8 +1337,10 @@ class Sentinel1Dataset(BaseDataset):
                 sample_stop,
                 noise_lut,
             ):
-                lut_f = Lut_box_azi(sw, a, a_start, a_stop, x_start, x_stop, lut)
-                block = pd.Series(dict([("lut_f", lut_f), ("geometry", lut_f.area)]))
+                lut_f = Lut_box_azi(sw, a, a_start, a_stop,
+                                    x_start, x_stop, lut)
+                block = pd.Series(
+                    dict([("lut_f", lut_f), ("geometry", lut_f.area)]))
                 blocks.append(block)
 
             if len(blocks) == 0:
@@ -1424,7 +1466,8 @@ class Sentinel1Dataset(BaseDataset):
         for varname in varnames:
             varname_in_geoloc = mapping_dataset_geoloc[varname]
             if varname in ["azimuth_time"]:
-                z_values = self.sar_meta.geoloc[varname_in_geoloc].astype(float)
+                z_values = self.sar_meta.geoloc[varname_in_geoloc].astype(
+                    float)
             elif varname == "longitude":
                 z_values = self.sar_meta.geoloc[varname_in_geoloc]
                 if self.sar_meta.cross_antemeridian:
@@ -1472,7 +1515,8 @@ class Sentinel1Dataset(BaseDataset):
                 else:
                     line_time = self.get_burst_azitime
                     XX, YY = np.meshgrid(
-                        line_time.astype(float), self._dataset.digital_number.sample
+                        line_time.astype(
+                            float), self._dataset.digital_number.sample
                     )
                     da_var = rbs(XX, YY, grid=False)
                     da_var = xr.DataArray(
@@ -1485,7 +1529,8 @@ class Sentinel1Dataset(BaseDataset):
                     )
             else:
                 if lazy_loading:
-                    da_var = map_blocks_coords(self._da_tmpl.astype(typee), interp_func)
+                    da_var = map_blocks_coords(
+                        self._da_tmpl.astype(typee), interp_func)
                 else:
                     da_var = interp_func(
                         self._dataset.digital_number.line,
@@ -1564,14 +1609,13 @@ class Sentinel1Dataset(BaseDataset):
         """
         lut = self._get_lut(var_name)
         res = np.abs(self._dataset.digital_number) ** 2.0 / (lut**2)
-        # dn default value is 0: convert to Nan
-        res = res.where(res > 0)
         astype = self._dtypes.get(var_name)
         if astype is not None:
             res = res.astype(astype)
 
         res.attrs.update(lut.attrs)
-        res.attrs["history"] = merge_yaml([lut.attrs["history"]], section=var_name)
+        res.attrs["history"] = merge_yaml(
+            [lut.attrs["history"]], section=var_name)
         res.attrs["references"] = (
             "https://sentinel.esa.int/web/sentinel/radiometric-calibration-of-level-1-products"
         )
@@ -1610,7 +1654,8 @@ class Sentinel1Dataset(BaseDataset):
             )
 
         if self.sar_meta.product != "GRD":
-            raise ValueError("SAR product must be GRD. Not implemented for SLC")
+            raise ValueError(
+                "SAR product must be GRD. Not implemented for SLC")
 
         # Retrieve the variable data array and corresponding LUT
         da_var = self._dataset[var_name]
@@ -1699,7 +1744,8 @@ class Sentinel1Dataset(BaseDataset):
                 ds[varname] = ds[varname_raw]
             elif len(set(self.sar_meta.denoised.values())) != 1:
                 # TODO: to be implemented
-                raise NotImplementedError("semi denoised products not yet implemented")
+                raise NotImplementedError(
+                    "semi denoised products not yet implemented")
             else:
                 varname_raw_corrected = varname_raw + "__corrected"
                 if (self.apply_recalibration) & (
@@ -1709,7 +1755,8 @@ class Sentinel1Dataset(BaseDataset):
                         self._dataset_recalibration[varname_raw_corrected] - ds[noise]
                     )
                     denoised.attrs["history"] = merge_yaml(
-                        [ds[varname_raw].attrs["history"], ds[noise].attrs["history"]],
+                        [ds[varname_raw].attrs["history"],
+                            ds[noise].attrs["history"]],
                         section=varname,
                     )
                     denoised.attrs["comment_recalibration"] = (
@@ -1718,7 +1765,8 @@ class Sentinel1Dataset(BaseDataset):
                 else:
                     denoised = ds[varname_raw] - ds[noise]
                     denoised.attrs["history"] = merge_yaml(
-                        [ds[varname_raw].attrs["history"], ds[noise].attrs["history"]],
+                        [ds[varname_raw].attrs["history"],
+                            ds[noise].attrs["history"]],
                         section=varname,
                     )
                     denoised.attrs["comment_recalibration"] = (
@@ -1772,7 +1820,8 @@ class Sentinel1Dataset(BaseDataset):
         vels = np.sqrt(np.sum(velos, axis=0))
         interp_f = interp1d(azi_times.astype(float), vels)
         _vels = interp_f(azimuth_times.astype(float))
-        res = xr.DataArray(_vels, dims=["line"], coords={"line": self.dataset.line})
+        res = xr.DataArray(_vels, dims=["line"], coords={
+                           "line": self.dataset.line})
         return xr.Dataset({"velocity": res})
 
     def _range_ground_spacing(self):
@@ -1804,11 +1853,13 @@ class Sentinel1Dataset(BaseDataset):
                     "line": int(len(line_tmp) / 2),
                 }
             )
-            range_ground_spacing_vect = ground_spacing[1] / np.sin(np.radians(inc))
+            range_ground_spacing_vect = ground_spacing[1] / \
+                np.sin(np.radians(inc))
             range_ground_spacing_vect.attrs["history"] = ""
 
         else:  # GRD
-            valuess = np.ones((len(self._dataset["sample"]))) * ground_spacing[1]
+            valuess = np.ones(
+                (len(self._dataset["sample"]))) * ground_spacing[1]
             range_ground_spacing_vect = xr.DataArray(
                 valuess, coords={"sample": self._dataset["sample"]}, dims=["sample"]
             )
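
Note on the recalibration hunks above: the auxiliary CAL/PP1 lookup now
checks for an empty selection and raises a targeted error instead of
failing later on iloc[0]. A minimal pandas sketch of that lookup logic
(column names follow the diff; the CSV path and filter values are
hypothetical):

    import pandas as pd

    df = pd.read_csv("aux_files.csv")  # hypothetical auxiliary-file index

    match = ((df.sat_name == "S1A")
             & (df.aux_type == "CAL")
             & (df.icid == 7))
    sel = df[match & (df.validation_date <= "2024-12-03")]

    if sel.empty:
        if df[match].empty:
            raise ValueError("Cannot recalibrate - no matching CAL data found.")
        raise ValueError("Cannot recalibrate - all matching CAL files "
                         "postdate the product start date.")

    # newest validation/generation date wins
    path_new_cal = sel.sort_values(
        by=["validation_date", "generation_date"], ascending=False
    ).iloc[0].aux_path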



View it on GitLab: https://salsa.debian.org/debian-gis-team/xsar/-/compare/d70177f0a0ade8eaf12980916d932b9893fe05cf...c281c9156e995f54bc57411f08bcec5fb2ca46a3
