[Git][debian-gis-team/xsar][upstream] New upstream version 2024.02.1

Antonio Valentino (@antonio.valentino) gitlab@salsa.debian.org
Sat Feb 10 09:54:08 GMT 2024



Antonio Valentino pushed to branch upstream at Debian GIS Project / xsar


Commits:
8d44ee50 by Antonio Valentino at 2024-02-10T09:30:12+00:00
New upstream version 2024.02.1
- - - - -


12 changed files:

- .git_archival.txt
- .github/workflows/conda-feedstock-check.yml
- .github/workflows/install-test.yml
- .github/workflows/publish.yml
- environment.yml
- requirements.txt
- setup.py
- src/xsar/config.yml
- src/xsar/rcm_dataset.py
- src/xsar/sentinel1_dataset.py
- src/xsar/sentinel1_meta.py
- src/xsar/utils.py


Changes:

=====================================
.git_archival.txt
=====================================
@@ -1 +1 @@
-ref-names: HEAD -> develop, tag: 2023.09.0
\ No newline at end of file
+ref-names: HEAD -> develop, tag: 2024.02.1
\ No newline at end of file


=====================================
.github/workflows/conda-feedstock-check.yml
=====================================
@@ -40,7 +40,7 @@ jobs:
       - uses: actions/checkout@v4
       - name: Strip python version
         run: cat environment.yml | egrep -vw python > environment-nopython.yml
-      - uses: conda-incubator/setup-miniconda@v2
+      - uses: conda-incubator/setup-miniconda@v3
         with:
           auto-update-conda: true
           activate-environment: xsar


=====================================
.github/workflows/install-test.yml
=====================================
@@ -52,7 +52,7 @@ jobs:
       # cache conda from https://dev.to/epassaro/caching-anaconda-environments-in-github-actions-5hde
       # and https://github.com/conda-incubator/setup-miniconda#caching
       - name: Setup Mambaforge
-        uses: conda-incubator/setup-miniconda@v2
+        uses: conda-incubator/setup-miniconda@v3
         with:
           miniforge-variant: Mambaforge
           miniforge-version: latest
@@ -66,7 +66,7 @@ jobs:
         run: |
           echo "DATE=$(date +'%Y%m%d')" >> $GITHUB_ENV
 
-      - uses: actions/cache@v3
+      - uses: actions/cache@v4
         with:
           path: ~/conda_pkgs_dir
           key: conda-cache-${{ matrix.os }}-${{ matrix.python-version }}-${{ hashFiles('environment.yml') }}-${{ env.DATE }}-${{ env.CACHE_NUMBER }}
@@ -98,7 +98,7 @@ jobs:
           pytest
 
       # check if docs has changed
-      - uses: dorny/paths-filter@v2
+      - uses: dorny/paths-filter@v3
         id: changes
         if: github.event_name != 'schedule'
         with:


=====================================
.github/workflows/publish.yml
=====================================
@@ -13,7 +13,7 @@ jobs:
       - name: Checkout
         uses: actions/checkout@v4
       - name: Set up Python
-        uses: actions/setup-python@v4
+        uses: actions/setup-python@v5
         with:
           python-version: "3.x"
       - name: Install dependencies
@@ -27,7 +27,7 @@ jobs:
         run: |
           twine check dist/*
       - name: Upload build artifacts
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: packages
           path: dist/*
@@ -45,10 +45,10 @@ jobs:
 
     steps:
       - name: Download build artifacts
-        uses: actions/download-artifact@v3
+        uses: actions/download-artifact@v4
         with:
           name: packages
           path: dist/
 
       - name: Publish to PyPI
-        uses: pypa/gh-action-pypi-publish@b7f401de30cb6434a1e19f805ff006643653240e
+        uses: pypa/gh-action-pypi-publish@2f6f737ca5f74c637829c0f5c3acd0e29ea5e8bf


=====================================
environment.yml
=====================================
@@ -15,4 +15,5 @@ dependencies:
   - rioxarray >=0.10
   - pandoc
   - python >=3.8
-  - importlib_resources
\ No newline at end of file
+  - importlib_resources
+  - lxml
\ No newline at end of file


=====================================
requirements.txt
=====================================
@@ -12,4 +12,5 @@ matplotlib
 packaging
 pytest
 dill
-xarray-datatree
\ No newline at end of file
+xarray-datatree
+lxml
\ No newline at end of file


=====================================
setup.py
=====================================
@@ -8,7 +8,7 @@ setup(
     scripts=glob.glob('src/scripts/*.py'),
     url='https://github.com/umr-lops/xsar',
     use_scm_version=True,
-    setup_requires=['setuptools_scm', 'setuptools_scm_git_archive'],
+    setup_requires=['setuptools_scm'],
     include_package_data=True,
     install_requires=[
         'GDAL',
@@ -30,7 +30,8 @@ setup(
         'pytz',
         'psutil',
         'jinja2',
-        'rioxarray'
+        'rioxarray',
+        'lxml'
     ],
     extras_require={
         "RS2": ["xradarsat2"],


=====================================
src/xsar/config.yml
=====================================
@@ -1,2 +1,15 @@
 # default data dir for tests and examples
 data_dir: /tmp
+auxiliary_dir:
+
+auxiliary_names:    
+    S1B:     
+        v_IPF_36:
+            AUX_PP1: 'S1B_AUX_PP1_V20160422T000000_G20220323T140710.SAFE'
+            AUX_CAL: 'S1B_AUX_CAL_V20160422T000000_G20210104T140113.SAFE'
+            #AUX_INS: 'S1B_AUX_INS_V20160422T000000_G20211027T134314.SAFE'
+    S1A:
+        v_IPF_36:
+            AUX_PP1: 'S1A_AUX_PP1_V20190228T092500_G20220323T153041.SAFE'
+            AUX_CAL: 'S1A_AUX_CAL_V20190228T092500_G20210104T141310.SAFE'
+            #AUX_INS: 'S1A_AUX_INS_V20190228T092500_G20211103T111906.SAFE'}
\ No newline at end of file
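
The new auxiliary_dir / auxiliary_names entries drive the recalibration code further below: names are keyed by platform (S1A/S1B), then by an IPF configuration label, then by auxiliary product type. A minimal lookup sketch, assuming the YAML above has been loaded into a plain dict (the platform value is illustrative):

    import yaml

    with open("src/xsar/config.yml") as f:   # path relative to the source tree
        config = yaml.safe_load(f)

    platform = "S1A"   # derived from the product name in the real code
    aux = config["auxiliary_names"][platform]["v_IPF_36"]
    print(aux["AUX_CAL"])   # 'S1A_AUX_CAL_V20190228T092500_G20210104T141310.SAFE'
    print(aux["AUX_PP1"])   # 'S1A_AUX_PP1_V20190228T092500_G20220323T153041.SAFE'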


=====================================
src/xsar/rcm_dataset.py
=====================================
@@ -616,7 +616,7 @@ class RcmDataset(BaseDataset):
                 List of Tiff file paths located in a folder
             """
 
-            return glob.glob(os.path.join(self.sar_meta.path, "imagery", "*"))
+            return glob.glob(os.path.join(self.sar_meta.path, "imagery", "*.tif"))
 
         def _sort_list_files_and_get_pols(list_tiff):
             """


=====================================
src/xsar/sentinel1_dataset.py
=====================================
@@ -13,7 +13,7 @@ from scipy.interpolate import interp1d
 from shapely.geometry import box
 
 from .utils import timing, map_blocks_coords, BlockingActorProxy, merge_yaml, \
-    to_lon180
+    to_lon180, config
 from .sentinel1_meta import Sentinel1Meta
 from .ipython_backends import repr_mimebundle
 import datatree
@@ -21,11 +21,13 @@ from .base_dataset import BaseDataset
 import pandas as pd
 import geopandas as gpd
 
+
 logger = logging.getLogger('xsar.sentinel1_dataset')
 logger.addHandler(logging.NullHandler())
 
 # we know tiff has no geotransform : ignore warning
-warnings.filterwarnings("ignore", category=rasterio.errors.NotGeoreferencedWarning)
+warnings.filterwarnings(
+    "ignore", category=rasterio.errors.NotGeoreferencedWarning)
 
 # allow nan without warnings
 # some dask warnings are still non filtered: https://github.com/dask/dask/issues/3245
@@ -37,7 +39,8 @@ mapping_dataset_geoloc = {'latitude': 'latitude',
                           'elevation': 'elevationAngle',
                           'altitude': 'height',
                           'azimuth_time': 'azimuthTime',
-                          'slant_range_time': 'slantRangeTime'}
+                          'slant_range_time': 'slantRangeTime',
+                          }
 
 
 # noinspection PyTypeChecker
@@ -91,7 +94,8 @@ class Sentinel1Dataset(BaseDataset):
     def __init__(self, dataset_id, resolution=None,
                  resampling=rasterio.enums.Resampling.rms,
                  luts=False, chunks={'line': 5000, 'sample': 5000},
-                 dtypes=None, patch_variable=True, lazyloading=True):
+                 dtypes=None, patch_variable=True, lazyloading=True,
+                 recalibration=False, aux_config_name='v_IPF_36'):
         # default dtypes
         if dtypes is not None:
             self._dtypes.update(dtypes)
@@ -111,7 +115,8 @@ class Sentinel1Dataset(BaseDataset):
             # assert isinstance(sar_meta.coords2ll(100, 100),tuple)
         else:
             # we want self.sar_meta to be a dask actor on a worker
-            self.sar_meta = BlockingActorProxy(Sentinel1Meta.from_dict, dataset_id.dict)
+            self.sar_meta = BlockingActorProxy(
+                Sentinel1Meta.from_dict, dataset_id.dict)
         del dataset_id
 
         if self.sar_meta.multidataset:
@@ -119,19 +124,22 @@ class Sentinel1Dataset(BaseDataset):
                 """Can't open an multi-dataset. Use `xsar.Sentinel1Meta('%s').subdatasets` to show availables ones""" % self.sar_meta.path
             )
         # security to prevent using resolution argument with SLC
-        if self.sar_meta.product == 'SLC' and resolution is not None and self.sar_meta.swath in ['IW','EW']:
+        if self.sar_meta.product == 'SLC' and resolution is not None and self.sar_meta.swath in ['IW', 'EW']:
             # we tolerate resampling for WV since image width is only 20 km
-            logger.error('xsar is not handling resolution change for SLC TOPS products.')
-            raise Exception('xsar is not handling resolution change for SLC TOPS products.')
+            logger.error(
+                'xsar is not handling resolution change for SLC TOPS products.')
+            raise Exception(
+                'xsar is not handling resolution change for SLC TOPS products.')
 
         # build datatree
         self.resolution, DN_tmp = self.sar_meta.reader.load_digital_number(resolution=resolution,
                                                                            resampling=resampling,
                                                                            chunks=chunks)
-        ### geoloc
+
+        # geoloc
         geoloc = self.sar_meta.geoloc
         geoloc.attrs['history'] = 'annotations'
-        ### bursts
+        # bursts
         bu = self.sar_meta._bursts
         bu.attrs['history'] = 'annotations'
 
@@ -151,12 +159,19 @@ class Sentinel1Dataset(BaseDataset):
         ds_noise_range = self.sar_meta.get_noise_range_raw
         ds_noise_range.attrs['history'] = 'noise'
         ds_noise_azi = self.sar_meta.get_noise_azi_raw
+
+        # antenna pattern
+        ds_antenna_pattern = self.sar_meta.get_antenna_pattern
+
+        # swath merging
+        ds_swath_merging = self.sar_meta.get_swath_merging
+
         if self.sar_meta.swath == 'WV':
             # since WV noise is not defined on azimuth we apply the patch on range noise
             # ds_noise_azi['noise_lut'] = self._patch_lut(ds_noise_azi[
             #                                                 'noise_lut'])  # patch applied here is distinct to same patch applied on interpolated noise LUT
             ds_noise_range['noise_lut'] = self._patch_lut(ds_noise_range[
-                                                            'noise_lut'])  # patch applied here is distinct to same patch applied on interpolated noise LUT
+                'noise_lut'])  # patch applied here is distinct to same patch applied on interpolated noise LUT
         ds_noise_azi.attrs['history'] = 'noise'
 
         self.datatree = datatree.DataTree.from_dict({'measurement': DN_tmp, 'geolocation_annotation': geoloc,
@@ -166,37 +181,64 @@ class Sentinel1Dataset(BaseDataset):
                                                      'image': self.sar_meta.image,
                                                      'calibration': ds_luts,
                                                      'noise_range': ds_noise_range,
-                                                     'noise_azimuth': ds_noise_azi
+                                                     'noise_azimuth': ds_noise_azi,
+                                                     'antenna_pattern': ds_antenna_pattern,
+                                                     'swath_merging': ds_swath_merging
                                                      })
 
+        # apply recalibration ?
+
+        self.aux_config_name = aux_config_name
+        self.apply_recalibration = recalibration
+        if self.apply_recalibration and (self.sar_meta.swath != "EW" and self.sar_meta.swath != "IW"):
+            self.apply_recalibration = False
+            raise ValueError(
+                f"Recalibration is only done for EW/IW modes. You have '{self.sar_meta.swath}'. apply_recalibration is set to False.")
+
+        if self.apply_recalibration and np.all(np.isnan(self.datatree.antenna_pattern['roll'].values)):
+            self.apply_recalibration = False
+            raise ValueError(
+                f"Recalibration can't be done without the roll angle. You are probably working with an old product for which the roll angle is not available in the annotations.")
+
         # self.datatree['measurement'].ds = .from_dict({'measurement':self._load_digital_number(resolution=resolution, resampling=resampling, chunks=chunks)
         # self._dataset = self.datatree['measurement'].ds #the two variables should be linked then.
         self._dataset = self.datatree[
             'measurement'].to_dataset()  # test oct 22 to see if then I can modify variables of the dt
+
+        # create a datatree for variables used in recalibration
+        self.datatree["recalibration"] = datatree.DataTree()
+        self._dataset_recalibration = xr.Dataset(
+            coords=self.datatree["measurement"].coords)
+
         for att in ['name', 'short_name', 'product', 'safe', 'swath', 'multidataset']:
             if att not in self.datatree.attrs:
                 # tmp = xr.DataArray(self.sar_meta.__getattr__(att),attrs={'source':'filename decoding'})
                 self.datatree.attrs[att] = self.sar_meta.__getattr__(att)
                 self._dataset.attrs[att] = self.sar_meta.__getattr__(att)
 
-        self._dataset = xr.merge([xr.Dataset({'time': self.get_burst_azitime}), self._dataset])
+        self._dataset = xr.merge(
+            [xr.Dataset({'time': self.get_burst_azitime}), self._dataset])
         value_res_sample = self.sar_meta.image['slantRangePixelSpacing']
         value_res_line = self.sar_meta.image['azimuthPixelSpacing']
         refe_spacing = 'slant'
         if resolution is not None:
-            refe_spacing = 'ground'  # if the data sampling changed it means that the quantities are projected on ground
+            # if the data sampling changed it means that the quantities are projected on ground
+            refe_spacing = 'ground'
             if isinstance(resolution, str):
                 value_res_sample = float(resolution.replace('m', ''))
                 value_res_line = value_res_sample
             elif isinstance(resolution, dict):
-                value_res_sample = self.sar_meta.image['slantRangePixelSpacing'] * resolution['sample']
-                value_res_line = self.sar_meta.image['azimuthPixelSpacing'] * resolution['line']
+                value_res_sample = self.sar_meta.image['slantRangePixelSpacing'] * \
+                    resolution['sample']
+                value_res_line = self.sar_meta.image['azimuthPixelSpacing'] * \
+                    resolution['line']
             else:
                 logger.warning('resolution type not handle (%s) should be str or dict -> sampleSpacing'
                                ' and lineSpacing are not correct', type(resolution))
         self._dataset['sampleSpacing'] = xarray.DataArray(value_res_sample,
                                                           attrs={'units': 'm', 'referential': refe_spacing})
-        self._dataset['lineSpacing'] = xarray.DataArray(value_res_line, attrs={'units': 'm'})
+        self._dataset['lineSpacing'] = xarray.DataArray(
+            value_res_line, attrs={'units': 'm'})
         # dataset principal
 
         # dataset no-pol template for function evaluation on coordinates (*no* values used)
@@ -209,7 +251,8 @@ class Sentinel1Dataset(BaseDataset):
                 line_time = self.get_burst_azitime
                 self._da_tmpl = xr.DataArray(
                     dask.array.empty_like(
-                        np.empty((len(line_time), len(self._dataset.digital_number.sample))),
+                        np.empty((len(line_time), len(
+                            self._dataset.digital_number.sample))),
                         dtype=np.int8, name="empty_var_tmpl-%s" % dask.base.tokenize(self.sar_meta.name)),
                     dims=('line', 'sample'),
                     coords={
@@ -237,12 +280,20 @@ class Sentinel1Dataset(BaseDataset):
         # del tmp_f
         # return
         self._dataset.attrs.update(self.sar_meta.to_dict("all"))
-        self.datatree['measurement'] = self.datatree['measurement'].assign(self._dataset)
+        self.datatree['measurement'] = self.datatree['measurement'].assign(
+            self._dataset)
         self.datatree.attrs.update(self.sar_meta.to_dict("all"))
-        if 'GRD' in str(self.datatree.attrs['product']):  # load land_mask by default for GRD products
-            self.add_high_resolution_variables(patch_variable=patch_variable, luts=luts, lazy_loading=lazyloading)
+
+        # load land_mask by default for GRD products
+        if 'GRD' in str(self.datatree.attrs['product']):
+            self.add_high_resolution_variables(
+                patch_variable=patch_variable, luts=luts, lazy_loading=lazyloading)
+            if self.apply_recalibration:
+                self.select_gains()
             self.apply_calibration_and_denoising()
-        self.datatree['measurement'].attrs = self.datatree.attrs  # added 6 fev 23, to fill  empty attrs
+
+        # added 6 Feb 23, to fill empty attrs
+        self.datatree['measurement'].attrs = self.datatree.attrs
         self.sliced = False
         """True if dataset is a slice of original L1 dataset"""
 
@@ -252,6 +303,44 @@ class Sentinel1Dataset(BaseDataset):
         # save original bbox
         self._bbox_coords_ori = self._bbox_coords
 
+    def select_gains(self):
+        """
+        Assign the appropriate swath gains to each pixel, based on the pixel's swath number.
+
+        Returns:
+        --------
+
+        """
+        vars_to_drop = []
+
+        def get_gains(ds, var_template):
+            if self.sar_meta.swath == "EW":
+                resultat = xr.where(ds['swath_number'] == 1, ds[var_template + "_1"],
+                                    xr.where(ds['swath_number'] == 2, ds[var_template + "_2"],
+                                    xr.where(ds['swath_number'] == 3, ds[var_template + "_3"],
+                                             xr.where(ds['swath_number'] == 4, ds[var_template + "_4"],
+                                             xr.where(ds['swath_number'] == 5, ds[var_template + "_5"],
+                                                      np.nan)))))
+                return resultat
+            elif self.sar_meta.swath == "IW":
+                resultat = xr.where(ds['swath_number'] == 1, ds[var_template + "_1"],
+                                    xr.where(ds['swath_number'] == 2, ds[var_template + "_2"],
+                                    xr.where(ds['swath_number'] == 3, ds[var_template + "_3"],
+                                             np.nan)))
+                return resultat
+            else:
+                raise ValueError(
+                    f"Recalibration is only done for EW/IW modes. You have '{self.sar_meta.swath}'")
+
+        for var_template in ["old_geap", "new_geap", "old_gproc", "new_gproc"]:
+            self._dataset_recalibration[var_template] = get_gains(
+                self._dataset_recalibration, var_template)
+            # vars_to_drop.extend([f"{var_template}_{i}" for i in range(1, 6)])
+        # self._dataset = self._dataset.drop_vars(vars_to_drop,errors = "ignore")
+
+        self.datatree['recalibration'] = self.datatree['recalibration'].assign(
+            self._dataset_recalibration)
+
     def add_high_resolution_variables(self, luts=False, patch_variable=True, skip_variables=None, load_luts=True,
                                       lazy_loading=True):
         """
@@ -346,9 +435,11 @@ class Sentinel1Dataset(BaseDataset):
 
                 # noise_lut is noise_lut_range * noise_lut_azi
                 if 'noise_lut_range' in self._luts.keys() and 'noise_lut_azi' in self._luts.keys():
-                    self._luts = self._luts.assign(noise_lut=self._luts.noise_lut_range * self._luts.noise_lut_azi)
+                    self._luts = self._luts.assign(
+                        noise_lut=self._luts.noise_lut_range * self._luts.noise_lut_azi)
                     self._luts.noise_lut.attrs['history'] = merge_yaml(
-                        [self._luts.noise_lut_range.attrs['history'] + self._luts.noise_lut_azi.attrs['history']],
+                        [self._luts.noise_lut_range.attrs['history'] +
+                            self._luts.noise_lut_azi.attrs['history']],
                         section='noise_lut'
                     )
 
@@ -375,7 +466,15 @@ class Sentinel1Dataset(BaseDataset):
                 if vv in geoloc_vars:
                     geoloc_vars.remove(vv)
 
-            self._dataset = self._dataset.merge(self._load_from_geoloc(geoloc_vars, lazy_loading=lazy_loading))
+            self._dataset = self._dataset.merge(
+                self._load_from_geoloc(geoloc_vars, lazy_loading=lazy_loading))
+
+
+            self.add_swath_number()
+
+            if self.apply_recalibration:
+                self.add_gains(config["auxiliary_names"][self.sar_meta.short_name.split(":")[-2][0:3]][self.aux_config_name]["AUX_CAL"],
+                               config["auxiliary_names"][self.sar_meta.short_name.split(":")[-2][0:3]][self.aux_config_name]["AUX_PP1"])
 
             rasters = self._load_rasters_vars()
             if rasters is not None:
@@ -383,7 +482,8 @@ class Sentinel1Dataset(BaseDataset):
             if 'velocity' not in skip_variables:
                 self._dataset = self._dataset.merge(self.get_sensor_velocity())
             if 'range_ground_spacing' not in skip_variables:
-                self._dataset = self._dataset.merge(self._range_ground_spacing())
+                self._dataset = self._dataset.merge(
+                    self._range_ground_spacing())
 
             # set miscellaneous attrs
             for var, attrs in attrs_dict.items():
@@ -393,21 +493,196 @@ class Sentinel1Dataset(BaseDataset):
                     pass
             # self.datatree[
             #     'measurement'].ds = self._dataset  # last link to make sure all previous modifications are also in the datatree
-            self.recompute_attrs()  # need high resolution rasterised longitude and latitude , needed for .rio accessor
-            self.datatree['measurement'] = self.datatree['measurement'].assign(self._dataset)
+            # need high resolution rasterised longitude and latitude , needed for .rio accessor
+            self.recompute_attrs()
+            self.datatree['measurement'] = self.datatree['measurement'].assign(
+                self._dataset)
             if 'land_mask' in skip_variables:
                 self._dataset = self._dataset.drop('land_mask')
-                self.datatree['measurement'] = self.datatree['measurement'].assign(self._dataset)
+                self.datatree['measurement'] = self.datatree['measurement'].assign(
+                    self._dataset)
             else:
                 assert 'land_mask' in self.datatree['measurement']
                 if self.sar_meta.product == 'SLC' and 'WV' not in self.sar_meta.swath:  # TOPS cases
                     logger.debug('a TOPS product')
-                    self.land_mask_slc_per_bursts(
-                        lazy_loading=lazy_loading)  # replace "GRD" like (Affine transform) land_mask by a burst-by-burst rasterised land mask
+                    # self.land_mask_slc_per_bursts(
+                    #    lazy_loading=lazy_loading)  # replace "GRD" like (Affine transform) land_mask by a burst-by-burst rasterised land mask
                 else:
-                    logger.debug('not a TOPS product -> land_mask already available.')
+                    logger.debug(
+                        'not a TOPS product -> land_mask already available.')
         return
 
+    def add_swath_number(self):
+        """
+        Add a DataArray with the swath number chosen for each pixel of the dataset;
+        also add a DataArray with a flag.
+
+        Returns:
+        --------
+        """
+        swath_tab = xr.DataArray(np.full_like(self._dataset.elevation, np.nan, dtype=int), coords={
+                                 'line': self._dataset.coords["line"], 'sample': self._dataset.coords["sample"]})
+        flag_tab = xr.DataArray(np.zeros_like(self._dataset.elevation, dtype=int), coords={
+                                'line': self._dataset.coords["line"], 'sample': self._dataset.coords["sample"]})
+
+        # Suppose dataset.swaths has 45 elements, as mentioned
+        for i in range(len(self.datatree['swath_merging'].swaths)):
+            y_min, y_max = self.datatree['swath_merging']['firstAzimuthLine'][
+                i], self.datatree['swath_merging']['lastAzimuthLine'][i]
+            x_min, x_max = self.datatree['swath_merging']['firstRangeSample'][
+                i], self.datatree['swath_merging']['lastRangeSample'][i]
+
+            # Locate the pixels belonging to this swath
+            swath_index = int(self.datatree['swath_merging'].swaths[i])
+
+            condition = (self._dataset.line >= y_min) & (self._dataset.line <= y_max) & (
+                self._dataset.sample >= x_min) & (self._dataset.sample <= x_max)
+
+            # Flag pixels that have already been seen
+            flag_tab = xr.where((flag_tab == 1) & condition & (
+                swath_tab == swath_index), 2, flag_tab)
+
+            # Assign the current swath
+            swath_tab = xr.where(condition, swath_index, swath_tab)
+
+            # Flag pixels seen for the first time
+            flag_tab = xr.where((flag_tab == 0) & condition, 1, flag_tab)
+
+        self._dataset_recalibration['swath_number'] = swath_tab
+        self._dataset_recalibration['swath_number_flag'] = flag_tab
+        self._dataset_recalibration['swath_number_flag'].attrs[
+            "flag_info"] = "0 : no swath \n1 : unique swath \n2 : undecided swath"
+
+        self.datatree['recalibration'] = self.datatree['recalibration'].assign(
+            self._dataset_recalibration)
+
+    def add_gains(self, new_aux_cal_name, new_aux_pp1_name):
+        from .utils import get_path_aux_cal, get_path_aux_pp1, get_geap_gains, get_gproc_gains
+        import os
+        from scipy.interpolate import interp1d
+        logger.debug(
+            f"doing recalibration with AUX_CAL = {new_aux_cal_name} & AUX_PP1 = {new_aux_pp1_name}")
+
+        path_aux_cal_new = get_path_aux_cal(new_aux_cal_name)
+        path_aux_cal_old = get_path_aux_cal(
+            os.path.basename(self.sar_meta.manifest_attrs["aux_cal"]))
+
+        path_aux_pp1_new = get_path_aux_pp1(new_aux_pp1_name)
+        path_aux_pp1_old = get_path_aux_pp1(
+            os.path.basename(self.sar_meta.manifest_attrs["aux_pp1"]))
+
+        #  1 - compute offboresight angle
+        roll = self.datatree['antenna_pattern']['roll']
+        azimuthTime = self.datatree['antenna_pattern']['azimuthTime']
+        interp_roll = interp1d(azimuthTime.values.flatten().astype(
+            int), roll.values.flatten(), kind='linear', fill_value='extrapolate')
+
+        self._dataset_recalibration = self._dataset_recalibration.assign(
+            rollAngle=(['line', 'sample'], interp_roll(self._dataset.azimuth_time)))
+        self._dataset_recalibration = self._dataset_recalibration.assign(offboresigthAngle=(
+            ['line', 'sample'], self._dataset['elevation'].data - self._dataset_recalibration['rollAngle'].data))
+
+        # 2- get gains geap and map them
+        dict_geap_old = get_geap_gains(
+            path_aux_cal_old, mode=self.sar_meta.manifest_attrs["swath_type"], pols=self.sar_meta.manifest_attrs['polarizations'])
+        dict_geap_new = get_geap_gains(
+            path_aux_cal_new, mode=self.sar_meta.manifest_attrs["swath_type"], pols=self.sar_meta.manifest_attrs['polarizations'])
+
+        for key, infos_geap in dict_geap_old.items():
+            pol = key[-2:]
+            number = key[2:-3]
+
+            keyf = "old_geap_"+number
+
+            if keyf not in self._dataset_recalibration:
+                data_shape = (len(self._dataset_recalibration.coords['line']), len(
+                    self._dataset_recalibration.coords['sample']), len(self._dataset_recalibration.coords['pol']))
+                self._dataset_recalibration[keyf] = xr.DataArray(np.full(data_shape, np.nan),
+                                                                 coords={'line': self._dataset_recalibration.coords['line'],
+                                                                         'sample': self._dataset_recalibration.coords['sample'],
+                                                                         'pol': self._dataset_recalibration.coords['pol']},
+                                                                 dims=['line', 'sample', 'pol'])  # coords=self._dataset.coords))
+
+                self._dataset_recalibration[keyf].attrs['aux_path'] = os.path.join(os.path.basename(os.path.dirname(os.path.dirname(path_aux_cal_old))),
+                                                                                   os.path.basename(
+                                                                                       os.path.dirname(path_aux_cal_old)),
+                                                                                   os.path.basename(path_aux_cal_old))
+
+            interp = interp1d(
+                infos_geap['offboresightAngle'], infos_geap['gain'], kind='linear')
+            self._dataset_recalibration[keyf].loc[:, :, pol] = interp(
+                self._dataset_recalibration['offboresigthAngle'])
+
+        for key, infos_geap in dict_geap_new.items():
+            pol = key[-2:]
+            number = key[2:-3]
+
+            keyf = "new_geap_"+number
+
+            if keyf not in self._dataset_recalibration:
+                data_shape = (len(self._dataset_recalibration.coords['line']), len(
+                    self._dataset_recalibration.coords['sample']), len(self._dataset_recalibration.coords['pol']))
+                self._dataset_recalibration[keyf] = xr.DataArray(np.full(data_shape, np.nan),
+                                                                 coords={'line': self._dataset_recalibration.coords['line'],
+                                                                         'sample': self._dataset_recalibration.coords['sample'],
+                                                                         'pol': self._dataset_recalibration.coords['pol']},
+                                                                 dims=['line', 'sample', 'pol'])  # coords=self._dataset.coords))
+
+                self._dataset_recalibration[keyf].attrs['aux_path'] = os.path.join(os.path.basename(os.path.dirname(os.path.dirname(path_aux_cal_new))),
+                                                                                   os.path.basename(
+                                                                                       os.path.dirname(path_aux_cal_new)),
+                                                                                   os.path.basename(path_aux_cal_new))
+
+            interp = interp1d(
+                infos_geap['offboresightAngle'], infos_geap['gain'], kind='linear')
+            self._dataset_recalibration[keyf].loc[:, :, pol] = interp(
+                self._dataset_recalibration['offboresigthAngle'])
+
+        # 3- get gains gproc and map them
+        dict_gproc_old = get_gproc_gains(
+            path_aux_pp1_old, mode=self.sar_meta.manifest_attrs["swath_type"], product=self.sar_meta.product)
+        dict_gproc_new = get_gproc_gains(
+            path_aux_pp1_new, mode=self.sar_meta.manifest_attrs["swath_type"], product=self.sar_meta.product)
+
+        for idxpol, pol in enumerate(['HH', 'HV', 'VV', 'VH']):
+            if pol in self.sar_meta.manifest_attrs["polarizations"]:
+                valid_keys_indices = [(key, idxpol, pol)
+                                      for key, infos_gproc in dict_gproc_old.items()]
+                for key, idxpol, pol in valid_keys_indices:
+                    sw_nb = str(key)[-1]
+                    keyf = "old_gproc_"+sw_nb
+                    if keyf not in self._dataset_recalibration:
+                        self._dataset_recalibration[keyf] = xr.DataArray(
+                            np.nan, dims=['pol'], coords={'pol': self._dataset_recalibration.coords["pol"]})
+                        self._dataset_recalibration[keyf].attrs['aux_path'] = os.path.join(os.path.basename(os.path.dirname(os.path.dirname(path_aux_pp1_old))),
+                                                                                           os.path.basename(
+                                                                                               os.path.dirname(path_aux_pp1_old)),
+                                                                                           os.path.basename(path_aux_pp1_old))
+                    self._dataset_recalibration[keyf].loc[...,
+                                                          pol] = dict_gproc_old[key][idxpol]
+
+        for idxpol, pol in enumerate(['HH', 'HV', 'VV', 'VH']):
+            if pol in self.sar_meta.manifest_attrs["polarizations"]:
+                valid_keys_indices = [(key, idxpol, pol)
+                                      for key, infos_gproc in dict_gproc_new.items()]
+                for key, idxpol, pol in valid_keys_indices:
+                    sw_nb = str(key)[-1]
+                    keyf = "new_gproc_"+sw_nb
+                    if keyf not in self._dataset_recalibration:
+                        self._dataset_recalibration[keyf] = xr.DataArray(
+                            np.nan, dims=['pol'], coords={'pol': self._dataset_recalibration.coords["pol"]})
+                        self._dataset_recalibration[keyf].attrs['aux_path'] = os.path.join(os.path.basename(os.path.dirname(os.path.dirname(path_aux_pp1_new))),
+                                                                                           os.path.basename(
+                                                                                               os.path.dirname(path_aux_pp1_new)),
+                                                                                           os.path.basename(path_aux_pp1_new))
+                    self._dataset_recalibration[keyf].loc[...,
+                                                          pol] = dict_gproc_new[key][idxpol]
+
+        self.datatree['recalibration'] = self.datatree['recalibration'].assign(
+            self._dataset_recalibration)
+
+        # return self._dataset
+
     def apply_calibration_and_denoising(self):
         """
         apply calibration and denoising functions to get high resolution sigma0 , beta0 and gamma0 + variables *_raw
@@ -419,13 +694,34 @@ class Sentinel1Dataset(BaseDataset):
         for var_name, lut_name in self._map_var_lut.items():
             if lut_name in self._luts:
                 # merge var_name into dataset (not denoised)
-                self._dataset = self._dataset.merge(self._apply_calibration_lut(var_name))
+                self._dataset = self._dataset.merge(
+                    self._apply_calibration_lut(var_name))
                 # merge noise equivalent for var_name (named 'ne%sz' % var_name[0)
                 self._dataset = self._dataset.merge(self._get_noise(var_name))
             else:
-                logger.debug("Skipping variable '%s' ('%s' lut is missing)" % (var_name, lut_name))
+                logger.debug("Skipping variable '%s' ('%s' lut is missing)" % (
+                    var_name, lut_name))
+
+        if self.apply_recalibration:
+            interest_var = ["sigma0_raw", "gamma0_raw", "beta0_raw"]
+            for var in interest_var:
+                if var not in self._dataset:
+                    continue
+
+                var_dB = 10*np.log10(self._dataset[var])
+
+                corrected_dB = var_dB + 10*np.log10(self._dataset_recalibration["old_geap"]) - 10*np.log10(self._dataset_recalibration["new_geap"]) -\
+                    2*10*np.log10(self._dataset_recalibration["old_gproc"]) + 2*10*np.log10(
+                        self._dataset_recalibration["new_gproc"])
+
+                self._dataset_recalibration[var +
+                                            "__corrected"] = 10**(corrected_dB/10)
+            self.datatree['recalibration'] = self.datatree['recalibration'].assign(
+                self._dataset_recalibration)
+
         self._dataset = self._add_denoised(self._dataset)
-        self.datatree['measurement'] = self.datatree['measurement'].assign(self._dataset)
+        self.datatree['measurement'] = self.datatree['measurement'].assign(
+            self._dataset)
         # self._dataset = self.datatree[
         #     'measurement'].to_dataset()  # test oct 22 to see if then I can modify variables of the dt
         return
@@ -545,13 +841,18 @@ class Sentinel1Dataset(BaseDataset):
                                         samples[-1] + 0.5)
                         # set match_blocks as the non empty intersection with asked_box
                         match_blocks = self.blocks.copy()
-                        match_blocks.geometry = self.blocks.geometry.intersection(asked_box)
+                        match_blocks.geometry = self.blocks.geometry.intersection(
+                            asked_box)
                         match_blocks = match_blocks[~match_blocks.is_empty]
                     for i, block in match_blocks.iterrows():
-                        (sub_a_min, sub_x_min, sub_a_max, sub_x_max) = map(int, block.geometry.bounds)
-                        sub_a = lines[(lines >= sub_a_min) & (lines <= sub_a_max)]
-                        sub_x = samples[(samples >= sub_x_min) & (samples <= sub_x_max)]
-                        noise.loc[dict(line=sub_a, sample=sub_x)] = block.lut_f(sub_a, sub_x)
+                        (sub_a_min, sub_x_min, sub_a_max, sub_x_max) = map(
+                            int, block.geometry.bounds)
+                        sub_a = lines[(lines >= sub_a_min) &
+                                      (lines <= sub_a_max)]
+                        sub_x = samples[(samples >= sub_x_min)
+                                        & (samples <= sub_x_max)]
+                        noise.loc[dict(line=sub_a, sample=sub_x)
+                                  ] = block.lut_f(sub_a, sub_x)
 
                 # values returned as np array
                 return noise.values
@@ -599,7 +900,8 @@ class Sentinel1Dataset(BaseDataset):
             lines_stop = np.ceil(
                 lines + np.diff(lines, append=lines[-1] + 1) / 2
             ).astype(int)  # end is not included in the interval
-            lines_stop[-1] = 65535  # be sure to include all image if last azimuth line, is not last azimuth image
+            # be sure to include all image if last azimuth line, is not last azimuth image
+            lines_stop[-1] = 65535
             for a_start, a_stop, x, l in zip(lines_start, lines_stop, samples, noiseLuts):
                 lut_f = Lut_box_range(a_start, a_stop, x, l)
                 block = pd.Series(dict([
@@ -637,7 +939,8 @@ class Sentinel1Dataset(BaseDataset):
                 def __init__(self, sw, a, a_start, a_stop, x_start, x_stop, lut):
                     self.lines = a
                     self.samples = np.arange(x_start, x_stop + 1)
-                    self.area = box(max(0, a_start - 0.5), max(0, x_start - 0.5), a_stop + 0.5, x_stop + 0.5)
+                    self.area = box(
+                        max(0, a_start - 0.5), max(0, x_start - 0.5), a_stop + 0.5, x_stop + 0.5)
                     if len(lut) > 1:
                         self.lut_f = interp1d(a, lut, kind='linear', fill_value='extrapolate', assume_sorted=True,
                                               bounds_error=False)
@@ -661,7 +964,8 @@ class Sentinel1Dataset(BaseDataset):
                                                                     sample_start,
                                                                     sample_stop,
                                                                     noise_lut):
-                lut_f = Lut_box_azi(sw, a, a_start, a_stop, x_start, x_stop, lut)
+                lut_f = Lut_box_azi(sw, a, a_start, a_stop,
+                                    x_start, x_stop, lut)
                 block = pd.Series(dict([
                     ('lut_f', lut_f),
                     ('geometry', lut_f.area)]))
@@ -699,7 +1003,6 @@ class Sentinel1Dataset(BaseDataset):
             'noise_lut_range': noise_lut_range,
             'noise_lut_azi': noise_lut_azi,
         }
-
         luts_list = []
         luts = None
         for lut_name in luts_names:
@@ -715,7 +1018,8 @@ class Sentinel1Dataset(BaseDataset):
                     # noise_lut_azi doesn't need the raw_lut
                     lut_f_delayed = dask.delayed(_map_func[lut_name])()
                 else:
-                    lut_f_delayed = dask.delayed(_map_func[lut_name])(raw_lut.sel(pol=pol))
+                    lut_f_delayed = dask.delayed(
+                        _map_func[lut_name])(raw_lut.sel(pol=pol))
                 lut = map_blocks_coords(
                     self._da_tmpl.astype(self._dtypes[lut_name]),
                     lut_f_delayed,
@@ -780,7 +1084,8 @@ class Sentinel1Dataset(BaseDataset):
         for varname in varnames:
             varname_in_geoloc = mapping_dataset_geoloc[varname]
             if varname in ['azimuth_time']:
-                z_values = self.sar_meta.geoloc[varname_in_geoloc].astype(float)
+                z_values = self.sar_meta.geoloc[varname_in_geoloc].astype(
+                    float)
             elif varname == 'longitude':
                 z_values = self.sar_meta.geoloc[varname_in_geoloc]
                 if self.sar_meta.cross_antemeridian:
@@ -812,7 +1117,8 @@ class Sentinel1Dataset(BaseDataset):
             if self.sar_meta._bursts['burst'].size != 0:
                 datemplate = self._da_tmpl.astype(typee).copy()
                 # replace the line coordinates by line_time coordinates
-                datemplate = datemplate.assign_coords({'line': datemplate.coords['line_time']})
+                datemplate = datemplate.assign_coords(
+                    {'line': datemplate.coords['line_time']})
                 if lazy_loading:
                     da_var = map_blocks_coords(
                         datemplate,
@@ -820,10 +1126,12 @@ class Sentinel1Dataset(BaseDataset):
                         func_kwargs={"rbs": rbs}
                     )
                     # put back the real line coordinates
-                    da_var = da_var.assign_coords({'line': self._dataset.digital_number.line})
+                    da_var = da_var.assign_coords(
+                        {'line': self._dataset.digital_number.line})
                 else:
                     line_time = self.get_burst_azitime
-                    XX, YY = np.meshgrid(line_time.astype(float), self._dataset.digital_number.sample)
+                    XX, YY = np.meshgrid(line_time.astype(
+                        float), self._dataset.digital_number.sample)
                     da_var = rbs(XX, YY, grid=False)
                     da_var = xr.DataArray(da_var.T, coords={'line': self._dataset.digital_number.line,
                                                             "sample": self._dataset.digital_number.sample},
@@ -835,7 +1143,8 @@ class Sentinel1Dataset(BaseDataset):
                         interp_func
                     )
                 else:
-                    da_var = interp_func(self._dataset.digital_number.line, self._dataset.digital_number.sample)
+                    da_var = interp_func(
+                        self._dataset.digital_number.line, self._dataset.digital_number.sample)
             if varname == 'longitude':
                 if self.sar_meta.cross_antemeridian:
                     da_var.data = da_var.data.map_blocks(to_lon180)
@@ -884,7 +1193,8 @@ class Sentinel1Dataset(BaseDataset):
             self.interpolation_func_slc[varname] = rbs
         line_time = self.get_burst_azitime
         line_az_times_values = line_time.values[line]
-        z_interp_value = self.interpolation_func_slc[varname](line_az_times_values, sample, grid=False)
+        z_interp_value = self.interpolation_func_slc[varname](
+            line_az_times_values, sample, grid=False)
         return z_interp_value
 
     def _apply_calibration_lut(self, var_name):
@@ -911,60 +1221,72 @@ class Sentinel1Dataset(BaseDataset):
             res = res.astype(astype)
 
         res.attrs.update(lut.attrs)
-        res.attrs['history'] = merge_yaml([lut.attrs['history']], section=var_name)
+        res.attrs['history'] = merge_yaml(
+            [lut.attrs['history']], section=var_name)
         res.attrs['references'] = 'https://sentinel.esa.int/web/sentinel/radiometric-calibration-of-level-1-products'
 
         return res.to_dataset(name=var_name)
 
-    def reverse_calibration_lut(self, ds_var):
+    def reverse_calibration_lut(self, var_name):
         """
-        TODO: replace ds_var by var_name
-        Inverse of `_apply_calibration_lut` : from `var_name`, reverse apply lut, to get digital_number.
-        See `official ESA documentation <https://sentinel.esa.int/web/sentinel/radiometric-calibration-of-level-1-products>`_ .
-        > Level-1 products provide four calibration Look Up Tables (LUTs) to produce ß0i, σ0i and γi
-        > or to return to the Digital Number (DN)
+        Only implemented for GRD products yet;
+        complex digital numbers cannot be retrieved for SLC.
 
-        A warning message may be issued if original complex 'digital_number' is converted to module during this operation.
+        Reverse the calibration Look Up Table (LUT) applied to `var_name` to retrieve the original digital number (DN).
+        This is the inverse operation of `_apply_calibration_lut`.
+        Refer to the official ESA documentation for more details on the radiometric calibration of Level-1 products:
+        https://sentinel.esa.int/web/sentinel/radiometric-calibration-of-level-1-products
 
         Parameters
         ----------
-        ds_var: xarray.Dataset
-            with only one variable name that must exist in `self._map_var_lut` to be able to reverse the lut to get digital_number
+        var_name: str
+            The variable name from which the LUT should be reversed to get 'digital_number'. The variable must exist in `self._map_var_lut`.
 
         Returns
         -------
         xarray.Dataset
-            with one variable named 'digital_number'.
+            A dataset with one variable named 'digital_number'.
+
+        Raises
+        ------
+        ValueError
+            If `var_name` does not have an associated LUT in `self._map_var_lut`.
         """
-        var_names = list(ds_var.keys())
-        assert len(var_names) == 1
-        var_name = var_names[0]
+        # Check if the variable has an associated LUT, raise ValueError if not
         if var_name not in self._map_var_lut:
             raise ValueError(
-                "Unable to find lut for var '%s'. Allowed : %s" % (var_name, str(self._map_var_lut.keys())))
-        da_var = ds_var[var_name]
+                f"Unable to find lut for var '{var_name}'. Allowed: {list(self._map_var_lut.keys())}")
+
+        if self.sar_meta.product != 'GRD':
+            raise ValueError(
+                f"SAR product must be GRD. Not implemented for SLC")
+
+        # Retrieve the variable data array and corresponding LUT
+        da_var = self._dataset[var_name]
         lut = self._luts[self._map_var_lut[var_name]]
 
-        # resize lut with same a/sample as da_var
-        lut = lut.sel(line=da_var.line, sample=da_var.sample, method='nearest')
-        # as we used 'nearest', force exact coords
-        lut['line'] = da_var.line
-        lut['sample'] = da_var.sample
-        # waiting for https://github.com/pydata/xarray/pull/4155
-        # lut = lut.interp(line=da_var.line, sample=da_var.sample)
+        # Interpolate the LUT to match the variable's coordinates
+        lut = lut.interp(line=da_var.line, sample=da_var.sample)
+
+        # Reverse the LUT application to compute the squared digital number
+        dn2 = da_var * (lut ** 2)
+
+        # Where the variable data array is NaN, set the squared digital number to 0
+        dn2 = xr.where(np.isnan(da_var), 0, dn2)
 
-        # revert lut to get dn
-        dn = np.sqrt(da_var * lut ** 2)
+        # Apply square root to get the digital number
+        dn = np.sqrt(dn2)
 
-        if self._dataset.digital_number.dtype == np.complex and dn.dtype != np.complex:
+        # Check and warn if the dtype of the original 'digital_number' is not preserved
+        if self._dataset.digital_number.dtype == np.complex_ and dn.dtype != np.complex_:
             warnings.warn(
-                "Unable to retrieve 'digital_number' as dtype '%s'. Fallback to '%s'"
-                % (str(self._dataset.digital_number.dtype), str(dn.dtype))
+                f"Unable to retrieve 'digital_number' as dtype '{self._dataset.digital_number.dtype}'. "
+                f"Fallback to '{dn.dtype}'"
             )
 
+        # Create a dataset with the computed digital number
         name = 'digital_number'
         ds = dn.to_dataset(name=name)
-
         return ds
 
     def _get_noise(self, var_name):
@@ -989,7 +1311,8 @@ class Sentinel1Dataset(BaseDataset):
         astype = self._dtypes.get(name)
         if astype is not None:
             dataarr = dataarr.astype(astype)
-        dataarr.attrs['history'] = merge_yaml([lut.attrs['history'], noise_lut.attrs['history']], section=name)
+        dataarr.attrs['history'] = merge_yaml(
+            [lut.attrs['history'], noise_lut.attrs['history']], section=name)
         return dataarr.to_dataset(name=name)
 
     def _add_denoised(self, ds, clip=False, vars=None):
@@ -1021,13 +1344,27 @@ class Sentinel1Dataset(BaseDataset):
                 ds[varname] = ds[varname_raw]
             elif len(set(self.sar_meta.denoised.values())) != 1:
                 # TODO: to be implemented
-                raise NotImplementedError("semi denoised products not yet implemented")
+                raise NotImplementedError(
+                    "semi denoised products not yet implemented")
             else:
-                denoised = ds[varname_raw] - ds[noise]
-                denoised.attrs['history'] = merge_yaml(
-                    [ds[varname_raw].attrs['history'], ds[noise].attrs['history']],
-                    section=varname
-                )
+                varname_raw_corrected = varname_raw + "__corrected"
+                if ((self.apply_recalibration) & (varname_raw_corrected in self._dataset_recalibration.variables)):
+                    denoised = self._dataset_recalibration[varname_raw_corrected] - ds[noise]
+                    denoised.attrs['history'] = merge_yaml(
+                        [ds[varname_raw].attrs['history'],
+                            ds[noise].attrs['history']],
+                        section=varname
+                    )
+                    denoised.attrs['comment_recalibration'] = 'kersten recalibration applied'
+                else:
+                    denoised = ds[varname_raw] - ds[noise]
+                    denoised.attrs['history'] = merge_yaml(
+                        [ds[varname_raw].attrs['history'],
+                            ds[noise].attrs['history']],
+                        section=varname
+                    )
+                    denoised.attrs['comment_recalibration'] = 'kersten recalibration not applied'
+
                 if clip:
                     denoised = denoised.clip(min=0)
                     denoised.attrs['comment'] = 'clipped, no values <0'
@@ -1070,7 +1407,8 @@ class Sentinel1Dataset(BaseDataset):
         vels = np.sqrt(np.sum(velos, axis=0))
         interp_f = interp1d(azi_times.astype(float), vels)
         _vels = interp_f(azimuth_times.astype(float))
-        res = xr.DataArray(_vels, dims=['line'], coords={'line': self.dataset.line})
+        res = xr.DataArray(_vels, dims=['line'], coords={
+                           'line': self.dataset.line})
         return xr.Dataset({'velocity': res})
 
     def _range_ground_spacing(self):
@@ -1096,11 +1434,13 @@ class Sentinel1Dataset(BaseDataset):
             # get the incidence at the middle of line dimension of the part of image selected
             inc = self._dataset['incidence'].isel({'line': int(len(line_tmp) / 2),
                                                    })
-            range_ground_spacing_vect = ground_spacing[1] / np.sin(np.radians(inc))
+            range_ground_spacing_vect = ground_spacing[1] / \
+                np.sin(np.radians(inc))
             range_ground_spacing_vect.attrs['history'] = ''
 
         else:  # GRD
-            valuess = np.ones((len(self._dataset['sample']))) * ground_spacing[1]
+            valuess = np.ones(
+                (len(self._dataset['sample']))) * ground_spacing[1]
             range_ground_spacing_vect = xr.DataArray(valuess, coords={'sample': self._dataset['sample']},
                                                      dims=['sample'])
         return xr.Dataset({'range_ground_spacing': range_ground_spacing_vect})
@@ -1134,7 +1474,8 @@ class Sentinel1Dataset(BaseDataset):
             azi_time_int = np.timedelta64(int(azi_time_int * 1e12), 'ps')
             ind, geoloc_azitime, geoloc_iburst, geoloc_line = self._get_indices_bursts()
             # compute the azimuth time by adding a step function (first term) and a growing term (second term)
-            azitime = geoloc_azitime[ind] + (line - geoloc_line[ind]) * azi_time_int.astype('<m8[ns]')
+            azitime = geoloc_azitime[ind] + \
+                (line - geoloc_line[ind]) * azi_time_int.astype('<m8[ns]')
         else:  # GRD* cases
             # n_pixels = int((len(self.datatree['geolocation_annotation'].ds['sample']) - 1) / 2)
             # geoloc_azitime = self.datatree['geolocation_annotation'].ds['azimuth_time'].values[:, n_pixels]
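
A minimal usage sketch of the recalibration keywords added to Sentinel1Dataset above (the SAFE path is a placeholder, and auxiliary_dir in config.yml must point at local AUX_CAL / AUX_PP1 archives):

    import xsar

    ds = xsar.Sentinel1Dataset(
        "S1A_IW_GRDH_1SDV_...SAFE",       # placeholder GRD product path
        resolution="100m",
        recalibration=True,               # new keyword, EW/IW GRD only
        aux_config_name="v_IPF_36",       # selects the auxiliary_names entry in config.yml
    )
    recal = ds.datatree["recalibration"]  # swath_number, old/new geap and gproc gains, *_corrected variables

In dB, the corrected variables computed in apply_calibration_and_denoising above follow: corrected = raw + 10*log10(old_geap) - 10*log10(new_geap) - 2*10*log10(old_gproc) + 2*10*log10(new_gproc).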


=====================================
src/xsar/sentinel1_meta.py
=====================================
@@ -431,3 +431,12 @@ class Sentinel1Meta(BaseMeta):
     @property
     def get_noise_range_raw(self):
         return self.dt['noise_range_raw'].to_dataset()
+    
+    @property
+    def get_antenna_pattern(self):
+        return self.dt['antenna_pattern'].to_dataset()
+
+    
+    @property
+    def get_swath_merging(self):
+        return self.dt['swath_merging'].to_dataset()


=====================================
src/xsar/utils.py
=====================================
@@ -22,6 +22,7 @@ from importlib.resources import files
 from pathlib import Path
 import fsspec
 import aiohttp
+from lxml import objectify
 
 logger = logging.getLogger('xsar.utils')
 logger.addHandler(logging.NullHandler())
@@ -671,4 +672,114 @@ def url_get(url, cache_dir=os.path.join(config['data_dir'], 'fsspec_cache')):
 
     return fname
 
+def get_geap_gains(path_aux_cal, mode, pols):
+    """
+    Find the Geap gains associated with the given mode and polarisations from an AUX_CAL file.
+
+    DOC : `https://sentinel.esa.int/documents/247904/1877131/DI-MPC-PB-0241-3-10_Sentinel-1IPFAuxiliaryProductSpecification.pdf/ae025687-c3e3-6ab0-de8d-d9cf58657431?t=1669115416469`
+
+    Parameters
+    ----------
+    path_aux_cal: str 
+
+    mode: str
+        "IW" for example.
+
+    pols : list 
+        ["VV","VH"] for example;
+
+
+    Returns
+    ----------
+    dict
+        return a dict for the given (mode+pols).
+        this dictionary contains, for each swath/polarisation key, the offboresight angle values and the associated gain values
+    """
+    with open(path_aux_cal, 'rb') as file:
+        xml = file.read()
+        
+    root_aux = objectify.fromstring(xml)
+    dict_gains = {}
+
+    for calibrationParams in root_aux.calibrationParamsList.getchildren():
+        swath = calibrationParams.swath
+        polarisation = calibrationParams.polarisation
+
+        if (mode in swath.text and polarisation in pols):
+            dict_temp = {}
+
+            increment = calibrationParams.elevationAntennaPattern.elevationAngleIncrement
+            
+            valuesIQ = np.array([float(
+                e) for e in calibrationParams.elevationAntennaPattern['values'].text.split(' ')])
+            
+            if valuesIQ.size == 1202:
+                gain = np.sqrt(valuesIQ[::2]**2+valuesIQ[1::2]**2)
+            elif valuesIQ.size == 601:
+                gain = 10**(valuesIQ/10)
+            else : 
+                raise ValueError(f"valuesIQ must be of size 601 (float) or 1202 (complex)")
+            #gain = np.sqrt(valuesIQ[::2]**2+valuesIQ[1::2]**2)
+
+            count = gain.size
+            ang = np.linspace(-((count - 1)/2) * increment,
+                              ((count - 1)/2) * increment, count)
+
+            dict_temp["offboresightAngle"] = ang
+            dict_temp["gain"] = gain
+            dict_gains[swath+"_"+polarisation] = dict_temp
+
+    return dict_gains
+
+
+def get_gproc_gains(path_aux_pp1, mode, product):
+    """
+    Find the Gproc gains associated with the given mode and product type from an AUX_PP1 file.
+
+    DOC : `https://sentinel.esa.int/documents/247904/1877131/DI-MPC-PB-0241-3-10_Sentinel-1IPFAuxiliaryProductSpecification.pdf/ae025687-c3e3-6ab0-de8d-d9cf58657431?t=1669115416469`
+
+    Parameters
+    ----------
+    path_aux_pp1: str 
+
+    Returns
+    ----------
+    dict
+        return a dict of 4 linear gain values for each (mode+product_type+slice_number)
+        in this order: Gproc_HH, Gproc_HV, Gproc_VV, Gproc_VH
+    """
 
+    # Parse the XML file
+    with open(path_aux_pp1, 'rb') as file:
+        xml = file.read()
+    root_pp1 = objectify.fromstring(xml)
+    dict_gains = {}
+    for prd in root_pp1.productList.getchildren():
+        for swathParams in prd.slcProcParams.swathParamsList.getchildren():
+            if (mode in swathParams.swath.text and product in prd.productId.text):
+                gains = [float(g) for g in swathParams.gain.text.split(' ')]
+                key = swathParams.swath
+                dict_gains[key] = gains
+    return dict_gains
+
+
+
+def get_path_aux_cal(aux_cal_name):
+    path = os.path.join(config["auxiliary_dir"],
+                        aux_cal_name[0:3]+"_AUX_CAL",
+                        aux_cal_name, 
+                        "data", 
+                        aux_cal_name[0:3].lower() + "-aux-cal.xml")
+    if not os.path.isfile(path):
+        raise FileNotFoundError(f"File doesn't exist: {path}")
+    return path
+
+def get_path_aux_pp1(aux_pp1_name):
+    path = os.path.join(config["auxiliary_dir"],
+                        aux_pp1_name[0:3]+"_AUX_PP1",
+                        aux_pp1_name, 
+                        "data", 
+                        aux_pp1_name[0:3].lower() + "-aux-pp1.xml")
+    if not os.path.isfile(path):
+        raise FileNotFoundError(f"File doesn't exist: {path}")
+    return path
\ No newline at end of file
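
A short sketch of how the new auxiliary-file helpers above fit together, assuming config["auxiliary_dir"] is set and the default auxiliary products listed in config.yml are available locally:

    from xsar.utils import (get_path_aux_cal, get_path_aux_pp1,
                            get_geap_gains, get_gproc_gains)

    cal_path = get_path_aux_cal("S1A_AUX_CAL_V20190228T092500_G20210104T141310.SAFE")
    pp1_path = get_path_aux_pp1("S1A_AUX_PP1_V20190228T092500_G20220323T153041.SAFE")

    # {swath_pol: {"offboresightAngle": ..., "gain": ...}}
    geap = get_geap_gains(cal_path, mode="IW", pols=["VV", "VH"])
    # {swath: [Gproc_HH, Gproc_HV, Gproc_VV, Gproc_VH]}
    gproc = get_gproc_gains(pp1_path, mode="IW", product="GRD")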



View it on GitLab: https://salsa.debian.org/debian-gis-team/xsar/-/commit/8d44ee50a868fd79647768cb19cc04ba9a548e12


