[Git][debian-gis-team/mapcache][master] 6 commits: New upstream version 1.16.0

Bas Couwenberg (@sebastic) gitlab@salsa.debian.org
Mon Apr 20 16:34:38 BST 2026



Bas Couwenberg pushed to branch master at Debian GIS Project / mapcache


Commits:
a1b16a39 by Bas Couwenberg at 2026-04-20T17:23:53+02:00
New upstream version 1.16.0
- - - - -
628a391a by Bas Couwenberg at 2026-04-20T17:23:56+02:00
Update upstream source from tag 'upstream/1.16.0'

Update to upstream version '1.16.0'
with Debian dir e70cff61bd44a375085bdef96c5d8208fa0d2680
- - - - -
4aa28763 by Bas Couwenberg at 2026-04-20T17:24:17+02:00
New upstream release.

- - - - -
03929e9e by Bas Couwenberg at 2026-04-20T17:26:17+02:00
Update copyright file.

- - - - -
5a87d5a5 by Bas Couwenberg at 2026-04-20T17:26:51+02:00
Drop gdal-3.13.patch, fixed upstream.

- - - - -
e7694e75 by Bas Couwenberg at 2026-04-20T17:27:04+02:00
Set distribution to unstable.

- - - - -


25 changed files:

- + .github/dependabot.yml
- .github/workflows/build-linux.yml
- .github/workflows/build-windows.yml
- .github/workflows/check-crlf.yml
- .github/workflows/irc_notify.yml
- .gitignore
- CMakeLists.txt
- LICENSE.md
- MIGRATION_GUIDE.md
- README.md
- contrib/mapcache_detail/mapcache_detail.c
- debian/changelog
- debian/copyright
- − debian/patches/gdal-3.13.patch
- − debian/patches/series
- lib/cache_lmdb.c
- lib/dimension.c
- lib/source_gdal.c
- lib/source_mapserver.c
- + tests/data/mapcache_backend_template.xml
- + tests/mcpython/generate_synthetic_geotiff.py
- + tests/mcpython/requirements.txt
- + tests/mcpython/test_disk_cache.py
- + tests/mcpython/test_sqlite_cache.py
- + tests/mcpython/verification_core.py


Changes:

=====================================
.github/dependabot.yml
=====================================
@@ -0,0 +1,8 @@
+# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
+
+version: 2
+updates:
+  - package-ecosystem: "github-actions" # See documentation for possible values
+    directory: "/" # Location of package manifests
+    schedule:
+      interval: "weekly"


=====================================
.github/workflows/build-linux.yml
=====================================
@@ -6,31 +6,37 @@ jobs:
     build-matrix:
       strategy:
         matrix:
-          os: [ ubuntu-20.04 ]
+          os: [ ubuntu-latest ]
           option: [ minimal, default, maximal ]
       runs-on: ${{ matrix.os }}
       steps:
         - name: Checkout repository
-          uses: actions/checkout@v2
+          uses: actions/checkout@v6
 
         - name: Install dependencies
           run: |
             if [[ 'minimal,default,maximal' =~ ${{ matrix.option }} ]]
             then
+              sudo add-apt-repository -y ppa:ubuntugis/ubuntugis-unstable
               sudo apt-get update -y
               sudo apt-get upgrade -y
               sudo apt-get install -y libcurl4-openssl-dev apache2-dev
+              sudo apt-get install -y libpng-dev libjpeg-dev
             fi
             if [[ 'default,maximal' =~ ${{ matrix.option }} ]]
             then
               sudo apt-get install -y libgdal-dev libfcgi-dev libpixman-1-dev
-              sudo apt-get install -y gdal-bin libxml2-utils
+              sudo apt-get install -y gdal-bin libxml2-utils python3-pip python3-gdal python3-pytest
             fi
             if [[ 'maximal' =~ ${{ matrix.option }} ]]
             then
               sudo apt-get install -y libhiredis-dev libdb-dev libmapserver-dev libpcre2-dev
             fi
 
+        - name: Install python dependencies
+          run: |
+            pip install -r ${{ github.workspace }}/tests/mcpython/requirements.txt
+
         - name: Build MapCache
           run: |
             if [[ 'minimal' == ${{ matrix.option }} ]]
@@ -69,7 +75,7 @@ jobs:
 
         - name: Run tests
           run: |
-            if [[ 'ubuntu-20.04' == ${{ matrix.os }} ]] \
+            if [[ 'ubuntu-latest' == ${{ matrix.os }} ]] \
                && [[ 'default' == ${{ matrix.option }} ]]
             then
               cd ${{ github.workspace }}/tests
@@ -78,3 +84,13 @@ jobs:
             else
               echo No test performed on this target
             fi
+
+        - name: Run python tests
+          run: |
+            if [[ 'ubuntu-latest' == ${{ matrix.os }} ]] \
+               && [[ 'default' == ${{ matrix.option }} ]]
+            then
+              pytest ${{ github.workspace }}/tests/mcpython/
+            else
+              echo No python test performed on this target
+            fi


=====================================
.github/workflows/build-windows.yml
=====================================
@@ -6,21 +6,21 @@ jobs:
     build-matrix:
       strategy:
         matrix:
-          os: [ windows-2019 ]
+          os: [ windows-2022 ]
           option: [ default ]
       runs-on: ${{matrix.os}}
       steps:
         - name: Checkout repository
-          uses: actions/checkout@v2
+          uses: actions/checkout@v6
 
         - name: Install dependencies
           run: |
             Set-Location -Path "${{github.workspace}}"
             New-Item -Path . -Name "sdk" -ItemType "directory"
             Set-Location -Path "sdk"
-            curl -O https://download.gisinternals.com/sdk/downloads/release-1928-x64-dev.zip
-            unzip -qq release-1928-x64-dev.zip
-            $sdkprefix = "${{github.workspace}}\sdk\release-1928-x64"
+            curl -O https://download.gisinternals.com/sdk/downloads/release-1930-x64-dev.zip
+            unzip -qq release-1930-x64-dev.zip
+            $sdkprefix = "${{github.workspace}}\sdk\release-1930-x64"
             Set-Location -Path "$sdkprefix\lib"
             Copy-Item -Path "libfcgi.lib" -Destination "fcgi.lib"
             Copy-Item -Path "apr-1.lib" -Destination "apr-1-1.lib"
@@ -30,7 +30,7 @@ jobs:
 
         - name: Build MapCache
           run: |
-            $sdkprefix = "${{github.workspace}}\sdk\release-1928-x64"
+            $sdkprefix = "${{github.workspace}}\sdk\release-1930-x64"
             Set-Location -Path "${{github.workspace}}"
             New-Item -Path . -Name "build" -ItemType "directory"
             Set-Location -Path "build"
@@ -41,7 +41,7 @@ jobs:
             Compress-Archive -DestinationPath "${{github.workspace}}\mapcache.zip" -Path "${{github.workspace}}\mapcache.xml", "mapcache.dll", "mapcache.fcgi.exe", "mapcache_seed.exe", "mapcache_detail.exe"
 
         - name: Upload binary artifacts
-          uses: actions/upload-artifact@v2
+          uses: actions/upload-artifact@v7
           with:
             name: binaries
             path: mapcache.zip
@@ -49,7 +49,7 @@ jobs:
 
         - name: Setup tests
           run: |
-            $sdkprefix = "${{github.workspace}}\sdk\release-1928-x64"
+            $sdkprefix = "${{github.workspace}}\sdk\release-1930-x64"
             Set-Location -Path "${{github.workspace}}\build"
             Copy-Item -Path "..\tests\data\world.tif" -Destination .
             New-Item -Path "mapcache.xml"
@@ -80,7 +80,7 @@ jobs:
 
         - name: Run tests
           run: |
-            $sdkprefix = "${{github.workspace}}\sdk\release-1928-x64"
+            $sdkprefix = "${{github.workspace}}\sdk\release-1930-x64"
             Set-Location -Path "$sdkprefix\bin"
             $env:GDAL_DATA = "$sdkprefix\bin\gdal-data"
             $env:PROJ_LIB = "$sdkprefix\bin\proj9\share"
@@ -110,7 +110,7 @@ jobs:
             }
 
         - name: Upload test artifacts
-          uses: actions/upload-artifact@v2
+          uses: actions/upload-artifact@v7
           with:
             name: test-results
             path: |


=====================================
.github/workflows/check-crlf.yml
=====================================
@@ -8,13 +8,13 @@ on: [push, pull_request]
 jobs:
   Check-CRLF:
     name: verify that only LF linefeeds are used
-    runs-on: ubuntu-18.04
+    runs-on: ubuntu-latest
 
     steps:
       - name: Checkout repository contents
-        uses: actions/checkout@v1
+        uses: actions/checkout@v6
 
       - name: Use action to check for CRLF endings
-        uses: erclu/check-crlf@v1.1.2
+        uses: erclu/check-crlf@master
         with: # ignore directories below, space-delimited
           exclude: 
\ No newline at end of file


=====================================
.github/workflows/irc_notify.yml
=====================================
@@ -16,7 +16,7 @@ jobs:
     #if: github.repository == 'MapServer/mapcache'
     steps:
       - name: irc push
-        uses: rectalogic/notify-irc@v1
+        uses: rectalogic/notify-irc@v2
         if: github.event_name == 'push'
         with:
           channel: "#mapcache"
@@ -26,7 +26,7 @@ jobs:
             ${{ github.actor }} pushed ${{ github.event.ref }} ${{ github.event.compare }}
             ${{ join(github.event.commits.*.message) }}
       - name: irc pull request
-        uses: rectalogic/notify-irc@v1
+        uses: rectalogic/notify-irc@v2
         if: github.event_name == 'pull_request'
         with:
           channel: "#mapcache"
@@ -34,8 +34,9 @@ jobs:
           nickname: mapcache-github-notifier
           message: |
             ${{ github.actor }} opened PR ${{ github.event.pull_request.html_url }}
+            ${{ github.event.pull_request.title }}
       - name: irc tag created
-        uses: rectalogic/notify-irc@v1
+        uses: rectalogic/notify-irc@v2
         if: github.event_name == 'create' && github.event.ref_type == 'tag'
         with:
           channel: "#mapcache"


=====================================
.gitignore
=====================================
@@ -3,3 +3,4 @@ nbproject/
 /build/
 /build_vagrant/
 /.vagrant/
+__pycache__


=====================================
CMakeLists.txt
=====================================
@@ -12,8 +12,8 @@ endif ()
 
 
 set (MAPCACHE_VERSION_MAJOR 1)
-set (MAPCACHE_VERSION_MINOR 14)
-set (MAPCACHE_VERSION_REVISION 1)
+set (MAPCACHE_VERSION_MINOR 16)
+set (MAPCACHE_VERSION_REVISION 0)
 
 if(NOT DEFINED CMAKE_INSTALL_LIBDIR)
    set(CMAKE_INSTALL_LIBDIR lib)
@@ -371,7 +371,7 @@ status_optional_component("GeoTIFF" "${USE_GEOTIFF}" "${GEOTIFF_LIBRARY}")
 status_optional_component("Experimental TIFF write support" "${USE_TIFF_WRITE}" "${TIFF_LIBRARY}")
 status_optional_component("PCRE" "${USE_PCRE}" "${PCRE_LIBRARY}")
 status_optional_component("PCRE2" "${USE_PCRE2}" "${PCRE2-8_LIBRARY}")
-status_optional_component("Experimental mapserver support" "${USE_MAPSERVER}" "${MAPSERVER_LIBRARY}")
+status_optional_component("Experimental MapServer support" "${USE_MAPSERVER}" "${MAPSERVER_LIBRARY}")
 status_optional_component("RIAK" "${USE_RIAK}" "${RIAK_LIBRARY}")
 status_optional_component("GDAL" "${USE_GDAL}" "${GDAL_LIBRARY}")
 message(STATUS " * Optional features")


=====================================
LICENSE.md
=====================================
@@ -1,7 +1,7 @@
 MapCache Licensing
 ==================
 
-Copyright (c) 2008-2024 Open Source Geospatial Foundation.  
+Copyright (c) 2008-2026 Open Source Geospatial Foundation.  
 Copyright (c) 1996-2008 Regents of the University of Minnesota.
 
 Permission is hereby granted, free of charge, to any person obtaining a


=====================================
MIGRATION_GUIDE.md
=====================================
@@ -1,3 +1,10 @@
+Migrating from MapCache 1.14 to 1.16
+====================================
+
+* No backward compatibility issue is expected.
+  See [MapCache 1.16 Changelog](https://mapserver.org/development/changelog/mapcache/changelog-1-16.html)
+  for a list of bug fixes and new features.
+
 Migrating from MapCache 1.12 to 1.14
 ====================================
 


=====================================
README.md
=====================================
@@ -3,6 +3,7 @@ MapCache
 
 [![Build MapCache on Linux Status](https://github.com/MapServer/mapcache/actions/workflows/build-linux.yml/badge.svg)](https://github.com/MapServer/mapcache/actions?query=workflow%3A%22Build%20MapCache%20on%20Linux%22%20branch%3Amain)
 [![Build MapCache on Windows Status](https://github.com/MapServer/mapcache/actions/workflows/build-windows.yml/badge.svg)](https://github.com/MapServer/mapcache/actions?query=workflow%3A%22Build%20MapCache%20on%20Windows%22%20branch%3Amain)
+[![Release](https://img.shields.io/github/v/release/MapServer/mapcache)](https://github.com/MapServer/mapcache/releases)
 
 Summary
 -------


=====================================
contrib/mapcache_detail/mapcache_detail.c
=====================================
@@ -237,8 +237,9 @@ static void _destroy_json_pool() {
 GEOSGeometry * mapcache_extent_to_GEOSGeometry(const mapcache_extent *extent)
 {
   GEOSCoordSequence *cs = GEOSCoordSeq_create(5,2);
-  GEOSGeometry *lr = GEOSGeom_createLinearRing(cs);
-  GEOSGeometry *bb = GEOSGeom_createPolygon(lr,NULL,0);
+  GEOSGeometry *lr;
+  GEOSGeometry *bb;
+
   GEOSCoordSeq_setX(cs,0,extent->minx);
   GEOSCoordSeq_setY(cs,0,extent->miny);
   GEOSCoordSeq_setX(cs,1,extent->maxx);
@@ -249,6 +250,10 @@ GEOSGeometry * mapcache_extent_to_GEOSGeometry(const mapcache_extent *extent)
   GEOSCoordSeq_setY(cs,3,extent->maxy);
   GEOSCoordSeq_setX(cs,4,extent->minx);
   GEOSCoordSeq_setY(cs,4,extent->miny);
+
+  lr = GEOSGeom_createLinearRing(cs);
+  bb = GEOSGeom_createPolygon(lr, NULL, 0);
+
   return bb;
 }
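
The reordering above matters because GEOSGeom_createLinearRing takes ownership of the coordinate sequence passed to it, so the sequence must be fully populated before it is handed over. A stdlib-only analogy of that ownership-transfer pattern (all names here are illustrative, not part of MapCache or GEOS):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Like GEOSGeom_createLinearRing(), take_ownership() consumes the buffer
 * passed to it, so the caller must finish writing coordinates BEFORE
 * transferring ownership -- the same fix applied in mapcache_detail.c. */
typedef struct { double *coords; size_t n; } ring_t;

static ring_t *take_ownership(double *coords, size_t n) {
  ring_t *r = malloc(sizeof *r);
  r->coords = coords;  /* the ring now owns the buffer */
  r->n = n;
  return r;
}

static void ring_free(ring_t *r) { free(r->coords); free(r); }

static ring_t *make_bbox_ring(double minx, double miny, double maxx, double maxy) {
  double *c = malloc(10 * sizeof *c);
  /* fill all five corner points of the closed ring first ... */
  double pts[10] = { minx,miny, maxx,miny, maxx,maxy, minx,maxy, minx,miny };
  memcpy(c, pts, sizeof pts);
  /* ... and only then transfer ownership */
  return take_ownership(c, 5);
}
```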
 


=====================================
debian/changelog
=====================================
@@ -1,5 +1,7 @@
-mapcache (1.14.1-4) UNRELEASED; urgency=medium
+mapcache (1.16.0-1) unstable; urgency=medium
 
+  * New upstream release.
+    (closes: #1134122)
   * Mark libmapcache1t64 as Multi-Arch: same.
   * Update lintian overrides.
   * Drop Rules-Requires-Root: no, default since dpkg 1.22.13.
@@ -8,10 +10,9 @@ mapcache (1.14.1-4) UNRELEASED; urgency=medium
   * Drop Priority: optional, default since dpkg 1.22.13.
   * Bump Standards-Version to 4.7.4, changes: priority.
   * Drop obsolete Breaks/Replaces.
-  * Add patch to fix FTBFS with GDAL 3.13.0.
-    (closes: #1134122)
+  * Update copyright file.
 
- -- Bas Couwenberg <sebastic@debian.org>  Fri, 18 Jul 2025 13:31:33 +0200
+ -- Bas Couwenberg <sebastic@debian.org>  Mon, 20 Apr 2026 17:26:53 +0200
 
 mapcache (1.14.1-3) unstable; urgency=medium
 


=====================================
debian/copyright
=====================================
@@ -6,8 +6,8 @@ Source: https://github.com/mapserver/mapcache/releases
 Files: *
 Copyright: 2004, Frank Warmerdam <warmerdam@pobox.com>
            2021, Boris Manojlovic
-      1996-2022, Regents of the University of Minnesota
-      2008-2024, Open Source Geospatial Foundation
+      1996-2025, Regents of the University of Minnesota
+      2008-2026, Open Source Geospatial Foundation
 License: Expat
 
 Files: include/cJSON.h


=====================================
debian/patches/gdal-3.13.patch deleted
=====================================
@@ -1,23 +0,0 @@
-Description: Fix FTBFS with GDAL 3.13.0.
- error: implicit declaration of function 'MIN' [-Wimplicit-function-declaration]
-Author: Bas Couwenberg <sebastic@debian.org>
-Forwarded: https://github.com/MapServer/mapcache/issues/373#issuecomment-4262514532
-
---- a/lib/source_gdal.c
-+++ b/lib/source_gdal.c
-@@ -347,9 +347,15 @@ CreateWarpedVRT( GDALDatasetH hSrcDS,
-     {
-         double dfDesiredXRes = (extent->maxx - extent->minx) / width;
-         double dfDesiredYRes = (extent->maxy - extent->miny) / height;
-+#if GDAL_VERSION_NUM >= GDAL_COMPUTE_VERSION(3, 13, 0)
-+        double dfDesiredRes = CPL_MIN( dfDesiredXRes, dfDesiredYRes );
-+        double dfGuessedFullRes = CPL_MIN( adfDstGeoTransform[1],
-+                                   fabs(adfDstGeoTransform[5]) );
-+#else
-         double dfDesiredRes = MIN( dfDesiredXRes, dfDesiredYRes );
-         double dfGuessedFullRes = MIN( adfDstGeoTransform[1],
-                                    fabs(adfDstGeoTransform[5]) );
-+#endif
-         double dfApproxDstOvrRatio = dfDesiredRes / dfGuessedFullRes;
- 
-         GDALRasterBandH hFirstBand = GDALGetRasterBand(hSrcDS, 1);


=====================================
debian/patches/series deleted
=====================================
@@ -1 +0,0 @@
-gdal-3.13.patch


=====================================
lib/cache_lmdb.c
=====================================
@@ -43,6 +43,7 @@
 
 #include "lmdb.h"
 
+/* Cache specific configuration */
 typedef struct mapcache_cache_lmdb mapcache_cache_lmdb;
 struct mapcache_cache_lmdb {
   mapcache_cache cache;
@@ -51,9 +52,10 @@ struct mapcache_cache_lmdb {
   size_t max_size;
   unsigned int max_readers;
   MDB_env *env;
+  MDB_dbi dbi;
 };
 
-/* LMDB env should be opened only once per process */
+/* A LMDB DB environment for a single directory */
 typedef struct lmdb_env_s lmdb_env_s;
 struct lmdb_env_s {
   MDB_env *env;
@@ -61,7 +63,9 @@ struct lmdb_env_s {
   int is_open;
 };
 
-static lmdb_env_s *lmdb_env;
+/* A hash table of all open environments with directories as keys */
+static apr_hash_t* lmdb_env_ht = NULL;
+static apr_thread_mutex_t *lmdb_env_mutex = NULL;
 
 static int _mapcache_cache_lmdb_has_tile(mapcache_context *ctx, mapcache_cache *pcache, mapcache_tile *tile)
 {
@@ -71,7 +75,7 @@ static int _mapcache_cache_lmdb_has_tile(mapcache_context *ctx, mapcache_cache *
   mapcache_cache_lmdb *cache = (mapcache_cache_lmdb*)pcache;
   char *skey;
 
-  if (lmdb_env->is_open == 0) {
+  if (!cache->env) {
     ctx->set_error(ctx,500,"lmdb is not open %s",cache->basedir);
     return MAPCACHE_FALSE;
   }
@@ -80,13 +84,13 @@ static int _mapcache_cache_lmdb_has_tile(mapcache_context *ctx, mapcache_cache *
   key.mv_size = strlen(skey)+1;
   key.mv_data = skey;
 
-  rc = mdb_txn_begin(lmdb_env->env, NULL, MDB_RDONLY, &txn);
+  rc = mdb_txn_begin(cache->env, NULL, MDB_RDONLY, &txn);
   if (rc) {
     ctx->set_error(ctx,500,"lmdb failed to begin transaction for has_tile in %s:%s",cache->basedir,mdb_strerror(rc));
     return MAPCACHE_FALSE;
   }
 
-  rc = mdb_get(txn, lmdb_env->dbi, &key, &data);
+  rc = mdb_get(txn, cache->dbi, &key, &data);
   if(rc == 0) {
     ret = MAPCACHE_TRUE;
   } else if(rc == MDB_NOTFOUND) {
@@ -113,7 +117,7 @@ static void _mapcache_cache_lmdb_delete(mapcache_context *ctx, mapcache_cache *p
   mapcache_cache_lmdb *cache = (mapcache_cache_lmdb*)pcache;
   char *skey;
 
-  if (lmdb_env->is_open == 0) {
+  if (!cache->env) {
     ctx->set_error(ctx,500,"lmdb is not open %s",cache->basedir);
     return;
   }
@@ -122,13 +126,13 @@ static void _mapcache_cache_lmdb_delete(mapcache_context *ctx, mapcache_cache *p
   key.mv_size = strlen(skey)+1;
   key.mv_data = skey;
 
-  rc = mdb_txn_begin(lmdb_env->env, NULL, 0, &txn);
+  rc = mdb_txn_begin(cache->env, NULL, 0, &txn);
   if (rc) {
     ctx->set_error(ctx,500,"lmdb failed to begin transaction for delete in %s:%s",cache->basedir,mdb_strerror(rc));
     return;
   }
 
-  rc = mdb_del(txn, lmdb_env->dbi, &key, NULL);
+  rc = mdb_del(txn, cache->dbi, &key, NULL);
   if (rc) {
     if (rc == MDB_NOTFOUND) {
       ctx->log(ctx,MAPCACHE_DEBUG,"attempt to delete tile %s absent in the db %s",skey,cache->basedir);
@@ -152,7 +156,7 @@ static int _mapcache_cache_lmdb_get(mapcache_context *ctx, mapcache_cache *pcach
   char *skey;
   mapcache_cache_lmdb *cache = (mapcache_cache_lmdb*)pcache;
 
-  if (lmdb_env->is_open == 0) {
+  if (!cache->env) {
     ctx->set_error(ctx,500,"lmdb is not open %s",cache->basedir);
     return MAPCACHE_FALSE;
   }
@@ -161,13 +165,13 @@ static int _mapcache_cache_lmdb_get(mapcache_context *ctx, mapcache_cache *pcach
   key.mv_size = strlen(skey)+1;
   key.mv_data = skey;
 
-  rc = mdb_txn_begin(lmdb_env->env, NULL, MDB_RDONLY, &txn);
+  rc = mdb_txn_begin(cache->env, NULL, MDB_RDONLY, &txn);
   if (rc) {
     ctx->set_error(ctx,500,"lmdb failed to begin transaction for get in %s:%s",cache->basedir,mdb_strerror(rc));
     return MAPCACHE_FALSE;
   }
 
-  rc = mdb_get(txn, lmdb_env->dbi, &key, &data);
+  rc = mdb_get(txn, cache->dbi, &key, &data);
   if(rc == 0) {
     if(((char*)(data.mv_data))[0] == '#') {
       tile->encoded_data = mapcache_empty_png_decode(ctx,tile->grid_link->grid->tile_sx, tile->grid_link->grid->tile_sy, (unsigned char*)data.mv_data,&tile->nodata);
@@ -178,6 +182,7 @@ static int _mapcache_cache_lmdb_get(mapcache_context *ctx, mapcache_cache *pcach
       tile->encoded_data->avail = data.mv_size;
     }
     tile->mtime = *((apr_time_t*)(((char*)data.mv_data)+data.mv_size-sizeof(apr_time_t)));
+    
     ret = MAPCACHE_SUCCESS;
   } else if(rc == MDB_NOTFOUND) {
     ret = MAPCACHE_CACHE_MISS;
@@ -232,18 +237,18 @@ static void _mapcache_cache_lmdb_set(mapcache_context *ctx, mapcache_cache *pcac
     tile->encoded_data->size -= sizeof(apr_time_t);
   }
 
-  if (lmdb_env->is_open == 0) {
+  if (!cache->env) {
     ctx->set_error(ctx,500,"lmdb is not open %s",cache->basedir);
     return;
   }
 
-  rc = mdb_txn_begin(lmdb_env->env, NULL, 0, &txn);
+  rc = mdb_txn_begin(cache->env, NULL, 0, &txn);
   if (rc) {
     ctx->set_error(ctx,500,"lmdb failed to begin transaction for set in %s:%s",cache->basedir,mdb_strerror(rc));
     return;
   }
 
-  rc = mdb_put(txn, lmdb_env->dbi, &key, &data, 0);
+  rc = mdb_put(txn, cache->dbi, &key, &data, 0);
   if(rc) {
     ctx->set_error(ctx,500,"lmbd failed to put for tile_set in %s:%s",cache->basedir,mdb_strerror(rc));
   }
@@ -265,12 +270,12 @@ static void _mapcache_cache_lmdb_multiset(mapcache_context *ctx, mapcache_cache
 
   now = apr_time_now();
 
-  if (lmdb_env->is_open == 0) {
+  if (!cache->env) {
     ctx->set_error(ctx,500,"lmdb is not open %s",cache->basedir);
     return;
   }
 
-  rc = mdb_txn_begin(lmdb_env->env, NULL, 0, &txn);
+  rc = mdb_txn_begin(cache->env, NULL, 0, &txn);
   if (rc) {
     ctx->set_error(ctx,500,"lmdb failed to begin transaction for multiset in %s:%s",cache->basedir,mdb_strerror(rc));
     return;
@@ -308,7 +313,7 @@ static void _mapcache_cache_lmdb_multiset(mapcache_context *ctx, mapcache_cache
     key.mv_data = skey;
     key.mv_size = strlen(skey)+1;
 
-    rc = mdb_put(txn, lmdb_env->dbi, &key, &data, 0);
+    rc = mdb_put(txn, cache->dbi, &key, &data, 0);
     if(rc) {
       ctx->set_error(ctx,500,"lmbd failed to put for multiset in %s:%s",cache->basedir,mdb_strerror(rc));
       goto abort_txn;
@@ -386,76 +391,136 @@ static void _mapcache_cache_lmdb_configuration_post_config(mapcache_context *ctx
   }
 }
 
+/**
+ * Clean-up LMDB at shutdown
+ * 
+ * \private \memberof mapcache_cache_lmdb
+ */
+static apr_status_t _lmdb_cleanup(void *data) {
+  apr_hash_index_t *hi;
+  if(lmdb_env_ht) {
+    for (hi = apr_hash_first(NULL, lmdb_env_ht); hi; hi = apr_hash_next(hi)) {
+      lmdb_env_s *env_s;
+      apr_hash_this(hi, NULL, NULL, (void**)&env_s);
+      if(env_s->is_open) {
+        mdb_dbi_close(env_s->env, env_s->dbi);
+        mdb_env_close(env_s->env);
+        env_s->is_open = 0;
+      }
+    }
+    lmdb_env_ht = NULL;
+  }
+  if(lmdb_env_mutex) {
+    apr_thread_mutex_destroy(lmdb_env_mutex);
+    lmdb_env_mutex = NULL;
+  }
+  return APR_SUCCESS;
+}
+
+
+/**
+ * Open LMDB database at a process start
+ * 
+ * LMDB requires single DB environment to be opened only once per process
+ * thus here new environments are created when a new process starts
+ */
 static void _mapcache_cache_lmdb_child_init(mapcache_context *ctx, mapcache_cache *cache, apr_pool_t *pchild)
 {
   mapcache_cache_lmdb *dcache = (mapcache_cache_lmdb*)cache;
-
   int rc, dead=0;
   MDB_txn *txn;
+  lmdb_env_s *env_s = NULL;
+
+  if(!lmdb_env_mutex) {
+    apr_thread_mutex_create(&lmdb_env_mutex, APR_THREAD_MUTEX_DEFAULT, pchild);
+  }
+
+  apr_thread_mutex_lock(lmdb_env_mutex);
+
+  if(!lmdb_env_ht) {
+    lmdb_env_ht = apr_hash_make(pchild);
+    apr_pool_cleanup_register(pchild, NULL, _lmdb_cleanup, apr_pool_cleanup_null);
+  }
+
+  env_s = apr_hash_get(lmdb_env_ht, dcache->basedir, APR_HASH_KEY_STRING);
+
+  /* Environment for particular base dir is alreay open */
+  if(env_s) {
+    dcache->env = env_s->env;
+    dcache->dbi = env_s->dbi;
+    apr_thread_mutex_unlock(lmdb_env_mutex);
+    return;
+  }
+
+  env_s = apr_pcalloc(pchild,sizeof(lmdb_env_s));
+  rc = mdb_env_create(&(env_s->env));
 
-  lmdb_env_s *var = apr_pcalloc(ctx->pool,sizeof(lmdb_env_s));
-  lmdb_env = var;
-  lmdb_env->is_open = 0;
-  rc = mdb_env_create(&(lmdb_env->env));
   if (rc) {
     ctx->set_error(ctx,500,"lmdb failed to create environment of database %s:%s",dcache->basedir,mdb_strerror(rc));
-    return;
+    goto cleanup;
   }
-  rc = mdb_env_set_mapsize(lmdb_env->env, dcache->max_size);
+  rc = mdb_env_set_mapsize(env_s->env, dcache->max_size);
   if (rc) {
     ctx->set_error(ctx,500,"lmdb failed to set maximum size of database %s:%s",dcache->basedir,mdb_strerror(rc));
-    mdb_env_close(lmdb_env->env);
-    return;
+    mdb_env_close(env_s->env);
+    goto cleanup;
   }
   if (dcache->max_readers) {
-    rc = mdb_env_set_maxreaders(lmdb_env->env, dcache->max_readers);
+    rc = mdb_env_set_maxreaders(env_s->env, dcache->max_readers);
     if (rc) {
       ctx->set_error(ctx,500,"lmdb failed to set maximum readers of database %s:%s",dcache->basedir,mdb_strerror(rc));
-      mdb_env_close(lmdb_env->env);
-      return;
+      mdb_env_close(env_s->env);
+      goto cleanup;
     }
   }
   /* Clean out any stale reader entries from lock table */
-  rc = mdb_reader_check(lmdb_env->env, &dead);
+  rc = mdb_reader_check(env_s->env, &dead);
   if (rc) {
     ctx->set_error(ctx,500,"lmdb failed to clear stale readers of database %s:%s",dcache->basedir,mdb_strerror(rc));
-    mdb_env_close(lmdb_env->env);
-    return;
+    mdb_env_close(env_s->env);
+    goto cleanup;
   }
   if (dead) {
     ctx->log(ctx,MAPCACHE_NOTICE,"lmdb cleared %d stale readers of database %s",dead,dcache->basedir);
   }
-  rc = mdb_env_open(lmdb_env->env, dcache->basedir, 0, 0664);
+  rc = mdb_env_open(env_s->env, dcache->basedir, 0, 0664);
   if (rc) {
     ctx->set_error(ctx,500,"lmdb failed to open environment of database %s:%s",dcache->basedir,mdb_strerror(rc));
-    mdb_env_close(lmdb_env->env);
-    return;
+    mdb_env_close(env_s->env);
+    goto cleanup;
   }
-  rc = mdb_txn_begin(lmdb_env->env, NULL, MDB_CREATE, &txn);
+  rc = mdb_txn_begin(env_s->env, NULL, MDB_CREATE, &txn);
   if (rc) {
     ctx->set_error(ctx,500,"lmdb failed to begin transaction of database %s:%s",dcache->basedir,mdb_strerror(rc));
-    mdb_env_close(lmdb_env->env);
-    return;
+    mdb_env_close(env_s->env);
+    goto cleanup;
   }
-  rc = mdb_dbi_open(txn, NULL, 0, &(lmdb_env->dbi));
+  rc = mdb_dbi_open(txn, NULL, 0, &(env_s->dbi));
   if (rc) {
     ctx->set_error(ctx,500,"lmdb failed to open dbi of database %s:%s",dcache->basedir,mdb_strerror(rc));
     mdb_txn_abort(txn);
-    mdb_env_close(lmdb_env->env);
-    return;
+    mdb_env_close(env_s->env);
+    goto cleanup;
   }
   rc = mdb_txn_commit(txn);
   if (rc) {
     ctx->set_error(ctx,500,"lmdb failed to commit transaction of database %s:%s",dcache->basedir,mdb_strerror(rc));
-    mdb_dbi_close(lmdb_env->env, lmdb_env->dbi);
-    mdb_env_close(lmdb_env->env);
-    return;
+    mdb_dbi_close(env_s->env, env_s->dbi);
+    mdb_env_close(env_s->env);
+    goto cleanup;
   }
-  lmdb_env->is_open = 1;
+  env_s->is_open = 1;
+  dcache->env = env_s->env;
+  dcache->dbi = env_s->dbi;
+  apr_hash_set(lmdb_env_ht, dcache->basedir, APR_HASH_KEY_STRING, env_s);
+
+cleanup:
+  apr_thread_mutex_unlock(lmdb_env_mutex);
 }
 
+
 /**
- * \brief creates and initializes a mapcache_dbd_cache
+ * \brief creates and initializes a mapcache_lmdb_cache
  */
 mapcache_cache* mapcache_cache_lmdb_create(mapcache_context *ctx)
 {
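
The new per-directory environment registry above can be sketched with a stdlib-only analogy: one "environment" per base directory, opened at most once per process and reused thereafter. The real code keys an apr_hash_t by basedir and serializes access with an apr_thread_mutex_t; the names below are illustrative only:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Minimal once-per-key registry: env_for_dir() opens a handle the first
 * time a directory is seen and returns the cached handle on every later
 * call, mirroring how the LMDB cache now shares MDB_env per basedir. */
typedef struct entry { char *dir; int handle; struct entry *next; } entry;

static entry *registry = NULL;
static int next_handle = 1;

static int env_for_dir(const char *dir) {
  entry *e;
  for (e = registry; e; e = e->next)
    if (strcmp(e->dir, dir) == 0)
      return e->handle;            /* already open: reuse */
  e = malloc(sizeof *e);
  e->dir = malloc(strlen(dir) + 1);
  strcpy(e->dir, dir);
  e->handle = next_handle++;       /* stands in for mdb_env_open() */
  e->next = registry;
  registry = e;
  return e->handle;
}
```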


=====================================
lib/dimension.c
=====================================
@@ -131,26 +131,36 @@ static apr_array_header_t* _mapcache_dimension_regex_get_entries_for_value(mapca
 {
   mapcache_dimension_regex *dimension = (mapcache_dimension_regex*)dim;
   apr_array_header_t *values = apr_array_make(ctx->pool,1,sizeof(char*));
-#if defined(USE_PCRE2)
-  pcre2_match_data *match_data;
-  int rc = pcre2_match(dimension->pcregex,(PCRE2_SPTR)value,strlen(value),0,0,match_data,NULL);
-  if(rc>0) {
-    APR_ARRAY_PUSH(values,char*) = apr_pstrdup(ctx->pool,value);
-  }
-#elif defined(USE_PCRE)
-  int ovector[30];
-  int rc = pcre_exec(dimension->pcregex,NULL,value,strlen(value),0,0,ovector,30);
-  if(rc>0) {
-    APR_ARRAY_PUSH(values,char*) = apr_pstrdup(ctx->pool,value);
-  }
-#else
-  if(!regexec(dimension->regex,value,0,0,0)) {
-    APR_ARRAY_PUSH(values,char*) = apr_pstrdup(ctx->pool,value);
+  #if defined(USE_PCRE2)
+  {
+    pcre2_match_data *match_data;
+    int rc;
+    match_data = pcre2_match_data_create_from_pattern(dimension->pcregex, NULL);
+    rc = pcre2_match(dimension->pcregex,(PCRE2_SPTR)value,strlen(value),0,0,match_data,NULL);
+    if(rc>0) {
+      APR_ARRAY_PUSH(values,char*) = apr_pstrdup(ctx->pool,value);
+    } else {
+      ctx->set_error(ctx,400,"failed to validate requested value for %s (%s)",dim->class_name,dim->name);
+    }
+    pcre2_match_data_free(match_data);
   }
-#endif
-  else {
-    ctx->set_error(ctx,400,"failed to validate requested value for %s (%s)",dim->class_name,dim->name);
+  #elif defined(USE_PCRE)
+  {
+    int ovector[30];
+    int rc = pcre_exec(dimension->pcregex,NULL,value,strlen(value),0,0,ovector,30);
+    if(rc>0) {
+      APR_ARRAY_PUSH(values,char*) = apr_pstrdup(ctx->pool,value);
+    } else {
+      ctx->set_error(ctx,400,"failed to validate requested value for %s (%s)",dim->class_name,dim->name);
+    }
   }
+  #else
+    if(!regexec(dimension->regex,value,0,0,0)) {
+      APR_ARRAY_PUSH(values,char*) = apr_pstrdup(ctx->pool,value);
+    } else {
+      ctx->set_error(ctx,400,"failed to validate requested value for %s (%s)",dim->class_name,dim->name);
+    }
+  #endif
   return values;
 }
 
@@ -181,8 +191,8 @@ static void _mapcache_dimension_regex_parse_xml(mapcache_context *ctx, mapcache_
 #if defined(USE_PCRE2)
   {
     int pcre_err;
-    PCRE2_SIZE *pcre_offset;
-    dimension->pcregex = pcre2_compile((PCRE2_SPTR8)dimension->regex_string,strlen(dimension->regex_string), 0, &pcre_err, pcre_offset, NULL);
+    PCRE2_SIZE pcre_offset;
+    dimension->pcregex = pcre2_compile((PCRE2_SPTR8)dimension->regex_string,strlen(dimension->regex_string), 0, &pcre_err, &pcre_offset, NULL);
     if(!dimension->pcregex) {
       ctx->set_error(ctx,400,"failed to compile regular expression \"%s\" for %s \"%s\": %d",
                      dimension->regex_string,dim->class_name,dim->name,pcre_err);


=====================================
lib/source_gdal.c
=====================================
@@ -347,8 +347,8 @@ CreateWarpedVRT( GDALDatasetH hSrcDS,
     {
         double dfDesiredXRes = (extent->maxx - extent->minx) / width;
         double dfDesiredYRes = (extent->maxy - extent->miny) / height;
-        double dfDesiredRes = MIN( dfDesiredXRes, dfDesiredYRes );
-        double dfGuessedFullRes = MIN( adfDstGeoTransform[1],
+        double dfDesiredRes = MAPCACHE_MIN( dfDesiredXRes, dfDesiredYRes );
+        double dfGuessedFullRes = MAPCACHE_MIN( adfDstGeoTransform[1],
                                    fabs(adfDstGeoTransform[5]) );
         double dfApproxDstOvrRatio = dfDesiredRes / dfGuessedFullRes;
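
The change swaps GDAL's internal MIN macro for a project-local MAPCACHE_MIN. Assuming the usual conditional expansion, the overview-ratio computation in CreateWarpedVRT boils down to the following sketch (the macro definition here is assumed, not copied from MapCache headers):

```c
#include <assert.h>
#include <math.h>

/* Assumed expansion of MAPCACHE_MIN -- the standard ternary minimum. */
#define MAPCACHE_MIN(a,b) (((a) < (b)) ? (a) : (b))

/* Ratio of the desired output resolution to the source's full resolution,
 * as computed above to pick an appropriate overview level. */
static double approx_ovr_ratio(double minx, double maxx, double miny, double maxy,
                               int width, int height,
                               double full_xres, double full_yres) {
  double xres = (maxx - minx) / width;
  double yres = (maxy - miny) / height;
  double desired = MAPCACHE_MIN(xres, yres);
  double full = MAPCACHE_MIN(full_xres, fabs(full_yres));
  return desired / full;
}
```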
 


=====================================
lib/source_mapserver.c
=====================================
@@ -60,7 +60,7 @@ struct mc_mapobj {
 void mapcache_mapserver_connection_constructor(mapcache_context *ctx, void **conn_, void *params) {
   mapcache_source_mapserver *src = (mapcache_source_mapserver*) params;
   struct mc_mapobj *mcmap = calloc(1,sizeof(struct mc_mapobj));
-  mcmap->map = msLoadMap(src->mapfile,NULL);
+  mcmap->map = msLoadMap(src->mapfile,NULL, NULL);
   if(!mcmap->map) {
     errorObj *errors = NULL;
     ctx->set_error(ctx, 500, "Failed to load mapfile '%s'",src->mapfile);
@@ -229,7 +229,7 @@ void _mapcache_source_mapserver_configuration_check(mapcache_context *ctx, mapca
   msSetup();
 
   /* do a test load to check the mapfile is correct */
-  map = msLoadMap(src->mapfile, NULL);
+  map = msLoadMap(src->mapfile, NULL, NULL);
   if(!map) {
     msWriteError(stderr);
     ctx->set_error(ctx,400,"failed to load mapfile \"%s\"",src->mapfile);


=====================================
tests/data/mapcache_backend_template.xml
=====================================
@@ -0,0 +1,72 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<mapcache>
+    <source name="synthetic-source" type="gdal">
+        <resample>NEAREST</resample>
+        <data>SYNTHETIC_GEOTIFF_PATH_PLACEHOLDER</data>
+    </source>
+    <grid name="synthetic_grid">
+        <extent>-500000 -500000 500000 500000</extent>
+        <srs>EPSG:3857</srs>
+        <units>m</units>
+        <origin>top-left</origin>
+        <size>256 256</size>
+        <resolutions>
+           1000
+           500
+           250
+           125
+           62.5
+           31.25
+           15.625
+           7.8125
+           3.90625
+           1.953125
+           0.9765625
+           0.48828125
+           0.244140625
+           0.1220703125
+           0.06103515625
+           0.030517578125
+           0.0152587890625
+           0.00762939453125
+        </resolutions>
+    </grid>
+    <cache name="disk" type="disk">
+        <base>TILE_CACHE_BASE_DIR/disk</base>
+    </cache>
+    <tileset name="disk-tileset">
+        <cache>disk</cache>
+        <source>synthetic-source</source>
+        <grid>synthetic_grid</grid>
+        <format>PNG</format> <!-- Using PNG for lossless data for correctness tests -->
+        <resample>NEAREST</resample>
+        <metatile>1 1</metatile>
+    </tileset>
+    <!-- Required utils have not landed yet
+    <cache name="lmdb" type="lmdb">
+        <base>TILE_CACHE_BASE_DIR/lmdb</base>
+    </cache>
+    <tileset name="lmdb-tileset">
+        <cache>lmdb</cache>
+        <source>synthetic-source</source>
+        <grid>synthetic_grid</grid>
+        <format>PNG</format>
+        <resample>NEAREST</resample>
+        <metatile>1 1</metatile>
+    </tileset>
+    -->
+    <cache name="sqlite" type="sqlite3">
+        <dbfile>TILE_CACHE_BASE_DIR/cache.sqlite</dbfile>
+    </cache>
+    <tileset name="sqlite-tileset">
+        <cache>sqlite</cache>
+        <source>synthetic-source</source>
+        <grid>synthetic_grid</grid>
+        <format>PNG</format>
+        <resample>NEAREST</resample>
+        <metatile>1 1</metatile>
+    </tileset>
+    <service type="wmts" enabled="true"/>
+    <service type="wms" enabled="true"/>
+    <log_level>debug</log_level>
+</mapcache>
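The 18 `<resolutions>` entries in the grid above form a power-of-two pyramid starting at 1000 map units per pixel: each level halves the previous one. A quick sketch verifying the listed values (not part of the upstream test suite):

```python
# Each zoom level halves the resolution of the previous one,
# starting from 1000 map units per pixel at level 0.
resolutions = [1000 / 2 ** z for z in range(18)]

print(resolutions[0])   # 1000.0
print(resolutions[4])   # 62.5
print(resolutions[17])  # 0.00762939453125
```

The last computed value matches the final `<resolutions>` entry exactly, since the halving stays representable in binary floating point.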


=====================================
tests/mcpython/generate_synthetic_geotiff.py
=====================================
@@ -0,0 +1,98 @@
+# Project:  MapCache
+# Purpose:  Generates a GeoTIFF with a predictable content to serve as a reference
+# Author:   Maris Nartiss
+#
+# *****************************************************************************
+# Copyright (c) 2025 Regents of the University of Minnesota.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and to permit persons to whom the
+# Software is furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies of this Software or works derived from this Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+# DEALINGS IN THE SOFTWARE.
+# ****************************************************************************/
+
+import numpy as np
+import logging
+
+from osgeo import gdal, osr
+
+
+def generate_synthetic_geotiff(
+    output_filename="synthetic_test_data.tif", width=256, height=256
+):
+    """
+    Generates a synthetic GeoTIFF with unique pixel values based on their coordinates.
+    Each pixel value encodes its row and column index, allowing for detection of
+    shift and rotation errors.
+    """
+    osr.DontUseExceptions()
+    # Define image properties
+    wm_min_x = -500000
+    wm_max_x = 500000
+    wm_min_y = -500000
+    wm_max_y = 500000
+
+    pixel_width = (wm_max_x - wm_min_x) / width
+    pixel_height = (wm_min_y - wm_max_y) / height  # Negative for north-up image
+
+    # GeoTransform: [top-left x, pixel width, 0, top-left y, 0, pixel height]
+    # Top-left corner is (wm_min_x, wm_max_y)
+    geotransform = [wm_min_x, pixel_width, 0, wm_max_y, 0, pixel_height]
+
+    # Spatial Reference System (Web Mercator)
+    srs = osr.SpatialReference()
+    srs.ImportFromEPSG(3857)
+
+    # Determine appropriate data type based on image dimensions
+    # For 3-band output, we'll use uint8 for each band.
+    gdal_datatype = gdal.GDT_Byte
+    numpy_datatype = np.uint8
+    num_bands = 3
+
+    # Create the GeoTIFF file
+    driver = gdal.GetDriverByName("GTiff")
+    dataset = driver.Create(output_filename, width, height, num_bands, gdal_datatype)
+
+    if dataset is None:
+        logging.error(f"Error: Could not create {output_filename}")
+        return
+
+    dataset.SetGeoTransform(geotransform)
+    dataset.SetSpatialRef(srs)
+
+    # Create NumPy arrays to hold the pixel data for each band
+    data_band1 = np.zeros((height, width), dtype=numpy_datatype)
+    data_band2 = np.zeros((height, width), dtype=numpy_datatype)
+    data_band3 = np.zeros((height, width), dtype=numpy_datatype)
+
+    # Generate unique pixel values based on row and column index for each band
+    # Band 1 (Red): row % 256
+    # Band 2 (Green): col % 256
+    # Band 3 (Blue): (row + col) % 256
+    for row in range(height):
+        for col in range(width):
+            data_band1[row, col] = row % 256
+            data_band2[row, col] = col % 256
+            data_band3[row, col] = (row + col) % 256
+
+    # Write the data to each band
+    dataset.GetRasterBand(1).WriteArray(data_band1)
+    dataset.GetRasterBand(2).WriteArray(data_band2)
+    dataset.GetRasterBand(3).WriteArray(data_band3)
+
+    # Close the dataset
+    dataset = None
+    logging.info(f"Successfully created synthetic GeoTIFF: {output_filename}")


=====================================
tests/mcpython/requirements.txt
=====================================
@@ -0,0 +1,2 @@
+numpy
+pytest


=====================================
tests/mcpython/test_disk_cache.py
=====================================
@@ -0,0 +1,182 @@
+# Project:  MapCache
+# Purpose:  Test MapCache disk based storage backend
+# Author:   Maris Nartiss
+#
+# *****************************************************************************
+# Copyright (c) 2025 Regents of the University of Minnesota.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and to permit persons to whom the
+# Software is furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies of this Software or works derived from this Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+# DEALINGS IN THE SOFTWARE.
+# ****************************************************************************/
+
+import os
+import pytest
+import numpy as np
+import logging
+
+from osgeo import gdal
+
+# Import the GeoTIFF generation function
+from generate_synthetic_geotiff import generate_synthetic_geotiff
+
+# Import generic verification functions and constants
+from verification_core import (
+    TILE_SIZE,
+    TILE_CACHE_BASE_DIR,
+    TEMP_MAPCACHE_CONFIG_DIR,
+    calculate_expected_tile_data,
+    compare_tile_arrays,
+    cleanup,
+    run_seeder,
+    create_temp_mapcache_config,
+)
+
+# --- Configuration --- #
+SYNTHETIC_GEOTIFF_FILENAME = os.path.join(
+    TEMP_MAPCACHE_CONFIG_DIR, "synthetic_test_data.tif"
+)
+GEOTIFF_WIDTH = 512
+GEOTIFF_HEIGHT = 512
+MAPCACHE_TEMPLATE_CONFIG = os.path.join(
+    os.path.dirname(__file__), "..", "data", "mapcache_backend_template.xml"
+)
+
+# --- Grid Parameters --- #
+INITIAL_RESOLUTION = 1000
+ORIGIN_X = -500000
+ORIGIN_Y = 500000
+
+
+def read_tile(tile_path, tile_size=TILE_SIZE):
+    if not os.path.exists(tile_path):
+        logging.error(f"Error: Actual tile not found at {tile_path}")
+        return None
+
+    actual_ds = gdal.Open(tile_path, gdal.GA_ReadOnly)
+    if actual_ds is None:
+        logging.error(f"Error: Could not open actual tile {tile_path}")
+        return None
+
+    # Read all bands from the actual tile
+    actual_tile_data = np.zeros(
+        (tile_size, tile_size, actual_ds.RasterCount), dtype=np.uint8
+    )
+    for i in range(actual_ds.RasterCount):
+        actual_tile_data[:, :, i] = actual_ds.GetRasterBand(i + 1).ReadAsArray()
+
+    actual_ds = None  # Close the dataset
+
+    # Mapcache might output 4 bands (RGBA) even if source is 3 bands. Handle this.
+    # If actual_tile_data has 4 bands, ignore the alpha band for comparison.
+    if actual_tile_data.shape[2] == 4:
+        actual_tile_data_rgb = actual_tile_data[:, :, :3]  # Take only RGB bands
+    elif actual_tile_data.shape[2] == 3:
+        actual_tile_data_rgb = actual_tile_data
+    else:
+        logging.error(
+            f"Error: Unexpected number of bands in actual tile: {actual_tile_data.shape[2]}"
+        )
+        return None
+
+    return actual_tile_data_rgb
+
+
+def run_mapcache_test(zoom, x, y, geotiff_path, initial_resolution, origin_x, origin_y):
+    logging.info(f"Running MapCache test for tile Z{zoom}-X{x}-Y{y}...")
+
+    # Calculate expected tile data using generic function
+    expected_tile_data = calculate_expected_tile_data(
+        zoom,
+        x,
+        y,
+        geotiff_path,
+        initial_resolution,
+        origin_x,
+        origin_y,
+    )
+    if expected_tile_data is None:
+        return False
+
+    # --- Read Actual Tile Data ---
+    actual_tile_path = os.path.join(
+        TILE_CACHE_BASE_DIR,
+        "disk",
+        "disk-tileset",
+        "synthetic_grid",
+        f"{zoom:02d}",
+        f"{x // 1000000:03d}",
+        f"{(x // 1000) % 1000:03d}",
+        f"{x % 1000:03d}",
+        f"{y // 1000000:03d}",
+        f"{(y // 1000) % 1000:03d}",
+        f"{y % 1000:03d}.png",
+    )
+
+    logging.info(f"Reading tile {actual_tile_path}")
+    actual_tile_data_rgb = read_tile(actual_tile_path, TILE_SIZE)
+    if actual_tile_data_rgb is None:
+        return False
+
+    # --- Compare ---
+    return compare_tile_arrays(expected_tile_data, actual_tile_data_rgb, zoom, x, y)
+
+
+@pytest.fixture(scope="module")
+def setup_test_environment(request):
+    cleanup()
+    logging.info("Testing disk storage backend...")
+    os.makedirs(TEMP_MAPCACHE_CONFIG_DIR, exist_ok=True)
+    generate_synthetic_geotiff(
+        output_filename=SYNTHETIC_GEOTIFF_FILENAME,
+        width=GEOTIFF_WIDTH,
+        height=GEOTIFF_HEIGHT,
+    )
+    create_temp_mapcache_config(
+        SYNTHETIC_GEOTIFF_FILENAME,
+        MAPCACHE_TEMPLATE_CONFIG,
+    )
+    run_seeder("disk-tileset", "0,1")
+
+    def teardown():
+        cleanup()
+        logging.info("Cleanup complete.")
+
+    request.addfinalizer(teardown)
+
+
+def test_disk_tiles(setup_test_environment):
+    ok0 = run_mapcache_test(
+        0,
+        0,
+        0,
+        SYNTHETIC_GEOTIFF_FILENAME,
+        INITIAL_RESOLUTION,
+        ORIGIN_X,
+        ORIGIN_Y,
+    )
+    ok1 = run_mapcache_test(
+        1,
+        1,
+        2,
+        SYNTHETIC_GEOTIFF_FILENAME,
+        INITIAL_RESOLUTION,
+        ORIGIN_X,
+        ORIGIN_Y,
+    )
+    assert ok0
+    assert ok1


=====================================
tests/mcpython/test_sqlite_cache.py
=====================================
@@ -0,0 +1,195 @@
+# Project:  MapCache
+# Purpose:  Test MapCache SQLite based storage backend
+# Author:   Maris Nartiss
+#
+# *****************************************************************************
+# Copyright (c) 2025 Regents of the University of Minnesota.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and to permit persons to whom the
+# Software is furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies of this Software or works derived from this Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+# DEALINGS IN THE SOFTWARE.
+# ****************************************************************************/
+
+import os
+import pytest
+import numpy as np
+import sqlite3
+import logging
+
+from osgeo import gdal
+
+# Import the GeoTIFF generation function
+from generate_synthetic_geotiff import generate_synthetic_geotiff
+
+# Import generic verification functions and constants
+from verification_core import (
+    TILE_SIZE,
+    TILE_CACHE_BASE_DIR,
+    TEMP_MAPCACHE_CONFIG_DIR,
+    calculate_expected_tile_data,
+    compare_tile_arrays,
+    cleanup,
+    run_seeder,
+    create_temp_mapcache_config,
+)
+
+# --- Configuration --- #
+SYNTHETIC_GEOTIFF_FILENAME = os.path.join(
+    TEMP_MAPCACHE_CONFIG_DIR, "synthetic_test_data.tif"
+)
+GEOTIFF_WIDTH = 512
+GEOTIFF_HEIGHT = 512
+MAPCACHE_TEMPLATE_CONFIG = os.path.join(
+    os.path.dirname(__file__), "..", "data", "mapcache_backend_template.xml"
+)
+
+# --- Grid Parameters --- #
+INITIAL_RESOLUTION = 1000
+ORIGIN_X = -500000
+ORIGIN_Y = 500000
+
+
+def read_tile(zoom, x, y, tile_size=TILE_SIZE):
+    db_file = os.path.join(TILE_CACHE_BASE_DIR, "cache.sqlite")
+    tmp_tile = os.path.join(TILE_CACHE_BASE_DIR, "temp_tile_sqlite.png")
+    if not os.path.exists(db_file):
+        logging.error(f"Error: Database file not found at {db_file}")
+        return None
+
+    try:
+        con = sqlite3.connect(db_file)
+        cur = con.cursor()
+        cur.execute(
+            "SELECT data FROM tiles WHERE tileset='sqlite-tileset' AND "
+            "grid='synthetic_grid' AND x=? AND y=? AND z=?",
+            (x, y, zoom),
+        )
+        row = cur.fetchone()
+        if row is None:
+            logging.error(f"Error: Tile not found in database for Z{zoom}-X{x}-Y{y}")
+            return None
+
+        # Assume SQLite returned raw PNG data
+        with open(tmp_tile, "wb") as f:
+            f.write(row[0])
+
+        actual_ds = gdal.Open(tmp_tile, gdal.GA_ReadOnly)
+        if actual_ds is None:
+            logging.error(
+                f"Error: Could not open tile from database data. Tile: {tmp_tile}"
+            )
+            return None
+
+        # Read all bands from the actual tile
+        actual_tile_data = np.zeros(
+            (tile_size, tile_size, actual_ds.RasterCount), dtype=np.uint8
+        )
+        for i in range(actual_ds.RasterCount):
+            actual_tile_data[:, :, i] = actual_ds.GetRasterBand(i + 1).ReadAsArray()
+
+        actual_ds = None  # Close the dataset
+
+        # Mapcache might output 4 bands (RGBA) even if source is 3 bands. Handle this.
+        if actual_tile_data.shape[2] == 4:
+            actual_tile_data_rgb = actual_tile_data[:, :, :3]  # Take only RGB bands
+        elif actual_tile_data.shape[2] == 3:
+            actual_tile_data_rgb = actual_tile_data
+        else:
+            logging.error(
+                f"Error: Unexpected number of bands in actual tile: {actual_tile_data.shape[2]}"
+            )
+            return None
+
+        return actual_tile_data_rgb
+
+    except sqlite3.Error as e:
+        logging.error(f"Database error: {e}")
+        return None
+    finally:
+        if con:
+            con.close()
+
+
+def run_mapcache_test(zoom, x, y, geotiff_path, initial_resolution, origin_x, origin_y):
+    logging.info(f"Running MapCache test for tile Z{zoom}-X{x}-Y{y}...")
+
+    # Calculate expected tile data using generic function
+    expected_tile_data = calculate_expected_tile_data(
+        zoom,
+        x,
+        y,
+        geotiff_path,
+        initial_resolution,
+        origin_x,
+        origin_y,
+    )
+    if expected_tile_data is None:
+        return False
+
+    # --- Read Actual Tile Data ---
+    actual_tile_data_rgb = read_tile(zoom, x, y, TILE_SIZE)
+    if actual_tile_data_rgb is None:
+        return False
+
+    # --- Compare ---
+    return compare_tile_arrays(expected_tile_data, actual_tile_data_rgb, zoom, x, y)
+
+
+@pytest.fixture(scope="module")
+def setup_test_environment(request):
+    cleanup()
+    logging.info("Testing sqlite storage backend...")
+    os.makedirs(TEMP_MAPCACHE_CONFIG_DIR, exist_ok=True)
+    generate_synthetic_geotiff(
+        output_filename=SYNTHETIC_GEOTIFF_FILENAME,
+        width=GEOTIFF_WIDTH,
+        height=GEOTIFF_HEIGHT,
+    )
+    create_temp_mapcache_config(
+        SYNTHETIC_GEOTIFF_FILENAME,
+        MAPCACHE_TEMPLATE_CONFIG,
+    )
+    run_seeder("sqlite-tileset", "0,1")
+
+    def teardown():
+        cleanup()
+        logging.info("Cleanup complete.")
+
+    request.addfinalizer(teardown)
+
+
+def test_sqlite_tiles(setup_test_environment):
+    ok0 = run_mapcache_test(
+        0,
+        0,
+        0,
+        SYNTHETIC_GEOTIFF_FILENAME,
+        INITIAL_RESOLUTION,
+        ORIGIN_X,
+        ORIGIN_Y,
+    )
+    ok1 = run_mapcache_test(
+        1,
+        1,
+        2,
+        SYNTHETIC_GEOTIFF_FILENAME,
+        INITIAL_RESOLUTION,
+        ORIGIN_X,
+        ORIGIN_Y,
+    )
+    assert ok0
+    assert ok1


=====================================
tests/mcpython/verification_core.py
=====================================
@@ -0,0 +1,198 @@
+# Project:  MapCache
+# Purpose:  Common code for various MapCache storage backend tests
+# Author:   Maris Nartiss
+#
+# *****************************************************************************
+# Copyright (c) 2025 Regents of the University of Minnesota.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and to permit persons to whom the
+# Software is furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies of this Software or works derived from this Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+# DEALINGS IN THE SOFTWARE.
+# ****************************************************************************/
+
+import os
+import shutil
+import math
+import subprocess
+import numpy as np
+import logging
+
+from osgeo import gdal
+
+TILE_SIZE = 256
+TEMP_MAPCACHE_CONFIG_DIR = "/tmp/mc_test"
+TEMP_MAPCACHE_CONFIG_FILE = os.path.join(TEMP_MAPCACHE_CONFIG_DIR, "mapcache.xml")
+TILE_CACHE_BASE_DIR = os.path.join(TEMP_MAPCACHE_CONFIG_DIR, "cache_data")
+
+
+def calculate_expected_tile_data(
+    zoom, x, y, geotiff_path, initial_resolution, origin_x, origin_y
+):
+    """
+    Calculates the expected pixel data for a given tile (zoom, x, y)
+    by reading directly from the source GeoTIFF using GDAL,
+    thus matching any resampling done by MapCache.
+    Returns a 3-band NumPy array (uint8) or None on error.
+    """
+    # Calculate resolution for the current zoom level
+    resolution = initial_resolution / (2**zoom)
+
+    # Calculate geographic bounds of the tile
+    min_x_tile = origin_x + x * TILE_SIZE * resolution
+    max_y_tile = origin_y - y * TILE_SIZE * resolution
+    max_x_tile = min_x_tile + TILE_SIZE * resolution
+    min_y_tile = max_y_tile - TILE_SIZE * resolution
+
+    logging.info(
+        "Tile geographic bounds (Web Mercator):\n  "
+        f"MinX: {min_x_tile}, MinY: {min_y_tile}\n  "
+        f"MaxX: {max_x_tile}, MaxY: {max_y_tile}"
+    )
+
+    expected_tile_data = np.zeros((TILE_SIZE, TILE_SIZE, 3), dtype=np.uint8)
+
+    # Open the source GeoTIFF
+    src_ds = gdal.Open(geotiff_path, gdal.GA_ReadOnly)
+    if src_ds is None:
+        logging.error(f"Error: Could not open source GeoTIFF {geotiff_path}")
+        return None
+
+    geotiff_width = src_ds.RasterXSize
+    geotiff_height = src_ds.RasterYSize
+
+    src_gt = src_ds.GetGeoTransform()
+
+    # Get the source bands
+    src_bands = [src_ds.GetRasterBand(i + 1) for i in range(src_ds.RasterCount)]
+
+    # Iterate over each pixel in the output tile
+    for py in range(TILE_SIZE):
+        for px in range(TILE_SIZE):
+            # Calculate geographic coordinates (Web Mercator) of the center of the pixel in the tile
+            map_x = min_x_tile + (px + 0.5) * resolution
+            map_y = max_y_tile - (py + 0.5) * resolution
+
+            # Map Web Mercator (x, y) to pixel (col, row) in the source GeoTIFF
+            src_col = math.floor((map_x - src_gt[0]) / src_gt[1])
+            src_row = math.floor((map_y - src_gt[3]) / src_gt[5])
+
+            # Read pixel value directly from the source GeoTIFF
+            if 0 <= src_row < geotiff_height and 0 <= src_col < geotiff_width:
+                for band_idx in range(3):  # Assuming 3 bands (RGB)
+                    # ReadRaster(xoff, yoff, xsize, ysize, buf_xsize, buf_ysize, buf_type)
+                    # Read a single pixel
+                    val = src_bands[band_idx].ReadRaster(
+                        src_col, src_row, 1, 1, 1, 1, gdal.GDT_Byte
+                    )
+                    expected_tile_data[py, px, band_idx] = np.frombuffer(
+                        val, dtype=np.uint8
+                    )[0]
+            else:
+                # If the coordinate falls outside the source GeoTIFF, set to 0 (black)
+                expected_tile_data[py, px, :] = 0
+
+    src_ds = None  # Close the source dataset
+    return expected_tile_data
+
+
+def compare_tile_arrays(expected_data, actual_data, zoom, x, y):
+    """
+    Compares two NumPy arrays representing tile data and reports discrepancies.
+    Returns True if arrays are equal, False otherwise.
+    """
+    if actual_data is None:
+        return False
+
+    if np.array_equal(expected_data, actual_data):
+        logging.info(f"SUCCESS: Tile Z{zoom}-X{x}-Y{y} matches expected data.")
+        return True
+    else:
+        logging.error(f"FAILURE: Tile Z{zoom}-X{x}-Y{y} does NOT match expected data.")
+        diff = expected_data.astype(np.int16) - actual_data.astype(np.int16)
+        logging.error("Differences (Expected - Actual):\n%s", diff)
+        diff_coords = np.argwhere(diff != 0)
+        if len(diff_coords) > 0:
+            logging.error("First 10 differing pixel coordinates and values:")
+            for coord in diff_coords[:10]:
+                py, px, band_idx = coord
+                logging.error(
+                    f"  Pixel ({px}, {py}), Band {band_idx}: "
+                    f"Expected={expected_data[py, px, band_idx]}, "
+                    f"Actual={actual_data[py, px, band_idx]}"
+                )
+        return False
+
+
+def cleanup():
+    if os.path.exists(TEMP_MAPCACHE_CONFIG_DIR):
+        shutil.rmtree(TEMP_MAPCACHE_CONFIG_DIR)
+
+
+def create_temp_mapcache_config(geotiff_path, mapcache_template_config):
+    """
+    Replace dynamic parts of config file with actual values
+    """
+
+    os.makedirs(TEMP_MAPCACHE_CONFIG_DIR, exist_ok=True)
+
+    with open(mapcache_template_config, "r") as f:
+        template_content = f.read()
+
+    content = template_content.replace(
+        "SYNTHETIC_GEOTIFF_PATH_PLACEHOLDER", geotiff_path
+    )
+    content = content.replace("TILE_CACHE_BASE_DIR", TILE_CACHE_BASE_DIR)
+
+    with open(TEMP_MAPCACHE_CONFIG_FILE, "w") as f:
+        f.write(content)
+
+    logging.info(f"Created temporary mapcache config: {TEMP_MAPCACHE_CONFIG_FILE}")
+
+
+def run_seeder(tileset, zoomlevels):
+    """
+    Prepopulate storage backend with tiles
+    Tileset is a tileset name
+    Zoomlevels – a string with zoomlevels to seed e.g. "0,2"
+    """
+
+    logging.info("Running mapcache seeder...")
+    seeder_command = [
+        "mapcache_seed",
+        "-c",
+        TEMP_MAPCACHE_CONFIG_FILE,
+        "-t",
+        tileset,
+        "--force",
+        "-z",
+        zoomlevels,
+    ]
+    try:
+        result = subprocess.run(
+            seeder_command, check=True, capture_output=True, text=True
+        )
+        logging.info("Seeder stdout: %s", result.stdout)
+        if result.stderr:
+            logging.error("Seeder stderr: %s", result.stderr)
+        logging.info("Mapcache seeder finished.")
+    except subprocess.CalledProcessError as e:
+        logging.error(
+            f"Error running mapcache_seed: {e} Temporary files in: {TEMP_MAPCACHE_CONFIG_DIR}"
+        )
+        logging.error("Stdout: %s", e.stdout)
+        logging.error("Stderr: %s", e.stderr)
+        exit(1)
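For the tiles seeded in the backend tests, the geographic bounds computed by `calculate_expected_tile_data` can be worked through by hand; a sketch for tile Z1-X1-Y2 using the grid parameters from the template config:

```python
TILE_SIZE = 256
initial_resolution = 1000             # map units per pixel at zoom 0
origin_x, origin_y = -500000, 500000  # grid top-left corner

zoom, x, y = 1, 1, 2
resolution = initial_resolution / (2 ** zoom)  # 500.0 units/px at zoom 1
tile_span = TILE_SIZE * resolution             # 128000 units per tile edge

min_x_tile = origin_x + x * tile_span          # -372000
max_y_tile = origin_y - y * tile_span          #  244000
max_x_tile = min_x_tile + tile_span            # -244000
min_y_tile = max_y_tile - tile_span            #  116000
```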



View it on GitLab: https://salsa.debian.org/debian-gis-team/mapcache/-/compare/c1b003b4672cba313bf21319c2f4ae62d16f4309...e7694e7517fd746726bafb2f22ce803038583670
