[Git][debian-gis-team/mapproxy][upstream] New upstream version 3.1.0+dfsg
Bas Couwenberg (@sebastic)
gitlab@salsa.debian.org
Tue Oct 22 16:44:42 BST 2024
Bas Couwenberg pushed to branch upstream at Debian GIS Project / mapproxy
Commits:
55c71307 by Bas Couwenberg at 2024-10-22T17:29:11+02:00
New upstream version 3.1.0+dfsg
- - - - -
27 changed files:
- .github/workflows/dockerbuild.yml
- .github/workflows/ghpages.yml
- CHANGES.txt
- doc/caches.rst
- doc/configuration.rst
- mapproxy/cache/base.py
- mapproxy/cache/compact.py
- mapproxy/cache/file.py
- mapproxy/cache/geopackage.py
- mapproxy/cache/legend.py
- mapproxy/cache/mbtiles.py
- mapproxy/cache/path.py
- mapproxy/config/loader.py
- mapproxy/config/spec.py
- mapproxy/service/templates/demo/tms_demo.html
- mapproxy/test/helper.py
- mapproxy/test/unit/test_cache.py
- mapproxy/test/unit/test_cache_compact.py
- mapproxy/test/unit/test_cache_geopackage.py
- + mapproxy/test/unit/test_cache_mbtile.py
- mapproxy/test/unit/test_cache_tile.py
- + mapproxy/test/unit/test_fs.py
- mapproxy/util/ext/lockfile.py
- mapproxy/util/fs.py
- mapproxy/util/lock.py
- requirements-tests.txt
- setup.py
Changes:
=====================================
.github/workflows/dockerbuild.yml
=====================================
@@ -48,7 +48,7 @@ jobs:
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push base image
- uses: docker/build-push-action@v5
+ uses: docker/build-push-action@v6
with:
file: ./Dockerfile
push: true
@@ -57,7 +57,7 @@ jobs:
platforms: linux/amd64,linux/arm64
- name: Build and push development image
- uses: docker/build-push-action@v5
+ uses: docker/build-push-action@v6
with:
file: ./Dockerfile
push: true
@@ -66,7 +66,7 @@ jobs:
platforms: linux/amd64,linux/arm64
- name: Build and push nginx image
- uses: docker/build-push-action@v5
+ uses: docker/build-push-action@v6
with:
file: ./Dockerfile
push: true
@@ -95,7 +95,7 @@ jobs:
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push base alpine image
- uses: docker/build-push-action@v5
+ uses: docker/build-push-action@v6
with:
file: ./Dockerfile-alpine
push: true
@@ -104,7 +104,7 @@ jobs:
platforms: linux/amd64,linux/arm64
- name: Build and push alpine development image
- uses: docker/build-push-action@v5
+ uses: docker/build-push-action@v6
with:
file: ./Dockerfile-alpine
push: true
@@ -113,7 +113,7 @@ jobs:
platforms: linux/amd64,linux/arm64
- name: Build and push alpine based nginx image
- uses: docker/build-push-action@v5
+ uses: docker/build-push-action@v6
with:
file: ./Dockerfile-alpine
push: true
=====================================
.github/workflows/ghpages.yml
=====================================
@@ -23,13 +23,13 @@ jobs:
run: sphinx-build doc/ docs -D html_context.current_version=${{ github.ref_name }}
- name: Deploy docs to folder `latest` to GitHub Pages
- uses: JamesIves/github-pages-deploy-action@v4.5.0
+ uses: JamesIves/github-pages-deploy-action@v4.6.4
with:
folder: docs
target-folder: docs/latest
- name: Deploy docs to a folder named after the new tag to GitHub Pages
- uses: JamesIves/github-pages-deploy-action@v4.5.0
+ uses: JamesIves/github-pages-deploy-action@v4.6.4
with:
folder: docs
target-folder: docs/${{ github.ref_name }}
@@ -46,7 +46,7 @@ jobs:
> config/versions.json
- name: Deploy config folder to GitHub Pages
- uses: JamesIves/github-pages-deploy-action@v4.5.0
+ uses: JamesIves/github-pages-deploy-action@v4.6.4
with:
folder: config
target-folder: docs/config
=====================================
CHANGES.txt
=====================================
@@ -1,3 +1,20 @@
+3.1.0 2024-10-22
+~~~~~~~~~~~~~~~~
+
+Improvements:
+
+ - Add new config parameters `file_permissions` and `directory_permissions` to set file and directory
+ permissions on newly created cache files and directories.
+
+Maintenance:
+
+ - Dependency updates
+
+Fixes:
+
+ - Fix transparency in TMS demo page.
+
+
3.0.1 2024-08-27
~~~~~~~~~~~~~~~~
=====================================
doc/caches.rst
=====================================
@@ -89,6 +89,11 @@ This is the default cache type and it uses a single file for each tile. Availabl
.. versionadded:: 2.0.0
+``directory_permissions``, ``file_permissions``:
+ Permissions that MapProxy will set when creating files and directories. Must be given as a string containing the octal representation of the permissions, e.g. ``rwxrw-r--`` is ``'764'``. This has no effect on Windows.
+
+ .. versionadded:: 3.1.0
+
.. _cache_mbtiles:
``mbtiles``
@@ -126,6 +131,11 @@ You can set the ``sources`` to an empty list, if you use an existing MBTiles fil
The note about ``bulk_meta_tiles`` for SQLite below applies to MBtiles as well.
+``directory_permissions``, ``file_permissions``:
+ Permissions that MapProxy will set when creating files and directories. Must be given as a string containing the octal representation of the permissions, e.g. ``rwxrw-r--`` is ``'764'``. This has no effect on Windows.
+
+ .. versionadded:: 3.1.0
+
.. _cache_sqlite:
``sqlite``
@@ -176,6 +186,11 @@ Available options:
type: sqlite
directory: /path/to/cache
+``directory_permissions``, ``file_permissions``:
+ Permissions that MapProxy will set when creating files and directories. Must be given as a string containing the octal representation of the permissions, e.g. ``rwxrw-r--`` is ``'764'``. This has no effect on Windows.
+
+ .. versionadded:: 3.1.0
+
.. _cache_couchdb:
``couchdb``
@@ -273,7 +288,7 @@ MapProxy will place the JSON document for tile z=3, x=1, y=2 at ``http://localho
.. code-block:: json
-
+
{
"_attachments": {
@@ -651,6 +666,11 @@ Available options:
``version``:
The version of the ArcGIS compact cache format. This option is required. Either ``1`` or ``2``.
+``directory_permissions``, ``file_permissions``:
+ Permissions that MapProxy will set when creating files and directories. Must be given as a string containing the octal representation of the permissions, e.g. ``rwxrw-r--`` is ``'764'``. This has no effect on Windows.
+
+ .. versionadded:: 3.1.0
+
You can set ``sources`` to an empty list if you use existing compact cache files and do not have a source.
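The new ``file_permissions``/``directory_permissions`` options documented above take the octal string form, e.g. ``'764'`` for ``rwxrw-r--``. As a quick illustration (a standalone sketch, not part of MapProxy), a helper can translate the symbolic form into the octal string the options expect:

```python
def symbolic_to_octal(symbolic: str) -> str:
    """Convert an 'rwxrw-r--' style string to the octal string ('764')
    that the new cache permission options expect."""
    assert len(symbolic) == 9, "expected a 9-character rwx string"
    mode = 0
    for ch in symbolic:
        # each granted bit (anything but '-') shifts a 1 into the mode
        mode = (mode << 1) | (0 if ch == '-' else 1)
    return oct(mode)[2:]  # strip the '0o' prefix

# 'rwxrw-r--' corresponds to '764', matching the documentation's example
print(symbolic_to_octal('rwxrw-r--'))
```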
=====================================
doc/configuration.rst
=====================================
@@ -1055,6 +1055,11 @@ The following options define how tiles are created and stored. Most options can
``max_tile_limit``
Maximum number of tiles MapProxy will merge together for a WMS request. This limit is for each layer and defaults to 500 tiles.
+``directory_permissions``, ``file_permissions``:
+ Permissions that MapProxy will set when creating files and directories. Must be given as a string containing the octal representation of the permissions, e.g. ``rwxrw-r--`` is ``'764'``. This has no effect on Windows.
+
+ .. versionadded:: 3.1.0
+
``srs``
"""""""
=====================================
mapproxy/cache/base.py
=====================================
@@ -98,10 +98,11 @@ if sys.platform == 'win32':
class TileLocker(object):
- def __init__(self, lock_dir, lock_timeout, lock_cache_id):
+ def __init__(self, lock_dir, lock_timeout, lock_cache_id, directory_permissions=None):
self.lock_dir = lock_dir
self.lock_timeout = lock_timeout
self.lock_cache_id = lock_cache_id
+ self.directory_permissions = directory_permissions
def lock_filename(self, tile):
return os.path.join(self.lock_dir, self.lock_cache_id + '-' +
@@ -117,4 +118,4 @@ class TileLocker(object):
cleanup_lockdir(self.lock_dir, max_lock_time=self.lock_timeout + 10,
force=False)
return FileLock(lock_filename, timeout=self.lock_timeout,
- remove_on_unlock=REMOVE_ON_UNLOCK)
+ remove_on_unlock=REMOVE_ON_UNLOCK, directory_permissions=self.directory_permissions)
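``TileLocker`` now forwards ``directory_permissions`` into ``FileLock``. A minimal sketch of what that forwarding achieves (a hypothetical helper, not MapProxy's actual ``FileLock``): the lock file's parent directory is created on demand and, when an octal string is configured, chmod'ed so the mode holds regardless of the umask.

```python
import os

def ensure_lock_dir(lock_filename, directory_permissions=None):
    """Create the lock file's parent directory if missing and apply the
    configured octal permission string (hypothetical stand-in for what
    FileLock does internally with directory_permissions)."""
    directory = os.path.dirname(lock_filename)
    if directory and not os.path.isdir(directory):
        os.makedirs(directory, exist_ok=True)
        if directory_permissions:
            # chmod is not affected by the umask, unlike makedirs' mode
            os.chmod(directory, int(directory_permissions, base=8))
    return directory
```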
=====================================
mapproxy/cache/compact.py
=====================================
@@ -34,11 +34,14 @@ class CompactCacheBase(TileCacheBase):
supports_timestamp = False
bundle_class = None
- def __init__(self, cache_dir, coverage=None):
+ def __init__(self, cache_dir, coverage=None,
+ directory_permissions=None, file_permissions=None):
super(CompactCacheBase, self).__init__(coverage)
md5 = hashlib.new('md5', cache_dir.encode('utf-8'), usedforsecurity=False)
self.lock_cache_id = 'compactcache-' + md5.hexdigest()
self.cache_dir = cache_dir
+ self.directory_permissions = directory_permissions
+ self.file_permissions = file_permissions
def _get_bundle_fname_and_offset(self, tile_coord):
x, y, z = tile_coord
@@ -53,7 +56,8 @@ class CompactCacheBase(TileCacheBase):
def _get_bundle(self, tile_coord):
bundle_fname, offset = self._get_bundle_fname_and_offset(tile_coord)
- return self.bundle_class(bundle_fname, offset=offset)
+ return self.bundle_class(bundle_fname, offset=offset, file_permissions=self.file_permissions,
+ directory_permissions=self.directory_permissions)
def is_cached(self, tile, dimensions=None):
if tile.coord is None:
@@ -138,10 +142,12 @@ BUNDLEX_V1_EXT = '.bundlx'
class BundleV1(object):
- def __init__(self, base_filename, offset):
+ def __init__(self, base_filename, offset, file_permissions=None, directory_permissions=None):
self.base_filename = base_filename
self.lock_filename = base_filename + '.lck'
self.offset = offset
+ self.file_permissions = file_permissions
+ self.directory_permissions = directory_permissions
def _rel_tile_coord(self, tile_coord):
return (
@@ -150,10 +156,11 @@ class BundleV1(object):
)
def data(self):
- return BundleDataV1(self.base_filename + BUNDLE_EXT, self.offset)
+ return BundleDataV1(self.base_filename + BUNDLE_EXT, self.offset,
+ self.directory_permissions, self.file_permissions)
def index(self):
- return BundleIndexV1(self.base_filename + BUNDLEX_V1_EXT)
+ return BundleIndexV1(self.base_filename + BUNDLEX_V1_EXT, self.directory_permissions, self.file_permissions)
def is_cached(self, tile, dimensions=None):
if tile.source or tile.coord is None:
@@ -185,7 +192,7 @@ class BundleV1(object):
data = buf.read()
tiles_data.append((t.coord, data))
- with FileLock(self.lock_filename):
+ with FileLock(self.lock_filename, directory_permissions=self.directory_permissions, remove_on_unlock=True):
with self.data().readwrite() as bundle:
with self.index().readwrite() as idx:
for tile_coord, data in tiles_data:
@@ -229,7 +236,7 @@ class BundleV1(object):
if tile.coord is None:
return True
- with FileLock(self.lock_filename):
+ with FileLock(self.lock_filename, directory_permissions=self.directory_permissions, remove_on_unlock=True):
with self.index().readwrite() as idx:
x, y = self._rel_tile_coord(tile.coord)
idx.remove_tile_offset(x, y)
@@ -269,9 +276,11 @@ INT64LE = struct.Struct('<Q')
class BundleIndexV1(object):
- def __init__(self, filename):
+ def __init__(self, filename, directory_permissions=None, file_permissions=None):
self.filename = filename
self._fh = None
+ self.directory_permissions = directory_permissions
+ self.file_permissions = file_permissions
# defer initialization to update/remove calls to avoid
# index creation on is_cached (prevents new files in read-only caches)
self._initialized = False
@@ -280,7 +289,7 @@ class BundleIndexV1(object):
self._initialized = True
if os.path.exists(self.filename):
return
- ensure_directory(self.filename)
+ ensure_directory(self.filename, self.directory_permissions)
buf = BytesIO()
buf.write(BUNDLEX_V1_HEADER)
@@ -288,6 +297,10 @@ class BundleIndexV1(object):
buf.write(INT64LE.pack((i*4)+BUNDLE_V1_HEADER_SIZE)[:5])
buf.write(BUNDLEX_V1_FOOTER)
write_atomic(self.filename, buf.getvalue())
+ if self.file_permissions:
+ permission = int(self.file_permissions, base=8)
+ log.info("setting file permissions on compact cache file: " + self.file_permissions)
+ os.chmod(self.filename, permission)
def _tile_index_offset(self, x, y):
return BUNDLEX_V1_HEADER_SIZE + (x * BUNDLEX_V1_GRID_HEIGHT + y) * 5
@@ -333,6 +346,10 @@ class BundleIndexV1(object):
def readwrite(self):
self._init_index()
with open(self.filename, 'r+b') as fh:
+ if self.file_permissions:
+ permission = int(self.file_permissions, base=8)
+ log.info("setting file permissions on compact cache file: " + self.file_permissions)
+ os.chmod(self.filename, permission)
b = BundleIndexV1(self.filename)
b._fh = fh
yield b
@@ -361,15 +378,17 @@ BUNDLE_V1_HEADER_STRUCT_FORMAT = '<4I3Q5I'
class BundleDataV1(object):
- def __init__(self, filename, tile_offsets):
+ def __init__(self, filename, tile_offsets, directory_permissions=None, file_permissions=None):
self.filename = filename
self.tile_offsets = tile_offsets
self._fh = None
+ self.directory_permissions = directory_permissions
+ self.file_permissions = file_permissions
if not os.path.exists(self.filename):
self._init_bundle()
def _init_bundle(self):
- ensure_directory(self.filename)
+ ensure_directory(self.filename, self.directory_permissions)
header = list(BUNDLE_V1_HEADER)
header[10], header[8] = self.tile_offsets
header[11], header[9] = header[10]+127, header[8]+127
@@ -377,18 +396,22 @@ class BundleDataV1(object):
struct.pack(BUNDLE_V1_HEADER_STRUCT_FORMAT, *header) +
# zero-size entry for each tile
(b'\x00' * (BUNDLEX_V1_GRID_HEIGHT * BUNDLEX_V1_GRID_WIDTH * 4)))
+ if self.file_permissions:
+ permission = int(self.file_permissions, base=8)
+ log.info("setting file permissions on compact cache file: " + self.file_permissions)
+ os.chmod(self.filename, permission)
@contextlib.contextmanager
def readonly(self):
with open(self.filename, 'rb') as fh:
- b = BundleDataV1(self.filename, self.tile_offsets)
+ b = BundleDataV1(self.filename, self.tile_offsets, self.directory_permissions, self.file_permissions)
b._fh = fh
yield b
@contextlib.contextmanager
def readwrite(self):
with open(self.filename, 'r+b') as fh:
- b = BundleDataV1(self.filename, self.tile_offsets)
+ b = BundleDataV1(self.filename, self.tile_offsets, self.directory_permissions, self.file_permissions)
b._fh = fh
yield b
@@ -464,10 +487,12 @@ BUNDLE_V2_HEADER_SIZE = 64
class BundleV2(object):
- def __init__(self, base_filename, offset=None):
+ def __init__(self, base_filename, offset=None, file_permissions=None, directory_permissions=None):
# offset not used by V2
self.filename = base_filename + '.bundle'
self.lock_filename = base_filename + '.lck'
+ self.file_permissions = file_permissions
+ self.directory_permissions = directory_permissions
# defer initialization to update/remove calls to avoid
# index creation on is_cached (prevents new files in read-only caches)
@@ -477,12 +502,16 @@ class BundleV2(object):
self._initialized = True
if os.path.exists(self.filename):
return
- ensure_directory(self.filename)
+ ensure_directory(self.filename, self.directory_permissions)
buf = BytesIO()
buf.write(struct.pack(BUNDLE_V2_HEADER_STRUCT_FORMAT, *BUNDLE_V2_HEADER))
# Empty index (ArcGIS stores an offset of 4 and size of 0 for missing tiles)
buf.write(struct.pack('<%dQ' % BUNDLE_V2_TILES, *(4, ) * BUNDLE_V2_TILES))
write_atomic(self.filename, buf.getvalue())
+ if self.file_permissions:
+ permission = int(self.file_permissions, base=8)
+ log.info("setting file permissions on compact cache file: " + self.file_permissions)
+ os.chmod(self.filename, permission)
def _tile_idx_offset(self, x, y):
return BUNDLE_V2_HEADER_SIZE + (x + BUNDLE_V2_GRID_HEIGHT * y) * 8
@@ -605,7 +634,7 @@ class BundleV2(object):
data = buf.read()
tiles_data.append((t.coord, data))
- with FileLock(self.lock_filename):
+ with FileLock(self.lock_filename, directory_permissions=self.directory_permissions, remove_on_unlock=True):
with self._readwrite() as fh:
for tile_coord, data in tiles_data:
self._store_tile(fh, tile_coord, data, dimensions=dimensions)
@@ -617,7 +646,7 @@ class BundleV2(object):
return True
self._init_index()
- with FileLock(self.lock_filename):
+ with FileLock(self.lock_filename, directory_permissions=self.directory_permissions, remove_on_unlock=True):
with self._readwrite() as fh:
x, y = self._rel_tile_coord(tile.coord)
self._update_tile_offset(fh, x, y, 0, 0)
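The bundle code above repeats one pattern: write the file atomically, then chmod it when ``file_permissions`` is configured. A self-contained sketch of that pattern, with MapProxy's real ``write_atomic`` helper replaced by an inline stand-in:

```python
import os
import tempfile

def write_atomic_with_permissions(filename, data, file_permissions=None):
    """Write data to a temporary file in the target directory, rename it
    into place (atomic on POSIX), then apply the configured octal
    permission string, mirroring the pattern in the bundle code."""
    directory = os.path.dirname(filename) or '.'
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, 'wb') as f:
            f.write(data)
        os.replace(tmp_path, filename)  # atomic rename into place
    except Exception:
        os.unlink(tmp_path)
        raise
    if file_permissions:
        os.chmod(filename, int(file_permissions, base=8))
```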
=====================================
mapproxy/cache/file.py
=====================================
@@ -33,7 +33,8 @@ class FileCache(TileCacheBase):
supports_dimensions = True
def __init__(self, cache_dir, file_ext, directory_layout='tc',
- link_single_color_images=False, coverage=None, image_opts=None):
+ link_single_color_images=False, coverage=None, image_opts=None,
+ directory_permissions=None, file_permissions=None):
"""
:param cache_dir: the path where the tile will be stored
:param file_ext: the file extension that will be appended to
@@ -46,6 +47,8 @@ class FileCache(TileCacheBase):
self.file_ext = file_ext
self.image_opts = image_opts
self.link_single_color_images = link_single_color_images
+ self.directory_permissions = directory_permissions
+ self.file_permissions = file_permissions
self._tile_location, self._level_location = path.location_funcs(layout=directory_layout)
if self._level_location is None:
self.level_location = None # disable level based clean-ups
@@ -57,7 +60,8 @@ class FileCache(TileCacheBase):
dimensions_str = ['{key}-{value}'.format(key=i, value=dimensions[i].replace('/', '_')) for i in items]
# todo: cache_dir is not used. should it get returned or removed?
cache_dir = os.path.join(self.cache_dir, '_'.join(dimensions_str)) # noqa
- return self._tile_location(tile, self.cache_dir, self.file_ext, create_dir=create_dir, dimensions=dimensions)
+ return self._tile_location(tile, self.cache_dir, self.file_ext, create_dir=create_dir, dimensions=dimensions,
+ directory_permissions=self.directory_permissions)
def level_location(self, level, dimensions=None):
"""
@@ -82,7 +86,7 @@ class FileCache(TileCacheBase):
)
location = os.path.join(*parts)
if create_dir:
- ensure_directory(location)
+ ensure_directory(location, self.directory_permissions)
return location
def load_tile_metadata(self, tile, dimensions=None):
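``FileCache`` likewise threads the new options through to ``tile_location`` and ``ensure_directory``. Because the options are plain octal strings, a loader would typically validate them before use; a hypothetical validator (MapProxy's actual config spec may differ) could look like:

```python
def validate_permission_string(value):
    """Hypothetical check for the new options: accept only a 3-digit
    octal string such as '764' and return the numeric mode."""
    if not (isinstance(value, str) and len(value) == 3
            and all(c in '01234567' for c in value)):
        raise ValueError(
            f"expected an octal permission string like '764', got {value!r}")
    return int(value, base=8)
```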
=====================================
mapproxy/cache/geopackage.py
=====================================
@@ -39,13 +39,16 @@ class GeopackageCache(TileCacheBase):
supports_timestamp = False
def __init__(
- self, geopackage_file, tile_grid, table_name, with_timestamps=False, timeout=30, wal=False, coverage=None):
+ self, geopackage_file, tile_grid, table_name, with_timestamps=False, timeout=30, wal=False, coverage=None,
+ directory_permissions=None, file_permissions=None):
super(GeopackageCache, self).__init__(coverage)
self.tile_grid = tile_grid
self.table_name = self._check_table_name(table_name)
md5 = hashlib.new('md5', geopackage_file.encode('utf-8'), usedforsecurity=False)
self.lock_cache_id = 'gpkg' + md5.hexdigest()
self.geopackage_file = geopackage_file
+ self.directory_permissions = directory_permissions
+ self.file_permissions = file_permissions
if coverage:
self.bbox = coverage.transform_to(self.tile_grid.srs).bbox
@@ -65,6 +68,9 @@ class GeopackageCache(TileCacheBase):
self._db_conn_cache.db = sqlite3.connect(self.geopackage_file, timeout=self.timeout)
return self._db_conn_cache.db
+ def uncached_db(self):
+ return sqlite3.connect(self.geopackage_file, timeout=self.timeout)
+
def cleanup(self):
"""
Close all open connection and remove them from cache.
@@ -107,12 +113,12 @@ class GeopackageCache(TileCacheBase):
def ensure_gpkg(self):
if not os.path.isfile(self.geopackage_file):
with FileLock(self.geopackage_file + '.init.lck',
- remove_on_unlock=REMOVE_ON_UNLOCK):
- ensure_directory(self.geopackage_file)
+ remove_on_unlock=REMOVE_ON_UNLOCK, directory_permissions=self.directory_permissions):
+ ensure_directory(self.geopackage_file, self.directory_permissions)
self._initialize_gpkg()
else:
if not self.check_gpkg():
- ensure_directory(self.geopackage_file)
+ ensure_directory(self.geopackage_file, self.directory_permissions)
self._initialize_gpkg()
def check_gpkg(self):
@@ -125,7 +131,7 @@ class GeopackageCache(TileCacheBase):
return True
def _verify_table(self):
- with sqlite3.connect(self.geopackage_file, timeout=self.timeout) as db:
+ with self.uncached_db() as db:
cur = db.execute("""SELECT name FROM sqlite_master WHERE type='table' AND name=?""",
(self.table_name,))
content = cur.fetchone()
@@ -135,261 +141,169 @@ class GeopackageCache(TileCacheBase):
return True
def _verify_gpkg_contents(self):
- with sqlite3.connect(self.geopackage_file, timeout=self.timeout) as db:
+ with self.uncached_db() as db:
cur = db.execute("""SELECT * FROM gpkg_contents WHERE table_name = ?""", (self.table_name,))
- results = cur.fetchone()
- if not results:
- # Table doesn't exist in gpkg_contents _initialize_gpkg will add it.
- return False
- gpkg_data_type = results[1]
- gpkg_srs_id = results[9]
- cur = db.execute("""SELECT * FROM gpkg_spatial_ref_sys WHERE srs_id = ?""", (gpkg_srs_id,))
-
- gpkg_coordsys_id = cur.fetchone()[3]
- if gpkg_data_type.lower() != "tiles":
- log.info("The geopackage table name already exists for a data type other than tiles.")
- raise ValueError("table_name is improperly configured.")
- if gpkg_coordsys_id != get_epsg_num(self.tile_grid.srs.srs_code):
- log.info(
- f"The geopackage {self.geopackage_file} table name {self.table_name} already exists and has an SRS of"
- f" {gpkg_coordsys_id}, which does not match the configured Mapproxy SRS of"
- f" {get_epsg_num(self.tile_grid.srs.srs_code)}.")
- raise ValueError("srs is improperly configured.")
- return True
+ results = cur.fetchone()
+ if not results:
+ # Table doesn't exist in gpkg_contents _initialize_gpkg will add it.
+ return False
+ gpkg_data_type = results[1]
+ gpkg_srs_id = results[9]
+ cur = db.execute("""SELECT * FROM gpkg_spatial_ref_sys WHERE srs_id = ?""", (gpkg_srs_id,))
+
+ gpkg_coordsys_id = cur.fetchone()[3]
+ if gpkg_data_type.lower() != "tiles":
+ log.info("The geopackage table name already exists for a data type other than tiles.")
+ raise ValueError("table_name is improperly configured.")
+ if gpkg_coordsys_id != get_epsg_num(self.tile_grid.srs.srs_code):
+ log.info(
+ f"The geopackage {self.geopackage_file} table name {self.table_name} already exists and has an SRS"
+ f" of {gpkg_coordsys_id}, which does not match the configured Mapproxy SRS of"
+ f" {get_epsg_num(self.tile_grid.srs.srs_code)}.")
+ raise ValueError("srs is improperly configured.")
+ return True
def _verify_tile_size(self):
- with sqlite3.connect(self.geopackage_file, timeout=self.timeout) as db:
+ with self.uncached_db() as db:
cur = db.execute(
"""SELECT * FROM gpkg_tile_matrix WHERE table_name = ?""",
(self.table_name,))
- results = cur.fetchall()
- results = results[0]
- tile_size = self.tile_grid.tile_size
+ results = cur.fetchall()
+ results = results[0]
+ tile_size = self.tile_grid.tile_size
- if not results:
- # There is no tile conflict. Return to allow the creation of new tiles.
- return True
+ if not results:
+ # There is no tile conflict. Return to allow the creation of new tiles.
+ return True
- gpkg_table_name, gpkg_zoom_level, gpkg_matrix_width, gpkg_matrix_height, gpkg_tile_width, gpkg_tile_height, \
- gpkg_pixel_x_size, gpkg_pixel_y_size = results
- resolution = self.tile_grid.resolution(gpkg_zoom_level)
- if gpkg_tile_width != tile_size[0] or gpkg_tile_height != tile_size[1]:
- log.info(
- f"The geopackage {self.geopackage_file} table name {self.table_name} already exists and has tile sizes"
- f" of ({gpkg_tile_width},{gpkg_tile_height}) which is different than the configure tile sizes of"
- f" ({tile_size[0]},{tile_size[1]}).")
- log.info("The current mapproxy configuration is invalid for this geopackage.")
- raise ValueError("tile_size is improperly configured.")
- if not is_close(gpkg_pixel_x_size, resolution) or not is_close(gpkg_pixel_y_size, resolution):
- log.info(
- f"The geopackage {self.geopackage_file} table name {self.table_name} already exists and level"
- f" {gpkg_zoom_level} a resolution of ({gpkg_pixel_x_size:.13f},{gpkg_pixel_y_size:.13f})"
- f" which is different than the configured resolution of ({resolution:.13f},{resolution:.13f}).")
- log.info("The current mapproxy configuration is invalid for this geopackage.")
- raise ValueError("res is improperly configured.")
- return True
+ gpkg_table_name, gpkg_zoom_level, gpkg_matrix_width, gpkg_matrix_height, gpkg_tile_width, \
+ gpkg_tile_height, gpkg_pixel_x_size, gpkg_pixel_y_size = results
+ resolution = self.tile_grid.resolution(gpkg_zoom_level)
+ if gpkg_tile_width != tile_size[0] or gpkg_tile_height != tile_size[1]:
+ log.info(
+ f"The geopackage {self.geopackage_file} table name {self.table_name} already exists and has tile"
+ f" sizes of ({gpkg_tile_width},{gpkg_tile_height}) which is different than the configure tile sizes"
+ f" of ({tile_size[0]},{tile_size[1]}).")
+ log.info("The current mapproxy configuration is invalid for this geopackage.")
+ raise ValueError("tile_size is improperly configured.")
+ if not is_close(gpkg_pixel_x_size, resolution) or not is_close(gpkg_pixel_y_size, resolution):
+ log.info(
+ f"The geopackage {self.geopackage_file} table name {self.table_name} already exists and level"
+ f" {gpkg_zoom_level} a resolution of ({gpkg_pixel_x_size:.13f},{gpkg_pixel_y_size:.13f})"
+ f" which is different than the configured resolution of ({resolution:.13f},{resolution:.13f}).")
+ log.info("The current mapproxy configuration is invalid for this geopackage.")
+ raise ValueError("res is improperly configured.")
+ return True
def _initialize_gpkg(self):
log.info('initializing Geopackage file %s', self.geopackage_file)
- db = sqlite3.connect(self.geopackage_file, timeout=self.timeout)
+ with sqlite3.connect(self.geopackage_file, timeout=self.timeout) as db:
- if self.wal:
- db.execute('PRAGMA journal_mode=wal')
+ if self.wal:
+ db.execute('PRAGMA journal_mode=wal')
- proj = get_epsg_num(self.tile_grid.srs.srs_code)
- stmts = [
- """
- CREATE TABLE IF NOT EXISTS gpkg_contents(
- table_name TEXT NOT NULL PRIMARY KEY,
- -- The name of the tiles, or feature table
- data_type TEXT NOT NULL,
- -- Type of data stored in the table: "features" per clause Features
- -- (http://www.geopackage.org/spec/#features), "tiles" per clause Tiles
- -- (http://www.geopackage.org/spec/#tiles), or an implementer-defined value for other data
- -- tables per clause in an Extended GeoPackage
- identifier TEXT UNIQUE,
- -- A human-readable identifier (e.g. short name) for the table_name content
- description TEXT DEFAULT '',
- -- A human-readable description for the table_name content
- last_change DATETIME NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
- -- Timestamp value in ISO 8601 format as defined by the strftime function %Y-%m-%dT%H:%M:%fZ
- -- format string applied to the current time
- min_x DOUBLE,
- -- Bounding box minimum easting or longitude for all content in table_name
- min_y DOUBLE,
- -- Bounding box minimum northing or latitude for all content in table_name
- max_x DOUBLE,
- -- Bounding box maximum easting or longitude for all content in table_name
- max_y DOUBLE,
- -- Bounding box maximum northing or latitude for all content in table_name
- srs_id INTEGER,
- -- Spatial Reference System ID: gpkg_spatial_ref_sys.srs_id; when data_type is features,
- -- SHALL also match gpkg_geometry_columns.srs_id; When data_type is tiles, SHALL also match
- -- gpkg_tile_matrix_set.srs.id
- CONSTRAINT fk_gc_r_srs_id FOREIGN KEY (srs_id) REFERENCES gpkg_spatial_ref_sys(srs_id))
- """,
- """
- CREATE TABLE IF NOT EXISTS gpkg_spatial_ref_sys(
- srs_name TEXT NOT NULL,
- -- Human readable name of this SRS (Spatial Reference System)
- srs_id INTEGER NOT NULL PRIMARY KEY,
- -- Unique identifier for each Spatial Reference System within a GeoPackage
- organization TEXT NOT NULL,
- -- Case-insensitive name of the defining organization e.g. EPSG or epsg
- organization_coordsys_id INTEGER NOT NULL,
- -- Numeric ID of the Spatial Reference System assigned by the organization
- definition TEXT NOT NULL,
- -- Well-known Text representation of the Spatial Reference System
- description TEXT)
- """,
- """
- CREATE TABLE IF NOT EXISTS gpkg_tile_matrix
- (table_name TEXT NOT NULL, -- Tile Pyramid User Data Table Name
- zoom_level INTEGER NOT NULL, -- 0 <= zoom_level <= max_level for table_name
- matrix_width INTEGER NOT NULL, -- Number of columns (>= 1) in tile matrix at this zoom level
- matrix_height INTEGER NOT NULL, -- Number of rows (>= 1) in tile matrix at this zoom level
- tile_width INTEGER NOT NULL, -- Tile width in pixels (>= 1) for this zoom level
- tile_height INTEGER NOT NULL, -- Tile height in pixels (>= 1) for this zoom level
- pixel_x_size DOUBLE NOT NULL, -- In t_table_name srid units or default meters for srid 0 (>0)
- pixel_y_size DOUBLE NOT NULL, -- In t_table_name srid units or default meters for srid 0 (>0)
- CONSTRAINT pk_ttm PRIMARY KEY (table_name, zoom_level),
- CONSTRAINT fk_tmm_table_name FOREIGN KEY (table_name) REFERENCES gpkg_contents(table_name))
- """,
- """
- CREATE TABLE IF NOT EXISTS gpkg_tile_matrix_set(
- table_name TEXT NOT NULL PRIMARY KEY,
- -- Tile Pyramid User Data Table Name
- srs_id INTEGER NOT NULL,
- -- Spatial Reference System ID: gpkg_spatial_ref_sys.srs_id
- min_x DOUBLE NOT NULL,
- -- Bounding box minimum easting or longitude for all content in table_name
- min_y DOUBLE NOT NULL,
- -- Bounding box minimum northing or latitude for all content in table_name
- max_x DOUBLE NOT NULL,
- -- Bounding box maximum easting or longitude for all content in table_name
- max_y DOUBLE NOT NULL,
- -- Bounding box maximum northing or latitude for all content in table_name
- CONSTRAINT fk_gtms_table_name FOREIGN KEY (table_name) REFERENCES gpkg_contents(table_name),
- CONSTRAINT fk_gtms_srs FOREIGN KEY (srs_id) REFERENCES gpkg_spatial_ref_sys (srs_id))
- """,
+ proj = get_epsg_num(self.tile_grid.srs.srs_code)
+
+ db.execute(create_gpkg_contents_statement)
+ db.execute(create_spatial_ref_sys_statment)
+ db.execute(create_tile_matrix_statement)
+ db.execute(create_tile_matrix_set_statement)
+ db.execute(create_table_statement.format(self.table_name))
+
+ db.execute("PRAGMA foreign_keys = 1;")
+
+ # List of WKT execute statements and data.
+ wkt_statement = """
+ INSERT OR REPLACE INTO gpkg_spatial_ref_sys (
+ srs_id,
+ organization,
+ organization_coordsys_id,
+ srs_name,
+ definition)
+ VALUES (?, ?, ?, ?, ?)
"""
- CREATE TABLE IF NOT EXISTS [{0}]
- (id INTEGER PRIMARY KEY AUTOINCREMENT, -- Autoincrement primary key
- zoom_level INTEGER NOT NULL, -- min(zoom_level) <= zoom_level <= max(zoom_level)
- -- for t_table_name
- tile_column INTEGER NOT NULL, -- 0 to tile_matrix matrix_width - 1
- tile_row INTEGER NOT NULL, -- 0 to tile_matrix matrix_height - 1
- tile_data BLOB NOT NULL, -- Of an image MIME type specified in clauses Tile
- -- Encoding PNG, Tile Encoding JPEG, Tile Encoding WEBP
- UNIQUE (zoom_level, tile_column, tile_row))
- """.format(self.table_name)]
-
- for stmt in stmts:
- db.execute(stmt)
-
- db.execute("PRAGMA foreign_keys = 1;")
-
- # List of WKT execute statements and data.("""
- wkt_statement = """
- INSERT OR REPLACE INTO gpkg_spatial_ref_sys (
- srs_id,
- organization,
- organization_coordsys_id,
- srs_name,
- definition)
- VALUES (?, ?, ?, ?, ?)
- """
- wkt_entries = [(3857, 'epsg', 3857, 'WGS 84 / Pseudo-Mercator',
- """
-PROJCS["WGS 84 / Pseudo-Mercator",GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,\
-AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],\
-UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]],\
-PROJECTION["Mercator_1SP"],PARAMETER["central_meridian",0],PARAMETER["scale_factor",1],PARAMETER["false_easting",0],\
-PARAMETER["false_northing",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["X",EAST],AXIS["Y",NORTH],\
-AUTHORITY["EPSG","3857"]]\
- """
- ),
- (4326, 'epsg', 4326, 'WGS 84',
- """
-GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],\
-AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,\
-AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]]\
- """
- ),
- (-1, 'NONE', -1, ' ', 'undefined'),
- (0, 'NONE', 0, ' ', 'undefined')
- ]
-
- if get_epsg_num(self.tile_grid.srs.srs_code) not in [4326, 3857]:
- wkt_entries.append((proj, 'epsg', proj, 'Not provided', "Added via Mapproxy."))
- db.commit()
-
- # Add geopackage version to the header (1.0)
- db.execute("PRAGMA application_id = 1196437808;")
- db.commit()
-
- for wkt_entry in wkt_entries:
+ wkt_entries = [
+ (3857, 'epsg', 3857, 'WGS 84 / Pseudo-Mercator', proj_string_3857),
+ (4326, 'epsg', 4326, 'WGS 84', proj_string_4326),
+ (-1, 'NONE', -1, ' ', 'undefined'),
+ (0, 'NONE', 0, ' ', 'undefined')
+ ]
+
+ if get_epsg_num(self.tile_grid.srs.srs_code) not in [4326, 3857]:
+ wkt_entries.append((proj, 'epsg', proj, 'Not provided', "Added via Mapproxy."))
+ db.commit()
+
+ # Add geopackage version to the header (1.0)
+ db.execute("PRAGMA application_id = 1196437808;")
+ db.commit()
+
+ for wkt_entry in wkt_entries:
+ try:
+ db.execute(wkt_statement, wkt_entry)
+ except sqlite3.IntegrityError:
+ log.info("srs_id already exists.")
+ db.commit()
+
+ last_change = datetime.datetime.utcfromtimestamp(
+ int(os.environ.get('SOURCE_DATE_EPOCH', time.time()))
+ )
+
+ # Ensure that tile table exists here, don't overwrite a valid entry.
try:
- db.execute(wkt_statement, (wkt_entry[0], wkt_entry[1], wkt_entry[2], wkt_entry[3], wkt_entry[4]))
+ db.execute("""
+ INSERT INTO gpkg_contents (
+ table_name,
+ data_type,
+ identifier,
+ description,
+ last_change,
+ min_x,
+ max_x,
+ min_y,
+ max_y,
+ srs_id)
+ VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?);
+ """, (self.table_name,
+ "tiles",
+ self.table_name,
+ "Created with Mapproxy.",
+ last_change,
+ self.bbox[0],
+ self.bbox[2],
+ self.bbox[1],
+ self.bbox[3],
+ proj))
except sqlite3.IntegrityError:
- log.info("srs_id already exists.")
- db.commit()
-
- last_change = datetime.datetime.utcfromtimestamp(
- int(os.environ.get('SOURCE_DATE_EPOCH', time.time()))
- )
+ pass
+ db.commit()
- # Ensure that tile table exists here, don't overwrite a valid entry.
- try:
- db.execute("""
- INSERT INTO gpkg_contents (
- table_name,
- data_type,
- identifier,
- description,
- last_change,
- min_x,
- max_x,
- min_y,
- max_y,
- srs_id)
- VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?);
- """, (self.table_name,
- "tiles",
- self.table_name,
- "Created with Mapproxy.",
- last_change,
- self.bbox[0],
- self.bbox[2],
- self.bbox[1],
- self.bbox[3],
- proj))
- except sqlite3.IntegrityError:
- pass
- db.commit()
-
- # Ensure that tile set exists here, don't overwrite a valid entry.
- try:
- db.execute("""
- INSERT INTO gpkg_tile_matrix_set (table_name, srs_id, min_x, max_x, min_y, max_y)
- VALUES (?, ?, ?, ?, ?, ?);
- """, (
- self.table_name, proj, self.bbox[0], self.bbox[2], self.bbox[1], self.bbox[3]))
- except sqlite3.IntegrityError:
- pass
- db.commit()
-
- tile_size = self.tile_grid.tile_size
- for grid, resolution, level in zip(self.tile_grid.grid_sizes,
- self.tile_grid.resolutions, range(20)):
- db.execute(
- """INSERT OR REPLACE INTO gpkg_tile_matrix (table_name, zoom_level, matrix_width, matrix_height,
- tile_width, tile_height, pixel_x_size, pixel_y_size) VALUES(?, ?, ?, ?, ?, ?, ?, ?)""",
- (self.table_name, level, grid[0], grid[1], tile_size[0], tile_size[1], resolution, resolution))
- db.commit()
- db.close()
+ # Ensure that tile set exists here, don't overwrite a valid entry.
+ try:
+ db.execute("""
+ INSERT INTO gpkg_tile_matrix_set (table_name, srs_id, min_x, max_x, min_y, max_y)
+ VALUES (?, ?, ?, ?, ?, ?);
+ """, (
+ self.table_name, proj, self.bbox[0], self.bbox[2], self.bbox[1], self.bbox[3]))
+ except sqlite3.IntegrityError:
+ pass
+ db.commit()
+
+ tile_size = self.tile_grid.tile_size
+ for grid, resolution, level in zip(self.tile_grid.grid_sizes,
+ self.tile_grid.resolutions, range(20)):
+ db.execute("""
+ INSERT OR REPLACE INTO gpkg_tile_matrix (table_name, zoom_level, matrix_width,
+ matrix_height, tile_width, tile_height, pixel_x_size, pixel_y_size)
+ VALUES (?, ?, ?, ?, ?, ?, ?, ?);
+ """, (self.table_name, level, grid[0], grid[1], tile_size[0], tile_size[1], resolution, resolution))
+ db.commit()
+
+ if self.file_permissions is not None:
+ permission = int(self.file_permissions, base=8)
+            log.info("setting file permissions on GeoPackage: %s", permission)
+ os.chmod(self.geopackage_file, permission)
def is_cached(self, tile, dimensions=None):
if tile.coord is None:
@@ -522,7 +436,8 @@ AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]]\
class GeopackageLevelCache(TileCacheBase):
- def __init__(self, geopackage_dir, tile_grid, table_name, timeout=30, wal=False, coverage=None):
+ def __init__(self, geopackage_dir, tile_grid, table_name, timeout=30, wal=False, coverage=None,
+ directory_permissions=None, file_permissions=None):
super(GeopackageLevelCache, self).__init__(coverage)
md5 = hashlib.new('md5', geopackage_dir.encode('utf-8'), usedforsecurity=False)
self.lock_cache_id = 'gpkg-' + md5.hexdigest()
@@ -533,6 +448,8 @@ class GeopackageLevelCache(TileCacheBase):
self.wal = wal
self._geopackage = {}
self._geopackage_lock = threading.Lock()
+ self.directory_permissions = directory_permissions
+        self.file_permissions = file_permissions
    def _get_level(self, level):
        if level in self._geopackage:
@@ -548,7 +465,9 @@ class GeopackageLevelCache(TileCacheBase):
                with_timestamps=False,
                timeout=self.timeout,
                wal=self.wal,
-                coverage=self.coverage
+                coverage=self.coverage,
+                directory_permissions=self.directory_permissions,
+                file_permissions=self.file_permissions
)
return self._geopackage[level]
@@ -645,3 +564,109 @@ def is_close(a, b, rel_tol=1e-09, abs_tol=0.0):
"""
return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
+
+
+create_gpkg_contents_statement = """
+ CREATE TABLE IF NOT EXISTS gpkg_contents(
+ table_name TEXT NOT NULL PRIMARY KEY,
+ -- The name of the tiles, or feature table
+ data_type TEXT NOT NULL,
+ -- Type of data stored in the table: "features" per clause Features
+ -- (http://www.geopackage.org/spec/#features), "tiles" per clause Tiles
+ -- (http://www.geopackage.org/spec/#tiles), or an implementer-defined value for other data
+ -- tables per clause in an Extended GeoPackage
+ identifier TEXT UNIQUE,
+ -- A human-readable identifier (e.g. short name) for the table_name content
+ description TEXT DEFAULT '',
+ -- A human-readable description for the table_name content
+ last_change DATETIME NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
+ -- Timestamp value in ISO 8601 format as defined by the strftime function %Y-%m-%dT%H:%M:%fZ
+ -- format string applied to the current time
+ min_x DOUBLE,
+ -- Bounding box minimum easting or longitude for all content in table_name
+ min_y DOUBLE,
+ -- Bounding box minimum northing or latitude for all content in table_name
+ max_x DOUBLE,
+ -- Bounding box maximum easting or longitude for all content in table_name
+ max_y DOUBLE,
+ -- Bounding box maximum northing or latitude for all content in table_name
+ srs_id INTEGER,
+ -- Spatial Reference System ID: gpkg_spatial_ref_sys.srs_id; when data_type is features,
+ -- SHALL also match gpkg_geometry_columns.srs_id; When data_type is tiles, SHALL also match
+ -- gpkg_tile_matrix_set.srs.id
+ CONSTRAINT fk_gc_r_srs_id FOREIGN KEY (srs_id) REFERENCES gpkg_spatial_ref_sys(srs_id))
+"""
+
+create_spatial_ref_sys_statment = """
+ CREATE TABLE IF NOT EXISTS gpkg_spatial_ref_sys(
+ srs_name TEXT NOT NULL,
+ -- Human readable name of this SRS (Spatial Reference System)
+ srs_id INTEGER NOT NULL PRIMARY KEY,
+ -- Unique identifier for each Spatial Reference System within a GeoPackage
+ organization TEXT NOT NULL,
+ -- Case-insensitive name of the defining organization e.g. EPSG or epsg
+ organization_coordsys_id INTEGER NOT NULL,
+ -- Numeric ID of the Spatial Reference System assigned by the organization
+ definition TEXT NOT NULL,
+ -- Well-known Text representation of the Spatial Reference System
+ description TEXT)
+"""
+
+create_tile_matrix_statement = """
+ CREATE TABLE IF NOT EXISTS gpkg_tile_matrix
+ (table_name TEXT NOT NULL, -- Tile Pyramid User Data Table Name
+ zoom_level INTEGER NOT NULL, -- 0 <= zoom_level <= max_level for table_name
+ matrix_width INTEGER NOT NULL, -- Number of columns (>= 1) in tile matrix at this zoom level
+ matrix_height INTEGER NOT NULL, -- Number of rows (>= 1) in tile matrix at this zoom level
+ tile_width INTEGER NOT NULL, -- Tile width in pixels (>= 1) for this zoom level
+ tile_height INTEGER NOT NULL, -- Tile height in pixels (>= 1) for this zoom level
+ pixel_x_size DOUBLE NOT NULL, -- In t_table_name srid units or default meters for srid 0 (>0)
+ pixel_y_size DOUBLE NOT NULL, -- In t_table_name srid units or default meters for srid 0 (>0)
+ CONSTRAINT pk_ttm PRIMARY KEY (table_name, zoom_level),
+ CONSTRAINT fk_tmm_table_name FOREIGN KEY (table_name) REFERENCES gpkg_contents(table_name))
+"""
+
+create_tile_matrix_set_statement = """
+ CREATE TABLE IF NOT EXISTS gpkg_tile_matrix_set(
+ table_name TEXT NOT NULL PRIMARY KEY,
+ -- Tile Pyramid User Data Table Name
+ srs_id INTEGER NOT NULL,
+ -- Spatial Reference System ID: gpkg_spatial_ref_sys.srs_id
+ min_x DOUBLE NOT NULL,
+ -- Bounding box minimum easting or longitude for all content in table_name
+ min_y DOUBLE NOT NULL,
+ -- Bounding box minimum northing or latitude for all content in table_name
+ max_x DOUBLE NOT NULL,
+ -- Bounding box maximum easting or longitude for all content in table_name
+ max_y DOUBLE NOT NULL,
+ -- Bounding box maximum northing or latitude for all content in table_name
+ CONSTRAINT fk_gtms_table_name FOREIGN KEY (table_name) REFERENCES gpkg_contents(table_name),
+ CONSTRAINT fk_gtms_srs FOREIGN KEY (srs_id) REFERENCES gpkg_spatial_ref_sys (srs_id))
+"""
+
+create_table_statement = """
+ CREATE TABLE IF NOT EXISTS [{0}]
+ (id INTEGER PRIMARY KEY AUTOINCREMENT, -- Autoincrement primary key
+ zoom_level INTEGER NOT NULL, -- min(zoom_level) <= zoom_level <= max(zoom_level)
+ -- for t_table_name
+ tile_column INTEGER NOT NULL, -- 0 to tile_matrix matrix_width - 1
+ tile_row INTEGER NOT NULL, -- 0 to tile_matrix matrix_height - 1
+ tile_data BLOB NOT NULL, -- Of an image MIME type specified in clauses Tile
+ -- Encoding PNG, Tile Encoding JPEG, Tile Encoding WEBP
+ UNIQUE (zoom_level, tile_column, tile_row))
+"""
+
+proj_string_3857 = """
+ PROJCS["WGS 84 / Pseudo-Mercator",GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,\
+ AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],\
+ UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]],\
+ PROJECTION["Mercator_1SP"],PARAMETER["central_meridian",0],PARAMETER["scale_factor",1],PARAMETER["false_easting",0],\
+ PARAMETER["false_northing",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["X",EAST],AXIS["Y",NORTH],\
+ AUTHORITY["EPSG","3857"]]\
+"""
+
+proj_string_4326 = """
+ GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],\
+ AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,\
+ AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]]\
+"""
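The `file_permissions` handling added to `_initialize_gpkg` above parses an octal string and applies it with `os.chmod`. A minimal standalone sketch of that parse-then-chmod pattern (the helper name is illustrative, not MapProxy API; full permission bits assume a POSIX filesystem):

```python
import os
import stat
import tempfile

def apply_file_permissions(path, file_permissions):
    """Apply an octal permission string such as '644' to a file,
    mirroring the chmod call added to _initialize_gpkg above.
    Returns the resulting permission bits."""
    if file_permissions is not None:
        os.chmod(path, int(file_permissions, base=8))  # '644' -> 0o644
    return stat.S_IMODE(os.stat(path).st_mode)

# create a scratch file and restrict it to rw-r----- (owner rw, group r)
fd, path = tempfile.mkstemp()
os.close(fd)
apply_file_permissions(path, '640')
```

Passing `None` leaves the mode untouched, which matches the `is not None` guard in the diff.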
=====================================
mapproxy/cache/legend.py
=====================================
@@ -48,9 +48,11 @@ def legend_hash(identifier, scale):
class LegendCache(object):
- def __init__(self, cache_dir=None, file_ext='png'):
+ def __init__(self, cache_dir=None, file_ext='png', directory_permissions=None, file_permissions=None):
self.cache_dir = cache_dir
self.file_ext = file_ext
+ self.directory_permissions = directory_permissions
+ self.file_permissions = file_permissions
def store(self, legend):
if legend.stored:
@@ -59,12 +61,16 @@ class LegendCache(object):
if legend.location is None:
hash = legend_hash(legend.id, legend.scale)
legend.location = os.path.join(self.cache_dir, hash) + '.' + self.file_ext
- ensure_directory(legend.location)
+ ensure_directory(legend.location, self.directory_permissions)
data = legend.source.as_buffer(ImageOptions(format='image/' + self.file_ext), seekable=True)
data.seek(0)
log.debug('writing to %s' % (legend.location))
write_atomic(legend.location, data.read())
+ if self.file_permissions:
+ permission = int(self.file_permissions, base=8)
+            log.info("setting file permissions on legend file: %s", self.file_permissions)
+ os.chmod(legend.location, permission)
data.seek(0)
legend.stored = True
=====================================
mapproxy/cache/mbtiles.py
=====================================
@@ -45,11 +45,14 @@ def sqlite_datetime_to_timestamp(datetime):
class MBTilesCache(TileCacheBase):
supports_timestamp = False
- def __init__(self, mbtile_file, with_timestamps=False, timeout=30, wal=False, ttl=0, coverage=None):
+ def __init__(self, mbtile_file, with_timestamps=False, timeout=30, wal=False, ttl=0, coverage=None,
+ directory_permissions=None, file_permissions=None):
super(MBTilesCache, self).__init__(coverage)
md5 = hashlib.new('md5', mbtile_file.encode('utf-8'), usedforsecurity=False)
self.lock_cache_id = 'mbtiles-' + md5.hexdigest()
self.mbtile_file = mbtile_file
+ self.directory_permissions = directory_permissions
+ self.file_permissions = file_permissions
self.supports_timestamp = with_timestamps
self.ttl = with_timestamps and ttl or 0
self.timeout = timeout
@@ -75,58 +78,60 @@ class MBTilesCache(TileCacheBase):
def ensure_mbtile(self):
if not os.path.exists(self.mbtile_file):
with FileLock(self.mbtile_file + '.init.lck',
- remove_on_unlock=REMOVE_ON_UNLOCK):
+ remove_on_unlock=REMOVE_ON_UNLOCK, directory_permissions=self.directory_permissions):
if not os.path.exists(self.mbtile_file):
- ensure_directory(self.mbtile_file)
+ ensure_directory(self.mbtile_file, self.directory_permissions)
self._initialize_mbtile()
def _initialize_mbtile(self):
log.info('initializing MBTile file %s', self.mbtile_file)
- db = sqlite3.connect(self.mbtile_file)
-
- if self.wal:
- db.execute('PRAGMA journal_mode=wal')
-
- stmt = """
- CREATE TABLE tiles (
- zoom_level integer,
- tile_column integer,
- tile_row integer,
- tile_data blob
- """
+ with sqlite3.connect(self.mbtile_file) as db:
+ if self.wal:
+ db.execute('PRAGMA journal_mode=wal')
+
+ stmt = """
+ CREATE TABLE tiles (
+ zoom_level integer,
+ tile_column integer,
+ tile_row integer,
+ tile_data blob
+ """
- if self.supports_timestamp:
+ if self.supports_timestamp:
+ stmt += """
+ , last_modified datetime DEFAULT (datetime('now','localtime'))
+ """
stmt += """
- , last_modified datetime DEFAULT (datetime('now','localtime'))
+ );
"""
- stmt += """
- );
- """
- db.execute(stmt)
-
- db.execute("""
- CREATE TABLE metadata (name text, value text);
- """)
- db.execute("""
- CREATE UNIQUE INDEX idx_tile on tiles
- (zoom_level, tile_column, tile_row);
- """)
- db.commit()
- db.close()
+ db.execute(stmt)
+
+ db.execute("""
+ CREATE TABLE metadata (name text, value text);
+ """)
+ db.execute("""
+ CREATE UNIQUE INDEX idx_tile on tiles
+ (zoom_level, tile_column, tile_row);
+ """)
+ db.commit()
+
+ if self.file_permissions:
+ permission = int(self.file_permissions, base=8)
+            log.info("setting file permissions on MBTile file: %s", permission)
+ os.chmod(self.mbtile_file, permission)
def update_metadata(self, name='', description='', version=1, overlay=True, format='png'):
- db = sqlite3.connect(self.mbtile_file)
- db.execute("""
+ self.db.execute("""
CREATE TABLE IF NOT EXISTS metadata (name text, value text);
""")
- db.execute("""DELETE FROM metadata;""")
+ self.db.execute("""DELETE FROM metadata;""")
if overlay:
layer_type = 'overlay'
else:
layer_type = 'baselayer'
- db.executemany("""
+ self.db.executemany("""
INSERT INTO metadata (name, value) VALUES (?,?)
""",
(
@@ -137,8 +142,7 @@ class MBTilesCache(TileCacheBase):
('format', format),
)
)
- db.commit()
- db.close()
+ self.db.commit()
def is_cached(self, tile, dimensions=None):
if tile.coord is None:
@@ -311,11 +315,14 @@ class MBTilesCache(TileCacheBase):
class MBTilesLevelCache(TileCacheBase):
supports_timestamp = True
- def __init__(self, mbtiles_dir, timeout=30, wal=False, ttl=0, coverage=None):
+ def __init__(self, mbtiles_dir, timeout=30, wal=False, ttl=0, coverage=None,
+ directory_permissions=None, file_permissions=None):
super(MBTilesLevelCache, self).__init__(coverage)
md5 = hashlib.new('md5', mbtiles_dir.encode('utf-8'), usedforsecurity=False)
self.lock_cache_id = 'sqlite-' + md5.hexdigest()
self.cache_dir = mbtiles_dir
+ self.directory_permissions = directory_permissions
+ self.file_permissions = file_permissions
self._mbtiles = {}
self.timeout = timeout
self.wal = wal
@@ -335,7 +342,9 @@ class MBTilesLevelCache(TileCacheBase):
timeout=self.timeout,
wal=self.wal,
ttl=self.ttl,
- coverage=self.coverage
+ coverage=self.coverage,
+ directory_permissions=self.directory_permissions,
+ file_permissions=self.file_permissions
)
return self._mbtiles[level]
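One detail worth noting about the `with sqlite3.connect(...)` rewrite of `_initialize_mbtile` above: sqlite3's connection context manager commits on clean exit (or rolls back on error) but does not close the connection, which is why the chmod can still run afterwards. A small sketch of that behaviour:

```python
import sqlite3

with sqlite3.connect(':memory:') as db:
    # the context manager commits the transaction on clean exit,
    # but it does NOT close the connection
    db.execute('CREATE TABLE tiles (zoom_level INTEGER)')
    db.execute('INSERT INTO tiles VALUES (1)')

# the connection is still usable after the with-block;
# only an explicit close() releases it
rows = db.execute('SELECT count(*) FROM tiles').fetchone()[0]
db.close()
```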
=====================================
mapproxy/cache/path.py
=====================================
@@ -86,7 +86,8 @@ def level_part(level):
return "%02d" % level
-def tile_location_tc(tile, cache_dir, file_ext, create_dir=False, dimensions=None):
+def tile_location_tc(tile, cache_dir, file_ext, create_dir=False, dimensions=None,
+ directory_permissions=None):
"""
Return the location of the `tile`. Caches the result as ``location``
property of the `tile`.
@@ -113,11 +114,12 @@ def tile_location_tc(tile, cache_dir, file_ext, create_dir=False, dimensions=Non
"%03d.%s" % (int(y) % 1000, file_ext))
tile.location = os.path.join(*parts)
if create_dir:
- ensure_directory(tile.location)
+ ensure_directory(tile.location, directory_permissions)
return tile.location
-def tile_location_mp(tile, cache_dir, file_ext, create_dir=False, dimensions=None):
+def tile_location_mp(tile, cache_dir, file_ext, create_dir=False, dimensions=None,
+ directory_permissions=None):
"""
Return the location of the `tile`. Caches the result as ``location``
property of the `tile`.
@@ -143,11 +145,12 @@ def tile_location_mp(tile, cache_dir, file_ext, create_dir=False, dimensions=Non
"%04d.%s" % (int(y) % 10000, file_ext))
tile.location = os.path.join(*parts)
if create_dir:
- ensure_directory(tile.location)
+ ensure_directory(tile.location, directory_permissions)
return tile.location
-def tile_location_tms(tile, cache_dir, file_ext, create_dir=False, dimensions=None):
+def tile_location_tms(tile, cache_dir, file_ext, create_dir=False, dimensions=None,
+ directory_permissions=None):
"""
Return the location of the `tile`. Caches the result as ``location``
property of the `tile`.
@@ -167,11 +170,12 @@ def tile_location_tms(tile, cache_dir, file_ext, create_dir=False, dimensions=No
str(x), str(y) + '.' + file_ext
)
if create_dir:
- ensure_directory(tile.location)
+ ensure_directory(tile.location, directory_permissions)
return tile.location
-def tile_location_reverse_tms(tile, cache_dir, file_ext, create_dir=False, dimensions=None):
+def tile_location_reverse_tms(tile, cache_dir, file_ext, create_dir=False, dimensions=None,
+ directory_permissions=None):
"""
Return the location of the `tile`. Caches the result as ``location``
property of the `tile`.
@@ -190,7 +194,7 @@ def tile_location_reverse_tms(tile, cache_dir, file_ext, create_dir=False, dimen
cache_dir, dimensions_part(dimensions), str(y), str(x), str(z) + '.' + file_ext
)
if create_dir:
- ensure_directory(tile.location)
+ ensure_directory(tile.location, directory_permissions)
return tile.location
@@ -198,7 +202,8 @@ def level_location_tms(level, cache_dir, dimensions=None):
return level_location(str(level), cache_dir=cache_dir)
-def tile_location_quadkey(tile, cache_dir, file_ext, create_dir=False, dimensions=None):
+def tile_location_quadkey(tile, cache_dir, file_ext, create_dir=False, dimensions=None,
+ directory_permissions=None):
"""
Return the location of the `tile`. Caches the result as ``location``
property of the `tile`.
@@ -226,7 +231,7 @@ def tile_location_quadkey(tile, cache_dir, file_ext, create_dir=False, dimension
cache_dir, quadKey + '.' + file_ext
)
if create_dir:
- ensure_directory(tile.location)
+ ensure_directory(tile.location, directory_permissions)
return tile.location
@@ -235,7 +240,8 @@ def no_level_location(level, cache_dir, dimensions=None):
raise NotImplementedError('cache does not have any level location')
-def tile_location_arcgiscache(tile, cache_dir, file_ext, create_dir=False, dimensions=None):
+def tile_location_arcgiscache(tile, cache_dir, file_ext, create_dir=False, dimensions=None,
+ directory_permissions=None):
"""
Return the location of the `tile`. Caches the result as ``location``
property of the `tile`.
@@ -253,7 +259,7 @@ def tile_location_arcgiscache(tile, cache_dir, file_ext, create_dir=False, dimen
parts = (cache_dir, 'L%02d' % z, 'R%08x' % y, 'C%08x.%s' % (x, file_ext))
tile.location = os.path.join(*parts)
if create_dir:
- ensure_directory(tile.location)
+ ensure_directory(tile.location, directory_permissions)
return tile.location
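All of the `tile_location_*` variants above now forward `directory_permissions` to `ensure_directory`. A simplified sketch of what such a helper can look like — this is an assumption-laden reduction; the real `mapproxy.util.fs.ensure_directory` also handles creation races and applies the mode to every directory it creates, not just the innermost one:

```python
import os
import stat
import tempfile

def ensure_directory(file_name, directory_permissions=None):
    # Create the parent directory of file_name if it is missing and,
    # when an octal string such as '750' is given, chmod it explicitly
    # so the effective mode does not depend on the process umask.
    dir_name = os.path.dirname(file_name)
    if dir_name and not os.path.isdir(dir_name):
        os.makedirs(dir_name, exist_ok=True)
        if directory_permissions is not None:
            os.chmod(dir_name, int(directory_permissions, base=8))

# e.g. the tc layout nests level/row/column directories under the cache dir
base = tempfile.mkdtemp()
ensure_directory(os.path.join(base, '05', '123', 'tile.png'), '750')
```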
=====================================
mapproxy/config/loader.py
=====================================
@@ -732,7 +732,20 @@ class WMSSourceConfiguration(SourceConfiguration):
prefix = 'file://'
url = prefix + context.globals.abspath(url[7:])
lg_client = WMSLegendURLClient(url)
- legend_cache = LegendCache(cache_dir=cache_dir)
+
+ global_directory_permissions = context.globals.get_value('directory_permissions', None,
+ global_key='cache.directory_permissions')
+ if global_directory_permissions:
+ log.info(f'Using global directory permission configuration for static legend cache:'
+ f' {global_directory_permissions}')
+
+ global_file_permissions = context.globals.get_value(
+ 'file_permissions', None, global_key='cache.file_permissions')
+ if global_file_permissions:
+ log.info(f'Using global file permission configuration for static legend cache: {global_file_permissions}')
+
+ legend_cache = LegendCache(cache_dir=cache_dir, directory_permissions=global_directory_permissions,
+ file_permissions=global_file_permissions)
return WMSLegendSource([lg_client], legend_cache, static=True)
def fi_xslt_transformer(self, conf, context):
@@ -883,7 +896,21 @@ class WMSSourceConfiguration(SourceConfiguration):
http_client, lg_request.url = self.http_client(lg_request.url)
lg_client = WMSLegendClient(lg_request, http_client=http_client)
lg_clients.append(lg_client)
- legend_cache = LegendCache(cache_dir=cache_dir)
+
+ global_directory_permissions = self.context.globals.get_value('directory_permissions', self.conf,
+ global_key='cache.directory_permissions')
+ if global_directory_permissions:
+ log.info(f'Using global directory permission configuration for legend cache:'
+ f' {global_directory_permissions}')
+
+ global_file_permissions = self.context.globals.get_value('file_permissions', self.conf,
+ global_key='cache.file_permissions')
+ if global_file_permissions:
+ log.info(f'Using global file permission configuration for legend cache:'
+ f' {global_file_permissions}')
+
+ legend_cache = LegendCache(cache_dir=cache_dir, directory_permissions=global_directory_permissions,
+ file_permissions=global_file_permissions)
lg_source = WMSLegendSource(lg_clients, legend_cache)
return lg_source
@@ -1090,6 +1117,36 @@ class CacheConfiguration(ConfigurationBase):
return self.context.globals.get_path('cache_dir', self.conf,
global_key='cache.base_dir')
+ @memoize
+ def directory_permissions(self):
+ directory_permissions = self.conf.get('cache', {}).get('directory_permissions')
+ if directory_permissions:
+ log.info('Using cache specific directory permission configuration for %s: %s',
+ self.conf['name'], directory_permissions)
+ return directory_permissions
+
+ global_permissions = self.context.globals.get_value('directory_permissions', self.conf,
+ global_key='cache.directory_permissions')
+ if global_permissions:
+ log.info('Using global directory permission configuration for %s: %s',
+ self.conf['name'], global_permissions)
+ return global_permissions
+
+ @memoize
+ def file_permissions(self):
+ file_permissions = self.conf.get('cache', {}).get('file_permissions')
+ if file_permissions:
+ log.info('Using cache specific file permission configuration for %s: %s',
+ self.conf['name'], file_permissions)
+ return file_permissions
+
+ global_permissions = self.context.globals.get_value('file_permissions', self.conf,
+ global_key='cache.file_permissions')
+ if global_permissions:
+ log.info('Using global file permission configuration for %s: %s',
+ self.conf['name'], global_permissions)
+ return global_permissions
+
@memoize
def has_multiple_grids(self):
return len(self.grid_confs()) > 1
@@ -1132,7 +1189,9 @@ class CacheConfiguration(ConfigurationBase):
image_opts=image_opts,
directory_layout=directory_layout,
link_single_color_images=link_single_color_images,
- coverage=coverage
+ coverage=coverage,
+ directory_permissions=self.directory_permissions(),
+ file_permissions=self.file_permissions()
)
def _mbtiles_cache(self, grid_conf, image_opts):
@@ -1155,7 +1214,9 @@ class CacheConfiguration(ConfigurationBase):
mbfile_path,
timeout=sqlite_timeout,
wal=wal,
- coverage=coverage
+ coverage=coverage,
+ directory_permissions=self.directory_permissions(),
+ file_permissions=self.file_permissions()
)
def _geopackage_cache(self, grid_conf, image_opts):
@@ -1190,11 +1251,21 @@ class CacheConfiguration(ConfigurationBase):
if levels:
return GeopackageLevelCache(
- cache_dir, grid_conf.tile_grid(), table_name, coverage=coverage
+ cache_dir,
+ grid_conf.tile_grid(),
+ table_name,
+ coverage=coverage,
+ directory_permissions=self.directory_permissions(),
+ file_permissions=self.file_permissions()
)
else:
return GeopackageCache(
- gpkg_file_path, grid_conf.tile_grid(), table_name, coverage=coverage
+ gpkg_file_path,
+ grid_conf.tile_grid(),
+ table_name,
+ coverage=coverage,
+ directory_permissions=self.directory_permissions(),
+ file_permissions=self.file_permissions()
)
def _azureblob_cache(self, grid_conf, image_opts):
@@ -1299,7 +1370,9 @@ class CacheConfiguration(ConfigurationBase):
timeout=sqlite_timeout,
wal=wal,
ttl=self.conf.get('cache', {}).get('ttl', 0),
- coverage=coverage
+ coverage=coverage,
+ directory_permissions=self.directory_permissions(),
+ file_permissions=self.file_permissions()
)
def _couchdb_cache(self, grid_conf, image_opts):
@@ -1318,9 +1391,15 @@ class CacheConfiguration(ConfigurationBase):
tile_id = self.conf['cache'].get('tile_id')
coverage = self.coverage()
- return CouchDBCache(url=url, db_name=db_name,
- file_ext=image_opts.format.ext, tile_grid=grid_conf.tile_grid(),
- md_template=md_template, tile_id_template=tile_id, coverage=coverage)
+ return CouchDBCache(
+ url=url,
+ db_name=db_name,
+ file_ext=image_opts.format.ext,
+ tile_grid=grid_conf.tile_grid(),
+ md_template=md_template,
+ tile_id_template=tile_id,
+ coverage=coverage
+ )
def _riak_cache(self, grid_conf, image_opts):
from mapproxy.cache.riak import RiakCache
@@ -1404,9 +1483,19 @@ class CacheConfiguration(ConfigurationBase):
version = self.conf['cache']['version']
if version == 1:
- return CompactCacheV1(cache_dir=cache_dir, coverage=coverage)
+ return CompactCacheV1(
+ cache_dir=cache_dir,
+ coverage=coverage,
+ directory_permissions=self.directory_permissions(),
+ file_permissions=self.file_permissions()
+ )
elif version == 2:
- return CompactCacheV2(cache_dir=cache_dir, coverage=coverage)
+ return CompactCacheV2(
+ cache_dir=cache_dir,
+ coverage=coverage,
+ directory_permissions=self.directory_permissions(),
+ file_permissions=self.file_permissions()
+ )
raise ConfigurationError("compact cache only supports version 1 or 2")
@@ -1666,8 +1755,15 @@ class CacheConfiguration(ConfigurationBase):
if not lock_dir:
lock_dir = os.path.join(cache_dir, 'tile_locks')
+ global_directory_permissions = self.context.globals.get_value('directory_permissions', self.conf,
+ global_key='cache.directory_permissions')
+ if global_directory_permissions:
+ log.info(f'Using global directory permission configuration for tile locks:'
+ f' {global_directory_permissions}')
+
lock_timeout = self.context.globals.get_value('http.client_timeout', {})
- locker = TileLocker(lock_dir, lock_timeout, identifier + '_renderd')
+ locker = TileLocker(lock_dir, lock_timeout, identifier + '_renderd',
+ global_directory_permissions)
# TODO band_merger
tile_creator_class = partial(RenderdTileCreator, renderd_address,
priority=priority, tile_locker=locker)
@@ -1679,10 +1775,17 @@ class CacheConfiguration(ConfigurationBase):
if isinstance(cache, DummyCache):
locker = DummyLocker()
else:
+ global_directory_permissions = self.context.globals.get_value('directory_permissions', self.conf,
+ global_key='cache.directory_permissions')
+ if global_directory_permissions:
+ log.info(f'Using global directory permission configuration for tile locks:'
+ f' {global_directory_permissions}')
+
locker = TileLocker(
lock_dir=self.lock_dir(),
lock_timeout=self.context.globals.get_value('http.client_timeout', {}),
lock_cache_id=cache.lock_cache_id,
+ directory_permissions=global_directory_permissions
)
mgr = TileManager(tile_grid, cache, sources, image_opts.format.ext,
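The `directory_permissions()` / `file_permissions()` accessors added to `CacheConfiguration` above implement a two-level lookup: a cache-specific value wins over the global fallback. Reduced to plain Python (function and variable names are illustrative only):

```python
def resolve_permission(cache_conf, globals_conf, key):
    # cache-specific setting first, then the globals.cache fallback,
    # mirroring the @memoize accessors in CacheConfiguration
    value = cache_conf.get('cache', {}).get(key)
    if value:
        return value
    return globals_conf.get('cache', {}).get(key)

cache_conf = {'cache': {'file_permissions': '640'}}
globals_conf = {'cache': {'file_permissions': '644',
                          'directory_permissions': '755'}}
```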
=====================================
mapproxy/config/spec.py
=====================================
@@ -128,6 +128,8 @@ cache_types = {
'use_grid_names': bool(),
'directory': str(),
'tile_lock_dir': str(),
+ 'directory_permissions': str(),
+ 'file_permissions': str(),
}),
'sqlite': combined(cache_commons, {
'directory': str(),
@@ -135,12 +137,16 @@ cache_types = {
'sqlite_wal': bool(),
'tile_lock_dir': str(),
'ttl': int(),
+ 'directory_permissions': str(),
+ 'file_permissions': str(),
}),
'mbtiles': combined(cache_commons, {
'filename': str(),
'sqlite_timeout': number(),
'sqlite_wal': bool(),
'tile_lock_dir': str(),
+ 'directory_permissions': str(),
+ 'file_permissions': str(),
}),
'geopackage': combined(cache_commons, {
'filename': str(),
@@ -148,6 +154,8 @@ cache_types = {
'tile_lock_dir': str(),
'table_name': str(),
'levels': bool(),
+ 'directory_permissions': str(),
+ 'file_permissions': str(),
}),
'couchdb': combined(cache_commons, {
'url': str(),
@@ -196,6 +204,8 @@ cache_types = {
'directory': str(),
required('version'): number(),
'tile_lock_dir': str(),
+ 'directory_permissions': str(),
+ 'file_permissions': str(),
}),
'azureblob': combined(cache_commons, {
'connection_string': str(),
@@ -395,6 +405,8 @@ mapproxy_yaml_spec = {
'base_dir': str(),
'lock_dir': str(),
'tile_lock_dir': str(),
+ 'directory_permissions': str(),
+ 'file_permissions': str(),
'meta_size': [number()],
'meta_buffer': number(),
'bulk_meta_tiles': bool(),
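With the spec additions above, both keys accept octal strings and can be set globally or per cache. A hypothetical mapproxy.yaml fragment (cache and source names are made up for illustration):

```yaml
globals:
  cache:
    base_dir: ./cache_data
    directory_permissions: '755'   # octal string, applied to created directories
    file_permissions: '644'        # octal string, applied to created cache files

caches:
  osm_cache:
    grids: [GLOBAL_WEBMERCATOR]
    sources: [osm_wms]
    cache:
      type: mbtiles
      filename: osm.mbtiles
      # per-cache values take precedence over the globals above
      directory_permissions: '750'
      file_permissions: '640'
```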
=====================================
mapproxy/service/templates/demo/tms_demo.html
=====================================
@@ -42,7 +42,7 @@ jscript_functions=None
// Define TMS as specialized XYZ service: origin lower-left and may have custom grid
const source = new ol.source.XYZ({
url: '../tms/1.0.0/{{"/".join(layer.md["name_path"])}}/{z}/{x}/{-y}.' + format,
- opaque: transparent,
+ opaque: !transparent,
projection: "{{srs}}",
maxResolution: {{resolutions[0]}},
tileGrid: new ol.tilegrid.TileGrid({
=====================================
mapproxy/test/helper.py
=====================================
@@ -15,6 +15,7 @@
from __future__ import print_function
+import shutil
import tempfile
import os
import re
@@ -109,6 +110,28 @@ class TempFile(TempFiles):
return TempFiles.__enter__(self)[0]
+class TempDir:
+ def __enter__(self):
+ self.tmp_dir = tempfile.mkdtemp()
+ return self.tmp_dir
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ if os.path.exists(self.tmp_dir):
+ shutil.rmtree(self.tmp_dir, ignore_errors=True)
+
+
+class ChangeWorkingDir:
+ def __init__(self, new_dir):
+ self.new_dir = new_dir
+
+ def __enter__(self):
+ self.old_dir = os.getcwd()
+ os.chdir(self.new_dir)
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ os.chdir(self.old_dir)
+
+
class LogMock(object):
log_methods = ('info', 'debug', 'warn', 'error', 'fail')
@@ -163,7 +186,8 @@ def assert_files_in_dir(dir, expected, glob=None):
else:
files = os.listdir(dir)
files.sort()
- assert sorted(expected) == files
+ sorted_expected = sorted(expected)
+ assert sorted_expected == files, f'{", ".join(sorted_expected)} ~= {", ".join(files)}'
def validate_with_dtd(doc, dtd_name, dtd_basedir=None):
@@ -253,3 +277,9 @@ def capture(bytes=False):
finally:
sys.stdout = backup_stdout
sys.stderr = backup_stderr
+
+
+def assert_permissions(file_path, permissions):
+ actual_permissions = oct(os.stat(file_path).st_mode & 0o777)
+ desired_permissions = oct(int(permissions, base=8))
+ assert actual_permissions == desired_permissions, f'{actual_permissions} ~= {desired_permissions}'
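The new helpers compose naturally in tests. A minimal, self-contained sketch of the same pattern (the `TempDir` and `assert_permissions` definitions are inlined here to mirror the diff rather than imported from `mapproxy.test.helper`):

```python
import os
import shutil
import tempfile


class TempDir:
    """Create a temporary directory, remove it on exit (mirrors the diff)."""
    def __enter__(self):
        self.tmp_dir = tempfile.mkdtemp()
        return self.tmp_dir

    def __exit__(self, exc_type, exc_val, exc_tb):
        shutil.rmtree(self.tmp_dir, ignore_errors=True)


def assert_permissions(file_path, permissions):
    # compare only the lower permission bits, both sides as octal strings
    actual = oct(os.stat(file_path).st_mode & 0o777)
    desired = oct(int(permissions, base=8))
    assert actual == desired, f'{actual} != {desired}'


with TempDir() as tmp:
    sub = os.path.join(tmp, 'data')
    os.mkdir(sub)
    os.chmod(sub, 0o700)  # chmod is explicit, so the umask does not interfere
    assert_permissions(sub, '700')
    kept = tmp
assert not os.path.exists(kept)  # directory is cleaned up on exit
```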
=====================================
mapproxy/test/unit/test_cache.py
=====================================
@@ -21,6 +21,7 @@ import shutil
import tempfile
import threading
import time
+import stat
from io import BytesIO
from collections import defaultdict
@@ -126,6 +127,7 @@ class RecordFileCache(FileCache):
self.stored_tiles = set()
self.loaded_tiles = counting_set([])
self.is_cached_call_count = 0
+ self.directory_permissions = kw.get('directory_permissions')
def store_tile(self, tile, dimensions=None):
assert tile.coord not in self.stored_tiles
@@ -165,6 +167,16 @@ def tile_locker(tmpdir):
return TileLocker(tmpdir.join('lock').strpath, 10, "id")
+ at pytest.fixture
+def tile_locker_restricted(tmpdir):
+ return TileLocker(tmpdir.join('lock').strpath, 10, "id", '666')
+
+
+ at pytest.fixture
+def tile_locker_permissive(tmpdir):
+ return TileLocker(tmpdir.join('lock').strpath, 10, "id", '775')
+
+
@pytest.fixture
def mock_tile_client():
return MockTileClient()
@@ -428,6 +440,10 @@ class TestTileManagerLocking(object):
def file_cache(self, tmpdir):
return RecordFileCache(tmpdir.strpath, 'png')
+ @pytest.fixture
+ def file_cache_permissive(self, tmpdir):
+ return RecordFileCache(tmpdir.strpath, 'png', directory_permissions='775')
+
@pytest.fixture
def tile_mgr(self, file_cache, slow_source, tile_locker):
grid = TileGrid(SRS(4326), bbox=[-180, -90, 180, 90])
@@ -437,6 +453,24 @@ class TestTileManagerLocking(object):
locker=tile_locker,
)
+ @pytest.fixture
+ def tile_mgr_restricted(self, file_cache, slow_source, tile_locker_restricted):
+ grid = TileGrid(SRS(4326), bbox=[-180, -90, 180, 90])
+ image_opts = ImageOptions(format='image/png')
+ return TileManager(grid, file_cache, [slow_source], 'png',
+ meta_size=[2, 2], meta_buffer=0, image_opts=image_opts,
+ locker=tile_locker_restricted,
+ )
+
+ @pytest.fixture
+ def tile_mgr_permissive(self, file_cache_permissive, slow_source, tile_locker_permissive):
+ grid = TileGrid(SRS(4326), bbox=[-180, -90, 180, 90])
+ image_opts = ImageOptions(format='image/png')
+ return TileManager(grid, file_cache_permissive, [slow_source], 'png',
+ meta_size=[2, 2], meta_buffer=0, image_opts=image_opts,
+ locker=tile_locker_permissive,
+ )
+
def test_get_single(self, tile_mgr, file_cache, slow_source):
tile_mgr.creator().create_tiles([Tile((0, 0, 1)), Tile((1, 0, 1))])
assert file_cache.stored_tiles == set([(0, 0, 1), (1, 0, 1)])
@@ -458,6 +492,23 @@ class TestTileManagerLocking(object):
assert os.path.exists(file_cache.tile_location(Tile((0, 0, 1))))
+ def test_insufficient_permissions_on_dir(self, tile_mgr_restricted):
+ # TileLocker has restrictive permissions set for creating directories
+ try:
+ tile_mgr_restricted.creator().create_tiles([Tile((0, 0, 1)), Tile((1, 0, 1))])
+ except Exception as e:
+ assert 'Could not create Lock-file, wrong permissions on lock directory?' in e.args[0]
+ else:
+ assert False, 'no PermissionError raised'
+
+ def test_permissive_grants(self, tile_mgr_permissive, file_cache_permissive):
+ tile_mgr_permissive.creator().create_tiles([Tile((0, 0, 1)), Tile((1, 0, 1))])
+ location = file_cache_permissive.tile_location(Tile((0, 0, 1)))
+ assert os.path.exists(location)
+ dir = os.path.dirname(location)
+ mode = os.stat(dir).st_mode
+ assert stat.filemode(mode) == 'drwxrwxr-x'
+
class TestTileManagerMultipleSources(object):
@pytest.fixture
=====================================
mapproxy/test/unit/test_cache_compact.py
=====================================
@@ -28,6 +28,7 @@ from mapproxy.cache.tile import Tile
from mapproxy.image import ImageSource
from mapproxy.image.opts import ImageOptions
from mapproxy.script.defrag import defrag_compact_cache
+from mapproxy.test.helper import assert_permissions
from mapproxy.test.unit.test_cache_tile import TileCacheTestBase
@@ -130,6 +131,22 @@ class TestCompactCacheV1(TileCacheTestBase):
assert_header([4000 + 4, 6000 + 4 + 3000 + 4, 1000 + 4], 6000) # still contains bytes from overwritten tile
+class TestCompactCacheV1Permissions(TileCacheTestBase):
+ def setup_method(self):
+ TileCacheTestBase.setup_method(self)
+ self.cache = CompactCacheV1(
+ cache_dir=self.cache_dir,
+ file_permissions='700'
+ )
+
+ def test_permission(self):
+ coord = (0, 0, 0)
+ self.cache.store_tile(self.create_tile(coord=coord))
+ bundle = self.cache._get_bundle(coord)
+ assert_permissions(bundle.data().filename, '700')
+ assert_permissions(bundle.index().filename, '700')
+
+
class TestCompactCacheV2(TileCacheTestBase):
always_loads_metadata = True
@@ -214,6 +231,21 @@ class TestCompactCacheV2(TileCacheTestBase):
assert_header([4000 + 4, 6000 + 4 + 3000 + 4, 1000 + 4], 6000) # still contains bytes from overwritten tile
+class TestCompactCacheV2Permissions(TileCacheTestBase):
+ def setup_method(self):
+ TileCacheTestBase.setup_method(self)
+ self.cache = CompactCacheV2(
+ cache_dir=self.cache_dir,
+ file_permissions='700'
+ )
+
+ def test_permission(self):
+ coord = (0, 0, 0)
+ self.cache.store_tile(self.create_tile(coord=coord))
+ bundle = self.cache._get_bundle(coord)
+ assert_permissions(bundle.filename, '700')
+
+
class mockProgressLog(object):
def __init__(self):
self.logs = []
=====================================
mapproxy/test/unit/test_cache_geopackage.py
=====================================
@@ -28,7 +28,7 @@ from mapproxy.grid import tile_grid, TileGrid
from mapproxy.image import ImageSource
from mapproxy.layer import MapExtent
from mapproxy.srs import SRS
-from mapproxy.test.helper import assert_files_in_dir
+from mapproxy.test.helper import assert_files_in_dir, assert_permissions
from mapproxy.test.unit.test_cache_tile import TileCacheTestBase
from mapproxy.util.coverage import coverage
@@ -254,3 +254,27 @@ class TestGeopackageCacheInitErrors(object):
except ValueError as ve:
error_msg = ve
assert "res is improperly configured." in str(error_msg)
+
+
+class TestGeopackageCachePermissions(TileCacheTestBase):
+
+ always_loads_metadata = True
+
+ def setup_method(self):
+ TileCacheTestBase.setup_method(self)
+ self.gpkg_file = os.path.join(self.cache_dir, 'tmp.gpkg')
+ self.table_name = 'test_tiles'
+ self.cache = GeopackageCache(
+ self.gpkg_file,
+ tile_grid=tile_grid(3857, name='global-webmercator'),
+ table_name=self.table_name,
+ file_permissions='700'
+ )
+
+ def teardown_method(self):
+ if self.cache:
+ self.cache.cleanup()
+ TileCacheTestBase.teardown_method(self)
+
+ def test_permissions(self):
+ assert_permissions(self.gpkg_file, '700')
=====================================
mapproxy/test/unit/test_cache_mbtile.py
=====================================
@@ -0,0 +1,153 @@
+# This file is part of the MapProxy project.
+# Copyright (C) 2024 terrestris <https://terrestris.de>
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import sqlite3
+import threading
+import time
+
+from io import BytesIO
+
+from mapproxy.cache.mbtiles import MBTilesCache, MBTilesLevelCache
+from mapproxy.cache.tile import Tile
+from mapproxy.image import ImageSource
+from mapproxy.test.helper import assert_files_in_dir, assert_permissions
+from mapproxy.test.image import create_tmp_image_buf
+from mapproxy.test.unit.test_cache_tile import TileCacheTestBase
+
+tile_image = create_tmp_image_buf((256, 256), color='blue')
+tile_image2 = create_tmp_image_buf((256, 256), color='red')
+
+
+class TestMBTileCache(TileCacheTestBase):
+ def setup_method(self):
+ TileCacheTestBase.setup_method(self)
+ self.cache = MBTilesCache(os.path.join(self.cache_dir, 'tmp.mbtiles'))
+
+ def teardown_method(self):
+ if self.cache:
+ self.cache.cleanup()
+ TileCacheTestBase.teardown_method(self)
+
+ def test_default_coverage(self):
+ assert self.cache.coverage is None
+
+ def test_load_empty_tileset(self):
+ assert self.cache.load_tiles([Tile(None)]) is True
+ assert self.cache.load_tiles([Tile(None), Tile(None), Tile(None)]) is True
+
+ def test_load_more_than_2000_tiles(self):
+ # prepare data
+ for i in range(0, 2010):
+ assert self.cache.store_tile(Tile((i, 0, 10), ImageSource(BytesIO(b'foo'))))
+
+ tiles = [Tile((i, 0, 10)) for i in range(0, 2010)]
+ assert self.cache.load_tiles(tiles)
+
+ def test_timeouts(self):
+ self.cache._db_conn_cache.db = sqlite3.connect(self.cache.mbtile_file, timeout=0.05)
+
+ def block():
+ # block database by delaying the commit
+ db = sqlite3.connect(self.cache.mbtile_file)
+ cur = db.cursor()
+ stmt = "INSERT OR REPLACE INTO tiles (zoom_level, tile_column, tile_row, tile_data) VALUES (?,?,?,?)"
+ cur.execute(stmt, (3, 1, 1, '1234'))
+ time.sleep(0.2)
+ db.commit()
+
+ try:
+ assert self.cache.store_tile(self.create_tile((0, 0, 1))) is True
+
+ t = threading.Thread(target=block)
+ t.start()
+ time.sleep(0.05)
+ assert self.cache.store_tile(self.create_tile((0, 0, 1))) is False
+ finally:
+ t.join()
+
+ assert self.cache.store_tile(self.create_tile((0, 0, 1))) is True
+
+
+class TestMBTileCachePermissions(TileCacheTestBase):
+ def setup_method(self):
+ TileCacheTestBase.setup_method(self)
+ self.cache = MBTilesCache(os.path.join(self.cache_dir, 'tmp.mbtiles'), file_permissions='700')
+
+ def teardown_method(self):
+ if self.cache:
+ self.cache.cleanup()
+ TileCacheTestBase.teardown_method(self)
+
+ def test_permissions(self):
+ assert_permissions(self.cache.mbtile_file, '700')
+
+
+class TestMBTileLevelCache(TileCacheTestBase):
+ always_loads_metadata = True
+
+ def setup_method(self):
+ TileCacheTestBase.setup_method(self)
+ self.cache = MBTilesLevelCache(self.cache_dir)
+
+ def test_default_coverage(self):
+ assert self.cache.coverage is None
+
+ def test_level_files(self):
+ assert_files_in_dir(self.cache_dir, [])
+
+ self.cache.store_tile(self.create_tile((0, 0, 1)))
+ assert_files_in_dir(self.cache_dir, ['1.mbtile'], glob='*.mbtile')
+
+ self.cache.store_tile(self.create_tile((0, 0, 5)))
+ assert_files_in_dir(self.cache_dir, ['1.mbtile', '5.mbtile'], glob='*.mbtile')
+
+ def test_remove_level_files(self):
+ self.cache.store_tile(self.create_tile((0, 0, 1)))
+ self.cache.store_tile(self.create_tile((0, 0, 2)))
+ assert_files_in_dir(self.cache_dir, ['1.mbtile', '2.mbtile'], glob='*.mbtile')
+
+ self.cache.remove_level_tiles_before(1, timestamp=0)
+ assert_files_in_dir(self.cache_dir, ['2.mbtile'], glob='*.mbtile')
+
+ def test_remove_level_tiles_before(self):
+ self.cache.store_tile(self.create_tile((0, 0, 1)))
+ self.cache.store_tile(self.create_tile((0, 0, 2)))
+
+ assert_files_in_dir(self.cache_dir, ['1.mbtile', '2.mbtile'], glob='*.mbtile')
+ assert self.cache.is_cached(Tile((0, 0, 1)))
+
+ self.cache.remove_level_tiles_before(1, timestamp=time.time() - 60)
+ assert self.cache.is_cached(Tile((0, 0, 1)))
+
+ self.cache.remove_level_tiles_before(1, timestamp=time.time() + 60)
+ assert not self.cache.is_cached(Tile((0, 0, 1)))
+
+ assert_files_in_dir(self.cache_dir, ['1.mbtile', '2.mbtile'], glob='*.mbtile')
+ assert self.cache.is_cached(Tile((0, 0, 2)))
+
+ def test_bulk_store_tiles_with_different_levels(self):
+ self.cache.store_tiles([
+ self.create_tile((0, 0, 1)),
+ self.create_tile((0, 0, 2)),
+ self.create_tile((1, 0, 2)),
+ self.create_tile((1, 0, 1)),
+ ], dimensions=None)
+
+ assert_files_in_dir(self.cache_dir, ['1.mbtile', '2.mbtile'], glob='*.mbtile')
+ assert self.cache.is_cached(Tile((0, 0, 1)))
+ assert self.cache.is_cached(Tile((1, 0, 1)))
+ assert self.cache.is_cached(Tile((0, 0, 2)))
+ assert self.cache.is_cached(Tile((1, 0, 2)))
=====================================
mapproxy/test/unit/test_cache_tile.py
=====================================
@@ -17,10 +17,8 @@ import calendar
import datetime
import os
import shutil
-import sqlite3
import sys
import tempfile
-import threading
import time
from io import BytesIO
@@ -30,11 +28,10 @@ import pytest
from PIL import Image
from mapproxy.cache.file import FileCache
-from mapproxy.cache.mbtiles import MBTilesCache, MBTilesLevelCache
+from mapproxy.cache.mbtiles import MBTilesCache
from mapproxy.cache.tile import Tile
from mapproxy.image import ImageSource
from mapproxy.image.opts import ImageOptions
-from mapproxy.test.helper import assert_files_in_dir
from mapproxy.test.image import create_tmp_image_buf, is_png
@@ -109,6 +106,31 @@ class TileCacheTestBase(object):
assert self.cache.load_tile(tile) is True
assert not tile.is_missing()
+ def test_store_tiles_no_permissions(self):
+ tiles = [self.create_tile((x, 589, 12)) for x in range(4)]
+ tiles[0].stored = True
+
+ if isinstance(self.cache, FileCache):
+ # reinit cache with permission props
+ TileCacheTestBase.teardown_method(self)
+ self.cache = FileCache(self.cache_dir, 'png', directory_permissions='555')
+ try:
+ self.cache.store_tiles(tiles, dimensions=None)
+ except Exception as e:
+ assert 'Permission denied' == e.args[1]
+ else:
+ assert False, 'no PermissionError raised'
+ elif isinstance(self.cache, MBTilesCache):
+ # reinit cache with permission props
+ self.cache.cleanup()
+ TileCacheTestBase.teardown_method(self)
+ self.cache = MBTilesCache(os.path.join(self.cache_dir, 'tmp.mbtiles'),
+ directory_permissions='755', file_permissions='555')
+ success = self.cache.store_tiles(tiles, dimensions=None)
+ assert not success
+ self.cache.cleanup()
+ TileCacheTestBase.teardown_method(self)
+
def test_load_tiles_cached(self):
self.cache.store_tile(self.create_tile((0, 0, 1)))
self.cache.store_tile(self.create_tile((0, 1, 1)))
@@ -354,56 +376,6 @@ class TestFileTileCache(TileCacheTestBase):
cache.level_location(0)
-class TestMBTileCache(TileCacheTestBase):
- def setup_method(self):
- TileCacheTestBase.setup_method(self)
- self.cache = MBTilesCache(os.path.join(self.cache_dir, 'tmp.mbtiles'))
-
- def teardown_method(self):
- if self.cache:
- self.cache.cleanup()
- TileCacheTestBase.teardown_method(self)
-
- def test_default_coverage(self):
- assert self.cache.coverage is None
-
- def test_load_empty_tileset(self):
- assert self.cache.load_tiles([Tile(None)]) is True
- assert self.cache.load_tiles([Tile(None), Tile(None), Tile(None)]) is True
-
- def test_load_more_than_2000_tiles(self):
- # prepare data
- for i in range(0, 2010):
- assert self.cache.store_tile(Tile((i, 0, 10), ImageSource(BytesIO(b'foo'))))
-
- tiles = [Tile((i, 0, 10)) for i in range(0, 2010)]
- assert self.cache.load_tiles(tiles)
-
- def test_timeouts(self):
- self.cache._db_conn_cache.db = sqlite3.connect(self.cache.mbtile_file, timeout=0.05)
-
- def block():
- # block database by delaying the commit
- db = sqlite3.connect(self.cache.mbtile_file)
- cur = db.cursor()
- stmt = "INSERT OR REPLACE INTO tiles (zoom_level, tile_column, tile_row, tile_data) VALUES (?,?,?,?)"
- cur.execute(stmt, (3, 1, 1, '1234'))
- time.sleep(0.2)
- db.commit()
-
- try:
- assert self.cache.store_tile(self.create_tile((0, 0, 1))) is True
-
- t = threading.Thread(target=block)
- t.start()
- time.sleep(0.05)
- assert self.cache.store_tile(self.create_tile((0, 0, 1))) is False
- finally:
- t.join()
-
- assert self.cache.store_tile(self.create_tile((0, 0, 1))) is True
-
-
class TestQuadkeyFileTileCache(TileCacheTestBase):
def setup_method(self):
TileCacheTestBase.setup_method(self)
@@ -417,61 +389,3 @@ class TestQuadkeyFileTileCache(TileCacheTestBase):
self.cache.store_tile(tile)
tile_location = os.path.join(self.cache_dir, '11.png')
assert os.path.exists(tile_location), tile_location
-
-
-class TestMBTileLevelCache(TileCacheTestBase):
- always_loads_metadata = True
-
- def setup_method(self):
- TileCacheTestBase.setup_method(self)
- self.cache = MBTilesLevelCache(self.cache_dir)
-
- def test_default_coverage(self):
- assert self.cache.coverage is None
-
- def test_level_files(self):
- assert_files_in_dir(self.cache_dir, [])
-
- self.cache.store_tile(self.create_tile((0, 0, 1)))
- assert_files_in_dir(self.cache_dir, ['1.mbtile'], glob='*.mbtile')
-
- self.cache.store_tile(self.create_tile((0, 0, 5)))
- assert_files_in_dir(self.cache_dir, ['1.mbtile', '5.mbtile'], glob='*.mbtile')
-
- def test_remove_level_files(self):
- self.cache.store_tile(self.create_tile((0, 0, 1)))
- self.cache.store_tile(self.create_tile((0, 0, 2)))
- assert_files_in_dir(self.cache_dir, ['1.mbtile', '2.mbtile'], glob='*.mbtile')
-
- self.cache.remove_level_tiles_before(1, timestamp=0)
- assert_files_in_dir(self.cache_dir, ['2.mbtile'], glob='*.mbtile')
-
- def test_remove_level_tiles_before(self):
- self.cache.store_tile(self.create_tile((0, 0, 1)))
- self.cache.store_tile(self.create_tile((0, 0, 2)))
-
- assert_files_in_dir(self.cache_dir, ['1.mbtile', '2.mbtile'], glob='*.mbtile')
- assert self.cache.is_cached(Tile((0, 0, 1)))
-
- self.cache.remove_level_tiles_before(1, timestamp=time.time() - 60)
- assert self.cache.is_cached(Tile((0, 0, 1)))
-
- self.cache.remove_level_tiles_before(1, timestamp=time.time() + 60)
- assert not self.cache.is_cached(Tile((0, 0, 1)))
-
- assert_files_in_dir(self.cache_dir, ['1.mbtile', '2.mbtile'], glob='*.mbtile')
- assert self.cache.is_cached(Tile((0, 0, 2)))
-
- def test_bulk_store_tiles_with_different_levels(self):
- self.cache.store_tiles([
- self.create_tile((0, 0, 1)),
- self.create_tile((0, 0, 2)),
- self.create_tile((1, 0, 2)),
- self.create_tile((1, 0, 1)),
- ], dimensions=None)
-
- assert_files_in_dir(self.cache_dir, ['1.mbtile', '2.mbtile'], glob='*.mbtile')
- assert self.cache.is_cached(Tile((0, 0, 1)))
- assert self.cache.is_cached(Tile((1, 0, 1)))
- assert self.cache.is_cached(Tile((0, 0, 2)))
- assert self.cache.is_cached(Tile((1, 0, 2)))
=====================================
mapproxy/test/unit/test_fs.py
=====================================
@@ -0,0 +1,30 @@
+import os
+
+from mapproxy.test.helper import TempDir, assert_files_in_dir, ChangeWorkingDir, assert_permissions
+from mapproxy.util.fs import ensure_directory
+
+
+class TestFs:
+ def test_ensure_directory_absolute(self):
+ with TempDir() as dir_name:
+ file_path = os.path.join(dir_name, 'a/b/c')
+ ensure_directory(file_path)
+ assert_files_in_dir(dir_name, ['a'])
+ assert_files_in_dir(os.path.join(dir_name, 'a'), ['b'])
+ assert_files_in_dir(os.path.join(dir_name, 'a/b'), [])
+
+ def test_ensure_directory_relative(self):
+ with TempDir() as dir_name:
+ with ChangeWorkingDir(dir_name):
+ ensure_directory('./a/b/c')
+ assert_files_in_dir(dir_name, ['a'])
+ assert_files_in_dir(os.path.join(dir_name, 'a'), ['b'])
+ assert_files_in_dir(os.path.join(dir_name, 'a/b'), [])
+
+ def test_ensure_directory_permissions(self):
+ with TempDir() as dir_name:
+ file_path = os.path.join(dir_name, 'a/b')
+ desired_permissions = '700'
+ ensure_directory(file_path, directory_permissions=desired_permissions)
+ assert_files_in_dir(dir_name, ['a'])
+ assert_permissions(os.path.join(dir_name, 'a'), desired_permissions)
=====================================
mapproxy/util/ext/lockfile.py
=====================================
@@ -117,7 +117,10 @@ class LockFile:
def __init__(self, path):
self._path = path
- fp = open(path, 'w+')
+ try:
+ fp = open(path, 'w+')
+ except IOError:
+ raise Exception('Could not create Lock-file, wrong permissions on lock directory?')
try:
_lock_file(fp)
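The change translates a raw `IOError` from `open()` into the message the new tests assert on. A small sketch of the same wrap (a standalone helper, not the real `LockFile` class; `open_lock_file` is a hypothetical name for illustration):

```python
import os
import tempfile


def open_lock_file(path):
    """Open a lock file for writing, translating I/O failures into a
    descriptive error (mirrors the diff's wrap around open())."""
    try:
        return open(path, 'w+')
    except IOError:
        raise Exception(
            'Could not create Lock-file, wrong permissions on lock directory?')


with tempfile.TemporaryDirectory() as tmp:
    # a missing parent directory makes open() fail regardless of the user
    missing = os.path.join(tmp, 'no-such-dir', 'a.lck')
    try:
        open_lock_file(missing)
    except Exception as e:
        assert 'wrong permissions' in e.args[0]
    else:
        raise AssertionError('expected the wrapped error')
```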
=====================================
mapproxy/util/fs.py
=====================================
@@ -106,14 +106,25 @@ def remove_dir_if_emtpy(directory):
raise
-def ensure_directory(file_name):
+def ensure_directory(file_name, directory_permissions=None):
"""
Create directory if it does not exist, else do nothing.
"""
dir_name = os.path.dirname(file_name)
- if not os.path.exists(dir_name):
+ if not os.path.isdir(dir_name):
try:
- os.makedirs(dir_name)
+ if dir_name == '.' or dir_name == '/':
+ return
+
+ # call ensure_directory recursively
+ ensure_directory(dir_name, directory_permissions)
+
+ # create the directory and apply the requested permissions, if any
+ os.mkdir(dir_name)
+ if directory_permissions:
+ permission = int(directory_permissions, base=8)
+ os.chmod(dir_name, permission)
+
except OSError as e:
if e.errno != errno.EEXIST:
raise e
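The rewritten `ensure_directory` recurses parent-first so that every newly created level gets the requested mode, instead of a single `os.makedirs` call whose intermediate directories would only get umask-derived permissions. A standalone sketch of that recursion (this is an illustration of the technique, not the mapproxy function verbatim; it also treats an empty dirname as terminal):

```python
import errno
import os
import tempfile


def ensure_directory(file_name, directory_permissions=None):
    """Create the parent directory of file_name level by level,
    applying an octal permission string to each newly created level."""
    dir_name = os.path.dirname(file_name)
    if not os.path.isdir(dir_name):
        if dir_name in ('.', '/', ''):
            return
        # create ancestors first so each level gets the requested mode
        ensure_directory(dir_name, directory_permissions)
        try:
            os.mkdir(dir_name)
            if directory_permissions:
                os.chmod(dir_name, int(directory_permissions, base=8))
        except OSError as e:
            if e.errno != errno.EEXIST:  # a racing creator is fine
                raise


with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, 'a', 'b', 'tile.png')
    ensure_directory(target, directory_permissions='700')
    # every created level carries the explicit mode, not the umask default
    assert oct(os.stat(os.path.join(tmp, 'a')).st_mode & 0o777) == '0o700'
```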
=====================================
mapproxy/util/lock.py
=====================================
@@ -23,6 +23,7 @@ import os
import errno
from mapproxy.util.ext.lockfile import LockFile, LockError
+from mapproxy.util.fs import ensure_directory
import logging
log = logging.getLogger(__name__)
@@ -35,12 +36,13 @@ class LockTimeout(Exception):
class FileLock(object):
- def __init__(self, lock_file, timeout=60.0, step=0.01, remove_on_unlock=False):
+ def __init__(self, lock_file, timeout=60.0, step=0.01, remove_on_unlock=False, directory_permissions=None):
self.lock_file = lock_file
self.timeout = timeout
self.step = step
self.remove_on_unlock = remove_on_unlock
self._locked = False
+ self.directory_permissions = directory_permissions
def __enter__(self):
self.lock()
@@ -48,19 +50,12 @@ class FileLock(object):
def __exit__(self, _exc_type, _exc_value, _traceback):
self.unlock()
- def _make_lockdir(self):
- if not os.path.exists(os.path.dirname(self.lock_file)):
- try:
- os.makedirs(os.path.dirname(self.lock_file))
- except OSError as e:
- if e.errno is not errno.EEXIST:
- raise e
-
def _try_lock(self):
return LockFile(self.lock_file)
def lock(self):
- self._make_lockdir()
+ ensure_directory(self.lock_file, self.directory_permissions)
+
current_time = time.time()
stop_time = current_time + self.timeout
@@ -121,10 +116,11 @@ def cleanup_lockdir(lockdir, suffix='.lck', max_lock_time=300, force=True):
try:
if os.path.isfile(name) and name.endswith(suffix):
if os.path.getmtime(name) < expire_time:
- try:
- os.unlink(name)
- except IOError as ex:
- log.warning('could not remove old lock file %s: %s', name, ex)
+ if os.stat(name).st_uid == os.getuid():
+ try:
+ os.unlink(name)
+ except IOError as ex:
+ log.warning('could not remove old lock file %s: %s', name, ex)
except OSError as e:
# some one might have removed the file (ENOENT)
# or we don't have permissions to remove it (EACCES)
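The cleanup change adds an ownership guard: expired lock files are only unlinked when they belong to the current user, so one user's cleanup cannot race another user's locks. A small sketch of that filter (`cleanup_stale_locks` is a simplified stand-in for `cleanup_lockdir`, assumed behavior only):

```python
import os
import tempfile
import time


def cleanup_stale_locks(lockdir, suffix='.lck', max_lock_time=300):
    """Remove expired lock files, but only those owned by the current
    user (mirrors the diff's os.getuid() guard)."""
    expire_time = time.time() - max_lock_time
    removed = []
    for entry in os.listdir(lockdir):
        name = os.path.join(lockdir, entry)
        if not (os.path.isfile(name) and name.endswith(suffix)):
            continue
        if os.path.getmtime(name) >= expire_time:
            continue  # still fresh
        if os.stat(name).st_uid != os.getuid():
            continue  # leave other users' locks alone
        try:
            os.unlink(name)
            removed.append(entry)
        except OSError:
            pass  # someone else removed it, or we lack permission
    return removed


with tempfile.TemporaryDirectory() as tmp:
    stale = os.path.join(tmp, 'old.lck')
    open(stale, 'w').close()
    os.utime(stale, (time.time() - 600, time.time() - 600))  # make it stale
    fresh = os.path.join(tmp, 'new.lck')
    open(fresh, 'w').close()
    assert cleanup_stale_locks(tmp) == ['old.lck']
    assert os.path.exists(fresh) and not os.path.exists(stale)
```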
=====================================
requirements-tests.txt
=====================================
@@ -17,7 +17,7 @@ certifi==2024.7.4
cffi==1.16.0;
cfn-lint==0.80.3
chardet==5.2.0
-cryptography==43.0.0
+cryptography==43.0.1
decorator==5.1.1
docker==7.0.0
docutils==0.20.1
@@ -44,7 +44,7 @@ packaging==23.2
pluggy==1.5.0
py==1.11.0
pyasn1==0.5.1
-pycparser==2.21
+pycparser==2.22
pyparsing==3.1.1
pyproj==2.6.1.post1;python_version=="3.8"
pyproj==3.6.1;python_version>"3.8"
@@ -61,12 +61,12 @@ riak==2.7.0
rsa==4.9
s3transfer==0.10.2
six==1.16.0
-soupsieve==2.0.1
+soupsieve==2.6
sshpubkeys==3.3.1
toml==0.10.2
urllib3==1.26.19
waitress==2.1.2
-websocket-client==1.7.0
+websocket-client==1.8.0
wrapt==1.16.0
xmltodict==0.13.0
-zipp==3.19.1
+zipp==3.20.1
=====================================
setup.py
=====================================
@@ -63,7 +63,7 @@ def long_description(changelog_releases=10):
setup(
name='MapProxy',
- version="3.0.1",
+ version="3.1.0",
description='An accelerating proxy for tile and web map services',
long_description=long_description(7),
author='Oliver Tonnhofer',
View it on GitLab: https://salsa.debian.org/debian-gis-team/mapproxy/-/commit/55c713079c6a4f29313d11ef79ab85b4dd5e82f5