[Git][debian-gis-team/jsonpath-ng][upstream] New upstream version 1.6.1
Antonio Valentino (@antonio.valentino)
gitlab at salsa.debian.org
Sun Jan 14 09:52:05 GMT 2024
Antonio Valentino pushed to branch upstream at Debian GIS Project / jsonpath-ng
Commits:
5937e530 by Antonio Valentino at 2024-01-14T09:39:09+00:00
New upstream version 1.6.1
- - - - -
25 changed files:
- PKG-INFO
- README.rst
- jsonpath_ng.egg-info/PKG-INFO
- jsonpath_ng.egg-info/SOURCES.txt
- jsonpath_ng/__init__.py
- jsonpath_ng/bin/jsonpath.py
- jsonpath_ng/ext/filter.py
- jsonpath_ng/ext/iterable.py
- jsonpath_ng/ext/parser.py
- jsonpath_ng/jsonpath.py
- jsonpath_ng/lexer.py
- jsonpath_ng/parser.py
- + pyproject.toml
- setup.py
- − tests/bin/__init__.py
- tests/bin/test_jsonpath.py
- + tests/conftest.py
- + tests/helpers.py
- tests/test_create.py
- tests/test_examples.py
- tests/test_exceptions.py
- tests/test_jsonpath.py
- tests/test_jsonpath_rw_ext.py
- tests/test_lexer.py
- tests/test_parser.py
Changes:
=====================================
PKG-INFO
=====================================
@@ -1,6 +1,6 @@
Metadata-Version: 2.1
Name: jsonpath-ng
-Version: 1.6.0
+Version: 1.6.1
Summary: A final implementation of JSONPath for Python that aims to be standard compliant, including arithmetic and binary comparison operators and providing clear AST for metaprogramming.
Home-page: https://github.com/h2non/jsonpath-ng
Author: Tomas Aparicio
@@ -15,6 +15,7 @@ Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
+Classifier: Programming Language :: Python :: 3.12
License-File: LICENSE
Python JSONPath Next-Generation |Build Status| |PyPI|
@@ -24,13 +25,13 @@ A final implementation of JSONPath for Python that aims to be standard compliant
and binary comparison operators, as defined in the original `JSONPath proposal`_.
This packages merges both `jsonpath-rw`_ and `jsonpath-rw-ext`_ and
-provides several AST API enhancements, such as the ability to update or removes nodes in the tree.
+provides several AST API enhancements, such as the ability to update or remove nodes in the tree.
About
-----
This library provides a robust and significantly extended implementation
-of JSONPath for Python. It is tested with CPython 2.6, 2.7 & 3.x.
+of JSONPath for Python. It is tested with CPython 3.7 and higher.
This library differs from other JSONPath implementations in that it is a
full *language* implementation, meaning the JSONPath expressions are
@@ -69,6 +70,23 @@ Basic examples:
>>> [str(match.full_path) for match in jsonpath_expr.find({'foo': [{'baz': 1}, {'baz': 2}]})]
['foo.[0].baz', 'foo.[1].baz']
+ # Modifying values matching the path
+ >>> jsonpath_expr.update( {'foo': [{'baz': 1}, {'baz': 2}]}, 3)
+ {'foo': [{'baz': 3}, {'baz': 3}]}
+
+ # Modifying one of the values matching the path
+ >>> matches = jsonpath_expr.find({'foo': [{'baz': 1}, {'baz': 2}]})
+ >>> matches[0].full_path.update( {'foo': [{'baz': 1}, {'baz': 2}]}, 3)
+ {'foo': [{'baz': 3}, {'baz': 2}]}
+
+ # Removing all values matching a path
+ >>> jsonpath_expr.filter(lambda d: True, {'foo': [{'baz': 1}, {'baz': 2}]})
+ {'foo': [{}, {}]}
+
+ # Removing values containing particular data matching path
+ >>> jsonpath_expr.filter(lambda d: d == 2, {'foo': [{'baz': 1}, {'baz': 2}]})
+ {'foo': [{'baz': 1}, {}]}
+
# And this can be useful for automatically providing ids for bits of data that do not have them (currently a global switch)
>>> jsonpath.auto_id_field = 'id'
>>> [match.value for match in parse('foo[*].id').find({'foo': [{'id': 'bizzle'}, {'baz': 3}]})]
@@ -197,7 +215,7 @@ Extras
will be replaced by the JSONPath to it, giving automatic unique ids
to any piece of data. These ids will take into account any ids
already present as well.
-- *Named operators*: Instead of using ``@`` to reference the currently
+- *Named operators*: Instead of using ``@`` to reference the current
object, this library uses ```this```. In general, any string
contained in backquotes can be made to be a new operator, currently
by extending the library.
=====================================
README.rst
=====================================
@@ -5,13 +5,13 @@ A final implementation of JSONPath for Python that aims to be standard compliant
and binary comparison operators, as defined in the original `JSONPath proposal`_.
This packages merges both `jsonpath-rw`_ and `jsonpath-rw-ext`_ and
-provides several AST API enhancements, such as the ability to update or removes nodes in the tree.
+provides several AST API enhancements, such as the ability to update or remove nodes in the tree.
About
-----
This library provides a robust and significantly extended implementation
-of JSONPath for Python. It is tested with CPython 2.6, 2.7 & 3.x.
+of JSONPath for Python. It is tested with CPython 3.7 and higher.
This library differs from other JSONPath implementations in that it is a
full *language* implementation, meaning the JSONPath expressions are
@@ -50,6 +50,23 @@ Basic examples:
>>> [str(match.full_path) for match in jsonpath_expr.find({'foo': [{'baz': 1}, {'baz': 2}]})]
['foo.[0].baz', 'foo.[1].baz']
+ # Modifying values matching the path
+ >>> jsonpath_expr.update( {'foo': [{'baz': 1}, {'baz': 2}]}, 3)
+ {'foo': [{'baz': 3}, {'baz': 3}]}
+
+ # Modifying one of the values matching the path
+ >>> matches = jsonpath_expr.find({'foo': [{'baz': 1}, {'baz': 2}]})
+ >>> matches[0].full_path.update( {'foo': [{'baz': 1}, {'baz': 2}]}, 3)
+ {'foo': [{'baz': 3}, {'baz': 2}]}
+
+ # Removing all values matching a path
+ >>> jsonpath_expr.filter(lambda d: True, {'foo': [{'baz': 1}, {'baz': 2}]})
+ {'foo': [{}, {}]}
+
+ # Removing values containing particular data matching path
+ >>> jsonpath_expr.filter(lambda d: d == 2, {'foo': [{'baz': 1}, {'baz': 2}]})
+ {'foo': [{'baz': 1}, {}]}
+
# And this can be useful for automatically providing ids for bits of data that do not have them (currently a global switch)
>>> jsonpath.auto_id_field = 'id'
>>> [match.value for match in parse('foo[*].id').find({'foo': [{'id': 'bizzle'}, {'baz': 3}]})]
@@ -178,7 +195,7 @@ Extras
will be replaced by the JSONPath to it, giving automatic unique ids
to any piece of data. These ids will take into account any ids
already present as well.
-- *Named operators*: Instead of using ``@`` to reference the currently
+- *Named operators*: Instead of using ``@`` to reference the current
object, this library uses ```this```. In general, any string
contained in backquotes can be made to be a new operator, currently
by extending the library.
=====================================
jsonpath_ng.egg-info/PKG-INFO
=====================================
@@ -1,6 +1,6 @@
Metadata-Version: 2.1
Name: jsonpath-ng
-Version: 1.6.0
+Version: 1.6.1
Summary: A final implementation of JSONPath for Python that aims to be standard compliant, including arithmetic and binary comparison operators and providing clear AST for metaprogramming.
Home-page: https://github.com/h2non/jsonpath-ng
Author: Tomas Aparicio
@@ -15,6 +15,7 @@ Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
+Classifier: Programming Language :: Python :: 3.12
License-File: LICENSE
Python JSONPath Next-Generation |Build Status| |PyPI|
@@ -24,13 +25,13 @@ A final implementation of JSONPath for Python that aims to be standard compliant
and binary comparison operators, as defined in the original `JSONPath proposal`_.
This packages merges both `jsonpath-rw`_ and `jsonpath-rw-ext`_ and
-provides several AST API enhancements, such as the ability to update or removes nodes in the tree.
+provides several AST API enhancements, such as the ability to update or remove nodes in the tree.
About
-----
This library provides a robust and significantly extended implementation
-of JSONPath for Python. It is tested with CPython 2.6, 2.7 & 3.x.
+of JSONPath for Python. It is tested with CPython 3.7 and higher.
This library differs from other JSONPath implementations in that it is a
full *language* implementation, meaning the JSONPath expressions are
@@ -69,6 +70,23 @@ Basic examples:
>>> [str(match.full_path) for match in jsonpath_expr.find({'foo': [{'baz': 1}, {'baz': 2}]})]
['foo.[0].baz', 'foo.[1].baz']
+ # Modifying values matching the path
+ >>> jsonpath_expr.update( {'foo': [{'baz': 1}, {'baz': 2}]}, 3)
+ {'foo': [{'baz': 3}, {'baz': 3}]}
+
+ # Modifying one of the values matching the path
+ >>> matches = jsonpath_expr.find({'foo': [{'baz': 1}, {'baz': 2}]})
+ >>> matches[0].full_path.update( {'foo': [{'baz': 1}, {'baz': 2}]}, 3)
+ {'foo': [{'baz': 3}, {'baz': 2}]}
+
+ # Removing all values matching a path
+ >>> jsonpath_expr.filter(lambda d: True, {'foo': [{'baz': 1}, {'baz': 2}]})
+ {'foo': [{}, {}]}
+
+ # Removing values containing particular data matching path
+ >>> jsonpath_expr.filter(lambda d: d == 2, {'foo': [{'baz': 1}, {'baz': 2}]})
+ {'foo': [{'baz': 1}, {}]}
+
# And this can be useful for automatically providing ids for bits of data that do not have them (currently a global switch)
>>> jsonpath.auto_id_field = 'id'
>>> [match.value for match in parse('foo[*].id').find({'foo': [{'id': 'bizzle'}, {'baz': 3}]})]
@@ -197,7 +215,7 @@ Extras
will be replaced by the JSONPath to it, giving automatic unique ids
to any piece of data. These ids will take into account any ids
already present as well.
-- *Named operators*: Instead of using ``@`` to reference the currently
+- *Named operators*: Instead of using ``@`` to reference the current
object, this library uses ```this```. In general, any string
contained in backquotes can be made to be a new operator, currently
by extending the library.
=====================================
jsonpath_ng.egg-info/SOURCES.txt
=====================================
@@ -1,6 +1,7 @@
LICENSE
MANIFEST.in
README.rst
+pyproject.toml
setup.py
jsonpath_ng/__init__.py
jsonpath_ng/exceptions.py
@@ -22,6 +23,8 @@ jsonpath_ng/ext/iterable.py
jsonpath_ng/ext/parser.py
jsonpath_ng/ext/string.py
tests/__init__.py
+tests/conftest.py
+tests/helpers.py
tests/test_create.py
tests/test_examples.py
tests/test_exceptions.py
@@ -29,7 +32,6 @@ tests/test_jsonpath.py
tests/test_jsonpath_rw_ext.py
tests/test_lexer.py
tests/test_parser.py
-tests/bin/__init__.py
tests/bin/test1.json
tests/bin/test2.json
tests/bin/test_jsonpath.py
\ No newline at end of file
=====================================
jsonpath_ng/__init__.py
=====================================
@@ -3,4 +3,4 @@ from .parser import parse # noqa
# Current package version
-__version__ = '1.6.0'
+__version__ = '1.6.1'
=====================================
jsonpath_ng/bin/jsonpath.py
=====================================
@@ -5,9 +5,6 @@
# terms of the Do What The Fuck You Want To Public License, Version 2,
# as published by Sam Hocevar. See the COPYING file for more details.
-# Use modern Python
-from __future__ import unicode_literals, print_function, absolute_import
-
# Standard Library imports
import json
import sys
=====================================
jsonpath_ng/ext/filter.py
=====================================
@@ -25,7 +25,7 @@ OPERATOR_MAP = {
'<': operator.lt,
'>=': operator.ge,
'>': operator.gt,
- '=~': lambda a, b: True if re.search(b, a) else False,
+ '=~': lambda a, b: True if isinstance(a, str) and re.search(b, a) else False,
}
@@ -53,6 +53,16 @@ class Filter(JSONPath):
len(list(filter(lambda x: x.find(datum.value[i]),
self.expressions))))]
+ def filter(self, fn, data):
+ # NOTE: We reverse the order just to make sure the indexes are preserved upon
+ # removal.
+ for datum in reversed(self.find(data)):
+ index_obj = datum.path
+ if isinstance(data, dict):
+ index_obj.index = list(data)[index_obj.index]
+ index_obj.filter(fn, data)
+ return data
+
def update(self, data, val):
if type(data) is list:
for index, item in enumerate(data):
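[Editor's note: the new `isinstance` guard on the `=~` operator above can be illustrated with a standalone sketch; this reconstructs the lambda for illustration only and does not import anything from jsonpath_ng.]

```python
import re

# Sketch of the patched "=~" comparator: before 1.6.1 it raised a TypeError
# when the left operand was not a string; the new isinstance() guard makes
# the match simply fail instead.
regex_match = lambda a, b: True if isinstance(a, str) and re.search(b, a) else False

print(regex_match("foobar", "foo"))  # True
print(regex_match(42, "foo"))        # False (previously a TypeError)
```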
=====================================
jsonpath_ng/ext/iterable.py
=====================================
@@ -90,3 +90,29 @@ class Len(JSONPath):
def __repr__(self):
return 'Len()'
+
+
+class Keys(JSONPath):
+ """The JSONPath referring to the keys of the current object.
+ Concrete syntax is '`keys`'.
+ """
+
+ def find(self, datum):
+ datum = DatumInContext.wrap(datum)
+ try:
+ value = list(datum.value.keys())
+ except Exception as e:
+ return []
+ else:
+ return [DatumInContext(value[i],
+ context=None,
+ path=Keys()) for i in range (0, len(datum.value))]
+
+ def __eq__(self, other):
+ return isinstance(other, Keys)
+
+ def __str__(self):
+ return '`keys`'
+
+ def __repr__(self):
+ return 'Keys()'
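[Editor's note: the new `keys` named operator lists the keys of the current object. Its lookup logic, reduced to a standalone sketch (`keys_of` is a hypothetical helper name, not part of the jsonpath_ng API):]

```python
def keys_of(value):
    # Mirrors the core of Keys.find(): list a mapping's keys, or return
    # nothing for values without .keys() (lists, scalars, and so on).
    try:
        return list(value.keys())
    except AttributeError:
        return []

print(keys_of({"a": 1, "b": 2}))  # ['a', 'b']
print(keys_of([1, 2, 3]))         # []
```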
=====================================
jsonpath_ng/ext/parser.py
=====================================
@@ -94,6 +94,8 @@ class ExtentedJsonPathParser(parser.JsonPathParser):
"jsonpath : NAMED_OPERATOR"
if p[1] == 'len':
p[0] = _iterable.Len()
+ elif p[1] == 'keys':
+ p[0] = _iterable.Keys()
elif p[1] == 'sorted':
p[0] = _iterable.SortedThis()
elif p[1].startswith("split("):
=====================================
jsonpath_ng/jsonpath.py
=====================================
@@ -1,6 +1,3 @@
-from __future__ import (absolute_import, division, generators, nested_scopes,
- print_function, unicode_literals)
-
import logging
from itertools import * # noqa
from jsonpath_ng.lexer import JsonPathLexer
@@ -16,7 +13,7 @@ NOT_SET = object()
LIST_KEY = object()
-class JSONPath(object):
+class JSONPath:
"""
The base class for JSONPath abstract syntax; those
methods stubbed here are the interface to supported
@@ -78,7 +75,7 @@ class JSONPath(object):
return DatumInContext(value, path=Root(), context=None)
-class DatumInContext(object):
+class DatumInContext:
"""
Represents a datum along a path from a context.
@@ -604,7 +601,7 @@ class Fields(JSONPath):
data[field] = {}
if field in data:
if hasattr(val, '__call__'):
- val(data[field], data, field)
+ data[field] = val(data[field], data, field)
else:
data[field] = val
return data
@@ -678,7 +675,7 @@ class Index(JSONPath):
data = _create_list_key(data)
self._pad_value(data)
if hasattr(val, '__call__'):
- val.__call__(data[self.index], data, self.index)
+ data[self.index] = val.__call__(data[self.index], data, self.index)
elif len(data) > self.index:
data[self.index] = val
return data
@@ -798,7 +795,7 @@ def _create_list_key(dict_):
return new_list
-def _clean_list_keys(dict_):
+def _clean_list_keys(struct_):
"""
Replace {LIST_KEY: ['foo', 'bar']} with ['foo', 'bar'].
@@ -806,12 +803,13 @@ def _clean_list_keys(dict_):
['foo', 'bar']
"""
- for key, value in dict_.items():
- if isinstance(value, dict):
- dict_[key] = _clean_list_keys(value)
- elif isinstance(value, list):
- dict_[key] = [_clean_list_keys(v) if isinstance(v, dict) else v
- for v in value]
- if LIST_KEY in dict_:
- return dict_[LIST_KEY]
- return dict_
+ if(isinstance(struct_, list)):
+ for ind, value in enumerate(struct_):
+ struct_[ind] = _clean_list_keys(value)
+ elif(isinstance(struct_, dict)):
+ if(LIST_KEY in struct_):
+ return _clean_list_keys(struct_[LIST_KEY])
+ else:
+ for key, value in struct_.items():
+ struct_[key] = _clean_list_keys(value)
+ return struct_
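[Editor's note: the `Fields.update()` and `Index.update()` hunks above now assign a callable's return value back to the container. A minimal standalone sketch of that behaviour, assuming the same `(value, container, key)` callback signature; `update_field` is a hypothetical name, not the library's API:]

```python
def update_field(data, field, val):
    # Sketch of the fix: when val is callable, its return value now
    # replaces the field (previously the return value was dropped, so
    # only in-place mutation of the old value had any effect).
    if callable(val):
        data[field] = val(data[field], data, field)
    else:
        data[field] = val
    return data

print(update_field({"n": 1}, "n", lambda value, data, field: value + 1))  # {'n': 2}
```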
=====================================
jsonpath_ng/lexer.py
=====================================
@@ -1,4 +1,3 @@
-from __future__ import unicode_literals, print_function, absolute_import, division, generators, nested_scopes
import sys
import logging
@@ -9,7 +8,7 @@ from jsonpath_ng.exceptions import JsonPathLexerError
logger = logging.getLogger(__name__)
-class JsonPathLexer(object):
+class JsonPathLexer:
'''
A Lexical analyzer for JsonPath.
'''
=====================================
jsonpath_ng/parser.py
=====================================
@@ -1,10 +1,3 @@
-from __future__ import (
- print_function,
- absolute_import,
- division,
- generators,
- nested_scopes,
-)
import logging
import sys
import os.path
@@ -22,7 +15,7 @@ def parse(string):
return JsonPathParser().parse(string)
-class JsonPathParser(object):
+class JsonPathParser:
'''
An LALR-parser for JsonPath
'''
@@ -80,6 +73,8 @@ class JsonPathParser(object):
]
def p_error(self, t):
+ if t is None:
+ raise JsonPathParserError('Parse error near the end of string!')
raise JsonPathParserError('Parse error at %s:%s near token %s (%s)'
% (t.lineno, t.col, t.value, t.type))
@@ -174,8 +169,9 @@ class JsonPathParser(object):
p[0] = Slice()
def p_slice(self, p): # Currently does not support `step`
- "slice : maybe_int ':' maybe_int"
- p[0] = Slice(start=p[1], end=p[3])
+ """slice : maybe_int ':' maybe_int
+ | maybe_int ':' maybe_int ':' maybe_int """
+ p[0] = Slice(*p[1::2])
def p_maybe_int(self, p):
"""maybe_int : NUMBER
@@ -186,7 +182,7 @@ class JsonPathParser(object):
'empty :'
p[0] = None
-class IteratorToTokenStream(object):
+class IteratorToTokenStream:
def __init__(self, iterator):
self.iterator = iterator
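[Editor's note: the extended slice grammar above accepts an optional step, and `Slice(*p[1::2])` works for both rule forms by skipping the `':'` tokens in ply's symbol list. A small sketch of that indexing trick (`slice_parts` is a hypothetical helper standing in for the parser action):]

```python
def slice_parts(symbols):
    # symbols stands in for ply's p[1:] in the slice rule; taking every
    # other element drops the ':' tokens, which is what Slice(*p[1::2])
    # does for both the two-part and the new three-part (step) form.
    return symbols[::2]

print(slice_parts([0, ":", 4]))          # [0, 4]    -> start, end
print(slice_parts([0, ":", 4, ":", 2]))  # [0, 4, 2] -> start, end, step
```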
=====================================
pyproject.toml
=====================================
@@ -0,0 +1,8 @@
+[tool.pytest.ini_options]
+filterwarnings = [
+ # Escalate warnings to errors.
+ "error",
+
+ # The ply package doesn't close its debug log file. Ignore this warning.
+ "ignore::ResourceWarning",
+]
=====================================
setup.py
=====================================
@@ -4,7 +4,7 @@ import setuptools
setuptools.setup(
name='jsonpath-ng',
- version='1.6.0',
+ version='1.6.1',
description=(
'A final implementation of JSONPath for Python that aims to be '
'standard compliant, including arithmetic and binary comparison '
@@ -34,6 +34,7 @@ setuptools.setup(
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
- 'Programming Language :: Python :: 3.11'
+ 'Programming Language :: Python :: 3.11',
+ 'Programming Language :: Python :: 3.12',
],
)
=====================================
tests/bin/__init__.py deleted
=====================================
@@ -1,2 +0,0 @@
-# Use modern python
-from __future__ import absolute_import, print_function, unicode_literals
=====================================
tests/bin/test_jsonpath.py
=====================================
@@ -1,56 +1,37 @@
-# Use modern Python
-from __future__ import unicode_literals, print_function, absolute_import, division, generators, nested_scopes
+"""
+Tests for the jsonpath.py command line interface.
+"""
-# Standard library imports
-import unittest
-import logging
import io
-import sys
-import os
import json
+import os
+import sys
from jsonpath_ng.bin.jsonpath import main
-class TestJsonPathScript(unittest.TestCase):
- """
- Tests for the jsonpath.py command line interface.
- """
-
- @classmethod
- def setup_class(cls):
- logging.basicConfig()
-
- def setUp(self):
- self.input = io.StringIO()
- self.output = io.StringIO()
- self.saved_stdout = sys.stdout
- self.saved_stdin = sys.stdin
- sys.stdout = self.output
- sys.stdin = self.input
-
- def tearDown(self):
- self.output.close()
- self.input.close()
- sys.stdout = self.saved_stdout
- sys.stdin = self.saved_stdin
-
- def test_stdin_mode(self):
- # 'format' is a benign Python 2/3 way of ensuring it is a text type rather than binary
- self.input.write('{0}'.format(json.dumps({
- 'foo': {
- 'baz': 1,
- 'bizzle': {
- 'baz': 2
- }
- }
- })))
- self.input.seek(0)
- main('jsonpath.py', 'foo..baz')
- self.assertEqual(self.output.getvalue(), '1\n2\n')
-
- def test_filename_mode(self):
- test1 = os.path.join(os.path.dirname(__file__), 'test1.json')
- test2 = os.path.join(os.path.dirname(__file__), 'test2.json')
- main('jsonpath.py', 'foo..baz', test1, test2)
- self.assertEqual(self.output.getvalue(), '1\n2\n3\n4\n')
+def test_stdin_mode(monkeypatch, capsys):
+ stdin_text = json.dumps(
+ {
+ "foo": {
+ "baz": 1,
+ "bizzle": {"baz": 2},
+ },
+ }
+ )
+ monkeypatch.setattr(sys, "stdin", io.StringIO(stdin_text))
+
+ main("jsonpath.py", "foo..baz")
+
+ stdout, _ = capsys.readouterr()
+ assert stdout == "1\n2\n"
+
+
+def test_filename_mode(capsys):
+ test1 = os.path.join(os.path.dirname(__file__), "test1.json")
+ test2 = os.path.join(os.path.dirname(__file__), "test2.json")
+
+ main("jsonpath.py", "foo..baz", test1, test2)
+
+ stdout, _ = capsys.readouterr()
+ assert stdout == "1\n2\n3\n4\n"
=====================================
tests/conftest.py
=====================================
@@ -0,0 +1,15 @@
+import pytest
+
+
+@pytest.fixture(autouse=True)
+def disable_auto_id_field(monkeypatch):
+ monkeypatch.setattr("jsonpath_ng.jsonpath.auto_id_field", None)
+
+
+@pytest.fixture()
+def auto_id_field(monkeypatch, disable_auto_id_field):
+ """Enable `jsonpath_ng.jsonpath.auto_id_field`."""
+
+ field_name = "id"
+ monkeypatch.setattr("jsonpath_ng.jsonpath.auto_id_field", field_name)
+ return field_name
=====================================
tests/helpers.py
=====================================
@@ -0,0 +1,35 @@
+def assert_value_equality(results, expected_values):
+ """Assert equality between two objects.
+
+ *results* must be a list of results as returned by `.find()` methods.
+
+ If *expected_values* is a list, then value equality and ordering will be checked.
+ If *expected_values* is a set, value equality and container length will be checked.
+ Otherwise, the value of the results will be compared to the expected values.
+ """
+
+ left_values = [result.value for result in results]
+ if isinstance(expected_values, list):
+ assert left_values == expected_values
+ elif isinstance(expected_values, set):
+ assert len(left_values) == len(expected_values)
+ assert set(left_values) == expected_values
+ else:
+ assert results[0].value == expected_values
+
+
+def assert_full_path_equality(results, expected_full_paths):
+ """Assert equality between two objects.
+
+ *results* must be a list or set of results as returned by `.find()` methods.
+
+ If *expected_full_paths* is a list, then path equality and ordering will be checked.
+ If *expected_full_paths* is a set, then path equality and length will be checked.
+ """
+
+ full_paths = [str(result.full_path) for result in results]
+ if isinstance(expected_full_paths, list):
+ assert full_paths == expected_full_paths, full_paths
+ else: # isinstance(expected_full_paths, set):
+ assert len(full_paths) == len(expected_full_paths)
+ assert set(full_paths) == expected_full_paths
=====================================
tests/test_create.py
=====================================
@@ -1,177 +1,69 @@
-import doctest
-from collections import namedtuple
+import copy
+from contextlib import nullcontext as does_not_raise
import pytest
-import jsonpath_ng
from jsonpath_ng.ext import parse
-Params = namedtuple('Params', 'string initial_data insert_val target')
-
-@pytest.mark.parametrize('string, initial_data, insert_val, target', [
-
- Params(string='$.foo',
- initial_data={},
- insert_val=42,
- target={'foo': 42}),
-
- Params(string='$.foo.bar',
- initial_data={},
- insert_val=42,
- target={'foo': {'bar': 42}}),
-
- Params(string='$.foo[0]',
- initial_data={},
- insert_val=42,
- target={'foo': [42]}),
-
- Params(string='$.foo[1]',
- initial_data={},
- insert_val=42,
- target={'foo': [{}, 42]}),
-
- Params(string='$.foo[0].bar',
- initial_data={},
- insert_val=42,
- target={'foo': [{'bar': 42}]}),
-
- Params(string='$.foo[1].bar',
- initial_data={},
- insert_val=42,
- target={'foo': [{}, {'bar': 42}]}),
-
- Params(string='$.foo[0][0]',
- initial_data={},
- insert_val=42,
- target={'foo': [[42]]}),
-
- Params(string='$.foo[1][1]',
- initial_data={},
- insert_val=42,
- target={'foo': [{}, [{}, 42]]}),
-
- Params(string='foo[0]',
- initial_data={},
- insert_val=42,
- target={'foo': [42]}),
-
- Params(string='foo[1]',
- initial_data={},
- insert_val=42,
- target={'foo': [{}, 42]}),
-
- Params(string='foo',
- initial_data={},
- insert_val=42,
- target={'foo': 42}),
-
- # Initial data can be a list if we expect a list back
- Params(string='[0]',
- initial_data=[],
- insert_val=42,
- target=[42]),
-
- Params(string='[1]',
- initial_data=[],
- insert_val=42,
- target=[{}, 42]),
-
- # Converts initial data to a list if necessary
- Params(string='[0]',
- initial_data={},
- insert_val=42,
- target=[42]),
-
- Params(string='[1]',
- initial_data={},
- insert_val=42,
- target=[{}, 42]),
-
- Params(string='foo[?bar="baz"].qux',
- initial_data={'foo': [
- {'bar': 'baz'},
- {'bar': 'bizzle'},
- ]},
- insert_val=42,
- target={'foo': [
- {'bar': 'baz', 'qux': 42},
- {'bar': 'bizzle'}
- ]}),
-])
-def test_update_or_create(string, initial_data, insert_val, target):
+@pytest.mark.parametrize(
+ "string, initial_data, expected_result",
+ (
+ ("$.foo", {}, {"foo": 42}),
+ ("$.foo.bar", {}, {"foo": {"bar": 42}}),
+ ("$.foo[0]", {}, {"foo": [42]}),
+ ("$.foo[1]", {}, {"foo": [{}, 42]}),
+ ("$.foo[0].bar", {}, {"foo": [{"bar": 42}]}),
+ ("$.foo[1].bar", {}, {"foo": [{}, {"bar": 42}]}),
+ ("$.foo[0][0]", {}, {"foo": [[42]]}),
+ ("$.foo[1][1]", {}, {"foo": [{}, [{}, 42]]}),
+ ("foo[0]", {}, {"foo": [42]}),
+ ("foo[1]", {}, {"foo": [{}, 42]}),
+ ("foo", {}, {"foo": 42}),
+ #
+ # Initial data can be a list if we expect a list back.
+ ("[0]", [], [42]),
+ ("[1]", [], [{}, 42]),
+ #
+ # Convert initial data to a list, if necessary.
+ ("[0]", {}, [42]),
+ ("[1]", {}, [{}, 42]),
+ #
+ (
+ 'foo[?bar="baz"].qux',
+ {
+ "foo": [
+ {"bar": "baz"},
+ {"bar": "bizzle"},
+ ]
+ },
+ {"foo": [{"bar": "baz", "qux": 42}, {"bar": "bizzle"}]},
+ ),
+ ("[1].foo", [{"foo": 1}, {"bar": 2}], [{"foo": 1}, {"foo": 42, "bar": 2}]),
+ ),
+)
+def test_update_or_create(string, initial_data, expected_result):
jsonpath = parse(string)
- result = jsonpath.update_or_create(initial_data, insert_val)
- assert result == target
-
-
-@pytest.mark.parametrize('string, initial_data, insert_val, target', [
- # Slice not supported
- Params(string='foo[0:1]',
- initial_data={},
- insert_val=42,
- target={'foo': [42, 42]}),
- # result is {'foo': {}}
-
- # Filter does not create items to meet criteria
- Params(string='foo[?bar="baz"].qux',
- initial_data={},
- insert_val=42,
- target={'foo': [{'bar': 'baz', 'qux': 42}]}),
- # result is {'foo': {}}
-
- # Does not convert initial data to a dictionary
- Params(string='foo',
- initial_data=[],
- insert_val=42,
- target={'foo': 42}),
- # raises TypeError
-
-])
-@pytest.mark.xfail
-def test_unsupported_classes(string, initial_data, insert_val, target):
+ result = jsonpath.update_or_create(initial_data, 42)
+ assert result == expected_result
+
+
+@pytest.mark.parametrize(
+ "string, initial_data, expectation",
+ (
+ # Slice not supported
+ ("foo[0:1]", {}, does_not_raise()),
+ #
+ # Filter does not create items to meet criteria
+ ('foo[?bar="baz"].qux', {}, does_not_raise()),
+ #
+ # Does not convert initial data to a dictionary
+ ("foo", [], pytest.raises(TypeError)),
+ ),
+)
+def test_unsupported_classes(string, initial_data, expectation):
+ copied_initial_data = copy.copy(initial_data)
jsonpath = parse(string)
- result = jsonpath.update_or_create(initial_data, insert_val)
- assert result == target
-
-
-@pytest.mark.parametrize('string, initial_data, insert_val, target', [
-
- Params(string='$.name[0].text',
- initial_data={},
- insert_val='Sir Michael',
- target={'name': [{'text': 'Sir Michael'}]}),
-
- Params(string='$.name[0].given[0]',
- initial_data={'name': [{'text': 'Sir Michael'}]},
- insert_val='Michael',
- target={'name': [{'text': 'Sir Michael',
- 'given': ['Michael']}]}),
-
- Params(string='$.name[0].prefix[0]',
- initial_data={'name': [{'text': 'Sir Michael',
- 'given': ['Michael']}]},
- insert_val='Sir',
- target={'name': [{'text': 'Sir Michael',
- 'given': ['Michael'],
- 'prefix': ['Sir']}]}),
-
- Params(string='$.birthDate',
- initial_data={'name': [{'text': 'Sir Michael',
- 'given': ['Michael'],
- 'prefix': ['Sir']}]},
- insert_val='1943-05-05',
- target={'name': [{'text': 'Sir Michael',
- 'given': ['Michael'],
- 'prefix': ['Sir']}],
- 'birthDate': '1943-05-05'}),
-])
-def test_build_doc(string, initial_data, insert_val, target):
- jsonpath = parse(string)
- result = jsonpath.update_or_create(initial_data, insert_val)
- assert result == target
-
-
-def test_doctests():
- results = doctest.testmod(jsonpath_ng)
- assert results.failed == 0
+ with expectation:
+ result = jsonpath.update_or_create(initial_data, 42)
+ assert result != copied_initial_data
=====================================
tests/test_examples.py
=====================================
@@ -1,55 +1,67 @@
import pytest
-from jsonpath_ng.ext.filter import Filter, Expression
from jsonpath_ng.ext import parse
-from jsonpath_ng.jsonpath import *
-
-
-@pytest.mark.parametrize('string, parsed', [
- # The authors of all books in the store
- ("$.store.book[*].author",
- Child(Child(Child(Child(Root(), Fields('store')), Fields('book')),
- Slice()), Fields('author'))),
-
- # All authors
- ("$..author", Descendants(Root(), Fields('author'))),
-
- # All things in the store
- ("$.store.*", Child(Child(Root(), Fields('store')), Fields('*'))),
-
- # The price of everything in the store
- ("$.store..price",
- Descendants(Child(Root(), Fields('store')), Fields('price'))),
-
- # The third book
- ("$..book[2]",
- Child(Descendants(Root(), Fields('book')),Index(2))),
-
- # The last book in order
- # ("$..book[(@.length-1)]", # Not implemented
- # Child(Descendants(Root(), Fields('book')), Slice(start=-1))),
- ("$..book[-1:]",
- Child(Descendants(Root(), Fields('book')), Slice(start=-1))),
-
- # The first two books
- # ("$..book[0,1]", # Not implemented
- # Child(Descendants(Root(), Fields('book')), Slice(end=2))),
- ("$..book[:2]",
- Child(Descendants(Root(), Fields('book')), Slice(end=2))),
-
- # Filter all books with ISBN number
- ("$..book[?(@.isbn)]",
- Child(Descendants(Root(), Fields('book')),
- Filter([Expression(Child(This(), Fields('isbn')), None, None)]))),
-
- # Filter all books cheaper than 10
- ("$..book[?(@.price<10)]",
- Child(Descendants(Root(), Fields('book')),
- Filter([Expression(Child(This(), Fields('price')), '<', 10)]))),
-
- # All members of JSON structure
- ("$..*", Descendants(Root(), Fields('*'))),
-])
+from jsonpath_ng.ext.filter import Expression, Filter
+from jsonpath_ng.jsonpath import Child, Descendants, Fields, Index, Root, Slice, This
+
+
+@pytest.mark.parametrize(
+ "string, parsed",
+ [
+ # The authors of all books in the store
+ (
+ "$.store.book[*].author",
+ Child(
+ Child(Child(Child(Root(), Fields("store")), Fields("book")), Slice()),
+ Fields("author"),
+ ),
+ ),
+ #
+ # All authors
+ ("$..author", Descendants(Root(), Fields("author"))),
+ #
+ # All things in the store
+ ("$.store.*", Child(Child(Root(), Fields("store")), Fields("*"))),
+ #
+ # The price of everything in the store
+ (
+ "$.store..price",
+ Descendants(Child(Root(), Fields("store")), Fields("price")),
+ ),
+ #
+ # The third book
+ ("$..book[2]", Child(Descendants(Root(), Fields("book")), Index(2))),
+ #
+ # The last book in order
+ # "$..book[(@.length-1)]" # Not implemented
+ ("$..book[-1:]", Child(Descendants(Root(), Fields("book")), Slice(start=-1))),
+ #
+ # The first two books
+ # "$..book[0,1]" # Not implemented
+ ("$..book[:2]", Child(Descendants(Root(), Fields("book")), Slice(end=2))),
+ #
+ # Filter all books with an ISBN
+ (
+ "$..book[?(@.isbn)]",
+ Child(
+ Descendants(Root(), Fields("book")),
+ Filter([Expression(Child(This(), Fields("isbn")), None, None)]),
+ ),
+ ),
+ #
+ # Filter all books cheaper than 10
+ (
+ "$..book[?(@.price<10)]",
+ Child(
+ Descendants(Root(), Fields("book")),
+ Filter([Expression(Child(This(), Fields("price")), "<", 10)]),
+ ),
+ ),
+ #
+ # All members of JSON structure
+ ("$..*", Descendants(Root(), Fields("*"))),
+ ],
+)
def test_goessner_examples(string, parsed):
"""
Test Stefan Goessner's `examples`_
@@ -59,16 +71,7 @@ def test_goessner_examples(string, parsed):
assert parse(string, debug=True) == parsed
-@pytest.mark.parametrize('string, parsed', [
- # Navigate objects
- ("$.store.book[0].title",
- Child(Child(Child(Child(Root(), Fields('store')), Fields('book')),
- Index(0)), Fields('title'))),
+def test_attribute_and_dict_syntax():
+ """Verify that attribute and dict syntax result in identical parse trees."""
- # Navigate dictionaries
- ("$['store']['book'][0]['title']",
- Child(Child(Child(Child(Root(), Fields('store')), Fields('book')),
- Index(0)), Fields('title'))),
-])
-def test_obj_v_dict(string, parsed):
- assert parse(string, debug=True) == parsed
+ assert parse("$.store.book[0].title") == parse("$['store']['book'][0]['title']")
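The rewritten test above leans on the fact that jsonpath-ng AST nodes compare by structure, so `parse("$.store.book[0].title") == parse("$['store']['book'][0]['title']")` holds whenever both strings produce identical parse trees. A minimal stdlib sketch of that idea, using hypothetical dataclass stand-ins for the real `Root`/`Fields`/`Child` node classes (not the library's actual implementation):

```python
from dataclasses import dataclass

# Hypothetical stand-ins for jsonpath_ng's AST nodes. Frozen dataclasses
# get a structural __eq__ for free, which is the property that lets two
# syntactically different path strings compare equal as parse trees.
@dataclass(frozen=True)
class Root:
    pass

@dataclass(frozen=True)
class Fields:
    name: str

@dataclass(frozen=True)
class Child:
    left: object
    right: object

tree1 = Child(Child(Root(), Fields("store")), Fields("book"))
tree2 = Child(Child(Root(), Fields("store")), Fields("book"))
assert tree1 == tree2                          # same structure, equal
assert tree1 != Child(Root(), Fields("book"))  # different structure, not equal
```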
=====================================
tests/test_exceptions.py
=====================================
@@ -1,20 +1,33 @@
import pytest
-from jsonpath_ng import parse as rw_parse
-from jsonpath_ng.exceptions import JSONPathError, JsonPathParserError
+from jsonpath_ng import parse as base_parse
+from jsonpath_ng.exceptions import JsonPathParserError
from jsonpath_ng.ext import parse as ext_parse
-def test_rw_exception_class():
- with pytest.raises(JSONPathError):
- rw_parse('foo.bar.`grandparent`.baz')
-
-
-def test_rw_exception_subclass():
+@pytest.mark.parametrize(
+ "path",
+ (
+ "foo[*.bar.baz",
+ "foo.bar.`grandparent`.baz",
+ "foo[*",
+ # `len` extension not available in the base parser
+ "foo.bar.`len`",
+ ),
+)
+def test_rw_exception_subclass(path):
with pytest.raises(JsonPathParserError):
- rw_parse('foo.bar.`grandparent`.baz')
+ base_parse(path)
-def test_ext_exception_subclass():
+@pytest.mark.parametrize(
+ "path",
+ (
+ "foo[*.bar.baz",
+ "foo.bar.`grandparent`.baz",
+ "foo[*",
+ ),
+)
+def test_ext_exception_subclass(path):
with pytest.raises(JsonPathParserError):
- ext_parse('foo.bar.`grandparent`.baz')
+ ext_parse(path)
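The consolidated exception tests above drive a table of malformed paths through `pytest.raises`, asserting that each one fails to parse. A rough stdlib equivalent of that assertion pattern, using a hypothetical toy `parse` (not the library's parser) that rejects unbalanced brackets:

```python
# Stdlib sketch of what the parametrized pytest.raises tests check:
# every malformed path in the table must raise a parser error.
def parse(path):
    # Hypothetical stand-in for jsonpath_ng's parser.
    if path.count("[") != path.count("]"):
        raise ValueError(f"unbalanced brackets in {path!r}")
    return path

malformed = ["foo[*.bar.baz", "foo[*"]
for path in malformed:
    try:
        parse(path)
    except ValueError:
        pass  # expected: the path is rejected
    else:
        raise AssertionError(f"{path!r} should have failed to parse")
```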
=====================================
tests/test_jsonpath.py
=====================================
@@ -1,358 +1,369 @@
-from __future__ import unicode_literals, print_function, absolute_import, division, generators, nested_scopes
-import unittest
+import copy
-from jsonpath_ng import jsonpath # For setting the global auto_id_field flag
+import pytest
-from jsonpath_ng.parser import parse
-from jsonpath_ng.jsonpath import *
+from jsonpath_ng.ext.parser import parse as ext_parse
+from jsonpath_ng.jsonpath import DatumInContext, Fields, Root, This
from jsonpath_ng.lexer import JsonPathLexerError
-
-class TestDatumInContext(unittest.TestCase):
- """
- Tests of properties of the DatumInContext and AutoIdForDatum objects
- """
-
- @classmethod
- def setup_class(cls):
- logging.basicConfig()
-
- def test_DatumInContext_init(self):
-
- test_datum1 = DatumInContext(3)
- assert test_datum1.path == This()
- assert test_datum1.full_path == This()
-
- test_datum2 = DatumInContext(3, path=Root())
- assert test_datum2.path == Root()
- assert test_datum2.full_path == Root()
-
- test_datum3 = DatumInContext(3, path=Fields('foo'), context='does not matter')
- assert test_datum3.path == Fields('foo')
- assert test_datum3.full_path == Fields('foo')
-
- test_datum3 = DatumInContext(3, path=Fields('foo'), context=DatumInContext('does not matter', path=Fields('baz'), context='does not matter'))
- assert test_datum3.path == Fields('foo')
- assert test_datum3.full_path == Fields('baz').child(Fields('foo'))
-
- def test_DatumInContext_in_context(self):
-
- assert (DatumInContext(3).in_context(path=Fields('foo'), context=DatumInContext('whatever'))
- ==
- DatumInContext(3, path=Fields('foo'), context=DatumInContext('whatever')))
-
- assert (DatumInContext(3).in_context(path=Fields('foo'), context='whatever').in_context(path=Fields('baz'), context='whatever')
- ==
- DatumInContext(3).in_context(path=Fields('foo'), context=DatumInContext('whatever').in_context(path=Fields('baz'), context='whatever')))
-
- # def test_AutoIdForDatum_pseudopath(self):
- # assert AutoIdForDatum(DatumInContext(value=3, path=Fields('foo')), id_field='id').pseudopath == Fields('foo')
- # assert AutoIdForDatum(DatumInContext(value={'id': 'bizzle'}, path=Fields('foo')), id_field='id').pseudopath == Fields('bizzle')
-
- # assert AutoIdForDatum(DatumInContext(value={'id': 'bizzle'}, path=Fields('foo')),
- # id_field='id',
- # context=DatumInContext(value=3, path=This())).pseudopath == Fields('bizzle')
-
- # assert (AutoIdForDatum(DatumInContext(value=3, path=Fields('foo')),
- # id_field='id').in_context(DatumInContext(value={'id': 'bizzle'}, path=This()))
- # ==
- # AutoIdForDatum(DatumInContext(value=3, path=Fields('foo')),
- # id_field='id',
- # context=DatumInContext(value={'id': 'bizzle'}, path=This())))
-
- # assert (AutoIdForDatum(DatumInContext(value=3, path=Fields('foo')),
- # id_field='id',
- # context=DatumInContext(value={"id": 'bizzle'},
- # path=Fields('maggle'))).in_context(DatumInContext(value='whatever', path=Fields('miggle')))
- # ==
- # AutoIdForDatum(DatumInContext(value=3, path=Fields('foo')),
- # id_field='id',
- # context=DatumInContext(value={'id': 'bizzle'}, path=Fields('miggle').child(Fields('maggle')))))
-
- # assert AutoIdForDatum(DatumInContext(value=3, path=Fields('foo')),
- # id_field='id',
- # context=DatumInContext(value={'id': 'bizzle'}, path=This())).pseudopath == Fields('bizzle').child(Fields('foo'))
-
-
-
-class TestJsonPath(unittest.TestCase):
- """
- Tests of the actual jsonpath functionality
- """
-
- @classmethod
- def setup_class(cls):
- logging.basicConfig()
-
+from jsonpath_ng.parser import parse as base_parse
+
+from .helpers import assert_full_path_equality, assert_value_equality
+
+
+@pytest.mark.parametrize(
+ "path_arg, context_arg, expected_path, expected_full_path",
+ (
+ (None, None, This(), This()),
+ (Root(), None, Root(), Root()),
+ (Fields("foo"), "unimportant", Fields("foo"), Fields("foo")),
+ (
+ Fields("foo"),
+ DatumInContext("unimportant", path=Fields("baz"), context="unimportant"),
+ Fields("foo"),
+ Fields("baz").child(Fields("foo")),
+ ),
+ ),
+)
+def test_datumincontext_init(path_arg, context_arg, expected_path, expected_full_path):
+ datum = DatumInContext(3, path=path_arg, context=context_arg)
+ assert datum.path == expected_path
+ assert datum.full_path == expected_full_path
+
+
+def test_datumincontext_in_context():
+ d1 = DatumInContext(3, path=Fields("foo"), context=DatumInContext("bar"))
+ d2 = DatumInContext(3).in_context(path=Fields("foo"), context=DatumInContext("bar"))
+ assert d1 == d2
+
+
+def test_datumincontext_in_context_nested():
+ sequential_calls = (
+ DatumInContext(3)
+ .in_context(path=Fields("foo"), context="whatever")
+ .in_context(path=Fields("baz"), context="whatever")
+ )
+ nested_calls = DatumInContext(3).in_context(
+ path=Fields("foo"),
+ context=DatumInContext("whatever").in_context(
+ path=Fields("baz"), context="whatever"
+ ),
+ )
+ assert sequential_calls == nested_calls
+
+
+parsers = pytest.mark.parametrize(
+ "parse",
+ (
+ pytest.param(base_parse, id="parse=jsonpath_ng.parser.parse"),
+ pytest.param(ext_parse, id="parse=jsonpath_ng.ext.parser.parse"),
+ ),
+)
+
+
+update_test_cases = (
#
- # Check that the data value returned is good
+ # Fields
+ # ------
#
- def check_cases(self, test_cases):
- # Note that just manually building an AST would avoid this dep and isolate the tests, but that would suck a bit
- # Also, we coerce iterables, etc, into the desired target type
-
- for string, data, target in test_cases:
- print('parse("%s").find(%s) =?= %s' % (string, data, target))
- result = parse(string).find(data)
- if isinstance(target, list):
- assert [r.value for r in result] == target
- elif isinstance(target, set):
- assert set([r.value for r in result]) == target
- else:
- assert result.value == target
-
- def test_fields_value(self):
- jsonpath.auto_id_field = None
- self.check_cases([ ('foo', {'foo': 'baz'}, ['baz']),
- ('foo,baz', {'foo': 1, 'baz': 2}, [1, 2]),
- ('@foo', {'@foo': 1}, [1]),
- ('*', {'foo': 1, 'baz': 2}, set([1, 2])) ])
-
- jsonpath.auto_id_field = 'id'
- self.check_cases([ ('*', {'foo': 1, 'baz': 2}, set([1, 2, '`this`'])) ])
-
- def test_root_value(self):
- jsonpath.auto_id_field = None
- self.check_cases([
- ('$', {'foo': 'baz'}, [{'foo':'baz'}]),
- ('foo.$', {'foo': 'baz'}, [{'foo':'baz'}]),
- ('foo.$.foo', {'foo': 'baz'}, ['baz']),
- ])
-
- def test_this_value(self):
- jsonpath.auto_id_field = None
- self.check_cases([
- ('`this`', {'foo': 'baz'}, [{'foo':'baz'}]),
- ('foo.`this`', {'foo': 'baz'}, ['baz']),
- ('foo.`this`.baz', {'foo': {'baz': 3}}, [3]),
- ])
-
- def test_index_value(self):
- self.check_cases([
- ('[0]', [42], [42]),
- ('[5]', [42], []),
- ('[2]', [34, 65, 29, 59], [29]),
- ('[0]', None, [])
- ])
-
- def test_slice_value(self):
- self.check_cases([('[*]', [1, 2, 3], [1, 2, 3]),
- ('[*]', range(1, 4), [1, 2, 3]),
- ('[1:]', [1, 2, 3, 4], [2, 3, 4]),
- ('[:2]', [1, 2, 3, 4], [1, 2])])
-
- # Funky slice hacks
- self.check_cases([
- ('[*]', 1, [1]), # This is a funky hack
- ('[0:]', 1, [1]), # This is a funky hack
- ('[*]', {'foo':1}, [{'foo': 1}]), # This is a funky hack
- ('[*].foo', {'foo':1}, [1]), # This is a funky hack
- ])
-
- def test_child_value(self):
- self.check_cases([('foo.baz', {'foo': {'baz': 3}}, [3]),
- ('foo.baz', {'foo': {'baz': [3]}}, [[3]]),
- ('foo.baz.bizzle', {'foo': {'baz': {'bizzle': 5}}}, [5])])
-
- def test_descendants_value(self):
- self.check_cases([
- ('foo..baz', {'foo': {'baz': 1, 'bing': {'baz': 2}}}, [1, 2] ),
- ('foo..baz', {'foo': [{'baz': 1}, {'baz': 2}]}, [1, 2] ),
- ])
-
- def test_parent_value(self):
- self.check_cases([('foo.baz.`parent`', {'foo': {'baz': 3}}, [{'baz': 3}]),
- ('foo.`parent`.foo.baz.`parent`.baz.bizzle', {'foo': {'baz': {'bizzle': 5}}}, [5])])
-
- def test_hyphen_key(self):
- self.check_cases([('foo.bar-baz', {'foo': {'bar-baz': 3}}, [3]),
- ('foo.[bar-baz,blah-blah]', {'foo': {'bar-baz': 3, 'blah-blah':5}},
- [3,5])])
- self.assertRaises(JsonPathLexerError, self.check_cases,
- [('foo.-baz', {'foo': {'-baz': 8}}, [8])])
-
-
-
-
+ ("foo", {"foo": 1}, 5, {"foo": 5}),
+ ("$.*", {"foo": 1, "bar": 2}, 3, {"foo": 3, "bar": 3}),
#
- # Check that the paths for the data are correct.
- # FIXME: merge these tests with the above, since the inputs are the same anyhow
+ # Indexes
+ # -------
#
- def check_paths(self, test_cases):
- # Note that just manually building an AST would avoid this dep and isolate the tests, but that would suck a bit
- # Also, we coerce iterables, etc, into the desired target type
-
- for string, data, target in test_cases:
- print('parse("%s").find(%s).paths =?= %s' % (string, data, target))
- assert hash(parse(string)) == hash(parse(string))
- result = parse(string).find(data)
- if isinstance(target, list):
- assert [str(r.full_path) for r in result] == target
- elif isinstance(target, set):
- assert set([str(r.full_path) for r in result]) == target
- else:
- assert str(result.path) == target
-
- def test_fields_paths(self):
- jsonpath.auto_id_field = None
- self.check_paths([ ('foo', {'foo': 'baz'}, ['foo']),
- ('foo,baz', {'foo': 1, 'baz': 2}, ['foo', 'baz']),
- ('*', {'foo': 1, 'baz': 2}, set(['foo', 'baz'])) ])
-
- jsonpath.auto_id_field = 'id'
- self.check_paths([ ('*', {'foo': 1, 'baz': 2}, set(['foo', 'baz', 'id'])) ])
-
- def test_root_paths(self):
- jsonpath.auto_id_field = None
- self.check_paths([
- ('$', {'foo': 'baz'}, ['$']),
- ('foo.$', {'foo': 'baz'}, ['$']),
- ('foo.$.foo', {'foo': 'baz'}, ['foo']),
- ])
-
- def test_this_paths(self):
- jsonpath.auto_id_field = None
- self.check_paths([
- ('`this`', {'foo': 'baz'}, ['`this`']),
- ('foo.`this`', {'foo': 'baz'}, ['foo']),
- ('foo.`this`.baz', {'foo': {'baz': 3}}, ['foo.baz']),
- ])
-
- def test_index_paths(self):
- self.check_paths([('[0]', [42], ['[0]']),
- ('[2]', [34, 65, 29, 59], ['[2]'])])
-
- def test_slice_paths(self):
- self.check_paths([ ('[*]', [1, 2, 3], ['[0]', '[1]', '[2]']),
- ('[1:]', [1, 2, 3, 4], ['[1]', '[2]', '[3]']) ])
-
- def test_child_paths(self):
- self.check_paths([('foo.baz', {'foo': {'baz': 3}}, ['foo.baz']),
- ('foo.baz', {'foo': {'baz': [3]}}, ['foo.baz']),
- ('foo.baz.bizzle', {'foo': {'baz': {'bizzle': 5}}}, ['foo.baz.bizzle'])])
-
- def test_descendants_paths(self):
- self.check_paths([('foo..baz', {'foo': {'baz': 1, 'bing': {'baz': 2}}}, ['foo.baz', 'foo.bing.baz'] )])
-
- def test_literals_in_field_names(self):
- self.check_paths([("A.'a.c'", {'A' : {'a.c': 'd'}}, ["A.'a.c'"])])
-
+ ("[0]", ["foo", "bar", "baz"], "test", ["test", "bar", "baz"]),
#
- # Check the "auto_id_field" feature
- #
- def test_fields_auto_id(self):
- jsonpath.auto_id_field = "id"
- self.check_cases([ ('foo.id', {'foo': 'baz'}, ['foo']),
- ('foo.id', {'foo': {'id': 'baz'}}, ['baz']),
- ('foo,baz.id', {'foo': 1, 'baz': 2}, ['foo', 'baz']),
- ('*.id',
- {'foo':{'id': 1},
- 'baz': 2},
- set(['1', 'baz'])) ])
-
- def test_root_auto_id(self):
- jsonpath.auto_id_field = 'id'
- self.check_cases([
- ('$.id', {'foo': 'baz'}, ['$']), # This is a wonky case that is not that interesting
- ('foo.$.id', {'foo': 'baz', 'id': 'bizzle'}, ['bizzle']),
- ('foo.$.baz.id', {'foo': 4, 'baz': 3}, ['baz']),
- ])
-
- def test_this_auto_id(self):
- jsonpath.auto_id_field = 'id'
- self.check_cases([
- ('id', {'foo': 'baz'}, ['`this`']), # This is, again, a wonky case that is not that interesting
- ('foo.`this`.id', {'foo': 'baz'}, ['foo']),
- ('foo.`this`.baz.id', {'foo': {'baz': 3}}, ['foo.baz']),
- ])
-
- def test_index_auto_id(self):
- jsonpath.auto_id_field = "id"
- self.check_cases([('[0].id', [42], ['[0]']),
- ('[2].id', [34, 65, 29, 59], ['[2]'])])
-
- def test_slice_auto_id(self):
- jsonpath.auto_id_field = "id"
- self.check_cases([ ('[*].id', [1, 2, 3], ['[0]', '[1]', '[2]']),
- ('[1:].id', [1, 2, 3, 4], ['[1]', '[2]', '[3]']) ])
-
- def test_child_auto_id(self):
- jsonpath.auto_id_field = "id"
- self.check_cases([('foo.baz.id', {'foo': {'baz': 3}}, ['foo.baz']),
- ('foo.baz.id', {'foo': {'baz': [3]}}, ['foo.baz']),
- ('foo.baz.id', {'foo': {'id': 'bizzle', 'baz': 3}}, ['bizzle.baz']),
- ('foo.baz.id', {'foo': {'baz': {'id': 'hi'}}}, ['foo.hi']),
- ('foo.baz.bizzle.id', {'foo': {'baz': {'bizzle': 5}}}, ['foo.baz.bizzle'])])
-
- def test_descendants_auto_id(self):
- jsonpath.auto_id_field = "id"
- self.check_cases([('foo..baz.id',
- {'foo': {
- 'baz': 1,
- 'bing': {
- 'baz': 2
- }
- } },
- ['foo.baz',
- 'foo.bing.baz'] )])
-
- def check_update_cases(self, test_cases):
- for original, expr_str, value, expected in test_cases:
- print('parse(%r).update(%r, %r) =?= %r'
- % (expr_str, original, value, expected))
- expr = parse(expr_str)
- actual = expr.update(original, value)
- assert actual == expected
-
- def test_update_root(self):
- self.check_update_cases([
- ('foo', '$', 'bar', 'bar')
- ])
-
- def test_update_this(self):
- self.check_update_cases([
- ('foo', '`this`', 'bar', 'bar')
- ])
-
- def test_update_fields(self):
- self.check_update_cases([
- ({'foo': 1}, 'foo', 5, {'foo': 5}),
- ({'foo': 1, 'bar': 2}, '$.*', 3, {'foo': 3, 'bar': 3})
- ])
-
- def test_update_child(self):
- self.check_update_cases([
- ({'foo': 'bar'}, '$.foo', 'baz', {'foo': 'baz'}),
- ({'foo': {'bar': 1}}, 'foo.bar', 'baz', {'foo': {'bar': 'baz'}})
- ])
+ # Slices
+ # ------
+ #
+ ("[0:2]", ["foo", "bar", "baz"], "test", ["test", "test", "baz"]),
+ #
+ # Root
+ # ----
+ #
+ ("$", "foo", "bar", "bar"),
+ #
+ # This
+ # ----
+ #
+ ("`this`", "foo", "bar", "bar"),
+ #
+ # Children
+ # --------
+ #
+ ("$.foo", {"foo": "bar"}, "baz", {"foo": "baz"}),
+ ("foo.bar", {"foo": {"bar": 1}}, "baz", {"foo": {"bar": "baz"}}),
+ #
+ # Descendants
+ # -----------
+ #
+ ("$..somefield", {"somefield": 1}, 42, {"somefield": 42}),
+ (
+ "$..nestedfield",
+ {"outer": {"nestedfield": 1}},
+ 42,
+ {"outer": {"nestedfield": 42}},
+ ),
+ (
+ "$..bar",
+ {"outs": {"bar": 1, "ins": {"bar": 9}}, "outs2": {"bar": 2}},
+ 42,
+ {"outs": {"bar": 42, "ins": {"bar": 42}}, "outs2": {"bar": 42}},
+ ),
+ #
+ # Where
+ # -----
+ #
+ (
+ "*.bar where baz",
+ {"foo": {"bar": {"baz": 1}}, "bar": {"baz": 2}},
+ 5,
+ {"foo": {"bar": 5}, "bar": {"baz": 2}},
+ ),
+ (
+ "(* where flag) .. bar",
+ {"foo": {"bar": 1, "flag": 1}, "baz": {"bar": 2}},
+ 3,
+ {"foo": {"bar": 3, "flag": 1}, "baz": {"bar": 2}},
+ ),
+ #
+ # Lambdas
+ # -------
+ #
+ (
+ "foo[*].baz",
+ {'foo': [{'baz': 1}, {'baz': 2}]},
+ lambda x, y, z: x + 1,
+ {'foo': [{'baz': 2}, {'baz': 3}]}
+ ),
+)
+
+
+@pytest.mark.parametrize(
+ "expression, data, update_value, expected_value",
+ update_test_cases,
+)
+@parsers
+def test_update(parse, expression, data, update_value, expected_value):
+ data_copy = copy.deepcopy(data)
+ result = parse(expression).update(data_copy, update_value)
+ assert result == expected_value
+
+
+find_test_cases = (
+ #
+ # * (star)
+ # --------
+ #
+ ("*", {"foo": 1, "baz": 2}, {1, 2}, {"foo", "baz"}),
+ #
+ # Fields
+ # ------
+ #
+ ("foo", {"foo": "baz"}, ["baz"], ["foo"]),
+ ("foo,baz", {"foo": 1, "baz": 2}, [1, 2], ["foo", "baz"]),
+ ("@foo", {"@foo": 1}, [1], ["@foo"]),
+ #
+ # Roots
+ # -----
+ #
+ ("$", {"foo": "baz"}, [{"foo": "baz"}], ["$"]),
+ ("foo.$", {"foo": "baz"}, [{"foo": "baz"}], ["$"]),
+ ("foo.$.foo", {"foo": "baz"}, ["baz"], ["foo"]),
+ #
+ # This
+ # ----
+ #
+ ("`this`", {"foo": "baz"}, [{"foo": "baz"}], ["`this`"]),
+ ("foo.`this`", {"foo": "baz"}, ["baz"], ["foo"]),
+ ("foo.`this`.baz", {"foo": {"baz": 3}}, [3], ["foo.baz"]),
+ #
+ # Indexes
+ # -------
+ #
+ ("[0]", [42], [42], ["[0]"]),
+ ("[5]", [42], [], []),
+ ("[2]", [34, 65, 29, 59], [29], ["[2]"]),
+ ("[0]", None, [], []),
+ #
+ # Slices
+ # ------
+ #
+ ("[*]", [1, 2, 3], [1, 2, 3], ["[0]", "[1]", "[2]"]),
+ ("[*]", range(1, 4), [1, 2, 3], ["[0]", "[1]", "[2]"]),
+ ("[1:]", [1, 2, 3, 4], [2, 3, 4], ["[1]", "[2]", "[3]"]),
+ ("[1:3]", [1, 2, 3, 4], [2, 3], ["[1]", "[2]"]),
+ ("[:2]", [1, 2, 3, 4], [1, 2], ["[0]", "[1]"]),
+ ("[:3:2]", [1, 2, 3, 4], [1, 3], ["[0]", "[2]"]),
+ ("[1::2]", [1, 2, 3, 4], [2, 4], ["[1]", "[3]"]),
+ ("[1:6:3]", range(1, 10), [2, 5], ["[1]", "[4]"]),
+ ("[::-2]", [1, 2, 3, 4, 5], [5, 3, 1], ["[4]", "[2]", "[0]"]),
+ #
+ # Slices (funky hacks)
+ # --------------------
+ #
+ ("[*]", 1, [1], ["[0]"]),
+ ("[0:]", 1, [1], ["[0]"]),
+ ("[*]", {"foo": 1}, [{"foo": 1}], ["[0]"]),
+ ("[*].foo", {"foo": 1}, [1], ["[0].foo"]),
+ #
+ # Children
+ # --------
+ #
+ ("foo.baz", {"foo": {"baz": 3}}, [3], ["foo.baz"]),
+ ("foo.baz", {"foo": {"baz": [3]}}, [[3]], ["foo.baz"]),
+ ("foo.baz.qux", {"foo": {"baz": {"qux": 5}}}, [5], ["foo.baz.qux"]),
+ #
+ # Descendants
+ # -----------
+ #
+ (
+ "foo..baz",
+ {"foo": {"baz": 1, "bing": {"baz": 2}}},
+ [1, 2],
+ ["foo.baz", "foo.bing.baz"],
+ ),
+ (
+ "foo..baz",
+ {"foo": [{"baz": 1}, {"baz": 2}]},
+ [1, 2],
+ ["foo.[0].baz", "foo.[1].baz"],
+ ),
+ #
+ # Parents
+ # -------
+ #
+ ("foo.baz.`parent`", {"foo": {"baz": 3}}, [{"baz": 3}], ["foo"]),
+ (
+ "foo.`parent`.foo.baz.`parent`.baz.qux",
+ {"foo": {"baz": {"qux": 5}}},
+ [5],
+ ["foo.baz.qux"],
+ ),
+ #
+ # Hyphens
+ # -------
+ #
+ ("foo.bar-baz", {"foo": {"bar-baz": 3}}, [3], ["foo.bar-baz"]),
+ (
+ "foo.[bar-baz,blah-blah]",
+ {"foo": {"bar-baz": 3, "blah-blah": 5}},
+ [3, 5],
+ ["foo.bar-baz", "foo.blah-blah"],
+ ),
+ #
+ # Literals
+ # --------
+ #
+ ("A.'a.c'", {"A": {"a.c": "d"}}, ["d"], ["A.'a.c'"]),
+)
- def test_update_where(self):
- self.check_update_cases([
- ({'foo': {'bar': {'baz': 1}}, 'bar': {'baz': 2}},
- '*.bar where baz', 5, {'foo': {'bar': 5}, 'bar': {'baz': 2}})
- ])
- def test_update_descendants_where(self):
- self.check_update_cases([
- ({'foo': {'bar': 1, 'flag': 1}, 'baz': {'bar': 2}},
- '(* where flag) .. bar', 3,
- {'foo': {'bar': 3, 'flag': 1}, 'baz': {'bar': 2}})
- ])
+@pytest.mark.parametrize(
+ "path, data, expected_values, expected_full_paths", find_test_cases
+)
+@parsers
+def test_find(parse, path, data, expected_values, expected_full_paths):
+ results = parse(path).find(data)
- def test_update_descendants(self):
- self.check_update_cases([
- ({'somefield': 1}, '$..somefield', 42, {'somefield': 42}),
- ({'outer': {'nestedfield': 1}}, '$..nestedfield', 42, {'outer': {'nestedfield': 42}}),
- ({'outs': {'bar': 1, 'ins': {'bar': 9}}, 'outs2': {'bar': 2}},
- '$..bar', 42,
- {'outs': {'bar': 42, 'ins': {'bar': 42}}, 'outs2': {'bar': 42}})
- ])
+ # Verify result values and full paths match expectations.
+ assert_value_equality(results, expected_values)
+ assert_full_path_equality(results, expected_full_paths)
- def test_update_index(self):
- self.check_update_cases([
- (['foo', 'bar', 'baz'], '[0]', 'test', ['test', 'bar', 'baz'])
- ])
- def test_update_slice(self):
- self.check_update_cases([
- (['foo', 'bar', 'baz'], '[0:2]', 'test', ['test', 'test', 'baz'])
- ])
+find_test_cases_with_auto_id = (
+ #
+ # * (star)
+ # --------
+ #
+ ("*", {"foo": 1, "baz": 2}, {1, 2, "`this`"}),
+ #
+ # Fields
+ # ------
+ #
+ ("foo.id", {"foo": "baz"}, ["foo"]),
+ ("foo.id", {"foo": {"id": "baz"}}, ["baz"]),
+ ("foo,baz.id", {"foo": 1, "baz": 2}, ["foo", "baz"]),
+ ("*.id", {"foo": {"id": 1}, "baz": 2}, {"1", "baz"}),
+ #
+ # Roots
+ # -----
+ #
+ ("$.id", {"foo": "baz"}, ["$"]),
+ ("foo.$.id", {"foo": "baz", "id": "bizzle"}, ["bizzle"]),
+ ("foo.$.baz.id", {"foo": 4, "baz": 3}, ["baz"]),
+ #
+ # This
+ # ----
+ #
+ ("id", {"foo": "baz"}, ["`this`"]),
+ ("foo.`this`.id", {"foo": "baz"}, ["foo"]),
+ ("foo.`this`.baz.id", {"foo": {"baz": 3}}, ["foo.baz"]),
+ #
+ # Indexes
+ # -------
+ #
+ ("[0].id", [42], ["[0]"]),
+ ("[2].id", [34, 65, 29, 59], ["[2]"]),
+ #
+ # Slices
+ # ------
+ #
+ ("[*].id", [1, 2, 3], ["[0]", "[1]", "[2]"]),
+ ("[1:].id", [1, 2, 3, 4], ["[1]", "[2]", "[3]"]),
+ #
+ # Children
+ # --------
+ #
+ ("foo.baz.id", {"foo": {"baz": 3}}, ["foo.baz"]),
+ ("foo.baz.id", {"foo": {"baz": [3]}}, ["foo.baz"]),
+ ("foo.baz.id", {"foo": {"id": "bizzle", "baz": 3}}, ["bizzle.baz"]),
+ ("foo.baz.id", {"foo": {"baz": {"id": "hi"}}}, ["foo.hi"]),
+ ("foo.baz.bizzle.id", {"foo": {"baz": {"bizzle": 5}}}, ["foo.baz.bizzle"]),
+ #
+ # Descendants
+ # -----------
+ #
+ (
+ "foo..baz.id",
+ {"foo": {"baz": 1, "bing": {"baz": 2}}},
+ ["foo.baz", "foo.bing.baz"],
+ ),
+)
+
+
+@pytest.mark.parametrize("path, data, expected_values", find_test_cases_with_auto_id)
+@parsers
+def test_find_values_auto_id(auto_id_field, parse, path, data, expected_values):
+ result = parse(path).find(data)
+ assert_value_equality(result, expected_values)
+
+
+@parsers
+def test_find_full_paths_auto_id(auto_id_field, parse):
+ results = parse("*").find({"foo": 1, "baz": 2})
+ assert_full_path_equality(results, {"foo", "baz", "id"})
+
+
+@pytest.mark.parametrize(
+ "string, target",
+ (
+ ("m.[1].id", ["1.m.a2id"]),
+ ("m.[1].$.b.id", ["1.bid"]),
+ ("m.[0].id", ["1.m.[0]"]),
+ ),
+)
+@parsers
+def test_nested_index_auto_id(auto_id_field, parse, string, target):
+ data = {
+ "id": 1,
+ "b": {"id": "bid", "name": "bob"},
+ "m": [{"a": "a1"}, {"a": "a2", "id": "a2id"}],
+ }
+ result = parse(string).find(data)
+ assert_value_equality(result, target)
+
+
+def test_invalid_hyphenation_in_key():
+ with pytest.raises(JsonPathLexerError):
+ base_parse("foo.-baz")
=====================================
tests/test_jsonpath_rw_ext.py
=====================================
@@ -1,17 +1,3 @@
-# -*- coding: utf-8 -*-
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
"""
test_jsonpath_ng_ext
----------------------------------
@@ -19,571 +5,461 @@ test_jsonpath_ng_ext
Tests for `jsonpath_ng_ext` module.
"""
-from jsonpath_ng import jsonpath # For setting the global auto_id_field flag
-from oslotest import base
+import pytest
+from jsonpath_ng.exceptions import JsonPathParserError
from jsonpath_ng.ext import parser
-
-# Example from https://docs.pytest.org/en/7.1.x/example/parametrize.html#a-quick-port-of-testscenarios
-def pytest_generate_tests(metafunc):
- idlist = []
- argvalues = []
- for scenario in metafunc.cls.scenarios:
- idlist.append(scenario[0])
- items = scenario[1].items()
- argnames = [x[0] for x in items]
- argvalues.append([x[1] for x in items])
- metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")
-
-
-class Testjsonpath_ng_ext:
- scenarios = [
- ('sorted_list', dict(string='objects.`sorted`',
- data={'objects': ['alpha', 'gamma', 'beta']},
- target=[['alpha', 'beta', 'gamma']])),
- ('sorted_list_indexed', dict(string='objects.`sorted`[1]',
- data={'objects': [
- 'alpha', 'gamma', 'beta']},
- target='beta')),
- ('sorted_dict', dict(string='objects.`sorted`',
- data={'objects': {'cow': 'moo', 'horse': 'neigh',
- 'cat': 'meow'}},
- target=[['cat', 'cow', 'horse']])),
- ('sorted_dict_indexed', dict(string='objects.`sorted`[0]',
- data={'objects': {'cow': 'moo',
- 'horse': 'neigh',
- 'cat': 'meow'}},
- target='cat')),
-
- ('len_list', dict(string='objects.`len`',
- data={'objects': ['alpha', 'gamma', 'beta']},
- target=3)),
- ('len_dict', dict(string='objects.`len`',
- data={'objects': {'cow': 'moo', 'cat': 'neigh'}},
- target=2)),
- ('len_str', dict(string='objects[0].`len`',
- data={'objects': ['alpha', 'gamma']},
- target=5)),
-
- ('filter_exists_syntax1', dict(string='objects[?cow]',
- data={'objects': [{'cow': 'moo'},
- {'cat': 'neigh'}]},
- target=[{'cow': 'moo'}])),
- ('filter_exists_syntax2', dict(string='objects[?@.cow]',
- data={'objects': [{'cow': 'moo'},
- {'cat': 'neigh'}]},
- target=[{'cow': 'moo'}])),
- ('filter_exists_syntax3', dict(string='objects[?(@.cow)]',
- data={'objects': [{'cow': 'moo'},
- {'cat': 'neigh'}]},
- target=[{'cow': 'moo'}])),
- ('filter_exists_syntax4', dict(string='objects[?(@."cow!?cat")]',
- data={'objects': [{'cow!?cat': 'moo'},
- {'cat': 'neigh'}]},
- target=[{'cow!?cat': 'moo'}])),
- ('filter_eq1', dict(string='objects[?cow="moo"]',
- data={'objects': [{'cow': 'moo'},
- {'cow': 'neigh'},
- {'cat': 'neigh'}]},
- target=[{'cow': 'moo'}])),
- ('filter_eq2', dict(string='objects[?(@.["cow"]="moo")]',
- data={'objects': [{'cow': 'moo'},
- {'cow': 'neigh'},
- {'cat': 'neigh'}]},
- target=[{'cow': 'moo'}])),
- ('filter_eq3', dict(string='objects[?cow=="moo"]',
- data={'objects': [{'cow': 'moo'},
- {'cow': 'neigh'},
- {'cat': 'neigh'}]},
- target=[{'cow': 'moo'}])),
- ('filter_gt', dict(string='objects[?cow>5]',
- data={'objects': [{'cow': 8},
- {'cow': 7},
- {'cow': 5},
- {'cow': 'neigh'}]},
- target=[{'cow': 8}, {'cow': 7}])),
- ('filter_and', dict(string='objects[?cow>5&cat=2]',
- data={'objects': [{'cow': 8, 'cat': 2},
- {'cow': 7, 'cat': 2},
- {'cow': 2, 'cat': 2},
- {'cow': 5, 'cat': 3},
- {'cow': 8, 'cat': 3}]},
- target=[{'cow': 8, 'cat': 2},
- {'cow': 7, 'cat': 2}])),
- ('filter_float_gt', dict(
- string='objects[?confidence>=0.5].prediction',
- data={
- 'objects': [
- {'confidence': 0.42,
- 'prediction': 'Good'},
- {'confidence': 0.58,
- 'prediction': 'Bad'},
+from .helpers import assert_value_equality
+
+test_cases = (
+ pytest.param(
+ "objects.`sorted`",
+ {"objects": ["alpha", "gamma", "beta"]},
+ [["alpha", "beta", "gamma"]],
+ id="sorted_list",
+ ),
+ pytest.param(
+ "objects.`sorted`[1]",
+ {"objects": ["alpha", "gamma", "beta"]},
+ "beta",
+ id="sorted_list_indexed",
+ ),
+ pytest.param(
+ "objects.`sorted`",
+ {"objects": {"cow": "moo", "horse": "neigh", "cat": "meow"}},
+ [["cat", "cow", "horse"]],
+ id="sorted_dict",
+ ),
+ pytest.param(
+ "objects.`sorted`[0]",
+ {"objects": {"cow": "moo", "horse": "neigh", "cat": "meow"}},
+ "cat",
+ id="sorted_dict_indexed",
+ ),
+ pytest.param(
+ "objects.`len`", {"objects": ["alpha", "gamma", "beta"]}, 3, id="len_list"
+ ),
+ pytest.param(
+ "objects.`len`", {"objects": {"cow": "moo", "cat": "neigh"}}, 2, id="len_dict"
+ ),
+ pytest.param("objects[0].`len`", {"objects": ["alpha", "gamma"]}, 5, id="len_str"),
+ pytest.param(
+ 'objects[?@="alpha"]',
+ {"objects": ["alpha", "gamma", "beta"]},
+ ["alpha"],
+ id="filter_list",
+ ),
+ pytest.param(
+ 'objects[?@ =~ "a.+"]',
+ {"objects": ["alpha", "gamma", "beta"]},
+ ["alpha", "gamma"],
+ id="filter_list_2",
+ ),
+ pytest.param(
+ 'objects[?@ =~ "a.+"]', {"objects": [1, 2, 3]}, [], id="filter_list_3"
+ ),
+ pytest.param(
+ "objects.`keys`", {"objects": ["alpha", "gamma", "beta"]}, [], id="keys_list"
+ ),
+ pytest.param(
+ "objects.`keys`",
+ {"objects": {"cow": "moo", "cat": "neigh"}},
+ ["cow", "cat"],
+ id="keys_dict",
+ ),
+ pytest.param(
+ "objects[?cow]",
+ {"objects": [{"cow": "moo"}, {"cat": "neigh"}]},
+ [{"cow": "moo"}],
+ id="filter_exists_syntax1",
+ ),
+ pytest.param(
+ "objects[?@.cow]",
+ {"objects": [{"cow": "moo"}, {"cat": "neigh"}]},
+ [{"cow": "moo"}],
+ id="filter_exists_syntax2",
+ ),
+ pytest.param(
+ "objects[?(@.cow)]",
+ {"objects": [{"cow": "moo"}, {"cat": "neigh"}]},
+ [{"cow": "moo"}],
+ id="filter_exists_syntax3",
+ ),
+ pytest.param(
+ 'objects[?(@."cow!?cat")]',
+ {"objects": [{"cow!?cat": "moo"}, {"cat": "neigh"}]},
+ [{"cow!?cat": "moo"}],
+ id="filter_exists_syntax4",
+ ),
+ pytest.param(
+ 'objects[?cow="moo"]',
+ {"objects": [{"cow": "moo"}, {"cow": "neigh"}, {"cat": "neigh"}]},
+ [{"cow": "moo"}],
+ id="filter_eq1",
+ ),
+ pytest.param(
+ 'objects[?(@.["cow"]="moo")]',
+ {"objects": [{"cow": "moo"}, {"cow": "neigh"}, {"cat": "neigh"}]},
+ [{"cow": "moo"}],
+ id="filter_eq2",
+ ),
+ pytest.param(
+ 'objects[?cow=="moo"]',
+ {"objects": [{"cow": "moo"}, {"cow": "neigh"}, {"cat": "neigh"}]},
+ [{"cow": "moo"}],
+ id="filter_eq3",
+ ),
+ pytest.param(
+ "objects[?cow>5]",
+ {"objects": [{"cow": 8}, {"cow": 7}, {"cow": 5}, {"cow": "neigh"}]},
+ [{"cow": 8}, {"cow": 7}],
+ id="filter_gt",
+ ),
+ pytest.param(
+ "objects[?cow>5&cat=2]",
+ {
+ "objects": [
+ {"cow": 8, "cat": 2},
+ {"cow": 7, "cat": 2},
+ {"cow": 2, "cat": 2},
+ {"cow": 5, "cat": 3},
+ {"cow": 8, "cat": 3},
+ ]
+ },
+ [{"cow": 8, "cat": 2}, {"cow": 7, "cat": 2}],
+ id="filter_and",
+ ),
+ pytest.param(
+ "objects[?confidence>=0.5].prediction",
+ {
+ "objects": [
+ {"confidence": 0.42, "prediction": "Good"},
+ {"confidence": 0.58, "prediction": "Bad"},
+ ]
+ },
+ ["Bad"],
+ id="filter_float_gt",
+ ),
+ pytest.param(
+ "objects[/cow]",
+ {
+ "objects": [
+ {"cat": 1, "cow": 2},
+ {"cat": 2, "cow": 1},
+ {"cat": 3, "cow": 3},
+ ]
+ },
+ [[{"cat": 2, "cow": 1}, {"cat": 1, "cow": 2}, {"cat": 3, "cow": 3}]],
+ id="sort1",
+ ),
+ pytest.param(
+ "objects[/cow][0].cat",
+ {
+ "objects": [
+ {"cat": 1, "cow": 2},
+ {"cat": 2, "cow": 1},
+ {"cat": 3, "cow": 3},
+ ]
+ },
+ 2,
+ id="sort1_indexed",
+ ),
+ pytest.param(
+ "objects[\\cat]",
+ {"objects": [{"cat": 2}, {"cat": 1}, {"cat": 3}]},
+ [[{"cat": 3}, {"cat": 2}, {"cat": 1}]],
+ id="sort2",
+ ),
+ pytest.param(
+ "objects[\\cat][-1].cat",
+ {"objects": [{"cat": 2}, {"cat": 1}, {"cat": 3}]},
+ 1,
+ id="sort2_indexed",
+ ),
+ pytest.param(
+ "objects[/cow,\\cat]",
+ {
+ "objects": [
+ {"cat": 1, "cow": 2},
+ {"cat": 2, "cow": 1},
+ {"cat": 3, "cow": 1},
+ {"cat": 3, "cow": 3},
+ ]
+ },
+ [
+ [
+ {"cat": 3, "cow": 1},
+ {"cat": 2, "cow": 1},
+ {"cat": 1, "cow": 2},
+ {"cat": 3, "cow": 3},
+ ]
+ ],
+ id="sort3",
+ ),
+ pytest.param(
+ "objects[/cow,\\cat][0].cat",
+ {
+ "objects": [
+ {"cat": 1, "cow": 2},
+ {"cat": 2, "cow": 1},
+ {"cat": 3, "cow": 1},
+ {"cat": 3, "cow": 3},
+ ]
+ },
+ 3,
+ id="sort3_indexed",
+ ),
+ pytest.param(
+ "objects[/cat.cow]",
+ {
+ "objects": [
+ {"cat": {"dog": 1, "cow": 2}},
+ {"cat": {"dog": 2, "cow": 1}},
+ {"cat": {"dog": 3, "cow": 3}},
+ ]
+ },
+ [
+ [
+ {"cat": {"dog": 2, "cow": 1}},
+ {"cat": {"dog": 1, "cow": 2}},
+ {"cat": {"dog": 3, "cow": 3}},
+ ]
+ ],
+ id="sort4",
+ ),
+ pytest.param(
+ "objects[/cat.cow][0].cat.dog",
+ {
+ "objects": [
+ {"cat": {"dog": 1, "cow": 2}},
+ {"cat": {"dog": 2, "cow": 1}},
+ {"cat": {"dog": 3, "cow": 3}},
+ ]
+ },
+ 2,
+ id="sort4_indexed",
+ ),
+ pytest.param(
+ "objects[/cat.(cow,bow)]",
+ {
+ "objects": [
+ {"cat": {"dog": 1, "bow": 3}},
+ {"cat": {"dog": 2, "cow": 1}},
+ {"cat": {"dog": 2, "bow": 2}},
+ {"cat": {"dog": 3, "cow": 2}},
+ ]
+ },
+ [
+ [
+ {"cat": {"dog": 2, "cow": 1}},
+ {"cat": {"dog": 2, "bow": 2}},
+ {"cat": {"dog": 3, "cow": 2}},
+ {"cat": {"dog": 1, "bow": 3}},
+ ]
+ ],
+ id="sort5_twofields",
+ ),
+ pytest.param(
+ "objects[/cat.(cow,bow)][0].cat.dog",
+ {
+ "objects": [
+ {"cat": {"dog": 1, "bow": 3}},
+ {"cat": {"dog": 2, "cow": 1}},
+ {"cat": {"dog": 2, "bow": 2}},
+ {"cat": {"dog": 3, "cow": 2}},
+ ]
+ },
+ 2,
+ id="sort5_indexed",
+ ),
+ pytest.param("3 * 3", {}, [9], id="arithmetic_number_only"),
+ pytest.param("$.foo * 10", {"foo": 4}, [40], id="arithmetic_mul1"),
+ pytest.param("10 * $.foo", {"foo": 4}, [40], id="arithmetic_mul2"),
+ pytest.param("$.foo * 10", {"foo": 4}, [40], id="arithmetic_mul3"),
+ pytest.param("$.foo * 3", {"foo": "f"}, ["fff"], id="arithmetic_mul4"),
+ pytest.param("foo * 3", {"foo": "f"}, ["foofoofoo"], id="arithmetic_mul5"),
+ pytest.param("($.foo * 10 * $.foo) + 2", {"foo": 4}, [162], id="arithmetic_mul6"),
+ pytest.param("$.foo * 10 * $.foo + 2", {"foo": 4}, [240], id="arithmetic_mul7"),
+ pytest.param(
+ "foo + bar", {"foo": "name", "bar": "node"}, ["foobar"], id="arithmetic_str0"
+ ),
+ pytest.param(
+ 'foo + "_" + bar',
+ {"foo": "name", "bar": "node"},
+ ["foo_bar"],
+ id="arithmetic_str1",
+ ),
+ pytest.param(
+ '$.foo + "_" + $.bar',
+ {"foo": "name", "bar": "node"},
+ ["name_node"],
+ id="arithmetic_str2",
+ ),
+ pytest.param(
+ "$.foo + $.bar",
+ {"foo": "name", "bar": "node"},
+ ["namenode"],
+ id="arithmetic_str3",
+ ),
+ pytest.param(
+ "foo.cow + bar.cow",
+ {"foo": {"cow": "name"}, "bar": {"cow": "node"}},
+ ["namenode"],
+ id="arithmetic_str4",
+ ),
+ pytest.param(
+ "$.objects[*].cow * 2",
+ {"objects": [{"cow": 1}, {"cow": 2}, {"cow": 3}]},
+ [2, 4, 6],
+ id="arithmetic_list1",
+ ),
+ pytest.param(
+ "$.objects[*].cow * $.objects[*].cow",
+ {"objects": [{"cow": 1}, {"cow": 2}, {"cow": 3}]},
+ [1, 4, 9],
+ id="arithmetic_list2",
+ ),
+ pytest.param(
+ "$.objects[*].cow * $.objects2[*].cow",
+ {"objects": [{"cow": 1}, {"cow": 2}, {"cow": 3}], "objects2": [{"cow": 5}]},
+ [],
+ id="arithmetic_list_err1",
+ ),
+ pytest.param('$.objects * "foo"', {"objects": []}, [], id="arithmetic_err1"),
+ pytest.param('"bar" * "foo"', {}, [], id="arithmetic_err2"),
+ pytest.param(
+ "payload.metrics[?(@.name='cpu.frequency')].value * 100",
+ {
+ "payload": {
+ "metrics": [
+ {
+ "timestamp": "2013-07-29T06:51:34.472416",
+ "name": "cpu.frequency",
+ "value": 1600,
+ "source": "libvirt.LibvirtDriver",
+ },
+ {
+ "timestamp": "2013-07-29T06:51:34.472416",
+ "name": "cpu.user.time",
+ "value": 17421440000000,
+ "source": "libvirt.LibvirtDriver",
+ },
]
- },
- target=['Bad']
- )),
- ('sort1', dict(string='objects[/cow]',
- data={'objects': [{'cat': 1, 'cow': 2},
- {'cat': 2, 'cow': 1},
- {'cat': 3, 'cow': 3}]},
- target=[[{'cat': 2, 'cow': 1},
- {'cat': 1, 'cow': 2},
- {'cat': 3, 'cow': 3}]])),
- ('sort1_indexed', dict(string='objects[/cow][0].cat',
- data={'objects': [{'cat': 1, 'cow': 2},
- {'cat': 2, 'cow': 1},
- {'cat': 3, 'cow': 3}]},
- target=2)),
- ('sort2', dict(string='objects[\\cat]',
- data={'objects': [{'cat': 2}, {'cat': 1}, {'cat': 3}]},
- target=[[{'cat': 3}, {'cat': 2}, {'cat': 1}]])),
- ('sort2_indexed', dict(string='objects[\\cat][-1].cat',
- data={'objects': [{'cat': 2}, {'cat': 1},
- {'cat': 3}]},
- target=1)),
- ('sort3', dict(string='objects[/cow,\\cat]',
- data={'objects': [{'cat': 1, 'cow': 2},
- {'cat': 2, 'cow': 1},
- {'cat': 3, 'cow': 1},
- {'cat': 3, 'cow': 3}]},
- target=[[{'cat': 3, 'cow': 1},
- {'cat': 2, 'cow': 1},
- {'cat': 1, 'cow': 2},
- {'cat': 3, 'cow': 3}]])),
- ('sort3_indexed', dict(string='objects[/cow,\\cat][0].cat',
- data={'objects': [{'cat': 1, 'cow': 2},
- {'cat': 2, 'cow': 1},
- {'cat': 3, 'cow': 1},
- {'cat': 3, 'cow': 3}]},
- target=3)),
- ('sort4', dict(string='objects[/cat.cow]',
- data={'objects': [{'cat': {'dog': 1, 'cow': 2}},
- {'cat': {'dog': 2, 'cow': 1}},
- {'cat': {'dog': 3, 'cow': 3}}]},
- target=[[{'cat': {'dog': 2, 'cow': 1}},
- {'cat': {'dog': 1, 'cow': 2}},
- {'cat': {'dog': 3, 'cow': 3}}]])),
- ('sort4_indexed', dict(string='objects[/cat.cow][0].cat.dog',
- data={'objects': [{'cat': {'dog': 1,
- 'cow': 2}},
- {'cat': {'dog': 2,
- 'cow': 1}},
- {'cat': {'dog': 3,
- 'cow': 3}}]},
- target=2)),
- ('sort5_twofields', dict(string='objects[/cat.(cow,bow)]',
- data={'objects':
- [{'cat': {'dog': 1, 'bow': 3}},
- {'cat': {'dog': 2, 'cow': 1}},
- {'cat': {'dog': 2, 'bow': 2}},
- {'cat': {'dog': 3, 'cow': 2}}]},
- target=[[{'cat': {'dog': 2, 'cow': 1}},
- {'cat': {'dog': 2, 'bow': 2}},
- {'cat': {'dog': 3, 'cow': 2}},
- {'cat': {'dog': 1, 'bow': 3}}]])),
-
- ('sort5_indexed', dict(string='objects[/cat.(cow,bow)][0].cat.dog',
- data={'objects':
- [{'cat': {'dog': 1, 'bow': 3}},
- {'cat': {'dog': 2, 'cow': 1}},
- {'cat': {'dog': 2, 'bow': 2}},
- {'cat': {'dog': 3, 'cow': 2}}]},
- target=2)),
- ('arithmetic_number_only', dict(string='3 * 3', data={},
- target=[9])),
-
- ('arithmetic_mul1', dict(string='$.foo * 10', data={'foo': 4},
- target=[40])),
- ('arithmetic_mul2', dict(string='10 * $.foo', data={'foo': 4},
- target=[40])),
- ('arithmetic_mul3', dict(string='$.foo * 10', data={'foo': 4},
- target=[40])),
- ('arithmetic_mul4', dict(string='$.foo * 3', data={'foo': 'f'},
- target=['fff'])),
- ('arithmetic_mul5', dict(string='foo * 3', data={'foo': 'f'},
- target=['foofoofoo'])),
- ('arithmetic_mul6', dict(string='($.foo * 10 * $.foo) + 2',
- data={'foo': 4}, target=[162])),
- ('arithmetic_mul7', dict(string='$.foo * 10 * $.foo + 2',
- data={'foo': 4}, target=[240])),
-
- ('arithmetic_str0', dict(string='foo + bar',
- data={'foo': 'name', "bar": "node"},
- target=["foobar"])),
- ('arithmetic_str1', dict(string='foo + "_" + bar',
- data={'foo': 'name', "bar": "node"},
- target=["foo_bar"])),
- ('arithmetic_str2', dict(string='$.foo + "_" + $.bar',
- data={'foo': 'name', "bar": "node"},
- target=["name_node"])),
- ('arithmetic_str3', dict(string='$.foo + $.bar',
- data={'foo': 'name', "bar": "node"},
- target=["namenode"])),
- ('arithmetic_str4', dict(string='foo.cow + bar.cow',
- data={'foo': {'cow': 'name'},
- "bar": {'cow': "node"}},
- target=["namenode"])),
-
- ('arithmetic_list1', dict(string='$.objects[*].cow * 2',
- data={'objects': [{'cow': 1},
- {'cow': 2},
- {'cow': 3}]},
- target=[2, 4, 6])),
-
- ('arithmetic_list2', dict(string='$.objects[*].cow * $.objects[*].cow',
- data={'objects': [{'cow': 1},
- {'cow': 2},
- {'cow': 3}]},
- target=[1, 4, 9])),
-
- ('arithmetic_list_err1', dict(
- string='$.objects[*].cow * $.objects2[*].cow',
- data={'objects': [{'cow': 1}, {'cow': 2}, {'cow': 3}],
- 'objects2': [{'cow': 5}]},
- target=[])),
-
- ('arithmetic_err1', dict(string='$.objects * "foo"',
- data={'objects': []}, target=[])),
- ('arithmetic_err2', dict(string='"bar" * "foo"', data={}, target=[])),
-
- ('real_life_example1', dict(
- string="payload.metrics[?(@.name='cpu.frequency')].value * 100",
- data={'payload': {'metrics': [
- {'timestamp': '2013-07-29T06:51:34.472416',
- 'name': 'cpu.frequency',
- 'value': 1600,
- 'source': 'libvirt.LibvirtDriver'},
- {'timestamp': '2013-07-29T06:51:34.472416',
- 'name': 'cpu.user.time',
- 'value': 17421440000000,
- 'source': 'libvirt.LibvirtDriver'}]}},
- target=[160000])),
-
- ('real_life_example2', dict(
- string="payload.(id|(resource.id))",
- data={'payload': {'id': 'foobar'}},
- target=['foobar'])),
- ('real_life_example3', dict(
- string="payload.id|(resource.id)",
- data={'payload': {'resource':
- {'id': 'foobar'}}},
- target=['foobar'])),
- ('real_life_example4', dict(
- string="payload.id|(resource.id)",
- data={'payload': {'id': 'yes',
- 'resource': {'id': 'foobar'}}},
- target=['yes', 'foobar'])),
-
- ('sub1', dict(
- string="payload.`sub(/(foo\\\\d+)\\\\+(\\\\d+bar)/, \\\\2-\\\\1)`",
- data={'payload': "foo5+3bar"},
- target=["3bar-foo5"]
- )),
- ('sub2', dict(
- string='payload.`sub(/foo\\\\+bar/, repl)`',
- data={'payload': "foo+bar"},
- target=["repl"]
- )),
- ('str1', dict(
- string='payload.`str()`',
- data={'payload': 1},
- target=["1"]
- )),
- ('split1', dict(
- string='payload.`split(-, 2, -1)`',
- data={'payload': "foo-bar-cat-bow"},
- target=["cat"]
- )),
- ('split2', dict(
- string='payload.`split(-, 2, 2)`',
- data={'payload': "foo-bar-cat-bow"},
- target=["cat-bow"]
- )),
-
- ('bug-#2-correct', dict(
- string='foo[?(@.baz==1)]',
- data={'foo': [{'baz': 1}, {'baz': 2}]},
- target=[{'baz': 1}],
- )),
-
- ('bug-#2-wrong', dict(
- string='foo[*][?(@.baz==1)]',
- data={'foo': [{'baz': 1}, {'baz': 2}]},
- target=[],
- )),
-
- ('boolean-filter-true', dict(
- string='foo[?flag = true].color',
- data={'foo': [{"color": "blue", "flag": True},
- {"color": "green", "flag": False}]},
- target=['blue']
- )),
-
- ('boolean-filter-false', dict(
- string='foo[?flag = false].color',
- data={'foo': [{"color": "blue", "flag": True},
- {"color": "green", "flag": False}]},
- target=['green']
- )),
-
- ('boolean-filter-other-datatypes-involved', dict(
- string='foo[?flag = true].color',
- data={'foo': [{"color": "blue", "flag": True},
- {"color": "green", "flag": 2},
- {"color": "red", "flag": "hi"}]},
- target=['blue']
- )),
-
- ('boolean-filter-string-true-string-literal', dict(
- string='foo[?flag = "true"].color',
- data={'foo': [{"color": "blue", "flag": True},
- {"color": "green", "flag": "true"}]},
- target=['green']
- )),
- ]
-
- def test_fields_value(self, string, data, target):
- jsonpath.auto_id_field = None
- result = parser.parse(string, debug=True).find(data)
- if isinstance(target, list):
- assert target == [r.value for r in result]
- elif isinstance(target, set):
- assert target == set([r.value for r in result])
- elif isinstance(target, (int, float)):
- assert target == result[0].value
- else:
- assert target == result[0].value
-
-# NOTE(sileht): copy of tests/test_jsonpath.py
-# to ensure we didn't break jsonpath_ng
-
-
-class TestJsonPath(base.BaseTestCase):
- """Tests of the actual jsonpath functionality """
-
- #
- # Check that the data value returned is good
- #
- def check_cases(self, test_cases):
- # Note that just manually building an AST would avoid this dep and
- # isolate the tests, but that would suck a bit
- # Also, we coerce iterables, etc, into the desired target type
-
- for string, data, target in test_cases:
- print('parse("%s").find(%s) =?= %s' % (string, data, target))
- result = parser.parse(string).find(data)
- if isinstance(target, list):
- assert [r.value for r in result] == target
- elif isinstance(target, set):
- assert set([r.value for r in result]) == target
- else:
- assert result.value == target
-
- def test_fields_value(self):
- jsonpath.auto_id_field = None
- self.check_cases([('foo', {'foo': 'baz'}, ['baz']),
- ('foo,baz', {'foo': 1, 'baz': 2}, [1, 2]),
- ('@foo', {'@foo': 1}, [1]),
- ('*', {'foo': 1, 'baz': 2}, set([1, 2]))])
-
- jsonpath.auto_id_field = 'id'
- self.check_cases([('*', {'foo': 1, 'baz': 2}, set([1, 2, '`this`']))])
-
- def test_root_value(self):
- jsonpath.auto_id_field = None
- self.check_cases([
- ('$', {'foo': 'baz'}, [{'foo': 'baz'}]),
- ('foo.$', {'foo': 'baz'}, [{'foo': 'baz'}]),
- ('foo.$.foo', {'foo': 'baz'}, ['baz']),
- ])
-
- def test_this_value(self):
- jsonpath.auto_id_field = None
- self.check_cases([
- ('`this`', {'foo': 'baz'}, [{'foo': 'baz'}]),
- ('foo.`this`', {'foo': 'baz'}, ['baz']),
- ('foo.`this`.baz', {'foo': {'baz': 3}}, [3]),
- ])
-
- def test_index_value(self):
- self.check_cases([
- ('[0]', [42], [42]),
- ('[5]', [42], []),
- ('[2]', [34, 65, 29, 59], [29])
- ])
-
- def test_slice_value(self):
- self.check_cases([('[*]', [1, 2, 3], [1, 2, 3]),
- ('[*]', range(1, 4), [1, 2, 3]),
- ('[1:]', [1, 2, 3, 4], [2, 3, 4]),
- ('[:2]', [1, 2, 3, 4], [1, 2])])
-
- # Funky slice hacks
- self.check_cases([
- ('[*]', 1, [1]), # This is a funky hack
- ('[0:]', 1, [1]), # This is a funky hack
- ('[*]', {'foo': 1}, [{'foo': 1}]), # This is a funky hack
- ('[*].foo', {'foo': 1}, [1]), # This is a funky hack
- ])
-
- def test_child_value(self):
- self.check_cases([('foo.baz', {'foo': {'baz': 3}}, [3]),
- ('foo.baz', {'foo': {'baz': [3]}}, [[3]]),
- ('foo.baz.bizzle', {'foo': {'baz': {'bizzle': 5}}},
- [5])])
-
- def test_descendants_value(self):
- self.check_cases([
- ('foo..baz', {'foo': {'baz': 1, 'bing': {'baz': 2}}}, [1, 2]),
- ('foo..baz', {'foo': [{'baz': 1}, {'baz': 2}]}, [1, 2]),
- ])
-
- def test_parent_value(self):
- self.check_cases([('foo.baz.`parent`', {'foo': {'baz': 3}},
- [{'baz': 3}]),
- ('foo.`parent`.foo.baz.`parent`.baz.bizzle',
- {'foo': {'baz': {'bizzle': 5}}}, [5])])
-
- def test_hyphen_key(self):
- # NOTE(sileht): hyphen is now a operator
- # so to use it has key we must escape it with quote
- # self.check_cases([('foo.bar-baz', {'foo': {'bar-baz': 3}}, [3]),
- # ('foo.[bar-baz,blah-blah]',
- # {'foo': {'bar-baz': 3, 'blah-blah': 5}},
- # [3, 5])])
- self.check_cases([('foo."bar-baz"', {'foo': {'bar-baz': 3}}, [3]),
- ('foo.["bar-baz","blah-blah"]',
- {'foo': {'bar-baz': 3, 'blah-blah': 5}},
- [3, 5])])
- # self.assertRaises(lexer.JsonPathLexerError, self.check_cases,
- # [('foo.-baz', {'foo': {'-baz': 8}}, [8])])
-
- #
- # Check that the paths for the data are correct.
- # FIXME: merge these tests with the above, since the inputs are the same
- # anyhow
- #
- def check_paths(self, test_cases):
- # Note that just manually building an AST would avoid this dep and
- # isolate the tests, but that would suck a bit
- # Also, we coerce iterables, etc, into the desired target type
-
- for string, data, target in test_cases:
- print('parse("%s").find(%s).paths =?= %s' % (string, data, target))
- result = parser.parse(string).find(data)
- if isinstance(target, list):
- assert [str(r.full_path) for r in result] == target
- elif isinstance(target, set):
- assert set([str(r.full_path) for r in result]) == target
- else:
- assert str(result.path) == target
-
- def test_fields_paths(self):
- jsonpath.auto_id_field = None
- self.check_paths([('foo', {'foo': 'baz'}, ['foo']),
- ('foo,baz', {'foo': 1, 'baz': 2}, ['foo', 'baz']),
- ('*', {'foo': 1, 'baz': 2}, set(['foo', 'baz']))])
-
- jsonpath.auto_id_field = 'id'
- self.check_paths([('*', {'foo': 1, 'baz': 2},
- set(['foo', 'baz', 'id']))])
-
- def test_root_paths(self):
- jsonpath.auto_id_field = None
- self.check_paths([
- ('$', {'foo': 'baz'}, ['$']),
- ('foo.$', {'foo': 'baz'}, ['$']),
- ('foo.$.foo', {'foo': 'baz'}, ['foo']),
- ])
-
- def test_this_paths(self):
- jsonpath.auto_id_field = None
- self.check_paths([
- ('`this`', {'foo': 'baz'}, ['`this`']),
- ('foo.`this`', {'foo': 'baz'}, ['foo']),
- ('foo.`this`.baz', {'foo': {'baz': 3}}, ['foo.baz']),
- ])
-
- def test_index_paths(self):
- self.check_paths([('[0]', [42], ['[0]']),
- ('[2]', [34, 65, 29, 59], ['[2]'])])
-
- def test_slice_paths(self):
- self.check_paths([('[*]', [1, 2, 3], ['[0]', '[1]', '[2]']),
- ('[1:]', [1, 2, 3, 4], ['[1]', '[2]', '[3]'])])
-
- def test_child_paths(self):
- self.check_paths([('foo.baz', {'foo': {'baz': 3}}, ['foo.baz']),
- ('foo.baz', {'foo': {'baz': [3]}}, ['foo.baz']),
- ('foo.baz.bizzle', {'foo': {'baz': {'bizzle': 5}}},
- ['foo.baz.bizzle'])])
-
- def test_descendants_paths(self):
- self.check_paths([('foo..baz', {'foo': {'baz': 1, 'bing': {'baz': 2}}},
- ['foo.baz', 'foo.bing.baz'])])
-
- #
- # Check the "auto_id_field" feature
- #
- def test_fields_auto_id(self):
- jsonpath.auto_id_field = "id"
- self.check_cases([('foo.id', {'foo': 'baz'}, ['foo']),
- ('foo.id', {'foo': {'id': 'baz'}}, ['baz']),
- ('foo,baz.id', {'foo': 1, 'baz': 2}, ['foo', 'baz']),
- ('*.id',
- {'foo': {'id': 1},
- 'baz': 2},
- set(['1', 'baz']))])
-
- def test_root_auto_id(self):
- jsonpath.auto_id_field = 'id'
- self.check_cases([
- ('$.id', {'foo': 'baz'}, ['$']), # This is a wonky case that is
- # not that interesting
- ('foo.$.id', {'foo': 'baz', 'id': 'bizzle'}, ['bizzle']),
- ('foo.$.baz.id', {'foo': 4, 'baz': 3}, ['baz']),
- ])
-
- def test_this_auto_id(self):
- jsonpath.auto_id_field = 'id'
- self.check_cases([
- ('id', {'foo': 'baz'}, ['`this`']), # This is, again, a wonky case
- # that is not that interesting
- ('foo.`this`.id', {'foo': 'baz'}, ['foo']),
- ('foo.`this`.baz.id', {'foo': {'baz': 3}}, ['foo.baz']),
- ])
-
- def test_index_auto_id(self):
- jsonpath.auto_id_field = "id"
- self.check_cases([('[0].id', [42], ['[0]']),
- ('[2].id', [34, 65, 29, 59], ['[2]'])])
-
- def test_slice_auto_id(self):
- jsonpath.auto_id_field = "id"
- self.check_cases([('[*].id', [1, 2, 3], ['[0]', '[1]', '[2]']),
- ('[1:].id', [1, 2, 3, 4], ['[1]', '[2]', '[3]'])])
-
- def test_child_auto_id(self):
- jsonpath.auto_id_field = "id"
- self.check_cases([('foo.baz.id', {'foo': {'baz': 3}}, ['foo.baz']),
- ('foo.baz.id', {'foo': {'baz': [3]}}, ['foo.baz']),
- ('foo.baz.id', {'foo': {'id': 'bizzle', 'baz': 3}},
- ['bizzle.baz']),
- ('foo.baz.id', {'foo': {'baz': {'id': 'hi'}}},
- ['foo.hi']),
- ('foo.baz.bizzle.id',
- {'foo': {'baz': {'bizzle': 5}}},
- ['foo.baz.bizzle'])])
-
- def test_descendants_auto_id(self):
- jsonpath.auto_id_field = "id"
- self.check_cases([('foo..baz.id',
- {'foo': {
- 'baz': 1,
- 'bing': {
- 'baz': 2
- }
- }},
- ['foo.baz',
- 'foo.bing.baz'])])
+ }
+ },
+ [160000],
+ id="real_life_example1",
+ ),
+ pytest.param(
+ "payload.(id|(resource.id))",
+ {"payload": {"id": "foobar"}},
+ ["foobar"],
+ id="real_life_example2",
+ ),
+ pytest.param(
+ "payload.id|(resource.id)",
+ {"payload": {"resource": {"id": "foobar"}}},
+ ["foobar"],
+ id="real_life_example3",
+ ),
+ pytest.param(
+ "payload.id|(resource.id)",
+ {"payload": {"id": "yes", "resource": {"id": "foobar"}}},
+ ["yes", "foobar"],
+ id="real_life_example4",
+ ),
+ pytest.param(
+ "payload.`sub(/(foo\\\\d+)\\\\+(\\\\d+bar)/, \\\\2-\\\\1)`",
+ {"payload": "foo5+3bar"},
+ ["3bar-foo5"],
+ id="sub1",
+ ),
+ pytest.param(
+ "payload.`sub(/foo\\\\+bar/, repl)`",
+ {"payload": "foo+bar"},
+ ["repl"],
+ id="sub2",
+ ),
+ pytest.param("payload.`str()`", {"payload": 1}, ["1"], id="str1"),
+ pytest.param(
+ "payload.`split(-, 2, -1)`",
+ {"payload": "foo-bar-cat-bow"},
+ ["cat"],
+ id="split1",
+ ),
+ pytest.param(
+ "payload.`split(-, 2, 2)`",
+ {"payload": "foo-bar-cat-bow"},
+ ["cat-bow"],
+ id="split2",
+ ),
+ pytest.param(
+ "foo[?(@.baz==1)]",
+ {"foo": [{"baz": 1}, {"baz": 2}]},
+ [{"baz": 1}],
+ id="bug-#2-correct",
+ ),
+ pytest.param(
+ "foo[*][?(@.baz==1)]", {"foo": [{"baz": 1}, {"baz": 2}]}, [], id="bug-#2-wrong"
+ ),
+ pytest.param(
+ "foo[?flag = true].color",
+ {
+ "foo": [
+ {"color": "blue", "flag": True},
+ {"color": "green", "flag": False},
+ ]
+ },
+ ["blue"],
+ id="boolean-filter-true",
+ ),
+ pytest.param(
+ "foo[?flag = false].color",
+ {
+ "foo": [
+ {"color": "blue", "flag": True},
+ {"color": "green", "flag": False},
+ ]
+ },
+ ["green"],
+ id="boolean-filter-false",
+ ),
+ pytest.param(
+ "foo[?flag = true].color",
+ {
+ "foo": [
+ {"color": "blue", "flag": True},
+ {"color": "green", "flag": 2},
+ {"color": "red", "flag": "hi"},
+ ]
+ },
+ ["blue"],
+ id="boolean-filter-other-datatypes-involved",
+ ),
+ pytest.param(
+ 'foo[?flag = "true"].color',
+ {
+ "foo": [
+ {"color": "blue", "flag": True},
+ {"color": "green", "flag": "true"},
+ ]
+ },
+ ["green"],
+ id="boolean-filter-string-true-string-literal",
+ ),
+)
+
+
+@pytest.mark.parametrize("path, data, expected_values", test_cases)
+def test_values(path, data, expected_values):
+ results = parser.parse(path).find(data)
+ assert_value_equality(results, expected_values)
+
+
+def test_invalid_hyphenation_in_key():
+ # This test is almost copied-and-pasted directly from `test_jsonpath.py`.
+ # However, the parsers generate different exceptions for this syntax error.
+ # This discrepancy needs to be resolved.
+ with pytest.raises(JsonPathParserError):
+ parser.parse("foo.-baz")
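[Editor's note: for readers skimming the sort cases above, the ext syntax `objects[/field]` sorts ascending by that key and `objects[\field]` sorts descending. A plain-Python sketch of what the `sort1` and `sort2` cases assert — this mimics the behavior with `sorted` for illustration and is not jsonpath-ng's implementation:]

```python
# Plain-Python mimic of the ext sort semantics asserted by the tests above.
# "objects[/field]" sorts ascending by that key; "objects[\field]" descending.

def sort_ascending(objects, field):
    # corresponds to the "[/field]" extension
    return sorted(objects, key=lambda obj: obj[field])

def sort_descending(objects, field):
    # corresponds to the "[\field]" extension
    return sorted(objects, key=lambda obj: obj[field], reverse=True)

data = [{"cat": 1, "cow": 2}, {"cat": 2, "cow": 1}, {"cat": 3, "cow": 3}]

# Same expected ordering as the "sort1" case: ascending by "cow".
assert sort_ascending(data, "cow") == [
    {"cat": 2, "cow": 1},
    {"cat": 1, "cow": 2},
    {"cat": 3, "cow": 3},
]

# Same expected ordering as the "sort2" case: descending by "cat".
assert sort_descending([{"cat": 2}, {"cat": 1}, {"cat": 3}], "cat") == [
    {"cat": 3},
    {"cat": 2},
    {"cat": 1},
]
```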
=====================================
tests/test_lexer.py
=====================================
@@ -1,69 +1,55 @@
-from __future__ import unicode_literals, print_function, absolute_import, division, generators, nested_scopes
-import logging
-import unittest
-
-from ply.lex import LexToken
+import pytest
from jsonpath_ng.lexer import JsonPathLexer, JsonPathLexerError
-class TestLexer(unittest.TestCase):
-
- def token(self, value, ty=None):
- t = LexToken()
- t.type = ty if ty != None else value
- t.value = value
- t.lineno = -1
- t.lexpos = -1
- return t
-
- def assert_lex_equiv(self, s, stream2):
- # NOTE: lexer fails to reset after call?
- l = JsonPathLexer(debug=True)
- stream1 = list(l.tokenize(s)) # Save the stream for debug output when a test fails
- stream2 = list(stream2)
- assert len(stream1) == len(stream2)
- for token1, token2 in zip(stream1, stream2):
- print(token1, token2)
- assert token1.type == token2.type
- assert token1.value == token2.value
-
- @classmethod
- def setup_class(cls):
- logging.basicConfig()
-
- def test_simple_inputs(self):
- self.assert_lex_equiv('$', [self.token('$', '$')])
- self.assert_lex_equiv('"hello"', [self.token('hello', 'ID')])
- self.assert_lex_equiv("'goodbye'", [self.token('goodbye', 'ID')])
- self.assert_lex_equiv("'doublequote\"'", [self.token('doublequote"', 'ID')])
- self.assert_lex_equiv(r'"doublequote\""', [self.token('doublequote"', 'ID')])
- self.assert_lex_equiv(r"'singlequote\''", [self.token("singlequote'", 'ID')])
- self.assert_lex_equiv('"singlequote\'"', [self.token("singlequote'", 'ID')])
- self.assert_lex_equiv('fuzz', [self.token('fuzz', 'ID')])
- self.assert_lex_equiv('1', [self.token(1, 'NUMBER')])
- self.assert_lex_equiv('45', [self.token(45, 'NUMBER')])
- self.assert_lex_equiv('-1', [self.token(-1, 'NUMBER')])
- self.assert_lex_equiv(' -13 ', [self.token(-13, 'NUMBER')])
- self.assert_lex_equiv('"fuzz.bang"', [self.token('fuzz.bang', 'ID')])
- self.assert_lex_equiv('fuzz.bang', [self.token('fuzz', 'ID'), self.token('.', '.'), self.token('bang', 'ID')])
- self.assert_lex_equiv('fuzz.*', [self.token('fuzz', 'ID'), self.token('.', '.'), self.token('*', '*')])
- self.assert_lex_equiv('fuzz..bang', [self.token('fuzz', 'ID'), self.token('..', 'DOUBLEDOT'), self.token('bang', 'ID')])
- self.assert_lex_equiv('&', [self.token('&', '&')])
- self.assert_lex_equiv('@', [self.token('@', 'ID')])
- self.assert_lex_equiv('`this`', [self.token('this', 'NAMED_OPERATOR')])
- self.assert_lex_equiv('|', [self.token('|', '|')])
- self.assert_lex_equiv('where', [self.token('where', 'WHERE')])
-
- def test_basic_errors(self):
- def tokenize(s):
- l = JsonPathLexer(debug=True)
- return list(l.tokenize(s))
-
- self.assertRaises(JsonPathLexerError, tokenize, "'\"")
- self.assertRaises(JsonPathLexerError, tokenize, '"\'')
- self.assertRaises(JsonPathLexerError, tokenize, '`"')
- self.assertRaises(JsonPathLexerError, tokenize, "`'")
- self.assertRaises(JsonPathLexerError, tokenize, '"`')
- self.assertRaises(JsonPathLexerError, tokenize, "'`")
- self.assertRaises(JsonPathLexerError, tokenize, '?')
- self.assertRaises(JsonPathLexerError, tokenize, '$.foo.bar.#')
+token_test_cases = (
+ ("$", (("$", "$"),)),
+ ('"hello"', (("hello", "ID"),)),
+ ("'goodbye'", (("goodbye", "ID"),)),
+ ("'doublequote\"'", (('doublequote"', "ID"),)),
+ (r'"doublequote\""', (('doublequote"', "ID"),)),
+ (r"'singlequote\''", (("singlequote'", "ID"),)),
+ ('"singlequote\'"', (("singlequote'", "ID"),)),
+ ("fuzz", (("fuzz", "ID"),)),
+ ("1", ((1, "NUMBER"),)),
+ ("45", ((45, "NUMBER"),)),
+ ("-1", ((-1, "NUMBER"),)),
+ (" -13 ", ((-13, "NUMBER"),)),
+ ('"fuzz.bang"', (("fuzz.bang", "ID"),)),
+ ("fuzz.bang", (("fuzz", "ID"), (".", "."), ("bang", "ID"))),
+ ("fuzz.*", (("fuzz", "ID"), (".", "."), ("*", "*"))),
+ ("fuzz..bang", (("fuzz", "ID"), ("..", "DOUBLEDOT"), ("bang", "ID"))),
+ ("&", (("&", "&"),)),
+ ("@", (("@", "ID"),)),
+ ("`this`", (("this", "NAMED_OPERATOR"),)),
+ ("|", (("|", "|"),)),
+ ("where", (("where", "WHERE"),)),
+)
+
+
+@pytest.mark.parametrize("string, expected_token_info", token_test_cases)
+def test_lexer(string, expected_token_info):
+ lexer = JsonPathLexer(debug=True)
+ tokens = list(lexer.tokenize(string))
+ assert len(tokens) == len(expected_token_info)
+ for token, (expected_value, expected_type) in zip(tokens, expected_token_info):
+ assert token.type == expected_type
+ assert token.value == expected_value
+
+
+invalid_token_test_cases = (
+ "'\"",
+ "\"'",
+ '`"',
+ "`'",
+ '"`',
+ "'`",
+ "?",
+ "$.foo.bar.#",
+)
+
+
+@pytest.mark.parametrize("string", invalid_token_test_cases)
+def test_lexer_errors(string):
+ with pytest.raises(JsonPathLexerError):
+ list(JsonPathLexer().tokenize(string))
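[Editor's note: the rewrite above replaces hand-built `LexToken` objects with plain `(value, type)` tuples as the expected data. A minimal sketch of that comparison pattern, using a hypothetical named-tuple stand-in rather than real ply tokens:]

```python
from collections import namedtuple

# Hypothetical stand-in for a ply LexToken, used only to illustrate the
# comparison pattern: expected tokens are plain (value, type) tuples
# instead of manually constructed token objects.
Token = namedtuple("Token", ["value", "type"])

def assert_tokens_match(tokens, expected_token_info):
    # Mirrors the loop in the rewritten test_lexer above.
    assert len(tokens) == len(expected_token_info)
    for token, (expected_value, expected_type) in zip(tokens, expected_token_info):
        assert token.type == expected_type
        assert token.value == expected_value

tokens = [Token("fuzz", "ID"), Token(".", "."), Token("bang", "ID")]
assert_tokens_match(tokens, (("fuzz", "ID"), (".", "."), ("bang", "ID")))
```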
=====================================
tests/test_parser.py
=====================================
@@ -1,40 +1,38 @@
-from __future__ import unicode_literals, print_function, absolute_import, division, generators, nested_scopes
-import unittest
+import pytest
+from jsonpath_ng.jsonpath import Child, Descendants, Fields, Index, Slice, Where
from jsonpath_ng.lexer import JsonPathLexer
from jsonpath_ng.parser import JsonPathParser
-from jsonpath_ng.jsonpath import *
-class TestParser(unittest.TestCase):
- # TODO: This will be much more effective with a few regression tests and `arbitrary` parse . pretty testing
+# Format: (string, expected_object)
+parser_test_cases = (
+ #
+ # Atomic
+ # ------
+ #
+ ("foo", Fields("foo")),
+ ("*", Fields("*")),
+ ("baz,bizzle", Fields("baz", "bizzle")),
+ ("[1]", Index(1)),
+ ("[1:]", Slice(start=1)),
+ ("[:]", Slice()),
+ ("[*]", Slice()),
+ ("[:2]", Slice(end=2)),
+ ("[1:2]", Slice(start=1, end=2)),
+ ("[5:-2]", Slice(start=5, end=-2)),
+ #
+ # Nested
+ # ------
+ #
+ ("foo.baz", Child(Fields("foo"), Fields("baz"))),
+ ("foo.baz,bizzle", Child(Fields("foo"), Fields("baz", "bizzle"))),
+ ("foo where baz", Where(Fields("foo"), Fields("baz"))),
+ ("foo..baz", Descendants(Fields("foo"), Fields("baz"))),
+ ("foo..baz.bing", Descendants(Fields("foo"), Child(Fields("baz"), Fields("bing")))),
+)
- @classmethod
- def setup_class(cls):
- logging.basicConfig()
- def check_parse_cases(self, test_cases):
- parser = JsonPathParser(debug=True, lexer_class=lambda:JsonPathLexer(debug=False)) # Note that just manually passing token streams avoids this dep, but that sucks
-
- for string, parsed in test_cases:
- print(string, '=?=', parsed) # pytest captures this and we see it only on a failure, for debugging
- assert parser.parse(string) == parsed
-
- def test_atomic(self):
- self.check_parse_cases([('foo', Fields('foo')),
- ('*', Fields('*')),
- ('baz,bizzle', Fields('baz','bizzle')),
- ('[1]', Index(1)),
- ('[1:]', Slice(start=1)),
- ('[:]', Slice()),
- ('[*]', Slice()),
- ('[:2]', Slice(end=2)),
- ('[1:2]', Slice(start=1, end=2)),
- ('[5:-2]', Slice(start=5, end=-2))
- ])
-
- def test_nested(self):
- self.check_parse_cases([('foo.baz', Child(Fields('foo'), Fields('baz'))),
- ('foo.baz,bizzle', Child(Fields('foo'), Fields('baz', 'bizzle'))),
- ('foo where baz', Where(Fields('foo'), Fields('baz'))),
- ('foo..baz', Descendants(Fields('foo'), Fields('baz'))),
- ('foo..baz.bing', Descendants(Fields('foo'), Child(Fields('baz'), Fields('bing'))))])
+@pytest.mark.parametrize("string, expected_object", parser_test_cases)
+def test_parser(string, expected_object):
+ parser = JsonPathParser(lexer_class=lambda: JsonPathLexer())
+ assert parser.parse(string) == expected_object
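[Editor's note: the parametrized parser test above leans on value equality of AST nodes (`parser.parse("foo") == Fields("foo")`). A toy sketch of why that style keeps the cases concise — the dataclasses and the dotted-path-only parser here are stand-ins for illustration, not jsonpath-ng's actual classes:]

```python
from dataclasses import dataclass

# Minimal AST-node stand-ins with value equality (dataclasses generate
# __eq__), mirroring the style the parametrized test relies on.
@dataclass(frozen=True)
class Fields:
    name: str

@dataclass(frozen=True)
class Child:
    left: object
    right: object

def toy_parse(path):
    # Toy parser handling only dotted "a.b.c" paths, for illustration.
    parts = path.split(".")
    node = Fields(parts[0])
    for part in parts[1:]:
        node = Child(node, Fields(part))
    return node

# Expected objects can be written literally, as in parser_test_cases above.
assert toy_parse("foo") == Fields("foo")
assert toy_parse("foo.baz") == Child(Fields("foo"), Fields("baz"))
```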
View it on GitLab: https://salsa.debian.org/debian-gis-team/jsonpath-ng/-/commit/5937e530baa299e3e171b67bc521456c0b4ddad8