[Debichem-devel] Bug#1069219: pymatgen ftbfs with Python 3.12
Vladimir Petko
vpa1977@gmail.com
Fri Jul 12 03:58:28 BST 2024
Package: pymatgen
Version: 2024.1.27+dfsg1-7
Followup-For: Bug #1069219
User: ubuntu-devel@lists.ubuntu.com
Usertags: origin-ubuntu oracular ubuntu-patch
Control: tags -1 patch
Dear Maintainer,
In the absence of a new upstream release of the package, would it be possible
to consider applying a Python 3.12 compatibility patch in the meantime?
In Ubuntu, the attached patch was applied to achieve the following:
* d/p/python312.patch: cherry-pick the upstream patch to resolve the Python
3.12 FTBFS (LP: #2067725).
Thanks for considering the patch.
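
For context, the core of the upstream fix drops Lobsterin's UserDict
inheritance in favour of a plain dict subclass whose __getitem__ and
__contains__ normalize keys case-insensitively (see the lobster/inputs.py
hunks in the attached patch). A minimal standalone sketch of that pattern
(toy names of my own, not pymatgen's API):

    class CaseInsensitiveDict(dict):
        """Toy model of the key normalization the patch gives Lobsterin."""

        def _normalize(self, key):
            # Map the requested key onto an existing key, ignoring case.
            return next((k for k in self if key.strip().lower() == k.lower()), key)

        def __getitem__(self, key):
            return super().__getitem__(self._normalize(key))

        def __contains__(self, key):
            return super().__contains__(self._normalize(key))

    d = CaseInsensitiveDict(cohpstartenergy=-15.0)
    assert d["COHPstartEnergy"] == -15.0   # lookup ignores case
    assert "COHPSTARTENERGY" in d          # __contains__ matches too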
-- System Information:
Debian Release: trixie/sid
APT prefers noble-updates
APT policy: (500, 'noble-updates'), (500, 'noble-security'), (500, 'noble'), (100, 'noble-backports')
Architecture: amd64 (x86_64)
Foreign Architectures: i386
Kernel: Linux 6.8.0-36-generic (SMP w/32 CPU threads; PREEMPT)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE=en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled
-------------- next part --------------
diff -Nru pymatgen-2024.1.27+dfsg1/debian/patches/python312.patch pymatgen-2024.1.27+dfsg1/debian/patches/python312.patch
--- pymatgen-2024.1.27+dfsg1/debian/patches/python312.patch 1970-01-01 12:00:00.000000000 +1200
+++ pymatgen-2024.1.27+dfsg1/debian/patches/python312.patch 2024-07-10 15:59:41.000000000 +1200
@@ -0,0 +1,749 @@
+Description: Officially support Python 3.12 and test in CI (#3685)
+ * add python 3.12 to officially supported versions and test it in CI
+ * down pin chgnet>=0.3.0
+ * fix weird typo nrafo_ew_tstructs
+ * don't depend on tblite above 3.11 since unsupported
+ https://github.com/tblite/tblite/issues/175
+ * improve TestVasprun.test_standard
+ * drop Lobsterin inheritance from UserDict, use simple dict instead and
+ modify __getitem__ to get the salient __getitem__ behavior from UserDict
+ * try DotDict as super class for Lobsterin
+ * override Lobsterin.__contains__ to fix on py312
+Author: Janosh Riebesell <janosh.riebesell@gmail.com>
+Bug: https://github.com/materialsproject/pymatgen/pull/3685
+Bug-Ubuntu: https://bugs.launchpad.net/ubuntu/+source/pymatgen/+bug/2067725
+Origin: upstream, https://github.com/materialsproject/pymatgen/pull/3685
+Last-Update: 2024-07-11
+
+--- a/.github/workflows/release.yml
++++ b/.github/workflows/release.yml
+@@ -96,7 +96,7 @@
+ - uses: actions/setup-python@v5
+ name: Install Python
+ with:
+- python-version: "3.11"
++ python-version: "3.12"
+
+ - run: |
+ python -m pip install build
+@@ -137,10 +137,10 @@
+ # For pypi trusted publishing
+ id-token: write
+ steps:
+- - name: Set up Python 3.11
++ - name: Set up Python
+ uses: actions/setup-python@v5
+ with:
+- python-version: 3.11
++ python-version: "3.12"
+
+ - name: Get build artifacts
+ uses: actions/download-artifact@v3
+--- a/.github/workflows/test.yml
++++ b/.github/workflows/test.yml
+@@ -29,16 +29,16 @@
+ matrix:
+ # pytest-split automatically distributes work load so parallel jobs finish in similar time
+ os: [ubuntu-latest, windows-latest]
+- python-version: ["3.9", "3.11"]
++ python-version: ["3.9", "3.12"]
+ split: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
+ # include/exclude is meant to maximize CI coverage of different platforms and python
+ # versions while minimizing the total number of jobs. We run all pytest splits with the
+ # oldest supported python version (currently 3.9) on windows (seems most likely to surface
+- # errors) and with newest version (currently 3.11) on ubuntu (to get complete and speedy
++ # errors) and with newest version (currently 3.12) on ubuntu (to get complete and speedy
+ # coverage on unix). We ignore mac-os, which is assumed to be similar to ubuntu.
+ exclude:
+ - os: windows-latest
+- python-version: "3.11"
++ python-version: "3.12"
+ - os: ubuntu-latest
+ python-version: "3.9"
+
+--- a/dev_scripts/chemenv/get_plane_permutations_optimized.py
++++ b/dev_scripts/chemenv/get_plane_permutations_optimized.py
+@@ -209,7 +209,7 @@
+ # Definition of the facets
+ all_planes_point_indices = [algo.plane_points]
+ if algo.other_plane_points is not None:
+- all_planes_point_indices.extend(algo.other_plane_points)
++ all_planes_point_indices += algo.other_plane_points
+
+ # Loop on the facets
+ explicit_permutations_per_plane = []
+@@ -305,7 +305,7 @@
+ # Definition of the facets
+ all_planes_point_indices = [algo.plane_points]
+ if algo.other_plane_points is not None:
+- all_planes_point_indices.extend(algo.other_plane_points)
++ all_planes_point_indices += algo.other_plane_points
+
+ # Setup of the permutations to be used for this algorithm
+
+--- a/dev_scripts/chemenv/plane_multiplicity.py
++++ b/dev_scripts/chemenv/plane_multiplicity.py
+@@ -12,11 +12,11 @@
+ __date__ = "Feb 20, 2016"
+
+ if __name__ == "__main__":
+- allcg = AllCoordinationGeometries()
++ all_cg = AllCoordinationGeometries()
+
+ cg_symbol = "I:12"
+ all_plane_points = []
+- cg = allcg[cg_symbol]
++ cg = all_cg[cg_symbol]
+
+ # I:12
+ if cg_symbol == "I:12":
+@@ -25,7 +25,7 @@
+ for edge in edges:
+ opposite_edge = [opposite_points[edge[0]], opposite_points[edge[1]]]
+ equiv_plane = list(edge)
+- equiv_plane.extend(opposite_edge)
++ equiv_plane += opposite_edge
+ equiv_plane.sort()
+ all_plane_points.append(tuple(equiv_plane))
+ all_plane_points = [tuple(equiv_plane) for equiv_plane in set(all_plane_points)]
+--- a/pymatgen/alchemy/materials.py
++++ b/pymatgen/alchemy/materials.py
+@@ -214,7 +214,7 @@
+ for hist in self.history:
+ hist.pop("input_structure", None)
+ output.append(str(hist))
+- output.extend(("\nOther parameters", "------------", str(self.other_parameters)))
++ output += ("\nOther parameters", "------------", str(self.other_parameters))
+ return "\n".join(output)
+
+ def set_parameter(self, key: str, value: Any) -> None:
+--- a/pymatgen/alchemy/transmuters.py
++++ b/pymatgen/alchemy/transmuters.py
+@@ -20,6 +20,8 @@
+ if TYPE_CHECKING:
+ from collections.abc import Sequence
+
++ from pymatgen.alchemy.filters import AbstractStructureFilter
++
+ __author__ = "Shyue Ping Ong, Will Richards"
+ __copyright__ = "Copyright 2012, The Materials Project"
+ __version__ = "0.1"
+@@ -38,7 +40,7 @@
+
+ def __init__(
+ self,
+- transformed_structures,
++ transformed_structures: list[TransformedStructure],
+ transformations=None,
+ extend_collection: int = 0,
+ ncores: int | None = None,
+@@ -122,8 +124,8 @@
+ for x in self.transformed_structures:
+ new = x.append_transformation(transformation, extend_collection, clear_redo=clear_redo)
+ if new is not None:
+- new_structures.extend(new)
+- self.transformed_structures.extend(new_structures)
++ new_structures += new
++ self.transformed_structures += new_structures
+
+ def extend_transformations(self, transformations):
+ """Extends a sequence of transformations to the TransformedStructure.
+@@ -134,7 +136,7 @@
+ for trafo in transformations:
+ self.append_transformation(trafo)
+
+- def apply_filter(self, structure_filter):
++ def apply_filter(self, structure_filter: AbstractStructureFilter):
+ """Applies a structure_filter to the list of TransformedStructures
+ in the transmuter.
+
+@@ -142,10 +144,9 @@
+ structure_filter: StructureFilter to apply.
+ """
+
+- def test_transformed_structure(ts):
+- return structure_filter.test(ts.final_structure)
++ self.transformed_structures = list(
++ filter(lambda ts: structure_filter.test(ts.final_structure), self.transformed_structures))
+
+- self.transformed_structures = list(filter(test_transformed_structure, self.transformed_structures))
+ for ts in self.transformed_structures:
+ ts.append_filter(structure_filter)
+
+@@ -166,8 +167,8 @@
+ key: The key for the parameter.
+ value: The value for the parameter.
+ """
+- for x in self.transformed_structures:
+- x.other_parameters[key] = value
++ for struct in self.transformed_structures:
++ struct.other_parameters[key] = value
+
+ def add_tags(self, tags):
+ """Add tags for the structures generated by the transmuter.
+@@ -194,11 +195,11 @@
+ transmuter.
+ """
+ if isinstance(trafo_structs_or_transmuter, self.__class__):
+- self.transformed_structures.extend(trafo_structs_or_transmuter.transformed_structures)
++ self.transformed_structures += trafo_structs_or_transmuter.transformed_structures
+ else:
+ for ts in trafo_structs_or_transmuter:
+ assert isinstance(ts, TransformedStructure)
+- self.transformed_structures.extend(trafo_structs_or_transmuter)
++ self.transformed_structures += trafo_structs_or_transmuter
+
+ @classmethod
+ def from_structures(cls, structures, transformations=None, extend_collection=0):
+@@ -217,8 +218,8 @@
+ Returns:
+ StandardTransmuter
+ """
+- trafo_struct = [TransformedStructure(s, []) for s in structures]
+- return cls(trafo_struct, transformations, extend_collection)
++ t_struct = [TransformedStructure(s, []) for s in structures]
++ return cls(t_struct, transformations, extend_collection)
+
+
+ class CifTransmuter(StandardTransmuter):
+@@ -251,8 +252,8 @@
+ if read_data:
+ structure_data[-1].append(line)
+ for data in structure_data:
+- trafo_struct = TransformedStructure.from_cif_str("\n".join(data), [], primitive)
+- transformed_structures.append(trafo_struct)
++ t_struct = TransformedStructure.from_cif_str("\n".join(data), [], primitive)
++ transformed_structures.append(t_struct)
+ super().__init__(transformed_structures, transformations, extend_collection)
+
+ @classmethod
+@@ -291,8 +292,8 @@
+ extend_collection: Whether to use more than one output structure
+ from one-to-many transformations.
+ """
+- trafo_struct = TransformedStructure.from_poscar_str(poscar_string, [])
+- super().__init__([trafo_struct], transformations, extend_collection=extend_collection)
++ t_struct = TransformedStructure.from_poscar_str(poscar_string, [])
++ super().__init__([t_struct], transformations, extend_collection=extend_collection)
+
+ @staticmethod
+ def from_filenames(poscar_filenames, transformations=None, extend_collection=False):
+@@ -371,7 +372,7 @@
+ """
+ ts, transformation, extend_collection, clear_redo = inputs
+ new = ts.append_transformation(transformation, extend_collection, clear_redo=clear_redo)
+- o = [ts]
++ out = [ts]
+ if new:
+- o.extend(new)
+- return o
++ out += new
++ return out
+--- a/pymatgen/analysis/structure_prediction/substitutor.py
++++ b/pymatgen/analysis/structure_prediction/substitutor.py
+@@ -116,10 +116,10 @@
+ raise ValueError("the species in target_species are not allowed for the probability model you are using")
+
+ for permutation in itertools.permutations(target_species):
+- for s in structures_list:
++ for dct in structures_list:
+ # check if: species are in the domain,
+ # and the probability of subst. is above the threshold
+- els = s["structure"].elements
++ els = dct["structure"].elements
+ if (
+ len(els) == len(permutation)
+ and len(set(els) & set(self.get_allowed_species())) == len(els)
+@@ -132,18 +132,18 @@
+
+ transf = SubstitutionTransformation(clean_subst)
+
+- if Substitutor._is_charge_balanced(transf.apply_transformation(s["structure"])):
+- ts = TransformedStructure(
+- s["structure"],
++ if Substitutor._is_charge_balanced(transf.apply_transformation(dct["structure"])):
++ t_struct = TransformedStructure(
++ dct["structure"],
+ [transf],
+- history=[{"source": s["id"]}],
++ history=[{"source": dct["id"]}],
+ other_parameters={
+ "type": "structure_prediction",
+ "proba": self._sp.cond_prob_list(permutation, els),
+ },
+ )
+- result.append(ts)
+- transmuter.append_transformed_structures([ts])
++ result.append(t_struct)
++ transmuter.append_transformed_structures([t_struct])
+
+ if remove_duplicates:
+ transmuter.apply_filter(RemoveDuplicatesFilter(symprec=self._symprec))
+--- a/pymatgen/io/lobster/inputs.py
++++ b/pymatgen/io/lobster/inputs.py
+@@ -132,7 +132,7 @@
+ raise OSError("There are duplicates for the keywords! The program will stop here.")
+ self.update(settingsdict)
+
+- def __setitem__(self, key, val):
++ def __setitem__(self, key, val) -> None:
+ """
+ Add parameter-val pair to Lobsterin. Warns if parameter is not in list of
+ valid lobsterin tags. Also cleans the parameter and val by stripping
+@@ -151,17 +151,25 @@
+
+ super().__setitem__(new_key, val.strip() if isinstance(val, str) else val)
+
+- def __getitem__(self, item):
++ def __getitem__(self, key) -> Any:
+ """Implements getitem from dict to avoid problems with cases."""
+- found = False
+- for key_here in self:
+- if item.strip().lower() == key_here.lower():
+- new_key = key_here
+- found = True
+- if not found:
+- new_key = item
++ normalized_key = next((k for k in self if key.strip().lower() == k.lower()), key)
++
++ key_is_unknown = normalized_key.lower() not in map(str.lower, Lobsterin.AVAILABLE_KEYWORDS)
++ if key_is_unknown or normalized_key not in self.data:
++ raise KeyError(f"{key=} is not available")
+
+- return super().__getitem__(new_key)
++ return self.data[normalized_key]
++
++ def __contains__(self, key) -> bool:
++ """Implements getitem from dict to avoid problems with cases."""
++ normalized_key = next((k for k in self if key.strip().lower() == k.lower()), key)
++
++ key_is_unknown = normalized_key.lower() not in map(str.lower, Lobsterin.AVAILABLE_KEYWORDS)
++ if key_is_unknown or normalized_key not in self.data:
++ return False
++
++ return True
+
+ def __delitem__(self, key):
+ del self.data[key.lower()]
+@@ -582,36 +590,36 @@
+ data = file.read().split("\n")
+ if len(data) == 0:
+ raise OSError("lobsterin file contains no data.")
+- Lobsterindict: dict[str, Any] = {}
++ lobsterin_dict: dict[str, Any] = {}
+
+ for datum in data:
+- # Remove all comments
+- if not datum.startswith(("!", "#", "//")):
+- pattern = r"\b[^!#//]+" # exclude comments after commands
+- matched_pattern = re.findall(pattern, datum)
+- if matched_pattern:
+- raw_datum = matched_pattern[0].replace("\t", " ") # handle tab in between and end of command
+- key_word = raw_datum.strip().split(" ") # extract keyword
+- if len(key_word) > 1:
+- # check which type of keyword this is, handle accordingly
+- if key_word[0].lower() not in [datum2.lower() for datum2 in Lobsterin.LISTKEYWORDS]:
+- if key_word[0].lower() not in [datum2.lower() for datum2 in Lobsterin.FLOAT_KEYWORDS]:
+- if key_word[0].lower() not in Lobsterindict:
+- Lobsterindict[key_word[0].lower()] = " ".join(key_word[1:])
+- else:
+- raise ValueError(f"Same keyword {key_word[0].lower()} twice!")
+- elif key_word[0].lower() not in Lobsterindict:
+- Lobsterindict[key_word[0].lower()] = float(key_word[1])
+- else:
+- raise ValueError(f"Same keyword {key_word[0].lower()} twice!")
+- elif key_word[0].lower() not in Lobsterindict:
+- Lobsterindict[key_word[0].lower()] = [" ".join(key_word[1:])]
++ if datum.startswith(("!", "#", "//")):
++ continue # ignore comments
++ pattern = r"\b[^!#//]+" # exclude comments after commands
++ if matched_pattern := re.findall(pattern, datum):
++ raw_datum = matched_pattern[0].replace("\t", " ") # handle tab in between and end of command
++ key_word = raw_datum.strip().split(" ") # extract keyword
++ key = key_word[0].lower()
++ if len(key_word) > 1:
++ # check which type of keyword this is, handle accordingly
++ if key not in [datum2.lower() for datum2 in Lobsterin.LISTKEYWORDS]:
++ if key not in [datum2.lower() for datum2 in Lobsterin.FLOAT_KEYWORDS]:
++ if key in lobsterin_dict:
++ raise ValueError(f"Same keyword {key} twice!")
++ lobsterin_dict[key] = " ".join(key_word[1:])
++ elif key in lobsterin_dict:
++ raise ValueError(f"Same keyword {key} twice!")
+ else:
+- Lobsterindict[key_word[0].lower()].append(" ".join(key_word[1:]))
+- elif len(key_word) > 0:
+- Lobsterindict[key_word[0].lower()] = True
++ lobsterin_dict[key] = float("nan" if key_word[1].strip() == "None" else key_word[1])
++ elif key not in lobsterin_dict:
++ lobsterin_dict[key] = [" ".join(key_word[1:])]
++ else:
++ lobsterin_dict[key].append(" ".join(key_word[1:]))
++ elif len(key_word) > 0:
++ lobsterin_dict[key] = True
++
+
+- return cls(Lobsterindict)
++ return cls(lobsterin_dict)
+
+ @staticmethod
+ def _get_potcar_symbols(POTCAR_input: str) -> list:
+--- a/pymatgen/io/vasp/outputs.py
++++ b/pymatgen/io/vasp/outputs.py
+@@ -1336,7 +1336,7 @@
+ return istep
+
+ @staticmethod
+- def _parse_dos(elem):
++ def _parse_dos(elem) -> tuple[Dos, Dos, list[dict]]:
+ efermi = float(elem.find("i").text)
+ energies = None
+ tdensities = {}
+@@ -1356,22 +1356,18 @@
+ orbs.pop(0)
+ lm = any("x" in s for s in orbs)
+ for s in partial.find("array").find("set").findall("set"):
+- pdos = defaultdict(dict)
++ pdos: dict[Orbital | OrbitalType, dict[Spin, np.ndarray]] = defaultdict(dict)
+
+ for ss in s.findall("set"):
+ spin = Spin.up if ss.attrib["comment"] == "spin 1" else Spin.down
+ data = np.array(_parse_vasp_array(ss))
+- _nrow, ncol = data.shape
+- for j in range(1, ncol):
+- orb = Orbital(j - 1) if lm else OrbitalType(j - 1)
+- pdos[orb][spin] = data[:, j]
++ _n_row, n_col = data.shape
++ for col_idx in range(1, n_col):
++ orb = Orbital(col_idx - 1) if lm else OrbitalType(col_idx - 1)
++ pdos[orb][spin] = data[:, col_idx] # type: ignore[index]
+ pdoss.append(pdos)
+ elem.clear()
+- return (
+- Dos(efermi, energies, tdensities),
+- Dos(efermi, energies, idensities),
+- pdoss,
+- )
++ return Dos(efermi, energies, tdensities), Dos(efermi, energies, idensities), pdoss
+
+ @staticmethod
+ def _parse_eigen(elem):
+--- a/setup.py
++++ b/setup.py
+@@ -48,10 +48,11 @@
+ ],
+ extras_require={
+ "ase": ["ase>=3.3"],
+- "tblite": ["tblite[ase]>=0.3.0"],
++ # don't depend on tblite above 3.11 since unsupported https://github.com/tblite/tblite/issues/175
++ "tblite": ["tblite[ase]>=0.3.0; python_version<'3.12'"],
+ "vis": ["vtk>=6.0.0"],
+ "abinit": ["netcdf4"],
+- "relaxation": ["matgl", "chgnet"],
++ "relaxation": ["matgl", "chgnet>=0.3.0"],
+ "electronic_structure": ["fdint>=2.0.2"],
+ "dev": [
+ "mypy",
+@@ -73,7 +74,7 @@
+ # caused CI failure due to ModuleNotFoundError: No module named 'packaging'
+ # "BoltzTraP2>=22.3.2; platform_system!='Windows'",
+ "chemview>=0.6",
+- "chgnet",
++ "chgnet>=0.3.0",
+ "f90nml>=1.1.2",
+ "galore>=0.6.1",
+ "h5py>=3.8.0",
+@@ -82,7 +83,8 @@
+ "netCDF4>=1.5.8",
+ "phonopy>=2.4.2",
+ "seekpath>=1.9.4",
+- "tblite[ase]>=0.3.0; platform_system=='Linux'",
++ # don't depend on tblite above 3.11 since unsupported https://github.com/tblite/tblite/issues/175
++ "tblite[ase]>=0.3.0; platform_system=='Linux' and python_version<'3.12'",
+ # "hiphive>=0.6",
+ # "openbabel>=3.1.1; platform_system=='Linux'",
+ ],
+@@ -158,6 +160,7 @@
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
++ "Programming Language :: Python :: 3.12",
+ "Programming Language :: Python :: 3",
+ "Topic :: Scientific/Engineering :: Chemistry",
+ "Topic :: Scientific/Engineering :: Information Analysis",
+--- a/tests/alchemy/test_materials.py
++++ b/tests/alchemy/test_materials.py
+@@ -37,9 +37,9 @@
+ [0, -2.2171384943, 3.1355090603],
+ ]
+ struct = Structure(lattice, ["Si4+", "Si4+"], coords)
+- ts = TransformedStructure(struct, [])
+- ts.append_transformation(SupercellTransformation.from_scaling_factors(2, 1, 1))
+- alt = ts.append_transformation(
++ t_struct = TransformedStructure(struct, [])
++ t_struct.append_transformation(SupercellTransformation.from_scaling_factors(2, 1, 1))
++ alt = t_struct.append_transformation(
+ PartialRemoveSpecieTransformation("Si4+", 0.5, algo=PartialRemoveSpecieTransformation.ALGO_COMPLETE), 5
+ )
+ assert len(alt) == 2
+@@ -61,11 +61,11 @@
+ with open(f"{TEST_FILES_DIR}/transformations.json") as file:
+ dct = json.load(file)
+ dct["other_parameters"] = {"tags": ["test"]}
+- ts = TransformedStructure.from_dict(dct)
+- ts.other_parameters["author"] = "Will"
+- ts.append_transformation(SubstitutionTransformation({"Fe": "Mn"}))
+- assert ts.final_structure.composition.reduced_formula == "MnPO4"
+- assert ts.other_parameters == {"author": "Will", "tags": ["test"]}
++ t_struct = TransformedStructure.from_dict(dct)
++ t_struct.other_parameters["author"] = "Will"
++ t_struct.append_transformation(SubstitutionTransformation({"Fe": "Mn"}))
++ assert t_struct.final_structure.composition.reduced_formula == "MnPO4"
++ assert t_struct.other_parameters == {"author": "Will", "tags": ["test"]}
+
+ def test_undo_and_redo_last_change(self):
+ trans = [
+@@ -104,7 +104,8 @@
+ self.trans.set_parameter("author", "will")
+ with pytest.warns(UserWarning) as warns:
+ snl = self.trans.to_snl([("will", "will@test.com")])
+- assert len(warns) == 1, "Warning not raised on type conversion with other_parameters"
++ # Python 3.12 also raises DeprecationWarning
++ assert len(warns) == 2, "Warning not raised on type conversion with other_parameters"
+
+ ts = TransformedStructure.from_snl(snl)
+ assert ts.history[-1]["@class"] == "SubstitutionTransformation"
+--- a/tests/io/lobster/test_inputs.py
++++ b/tests/io/lobster/test_inputs.py
+@@ -1440,36 +1440,36 @@
+
+ class TestLobsterin(unittest.TestCase):
+ def setUp(self):
+- self.Lobsterinfromfile = Lobsterin.from_file(f"{TEST_FILES_DIR}/cohp/lobsterin.1")
+- self.Lobsterinfromfile2 = Lobsterin.from_file(f"{TEST_FILES_DIR}/cohp/lobsterin.2")
+- self.Lobsterinfromfile3 = Lobsterin.from_file(f"{TEST_FILES_DIR}/cohp/lobsterin.3")
+- self.Lobsterinfromfile4 = Lobsterin.from_file(f"{TEST_FILES_DIR}/cohp/lobsterin.4.gz")
++ self.Lobsterin = Lobsterin.from_file(f"{TEST_FILES_DIR}/cohp/lobsterin.1")
++ self.Lobsterin2 = Lobsterin.from_file(f"{TEST_FILES_DIR}/cohp/lobsterin.2")
++ self.Lobsterin3 = Lobsterin.from_file(f"{TEST_FILES_DIR}/cohp/lobsterin.3")
++ self.Lobsterin4 = Lobsterin.from_file(f"{TEST_FILES_DIR}/cohp/lobsterin.4.gz")
+
+ def test_from_file(self):
+ # test read from file
+- assert self.Lobsterinfromfile["cohpstartenergy"] == approx(-15.0)
+- assert self.Lobsterinfromfile["cohpendenergy"] == approx(5.0)
+- assert self.Lobsterinfromfile["basisset"] == "pbeVaspFit2015"
+- assert self.Lobsterinfromfile["gaussiansmearingwidth"] == approx(0.1)
+- assert self.Lobsterinfromfile["basisfunctions"][0] == "Fe 3d 4p 4s"
+- assert self.Lobsterinfromfile["basisfunctions"][1] == "Co 3d 4p 4s"
+- assert self.Lobsterinfromfile["skipdos"]
+- assert self.Lobsterinfromfile["skipcohp"]
+- assert self.Lobsterinfromfile["skipcoop"]
+- assert self.Lobsterinfromfile["skippopulationanalysis"]
+- assert self.Lobsterinfromfile["skipgrosspopulation"]
++ assert self.Lobsterin["cohpstartenergy"] == approx(-15.0)
++ assert self.Lobsterin["cohpendenergy"] == approx(5.0)
++ assert self.Lobsterin["basisset"] == "pbeVaspFit2015"
++ assert self.Lobsterin["gaussiansmearingwidth"] == approx(0.1)
++ assert self.Lobsterin["basisfunctions"][0] == "Fe 3d 4p 4s"
++ assert self.Lobsterin["basisfunctions"][1] == "Co 3d 4p 4s"
++ assert self.Lobsterin["skipdos"]
++ assert self.Lobsterin["skipcohp"]
++ assert self.Lobsterin["skipcoop"]
++ assert self.Lobsterin["skippopulationanalysis"]
++ assert self.Lobsterin["skipgrosspopulation"]
+
+ # test if comments are correctly removed
+- assert self.Lobsterinfromfile == self.Lobsterinfromfile2
++ assert self.Lobsterin == self.Lobsterin2
+
+ def test_getitem(self):
+ # tests implementation of getitem, should be case independent
+- assert self.Lobsterinfromfile["COHPSTARTENERGY"] == approx(-15.0)
++ assert self.Lobsterin["COHPSTARTENERGY"] == approx(-15.0)
+
+ def test_setitem(self):
+ # test implementation of setitem
+- self.Lobsterinfromfile["skipCOHP"] = False
+- assert self.Lobsterinfromfile["skipcohp"] is False
++ self.Lobsterin["skipCOHP"] = False
++ assert self.Lobsterin["skipcohp"] is False
+
+ def test_initialize_from_dict(self):
+ # initialize from dict
+@@ -1633,40 +1633,39 @@
+
+ def test_diff(self):
+ # test diff
+- assert self.Lobsterinfromfile.diff(self.Lobsterinfromfile2)["Different"] == {}
+- assert self.Lobsterinfromfile.diff(self.Lobsterinfromfile2)["Same"]["COHPSTARTENERGY"] == approx(-15.0)
++ assert self.Lobsterin.diff(self.Lobsterin2)["Different"] == {}
++ assert self.Lobsterin.diff(self.Lobsterin2)["Same"]["COHPSTARTENERGY"] == approx(-15.0)
+
+ # test diff in both directions
+- for entry in self.Lobsterinfromfile.diff(self.Lobsterinfromfile3)["Same"]:
+- assert entry in self.Lobsterinfromfile3.diff(self.Lobsterinfromfile)["Same"]
+- for entry in self.Lobsterinfromfile3.diff(self.Lobsterinfromfile)["Same"]:
+- assert entry in self.Lobsterinfromfile.diff(self.Lobsterinfromfile3)["Same"]
+- for entry in self.Lobsterinfromfile.diff(self.Lobsterinfromfile3)["Different"]:
+- assert entry in self.Lobsterinfromfile3.diff(self.Lobsterinfromfile)["Different"]
+- for entry in self.Lobsterinfromfile3.diff(self.Lobsterinfromfile)["Different"]:
+- assert entry in self.Lobsterinfromfile.diff(self.Lobsterinfromfile3)["Different"]
+-
+- assert (
+- self.Lobsterinfromfile.diff(self.Lobsterinfromfile3)["Different"]["SKIPCOHP"]["lobsterin1"]
+- == self.Lobsterinfromfile3.diff(self.Lobsterinfromfile)["Different"]["SKIPCOHP"]["lobsterin2"]
++ for entry in self.Lobsterin.diff(self.Lobsterin3)["Same"]:
++ assert entry in self.Lobsterin3.diff(self.Lobsterin)["Same"]
++ for entry in self.Lobsterin3.diff(self.Lobsterin)["Same"]:
++ assert entry in self.Lobsterin.diff(self.Lobsterin3)["Same"]
++ for entry in self.Lobsterin.diff(self.Lobsterin3)["Different"]:
++ assert entry in self.Lobsterin3.diff(self.Lobsterin)["Different"]
++ for entry in self.Lobsterin3.diff(self.Lobsterin)["Different"]:
++ assert entry in self.Lobsterin.diff(self.Lobsterin3)["Different"]
++ assert(self.Lobsterin.diff(self.Lobsterin3)["Different"]["SKIPCOHP"]["lobsterin1"]
++ == self.Lobsterin3.diff(self.Lobsterin)["Different"]["SKIPCOHP"]["lobsterin2"]
+ )
+
+ def test_dict_functionality(self):
+- assert self.Lobsterinfromfile.get("COHPstartEnergy") == -15.0
+- assert self.Lobsterinfromfile.get("COHPstartEnergy") == -15.0
+- assert self.Lobsterinfromfile.get("COhPstartenergy") == -15.0
+- lobsterincopy = self.Lobsterinfromfile.copy()
+- lobsterincopy.update({"cohpstarteNergy": -10.00})
+- assert lobsterincopy["cohpstartenergy"] == -10.0
+- lobsterincopy.pop("cohpstarteNergy")
+- assert "cohpstartenergy" not in lobsterincopy
+- lobsterincopy.pop("cohpendenergY")
+- lobsterincopy["cohpsteps"] = 100
+- assert lobsterincopy["cohpsteps"] == 100
+- before = len(lobsterincopy.items())
+- lobsterincopy.popitem()
+- after = len(lobsterincopy.items())
+- assert before != after
++ for key in ("COHPstartEnergy", "COHPstartEnergy", "COhPstartenergy"):
++ start_energy = self.Lobsterin.get(key)
++ assert start_energy == -15.0, f"{start_energy=}, {key=}"
++ lobsterin_copy = self.Lobsterin.copy()
++ lobsterin_copy.update({"cohpstarteNergy": -10.00})
++ assert lobsterin_copy["cohpstartenergy"] == -10.0
++ lobsterin_copy.pop("cohpstarteNergy")
++ assert "cohpstartenergy" not in lobsterin_copy
++ lobsterin_copy.pop("cohpendenergY")
++ lobsterin_copy["cohpsteps"] = 100
++ assert lobsterin_copy["cohpsteps"] == 100
++ len_before = len(lobsterin_copy.items())
++ assert len_before == 9, f"{len_before=}"
++ lobsterin_copy.popitem()
++ len_after = len(lobsterin_copy.items())
++ assert len_after == len_before - 1
+
+ def test_read_write_lobsterin(self):
+ outfile_path = tempfile.mkstemp()[1]
+@@ -1905,10 +1904,10 @@
+ found += 1
+ return found == 1
+
+- def test_msonable_implementation(self):
++ def test_as_from_dict(self):
+ # tests as dict and from dict methods
+- new_lobsterin = Lobsterin.from_dict(self.Lobsterinfromfile.as_dict())
+- assert new_lobsterin == self.Lobsterinfromfile
++ new_lobsterin = Lobsterin.from_dict(self.Lobsterin.as_dict())
++ assert new_lobsterin == self.Lobsterin
+ new_lobsterin.to_json()
+
+
+@@ -2302,13 +2301,17 @@
+ filename=f"{TEST_FILES_DIR}/cohp/LCAOWaveFunctionAfterLSO1PlotOfSpin1Kpoint1band1.gz",
+ structure=Structure.from_file(f"{TEST_FILES_DIR}/cohp/POSCAR_O.gz"),
+ )
+- wave1.write_file(filename=f"{self.tmp_path}/wavecar_test.vasp", part="real")
+- assert os.path.isfile("wavecar_test.vasp")
+-
+- wave1.write_file(filename=f"{self.tmp_path}/wavecar_test.vasp", part="imaginary")
+- assert os.path.isfile("wavecar_test.vasp")
+- wave1.write_file(filename=f"{self.tmp_path}/density.vasp", part="density")
+- assert os.path.isfile("density.vasp")
++ real_wavecar_path = f"{self.tmp_path}/real-wavecar.vasp"
++ wave1.write_file(filename=real_wavecar_path, part="real")
++ assert os.path.isfile(real_wavecar_path)
++
++ imag_wavecar_path = f"{self.tmp_path}/imaginary-wavecar.vasp"
++ wave1.write_file(filename=imag_wavecar_path, part="imaginary")
++ assert os.path.isfile(imag_wavecar_path)
++
++ density_wavecar_path = f"{self.tmp_path}/density-wavecar.vasp"
++ wave1.write_file(filename=density_wavecar_path, part="density")
++ assert os.path.isfile(density_wavecar_path)
+
+
+ class TestSitePotentials(PymatgenTest):
+--- a/tests/io/test_shengbte.py
++++ b/tests/io/test_shengbte.py
+@@ -84,7 +84,7 @@
+ reference_string = reference_file.read()
+ assert test_string == reference_string
+
+- def test_msonable_implementation(self):
++ def test_as_from_dict(self):
+ # tests as dict and from dict methods
+ ctrl_from_file = Control.from_file(self.filename)
+ control_from_dict = Control.from_dict(ctrl_from_file.as_dict())
+--- a/tests/io/vasp/test_outputs.py
++++ b/tests/io/vasp/test_outputs.py
+@@ -20,7 +20,7 @@
+ from pymatgen.core.structure import Structure
+ from pymatgen.electronic_structure.core import Magmom, Orbital, OrbitalType, Spin
+ from pymatgen.entries.compatibility import MaterialsProjectCompatibility
+-from pymatgen.io.vasp.inputs import Kpoints, Poscar, Potcar
++from pymatgen.io.vasp.inputs import Incar, Kpoints, Poscar, Potcar
+ from pymatgen.io.vasp.outputs import (
+ WSWQ,
+ BSVasprun,
+@@ -231,8 +231,8 @@
+ assert len(trajectory) == len(vasp_run.ionic_steps)
+ assert "forces" in trajectory[0].site_properties
+
+- for i, step in enumerate(vasp_run.ionic_steps):
+- assert vasp_run.structures[i] == step["structure"]
++ for idx, step in enumerate(vasp_run.ionic_steps):
++ assert vasp_run.structures[idx] == step["structure"]
+
+ assert all(
+ vasp_run.structures[i] == vasp_run.ionic_steps[i]["structure"] for i in range(len(vasp_run.ionic_steps))
+@@ -257,12 +257,12 @@
+ "PAW_PBE P 17Jan2003",
+ "PAW_PBE O 08Apr2002",
+ ]
+- assert vasp_run.kpoints is not None, "Kpoints cannot be read"
+- assert vasp_run.actual_kpoints is not None, "Actual kpoints cannot be read"
+- assert vasp_run.actual_kpoints_weights is not None, "Actual kpoints weights cannot be read"
++ assert isinstance(vasp_run.kpoints, Kpoints), f"{vasp_run.kpoints=}"
++ assert isinstance(vasp_run.actual_kpoints, list), f"{vasp_run.actual_kpoints=}"
++ assert isinstance(vasp_run.actual_kpoints_weights, list), f"{vasp_run.actual_kpoints_weights=}"
+ for atom_doses in vasp_run.pdos:
+ for orbital_dos in atom_doses:
+- assert orbital_dos is not None, "Partial Dos cannot be read"
++ assert isinstance(orbital_dos, Orbital), f"{orbital_dos=}"
+
+ # test skipping ionic steps.
+ vasprun_skip = Vasprun(filepath, 3, parse_potcar_file=False)
+@@ -295,7 +295,7 @@
+ filepath = f"{TEST_FILES_DIR}/vasprun.xml.unconverged"
+ with pytest.warns(UnconvergedVASPWarning, match="vasprun.xml.unconverged is an unconverged VASP run") as warns:
+ vasprun_unconverged = Vasprun(filepath, parse_potcar_file=False)
+- assert len(warns) == 1
++ assert len(warns) >= 1
+
+ assert vasprun_unconverged.converged_ionic
+ assert not vasprun_unconverged.converged_electronic
+@@ -642,7 +642,7 @@
+ # Ensure no potcar is found and nothing is updated
+ with pytest.warns(UserWarning, match="No POTCAR file with matching TITEL fields was found in") as warns:
+ vasp_run = Vasprun(filepath, parse_potcar_file=".")
+- assert len(warns) == 2
++ assert len(warns) >= 2, f"{len(warns)=}"
+ assert vasp_run.potcar_spec == [
+ {"titel": "PAW_PBE Li 17Jan2003", "hash": None, "summary_stats": {}},
+ {"titel": "PAW_PBE Fe 06Sep2000", "hash": None, "summary_stats": {}},
diff -Nru pymatgen-2024.1.27+dfsg1/debian/patches/series pymatgen-2024.1.27+dfsg1/debian/patches/series
--- pymatgen-2024.1.27+dfsg1/debian/patches/series 2024-02-27 04:38:40.000000000 +1300
+++ pymatgen-2024.1.27+dfsg1/debian/patches/series 2024-07-10 15:59:41.000000000 +1200
@@ -8,3 +8,4 @@
docs_index.patch
docs_libjs_local.patch
CVE-2024-23346_JonesFaithfulTransformation_sympy-c231cbd.patch
+python312.patch
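
A note on the setup.py hunks above: rather than dropping tblite entirely,
the patch gates it behind a PEP 508 environment marker, so the extra simply
resolves to nothing on Python 3.12. The same idiom in isolation (placeholder
project metadata, not pymatgen's):

    from setuptools import setup

    setup(
        name="example-pkg",  # placeholder name, not pymatgen
        version="0.1",
        extras_require={
            # Marker skips tblite on interpreters it does not yet support.
            "tblite": ["tblite[ase]>=0.3.0; python_version<'3.12'"],
        },
    )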
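
Likewise, the test hunks relax exact warning counts (len(warns) == 1) to
lower bounds (len(warns) >= 1) because Python 3.12 raises additional
DeprecationWarnings inside pytest.warns blocks. A small illustration of that
defensive style (hypothetical test, not taken from the suite):

    import warnings
    import pytest

    def emit():
        warnings.warn("legacy API", DeprecationWarning)  # extra noise on newer Pythons
        warnings.warn("the warning under test", UserWarning)

    def test_emit():
        with pytest.warns(UserWarning, match="under test") as record:
            emit()
        # Count defensively: unrelated warnings may be recorded as well.
        assert len(record) >= 1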