Merge branch 'develop' into ci/issue-5289-update-peter-evans-create-pull-request-action
csadorf authored Jan 13, 2022
2 parents 96e7408 + 995f254 commit 873e2ea
Showing 24 changed files with 126 additions and 61 deletions.
23 changes: 22 additions & 1 deletion .github/workflows/benchmark.yml
@@ -50,9 +50,30 @@ jobs:
uses: actions/setup-python@v2
with:
python-version: '3.8'

- name: Upgrade pip
run: |
pip install --upgrade pip
pip --version
- name: Build pymatgen with compatible numpy
run: |
# This step is necessary because certain versions of `pymatgen` do not specify an explicit version of
# `numpy` in their build requirements, so the latest version gets used at build time. This causes
# problems, because the compiled `pymatgen` can then only be used with that version of `numpy` or
# higher, since `numpy` only guarantees forward compatibility of the ABI. If we want to run with an
# older version of `numpy`, we need to ensure that `pymatgen` is built against that same version. We
# accomplish this by installing the desired version of `numpy` manually and then calling the install
# command for `pymatgen` with the `--no-build-isolation` flag. With this flag, pip does not install the
# build dependencies into an isolated environment (which would pull in the most recent `numpy`) but
# instead relies on the requirements already present in the environment. We also need to install
# `wheel`, since the `pymatgen` build would otherwise fail because `bdist_wheel` is not available.
pip install numpy==1.21.4 wheel
pip install pymatgen==2022.0.16 --no-cache-dir --no-build-isolation
- name: Install python dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements/requirements-py-3.8.txt
pip install --no-deps -e .
pip freeze
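As an aside, the constraint described in the comment above (an extension compiled against `numpy` X only imports cleanly with `numpy` >= X) can be sanity-checked at runtime. A minimal sketch, not part of the workflow, assuming `numpy` and `packaging` are installed and reusing the `1.21.4` pin from the step above:

```python
# Sketch: confirm the runtime numpy is at least the version the compiled
# extensions were built against (the workflow pins 1.21.4 at build time).
from packaging.version import Version

import numpy

BUILD_NUMPY = Version('1.21.4')  # version installed before building pymatgen
runtime = Version(numpy.__version__)

if runtime < BUILD_NUMPY:
    raise RuntimeError(
        f'numpy {runtime} predates the build version {BUILD_NUMPY}; '
        'expect ABI import errors from the compiled extensions.'
    )
print(f'numpy {runtime} satisfies the >= {BUILD_NUMPY} ABI requirement')
```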
19 changes: 17 additions & 2 deletions .github/workflows/ci-code.yml
@@ -89,11 +89,26 @@ jobs:
sudo apt install postgresql graphviz
- name: Upgrade pip and setuptools
# It is crucial to update `setuptools`, otherwise the installation of `pymatgen` can break
run: |
pip install --upgrade pip setuptools
pip install --upgrade pip
pip --version
- name: Build pymatgen with compatible numpy
run: |
# This step is necessary because certain versions of `pymatgen` do not specify an explicit version of
# `numpy` in their build requirements, so the latest version gets used at build time. This causes
# problems, because the compiled `pymatgen` can then only be used with that version of `numpy` or
# higher, since `numpy` only guarantees forward compatibility of the ABI. If we want to run with an
# older version of `numpy`, we need to ensure that `pymatgen` is built against that same version. We
# accomplish this by installing the desired version of `numpy` manually and then calling the install
# command for `pymatgen` with the `--no-build-isolation` flag. With this flag, pip does not install the
# build dependencies into an isolated environment (which would pull in the most recent `numpy`) but
# instead relies on the requirements already present in the environment. We also need to install
# `wheel`, since the `pymatgen` build would otherwise fail because `bdist_wheel` is not available.
pip install numpy==1.21.4 wheel
pip install pymatgen==2022.0.16 --no-cache-dir --no-build-isolation
- name: Install aiida-core
run: |
pip install --use-feature=2020-resolver -r requirements/requirements-py-${{ matrix.python-version }}.txt
3 changes: 2 additions & 1 deletion .github/workflows/ci-style.yml
@@ -29,7 +29,8 @@ jobs:
- name: Install python dependencies
run: |
pip install -e .[all]
pip install -r requirements/requirements-py-3.8.txt
pip install -e .[pre-commit]
pip freeze
- name: Run pre-commit
16 changes: 16 additions & 0 deletions .github/workflows/rabbitmq.yml
@@ -56,6 +56,22 @@ jobs:
pip install --upgrade pip
pip --version
- name: Build pymatgen with compatible numpy
run: |
# This step is necessary because certain versions of `pymatgen` do not specify an explicit version of
# `numpy` in their build requirements, so the latest version gets used at build time. This causes
# problems, because the compiled `pymatgen` can then only be used with that version of `numpy` or
# higher, since `numpy` only guarantees forward compatibility of the ABI. If we want to run with an
# older version of `numpy`, we need to ensure that `pymatgen` is built against that same version. We
# accomplish this by installing the desired version of `numpy` manually and then calling the install
# command for `pymatgen` with the `--no-build-isolation` flag. With this flag, pip does not install the
# build dependencies into an isolated environment (which would pull in the most recent `numpy`) but
# instead relies on the requirements already present in the environment. We also need to install
# `wheel`, since the `pymatgen` build would otherwise fail because `bdist_wheel` is not available.
pip install numpy==1.21.4 wheel
pip install pymatgen==2022.0.16 --no-cache-dir --no-build-isolation
- name: Install aiida-core
run: |
pip install -r requirements/requirements-py-3.8.txt
23 changes: 22 additions & 1 deletion .github/workflows/release.yml
@@ -82,9 +82,30 @@ jobs:
run: |
sudo apt update
sudo apt install postgresql graphviz
- name: Upgrade pip
run: |
pip install --upgrade pip
pip --version
- name: Build pymatgen with compatible numpy
run: |
# This step is necessary because certain versions of `pymatgen` do not specify an explicit version of
# `numpy` in their build requirements, so the latest version gets used at build time. This causes
# problems, because the compiled `pymatgen` can then only be used with that version of `numpy` or
# higher, since `numpy` only guarantees forward compatibility of the ABI. If we want to run with an
# older version of `numpy`, we need to ensure that `pymatgen` is built against that same version. We
# accomplish this by installing the desired version of `numpy` manually and then calling the install
# command for `pymatgen` with the `--no-build-isolation` flag. With this flag, pip does not install the
# build dependencies into an isolated environment (which would pull in the most recent `numpy`) but
# instead relies on the requirements already present in the environment. We also need to install
# `wheel`, since the `pymatgen` build would otherwise fail because `bdist_wheel` is not available.
pip install numpy==1.21.4 wheel
pip install pymatgen==2022.0.16 --no-cache-dir --no-build-isolation
- name: Install aiida-core
run: |
pip install --upgrade pip setuptools
pip install -r requirements/requirements-py-3.8.txt
pip install --no-deps -e .
- name: Run sub-set of test suite
8 changes: 5 additions & 3 deletions aiida/engine/daemon/execmanager.py
@@ -273,7 +273,8 @@ def upload_calculation(
else:

if remote_copy_list:
with open(os.path.join(workdir, '_aiida_remote_copy_list.txt'), 'w') as handle: # pylint: disable=unspecified-encoding
filepath = os.path.join(workdir, '_aiida_remote_copy_list.txt')
with open(filepath, 'w', encoding='utf-8') as handle: # type: ignore[assignment]
for remote_computer_uuid, remote_abs_path, dest_rel_path in remote_copy_list:
handle.write(
'would have copied {} to {} in working directory on remote {}'.format(
@@ -282,7 +283,8 @@ def upload_calculation(
)

if remote_symlink_list:
with open(os.path.join(workdir, '_aiida_remote_symlink_list.txt'), 'w') as handle: # pylint: disable=unspecified-encoding
filepath = os.path.join(workdir, '_aiida_remote_symlink_list.txt')
with open(filepath, 'w', encoding='utf-8') as handle: # type: ignore[assignment]
for remote_computer_uuid, remote_abs_path, dest_rel_path in remote_symlink_list:
handle.write(
'would have created symlinks from {} to {} in working directory on remote {}'.format(
@@ -317,7 +319,7 @@ def upload_calculation(
if relpath not in provenance_exclude_list and all(
dirname not in provenance_exclude_list for dirname in dirnames
):
with open(filepath, 'rb') as handle:
with open(filepath, 'rb') as handle: # type: ignore[assignment]
node._repository.put_object_from_filelike(handle, relpath) # pylint: disable=protected-access

# Since the node is already stored, we cannot use the normal repository interface since it will raise a
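As background for the `encoding='utf-8'` fixes in the hunks above, a small illustrative sketch (not from the commit): without an explicit encoding, `open()` falls back to the locale's preferred encoding, which varies between systems (for example `cp1252` on some Windows setups), so identical code can write different bytes on different machines.

```python
# Sketch: show the platform-dependent default that an explicit encoding avoids.
import locale
import tempfile

print(locale.getpreferredencoding(False))  # implicit default used by open()

# With encoding='utf-8' the written bytes are identical on every platform.
with tempfile.NamedTemporaryFile(mode='w', encoding='utf-8', suffix='.txt') as handle:
    handle.write('would have copied /source to /dest on remote\n')
```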
4 changes: 2 additions & 2 deletions aiida/engine/processes/builder.py
@@ -97,10 +97,10 @@ def __setattr__(self, attr: str, value: Any) -> None:
except KeyError as exception:
if not self._port_namespace.dynamic:
raise AttributeError(f'Unknown builder parameter: {attr}') from exception
port = None # type: ignore[assignment]
port = None
else:
value = port.serialize(value) # type: ignore[union-attr]
validation_error = port.validate(value)
validation_error = port.validate(value) # type: ignore[union-attr]
if validation_error:
raise ValueError(f'invalid attribute value {validation_error.message}')

4 changes: 2 additions & 2 deletions aiida/engine/processes/ports.py
@@ -41,7 +41,7 @@ class WithNonDb:
def __init__(self, *args, **kwargs) -> None:
self._non_db_explicitly_set: bool = bool('non_db' in kwargs)
non_db = kwargs.pop('non_db', False)
super().__init__(*args, **kwargs) # type: ignore[call-arg]
super().__init__(*args, **kwargs)
self._non_db: bool = non_db

@property
@@ -76,7 +76,7 @@ class WithSerialize:

def __init__(self, *args, **kwargs) -> None:
serializer = kwargs.pop('serializer', None)
super().__init__(*args, **kwargs) # type: ignore[call-arg]
super().__init__(*args, **kwargs)
self._serializer: Callable[[Any], 'Data'] = serializer

def serialize(self, value: Any) -> 'Data':
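The two hunks above touch the same cooperative-mixin pattern: each mixin pops its own keyword argument and forwards the rest up the MRO via `super().__init__`. A generic sketch with hypothetical classes (not aiida's):

```python
# Sketch of the kwarg-popping mixin pattern used by WithNonDb and WithSerialize:
# each mixin consumes its own keyword and delegates the rest along the MRO.
class WithLabel:
    def __init__(self, *args, **kwargs) -> None:
        self._label = kwargs.pop('label', None)  # consume this mixin's kwarg
        super().__init__(*args, **kwargs)        # forward the remainder

class Base:
    def __init__(self, name: str) -> None:
        self.name = name

class LabelledBase(WithLabel, Base):
    pass

obj = LabelledBase('example', label='demo')
print(obj.name, obj._label)  # example demo
```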
2 changes: 1 addition & 1 deletion aiida/orm/implementation/entities.py
@@ -77,7 +77,7 @@ def from_dbmodel(cls, dbmodel: Any, backend: 'Backend') -> EntityType:
class BackendCollection(Generic[EntityType]):
"""Container class that represents a collection of entries of a particular backend entity."""

ENTITY_CLASS: ClassVar[Type[EntityType]]
ENTITY_CLASS: ClassVar[Type[EntityType]] # type: ignore[misc]

def __init__(self, backend: 'Backend'):
"""
4 changes: 2 additions & 2 deletions aiida/orm/implementation/sqlalchemy/backend.py
@@ -154,7 +154,7 @@ def bulk_insert(self, entity_type: EntityTypes, rows: List[dict], allow_defaults
# https://docs.sqlalchemy.org/en/14/changelog/migration_14.html#orm-batch-inserts-with-psycopg2-now-batch-statements-with-returning-in-most-cases
# by contrast, in sqlite, bulk_insert is faster: https://docs.sqlalchemy.org/en/14/faq/performance.html
session = self.get_session()
with (nullcontext() if self.in_transaction else self.transaction()): # type: ignore[attr-defined]
with (nullcontext() if self.in_transaction else self.transaction()):
session.bulk_insert_mappings(mapper, rows, render_nulls=True, return_defaults=True)
return [row['id'] for row in rows]

@@ -168,7 +168,7 @@ def bulk_update(self, entity_type: EntityTypes, rows: List[dict]) -> None: # py
if not keys.issuperset(row):
raise IntegrityError(f'Incorrect fields given for {entity_type}: {set(row)} not subset of {keys}')
session = self.get_session()
with (nullcontext() if self.in_transaction else self.transaction()): # type: ignore[attr-defined]
with (nullcontext() if self.in_transaction else self.transaction()):
session.bulk_update_mappings(mapper, rows)

def delete_nodes_and_connections(self, pks_to_delete: Sequence[int]) -> None: # pylint: disable=no-self-use
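For reference, the `bulk_insert_mappings`/`bulk_update_mappings` calls above use the standard SQLAlchemy bulk-operations API. A self-contained sketch with a toy model (not aiida's schema), where `return_defaults=True` writes the generated primary keys back into the row dicts, matching the `row['id']` lookup above:

```python
# Sketch: SQLAlchemy bulk insert with generated ids returned into the dicts.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    label = Column(String)

engine = create_engine('sqlite://')  # in-memory database for the example
Base.metadata.create_all(engine)

rows = [{'label': 'a'}, {'label': 'b'}]
with Session(engine) as session, session.begin():
    session.bulk_insert_mappings(Item, rows, render_nulls=True, return_defaults=True)

print([row['id'] for row in rows])  # [1, 2]
```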
2 changes: 1 addition & 1 deletion aiida/orm/nodes/node.py
@@ -835,7 +835,7 @@ def _get_objects_to_hash(self) -> List[Any]:
assert self._repository is not None, 'repository not initialised'
top_level_module = self.__module__.split('.', 1)[0]
try:
version = importlib.import_module(top_level_module).__version__ # type: ignore[attr-defined]
version = importlib.import_module(top_level_module).__version__
except (ImportError, AttributeError) as exc:
raise exceptions.HashingError("The node's package version could not be determined") from exc
objects = [
2 changes: 1 addition & 1 deletion aiida/orm/nodes/process/calculation/calcjob.py
@@ -122,7 +122,7 @@ def _get_objects_to_hash(self) -> List[Any]:
"""
from importlib import import_module
objects = [
import_module(self.__module__.split('.', 1)[0]).__version__, # type: ignore[attr-defined]
import_module(self.__module__.split('.', 1)[0]).__version__,
{
key: val
for key, val in self.attributes_items()
4 changes: 2 additions & 2 deletions aiida/plugins/factories.py
@@ -100,8 +100,8 @@ def CalcJobImporterFactory(entry_point_name: str, load: bool = True) -> Optional
entry_point = BaseFactory(entry_point_group, entry_point_name, load=load)
valid_classes = (CalcJobImporter,)

if isclass(entry_point) and issubclass(entry_point, CalcJobImporter): # type: ignore[arg-type]
return entry_point
if isclass(entry_point) and issubclass(entry_point, CalcJobImporter):
return entry_point # type: ignore[return-value]

raise_invalid_type_error(entry_point_name, entry_point_group, valid_classes)

2 changes: 1 addition & 1 deletion aiida/repository/common.py
@@ -81,7 +81,7 @@ def from_serialized(cls, serialized: dict, name='') -> 'File':
objects = {name: File.from_serialized(obj, name) for name, obj in serialized.get('o', {}).items()}

instance = cls.__new__(cls)
instance.__init__(name, file_type, key, objects)
instance.__init__(name, file_type, key, objects) # type: ignore[misc]
return instance

def serialize(self) -> dict:
2 changes: 1 addition & 1 deletion aiida/repository/repository.py
@@ -66,7 +66,7 @@ def from_serialized(cls, backend: AbstractRepositoryBackend, serialized: Dict[st
:param backend: instance of repository backend to use to actually store the file objects.
"""
instance = cls.__new__(cls)
instance.__init__(backend)
instance.__init__(backend) # type: ignore[misc]

if serialized:
for name, obj in serialized['o'].items():
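Both repository hunks above use the same deserialization idiom: allocate with `cls.__new__` and invoke `__init__` explicitly, which mypy flags (hence the `type: ignore[misc]`). A toy sketch of the idiom with a hypothetical class:

```python
# Sketch: explicit __new__/__init__ deserialization, as in File.from_serialized
# and Repository.from_serialized above.
from typing import Any, Dict

class Record:
    def __init__(self, name: str) -> None:
        self.name = name

    @classmethod
    def from_serialized(cls, serialized: Dict[str, Any]) -> 'Record':
        instance = cls.__new__(cls)            # allocate without calling __init__
        instance.__init__(serialized['name'])  # type: ignore[misc]
        return instance

print(Record.from_serialized({'name': 'demo'}).name)  # demo
```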
2 changes: 1 addition & 1 deletion aiida/tools/archive/create.py
@@ -345,7 +345,7 @@ def transform(row):

if filename.exists():
filename.unlink()
shutil.move(tmp_filename, filename) # type: ignore
shutil.move(tmp_filename, filename) # type: ignore[arg-type]

EXPORT_LOGGER.report('Archive created successfully')

14 changes: 7 additions & 7 deletions aiida/tools/archive/implementations/sqlite/backend.py
@@ -376,36 +376,36 @@ def get_backend_entity(dbmodel) -> Type[entities.SqlaModelEntity]: # pylint: di
raise TypeError(f'Cannot get backend entity for {dbmodel}')


@get_backend_entity.register(DbAuthInfo)
@get_backend_entity.register(DbAuthInfo) # type: ignore[call-overload]
def _(dbmodel):
return create_backend_cls(authinfos.SqlaAuthInfo, dbmodel.__class__)


@get_backend_entity.register(DbComment) # type: ignore[no-redef]
@get_backend_entity.register(DbComment) # type: ignore[call-overload]
def _(dbmodel):
return create_backend_cls(comments.SqlaComment, dbmodel.__class__)


@get_backend_entity.register(DbComputer) # type: ignore[no-redef]
@get_backend_entity.register(DbComputer) # type: ignore[call-overload]
def _(dbmodel):
return create_backend_cls(computers.SqlaComputer, dbmodel.__class__)


@get_backend_entity.register(DbGroup) # type: ignore[no-redef]
@get_backend_entity.register(DbGroup) # type: ignore[call-overload]
def _(dbmodel):
return create_backend_cls(groups.SqlaGroup, dbmodel.__class__)


@get_backend_entity.register(DbLog) # type: ignore[no-redef]
@get_backend_entity.register(DbLog) # type: ignore[call-overload]
def _(dbmodel):
return create_backend_cls(logs.SqlaLog, dbmodel.__class__)


@get_backend_entity.register(DbNode) # type: ignore[no-redef]
@get_backend_entity.register(DbNode) # type: ignore[call-overload]
def _(dbmodel):
return create_backend_cls(nodes.SqlaNode, dbmodel.__class__)


@get_backend_entity.register(DbUser) # type: ignore[no-redef]
@get_backend_entity.register(DbUser) # type: ignore[call-overload]
def _(dbmodel):
return create_backend_cls(users.SqlaUser, dbmodel.__class__)
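For context, the decorators above follow the `functools.singledispatch` registration pattern: one generic function dispatches on the type of its first argument, and every `register` call attaches a concrete implementation under the throwaway name `_`, which is what draws mypy's attention and the ignore comments. A minimal sketch with toy types:

```python
# Sketch: singledispatch with multiple register() implementations named `_`.
from functools import singledispatch

@singledispatch
def describe(obj) -> str:
    raise TypeError(f'Cannot describe {obj!r}')

@describe.register(int)
def _(obj):
    return f'integer {obj}'

@describe.register(str)
def _(obj):
    return f'string {obj!r}'

print(describe(42))    # integer 42
print(describe('hi'))  # string 'hi'
```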
@@ -132,7 +132,7 @@ def path_callback(inpath, outpath) -> bool:

if outpath.exists() and force:
outpath.unlink()
shutil.move(Path(tmpdirname) / 'new.zip', outpath) # type: ignore
shutil.move(Path(tmpdirname) / 'new.zip', outpath) # type: ignore[arg-type]


def _read_json(inpath: Path, filename: str, is_tar: bool) -> Dict[str, Any]: