From 66c1fcd0603dcd4a00e57986cca479fa6bbf25aa Mon Sep 17 00:00:00 2001 From: Joerg Henrichs Date: Wed, 15 Jan 2025 15:04:02 +1100 Subject: [PATCH 1/6] Update to fparser 0.2 (#373) * Support new and old style of PSyclone command line (no more nemo api etc) * Fix mypy errors. * Added missing tests for calling psyclone, and converting old style to new style arguments and vice versa. * Updated comment. * Removed mixin, use a simple regex instead. * Added support for ifx/icx compiler as intel-llvm class. * Added support for nvidia compiler. * Add preliminary support for Cray compiler. * Added Cray compiler wrapper ftn and cc. * Follow a more consistent naming scheme for crays, even though the native compiler names are longer (crayftn-cray, craycc-cray). * Changed names again. * Renamed cray compiler wrapper to be CrayCcWrapper and CrayFtnWrapper, to avoid confusion with Craycc. * Fixed incorrect name in comments. * Additional compilers (#349) * Moved OBJECT_ARCHIVES from constants to ArtefactSet. * Moved PRAGMAD_C from constants to ArtefactSet. * Turned 'all_source' into an enum. * Allow integer as revision. * Fixed flake8 error. * Removed specific functions to add/get fortran source files etc. * Removed non-existing and unnecessary collections. * Try to fix all run_configs. * Fixed rebase issues. * Added replace functionality to ArtefactStore, updated test_artefacts to cover all lines in that file. * Started to replace artefacts when files are pre-processed. * Removed linker argument from linking step in all examples. * Try to get jules to link. * Fixed build_jules. * Fixed other issues raised in reviews. * Try to get jules to link. * Fixed other issues raised in reviews. * Simplify handling of X90 files by replacing the X90 with x90, meaning only one artefact set is involved when running PSyclone. * Make OBJECT_ARCHIVES also a dict, migrate more code to replace/add files to the default build artefact collections. * Fixed some examples. * Fix flake8 error. * Fixed failing tests. * Support empty comments. * Fix preprocessor to not unnecessarily remove and add files that are already in the output directory. * Allow find_source_files to be called more than once by adding files (not replacing artefact). * Updated lfric_common so that files created by configurator are written in build (not source). * Use c_build_files instead of pragmad_c. * Removed unnecessary str. * Documented the new artefact set handling. * Fixed typo. * Make the PSyclone API configurable. * Fixed formatting of documentation, properly used ArtefactSet names. * Support .f and .F Fortran files. * Removed setter for tool.is_available, which was only used for testing. * #3 Fix documentation and coding style issues from review. * Renamed Categories into Category. * Minor coding style cleanup. * Removed more unnecessary (). * Re-added (invalid) grab_pre_build call. * Fixed typo. * Renamed set_default_vendor to set_default_compiler_suite. * Renamed VendorTool to CompilerSuiteTool. * Also accept a Path as exec_name specification for a tool. * Move the check_available function into the base class. * Fixed some types and documentation. * Fix typing error. * Added explanation for meta-compiler. * Improved error handling and documentation. * Replace mpiifort with mpifort to be a tiny bit more portable. * Use classes to group tests for git/svn/fcm together. * Fixed issue in get_transformation script, and moved script into lfric_common to remove code duplication. * Code improvement as suggested by review.
* Fixed run config * Added reference to ticket. * Updated type information. * More typing fixes. * Fixed typing warnings. * As requested by reviewer removed is_working_copy functionality. * Issue a warning (which can be silenced) when a tool in a toolbox is replaced. * Fixed flake8. * Fixed flake8. * Fixed failing test. * Addressed issues raised in review. * Removed now unnecessary operations. * Updated some type information. * Fixed all references to APIs to be consistent with PSyclone 2.5. * Added api to the checksum computation. * Fixed type information. * Added test to verify that changing the api changes the checksum. * Make compiler version a tuple of integers * Update some tests to use tuple versions * Explicitly test handling of bad version format * Fix formatting * Tidying up * Make compiler raise an error for any invalid version string Assume these compilers don't need to be hashed. Saves dealing with empty tuples. * Check compiler version string for compiler name * Fix formatting * Add compiler.get_version_string() method Includes other cleanup from PR comments * Add mpi and openmp settings to BuildConfig, made compiler MPI aware. * Looks like the circular dependency has been fixed. * Revert "Looks like the circular dependency has been fixed." ... while it works with the tests, a real application still triggered it. This reverts commit 150dc379af9df8c38e623fae144a0d5196319f10. * Don't even try to find a C compiler if no C files are to be compiled. * Updated gitignore to ignore (recently renamed) documentation. * Fixed failing test. * Return from compile Fortran early if there are no files to compiles. Fixed coding style. * Add MPI enables wrapper for intel and gnu compiler. * Fixed test. * Automatically add openmp flag to compiler and linker based on BuildConfig. * Removed enforcement of keyword parameters, which is not supported in python 3.7. * Fixed failing test. * Support more than one tool of a given suite by sorting them. * Use different version checkout for each compiler vendor with mixins * Refactoring, remove unittest compiler class * Fix some mypy errors * Use 'Union' type hint to fix build checks * Added option to add flags to a tool. * Introduce proper compiler wrapper, used this to implement properly wrapper MPI compiler. * Fixed typo in types. * Return run_version_command to base Compiler class Provides default version command that can be overridden for other compilers. Also fix some incorrect tests Other tidying * Add a missing type hint * Added (somewhat stupid) 'test' to reach 100% coverage of PSyclone tool. * Simplified MPI support in wrapper. * More compiler wrapper coverage. * Removed duplicated function. * Removed debug print. * Removed permanently changing compiler attributes, which can cause test failures later. * More test for C compiler wrapper. * More work on compiler wrapper tests. * Fixed version and availability handling, added missing tests for 100% coverage. * Fixed typing error. * Try to fix python 3.7. * Tried to fix failing tests. * Remove inheritance from mixins and use protocol * Simplify compiler inheritance Mixins have static methods with unique names, overrides only happen in concrete classes * Updated wrapper and tests to handle error raised in get_version. * Simplified regular expressions (now tests cover detection of version numbers with only a major version). * Test for missing mixin. * Use the parsing mixing from the compiler in a compiler wrapper. * Use setattr instead of assignment to make mypy happy. 
* Simplify usage of compiler-specific parsing mixins. * Minor code cleanup. * Updated documentation. * Simplify usage of compiler-specific parsing mixins. * Test for missing mixin. * Fixed test. * Added missing openmp_flag property to compiler wrapper. * Don't use isinstance for consistency check, which does not work for CompilerWrappers. * Fixed isinstance test for C compilation which doesn't work with a CompilerWrapper. * Use a linker's compiler to determine MPI support. Removed mpi property from CompilerSuite. * Added more tests for invalid version numbers. * Added more test cases for invalid version number, improved regex to work as expected. * Fixed typo in test. * Fixed flake/mypy errors. * Combine wrapper flags with flags from wrapped compiler. * Made mypy happy. * Fixed test. * Split tests into smaller individual ones, fixed missing asssert in test. * Parameterised compiler version tests to also test wrapper. * Added missing MPI parameter when getting the compiler. * Fixed comments. * Order parameters to be in same order for various compiler classes. * Remove stray character * Added getter for wrapped compiler. * Fixed small error that would prevent nested compiler wrappers from being used. * Added a cast to make mypy happy. * Add simple getter for linker library flags * Add getter for linker flags by library * Fix formatting * Add optional libs argument to link function * Reorder and clean up linker tests * Make sure `Linker.link()` raises for unknown lib * Add missing type * Fix typing error * Add 'libs' argument to link_exe function * Try to add documentation for the linker libs feature * Use correct list type in link_exe hint * Add silent replace option to linker.add_lib_flags * Fixed spelling mistake in option. * Clarified documentation. * Removed unnecessary functions in CompilerWrapper. * Fixed failing test triggered by executing them in specific order (tools then steps) * Fixed line lengths. * Add tests for linker LDFLAG * Add pre- and post- lib flags to link function * Fix syntax in built-in lib flags * Remove netcdf as a built-in linker library Bash-style substitution is not currently handled * Configure pre- and post-lib flags on the Linker object Previously they were passed into the Linker.link() function * Use more realistic linker lib flags * Formatting fix * Removed mixing, use a simple regex instead. * Added support for ifx/icx compiler as intel-llvm class. * Added support for nvidia compiler. * Add preliminary support for Cray compiler. * Added Cray compiler wrapper ftn and cc. * Made mpi and openmp default to False in the BuildConfig constructor. * Removed white space. * Follow a more consistent naming scheme for crays, even though the native compiler names are longer (crayftn-cray, craycc-cray). * Changed names again. * Support compilers that do not support OpenMP. * Added documentation for openmp parameter. * Renamed cray compiler wrapper to be CrayCcWrapper and CrayFtnWrapper, to avoid confusion with Craycc. * Fixed incorrect name in comments. --------- Co-authored-by: jasonjunweilyu <161689601+jasonjunweilyu@users.noreply.github.com> Co-authored-by: Luke Hoffmann Co-authored-by: Luke Hoffmann <992315+lukehoffmann@users.noreply.github.com> * Support new and old style of PSyclone command line (no more nemo api etc) * Fix mypy errors. * Added missing tests for calling psyclone, and converting old style to new stle arguments and vice versa. * Added shell tool. * Try to make mypy happy. * Removed debug code. 
* ToolRepository now only returns default that are available. Updated tests to make tools as available. * Fixed typos and coding style. * Support new and old style of PSyclone command line (no more nemo api etc) * Fix mypy errors. * Added missing tests for calling psyclone, and converting old style to new stle arguments and vice versa. * Updated comment. * Fixed failing tests. * Updated fparser dependency to version 0.2. * Replace old code for handling sentinels with triggering this behaviour in fparser. Require config in constructor of Analyser classes. * Fixed tests for latest changes. * Removed invalid openmp continuation line - since now fparser fails when trying to parse this line. * Added test for disabled openmp parsing. Updated test to work with new test file. * Coding style changes. * Fix flake issues. * Fixed double _. * Removed more accesses to private members. * Added missing type hint. * Make flake8 happy. --------- Co-authored-by: Luke Hoffmann <992315+lukehoffmann@users.noreply.github.com> Co-authored-by: jasonjunweilyu <161689601+jasonjunweilyu@users.noreply.github.com> Co-authored-by: Luke Hoffmann --- pyproject.toml | 2 +- source/fab/parse/c.py | 42 +++++----- source/fab/parse/fortran.py | 41 +++------- source/fab/parse/fortran_common.py | 76 +++++++++++++------ source/fab/parse/x90.py | 5 +- source/fab/steps/analyse.py | 10 +-- source/fab/steps/psyclone.py | 8 +- source/fab/tools/tool.py | 3 +- .../psyclone/test_psyclone_system_test.py | 4 +- tests/unit_tests/parse/c/test_c_analyser.py | 19 ++--- .../parse/fortran/test_fortran_analyser.f90 | 1 - .../parse/fortran/test_fortran_analyser.py | 39 +++++++--- 12 files changed, 140 insertions(+), 110 deletions(-) diff --git a/pyproject.toml b/pyproject.toml index bd6bdb74..26f67d53 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -7,7 +7,7 @@ authors = [ license = {file = 'LICENSE.txt'} dynamic = ['version', 'readme'] requires-python = '>=3.7, <4' -dependencies = ['fparser'] +dependencies = ['fparser >= 0.2'] classifiers = [ 'Development Status :: 1 - Planning', 'Environment :: Console', diff --git a/source/fab/parse/c.py b/source/fab/parse/c.py index 24c26045..02cf2853 100644 --- a/source/fab/parse/c.py +++ b/source/fab/parse/c.py @@ -11,14 +11,14 @@ from pathlib import Path from typing import List, Optional, Union, Tuple -from fab.dep_tree import AnalysedDependent - try: import clang # type: ignore import clang.cindex # type: ignore except ImportError: clang = None +from fab.build_config import BuildConfig +from fab.dep_tree import AnalysedDependent from fab.util import log_or_dot, file_checksum logger = logging.getLogger(__name__) @@ -26,29 +26,33 @@ class AnalysedC(AnalysedDependent): """ - An analysis result for a single C file, containing symbol definitions and dependencies. + An analysis result for a single C file, containing symbol definitions and + dependencies. - Note: We don't need to worry about compile order with pure C projects; we can compile all in one go. - However, with a *Fortran -> C -> Fortran* dependency chain, we do need to ensure that one Fortran file - is compiled before another, so this class must be part of the dependency tree analysis. + Note: We don't need to worry about compile order with pure C projects; we + can compile all in one go. However, with a *Fortran -> C -> Fortran* + dependency chain, we do need to ensure that one Fortran file is + compiled before another, so this class must be part of the + dependency tree analysis. 
""" - # Note: This subclass adds nothing to it's parent, which provides everything it needs. - # We'd normally remove an irrelevant class like this but we want to keep the door open - # for filtering analysis results by type, rather than suffix. - pass + # Note: This subclass adds nothing to it's parent, which provides + # everything it needs. We'd normally remove an irrelevant class + # like this but we want to keep the door open for filtering + # analysis results by type, rather than suffix. -class CAnalyser(object): +class CAnalyser: """ Identify symbol definitions and dependencies in a C file. """ - def __init__(self): + def __init__(self, config: BuildConfig): # runtime - self._config = None + self._config = config + self._include_region: List[Tuple[int, str]] = [] # todo: simplifiy by passing in the file path instead of the analysed tokens? def _locate_include_regions(self, trans_unit) -> None: @@ -100,8 +104,7 @@ def _check_for_include(self, lineno) -> Optional[str]: include_stack.pop() if include_stack: return include_stack[-1] - else: - return None + return None def run(self, fpath: Path) \ -> Union[Tuple[AnalysedC, Path], Tuple[Exception, None]]: @@ -149,9 +152,11 @@ def run(self, fpath: Path) \ continue logger.debug('Considering node: %s', node.spelling) - if node.kind in {clang.cindex.CursorKind.FUNCTION_DECL, clang.cindex.CursorKind.VAR_DECL}: + if node.kind in {clang.cindex.CursorKind.FUNCTION_DECL, + clang.cindex.CursorKind.VAR_DECL}: self._process_symbol_declaration(analysed_file, node, usr_symbols) - elif node.kind in {clang.cindex.CursorKind.CALL_EXPR, clang.cindex.CursorKind.DECL_REF_EXPR}: + elif node.kind in {clang.cindex.CursorKind.CALL_EXPR, + clang.cindex.CursorKind.DECL_REF_EXPR}: self._process_symbol_dependency(analysed_file, node, usr_symbols) except Exception as err: logger.exception(f'error walking parsed nodes {fpath}') @@ -166,7 +171,8 @@ def _process_symbol_declaration(self, analysed_file, node, usr_symbols): if node.is_definition(): # only global symbols can be used by other files, not static symbols if node.linkage == clang.cindex.LinkageKind.EXTERNAL: - # This should catch function definitions which are exposed to the rest of the application + # This should catch function definitions which are exposed to + # the rest of the application logger.debug(' * Is defined in this file') # todo: ignore if inside user pragmas? 
analysed_file.add_symbol_def(node.spelling) diff --git a/source/fab/parse/fortran.py b/source/fab/parse/fortran.py index c5ea7f60..ed2e14d3 100644 --- a/source/fab/parse/fortran.py +++ b/source/fab/parse/fortran.py @@ -11,7 +11,6 @@ from pathlib import Path from typing import Union, Optional, Iterable, Dict, Any, Set -from fparser.common.readfortran import FortranStringReader # type: ignore from fparser.two.Fortran2003 import ( # type: ignore Entity_Decl_List, Use_Stmt, Module_Stmt, Program_Stmt, Subroutine_Stmt, Function_Stmt, Language_Binding_Spec, Char_Literal_Constant, Interface_Block, Name, Comment, Module, Call_Stmt, Derived_Type_Def, Derived_Type_Stmt, @@ -21,6 +20,7 @@ from fparser.two.Fortran2008 import ( # type: ignore Type_Declaration_Stmt, Attr_Spec_List) +from fab.build_config import BuildConfig from fab.dep_tree import AnalysedDependent from fab.parse.fortran_common import iter_content, _has_ancestor_type, _typed_child, FortranAnalyserBase from fab.util import file_checksum, string_checksum @@ -167,15 +167,21 @@ class FortranAnalyser(FortranAnalyserBase): A build step which analyses a fortran file using fparser2, creating an :class:`~fab.dep_tree.AnalysedFortran`. """ - def __init__(self, std=None, ignore_mod_deps: Optional[Iterable[str]] = None): + def __init__(self, + config: BuildConfig, + std: Optional[str] = None, + ignore_mod_deps: Optional[Iterable[str]] = None): """ + :param config: The BuildConfig to use. :param std: The Fortran standard. :param ignore_mod_deps: Module names to ignore in use statements. """ - super().__init__(result_class=AnalysedFortran, std=std) + super().__init__(config=config, + result_class=AnalysedFortran, + std=std) self.ignore_mod_deps: Iterable[str] = list(ignore_mod_deps or []) self.depends_on_comment_found = False @@ -295,33 +301,6 @@ def _process_comment(self, analysed_file, obj): # without .o means a fortran symbol else: analysed_file.add_symbol_dep(dep) - if comment[:2] == "!$": - # Check if it is a use statement with an OpenMP sentinel: - # Use fparser's string reader to discard potential comment - # TODO #327: once fparser supports reading the sentinels, - # this can be removed. - # fparser issue: https://github.com/stfc/fparser/issues/443 - reader = FortranStringReader(comment[2:]) - try: - line = reader.next() - except StopIteration: - # No other item, ignore - return - try: - # match returns a 5-tuple, the third one being the module name - module_name = Use_Stmt.match(line.strline)[2] - module_name = module_name.string - except Exception: - # Not a use statement in a sentinel, ignore: - return - - # Register the module name - if module_name in self.ignore_mod_deps: - logger.debug(f"ignoring use of {module_name}") - return - if module_name.lower() not in self._intrinsic_modules: - # found a dependency on fortran - analysed_file.add_module_dep(module_name) def _process_subroutine_or_function(self, analysed_file, fpath, obj): # binding? @@ -353,7 +332,7 @@ def _process_subroutine_or_function(self, analysed_file, fpath, obj): analysed_file.add_symbol_def(name.string) -class FortranParserWorkaround(object): +class FortranParserWorkaround(): """ Use this class to create a workaround when the third-party Fortran parser is unable to process a valid source file. 
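The hunks above change how the analysers are constructed: a BuildConfig is now a required constructor argument for CAnalyser, FortranAnalyser and X90Analyser, replacing the old pattern of assigning the private _config attribute after construction. A minimal sketch of the new calling convention (the workspace path and source file name are illustrative only, and openmp=True is shown to highlight the new sentinel handling):

from pathlib import Path

from fab.build_config import BuildConfig
from fab.parse.c import CAnalyser
from fab.parse.fortran import FortranAnalyser
from fab.tools import ToolBox

with BuildConfig('proj', ToolBox(), fab_workspace=Path('fab-workspace'),
                 openmp=True) as config:
    # The config is passed at construction time; there is no separate
    # "analyser._config = config" step any more.
    fortran_analyser = FortranAnalyser(config=config, std='f2008')
    c_analyser = CAnalyser(config=config)

    # run() returns the analysis result plus the prebuild file it was
    # saved to (or reloaded from).  With openmp=True, fparser also
    # parses "!$" sentinel lines, so use statements behind a sentinel
    # are recorded as module dependencies.
    analysis, prebuild_file = fortran_analyser.run(Path('my_mod.f90'))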
diff --git a/source/fab/parse/fortran_common.py b/source/fab/parse/fortran_common.py index 0ed4f3fe..5942dc8c 100644 --- a/source/fab/parse/fortran_common.py +++ b/source/fab/parse/fortran_common.py @@ -10,13 +10,14 @@ import logging from abc import ABC, abstractmethod from pathlib import Path -from typing import Union, Tuple, Type +from typing import Optional, Tuple, Type, Union from fparser.common.readfortran import FortranFileReader # type: ignore from fparser.two.parser import ParserFactory # type: ignore from fparser.two.utils import FortranSyntaxError # type: ignore from fab import FabException +from fab.build_config import BuildConfig from fab.dep_tree import AnalysedDependent from fab.parse import EmptySourceFile from fab.util import log_or_dot, file_checksum @@ -58,7 +59,8 @@ def _typed_child(parent, child_type: Type, must_exist=False): # Look for a child of a certain type. # Returns the child or None. # Raises ValueError if more than one child of the given type is found. - children = list(filter(lambda child: isinstance(child, child_type), parent.children)) + children = list(filter(lambda child: isinstance(child, child_type), + parent.children)) if len(children) > 1: raise ValueError(f"too many children found of type {child_type}") @@ -66,41 +68,52 @@ def _typed_child(parent, child_type: Type, must_exist=False): return children[0] if must_exist: - raise FabException(f'Could not find child of type {child_type} in {parent}') + raise FabException(f'Could not find child of type {child_type} ' + f'in {parent}') return None class FortranAnalyserBase(ABC): """ - Base class for Fortran parse-tree analysers, e.g FortranAnalyser and X90Analyser. + Base class for Fortran parse-tree analysers, e.g FortranAnalyser and + X90Analyser. """ _intrinsic_modules = ['iso_fortran_env', 'iso_c_binding'] - def __init__(self, result_class, std=None): + def __init__(self, config: BuildConfig, + result_class, + std: Optional[str] = None): """ + :param config: The BuildConfig object. :param result_class: The type (class) of the analysis result. Defined by the subclass. :param std: The Fortran standard. """ + self._config = config self.result_class = result_class self.f2008_parser = ParserFactory().create(std=std or "f2008") - # todo: this, and perhaps other runtime variables like it, might be better set at construction - # if we construct these objects at runtime instead... - # runtime, for child processes to read - self._config = None + @property + def config(self) -> BuildConfig: + '''Returns the BuildConfig to use. + ''' + return self._config def run(self, fpath: Path) \ - -> Union[Tuple[AnalysedDependent, Path], Tuple[EmptySourceFile, None], Tuple[Exception, None]]: + -> Union[Tuple[AnalysedDependent, Path], + Tuple[EmptySourceFile, None], + Tuple[Exception, None]]: """ - Parse the source file and record what we're interested in (subclass specific). + Parse the source file and record what we're interested in (subclass + specific). Reloads previous analysis results if available. - Returns the analysis data and the result file where it was stored/loaded. + Returns the analysis data and the result file where it was + stored/loaded. """ # calculate the prebuild filename @@ -114,9 +127,11 @@ def run(self, fpath: Path) \ # Load the result file into whatever result class we use. loaded_result = self.result_class.load(analysis_fpath) if loaded_result: - # This result might have been created by another user; their prebuild folder copied to ours. 
- # If so, the fpath in the result will *not* point to the file we eventually want to compile, - # it will point to the user's original file, somewhere else. So replace it with our own path. + # This result might have been created by another user; their + # prebuild folder copied to ours. If so, the fpath in the + # result will *not* point to the file we eventually want to + # compile, it will point to the user's original file, + # somewhere else. So replace it with our own path. loaded_result.fpath = fpath return loaded_result, analysis_fpath @@ -125,43 +140,54 @@ def run(self, fpath: Path) \ # parse the file, get a node tree node_tree = self._parse_file(fpath=fpath) if isinstance(node_tree, Exception): - return Exception(f"error parsing file '{fpath}':\n{node_tree}"), None + return (Exception(f"error parsing file '{fpath}':\n{node_tree}"), + None) if node_tree.content[0] is None: logger.debug(f" empty tree found when parsing {fpath}") - # todo: If we don't save the empty result we'll keep analysing it every time! + # todo: If we don't save the empty result we'll keep analysing + # it every time! return EmptySourceFile(fpath), None # find things in the node tree - analysed_file = self.walk_nodes(fpath=fpath, file_hash=file_hash, node_tree=node_tree) + analysed_file = self.walk_nodes(fpath=fpath, file_hash=file_hash, + node_tree=node_tree) analysed_file.save(analysis_fpath) return analysed_file, analysis_fpath def _get_analysis_fpath(self, fpath, file_hash) -> Path: - return Path(self._config.prebuild_folder / f'{fpath.stem}.{file_hash}.an') + return Path(self.config.prebuild_folder / + f'{fpath.stem}.{file_hash}.an') def _parse_file(self, fpath): """Get a node tree from a fortran file.""" - reader = FortranFileReader(str(fpath), ignore_comments=False) - reader.exit_on_error = False # don't call sys.exit, it messes up the multi-processing + reader = FortranFileReader( + str(fpath), + ignore_comments=False, + include_omp_conditional_lines=self.config.openmp) + # don't call sys.exit, it messes up the multi-processing + reader.exit_on_error = False try: tree = self.f2008_parser(reader) return tree except FortranSyntaxError as err: - # we can't return the FortranSyntaxError, it breaks multiprocessing! + # Don't return the FortranSyntaxError, it breaks multiprocessing! logger.error(f"\nfparser raised a syntax error in {fpath}\n{err}") return Exception(f"syntax error in {fpath}\n{err}") except Exception as err: logger.error(f"\nunhandled error '{type(err)}' in {fpath}\n{err}") - return Exception(f"unhandled error '{type(err)}' in {fpath}\n{err}") + return Exception(f"unhandled error '{type(err)}' in " + f"{fpath}\n{err}") @abstractmethod def walk_nodes(self, fpath, file_hash, node_tree) -> AnalysedDependent: """ - Examine the nodes in the parse tree, recording things we're interested in. + Examine the nodes in the parse tree, recording things we're + interested in. - Return type depends on our subclass, and will be a subclass of AnalysedDependent. + Return type depends on our subclass, and will be a subclass of + AnalysedDependent. 
""" raise NotImplementedError diff --git a/source/fab/parse/x90.py b/source/fab/parse/x90.py index 902c01fe..09d51718 100644 --- a/source/fab/parse/x90.py +++ b/source/fab/parse/x90.py @@ -9,6 +9,7 @@ from fparser.two.Fortran2003 import Use_Stmt, Call_Stmt, Name, Only_List, Actual_Arg_Spec_List, Part_Ref # type: ignore from fab.parse import AnalysedFile +from fab.build_config import BuildConfig from fab.parse.fortran_common import FortranAnalyserBase, iter_content, logger, _typed_child from fab.util import by_type @@ -64,8 +65,8 @@ class X90Analyser(FortranAnalyserBase): # Makes a parsable fortran version of x90. # todo: Use hashing to reuse previous analysis results. - def __init__(self): - super().__init__(result_class=AnalysedX90) + def __init__(self, config: BuildConfig): + super().__init__(config=config, result_class=AnalysedX90) def walk_nodes(self, fpath, file_hash, node_tree) -> AnalysedX90: # type: ignore diff --git a/source/fab/steps/analyse.py b/source/fab/steps/analyse.py index 1b739e56..49d4b55a 100644 --- a/source/fab/steps/analyse.py +++ b/source/fab/steps/analyse.py @@ -130,8 +130,10 @@ def analyse( unreferenced_deps = list(unreferenced_deps or []) # todo: these seem more like functions - fortran_analyser = FortranAnalyser(std=std, ignore_mod_deps=ignore_mod_deps) - c_analyser = CAnalyser() + fortran_analyser = FortranAnalyser(config=config, + std=std, + ignore_mod_deps=ignore_mod_deps) + c_analyser = CAnalyser(config=config) # Creates the *build_trees* artefact from the files in `self.source_getter`. @@ -144,10 +146,6 @@ def analyse( # - At this point we have a source tree for the entire source. # - (Optionally) Extract a sub tree for every root symbol, if provided. For building executables. - # todo: code smell - refactor (in another PR to keep things small) - fortran_analyser._config = config - c_analyser._config = config - # parse files: List[Path] = source_getter(config.artefact_store) analysed_files = _parse_files(config, files=files, fortran_analyser=fortran_analyser, c_analyser=c_analyser) diff --git a/source/fab/steps/psyclone.py b/source/fab/steps/psyclone.py index 04c1cc27..c9bb3990 100644 --- a/source/fab/steps/psyclone.py +++ b/source/fab/steps/psyclone.py @@ -192,8 +192,7 @@ def _analyse_x90s(config, x90s: Set[Path]) -> Dict[Path, AnalysedX90]: parsable_x90s = run_mp(config, items=x90s, func=make_parsable_x90) # parse - x90_analyser = X90Analyser() - x90_analyser._config = config + x90_analyser = X90Analyser(config=config) with TimerLogger(f"analysing {len(parsable_x90s)} parsable x90 files"): x90_results = run_mp(config, items=parsable_x90s, func=x90_analyser.run) log_or_dot_finish(logger) @@ -209,7 +208,7 @@ def _analyse_x90s(config, x90s: Set[Path]) -> Dict[Path, AnalysedX90]: analysed_x90 = {result.fpath.with_suffix('.x90'): result for result in analysed_x90} # make the hashes from the original x90s, not the parsable versions which have invoke names removed. - for p, r in analysed_x90.items(): + for p in analysed_x90: analysed_x90[p]._file_hash = file_checksum(p).file_hash return analysed_x90 @@ -249,8 +248,7 @@ def _analyse_kernels(config, kernel_roots) -> Dict[str, int]: # We use the normal Fortran analyser, which records psyclone kernel metadata. # todo: We'd like to separate that from the general fortran analyser at some point, to reduce coupling. # The Analyse step also uses the same fortran analyser. It stores its results so they won't be analysed twice. 
- fortran_analyser = FortranAnalyser() - fortran_analyser._config = config + fortran_analyser = FortranAnalyser(config=config) with TimerLogger(f"analysing {len(kernel_files)} potential psyclone kernel files"): fortran_results = run_mp(config, items=kernel_files, func=fortran_analyser.run) log_or_dot_finish(logger) diff --git a/source/fab/tools/tool.py b/source/fab/tools/tool.py index 78a8de62..a870c657 100644 --- a/source/fab/tools/tool.py +++ b/source/fab/tools/tool.py @@ -63,7 +63,8 @@ def check_available(self) -> bool: :returns: whether the tool is working (True) or not. ''' try: - self.run(self._availability_option) + op = self._availability_option + self.run(op) except (RuntimeError, FileNotFoundError): return False return True diff --git a/tests/system_tests/psyclone/test_psyclone_system_test.py b/tests/system_tests/psyclone/test_psyclone_system_test.py index 3c16fd4a..adc35315 100644 --- a/tests/system_tests/psyclone/test_psyclone_system_test.py +++ b/tests/system_tests/psyclone/test_psyclone_system_test.py @@ -48,8 +48,8 @@ def test_make_parsable_x90(tmp_path): parsable_x90_path = make_parsable_x90(input_x90_path) - x90_analyser = X90Analyser() with BuildConfig('proj', ToolBox(), fab_workspace=tmp_path) as config: + x90_analyser = X90Analyser(config=config) x90_analyser._config = config # todo: code smell x90_analyser.run(parsable_x90_path) @@ -72,8 +72,8 @@ class TestX90Analyser: def run(self, tmp_path): parsable_x90_path = self.expected_analysis_result.fpath - x90_analyser = X90Analyser() with BuildConfig('proj', ToolBox(), fab_workspace=tmp_path) as config: + x90_analyser = X90Analyser(config=config) x90_analyser._config = config analysed_x90, _ = x90_analyser.run(parsable_x90_path) # type: ignore # don't delete the prebuild diff --git a/tests/unit_tests/parse/c/test_c_analyser.py b/tests/unit_tests/parse/c/test_c_analyser.py index c288baf9..7d457da9 100644 --- a/tests/unit_tests/parse/c/test_c_analyser.py +++ b/tests/unit_tests/parse/c/test_c_analyser.py @@ -17,9 +17,9 @@ def test_simple_result(tmp_path): - c_analyser = CAnalyser() - c_analyser._config = BuildConfig('proj', ToolBox(), mpi=False, - openmp=False, fab_workspace=tmp_path) + config = BuildConfig('proj', ToolBox(), mpi=False, openmp=False, + fab_workspace=tmp_path) + c_analyser = CAnalyser(config) with mock.patch('fab.parse.AnalysedFile.save'): fpath = Path(__file__).parent / "test_c_analyser.c" @@ -72,7 +72,7 @@ def __init__(self, spelling, line): mock_trans_unit = Mock() mock_trans_unit.cursor.get_tokens.return_value = tokens - analyser = CAnalyser() + analyser = CAnalyser(config=None) analyser._locate_include_regions(mock_trans_unit) assert analyser._include_region == expect @@ -81,7 +81,7 @@ def __init__(self, spelling, line): class Test__check_for_include: def test_vanilla(self): - analyser = CAnalyser() + analyser = CAnalyser(config=None) analyser._include_region = [ (10, "sys_include_start"), (20, "sys_include_end"), @@ -113,7 +113,7 @@ def _definition(self, spelling, linkage): node.linkage = linkage node.spelling = spelling - analyser = CAnalyser() + analyser = CAnalyser(config=None) analysed_file = Mock() analyser._process_symbol_declaration(analysed_file=analysed_file, node=node, usr_symbols=None) @@ -134,7 +134,7 @@ def _declaration(self, spelling, include_type): node.is_definition.return_value = False node.spelling = spelling - analyser = CAnalyser() + analyser = CAnalyser(config=None) analyser._check_for_include = Mock(return_value=include_type) usr_symbols = [] @@ -155,7 +155,7 @@ def 
test_not_usr_symbol(self): analysed_file.add_symbol_dep.assert_not_called() def _dependency(self, spelling, usr_symbols): - analyser = CAnalyser() + analyser = CAnalyser(config=None) analysed_file = Mock() node = Mock(spelling=spelling) @@ -168,7 +168,8 @@ def test_clang_disable(): with mock.patch('fab.parse.c.clang', None): with mock.patch('fab.parse.c.file_checksum') as mock_file_checksum: - result = CAnalyser().run(Path(__file__).parent / "test_c_analyser.c") + c_analyser = CAnalyser(config=None) + result = c_analyser.run(Path(__file__).parent / "test_c_analyser.c") assert isinstance(result[0], ImportWarning) mock_file_checksum.assert_not_called() diff --git a/tests/unit_tests/parse/fortran/test_fortran_analyser.f90 b/tests/unit_tests/parse/fortran/test_fortran_analyser.f90 index 508ba56b..2c530269 100644 --- a/tests/unit_tests/parse/fortran/test_fortran_analyser.f90 +++ b/tests/unit_tests/parse/fortran/test_fortran_analyser.f90 @@ -19,7 +19,6 @@ END SUBROUTINE internal_sub SUBROUTINE openmp_sentinel !$ USE compute_chunk_size_mod, ONLY: compute_chunk_size ! Note OpenMP sentinel -!$ USE test that is not a sentinel with a use statement inside !GCC$ unroll 6 !DIR$ assume (mod(p, 6) == 0) !$omp do diff --git a/tests/unit_tests/parse/fortran/test_fortran_analyser.py b/tests/unit_tests/parse/fortran/test_fortran_analyser.py index 75621020..6eb28f3c 100644 --- a/tests/unit_tests/parse/fortran/test_fortran_analyser.py +++ b/tests/unit_tests/parse/fortran/test_fortran_analyser.py @@ -3,6 +3,11 @@ # For further details please refer to the file COPYRIGHT # which you should have received as part of this distribution # ############################################################################## + +'''Tests the Fortran analyser. +''' + + from pathlib import Path from tempfile import NamedTemporaryFile from unittest import mock @@ -18,9 +23,6 @@ from fab.parse.fortran_common import iter_content from fab.tools import ToolBox -'''Tests the Fortran analyser. -''' - # todo: test function binding @@ -36,7 +38,7 @@ def module_expected(module_fpath) -> AnalysedFortran: test module.''' return AnalysedFortran( fpath=module_fpath, - file_hash=1757501304, + file_hash=3737289404, module_defs={'foo_mod'}, symbol_defs={'external_sub', 'external_func', 'foo_mod'}, module_deps={'bar_mod', 'compute_chunk_size_mod'}, @@ -50,9 +52,10 @@ class TestAnalyser: @pytest.fixture def fortran_analyser(self, tmp_path): - fortran_analyser = FortranAnalyser() - fortran_analyser._config = BuildConfig('proj', ToolBox(), - fab_workspace=tmp_path) + # Enable openmp, so fparser will handle the lines with omp sentinels + config = BuildConfig('proj', ToolBox(), + fab_workspace=tmp_path, openmp=True) + fortran_analyser = FortranAnalyser(config=config) return fortran_analyser def test_empty_file(self, fortran_analyser): @@ -71,6 +74,24 @@ def test_module_file(self, fortran_analyser, module_fpath, assert artefact == (fortran_analyser._config.prebuild_folder / f'test_fortran_analyser.{analysis.file_hash}.an') + def test_module_file_no_openmp(self, fortran_analyser, module_fpath, + module_expected): + '''Disable OpenMP, meaning the dependency on compute_chunk_size_mod + should not be detected anymore. + ''' + fortran_analyser.config._openmp = False + with mock.patch('fab.parse.AnalysedFile.save'): + analysis, artefact = fortran_analyser.run(fpath=module_fpath) + + # Without parsing openmp sentinels, the compute_chunk... 
symbols + # must not be added: + module_expected.module_deps.remove('compute_chunk_size_mod') + module_expected.symbol_deps.remove('compute_chunk_size_mod') + + assert analysis == module_expected + assert artefact == (fortran_analyser._config.prebuild_folder / + f'test_fortran_analyser.{analysis.file_hash}.an') + def test_program_file(self, fortran_analyser, module_fpath, module_expected): # same as test_module_file() but replacing MODULE with PROGRAM @@ -83,7 +104,7 @@ def test_program_file(self, fortran_analyser, module_fpath, fpath=Path(tmp_file.name)) module_expected.fpath = Path(tmp_file.name) - module_expected._file_hash = 3388519280 + module_expected._file_hash = 325155675 module_expected.program_defs = {'foo_mod'} module_expected.module_defs = set() module_expected.symbol_defs.update({'internal_func', @@ -133,7 +154,7 @@ def test_define_without_bind_name(self, tmp_path): # run our handler fpath = Path('foo') analysed_file = AnalysedFortran(fpath=fpath, file_hash=0) - analyser = FortranAnalyser() + analyser = FortranAnalyser(config=None) analyser._process_variable_binding(analysed_file=analysed_file, obj=var_decl) From 54a234bca7e20a9941d4b9862c90c478b2adc815 Mon Sep 17 00:00:00 2001 From: Joerg Henrichs Date: Thu, 23 Jan 2025 13:40:38 +1100 Subject: [PATCH 2/6] Updated clang dependency name (in synch with what is already done on main). (#378) --- pyproject.toml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pyproject.toml b/pyproject.toml index 26f67d53..e165f7a9 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -19,7 +19,7 @@ classifiers = [ ] [project.optional-dependencies] -c-language = ['clang'] +c-language = ['libclang'] plots = ['matplotlib'] tests = ['pytest', 'pytest-cov', 'pytest-mock'] checks = ['flake8>=5.0.4', 'mypy'] From c585ff734a83b97cd945ff96468bcf56eb3ce46c Mon Sep 17 00:00:00 2001 From: Joerg Henrichs Date: Thu, 30 Jan 2025 10:18:52 +1100 Subject: [PATCH 3/6] Linker wrapper new (#375) * Support new and old style of PSyclone command line (no more nemo api etc) * Fix mypy errors. * Added missing tests for calling psyclone, and converting old style to new stle arguments and vice versa. * Updated comment. * Removed mixing, use a simple regex instead. * Added support for ifx/icx compiler as intel-llvm class. * Added support for nvidia compiler. * Add preliminary support for Cray compiler. * Added Cray compiler wrapper ftn and cc. * Follow a more consistent naming scheme for crays, even though the native compiler names are longer (crayftn-cray, craycc-cray). * Changed names again. * Renamed cray compiler wrapper to be CrayCcWrapper and CrayFtnWrapper, to avoid confusion with Craycc. * Fixed incorrect name in comments. * Additional compilers (#349) * Moved OBJECT_ARCHIVES from constants to ArtefactSet. * Moved PRAGMAD_C from constants to ArtefactSet. * Turned 'all_source' into an enum. * Allow integer as revision. * Fixed flake8 error. * Removed specific functions to add/get fortran source files etc. * Removed non-existing and unneccessary collections. * Try to fix all run_configs. * Fixed rebase issues. * Added replace functionality to ArtefactStore, updated test_artefacts to cover all lines in that file. * Started to replace artefacts when files are pre-processed. * Removed linker argument from linking step in all examples. * Try to get jules to link. * Fixed build_jules. * Fixed other issues raised in reviews. * Try to get jules to link. * Fixed other issues raised in reviews. 
* Simplify handling of X90 files by replacing the X90 with x90, meaning only one artefact set is involved when running PSyclone. * Make OBJECT_ARCHIVES also a dict, migrate more code to replace/add files to the default build artefact collections. * Fixed some examples. * Fix flake8 error. * Fixed failing tests. * Support empty comments. * Fix preprocessor to not unnecessary remove and add files that are already in the output directory. * Allow find_soure_files to be called more than once by adding files (not replacing artefact). * Updated lfric_common so that files created by configurator are written in build (not source). * Use c_build_files instead of pragmad_c. * Removed unnecessary str. * Documented the new artefact set handling. * Fixed typo. * Make the PSyclone API configurable. * Fixed formatting of documentation, properly used ArtefactSet names. * Support .f and .F Fortran files. * Removed setter for tool.is_available, which was only used for testing. * #3 Fix documentation and coding style issues from review. * Renamed Categories into Category. * Minor coding style cleanup. * Removed more unnecessary (). * Re-added (invalid) grab_pre_build call. * Fixed typo. * Renamed set_default_vendor to set_default_compiler_suite. * Renamed VendorTool to CompilerSuiteTool. * Also accept a Path as exec_name specification for a tool. * Move the check_available function into the base class. * Fixed some types and documentation. * Fix typing error. * Added explanation for meta-compiler. * Improved error handling and documentation. * Replace mpiifort with mpifort to be a tiny bit more portable. * Use classes to group tests for git/svn/fcm together. * Fixed issue in get_transformation script, and moved script into lfric_common to remove code duplication. * Code improvement as suggested by review. * Fixed run config * Added reference to ticket. * Updated type information. * More typing fixes. * Fixed typing warnings. * As requested by reviewer removed is_working_copy functionality. * Issue a warning (which can be silenced) when a tool in a toolbox is replaced. * Fixed flake8. * Fixed flake8. * Fixed failing test. * Addressed issues raised in review. * Removed now unnecessary operations. * Updated some type information. * Fixed all references to APIs to be consistent with PSyclone 2.5. * Added api to the checksum computation. * Fixed type information. * Added test to verify that changing the api changes the checksum. * Make compiler version a tuple of integers * Update some tests to use tuple versions * Explicitly test handling of bad version format * Fix formatting * Tidying up * Make compiler raise an error for any invalid version string Assume these compilers don't need to be hashed. Saves dealing with empty tuples. * Check compiler version string for compiler name * Fix formatting * Add compiler.get_version_string() method Includes other cleanup from PR comments * Add mpi and openmp settings to BuildConfig, made compiler MPI aware. * Looks like the circular dependency has been fixed. * Revert "Looks like the circular dependency has been fixed." ... while it works with the tests, a real application still triggered it. This reverts commit 150dc379af9df8c38e623fae144a0d5196319f10. * Don't even try to find a C compiler if no C files are to be compiled. * Updated gitignore to ignore (recently renamed) documentation. * Fixed failing test. * Return from compile Fortran early if there are no files to compiles. Fixed coding style. * Add MPI enables wrapper for intel and gnu compiler. * Fixed test. 
* Automatically add openmp flag to compiler and linker based on BuildConfig. * Removed enforcement of keyword parameters, which is not supported in python 3.7. * Fixed failing test. * Support more than one tool of a given suite by sorting them. * Use different version checkout for each compiler vendor with mixins * Refactoring, remove unittest compiler class * Fix some mypy errors * Use 'Union' type hint to fix build checks * Added option to add flags to a tool. * Introduce proper compiler wrapper, used this to implement properly wrapper MPI compiler. * Fixed typo in types. * Return run_version_command to base Compiler class Provides default version command that can be overridden for other compilers. Also fix some incorrect tests Other tidying * Add a missing type hint * Added (somewhat stupid) 'test' to reach 100% coverage of PSyclone tool. * Simplified MPI support in wrapper. * More compiler wrapper coverage. * Removed duplicated function. * Removed debug print. * Removed permanently changing compiler attributes, which can cause test failures later. * More test for C compiler wrapper. * More work on compiler wrapper tests. * Fixed version and availability handling, added missing tests for 100% coverage. * Fixed typing error. * Try to fix python 3.7. * Tried to fix failing tests. * Remove inheritance from mixins and use protocol * Simplify compiler inheritance Mixins have static methods with unique names, overrides only happen in concrete classes * Updated wrapper and tests to handle error raised in get_version. * Simplified regular expressions (now tests cover detection of version numbers with only a major version). * Test for missing mixin. * Use the parsing mixing from the compiler in a compiler wrapper. * Use setattr instead of assignment to make mypy happy. * Simplify usage of compiler-specific parsing mixins. * Minor code cleanup. * Updated documentation. * Simplify usage of compiler-specific parsing mixins. * Test for missing mixin. * Fixed test. * Added missing openmp_flag property to compiler wrapper. * Don't use isinstance for consistency check, which does not work for CompilerWrappers. * Fixed isinstance test for C compilation which doesn't work with a CompilerWrapper. * Use a linker's compiler to determine MPI support. Removed mpi property from CompilerSuite. * Added more tests for invalid version numbers. * Added more test cases for invalid version number, improved regex to work as expected. * Fixed typo in test. * Fixed flake/mypy errors. * Combine wrapper flags with flags from wrapped compiler. * Made mypy happy. * Fixed test. * Split tests into smaller individual ones, fixed missing asssert in test. * Parameterised compiler version tests to also test wrapper. * Added missing MPI parameter when getting the compiler. * Fixed comments. * Order parameters to be in same order for various compiler classes. * Remove stray character * Added getter for wrapped compiler. * Fixed small error that would prevent nested compiler wrappers from being used. * Added a cast to make mypy happy. * Add simple getter for linker library flags * Add getter for linker flags by library * Fix formatting * Add optional libs argument to link function * Reorder and clean up linker tests * Make sure `Linker.link()` raises for unknown lib * Add missing type * Fix typing error * Add 'libs' argument to link_exe function * Try to add documentation for the linker libs feature * Use correct list type in link_exe hint * Add silent replace option to linker.add_lib_flags * Fixed spelling mistake in option. 
* Clarified documentation. * Removed unnecessary functions in CompilerWrapper. * Fixed failing test triggered by executing them in specific order (tools then steps) * Fixed line lengths. * Add tests for linker LDFLAG * Add pre- and post- lib flags to link function * Fix syntax in built-in lib flags * Remove netcdf as a built-in linker library Bash-style substitution is not currently handled * Configure pre- and post-lib flags on the Linker object Previously they were passed into the Linker.link() function * Use more realistic linker lib flags * Formatting fix * Removed mixing, use a simple regex instead. * Added support for ifx/icx compiler as intel-llvm class. * Added support for nvidia compiler. * Add preliminary support for Cray compiler. * Added Cray compiler wrapper ftn and cc. * Made mpi and openmp default to False in the BuildConfig constructor. * Removed white space. * Follow a more consistent naming scheme for crays, even though the native compiler names are longer (crayftn-cray, craycc-cray). * Changed names again. * Support compilers that do not support OpenMP. * Added documentation for openmp parameter. * Renamed cray compiler wrapper to be CrayCcWrapper and CrayFtnWrapper, to avoid confusion with Craycc. * Fixed incorrect name in comments. --------- Co-authored-by: jasonjunweilyu <161689601+jasonjunweilyu@users.noreply.github.com> Co-authored-by: Luke Hoffmann Co-authored-by: Luke Hoffmann <992315+lukehoffmann@users.noreply.github.com> * Support new and old style of PSyclone command line (no more nemo api etc) * Fix mypy errors. * Added missing tests for calling psyclone, and converting old style to new stle arguments and vice versa. * Added shell tool. * Try to make mypy happy. * Removed debug code. * ToolRepository now only returns default that are available. Updated tests to make tools as available. * Fixed typos and coding style. * Support new and old style of PSyclone command line (no more nemo api etc) * Fix mypy errors. * Added missing tests for calling psyclone, and converting old style to new stle arguments and vice versa. * Updated comment. * Fixed failing tests. * Updated fparser dependency to version 0.2. * Replace old code for handling sentinels with triggering this behaviour in fparser. Require config in constructor of Analyser classes. * Fixed tests for latest changes. * Removed invalid openmp continuation line - since now fparser fails when trying to parse this line. * Added test for disabled openmp parsing. Updated test to work with new test file. * Coding style changes. * Fix flake issues. * Fixed double _. * Make Linker inherit CompilerWrapper * Fix up tests for new Linker inheritence * Fix a flake error * Use linker wrapping to combine flags from the wrapped linker with the linker wrapper. * Minor code cleanup. * Created linker wrapper in ToolRepository. * Try making linker a CompilerSuiteTool instead of a CompilerWrapper. * Updated tests. * Fix support for post-libs. * Fixed mypy. * Removed more accesses to private members. * Added missing type hint. * Make flake8 happy. * Added missing openmp handling in linker. * Addressed issues raised in review. * Forgot that file in previous commit. 
--------- Co-authored-by: Luke Hoffmann <992315+lukehoffmann@users.noreply.github.com> Co-authored-by: jasonjunweilyu <161689601+jasonjunweilyu@users.noreply.github.com> Co-authored-by: Luke Hoffmann --- source/fab/tools/compiler.py | 7 +- source/fab/tools/compiler_wrapper.py | 4 +- source/fab/tools/linker.py | 203 +++++++++++++----- source/fab/tools/preprocessor.py | 2 +- source/fab/tools/tool_repository.py | 35 ++- tests/conftest.py | 7 +- tests/unit_tests/steps/test_link.py | 38 +++- .../steps/test_link_shared_object.py | 22 +- tests/unit_tests/tools/test_compiler.py | 4 +- tests/unit_tests/tools/test_linker.py | 197 +++++++++++------ 10 files changed, 369 insertions(+), 150 deletions(-) diff --git a/source/fab/tools/compiler.py b/source/fab/tools/compiler.py index 0b5618de..b937e9d1 100644 --- a/source/fab/tools/compiler.py +++ b/source/fab/tools/compiler.py @@ -64,7 +64,7 @@ def __init__(self, name: str, self._compile_flag = compile_flag if compile_flag else "-c" self._output_flag = output_flag if output_flag else "-o" self._openmp_flag = openmp_flag if openmp_flag else "" - self.flags.extend(os.getenv("FFLAGS", "").split()) + self.add_flags(os.getenv("FFLAGS", "").split()) self._version_regex = version_regex @property @@ -83,6 +83,11 @@ def openmp_flag(self) -> str: '''Returns the flag to enable OpenMP.''' return self._openmp_flag + @property + def output_flag(self) -> str: + '''Returns the flag that specifies the output flag.''' + return self._output_flag + def get_hash(self) -> int: ''':returns: a hash based on the compiler name and version. ''' diff --git a/source/fab/tools/compiler_wrapper.py b/source/fab/tools/compiler_wrapper.py index 09ce5015..9338f848 100644 --- a/source/fab/tools/compiler_wrapper.py +++ b/source/fab/tools/compiler_wrapper.py @@ -4,8 +4,8 @@ # which you should have received as part of this distribution ############################################################################## -"""This file contains the base class for any compiler, and derived -classes for gcc, gfortran, icc, ifort +"""This file contains the base class for any compiler-wrapper, including +the derived classes for mpif90, mpicc, and CrayFtnWrapper and CrayCcWrapper. """ from pathlib import Path diff --git a/source/fab/tools/linker.py b/source/fab/tools/linker.py index 8959b3de..2acef01b 100644 --- a/source/fab/tools/linker.py +++ b/source/fab/tools/linker.py @@ -7,9 +7,11 @@ """This file contains the base class for any Linker. """ +from __future__ import annotations + import os from pathlib import Path -from typing import cast, Dict, List, Optional +from typing import cast, Dict, List, Optional, Union import warnings from fab.tools.category import Category @@ -18,39 +20,51 @@ class Linker(CompilerSuiteTool): - '''This is the base class for any Linker. If a compiler is specified, - its name, executable, and compile suite will be used for the linker (if - not explicitly set in the constructor). - - :param name: the name of the linker. - :param exec_name: the name of the executable. - :param suite: optional, the name of the suite. - :param compiler: optional, a compiler instance - :param output_flag: flag to use to specify the output name. + '''This is the base class for any Linker. It takes either another linker + instance, or a compiler instance as parameter in the constructor. Exactly + one of these must be provided. 
+ + :param compiler: an optional compiler instance + :param linker: an optional linker instance + :param name: name of the linker + + :raises RuntimeError: if both compiler and linker are specified. + :raises RuntimeError: if neither compiler nor linker is specified. ''' - # pylint: disable=too-many-arguments - def __init__(self, name: Optional[str] = None, + def __init__(self, compiler: Optional[Compiler] = None, + linker: Optional[Linker] = None, exec_name: Optional[str] = None, - suite: Optional[str] = None, - compiler: Optional[Compiler] = None, - output_flag: str = "-o"): - if (not name or not exec_name or not suite) and not compiler: - raise RuntimeError("Either specify name, exec name, and suite " - "or a compiler when creating Linker.") - # Make mypy happy, since it can't work out otherwise if these string - # variables might still be None :( - compiler = cast(Compiler, compiler) + name: Optional[str] = None): + + if linker and compiler: + raise RuntimeError("Both compiler and linker is specified in " + "linker constructor.") + if not linker and not compiler: + raise RuntimeError("Neither compiler nor linker is specified in " + "linker constructor.") + self._compiler = compiler + self._linker = linker + + search_linker = self + while search_linker._linker: + search_linker = search_linker._linker + final_compiler = search_linker._compiler if not name: - name = compiler.name + assert final_compiler # make mypy happy + name = f"linker-{final_compiler.name}" + if not exec_name: - exec_name = compiler.exec_name - if not suite: - suite = compiler.suite - self._output_flag = output_flag - super().__init__(name, exec_name, suite, Category.LINKER) - self._compiler = compiler - self.flags.extend(os.getenv("LDFLAGS", "").split()) + # This will search for the name in linker or compiler + exec_name = self.get_exec_name() + + super().__init__( + name=name, + exec_name=exec_name, + suite=self.suite, + category=Category.LINKER) + + self.add_flags(os.getenv("LDFLAGS", "").split()) # Maintain a set of flags for common libraries. self._lib_flags: Dict[str, List[str]] = {} @@ -58,20 +72,55 @@ def __init__(self, name: Optional[str] = None, self._pre_lib_flags: List[str] = [] self._post_lib_flags: List[str] = [] - @property - def mpi(self) -> bool: - ''':returns: whether the linker supports MPI or not.''' - return self._compiler.mpi - def check_available(self) -> bool: - ''' - :returns: whether the linker is available or not. We do this - by requesting the linker version. + ''':returns: whether this linker is available by asking the wrapped + linker or compiler. 
''' if self._compiler: return self._compiler.check_available() + assert self._linker # make mypy happy + return self._linker.check_available() + + def get_exec_name(self) -> str: + ''':returns: the name of the executable by asking the wrapped + linker or compiler.''' + if self._compiler: + return self._compiler.exec_name + assert self._linker # make mypy happy + return self._linker.exec_name + + @property + def suite(self) -> str: + ''':returns: the suite this linker belongs to by getting it from + the wrapped compiler or linker.''' + return cast(CompilerSuiteTool, (self._compiler or self._linker)).suite + + @property + def mpi(self) -> bool: + ''':returns: whether this linker supports MPI or not by checking + with the wrapped compiler or linker.''' + if self._compiler: + return self._compiler.mpi + assert self._linker # make mypy happy + return self._linker.mpi - return super().check_available() + @property + def openmp(self) -> bool: + ''':returns: whether this linker supports OpenMP or not by checking + with the wrapped compiler or linker.''' + if self._compiler: + return self._compiler.openmp + assert self._linker # make mypy happy + return self._linker.openmp + + @property + def output_flag(self) -> str: + ''':returns: the flag that is used to specify the output name. + ''' + if self._compiler: + return self._compiler.output_flag + assert self._linker # make mypy happy + return self._linker.output_flag def get_lib_flags(self, lib: str) -> List[str]: '''Gets the standard flags for a standard library @@ -85,6 +134,10 @@ def get_lib_flags(self, lib: str) -> List[str]: try: return self._lib_flags[lib] except KeyError: + # If a lib is not defined here, but this is a wrapper around + # another linker, return the result from the wrapped linker + if self._linker: + return self._linker.get_lib_flags(lib) raise RuntimeError(f"Unknown library name: '{lib}'") def add_lib_flags(self, lib: str, flags: List[str], @@ -128,6 +181,47 @@ def add_post_lib_flags(self, flags: List[str]): ''' self._post_lib_flags.extend(flags) + def get_pre_link_flags(self) -> List[str]: + '''Returns the list of pre-link flags. It will concatenate the + flags for this instance with all potentially wrapped linkers. + This wrapper's flag will come first - the assumption is that + the pre-link flags are likely paths, so we need a wrapper to + be able to put a search path before the paths from a wrapped + linker. + + :returns: List of pre-link flags of this linker and all + wrapped linkers + ''' + params: List[str] = [] + if self._pre_lib_flags: + params.extend(self._pre_lib_flags) + if self._linker: + # If we are wrapping a linker, get the wrapped linker's + # pre-link flags and append them to the end (so the linker + # wrapper's settings come before the setting from the + # wrapped linker). + params.extend(self._linker.get_pre_link_flags()) + return params + + def get_post_link_flags(self) -> List[str]: + '''Returns the list of post-link flags. It will concatenate the + flags for this instance with all potentially wrapped linkers. + This wrapper's flag will be added to the end. + + :returns: List of post-link flags of this linker and all + wrapped linkers + ''' + params: List[str] = [] + if self._linker: + # If we are wrapping a linker, get the wrapped linker's + # post-link flags and add them first (so this linker + # wrapper's settings come after the setting from the + # wrapped linker).
+ params.extend(self._linker.get_post_link_flags()) + if self._post_lib_flags: + params.extend(self._post_lib_flags) + return params + def link(self, input_files: List[Path], output_file: Path, openmp: bool, libs: Optional[List[str]] = None) -> str: @@ -141,21 +235,30 @@ def link(self, input_files: List[Path], output_file: Path, :returns: the stdout of the link command ''' - if self._compiler: - # Create a copy: - params = self._compiler.flags[:] - if openmp: - params.append(self._compiler.openmp_flag) - else: - params = [] + + params: List[Union[str, Path]] = [] + + # Find the compiler by following the (potentially + # layered) linker wrapper. + linker = self + while linker._linker: + linker = linker._linker + # Now we must have a compiler + compiler = linker._compiler + assert compiler # make mypy happy + params.extend(compiler.flags) + + if openmp: + params.append(compiler.openmp_flag) + # TODO: why are the .o files sorted? That shouldn't matter params.extend(sorted(map(str, input_files))) + params.extend(self.get_pre_link_flags()) - if self._pre_lib_flags: - params.extend(self._pre_lib_flags) for lib in (libs or []): params.extend(self.get_lib_flags(lib)) - if self._post_lib_flags: - params.extend(self._post_lib_flags) - params.extend([self._output_flag, str(output_file)]) + + params.extend(self.get_post_link_flags()) + params.extend([self.output_flag, str(output_file)]) + return self.run(params) diff --git a/source/fab/tools/preprocessor.py b/source/fab/tools/preprocessor.py index e620ce2a..dd037874 100644 --- a/source/fab/tools/preprocessor.py +++ b/source/fab/tools/preprocessor.py @@ -63,7 +63,7 @@ class CppFortran(Preprocessor): ''' def __init__(self): super().__init__("cpp", "cpp", Category.FORTRAN_PREPROCESSOR) - self.flags.extend(["-traditional-cpp", "-P"]) + self.add_flags(["-traditional-cpp", "-P"]) # ============================================================================ diff --git a/source/fab/tools/tool_repository.py b/source/fab/tools/tool_repository.py index 965e252b..1bf839f8 100644 --- a/source/fab/tools/tool_repository.py +++ b/source/fab/tools/tool_repository.py @@ -17,8 +17,8 @@ from fab.tools.tool import Tool from fab.tools.category import Category from fab.tools.compiler import Compiler -from fab.tools.compiler_wrapper import (CrayCcWrapper, CrayFtnWrapper, - Mpif90, Mpicc) +from fab.tools.compiler_wrapper import (CompilerWrapper, CrayCcWrapper, + CrayFtnWrapper, Mpif90, Mpicc) from fab.tools.linker import Linker from fab.tools.versioning import Fcm, Git, Subversion from fab.tools import (Ar, Cpp, CppFortran, Craycc, Crayftn, @@ -81,12 +81,12 @@ def __init__(self): # Now create the potential mpif90 and Cray ftn wrapper all_fc = self[Category.FORTRAN_COMPILER][:] for fc in all_fc: - mpif90 = Mpif90(fc) - self.add_tool(mpif90) + if not fc.mpi: + mpif90 = Mpif90(fc) + self.add_tool(mpif90) # I assume cray has (besides cray) only support for Intel and GNU if fc.name in ["gfortran", "ifort"]: crayftn = CrayFtnWrapper(fc) - print("NEW NAME", crayftn, crayftn.name) self.add_tool(crayftn) # Now create the potential mpicc and Cray cc wrapper @@ -114,9 +114,28 @@ def add_tool(self, tool: Tool): # If we have a compiler, add the compiler as linker as well if tool.is_compiler: - tool = cast(Compiler, tool) - linker = Linker(name=f"linker-{tool.name}", compiler=tool) - self[linker.category].append(linker) + compiler = cast(Compiler, tool) + if isinstance(compiler, CompilerWrapper): + # If we have a compiler wrapper, create a new linker using + # the linker based on the 
wrappped compiler. For example, when + # creating linker-mpif90-gfortran, we want this to be based on + # linker-gfortran (and not on the compiler mpif90-gfortran), + # since the linker-gfortran might have library definitions + # that should be reused. So we first get the existing linker + # (since the compiler exists, a linker for this compiler was + # already created and must exist). + other_linker = self.get_tool( + category=Category.LINKER, + name=f"linker-{compiler.compiler.name}") + other_linker = cast(Linker, other_linker) + linker = Linker(linker=other_linker, + exec_name=compiler.exec_name, + name=f"linker-{compiler.name}") + self[linker.category].append(linker) + else: + linker = Linker(compiler=compiler, + name=f"linker-{compiler.name}") + self[linker.category].append(linker) def get_tool(self, category: Category, name: str) -> Tool: ''':returns: the tool with a given name in the specified category. diff --git a/tests/conftest.py b/tests/conftest.py index 86de6476..36896de7 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -11,7 +11,7 @@ import pytest -from fab.tools import Category, CCompiler, FortranCompiler, Linker, ToolBox +from fab.tools import CCompiler, FortranCompiler, Linker, ToolBox # This avoids pylint warnings about Redefining names from outer scope @@ -44,10 +44,9 @@ def fixture_mock_fortran_compiler(): @pytest.fixture(name="mock_linker") -def fixture_mock_linker(): +def fixture_mock_linker(mock_fortran_compiler): '''Provides a mock linker.''' - mock_linker = Linker("mock_linker", "mock_linker.exe", - Category.FORTRAN_COMPILER) + mock_linker = Linker(mock_fortran_compiler) mock_linker.run = mock.Mock() mock_linker._version = (1, 2, 3) mock_linker.add_lib_flags("netcdf", ["-lnetcdff", "-lnetcdf"]) diff --git a/tests/unit_tests/steps/test_link.py b/tests/unit_tests/steps/test_link.py index a20c4ff4..e9a6750c 100644 --- a/tests/unit_tests/steps/test_link.py +++ b/tests/unit_tests/steps/test_link.py @@ -3,22 +3,29 @@ # For further details please refer to the file COPYRIGHT # which you should have received as part of this distribution # ############################################################################## + +''' +Tests linking an executable. +''' + from pathlib import Path from types import SimpleNamespace from unittest import mock from fab.artefacts import ArtefactSet, ArtefactStore from fab.steps.link import link_exe -from fab.tools import Linker +from fab.tools import FortranCompiler, Linker import pytest class TestLinkExe: + '''Test class for linking an executable. + ''' def test_run(self, tool_box): - # ensure the command is formed correctly, with the flags at the - # end (why?!) - + '''Ensure the command is formed correctly, with the flags at the + end and that environment variable FFLAGS is picked up. 
+ ''' config = SimpleNamespace( project_workspace=Path('workspace'), artefact_store=ArtefactStore(), @@ -29,9 +36,20 @@ def test_run(self, tool_box): config.artefact_store[ArtefactSet.OBJECT_FILES] = \ {'foo': {'foo.o', 'bar.o'}} - with mock.patch('os.getenv', return_value='-L/foo1/lib -L/foo2/lib'): - # We need to create a linker here to pick up the env var: - linker = Linker("mock_link", "mock_link.exe", "mock-vendor") + with mock.patch.dict("os.environ", + {"FFLAGS": "-L/foo1/lib -L/foo2/lib"}): + # We need to create the compiler here in order to pick + # up the environment + mock_compiler = FortranCompiler("mock_fortran_compiler", + "mock_fortran_compiler.exe", + "suite", module_folder_flag="", + version_regex="something", + syntax_only_flag=None, + compile_flag=None, + output_flag=None, openmp_flag=None) + mock_compiler.run = mock.Mock() + + linker = Linker(compiler=mock_compiler) # Mark the linker as available to it can be added to the tool box linker._is_available = True @@ -44,10 +62,12 @@ def test_run(self, tool_box): pytest.warns(UserWarning, match="_metric_send_conn not " "set, cannot send metrics"): - link_exe(config, libs=['mylib'], flags=['-fooflag', '-barflag']) + link_exe(config, libs=['mylib'], + flags=['-fooflag', '-barflag']) tool_run.assert_called_with( - ['mock_link.exe', '-L/foo1/lib', '-L/foo2/lib', 'bar.o', 'foo.o', + ['mock_fortran_compiler.exe', '-L/foo1/lib', '-L/foo2/lib', + 'bar.o', 'foo.o', '-L/my/lib', '-mylib', '-fooflag', '-barflag', '-o', 'workspace/foo'], capture_output=True, env=None, cwd=None, check=False) diff --git a/tests/unit_tests/steps/test_link_shared_object.py b/tests/unit_tests/steps/test_link_shared_object.py index 700a3de3..c76eb957 100644 --- a/tests/unit_tests/steps/test_link_shared_object.py +++ b/tests/unit_tests/steps/test_link_shared_object.py @@ -13,7 +13,7 @@ from fab.artefacts import ArtefactSet, ArtefactStore from fab.steps.link import link_shared_object -from fab.tools import Linker +from fab.tools import FortranCompiler, Linker import pytest @@ -32,9 +32,18 @@ def test_run(tool_box): config.artefact_store[ArtefactSet.OBJECT_FILES] = \ {None: {'foo.o', 'bar.o'}} - with mock.patch('os.getenv', return_value='-L/foo1/lib -L/foo2/lib'): - # We need to create a linker here to pick up the env var: - linker = Linker("mock_link", "mock_link.exe", "vendor") + with mock.patch.dict("os.environ", {"FFLAGS": "-L/foo1/lib -L/foo2/lib"}): + # We need to create the compiler here in order to pick + # up the environment + mock_compiler = FortranCompiler("mock_fortran_compiler", + "mock_fortran_compiler.exe", + "suite", module_folder_flag="", + version_regex="something", + syntax_only_flag=None, + compile_flag=None, output_flag=None, + openmp_flag=None) + mock_compiler.run = mock.Mock() + linker = Linker(mock_compiler) # Mark the linker as available so it can added to the tool box: linker._is_available = True tool_box.add_tool(linker, silent_replace=True) @@ -47,6 +56,7 @@ def test_run(tool_box): flags=['-fooflag', '-barflag']) tool_run.assert_called_with( - ['mock_link.exe', '-L/foo1/lib', '-L/foo2/lib', 'bar.o', 'foo.o', - '-fooflag', '-barflag', '-fPIC', '-shared', '-o', '/tmp/lib_my.so'], + ['mock_fortran_compiler.exe', '-L/foo1/lib', '-L/foo2/lib', 'bar.o', + 'foo.o', '-fooflag', '-barflag', '-fPIC', '-shared', + '-o', '/tmp/lib_my.so'], capture_output=True, env=None, cwd=None, check=False) diff --git a/tests/unit_tests/tools/test_compiler.py b/tests/unit_tests/tools/test_compiler.py index f6c7c158..60c18d78 100644 --- 
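The two link-step tests above rely on flags being read from the environment when the tools are created: the Fortran compiler picks up FFLAGS in its constructor (hence the comment "We need to create the compiler here in order to pick up the environment"), and the Linker picks up LDFLAGS via the os.getenv call added earlier in this patch. A small editorial sketch of that behaviour, with made-up flag values and assuming Gfortran behaves like the FortranCompiler used in the tests:

    import os
    from fab.tools import Gfortran, Linker

    os.environ["FFLAGS"] = "-L/foo1/lib -L/foo2/lib"   # read by the compiler
    os.environ["LDFLAGS"] = "-Wl,--as-needed"          # read by the linker

    compiler = Gfortran()                # FFLAGS ends up in compiler.flags
    linker = Linker(compiler=compiler)   # LDFLAGS ends up in linker.flags
    # linker.link() then emits the compiler flags, the object files, any
    # library flags and finally the output flag, as checked in the tests.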
a/tests/unit_tests/tools/test_compiler.py +++ b/tests/unit_tests/tools/test_compiler.py @@ -25,7 +25,7 @@ def test_compiler(): category=Category.C_COMPILER, openmp_flag="-fopenmp") assert cc.category == Category.C_COMPILER assert cc._compile_flag == "-c" - assert cc._output_flag == "-o" + assert cc.output_flag == "-o" # pylint: disable-next=use-implicit-booleaness-not-comparison assert cc.flags == [] assert cc.suite == "gnu" @@ -35,7 +35,7 @@ def test_compiler(): fc = FortranCompiler("gfortran", "gfortran", "gnu", openmp_flag="-fopenmp", version_regex="something", module_folder_flag="-J") assert fc._compile_flag == "-c" - assert fc._output_flag == "-o" + assert fc.output_flag == "-o" assert fc.category == Category.FORTRAN_COMPILER assert fc.suite == "gnu" # pylint: disable-next=use-implicit-booleaness-not-comparison diff --git a/tests/unit_tests/tools/test_linker.py b/tests/unit_tests/tools/test_linker.py index 6984c790..052af88d 100644 --- a/tests/unit_tests/tools/test_linker.py +++ b/tests/unit_tests/tools/test_linker.py @@ -13,43 +13,75 @@ import pytest -from fab.tools import (Category, Linker) +from fab.tools import (Category, Linker, ToolRepository) def test_linker(mock_c_compiler, mock_fortran_compiler): '''Test the linker constructor.''' - linker = Linker(name="my_linker", exec_name="my_linker.exe", suite="suite") + assert mock_c_compiler.category == Category.C_COMPILER + assert mock_c_compiler.name == "mock_c_compiler" + + linker = Linker(mock_c_compiler) assert linker.category == Category.LINKER - assert linker.name == "my_linker" - assert linker.exec_name == "my_linker.exe" + assert linker.name == "linker-mock_c_compiler" + assert linker.exec_name == "mock_c_compiler.exe" assert linker.suite == "suite" assert linker.flags == [] + assert linker.output_flag == "-o" - linker = Linker(name="my_linker", compiler=mock_c_compiler) - assert linker.category == Category.LINKER - assert linker.name == "my_linker" - assert linker.exec_name == mock_c_compiler.exec_name - assert linker.suite == mock_c_compiler.suite - assert linker.flags == [] + assert mock_fortran_compiler.category == Category.FORTRAN_COMPILER + assert mock_fortran_compiler.name == "mock_fortran_compiler" - linker = Linker(compiler=mock_c_compiler) + linker = Linker(mock_fortran_compiler) assert linker.category == Category.LINKER - assert linker.name == mock_c_compiler.name - assert linker.exec_name == mock_c_compiler.exec_name - assert linker.suite == mock_c_compiler.suite + assert linker.name == "linker-mock_fortran_compiler" + assert linker.exec_name == "mock_fortran_compiler.exe" + assert linker.suite == "suite" assert linker.flags == [] - linker = Linker(compiler=mock_fortran_compiler) - assert linker.category == Category.LINKER - assert linker.name == mock_fortran_compiler.name - assert linker.exec_name == mock_fortran_compiler.exec_name - assert linker.flags == [] +def test_linker_constructor_error(mock_c_compiler): + '''Test the linker constructor with invalid parameters.''' + + with pytest.raises(RuntimeError) as err: + Linker() + assert ("Neither compiler nor linker is specified in linker constructor." + in str(err.value)) with pytest.raises(RuntimeError) as err: - linker = Linker(name="no-exec-given") - assert ("Either specify name, exec name, and suite or a compiler when " - "creating Linker." in str(err.value)) + Linker(compiler=mock_c_compiler, linker=mock_c_compiler) + assert ("Both compiler and linker is specified in linker constructor." 
+ in str(err.value)) + + +@pytest.mark.parametrize("mpi", [True, False]) +def test_linker_mpi(mock_c_compiler, mpi): + '''Test that linker wrappers handle MPI as expected.''' + + mock_c_compiler._mpi = mpi + linker = Linker(compiler=mock_c_compiler) + assert linker.mpi == mpi + + wrapped_linker = Linker(linker=linker) + assert wrapped_linker.mpi == mpi + + +@pytest.mark.parametrize("openmp", [True, False]) +def test_linker_openmp(mock_c_compiler, openmp): + '''Test that linker wrappers handle openmp as expected. Note that + a compiler detects support for OpenMP by checking if an openmp flag + is defined. + ''' + + if openmp: + mock_c_compiler._openmp_flag = "-some-openmp-flag" + else: + mock_c_compiler._openmp_flag = "" + linker = Linker(compiler=mock_c_compiler) + assert linker.openmp == openmp + + wrapped_linker = Linker(linker=linker) + assert wrapped_linker.openmp == openmp def test_linker_gets_ldflags(mock_c_compiler): @@ -62,31 +94,28 @@ def test_linker_gets_ldflags(mock_c_compiler): def test_linker_check_available(mock_c_compiler): '''Tests the is_available functionality.''' - # First test if a compiler is given. The linker will call the + # First test when a compiler is given. The linker will call the # corresponding function in the compiler: - linker = Linker(compiler=mock_c_compiler) - with mock.patch.object(mock_c_compiler, "check_available", - return_value=True) as comp_run: + linker = Linker(mock_c_compiler) + with mock.patch('fab.tools.compiler.Compiler.get_version', + return_value=(1, 2, 3)): assert linker.check_available() - # It should be called once without any parameter - comp_run.assert_called_once_with() - # Second test, no compiler is given. Mock Tool.run to - # return a success: - linker = Linker("ld", "ld", suite="gnu") - mock_result = mock.Mock(returncode=0) - with mock.patch('fab.tools.tool.subprocess.run', - return_value=mock_result) as tool_run: - linker.check_available() - tool_run.assert_called_once_with( - ["ld", "--version"], capture_output=True, env=None, - cwd=None, check=False) - - # Third test: assume the tool does not exist, check_available - # will return False (and not raise an exception) - linker._is_available = None - with mock.patch("fab.tools.tool.Tool.run", - side_effect=RuntimeError("")) as tool_run: + # Then test the usage of a linker wrapper. The linker will call the + # corresponding function in the wrapper linker: + wrapped_linker = Linker(linker=linker) + with mock.patch('fab.tools.compiler.Compiler.get_version', + return_value=(1, 2, 3)): + assert wrapped_linker.check_available() + + +def test_linker_check_unavailable(mock_c_compiler): + '''Tests the is_available functionality.''' + # assume the tool does not exist, check_available + # will return False (and not raise an exception) + linker = Linker(mock_c_compiler) + with mock.patch('fab.tools.compiler.Compiler.get_version', + side_effect=RuntimeError("")): assert linker.check_available() is False @@ -103,8 +132,8 @@ def test_linker_get_lib_flags(mock_linker): def test_linker_get_lib_flags_unknown(mock_c_compiler): - """Linker should raise an error if flags are requested for a library that is - unknown + """Linker should raise an error if flags are requested for a library + that is unknown. 
""" linker = Linker(compiler=mock_c_compiler) with pytest.raises(RuntimeError) as err: @@ -123,7 +152,8 @@ def test_linker_add_lib_flags(mock_c_compiler): def test_linker_add_lib_flags_overwrite_defaults(mock_linker): - """Linker should provide a way to replace the default flags for a library""" + """Linker should provide a way to replace the default flags for + a library""" # Initially we have the default netcdf flags result = mock_linker.get_lib_flags("netcdf") @@ -178,7 +208,9 @@ def test_linker_remove_lib_flags_unknown(mock_linker): # Linking: # ==================== def test_linker_c(mock_c_compiler): - '''Test the link command line when no additional libraries are specified.''' + '''Test the link command line when no additional libraries are + specified.''' + linker = Linker(compiler=mock_c_compiler) # Add a library to the linker, but don't use it in the link step linker.add_lib_flags("customlib", ["-lcustom", "-jcustom"]) @@ -264,29 +296,16 @@ def test_compiler_linker_add_compiler_flag(mock_c_compiler): capture_output=True, env=None, cwd=None, check=False) -def test_linker_add_compiler_flag(): - '''Make sure ad-hoc linker flags work if a linker is created without a - compiler: - ''' - linker = Linker("no-compiler", "no-compiler.exe", "suite") - linker.flags.append("-some-other-flag") - mock_result = mock.Mock(returncode=0) - with mock.patch('fab.tools.tool.subprocess.run', - return_value=mock_result) as tool_run: - linker.link([Path("a.o")], Path("a.out"), openmp=False) - tool_run.assert_called_with( - ['no-compiler.exe', '-some-other-flag', 'a.o', '-o', 'a.out'], - capture_output=True, env=None, cwd=None, check=False) - - def test_linker_all_flag_types(mock_c_compiler): """Make sure all possible sources of linker flags are used in the right order""" + + # Environment variables for both the linker with mock.patch.dict("os.environ", {"LDFLAGS": "-ldflag"}): linker = Linker(compiler=mock_c_compiler) - mock_c_compiler.flags.extend(["-compiler-flag1", "-compiler-flag2"]) - linker.flags.extend(["-linker-flag1", "-linker-flag2"]) + mock_c_compiler.add_flags(["-compiler-flag1", "-compiler-flag2"]) + linker.add_flags(["-linker-flag1", "-linker-flag2"]) linker.add_pre_lib_flags(["-prelibflag1", "-prelibflag2"]) linker.add_lib_flags("customlib1", ["-lib1flag1", "lib1flag2"]) linker.add_lib_flags("customlib2", ["-lib2flag1", "lib2flag2"]) @@ -302,8 +321,6 @@ def test_linker_all_flag_types(mock_c_compiler): tool_run.assert_called_with([ "mock_c_compiler.exe", - # Note: compiler flags and linker flags will be switched when the Linker - # becomes a CompilerWrapper in a following PR "-ldflag", "-linker-flag1", "-linker-flag2", "-compiler-flag1", "-compiler-flag2", "-fopenmp", @@ -314,3 +331,49 @@ def test_linker_all_flag_types(mock_c_compiler): "-postlibflag1", "-postlibflag2", "-o", "a.out"], capture_output=True, env=None, cwd=None, check=False) + + +def test_linker_nesting(mock_c_compiler): + """Make sure all possible sources of linker flags are used in the right + order""" + + linker1 = Linker(compiler=mock_c_compiler) + linker1.add_pre_lib_flags(["pre_lib1"]) + linker1.add_lib_flags("lib_a", ["a_from_1"]) + linker1.add_lib_flags("lib_c", ["c_from_1"]) + linker1.add_post_lib_flags(["post_lib1"]) + linker2 = Linker(linker=linker1) + linker2.add_pre_lib_flags(["pre_lib2"]) + linker2.add_lib_flags("lib_b", ["b_from_2"]) + linker2.add_lib_flags("lib_c", ["c_from_2"]) + linker1.add_post_lib_flags(["post_lib2"]) + + mock_result = mock.Mock(returncode=0) + with 
mock.patch("fab.tools.tool.subprocess.run", + return_value=mock_result) as tool_run: + linker2.link( + [Path("a.o")], Path("a.out"), + libs=["lib_a", "lib_b", "lib_c"], + openmp=True) + tool_run.assert_called_with(["mock_c_compiler.exe", "-fopenmp", + "a.o", "pre_lib2", "pre_lib1", "a_from_1", + "b_from_2", "c_from_2", + "post_lib1", "post_lib2", "-o", "a.out"], + capture_output=True, env=None, cwd=None, + check=False) + + +def test_linker_inheriting(): + '''Make sure that libraries from a wrapper compiler will be + available for a wrapper. + ''' + tr = ToolRepository() + linker_gfortran = tr.get_tool(Category.LINKER, "linker-gfortran") + linker_mpif90 = tr.get_tool(Category.LINKER, "linker-mpif90-gfortran") + + linker_gfortran.add_lib_flags("lib_a", ["a_from_1"]) + assert linker_mpif90.get_lib_flags("lib_a") == ["a_from_1"] + + with pytest.raises(RuntimeError) as err: + linker_mpif90.get_lib_flags("does_not_exist") + assert "Unknown library name: 'does_not_exist'" in str(err.value) From 3adffd37723d838ddc9a1adfecb0a94b5f82d290 Mon Sep 17 00:00:00 2001 From: Joerg Henrichs Date: Wed, 5 Feb 2025 15:38:38 +1100 Subject: [PATCH 4/6] More linker wrapper improvements (#383) * Support new and old style of PSyclone command line (no more nemo api etc) * Fix mypy errors. * Added missing tests for calling psyclone, and converting old style to new stle arguments and vice versa. * Updated comment. * Removed mixing, use a simple regex instead. * Added support for ifx/icx compiler as intel-llvm class. * Added support for nvidia compiler. * Add preliminary support for Cray compiler. * Added Cray compiler wrapper ftn and cc. * Follow a more consistent naming scheme for crays, even though the native compiler names are longer (crayftn-cray, craycc-cray). * Changed names again. * Renamed cray compiler wrapper to be CrayCcWrapper and CrayFtnWrapper, to avoid confusion with Craycc. * Fixed incorrect name in comments. * Additional compilers (#349) * Moved OBJECT_ARCHIVES from constants to ArtefactSet. * Moved PRAGMAD_C from constants to ArtefactSet. * Turned 'all_source' into an enum. * Allow integer as revision. * Fixed flake8 error. * Removed specific functions to add/get fortran source files etc. * Removed non-existing and unneccessary collections. * Try to fix all run_configs. * Fixed rebase issues. * Added replace functionality to ArtefactStore, updated test_artefacts to cover all lines in that file. * Started to replace artefacts when files are pre-processed. * Removed linker argument from linking step in all examples. * Try to get jules to link. * Fixed build_jules. * Fixed other issues raised in reviews. * Try to get jules to link. * Fixed other issues raised in reviews. * Simplify handling of X90 files by replacing the X90 with x90, meaning only one artefact set is involved when running PSyclone. * Make OBJECT_ARCHIVES also a dict, migrate more code to replace/add files to the default build artefact collections. * Fixed some examples. * Fix flake8 error. * Fixed failing tests. * Support empty comments. * Fix preprocessor to not unnecessary remove and add files that are already in the output directory. * Allow find_soure_files to be called more than once by adding files (not replacing artefact). * Updated lfric_common so that files created by configurator are written in build (not source). * Use c_build_files instead of pragmad_c. * Removed unnecessary str. * Documented the new artefact set handling. * Fixed typo. * Make the PSyclone API configurable. 
* Fixed formatting of documentation, properly used ArtefactSet names. * Support .f and .F Fortran files. * Removed setter for tool.is_available, which was only used for testing. * #3 Fix documentation and coding style issues from review. * Renamed Categories into Category. * Minor coding style cleanup. * Removed more unnecessary (). * Re-added (invalid) grab_pre_build call. * Fixed typo. * Renamed set_default_vendor to set_default_compiler_suite. * Renamed VendorTool to CompilerSuiteTool. * Also accept a Path as exec_name specification for a tool. * Move the check_available function into the base class. * Fixed some types and documentation. * Fix typing error. * Added explanation for meta-compiler. * Improved error handling and documentation. * Replace mpiifort with mpifort to be a tiny bit more portable. * Use classes to group tests for git/svn/fcm together. * Fixed issue in get_transformation script, and moved script into lfric_common to remove code duplication. * Code improvement as suggested by review. * Fixed run config * Added reference to ticket. * Updated type information. * More typing fixes. * Fixed typing warnings. * As requested by reviewer removed is_working_copy functionality. * Issue a warning (which can be silenced) when a tool in a toolbox is replaced. * Fixed flake8. * Fixed flake8. * Fixed failing test. * Addressed issues raised in review. * Removed now unnecessary operations. * Updated some type information. * Fixed all references to APIs to be consistent with PSyclone 2.5. * Added api to the checksum computation. * Fixed type information. * Added test to verify that changing the api changes the checksum. * Make compiler version a tuple of integers * Update some tests to use tuple versions * Explicitly test handling of bad version format * Fix formatting * Tidying up * Make compiler raise an error for any invalid version string Assume these compilers don't need to be hashed. Saves dealing with empty tuples. * Check compiler version string for compiler name * Fix formatting * Add compiler.get_version_string() method Includes other cleanup from PR comments * Add mpi and openmp settings to BuildConfig, made compiler MPI aware. * Looks like the circular dependency has been fixed. * Revert "Looks like the circular dependency has been fixed." ... while it works with the tests, a real application still triggered it. This reverts commit 150dc379af9df8c38e623fae144a0d5196319f10. * Don't even try to find a C compiler if no C files are to be compiled. * Updated gitignore to ignore (recently renamed) documentation. * Fixed failing test. * Return from compile Fortran early if there are no files to compiles. Fixed coding style. * Add MPI enables wrapper for intel and gnu compiler. * Fixed test. * Automatically add openmp flag to compiler and linker based on BuildConfig. * Removed enforcement of keyword parameters, which is not supported in python 3.7. * Fixed failing test. * Support more than one tool of a given suite by sorting them. * Use different version checkout for each compiler vendor with mixins * Refactoring, remove unittest compiler class * Fix some mypy errors * Use 'Union' type hint to fix build checks * Added option to add flags to a tool. * Introduce proper compiler wrapper, used this to implement properly wrapper MPI compiler. * Fixed typo in types. * Return run_version_command to base Compiler class Provides default version command that can be overridden for other compilers. 
Also fix some incorrect tests Other tidying * Add a missing type hint * Added (somewhat stupid) 'test' to reach 100% coverage of PSyclone tool. * Simplified MPI support in wrapper. * More compiler wrapper coverage. * Removed duplicated function. * Removed debug print. * Removed permanently changing compiler attributes, which can cause test failures later. * More test for C compiler wrapper. * More work on compiler wrapper tests. * Fixed version and availability handling, added missing tests for 100% coverage. * Fixed typing error. * Try to fix python 3.7. * Tried to fix failing tests. * Remove inheritance from mixins and use protocol * Simplify compiler inheritance Mixins have static methods with unique names, overrides only happen in concrete classes * Updated wrapper and tests to handle error raised in get_version. * Simplified regular expressions (now tests cover detection of version numbers with only a major version). * Test for missing mixin. * Use the parsing mixing from the compiler in a compiler wrapper. * Use setattr instead of assignment to make mypy happy. * Simplify usage of compiler-specific parsing mixins. * Minor code cleanup. * Updated documentation. * Simplify usage of compiler-specific parsing mixins. * Test for missing mixin. * Fixed test. * Added missing openmp_flag property to compiler wrapper. * Don't use isinstance for consistency check, which does not work for CompilerWrappers. * Fixed isinstance test for C compilation which doesn't work with a CompilerWrapper. * Use a linker's compiler to determine MPI support. Removed mpi property from CompilerSuite. * Added more tests for invalid version numbers. * Added more test cases for invalid version number, improved regex to work as expected. * Fixed typo in test. * Fixed flake/mypy errors. * Combine wrapper flags with flags from wrapped compiler. * Made mypy happy. * Fixed test. * Split tests into smaller individual ones, fixed missing asssert in test. * Parameterised compiler version tests to also test wrapper. * Added missing MPI parameter when getting the compiler. * Fixed comments. * Order parameters to be in same order for various compiler classes. * Remove stray character * Added getter for wrapped compiler. * Fixed small error that would prevent nested compiler wrappers from being used. * Added a cast to make mypy happy. * Add simple getter for linker library flags * Add getter for linker flags by library * Fix formatting * Add optional libs argument to link function * Reorder and clean up linker tests * Make sure `Linker.link()` raises for unknown lib * Add missing type * Fix typing error * Add 'libs' argument to link_exe function * Try to add documentation for the linker libs feature * Use correct list type in link_exe hint * Add silent replace option to linker.add_lib_flags * Fixed spelling mistake in option. * Clarified documentation. * Removed unnecessary functions in CompilerWrapper. * Fixed failing test triggered by executing them in specific order (tools then steps) * Fixed line lengths. * Add tests for linker LDFLAG * Add pre- and post- lib flags to link function * Fix syntax in built-in lib flags * Remove netcdf as a built-in linker library Bash-style substitution is not currently handled * Configure pre- and post-lib flags on the Linker object Previously they were passed into the Linker.link() function * Use more realistic linker lib flags * Formatting fix * Removed mixing, use a simple regex instead. * Added support for ifx/icx compiler as intel-llvm class. * Added support for nvidia compiler. 
* Add preliminary support for Cray compiler. * Added Cray compiler wrapper ftn and cc. * Made mpi and openmp default to False in the BuildConfig constructor. * Removed white space. * Follow a more consistent naming scheme for crays, even though the native compiler names are longer (crayftn-cray, craycc-cray). * Changed names again. * Support compilers that do not support OpenMP. * Added documentation for openmp parameter. * Renamed cray compiler wrapper to be CrayCcWrapper and CrayFtnWrapper, to avoid confusion with Craycc. * Fixed incorrect name in comments. --------- Co-authored-by: jasonjunweilyu <161689601+jasonjunweilyu@users.noreply.github.com> Co-authored-by: Luke Hoffmann Co-authored-by: Luke Hoffmann <992315+lukehoffmann@users.noreply.github.com> * Support new and old style of PSyclone command line (no more nemo api etc) * Fix mypy errors. * Added missing tests for calling psyclone, and converting old style to new stle arguments and vice versa. * Added shell tool. * Try to make mypy happy. * Removed debug code. * ToolRepository now only returns default that are available. Updated tests to make tools as available. * Fixed typos and coding style. * Support new and old style of PSyclone command line (no more nemo api etc) * Fix mypy errors. * Added missing tests for calling psyclone, and converting old style to new stle arguments and vice versa. * Updated comment. * Fixed failing tests. * Updated fparser dependency to version 0.2. * Replace old code for handling sentinels with triggering this behaviour in fparser. Require config in constructor of Analyser classes. * Fixed tests for latest changes. * Removed invalid openmp continuation line - since now fparser fails when trying to parse this line. * Added test for disabled openmp parsing. Updated test to work with new test file. * Coding style changes. * Fix flake issues. * Fixed double _. * Make Linker inherit CompilerWrapper * Fix up tests for new Linker inheritence * Fix a flake error * Use linker wrapping to combine flags from the wrapped linker with the linker wrapper. * Minor code cleanup. * Created linker wrapper in ToolRepository. * Try making linker a CompilerSuiteTool instead of a CompilerWrapper. * Updated tests. * Fix support for post-libs. * Fixed mypy. * Removed more accesses to private members. * Added missing type hint. * Make flake8 happy. * Added missing openmp handling in linker. * Addressed issues raised in review. * Forgot that file in previous commit. * Updated linker to always require a compiler (previously it was actually never checked if the wrapped compiler actually exists, it only checked the linker). * Remove unreliable test, since there is no guarantee that a C linker is returned, see issue #379. * Fixed bug that a wrapper would not report openmp status based on the compiler. * Added more description for this test. 
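Several of the bullet points above (the libs argument to link_exe, the pre- and post-library flags, and the per-library flag getter) combine into the workflow sketched below. This is an editorial illustration only; the library names and flag values are made up, and the site-specific settings would normally live in a separate setup script:

    from fab.tools import Category, ToolRepository

    tr = ToolRepository()
    linker = tr.get_tool(Category.LINKER, "linker-gfortran")

    # Site-specific settings: search paths, per-library flags, and
    # libraries that must always come last on the link line.
    linker.add_pre_lib_flags(["-L/site/lib"])
    linker.add_lib_flags("yaxt", ["-lyaxt", "-lyaxt_c"])
    linker.add_post_lib_flags(["-lstdc++"])

    # An application script then only names the libraries it needs:
    #     from fab.steps.link import link_exe
    #     link_exe(state, libs=["yaxt"])   # `state` is the BuildConfig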
--------- Co-authored-by: Luke Hoffmann <992315+lukehoffmann@users.noreply.github.com> Co-authored-by: jasonjunweilyu <161689601+jasonjunweilyu@users.noreply.github.com> Co-authored-by: Luke Hoffmann --- source/fab/tools/compiler.py | 4 +- source/fab/tools/linker.py | 88 +++++-------------- source/fab/tools/tool_repository.py | 16 ++-- .../unit_tests/tools/test_compiler_wrapper.py | 12 +++ tests/unit_tests/tools/test_linker.py | 23 ++--- .../unit_tests/tools/test_tool_repository.py | 6 +- 6 files changed, 53 insertions(+), 96 deletions(-) diff --git a/source/fab/tools/compiler.py b/source/fab/tools/compiler.py index b937e9d1..8e04d011 100644 --- a/source/fab/tools/compiler.py +++ b/source/fab/tools/compiler.py @@ -76,7 +76,9 @@ def mpi(self) -> bool: def openmp(self) -> bool: ''':returns: if the compiler supports openmp or not ''' - return self._openmp_flag != "" + # It is important not to use `_openmp_flag` directly, since a compiler + # wrapper overwrites `openmp_flag`. + return self.openmp_flag != "" @property def openmp_flag(self) -> str: diff --git a/source/fab/tools/linker.py b/source/fab/tools/linker.py index 2acef01b..63a3dd2b 100644 --- a/source/fab/tools/linker.py +++ b/source/fab/tools/linker.py @@ -11,7 +11,7 @@ import os from pathlib import Path -from typing import cast, Dict, List, Optional, Union +from typing import Dict, List, Optional, Union import warnings from fab.tools.category import Category @@ -20,11 +20,15 @@ class Linker(CompilerSuiteTool): - '''This is the base class for any Linker. It takes either another linker - instance, or a compiler instance as parameter in the constructor. Exactly - one of these must be provided. - - :param compiler: an optional compiler instance + '''This is the base class for any Linker. It takes an existing compiler + instance as parameter, and optional another linker. The latter is used + to get linker settings - for example, linker-mpif90-gfortran will use + mpif90-gfortran as compiler (i.e. to test if it is available and get + compilation flags), and linker-gfortran as linker. This way a user + only has to specify linker flags in the most basic class (gfortran), + all other linker wrapper will inherit the settings. + + :param compiler: a compiler instance :param linker: an optional linker instance :param name: name of the linker @@ -32,35 +36,19 @@ class Linker(CompilerSuiteTool): :raises RuntimeError: if neither compiler nor linker is specified. 
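To make the naming scheme in this docstring concrete: the sketch below shows how such a wrapper linker could be put together by hand. It is an editorial illustration that mirrors what ToolRepository.add_tool() does later in this patch, and it assumes the gfortran and mpif90-gfortran tools are registered in the repository:

    from fab.tools import Category, Linker, ToolRepository

    tr = ToolRepository()
    mpif90 = tr.get_tool(Category.FORTRAN_COMPILER, "mpif90-gfortran")
    linker_gfortran = tr.get_tool(Category.LINKER, "linker-gfortran")

    # Availability checks and compilation flags come from mpif90-gfortran,
    # while library flags defined on linker-gfortran are reused through
    # the wrapped linker.
    linker = Linker(mpif90, linker=linker_gfortran,
                    name="linker-mpif90-gfortran")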
''' - def __init__(self, compiler: Optional[Compiler] = None, + def __init__(self, compiler: Compiler, linker: Optional[Linker] = None, - exec_name: Optional[str] = None, name: Optional[str] = None): - if linker and compiler: - raise RuntimeError("Both compiler and linker is specified in " - "linker constructor.") - if not linker and not compiler: - raise RuntimeError("Neither compiler nor linker is specified in " - "linker constructor.") self._compiler = compiler self._linker = linker - search_linker = self - while search_linker._linker: - search_linker = search_linker._linker - final_compiler = search_linker._compiler if not name: - assert final_compiler # make mypy happy - name = f"linker-{final_compiler.name}" - - if not exec_name: - # This will search for the name in linker or compiler - exec_name = self.get_exec_name() + name = f"linker-{compiler.name}" super().__init__( name=name, - exec_name=exec_name, + exec_name=compiler.exec_name, suite=self.suite, category=Category.LINKER) @@ -76,51 +64,31 @@ def check_available(self) -> bool: ''':returns: whether this linker is available by asking the wrapped linker or compiler. ''' - if self._compiler: - return self._compiler.check_available() - assert self._linker # make mypy happy - return self._linker.check_available() - - def get_exec_name(self) -> str: - ''':returns: the name of the executable by asking the wrapped - linker or compiler.''' - if self._compiler: - return self._compiler.exec_name - assert self._linker # make mypy happy - return self._linker.exec_name + return self._compiler.check_available() @property def suite(self) -> str: ''':returns: the suite this linker belongs to by getting it from - the wrapper compiler or linker.''' - return cast(CompilerSuiteTool, (self._compiler or self._linker)).suite + the wrapped compiler.''' + return self._compiler.suite @property def mpi(self) -> bool: ''':returns" whether this linker supports MPI or not by checking - with the wrapper compiler or linker.''' - if self._compiler: - return self._compiler.mpi - assert self._linker # make mypy happy - return self._linker.mpi + with the wrapped compiler.''' + return self._compiler.mpi @property def openmp(self) -> bool: - ''':returns" whether this linker supports OpenMP or not by checking - with the wrapper compiler or linker.''' - if self._compiler: - return self._compiler.openmp - assert self._linker # make mypy happy - return self._linker.openmp + ''':returns: whether this linker supports OpenMP or not by checking + with the wrapped compiler.''' + return self._compiler.openmp @property def output_flag(self) -> str: ''':returns: the flag that is used to specify the output name. ''' - if self._compiler: - return self._compiler.output_flag - assert self._linker # make mypy happy - return self._linker.output_flag + return self._compiler.output_flag def get_lib_flags(self, lib: str) -> List[str]: '''Gets the standard flags for a standard library @@ -238,18 +206,10 @@ def link(self, input_files: List[Path], output_file: Path, params: List[Union[str, Path]] = [] - # Find the compiler by following the (potentially - # layered) linker wrapper. - linker = self - while linker._linker: - linker = linker._linker - # Now we must have a compiler - compiler = linker._compiler - assert compiler # make mypy happy - params.extend(compiler.flags) + params.extend(self._compiler.flags) if openmp: - params.append(compiler.openmp_flag) + params.append(self._compiler.openmp_flag) # TODO: why are the .o files sorted? 
That shouldn't matter params.extend(sorted(map(str, input_files))) diff --git a/source/fab/tools/tool_repository.py b/source/fab/tools/tool_repository.py index 1bf839f8..a9749757 100644 --- a/source/fab/tools/tool_repository.py +++ b/source/fab/tools/tool_repository.py @@ -117,19 +117,19 @@ def add_tool(self, tool: Tool): compiler = cast(Compiler, tool) if isinstance(compiler, CompilerWrapper): # If we have a compiler wrapper, create a new linker using - # the linker based on the wrappped compiler. For example, when + # the linker based on the wrapped compiler. For example, when # creating linker-mpif90-gfortran, we want this to be based on - # linker-gfortran (and not on the compiler mpif90-gfortran), - # since the linker-gfortran might have library definitions - # that should be reused. So we first get the existing linker - # (since the compiler exists, a linker for this compiler was - # already created and must exist). + # linker-gfortran. The compiler mpif90-gfortran will be the + # wrapper compiler. Reason is that e.g. linker-gfortran might + # have library definitions that should be reused. So we first + # get the existing linker (since the compiler exists, a linker + # for this compiler was already created and must exist). other_linker = self.get_tool( category=Category.LINKER, name=f"linker-{compiler.compiler.name}") other_linker = cast(Linker, other_linker) - linker = Linker(linker=other_linker, - exec_name=compiler.exec_name, + linker = Linker(compiler, + linker=other_linker, name=f"linker-{compiler.name}") self[linker.category].append(linker) else: diff --git a/tests/unit_tests/tools/test_compiler_wrapper.py b/tests/unit_tests/tools/test_compiler_wrapper.py index 07f9a08b..11096f0c 100644 --- a/tests/unit_tests/tools/test_compiler_wrapper.py +++ b/tests/unit_tests/tools/test_compiler_wrapper.py @@ -257,6 +257,18 @@ def test_compiler_wrapper_flags_independent(): assert mpicc.flags == ["-a", "-b"] assert mpicc.openmp_flag == gcc.openmp_flag + # Test a compiler wrapper correctly queries the wrapper compiler for + # openmp flag: Set the wrapper to have no _openmp_flag (which is + # actually the default, since the wrapper never sets its own flag), but + # gcc does have a flag, so mpicc should report that is supports openmp. + # mpicc.openmp calls openmp of its base class (Compiler), which queries + # if an openmp flag is defined. This query must go to the openmp property, + # since the wrapper overwrites this property to return the wrapped + # compiler's flag (and not the wrapper's flag, which would not be defined) + with mock.patch.object(mpicc, "_openmp_flag", ""): + assert mpicc._openmp_flag == "" + assert mpicc.openmp + # Adding flags to the wrapper should not affect the wrapped compiler: mpicc.add_flags(["-d", "-e"]) assert gcc.flags == ["-a", "-b"] diff --git a/tests/unit_tests/tools/test_linker.py b/tests/unit_tests/tools/test_linker.py index 052af88d..cd2d8dc9 100644 --- a/tests/unit_tests/tools/test_linker.py +++ b/tests/unit_tests/tools/test_linker.py @@ -41,28 +41,15 @@ def test_linker(mock_c_compiler, mock_fortran_compiler): assert linker.flags == [] -def test_linker_constructor_error(mock_c_compiler): - '''Test the linker constructor with invalid parameters.''' - - with pytest.raises(RuntimeError) as err: - Linker() - assert ("Neither compiler nor linker is specified in linker constructor." 
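The comment added to the compiler-wrapper test above describes a subtle property lookup. A condensed editorial sketch of the same behaviour, assuming gcc's OpenMP flag is -fopenmp and that Mpicc wraps a given compiler as elsewhere in this patch:

    from fab.tools import Gcc
    from fab.tools.compiler_wrapper import Mpicc

    gcc = Gcc()            # defines an OpenMP flag of its own
    mpicc = Mpicc(gcc)     # the wrapper never sets its own _openmp_flag

    # Compiler.openmp is implemented as `self.openmp_flag != ""`, and the
    # wrapper overrides the openmp_flag property to return the wrapped
    # compiler's flag, so the wrapper still reports OpenMP support:
    assert mpicc.openmp_flag == "-fopenmp"
    assert mpicc.openmp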
- in str(err.value)) - with pytest.raises(RuntimeError) as err: - Linker(compiler=mock_c_compiler, linker=mock_c_compiler) - assert ("Both compiler and linker is specified in linker constructor." - in str(err.value)) - - @pytest.mark.parametrize("mpi", [True, False]) def test_linker_mpi(mock_c_compiler, mpi): '''Test that linker wrappers handle MPI as expected.''' mock_c_compiler._mpi = mpi - linker = Linker(compiler=mock_c_compiler) + linker = Linker(mock_c_compiler) assert linker.mpi == mpi - wrapped_linker = Linker(linker=linker) + wrapped_linker = Linker(mock_c_compiler, linker=linker) assert wrapped_linker.mpi == mpi @@ -80,7 +67,7 @@ def test_linker_openmp(mock_c_compiler, openmp): linker = Linker(compiler=mock_c_compiler) assert linker.openmp == openmp - wrapped_linker = Linker(linker=linker) + wrapped_linker = Linker(mock_c_compiler, linker=linker) assert wrapped_linker.openmp == openmp @@ -103,7 +90,7 @@ def test_linker_check_available(mock_c_compiler): # Then test the usage of a linker wrapper. The linker will call the # corresponding function in the wrapper linker: - wrapped_linker = Linker(linker=linker) + wrapped_linker = Linker(mock_c_compiler, linker=linker) with mock.patch('fab.tools.compiler.Compiler.get_version', return_value=(1, 2, 3)): assert wrapped_linker.check_available() @@ -342,7 +329,7 @@ def test_linker_nesting(mock_c_compiler): linker1.add_lib_flags("lib_a", ["a_from_1"]) linker1.add_lib_flags("lib_c", ["c_from_1"]) linker1.add_post_lib_flags(["post_lib1"]) - linker2 = Linker(linker=linker1) + linker2 = Linker(mock_c_compiler, linker=linker1) linker2.add_pre_lib_flags(["pre_lib2"]) linker2.add_lib_flags("lib_b", ["b_from_2"]) linker2.add_lib_flags("lib_c", ["c_from_2"]) diff --git a/tests/unit_tests/tools/test_tool_repository.py b/tests/unit_tests/tools/test_tool_repository.py index 0c7d77e5..012487d4 100644 --- a/tests/unit_tests/tools/test_tool_repository.py +++ b/tests/unit_tests/tools/test_tool_repository.py @@ -11,7 +11,7 @@ import pytest from fab.tools import (Ar, Category, FortranCompiler, Gcc, Gfortran, Ifort, - Linker, ToolRepository) + ToolRepository) def test_tool_repository_get_singleton_new(): @@ -62,10 +62,6 @@ def test_tool_repository_get_default(): openmp=False) assert isinstance(gfortran, Gfortran) - gcc_linker = tr.get_default(Category.LINKER, mpi=False, openmp=False) - assert isinstance(gcc_linker, Linker) - assert gcc_linker.name == "linker-gcc" - gcc = tr.get_default(Category.C_COMPILER, mpi=False, openmp=False) assert isinstance(gcc, Gcc) From 34469e40c8ba304a8998db5b8c8e720466294417 Mon Sep 17 00:00:00 2001 From: jasonjunweilyu <161689601+jasonjunweilyu@users.noreply.github.com> Date: Fri, 7 Feb 2025 23:10:20 +1000 Subject: [PATCH 5/6] Removed fparser stop concatenation workround after fparser update (#386) * Removed fparser stop concatenation workround after fparser update * Removed redundant imports for code check --------- Co-authored-by: Junwei Lyu --- run_configs/lfric/atm.py | 6 +----- run_configs/lfric/gungho.py | 5 +---- run_configs/lfric/lfric_common.py | 32 ------------------------------- run_configs/lfric/mesh_tools.py | 4 +--- 4 files changed, 3 insertions(+), 44 deletions(-) diff --git a/run_configs/lfric/atm.py b/run_configs/lfric/atm.py index 3f93c588..92997bbf 100755 --- a/run_configs/lfric/atm.py +++ b/run_configs/lfric/atm.py @@ -20,8 +20,7 @@ from fab.tools import ToolBox from grab_lfric import lfric_source_config, gpl_utils_source_config -from lfric_common import (API, configurator, 
fparser_workaround_stop_concatenation, - get_transformation_script) +from lfric_common import (API, configurator, get_transformation_script) logger = logging.getLogger('fab') @@ -250,9 +249,6 @@ def file_filtering(config): api=API, ) - # todo: do we need this one in here? - fparser_workaround_stop_concatenation(state) - analyse( state, root_symbol='lfric_atm', diff --git a/run_configs/lfric/gungho.py b/run_configs/lfric/gungho.py index 7f075c10..011afa89 100755 --- a/run_configs/lfric/gungho.py +++ b/run_configs/lfric/gungho.py @@ -22,8 +22,7 @@ from fab.tools import ToolBox from grab_lfric import lfric_source_config, gpl_utils_source_config -from lfric_common import (API, configurator, fparser_workaround_stop_concatenation, - get_transformation_script) +from lfric_common import (API, configurator, get_transformation_script) logger = logging.getLogger('fab') @@ -75,8 +74,6 @@ api=API, ) - fparser_workaround_stop_concatenation(state) - analyse( state, root_symbol='gungho_model', diff --git a/run_configs/lfric/lfric_common.py b/run_configs/lfric/lfric_common.py index fe8eae03..cbf1ba09 100644 --- a/run_configs/lfric/lfric_common.py +++ b/run_configs/lfric/lfric_common.py @@ -1,10 +1,8 @@ import logging import os -import shutil from typing import Optional from pathlib import Path -from fab.artefacts import ArtefactSet from fab.build_config import BuildConfig from fab.steps import step from fab.steps.find_source_files import find_source_files @@ -84,36 +82,6 @@ def configurator(config, lfric_source: Path, gpl_utils_source: Path, rose_meta_c find_source_files(config, source_root=config_dir) -# ============================================================================ -@step -def fparser_workaround_stop_concatenation(config): - """ - fparser can't handle string concat in a stop statement. This step is - a workaround. - - https://github.com/stfc/fparser/issues/330 - - """ - feign_path = None - for file_path in config.artefact_store[ArtefactSet.FORTRAN_BUILD_FILES]: - if file_path.name == 'feign_config_mod.f90': - feign_path = file_path - break - else: - raise RuntimeError("Could not find 'feign_config_mod.f90'.") - - # rename "broken" version - broken_version = feign_path.with_suffix('.broken') - shutil.move(feign_path, broken_version) - - # make fixed version - bad = "_config: '// &\n 'Unable to close temporary file'" - good = "_config: Unable to close temporary file'" - - open(feign_path, 'wt').write( - open(broken_version, 'rt').read().replace(bad, good)) - - # ============================================================================ def get_transformation_script(fpath: Path, config: BuildConfig) -> Optional[Path]: diff --git a/run_configs/lfric/mesh_tools.py b/run_configs/lfric/mesh_tools.py index fde5b793..5d87c961 100755 --- a/run_configs/lfric/mesh_tools.py +++ b/run_configs/lfric/mesh_tools.py @@ -13,7 +13,7 @@ from fab.steps.psyclone import psyclone, preprocess_x90 from fab.tools import ToolBox -from lfric_common import API, configurator, fparser_workaround_stop_concatenation +from lfric_common import API, configurator from grab_lfric import lfric_source_config, gpl_utils_source_config @@ -60,8 +60,6 @@ api=API, ) - fparser_workaround_stop_concatenation(state) - analyse( state, root_symbol=['cubedsphere_mesh_generator', 'planar_mesh_generator', 'summarise_ugrid'], From 0829610f6119f7bf1111d400e54732ff12081eab Mon Sep 17 00:00:00 2001 From: Joerg Henrichs Date: Thu, 27 Feb 2025 13:50:15 +1100 Subject: [PATCH 6/6] Bring develop up to main (#389) * Fixed failing tests. 
* Improve support for MPI wrappers (#352) * Updated documentation. * Addressed issues raised in review. * Added more info about removing lib info. * Update clang binding module. (#376) * Removed remove_lib_flags function since there seems to be no real use case for it. * Remove support for '-' in nvidia version numbers (which causes problems with compiler wrapper which do not support this). * Fix regex for Cray compiler. * More improvements for Cray version regex. * Cleaner handling of linking external libraries (#374) * Removed likely a merge artefact. * Try to fix pytest failures on github. --------- Co-authored-by: Matthew Hambley --- Documentation/source/advanced_config.rst | 65 +++++++++++++++++-- source/fab/steps/compile_c.py | 2 + source/fab/steps/compile_fortran.py | 4 ++ source/fab/steps/link.py | 6 +- source/fab/tools/compiler.py | 40 ++---------- source/fab/tools/compiler_wrapper.py | 3 + source/fab/tools/linker.py | 10 --- tests/unit_tests/steps/test_link.py | 10 +-- tests/unit_tests/tools/test_compiler.py | 4 +- .../unit_tests/tools/test_compiler_wrapper.py | 22 +++---- tests/unit_tests/tools/test_linker.py | 14 ---- 11 files changed, 96 insertions(+), 84 deletions(-) diff --git a/Documentation/source/advanced_config.rst b/Documentation/source/advanced_config.rst index d2b54ebf..ec19008f 100644 --- a/Documentation/source/advanced_config.rst +++ b/Documentation/source/advanced_config.rst @@ -186,20 +186,75 @@ Linker flags ------------ Probably the most common instance of the need to pass additional arguments is -to specify 3rd party libraries at the link stage. +to specify 3rd party libraries at the link stage. The linker tool allow +for the definition of library-specific flags: for each library, the user can +specify the required linker flags for this library. In the linking step, +only the name of the libraries to be linked is then required. The linker +object will then use the required linking flags. Typically, a site-specific +setup set (see for example https://github.com/MetOffice/lfric-baf) will +specify the right flags for each site, and the application itself only +needs to list the name of the libraries. This way, the application-specific +Fab script is independent from any site-specific settings. Still, an +application-specific script can also overwrite any site-specific setting, +for example if a newer version of a dependency is to be evaluated. + +The settings for a library are defined as follows: .. code-block:: :linenos: - link_exe(state, flags=['-lm', '-lnetcdf']) + tr = ToolRepository() + linker = tr.get_tool(Category.LINKER, "linker-ifort") -Linkers will be pre-configured with flags for common libraries. Where possible, -a library name should be used to include the required flags for linking. + linker.add_lib_flags("yaxt", ["-L/some_path", "-lyaxt", "-lyaxt_c"]) + linker.add_lib_flags("xios", ["-lxios"]) + +This will define two libraries called ``yaxt`` and ``xios``. In the link step, +the application only needs to specify the name of the libraries required, e.g.: + +.. code-block:: + :linenos: + + link_exe(state, libs=["yaxt", "xios"]) + +The linker will then use the specified options. + +A linker object also allows to define options that should always be added, +either as options before any library details, or at the very end. For example: + +.. 
code-block:: + :linenos: + + linker.add_pre_lib_flags(["-L/my/common/library/path"]) + linker.add_post_lib_flags(["-lstdc++"]) + +The pre_lib_flags can be used to specify library paths that contain +several libraries only once, and this makes it easy to evaluate a different +set of libraries. Additionally, this can also be used to add common +linking options, e.g. Cray's ``-Ktrap=fp``. + +The post_lib_flags can be used for additional common libraries that need +to be linked in. For example, if the application contains a dependency to +C++ but it is using the Fortran compiler for linking, then the C++ libraries +need to be explicitly added. But if there are several libraries depending +on it, you would have to specify this several times (forcing the linker to +re-read the library several times). Instead, you can just add it to the +post flags once. + +The linker step itself can also take optional flags: .. code-block:: :linenos: - link_exe(state, libs=['netcdf']) + link_exe(state, flags=['-Ktrap=fp']) + +These flags will be added to the very end of the the linker options, +i.e. after any other library or post-lib flag. Note that the example above is +not actually recommended to use, since the specified flag is only +valid for certain linker, and a Fab application script should in general +not hard-code flags for a specific linker. Adding the flag to the linker +instance itself, as shown further above, is the better approach. + Path-specific flags ------------------- diff --git a/source/fab/steps/compile_c.py b/source/fab/steps/compile_c.py index 283d9607..c7d014d7 100644 --- a/source/fab/steps/compile_c.py +++ b/source/fab/steps/compile_c.py @@ -127,6 +127,8 @@ def _compile_file(arg: Tuple[AnalysedC, MpCommonArgs]): if compiler.category != Category.C_COMPILER: raise RuntimeError(f"Unexpected tool '{compiler.name}' of category " f"'{compiler.category}' instead of CCompiler") + # Tool box returns a Tool, in order to make mypy happy, we need + # to cast it to be a Compiler. compiler = cast(Compiler, compiler) with Timer() as timer: flags = Flags(mp_payload.flags.flags_for_path(path=analysed_file.fpath, diff --git a/source/fab/steps/compile_fortran.py b/source/fab/steps/compile_fortran.py index fe6f479b..e5767fae 100644 --- a/source/fab/steps/compile_fortran.py +++ b/source/fab/steps/compile_fortran.py @@ -136,6 +136,8 @@ def handle_compiler_args(config: BuildConfig, common_flags=None, if compiler.category != Category.FORTRAN_COMPILER: raise RuntimeError(f"Unexpected tool '{compiler.name}' of category " f"'{compiler.category}' instead of FortranCompiler") + # The ToolBox returns a Tool. In order to make mypy happy, we need to + # cast this to become a Compiler. 
compiler = cast(Compiler, compiler) logger.info( f'Fortran compiler is {compiler} {compiler.get_version_string()}') @@ -268,6 +270,8 @@ def process_file(arg: Tuple[AnalysedFortran, MpCommonArgs]) \ raise RuntimeError(f"Unexpected tool '{compiler.name}' of " f"category '{compiler.category}' instead of " f"FortranCompiler") + # The ToolBox returns a Tool, but we need to tell mypy that + # this is a Compiler compiler = cast(Compiler, compiler) flags = Flags(mp_common_args.flags.flags_for_path( path=analysed_file.fpath, config=config)) diff --git a/source/fab/steps/link.py b/source/fab/steps/link.py index e67a8cce..902defd9 100644 --- a/source/fab/steps/link.py +++ b/source/fab/steps/link.py @@ -9,7 +9,7 @@ """ import logging from string import Template -from typing import List, Optional, Union +from typing import List, Optional from fab.artefacts import ArtefactSet from fab.steps import step @@ -34,8 +34,8 @@ def __call__(self, artefact_store): @step def link_exe(config, - libs: Union[List[str], None] = None, - flags: Union[List[str], None] = None, + libs: Optional[List[str]] = None, + flags: Optional[List[str]] = None, source: Optional[ArtefactsGetter] = None): """ Link object files into an executable for every build target. diff --git a/source/fab/tools/compiler.py b/source/fab/tools/compiler.py index 8e04d011..7448411b 100644 --- a/source/fab/tools/compiler.py +++ b/source/fab/tools/compiler.py @@ -69,7 +69,7 @@ def __init__(self, name: str, @property def mpi(self) -> bool: - '''Returns whether this compiler supports MPI or not.''' + ''':returns: whether this compiler supports MPI or not.''' return self._mpi @property @@ -82,7 +82,7 @@ def openmp(self) -> bool: @property def openmp_flag(self) -> str: - '''Returns the flag to enable OpenMP.''' + ''':returns: the flag to enable OpenMP.''' return self._openmp_flag @property @@ -474,21 +474,7 @@ class Nvc(CCompiler): def __init__(self, name: str = "nvc", exec_name: str = "nvc"): super().__init__(name, exec_name, suite="nvidia", openmp_flag="-mp", - version_regex=r"nvc (\d[\d\.-]+\d)") - - def run_version_command( - self, version_command: Optional[str] = '--version') -> str: - '''Run the compiler's command to get its version. This implementation - runs the function in the base class, and changes any '-' into a - '.' to support nvidia version numbers which have dashes, e.g. 23.5-0. - - :param version_command: The compiler argument used to get version info. - - :returns: The output from the version command, with any '-' replaced - with '.' - ''' - version_string = super().run_version_command() - return version_string.replace("-", ".") + version_regex=r"nvc (\d[\d\.]+\d)") # ============================================================================ @@ -506,21 +492,7 @@ def __init__(self, name: str = "nvfortran", exec_name: str = "nvfortran"): module_folder_flag="-module", openmp_flag="-mp", syntax_only_flag="-Msyntax-only", - version_regex=r"nvfortran (\d[\d\.-]+\d)") - - def run_version_command( - self, version_command: Optional[str] = '--version') -> str: - '''Run the compiler's command to get its version. This implementation - runs the function in the base class, and changes any '-' into a - '.' to support nvidia version numbers which have dashes, e.g. 23.5-0. - - :param version_command: The compiler argument used to get version info. - - :returns: The output from the version command, with any '-' replaced - with '.' 
-        '''
-        version_string = super().run_version_command()
-        return version_string.replace("-", ".")
+                         version_regex=r"nvfortran (\d[\d\.]+\d)")
 
 
 # ============================================================================
@@ -545,7 +517,7 @@ class Craycc(CCompiler):
     def __init__(self, name: str = "craycc-cc", exec_name: str = "cc"):
         super().__init__(name, exec_name, suite="cray", mpi=True,
                          openmp_flag="-homp",
-                         version_regex=r"Cray [Cc][^\d]* (\d[\d\.]+\d) ")
+                         version_regex=r"Cray [Cc][^\d]* (\d[\d\.]+\d)")
 
 
 # ============================================================================
@@ -564,4 +536,4 @@ def __init__(self, name: str = "crayftn-ftn", exec_name: str = "ftn"):
                          openmp_flag="-homp",
                          syntax_only_flag="-syntax-only",
                          version_regex=(r"Cray Fortran : Version "
-                                        r"(\d[\d\.]+\d) "))
+                                        r"(\d[\d\.]+\d)"))
diff --git a/source/fab/tools/compiler_wrapper.py b/source/fab/tools/compiler_wrapper.py
index 9338f848..ae9089c0 100644
--- a/source/fab/tools/compiler_wrapper.py
+++ b/source/fab/tools/compiler_wrapper.py
@@ -148,6 +148,9 @@ def compile_file(self, input_file: Path,
             a syntax check
 
         '''
+        # TODO #370: replace change_exec_name, and instead provide
+        # a function that returns the whole command line, which can
+        # then be modified here.
         orig_compiler_name = self._compiler.exec_name
         self._compiler.change_exec_name(self.exec_name)
         if add_flags is None:
diff --git a/source/fab/tools/linker.py b/source/fab/tools/linker.py
index 63a3dd2b..0fb0f61d 100644
--- a/source/fab/tools/linker.py
+++ b/source/fab/tools/linker.py
@@ -125,16 +125,6 @@ def add_lib_flags(self, lib: str, flags: List[str],
         # Make a copy to avoid modifying the caller's list
         self._lib_flags[lib] = flags[:]
 
-    def remove_lib_flags(self, lib: str):
-        '''Remove any flags configured for a standard library
-
-        :param lib: the library name
-        '''
-        try:
-            del self._lib_flags[lib]
-        except KeyError:
-            pass
-
     def add_pre_lib_flags(self, flags: List[str]):
         '''Add a set of flags to use before any library-specific flags
diff --git a/tests/unit_tests/steps/test_link.py b/tests/unit_tests/steps/test_link.py
index e9a6750c..8669f4d8 100644
--- a/tests/unit_tests/steps/test_link.py
+++ b/tests/unit_tests/steps/test_link.py
@@ -14,7 +14,7 @@
 from fab.artefacts import ArtefactSet, ArtefactStore
 from fab.steps.link import link_exe
-from fab.tools import FortranCompiler, Linker
+from fab.tools import FortranCompiler, Linker, ToolBox
 
 import pytest
 
@@ -22,10 +22,11 @@ class TestLinkExe:
     '''Test class for linking an executable.
     '''
-    def test_run(self, tool_box):
+    def test_run(self, tool_box: ToolBox):
         '''Ensure the command is formed correctly, with the flags at the end
         and that environment variable FFLAGS is picked up.
         '''
+
         config = SimpleNamespace(
             project_workspace=Path('workspace'),
             artefact_store=ArtefactStore(),
@@ -47,14 +48,13 @@ def test_run(self, tool_box):
             syntax_only_flag=None, compile_flag=None, output_flag=None,
             openmp_flag=None)
-        mock_compiler.run = mock.Mock()
 
         linker = Linker(compiler=mock_compiler)
         # Mark the linker as available so it can be added to the tool box
         linker._is_available = True
 
         # Add a custom library to the linker
-        linker.add_lib_flags('mylib', ['-L/my/lib', '-mylib'])
+        linker.add_lib_flags('mylib', ['-L/my/lib', '-lmylib'])
         tool_box.add_tool(linker, silent_replace=True)
 
         mock_result = mock.Mock(returncode=0, stdout="abc\ndef".encode())
         with mock.patch('fab.tools.tool.subprocess.run',
@@ -68,6 +68,6 @@ def test_run(self, tool_box):
         tool_run.assert_called_with(
             ['mock_fortran_compiler.exe', '-L/foo1/lib', '-L/foo2/lib',
              'bar.o', 'foo.o',
-             '-L/my/lib', '-mylib', '-fooflag', '-barflag',
+             '-L/my/lib', '-lmylib', '-fooflag', '-barflag',
              '-o', 'workspace/foo'],
             capture_output=True, env=None, cwd=None, check=False)
diff --git a/tests/unit_tests/tools/test_compiler.py b/tests/unit_tests/tools/test_compiler.py
index 60c18d78..fbbce38f 100644
--- a/tests/unit_tests/tools/test_compiler.py
+++ b/tests/unit_tests/tools/test_compiler.py
@@ -778,7 +778,7 @@ def test_nvc_get_version_23_5_0():
         """)
     nvc = Nvc()
     with mock.patch.object(nvc, "run", mock.Mock(return_value=full_output)):
-        assert nvc.get_version() == (23, 5, 0)
+        assert nvc.get_version() == (23, 5)
 
 
 def test_nvc_get_version_with_icc_string():
@@ -819,7 +819,7 @@ def test_nvfortran_get_version_23_5_0():
     nvfortran = Nvfortran()
     with mock.patch.object(nvfortran, "run",
                            mock.Mock(return_value=full_output)):
-        assert nvfortran.get_version() == (23, 5, 0)
+        assert nvfortran.get_version() == (23, 5)
 
 
 def test_nvfortran_get_version_with_ifort_string():
diff --git a/tests/unit_tests/tools/test_compiler_wrapper.py b/tests/unit_tests/tools/test_compiler_wrapper.py
index 11096f0c..78a53249 100644
--- a/tests/unit_tests/tools/test_compiler_wrapper.py
+++ b/tests/unit_tests/tools/test_compiler_wrapper.py
@@ -7,7 +7,7 @@
 
 '''Tests the compiler wrapper implementation.
 '''
-from pathlib import Path, PosixPath
+from pathlib import Path
 from unittest import mock
 
 import pytest
 
@@ -179,10 +179,10 @@ def test_compiler_wrapper_fortran_with_add_args():
                         syntax_only=True)
     # Notice that "-J/b" has been removed
     mpif90.compiler.run.assert_called_with(
-        cwd=PosixPath('.'), additional_parameters=['-c', "-O3",
-                                                   '-fsyntax-only',
-                                                   '-J', '/module_out',
-                                                   'a.f90', '-o', 'a.o'])
+        cwd=Path('.'), additional_parameters=['-c', "-O3",
+                                              '-fsyntax-only',
+                                              '-J', '/module_out',
+                                              'a.f90', '-o', 'a.o'])
 
 
 def test_compiler_wrapper_fortran_with_add_args_unnecessary_openmp():
@@ -199,7 +199,7 @@ def test_compiler_wrapper_fortran_with_add_args_unnecessary_openmp():
                         add_flags=["-fopenmp", "-O3"],
                         openmp=True, syntax_only=True)
     mpif90.compiler.run.assert_called_with(
-        cwd=PosixPath('.'),
+        cwd=Path('.'),
         additional_parameters=['-c', '-fopenmp', '-fopenmp', '-O3',
                                '-fsyntax-only', '-J', '/module_out',
                                'a.f90', '-o', 'a.o'])
@@ -219,8 +219,8 @@ def test_compiler_wrapper_c_with_add_args():
     mpicc.compile_file(Path("a.f90"), "a.o", openmp=False,
                        add_flags=["-O3"])
     mpicc.compiler.run.assert_called_with(
-        cwd=PosixPath('.'), additional_parameters=['-c', "-O3",
-                                                   'a.f90', '-o', 'a.o'])
+        cwd=Path('.'), additional_parameters=['-c', "-O3", 'a.f90',
+                                              '-o', 'a.o'])
     # Invoke C compiler with syntax-only flag (which is only supported
     # by Fortran compilers), which should raise an exception.
     with pytest.raises(RuntimeError) as err:
@@ -238,7 +238,7 @@ def test_compiler_wrapper_c_with_add_args():
                        add_flags=["-fopenmp", "-O3"],
                        openmp=True)
     mpicc.compiler.run.assert_called_with(
-        cwd=PosixPath('.'),
+        cwd=Path('.'),
         additional_parameters=['-c', '-fopenmp', '-fopenmp', '-O3',
                                'a.f90', '-o', 'a.o'])
 
@@ -293,7 +293,7 @@ def test_compiler_wrapper_flags_with_add_arg():
     mpicc.compile_file(Path("a.f90"), "a.o", add_flags=["-f"],
                        openmp=True)
     mpicc.compiler.run.assert_called_with(
-        cwd=PosixPath('.'),
+        cwd=Path('.'),
         additional_parameters=["-c", "-fopenmp", "-a", "-b", "-d", "-e",
                                "-f", "a.f90", "-o", "a.o"])
 
@@ -312,7 +312,7 @@ def test_compiler_wrapper_flags_without_add_arg():
     # Test if no add_flags are specified:
     mpicc.compile_file(Path("a.f90"), "a.o", openmp=True)
     mpicc.compiler.run.assert_called_with(
-        cwd=PosixPath('.'),
+        cwd=Path('.'),
         additional_parameters=["-c", "-fopenmp", "-a", "-b", "-d", "-e",
                                "a.f90", "-o", "a.o"])
diff --git a/tests/unit_tests/tools/test_linker.py b/tests/unit_tests/tools/test_linker.py
index cd2d8dc9..18f27bbd 100644
--- a/tests/unit_tests/tools/test_linker.py
+++ b/tests/unit_tests/tools/test_linker.py
@@ -177,20 +177,6 @@ def test_linker_add_lib_flags_overwrite_silent(mock_linker):
     assert result == ["-t", "-b"]
 
 
-def test_linker_remove_lib_flags(mock_linker):
-    """Linker should provide a way to remove the flags for a library"""
-    mock_linker.remove_lib_flags("netcdf")
-
-    with pytest.raises(RuntimeError) as err:
-        mock_linker.get_lib_flags("netcdf")
-    assert "Unknown library name: 'netcdf'" in str(err.value)
-
-
-def test_linker_remove_lib_flags_unknown(mock_linker):
-    """Linker should silently allow removing flags for unknown library"""
-    mock_linker.remove_lib_flags("unknown")
-
-
 # ====================
 # Linking:
 # ====================