* Remove CI jobs related to Python 2.7
* Remove Python 2.7 specific code from Spack core
* Remove externals needed only for Python 2
* Remove llnl.util.compat
Argparse started raising ArgumentError exceptions
when the same parser is added twice. Therefore, we
perform the addition only if the parser is not
already present.
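A minimal sketch of that guarding pattern, using a hypothetical `get_subparser` helper rather than Spack's actual code:

```python
import argparse

def get_subparser(subparsers, name, **kwargs):
    # `choices` maps already-registered subcommand names to their parsers,
    # so we can return the existing one instead of adding it a second time
    # (which newer argparse versions reject with ArgumentError).
    if name in subparsers.choices:
        return subparsers.choices[name]
    return subparsers.add_parser(name, **kwargs)

parser = argparse.ArgumentParser(prog="spack")
subparsers = parser.add_subparsers(dest="command")
install = get_subparser(subparsers, "install", help="install packages")
# A second registration returns the existing parser instead of raising
assert get_subparser(subparsers, "install") is install
```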
Port match syntax to our unparser
* ci: restore coverage computation
* Mark "test_foreground_background" as xfail
* Mark "test_foreground_background_output" as xfail
* Make number of processes explicit, remove verbosity on linux
* Run coverage on just 3 Python jobs for linux
* Run coverage on just 2 Python jobs for linux
* Add back verbose output, since previously we did not encounter the xdist internal error
* Reduce the workers to 2
* Try to use command line
* ci: remove !docs from "core" filters
As written now, it causes package-only PRs
to run with coverage.
* Try to skip the job under a condition, to see if the workflow proceeds
* Try to cancel a running CI job
* Simplify linux unit-tests, skip windows unit-tests on package PRs
* Reduce the inputs to unit-tests workflow
* Move control logic to main workflow, remove inputs
* Revert "Move control logic to main workflow, remove inputs"
This reverts commit 0c46fece4c49eb7a37585ec3ba651a31d7f958af.
* Do not compute "with_coverage" since it's always equal to "core"
* Remove workflow dispatch from unit tests
* Revert "Revert "Move control logic to main workflow, remove inputs""
This reverts commit dd4e4a4e61a825901e736348fd044d37e88c90b5.
* Try to skip all from the main workflow
* Add back bootstrap to needed checks for "all"
* Restore the correct logic for conditionals
* Add two no-op jobs named "all-prechecks" and "all"
These are a suggestion from @tgamblin: they are stably named markers we
can use from GitLab, and possibly for required checks, to make CI more
resilient to refactors that change the names of specific checks.
* Enable parallel testing using xdist for unit testing in CI
* Normalize tmp paths to deal with macos
* add -u flag compatibility to spack python
For now, the flag is accepted and ignored. The entire reason for doing this
is the usage with xdist, where spack is invoked via `python -u spack python`
and is then passed `-u` by xdist. The flag should never be
used without explicitly passing -u to the executing Python interpreter.
* use spack python in xdist to support python 2
When running on Python 2, Spack has many import cycles unless started
through main. To allow that, this uses `spack python` as the
interpreter, leveraging the `-u` support so xdist doesn't error out when
it unconditionally requests unbuffered binary I/O.
* Use shutil.move to account for tmpdir sometimes being on a separate filesystem
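A rough illustration of that last point (hypothetical paths, not Spack's actual code): `os.rename` fails with EXDEV when source and destination live on different filesystems, while `shutil.move` falls back to copy-and-delete.

```python
import os
import shutil
import tempfile

# Hypothetical example: move a staging directory out of a tmpdir that may
# live on a different filesystem (e.g. a separate /tmp mount).
src = tempfile.mkdtemp(prefix="spack-stage-")
dst = os.path.join(os.getcwd(), "stage-moved")

# os.rename(src, dst) would raise OSError (EXDEV) across filesystems;
# shutil.move copies and then removes the source in that case.
shutil.move(src, dst)
```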
This patchset refactors our GitHub Actions into a single top-level CI workflow that
invokes a series of reusable workflows. The main goal of this is to be able to easily
control which tests run and in what order based on the success or failure of top-level
prechecks. Our previous workflows ran in three sets:
* nix tests: style and verification first, then linux and macos tests if successful
* windows tests: style and verification first, then windows tests if successful
* bootstrap tests
As a result, the bootstrap tests ran even if the style failed, and style and verification
had to run on two different platforms despite running identical checks. I'm relatively
sure that's because of the limitation on dependencies between steps in the jobs.
Reusable workflows allow us to run the style, verification and now audit checks once,
then depending on the results, and the files changed, run the appropriate nix, windows
and bootstrap tests. While it saves only a few minutes by itself, this makes it easier to
refactor checks to subset tests without having to replicate tests or other workflow
components in the future.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This adds necessary configuration for flake8 and black to work together.
This also sets the line length to 99, per the data here:
* https://github.com/spack/spack/pull/24718#issuecomment-876933636
Given the data and the spirit of black's 88-character limit, we set the limit to 99
characters for all of Spack, because:
* 99 is one less than 100, a nice round number, and all lines will fit in a
100-character wide terminal (even when the text editor puts a \ at EOL).
* 99 is just past the knee of the file size curve for packages, and it means that packages
remain readable and not significantly longer than they are now.
* It doesn't seem to hurt core -- files in core might change length by a few percent but
seem like they'll be mostly the same as before -- just a bit more roomy.
- [x] set line length to 99
- [x] remove most exceptions from `.flake8` and add the ones black cares about
- [x] add `[tool.black]` to `pyproject.toml`
- [x] make `black` run if available in `spack style --fix` (see the sketch below)
Co-Authored-By: Tom Scogland <tscogland@llnl.gov>
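A minimal sketch of that last checklist item, using a hypothetical helper rather than the actual `spack style` implementation:

```python
import shutil
import subprocess

def fix_with_black(files):
    """Run black on the given files only if it is available on PATH."""
    black = shutil.which("black")
    if black is None:
        print("black not found, skipping formatting fixes")
        return
    # The line length is picked up from [tool.black] in pyproject.toml (99 for Spack)
    subprocess.run([black, *files], check=False)

fix_with_black(["lib/spack/spack/spec.py"])
```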
When installing setuptools from source in Spack, we might
run into weird failures due to the way we use pip.
In particular, for Spack it's necessary to install in a
non-isolated pip environment to allow using PYTHONPATH as a
selection method for all the build requirements of a
Python package.
This can fail when installing setuptools since there might
be a setuptools version already installed for the Python
interpreter being used, with different entry points than
the one we want to install.
Installing both pip and setuptools from wheels should
harden our installation procedure in the context of:
- Bootstrapping Python dependencies of Spack
- Using external Python packages
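A rough sketch of what installing both from wheels could look like, assuming the wheels were already downloaded to a local directory (the path and invocation are illustrative, not Spack's exact procedure):

```python
import subprocess
import sys

wheel_dir = "/opt/spack-wheels"  # assumption: pip and setuptools wheels live here

# Install from the local wheels only, so an already-installed setuptools
# (with different entry points) cannot interfere with the installation.
subprocess.run(
    [
        sys.executable, "-m", "pip", "install",
        "--no-index",               # never reach out to PyPI
        "--find-links", wheel_dir,  # resolve pip/setuptools from local wheels
        "pip", "setuptools",
    ],
    check=True,
)
```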
* Cancel running workflows automatically on PR update
* Add the last update later to check that cancellation is working
* Use github.run_number instead of github.sha
* Add a CI job to audit all the packages in the built-in repository
* flecsi: fixed typo for dependency on legion
* py-pythonqwt: fix a typo in variant name
* sollve: removed a conflict with a non-existing variant
* acts: fixed use of wrong variant in dd4hep
Also removed duplicated variant declaration in dd4hep
* aoflagger: update variant of a dependency
Issues introduced indirectly in #22925
* camellia: removed unused variant
Issue introduced indirectly in #26150
* cbtf-*: remove cti variants and dependency on mrnet+cti
Issue introduced in #14178
* flecsale: update variants to match flecsi
Issue introduced in #11679
* grnboost: fixed issue with non-existing variant in a dependency
This package possibly never worked since #8763
* nalu: fixed issue with non-existing variant in a dependency
* open-iscsi: fixed issue with non-existing variant in a dependency
* openspeedshop-*: remove use of non-existing mrnet+cti variant
* percept: fixed issue with non-existing variant in a dependency
* phyluce: fixed issue with non-existing variant in a dependency
Issue introduced in #12952
* phyluce: fixed issue with non-existing variant in a dependency
Issue introduced in #22340
Modifications:
- [x] Removed `centos:6` unit test, adjusted vermin checks
- [x] Removed backport of `collections.OrderedDict`
- [x] Removed backport of `functools.total_ordering`
- [x] Removed Python 2.6 specific skip markers in unit tests
- [x] Fixed a few minor Python 2.6 related TODOs in code
Updating the vendored dependencies will be done in separate PRs
Currently Spack vendors `pytest` at a version which is three major
versions behind the latest (3.2.5 vs. 6.2.4). We do that since v3.2.5
is the latest version supporting Python 2.6. Remaining so far
behind the currently supported versions, though, might introduce
incompatibilities and is surely technical debt.
This PR modifies Spack to:
- Use the vendored `pytest@3.2.5` only as a fallback, if the Python interpreter
used for Spack doesn't provide a newer one (a sketch of this fallback follows the list)
- Be able to parse `pytest --collect-only` in all the different output
formats from v3.2.5 to v6.2.4 and use it consistently for `spack unit-test --list-*`
- Update the unit tests in GitHub Actions to use a more recent `pytest` version
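A minimal sketch of the fallback logic described in the first point, with a hypothetical path for the vendored copy:

```python
import sys

# Hypothetical location of the vendored pytest; the real path may differ.
VENDORED_PYTEST_DIR = "lib/spack/external"

def import_pytest():
    """Prefer a pytest provided by the interpreter; fall back to the vendored one."""
    try:
        import pytest  # interpreter-provided pytest, if any
    except ImportError:
        # Nothing newer available: make the vendored pytest@3.2.5 importable
        sys.path.insert(0, VENDORED_PYTEST_DIR)
        import pytest
    return pytest
```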