Compare commits


78 commits

Author SHA1 Message Date
Massimiliano Culpo
89319413d5
Update CHANGELOG and set version to v0.21.2 2024-03-03 16:02:50 +01:00
Axel Huebl
a6ef73f7f2
Fix mgard: OpenMP on AppleClang (#42933)
macOS AppleClang does not provide OpenMP by default with XCode.
Use LLVM's OpenMP to fix compile errors of mgard with OpenMP (default).
2024-03-03 16:02:49 +01:00
Adam J. Stewart
4a84039795
py-transformers: add v4.35.2 (#41266) 2024-03-02 21:09:49 +01:00
Alec Scott
e5f4c6ad75
rust: add v1.75.0 & v1.74.0, merge related variants into +dev, add rust-analyzer (#41903)
* Add rust-analyzer as variant to rust build

* Expose cargo module only when +cargo

* rust: add v1.74.0 and v1.75.0 and remove variants in favor of +dev

* [@spackbot] updating style on behalf of alecbcs

* Fix variant typo

---------

Co-authored-by: alecbcs <alecbcs@users.noreply.github.com>
2024-03-01 21:01:52 +01:00
Alec Scott
c0c66a0fed
rust: add v1.73.0 and add support for external openssl certs (#41161)
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
2024-03-01 21:01:51 +01:00
Massimiliano Culpo
3d73193ecc Fix style tests for backports 2024-02-29 11:35:38 +01:00
Massimiliano Culpo
eb27ee7ae8 ASP-based solver: fix issue with conditional requirements and trigger conditions (#42566)
The lack of a rule to avoid enforcing requirements on multi-valued variants, when the condition activating the requirement was not met, resulted in multiple optimal solutions. The fix is to prevent imposing a requirement if the when= rule activating it is not met.
2024-02-29 11:35:38 +01:00
Harmen Stoppels
2883559b49 compilers: fixup order of arguments to satisfies (#42682) 2024-02-29 11:35:38 +01:00
Massimiliano Culpo
7ce9f621d9 Fix copyright year for CI 2024-02-29 11:35:38 +01:00
Matthew Whitlock
0616290c7f Update packages_yaml.rst (#42438)
Fix an incorrect example.
2024-02-29 11:35:38 +01:00
Greg Becker
9963e2a20c Fix using sticky variants in externals (#42253) 2024-02-29 11:35:38 +01:00
Harmen Stoppels
cb4312996c repo.py: pass package name not fully qualified package name (#42217) 2024-02-29 11:35:38 +01:00
Harmen Stoppels
b107de072b oci: use pickleable errors (#42160) 2024-02-29 11:35:38 +01:00
Harmen Stoppels
dd58e922e7 oci: only push in parallel when forking (#42143) 2024-02-29 11:35:38 +01:00
Massimiliano Culpo
b23a829c4c Fix a bug when a required provider is requested for multiple virtuals (#42088) 2024-02-29 11:35:38 +01:00
Massimiliano Culpo
9af5eca9ec Fix using fully-qualified namespaces from root specs (#41957)
Explicitly requested namespaces are annotated during
the setup phase, and used to retrieve the correct package
class.

An attribute for the namespace has been added for each node.

Currently, a single namespace per package is allowed
during concretization.
2024-02-29 11:35:38 +01:00
Jordan Galby
2489b137d9 Fix setup-env when going back and forth between instances (#40924)
* setup-env: Fix back and forth between two instances

* setup-env.csh: Fix SPACK_ROOT when switch to a different instance

i.e. Always look for the current SPACK_ROOT

* setup-env: Update comments
2024-02-29 11:35:38 +01:00
Owen Solberg
64d046100a Containerize: accommodate nested or pre-existing spack-env paths (#41558)
The current `mkdir {{ paths.environment }}` will generate an error if:
* `{{ paths.environment }}` already exists, or
* `{{ paths.environment }}` is nested in non-existing dirs.

Adding `-p` to the command will make this robust to both possibilities.

Set noclobber bash option when writing manifest.
2024-02-29 11:35:38 +01:00
Massimiliano Culpo
000fe1b5a1 Set version to 0.21.2.dev0 2024-02-29 11:35:38 +01:00
Massimiliano Culpo
e30fedab10 Update CHANGELOG and version 2024-01-12 10:16:58 +01:00
eugeneswalker
a7c6df1b5a e4s ci: disable gpu test stack (#41296) 2024-01-12 10:16:58 +01:00
Harmen Stoppels
19e86088cd binary_distribution.py: support build cache layout 2 (#41773)
Add forward compatibility for tarballs created by Spack 0.22, which
use build cache layout version 2.

Spack 0.21 continues to produce build cache layout version 1 tarballs.

Build cache layout version 2 also lists parent directories of the
package prefix in the tarball, which is required for certain container
runtimes.
2024-01-11 09:40:22 +01:00
Harmen Stoppels
0295b466d7 Dont expect __qualname__ to exist (#41989) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
68e9547615 installer.py: do not tty.die when cache only fails (#41990) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
0ab9c8b904 installer.py: don't dereference stage before installing from binaries (#41986)
This fixes an issue where pkg.stage throws because a patch cannot be found,
but the patch is redundant because the spec is reused from a build cache and
will be installed from existing binaries.
2024-01-11 09:40:22 +01:00
Miguel Dias Costa
dd768bb6c3 update BerkeleyGW source urls (#38218)
* update url for BerkeleyGW version 3.0.1
* update source urls and add version 3.1.0 to berkeleygw package
2024-01-11 09:40:22 +01:00
Harmen Stoppels
b15f9d011c Spec.format: error on old style format strings (#41934) 2024-01-11 09:40:22 +01:00
Massimiliano Culpo
4f7cce68b8 ASP-based solver: don't error for type mismatch on preferences (#41138)
This commit discards type mismatches or failures to validate a package preference during concretization. The values discarded are logged as debug level messages. It also adds a config audit to help users spot misconfigurations in packages.yaml preferences.
2024-01-11 09:40:22 +01:00
Massimiliano Culpo
f39a1e5fc8 Fix an issue with deconcretization/reconcretization of environments (#41294) 2024-01-11 09:40:22 +01:00
Massimiliano Culpo
df2a3bd531 ASP-based solver: use a unique ID counter (#41290)
* solver: use a unique counter for condition, triggers and effects

* Do not reset counters when re-running setup

  What we need is just a unique ID, it doesn't need
  to start from zero every time.
2024-01-11 09:40:22 +01:00
Todd Gamblin
e29049d9c0 bugfix: sort variants in spack info --variants-by-name (#41389)
This was missed while backporting the new `spack info` command from #40326.

Variants should be sorted by name when invoking `spack info --variants-by-name`.
2024-01-11 09:40:22 +01:00
Stephen Sachs
de2249c334 clingo-bootstrap: use new Spack API for environment modifications (#41574) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
03d7643480 tests: fix more cases of env variables (#41226) 2024-01-11 09:40:22 +01:00
Massimiliano Culpo
9f2b8eef7a Refactor a test to not use the "working_env" fixture (#41308)
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2024-01-11 09:40:22 +01:00
Harmen Stoppels
f57ac8d2da tests: fix issue with os.environ binding (#41342) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
539fa5c39a test_variant_propagation_with_unify_false: missing fixture (#41345) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
50710c2b6e tests: fix side effects of default_config fixture (#41361)
* tests: default_config drop scope

* use default_config elsewhere

* use parse_install_tree for missing defaults in default config
2024-01-11 09:40:22 +01:00
Harmen Stoppels
f1b2515ce1 tests: add missing mutable db (#41359) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
37f65e769c unit tests: replace /bin/bash with /bin/sh (#41495) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
16b600c193 tests: use temporary_store (#41369) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
bb62f71aa0 asp.py: remove "CLI" reference (#41718)
Can also be an environment root, or programmatically
`Spec("x").concretized()`.
2024-01-11 09:40:22 +01:00
Juan Miguel Carceller
fff8e16cdd Add a webgui patch (#41404)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-01-11 09:40:22 +01:00
Dave Keeshan
33c287eed8 Fix filter_compiler_wrapper where compiler is None (#41502)
Fix filter_compiler_wrapper for cases where the compiler returned is None; this happens on some installed gcc systems that do not have Fortran built into them as standard, e.g. gcc@11.4.0 on Ubuntu 22.04
2024-01-11 09:40:22 +01:00
Robert Cohn
2d5ccd3068 handle use of an unconfigured compiler (#41213) 2024-01-10 20:28:14 +01:00
Michael Kuhn
125085c580 Fix multi-word aliases (#41126)
PR #40929 reverted the argument parsing to make `spack --verbose
install` work again. It looks like `--verbose` is the only instance
where this kind of argument inheritance is used since all other commands
override arguments with the same name instead. For instance, `spack
--bootstrap clean` does not invoke `spack clean --bootstrap`.

Therefore, fix multi-word aliases again by parsing the resolved
arguments and explicitly passing down `args.verbose` to commands.
2024-01-10 20:28:14 +01:00
Massimiliano Culpo
e70e401be1 spack graph: fix coloring with environments (#41240)
If we use all specs, we won't correctly color build-only dependencies
2024-01-10 20:28:14 +01:00
John W. Parent
e8bbd7763c MSVC preview version breaks clingo build (#41185)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2024-01-10 20:28:14 +01:00
Harmen Stoppels
712db8c85c setup_platform_environment before package env mods (#41205)
This roughly restores the order of operation from Spack 0.20,
where `AutotoolsPackage.setup_build_environment` would
override the env variable set in `setup_platform_environment` on
macOS.
2024-01-10 20:28:14 +01:00
Massimiliano Culpo
47c560d526 ASP-based solver: don't emit spurious debug output (#41218)
When improving the error message, we started `#show`-ing a lot
more symbols in the answer set, but we forgot to suppress the
debug messages warning about UNKNOWN SYMBOLs
2024-01-10 20:28:14 +01:00
Harmen Stoppels
5b386cf9b1 test_which: do not mutate os.environ 2024-01-10 20:28:14 +01:00
Harmen Stoppels
2cbe84b1ee docs: document how spack picks a version / variant (#41070) 2024-01-10 20:28:14 +01:00
Massimiliano Culpo
e1f98fd206 Improve the error message for deprecated preferences (#41075)
Improves the warning for deprecated preferences, and adds a configuration
audit to get files:lines details of the issues.

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-01-10 20:28:14 +01:00
Massimiliano Culpo
d5ef4f8c83 Add audit check to spot when= arguments using wrong named specs (#41107)
* Add audit check to spot when= arguments using named specs

* Fix package issues caught by the new audit
2024-01-10 20:28:14 +01:00
Harmen Stoppels
b49022822d docs: packages config on separate page, demote bootstrapping (#41085) 2024-01-10 20:28:14 +01:00
Massimiliano Culpo
198bd87914 Fix infinite recursion when computing concretization errors (#41061) 2024-01-10 20:28:14 +01:00
Harmen Stoppels
080a781b81 Set version to 0.21.1.dev0 2024-01-10 20:28:14 +01:00
Todd Gamblin
65d3221a9c Update version and CHANGELOG.md for v0.21.0
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2023-11-11 03:32:22 -08:00
Todd Gamblin
f6b17f6329 update release branch for tutorial command 2023-11-11 03:32:22 -08:00
Greg Becker
09e9bb5c3d spack deconcretize command (#38803)
We have two ways to concretize now:
* `spack concretize` concretizes only the root specs that are not concrete in the environment.
* `spack concretize -f` eliminates all cached concretization data and reconcretizes the *entire* environment.

This PR adds `spack deconcretize`, which eliminates cached concretization data for a spec.  This allows
users greater control over what is preserved from their `spack.lock` file and what is reused when not
using `spack concretize -f`.  If you want to update a spec installed in your environment, you can call
`spack deconcretize` on it, and that spec and any relevant dependents will be removed from the lock file.

`spack deconcretize` has two options:
* `--root`: limits deconcretized specs to *specific* roots in the environment. You can use this to
  deconcretize exactly one root in a `unify: false` environment: if root `foo` is a dependent
  of root `bar`, `spack deconcretize --root bar` will *not* deconcretize `foo`.
* `--all`: deconcretize *all* specs that match the input spec. By default `spack deconcretize`
  will complain about multiple matches, like `spack uninstall`.
2023-11-11 03:32:22 -08:00
Massimiliano Culpo
f6dc557764 builtin.repo: fix ^mkl pattern in minor packages (#41003)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-11-11 03:32:22 -08:00
Massimiliano Culpo
74172fb9d2 gromacs et al: fix ^mkl pattern (#41002)
The ^mkl pattern was used to refer to three packages
even though none of the software using it actually
depended on "mkl".

This pattern, which follows Hyrum's law, is now being
removed in favor of a more explicit one.

In this PR gromacs, abinit, lammps, and quantum-espresso
are modified.

Intel packages are also modified to provide "lapack"
and "blas" together.
2023-11-11 03:32:22 -08:00
Harmen Stoppels
c266e69cde env: compute env mods only for installed roots (#40997)
And improve the error message (load vs unload).

Of course you could have some uninstalled dependency too, but as long as
it doesn't implement `setup_run_environment` etc, I don't think it hurts
to attempt to load the root anyways, given that failure to do so is a
warning, not a fatal error.
2023-11-11 03:32:22 -08:00
Todd Gamblin
fe57ec2ab7 info: rework spack info command to display variants better (#40998)
This changes variant display to use a much more legible format, and to use screen space
much better (particularly on narrow terminals). It also adds color to the variant display
to match other parts of `spack info`.

Descriptions and variant value lists that were frequently squished into a tiny column
before now have closer to the full terminal width.

This change also preserves any whitespace formatting present in `package.py`, so package
maintainers can write easier-to-read descriptions of variant values if they want. For
example, `gasnet` has had a nice description of the `conduits` variant for a while, but
it was wrapped and made illegible by `spack info`. That is now fixed and the original
newlines are kept.

Conditional variants are grouped by their when clauses by default, but if you do not
like the grouping, you can display all the variants in order with `--variants-by-name`.
I'm not sure when people will prefer this, but it makes it easier to tell that a
particular variant is/isn't there. I do think grouping by `when` is the better default.
2023-11-11 03:32:22 -08:00
Adam J. Stewart
3c3476a176 py-black: add v23.10: (#40959) 2023-11-11 03:32:22 -08:00
Scott Wittenburg
67f20c3e5c buildcache: skip unrecognized metadata files (#40941)
This commit improves forward compatibility of Spack with newer build cache metadata formats.

Before this commit, invalid or unrecognized metadata would be fatal errors, now they just cause
a mirror to be skipped.

Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2023-11-11 03:32:22 -08:00
Satish Balay
1baf712b87 intel-oneapi-mkl: do not set __INTEL_POST_CFLAGS env variable (#40947)
Setting this variable triggers warnings from the icx compiler, which break the petsc configure step:

$ I_MPI_CC=icx /opt/intel/oneapi/mpi/2021.7.0/bin/mpiicc -E a.c > /dev/null
$ __INTEL_POST_CFLAGS=-Wl,-rpath,/opt/intel/oneapi/mkl/2022.2.0/lib/intel64 I_MPI_CC=icx /opt/intel/oneapi/mpi/2021.7.0/bin/mpiicc -E a.c > /dev/null
icx: warning: -Wl,-rpath,/opt/intel/oneapi/mkl/2022.2.0/lib/intel64: 'linker' input unused [-Wunused-command-line-argument]
2023-11-09 11:44:57 +01:00
Massimiliano Culpo
c73ec0b36d modules: remove deprecated code and test data (#40966)
This removes a few deprecated attributes from the
schema of the "modules" section. Test data for
deprecated options is removed as well.
2023-11-09 11:44:57 +01:00
Harmen Stoppels
9d58d5e645 modules: restore exclude_implicits (#40958) 2023-11-09 11:44:57 +01:00
Massimiliano Culpo
fc5fd7fc60 tcl: filter compiler wrappers to avoid pointing to Spack (#40946) 2023-11-09 11:44:57 +01:00
Tom Vander Aa
efa59510a8 libevent: always autogen.sh (#40945)
The libevent release tarballs ship with a `configure` script generated by an old `libtool`. The `libtool` generated by `configure` is not compatible with `MACOSX_DEPLOYMENT_TARGET` > 10. Regenerating the `configure` script fixes the build on macOS.

Original configure contains:
```
    case $host_os in
    rhapsody* | darwin1.[012])
      _lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;;
    darwin1.*)
      _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
    darwin*) # darwin 5.x on
      # if running on 10.5 or later, the deployment target defaults
      # to the OS version, if on x86, and 10.4, the deployment
      # target defaults to 10.4. Don't you love it?
      case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in
        10.0,*86*-darwin8*|10.0,*-darwin[91]*)
          _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
        10.[012][,.]*)
          _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
        10.*)
          _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
      esac
```

After re-running `autogen.sh`:
```
    case $host_os in
    rhapsody* | darwin1.[012])
      _lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;;
    darwin1.*)
      _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
    darwin*)
      case $MACOSX_DEPLOYMENT_TARGET,$host in
        10.[012],*|,*powerpc*-darwin[5-8]*)
          _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
        *)
          _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
      esac
```
2023-11-09 11:44:57 +01:00
Harmen Stoppels
6094ee5eae Revert "defaults/modules.yaml: hide implicits (#40906)" (#40955)
This reverts commit a2f00886e9.
2023-11-09 11:44:57 +01:00
Greg Becker
f7cacdbf40 tutorial stack: update for changes to the basics section for SC23 (#40942) 2023-11-08 08:56:33 +01:00
Harmen Stoppels
5152738084 tutorial: use lmod@8.7.18 because @8.7.19: has bugs (#40939) 2023-11-08 08:56:33 +01:00
Richarda Butler
9ba8d60789 Propagate variant across nodes that don't have that variant (#38512)
Before this PR, variants were not propagated to leaf nodes that could accept
the propagated value, if some intermediate node couldn't accept it.

This PR fixes that issue by marking nodes as "candidate" for propagation
and by setting the variant only if it can be accepted by the node.

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-11-08 08:56:33 +01:00
Harmen Stoppels
3c8cd8d30c Ensure global command line arguments end up in args like before (#40929) 2023-11-08 08:56:33 +01:00
Harmen Stoppels
53a31bbbbf tutorial pipeline: force gcc@12.3.0 (#40937) 2023-11-08 08:56:33 +01:00
Harmen Stoppels
bac314a4f1 spack tutorial: use backports/v0.21.0 branch 2023-11-08 08:56:33 +01:00
Harmen Stoppels
10ba172611 catch exceptions in which_string (#40935) 2023-11-08 08:56:33 +01:00
140 changed files with 3121 additions and 1505 deletions


@@ -1,3 +1,335 @@
# v0.21.2 (2024-03-01)
## Bugfixes
- Containerize: accommodate nested or pre-existing spack-env paths (#41558)
- Fix setup-env script, when going back and forth between instances (#40924)
- Fix using fully-qualified namespaces from root specs (#41957)
- Fix a bug when a required provider is requested for multiple virtuals (#42088)
- OCI buildcaches:
  - only push in parallel when forking (#42143)
  - use pickleable errors (#42160)
- Fix using sticky variants in externals (#42253)
- Fix a rare issue with conditional requirements and multi-valued variants (#42566)
## Package updates
- rust: add v1.75, rework a few variants (#41161,#41903)
- py-transformers: add v4.35.2 (#41266)
- mgard: fix OpenMP on AppleClang (#42933)
# v0.21.1 (2024-01-11)
## New features
- Add support for reading buildcaches created by Spack v0.22 (#41773)
## Bugfixes
- spack graph: fix coloring with environments (#41240)
- spack info: sort variants in --variants-by-name (#41389)
- Spec.format: error on old style format strings (#41934)
- ASP-based solver:
  - fix infinite recursion when computing concretization errors (#41061)
  - don't error for type mismatch on preferences (#41138)
  - don't emit spurious debug output (#41218)
- Improve the error message for deprecated preferences (#41075)
- Fix MSVC preview version breaking clingo build on Windows (#41185)
- Fix multi-word aliases (#41126)
- Add a warning for unconfigured compiler (#41213)
- environment: fix an issue with deconcretization/reconcretization of specs (#41294)
- buildcache: don't error if a patch is missing, when installing from binaries (#41986)
- Multiple improvements to unit-tests (#41215,#41369,#41495,#41359,#41361,#41345,#41342,#41308,#41226)
## Package updates
- root: add a webgui patch to address security issue (#41404)
- BerkeleyGW: update source urls (#38218)
# v0.21.0 (2023-11-11)
`v0.21.0` is a major feature release.
## Features in this release
1. **Better error messages with condition chaining**
In v0.18, we added better error messages that could tell you what problem happened,
but they couldn't tell you *why* it happened. `0.21` adds *condition chaining* to the
solver, and Spack can now trace back through the conditions that led to an error and
build a tree of potential causes and where they came from. For example:
```console
$ spack solve hdf5 ^cmake@3.0.1
==> Error: concretization failed for the following reasons:
1. Cannot satisfy 'cmake@3.0.1'
2. Cannot satisfy 'cmake@3.0.1'
required because hdf5 ^cmake@3.0.1 requested from CLI
3. Cannot satisfy 'cmake@3.18:' and 'cmake@3.0.1'
required because hdf5 ^cmake@3.0.1 requested from CLI
required because hdf5 depends on cmake@3.18: when @1.13:
required because hdf5 ^cmake@3.0.1 requested from CLI
4. Cannot satisfy 'cmake@3.12:' and 'cmake@3.0.1'
required because hdf5 depends on cmake@3.12:
required because hdf5 ^cmake@3.0.1 requested from CLI
required because hdf5 ^cmake@3.0.1 requested from CLI
```
More details in #40173.
2. **OCI build caches**
You can now use an arbitrary [OCI](https://opencontainers.org) registry as a build
cache:
```console
$ spack mirror add my_registry oci://user/image # Dockerhub
$ spack mirror add my_registry oci://ghcr.io/haampie/spack-test # GHCR
$ spack mirror set --push --oci-username ... --oci-password ... my_registry # set login creds
$ spack buildcache push my_registry [specs...]
```
And you can optionally add a base image to get *runnable* images:
```console
$ spack buildcache push --base-image ubuntu:23.04 my_registry python
Pushed ... as [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
$ docker run --rm -it [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
```
This creates a container image from the Spack installations on the host system,
without the need to run `spack install` from a `Dockerfile` or `sif` file. It also
addresses the inconvenience of losing binaries of dependencies when `RUN spack
install` fails inside `docker build`.
Further, the container image layers and build cache tarballs are the same files. This
means that `spack install` and `docker pull` use the exact same underlying binaries.
If you previously used `spack install` inside of `docker build`, this feature helps
you save storage by a factor of two.
More details in #38358.
3. **Multiple versions of build dependencies**
Increasingly, complex package builds require multiple versions of some build
dependencies. For example, Python packages frequently require very specific versions
of `setuptools`, `cython`, and sometimes different physics packages require different
versions of Python to build. The concretizer enforced that every solve was *unified*,
i.e., that there only be one version of every package. The concretizer now supports
"duplicate" nodes for *build dependencies*, but enforces unification through
transitive link and run dependencies. This will allow it to better resolve complex
dependency graphs in ecosystems like Python, and it also gets us very close to
modeling compilers as proper dependencies.
This change required a major overhaul of the concretizer, as well as a number of
performance optimizations. See #38447, #39621.
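If you need the old behavior back, the strategy is configurable; a minimal sketch of
`concretizer.yaml`, based on the `duplicates:strategy` option described elsewhere in
this release:
```yaml
concretizer:
  duplicates:
    # 'minimal' is the new v0.21 default; 'none' restores the fully
    # unified behavior of v0.20 and earlier.
    strategy: minimal
```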
4. **Cherry-picking virtual dependencies**
You can now select only a subset of virtual dependencies from a spec that may provide
more. For example, if you want `mpich` to be your `mpi` provider, you can be explicit
by writing:
```
hdf5 ^[virtuals=mpi] mpich
```
Or, if you want to use, e.g., `intel-parallel-studio` for `blas` along with an external
`lapack` like `openblas`, you could write:
```
strumpack ^[virtuals=mpi] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
```
The `virtuals=mpi` is an edge attribute, and dependency edges in Spack graphs now
track which virtuals they satisfied. More details in #17229 and #35322.
Note for packaging: in Spack 0.21 `spec.satisfies("^virtual")` is true if and only if
the package specifies `depends_on("virtual")`. This is different from Spack 0.20,
where depending on a provider implied depending on the virtual provided. See #41002
for an example where `^mkl` was being used to test for several `mkl` providers in a
package that did not depend on `mkl`.
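As a hedged illustration of the kind of code this affects (package and variable names
are hypothetical):
```python
from spack.package import *

class Foo(Package):
    depends_on("mpi")

    def setup_build_environment(self, env):
        # In Spack 0.21 this is true only because the package itself
        # declares depends_on("mpi"); merely depending on a concrete
        # provider such as mpich would no longer satisfy ^mpi.
        if self.spec.satisfies("^mpi"):
            env.set("FOO_USE_MPI", "1")
```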
5. **License directive**
Spack packages can now have license metadata, with the new `license()` directive:
```python
license("Apache-2.0")
```
Licenses use [SPDX identifiers](https://spdx.org/licenses), and you can use SPDX
expressions to combine them:
```python
license("Apache-2.0 OR MIT")
```
Like other directives in Spack, it's conditional, so you can handle complex cases like
Spack itself:
```python
license("LGPL-2.1", when="@:0.11")
license("Apache-2.0 OR MIT", when="@0.12:")
```
More details in #39346, #40598.
6. **`spack deconcretize` command**
We are getting close to having a `spack update` command for environments, but we're
not quite there yet. This is the next best thing. `spack deconcretize` gives you
control over what you want to update in an already concrete environment. If you have
an environment built with, say, `meson`, and you want to update your `meson` version,
you can run:
```console
spack deconcretize meson
```
and have everything that depends on `meson` rebuilt the next time you run `spack
concretize`. In a future Spack version, we'll handle all of this in a single command,
but for now you can use this to drop bits of your lockfile and resolve your
dependencies again. More in #38803.
7. **UI Improvements**
The venerable `spack info` command was looking shabby compared to the rest of Spack's
UI, so we reworked it to have a bit more flair. `spack info` now makes much better
use of terminal space and shows variants, their values, and their descriptions much
more clearly. Conditional variants are grouped separately so you can more easily
understand how packages are structured. More in #40998.
`spack checksum` now allows you to filter versions from your editor, or by version
range. It also notifies you about potential download URL changes. See #40403.
8. **Environments can include definitions**
Spack did not previously support using `include:` with the
[definitions](https://spack.readthedocs.io/en/latest/environments.html#spec-list-references)
section of an environment, but now it does. You can use this to curate lists of specs
and more easily reuse them across environments. See #33960.
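A minimal sketch of the idea (file name and spec lists are hypothetical):
```yaml
# definitions.yaml -- a shared file, included below
definitions:
  - compilers: ["%gcc", "%clang"]

# spack.yaml
spack:
  include:
    - definitions.yaml
  specs:
    - matrix:
        - [hdf5, zlib]
        - [$compilers]
```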
9. **Aliases**
You can now add aliases to Spack commands in `config.yaml`, e.g. this might enshrine
your favorite args to `spack find` as `spack f`:
```yaml
config:
  aliases:
    f: find -lv
```
See #17229.
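With the alias above in place, the shortened form should expand as follows
(hypothetical session):
```console
$ spack f python   # equivalent to: spack find -lv python
```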
10. **Improved autoloading of modules**
Spack 0.20 was the first release to enable autoloading of direct dependencies in
module files.
The downside of this was that `module avail` and `module load` tab completion would
show users too many modules to choose from, and many users disabled generating
modules for dependencies through `exclude_implicits: true`. Further, it was
necessary to keep hashes in module names to avoid file name clashes.
In this release, you can start using `hide_implicits: true` instead, which exposes
only explicitly installed packages to the user, while still autoloading
dependencies. On top of that, you can safely use `hash_length: 0`, as this config
now only applies to the modules exposed to the user -- you don't have to worry about
file name clashes for hidden dependencies.
Note: for `tcl` this feature requires Modules 4.7 or higher
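A minimal sketch of the new options, assuming the default `tcl` module set (check
your `modules.yaml` for the exact nesting):
```yaml
modules:
  default:
    tcl:
      hide_implicits: true  # hide autoloaded dependencies from `module avail`
      hash_length: 0        # now applies only to user-visible module names
      all:
        autoload: direct
```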
11. **Updated container labeling**
Nightly Docker images from the `develop` branch will now be tagged as `:develop` and
`:nightly`. The `:latest` tag is no longer associated with `:develop`, but with the
latest stable release. Releases will be tagged with `:{major}`, `:{major}.{minor}`
and `:{major}.{minor}.{patch}`. `ubuntu:18.04` has also been removed from the list of
generated Docker images, as it is no longer supported. See #40593.
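In practice the tags should behave roughly as follows (the image name is illustrative):
```console
$ docker pull spack/ubuntu-jammy:latest   # latest stable release, no longer develop
$ docker pull spack/ubuntu-jammy:develop  # nightly build of the develop branch
$ docker pull spack/ubuntu-jammy:0.21.0   # a specific release
```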
## Other new commands and directives
* `spack env activate` without arguments now loads a `default` environment that you do
not have to create (#40756).
* `spack find -H` / `--hashes`: a new shortcut for piping `spack find` output to
other commands (#38663)
* Add `spack checksum --verify`, fix `--add` (#38458)
* New `default_args` context manager factors out common args for directives (#39964)
* `spack compiler find --[no]-mixed-toolchain` lets you easily mix `clang` and
`gfortran` on Linux (#40902)
## Performance improvements
* `spack external find` execution is now much faster (#39843)
* `spack location -i` now much faster on success (#40898)
* Drop redundant rpaths post install (#38976)
* ASP-based solver: avoid cycles in clingo using hidden directive (#40720)
* Fix multiple quadratic complexity issues in environments (#38771)
## Other new features of note
* archspec: update to v0.2.2, support for Sapphire Rapids, Power10, Neoverse V2 (#40917)
* Propagate variants across nodes that don't have that variant (#38512)
* Implement fish completion (#29549)
* Can now distinguish between source/binary mirror; don't ping mirror.spack.io as much (#34523)
* Improve status reporting on install (add [n/total] display) (#37903)
## Windows
This release has the best Windows support of any Spack release yet, with numerous
improvements and much larger swaths of tests passing:
* MSVC and SDK improvements (#37711, #37930, #38500, #39823, #39180)
* Windows external finding: update default paths; treat .bat as executable on Windows (#39850)
* Windows decompression: fix removal of intermediate file (#38958)
* Windows: executable/path handling (#37762)
* Windows build systems: use ninja and enable tests (#33589)
* Windows testing (#36970, #36972, #36973, #36840, #36977, #36792, #36834, #34696, #36971)
* Windows PowerShell support (#39118, #37951)
* Windows symlinking and libraries (#39933, #38599, #34701, #38578, #34701)
## Notable refactors
* User-specified flags take precedence over others in Spack compiler wrappers (#37376)
* Improve setup of build, run, and test environments (#35737, #40916)
* `make` is no longer a required system dependency of Spack (#40380)
* Support Python 3.12 (#40404, #40155, #40153)
* docs: Replace package list with packages.spack.io (#40251)
* Drop Python 2 constructs in Spack (#38720, #38718, #38703)
## Binary cache and stack updates
* e4s arm stack: duplicate and target neoverse v1 (#40369)
* Add macOS ML CI stacks (#36586)
* E4S Cray CI Stack (#37837)
* e4s cray: expand spec list (#38947)
* e4s cray sles ci: expand spec list (#39081)
## Removals, deprecations, and syntax changes
* ASP: targets, compilers and providers soft-preferences are only global (#31261)
* Parser: fix ambiguity with whitespace in version ranges (#40344)
* Module file generation is disabled by default; you'll need to enable it to use it (#37258); a sketch of how to re-enable it follows this list
* Remove deprecated "extra_instructions" option for containers (#40365)
* Stand-alone test feature deprecation postponed to v0.22 (#40600)
* buildcache push: make `--allow-root` the default and deprecate the option (#38878)
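As referenced above, a minimal sketch of re-enabling module file generation, assuming
the `default` module set:
```yaml
modules:
  default:
    enable:
      - tcl   # or lmod
```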
## Notable Bugfixes
* Bugfix: propagation of multivalued variants (#39833)
* Allow `/` in git versions (#39398)
* Fetch & patch: actually acquire stage lock, and many more issues (#38903)
* Environment/depfile: better escaping of targets with Git versions (#37560)
* Prevent "spack external find" to error out on wrong permissions (#38755)
* lmod: allow core compiler to be specified with a version range (#37789)
## Spack community stats
* 7,469 total packages, 303 new since `v0.20.0`
* 150 new Python packages
* 34 new R packages
* 353 people contributed to this release
* 336 committers to packages
* 65 committers to core
# v0.20.3 (2023-10-31)
## Bugfixes


@@ -46,12 +46,10 @@ modules:
     tcl:
       all:
         autoload: direct
-      hide_implicits: true
   # Default configurations if lmod is enabled
     lmod:
       all:
         autoload: direct
-      hide_implicits: true
       hierarchy:
         - mpi


@@ -37,7 +37,11 @@ to enable reuse for a single installation, and you can use:
    spack install --fresh <spec>
 to do a fresh install if ``reuse`` is enabled by default.
-``reuse: true`` is the default.
+``reuse: dependencies`` is the default.
+
+.. seealso::
+
+   FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`
------------------------------------------ ------------------------------------------
Selection of the target microarchitectures Selection of the target microarchitectures
@@ -99,547 +103,3 @@ while ``py-numpy`` still needs an older version:
 Up to Spack v0.20 ``duplicates:strategy:none`` was the default (and only) behavior. From Spack v0.21 the
 default behavior is ``duplicates:strategy:minimal``.
.. _build-settings:
================================
Package Settings (packages.yaml)
================================
Spack allows you to customize how your software is built through the
``packages.yaml`` file. Using it, you can make Spack prefer particular
implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK),
or you can make it prefer to build with particular compilers. You can
also tell Spack to use *external* software installations already
present on your system.
At a high level, the ``packages.yaml`` file is structured like this:
.. code-block:: yaml

   packages:
     package1:
       # settings for package1
     package2:
       # settings for package2
     # ...
     all:
       # settings that apply to all packages.
So you can either set build preferences specifically for *one* package,
or you can specify that certain settings should apply to *all* packages.
The types of settings you can customize are described in detail below.
Spack's build defaults are in the default
``etc/spack/defaults/packages.yaml`` file. You can override them in
``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more
details on how this works, see :ref:`configuration-scopes`.
.. _sec-external-packages:
-----------------
External Packages
-----------------
Spack can be configured to use externally-installed
packages rather than building its own packages. This may be desirable
if machines ship with system packages, such as a customized MPI
that should be used instead of Spack building its own MPI.
External packages are configured through the ``packages.yaml`` file.
Here's an example of an external configuration:
.. code-block:: yaml

   packages:
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel
This example lists three installations of OpenMPI, one built with GCC,
one built with GCC and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory. Note that the specified path is the top-level
install prefix, not the ``bin`` subdirectory.
``packages.yaml`` can also be used to specify modules to load instead
of the installation prefixes. The following example says that module
``CMake/3.7.2`` provides cmake version 3.7.2.
.. code-block:: yaml

   cmake:
     externals:
     - spec: cmake@3.7.2
       modules:
       - CMake/3.7.2
Each ``packages.yaml`` begins with a ``packages:`` attribute, followed
by a list of package names. To specify externals, add an ``externals:``
attribute under the package name, which lists externals.
Each external should specify a ``spec:`` string that should be as
well-defined as reasonably possible. If a
package lacks a spec component, such as missing a compiler or
package version, then Spack will guess the missing component based
on its most-favored packages, and it may guess incorrectly.
Each package version and compiler listed in an external should
have entries in Spack's packages and compiler configuration, even
though the package and compiler may not ever be built.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Prevent packages from being built from sources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Adding an external spec in ``packages.yaml`` allows Spack to use an external location,
but it does not prevent Spack from building packages from sources. In the above example,
Spack might choose for many valid reasons to start building and linking with the
latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.
To prevent this, the ``packages.yaml`` configuration also allows packages
to be flagged as non-buildable. The previous example could be modified to
be:
.. code-block:: yaml

   packages:
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel
       buildable: False
The addition of the ``buildable`` flag tells Spack that it should never build
its own version of OpenMPI from sources, and it will instead always rely on a pre-built
OpenMPI.
.. note::

   If ``concretizer:reuse`` is on (see :ref:`concretizer-options` for more information on that flag)
   pre-built specs include specs already available from a local store, an upstream store, a registered
   buildcache or specs marked as externals in ``packages.yaml``. If ``concretizer:reuse`` is off, only
   external specs in ``packages.yaml`` are included in the list of pre-built specs.
If an external module is specified as not buildable, then Spack will load the
external module into the build environment which can be used for linking.
The ``buildable`` flag does not need to be paired with external packages.
It could also be used alone to forbid packages that may be
buggy or otherwise undesirable.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Non-buildable virtual packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Virtual packages in Spack can also be specified as not buildable, and
external implementations can be provided. In the example above,
OpenMPI is configured as not buildable, but Spack will often prefer
other MPI implementations over the externally available OpenMPI. Spack
can be configured with every MPI provider not buildable individually,
but more conveniently:
.. code-block:: yaml

   packages:
     mpi:
       buildable: False
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel
Spack can then use any of the listed external implementations of MPI
to satisfy a dependency, and will choose depending on the compiler and
architecture.
In cases where the concretizer is configured to reuse specs, and other ``mpi`` providers
(available via stores or buildcaches) are not wanted, Spack can be configured to require
specs matching only the available externals:
.. code-block:: yaml

   packages:
     mpi:
       buildable: False
       require:
       - one_of: [
           "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
           "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
           "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         ]
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel
This configuration prevents any spec using MPI and originating from stores or buildcaches from being reused,
unless it matches the requirements under ``packages:mpi:require``. For more information on requirements see
:ref:`package-requirements`.
.. _cmd-spack-external-find:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatically Find External Packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can run the :ref:`spack external find <spack-external-find>` command
to search for system-provided packages and add them to ``packages.yaml``.
After running this command your ``packages.yaml`` may include new entries:
.. code-block:: yaml

   packages:
     cmake:
       externals:
       - spec: cmake@3.17.2
         prefix: /usr
Generally this is useful for detecting a small set of commonly-used packages;
for now this is generally limited to finding build-only dependencies.
Specific limitations include:
* Packages are not discoverable by default: For a package to be
discoverable with ``spack external find``, it needs to add special
logic. See :ref:`here <make-package-findable>` for more details.
* The logic does not search through module files, it can only detect
packages with executables defined in ``PATH``; you can help Spack locate
externals which use module files by loading any associated modules for
packages that you want Spack to know about before running
``spack external find``.
* Spack does not overwrite existing entries in the package configuration:
If there is an external defined for a spec at any configuration scope,
then Spack will not add a new external entry (``spack config blame packages``
can help locate all external entries).
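Putting it together, a typical session might look like this (the package name and the
module step are illustrative):

.. code-block:: console

   # Load any modules for externals you want detected, then:
   $ spack external find cmake

   # Review where each external entry was recorded:
   $ spack config blame packages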
.. _package-requirements:
--------------------
Package Requirements
--------------------
Spack can be configured to always use certain compilers, package
versions, and variants during concretization through package
requirements.
Package requirements are useful when you find yourself repeatedly
specifying the same constraints on the command line, and wish that
Spack respects these constraints whether you mention them explicitly
or not. Another use case is specifying constraints that should apply
to all root specs in an environment, without having to repeat the
constraint everywhere.
Apart from that, requirements config is more flexible than constraints
on the command line, because it can specify constraints on packages
*when they occur* as a dependency. In contrast, on the command line it
is not possible to specify constraints on dependencies while also keeping
those dependencies optional.
^^^^^^^^^^^^^^^^^^^
Requirements syntax
^^^^^^^^^^^^^^^^^^^
The package requirements configuration is specified in ``packages.yaml``,
keyed by package name and expressed using the Spec syntax. In the simplest
case you can specify attributes that you always want the package to have
by providing a single spec string to ``require``:
.. code-block:: yaml

   packages:
     libfabric:
       require: "@1.13.2"
In the above example, ``libfabric`` will always build with version 1.13.2. If you
need to compose multiple configuration scopes ``require`` accepts a list of
strings:
.. code-block:: yaml

   packages:
     libfabric:
       require:
       - "@1.13.2"
       - "%gcc"
In this case ``libfabric`` will always build with version 1.13.2 **and** using GCC
as a compiler.
For more complex use cases, ``require`` also accepts a list of objects. These objects
must have either an ``any_of`` or a ``one_of`` field containing a list of spec strings,
and they can optionally have a ``when`` and a ``message`` attribute:
.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["@4.1.5", "%gcc"]
         message: "in this example only 4.1.5 can build with other compilers"
``any_of`` is a list of specs. One of those specs must be satisfied
and it is also allowed for the concretized spec to match more than one.
In the above example, that means you could build ``openmpi@4.1.5%gcc``,
``openmpi@4.1.5%clang`` or ``openmpi@3.9%gcc``, but
not ``openmpi@3.9%clang``.
If a custom message is provided, and the requirement is not satisfiable,
Spack will print the custom error message:
.. code-block:: console

   $ spack spec openmpi@3.9%clang
   ==> Error: in this example only 4.1.5 can build with other compilers
We could express a similar requirement using the ``when`` attribute:
.. code-block:: yaml
packages:
openmpi:
require:
- any_of: ["%gcc"]
when: "@:4.1.4"
message: "in this example only 4.1.5 can build with other compilers"
In the example above, if the version turns out to be 4.1.4 or less, we require the compiler to be GCC.
For readability, Spack also allows a ``spec`` key accepting a string when there is only a single
constraint:
.. code-block:: yaml

   packages:
     openmpi:
       require:
       - spec: "%gcc"
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"
This code snippet and the one before it are semantically equivalent.
Finally, instead of ``any_of`` you can use ``one_of`` which also takes a list of specs. The final
concretized spec must match one and only one of them:
.. code-block:: yaml

   packages:
     mpich:
       require:
       - one_of: ["+cuda", "+rocm"]
In the example above, that means you could build ``mpich+cuda`` or ``mpich+rocm`` but not ``mpich+cuda+rocm``.
.. note::

   For ``any_of`` and ``one_of``, the order of specs indicates a
   preference: items that appear earlier in the list are preferred
   (note that these preferences can be ignored in favor of others).
.. note::

   When using a conditional requirement, Spack is allowed to actively avoid the triggering
   condition (the ``when=...`` spec) if that leads to a concrete spec with better scores in
   the optimization criteria. To check the current optimization criteria and their
   priorities you can run ``spack solve zlib``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can also set default requirements for all packages under ``all``
like this:
.. code-block:: yaml

   packages:
     all:
       require: '%clang'
which means every spec will be required to use ``clang`` as a compiler.
Note that in this case ``all`` represents a *default set of requirements* -
if there are specific package requirements, then the default requirements
under ``all`` are disregarded. For example, with a configuration like this:
.. code-block:: yaml

   packages:
     all:
       require: '%clang'
     cmake:
       require: '%gcc'
Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake``
dependencies) to use ``clang``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A requirement on a virtual spec applies whenever that virtual is present in the DAG.
This can be useful for fixing which virtual provider you want to use:
.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'
With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.
Requirements on the virtual spec and on the specific provider are both applied, if
present. For instance with a configuration like:
.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'
     mvapich2:
       require: '~cuda'
you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.
.. _package-preferences:
-------------------
Package Preferences
-------------------
In some cases package requirements can be too strong, and package
preferences are the better option. Package preferences do not impose
constraints on packages for particular versions or variant values;
rather, they only set defaults. The concretizer is free to change
them if it must, due to other constraints, and also prefers reusing
installed packages over building new ones that are a better match for
preferences.
Most package preferences (``compilers``, ``target`` and ``providers``)
can only be set globally under the ``all`` section of ``packages.yaml``:
.. code-block:: yaml

   packages:
     all:
       compiler: [gcc@12.2.0, clang@12:, oneapi@2023:]
       target: [x86_64_v3]
       providers:
         mpi: [mvapich2, mpich, openmpi]
These preferences override Spack's default and effectively reorder priorities
when looking for the best compiler, target or virtual package provider. Each
preference takes an ordered list of spec constraints, with earlier entries in
the list being preferred over later entries.
In the example above all packages prefer to be compiled with ``gcc@12.2.0``,
to target the ``x86_64_v3`` microarchitecture and to use ``mvapich2`` if they
depend on ``mpi``.
The ``variants`` and ``version`` preferences can be set under
package specific sections of the ``packages.yaml`` file:
.. code-block:: yaml

   packages:
     opencv:
       variants: +debug
     gperftools:
       version: [2.2, 2.4, 2.3]
In this case, the preference for ``opencv`` is to build with debug options, while
``gperftools`` prefers version 2.2 over 2.4.
Any preference can be overwritten on the command line if explicitly requested.
Preferences cannot overcome explicit constraints, as they only set a preferred
ordering among homogeneous attribute values. Going back to the example, if
``gperftools@2.3:`` was requested, then Spack will install version 2.4
since the most preferred version 2.2 is prohibited by the version constraint.
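A quick way to check this behavior is to concretize with the constraint (output elided):

.. code-block:: console

   $ spack spec gperftools@2.3:   # picks 2.4; the preferred 2.2 violates the constraint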
.. _package_permissions:
-------------------
Package Permissions
-------------------
Spack can be configured to assign permissions to the files installed
by a package.
In the ``packages.yaml`` file under ``permissions``, the attributes
``read``, ``write``, and ``group`` control the package
permissions. These attributes can be set per-package, or for all
packages under ``all``. If permissions are set under ``all`` and for a
specific package, the package-specific settings take precedence.
The ``read`` and ``write`` attributes take one of ``user``, ``group``,
and ``world``.
.. code-block:: yaml

   packages:
     all:
       permissions:
         write: group
         group: spack
     my_app:
       permissions:
         read: group
         group: my_team
The permissions settings describe the broadest level of access to
installations of the specified packages. The execute permissions of
the file are set to the same level as read permissions for those files
that are executable. The default setting for ``read`` is ``world``,
and for ``write`` is ``user``. In the example above, installations of
``my_app`` will be installed with user and group permissions but no
world permissions, and owned by the group ``my_team``. All other
packages will be installed with user and group write privileges, and
world read privileges. Those packages will be owned by the group
``spack``.
The ``group`` attribute assigns a Unix-style group to a package. All
files installed by the package will be owned by the assigned group,
and the sticky group bit will be set on the install prefix and all
directories inside the install prefix. This will ensure that even
manually placed files within the install prefix are owned by the
assigned group. If no group is assigned, Spack will allow the OS
default behavior to go as expected.
----------------------------
Assigning Package Attributes
----------------------------
You can assign class-level attributes in the configuration:
.. code-block:: yaml

   packages:
     mpileaks:
       # Override existing attributes
       url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
       # ... or add new ones
       x: 1
Attributes set this way will be accessible to any method executed
in the package.py file (e.g. the ``install()`` method). Values for these
attributes may be any value parseable by yaml.
These can only be applied to specific packages, not "all" or
virtual packages.
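As a hedged illustration (the method body is hypothetical), the overridden and added
attributes from the example above become visible on the package object:

.. code-block:: python

   from spack.package import *

   class Mpileaks(Package):
       def install(self, spec, prefix):
           # "url" was overridden and "x" was added via packages.yaml
           print(self.url)  # http://www.somewhereelse.com/mpileaks-1.0.tar.gz
           print(self.x)    # 1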


@@ -392,7 +392,7 @@ See section
 :ref:`Configuration Scopes <configuration-scopes>`
 for an explanation about the different files
 and section
-:ref:`Build customization <build-settings>`
+:ref:`Build customization <packages-config>`
 for specifics and examples for ``packages.yaml`` files.
 .. If your system administrator did not provide modules for pre-installed Intel


@@ -17,7 +17,7 @@ case you want to skip directly to specific docs:
 * :ref:`config.yaml <config-yaml>`
 * :ref:`mirrors.yaml <mirrors>`
 * :ref:`modules.yaml <modules>`
-* :ref:`packages.yaml <build-settings>`
+* :ref:`packages.yaml <packages-config>`
 * :ref:`repos.yaml <repositories>`
 You can also add any of these as inline configuration in the YAML


@@ -0,0 +1,77 @@
.. Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)
==========================
Frequently Asked Questions
==========================
This page contains answers to frequently asked questions about Spack.
If you have questions that are not answered here, feel free to ask on
`Slack <https://slack.spack.io>`_ or `GitHub Discussions
<https://github.com/spack/spack/discussions>`_. If you've learned the
answer to a question that you think should be here, please consider
contributing to this page.
.. _faq-concretizer-precedence:
-----------------------------------------------------
Why does Spack pick particular versions and variants?
-----------------------------------------------------
This question comes up in a variety of forms:
1. Why does Spack seem to ignore my package preferences from ``packages.yaml`` config?
2. Why does Spack toggle a variant instead of using the default from the ``package.py`` file?
The short answer is that Spack always picks an optimal configuration
based on a complex set of criteria\ [#f1]_. These criteria are more nuanced
than always choosing the latest versions or default variants.
.. note::

   As a rule of thumb: requirements + constraints > reuse > preferences > defaults.
The following set of criteria (from lowest to highest precedence) explain
common cases where concretization output may seem surprising at first.
1. :ref:`Package preferences <package-preferences>` configured in ``packages.yaml``
override variant defaults from ``package.py`` files, and influence the optimal
ordering of versions. Preferences are specified as follows:
.. code-block:: yaml

      packages:
        foo:
          version: [1.0, 1.1]
          variants: ~mpi
2. :ref:`Reuse concretization <concretizer-options>` configured in ``concretizer.yaml``
overrides preferences, since it's typically faster to reuse an existing spec than to
build a preferred one from sources. When build caches are enabled, specs may be reused
from a remote location too. Reuse concretization is configured as follows:
.. code-block:: yaml

      concretizer:
        reuse: dependencies  # other options are 'true' and 'false'
3. :ref:`Package requirements <package-requirements>` configured in ``packages.yaml``,
and constraints from the command line as well as ``package.py`` files override all
of the above. Requirements are specified as follows:
.. code-block:: yaml

      packages:
        foo:
          require:
          - "@1.2: +mpi"
Requirements and constraints restrict the set of possible solutions, while reuse
behavior and preferences influence what an optimal solution looks like.
.. rubric:: Footnotes
.. [#f1] The exact list of criteria can be retrieved with the ``spack solve`` command


@@ -55,6 +55,7 @@ or refer to the full manual below.
    getting_started
    basic_usage
    replace_conda_homebrew
+   frequently_asked_questions
 .. toctree::
    :maxdepth: 2
@@ -70,7 +71,7 @@ or refer to the full manual below.
    configuration
    config_yaml
-   bootstrapping
+   packages_yaml
    build_settings
    environments
    containers
@@ -78,6 +79,7 @@ or refer to the full manual below.
    module_file_support
    repositories
    binary_caches
+   bootstrapping
    command_index
    chain
    extensions


@@ -0,0 +1,560 @@
.. Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _packages-config:
================================
Package Settings (packages.yaml)
================================
Spack allows you to customize how your software is built through the
``packages.yaml`` file. Using it, you can make Spack prefer particular
implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK),
or you can make it prefer to build with particular compilers. You can
also tell Spack to use *external* software installations already
present on your system.
At a high level, the ``packages.yaml`` file is structured like this:
.. code-block:: yaml
packages:
package1:
# settings for package1
package2:
# settings for package2
# ...
all:
# settings that apply to all packages.
So you can either set build preferences specifically for *one* package,
or you can specify that certain settings should apply to *all* packages.
The types of settings you can customize are described in detail below.
Spack's build defaults are in the default
``etc/spack/defaults/packages.yaml`` file. You can override them in
``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more
details on how this works, see :ref:`configuration-scopes`.
.. _sec-external-packages:
-----------------
External Packages
-----------------
Spack can be configured to use externally-installed
packages rather than building its own. This may be desirable
when machines ship with system packages, such as a customized MPI
implementation that should be used instead of one Spack builds itself.
External packages are configured through the ``packages.yaml`` file.
Here's an example of an external configuration:
.. code-block:: yaml
packages:
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
This example lists three installations of OpenMPI, one built with GCC,
one built with GCC and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory. Note that the specified path is the top-level
install prefix, not the ``bin`` subdirectory.
``packages.yaml`` can also be used to specify modules to load instead
of installation prefixes. The following example says that the module
``CMake/3.7.2`` provides ``cmake`` version 3.7.2.
.. code-block:: yaml
cmake:
externals:
- spec: cmake@3.7.2
modules:
- CMake/3.7.2
Each ``packages.yaml`` begins with a ``packages:`` attribute, followed
by a list of package names. To specify externals, add an ``externals:``
attribute under the package name, with one entry per external
installation. Each external should specify a ``spec:`` string that is as
well-defined as reasonably possible. If a
package lacks a spec component, such as a compiler or
package version, then Spack will guess the missing component based
on its most-favored packages, and it may guess incorrectly.
Each package version and compiler listed in an external should
have entries in Spack's packages and compiler configuration, even
though the package and compiler may never be built.
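For instance, the following sketch contrasts an underspecified external with a
well-defined one (the package, version, and paths are illustrative):

.. code-block:: yaml

   packages:
     zlib:
       externals:
       # underspecified: Spack has to guess the compiler, and may guess wrong
       - spec: zlib@1.2.13
         prefix: /usr
       # well-defined: version, compiler and architecture are pinned explicitly
       - spec: "zlib@1.2.13%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /usr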
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Prevent packages from being built from sources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Adding an external spec in ``packages.yaml`` allows Spack to use an external location,
but it does not prevent Spack from building packages from sources. In the above example,
Spack might, for many valid reasons, choose to start building and linking with the
latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.
To prevent this, the ``packages.yaml`` configuration also allows packages
to be flagged as non-buildable. The previous example could be modified to
be:
.. code-block:: yaml
packages:
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
buildable: False
The addition of the ``buildable`` flag tells Spack that it should never build
its own version of OpenMPI from sources, and it will instead always rely on a pre-built
OpenMPI.
.. note::
If ``concretizer:reuse`` is on (see :ref:`concretizer-options` for more information on that flag),
pre-built specs include specs already available from a local store, an upstream store, a registered
buildcache, or specs marked as externals in ``packages.yaml``. If ``concretizer:reuse`` is off, only
external specs in ``packages.yaml`` are included in the list of pre-built specs.
If an external module is specified as not buildable, then Spack will load the
external module into the build environment so that it can be used for linking.
The ``buildable`` flag does not need to be paired with external packages.
It can also be used alone to forbid packages that may be
buggy or otherwise undesirable.
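For example, a minimal sketch that forbids a package outright, without providing
any external (the package name is illustrative):

.. code-block:: yaml

   packages:
     libdwarf:
       buildable: False

With this in place, any spec that needs ``libdwarf`` will fail to concretize
rather than build it from sources.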
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Non-buildable virtual packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Virtual packages in Spack can also be specified as not buildable, with
external implementations provided instead. In the example above,
OpenMPI is configured as not buildable, but Spack will often prefer
other MPI implementations over the externally available OpenMPI. Every
MPI provider could be marked as not buildable individually, but it is
more convenient to mark the virtual ``mpi`` package itself:
.. code-block:: yaml
packages:
mpi:
buildable: False
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
Spack can then use any of the listed external implementations of MPI
to satisfy a dependency, and will choose depending on the compiler and
architecture.
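For instance, with the externals above, a constraint on the root can steer which
implementation is picked (``mpileaks`` is just an example MPI client; the outcome
is what one would expect, not a guarantee):

.. code-block:: console

   $ spack spec mpileaks %intel   # expected to select openmpi@1.6.5%intel as the mpi provider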
In cases where the concretizer is configured to reuse specs, and other ``mpi`` providers
(available via stores or buildcaches) are not wanted, Spack can be configured to require
specs matching only the available externals:
.. code-block:: yaml
packages:
mpi:
buildable: False
require:
- one_of: [
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
"openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
]
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
This configuration prevents any spec using MPI that originates from stores or buildcaches from being
reused, unless it matches the requirements under ``packages:mpi:require``. For more information on requirements see
:ref:`package-requirements`.
.. _cmd-spack-external-find:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatically Find External Packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can run the :ref:`spack external find <spack-external-find>` command
to search for system-provided packages and add them to ``packages.yaml``.
After running this command your ``packages.yaml`` may include new entries:
.. code-block:: yaml
packages:
cmake:
externals:
- spec: cmake@3.17.2
prefix: /usr
This is useful for detecting a small set of commonly-used packages;
for now it is mostly limited to finding build-only dependencies.
Specific limitations include:
* Packages are not discoverable by default: For a package to be
discoverable with ``spack external find``, it needs to add special
logic. See :ref:`here <make-package-findable>` for more details.
* The logic does not search through module files; it can only detect
  packages with executables defined in ``PATH``. You can help Spack locate
  externals that use module files by loading the associated modules for
  the packages you want Spack to know about before running
  ``spack external find`` (see the example after this list).
* Spack does not overwrite existing entries in the package configuration:
If there is an external defined for a spec at any configuration scope,
then Spack will not add a new external entry (``spack config blame packages``
can help locate all external entries).
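For instance, a hypothetical session that detects a module-based ``cmake`` could
look like this (the module name is illustrative):

.. code-block:: console

   $ module load CMake/3.17.2   # hypothetical module providing cmake
   $ spack external find cmake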
.. _package-requirements:
--------------------
Package Requirements
--------------------
Spack can be configured to always use certain compilers, package
versions, and variants during concretization through package
requirements.
Package requirements are useful when you find yourself repeatedly
specifying the same constraints on the command line, and want
Spack to respect these constraints whether you mention them explicitly
or not. Another use case is specifying constraints that should apply
to all root specs in an environment, without having to repeat the
constraint everywhere.
Apart from that, requirements config is more flexible than constraints
on the command line, because it can specify constraints on packages
*when they occur* as a dependency. In contrast, on the command line it
is not possible to specify constraints on dependencies while also keeping
those dependencies optional.
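For example, the following sketch enforces ``+mpi`` on ``hdf5`` wherever it
appears in a DAG, whether as a root or as a dependency, without forcing anything
to depend on it (the syntax is explained in the next section):

.. code-block:: yaml

   packages:
     hdf5:
       require: "+mpi"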
.. seealso::
FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`
^^^^^^^^^^^^^^^^^^^
Requirements syntax
^^^^^^^^^^^^^^^^^^^
The package requirements configuration is specified in ``packages.yaml``,
keyed by package name and expressed using the Spec syntax. In the simplest
case you can specify attributes that you always want the package to have
by providing a single spec string to ``require``:
.. code-block:: yaml
packages:
libfabric:
require: "@1.13.2"
In the above example, ``libfabric`` will always build with version 1.13.2. If you
need to compose multiple constraints, ``require`` accepts a list of
strings:
.. code-block:: yaml
packages:
libfabric:
require:
- "@1.13.2"
- "%gcc"
In this case ``libfabric`` will always build with version 1.13.2 **and** using GCC
as a compiler.
For more complex use cases, ``require`` also accepts a list of objects. These objects
must have either an ``any_of`` or a ``one_of`` field containing a list of spec strings,
and they can optionally have ``when`` and ``message`` attributes:
.. code-block:: yaml
packages:
openmpi:
require:
- any_of: ["@4.1.5", "%gcc"]
message: "in this example only 4.1.5 can build with other compilers"
``any_of`` is a list of specs, at least one of which must be satisfied;
the concretized spec is also allowed to match more than one.
In the above example, that means you could build ``openmpi@4.1.5%gcc``,
``openmpi@4.1.5%clang`` or ``openmpi@3.9%gcc``, but
not ``openmpi@3.9%clang``.
If a custom message is provided, and the requirement is not satisfiable,
Spack will print the custom error message:
.. code-block:: console
$ spack spec openmpi@3.9%clang
==> Error: in this example only 4.1.5 can build with other compilers
We could express a similar requirement using the ``when`` attribute:
.. code-block:: yaml
packages:
openmpi:
require:
- any_of: ["%gcc"]
when: "@:4.1.4"
message: "in this example only 4.1.5 can build with other compilers"
In the example above, if the version turns out to be 4.1.4 or less, we require the compiler to be GCC.
For readability, Spack also allows a ``spec`` key accepting a string when there is only a single
constraint:
.. code-block:: yaml
packages:
openmpi:
require:
- spec: "%gcc"
when: "@:4.1.4"
message: "in this example only 4.1.5 can build with other compilers"
This code snippet and the one before it are semantically equivalent.
Finally, instead of ``any_of`` you can use ``one_of``, which also takes a list of specs. The final
concretized spec must match one and only one of them:
.. code-block:: yaml
packages:
mpich:
require:
- one_of: ["+cuda", "+rocm"]
In the example above, that means you could build ``mpich+cuda`` or ``mpich+rocm`` but not ``mpich+cuda+rocm``.
.. note::
For ``any_of`` and ``one_of``, the order of specs indicates a
preference: items that appear earlier in the list are preferred
(note that these preferences can be ignored in favor of others).
.. note::
When using a conditional requirement, Spack is allowed to actively avoid the triggering
condition (the ``when=...`` spec) if that leads to a concrete spec with better scores in
the optimization criteria. To check the current optimization criteria and their
priorities you can run ``spack solve zlib``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can also set default requirements for all packages under ``all``
like this:
.. code-block:: yaml
packages:
all:
require: '%clang'
which means every spec will be required to use ``clang`` as a compiler.
Note that in this case ``all`` represents a *default set of requirements*:
if there are specific package requirements, then the default requirements
under ``all`` are disregarded. For example, with a configuration like this:
.. code-block:: yaml
packages:
all:
require: '%clang'
cmake:
require: '%gcc'
Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake``
dependencies) to use ``clang``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A requirement on a virtual spec applies whenever that virtual is present in the DAG.
This can be useful for fixing which virtual provider you want to use:
.. code-block:: yaml
packages:
mpi:
require: 'mvapich2 %gcc'
With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.
Requirements on the virtual spec and on the specific provider are both applied, if
present. For instance with a configuration like:
.. code-block:: yaml
packages:
mpi:
require: 'mvapich2 %gcc'
mvapich2:
require: '~cuda'
you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.
.. _package-preferences:
-------------------
Package Preferences
-------------------
In some cases package requirements can be too strong, and package
preferences are the better option. Package preferences do not impose
constraints on packages for particular versions or variant values;
they only set defaults. The concretizer is free to change
them if it must, due to other constraints, and it also prefers reusing
installed packages over building new ones that are a better match for
preferences.
.. seealso::
FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`
Most package preferences (``compilers``, ``target`` and ``providers``)
can only be set globally under the ``all`` section of ``packages.yaml``:
.. code-block:: yaml
packages:
all:
compiler: [gcc@12.2.0, clang@12:, oneapi@2023:]
target: [x86_64_v3]
providers:
mpi: [mvapich2, mpich, openmpi]
These preferences override Spack's default and effectively reorder priorities
when looking for the best compiler, target or virtual package provider. Each
preference takes an ordered list of spec constraints, with earlier entries in
the list being preferred over later entries.
In the example above all packages prefer to be compiled with ``gcc@12.2.0``,
to target the ``x86_64_v3`` microarchitecture and to use ``mvapich2`` if they
depend on ``mpi``.
The ``variants`` and ``version`` preferences can be set under
package specific sections of the ``packages.yaml`` file:
.. code-block:: yaml
packages:
opencv:
variants: +debug
gperftools:
version: [2.2, 2.4, 2.3]
In this case, the preference for ``opencv`` is to build with debug options, while
``gperftools`` prefers version 2.2, followed by 2.4 and then 2.3.
Any preference can be overwritten on the command line if explicitly requested.
Preferences cannot overcome explicit constraints, as they only set a preferred
ordering among homogeneous attribute values. Going back to the example, if
``gperftools@2.3:`` is requested, then Spack will install version 2.4,
since the most preferred version, 2.2, is prohibited by the version constraint.
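You can check this behavior without installing anything (a sketch, assuming the
``gperftools`` preferences above):

.. code-block:: console

   $ spack spec gperftools@2.3:   # concretizes to 2.4: the preferred 2.2 is outside the range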
.. _package_permissions:
-------------------
Package Permissions
-------------------
Spack can be configured to assign permissions to the files installed
by a package.
In the ``packages.yaml`` file under ``permissions``, the attributes
``read``, ``write``, and ``group`` control the package
permissions. These attributes can be set per-package, or for all
packages under ``all``. If permissions are set under ``all`` and for a
specific package, the package-specific settings take precedence.
The ``read`` and ``write`` attributes take one of ``user``, ``group``,
or ``world``.
.. code-block:: yaml
packages:
all:
permissions:
write: group
group: spack
my_app:
permissions:
read: group
group: my_team
The permissions settings describe the broadest level of access to
installations of the specified packages. The execute permissions of
the file are set to the same level as read permissions for those files
that are executable. The default setting for ``read`` is ``world``,
and for ``write`` is ``user``. In the example above, installations of
``my_app`` will be installed with user and group permissions but no
world permissions, and owned by the group ``my_team``. All other
packages will be installed with user and group write privileges, and
world read privileges. Those packages will be owned by the group
``spack``.
The ``group`` attribute assigns a Unix-style group to a package. All
files installed by the package will be owned by the assigned group,
and the sticky group bit will be set on the install prefix and all
directories inside the install prefix. This ensures that even
manually placed files within the install prefix are owned by the
assigned group. If no group is assigned, Spack will not change the
group, and the default OS behavior applies.
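Under the example configuration above, an installation of ``my_app`` would end up
with user read/write, group read, no world access, and the setgid bit on
directories. A sketch of what that might look like (the path and listing are
illustrative):

.. code-block:: console

   $ ls -ld $(spack location -i my_app)
   drwxr-s--- 7 user my_team 4096 Jan  1 00:00 /path/to/install_tree/my_app-1.0-abcdefg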
----------------------------
Assigning Package Attributes
----------------------------
You can assign class-level attributes in the configuration:
.. code-block:: yaml
packages:
mpileaks:
package_attributes:
# Override existing attributes
url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
# ... or add new ones
x: 1
Attributes set this way will be accessible to any method executed
in the ``package.py`` file (e.g. the ``install()`` method). Values for these
attributes may be any value parseable by YAML.
These can only be applied to specific packages, not ``all`` or
virtual packages.
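Inside the package recipe those values show up as ordinary class-level
attributes. A hedged sketch, assuming the ``x`` attribute from the example above
(the recipe body is illustrative, not the real ``mpileaks`` package):

.. code-block:: python

   # package.py (sketch)
   from spack.package import *

   class Mpileaks(AutotoolsPackage):
       """The 'url' attribute is overridden from packages.yaml."""

       def setup_build_environment(self, env):
           # 'x' was injected through the package_attributes config entry
           if self.x == 1:
               env.set("MPILEAKS_X", str(self.x))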


@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
#: PEP440 canonical <major>.<minor>.<micro>.<devN> string #: PEP440 canonical <major>.<minor>.<micro>.<devN> string
__version__ = "0.21.0.dev0" __version__ = "0.21.2"
spack_version = __version__ spack_version = __version__


@ -40,6 +40,7 @@ def _search_duplicate_compilers(error_cls):
import collections.abc import collections.abc
import glob import glob
import inspect import inspect
import io
import itertools import itertools
import pathlib import pathlib
import pickle import pickle
@ -54,6 +55,7 @@ def _search_duplicate_compilers(error_cls):
import spack.repo import spack.repo
import spack.spec import spack.spec
import spack.util.crypto import spack.util.crypto
import spack.util.spack_yaml as syaml
import spack.variant import spack.variant
#: Map an audit tag to a list of callables implementing checks #: Map an audit tag to a list of callables implementing checks
@ -250,6 +252,88 @@ def _search_duplicate_specs_in_externals(error_cls):
return errors return errors
@config_packages
def _deprecated_preferences(error_cls):
"""Search package preferences deprecated in v0.21 (and slated for removal in v0.22)"""
# TODO (v0.22): remove this audit as the attributes will not be allowed in config
errors = []
packages_yaml = spack.config.CONFIG.get_config("packages")
def make_error(attribute_name, config_data, summary):
s = io.StringIO()
s.write("Occurring in the following file:\n")
dict_view = syaml.syaml_dict((k, v) for k, v in config_data.items() if k == attribute_name)
syaml.dump_config(dict_view, stream=s, blame=True)
return error_cls(summary=summary, details=[s.getvalue()])
if "all" in packages_yaml and "version" in packages_yaml["all"]:
summary = "Using the deprecated 'version' attribute under 'packages:all'"
errors.append(make_error("version", packages_yaml["all"], summary))
for package_name in packages_yaml:
if package_name == "all":
continue
package_conf = packages_yaml[package_name]
for attribute in ("compiler", "providers", "target"):
if attribute not in package_conf:
continue
summary = (
f"Using the deprecated '{attribute}' attribute " f"under 'packages:{package_name}'"
)
errors.append(make_error(attribute, package_conf, summary))
return errors
@config_packages
def _avoid_mismatched_variants(error_cls):
"""Warns if variant preferences have mismatched types or names."""
errors = []
packages_yaml = spack.config.CONFIG.get_config("packages")
def make_error(config_data, summary):
s = io.StringIO()
s.write("Occurring in the following file:\n")
syaml.dump_config(config_data, stream=s, blame=True)
return error_cls(summary=summary, details=[s.getvalue()])
for pkg_name in packages_yaml:
# 'all:' must be more forgiving, since it is setting defaults for everything
if pkg_name == "all" or "variants" not in packages_yaml[pkg_name]:
continue
preferences = packages_yaml[pkg_name]["variants"]
if not isinstance(preferences, list):
preferences = [preferences]
for variants in preferences:
current_spec = spack.spec.Spec(variants)
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
for variant in current_spec.variants.values():
# Variant does not exist at all
if variant.name not in pkg_cls.variants:
summary = (
f"Setting a preference for the '{pkg_name}' package to the "
f"non-existing variant '{variant.name}'"
)
errors.append(make_error(preferences, summary))
continue
# Variant cannot accept this value
s = spack.spec.Spec(pkg_name)
try:
s.update_variant_validate(variant.name, variant.value)
except Exception:
summary = (
f"Setting the variant '{variant.name}' of the '{pkg_name}' package "
f"to the invalid value '{str(variant)}'"
)
errors.append(make_error(preferences, summary))
return errors
#: Sanity checks on package directives #: Sanity checks on package directives
package_directives = AuditClass( package_directives = AuditClass(
group="packages", group="packages",
@ -776,7 +860,7 @@ def _version_constraints_are_satisfiable_by_some_version_in_repo(pkgs, error_cls
) )
except Exception: except Exception:
summary = ( summary = (
"{0}: dependency on {1} cannot be satisfied " "by known versions of {1.name}" "{0}: dependency on {1} cannot be satisfied by known versions of {1.name}"
).format(pkg_name, s) ).format(pkg_name, s)
details = ["happening in " + filename] details = ["happening in " + filename]
if dependency_pkg_cls is not None: if dependency_pkg_cls is not None:
@ -818,6 +902,53 @@ def _analyze_variants_in_directive(pkg, constraint, directive, error_cls):
return errors return errors
@package_directives
def _named_specs_in_when_arguments(pkgs, error_cls):
"""Reports named specs in the 'when=' attribute of a directive.
Note that 'conflicts' is the only directive allowing that.
"""
errors = []
for pkg_name in pkgs:
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
def _extracts_errors(triggers, summary):
_errors = []
for trigger in list(triggers):
when_spec = spack.spec.Spec(trigger)
if when_spec.name is not None and when_spec.name != pkg_name:
details = [f"using '{trigger}', should be '^{trigger}'"]
_errors.append(error_cls(summary=summary, details=details))
return _errors
for dname, triggers in pkg_cls.dependencies.items():
summary = f"{pkg_name}: wrong 'when=' condition for the '{dname}' dependency"
errors.extend(_extracts_errors(triggers, summary))
for vname, (variant, triggers) in pkg_cls.variants.items():
summary = f"{pkg_name}: wrong 'when=' condition for the '{vname}' variant"
errors.extend(_extracts_errors(triggers, summary))
for provided, triggers in pkg_cls.provided.items():
summary = f"{pkg_name}: wrong 'when=' condition for the '{provided}' virtual"
errors.extend(_extracts_errors(triggers, summary))
for _, triggers in pkg_cls.requirements.items():
triggers = [when_spec for when_spec, _, _ in triggers]
summary = f"{pkg_name}: wrong 'when=' condition in 'requires' directive"
errors.extend(_extracts_errors(triggers, summary))
triggers = list(pkg_cls.patches)
summary = f"{pkg_name}: wrong 'when=' condition in 'patch' directives"
errors.extend(_extracts_errors(triggers, summary))
triggers = list(pkg_cls.resources)
summary = f"{pkg_name}: wrong 'when=' condition in 'resource' directives"
errors.extend(_extracts_errors(triggers, summary))
return llnl.util.lang.dedupe(errors)
#: Sanity checks on package directives #: Sanity checks on package directives
external_detection = AuditClass( external_detection = AuditClass(
group="externals", group="externals",


@ -66,8 +66,10 @@
from spack.stage import Stage from spack.stage import Stage
from spack.util.executable import which from spack.util.executable import which
_build_cache_relative_path = "build_cache" BUILD_CACHE_RELATIVE_PATH = "build_cache"
_build_cache_keys_relative_path = "_pgp" BUILD_CACHE_KEYS_RELATIVE_PATH = "_pgp"
CURRENT_BUILD_CACHE_LAYOUT_VERSION = 1
FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION = 2
class BuildCacheDatabase(spack_db.Database): class BuildCacheDatabase(spack_db.Database):
@ -481,7 +483,7 @@ def _fetch_and_cache_index(self, mirror_url, cache_entry={}):
scheme = urllib.parse.urlparse(mirror_url).scheme scheme = urllib.parse.urlparse(mirror_url).scheme
if scheme != "oci" and not web_util.url_exists( if scheme != "oci" and not web_util.url_exists(
url_util.join(mirror_url, _build_cache_relative_path, "index.json") url_util.join(mirror_url, BUILD_CACHE_RELATIVE_PATH, "index.json")
): ):
return False return False
@ -600,6 +602,10 @@ def __init__(self, msg):
super().__init__(msg) super().__init__(msg)
class InvalidMetadataFile(spack.error.SpackError):
pass
class UnsignedPackageException(spack.error.SpackError): class UnsignedPackageException(spack.error.SpackError):
""" """
Raised if installation of unsigned package is attempted without Raised if installation of unsigned package is attempted without
@ -614,11 +620,11 @@ def compute_hash(data):
def build_cache_relative_path(): def build_cache_relative_path():
return _build_cache_relative_path return BUILD_CACHE_RELATIVE_PATH
def build_cache_keys_relative_path(): def build_cache_keys_relative_path():
return _build_cache_keys_relative_path return BUILD_CACHE_KEYS_RELATIVE_PATH
def build_cache_prefix(prefix): def build_cache_prefix(prefix):
@ -1401,7 +1407,7 @@ def _build_tarball_in_stage_dir(spec: Spec, out_url: str, stage_dir: str, option
spec_dict = sjson.load(content) spec_dict = sjson.load(content)
else: else:
raise ValueError("{0} not a valid spec file type".format(spec_file)) raise ValueError("{0} not a valid spec file type".format(spec_file))
spec_dict["buildcache_layout_version"] = 1 spec_dict["buildcache_layout_version"] = CURRENT_BUILD_CACHE_LAYOUT_VERSION
spec_dict["binary_cache_checksum"] = {"hash_algorithm": "sha256", "hash": checksum} spec_dict["binary_cache_checksum"] = {"hash_algorithm": "sha256", "hash": checksum}
with open(specfile_path, "w") as outfile: with open(specfile_path, "w") as outfile:
@ -1560,6 +1566,42 @@ def _delete_staged_downloads(download_result):
download_result["specfile_stage"].destroy() download_result["specfile_stage"].destroy()
def _get_valid_spec_file(path: str, max_supported_layout: int) -> Tuple[Dict, int]:
"""Read and validate a spec file, returning the spec dict with its layout version, or raising
InvalidMetadataFile if invalid."""
try:
with open(path, "rb") as f:
binary_content = f.read()
except OSError:
raise InvalidMetadataFile(f"No such file: {path}")
# In the future we may support transparently decompressing compressed spec files.
if binary_content[:2] == b"\x1f\x8b":
raise InvalidMetadataFile("Compressed spec files are not supported")
try:
as_string = binary_content.decode("utf-8")
if path.endswith(".json.sig"):
spec_dict = Spec.extract_json_from_clearsig(as_string)
else:
spec_dict = json.loads(as_string)
except Exception as e:
raise InvalidMetadataFile(f"Could not parse {path} due to: {e}") from e
# Ensure this version is not too new.
try:
layout_version = int(spec_dict.get("buildcache_layout_version", 0))
except ValueError as e:
raise InvalidMetadataFile("Could not parse layout version") from e
if layout_version > max_supported_layout:
raise InvalidMetadataFile(
f"Layout version {layout_version} is too new for this version of Spack"
)
return spec_dict, layout_version
def download_tarball(spec, unsigned=False, mirrors_for_spec=None): def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
""" """
Download binary tarball for given package into stage area, returning Download binary tarball for given package into stage area, returning
@ -1652,6 +1694,18 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
try: try:
local_specfile_stage.fetch() local_specfile_stage.fetch()
local_specfile_stage.check() local_specfile_stage.check()
try:
_get_valid_spec_file(
local_specfile_stage.save_filename,
FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION,
)
except InvalidMetadataFile as e:
tty.warn(
f"Ignoring binary package for {spec.name}/{spec.dag_hash()[:7]} "
f"from {mirror} due to invalid metadata file: {e}"
)
local_specfile_stage.destroy()
continue
except Exception: except Exception:
continue continue
local_specfile_stage.cache_local() local_specfile_stage.cache_local()
@ -1674,14 +1728,26 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
else: else:
ext = "json.sig" if try_signed else "json" ext = "json.sig" if try_signed else "json"
specfile_path = url_util.join(mirror, _build_cache_relative_path, specfile_prefix) specfile_path = url_util.join(mirror, BUILD_CACHE_RELATIVE_PATH, specfile_prefix)
specfile_url = f"{specfile_path}.{ext}" specfile_url = f"{specfile_path}.{ext}"
spackfile_url = url_util.join(mirror, _build_cache_relative_path, tarball) spackfile_url = url_util.join(mirror, BUILD_CACHE_RELATIVE_PATH, tarball)
local_specfile_stage = try_fetch(specfile_url) local_specfile_stage = try_fetch(specfile_url)
if local_specfile_stage: if local_specfile_stage:
local_specfile_path = local_specfile_stage.save_filename local_specfile_path = local_specfile_stage.save_filename
signature_verified = False signature_verified = False
try:
_get_valid_spec_file(
local_specfile_path, FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION
)
except InvalidMetadataFile as e:
tty.warn(
f"Ignoring binary package for {spec.name}/{spec.dag_hash()[:7]} "
f"from {mirror} due to invalid metadata file: {e}"
)
local_specfile_stage.destroy()
continue
if try_signed and not unsigned: if try_signed and not unsigned:
# If we found a signed specfile at the root, try to verify # If we found a signed specfile at the root, try to verify
# the signature immediately. We will not download the # the signature immediately. We will not download the
@ -1961,11 +2027,12 @@ def _extract_inner_tarball(spec, filename, extract_to, unsigned, remote_checksum
def _tar_strip_component(tar: tarfile.TarFile, prefix: str): def _tar_strip_component(tar: tarfile.TarFile, prefix: str):
"""Strip the top-level directory `prefix` from the member names in a tarfile.""" """Yield all members of tarfile that start with given prefix, and strip that prefix (including
symlinks)"""
# Including trailing /, otherwise we end up with absolute paths. # Including trailing /, otherwise we end up with absolute paths.
regex = re.compile(re.escape(prefix) + "/*") regex = re.compile(re.escape(prefix) + "/*")
# Remove the top-level directory from the member (link)names. # Only yield members in the package prefix.
# Note: when a tarfile is created, relative in-prefix symlinks are # Note: when a tarfile is created, relative in-prefix symlinks are
# expanded to matching member names of tarfile entries. So, we have # expanded to matching member names of tarfile entries. So, we have
# to ensure that those are updated too. # to ensure that those are updated too.
@ -1973,12 +2040,14 @@ def _tar_strip_component(tar: tarfile.TarFile, prefix: str):
# them. # them.
for m in tar.getmembers(): for m in tar.getmembers():
result = regex.match(m.name) result = regex.match(m.name)
assert result is not None if not result:
continue
m.name = m.name[result.end() :] m.name = m.name[result.end() :]
if m.linkname: if m.linkname:
result = regex.match(m.linkname) result = regex.match(m.linkname)
if result: if result:
m.linkname = m.linkname[result.end() :] m.linkname = m.linkname[result.end() :]
yield m
def extract_tarball(spec, download_result, unsigned=False, force=False, timer=timer.NULL_TIMER): def extract_tarball(spec, download_result, unsigned=False, force=False, timer=timer.NULL_TIMER):
@ -2001,24 +2070,16 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
) )
specfile_path = download_result["specfile_stage"].save_filename specfile_path = download_result["specfile_stage"].save_filename
spec_dict, layout_version = _get_valid_spec_file(
with open(specfile_path, "r") as inputfile: specfile_path, FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION
content = inputfile.read() )
if specfile_path.endswith(".json.sig"):
spec_dict = Spec.extract_json_from_clearsig(content)
else:
spec_dict = sjson.load(content)
bchecksum = spec_dict["binary_cache_checksum"] bchecksum = spec_dict["binary_cache_checksum"]
filename = download_result["tarball_stage"].save_filename filename = download_result["tarball_stage"].save_filename
signature_verified = download_result["signature_verified"] signature_verified = download_result["signature_verified"]
tmpdir = None tmpdir = None
if ( if layout_version == 0:
"buildcache_layout_version" not in spec_dict
or int(spec_dict["buildcache_layout_version"]) < 1
):
# Handle the older buildcache layout where the .spack file # Handle the older buildcache layout where the .spack file
# contains a spec json, maybe an .asc file (signature), # contains a spec json, maybe an .asc file (signature),
# and another tarball containing the actual install tree. # and another tarball containing the actual install tree.
@ -2029,7 +2090,7 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
_delete_staged_downloads(download_result) _delete_staged_downloads(download_result)
shutil.rmtree(tmpdir) shutil.rmtree(tmpdir)
raise e raise e
else: elif 1 <= layout_version <= 2:
# Newer buildcache layout: the .spack file contains just # Newer buildcache layout: the .spack file contains just
# in the install tree, the signature, if it exists, is # in the install tree, the signature, if it exists, is
# wrapped around the spec.json at the root. If sig verify # wrapped around the spec.json at the root. If sig verify
@ -2053,12 +2114,13 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
raise NoChecksumException( raise NoChecksumException(
tarfile_path, size, contents, "sha256", expected, local_checksum tarfile_path, size, contents, "sha256", expected, local_checksum
) )
try: try:
with closing(tarfile.open(tarfile_path, "r")) as tar: with closing(tarfile.open(tarfile_path, "r")) as tar:
# Remove install prefix from tarfil to extract directly into spec.prefix # Remove install prefix from tarfil to extract directly into spec.prefix
_tar_strip_component(tar, prefix=_ensure_common_prefix(tar)) tar.extractall(
tar.extractall(path=spec.prefix) path=spec.prefix,
members=_tar_strip_component(tar, prefix=_ensure_common_prefix(tar)),
)
except Exception: except Exception:
shutil.rmtree(spec.prefix, ignore_errors=True) shutil.rmtree(spec.prefix, ignore_errors=True)
_delete_staged_downloads(download_result) _delete_staged_downloads(download_result)
@ -2093,20 +2155,47 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
def _ensure_common_prefix(tar: tarfile.TarFile) -> str: def _ensure_common_prefix(tar: tarfile.TarFile) -> str:
# Get the shortest length directory. # Find the lowest `binary_distribution` file (hard-coded forward slash is on purpose).
common_prefix = min((e.name for e in tar.getmembers() if e.isdir()), key=len, default=None) binary_distribution = min(
(
e.name
for e in tar.getmembers()
if e.isfile() and e.name.endswith(".spack/binary_distribution")
),
key=len,
default=None,
)
if common_prefix is None: if binary_distribution is None:
raise ValueError("Tarball does not contain a common prefix") raise ValueError("Tarball is not a Spack package, missing binary_distribution file")
# Validate that each file starts with the prefix pkg_path = pathlib.PurePosixPath(binary_distribution).parent.parent
# Even the most ancient Spack version has required to list the dir of the package itself, so
# guard against broken tarballs where `path.parent.parent` is empty.
if pkg_path == pathlib.PurePosixPath():
raise ValueError("Invalid tarball, missing package prefix dir")
pkg_prefix = str(pkg_path)
# Ensure all tar entries are in the pkg_prefix dir, and if they're not, they should be parent
# dirs of it.
has_prefix = False
for member in tar.getmembers(): for member in tar.getmembers():
if not member.name.startswith(common_prefix): stripped = member.name.rstrip("/")
raise ValueError( if not (
f"Tarball contains file {member.name} outside of prefix {common_prefix}" stripped.startswith(pkg_prefix) or member.isdir() and pkg_prefix.startswith(stripped)
) ):
raise ValueError(f"Tarball contains file {stripped} outside of prefix {pkg_prefix}")
if member.isdir() and stripped == pkg_prefix:
has_prefix = True
return common_prefix # This is technically not required, but let's be defensive about the existence of the package
# prefix dir.
if not has_prefix:
raise ValueError(f"Tarball does not contain a common prefix {pkg_prefix}")
return pkg_prefix
def install_root_node(spec, unsigned=False, force=False, sha256=None): def install_root_node(spec, unsigned=False, force=False, sha256=None):
@ -2184,10 +2273,10 @@ def try_direct_fetch(spec, mirrors=None):
for mirror in binary_mirrors: for mirror in binary_mirrors:
buildcache_fetch_url_json = url_util.join( buildcache_fetch_url_json = url_util.join(
mirror.fetch_url, _build_cache_relative_path, specfile_name mirror.fetch_url, BUILD_CACHE_RELATIVE_PATH, specfile_name
) )
buildcache_fetch_url_signed_json = url_util.join( buildcache_fetch_url_signed_json = url_util.join(
mirror.fetch_url, _build_cache_relative_path, signed_specfile_name mirror.fetch_url, BUILD_CACHE_RELATIVE_PATH, signed_specfile_name
) )
try: try:
_, _, fs = web_util.read_from_url(buildcache_fetch_url_signed_json) _, _, fs = web_util.read_from_url(buildcache_fetch_url_signed_json)
@ -2292,7 +2381,7 @@ def get_keys(install=False, trust=False, force=False, mirrors=None):
for mirror in mirror_collection.values(): for mirror in mirror_collection.values():
fetch_url = mirror.fetch_url fetch_url = mirror.fetch_url
keys_url = url_util.join( keys_url = url_util.join(
fetch_url, _build_cache_relative_path, _build_cache_keys_relative_path fetch_url, BUILD_CACHE_RELATIVE_PATH, BUILD_CACHE_KEYS_RELATIVE_PATH
) )
keys_index = url_util.join(keys_url, "index.json") keys_index = url_util.join(keys_url, "index.json")
@ -2357,7 +2446,7 @@ def push_keys(*mirrors, **kwargs):
for mirror in mirrors: for mirror in mirrors:
push_url = getattr(mirror, "push_url", mirror) push_url = getattr(mirror, "push_url", mirror)
keys_url = url_util.join( keys_url = url_util.join(
push_url, _build_cache_relative_path, _build_cache_keys_relative_path push_url, BUILD_CACHE_RELATIVE_PATH, BUILD_CACHE_KEYS_RELATIVE_PATH
) )
keys_local = url_util.local_file_path(keys_url) keys_local = url_util.local_file_path(keys_url)
@ -2495,11 +2584,11 @@ def download_buildcache_entry(file_descriptions, mirror_url=None):
) )
if mirror_url: if mirror_url:
mirror_root = os.path.join(mirror_url, _build_cache_relative_path) mirror_root = os.path.join(mirror_url, BUILD_CACHE_RELATIVE_PATH)
return _download_buildcache_entry(mirror_root, file_descriptions) return _download_buildcache_entry(mirror_root, file_descriptions)
for mirror in spack.mirror.MirrorCollection(binary=True).values(): for mirror in spack.mirror.MirrorCollection(binary=True).values():
mirror_root = os.path.join(mirror.fetch_url, _build_cache_relative_path) mirror_root = os.path.join(mirror.fetch_url, BUILD_CACHE_RELATIVE_PATH)
if _download_buildcache_entry(mirror_root, file_descriptions): if _download_buildcache_entry(mirror_root, file_descriptions):
return True return True
@ -2590,7 +2679,7 @@ def __init__(self, url, local_hash, urlopen=web_util.urlopen):
def get_remote_hash(self): def get_remote_hash(self):
# Failure to fetch index.json.hash is not fatal # Failure to fetch index.json.hash is not fatal
url_index_hash = url_util.join(self.url, _build_cache_relative_path, "index.json.hash") url_index_hash = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, "index.json.hash")
try: try:
response = self.urlopen(urllib.request.Request(url_index_hash, headers=self.headers)) response = self.urlopen(urllib.request.Request(url_index_hash, headers=self.headers))
except urllib.error.URLError: except urllib.error.URLError:
@ -2611,7 +2700,7 @@ def conditional_fetch(self) -> FetchIndexResult:
return FetchIndexResult(etag=None, hash=None, data=None, fresh=True) return FetchIndexResult(etag=None, hash=None, data=None, fresh=True)
# Otherwise, download index.json # Otherwise, download index.json
url_index = url_util.join(self.url, _build_cache_relative_path, "index.json") url_index = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, "index.json")
try: try:
response = self.urlopen(urllib.request.Request(url_index, headers=self.headers)) response = self.urlopen(urllib.request.Request(url_index, headers=self.headers))
@ -2655,7 +2744,7 @@ def __init__(self, url, etag, urlopen=web_util.urlopen):
def conditional_fetch(self) -> FetchIndexResult: def conditional_fetch(self) -> FetchIndexResult:
# Just do a conditional fetch immediately # Just do a conditional fetch immediately
url = url_util.join(self.url, _build_cache_relative_path, "index.json") url = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, "index.json")
headers = { headers = {
"User-Agent": web_util.SPACK_USER_AGENT, "User-Agent": web_util.SPACK_USER_AGENT,
"If-None-Match": '"{}"'.format(self.etag), "If-None-Match": '"{}"'.format(self.etag),


@ -213,7 +213,8 @@ def _root_spec(spec_str: str) -> str:
if str(spack.platforms.host()) == "darwin": if str(spack.platforms.host()) == "darwin":
spec_str += " %apple-clang" spec_str += " %apple-clang"
elif str(spack.platforms.host()) == "windows": elif str(spack.platforms.host()) == "windows":
spec_str += " %msvc" # TODO (johnwparent): Remove version constraint when clingo patch is up
spec_str += " %msvc@:19.37"
else: else:
spec_str += " %gcc" spec_str += " %gcc"


@ -324,19 +324,29 @@ def set_compiler_environment_variables(pkg, env):
# ttyout, ttyerr, etc. # ttyout, ttyerr, etc.
link_dir = spack.paths.build_env_path link_dir = spack.paths.build_env_path
# Set SPACK compiler variables so that our wrapper knows what to call # Set SPACK compiler variables so that our wrapper knows what to
# call. If there is no compiler configured then use a default
# wrapper which will emit an error if it is used.
if compiler.cc: if compiler.cc:
env.set("SPACK_CC", compiler.cc) env.set("SPACK_CC", compiler.cc)
env.set("CC", os.path.join(link_dir, compiler.link_paths["cc"])) env.set("CC", os.path.join(link_dir, compiler.link_paths["cc"]))
else:
env.set("CC", os.path.join(link_dir, "cc"))
if compiler.cxx: if compiler.cxx:
env.set("SPACK_CXX", compiler.cxx) env.set("SPACK_CXX", compiler.cxx)
env.set("CXX", os.path.join(link_dir, compiler.link_paths["cxx"])) env.set("CXX", os.path.join(link_dir, compiler.link_paths["cxx"]))
else:
env.set("CC", os.path.join(link_dir, "c++"))
if compiler.f77: if compiler.f77:
env.set("SPACK_F77", compiler.f77) env.set("SPACK_F77", compiler.f77)
env.set("F77", os.path.join(link_dir, compiler.link_paths["f77"])) env.set("F77", os.path.join(link_dir, compiler.link_paths["f77"]))
else:
env.set("F77", os.path.join(link_dir, "f77"))
if compiler.fc: if compiler.fc:
env.set("SPACK_FC", compiler.fc) env.set("SPACK_FC", compiler.fc)
env.set("FC", os.path.join(link_dir, compiler.link_paths["fc"])) env.set("FC", os.path.join(link_dir, compiler.link_paths["fc"]))
else:
env.set("FC", os.path.join(link_dir, "fc"))
# Set SPACK compiler rpath flags so that our wrapper knows what to use # Set SPACK compiler rpath flags so that our wrapper knows what to use
env.set("SPACK_CC_RPATH_ARG", compiler.cc_rpath_arg) env.set("SPACK_CC_RPATH_ARG", compiler.cc_rpath_arg)
@ -743,15 +753,16 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
set_compiler_environment_variables(pkg, env_mods) set_compiler_environment_variables(pkg, env_mods)
set_wrapper_variables(pkg, env_mods) set_wrapper_variables(pkg, env_mods)
tty.debug("setup_package: grabbing modifications from dependencies") # Platform specific setup goes before package specific setup. This is for setting
env_mods.extend(setup_context.get_env_modifications()) # defaults like MACOSX_DEPLOYMENT_TARGET on macOS.
tty.debug("setup_package: collected all modifications from dependencies")
# architecture specific setup
platform = spack.platforms.by_name(pkg.spec.architecture.platform) platform = spack.platforms.by_name(pkg.spec.architecture.platform)
target = platform.target(pkg.spec.architecture.target) target = platform.target(pkg.spec.architecture.target)
platform.setup_platform_environment(pkg, env_mods) platform.setup_platform_environment(pkg, env_mods)
tty.debug("setup_package: grabbing modifications from dependencies")
env_mods.extend(setup_context.get_env_modifications())
tty.debug("setup_package: collected all modifications from dependencies")
if context == Context.TEST: if context == Context.TEST:
env_mods.prepend_path("PATH", ".") env_mods.prepend_path("PATH", ".")
elif context == Context.BUILD and not dirty and not env_mods.is_unset("CPATH"): elif context == Context.BUILD and not dirty and not env_mods.is_unset("CPATH"):
@ -1322,7 +1333,7 @@ def make_stack(tb, stack=None):
# don't provide context if the code is actually in the base classes. # don't provide context if the code is actually in the base classes.
obj = frame.f_locals["self"] obj = frame.f_locals["self"]
func = getattr(obj, tb.tb_frame.f_code.co_name, "") func = getattr(obj, tb.tb_frame.f_code.co_name, "")
if func: if func and hasattr(func, "__qualname__"):
typename, *_ = func.__qualname__.partition(".") typename, *_ = func.__qualname__.partition(".")
if isinstance(obj, CONTEXT_BASES) and typename not in basenames: if isinstance(obj, CONTEXT_BASES) and typename not in basenames:
break break


@ -9,11 +9,10 @@
import shutil import shutil
from os.path import basename, dirname, isdir from os.path import basename, dirname, isdir
from llnl.util.filesystem import find_headers, find_libraries, join_path from llnl.util.filesystem import find_headers, find_libraries, join_path, mkdirp
from llnl.util.link_tree import LinkTree from llnl.util.link_tree import LinkTree
from spack.directives import conflicts, variant from spack.directives import conflicts, variant
from spack.package import mkdirp
from spack.util.environment import EnvironmentModifications from spack.util.environment import EnvironmentModifications
from spack.util.executable import Executable from spack.util.executable import Executable
@ -212,3 +211,7 @@ def link_flags(self):
@property @property
def ld_flags(self): def ld_flags(self):
return "{0} {1}".format(self.search_flags, self.link_flags) return "{0} {1}".format(self.search_flags, self.link_flags)
#: Tuple of Intel math libraries, exported to packages
INTEL_MATH_LIBRARIES = ("intel-mkl", "intel-oneapi-mkl", "intel-parallel-studio")


@ -2,6 +2,8 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details. # Spack Project Developers. See the top-level COPYRIGHT file for details.
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
import warnings
import llnl.util.tty as tty import llnl.util.tty as tty
import llnl.util.tty.colify import llnl.util.tty.colify
import llnl.util.tty.color as cl import llnl.util.tty.color as cl
@ -52,8 +54,10 @@ def setup_parser(subparser):
def configs(parser, args): def configs(parser, args):
reports = spack.audit.run_group(args.subcommand) with warnings.catch_warnings():
_process_reports(reports) warnings.simplefilter("ignore")
reports = spack.audit.run_group(args.subcommand)
_process_reports(reports)
def packages(parser, args): def packages(parser, args):


@ -7,13 +7,14 @@
import glob import glob
import hashlib import hashlib
import json import json
import multiprocessing
import multiprocessing.pool import multiprocessing.pool
import os import os
import shutil import shutil
import sys import sys
import tempfile import tempfile
import urllib.request import urllib.request
from typing import Dict, List, Optional, Tuple from typing import Dict, List, Optional, Tuple, Union
import llnl.util.tty as tty import llnl.util.tty as tty
from llnl.string import plural from llnl.string import plural
@ -307,8 +308,30 @@ def _progress(i: int, total: int):
return "" return ""
def _make_pool(): class NoPool:
return multiprocessing.pool.Pool(determine_number_of_jobs(parallel=True)) def map(self, func, args):
return [func(a) for a in args]
def starmap(self, func, args):
return [func(*a) for a in args]
def __enter__(self):
return self
def __exit__(self, *args):
pass
MaybePool = Union[multiprocessing.pool.Pool, NoPool]
def _make_pool() -> MaybePool:
"""Can't use threading because it's unsafe, and can't use spawned processes because of globals.
That leaves only forking"""
if multiprocessing.get_start_method() == "fork":
return multiprocessing.pool.Pool(determine_number_of_jobs(parallel=True))
else:
return NoPool()
def push_fn(args): def push_fn(args):
@ -591,7 +614,7 @@ def _push_oci(
image_ref: ImageReference, image_ref: ImageReference,
installed_specs_with_deps: List[Spec], installed_specs_with_deps: List[Spec],
tmpdir: str, tmpdir: str,
pool: multiprocessing.pool.Pool, pool: MaybePool,
) -> List[str]: ) -> List[str]:
"""Push specs to an OCI registry """Push specs to an OCI registry
@ -692,11 +715,10 @@ def _config_from_tag(image_ref: ImageReference, tag: str) -> Optional[dict]:
return config if "spec" in config else None return config if "spec" in config else None
def _update_index_oci( def _update_index_oci(image_ref: ImageReference, tmpdir: str, pool: MaybePool) -> None:
image_ref: ImageReference, tmpdir: str, pool: multiprocessing.pool.Pool request = urllib.request.Request(url=image_ref.tags_url())
) -> None: response = spack.oci.opener.urlopen(request)
response = spack.oci.opener.urlopen(urllib.request.Request(url=image_ref.tags_url())) spack.oci.opener.ensure_status(request, response, 200)
spack.oci.opener.ensure_status(response, 200)
tags = json.load(response)["tags"] tags = json.load(response)["tags"]
# Fetch all image config files in parallel # Fetch all image config files in parallel


@ -0,0 +1,30 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import sys
from typing import List
import llnl.util.tty as tty
import spack.cmd
import spack.spec
display_args = {"long": True, "show_flags": False, "variants": False, "indent": 4}
def confirm_action(specs: List[spack.spec.Spec], participle: str, noun: str):
"""Display the list of specs to be acted on and ask for confirmation.
Args:
specs: specs to be acted on
participle: action expressed as a participle, e.g. "uninstalled"
noun: action expressed as a noun, e.g. "uninstallation"
"""
tty.msg(f"The following {len(specs)} packages will be {participle}:\n")
spack.cmd.display_specs(specs, **display_args)
print("")
answer = tty.get_yes_or_no("Do you want to proceed?", default=False)
if not answer:
tty.msg(f"Aborting {noun}")
sys.exit(0)


@ -0,0 +1,103 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import argparse
import sys
from typing import List
import llnl.util.tty as tty
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.cmd.common.confirmation as confirmation
import spack.environment as ev
import spack.spec
description = "remove specs from the concretized lockfile of an environment"
section = "environments"
level = "long"
# Arguments for display_specs when we find ambiguity
display_args = {"long": True, "show_flags": False, "variants": False, "indent": 4}
def setup_parser(subparser):
subparser.add_argument(
"--root", action="store_true", help="deconcretize only specific environment roots"
)
arguments.add_common_arguments(subparser, ["yes_to_all", "specs"])
subparser.add_argument(
"-a",
"--all",
action="store_true",
dest="all",
help="deconcretize ALL specs that match each supplied spec",
)
def get_deconcretize_list(
args: argparse.Namespace, specs: List[spack.spec.Spec], env: ev.Environment
) -> List[spack.spec.Spec]:
"""
Get list of environment roots to deconcretize
"""
env_specs = [s for _, s in env.concretized_specs()]
to_deconcretize = []
errors = []
for s in specs:
if args.root:
# find all roots matching given spec
to_deconc = [e for e in env_specs if e.satisfies(s)]
else:
# find all roots matching or depending on a matching spec
to_deconc = [e for e in env_specs if any(d.satisfies(s) for d in e.traverse())]
if len(to_deconc) < 1:
tty.warn(f"No matching specs to deconcretize for {s}")
elif len(to_deconc) > 1 and not args.all:
errors.append((s, to_deconc))
to_deconcretize.extend(to_deconc)
if errors:
for spec, matching in errors:
tty.error(f"{spec} matches multiple concrete specs:")
sys.stderr.write("\n")
spack.cmd.display_specs(matching, output=sys.stderr, **display_args)
sys.stderr.write("\n")
sys.stderr.flush()
tty.die("Use '--all' to deconcretize all matching specs, or be more specific")
return to_deconcretize
def deconcretize_specs(args, specs):
env = spack.cmd.require_active_env(cmd_name="deconcretize")
if args.specs:
deconcretize_list = get_deconcretize_list(args, specs, env)
else:
deconcretize_list = [s for _, s in env.concretized_specs()]
if not args.yes_to_all:
confirmation.confirm_action(deconcretize_list, "deconcretized", "deconcretization")
with env.write_transaction():
for spec in deconcretize_list:
env.deconcretize(spec)
env.write()
def deconcretize(parser, args):
if not args.specs and not args.all:
tty.die(
"deconcretize requires at least one spec argument.",
" Use `spack deconcretize --all` to deconcretize ALL specs.",
)
specs = spack.cmd.parse_specs(args.specs) if args.specs else [any]
deconcretize_specs(args, specs)


@ -6,6 +6,7 @@
import llnl.util.tty as tty import llnl.util.tty as tty
import spack.cmd.common.arguments import spack.cmd.common.arguments
import spack.cmd.common.confirmation
import spack.cmd.uninstall import spack.cmd.uninstall
import spack.environment as ev import spack.environment as ev
import spack.store import spack.store
@ -41,6 +42,6 @@ def gc(parser, args):
return return
if not args.yes_to_all: if not args.yes_to_all:
spack.cmd.uninstall.confirm_removal(specs) spack.cmd.common.confirmation.confirm_action(specs, "uninstalled", "uninstallation")
spack.cmd.uninstall.do_uninstall(specs, force=False) spack.cmd.uninstall.do_uninstall(specs, force=False)


@ -61,7 +61,7 @@ def graph(parser, args):
args.dot = True args.dot = True
env = ev.active_environment() env = ev.active_environment()
if env: if env:
specs = env.all_specs() specs = env.concrete_roots()
else: else:
specs = spack.store.STORE.db.query() specs = spack.store.STORE.db.query()


@@ -3,6 +3,7 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)

+import sys
 import textwrap
 from itertools import zip_longest

@@ -16,6 +17,7 @@
 import spack.install_test
 import spack.repo
 import spack.spec
+import spack.version
 from spack.package_base import preferred_version

 description = "get detailed information on a particular package"

@@ -53,6 +55,7 @@ def setup_parser(subparser):
         ("--tags", print_tags.__doc__),
         ("--tests", print_tests.__doc__),
         ("--virtuals", print_virtuals.__doc__),
+        ("--variants-by-name", "list variants in strict name order; don't group by condition"),
     ]
     for opt, help_comment in options:
         subparser.add_argument(opt, action="store_true", help=help_comment)

@@ -77,35 +80,10 @@ def license(s):

 class VariantFormatter:
-    def __init__(self, variants):
-        self.variants = variants
+    def __init__(self, pkg):
+        self.variants = pkg.variants
         self.headers = ("Name [Default]", "When", "Allowed values", "Description")

-        # Formats
-        fmt_name = "{0} [{1}]"
-
-        # Initialize column widths with the length of the
-        # corresponding headers, as they cannot be shorter
-        # than that
-        self.column_widths = [len(x) for x in self.headers]
-
-        # Expand columns based on max line lengths
-        for k, e in variants.items():
-            v, w = e
-            candidate_max_widths = (
-                len(fmt_name.format(k, self.default(v))),  # Name [Default]
-                len(str(w)),
-                len(v.allowed_values),  # Allowed values
-                len(v.description),  # Description
-            )
-
-            self.column_widths = (
-                max(self.column_widths[0], candidate_max_widths[0]),
-                max(self.column_widths[1], candidate_max_widths[1]),
-                max(self.column_widths[2], candidate_max_widths[2]),
-                max(self.column_widths[3], candidate_max_widths[3]),
-            )
-
         # Don't let name or possible values be less than max widths
         _, cols = tty.terminal_size()
         max_name = min(self.column_widths[0], 30)

@@ -137,6 +115,8 @@ def default(self, v):
     def lines(self):
         if not self.variants:
             yield "    None"
+            return
+
         else:
             yield "    " + self.fmt % self.headers
             underline = tuple([w * "=" for w in self.column_widths])

@@ -271,15 +251,165 @@ def print_tests(pkg):
         color.cprint("    None")


-def print_variants(pkg):
-    """output variants"""
-
-    color.cprint("")
-    color.cprint(section_title("Variants:"))
-
-    formatter = VariantFormatter(pkg.variants)
-    for line in formatter.lines:
-        color.cprint(color.cescape(line))
+def _fmt_value(v):
+    if v is None or isinstance(v, bool):
+        return str(v).lower()
+    else:
+        return str(v)
+
+
+def _fmt_name_and_default(variant):
+    """Print colorized name [default] for a variant."""
+    return color.colorize(f"@c{{{variant.name}}} @C{{[{_fmt_value(variant.default)}]}}")
+
+
+def _fmt_when(when, indent):
+    return color.colorize(f"{indent * ' '}@B{{when}} {color.cescape(when)}")
+
+
+def _fmt_variant_description(variant, width, indent):
+    """Format a variant's description, preserving explicit line breaks."""
+    return "\n".join(
+        textwrap.fill(
+            line, width=width, initial_indent=indent * " ", subsequent_indent=indent * " "
+        )
+        for line in variant.description.split("\n")
+    )
+
+
+def _fmt_variant(variant, max_name_default_len, indent, when=None, out=None):
+    out = out or sys.stdout
+
+    _, cols = tty.terminal_size()
+
+    name_and_default = _fmt_name_and_default(variant)
+    name_default_len = color.clen(name_and_default)
+
+    values = variant.values
+    if not isinstance(variant.values, (tuple, list, spack.variant.DisjointSetsOfValues)):
+        values = [variant.values]
+
+    # put 'none' first, sort the rest by value
+    sorted_values = sorted(values, key=lambda v: (v != "none", v))
+
+    pad = 4  # min padding between 'name [default]' and values
+    value_indent = (indent + max_name_default_len + pad) * " "  # left edge of values
+
+    # This preserves any formatting (i.e., newlines) from how the description was
+    # written in package.py, but still wraps long lines for small terminals.
+    # This allows some packages to provide detailed help on their variants (see, e.g., gasnet).
+    formatted_values = "\n".join(
+        textwrap.wrap(
+            f"{', '.join(_fmt_value(v) for v in sorted_values)}",
+            width=cols - 2,
+            initial_indent=value_indent,
+            subsequent_indent=value_indent,
+        )
+    )
+    formatted_values = formatted_values[indent + name_default_len + pad :]

+    # name [default]   value1, value2, value3, ...
+    padding = pad * " "
+    color.cprint(f"{indent * ' '}{name_and_default}{padding}@c{{{formatted_values}}}", stream=out)
+
+    # when <spec>
+    description_indent = indent + 4
+    if when is not None and when != spack.spec.Spec():
+        out.write(_fmt_when(when, description_indent - 2))
+        out.write("\n")
+
+    # description, preserving explicit line breaks from the way it's written in the package file
+    out.write(_fmt_variant_description(variant, cols - 2, description_indent))
+    out.write("\n")
+
+
+def _variants_by_name_when(pkg):
+    """Adaptor to get variants keyed by { name: { when: [Variant...] } }."""
+    # TODO: replace with pkg.variants_by_name(when=True) when unified directive dicts are merged.
+    variants = {}
+    for name, (variant, whens) in sorted(pkg.variants.items()):
+        for when in whens:
+            variants.setdefault(name, {}).setdefault(when, []).append(variant)
+    return variants
+
+
+def _variants_by_when_name(pkg):
+    """Adaptor to get variants keyed by { when: { name: Variant } }"""
+    # TODO: replace with pkg.variants when unified directive dicts are merged.
+    variants = {}
+    for name, (variant, whens) in pkg.variants.items():
+        for when in whens:
+            variants.setdefault(when, {})[name] = variant
+    return variants
+
+
+def _print_variants_header(pkg):
+    """output variants"""
+
+    if not pkg.variants:
+        print("    None")
+        return
+
+    color.cprint("")
+    color.cprint(section_title("Variants:"))
+
+    variants_by_name = _variants_by_name_when(pkg)
+
+    # Calculate the max length of the "name [default]" part of the variant display
+    # This lets us know where to print variant values.
+    max_name_default_len = max(
+        color.clen(_fmt_name_and_default(variant))
+        for name, when_variants in variants_by_name.items()
+        for variants in when_variants.values()
+        for variant in variants
+    )
+
+    return max_name_default_len, variants_by_name
+
+
+def _unconstrained_ver_first(item):
+    """sort key that puts specs with open version ranges first"""
+    spec, _ = item
+    return (spack.version.any_version not in spec.versions, spec)
+
+
+def print_variants_grouped_by_when(pkg):
+    max_name_default_len, _ = _print_variants_header(pkg)
+
+    indent = 4
+    variants = _variants_by_when_name(pkg)
+    for when, variants_by_name in sorted(variants.items(), key=_unconstrained_ver_first):
+        padded_values = max_name_default_len + 4
+        start_indent = indent
+
+        if when != spack.spec.Spec():
+            sys.stdout.write("\n")
+            sys.stdout.write(_fmt_when(when, indent))
+            sys.stdout.write("\n")
+
+            # indent names slightly inside 'when', but line up values
+            padded_values -= 2
+            start_indent += 2
+
+        for name, variant in sorted(variants_by_name.items()):
+            _fmt_variant(variant, padded_values, start_indent, None, out=sys.stdout)
+
+
+def print_variants_by_name(pkg):
+    max_name_default_len, variants_by_name = _print_variants_header(pkg)
+    max_name_default_len += 4
+
+    indent = 4
+    for name, when_variants in variants_by_name.items():
+        for when, variants in sorted(when_variants.items(), key=_unconstrained_ver_first):
+            for variant in variants:
+                _fmt_variant(variant, max_name_default_len, indent, when, out=sys.stdout)
+                sys.stdout.write("\n")
+
+
+def print_variants(pkg):
+    """output variants"""
+    print_variants_grouped_by_when(pkg)


 def print_versions(pkg):

@@ -300,18 +430,24 @@ def print_versions(pkg):
     pad = padder(pkg.versions, 4)

     preferred = preferred_version(pkg)
-    url = ""
-    if pkg.has_code:
-        url = fs.for_package_version(pkg, preferred)
+
+    def get_url(version):
+        try:
+            return fs.for_package_version(pkg, version)
+        except spack.fetch_strategy.InvalidArgsError:
+            return "No URL"
+
+    url = get_url(preferred) if pkg.has_code else ""

     line = version("    {0}".format(pad(preferred))) + color.cescape(url)
-    color.cprint(line)
+    color.cwrite(line)
+    print()

     safe = []
     deprecated = []
     for v in reversed(sorted(pkg.versions)):
         if pkg.has_code:
-            url = fs.for_package_version(pkg, v)
+            url = get_url(v)
         if pkg.versions[v].get("deprecated", False):
             deprecated.append((v, url))
         else:

@@ -384,7 +520,12 @@ def info(parser, args):
     else:
         color.cprint("    None")

-    color.cprint(section_title("Homepage: ") + pkg.homepage)
+    if getattr(pkg, "homepage"):
+        color.cprint(section_title("Homepage: ") + pkg.homepage)
+
+    _print_variants = (
+        print_variants_by_name if args.variants_by_name else print_variants_grouped_by_when
+    )

     # Now output optional information in expected order
     sections = [

@@ -392,7 +533,7 @@ def info(parser, args):
         (args.all or args.detectable, print_detectable),
         (args.all or args.tags, print_tags),
         (args.all or not args.no_versions, print_versions),
-        (args.all or not args.no_variants, print_variants),
+        (args.all or not args.no_variants, _print_variants),
         (args.all or args.phases, print_phases),
         (args.all or not args.no_dependencies, print_dependencies),
         (args.all or args.virtuals, print_virtuals),
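The value formatting and ordering used by _fmt_variant can be checked in isolation; _fmt_value below is copied from the diff, the rest is a small demonstration:

def _fmt_value(v):
    if v is None or isinstance(v, bool):
        return str(v).lower()
    else:
        return str(v)

# Booleans and None are lowercased to match spec syntax; everything else is str()'d.
assert _fmt_value(True) == "true"
assert _fmt_value(None) == "none"
assert _fmt_value("shared") == "shared"
assert _fmt_value(16) == "16"

# The sort key from _fmt_variant: put 'none' first, sort the rest by value.
values = ["shared", "none", "debug"]
print(sorted(values, key=lambda v: (v != "none", v)))  # ['none', 'debug', 'shared']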

lib/spack/spack/cmd/tutorial.py

@@ -23,7 +23,7 @@

 # tutorial configuration parameters
-tutorial_branch = "releases/v0.20"
+tutorial_branch = "releases/v0.21"
 tutorial_mirror = "file:///mirror"
 tutorial_key = os.path.join(spack.paths.share_path, "keys", "tutorial.pub")

lib/spack/spack/cmd/uninstall.py

@@ -11,10 +11,9 @@
 import spack.cmd
 import spack.cmd.common.arguments as arguments
+import spack.cmd.common.confirmation as confirmation
 import spack.environment as ev
-import spack.error
 import spack.package_base
-import spack.repo
 import spack.spec
 import spack.store
 import spack.traverse as traverse

@@ -278,7 +277,7 @@ def uninstall_specs(args, specs):
         return

     if not args.yes_to_all:
-        confirm_removal(uninstall_list)
+        confirmation.confirm_action(uninstall_list, "uninstalled", "uninstallation")

     # Uninstall everything on the list
     do_uninstall(uninstall_list, args.force)

@@ -292,21 +291,6 @@ def uninstall_specs(args, specs):
             env.regenerate_views()


-def confirm_removal(specs: List[spack.spec.Spec]):
-    """Display the list of specs to be removed and ask for confirmation.
-
-    Args:
-        specs: specs to be removed
-    """
-    tty.msg("The following {} packages will be uninstalled:\n".format(len(specs)))
-    spack.cmd.display_specs(specs, **display_args)
-    print("")
-    answer = tty.get_yes_or_no("Do you want to proceed?", default=False)
-    if not answer:
-        tty.msg("Aborting uninstallation")
-        sys.exit(0)
-
-
 def uninstall(parser, args):
     if not args.specs and not args.all:
         tty.die(
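spack.cmd.common.confirmation is a new module that this compare view does not show; below is a plausible sketch of its confirm_action, generalized from the deleted confirm_removal above. The signature and display_args are inferred from the call sites, so details may differ from the real module:

import sys
from typing import List

import llnl.util.tty as tty

import spack.cmd
import spack.spec

# assumed to match the display options used by `spack uninstall`
display_args = {"long": True, "show_flags": False, "variants": False, "indent": 4}


def confirm_action(specs: List["spack.spec.Spec"], participle: str, noun: str):
    """Display the list of specs to be acted on and ask for confirmation.

    Args:
        specs: specs to be affected
        participle: action expressed as a participle, e.g. "uninstalled"
        noun: action expressed as a noun, e.g. "uninstallation"
    """
    tty.msg(f"The following {len(specs)} packages will be {participle}:\n")
    spack.cmd.display_specs(specs, **display_args)
    print("")
    answer = tty.get_yes_or_no("Do you want to proceed?", default=False)
    if not answer:
        tty.msg(f"Aborting {noun}")
        sys.exit(0)

Factoring the prompt out this way lets `spack gc`, `spack uninstall`, and the new `spack deconcretize` share one confirmation path.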

lib/spack/spack/compilers/__init__.py

@@ -154,6 +154,14 @@ def add_compilers_to_config(compilers, scope=None, init_config=True):
     """
     compiler_config = get_compiler_config(scope, init_config)
     for compiler in compilers:
+        if not compiler.cc:
+            tty.debug(f"{compiler.spec} does not have a C compiler")
+        if not compiler.cxx:
+            tty.debug(f"{compiler.spec} does not have a C++ compiler")
+        if not compiler.f77:
+            tty.debug(f"{compiler.spec} does not have a Fortran77 compiler")
+        if not compiler.fc:
+            tty.debug(f"{compiler.spec} does not have a Fortran compiler")
         compiler_config.append(_to_dict(compiler))
     spack.config.set("compilers", compiler_config, scope=scope)

@@ -506,9 +514,10 @@ def get_compilers(config, cspec=None, arch_spec=None):
     for items in config:
         items = items["compiler"]

-        # NOTE: in principle this should be equality not satisfies, but config can still
-        # be written in old format gcc@10.1.0 instead of gcc@=10.1.0.
-        if cspec and not cspec.satisfies(items["spec"]):
+        # We might use equality here.
+        if cspec and not spack.spec.parse_with_version_concrete(
+            items["spec"], compiler=True
+        ).satisfies(cspec):
            continue

         # If an arch spec is given, confirm that this compiler

lib/spack/spack/environment/environment.py

@@ -1358,7 +1358,7 @@ def concretize(self, force=False, tests=False):

         # Remove concrete specs that no longer correlate to a user spec
         for spec in set(self.concretized_user_specs) - set(self.user_specs):
-            self.deconcretize(spec)
+            self.deconcretize(spec, concrete=False)

         # Pick the right concretization strategy
         if self.unify == "when_possible":

@@ -1373,15 +1373,36 @@ def concretize(self, force=False, tests=False):
         msg = "concretization strategy not implemented [{0}]"
         raise SpackEnvironmentError(msg.format(self.unify))

-    def deconcretize(self, spec):
+    def deconcretize(self, spec: spack.spec.Spec, concrete: bool = True):
+        """
+        Remove specified spec from environment concretization
+
+        Arguments:
+            spec: Spec to deconcretize. This must be a root of the environment
+            concrete: If True, find all instances of spec as concrete in the environment.
+                If False, find a single instance of the abstract spec as root of the environment.
+        """
         # spec has to be a root of the environment
-        index = self.concretized_user_specs.index(spec)
-        dag_hash = self.concretized_order.pop(index)
-        del self.concretized_user_specs[index]
+        if concrete:
+            dag_hash = spec.dag_hash()
+
+            pairs = zip(self.concretized_user_specs, self.concretized_order)
+            filtered = [(spec, h) for spec, h in pairs if h != dag_hash]
+            # Cannot use zip and unpack two values; it fails if filtered is empty
+            self.concretized_user_specs = [s for s, _ in filtered]
+            self.concretized_order = [h for _, h in filtered]
+        else:
+            index = self.concretized_user_specs.index(spec)
+            dag_hash = self.concretized_order.pop(index)
+            del self.concretized_user_specs[index]

         # If this was the only user spec that concretized to this concrete spec, remove it
         if dag_hash not in self.concretized_order:
-            del self.specs_by_hash[dag_hash]
+            # if we deconcretized a dependency that doesn't correspond to a root, it
+            # won't be here.
+            if dag_hash in self.specs_by_hash:
+                del self.specs_by_hash[dag_hash]

@@ -1739,11 +1760,14 @@ def _env_modifications_for_view(
         self, view: ViewDescriptor, reverse: bool = False
     ) -> spack.util.environment.EnvironmentModifications:
         try:
-            mods = uenv.environment_modifications_for_specs(*self.concrete_roots(), view=view)
+            with spack.store.STORE.db.read_transaction():
+                installed_roots = [s for s in self.concrete_roots() if s.installed]
+                mods = uenv.environment_modifications_for_specs(*installed_roots, view=view)
         except Exception as e:
             # Failing to setup spec-specific changes shouldn't be a hard error.
             tty.warn(
-                "couldn't load runtime environment due to {}: {}".format(e.__class__.__name__, e)
+                f"could not {'unload' if reverse else 'load'} runtime environment due "
+                f"to {e.__class__.__name__}: {e}"
             )
             return spack.util.environment.EnvironmentModifications()
         return mods.reversed() if reverse else mods
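The two deconcretize modes differ only in list bookkeeping; here is a minimal self-contained sketch of that logic, with plain strings standing in for specs and DAG hashes:

# Parallel lists, as in Environment: user specs and the hashes they concretized to.
concretized_user_specs = ["hdf5", "hdf5+mpi", "zlib"]
concretized_order = ["hash-a", "hash-a", "hash-b"]

def deconcretize(spec=None, dag_hash=None, concrete=True):
    global concretized_user_specs, concretized_order
    if concrete:
        # remove *every* root whose concrete hash matches
        pairs = zip(concretized_user_specs, concretized_order)
        filtered = [(s, h) for s, h in pairs if h != dag_hash]
        concretized_user_specs = [s for s, _ in filtered]
        concretized_order = [h for _, h in filtered]
    else:
        # remove a *single* abstract root by position
        index = concretized_user_specs.index(spec)
        del concretized_order[index]
        del concretized_user_specs[index]

deconcretize(dag_hash="hash-a", concrete=True)
print(concretized_user_specs)  # ['zlib'] -- both roots sharing hash-a were removed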

lib/spack/spack/installer.py

@@ -380,14 +380,13 @@ def _print_timer(pre: str, pkg_id: str, timer: timer.BaseTimer) -> None:


 def _install_from_cache(
-    pkg: "spack.package_base.PackageBase", cache_only: bool, explicit: bool, unsigned: bool = False
+    pkg: "spack.package_base.PackageBase", explicit: bool, unsigned: bool = False
 ) -> bool:
     """
-    Extract the package from binary cache
+    Install the package from binary cache

     Args:
         pkg: package to install from the binary cache
-        cache_only: only extract from binary cache
         explicit: ``True`` if installing the package was explicitly
             requested by the user, otherwise, ``False``
         unsigned: ``True`` if binary package signatures to be checked,

@@ -399,15 +398,11 @@ def _install_from_cache(
     installed_from_cache = _try_install_from_binary_cache(
         pkg, explicit, unsigned=unsigned, timer=t
     )
-    pkg_id = package_id(pkg)
     if not installed_from_cache:
-        pre = f"No binary for {pkg_id} found"
-        if cache_only:
-            tty.die(f"{pre} when cache-only specified")
-
-        tty.msg(f"{pre}: installing from source")
         return False
     t.stop()
+
+    pkg_id = package_id(pkg)
     tty.debug(f"Successfully extracted {pkg_id} from binary cache")

     _write_timer_json(pkg, t, True)

@@ -1335,7 +1330,6 @@ def _prepare_for_install(self, task: BuildTask) -> None:
         """
         install_args = task.request.install_args
         keep_prefix = install_args.get("keep_prefix")
-        restage = install_args.get("restage")

         # Make sure the package is ready to be locally installed.
         self._ensure_install_ready(task.pkg)

@@ -1367,10 +1361,6 @@ def _prepare_for_install(self, task: BuildTask) -> None:
             else:
                 tty.debug(f"{task.pkg_id} is partially installed")

-        # Destroy the stage for a locally installed, non-DIYStage, package
-        if restage and task.pkg.stage.managed_by_spack:
-            task.pkg.stage.destroy()
-
         if (
             rec
             and installed_in_db

@@ -1671,11 +1661,16 @@ def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
         task.status = STATUS_INSTALLING

         # Use the binary cache if requested
-        if use_cache and _install_from_cache(pkg, cache_only, explicit, unsigned):
-            self._update_installed(task)
-            if task.compiler:
-                self._add_compiler_package_to_config(pkg)
-            return
+        if use_cache:
+            if _install_from_cache(pkg, explicit, unsigned):
+                self._update_installed(task)
+                if task.compiler:
+                    self._add_compiler_package_to_config(pkg)
+                return
+            elif cache_only:
+                raise InstallError("No binary found when cache-only was specified", pkg=pkg)
+            else:
+                tty.msg(f"No binary for {pkg_id} found: installing from source")

         pkg.run_tests = tests if isinstance(tests, bool) else pkg.name in tests

@@ -1691,6 +1686,10 @@ def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
         try:
             self._setup_install_dir(pkg)

+            # Create stage object now and let it be serialized for the child process. That
+            # way monkeypatch in tests works correctly.
+            pkg.stage
+
             # Create a child process to do the actual installation.
             # Preserve verbosity settings across installs.
             spack.package_base.PackageBase._verbose = spack.build_environment.start_build_process(

@@ -2223,11 +2222,6 @@ def install(self) -> None:
                 if not keep_prefix and not action == InstallAction.OVERWRITE:
                     pkg.remove_prefix()

-                # The subprocess *may* have removed the build stage. Mark it
-                # not created so that the next time pkg.stage is invoked, we
-                # check the filesystem for it.
-                pkg.stage.created = False
-
                 # Perform basic task cleanup for the installed spec to
                 # include downgrading the write to a read lock
                 self._cleanup_task(pkg)

@@ -2297,6 +2291,9 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
         # whether to keep the build stage after installation
         self.keep_stage = install_args.get("keep_stage", False)

+        # whether to restage
+        self.restage = install_args.get("restage", False)
+
         # whether to skip the patch phase
         self.skip_patch = install_args.get("skip_patch", False)

@@ -2327,9 +2324,13 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):

     def run(self) -> bool:
         """Main entry point from ``build_process`` to kick off install in child."""
-        self.pkg.stage.keep = self.keep_stage
+        stage = self.pkg.stage
+        stage.keep = self.keep_stage

-        with self.pkg.stage:
+        if self.restage:
+            stage.destroy()
+
+        with stage:
             self.timer.start("stage")

             if not self.fake:
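The net effect of the reworked cache branch in _install_task is a three-way decision: binary install, hard failure under cache-only, or fall back to source. A schematic sketch (stand-in names, not the real installer API):

class InstallError(Exception):
    pass

def install(pkg, use_cache: bool, cache_only: bool, try_cache) -> str:
    """try_cache stands in for _install_from_cache; returns True on success."""
    if use_cache:
        if try_cache(pkg):
            return "installed from binary cache"
        elif cache_only:
            # failure is now an InstallError raised to the caller, not tty.die()
            raise InstallError("No binary found when cache-only was specified")
        else:
            print(f"No binary for {pkg} found: installing from source")
    return "installed from source"

print(install("zlib", use_cache=True, cache_only=False, try_cache=lambda p: False))

Raising InstallError instead of calling tty.die() lets parallel installs report the failure per-package rather than killing the whole process.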

lib/spack/spack/main.py

@@ -1016,14 +1016,16 @@ def _main(argv=None):

     bootstrap_context = bootstrap.ensure_bootstrap_configuration()

     with bootstrap_context:
-        return finish_parse_and_run(parser, cmd_name, args.command, env_format_error)
+        return finish_parse_and_run(parser, cmd_name, args, env_format_error)


-def finish_parse_and_run(parser, cmd_name, cmd, env_format_error):
+def finish_parse_and_run(parser, cmd_name, main_args, env_format_error):
     """Finish parsing after we know the command to run."""
     # add the found command to the parser and re-run then re-parse
     command = parser.add_command(cmd_name)
-    args, unknown = parser.parse_known_args(cmd)
+    args, unknown = parser.parse_known_args(main_args.command)
+
+    # we need to inherit verbose since the install command checks for it
+    args.verbose = main_args.verbose

     # Now that we know what command this is and what its args are, determine
     # whether we can continue with a bad environment and raise if not.

lib/spack/spack/mixins.py

@@ -93,7 +93,7 @@ def _filter_compiler_wrappers_impl(pkg_or_builder):
     replacements = []

     for idx, (env_var, compiler_path) in enumerate(compiler_vars):
-        if env_var in os.environ:
+        if env_var in os.environ and compiler_path is not None:
             # filter spack wrapper and links to spack wrapper in case
             # build system runs realpath
             wrapper = os.environ[env_var]

lib/spack/spack/modules/common.py

@@ -486,43 +486,35 @@ def excluded(self):
         spec = self.spec
         conf = self.module.configuration(self.name)

-        # Compute the list of include rules that match
-        include_rules = conf.get("include", [])
-        include_matches = [x for x in include_rules if spec.satisfies(x)]
-
-        # Compute the list of exclude rules that match
-        exclude_rules = conf.get("exclude", [])
-        exclude_matches = [x for x in exclude_rules if spec.satisfies(x)]
+        # Compute the list of matching include / exclude rules, and whether excluded as implicit
+        include_matches = [x for x in conf.get("include", []) if spec.satisfies(x)]
+        exclude_matches = [x for x in conf.get("exclude", []) if spec.satisfies(x)]
+        excluded_as_implicit = not self.explicit and conf.get("exclude_implicits", False)

         def debug_info(line_header, match_list):
             if match_list:
-                msg = "\t{0} : {1}".format(line_header, spec.cshort_spec)
-                tty.debug(msg)
+                tty.debug(f"\t{line_header} : {spec.cshort_spec}")
                 for rule in match_list:
-                    tty.debug("\t\tmatches rule: {0}".format(rule))
+                    tty.debug(f"\t\tmatches rule: {rule}")

         debug_info("INCLUDE", include_matches)
         debug_info("EXCLUDE", exclude_matches)

-        if not include_matches and exclude_matches:
-            return True
+        if excluded_as_implicit:
+            tty.debug(f"\tEXCLUDED_AS_IMPLICIT : {spec.cshort_spec}")

-        return False
+        return not include_matches and (exclude_matches or excluded_as_implicit)

     @property
     def hidden(self):
         """Returns True if the module has been hidden, False otherwise."""

-        # A few variables for convenience of writing the method
-        spec = self.spec
         conf = self.module.configuration(self.name)

-        hidden_as_implicit = not self.explicit and conf.get(
-            "hide_implicits", conf.get("exclude_implicits", False)
-        )
+        hidden_as_implicit = not self.explicit and conf.get("hide_implicits", False)

         if hidden_as_implicit:
-            tty.debug(f"\tHIDDEN_AS_IMPLICIT : {spec.cshort_spec}")
+            tty.debug(f"\tHIDDEN_AS_IMPLICIT : {self.spec.cshort_spec}")

         return hidden_as_implicit
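The new single return expression in excluded() encodes the rule precedence; it can be checked in isolation (bool() is added here so the toy returns real booleans rather than the matched list):

def excluded(include_matches, exclude_matches, excluded_as_implicit):
    # include rules always win; otherwise either a matching exclude rule or
    # exclude_implicits (for implicitly installed specs) excludes the module
    return bool(not include_matches and (exclude_matches or excluded_as_implicit))

assert excluded([], ["mpich"], False) is True         # excluded by rule
assert excluded([], [], True) is True                 # excluded as implicit
assert excluded(["mpich"], ["mpich"], True) is False  # include overrides both
assert excluded([], [], False) is False               # nothing matches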

lib/spack/spack/oci/oci.py

@@ -134,7 +134,7 @@ def upload_blob(
         return True

     # Otherwise, do another PUT request.
-    spack.oci.opener.ensure_status(response, 202)
+    spack.oci.opener.ensure_status(request, response, 202)
     assert "Location" in response.headers

     # Can be absolute or relative, joining handles both

@@ -143,19 +143,16 @@ def upload_blob(
     )
     f.seek(0)

-    response = _urlopen(
-        Request(
-            url=upload_url,
-            method="PUT",
-            data=f,
-            headers={
-                "Content-Type": "application/octet-stream",
-                "Content-Length": str(file_size),
-            },
-        )
-    )
+    request = Request(
+        url=upload_url,
+        method="PUT",
+        data=f,
+        headers={"Content-Type": "application/octet-stream", "Content-Length": str(file_size)},
+    )
+    response = _urlopen(request)

-    spack.oci.opener.ensure_status(response, 201)
+    spack.oci.opener.ensure_status(request, response, 201)

     # print elapsed time and # MB/s
     _log_upload_progress(digest, file_size, time.time() - start)

@@ -189,16 +186,16 @@ def upload_manifest(
     if not tag:
         ref = ref.with_digest(digest)

-    response = _urlopen(
-        Request(
-            url=ref.manifest_url(),
-            method="PUT",
-            data=data,
-            headers={"Content-Type": oci_manifest["mediaType"]},
-        )
-    )
+    request = Request(
+        url=ref.manifest_url(),
+        method="PUT",
+        data=data,
+        headers={"Content-Type": oci_manifest["mediaType"]},
+    )
+    response = _urlopen(request)

-    spack.oci.opener.ensure_status(response, 201)
+    spack.oci.opener.ensure_status(request, response, 201)

     return digest, size

lib/spack/spack/oci/opener.py

@@ -310,19 +310,15 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
         # Login failed, avoid infinite recursion where we go back and
         # forth between auth server and registry
         if hasattr(req, "login_attempted"):
-            raise urllib.error.HTTPError(
-                req.full_url, code, f"Failed to login to {req.full_url}: {msg}", headers, fp
-            )
+            raise spack.util.web.DetailedHTTPError(
+                req, code, f"Failed to login: {msg}", headers, fp
+            )

         # On 401 Unauthorized, parse the WWW-Authenticate header
         # to determine what authentication is required
         if "WWW-Authenticate" not in headers:
-            raise urllib.error.HTTPError(
-                req.full_url,
-                code,
-                "Cannot login to registry, missing WWW-Authenticate header",
-                headers,
-                fp,
-            )
+            raise spack.util.web.DetailedHTTPError(
+                req, code, "Cannot login to registry, missing WWW-Authenticate header", headers, fp
+            )

         header_value = headers["WWW-Authenticate"]

@@ -330,8 +326,8 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
         try:
             challenge = get_bearer_challenge(parse_www_authenticate(header_value))
         except ValueError as e:
-            raise urllib.error.HTTPError(
-                req.full_url,
+            raise spack.util.web.DetailedHTTPError(
+                req,
                 code,
                 f"Cannot login to registry, malformed WWW-Authenticate header: {header_value}",
                 headers,

@@ -340,8 +336,8 @@ def http_error_401(self, req: Request, fp, code, msg, headers):

         # If there is no bearer challenge, we can't handle it
         if not challenge:
-            raise urllib.error.HTTPError(
-                req.full_url,
+            raise spack.util.web.DetailedHTTPError(
+                req,
                 code,
                 f"Cannot login to registry, unsupported authentication scheme: {header_value}",
                 headers,

@@ -356,8 +352,8 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
                 timeout=req.timeout,
             )
         except ValueError as e:
-            raise urllib.error.HTTPError(
-                req.full_url,
+            raise spack.util.web.DetailedHTTPError(
+                req,
                 code,
                 f"Cannot login to registry, failed to obtain bearer token: {e}",
                 headers,

@@ -412,13 +408,13 @@ def create_opener():
     return opener


-def ensure_status(response: HTTPResponse, status: int):
+def ensure_status(request: urllib.request.Request, response: HTTPResponse, status: int):
     """Raise an error if the response status is not the expected one."""
     if response.status == status:
         return

-    raise urllib.error.HTTPError(
-        response.geturl(), response.status, response.reason, response.info(), None
-    )
+    raise spack.util.web.DetailedHTTPError(
+        request, response.status, response.reason, response.info(), None
+    )
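DetailedHTTPError lives in spack.util.web and is not part of this compare view; below is a minimal sketch of the idea, assuming it subclasses urllib.error.HTTPError and adds the request's method and URL to the message (per the "oci: use pickleable errors" commit in this range, the real class also takes care of pickling, which plain HTTPError does not):

import urllib.error
from urllib.request import Request


class DetailedHTTPError(urllib.error.HTTPError):
    """HTTPError that remembers the originating request for better messages."""

    def __init__(self, req: Request, code: int, msg: str, hdrs, fp):
        self.req = req
        super().__init__(req.full_url, code, msg, hdrs, fp)

    def __str__(self):
        # e.g. "PUT https://registry.example/v2/... returned 401: unauthorized"
        return f"{self.req.get_method()} {self.req.full_url} returned {self.code}: {self.msg}"

Threading the Request through ensure_status is what makes this richer message possible: the response alone does not know which HTTP method was used.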

lib/spack/spack/package.py

@@ -49,6 +49,7 @@
 from spack.build_systems.nmake import NMakePackage
 from spack.build_systems.octave import OctavePackage
 from spack.build_systems.oneapi import (
+    INTEL_MATH_LIBRARIES,
     IntelOneApiLibraryPackage,
     IntelOneApiPackage,
     IntelOneApiStaticLibraryList,

lib/spack/spack/package_base.py

@@ -24,8 +24,9 @@
 import textwrap
 import time
 import traceback
+import typing
 import warnings
-from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type, TypeVar
+from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Tuple, Type, TypeVar, Union

 import llnl.util.filesystem as fsys
 import llnl.util.tty as tty

@@ -682,13 +683,13 @@ def __init__(self, spec):
     @classmethod
     def possible_dependencies(
         cls,
-        transitive=True,
-        expand_virtuals=True,
+        transitive: bool = True,
+        expand_virtuals: bool = True,
         depflag: dt.DepFlag = dt.ALL,
-        visited=None,
-        missing=None,
-        virtuals=None,
-    ):
+        visited: Optional[dict] = None,
+        missing: Optional[dict] = None,
+        virtuals: Optional[set] = None,
+    ) -> Dict[str, Set[str]]:
         """Return dict of possible dependencies of this package.

         Args:

@@ -2449,14 +2450,21 @@ def flatten_dependencies(spec, flat_dir):
         dep_files.merge(flat_dir + "/" + name)


-def possible_dependencies(*pkg_or_spec, **kwargs):
+def possible_dependencies(
+    *pkg_or_spec: Union[str, spack.spec.Spec, typing.Type[PackageBase]],
+    transitive: bool = True,
+    expand_virtuals: bool = True,
+    depflag: dt.DepFlag = dt.ALL,
+    missing: Optional[dict] = None,
+    virtuals: Optional[set] = None,
+) -> Dict[str, Set[str]]:
     """Get the possible dependencies of a number of packages.

     See ``PackageBase.possible_dependencies`` for details.
     """
     packages = []
     for pos in pkg_or_spec:
-        if isinstance(pos, PackageMeta):
+        if isinstance(pos, PackageMeta) and issubclass(pos, PackageBase):
             packages.append(pos)
             continue

@@ -2469,9 +2477,16 @@ def possible_dependencies(*pkg_or_spec, **kwargs):
         else:
             packages.append(pos.package_class)

-    visited = {}
+    visited: Dict[str, Set[str]] = {}
     for pkg in packages:
-        pkg.possible_dependencies(visited=visited, **kwargs)
+        pkg.possible_dependencies(
+            visited=visited,
+            transitive=transitive,
+            expand_virtuals=expand_virtuals,
+            depflag=depflag,
+            missing=missing,
+            virtuals=virtuals,
+        )

     return visited

lib/spack/spack/repo.py

@@ -490,7 +490,7 @@ def read(self, stream):
         self.index = spack.tag.TagIndex.from_json(stream, self.repository)

     def update(self, pkg_fullname):
-        self.index.update_package(pkg_fullname)
+        self.index.update_package(pkg_fullname.split(".")[-1])

     def write(self, stream):
         self.index.to_json(stream)
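The tag-index fix passes only the unqualified package name; the split(".")[-1] idiom simply drops any repo namespace prefix:

# A fully qualified name keeps only its last dot-separated component...
assert "builtin.mpich".split(".")[-1] == "mpich"
# ...and an already-unqualified name passes through unchanged.
assert "mpich".split(".")[-1] == "mpich"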

lib/spack/spack/schema/modules.py

@@ -18,9 +18,7 @@
 #: IS ADDED IMMEDIATELY BELOW THE MODULE TYPE ATTRIBUTE
 spec_regex = (
     r"(?!hierarchy|core_specs|verbose|hash_length|defaults|filter_hierarchy_specs|hide|"
-    r"whitelist|blacklist|"  # DEPRECATED: remove in 0.20.
-    r"include|exclude|"  # use these more inclusive/consistent options
-    r"projections|naming_scheme|core_compilers|all)(^\w[\w-]*)"
+    r"include|exclude|projections|naming_scheme|core_compilers|all)(^\w[\w-]*)"
 )

 #: Matches a valid name for a module set

@@ -46,14 +44,7 @@
                 "default": {},
                 "additionalProperties": False,
                 "properties": {
-                    # DEPRECATED: remove in 0.20.
-                    "environment_blacklist": {
-                        "type": "array",
-                        "default": [],
-                        "items": {"type": "string"},
-                    },
-                    # use exclude_env_vars instead
-                    "exclude_env_vars": {"type": "array", "default": [], "items": {"type": "string"}},
+                    "exclude_env_vars": {"type": "array", "default": [], "items": {"type": "string"}}
                 },
             },
             "template": {"type": "string"},

@@ -80,11 +71,6 @@
     "properties": {
         "verbose": {"type": "boolean", "default": False},
         "hash_length": {"type": "integer", "minimum": 0, "default": 7},
-        # DEPRECATED: remove in 0.20.
-        "whitelist": array_of_strings,
-        "blacklist": array_of_strings,
-        "blacklist_implicits": {"type": "boolean", "default": False},
-        # whitelist/blacklist have been replaced with include/exclude
         "include": array_of_strings,
         "exclude": array_of_strings,
         "exclude_implicits": {"type": "boolean", "default": False},

@@ -188,52 +174,3 @@
     "additionalProperties": False,
     "properties": properties,
 }
-
-
-# deprecated keys and their replacements
-old_to_new_key = {"exclude_implicits": "hide_implicits"}
-
-
-def update_keys(data, key_translations):
-    """Change blacklist/whitelist to exclude/include.
-
-    Arguments:
-        data (dict): data from a valid modules configuration.
-        key_translations (dict): A dictionary of keys to translate to
-            their respective values.
-
-    Return:
-        (bool) whether anything was changed in data
-    """
-    changed = False
-    if isinstance(data, dict):
-        keys = list(data.keys())
-        for key in keys:
-            value = data[key]
-            translation = key_translations.get(key)
-            if translation:
-                data[translation] = data.pop(key)
-                changed = True
-            changed |= update_keys(value, key_translations)
-    elif isinstance(data, list):
-        for elt in data:
-            changed |= update_keys(elt, key_translations)
-    return changed
-
-
-def update(data):
-    """Update the data in place to remove deprecated properties.
-
-    Args:
-        data (dict): dictionary to be updated
-
-    Returns:
-        True if data was changed, False otherwise
-    """
-    # translate blacklist/whitelist to exclude/include
-    return update_keys(data, old_to_new_key)

lib/spack/spack/schema/packages.py

@@ -69,6 +69,8 @@
     "patternProperties": {r"\w+": {}},
 }

+REQUIREMENT_URL = "https://spack.readthedocs.io/en/latest/packages_yaml.html#package-requirements"
+
 #: Properties for inclusion in other schemas
 properties = {
     "packages": {

@@ -117,7 +119,7 @@
                     "properties": ["version"],
                     "message": "setting version preferences in the 'all' section of packages.yaml "
                     "is deprecated and will be removed in v0.22\n\n\tThese preferences "
-                    "will be ignored by Spack. You can set them only in package specific sections "
+                    "will be ignored by Spack. You can set them only in package-specific sections "
                     "of the same file.\n",
                     "error": False,
                 },

@@ -162,10 +164,14 @@
                 },
                 "deprecatedProperties": {
                     "properties": ["target", "compiler", "providers"],
-                    "message": "setting compiler, target or provider preferences in a package "
-                    "specific section of packages.yaml is deprecated, and will be removed in "
-                    "v0.22.\n\n\tThese preferences will be ignored by Spack. You "
-                    "can set them only in the 'all' section of the same file.\n",
+                    "message": "setting 'compiler:', 'target:' or 'provider:' preferences in "
+                    "a package-specific section of packages.yaml is deprecated, and will be "
+                    "removed in v0.22.\n\n\tThese preferences will be ignored by Spack, and "
+                    "can be set only in the 'all' section of the same file. "
+                    "You can run:\n\n\t\t$ spack audit configs\n\n\tto get better diagnostics, "
+                    "including files:lines where the deprecated attributes are used.\n\n"
+                    "\tUse requirements to enforce conditions on specific packages: "
+                    f"{REQUIREMENT_URL}\n",
                     "error": False,
                 },
             }

lib/spack/spack/solver/asp.py

@@ -12,6 +12,7 @@
 import pprint
 import re
 import types
+import typing
 import warnings
 from typing import Callable, Dict, List, NamedTuple, Optional, Sequence, Set, Tuple, Union

@@ -379,7 +380,7 @@ def check_packages_exist(specs):
     for spec in specs:
         for s in spec.traverse():
             try:
-                check_passed = repo.exists(s.name) or repo.is_virtual(s.name)
+                check_passed = repo.repo_for_pkg(s).exists(s.name) or repo.is_virtual(s.name)
             except Exception as e:
                 msg = "Cannot find package: {0}".format(str(e))
                 check_passed = False

@@ -713,7 +714,7 @@ def _get_cause_tree(
         (condition_id, set_id) in which the latter idea means that the condition represented by
         the former held in the condition set represented by the latter.
     """
-    seen = set(seen) | set(cause)
+    seen.add(cause)
     parents = [c for e, c in condition_causes if e == cause and c not in seen]
     local = "required because %s " % conditions[cause[0]]

@@ -812,7 +813,14 @@ def on_model(model):
             errors = sorted(
                 [(int(priority), msg, args) for priority, msg, *args in error_args], reverse=True
             )
-            msg = self.message(errors)
+            try:
+                msg = self.message(errors)
+            except Exception as e:
+                msg = (
+                    f"unexpected error during concretization [{str(e)}]. "
+                    f"Please report a bug at https://github.com/spack/spack/issues"
+                )
+                raise spack.error.SpackError(msg)
             raise UnsatisfiableSpecError(msg)

@@ -1006,14 +1014,6 @@ def on_model(model):
             # record the possible dependencies in the solve
             result.possible_dependencies = setup.pkgs

-            # print any unknown functions in the model
-            for sym in best_model:
-                if sym.name not in ("attr", "error", "opt_criterion"):
-                    tty.debug(
-                        "UNKNOWN SYMBOL: %s(%s)"
-                        % (sym.name, ", ".join([str(s) for s in intermediate_repr(sym.arguments)]))
-                    )
-
         elif cores:
             result.control = self.control
             result.cores.extend(cores)

@@ -1118,11 +1118,8 @@ def __init__(self, tests=False):
         self.reusable_and_possible = ConcreteSpecsByHash()

-        # id for dummy variables
-        self._condition_id_counter = itertools.count()
-        self._trigger_id_counter = itertools.count()
+        self._id_counter = itertools.count()
         self._trigger_cache = collections.defaultdict(dict)
-        self._effect_id_counter = itertools.count()
         self._effect_cache = collections.defaultdict(dict)

         # Caches to optimize the setup phase of the solver

@@ -1136,6 +1133,7 @@ def __init__(self, tests=False):

         # Set during the call to setup
         self.pkgs = None
+        self.explicitly_required_namespaces = {}

     def pkg_version_rules(self, pkg):
         """Output declared versions of a package.

@@ -1148,7 +1146,9 @@ def key_fn(version):
             # Origins are sorted by "provenance" first, see the Provenance enumeration above
             return version.origin, version.idx

-        pkg = packagize(pkg)
+        if isinstance(pkg, str):
+            pkg = self.pkg_class(pkg)
+
         declared_versions = self.declared_versions[pkg.name]
         partially_sorted_versions = sorted(set(declared_versions), key=key_fn)

@@ -1340,7 +1340,10 @@ def _rule_from_str(
         )

     def pkg_rules(self, pkg, tests):
-        pkg = packagize(pkg)
+        pkg = self.pkg_class(pkg)
+
+        # Namespace of the package
+        self.gen.fact(fn.pkg_fact(pkg.name, fn.namespace(pkg.namespace)))

         # versions
         self.pkg_version_rules(pkg)

@@ -1518,7 +1521,7 @@ def condition(
         # In this way, if a condition can't be emitted but the exception is handled in the caller,
         # we won't emit partial facts.

-        condition_id = next(self._condition_id_counter)
+        condition_id = next(self._id_counter)

         self.gen.fact(fn.pkg_fact(named_cond.name, fn.condition(condition_id)))
         self.gen.fact(fn.condition_reason(condition_id, msg))

@@ -1526,7 +1529,7 @@ def condition(
         named_cond_key = (str(named_cond), transform_required)
         if named_cond_key not in cache:
-            trigger_id = next(self._trigger_id_counter)
+            trigger_id = next(self._id_counter)
             requirements = self.spec_clauses(named_cond, body=True, required_from=name)

             if transform_required:

@@ -1542,7 +1545,7 @@ def condition(
         cache = self._effect_cache[named_cond.name]
         imposed_spec_key = (str(imposed_spec), transform_imposed)
         if imposed_spec_key not in cache:
-            effect_id = next(self._effect_id_counter)
+            effect_id = next(self._id_counter)
             requirements = self.spec_clauses(imposed_spec, body=False, required_from=name)

             if transform_imposed:

@@ -1673,9 +1676,10 @@ def provider_requirements(self):
                 rules = self._rules_from_requirements(
                     virtual_str, requirements, kind=RequirementKind.VIRTUAL
                 )
-                self.emit_facts_from_requirement_rules(rules)
-                self.trigger_rules()
-                self.effect_rules()
+                if rules:
+                    self.emit_facts_from_requirement_rules(rules)
+                    self.trigger_rules()
+                    self.effect_rules()

     def emit_facts_from_requirement_rules(self, rules: List[RequirementRule]):
         """Generate facts to enforce requirements.

@@ -1802,15 +1806,12 @@ def external_packages(self):
             for local_idx, spec in enumerate(external_specs):
                 msg = "%s available as external when satisfying %s" % (spec.name, spec)

-                def external_imposition(input_spec, _):
-                    return [fn.attr("external_conditions_hold", input_spec.name, local_idx)]
+                def external_imposition(input_spec, requirements):
+                    return requirements + [
+                        fn.attr("external_conditions_hold", input_spec.name, local_idx)
+                    ]

-                self.condition(
-                    spec,
-                    spack.spec.Spec(spec.name),
-                    msg=msg,
-                    transform_imposed=external_imposition,
-                )
+                self.condition(spec, spec, msg=msg, transform_imposed=external_imposition)
                 self.possible_versions[spec.name].add(spec.version)

             self.gen.newline()

@@ -1832,7 +1833,13 @@ def preferred_variants(self, pkg_name):

             # perform validation of the variant and values
             spec = spack.spec.Spec(pkg_name)
-            spec.update_variant_validate(variant_name, values)
+            try:
+                spec.update_variant_validate(variant_name, values)
+            except (spack.variant.InvalidVariantValueError, KeyError, ValueError) as e:
+                tty.debug(
+                    f"[SETUP]: rejected {str(variant)} as a preference for {pkg_name}: {str(e)}"
+                )
+                continue

             for value in values:
                 self.variant_values_from_specs.add((pkg_name, variant.name, value))

@@ -1922,7 +1929,7 @@ class Head:
         node_flag = fn.attr("node_flag_set")
         node_flag_source = fn.attr("node_flag_source")
         node_flag_propagate = fn.attr("node_flag_propagate")
-        variant_propagate = fn.attr("variant_propagate")
+        variant_propagation_candidate = fn.attr("variant_propagation_candidate")

     class Body:
         node = fn.attr("node")

@@ -1936,7 +1943,7 @@ class Body:
         node_flag = fn.attr("node_flag")
         node_flag_source = fn.attr("node_flag_source")
         node_flag_propagate = fn.attr("node_flag_propagate")
-        variant_propagate = fn.attr("variant_propagate")
+        variant_propagation_candidate = fn.attr("variant_propagation_candidate")

     f = Body if body else Head

@@ -1971,7 +1978,7 @@ class Body:
             if not spec.concrete:
                 reserved_names = spack.directives.reserved_names
                 if not spec.virtual and vname not in reserved_names:
-                    pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
+                    pkg_cls = self.pkg_class(spec.name)
                     try:
                         variant_def, _ = pkg_cls.variants[vname]
                     except KeyError:

@@ -1985,7 +1992,9 @@ class Body:
                 clauses.append(f.variant_value(spec.name, vname, value))

                 if variant.propagate:
-                    clauses.append(f.variant_propagate(spec.name, vname, value, spec.name))
+                    clauses.append(
+                        f.variant_propagation_candidate(spec.name, vname, value, spec.name)
+                    )

                 # Tell the concretizer that this is a possible value for the
                 # variant, to account for things like int/str values where we

@@ -2088,7 +2097,7 @@ def define_package_versions_and_validate_preferences(
         """Declare any versions in specs not declared in packages."""
         packages_yaml = spack.config.get("packages")
         for pkg_name in possible_pkgs:
-            pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
+            pkg_cls = self.pkg_class(pkg_name)

             # All the versions from the corresponding package.py file. Since concepts
             # like being a "develop" version or being preferred exist only at a

@@ -2548,14 +2557,8 @@ def setup(
             reuse: list of concrete specs that can be reused
             allow_deprecated: if True adds deprecated versions into the solve
         """
-        self._condition_id_counter = itertools.count()
-
-        # preliminary checks
         check_packages_exist(specs)

-        # get list of all possible dependencies
-        self.possible_virtuals = set(x.name for x in specs if x.virtual)
-
         node_counter = _create_counter(specs, tests=self.tests)
         self.possible_virtuals = node_counter.possible_virtuals()
         self.pkgs = node_counter.possible_dependencies()

@@ -2568,6 +2571,10 @@ def setup(
             if missing_deps:
                 raise spack.spec.InvalidDependencyError(spec.name, missing_deps)

+        for node in spack.traverse.traverse_nodes(specs):
+            if node.namespace is not None:
+                self.explicitly_required_namespaces[node.name] = node.namespace
+
         # driver is used by all the functions below to add facts and
         # rules to generate an ASP program.
         self.gen = driver

@@ -2673,23 +2680,21 @@ def setup(
     def literal_specs(self, specs):
         for spec in specs:
             self.gen.h2("Spec: %s" % str(spec))
-            condition_id = next(self._condition_id_counter)
-            trigger_id = next(self._trigger_id_counter)
+            condition_id = next(self._id_counter)
+            trigger_id = next(self._id_counter)

             # Special condition triggered by "literal_solved"
             self.gen.fact(fn.literal(trigger_id))
             self.gen.fact(fn.pkg_fact(spec.name, fn.condition_trigger(condition_id, trigger_id)))
-            self.gen.fact(fn.condition_reason(condition_id, f"{spec} requested from CLI"))
+            self.gen.fact(fn.condition_reason(condition_id, f"{spec} requested explicitly"))

-            # Effect imposes the spec
             imposed_spec_key = str(spec), None
             cache = self._effect_cache[spec.name]
-            msg = (
-                "literal specs have different requirements. clear cache before computing literals"
-            )
-            assert imposed_spec_key not in cache, msg
-            effect_id = next(self._effect_id_counter)
-            requirements = self.spec_clauses(spec)
+            if imposed_spec_key in cache:
+                effect_id, requirements = cache[imposed_spec_key]
+            else:
+                effect_id = next(self._id_counter)
+                requirements = self.spec_clauses(spec)
             root_name = spec.name
             for clause in requirements:
                 clause_name = clause.args[0]

@@ -2779,6 +2784,13 @@ def _specs_from_requires(self, pkg_name, section):
             for s in spec_group[key]:
                 yield _spec_with_default_name(s, pkg_name)

+    def pkg_class(self, pkg_name: str) -> typing.Type["spack.package_base.PackageBase"]:
+        request = pkg_name
+        if pkg_name in self.explicitly_required_namespaces:
+            namespace = self.explicitly_required_namespaces[pkg_name]
+            request = f"{namespace}.{pkg_name}"
+        return spack.repo.PATH.get_pkg_class(request)
+

 class SpecBuilder:
     """Class with actions to rebuild a spec from ASP results."""

@@ -2790,9 +2802,11 @@ class SpecBuilder:
             r"^.*_propagate$",
             r"^.*_satisfies$",
             r"^.*_set$",
+            r"^dependency_holds$",
             r"^node_compiler$",
             r"^package_hash$",
             r"^root$",
+            r"^track_dependencies$",
             r"^variant_default_value_from_cli$",
             r"^virtual_node$",
             r"^virtual_root$",

@@ -2836,6 +2850,9 @@ def _arch(self, node):
             self._specs[node].architecture = arch
         return arch

+    def namespace(self, node, namespace):
+        self._specs[node].namespace = namespace
+
     def node_platform(self, node, platform):
         self._arch(node).platform = platform

@@ -3050,14 +3067,6 @@ def build_specs(self, function_tuples):

             action(*args)

-        # namespace assignment is done after the fact, as it is not
-        # currently part of the solve
-        for spec in self._specs.values():
-            if spec.namespace:
-                continue
-            repo = spack.repo.PATH.repo_for_pkg(spec)
-            spec.namespace = repo.namespace
-
         # fix flags after all specs are constructed
         self.reorder_flags()
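The new pkg_class helper only re-qualifies the lookup when an explicit namespace was requested on the command line; in isolation (a plain string return stands in for spack.repo.PATH.get_pkg_class):

# Filled in during setup() from nodes that carry an explicit namespace.
explicitly_required_namespaces = {"mpich": "builtin"}

def pkg_class(pkg_name: str):
    request = pkg_name
    if pkg_name in explicitly_required_namespaces:
        namespace = explicitly_required_namespaces[pkg_name]
        request = f"{namespace}.{pkg_name}"
    # stand-in for spack.repo.PATH.get_pkg_class(request)
    return request

print(pkg_class("mpich"))  # builtin.mpich -- namespace requested explicitly wins
print(pkg_class("zlib"))   # zlib -- unqualified lookup, first repo in the path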

lib/spack/spack/solver/concretize.lp

@@ -45,6 +45,9 @@
 :- attr("depends_on", node(min_dupe_id, Package), node(ID, _), "link"), ID != min_dupe_id, unification_set("root", node(min_dupe_id, Package)), internal_error("link dependency out of the root unification set").
 :- attr("depends_on", node(min_dupe_id, Package), node(ID, _), "run"), ID != min_dupe_id, unification_set("root", node(min_dupe_id, Package)), internal_error("run dependency out of the root unification set").
 
+% Namespaces are statically assigned by a package fact
+attr("namespace", node(ID, Package), Namespace) :- attr("node", node(ID, Package)), pkg_fact(Package, namespace(Namespace)).
+
 % Rules on "unification sets", i.e. on sets of nodes allowing a single configuration of any given package
 unify(SetID, PackageName) :- unification_set(SetID, node(_, PackageName)).
 :- 2 { unification_set(SetID, node(_, PackageName)) }, unify(SetID, PackageName).
@@ -695,6 +698,26 @@ requirement_group_satisfied(node(ID, Package), X) :-
     activate_requirement(node(ID, Package), X),
     requirement_group(Package, X).
 
+% Do not impose requirements, if the conditional requirement is not active
+do_not_impose(EffectID, node(ID, Package)) :-
+    trigger_condition_holds(TriggerID, node(ID, Package)),
+    pkg_fact(Package, condition_trigger(ConditionID, TriggerID)),
+    pkg_fact(Package, condition_effect(ConditionID, EffectID)),
+    requirement_group_member(ConditionID, Package, RequirementID),
+    not activate_requirement(node(ID, Package), RequirementID).
+
+% When we have a required provider, we need to ensure that the provider/2 facts respect
+% the requirement. This is particularly important for packages that could provide multiple
+% virtuals independently
+required_provider(Provider, Virtual)
+  :- requirement_group_member(ConditionID, Virtual, RequirementID),
+     condition_holds(ConditionID, _),
+     virtual(Virtual),
+     pkg_fact(Virtual, condition_effect(ConditionID, EffectID)),
+     imposed_constraint(EffectID, "node", Provider).
+
+:- provider(node(Y, Package), node(X, Virtual)), required_provider(Provider, Virtual), Package != Provider.
+
 % TODO: the following two choice rules allow the solver to add compiler
 % flags if their only source is from a requirement. This is overly-specific
 % and should use a more-generic approach like in https://github.com/spack/spack/pull/37180
@@ -757,23 +780,36 @@ node_has_variant(node(ID, Package), Variant) :-
     pkg_fact(Package, variant(Variant)),
     attr("node", node(ID, Package)).
 
-attr("variant_propagate", PackageNode, Variant, Value, Source) :-
+% Variant propagation is forwarded to dependencies
+attr("variant_propagation_candidate", PackageNode, Variant, Value, Source) :-
     attr("node", PackageNode),
     depends_on(ParentNode, PackageNode),
-    attr("variant_propagate", ParentNode, Variant, Value, Source),
-    not attr("variant_set", PackageNode, Variant).
+    attr("variant_value", node(_, Source), Variant, Value),
+    attr("variant_propagation_candidate", ParentNode, Variant, _, Source).
 
-attr("variant_value", node(ID, Package), Variant, Value) :-
-    attr("node", node(ID, Package)),
+% If the node is a candidate, and it has the variant and value,
+% then those variant and value should be propagated
+attr("variant_propagate", node(ID, Package), Variant, Value, Source) :-
+    attr("variant_propagation_candidate", node(ID, Package), Variant, Value, Source),
     node_has_variant(node(ID, Package), Variant),
-    attr("variant_propagate", node(ID, Package), Variant, Value, _),
-    pkg_fact(Package, variant_possible_value(Variant, Value)).
+    pkg_fact(Package, variant_possible_value(Variant, Value)),
+    not attr("variant_set", node(ID, Package), Variant).
 
+% Propagate the value, if there is the corresponding attribute
+attr("variant_value", PackageNode, Variant, Value) :- attr("variant_propagate", PackageNode, Variant, Value, _).
+
+% If a variant is propagated, we cannot have extraneous values (this is for multi valued variants)
+variant_is_propagated(PackageNode, Variant) :- attr("variant_propagate", PackageNode, Variant, _, _).
+:- variant_is_propagated(PackageNode, Variant),
+   attr("variant_value", PackageNode, Variant, Value),
+   not attr("variant_propagate", PackageNode, Variant, Value, _).
+
+% Cannot receive different values from different sources on the same variant
 error(100, "{0} and {1} cannot both propagate variant '{2}' to package {3} with values '{4}' and '{5}'", Source1, Source2, Variant, Package, Value1, Value2) :-
     attr("variant_propagate", node(X, Package), Variant, Value1, Source1),
     attr("variant_propagate", node(X, Package), Variant, Value2, Source2),
     node_has_variant(node(X, Package), Variant),
-    Value1 < Value2.
+    Value1 < Value2, Source1 < Source2.
 
 % a variant cannot be set if it is not a variant on the package
 error(100, "Cannot set variant '{0}' for package '{1}' because the variant condition cannot be satisfied for the given spec", Variant, Package)


@@ -213,6 +213,19 @@ def __call__(self, match):
     return clr.colorize(re.sub(_SEPARATORS, insert_color(), str(spec)) + "@.")
 
+OLD_STYLE_FMT_RE = re.compile(r"\${[A-Z]+}")
+
+
+def ensure_modern_format_string(fmt: str) -> None:
+    """Ensure that the format string does not contain old ${...} syntax."""
+    result = OLD_STYLE_FMT_RE.search(fmt)
+    if result:
+        raise SpecFormatStringError(
+            f"Format string `{fmt}` contains old syntax `{result.group(0)}`. "
+            "This is no longer supported."
+        )
+
+
 @lang.lazy_lexicographic_ordering
 class ArchSpec:
     """Aggregate the target platform, the operating system and the target microarchitecture."""
@@ -4360,6 +4373,7 @@ def format(self, format_string=DEFAULT_FORMAT, **kwargs):
         that accepts a string and returns another one
         """
+        ensure_modern_format_string(format_string)
         color = kwargs.get("color", False)
         transform = kwargs.get("transform", {})
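With the guard wired into Spec.format, an invalid format string fails fast instead of producing partially interpolated output. Illustratively (assuming a working Spack checkout; the exception is the SpecFormatStringError raised above):

    import spack.spec

    s = spack.spec.Spec("zlib@1.2.13")
    s.format("{name}-{version}")       # modern syntax: returns "zlib-1.2.13"
    s.format("${PACKAGE}-${VERSION}")  # old ${...} syntax: raises SpecFormatStringError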


@@ -4,7 +4,9 @@
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
 import filecmp
 import glob
+import gzip
 import io
+import json
 import os
 import platform
 import sys
@@ -199,6 +201,9 @@ def dummy_prefix(tmpdir):
     with open(data, "w") as f:
         f.write("hello world")
 
+    with open(p.join(".spack", "binary_distribution"), "w") as f:
+        f.write("{}")
+
     os.symlink("app", relative_app_link)
     os.symlink(app, absolute_app_link)
@@ -1022,7 +1027,9 @@ def test_tarball_common_prefix(dummy_prefix, tmpdir):
         bindist._tar_strip_component(tar, common_prefix)
 
         # Extract into prefix2
-        tar.extractall(path="prefix2")
+        tar.extractall(
+            path="prefix2", members=bindist._tar_strip_component(tar, common_prefix)
+        )
 
         # Verify files are all there at the correct level.
         assert set(os.listdir("prefix2")) == {"bin", "share", ".spack"}
@@ -1042,13 +1049,30 @@ def test_tarball_common_prefix(dummy_prefix, tmpdir):
 )
 
+def test_tarfile_missing_binary_distribution_file(tmpdir):
+    """A tarfile that does not contain a .spack/binary_distribution file cannot be
+    used to install."""
+    with tmpdir.as_cwd():
+        # An empty .spack dir.
+        with tarfile.open("empty.tar", mode="w") as tar:
+            tarinfo = tarfile.TarInfo(name="example/.spack")
+            tarinfo.type = tarfile.DIRTYPE
+            tar.addfile(tarinfo)
+
+        with pytest.raises(ValueError, match="missing binary_distribution file"):
+            bindist._ensure_common_prefix(tarfile.open("empty.tar", mode="r"))
+
+
 def test_tarfile_without_common_directory_prefix_fails(tmpdir):
     """A tarfile that only contains files without a common package directory
     should fail to extract, as we won't know where to put the files."""
     with tmpdir.as_cwd():
         # Create a broken tarball with just a file, no directories.
         with tarfile.open("empty.tar", mode="w") as tar:
-            tar.addfile(tarfile.TarInfo(name="example/file"), fileobj=io.BytesIO(b"hello"))
+            tar.addfile(
+                tarfile.TarInfo(name="example/.spack/binary_distribution"),
+                fileobj=io.BytesIO(b"hello"),
+            )
 
         with pytest.raises(ValueError, match="Tarball does not contain a common prefix"):
             bindist._ensure_common_prefix(tarfile.open("empty.tar", mode="r"))
@@ -1112,3 +1136,77 @@ def test_tarfile_of_spec_prefix(tmpdir):
     assert tar.getmember(f"{expected_prefix}/b_directory/file").isreg()
     assert tar.getmember(f"{expected_prefix}/c_directory").isdir()
     assert tar.getmember(f"{expected_prefix}/c_directory/file").isreg()
+
+
+@pytest.mark.parametrize("layout,expect_success", [(None, True), (1, True), (2, False)])
+def test_get_valid_spec_file(tmp_path, layout, expect_success):
+    # Test reading a spec.json file that does not specify a layout version.
+    spec_dict = Spec("example").to_dict()
+    path = tmp_path / "spec.json"
+    effective_layout = layout or 0  # If not specified it should be 0
+
+    # Add a layout version
+    if layout is not None:
+        spec_dict["buildcache_layout_version"] = layout
+
+    # Save to file
+    with open(path, "w") as f:
+        json.dump(spec_dict, f)
+
+    try:
+        spec_dict_disk, layout_disk = bindist._get_valid_spec_file(
+            str(path), max_supported_layout=1
+        )
+        assert expect_success
+        assert spec_dict_disk == spec_dict
+        assert layout_disk == effective_layout
+    except bindist.InvalidMetadataFile:
+        assert not expect_success
+
+
+def test_get_valid_spec_file_doesnt_exist(tmp_path):
+    with pytest.raises(bindist.InvalidMetadataFile, match="No such file"):
+        bindist._get_valid_spec_file(str(tmp_path / "no-such-file"), max_supported_layout=1)
+
+
+def test_get_valid_spec_file_gzipped(tmp_path):
+    # Create a gzipped file, contents don't matter
+    path = tmp_path / "spec.json.gz"
+    with gzip.open(path, "wb") as f:
+        f.write(b"hello")
+    with pytest.raises(
+        bindist.InvalidMetadataFile, match="Compressed spec files are not supported"
+    ):
+        bindist._get_valid_spec_file(str(path), max_supported_layout=1)
+
+
+@pytest.mark.parametrize("filename", ["spec.json", "spec.json.sig"])
+def test_get_valid_spec_file_no_json(tmp_path, filename):
+    tmp_path.joinpath(filename).write_text("not json")
+    with pytest.raises(bindist.InvalidMetadataFile):
+        bindist._get_valid_spec_file(str(tmp_path / filename), max_supported_layout=1)
+
+
+def test_download_tarball_with_unsupported_layout_fails(tmp_path, mutable_config, capsys):
+    layout_version = bindist.FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION + 1
+    spec = Spec("gmake@4.4.1%gcc@13.1.0 arch=linux-ubuntu23.04-zen2")
+    spec._mark_concrete()
+    spec_dict = spec.to_dict()
+    spec_dict["buildcache_layout_version"] = layout_version
+
+    # Setup a basic local build cache structure
+    path = (
+        tmp_path / bindist.build_cache_relative_path() / bindist.tarball_name(spec, ".spec.json")
+    )
+    path.parent.mkdir(parents=True)
+    with open(path, "w") as f:
+        json.dump(spec_dict, f)
+
+    # Configure as a mirror.
+    mirror_cmd("add", "test-mirror", str(tmp_path))
+
+    # Shouldn't be able "download" this.
+    assert bindist.download_tarball(spec, unsigned=True) is None
+
+    # And there should be a warning about an unsupported layout version.
+    assert f"Layout version {layout_version} is too new" in capsys.readouterr().err


@@ -25,7 +25,7 @@ def test_error_when_multiple_specs_are_given():
     assert "only takes one spec" in output
 
-@pytest.mark.parametrize("args", [("--", "/bin/bash", "-c", "echo test"), ("--",), ()])
+@pytest.mark.parametrize("args", [("--", "/bin/sh", "-c", "echo test"), ("--",), ()])
 @pytest.mark.usefixtures("config", "mock_packages", "working_env")
 def test_build_env_requires_a_spec(args):
     output = build_env(*args, fail_on_error=False)
@@ -35,7 +35,7 @@ def test_build_env_requires_a_spec(args):
 _out_file = "env.out"
 
-@pytest.mark.parametrize("shell", ["pwsh", "bat"] if sys.platform == "win32" else ["bash"])
+@pytest.mark.parametrize("shell", ["pwsh", "bat"] if sys.platform == "win32" else ["sh"])
 @pytest.mark.usefixtures("config", "mock_packages", "working_env")
 def test_dump(shell_as, shell, tmpdir):
     with tmpdir.as_cwd():


@@ -2000,7 +2000,7 @@ def test_ci_reproduce(
     install_script = os.path.join(working_dir.strpath, "install.sh")
     with open(install_script, "w") as fd:
-        fd.write("#!/bin/bash\n\n#fake install\nspack install blah\n")
+        fd.write("#!/bin/sh\n\n#fake install\nspack install blah\n")
 
     spack_info_file = os.path.join(working_dir.strpath, "spack_info.txt")
     with open(spack_info_file, "w") as fd:


@@ -0,0 +1,78 @@
+# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
+# Spack Project Developers. See the top-level COPYRIGHT file for details.
+#
+# SPDX-License-Identifier: (Apache-2.0 OR MIT)
+import pytest
+
+import spack.environment as ev
+from spack.main import SpackCommand, SpackCommandError
+
+deconcretize = SpackCommand("deconcretize")
+
+
+@pytest.fixture(scope="function")
+def test_env(mutable_mock_env_path, config, mock_packages):
+    ev.create("test")
+    with ev.read("test") as e:
+        e.add("a@2.0 foobar=bar ^b@1.0")
+        e.add("a@1.0 foobar=bar ^b@0.9")
+        e.concretize()
+        e.write()
+
+
+def test_deconcretize_dep(test_env):
+    with ev.read("test") as e:
+        deconcretize("-y", "b@1.0")
+        specs = [s for s, _ in e.concretized_specs()]
+
+    assert len(specs) == 1
+    assert specs[0].satisfies("a@1.0")
+
+
+def test_deconcretize_all_dep(test_env):
+    with ev.read("test") as e:
+        with pytest.raises(SpackCommandError):
+            deconcretize("-y", "b")
+        deconcretize("-y", "--all", "b")
+        specs = [s for s, _ in e.concretized_specs()]
+
+    assert len(specs) == 0
+
+
+def test_deconcretize_root(test_env):
+    with ev.read("test") as e:
+        output = deconcretize("-y", "--root", "b@1.0")
+        assert "No matching specs to deconcretize" in output
+        assert len(e.concretized_order) == 2
+
+        deconcretize("-y", "--root", "a@2.0")
+        specs = [s for s, _ in e.concretized_specs()]
+
+    assert len(specs) == 1
+    assert specs[0].satisfies("a@1.0")
+
+
+def test_deconcretize_all_root(test_env):
+    with ev.read("test") as e:
+        with pytest.raises(SpackCommandError):
+            deconcretize("-y", "--root", "a")
+
+        output = deconcretize("-y", "--root", "--all", "b")
+        assert "No matching specs to deconcretize" in output
+        assert len(e.concretized_order) == 2
+
+        deconcretize("-y", "--root", "--all", "a")
+        specs = [s for s, _ in e.concretized_specs()]
+
+    assert len(specs) == 0
+
+
+def test_deconcretize_all(test_env):
+    with ev.read("test") as e:
+        with pytest.raises(SpackCommandError):
+            deconcretize()
+        deconcretize("-y", "--all")
+        specs = [s for s, _ in e.concretized_specs()]
+
+    assert len(specs) == 0
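These tests double as documentation of the new command's contract: a spec matching more than one concrete spec must be paired with --all, --root restricts matching to environment roots, and -y skips confirmation. The same entry point the tests use can be driven directly (a sketch using the SpackCommand wrapper imported above; it assumes an active environment):

    from spack.main import SpackCommand

    deconcretize = SpackCommand("deconcretize")
    # Forget all concretizations of root spec 'a' in the active environment.
    deconcretize("-y", "--root", "--all", "a")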


@@ -284,7 +284,7 @@ def setup_error(pkg, env):
     _, err = capfd.readouterr()
     assert "cmake-client had issues!" in err
-    assert "Warning: couldn't load runtime environment" in err
+    assert "Warning: could not load runtime environment" in err
 
 def test_activate_adds_transitive_run_deps_to_path(install_mockery, mock_fetch, monkeypatch):
@@ -502,12 +502,12 @@ def test_env_activate_broken_view(
     # test that Spack detects the missing package and fails gracefully
     with spack.repo.use_repositories(mock_custom_repository):
         wrong_repo = env("activate", "--sh", "test")
-        assert "Warning: couldn't load runtime environment" in wrong_repo
+        assert "Warning: could not load runtime environment" in wrong_repo
         assert "Unknown namespace: builtin.mock" in wrong_repo
 
     # test replacing repo fixes it
     normal_repo = env("activate", "--sh", "test")
-    assert "Warning: couldn't load runtime environment" not in normal_repo
+    assert "Warning: could not load runtime environment" not in normal_repo
     assert "Unknown namespace: builtin.mock" not in normal_repo
@@ -3538,7 +3538,7 @@ def test_environment_created_in_users_location(mutable_mock_env_path, tmp_path):
     assert os.path.isdir(os.path.join(env_dir, dir_name))
 
-def test_environment_created_from_lockfile_has_view(mock_packages, tmpdir):
+def test_environment_created_from_lockfile_has_view(mock_packages, temporary_store, tmpdir):
     """When an env is created from a lockfile, a view should be generated for it"""
     env_a = str(tmpdir.join("a"))
     env_b = str(tmpdir.join("b"))


@@ -43,7 +43,7 @@ def test_find_gpg(cmd_name, version, tmpdir, mock_gnupghome, monkeypatch):
         f.write(TEMPLATE.format(version=version))
     fs.set_executable(fname)
 
-    monkeypatch.setitem(os.environ, "PATH", str(tmpdir))
+    monkeypatch.setenv("PATH", str(tmpdir))
     if version == "undetectable" or version.endswith("1.3.4"):
         with pytest.raises(spack.util.gpg.SpackGPGError):
             spack.util.gpg.init(force=True)
@@ -54,7 +54,7 @@ def test_find_gpg(cmd_name, version, tmpdir, mock_gnupghome, monkeypatch):
 def test_no_gpg_in_path(tmpdir, mock_gnupghome, monkeypatch, mutable_config):
-    monkeypatch.setitem(os.environ, "PATH", str(tmpdir))
+    monkeypatch.setenv("PATH", str(tmpdir))
     bootstrap("disable")
     with pytest.raises(RuntimeError):
         spack.util.gpg.init(force=True)


@@ -25,7 +25,7 @@ def parser():
 def print_buffer(monkeypatch):
     buffer = []
 
-    def _print(*args):
+    def _print(*args, **kwargs):
         buffer.extend(args)
 
     monkeypatch.setattr(spack.cmd.info.color, "cprint", _print, raising=False)


@@ -904,13 +904,12 @@ def test_install_help_cdash():
 @pytest.mark.disable_clean_stage_check
-def test_cdash_auth_token(tmpdir, mock_fetch, install_mockery, capfd):
+def test_cdash_auth_token(tmpdir, mock_fetch, install_mockery, monkeypatch, capfd):
     # capfd interferes with Spack's capturing
-    with tmpdir.as_cwd():
-        with capfd.disabled():
-            os.environ["SPACK_CDASH_AUTH_TOKEN"] = "asdf"
-            out = install("-v", "--log-file=cdash_reports", "--log-format=cdash", "a")
-            assert "Using CDash auth token from environment" in out
+    with tmpdir.as_cwd(), capfd.disabled():
+        monkeypatch.setenv("SPACK_CDASH_AUTH_TOKEN", "asdf")
+        out = install("-v", "--log-file=cdash_reports", "--log-format=cdash", "a")
+        assert "Using CDash auth token from environment" in out
 
 @pytest.mark.not_on_windows("Windows log_output logs phase header out of order")


@@ -11,7 +11,7 @@
 import llnl.util.filesystem as fs
 
 import spack.compiler
-import spack.compilers as compilers
+import spack.compilers
 import spack.spec
 import spack.util.environment
 from spack.compiler import Compiler
@@ -25,12 +25,14 @@ class MockOs:
         pass
 
     compiler_name = "gcc"
-    compiler_cls = compilers.class_for_compiler_name(compiler_name)
+    compiler_cls = spack.compilers.class_for_compiler_name(compiler_name)
     monkeypatch.setattr(compiler_cls, "cc_version", lambda x: version)
 
-    compiler_id = compilers.CompilerID(os=MockOs, compiler_name=compiler_name, version=None)
-    variation = compilers.NameVariation(prefix="", suffix="")
-    return compilers.DetectVersionArgs(
+    compiler_id = spack.compilers.CompilerID(
+        os=MockOs, compiler_name=compiler_name, version=None
+    )
+    variation = spack.compilers.NameVariation(prefix="", suffix="")
+    return spack.compilers.DetectVersionArgs(
         id=compiler_id, variation=variation, language="cc", path=path
     )
@@ -56,7 +58,7 @@ def test_multiple_conflicting_compiler_definitions(mutable_config):
     mutable_config.update_config("compilers", compiler_config)
 
     arch_spec = spack.spec.ArchSpec(("test", "test", "test"))
-    cmp = compilers.compiler_for_spec("clang@=0.0.0", arch_spec)
+    cmp = spack.compilers.compiler_for_spec("clang@=0.0.0", arch_spec)
     assert cmp.f77 == "f77"
@@ -64,7 +66,7 @@ def test_get_compiler_duplicates(config):
     # In this case there is only one instance of the specified compiler in
     # the test configuration (so it is not actually a duplicate), but the
     # method behaves the same.
-    cfg_file_to_duplicates = compilers.get_compiler_duplicates(
+    cfg_file_to_duplicates = spack.compilers.get_compiler_duplicates(
         "gcc@4.5.0", spack.spec.ArchSpec("cray-CNL-xeon")
     )
@@ -74,7 +76,7 @@ def test_get_compiler_duplicates(config):
 def test_all_compilers(config):
-    all_compilers = compilers.all_compilers()
+    all_compilers = spack.compilers.all_compilers()
     filtered = [x for x in all_compilers if str(x.spec) == "clang@=3.3"]
     filtered = [x for x in filtered if x.operating_system == "SuSE11"]
     assert len(filtered) == 1
@@ -88,7 +90,7 @@ def test_version_detection_is_empty(
     make_args_for_version, input_version, expected_version, expected_error
 ):
     args = make_args_for_version(version=input_version)
-    result, error = compilers.detect_version(args)
+    result, error = spack.compilers.detect_version(args)
     if not error:
         assert result.id.version == expected_version
@@ -104,7 +106,7 @@ def test_compiler_flags_from_config_are_grouped():
         "modules": None,
     }
 
-    compiler = compilers.compiler_from_dict(compiler_entry)
+    compiler = spack.compilers.compiler_from_dict(compiler_entry)
     assert any(x == "-foo-flag foo-val" for x in compiler.flags["cflags"])
@@ -253,8 +255,8 @@ def test_get_compiler_link_paths_load_env(working_env, monkeypatch, tmpdir):
     gcc = str(tmpdir.join("gcc"))
     with open(gcc, "w") as f:
         f.write(
-            """#!/bin/bash
-if [[ $ENV_SET == "1" && $MODULE_LOADED == "1" ]]; then
+            """#!/bin/sh
+if [ "$ENV_SET" = "1" ] && [ "$MODULE_LOADED" = "1" ]; then
   echo '"""
             + no_flag_output
             + """'
@@ -288,7 +290,7 @@ def flag_value(flag, spec):
     else:
         compiler_entry = copy(default_compiler_entry)
         compiler_entry["spec"] = spec
-    compiler = compilers.compiler_from_dict(compiler_entry)
+    compiler = spack.compilers.compiler_from_dict(compiler_entry)
     return getattr(compiler, flag)
@@ -659,8 +661,8 @@ def test_xl_r_flags():
     [("gcc@4.7.2", False), ("clang@3.3", False), ("clang@8.0.0", True)],
 )
 def test_detecting_mixed_toolchains(compiler_spec, expected_result, config):
-    compiler = compilers.compilers_for_spec(compiler_spec).pop()
-    assert compilers.is_mixed_toolchain(compiler) is expected_result
+    compiler = spack.compilers.compilers_for_spec(compiler_spec).pop()
+    assert spack.compilers.is_mixed_toolchain(compiler) is expected_result
 
 @pytest.mark.regression("14798,13733")
@@ -701,8 +703,8 @@ def test_compiler_get_real_version(working_env, monkeypatch, tmpdir):
     gcc = str(tmpdir.join("gcc"))
     with open(gcc, "w") as f:
         f.write(
-            """#!/bin/bash
-if [[ $CMP_ON == "1" ]]; then
+            """#!/bin/sh
+if [ "$CMP_ON" = "1" ]; then
   echo "$CMP_VER"
 fi
 """
@@ -739,6 +741,49 @@ def module(*args):
     assert version == test_version
 
+@pytest.mark.regression("42679")
+def test_get_compilers(config):
+    """Tests that we can select compilers whose versions differ only for a suffix."""
+    common = {
+        "flags": {},
+        "operating_system": "ubuntu23.10",
+        "target": "x86_64",
+        "modules": [],
+        "environment": {},
+        "extra_rpaths": [],
+    }
+    with_suffix = {
+        "spec": "gcc@13.2.0-suffix",
+        "paths": {
+            "cc": "/usr/bin/gcc-13.2.0-suffix",
+            "cxx": "/usr/bin/g++-13.2.0-suffix",
+            "f77": "/usr/bin/gfortran-13.2.0-suffix",
+            "fc": "/usr/bin/gfortran-13.2.0-suffix",
+        },
+        **common,
+    }
+    without_suffix = {
+        "spec": "gcc@13.2.0",
+        "paths": {
+            "cc": "/usr/bin/gcc-13.2.0",
+            "cxx": "/usr/bin/g++-13.2.0",
+            "f77": "/usr/bin/gfortran-13.2.0",
+            "fc": "/usr/bin/gfortran-13.2.0",
+        },
+        **common,
+    }
+
+    compilers = [{"compiler": without_suffix}, {"compiler": with_suffix}]
+
+    assert spack.compilers.get_compilers(
+        compilers, cspec=spack.spec.CompilerSpec("gcc@=13.2.0-suffix")
+    ) == [spack.compilers._compiler_from_config_entry(with_suffix)]
+
+    assert spack.compilers.get_compilers(
+        compilers, cspec=spack.spec.CompilerSpec("gcc@=13.2.0")
+    ) == [spack.compilers._compiler_from_config_entry(without_suffix)]
+
 def test_compiler_get_real_version_fails(working_env, monkeypatch, tmpdir):
     # Test variables
     test_version = "2.2.2"
@@ -747,8 +792,8 @@ def test_compiler_get_real_version_fails(working_env, monkeypatch, tmpdir):
     gcc = str(tmpdir.join("gcc"))
     with open(gcc, "w") as f:
         f.write(
-            """#!/bin/bash
-if [[ $CMP_ON == "1" ]]; then
+            """#!/bin/sh
+if [ "$CMP_ON" = "1" ]; then
   echo "$CMP_VER"
 fi
 """
@@ -801,7 +846,7 @@ def test_compiler_flags_use_real_version(working_env, monkeypatch, tmpdir):
     gcc = str(tmpdir.join("gcc"))
     with open(gcc, "w") as f:
         f.write(
-            """#!/bin/bash
+            """#!/bin/sh
 echo "4.4.4"
 """
         )  # Version for which c++11 flag is -std=c++0x


@@ -203,7 +203,9 @@ def change(self, changes=None):
         # TODO: in case tests using this fixture start failing.
         if sys.modules.get("spack.pkg.changing.changing"):
             del sys.modules["spack.pkg.changing.changing"]
+        if sys.modules.get("spack.pkg.changing.root"):
             del sys.modules["spack.pkg.changing.root"]
+        if sys.modules.get("spack.pkg.changing"):
             del sys.modules["spack.pkg.changing"]
 
         # Change the recipe
@@ -349,6 +351,9 @@ def test_compiler_flags_differ_identical_compilers(self):
         spec.concretize()
         assert spec.satisfies("cflags=-O2")
 
+    @pytest.mark.only_clingo(
+        "Optional compiler propagation isn't deprecated for original concretizer"
+    )
     def test_concretize_compiler_flag_propagate(self):
         spec = Spec("hypre cflags=='-g' ^openblas")
         spec.concretize()
@@ -458,19 +463,54 @@ def test_concretize_two_virtuals_with_dual_provider_and_a_conflict(self):
     @pytest.mark.only_clingo(
         "Optional compiler propagation isn't deprecated for original concretizer"
     )
-    def test_concretize_propagate_disabled_variant(self):
-        """Test a package variant value was passed from its parent."""
-        spec = Spec("hypre~~shared ^openblas")
-        spec.concretize()
-
-        assert spec.satisfies("^openblas~shared")
+    @pytest.mark.parametrize(
+        "spec_str,expected_propagation",
+        [
+            ("hypre~~shared ^openblas+shared", [("hypre", "~shared"), ("openblas", "+shared")]),
+            # Propagates past a node that doesn't have the variant
+            ("hypre~~shared ^openblas", [("hypre", "~shared"), ("openblas", "~shared")]),
+            (
+                "ascent~~shared +adios2",
+                [("ascent", "~shared"), ("adios2", "~shared"), ("bzip2", "~shared")],
+            ),
+            # Propagates below a node that uses the other value explicitly
+            (
+                "ascent~~shared +adios2 ^adios2+shared",
+                [("ascent", "~shared"), ("adios2", "+shared"), ("bzip2", "~shared")],
+            ),
+            (
+                "ascent++shared +adios2 ^adios2~shared",
+                [("ascent", "+shared"), ("adios2", "~shared"), ("bzip2", "+shared")],
+            ),
+        ],
+    )
+    def test_concretize_propagate_disabled_variant(self, spec_str, expected_propagation):
+        """Tests various patterns of boolean variant propagation"""
+        spec = Spec(spec_str).concretized()
+        for key, expected_satisfies in expected_propagation:
+            assert spec[key].satisfies(expected_satisfies)
 
+    @pytest.mark.only_clingo(
+        "Optional compiler propagation isn't deprecated for original concretizer"
+    )
     def test_concretize_propagated_variant_is_not_passed_to_dependent(self):
         """Test a package variant value was passed from its parent."""
-        spec = Spec("hypre~~shared ^openblas+shared")
+        spec = Spec("ascent~~shared +adios2 ^adios2+shared")
         spec.concretize()
 
-        assert spec.satisfies("^openblas+shared")
+        assert spec.satisfies("^adios2+shared")
+        assert spec.satisfies("^bzip2~shared")
+
+    @pytest.mark.only_clingo(
+        "Optional compiler propagation isn't deprecated for original concretizer"
+    )
+    def test_concretize_propagate_specified_variant(self):
+        """Test that only the specified variant is propagated to the dependencies"""
+        spec = Spec("parent-foo-bar ~~foo")
+        spec.concretize()
+
+        assert spec.satisfies("~foo") and spec.satisfies("^dependency-foo-bar~foo")
+        assert spec.satisfies("+bar") and not spec.satisfies("^dependency-foo-bar+bar")
 
     @pytest.mark.only_clingo("Original concretizer is allowed to forego variant propagation")
     def test_concretize_propagate_multivalue_variant(self):
@@ -1461,6 +1501,30 @@ def test_sticky_variant_in_package(self):
         s = Spec("sticky-variant %clang").concretized()
         assert s.satisfies("%clang") and s.satisfies("~allow-gcc")
 
+    @pytest.mark.regression("42172")
+    @pytest.mark.only_clingo("Original concretizer cannot use sticky variants")
+    @pytest.mark.parametrize(
+        "spec,allow_gcc",
+        [
+            ("sticky-variant@1.0+allow-gcc", True),
+            ("sticky-variant@1.0~allow-gcc", False),
+            ("sticky-variant@1.0", False),
+        ],
+    )
+    def test_sticky_variant_in_external(self, spec, allow_gcc):
+        # setup external for sticky-variant+allow-gcc
+        config = {"externals": [{"spec": spec, "prefix": "/fake/path"}], "buildable": False}
+        spack.config.set("packages:sticky-variant", config)
+
+        maybe = llnl.util.lang.nullcontext if allow_gcc else pytest.raises
+        with maybe(spack.error.SpackError):
+            s = Spec("sticky-variant-dependent%gcc").concretized()
+
+        if allow_gcc:
+            assert s.satisfies("%gcc")
+            assert s["sticky-variant"].satisfies("+allow-gcc")
+            assert s["sticky-variant"].external
+
     @pytest.mark.only_clingo("Use case not supported by the original concretizer")
     def test_do_not_invent_new_concrete_versions_unless_necessary(self):
         # ensure we select a known satisfying version rather than creating
@@ -1566,7 +1630,9 @@ def test_installed_version_is_selected_only_for_reuse(
         assert not new_root["changing"].satisfies("@1.0")
 
     @pytest.mark.regression("28259")
-    def test_reuse_with_unknown_namespace_dont_raise(self, mock_custom_repository):
+    def test_reuse_with_unknown_namespace_dont_raise(
+        self, temporary_store, mock_custom_repository
+    ):
         with spack.repo.use_repositories(mock_custom_repository, override=False):
             s = Spec("c").concretized()
             assert s.namespace != "builtin.mock"
@@ -1577,8 +1643,8 @@ def test_reuse_with_unknown_namespace_dont_raise(self, mock_custom_repository):
         assert s.namespace == "builtin.mock"
 
     @pytest.mark.regression("28259")
-    def test_reuse_with_unknown_package_dont_raise(self, tmpdir, monkeypatch):
-        builder = spack.repo.MockRepositoryBuilder(tmpdir, namespace="myrepo")
+    def test_reuse_with_unknown_package_dont_raise(self, tmpdir, temporary_store, monkeypatch):
+        builder = spack.repo.MockRepositoryBuilder(tmpdir.mkdir("mock.repo"), namespace="myrepo")
         builder.add_package("c")
         with spack.repo.use_repositories(builder.root, override=False):
             s = Spec("c").concretized()
@@ -2142,6 +2208,33 @@ def test_reuse_python_from_cli_and_extension_from_db(self, mutable_database):
         assert with_reuse.dag_hash() == without_reuse.dag_hash()
 
+    @pytest.mark.regression("35536")
+    @pytest.mark.parametrize(
+        "spec_str,expected_namespaces",
+        [
+            # Single node with fully qualified namespace
+            ("builtin.mock.gmake", {"gmake": "builtin.mock"}),
+            # Dependency with fully qualified namespace
+            ("hdf5 ^builtin.mock.gmake", {"gmake": "builtin.mock", "hdf5": "duplicates.test"}),
+            ("hdf5 ^gmake", {"gmake": "duplicates.test", "hdf5": "duplicates.test"}),
+        ],
+    )
+    @pytest.mark.only_clingo("Uses specs requiring multiple gmake specs")
+    def test_select_lower_priority_package_from_repository_stack(
+        self, spec_str, expected_namespaces
+    ):
+        """Tests that a user can explicitly select a lower priority, fully qualified dependency
+        from cli.
+        """
+        # 'builtin.mock' and 'duplicates.test' share a 'gmake' package
+        additional_repo = os.path.join(spack.paths.repos_path, "duplicates.test")
+        with spack.repo.use_repositories(additional_repo, override=False):
+            s = Spec(spec_str).concretized()
+
+        for name, namespace in expected_namespaces.items():
+            assert s[name].concrete
+            assert s[name].namespace == namespace
+
 
 @pytest.fixture()
 def duplicates_test_repository():


@@ -16,8 +16,8 @@
 version_error_messages = [
     "Cannot satisfy 'fftw@:1.0' and 'fftw@1.1:",
     " required because quantum-espresso depends on fftw@:1.0",
-    " required because quantum-espresso ^fftw@1.1: requested from CLI",
-    " required because quantum-espresso ^fftw@1.1: requested from CLI",
+    " required because quantum-espresso ^fftw@1.1: requested explicitly",
+    " required because quantum-espresso ^fftw@1.1: requested explicitly",
 ]
 
 external_error_messages = [
@@ -30,15 +30,15 @@
         " which was not satisfied"
     ),
     " 'quantum-espresso+veritas' required",
-    " required because quantum-espresso+veritas requested from CLI",
+    " required because quantum-espresso+veritas requested explicitly",
 ]
 
 variant_error_messages = [
     "'fftw' required multiple values for single-valued variant 'mpi'",
     " Requested '~mpi' and '+mpi'",
     " required because quantum-espresso depends on fftw+mpi when +invino",
-    " required because quantum-espresso+invino ^fftw~mpi requested from CLI",
-    " required because quantum-espresso+invino ^fftw~mpi requested from CLI",
+    " required because quantum-espresso+invino ^fftw~mpi requested explicitly",
+    " required because quantum-espresso+invino ^fftw~mpi requested explicitly",
 ]
 
 external_config = {


@@ -504,3 +504,13 @@ def test_sticky_variant_accounts_for_packages_yaml(self):
         with spack.config.override("packages:sticky-variant", {"variants": "+allow-gcc"}):
             s = Spec("sticky-variant %gcc").concretized()
             assert s.satisfies("%gcc") and s.satisfies("+allow-gcc")
+
+    @pytest.mark.regression("41134")
+    @pytest.mark.only_clingo("Not backporting the fix to the old concretizer")
+    def test_default_preference_variant_different_type_does_not_error(self):
+        """Tests that a different type for an existing variant in the 'all:' section of
+        packages.yaml doesn't fail with an error.
+        """
+        with spack.config.override("packages:all", {"variants": "+foo"}):
+            s = Spec("a").concretized()
+        assert s.satisfies("foo=bar")


@@ -896,3 +896,69 @@ def test_requires_directive(concretize_scope, mock_packages):
     # This package can only be compiled with clang
     with pytest.raises(spack.error.SpackError, match="can only be compiled with Clang"):
         Spec("requires_clang").concretized()
+
+
+@pytest.mark.regression("42084")
+def test_requiring_package_on_multiple_virtuals(concretize_scope, mock_packages):
+    update_packages_config(
+        """
+    packages:
+      all:
+        providers:
+          scalapack: [netlib-scalapack]
+      blas:
+        require: intel-parallel-studio
+      lapack:
+        require: intel-parallel-studio
+      scalapack:
+        require: intel-parallel-studio
+    """
+    )
+    s = Spec("dla-future").concretized()
+
+    assert s["blas"].name == "intel-parallel-studio"
+    assert s["lapack"].name == "intel-parallel-studio"
+    assert s["scalapack"].name == "intel-parallel-studio"
+
+
+@pytest.mark.parametrize(
+    "spec_str,expected,not_expected",
+    [
+        (
+            "forward-multi-value +cuda cuda_arch=10 ^dependency-mv~cuda",
+            ["cuda_arch=10", "^dependency-mv~cuda"],
+            ["cuda_arch=11", "^dependency-mv cuda_arch=10", "^dependency-mv cuda_arch=11"],
+        ),
+        (
+            "forward-multi-value +cuda cuda_arch=10 ^dependency-mv+cuda",
+            ["cuda_arch=10", "^dependency-mv cuda_arch=10"],
+            ["cuda_arch=11", "^dependency-mv cuda_arch=11"],
+        ),
+        (
+            "forward-multi-value +cuda cuda_arch=11 ^dependency-mv+cuda",
+            ["cuda_arch=11", "^dependency-mv cuda_arch=11"],
+            ["cuda_arch=10", "^dependency-mv cuda_arch=10"],
+        ),
+        (
+            "forward-multi-value +cuda cuda_arch=10,11 ^dependency-mv+cuda",
+            ["cuda_arch=10,11", "^dependency-mv cuda_arch=10,11"],
+            [],
+        ),
+    ],
+)
+def test_forward_multi_valued_variant_using_requires(
+    spec_str, expected, not_expected, config, mock_packages
+):
+    """Tests that a package can forward multivalue variants to dependencies, using
+    `requires` directives of the form:
+
+        for _val in ("shared", "static"):
+            requires(f"^some-virtual-mv libs={_val}", when=f"libs={_val} ^some-virtual-mv")
+    """
+    s = Spec(spec_str).concretized()
+
+    for constraint in expected:
+        assert s.satisfies(constraint)
+
+    for constraint in not_expected:
+        assert not s.satisfies(constraint)
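The docstring above compresses the package-side idiom into two lines. Spelled out, a package forwarding a multi-valued variant to a dependency through requires directives could look like the following package.py sketch (a hypothetical recipe modeled on the mock forward-multi-value package, not a real builtin one):

    from spack.package import *

    class ForwardMultiValue(Package):
        """Forwards the cuda_arch variant to a dependency that also defines it."""

        variant("cuda", default=False, description="Build with CUDA")
        variant("cuda_arch", values=any_combination_of("10", "11"), when="+cuda")

        depends_on("dependency-mv")

        for _arch in ("10", "11"):
            requires(
                f"^dependency-mv cuda_arch={_arch}",
                when=f"cuda_arch={_arch} ^dependency-mv+cuda",
            )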


@@ -1239,11 +1239,11 @@ def test_user_config_path_is_default_when_env_var_is_empty(working_env):
     assert os.path.expanduser("~%s.spack" % os.sep) == spack.paths._get_user_config_path()
 
-def test_default_install_tree(monkeypatch):
+def test_default_install_tree(monkeypatch, default_config):
     s = spack.spec.Spec("nonexistent@x.y.z %none@a.b.c arch=foo-bar-baz")
     monkeypatch.setattr(s, "dag_hash", lambda: "abc123")
-    projection = spack.config.get("config:install_tree:projections:all", scope="defaults")
-    assert s.format(projection) == "foo-bar-baz/none-a.b.c/nonexistent-x.y.z-abc123"
+    _, _, projections = spack.store.parse_install_tree(spack.config.get("config"))
+    assert s.format(projections["all"]) == "foo-bar-baz/none-a.b.c/nonexistent-x.y.z-abc123"
 
 def test_local_config_can_be_disabled(working_env):


@@ -629,7 +629,7 @@ def platform_config():
     spack.config.add_default_platform_scope(spack.platforms.real_host().name)
 
-@pytest.fixture(scope="session")
+@pytest.fixture
 def default_config():
     """Isolates the default configuration from the user configs.
@@ -713,9 +713,6 @@ def configuration_dir(tmpdir_factory, linux_os):
         t.write(content)
     yield tmpdir
 
-    # Once done, cleanup the directory
-    shutil.rmtree(str(tmpdir))
-
 def _create_mock_configuration_scopes(configuration_dir):
     """Create the configuration scopes used in `config` and `mutable_config`."""
@@ -1953,17 +1950,5 @@ def pytest_runtest_setup(item):
 @pytest.fixture(scope="function")
 def disable_parallel_buildcache_push(monkeypatch):
-    class MockPool:
-        def map(self, func, args):
-            return [func(a) for a in args]
-
-        def starmap(self, func, args):
-            return [func(*a) for a in args]
-
-        def __enter__(self):
-            return self
-
-        def __exit__(self, *args):
-            pass
-
-    monkeypatch.setattr(spack.cmd.buildcache, "_make_pool", MockPool)
+    """Disable process pools in tests."""
+    monkeypatch.setattr(spack.cmd.buildcache, "_make_pool", spack.cmd.buildcache.NoPool)
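The fixture now reuses spack.cmd.buildcache.NoPool rather than defining a stand-in inline. Judging from the MockPool it replaces, NoPool presumably exposes the same sequential map/starmap/context-manager surface; reconstructed as a sketch (not NoPool's actual source):

    class SequentialPool:
        # Drop-in for multiprocessing.Pool that runs the work in-process.
        def map(self, func, args):
            return [func(a) for a in args]

        def starmap(self, func, args):
            return [func(*a) for a in args]

        def __enter__(self):
            return self

        def __exit__(self, *exc):
            pass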


@@ -1,14 +0,0 @@
-# DEPRECATED: remove this in v0.20
-# See `exclude.yaml` for the new syntax
-enable:
-  - lmod
-lmod:
-  core_compilers:
-    - 'clang@3.3'
-  hierarchy:
-    - mpi
-  blacklist:
-    - callpath
-  all:
-    autoload: direct


@@ -1,30 +0,0 @@
-# DEPRECATED: remove this in v0.20
-# See `alter_environment.yaml` for the new syntax
-enable:
-  - lmod
-lmod:
-  core_compilers:
-    - 'clang@3.3'
-  hierarchy:
-    - mpi
-  all:
-    autoload: none
-    filter:
-      environment_blacklist:
-        - CMAKE_PREFIX_PATH
-    environment:
-      set:
-        '{name}_ROOT': '{prefix}'
-  'platform=test target=x86_64':
-    environment:
-      set:
-        FOO: 'foo'
-      unset:
-        - BAR
-  'platform=test target=core2':
-    load:
-      - 'foo/bar'


@@ -1,12 +0,0 @@
-# DEPRECATED: remove this in v0.20
-# See `exclude.yaml` for the new syntax
-enable:
-  - tcl
-tcl:
-  whitelist:
-    - zmpi
-  blacklist:
-    - callpath
-    - mpi
-  all:
-    autoload: direct


@@ -1,25 +0,0 @@
-# DEPRECATED: remove this in v0.20
-# See `alter_environment.yaml` for the new syntax
-enable:
-  - tcl
-tcl:
-  all:
-    autoload: none
-    filter:
-      environment_blacklist:
-        - CMAKE_PREFIX_PATH
-    environment:
-      set:
-        '{name}_ROOT': '{prefix}'
-  'platform=test target=x86_64':
-    environment:
-      set:
-        FOO: 'foo'
-        OMPI_MCA_mpi_leave_pinned: '1'
-      unset:
-        - BAR
-  'platform=test target=core2':
-    load:
-      - 'foo/bar'


@@ -1,8 +0,0 @@
-# DEPRECATED: remove this in v0.20
-# See `exclude_implicits.yaml` for the new syntax
-enable:
-  - tcl
-tcl:
-  blacklist_implicits: true
-  all:
-    autoload: direct


@@ -4,7 +4,7 @@ tcl:
   all:
     autoload: none
     filter:
-      environment_blacklist:
+      exclude_env_vars:
        - CMAKE_PREFIX_PATH
   environment:
     set:


@@ -695,7 +695,7 @@ def test_removing_spec_from_manifest_with_exact_duplicates(
 @pytest.mark.regression("35298")
 @pytest.mark.only_clingo("Propagation not supported in the original concretizer")
-def test_variant_propagation_with_unify_false(tmp_path, mock_packages):
+def test_variant_propagation_with_unify_false(tmp_path, mock_packages, config):
     """Spack distributes concretizations to different processes, when unify:false is selected and
     the number of roots is 2 or more. When that happens, the specs to be concretized need to be
     properly reconstructed on the worker process, if variant propagation was requested.
@@ -778,3 +778,32 @@ def test_env_with_include_def_missing(mutable_mock_env_path, mock_packages):
     with e:
         with pytest.raises(UndefinedReferenceError, match=r"which does not appear"):
             e.concretize()
+
+
+@pytest.mark.regression("41292")
+def test_deconcretize_then_concretize_does_not_error(mutable_mock_env_path, mock_packages):
+    """Tests that, after having deconcretized a spec, we can reconcretize an environment which
+    has 2 or more user specs mapping to the same concrete spec.
+    """
+    mutable_mock_env_path.mkdir()
+    spack_yaml = mutable_mock_env_path / ev.manifest_name
+    spack_yaml.write_text(
+        """spack:
+  specs:
+  # These two specs concretize to the same hash
+  - c
+  - c@1.0
+  # Spec used to trigger the bug
+  - a
+  concretizer:
+    unify: true
+"""
+    )
+    e = ev.Environment(mutable_mock_env_path)
+    with e:
+        e.concretize()
+        e.deconcretize(spack.spec.Spec("a"), concrete=False)
+        e.concretize()
+
+    assert len(e.concrete_roots()) == 3
+    all_root_hashes = set(x.dag_hash() for x in e.concrete_roots())
+    assert len(all_root_hashes) == 2


@@ -12,10 +12,12 @@
 import llnl.util.filesystem as fs
 
 import spack.error
+import spack.mirror
 import spack.patch
 import spack.repo
 import spack.store
 import spack.util.spack_json as sjson
+from spack import binary_distribution
 from spack.package_base import (
     InstallError,
     PackageBase,
@@ -118,59 +120,25 @@ def remove_prefix(self):
         self.wrapped_rm_prefix()
 
-class MockStage:
-    def __init__(self, wrapped_stage):
-        self.wrapped_stage = wrapped_stage
-        self.test_destroyed = False
-
-    def __enter__(self):
-        self.create()
-        return self
-
-    def __exit__(self, exc_type, exc_val, exc_tb):
-        if exc_type is None:
-            self.destroy()
-
-    def destroy(self):
-        self.test_destroyed = True
-        self.wrapped_stage.destroy()
-
-    def create(self):
-        self.wrapped_stage.create()
-
-    def __getattr__(self, attr):
-        if attr == "wrapped_stage":
-            # This attribute may not be defined at some point during unpickling
-            raise AttributeError()
-        return getattr(self.wrapped_stage, attr)
-
 def test_partial_install_delete_prefix_and_stage(install_mockery, mock_fetch, working_env):
     s = Spec("canfail").concretized()
     instance_rm_prefix = s.package.remove_prefix
 
-    try:
-        s.package.remove_prefix = mock_remove_prefix
-        with pytest.raises(MockInstallError):
-            s.package.do_install()
-        assert os.path.isdir(s.package.prefix)
-        rm_prefix_checker = RemovePrefixChecker(instance_rm_prefix)
-        s.package.remove_prefix = rm_prefix_checker.remove_prefix
+    s.package.remove_prefix = mock_remove_prefix
+    with pytest.raises(MockInstallError):
+        s.package.do_install()
+    assert os.path.isdir(s.package.prefix)
+    rm_prefix_checker = RemovePrefixChecker(instance_rm_prefix)
+    s.package.remove_prefix = rm_prefix_checker.remove_prefix
 
-        # must clear failure markings for the package before re-installing it
-        spack.store.STORE.failure_tracker.clear(s, True)
+    # must clear failure markings for the package before re-installing it
+    spack.store.STORE.failure_tracker.clear(s, True)
 
-        s.package.set_install_succeed()
-        s.package.stage = MockStage(s.package.stage)
-        s.package.do_install(restage=True)
-        assert rm_prefix_checker.removed
-        assert s.package.stage.test_destroyed
-        assert s.package.spec.installed
-    finally:
-        s.package.remove_prefix = instance_rm_prefix
+    s.package.set_install_succeed()
+    s.package.do_install(restage=True)
+    assert rm_prefix_checker.removed
+    assert s.package.spec.installed
 
 @pytest.mark.disable_clean_stage_check
@@ -357,10 +325,8 @@ def test_partial_install_keep_prefix(install_mockery, mock_fetch, monkeypatch, w
     spack.store.STORE.failure_tracker.clear(s, True)
 
     s.package.set_install_succeed()
-    s.package.stage = MockStage(s.package.stage)
     s.package.do_install(keep_prefix=True)
     assert s.package.spec.installed
-    assert not s.package.stage.test_destroyed
 
 def test_second_install_no_overwrite_first(install_mockery, mock_fetch, monkeypatch):
@@ -644,3 +610,48 @@ def test_empty_install_sanity_check_prefix(
     spec = Spec("failing-empty-install").concretized()
     with pytest.raises(spack.build_environment.ChildError, match="Nothing was installed"):
         spec.package.do_install()
+
+
+def test_install_from_binary_with_missing_patch_succeeds(
+    temporary_store: spack.store.Store, mutable_config, tmp_path, mock_packages
+):
+    """If a patch is missing in the local package repository, but was present when building and
+    pushing the package to a binary cache, installation from that binary cache shouldn't error out
+    because of the missing patch."""
+    # Create a spec s with non-existing patches
+    s = Spec("trivial-install-test-package").concretized()
+    patches = ["a" * 64]
+    s_dict = s.to_dict()
+    s_dict["spec"]["nodes"][0]["patches"] = patches
+    s_dict["spec"]["nodes"][0]["parameters"]["patches"] = patches
+    s = Spec.from_dict(s_dict)
+
+    # Create an install dir for it
+    os.makedirs(os.path.join(s.prefix, ".spack"))
+    with open(os.path.join(s.prefix, ".spack", "spec.json"), "w") as f:
+        s.to_json(f)
+
+    # And register it in the database
+    temporary_store.db.add(s, directory_layout=temporary_store.layout, explicit=True)
+
+    # Push it to a binary cache
+    build_cache = tmp_path / "my_build_cache"
+    binary_distribution.push_or_raise(
+        s,
+        build_cache.as_uri(),
+        binary_distribution.PushOptions(unsigned=True, regenerate_index=True),
+    )
+
+    # Now re-install it.
+    s.package.do_uninstall()
+    assert not temporary_store.db.query_local_by_spec_hash(s.dag_hash())
+
+    # Source install: fails, we don't have the patch.
+    with pytest.raises(spack.error.SpecError, match="Couldn't find patch for package"):
+        s.package.do_install()
+
+    # Binary install: succeeds, we don't need the patch.
+    spack.mirror.add(spack.mirror.Mirror.from_local_path(str(build_cache)))
+    s.package.do_install(package_cache_only=True, dependencies_cache_only=True, unsigned=True)
+
+    assert temporary_store.db.query_local_by_spec_hash(s.dag_hash())


@@ -165,23 +165,19 @@ def test_install_msg(monkeypatch):
     assert inst.install_msg(name, pid, None) == expected
 
-def test_install_from_cache_errors(install_mockery, capsys):
-    """Test to ensure cover _install_from_cache errors."""
+def test_install_from_cache_errors(install_mockery):
+    """Test to ensure cover install from cache errors."""
     spec = spack.spec.Spec("trivial-install-test-package")
     spec.concretize()
     assert spec.concrete
 
     # Check with cache-only
-    with pytest.raises(SystemExit):
-        inst._install_from_cache(spec.package, True, True, False)
-    captured = str(capsys.readouterr())
-    assert "No binary" in captured
-    assert "found when cache-only specified" in captured
+    with pytest.raises(inst.InstallError, match="No binary found when cache-only was specified"):
+        spec.package.do_install(package_cache_only=True, dependencies_cache_only=True)
     assert not spec.package.installed_from_binary_cache
 
     # Check when don't expect to install only from binary cache
-    assert not inst._install_from_cache(spec.package, False, True, False)
+    assert not inst._install_from_cache(spec.package, explicit=True, unsigned=False)
    assert not spec.package.installed_from_binary_cache
@@ -192,7 +188,7 @@ def test_install_from_cache_ok(install_mockery, monkeypatch):
     monkeypatch.setattr(inst, "_try_install_from_binary_cache", _true)
     monkeypatch.setattr(spack.hooks, "post_install", _noop)
 
-    assert inst._install_from_cache(spec.package, True, True, False)
+    assert inst._install_from_cache(spec.package, explicit=True, unsigned=False)
 
 def test_process_external_package_module(install_mockery, monkeypatch, capfd):


@@ -9,10 +9,7 @@
 This just tests whether the right args are getting passed to make.
 """
 import os
-import shutil
 import sys
-import tempfile
-import unittest
 
 import pytest
@@ -20,110 +17,104 @@
 from spack.util.environment import path_put_first

 pytestmark = pytest.mark.skipif(
-    sys.platform == "win32",
-    reason="MakeExecutable \
-not supported on Windows",
+    sys.platform == "win32", reason="MakeExecutable not supported on Windows"
 )


-class MakeExecutableTest(unittest.TestCase):
-    def setUp(self):
-        self.tmpdir = tempfile.mkdtemp()
-
-        make_exe = os.path.join(self.tmpdir, "make")
-        with open(make_exe, "w") as f:
-            f.write("#!/bin/sh\n")
-            f.write('echo "$@"')
-        os.chmod(make_exe, 0o700)
-
-        path_put_first("PATH", [self.tmpdir])
-
-    def tearDown(self):
-        shutil.rmtree(self.tmpdir)
-
-    def test_make_normal(self):
-        make = MakeExecutable("make", 8)
-        self.assertEqual(make(output=str).strip(), "-j8")
-        self.assertEqual(make("install", output=str).strip(), "-j8 install")
-
-    def test_make_explicit(self):
-        make = MakeExecutable("make", 8)
-        self.assertEqual(make(parallel=True, output=str).strip(), "-j8")
-        self.assertEqual(make("install", parallel=True, output=str).strip(), "-j8 install")
-
-    def test_make_one_job(self):
-        make = MakeExecutable("make", 1)
-        self.assertEqual(make(output=str).strip(), "-j1")
-        self.assertEqual(make("install", output=str).strip(), "-j1 install")
-
-    def test_make_parallel_false(self):
-        make = MakeExecutable("make", 8)
-        self.assertEqual(make(parallel=False, output=str).strip(), "-j1")
-        self.assertEqual(make("install", parallel=False, output=str).strip(), "-j1 install")
-
-    def test_make_parallel_disabled(self):
-        make = MakeExecutable("make", 8)
-
-        os.environ["SPACK_NO_PARALLEL_MAKE"] = "true"
-        self.assertEqual(make(output=str).strip(), "-j1")
-        self.assertEqual(make("install", output=str).strip(), "-j1 install")
-
-        os.environ["SPACK_NO_PARALLEL_MAKE"] = "1"
-        self.assertEqual(make(output=str).strip(), "-j1")
-        self.assertEqual(make("install", output=str).strip(), "-j1 install")
-
-        # These don't disable (false and random string)
-        os.environ["SPACK_NO_PARALLEL_MAKE"] = "false"
-        self.assertEqual(make(output=str).strip(), "-j8")
-        self.assertEqual(make("install", output=str).strip(), "-j8 install")
-
-        os.environ["SPACK_NO_PARALLEL_MAKE"] = "foobar"
-        self.assertEqual(make(output=str).strip(), "-j8")
-        self.assertEqual(make("install", output=str).strip(), "-j8 install")
-
-        del os.environ["SPACK_NO_PARALLEL_MAKE"]
-
-    def test_make_parallel_precedence(self):
-        make = MakeExecutable("make", 8)
-
-        # These should work
-        os.environ["SPACK_NO_PARALLEL_MAKE"] = "true"
-        self.assertEqual(make(parallel=True, output=str).strip(), "-j1")
-        self.assertEqual(make("install", parallel=True, output=str).strip(), "-j1 install")
-
-        os.environ["SPACK_NO_PARALLEL_MAKE"] = "1"
-        self.assertEqual(make(parallel=True, output=str).strip(), "-j1")
-        self.assertEqual(make("install", parallel=True, output=str).strip(), "-j1 install")
-
-        # These don't disable (false and random string)
-        os.environ["SPACK_NO_PARALLEL_MAKE"] = "false"
-        self.assertEqual(make(parallel=True, output=str).strip(), "-j8")
-        self.assertEqual(make("install", parallel=True, output=str).strip(), "-j8 install")
-
-        os.environ["SPACK_NO_PARALLEL_MAKE"] = "foobar"
-        self.assertEqual(make(parallel=True, output=str).strip(), "-j8")
-        self.assertEqual(make("install", parallel=True, output=str).strip(), "-j8 install")
-
-        del os.environ["SPACK_NO_PARALLEL_MAKE"]
-
-    def test_make_jobs_env(self):
-        make = MakeExecutable("make", 8)
-        dump_env = {}
-        self.assertEqual(
-            make(output=str, jobs_env="MAKE_PARALLELISM", _dump_env=dump_env).strip(), "-j8"
-        )
-        self.assertEqual(dump_env["MAKE_PARALLELISM"], "8")
-
-    def test_make_jobserver(self):
-        make = MakeExecutable("make", 8)
-        os.environ["MAKEFLAGS"] = "--jobserver-auth=X,Y"
-        self.assertEqual(make(output=str).strip(), "")
-        self.assertEqual(make(parallel=False, output=str).strip(), "-j1")
-        del os.environ["MAKEFLAGS"]
-
-    def test_make_jobserver_not_supported(self):
-        make = MakeExecutable("make", 8, supports_jobserver=False)
-        os.environ["MAKEFLAGS"] = "--jobserver-auth=X,Y"
-        # Currently fallback on default job count, Maybe it should force -j1 ?
-        self.assertEqual(make(output=str).strip(), "-j8")
-        del os.environ["MAKEFLAGS"]
+@pytest.fixture(autouse=True)
+def make_executable(tmp_path, working_env):
+    make_exe = tmp_path / "make"
+    with open(make_exe, "w") as f:
+        f.write("#!/bin/sh\n")
+        f.write('echo "$@"')
+    os.chmod(make_exe, 0o700)
+
+    path_put_first("PATH", [tmp_path])
+
+
+def test_make_normal():
+    make = MakeExecutable("make", 8)
+    assert make(output=str).strip() == "-j8"
+    assert make("install", output=str).strip() == "-j8 install"
+
+
+def test_make_explicit():
+    make = MakeExecutable("make", 8)
+    assert make(parallel=True, output=str).strip() == "-j8"
+    assert make("install", parallel=True, output=str).strip() == "-j8 install"
+
+
+def test_make_one_job():
+    make = MakeExecutable("make", 1)
+    assert make(output=str).strip() == "-j1"
+    assert make("install", output=str).strip() == "-j1 install"
+
+
+def test_make_parallel_false():
+    make = MakeExecutable("make", 8)
+    assert make(parallel=False, output=str).strip() == "-j1"
+    assert make("install", parallel=False, output=str).strip() == "-j1 install"
+
+
+def test_make_parallel_disabled(monkeypatch):
+    make = MakeExecutable("make", 8)
+
+    monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "true")
+    assert make(output=str).strip() == "-j1"
+    assert make("install", output=str).strip() == "-j1 install"
+
+    monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "1")
+    assert make(output=str).strip() == "-j1"
+    assert make("install", output=str).strip() == "-j1 install"
+
+    # These don't disable (false and random string)
+    monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "false")
+    assert make(output=str).strip() == "-j8"
+    assert make("install", output=str).strip() == "-j8 install"
+
+    monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "foobar")
+    assert make(output=str).strip() == "-j8"
+    assert make("install", output=str).strip() == "-j8 install"
+
+
+def test_make_parallel_precedence(monkeypatch):
+    make = MakeExecutable("make", 8)
+
+    # These should work
+    monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "true")
+    assert make(parallel=True, output=str).strip() == "-j1"
+    assert make("install", parallel=True, output=str).strip() == "-j1 install"
+
+    monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "1")
+    assert make(parallel=True, output=str).strip() == "-j1"
+    assert make("install", parallel=True, output=str).strip() == "-j1 install"
+
+    # These don't disable (false and random string)
+    monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "false")
+    assert make(parallel=True, output=str).strip() == "-j8"
+    assert make("install", parallel=True, output=str).strip() == "-j8 install"
+
+    monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "foobar")
+    assert make(parallel=True, output=str).strip() == "-j8"
+    assert make("install", parallel=True, output=str).strip() == "-j8 install"
+
+
+def test_make_jobs_env():
+    make = MakeExecutable("make", 8)
+    dump_env = {}
+    assert make(output=str, jobs_env="MAKE_PARALLELISM", _dump_env=dump_env).strip() == "-j8"
+    assert dump_env["MAKE_PARALLELISM"] == "8"
+
+
+def test_make_jobserver(monkeypatch):
+    make = MakeExecutable("make", 8)
+    monkeypatch.setenv("MAKEFLAGS", "--jobserver-auth=X,Y")
+    assert make(output=str).strip() == ""
+    assert make(parallel=False, output=str).strip() == "-j1"
+
+
+def test_make_jobserver_not_supported(monkeypatch):
+    make = MakeExecutable("make", 8, supports_jobserver=False)
+    monkeypatch.setenv("MAKEFLAGS", "--jobserver-auth=X,Y")
+    # Currently fallback on default job count, Maybe it should force -j1 ?
+    assert make(output=str).strip() == "-j8"
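The rewrite above drops unittest-style setUp/tearDown and manual os.environ bookkeeping in favor of pytest fixtures and monkeypatch, which revert their changes automatically when each test finishes. A minimal standalone sketch of the pattern (env_sensitive is a made-up stand-in, not Spack code):

    import os

    def env_sensitive():
        # Stand-in for code whose behavior depends on an environment variable.
        return os.environ.get("SPACK_NO_PARALLEL_MAKE", "unset")

    def test_env_sensitive(monkeypatch):
        # setenv is undone automatically when the test returns, so no
        # tearDown or `del os.environ[...]` cleanup is needed.
        monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "true")
        assert env_sensitive() == "true"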


@@ -27,16 +27,13 @@
 ]


-def test_module_function_change_env(tmpdir, working_env):
-    src_file = str(tmpdir.join("src_me"))
-    with open(src_file, "w") as f:
-        f.write("export TEST_MODULE_ENV_VAR=TEST_SUCCESS\n")
+def test_module_function_change_env(tmp_path):
+    environb = {b"TEST_MODULE_ENV_VAR": b"TEST_FAIL", b"NOT_AFFECTED": b"NOT_AFFECTED"}
+    src_file = tmp_path / "src_me"
+    src_file.write_text("export TEST_MODULE_ENV_VAR=TEST_SUCCESS\n")

-    os.environ["NOT_AFFECTED"] = "NOT_AFFECTED"
-    module("load", src_file, module_template=". {0} 2>&1".format(src_file))
+    module("load", str(src_file), module_template=f". {src_file} 2>&1", environb=environb)

-    assert os.environ["TEST_MODULE_ENV_VAR"] == "TEST_SUCCESS"
-    assert os.environ["NOT_AFFECTED"] == "NOT_AFFECTED"
+    assert environb[b"TEST_MODULE_ENV_VAR"] == b"TEST_SUCCESS"
+    assert environb[b"NOT_AFFECTED"] == b"NOT_AFFECTED"


 def test_module_function_no_change(tmpdir):


@@ -14,7 +14,6 @@
 import spack.package_base
 import spack.schema.modules
 import spack.spec
-import spack.util.spack_yaml as syaml
 from spack.modules.common import UpstreamModuleIndex
 from spack.spec import Spec

@@ -191,26 +190,6 @@ def find_nothing(*args):
     spack.package_base.PackageBase.uninstall_by_spec(spec)


-@pytest.mark.parametrize(
-    "module_type, old_config,new_config",
-    [("tcl", "exclude_implicits.yaml", "hide_implicits.yaml")],
-)
-def test_exclude_include_update(module_type, old_config, new_config):
-    module_test_data_root = os.path.join(spack.paths.test_path, "data", "modules", module_type)
-    with open(os.path.join(module_test_data_root, old_config)) as f:
-        old_yaml = syaml.load(f)
-    with open(os.path.join(module_test_data_root, new_config)) as f:
-        new_yaml = syaml.load(f)
-
-    # ensure file that needs updating is translated to the right thing.
-    assert spack.schema.modules.update_keys(old_yaml, spack.schema.modules.old_to_new_key)
-    assert new_yaml == old_yaml
-
-    # ensure a file that doesn't need updates doesn't get updated
-    original_new_yaml = new_yaml.copy()
-    assert not spack.schema.modules.update_keys(new_yaml, spack.schema.modules.old_to_new_key)
-    assert original_new_yaml == new_yaml
-
-
 @pytest.mark.regression("37649")
 def test_check_module_set_name(mutable_config):
     """Tests that modules set name are validated correctly and an error is reported if the


@@ -425,40 +425,38 @@ def test_extend_context(self, modulefile_content, module_configuration):

     @pytest.mark.regression("4400")
     @pytest.mark.db
-    @pytest.mark.parametrize("config_name", ["hide_implicits", "exclude_implicits"])
-    def test_hide_implicits_no_arg(self, module_configuration, database, config_name):
-        module_configuration(config_name)
+    def test_hide_implicits_no_arg(self, module_configuration, database):
+        module_configuration("exclude_implicits")

         # mpileaks has been installed explicitly when setting up
         # the tests database
         mpileaks_specs = database.query("mpileaks")
         for item in mpileaks_specs:
             writer = writer_cls(item, "default")
-            assert not writer.conf.hidden
+            assert not writer.conf.excluded

         # callpath is a dependency of mpileaks, and has been pulled
         # in implicitly
         callpath_specs = database.query("callpath")
         for item in callpath_specs:
             writer = writer_cls(item, "default")
-            assert writer.conf.hidden
+            assert writer.conf.excluded

     @pytest.mark.regression("12105")
-    @pytest.mark.parametrize("config_name", ["hide_implicits", "exclude_implicits"])
-    def test_hide_implicits_with_arg(self, module_configuration, config_name):
-        module_configuration(config_name)
+    def test_hide_implicits_with_arg(self, module_configuration):
+        module_configuration("exclude_implicits")

         # mpileaks is defined as explicit with explicit argument set on writer
         mpileaks_spec = spack.spec.Spec("mpileaks")
         mpileaks_spec.concretize()
         writer = writer_cls(mpileaks_spec, "default", True)
-        assert not writer.conf.hidden
+        assert not writer.conf.excluded

         # callpath is defined as implicit with explicit argument set on writer
         callpath_spec = spack.spec.Spec("callpath")
         callpath_spec.concretize()
         writer = writer_cls(callpath_spec, "default", False)
-        assert writer.conf.hidden
+        assert writer.conf.excluded

     @pytest.mark.regression("9624")
     @pytest.mark.db


@@ -1517,3 +1517,9 @@ def test_edge_equality_does_not_depend_on_virtual_order():
     assert edge1 == edge2
     assert tuple(sorted(edge1.virtuals)) == edge1.virtuals
     assert tuple(sorted(edge2.virtuals)) == edge1.virtuals
+
+
+def test_old_format_strings_trigger_error(default_mock_concretization):
+    s = Spec("a").concretized()
+    with pytest.raises(SpecFormatStringError):
+        s.format("${PACKAGE}-${VERSION}-${HASH}")


@@ -147,7 +147,8 @@ def test_reverse_environment_modifications(working_env):
     reversal = to_reverse.reversed()

-    os.environ = start_env.copy()
+    os.environ.clear()
+    os.environ.update(start_env)
     print(os.environ)

     to_reverse.apply_modifications()
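The two-line replacement matters because rebinding the name (os.environ = ...) merely points it at a plain dict: later reads through os.environ see the new values, but the real process environment that child processes inherit is never updated through putenv/unsetenv. Mutating the existing mapping in place keeps both in sync; roughly:

    import os

    saved = dict(os.environ)
    # Wrong: `os.environ = saved.copy()` swaps in a plain dict and bypasses
    # putenv/unsetenv, so subprocesses keep seeing the stale environment.
    # Right: mutate the mapping object itself.
    os.environ.clear()
    os.environ.update(saved)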


@@ -89,8 +89,8 @@ def test_which_with_slash_ignores_path(tmpdir, working_env):
     assert exe.path == path


-def test_which(tmpdir):
-    os.environ["PATH"] = str(tmpdir)
+def test_which(tmpdir, monkeypatch):
+    monkeypatch.setenv("PATH", str(tmpdir))
     assert ex.which("spack-test-exe") is None

     with pytest.raises(ex.CommandNotFoundError):


@@ -11,6 +11,7 @@
 import spack.build_environment
 import spack.config
+import spack.error
 import spack.spec
 import spack.util.environment as environment
 import spack.util.prefix as prefix


@@ -330,8 +330,11 @@ def add_extra_search_paths(paths):
     for candidate_item in candidate_items:
         for directory in search_paths:
             exe = directory / candidate_item
-            if exe.is_file() and os.access(str(exe), os.X_OK):
-                return str(exe)
+            try:
+                if exe.is_file() and os.access(str(exe), os.X_OK):
+                    return str(exe)
+            except OSError:
+                pass

     if required:
         raise CommandNotFoundError("spack requires '%s'. Make sure it is in your path." % args[0])
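The try/except makes the executable search tolerant of PATH entries that cannot be probed at all (for example permission errors or over-long paths, which can raise OSError from is_file). A standalone sketch of the same pattern:

    import os
    from pathlib import Path

    def find_executable(name, search_paths):
        """Return the first executable `name` on `search_paths`, else None."""
        for directory in search_paths:
            exe = Path(directory) / name
            try:
                if exe.is_file() and os.access(str(exe), os.X_OK):
                    return str(exe)
            except OSError:
                # Unreadable or malformed entries should not abort the search.
                continue
        return None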


@@ -10,6 +10,7 @@
 import os
 import re
 import subprocess
+from typing import MutableMapping, Optional

 import llnl.util.tty as tty

@@ -21,8 +22,13 @@
 awk_cmd = r"""awk 'BEGIN{for(name in ENVIRON)""" r"""printf("%s=%s%c", name, ENVIRON[name], 0)}'"""


-def module(*args, **kwargs):
-    module_cmd = kwargs.get("module_template", "module " + " ".join(args))
+def module(
+    *args,
+    module_template: Optional[str] = None,
+    environb: Optional[MutableMapping[bytes, bytes]] = None,
+):
+    module_cmd = module_template or ("module " + " ".join(args))
+    environb = environb or os.environb

     if args[0] in module_change_commands:
         # Suppress module output

@@ -33,10 +39,10 @@ def module(*args, **kwargs):
             stderr=subprocess.STDOUT,
             shell=True,
             executable="/bin/bash",
+            env=environb,
         )

-        # In Python 3, keys and values of `environ` are byte strings.
-        environ = {}
+        new_environb = {}
         output = module_p.communicate()[0]

         # Loop over each environment variable key=value byte string

@@ -45,11 +51,11 @@ def module(*args, **kwargs):
             parts = entry.split(b"=", 1)
             if len(parts) != 2:
                 continue
-            environ[parts[0]] = parts[1]
+            new_environb[parts[0]] = parts[1]

         # Update os.environ with new dict
-        os.environ.clear()
-        os.environb.update(environ)  # novermin
+        environb.clear()
+        environb.update(new_environb)  # novermin
     else:
         # Simply execute commands that don't change state and return output
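Making the environment an explicit, injectable parameter is what lets test_module_function_change_env earlier in this diff pass its own dict instead of patching os.environb. Roughly, assuming a machine with a working `module` command (a sketch, not code from this diff):

    import os

    from spack.util.module_cmd import module

    env = {b"PATH": os.environb[b"PATH"]}
    module("load", "gcc", environb=env)
    # `env` now holds whatever the module command exported;
    # the real os.environb is left untouched.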


@@ -25,8 +25,8 @@ if ($_pa_set == 1) then
     eval set _pa_old_value='$'$_pa_varname
 endif

-# Do the actual prepending here, if it is a dir and not already in the path
-if ( -d $_pa_new_path && \:$_pa_old_value\: !~ *\:$_pa_new_path\:* ) then
+# Do the actual prepending here, if it is a dir and not first in the path
+if ( -d $_pa_new_path && $_pa_old_value\: !~ $_pa_new_path\:* ) then

     if ("x$_pa_old_value" == "x") then
         setenv $_pa_varname $_pa_new_path
     else
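This csh change, and the matching fish and bash changes below, relax the same guard: prepending is now skipped only when the directory is already the first entry, not when it appears anywhere in the path. That way, re-sourcing the setup script from a second Spack instance reliably moves that instance's directories to the front, at the cost of tolerating duplicates further down. The logic, sketched in Python:

    def pathadd(old_value, new_path, sep=":"):
        """Prepend new_path unless it is already the first entry."""
        entries = old_value.split(sep) if old_value else []
        if entries and entries[0] == new_path:
            return old_value  # already first, nothing to do
        return sep.join([new_path] + entries) if entries else new_path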


@@ -370,25 +370,25 @@ e4s-rocm-external-build:
 ########################################
 # GPU Testing Pipeline
 ########################################
-.gpu-tests:
-  extends: [ ".linux_x86_64_v3" ]
-  variables:
-    SPACK_CI_STACK_NAME: gpu-tests
+# .gpu-tests:
+#   extends: [ ".linux_x86_64_v3" ]
+#   variables:
+#     SPACK_CI_STACK_NAME: gpu-tests

-gpu-tests-generate:
-  extends: [ ".gpu-tests", ".generate-x86_64"]
-  image: ghcr.io/spack/ubuntu20.04-runner-x86_64:2023-01-01
+# gpu-tests-generate:
+#   extends: [ ".gpu-tests", ".generate-x86_64"]
+#   image: ghcr.io/spack/ubuntu20.04-runner-x86_64:2023-01-01

-gpu-tests-build:
-  extends: [ ".gpu-tests", ".build" ]
-  trigger:
-    include:
-      - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
-        job: gpu-tests-generate
-    strategy: depend
-  needs:
-    - artifacts: True
-      job: gpu-tests-generate
+# gpu-tests-build:
+#   extends: [ ".gpu-tests", ".build" ]
+#   trigger:
+#     include:
+#       - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
+#         job: gpu-tests-generate
+#     strategy: depend
+#   needs:
+#     - artifacts: True
+#       job: gpu-tests-generate

 ########################################
 # E4S OneAPI Pipeline


@@ -8,26 +8,27 @@ spack:
   definitions:
     - gcc_system_packages:
         - matrix:
-            - - gmake
-              - gmake@4.3
-              - gmake@4.3 cflags=-O3
+            - - zlib-ng
+              - zlib-ng@2.0.7
+              - zlib-ng@2.0.7 cflags=-O3
               - tcl
-              - tcl ^gmake@4.3 cflags=-O3
+              - tcl ^zlib-ng@2.0.7 cflags=-O3
               - hdf5
               - hdf5~mpi
               - hdf5+hl+mpi ^mpich
               - trilinos
               - trilinos +hdf5 ^hdf5+hl+mpi ^mpich
-              - gcc@12
+              - gcc@12.3.0
               - mpileaks
-              - lmod
+              - lmod@8.7.18
+              - environment-modules
               - macsio@1.1+scr ^scr@2.0.0~fortran ^silo~fortran ^hdf5~fortran
             - ['%gcc@11']
     - gcc_old_packages:
-        - gmake%gcc@10
+        - zlib-ng%gcc@10
     - clang_packages:
         - matrix:
-            - [gmake, tcl ^gmake@4.3]
+            - [zlib-ng, tcl ^zlib-ng@2.0.7]
            - ['%clang@14']
     - gcc_spack_built_packages:
         - matrix:


@@ -17,38 +17,37 @@ if ($?_sp_initializing) then
     endif
 endif
 setenv _sp_initializing true

-# If SPACK_ROOT is not set, we'll try to find it ourselves.
+# find SPACK_ROOT.
 # csh/tcsh don't have a built-in way to do this, but both keep files
 # they are sourcing open. We use /proc on linux and lsof on macs to
 # find this script's full path in the current process's open files.
+
+# figure out a command to list open files
+if (-d /proc/$$/fd) then
+    set _sp_lsof = "ls -l /proc/$$/fd"
+else
+    which lsof > /dev/null
+    if ($? == 0) then
+        set _sp_lsof = "lsof -p $$"
+    endif
+endif
+
+# filter this script out of list of open files
+if ( $?_sp_lsof ) then
+    set _sp_source_file = `$_sp_lsof | sed -e 's/^[^/]*//' | grep "/setup-env.csh"`
+endif
+
+# This script is in $SPACK_ROOT/share/spack; get the root with dirname
+if ($?_sp_source_file) then
+    set _sp_share_spack = `dirname "$_sp_source_file"`
+    set _sp_share = `dirname "$_sp_share_spack"`
+    setenv SPACK_ROOT `dirname "$_sp_share"`
+endif
+
 if (! $?SPACK_ROOT) then
-    # figure out a command to list open files
-    if (-d /proc/$$/fd) then
-        set _sp_lsof = "ls -l /proc/$$/fd"
-    else
-        which lsof > /dev/null
-        if ($? == 0) then
-            set _sp_lsof = "lsof -p $$"
-        endif
-    endif
-
-    # filter this script out of list of open files
-    if ( $?_sp_lsof ) then
-        set _sp_source_file = `$_sp_lsof | sed -e 's/^[^/]*//' | grep "/setup-env.csh"`
-    endif
-
-    # This script is in $SPACK_ROOT/share/spack; get the root with dirname
-    if ($?_sp_source_file) then
-        set _sp_share_spack = `dirname "$_sp_source_file"`
-        set _sp_share = `dirname "$_sp_share_spack"`
-        setenv SPACK_ROOT `dirname "$_sp_share"`
-    endif
-
-    if (! $?SPACK_ROOT) then
-        echo "==> Error: setup-env.csh couldn't figure out where spack lives."
-        echo "    Set SPACK_ROOT to the root of your spack installation and try again."
-        exit 1
-    endif
+    echo "==> Error: setup-env.csh couldn't figure out where spack lives."
+    echo "    Set SPACK_ROOT to the root of your spack installation and try again."
+    exit 1
 endif

 # Command aliases point at separate source files
# Command aliases point at separate source files # Command aliases point at separate source files


@@ -648,10 +648,10 @@ function spack_pathadd -d "Add path to specified variable (defaults to PATH)"
     # passed to regular expression matching (`string match -r`)
     set -l _a "$pa_oldvalue"

-    # skip path if it is already contained in the variable
+    # skip path if it is already the first in the variable
     # note spaces in regular expression: we're matching to a space delimited
     # list of paths
-    if not echo $_a | string match -q -r " *$pa_new_path *"
+    if not echo $_a | string match -q -r "^$pa_new_path *"
         if test -n "$pa_oldvalue"
             set $pa_varname $pa_new_path $pa_oldvalue
         else


@@ -214,9 +214,9 @@ _spack_pathadd() {
     # Do the actual prepending here.
     eval "_pa_oldvalue=\${${_pa_varname}:-}"

-    _pa_canonical=":$_pa_oldvalue:"
+    _pa_canonical="$_pa_oldvalue:"
     if [ -d "$_pa_new_path" ] && \
-       [ "${_pa_canonical#*:${_pa_new_path}:}" = "${_pa_canonical}" ];
+       [ "${_pa_canonical#$_pa_new_path:}" = "$_pa_canonical" ];
     then
         if [ -n "$_pa_oldvalue" ]; then
             eval "export $_pa_varname=\"$_pa_new_path:$_pa_oldvalue\""


@@ -401,7 +401,7 @@ _spack() {
     then
         SPACK_COMPREPLY="-h --help -H --all-help --color -c --config -C --config-scope -d --debug --timestamp --pdb -e --env -D --env-dir -E --no-env --use-env-repo -k --insecure -l --enable-locks -L --disable-locks -m --mock -b --bootstrap -p --profile --sorted-profile --lines -v --verbose --stacktrace --backtrace -V --version --print-shell-vars"
     else
-        SPACK_COMPREPLY="add arch audit blame bootstrap build-env buildcache cd change checksum ci clean clone commands compiler compilers concretize concretise config containerize containerise create debug dependencies dependents deprecate dev-build develop diff docs edit env extensions external fetch find gc gpg graph help info install license list load location log-parse maintainers make-installer mark mirror module patch pkg providers pydoc python reindex remove rm repo resource restage solve spec stage style tags test test-env tutorial undevelop uninstall unit-test unload url verify versions view"
+        SPACK_COMPREPLY="add arch audit blame bootstrap build-env buildcache cd change checksum ci clean clone commands compiler compilers concretize concretise config containerize containerise create debug deconcretize dependencies dependents deprecate dev-build develop diff docs edit env extensions external fetch find gc gpg graph help info install license list load location log-parse maintainers make-installer mark mirror module patch pkg providers pydoc python reindex remove rm repo resource restage solve spec stage style tags test test-env tutorial undevelop uninstall unit-test unload url verify versions view"
     fi
 }

@@ -937,6 +937,15 @@ _spack_debug_report() {
     SPACK_COMPREPLY="-h --help"
 }

+_spack_deconcretize() {
+    if $list_options
+    then
+        SPACK_COMPREPLY="-h --help --root -y --yes-to-all -a --all"
+    else
+        _all_packages
+    fi
+}
+
 _spack_dependencies() {
     if $list_options
     then

@@ -1267,7 +1276,7 @@ _spack_help() {
 _spack_info() {
     if $list_options
     then
-        SPACK_COMPREPLY="-h --help -a --all --detectable --maintainers --no-dependencies --no-variants --no-versions --phases --tags --tests --virtuals"
+        SPACK_COMPREPLY="-h --help -a --all --detectable --maintainers --no-dependencies --no-variants --no-versions --phases --tags --tests --virtuals --variants-by-name"
     else
         _all_packages
     fi


@@ -371,6 +371,7 @@ complete -c spack -n '__fish_spack_using_command_pos 0 ' -f -a containerize -d 'creates recipes to build images for different container runtimes'
 complete -c spack -n '__fish_spack_using_command_pos 0 ' -f -a containerise -d 'creates recipes to build images for different container runtimes'
 complete -c spack -n '__fish_spack_using_command_pos 0 ' -f -a create -d 'create a new package file'
 complete -c spack -n '__fish_spack_using_command_pos 0 ' -f -a debug -d 'debugging commands for troubleshooting Spack'
+complete -c spack -n '__fish_spack_using_command_pos 0 ' -f -a deconcretize -d 'remove specs from the concretized lockfile of an environment'
 complete -c spack -n '__fish_spack_using_command_pos 0 ' -f -a dependencies -d 'show dependencies of a package'
 complete -c spack -n '__fish_spack_using_command_pos 0 ' -f -a dependents -d 'show packages that depend on another'
 complete -c spack -n '__fish_spack_using_command_pos 0 ' -f -a deprecate -d 'replace one package with another via symlinks'

@@ -1290,6 +1291,18 @@ set -g __fish_spack_optspecs_spack_debug_report h/help
 complete -c spack -n '__fish_spack_using_command debug report' -s h -l help -f -a help
 complete -c spack -n '__fish_spack_using_command debug report' -s h -l help -d 'show this help message and exit'

+# spack deconcretize
+set -g __fish_spack_optspecs_spack_deconcretize h/help root y/yes-to-all a/all
+complete -c spack -n '__fish_spack_using_command_pos_remainder 0 deconcretize' -f -k -a '(__fish_spack_specs)'
+complete -c spack -n '__fish_spack_using_command deconcretize' -s h -l help -f -a help
+complete -c spack -n '__fish_spack_using_command deconcretize' -s h -l help -d 'show this help message and exit'
+complete -c spack -n '__fish_spack_using_command deconcretize' -l root -f -a root
+complete -c spack -n '__fish_spack_using_command deconcretize' -l root -d 'deconcretize only specific environment roots'
+complete -c spack -n '__fish_spack_using_command deconcretize' -s y -l yes-to-all -f -a yes_to_all
+complete -c spack -n '__fish_spack_using_command deconcretize' -s y -l yes-to-all -d 'assume "yes" is the answer to every confirmation request'
+complete -c spack -n '__fish_spack_using_command deconcretize' -s a -l all -f -a all
+complete -c spack -n '__fish_spack_using_command deconcretize' -s a -l all -d 'deconcretize ALL specs that match each supplied spec'
+
 # spack dependencies
 set -g __fish_spack_optspecs_spack_dependencies h/help i/installed t/transitive deptype= V/no-expand-virtuals
 complete -c spack -n '__fish_spack_using_command_pos_remainder 0 dependencies' -f -k -a '(__fish_spack_specs)'

@@ -1855,7 +1868,7 @@ complete -c spack -n '__fish_spack_using_command help' -l spec -f -a guide
 complete -c spack -n '__fish_spack_using_command help' -l spec -d 'help on the package specification syntax'

 # spack info
-set -g __fish_spack_optspecs_spack_info h/help a/all detectable maintainers no-dependencies no-variants no-versions phases tags tests virtuals
+set -g __fish_spack_optspecs_spack_info h/help a/all detectable maintainers no-dependencies no-variants no-versions phases tags tests virtuals variants-by-name
 complete -c spack -n '__fish_spack_using_command_pos 0 info' -f -a '(__fish_spack_packages)'
 complete -c spack -n '__fish_spack_using_command info' -s h -l help -f -a help
 complete -c spack -n '__fish_spack_using_command info' -s h -l help -d 'show this help message and exit'

@@ -1879,6 +1892,8 @@ complete -c spack -n '__fish_spack_using_command info' -l tests -f -a tests
 complete -c spack -n '__fish_spack_using_command info' -l tests -d 'output relevant build-time and stand-alone tests'
 complete -c spack -n '__fish_spack_using_command info' -l virtuals -f -a virtuals
 complete -c spack -n '__fish_spack_using_command info' -l virtuals -d 'output virtual packages'
+complete -c spack -n '__fish_spack_using_command info' -l variants-by-name -f -a variants_by_name
+complete -c spack -n '__fish_spack_using_command info' -l variants-by-name -d 'list variants in strict name order; don\'t group by condition'

 # spack install
 set -g __fish_spack_optspecs_spack_install h/help only= u/until= j/jobs= overwrite fail-fast keep-prefix keep-stage dont-restage use-cache no-cache cache-only use-buildcache= include-build-deps no-check-signature show-log-on-error source n/no-checksum deprecated v/verbose fake only-concrete add no-add f/file= clean dirty test= log-format= log-file= help-cdash cdash-upload-url= cdash-build= cdash-site= cdash-track= cdash-buildstamp= y/yes-to-all U/fresh reuse reuse-deps


@@ -16,7 +16,8 @@ RUN {% if os_package_update %}{{ os_packages_build.update }} \

 # What we want to install and how we want to install it
 # is specified in a manifest file (spack.yaml)
-RUN mkdir {{ paths.environment }} \
+RUN mkdir -p {{ paths.environment }} && \
+    set -o noclobber \
     {{ manifest }} > {{ paths.environment }}/spack.yaml

 # Install the software, remove unnecessary deps


@@ -0,0 +1,22 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class Adios2(Package):
"""This packagae has the variants shared and
bzip2, both defaulted to True"""
homepage = "https://example.com"
url = "https://example.com/adios2.tar.gz"
version("2.9.1", sha256="ddfa32c14494250ee8a48ef1c97a1bf6442c15484bbbd4669228a0f90242f4f9")
variant("shared", default=True, description="Build shared libraries")
variant("bzip2", default=True, description="Enable BZip2 compression")
depends_on("bzip2")


@@ -0,0 +1,21 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class Ascent(Package):
"""This packagae has the variants shared, defaulted
to True and adios2 defaulted to False"""
homepage = "https://github.com/Alpine-DAV/ascent"
url = "http://www.example.com/ascent-1.0.tar.gz"
version("0.9.2", sha256="44cd954aa5db478ab40042cd54fd6fcedf25000c3bb510ca23fcff8090531b91")
variant("adios2", default=False, description="Build Adios2 filter support")
variant("shared", default=True, description="Build Ascent as shared libs")
depends_on("adios2", when="+adios2")


@@ -0,0 +1,19 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class Bzip2(Package):
"""This packagae has the variants shared
defaulted to True"""
homepage = "https://example.com"
url = "https://example.com/bzip2-1.0.8tar.gz"
version("1.0.8", sha256="ab5a03176ee106d3f0fa90e381da478ddae405918153cca248e682cd0c4a2269")
variant("shared", default=True, description="Enables the build of shared libraries.")


@@ -0,0 +1,20 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class DependencyFooBar(Package):
"""This package has a variant "bar", which is False by default, and
variant "foo" which is True by default.
"""
homepage = "http://www.example.com"
url = "http://www.example.com/dependency-foo-bar-1.0.tar.gz"
version("1.0", md5="1234567890abcdefg1234567890098765")
variant("foo", default=True, description="")
variant("bar", default=False, description="")


@@ -0,0 +1,18 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class DependencyMv(Package):
"""Package providing a virtual dependency and with a multivalued variant."""
homepage = "http://www.example.com"
url = "http://www.example.com/foo-1.0.tar.gz"
version("1.0", md5="0123456789abcdef0123456789abcdef")
variant("cuda", default=False, description="Build with CUDA")
variant("cuda_arch", values=any_combination_of("10", "11"), when="+cuda")


@@ -0,0 +1,21 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class DlaFuture(Package):
"""A package that depends on 3 different virtuals, that might or might not be provided
by the same node.
"""
homepage = "http://www.example.com"
url = "http://www.example.com/dla-1.0.tar.gz"
version("1.0", md5="0123456789abcdef0123456789abcdef")
depends_on("blas")
depends_on("lapack")
depends_on("scalapack")


@@ -0,0 +1,23 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class ForwardMultiValue(Package):
"""A package that forwards the value of a multi-valued variant to a dependency"""
homepage = "http://www.llnl.gov"
url = "http://www.llnl.gov/mpileaks-1.0.tar.gz"
version("1.0", md5="0123456789abcdef0123456789abcdef")
variant("cuda", default=False, description="Build with CUDA")
variant("cuda_arch", values=any_combination_of("10", "11"), when="+cuda")
depends_on("dependency-mv")
requires("^dependency-mv cuda_arch=10", when="+cuda cuda_arch=10 ^dependency-mv+cuda")
requires("^dependency-mv cuda_arch=11", when="+cuda cuda_arch=11 ^dependency-mv+cuda")


@@ -13,6 +13,7 @@ class Gmake(Package):
     url = "https://ftpmirror.gnu.org/make/make-4.4.tar.gz"

     version("4.4", sha256="ce35865411f0490368a8fc383f29071de6690cbadc27704734978221f25e2bed")
+    version("3.0", sha256="ce35865411f0490368a8fc383f29071de6690cbadc27704734978221f25e2bed")

     def do_stage(self):
         mkdirp(self.stage.source_path)


@@ -0,0 +1,22 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class ParentFooBar(Package):
"""This package has a variant "bar", which is True by default, and depends on another
package which has the same variant defaulting to False.
"""
homepage = "http://www.example.com"
url = "http://www.example.com/parent-foo-bar-1.0.tar.gz"
version("1.0", md5="abcdefg0123456789abcdefghfedcba0")
variant("foo", default=True, description="")
variant("bar", default=True, description="")
depends_on("dependency-foo-bar")


@@ -0,0 +1,17 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class StickyVariantDependent(AutotoolsPackage):
"""Package with a sticky variant and a conflict"""
homepage = "http://www.example.com"
url = "http://www.example.com/a-1.0.tar.gz"
version("1.0", md5="0123456789abcdef0123456789abcdef")
depends_on("sticky-variant")
conflicts("%gcc", when="^sticky-variant~allow-gcc")


@@ -85,6 +85,11 @@ class Abinit(AutotoolsPackage):
     # libxml2
     depends_on("libxml2", when="@9:+libxml2")

+    # If the Intel suite is used for Lapack, it must be used for fftw and vice-versa
+    for _intel_pkg in INTEL_MATH_LIBRARIES:
+        requires(f"^[virtuals=fftw-api] {_intel_pkg}", when=f"^[virtuals=lapack] {_intel_pkg}")
+        requires(f"^[virtuals=lapack] {_intel_pkg}", when=f"^[virtuals=fftw-api] {_intel_pkg}")
+
     # Cannot ask for +scalapack if it does not depend on MPI
     conflicts("+scalapack", when="~mpi")

@@ -186,7 +191,8 @@ def configure_args(self):
         # BLAS/LAPACK/SCALAPACK-ELPA
         linalg = spec["lapack"].libs + spec["blas"].libs
-        if "^mkl" in spec:
+        is_using_intel_libraries = spec["lapack"].name in INTEL_MATH_LIBRARIES
+        if is_using_intel_libraries:
             linalg_flavor = "mkl"
         elif "@9:" in spec and "^openblas" in spec:
             linalg_flavor = "openblas"

@@ -207,7 +213,7 @@ def configure_args(self):
         oapp(f"--with-linalg-flavor={linalg_flavor}")

-        if "^mkl" in spec:
+        if is_using_intel_libraries:
             fftflavor = "dfti"
         else:
             if "+openmp" in spec:

@@ -218,7 +224,7 @@ def configure_args(self):
         oapp(f"--with-fft-flavor={fftflavor}")

         if "@:8" in spec:
-            if "^mkl" in spec:
+            if is_using_intel_libraries:
                 oapp(f"--with-fft-incs={spec['fftw-api'].headers.cpp_flags}")
                 oapp(f"--with-fft-libs={spec['fftw-api'].libs.ld_flags}")
             else:

@@ -229,7 +235,7 @@ def configure_args(self):
                 ]
             )
         else:
-            if "^mkl" in spec:
+            if is_using_intel_libraries:
                 options.extend(
                     [
                         f"FFT_CPPFLAGS={spec['fftw-api'].headers.cpp_flags}",


@@ -79,7 +79,7 @@ def cmake_args(self):
         ]
         args.append(self.define("CUDA_architecture_build_targets", arch_list))

-        if "^mkl" in self.spec:
+        if self.spec["blas"].name in INTEL_MATH_LIBRARIES:
             if self.version >= Version("3.8.0"):
                 args.append(self.define("AF_COMPUTE_LIBRARY", "Intel-MKL"))
             else:


@@ -48,7 +48,7 @@ def edit(self, spec, prefix):
         if spec["blas"].name == "openblas":
             env["OPENBLAS"] = "1"
-        if "^mkl" in spec:
+        elif spec["blas"].name in INTEL_MATH_LIBRARIES:
             env["MKL"] = "1"
             env["MKL_BASE"] = spec["mkl"].prefix.mkl
         else:


@@ -23,7 +23,7 @@ class Batchedblas(MakefilePackage):
     def edit(self, spec, prefix):
         CCFLAGS = [self.compiler.openmp_flag, "-I./", "-O3"]
         BLAS = ["-lm", spec["blas"].libs.ld_flags]
-        if not spec.satisfies("^mkl"):
+        if spec["blas"].name not in INTEL_MATH_LIBRARIES:
             CCFLAGS.append("-D_CBLAS_")
         if spec.satisfies("%intel"):
             CCFLAGS.extend(["-Os"])

View file

@@ -16,22 +16,28 @@ class Berkeleygw(MakefilePackage):

     maintainers("migueldiascosta")

+    version(
+        "3.1.0",
+        sha256="7e890a5faa5a6bb601aa665c73903b3af30df7bdd13ee09362b69793bbefa6d2",
+        url="https://app.box.com/shared/static/2bik75lrs85zt281ydbup2xa7i5594gy.gz",
+        expand=False,
+    )
     version(
         "3.0.1",
         sha256="7d8c2cc1ee679afb48efbdd676689d4d537226b50e13a049dbcb052aaaf3654f",
-        url="https://berkeley.box.com/shared/static/m1dgnhiemo47lhxczrn6si71bwxoxor8.gz",
+        url="https://app.box.com/shared/static/m1dgnhiemo47lhxczrn6si71bwxoxor8.gz",
         expand=False,
     )
     version(
         "3.0",
         sha256="ab411acead5e979fd42b8d298dbb0a12ce152e7be9eee0bb87e9e5a06a638e2a",
-        url="https://berkeley.box.com/shared/static/lp6hj4kxr459l5a6t05qfuzl2ucyo03q.gz",
+        url="https://app.box.com/shared/static/lp6hj4kxr459l5a6t05qfuzl2ucyo03q.gz",
         expand=False,
     )
     version(
         "2.1",
         sha256="31f3b643dd937350c3866338321d675d4a1b1f54c730b43ad74ae67e75a9e6f2",
-        url="https://berkeley.box.com/shared/static/ze3azi5vlyw7hpwvl9i5f82kaiid6g0x.gz",
+        url="https://app.box.com/shared/static/ze3azi5vlyw7hpwvl9i5f82kaiid6g0x.gz",
         expand=False,
     )


@@ -122,8 +122,9 @@ def pgo_train(self):
         # Run spack solve --fresh hdf5 with instrumented clingo.
         python_runtime_env = EnvironmentModifications()
-        for s in self.spec.traverse(deptype=("run", "link"), order="post"):
-            python_runtime_env.extend(spack.user_environment.environment_modifications_for_spec(s))
+        python_runtime_env.extend(
+            spack.user_environment.environment_modifications_for_specs(self.spec)
+        )
         python_runtime_env.unset("SPACK_ENV")
         python_runtime_env.unset("SPACK_PYTHON")
         self.spec["python"].command(


@@ -18,7 +18,7 @@ class Cpr(CMakePackage):
     version("1.9.2", sha256="3bfbffb22c51f322780d10d3ca8f79424190d7ac4b5ad6ad896de08dbd06bf31")

     depends_on("curl")
-    depends_on("git", when="build")
+    depends_on("git", type="build")

     def cmake_args(self):
         _force = "_FORCE" if self.spec.satisfies("@:1.9") else ""


@@ -40,7 +40,7 @@ def url_for_version(self, version):
     def configure_args(self):
         config_args = []

-        if "^mkl" in self.spec:
+        if self.spec["fftw-api"].name in INTEL_MATH_LIBRARIES:
             config_args.extend(
                 [
                     "--enable-mkl",

Some files were not shown because too many files have changed in this diff.