This roughly restores the order of operations from Spack 0.20,
where `AutotoolsPackage.setup_build_environment` would
override the env variable set in `setup_platform_environment` on
macOS.
When improving the error message, we started `#show`ing a lot
more symbols in the answer set, but we forgot to suppress the
debug messages warning about unknown symbols.
Improves the warning for deprecated preferences, and adds a configuration
audit that reports file and line details for the issues.
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
We have two ways to concretize now:
* `spack concretize` concretizes only the root specs that are not concrete in the environment.
* `spack concretize -f` eliminates all cached concretization data and reconcretizes the *entire* environment.
This PR adds `spack deconcretize`, which eliminates cached concretization data for a spec. This allows
users greater control over what is preserved from their `spack.lock` file and what is reused when not
using `spack concretize -f`. If you want to update a spec installed in your environment, you can call
`spack deconcretize` on it, and that spec and any relevant dependents will be removed from the lock file.
`spack deconcretize` has two options:
* `--root`: limits deconcretization to *specific* roots in the environment. You can use this to
deconcretize exactly one root in a `unify: false` environment; i.e., if `foo` and `bar` are both
roots and `foo` is a dependent of `bar`, `spack deconcretize --root bar` will *not* deconcretize `foo`.
* `--all`: deconcretize *all* specs that match the input spec. By default `spack deconcretize`
will complain about multiple matches, like `spack uninstall`.
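For illustration, a few invocations (package names here are placeholders):
```
$ spack deconcretize hdf5            # remove hdf5 and its dependents from the cached concretization
$ spack deconcretize --root hdf5     # only deconcretize the hdf5 root, not other roots depending on it
$ spack deconcretize --all py-numpy  # deconcretize every matching spec instead of erroring on multiple matches
```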
The `^mkl` pattern was used to refer to three packages
even though none of the software using it actually
depended on `mkl`.
This pattern, an instance of Hyrum's law, is now being
removed in favor of a more explicit one.
In this PR gromacs, abinit, lammps, and quantum-espresso
are modified.
Intel packages are also modified to provide "lapack"
and "blas" together.
And improve the error message (load vs unload).
Of course you could have some uninstalled dependency too, but as long as
it doesn't implement `setup_run_environment` etc., I don't think it hurts
to attempt to load the root anyway, given that failure to do so is a
warning, not a fatal error.
This changes variant display to use a much more legible format, and to use screen space
much better (particularly on narrow terminals). It also adds color to the variant display
to match other parts of `spack info`.
Descriptions and variant value lists that were frequently squished into a tiny column
before now have closer to the full terminal width.
This change also preserves any whitespace formatting present in `package.py`, so package
maintainers can make easier-to-read descriptions of variant values if they want. For
example, `gasnet` has had a nice description of the `conduits` variant for a while, but
it was wrapped and made illegible by `spack info`. That is now fixed and the original
newlines are kept.
Conditional variants are grouped by their when clauses by default, but if you do not
like the grouping, you can display all the variants in order with `--variants-by-name`.
I'm not sure when people will prefer this, but it makes it easier to tell that a
particular variant is/isn't there. I do think grouping by `when` is the better default.
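For example, to list variants flat by name instead of grouped by their `when` conditions:
```
$ spack info --variants-by-name gasnet
```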
This commit improves forward compatibility of Spack with newer build cache metadata formats.
Before this commit, invalid or unrecognized metadata would be a fatal error; now it just causes
a mirror to be skipped.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
The libevent release tarballs ship with a `configure` script generated by an old `libtool`. The `libtool` generated by `configure` is not compatible with `MACOSX_DEPLOYMENT_TARGET` > 10. Regenerating the `configure` scripts fixes the build on macOS.
Original configure contains:
```
case $host_os in
rhapsody* | darwin1.[012])
_lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;;
darwin1.*)
_lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
darwin*) # darwin 5.x on
# if running on 10.5 or later, the deployment target defaults
# to the OS version, if on x86, and 10.4, the deployment
# target defaults to 10.4. Don't you love it?
case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in
10.0,*86*-darwin8*|10.0,*-darwin[91]*)
_lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
10.[012][,.]*)
_lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
10.*)
_lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
esac
```
After re-running `autogen.sh`:
```
case $host_os in
rhapsody* | darwin1.[012])
_lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;;
darwin1.*)
_lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
darwin*)
case $MACOSX_DEPLOYMENT_TARGET,$host in
10.[012],*|,*powerpc*-darwin[5-8]*)
_lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
*)
_lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
esac
```
Before this PR, variants were not propagated to leaf nodes that could accept
the propagated value if some intermediate node couldn't accept it.
This PR fixes that issue by marking nodes as "candidates" for propagation
and by setting the variant only if it can be accepted by the node.
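As an illustration (hypothetical package and variant names, using the `++` propagation syntax):
```
$ spack spec root-pkg ++debug   # +debug now reaches every dependency that defines a debug variant,
                                # even when an intermediate dependency has no such variant
```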
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Use `major.minor.patch`, `major.minor`, `major` in tags
* Ensure `latest` is the semver largest version, and not "latest in time"
* Remove Ubuntu 18.04 from the list of images
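For example (illustrative image name and release version), a `0.21.2` release would be tagged along these lines:
```
spack/ubuntu-jammy:0.21.2
spack/ubuntu-jammy:0.21
spack/ubuntu-jammy:0
spack/ubuntu-jammy:latest   # only if 0.21.2 is the largest semver release
```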
Modify the packages.yaml schema so that soft-preferences on targets,
compilers and providers can only be specified under the "all" attribute.
This makes them effectively global preferences.
Version preferences instead can only be specified under a package
specific section.
If a preference attribute is found in a section where it should
not be, it will be ignored and a warning will be printed to the screen.
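A sketch of the resulting layout (the specific packages, compilers, providers, and versions are just examples):
```yaml
packages:
  all:
    compiler: [gcc, clang]
    providers:
      mpi: [mvapich2, openmpi]
    target: [x86_64]
  hdf5:
    version: [1.14.3, 1.12.2]
```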
Most queries will end up calling `spec.satisfies(query)` on everything in the DB, which
will cause Spack to ask whether the query spec is virtual if its name doesn't match the
target spec's. This can be expensive, because it can cause Spack to check if any new
virtuals showed up in *all* the packages it knows about. That can currently trigger
thousands of `stat()` calls.
We can avoid the virtual check for most successful queries if we consider that if there
*is* a match by name, the query spec *can't* be virtual. This PR adds an optimization to
the query loop to save any comparisons that would trigger a virtual check for last.
- [x] Add a `deferred` list to the `query()` loop.
- [x] First run through the `query()` loop *only* checks for name matches.
- [x] Query loop now returns early if there's a name match, skipping most `satisfies()` calls.
- [x] Second run through the `deferred` list only runs if query spec is virtual.
- [x] Fix up handling of concrete specs.
- [x] Add test for querying virtuals in DB.
- [x] Avoid allocating deferred if not necessary.
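A simplified sketch of the idea (not Spack's actual `query()` code; names are illustrative):
```python
def query(query_spec, specs_in_db, any_package_provides):
    """Return DB specs satisfying query_spec, deferring virtual checks.

    Specs whose name matches query_spec can never require a virtual check,
    so they are handled first; the rest are only scanned if query_spec
    turns out to be virtual.
    """
    results = []
    deferred = None  # allocated lazily, only if needed
    for spec in specs_in_db:
        if spec.name == query_spec.name:
            if spec.satisfies(query_spec):
                results.append(spec)
        else:
            if deferred is None:
                deferred = []
            deferred.append(spec)

    # The expensive "is this query virtual?" check runs at most once.
    if deferred and any_package_provides(query_spec.name):
        results.extend(s for s in deferred if s.satisfies(query_spec))
    return results
```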
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Currently there's some hacky logic in the AppleClang compiler that makes
it also accept `gfortran` as a fortran compiler if `flang` is not found.
This is guarded by `if sys.platform` checks so that it only applies to
Darwin.
But on Linux the feature of detecting mixed toolchains is highly
requested too, because it's rather annoying to run into a failed build of
`openblas` after dozens of minutes of compiling its dependencies, just
because clang doesn't have a fortran compiler.
In particular in CI where the system compilers may change during system
updates, it's typically impossible to fix compilers in a hand-written
compilers.yaml config file: the config will almost certainly be outdated
sooner or later, and maintaining one config file per target machine and
writing logic to select the correct config is rather undesirable too.
---
This PR introduces a flag `spack compiler find --mixed-toolchain` that
fills out missing `fc` and `f77` entries in `clang` / `apple-clang` by
picking the best matching `gcc`.
It is enabled by default on macOS, but not on Linux, matching current
behavior of `spack compiler find`.
The "best matching gcc" logic and compiler path updates are identical to
how compiler path dictionaries are currently flattened "horizontally"
(per compiler id). This just adds logic to do the same "vertically"
(across different compiler ids).
So, with this change on Ubuntu 22.04:
```
$ spack compiler find --mixed-toolchain
==> Added 6 new compilers to /home/harmen/.spack/linux/compilers.yaml
gcc@13.1.0 gcc@12.3.0 gcc@11.4.0 gcc@10.5.0 clang@16.0.0 clang@15.0.7
==> Compilers are defined in the following files:
/home/harmen/.spack/linux/compilers.yaml
```
you finally get:
```
compilers:
- compiler:
    spec: clang@=15.0.7
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags: {}
    operating_system: ubuntu23.04
    target: x86_64
    modules: []
    environment: {}
    extra_rpaths: []
- compiler:
    spec: clang@=16.0.0
    paths:
      cc: /usr/bin/clang-16
      cxx: /usr/bin/clang++-16
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags: {}
    operating_system: ubuntu23.04
    target: x86_64
    modules: []
    environment: {}
    extra_rpaths: []
```
The "best gcc" is automatically default system gcc, since it has no
suffixes / prefixes.
Add a new config section: `config:aliases`, which is a dictionary mapping aliases
to commands.
For instance:
```yaml
config:
  aliases:
    sp: spec -I
```
will define a new command `sp` that will execute `spec` with the `-I`
argument.
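With that in place, the two commands below are equivalent:
```
$ spack sp hdf5
$ spack spec -I hdf5
```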
Aliases cannot override existing commands, and this is ensured with a test.
We cannot currently alias subcommands. Spack will warn about any aliases
containing a space, but will not error, which leaves room for subcommand
aliases in the future.
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Test that setup_run_environment changes to CC/CXX/FC/F77 are dropped in build env
* compilers set in run env shouldn't impact build
Adds `drop` to EnvironmentModifications courtesy of @haampie, and uses
it to clear modifications of CC, CXX, F77 and FC made by
`setup_{,dependent_}run_environment` routines when producing an
environment in BUILD context.
* comment / style
* comment
---------
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
This adds a rather trivial context manager that lets you deduplicate repeated
arguments in directives, e.g.
```python
depends_on("py-x@1", when="@1", type=("build", "run"))
depends_on("py-x@2", when="@2", type=("build", "run"))
depends_on("py-x@3", when="@3", type=("build", "run"))
depends_on("py-x@4", when="@4", type=("build", "run"))
```
can be condensed to
```python
with default_args(type=("build", "run")):
    depends_on("py-x@1", when="@1")
    depends_on("py-x@2", when="@2")
    depends_on("py-x@3", when="@3")
    depends_on("py-x@4", when="@4")
```
The advantage is that it's clearer for humans; the downside is that it's less clear for type checkers due to type erasure.
Create chains of causation for error messages.
The current implementation is only completed for some of the many errors presented by the concretizer. The rest will need to be filled out over time, but this demonstrates the capability.
The basic idea is to associate conditions in the solver with one another in causal relationships, and to associate errors with the proximate causes of their facts in the condition graph. Then we can construct causal trees to explain errors, which will hopefully present users with useful information to avoid the error or report issues.
Technically, this is implemented as a secondary solve. The concretizer computes the optimal model, and if the optimal model contains an error, then a secondary solve computes causation information about the error(s) in the concretizer output.
Examples:
$ spack solve hdf5 ^cmake@3.0.1
==> Error: concretization failed for the following reasons:
1. Cannot satisfy 'cmake@3.0.1'
2. Cannot satisfy 'cmake@3.0.1'
required because hdf5 ^cmake@3.0.1 requested from CLI
3. Cannot satisfy 'cmake@3.18:' and 'cmake@3.0.1
required because hdf5 ^cmake@3.0.1 requested from CLI
required because hdf5 depends on cmake@3.18: when @1.13:
required because hdf5 ^cmake@3.0.1 requested from CLI
4. Cannot satisfy 'cmake@3.12:' and 'cmake@3.0.1
required because hdf5 depends on cmake@3.12:
required because hdf5 ^cmake@3.0.1 requested from CLI
required because hdf5 ^cmake@3.0.1 requested from CLI
$ spack spec cmake ^curl~ldap # <-- with curl configured non-buildable and an external with `+ldap`
==> Error: concretization failed for the following reasons:
1. Attempted to use external for 'curl' which does not satisfy any configured external spec
2. Attempted to build package curl which is not buildable and does not have a satisfying external
attr('variant_value', 'curl', 'ldap', 'True') is an external constraint for curl which was not satisfied
3. Attempted to build package curl which is not buildable and does not have a satisfying external
attr('variant_value', 'curl', 'gssapi', 'True') is an external constraint for curl which was not satisfied
4. Attempted to build package curl which is not buildable and does not have a satisfying external
'curl+ldap' is an external constraint for curl which was not satisfied
'curl~ldap' required
required because cmake ^curl~ldap requested from CLI
$ spack solve yambo+mpi ^hdf5~mpi
==> Error: concretization failed for the following reasons:
1. 'hdf5' required multiple values for single-valued variant 'mpi'
2. 'hdf5' required multiple values for single-valued variant 'mpi'
Requested '~mpi' and '+mpi'
required because yambo depends on hdf5+mpi when +mpi
required because yambo+mpi ^hdf5~mpi requested from CLI
required because yambo+mpi ^hdf5~mpi requested from CLI
3. 'hdf5' required multiple values for single-valued variant 'mpi'
Requested '~mpi' and '+mpi'
required because netcdf-c depends on hdf5+mpi when +mpi
required because netcdf-fortran depends on netcdf-c
required because yambo depends on netcdf-fortran
required because yambo+mpi ^hdf5~mpi requested from CLI
required because netcdf-fortran depends on netcdf-c@4.7.4: when @4.5.3:
required because yambo depends on netcdf-fortran
required because yambo+mpi ^hdf5~mpi requested from CLI
required because yambo depends on netcdf-c
required because yambo+mpi ^hdf5~mpi requested from CLI
required because yambo depends on netcdf-c+mpi when +mpi
required because yambo+mpi ^hdf5~mpi requested from CLI
required because yambo+mpi ^hdf5~mpi requested from CLI
Future work:
In addition to fleshing out the causes of other errors, I would like to find a way to associate different components of the error messages with different causes. In this example it's pretty easy to infer which part is which, but I'm not confident that will always be the case.
See the previous PR #34500 for discussion of how the condition chains are incomplete. In the future, we may need custom logic for individual attributes to associate some important choice rules with conditions such that clingo choices or other derivations can be part of the explanation.
---------
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>