PR #40929 reverted the argument parsing to make `spack --verbose
install` work again. It looks like `--verbose` is the only instance
where this kind of argument inheritance is used since all other commands
override arguments with the same name instead. For instance, `spack
--bootstrap clean` does not invoke `spack clean --bootstrap`.
Therefore, fix multi-line aliases again by parsing the resolved
arguments, and instead explicitly pass down `args.verbose` to commands.
This commit discards preferences with mismatched types, or that fail validation against a package, during concretization. The discarded values are logged as debug-level messages. It also adds a config audit to help users spot misconfigurations in `packages.yaml` preferences.
This roughly restores the order of operations from Spack 0.20,
where `AutotoolsPackage.setup_build_environment` would
override the env variable set in `setup_platform_environment` on
macOS.
When improving the error message, we started `#show`-ing a lot
more symbols in the answer set - but we forgot to suppress the
debug messages warning about unknown symbols.
* Permit packages that depend on Intel oneAPI packages to access sdk
* Implement and use IntelOneapiLibraryPackageWithSdk
* Restore libs property to IntelOneapiLibraryPackage
* Conform to style
* Provide new class to infrastructure
* Treat sdk/include as the main include
Improves the warning for deprecated preferences, and adds a configuration
audit to get file:line details of the issues.
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Tests didn't cover the new `--variants-by-name` parameter in #40998.
Add some parameterization to hit that.
This changeset makes me think that the main section-printing loop in `spack info` isn't
factored so well. It makes it difficult to pass different arguments to different helper
functions. I could break it out into if statements if folks think that would be cleaner.
We have two ways to concretize now:
* `spack concretize` concretizes only the root specs that are not concrete in the environment.
* `spack concretize -f` eliminates all cached concretization data and reconcretizes the *entire* environment.
This PR adds `spack deconcretize`, which eliminates cached concretization data for a spec. This allows
users greater control over what is preserved from their `spack.lock` file and what is reused when not
using `spack concretize -f`. If you want to update a spec installed in your environment, you can call
`spack deconcretize` on it, and that spec and any relevant dependents will be removed from the lock file.
`spack deconcretize` has two options:
* `--root`: limits deconcretization to *specific* roots in the environment. You can use this to
deconcretize exactly one root in a `unify: false` environment, i.e., if the root `foo` is a dependent
of the root `bar`, `spack deconcretize --root bar` will *not* deconcretize `foo`.
* `--all`: deconcretize *all* specs that match the input spec. By default `spack deconcretize`
will complain about multiple matches, like `spack uninstall`.
The ^mkl pattern was used to refer to three packages
even though none of the software using it actually
depended on "mkl".
This pattern, which follows Hyrum's law, is now being
removed in favor of a more explicit one.
In this PR gromacs, abinit, lammps, and quantum-espresso
are modified.
Intel packages are also modified to provide "lapack"
and "blas" together.
And improve the error message (load vs unload).
Of course you could have some uninstalled dependency too, but as long as
it doesn't implement `setup_run_environment` etc, I don't think it hurts
to attempt to load the root anyways, given that failure to do so is a
warning, not a fatal error.
This changes variant display to use a much more legible format, and to use screen space
much better (particularly on narrow terminals). It also adds color to the variant display
to match other parts of `spack info`.
Descriptions and variant value lists that were frequently squished into a tiny column
before now have closer to the full terminal width.
This change also preserves any whitespace formatting present in `package.py`, so package
maintainers can write easier-to-read descriptions of variant values if they want. For
example, `gasnet` has had a nice description of the `conduits` variant for a while, but
it was wrapped and made illegible by `spack info`. That is now fixed and the original
newlines are kept.
Conditional variants are grouped by their when clauses by default, but if you do not
like the grouping, you can display all the variants in order with `--variants-by-name`.
I'm not sure when people will prefer this, but it makes it easier to tell that a
particular variant is/isn't there. I do think grouping by `when` is the better default.
This commit improves forward compatibility of Spack with newer build cache metadata formats.
Before this commit, invalid or unrecognized metadata would be fatal errors, now they just cause
a mirror to be skipped.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Before this PR, variants were not propagated to leaf nodes that could accept
the propagated value, if some intermediate node couldn't accept it.
This PR fixes that issue by marking nodes as "candidate" for propagation
and by setting the variant only if it can be accepted by the node.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Modify the packages.yaml schema so that soft-preferences on targets,
compilers and providers can only be specified under the "all" attribute.
This makes them effectively global preferences.
Version preferences instead can only be specified under a package
specific section.
If a preference attribute is found in a section where it should
not be, it will be ignored and a warning is printed to screen.
Most queries will end up calling `spec.satisfies(query)` on everything in the DB, which
will cause Spack to ask whether the query spec is virtual if its name doesn't match the
target spec's. This can be expensive, because it can cause Spack to check if any new
virtuals showed up in *all* the packages it knows about. That can currently trigger
thousands of `stat()` calls.
We can avoid the virtual check for most successful queries if we consider that if there
*is* a match by name, the query spec *can't* be virtual. This PR adds an optimization to
the query loop to save any comparisons that would trigger a virtual check for last.
- [x] Add a `deferred` list to the `query()` loop.
- [x] First run through the `query()` loop *only* checks for name matches.
- [x] Query loop now returns early if there's a name match, skipping most `satisfies()` calls.
- [x] Second run through the `deferred` list only runs if the query spec is virtual.
- [x] Fix up handling of concrete specs.
- [x] Add test for querying virtuals in DB.
- [x] Avoid allocating deferred if not necessary.
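For illustration, here is a minimal sketch of the two-pass idea (names and signature are hypothetical, not Spack's actual internals):
```python
# Hypothetical sketch of the reordered query loop, not Spack's real code.
def query(db_specs, qspec, qspec_is_virtual):
    deferred = None
    for spec in db_specs:
        if spec.name == qspec.name:
            # Name match: the query spec cannot be a virtual here, so this
            # satisfies() call never triggers the expensive virtual check.
            if spec.satisfies(qspec):
                yield spec
        elif qspec_is_virtual:
            if deferred is None:
                deferred = []  # allocated lazily, only when needed
            deferred.append(spec)
    # Second pass: only runs when the query spec may be a virtual.
    for spec in deferred or ():
        if spec.satisfies(qspec):
            yield spec
```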
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Currently there's some hacky logic in the AppleClang compiler that makes
it also accept `gfortran` as a fortran compiler if `flang` is not found.
This is guarded by `if sys.platform` checks s.t. it only applies to
Darwin.
But on Linux the feature of detecting mixed toolchains is highly
requested too, because it's rather annoying to run into a failed build of
`openblas` after dozens of minutes of compiling its dependencies, just
because clang doesn't have a fortran compiler.
In particular in CI where the system compilers may change during system
updates, it's typically impossible to fix compilers in a hand-written
compilers.yaml config file: the config will almost certainly be outdated
sooner or later, and maintaining one config file per target machine and
writing logic to select the correct config is rather undesirable too.
---
This PR introduces a flag `spack compiler find --mixed-toolchain` that
fills out missing `fc` and `f77` entries in `clang` / `apple-clang` by
picking the best matching `gcc`.
It is enabled by default on macOS, but not on Linux, matching current
behavior of `spack compiler find`.
The "best matching gcc" logic and compiler path updates are identical to
how compiler path dictionaries are currently flattened "horizontally"
(per compiler id). This just adds logic to do the same "vertically"
(across different compiler ids).
So, with this change on Ubuntu 22.04:
```
$ spack compiler find --mixed-toolchain
==> Added 6 new compilers to /home/harmen/.spack/linux/compilers.yaml
gcc@13.1.0 gcc@12.3.0 gcc@11.4.0 gcc@10.5.0 clang@16.0.0 clang@15.0.7
==> Compilers are defined in the following files:
/home/harmen/.spack/linux/compilers.yaml
```
you finally get:
```
compilers:
- compiler:
    spec: clang@=15.0.7
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags: {}
    operating_system: ubuntu23.04
    target: x86_64
    modules: []
    environment: {}
    extra_rpaths: []
- compiler:
    spec: clang@=16.0.0
    paths:
      cc: /usr/bin/clang-16
      cxx: /usr/bin/clang++-16
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags: {}
    operating_system: ubuntu23.04
    target: x86_64
    modules: []
    environment: {}
    extra_rpaths: []
```
The "best gcc" is automatically the default system gcc, since it has no
suffixes / prefixes.
Add a new config section: `config:aliases`, which is a dictionary mapping aliases
to commands.
For instance:
```yaml
config:
  aliases:
    sp: spec -I
```
will define a new command `sp` that will execute `spec` with the `-I`
argument.
Aliases cannot override existing commands, and this is ensured with a test.
We cannot currently alias subcommands. Spack will warn about any aliases
containing a space, but will not error, which leaves room for subcommand
aliases in the future.
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Test that setup_run_environment changes to CC/CXX/FC/F77 are dropped in build env
* compilers set in run env shouldn't impact build
Adds `drop` to EnvironmentModifications courtesy of @haampie, and uses
it to clear modifications of CC, CXX, F77 and FC made by
`setup_{,dependent_}run_environment` routines when producing an
environment in BUILD context.
* comment / style
* comment
---------
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
This adds a rather trivial context manager that lets you deduplicate repeated
arguments in directives, e.g.
```python
depends_on("py-x@1", when="@1", type=("build", "run"))
depends_on("py-x@2", when="@2", type=("build", "run"))
depends_on("py-x@3", when="@3", type=("build", "run"))
depends_on("py-x@4", when="@4", type=("build", "run"))
```
can be condensed to
```python
with default_args(type=("build", "run")):
    depends_on("py-x@1", when="@1")
    depends_on("py-x@2", when="@2")
    depends_on("py-x@3", when="@3")
    depends_on("py-x@4", when="@4")
```
The advantage is that it's clearer for humans; the downside is that it's less clear for type checkers due to type erasure.
Create chains of causation for error messages.
The current implementation is only completed for some of the many errors presented by the concretizer. The rest will need to be filled out over time, but this demonstrates the capability.
The basic idea is to associate conditions in the solver with one another in causal relationships, and to associate errors with the proximate causes of their facts in the condition graph. Then we can construct causal trees to explain errors, which will hopefully present users with useful information to avoid the error or report issues.
Technically, this is implemented as a secondary solve. The concretizer computes the optimal model, and if the optimal model contains an error, then a secondary solve computes causation information about the error(s) in the concretizer output.
Examples:
$ spack solve hdf5 ^cmake@3.0.1
==> Error: concretization failed for the following reasons:
   1. Cannot satisfy 'cmake@3.0.1'
   2. Cannot satisfy 'cmake@3.0.1'
        required because hdf5 ^cmake@3.0.1 requested from CLI
   3. Cannot satisfy 'cmake@3.18:' and 'cmake@3.0.1'
        required because hdf5 ^cmake@3.0.1 requested from CLI
        required because hdf5 depends on cmake@3.18: when @1.13:
          required because hdf5 ^cmake@3.0.1 requested from CLI
   4. Cannot satisfy 'cmake@3.12:' and 'cmake@3.0.1'
        required because hdf5 depends on cmake@3.12:
          required because hdf5 ^cmake@3.0.1 requested from CLI
        required because hdf5 ^cmake@3.0.1 requested from CLI

$ spack spec cmake ^curl~ldap   # <-- with curl configured non-buildable and an external with `+ldap`
==> Error: concretization failed for the following reasons:
   1. Attempted to use external for 'curl' which does not satisfy any configured external spec
   2. Attempted to build package curl which is not buildable and does not have a satisfying external
        attr('variant_value', 'curl', 'ldap', 'True') is an external constraint for curl which was not satisfied
   3. Attempted to build package curl which is not buildable and does not have a satisfying external
        attr('variant_value', 'curl', 'gssapi', 'True') is an external constraint for curl which was not satisfied
   4. Attempted to build package curl which is not buildable and does not have a satisfying external
        'curl+ldap' is an external constraint for curl which was not satisfied
        'curl~ldap' required
          required because cmake ^curl~ldap requested from CLI

$ spack solve yambo+mpi ^hdf5~mpi
==> Error: concretization failed for the following reasons:
   1. 'hdf5' required multiple values for single-valued variant 'mpi'
   2. 'hdf5' required multiple values for single-valued variant 'mpi'
      Requested '~mpi' and '+mpi'
        required because yambo depends on hdf5+mpi when +mpi
          required because yambo+mpi ^hdf5~mpi requested from CLI
        required because yambo+mpi ^hdf5~mpi requested from CLI
   3. 'hdf5' required multiple values for single-valued variant 'mpi'
      Requested '~mpi' and '+mpi'
        required because netcdf-c depends on hdf5+mpi when +mpi
          required because netcdf-fortran depends on netcdf-c
            required because yambo depends on netcdf-fortran
              required because yambo+mpi ^hdf5~mpi requested from CLI
          required because netcdf-fortran depends on netcdf-c@4.7.4: when @4.5.3:
            required because yambo depends on netcdf-fortran
              required because yambo+mpi ^hdf5~mpi requested from CLI
          required because yambo depends on netcdf-c
            required because yambo+mpi ^hdf5~mpi requested from CLI
          required because yambo depends on netcdf-c+mpi when +mpi
            required because yambo+mpi ^hdf5~mpi requested from CLI
        required because yambo+mpi ^hdf5~mpi requested from CLI
Future work:
In addition to fleshing out the causes of other errors, I would like to find a way to associate different components of the error messages with different causes. In this example it's pretty easy to infer which part is which, but I'm not confident that will always be the case.
See the previous PR #34500 for discussion of how the condition chains are incomplete. In the future, we may need custom logic for individual attributes to associate some important choice rules with conditions such that clingo choices or other derivations can be part of the explanation.
---------
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This PR implements the concept of "default environment", which doesn't have to be
created explicitly. The aim is to lower the barrier for adopting environments.
To (create and) activate the default environment, run
```
$ spack env activate
```
This mimics the behavior of
```
$ cd
```
which brings you to your home directory.
This is not a breaking change, since `spack env activate` without arguments
currently errors. It is similar to the already existing `spack env activate --temp`
command which always creates an env in a temporary directory, the difference
is that the default environment is a managed / named environment named `default`.
The name `default` is not a reserved name, it's just that `spack env activate`
creates it for you if you don't have it already.
With this change, you can get started with environments faster:
```
$ spack env activate [--prompt]
$ spack install --add x y z
```
instead of
```
$ spack env create default
==> Created environment 'default' in /Users/harmenstoppels/spack/var/spack/environments/default
==> You can activate this environment with:
==> spack env activate default
$ spack env activate [--prompt] default
$ spack install --add x y z
```
Notice that Spack supports switching (but not stacking) environments, so the
parallel with `cd` is pretty clear:
```
$ spack env activate named_env
$ spack env status
==> In environment named_env
$ spack env activate
$ spack env status
==> In environment default
```
* Add command suggestions
This adds suggestions of similar commands in case users mistype a
command. Before:
```
$ spack spack
==> Error: spack is not a recognized Spack command or extension command; check with `spack commands`.
```
After:
```
$ spack spack
==> Error: spack is not a recognized Spack command or extension command; check with `spack commands`.
Did you mean one of the following commands?
    spec
    patch
```
* Add package name suggestions
* Remove suggestion to run spack clean -m
This PR adds support for including separate definitions from `spack.yaml`.
Supporting the inclusion of files with definitions enables users to make
curated/standardized collections of packages that can be re-used by others.
Currently module globals aren't set before running
`setup_[dependent_]run_environment` to compute environment modifications
for module files. This commit fixes that.
Looking at the memory profiles of concurrent solves
for environments with unify:false, it seems memory
is only ramping up.
This exchange on the potassco mailing list:
https://sourceforge.net/p/potassco/mailman/potassco-users/thread/b55b5b8c2e8945409abb3fa3c935c27e%40lohn.at/#msg36517698
seems to suggest that clingo doesn't release memory
until the end of the application.
Since with unify:false we distribute work to processes,
we pass maxtasksperchild=1 there, so memory is cleaned
up after each solve.
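As a generic illustration of that multiprocessing knob (a sketch, not Spack's actual solver code):
```python
import multiprocessing

def solve_one(spec_str: str) -> str:
    return spec_str.upper()  # stand-in for a clingo solve

if __name__ == "__main__":
    # maxtasksperchild=1 recycles each worker after a single task, so
    # whatever memory clingo holds is released when the child exits.
    with multiprocessing.Pool(processes=2, maxtasksperchild=1) as pool:
        print(pool.map(solve_one, ["hdf5", "zlib"]))
```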
Some providers must provide virtuals "together", i.e.
if they provide one virtual of a set, they must also be
the providers of the others.
There was a bug though, where we were not checking if
the other virtuals in the set were needed at all in
the DAG.
This commit fixes the bug.
This PR makes it possible to select only a subset of virtual dependencies from a spec that _may_ provide more. To select providers, a syntax to specify edge attributes is introduced:
```
hdf5 ^[virtuals=mpi] mpich
```
With that syntax we can concretize specs like:
```console
$ spack spec strumpack ^[virtuals=mpi] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
```
On `develop` this would currently fail with:
```console
$ spack spec strumpack ^intel-parallel-studio+mkl ^openblas
==> Error: Spec cannot include multiple providers for virtual 'blas'
Requested 'intel-parallel-studio' and 'openblas'
```
In package recipes, virtual specs that are declared in the same `provides` directive need to be provided _together_. This means that e.g. `openblas`, which has:
```python
provides("blas", "lapack")
```
needs to provide both `lapack` and `blas` when requested to provide at least one of them.
## Additional notes
This capability is needed to model compilers. Assuming that languages are treated like virtual dependencies, we might want e.g. to use LLVM to compile C/C++ and Gnu GCC to compile Fortran. This can be accomplished by the following[^1]:
```
hdf5 ^[virtuals=c,cxx] llvm ^[virtuals=fortran] gcc
```
[^1]: We plan to add some syntactic sugar around this syntax, and reuse the `%` sigil to avoid having a lot of boilerplate around compilers.
Modifications:
- [x] Add syntax to interact with edge attributes from spec literals
- [x] Add concretization logic to be able to cherry-pick virtual dependencies
- [x] Extend semantic of the `provides` directive to express when virtuals need to be provided together
- [x] Add unit-tests and documentation
Allowing white space around `:` in version ranges introduces an ambiguity:
```
a@1: b
```
parses as `a@1:b` but should really be parsed as two separate specs `a@1:` and `b`.
With white space disallowed around `:` in ranges, the ambiguity is resolved.
Call setup_dependent_run_environment on both link and run edges,
instead of only run edges, which restores old behavior.
Move setup_build_environment into get_env_modifications
Also call setup_run_environment on direct build deps, since their run
environment has to be set up.
* Add tests to ensure variant propagation syntax can round-trip to/from string
* Add a regression test for the bug in 35298
* Reconstruct the spec constraints in the worker process
Specs do not preserve any information on propagation of variants
when round-tripping to/from JSON (which we use to pickle), but
preserve it when round-tripping to/from strings.
Therefore, we pass a spec literal to the worker and reconstruct
the Spec objects there.
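A schematic of the workaround, assuming `spack.spec.Spec` for parsing (the worker function is hypothetical):
```python
from spack.spec import Spec

def concretize_in_worker(spec_literal: str):
    # The string form preserves propagation (e.g. "zlib ++shared"),
    # while the JSON round-trip used for pickling does not.
    spec = Spec(spec_literal)
    return spec.concretized()
```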
- [x] Add links to information people are going to want to know when adding license
information to their packages (namely OSI licenses and SPDX identifiers).
- [x] Update the packaging docs for `license()` with Spack as an example for `when=`.
After all, it's a dual-licensed package that changed once in the past.
- [x] Add link to https://spdx.org/licenses/ in the `spack create` boilerplate as well.
Typically MSVC is detected via the VSWhere program. However, this may
not be available, or may be installed in an unpredictable location.
This PR adds an additional approach via Windows Registry queries to
determine VS install location root.
Additionally:
* Construct vs_install_paths after class-definition time (move it to
variable-access time).
* Skip over keys for which a user does not have read permissions
when performing searches (previously the presence of these keys
would have caused an error, regardless of whether they were
needed).
* Extend helper functionality with option for regex matching on
registry keys vs. exact string matching.
* Some internal refactoring: remove boolean parameters in some cases
where the function was always called with the same value
(e.g. `find_subkey`)
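A rough sketch of permission-tolerant registry searching with regex matching (hypothetical helper; `winreg` is the Windows-only stdlib module):
```python
import re
import winreg

def subkeys_matching(hive, path, pattern):
    """Return subkey names under `path` whose names match `pattern`."""
    try:
        key = winreg.OpenKey(hive, path)
    except PermissionError:
        return []  # skip keys the user cannot read instead of erroring
    names, i = [], 0
    while True:
        try:
            name = winreg.EnumKey(key, i)
        except OSError:  # raised when there are no more subkeys
            break
        i += 1
        if re.match(pattern, name):
            names.append(name)
    return names
```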
.bat or .exe files can be considered executable on Windows. This PR
expands the regex for detectable packages to allow for the detection
of packages that vendor .bat wrappers (Intel MPI, for example).
Additional changes:
* Outside of Windows, when searching for executables `path_hints=None`
was used to indicate that default path hints should be provided,
and `[]` was taken to mean that no defaults should be chosen
(in that case, nothing is searched); behavior on Windows has
now been updated to match.
* Above logic for handling of `path_hints=[]` has also been extended
to library search (for both Linux and Windows).
* All exceptions for external packages were documented as timeout
errors: this commit adds a distinction for other types of errors
in warning messages to the user.
Credits to @ChristianKniep for advocating the idea of OCI image layers
being identical to spack buildcache tarballs.
With this you can configure an OCI registry as a buildcache:
```console
$ spack mirror add my_registry oci://user/image # Dockerhub
$ spack mirror add my_registry oci://ghcr.io/haampie/spack-test # GHCR
$ spack mirror set --push --oci-username ... --oci-password ... my_registry # set login credentials
```
which should result in this config:
```yaml
mirrors:
  my_registry:
    url: oci://ghcr.io/haampie/spack-test
    push:
      access_pair: [<username>, <password>]
```
It can be used like any other registry
```
spack buildcache push my_registry [specs...]
```
It will upload the Spack tarballs in parallel, as well as manifest + config
files s.t. the binaries are compatible with `docker pull` or `skopeo copy`.
In fact, a base image can be added to get a _runnable_ image:
```console
$ spack buildcache push --base-image ubuntu:23.04 my_registry python
Pushed ... as [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
$ docker run --rm -it [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
```
which should really be a game changer for sharing binaries.
Further, all content-addressable blobs that are downloaded and verified
will be cached in Spack's download cache. This should make repeated
`push` commands faster, as well as `push` followed by a separate
`update-index` command.
An end to end example of how to use this in Github Actions is here:
**https://github.com/haampie/spack-oci-buildcache-example**
TODO:
- [x] Generate environment modifications in config so PATH is set up
- [x] Enrich config with Spack's `spec` json (this is allowed in the OCI specification)
- [x] When ^ is done, add logic to create an index in say `<image>:index` by fetching all config files (using OCI distribution discovery API)
- [x] Add logic to use object storage in an OCI registry in `spack install`.
- [x] Make the user pick the base image for generated OCI images.
- [x] Update buildcache install logic to deal with absolute paths in tarballs
- [x] Merge with `spack buildcache` command
- [x] Merge #37441 (included here)
- [x] Merge #39077 (included here)
- [x] #39187 + #39285
- [x] #39341
- [x] Not a blocker: #35737 fixes correctness run env for the generated container images
NOTE:
1. `oci://` is unfortunately taken, so it's being abused in this PR to mean "oci type mirror". `skopeo` uses `docker://` which I'd like to avoid, given that classical docker v1 registries are not supported.
2. this is currently `https`-only, given that basic auth is used to login. I _could_ be convinced to allow http, but I'd prefer not to, given that for a `spack buildcache push` command multiple domains can be involved (auth server, source of base image, destination registry). Right now, no urllib http handler is added, so redirects to https and auth servers with http urls will simply result in a hard failure.
CAVEATS:
1. Signing is not implemented in this PR. `gpg --clearsign` is not the nicest solution, since (a) the spec.json is merged into the image config, which must be valid json, and (b) it would be better to sign the manifest (referencing both config/spec file and tarball) using more conventional image signing tools
2. `spack.binary_distribution.push` is not yet implemented for the OCI buildcache, only `spack buildcache push` is. This is because I'd like to always push images + deps to the registry, so that it's `docker pull`-able, whereas in `spack ci` we really wanna push an individual package without its deps to say `pr-xyz`, while its deps reside in some `develop` buildcache.
3. The `push -j ...` flag only works for OCI buildcache, not for others
* spack checksum pkg@1.2, use as version filter
Currently `pkg@1.2` splits on `@` and looks for 1.2 specifically; with this
PR, `pkg@1.2` acts as a filter, so any matching version (1.2, 1.2.1, ...,
1.2.10) is displayed.
* fix tests
* fix style
Update Tcl modulefile template to simplify generated `append-path`,
`prepend-path` and `remove-path` commands and improve their readability.
If the path element delimiter is the colon character, the `--delim`
option is not set, as colon is the default delimiter value.
Renames exclude_implicits to hide_implicits
When the hide_implicits option is enabled, generate modulefiles of
implicitly installed software and hide them. Even if implicit, those
modulefiles may be referred to as dependencies in other modulefiles, thus
they should still be generated so that dependent modules load properly.
A new hidden property is added to the BaseConfiguration class.
To hide modulefiles, modulercs are generated alongside modulefiles. Such rc
files contain the specific module command to indicate that a module should be
hidden (for instance when using "module avail").
A modulerc property is added to the TclFileLayout and LmodFileLayout classes
to get the fully qualified path name of the modulerc associated with a given
modulefile.
Modulerc files will be located in each module directory, next to the
version modulefiles. This scheme is supported by both module tool
implementations.
modulerc_header and hide_cmd_format attributes are added to
TclModulefileWriter and LmodModulefileWriter. They define how to
generate a modulerc file with hide commands for each module tool.
The Tcl modulerc file requires a header. As we use a command introduced in
Modules 4.7 (`module-hide --hidden-loaded`), a version requirement is
added to the header string.
For lmod, modules that open up a hierarchy are never hidden, even if
they are implicitly installed.
A modulerc is created, updated or removed when the associated modulefile is
written or removed. If an implicit modulefile becomes explicit, the hide
command for this modulefile is removed from its modulerc. If the modulerc
becomes empty, the file is removed. The modulerc file is not rewritten
when no content change is detected.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Previously, we only searched for `patch` inside of whatever Git
installation was available because the most common installation of Git
available on Windows had `patch`. That's not true for all possible
installations of Git though, so this updates the search to also check
PATH.
GitLab's .patch URLs only provide abbreviated hashes, while .diff URLs
provide full hashes. There does not seem to be a parameter to force
.patch URLs to also return full hashes, so we should make sure to use
the .diff ones.
With the introduction of multiple build dependencies from the same package in the DAG, we need to minimize a few weights accounting for edges rather than nodes. If we don't do that we might have multiple "optimal" solutions that differ only in how the same nodes are connected together. This commit ensures optimal versions are picked per parent in case of multiple choices for a dependency.
Fix the following syntax which validates only the first array entry:
```python
"compilers": {
    "type": "array",
    "items": [
        {
            "type": ...
        }
    ]
}
```
to
```python
"compilers": {
    "type": "array",
    "items": {
        "type": ...
    }
}
```
which validates the entire array.
Oops...
This adds a `SetupContext` class which is responsible for setting
package.py module globals, and computing the changes to environment
variables for the build, test or run context.
The class uses `effective_deptypes` which takes a list of specs (e.g. single
item of a spec to build, or a list of environment roots) and a context
(build, run, test), and outputs a flat list of specs that affect the
environment together with a flag in what way they do so. This list is
topologically ordered from root to leaf, so that one can be assured that
dependents override variables set by dependencies, not the other way
around.
This is used to replace the logic in `modifications_from_dependencies`,
which has several issues: missing calls to `setup_run_environment`, and
the order in which operations are applied.
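A toy model of that ordering guarantee (purely illustrative; the real logic lives in `SetupContext` and `effective_deptypes`):
```python
# Purely illustrative, not Spack's actual implementation.
def combined_env(ordered_specs):
    """ordered_specs is topologically ordered from root to leaf."""
    env = {}
    # Walk leaf -> root: dependents come earlier in the root-to-leaf
    # list, so they write last here and override their dependencies.
    for spec in reversed(ordered_specs):
        env.update(spec.get("env", {}))  # stand-in for setup_*_environment
    return env

roots_first = [{"name": "root", "env": {"CC": "clang"}},
               {"name": "dep", "env": {"CC": "gcc", "FC": "gfortran"}}]
assert combined_env(roots_first) == {"CC": "clang", "FC": "gfortran"}
```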
Further, it should improve performance a bit in certain cases, since
`effective_deptypes` run in O(v + e) time, whereas `spack env activate`
currently can take up to O(v^2 + e) time due to loops over roots. Each
edge in the DAG is visited once by calling `effective_deptypes` with
`env.concrete_roots()`.
By marking and propagating flags through the DAG, this commit also fixes
a bug where Spack wouldn't call `setup_run_environment` for runtime
dependencies of link dependencies. And this PR ensures that Spack
correctly sets up the runtime environment of direct build dependencies.
Regarding test dependencies: in a build context they are build-time
test deps, whereas in a test context they are install-time test deps.
Since there are no means to distinguish the build/install type test deps,
they're both.
Further changes:
- all `package.py` module globals are guaranteed to be set before any of the
`setup_(dependent)_(run|build)_env` functions is called
- traversal order during setup: first the group of externals, then the group
of non-externals, with specs in each group traversed topological (dependencies
are setup before dependents)
- modules: only ever call `setup_dependent_run_environment` of *direct* link/run
type deps
- the marker in `set_module_variables_for_package` is dropped, since we should
call the method once per spec. This allows us to set only a cheap subset of
globals on the module: for example it's not necessary to compute the expensive
`cmake_args` and w/e if the spec under consideration is not the root node to be
built.
- `spack load`'s `--only` is deprecated (it has no effect now), and `spack load x`
now means: do everything that's required for `x` to work at runtime, which
requires runtime deps to be setup -- just like `spack env activate`.
- `spack load` no longer loads build deps (of build deps) ...
- `spack env activate` on partially installed or broken environments: this is all
or nothing now. If some spec errors during setup of its runtime env, you'll only
get the unconditional variables + a warning that says the runtime changes for
specs couldn't be applied.
- Remove traversal in upward direction from `setup_dependent_*` in packages.
Upward traversal may iterate to specs that aren't children of the roots
(e.g. zlib / python have hundreds of dependents, only a small fraction of which
is reachable from the roots). Packages should only modify the direct dependent
they receive as an argument.
The ability to select the top N versions got removed in the checksum overhaul,
because initially numbers were used for commands.
Now that we settled on characters for commands, let's make numbers pick the top
N again.
Improve how mirrors are used in gitlab ci, where we have until now thought
of them as only a string.
By configuring ci mirrors ahead of time using the proposed mirror templates,
and by taking advantage of the expressiveness that spack now has for mirrors,
this PR will allow us to easily switch the protocol/url we use for fetching
binary dependencies.
This change also deprecates some gitlab functionality and marks it for
removal in Spack 0.23:
- arguments to "spack ci generate":
* --buildcache-destination
* --copy-to
- gitlab configuration options:
* enable-artifacts-buildcache
* temporary-storage-url-prefix
Reused specs used to be referenced directly in the built spec.
This might cause issues like the one in issue 39570, where two objects in
memory represent the same node, because two reused specs were
loaded from different sources but referred to the same spec
by DAG hash.
The issue is solved by copying concrete specs to a dictionary keyed
by dag hash.
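Schematically, the fix looks like this (illustrative only):
```python
def dedupe_reused(reuse_candidates):
    # Key by DAG hash so nodes loaded from different sources resolve
    # to a single object in memory.
    by_hash = {}
    for spec in reuse_candidates:
        by_hash.setdefault(spec.dag_hash(), spec)
    return by_hash
```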
`spack dev-build` would incorrectly set `keep_stage=True` for the
entire DAG, including for non-dev specs, even though the dev specs
have a DIYStage which never deletes sources.
This patch adds a license directive to get the ball rolling on adding license
information about packages to Spack. I'm primarily interested in just getting
licenses into Spack, but this would also help with other efforts that people are
interested in, such as adding license information to the ASP solve for
concretization to make sure licenses are compatible.
Usage:
Specifying the specific license that a package is released under in a project's
`package.py` is good practice. To specify a license, find the SPDX identifier for
a project and then add it using the license directive:
```python
license("<SPDX Identifier HERE>")
```
For example, for Apache 2.0, you might write:
```python
license("Apache-2.0")
```
Note that specifying a license without a when clause makes it apply to all
versions and variants of the package, which might not actually be the case.
For example, a project might have switched licenses at some point or have
certain build configurations that include files that are licensed differently.
To account for this, you can specify when licenses should be applied. For
example, to specify that a specific license identifier should only apply
to versions up to and including 1.5, you could write the following directive:
```python
license("MIT", when="@:1.5")
```
This commit allows version specifiers to refer to git branches that contain
forward slashes. For example, the following is valid syntax now:
`pkg@git.releases/1.0`
It also adds a new method `Spec.format_path(fmt)` which is like `Spec.format`,
but also maps unsafe characters to `_` after interpolation. The difference is
as follows:
>>> Spec("pkg@git.releases/1.0").format("{name}/{version}")
'pkg/git.releases/1.0'
>>> Spec("pkg@git.releases/1.0").format_path("{name}/{version}")
'pkg/git.releases_1.0'
The `format_path` method is used in all projections. Notice that this method
also maps `=` to `_`
>>> Spec("pkg@git.main=1.0").format_path("{name}/{version}")
'pkg/git.main_1.0'
which should avoid syntax issues when `Spec.prefix` is literally copied into a
Makefile as sometimes happens in AutotoolsPackage or MakefilePackage
Currently `spack env activate --with-view` exists, but is a no-op.
So, it is not too much of a breaking change to make this redundant flag
accept a value `spack env activate --with-view <name>` which activates
a particular view by name.
The view name is stored in `SPACK_ENV_VIEW`.
This also fixes an issue where deactivating a view that was activated
with `--without-view` possibly removes entries from PATH, since now we
keep track of whether the default view was "enabled" or not.
* spack checksum: improve interactive filtering
* fix signature of executable
* Fix restart when using editor
* Don't show [x version(s) are new] when no known versions (e.g. in spack create <url>)
* Test ^D in test_checksum_interactive_quit_from_ask_each
* formatting
* colorize / skip header on invalid command
* show original total, not modified total
* use colify for command list
* Warn about possible URL changes
* show possible URL change as comment
* make mypy happy
* drop numbers
* [o]pen editor -> [e]dit
Because those end up being passed to ar, which does not understand linker
arguments. This was making ldflags largely unusable for statically
linked cmake packages.
* Allow branching out of the "generic build" unification set
For cases like the one in https://github.com/spack/spack/pull/39661
we need to relax rules on unification sets.
The issue is that, right now, nodes in the "generic build" unification
set are unified together with their build dependencies. This was done
out of caution to avoid the risk of circular dependencies, which would
ultimately cause a very slow solve.
For build-tools like Cython, however, the build dependency is masked
by a long chain of "build, run" dependencies that belong in the
"generic build" unification space.
To allow splitting on cases like this, we relax the rule disallowing
branching out of the "generic build" unification set.
* Fix issue with pure build virtual dependencies
Pure build virtual dependencies were not accounted for properly in the
list of possible virtuals. This caused some facts connecting virtuals
to the corresponding providers to not be emitted, and in the end
led to unsat problems.
* Fixed a few issues in packages
py-gevent: restore dependency on py-cython@3
jsoncpp: fix typo in build dependency
ecp-data-vis-sdk: update spack.yaml and cmake recipe
py-statsmodels: add v0.13.5
* Make dependency on "blt" of type "build"
We run pip with `--no-build-isolation` because we don't want to let pip
install build deps.
As a consequence, when pip runs hooks, it runs hooks of *any* package it
can find in `sys.path`.
For Spack-built Python this includes user site packages -- there
shouldn't be any system site packages. So in this case it suffices to
set the environment variable PYTHONNOUSERSITE=1.
For external Python, more needs to be done, because there is no env
variable that disables both system and user site packages; setting the
`python -S` flag doesn't work, because pip runs subprocesses that don't
inherit this flag (and there is no API to know if -S was passed).
So, for external Python, an empty venv created before invoking pip in
Spack's build env ensures that pip can no longer see anything but
standard libraries and `PYTHONPATH`.
The downside of this is that pip will generate shebangs that point to
the python executable from the venv. So, for external python an extra
step is necessary where we fix up shebangs post install.
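The venv part can be done with the standard library alone; a minimal sketch (the path is hypothetical):
```python
import venv

# A bare venv: no pip, no system or user site packages. Inside it, pip
# (invoked from Spack's build env) only sees the stdlib and PYTHONPATH.
venv.create("/tmp/spack-build-env", system_site_packages=False, with_pip=False)
```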
Two changes in this PR:
1. Register absolute paths in tarballs, which makes it easier
to use them as container image layers, or rootfs in general, outside
of Spack. Spack supports this already on develop.
2. Assemble the tarfile entries "by hand", which has a few advantages
(see the sketch after this list):
   1. Avoid reading `/etc/passwd`, `/etc/groups`, `/etc/nsswitch.conf`,
      which `tar.add(dir)` does _for each file it adds_.
   2. Reduce the number of stat calls per file added by a factor of two
      compared to `tar.add`, which should help with slow, shared filesystems
      where these calls are expensive.
   3. Create normalized `TarInfo` entries from the start, instead of letting
      Python create them and patching them after the fact.
   4. Don't recurse into subdirs before processing files, to avoid
      keeping nested directories opened. (This changes the tar entry
      order slightly; it's like sorting by `(not is_dir, name)`.)
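A sketch of the "by hand" approach with `tarfile` (simplified; not the exact normalization Spack uses):
```python
import os
import tarfile

def add_file(tar: tarfile.TarFile, path: str, arcname: str) -> None:
    info = tarfile.TarInfo(arcname)
    st = os.lstat(path)          # the single stat call for this file
    info.size = st.st_size
    info.mode = 0o644
    info.uid = info.gid = 0      # fixed ids: no /etc/passwd or /etc/groups reads
    info.uname = info.gname = ""
    info.mtime = 0               # normalized from the start, no patching later
    with open(path, "rb") as f:
        tar.addfile(info, f)
```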
For a long time, the docs have generated a huge, static HTML package list. It has some
disadvantages:
* It's slow to load
* It's slow to build
* It's hard to search
We now have a nice website that can tell us about Spack packages, and it's searchable so
users can easily find the one or two packages out of 7400 that they're looking for. We
should link to this instead of including a static package list page in the docs.
- [x] Replace package list link with link to packages.spack.io
- [x] Remove `package_list.html` generation from `conf.py`.
- [x] Add a new section for "Links" to the docs.
- [x] Remove docstring notes from contribution guide (we haven't generated RST
for package docstrings for a while)
- [x] Remove references to `package-list` from docs.
Currently, Windows SDK detection will only pick up SDK versions
related to the current version of Windows Spack is running on.
However, in some circumstances, we want to detect other versions
of the SDK, for example, for compiling on Windows 11 for Windows
10 to ensure an API is compatible with Win10.
* Make use of `prefix` in the Cray manifest schema (prepend it to
the relative CC etc.) - this was a Spack error.
* Warn people when wrong-looking compilers are found in the manifest
(i.e. non-existent CC path).
* Bypass compilers that we fail to add (don't allow a single bad
compiler to terminate the entire read-cray-manifest action).
* Refactor Cray manifest tests: module-level variables have been
replaced with fixtures, specifically using the `test_platform`
fixture, which allows the unit tests to run with the new
concretizer.
* Add unit test to check case where adding a compiler raises an
exception (check that this doesn't prevent processing the
rest of the manifest).
If you `spack install x ^y` where `y` is a pure build dep of `x`, and
then uninstall `y`, and then `spack install --overwrite x ^y`, the build
fails because `y` is not re-installed.
Same can happen when you install a develop spec, run `spack gc`,
modify sources, and install again; develop specs rely on overwrite
install to work correctly.
This PR adds a new audit sub-command to check that detection of relevant packages
is performed correctly in a few scenarios mocking real use-cases. The data for each
package being tested is in a YAML file called detection_test.yaml alongside the
corresponding package.py file.
This is to allow encoding detection tests for compilers and other widely used tools,
in preparation for compilers as dependencies.
Modifications:
- [x] Move `spack.util.string` to `llnl.string`
- [x] Remove dependency of `llnl` on `spack.error`
- [x] Move path of `spack.util.path` to `llnl.path`
- [x] Move `spack.util.environment.get_host_*` to `spack.spec`
* msvc.py: don't import distutils
Introduced in #27021, this makes Spack forward-incompatible with newer
Python versions; the distutils module was already deprecated at the time of the PR.
* update spack package
Fixes #39622
Add a timeout to compiler detection and allow Spack to proceed when
this timeout occurs.
In all cases, the timeout is 120 seconds: it is assumed any compiler
invocation we do for the purposes of verifying it would resolve in
that amount of time.
Also refine executables that are tested as being possible MSVC
instances, and limit where we try to detect MSVC. In more detail:
* Compiler detection should timeout after a certain period of time.
Because compiler detection executes arbitrary executables on the
system, we could encounter a program that just hangs, or even a
compiler that hangs on a license key or similar. A timeout
prevents this from hanging Spack.
* Prevents things like cl-.* from being detected as potential MSVC
installs. cl is always just cl in all cases that Spack supports.
Change the MSVC class to indicate this.
* Prevent compilers unsupported on certain platforms from being
detected there (i.e. don't look for MSVC on systems other than
Windows).
The first point alone is sufficient to address #39622, but the
next two reduce the likelihood of timeouts (which is useful since
those slow down the user even if they are survivable).
Put back normalization of the "virtuals" input as a sorted tuple.
Without this we might get edges that differ just for the order of
virtuals, or that have lists, which are not hashable.
Add unit-tests to prevent regressions.
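The normalization itself is tiny (illustrative):
```python
def normalize_virtuals(virtuals) -> tuple:
    # A sorted tuple is hashable and order-independent, so two edges
    # with the same virtuals always compare and hash equal.
    return tuple(sorted(virtuals))

assert normalize_virtuals(["mpi", "lapack"]) == normalize_virtuals(("lapack", "mpi"))
```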
By default, do not let deprecated versions enter the solve.
Previously you could concretize to something deprecated, only to get errors on install.
With this commit, we get errors on concretization, so the issue is caught earlier.
PythonExtension is a base class for PythonPackage, and
is meant to be used for any package that is a Python
extension but is not built using "python_pip".
The "update_external_dependency" method in the base
class calls another method that is defined in the derived
class.
Push "get_external_python_for_prefix" up in the hierarchy
to make method calls consistent.
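Schematically (simplified class shapes, not the real signatures):
```python
class PythonExtension:
    def get_external_python_for_prefix(self):
        # Moved up from the derived class so that base-class methods
        # only call methods defined at their own level or above.
        ...

    def update_external_dependency(self, spec):
        python = self.get_external_python_for_prefix()  # defined on this class
        ...

class PythonPackage(PythonExtension):
    """Built with pip; inherits the consistent hierarchy above."""
```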
This commit replaces the internal representation of deptypes with `int`, which is more compact
and faster to operate with.
Double loops like:
```
any(x in ys for x in xs)
```
are replaced by the constant-time operation `bool(xs & ys)`, where `xs` and `ys` are dependency types encoded as ints.
Global constants are exposed for convenience in `spack.deptypes`.
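For example, with dependency types as bit flags (values illustrative; the real constants live in `spack.deptypes`):
```python
# Illustrative flag values.
BUILD, LINK, RUN, TEST = 1, 2, 4, 8

def depflag_intersects(xs: int, ys: int) -> bool:
    # One constant-time mask intersection instead of a double loop.
    return bool(xs & ys)

assert depflag_intersects(BUILD | RUN, RUN | TEST)       # share RUN
assert not depflag_intersects(BUILD | LINK, RUN | TEST)  # disjoint
```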
Currently, the concretizer emits facts for all versions known to Spack, including deprecated versions, and has a specific optimization objective to minimize their use.
This commit simplifies how deprecated versions are handled, by considering possible versions for a spec only if they appear in a spec literal, or if `config:deprecated:true` is set directly or through the `--deprecated` flag. The optimization objective has also been removed, in favor of just ordering versions and putting deprecated ones last.
This results in:
a) no delayed errors on install, but concretization errors when deprecated versions would be the only option. This is in particular relevant for CI where it's better to get errors early
b) a slight concretization speed-up due to fewer facts
c) a simplification of the logic program.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
NMake makefiles are still called makefiles. The corresponding builder
variable was called "nmakefile", which is a bit unintuitive and led
to a few easy-to-make, hard-to-notice mistakes when creating packages.
This commit renames the builder property to "makefile".
Extensionless archives requiring two-stage decompression and extraction
require intermediate archives to be renamed after decompression/extraction
to prevent collisions. Prior behavior attempted to clean up the intermediate
archive under its original name; this PR ensures the renamed folder is
cleaned up instead.
Co-authored-by: Dan Lipsa <dan.lipsa@khq.kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
* Perform external spec detection with multiple workers
The logic to perform external spec detection has been refactored
into classes. These classes use the GoF "template" pattern to account
for the small differences between searching for "executables" and
for "libraries", while unifying the larger part of the algorithm.
A ProcessPoolExecutor is used to parallelize the work.
* Speed-up external find by tagging detectable packages automatically
Querying packages by tag is much faster than inspecting the repository,
since tags are cached. This commit adds a "detectable" tag to every
package that implements the detection protocol, and external detection
uses it to search for packages.
* Pass package names instead of package classes to workers
The slowest part of the search is importing the Python modules
associated with candidate packages. The import is done serially
before we distribute the work to the pool of executors.
This commit pushes the import of the Python module into the job
performed by the workers, and passes just the names of the packages
to the executors.
In this way imports can be done in parallel (see the sketch after this list).
* Rework unit-tests for Windows
Some unit tests were doing a full e2e run of a command
just to check input handling. Make the tests more
focused by just stressing a specific function.
Mark as xfailed 2 tests on Windows, that will be fixed
by a PR in the queue. The tests are failing because we
monkeypatch internals in the parent process, but the
monkeypatching is not done in the "spawned" child
process.
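A sketch of the worker-side import idea referenced above (generic stdlib modules stand in for package modules):
```python
import importlib
from concurrent.futures import ProcessPoolExecutor

def search_one(module_name: str) -> bool:
    # The expensive import happens here, inside the worker process,
    # so imports for different packages run in parallel.
    module = importlib.import_module(module_name)
    return hasattr(module, "determine_version")  # "does it implement detection?"

def parallel_search(module_names):
    with ProcessPoolExecutor() as pool:
        return dict(zip(module_names, pool.map(search_one, module_names)))

# e.g. parallel_search(["json", "tarfile"]) from a __main__ guard
```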
* Write timing information for installs from cache
* CI: aggregate and upload install_times.json to artifacts
* CI: Don't change root directory for artifact generation
* Flat event based timer variation
Event based timer allows for easily starting and stopping timers without
wiping sub-timer data. It also requires less branching logic when
tracking time.
The json output is non-hierarchical in this version and hierarchy is
less rigidly enforced between starting and stopping.
* Add and write timers for top level install
* Update completion
* remove unused subtimer api
* Fix unit tests
* Suppress timing summary option
* Save timers summaries to user_data artifacts
* Remove completion from fish
* Move spack python to script section
* Write timer correctly for non-cache installs
* Re-add hash to timer file
* Fish completion updates
* Fix null timer yield value
* fix type hints
* Remove timer-summary-file option
* Add "." in front of non-package timer name
---------
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
This is a fixed version of b72a268
* That commit would discard the final key component (so if you set
"config:install_tree:root", it would discard "root" and just set
install tree).
* When setting key:"value", with the quotes, that commit would
discard the quotes, which would confuse the system if adding a
value like "{example}" (the "{" character indicates a dictionary).
This commit retains the quotes.