This commit improves forward compatibility of Spack with newer build cache metadata formats.
Before this commit, invalid or unrecognized metadata were fatal errors; now they just cause
a mirror to be skipped.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Before this PR, variants were not propagated to leaf nodes that could accept
the propagated value if some intermediate node couldn't accept it.
This PR fixes that issue by marking nodes as candidates for propagation
and by setting the variant only if it can be accepted by the node.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Modify the packages.yaml schema so that soft-preferences on targets,
compilers and providers can only be specified under the "all" attribute.
This makes them effectively global preferences.
Version preferences instead can only be specified under a package
specific section.
If a preference attribute is found in a section where it should
not be, it is ignored and a warning is printed to the screen.
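For instance, the distinction looks roughly like this (a hedged sketch; the package names and versions are hypothetical):
```yaml
packages:
  all:
    compiler: [gcc, clang]      # global soft preference: only valid under "all"
    providers:
      mpi: [mvapich2, openmpi]  # provider preference: only valid under "all"
  zlib:
    version: [1.2.13]           # version preference: only valid per package
```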
Most queries will end up calling `spec.satisfies(query)` on everything in the DB, which
will cause Spack to ask whether the query spec is virtual if its name doesn't match the
target spec's. This can be expensive, because it can cause Spack to check if any new
virtuals showed up in *all* the packages it knows about. That can currently trigger
thousands of `stat()` calls.
We can avoid the virtual check for most successful queries if we consider that if there
*is* a match by name, the query spec *can't* be virtual. This PR adds an optimization to
the query loop to save any comparisons that would trigger a virtual check for last (see
the sketch after the checklist).
- [x] Add a `deferred` list to the `query()` loop.
- [x] First run through the `query()` loop *only* checks for name matches.
- [x] Query loop now returns early if there's a name match, skipping most `satisfies()` calls.
- [x] Second run through the `deferred` list only runs if the query spec is virtual.
- [x] Fix up handling of concrete specs.
- [x] Add test for querying virtuals in DB.
- [x] Avoid allocating deferred if not necessary.
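A minimal sketch of the deferred-match pattern, assuming a `satisfies` method and a `virtual` property as described above (an illustration, not the actual `Database.query` code):
```python
def query(specs, query_spec):
    results, deferred = [], None
    for spec in specs:
        if spec.name == query_spec.name:
            # A name match means query_spec cannot be virtual, so this
            # satisfies() call never triggers the expensive virtual check.
            if spec.satisfies(query_spec):
                results.append(spec)
        elif deferred is None:
            deferred = [spec]  # allocate lazily, only when needed
        else:
            deferred.append(spec)
    # Only fall back to the deferred comparisons for virtual queries.
    if deferred is not None and query_spec.virtual:
        results.extend(s for s in deferred if s.satisfies(query_spec))
    return results
```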
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Currently there's some hacky logic in the AppleClang compiler that makes
it also accept `gfortran` as a Fortran compiler if `flang` is not found.
This is guarded by `if sys.platform` checks so that it only applies to
Darwin.
But on Linux the feature of detecting mixed toolchains is highly
requested too, because it's rather annoying to run into a failed build of
`openblas` after dozens of minutes of compiling its dependencies, just
because clang doesn't have a Fortran compiler.
In particular in CI, where the system compilers may change during system
updates, it's typically impossible to pin compilers in a hand-written
compilers.yaml config file: the config will almost certainly be outdated
sooner or later, and maintaining one config file per target machine and
writing logic to select the correct config is rather undesirable too.
---
This PR introduces a flag `spack compiler find --mixed-toolchain` that
fills out missing `fc` and `f77` entries in `clang` / `apple-clang` by
picking the best matching `gcc`.
It is enabled by default on macOS, but not on Linux, matching current
behavior of `spack compiler find`.
The "best matching gcc" logic and compiler path updates are identical to
how compiler path dictionaries are currently flattened "horizontally"
(per compiler id). This just adds logic to do the same "vertically"
(across different compiler ids).
So, with this change on Ubuntu 22.04:
```
$ spack compiler find --mixed-toolchain
==> Added 6 new compilers to /home/harmen/.spack/linux/compilers.yaml
gcc@13.1.0 gcc@12.3.0 gcc@11.4.0 gcc@10.5.0 clang@16.0.0 clang@15.0.7
==> Compilers are defined in the following files:
/home/harmen/.spack/linux/compilers.yaml
```
you finally get:
```yaml
compilers:
- compiler:
    spec: clang@=15.0.7
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags: {}
    operating_system: ubuntu23.04
    target: x86_64
    modules: []
    environment: {}
    extra_rpaths: []
- compiler:
    spec: clang@=16.0.0
    paths:
      cc: /usr/bin/clang-16
      cxx: /usr/bin/clang++-16
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags: {}
    operating_system: ubuntu23.04
    target: x86_64
    modules: []
    environment: {}
    extra_rpaths: []
```
The "best gcc" is automatically default system gcc, since it has no
suffixes / prefixes.
Add a new config section: `config:aliases`, which is a dictionary mapping aliases
to commands.
For instance:
```yaml
config:
  aliases:
    sp: spec -I
```
will define a new command `sp` that will execute `spec` with the `-I`
argument.
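For instance, with the alias above, the following two invocations behave the same:
```console
$ spack sp hdf5
$ spack spec -I hdf5
```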
Aliases cannot override existing commands, and this is ensured with a test.
We cannot currently alias subcommands. Spack will warn about any aliases
containing a space, but will not error, which leaves room for subcommand
aliases in the future.
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Test that setup_run_environment changes to CC/CXX/FC/F77 are dropped in build env
* compilers set in run env shouldn't impact build
Adds `drop` to EnvironmentModifications courtesy of @haampie, and uses
it to clear modifications of CC, CXX, F77 and FC made by
`setup_{,dependent_}run_environment` routines when producing an
environment in BUILD context.
* comment / style
* comment
---------
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
This adds a rather trivial context manager that lets you deduplicate repeated
arguments in directives, e.g.
```python
depends_on("py-x@1", when="@1", type=("build", "run"))
depends_on("py-x@2", when="@2", type=("build", "run"))
depends_on("py-x@3", when="@3", type=("build", "run"))
depends_on("py-x@4", when="@4", type=("build", "run"))
```
can be condensed to
```python
with default_args(type=("build", "run")):
depends_on("py-x@1", when="@1")
depends_on("py-x@2", when="@2")
depends_on("py-x@3", when="@3")
depends_on("py-x@4", when="@4")
```
The advantage is that it's clearer for humans; the downside is that it's less clear for type checkers due to type erasure.
Create chains of causation for error messages.
The current implementation is only complete for some of the many errors presented by the concretizer. The rest will need to be filled out over time, but this demonstrates the capability.
The basic idea is to associate conditions in the solver with one another in causal relationships, and to associate errors with the proximate causes of their facts in the condition graph. Then we can construct causal trees to explain errors, which will hopefully present users with useful information to avoid the error or report issues.
Technically, this is implemented as a secondary solve. The concretizer computes the optimal model, and if the optimal model contains an error, then a secondary solve computes causation information about the error(s) in the concretizer output.
Examples:

$ spack solve hdf5 ^cmake@3.0.1
==> Error: concretization failed for the following reasons:
   1. Cannot satisfy 'cmake@3.0.1'
   2. Cannot satisfy 'cmake@3.0.1'
        required because hdf5 ^cmake@3.0.1 requested from CLI
   3. Cannot satisfy 'cmake@3.18:' and 'cmake@3.0.1'
        required because hdf5 ^cmake@3.0.1 requested from CLI
        required because hdf5 depends on cmake@3.18: when @1.13:
        required because hdf5 ^cmake@3.0.1 requested from CLI
   4. Cannot satisfy 'cmake@3.12:' and 'cmake@3.0.1'
        required because hdf5 depends on cmake@3.12:
        required because hdf5 ^cmake@3.0.1 requested from CLI
        required because hdf5 ^cmake@3.0.1 requested from CLI

$ spack spec cmake ^curl~ldap   # <-- with curl configured non-buildable and an external with `+ldap`
==> Error: concretization failed for the following reasons:
   1. Attempted to use external for 'curl' which does not satisfy any configured external spec
   2. Attempted to build package curl which is not buildable and does not have a satisfying external
        attr('variant_value', 'curl', 'ldap', 'True') is an external constraint for curl which was not satisfied
   3. Attempted to build package curl which is not buildable and does not have a satisfying external
        attr('variant_value', 'curl', 'gssapi', 'True') is an external constraint for curl which was not satisfied
   4. Attempted to build package curl which is not buildable and does not have a satisfying external
        'curl+ldap' is an external constraint for curl which was not satisfied
        'curl~ldap' required
        required because cmake ^curl~ldap requested from CLI

$ spack solve yambo+mpi ^hdf5~mpi
==> Error: concretization failed for the following reasons:
   1. 'hdf5' required multiple values for single-valued variant 'mpi'
   2. 'hdf5' required multiple values for single-valued variant 'mpi'
        Requested '~mpi' and '+mpi'
        required because yambo depends on hdf5+mpi when +mpi
        required because yambo+mpi ^hdf5~mpi requested from CLI
        required because yambo+mpi ^hdf5~mpi requested from CLI
   3. 'hdf5' required multiple values for single-valued variant 'mpi'
        Requested '~mpi' and '+mpi'
        required because netcdf-c depends on hdf5+mpi when +mpi
        required because netcdf-fortran depends on netcdf-c
        required because yambo depends on netcdf-fortran
        required because yambo+mpi ^hdf5~mpi requested from CLI
        required because netcdf-fortran depends on netcdf-c@4.7.4: when @4.5.3:
        required because yambo depends on netcdf-fortran
        required because yambo+mpi ^hdf5~mpi requested from CLI
        required because yambo depends on netcdf-c
        required because yambo+mpi ^hdf5~mpi requested from CLI
        required because yambo depends on netcdf-c+mpi when +mpi
        required because yambo+mpi ^hdf5~mpi requested from CLI
        required because yambo+mpi ^hdf5~mpi requested from CLI
Future work:
In addition to fleshing out the causes of other errors, I would like to find a way to associate different components of the error messages with different causes. In this example it's pretty easy to infer which part is which, but I'm not confident that will always be the case.
See the previous PR #34500 for discussion of how the condition chains are incomplete. In the future, we may need custom logic for individual attributes to associate some important choice rules with conditions such that clingo choices or other derivations can be part of the explanation.
---------
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This PR implements the concept of "default environment", which doesn't have to be
created explicitly. The aim is to lower the barrier for adopting environments.
To (create and) activate the default environment, run
```
$ spack env activate
```
This mimics the behavior of
```
$ cd
```
which brings you to your home directory.
This is not a breaking change, since `spack env activate` without arguments
currently errors. It is similar to the already existing `spack env activate --temp`
command, which always creates an env in a temporary directory; the difference
is that the default environment is a managed environment named `default`.
The name `default` is not a reserved name; it's just that `spack env activate`
creates it for you if you don't have it already.
With this change, you can get started with environments faster:
```
$ spack env activate [--prompt]
$ spack install --add x y z
```
instead of
```
$ spack env create default
==> Created environment 'default' in /Users/harmenstoppels/spack/var/spack/environments/default
==> You can activate this environment with:
==> spack env activate default
$ spack env activate [--prompt] default
$ spack install --add x y z
```
Notice that Spack supports switching (but not stacking) environments, so the
parallel with `cd` is pretty clear:
```
$ spack env activate named_env
$ spack env status
==> In environment named_env
$ spack env activate
$ spack env status
==> In environment default
```
* Add command suggestions
This adds suggestions of similar commands in case users mistype a
command. Before:
```
$ spack spack
==> Error: spack is not a recognized Spack command or extension command; check with `spack commands`.
```
After:
```
$ spack spack
==> Error: spack is not a recognized Spack command or extension command; check with `spack commands`.
Did you mean one of the following commands?
spec
patch
```
* Add package name suggestions
* Remove suggestion to run spack clean -m
This PR adds support for including separate definitions from `spack.yaml`.
Supporting the inclusion of files with definitions enables users to make
curated/standardized collections of packages that can be re-used by others.
Currently module globals aren't set before running
`setup_[dependent_]run_environment` to compute environment modifications
for module files. This commit fixes that.
Looking at the memory profiles of concurrent solves
for environments with unify:false, it seems memory
only ramps up.
This exchange on the potassco mailing list:
https://sourceforge.net/p/potassco/mailman/potassco-users/thread/b55b5b8c2e8945409abb3fa3c935c27e%40lohn.at/#msg36517698
seems to suggest that clingo doesn't release memory
until the end of the application.
Since with unify:false we distribute work to processes,
here we pass maxtasksperchild=1, so memory is cleaned up
after each solve (see the sketch below).
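A minimal sketch of the approach, assuming a worker function `solve_one` (hypothetical name) that performs a single solve:
```python
import multiprocessing

def solve_one(spec_str):
    # run one clingo solve for spec_str and return the result
    ...

if __name__ == "__main__":
    # maxtasksperchild=1 recycles each worker after a single solve,
    # so the memory clingo holds is returned when the process exits.
    with multiprocessing.Pool(processes=4, maxtasksperchild=1) as pool:
        results = pool.map(solve_one, ["hdf5", "zlib", "cmake"])
```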
Some providers must provide virtuals "together", i.e.
if they provide one virtual of a set, they must also
provide the others.
There was a bug, though, where we were not checking whether
the other virtuals in the set were needed at all in
the DAG.
This commit fixes the bug.
This PR makes it possible to select only a subset of virtual dependencies from a spec that _may_ provide more. To select providers, a syntax to specify edge attributes is introduced:
```
hdf5 ^[virtuals=mpi] mpich
```
With that syntax we can concretize specs like:
```console
$ spack spec strumpack ^[virtuals=mpi] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
```
On `develop` this would currently fail with:
```console
$ spack spec strumpack ^intel-parallel-studio+mkl ^openblas
==> Error: Spec cannot include multiple providers for virtual 'blas'
Requested 'intel-parallel-studio' and 'openblas'
```
In package recipes, virtual specs that are declared in the same `provides` directive need to be provided _together_. This means that e.g. `openblas`, which has:
```python
provides("blas", "lapack")
```
needs to provide both `lapack` and `blas` when requested to provide at least one of them.
## Additional notes
This capability is needed to model compilers. Assuming that languages are treated like virtual dependencies, we might want e.g. to use LLVM to compile C/C++ and Gnu GCC to compile Fortran. This can be accomplished by the following[^1]:
```
hdf5 ^[virtuals=c,cxx] llvm ^[virtuals=fortran] gcc
```
[^1]: We plan to add some syntactic sugar around this syntax, and reuse the `%` sigil to avoid having a lot of boilerplate around compilers.
Modifications:
- [x] Add syntax to interact with edge attributes from spec literals
- [x] Add concretization logic to be able to cherry-pick virtual dependencies
- [x] Extend semantic of the `provides` directive to express when virtuals need to be provided together
- [x] Add unit-tests and documentation
Allowing white space around `:` in version ranges introduces an ambiguity:
```
a@1: b
```
parses as `a@1:b` but should really be parsed as two separate specs `a@1:` and `b`.
With white space disallowed around `:` in ranges, the ambiguity is resolved.
Call setup_dependent_run_environment on both link and run edges,
instead of only run edges, which restores old behavior.
Move setup_build_environment into get_env_modifications.
Also call setup_run_environment on direct build deps, since their run
environment has to be set up.
* Add tests to ensure variant propagation syntax can round-trip to/from string
* Add a regression test for the bug in #35298
* Reconstruct the spec constraints in the worker process
Specs do not preserve any information on propagation of variants
when round-tripping to/from JSON (which we use to pickle), but
preserve it when round-tripping to/from strings.
Therefore, we pass a spec literal to the worker and reconstruct
the Spec objects there.
- [x] Add links to information people are going to want to know when adding license
information to their packages (namely OSI licenses and SPDX identifiers).
- [x] Update the packaging docs for `license()` with Spack as an example for `when=`.
After all, it's a dual-licensed package that changed once in the past.
- [x] Add link to https://spdx.org/licenses/ in the `spack create` boilerplate as well.
Typically MSVC is detected via the VSWhere program. However, this may
not be available, or may be installed in an unpredictable location.
This PR adds an additional approach via Windows Registry queries to
determine VS install location root.
Additionally:
* Construct vs_install_paths after class-definition time (move it to
variable-access time).
* Skip over keys for which a user does not have read permissions
when performing searches (previously the presence of these keys
would have caused an error, regardless of whether they were
needed).
* Extend helper functionality with option for regex matching on
registry keys vs. exact string matching.
* Some internal refactoring: remove boolean parameters in some cases
where the function was always called with the same value
(e.g. `find_subkey`)
.bat or .exe files can be considered executable on Windows. This PR
expands the regex for detectable packages to allow for the detection
of packages that vendor .bat wrappers (Intel MPI, for example).
Additional changes:
* Outside of Windows, when searching for executables `path_hints=None`
was used to indicate that default path hints should be provided,
and `[]` was taken to mean that no defaults should be chosen
(in that case, nothing is searched); behavior on Windows has
now been updated to match.
* Above logic for handling of `path_hints=[]` has also been extended
to library search (for both Linux and Windows).
* All exceptions for external packages were documented as timeout
errors: this commit adds a distinction for other types of errors
in warning messages to the user.
Credits to @ChristianKniep for advocating the idea of OCI image layers
being identical to spack buildcache tarballs.
With this you can configure an OCI registry as a buildcache:
```console
$ spack mirror add my_registry oci://user/image # Dockerhub
$ spack mirror add my_registry oci://ghcr.io/haampie/spack-test # GHCR
$ spack mirror set --push --oci-username ... --oci-password ... my_registry # set login credentials
```
which should result in this config:
```yaml
mirrors:
  my_registry:
    url: oci://ghcr.io/haampie/spack-test
    push:
      access_pair: [<username>, <password>]
```
It can be used like any other registry
```
spack buildcache push my_registry [specs...]
```
It will upload the Spack tarballs in parallel, as well as manifest + config
files, so that the binaries are compatible with `docker pull` or `skopeo copy`.
In fact, a base image can be added to get a _runnable_ image:
```console
$ spack buildcache push --base-image ubuntu:23.04 my_registry python
Pushed ... as [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
$ docker run --rm -it [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
```
which should really be a game changer for sharing binaries.
Further, all content-addressable blobs that are downloaded and verified
will be cached in Spack's download cache. This should make repeated
`push` commands faster, as well as `push` followed by a separate
`update-index` command.
An end to end example of how to use this in Github Actions is here:
**https://github.com/haampie/spack-oci-buildcache-example**
TODO:
- [x] Generate environment modifications in config so PATH is set up
- [x] Enrich config with Spack's `spec` json (this is allowed in the OCI specification)
- [x] When ^ is done, add logic to create an index in say `<image>:index` by fetching all config files (using OCI distribution discovery API)
- [x] Add logic to use object storage in an OCI registry in `spack install`.
- [x] Make the user pick the base image for generated OCI images.
- [x] Update buildcache install logic to deal with absolute paths in tarballs
- [x] Merge with `spack buildcache` command
- [x] Merge #37441 (included here)
- [x] Merge #39077 (included here)
- [x] #39187 + #39285
- [x] #39341
- [x] Not a blocker: #35737 fixes correctness run env for the generated container images
NOTE:
1. `oci://` is unfortunately taken, so it's being abused in this PR to mean "oci type mirror". `skopeo` uses `docker://` which I'd like to avoid, given that classical docker v1 registries are not supported.
2. this is currently `https`-only, given that basic auth is used to login. I _could_ be convinced to allow http, but I'd prefer not to, given that for a `spack buildcache push` command multiple domains can be involved (auth server, source of base image, destination registry). Right now, no urllib http handler is added, so redirects to https and auth servers with http urls will simply result in a hard failure.
CAVEATS:
1. Signing is not implemented in this PR. `gpg --clearsign` is not the nicest solution, since (a) the spec.json is merged into the image config, which must be valid json, and (b) it would be better to sign the manifest (referencing both config/spec file and tarball) using more conventional image signing tools
2. `spack.binary_distribution.push` is not yet implemented for the OCI buildcache, only `spack buildcache push` is. This is because I'd like to always push images + deps to the registry, so that it's `docker pull`-able, whereas in `spack ci` we really wanna push an individual package without its deps to say `pr-xyz`, while its deps reside in some `develop` buildcache.
3. The `push -j ...` flag only works for OCI buildcache, not for others
* spack checksum pkg@1.2, use as version filter
Currently pkg@1.2 splits on @ and looks for 1.2 specifically; with this
PR, pkg@1.2 is a filter, so any matching version (1.2, 1.2.1, ..., 1.2.10)
is displayed.
* fix tests
* fix style
Update the Tcl modulefile template to simplify generated `append-path`,
`prepend-path` and `remove-path` commands and improve their readability.
If the path element delimiter is the colon character, do not set the
`--delim` option, as colon is the default delimiter value.
Renames exclude_implicits to hide_implicits.
When the hide_implicits option is enabled, generate modulefiles for
implicitly installed software and hide them. Even if implicit, those
modulefiles may be referred to as dependencies in other modulefiles; thus
they should be generated so that modules properly load their dependent modules.
A new hidden property is added to the BaseConfiguration class.
To hide modulefiles, modulercs are generated alongside modulefiles. Such rc
files contain the specific module command to indicate that a module should be
hidden (for instance when using "module avail").
A modulerc property is added to the TclFileLayout and LmodFileLayout classes
to get the fully qualified path name of the modulerc associated with a given
modulefile.
Modulerc files will be located in each module directory, next to the
version modulefiles. This scheme is supported by both module tool
implementations.
modulerc_header and hide_cmd_format attributes are added to
TclModulefileWriter and LmodModulefileWriter. They help to know how to
generate a modulerc file with hide commands for each module tool.
The Tcl modulerc file requires a header. As we use a command introduced in
Modules 4.7 (`module-hide --hidden-loaded`), a version requirement is
added to the header string.
For lmod, modules that open up a hierarchy are never hidden, even if
they are implicitly installed.
The modulerc is created, updated, or removed when the associated modulefile is
written or removed. If an implicit modulefile becomes explicit, the hide
command for this modulefile is removed from its modulerc. If the modulerc
becomes empty, the file is removed. The modulerc file is not rewritten when no
content change is detected.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Previously, we only searched for `patch` inside of whatever Git
installation was available because the most common installation of Git
available on Windows had `patch`. That's not true for all possible
installations of Git though, so this updates the search to also check
PATH.
GitLab's .patch URLs only provide abbreviated hashes, while .diff URLs
provide full hashes. There does not seem to be a parameter to force
.patch URLs to also return full hashes, so we should make sure to use
the .diff ones.
With the introduction of multiple build dependencies from the same package in the DAG, we need to minimize a few weights accounting for edges rather than nodes. If we don't do that, we might have multiple "optimal" solutions that differ only in how the same nodes are connected together. This commit ensures optimal versions are picked per parent in case of multiple choices for a dependency.
Fix the following syntax which validates only the first array entry:
```python
"compilers": {
"type": "array",
"items": [
{
"type": ...
}
]
}
```
to
```python
"compilers": {
"type": "array",
"items": {
"type": ...
}
}
```
which validates the entire array.
Oops...
This adds a `SetupContext` class which is responsible for setting
package.py module globals, and computing the changes to environment
variables for the build, test or run context.
The class uses `effective_deptypes` which takes a list of specs (e.g. single
item of a spec to build, or a list of environment roots) and a context
(build, run, test), and outputs a flat list of specs that affect the
environment together with a flag in what way they do so. This list is
topologically ordered from root to leaf, so that one can be assured that
dependents override variables set by dependencies, not the other way
around.
This is used to replace the logic in `modifications_from_dependencies`,
which has several issues: missing calls to `setup_run_environment`, and
the order in which operations are applied.
Further, it should improve performance a bit in certain cases, since
`effective_deptypes` runs in O(v + e) time, whereas `spack env activate`
currently can take up to O(v^2 + e) time due to loops over roots. Each
edge in the DAG is visited once by calling `effective_deptypes` with
`env.concrete_roots()`.
By marking and propagating flags through the DAG, this commit also fixes
a bug where Spack wouldn't call `setup_run_environment` for runtime
dependencies of link dependencies. And this PR ensures that Spack
correctly sets up the runtime environment of direct build dependencies.
Regarding test dependencies: in a build context they are build-time
test deps, whereas in a test context they are install-time test deps.
Since there are no means to distinguish the build/install type test deps,
they're both.
Further changes:
- all `package.py` module globals are guaranteed to be set before any of the
`setup_(dependent)_(run|build)_env` functions is called
- traversal order during setup: first the group of externals, then the group
  of non-externals, with specs in each group traversed topologically (dependencies
  are set up before dependents)
- modules: only ever call `setup_dependent_run_environment` of *direct* link/run
type deps
- the marker in `set_module_variables_for_package` is dropped, since we should
  call the method once per spec. This allows us to set only a cheap subset of
  globals on the module: for example, it's not necessary to compute the expensive
  `cmake_args` and the like if the spec under consideration is not the root node to be
  built.
- `spack load`'s `--only` is deprecated (it has no effect now), and `spack load x`
now means: do everything that's required for `x` to work at runtime, which
requires runtime deps to be setup -- just like `spack env activate`.
- `spack load` no longer loads build deps (of build deps) ...
- `spack env activate` on partially installed or broken environments: this is all
or nothing now. If some spec errors during setup of its runtime env, you'll only
get the unconditional variables + a warning that says the runtime changes for
specs couldn't be applied.
- Remove traversal in upward direction from `setup_dependent_*` in packages.
  Upward traversal may iterate to specs that aren't children of the roots
  (e.g. zlib / python have hundreds of dependents, and only a small fraction is
  reachable from the roots); packages should only modify the direct dependent
  they receive as an argument.
The ability to select the top N versions got removed in the checksum overhaul,
because initially numbers were used for commands.
Now that we've settled on characters for commands, let's make numbers pick the top
N again.
Improve how mirrors are used in gitlab ci, where until now we have thought
of them as only a string.
By configuring ci mirrors ahead of time using the proposed mirror templates,
and by taking advantage of the expressiveness that spack now has for mirrors,
this PR will allow us to easily switch the protocol/url we use for fetching
binary dependencies.
This change also deprecates some gitlab functionality and marks it for
removal in Spack 0.23:
- arguments to "spack ci generate":
* --buildcache-destination
* --copy-to
- gitlab configuration options:
* enable-artifacts-buildcache
* temporary-storage-url-prefix
Reused specs used to be referenced directly in the built spec.
This might cause issues like issue #39570, where two objects in
memory represent the same node because two reused specs were
loaded from different sources but referred to the same spec
by DAG hash.
The issue is solved by copying concrete specs into a dictionary keyed
by DAG hash.
`spack dev-build` would incorrectly set `keep_stage=True` for the
entire DAG, including for non-dev specs, even though the dev specs
have a DIYStage which never deletes sources.
This patch adds a license directive to get the ball rolling on adding license
information about packages to Spack. I'm primarily interested in just adding
license information into Spack, but this would also help with other efforts that
people are interested in, such as adding license information to the ASP solve for
concretization to make sure licenses are compatible.
Usage:
Specifying the specific license that a package is released under in a project's
`package.py` is good practice. To specify a license, find the SPDX identifier for
a project and then add it using the license directive:
```python
license("<SPDX Identifier HERE>")
```
For example, for Apache 2.0, you might write:
```python
license("Apache-2.0")
```
Note that specifying a license without a when clause makes it apply to all
versions and variants of the package, which might not actually be the case.
For example, a project might have switched licenses at some point or have
certain build configurations that include files that are licensed differently.
To account for this, you can specify when licenses should be applied. For
example, to specify that a specific license identifier should only apply
to versions up to and including 1.5, you could write the following directive:
```python
license("MIT", when="@:1.5")
```
This commit allows version specifiers to refer to git branches that contain
forward slashes. For example, the following is valid syntax now:
pkg@git.releases/1.0
It also adds a new method `Spec.format_path(fmt)` which is like `Spec.format`,
but also maps unsafe characters to `_` after interpolation. The difference is
as follows:
>>> Spec("pkg@git.releases/1.0").format("{name}/{version}")
'pkg/git.releases/1.0'
>>> Spec("pkg@git.releases/1.0").format_path("{name}/{version}")
'pkg/git.releases_1.0'
The `format_path` method is used in all projections. Notice that this method
also maps `=` to `_`
>>> Spec("pkg@git.main=1.0").format_path("{name}/{version}")
'pkg/git.main_1.0'
which should avoid syntax issues when `Spec.prefix` is literally copied into a
Makefile, as sometimes happens in AutotoolsPackage or MakefilePackage.
Currently `spack env activate --with-view` exists, but is a no-op.
So, it is not too much of a breaking change to make this redundant flag
accept a value `spack env activate --with-view <name>` which activates
a particular view by name.
The view name is stored in `SPACK_ENV_VIEW`.
This also fixes an issue where deactivating an environment that was activated
with `--without-view` could remove entries from PATH, since now we
keep track of whether the default view was "enabled" or not.
* spack checksum: improve interactive filtering
* fix signature of executable
* Fix restart when using editor
* Don't show [x version(s) are new] when no known versions (e.g. in spack create <url>)
* Test ^D in test_checksum_interactive_quit_from_ask_each
* formatting
* colorize / skip header on invalid command
* show original total, not modified total
* use colify for command list
* Warn about possible URL changes
* show possible URL change as comment
* make mypy happy
* drop numbers
* [o]pen editor -> [e]dit
Because those end up being passed to `ar`, which does not understand linker
arguments. This was making ldflags largely unusable for statically
linked CMake packages.
* Allow branching out of the "generic build" unification set
For cases like the one in https://github.com/spack/spack/pull/39661
we need to relax rules on unification sets.
The issue is that, right now, nodes in the "generic build" unification
set are unified together with their build dependencies. This was done
out of caution to avoid the risk of circular dependencies, which would
ultimately cause a very slow solve.
For build-tools like Cython, however, the build dependencies are masked
by a long chain of "build, run" dependencies that belong in the
"generic build" unification space.
To allow splitting on cases like this, we relax the rule disallowing
branching out of the "generic build" unification set.
* Fix issue with pure build virtual dependencies

Pure build virtual dependencies were not accounted for properly in the
list of possible virtuals. This caused some facts connecting virtuals
to the corresponding providers not to be emitted, and in the end
led to unsat problems.
* Fixed a few issues in packages
py-gevent: restore dependency on py-cython@3
jsoncpp: fix typo in build dependency
ecp-data-vis-sdk: update spack.yaml and cmake recipe
py-statsmodels: add v0.13.5
* Make dependency on "blt" of type "build"
We run pip with `--no-build-isolation` because we don't want to let pip
install build deps.
As a consequence, when pip runs hooks, it runs hooks of *any* package it
can find in `sys.path`.
For Spack-built Python this includes user site packages -- there
shouldn't be any system site packages. So in this case it suffices to
set the environment variable PYTHONNOUSERSITE=1.
For external Python, more needs to be done, because there is no env
variable that disables both system and user site packages; setting the
`python -S` flag doesn't work, because pip runs subprocesses that don't
inherit this flag (and there is no API to know if -S was passed).
So, for external Python, an empty venv created before invoking pip in
Spack's build env ensures that pip can no longer see anything but
standard libraries and `PYTHONPATH`.
The downside of this is that pip will generate shebangs that point to
the python executable from the venv. So, for external python an extra
step is necessary where we fix up shebangs post install.
Two changes in this PR:
1. Register absolute paths in tarballs, which makes it easier
to use them as container image layers, or rootfs in general, outside
of Spack. Spack supports this already on develop.
2. Assemble the tarfile entries "by hand", which has a few advantages:
   1. Avoid reading `/etc/passwd`, `/etc/groups`, `/etc/nsswitch.conf`,
      which `tar.add(dir)` does _for each file it adds_.
   2. Reduce the number of stat calls per file added by a factor of two,
      compared to `tar.add`, which should help with slow, shared filesystems
      where these calls are expensive.
   3. Create normalized `TarInfo` entries from the start, instead of letting
      Python create them and patching them after the fact.
   4. Don't recurse into subdirs before processing files, to avoid
      keeping nested directories opened. (This changes the tar entry
      order slightly; it's like sorting by `(not is_dir, name)`.)
For a long time, the docs have generated a huge, static HTML package list. It has some
disadvantages:
* It's slow to load
* It's slow to build
* It's hard to search
We now have a nice website that can tell us about Spack packages, and it's searchable so
users can easily find the one or two packages out of 7400 that they're looking for. We
should link to this instead of including a static package list page in the docs.
- [x] Replace package list link with link to packages.spack.io
- [x] Remove `package_list.html` generation from `conf.py`.
- [x] Add a new section for "Links" to the docs.
- [x] Remove docstring notes from contribution guide (we haven't generated RST
for package docstrings for a while)
- [x] Remove references to `package-list` from docs.
Currently, Windows SDK detection will only pick up SDK versions
related to the current version of Windows Spack is running on.
However, in some circumstances, we want to detect other versions
of the SDK, for example, for compiling on Windows 11 for Windows
10 to ensure an API is compatible with Win10.
* Make use of `prefix` in the Cray manifest schema (prepend it to
the relative CC etc.) - this was a Spack error.
* Warn people when wrong-looking compilers are found in the manifest
(i.e. non-existent CC path).
* Bypass compilers that we fail to add (don't allow a single bad
compiler to terminate the entire read-cray-manifest action).
* Refactor Cray manifest tests: module-level variables have been
replaced with fixtures, specifically using the `test_platform`
fixture, which allows the unit tests to run with the new
concretizer.
* Add unit test to check case where adding a compiler raises an
exception (check that this doesn't prevent processing the
rest of the manifest).
If you `spack install x ^y` where `y` is a pure build dep of `x`, and
then uninstall `y`, and then `spack install --overwrite x ^y`, the build
fails because `y` is not re-installed.
The same can happen when you install a develop spec, run `spack gc`,
modify sources, and install again; develop specs rely on overwrite
installs to work correctly.
This PR adds a new audit sub-command to check that detection of relevant packages
is performed correctly in a few scenarios mocking real use-cases. The data for each
package being tested is in a YAML file called detection_test.yaml alongside the
corresponding package.py file.
This is to allow encoding detection tests for compilers and other widely used tools,
in preparation for compilers as dependencies.
Modifications:
- [x] Move `spack.util.string` to `llnl.string`
- [x] Remove dependency of `llnl` on `spack.error`
- [x] Move path of `spack.util.path` to `llnl.path`
- [x] Move `spack.util.environment.get_host_*` to `spack.spec`
* msvc.py: don't import distutils
Introduced in #27021, makes Spack forward incompatible with Python.
The module was already deprecated at the time of the PR.
* update spack package
Fixes #39622
Add a timeout to compiler detection and allow Spack to proceed when
this timeout occurs.
In all cases, the timeout is 120 seconds: it is assumed that any compiler
invocation we do for the purposes of verification would complete in
that amount of time.
Also refine executables that are tested as being possible MSVC
instances, and limit where we try to detect MSVC. In more detail:
* Compiler detection should timeout after a certain period of time.
Because compiler detection executes arbitrary executables on the
system, we could encounter a program that just hangs, or even a
compiler that hangs on a license key or similar. A timeout
prevents this from hanging Spack.
* Prevents things like `cl-.*` from being detected as potential MSVC
  installs. cl is always just cl in all cases that Spack supports.
  Change the MSVC class to indicate this.
* Prevent compilers unsupported on certain platforms from being
detected there (i.e. don't look for MSVC on systems other than
Windows).
The first point alone is sufficient to address #39622, but the
next two reduce the likelihood of timeouts (which is useful since
those slow down the user even if they are survivable).
Put back normalization of the "virtuals" input as a sorted tuple.
Without this we might get edges that differ only in the order of
virtuals, or that have lists, which are not hashable.
Add unit-tests to prevent regressions.
By default, do not let deprecated versions enter the solve.
Previously you could concretize to something deprecated, only to get errors on install.
With this commit, we get errors on concretization, so the issue is caught earlier.
PythonExtension is a base class for PythonPackage, and
is meant to be used for any package that is a Python
extension but is not built using "python_pip".
The "update_external_dependency" method in the base
class calls another method that is defined in the derived
class.
Push "get_external_python_for_prefix" up in the hierarchy
to make method calls consistent.
This commit replaces the internal representation of deptypes with `int`, which is more compact
and faster to operate with.
Double loops like:
```python
any(x in ys for x in xs)
```
are replaced by the constant-time operation `bool(xs & ys)`, where xs and ys are dependency types.
Global constants are exposed for convenience in `spack.deptypes`
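A toy sketch of the idea (the constants here are hypothetical; the real ones live in `spack.deptypes`):
```python
# Each deptype is one bit; a combination of deptypes is a single int.
BUILD, LINK, RUN, TEST = 1, 2, 4, 8

xs = BUILD | RUN  # e.g. the deptypes of an edge
ys = RUN | TEST   # e.g. the deptypes we are querying for

# instead of any(x in ys for x in xs) on tuples of strings:
overlap = bool(xs & ys)  # True, because RUN is set in both
```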
Currently, the concretizer emits facts for all versions known to Spack, including deprecated versions, and has a specific optimization objective to minimize their use.
This commit simplifies how deprecated versions are handled by considering possible versions for a spec only if they appear in a spec literal, or if `config:deprecated:true` is set directly or through the `--deprecated` flag. The optimization objective has also been removed, in favor of just ordering versions and putting deprecated ones last.
This results in:
a) no delayed errors on install, but concretization errors when deprecated versions would be the only option. This is particularly relevant for CI, where it's better to get errors early;
b) a slight concretization speed-up due to fewer facts;
c) a simplification of the logic program.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
NMake makefiles are still called makefiles. The corresponding builder
variable was called "nmakefile", which is a bit unintuitive and led
to a few easy-to-make, hard-to-notice mistakes when creating packages.
This commit renames the builder property to "makefile".
Extensionless archives requiring two-stage decompression and extraction
require intermediate archives to be renamed after decompression/extraction
to prevent collisions. Prior behavior attempted to clean up the intermediate
archive under its original name; this PR ensures the renamed folder is
cleaned up instead.
Co-authored-by: Dan Lipsa <dan.lipsa@khq.kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
* Perform external spec detection with multiple workers
The logic to perform external spec detection has been refactored
into classes. These classes use the GoF "template" pattern to account
for the small differences between searching for "executables" and
for "libraries", while unifying the larger part of the algorithm.
A ProcessPoolExecutor is used to parallelize the work.
* Speed-up external find by tagging detectable packages automatically
Querying packages by tag is much faster than inspecting the repository,
since tags are cached. This commit adds a "detectable" tag to every
package that implements the detection protocol, and external detection
uses it to search for packages.
* Pass package names instead of package classes to workers
The slowest part of the search is importing the Python modules
associated with candidate packages. The import is done serially
before we distribute the work to the pool of executors.
This commit pushes the import of the Python module to the job
performed by the workers, and passes just the name of the packages
to the executors.
In this way imports can be done in parallel.
* Rework unit-tests for Windows

Some unit tests were doing a full e2e run of a command
just to check input handling. Make the tests more
focused by just stressing a specific function.
Mark 2 tests as xfailed on Windows; they will be fixed
by a PR in the queue. The tests are failing because we
monkeypatch internals in the parent process, but the
monkeypatching is not done in the "spawned" child
process.
* Write timing information for installs from cache
* CI: aggregate and upload install_times.json to artifacts
* CI: Don't change root directory for artifact generation
* Flat event based timer variation
Event based timer allows for easily starting and stopping timers without
wiping sub-timer data. It also requires less branching logic when
tracking time.
The json output is non-hierarchical in this version and hierarchy is
less rigidly enforced between starting and stopping.
* Add and write timers for top level install
* Update completion
* remove unused subtimer api
* Fix unit tests
* Suppress timing summary option
* Save timers summaries to user_data artifacts
* Remove completion from fish
* Move spack python to script section
* Write timer correctly for non-cache installs
* Re-add hash to timer file
* Fish completion updates
* Fix null timer yield value
* fix type hints
* Remove timer-summary-file option
* Add "." in front of non-package timer name
---------
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
This is a fixed version of b72a268
* That commit would discard the final key component (so if you set
"config:install_tree:root", it would discard "root" and just set
install tree).
* When setting key:"value", with the quotes, that commit would
discard the quotes, which would confuse the system if adding a
value like "{example}" (the "{" character indicates a dictionary).
This commit retains the quotes.
These commands are currently broken in powershell (Windows) due to
improper use of the InvokeCommand cmdlet and a lack of direct
support for the `--pwsh` argument in `spack load`, `spack unload`,
and `spack env deactivate`.
If you wanted to set a configuration option like
`config:install_tree:root` to "C:/path/to/config.yaml", Spack had
trouble parsing this because of the ":" in the value. This adds
logic to allow using quotes to enclose the value, so you can add
`config:install_tree:root:"C:/path/to/config.yaml"`.
Configuration keys should never contain a quote character, so the
presence of any quote is taken to mean that the rest of the string
is specifying the value.
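A hypothetical invocation using exactly the quoting described above:
```console
$ spack config add 'config:install_tree:root:"C:/path/to/config.yaml"'
```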
Setting the undocumented variable SPACK_CONCRETIZER_REQUIRE_CHECKSUM
now causes the solver to avoid accounting for versions that are not checksummed.
This feature is used in CI to avoid spurious concretization against e.g. develop branches.
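For example, a CI job might run something like this (hypothetical invocation, inside an environment):
```console
$ SPACK_CONCRETIZER_REQUIRE_CHECKSUM=1 spack concretize
```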
Currently, OneAPI's setvars scripts effectively disregard any arguments
we're passing to the MSVC vcvars env setup script, and additionally,
completely ignore the requested version of OneAPI, defaulting to whatever
version is the latest installed on the system.
This leads to a scenario where we have improperly constructed Windows
native development environments, with potentially multiple versions of
MSVC and OneAPI being loaded or called in the same env. Obviously this is
far from ideal and leads to some fairly inscrutable errors, such as
overlapping header files between MSVC and OneAPI, or a different version
of OneAPI being called than the env was set up for.
This PR solves this issue by creating a structured invocation of each
relevant script in an order that ensures the correct values are set in
the resultant build env.
The order needs to be:
1. MSVC vcvarsall
2. The compiler-specific env.bat script for the relevant version of
the oneapi compiler we're looking for. The root setvars script seems
to respect this as well, although it is less explicit
3. The root oneapi setvars script, which sets up everything else the
oneapi env needs and seems to respect previous env invocations.
Bash completion is now smarter about handling aliases. In particular, if all completions
for some input command are aliased to the same thing, we'll just complete with that thing.
If you've already *typed* the full alias for a command, we'll complete the alias.
So, for example, here there's more than one real command involved, so all aliases are
shown:
```console
$ spack con
concretise concretize config containerise containerize
```
Here, there are two possibilities: `concretise` and `concretize`, but both map to
`concretize` so we just complete that:
```console
$ spack conc
concretize
```
And here, the user has already typed `concretis`, so we just go with it as there is only
one option:
```console
$ spack concretis
concretise
```
From a user:
> Aargh.
> ```
> ==> Error: concretise is not a recognized Spack command or extension command; check with `spack commands`.
> ```
To make things easier for our friends in the UK, this adds `concretise` and
`containerise` aliases for the `spack concretize` and `spack containerize` commands.
- [x] add aliases
- [x] update completions
This reapplies 66f7540, which adds supports for hardlinks/junctions on
Windows systems where developer mode is not enabled.
The commit was reverted on account of multiple issues:
* Checks added to prevent dangling symlinks were interfering with
existing CI builds on Linux (i.e. builds that otherwise succeed were
failing for creating dangling symlinks).
* The logic also updated symlinking to perform redirection of relative
  paths, which led to malformed symlinks.
This commit fixes these issues.
#35042 introduced lazy hash parsing, but didn't remove a
few attributes from the parser that were needed only for
concrete specs.
This commit removes them, since they are effectively
dead code.
The heuristic for duplicate nodes contains a few typos, and
apparently slows down the solve for specs that have a lot of
sub-optimal choices to be taken.
This is likely because, with a lot of sub-optimal choices, the
low-priority, flawed heuristic is being used by clingo.
Here I split the heuristic, so complex rules that matter only
if we allow multiple nodes from the same package are used
only in that case.
Since #34821 we are annotating virtual dependencies on
DAG edges, and reconstructing virtuals in memory when
we read a concrete spec from previous formats.
Therefore, we can remove a TODO in asp.py, and rely on
"virtual_on_edge" facts to be imposed.
Computing str(spec) is faster than computing hash(spec), and
since all the abstract specs we deal with come from user configuration,
they cannot cover DAG structures that are not captured by str() but
are captured by hash().
Delay lookup for abstract hashes until concretization time, instead of
until Spec comparison. This has a few advantages:
1. `satisfies` / `intersects` etc don't always know where to resolve the
abstract hash (in some cases it's wrong to look in the current env,
db, buildcache, ...). Better to let the call site dictate it.
2. Allows search by abstract hash without triggering a database lookup,
   which could cause quadratic complexity issues (accidental nested loop
   during search)
3. Simplifies queries against the buildcache, they can now use Spec
instances instead of strings.
The rules are straightforward (sketched below):
1. a satisfies b when b's hash is a prefix of a's hash
2. a intersects b when either a's or b's hash is a prefix of b's or a's
   hash, respectively
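A minimal sketch of those two rules in isolation (the helper names are hypothetical; they operate on hash strings only):
```python
def hash_satisfies(a_hash: str, b_hash: str) -> bool:
    # a satisfies b when b's hash is a prefix of a's hash
    return a_hash.startswith(b_hash)

def hash_intersects(a_hash: str, b_hash: str) -> bool:
    # a intersects b when either hash is a prefix of the other
    return a_hash.startswith(b_hash) or b_hash.startswith(a_hash)
```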
The median length of this list is 1. For reasons I don't know, `.sort()`
still likes to call the key function.
This saves ~9% of total database read time, and the number of calls
goes from 5305 -> 1715.
* Do not impose provider conditions if the node is not a provider

fixes #39455
When a node can be a provider of a spec, but is not selected as
a provider, we should not be imposing provider conditions on the
virtual.
* Adjust the integrity constraint, by using the correct atom
* Add "only_clingo", "only_original" and "not_on_windows" markers
* Modify tests to use the "not_on_windows" marker
* Mark tests that run only with clingo
* Mark tests that run only with the original concretizer
To avoid paying the cost of setup and of a full grounding again,
move cycle detection into a separate program and check first if
the solution has cycles.
If it has, ground only the integrity constraint preventing cycles
and solve again.
The "concretizer" section has been extended with a "duplicates:strategy"
attribute, that can take three values:
- "none": only 1 node per package
- "minimal": allow multiple nodes opf specific packages
- "full": allow full duplication for a build tool
This refactor introduces extra indices for triggers and
effect of a condition, so that the corresponding clauses
are evaluated once for every condition they apply to.
All the solution modes we use imply that we have to solve for all
the literals, except for "when possible".
Here we remove a minimization on the number of literals not
solved, and directly emit a fact when a literal *has* to be
solved.
Introduce the concept of "condition sets", i.e. the set of packages on which
a package can require / impose conditions. This currently maps to the link/run
sub-dag of each package + its direct build dependencies.
Parametrize the "condition" and "requirement" logic to multiple nodes.
So far the encoding has a single ID per package, i.e. all the
facts will be node(0, Package). This will prepare the stage for
extending this logic and having multiple nodes from the same
package in a DAG.
Each fact that is deduced from package rules, and starts with
a bare package atom, is transformed into a "facts" atom containing
a nested function.
For instance, we transformed
version_declared(Package, ...) -> facts(Package, version_declared(...))
This allows us to clearly mark facts that represent a rule on the package,
and it will help later when we'll have to distinguish the cases where
the atom "Package" is used to refer to package rules and not to a
node in the DAG.
Windows executable paths can have spaces in them, which was leading to
errors when constructing Executable objects: the parser was intended
to handle cases where users could provide an executable along with one
or more space-delimited arguments.
* Executable now assumes that it is constructed with a string argument
  that represents the path to the executable, but no additional arguments
  (see the usage sketch at the end of this message).
* Invocations of Executable.__init__ that depended on this have been
  updated (this includes the core, tests, and one instance of a builtin
  repository package).
* The error handling for failed invocations of Executable.__call__ now
  includes a check for whether the executable name/path contains a
  space, to help users debug cases where they (now incorrectly)
  concatenate the path and the arguments.
* The module-level skip for tests in `cmd.install` on Windows is removed.
A few classes of errors still persist:
* Cdash tests are not working on Windows
* Tests for failed installs are also not working (this will require
investigating bugs in output redirection)
* Environments are not yet supported on Windows
Overall though, this enables testing of most basic uses of "spack install".
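A usage sketch under the new contract (the path is hypothetical; assumes running inside Spack):
```python
from spack.util.executable import Executable

# The constructor takes only the path, even when it contains spaces...
cmake = Executable("C:/Program Files/CMake/bin/cmake.exe")

# ...and arguments are supplied at call time instead.
cmake("--version")
```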
* Git repositories cached for version lookups were using a layout that
mimicked the URL as much as possible. This was useful for listing the
cache directory and understanding what was present at a glance, but
the paths were overly long on Windows. On all systems, the layout is
now a single directory based on a hash of the Git URL and is shortened
(which ensures a consistent and acceptable length, and also avoids
special characters).
* In particular, this removes util.url.parse_git_url and its associated
test, which were used exclusively for generating the git cache layout
* Bootstrapping is now enabled for unit tests on Windows
#36770 added git as a dependency to `setuptools-scm`. This in turn makes `git` a
transitive dependency for our bootstrapping process.
Since `git` may take a long time to build, and is found on most systems, try to
detect it as an external.
This makes the name of the global variable representing
the repository currently in use uppercase. Doing so is advised
by pylint rules, and helps to identify where the global is used.
* Prefix conflict messages with package name
This patch prefixes all conflict messages with the package name to
alleviate what was otherwise a very manual process. Note that this patch
is a one-line change but has a fairly outsized impact.
* same for requires directive
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
* Ensure that all variants have a description
* Update mock packages too
* Fix test invocations
* Black fix
* mgard: update variant descriptions
* flake8 fix
* black fix
* Add to audit tests
* Relax type hints
* Older Python support
* Undo all changes to mock packages
* Flake8 fix
This PR extracts two responsibilities from the `Database` class:
1. Managing locks for prefixes during an installation
2. Marking installation failures
and pushes them into their own classes (`SpecLocker` and `FailureMarker`). These responsibilities are also pushed up into the `Store`, leaving `Database` with only the duty of managing `index.json` files.
`SpecLocker` objects no longer share a global list of locks; locks are per instance. Their identifier is simply `(dag hash, package name)`, not the spec prefix path, to avoid circular dependencies across Store / Database / Spec.
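A minimal sketch of the per-instance locking idea, with `threading.Lock` standing in for Spack's file locks:
```python
import threading

# hedged sketch: locks live on the instance, keyed by (dag hash, package name)
class SpecLocker:
    def __init__(self) -> None:
        self._locks: dict = {}

    def lock(self, spec) -> threading.Lock:
        # keying on hash and name (not the prefix path) avoids a
        # circular dependency across Store / Database / Spec
        key = (spec.dag_hash(), spec.name)
        return self._locks.setdefault(key, threading.Lock())
```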
FastPackageChecker.modified_since should use a default number < 0.
When the repo cache does not exist, Spack uses mtime 0. This causes the repo
cache not to be generated when the repo itself has mtime 0.
Some popular package managers, Spack included, normalize mtimes to 0 for
reproducible tarballs. So when installing Spack with Spack from a buildcache,
the repo cache doesn't get generated.
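A minimal illustration of the sentinel idea (not Spack's exact signature):
```python
import os

# hedged sketch: a default strictly below any valid mtime, so a repo whose
# files have mtime 0 (mtime-normalized tarballs) still reads as modified
def modified_since(path: str, since: float = -1.0) -> bool:
    return os.stat(path).st_mtime > since
```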
Also add some type hints.
* Inform mypy that tty.die is noreturn
* avoid temporary allocation in env
* update spack buildcache save-specfile
* fix spack buildcache check/download/get-buildcache-name
- ensure that required args and mutually exclusive ones are marked as
such in argparse for better error messages
- deprecate --spec-file everywhere
- use disambiguate for better error messages
* CI: Refactor ci reproducer
* Autostart container
* Reproducer paths match CI paths
* Generate start scripts for docker and reproducer
* CI: Add interactive and gpg options to reproduce-build
* Interactive will determine if the docker container persists
after running reproduction.
* GPG path/url allow downloading GPG keys needed for binary
cache download validation. This is important for running
reproducer for protected CI jobs.
* Add exit_on_failure option to CI scripts
* CI: Add runtime option for reproducer
- Run `mkdirp` on `spec.prefix`
- Extract directly into `spec.prefix`
1. No need for `$store/tmp.xxx` where we extract the tarball directly, pray that it has one subdir `<name>-<version>-<hash>`, and then `rm -rf` the package prefix followed by `mv`.
2. No need to clean up this temp dir in `spack clean`.
3. Instead figure out package directory prefix from the tarball contents, and strip the tarinfo entries accordingly (kinda like tar --strip-components but more strict)
- Set package dir permissions
- Don't error during error handling when files cannot be removed
- No need to "enrich" spec.json with this tarball-toplevel-path
After this PR, we can in fact tarball packages relative to `/` instead of `spec.prefix/..`, which makes it possible to use Spack tarballs as container layers, where relocation is impossible, and rootfs tarballs are expected.
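A rough sketch of the stricter stripping described in (3) (a hypothetical helper, not Spack's actual code):
```python
import tarfile

# hedged sketch: extract into the prefix while stripping a known top-level
# directory, erroring out (rather than guessing) on entries outside of it
def extract_into_prefix(tar: tarfile.TarFile, prefix: str, top: str) -> None:
    for info in tar.getmembers():
        if info.name == top:
            continue
        if not info.name.startswith(top + "/"):
            raise ValueError(f"unexpected entry outside {top}: {info.name}")
        info.name = info.name[len(top) + 1 :]
        tar.extract(info, path=prefix)
```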
`spack buildcache list` did not have a way to display the namespace of
packages in the buildcache. This PR adds that functionality.
For consistency's sake, it moves the `-N/--namespace` arg definition to
the `common/arguments.py` and modifies `find`, `solve`, `spec` to use
the common definition.
Previously, `find` was using `--namespace` (singular) to control whether
to display the namespace (it doesn't restrict the search to that
namespace). The other commands were using `--namespaces` (plural). For
backwards compatibility and for consistency with `--deps`, `--tags`,
etc, the plural `--namespaces` was chosen. The argument parser ensures
that `find --namespace` will continue to behave as before.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
* Add rewrite of spack checksum to include --verify and improve adding versions to package.py files
* Fix formatting and remove unused import
* Update checksum unit-tests to correctly test multiple versions and add to package
* Remove references to latest in stage.py
* Update bash-completion scripts to fix unit tests failures
* Fix docs generation
* Remove unused url_dict argument from methods
* Reduce chance of redundant remote_versions work
* Add print() before tty.die() to increase error readability
* Update version regular expression to allow for multi-line versions
* Add a few unit tests to improve test coverage
* Update command completion
* Add type hints to added functions and fix a few py-lint suggestions
* Add @no_type_check to prevent mypy from failing on pkg.versions
* Add type hints to format.py and fix unit test
* Black format lib/spack/spack/package_base.py
* Attempt ignoring type errors
* Add optional dict type hint and declare versions in PackageBase
* Refactor util/format.py to allow for url_dict as an optional parameter
* Directly reference PackageBase class instead of using TypeVar
* Fix comment typo
---------
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
The sanitization function is completely bogus as it tries to replace /
on unix after ... splitting on it. The way it's implemented is very
questionable: the input is a file name, not a path. It doesn't make
sense to interpret the input as a path and then make the components
valid -- you'll interpret / in a filename as a dir separator.
It also fails to deal with path components that consist solely of unsupported
characters (resulting in an empty component).
The correct way to deal with this is to have a function that takes a
potential file name and replaces unsupported characters.
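Something along these lines (a hedged sketch, not the actual fix):
```python
import re

# hedged sketch: sanitize a file *name*, never interpreting it as a path
def valid_file_name(name: str) -> str:
    cleaned = re.sub(r'[\\/:*?"<>|]', "_", name)
    return cleaned or "_"  # never return an empty name
```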
I'm not going to fix the other issues on Windows, such as reserved file
names, but left a note, and hope that @johnwparent can fix that
separately.
(Obviously we wouldn't have this problem at all if we just fixed the
filename in a safe way instead of trying to derive something from
the url; we could use the content digest when available for example)
* commands: provide more information to Command
* fish: Add script to generate fish completion
* fish: auto prepend `spack` command to avoid duplication
* fish: improve completion generation code readability
* commands: replace match-case with if-else
* fish: fix optspec variable name prefix
* fish: fix return value in get_optspecs
* fish: fix return value in get_optspecs
* format: split long line and trim trailing space
* bugfix: replace f-string with interpolation
* fish: complete more specs and some fixes
* fish: complete hash spec starts with /
* fish: improve compatibility
* style: trim trailing whitespace
* commands: add fish to update args and update tests
* commands: add fish completion file
* style: merge imports
* fish: source completion in setup-env
* fish: caret only completes dependencies
* fish: make sure we always get same order of output
* fish: spack activate
only show installed packages that have extensions
* fish: update completion file
* fish: make dict keys sorted
* Blacken code
* Fix bad merge
* Undo style changes to setup-env.fish
* Fix unit tests
* Style fix
* Compatible with fish_indent
* Use list for stability of order
* Sort one more place
* Sort more things
* Sorting unneeded
* Unsort
* Print difference
* Style fix
* Help messages need quotes
* Arguments to -a must be quoted
* Update types
* Update types
* Update types
* Add type hints
* Change order of positionals
* Always expand help
* Remove shared base class
* Fix type hints
* Remove platform-specific choices
* First line of help only
* Remove unused maps
* Remove suppress
* Remove debugging comments
* Better quoting
* Fish completions have no double dash
* Remove test for deleted class
* Fix grammar in header file
* Use single quotes in most places
* Better support for remainder nargs
* No magic strings
* * and + can also complete multiple
* lower case, no period
---------
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The user store is lazily evaluated. The change
in #38975 made it such that the first evaluation
was happening in the middle of swapping to user
configuration.
Ensure we construct the user store before that.
Use curly braces instead of quotes to enclose value or text in Tcl
modulefiles. Within curly braces, Tcl special characters like [, ] or $
are treated verbatim, whereas they are evaluated within quotes.
Curly braces are Tcl's recommended way to enclose verbatim content [1].
Note: if curly brace characters are used within content, they must be
balanced. This point has been checked against the current repository and no
unbalanced curly braces have been spotted.
Fixes #24243
[1] https://wiki.tcl-lang.org/page/Tcl+Minimal+Escaping+Style
* Fetching patches wouldn't result in acquiring a stage lock during install
* The installer would acquire a stage lock *after* fetching instead of
before, leading to races
* The name of the stage for patches was random, so on build failure
(where stage dirs are not removed), these directories would continue
to exist after a second successful install.
* There was this redundant "composite fetch" object -- there's already
a composite stage. Remove this.
* For some reason we do *double* shasum validation of patches, before
and after compression -- that's just too much? I removed it.
Spack heuristically adds `<install prefix>/lib` and `<install prefix>/lib64` as rpath entries, as it doesn't know what the install dir is going to be ahead of the build. This PR cleans up non-existing, absolute paths[^1], which
1. avoids redundant stat calls at runtime
2. drops redundant rpaths in `patchelf`, making it relocatable -- you don't need patchelf recursively then.
[^1]: It also removes relative paths not starting with `$` (so, `$ORIGIN/../lib` is retained -- we _could_ interpolate `$ORIGIN`, but that's hard to get right when symlinks have to be taken into account). Relative paths _are_ supported in glibc, but are relative to _the current working directory_, which is madness, and it would be better to drop those paths.
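A minimal sketch of the filtering rule (hypothetical helper, not the actual implementation):
```python
import os

# hedged sketch of which rpath entries survive the cleanup
def keep_rpath(entry: str) -> bool:
    if entry.startswith("$"):      # keep $ORIGIN/... entries untouched
        return True
    if not os.path.isabs(entry):   # drop other relative entries (cwd-relative in glibc)
        return False
    return os.path.exists(entry)   # drop dangling absolute paths
```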
A LazyReference object is a reference to an attribute of a
lazily evaluated singleton. Its only purpose is to let developers
use shorter names to refer to such an attribute.
This class does more harm than good, as it obfuscates the fact
that we are using the attribute of a global object. Also, it can easily
go out of sync with the singleton it refers to if, for instance, the
singleton is updated but the references are not.
This commit removes the LazyReference class entirely, and accesses
the attributes explicitly, going through the global object to which
they are attached.
These tests now work without any changes to core. Furthermore, it is
surprising that they had to be disabled (at least, as long as the
installer.py tests are run on Windows: these tests are more basic
and their functionality would have been exercised automatically).
Without --allow-root, Spack cannot push binaries that contain paths embedded
in them. This flag is almost always needed, so there is no point in
requiring users to spell it out.
Even without --allow-root, rpaths would still have to be patched, so the
flag is not there to guarantee binaries are not modified on install.
This commit makes --allow-root the default, and drops the code
required for it. It also deprecates `spack buildcache preview`, since
the command made sense only with --allow-root.
As a side effect, Spack no longer depends on binutils for relocation
Add support for conflict directives in Lua modulefiles, as done for Tcl
modulefiles.
Note that conflicts are correctly honored on Lmod and Environment
Modules <4.2 only if mutually expressed on both modulefiles that
conflict with each other.
Migrate conflict code from Tcl-specific classes to the common part. Add
tests for Lmod and split the conflict test case in two.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* When using system tools to unpack a .gz file, the input file needs a
different name than the output file. Normally, we generate this new
name by stripping the .gz extension off of the file name.
This was not sufficient if the file name did not have an extension,
so we temporarily rename the file in that case.
* When using system tar utility to untar on Windows, we were (erroneously)
skipping the actual untar step if the filename was lacking a .tar
extension
* For foo.txz, we were not changing the extension of the decompressed file
(i.e. we would decompress foo.txz to foo.txz). This did not cause any
problems, but is confusing, so has been updated such that the output
filename reflects its decompressed state (i.e. foo.tar).
* Added test for strip_compression_extension
* Update test_native_unpacking to test each archive type with and without
an extension as part of the file name (i.e. we test "foo.tar.gz", but
also make sure we decompress properly if it is named "foo").
Running `spack test run <python package>` resulted in the error
```
'str' object is not callable
```
because the python executable was not set correctly.
Lock objects can now be instantiated independently,
without being tied to the global configuration. The
same is true for database and store objects.
The database __init__ method has been simplified to
take a single lock configuration object. Some common
lock configurations (e.g. NO_LOCK or NO_TIMEOUT) have
been named and are provided as globals.
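A hedged sketch of what such a named configuration could look like (the field names are illustrative, not Spack's exact ones):
```python
from typing import NamedTuple, Optional

# hedged sketch of a lock configuration object with named global presets
class LockConfiguration(NamedTuple):
    enable: bool
    database_timeout: Optional[int] = None
    package_timeout: Optional[int] = None

NO_LOCK = LockConfiguration(enable=False)
NO_TIMEOUT = LockConfiguration(enable=True)  # wait on locks indefinitely
```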
The use_store context manager keeps the configuration
consistent by pushing and popping an internal scope.
It can also be tuned by passing extra data to set up
e.g. upstreams or anything else that might be related
to the store.
* Bugfix: spack.yaml concretizer:unify needs to be read and used
* Optional: add environment test to ensure configuration scheme is used
* Activate environment in unit tests
A more proper solution would be to keep
an environment instance configuration as
an attribute, but that is a bigger refactor
* Delay evaluation of Environment.unify
* Slightly simplify unit tests
---------
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Allow the following formats:
```yaml
mirrors:
name: <url>
```
```yaml
mirrors:
name:
url: s3://xyz
access_pair: [x, y]
```
```yaml
mirrors:
name:
fetch: http://xyz
push:
url: s3://xyz
access_pair: [x, y]
```
And reserve two new properties to indicate the mirror type (e.g.
mirror.spack.io is a source mirror, not a binary cache)
```yaml
mirrors:
spack-public:
source: true
binary: false
url: https://mirror.spack.io
```
A few packages have version directives evaluated
within if statements, conditional on the value of
`platform.platform()`.
Sometimes there are no cases for e.g. platform=darwin and that
causes a lot of spurious failures with version existence
audits.
This PR allows expressing conditions to skip version
existence checks in audits and avoid these spurious reports.
### Rationale
While working on #29549, I noticed a lot of inconsistencies in our argparse help messages. This is important for fish where these help messages end up as descriptions in the tab completion menu. See https://github.com/spack/spack/pull/29549#issuecomment-1627596477 for some examples of longer or more stylized help messages.
### Implementation
This PR makes the following changes:
- [x] help messages start with a lowercase letter
- [x] help messages do not end with a period
- [x] the first line of a help message is short and simple; longer text is
separated by an empty line
- [x] "help messages do not use triple quotes"
"""(except docstrings)"""
- [x] Parentheses not needed for string concatenation inside function call
- [x] Remove "..." "..." string concatenation leftover from black reformatting
- [x] Remove Sphinx argument docs from help messages
The first 2 choices aren't very controversial, and are designed to match the syntax of the `--help` flag automatically added by argparse. The 3rd choice is more up for debate, and is designed to match our package/module docstrings. The 4th choice is designed to avoid excessive newline characters and indentation. We may actually want to go even further and disallow docstrings altogether.
### Alternatives
Choice 3 in particular has a lot of alternatives. My goal is solely to ensure that fish tab completion looks reasonable. Alternatives include:
1. Get rid of long help messages, only allow short simple messages
2. Move longer help messages to epilog
3. Separate by 2 newline characters instead of 1
4. Separate by period instead of newline. First sentence goes into tab completion description
The number of commands with long help text is actually rather small, and is mostly relegated to `spack ci` and `spack buildcache`. So 1 isn't actually as ridiculous as it sounds.
Let me know if there are any other standardizations or alternatives you would like to suggest.
Refactor `TermTitle` into `InstallStatus` and use it to show progress
information both in the terminal title as well as inline. This also
turns on the terminal title status by default.
The inline output will look like the following after this change:
```
==> Installing m4-1.4.19-w2fxrpuz64zdq63woprqfxxzc3tzu7p3 [4/4]
```
People frequently ask us how to pipe `spack find` output to other commands, and we tell
them to do things like this:
```console
$ spack find --format "/{hash}" | spack uninstall -ay
```
Sometimes users don't know about hash references and come up with potentially ambiguous
formulations like this:
```console
$ spack find --format {name}@{version}%{compiler} | spack uninstall -ay
```
Since this is a common enough thing to want to do, and to make it more obvious how, this
PR adds a `-H` / `--hashes` as a shortcut, so you can now just do:
```console
spack find -H | spack uninstall -ay
```
`"%s" % spec` formats the spec with deps included, which sometimes produces KBs
of data and is slow to run in pure Python. It can delay otherwise very short-lived
read/write locks on the database.
Discovered in #38762 where profile output showed about 2 seconds is spent in
`spec.format`, which is significant overhead when using multiprocessing to install
from binary cache in parallel (installation often takes <5s for small packages). With
this change, `spec.format` no longer shows up in profile output.
(This line hasn't changed since Spack v0.9 ;p)
* move format() call to custom NoSuchSpecError exception
* add a comment saying why, so we can eventually change `Spec.__str__`
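Roughly the shape of that change (a hedged sketch, not the actual class):
```python
# hedged sketch: pay the cost of formatting the spec only on the error path
class NoSuchSpecError(KeyError):
    def __init__(self, spec) -> None:
        self.spec = spec
        super().__init__(f"No such spec in database: {spec}")
```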
1. Fix O(n^2) iteration in `_get_overwrite_specs`
2. Early exit `get_by_hash` on full hash
3. Fix O(n^2) double lookup in `all_matching_specs` with hashes
4. Fix some legibility issues
* When installing a package Spack will attempt to set group permissions on
the install prefix even when the configuration does not specify a group.
Co-authored-by: David Gomez <dvdgomez@users.noreply.github.com>
Move the logic checking which mirrors have the specs we need closer
to where that information is needed. Also update the staging summary
to contain a brief description of why we scheduled or pruned each
job. If a spec was found on any mirrors, regardless of whether
we scheduled a job for it, print those mirrors.
PowerShell requires explicit shell and env support in Spack.
This is due to the distinct differences in shell interactions between
cmd and pwsh. Add a doskey in pwsh piping 'spack' commands to a
powershell script similar to the sh function 'spack'. Add
support for PowerShell-specific shell interactions from Spack
(set/unset shell variables).
* Support hardlinks/junctions on Windows systems without developer
mode enabled
* Generally, use of llnl.util.symlink.symlink is preferred over
os.symlink since it handles this automatically
* Generally an error is now reported if a user attempts to create a
symlink to a file that does not exist (this was previously allowed
on Linux/Mac).
* One exception to this: when Spack installs files from the source
into their final prefix, dangling symlinks are allowed (on
Linux/Mac - Windows does not allow this in any circumstance).
The intent behind this is to avoid generating failures for
installations on Linux/Mac that were succeeding before.
* Because Windows is strict about forbidding dangling symlinks,
`traverse_tree` has been updated to skip creating symlinks if they
would point to a file that is ignored. This check is not
transitive (i.e., a symlink to a symlink to an ignored file would
not be caught appropriately)
* Relocate function: resolve_link_target_relative_to_the_link
(this is not otherwise modified)
Co-authored-by: jamessmillie <smillie@txcorp.com>
Update the list of excluded variables in the `from_sourcing_file` function to
cover all variables specific to Environment Modules or Lmod. Specifically, add
the variables related to the definition of the `module()`, `ml()`
and `_module_raw()` Bash functions.
Fixes #13504
Update the `env.set` command and the underlying `SetEnv` object to add a `raw`
boolean attribute. `raw` is optional and set to False by default. When
set to True, value formatting is skipped for the object when generating
environment modifications.
With this change it is now possible to define an environment variable
whose value contains variable reference syntax (like `{foo}` or `{}`)
that should be set as-is.
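A hedged usage sketch (the keyword name follows the description above; treat the exact signature as an assumption):
```python
from spack.util.environment import EnvironmentModifications

env = EnvironmentModifications()
# hypothetical: keep the braces verbatim instead of treating them as format fields
env.set("MY_TEMPLATE", "{name}-{version}", raw=True)
```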
Fixes #29578
Spack flags supplied by users should supersede flags from package build systems and
other places in Spack. However, Spack currently adds user-supplied flags to the
beginning of the compile line, which means that in some cases build system flags will
supersede user-supplied ones.
The right place to add a flag to ensure it has highest precedence for the compiler really
depends on the type of flag. For example, search paths like `-L` and `-I` are examined
in order, so adding them first is highest precedence. Compilers take the *last* occurrence
of optimization flags like `-O2`, so those should be placed *after* other such flags. Shim
libraries with `-l` should go *before* other libraries on the command line, so we want
user-supplied libs to go first, etc.
`lib/spack/env/cc` already knows how to split arguments into categories like `libs_list`,
`rpath_dirs_list`, etc., so we can leverage that functionality to merge user flags into
the arg list correctly.
The general rules for injected flags are:
1. All `-L`, `-I`, `-isystem`, `-l`, and `*-rpath` flags from `spack_flags_*` to appear
before their regular counterparts.
2. All other flags from `spack_flags_*` are ordered *after* their regular counterparts,
i.e. `other_flags` before `spack_flags_other_flags`
- [x] Generalize argument categorization into its own function in the `cc` shell script
- [x] Apply the same splitting logic to injected flags and flags from the original compile line.
- [x] Use the resulting flag lists to merge user- and build-system-supplied flags by category.
- [x] Add tests.
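Schematically, the per-category merge looks something like this (illustrative Python, not the actual `cc` shell logic; the category names are made up):
```python
# hedged illustration of merging user (spack_*) and build-system flag lists
def merge(spack: dict, regular: dict) -> list:
    merged = []
    for cat in ("L", "I", "isystem", "l", "rpath"):
        merged += spack.get(cat, []) + regular.get(cat, [])      # user flags first
    merged += regular.get("other", []) + spack.get("other", [])  # user flags last
    return merged
```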
Signed-off-by: Andrey Parfenov <andrey.parfenov@intel.com>
Co-authored-by: iermolae <igor.ermolaev@intel.com>
The `unparser` that Spack uses for package hashing had several tweaks to ensure compatibility
with Python 2.7:
1. Currently, the unparser automatically moves `*` and `**` args to the end to preserve
compatibility with `python@:3.4`
2. `print a, b, c` statements and single-tuple `print((a, b, c))` function calls were
remapped to `print(a, b, c)` in the unparsed output for consistency across versions.
(1) is causing issues in our tests because a recent patch to the Python source code
(https://github.com/python/cpython/pull/102953/files#diff-7972dffec6674d5f09410c71766ac6caacb95b9bccbf032061806ae304519c9bR813-R823)
has a `**` arg before a named argument, and we round-trip the core Python source code
as a test of our unparser. This isn't actually a break with our consistent unparsing -- it's still
consistent, the Python source just doesn't unparse to the same thing anymore. It does make
it harder to test, so it's not worth maintaining the Python2-specific stuff anymore.
Since we only support `python@3.6:`, this PR removes (1) and (2) from the unparser, but keeps
one last tweak for unicode AST inconsistencies, as it's still needed for Python 3.5-3.7.
This fixes the CI error we've been seeing on `python@3.11.4` and `python@3.10.12`. Again, that
bug exists only in the test system and doesn't affect our canonical hashing of Python code.
* DependencySpec: add virtuals attribute on edges
This works for both the new and the old concretizer. Also,
added type hints to involved functions.
* Improve virtual reconstruction from old format
* Reconstruct virtuals when reading from Cray manifest
* Reconstruct virtual information on test dependencies
Update Tcl modulefile template to use the `depends-on` command to
autoload modules if Lmod is the current module tool.
Autoloading modules with the `module load` command in Tcl modulefiles does
not work well with Lmod to some extent: an attempt to unload then load the
designated module is performed each time such a command is encountered. This
may lead to a load storm that may not end correctly with a large number of
module dependencies.
The `depends-on` command should be used for Lmod instead of `module load`,
as it checks whether a module is already loaded, and does not attempt to
reload that module.
The Lua modulefile template already uses the `depends_on` command to autoload
dependencies. Thus using Lmod with Spack already assumes a version that
supports the `depends_on` command (7.6+).
Environment Modules copes well with the `module load` command to autoload
dependencies (version 3.2+). The `depends-on` command is supported starting
with version 5.1 (as an alias of the `prereq-all` command), which was
released last year.
This change introduces a test to determine whether the module tool that
evaluates the modulefile is Lmod. If so, autoloaded dependencies are defined
with the `depends-on` command; otherwise the `module load` command is used.
The test is based on the `LMOD_VERSION_MAJOR` environment variable, which
Lmod sets starting with version 5.1.
Fixes #36764
When interpreting local paths as relative URL endpoints, they were
formatted as Windows paths on Windows (i.e. with '\'). URLs should
always be POSIX-style.
Update modulefile templates to append a trailing delimiter to the MANPATH
environment variable, if the modulefile sets it.
With a trailing delimiter at the end of MANPATH's value, man will search
the system man pages after searching the specific paths set.
Using append-path/append_path to add this element, the module tool
ensures it is appended only once. When the modulefile is unloaded, the
append counter is decreased, thus the trailing delimiter is removed only
when that counter reaches 0.
Disclaimer: no path element should be appended to MANPATH by generated
modulefiles. Paths should always be prepended, to ensure this variable's
value keeps ending with the trailing delimiter.
Fixes #11355.
* Improve lib/spack/spack/test/cmd/compiler.py
* Use "tmp_path" in the "mock_executable" fixture
* Return a pathlib.Path from mock_executable
* Fix mock_executable fixture on Windows
"mock_gcc" was very similar to mock_executable, so use the latter to reduce code duplication
* Remove wrong compiler cache, fix compiler removal
fixes #37996
_CACHE_CONFIG_FILES was both unneeded and wrong, if called
subsequently with different scopes.
Here we remove that cache, and we fix an issue with compiler
removal triggered by having the same compiler spec in multiple
scopes.
* e4s cray ci stack
* e4s ci: add cray
* add zen4 tag
* WIP: new definitions just for cray
* updates
* remove ci signing job overrride, not necessary
* echo $PATH and show modules loaded
* add mirror
* add external def for cray-libsci
* comment out quantum-espresso
* use /etc/protected-runner as key path
* cray ci stack: do not remove tags: [spack, public]
* make cray stack composable
* generate job should run on public tagged runner, override default config:install_tree:root
* CI: Use relative path in default script
* CI: Use relative includes paths for shell runners
* Use concrete_env_dir for relpath
* ml-darwin-aarch64-mps: jax has bazel codesign issue
---------
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
#37592 updated cached cmake packages to set CMAKE_CUDA_ARCHITECTURES.
The condition `if archs != "none"` led to `CMAKE_CUDA_ARCHITECTURES=none`
when cuda_arch=none (an incorrect check on the value of a multi-valued
variant), i.e. CMAKE_CUDA_ARCHITECTURES was always set. This PR updates
the condition to `if archs[0] != "none"` to ensure CMAKE_CUDA_ARCHITECTURES
is only set if cuda_arch is not none (which seems to be the pattern used
in other packages).
This does the same for HIP (although in general ROCmPackage disallows
amdgpu_target=none when +rocm).
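A self-contained illustration of why the original check misfires (the values are hypothetical):
```python
# multi-valued variant values are tuples, so comparing to a string never matches
archs = ("none",)
assert archs != "none"            # the old check was therefore always true
if archs[0] != "none":            # the corrected check inspects the first element
    print("CMAKE_CUDA_ARCHITECTURES=" + ";".join(archs))
```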
* Add CMake options for building with CUDA/HIP support to
CachedCMakePackages (intended to reduce duplication across packages
building with +hip/+cuda and using CachedCMakePackage)
* Define generic variables like CMAKE_PREFIX_PATH for
CachedCMakePackages (so that a user may invoke "cmake" themselves
without needing to set them on the command line).
* Make `lbann` a CachedCMakePackage.
Co-authored-by: Chris White <white238@llnl.gov>
fa7719a changed syntax for specifying exact versions, which are
required for some compiler specs (including those read as part
of parsing a Cray manifest). This fixes that and also makes a
couple other improvements to manifest parsing.
* Instantiate compiler specs with exact versions (fixes #37893)
* fix slingshot network detection (CPE 22.10+ has libcxi.so
in /usr/lib64)
* "spack external find": add arg to ignore default dir for cray
manifests
Change the default naming scheme for tcl modules for a more user-friendly
experience: change from a flat projection to a "per software name" projection.
A flat naming scheme restricts module selection capabilities. The
`{name}/{version}...` scheme makes it possible to use user-friendly
mechanisms:
* implicit defaults (`module load git`)
* extended default (`module load git/2`)
* advanced version specifiers (`module load git@2:`)
Note the win-sdk package is not installable and reports an error
which instructs the user how to add it. Without this fix, a
(more confusing) error occurs before this message can be generated.
"spack build-env" was not generating proper environment variable
definitions on Windows; this commit updates the generated commands
to succeed with batch/PowerShell.
Ensure that spack compiler add/find/list and lists of concrete specs
print the compiler effectively as {compiler.name}{@compiler.version}.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Make it clear that copy-only pipelines are not supported while still
using the deprecated ci config format. Also ensure that the deprecated
stack does not fail on spack pipelines for tags.
* Fix reporting of packageless specs as having no tests
* Add test_test_output_multiple_specs with update to simple-standalone-test (and tests)
* Refactored test status summary; added more tests or checks
MSVC compiler logic was using string parsing to extract version
from compiler spec, which was fragile. This broke in #37572, so has
been fixed and made more robust by using attribute access.
Ensure that requirements `packages:*:require:@x` and preferences `packages:*:version:[x]`
fail concretization when no version defined in the package satisfies `x`. This always holds
except for git versions -- they are defined on the fly.
Two bugs came in from #37438
1. `unify: when_possible` was broken because of an incorrect assertion. Abstract/concrete
spec pairs were compared against the results that were in the process of being computed,
rather than against the previous results.
2. `unify: true` had an ordering bug that could mix the association between abstract and
concrete specs
- [x] 1 is resolved by creating a lookup from old concrete specs to old abstract specs,
and we use that to associate the "new" concrete specs that happen to be the old
ones with their abstract specs (since those are stripped out for concretization)
- [x] 2 is resolved by combining the new and old abstract as lists instead of combining
them as sets. This is important because `set() | set()` does not make any ordering
promises, even though set ordering is otherwise guaranteed in `python@3.7:`
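A quick, standalone illustration of that set-ordering hazard:
```python
a, b = ["x", "y"], ["y", "z"]
unordered = set(a) | set(b)                 # iteration order is not promised
ordered = a + [s for s in b if s not in a]  # stable, order-preserving union
print(ordered)  # ['x', 'y', 'z']
```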
Spack displays package code context when it shouldn't (e.g., on `FetchError`s)
and doesn't display it when it should (e.g., when errors occur in builder classes).
The line attribution can sometimes be off by one, as well.
- [x] Display package context when errors occur in a subclass of `PackageBase`
- [x] Display package context when errors occur in a subclass of `BaseBuilder`
- [x] Do not display package context when errors occur in `PackageBase`,
`BaseBuilder` or other core code that is not in a `package.py` file.
- [x] Fix off-by-one error for core code (don't subtract one from the line number *unless*
it's in an actual `package.py` file).
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
We currently throw a nasty error if you try to reuse packages from some other namespace
(e.g., OLCF), but we should be able to reuse patched local versions of builtin packages.
Right now the only obstacle to that is that we try to look up virtual info for unknown
namespaces, and we can't get the package from the repo to do that. We *can* assume that
a package with a known namespace is similar, and that its virtual provider information
is reasonably accurate, so we now do that. This isn't 100% accurate, but neither is
relying on the package itself, as it may have gone out of date.
The real solution here is virtual edge information, but this is a stopgap until we have
that.
`spec_clauses()` attempts to look up package information for concrete specs in order to
determine which virtuals they may provide. This fails for renamed/deleted dependencies
of buildcaches and installed packages.
This will eventually be fixed by #35258, which adds virtual information on edges, but we
need a workaround to make older buildcaches usable.
- [x] make an exception for renamed packages and omit their virtual constraints
- [x] add a note that this will be solved by adding virtuals to edges
The concretizer can fail with `reuse:true` if a buildcache or installation contains a
package with a dependency that has been renamed or deleted in the main repo (e.g.,
`netcdf` was refactored to `netcdf-c`, `netcdf-fortran`, etc., but there are still
binary packages with dependencies called `netcdf`).
We should still be able to install things for which we are missing `package.py` files.
`Spec.inject_patches_variant()` was failing this requirement by attempting to look up
the package class for concrete specs. This isn't needed -- we can skip it.
- [x] swap two conditions in `Spec.inject_patches_variant()`
The @= in `spack find` output adds a bit of noise. Remove it as we
did for `spack spec` and `spack concretize`.
This modifies display_specs so it actually covers other places we use that routine, as
well, e.g., `spack buildcache list`.
before:
```
-- linux-ubuntu20.04-aarch64 / gcc@=11.1.0 -----------------------
ofdlcpi libpressio@0.88.0
```
after:
```
-- linux-ubuntu20.04-aarch64 / gcc@11.1.0 -----------------------
ofdlcpi libpressio@0.88.0
```
If a user does not explicitly `--force` the concretization of an entire environment,
Spack will try to reuse the concrete specs that are already in the lockfile.
---------
Co-authored-by: becker33 <becker33@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* gitlab ci: release fixes and improvements
- use rules to reduce boilerplate in .gitlab-ci.yml
- support copy-only pipeline jobs
- make pipelines for release branches rebuild everything
- make pipelines for protected tags copy-only
* gitlab ci: remove url changes used in testing
* gitlab ci: tag mirrors need public key
Make sure that mirrors associated with release branches and tags
contain the public key needed to verify the signed binaries. This
also ensures that when stack-specific mirror contents are copied
to the root, the root mirror has the public key as well.
* review: be more specific about tags, curl flags
* Make the check in ci.yaml consistent with the .gitlab-ci.yml
---------
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
Currently, specs on buildcache mirrors must be referenced by their full description. This PR allows buildcache specs to be referenced by their hashes, rather than their full description.
### How it works
Hash resolution has been moved from `SpecParser` into `Spec`, and now includes the ability to execute a `BinaryCacheQuery` after checking the local store, but before concluding that the hash doesn't exist.
### Side-effects of Proposed Changes
Failures will take longer when nonexistent hashes are parsed, as mirrors will now be scanned.
### Other Changes
- `BinaryCacheIndex.update` has been modified to fail appropriately only when mirrors have been configured.
- Tests of hash failures have been updated to use `mutable_empty_config` so they don't needlessly search mirrors.
- Documentation has been clarified for `BinaryCacheQuery`, and more documentation has been added to the hash resolution functions added to `Spec`.
This PR ensures that we'll get a comprehensible error message whenever an old
version of Spack tries to use a DB or a lockfile that is "too new".
* Fix error message when using a too new DB
* Add a unit-test to ensure we have a comprehensible error message
Add a section to the lock file to track the Spack version/commit that produced
an environment. This should (eventually) enhance reproducibility, though we
do not currently do anything with the information. It just adds to provenance
at the moment.
Changes include:
- [x] adding the version/commit to `spack.lock`
- [x] refactor `spack.main.get_version()`
- [x] fix a couple of environment lock file-related typos