"spack build-env" was not generating proper environment variable
definitions on Windows; this commit updates the generated commands
to succeed with batch/PowerShell.
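For illustration only, here is a minimal sketch (not Spack's actual implementation; the function and variable names are hypothetical) of how an environment-variable definition differs between the shell dialects involved:

```python
def env_definition(var, value, shell):
    """Hypothetical: render one environment-variable definition per shell dialect."""
    if shell == "bat":                        # cmd.exe batch files
        return f'set "{var}={value}"'
    if shell == "pwsh":                       # PowerShell
        return f'$Env:{var} = "{value}"'
    return f'export {var}="{value}"'          # POSIX sh/bash (the pre-existing behavior)

for shell in ("bat", "pwsh", "sh"):
    print(env_definition("SPACK_CC", "cl.exe", shell))
```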
Ensure that `spack compiler add/find/list` and lists of concrete specs
print the compiler as `{compiler.name}{@compiler.version}`.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Make it clear that copy-only pipelines are not supported while still
using the deprecated ci config format. Also ensure that the deprecated
stack does not fail on spack pipelines for tags.
* Fix reporting of packageless specs as having no tests
* Add test_test_output_multiple_specs with an update to simple-standalone-test (and its tests)
* Refactor the test status summary; add more tests and checks
MSVC compiler logic was using string parsing to extract the version
from the compiler spec, which was fragile. This broke in #37572, so it has
been fixed and made more robust by using attribute access.
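As a rough illustration of the difference (with hypothetical stand-in classes, not Spack's real compiler spec objects):

```python
class Version:
    """Hypothetical stand-in for a version object."""
    def __init__(self, string):
        self.string = string

class CompilerSpec:
    """Hypothetical stand-in for a compiler spec."""
    def __init__(self, name, version):
        self.name = name
        self.version = Version(version)

    def __str__(self):
        # The rendered string layout is what changed and broke the parsing.
        return f"{self.name}@={self.version.string}"

spec = CompilerSpec("msvc", "19.36.32532")

# Fragile: parse the version back out of the rendered string.
parsed = str(spec).split("@")[-1].lstrip("=")

# Robust: read it straight from the attribute.
direct = spec.version.string

assert parsed == direct == "19.36.32532"
```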
Ensure that requirements `packages:*:require:@x` and preferences `packages:*:version:[x]`
fail concretization when no version defined in the package satisfies `x`. This always holds
except for git versions -- they are defined on the fly.
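A rough sketch of the rule (toy code and hypothetical names, not the concretizer's real version matching):

```python
declared_versions = ["1.0", "1.2", "2.0"]    # versions defined in a hypothetical package.py

def satisfies(version, required):
    """Toy stand-in for real version-range matching."""
    return version == required or version.startswith(required + ".")

required = "3"                                # e.g. packages:*:require:@3
if not any(satisfies(v, required) for v in declared_versions):
    # Git versions would be exempt here, since they are defined on the fly.
    print(f"error: @{required} matches no version defined in the package")
```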
Two bugs came in from #37438:
1. `unify: when_possible` was broken because of an incorrect assertion. Abstract/concrete
spec pairs were compared against the results that were in the process of being computed,
rather than against the previous results.
2. `unify: true` had an ordering bug that could mix up the association between abstract and
concrete specs.
- [x] 1 is resolved by creating a lookup from old concrete specs to old abstract specs,
and we use that to associate the "new" concrete specs that happen to be the old
ones with their abstract specs (since those are stripped out for concretization).
- [x] 2 is resolved by combining the new and old abstract specs as lists instead of combining
them as sets, as sketched below. This is important because `set() | set()` makes no ordering
promises, even though dict insertion order *is* guaranteed in `python@3.7:`.
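A toy sketch of the two fixes (hypothetical spec stand-ins, not real `Spec` objects):

```python
# Fix 1: a lookup from old concrete specs back to their old abstract specs, so
# "new" concrete specs that are really old ones can be re-associated correctly.
old_concrete_to_abstract = {"zlib@1.2.13": "zlib", "mpich@4.1.1": "mpich"}
print(old_concrete_to_abstract["zlib@1.2.13"])               # -> "zlib"

# Fix 2: combine abstract specs as lists, not sets, to keep ordering stable.
old_abstract = ["zlib", "mpich"]
new_abstract = ["hdf5", "zlib"]

unordered = set(old_abstract) | set(new_abstract)            # no ordering promise
ordered = old_abstract + [s for s in new_abstract if s not in old_abstract]
print(ordered)                                               # ['zlib', 'mpich', 'hdf5']
```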
Spack displays package code context when it shouldn't (e.g., on `FetchError`s)
and doesn't display it when it should (e.g., when errors occur in builder classes).
The line attribution can also sometimes be off by one.
- [x] Display package context when errors occur in a subclass of `PackageBase`
- [x] Display package context when errors occur in a subclass of `BaseBuilder`
- [x] Do not display package context when errors occur in `PackageBase`,
`BaseBuilder` or other core code that is not in a `package.py` file.
- [x] Fix the off-by-one error for core code (don't subtract one from the line number *unless*
it's in an actual `package.py` file); a sketch of these checks follows below.
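The checks might look roughly like the following (a hypothetical sketch; Spack's real logic lives in its error-handling code and uses its actual base classes):

```python
class PackageBase:
    """Stand-in for Spack's package base class."""

class MyPackage(PackageBase):
    """Pretend this subclass is defined in a package.py file."""

def context_for_frame(obj, frame_filename, lineno):
    """Return (show_context, adjusted_lineno) for one traceback frame."""
    is_package_subclass = isinstance(obj, PackageBase) and type(obj) is not PackageBase
    in_package_py = frame_filename.endswith("package.py")
    show = is_package_subclass and in_package_py
    # Only subtract one for frames that really are in a package.py file.
    return show, (lineno - 1 if in_package_py else lineno)

print(context_for_frame(MyPackage(), "/repo/packages/zlib/package.py", 42))            # (True, 41)
print(context_for_frame(PackageBase(), "/spack/lib/spack/spack/package_base.py", 42))  # (False, 42)
```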
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
We currently throw a nasty error if you try to reuse packages from some other namespace
(e.g., OLCF), but we should be able to reuse patched local versions of builtin packages.
Right now the only obstacle to that is that we try to look up virtual info for unknown
namespaces, and we can't get the package from the repo to do that. We *can* assume that
a same-named package from a known namespace is similar, and that its virtual provider information
is reasonably accurate, so we now do that. This isn't 100% accurate, but neither is
relying on the package itself, as it may have gone out of date.
The real solution here is virtual edge information, but this is a stopgap until we have
that.
`spec_clauses()` attempts to look up package information for concrete specs in order to
determine which virtuals they may provide. This fails for renamed/deleted dependencies
of buildcaches and installed packages.
This will eventually be fixed by #35258, which adds virtual information on edges, but we
need a workaround to make older buildcaches usable.
- [x] make an exception for renamed packages and omit their virtual constraints (sketched below)
- [x] add a note that this will be solved by adding virtuals to edges
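A minimal sketch of the exception (hypothetical repo lookup and attribute names, not Spack's real API):

```python
def virtual_clauses(pkg_name, repo):
    """Hypothetical: return virtual constraints, or none for renamed/deleted packages."""
    try:
        pkg_cls = repo[pkg_name]             # stand-in for a package-repo lookup
    except KeyError:
        # Renamed or deleted package (e.g. old "netcdf" deps in a buildcache):
        # omit its virtual constraints instead of erroring out.
        # Note: this workaround goes away once virtuals live on edges (#35258).
        return []
    return list(getattr(pkg_cls, "provided", []))

class NetcdfC:
    provided = ["netcdf-api"]                # made-up virtual name, for illustration only

repo = {"netcdf-c": NetcdfC}
print(virtual_clauses("netcdf", repo))       # [] -- no hard failure for the old name
print(virtual_clauses("netcdf-c", repo))     # ['netcdf-api']
```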
The concretizer can fail with `reuse:true` if a buildcache or installation contains a
package with a dependency that has been renamed or deleted in the main repo (e.g.,
`netcdf` was refactored to `netcdf-c`, `netcdf-fortran`, etc., but there are still
binary packages with dependencies called `netcdf`).
We should still be able to install things for which we are missing `package.py` files.
`Spec.inject_patches_variant()` was failing this requirement by attempting to look up
the package class for concrete specs. This isn't needed -- we can skip it.
- [x] swap two conditions in `Spec.inject_patches_variant()`, as sketched below
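Schematically, the swap just short-circuits before the package-class lookup (hypothetical names; the real method works on a `Spec`):

```python
def needs_patch_lookup(spec_is_concrete, lookup_package_class):
    """Hypothetical: concrete specs skip the package-class lookup entirely."""
    if spec_is_concrete:
        return False                          # checked first, so the lookup never runs
    return lookup_package_class() is not None

def missing_package_class():
    raise KeyError("no package.py for this spec")

# A concrete spec with a missing package.py no longer blows up:
print(needs_patch_lookup(True, missing_package_class))   # False
```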
The @= in `spack find` output adds a bit of noise. Remove it as we
did for `spack spec` and `spack concretize`.
This modifies `display_specs` so it also covers the other places that use that routine,
e.g., `spack buildcache list`.
before:
```
-- linux-ubuntu20.04-aarch64 / gcc@=11.1.0 -----------------------
ofdlcpi libpressio@0.88.0
```
after:
```
-- linux-ubuntu20.04-aarch64 / gcc@11.1.0 -----------------------
ofdlcpi libpressio@0.88.0
```
If a user does not explicitly `--force` the concretization of an entire environment,
Spack will try to reuse the concrete specs that are already in the lockfile.
---------
Co-authored-by: becker33 <becker33@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* gitlab ci: release fixes and improvements
- use rules to reduce boilerplate in .gitlab-ci.yml
- support copy-only pipeline jobs
- make pipelines for release branches rebuild everything
- make pipelines for protected tags copy-only
* gitlab ci: remove url changes used in testing
* gitlab ci: tag mirrors need public key
Make sure that mirrors associated with release branches and tags
contain the public key needed to verify the signed binaries. This
also ensures that when stack-specific mirror contents are copied
to the root, the root mirror has the public key as well.
* review: be more specific about tags, curl flags
* Make the check in ci.yaml consistent with the .gitlab-ci.yml
---------
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
Currently, specs on buildcache mirrors must be referenced by their full description. This PR allows buildcache specs to be referenced by their hashes instead.
### How it works
Hash resolution has been moved from `SpecParser` into `Spec`, and now includes the ability to execute a `BinaryCacheQuery` after checking the local store, but before concluding that the hash doesn't exist.
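Conceptually, the lookup order is: local store first, then mirrors. A minimal sketch (hypothetical callables, not the real `Spec`/`BinaryCacheQuery` interfaces):

```python
def resolve_hash(hash_prefix, query_local_store, query_binary_cache):
    """Hypothetical: resolve a hash prefix, falling back to buildcache mirrors."""
    matches = query_local_store(hash_prefix)
    if not matches:
        # Only scan mirrors after the local store comes up empty -- which is
        # also why failures on nonexistent hashes now take longer.
        matches = query_binary_cache(hash_prefix)
    if not matches:
        raise ValueError(f"no spec matches the hash /{hash_prefix}")
    return matches

local = lambda prefix: []                                   # nothing installed locally
mirror = lambda prefix: ["libpressio@0.88.0 /ofdlcpi"]      # found on a mirror
print(resolve_hash("ofdlcpi", local, mirror))
```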
### Side-effects of Proposed Changes
Failures will take longer when nonexistent hashes are parsed, as mirrors will now be scanned.
### Other Changes
- `BinaryCacheIndex.update` has been modified to fail appropriately only when mirrors have been configured.
- Tests of hash failures have been updated to use `mutable_empty_config` so they don't needlessly search mirrors.
- Documentation has been clarified for `BinaryCacheQuery`, and more documentation has been added to the hash resolution functions added to `Spec`.
This PR ensures that we'll get a comprehensible error message whenever an old
version of Spack tries to use a DB or a lockfile that is "too new".
* Fix error message when using a too new DB
* Add a unit-test to ensure we have a comprehensible error message
Add a section to the lock file to track the Spack version/commit that produced
an environment. This should (eventually) enhance reproducibility, though we
do not currently do anything with the information. It just adds to provenance
at the moment.
Changes include:
- [x] adding the version/commit to `spack.lock` (illustrated below)
- [x] refactor `spack.main.get_version()`
- [x] fix a couple of environment lockfile-related typos
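Purely as a hypothetical illustration of the idea (the exact keys written to `spack.lock` are not spelled out here and may differ):

```python
import json

# Hypothetical extra section recording the Spack that produced the lockfile.
provenance = {
    "spack": {
        "version": "0.21.0.dev0",            # made-up version string
        "type": "git",
        "commit": "0123456789abcdef",        # made-up commit hash
    }
}
print(json.dumps(provenance, indent=2))
```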
The flags `--mirror-name` / `--mirror-url` / `--directory` were deprecated in
favor of just passing a positional name, URL, or directory, and letting spack
figure it out.
---------
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Prior to this PR, the HOMEDRIVE environment variable was used to
detect which drive we are operating on. This variable is not available
for service account logins (like what is used for CI), so switch to
extracting the drive from PROGRAMFILES (which is more widely defined).
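A minimal sketch of the approach (the exact variable handling is illustrative, not Spack's code):

```python
import ntpath
import os

# PROGRAMFILES is defined even for service-account logins, unlike HOMEDRIVE.
program_files = os.environ.get("PROGRAMFILES", r"C:\Program Files")
drive = ntpath.splitdrive(program_files)[0]   # e.g. "C:"
print(drive)
```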
On Windows, several commonly available system tools for decompression
are unreliable (gz/bz2/xz). This commit refactors `decompressor_for`
to call out to a Windows or Unix-specific method:
* The `decompressor_for_nix` method behaves the same as before and
generally treats the Python/system support options for decompression
as interchangeable (although it avoids using Python's built-in tar
support, since that has had issues with permissions).
* The `decompressor_for_win` method can only use Python support for
gz/bz2/xz, although for a tar.gz it does use system support for
untar (after the decompression step). .zip uses the system tar
utility, and .Z depends on external support (i.e., that the user
has installed 7zip).
A naming scheme has been introduced for the various decompression
methods (sketched below for the gzip case):
* `_system_gunzip` means to use a system tool (and fail if it's not
available)
* `_py_gunzip` means to use Python's built-in support for decompressing
.gz files (and fail if it's not available)
* `_gunzip` is a method that can do either
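A sketch of the naming convention for the gzip case (hypothetical signatures, not Spack's exact ones):

```python
import gzip
import shutil
import subprocess

def _system_gunzip(archive, outfile):
    """Use the system gzip tool; fail if it is not available."""
    with open(outfile, "wb") as dst:
        subprocess.run(["gzip", "-dc", archive], stdout=dst, check=True)

def _py_gunzip(archive, outfile):
    """Use Python's built-in gzip support; fail if it is not available."""
    with gzip.open(archive, "rb") as src, open(outfile, "wb") as dst:
        shutil.copyfileobj(src, dst)

def _gunzip(archive, outfile):
    """Use whichever of the two is available."""
    try:
        _system_gunzip(archive, outfile)
    except (FileNotFoundError, subprocess.CalledProcessError):
        _py_gunzip(archive, outfile)
```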