We've previously generated CI pipelines for PRs, and they rebuild any packages that don't have
a binary in an existing build cache. The assumption we were making was that ALL prior merged
builds would be in cache, but due to the way we do security in the pipeline, they aren't. `develop`
pipelines can take a while to catch up with the latest PRs, and while it does that, there may be a
bunch of redundant builds on PRs that duplicate things being rebuilt on `develop`. Until we can
do better caching of PR builds, we'll have this problem.
We can do better in PRs, though, by *only* rebuilding things in the CI environment that are actually
touched by the PR. This change computes exactly what packages are changed by a PR branch and
*only* includes those packages' dependents and dependencies in the generated pipeline. Other
as-yet unbuilt packages are pruned from CI for the PR.
For `develop` pipelines, we still want to build everything to ensure that the stack works, and to ensure
that `develop` catches up with PRs. This is especially true since we do not do rebuilds for *every* commit
on `develop` -- just the most recent one after each `develop` pipeline finishes. Since we skip around,
we may end up missing builds unless we ensure that we rebuild everything.
We differentiate between `develop` and PR pipelines in `.gitlab-ci.yml` by setting
`SPACK_PRUNE_UNTOUCHED` for PRs. `develop` will still have the old behavior.
- [x] Add `SPACK_PRUNE_UNTOUCHED` variable to `spack ci`
- [x] Refactor `spack pkg` command by moving historical package checking logic to `spack.repo`
- [x] Implement pruning logic in `spack ci` to remove untouched packages
- [x] Add tests
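A rough sketch of the pruning step (not the exact implementation): assume `touched` is the set
of package names changed on the PR branch, computed by diffing against the base branch, and
`specs` are the concrete specs that would go into the generated pipeline.
```python
def prune_untouched(specs, touched):
    """Keep a spec only if it, one of its dependencies, or one of its
    dependents corresponds to a package touched by the PR."""

    def is_relevant(spec):
        # traverse() includes the spec itself, so the root package counts too
        deps = set(s.name for s in spec.traverse(direction="children"))
        dependents = set(s.name for s in spec.traverse(direction="parents"))
        return bool(touched & deps) or bool(touched & dependents)

    return [s for s in specs if is_relevant(s)]
```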
Add output of build- and install-time tests to info command
Enable dependencies, variants, and versions by default (i.e., provide `--no*`
options); add gcc to `test_info_fields` to increase coverage for `c_names` -> `v_names`.
Adds `spack external read-cray-manifest`, which reads a json file that describes a set of package DAGs. The parsed results are stored directly in the database. A user can see these installed specs with `spack find` (like any installed spec). The easiest way to use them right now as dependencies is to run `spack spec ... ^/hash-of-external-package`.
Changes include:
* `spack external read-cray-manifest --file <path/to/file>` will add all specs described in the file to Spack's installation DB and will also install described compilers to the compilers configuration (the expected format of the file is described in this PR as well including examples of the file)
* Database records now may include an "origin" (the command added in this PR registers the origin as "external-db"). In the future, it is assumed users may want to be able to treat installs registered with this command differently (e.g. they may want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec is concrete
  * I don't think the hashes of installed-and-concrete specs should change, and this was the easiest way to handle that
* Specs that are concrete also preserve their `.normal` property when copied (external specs may mention compilers that are not registered, and without this change they would fail in `normalize` when calling `validate_or_raise`)
  * It might be that this should only be the case if the spec was installed
- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do something like "uninstall all packages added with `spack read-external-db`")
  * This is now possible with `spack uninstall --all --origin=external-db` (this will remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* Add 'make-installer' command for Windows
* Add '--bat' arg to env activate, env deactivate and unload commands
* An equivalent script to setup-env on Linux: spack_cmd.bat. This script
has a wrapper to evaluate cd, load/unload, and env activate/deactivate. (#21734)
* Add spacktivate and config editor (#22049)
* spack_cmd: will find python and spack on its own. It preferentially
tries to use python on your PATH (#22414)
* Ignore Windows python installer if found (#23134)
* Bundle git in windows installer (#23597)
* Add Windows section to Getting Started document
(#23131), (#23295), (#24240)
Co-authored-by: Stephen Crowell <stephen.crowell@kitware.com>
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Co-authored-by: Jared Popelar <jpopelar@txcorp.com>
Co-authored-by: Ben Cowan <benc@txcorp.com>
Update Installer CI
Co-authored-by: John Parent <john.parent@kitware.com>
* hdf5: mark +fortran+shared conflict for older version
This version was only activated unintentionally by silo's conflict
statement, but `@1.8.15+shared+fortran+cxx` errors out in configure:
```
CMake Error at CMakeLists.txt:814 (message):
**** Shared FORTRAN libraries are unsupported ****
```
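A hedged sketch of how such a constraint can be expressed in the hdf5 package (the exact
version range used in the real package may differ):
```python
# in var/spack/repos/builtin/packages/hdf5/package.py -- illustrative constraint only
conflicts(
    "+fortran",
    when="@:1.8.15+shared",
    msg="Shared Fortran libraries are unsupported in these older releases",
)
```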
* silo: refine hdf5 conflicts to avoid building old version
Before this, `silo+hdf5` concretized to 1.10.7 or sometimes 1.8.15. Now
I've verified it works for the following configurations:
```
silo@4.10.2 patches=7b5a1dc,952d3c9
^ hdf5@1.10.7 api=default
silo@4.10.2 patches=7b5a1dc,952d3c9,eb2a3a0
^ hdf5@1.10.8 api=v18
silo@4.10.2 patches=7b5a1dc,952d3c9,eb2a3a0
^ hdf5@1.12.1 api=v110
silo@4.11-bsd patches=eb2a3a0
^ hdf5@1.12.1 api=v110
silo@4.11-bsd patches=eb2a3a0
^ hdf5@1.10.8 api=default
silo@4.11-bsd patches=eb2a3a0
^ hdf5@1.12.1 api=default
```
and verified that the following fail:
```
silo@4.10.2 ^hdf5@1.12.1 api=default
silo@4.11 ^hdf5 api=v18
silo@4.11-bsd ^hdf5@1.13.0 api=v12
silo@4.11-bsd ^hdf5@1.13.0 api=default
```
and have updated the constraints to match. Hdf5 no longer has to be
downgraded to work with Silo.
* silo: fix dependency conflicts
* py-h5py: shorten and add comments to py-h5py hdf5 dependencies
* e4s: remove slightly outdated hdf5 requirement
* e4s: remove excessive hdf5 variant constraints
These I think are holdovers from the old concretizer.
- `hdf5_compat` can be expressed as `+hdf5 ^hdf5@1.8`
- The extra variants on hdf5 shouldn't break conduit
- axom unnecessarily restricts hdf5 version
* conduit: restore hdf5_compat flag
* Add a new test to catch exit code failure
fixes #29226
This introduces a new unit test that checks the return
code of `spack unit-test` when it is supposed to fail.
This is to prevent bugs like the one introduced in #25601
in which CI didn't catch a missing return statement.
In retrospect, it seems that the shell tests we have right
now all go through `tty.die` or similar code paths which
call `sys.exit(a)` explicitly. This new test instead checks
`spack unit-test`, which relies on the return code from
command invocation in case of errors.
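A hedged sketch of what such a check can look like using Spack's `SpackCommand` helper (the
test selected with `-k` is a made-up name standing in for a test that is expected to fail):
```python
from spack.main import SpackCommand

unit_test = SpackCommand("unit-test")


def test_unit_test_reports_failure_exit_code():
    # fail_on_error=False records the exit code instead of raising, so we
    # can assert that a failing test run is reported with a non-zero code.
    unit_test("-k", "deliberately_failing_test", fail_on_error=False)
    assert unit_test.returncode != 0
```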
We can see what is in the bootstrap store with `spack find -b`, and we can clean it with `spack
clean -b`, but we can't do much else with it, and if there are bootstrap issues they can be hard to
debug.
We already have `spack --mock`, which allows you to swap in the mock packages from the command
line. This PR introduces `spack -b` / `spack --bootstrap`, which runs all of spack with
`ensure_bootstrap_configuration()` set. This means that you can run `spack -b find`, `spack -b
install`, `spack -b spec`, etc. to see what *would* happen with bootstrap configuration, to remove
specific bootstrap packages, etc. This will hopefully make developers' lives easier as they deal
with bootstrap packages.
This PR also uses a `nullcontext` context manager. `nullcontext` has been implemented in several
other places in Spack, and this PR consolidates them to `llnl.util.lang`, with a note that we can
delete the function if we ever require a new enough Python.
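For reference, a minimal sketch of the consolidated helper (on new enough Pythons this is just
`contextlib.nullcontext`):
```python
class nullcontext(object):
    """Empty context manager: does nothing on enter or exit."""

    def __enter__(self):
        return None

    def __exit__(self, *exc_info):
        return None
```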
- [x] introduce `spack --bootstrap` option
- [x] consolidate all `nullcontext` usages to `llnl.util.lang`
See https://github.com/spack/spack/issues/25353#issuecomment-1041868116
This commit changes the default behavior of
```
$ spack external find
```
from searching all the possible packages Spack knows about to
searching only for the ones tagged as being a "build-tool".
It also introduces a `--all` option to restore the old behavior.
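As a hypothetical sketch in the Spack package DSL, a package is eligible for the default search
roughly when it carries the build-tool tag and provides detection hooks (the tag name and the
`Ninja` example here are assumptions, not taken from this change):
```python
class Ninja(Package):
    """Hypothetical detectable build tool."""

    # tag marking this package as a build tool (exact tag name assumed here)
    tags = ["build-tools"]

    # regexes matched against executables found on PATH by `spack external find`
    executables = ["^ninja$"]

    @classmethod
    def determine_version(cls, exe):
        # ask the executable for its version so the detected spec can be recorded
        return Executable(exe)("--version", output=str, error=str).strip()
```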
Since Spack does not install external packages, this commit skips them by
default when running stand-alone tests. The assumption is that such packages
have likely undergone an acceptance test process.
However, the tests can be run against installed externals using
```
% spack test run --externals ...
```
`--reuse` was previously handled individually by each command that
needed it. We are growing more concretization options, and they'll
need their own section for commands that support them.
Now there are two concretization options:
* `--reuse`: Attempt to reuse packages from installs and buildcaches.
* `--fresh`: Opposite of reuse -- traditional spack install.
To handle these, this PR adds a `ConfigSetAction` for `argparse`, so
that you can write argparse code like this:
```
subgroup.add_argument(
'--reuse', action=ConfigSetAction, dest="concretizer:reuse",
const=True, default=None,
help='reuse installed dependencies/buildcaches when possible'
)
```
With this, you don't need to add logic to pull the argument out and
handle it; the `ConfigSetAction` just does it for you. This can probably
be used to clean up some other commands later, as well.
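A hedged sketch of how such an action can be implemented (the real `ConfigSetAction` may differ
in its details):
```python
import argparse

import spack.config


class ConfigSetAction(argparse.Action):
    """argparse action that writes its constant into a Spack config path
    (e.g. "concretizer:reuse") instead of only storing it on the namespace."""

    def __init__(self, option_strings, dest, const=None, default=None, **kwargs):
        self.config_path = dest  # a config path like "concretizer:reuse"
        super().__init__(
            option_strings, dest, nargs=0, const=const, default=default, **kwargs
        )

    def __call__(self, parser, namespace, values, option_string=None):
        # set e.g. concretizer:reuse = True in a command-line level config scope
        spack.config.set(self.config_path, self.const, scope="command_line")
```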
Code that was previously passing `reuse=True` around everywhere has
been refactored to use config, and config is set from the CLI using
a new `add_concretizer_args()` function in `spack.cmd.common.arguments`.
- [x] Add `ConfigSetAction` to simplify concretizer config on the CLI
- [x] Refactor code so that it does not pass `reuse=True` to every function.
- [x] Refactor commands to use `add_concretizer_args()` and to pass
concretizer config using the config system.
* trilinos: update dependencies
Use the tribits deps to clarify some dependencies, and group some together
using `with` statements, eliminating some transitive conflict duplication.
* trilinos: Restrict cuda incompatibility
* e4s: vastly reduce number of packages in trilinos-cuda build
Not clear who the customers of cuda-enabled trilinos are, or what options
they need, or which sets of options conflict...
* e4s: remove ~wrapper from trilinos+cuda
To make it easier to see how package hashes change and how they are computed, add two
commands:
* `spack pkg source <spec>`: dumps source code for a package to the terminal
* `spack pkg source --canonical <spec>`: dumps canonicalized source code for a
package to the terminal. It strips comments, directives, and known-unused
multimethods from the package. It is used to generate package hashes.
* `spack pkg hash <spec>`: This gives the package hash for a particular spec.
It is generated from the canonical source code for the spec.
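As a very rough sketch of the canonicalization idea (Spack's real implementation also strips
directives and unused multimethods, which this does not):
```python
import ast
import hashlib


def canonical_source(package_file):
    """Parse a package.py, drop docstrings, and re-emit normalized source;
    comments disappear automatically when unparsing the AST."""
    with open(package_file) as f:
        tree = ast.parse(f.read())
    for node in ast.walk(tree):
        body = getattr(node, "body", None)
        if (body and len(body) > 1 and isinstance(body[0], ast.Expr)
                and isinstance(body[0].value, ast.Constant)
                and isinstance(body[0].value.value, str)):
            body.pop(0)  # remove the docstring
    return ast.unparse(tree)  # requires Python 3.9+


def package_hash(package_file):
    return hashlib.sha256(canonical_source(package_file).encode()).hexdigest()
```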
- [x] add `spack pkg source` and `spack pkg hash`
- [x] add tests
- [x] fix bug in multimethod resolution with boolean `@when` values
Co-authored-by: Greg Becker <becker33@llnl.gov>
* llvm: make targets a multivalued variant
* Fix the targets variant values
1. Make them lowercase and add a mapping to the CMake equivalent
2. auto -> all
3. Restore composability by using a multivalued variant, so that
`targets=all` and `targets=x86` are combined into `targets=all,x86`,
which is then transformed into `LLVM_TARGETS_TO_BUILD=all` (see the sketch after this list).
* use targets=x86 in iwyu
* Default to nvptx/amdgpu/host arch targets
* default to none
* Update var/spack/repos/builtin/packages/zig/package.py
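A hedged sketch, in the Spack package DSL, of a multi-valued `targets` variant and its CMake
mapping (the value list and the mapping below are illustrative, not the package's exact code):
```python
# inside the llvm package class
variant(
    "targets",
    default="none",
    multi=True,
    values=("none", "all", "x86", "arm", "nvptx", "amdgpu"),
    description="LLVM backends to build",
)


def cmake_args(self):
    # value is a tuple such as ("all", "x86"); "all" wins if present
    requested = self.spec.variants["targets"].value
    cmake_names = {"x86": "X86", "arm": "ARM", "nvptx": "NVPTX", "amdgpu": "AMDGPU"}
    if "all" in requested:
        build = "all"
    else:
        build = ";".join(cmake_names[t] for t in requested if t != "none")
    return [self.define("LLVM_TARGETS_TO_BUILD", build)]
```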
This command pokes the environment, Python interpreter
and bootstrap store to check if dependencies needed by
Spack are available.
If any are missing, it shows a comprehensible message.
This commit introduces the command
spack module tcl setdefault <package>
similar to the one already available for lmod
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
spack monitor now requires authentication, as each build must be associated
with a user, so it no longer makes sense to allow the `--monitor-no-auth` flag;
this commit removes it.
* Fix building container images
Patchelf is bootstrapped from sources, so we cannot
disable that mechanism until a finer selection is
possible in the configuration.
* Build on changes to the Dockerfile
* Don't login to Dockerhub on PRs
This PR is meant to move code with "business logic" from `spack.cmd.buildcache` to appropriate core modules[^1].
Modifications:
- [x] Add `spack.binary_distribution.push` to create a binary package from a spec and push it to a mirror
- [x] Add `spack.binary_distribution.install_root_node` to install only the root node of a concrete spec from a buildcache (may check the sha256 sum if it is passed in as input)
- [x] Add `spack.binary_distribution.install_single_spec` to install a single concrete spec from a buildcache
- [x] Add `spack.binary_distribution.download_single_spec` to download a single concrete spec from a buildcache to a local destination
- [x] Add `Spec.from_specfile` that constructs a spec given the path of a JSON or YAML spec file
- [x] Removed logic from `spack.cmd.buildcache`
- [x] Removed calls to `spack.cmd.buildcache` in `spack.bootstrap`
- [x] Deprecate `spack buildcache copy` with a message that says it will be removed in v0.19.0
[^1]: The rationale is that commands should be lightweight wrappers of the core API, since that helps with both testing and scripting (easier mocking and no need to invoke `SpackCommand`s in a script).
* Add connection specification to mirror creation
This allows each mirror to contain information about the credentials
used to access it.
Update command and tests based on comments:

* Switch to only "long form" flags for the S3 connection information.
* Use the "any" function instead of checking for an empty list when looking
for S3 connection information.
* Split the test to use the access token separately from the access id and key.
* Use the long flag form in the test.

Add `endpoint_url` to the available S3 options:

* Extend the special parameters for an S3 mirror to accept the
`endpoint_url` parameter.
* Add a test.
* Add connection information per URL not per mirror
Expand the mirror-based connection information to be per-URL.
This will allow a user to specify different S3 connection information
for both the fetch and the push URLs.
Add a parameter for "profile", another way of storing the id/secret pair.
* Switch from "access_profile" to "profile"