* Add macOS ML CI stacks
* torchmeta is no longer maintained and requires ancient PyTorch
* Add MXNet
* update darwin aarch64 stacks
* add darwin-aarch64 scoped config.yaml
* remove unnecessary cleanup job
* fix specifications
* fix labels
* fix labels
* fix indent on tags specification
* no tags for trigger jobs
* try overriding tags in stack spack.yaml
* do not use CI_STACK_CONFIG_SCOPES
* incorporate config:install_tree:root: overrides and compiler defs
* copy relevant ci-scoped config settings directly into stack spack.yaml
* remove build-job-remove
* spack ci generate: add debug flag
* include cdash config directly in stack spack.yaml
* customize build-job script section to avoid absolute paths
* add any-job specification
* tags: use aarch64-macos instead of aarch64
* generate tags: use aarch64-macos instead of aarch64
* do not add more padding
* use shared mirror; comment out known failures
* remove any-job
* nproc || true
* comment out specs failing due to bazel from cache codesign issue
---------
Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
* [pcluster pipeline] Use local buildcache instead of upstream spack
Spack currently does not relocate compiler references from upstream spack
installations. When using a buildcache we don't need an upstream spack.
* gcc needs to be installed via postinstall to get correct deps
* quantum-espresso %gcc@12.3.0 hits an ICE on neoverse_{n,v}1
* Force gitlab to pull the new container
* Revert "Force gitlab to pull the new container"
This reverts commit 3af5f4cd88245138992deb2a46c17e6f85858d68.
Seems the gitlab version does not yet support "pull_policy" in .gitlab-ci.yml
* Gitlab keeps picking up the wrong container. Renaming it
* Update containers once more after failed build
Add aws-pcluster[-aarch64] stacks. These stacks build packages defined in
https://github.com/spack/spack-configs/tree/main/AWS/parallelcluster
They use a custom container from https://github.com/spack/gitlab-runners which
includes necessary ParallelCluster software to link and build as well as an
upstream spack installation with current GCC and dependencies.
Intel and ARM software is installed and used during the build stage but removed
from the buildcache before the signing stage.
Files `configs/linux/{arch}/ci.yaml` select the necessary providers in order to
build for specific architectures (icelake, skylake, neoverse_{n,v}1).
* CI: Expand E4S ROCm stack to include missing DaV packages
Ascent: Fixup for VTK-m with Kokkos backend
* DaV SDK: Removed duplicated openmp variant for ascent
* Drop visit and add conflict for Kokkos
* E4S: Drop ascent from CUDA builds
Ensure that requirements `packages:*:require:@x` and preferences `packages:*:version:[x]`
fail concretization when no version defined in the package satisfies `x`. This always holds
except for git versions -- they are defined on the fly.
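A minimal console sketch of the new behavior, assuming a hypothetical requirement that no zlib version satisfies (the exact quoting of the requirement string may vary):
```console
$ spack config add "packages:zlib:require:'@999'"
$ spack spec zlib   # now fails to concretize instead of silently accepting the nonexistent version
```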
* gitlab ci: release fixes and improvements
- use rules to reduce boilerplate in .gitlab-ci.yml
- support copy-only pipeline jobs
- make pipelines for release branches rebuild everything
- make pipelines for protected tags copy-only
* gitlab ci: remove url changes used in testing
* gitlab ci: tag mirrors need public key
Make sure that mirrors associated with release branches and tags
contain the public key needed to verify the signed binaries. This
also ensures that when stack-specific mirror contents are copied
to the root, the root mirror has the public key as well.
* review: be more specific about tags, curl flags
* Make the check in ci.yaml consistent with the .gitlab-ci.yml
---------
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
The flags --mirror-name / --mirror-url / --directory were deprecated in
favor of just passing a positional name, url or directory, and letting spack
figure it out.
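A hedged before/after sketch (the mirror name and spec are illustrative):
```console
# before (deprecated flags)
$ spack buildcache create --mirror-name my-mirror hdf5
# after: pass the mirror name, URL or directory positionally
$ spack buildcache create my-mirror hdf5
```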
---------
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
`spack buildcache create` is a misnomer because it's the only way to push to
an existing buildcache (and it in fact calls binary_distribution.push).
Also we have `spack buildcache update-index` but for create the flag is
`--rebuild-index`, which is confusing (and also... why "rebuild"
something if the command is "create" in the first place, that implies it
wasn't there to begin with).
So, after this PR, you can use either
```
spack buildcache create --rebuild-index
```
or
```
spack buildcache push --update-index
```
Also, alias `spack buildcache rebuild-index` to `spack buildcache
update-index`.
Paths with spaces are an issue on Windows and our current PowerShell
scripts are not sufficiently hardened against their use.
This PR removes problematic cmdlets that do not work well with paths
containing spaces and adds escaped quotes in other areas where this could be an
issue.
* DaV SDK: Enable ParaView raytracing within the SDK
* CI: Drop swr testing from Data Vis SDK
* ISPC: extend LLVM requirement to main
* DaV SDK: Disallow concretizing develop unifyfs
No longer needed after mochi-margo patch
* CI: Fixup docs for bootstrap.
* CI: Add compatibility shim
* Add an update method for CI
Update requires manually renaming the section to `ci`. After
this patch, both updating and continuing to use the deprecated `gitlab-ci`
section should be possible (see the usage sketch after this list).
* Fix typos in generate warnings
* Fixup CI schema validation
* Add unit tests for legacy CI
* Add deprecated CI stack for continuous testing
* Allow updating gitlab-ci section directly with env update
* Make warning give good advice for updating gitlab-ci
* Fix typo in CI name
* Remove white space
* Remove unneeded component of deprecated-ci
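A hedged sketch of the update flow (the environment name is illustrative):
```console
# spack.yaml still contains the deprecated gitlab-ci: section
$ spack env update my-env   # updates the manifest to the new schema, converting it to ci:
```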
* ECP-SDK: enable hdf5 VOL adapters
- When +hdf5, enable VOL adapters suitable for the SDK.
- Each VOL package must prepend to the HDF5_PLUGIN_PATH.
- hdf5: 1.13.3 will break existing VOL packages, constrain
VOLs related to SDK and add note to keep 1.13.2 available.
- hdf5-vol-async:
- Do not set HDF5_VOL_CONNECTOR, consumers must opt-in.
- Enforce DAG constraints on MPI to require threaded version.
- Depend on an explicit version of argobots to relax
concretization issues in other spack environments.
- paraview: fix compiler flag usage for the 110 ABI (followup to #33617).
* ECP Data and Vis: Add constraints for HDF5 VOLs
* CI: HDF5 1.14 builds without VisIt
* hdf5-vol-async: Update docstring
---------
Co-authored-by: Stephen McDowell <stephen.mcdowell@kitware.com>
- Update default image to Ubuntu 22.04 (previously it was still Ubuntu 18.04)
- Optionally use depfiles to install the environment within the container (see the usage sketch after this list)
- Allow extending Dockerfile Jinja2 template
- Allow extending Singularity definition file Jinja2 template
- Deprecate previous options to add extra instructions
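A minimal usage sketch of recipe generation (the new options are configured under the `container:` section of the environment's spack.yaml):
```console
$ spack -e . containerize > Dockerfile
```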
Update tcl and lmod modulefile templates to provide more information in the
help message (name, version and target), as is done in whatis for lmod
modulefiles.
Adapt tcl and lmod modulefile templates to generate append-path or
remove-path commands in the modulefile when, respectively, append_flags or
remove_flags commands are defined in the package's run environment.
Fixes #10299.
Simplify environment modification block in modulefile Tcl template by
always setting a path delimiter to the prepend-path, append-path and
remove-path commands.
Remove the --delim option from the setenv command, as this command does not
accept such an option.
Update test_prepend_path_separator test to explicitly check the 6
path-like commands that should be present in generated modulefile.
Example one:
```
spack install --add x y z
```
is equivalent to
```
spack add x y z
spack concretize
spack install --only-concrete
```
where `--only-concrete` installs without modifying spack.yaml/spack.lock
Example two:
```
spack install
```
concretizes current spack.yaml if outdated and installs all specs.
Example three:
```
spack install x y z
```
concretizes current spack.yaml if outdated and installs *only* concrete
specs in the environment that match abstract specs `x`, `y`, or `z`.
Adapt tcl modulefile template to call "module load" on autoload
dependencies without testing whether the dependency is already loaded.
The is-loaded test is not necessary, as module commands know how to cope
with an already loaded module. With environment-modules 4.2+ (released
in 2018) it is also important to have this "module load" command even if the
dependency is already loaded, in order to record that the modulefile
declares such a dependency. This is important if you want to keep a
consistent environment when a dependent module is unloaded.
The "Autoloading" verbose message is also removed, as recent module
commands will report such information to the user (depending on the
verbosity configured for the module command).
This change has been tested successfully with Modules 3.2 (EL7), 4.5 (EL8)
and 5.2 (latest) and also with Lmod 7 and 8 (as it is mentioned in
Spack docs that Lmod can be used along with tcl modules). Dependencies
are correctly loaded or unloaded, whether or not they were already loaded.
This change fixes a Tcl quoting issue introduced in #32853.
Fixes #19155.
* py-pytorch-lightning: add v2.0.0
* py-lightning-utilities: add v0.8.0
* Update all PyTorch packages
* Open-CE does not yet have patches for PyTorch 2 on ppc64le
This adds a new mode for `concretizer:reuse` called `dependencies`,
which only reuses dependencies. Currently, `spack install foo` will
reuse older versions of `foo`, which might be surprising to users.
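A hedged sketch of opting into the new mode:
```console
$ spack config add concretizer:reuse:dependencies
$ spack install foo   # foo itself is concretized fresh; only its dependencies may be reused
```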
* ci: version bump for ghcr.io/spack/e4s-amazonlinux-2
This new image comes with GnuPG v2.4.0
* py-cython: upper bounds for Python versions
* fix py-gevent nonsense
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
* CI configuration boilerplate reduction and refactor
Configuration:
- New notation for list concatenation (prepend/append)
- New notation for string concatenation (prepend/append)
- Break out configuration files for: ci.yaml, cdash.yaml, view.yaml
- Spack CI section refactored to improve self-consistency and
composability
- Scripts are now lists of lists and/or lists of strings
- Job attributes are now listed under a precedence-ordered list that is
composed/merged using Spack config merge rules.
- "service-jobs" are identified explicitly rather than as a batch
CI:
- Consolidate common, platform, and architecture configurations for all CI stacks into composable configuration files
- Make padding consistent across all stacks (256)
- Merge all package -> runner mappings to be consistent across all
stacks
Unit Test:
- Refactor CI module unit tests for the refactored configuration
Docs:
- Add docs for new notations in configuration.rst
- Rewrite docs on CI pipelines to be consistent with refactored CI
workflow
* Script verbose environ, dev bootstrap
* Port #35409
By setting the traversal depth to 1, only specs matching the changed
package and direct dependents of those (and of course all dependencies
of that set) are removed from pruning candidacy.
* Style: black 23, skip magic trailing commas
* isort should use same line length as black
* Fix unused import
* Update version of black used in CI
* Update new packages
* Update new packages
* e4s: restore builds
* gitlab ci: allow UO to build protected binaries for signing
* use newer image; comment out failing builds
* gitlab-ci: Some tweaks for e4s power builds
- fix tags (no longer require generate jobs to run on aws)
- fix resource requests for generation jobs
- remove SPACK_SIGNING_KEY from protected power build jobs
- update UO signing key path
- change the CDash build group to reflect stack name
- retry pipeline generation jobs *always*
* correct double packages: section
* gitlab-ci:script: modernize
* remove new gnu make, not for ppc64le
---------
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Since SPACK_PACKAGE_IDS is now also "namespaced" with <prefix>, it makes
more sense to call the flag `--make-prefix` and alias the old flag
`--make-target-prefix` to it.
With the new variable [prefix/]SPACK_PACKAGE_IDS you can conveniently execute
things after each successful install.
For example, push just-built packages to a buildcache:
```
SPACK ?= spack
export SPACK_COLOR = always
MAKEFLAGS += -Orecurse
MY_BUILDCACHE := $(CURDIR)/cache
.PHONY: all clean
all: push
ifeq (,$(filter clean,$(MAKECMDGOALS)))
include env.mk
endif
# the relevant part: push has *all* example/push/<pkg identifier> as prereqs
push: $(addprefix example/push/,$(example/SPACK_PACKAGE_IDS))
	$(SPACK) -e . buildcache update-index --directory $(MY_BUILDCACHE)
	$(info Pushed everything, yay!)
# and each example/push/<pkg identifier> has the install target as prereq,
# and the body can use target local $(HASH) and $(SPEC) variables to do
# things, such as pushing to a build cache
example/push/%: example/install/%
	@mkdir -p $(dir $@)
	$(SPACK) -e . buildcache create --allow-root --only=package --unsigned --directory $(MY_BUILDCACHE) /$(HASH) # push $(SPEC)
	@touch $@
spack.lock: spack.yaml
	$(SPACK) -e . concretize -f
env.mk: spack.lock
	$(SPACK) -e . env depfile -o $@ --make-target-prefix example
clean:
	rm -rf spack.lock env.mk example/
```
With this change we get the invariant that `mirror.fetch_url` and
`mirror.push_url` return valid URLs, even when the backing config
file is actually using (relative) paths with potentially `$spack` and
`$env` like variables.
Secondly it avoids expanding mirror path / URLs too early,
so if I say `spack mirror add name ./path`, it stays `./path` in my
config. When it's retrieved through MirrorCollection() we
expand it to say `file://<env dir>/path` if `./path` was set in an
environment scope.
Thirdly, the interface is simplified for the relevant buildcache
commands, so it's more like `git push`:
```
spack buildcache create [mirror] [specs...]
```
`mirror` is either a mirror name, a path, or a URL.
Resolving the relevant mirror goes as follows:
- If it contains either / or \ it is used as an anonymous mirror with
path or url.
- Otherwise, it's interpreted as a named mirror, which must exist.
This helps to guard against typos, e.g. typing `my-mirror` when there
is no such named mirror now errors with:
```
$ spack -e . buildcache create my-mirror
==> Error: no mirror named "my-mirror". Did you mean ./my-mirror?
```
instead of creating a directory in the current working directory. I
think this is reasonable, as the alternative (requiring that a local dir
exists) feels a bit pedantic in the general case -- spack is happy to
create the build cache dir when needed, saving a `mkdir`.
The old (now deprecated) format will still be available in Spack 0.20,
but is scheduled to be removed in 0.21:
```
spack buildcache create (--directory | --mirror-url | --mirror-name) [specs...]
```
This PR also touches `tmp_scope` in tests, because it didn't really
work for me, since spack fixes the possible --scope values once and
for all across tests, so tests failed when run out of order.
Sometimes I just want to know how many packages of a certain type there are.
- [x] add `--count` option to `spack list` that outputs the number of packages that
*would* be listed.
```console
> spack list --count
6864
> spack list --count py-
2040
> spack list --count r-
1162
```
* paraview: add `rocm` variant
This conflicts with CUDA and requires at least ParaView 5.11.0. More
dependencies are also needed.
* E4S: Add ParaView for ROCm and CUDA stacks
* DAV SDK: Update ParaView version and GPU variants
* Verify using hipcc vs amdclang++ for newer hip
Co-authored-by: Ben Boeckel <ben.boeckel@kitware.com>
Gitlab does not merge lists when a job extends two other definitions
that include the same list (e.g. tags). Also, it merges dictionaries
as long as the keys are distinct, but just takes the last mentioned
value when there are key collisions.
This change makes sure that when different tags are needed by a
pipeline, the ones we want are actually provided. It also changes
the example stack to better follow this pattern so we do not lead
developers astray in the future.
`spack graph` has been reworked to use:
- Jinja templates
- builder objects to construct the template context when DOT graphs are requested.
This allowed adding a new colored output for DOT graphs that highlights both
the dependency types and the nodes that are needed at runtime for a given spec.
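A hedged usage sketch (the spec is illustrative and the exact spelling of the color flag is an assumption):
```console
$ spack graph --dot --color hdf5 | dot -Tsvg -o hdf5.svg   # --dot emits DOT; --color is assumed here
```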
* ML CI: Linux x86_64
* Update comments
* Rename again
* Rename comments
* Update to match other arches
* No compiler
* Compiler was wrong anyway
* Faster TF
The main issue that's fixed is that Spack passes paths (as strings) to
functions that require URLs. That wasn't an issue on Unix, since there
you can simply concatenate `file://` and `path` and all is good, but on
Windows that gives invalid file URLs. Also on Unix, Spack would not deal with URI encoding like x%20y for file paths.
It also removes Spack's custom url.parse function, which had its own incorrect interpretation of file urls, taking file://x/y to mean the relative path x/y instead of hostname=x and path=/y. Also it automatically interpolated variables, which is surprising for a function that parses URLs.
Instead of all sorts of ad-hoc `if windows: fix_broken_file_url` this PR
adds two helper functions around Python's own `pathname2url` and its inverse.
Also fixes a bug where some `spack buildcache` commands
used `-d` as a flag to mean `--mirror-url` requiring a URL, and others
`--directory`, requiring a path. It is now the latter consistently.
It's very common for us to tell users to grep through the existing Spack packages to
find examples of what they want, and it's also very common for package developers to do
it. Now, searching packages is even easier.
`spack pkg grep` runs grep on all `package.py` files in repos known to Spack. It has no
special options other than the search string; all options passed to it are forwarded
along to `grep`.
```console
> spack pkg grep --help
usage: spack pkg grep [--help] ...
positional arguments:
grep_args arguments for grep
options:
--help show this help message and exit
```
```console
> spack pkg grep CMakePackage | head -3
/Users/gamblin2/src/spack/var/spack/repos/builtin/packages/3dtk/package.py:class _3dtk(CMakePackage):
/Users/gamblin2/src/spack/var/spack/repos/builtin/packages/abseil-cpp/package.py:class AbseilCpp(CMakePackage):
/Users/gamblin2/src/spack/var/spack/repos/builtin/packages/accfft/package.py:class Accfft(CMakePackage, CudaPackage):
```
```console
> spack pkg grep -Eho '(\S*)\(PythonPackage\)' | head -3
AwsParallelcluster(PythonPackage)
Awscli(PythonPackage)
Bueno(PythonPackage)
```
## Return Value
This retains the return value semantics of `grep`:
* 0 for found,
* 1 for not found,
* >1 for error
## Choosing a `grep`
You can set the ``SPACK_GREP`` environment variable to choose the ``grep``
executable this command should use.
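For example (the grep path is illustrative):
```console
$ SPACK_GREP=/usr/local/bin/ggrep spack pkg grep -i cmakepackage
```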
Unit tests on Windows are supposed to pass for any PR to pass CI.
However, the return code for the unit test command was not being
checked, which meant this check was always passing (effectively
disabled). This PR
* Properly checks the result of the unit tests and fails if the
unit tests fail
* Fixes (or disables on Windows) a number of tests which have
"drifted" out of support on Windows since this check was
effectively disabled
At some point the `a` mock package became an `AutotoolsPackage`, and that means it
depends on `gnuconfig` on macOS. This was causing one of our shell tests to fail on
macOS because it was testing for `{a.prefix.bin}:{b.prefix.bin}` in `PATH`, but
`gnuconfig` shows up between them.
- [x] simplify the test to check `spack load --sh a` and `spack load --sh b` separately
This commit reworks the bootstrapping procedure to use Spack environments
as much as possible.
The `spack.bootstrap` module has also been reorganized into a Python package.
A distinction is made among "core" Spack dependencies (clingo, GnuPG, patchelf)
and other dependencies. For a number of reasons, explained in the `spack.bootstrap.core`
module docstring, "core" dependencies are bootstrapped with the current ad-hoc
method.
All the other dependencies are instead bootstrapped using a Spack environment
that lives in a directory specific to the interpreter and the architecture being used.
* CI: Update Data and Vis SDK Stack
* Update image to match target deployments (E4S)
* Enable all packages
* Test supported variants of ParaView and VisIt
* Sensei: Update Python hint for newer cmake
* Sensei: add Python3 hint
This adds super-lazy maintainer mode to `spack checksum`: Instead of
only printing the new checksums to the terminal, `-a` and
`--add-to-package` will add the new checksums to the `package.py` file
and open it in the editor afterwards for final checks.
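For example (the package name is illustrative):
```console
$ spack checksum --add-to-package zlib   # writes the new checksums into package.py and opens it in your editor
```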
Environments and environment views have taken over the role of `spack activate/deactivate`, and we should deprecate these commands for several reasons:
- Global activation is a really poor idea:
- Install prefixes should be immutable, since they can have multiple, unrelated dependents; see below
- Added complexity elsewhere: verification of installations, tarballs for build caches, creation of environment views of packages with unrelated extensions "globally activated"... by removing the feature, it gets easier for people to contribute, and we'd end up with fewer bugs due to edge cases.
- Environments accomplish the same thing for non-global "activation", i.e. `spack view`, but better.
Also we write in the docs:
```
However, Spack global activations have two potential drawbacks:
#. Activated packages that involve compiled C extensions may still
need their dependencies to be loaded manually. For example,
``spack load openblas`` might be required to make ``py-numpy``
work.
#. Global activations "break" a core feature of Spack, which is that
multiple versions of a package can co-exist side-by-side. For example,
suppose you wish to run a Python package in two different
environments but the same basic Python --- one with
``py-numpy@1.7`` and one with ``py-numpy@1.8``. Spack extensions
will not support this potential debugging use case.
```
Now that environments are established and views can take over the role of activation
non-destructively, we can remove global activation/deactivation.
"spack install foo" no longer adds package "foo" to the environment
(i.e. to the list of root specs) by default: you must specify "--add".
Likewise "spack uninstall foo" no longer removes package "foo" from
the environment: you must specify --remove. Generally this means
that install/uninstall commands will no longer modify the user's list
of root specs (which many users found problematic: they had to
deactivate an environment if they wanted to uninstall a spec without
changing their spack.yaml description).
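A short sketch of the new default behavior (environment and package names are illustrative):
```console
$ spack -e myenv install foo             # installs foo without touching the root specs
$ spack -e myenv install --add foo       # installs foo and adds it as a root spec
$ spack -e myenv uninstall --remove foo  # uninstalls foo and removes it from the root specs
```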
In more detail: if you have environments e1 and e2, and specs [P, Q, R]
such that P depends on R, Q depends on R, [P, R] are in e1, and [Q, R]
are in e2:
* `spack uninstall --dependents --remove r` in e1: removes R from e1
(but does not uninstall it) and uninstalls (and removes) P
* `spack uninstall -f --dependents r` in e1: will uninstall P, Q, and
R (i.e. e2 will have dependent specs uninstalled as a side effect)
* `spack uninstall -f --dependents --remove r` in e1: this uninstalls
P, Q, and R, and removes [P, R] from e1
* `spack uninstall -f --remove r` in e1: uninstalls R (so it is
"missing" in both environments) and removes R from e1 (note that e1
would still install R as a dependency of P, but it would no longer
be listed as a root spec)
* `spack uninstall --dependents r` in e1: will fail because e2 needs R
Individual unit tests were created for each of these scenarios.
This commit extends the DSL that can be used in packages
to allow declaring that a package uses different build-systems
under different conditions.
It requires each spec to have a single-valued `build_system`
variant. The variant can be used in many contexts to query, manipulate
or select the build system associated with a concrete spec.
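A hedged sketch of using the variant on the command line (specs are illustrative):
```console
$ spack spec zlib build_system=makefile   # select a build system the package supports
$ spack find build_system=cmake           # query installed specs by build system
```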
The knowledge to build a package has been moved out of the
PackageBase hierarchy, into a new Builder hierarchy. Customization
of the default behavior for a given builder can be obtained by
coding a new derived builder in package.py.
The "run_after" and "run_before" decorators are now applied to
methods on the builder. They can also incorporate a "when="
argument to specify that a method is run only when certain
conditions apply.
For packages that do not define their own builder, forwarding logic
is added between the builder and package (methods not found in one
will be retrieved from the other); this PR is expected to be fully
backwards compatible with unmodified packages that use a single
build system.
* backtraces without --debug
Currently `--debug` is too verbose and not-`--debug` gives too little
context about where exceptions are coming from.
So, instead, it'd be nice to have `spack --backtrace` and
`SPACK_BACKTRACE=1` as methods to get something in between: no verbose
debug messages, but always a full backtrace.
This is useful for CI, where we don't want to drown in debug messages
when installing deps, but we do want to get details where something goes
wrong if it goes wrong.
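For example:
```console
$ spack --backtrace install zlib
$ SPACK_BACKTRACE=1 spack install zlib
```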
* completion
When we lose a running pod (possibly due to the loss of a spot instance) or encounter
some other infrastructure-related failure of this job, we need to retry
it. This retries the job the maximum number of times in those cases.
`reuse` and `when_possible` concretization broke the invariant that
`spec[pkg_name]` has unique keys. This invariant is relied on in tons of
places, such as when setting up the build environment.
When using `when_possible` concretization, one may end up with two or
more `perl`s or `python`s among the transitive deps of a spec, because
concretization does not consider build-only deps of reusable specs.
Until the code base is fixed not to rely on this broken property of
`__getitem__`, we should disable reuse in CI.
When installing some/all specs from a buildcache, build edges are pruned
from those specs. This can result in a much smaller effective DAG. Until
now, `spack env depfile` would always generate a full DAG.
This PR adds the `spack env depfile --use-buildcache` flag that was
introduced for `spack install` before. This way, not only can we drop
build edges, but also we can automatically set the right buildcache
related flags on the specific specs that are going to be installed.
This way we get parallel installs of binary deps without redundancy,
which is useful for Gitlab CI.
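A hedged sketch (the `--use-buildcache` value syntax mirrors the one accepted by `spack install`):
```console
$ spack -e . env depfile --use-buildcache=package:never,dependencies:only -o Makefile
$ make -j16
```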
Currently "spack ci generate" chooses the first matching entry in
gitlab-ci:mappings to fill attributes for a generated build-job,
requiring that the entire configuration matrix is listed out
explicitly. This unfortunately causes significant problems in
environments with large configuration spaces, for example the
environment in #31598 (spack.yaml) supports 5 operating systems,
3 architectures and 130 packages with explicit size requirements,
resulting in 1300 lines of configuration YAML.
This patch adds a configuration option to the gitlab-ci schema called
"match_behavior"; when it is set to "merge", all matching entries
are applied in order to the final build-job, allowing a few entries
to cover an entire matrix of configurations.
The default for "match_behavior" is "first", which behaves as before
this commit (only the runner attributes of the first match are used).
In addition, match entries may now include a "remove-attributes"
configuration, which allows matches to remove tags that have been
aggregated by prior matches. This only makes sense to use with
"match_behavior:merge". You can combine "runner-attributes" with
"remove-attributes" to effectively override prior tags.
* env depfile: allow deps only install
- Refactor `spack env depfile` to use a Jinja template, making it a bit
easier to follow as a human being.
- Add a layer of indirection in the generated Makefile through an
`<prefix>/.install-deps/<hash>` target, which allows one to specify
different options when installing dependencies. For example, only
verbose/debug mode on when installing some particular spec:
```
$ spack -e my_env env depfile -o Makefile --make-target-prefix example
$ make example/.install-deps/<hash> -j16
$ make example/.install/<hash> SPACK="spack -d" SPACK_INSTALL_FLAGS=--verbose -j16
```
This could be used to speed up `spack ci rebuild`:
- Parallel install of dependencies from buildcache
- Better readability of logs, e.g. reducing verbosity when installing
dependencies, and splitting logs into deps.log and current_spec.log
* Silence please!
Caches used by repositories don't reference the global spack.repo.path instance
anymore, but get the repository they refer to during initialization.
Spec.virtual now uses the index, and the computation done to build the index
uses Repository.is_virtual_safe.
Code to construct mock packages and mock repository has been factored into
a unique MockRepositoryBuilder that is used throughout the codebase.
Add a debug print for pushing and popping config scopes.
Changed spack.repo.use_repositories so that it can optionally override the previous repos.
spack.repo.use_repositories now updates spack.config.config according to the modifications done.
Removed a peculiar behavior from spack.config.Configuration where push would always
bubble up a scope named command_line if it existed.