* oneapi 2024.0.0 release
* oneapi v2 directory support and some cleanups
* sycl abi change requires 2024 compilers for packages that use sycl
---------
Co-authored-by: Robert Cohn <robert.s.cohn@intel.com>
We have two ways to concretize now:
* `spack concretize` concretizes only the root specs that are not concrete in the environment.
* `spack concretize -f` eliminates all cached concretization data and reconcretizes the *entire* environment.
This PR adds `spack deconcretize`, which eliminates cached concretization data for a spec. This allows
users greater control over what is preserved from their `spack.lock` file and what is reused when not
using `spack concretize -f`. If you want to update a spec installed in your environment, you can call
`spack deconcretize` on it, and that spec and any relevant dependents will be removed from the lock file.
`spack deconcretize` has two options:
* `--root`: limits deconcretization to *specific* roots in the environment. You can use this to
deconcretize exactly one root in a `unify: false` environment. I.e., if roots `foo` and `bar` are
both in the environment and `foo` depends on `bar`, `spack deconcretize --root bar` will *not*
deconcretize `foo`.
* `--all`: deconcretize *all* specs that match the input spec. By default `spack deconcretize`
will complain about multiple matches, like `spack uninstall`.
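For example, here is how the options above combine in practice (the package name `bar` is illustrative):
```console
$ spack deconcretize bar          # complains if more than one spec matches
$ spack deconcretize --all bar    # deconcretize every matching spec
$ spack deconcretize --root bar   # only the root bar, not other roots that depend on it
```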
This changes variant display to use a much more legible format, and to use screen space
much better (particularly on narrow terminals). It also adds color to the variant display
to match other parts of `spack info`.
Descriptions and variant value lists that were frequently squished into a tiny column
before now get close to the full terminal width.
This change also preserves any whitespace formatting present in `package.py`, so package
maintainers can write easier-to-read descriptions of variant values if they want. For
example, `gasnet` has had a nice description of the `conduits` variant for a while, but
it was wrapped and made illegible by `spack info`. That is now fixed and the original
newlines are kept.
Conditional variants are grouped by their when clauses by default, but if you do not
like the grouping, you can display all the variants in order with `--variants-by-name`.
I'm not sure when people will prefer this, but it makes it easier to tell whether a
particular variant is present. I do think grouping by `when` is the better default.
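For example, to list all variants alphabetically rather than grouped by `when` clause:
```console
$ spack info --variants-by-name gasnet
```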
Currently there's some hacky logic in the AppleClang compiler that makes
it also accept `gfortran` as a Fortran compiler if `flang` is not found.
This is guarded by `if sys.platform` checks so that it only applies to
Darwin.
But on Linux the detection of mixed toolchains is highly
requested too, because it's rather annoying to run into a failed build of
`openblas` after dozens of minutes of compiling its dependencies, just
because clang doesn't come with a Fortran compiler.
In particular in CI, where the system compilers may change during system
updates, it's typically impossible to pin compilers in a hand-written
`compilers.yaml` config file: the config will almost certainly be outdated
sooner or later, and maintaining one config file per target machine and
writing logic to select the correct config is rather undesirable too.
---
This PR introduces a flag `spack compiler find --mixed-toolchain` that
fills out missing `fc` and `f77` entries in `clang` / `apple-clang` by
picking the best matching `gcc`.
It is enabled by default on macOS, but not on Linux, matching current
behavior of `spack compiler find`.
The "best matching gcc" logic and compiler path updates are identical to
how compiler path dictionaries are currently flattened "horizontally"
(per compiler id). This just adds logic to do the same "vertically"
(across different compiler ids).
So, with this change on Ubuntu 23.04:
```
$ spack compiler find --mixed-toolchain
==> Added 6 new compilers to /home/harmen/.spack/linux/compilers.yaml
gcc@13.1.0 gcc@12.3.0 gcc@11.4.0 gcc@10.5.0 clang@16.0.0 clang@15.0.7
==> Compilers are defined in the following files:
/home/harmen/.spack/linux/compilers.yaml
```
you finally get:
```
compilers:
- compiler:
    spec: clang@=15.0.7
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags: {}
    operating_system: ubuntu23.04
    target: x86_64
    modules: []
    environment: {}
    extra_rpaths: []
- compiler:
    spec: clang@=16.0.0
    paths:
      cc: /usr/bin/clang-16
      cxx: /usr/bin/clang++-16
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags: {}
    operating_system: ubuntu23.04
    target: x86_64
    modules: []
    environment: {}
    extra_rpaths: []
```
The "best gcc" is automatically default system gcc, since it has no
suffixes / prefixes.
This PR implements the concept of "default environment", which doesn't have to be
created explicitly. The aim is to lower the barrier for adopting environments.
To (create and) activate the default environment, run
```
$ spack env activate
```
This mimics the behavior of
```
$ cd
```
which brings you to your home directory.
This is not a breaking change, since `spack env activate` without arguments
currently errors. It is similar to the already existing `spack env activate --temp`
command which always creates an env in a temporary directory, the difference
is that the default environment is a managed / named environment named `default`.
The name `default` is not a reserved name, it's just that `spack env activate`
creates it for you if you don't have it already.
With this change, you can get started with environments faster:
```
$ spack env activate [--prompt]
$ spack install --add x y z
```
instead of
```
$ spack env create default
==> Created environment 'default' in /Users/harmenstoppels/spack/var/spack/environments/default
==> You can activate this environment with:
==> spack env activate default
$ spack env activate [--prompt] default
$ spack install --add x y z
```
Notice that Spack supports switching (but not stacking) environments, so the
parallel with `cd` is pretty clear:
```
$ spack env activate named_env
$ spack env status
==> In environment named_env
$ spack env activate
$ spack env status
==> In environment default
```
This completes to `spack concretize`:
```
spack conc<tab>
```
but this still gets hung up on the difference between `concretize` and `concretise`:
```
spack -e . conc<tab>
```
We were checking `"$COMP_CWORD" = 1`, which tracks the word on the command line
including any flags and their args, but we should track `"$COMP_CWORD_NO_FLAGS" = 1` to
figure out if the arg we're completing is the first real command.
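A minimal sketch of the corrected check (`SPACK_COMPREPLY` and the helper are illustrative stand-ins for the generated completion code):
```bash
# Only offer top-level subcommands when the word under the cursor is the
# first non-flag word after `spack`, regardless of how many flags precede it.
if [ "$COMP_CWORD_NO_FLAGS" = 1 ]; then
    SPACK_COMPREPLY="$(_spack_subcommands)"   # hypothetical helper
fi
```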
This PR adds support for including separate definitions from `spack.yaml`.
Supporting the inclusion of files with definitions enables users to make
curated/standardized collections of packages that can be re-used by others.
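A hedged sketch of what this enables (file and list names are illustrative):
```yaml
# definitions.yaml -- a curated, shareable list of packages
definitions:
- my_packages: [zlib, hdf5, cmake]

# spack.yaml -- pulls the definitions in via include
spack:
  include:
  - definitions.yaml
  specs:
  - $my_packages
```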
* e4s ci: add exago +cuda, +rocm builds
* exago: rename 5-18-2022-snapshot to snapshot.5-18-2022
* disable exago +rocm for non-external rocm ci install
* note that hiop +rocm fails to find hip libraries when they are spack-installed
Credits to @ChristianKniep for advocating the idea of OCI image layers
being identical to spack buildcache tarballs.
With this you can configure an OCI registry as a buildcache:
```console
$ spack mirror add my_registry oci://user/image # Dockerhub
$ spack mirror add my_registry oci://ghcr.io/haampie/spack-test # GHCR
$ spack mirror set --push --oci-username ... --oci-password ... my_registry # set login credentials
```
which should result in this config:
```yaml
mirrors:
  my_registry:
    url: oci://ghcr.io/haampie/spack-test
    push:
      access_pair: [<username>, <password>]
```
It can be used like any other registry:
```
spack buildcache push my_registry [specs...]
```
It will upload the Spack tarballs in parallel, as well as manifest + config
files, so that the binaries are compatible with `docker pull` or `skopeo copy`.
In fact, a base image can be added to get a _runnable_ image:
```console
$ spack buildcache push --base-image ubuntu:23.04 my_registry python
Pushed ... as [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
$ docker run --rm -it [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
```
which should really be a game changer for sharing binaries.
Further, all content-addressable blobs that are downloaded and verified
will be cached in Spack's download cache. This should make repeated
`push` commands faster, as well as `push` followed by a separate
`update-index` command.
An end to end example of how to use this in Github Actions is here:
**https://github.com/haampie/spack-oci-buildcache-example**
TODO:
- [x] Generate environment modifications in config so PATH is set up
- [x] Enrich config with Spack's `spec` json (this is allowed in the OCI specification)
- [x] When ^ is done, add logic to create an index in say `<image>:index` by fetching all config files (using OCI distribution discovery API)
- [x] Add logic to use object storage in an OCI registry in `spack install`.
- [x] Make the user pick the base image for generated OCI images.
- [x] Update buildcache install logic to deal with absolute paths in tarballs
- [x] Merge with `spack buildcache` command
- [x] Merge #37441 (included here)
- [x] Merge #39077 (included here)
- [x] #39187 + #39285
- [x] #39341
- [x] Not a blocker: #35737 fixes correctness run env for the generated container images
NOTE:
1. `oci://` is unfortunately taken, so it's being abused in this PR to mean "oci type mirror". `skopeo` uses `docker://` which I'd like to avoid, given that classical docker v1 registries are not supported.
2. this is currently `https`-only, given that basic auth is used to login. I _could_ be convinced to allow http, but I'd prefer not to, given that for a `spack buildcache push` command multiple domains can be involved (auth server, source of base image, destination registry). Right now, no urllib http handler is added, so redirects to https and auth servers with http urls will simply result in a hard failure.
CAVEATS:
1. Signing is not implemented in this PR. `gpg --clearsign` is not the nicest solution, since (a) the spec.json is merged into the image config, which must be valid json, and (b) it would be better to sign the manifest (referencing both config/spec file and tarball) using more conventional image signing tools
2. `spack.binary_distribution.push` is not yet implemented for the OCI buildcache, only `spack buildcache push` is. This is because I'd like to always push images + deps to the registry, so that it's `docker pull`-able, whereas in `spack ci` we really want to push an individual package without its deps to say `pr-xyz`, while its deps reside in some `develop` buildcache.
3. The `push -j ...` flag only works for OCI buildcache, not for others
Update the Tcl modulefile template to simplify generated `append-path`,
`prepend-path` and `remove-path` commands and improve their readability.
If the path element delimiter is the colon character, the `--delim`
option is not set, since colon is the default delimiter value.
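For illustration, a sketch of the generated modulefile content before and after (paths are hypothetical):
```tcl
# before: --delim emitted even for the default colon delimiter
prepend-path --delim ":" PATH "/opt/spack/foo/bin"
# after: --delim omitted when the delimiter is a colon
prepend-path PATH "/opt/spack/foo/bin"
```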
* ci: don't register detectable compilers
Because they go out of sync...
* remove intel compiler, it can be detected too
* Do not run spack compiler find since compilers are registered in concretize job already
* trilinos: work around +stokhos +cuda +superlu-dist bug due to EMPTY macro
* gromacs +cp2k: build in CI
* libxsmm: x86 only
* attempt to fix dbcsr + new mpich
* use c11 standard
* gromacs: does not depend on dbcsr
* cp2k: build with cmake in CI, so that dbcsr is a separate package
* cp2k: cmake patches for config files and C/C++ std
* cp2k: remove unnecessary constraints due to patch
This adds a `SetupContext` class which is responsible for setting
package.py module globals, and computing the changes to environment
variables for the build, test or run context.
The class uses `effective_deptypes` which takes a list of specs (e.g. single
item of a spec to build, or a list of environment roots) and a context
(build, run, test), and outputs a flat list of specs that affect the
environment together with a flag in what way they do so. This list is
topologically ordered from root to leaf, so that one can be assured that
dependents override variables set by dependencies, not the other way
around.
This is used to replace the logic in `modifications_from_dependencies`,
which had several issues: missing calls to `setup_run_environment`, and
an incorrect order in which operations were applied.
Further, it should improve performance a bit in certain cases, since
`effective_deptypes` runs in O(v + e) time, whereas `spack env activate`
currently can take up to O(v^2 + e) time due to loops over roots. Each
edge in the DAG is visited once by calling `effective_deptypes` with
`env.concrete_roots()`.
By marking and propagating flags through the DAG, this commit also fixes
a bug where Spack wouldn't call `setup_run_environment` for runtime
dependencies of link dependencies. And this PR ensures that Spack
correctly sets up the runtime environment of direct build dependencies.
Regarding test dependencies: in a build context they are build-time
test deps, whereas in a test context they are install-time test deps.
Since there are no means to distinguish the two kinds of test deps,
they're treated as both.
Further changes:
- all `package.py` module globals are guaranteed to be set before any of the
`setup_(dependent)_(run|build)_env` functions is called
- traversal order during setup: first the group of externals, then the group
of non-externals, with specs in each group traversed topologically (dependencies
are set up before dependents)
- modules: only ever call `setup_dependent_run_environment` of *direct* link/run
type deps
- the marker in `set_module_variables_for_package` is dropped, since we should
call the method once per spec. This allows us to set only a cheap subset of
globals on the module: for example it's not necessary to compute the expensive
`cmake_args` and the like if the spec under consideration is not the root node to be
built.
- `spack load`'s `--only` is deprecated (it has no effect now), and `spack load x`
now means: do everything that's required for `x` to work at runtime, which
requires runtime deps to be setup -- just like `spack env activate`.
- `spack load` no longer loads build deps (of build deps) ...
- `spack env activate` on partially installed or broken environments: this is all
or nothing now. If some spec errors during setup of its runtime env, you'll only
get the unconditional variables plus a warning saying that the runtime changes for those
specs couldn't be applied.
- Remove traversal in upward direction from `setup_dependent_*` in packages.
Upward traversal may iterate to specs that aren't children of the roots
(e.g. zlib / python have hundreds of dependents, only a small fraction is
reachable from the roots. Packages should only modify the direct dependent
they receive as an argument)
Improve how mirrors are used in gitlab ci, where we have until now thought
of them as only a string.
By configuring ci mirrors ahead of time using the proposed mirror templates,
and by taking advantage of the expressiveness that spack now has for mirrors,
this PR will allow us to easily switch the protocol/url we use for fetching
binary dependencies.
This change also deprecates some gitlab functionality and marks it for
removal in Spack 0.23:
- arguments to "spack ci generate":
* --buildcache-destination
* --copy-to
- gitlab configuration options:
* enable-artifacts-buildcache
* temporary-storage-url-prefix
Currently `spack env activate --with-view` exists, but is a no-op.
So, it is not too much of a breaking change to make this redundant flag
accept a value `spack env activate --with-view <name>` which activates
a particular view by name.
The view name is stored in `SPACK_ENV_VIEW`.
This also fixes an issue where deactivating a view that was activated
with `--without-view` possibly removes entries from PATH, since now we
keep track of whether the default view was "enabled" or not.
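For example (view and environment names are illustrative):
```console
$ spack env activate --with-view my_view myenv
$ echo $SPACK_ENV_VIEW
my_view
```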
* Allow branching out of the "generic build" unification set
For cases like the one in https://github.com/spack/spack/pull/39661
we need to relax rules on unification sets.
The issue is that, right now, nodes in the "generic build" unification
set are unified together with their build dependencies. This was done
out of caution to avoid the risk of circular dependencies, which would
ultimately cause a very slow solve.
For build-tools like Cython, however, the build dependency is masked
by a long chain of "build, run" dependencies that belong in the
"generic build" unification space.
To allow splitting on cases like this, we relax the rule disallowing
branching out of the "generic build" unification set.
* Fix issue with pure build virtual dependencies
Pure build virtual dependencies were not accounted for properly in the
list of possible virtuals. This caused some facts connecting virtuals
to the corresponding providers to not be emitted, which in the end
led to unsat problems.
* Fixed a few issues in packages
py-gevent: restore dependency on py-cython@3
jsoncpp: fix typo in build dependency
ecp-data-vis-sdk: update spack.yaml and cmake recipe
py-statsmodels: add v0.13.5
* Make dependency on "blt" of type "build"
* e4s amd64 gcc ci stack: sync with e4s-23.08
* e4s amd64 oneapi ci stack: sync with e4s-23.08
* e4s ppc64 gcc ci stack: sync with e4s-23.08
* add new ci stack: e4s amd64 gcc w/ external rocm
* add new ci stack: e4s arm gcc ci
* updates
* py-scipy: -fvisibility issue is resolved in 2023.1.0: #39464
* paraview oneapi fails
* comment out pkgs that fail to build on power
* fix arm stack name
* fix cabana +cuda specification
* comment out failing specs
* visit fails build on arm
* comment out slepc arm builds due to make issue
* comment out failing dealii arm builds
This PR adds a new audit sub-command to check that detection of relevant packages
is performed correctly in a few scenarios mocking real use-cases. The data for each
package being tested is in a YAML file called detection_test.yaml alongside the
corresponding package.py file.
This is to allow encoding detection tests for compilers and other widely used tools,
in preparation for compilers as dependencies.
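A hedged sketch of what such a detection_test.yaml might look like (the exact schema may differ; values are illustrative):
```yaml
paths:
- layout:
  - executables:
    - "bin/gcc"
    # fake executable whose output mimics the real tool
    script: |
      echo "gcc (GCC) 10.5.0"
  results:
  - spec: "gcc@10.5.0 languages=c"
```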
* e4s cray rhel stack: expand to full spec list
* comment out gasnet; require %gcc for unzip
* require openblas@0.3.20 to get around %cce error; follow up with issue report
* comment out failing specs;
* comment out axom and xyce due to errors
* improve clarity of failing specs
* Add OIDC tokens to gitlab-ci jobs
This should allow us to start issuing just-in-time generated
credentials for CI jobs that need to modify binary mirrors. The "aud"
claim of the token describes what the token is allowed to do. The
claim is verified against a set of rules on the IAM role using signed
information from GitLab. See spack-infrastructure for the claim
verification logic.
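A hedged sketch of how a job might request such a token in `.gitlab-ci.yml` (job name and audience value are illustrative):
```yaml
rebuild-job:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: protected-binary-mirror   # hypothetical audience claim
  script:
  - spack ci rebuild
```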
---------
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
This commit replaces the internal representation of deptypes with `int`, which is more compact
and faster to operate with.
Double loops like:
```
any(x in ys for x in xs)
```
are replaced by the constant-time operation `bool(xs & ys)`, where `xs` and `ys` are dependency
types represented as bitmasks. Global constants are exposed for convenience in `spack.deptypes`.
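A minimal, self-contained sketch of the technique (the constant names are assumptions; the real ones live in `spack.deptypes`):
```python
# Each dependency type is one bit; a set of types is the bitwise OR of members.
BUILD, LINK, RUN, TEST = 1, 2, 4, 8

xs = BUILD | LINK   # deptypes on one edge
ys = LINK | RUN     # deptypes we are querying for

# Old: any(x in ys for x in xs) over tuples of strings -- a double loop.
# New: one constant-time bitwise AND.
print(bool(xs & ys))  # True, since both sets contain LINK
```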
Currently the configuration of the pipeline is such that
there are multiple "optimal" solutions. This is due to
the pipeline making ~lld the default for LLVM, while leaving
+libomptarget at its default from package.py.
Since the LLVM recipe has:
conflicts("~lld", "+libomptarget")
clingo is forced to change one of the two defaults, and
the penalty for doing so is the same for each. Hence, it
is random whether we'll get +libomptarget+lld
or ~libomptarget~lld.
This fixes the issue by also changing the default for libomptarget.
Instead of pointing to the image on DockerHub, which rate limits us and
causes pipeline failures during busy times, use the version at ghcr.
And we might as well use the ghcr version everywhere else too.
Smart alias completion introduced in #39499 wasn't as smart as it needed to be, and
would complete any invalid command prefix and some env names with alias names.
- [x] don't complete aliases if there are no potential completions
e.g., don't convert `spack isnotacommand` -> `spack concretize`
- [x] don't complete with an alias if we're not looking at a top-level subcommand.
* Perform external spec detection with multiple workers
The logic to perform external spec detection has been refactored
into classes. These classes use the GoF "template" pattern to account
for the small differences between searching for "executables" and
for "libraries", while unifying the larger part of the algorithm.
A ProcessPoolExecutor is used to parallelize the work.
* Speed-up external find by tagging detectable packages automatically
Querying packages by tag is much faster than inspecting the repository,
since tags are cached. This commit adds a "detectable" tag to every
package that implements the detection protocol, and external detection
uses it to search for packages.
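Assuming the new tag is queryable like any other package tag (an assumption; the PR text doesn't show this), the detectable packages could be listed with:
```console
$ spack list --tag detectable
```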
* Pass package names instead of package classes to workers
The slowest part of the search is importing the Python modules
associated with candidate packages. The import is done serially
before we distribute the work to the pool of executors.
This commit pushes the import of the Python module to the job
performed by the workers, and passes just the name of the packages
to the executors.
In this way imports can be done in parallel.
* Rework unit-tests for Windows
Some unit tests were doing a full end-to-end run of a command
just to check input handling. Make these tests more
focused by stressing just the specific function under test.
Mark 2 tests as xfail on Windows; they will be fixed
by a PR in the queue. The tests are failing because we
monkeypatch internals in the parent process, but the
monkeypatching is not done in the "spawned" child
process.
* Write timing information for installs from cache
* CI: aggregate and upload install_times.json to artifacts
* CI: Don't change root directory for artifact generation
* Flat event based timer variation
Event based timer allows for easily starting and stopping timers without
wiping sub-timer data. It also requires less branching logic when
tracking time.
The json output is non-hierarchical in this version and hierarchy is
less rigidly enforced between starting and stopping.
* Add and write timers for top level install
* Update completion
* remove unused subtimer api
* Fix unit tests
* Suppress timing summary option
* Save timers summaries to user_data artifacts
* Remove completion from fish
* Move spack python to script section
* Write timer correctly for non-cache installs
* Re-add hash to timer file
* Fish completion updates
* Fix null timer yield value
* fix type hints
* Remove timer-summary-file option
* Add "." in front of non-package timer name
---------
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
These commands are currently broken on powershell (Windows) due to
improper use of the Invoke-Command cmdlet and a lack of direct
support for the `--pwsh` argument in `spack load`, `spack unload`,
and `spack env deactivate`.
Setting the undocumented variable SPACK_CONCRETIZER_REQUIRE_CHECKSUM
now causes the solver to avoid accounting for versions that are not checksummed.
This feature is used in CI to avoid spurious concretization against e.g. develop branches.
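For example (any non-empty value should do, assuming the solver only checks that the variable is set):
```console
$ SPACK_CONCRETIZER_REQUIRE_CHECKSUM=1 spack concretize --force
```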
`compgen -W` does not behave the same way in zsh as it does in bash; it seems not to
actually generate the completions we want.
- [x] add a zsh equivalent and `_compgen_w` to abstract it away
- [x] use `_compgen_w` instead of `compgen -W`
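A hedged sketch of the wrapper (the real implementation may differ):
```bash
# Emit each word of $1 that starts with prefix $2, in bash or zsh.
_compgen_w() {
    if [ -n "${ZSH_VERSION:-}" ]; then
        local word
        for word in ${=1}; do            # ${=1} forces word splitting in zsh
            [[ "$word" == "$2"* ]] && echo "$word"
        done
    else
        compgen -W "$1" -- "$2"
    fi
}
```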
Bash completion is now smarter about handling aliases. In particular, if all completions
for some input command are aliased to the same thing, we'll just complete with that thing.
If you've already *typed* the full alias for a command, we'll complete the alias.
So, for example, here there's more than one real command involved, so all aliases are
shown:
```console
$ spack con
concretise concretize config containerise containerize
```
Here, there are two possibilities: `concretise` and `concretize`, but both map to
`concretize` so we just complete that:
```console
$ spack conc
concretize
```
And here, the user has already typed `concretis`, so we just go with it as there is only
one option:
```console
$ spack concretis
concretise
```
From a user:
> Aargh.
> ```
> ==> Error: concretise is not a recognized Spack command or extension command; check with `spack commands`.
> ```
To make things easier for our friends in the UK, this adds `concretise` and
`containerise` aliases for the `spack concretize` and `spack containerize` commands.
- [x] add aliases
- [x] update completions
* VTK: Add patch for python 3.8 support
* CI: Re-enable VisIt in CI
* Configure spec matrix for stack with VisIt
* Add pugixml dep for 8.2.0
* Make VTK and ParaView consistent on proj dep
* OpenMPI 3: provides MP support by default
* Add details on proj dep in ParaView
* Add python 3.8 to test mock repo
* Patches to get VisIt VTK interface
* CI: Disable VisIt with GUI in DAV
* AMReX: 23.06+ Multi-Dim Support
This updates the Spack package to allow installing AMReX, AMReX modules
in E4S deployments, and dependent packages with support for
multiple dimensions. Due to an upstream change in AMReX, we no
longer need to ship three binary-incompatible package variants.
* [E4S] oneAPI AMReX < 23.06 Variant
Work around auto-concretization to the multi-dim value of `dimensions`,
which became a multi-valued variant only in 23.06+.
* e4s cray rhel ci: temporarily disable amrex build until spurious ci failure can be resolved
---------
Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>