To work properly, Spack requires a few directories from its repository to be added to
`sys.path`. Previously these were buried in `spack_installable.main.main()`, but it's
sometimes useful to get the paths separately, e.g., if you want to set up your own
functioning spack environment.
With this change, adding the paths is much simpler:
```python
import sys
from spack_installable.main import get_spack_sys_paths

sys.path[:0] = get_spack_sys_paths(spack_prefix)  # spack_prefix: root of the Spack installation
```
- [x] Add `get_spack_sys_paths()` method with extra paths in order.
- [x] Refactor `spack_installable.main.main()` to use it.
With an improper/incomplete/broken installation of Clingo, it can be
importable but not have any of the expected attributes.
This change improves error reporting in that case.
* Restore PackageBase class, and modify only ASP
This prevents a noticeable slowdown in concretization
due to the number of directives involved.
* Fix issue with 'clang' being preferred to 'gcc',
due to runtime version weights
* Constraints on runtimes are declared by compilers
The declaration of available runtime versions, and of
their compatibility constraints, lives in the associated
compiler class.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
The gcc-runtime package adds a separate node for gcc's dynamic runtime
libraries.
This should help with:
1. binary caches where rpaths for compiler support libs cannot be
relocated because the compiler is missing on the target system
2. creating "minimal" container images
The package is versioned like `gcc` (in principle it could be
unversioned, but Spack doesn't always guarantee not mixing compilers)
If you are calling Spack from the python API, you might have written something like this
before #41529:
```python
find = SpackCommand("find")
find('--format={name}', 'gromacs', '+rocm', 'amdgpu_target="gfx90a"')
```
But with the breaking change in #41529, you should write:
```python
find = SpackCommand("find")
find('--format={name}', 'gromacs', '+rocm', 'amdgpu_target=gfx90a')
```
Note that quotes aren't needed inside Python strings, and that the unquoted value is
what would come in via argv if you typed a quoted variant on the CLI.
The error messages for strings like this are not great; you get something like this:
```
==> No package matches the query: gromacs+rocm amdgpu_target="gfx90a"
```
which doesn't indicate that the issue might be your quoting. This is because we were
simply outputting the argv we got, instead of using `spec.format()` to render the error
message. This PR fixes such errors to use `spec.format()` and to look like this:
```
==> No package matches the query: gromacs+rocm amdgpu_target='"gfx90a"'
```
So users should have an easier time understanding that Spack considers the variant value
to contain quotes here.
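As a rough illustration (not the exact diff), the fix amounts to formatting parsed specs rather than echoing raw argv; `spack.cmd.parse_specs` and `tty` are real Spack utilities, but this call site and `args.specs` are hypothetical:
```python
import llnl.util.tty as tty
import spack.cmd

# Parse once and keep the Spec objects around, then render them with
# spec.format() so quoting appears exactly as Spack understood it.
specs = spack.cmd.parse_specs(args.specs)  # args.specs: raw CLI words (assumed)
msg = " ".join(spec.format() for spec in specs)
tty.die(f"No package matches the query: {msg}")
```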
- [x] update ConstraintAction to store parsed Specs
- [x] refactor commands to display formatted parsed Specs instead of raw input
Users expect that changes to the externals sections in packages.yaml config apply immediately, but reuse concretization caused this not to be the case. With this commit, the concretizer is only allowed to reuse externals previously imported from config if identical config exists.
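For reference, the externals config being matched has this shape (package, version and prefix here are only illustrative):
```yaml
packages:
  openssl:
    externals:
    - spec: openssl@3.0.2
      prefix: /usr
```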
This PR adds a flag `--tag/-t` to `buildcache push`, which you can use like
```
$ spack mirror add my-oci-registry oci://example.com/hello/world
$ spack -e my_env buildcache push --base-image ubuntu:22.04 --tag my_custom_tag my-oci-registry
```
and lets users ship a full, installed environment as a minimal container image where each image layer is one Spack package, on top of a base image of choice. The image can then be used as
```
$ docker run -it --rm example.com/hello/world:my_custom_tag
```
Apart from environments, users can also pick arbitrary installed specs from their database, for instance:
```
$ spack buildcache push --base-image ubuntu:22.04 --tag some_specs my-oci-registry gcc@12 cmake
$ docker run -it --rm example.com/hello/world:some_specs
```
It has many advantages over `spack containerize`:
1. No external tools required (`docker`, `buildah`, ...)
2. Creates images from locally installed Spack packages (No need to rebuild inside `docker build`, where troubleshooting build failures is notoriously hard)
3. No need for multistage builds (Spack just tarballs existing installations of runtime deps)
4. Reduced storage size / composability: when pushing multiple environments with common specs, container image layers are shared.
5. Automatic build cache: later `spack install` of the env elsewhere speeds up since the containerized environment is a build cache
* add trim function to `Spec` and `--ignore` option to 'spack diff'
Allows user to compare two specs while ignoring the sub-DAG of a particular dependency, e.g.
spack diff --ignore=mpi --ignore=zlib trilinos/abcdef trilinos/fedcba
to focus on differences closer to the root of the software stack
Sometimes env variables computed in `setup_run_environment` depend on checks of
files in `spec.prefix`, but Spack temporarily projects `spec.prefix` to
the view.
This is problematic for two reasons:
1. Some packages iterate over `<prefix>/bin`: they expect only the current
package's executables, but find everything linked into the view, leading to false
positives.
2. Some packages test for `os.path.islink(...)`, which is always true in a view.
`gcc` is an example that does both.
This PR lets Spack compute the environment modifications using the original
prefix, and projects to the view afterwards.
Currently, a virtual spec is composed of just a name and a version. When a virtual spec contains other components, such as variants, Spack won't emit warnings or errors but will silently drop them, which users don't expect.
This PR changes the default behavior of `spack config get` and `spack config blame`
to print a flattened version of the entire spack configuration, including any active
environment, if the commands are invoked with no section arguments.
The new behavior is used in Gitlab CI to help debug CI configuration, but it can also
be useful when asking for more information in issues, or when simply debugging Spack.
Convert the 'develop' section of an environment to a dedicated configuration section.
This means for example that instead of having to define `develop` specs in the
`spack.yaml`, the environment can `include:` another `develop.yaml` configuration
which specifies which specs should be developed in the environment.
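As a sketch of what this enables (the package name and paths are hypothetical, but the section layout follows the new schema):
```yaml
# spack.yaml
spack:
  include:
  - develop.yaml

# develop.yaml
develop:
  mypackage:
    spec: mypackage@main
    path: /path/to/local/source
```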
This change is not expected to be disruptive given that existing environment `spack.yaml`
files will conform to the new schema.
(Update 11/28/2023) I have implemented the `develop`/`undevelop` commands in terms
of more-generic modification functions added to the `config` module: `change_or_add`
and `update_all`. It is assumed that the semantics added here (described in 11/18 update)
would be desirable to extend to other config update actions (e.g. adding compilers,
changing package requirements, adding mirrors).
(Update 11/18/2023) I have updated this so that `spack develop` and
`spack undevelop` can potentially modify all writable scopes, like
https://github.com/spack/spack/pull/41147. https://github.com/spack/spack/pull/35307
will be useful for modifying included scopes, but generally speaking specifying a
`--scope` will not be required for `spack develop`: `spack develop` will add new
develop specs to whatever scope already has develop specs defined, or to the
highest-priority writable scope (which should be the env scope).
TODOs:
- [x] If you `spack undevelop` a package which is mentioned at multiple layers of
configuration, then currently this would only modify one of them. That's not
technically a new issue (has always existed for configuration modification), but
may be confusing to users when presented via an interface other than `spack config set`
- [x] Need to add (or confirm) the ability to modify individual config files by providing
a path (rather than using a scope identifier as a key to retrieve associated config).
- [x] `spack develop` adds new develop specs to the scope that defines them
(potentially skipping higher priority scopes to e.g. augment included scope files)
---------
Co-authored-by: scheibelp <scheibelp@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This PR does several things:
- [x] Allow any character to appear in the quoted values of variants and flags.
- [x] Allow easier passing of quoted flags on the command line, e.g. `cflags="-O2 -g"`.
- [x] Handle quoting better in spec output, using single quotes around double
quotes and vice versa.
- [x] Disallow spaces around `=` and `==` when parsing variants and flags.
## Motivation
This PR is motivated by the issues above and by ORNL's
[tips for launching at scale on Frontier](https://docs.olcf.ornl.gov/systems/frontier_user_guide.html#tips-for-launching-at-scale).
ORNL recommends using `sbcast --send-libs` to broadcast executables and their
libraries to compute nodes when running large jobs (e.g., 80k ranks). For an
executable named `exe`, `sbcast --send-libs` stores the needed libraries in a
directory alongside the executable called `exe_libs`. ORNL recommends pointing
`LD_LIBRARY_PATH` at that directory so that `exe` will find the local libraries and
not overwhelm the filesystem.
There are other ways to mitigate this problem:
* You could build with `RUNPATH` using `spack config add config:shared_linking:type:runpath`,
which would make `LD_LIBRARY_PATH` take precedence over Spack's `RUNPATHs`.
I don't recommend this one because `RUNPATH` can cause many other things to go wrong.
* You could use `spack config add config:shared_linking:bind:true`, added in #31948, which
will greatly reduce the filesystem load for large jobs by pointing `DT_NEEDED` entries in
ELF *directly* at the needed `.so` files instead of relying on `RPATH` search via soname.
I have not experimented with this at 80,000 ranks, but it should help quite a bit.
* You could use [Spindle](https://github.com/hpc/Spindle) (as LLNL does on its machines)
which should transparently fix this without any changes to your executable and without
any need to use `sbcast` or other tools.
But we want to support the `sbcast` use case as well.
## `sbcast` and Spack
Spack's `RPATHs` break the `sbcast` fix because they're considered with higher precedence
than `LD_LIBRARY_PATH`. So Spack applications will still end up hitting the shared filesystem
when searching for libraries. We can avoid this by injecting some `ldflags` into the build, e.g.,
if we were going to launch, say, `LAMMPS` at scale, we could add another `RPATH`
specifically for use with `sbcast`:
spack install lammps ldflags='-Wl,-rpath=$ORIGIN/lmp_libs'
This will put the `lmp_libs` directory alongside `LAMMPS`'s `lmp` executable first in the
`RPATH`, so it will be searched before any directories on the shared filesystem.
## Issues with quoting
Before this PR, the command above would've errored out for two reasons:
1. `$` wasn't an allowed character in our spec parser.
2. You would've had to double quote the flags to get them to pass through correctly:
spack install lammps ldflags='"-Wl,-rpath=$ORIGIN/lmp_libs"'
This is ugly and I don't think many users will easily figure it out. The behavior was added in
#29282, and it improved parsing of specs passed as a single string, e.g.:
spack install 'lammps ldflags="-Wl,-rpath=$ORIGIN/lmp_libs"'
but a lot of users are naturally going to try to quote arguments *directly* on the command
line, without quoting their entire spec. #29282 used a heuristic to detect unquoted flags
and warn the user, but the warning could be confusing. In particular, if you wrote
`cflags="-O2 -g"` on the command line, it would break the flags up, warn, and tell you
that you could fix the issue by writing `cflags="-O2 -g"` even though you just wrote
that. It's telling you to *quote* that value, but the user has to know to double quote.
## New heuristic for quoted arguments from the CLI
There are only two places where we allow arbitrary quoted strings in specs: flags and
variant values, so this PR adds a simpler heuristic to the CLI parser: if an argument in
`sys.argv` starts with `name=...`, then we assume the whole argument is quoted.
This means you can write:
spack install bzip2 cflags="-O2 -g"
directly on the command line, without multiple levels of quoting. This also works:
spack install 'bzip2 cflags="-O2 -g"'
The only place where this heuristic runs into ambiguity is if you attempt to pass
anonymous specs that start with `name=...` as one large string. e.g., this will be
interpreted as one large flag value:
spack find 'cflags="-O2 -g" ~bar +baz'
This sets `cflags` to `"-O2 -g" ~bar +baz`, which is likely not what you wanted. You
can fix this easily by either removing the quotes:
spack find cflags="-O2 -g" ~bar +baz
Or by adding a space at the start, which has the same effect:
spack find ' cflags="-O2 -g" ~bar +baz'
You may wonder why we don't just look for quotes inside of flag arguments, and the
reason is that you *might* want them there. If you are passing arguments like:
spack install zlib cppflags="-D DEBUG_MSG1='quick fox' -D DEBUG_MSG2='lazy dog'"
You *need* the quotes there. So we've opted for one potentially confusing, but easily
fixed outcome vs. limiting what you can put in your quoted strings.
## Quotes in formatted spec output
In addition to being more lenient about characters accepted in quoted strings, this PR fixes
up spec formatting a bit. We now format quoted strings in specs with single quotes, unless
the string has a single quote in it, in which case we JSON-escape the string (i.e., we add
`\` before `"` and `\`).
```
zlib cflags='-D FOO="bar"'
zlib cflags="-D FOO='bar'"
zlib cflags="-D FOO='bar' BAR=\"baz\""
```
MySQL was performing a core API call to `Spec.flat_dependencies`
when setting up the build environment. This function is an
implementation detail of the old concretizer, where multiple nodes
from the same package are not allowed.
This PR uses a more idiomatic way to check if "python" is
in the DAG.
For reference, see #11356 to check why the call was introduced.
* initial commit for rocm-5.7.0 and 5.7.1 releases
* bump up the version for 5.7.0 and 5.7.1 releases
* update recipes to support 5.7.0 and 5.7.1 releases
* bump up the version for ROCm 5.7.0 and ROCm-5.7.1 releases
* bump up the version for composable-kernel amd miopen-hip
* fix style errors
* fix style errors in hip etc
* renaming composable-kernel recipe
* changes for composable_kernel
* Revert "renaming composable-kernel recipe"
This reverts commit 0cf6c6debfc7b12014f514af26144132ae187e71.
* Revert "changes for composable_kernel"
This reverts commit 05272a10a79cc14dc9c1afbda8fa4de87ea672ad.
* bump up the version for hiprand
* using the checksum for hiprand-5.7.1
* bump up the version for 5.7.0 and 5.7.1 releases
* fix style errors
* fix merge conflicts with develop
* temp workaround for the error seen with rocm-5.7.0 when trying
to generate the dependency file for runtime/legion/legion_redop.cu
* fix build issue(work around) with legion
* add patch for migraphx package to turn off ck
* update to hip recipe
* fix hip-path detection inside llvm clang driver
* update llvm-amdgpu and rocm-validation-suite recipes
* fix style errors
* bump up the version for amdsmi for rocm-5.7.0 release
* add support for gfx941,gfx942 for rocm-5.7.0 release onwards
* revert changes to rocm.py file
* added gfx941 and gfx942 to rocm.py, and added gfx942 plus a new checksum to kokkos;
the new version seems to support gfx942
* bump up the version for rccl for 5.7.1
* update the patch for rocm-openmp-extras for 5.7.0
* update mivisionx recipe for 5.7.0 release
* add new dependencies for rocfft tests
* port the fix for the avx build: the start address of the values_ buffer in
KernelParameters is not correct, as it is computed based on 16-byte alignment
* set HIP_PATH=ROCM_PATH for 5.7.0 onwards
* address review comments
* revert adding xnack- and xnack+ to gfx940,gfx941,gfx942 as the prechecks were failing
* Add `signed` property to mirror config
* make unsigned a tri-state: true/false overrides mirror config, none takes mirror config
* test commands
* Document this
* add a test
Fix `filter_compiler_wrappers` for cases where the compiler returned is None. This happens on some systems where the installed gcc does not include Fortran as standard, e.g. gcc@11.4.0 on Ubuntu 22.04.
Before (hard to read, doesn't fit on small terminals):
```console
-I, --install-status show install status of packages
packages can be: installed [+], missing and needed by an installed package [-], installed in an upstream instance [^], or not installed (no annotation)
```
After (fits in 80 columns):
```console
-I, --install-status show install status of packages
[+] installed [^] installed in an upstream
- not installed [-] missing dep of installed package
```
* Fix cdash reporter time stamps (#38818).
The cdash reporter is created before packages are installed, so save the
starttime then, instead of the endtime.
* Use endtime instead of starttime for the endtime of update
---------
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
- we don't have a fallback if make is not installed
- we assume file system locking works
- we don't verify that make is gnu make (bootstrapping fails on FreeBSD as a result)
- there are some weird race conditions in writing spack.yaml on concurrent spack install
- the view is updated after every package install instead of post environment install.
Forbid nested dependencies in depends_on declarations, by running an audit in CI.
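To illustrate what the audit rejects (the package names here are made up), a `^` constraint on a transitive dependency nested inside `depends_on()` is now flagged:
```python
depends_on("hypre ^mpich")  # nested dependency on mpich: flagged by the audit
depends_on("hypre+mpi")     # constrain only the direct dependency instead
```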
Fix the packages not passing the new audit:
- amd-aocl
- exago
- palace
- shapemapper
- xsdk-examples
ginkgo: add a commit sha to v1.5.0.glu_experimental
This was missed while backporting the new `spack info` command from #40326.
Variants should be sorted by name when invoking `spack info --variants-by-name`.
This looks to me like the best compromise regarding externals in a
build cache. I wouldn't want `spack install` on my machine to install
specs that were marked external on another. At the same time there are
centers that control the target systems on which spack is used, and
would want to use externals in buildcaches.
As a solution, reuse concretization will now consider those externals
used in buildcaches that match a locally configured external in
packages.yaml.
So for example person A installs and pushes specs with this config:
```yaml
packages:
ncurses:
externals:
- spec: ncurses@6.0.12345 +feature
prefix: /usr
```
and person B concretizes and installs using that buildcache with the
following config:
```yaml
packages:
ncurses:
externals:
- spec: ncurses@6
prefix: /usr
```
the spec will be reused (or rather, will be considered for reuse...)
* solver: use a unique counter for condition, triggers and effects
* Do not reset counters when re-running setup
What we need is just a unique ID, it doesn't need
to start from zero every time.
* oneapi 2024.0.0 release
* oneapi v2 directory support and some cleanups
* sycl abi change requires 2024 compilers for packages that use sycl
---------
Co-authored-by: Robert Cohn <robert.s.cohn@intel.com>
PR #40929 reverted the argument parsing to make `spack --verbose
install` work again. It looks like `--verbose` is the only instance
where this kind of argument inheritance is used since all other commands
override arguments with the same name instead. For instance, `spack
--bootstrap clean` does not invoke `spack clean --bootstrap`.
Therefore, fix multi-line aliases again by parsing the resolved
arguments and instead explicitly pass down `args.verbose` to commands.
This commit discards type mismatches or failures to validate a package preference during concretization. The values discarded are logged as debug level messages. It also adds a config audit to help users spot misconfigurations in packages.yaml preferences.
This roughly restores the order of operations from Spack 0.20,
where `AutotoolsPackage.setup_build_environment` would
override the env variable set in `setup_platform_environment` on
macOS.
When improving the error message, we started `#show`-ing a lot more
symbols in the answer set, but we forgot to suppress the
debug messages warning about UNKNOWN SYMBOLs.
* Permit packages that depend on Intel oneAPI packages to access sdk
* Implement and use IntelOneapiLibraryPackageWithSdk
* Restore libs property to IntelOneapiLibraryPackage
* Conform to style
* Provide new class to infrastructure
* Treat sdk/include as the main include
Improves the warning for deprecated preferences, and adds a configuration
audit to get files:lines details of the issues.
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Tests didn't cover the new `--variants-by-name` parameter in #40998.
Add some parameterization to hit that.
This changeset makes me think that the main section-printing loop in `spack info` isn't
factored so well. It makes it difficult to pass different arguments to different helper
functions. I could break it out into if statements if folks think that would be cleaner.
We have two ways to concretize now:
* `spack concretize` concretizes only the root specs that are not concrete in the environment.
* `spack concretize -f` eliminates all cached concretization data and reconcretizes the *entire* environment.
This PR adds `spack deconcretize`, which eliminates cached concretization data for a spec. This allows
users greater control over what is preserved from their `spack.lock` file and what is reused when not
using `spack concretize -f`. If you want to update a spec installed in your environment, you can call
`spack deconcretize` on it, and that spec and any relevant dependents will be removed from the lock file.
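For example, a possible workflow (the spec name is illustrative):
```console
$ spack deconcretize hdf5   # drop hdf5's cached concretization (and dependents)
$ spack concretize          # re-concretize only the roots that are now abstract
```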
`spack deconcretize` has two options:
* `--root`: limits deconcretization to *specific* roots in the environment. You can use this to
deconcretize exactly one root in a `unify: false` environment, i.e., if root `foo` is a dependent
of root `bar`, `spack deconcretize --root bar` will *not* deconcretize `foo`.
* `--all`: deconcretize *all* specs that match the input spec. By default `spack deconcretize`
will complain about multiple matches, like `spack uninstall`.
The ^mkl pattern was used to refer to three packages
even though none of the software using it actually depended
on "mkl".
This pattern, which follows Hyrum's law, is now being
removed in favor of a more explicit one.
In this PR gromacs, abinit, lammps, and quantum-espresso
are modified.
Intel packages are also modified to provide "lapack"
and "blas" together.
And improve the error message (load vs unload).
Of course you could have some uninstalled dependency too, but as long as
it doesn't implement `setup_run_environment` etc., I don't think it hurts
to attempt to load the root anyway, given that failure to do so is a
warning, not a fatal error.
This changes variant display to use a much more legible format, and to use screen space
much better (particularly on narrow terminals). It also adds color to the variant display
to match other parts of `spack info`.
Descriptions and variant value lists that were frequently squished into a tiny column
before now have closer to the full terminal width.
This change also preserves any whitespace formatting present in `package.py`, so package
maintainers can write easier-to-read descriptions of variant values if they want. For
example, `gasnet` has had a nice description of the `conduits` variant for a while, but
it was wrapped and made illegible by `spack info`. That is now fixed and the original
newlines are kept.
Conditional variants are grouped by their when clauses by default, but if you do not
like the grouping, you can display all the variants in order with `--variants-by-name`.
I'm not sure when people will prefer this, but it makes it easier to tell that a
particular variant is/isn't there. I do think grouping by `when` is the better default.
This commit improves forward compatibility of Spack with newer build cache metadata formats.
Before this commit, invalid or unrecognized metadata would be fatal errors, now they just cause
a mirror to be skipped.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Before this PR, variants were not propagated to leaf nodes that could accept
the propagated value if some intermediate node couldn't accept it.
This PR fixes that issue by marking nodes as "candidate" for propagation
and by setting the variant only if it can be accepted by the node.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Modify the packages.yaml schema so that soft-preferences on targets,
compilers and providers can only be specified under the "all" attribute.
This makes them effectively global preferences.
Version preferences instead can only be specified under a package
specific section.
If a preference attribute is found in a section where it should
not be, it will be ignored and a warning is printed to screen.
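For illustration (package names and versions here are hypothetical), the schema now distinguishes the two cases like this:
```yaml
packages:
  all:
    compiler: [gcc, clang]      # allowed: global soft preference
    providers:
      mpi: [mvapich2, openmpi]  # allowed: global provider preference
  mpich:
    version: [4.1, 4.0]         # allowed: per-package version preference
    # compiler: [clang]         # would now be ignored, with a warning
```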
Most queries will end up calling `spec.satisfies(query)` on everything in the DB, which
will cause Spack to ask whether the query spec is virtual if its name doesn't match the
target spec's. This can be expensive, because it can cause Spack to check if any new
virtuals showed up in *all* the packages it knows about. That can currently trigger
thousands of `stat()` calls.
We can avoid the virtual check for most successful queries if we consider that if there
*is* a match by name, the query spec *can't* be virtual. This PR adds an optimization to
the query loop to save any comparisons that would trigger a virtual check for last.
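A sketch of the idea (the real `Database.query` differs; the names here are illustrative):
```python
def query(db_specs, query_spec):
    matches, deferred = [], []
    for spec in db_specs:
        if spec.name == query_spec.name:
            # Same name means the query spec can't be virtual, so
            # satisfies() won't trigger an expensive virtual lookup.
            if spec.satisfies(query_spec):
                matches.append(spec)
        else:
            deferred.append(spec)
    # Only if nothing matched by name do we pay for the virtual check.
    if not matches and query_spec.virtual:
        matches = [s for s in deferred if s.satisfies(query_spec)]
    return matches
```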
- [x] Add a `deferred` list to the `query()` loop.
- [x] First run through the `query()` loop *only* checks for name matches.
- [x] Query loop now returns early if there's a name match, skipping most `satisfies()` calls.
- [x] Second run through the `deferred` list only runs if query spec is virtual.
- [x] Fix up handling of concrete specs.
- [x] Add test for querying virtuals in DB.
- [x] Avoid allocating deferred if not necessary.
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Currently there's some hacky logic in the AppleClang compiler that makes
it also accept `gfortran` as a fortran compiler if `flang` is not found.
This is guarded by `if sys.platform` checks s.t. it only applies to
Darwin.
But on Linux the feature of detecting mixed toolchains is highly
requested too, cause it's rather annoying to run into a failed build of
`openblas` after dozens of minutes of compiling its dependencies, just
because clang doesn't have a fortran compiler.
In particular in CI where the system compilers may change during system
updates, it's typically impossible to fix compilers in a hand-written
compilers.yaml config file: the config will almost certainly be outdated
sooner or later, and maintaining one config file per target machine and
writing logic to select the correct config is rather undesirable too.
---
This PR introduces a flag `spack compiler find --mixed-toolchain` that
fills out missing `fc` and `f77` entries in `clang` / `apple-clang` by
picking the best matching `gcc`.
It is enabled by default on macOS, but not on Linux, matching current
behavior of `spack compiler find`.
The "best matching gcc" logic and compiler path updates are identical to
how compiler path dictionaries are currently flattened "horizontally"
(per compiler id). This just adds logic to do the same "vertically"
(across different compiler ids).
So, with this change on Ubuntu 22.04:
```
$ spack compiler find --mixed-toolchain
==> Added 6 new compilers to /home/harmen/.spack/linux/compilers.yaml
gcc@13.1.0 gcc@12.3.0 gcc@11.4.0 gcc@10.5.0 clang@16.0.0 clang@15.0.7
==> Compilers are defined in the following files:
/home/harmen/.spack/linux/compilers.yaml
```
you finally get:
```
compilers:
- compiler:
spec: clang@=15.0.7
paths:
cc: /usr/bin/clang
cxx: /usr/bin/clang++
f77: /usr/bin/gfortran
fc: /usr/bin/gfortran
flags: {}
operating_system: ubuntu23.04
target: x86_64
modules: []
environment: {}
extra_rpaths: []
- compiler:
spec: clang@=16.0.0
paths:
cc: /usr/bin/clang-16
cxx: /usr/bin/clang++-16
f77: /usr/bin/gfortran
fc: /usr/bin/gfortran
flags: {}
operating_system: ubuntu23.04
target: x86_64
modules: []
environment: {}
extra_rpaths: []
```
The "best gcc" is automatically default system gcc, since it has no
suffixes / prefixes.
Add a new config section: `config:aliases`, which is a dictionary mapping aliases
to commands.
For instance:
```yaml
config:
aliases:
sp: spec -I
```
will define a new command `sp` that will execute `spec` with the `-I`
argument.
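For example (using the alias from the config above):
```console
$ spack sp zlib   # equivalent to: spack spec -I zlib
```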
Aliases cannot override existing commands, and this is ensured with a test.
We cannot currently alias subcommands. Spack will warn about any aliases
containing a space, but will not error, which leaves room for subcommand
aliases in the future.
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Test that setup_run_environment changes to CC/CXX/FC/F77 are dropped in build env
* compilers set in run env shouldn't impact build
Adds `drop` to EnvironmentModifications courtesy of @haampie, and uses
it to clear modifications of CC, CXX, F77 and FC made by
`setup_{,dependent_}run_environment` routines when producing an
environment in BUILD context.
* comment / style
* comment
---------
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
This adds a rather trivial context manager that lets you deduplicate repeated
arguments in directives, e.g.
```python
depends_on("py-x@1", when="@1", type=("build", "run"))
depends_on("py-x@2", when="@2", type=("build", "run"))
depends_on("py-x@3", when="@3", type=("build", "run"))
depends_on("py-x@4", when="@4", type=("build", "run"))
```
can be condensed to
```python
with default_args(type=("build", "run")):
depends_on("py-x@1", when="@1")
depends_on("py-x@2", when="@2")
depends_on("py-x@3", when="@3")
depends_on("py-x@4", when="@4")
```
The advantage is that it's clearer for humans; the downside is that it's less clear for type checkers, due to type erasure.
Create chains of causation for error messages.
The current implementation is only completed for some of the many errors presented by the concretizer. The rest will need to be filled out over time, but this demonstrates the capability.
The basic idea is to associate conditions in the solver with one another in causal relationships, and to associate errors with the proximate causes of their facts in the condition graph. Then we can construct causal trees to explain errors, which will hopefully present users with useful information to avoid the error or report issues.
Technically, this is implemented as a secondary solve. The concretizer computes the optimal model, and if the optimal model contains an error, then a secondary solve computes causation information about the error(s) in the concretizer output.
Examples:
```console
$ spack solve hdf5 ^cmake@3.0.1
==> Error: concretization failed for the following reasons:
   1. Cannot satisfy 'cmake@3.0.1'
   2. Cannot satisfy 'cmake@3.0.1'
        required because hdf5 ^cmake@3.0.1 requested from CLI
   3. Cannot satisfy 'cmake@3.18:' and 'cmake@3.0.1
        required because hdf5 ^cmake@3.0.1 requested from CLI
        required because hdf5 depends on cmake@3.18: when @1.13:
          required because hdf5 ^cmake@3.0.1 requested from CLI
   4. Cannot satisfy 'cmake@3.12:' and 'cmake@3.0.1
        required because hdf5 depends on cmake@3.12:
          required because hdf5 ^cmake@3.0.1 requested from CLI
        required because hdf5 ^cmake@3.0.1 requested from CLI
```
```console
$ spack spec cmake ^curl~ldap   # <-- with curl configured non-buildable and an external with `+ldap`
==> Error: concretization failed for the following reasons:
   1. Attempted to use external for 'curl' which does not satisfy any configured external spec
   2. Attempted to build package curl which is not buildable and does not have a satisfying external
        attr('variant_value', 'curl', 'ldap', 'True') is an external constraint for curl which was not satisfied
   3. Attempted to build package curl which is not buildable and does not have a satisfying external
        attr('variant_value', 'curl', 'gssapi', 'True') is an external constraint for curl which was not satisfied
   4. Attempted to build package curl which is not buildable and does not have a satisfying external
        'curl+ldap' is an external constraint for curl which was not satisfied
        'curl~ldap' required
          required because cmake ^curl~ldap requested from CLI
```
```console
$ spack solve yambo+mpi ^hdf5~mpi
==> Error: concretization failed for the following reasons:
   1. 'hdf5' required multiple values for single-valued variant 'mpi'
   2. 'hdf5' required multiple values for single-valued variant 'mpi'
        Requested '~mpi' and '+mpi'
        required because yambo depends on hdf5+mpi when +mpi
          required because yambo+mpi ^hdf5~mpi requested from CLI
        required because yambo+mpi ^hdf5~mpi requested from CLI
   3. 'hdf5' required multiple values for single-valued variant 'mpi'
        Requested '~mpi' and '+mpi'
        required because netcdf-c depends on hdf5+mpi when +mpi
          required because netcdf-fortran depends on netcdf-c
            required because yambo depends on netcdf-fortran
              required because yambo+mpi ^hdf5~mpi requested from CLI
          required because netcdf-fortran depends on netcdf-c@4.7.4: when @4.5.3:
            required because yambo depends on netcdf-fortran
              required because yambo+mpi ^hdf5~mpi requested from CLI
          required because yambo depends on netcdf-c
            required because yambo+mpi ^hdf5~mpi requested from CLI
          required because yambo depends on netcdf-c+mpi when +mpi
            required because yambo+mpi ^hdf5~mpi requested from CLI
        required because yambo+mpi ^hdf5~mpi requested from CLI
```
Future work:
In addition to fleshing out the causes of other errors, I would like to find a way to associate different components of the error messages with different causes. In this example it's pretty easy to infer which part is which, but I'm not confident that will always be the case.
See the previous PR #34500 for discussion of how the condition chains are incomplete. In the future, we may need custom logic for individual attributes to associate some important choice rules with conditions such that clingo choices or other derivations can be part of the explanation.
---------
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This PR implements the concept of "default environment", which doesn't have to be
created explicitly. The aim is to lower the barrier for adopting environments.
To (create and) activate the default environment, run
```
$ spack env activate
```
This mimics the behavior of
```
$ cd
```
which brings you to your home directory.
This is not a breaking change, since `spack env activate` without arguments
currently errors. It is similar to the already existing `spack env activate --temp`
command which always creates an env in a temporary directory, the difference
is that the default environment is a managed / named environment named `default`.
The name `default` is not a reserved name, it's just that `spack env activate`
creates it for you if you don't have it already.
With this change, you can get started with environments faster:
```
$ spack env activate [--prompt]
$ spack install --add x y z
```
instead of
```
$ spack env create default
==> Created environment 'default' in /Users/harmenstoppels/spack/var/spack/environments/default
==> You can activate this environment with:
==> spack env activate default
$ spack env activate [--prompt] default
$ spack install --add x y z
```
Notice that Spack supports switching (but not stacking) environments, so the
parallel with `cd` is pretty clear:
```
$ spack env activate named_env
$ spack env status
==> In environment named_env
$ spack env activate
$ spack env status
==> In environment default
```
* Add command suggestions
This adds suggestions of similar commands in case users mistype a
command. Before:
```
$ spack spack
==> Error: spack is not a recognized Spack command or extension command; check with `spack commands`.
```
After:
```
$ spack spack
==> Error: spack is not a recognized Spack command or extension command; check with `spack commands`.
Did you mean one of the following commands?
spec
patch
```
* Add package name suggestions
* Remove suggestion to run spack clean -m
This PR adds support for including separate definitions from `spack.yaml`.
Supporting the inclusion of files with definitions enables users to make
curated/standardized collections of packages that can be re-used by others.
Currently module globals aren't set before running
`setup_[dependent_]run_environment` to compute environment modifications
for module files. This commit fixes that.
Looking at the memory profiles of concurrent solves
for environments with unify:false, it seems memory
only ramps up.
This exchange on the potassco mailing list:
https://sourceforge.net/p/potassco/mailman/potassco-users/thread/b55b5b8c2e8945409abb3fa3c935c27e%40lohn.at/#msg36517698
seems to suggest that clingo doesn't release memory
until the end of the application.
Since with unify:false we distribute work to separate processes,
we pass maxtasksperchild=1, so memory is cleaned up
after each solve.
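A minimal sketch of that approach, assuming a per-spec `solve_one` function and a list `abstract_specs` (both hypothetical):
```python
from multiprocessing import Pool

# maxtasksperchild=1 makes each worker process exit after a single solve,
# returning clingo's memory to the OS instead of accumulating it.
with Pool(processes=4, maxtasksperchild=1) as pool:
    results = pool.map(solve_one, abstract_specs)
```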
Some providers must provide virtuals "together", i.e.
if they provide one virtual of a set, they must also be
the providers of the others.
There was a bug though, where we were not checking if
the other virtuals in the set were needed at all in
the DAG.
This commit fixes the bug.
This PR makes it possible to select only a subset of virtual dependencies from a spec that _may_ provide more. To select providers, a syntax to specify edge attributes is introduced:
```
hdf5 ^[virtuals=mpi] mpich
```
With that syntax we can concretize specs like:
```console
$ spack spec strumpack ^[virtuals=mpi] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
```
On `develop` this would currently fail with:
```console
$ spack spec strumpack ^intel-parallel-studio+mkl ^openblas
==> Error: Spec cannot include multiple providers for virtual 'blas'
Requested 'intel-parallel-studio' and 'openblas'
```
In package recipes, virtual specs that are declared in the same `provides` directive need to be provided _together_. This means that e.g. `openblas`, which has:
```python
provides("blas", "lapack")
```
needs to provide both `lapack` and `blas` when requested to provide at least one of them.
## Additional notes
This capability is needed to model compilers. Assuming that languages are treated like virtual dependencies, we might want e.g. to use LLVM to compile C/C++ and Gnu GCC to compile Fortran. This can be accomplished by the following[^1]:
```
hdf5 ^[virtuals=c,cxx] llvm ^[virtuals=fortran] gcc
```
[^1]: We plan to add some syntactic sugar around this syntax, and reuse the `%` sigil to avoid having a lot of boilerplate around compilers.
Modifications:
- [x] Add syntax to interact with edge attributes from spec literals
- [x] Add concretization logic to be able to cherry-pick virtual dependencies
- [x] Extend semantic of the `provides` directive to express when virtuals need to be provided together
- [x] Add unit-tests and documentation
Allowing white space around `:` in version ranges introduces an ambiguity:
```
a@1: b
```
parses as `a@1:b` but should really be parsed as two separate specs `a@1:` and `b`.
With white space disallowed around `:` in ranges, the ambiguity is resolved.
Call setup_dependent_run_environment on both link and run edges,
instead of only run edges, which restores old behavior.
Move setup_build_environment into get_env_modifications
Also call setup_run_environment on direct build deps, since their run
environment has to be set up.
* Add tests to ensure variant propagation syntax can round-trip to/from string
* Add a regression test for the bug in 35298
* Reconstruct the spec constraints in the worker process
Specs do not preserve any information on propagation of variants
when round-tripping to/from JSON (which we use to pickle), but
preserve it when round-tripping to/from strings.
Therefore, we pass a spec literal to the worker and reconstruct
the Spec objects there.
- [x] Add links to information people are going to want to know when adding license
information to their packages (namely OSI licenses and SPDX identifiers).
- [x] Update the packaging docs for `license()` with Spack as an example for `when=`.
After all, it's a dual-licensed package that changed once in the past.
- [x] Add link to https://spdx.org/licenses/ in the `spack create` boilerplate as well.
Typically MSVC is detected via the VSWhere program. However, this may
not be available, or may be installed in an unpredictable location.
This PR adds an additional approach via Windows Registry queries to
determine VS install location root.
Additionally:
* Construct vs_install_paths after class-definition time (move it to
variable-access time).
* Skip over keys for which a user does not have read permissions
when performing searches (previously the presence of these keys
would have caused an error, regardless of whether they were
needed).
* Extend helper functionality with option for regex matching on
registry keys vs. exact string matching.
* Some internal refactoring: remove boolean parameters in some cases
where the function was always called with the same value
(e.g. `find_subkey`)
.bat or .exe files can be considered executable on Windows. This PR
expands the regex for detectable packages to allow for the detection
of packages that vendor .bat wrappers (intel mpi for example).
Additional changes:
* Outside of Windows, when searching for executables `path_hints=None`
was used to indicate that default path hints should be provided,
and `[]` was taken to mean that no defaults should be chosen
(in that case, nothing is searched); behavior on Windows has
now been updated to match.
* Above logic for handling of `path_hints=[]` has also been extended
to library search (for both Linux and Windows).
* All exceptions for external packages were documented as timeout
errors: this commit adds a distinction for other types of errors
in warning messages to the user.
Credits to @ChristianKniep for advocating the idea of OCI image layers
being identical to spack buildcache tarballs.
With this you can configure an OCI registry as a buildcache:
```console
$ spack mirror add my_registry oci://user/image # Dockerhub
$ spack mirror add my_registry oci://ghcr.io/haampie/spack-test # GHCR
$ spack mirror set --push --oci-username ... --oci-password ... my_registry # set login credentials
```
which should result in this config:
```yaml
mirrors:
my_registry:
url: oci://ghcr.io/haampie/spack-test
push:
access_pair: [<username>, <password>]
```
It can be used like any other registry
```
spack buildcache push my_registry [specs...]
```
It will upload the Spack tarballs in parallel, as well as manifest + config
files s.t. the binaries are compatible with `docker pull` or `skopeo copy`.
In fact, a base image can be added to get a _runnable_ image:
```console
$ spack buildcache push --base-image ubuntu:23.04 my_registry python
Pushed ... as [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
$ docker run --rm -it [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
```
which should really be a game changer for sharing binaries.
Further, all content-addressable blobs that are downloaded and verified
will be cached in Spack's download cache. This should make repeated
`push` commands faster, as well as `push` followed by a separate
`update-index` command.
An end to end example of how to use this in Github Actions is here:
**https://github.com/haampie/spack-oci-buildcache-example**
TODO:
- [x] Generate environment modifications in config so PATH is set up
- [x] Enrich config with Spack's `spec` json (this is allowed in the OCI specification)
- [x] When ^ is done, add logic to create an index in say `<image>:index` by fetching all config files (using OCI distribution discovery API)
- [x] Add logic to use object storage in an OCI registry in `spack install`.
- [x] Make the user pick the base image for generated OCI images.
- [x] Update buildcache install logic to deal with absolute paths in tarballs
- [x] Merge with `spack buildcache` command
- [x] Merge #37441 (included here)
- [x] Merge #39077 (included here)
- [x] #39187 + #39285
- [x] #39341
- [x] Not a blocker: #35737 fixes correctness run env for the generated container images
NOTE:
1. `oci://` is unfortunately taken, so it's being abused in this PR to mean "oci type mirror". `skopeo` uses `docker://` which I'd like to avoid, given that classical docker v1 registries are not supported.
2. this is currently `https`-only, given that basic auth is used to login. I _could_ be convinced to allow http, but I'd prefer not to, given that for a `spack buildcache push` command multiple domains can be involved (auth server, source of base image, destination registry). Right now, no urllib http handler is added, so redirects to https and auth servers with http urls will simply result in a hard failure.
CAVEATS:
1. Signing is not implemented in this PR. `gpg --clearsign` is not the nicest solution, since (a) the spec.json is merged into the image config, which must be valid json, and (b) it would be better to sign the manifest (referencing both config/spec file and tarball) using more conventional image signing tools
2. `spack.binary_distribution.push` is not yet implemented for the OCI buildcache, only `spack buildcache push` is. This is because I'd like to always push images + deps to the registry, so that it's `docker pull`-able, whereas in `spack ci` we really wanna push an individual package without its deps to say `pr-xyz`, while its deps reside in some `develop` buildcache.
3. The `push -j ...` flag only works for OCI buildcache, not for others
* spack checksum pkg@1.2, use as version filter
Currently pkg@1.2 splits on @ and looks for 1.2 specifically; with this
PR pkg@1.2 is a filter, so any matching version (1.2, 1.2.1, ..., 1.2.10)
is displayed.
* fix tests
* fix style
Update Tcl modulefile template to simplify generated `append-path`,
`prepend-path` and `remove-path` commands and improve their readability.
If the path element delimiter is the colon character, the `--delim`
option is not set, as it is the default delimiter value.
Renames exclude_implicits to hide_implicits.
When the hide_implicits option is enabled, modulefiles of implicitly
installed software are generated and hidden. Even if implicit, those
modulefiles may be referred to as dependencies in other modulefiles, thus they
should be generated so that modules properly load their dependent modules.
A new hidden property is added to BaseConfiguration class.
To hide modulefiles, modulercs are generated along modulefiles. Such rc
files contain specific module command to indicate a module should be
hidden (for instance when using "module avail").
A modulerc property is added to TclFileLayout and LmodFileLayout classes
to get fully qualified path name of the modulerc associated to a given
modulefile.
Modulerc files will be located in each module directory, next to the
version modulefiles. This scheme is supported by both module tool
implementations.
modulerc_header and hide_cmd_format attributes are added to
TclModulefileWriter and LmodModulefileWriter. They help to know how to
generate a modulerc file with hidden commands for each module tool.
A Tcl modulerc file requires a header. As we use a command introduced in
Modules 4.7 (module-hide --hidden-loaded), a version requirement is
added to the header string.
For lmod, modules that open up a hierarchy are never hidden, even if
they are implicitly installed.
Modulerc is created, updated or removed when associated modulefile is
written or removed. If an implicit modulefile becomes explicit, hidden
command in modulerc for this modulefile is removed. If modulerc becomes
empty, this file is removed. Modulerc file is not rewritten when no
content change is detected.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Previously, we only searched for `patch` inside of whatever Git
installation was available because the most common installation of Git
available on Windows had `patch`. That's not true for all possible
installations of Git though, so this updates the search to also check
PATH.
GitLab's .patch URLs only provide abbreviated hashes, while .diff URLs
provide full hashes. There does not seem to be a parameter to force
.patch URLs to also return full hashes, so we should make sure to use
the .diff ones.
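As a hypothetical `package.py` illustration (the URL and checksum are placeholders), the `patch` directive would reference the `.diff` URL:
```python
patch(
    "https://gitlab.example.com/group/project/-/commit/<full-sha>.diff",
    sha256="<sha256 of the fetched diff>",
)
```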
With the introduction of multiple build dependencies from the same package in the DAG, we need to minimize a few weights accounting for edges rather than nodes. If we don't do that we might have multiple "optimal" solutions that differ only in how the same nodes are connected together. This commit ensures optimal versions are picked per parent in case of multiple choices for a dependency.
Fix the following syntax which validates only the first array entry:
```python
"compilers": {
"type": "array",
"items": [
{
"type": ...
}
]
}
```
to
```python
"compilers": {
"type": "array",
"items": {
"type": ...
}
}
```
which validates the entire array.
Oops...
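To see the difference, here is a hypothetical demonstration using the `jsonschema` library under draft-7 semantics (Spack's actual validator setup may differ):
```python
from jsonschema import Draft7Validator, ValidationError

# List-valued "items" is tuple validation: only the first entry is checked.
tuple_schema = {"type": "array", "items": [{"type": "string"}]}
Draft7Validator(tuple_schema).validate(["ok", 42])  # passes: 42 is never checked

# Schema-valued "items" applies to every entry.
uniform_schema = {"type": "array", "items": {"type": "string"}}
try:
    Draft7Validator(uniform_schema).validate(["ok", 42])
except ValidationError as e:
    print(e.message)  # 42 is not of type 'string'
```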
This adds a `SetupContext` class which is responsible for setting
package.py module globals, and computing the changes to environment
variables for the build, test or run context.
The class uses `effective_deptypes` which takes a list of specs (e.g. single
item of a spec to build, or a list of environment roots) and a context
(build, run, test), and outputs a flat list of specs that affect the
environment together with a flag in what way they do so. This list is
topologically ordered from root to leaf, so that one can be assured that
dependents override variables set by dependencies, not the other way
around.
This is used to replace the logic in `modifications_from_dependencies`,
which has several issues: missing calls to `setup_run_environment`, and
the order in which operations are applied.
Further, it should improve performance a bit in certain cases, since
`effective_deptypes` run in O(v + e) time, whereas `spack env activate`
currently can take up to O(v^2 + e) time due to loops over roots. Each
edge in the DAG is visited once by calling `effective_deptypes` with
`env.concrete_roots()`.
By marking and propagating flags through the DAG, this commit also fixes
a bug where Spack wouldn't call `setup_run_environment` for runtime
dependencies of link dependencies. And this PR ensures that Spack
correctly sets up the runtime environment of direct build dependencies.
Regarding test dependencies: in a build context they are build-time
test deps, whereas in a test context they are install-time test deps.
Since there are no means to distinguish the build/install type test deps,
they're treated as both.
Further changes:
- all `package.py` module globals are guaranteed to be set before any of the
`setup_(dependent)_(run|build)_env` functions is called
- traversal order during setup: first the group of externals, then the group
of non-externals, with specs in each group traversed topologically (dependencies
are set up before dependents)
- modules: only ever call `setup_dependent_run_environment` of *direct* link/run
type deps
- the marker in `set_module_variables_for_package` is dropped, since we should
call the method once per spec. This allows us to set only a cheap subset of
globals on the module: for example it's not necessary to compute the expensive
`cmake_args` and w/e if the spec under consideration is not the root node to be
built.
- `spack load`'s `--only` is deprecated (it has no effect now), and `spack load x`
now means: do everything that's required for `x` to work at runtime, which
requires runtime deps to be setup -- just like `spack env activate`.
- `spack load` no longer loads build deps (of build deps) ...
- `spack env activate` on partially installed or broken environments: this is all
or nothing now. If some spec errors during setup of its runtime env, you'll only
get the unconditional variables + a warning that says the runtime changes for
specs couldn't be applied.
- Remove traversal in upward direction from `setup_dependent_*` in packages.
Upward traversal may iterate to specs that aren't children of the roots
(e.g. zlib / python have hundreds of dependents, only a small fraction is
reachable from the roots. Packages should only modify the direct dependent
they receive as an argument)
The ability to select the top N versions got removed in the checksum overhaul,
because initially numbers were used for commands.
Now that we settled on characters for commands, let's make numbers pick the top
N again.
Improve how mirrors are used in gitlab ci, where we have until now thought
of them as only a string.
By configuring ci mirrors ahead of time using the proposed mirror templates,
and by taking advantage of the expressiveness that spack now has for mirrors,
this PR will allow us to easily switch the protocol/url we use for fetching
binary dependencies.
This change also deprecates some gitlab functionality and marks it for
removal in Spack 0.23:
- arguments to "spack ci generate":
* --buildcache-destination
* --copy-to
- gitlab configuration options:
* enable-artifacts-buildcache
* temporary-storage-url-prefix
Reused specs used to be referenced directly in the built spec.
This might cause issues like in issue 39570 where two objects in
memory represent the same node, because two reused specs were
loaded from different sources but referred to the same spec
by DAG hash.
The issue is solved by copying concrete specs to a dictionary keyed
by dag hash.
`spack dev-build` would incorrectly set `keep_stage=True` for the
entire DAG, including for non-dev specs, even though the dev specs
have a DIYStage which never deletes sources.
This patch adds a license directive to get the ball rolling on adding
license information about packages to Spack. I'm primarily interested in just adding
licenses into Spack, but this would also help with other efforts that people are
interested in such as adding license information to the ASP solve for
concretization to make sure licenses are compatible.
Usage:
Specifying the specific license that a package is released under in a project's
`package.py` is good practice. To specify a license, find the SPDX identifier for
a project and then add it using the license directive:
```python
license("<SPDX Identifier HERE>")
```
For example, for Apache 2.0, you might write:
```python
license("Apache-2.0")
```
Note that specifying a license without a when clause makes it apply to all
versions and variants of the package, which might not actually be the case.
For example, a project might have switched licenses at some point or have
certain build configurations that include files that are licensed differently.
To account for this, you can specify when licenses should be applied. For
example, to specify that a specific license identifier should only apply
to versions up to and including 1.5, you could write the following directive:
```python
license("MIT", when="@:1.5")
```
This commit allows version specifiers to refer to git branches that contain
forward slashes. For example, the following is valid syntax now:
pkg@git.releases/1.0
It also adds a new method `Spec.format_path(fmt)` which is like `Spec.format`,
but also maps unsafe characters to `_` after interpolation. The difference is
as follows:
>>> Spec("pkg@git.releases/1.0").format("{name}/{version}")
'pkg/git.releases/1.0'
>>> Spec("pkg@git.releases/1.0").format_path("{name}/{version}")
'pkg/git.releases_1.0'
The `format_path` method is used in all projections. Notice that this method
also maps `=` to `_`
>>> Spec("pkg@git.main=1.0").format_path("{name}/{version}")
'pkg/git.main_1.0'
which should avoid syntax issues when `Spec.prefix` is literally copied into a
Makefile as sometimes happens in AutotoolsPackage or MakefilePackage
Currently `spack env activate --with-view` exists, but is a no-op.
So, it is not too much of a breaking change to make this redundant flag
accept a value `spack env activate --with-view <name>` which activates
a particular view by name.
The view name is stored in `SPACK_ENV_VIEW`.
This also fixes an issue where deactivating a view that was activated
with `--without-view` possibly removes entries from PATH, since now we
keep track of whether the default view was "enabled" or not.
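For example (the environment and view names are hypothetical):
```console
$ spack env activate --with-view dev myenv
$ echo $SPACK_ENV_VIEW
dev
```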
* spack checksum: improve interactive filtering
* fix signature of executable
* Fix restart when using editor
* Don't show [x version(s) are new] when no known versions (e.g. in spack create <url>)
* Test ^D in test_checksum_interactive_quit_from_ask_each
* formatting
* colorize / skip header on invalid command
* show original total, not modified total
* use colify for command list
* Warn about possible URL changes
* show possible URL change as comment
* make mypy happy
* drop numbers
* [o]pen editor -> [e]dit
Because those end up being passed to `ar`, which does not understand linker
arguments. This was making ldflags largely unusable for statically
linked CMake packages.
* Allow branching out of the "generic build" unification set
For cases like the one in https://github.com/spack/spack/pull/39661
we need to relax rules on unification sets.
The issue is that, right now, nodes in the "generic build" unification
set are unified together with their build dependencies. This was done
out of caution to avoid the risk of circular dependencies, which would
ultimately cause a very slow solve.
For build-tools like Cython, however, the build dependencies are masked
by a long chain of "build, run" dependencies that belong in the
"generic build" unification space.
To allow splitting on cases like this, we relax the rule disallowing
branching out of the "generic build" unification set.
* Fix issue with pure build virtual dependencies
Pure build virtual dependencies were not accounted for properly in the
list of possible virtuals. This caused some facts connecting virtuals
to the corresponding providers not to be emitted, which in the end
led to unsat problems.
* Fixed a few issues in packages
py-gevent: restore dependency on py-cython@3
jsoncpp: fix typo in build dependency
ecp-data-vis-sdk: update spack.yaml and cmake recipe
py-statsmodels: add v0.13.5
* Make dependency on "blt" of type "build"
We run pip with `--no-build-isolation` because we don't want to let pip
install build deps.
As a consequence, when pip runs hooks, it runs hooks of *any* package it
can find in `sys.path`.
For Spack-built Python this includes user site packages -- there
shouldn't be any system site packages. So in this case it suffices to
set the environment variable PYTHONNOUSERSITE=1.
For external Python, more needs to be done, because there is no env
variable that disables both system and user site packages; the
`python -S` flag doesn't work because pip runs subprocesses that don't
inherit it (and there is no API to know whether `-S` was passed).
So, for external Python, an empty venv is created before invoking pip in
Spack's build env, which ensures that pip can no longer see anything but
standard libraries and `PYTHONPATH`.
The downside of this is that pip will generate shebangs that point to
the python executable from the venv. So, for external Python, an extra
step is necessary where we fix up shebangs post install.
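A minimal sketch of the external-Python approach, assuming pip is provided via `PYTHONPATH` (paths are hypothetical; this is not Spack's actual code):
```python
import os
import subprocess
import venv

# Create an empty venv: its interpreter sees only the standard library
# plus whatever PYTHONPATH provides -- no system or user site-packages.
venv.create("/tmp/empty-venv", with_pip=False)

env = dict(os.environ, PYTHONPATH="/path/to/spack-provided-pip")  # hypothetical
subprocess.run(
    ["/tmp/empty-venv/bin/python", "-m", "pip", "install", "--no-build-isolation", "."],
    env=env,
    check=True,
)
```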
Two changes in this PR:
1. Register absolute paths in tarballs, which makes it easier
to use them as container image layers, or rootfs in general, outside
of Spack. Spack supports this already on develop.
2. Assemble the tarfile entries "by hand", which has a few advantages:
1. Avoid reading `/etc/passwd`, `/etc/groups`, `/etc/nsswitch.conf`
which `tar.add(dir)` does _for each file it adds_
2. Reduce the number of stat calls per file added by a factor two,
compared to `tar.add`, which should help with slow, shared filesystems
where these calls are expensive
3. Create normalized `TarInfo` entries from the start, instead of letting
Python create them and patching them after the fact
4. Don't recurse into subdirs before processing files, to avoid
keeping nested directories opened. (This changes the tar entry
order slightly; it's like sorting by `(not is_dir, name)`.)
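A sketch of the "by hand" entry creation (illustrative, not Spack's exact code):
```python
import tarfile

def normalized_tarinfo(name: str, size: int, mode: int = 0o644) -> tarfile.TarInfo:
    # Build the entry directly: no extra stat() calls and no uid/gid -> name
    # lookups through /etc/passwd or /etc/groups, unlike tar.add().
    info = tarfile.TarInfo(name=name)
    info.size = size
    info.mode = mode
    info.uid = info.gid = 0
    info.uname = info.gname = ""
    info.mtime = 0  # normalized timestamp, reproducible archives
    return info

# Usage with an open tarfile `tar` and file object `f`:
#   tar.addfile(normalized_tarinfo("opt/pkg/bin/tool", size), fileobj=f)
```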
For a long time, the docs have generated a huge, static HTML package list. It has some
disadvantages:
* It's slow to load
* It's slow to build
* It's hard to search
We now have a nice website that can tell us about Spack packages, and it's searchable so
users can easily find the one or two packages out of 7400 that they're looking for. We
should link to this instead of including a static package list page in the docs.
- [x] Replace package list link with link to packages.spack.io
- [x] Remove `package_list.html` generation from `conf.py`.
- [x] Add a new section for "Links" to the docs.
- [x] Remove docstring notes from contribution guide (we haven't generated RST
for package docstrings for a while)
- [x] Remove references to `package-list` from docs.
Currently, Windows SDK detection will only pick up SDK versions
related to the current version of Windows Spack is running on.
However, in some circumstances we want to detect other versions
of the SDK, for example when compiling on Windows 11 for Windows
10 to ensure an API is compatible with Win10.
* Make use of `prefix` in the Cray manifest schema (prepend it to
the relative CC etc.) - this was a Spack error.
* Warn people when wrong-looking compilers are found in the manifest
(i.e. non-existent CC path).
* Bypass compilers that we fail to add (don't allow a single bad
compiler to terminate the entire read-cray-manifest action).
* Refactor Cray manifest tests: module-level variables have been
replaced with fixtures, specifically using the `test_platform`
fixture, which allows the unit tests to run with the new
concretizer.
* Add unit test to check case where adding a compiler raises an
exception (check that this doesn't prevent processing the
rest of the manifest).
If you `spack install x ^y` where `y` is a pure build dep of `x`, and
then uninstall `y`, and then `spack install --overwrite x ^y`, the build
fails because `y` is not re-installed.
The same can happen when you install a develop spec, run `spack gc`,
modify sources, and install again; develop specs rely on overwrite
installs to work correctly.
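A condensed reproduction of the first scenario, using the placeholder names from above:
```console
$ spack install x ^y               # y is a pure build dep of x
$ spack uninstall y
$ spack install --overwrite x ^y   # previously failed; now y is re-installed too
```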
This PR adds a new audit sub-command to check that detection of relevant packages
is performed correctly in a few scenarios mocking real use-cases. The data for each
package being tested is in a YAML file called `detection_test.yaml`, alongside the
corresponding `package.py` file.
This is to allow encoding detection tests for compilers and other widely used tools,
in preparation for compilers as dependencies.
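A hedged usage sketch (the exact sub-command spelling is an assumption, not confirmed by this PR text):
```console
$ spack audit externals
```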
Modifications:
- [x] Move `spack.util.string` to `llnl.string`
- [x] Remove dependency of `llnl` on `spack.error`
- [x] Move path of `spack.util.path` to `llnl.path`
- [x] Move `spack.util.environment.get_host_*` to `spack.spec`
* msvc.py: don't import distutils
Introduced in #27021, this makes Spack forward-incompatible with newer versions
of Python. The module was already deprecated at the time of the PR.
* update spack package
Fixes #39622
Add a timeout to compiler detection and allow Spack to proceed when
this timeout occurs.
In all cases the timeout is 120 seconds: it is assumed that any compiler
invocation we do for verification purposes would complete in that
amount of time.
Also refine executables that are tested as being possible MSVC
instances, and limit where we try to detect MSVC. In more detail:
* Compiler detection should timeout after a certain period of time.
Because compiler detection executes arbitrary executables on the
system, we could encounter a program that just hangs, or even a
compiler that hangs on a license key or similar. A timeout
prevents this from hanging Spack.
* Prevents things like `cl-.*` from being detected as potential MSVC
  installs. `cl` is always just `cl` in all cases that Spack supports.
  Change the MSVC class to indicate this.
* Prevent compilers unsupported on certain platforms from being
detected there (i.e. don't look for MSVC on systems other than
Windows).
The first point alone is sufficient to address #39622, but the
next two reduce the likelihood of timeouts (which is useful since
those slow down the user even if they are survivable).
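A minimal sketch of the timeout mechanism, not Spack's actual code (the candidate path and worker setup are illustrative):
```python
import concurrent.futures
import subprocess

def compiler_version(exe: str) -> str:
    # Running arbitrary executables can hang (e.g. waiting on a license server)
    return subprocess.run([exe, "--version"], capture_output=True, text=True).stdout

if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(compiler_version, "/usr/bin/cc")  # hypothetical candidate
        try:
            output = future.result(timeout=120)  # the 120s budget described above
        except concurrent.futures.TimeoutError:
            output = None  # skip this executable and let detection proceed
```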
Put back normalization of the "virtuals" input as a sorted tuple.
Without this we might get edges that differ only in the order of
virtuals, or that contain lists, which are not hashable.
Add unit-tests to prevent regressions.
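A minimal sketch of that normalization (illustrative, not Spack's exact code):
```python
def normalize_virtuals(virtuals) -> tuple:
    # A sorted tuple is order-independent and hashable, unlike a list,
    # so two edges with the same virtuals always compare and hash equal.
    return tuple(sorted(set(virtuals)))

assert normalize_virtuals(["mpi", "blas"]) == normalize_virtuals(("blas", "mpi"))
```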
By default, do not let deprecated versions enter the solve.
Previously you could concretize to something deprecated, only to get errors on install.
With this commit, we get errors on concretization, so the issue is caught earlier.
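If you really need a deprecated version, you can still opt in explicitly; a hedged example (hypothetical package and version, assuming the flag is accepted by `spack install`):
```console
$ spack install --deprecated foo@1.0
```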
PythonExtension is a base class for PythonPackage, and
is meant to be used for any package that is a Python
extension but is not built using "python_pip".
The "update_external_dependency" method in the base
class calls another method that is defined in the derived
class.
Push "get_external_python_for_prefix" up in the hierarchy
to make method calls consistent.
This commit replaces the internal representation of deptypes with `int`, which is more compact
and faster to operate with.
Double loops like:
```
any(x in ys for x in xs)
```
are replaced by the constant-time operation `bool(xs & ys)`, where `xs` and `ys` are dependency types.
Global constants are exposed for convenience in `spack.deptypes`.
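An illustrative sketch of the bitmask idea (flag values are assumptions; see `spack.deptypes` for the real constants):
```python
# Each dependency type is one bit; a combination of types is a bitwise OR.
BUILD, LINK, RUN, TEST = 1, 2, 4, 8  # assumed values

xs = BUILD | LINK  # {build, link}
ys = LINK | RUN    # {link, run}

# Instead of the double loop over collections of strings,
# intersection is a single constant-time bitwise AND:
assert bool(xs & ys)  # True: both contain "link"
```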
Currently, the concretizer emits facts for all versions known to Spack, including deprecated versions, and has a specific optimization objective to minimize their use.
This commit simplifies how deprecated versions are handled by considering possible versions for a spec only if they appear in a spec literal, or if `config:deprecated:true` is set directly or through the `--deprecated` flag. The optimization objective has also been removed, in favor of just ordering versions and having deprecated ones last.
This results in:
a) no delayed errors on install, but concretization errors when deprecated versions would be the only option. This is in particular relevant for CI where it's better to get errors early
b) a slight concretization speed-up due to fewer facts
c) a simplification of the logic program.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
NMake makefiles are still called makefiles. The corresponding builder
variable was called "nmakefile", which is a bit unintuitive and led
to a few easy-to-make, hard-to-notice mistakes when creating packages.
This commit renames the builder property to "makefile".
Extensionless archives requiring two-stage decompression and extraction
require intermediate archives to be renamed after decompression/extraction
to prevent collisions. Prior behavior attempted to clean up the intermediate
archive under its original name; this PR ensures the renamed folder is
cleaned up instead.
Co-authored-by: Dan Lipsa <dan.lipsa@khq.kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
* Perform external spec detection with multiple workers
The logic to perform external spec detection has been refactored
into classes. These classes use the GoF "template" pattern to account
for the small differences between searching for "executables" and
for "libraries", while unifying the larger part of the algorithm.
A ProcessPoolExecutor is used to parallelize the work.
* Speed-up external find by tagging detectable packages automatically
Querying packages by tag is much faster than inspecting the repository,
since tags are cached. This commit adds a "detectable" tag to every
package that implements the detection protocol, and external detection
uses it to search for packages.
* Pass package names instead of package classes to workers
The slowest part of the search is importing the Python modules
associated with candidate packages. The import was done serially
before we distributed the work to the pool of executors.
This commit pushes the import of the Python module into the job
performed by the workers, and passes just the names of the packages
to the executors.
In this way imports can be done in parallel (see the sketch after this list).
* Rework unit-tests for Windows
Some unit tests were doing a full e2e run of a command
just to check input handling. Make the tests more
focused by just stressing a specific function.
Mark 2 tests as xfail on Windows; they will be fixed
by a PR in the queue. The tests are failing because we
monkeypatch internals in the parent process, but the
monkeypatching is not done in the "spawned" child
process.
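A minimal sketch of the name-passing approach (the module path and detection body are illustrative, not Spack's exact code):
```python
import importlib
from concurrent.futures import ProcessPoolExecutor

def detect(pkg_name: str):
    # The expensive module import happens here, inside the worker process,
    # so imports for many candidate packages proceed in parallel.
    try:
        importlib.import_module(f"spack.pkg.builtin.{pkg_name}")  # assumed namespace
    except ImportError:
        return None
    return pkg_name  # placeholder for the real executable/library search

if __name__ == "__main__":
    names = ["cmake", "gcc", "llvm"]  # plain names are cheap to send to workers
    with ProcessPoolExecutor() as pool:
        results = [r for r in pool.map(detect, names) if r]
```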
* Write timing information for installs from cache
* CI: aggregate and upload install_times.json to artifacts
* CI: Don't change root directory for artifact generation
* Flat event-based timer variation
The event-based timer allows easily starting and stopping timers without
wiping sub-timer data. It also requires less branching logic when
tracking time.
The json output is non-hierarchical in this version and hierarchy is
less rigidly enforced between starting and stopping.
* Add and write timers for top level install
* Update completion
* remove unused subtimer api
* Fix unit tests
* Suppress timing summary option
* Save timers summaries to user_data artifacts
* Remove completion from fish
* Move spack python to script section
* Write timer correctly for non-cache installs
* Re-add hash to timer file
* Fish completion updates
* Fix null timer yield value
* fix type hints
* Remove timer-summary-file option
* Add "." in front of non-package timer name
---------
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
This is a fixed version of b72a268
* That commit would discard the final key component (so if you set
"config:install_tree:root", it would discard "root" and just set
install tree).
* When setting key:"value", with the quotes, that commit would
discard the quotes, which would confuse the system if adding a
value like "{example}" (the "{" character indicates a dictionary).
This commit retains the quotes.
These commands are currently broken in PowerShell (Windows) due to
improper use of the `Invoke-Command` cmdlet and a lack of direct
support for the `--pwsh` argument in `spack load`, `spack unload`,
and `spack env deactivate`.
If you wanted to set a configuration option like
`config:install_tree:root` to "C:/path/to/config.yaml", Spack had
trouble parsing this because of the ":" in the value. This adds
logic to allow using quotes to enclose the value, so you can add
`config:install_tree:root:"C:/path/to/config.yaml"`.
Configuration keys should never contain a quote character, so the
presence of any quote is taken to mean that the rest of the string
is specifying the value.
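For example, with `spack config add` (hypothetical path):
```console
$ spack config add 'config:install_tree:root:"C:/path/to/install_tree"'
```
The outer single quotes protect the value from the shell; the inner double quotes tell Spack that everything after `root:` is the value.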
Setting the undocumented variable SPACK_CONCRETIZER_REQUIRE_CHECKSUM
now causes the solver to avoid accounting for versions that are not checksummed.
This feature is used in CI to avoid spurious concretization against e.g. develop branches.
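For example (assuming any non-empty value activates it):
```console
$ SPACK_CONCRETIZER_REQUIRE_CHECKSUM=1 spack concretize
```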
Currently, OneAPI's setvars scripts effectively disregard any arguments
we're passing to the MSVC vcvars env setup script, and additionally,
completely ignore the requested version of OneAPI, defaulting to whatever
the latest installed on the system is.
This leads to a scenario where we have improperly constructed Windows
native development environments, with potentially multiple versions of
MSVC and OneAPI being loaded or called in the same env. Obviously this is
far from ideal and leads to some fairly inscrutable errors such as
overlapping header files between MSVC and OneAPI and a different version
of OneAPI being called than the env was setup for.
This PR solves this issue by creating a structured invocation of each
relevant script in an order that ensures the correct values are set in
the resultant build env.
The order needs to be:
1. MSVC vcvarsall
2. The compiler-specific env.bat script for the relevant version of
the oneapi compiler we're looking for. The root setvars script seems
to respect this as well, although it is less explicit.
3. The root oneapi setvars script, which sets up everything else the
oneapi env needs and seems to respect previous env invocations.
Bash completion is now smarter about handling aliases. In particular, if all completions
for some input command are aliased to the same thing, we'll just complete with that thing.
If you've already *typed* the full alias for a command, we'll complete the alias.
So, for example, here there's more than one real command involved, so all aliases are
shown:
```console
$ spack con
concretise concretize config containerise containerize
```
Here, there are two possibilities: `concretise` and `concretize`, but both map to
`concretize` so we just complete that:
```console
$ spack conc
concretize
```
And here, the user has already typed `concretis`, so we just go with it as there is only
one option:
```console
$ spack concretis
concretise
```
From a user:
> Aargh.
> ```
> ==> Error: concretise is not a recognized Spack command or extension command; check with `spack commands`.
> ```
To make things easier for our friends in the UK, this adds `concretise` and
`containerise` aliases for the `spack concretize` and `spack containerize` commands.
- [x] add aliases
- [x] update completions
This reapplies 66f7540, which adds supports for hardlinks/junctions on
Windows systems where developer mode is not enabled.
The commit was reverted on account of multiple issues:
* Checks added to prevent dangling symlinks were interfering with
existing CI builds on Linux (i.e. builds that otherwise succeed were
failing for creating dangling symlinks).
* The logic also updated symlinking to perform redirection of relative
  paths, which led to malformed symlinks.
This commit fixes these issues.
#35042 introduced lazy hash parsing, but didn't remove a
few attributes from the parser that were needed only for
concrete specs
This commit removes them, since they are effectively
dead code.
The heuristic for duplicate nodes contains a few typos, and
apparently slows down the solve for specs that have a lot of
sub-optimal choices to be taken.
This is likely because with a lot of sub-optimal choices, the
low priority, flawed heuristic is being used by clingo.
Here I split the heuristic, so complex rules that matter only
if we allow multiple nodes from the same package are used
only in that case.
Since #34821 we are annotating virtual dependencies on
DAG edges, and reconstructing virtuals in memory when
we read a concrete spec from previous formats.
Therefore, we can remove a TODO in asp.py, and rely on
"virtual_on_edge" facts to be imposed.
Computing `str(spec)` is faster than computing `hash(spec)`, and
since all the abstract specs we deal with come from user configuration,
they cannot cover DAG structures that are not captured by `str()` but
are captured by `hash()`.
Delay lookup for abstract hashes until concretization time, instead of
until Spec comparison. This has a few advantages:
1. `satisfies` / `intersects` etc don't always know where to resolve the
abstract hash (in some cases it's wrong to look in the current env,
db, buildcache, ...). Better to let the call site dictate it.
2. Allows search by abstract hash without triggering a database lookup,
causing quadratic complexity issues (accidental nested loop during
search)
3. Simplifies queries against the buildcache; they can now use `Spec`
   instances instead of strings.
The rules are straightforward:
1. `a` satisfies `b` when `b`'s hash is a prefix of `a`'s hash
2. `a` intersects `b` when either `a`'s or `b`'s hash is a prefix of the other's
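A minimal sketch of those two rules over hash strings:
```python
def hash_satisfies(a: str, b: str) -> bool:
    # a satisfies b when b's hash is a prefix of a's hash
    return a.startswith(b)

def hash_intersects(a: str, b: str) -> bool:
    # a intersects b when either hash is a prefix of the other
    return a.startswith(b) or b.startswith(a)

assert hash_satisfies("abcdef123", "abc")
assert hash_intersects("abc", "abcdef123")
```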
The median length of this list is 1. For reasons I don't know, `.sort()`
still likes to call the key function.
This saves ~9% of total database read time, and the number of calls
goes from 5305 to 1715.
* Do not impose provider conditions, if the node is not a provider
fixes #39455
When a node can be a provider of a spec, but is not selected as
a provider, we should not be imposing provider conditions on the
virtual.
* Adjust the integrity constraint, by using the correct atom
* Add "only_clingo", "only_original" and "not_on_windows" markers
* Modify tests to use the "not_on_windows" marker
* Mark tests that run only with clingo
* Mark tests that run only with the original concretizer
To avoid paying the cost of setup and of a full grounding again,
move cycle detection into a separate program and check first if
the solution has cycles.
If it has, ground only the integrity constraint preventing cycles
and solve again.
The "concretizer" section has been extended with a "duplicates:strategy"
attribute, that can take three values:
- "none": only 1 node per package
- "minimal": allow multiple nodes opf specific packages
- "full": allow full duplication for a build tool
This refactor introduces extra indices for the triggers and
effects of a condition, so that the corresponding clauses
are evaluated once for every condition they apply to.
All the solution modes we use imply that we have to solve for all
the literals, except for "when possible".
Here we remove a minimization on the number of literals not
solved, and directly emit a fact when a literal *has* to be
solved.