After #41373, where we stopped considering the source directory to be the stage for develop builds,
we resumed *deleting* the stage even after a successful build.
We don't want this for develop builds because developers need to iterate; we should keep the artifacts
unless they explicitly run `spack clean`.
Now:
- [x] Build artifacts for develop packages are not removed after a successful install
- [x] They are also not removed before an install starts, i.e. develop packages always
reuse prior artifacts, if available.
- [x] They can be deleted in any other context, e.g. by running `spack clean --stage`
Users requested an option to filter between local/upstream results in `spack find` output.
```
# default behavior, same as without --install-tree argument
$ spack find --install-tree all
# show only local results
$ spack find --install-tree local
# show results from all upstreams
$ spack find --install-tree upstream
# show results from a particular upstream or the local install_tree
$ spack find --install-tree /path/to/install/tree/root
```
---------
Co-authored-by: becker33 <becker33@users.noreply.github.com>
* Allow compilers to function across compatible OSes
* Add documentation in the default yaml
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* Add macos-14 as a runner (Apple M1)
* Mark a test as xfail
We need to check later whether this test needs modifications
on Apple Silicon chips.
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Co-authored-by: alalazo <alalazo@users.noreply.github.com>
* buildcache sync: manifest-glob with arbitrary destination
The current implementation of `--manifest-glob` is a bit restrictive,
requiring the destination to be known at the generation stage of CI.
This change allows specifying an arbitrary destination mirror URL.
* Add unit test for buildcache sync with manifest
* Fix test and arguments for manifest-glob with override destination
* Add testing path for unused mirror argument
* Remove a few compilers from static test data
These compilers were used only in a handful of tests, so
they are now added only in those tests.
* Remove clang@3.3 from unit test configuration
* Parametrize compilers.yaml
* Remove specially named gcc from static data
These compilers are used in only two tests
* Remove apple-clang and macOS compilers from static data
These compilers were used only in multimethod tests
* Remove clang@3.5 (compiler seems to be unused)
* Remove gcc@4.4.0 (compiler seems to be unused)
* Exclude x86_64 tests on other architectures
* Mark two tests as for clingo only
* Update version syntax in compilers.yaml
* Parametrize tcl tests on architectures
* Parametrize lmod tests on architectures
* Substitute gcc@4.5.0 with gcc@4.8.0 so it can be used on aarch64
* Fix a few issues with aarch64 and unit-tests
It's now possible to add config on the command line with `spack -c <CONFIG_VARS> ...`, but the new `command_line` scope isn't reflected in the help output for `--scope`:
```bash
> spack help config
...
--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT
configuration scope to read/modify
...
```
This PR adds:
- A new runtime for `%oneapi` compilers, called `intel-oneapi-runtime`
- Information to both `gcc-runtime` and `intel-oneapi-runtime`, to ensure
that we don't mix compilers using different sonames for either `libgfortran`
or `libifcore`
To do so, the following internal mechanisms have been implemented:
- Possibility to inject virtual dependencies from the `runtime_constraints`
callback on packages (sketched below)
Information has been added to `gcc-runtime` to provide the correct soname
under different conditions, depending on its `%gcc`.
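As a rough sketch of that mechanism (the helper names and the exact signature are Spack internals and may differ from what is shown here), a compiler package can declare runtime constraints roughly like this:
```python
from spack.package import *


class Gcc(AutotoolsPackage):
    # ... (rest of the package omitted)

    @classmethod
    def runtime_constraints(cls, *, spec, pkg):
        # every node compiled with this gcc links against a gcc-runtime that
        # is at least as new as the compiler itself
        pkg("*").depends_on(
            f"gcc-runtime@{spec.version}:",
            when=f"%{spec}",
            type="link",
            description=f"Nodes compiled with {spec} need gcc-runtime@{spec.version}:",
        )
        # the gfortran provider (and hence the libgfortran soname) must match
        # the version of the gcc used to compile the node
        pkg("gfortran").requires(f"@={spec.version}", when=f"%{spec}")
```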
The rules injected into the solver look like:
```prolog
% Add a dependency on 'gfortran@5' for nodes compiled with gcc@=13.2.0 and using the 'fortran' language
attr("dependency_holds", node(ID, Package), "gfortran", "link") :-
attr("node", node(ID, Package)),
attr("node_compiler", node(ID, Package), "gcc"),
attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
not external(node(ID, Package)),
not runtime(Package),
attr("language", node(ID, Package), "fortran").
attr("virtual_node", node(RuntimeID, "gfortran")) :-
attr("depends_on", node(ID, Package), ProviderNode, "link"),
provider(ProviderNode, node(RuntimeID, "gfortran")),
attr("node", node(ID, Package)),
attr("node_compiler", node(ID, Package), "gcc"),
attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
not external(node(ID, Package)),
not runtime(Package),
attr("language", node(ID, Package), "fortran").
attr("node_version_satisfies", node(RuntimeID, "gfortran"), "5") :-
attr("depends_on", node(ID, Package), ProviderNode, "link"),
provider(ProviderNode, node(RuntimeID, "gfortran")),
attr("node", node(ID, Package)),
attr("node_compiler", node(ID, Package), "gcc"),
attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
not external(node(ID, Package)),
not runtime(Package),
attr("language", node(ID, Package), "fortran").
```
This adds support for prereleases. Alpha, beta and release candidate
suffixes are ordered in the intuitive way:
```
1.2.0-alpha < 1.2.0-alpha.1 < 1.2.0-beta.2 < 1.2.0-rc.3 < 1.2.0 < 1.2.0-xyz
```
Alpha, beta and rc prereleases are defined as follows: split the version
string into components like before (on delimiters and string boundaries).
If there's a string component `alpha`, `beta` or `rc` followed by an optional
numeric component at the end, then the version is a prerelease.
So `1.2.0-alpha.1 == 1.2.0alpha1 == 1.2.0.alpha1` are all the same, as usual.
The strings `alpha`, `beta` and `rc` are chosen because they match semver,
they are sufficiently long to be unambiguous, and they all contain at least
one non-hex character to distinguish them from shasum/digest-type suffixes.
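As a quick illustration with Spack's `Version` class (a sketch; the assertions just restate the ordering and equivalences described above):
```python
from spack.version import Version

# prereleases sort before the corresponding final release
assert Version("1.2.0-alpha") < Version("1.2.0-alpha.1") < Version("1.2.0-beta.2")
assert Version("1.2.0-rc.3") < Version("1.2.0") < Version("1.2.0-xyz")

# the usual delimiter-insensitive equivalence still holds
assert Version("1.2.0-alpha.1") == Version("1.2.0alpha1") == Version("1.2.0.alpha1")
```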
The comparison key is now stored as `(release_tuple, prerelease_tuple)`, so in
the above example:
```
((1,2,0),(ALPHA,)) < ((1,2,0),(ALPHA,1)) < ((1,2,0),(BETA,2)) < ((1,2,0),(RC,3)) < ((1,2,0),(FINAL,)) < ((1,2,0,"xyz"), (FINAL,))
```
The version ranges `@1.2.0:` and `@:1.1` do *not* include prereleases of
`1.2.0`.
So for packaging, if the `1.2.0alpha` and `1.2.0` versions have the same constraints on
dependencies, it's best to write
```python
depends_on("x@1:", when="@1.2.0alpha:")
```
However, `@1.2:` does include `1.2.0alpha`. This is because Spack considers
`1.2` and `1.2.0` to be distinct versions, with `1.2 < 1.2.0alpha < 1.2.0` as a consequence.
Alternatively, the above `depends_on` statement can thus be written
```python
depends_on("x@1:", when="@1.2:")
```
which can be useful too: it is a shorthand that includes prereleases, while you
can still exclude them explicitly by specifying the patch version number.
### Concretization
Concretization uses a different version order than `<`. Prereleases are ordered
between final releases and develop versions. That way, users should not
have to set `preferred=True` on every final release if they add just one
prerelease to a package. The concretizer is unlikely to pick a prerelease when
final releases are possible.
### Limitations
1. You can't express a range that includes all alpha releases but excludes all beta
releases. The only alternative is the good old repeated nines: `@:1.2.0alpha99`.
2. The Python ecosystem defaults to `a`, `b`, `rc` strings, so translation of Python versions to
Spack versions requires expansion to `alpha`, `beta`, `rc`. It's mildly annoying, because
this means we may need to compute URLs differently (not done in this commit).
### Hash
Care is taken not to break hashes of versions that do not have a prerelease
suffix.
Generate CI scripts as PowerShell on Windows. This is intended to
output exactly the same bash scripts as before on Linux.
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
Running a `spack-python` script like this:
```python
import spack
import multiprocessing
def echo(args):
    print(args)

if __name__ == "__main__":
    pool = multiprocessing.Pool(2)
    pool.map(echo, range(10))
```
will fail in `develop` with an error like this:
```console
_pickle.PicklingError: Can't pickle <function echo at 0x104865820>: attribute lookup echo on __main__ failed
```
Python expects to be able to look up the function `echo` in `sys.modules["__main__"]` in
subprocesses spawned by `multiprocessing`, but because we use `InteractiveConsole` to
run `spack python`, the executed file isn't considered to be the `__main__` module, and
lookups in subprocesses fail. We tried to fake this by setting `__name__` to `__main__`
in the `spack python` command, but that doesn't fix the fact that no `__main__` module
exists.
Another annoyance with `InteractiveConsole` is that `__file__` is not defined in the
main script scope, so you can't use it in your scripts.
We can use the [runpy.run_path()](https://docs.python.org/3/library/runpy.html#runpy.run_path) function,
which has been around since Python 3.2, to fix this.
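A minimal sketch of what the non-interactive path boils down to (the script path here is just a placeholder):
```python
import runpy

# run the script as the __main__ module, so functions defined in it can be
# pickled and looked up by multiprocessing workers
runpy.run_path("/path/to/script.py", run_name="__main__")
```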
- [x] Use `runpy` module to launch non-interactive `spack python` invocations
- [x] Only use `InteractiveConsole` for interactive `spack python`
Often in containers, the files we use to detect whether a Cray system supports new features are not available.
Given that the Cray containers only support the newer versions, that these versions have been
around for a while at this point, and that few sites lack support for them, this PR changes the logic for
detecting Cray systems (sketched below) so that:
1. We don't even consider whether something is the `cray` platform if `opt/cray` is not in `MODULEPATH`
2. We only use the `cray` platform if we can read files in `/opt/cray/pe` and positively detect an older version
3. Otherwise, we assume we're *not* on a Cray (this includes newer Cray PEs, which we treat as Linux)
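A condensed, hypothetical sketch of that decision order; `_detect_old_cray_pe` is a made-up stand-in for reading the files under `/opt/cray/pe`:
```python
import os
import pathlib


def _detect_old_cray_pe() -> bool:
    """Hypothetical stand-in: return True only if the files under /opt/cray/pe
    positively identify an older Cray PE version."""
    pe_root = pathlib.Path("/opt/cray/pe")
    return pe_root.is_dir() and any(pe_root.iterdir())  # placeholder check


def is_cray_platform() -> bool:
    # 1. don't even consider the cray platform if opt/cray is not in MODULEPATH
    if "opt/cray" not in os.environ.get("MODULEPATH", ""):
        return False
    try:
        # 2. only use the cray platform on a positive match of an older PE
        return _detect_old_cray_pe()
    except OSError:
        # 3. otherwise assume we are not on a Cray (newer PEs are treated as Linux)
        return False
```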
`jinja2` can be a costly import, and right now it happens at startup every time we run
Spack. This slows down `spack --print-shell-vars` a bit, which is needed by `setup-env.*sh`.
The patch allowing Clingo to build with VS22 has landed both in Spack
and upstream in Clingo; update Spack's bootstrap constraints to handle
this.
Additionally, properly scope the patch application in the clingo
package to account for the upstream patch.
Currently (outside of this PR), when you `spack develop` a path, that path is treated as the staging
directory (which means, for example, that all build artifacts are placed in the develop path).
This PR creates a separate staging directory for all `spack develop`ed builds. It looks like
```
# the stage root
/the-stage-root-for-all-spack-builds/
spack-stage-<hash>
# Spack packages inheriting CMakePackage put their build artifacts here
spack-build-<hash>/
```
Unlike non-develop builds, there is no `spack-src` directory; `source_path` is the provided `dev_path`.
Instead, separately, in the `dev_path`, we have:
```
/dev/path/for/foo/
build-{arch}-<hash> -> /the-stage-root-for-all-spack-builds/spack-stage-<hash>/
```
The main benefit of this is that build artifacts for out-of-source builds that are relative to
`Stage.path` are easily identified (and you can delete them with `spack clean`).
Other behavior added here:
- [x] A symlink is made from the `dev_path` to the stage directory. The symlink name incorporates
spec details, so that multiple Spack environments that develop the same path will not conflict
with one another
- [x] `spack cd` and `spack location` gain a `-c` shorthand for `--source-dir`
Spack builds can still change the develop path (in particular to keep track of applied patches),
and for in-source builds this doesn't change much (although logs would not be written into
the develop path). Packages inheriting from `CMakePackage` build out of source, though, and
should get this benefit automatically.
The `patch()` directive can now be invoked with `reverse=True` to apply a patch in reverse.
This is useful for reverting commits that caused errors in projects, even if only the forward
patch is available, e.g. via a GitHub commit patch URL.
`patch(..., reverse=True)` runs `patch -R` behind the scenes. `-R` is a POSIX option, so we
can expect it to be available in any `patch` command.
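For example, a package could revert a problematic upstream commit like this (package name, URL, checksum, and version constraint are all hypothetical):
```python
from spack.package import *


class Foo(MakefilePackage):
    """Hypothetical package that reverts an upstream commit which broke the build."""

    # the forward patch from a GitHub commit URL is applied in reverse
    # (i.e. with `patch -R`) to undo that commit
    patch(
        "https://github.com/example/foo/commit/0123abcd.patch?full_index=1",
        sha256="0000000000000000000000000000000000000000000000000000000000000000",
        reverse=True,
        when="@1.2.0",
    )
```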
---------
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
fixes #43097
Before this PR, mixins used together with builders completely masked
the callbacks defined by classes coming later in the MRO.
Here we fix the behavior by accumulating all callbacks
and de-duplicating them later.
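A generic illustration of the idea, not Spack's actual implementation: walk the MRO, accumulate every registered callback, and drop duplicates while preserving order (the attribute name `_callbacks` is made up):
```python
def collect_callbacks(cls, attribute="_callbacks"):
    """Accumulate callbacks registered anywhere in cls's MRO, de-duplicated,
    instead of stopping at the first class that defines the attribute."""
    seen, accumulated = set(), []
    for klass in cls.__mro__:
        for callback in vars(klass).get(attribute, ()):
            if callback not in seen:
                seen.add(callback)
                accumulated.append(callback)
    return accumulated
```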
Remove dependency on `importlib_metadata` and `pkg_resources`, which can be problematic if the version in PYTHONPATH is incompatible with the interpreter Spack is running under.
Closes #43052.
Maybe moving the argument to the `find` subcommand is a good idea, but I
just wanted to get the docs fix out.
Co-authored-by: Patrice Peterson <patrice.peterson@itz.uni-halle.de>
This PR adds the ability to load Spack extensions through `importlib.metadata` entry
points, in addition to the regular configuration variable.
It requires Python 3.8 or greater to be properly supported.
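A generic sketch of how entry-point discovery works with `importlib.metadata`; the group name `spack.extensions` is an assumption made for illustration:
```python
import importlib.metadata

# the group-based selection below needs Python 3.10+; on 3.8/3.9,
# entry_points() returns a dict that can be indexed by group name instead
for entry_point in importlib.metadata.entry_points(group="spack.extensions"):
    extension = entry_point.load()
    print(f"loaded extension {entry_point.name!r}: {extension!r}")
```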
* ASP-based solver: improve reusing nodes with gcc-runtime
This PR skips emitting dependency constraints on "gcc-runtime",
for concrete specs that are considered for reuse.
Instead, an appropriate version of gcc-runtime is recomputed,
taking into account also the concrete nodes from reused specs.
This ensures that root nodes in a DAG always have a runtime
at a version greater than or equal to that of their dependencies.
* Add unit-test for view with multiple runtimes
* Select latest version of runtimes in views
* Construct result keeping track of latest
* Keep ordering stable, just in case