We added a hotfix to releases/v0.19 with a feature flag, but the flag
is incompatible with the config schema on `develop`.
- [x] Ensure schema is compatible on develop even though config option is unused.
* Speed up bootstrap mirror unit test
The unit test doesn't need to concretize, since it checks
only metadata for the mirror.
* architecture.py: use "default_mock_concretization" for slow test
Environments and environment views have taken over the role of `spack activate/deactivate`, and we should deprecate these commands for several reasons:
- Global activation is a really poor idea:
- Install prefixes should be immutable, since they can have multiple, unrelated dependents; see below.
- Added complexity elsewhere: verification of installations, tarballs for build caches, creation of environment views of packages with unrelated extensions "globally activated"... By removing the feature, it becomes easier for people to contribute, and we end up with fewer bugs due to edge cases.
- Environments accomplish the same thing as non-global "activation" (i.e. `spack view`), but better.
Also we write in the docs:
```
However, Spack global activations have two potential drawbacks:
#. Activated packages that involve compiled C extensions may still
need their dependencies to be loaded manually. For example,
``spack load openblas`` might be required to make ``py-numpy``
work.
#. Global activations "break" a core feature of Spack, which is that
multiple versions of a package can co-exist side-by-side. For example,
suppose you wish to run a Python package in two different
environments but the same basic Python --- one with
``py-numpy@1.7`` and one with ``py-numpy@1.8``. Spack extensions
will not support this potential debugging use case.
```
Now that environments are established and views can take over the role of activation
non-destructively, we can remove global activation/deactivation.
Currently, external `PythonPackage`s cause install failures because the logic in `PythonPackage` assumes that it can ask for `spec["python"]`. Because we chop off externals' dependencies, an external Python extension may not have a `python` dependency.
This PR resolves the issue by guaranteeing that a `python` node is present in one of two ways:
1. If there is already a `python` node in the DAG, we wire the external up to it.
2. If there is no existing `python` node, we wire up a synthetic external `python` node, and we assume that it has the same prefix as the external.
The assumption in (2) isn't always valid, but it's better than leaving the user with a non-working `PythonPackage`.
The logic here is specific to `python`, but other types of extensions could take advantage of it. Packages need only define `update_external_dependencies(self)`, and this method will be called on externals after concretization. This likely needs to be fleshed out in the future so that any added nodes are included in concretization, but for now we only bolt on dependencies post-concretization.
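A rough, self-contained sketch of the hook (a toy model with hypothetical helpers standing in for Spack's internal Spec machinery):
```python
# Toy model of the `update_external_dependencies` hook described above.
# `_synthesize_external_python` and `_wire_dependency` are hypothetical
# stand-ins, not real Spack APIs.
def _synthesize_external_python(prefix):
    """Build a synthetic external `python` node rooted at `prefix`."""
    return {"name": "python", "external_path": prefix}

def _wire_dependency(spec, dep, deptypes):
    """Attach `dep` to `spec` as a dependency edge."""
    spec.setdefault("dependencies", []).append((dep, deptypes))

class PythonExtension:
    def __init__(self, spec):
        self.spec = spec

    def update_external_dependencies(self):
        # Called on externals after concretization (see above).
        deps = [d for d, _ in self.spec.get("dependencies", [])]
        if any(d["name"] == "python" for d in deps):
            return  # case 1: a python node is already in the DAG
        # Case 2: bolt on a synthetic external python, assumed to share
        # this external's prefix.
        python = _synthesize_external_python(self.spec["external_path"])
        _wire_dependency(self.spec, python, ("build", "link", "run"))
```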
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Spack currently creates a temporary sbang that is moved "atomically" into place,
but this temporary file causes races when multiple processes start installing sbang.
Let's just stick to an idempotent approach. Note that we only re-install sbang
if Spack updates it (we compare file contents), and sbang was only touched
18 times in the past 6 years, whereas we hit the sbang tempfile issue
frequently with parallel installs on a fresh Spack instance in CI.
Also fixes a bug where permissions weren't updated if config changed but
the latest version of the sbang file was already installed.
The `intel` compiler at versions > 20 is provided by the `intel-oneapi-compilers-classic`
package (a thin wrapper around the `intel-oneapi-compilers` package), and the `oneapi`
compiler is provided by the `intel-oneapi-compilers` package.
Prior to this work, neither of these compilers could be bootstrapped by Spack as part of
an install with `install_missing_compilers: True`.
Changes made to make these two packages bootstrappable:
1. The `intel-oneapi-compilers-classic` package includes a `bin` directory with symlinks
   to the compiler executables, rather than just logical pointers in Spack.
2. Spack can look for bootstrapped compilers in directories other than `$prefix/bin`,
   defined on a per-package basis.
3. `intel-oneapi-compilers` specifies a non-default search directory for the
   compiler executables.
4. The `spack.compilers` module can now make more advanced associations between
   packages and compilers, not just simple name translations.
5. Spack support for lmod hierarchies accounts for differences between package
names and the associated compiler names for `intel-oneapi-compilers/oneapi`,
`intel-oneapi-compilers-classic/intel@20:`, `llvm+clang/clang`, and
`llvm-amdgpu/rocmcc`.
- [x] full end-to-end testing
- [x] add unit tests
"spack install foo" no longer adds package "foo" to the environment
(i.e. to the list of root specs) by default: you must specify "--add".
Likewise "spack uninstall foo" no longer removes package "foo" from
the environment: you must specify --remove. Generally this means
that install/uninstall commands will no longer modify the user's list
of root specs (which many users found problematic: they had to
deactivate an environment if they wanted to uninstall a spec without
changing their spack.yaml description).
In more detail: if you have environments e1 and e2, and specs [P, Q, R]
such that P depends on R, Q depends on R, [P, R] are in e1, and [Q, R]
are in e2:
* `spack uninstall --dependents --remove r` in e1: removes R from e1
(but does not uninstall it) and uninstalls (and removes) P
* `spack uninstall -f --dependents r` in e1: will uninstall P, Q, and
R (i.e. e2 will have dependent specs uninstalled as a side effect)
* `spack uninstall -f --dependents --remove r` in e1: this uninstalls
P, Q, and R, and removes [P, R] from e1
* `spack uninstall -f --remove r` in e1: uninstalls R (so it is
"missing" in both environments) and removes R from e1 (note that e1
would still install R as a dependency of P, but it would no longer
be listed as a root spec)
* `spack uninstall --dependents r` in e1: will fail because e2 needs R
Individual unit tests were created for each of these scenarios.
Somehow a network error when cloning the repo for CI gets
categorized by GitLab as a script failure. To make sure we retry
jobs that failed for that reason or a similar one, include
"script_failure" as one of the reasons for retrying service jobs
(which include "no specs to rebuild" jobs, buildcache index update
jobs, and temp storage cleanup jobs).
Add a `project` block to the toml config along with development and CI
dependencies and a minimal `build-system` block, doing basically
nothing, so that spack can be bootstrapped to a full development
environment with:
```shell
$ hatch -e dev shell
```
or for a minimal environment without hatch:
```shell
$ python3 -m venv venv
$ source venv/bin/activate
$ python3 -m pip install --upgrade pip
$ python3 -m pip install -e '.[dev]'
```
This means we can re-use the requirements list throughout the workflow
yaml files and otherwise maintain this list in *one place* rather than
several disparate ones. We may be stuck with a couple more temporarily
to continue supporting Python 2.7, but aside from that there are fewer
places to get out of sync, plus a couple of new bootstrap options.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This change uses the aws cli, if available, to retrieve spec files
from the mirror to a local temp directory, then parallelizes the
reading of those files from disk using multiprocessing.ThreadPool.
If the aws cli is not available, then a ThreadPool is used to fetch
and read the spec files from the mirror.
Using the aws cli results in a ~16x speedup when recreating the binary
mirror index, while just parallelizing the fetching and reading
results in a ~3x speedup.
The compiler bootstrapping logic currently does not add a task when the compiler package is already in the install task queue. This causes failures when the compiler package is added without the additional metadata telling the task to update the compilers list.
Solution: requeue compilers for bootstrapping when needed, to update `task.compiler` metadata.
Currently, develop specs that are not roots and are not explicitly listed dependencies
of the roots are not applied.
- [x] ensure dev specs are applied.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
`spack env create` enables a view by default (in a weird hidden
directory, but well...). This is asking for trouble with the other
default of `concretizer:unify:false`, since having different flavors of
the same spec in an environment leads to collision errors when
generating the view.
A change of defaults would improve the user experience: `unify:true`
makes the most sense, since whenever the issue is brought up in Slack,
the user changes the concretization config anyway (it was never the
intention to have different flavors of the same spec), and install
times decrease.
Further, we improve the docs and drop the duplicate root spec limitation.
Dependencies specified by hash are unique in Spack in that the abstract
specs are created with internal structure. In this case, the constraint
generation for spec matrices fails due to flattening the structure.
It turns out that the dep_difference method for Spec.constrain does not
need to operate on transitive deps to ensure correctness. Removing transitive
deps from this method resolves the bug.
- [x] Includes regression test
Without this, Meson will use its wraps to automatically download and
install dependencies. We want to manage dependencies explicitly,
so we disable this functionality.
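For reference, the Meson option that does this is `--wrap-mode=nodownload`; a minimal standalone sketch of driving Meson that way (not Spack's actual `MesonPackage` code):
```python
# Minimal sketch: configure a Meson project while forbidding wrap-based
# downloads, so every dependency must be provided explicitly.
import subprocess

def configure_meson(source_dir, build_dir, prefix):
    subprocess.check_call([
        "meson", "setup", build_dir, source_dir,
        "--prefix=" + prefix,
        # Never let Meson download missing subprojects via wraps.
        "--wrap-mode=nodownload",
    ])
```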
Currently, Spack can fail for a valid spec if the spec is constructed from overlapping, but not conflicting, concrete specs via the hash.
For example, if abcdef and ghijkl are the hashes of specs that both depend on zlib/mnopqr, then foo ^/abcdef ^/ghijkl will fail to construct a spec, with the error message "Cannot depend on zlib... twice".
This PR changes this behavior to check whether the specs are compatible before failing.
With this PR, foo ^/abcdef ^/ghijkl will concretize.
As a side-effect, so will foo ^zlib ^zlib and other specs that are redundant on their dependencies.
Argparse started raising ArgumentError exceptions
when the same parser is added twice. Therefore, we
perform the addition only if the parser is not
already there.
Port match syntax to our unparser
Compilers and linkers optimize string constants for space by aliasing
them when one is a suffix of another. For gcc / binutils this happens
already at -O1, due to -fmerge-constants. This means that we have
to take care during relocation to always preserve a certain length
of the suffix of those prefixes that are C-strings.
In this commit we pick length 7 as a safe suffix length, assuming the
suffix is typically the 7 characters from the hash (i.e. random), so
it's unlikely to alias with any string constant used in the sources.
In general we now pad shortened strings from the left with leading
directory separators, but in the case of C-strings that are much shorter
and don't share a common suffix (due to projections), we do allow
shrinking the C-string, appending a null, and retaining the old part
of the prefix.
Also when rewiring, we ensure that the new hash preserves the last
7 bytes of the old hash.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
A user may want to set some attributes on a package without actually modifying the package (e.g. if they want to git pull updates to the package without conflicts). This PR adds a per-package configuration section called `package_attributes`, which is a dictionary of attribute names to desired values. For example:
```yaml
packages:
  openblas:
    package_attributes:
      submodules: true
      git: "https://github.com/myfork/openblas"
```
In this case, the package will always retrieve git submodules, and will use an alternate location for the git repo.
While `git`, `url`, and `submodules` are the attributes for which we envision the most usage, this allows any attribute to be overridden, and the acceptable values are any value parseable from yaml.
Newer versions of the CrayPE for EX systems have standalone compiler executables for CCE and compiler wrappers for Cray MPICH. With those, we can treat the cray systems as part of the linux platform rather than having a separate cray platform.
This PR:
- [x] Changes cray platform detection to ignore EX systems with Craype version 21.10 or later
- [x] Changes the cce compiler to be detectable via paths
- [x] Changes the spack compiler wrapper to understand the executable names for the standalone cce compiler (`craycc`, `crayCC`, `crayftn`).
Whenever the rpath string actually _grows_, we fall back to patchelf;
when it stays the same length or gets shorter, we update it in place,
padded with null bytes.
This PR only deals with absolute -> absolute rpath replacement. We don't
use `_build_tarball(relative=True)` in our CI. If `relative` then it falls
back to the old replacement code.
With this PR, relocation time goes down significantly, likely because patchelf
does some odd things with mmap, causing lots of overhead. Example:
- `binutils`: 700MB installed, goes from `1.91s` to `0.57s`, or `3.4x` faster.
Relocation time: 27% -> 10% of total install time
- `llvm`: 6.8GB installed, goes from `28.56s` to `5.38s`, or `5.3x` faster.
Relocation time: 44% -> 13% of total install time
The bottleneck is now decompression.
Note: I'm somewhat confused about the "relative rpath" code paths. Right
now this PR only deals with absolute -> absolute replacement. As far as
I understand, if you embrace relative rpaths when uploading to the
buildcache, the whole point is you _don't_ want to patch rpaths on
install? So it seems fine to not expand `$ORIGIN` again imho.
When a package asks for non-parallel make, we need to force `make -j1` because just doing `make` will run in parallel under jobserver (e.g. `spack env depfile`).
We now always add `-j1` when asked for a non-parallel execution (even if there is no jobserver).
And each `MakeExecutable` can now ask for jobserver support or not. For example: the default `ninja` does not support the jobserver, so Spack applies the default `-j`, but `ninja@kitware` or `ninja-fortran` does, so Spack doesn't add `-j`.
Tip: you can run `SPACK_INSTALL_FLAGS=-j1 make -f spack-env-depfile.make -j8` to avoid massive job spawning because of build tools that don't support the jobserver (ninja).
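The jobs-flag decision above boils down to something like this (a simplified sketch, not the exact `MakeExecutable` code):
```python
# Simplified sketch of the -j decision described above.
def jobs_flags(parallel, jobs, supports_jobserver):
    if not parallel:
        # Force -j1: a bare `make` still parallelizes under a jobserver,
        # e.g. when driven by `spack env depfile`.
        return ["-j1"]
    if supports_jobserver:
        # Let the parent make's jobserver coordinate; pass no -j at all.
        return []
    # No jobserver support (e.g. the default ninja): apply the default -j.
    return ["-j%d" % jobs]
```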
We try to avoid non-default variant values in the concretizer, but this doesn't make
sense for variants forced to take some non-default value by variant propagation.
Counting this as a penalty effectively biases the concretizer toward small dependency
graphs -- something we try very hard to avoid elsewhere because it can lead to very
strange decisions.
Example: with the penalty, `spack spec hdf5` will choose the default `openmpi` as its
`mpi` provider, but `spack spec hdf5 ~~shared` will choose `mpich` because it has to set
fewer non-default variant values because `mpich`'s DAG is smaller. That's not a good
reason to prefer a non-default virtual provider.
To fix this, if the user explicitly requests a non-default value to be propagated, there
shouldn't be a penalty. Variant values set on the CLI already don't count as default; we
just need to extend that to propagated values.
Adds another post-install hook that loops over the install prefix, looking for shared-library ELF files, and sets each one's soname to its own absolute path.
The idea being: whenever somebody links against those libraries, the linker copies the soname (which is the absolute path to the library) into a "needed" entry, so that at runtime the dynamic loader sees that the needed library is a path and loads it directly without searching.
As a result:
1. rpaths are not used for the fixed/static list of needed libraries in the dynamic section (only for _actually_ dynamically loaded libraries through `dlopen`), which largely solves the issue that Spack's rpaths are a heuristic (`<prefix>/lib` and `<prefix>/lib64` might not be where libraries really are...)
2. improved startup times (no library search required)
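The core of the hook can be small; a conceptual sketch, assuming `patchelf` is available and `libraries` holds the ELF shared libraries found under the prefix:
```python
# Conceptual sketch: make each shared library's soname its own absolute
# path, so dependents record that path as a "needed" entry.
import subprocess

def set_sonames_to_absolute_paths(libraries):
    for lib in libraries:  # absolute paths to ELF shared libraries
        subprocess.check_call(["patchelf", "--set-soname", lib, lib])
```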
Untouched spec pruning was added to reduce the number of specs
developers see getting rebuilt in their PR pipelines that they
don't understand. Because the state of the develop mirror lags
quite far behind the tip of the develop branch, PRs often find
they need to rebuild things untouched by their PR.
Untouched spec pruning was previously implemented by finding all
specs in the environment with names of packages touched by the PR,
traversing the DAGs of those specs in both directions, and adding
all dependencies as well as dependents to a list of concrete specs
that should not be considered for pruning.
We found that this heuristic results in too many pruned specs, and
that dependents of touched specs must have all their dependencies
added to the list of specs that should not be considered for pruning.
This issue was introduced in #29761:
```
==> Installing ncurses-6.3-22hz6q6cvo3ep2uhrs3erpp2kogxncbn
==> No binary for ncurses-6.3-22hz6q6cvo3ep2uhrs3erpp2kogxncbn found: installing from source
==> Using cached archive: /spack/var/spack/cache/_source-cache/archive/97/97fc51ac2b085d4cde31ef4d2c3122c21abc217e9090a43a30fc5ec21684e059.tar.gz
==> No patches needed for ncurses
==> ncurses: Executing phase: 'autoreconf'
==> ncurses: Executing phase: 'configure'
==> ncurses: Executing phase: 'build'
==> ncurses: Executing phase: 'install'
==> Error: AttributeError: 'str' object has no attribute 'propagate'
The 'ncurses' package cannot find an attribute while trying to build from sources. This might be due to a change in Spack's package format to support multiple build-systems for a single package. You can fix this by updating the build recipe, and you can also report the issue as a bug. More information at https://spack.readthedocs.io/en/latest/packaging_guide.html#installation-procedure
/spack/lib/spack/spack/build_environment.py:1075, in _setup_pkg_and_run:
1072 tb_string = traceback.format_exc()
1073
1074 # build up some context from the offending package so we can
>> 1075 # show that, too.
1076 package_context = get_package_context(tb)
1077
1078 logfile = None
```
It turns out this was caused by a bug that had been around much longer, in which the flags were passed by reference to the flag_handler, and the flag_handler was modifying the spec object, not just the flags given to the build system. The scope of this bug was limited by the forking model in Spack, which is how it went under the radar for so long.
PR includes regression test.
* remove deptype_query remnants
* deptypes -> deptype
These arguments haven't existed since 2017, but `traverse` now fails on unknown **kwargs, so they have finally popped up.
This updates the propagation logic used in `concretize.lp` to avoid rules with `path()`
in the body and instead base propagation around `depends_on()`.
Currently, compiler flags and variants are inconsistent: compiler flags set for a
package are inherited by its dependencies, while variants are not. We should have these
be consistent by allowing for inheritance to be enabled or disabled for both variants
and compiler flags.
- [x] Make new (spec language) operators
- [x] Apply operators to variants and compiler flags
- [x] Conflicts currently result in an unsatisfiable spec
(i.e., you can't propagate two conflicting values)
What I propose is using two of the currently used sigils to symbolize that the variant
or compiler flag will be inherited:
Example syntax:
- `package ++variant`
enabled variant that will be propagated to dependencies
- `package +variant`
enabled variant that will NOT be propagated to dependencies
- `package ~~variant`
disabled variant that will be propagated to dependencies
- `package ~variant`
disabled variant that will NOT be propagated to dependencies
- `package cflags==True`
`cflags` will be propagated to dependencies
- `package cflags=True`
`cflags` will NOT be propagated to dependencies
Syntax for string-valued variants is similar to compiler flags.
Fixes an issue on the RHEL8 UBI container where this test would fail because `gr_mem`
was empty for every entry in the `grp` DB.
You have to check *both* the `pwd` database (which has primary groups) and `grp` (which
has other groups) to do this correctly.
- [x] update `llnl.util.filesystem.group_ids()` to do this
- [x] use it in the `sbang` test
This PR introduces breadth-first traversal, and moves depth-first traversal
logic out of Spec's member functions, into `traverse.py`.
It introduces a high-level API with three main methods:
```python
spack.traverse.traverse_edges(specs, kwargs...)
spack.traverse.traverse_nodes(specs, kwargs...)
spack.traverse.traverse_tree(specs, kwargs...)
```
with the usual `root`, `order`, `cover`, `direction`, `deptype`, `depth`, `key`,
`visited` kwargs for the first two.
What's new is that `order="breadth"` is added for breadth-first traversal.
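A hedged usage sketch (kwargs as listed above; with `depth=True` the iterator yields `(depth, spec)` pairs):
```python
# Usage sketch of the high-level API described above.
import spack.spec
import spack.traverse as traverse

root = spack.spec.Spec("hdf5").concretized()
for depth, node in traverse.traverse_nodes(
    [root], order="breadth", deptype=("link", "run"), depth=True
):
    print("  " * depth + node.name)
```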
The lower level API is not exported, but is certainly useful for advanced use
cases. The lower level API includes visitor classes for direction reversal and
edge pruning, which can be used to create more advanced traversal methods,
especially useful when the `deptype` is not constant but depends on the node
or depth.
---
There are a couple of nice use cases for breadth-first traversal:
- Sometimes roots have to be handled differently (e.g. follow build edges of
roots but not of deps). BFS ensures that root nodes are always discovered at
depth 0, instead of at some depth > 0 as a dep of another root.
- When printing a tree, it would be nice to reduce indent levels so it fits in the
terminal, and ensure that e.g. `zlib` is not printed at indent level 10 as a
dependency of a build dep of a build dep -- rather, if it's a direct dep of my
package, I want to see it at depth 1. This basically requires one breadth-first
traversal to construct a tree, which can then be printed with depth-first traversal.
- In environments in general, it's sometimes inconvenient to have a double
loop: first over the roots then over each root's deps, and maintain your own
`visited` set outside. With BFS, you can simply init the queue with the
environment root specs and it Just Works. [Example here](3ec7304699/lib/spack/spack/environment/environment.py (L1815-L1816))
Currently, many tests hardcode to older versions of gcc for comparisons of
concretization among compiler versions. Those versions are too old to concretize for
`aarch64`-family targets, which leads to failing tests on `aarch64`.
This PR fixes those tests by updating the compiler versions used for testing.
Currently, many tests hardcode the expected architecture result in concretization to the
`x86_64` family of architectures.
This PR generalizes the tests that can be generalized, to cover multiple architecture
families. For those that test specific relationships among `x86_64`-family targets, it
ensures that concretization uses the `x86_64`-family targets in those cases.
Currently, many tests rely on the fact that `AutotoolsPackage` imposes no dependencies
on the inheriting package. That is not true on `aarch64`-family architectures.
This PR ensures that the fact `AutotoolsPackage` on `aarch64` pulls in a dependency on
`gnuconfig` is ignored when testing for the appropriate relationships among dependencies.
Additionally, 5 tests currently prompt the user for input when `gpg` is available in the
user's path. This PR fixes that issue. And 7 tests fail currently when the user has a
yubikey available. This PR fixes the incorrect gpg argument causing those issues.
The `spack info <package>` command does not show the `Virtual Packages:` output unless the `--virtuals` command option is passed. Before this change, the information the command is supposed to be illustrating was not shown in the example, which was confusing.
Changes to improve locating shared libraries on Windows, which in
turn enables the use of Clingo. This PR attempts to establish a
proper distinction between linking on Windows vs. Linux/Mac: on
Windows, linking is always done with .lib files (never .dll files).
This somewhat complicates the model, since the Spec.libs method could
return libraries that were used for both linking and loading, but
since these are not always the same on Windows, it was decided to
treat Spec.libs as being for link-time libraries. Additional functions
are added to help dependents locate run-time libraries.
* Clingo is now the default concretizer on Windows
* Clingo is now the concretizer used for unit tests on Windows
* Fix a permissions issue that can occur while moving Git files during
fetching/staging
* Packages can now implement "win_add_library_dependent" to register
files/directories that include libraries that would need to link
to dependency dlls
* Packages can now implement "win_add_rpath" to register the locations
of dlls that dependents would want to load
* "Spec.libs" on Windows is updated to return link-time libraries
(i.e. .lib files, rather than .dll files)
* PackageBase.rpath on Windows is now updated to return the most-likely
locations where .dlls will be found (which is generally in the bin/
directory)
Currently there's a slow sequential step in binary relocation where all
strings of a binary are collected, with rpaths removed, and then
filtered for the old install root.
This is completely unnecessary, and also incorrect, since we replace
more than just the old install root in the prefix to prefix mapping. And
in fact the prefix to prefix mapping is parallel, and a single pass. So
even as an optimization, this filter makes no sense anymore.
Therefore, we remove it:
- single pass over the binary data matching all prefixes
- collect offsets and replacement strings
- do in-place updates with `fseek` / `fwrite`, since typically our
replacements touch O(few bytes) while the file is O(many megabytes)
(see the sketch after this list)
- be nice: leave the file untouched if some string can't be
replaced
* Add patches for building clingo with MSVC
* Help python find clingo DLL
* If an executable is located in "C:\Program Files", Executable was
running into issues with the extra space. This quotes the exe
to ensure that it is treated as a single value.
Signed-off-by: Kiruya Momochi <65301509+KiruyaMomochi@users.noreply.github.com>
This commit extends the DSL that can be used in packages
to allow declaring that a package uses different build-systems
under different conditions.
It requires each spec to have a `build_system` single valued
variant. The variant can be used in many contexts to query, manipulate
or select the build system associated with a concrete spec.
The knowledge to build a package has been moved out of the
PackageBase hierarchy, into a new Builder hierarchy. Customization
of the default behavior for a given builder can be obtained by
coding a new derived builder in package.py.
The "run_after" and "run_before" decorators are now applied to
methods on the builder. They can also incorporate a "when="
argument to specify that a method is run only when certain
conditions apply.
For packages that do not define their own builder, forwarding logic
is added between the builder and package (methods not found in one
will be retrieved from the other); this PR is expected to be fully
backwards compatible with unmodified packages that use a single
build system.
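A sketch of what a recipe using this DSL can look like (package name, versions, and options are illustrative):
```python
# Illustrative recipe using the `build_system` directive and the Builder
# hierarchy described above; names and versions are made up.
import spack.build_systems.autotools
import spack.build_systems.cmake
from spack.package import *


class Example(CMakePackage, AutotoolsPackage):
    """Example package that switched build systems at version 2."""

    build_system(
        conditional("cmake", when="@2:"),
        conditional("autotools", when="@:1"),
        default="cmake",
    )


class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
    # Build-system-specific customization lives on the builder.
    def cmake_args(self):
        return [self.define("BUILD_SHARED_LIBS", True)]
```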
Instead of looping over multiple regexes and the entire text file
contents, create a giant regex with all literal prefixes and do a single
pass over files to detect prefixes. Not only is a single pass faster,
it's also likely that the regex is compiled better, given that most
prefixes share a common ... prefix.
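The idea in miniature (a standalone sketch, not Spack's exact relocation code):
```python
# Standalone sketch: one alternation regex over all literal prefixes means a
# single pass over the data instead of one pass per prefix.
import re

prefix_to_prefix = {
    b"/old/spack/opt/gcc-12.1.0": b"/new/spack/opt/gcc-12.1.0",
    b"/old/spack/opt/zlib-1.2.12": b"/new/spack/opt/zlib-1.2.12",
}

# Sort longest-first so overlapping prefixes match greedily.
regex = re.compile(
    b"|".join(re.escape(p) for p in sorted(prefix_to_prefix, key=len, reverse=True))
)

def relocate(data):
    return regex.sub(lambda m: prefix_to_prefix[m.group(0)], data)
```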
In the dfs code, flip edges so that `parent` means `from` and
`spec` means `to` in the direction of traversal. This makes it slightly
easier to write generic/composable code. For example when using visitors
where one visitor reverses direction, and another only cares about
accepting particular edges or not depending on whether the target node
is seen before, it would be good if the second visitor didn't have to
know whether the order was changed or not.
Use the same compression level as `gzip` (6) instead of what Python uses
(9).
The LLVM tarball takes 4m instead of 12m to create, and is <1% larger.
That's not worth the wait...
#32137 added an option to update() a BinaryCacheIndex with a
cooldown: repeated attempts within this cooldown would not
actually retry. However, the cooldown was not properly
tracked for failures (which is common when the mirror
does not store any binaries and therefore has no index.json).
This commit ensures that update(..., with_cooldown=True) will
also skip the update even if a failure has occurred within the
cooldown period.
Due to reuse concretization, we may generate DAGs with two occurrences
of the same package corresponding to distinct specs. This happens when
build dependencies are reused, since their dependencies are ignored in
concretization.
This caused a regression, for example: `spec['openssl']` would take the
'openssl' of the build dep `cmake`, instead of the direct `openssl`
dependency, simply because the edge to `cmake` was traversed first and
we do depth-first traversal.
One solution that was discussed is to limit `spec[name]` to just direct
deps, or direct deps + transitive link deps, but this is too breaking.
Instead, this PR simply prioritizes transitive link and direct
build/run/test deps, and then falls back to a full DAG traversal. So,
it's just about order of iteration.
Scan the text files for relocatable prefixes *before* creating a tarball,
to reduce the amount of work to be done during install from binary
cache.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Instead of showing
```
==> Error: Timed out waiting for a write lock.
```
show
```
==> Error: Timed out waiting for a write lock after 1.200ms and 4 attempts on file: /some/file
```
so that we actually get to see where acquiring a lock failed even when not
running in debug mode.
And use pretty time units everywhere, so we don't get 1.45e-9 seconds
but 1.450ns etc.
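Picking the unit is straightforward; an illustrative helper (hypothetical name, not necessarily Spack's implementation):
```python
# Illustrative helper: render a duration with a human-friendly unit.
def pretty_seconds(seconds):
    if seconds >= 1:
        multiplier, unit = 1, "s"
    elif seconds >= 1e-3:
        multiplier, unit = 1e3, "ms"
    elif seconds >= 1e-6:
        multiplier, unit = 1e6, "us"
    else:
        multiplier, unit = 1e9, "ns"
    return "%.3f%s" % (seconds * multiplier, unit)

# pretty_seconds(1.45e-9) -> "1.450ns"
```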
* backtraces without --debug
Currently `--debug` is too verbose, and not-`--debug` gives too little
context about where exceptions are coming from.
So, instead, it'd be nice to have `spack --backtrace` and
`SPACK_BACKTRACE=1` as methods to get something in between: no verbose
debug messages, but always a full backtrace.
This is useful for CI, where we don't want to drown in debug messages
when installing deps, but we do want to get details where something goes
wrong if it goes wrong.
* completion
Currently `relocate_text` and `relocate_text_bin` are unsafe in the
sense that they run in parallel, and lead to races when modifying
different items pointing to the same inode.
This leads to the issue observed in #33453.
This PR:
1. Renames those functions to `unsafe_*` so people are aware
2. Adds logic to deal with hardlinks in current binary packages
3. Adds logic to deal with hardlinks when creating new binary tarballs,
so the install side doesn't have to de-dupe hardlinks.
4. Adds a test for 3
The assumption is that all our relocation logic preserves inodes. That
is, we should never copy a file, modify it, and then move it back. I
quickly verified, and it seems like this is true for (binary) text
relocation, as well as rpath patching in patchelf (even when the file
grows) and Mach-O fixes.
* gitlab: Do not use root_spec['pkg_name'] anymore
For a long time it was fine to index a concrete root spec with the name
of a dependency in order to access the concrete dependency spec. Since
pipelines started using `--use-buildcache dependencies:only,package:never`
though, it has exposed a scheduling issue in how pipelines are
generated. If a concrete root spec depends on two different hashes of
`openssl` for example, indexing that root with just the package name
is ambiguous, so we should no longer depend on that approach when
scheduling jobs.
* env: make sure exactly one spec in env matches hash
When installing some/all specs from a buildcache, build edges are pruned
from those specs. This can result in a much smaller effective DAG. Until
now, `spack env depfile` would always generate a full DAG.
This PR adds the `spack env depfile --use-buildcache` flag that was
introduced for `spack install` before. This way, not only can we drop
build edges, but also we can automatically set the right buildcache
related flags on the specific specs that are going to be installed.
This way we get parallel installs of binary deps without redundancy,
which is useful for Gitlab CI.
When downloading from binary cache not only replace RPATHs to dependencies, but
also text references to dependencies.
Example:
`autoconf@2.69` contains a text reference to the executable of its dependency
`perl`:
```
$ grep perl-5 /shared/spack/opt/spack/linux-amzn2-x86_64_v3/gcc-7.3.1/autoconf-2.69-q3lo/bin/autoreconf
eval 'case $# in 0) exec /shared/spack/opt/spack/linux-amzn2-x86_64_v3/gcc-7.3.1/perl-5.34.1-yphg/bin/perl -S "$0";; *) exec /shared/spack/opt/spack/linux-amzn2-x86_64_v3/gcc-7.3.1/perl-5.34.1-yphg/bin/perl -S "$0" "$@";; esac'
```
These references need to be replaced or any package using `autoreconf` will fail
as it cannot find the installed `perl`.
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
"spack install" will not update the binary index if given a concrete
spec, which causes it to fall back to direct fetches when a simple
index update would have helped. For S3 buckets in particular, this
significantly and needlessly slows down the install process.
This commit alters the logic so that the binary index is updated
whenever a by-hash lookup fails. The lookup is attempted again with
the updated index before falling back to direct fetches. To avoid
updating too frequently (potentially once for each spec being
installed), BinaryCacheIndex.update now includes a "cooldown"
option, and when this option is enabled it will not update more
than once in a cooldown window (set in config.yaml).
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Add libpressio and dependencies; some of these packages are
maintained as forks of the original repositories and in those
cases the docstring mentions this.
* Add optional dependency in adios2 on libpressio
* cub package: set CUB_DIR environment variable for dependent
installations
* Clear R_HOME/R_ENVIRON before Spack installation (avoid sources
outside of Spack from affecting the installation in Spack)
* Rename dlib to dorian3d-dlib and update dependents; add new dlib
implementation. Pending an official policy on how to handle
packages with short names, reviewer unilaterally decided that
the rename was acceptable given that the new Spack dlib package
is referenced more widely (by orders of magnitude) than the
original
Co-authored-by: Samuel Li <shaomeng@users.noreply.github.com>
When installing an individual spec `spack install --only=package --cache-only /xyz`
from a buildcache, Spack currently issues tons of warnings about
missing build deps (and their deps) in the database.
This PR disables these warnings, since it's fine to have a spec without
its build deps in the db (they are just "missing").
Currently `traverse_dependencies` fixes deptypes to traverse once and
for all in the recursion, but this is incorrect, since deptypes depend
on the node (e.g. if it's a dependency and cache-only, don't follow
build-type edges, even if the parent is built from source and needs
build deps).
Support spackbot rebuilding all specs from source when asked (with "rebuild everything")
- Allow overriding --prune-dag cli opt with env var
- Use job variable to optionally prevent rebuild jobs early exit behavior
- ci rebuild: Use new install argument to insist deps are always installed from binary, but
package is only installed from source.
- gitlab: fix bug w/ untouched pruning
- ci rebuild: install from hash rather than json file
- When doing a "rebuild everything" pipeline, make sure that each install job only consumes
binary dependencies from the mirror being populated by the current pipeline. This avoids
using, e.g. binaries from develop, when rebuilding everything on a PR.
- When running a pipeline to rebuild everything, do not die because we generated a hash on
the broken specs list. Instead only warn in that case.
- bugfix: Replace broken no-args tty.die() with sys.exit(1)
Print a message of the form
```
Fetch: mm:ss. Build: mm:ss. Total: mm:ss
```
when installing from buildcache.
Previously this only happened for source builds.
Currently "spack ci generate" chooses the first matching entry in
gitlab-ci:mappings to fill attributes for a generated build-job,
requiring that the entire configuration matrix is listed out
explicitly. This unfortunately causes significant problems in
environments with large configuration spaces, for example the
environment in #31598 (spack.yaml) supports 5 operating systems,
3 architectures and 130 packages with explicit size requirements,
resulting in 1300 lines of configuration YAML.
This patch adds a configuration option to the gitlab-ci schema called
"match_behavior"; when it is set to "merge", all matching entries
are applied in order to the final build-job, allowing a few entries
to cover an entire matrix of configurations.
The default for "match_behavior" is "first", which behaves as before
this commit (only the runner attributes of the first match are used).
In addition, match entries may now include a "remove-attributes"
configuration, which allows matches to remove tags that have been
aggregated by prior matches. This only makes sense to use with
"match_behavior:merge". You can combine "runner-attributes" with
"remove-attributes" to effectively override prior tags.
When a pipeline generation job is automatically failed because it
generated jobs for specs known to be broken on develop, print better
information about the broken specs that were encountered. Include
at a minimum the hash and the url of the job whose failure caused it
to be put on the broken specs list in the first place.
* env depfile: allow deps only install
- Refactor `spack env depfile` to use a Jinja template, making it a bit
easier to follow as a human being.
- Add a layer of indirection in the generated Makefile through an
`<prefix>/.install-deps/<hash>` target, which allows one to specify
different options when installing dependencies. For example, only
verbose/debug mode on when installing some particular spec:
```
$ spack -e my_env env depfile -o Makefile --make-target-prefix example
$ make example/.install-deps/<hash> -j16
$ make example/.install/<hash> SPACK="spack -d" SPACK_INSTALL_FLAGS=--verbose -j16
```
This could be used to speed up `spack ci rebuild`:
- Parallel install of dependencies from buildcache
- Better readability of logs, e.g. reducing verbosity when installing
dependencies, and splitting logs into deps.log and current_spec.log
* Silence please!
* spack.compiler.Compiler: introduce prefix property
We currently don't really have something that gives the GCC install
path, which is used by many LLVM-based compilers (llvm, llvm-amdgpu,
nvhpc, ...) to fix the GCC toolchain once and for all.
This `prefix` property is dynamic in the sense that it queries the
compiler itself. This is necessary because it's not easy to deduce the
install path from the `cc` property (might be a symlink, might be a
filename like `gcc` which works by having the compiler load a module
that sets the PATH variable, might be a generic compiler wrapper based
on environment variables like on cray...).
With this property introduced, we can clean up some recipes that have
the logic repeated for GCC.
* intel-oneapi-compilers: set --gcc-sysroot to %gcc prefix
Caches used by repositories don't reference the global spack.repo.path instance
anymore, but get the repository they refer to during initialization.
Spec.virtual now uses the index, and the computation done to build the index
uses Repository.is_virtual_safe.
Code to construct mock packages and mock repository has been factored into
a unique MockRepositoryBuilder that is used throughout the codebase.
Add debug print for pushing and popping config scopes.
Changed spack.repo.use_repositories so that it can override or not previous repos
spack.repo.use_repositories updates spack.config.config according to the modifications done
Removed a peculiar behavior from spack.config.Configuration where push would always
bubble-up a scope named command_line if it existed
Resolves #31782
With this change, if a spec is concrete after parsing (e.g. spec.yaml
or /hash-based), then it is not disambiguated (a process which requires
(a) that the spec be installed and (b) that it be part of the
currently-active environment).
This commit allows you to:
* Diff specs from an environment regardless of whether they have
been installed (more useful for projection/matrix-based envs)
* Diff specs read from .yaml files which may or may not be entirely
different installations of Spack
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Docs: Getting Started Dependencies
Finally document what one needs to install to use Spack on
Linux and Mac :-)
With <3 for minimal container users and my colleagues with
their fancy Macs.
* Debian Update Packages: GCC, Python
- build-essential: includes gcc, g++ (thx Cory)
- Python: add python3-venv, python3-distutils (thx Pradyun)
* Add RHEL8 Dependencies
* filter_file: introduce argument 'start_at'
* autotools: extend patching of the libtool script
* autotools: refactor _patch_usr_bin_file
* autotools: improve readability of the filtering
* autotools: keep the modification time of the configure scripts
* autotools: do not try to patch directories
* autotools: explain libtool patching for posterity
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Remove `module-info mode load` condition that prevents auto-unloading when autoloading is enabled. It looks like this condition was added to work around an issue in environment-modules, and it is no longer necessary.
Add quotes to make is-loaded happy
When concurrent misc_cache provider index rebuilds happen, try to
rebuild it only once, so we don't exceed misc_cache lock timeout.
For example, when using `spack env depfile`, with no previous
misc_cache, running `make -f depfile -j8` could run at most 8 concurrent
`spack install` locking on misc_cache to rebuild the provider index. If
one rebuild takes 30s, before this fix, the "worst" lock could wait up
to 30s * 7, easily exceeding misc_cache lock timeout. Now, the "worst"
lock would take 30s * 1 + ~1s * 6.
Currently, module changes from `setup_dependent_package` are applied only to the module of the package class, but not to any parent classes' modules between the package class module and `spack.package_base`.
In this PR, we create a custom class to accumulate module changes, and apply those changes to each class that requires it. This design allows us to code for a single module, while applying the changes to multiple modules as needed under the hood, without requiring the user to reason about package inheritance.
* find/list: display package counts last
We have over 6,600 packages now, and `spack list` still displays the number of packages
before it lists them all. This is useless for large sets of results (e.g., with no args)
as the number has scrolled way off the screen before you can see it. The same is true
for `spack find` with large installations.
This PR changes `spack find` and `spack list` so that they display the package count
last.
* add some quick testing
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Allow environment variables and spack-specific path substitution variables (e.g. `$spack`) to be
used in the paths associated with develop specs, while maintaining the ability to keep those
paths relative to the environment rather than the working directory.
Install: Add use-buildcache option to install
* Allow differentiating between top level packages and dependencies when
determining whether to install from the cache or not.
* Add unit test for --use-buildcache
* Use metavar to display use-buildcache options.
* Update spack-completion
Make it possible to install the Clingo package on Windows; this
also provides a means to use Clingo with Spack on Windows.
This includes
* A new "winbison" package: Windows has a port of bison and flex where
the two packages are grouped together. Clingo dependencies have been
updated to use winbison on Windows and bison elsewhere (this avoids
complicating the existing bison/flex packages until we can add support
for implied virtuals).
* The CMake build system was incorrectly converting CMAKE_INSTALL_PREFIX
to POSIX format.
* The re2c package has been modified to use CMake on Windows; for now
this is done by overloading the configure/build/install methods to
perform CMake-appropriate operations; the package should be refactored
once support for multiple build systems in one Package is available.
This commit fixes #27027.
The root cause of the issue is that the `SPACK_OLD_PROMPT` variable
was evaluated in string interpolation regardless of whether the
guard condition above evaluates to true or false. This commit uses
the `eval` keyword to defer evaluation until the command is executed.
Co-authored-by: Alexander Hornburg <alexande@xilinx.com>
Spack currently depends on parsing filenames of downloaded files to
determine what type of archive they are and how to decompress them.
This commit adds a preliminary check based on magic numbers to
determine archive type (but falls back on name parsing if the
extension type cannot be determined).
As part of this work, this commit also enables decompression of
.tar.xz-compressed archives on Windows.
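A minimal sketch of magic-number detection with a name-based fallback (hypothetical helper, not Spack's exact implementation):
```python
# Minimal sketch: identify an archive by its leading bytes, falling back
# to the filename extension when the magic is unknown.
import os

MAGIC_NUMBERS = {
    b"\x1f\x8b": "gzip",
    b"BZh": "bzip2",
    b"\xfd7zXZ\x00": "xz",
    b"PK\x03\x04": "zip",
}

def archive_type(path):
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, kind in MAGIC_NUMBERS.items():
        if head.startswith(magic):
            return kind
    # Fall back to name parsing.
    return os.path.splitext(path)[1].lstrip(".") or "unknown"
```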
Include exception info related to url retrieval in debug messages
which otherwise would be swallowed. This is intended to be useful
for detecting if CA configuration interferes with downloads from
HTTPS links.
This fixes a bug where two installations that differ only by package hash will not show
up in `spack find`.
The bug arose because `_cmp_node` on `Spec` didn't include the package hash in its
yielded fields. So, any two `Spec` objects that were only different by package hash
would appear to be equal and would overwrite each other when inserted into the same
`dict`. Note that we could still *install* specs with different package hashes, and they
would appear in the database, but code that needed to put them into data structures
that use `__hash__` would have issues.
This PR makes `Spec.__hash__` and `Spec.__eq__` include the `process_hash()`, and it
makes `Spec._cmp_node` include the package hash. All of these *should* include all
information in a spec so that we don't end up in a situation where we are blind to
particular field differences.
Eventually, we should unify the `_cmp_*` methods with `to_node_dict` so there aren't two
sources of truth, but this needs some thought, since the `_cmp_*` methods exist for
speed. We should benchmark whether it's really worth having two types of hashing now
that we use `json` instead of `yaml` for spec hashing.
- [x] Add `package_hash` to `Spec._cmp_node`
- [x] Add `package_hash` to `spack.solve.asp.spec_clauses` so that the `package_hash`
will show up in `spack diff`.
- [x] Add `package_hash` to the `process_hash` (which doesn't affect abstract specs
but will make concrete specs correct)
- [x] Make `_cmp_iter` report the dag_hash so that no two specs with different
process hashes will be considered equal.
53a7b49 created a reference error which broke `.libs` (and
`find_libraries`) for many packages. This fixes the reference
error and improves the testing for `find_libraries` by actually
checking the extension types of libraries that are retrieved by
the function.
* catch json schema errors and reraise as property of SpackError
* no need to catch subclass of given error
* Builtin json library for Python 2 uses more generic type
* Correct instantiation of SpackError (requires a string rather than an exception)
* Use exception chaining (where possible)
Add a post-install step which runs (only) on Windows to modify an
install prefix, adding symlinks to all dependency libraries.
Windows does not have the same concept of RPATHs as Linux, but when
resolving symbols will check the local directory for dependency
libraries; by placing a symlink to each dependency library in the
directory with the library that needs it, the package can then
use all Spack-built dependencies.
Note:
* This collects dependency libraries based on Package.rpath, which
includes only direct link dependencies
* There is no examination of libraries to check what dependencies
they require, so all libraries of dependencies are symlinked
into any directory of the package which contains libraries
* ci: restore coverage computation
* Mark "test_foreground_background" as xfail
* Mark "test_foreground_background_output" as xfail
* Make number of processes explicit, remove verbosity on linux
* Run coverage on just 3 Python jobs for linux
* Run coverage on just 3 Python jobs for linux
* Run coverage on just 2 Python jobs for linux
* Add back verbose, since before we didn't encounter the xdist internal error
* Reduce the workers to 2
* Try to use command line
* Fix a version cmp bug in asp.py
* Fix submodule bug for git refs
* Add branch in logic for submodules
* Fix git version comparisons
main does not satisfy git.foo=main
git.foo=main does satisfy main
* Add two no-op jobs named "all-prechecks" and "all"
These are a suggestion from @tgamblin, they are stable named markers we
can use from gitlab and possibly for required checks to make CI more
resilient to refactors changing the names of specific checks.
* Enable parallel testing using xdist for unit testing in CI
* Normalize tmp paths to deal with macos
* add -u flag compatibility to spack python
As of now, it is accepted and ignored. The usage with xdist, where it
is invoked specifically by `python -u spack python` which is then passed
`-u` by xdist is the entire reason for doing this. It should never be
used without explicitly passing -u to the executing python interpreter.
* use spack python in xdist to support python 2
When running on python2, spack has many import cycles unless started
through main. To allow that, this uses `spack python` as the
interpreter, leveraging the `-u` support so xdist doesn't error out when
it unconditionally requests unbuffered binary IO.
* Use shutil.move to account for tmpdir being in a separate filesystem sometimes
This patchset refactors our GitHub actions into a single top-level ci workflow that
invokes a series of reusable actions. The main goal of this is to be able to easily
control which tests run and in what order based on the success or failure of top-level
prechecks. Our previous workflows ran in three sets:
* nix tests: style and verification first, then linux and macos tests if successful
* windows tests: style and verification first, then windows tests if successful
* bootstrap tests
As a result, the bootstrap tests ran even if the style failed, and style and verification
had to run on two different platforms despite running identical checks. I'm relatively
sure that's because of the limitation on dependencies between steps in the jobs.
Reusable workflows allow us to run the style, verification and now audit checks once,
then depending on the results, and the files changed, run the appropriate nix, windows
and bootstrap tests. While it saves only a few minutes by itself, this makes it easier to
refactor checks to subset tests without having to replicate tests or other workflow
components in the future.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Move the copying of the buildcache to a root job that runs after all the child
pipelines have finished, so that the operation can be coordinated across all
child pipelines to remove the possibility of race conditions during potentially
simultaneous copies. This lets us ensure the .spec.json.sig and .spack files
for any spec in the root mirror always come from the same child pipeline
mirror (though which pipeline is arbitrary). It also allows us to avoid copying
of duplicates, which we now do.
If you have an environment like
```
$ cat spack.yaml
spack:
specs: [openmpi@4.1.0+cuda]
```
this PR provides a new command `spack change` that you can use to adjust environment specs from the command line:
```
$ spack change openmpi~cuda
$ cat spack.yaml
spack:
specs: [openmpi@4.1.0~cuda]
```
in other words, this allows you to tweak the details of environment specs from the command line.
Notes:
* This is only allowed for environments that do not define matrices
* This is possible but not anticipated to be needed immediately
* If this were done, it should probably only be done for "named"/not-anonymous specs (i.e. we can change `openmpi+cuda` but not spec like `+cuda` or `@4.0.1~cuda`)
fixes #31484
Before this change if anything was matching an external
condition, it was considered "external" and thus something
to be "built".
This was happening in particular to external packages
that were re-read from the DB, which then couldn't be
reused, causing the problems shown in #31484.
This PR fixes the issue by excluding specs with a
"hash" from being considered "external"
* Test that users have a way to select a virtual
This ought to be solved by extending the "require"
attribute to virtual packages, so that one can:
```yaml
mpi:
require: 'multi-provider-mpi'
```
* Prevent conflicts to be enforced on specs that can be reused.
* Rename the "external_only" fact to "buildable_false", to better reflect its origin
* Preliminary support for include URLs in spack.yaml (environment) files
This commit adds support in environments for external configuration files obtained from a URL, with a preference for grabbing raw text from GitHub and GitLab for efficient downloads of the relevant files. The URL can also be a link to a directory that contains multiple configuration files.
Remote configuration files are retrieved and cached for the environment. Configuration files with the same name will not be overwritten once cached.
Extend the semantics of package requirements to
allow using them also under a virtual package
attribute in packages.yaml
These requirements are enforced whenever that
virtual spec is present in the DAG.
Allow users to express default requirements in packages.yaml.
These requirements are overridden if more specific requirements
are present for a given package.
This support requires adding the '--tests' option to 'spack ci rebuild'.
Packages whose stand-alone tests are broken (in the CI environment) can
be configured in gitlab-ci to be skipped by adding them to
broken-tests-packages.
Highlights include:
- Restructured 'spack ci' help to provide better subcommand summaries;
- Ensured only one InstallError (i.e., installer's) rather than allowing
build_environment to have its own; and
- Refactored CI and CDash reporting to keep CDash-related properties and
behavior in a separate class.
This allows stand-alone tests from `spack ci` to run when the `--tests`
option is used. With `--tests`, stand-alone tests are run **after** a
**successful** (re)build of the package. Test results are collected
and report(able) using CDash.
This PR adds the following features:
- Adds `-t` and `--tests` to `spack ci rebuild` to run stand-alone tests;
- Adds `--fail-fast` to stop stand-alone tests after the first failure;
- Ensures a *single* `InstallError` across packages
(i.e., removes second class from build environment);
- Captures skipping tests for externals and uninstalled packages
(for CDash reporting);
- Copies test logs and outputs to the CI artifacts directory to facilitate
debugging;
- Parses stand-alone test results to report outputs from each `run_test` as
separate test parts (CDash reporting);
- Logs a test completion message to allow capture of timing of the last
`run_test` part;
- Adds the runner description to the CDash site to better distinguish entries
in CDash tables;
- Adds `gitlab-ci` `broken-tests-packages` to CI configuration to skip
stand-alone testing for packages with known issues;
- Changes `spack ci --help` so description of each subcommand is a single line;
- Changes `spack ci <subcommand> --help` to provide the full description of
each command (versus no description); and
- Ensures `junit` test log file ends in an `.xml` extension (versus default where
it does not).
Tasks:
- [x] Include the equivalent of the architecture information, or at least the host target, in the CDash output
- [x] Upload stand-alone test results files as `test` artifacts
- [x] Confirm tests are run in GitLab
- [x] Ensure CDash results are uploaded as artifacts
- [x] Resolve issues with CDash build and test results appearing on the same row of the table
- [x] Add unit tests as needed
- [x] Investigate why some (dependency) packages don't have test results (e.g., those from other pipelines)
- [x] Ensure proper parsing and reporting of skipped tests (as `not run`) after the #28701 merge
- [x] Restore the proper CDash URL and/or mirror once out-of-band testing is completed
* modified list.py and added functionality for --tag
* Removed long and very long, shifted rest of code above return statement
* removed results variable
* added import statement at top
* added the line accidentally deleted
* added line accidentally deleted
* changed p.name to p, added line inside if statement
* line order switched
* [@spackbot] updating style on behalf of sparkyniner
* ran update completion command
* add tests
* Update lib/spack/spack/test/cmd/list.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [@spackbot] updating style on behalf of sparkyniner
* changed argument to mock_packages and moved code under filter by tag
* removed bad rebase code and added additional test
* [@spackbot] updating style on behalf of sparkyniner
* added line removed earlier
* added line removed earlier
* replaced function
* added more recommended changes
Co-authored-by: sairaj <sairaj@sairajs-MacBook-Pro.local>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Assertions without messages, if/when hit, present users with a blank error message.
This PR adds error messages to all assertions in asp.py even
if it seems unlikely they will ever be needed.
The argument is very likely a typo, and was meant to
be given to the fixture decorator. Since the value
being passed is the default, let's just remove it.
Ensure that build tools with module-level commands in spack use
the version built as part of their build graph if one exists.
This is now also required for mesa, scons, cmake, and ctest; out-of-graph
versions of these tools in PATH will not be found unless added as an
external.
This bug appeared because a new version of rocprim needs cmake 3.16.
While I have 3.14 in my PATH, I had added an external for cmake 3.20 to
the DAG, but 3.14 was still used to configure rocprim, causing it to
fail. As far as I can tell, all the build tools added in
build_environment.py had this problem, despite the fact that they should
have been resolving these tools by name with a path search and finding
the one in the DAG that way. I'm still investigating why the path
searching and Executable logic didn't do it, but this makes three of the
build systems much more explicit, and leaves only gmake and ninja as
dependencies from out in the system, while ensuring the version in the
DAG is used if there is one.
The additional sqlite version is to perturb the hash of python to
work around a relocation bug which will be fixed in a subsequent
PR.
* filesystem: use lstat in recursive mtime
When a `develop` path contains a dead symlink, the `os.stat` in the recursive `mtime` determination trips up over it.
Closes #32165.
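A minimal sketch of the idea (helper name hypothetical): `os.lstat` stats the link itself, so a dangling symlink contributes its own mtime instead of raising on its missing target.
```python
import os

def newest_mtime(path):
    # Walk a develop path and return the newest modification time,
    # using lstat so dead symlinks are stat'ed as links rather than
    # tripping over their missing targets.
    newest = os.lstat(path).st_mtime
    for root, dirs, files in os.walk(path):
        for name in dirs + files:
            newest = max(newest, os.lstat(os.path.join(root, name)).st_mtime)
    return newest
```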
`requirement_policy/3` is generated and may not be in Spack's inputs to Clingo.
Currently this is causing warnings like:
```
$ spack spec zlib
/global/u2/t/tgamblin/src/spack/lib/spack/spack/solver/concretize.lp:510:3-43: info: atom does not occur in any rule head:
requirement_policy(Package,X,"one_of")
/global/u2/t/tgamblin/src/spack/lib/spack/spack/solver/concretize.lp:517:3-43: info: atom does not occur in any rule head:
requirement_policy(Package,X,"one_of")
/global/u2/t/tgamblin/src/spack/lib/spack/spack/solver/concretize.lp:523:3-43: info: atom does not occur in any rule head:
requirement_policy(Package,X,"any_of")
/global/u2/t/tgamblin/src/spack/lib/spack/spack/solver/concretize.lp:534:3-43: info: atom does not occur in any rule head:
requirement_policy(Package,X,"any_of")
Input spec
--------------------------------
zlib
Concretized
--------------------------------
zlib@1.2.11%gcc@7.5.0+optimize+pic+shared arch=cray-sles15-haswell
```
- [x] Silence warning with `#defined requirement_policy/3`
Spack doesn't have an easy way to say something like "If I build
package X, then I *need* version Y":
* If you specify something on the command line, then you ensure
that the constraints are applied, but the package is always built
* Likewise if you `spack add X...`` to your environment, the
constraints are guaranteed to hold, but the environment always
builds the package
* You can add preferences to packages.yaml, but these are not
guaranteed to hold (Spack can choose other settings)
This commit adds a 'require' subsection to packages.yaml: the
specs added there are guaranteed to hold. The commit includes
documentation for the feature.
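For example, a sketch of the new subsection (package and spec chosen for illustration):
```yaml
packages:
  openmpi:
    require: '@4.1 +cuda'   # if openmpi appears in the DAG, this must hold
```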
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
All PRs are failing the docs build on account of an error with
pygments. These errors coincide with a new release of pygments
(2.13.0) and restricting to < 2.13 allows the doc tests to pass,
so this commit enforces that constraint for the docs build.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
A few attributes in packages are meant to be reserved for
Spack's internal use. This audit checks packages to ensure none
of these attributes are overridden.
- [x] add additional audit check
This PR fixes the performance regression reported in #31985 and a few
other issues found while refactoring the spack mirror create command.
Modifications:
* (Primary) Do not require concretization for
`spack mirror create --all`
* Forbid using --versions-per-spec together with --all
* Fixed a few issues when reading specs from input file (specs were
not concretized, command would fail when trying to mirror
dependencies)
* Fix issue with default directory for spack mirror create not being
canonicalized
* Add more unit tests to poke spack mirror create
* Skip externals also when mirroring environments
* Slightly changed the wording for reporting (it mentioned
"Successfully created" even in the presence of errors)
* Fix issue with colify (was not called properly during error
reporting)
`LD_LIBRARY_PATH` can break system executables (e.g., when an environment is loaded) and isn't necessary thanks to `RPATH`s. Packages that require `LD_LIBRARY_PATH` can set this in `setup_run_environment`.
- [x] Prefix inspections no longer set `LD_LIBRARY_PATH` by default
- [x] Document changes and workarounds for people who want `LD_LIBRARY_PATH`
These changes make many packages build on NixOS, where nearly nothing
comes from /bin or /usr/bin (the only things in "system locations" are
/bin/sh and /usr/bin/env, all the rest is found through PATH).
Many configuration scripts hardcode /usr/bin/file instead of using the
one from PATH. This patches them to use file from PATH.
fixes #31736
Catch errors when concretizing specs and report them as
debug messages. The corresponding spec is skipped.
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Extracted two functions in cmd/install.py
* Extracted a function to perform installation from the active environment
* Rename a few functions, remove args from their arguments
* Rework conditional in install_from_active_environment to reduce nesting in the function
* Extract functions to parse specs from CLI and files
* Extract functions to get user confirmation for overwrite
* Extract functions to install specs inside and outside environments
* Rename a couple of functions
* Fix outdated comment
* Add missing imports
* Split conditional to dedent one level
* Invert check and exit early to dedent one level when requiring user confirmation
The current use of git refs as a version requires a search algorithm to pick the right matching version based on the tags in the git history of the package.
This is less than ideal for the use case where users already know the specific version they want the git ref to be associated with. This PR adds a new version syntax, [package]@[ref]=[version], to allow users to specify exactly which version a given git ref should be treated as.
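A hypothetical usage example (package name and placeholder made up):
```console
# Pin a specific git ref and tell Spack to treat it as version 1.2.3
$ spack install mypkg@<git-ref>=1.2.3
```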
On some systems the shell in login mode wipes important parts of the
environment, such as PATH. This causes the build to fail since it can't
find `spack`.
For better robustness, don't use a login shell.
In a full CI job the final spack install is run in an environment formed by scripts running in this order:
```
export AWS_SECRET=...                   # 1. Load environment from GitLab project variables
source spack/share/spack/setup-env.sh   # 2. Load Spack into the environment (PATH)
spack env activate -V concrete_env      # 3. Activate the concrete environment
source /etc/profile                     # 4. Bash login shell (from -l)
spack install ...
```
Whereas when a user launches their own container with (docker|podman) run -it, they end up running spack install in an environment formed in this order:
```
source /etc/bash.bashrc                 # (not 4). Bash interactive shell (default with TTY)
export AWS_SECRET=...                   # ~1. Manually load environment from GitLab project variables
source spack/share/spack/setup-env.sh   # 2. Load Spack into the environment (PATH)
spack env activate -V concrete_env      # 3. Activate the concrete environment
spack install ...
```
The big problem being that (4) has a completely different position and content (on Leap 15 and possibly other containers).
So in context, this PR removes (4) from the CI job case, leaving us with the simpler:
```
export AWS_SECRET=...                   # 1. Load environment from GitLab project variables
source spack/share/spack/setup-env.sh   # 2. Load Spack into the environment (PATH)
spack env activate -V concrete_env      # 3. Activate the concrete environment
spack install ...
```
* database: don't sort on return from query_local
* ASP-based solver: don't build the hash-lookup dictionary twice
Building this dictionary twice and traversing all the specs
might be time-consuming for large buildcaches.
This test relied on an old version of the `flake8_package` fixture that modified
the spack repository, but it doesn't do that anymore. There are other tests for
`changed_files()` that do a better job of mocking up a git repository with
changes, so we can just delete this one.
`spack style` tests were annoyingly brittle because we could not easily be
specific about which tools to run (we had to use `--no-black`, `--no-isort`,
`--no-flake8`, and `--no-mypy`). We should be able to specify what to run OR
what to skip.
Now you can run, e.g.:
```
spack style --tool black,flake8
```
or:
```
spack style --skip black,isort
```
- [x] Remove `--no-black`, `--no-isort`, `--no-flake8`, and `--no-mypy` args.
- [x] Add `--tool TOOL` argument.
- [x] Add `--skip TOOL` argument.
- [x] Allow either `--tool black --tool flake8` or `--tool black,flake8` syntax.
- [x] remove alignment spaces from templates
- [x] replace single with double quotes
- [x] Makefile template now generates parsable code
(function body is `pass` instead of just a comment)
- [x] template checks now run black to check output
Previously we'd accept any version for bootstrapping black, but we need <= 21.
- [x] modify bootstrapping code to check black version before accepting an
executable from `PATH`.
- [x] add `.git-blame-ignore-revs` to ignore black reformatting
- [x] make `spack blame` respect `.git-blame-ignore-revs`
(even if the user hasn't configured git to do so)
Some of our tests rely on single vs. double quotes, and others rely on specific
line numbers in the source. These needed fixing after the switch to Black.
Black will automatically fix a lot of the exceptions we previously allowed for
directives, so we don't need them in our custom `flake8_formatter` anymore.
- [x] remove `E501` (long line) exceptions for directives from `flake8_formatter`,
as they won't help us now.
- [x] Refine exceptions for long URLs in the `flake8_formatter`.
- [x] Adjust the mock `flake8-package` to exhibit the exceptions we still allow.
- [x] Update style tests for new `flake8-package`.
- [x] Blacken style test.
Many noqa's in the code are no longer necessary now that the column limit is 99
characters. Others can easily be eliminated, and still more can just be made more
specific if they do not have to do with line length.
The only E501's still in the code are in the tests for `spack.util.path` and the tests
for `spack style`.
This adds necessary configuration for flake8 and black to work together.
This also sets the line length to 99, per the data here:
* https://github.com/spack/spack/pull/24718#issuecomment-876933636
Given the data and the spirit of black's 88-character limit, we set the limit to 99
characters for all of Spack, because:
* 99 is one less than 100, a nice round number, and all lines will fit in a
100-character wide terminal (even when the text editor puts a \ at EOL).
* 99 is just past the knee of the file size curve for packages, and it means that packages
remain readable and not significantly longer than they are now.
* It doesn't seem to hurt core -- files in core might change length by a few percent but
seem like they'll be mostly the same as before -- just a bit more roomy.
- [x] set line length to 99
- [x] remove most exceptions from `.flake8` and add the ones black cares about
- [x] add `[tool.black]` to `pyproject.toml`
- [x] make `black` run if available in `spack style --fix`
Co-Authored-By: Tom Scogland <tscogland@llnl.gov>
In #31618 the idea was to determine the file extension heuristically by dropping query params etc. from a URL and then considering it as a file path. That broke for URLs that only have query params, like http://example.com/?patch=x, as it would result in an empty string as the basename. This PR reverts to the old behavior of saving such files as ?patch=x in that case.
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
* llvm: Use variant when clauses for many of the expressed conflicts
* llvm: Remove the shared variant as it wasn't really used
* llvm: Remove unnecessary deps and make explicit the ones that are needed
* llvm: Cleanup patch conditions
* pocl: Update for llvm cleanup
* unit-test: update unparse package hash with the updated llvm package
* llvm: Fix ppc long double patching and add clarifying comments
`self.archive_file` is (among others) a symlink to a tarball. `extension()` on a
symlink will result in no extension. This patch fixes the behavior introduced in
https://github.com/spack/spack/pull/31618.
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
When
1. Spack installs libtool,
2. system libtool is installed too, and
3. system automake is used
Spack passes system automake's `-I <prefix>` flag to itself, even though
it's a default search path. This takes precedence over spack's libtool
prefix dir. This causes the wrong `libtool.m4` file to be used (since
system libtool is in the same prefix as system automake).
And that leads to error messages about incompatible libtool, something
something LT_INIT.
fixes #31627
spack.mirror.get_all_versions now uses the package class
instead of the package object in its implementation.
Ensure spec is concrete before staging for mirrors
Newer versions of botocore (>=1.23.47) support the full IOBase
interface, so the hacks added to supplement the missing attributes are
no longer needed. Conditionally disable the hacks if they appear to be
unnecessary based on the class hierarchy found at runtime.
* Add connection information to buildcache update command
Ensure that the s3 connection made when attempting to update the content
of a buildcache attempts to use the extra connection information
from the mirror creation.
* Add unique help for endpoint URL argument
Fix copy/paste error for endpoint URL help which was the same as
the access token
* Re-work URL checking for S3 mirrors
Due to the fact that nested bucket URLs would never match the string used
for checking that the mirror is the same, switch the check used.
Sort all mirror URLs by length to have the most specific cases first
and see if the desired URL "starts with" the mirror URL.
* Long line style fixes
Add exceptions for long lines and fix other style errors
* Use format() function to rebuild URL
Use the format() function to rebuild the URL instead of crafting a
formatted string out of known values
* Add early exit for URL checking
When a valid mirror is found, break from the loop
For a long time the module configuration has had a few settings that use
`blacklist`/`whitelist` terminology. We've been asked by some of our users to replace
this with more inclusive language. In addition to being non-inclusive, `blacklist` and
`whitelist` are inconsistent with the rest of Spack, which uses `include` and `exclude`
for the same concepts.
- [x] Deprecate `blacklist`, `whitelist`, `blacklist_implicits` and `environment_blacklist`
in favor of `exclude`, `include`, `exclude_implicits` and `exclude_env_vars` in module
configuration, to be removed in Spack v0.20 (see the sketch after this list).
- [x] Print deprecation warnings if any of the deprecated names are in module config.
- [x] Update tests to test old and new names.
- [x] Update docs.
- [x] Update `spack config update` to fix this automatically, and include a note in the error
that you can use this command.
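A sketch of the new spelling (assuming the flat `modules:tcl` layout; specs are illustrative):
```yaml
modules:
  tcl:
    include: [gcc, openmpi]   # previously `whitelist`
    exclude: ['%gcc@4.8']     # previously `blacklist`
```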
Python's built-in tarfile support doesn't address some general
cases of malformed tarfiles that are already handled by the system
'tar' utility; until these can be addressed, use that exclusively.
The goal of this PR is to make clearer where we need a package object in Spack as opposed to a package class.
We currently instantiate a lot of package objects when we could make do with a class. We should use the class
when we only need metadata, and we should only instantiate and use an instance of `PackageBase` at build time.
Modifications:
- [x] Remove the `spack.repo.get` convenience function (which was used in many places, and not really needed)
- [x] Use `spack.repo.path.get_pkg_class` wherever possible
- [x] Try to route most of the need for `spack.repo.path.get` through `Spec.package`
- [x] Introduce a non-data descriptor, that can be used as a decorator, to have "class level properties"
- [x] Refactor unit tests that had to be modified to reduce code duplication
- [x] `Spec.package` and `Repo.get` now require a concrete spec as input
- [x] Remove `RepoPath.all_packages` and `Repo.all_packages`
There's a race condition in `remove()` as the lockfile is removed after
releasing the lock, which is a problem when another process acquires a
write lock during deletion.
Also simplify life a bit in multiprocessing when a file is possibly
removed multiple times: currently this errors on the second deletion,
so the proposed fix is to make remove(...) idempotent so it does not
error when deleting non-existent cache entries.
Don't test for the existence of the lockfile, because Windows/Linux behavior differs.
Oversight in #31433, the non-phony `env` target was missing a file being
created for it, which can cause make to infinitely loop when including
multiple generated makefiles.
When no default editor is installed and no environment variable is set,
which_string would return None and this would be passed to os.execv
resulting in a TypeError. The message presented to the user would be:
Error: execv: path should be string, bytes or os.PathLike,
not NoneType
This change checks that which_string has returned successfully before
attempting to execute the result, resulting in a new error message:
Error: No text editor found! Please set the VISUAL and/or EDITOR
environment variable(s) to your preferred text editor.
It's not strictly necessary, but I've also changed try_exec to catch
all errors rather than just OSErrors. This would have provided slightly
more context for the original error message.
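A simplified sketch of the fix (helper name and editor lookup hypothetical; the real code resolves editors via `which_string`):
```python
import os
import shutil

def open_in_editor(filename):
    # Resolve the editor up front and fail with a clear message instead
    # of handing None to os.execv.
    name = os.environ.get("VISUAL") or os.environ.get("EDITOR")
    exe = shutil.which(name) if name else None
    if exe is None:
        raise EnvironmentError(
            "No text editor found! Please set the VISUAL and/or EDITOR "
            "environment variable(s) to your preferred text editor."
        )
    os.execv(exe, [exe, filename])
```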
There were two choices: 1) remove '-p' from '-a' or 2) allow monkeypatching
the cleaning of the python cache since clean's test_function_calls isn't
supposed to be actually cleaning anything.
This commit supports the latter and adds a test case for `-p`.
Release branches and tags run protected pipelines, and we noticed
that those pipelines were generating all jobs in the stack, even
when the mirror contained all the built specs and an up to date
index. The problem was caused because the override mirror was
not present in Spack's mirror configuration at the time when the
binary_distribution.update() method was called. This fixes that
by always adding the mirror override, if present, to the list of
configured mirrors.
* remove unhelpful comment
* Filter compiler duplicates while reading manifest
* more-specific version matching edited to use module-specific version (to avoid an issue where a user might add a compiler with the same version to the initial test configuration)
* PythonPackage: add default libs/headers attributes
* Style fix
* libs and headers should be properties
* Check both platlib and include
* Fix variable reference
Building on #24639, this allows versions to be prefixed by `git.`. If a version begins with `git.`, it is treated as a git ref, and handled as git commits are starting in the referenced PR.
An exception is made for versions that are `git.develop`, `git.main`, `git.master`, `git.head`, or `git.trunk`. Those are assumed to be greater than all other versions, just as their unprefixed counterparts are in other contexts.
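Hypothetical usage (package and branch names made up):
```console
$ spack install mypkg@git.feature-branch   # an ordinary branch treated as a git ref
$ spack install mypkg@git.main             # assumed newer than all released versions
```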
This commit adds some changes which improve use of Spack-installed
oneAPI packages with Spack-generated modules, but not within Spack
(e.g. if you install some of these packages with Spack, then load
their associated modules to build other packages outside of Spack).
The majority of the PR diff is refactoring. The functional changes
are:
* intel-oneapi-mkl:
* setup_run_environment: update Intel compiler flags to RPATH the
mkl libs
* intel-oneapi-compilers: update the compiler configuration to RPATH
libraries installed by this package (note that Spack already handled
this when installing dependent packages with Spack, but this is
specifically to use the intel-oneapi-compilers package outside
of Spack). Specifically:
* inject_rpaths: this modifies the binaries installed as part of
the intel-oneapi-compilers package to RPATH libraries provided
by this package
* extend_config_flags: this instructs the compiler executables
provided by this package to RPATH those same libraries
Refactoring includes:
* intel-oneapi-compilers: in addition to the functional changes,
a large portion of this diff is reorganizing the creation of
resources/archives downloaded for the install
* The base oneAPI package renamed component_path, so several packages
changed as a result:
* intel-oneapi-dpl
* intel-oneapi-dnn
* extrae
* intel-oneapi-mpi:
* Perform file filtering in one pass rather than two
Allow `spack external find` (with no extra args) to proceed if the manifest file exists but
without sufficient permissions; in that case, print a warning. Also add a test for that behavior.
TODOs:
- [x] continue past any exception raised during manifest parsing as part of `spack external find`,
except for CTRL-C (and other errors that indicate immediate program termination)
- [x] Semi-unrelated but came up when discussing this with the user who reported this issue to
me: the manifest parser now accepts older schemas
See: https://github.com/spack/spack/issues/31191
fixes #30997
Instead of giving a penalty of 30 to all nodes when preferences
are not package specific, give a penalty of 100 to all targets
of a node where we have package specific preferences, if the target
is not explicitly preferred.
* Fixed a bug in the 'external find --all' command where the call failed
to find packages by both executable and library. The bug was that the
call `path.all_packages()` incorrectly turned the variable
`packages_to_check` into a generator rather than keeping it a list.
Thus the second call to `detection.by_library` had no work to do.
* Fixed the help message for the find external and compiler commands as
well as others that used the `scopes_metavar` field to define where
the results should be stored in configuration space. Specifically,
the fact that configuration could be added to the environment was not
mentioned in the help message.
* add test to verify fix works
* fix spec cflags/variants parsing test (breaking change)
* fix `spack spec` arg quoting issue
* add error report for deprecated cflags coalescing
* use .group(n) vs subscript regex group extraction for 3.5 compat
* add random test for untested functionality to pass codecov
* fix new test failure since rebase
Fix a bug introduced in #21720. `spack_json.dump()` calls `_strify()` on dictionaries to
convert `unicode` to `str`, but it constructs `dict` objects instead of
`collections.OrderedDict` objects, so in Python 2 (or earlier versions of 3) it can
scramble dictionary order.
This can cause hashes to differ between Python 2 and Python 3, or between Python 3.7
and earlier Python 3's.
- [x] use `OrderedDict` in `_strify` (see the sketch below)
- [x] add a regression test
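A simplified sketch of the ordering fix (the Python 2 unicode-to-str coercion is elided):
```python
from collections import OrderedDict

def _strify(value):
    # Rebuild mappings as OrderedDict so key order survives on Pythons
    # where plain dicts are unordered, keeping dumped JSON (and thus
    # hashes derived from it) stable across versions.
    if isinstance(value, dict):
        return OrderedDict((_strify(k), _strify(v)) for k, v in value.items())
    if isinstance(value, (list, tuple)):
        return [_strify(v) for v in value]
    return value
```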
The "submodules" argument of the "version" directive can now accept
a callable that returns a list of submodules, in addition to the usual
Boolean values
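A sketch of the new callable form (package, variant, and submodule names are hypothetical):
```python
from spack.package import *

def pick_submodules(package):
    # Return the list of submodules to fetch for this spec.
    if package.spec.satisfies("+extras"):
        return ["extras", "docs"]
    return []

class MyPackage(Package):
    git = "https://example.com/mypkg.git"
    version("1.0.0", tag="v1.0.0", submodules=pick_submodules)
```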
* bootstrap: account for disabled sources
Fix a bug introduced in #30192, which effectively skips
any prescription on disabled bootstrapping sources.
* Add unit test to avoid regression
Fixes#31021
With #25185, we no longer default to using tar when we can't
determine the extension type, opting to fail instead.
This broke fetching for the pcre package, where we couldn't determine
the extension. To determine the extension, we were attempting to
extract it from the destination filename; however, this file name
may omit details of the origin URL that are required to determine the
extension, so instead we examine the URL directly.
This also updates the decompressor_for method not to set ext=None
by default: it must now always be set by the caller.
Most package installations include compressed source files. This
adds support for common archive types on Windows:
* Add support for using system 7zip functionality to decompress .Z
files when available (and on Windows, use 7zip for .xz archives)
* Default to using built-in Python support for tar/bz2 decompression
(note that Python tar documentation mentions preservation of file
permissions)
* Add tests for decompression support
* Extract logic for handling exploding archives (i.e. compressed
archives that expand to more than one base file) into an
exploding_archive_catch context manager in the filesystem module
Spack's staging logic constructs a file path based on a URL. The URL
may contain characters which are not allowed in valid file paths on
the system (e.g. Windows prohibits ':' and '?' among others). This
commit adds a function to remove such offending characters (but
otherwise preserves the URL string when constructing a file path).
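A minimal sketch of such a function (name and character set illustrative):
```python
import re

def polite_filename(filename):
    # Drop characters that are invalid in file paths on some systems
    # (e.g. Windows forbids ':' and '?'), otherwise preserving the
    # URL-derived string.
    return re.sub(r'[<>:"|?*]', "", filename)
```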
Updates to improve Spack-generated modules for Intel oneAPI compilers:
* intel-oneapi-compilers set CC etc.
* Add a new package intel-oneapi-compilers-classic which can be used to
generate a module which sets CC etc. to older compilers (e.g. icc)
* lmod module logic now updated to treat the intel-oneapi-compilers*
packages as compilers
Explicitly import package utilities in all packages, and corresponding fallout.
This includes:
* rename `spack.package` to `spack.package_base`
* rename `spack.pkgkit` to `spack.package`
* update all packages in builtin, builtin_mock and tutorials to include `from spack.package import *`
* update spack style
* ensure packages include the import
* automatically add the new import and remove any/all imports of `spack` and `spack.pkgkit`
from packages when using `--fix`
* add support for type-checking packages with mypy when SPACK_MYPY_CHECK_PACKAGES
is set in the environment
* fix all type checking errors in packages in spack upstream
* update spack create to include the new imports
* update spack repo to inject the new import, injection persists to allow for a deprecation period
Original message below:
As requested @adamjstewart, update all packages to use pkgkit. I ended up using isort to do this,
so repro is easy:
```console
$ isort -a 'from spack.pkgkit import *' --rm 'spack' ./var/spack/repos/builtin/packages/*/package.py
$ spack style --fix
```
There were several line spacing fixups caused either by space manipulation in isort or by packages
that haven't been touched since we added requirements, but there are no functional changes in here.
* [x] add config to isort to make sure this is maintained going forward
Preferred targets are currently the only minimization criteria for Spack for which we allow
negative values. That means Spack may be incentivized to add nodes to the DAG if they
match the preferred target.
This PR re-norms the minimization criteria so that preferred targets are weighted from 0,
and default target weights are offset by the number of preferred targets per-package to
calculate node_target_weight.
Also fixes a bug in the test for preferred targets that was making the test easier to pass
than it should be.
This PR supports the creation of securely signed binaries built from spack
develop as well as release branches and tags. Specifically:
- remove internal pr mirror url generation logic in favor of buildcache destination
on command line
- with a single mirror url specified in the spack.yaml, this makes it clearer where
binaries from various pipelines are pushed
- designate some tags as reserved: ['public', 'protected', 'notary']
- these tags are stripped from all jobs by default and provisioned internally
based on pipeline type
- update gitlab ci yaml to include pipelines on more protected branches than just
develop (so include releases and tags)
- binaries from all protected pipelines are pushed into mirrors including the
branch name so releases, tags, and develop binaries are kept separate
- update rebuild jobs running on protected pipelines to run on special runners
provisioned with an intermediate signing key
- protected rebuild jobs no longer use "SPACK_SIGNING_KEY" env var to
obtain signing key (in fact, final signing key is nowhere available to rebuild jobs)
- these intermediate signatures are verified at the end of each pipeline by a new
signing job to ensure binaries were produced by a protected pipeline
- optionally schedule a signing/notary job at the end of the pipeline to sign all
packages in the mirror
- add signing-job-attributes to gitlab-ci section of spack environment to allow
configuration
- signing job runs on special runner (separate from protected rebuild runners)
provisioned with public intermediate key and secret signing key
Old concrete specs were slipping through in `_assign_hash`, and `package_hash` was
attempting to recompute a package hash when we could not know the package at the
time of concretization.
Part of this was that the logic for `_assign_hash` was hard to understand -- it was
called twice from `_finalize_concretization` and had special cases for both args it
was called with. It's much easier to understand the logic here if we just inline it.
- [x] Get rid of `_assign_hash` and just integrate it with `_finalize_concretization`
- [x] Don't call `_package_hash` at all for already-concrete specs.
- [x] Add regression test.
This PR introduces a new build cache layout and package format, with improvements for
both efficiency and security.
## Old Format
Currently a binary package consists of a `spec.json` file at the root and a `.spack` file,
which is a `tar` archive containing a copy of the `spec.json` file, possibly a detached
signature (`.asc`) file, and a tar-gzip compressed archive containing the install tree.
```
build_cache/
# metadata (for indexing)
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json
<arch>/
<compiler>/
<name>-<ver>/
# tar archive
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spack
# tar archive contents:
# metadata (contains sha256 of internal .tar.gz)
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json
# signature
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json.asc
# tar.gz-compressed prefix
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.tar.gz
```
After this change, the nesting has been removed so that the `.spack` file is the
compressed archive of the install tree. Now signed binary packages will take the
form of a clearsigned `spec.json` file (a `spec.json.sig`) at the root, while unsigned
binary packages will contain a `spec.json` at the root.
## New Format
```
build_cache/
# metadata (for indexing, contains sha256 of .spack file)
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json
# clearsigned spec.json metadata
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json.sig
<arch>/
<compiler>/
<name>-<ver>/
# tar.gz-compressed prefix (may support more compression formats later)
<arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spack
```
## Benefits
The major benefit of this change is that the signatures on binary packages can be
verified without:
1. Having to download the tarball, or
2. having to extract an unknown tarball.
(1) is an improvement in efficiency; (2) is a security fix: we now ensure that we trust the
binary before we try to run it through `tar`, which avoids potential attacks.
## Backward compatibility
Also after this change, spack should still be able to handle the previous buildcache
structure and binary mirrors with mixed layouts.
This PR builds on #28392 by adding a convenience command to create a local mirror that can be used to bootstrap Spack. This is to overcome the inconvenience in setting up this mirror manually, which has been reported when trying to setup Spack on air-gapped systems.
Using this PR the user can create a bootstrapping mirror, on a machine with internet access, by:
```console
% spack bootstrap mirror --binary-packages /opt/bootstrap
==> Adding "clingo-bootstrap@spack+python %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "gnupg@2.3: %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "patchelf@0.13.1:0.13.99 %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding binary packages from "https://github.com/alalazo/spack-bootstrap-mirrors/releases/download/v0.1-rc.2/bootstrap-buildcache.tar.gz" to the mirror at /opt/bootstrap/local-mirror
To register the mirror on the platform where it's supposed to be used run the following command(s):
% spack bootstrap add --trust local-sources /opt/bootstrap/metadata/sources
% spack bootstrap add --trust local-binaries /opt/bootstrap/metadata/binaries
```
The mirror has to be moved over to the air-gapped system, and registered using the commands shown at prompt. The command has options to:
1. Add pre-built binaries downloaded from GitHub (default is not to add them)
2. Add development dependencies for Spack (currently the Python packages needed to use spack style)
* bootstrap: refactor bootstrap.yaml to move sources metadata out
* bootstrap: allow adding/removing custom bootstrapping sources
This operation can be performed from the command line since
new subcommands have been added to `spack bootstrap`
* Add --trust argument to spack bootstrap add
* Add a command to generate a local mirror for bootstrapping
* Add a unit test for mirror creation
Currently, environments can either be concretized fully together or fully separately. This works well for users who create environments for interoperable software and can use `concretizer:unify:true`. It does not allow environments with conflicting software to be concretized for maximal interoperability.
The primary use-case for this is facilities providing system software. Facilities provide multiple MPI implementations, but all software built against a given MPI ought to be interoperable.
This PR adds a concretization option `concretizer:unify:when_possible`. When this option is used, Spack will concretize specs in the environment separately, but will optimize for minimal differences in overlapping packages.
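For example, in a spack.yaml (spec names illustrative):
```yaml
spack:
  specs: [hdf5 ^mvapich2, hdf5 ^openmpi]
  concretizer:
    unify: when_possible
```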
* Add a level of indirection to root specs
This commit introduces the "literal" atom, which comes with
a few different "arities". The unary "literal" contains an
integer that is the ID of a spec literal. Other "literals"
contain information on the requests made by that literal ID. For
instance, zlib@1.2.11 generates the following facts:
```
literal(0,"root","zlib").
literal(0,"node","zlib").
literal(0,"node_version_satisfies","zlib","1.2.11").
```
This should help with solving large environments "together
where possible", since later literals can now be solved
together in batches.
* Add a mechanism to relax the number of literals being solved
* Modify spack solve to display the new criteria
Since the new criterion is above all the build criteria,
we need to modify the way we display the output.
Originally done by Greg in #27964 and cherry-picked
to this branch by the co-author of the commit.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Inject reusable specs into the solve
Instead of coupling the PyclingoDriver() object with
spack.config, inject the concrete specs that can be
reused.
A method level function takes care of reading from
the store and the buildcache.
* spack solve: show output of multi-rounds
* add tests for best-effort co-concretization
* Enforce having at least a literal being solved
Co-authored-by: Greg Becker <becker33@llnl.gov>
Previously the regex was only checking for presence of quotes as a beginning
or end character and not a matching set. This erroneously identified the
following *single* argument as being quoted:
```
source bashenvfile &> /dev/null && python3 -c "import os, json; print(json.dumps(dict(os.environ)))"
```
Add a config option to strip `-Werror*` or `-Werror=*` from compile lines everywhere.
```yaml
config:
keep_werror: false
```
By default, we strip all `-Werror` arguments out of compile lines, to avoid unwanted
failures when upgrading compilers. You can re-enable `-Werror` in your builds if
you really want to, with either:
```yaml
config:
keep_werror: all
```
or to keep *just* specific `-Werror=XXX` args:
```yaml
config:
keep_werror: specific
```
This should make swapping in newer versions of compilers much smoother when
maintainers have decided to enable `-Werror` by default.
Parse error information is kept for specs, but it doesn't seem like we propagate it
to the user when we encounter an error. This fixes that.
e.g., for this error in a package:
```python
depends_on("python@:3.8", when="0.900:")
```
Before, with no context and no clue that it's even from a particular spec:
```
==> Error: Unexpected token: ':'
```
With this PR:
```
==> Error: Unexpected token: ':'
Encountered when parsing spec:
0.900:
^
```
* Introduce concretizer:unify option to replace spack:concretization
* Deprecate concretization
* Make spack:concretization overrule concretize:unify for now
* Add environment update logic to move from spack:concretization to spack:concretizer:unify
* Migrate spack:concretization to spack:concretizer:unify in all locations
* For new environments make concretizer:unify explicit, so that defaults can be changed in 0.19
The oneapi and dpcpp compilers are essentially the same except for which
binary is used for CXX. Spack will detect them as a "mixed toolchain" and
not inject compiler optimization flags. This will be needed once
archspec has entries for the oneapi and dpcpp compilers. This PR detects
when dpcpp and oneapi are in the toolchains list and explicitly sets
`is_mixed_toolchain` to `False`.
Error messages for the clingo concretizer have proven challenging. The current messages are incredibly vague and often don't help users at all. Unsat cores in clingo are not guaranteed to be minimal, and lead to cores that are either not useful or need to be post-processed for hours to reach a minimal core.
Following up on an idea from a Slack conversation with kwryankrattiger, this PR takes a new approach. We eliminate most integrity constraints and minima/maxima on choice rules in clingo, and instead force invalid states to imply an error predicate. The error predicate can include context on the cause of the error (Package, Version, etc). These error predicates are then heavily optimized against, to ensure that we do not include error facts in the solution when a solution with no error facts could be generated. When post-processing the clingo solution to construct specs, any error facts cause the program to raise an exception. This leads to much more legible error messages. Each error predicate includes a priority and an error message; the message is formatted with the predicate's remaining arguments. The priority is used to ensure that when clingo has a choice of which rules to violate, it chooses the one which will be most informative to the user.
Performance:
"fresh" concretizations appear to suffer a ~20% performance penalty under this branch, while "reuse" concretizations see a speedup of around 33%.
Possible optimizations if users still see unhelpful messages:
There are currently 3 levels of priority of the error messages. Additional priorities are possible, and can allow us finer granularity to ensure more informative error messages are provided in lieu of less informative ones.
Future work:
Improve tests to ensure that every possible rule implying an error message is exercised
A non-existent upstream should not be fatal: it could simply mean it is
not deployed yet. In the meantime, it should not block the user from
rebuilding anything they need.
A warning is still emitted, to let the user decide if this is ok or not.
Fixes missing chgrp on symlinks in package installations, and errors on
symlinks referencing non-existent or non-writable locations.
Note: `os.chown(..., follow_symlinks=False)` is Python 3 only, but
`os.lchown` exists in both versions.
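A one-line sketch of the portable call (wrapper name hypothetical):
```python
import os

def chgrp_link(path, gid):
    # lchown changes ownership of the symlink itself and never follows
    # it, so dangling symlink targets no longer raise; unlike
    # os.chown(..., follow_symlinks=False), it also exists on Python 2.
    os.lchown(path, -1, gid)  # uid -1 leaves the owner unchanged
```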
* Change license dir from hard-coded to a configurable item
* Change config item to be a string not an array
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Trying to compute `dag_hash()` or `package_hash()` on a concrete spec that doesn't have
a `_package_hash` attribute would attempt to recompute the package hash.
This most commonly manifests as a failed lookup of a namespace if you attempt to uninstall
or compute the hashes of packages in external repositories that aren't registered, e.g.:
```console
> spack spec --json c/htno
==> Error: Unknown namespace: myrepo
```
While it wouldn't change the already-assigned `dag_hash` value, this behavior is
incorrect, since the package file for a previously concrete spec:
1. might have changed since concretization,
2. might not exist anymore, or
3. might just not be findable by Spack.
This PR ensures that the package hash can't be computed on older concrete specs. Instead
of calling `package_hash()` from within `to_node_dict()`, we now check for the `_package_hash`
attribute and only add the package_hash to the spec record if it's there.
This PR also handles the tricky semantics of computing `package_hash()` at concretization
time. We have to compute it *before* marking the spec concrete so that `to_node_dict` can
use it. But this means that the logic for `package_hash()` can't rely on `spec.concrete`,
as it is called *during* concretization. Instead of checking for concreteness, `package_hash()`
now checks `_patches_assigned()` to determine whether it should add them to the package
hash.
- [x] Add an assert to `package_hash()` so it can't be called on specs for which it
would be wrong.
- [x] Add an `_assign_hash()` method to handle tricky semantics of `package_hash`
and `dag_hash`.
- [x] Rework concretization to call `_assign_hash()` before and after marking specs
concrete.
- [x] Rework content hash part of package hash to check for `_patches_assigned()`
instead of `spec.concrete`.
- [x] regression test
Previously we sorted by hash values for `spack graph`, but changing hashes can make the
test brittle and the node order seem nondeterministic to users.
- [x] Sort nodes in `spack graph` by the default edge order, which takes into account
parent and child names as well as dependency types.
- [x] Update ASCII test output for new order.
The dependency check currently checks whether there are only build
dependencies left for a particular package. However, the database also
contains uninstalled packages, which can cause the check to fail.
For instance, with `bison` and `flex` having already been uninstalled,
`m4` will have the following dependents:
```
bison ('build', 'run')--> m4
flex ('build',)--> m4
libnl ('build',)--> m4
```
`bison` and `flex` should be ignored in this case because they are not
installed anymore.
Fixes #30673
#24556 merged in support for Python's .zip file support via ZipFile.
However as per #30200 ZipFile does not preserve file permissions of
the extracted contents. This PR returns to using the `unzip`
executable on non-Windows systems (as was the case before #24556)
and now uses `tar` on Windows to extract .zip files.
We previously had checks in `directory_layout` to check for build-dependency
conflicts when we weren't storing build dependencies. We don't need
those anymore; we can just rely on the DAG hash now that it includes everything
we know about each spec.
- [x] Remove vestigial code for checking installed spec against concrete spec
in `ensure_installed()`
- [x] Remove `SpecHashCollisionError` -- if specs have the same hash now, they're
the same as far as `DirectoryLayout` should be concerned.
- [x] Convert spec comparison to `dag_hash()` comparison when adding extensions.
The database now stores full hashes, so we need to adjust the criteria we use to
determine if something can be uninstalled. Specifically, it's ok to uninstall things that
have remaining build-only dependents.
With the original DAG hash, we did not store build dependencies in the database, but
with the full DAG hash, we do. Previously, we'd never tell the concretizer about build
dependencies of things used by hash, because we never had them. Now, we have to avoid
telling the concretizer about them, or they'll unnecessarily constrain build
dependencies for new concretizations.
- [x] Make database track all dependencies included in the `dag_hash`
- [x] Modify spec_clauses so that build dependency information is optional
and off by default.
- [x] `spack diff` asks `spec_clauses` for build dependencies for completeness
- [x] Modify `concretize.lp` so that reuse optimization doesn't affect fresh
installations.
- [x] Modify concretizer setup so that it does *not* prioritize installed versions
over package versions. We don't need this with reuse, so they're low priority.
- [x] Fix `test_installed_deps` for full hash and new concretizer (does not work
for old concretizer with full hash -- leave this for later if we need it)
- [x] Move `test_installed_deps` mock packages to `builtin.mock` for easier debugging
with `spack -m`.
- [x] Fix `test_reuse_installed_packages_when_package_def_changes` for full hash
- [x] update test to use `build_hash` instead of `dag_hash`, as we're testing for
graph structure, and specifically NOT testing for package changes.
- [x] make hash descriptors callable on specs to simplify syntax for invoking them
- [x] make `Spec.spec_hash()` public
This removes all but one usage of runtime hash. The runtime hash was being used to write
historical lockfiles for tests, but we don't need it for that; we can just save those
lockfiles.
- [x] add legacy lockfiles for v1, v2, v3
- [x] fix bugs with v1 lockfile tests (the dummy lockfile we were writing was not actually
a v1 lockfile because it used the new spec file format).
- [x] remove all but one runtime_hash usage -- that one needs a small rework of the
concretizer to really fix, as it's about separate concretization of build
dependencies.
- [x] Document the history of the lockfile format in `environment/__init__.py`
Some test cases had to be modified in a kludgy way so that abstract specs made
concrete would have versions on them. We shouldn't *need* to do this, as the
only reason we care is because the content hash has to be able to get an archive
for a version.
This modifies the content hash so that it can be called on abstract specs,
including only relevant content.
This does NOT add a partial content hash to the DAG hash, as we do not really
want that -- we don't need in-memory spec hashes to need to load package files.
It just makes `Package.content_hash()` less prickly and tests easier to
understand.
`spack monitor` expects a field called `spec_full_hash`, so we shouldn't change that.
Instead, we can pass a `dag_hash` (which is now the full hash) but not change the field
name.
`hashes_final` was used to indicate when a spec was concrete but possibly lacked
`full_hash` or `build_hash` fields. This was only necessary because older Spacks
didn't generate them, and we want to avoid recomputing them, as we likely do not
have the same package files as existed at concretization time.
Now, we don't need to do that -- there is only the DAG hash and specs are either
concrete and have a `dag_hash`, or not concrete and have no `dag_hash`. There's
no middle ground.
Without some enforcement of spec ordering, python 2 produced
different results in the affected test than did python 3. This
change makes the arbitrary but reproducible decision to sort
the specs by their lockfile key alphabetically.
The full hash appears twice in the spec dict now, replacing just
the value replaces it under "hash" and "full_hash". Only replace
the one that appears after "full_hash".
I'm actually not sure what purpose this test served, so maybe it
could be removed, as it may be testing some distinction between
full and dag hash which no longer exists.
For a long time, Spack has used a coarser hash to identify packages
than it likely should. Packages are identified by `dag_hash()`, which
includes only link and run dependencies. Build dependencies are
stripped before hashing, and we have not included hashes of build
artifacts or the `package.py` files used to build. This means the
DAG hash actually doesn't represent all the things Spack can build,
and it reduces reproducibility.
We did this because, in the early days, users were (rightly) annoyed
when a new version of CMake, autotools, or some other build dependency
would necessitate a rebuild of their entire stack. Coarsening the hash
avoided this issue and enabled a modicum of stability when only reusing
packages by hash match.
Now that we have `--reuse`, we don't need to be so careful. Users can
avoid unnecessary rebuilds much more easily, and we can add more
provenance to the spec without worrying that frequent hash changes
will cause too many rebuilds.
This commit starts the refactor with the following major change:
- [x] Make `Spec.dag_hash()` include build, run, and link
dependencies and the package hash (it is now equivalent to
`full_hash()`).
It also adds a couple of bugfixes for problems discovered during
the switch:
- [x] Don't add a `package_hash()` in `to_node_dict()` unless
the spec is concrete (fixes breaks on abstract specs)
- [x] Don't add source ids to the package hash for packages without
a known fetch strategy (many mock packages are like this)
- [x] Change how `Spec.patches` is memoized. Using
`llnl.util.lang.memoized` on `Spec` objects causes specs to
be stored in a `dict`, which means they need a hash. But,
`dag_hash()` now includes patch `sha256`'s via the package
hash, which can lead to infinite recursion
`spack pkg list` tests were broken by #29593 for cases when your `builtin.mock` repo
still has stale backup files (or, really, stale directories) sitting around. This
happens if you switch branches a lot. In this case, things like this were causing
erroneous packages in the mock listing:
```
var/spack/repos/builtin.mock/packages/
foo/
package.py~
```
- [x] make `list_packages` consider only directories with one-deep `package.py` files (as sketched below).
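A minimal sketch of the check (simplified):
```python
import os

def list_packages(repo_path):
    # Only directories that directly contain a package.py count as
    # packages; stale backups like package.py~ are ignored.
    return sorted(
        name
        for name in os.listdir(repo_path)
        if os.path.isfile(os.path.join(repo_path, name, "package.py"))
    )
```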
Reworking lua to allow easier substitution of the base lua implementation.
Also adding in a maintained version of luajit and re-factoring the entire stack
to use a custom build-system to centralize functionality like environment
variable management and luarocks installation.
The `lua-lang` virtual is now versioned so that a package that requires
Lua 5.1 semantics can get any lua, but one that requires 5.2 will only
get upstream lua.
The luaposix package requires lua-bit32, but only when built with a
lua conforming to version 5.1. This adds the package, and the
dependencies, but exposed a problem with luarocks dependency
detection. Since we're installing each package in its own "tree" and
there's no environment variable to list extra trees, spack now
generates a luarocks config file that lists all the trees of all the
dependencies, and references it by setting `LUAROCKS_CONFIG`
in the build environment of every LuaPackage. This allows luarocks
to find the spack installed dependencies correctly rather than
trying (and failing) to download them.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Tom Scogland <tscogland@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Some of our `git` tests still fail when `init.defaultBranch` is set to something other
than `master`.
- [x] get rid of all hard-coded `master` refs
- [x] Use `'default'` to key tests that use the default branch
When running on Windows, Spack may generate files in the stage/install
prefixes that do not have write permissions, which prevents the
removal of those directories (e.g. when cleaning stages or uninstalling).
There should be a refactoring to avoid this in the first place, but that
is assumed to be longer term, so the temporary fix is to make such files
writable if they are not. This PR:
* Automatically handles these permissions errors when uninstalling
packages from the Spack root (makes them writable)
* Updates similar already-existing logic when removing Spack-managed
stage directories (the error-handling was assuming all errors were
permissions errors and was therefore handling other errors
inappropriately)
Note: these permissions issues only appear on Windows so this logic is
only applied there (permissions are not modified for this purpose on
Linux etc.).
This also adds special handling for a case where calling `isdir`
on an `os.DirEntry` object would fail for improperly-created symlinks
(e.g. on Windows, using `os.symlink` without `target_is_directory=True`).
Note this specific issue only came up when enabling link_tree tests
(specifically `source_merge_visitor_cant_be_cyclical`).
* create function for translating compiler names on specs/compiler entries in manifest
* add tests for translating compiler names on spec/compiler entries
* use higher-level function in test and add comment to prefer testing via higher-level function
* opensuse clingo check should not fail on account of this pr, but I cannot get it to pass by restarting via CI UI
* Force GCC to always provide a C++14 flag
Updated gnu logic so that the c++14 flag for g++ is always propagated.
This fixes issues with build systems that error out if passed an empty
string for a flag.
Engaging in the best kind of software engineering by updating the unit
test to pass with the value it is now passed. This should better match
the expected flag for g++ compiling with the C++14 standard
This ensures that multiple spack instances called from `make` will respect the maximum number of jobs in the POSIX jobserver across packages.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* use the init.defaultBranch name, not master
* make tcl and modules/common independent
Both used to use not just the same directory, but the same *file* for
their outputs. In parallel this can cause problems, but it can also
accidentally allow expected failures to pass if the file is left around
by mistake.
* use a non-global misc_cache in tests
* make pkg tests resilient to gitignore
* make source cache and module directories non-global
`make` solves a lot of headaches that would otherwise have to be implemented in Spack:
1. Parallelism over packages through multiple `spack install` processes
2. Orderly output of parallel package installs thanks to `make --output-sync=recurse` or `make -Orecurse` (works well in GNU Make 4.3; macOS is unfortunately on a 16-year-old 3.x version, but it's one `spack install gmake` away...)
3. Shared jobserver across packages, which means a single `-j` to rule them all, instead of manually finding a balance between `#spack install processes` & `#jobs per package` (See #30302).
This PR adds the `spack env depfile` command that generates a Makefile with DAG hashes as
targets, and DAG hashes of dependencies as prerequisites, and a command
along the lines of `spack install --only=package /hash` to just install
a single package.
It exposes two convenient phony targets: `all`, `fetch-all`. The former installs the environment, the latter just fetches all sources. So one can either use `make all -j16` directly or run `make fetch-all -j16` on a login node and `make all -j16` on a compute node.
Example:
```yaml
spack:
  specs: [perl]
  view: false
```
running
```
$ spack -e . env depfile --make-target-prefix env | tee Makefile
```
generates
```Makefile
SPACK ?= spack
.PHONY: env/all env/fetch-all env/clean
env/all: env/env
env/fetch-all: env/fetch
env/env: env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww
	@touch $@
env/fetch: env/.fetch/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.fetch/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.fetch/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.fetch/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.fetch/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.fetch/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.fetch/73t7ndb5w72hrat5hsax4caox2sgumzu env/.fetch/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.fetch/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.fetch/c4go4gxlcznh5p5nklpjm644epuh3pzc
	@touch $@
env/dirs:
	@mkdir -p env/.fetch env/.install
env/.fetch/%: | env/dirs
	$(info Fetching $(SPEC))
	$(SPACK) -e '/tmp/tmp.7PHPSIRACv' fetch $(SPACK_FETCH_FLAGS) /$(notdir $@) && touch $@
env/.install/%: env/.fetch/%
	$(info Installing $(SPEC))
	+$(SPACK) -e '/tmp/tmp.7PHPSIRACv' install $(SPACK_INSTALL_FLAGS) --only-concrete --only=package --no-add /$(notdir $@) && touch $@
# Set the human-readable spec for each target
env/%/cdqldivylyxocqymwnfzmzc5sx2zwvww: SPEC = perl@5.34.1%gcc@10.3.0+cpanm+shared+threads arch=linux-ubuntu20.04-zen2
env/%/gv5kin2xnn33uxyfte6k4a3bynhmtxze: SPEC = berkeley-db@18.1.40%gcc@10.3.0+cxx~docs+stl patches=b231fcc arch=linux-ubuntu20.04-zen2
env/%/cuymc7e5gupwyu7vza5d4vrbuslk277p: SPEC = bzip2@1.0.8%gcc@10.3.0~debug~pic+shared arch=linux-ubuntu20.04-zen2
env/%/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk: SPEC = diffutils@3.8%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws: SPEC = libiconv@1.16%gcc@10.3.0 libs=shared,static arch=linux-ubuntu20.04-zen2
env/%/yfz2agazed7ohevqvnrmm7jfkmsgwjao: SPEC = gdbm@1.19%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/73t7ndb5w72hrat5hsax4caox2sgumzu: SPEC = readline@8.1%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/trvdyncxzfozxofpm3cwgq4vecpxixzs: SPEC = ncurses@6.2%gcc@10.3.0~symlinks+termlib abi=none arch=linux-ubuntu20.04-zen2
env/%/sbzszb7v557ohyd6c2ekirx2t3ctxfxp: SPEC = pkgconf@1.8.0%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/c4go4gxlcznh5p5nklpjm644epuh3pzc: SPEC = zlib@1.2.12%gcc@10.3.0+optimize+pic+shared patches=0d38234 arch=linux-ubuntu20.04-zen2
# Install dependencies
env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww: env/.install/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.install/c4go4gxlcznh5p5nklpjm644epuh3pzc
env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p: env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk
env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk: env/.install/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao: env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu
env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu: env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs
env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs: env/.install/sbzszb7v557ohyd6c2ekirx2t3ctxfxp
env/clean:
	rm -f -- env/env env/fetch env/.fetch/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.fetch/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.fetch/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.fetch/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.fetch/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.fetch/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.fetch/73t7ndb5w72hrat5hsax4caox2sgumzu env/.fetch/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.fetch/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.fetch/c4go4gxlcznh5p5nklpjm644epuh3pzc env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.install/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.install/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.install/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.install/c4go4gxlcznh5p5nklpjm644epuh3pzc
```
Then with `make -O` you get very nice orderly output when packages are built in parallel:
```console
$ make -Orecurse -j16
spack -e . install --only-concrete --only=package /c4go4gxlcznh5p5nklpjm644epuh3pzc && touch c4go4gxlcznh5p5nklpjm644epuh3pzc
==> Installing zlib-1.2.12-c4go4gxlcznh5p5nklpjm644epuh3pzc
...
Fetch: 0.00s. Build: 0.88s. Total: 0.88s.
[+] /tmp/tmp.b1eTyAOe85/store/linux-ubuntu20.04-zen2/gcc-10.3.0/zlib-1.2.12-c4go4gxlcznh5p5nklpjm644epuh3pzc
spack -e . install --only-concrete --only=package /sbzszb7v557ohyd6c2ekirx2t3ctxfxp && touch sbzszb7v557ohyd6c2ekirx2t3ctxfxp
==> Installing pkgconf-1.8.0-sbzszb7v557ohyd6c2ekirx2t3ctxfxp
...
Fetch: 0.00s. Build: 3.96s. Total: 3.96s.
[+] /tmp/tmp.b1eTyAOe85/store/linux-ubuntu20.04-zen2/gcc-10.3.0/pkgconf-1.8.0-sbzszb7v557ohyd6c2ekirx2t3ctxfxp
```
For Perl, at least for me, using `make -j16` versus `spack -e . install -j16` speeds up the builds from 3m32.623s to 2m22.775s, as some configure scripts run in parallel.
Another nice feature is that you can do Makefile "metaprogramming" and depend on packages built by Spack. This example fetches all sources (in parallel) first, prints a message, and only then builds packages (in parallel).
```Makefile
SPACK ?= spack
.PHONY: env
all: env
spack.lock: spack.yaml
	$(SPACK) -e . concretize -f
env.mk: spack.lock
	$(SPACK) -e . env depfile -o $@ --make-target-prefix spack
fetch: spack/fetch
	@echo Fetched all packages && touch $@
env: fetch spack/env
	@echo This executes after the environment has been installed
clean:
	rm -rf spack/ env.mk spack.lock
ifeq (,$(filter clean,$(MAKECMDGOALS)))
include env.mk
endif
```
Added support for finding the OpenCV package via the find external
command. Included support for identifying variants based on available
shared libraries.
Added support for finding the OpenBLAS package via the find external
command.
Enabled packages to indicate, in their `spack info` output, that they
can be discovered via the find external command.
Updated the OpenCV and OpenBLAS packages to use the extensible search
mechanism for library extensions on multiple OS platforms.
Corrected how find externals works on Darwin for OpenCV and OpenBLAS
to accommodate version numbers placed before the file extension
rather than after it, as on Linux.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This is an amended version of https://github.com/spack/spack/pull/24894 (reverted in https://github.com/spack/spack/pull/29603). https://github.com/spack/spack/pull/24894
broke all invocations of `spack external find` without arguments/options,
because it mandated the presence of a file that most systems do not have.
This allows `spack external find` to proceed if that file is not present and adds tests for this.
- [x] Add a test which confirms that `spack external find` successfully reads a manifest file
if present in the default manifest path
--- Original commit message ---
Adds `spack external read-cray-manifest`, which reads a json file that describes a
set of package DAGs. The parsed results are stored directly in the database. A user
can see these installed specs with `spack find` (like any installed spec). The easiest
way to use them right now as dependencies is to run
`spack spec ... ^/hash-of-external-package`.
Changes include:
* `spack external read-cray-manifest --file <path/to/file>` will add all specs described
in the file to Spack's installation DB and will also install described compilers to the
compilers configuration (the expected format of the file, including examples, is described in this PR)
* Database records now may include an "origin" (the command added in this PR
registers the origin as "external-db"). In the future, it is assumed users may want
to be able to treat installs registered with this command differently (e.g. they may
want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec
is concrete
* I don't think the hashes of installed-and-concrete specs should change and this
was the easiest way to handle that
* also specs that are concrete preserve their `.normal` property when copied
(external specs may mention compilers that are not registered, and without this
change they would fail in `normalize` when calling `validate_or_raise`)
* it may be that this should only be the case if the spec was installed
- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do
something like "uninstall all packages added with `spack read-external-db`")
* This is now possible with `spack uninstall --all --origin=external-db` (this will
remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package
* ASP-based solver: discard unknown packages from reuse
This is an add-on to #28259 that cover for the case of
a single package.py being removed from a repository,
rather than an entire custom repository being removed.
* Add unit test
CTest determines whether to enable tests using the BUILD_TESTING variable.
Projects should use this variable to conditionally enable the compilation of tests.
Spack knows which packages have to run tests and can thus automatically define this variable.
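For illustration, a minimal sketch of the flag a CMake-based package would otherwise have to pass by hand (package name, URL, and checksum are placeholders):
```python
from spack import *


class Example(CMakePackage):
    """Hypothetical CMake-based package whose test suite is guarded by BUILD_TESTING."""

    homepage = "https://example.com"
    url = "https://example.com/example-1.0.tar.gz"

    version("1.0", sha256="0" * 64)  # placeholder checksum

    def cmake_args(self):
        # self.run_tests is True when tests were requested (e.g. via
        # `spack install --test=root`), so BUILD_TESTING tracks whether
        # CTest tests will actually be run.
        return [self.define("BUILD_TESTING", self.run_tests)]
```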
I tried to use --overwrite on nvhpc, but nvhpc's install size is 16GB. It seems
better to do os.rename within the same directory than to move the directory to
`/tmp`.
- [x] install --overwrite: use rename instead of tmpdir
- [x] use tempfile
fixes #28259
This commit discards specs from unknown namespaces from the
ones that can be "reused" during concretization. Previously
Spack would just error out when encountering them.
The parent thread in the process stdout redirection logic on Windows
was closing a file that was being read in a child thread, which led to
error-based termination of the reader thread. This updates the
interaction to avoid the error.
* ASP-based solver: allow configuring target selection
This commit adds a new "concretizer:targets" configuration
section, and two options under it.
- "concretizer:targets:granularity" allows switching from
considering only generic targets to consider all possible
microarchitectures.
- "concretizer:targets:host_compatible" instead controls
whether we can concretize for microarchitectures that
are incompatible with the current host.
* Add documentation
* Add unit-tests
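A sketch of what the new section can look like in config; the option values shown follow the descriptions above but should be treated as illustrative:
```yaml
concretizer:
  targets:
    # consider all known microarchitectures, not only generic families
    granularity: microarchitectures
    # allow targets that the current host cannot execute
    host_compatible: false
```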
* ASP-based solver: always consider version of installed packages
fixes #29201
Explicitly add facts for versions of installed software when
using the --reuse option, so that we can consider versions
that are not declared in package.py
The parser is already committing a crime of querying the database for
specs when it encounters a `/hash`. It's helpful, but unfortunately not
helpful when trying to install a specific spec in an environment by
hash. Therefore, consider the environment first, then the database.
This allows the following:
```console
$ spack -e . concretize
==> Starting concretization
==> Environment concretized in 0.27 seconds.
==> Concretized diffutils
- 7vangk4 diffutils@3.8%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
- hyb7ehx ^libiconv@1.16%gcc@10.3.0 libs=shared,static arch=linux-ubuntu20.04-zen2
$ spack -e . install /hyb7ehx
==> Installing libiconv-1.16-hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
...
==> libiconv: Successfully installed libiconv-1.16-hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
Fetch: 0.01s. Build: 17.54s. Total: 17.55s.
[+] /tmp/tmp.VpvYApofVm/store/linux-ubuntu20.04-zen2/gcc-10.3.0/libiconv-1.16-hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
```
Fix bug introduced in #30191. `Spec.installed` and `Spec.installed_upstream` should just return
`False` for abstract specs, as they can be called in that context (a sketch follows the checklist below).
- [x] `Spec.installed` returns `False` now instead of asserting that the `Spec`
is concrete.
- [x] `Spec.installed_upstream` returns `False` now instead of asserting that the `Spec`
is concrete.
- [x] `Spec.installed_upstream` no longer caches its result, as install status seems
      like a bad thing to cache -- it can easily be invalidated. Calling code should
      use transactions if there are performance issues, as with other places in Spack.
- [x] add tests for `Spec.installed` and `Spec.installed_upstream`
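A minimal sketch of the new shape of the property; the lookup helper is hypothetical, standing in for the real query against the database and upstreams:
```python
class Spec:
    # ...rest of the class elided...

    @property
    def installed(self):
        """True if this spec is installed; False for abstract specs."""
        if not self.concrete:
            # Abstract specs may legitimately ask, so return False
            # instead of tripping an assertion.
            return False
        # Hypothetical helper standing in for the real database query.
        record = lookup_database_record(self)
        return record is not None and record.installed
```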
This PR moves the `installed` and `installed_upstream` properties from `PackageBase` to `Spec` and is a step towards being able to reuse specs for which we don't have a `package.py` available. It _should_ be sufficient to complete the concretization step and see the spec in the concretized DAG.
To fully reuse a spec without a package.py though we need a way to serialize enough data to reconstruct the results of calls to:
- `Spec.libs`, `Spec.headers` and `Spec.command`
- `Package.setup_dependent_*_environment` and `Package.setup_run_environment`
- [x] Add stub methods to packages with warnings
- [x] Add a missing "root=False" in cmd/fetch.py
- [x] Assert that a spec is concrete before checking installation status
This PR updates the list of images we build nightly, deprecating
Ubuntu 16.04 and CentOS 8 and adding Ubuntu 20.04, Ubuntu 22.04
and CentOS Stream. It also removes a lot of duplication by generating
the Dockerfiles during the CI workflow and uploading them as artifacts
for later inspection or reuse.
Fix test_ci_generate_prune_untouched(), which would fail if run when
the latest commit changed the .gitlab-ci.yml. This change mocks the
get_stack_changed() method in that test to disregard the state of
the current spack repo in favor of a mock repo under test control.
gitlab ci: Remove code for relating CDash builds
Relating CDash builds to their dependencies was a seldom used feature. Removing
it will make it easier for us to reorganize our CDash projects & build groups in the
future by eliminating the need to keep track of CDash build ids in our binary mirrors.
* Allow packages to add a 'submodules' property that determines when ad-hoc Git-commit-based versions should initialize submodules (see the sketch after this list)
* add support for ad-hoc git-commit-based versions to instantiate submodules if the associated package has a 'submodules' property and it indicates this should happen for the associated spec
* allow Package-level submodule request to influence all explicitly-defined version() in the Package
* skip test on windows which fails because of long paths
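A sketch of a package opting in; the name and URL are placeholders, and the property is shown as a plain boolean (per the bullets above it may also be more dynamic, deciding per spec):
```python
from spack import *


class MyRepoPkg(Package):
    """Hypothetical package stored in a Git repository."""

    git = "https://example.com/myrepo.git"

    # A truthy package-level 'submodules' property asks Spack to
    # initialize git submodules for ad-hoc versions like
    # myrepopkg@<commit-sha>, mirroring what
    # version("1.0", tag="v1.0", submodules=True) does for explicitly
    # declared versions.
    submodules = True

    version("main", branch="main")
```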
Spack added support in #24639 for ad-hoc Git-commit-hash-based
versions: A user can install a package `x@hash`, where `x` is a package
that stores its source code in a Git repository, and the hash refers
to a commit in that repository which is not recorded as an explicit
version in the package.py file for `x`.
A couple issues were found relating to this:
* If an environment defines an alternative package repo (i.e. with
repos.yaml), and spack.yaml contains user Specs with ad-hoc
Git-commit-hash-based versions for packages in that repo,
then as part of retrieving the data needed for version comparisons
it will attempt to retrieve the package before the environment's
configuration is instantiated.
* The bookkeeping information added to compare ad-hoc git versions was
being stripped from Specs during concretization (such that user
Specs which succeeded before concretizing would then fail after)
This addresses the issues:
* The first issue is resolved by deferring access to the associated
Package until the versions are actually compared to one another.
* The second issue is resolved by ensuring that the Git bookkeeping
information is explicitly applied to Specs after they are concretized.
This also:
* Resolves an ambiguity in the mock_git_version_info fixture used to
create a tree of Git commits and provide a list where each index
maps to a known commit.
* Isolates the cache used for Git repositories in tests using the
mock_git_version_info fixture
* Adds a TODO which points out that if the remote Git repository
overwrites tags, that Spack will then fail when using
ad-hoc Git-commit-hash-based versions
This commit updates the `gpg publish` command to work with the mirror
arguments, when trying to push keys to a mirror.
- [x] update the `gpg publish` command
- [x] add test for publishing GPG keys and rebuilding the key index within a mirror
In a typical call to spack, the OperatingSystem gets instantiated
multiple times. For macOS, each one requires a call to `sw_vers`, which
is done through the Executable helper class. Memoizing
reduces the call count of `spack spec` from three to one.
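A sketch of the idea using Spack's own `memoized` decorator; the function name is illustrative, and the real change memoizes along the OperatingSystem construction path:
```python
from llnl.util.lang import memoized
from spack.util.executable import Executable


@memoized
def macos_product_version():
    """Run `sw_vers` once per process and cache the result."""
    sw_vers = Executable("sw_vers")
    return sw_vers("-productVersion", output=str).strip()
```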
Currently environments are indexed by build hashes. When looking into this bug I noticed there is a disconnect between environments that are concretized in memory for the first time and environments that are read from a `spack.lock`. The issue is that specs read from a `spack.lock` don't have a full hash, since they are indexed by a build hash which is strictly coarser. They are also marked "final" as they are read from a file, so we can't compute additional hashes.
This bugfix PR makes "first concretization" equivalent to re-reading the specs from a corresponding `spack.lock`, and doing so unveiled a few tests where we were making wrong assumptions and relying on the fact that a `spack.lock` file was not there already.
* Add unit test
* Modify mpich to trigger jobs in pipelines
* Fix two failing unit tests
* Fix another full_hash vs. build_hash mismatch in tests
* Ignore top-level module config; add auto-update
In Spack 0.17 we got module sets (modules:[name]:[prop]), and for
backwards compat modules:[prop] was short for modules:default:[prop].
But this makes it awkward to define default config for the "default"
module set.
Since 0.17 is branched off, we can now deprecate top-level module config
(that is, just ignore it with a warning).
This PR does that, and it implements `spack config update modules` to
make upgrading easy (we should have added that to 0.17 already...)
It also removes references to `dotkit` stuff which was already
deprecated in 0.13 and could have been removed in 0.14.
Prefix inspections are the only exception, since the top-level prefix inspections
are used for `spack load` and `spack env activate`.
Spack currently allows dependencies to be concretized for an
architecture incompatible with the root. This commit adds rules
to make this situation impossible by design.
* Extract the MetaPathFinder and Loaders for packages in their own classes
https://peps.python.org/pep-0451/
Currently, RepoPath and Repo implement the (deprecated) interface of
MetaPathFinder (find_module) and of Loader (load_module). This commit
extracts both of them and places the code in their own classes.
The MetaPathFinder interface is updated to contain both the deprecated
"find_module" (for Python 2.7 support) and the recommended "find_spec".
Update of the Loader interface is deferred to a subsequent commit.
* Move the lines to be prepended inside "RepoLoader"
Also adjust the naming of a few variables
* Remove spack.util.imp, since code is only used in spack.repo
* Remove support for loading Python modules with Python > 3 but < 3.5
* Remove `Repo._create_namespace`
This function was interacting badly with the MetaPathFinder
and causing issues with "normal" imports. Removing the
function allows us to do things like:
```python
import spack.pkg.builtin.mpich
cls = spack.pkg.builtin.mpich.Mpich
```
* Remove code needed to trigger the Singleton evaluation
The finder is coded in a way to trigger the Singleton,
so we don't need external code now that we register it
at module level into `sys.meta_path`.
* Add unit tests
Some servers require `User-Agent` to be set, and otherwise error with
access denied. One such example is mpich.
To fix this, set `User-Agent: Spackbot/[version]` as a header.
Apparently by convention, it should include the word `bot`.
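A standard-library sketch of the header change; the URL is just an example, and the real change lives in Spack's web utilities:
```python
from urllib.request import Request, urlopen

import spack

req = Request(
    "https://www.mpich.org/",  # example of a server requiring User-Agent
    # By convention, crawler user agents should contain the word "bot".
    headers={"User-Agent": "Spackbot/{0}".format(spack.spack_version)},
)
with urlopen(req) as response:
    page = response.read()
```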
#27021 broke fetching for CVS-based packages because:
- The mirror logic was using URL parsing to extract a path from the
CVS repository location
- #27021 added sanity checks to enforce that strings passed to the
URL parser were actually URLs
This replaces the call to "url_util.parse" with logic that is
customized for CVS. This implies that VCSFetchStrategy should
rename the "url_attr" attribute to something more generic, but
that should be handled separately.
Allow declaring possible values for variants with an associated condition. If the variant takes one of those values, the condition is imposed as a further constraint.
The idea of this PR is to implement part of the mechanisms needed for modeling [packages with multiple build-systems]( https://github.com/spack/seps/pull/3). After this PR the build-system directive can be implemented as:
```python
variant(
'build-system',
default='cmake',
values=(
'autotools',
conditional('cmake', when='@X.Y:')
),
description='...',
)
```
Modifications:
- [x] Allow conditional possible values in variants
- [x] Add a unit-test for the feature
- [x] Add documentation
* tests for rewiring pure specs to spliced specs
* relocate text, binaries, and links
* using llnl.util.symlink for windows compat.
Note: This does not include CLI hooks for relocation.
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
- Add variants for various common build flags, including support for both versions of the Racket VM environment.
- Prevent `-j` flags to `make`, which has been known to cause problems with Racket builds.
- Prefer the minimal release to improve install times. Bells and whistles carry their own runtime dependencies and should be installed via `raco`. An enterprising user may even create a `RacketPackage` class to make spack aware of `raco` installed packages.
- Match the official version numbering scheme.
Update "spack external find --all" to also find library-only packages.
A Package can add a ".libraries" attribute, which is a list of regular
expressions to use to find libraries associated with the Package.
"spack external find --all" will search LD_LIBRARY_PATH for potential
libraries.
This PR adds examples for NCCL, RCCL, and hipblas packages. These
examples specify the suffix ".so" for the regular expressions used
to find libraries, so generally are only useful for detecting library
packages on Linux.
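A sketch of what such a package might declare; the class body and regular expression are illustrative:
```python
from spack import *


class Nccl(Package):
    """Sketch of a detectable library-only package."""

    homepage = "https://developer.nvidia.com/nccl"

    # `spack external find --all` scans LD_LIBRARY_PATH and matches
    # candidate files against these regular expressions; the ".so"
    # suffix effectively limits detection to Linux.
    libraries = [r"libnccl\.so"]
```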
Do not prompt user with checksum warning when using git commit hashes
as versions. Spack was incorrectly reporting this as a potential
problem: it would display a prompt asking the user whether they
want to proceed if Spack was running in a terminal, or it would
terminate the running instance of Spack if running as part of a
script.
* Add pl2bat to PATH: Perl on Windows requires the script pl2bat.bat
and Perl to be available to the installer via the PATH. The build
and dependent environments of Perl on Windows have the install
prefix bin added to the PATH.
* symlink with win32file module instead of using Executable to
call mklink (mklink is a shell function and so is not accessible
in this manner).
We've previously generated CI pipelines for PRs, and they rebuild any packages that don't have
a binary in an existing build cache. The assumption we were making was that ALL prior merged
builds would be in cache, but due to the way we do security in the pipeline, they aren't. `develop`
pipelines can take a while to catch up with the latest PRs, and while it does that, there may be a
bunch of redundant builds on PRs that duplicate things being rebuilt on `develop`. Until we can
do better caching of PR builds, we'll have this problem.
We can do better in PRs, though, by *only* rebuilding things in the CI environment that are actually
touched by the PR. This change computes exactly what packages are changed by a PR branch and
*only* includes those packages' dependents and dependencies in the generated pipeline. Other
as-yet unbuilt packages are pruned from CI for the PR.
For `develop` pipelines, we still want to build everything to ensure that the stack works, and to ensure
that `develop` catches up with PRs. This is especially true since we do not do rebuilds for *every* commit
on `develop` -- just the most recent one after each `develop` pipeline finishes. Since we skip around,
we may end up missing builds unless we ensure that we rebuild everything.
We differentiate between `develop` and PR pipelines in `.gitlab-ci.yml` by setting
`SPACK_PRUNE_UNTOUCHED` for PRs. `develop` will still have the old behavior.
- [x] Add `SPACK_PRUNE_UNTOUCHED` variable to `spack ci`
- [x] Refactor `spack pkg` command by moving historical package checking logic to `spack.repo`
- [x] Implement pruning logic in `spack ci` to remove untouched packages
- [x] add tests
* cmake: use CMAKE_INSTALL_RPATH_USE_LINK_PATH
Spack has a heuristic to add rpaths for packages it knows are required,
but it's really a heuristic, and it does not work when the dependencies
put their libraries in a different folder than `<prefix>/lib{64,}`.
CMake patches binaries after install with the "install rpaths", which by
default are provided by Spack and its heuristic through
`CMAKE_INSTALL_RPATH`.
CMake however knows better what libraries are effectively being linked
to, and has an option to include those in the install rpath too, through
`CMAKE_INSTALL_RPATH_USE_LINK_PATH`.
These two CMake options are complementary, repeated rpaths seem to be
filtered, and the "use link path" paths are appended to Spack's
heuristic "install rpath".
So, it seems like a good idea to enable "use link path" by default, so
that:
- `dlopen` by library name uses Spack's heuristic search paths
- linked libraries in non-standard locations within a prefix get an
rpath thanks to CMake.
* docs
Add output of build- and install-time tests to the info command.
Enable dependencies, variants, and versions by default (i.e., provide --no*
options); add gcc to test_info_fields to increase coverage for c_names->v_names.
We shouldn't be using "remove_linked_tree" to remove the lock file,
since that function expects to receive a directory path as an
argument.
Also, as a further measure to avoid regression, this commit restores
the "ignore_errors=True" argument on linux and adds a unit test
checking that "remove_linked_tree" doesn't change file permissions
as a side effect of a failure to remove.
Reduces the number of stat calls to a bare minimum:
- Single pass over src prefixes
- Handle projection clashes in memory
Symlinked directories in the src prefixes are now conditionally
transformed into directories with symlinks in the dst dir. Notably
`intel-mkl`, `cuda` and `qt` have top-level symlinked directories that
previously resulted in empty directories in the view. We now avoid
cycles and possible exponential blowup by only expanding symlinks that:
- point to directories deeper in the folder structure;
- are at a fixed depth of 2.
Currently `old_root` is computed by reading the symlink at `self.root`.
We should be more defensive in removing it by checking that it is in the
same directory as the new root. Otherwise, in the worst case, when
someone runs `spack env create --with-view=./view -d .` and `view`
already exists and is a symlink to `/`, Spack effectively runs `rm -rf /`.
`file` was used to detect Python scripts with shebangs, so that the interpreter could be changed from <python prefix> to <view path>. With this change, we detect shebangs using Python instead, so that `file` is no longer required.
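A minimal sketch of the pure-Python check, under the assumption that reading the leading bytes is all `file` was needed for here:
```python
def has_shebang(path):
    """Detect a '#!' interpreter line without shelling out to `file`."""
    with open(path, "rb") as f:
        return f.read(2) == b"#!"
```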
The number of commit characters in patch files fetched from GitHub can change,
so we should use `full_index=1` to enforce full commit hashes (and a stable
patch `sha256`).
Similarly, URLs for branches like `master` don't give us stable patch files,
because branches are moving targets. Use specific tags or commits for those.
- [x] update all github patch URLs to use `full_index=1`
- [x] don't use `master` or other branches for patches
- [x] add an audit check and a test for `?full_index=1`
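An illustrative `patch` directive as it would appear in a package (URL, `sha256`, and version range are placeholders):
```python
# Without ?full_index=1, GitHub may change the abbreviated-hash length
# in the patch body and silently break the checksum.
patch(
    "https://github.com/org/project/commit/0123456789abcdef.patch?full_index=1",
    sha256="0000000000000000000000000000000000000000000000000000000000000000",
    when="@1.2.0",
)
```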
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
The "Known issues" section of the documentation reports only two of the
bugs reported on GitHub. One of the two is also outdated, since the issue
has been solved with the new concretizer. Thus, this commit removes the section.
When you install Spack from a tarball, it will always show an exact
version for Spack itself, even when you don't download a tagged commit:
```
$ wget -q https://github.com/spack/spack/archive/refs/heads/develop.tar.gz
$ tar -xf develop.tar.gz
$ ./spack-develop/bin/spack --version
0.16.2
```
This PR sets the Spack version to `0.18.0.dev0` on develop, following [PEP440](https://github.com/spack/spack/pull/25267#issuecomment-896340234) as
suggested by Adam Stewart.
```
spack (fix/set-dev-version)$ spack --version
0.18.0.dev0 (git 0.17.1-1526-e270464ae0)
spack (fix/set-dev-version)$ mv .git .git_
spack $ spack --version
0.18.0.dev0
```
- [x] Update the release guide
- [x] Add __version__ to spack's __init__.py
- [x] Use PEP 440 canonical version strings
- [x] Make spack --version output [actual version] (git version)
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Add tests to ensure google cloud storage urls work as mirrors
This commit adds two tests to track that GCS buckets can work as
mirrors, and can be parsed as valid URLs.
Currently, gs:// format URLs are not correctly parsed.
* Fix URL parsing for GCS buckets
This commit adds GCS bucket URLs as valid URLs.
* lower priority of package-provided urls
This change favors urls found in a scraped page over those provided by
the package from `url_for_version`. In most cases this doesn't matter,
but R specifically returns known bad URLs in some cases, and the
fallback path for a failed fetch uses `fetch_remote_versions` to find a
substitute. This fixes that problem.
fixes #29204
* consider what links actually exist in all cases
Checksum was only actually scraping when called with no versions. It
now always scrapes and then selects URLs from the set of URLs known to
exist whenever possible.
fixes #25831
* bow to the wrath of flake8
* test-fetch urls from package, prefer if successful
* Update lib/spack/spack/package.py
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* reword as suggested
* re-enable mypy specific ignore and ignore pyflakes
* remove flake8 ignore from .flake8
* address review comments
* address comments
* add sneaky missing substitute
I missed this one because we call substitute on a URL that doesn't
contain a version component. I'm not sure how that's supposed to work,
but apparently it's required by at least one mock package, so back in it
goes.
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
Adds `spack external read-cray-manifest`, which reads a json file that describes a set of package DAGs. The parsed results are stored directly in the database. A user can see these installed specs with `spack find` (like any installed spec). The easiest way to use them right now as dependencies is to run `spack spec ... ^/hash-of-external-package`.
Changes include:
* `spack external read-cray-manifest --file <path/to/file>` will add all specs described in the file to Spack's installation DB and will also install described compilers to the compilers configuration (the expected format of the file, including examples, is described in this PR)
* Database records now may include an "origin" (the command added in this PR registers the origin as "external-db"). In the future, it is assumed users may want to be able to treat installs registered with this command differently (e.g. they may want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec is concrete
* I don't think the hashes of installed-and-concrete specs should change and this was the easiest way to handle that
* also specs that are concrete preserve their `.normal` property when copied (external specs may mention compilers that are not registered, and without this change they would fail in `normalize` when calling `validate_or_raise`)
* it may be that this should only be the case if the spec was installed
- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do something like "uninstall all packages added with `spack read-external-db`")
* This is now possible with `spack uninstall --all --origin=external-db` (this will remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
This PR removes a few outdated sections from the "Basics" part of the
documentation. It also makes a few topics under the environment section
more prominent by removing an unneeded spack.yaml subsection and
promoting everything under it.
Consolidate Spack's internal filepath logic to a select
few places and refactor to consistent internal usage of
os.path utilities. Creates a prefix, and a series of utilities
in the path utility module that facilitate handling paths
in a platform agnostic manner.
Convert Windows paths to posix paths internally
Prefer posixpath.join instead of os.path.join
Updated util/ directory to account for Windows integration
Co-authored-by: Stephen Crowell <stephen.crowell@khq.kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
Module template format for windows (#23041)
* Incorporate new search location
* Add external user option
* proper doc string
* Explicit commands in getting started
* raise during chgrp on Win
recover installer changes
Notate admin privileges
Windows phase install hooks
Find external python and install ninja (#23496)
Allow external find python to find windows python and spack install ninja
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Fixup common tests
* Remove requirement for Python 2.6
* Skip new failing test
Windows: Update url util to handle Windows paths (#27959)
* update url util to handle windows paths
* Update tests to handle fixed url handling
* canonicalize path only when the path type matches the host platform
* Skip some url tests on Windows
Co-authored-by: Omar Padron <omar.padron@kitware.com>
Use threading.TIMEOUT_MAX when available (#24246)
This value was introduced in Python 3.2. Specifying a timeout greater than
this value will raise an OverflowError.
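A minimal sketch of the guard:
```python
import threading


def bounded_timeout(timeout):
    """Clamp a timeout so Lock.acquire(timeout=...) cannot overflow."""
    maximum = getattr(threading, "TIMEOUT_MAX", float("inf"))
    return min(timeout, maximum)
```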
Co-authored-by: Lou Lawrence <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Add compiler hint to the root spec for Windows
Reporters on Windows (#26038)
Reporters use Jinja2 as the templating engine, and Jinja2 indexes
templates by Unix separators, even on Windows, so search using Unix paths
on all systems.
Support patching on win via git (#25871)
Handle GRP on windows
CMake - Windows Bootstrap (#25825)
Remove hardcoded cmake compiler (#26410)
Revert breaking cmake changes
Ensure no autotools on Windows
Perl on Windows (#26612)
Python source build windows (#26313)
Reconfigure sysconf for Windows
Python2.6 compatibility
Fixup new sbang tests for Windows
Ruby support (#28287)
Add NASM support (#28319)
Add mock Ninja package for testing
* Style fixes
* Use Python's zipfile, if available
The compression libs are optional in Python. Rely on Python as a
first attempt, then fall back to `unzip`.
MSVC's internal CMake and Ninja now detected by spack external find and added to packages.yaml
Saving progress on packaging zlib for Windows
Fixing the shared CMake flag
* Loading Intel's ifx Fortran compiler into MSVC; if there are multiple
versions of MSVC installed and detected, ifx will only be placed into
the first block written in compilers.yaml. The version number of ifx can
be detected using MSVC's version flag (instead of /QV) by using
ignore_version_errors. This commit also provides support for detection
of Intel compilers in their own compiler block by adding ifx.exe to the
fc/f77_name blocks inside intel.py
* Giving CMake a Fortran compiler argument
* Adding patch file for removing duplicated mangling header for versions 3.9.1 and older; static and shared now successfully building on Windows
* Have netlib-lapack depend on ninja@1.10
Co-authored-by: John R. Cary <cary@txcorp.com>
Co-authored-by: Jared Popelar <jpopelar@txcorp.com>
Making a default config.yaml for Windows
Small path length for build_stage
Provide more prerequisite details, mention default config.yaml
Killing an unnecessary setvars call
Replacing some lost changes, proofreading, updating windows-supported package list
Co-authored-by: John Parent <john.parent@kitware.com>
* Add 'make-installer' command for Windows
* Add '--bat' arg to env activate, env deactivate and unload commands
* An equivalent script to setup-env on linux: spack_cmd.bat. This script
has a wrapper to evaluate cd, load/unload, env activate/deactivate. (#21734)
* Add spacktivate and config editor (#22049)
* spack_cmd: will find python and spack on its own. It preferentially
tries to use python on your PATH (#22414)
* Ignore Windows python installer if found (#23134)
* Bundle git in windows installer (#23597)
* Add Windows section to Getting Started document
(#23131), (#23295), (#24240)
Co-authored-by: Stephen Crowell <stephen.crowell@kitware.com>
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Co-authored-by: Jared Popelar <jpopelar@txcorp.com>
Co-authored-by: Ben Cowan <benc@txcorp.com>
Update Installer CI
Co-authored-by: John Parent <john.parent@kitware.com>
Made the vcvars batch script location a member variable of the msvc compiler subclass, initialized from the compiler executable path. Added a setup_custom_environment() method to the msvc subclass that sources the vcvars script, dumps the environment, and copies the relevant environment variables to the Spack environment. Added class variables to the Windows OS and MSVC compiler subclasses to enable finding the compiler executables and determining their versions.
* Fixed path and uid issues.
* Added needed import statement; kluged .exe extension.
* Got package to build. Some manual intervention necessary, including sourcing the MSVC setup script and having certain configuration parameters.
* Removed CMake executable suffix hack.
To provide Windows-compatible functionality, spack code should use
llnl.util.symlink instead of os.symlink. On non-Windows platforms
and on Windows where supported, os.symlink will still be used.
Use junctions when symlinks aren't supported on Windows (#22583)
Support islink for junctions (#24182)
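A sketch of the portable call:
```python
from llnl.util.symlink import symlink

# Same call on every platform; on Windows the helper falls back to
# junctions where real symlinks are unsupported.
symlink("/path/to/target", "/path/to/link")
```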
Windows: Update llnl/util/filesystem
* Use '/' as path separator on Windows.
* Recognizing that Windows paths start with '<Letter>:/' instead of '/'
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
os.rename() fails on Windows if file already exists.
Create getuid utility function (#21736)
On Windows, replace os.getuid with ctypes.windll.shell32.IsUserAnAdmin().
Tests: Use getuid util function
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
1. Forwarding sys.stdin (e.g. via input_multiprocess_fd)
gives an error on Windows. Skipping for now
3. subprocess_context needs to serialize for Windows, like it does
for Mac.
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
* Snapshot of some MSVC infrastructure added during experiments a while ago. Rebasing from spack/develop.
* Added platform and OS definitions for Windows.
* Updated Windows platform file to conform to new archspec use.
* Added Windows as a platform; introduced some debugging code.
* Added type annotations.
* Fixed copyright.
* Removed print statements.
* Ensure `spack arch` returns correctly on Windows (#21428)
* Correctly identify windows as 'windows-Windows10-AMD64'
Re-work the checks and comparisons around commit versions. When no
commit version is involved the overhead is now in the noise; where one
is involved, the overhead is now constant rather than linear.
fixes #29446
The new setup_*_environment functions have been falling back
to calling the old functions and warning the user since #11115.
This commit removes the fallback behavior and any use of:
- setup_environment
- setup_dependent_environment
in the codebase
Change the internal representation of `Spec` to allow for multiple dependencies or
dependents stemming from the same package. This change makes it possible to represent cases
which are frequent in cross-compiled environments or when bootstrapping compilers.
Modifications:
- [x] Substitute `DependencyMap` with `_EdgeMap`. The main differences are that the
latter does not support direct item assignment and can be modified only through its
API. It also provides a `select_by` method to query items.
- [x] Reworked a few public APIs of `Spec` to get list of dependencies or related edges.
- [x] Added unit tests to prevent regression on #11983 and prove the synthetic construction
of specs with multiple deps from the same package.
Since #22845 went in first, this PR reuses that format and thus it should not change hashes.
The same package may be present multiple times in the list of dependencies with different
associated specs (each with its own hash).
* environment.py: allow link:run
Some users want minimal views, excluding run-type dependencies, since
those types of dependencies are covered by rpaths and the symlinked
libraries in the view aren't used anyway.
With this change, an environment like this:
```
spack:
specs: ['py-flake8']
view:
default:
root: view
link: run
```
includes python packages and python, but no link type deps of python.
Speeds up comparison on `Version` by ~2.5x, e.g.
```python
In [1]: v = spack.version.Version('1.0.0'); w = spack.version.Version('1.0.2')
In [2]: %timeit v < w
1.47 µs ± 5.59 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
535 ns ± 1.75 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
fixes #29203
This PR fixes a subtle bug we have when importing
Spack packages as Python modules that can lead to
multiple module objects being created for the same
package.
It also fixes all the places in unit tests where
"relying" on the old bug was crucial to getting a new
"clean" state of the package class.
This commit reverts the GCS fetch strategy to before commit:
d759612523
The previous commit added some s3 syntax to handle connections, but
added them into the GCS fetch strategy in a way that prevents GCS from
working anymore.
* rocmcc compiler: initial commit based on aocc and clang
Co-authored-by: luker <luke.roskop@hpe.com>
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
The status displayed in the terminal title could be wrong when doing
distributed builds. For instance, doing `spack install glib` in two
different terminals could lead to the current package being reported as
`40/29` due to the way Spack handles retrying locks.
Work around this by keeping track of the package IDs that were already
encountered to avoid counting packages twice.
See https://github.com/spack/spack/pull/28468/files#r809156986
If we exit before generating the:
error("Dependencies must have compatible OS's with their dependents").
...
facts, we'll output a problem that is effectively
different from the one solved by clingo.
* cmd/checksum: prefer url matching url_from_version
This is a minimal change toward getting the right archive from places
like github. The heuristic is:
* if an archive url exists, take its version
* generate a url from the package with pkg.url_from_version
* if they match
* stop considering other URLs for this version
* otherwise, continue replacing the url for the version
I doubt this will always work, but it should address a variety of
versions of this bug. A good test right now is `spack checksum gh`,
which checksums macos binaries without this, and the correct source
packages with it.
fixes #15985
related to #14129
related to #13940
* add heuristics to help create as well
Since create can't rely on an existing package, this commit adds another
pair of heuristics:
1. if the current version is a specifically listed archive, don't
replace it
2. if the current url matches the result of applying
`spack.url.substitute_version(a, ver)` for any a in archive_urls,
prefer it and don't replace it
fixes #13940
* clean up style and a lingering debug import
* ok flake8, you got me
* document reference_package argument
* Update lib/spack/spack/util/web.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* try to appease sphinx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
We can see what is in the bootstrap store with `spack find -b`, and you can clean it with `spack
clean -b`, but we can't do much else with it, and if there are bootstrap issues they can be hard to
debug.
We already have `spack --mock`, which allows you to swap in the mock packages from the command
line. This PR introduces `spack -b` / `spack --bootstrap`, which runs all of spack with
`ensure_bootstrap_configuration()` set. This means that you can run `spack -b find`, `spack -b
install`, `spack -b spec`, etc. to see what *would* happen with bootstrap configuration, to remove
specific bootstrap packages, etc. This will hopefully make developers' lives easier as they deal
with bootstrap packages.
This PR also uses a `nullcontext` context manager. `nullcontext` has been implemented in several
other places in Spack, and this PR consolidates them to `llnl.util.lang` (see the sketch after the
list below), with a note that we can delete the function if we ever require a new enough Python.
- [x] introduce `spack --bootstrap` option
- [x] consolidated all `nullcontext` usages to `llnl.util.lang`
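A minimal sketch of the consolidated helper; `contextlib.nullcontext` only appeared in Python 3.7, hence the note about deleting it once that can be required:
```python
from contextlib import contextmanager


@contextmanager
def nullcontext(*args, **kwargs):
    """A context manager that does nothing."""
    yield
```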
Some "concrete" versions on the command line, e.g. `qt@5` are really
meant to satisfy some actual concrete version from a package. We should
only assume the user is introducing a new, unknown version on the CLI
if we, well, don't know of any version that satisfies the user's
request. So, if we know about `5.11.1` and `5.11.3` and they ask for
`5.11.2`, we'd ask the solver to consider `5.11.2` as a solution. If
they just ask for `5`, though, `5.11.1` or `5.11.3` are fine solutions,
as they satisfy `@5`, so use them.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
See https://github.com/spack/spack/issues/25353#issuecomment-1041868116
This commit changes the default behavior of
```
$ spack external find
```
from searching all the possible packages Spack knows about to
search only for the ones tagged as being a "build-tool".
It also introduces a `--all` option to restore the old behavior.
Prefer `sw_vers` to `platform.mac_ver`. In an Anaconda3 installation, for example, the latter reports 10.16 on Monterey -- I think this is affected by how and where the Python instance was built.
Use MACOSX_DEPLOYMENT_TARGET if present to override the operating system choice.
It will be useful for metrics gathering and possibly debugging to
have this environment variable available in the runner pods that
do the actual rebuilds.
Since Spack does not install external packages, this commit skips them by
default when running stand-alone tests. The assumption is that such packages
have likely undergone an acceptance test process.
However, the tests can be run against installed externals using
```
% spack test run --externals ...
```
fixes #28260
Since we iterate over different variants from many packages, the variant
values may have types which are not comparable, which causes errors
at runtime. This is not a real issue though, since we don't need the facts
to be ordered. Thus, to avoid needless sorting, the sorted function has
been removed, and a comment has been added to tip off any developer who
might need to inspect these clauses for debugging: add back sorting
on the first two items only.
It's kind of difficult to add a test for this, since the error depends on
whether Python's sorting algorithm ever needs to compare the third
value of a tuple being ordered.
* extensions: allow multiple "extends" directives
This will allow multiple extends directives in a package as long as only one of
them is selected as a dependency in the concrete spec (see the sketch after this list).
* document the option to have multiple extends
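A sketch of a package using two `extends` directives, with variants ensuring only one extendee lands in the concrete spec (names are illustrative):
```python
from spack import *


class Example(Package):
    """Hypothetical package with bindings for two interpreters."""

    variant("python", default=True, description="Build Python bindings")
    variant("lua", default=False, description="Build Lua bindings")

    # Only one of these may be active in any concrete spec.
    extends("python", when="+python")
    extends("lua", when="+lua")
```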
Reuse previously was a very invasive change that required parameters to be added to all
the methods that called `concretize()` on a `Spec` object. With the addition of
concretizer configuration, we can use the config system to simplify this argument
passing and keep the code cleaner.
We decided that concretizer config options should be read at `Solver` instantiation
time, and if config changes between instantiation of a particular solver and
`solve()` invocation, the `Solver` should use the settings from `__init__()`.
- [x] remove `reuse` keyword argument from most concretize functions
- [x] refactor usages to use `spack.config.override("concretizer:reuse", True)`
- [x] rework argument passing in `Solver` so that parameters are set from config
at instantiation time
`--reuse` was previously handled individually by each command that
needed it. We are growing more concretization options, and they'll
need their own section for commands that support them.
Now there are two concretization options:
* `--reuse`: Attempt to reuse packages from installs and buildcaches.
* `--fresh`: Opposite of reuse -- traditional spack install.
To handle these, this PR adds a `ConfigSetAction` for `argparse`, so
that you can write argparse code like this:
```
subgroup.add_argument(
'--reuse', action=ConfigSetAction, dest="concretizer:reuse",
const=True, default=None,
help='reuse installed dependencies/buildcaches when possible'
)
```
With this, you don't need to add logic to pull the argument out and
handle it; the `ConfigSetAction` just does it for you. This can probably
be used to clean up some other commands later, as well.
Code that was previously passing `reuse=True` around everywhere has
been refactored to use config, and config is set from the CLI using
a new `add_concretizer_args()` function in `spack.cmd.common.arguments`.
- [x] Add `ConfigSetAction` to simplify concretizer config on the CLI
- [x] Refactor code so that it does not pass `reuse=True` to every function.
- [x] Refactor commands to use `add_concretizer_args()` and to pass
concretizer config using the config system.
Config scopes were different for `config` and `mutable_config`,
and `mutable_config` did not have a command line scope.
- [x] Fix by consolidating the creation logic for the two fixtures.
The concretizer is going to grow to have many more configuration options,
and we really need some structured config for that.
* We have the `config:concretizer` option that chooses the solver,
but extending that is awkward (we'd need to replace a string with
a `dict`) and the solver choice will be deprecated eventually.
* We have the `concretization` option in environments, but it's
not a top-level config section -- it's just for environments,
and it also only admits a string right now.
To avoid overlapping with either of these and to allow the most
extensibility in the future, this adds a new `concretizer` config
section that can be used in and outside of environments. There
is only one option right now: `reuse`. This can expand to include
other options later.
Likely, we will soon deprecate `config:concretizer` and warn when
the user doesn't use `clingo`, and we will eventually (sometime later)
move the `together` / `separately` options from `concretization` into
the top-level `concretizer` section.
This commit just adds the new section and schema. Fully wiring it
up is TBD.
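As a sketch, the new top-level section with its single current option:
```yaml
concretizer:
  # Attempt to reuse installed packages and buildcaches when possible.
  reuse: true
```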
The solver has a lot of configuration associated with it. Rather
than adding arguments to everything, we should encapsulate that
in a class. This is the start of that work; it replaces `solve()`
and its kwargs with a class and properties.
* Add 'stable' to the list of infinity version names.
Rename libunwind 1.5-head to 1.5-stable.
* Add stable to the infinite version list in packaging_guide.rst.
* core: Make platform environment an instance not class method
In preparation for accessing data constructed in __init__.
* macos: set consistent macosx deployment target
This should silence numerous warnings from mixed gcc/macos toolchains.
* perl: prevent too-new deployment target version
```
*** Unexpected MACOSX_DEPLOYMENT_TARGET=11
***
*** Please either set it to a valid macOS version number (e.g., 10.15) or to empty.
```
* Stylin'
* Add deployment target overrides to failing autoconf packages
* Move configure workaround to base autoconf package
This reverts commit 3c119eaf8b4fb37c943d503beacf5ad2aa513d4c.
* Stylin'
* macos: add utility functions for SDK
These aren't yet used but should probably be added to spack debug
report.
* Remove node_target_satisfies/3 in favor of target_satisfies/2
When emitting input facts we don't need to couple target with
packages, but we can emit fewer facts independently and let
the grounder combine them.
* Remove compiler_version_satisfies/4 in favor of compiler_version_satisfies/3
When emitting input facts we don't need to couple compilers with
packages, but we can emit fewer facts independently and let
the grounder combine them.
* Introduce heuristic in the ASP-program
With a heuristic we can drive clingo to make better
initial guesses, which leads to fewer choices and
conflicts in the overall solve
* Fix reindex with uninstalled deps
When a prefix of a dep is removed, and the db is reindexed, it is added
through the dependent, but until now it incorrectly listed the spec as
'installed'.
There was also some questionable behavior in the db when the same spec
was added multiple times: it would always be marked installed.
* Always reserve path
* Only add installed spec's prefixes to install prefixes set
* Improve warning, and ensure ensure only ensures
* test: reindex with every file system remnant removed except for the old index; it should give a database with nothing installed, including records with installed==False, external==False, ref_count==0, explicit=True, and these should be removable from the database
* stacks: add regression tests for matrix expansion
* Use constrain semantics to construct spec lists for stacks
* Fix semantics for constraining an anonymous spec. Add tests
* Add sticky variants (see the sketch after this list)
* Add unit tests for sticky variants
* Add documentation for sticky variants
* Revert "Revert 19736 because conflicts are avoided by clingo by default (#26721)"
This reverts commit 33ef7d57c1.
* Add stickiness to "allow-unsupported-compiler"
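An illustrative directive (the description text is a placeholder): a sticky variant keeps its default unless the user explicitly sets it, so the solver cannot flip it just to satisfy a conflict.
```python
variant(
    "allow-unsupported-compiler",
    default=False,
    sticky=True,
    description="Allow building with compilers the package does not support",
)
```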
`spack license update-copyright-year` was updating license headers but not the MIT
license file. Make it do that and add a test.
Also simplify the way we bump the latest copyright year so that we only need to
update it in one place.
* Use pip to bootstrap pip
* Bootstrap wheel from source
* Update PythonPackage to install using pip
* Update several packages
* Add wheel as base class dep
* Build phase no longer exists
* Add py-poetry package, fix py-flit-core bootstrapping
* Fix isort build
* Clean up many more packages
* Remove unused import
* Fix unit tests
* Don't directly run setup.py
* Typo fix
* Remove unused imports
* Fix issues caught by CI
* Remove custom setup.py file handling
* Use PythonPackage for installing wheels
* Remove custom phases in PythonPackages
* Remove <phase>_args methods
* Remove unused import
* Fix various packages
* Try to test Python packages directly in CI
* Actually run the pipeline
* Fix more packages
* Fix mappings, fix packages
* Fix dep version
* Work around bug in concretizer
* Various concretization fixes
* Fix gitlab yaml, packages
* Fix typo in gitlab yaml
* Skip more packages that fail to concretize
* Fix? jupyter ecosystem concretization issues
* Solve Jupyter concretization issues
* Prevent duplicate entries in PYTHONPATH
* Skip fenics-dolfinx
* Build fewer Python packages
* Fix missing npm dep
* Specify image
* More package fixes
* Add backends for every from-source package
* Fix version arg
* Remove GitLab CI stuff, add py-installer package
* Remove test deps, re-add install_options
* Function declaration syntax fix
* More build fixes
* Update spack create template
* Update PythonPackage documentation
* Fix documentation build
* Fix unit tests
* Remove pip flag added only in newer pip
* flux: add explicit dependency on jsonschema
* Update packages that have been added since this was branched off of develop
* Move Python 2 deprecation to a separate PR
* py-neurolab: add build dep on py-setuptools
* Use wheels for pip/wheel
* Allow use of pre-installed pip for external Python
* pip -> python -m pip
* Use python -m pip for all packages
* Fix py-wrapt
* Add both platlib and purelib to PYTHONPATH
* py-pyyaml: setuptools is needed for all versions
* py-pyyaml: link flags aren't needed
* Appease spack audit packages
* Some build backend is required for all versions, distutils -> setuptools
* Correctly handle different setup.py filename
* Use wheels for py-tomli to avoid circular dep on py-flit-core
* Fix busco installation procedure
* Clarify things in spack create template
* Test other Python build backends
* Undo changes to busco
* Various fixes
* Don't test other backends
When `spack compiler list` is run without being restricted to a
particular scope, and no compilers are found, say that none are
available, and hint that the user should run `spack compiler find` to
auto-detect compilers.
* Improve docs
* Check if stdin is a tty
* add a test
Many packages implement logic at the class level to handle complex dependencies and
conflicts. Others have started using `with when("@1.0"):` blocks since we added that
capability. The loops and other control logic can cause some pure directive logic not to
be removed by our package hashing logic -- and in many cases that's a lot of code that
will cause unnecessary rebuilds (see the example after the checklist below).
This commit changes the unparser so that it will descend into these blocks. Specifically:
1. Descend into loops, if statements, and with blocks at the class level.
2. Don't look inside function definitions (in or outside a class).
3. Don't look at nested class definitions (they don't have directives)
4. Add logic to *remove* empty loops/with blocks/if statements if all directives
in them were removed.
This allows our package hash to ignore a lot of pure metadata that it was not ignoring
before, and makes it less sensitive.
In addition, we add `maintainers` and `tags` to the list of metadata attributes that
Spack should remove from packages when constructing canonical source for a package
hash.
- [x] Make unparser handle if/for/while/with at class level.
- [x] Add tests for control logic removal.
- [x] Add a test to ensure that all packages are not only unparseable, but also
that their canonical source is still compilable. This is a test for
our control logic removal.
- [x] Add another unparse test package that has complex logic.
These are the unit tests from astunparse, converted to pytest, with a few backports from
upstream cpython. These should hopefully keep `unparser.py` well covered as we change it.
We can't tell `print(a, b, c)` and `print((a, b, c))` apart -- both of these expressions
generate different ASTs in Python 2 and Python 3. However, we can decide that we don't
care. This commit treats both of them the same when `py_ver_consistent` is set with
`unparse()`.
This means that the package hash won't notice changes from printing a tuple to printing
multiple values, but we don't care, because this is extremely unlikely to affect the build.
More than likely this is just an error message for the user of the package.
- [x] treat `print(a, b, c)` and `print((a, b, c))` the same in py2 and py3
- [x] add another package parsing test -- legion -- that exercises this feature
To make it easier to see how package hashes change and how they are computed, add two
commands:
* `spack pkg source <spec>`: dumps source code for a package to the terminal
* `spack pkg source --canonical <spec>`: dumps canonicalized source code for a
package to the terminal. It strips comments, directives, and known-unused
multimethods from the package. It is used to generate package hashes.
* `spack pkg hash <spec>`: This gives the package hash for a particular spec.
It is generated from the canonical source code for the spec.
- [x] add `spack pkg source` and `spack pkg hash`
- [x] add tests
- [x] fix bug in multimethod resolution with boolean `@when` values
Co-authored-by: Greg Becker <becker33@llnl.gov>