Modifications:
- [x] Change `defaults/config.yaml`
- [x] Add a fix for bootstrapping patchelf from sources if `compilers.yaml` is empty
- [x] Make `SPACK_TEST_SOLVER=clingo` the default for unit-tests
- [x] Fix package failures in the e4s pipeline
Caveats:
1. CentOS 6 still uses the original concretizer as it can't connect to the buildcache due to issues with `ssl` (bootstrapping from sources requires a C++14 capable compiler)
1. I had to update the image tag for GitlabCI in e699f14.
1. libtool v2.4.2 has been deprecated, and other packages received updates
This will allow a user to refer to a git commit in lieu of a package version (from anywhere a Spec is parsed that includes both name and version), and to make comparisons with releases in the history based on commits (or with other commits). We do this by way of:
- Adding an `is_commit` property to versions, so we can always check whether a version refers to a commit and adjust behavior accordingly.
- Adding an attribute to the Version object which can look up commits from a git repo, find the last known version before that commit, and compute the commit distance.
- Constructing new Version comparators, which are tuples. For normal versions, they are unchanged. For commits with a previous version x.y.z, d commits away, the comparator is (x, y, z, '', d). For commits with no previous version, the comparator is ('', d) where d is the distance from the first commit in the repo.
- Metadata on git commits is cached in the misc_cache, for quick lookup later.
- Git repos are cached as bare repos in `~/.spack/git_repos`
- In both caches, git repo urls are turned into file paths within the cache
If a commit cannot be found in the cached git repo, we fetch from the repo. If a commit is found in the cached metadata, we do not recompare to newly downloaded tags (assuming repo structure does not change). The cached metadata may be thrown out by using the `spack clean -m` option if you know the repo structure has changed in a way that invalidates existing entries. Future work will include automatic updates.
# Finding previous versions
Spack will search the repo for any tags that match the string of a version given by the `version` directive. Spack will also search for any tags that match `v + string` for any version string. Beyond that, Spack will search for tags that match a SEMVER regex (i.e., tags of the form x.y.z) and interpret those tags as valid versions as well. Future work will increase the breadth of tags understood by Spack.
For each tag, Spack queries git to determine whether the tag is an ancestor of the commit in question or not. Spack then sorts the tags that are ancestors of the commit by commit-distance in the repo, and takes the nearest ancestor. The version represented by that tag is listed as the previous version for the commit.
Not all commits will find a previous version, depending on the package workflow. Future work may enable more tangential relationships between commits and versions to be discovered, but many commits in real world git repos require human knowledge to associate with a most recent previous version. Future work will also allow packages to specify commit/tag/version relationships manually for such situations.
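A minimal sketch of this lookup using plain `git` commands (illustrative only -- the helper, regex, and function names here are not Spack's actual implementation):

```python
import re
import subprocess

SEMVER = re.compile(r"^v?\d+\.\d+\.\d+$")  # tags we would interpret as versions

def git(*args, cwd=None):
    return subprocess.check_output(["git"] + list(args), cwd=cwd, text=True).strip()

def previous_version(repo_dir, commit):
    """Return (previous version, commit distance) for a commit, or (None, distance)."""
    candidates = []
    for tag in git("tag", cwd=repo_dir).splitlines():
        if not SEMVER.match(tag):
            continue
        # only tags that are ancestors of the commit are candidates
        is_ancestor = subprocess.call(
            ["git", "merge-base", "--is-ancestor", tag, commit], cwd=repo_dir) == 0
        if is_ancestor:
            distance = int(git("rev-list", "--count", "%s..%s" % (tag, commit), cwd=repo_dir))
            candidates.append((distance, tag.lstrip("v")))
    if not candidates:
        # no ancestor tag found: measure distance from the first commit in the repo
        return None, int(git("rev-list", "--count", commit, cwd=repo_dir))
    distance, version = min(candidates)  # nearest ancestor tag wins
    return version, distance
```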
# Version comparisons
The empty string is a valid component of a Spack version tuple, and is in fact the lowest-valued component. It cannot be generated as part of any valid version. These two characteristics make it perfect for delineating previous versions from distances. For any version x.y.z, (x, y, z, '', _) will be less than any "real" version beginning x.y.z. This ensures that no distance from a release will cause the commit to be interpreted as "greater than" a version which is not an ancestor of it.
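To see how these comparators behave, here is a toy sketch; the `Empty` class below stands in for Spack's empty-string component and is not the real type:

```python
class Empty:
    """Toy stand-in for the empty-string component: lower than any other component."""
    def __eq__(self, other):
        return isinstance(other, Empty)
    def __lt__(self, other):
        return not isinstance(other, Empty)

EMPTY = Empty()

# a commit that is 25 commits past the 1.2.3 release
commit_past_release = (1, 2, 3, EMPTY, 25)

# it sorts after the release itself (a longer tuple with an equal prefix)...
assert (1, 2, 3) < commit_past_release

# ...but before any "real" version that begins with 1.2.3
assert commit_past_release < (1, 2, 3, 1)   # e.g. 1.2.3.1
assert commit_past_release < (1, 2, 4)      # e.g. 1.2.4

# a commit with no previous version sorts below every release
assert (EMPTY, 40) < (0, 0, 1)
```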
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This PR coincides with tiny changes to Spack to support spack monitor using the new spec;
the corresponding spack monitor PR is at https://github.com/spack/spack-monitor/pull/31.
Since there are no changes to the database we can actually update the current server
fairly easily, so either someone can test locally or we can just update and then
test from that (and update as needed).
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
#22845 revealed a long-standing bug that had never been triggered before, because the
hashing algorithm had been stable for multiple years while the bug was in production. The
bug was that when reading a concretized environment, Spack did not properly read in the
build hashes associated with the specs in the environment. Those hashes were recomputed
(and as long as we didn't change the algorithm, were recomputed identically). Spack's
policy, though, is never to recompute a hash. Once something is installed, we respect its
metadata hash forever -- even if internally Spack changes the hashing method. Put
differently, once something is concretized, it has a concrete hash, and that's it -- forever.
When we changed the hashing algorithm for performance in #22845, we exposed the bug.
This PR fixes the bug at its source by properly reading in the cached build hash attributes
associated with the specs. I've also renamed some variables in the Environment class
methods to make a mistake of this sort more difficult to make in the future.
* ensure environment build hashes are never recomputed
* add comment clarifying reattachment of env build hashes
* bump lockfile version and include specfile version in env meta
* Fix unit-test for v1 to v2 conversion
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Refactor platform etc. to avoid circular dependencies
All the base classes in spack.architecture have been
moved to the corresponding specialized subpackages,
e.g. Platform is now defined within spack.platforms.
This resolves a circular dependency where spack.architecture
was both:
- Defining the base classes for spack.platforms, etc.
- Collecting derived classes from spack.platforms, etc.
Now it does only the latter.
* Move a few platform related functions to "spack.platforms"
* Removed spack.architecture.sys_type()
* Fixup for docs
* Rename Python modules according to review
Currently, as part of installing a package, we lock a prefix, check if
it exists, and create it if not; the logic for creating the prefix
included a check for the existence of that prefix (and raised an
exception if it did exist), which was redundant.
This also includes removal of tests which were not verifying
anything (they pass with or without the modifications in this PR).
Modifications:
- Export platforms from spack.platforms directly, so that client modules don't have to import submodules
- Use only plain imports in test/architecture.py
- Parametrized test in test/architecture.py and put most of the setup/teardown in fixtures
This is a major rework of Spack's core `spec.yaml` metadata format. It moves from `spec.yaml` to `spec.json` for speed, and it changes the format in several ways. Specifically:
1. The spec format now has a `_meta` section with a version (now set to version `2`). This will simplify major changes like this one in the future.
2. The node list in spec dictionaries is no longer keyed by name. Instead, it is a list of records with no required key. The name, hash, etc. are fields in the dictionary records like any other.
3. Dependencies can be keyed by any hash (`hash`, `full_hash`, `build_hash`).
4. `build_spec` provenance from #20262 is included in the spec format. This means that, for spliced specs, we preserve the *full* provenance of how to build, and we can reproduce a spliced spec from the original builds that produced it.
**NOTE**: Because we have switched the spec format, this PR changes Spack's hashing algorithm. This means that after this commit, Spack will think a lot of things need rebuilds.
There are two major benefits this PR provides:
* The switch to JSON format speeds up Spack significantly, as Python's builtin JSON implementation is orders of magnitude faster than YAML.
* The new Spec format will soon allow us to represent DAGs with potentially multiple versions of the same dependency -- e.g., for build dependencies or for compilers-as-dependencies. This PR lays the necessary groundwork for those features.
The old `spec.yaml` format continues to be supported, but is now considered a legacy format, and Spack will opportunistically convert these to the new `spec.json` format.
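To illustrate the shape of the new format, here is a rough sketch written as a Python literal; the field names and values are abbreviated examples based on the description above, not the exact schema:

```python
# Illustrative only: a version-2 spec record, per the description above.
spec_v2 = {
    "spec": {
        "_meta": {"version": 2},
        "nodes": [
            {
                "name": "zlib",        # name is an ordinary field, not a dict key
                "hash": "abc123...",   # elided
                "dependencies": [
                    # dependencies may be keyed by hash, full_hash, or build_hash
                    {"name": "pkgconf", "build_hash": "def456..."},
                ],
            },
        ],
    },
}
```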
This modification accounts for:
1. Bootstrapping from sources using system, non-standard Python
2. Later using an ABI-compatible standard Python interpreter
* tests: make `spack url [stats|summary]` work on mock packages
Mock packages have historically had mock hashes, but this means they're also invalid
as far as Spack's hash detection is concerned.
- [x] convert all hashes in mock package to md5 or sha256
- [x] ensure that all mock packages have a URL
- [x] ignore some special cases with multiple VCS fetchers
* url stats: add `--show-issues` option
`spack url stats` tells us how many URLs are using what protocol, type of checksum,
etc., but it previously did not tell us which packages and URLs had the issues. This
adds a `--show-issues` option to show packages with insecure (`http`) URLs or `md5` hashes
(which are now deprecated by NIST).
Fixes removal of SPACK_ENV_PATH from PATH in the presence of trailing
slashes in the elements of PATH:
The compiler wrapper has to ensure that it is not called in a nested fashion, as
would happen when gcc's collect2 uses PATH to call the linker ld,
or else the compilation fails.
To prevent nested calls, the compiler wrapper removes the elements
of SPACK_ENV_PATH from PATH.
Sadly, the autotest framework appends a slash to each element
of PATH when adding AUTOTEST_PATH to the PATH for the tests,
and some tests like those of GNU bison run cc inside the test.
Thus, ensure that PATH cleanup works even with trailing slashes.
This fixes the autotest suite of bison, compiling hundreds of
bison-generated test cases in an autotest-generated testsuite.
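A minimal sketch of the idea (in Python for brevity -- the real compiler wrapper is a shell script, and the helper name here is illustrative):

```python
import os

def strip_spack_env_path(path, spack_env_path):
    """Remove SPACK_ENV_PATH entries from PATH, tolerating trailing slashes."""
    wrapper_dirs = {p.rstrip(os.sep) for p in spack_env_path.split(os.pathsep) if p}
    kept = [
        p for p in path.split(os.pathsep)
        if p.rstrip(os.sep) not in wrapper_dirs  # '/wrapper/bin/' matches '/wrapper/bin'
    ]
    return os.pathsep.join(kept)
```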
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
This PR will add a new audit, specifically for spack package homepage urls (and eventually
other kinds I suspect) to see if there is an http address that can be changed to https.
Usage is as follows:
```bash
$ spack audit packages-https <package>
```
And in list view:
```bash
$ spack audit list
generic:
Generic checks relying on global variables
configs:
Sanity checks on compilers.yaml
Sanity checks on packages.yaml
packages:
Sanity checks on specs used in directives
packages-https:
Sanity checks on https checks of package urls, etc.
```
I think it would be unwise to include this with the `packages` audits, because when run for all packages it makes web requests and takes a long time. I also like the idea of more well-scoped checks - likely there will be other addresses for http/https within a package that we eventually check. For now, there are two error cases - one is when an https url is tried but there is some SSL error (or other error that means we cannot update to https):
```bash
$ spack audit packages-https zoltan
PKG-HTTPS-DIRECTIVES: 1 issue found
1. Error with attempting https for "zoltan":
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'www.cs.sandia.gov'. (_ssl.c:1125)>
```
This is either not fixable, or could be fixed with a change to the url or (better) contacting the site owners to ask about some certificate or similar.
The second case is when there is an http url that needs to be https, which is a huge issue now, but hopefully not after this spack PR.
```bash
$ spack audit packages-https xman
Package "xman" uses http but has a valid https endpoint.
```
And then when a package is fixed:
```bash
$ spack audit packages-https zlib
PKG-HTTPS-DIRECTIVES: 0 issues found.
```
And that's mostly it. :)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* Add a __reduce__ method to Spec
fixes #23892
The recursion limit seems to be due to the default
way in which a Spec is serialized, following all
the attributes. It's still not clear to me why this
is related to being in an environment, but in any
case we already have methods to serialize Specs to
disk in JSON and YAML format. Here we use them to
pickle a Spec instance too.
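The pattern looks roughly like this, shown here on a toy class rather than Spack's `Spec` (a sketch of the idea, not Spack's actual code):

```python
class Node:
    """Toy object whose default attribute-walking serialization can recurse deeply."""
    def __init__(self, name, deps=None):
        self.name = name
        self.deps = deps or []

    def to_dict(self):
        return {"name": self.name, "deps": [d.to_dict() for d in self.deps]}

    def __reduce__(self):
        # Pickle via the dict/JSON-able representation instead of walking attributes.
        return node_from_dict, (self.to_dict(),)

def node_from_dict(data):
    return Node(data["name"], [node_from_dict(d) for d in data["deps"]])
```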
* Downgrade to build-hash
Hopefully nothing will change the package in
between serializing the spec and sending it
to the child process.
* Add support for Python 2
* Make sure PackageInstaller does not remove the just-restored
install dir after failure in spack install --overwrite
* Remove cryptic error message and rethrow actual error
The gcc compiler can be configured to use `ld.gold` by default. It will
then call `ld.gold` explicitly when linking. In that case, Spack needs to have
an `ld.gold` wrapper in PATH to inject rpaths, link flags, etc.
Also I wouldn't be surprised to see some package calling `ld.gold`
directly.
As with `ld.gold`, the argument could be made that we want to support any
package that could call `ld.lld`.
* Add a __reduce__ method to SpecBuildInterface
This class was confusing pickle when being serialized,
due to its nature of being an object that disguises
itself as another type.
* Add more MacOS tests, switch them to clingo
* Fix condition syntax
* Remove Python v3.6 and v3.9 with macOS
* Conditionally remove 'context' from kwargs in _urlopen
Previously, 'context' was purged from kwargs in _urlopen to
conform to varying support for 'context' in different versions
of urllib. This fix tries to use 'context', and then removes
it if an exception is thrown and tries again.
* Specify error type in try statement in _urlopen
Specify TypeError when checking if 'context' is in kwargs
for _urlopen. Also, if try fails, check that 'context' is
in the error message before removing from kwargs.
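A sketch of the retry pattern described above (simplified; the real `_urlopen` wrapper handles more than this):

```python
import ssl
from urllib.request import urlopen

def _urlopen(url, **kwargs):
    kwargs.setdefault("context", ssl.create_default_context())
    try:
        return urlopen(url, **kwargs)
    except TypeError as e:
        # Some urllib versions do not accept 'context'; retry without it.
        if "context" in str(e):
            kwargs.pop("context", None)
            return urlopen(url, **kwargs)
        raise
```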
This is a direct follow-up to #13557, which caches additional attributes added in #24095 that are expensive to compute. I had to reopen #25556 in another PR to invalidate the GitLab CI cache, but see #25556 for prior discussion.
### Before
```console
$ time spack env activate .
real 2m13.037s
user 1m25.584s
sys 0m43.654s
$ time spack env view regenerate
==> Updating view at /Users/Adam/.spack/.spack-env/view
real 16m3.541s
user 10m28.892s
sys 4m57.816s
$ time spack env deactivate
real 2m30.974s
user 1m38.090s
sys 0m49.781s
```
### After
```console
$ time spack env activate .
real 0m8.937s
user 0m7.323s
sys 0m1.074s
$ time spack env view regenerate
==> Updating view at /Users/Adam/.spack/.spack-env/view
real 2m22.024s
user 1m44.739s
sys 0m30.717s
$ time spack env deactivate
real 0m10.398s
user 0m8.414s
sys 0m1.630s
```
Fixes #25555
Fixes #25541
* Speedup environment activation, part 2
* Only query distutils a single time
* Fix KeyError bug
* Make vermin happy
* Manual memoize
* Add comment on cross-compiling
* Use platform-specific include directory
* Fix multiple bugs
* Fix python_inc discrepancy
* Fix import tests
* Set pubkey trust to ultimate during `gpg trust`
Tries to solve the same problem as #24760 without suppressing stderr
from gpg commands.
This PR makes every imported key trusted in the gpg database.
Note: I've outlined
[here](https://github.com/spack/spack/pull/24760#issuecomment-883183175)
that gpg's trust model makes sense, since how can we trust a random
public key we download from a binary cache?
* Fix test
Fixes #25603
This commit adds a new context manager to temporarily
deactivate active environments. This context manager
is used when setting up bootstrapping configuration to
make sure that the current environment is not affected
by operations on the bootstrap store.
* Preserve exit code 1 if nothing is found
* Use context manager for the environment
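A minimal sketch of such a context manager, using a toy module-level variable in place of Spack's real environment state:

```python
import contextlib

_active_env = None  # toy stand-in for the globally active environment

@contextlib.contextmanager
def deactivate_environment():
    """Temporarily clear the active environment, restoring it afterwards."""
    global _active_env
    saved, _active_env = _active_env, None
    try:
        yield
    finally:
        _active_env = saved  # restore even if the body raised
```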
This commit adds a regression test for version selection
with preferences in `packages.yaml`. Before PR 25585 we
used negative weights in a minimization to select the
optimal version. This may lead to situations where a
dependency may make the version score of dependents
"better" if it is preferred in packages.yaml.
PackageInstaller and Package.installed disagree over what it means
for a package to be installed: PackageInstaller believes it should be
enough for a database entry to exist, whereas Package.installed
requires a database entry & a prefix directory.
This leads to the following niche issue:
* a develop spec in an environment is successfully installed
* then somehow its install prefix is removed (e.g. through a bug fixed
in #25583)
* you modify the sources and reinstall the environment
1. spack checks pkg.installed and realizes the develop spec is NOT
installed, therefore it doesn't need to have 'overwrite: true'
2. the installer gets the build task and checks the database and
realizes the spec IS installed, hence it doesn't have to install it.
3. the develop spec is not rebuilt.
The solution is to make PackageInstaller and pkg.installed agree over
what it means to be installed, and this PR does that by dropping the
prefix directory check from pkg.installed, so that it only checks the
database.
As a result, spack will create a build task with overwrite: true for
the develop spec, and the installer in fact handles overwrite requests
fine even if the install prefix doesn't exist (it just does a normal
install).
see #25563
When we have a concrete environment and we ask to install a
concrete spec from a file, currently Spack returns a list of
specs that are all the ones that match the argument's DAG hash.
Instead we want to compare build hashes, which also account
for build-only dependencies.
#25303 filtered padding from build output, but it's still there in binary install/relocate output,
so our CI logs are still quite long and frequently hit the limit.
- [x] add context handler from #25303 to buildcache installation as well
This allows you to run `spack graph --installed` from within an environment and get a dot graph of
its concrete specs.
- [x] make `spack graph -i` environment-aware
- [x] add code to the generated dot graph to ensure roots have min rank (i.e., they're all at the
top or left of the DAG)
Bootstrapping clingo on macOS on `develop` gives errors like this:
```
==> Error: RuntimeError: Unable to locate python command in /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/bin
/Users/gamblin2/Workspace/spack/var/spack/repos/builtin/packages/python/package.py:662, in command:
659 return Executable(path)
660 else:
661 msg = 'Unable to locate {0} command in {1}'
>> 662 raise RuntimeError(msg.format(self.name, self.prefix.bin))
```
On macOS, `python` is laid out differently. In particular, `sys.executable` is here:
```console
Python 2.7.16 (default, May 8 2021, 11:48:02)
[GCC Apple LLVM 12.0.5 (clang-1205.0.19.59.6) [+internal-os, ptrauth-isa=deploy on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.executable
'/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python'
```
Based on that, you'd think that
`/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents` would be
where you'd look for a `bin` directory, but you (and Spack) would be wrong:
```console
$ ls /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/
Info.plist MacOS/ PkgInfo Resources/ _CodeSignature/ version.plist
```
You need to look in `sys.exec_prefix`:
```
>>> sys.exec_prefix
'/System/Library/Frameworks/Python.framework/Versions/2.7'
```
Which looks much more like a standard prefix, with understandable `bin`, `lib`, and `include`
directories:
```console
$ ls /System/Library/Frameworks/Python.framework/Versions/2.7
Extras/ Mac/ Resources/ bin/ lib/
Headers@ Python* _CodeSignature/ include/
$ ls -l /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python
lrwxr-xr-x 1 root wheel 7B Jan 1 2020 /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python@ -> python2
```
- [x] change `bootstrap.py` to use the `sys.exec_prefix` as the external prefix, instead of just
getting the parent directory of the executable.
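A sketch of the relevant change (illustrative; the real `bootstrap.py` does more than this):

```python
import os
import sys

# Before: assume the prefix is the grandparent of the interpreter binary --
# wrong on macOS, where sys.executable lives inside Python.app/Contents/MacOS.
# external_prefix = os.path.dirname(os.path.dirname(sys.executable))

# After: ask the interpreter for its prefix directly.
external_prefix = sys.exec_prefix
python_bin = os.path.join(external_prefix, "bin")
```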
This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.
The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.
Because of this, we need to track open lock files so that we only close them when
a process no longer needs them. We do this by tracking each lockfile by its
inode and process id. This has several nice properties:
1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
number of times necessary for the locks we have.
Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.
Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
needed (this avoids inadvertently releasing locks that should not be released).
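A condensed sketch of the reference-counting idea (a simplification, not the real `OpenFileTracker` implementation):

```python
import os

class OpenFileTracker:
    """Reference-count open lock files per (pid, inode) so locks aren't dropped early."""
    def __init__(self):
        self._by_key = {}  # (pid, inode) -> [file object, reference count]

    def get_fh(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._by_key.get(key)
        if entry is None:
            entry = self._by_key[key] = [open(path, "r+"), 0]
        entry[1] += 1
        return entry[0]

    def release_fh(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._by_key[key]
        entry[1] -= 1
        if entry[1] == 0:
            entry[0].close()  # last reference for this inode in this process
            del self._by_key[key]
```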
This commit reworks version facts so that:
1. All the information on versions is collected
before emitting the facts
2. The same kind of atom is emitted for versions
stemming from different origins (package.py
vs. packages.yaml)
In the end all the possible versions for a given
package are totally ordered and they are given
different and increasing weights starting from zero.
This refactor allows us to avoid using negative
weights, which in some configurations may make
parent node score "better" and lead to unexpected
"optimal" results.
Once PR binary graduation is deployed, the shared PR mirror will
contain binaries just built by a merged PR, before the subsequent
develop pipeline has had time to finish. Using the shared PR mirror
as a source of binaries will reduce the number of times we have to
rebuild the same full hash.
* Refactor active environment getters
- Make `spack.environment.active_environment` a trivial getter for the active
environment, replacing `spack.environment.get_env` when the arguments are
not needed
- New method `spack.cmd.require_active_environment(cmd_name)` for
commands that require an environment (rather than abusing
get_env/active_environment)
- Clean up calling code to call spack.environment.active_environment or
spack.cmd.require_active_environment as appropriate
- Remove the `-e` parsing from `active_environment`, because `main.py` is
responsible for processing `-e` and already activates the environment.
- Move `spack.environment.find_environment` to
`spack.cmd.find_environment`, to avoid having spack.environment aware
of argparse.
- Refactor `spack install` command so argument parsing is all handled in the
command, no argparse in spack.environment or spack.installer
- Update documentation
* Python 2: toplevel import errors only with 'as ev'
In two files, `import spack.environment as ev` leads to errors.
These errors are not well understood ("'module' object has no attribute
'environment'"). All other files standardize on the above syntax.
* Bootstrap clingo from binaries
* Move information on clingo binaries to a JSON file
* Add support to bootstrap on Cray
Bootstrapping on Cray requires, at the moment, swapping
the platform when looking for binaries - due
to #22800.
* Add SHA256 verification for bootstrapped software
Use sha256 verification for binaries necessary to bootstrap
the concretizer and gpg for signature verification
* patchelf: use Spec._old_concretize() to bootstrap
As noted in #24450 we may happen to need the
concretizer when bootstrapping clingo. In that case
only the old concretizer is available.
* Add a schema for bootstrapping methods
Two fields have been added to bootstrap.yaml:
"sources", which lists the methods available for
bootstrapping software, and
"trusted", which records whether a source is trusted or not.
A subcommand has been added to "spack bootstrap" to list
the sources currently available.
* Methods used for bootstrapping are configurable from bootstrap:sources
The function that tries to ensure a given Python module
is importable now tries bootstrapping methods in the same
order as they are defined in `bootstrap.yaml`
* Allow trusting/untrusting bootstrapping methods
* Add binary tests for MacOS, Ubuntu
* Add documentation
* Add a note on bash
Spack is internally using a patched version of `argparse` mainly to backport Python 3 functionality
into Python 2. This PR makes it such that for the supported Python 3 versions we use `argparse`
from the standard Python library. This PR has been extracted from #25371 where it was needed
to be able to use recent versions of `pytest`.
* Fixed formatting issues when using a pristine argparse.py
* Fix error message for Python 3.X when missing positional arguments
* Account for the change of API in Python 3.7
* Layout multi-valued args into columns in error messages
* Seamless transition in develop if argparse.pyc is in external
* Be more defensive in case we can't remove the file.
Add link type to spack.yaml format
Add tests to verify link behavior is correct for installed files
for all three view types
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
The commands have been deprecated in #7098, and have
been failing with an error message since then.
Cleaning the code since it is unlikely that somebody
is still using them.
Preferred providers had a non-zero weight because, in an earlier formulation of the logic program, that was needed to prefer external providers over default providers. With the current formulation for externals this is not needed anymore, so we can give a weight of zero to both default choices and providers that are externals. _Using zero ensures that we don't introduce any drift towards having fewer providers, which was happening when minimizing positive weights_.
Modifications:
- [x] Default weight for providers starts at 0 (instead of 10, needed before to prefer externals)
- [x] Rules to compute the `provider_weight` have been refactored. There are multiple possible weights for a given `Virtual`. Only one gets selected by the solver (the one that minimizes the objective function).
- [x] `provider_weight` are now accounting for each different `Virtual`. Before there was a single weight per provider, even if the package was providing multiple virtuals.
* Give preferred providers a weight of zero
Preferred providers had a non-zero weight because in an earlier
formulation of the logic program that was needed to prefer
external providers over default providers.
With the current formulation for externals this is not needed anymore,
so we can give a weight of zero to default choices. Using zero
ensures that we don't introduce any drift towards having
fewer providers, which was happening when minimizing positive weights.
* Simplify how we compute weights for providers
Rewrite rules so that specific events (i.e. being
an external) unlock the possibility to use certain
weights. The weight being considered is then selected
by the minimization process to be the one that gives
the best score.
* Allow providers to have different weights for different virtuals
Before this change we didn't differentiate providers based on
the virtual they provide, which meant that packages providing
more than one virtual had nonetheless a single weight.
With this change there will be a weight per virtual.
This is both a bugfix and a generalization of #25168. In #25168, we attempted to filter padding
*just* from the debug output of `spack.util.executable.Executable` objects. It turns out we got it
wrong -- filtering the command line string instead of the arg list resulted in output like this:
```
==> [2021-08-05-21:34:19.918576] ["'", '/', 'b', 'i', 'n', '/', 't', 'a', 'r', "'", ' ', "'", '-', 'o', 'x', 'f', "'", ' ', "'", '/', 't', 'm', 'p', '/', 'r', 'o', 'o', 't', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '-', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '-', 'w', 'p', 'h', 'p', 't', 'l', 'h', 'w', 'u', 's', 'e', 'i', 'a', '4', 'k', 'p', 'g', 'y', 'd', 'q', 'l', 'l', 'i', '2', '4', 'q', 'b', '5', '5', 'q', 'u', '4', '/', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '.', 't', 'a', 'r', '.', 'b', 'z', '2', "'"]
```
Additionally, plenty of builds output padded paths in other places -- e.g., not just in command
arguments, but in `tty` messages via `llnl.util.filesystem` and elsewhere. `Executable`
isn't really the right place for this.
This PR reverts the changes to `Executable` and moves the filtering into `llnl.util.tty`. There is
now a context manager there that you can use to install a filter for all output.
`spack.installer.build_process()` now uses this context manager to make `tty` do path filtering
when padding is enabled.
- [x] revert filtering in `Executable`
- [x] add ability for `tty` to filter output
- [x] install output filter in `build_process()`
- [x] tests
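A sketch of what an output-filtering context manager can look like (a simplification; the real hook lives in `llnl.util.tty` and wraps its logging machinery rather than `sys.stdout`):

```python
import contextlib
import re
import sys

class _FilteredStream:
    """Wrap a stream and rewrite matching text in everything written to it."""
    def __init__(self, stream, regex, replacement):
        self._stream, self._regex, self._repl = stream, regex, replacement

    def write(self, text):
        return self._stream.write(self._regex.sub(self._repl, text))

    def __getattr__(self, name):  # delegate flush(), isatty(), etc.
        return getattr(self._stream, name)

@contextlib.contextmanager
def path_filter(pattern, replacement):
    regex = re.compile(pattern)
    saved = sys.stdout
    sys.stdout = _FilteredStream(saved, regex, replacement)
    try:
        yield
    finally:
        sys.stdout = saved
```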
`compare_specs()` had a `colorful` keyword argument, but everything else in
spack uses `color` for this.
- [x] rename the argument
- [x] make the default follow spack's `--color=always/never/auto` setting
Add a workflow to test bootstrapping clingo on
different platforms so that we can detect changes
that break it.
Compute `site_packages_dir` in `bootstrap.py` as it was
before #24095, until we figure a better way to override
that attribute.
Long, padded install paths can get to be very long in the verbose install
output. This has to be filtered out by the Executable class, as it
generates these debug messages.
- [x] add ability to filter paths from Executable output.
- [x] add a context manager that can enable path filtering
- [x] make `build_process` in `installer.py` use the new filter
This should hopefully allow us to see most of the build output in
Gitlab pipeline builds again.
`build_process` has been around a long time but it's become a very large,
unwieldy method. It's hard to work with because it has a lot of local
variables that need to persist across all of the code.
- [x] To address this, convert it to its own `BuildInfoProcess` class.
- [x] Start breaking the method apart by factoring out the main
installation logic into its own function.
When context managers are used to save and restore values, we need to remember
to use try/finally around the yield in case an exception is thrown. Otherwise,
the cleanup will be skipped.
- Change config from the undocumented `use_curl: true/false` to `url_fetch_method: urllib/curl`.
- Documentation of `url_fetch_method` in `defaults/config.yaml`
- Default fetch option explicitly set to `urllib` for users who may not have curl on their system
To upgrade from `use_curl` to `url_fetch_method`, run `spack config update config`
The output order for `spack diff` is nondeterministic for larger diffs -- if you
ran it several times it will not put the fields in the spec in the same order on
successive invocations.
This makes a few fixes to `spack diff`:
- [x] Implement the change discussed in https://github.com/spack/spack/pull/22283#discussion_r598337448
to make `AspFunction` comparable in and of itself and to eliminate the need for `to_tuple()`
- [x] Sort the lists of diff properties so that the output is always in the same order.
- [x] Make the output for different fields the same as what we use in the solver. Previously, we
would use `Type(value)` for non-string values and `value` for strings. Now we just use
the value. So the output looks a little cleaner:
```
== Old ========================== == New ====================
@@ node_target @@ @@ node_target @@
- gdbm Target(x86_64) - gdbm x86_64
+ zlib Target(skylake) + zlib skylake
@@ variant_value @@ @@ variant_value @@
- ncurses symlinks bool(False) - ncurses symlinks False
+ zlib optimize bool(True) + zlib optimize True
@@ version @@ @@ version @@
- gdbm Version(1.18.1) - gdbm 1.18.1
+ zlib Version(1.2.11) + zlib 1.2.11
@@ node_os @@ @@ node_os @@
- gdbm catalina - gdbm catalina
+ zlib catalina + zlib catalina
```
I suppose if we want to use `repr()` in the output we could do that and could be
consistent but we don't do that elsewhere -- the types of things in Specs are
all stringifiable so the string and the name of the attribute (`version`, `node_os`,
etc.) are sufficient to know what they are.
When a spec fails to build on `develop`, instead of storing an empty file as the entry in the broken specs list, this change stores the full spec yaml as well as links to the failing pipeline and job.
A `spack diff` will take two specs, and then use the spack.solver.asp.SpackSolverSetup to generate
lists of facts about each (e.g., nodes, variants, etc.) and then take a set difference between the
two to show the user the differences.
Example output:
$ spack diff python@2.7.8 python@3.8.11
==> Warning: This interface is subject to change.
--- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
+++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
@@ variant_value @@
- python patches a8c52415a8b03c0e5f28b5d52ae498f7a7e602007db2b9554df28cd5685839b8
+ python patches 0d98e93189bc278fbc37a50ed7f183bd8aaf249a8e1670a465f0db6bb4f8cf87
@@ version @@
- openssl Version(1.0.2u)
+ openssl Version(1.1.1k)
- python Version(2.7.8)
+ python Version(3.8.11)
Currently this uses diff-like output, but we will attempt to improve on this in the future.
One use case for `spack diff` is whenever a user needs to disambiguate two installs and cannot
remember how they differ. The command can also output `--json` for
a more analysis-oriented use case where we want to save complete data with all
diffs and the intersection. However, the command is really more intended for command-line
use, and we will likely have an analyzer better suited to saving data.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Catch ConnectionError from CDash reporter
Catch ConnectionError when attempting to upload the results of `spack install`
to CDash. This follows in the spirit of #24299. We do not want `spack install`
to exit with a non-zero status when something goes wrong while attempting to
report results to CDash.
* Catch HTTP Error 400 (Bad Request) in relate_cdash_builds()
`spack style` previously used a Travis CI variable to figure out
what the base branch of a PR was, and this was apparently also set
on `develop`. We switched to `GITHUB_BASE_REF` to support GitHub
Actions, but it looks like this is set to `""` in pushes to develop,
so `spack style` breaks there.
This PR does two things:
- [x] Remove `GITHUB_BASE_REF` knowledge from `spack style` entirely
- [x] Handle `GITHUB_BASE_REF` in style scripts instead, and explicitly
pass the base ref if it is present, but don't otherwise.
This makes `spack style` *not* dependent on the environment and fixes
handling of the base branch in the right place.
This adds a `--root` option so that `spack style` can check style for
a spack instance other than its own.
We also change the inner workings of `spack style` so that `--config FILE`
(and similar options for the various tools) options are used. This ensures
that when `spack style` runs, it always uses the config from the running spack,
and does *not* pick up configuration from the external root.
- [x] add `--root` option to `spack style`
- [x] add `--config` (or similar) option when invoking style tools
- [x] add a test that verifies we can check an external instance
Intel oneAPI installs maintain a lock file in XDG_RUNTIME_DIR,
which by default exists in /tmp (and is shared by all component
installs). This prevented multiple oneAPI components from being
installed in parallel. This commit sets XDG_RUNTIME_DIR to exist
within Spack's installation Stage, which allows multiple components
to be installed at the same time.
This uses our bootstrapping logic to automatically install dependencies for
`spack style`. Users should no longer have to pre-install all of the tools
(`isort`, `mypy`, `black`, `flake8`). The command will do it for them.
- [x] add logic to bootstrap specs with specific version requirements in `spack style`
- [x] remove style tools from CI requirements (to ensure we test bootstrapping)
- [x] rework dependencies for `mypy` and `py-typed-ast`
- `py-typed-ast` needs to be a link dependency
- it needs to be at 1.4.1 or higher to work with python 3.9
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
#24095 introduced a couple of bugs, which are fixed here:
1. The module path is computed incorrectly for bootstrapped clingo
2. We remove too many paths from `sys.path` in case of failures
Third-party Python libraries may be installed in one of several directories:
1. `lib/pythonX.Y/site-packages` for Spack-installed Python
2. `lib64/pythonX.Y/site-packages` for system Python on RHEL/CentOS/Fedora
3. `lib/pythonX/dist-packages` for system Python on Debian/Ubuntu
Previously, Spack packages were hard-coded to use (1). Now, we query the Python installation itself and ask it which to use. Ever since #21446 this is how we've been determining where to install Python libraries anyway.
Note: there are still many packages that are hard-coded to use (1). I can change them in this PR, but I don't have the bandwidth to test all of them.
* Python: handle dist-packages and site-packages
* Query Python to find site-packages directory
* Add try-except statements for when distutils isn't installed
* Catch more errors
* Fix root directory used in import tests
* Rely on site_packages_dir property
* Allow enabling/disabling bootstrapping and customizing the store location
This PR adds configuration handles to allow enabling
and disabling bootstrapping, and to customize the store
location.
* Move bootstrap related configuration into its own YAML file
* Add a bootstrap command to manage configuration
Spack allows users to set `padded_length` to pad out the installation path in
build farms so that any binaries created are more easily relocatable. The issue
with this is that the padding dominates installation output and makes it
difficult to see what is going on. The padding also causes logs to easily
exceed size limits for things like GitLab artifacts.
This PR fixes this by adding a filter in the logger daemon. If you use a
setting like this:
config:
install_tree:
padded_length: 512
Then lines like this in the output:
==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_pla/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga
will be replaced with the much more readable:
==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/[padded-to-512-chars]/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga
You can see that the padding has been replaced with `[padded-to-512-chars]` to
indicate the total number of characters in the padded prefix. Over a long log
file, this should save a lot of space and allow us to see error messages in
GitHub/GitLab log output.
The *actual* build logs still have full paths in them. Also, lines that are
output by Spack and not by a package build are not filtered and will still
display the fully padded path. There aren't that many of these, so the change
should still reduce file size and improve readability quite a bit.
Commit 015e29efe1, which introduced this section to the
documentation, said “two” here instead of the actual count, three.
Commit 9f54cea5c5 then added a fourth, BLAS/LAPACK.
Rather than trying to keep this leading count in sync, this change just replaces
the wording with something more generic/stable.
* fix remaining flake8 errors
* imports: sort imports everywhere in Spack
We enabled import order checking in #23947, but fixing things manually drives
people crazy. This used `spack style --fix --all` from #24071 to automatically
sort everything in Spack so PR submitters won't have to deal with it.
This should go in after #24071, as it assumes we're using `isort`, not
`flake8-import-order` to order things. `isort` seems to be more flexible and
allows `llnl` imports to be in their own group before `spack` ones, so this
seems like a good switch.
`dateutil.parser` was an optional dependency for CVS tests. It was failing on macOS
because the dateutil types were not being installed, and mypy was failing *even when the
CVS tests were skipped*. This seems like it was an oversight on macOS --
`types-dateutil-parser` was not installed there, though it was on Linux unit tests.
It takes 6 lines of YAML and some weird test-skipping logic to get `python-dateutil` and
`types-python-dateutil` installed in all the tests where we need them, but it only takes
4 lines of code to write the date parser we need for CVS, so I just did that instead.
Note that CVS date format can vary from system to system, but it seems like it's always
pretty similar for the parts we care about.
- [x] Replace dateutil.parser with a simpler date regex
- [x] Lose the dependency on `dateutil.parser`
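For illustration, a date parser of roughly this size might look like the following (the exact regex and return type in Spack may differ):

```python
import re
import time

def cvs_date(line):
    """Parse dates like '2021-07-11 13:02:05 +0200' from 'cvs log' output."""
    m = re.search(r"(\d{4})[-/](\d\d)[-/](\d\d)\s+(\d\d):(\d\d):(\d\d)", line)
    if not m:
        return None
    return time.struct_time(tuple(int(x) for x in m.groups()) + (0, 0, -1))
```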
Previous tests of `spack style` didn't really run the tools --
they just ensure that the commands worked enough to get coverage.
This adds several real tests and ensures that we hit the corner
cases in `spack style`. This also tests success as well as failure
cases.
This consolidates code across tools in `spack style` so that each
`run_<tool>` function can be called indirectly through a dictionary
of handlers, and so that checks like finding the executable for the
tool can be shared across commands.
- [x] rework `spack style` to use decorators to register tools
- [x] define tool order in one place in `spack style`
- [x] fix python 2/3 issues to get `isort` checks working
- [x] make isort error regex more robust across versions
- [x] remove unused output option
- [x] change vestigial `TRAVIS_BRANCH` to `GITHUB_BASE_REF`
- [x] update completion
We should not fail the generate stage simply due to the presence of
a broken-spec somewhere in the DAG. Only fail if the known broken
spec needs to be rebuilt.
This PR adds a context manager that permits grouping the common part of a `when=` argument and adding it to the context:
```python
class Gcc(AutotoolsPackage):
with when('+nvptx'):
depends_on('cuda')
conflicts('@:6', msg='NVPTX only supported in gcc 7 and above')
conflicts('languages=ada')
conflicts('languages=brig')
conflicts('languages=go')
```
The above snippet is equivalent to:
```python
class Gcc(AutotoolsPackage):
depends_on('cuda', when='+nvptx')
conflicts('@:6', when='+nvptx', msg='NVPTX only supported in gcc 7 and above')
conflicts('languages=ada', when='+nvptx')
conflicts('languages=brig', when='+nvptx')
conflicts('languages=go', when='+nvptx')
```
which needs a repetition of the `when='+nvptx'` argument. The context manager might help improve readability and permits grouping together directives related to the same semantic aspect (e.g. all the directives needed to model the behavior of `gcc` when `+nvptx` is active).
Modifications:
- [x] Added a `when` context manager to be used with package directives
- [x] Add unit tests and documentation for the new feature
- [x] Modified `cp2k` and `gcc` to show the use of the context manager
ci: only write to broken-specs list on SpackError
Only write to the broken-specs list when `spack install` raises a SpackError,
instead of writing to this list unnecessarily when infrastructure-related problems
prevent a develop job from completing successfully.
If two Specs have the same hash (and prefix) but are not equal, Spack
originally had logic to detect this and raise an error (since both
cannot be installed in the same place). Recently this has eroded and
the check no longer works; moreover, when defining projections (which
may truncate the hash or other distinguishing properties from the
prefix) Spack was also failing to detect collisions (in both of these
cases, Spack would overwrite the old prefix with the new Spec).
This PR maintains a list of all "taken" prefixes: if a hash is not
registered (i.e. recorded as installed in the database) but the prefix
is occupied, that is a collision. This can detect collisions created
by defining projections (specifically when they omit the hash).
The PR does not detect collisions where specs have the same hash
(and prefix) but are not equal.
Prior to any Spack build, Spack modifies PATH etc. to help the build
find the dependencies it needs. It also allows any package to define
custom environment modifications (and furthermore a package can
specify environment modifications to apply when it is used as a
dependency). If an external package defines custom environment
modifications that alter PATH, and the external package is in a merged
or system prefix, then that prefix could "override" the Spack-built
packages.
This commit reorders environment modifications so that PrependPath
actions which expose Spack-built packages override PrependPath actions
for custom environment modifications of external packages.
In more detail, the original order of environment modifications is:
* Modules
* Compiler flag variables
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH for dependencies
* Custom package.py modifications in the following order:
* dependencies
* root
This commit changes the order:
* Modules
* Compiler flag variables
* For each external dependency
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH modifications
* Custom modifications
* For each Spack-built dependency
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH modifications
* Custom modifications
Spack pipelines need to take specific actions internally that depend
on whether the pipeline is being run on a PR to spack or a merge to
the develop branch. Pipelines can also run in other repositories,
which represents other possible use cases than just the two mentioned
above. This PR creates a "SPACK_PIPELINE_TYPE" gitlab variable which
is propagated to rebuild jobs, and is also used internally to determine
which pipeline-specific tasks to run.
One goal of the PR is to fix an issue where rebuild jobs which failed on
develop pipelines did not properly report the broken full hash to the
"broken-specs-url".
* Add Externally Findable section to info command
* Use comma delimited detection attributes in addition to boolean value
* Unit test externally detectable part of spack info
* Force the Python interpreter with an env variable
This commit forces the Python interpreter with an
environment variable, to ensure that the Python set
by the "setup-python" action is the one being used.
Due to the policy adopted by Spack to prefer python3
over python we may end up picking a Python 3.X
interpreter where Python 2.7 was meant to be used.
* Revert "Update conftest.py (#24473)"
This reverts commit 477c8ce820.
* Make python-dateutil a soft dependency for unit tests
Before #23212 people could clone spack and run
```
spack unit-tests
```
while now this is not possible, since python-dateutil is
a required but not vendored dependency. This change makes
it not a hard requirement, i.e. it will be used if found
in the current interpreter.
* Workaround mypy complaint
This commit fixes a subtle bug that may occur when
a package is a "possible_provider" of a virtual but
no "provides_virtual" can be deduced. In that case
the cardinality constraint on "provides_virtual"
may arbitrarily assign a package the role of provider
even if the constraints for it to be one are not fulfilled.
The fix reworks the logic around three concepts:
- "possible_provider": a package may provide a virtual if some constraints are met
- "provides_virtual": a package meet the constraints to provide a virtual
- "provider": a package selected to provide a virtual
Spack packages can now fetch versions from CVS repositories. Note
this fetch mechanism is unsafe unless using :extssh:. Most public
CVS repositories use an insecure protocol implemented as part of CVS.
Here we are adding an install_times.json into the spack install metadata folder.
We record a total, global time, along with the times for each phase. The type
of phase or install start / end is included (e.g., build or fail).
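A sketch of what writing that file could look like (the file name comes from the description above; the field names are illustrative, not the exact schema):

```python
import json
import os
import time

def write_install_times(metadata_dir, phase_times, total, status="build"):
    """Record per-phase and total install times in the install metadata folder."""
    data = {
        "status": status,        # e.g. "build" or "fail"
        "total": total,          # seconds for the whole install
        "phases": phase_times,   # e.g. {"configure": 12.3, "build": 80.1, "install": 4.5}
        "recorded_at": time.strftime("%Y-%m-%d %H:%M:%S"),
    }
    with open(os.path.join(metadata_dir, "install_times.json"), "w") as f:
        json.dump(data, f, indent=2)
```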
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Add a new "spack audit" command. This command can check for issues
with configuration or with packages and is intended to help a
user debug a failed Spack build.
In some cases the reported issues are always errors but are too
costly to check for (e.g. packages that specify missing variants on
dependencies). In other cases the issues may be legitimate but
uncommon usage of Spack and we want to be sure the user intended the
behavior (e.g. duplicate compiler definitions).
Audits are grouped by theme, and for now the two themes are packages
and configuration. For example you can run all available audits
on packages with "spack audit packages". It is intended that in
the future users will be able to define their own audits.
The package audits are good candidates for running in package_sanity
(i.e. they could catch bugs in user-submitted packages before they
are merged) but that is left for a later PR.
This should get us most of the way there to support using monitor during a spack container build, for both Singularity and Docker. Some quick notes:
### Docker
Docker works by way of BUILDKIT and being able to specify --secret. What this means is that you can prefix a line with a mount of type secret as follows:
```bash
# Install the software, remove unnecessary deps
RUN --mount=type=secret,id=su --mount=type=secret,id=st cd /opt/spack-environment && spack env activate . && export SPACKMON_USER=$(cat /run/secrets/su) && export SPACKMON_TOKEN=$(cat /run/secrets/st) && spack install --monitor --fail-fast && spack gc -y
```
Where the id for one or more secrets corresponds to the file mounted at `/run/secrets/<name>`. So, for example, to build this container with su (spackmon user) and st (spackmon token) defined I would export them on my host and do:
```bash
$ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .
```
And when we add `env` to the secret definition that tells the build to look for the secret with id "st" in the environment variable `SPACKMON_TOKEN` for example.
If the user is building locally with a local spack monitor, we also need to set the `--network` to be the host, otherwise you can't connect to it (a la isolation of course.)
### Singularity
Singularity doesn't have as nice an ability to clearly specify secrets, so (hoping this eventually gets implemented) what I'm doing now is providing the user instructions to write the credentials to a file, add it to the container to source, and remove when done.
### Tags
Note that the tags PR https://github.com/spack/spack/pull/23712 will need to be merged before `--monitor-tags` will actually work because I'm checking for the attribute (that doesn't exist yet):
```bash
"tags": getattr(args, "monitor_tags", None)
```
So when that PR is merged to update the argument group, it will work here, and I can either update the PR here to not check if the attribute is there (it will be) or open another one in the case this PR is already merged.
Finally, I added a bunch of documentation for how to use monitor with containerize. I say "mostly working" because I can't do a full test run with this new version until the container base is built with the updated spack (the request to the monitor server for an env install was missing so I had to add it here).
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
When running executables from build dependencies, we want to avoid having
`LD_PRELOAD` and `DYLD_INSERT_LIBRARIES` mix any of their shared libs built
by spack with system libraries.
This will first support uploads for spack monitor, and eventually could be
used for other kinds of spack uploads.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* extending example for buildcaches
I was attempting to create a local build cache from a directory, and I found the
docs for both buildcaches and mirrors, but the docs did not make clear that the
url variable could be a local filesystem path. I am extending the docs for
buildcaches with an example of creating and interacting with one on the filesystem,
because I suspect other users will run into this need and possibly not find what
they are looking for.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* adding as follows to spack mirror list
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
It is currently kind of confusing to the reader to distinguish spack buildcache install
and spack install, and it is not clear how to use a build cache once a mirror is added.
Hopefully this little bit of description can help (and I hope I got it right!)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Use the 'version_yearlike' attribute instead of 'version' to
check if the SPACK_COMPILER_EXTRA_RPATHS should be set to include
the built-in 'libfabrics'.
When using the bare 'version', the comparison is wrong when
building with 'intel-parallel-studio', which has the version
format '<edition>.YYYY.Nupdate', due to the leading '<edition>'.
Extracting specs for the result of a solve has been factored
into a method of the asp.Result class. The method accounts for
virtual specs being passed as initial requests.
Minimizing compiler mismatches in the DAG and preferring newer
versions of packages are now higher priority than trying to use as
many default values as possible in multi-valued variants.
Since the module roots were removed from the config file,
`--print-shell-vars` cannot find the module roots anymore. Fix it by
using the new `root_path` function. Moreover, the roots for lmod and
modules seem to have been flipped by accident.
The VALID_VERSION regex didn't check that the version string was
completely valid, only that a prefix of it was. This change ensures
that the entire string represents a valid version.
This makes a few related changes.
1. Make the SEGMENT_REGEX identify *which* arm it matches by what groups
are populated, including whether it's a string or int component or a
separator all at once.
2. Use the updated regex to parse the input once with a findall rather
than twice, once with findall and once with split, since the version
components and separators can be distinguished by their group status.
3. Rather than "convert to int, on exception stay string," if the int
group is set then convert to int, if not then construct an instance
of the VersionStrComponent class
4. VersionStrComponent now implements all of the special string
comparison logic as part of its __lt__ and __eq__ methods to deal
with infinity versions and also overloads comparison with integers.
5. Version now uses direct tuple comparison since it has no per-element
special logic outside the VersionStrComponent class (see the sketch below).
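A condensed sketch of group-based matching plus a string component that overloads integer comparison (the regex and ordering rule below are illustrative simplifications, not Spack's exact definitions):

```python
import functools
import re

# One alternation per "arm": which named group matched tells us what the piece is.
SEGMENT = re.compile(r"(?:(?P<num>\d+)|(?P<str>[a-zA-Z]+)|(?P<sep>[_.-]))")

@functools.total_ordering
class StrComponent:
    """String version component; also comparable against ints."""
    def __init__(self, s):
        self.s = s
    def __eq__(self, other):
        return isinstance(other, StrComponent) and self.s == other.s
    def __lt__(self, other):
        if isinstance(other, int):
            return True  # illustrative rule: string components sort below numbers
        return self.s < other.s

def parse(version_string):
    components = []
    for m in SEGMENT.finditer(version_string):
        if m.group("num") is not None:
            components.append(int(m.group("num")))
        elif m.group("str") is not None:
            components.append(StrComponent(m.group("str")))
        # separators are recognized but carry no weight in comparisons here
    return tuple(components)  # compared with plain tuple comparison
```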
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Passing absolute paths from pipeline generate job to downstream rebuild jobs
causes problems when the CI_PROJECT_DIR is not the same for the generate and
rebuild jobs. This has happened, for example, when gitlab checks out the
project into a runner-specific directory and different runners are chosen
for the generate and rebuild jobs.
* ensure that the stage root exists for `spack stage -p <PATH>`
* add test to verify `spack stage -p <PATH>` works!
* move out shared tmp staging path setup to a fixture to fix the test
* Simplified the spack.util.gpg implementation
All the classes defined in this Python module,
which were previously used to construct singleton
instances, have been removed in favor of four
global variables. These variables are initialized
lazily, like before.
The API of the module is unchanged for the
most part. A few tests have been modified to use
the new global names.
For me the buildcache force-overwrite option does not work: it tries to
delete a file, but fails with a key error, apparently because the
leading / has to be removed.
* util.tty.log: read up to 100 lines if ready
Rework to read up to 100 lines from the captured stdin as long as data
is ready to be read immediately. Adds a helper function to poll with
`select` for ready data. This showed a roughly 5-10x perf improvement
for high-rate writes through the logger with relatively short lines.
(A rough sketch of the overall loop appears after these bullets.)
* util.tty.log: Defer flushes to end of ready reads
Rather than flush per line, flush per set of reads. Since this is a
non-blocking loop, the total perceived wait is short.
* util.tty.log: only scan each line once, usually
Rather than always find all control characters then substitute them all,
use `subn` to count the number of control characters replaced. Only if
control characters exist find out what they are. This could be made
truly single pass with sub with a function, but it's a more intrusive
change and this got 99%ish of the performance improvement (roughly
another 2x in some cases).
* util.tty.log: remove check for `readable`
Python < 3 does not support a readable check on streams, should not be
necessary here since we control the only use and it's explicitly a
stream to be read.
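The gist of the first three bullets, as a rough sketch (function names and the control-character pattern are illustrative assumptions, not the actual util.tty.log code):
```
import re
import select

# Assumed pattern for control characters; the real one may differ.
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b-\x1f\x7f]")

def _ready(stream, timeout=0):
    """True if the stream has data that can be read right now without blocking."""
    readable, _, _ = select.select([stream], [], [], timeout)
    return bool(readable)

def forward_ready_output(in_stream, out_stream, max_lines=100):
    """Forward up to max_lines lines while data is immediately available,
    stripping control characters and flushing once per batch."""
    count = 0
    while count < max_lines and _ready(in_stream):
        line = in_stream.readline()
        if not line:
            break
        # subn substitutes and counts in one pass; most lines contain no
        # control characters, so they are scanned only once.
        cleaned, n_ctrl = CONTROL_CHARS.subn("", line)
        out_stream.write(cleaned)
        count += 1
    out_stream.flush()  # one flush per batch of ready reads, not per line
```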
This PR allows users to `--export`, `--export-secret`, or both to export GPG keys
from Spack. The docs are updated to include a warning that this usually does not
need to be done.
This addresses an issue brought up in slack, and also represented in #14721.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
### Overview
The goal of this PR is to make gitlab pipeline builds (especially build failures) more reproducible outside of the pipeline environment. The two key changes here which aim to improve reproducibility are:
1. Produce a `spack.lock` during pipeline generation which is passed to child jobs via artifacts. This concretized environment is used both by generated child jobs as well as uploaded as an artifact to be used when reproducing the build locally.
2. In the `spack ci rebuild` command, if a spec needs to be rebuilt from source, do this by generating and running an `install.sh` shell script which is then also uploaded as a job artifact to be run during local reproduction.
To make it easier to take advantage of improved build reproducibility, this PR also adds a new subcommand, `spack ci reproduce-build`, which, given a url to job artifacts:
- fetches and unzips the job artifacts to a local directory
- looks for the generated pipeline yaml and parses it to find details about the job to reproduce
- attempts to provide a copy of the same version of spack used in the ci build
- if the ci build used a docker image, the command prints a `docker run` command you can run to get an interactive shell for reproducing the build
#### Some highlights
One consequence of this change will be much smaller pipeline yaml files. By encoding the concrete environment in a `spack.lock` and passing to child jobs via artifacts, we will no longer need to encode the concrete root of each spec and write it into the job variables, greatly reducing the size of the generated pipeline yaml.
Additionally `spack ci rebuild` output (stdout/stderr) is no longer internally redirected to a log file, so job output will appear directly in the gitlab job trace. With debug logging turned on, this often results in log files getting truncated because they exceed the maximum amount of log output gitlab allows. If this is a problem, you still have the option to `tee` command output to a file within the artifacts directory, as now each generated job exposes a `user_data` directory as an artifact, which you can fill with whatever you want in your custom job scripts.
There are some changes to be aware of in how pipelines should be set up after this PR:
#### Pipeline generation
Because the pipeline generation job now writes a `spack.lock` artifact to be consumed by generated downstream jobs, `spack ci generate` takes a new option `--artifacts-root`, inside which it creates a `concrete_env` directory to place the lockfile. This artifacts root directory is also where the `user_data` directory will live, in case you want to generate any custom artifacts. If you do not provide `--artifacts-root`, the default is for it to create a `jobs_scratch_dir` within your `CI_PROJECT_DIR` (a gitlab predefined environment variable) or whatever is your current working directory if that variable isn't set. Here's the diff of the PR testing `.gitlab-ci.yml` taking advantage of the new option:
```
$ git diff develop..pipelines-reproducible-builds share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
diff --git a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
index 579d7b56f3..0247803a30 100644
--- a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
+++ b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
@@ -28,10 +28,11 @@ default:
- cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
- spack env activate --without-view .
- spack ci generate --check-index-only
+ --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
artifacts:
paths:
- - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+ - "${CI_PROJECT_DIR}/jobs_scratch_dir"
tags: ["spack", "public", "medium", "x86_64"]
interruptible: true
```
Notice how we replaced the specific pointer to the generated pipeline file with its containing folder, the same folder we passed as `--artifacts-root`. This way anything in that directory (the generated pipeline yaml, as well as the concrete environment directory containing the `spack.lock`) will be uploaded as an artifact and available to the downstream jobs.
#### Rebuild jobs
Rebuild jobs now must activate the concrete environment created by `spack ci generate` and provided via artifacts. When the pipeline is generated, a directory called `concrete_environment` is created within the artifacts root directory, and this is where the `spack.lock` file is written to be passed to the generated rebuild jobs. The artifacts root directory can be specified using the `--artifacts-root` option to `spack ci generate`, otherwise, it is assumed to be `$CI_PROJECT_DIR`. The directory containing the concrete environment files (`spack.yaml` and `spack.lock`) is then passed to generated child jobs via the `SPACK_CONCRETE_ENV_DIR` variable in the generated pipeline yaml file.
When you don't provide custom `script` sections in your `mappings` within the `gitlab-ci` section of your `spack.yaml`, the default behavior of rebuild jobs is now to change into `SPACK_CONCRETE_ENV_DIR` and activate that environment. If you do provide custom rebuild scripts in your `spack.yaml`, be aware those scripts should do the same thing: assume `SPACK_CONCRETE_ENV_DIR` contains the concretized environment to activate. No other changes to existing custom rebuild scripts should be required as a result of this PR.
As mentioned above, one key change made in this PR is the generation of the `install.sh` script by the rebuild jobs, as that same script is both run by the CI rebuild job as well as exported as an artifact to aid in subsequent attempts to reproduce the build outside of CI. The generated `install.sh` script contains only a single `spack install` command with arguments computed by `spack ci rebuild`. If the install fails, the job trace in gitlab will contain instructions on how to reproduce the build locally:
```
To reproduce this build locally, run:
spack ci reproduce-build https://gitlab.next.spack.io/api/v4/projects/7/jobs/240607/artifacts [--working-dir <dir>]
If this project does not have public pipelines, you will need to first:
export GITLAB_PRIVATE_TOKEN=<generated_token>
... then follow the printed instructions.
```
When run locally, the `spack ci reproduce-build` command shown above will download and process the job artifacts from gitlab, then print out instructions you can copy-paste to run a local reproducer of the CI job.
This PR includes a few other changes to the way pipelines work, see the documentation on pipelines for more details.
This PR relies on
~- [ ] #23194 to be able to refer to uninstalled specs by DAG hash~
EDIT: that is going to take longer to come to fruition, so for now, we will continue to install specs represented by a concrete `spec.yaml` file on disk.
- [x] #22657 to support installing a single spec already present in the active, concrete environment
- [x] add `in_buildcache` field to DB records to indicate what parts of an index,
which includes roots and dependencies, are in the buildcache.
- [x] add `mark()` method to DB for setting values on single nodes of the DAG.
I would like to be able to export (and save and then load programmatically)
spack blame metadata, so this commit adds a `spack blame --json` argument,
along with developer docs for it.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
This work will come in two phases. The first, here, allows saving a local result
with spack monitor; the second will add a spack monitor command so the user can
do `spack monitor upload`.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Currently if one package does `depends_on('pkg default_library=shared')`
and another does `depends_on('pkg default_library=both')`, you'd get a
concretization error.
With this PR one package can do `depends_on('pkg default_library=shared')`
and another `depends_on('pkg default_library=static')`, and it will concretize to
`pkg default_library=shared,static`.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Bash has a builtin `fc` that will override the compiler if you use "fc",
so it's better to use the full spack-supplied compiler path.
Additionally, the filter regex in the docs was wrong: it replaced the
entire assignment operation with the RHS.
* Modification to R environment
This PR modifies how the R environment is presented, and fixes
installing the standalone Rmath library.
- The Rmath build and install methods are combined into one
- Set parallel=False when installing Rmath
- Remove the run environment that set up variables for libraries and
headers that are not really needed and pollute the environment.
* Add setup_run_environment back
- Add back the setup_run_environment with LD_LIBRARY_PATH and
PKG_CONFIG_PATH.
- Adjust documentation to reflect the current code.
Spack uses curl to fetch URL resources. For locally-stored resources
it uses curl's file protocol; when using this protocol, curl expects
that the URL encoding conforms to RFC 3986 (which reserves characters
like '?' and '=' for special use).
We were not performing this encoding, and found a resource where
curl was interpreting this in an unfavorable way (succeeding, but
producing an empty file). This commit properly encodes URLs when
using curl's file protocol.
This error did not likely come up before because in most contexts
Spack was either fetching via http or it was using URLs without
offending characters (for example, the sha-based URLs in mirrors
never contain these characters).
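As a rough sketch of the encoding step (using urllib directly; the actual helper in Spack may differ):
```
try:
    from urllib.parse import quote   # Python 3
except ImportError:
    from urllib import quote         # Python 2

def file_url(path):
    # Keep '/' unescaped so the path structure survives, but percent-encode
    # reserved characters such as '?' and '=' so curl's file protocol does
    # not give them special meaning.
    return "file://" + quote(path, safe="/")

print(file_url("/tmp/mirror/archive?rev=2.tar.gz"))
# file:///tmp/mirror/archive%3Frev%3D2.tar.gz
```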
Spack doesn't require users to manually index their repos; it reindexes automatically when things change. To determine when to do this, it has to `stat()` all package files in each repository to make sure that indexes are up to date with packages. We currently index virtual providers, patches by sha256, and tags on packages.
When this was originally implemented, we ran the checker all the time, at startup, but that was slow (see #7587). But we didn't go far enough -- it still consults the checker and does all the stat operations just to see if a package exists (`Repo.exists()`). That might've been a wash in 2018, but as the number of packages has grown, it's gotten slower -- checking 5k packages is expensive and users see this for small operations. It's a win now to make `Repo.exists()` check files directly.
**Fix:**
This PR does a number of things to speed up `spack load`, `spack info`, and other commands:
- [x] Make `Repo.exists()` check files directly again with `os.path.exists()` (this is the big one; see the sketch after this list)
- [x] Refactor `Spec.satisfies()` so that checking for virtual packages only happens if needed
(avoids some calls to exists())
- [x] Avoid calling `Repo.exists(spec)` in `Repo.get()`. `Repo.get()` will ultimately try to load
a `package.py` file anyway; we can let the failure to load it indicate that the package doesn't
exist, and avoid another call to exists().
- [x] Fix up some comments in spec parsing
- [x] Call `UnknownPackageError` more consistently in `repo.py`
- [x] `analyze` isn't commonly used; move it to long help
(`spack -H` vs `spack -h`). Give it its own section.
- [x] make it clear from `spack -h` that `spack module` can generate
module files
- [x] shorten help for `spack style`
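For the first checklist item, the idea is roughly the following (illustrative sketch; the real `Repo.exists()` in `repo.py` knows the actual repository layout):
```
import os

def package_exists(repo_root, pkg_name):
    # A single stat of the package.py file answers the question directly,
    # without building or consulting the full repository index.
    pkg_file = os.path.join(repo_root, "packages", pkg_name, "package.py")
    return os.path.exists(pkg_file)
```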
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).
This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.
As part of this change, the module roots configuration moved from the `config` section to inside each module configuration.
Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
TODO:
- [x] code changes to support multiple module sets
- [x] code changes to support modules relative to a view
- [x] Tests for multiple module configurations
- [x] Tests for modules relative to a view
- [x] Backwards compatibility for module roots from config section
- [x] Backwards compatibility for default module set without the name specified
- [x] Tests for backwards compatibility
The implementation of __str__ has been simplified to traverse the spec directly,
and no longer calls the flat_dependencies method. Dead code has been
removed.
For configure (e.g. for hdf5) to pass, the -loopopt=0 option needs to be pulled out when the wrapper is invoked in ccld mode.
I thought a previous change had fixed the issue, but I still saw it afterwards. After some digging, my guess is that I had managed
to build hdf5 with ifort instead of ifx; there were a lot of overlapping changes occurring at the time.
There are still outstanding issues building hdf5 with ifx, and Intel is looking into what appears to be a
compiler bug, but that manifests during the build and is likely a separate issue.
I have verified that making the edit in 'ccld' mode removes the -loopopt=0 flag and enables hdf5 to pass
configure. It should be fine to make the edit in 'ld' mode as well, but I have not tested that and didn't
include an -or- condition for it.
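A rough illustration of the edit (structure and names are assumptions, not Spack's actual compiler-wrapper code):
```
def filter_loopopt(args, mode):
    """Drop Intel's -loopopt=<n> flag when invoked in 'ccld' mode, where
    configure otherwise fails. Illustrative sketch only."""
    if mode == "ccld":
        return [arg for arg in args if not arg.startswith("-loopopt=")]
    return args
```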
Currently, environment views blink out of existence during the view regeneration, and are slowly built back up to their new and improved state. This is not good if other processes attempt to access the view -- they can see it in an inconsistent state.
This PR makes environment view updates atomic. This requires a level of indirection (via symlink, similar to nix or guix) from the view root to the underlying implementation on the filesystem.
Now, an environment view at `/path/to/foo` is a symlink to `/path/to/._foo/<hash>`, where `<hash>` is a hash of the contents of the view. We construct the view in its content-keyed hash directory, create a new symlink to this directory, and atomically replace the symlink with one to the new view.
This PR has a couple of other benefits:
* It future-proofs environment views so that we can implement rollback.
* It ensures that we don't leave users in an inconsistent state if building a new view fails for some reason.
For background:
* there is no atomic operation in posix that allows for a non-empty directory to be replaced.
* There is an atomic `renameat2` in the linux kernel starting in version 3.15, but many filesystems don't support the system call, including NFS3 and NFS4, which makes it a poor implementation choice for an HPC tool, so we use the symlink approach that other tools like nix and guix have used successfully.
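A rough sketch of the swap (names are illustrative, not Spack's actual view code):
```
import os

def swap_view_symlink(view_root, new_view_dir):
    # Build the new symlink under a temporary name next to the view root,
    # then rename it into place; renaming over an existing symlink is atomic.
    tmp_link = view_root + ".new"
    if os.path.lexists(tmp_link):
        os.unlink(tmp_link)
    os.symlink(new_view_dir, tmp_link)
    os.rename(tmp_link, view_root)
```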
fixes #22351
The ASP-based solver now accounts for the presence
in the DAG of deprecated versions and tries to minimize
their number at highest priority.
Variants explicitly set in an abstract root spec are considered
as defaults for the package they refer to, and they override
what is in packages.yaml and in package.py. This is relevant
only for multi-valued variants, where a constraint may extend
an already default value.
The code for guessing the cpu arch type based on craype module names got confused,
at least on LLNL RZ prototype systems. In particular, a (L) or (D) at the end of a craype-x86-xxx or other
cpu architecture module was getting the logic confused.
With this patch, any whitespace and remaining characters in the module name are removed.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
There have been a lot of questions and some confusion recently surrounding Spack installation test capabilities so this PR is intended to clean up and refine the documentation for "Checking an installation".
It aims to better distinguish between checks that are performed during an installation (i.e., build-time tests) and those that can be done days and weeks after the software has been installed (i.e., install (or smoke) tests).
When we first merged the ASP-based solver, unit-tests
were run in a Docker container with root permissions
and that was preventing a few tests from succeeding.
For some time now, though, clingo has been tested as a regular
user within GitHub Actions VMs, so we should start
running those checks again.
In an active, concretized environment, support installing one or more
cli specs only if they are already present in the environment. The
`--no-add` option is the default for root specs, but optional for
dependency specs. I.e. if you `spack install <depspec>` in an
environment, the dependency-only spec `depspec` will be added as a
root of the environment before being installed. In addition,
`spack install --no-add <spec>` fails if it does not find an
unambiguous match for `spec`.
Like compilers, targets now try to minimize
mismatches instead of maximizing matches.
Deduction of mismatches is reworked to be
the opposite of a match, since computing
that is faster.
The ASP-based solver can natively manage cases where more than one root spec is given, and is able to concretize all the roots together (ensuring one spec per package at most).
Modifications:
- [x] When concretizing an environment together, the ASP-based solver calls its `solve` method directly rather than constructing a temporary fake root package.
The loading protocol mandates that the module we are going
to import is already in sys.modules before its code is
executed, to prevent unbounded recursion and multiple loading.
Loading a module from file now exits early if the module is already
in sys.modules.
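A rough sketch of that protocol using Python 3's importlib (the actual Spack code also supports older interpreters, so details differ):
```
import sys
from importlib.util import module_from_spec, spec_from_file_location

def load_module_from_file(module_name, path):
    # Early exit: the module is already loaded (or currently being loaded).
    if module_name in sys.modules:
        return sys.modules[module_name]
    spec = spec_from_file_location(module_name, path)
    module = module_from_spec(spec)
    sys.modules[module_name] = module   # register first...
    spec.loader.exec_module(module)     # ...then execute the module's code
    return module
```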
When installing OneAPI packages as root (e.g. in a container), the
installer places cache files in /var/intel/installercache that
interfere with future Spack installs. This change ensures that, when
running an installation as the root user, that cache is removed.