Modifications:
- [x] Change `defaults/config.yaml`
- [x] Add a fix for bootstrapping patchelf from sources if `compilers.yaml` is empty
- [x] Make `SPACK_TEST_SOLVER=clingo` the default for unit-tests
- [x] Fix package failures in the e4s pipeline
Caveats:
1. CentOS 6 still uses the original concretizer as it can't connect to the buildcache due to issues with `ssl` (bootstrapping from sources requires a C++14 capable compiler)
1. I had to update the image tag for GitlabCI in e699f14.
1. libtool v2.4.2 has been deprecated and other packages received some updates
This allows a user (from anywhere a Spec is parsed, including both name and version) to refer to a git commit in lieu of
a package version, and to make comparisons with releases in the history based on commits (or with other commits). We do this by way of:
- Adding a property, `is_commit`, to a version, so we can always check whether a version is a commit and adjust behavior accordingly.
- Adding an attribute to the Version object which can look up commits from a git repo and find the last known version before that commit, along with the distance (in commits) between them.
- Constructing new Version comparators, which are tuples. For normal versions, they are unchanged. For commits with a previous version x.y.z, d commits away, the comparator is (x, y, z, '', d). For commits with no previous version, the comparator is ('', d) where d is the distance from the first commit in the repo.
- Metadata on git commits is cached in the misc_cache, for quick lookup later.
- Git repos are cached as bare repos in `~/.spack/git_repos`
- In both caches, git repo urls are turned into file paths within the cache
If a commit cannot be found in the cached git repo, we fetch from the repo. If a commit is found in the cached metadata, we do not recompare to newly downloaded tags (assuming repo structure does not change). The cached metadata may be thrown out by using the `spack clean -m` option if you know the repo structure has changed in a way that invalidates existing entries. Future work will include automatic updates.
# Finding previous versions
Spack will search the repo for any tags that match the string of a version given by the `version` directive. Spack will also search for any tags that match `v + string` for any version string. Beyond that, Spack will search for tags that match a SEMVER regex (i.e., tags of the form x.y.z) and interpret those tags as valid versions as well. Future work will increase the breadth of tags understood by Spack.
For each tag, Spack queries git to determine whether the tag is an ancestor of the commit in question or not. Spack then sorts the tags that are ancestors of the commit by commit-distance in the repo, and takes the nearest ancestor. The version represented by that tag is listed as the previous version for the commit.
Not all commits will find a previous version, depending on the package workflow. Future work may enable more tangential relationships between commits and versions to be discovered, but many commits in real world git repos require human knowledge to associate with a most recent previous version. Future work will also allow packages to specify commit/tag/version relationships manually for such situations.
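As a rough illustration of that lookup, here is a minimal sketch using standard git plumbing (the git commands are real; the function and its surroundings are hypothetical, not Spack's actual code):
```python
import subprocess

def nearest_ancestor_tag(repo_path, commit, tags):
    """Find the closest tag that is an ancestor of `commit`, and its distance."""
    candidates = []
    for tag in tags:
        # exit code 0 means `tag` is an ancestor of `commit`
        result = subprocess.run(
            ["git", "-C", repo_path, "merge-base", "--is-ancestor", tag, commit])
        if result.returncode == 0:
            # number of commits between the tag and the commit
            distance = int(subprocess.check_output(
                ["git", "-C", repo_path, "rev-list", "--count",
                 "{0}..{1}".format(tag, commit)]))
            candidates.append((distance, tag))
    return min(candidates) if candidates else None
```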
# Version comparisons
The empty string is a valid component of a Spack version tuple, and is in fact the lowest-valued component. It cannot be generated as part of any valid version. These two characteristics make it perfect for delineating previous versions from distances. For any version x.y.z, (x, y, z, '', _) will be less than any "real" version beginning x.y.z. This ensures that no distance from a release will cause the commit to be interpreted as "greater than" a version which is not an ancestor of it.
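A toy illustration with plain string tuples (real Spack version components are richer, but the ordering idea is the same):
```python
release = ("1", "2", "0")            # version 1.2.0
patch   = ("1", "2", "0", "1")       # version 1.2.0.1
commit  = ("1", "2", "0", "", "25")  # 25 commits after the 1.2.0 tag

assert commit > release  # extending a tuple past a common prefix sorts after it
assert commit < patch    # '' is the lowest component, so '' < '1'
assert commit < ("1", "2", "1")      # and below any later release, too
```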
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This PR coincides with tiny changes to spack to support spack monitor using the new spec.
The corresponding spack monitor PR is at https://github.com/spack/spack-monitor/pull/31.
Since there are no changes to the database, we can actually update the current server
fairly easily, so either someone can test locally or we can just update and then
test from that (and update as needed).
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
#22845 revealed a long-standing bug that had never been triggered before, because the
hashing algorithm had been stable for multiple years while the bug was in production. The
bug was that when reading a concretized environment, Spack did not properly read in the
build hashes associated with the specs in the environment. Those hashes were recomputed
(and as long as we didn't change the algorithm, were recomputed identically). Spack's
policy, though, is never to recompute a hash. Once something is installed, we respect its
metadata hash forever -- even if internally Spack changes the hashing method. Put
differently, once something is concretized, it has a concrete hash, and that's it -- forever.
When we changed the hashing algorithm for performance in #22845 we exposed the bug.
This PR fixes the bug at its source by properly reading in the cached build hash attributes
associated with the specs. I've also renamed some variables in the Environment class
methods to make this sort of mistake more difficult to make in the future.
* ensure environment build hashes are never recomputed
* add comment clarifying reattachment of env build hashes
* bump lockfile version and include specfile version in env meta
* Fix unit-test for v1 to v2 conversion
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Refactor platform etc. to avoid circular dependencies
All the base classes in spack.architecture have been
moved to the corresponding specialized subpackages,
e.g. Platform is now defined within spack.platforms.
This resolves a circular dependency where spack.architecture
was both:
- Defining the base classes for spack.platforms, etc.
- Collecting derived classes from spack.platforms, etc.
Now it does only the latter.
* Move a few platform related functions to "spack.platforms"
* Removed spack.architecture.sys_type()
* Fixup for docs
* Rename Python modules according to review
Currently as part of installing a package, we lock a prefix, check if
it exists, and create it if not; the logic for creating the prefix
included a check for the existence of that prefix (and raised an
exception if it did exist), which was redundant.
This also includes removal of tests which were not verifying
anything (they pass with or without the modifications in this PR).
Modifications:
- Export platforms from spack.platforms directly, so that client modules don't have to import submodules
- Use only plain imports in test/architecture.py
- Parametrized test in test/architecture.py and put most of the setup/teardown in fixtures
This is a major rework of Spack's core `spec.yaml` metadata format. It moves from `spec.yaml` to `spec.json` for speed, and it changes the format in several ways. Specifically:
1. The spec format now has a `_meta` section with a version (now set to version `2`). This will simplify major changes like this one in the future.
2. The node list in spec dictionaries is no longer keyed by name. Instead, it is a list of records with no required key. The name, hash, etc. are fields in the dictionary records like any other.
3. Dependencies can be keyed by any hash (`hash`, `full_hash`, `build_hash`).
4. `build_spec` provenance from #20262 is included in the spec format. This means that, for spliced specs, we preserve the *full* provenance of how to build, and we can reproduce a spliced spec from the original builds that produced it.
**NOTE**: Because we have switched the spec format, this PR changes Spack's hashing algorithm. This means that after this commit, Spack will think a lot of things need rebuilds.
There are two major benefits this PR provides:
* The switch to JSON format speeds up Spack significantly, as Python's builtin JSON implementation is orders of magnitude faster than YAML.
* The new Spec format will soon allow us to represent DAGs with potentially multiple versions of the same dependency -- e.g., for build dependencies or for compilers-as-dependencies. This PR lays the necessary groundwork for those features.
The old `spec.yaml` format continues to be supported, but is now considered a legacy format, and Spack will opportunistically convert these to the new `spec.json` format.
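For illustration only, a heavily abbreviated sketch of what a v2 `spec.json` record could look like under rules 1-3 (field names beyond `_meta`, the node list, and the hash keys are guesses rather than the exact schema):
```json
{
  "spec": {
    "_meta": { "version": 2 },
    "nodes": [
      {
        "name": "zlib",
        "version": "1.2.11",
        "hash": "abcdef...",
        "dependencies": [
          { "name": "pkgconf", "build_hash": "123456..." }
        ]
      }
    ]
  }
}
```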
This modification accounts for:
1. Bootstrapping from sources using a system, non-standard Python
2. Later using an ABI-compatible standard Python interpreter
* tests: make `spack url [stats|summary]` work on mock packages
Mock packages have historically had mock hashes, but this means they're also invalid
as far as Spack's hash detection is concerned.
- [x] convert all hashes in mock package to md5 or sha256
- [x] ensure that all mock packages have a URL
- [x] ignore some special cases with multiple VCS fetchers
* url stats: add `--show-issues` option
`spack url stats` tells us how many URLs are using what protocol, type of checksum,
etc., but it previously did not tell us which packages and URLs had the issues. This
adds a `--show-issues` option to show URLs with insecure (`http`) URLs or `md5` hashes
(which are now deprecated by NIST).
Fixes removal of SPACK_ENV_PATH from PATH in the presence of trailing
slashes in the elements of PATH:
The compiler wrapper has to ensure that it is not called nested like
it would happen when gcc's collect2 uses PATH to call the linker ld,
or else the compilation fails.
To prevent nested calls, the compiler wrapper removes the elements
of SPACK_ENV_PATH from PATH.
Sadly, the autotest framework appends a slash to each element
of PATH when adding AUTOTEST_PATH to the PATH for the tests,
and some tests like those of GNU bison run cc inside the test.
Thus, ensure that PATH cleanup works even with trailing slashes.
This fixes the autotest suite of bison, compiling hundreds of
bison-generated test cases in an autotest-generated testsuite.
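A minimal sketch of the slash-insensitive cleanup (the real wrapper is shell code; this just illustrates the idea):
```python
import os

def strip_spack_env_dirs(path, spack_env_path):
    """Drop SPACK_ENV_PATH entries from PATH, tolerating trailing slashes."""
    env_dirs = {d.rstrip("/") for d in spack_env_path.split(os.pathsep)}
    kept = [p for p in path.split(os.pathsep)
            if p.rstrip("/") not in env_dirs]
    return os.pathsep.join(kept)
```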
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
This PR adds a new audit, specifically for spack package homepage urls (and eventually
other kinds, I suspect), to see if there is an http address that can be changed to https.
Usage is as follows:
```bash
$ spack audit packages-https <package>
```
And in list view:
```bash
$ spack audit list
generic:
Generic checks relying on global variables
configs:
Sanity checks on compilers.yaml
Sanity checks on packages.yaml
packages:
Sanity checks on specs used in directives
packages-https:
Sanity checks on https checks of package urls, etc.
```
I think it would be unwise to include this with the `packages` audit because, when run for all packages, the HTTP requests take a long time. I also like the idea of more well-scoped checks - likely there will be other addresses for http/https within a package that we eventually check. For now, there are two error cases - one is when an https url is tried but there is some SSL error (or other error that means we cannot update to https):
```bash
$ spack audit packages-https zoltan
PKG-HTTPS-DIRECTIVES: 1 issue found
1. Error with attempting https for "zoltan":
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'www.cs.sandia.gov'. (_ssl.c:1125)>
```
This is either not fixable, or could be fixed with a change to the url or (better) contacting the site owners to ask about some certificate or similar.
The second case is when there is an http url that needs to be https, which is a huge issue now, but hopefully not after this spack PR.
```bash
$ spack audit packages-https xman
Package "xman" uses http but has a valid https endpoint.
```
And then when a package is fixed:
```bash
$ spack audit packages-https zlib
PKG-HTTPS-DIRECTIVES: 0 issues found.
```
And that's mostly it. :)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* Add a __reduce__ method to Spec
fixes #23892
The recursion limit seems to be due to the default
way in which a Spec is serialized, following all
the attributes. It's still not clear to me why this
is related to being in an environment, but in any
case we already have methods to serialize Specs to
disk in JSON and YAML format. Here we use them to
pickle a Spec instance too.
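A minimal sketch of the approach (the JSON helpers here are stand-ins, not Spec's real methods):
```python
import json

class Spec(object):
    def __init__(self, data):
        self.data = data  # stand-in for the real spec DAG

    def to_json_string(self):
        return json.dumps(self.data)

    @staticmethod
    def from_json_string(string):
        return Spec(json.loads(string))

    def __reduce__(self):
        # Pickle via a JSON round-trip instead of the default attribute walk,
        # which recurses through the whole DAG and can hit the recursion limit.
        return (Spec.from_json_string, (self.to_json_string(),))
```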
* Downgrade to build-hash
Hopefully nothing will change the package in
between serializing the spec and sending it
to the child process.
* Add support for Python 2
* Make sure PackageInstaller does not remove the just-restored
install dir after failure in spack install --overwrite
* Remove cryptic error message and rethrow actual error
The gcc compiler can be configured to use `ld.gold` by default. It will
then call `ld.gold` explicitly when linking. When it does, spack needs to
have an `ld.gold` wrapper in PATH to inject rpaths, link flags, etc.
Also, I wouldn't be surprised to see some package calling `ld.gold`
directly.
As with `ld.gold`, the argument could be made that we want to support any
package that could call `ld.lld`.
* Add a __reduce__ method to SpecBuildInterface
This class was confusing pickle when being serialized,
due to its scary nature of being an object that disguise
as another type.
* Add more MacOS tests, switch them to clingo
* Fix condition syntax
* Remove Python v3.6 and v3.9 with macOS
* Conditionally remove 'context' from kwargs in _urlopen
Previously, 'context' was purged from kwargs in _urlopen to
conform to varying support for 'context' in different versions
of urllib. This fix tries to use 'context' first, and if an
exception is thrown, removes it and tries again.
* Specify error type in try statement in _urlopen
Specify TypeError when checking if 'context' is in kwargs
for _urlopen. Also, if try fails, check that 'context' is
in the error message before removing from kwargs.
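A minimal sketch of the resulting control flow (using the Python 3 urllib for brevity; the shape is illustrative):
```python
import urllib.request

def _urlopen(req, **kwargs):
    """Try 'context' first; retry without it on urllibs that reject it."""
    try:
        return urllib.request.urlopen(req, **kwargs)
    except TypeError as e:
        # only strip 'context' if it is actually what the error complains about
        if "context" in kwargs and "context" in str(e):
            del kwargs["context"]
            return urllib.request.urlopen(req, **kwargs)
        raise
```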
This is a direct follow-up to #13557, which caches additional attributes added in #24095 that are expensive to compute. I had to reopen #25556 in another PR to invalidate the GitLab CI cache, but see #25556 for prior discussion.
### Before
```console
$ time spack env activate .
real 2m13.037s
user 1m25.584s
sys 0m43.654s
$ time spack env view regenerate
==> Updating view at /Users/Adam/.spack/.spack-env/view
real 16m3.541s
user 10m28.892s
sys 4m57.816s
$ time spack env deactivate
real 2m30.974s
user 1m38.090s
sys 0m49.781s
```
### After
```console
$ time spack env activate .
real 0m8.937s
user 0m7.323s
sys 0m1.074s
$ time spack env view regenerate
==> Updating view at /Users/Adam/.spack/.spack-env/view
real 2m22.024s
user 1m44.739s
sys 0m30.717s
$ time spack env deactivate
real 0m10.398s
user 0m8.414s
sys 0m1.630s
```
Fixes #25555
Fixes #25541
* Speedup environment activation, part 2
* Only query distutils a single time
* Fix KeyError bug
* Make vermin happy
* Manual memoize
* Add comment on cross-compiling
* Use platform-specific include directory
* Fix multiple bugs
* Fix python_inc discrepancy
* Fix import tests
* Set pubkey trust to ultimate during `gpg trust`
Tries to solve the same problem as #24760 without suppressing stderr
from gpg commands.
This PR makes every imported key trusted in the gpg database.
Note: I've outlined
[here](https://github.com/spack/spack/pull/24760#issuecomment-883183175)
that gpg's trust model makes sense, since how can we trust a random
public key we download from a binary cache?
* Fix test
Fixes #25603
This commit adds a new context manager to temporarily
deactivate active environments. This context manager
is used when setting up bootstrapping configuration to
make sure that the current environment is not affected
by operations on the bootstrap store.
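A minimal sketch of such a context manager (the getter and activate/deactivate helpers are illustrative names):
```python
import contextlib

@contextlib.contextmanager
def no_active_environment():
    """Temporarily deactivate the active environment, restoring it on exit."""
    env = active_environment()  # hypothetical getter for the current env
    try:
        if env:
            deactivate()
        yield
    finally:
        if env:
            activate(env)
```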
* Preserve exit code 1 if nothing is found
* Use context manager for the environment
This commit adds a regression test for version selection
with preferences in `packages.yaml`. Before PR 25585 we
used negative weights in a minimization to select the
optimal version. This could lead to situations where a
dependency makes the version score of its dependents
"better" if it is preferred in packages.yaml.
PackageInstaller and Package.installed disagree over what it means
for a package to be installed: PackageInstaller believes it should be
enough for a database entry to exist, whereas Package.installed
requires a database entry & a prefix directory.
This leads to the following niche issue:
* a develop spec in an environment is successfully installed
* then somehow its install prefix is removed (e.g. through a bug fixed
in #25583)
* you modify the sources and reinstall the environment
1. spack checks pkg.installed and realizes the develop spec is NOT
installed, therefore it doesn't need to have 'overwrite: true'
2. the installer gets the build task and checks the database and
realizes the spec IS installed, hence it doesn't have to install it.
3. the develop spec is not rebuilt.
The solution is to make PackageInstaller and pkg.installed agree over
what it means to be installed, and this PR does that by dropping the
prefix directory check from pkg.installed, so that it only checks the
database.
As a result, spack will create a build task with overwrite: true for
the develop spec, and the installer in fact handles overwrite requests
fine even if the install prefix doesn't exist (it just does a normal
install).
see #25563
When we have a concrete environment and we ask to install a
concrete spec from a file, Spack currently returns a list of
specs that are all the ones that match the DAG hash of the argument.
Instead we want to compare build hashes, which also account
for build-only dependencies.
#25303 filtered padding from build output, but it's still there in binary install/relocate output,
so our CI logs are still quite long and frequently hit the limit.
- [x] add context handler from #25303 to buildcache installation as well
This allows you to run `spack graph --installed` from within an environment and get a dot graph of
its concrete specs.
- [x] make `spack graph -i` environment-aware
- [x] add code to the generated dot graph to ensure roots have min rank (i.e., they're all at the
top or left of the DAG)
Bootstrapping clingo on macOS on `develop` gives errors like this:
```
==> Error: RuntimeError: Unable to locate python command in /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/bin
/Users/gamblin2/Workspace/spack/var/spack/repos/builtin/packages/python/package.py:662, in command:
659 return Executable(path)
660 else:
661 msg = 'Unable to locate {0} command in {1}'
>> 662 raise RuntimeError(msg.format(self.name, self.prefix.bin))
```
On macOS, `python` is laid out differently. In particular, `sys.executable` is here:
```console
Python 2.7.16 (default, May 8 2021, 11:48:02)
[GCC Apple LLVM 12.0.5 (clang-1205.0.19.59.6) [+internal-os, ptrauth-isa=deploy]] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.executable
'/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python'
```
Based on that, you'd think that
`/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents` would be
where you'd look for a `bin` directory, but you (and Spack) would be wrong:
```console
$ ls /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/
Info.plist MacOS/ PkgInfo Resources/ _CodeSignature/ version.plist
```
You need to look in `sys.exec_prefix`:
```
>>> sys.exec_prefix
'/System/Library/Frameworks/Python.framework/Versions/2.7'
```
Which looks much more like a standard prefix, with understandable `bin`, `lib`, and `include`
directories:
```console
$ ls /System/Library/Frameworks/Python.framework/Versions/2.7
Extras/ Mac/ Resources/ bin/ lib/
Headers@ Python* _CodeSignature/ include/
$ ls -l /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python
lrwxr-xr-x 1 root wheel 7B Jan 1 2020 /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python@ -> python2
```
- [x] change `bootstrap.py` to use the `sys.exec_prefix` as the external prefix, instead of just
getting the parent directory of the executable.
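The fix amounts to something like this (a sketch; the real `bootstrap.py` change has more context around it):
```python
import os
import sys

# Wrong on macOS: Python.app/Contents/MacOS has no sibling bin/ directory.
guessed_prefix = os.path.dirname(os.path.dirname(sys.executable))

# Right: sys.exec_prefix points at .../Versions/2.7, a conventional prefix
# with bin/, lib/, and include/ underneath it.
external_prefix = sys.exec_prefix
```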
This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.
The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.
Because of this, we need to track open lock files so that we only close them when
a process no longer needs them. We do this by tracking each lockfile by its
inode and process id. This has several nice properties:
1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
number of times necessary for the locks we have.
Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.
Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
needed (this avoids inadvertently releasing locks that should not be released).
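A condensed sketch of the tracker (the real class also handles lock file creation and permissions; names here are illustrative):
```python
import os

class OpenFileTracker(object):
    """Reference-count open lock files by (pid, inode)."""

    def __init__(self):
        self._descriptors = {}  # (pid, inode) -> [file object, reference count]

    def get_fh(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._descriptors.get(key)
        if entry is None:
            entry = self._descriptors[key] = [open(path, "r+"), 0]
        entry[1] += 1
        return entry[0]

    def release_fh(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._descriptors[key]
        entry[1] -= 1
        if entry[1] == 0:
            # last reference: closing now cannot drop anyone else's locks
            entry[0].close()
            del self._descriptors[key]
```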
This commit rework version facts so that:
1. All the information on versions is collected
before emitting the facts
2. The same kind of atom is emitted for versions
stemming from different origins (package.py
vs. packages.yaml)
In the end all the possible versions for a given
package are totally ordered and they are given
different and increasing weights starting from zero.
This refactor allows us to avoid using negative
weights, which in some configurations may make
a parent node's score "better" and lead to unexpected
"optimal" results.
Once PR binary graduation is deployed, the shared PR mirror will
contain binaries just built by a merged PR, before the subsequent
develop pipeline has had time to finish. Using the shared PR mirror
as a source of binaries will reduce the number of times we have to
rebuild the same full hash.
* Refactor active environment getters
- Make `spack.environment.active_environment` a trivial getter for the active
environment, replacing `spack.environment.get_env` when the arguments are
not needed
- New method `spack.cmd.require_active_environment(cmd_name)` for
commands that require an environment (rather than abusing
get_env/active_environment)
- Clean up calling code to call spack.environment.active_environment or
spack.cmd.require_active_environment as appropriate
- Remove the `-e` parsing from `active_environment`, because `main.py` is
responsible for processing `-e` and already activates the environment.
- Move `spack.environment.find_environment` to
`spack.cmd.find_environment`, to avoid having spack.environment aware
of argparse.
- Refactor `spack install` command so argument parsing is all handled in the
command, no argparse in spack.environment or spack.installer
- Update documentation
* Python 2: toplevel import errors only with 'as ev'
In two files, `import spack.environment as ev` leads to errors.
These errors are not well understood ("'module' object has no attribute
'environment'"). All other files standardize on the above syntax.
* Bootstrap clingo from binaries
* Move information on clingo binaries to a JSON file
* Add support to bootstrap on Cray
Bootstrapping on Cray requires, at the moment, swapping
the platform when looking for binaries - due
to #22800.
* Add SHA256 verification for bootstrapped software
Use sha256 verification for binaries necessary to bootstrap
the concretizer and gpg for signature verification
* patchelf: use Spec._old_concretize() to bootstrap
As noted in #24450 we may happen to need the
concretizer when bootstrapping clingo. In that case
only the old concretizer is available.
* Add a schema for bootstrapping methods
Two fields have been added to bootstrap.yaml:
- "sources", which lists the methods available for bootstrapping software
- "trusted", which records whether a source is trusted or not
A subcommand has been added to "spack bootstrap" to list
the sources currently available.
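An illustrative sketch of the two fields (entry names and layout are guesses, not the exact schema):
```yaml
bootstrap:
  sources:                  # bootstrapping methods, tried in order
  - name: github-actions
    description: buildcache of bootstrap binaries
  trusted:                  # whether each named source may be used
    github-actions: true
```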
* Methods used for bootstrapping are configurable from bootstrap:sources
The function that tries to ensure a given Python module
is importable now tries bootstrapping methods in the same
order as they are defined in `bootstrap.yaml`
* Permit to trust/untrust bootstrapping methods
* Add binary tests for MacOS, Ubuntu
* Add documentation
* Add a note on bash
Spack is internally using a patched version of `argparse` mainly to backport Python 3 functionality
into Python 2. This PR makes it such that for the supported Python 3 versions we use `argparse`
from the standard Python library. This PR has been extracted from #25371 where it was needed
to be able to use recent versions of `pytest`.
* Fixed formatting issues when using a pristine argparse.py
* Fix error message for Python 3.X when missing positional arguments
* Account for the change of API in Python 3.7
* Layout multi-valued args into columns in error messages
* Seamless transition in develop if argparse.pyc is in external
* Be more defensive in case we can't remove the file.
Add link type to spack.yaml format
Add tests to verify link behavior is correct for installed files
for all three view types
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
The commands have been deprecated in #7098, and have
been failing with an error message since then.
Cleaning up the code, since it is unlikely that anybody
is still using them.
Preferred providers had a non-zero weight because in an earlier formulation of the logic program that was needed to prefer external providers over default providers. With the current formulation for externals this is not needed anymore, so we can give a weight of zero to both default choices and providers that are externals. _Using zero ensures that we don't introduce any drift towards having fewer providers, which was happening when minimizing positive weights_.
Modifications:
- [x] Default weight for providers starts at 0 (instead of 10, needed before to prefer externals)
- [x] Rules to compute the `provider_weight` have been refactored. There are multiple possible weights for a given `Virtual`. Only one gets selected by the solver (the one that minimizes the objective function).
- [x] `provider_weight` are now accounting for each different `Virtual`. Before there was a single weight per provider, even if the package was providing multiple virtuals.
* Give preferred providers a weight of zero
Preferred providers had a non-zero weight because in an earlier
formulation of the logic program that was needed to prefer
external providers over default providers.
With the current formulation for externals this is not needed anymore,
so we can give a weight of zero to default choices. Using zero
ensures that we don't introduce any drift towards having
fewer providers, which was happening when minimizing positive weights.
* Simplify how we compute weights for providers
Rewrite rules so that specific events (i.e. being
an external) unlock the possibility to use certain
weights. The weight being considered is then selected
by the minimization process to be the one that gives
the best score.
* Allow providers to have different weights for different virtuals
Before this change we didn't differentiate providers based on
the virtual they provide, which meant that packages providing
more than one virtual nonetheless had a single weight.
With this change there will be a weight per virtual.
This is both a bugfix and a generalization of #25168. In #25168, we attempted to filter padding
*just* from the debug output of `spack.util.executable.Executable` objects. It turns out we got it
wrong -- filtering the command line string instead of the arg list resulted in output like this:
```
==> [2021-08-05-21:34:19.918576] ["'", '/', 'b', 'i', 'n', '/', 't', 'a', 'r', "'", ' ', "'", '-', 'o', 'x', 'f', "'", ' ', "'", '/', 't', 'm', 'p', '/', 'r', 'o', 'o', 't', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '-', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '-', 'w', 'p', 'h', 'p', 't', 'l', 'h', 'w', 'u', 's', 'e', 'i', 'a', '4', 'k', 'p', 'g', 'y', 'd', 'q', 'l', 'l', 'i', '2', '4', 'q', 'b', '5', '5', 'q', 'u', '4', '/', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '.', 't', 'a', 'r', '.', 'b', 'z', '2', "'"]
```
Additionally, plenty of builds output padded paths in other places -- e.g., not just in command
arguments, but in `tty` messages via `llnl.util.filesystem` and elsewhere. `Executable`
isn't really the right place for this.
This PR reverts the changes to `Executable` and moves the filtering into `llnl.util.tty`. There is
now a context manager there that you can use to install a filter for all output.
`spack.installer.build_process()` now uses this context manager to make `tty` do path filtering
when padding is enabled.
- [x] revert filtering in `Executable`
- [x] add ability for `tty` to filter output
- [x] install output filter in `build_process()`
- [x] tests
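A minimal sketch of such an output-filtering context manager (illustrative; the real one lives in `llnl.util.tty`):
```python
import contextlib
import sys

class _FilteredStream(object):
    def __init__(self, stream, replace):
        self._stream = stream
        self._replace = replace

    def write(self, string):
        return self._stream.write(self._replace(string))

    def __getattr__(self, name):
        return getattr(self._stream, name)  # delegate flush(), isatty(), etc.

@contextlib.contextmanager
def output_filter(replace):
    """Route everything written to stdout through `replace`."""
    saved = sys.stdout
    sys.stdout = _FilteredStream(saved, replace)
    try:
        yield
    finally:
        sys.stdout = saved
```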
`compare_specs()` had a `colorful` keyword argument, but everything else in
spack uses `color` for this.
- [x] rename the argument
- [x] make the default follow spack's `--color=always/never/auto` setting
Add a workflow to test bootstrapping clingo on
different platforms so that we can detect changes
that break it.
Compute `site_packages_dir` in `bootstrap.py` as it was
before #24095, until we figure out a better way to override
that attribute.
Padded install paths can get to be very long in the verbose install
output. This has to be filtered out by the Executable class, as it
generates these debug messages.
- [x] add ability to filter paths from Executable output.
- [x] add a context manager that can enable path filtering
- [x] make `build_process` in `installer.py` enable path filtering
This should hopefully allow us to see most of the build output in
Gitlab pipeline builds again.
`build_process` has been around a long time but it's become a very large,
unwieldy method. It's hard to work with because it has a lot of local
variables that need to persist across all of the code.
- [x] To address this, convert it into its own `BuildInfoProcess` class.
- [x] Start breaking the method apart by factoring out the main
installation logic into its own function.
When context managers are used to save and restore values, we need to remember
to use try/finally around the yield in case an exception is thrown. Otherwise,
the cleanup will be skipped.
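For example, a save-and-restore context manager should be shaped like this (a generic sketch):
```python
from contextlib import contextmanager

@contextmanager
def swapped_attribute(obj, name, value):
    """Temporarily set an attribute, restoring it even if the body raises."""
    saved = getattr(obj, name)
    setattr(obj, name, value)
    try:
        yield
    finally:
        # without the try/finally, an exception in the body skips this restore
        setattr(obj, name, saved)
```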
- Change config from the undocumented `use_curl: true/false` to `url_fetch_method: urllib/curl`.
- Documentation of `url_fetch_method` in `defaults/config.yaml`
- Default fetch option explicitly set to `urllib` for users who may not have curl on their system
To upgrade from `use_curl` to `url_fetch_method`, run `spack config update config`
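In `config.yaml`, the new option looks like:
```yaml
config:
  url_fetch_method: urllib   # or: curl
```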
The output order for `spack diff` is nondeterministic for larger diffs -- if you
run it several times, it will not put the fields in the spec in the same order on
successive invocations.
This makes a few fixes to `spack diff`:
- [x] Implement the change discussed in https://github.com/spack/spack/pull/22283#discussion_r598337448
to make `AspFunction` comparable in and of itself and to eliminate the need for `to_tuple()`
- [x] Sort the lists of diff properties so that the output is always in the same order.
- [x] Make the output for different fields the same as what we use in the solver. Previously, we
would use `Type(value)` for non-string values and `value` for strings. Now we just use
the value. So the output looks a little cleaner:
```
== Old ========================== == New ====================
@@ node_target @@ @@ node_target @@
- gdbm Target(x86_64) - gdbm x86_64
+ zlib Target(skylake) + zlib skylake
@@ variant_value @@ @@ variant_value @@
- ncurses symlinks bool(False) - ncurses symlinks False
+ zlib optimize bool(True) + zlib optimize True
@@ version @@ @@ version @@
- gdbm Version(1.18.1) - gdbm 1.18.1
+ zlib Version(1.2.11) + zlib 1.2.11
@@ node_os @@ @@ node_os @@
- gdbm catalina - gdbm catalina
+ zlib catalina + zlib catalina
```
I suppose if we want to use `repr()` in the output we could do that and be
consistent, but we don't do that elsewhere -- the types of things in Specs are
all stringifiable, so the string and the name of the attribute (`version`, `node_os`,
etc.) are sufficient to know what they are.
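For reference, making a class like `AspFunction` comparable "in and of itself" is just a matter of ordering by a key tuple (a generic sketch, not the actual implementation):
```python
import functools

@functools.total_ordering
class AspFunction(object):
    def __init__(self, name, args):
        self.name, self.args = name, tuple(args)

    def _cmp_key(self):
        return (self.name, self.args)

    def __eq__(self, other):
        return self._cmp_key() == other._cmp_key()

    def __lt__(self, other):
        # total_ordering derives the remaining comparison operators
        return self._cmp_key() < other._cmp_key()
```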
When a spec fails to build on `develop`, instead of storing an empty file as the entry in the broken specs list, this change stores the full spec yaml as well as links to the failing pipeline and job.
A `spack diff` will take two specs, and then use the spack.solver.asp.SpackSolverSetup to generate
lists of facts about each (e.g., nodes, variants, etc.) and then take a set difference between the
two to show the user the differences.
Example output:
```console
$ spack diff python@2.7.8 python@3.8.11
==> Warning: This interface is subject to change.
--- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
+++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
@@ variant_value @@
- python patches a8c52415a8b03c0e5f28b5d52ae498f7a7e602007db2b9554df28cd5685839b8
+ python patches 0d98e93189bc278fbc37a50ed7f183bd8aaf249a8e1670a465f0db6bb4f8cf87
@@ version @@
- openssl Version(1.0.2u)
+ openssl Version(1.1.1k)
- python Version(2.7.8)
+ python Version(3.8.11)
```
Currently this uses diff-like output but we will attempt to improve on this in the future.
One use case for `spack diff` is whenever a user has a disambiguation situation and cannot
remember how two different installs are different. The command can also output `--json` for
a more analysis-oriented use case where we want to save complete data with all
diffs and the intersection. However, the command is really more intended for command
line use, and we will likely have an analyzer more suited to saving data.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Catch ConnectionError from CDash reporter
Catch ConnectionError when attempting to upload the results of `spack install`
to CDash. This follows in the spirit of #24299. We do not want `spack install`
to exit with a non-zero status when something goes wrong while attempting to
report results to CDash.
* Catch HTTP Error 400 (Bad Request) in relate_cdash_builds()
`spack style` previously used a Travis CI variable to figure out
what the base branch of a PR was, and this was apparently also set
on `develop`. We switched to `GITHUB_BASE_REF` to support GitHub
Actions, but it looks like this is set to `""` in pushes to develop,
so `spack style` breaks there.
This PR does two things:
- [x] Remove `GITHUB_BASE_REF` knowledge from `spack style` entirely
- [x] Handle `GITHUB_BASE_REF` in style scripts instead, and explicitly
pass the base ref if it is present, but don't otherwise.
This makes `spack style` *not* dependent on the environment and fixes
handling of the base branch in the right place.
This adds a `--root` option so that `spack style` can check style for
a spack instance other than its own.
We also change the inner workings of `spack style` so that `--config FILE`
(and similar options for the various tools) options are used. This ensures
that when `spack style` runs, it always uses the config from the running spack,
and does *not* pick up configuration from the external root.
- [x] add `--root` option to `spack style`
- [x] add `--config` (or similar) option when invoking style tools
- [x] add a test that verifies we can check an external instance
Intel oneAPI installs maintain a lock file in XDG_RUNTIME_DIR,
which by default exists in /tmp (and is shared by all component
installs). This prevented multiple oneAPI components from being
installed in parallel. This commit sets XDG_RUNTIME_DIR to exist
within Spack's installation Stage, which allows multiple components
to be installed at the same time.
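A sketch of the change in package terms (`setup_build_environment` and `env.set` follow Spack's package API; the exact path is illustrative):
```python
import os

def setup_build_environment(self, env):
    # Keep oneAPI's lock file inside this package's private stage directory
    # instead of a shared /tmp location, so parallel installs don't collide.
    env.set("XDG_RUNTIME_DIR", os.path.join(self.stage.path, "xdg-runtime"))
```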