This PR ensures that we'll get a comprehensible error message whenever an old
version of Spack tries to use a DB or a lockfile that is "too new".
* Fix error message when using a too new DB
* Add a unit-test to ensure we have a comprehensible error message
Add a section to the lock file to track the Spack version/commit that produced
an environment. This should (eventually) enhance reproducibility, though we
do not currently do anything with the information. It just adds to provenance
at the moment.
Changes include:
- [x] adding the version/commit to `spack.lock`
- [x] refactor `spack.main.get_version()`
- [x] fix a couple of environment lock file-related typos
The flags --mirror-name / --mirror-url / --directory were deprecated in
favor of just passing a positional name, url or directory, and letting spack
figure it out.
---------
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Prior to this PR, the HOMEDRIVE environment variable was used to
detect what drive we are operating in. This variable is not available
for service account logins (like what is used for CI), so switch to
extracting the drive from PROGRAMFILES (which is more-widely defined).
On Windows, several commonly available system tools for decompression
are unreliable (gz/bz2/xz). This commit refactors `decompressor_for`
to call out to a Windows or Unix-specific method:
* The decompressor_for_nix method behaves the same as before and
generally treats the Python/system support options for decompression
as interchangeable (although it avoids using Python's built-in tar
support since that has had issues with permissions).
* The decompressor_for_win method can only use Python support for
gz/bz2/xz, although for a tar.gz it does use system support for
untar (after the decompression step). .zip uses the system tar
utility, and .Z depends on external support (i.e. that the user
has installed 7zip).
A naming scheme has been introduced for the various _decompression
methods:
* _system_gunzip means to use a system tool (and fail if it's not
available)
* _py_gunzip means to use Python's built-in support for decompressing
.gzip files (and fail if it's not available)
* _gunzip is a method that can do either
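As a rough illustration of that naming scheme (simplified bodies; this is a sketch, not the actual helpers):
```python
import gzip
import os
import shutil
import subprocess

def _py_gunzip(archive_file):
    """Decompress with Python's built-in gzip support (fails if unavailable)."""
    decompressed_file = os.path.splitext(archive_file)[0]
    with gzip.open(archive_file, "rb") as src, open(decompressed_file, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return decompressed_file

def _system_gunzip(archive_file):
    """Decompress with the system gzip tool (fails if it's not available)."""
    subprocess.run(["gzip", "-d", "-k", archive_file], check=True)
    return os.path.splitext(archive_file)[0]

def _gunzip(archive_file):
    """Use whichever of the two is available."""
    try:
        return _system_gunzip(archive_file)
    except (OSError, subprocess.CalledProcessError):
        return _py_gunzip(archive_file)
```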
This is a refactor of Spack's stand-alone test process to be more spack- and pytest-like.
It is more spack-like in that test parts are no longer "hidden" in a package's run_test()
method and pytest-like in that any package method whose name starts with test_
(i.e., a "test" method) is a test part. We also support the ability to embed test parts in a
test method when that makes sense.
Test methods are now implicit test parts. The docstring is the purpose for the test part.
The name of the method is the name of the test part. The working directory is the active
spec's test stage directory. You can embed test parts using the test_part context manager.
Functionality added by this commit:
* Adds support for multiple test_* stand-alone package test methods, each of which is
an implicit test_part for execution and reporting purposes;
* Deprecates package use of run_test();
* Exposes some functionality from run_test() as optional helper methods;
* Adds a SkipTest exception that can be used to flag stand-alone tests as being skipped;
* Updates the packaging guide section on stand-alone tests to provide more examples;
* Restores the ability to run tests "inherited" from provided virtual packages;
* Prints the test log path (like we currently do for build log paths);
* Times and reports the post-install process (since it can include post-install tests);
* Corrects context-related error message to distinguish test recipes from build recipes.
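A hedged sketch of what the new style looks like in a package recipe (package name, binary, and helpers shown are illustrative):
```python
from spack.package import *

class Example(Package):
    """Hypothetical package used only to sketch the new stand-alone test style."""

    def test_version(self):
        """check that the installed binary reports a version"""
        example = which("example", path=self.prefix.bin)
        if example is None:
            raise SkipTest("example binary is not installed")
        example("--version")

    def test_help(self):
        """exercise the help output, with an embedded test part"""
        with test_part(self, "test_help_short", purpose="short help message"):
            which("example", path=self.prefix.bin)("-h")
```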
fixes #22341
Using double quotes creates issues with shell variable substitutions,
in particular when the manifest has "definitions:" in it. Use single
quotes instead.
Add a "require" directive to packages, which functions exactly like
requirements specified in packages.yaml (uses the same fact-generation
logic); update both to allow making the requirement conditional.
* Packages may now use "require" to add constraints. This can be useful
for something like `require("%gcc")` (where before we had to add a
conflict for every compiler except gcc).
* Requirements (in packages.yaml or in a "require" directive) can be
conditional on a spec, e.g. `require("%gcc", when="@1.0.0")` (version
1.0.0 can only build with gcc).
* Requirements may include a message which clarifies why they are needed.
The concretizer assigns a high priority to errors which generate these
messages (in particular over errors for unsatisfied requirements that
do not produce messages, but also over a number of more-generic
errors).
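For illustration, a minimal package sketch using the directive described above (the directive is spelled `requires` in package recipes; name, URL and checksum are placeholders):
```python
from spack.package import *

class Example(Package):
    homepage = "https://example.com"
    url = "https://example.com/example-1.0.0.tar.gz"

    version("1.0.0", sha256="0" * 64)  # placeholder digest

    # Constrain the whole package to GCC, instead of one conflict per other compiler.
    requires("%gcc")

    # Conditional requirement with a message the concretizer can report.
    requires("%gcc", when="@1.0.0", msg="example@1.0.0 can only be built with GCC")
```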
## Version types, parsing and printing
- The version classes have changed: `VersionBase` is removed, there is now a
`ConcreteVersion` base class. `StandardVersion` and `GitVersion` both inherit
from this.
- The public api (`Version`, `VersionRange`, `ver`) has changed a bit:
1. `Version` produces either `StandardVersion` or `GitVersion` instances.
2. `VersionRange` produces a `ClosedOpenRange`, but this shouldn't affect the user.
3. `ver` produces any of `VersionList`, `ClosedOpenRange`, `StandardVersion`
or `GitVersion`.
- No unexpected type promotion, so the following identity no longer holds:
`Version(x) == VersionRange(x, x)`.
- `VersionList.concrete` now returns a version if it contains only a single element
subtyping `ConcreteVersion` (i.e. `StandardVersion(...)` or `GitVersion(...)`)
- In version lists, the parser turns `@x` into `VersionRange(x, x)` instead
of `Version(x)`.
- The above also means that `ver("x")` produces a range, whereas
`ver("=x")` produces a `StandardVersion`. The `=` is part of _VersionList_
syntax.
- `VersionList.__str__` now outputs `=x.y.z` for specific version entries,
and `x.y.z` as a short-hand for ranges `x.y.z:x.y.z`.
- `Spec.format` no longer aliases `{version}` to `{versions}`, but pulls the
concrete version out of the list and prints that -- except when the list is
not concrete, in which case it falls back to `{versions}` to avoid a pedantic error.
For projections of concrete specs, `{version}` should be used to render
`1.2.3` instead of `=1.2.3` (which you would get with `{versions}`).
The default `Spec` format string used in `Spec.__str__` now uses
`{versions}` so that `str(Spec(string)) == string` holds.
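A minimal sketch of the new semantics, assuming the public API in `spack.version`:
```python
from spack.version import Version, ver

v = Version("1.2.3")   # a StandardVersion (a GitVersion for git refs/commits)
r = ver("1.2.3")       # a range, equivalent to 1.2.3:1.2.3
s = ver("=1.2.3")      # the specific version 1.2.3

# No unexpected type promotion: a version and the single-version range are distinct.
assert v != r
```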
## Changes to `GitVersion`
- `GitVersion` is a small wrapper around `StandardVersion` which enriches it
with a git ref. It no longer inherits from it.
- `GitVersion` _always_ needs to be able to look up an associated Spack version
if it was not assigned (yet). It throws a `VersionLookupError` whenever `ref_version`
is accessed but it has no means to look up the ref; in the past Spack would
not error and use the commit sha as a literal version, which was incorrect.
- `GitVersion` is never equal to `StandardVersion`, nor is satisfied by it. This
is such that we don't lose transitivity. This fixes the following bug on `develop`
where `git_version_a == standard_version == git_version_b` does not imply
`git_version_a == git_version_b`. It also ensures equality always implies equal
hash, which is also currently broken on develop; inclusion tests of a set of
versions + git versions would behave differently from inclusion tests of a
list of the same objects.
- The above means `ver("ref=1.2.3) != ver("=1.2.3")` could break packages that branch
on specific versions, but that was brittle already, since the same happens with
externals: `pkg@1.2.3-external` suffixes wouldn't be exactly equal either. Instead,
those checks should be `x.satisfies("@1.2.3")` which works both for git versions and
custom version suffixes.
- `GitVersion` from commit will now print as `<hash>=<version>` once the
git ref is resolved to a spack version. This is for reliability -- version is frozen
when added to the database and queried later. It also improves performance
since there is no need to clone all repos of all git versions after `spack clean -m`
is run and something queries the database, triggering version comparison, for
example during reuse concretization.
- The "empty VerstionStrComponent trick" for `GitVerison` is dropped since it wasn't
representable as a version string (by design). Instead, it's replaced by `git`,
so you get `1.2.3.git.4` (which reads 4 commits after a tag 1.2.3). This means
that there's an edge case for version schemes `1.1.1`, `1.1.1a`, since the
generated git version `1.1.1.git.1` (1 commit after `1.1.1`) compares larger
than `1.1.1a`, since `a < git` are compared as strings. This is currently a
won't-fix edge case, but if really required, could be fixed by special-casing
the `git` string.
- Saved, concrete specs (database, lock file, ...) that only had a git sha as their
version, but have no means to look up the effective Spack version anymore, will
now see their version mapped to `hash=develop`. Previously these specs
would always have their sha literally interpreted as a version string (even when
it _could_ be looked up). This only applies to databases, lock files and spec.json
files created before Spack 0.20; after this PR, we always have a Spack version
associated with the relevant GitVersion.
- Fixes a bug where previously `to_dict` / `from_dict` (de)serialization would not
reattach the repo to the GitVersion, causing the git hash to be used as a literal
(bogus) version instead of the resolved version. This was in particular breaking
version comparison in the build process on macOS/Windows.
## Installing or matching specific versions
- In the past, `spack install pkg@3.2` would install `pkg@=3.2` if it was a
known specific version defined in the package, even when newer patch releases
`3.2.1`, `3.2.2`, `...` were available. This behavior was only there because
there was no syntax to distinguish between `3.2` and `3.2.1`. Since there is
syntax for this now through `pkg@=3.2`, the old exact matching behavior is
removed. This means that `spack install pkg@3.2` constrains the `pkg` version
to the range `3.2`, and `spack install pkg@=3.2` constrains it to the specific
version `3.2`.
- Also in directives such as `depends_on("pkg@2.3")` and their when
conditions, e.g. `conflicts("...", when="@2.3")`, versions are ranges, and specific
version matches require `@=2.3`.
- No matching version: if `pkg@3.2` matches nothing, concretization
errors. However, if you run `spack install pkg@=3.2` and this version
doesn't exist, Spack will define it; this allows you to install non-registered
versions.
- For consistency, you can now do `%gcc@10` and let it match a configured
`10.x.y` compiler. It errors when there is no matching compiler.
In the past it was interpreted like a specific `gcc@=10` version, which
would get bootstrapped.
- When compiler _bootstrapping_ is enabled, `%gcc@=10.2.0` can be used to
bootstrap a specific compiler version.
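A short, hypothetical package sketch showing how the range vs. exact-match syntax reads in directives:
```python
from spack.package import *

class Example(Package):
    # "@3.2" is a range: satisfied by dep@3.2, dep@3.2.1, dep@3.2.2, ...
    depends_on("dep@3.2")

    # "@=3.2" matches only the specific version 3.2.
    depends_on("other-dep@=3.2")

    # The same applies to when= conditions.
    conflicts("%gcc@12:", when="@=2.3")
```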
## Other changes
- Externals, compilers, and develop spec definitions are backwards compatible.
They are typically defined as `pkg@3.2.1` even though they should be
saying `pkg@=3.2.1`. Spack now transforms `pkg@3` into `pkg@=3` in those cases.
- Finally, fix strictness of `version(...)` directive/declaration. It just does a simple
type check, and now requires strings/integers. Floats are not allowed because
they are ambiguous: `str(3.10) == "3.1"`.
`spack buildcache create` is a misnomer because it's the only way to push to
an existing buildcache (and it in fact calls binary_distribution.push).
Also we have `spack buildcache update-index`, but for create the flag is
`--rebuild-index`, which is confusing (and also... why "rebuild"
something if the command is "create" in the first place? That implies it
wasn't there to begin with).
So, after this PR, you can use either
```
spack buildcache create --rebuild-index
```
or
```
spack buildcache push --update-index
```
Also, alias `spack buildcache rebuild-index` to `spack buildcache
update-index`.
Spack never parsed `nagfor` linker arguments put on the compiler line:
```
nagfor -Wl,-Wl,,-rpath,,/path
```
so, let's continue not attempting to parse that.
`buildcache create --rel`: deprecate this because there is no point in
making things relative before tarballing; on install you need to expand
`$ORIGIN` / `@loader_path` / relative symlinks anyways because some
dependencies may actually be in an upstream, or have different
projections.
`buildcache install --allow-root`: this flag was propagated through a
lot of functions but was ultimately unused.
This switches the default Make build type to `build_type=Release`.
This offers:
- higher optimization level, including loop vectorization on older GCC
- adds NDEBUG define, which disables assertions, which could cause speedups if assertions are in loops etc
- no `-g` means smaller install size
Downsides are:
- worse backtraces (though this does NOT strip symbols)
- perf reports may be useless
- no function arguments / local variables in the debugger (though of course they sometimes can be)
- no file path / line numbers in debugger
The downsides can be mitigated by overriding to `build_type=RelWithDebInfo` in `packages.yaml`,
if needed. The upside is that builds will be MUCH smaller (and faster) with this change.
---------
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* Vendor ruamel.yaml v0.17.21
* Add unit test for whitespace regression
* Add an abstraction layer in Spack to wrap ruamel.yaml
All YAML operations are routed through spack.util.spack_yaml
The custom classes have been adapted to the new ruamel.yaml
class hierarchy.
Fixed line annotation issue in "spack config blame"
This ensures that:
a) no externals are added to the tarball metadata file
b) no externals are added to the prefix to prefix map on install, also
for old tarballs that did include externals
c) ensure that the prefix -> prefix map is always string to string, and
doesn't contain None in case a hash is missing for some reason
* Disable module generation by default (#35564)
a) It's used by site administrators, so it's niche
b) If it's used by site administrators, they likely need to modify the config anyhow, so the default config only serves as an example to get started
c) it's too arbitrary to enable tcl, but disable lmod
* Remove leftover from old module file schema
* Warn if module file config is detected and generation is disabled
---------
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Change the signature of the Environment.__init__ method to have
a single argument, i.e. the directory where the environment manifest
is located. Initializing that directory is now delegated to a function
taking care of all the error handling upfront. Environment objects
require a "spack.yaml" to be available to be constructed.
Add a class to manage the environment manifest file. The environment
now delegates to an attribute of that class the responsibility of keeping
track of changes modifying the manifest. This allows simplifying the
updates of the manifest file, and helps keep the spec lists in
memory in sync with the spack.yaml on disk.
Spack slows to a crawl post-install of nvhpc, partly because of
this post-install hook, which has a lot of redundancy and isn't correct.
1. There's no need to store "type" because that _is_ "mode".
2. There are more file types than "symlink", "dir", "file".
3. Don't checksum device type things
4. Don't run 3 stat calls (exists, stat, isdir/islink), but one lstat
call
5. Don't read entire files into memory
I also don't know why `spack.crypto` wasn't used for checksumming, but I
guess it's too late for that now. Finally, md5 would've been the faster
algorithm, which would've been fine given that a non-cryptographic
checksum was used anyways.
If we modify both Path and PATH, on Windows they will clobber one
another. This PR updates the shell modification logic to automatically
convert variable names to upper-case on Windows.
- [x] Replace `version(ver, checksum=None, **kwargs)` signature with
`version(ver, checksum=None, *, sha256=..., ...)` explicitly listing all arguments.
- [x] Fix various issues in packages:
- `tags` instead of `tag`
- `default` instead of `preferred`
- `sha26` instead of `sha256`
- etc
Also, use `sha256=...` consistently.
Note: setting `sha256` currently doesn't validate the checksum length, so you could do
`sha256="a"*32` and it would get checked as `md5`... but that's something for another PR.
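A quick sketch of the now-explicit usage (digest values are placeholders; only the keyword style matters here):
```python
from spack.package import *

class Example(Package):
    # sha256 (and friends) are now explicit keyword-only arguments, so typos
    # like sha26= or tag= fail loudly instead of disappearing into **kwargs.
    version("1.2.3", sha256="0" * 64)
    version("1.2.2", sha256="0" * 64, deprecated=True)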
This means that `spack install` will now build the minimal set of packages
required to install the root(s).
To opt out of build edge pruning, use `spack install --include-build-deps`.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Other tools like git support `GIT_EDITOR` which takes higher precedence than the
standard `VISUAL` or `EDITOR` variables. This adds similar support for Spack, in the
`SPACK_EDITOR` env var.
- [x] consolidate editor code from hooks into `spack.util.editor`
- [x] add more editor tests
- [x] add support for `SPACK_EDITOR`
- [x] add a documentation section for controlling the editor and reference it
Code from `spack.util.editor` was duplicated into our licensing hook in #11968. We
really only want one place where editor search logic is implemented. This consolidates
the logic into `spack.util.editor`, including a special case to run `gvim` with `-f`.
- [x] consolidate editor search logic in spack.util.editor
- [x] add tests for licensing case, where `Executable` is used instead of `os.execv`
- [x] make `_exec_func` argument of `editor()` into public `exec_fn` arg
- [x] add type annotations
fixes #36628
Fix using compilers that declare "target: any" in their
configuration. This should happen only on Cray with the
module based programming environment.
* Simplify test/cmd/ci.py::test_ci_generate_with_custom_scripts
* Rearrange the build-job logic in generate_gitlab_ci_yaml
* Preserve all unknown attributes in build jobs
* Slip tests for custom attributes in the tests for other job types
* Support custom artifacts
* [@spackbot] updating style on behalf of blue42u
* Don't bother sorting needs
---------
Co-authored-by: blue42u <blue42u@users.noreply.github.com>
* lmod modules: allow users to remove items from hierarchy per-spec
This allows MPI wrappers that depend on MPI to be removed from the MPI portion of
the hierarchy and be made available when the appropriate compiler is loaded.
```
module load gcc
module load mpi-wrapper # implicitly loads mpi
module load hdf5
```
This allows users to treat an mpi wrapper like an mpi program.
This commit changes the environment modifications class to escape
strings with double quotes instead of single quotes.
Single quotes prevent the expansion of environment variables that are
nested within environment variable definitions.
Fixes #36689
- The "base" builder class should be last in the MRO
- `filter_compiler_wrappers` needs to be moved to builders
- Decorating a function from a mixin class requires using
the correct metaclass for the mixin
This aims to resolve #34164 by resolving the <include-fragment> tags
that GitHub has started using for their release pages, see
https://github.github.io/include-fragment-element/.
This feels a bit hacky but intended as a starting point for discussion.
After reading a page during spidering, it first parses for
include-fragments, gets them all, and treats them all as separate pages.
Then it looks for href links in both the page itself and the fragments.
Co-authored-by: Alec Scott <alec@bcs.sh>
MSVC compilers rely on vcvars environment setup scripts to establish
build environment variables necessary for all projects to build
successfully. Prior to this we were only piping LIB, INCLUDE, and PATH
changes through.
Instead we need to propagate all changes to the env variables made by
vcvars in order to establish robust support for the MSVC compiler.
This most significantly impacts projects that need to be built with
NMake and MSBuild.
* CI: Fixup docs for bootstrap.
* CI: Add compatibility shim
* Add an update method for CI
Update requires manually renaming section to `ci`. After
this patch, updating and using the deprecated `gitlab-ci` section
should be possible.
* Fix typos in generate warnings
* Fixup CI schema validation
* Add unit tests for legacy CI
* Add deprecated CI stack for continuous testing
* Allow updating gitlab-ci section directly with env update
* Make warning give good advice for updating gitlab-ci
* Fix typo in CI name
* Remove white space
* Remove unneeded component of deprecated-ci
* Extract a method to warn when the manifest is not up-to-date
* Extract methods to update the repository and ensure dir exists
* Simplify further the write method, add failing unit-test
* Fix the function computing YAML equivalence between two instances
Previously `spack -e bla config update <section>` would treat the
environment config scope as standard config file instead of a single
file config scope. This fixes that.
When app is uninstalled, if it matches a default, then remove the
default symlink targeting its modulefile.
Until now, when a default was uninstalled, the default symlink was
left pointing to a nonexistent modulefile.
- Update default image to Ubuntu 22.04 (previously was still Ubuntu 18.04)
- Optionally use depfiles to install the environment within the container
- Allow extending Dockerfile Jinja2 template
- Allow extending Singularity definition file Jinja2 template
- Deprecate previous options to add extra instructions
* Reduce effort on grounding by employing cardinality constraints
If we use a cardinality constraint instead of a rule
using pair of values, we'll end up grounding 1 rule
instead of all the possible pair combinations of the
allowed values.
* Display all errors from concretization, instead of just one
If clingo produces multiple "error" facts, we now print all
of them in the error message. Before we were printing just
the one with the least priority.
Consolidate a few common patterns in concretize.lp to ensure
that certain node attributes have one and only one value
assigned.
All errors are displayed, so use a single criterion
instead of three.
* Account for weights in concretize.lp
To recover the optimization order we had before, account
for weights of errors when minimizing.
The priority is mapped to powers of 10, so that we effectively
get back the same results as with priorities.
When generating modulefiles, correctly detect software installations asked
for by the user as explicit installations.
The explicit installation status was previously fetched from the database
record of the spec, which was only set after modulefile generation.
Code is updated to pass down the explicit status of the software
installation to the object that generates modulefiles.
Fixes #34730.
Fixes #12105.
A value for the explicit argument has to be set when creating a new
installation, but for operations on an existing installation, this value is
retrieved from the database. Such operations are: module rm, module refresh,
module setdefaults, or when the get_module function is used.
Tests that mimic an installation are updated along the way, since the explicit
argument has to be set in such situations.
Add `config:stage_name` which is a Spec format string that can
customize the names of stages created by Spack. This was primarily
created to allow generating shorter stage names on Windows (along
with `config:build_stage`, this can be used to create stages with
short absolute paths).
By default, this is not set and the prior stage name format is used.
This also removes the username component that is always added to
Stage paths on Windows (if users want to include this, they can
add it to the `build_stage`).
* compiler wrapper: fix -Xlinker parsing
* handle the case of -rpath without a value; avoid dropping the flag
* also handle the -Xlinker -rpath -Xlinker without further args case...
* fix test
* get rid of global $rp var, reduce branching
- [x] Specs that define 'new' versions in the require: section need to generate associated facts to indicate that those versions are valid.
- [x] add test to verify success with unknown versions.
- [x] remove unneeded check that was leading to extra complexity and test
failures (at this point, `hash=version` does not require listing out that version
in `packages.yaml`)
- [x] unique index for origin (don't reuse 0)
Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Add a `find_first` method that locates one instance of a file
that matches a specified pattern by recursively searching a directory
tree. Unlike other `find` methods, this only locates one file at most,
so can use optimizations that avoid searching the entire tree:
Typically the relevant files are at low depth, so it makes sense to
locate files through iterative deepening and early exit.
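A hedged usage sketch (the helper lives in `llnl.util.filesystem`; the exact signature may differ slightly):
```python
from llnl.util.filesystem import find_first

# Stop at the first match instead of walking the whole prefix like find() would;
# relevant files usually sit at low depth, hence iterative deepening.
config = find_first("/path/to/prefix", "*Config.cmake")
if config is None:
    print("no matching file found")
```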
Update the tcl and lmod modulefile templates to provide more information in the
help message (name, version and target), as is done in whatis for lmod
modulefiles.
Adapt the tcl and lmod modulefile templates to generate append-path or
remove-path commands in the modulefile when append_flags or remove_flags
commands, respectively, are defined in the package for the run environment.
Fixes #10299.
Simplify environment modification block in modulefile Tcl template by
always setting a path delimiter to the prepend-path, append-path and
remove-path commands.
Remove the --delim option from the setenv command, as this command does not
accept such an option.
Update test_prepend_path_separator test to explicitly check the 6
path-like commands that should be present in generated modulefile.
fixes #36339
We were missing a rule that enforced a match between
the `node_compiler` and the compiler used to satisfy
a requirement.
Fix compilers with custom, made-up versions too
* Specs that define 'new' versions in the require: section need to generate
associated facts to indicate that those versions are valid.
* add test to verify success with unknown versions.
Since environment-modules has supported autoloading since 4.2,
and Spack-builds of it enable it by default, use the same autoload
default for tcl as lmod.
If you have a "require:" section in your packages config, and you
use it to specify a list of requirements, the list elements can
now include strings (before this, each element in the list had to
be a `one_of` or `any_of` specification, which is awkward if you
wanted to apply just one spec with no alternatives).
Example one:
```
spack install --add x y z
```
is equivalent to
```
spack add x y z
spack concretize
spack install --only-concrete
```
where `--only-concrete` installs without modifying spack.yaml/spack.lock
Example two:
```
spack install
```
concretizes current spack.yaml if outdated and installs all specs.
Example three:
```
spack install x y z
```
concretizes current spack.yaml if outdated and installs *only* concrete
specs in the environment that match abstract specs `x`, `y`, or `z`.
The `ignore` parameter was only used for `spack activate/deactivate`, and it isn't used
by Spack Environments which have their own handling of file conflicts. We should remove it.
Everything that handles `ignore=` was removed in #29317 and included in 0.19, when we
removed `spack activate` and `spack deactivate` in favor of environments. So all of these
usages removed here were already being ignored by Spack.
Adapt tcl modulefile template to call "module load" on autoload
dependency without testing if this dependency is already loaded or not.
The is-loaded test is not necessary, as module commands know how to cope
with an already loaded module. With environment-modules 4.2+ (released
in 2018) it is also important to have this "module load" command even if
dependency is already loaded in order to record that the modulefile
declares such dependency. This is important if you want to keep a
consistent environment when a dependent module is unloaded.
The "Autoloading" verbose message is also removed as recent module
commands will report such information to the user (depending on the
verbosity configured for the module command).
This change has been tested successfully with Modules 3.2 (EL7), 4.5 (EL8)
and 5.2 (latest) and also with Lmod 7 and 8 (as the Spack docs mention
that Lmod can be used along with tcl modules). Dependencies
are correctly loaded or unloaded, whether they were already loaded or not.
This change also fixes a Tcl quoting issue introduced in #32853.
Fixes #19155.
In the Windows filesystem logic for creating a symlink, we intend to
fall back to a copy when the symlink cannot be created (for some
configuration settings on Windows it is not possible for the user
to create a symlink). It turns out we were overly-broad in which
exceptions led to this fallback, and the subsequent copy would
also fail: at least one case where this occurred is when we
attempted to create a symlink that already existed.
The updated logic expressly avoids falling back to a copy when the
file/symlink already exists.
* ASP-based solver: use satisfies instead of intersects
They are semantically equivalent for concrete versions,
but the GitVersion.intersects implementation is buggy
* Mitigation for git version bug
fixes #36134
This commit works around the issue in #36134, by using
GitVersion.satisfies instead of GitVersion.intersects
There are still underlying issues when trying to infer the
"reference version" when no explicit one is given, but:
1. They are not reproducible with our synthetic repo
2. They occur only when the `git.<xxx>` form of Git version
is used
Here we just work around the user facing issue and ensure
the tests are correct with our synthetic repository.
This PR does 2 unrelated things:
1. It changes the encoding of the compilers
2. It tweaks the heuristic for the solves in a0d88179074f81d13a3fad629a43d86334e7f982
Both were initially motivated by trying to get a performance gain but, while 2 showed significant speed-ups[^1], 1 instead didn't. I kept it anyhow, since I think the code related to compilers is more consolidated with the new encoding and we might get some performance improvement out of it if we can base our errors on the `node_compiler(Package, CompilerID)` atoms instead of `attrs`.
[^1]: In general the changes in the heuristic brought a ~10% speed-up on the tests I did. I'll post detailed results below.
Add a warning about compilers.yaml that is triggered if there are multiple compilers with the same spec, os and
target (since they can't be selected by users with the spec syntax only).
Since GPG clear-sign cannot deal with lines longer than 19995 characters
and doesn't even error but simply truncates those lines (don't ask me
why...), we have to be careful not to hit that line limit when reducing
the filesize.
So, instead this PR sets the indent level to 0 and drops the whitespace
after `: `, which still reduces file size by 50% or so.
Adds builders appropriate for building these packages on Windows.
It is intended that builds on other platforms are unaffected (e.g.
they build with Autotools as before on Linux).
... and use colors in disambiguate message for clarity.
This commit avoids the loop:
```
for root in roots:
for dep in deps(root):
...
```
instead it ensures each node is visited once and only once.
Also adds a small optimization when searching for concrete specs, since
we can assume uniqueness of dag hash, so it's fine to early exit.
This adds a new mode for `concretizer:reuse` called `dependencies`,
which only reuses dependencies. Currently, `spack install foo` will
reuse older versions of `foo`, which might be surprising to users.
* CI configuration boilerplate reduction and refactor
Configuration:
- New notation for list concatenation (prepend/append)
- New notation for string concatenation (prepend/append)
- Break out configuration files for: ci.yaml, cdash.yaml, view.yaml
- Spack CI section refactored to improve self-consistency and
composability
- Scripts are now lists of lists and/or lists of strings
- Job attributes are now listed under precedence ordered list that are
composed/merged using Spack config merge rules.
- "service-jobs" are identified explicitly rather than as a batch
CI:
- Consolidate common, platform, and architecture configurations for all CI stacks into composable configuration files
- Make padding consistent across all stacks (256)
- Merge all package -> runner mappings to be consistent across all
stacks
Unit Test:
- Refactor CI module unit-tests for the refactored configuration
Docs:
- Add docs for new notations in configuration.rst
- Rewrite docs on CI pipelines to be consistent with refactored CI
workflow
* Script verbose environ, dev bootstrap
* Port #35409
When untouched spec pruning is enabled, specs possibly affected
by a change cannot be pruned from a pipeline.
Previously spack looked at all specs matching changed package
names, and traversed dependents of each, all the way to the
environment root, to compute the set of environment specs
possibly affected by a change (and thus, not candidates for
pruning).
With this PR, when untouched spec pruning is enabled, a new
environment variable can control how far towards the root spack
traverses to compute the set of specs possibly affected by a
change. SPACK_UNTOUCHED_PRUNING_DEPENDENT_DEPTH can be set
to any numeric value before the "spack ci generate" command
is called to control this traversal depth parameter. Setting
it to "0" traverses only touched specs, setting it to "1"
traverses only touched specs and their direct dependents, and
so on. Omitting the variable results in the previous behavior
of traversing all the way to the root. Setting it to a negative
value means no traversal is done, and always yields an empty
set of possibly affected specs (which would result in the max
pruning possible).
Currently `spack buildcache create` creates compressed tarballs that
differ between each invocation, thanks to:
1. The gzip header containing mtime set to time.time()
2. The generated buildinfo file which has a different mtime every time.
To avoid this, you have to explicitly construct GZipFile yourself, since
the Python API doesn't expose the mtime arg, and we have to manually
create the tarinfo object for the buildinfo metadata file.
Normalize modes: regular files and hardlinks that are executable by the user,
directories, and symlinks get 0o755 permissions in the tarball; other files get 0o644.
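A minimal sketch of the idea (paths and metadata are illustrative; this is not the exact Spack code):
```python
import gzip
import io
import tarfile

with open("archive.tar.gz", "wb") as f:
    # gzip.GzipFile exposes mtime (tarfile.open(..., "w:gz") does not), so the
    # gzip header no longer embeds time.time().
    with gzip.GzipFile(filename="", mode="wb", fileobj=f, mtime=0, compresslevel=9) as gz:
        with tarfile.open(fileobj=gz, mode="w") as tar:
            info = tarfile.TarInfo(name="buildinfo")
            data = b"example metadata"
            info.size = len(data)
            info.mtime = 0     # pin the mtime of the generated metadata file
            info.mode = 0o644  # normalized mode for a non-executable file
            tar.addfile(info, io.BytesIO(data))
```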
This commit formalizes `satisfies(lhs, rhs, strict=True/False)`
and splits it into two functions: `satisfies(lhs, rhs)` and
`intersects(lhs, rhs)`.
- `satisfies(lhs, rhs)` means: all concrete specs matching the
left hand side also match the right hand side
- `intersects(lhs, rhs)` means: there exist concrete specs
matching both lhs and rhs.
`intersects` now has the property that it's commutative,
which previously was not guaranteed.
For abstract specs, `intersects(lhs, rhs)` implies that
`constrain(lhs, rhs)` works.
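A short illustration of the two relations on abstract specs (package name is hypothetical; assumes a normal Spack session):
```python
from spack.spec import Spec

a = Spec("pkg@1.0:2.0")
b = Spec("pkg@1.5:3.0")

assert a.intersects(b) and b.intersects(a)  # commutative: some concrete spec matches both
assert not a.satisfies(b)                   # not every spec matching a also matches b
```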
What's *not* done in this PR is ensuring that
`intersects(concrete, abstract)` returns false when the
abstract spec has additional properties not present in the
concrete spec, but `constrain(concrete, abstract)` will
raise an error.
To accomplish this, some semantics have changed, as well
as bugfixes to ArchSpec:
- GitVersion is now interpreted as a more constrained
version
- Compiler flags are interpreted as strings since their
order is important
- Abstract specs respect variant type (bool / multivalued)
Two fixes:
1. `-Wl,a,b,c,d` is a comma separated list of linker arguments, we
incorrectly assume key/value pairs, which runs into issues with for
example `-Wl,--enable-new-dtags,-rpath,/x`
2. `-Xlinker,xxx` is not a thing, so it shouldn't be parsed.
Currently, if two compilers with the same spec differ on the flags, the concretizer will:
1. mix both sets of flags for the spec in the ASP program
2. error noting that the set of flags from the compiler (both of them) doesn't match the set from the lower priority compiler
This PR fixes both -- only flags from the highest priority compiler with a given spec are considered.
`mypy` only understands `sys.platform == "win32"`, not indirect assignments of that
value to things like `is_windows`. If we don't use the accepted platform checks, `mypy`
registers many Windows-only symbols as not present on Linux, when it should skip the
checks for platform-specific code.
Update `spack.util.environment` to remove legacy idioms.
* Remove kwargs from method signature and use a class for traces
* Uppercase a few global variables
* spack.util.environment: add type-hints
* Improve docstrings
* Fixed most style issues reported by pylint
Cray Slingshot detection previously relied only on the existence of
a shared library /lib64/libcxi.so, which seems to also appear on other
non-Slingshot systems. This patch also checks to make sure that there
is a Cray programming environment in /opt/cray/pe in addition to the
shared library.
If the dump file existed, it was not truncated, resulting in
a file with unaltered filesize, with the new content at the beginning,
"padded" with the tail of the old content, since the new content was
not long enough to overwrite it.
`colify` is an old module in Spack that still uses `**kwargs` liberally.
We should be more explicit. Doing this eliminates the need for many
checks (can't pass the wrong arg if it isn't allowed) and makes the
function documentation more clear.
Fixes a bug introduced in 44ed0de8c0
where the push method of binary_distribution now takes named args
include_root and include_dependencies, to avoid the **kwargs hole.
But the call site wasn't updated and we passed a dict of keys/values instead
of arguments, which resulted in a call like this:
```
push(include_root={"include_root": True, "include_dependencies": False})
```
This commit fixes that, and adds a test to see if we push the correct packages.
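For contrast, a sketch of the corrected call (other arguments omitted):
```python
push(include_root=True, include_dependencies=False)
```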
This error shows up a lot; typically it's harmless because an error
happened before the source build even started, in which case we don't
have build logs to copy. So, warn instead of error, since it distracts
from the actual CI error.
Currently we attempt to setup the build environment even when
dependencies are not installed, which typically results in error while
searching for libraries or executables in a dependency's prefix.
With this change, we get a more user friendly error:
```
$ spack build-env perl
==> Error: Not all dependencies of perl are installed, cannot setup build environment:
- qpj6dw5 perl@5.36.0%apple-clang@14.0.0+cpanm+open+shared+threads build_system=generic arch=darwin-ventura-m1
- jq2plbe ^berkeley-db@18.1.40%apple-clang@14.0.0+cxx~docs+stl build_system=autotools patches=26090f4,b231fcc arch=darwin-ventura-m1
...
$ echo $?
1
```
* Allow users to specify root env dir
Environments managed by spack have some advantages over anonymous Environments
but they are tucked away inside spack's directory tree. This PR gives
users the ability to specify where the environments should live.
See #32823
This is also taken as an opportunity to ensure that all references are to "managed environments",
rather than "named environments". Prior to this PR some references to the latter persisted.
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
The call:
```
x.satisfies(y[, strict=False])
```
is commutative, and tests non-empty intersection, whereas:
```
x.satisfies(y, strict=True)
```
is not commutative, and tests set-inclusion.
There are 2 fast paths. When strict=False both self and other need to
be concrete, when strict=True we can optimize when other is concrete.
Spack generally ignores file-file projection clashes in environment
views, but would eventually error when linking the `.spack` directory
for two specs of the same package.
This leads to obscure errors where users have no clue what the issue is
and how to fix it. On top of that, the error comes very late, since it
happens when the .spack dir contents are linked (which happens after
everything else).
This PR improves that by doing a quick check ahead of time if clashes
are going to be anticipated (by simply checking for clashes in the
projection of each spec's .spack metadir). If there are clashes, a
human-readable error is thrown which shows two of the conflicting specs,
and tells users to use unify:true, view:false, or set up custom
projections.
The checksum exception was not detailed enough and not reraised when using cache only, resulting in useless error messages.
Now it dumps the file path, expected
hash, computed hash, and the downloaded file summary.
* Style: black 23, skip magic trailing commas
* isort should use same line length as black
* Fix unused import
* Update version of black used in CI
* Update new packages
* Update new packages
Specs that did not contribute any files to an env view caused a problem
where zip(specs, files grouped by prefix) got "out of sync", causing the
wrong merge map to be passed to a package's `add_files_to_view`, which
specifically caused an issue where *sometimes* bin/python ended up as a
symlink instead of a copy.
One such example is kokkos + kokkos-nvcc-wrapper, as the latter package
only provides the file bin/nvcc_wrapper, which is also added to view by
kokkos, causing kokkos-nvcc-wrapper to contribute 0 files.
The test feels a bit contrived, but it captures the problem... pkg a is
added first and has 0 files to contribute, pkg b adds a single file, and
we check if pkg b receives a merge map (and a does not).
`spack gc` removes build deps of explicitly installed specs, but somehow
if you take one of the specs that `spack gc` would remove, and feed it
to `spack uninstall /<hash>` by hash, it complains about all the
dependents that still rely on it.
This resolves the inconsistency by only following run/link type deps in
spack uninstall.
That way you can finally do `spack uninstall cmake` without having to
remove all packages built with cmake.
Default package requirements might contain
variants that are not defined in each package,
so we shouldn't verify them when emitting facts
for the ASP solver.
Account for group when enforcing requirements
packages:all : don't emit facts for requirement conditions
that can't apply to current spec
#35098 added the correct extraction of toolset version for the MSVC
compiler. This updates the associated method in MSBuilder to retrieve
the (now correct) property.
This PR enables the successful execution of the spack binary cache
tutorial on Windows. It assumes gnupg and file are available (they
can be installed with choco).
* Fix handling of args with quotes in spack.bat
* `file` utility can be installed on Windows (e.g. with choco): update
error message accordingly
At least with ZSH, prefix inspections containing `./bin` result in a
`$PREFIX/./bin` and result in strange `$PATH` handling.
I.e., `module load git` will prepend `/path/to/git/./bin`, `which git`
will find the right executable, but `git --version` will print the
system one. Normalize the relative path to avoid this behavior.
See also spack/spack#31867.
fixes #34879
This commit adds a new maintainers directive,
which by default extends the list of maintainers
for a given package.
The directive is backward compatible with the current
practice of having a "maintainers" list declared at
the class level.
Move the relocation of binary text into its own class.
Drop threaded text replacement, since the current bottleneck
is decompression. It would be better to parallelize over packages,
instead of over files per package.
A small improvement with separate classes for text replacement is that we
now compile the regex in the constructor; previously it was compiled per
binary to be relocated.
This commit makes explicit the format version of the spec file
we are reading from.
Before there were different functions capable of reading some
part of the spec file at multiple format versions. The decision
was implicit, since checks were based on the structure of the
JSON without ever checking a format version number.
The refactor also makes explicit which spec file format is used
by which database and lockfile format, since the information is
stored in global mappings.
To ensure we don't change the hash of old specs, JSON representations
of specs have been added as data. A unit test checks that we read
the correct hash in, and that the hash stays the same when we
re-serialize the spec using the most recent format version.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Currently we print "sha256 checksum failed for [file]. Expected X but
got Y".
This PR extends that message with file size and contents info:
"... but got Y. File size = 123456 bytes. Contents = b'abc...def'"
That way we can immediately see if the file was downloaded only
partially, or if we downloaded a text page instead of a binary, etc.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
When running unit tests, the test/ci.py module leaves
garbage (help.sh, test.sh files) in the current working
directory.
This commit changes the current working directory to a
temporary path before those files are created.
* environments: don't rewrite relative view path, expand path on cli ahead of time
Currently if you have a spack.yaml that specifies a view by relative
path, Spack expands it to an absolute path on `spack -e . install` and
persists that to disk.
This is rather annoying when you have a `spack.yaml` file inside a git
repo, since you want to use relative paths to make it relocatable, but
you constantly have to undo the changes made to spack.yaml by Spack.
So, as an alternative:
1. Always stick to paths as they are provided in spack.yaml, never
replace them with a canonicalized version
2. Turn relative paths on the command line into absolute paths before
storing to spack.yaml. This way you can do `spack env create --dir
./env --with-view ./view` and both `./env` and `./view` are resolved
to the current working dir, as expected (not `./env/view`). This
corresponds to the old behavior of `spack env create`.
* create --with-view always takes a value
All packages with explicit Windows support can be found with
`spack list --tags=windows`.
This also removes the documentation which explicitly lists
supported packages on Windows (which is currently out of date and
is now unnecessary with the added tags).
Note that if a package does not appear in this list, it *may*
still build on Windows, but it likely means that no explicit
attempt has been made to support it.
Since SPACK_PACKAGE_IDS is now also "namespaced" with <prefix>, it makes
more sense to call the flag `--make-prefix` and alias the old flag
`--make-target-prefix` to it.
Normally when using external packages in concretization, Spack ignores
all dependencies of the external. #33777 updated this logic to attach
a Python Spec to external Python extensions (most py-* packages), but
as implemented there were a couple issues:
* this did not account for concretization groups and could generate
multiple different python specs for a single DAG
* in some cases this created a fake Python spec with insufficient
details to be usable (concretization/installation of the
extension would fail)
This PR addresses both of these issues:
* For environment specs that are concretized together, external python
extensions in those specs will all be assigned the same Python spec
* If Spack needs to "invent" a Python spec, then it will have all the
needed details (e.g. compiler/architecture)
With the new variable [prefix/]SPACK_PACKAGE_IDS you can conveniently execute
things after each successful install.
For example, push just-built packages to a buildcache:
```
SPACK ?= spack
export SPACK_COLOR = always
MAKEFLAGS += -Orecurse
MY_BUILDCACHE := $(CURDIR)/cache
.PHONY: all clean
all: push
ifeq (,$(filter clean,$(MAKECMDGOALS)))
include env.mk
endif
# the relevant part: push has *all* example/push/<pkg identifier> as prereqs
push: $(addprefix example/push/,$(example/SPACK_PACKAGE_IDS))
$(SPACK) -e . buildcache update-index --directory $(MY_BUILDCACHE)
$(info Pushed everything, yay!)
# and each example/push/<pkg identifier> has the install target as prereq,
# and the body can use target local $(HASH) and $(SPEC) variables to do
# things, such as pushing to a build cache
example/push/%: example/install/%
@mkdir -p $(dir $@)
$(SPACK) -e . buildcache create --allow-root --only=package --unsigned --directory $(MY_BUILDCACHE) /$(HASH) # push $(SPEC)
@touch $@
spack.lock: spack.yaml
$(SPACK) -e . concretize -f
env.mk: spack.lock
$(SPACK) -e . env depfile -o $@ --make-target-prefix example
clean:
rm -rf spack.lock env.mk example/
```
In the past we checked remote binary mirrors for existence of a spec
before attempting to download it. That changed to only checking local
copies of index.jsons (if available) to prioritize certain mirrors where
we expect to find a tarball. That was faster for CI since fetching
index.json and loading it just to order mirrors takes more time than
just attempting to fetch tarballs -- and also if we have a direct hit
there's no point to look at other mirrors.
Long story short: the info message only makes sense in the old version
of Spack, so it's better to remove it.
* Forward lookup of "test_log_file" and "test_failures"
refers to #34531, closes #34487, fixes #34440
* Add unit test
* py-libensemble: fix tests
* Support stand-alone tests with cached files as install-time tests
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Ensure `spack mirror add <name> <url/path>` without further arguments translates to `<name>: <url>` key value pairs in mirrors.yaml. If --s3-* flags are provided, only store the provided ones.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
With this change we get the invariant that `mirror.fetch_url` and
`mirror.push_url` return valid URLs, even when the backing config
file is actually using (relative) paths with potentially `$spack` and
`$env` like variables.
Secondly it avoids expanding mirror path / URLs too early,
so if I say `spack mirror add name ./path`, it stays `./path` in my
config. When it's retrieved through MirrorCollection() we
expand it to say `file://<env dir>/path` if `./path` was set in an
environment scope.
Thirdly, the interface is simplified for the relevant buildcache
commands, so it's more like `git push`:
```
spack buildcache create [mirror] [specs...]
```
`mirror` is either a mirror name, a path, or a URL.
Resolving the relevant mirror goes as follows:
- If it contains either / or \ it is used as an anonymous mirror with
path or url.
- Otherwise, it's interpreted as a named mirror, which must exist.
This helps to guard against typos, e.g. typing `my-mirror` when there
is no such named mirror now errors with:
```
$ spack -e . buildcache create my-mirror
==> Error: no mirror named "my-mirror". Did you mean ./my-mirror?
```
instead of creating a directory in the current working directory. I
think this is reasonable, as the alternative (requiring that a local dir
exists) feels a bit pedantic in the general case -- spack is happy to
create the build cache dir when needed, saving a `mkdir`.
The old (now deprecated) format will still be available in Spack 0.20,
but is scheduled to be removed in 0.21:
```
spack buildcache create (--directory | --mirror-url | --mirror-name) [specs...]
```
This PR also touches `tmp_scope` in tests, because it didn't really
work for me, since spack fixes the possible --scope values once and
for all across tests, so tests failed when run out of order.
Sometimes I just want to know how many packages of a certain type there are.
- [x] add `--count` option to `spack list` that outputs the number of packages that
*would* be listed.
```console
> spack list --count
6864
> spack list --count py-
2040
> spack list --count r-
1162
```
Currently, all of the replacements in `spack.util.path.replacements()` get evaluated for
each replacement. This makes it easy to get bootstrap issues, because config is used
very early on in Spack.
Right now, if I run `test_autotools_gnuconfig_replacement_no_gnuconfig` on my M1 mac, I
get the circular reference error below. This fixes the issue by making all of the path
replacements lazy lambdas.
As a bonus, this cleans up the way we do substitution for `$env` -- it's consistent with
other substitutions now.
- [x] make all path `replacements()` lazy
- [x] clean up handling of `$env`
```console
> spack unit-test -k test_autotools_gnuconfig_replacement_no_gnuconfig
...
==> [2022-12-31-15:44:21.771459] Error: AttributeError:
The 'autotools-config-replacement' package cannot find an attribute while trying to build from sources. This might be due to a change in Spack's package format to support multiple build-systems for a single package. You can fix this by updating the build recipe, and you can also report the issue as a bug. More information at https://spack.readthedocs.io/en/latest/packaging_guide.html#installation-procedure
/Users/gamblin2/src/spack/lib/spack/spack/package_base.py:1332, in prefix:
1330 @property
1331 def prefix(self):
>> 1332 """Get the prefix into which this package should be installed."""
1333 return self.spec.prefix
Traceback (most recent call last):
File "/Users/gamblin2/src/spack/lib/spack/spack/build_environment.py", line 1030, in _setup_pkg_and_run
kwargs["env_modifications"] = setup_package(
^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/build_environment.py", line 757, in setup_package
set_module_variables_for_package(pkg)
File "/Users/gamblin2/src/spack/lib/spack/spack/build_environment.py", line 596, in set_module_variables_for_package
m.std_cmake_args = spack.build_systems.cmake.CMakeBuilder.std_args(pkg)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/build_systems/cmake.py", line 241, in std_args
define("CMAKE_INSTALL_PREFIX", pkg.prefix),
^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/package_base.py", line 1333, in prefix
return self.spec.prefix
^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/spec.py", line 1710, in prefix
self.prefix = spack.store.layout.path_for_spec(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/directory_layout.py", line 336, in path_for_spec
path = self.relative_path_for_spec(spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/directory_layout.py", line 106, in relative_path_for_spec
projection = spack.projections.get_projection(self.projections, spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/projections.py", line 13, in get_projection
if spec.satisfies(spec_like, strict=True):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/spec.py", line 3642, in satisfies
if not self.virtual and other.virtual:
^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/spec.py", line 1622, in virtual
return spack.repo.path.is_virtual(self.name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/repo.py", line 890, in is_virtual
return have_name and pkg_name in self.provider_index
^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/repo.py", line 770, in provider_index
self._provider_index.merge(repo.provider_index)
^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/repo.py", line 1096, in provider_index
return self.index["providers"]
~~~~~~~~~~^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/repo.py", line 592, in __getitem__
self._build_all_indexes()
File "/Users/gamblin2/src/spack/lib/spack/spack/repo.py", line 607, in _build_all_indexes
self.indexes[name] = self._build_index(name, indexer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/repo.py", line 616, in _build_index
index_mtime = self.cache.mtime(cache_filename)
^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/llnl/util/lang.py", line 826, in __getattr__
return getattr(self.instance, name)
^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/llnl/util/lang.py", line 825, in __getattr__
raise AttributeError()
AttributeError
```
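A minimal, self-contained sketch of the lazy-replacement idea (names and values are illustrative, not Spack's actual implementation):
```python
def replacements():
    # Callables instead of eagerly computed values; only the replacement that
    # is actually used gets evaluated, so unrelated lookups cannot recurse.
    return {
        "spack": lambda: "/path/to/spack",  # illustrative value
        "user": lambda: "gamblin2",         # illustrative value
    }

def substitute(path):
    for name, value in replacements().items():
        token = "$" + name
        if token in path:
            path = path.replace(token, value())  # evaluated only on use
    return path

print(substitute("$spack/var/spack/cache"))  # -> /path/to/spack/var/spack/cache
```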
Since we dropped support for Python 2.7, we can embrace keyword-only arguments
for many functions in Spack that take `**kwargs` in their signatures. Here this is done
for the `llnl.util.filesystem` module.
There were also a couple of bugs lurking in the code, caused by typo-like errors when
retrieving values from `kwargs`. Those have been fixed as well.
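For illustration, here is a minimal before/after sketch of the pattern; `sync_tree` and its options are hypothetical names, not the actual `llnl.util.filesystem` API:
```python
# Hypothetical example of moving from **kwargs to keyword-only arguments.

# Before: options hide in **kwargs, so a misspelled key is silently ignored
def sync_tree(src, dest, **kwargs):
    symlinks = kwargs.get("symlinks", True)
    ignore_errors = kwargs.get("ignore_errors", False)
    return symlinks, ignore_errors

# After: keyword-only arguments are part of the signature, so typos raise TypeError
def sync_tree(src, dest, *, symlinks=True, ignore_errors=False):
    return symlinks, ignore_errors

sync_tree("a", "b", symlinks=False)        # ok
# sync_tree("a", "b", ignore_erors=True)   # TypeError: unexpected keyword argument
```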
The code in `FileCache` for `write_transaction` attempts to delete the temporary file when an
exception occurs inside the context manager by calling `shutil.rmtree`. However, `rmtree` only
operates on directories, while the rest of `FileCache` uses regular files, so the cleanup fails
and an unneeded empty file is left at the cache key.
Use `os.remove` instead, which operates on regular files.
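A minimal sketch of the idea (not the actual `FileCache` code):
```python
import os

def cleanup_failed_transaction(tmp_path: str) -> None:
    # The temporary file created during a write_transaction is a regular file,
    # so it must be removed with os.remove; shutil.rmtree only handles directories.
    if os.path.exists(tmp_path):
        os.remove(tmp_path)
```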
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
The Windows CMD prompt does not automatically interpret ANSI color control
characters printed to the console from Python. Enable this behavior by
accessing the current console and allowing the interpretation of ANSI
control characters from Python via the win32 API.
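A minimal ctypes-based sketch of the technique (Windows-only; this illustrates the console-mode change, not Spack's exact implementation):
```python
import ctypes

# Win32 console constants (documented values)
STD_OUTPUT_HANDLE = -11
ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004

# Ask the console to interpret ANSI escape sequences instead of printing them
kernel32 = ctypes.windll.kernel32
handle = kernel32.GetStdHandle(STD_OUTPUT_HANDLE)
mode = ctypes.c_uint32()
if kernel32.GetConsoleMode(handle, ctypes.byref(mode)):
    kernel32.SetConsoleMode(handle, mode.value | ENABLE_VIRTUAL_TERMINAL_PROCESSING)
```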
This commit allows (remote) `spec.json` files to be clearsigned and gzipped.
The idea is to reduce both the number of requests and the number of bytes transferred.
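A small sketch of how a client might handle the compressed case (illustrative, not Spack's fetch code):
```python
import gzip

def read_possibly_gzipped(data: bytes) -> str:
    # A gzipped payload starts with the two magic bytes 0x1f 0x8b
    if data[:2] == b"\x1f\x8b":
        data = gzip.decompress(data)
    return data.decode("utf-8")
```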
The code in Spack that generates install and test reports currently suffers from unnecessary complexity. For
instance, we have classes in Spack core, like `spack.reporters.CDash`, that need an
`argparse.Namespace` to be initialized and that branch on hard-coded string literals to
change their behavior:
```python
if do_fn.__name__ == "do_test" and skip_externals:
    package["result"] = "skipped"
else:
    package["result"] = "success"
package["stdout"] = fetch_log(pkg, do_fn, self.dir)
package["installed_from_binary_cache"] = pkg.installed_from_binary_cache
if do_fn.__name__ == "_install_task" and installed_already:
    return
```
This PR attempts to polish away the major issues encountered in both `spack.report` and `spack.reporters`.
Details:
- [x] `spack.reporters` is now a package that contains both the base class `Reporter` and all
the derived classes (`JUnit` and `CDash`)
- [x] Classes derived from `spack.reporters.Reporter` don't take an `argparse.Namespace` anymore
as argument to `__init__`. The rationale is that code for commands should be built upon Spack
core classes, not vice-versa.
- [x] An `argparse.Action` has been coded to create the correct `Reporter` object based on command
line arguments (see the sketch after this list)
- [x] The context managers to generate reports from either `spack install` or from `spack test` have
been greatly simplified, and have been made less "dynamic" in nature. In particular, the `collect_info`
class has been deleted in favor of two more specific context managers. This allows for a simpler
structure of the code and requires less knowledge from client code (in particular, about which method to patch)
- [x] The `InfoCollector` class has been turned into a simple hierarchy, so as to avoid conditional statements
within methods that assume knowledge of the context in which the method is called.
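Below is a minimal sketch of the `argparse.Action` idea; the reporter classes and option name are placeholders, not Spack's actual implementation:
```python
import argparse

class JUnit:
    """Placeholder for a JUnit reporter."""

class CDash:
    """Placeholder for a CDash reporter."""

_REPORTERS = {"junit": JUnit, "cdash": CDash}

class CreateReporter(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        # Store a ready-made reporter object instead of the raw string
        setattr(namespace, self.dest, _REPORTERS[values]())

parser = argparse.ArgumentParser()
parser.add_argument("--log-format", choices=("junit", "cdash"), action=CreateReporter)
args = parser.parse_args(["--log-format", "junit"])
assert isinstance(args.log_format, JUnit)
```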
On systems with remote groups, the primary user group may be remote and may not exist on
the local system (i.e., it might just be a number). On the CLI, it looks like this:
```console
> touch foo
> l foo
-rw-r--r-- 1 gamblin2 57095 0 Dec 29 22:24 foo
> chmod 2000 foo
chmod: changing permissions of 'foo': Operation not permitted
```
Here, the local machine doesn't know about per-user groups, so they appear as gids in
`ls` output. `57095` is also `gamblin2`'s uid, but the local machine doesn't know that
`gamblin2` is in the `57095` group.
Unfortunately, it seems that Python's `os.chmod()` just fails silently, setting
permissions to `0o0000` instead of `0o2000`. We can avoid this by ensuring that the file
has a group the user is known to be a member of.
- [x] Add `ensure_known_group()` in the permissions tests (sketched after this list).
- [x] Call `ensure_known_group()` on tempfile in `test_chmod_real_entries_ignores_suid_sgid`.
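A minimal sketch of the helper's intent (Unix-only; the actual test helper may differ):
```python
import os

def ensure_known_group(path: str) -> None:
    # If the file's group is not one the current user belongs to, reassign it
    # to a known group so that a later chmod with setgid bits is permitted.
    known_groups = set(os.getgroups())
    if os.stat(path).st_gid not in known_groups:
        os.chown(path, -1, next(iter(known_groups)))  # -1 leaves the owner unchanged
```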
There are a number of places in our docstrings where we write "list of X" as the type, even though napoleon doesn't actually support this. It ends up causing warnings when generating docs.
Now that we require Python 3, we don't have to rely on types written in docstrings -- we can just use Python type hints and omit the types of args and return values from docstrings.
We should probably do this for all types in docstrings eventually, but this PR focuses on the ones that generate warnings during doc builds.
Some `mypy` annoyances we should consider in the future:
1. Adding some of these type annotations gets you:
```
note: By default the bodies of untyped functions are not checked, consider using --check-untyped-defs [annotation-unchecked]
```
because they are in unannotated functions (like constructors where we don't really need any annotations).
You can silence these with `disable_error_code = "annotation-unchecked"` in `pyproject.toml`
2. Right now we support running `mypy` in Python `3.6`. That means we have to support `mypy` `0.971`, which does not support `disable_error_code = "annotation-unchecked"`, so I just filter `[annotation-unchecked]` lines out in `spack style`.
3. I would rather just turn on `check_untyped_defs` and get more `mypy` coverage everywhere, but that will require about 1,000 fixes. We should probably do that eventually.
4. We could also consider only running `mypy` on newer python versions. This is not easy to do while supporting `3.6`, because you have to use `if TYPE_CHECKING` for a lot of things to ensure that 3.6 still parses correctly. If we only supported `3.7` and above we could use [`from __future__ import annotations`](https://mypy.readthedocs.io/en/stable/runtime_troubles.html#future-annotations-import-pep-563), but we have to support 3.6 for now. Sigh.
- [x] Convert a number of docstring types to Python type hints (a before/after example follows this list)
- [x] Get rid of "list of" wherever it appears
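For illustration, a hypothetical before/after of the docstring conversion (not actual Spack code):
```python
from typing import List, Optional

# Before: "list of str" in the docstring is not understood by napoleon
def tokenize(line, seps):
    """Split a line into tokens.

    Args:
        line (str): line to split
        seps (list of str): separator characters
    """
    return line.split()

# After: type hints carry the types; the docstring only describes meaning
def tokenize(line: str, seps: Optional[List[str]] = None) -> List[str]:
    """Split a line into tokens.

    Args:
        line: line to split
        seps: separator characters
    """
    return line.split()
```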
Local `git` tests will fail with `fatal: transport 'file' not allowed` when using git 2.38.1 or higher, due to a fix for `CVE-2022-39253`.
This was fixed in CI in #33429, but that doesn't help the issue for anyone's local environment. Instead of fixing this with git config in CI, we should ensure that the tests run anywhere.
- [x] Introduce `spack.util.git`.
- [x] Use `spack.util.git.get_git()` to get a git executable, instead of `which("git")` everywhere.
- [x] Make all `git` tests use a `git` fixture that goes through `spack.util.git.get_git()` (a sketch of the fixture idea follows this list).
- [x] Add `-c protocol.file.allow=always` to all `git` invocations under `pytest`.
- [x] Revert changes from #33429, which are no longer needed.
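A rough sketch of the fixture idea (using plain `subprocess` for illustration; the real fixture goes through `spack.util.git.get_git()`):
```python
import shutil
import subprocess
import pytest

@pytest.fixture
def git():
    exe = shutil.which("git")  # the real fixture resolves git via spack.util.git
    if exe is None:
        pytest.skip("git is not available")

    def run(*args, **kwargs):
        # Always allow the 'file' transport so local-repo tests pass on git >= 2.38.1
        cmd = [exe, "-c", "protocol.file.allow=always", *args]
        return subprocess.run(cmd, check=True, text=True, capture_output=True, **kwargs)

    return run
```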
`spack graph` has been reworked to use:
- Jinja templates
- builder objects to construct the template context when DOT graphs are requested.
This made it possible to add a new colored output for DOT graphs that highlights both
the dependency types and the nodes that are needed at runtime for a given spec.
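A toy example of the template-driven approach (template text and context keys are made up, not Spack's actual graph template):
```python
import jinja2

template = jinja2.Template(
    """digraph G {
{% for node in nodes %}  "{{ node.name }}" [color="{{ node.color }}"]
{% endfor %}{% for edge in edges %}  "{{ edge[0] }}" -> "{{ edge[1] }}"
{% endfor %}}"""
)

# The builder object would assemble a context like this from a concretized spec
context = {
    "nodes": [{"name": "zlib", "color": "red"}, {"name": "cmake", "color": "blue"}],
    "edges": [("cmake", "zlib")],
}
print(template.render(**context))
```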
`spack solve` is supposed to show you timings you can compare: setup, ground, solve, etc.,
all in a list. You're also supposed to be able to compare easily across runs. With
`pretty_seconds()` (introduced in #33900), it's easy to miss the units; e.g., spot the
bottleneck here:
```console
> spack solve --timers tcl
setup 22.125ms
load 16.083ms
ground 8.298ms
solve 848.055us
total 58.615ms
```
It's easier to see what matters if these are all in the same units, e.g.:
```console
> spack solve --timers tcl
setup 0.0147s
load 0.0130s
ground 0.0078s
solve 0.0008s
total 0.0463s
```
And the units won't fluctuate from run to run as you make changes.
- [x] Make `spack solve` timings consistent (all in the same units), as before
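A tiny sketch of the formatting idea (values are illustrative):
```python
# Report every phase in seconds with fixed precision so units never vary between runs
phases = {"setup": 0.022125, "load": 0.016083, "ground": 0.008298, "solve": 0.000848}
phases["total"] = sum(phases.values())
for name, seconds in phases.items():
    print(f"{name:<8}{seconds:.4f}s")
```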