This PR fixes two problems with clang/llvm's version detection. clang's
version output looks like this:
```
clang version 11.0.0
Target: x86_64-unknown-linux-gnu
```
This caused clang's version to be misdetected as:
```
clang@11.0.0
Target:
```
This resulted in errors when trying to actually use it as a compiler.
When using `spack external find`, we couldn't determine the compiler
version, resulting in errors like this:
```
==> Warning: "llvm@11.0.0+clang+lld+lldb" has been detected on the system but will not be added to packages.yaml [reason=c compiler not found for llvm@11.0.0+clang+lld+lldb]
```
Changing the regex to only match until the end of the line fixes these
problems.
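A minimal sketch of the failure mode and the fix (illustrative only, not Spack's actual detection code):
```python
import re

output = "clang version 11.0.0\nTarget: x86_64-unknown-linux-gnu\n"

# A pattern that can match across newlines swallows the Target line too.
greedy = re.search(r"version (.*)", output, re.DOTALL)
print(greedy.group(1))  # "11.0.0\nTarget: x86_64-unknown-linux-gnu\n"

# Anchoring the capture at the end of the line yields just the version.
anchored = re.search(r"version (.*)$", output, re.MULTILINE)
print(anchored.group(1))  # "11.0.0"
```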
Fixes: #19473
This adds a new `mark` command that can be used to mark packages as either
explicitly or implicitly installed. Apart from fixing the package
database after installing a dependency manually, it can be used to
implement upgrade workflows as outlined in #13385.
The following commands demonstrate how the `mark` and `gc` commands can be
used to only keep the current version of a package installed:
```console
$ spack install pkgA
$ spack install pkgB
$ git pull # Imagine new versions for pkgA and/or pkgB are introduced
$ spack mark -i -a
$ spack install pkgA
$ spack install pkgB
$ spack gc
```
If there is no new version for a package, `install` will simply mark it as
explicitly installed and `gc` will not remove it.
Co-authored-by: Greg Becker <becker33@llnl.gov>
Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). `spack test` is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run smoke
tests on all of its packages. Historical test logs can be perused with
`spack test results`. Generic smoke tests are included for MPI
implementations and for C, C++, and Fortran compilers, as well as specific
smoke tests for 18 packages.
Inside the test method, individual tests can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running and checking return codes, checking output, and error handling.
This handles the following trickier aspects of testing with direct
support in Spack's package API:
- [x] Caching source or intermediate build files at build time for
use at test time.
- [x] Test dependencies.
- [x] Packages that require a compiler for testing (such as library-only
packages).
See the packaging guide for more details on using Spack testing support.
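For illustration, here is a hedged sketch of a package-level `test` method using `run_test`; the package name, executable, and expected output are made up:
```python
from spack import *  # typical package.py preamble


class Mypkg(Package):
    """Hypothetical package demonstrating the smoke-test API."""

    def test(self):
        # Find the installed executable, run it, check the return code,
        # and look for expected substrings in the output; failures are
        # recorded and remaining checks continue best-effort.
        self.run_test(
            "mypkg",
            options=["--version"],
            expected=["1.2.3"],
            purpose="check that mypkg reports its version",
        )
```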
Included is support for package.py files for virtual packages. This does
not change the Spack interface, but is a major change in internals.
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The `deprecatedProperties` custom validator can now accept a function
to compute a better error message.
Improve error/warning message for deprecated properties
As of #18260, `spack load` and `spack env activate` now use
`prefix_inspections` from the modules configuration to decide
how to modify environment variables.
This updates the modules configuration documentation to describe
how to update environment variables with the `prefix_inspections`
section. This also updates the `spack load` and environments
documentation to refer to the new `prefix_inspections` documentation.
`spack load` and `spack env activate` now use the prefix inspections
defined in `modules.yaml`. This allows users to customize/override
environment variable modifications if desired.
If no `prefix_inspections` configuration is present, Spack uses the
values in the default configuration.
This PR reworks a few attributes in the container subsection of
spack.yaml to permit the injection of custom base images when
generating containers with Spack. In more detail, users can still
specify the base operating system and Spack version they want to use:
```
spack:
  container:
    images:
      os: ubuntu:18.04
      spack: develop
```
in which case the generated recipe will use one of the Spack images
built on Docker Hub for the build stage and the base OS image in the
final stage. Alternatively, they can specify explicitly the two
base images:
```
spack:
  container:
    images:
      build: spack/ubuntu-bionic:latest
      final: ubuntu:18.04
```
and it will be up to them to ensure their consistency.
Additional changes:
* This commit adds documentation on the two approaches.
* Users can now specify OS packages to install (e.g. with apt or yum)
prior to the build (previously this was only available for the
finalized image).
* Configuration handles to skip updating the available system packages
have been added, to facilitate the generation of recipes
permitting deterministic builds.
This commit addresses the case of concretizing a root spec with a
transitive conditional dependency on a virtual package provided
by an external. Before these modifications, default variant values
for the dependency bringing in the virtual package were not
respected, and the external package providing the virtual was added
to the DAG.
The issue stems from two facts:
- Selecting a provider has higher precedence than selecting default variants
- To ensure that an external is preferred, we used a negative weight
To solve it we shift all the provider weights so that:
- External providers have a weight of 0
- Non-external providers have a weight of 10 or more
Using a weight of zero for external providers means that having
an external provider, if present, or not having a provider at all
has the same effect on the higher-priority minimization.
Also fixed a few minor bugs in concretize.lp that were causing
spurious entries in the final answer set, and cleaned leftover
rules out of concretize.lp.
If the default of a multi-valued variant is set to
multiple values, either in package.py or in packages.yaml,
we need to ensure that all the values are present in the
concretized spec.
Since each default value has a weight of 0 and the
variant value is set implicitly by the concretizer,
we need to add a rule to maximize the number of
default values that are used.
This commit introduces a new rule:
real_node(Package) :- not external(Package), node(Package).
that makes it possible to distinguish between an external node and a
real node whose dependencies shouldn't be trimmed. It solves the
case of concretizing ninja with an external Python.
`node_compiler_hard()` means that something explicitly asked for a node's
compiler to be set -- i.e., it's not inherited, it's required. We're
generating this in spec_clauses even for specs in rule bodies, which
results in conditions like this for optional dependencies:
In py-torch/package.py:
depends_on('llvm-openmp', when='%apple-clang +openmp')
In the generated ASP:
declared_dependency("py-torch","llvm-openmp","build")
:- node("py-torch"),
variant_value("py-torch","openmp","True"),
node_compiler("py-torch","apple-clang"),
node_compiler_hard("py-torch","apple-clang"),
node_compiler_version_satisfies("py-torch","apple-clang",":").
The `node_compiler_hard` there means we would have to *explicitly* set
py-torch's compiler to trigger the llvm-openmp dependency, rather than
just letting it be set by preferences. This is wrong; the dependency
should be there regardless of how the compiler was set.
- [x] remove fn.node_compiler_hard() call from spec_clauses when
generating rule body clauses.
If the version list passed to one_of_iff is empty, it still generates a
rule like this:
node_compiler_version_satisfies("fujitsu-mpi", "arm", ":") :- 1 { } 1.
1 { } 1 :- node_compiler_version_satisfies("fujitsu-mpi", "arm", ":").
The cardinality rules on the right and left above are never
satisfiable, so these rules do nothing.
- [x] Skip generating any rules at all for empty version lists.
As reported, conflicts with compiler ranges were not treated
correctly. This commit adds tests to verify the expected behavior
for the new concretizer.
The new rules to enforce a correct behavior involve:
- Adding a rule to prefer the compiler selected for
the root package, if no other preference is set
- Give a strong negative weight to compiler preferences
expressed in packages.yaml
- Maximize on compiler AND compiler version match
Variants of this kind don't have a list of possible
values encoded in the ASP facts. Since all we have
is a validator, the list of possible values includes
just the default value and possibly the value passed
from packages.yaml or the command line.
This is done after the builder has actually built
the specs, to respect the semantics used with the
old concretizer.
Later we could move this to the solver as
a multivalued variant.
This is done after the builder has actually built
the specs, to respect the semantics used with the
old concretizer.
A better approach is to substitute the spec
directly in concretization.
The "none" variant value cannot be combined with
other values.
The '*' wildcard matches anything, including "none".
It's thus relevant in queries, but disregarded in
concretization.
- The test on concretization of anonymous dependencies
has been fixed by raising the expected exception.
- The test on compiler bootstrap has been fixed by
updating the version of GCC used in the test.
Since gcc@2.0 does not support targets later than
x86_64, the new concretizer was looking for a
non-existent spec, i.e. it was correctly trying
to retrieve 'gcc target=x86_64' instead of
'gcc target=core2'.
- The test on gitlab CI needed an update of the target
This commit adds support for specifying rules in
packages.yaml that refer to virtual packages.
The approach is to normalize each configuration in
memory and turn it into an equivalent configuration
without rules on virtual packages. This is possible
if the set of packages to be handled is considered
fixed.
The weight of the target used in concretization is, in order:
1. A specific per package weight, if set in packages.yaml
2. Inherited from the parent, if possible
3. The default target weight (always set)
Generate facts on externals by inspecting
packages.yaml. Added rules in concretize.lp
Added extra logic so that external specs
disregard any conflict encoded in the
package.
In ASP this would be a simple addition to
an integrity constraint:
:- c1, c2, c3, not external(pkg)
Using the Backend API from Python requires some
scaffolding to obtain a default negated statement.
Conflict rules from packages are added as integrity
constraints in the ASP formulation. Most of the code
to generate them has been reused from PyclingoDriver.rules
The new concretizer and the old concretizer solve constraints
in a different way. Here we ensure that a SpackError is raised,
instead of a specific error that made sense in the old concretizer
but probably doesn't in the new one.
Instead of python callbacks, use cardinality constraints for package
versions. This is slightly faster and has the advantage that it can be
written to an ASP program to be executed *outside* of Spack. We can use
this in the future to unify the pyclingo driver and the clingo text
driver.
This makes use of add_weight_rule() to implement cardinality constraints.
add_weight_rule() only has a lower bound parameter, but you can implement
a strict "exactly one of" constraint using it. In particular, we want to
define:
1 {v1; v2; v3; ...} 1 :- version_satisfies(pkg, constraint).
version_satisfies(pkg, constraint) :- 1 {v1; v2; v3; ...} 1.
And we do that like this, for every version constraint:
atleast1(pkg, constr) :- 1 {version(pkg, v1); version(pkg, v2); ...}.
morethan1(pkg, constr) :- 2 {version(pkg, v1); version(pkg, v2); ...}.
version_satisfies(pkg, constr) :- atleast1(pkg, constr), not morethan1(pkg, constr).
:- version_satisfies(pkg, constr), morethan1(pkg, constr).
:- version_satisfies(pkg, constr), not atleast1(pkg, constr).
v1, v2, v3, etc. are computed on the Python side by comparing every
possible package version with the constraint.
Computing things like this has the added advantage that if v1, v2, v3,
etc. comprise *all* possible versions of a package, we can just omit the
rules for the constraint under consideration. This happens pretty
frequently in the Spack mainline.
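A hedged sketch of that Python-side computation (the names are illustrative, not the solver's actual code):
```python
def satisfying_versions(pkg, constraint, possible_versions):
    """Compare every known version of pkg against a constraint.

    `possible_versions` maps package names to version objects that
    support `satisfies()`; all names here are illustrative.
    """
    matching = [v for v in possible_versions[pkg] if v.satisfies(constraint)]
    if len(matching) == len(possible_versions[pkg]):
        return None  # trivially true: omit the rules for this constraint
    return matching
```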
- [x] Solver now uses the Python interface to clingo
- [x] can extract unsatisfiable cores from problems when things go wrong
- [x] use Python callbacks for versions instead of choice rules (this may
ultimately hurt performance)
There are now three parts:
- `SpackSolverSetup`
- Spack-specific logic for generating constraints. Calls methods on
`AspTextGenerator` to set up the solver with a Spack problem. This
shouldn't change much from solver backend to solver backend.
- `ClingoDriver`
- The solver driver provides methods for SolverSetup to generate an ASP
program, send it to `clingo` (run as an external tool), and parse the
output into function tuples suitable for `SpecBuilder`.
- The interface is generic and should not have to change much for a
driver for, say, the Clingo Python interface.
- `SpecBuilder`
- Builds Spack specs from function tuples parsed by the solver driver.
The original implementation was difficult to read, as it only had
single-letter variable names. This converts all of them to descriptive
names, e.g., P -> Package, V -> Virtual/Version/Variant, etc.
To handle unknown compilers properly in tests (and elsewhere), we need to
add unknown compilers from the spec to the list of possible compilers.
Rework how the compiler list is generated and includes compilers from
specs if the existence check is disabled.
Specs like hdf5 ^mpi were unsatisfiable because we added a requirement
for `node("mpi")`. This can't be resolved because "mpi" is not a
package.
- [x] Introduce `virtual_node()`, which says *some* provider must be in
the DAG.
This adds compiler flags to the ASP solve so that we can have conditions
based on them in the solve. But, it keeps order out of the solve to
avoid unneeded complexity and combinatorial explosions.
The solver determines which flags are on a spec, but the order is
determined by DAG precedence (children's flags take precedence over
parents' and are added on the right) and by command-line order (the
order in which flags were specified on the command line is respected).
The solver is responsible for determining when to propagate flags, when
to inherit them from other nodes, when to take them from compiler
preferences, etc.
Weight microarchitectures and prefer more recent ones. Also disallow
nodes where the compiler does not support the selected target.
We should revisit this at some point as it seems like if I play around
with the compiler support for different architectures, the solver runs
very slowly. See notes in comments -- the bad case was gcc supporting
broadwell and skylake with clang maxing out at haswell.
We didn't have a cardinality constraint for multi-valued variants, so the
solver wasn't filling them in.
- [x] add a requirement for at least one value for multi-valued variants
Variants like `cpu_target` on `openblas` don't have defined values, but
they have a default. Ensure that the default is always a possible value
for the solver.
Spack was generating the same dependency constraints twice in the output ASP:
```
declared_dependency("abinit", "hdf5", "link")
:- node("abinit"),
variant_value("abinit", "mpi", "True"),
variant_value("abinit", "mpi", "True").
```
This was because `AspFunction` was modifying itself when called.
- [x] fix `AspFunction` so that every call returns a new object
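A hedged sketch of the behavior change (the real class has more to it):
```python
class AspFunction:
    """An ASP function term such as variant_value("abinit", "mpi", "True")."""

    def __init__(self, name, args=()):
        self.name = name
        self.args = tuple(args)

    def __call__(self, *args):
        # The bug: this used to append to self.args, so calling the same
        # function twice accumulated arguments. Now every call returns a
        # fresh object.
        return AspFunction(self.name, self.args + args)
```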
- [x] Add support for packages.yaml and command-line compiler preferences.
- [x] Rework compiler version propagation to use optimization rather than
hard logic constraints
Technically the ASP output order does not matter, but it's hard to diff
two different solve formulations unless we order it.
- [x] make sure ASP output is emitted in a deterministic order (by
sorting all hash keys)
This needs more thought, as I am pretty sure the weights are not correct.
Or, at least, I'm not convinced that they do what we want in all cases.
See note in concretize.lp.
The solver now prefers newer versions, like the old concretizer. Version
preferences are taken from packages.yaml, then preferred=True, then the
package definition, and finally each version itself.
Competition output only prints out one model, so we do not have to
unnecessarily parse all the non-optimal models. We'll just look at the
best model and bring that in.
In practice, this saves a lot of JSON parsing and spec construction time.
Clingo actually has an option to output JSON -- use that instead of
parsing the raw output ourselves.
This also allows us to pick the best answer -- modify the parser to
*only* construct a spec for that one rather than building all of them
like we did before.
- Instead of using default logic, handle variant defaults by minimizing
the number of non-default variants in the solution.
- This actually seems to be pretty fast, and it fixes the long-standing
issue that writing this:
spack install hdf5 ^mpich
will fail if you don't specify hdf5+mpi. With optimization and
allowing enums to be enumerated, the solver seems to be able to quickly
discover that +mpi is the only way hdf5 can depend on mpich, and it
forces the switch to be thrown.
Use '1 { version(x); version(y); version(z) } 1.' instead of declaring
conflicts for non-matching versions. This keeps the sense of version
clauses positive, which will allow them to be used more easily in
conditionals later.
Also refactor `spec_clauses()` method to return clauses that can be used
in conditions, etc. instead of just printing out facts.
- This handles setting the compiler and falling back to a default
compiler, as well as providing default values for compilers/compiler
versions.
- Versions still aren't quite right -- you can't properly override
versions on compiler specs.
- Model architecture default settings and propagation off of variants
- Leverage ASP default logic to set architecture to default if it's not
set otherwise.
- Move logic out of Python and into concretize.lp as first-order rules.
We are relying on default logic in the variant handling in that we set a
default value if we never see `variant_set(P, V, X)`.
- Move the logic for this into `concretize.lp` instead of generating it
for every package.
- For programs that don't have explicit variant settings, clingo warns
that variant_set(P, V, X) doesn't appear in any rule head, because a
setting is never generated.
- Specifically suppress this warning.
- moving the dump logic into spack.solver.asp.solve() allows us to print
out useful debug info sooner
- the prior approach required a successful solve to print out anything.
According to the documentation for spack and pkg-config,
$view/share/pkgconfig should also be a valid place to look
for package config files. This commit ensures that when
`spack env activate $dir` is called, the environment has this
directory in PKG_CONFIG_PATH.
As of #13100, Spack installs the dependencies of a _single_ spec in parallel.
Environments, when installed, can only get parallelism from each individual
spec, as they're installed in order. This PR makes entire environments build
in parallel by extending Spack's package installer to accept multiple root
specs. The install command and Environment class have been updated to use
the new parallel install method.
The specs and kwargs for each *uninstalled* package (when not force-replacing
installations) of an environment are collected, passed to the `PackageInstaller`,
and processed using a single build queue.
This introduces a `BuildRequest` class to track install arguments, and it
significantly cleans up the code used to track package ids during installation.
Package ids in the build queue are now just DAG hashes as you would expect.
Other tasks:
- [x] Finish updating the unit tests based on `PackageInstaller`'s use of
`BuildRequest` and the associated changes
- [x] Change `environment.py`'s `install_all` to use the `PackageInstaller` directly
- [x] Change the `install` command to leverage the new installation process for multiple specs
- [x] Change install output messages for external packages, e.g.:
`[+] /usr` -> `[+] /usr (external bzip2-1.0.8-<dag-hash>)`
- [x] Fix incomplete environment install's view setup/update and not confirming all
packages are installed (?)
- [x] Ensure externally installed package dependencies are properly accounted for in
remaining build tasks
- [x] Add tests for coverage (if insufficient and we can identify the appropriate, uncovered non-comment lines)
- [x] Add documentation
- [x] Resolve multi-compiler environment install issues
- [x] Fix issue with environment installation reporting (restore CDash/JUnit reports)
This change makes improvements to the `spack ci rebuild` command
which supports running gitlab pipelines on PRs from forks. Much
of this has to do with making sure we can run without the secrets
previously required for running gitlab pipelines (e.g. signing key,
aws credentials, etc). Specific improvements in this PR:
Check if spack has precisely one signing key, and use that information
as an additional constraint on whether or not we should attempt to sign
the binary package we create.
Also, if spack does not have at least one public key, add the install
option "--no-check-signature"
If we are running a pipeline without any profile or environment
variables allowing us to push to S3, the pipeline could still
successfully create a buildcache in the artifacts and move on. So
just print a message and move on if pushing either the buildcache
entry or cdash id file to the remote mirror fails.
When we attempt to generate a package or gpg key index on an S3
mirror, and there is nothing to index, just print a warning and
exit gracefully rather than throw an exception.
Support the use of PR-specific mirrors for temporary binary package
storage. This is a quality-of-life improvement for developers,
providing a place to store binaries over the lifetime of a PR, so
that they must only wait for packages to rebuild from source when
they push a new commit that makes it necessary.
Replace two-pass install with a single pass and the new option:
--require-full-hash-match. Doing this also removes the need to
save a copy of the spack.yaml to be copied over the one spack
rewrites in between the two spack install passes.
Work around a mirror configuration issue caused by using
spack.util.executable to do the package installation.
* Update pipeline trigger jobs for PRs from forks
Moving to PRs from forks relies on external synchronization script
pushing special branch names. Also secrets will only live on the
spack mirror project, and must be propagated to the E4S project via
variables on the trigger jobs.
When this change is merged, pipelines will not run until we update
the "Custom CI configuration path" in the Gitlab CI Settings, as the
name of the file has changed to better reflect its purpose.
* Arg to MirrorCollection is used exclusively, so add main remote mirror to it
* Compute full hash less frequently
* Add tests covering index generation error handling code
Since #11598 sbang has been installed within the install_tree. This doesn’t play
nicely with install_tree padding, since sbang can’t do its job if it is installed in a
long path (this is the whole point of sbang).
This PR changes the padding specification. Instead of $padding inside paths,
we now have a separate `padding:` field in the `install_tree` configuration.
Previously, the `install_tree` looked like this:
```
/path/to/opt/spack_padding_padding_padding_padding_padding/
    bin/
        sbang
    .spack-db/
        ...
    linux-rhel7-x86_64/
        ...
```
This PR updates things to look like this:
```
/path/to/opt/
    bin/
        sbang
    spack_padding_padding_padding_padding_padding/
        .spack-db/
            ...
        linux-rhel7-x86_64/
            ...
```
So padding is added at the start of all install prefixes *within* the unpadded
root. The database and all installations still go under the padded root.
This ensures that `sbang` is in the shortest possible path while also allowing
us to make long paths for relocatable binaries.
As of #18205, all packages must be pickle-able to be installed by
Spack.
This adds a test to check that each package can be pickled. If any
package fails to pickle, the test keeps going and collects the names
of all failed packages; it then takes the first one that failed and
attempts to re-pickle it, generating the full stack trace for the
failed pickle attempt.
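A hedged sketch of that test strategy (the repository API calls are illustrative):
```python
import pickle


def test_all_packages_pickleable(repo):
    failed = []
    for name in repo.all_package_names():
        try:
            pickle.dumps(repo.get(name))
        except Exception:
            failed.append(name)  # keep going; collect every failure

    if failed:
        # Re-pickle the first failure so the full stack trace surfaces.
        pickle.dumps(repo.get(failed[0]))
```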
Spack creates a separate process to do package installation. Different
operating systems and Python versions use different methods to create
it but up until Python 3.8 both Linux and Mac OS used "fork" (which
duplicates process memory, file descriptor table, etc.).
Python >= 3.8 on Mac OS prefers creating an entirely new process
(referred to as the "spawn" start method) because "fork" was found to
cause issues (in other words "spawn" is the default start method used
by multiprocessing.Process). Spack was dependent on the particular
behavior of fork to replicate process memory and transmit file
descriptors.
This PR refactors the Spack internals to support starting a child
process with the "spawn" method. To achieve this, it makes the
following changes:
- ensure that the package repository and other global state are
transmitted to the child process
- ensure that file descriptors are transmitted to the child process in
a way that works with multiprocessing and spawn
- make all the state needed for the build process and tests picklable
(package, stage, etc.)
- move a number of locally-defined functions into global scope so that
they can be pickled
- rework tests where needed to avoid using local functions
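A minimal, standard-library-only illustration of the "spawn" start method the list above refers to:
```python
import multiprocessing


def build(serialized_state):
    # Under "spawn" nothing is inherited from the parent; all state must
    # arrive through picklable arguments like this one.
    print("building with", serialized_state)


if __name__ == "__main__":
    ctx = multiprocessing.get_context("spawn")  # "fork" copies the parent
    proc = ctx.Process(target=build, args=({"pkg": "zlib"},))
    proc.start()
    proc.join()
```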
This PR also reworks sbang tests to work on macOS, where temporary
directories are deeper than the Linux sbang limit. We make the limit
platform-dependent (macOS supports 512-character shebangs).
See: #14102
In compiler bootstrapping pipelines, we add an artificial dependency
between jobs for packages to be built with a bootstrapped compiler
and the job building the compiler. To find the right bootstrapped
compiler for each spec, we compared not only the compiler spec to
that required by the package spec, but also the architectures of
the compiler and package spec.
But this prevented us from finding the bootstrapped compiler for a
spec in cases where the architecture of the compiler wasn't exactly
the same as the spec. For example, a gcc@4.8.5 might have
bootstrapped a compiler with haswell as the architecture, while the
spec had broadwell. By comparing the families instead of the architecture
itself, we know that we can build the zlib for broadwell with the gcc for
haswell.
Currently, full JSON output is the only machine readable option for `spack find`
in an environment.
`spack find --format` is also designed to be machine readable, but we print extra
headers in environments.
- [x] don't print headers in `spack find` output when in an environment
When invoking "buildcache list" multiple times, the command was
reporting no specs in the cache the second time around. The
presence of an up-to-date index was causing the internal
representation to be left un-initialized.
Added a command to set up Spack for our tutorial at
https://spack-tutorial.readthedocs.io.
The command does some common operations we need first-time users to do.
Specifically:
- checks out a particular branch of Spack
- deletes spurious configuration in `~/.spack` that might be
left over from prior parts of the tutorial
- adds a mirror and trusts its public key
Previously, we hardcoded a list of Spack versions which could be used by the containerize command.
This PR removes that list. It's a maintenance burden when cutting a release, and prevents older versions of Spack from creating containers to be used by newer versions.
There was an error introduced in #19209 where `full_hash()` and
`build_hash()` are called on older specs that we've read in from the DB;
older specs may not be able to compute these hashes (e.g. if they have
removed patches used in computing the full_hash).
When serializing a Spec, we want to generate the full/build hash when
possible, but we need a mechanism to skip it for Specs that have
themselves been read from YAML (and may not support this).
To get around this ambiguity and to fix the issue, we:
- Add an attribute to the spec called `_hashes_final`, that is `True`
if we can't lazily compute `build_hash` and `full_hash`.
- Set `_hashes_final` to `False` for new specs (i.e., lazily
computing hashes is ok)
- Set `_hashes_final` to `True` for concrete specs read in via
`from_node_dict`, as it may be too late to recompute hashes.
- Compute and write out all hashes in `node_dict_with_hashes` *if
possible*.
Effectively what this means is that we can round-trip specs that are
missing `_build_hash` and `_full_hash` without recomputing them, but for
all new specs, we'll compute them and store them. So Spack should work
fine with old DBs now.
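A hedged sketch of the write-out rule described above (attribute and method names follow the text; the body is illustrative):
```python
def node_dict_with_hashes(spec):
    node = spec.to_node_dict()  # assumed existing serializer
    if not spec._hashes_final:
        # New spec: computing the hashes lazily is safe.
        node["build_hash"] = spec.build_hash()
        node["full_hash"] = spec.full_hash()
    else:
        # Spec read back from YAML: write hashes only if we already have
        # them, and never try to recompute.
        if spec._build_hash:
            node["build_hash"] = spec._build_hash
        if spec._full_hash:
            node["full_hash"] = spec._full_hash
    return node
```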
This fixes sbang relocation when using old binary packages, and updates
code in `relocate.py`.
There are really two places where we would want to handle an `sbang`
relocation:
1. Installing an old package that uses `sbang` with shebang lines like
`#!/bin/bash $spack_prefix/sbang`
2. Installing a *new* package that uses `sbang` with shebang lines like
`#!/bin/sh $install_tree/sbang`
The second case is actually handled automatically by our text relocation;
we don't need any special relocation logic for new shebangs, as our
relocation logic already changes references to the build-time
`install_tree` to point to the `install_tree` at install-time.
Case 1 was not properly handled -- we would not take an old binary
package and point its shebangs at the new `sbang` location. This PR fixes
that and updates the code in `relocate.py` with some notes.
There is one more case we don't currently handle: if a binary package is
created from an installation in a short prefix that does *not* need
`sbang` and is installed to a long prefix that *does* need `sbang`, we
won't do anything. We should just patch the file as we would for a normal
install. In some upcoming PR we should probably change *all* `sbang`
relocation logic to be idempotent and to apply to any sort of shebang'd
file. Then we'd only have to worry about which files to `sbang`-ify at
install time and wouldn't need to care about these special cases.
fixes #15183
- Moved the container-related content from
workflows.rst into containers.rst
- Deleted the docker_for_developers.rst file,
since it describes an outdated procedure
Co-authored-by: Axel Huebl <a.huebl@hzdr.de>
Co-authored-by: Omar Padron <omar.padron@kitware.com>
`config.get_config` now caches the results and returns the same
configuration if called multiple times with the same arguments
(i.e. the same section and scope).
As a consequence, it is expected that users will always call
update methods provided in the `config` module after changing
the configuration (even if manipulating it as a Python nested
dictionary). The following two examples should cover most
scenarios:
* Most configuration update logic in the core (e.g. relating to
adding a new compiler) should call `Configuration.update_config`
* Tests that need to change the global configuration should use the
newly-provided `config.replace_config` function.
(if neither of these methods apply, then the essential requirement
is to use a method marked as `_config_mutator`)
Failure to call such a function after modifying the configuration
will lead to unexpected results (e.g. calling `get_config` after
changing the configuration will not reflect the changes since the
first call to get_config).
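An illustrative usage pattern under the new caching behavior (the call shapes are assumptions based on the text, not exact signatures):
```python
import spack.config

# The first call populates the cache; later identical calls return the
# same object, so in-place edits alone will not be noticed.
compilers = spack.config.get_config("compilers", scope="user")
compilers.append({"compiler": {"spec": "gcc@9.3.0"}})  # illustrative entry

# Tell the configuration about the change explicitly so later
# get_config calls see it (per the text, core code should use
# Configuration.update_config).
spack.config.config.update_config("compilers", compilers, scope="user")
```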
* "spack install" now has a "--require-full-hash-match" option, which
forces Spack to skip an available binary package when the full hash
doesn't match. Normally only a DAG-hash match is required, which
ensures equivalent Specs, but does not account for changing logic
inside the associated package.
* Add a local binary cache index which tracks specs that have a binary
install available in a remote binary cache. It is updated with
"spack buildcache list" or for a given spec when a binary package
is retrieved for that Spec.
Spack has a fallback for hash checking with md5sums that may not be
supported in earlier versions of Python 3.x. The comments in the
Spack code acknowledge that this is best effort and may fail, but
recent vermin checks (running as part of our CI) reject this. This
disables vermin checks for that fallback.
* enable flatcc to be built with gcc/9.X.X
* add static option for building libyogrt
* cleanup
* Initial working version
* rework new oneapi wrappers
* tested and removed my initials from source
* cleanup
* Update __init__.py
* remove whitespace
* working now with mods for testing, detection. Detection for oneapi is working, but entry needs to be modified to add link path for libimf.so. Cleared cruft for old Intel versions
* fixed some formatting
* cleanup
* flake8 cleanup
* flake8
* fixed syntax of compiler version detection tests
* fixed syntax of compiler version detection tests
modified: detection.py
* fix typo
* fixes for compilers tests
* remove erroneous tests for outdated -std= flags, remove ifx version check (output won't parse)
Co-authored-by: Frank Willmore <willmore@anl.gov>
`sbang` now lives at https://github.com/spack/sbang, and it has its own
test suite that's more extensive than what's in Spack. We'll leave sbang
tests to sbang from now on, and just vendor `bin/sbang` directly.
Remaining `sbang` tests have to do with patching files, not with
`sbang`'s functionality.
This update also fixes a bug with `sbang` and multiple command line
arguments that was introduced in #19529. See:
* https://github.com/spack/sbang/pull/1
* https://github.com/spack/sbang/pull/2
- [x] include latest `sbang` from https://github.com/spack/sbang
- [x] remove old `sbang` tests from Spack
- [x] update `COPYRIGHT` and `cmd/license.py`
`sbang` was previously a bash script but did not need to be. This
converts it to a plain old POSIX shell script and adds some options. This
also allows us to simplify sbang shebangs to `#!/bin/sh /path/to/sbang`
instead of `#!/bin/bash /path/to/sbang`.
The new script passes shellcheck (with a few exceptions noted in the file)
- [x] `SBANG_DEBUG` env var enables printing what *would* be executed
- [x] `sbang` checks whether it has been passed an option and fails gracefully
- [x] `sbang` will now fail if it can't find a second shebang line, or if
the second line happens to be sbang (avoid infinite loops)
- [x] add more rigorous tests for `sbang` behavior using `SBANG_DEBUG`
PHP supports an initial shebang, but its comment syntax can't handle our 2-line
shebangs. So, we need to embed the 2nd-line shebang comment to look like a
PHP comment:
<?php #!/path/to/php ?>
This adds patching support to the sbang hook and support for
instrumenting php shebangs.
This also patches `phar`, which is a tool used to create php packages.
`phar` itself has to add sbangs to those packages (as phar archives
apparently contain UTF-8, as well as binary blobs), and `phar` sets a
checksum based on the contents of the package.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
`sbang` is not always accessible to users of packages, e.g., if Spack
is installed in someone's home directory and they deploy software
for others. Avoid this by:
1. Always installing the `sbang` script in the `install_tree`
2. Relocating binaries to point to the copy in the `install_tree`
and not the one in the Spack installation.
This PR also:
- ensures that `sbang` is reinstalled if it is modified in Spack
- adds tests
- updates the way `gobject-introspection` patches Makefiles
to support `sbang`
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
The logic in `config.py` merges lists correctly so that list elements
from higher-precedence config files come first, but the way we merge
`dict` elements reverses the precedence.
Since `mirrors.yaml` relies on `OrderedDict` for precedence, this bug
causes mirrors in lower-precedence config scopes to be checked before
higher-precedence scopes.
We should probably convert `mirrors.yaml` to use a list at some point,
but in the meantime here's a fix for `OrderedDict`.
- [x] ensuring that keys are ordered correctly in `OrderedDict` by
re-inserting keys from the destination `dict` after adding the keys from
the source `dict`.
- [x] also simplify the logic in `merge_yaml` by always reinserting
common keys -- this preserves mark information without all the special
cases, and makes it simpler to preserve insertion order.
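A hedged sketch of the reinsertion trick (simplified; the real `merge_yaml` also preserves YAML marks):
```python
from collections import OrderedDict


def merge_ordered(dest, source):
    """Merge source (higher precedence) over dest, keeping source's keys
    first in iteration order."""
    merged = OrderedDict()
    for key, value in source.items():
        merged[key] = value
    for key, value in dest.items():  # re-insert destination keys after
        if key not in merged:
            merged[key] = value
    return merged
```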
Assuming a default spack configuration, if we run this:
```console
$ spack mirror add foo https://bar.com
```
Results before this change:
```console
$ spack config blame mirrors
--- mirrors:
/Users/gamblin2/src/spack/etc/spack/defaults/mirrors.yaml:2 spack-public: https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/
/Users/gamblin2/.spack/mirrors.yaml:2 foo: https://bar.com
```
Results after:
```console
$ spack config blame mirrors
--- mirrors:
/Users/gamblin2/.spack/mirrors.yaml:2 foo: https://bar.com
/Users/gamblin2/src/spack/etc/spack/defaults/mirrors.yaml:2 spack-public: https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/
```
Shell integration no longer requires setting `SPACK_ROOT`, so we can
simplify the documentation on it. The docs on shell support and using
packages are getting a bit old, and information on `spack load` (which
seems to be everyone's most common way of using packages) is hard to
find.
This PR simplifies the shell documentation to remove SPACK_ROOT, and also
moves some sections around for clearer organization.
- [x] make docs on sourcing setup scripts clearer and simpler
- [x] introduce `spack load` early in the basic usage guide instead of
burying it in the module docs
- [x] clean up module docs so that `spack module tcl loads` comes later
- [x] be clear about the different ways to use packages so that the users
can find the docs better.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
fixes #19476
Module file content is written to file in a
temporary location and read back to be analyzed
by unit tests.
The approach to patch "open" and write to a
StringIO in memory has been abandoned, since
over time other operations insisting on the
filesystem have been added to the module file
generator.
Synchronization on GitHub macOS runners seems to be very slow, and
frequently the foreground/background tests fail due to the race this
causes. This increases the tolerance for slowness a bit more, to allow up
to 4 spurious output lines in the tests.
This should hopefully result in no more false negatives on these tests
for macOS on GitHub.
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f73867a32b4a96e980e31a433ed3c0037fd.
* Add qgraf
* Update package.py
Changes from review
* Changes from MR
* Fix for URLs containing @ symbol
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Adding AOCC compiler to the Spack community
The AOCC compiler system offers a high level of advanced optimizations, multi-threading and processor support that includes global optimization, vectorization, inter-procedural analyses, loop transformations, and code generation. AMD also provides highly optimized libraries, which extract the optimal performance from each x86 processor core when utilized. The AOCC Compiler Suite simplifies and accelerates development and tuning for x86 applications.
* Added unit tests for detection and flags for AOCC
* Addressed reviewers' comments w.r.t. version checks and url/checksum-related line lengths
Co-authored-by: Test User <spack@example.com>
* ADD: testing to dev-build command
* RM: mutually exclusive group for testing in parser
* FIX: test option to subparser and not testing
* ADD: spack-completion.bash
* RM: local devbuildcosmo cmd
* FIX: bad merge --drop-in -b --before options forgotten
* FIX: --test place in spack-completion.bash
* FIX: typo
* FIX: blank line removing
* FIX: trailing white space
Co-authored-by: Elsa Germann <egermann@tsa-ln002.cm.cluster>
The package list at https://spack.readthedocs.io/en/latest/package_list.html claims "it is automatically generated based on the packages in the latest Spack release" but it is actually based on the develop branch. This leads to confusion when users find that e.g. herwigpp is included in the list, but it cannot be found when they install the latest release. That latest release has a package list at https://spack.readthedocs.io/en/stable/package_list.html which does indeed not include herwigpp.
Changing the language from "the latest Spack release" to "this Spack version" might make that clearer. Maybe.
* Add nvhpc compiler definition: "spack compiler add" will now look
for instances of the NVIDIA HPC SDK compiler executables
(nvc, nvc++, nvfortran) in supplied paths
* Add the nvhpc package which installs the nvhpc compiler
* Add testing for nvhpc detection and C++-standard/pic flags
Co-authored-by: Scott McMillan <smcmillan@nvidia.com>
Output was, e.g. `Executables in /bin and /,u,s,r,/,b,i,n are both associated with the same spec xz@5.2.2`, will be `Executables in /bin and /usr/bin are both associated with the same spec xz@5.2.2`.
Previously config.guess and config.sub were patched only
in the root of the source path.
This modification extends the previous behavior to patch every
config.guess or config.sub file, even in subfolders, if need be.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* allow environments to specify dev-build packages
* spack develop and spack undevelop commands
* never pull dev-build packages from bincache
* reinstall dev_specs when code has changed; reinstall dependents too
* preserve dev info paths and versions in concretization as a special variant
* move install overwrite transaction into installer
* move dev-build argument handling to package.do_install
now that specs are dev-aware, package.do_install can add
necessary args (keep_stage=True, use_cache=False) to dev
builds. This simplifies driving logic in cmd and env._install
* allow 'any' as wildcard for variants
* spec: allow anonymous dependencies
raise an error when constraining by or normalizing an anonymous dep
refactor concretize_develop to remove dev_build variant
refactor tests to check for ^dev_path=any instead of +dev_build
* fix variant class hierarchy
* autotools: add attribute to delete libtool archives .la files
According to Autotools Mythbuster (https://autotools.io/libtool/lafiles.html)
libtool archive files are mostly vestigial, but they might create issues
when relocating binary packages as shown in #18694.
For GCC specifically, most distributions remove these files with
explicit commands:
https://git.stg.centos.org/rpms/gcc/blob/master/f/gcc.spec#_1303
Considering all of that, this commit adds an easy way for each
AutotoolsPackage to remove every .la file that has been installed.
The default, for the time being, is to keep them, consistent
with what Spack was doing previously.
* autotools: delete libtool archive files by default
Following review, this commit changes the default for
libtool archive file deletion and adds tests to verify
the behavior.
This commit refactors the computation of the search path
for aclocal in its own method, so that it's easier to reuse
for packages that need to have a custom autoreconf phase.
Co-authored-by: Toyohisa Kameyama <kameyama@riken.jp>
This reverts #18359 and follow-on PRs intended to address issues with
#18359 because that PR changes the hash of all specs. A future PR will
reintroduce the changes.
* Revert "Fix location in spec.yaml where we look for full_hash (#19132)"
* Revert "Fix fetch of spec.yaml files from buildcache (#19101)"
* Revert "Merge pull request #18359 from scottwittenburg/add-binary-distribution-cache-manager"
When we attempt to determine whether a remote spec (in a binary mirror)
is up-to-date or needs to be rebuilt, we compare the full_hash stored in
the remote spec.yaml file against the full_hash computed from the local
concrete spec. Since the full_hash moved into the spec (and is no longer
at the top level of the spec.yaml), we need to look there for it. This
oversight from #18359 was causing all specs to get rebuilt when the
full_hash wasn't found at the expected location.
This change makes sure that when we run the pipeline job that updates
the buildcache package index on the remote mirror, we also update the
key index. The public keys corresponding to the signing keys used to
sign the packages were pushed to the mirror as a part of creating the
buildcache index, so this is just ensuring those keys are reflected
in the key index.
Also, this change makes sure the "spack buildcache update-index"
job runs even when there may have been pipeline failures, since we
would like the index always to reflect the true state of the mirror.
Since those files currently exist in buildcaches (in S3 buckets) with
potentially different content types, we should be less restrictive in
what content types we accept when attempting to fetch them. This PR
removes the content type constraint so any file with the matching
name will be found.
* Remove duplication of reconstructed RPATHs caused by multiple
identical entries in prefixes dictionary
* Don't rewrite RPATHs if relative RPATHs are unchanged because the
directory layout is unchanged
* Need to check that the binary is not a Mach-O binary in a Linux package or an ELF binary in a macOS package.
* use sys.platform
* Darwin -> darwin for sys.platform
* Rework spack.util.web.list_url()
list_url() now accepts an optional recursive argument (default: False)
for controlling whether to only return files within the prefix url or to
return all files whose path starts with the prefix url. Allows for the
most efficient implementation for the given prefix url scheme. For
example, only recursive queries are supported for S3 prefixes, so the
returned list is trimmed down if recursive == False, but the native
search is returned as-is when recursive == True. Suitable
implementations for each case are also used for file system URLs.
* Switch to using an explicit index for public keys
Switches to maintaining a build cache's keys under build_cache/_pgp.
Within this directory is an index.json file listing all the available
keys and a <fingerprint>.pub file for each such key.
- Adds spack.binary_distribution.generate_key_index()
- (re)generates a build cache's key index
- Modifies spack.binary_distribution.build_tarball()
- if tarball is signed, automatically pushes the key used for signing
along with the tarball
- if regenerate_index == True, automatically (re)generates the build
cache's key index along with the build cache's package index; as in
spack.binary_distribution.generate_key_index()
- Modifies spack.binary_distribution.get_keys()
- a build cache's key index is now used instead of programmatic
listing
- Adds spack.binary_distribution.push_keys()
- publishes keys from Spack's keyring to a given list of mirrors
- Adds new spack subcommand: spack gpg publish
- publishes keys from Spack's keyring to a given list of mirrors
- Modifies spack.util.gpg.Gpg.signing_keys()
- Accepts optional positional arguments for filtering the set of keys
returned
- Adds spack.util.gpg.Gpg.public_keys()
- As spack.util.gpg.Gpg.signing_keys(), except public keys are
returned
- Modifies spack.util.gpg.Gpg.export_keys()
- Fixes an issue where GnuPG would prompt for user input if trying to
overwrite an existing file
- Modifies spack.util.gpg.Gpg.untrust()
- Fixes an issue where GnuPG would fail for inputs that were not key
fingerprints
- Modifies spack.util.web.url_exists()
- Fixes an issue where url_exists() would throw instead of returning
False
* rework gpg module/fix error with very long GNUPGHOME dir
* add a shim for functools.cached_property
* handle permission denied error in gpg util
* fix tests/make gpgconf optional if no socket dir is available
Update pipelines documentation to describe how 'tags', 'variables',
'image', 'before_script', 'script', and 'after_script' can be
supplied at the top level, to be used by any of the runner mappings,
and also overridden by any of the runner mappings.
Also show an example of capturing the custom spack SHA at pipeline
generation time, so all jobs are sure to run with the same version
of spack, as a means to illustrate the $env:VARIABLE_NAME syntax.
* Use the config path instead of the basename
* Removing unused variables
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Test
Making sure that if there are two include config files with the same basename, they are both applied
* Edit test assert
Co-authored-by: Greg Becker <becker33@llnl.gov>
Fixes #18441
When writing an environment, there are cases where the lock file for
the environment may be removed. In this case there was a period
between removing the lock file and writing the new manifest file
where an exception could leave the manifest in its old state (in
which case the lock and manifest would be out of sync).
This adds a context manager which is used to restore the prior lock
file state in cases where the manifest file cannot be written.
This is a special case of overriding, since each section is being matched with the current spec.
The trailing ':' for sections with override is now removed when parsing the configuration, so the special handling for the modules configuration stopped working, but this went unnoticed.
`spack install --yes-to-all` doesn't actually make the build non-interactive,
but that is why people typically use it. This documents that you must also
specify `--no-checksum` for a fully non-interactive build.
* Modules: Deduplicate suffixes but don't sort them.
The suffixes' order is defined by the order in which they appear in the configuration file.
* Modules: Modify tests to use spack_yaml.load_config.
spack_yaml.load_config ensures that the configuration is stored in an ordered manner. Without this change, the behavior of the tests did not match Spack's.
* Modules: Tweak the suffixes test to better catch ordering issues.
* spack config: default modification scope can be an environment
The previous model was that environments are the highest priority config
scope for config reading operations, but were not considered for config
writing operations. Now, the active environment is the highest priority
config scope for both reading and writing operations.
Now spack config add, spack external find and spack compiler set environment
configuration in the environment by default if an environment is active. This is a
change in default behavior for these routines, but better matches the mental
model for an environment taking precedence over the user's default config file.
* add scope argument to 'spack external find' to choose non-default scope
* Increase testing for config modifications on environments
Co-authored-by: Gregory Becker <becker33@llnl.gov>
The 'external_modules' attribute on a Spec, when read from a YAML
configuration file, may contain extra formatting that is lost when
that Spec is written-to/read-from JSON format. This was resulting in
a hashing instability (when the Spec was read back, it would report a
different hash). This commit adds a function which removes the extra
formatting from 'external_modules' as it is passed to the Spec in
__init__ to ensure a consistent hash.
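A hedged sketch of the normalization (the function name and exact rule are illustrative):
```python
def _normalize_external_modules(external_modules):
    """Coerce YAML-loaded module entries to plain strings so the spec
    hashes identically after a YAML -> JSON -> YAML round trip."""
    if external_modules is None:
        return None
    return [str(module) for module in external_modules]
```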
As detailed in https://bugs.python.org/issue33725, starting new
processes with 'fork' on Mac OS is not guaranteed to work in general.
As of Python 3.8 the default process spawning mechanism was changed
to avoid this issue.
Spack depends on the fork-based method to preserve file descriptors
transparently, to preserve global state, and to avoid pickling some
objects. An effort is underway to remove dependence on fork-based
process spawning (see #18205). In the meantime, this allows Spack to
run with Python 3.8 on Mac OS by explicitly choosing to use 'fork'.
Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
I know that it's just an example, but I was trying to figure out what was going on and it wasn't making sense....
`tput sgr0` resets the terminal state (http://linuxcommand.org/lc3_adv_tput.php) and I can't see any reason to do it twice. Deleting the second occurrence doesn't seem to break the fancy prompt effect.
Compilers can have strange versions, as the version is provided by the user. We know the real version internally (by querying the compiler), so expose it as a property and use it in places where we don't trust the user. Eventually we'll refactor this with compilers as dependencies, but this is the best fix we've got for now.
- [x] Make `real_version` a property and cache the version returned by the compiler
- [x] Use `real_version` to make C++ language level flags work
Restores the fetching progress bar sans failure outputs; restores non-debug reporting of using fetch cache for installed packages; and adds a unit test.
* Add status bar check to test and fetch output when already installed
Some of the feature flags are named differently and clwb is missing on
my i7-1065G7. cascadelake and cannonlake might have similar problems but
I do not have access to those architectures to test.
* make_package_relative: relocate rpaths on cray
* relocate_package: relocate rpaths on cray
* platforms: add `binary_formats` property
We need to know which binary formats are supported on a platform so we
know which types of relocations to try. This adds a list of binary
formats to the platform and removes a bunch of special cases from
`binary_distribution.py`.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Before this PR, packages.yaml files that contained an
empty "paths" or "modules" attribute were not updated
correctly, since the update function was not reporting
them as changed after the update.
This PR fixes that issue and adds a unit test to
avoid regression.
This commit adds output to the "spack external find"
command to inform users of the result of the operation.
It also fixes a bug introduced in #17804 due to the fact
that a function was not updated to conform to the new
packages.yaml format (_get_predefined_externals).
* Handle uninstalled rootspecs in buildcache
- Do not parse specs / find matching specs when in an environment and no
package string is provided
- Error only when a spec.yaml or spec string are not installed. In an
environment it is fine when the root spec does not exist.
- When iterating through the matched specs, simply skip uninstalled
packages
* Run Python2.6 unit tests on Github Actions
* Skip url tests on Python 2.6 to reduce waiting times
* Skip foreground background tests on Python 2.6 to reduce waiting times
* Removed references to Travis in the documentation
* Deleted install_patchelf.sh (can be installed from repo on CentOS 6)
* Buildcache:
* Try mocking an install of quux, corge and garply using prebuilt binaries
* Put patchelf install after ccache restore
* Add script to install patchelf from source so it can be used on Ubuntu:Trusty, which does not have a patchelf package. The script will skip building on macOS
* Remove mirror at end of bindist test
* Add patchelf to Ubuntu build env
* Revert mock patchelf package to allow other tests to run.
* Remove depends_on('patchelf', type='build'), relying instead on a
test fixture to ensure patchelf is available.
* Call g++ command to build libraries directly during test build
* Flake8
* Install patchelf in before_install stage using apt unless on Trusty where a build is done.
* Add some symbolic links between packages
* Flake8
* Flake8:
* Update mock packages to write their own source files
* Create the stage because spec search does not create it any longer
* updates after change of list command arguments
* cleanup after merge
* flake8
- [x] Remove references to `master` branch
- [x] Document how release branches are structured
- [x] Document how to make a major release
- [x] Document how to make a point release
- [x] Document how to do work in our release projects
* Move flake8 tests on Github Actions
* Move shell test to Github Actions
* Moved documentation build to Github Action
* Don't run coverage on Python 2.6
Since we get connection errors consistently on Travis
when trying to upload coverage results for Python 2.6,
avoid computing coverage entirely to speed-up tests.
`spack -V` stopped working when we added the `releases/latest` tag to
track the most recent release. It started just reporting the version,
even on a `develop` checkout. We need to tell it to *only* search for
tags that start with `v`, so that it will ignore `releases/latest`.
`spack -V` also would print out unwanted git error output on a shallow
clone.
- [x] add `--match 'v*'` to `git describe` arguments
- [x] route error output to `os.devnull`
`spack buildcache list` was trying to construct an `Arch` object and
compare it to `arch_for_spec(<spec>)` for each spec in the buildcache.
`Arch` objects are only intended to be constructed for the machine they
describe. The `ArchSpec` object (part of the `Spec`) is the descriptor
that lets us talk about architectures anywhere.
- [x] Modify `spack buildcache list` and `spack buildcache install` to
filter with `Spec` matching instead of using `Arch`.
- [x] Make it easier to get a `Spec` with a proper `ArchSpec` from an
`Arch` object via new `Arch.to_spec()` method.
- [x] Pull `spack.architecture.default_arch()` out of
`spack.architecture.sys_type()` so we can get an `Arch` instead of
a string.
* Loosen Axom's variants, add shared variant for axom, fix clang/xlf rpath'ing problem on blueos
* Fix flake8
* Add main branch to list of known git branches
The modifications in 193e8333fa
introduced a bug in the loading of compiler modules, since a
function that was expecting a list of strings was just getting
a string.
This commit fixes the bug and adds an assertion to verify the
prerequisite of the function.
Packages can implement `determine_version` to support detection
of external instances of a package. This is generally easier
than implementing `determine_spec_details`. The API for
`determine_version` is similar: for example you can return
`None` to indicate that an executable is not an instance
of a package.
Users may implement a `determine_variants` method for a package.
When doing external detection, executables are grouped by version
and each group results in a single invocation of `determine_variants`
for the associated spec. The method returns a string specifying
the variants for the package. The method may additionally return
a dictionary representing extra attributes for the package.
These will be stored in the spec YAML and can be retrieved
from `self.spec.extra_attributes`.
The Spack GCC package has been updated with an implementation
of `determine_variants` which adds the following extra
attributes to the package: `c`, `cxx`, `fortran`.
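As a rough illustration (a hypothetical `mytool` package, not Spack's actual GCC code), the hooks look something like this:
```python
import re

from spack import *
from spack.util.executable import Executable


class Mytool(Package):
    """Hypothetical package sketching the detection hooks described above."""

    executables = [r'^mytool$']  # regexes matched against executable names

    @classmethod
    def determine_version(cls, exe):
        out = Executable(exe)('--version', output=str, error=str)
        match = re.search(r'mytool version (\S+)', out)
        # Returning None signals "this executable is not mytool"
        return match.group(1) if match else None

    @classmethod
    def determine_variants(cls, exes, version_str):
        # Return a variant string plus optional extra attributes, which
        # end up in the spec YAML under extra_attributes.
        return '+feature', {'binaries': list(exes)}
```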
The YAML config for paths and modules of external packages has
changed: the new format allows a single spec to load multiple
modules. Spack will automatically convert from the old format
when reading the configs (the updates do not add new essential
properties, so this change in Spack is backwards-compatible).
With this update, Spack cannot modify existing configs/environments
without updating them (e.g. `spack config add` will fail if the
configuration is in a format that predates this PR). The user is
prompted to do this explicitly and commands are provided. All
config scopes can be updated at once. Each environment must be
updated one at a time.
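Assuming a hypothetical `mypackage`, the conversion looks roughly like this (old format above, new format below):
```yaml
# old format: one module per spec
packages:
  mypackage:
    modules:
      mypackage@1.0: mypackage/1.0

# new format: a single spec can load multiple modules
packages:
  mypackage:
    externals:
    - spec: mypackage@1.0
      modules:
      - mypackage/1.0
      - mydependency/2.0
```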
Relative paths in views have been broken since #17608 or earlier.
- [x] Fix by passing base path of the environment into the `ViewDescriptor`.
Relative paths are calculated from this path.
A bug was introduced in #13100 where ChildErrors would be redundantly
printed when raised during a build. We should eventually revisit error
handling in builds and figure out what the right separation of
responsibilities is for distributed builds, but for now just skip
printing.
- [x] SpackErrors were designed to be printed by the forked process, not
by the parent, so check if they've already been printed.
- [x] update tests
Fixes #17299
Cray Shasta systems appear to use an unmodified SLES or other Linux operating system on the backend (like Cray "Cluster" systems and unlike Cray "XC40" systems that use CNL).
This updates the CNL version detection to properly note that this is the underlying OS instead of CNL and delegate to LinuxDistro.
* environment-views: fix bug where missing recipe/repo breaks env commands
When a recipe or a repo has been removed from Spack and an environment
is active, it causes the view activation to crash Spack before any
commands can be executed. Further, the error message is not at all clear
in explaining the issue.
This forces view regeneration to always start from scratch to avoid the
missing package recipes, and defaults add_view=False in main for views activated
by the `spack -e` option.
* add messages to env status and deactivate
Warn users that a view may be corrupt when deactivating an environment
or checking its status while active. Updated message for activate.
* tests for view checking
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* switch from bool to int debug levels
* Added debug options and changed lock logging to use more detailed values
* Limit installer and timestamp PIDs to standard debug output
* Reduced verbosity of fetch/stage/install output, changing most to debug level 1
* Combine lock log methods; change build process install to debug
* Changed binary cache install messages to extraction messages
* bugfix: make compiler preferences slightly saner
This fixes two issues with the way we currently select compilers.
If multiple compilers have the same "id" (os/arch/compiler/version), we
currently prefer them by picking the one with the most supported
languages. This can have some surprising effects:
* If you have no `gfortran` but you have `gfortran-8`, you can detect
`clang` that has no configured C compiler -- just `f77` and `f90`. This
happens frequently on macOS with homebrew. The bug is due to some
kludginess about the way we detect mixed `clang`/`gfortran`.
* We can prefer suffixed versions of compilers to non-suffixed versions,
which means we may select `clang-gpu` over `clang` at LLNL. But,
`clang-gpu` is not actually clang, and it can break builds. We should
prefer `clang` if it's available.
- [x] prefer compilers that have C compilers and prefer no name variation
to variation.
* tests: add test for which()
Apple's gcc is really clang. We previously ignored it by default but
there was a regression in #17110.
Originally we checked for all clang versions with this, but I know of
none other than `gcc` on macOS that actually do this, so limiting to
`apple-clang` should be ok.
- [x] Fix check for `apple-clang` in `gcc.py` to use version detection
from `spack.compilers.apple_clang`
The `spack-build-env.txt` file may contain many secrets, but the obvious one is the private signing key in `SPACK_SIGNING_KEY`. This file is nonetheless uploaded as a build artifact to GitLab. For anyone running CI on a public version of GitLab this is a major security problem. Even for private GitLab instances it can be very problematic.
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Spack did not support usage of the `--config-scope` option in
combination with an environment: In `lib/spack/spack/main.py`,
`spack.config.command_line_scopes` is set equal to any config scopes
passed by the `--config-scope` option. However, this is done after
activating an environment. In the process of activating an environment,
the `spack.config.config` singleton is instantiated, so later setting of
`spack.config.command_line_scopes` is ignored.
This commit sets command line scopes before activating an environment to
ensure that they are included in the configuration.
Co-authored-by: Tim Fuller <tjfulle@sandia.gov>
For normal users, `-o` or `--no-same-owner` (GNU extension) is
the default behavior, but for the root user, `tar` attempts to preserve
the ownership from the tarball.
This makes `tar` use `-o` all the time. This should improve untarring
files owned by users not available in rootless Docker builds.
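In effect, extraction now always behaves as if invoked like this (an illustrative command, not the exact code path):
```console
$ tar --no-same-owner -xf spack-stage-archive.tar.gz
```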
The error message was not updated when the behavior of Spack environments
was changed to not automatically activate the local environment in #17258.
The previous error message no longer makes sense.
When Spack installs a package, it stores repository package.py files
for it and all of its dependencies - any package with a Spack metadata
directory in its installation prefix.
It turns out this was too broad: this ends up including external
packages installed by Spack (e.g. installed by another Spack instance).
Currently Spack doesn't store the namespace properly for such packages,
so even though the package file could be fetched from the external,
Spack is unable to locate it.
This commit avoids the issue by skipping any attempt to locate and copy
from the package repository of externals, regardless of whether they
have a Spack repo directory.
Spack was attempting to calculate abspath on the located config.guess
path even when it was not found (None); this commit skips the abspath
calculation when config.guess is not found.
fixes #17396
This prevents the class attribute from being inherited and
saves current maintainers from becoming the default
maintainers of every CUDA package.
We got rid of `master` after #17377, but users still want a way to get
the latest stable release without knowing its number.
We've added a `releases/latest` tag to replace what was once `master`.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Fixes #16478
This allows an uninstall to proceed even when encountering pre-uninstall
hook failures if the user chooses the --force option for the uninstall.
This also prevents post-uninstall hook failures from raising an exception,
which would terminate a sequence of uninstalls. This isn't likely essential
for #16478, but I think overall it will improve the user experience: if
the post-uninstall hook fails, there isn't much point in terminating a
sequence of spec uninstalls because at the point where the post-uninstall
hook is run, the spec has already been removed from the database (so it
will never have another chance to run).
Notes:
* When doing spack uninstall -a, certain pre/post-uninstall hooks aren't
important to run, but this isn't easy to track with the current model.
For example: if you are uninstalling a package and its extension, you
do not have to do the activation check for the extension.
* This doesn't handle the uninstallation of specs that are not in the DB,
so it may leave "dangling" specs in the installation prefix
* share/spack/setup-env.fish file to setup environment in fish shell
* setup-env.fish testing script
* Update share/spack/setup-env.fish
Co-Authored-By: Elsa Gonsiorowski, PhD <gonsie@me.com>
* Update share/spack/qa/setup-env-test.fish
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* updates completions using `spack commands --update-completion`
* added stderr-nocaret warning
* added fish shell tests to CI system
Co-authored-by: becker33 <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Elsa Gonsiorowski, PhD <gonsie@me.com>
* cray: detect frontend compilers automatically
This commit permits detecting frontend compilers
automatically, with the exception of cce.
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Running `spack containerize` with the example `spack.yaml` file fails
with an error that ends like so:
```
[...]
File "/local_scratch/hartzell/tmp/spack-explore-docker/lib/spack/external/ruamel/yaml/scanner.py", line 165, in need_more_tokens
self.stale_possible_simple_keys()
File "/local_scratch/hartzell/tmp/spack-explore-docker/lib/spack/external/ruamel/yaml/scanner.py", line 309, in stale_possible_simple_keys
"could not find expected ':'", self.get_mark())
ruamel.yaml.scanner.ScannerError: while scanning a simple key
in "/local_scratch/hartzell/tmp/spack-explore-docker/spack.yaml", line 26, column 1
could not find expected ':'
in "/local_scratch/hartzell/tmp/spack-explore-docker/spack.yaml", line 28, column 5
```
Indenting the block string fixes the problem for me.
On CentOS 7:
```
$ spack --version
0.14.2-1529-ec58f28c2
```
* env: no automatic activation
* Ensure ci rebuild jobs activate the environment (no longer automagic)
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
* Start moving toward a json buildcache index
* Add spec and database index schemas
* Add a schema for buildcache spec.yaml files
* Provide a mode for database class to generate buildcache index
* Update db and ci tests to validate object w/ new schema
* Remove unused temporary upload-s3 command
* Use database class to generate buildcache index
* Do not generate index with each buildcache creation
* Make buildcache index mode into a couple of constructor args to Database class
* Use keyword args for _createtarball
* Parse new json index when we get specs from buildcache
Now that only one index file per mirror needs to be fetched in
order to have all the concrete specs for binaries available on the
mirror, we can just fetch and refresh the cached specs every time
instead of needing to use the '-f' flag to force re-reading.
* First fix for SPACK_DEPENDENCIES problem when doing setup
* Get rid of transitive include path in setup.
* Export SPACK_INCLUDE_DIRS into spconfig.py
* add buildcache create test
* add functionality and test to create buildcache from environment
* use env.concretized_user_specs rather than env.roots to get concretized specs, as suggested in review from becker33
* Allow `spack remove -f` and `spack uninstall` to work on matrices
Allow Environment.remove(force=True) to remove the concrete spec from the environment
even when the user spec cannot be removed because it is in a matrix.
* Separate Apple Clang from LLVM Clang
Apple Clang is a compiler of its own. All places
referring to "-apple" suffix have been updated.
* Hack to use a dash in 'apple-clang'
To be able to use autodoc from Sphinx we need
a valid Python name for the module that contains
Apple's Clang code.
* Updated packages to account for the existence of apple-clang
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added unit test for XCode related functions
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* short-circuit is_activated check when the extendee is installed upstream
* add test for checking activation status of packages with an extendee installed upstream
spack config add <value>: add the nested value to the specified configuration scope
spack config remove/rm: remove the specified configuration from the relevant scope
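For example (a hypothetical value; any colon-separated config path works the same way):
```console
$ spack config add config:debug:true
$ spack config remove config:debug
```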
* Some minor fixes to set_permissions() in file_permissions.py
The set_permissions() routine claims to prevent users from creating
world writable suid binaries. However, it seems to only be checking
for/preventing group writable suid binaries.
This patch modifies the routine to check for both world and group
writable suid binaries, and complain appropriately (a simplified
sketch of the check follows this list).
* permissions.py: Add test to check blocks world writable SUID files
The original test_chmod_rejects_group_writable_suid tested
that the set_permissions() function in
lib/spack/spack/util/file_permissions.py
would raise an exception if changed permission on a file with
both SUID and SGID plus sticky bits is chmod-ed to g+rwx and o+rwx.
I have modified it so that it more narrowly tests that a file with SUID
(and no SGID or sticky bit) set raises an exception when chmod-ed to g+w.
I have added a second test test_chmod_rejects_world_writable_suid
that checks that an exception is raised if an SUID file is chmod-ed
to o+w
* file_permissions.py: Raise exception when try to make sgid file world writable
Updated set_permissions() in file_permissions.py to also raise
an exception if one tries to make an SGID file world writable, and
added a corresponding unit test as well.
* Remove debugging prints from permissions.py
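The kind of check described above, as a simplified sketch (not the exact code in `file_permissions.py`):
```python
import os
import stat


class InvalidPermissionsError(Exception):
    pass


def check_new_permissions(path, perms):
    """Refuse to make SUID files group/world writable, or SGID files
    world writable."""
    mode = os.stat(path).st_mode
    if mode & stat.S_ISUID and perms & (stat.S_IWGRP | stat.S_IWOTH):
        raise InvalidPermissionsError(
            'Attempting to make suid file %s group or world writable' % path)
    if mode & stat.S_ISGID and perms & stat.S_IWOTH:
        raise InvalidPermissionsError(
            'Attempting to make sgid file %s world writable' % path)
```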
* Module index should not be unconditionally overwritten
Uncovered after we switched our CI to generate modules for packages
one-by-one rather than in bulk. This overwrote a complete module index
with an index with a single entry, and broke our downstream Spack
instances that needed the upstream module index.
* Changed the 'include' config section to use 'substitute_path_variables' to allow for Spack config variables to be used (e.g. $spack).
* Fixed a bug with 'include' section path expansion and added a test case for 'include' paths with embedded config variables.
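For instance, an environment could now include a config directory relative to the Spack prefix (the path here is hypothetical):
```yaml
spack:
  include:
  - $spack/../site-config   # $spack is expanded via substitute_path_variables
  specs: []
```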
* Cray: fix Blue Waters support
* pkg-config env vars needed on Blue Waters
* cray platform: fix support for user-build MPI on cray machines
* reintroduce cray environment cleaning behind cnl version guard
* cray platform: fix support for user-build MPI on cray machines
Co-authored-by: Gregory <becker33@llnl.gov>
Builds can be stopped before the final install phase due to user requests. Those builds
should not be registered as installed in the database.
We had code intended to handle this but:
1. It caught the wrong type of exception
2. We were catching these exceptions to suppress them at a lower level in the stack
This PR allows the StopIteration to propagate through a ChildError, and catches it
properly. Also added to an existing test to prevent regression.
This fixes a fork bomb in `spack versions`. Recursive generation of pools
to scrape URLs in `_spider` was creating large numbers of processes.
Instead of recursively creating process pools, we now use a single
`ThreadPool` with a concurrency limit.
More on the issue: having ~10 users running `spack versions` at the
same time on front-end nodes caused kernel lockups due to the high number
of sockets opened (sys-admin reports ~210k distributed over 3 nodes).
Users were internal, so they had ulimit -n set to ~70k.
The forking behavior could be observed by just running:
```console
$ spack versions boost
```
and checking the number of processes spawned. The number of processes
per se was not the issue, but each one of them opens a socket
which can stress `iptables`.
In the original issue the kernel watchdog was reporting:
```
Message from syslogd@login03 at May 19 12:01:30 ...
kernel:Watchdog CPU:110 Hard LOCKUP
Message from syslogd@login03 at May 19 12:01:31 ...
kernel:watchdog: BUG: soft lockup - CPU#110 stuck for 23s! [python3:2756]
Message from syslogd@login03 at May 19 12:01:31 ...
kernel:watchdog: BUG: soft lockup - CPU#94 stuck for 22s! [iptables:5603]
```
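The shape of the fix, as a minimal sketch (illustrative function names, not Spack's `_spider` itself):
```python
from multiprocessing.pool import ThreadPool

try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2


def _fetch(url):
    # One socket per in-flight task; the pool bounds how many are in flight.
    return urlopen(url).read()


def spider(urls, concurrency=16):
    # A single bounded ThreadPool replaces the recursively created
    # process pools, capping both processes and open sockets.
    pool = ThreadPool(processes=concurrency)
    try:
        return pool.map(_fetch, urls)
    finally:
        pool.terminate()
        pool.join()
```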
* add an --exclude-file option to 'spack mirror create' which allows a user to specify a file of specs to exclude when creating a mirror. this is anticipated to be useful especially when using the '--all' option
* allow specifying number of versions when mirroring all packages
* when mirroring all specs within an environment, include dependencies of root specs
* add '--exclude-specs' option to allow user to specify that specs should be excluded on the command line
* add test for excluding specs
fixes #12527
Mention that specs can be uninstalled by hash also in
the help message. Reference `spack gc` in case people
are looking for ways to clean build-time dependencies
from the store.
Use "spec" instead of "package" to avoid ambiguity in
the error message.
* Unify tests for compiler command in the same file
Tests for the "spack compiler" command were previously
scattered among different files.
* Tests should use mutable_config, since they modify the compiler list
Because of the way abstract variants are implemented, the following
spec matrix does not work as intended:
```
matrix:
- [foo]
- [bar=a, bar=b]
exclude:
- bar=a
```
because abstract variants always satisfy any variant of the same
name, regardless of values.
This PR converts abstract variants to whatever their appropriate
type is before running satisfaction checks for the excludes clause
in a matrix.
fixes #16841
Now that the version number of GCC has reached double digits, an update
to the regex is needed to recognize gcc-10 as an executable to be
inspected when searching for compilers.
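Schematically (the real pattern in Spack is more involved):
```python
import re

old = re.compile(r'gcc-[0-9]$')   # matched gcc-9 but not gcc-10
new = re.compile(r'gcc-[0-9]+$')  # one or more digits

assert not old.match('gcc-10')
assert new.match('gcc-10')
```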
* make_link_relative: added docstring
* make_elf_binaries_relative: added docstring, unit tests
* raise_if_not_relocatable: added docstring, added unit test for exceptional case
* relocate_links: removed unused arguments, added docstring and comments
Also fixed a possible bug that was issuing a spurious
warning when a file was relocated successfully
* relocate_text: added docstring and comments, renamed arguments
* relocate_text_bin: added docstring and comments, renamed arguments, unit tests
Problem: when calling `static_to_shared_library` on the `cray` arch, it
produces a nonsensical compiler command with no input files. For
example, when installing lua@5.2.4, it produced:
'gcc -lm -ldl -o /big-long-spack-path/liblua.so.5.2.4'
Solution: do the same thing on `cray` that is done for `linux`
* account for schema validation errors where the associated instance doesn't have a line number
* fix unrelated flake error (but it must be fixed because this PR touches this file and the flake rules have been updated since the last edit to this file)
Allows `all` to be configured non-buildable in packages.yaml.
The following config would only allow zlib to be built by Spack, all other packages would have to be found as externals.
```
packages:
all:
buildable: False
zlib:
buildable: True
```
This change also adds a code path through the spack ci pipelines
infrastructure which supports PR testing on the Spack repository.
Gitlab pipelines run as a result of a PR (either creation or pushing
to a PR branch) will only verify that the packages in the environment
build without error. When the PR branch is merged to develop,
another pipeline will run which results in the generated binaries
getting pushed to the binary mirror.
Providing only $padding or ${padding} results in an attempt to
substitute a padding of maximum system path length, while leaving
room for the parts of the install path spack generates. Providing
$padding-<len> or ${padding-<len>} simply substitutes padding of
the specified length.
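For example (the path is hypothetical):
```yaml
config:
  # pad the install prefix out to 512 characters
  install_tree: /path/to/tree/${padding-512}
```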
Packages built with lmod core_compiler are placed in `Core`.
Other packages may belong in `Core`. For example, python may be built with a proprietary compiler for performance, but belong in the `Core` directory.
With this PR, lmod config can include a `core_specs` list. Any package that satisfies a spec in that list is placed in `Core`, regardless of its compiler or dependencies.
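A plausible configuration (the spec and compiler names are illustrative):
```yaml
modules:
  lmod:
    core_compilers:
    - gcc@4.8.5
    core_specs:
    - python
    - openmpi
```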
This improves the documentation for `spack external find` in several ways:
* Provide a code example of implementing `determine_spec_details` for a package
* Explain how to define executables to look for (and also e.g. that they are treated as regular expressions and so can pull in unexpected files).
* Add the "why" for a couple of constraints (i.e. explain that this logic only works for build/run deps because it examines `PATH` for executables)
* Spread the docs between build customization and packaging sections
* Add cross-references
* Add a label so that `spack external find` is linked from the command reference.
* Add pmi support (required by ucx, ofi, and gni backends)
* Add support for ucx backend
* Add dependency on MPI for pmi=simplepmi, slurmpmi, or slurmpmi2
* Remove charmpp as an MPI provider since the changes in this PR can
add MPI as a dependency (mentioned previously)
* Install into transport_protocol-OS-arch subdirectory to match
default charmpp installation behavior (which helps dependents find it)
- add docstrings and make parameter names consistent in `relocate.py`
- Make `replace_prefix_*` and other functions private (as they are implementation details)
- remove unused function _replace_prefix_nullterm()
- Add unit tests for `relocate.py` functions
- add patchelf to Travis and use it during tests
- add hello_world fixture with a compiled binary, so we can test relocation
After migrating to `travis-ci.com`, we saw I/O issues in our tests --
tests that relied on `capfd` and `capsys` were failing. We've also seen
this in GitHub actions, and it's kept us from switching to them so far.
Turns out that the issue is that using streams like `sys.stdout` as
default arguments doesn't play well with `pytest` and output redirection,
as `pytest` changes the values of `sys.stdout` and `sys.stderr`. If these
values are evaluated before output redirection (as they are when used as
default arg values), output won't be captured properly later.
- [x] replace all stream default arg values with `None`, and only assign stream
values inside functions.
- [x] fix tests we didn't notice were relying on this erroneous behavior
This adds the `url` alternative `urls` to `package.all_urls`. With
this addition, one can find again new versions with
`spack versions <package>` for packages whose versions are
populated from mixin mirror `urls`.
Example: `util-macros` from x.org mixin.
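Illustratively, a package using mixin-provided mirrors defines something like this (package name and URLs are hypothetical):
```python
from spack import *


class Mypkg(AutotoolsPackage):
    """Sketch: multiple equivalent download locations via `urls`,
    all of which `spack versions` now scans through all_urls."""

    urls = [
        'https://www.x.org/archive/individual/util/mypkg-1.0.tar.gz',
        'https://mirror.example.org/xorg/util/mypkg-1.0.tar.gz',
    ]
```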
* Non-interactive mode for spack checksum; allow passing 'package@version' to spack checksum
* Flake8 fixes
* Update checksum.py
Fix typo
* Update spack-completion script
* Automatically set non-interactive mode if more than one version passed
* Update lib/spack/spack/cmd/checksum.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add documentation and update spack-completion
* Flake8
* Rename option
* Update spack-completion
* Update lib/spack/spack/cmd/checksum.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update checksum.py
* Update stage.py
* Update create.py
Use batch mode when adding a new package
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This fixes some errors with setting up test configuration. These
errors do not cause current Spack tests to fail but do create
red herring issues elsewhere (see #15666). Fixing these errors
leads to more errors in tests that depended on the original
misconfigured state, so those are also addressed here.
This is an update to #16003 which accounts for some unit tests with
conflicting config/mutable_config fixtures. These conflicts were
not exposed until the mutable_config fixture was fixed. Details are
included below. The change which builds on #16003 is prefixed with
"(new)".
* For tests that use the real Spack package repository, the config
needs to avoid using MPI providers that are not intended to be
installed by Spack. Without this, it is possible that Spack tests
which concretize the MPI virtual will end up trying to use an
implementation that it shouldn't (e.g. one that is always
provided externally). See #15666 for an example.
* The mutable_config test fixture was not initializing the scope
roots to the right directories (so the resulting config was empty).
* The current_host fixture in the concretize.py tests was using the
config fixture rather than mutable_config, and was polluting the
config cache for other tests.
* One test in concretize.py was clearing a nonexistent cache
(PackagePrefs._packages_config_cache). This reference has been
removed.
* The test 'test_preferred_compilers' was depending on cross
test config pollution to succeed. The initial spec before
concretization has been updated to be explicit about
the desired result.
* (new) For tests that use install_mockery and mutable_config,
replace install_mockery with a separate install_mockery_mutable_config
fixture that is exactly the same as install_mockery but uses the
mutable_config fixture to avoid conflicts.
Fixed #15884.
Spack asks every package linked into an environment to tell us how
environment variables should be modified when a spack environment is
activated. As part of this, specs in an environment are symlinked into
the environment's view (see #13249), and the package calculates
environment modifications with *the default view as the prefix*.
All of this works nicely for pointing the user's environment at the view
*if* every package is successfully linked. Unfortunately, right now we
only track what specs "should" be in a view, not which specs actually
are. So we end up calculating environment modifications on things that
aren't linked into the view, and the exception isn't caught, so lots of
spack commands end up failing.
This fixes the issue by ignoring and warning about specs where
calculating environment modifications fails. So we can still keep using
Spack even if the current environment is incomplete.
We should probably also just avoid computing env modifications *entirely*
for unlinked packages, but right now that is a slow operation (requires a
lot of YAML parsing). We should revisit that when we have some better
state management for views, but the fix adopted here will still be
necessary, as we want spack commands to be resilient to other types of
bugs in `setup_run_environment()` and friends. That code is in packages
and we have to assume it could be buggy when we call it outside of builds
(as it might fail more than just the build).
Add a `spack external find` command that tries to populate
`packages.yaml` with external packages from the user's `$PATH`. This
focuses on finding build dependencies. Currently, support has only been
added for `cmake`.
For a package to be discoverable with `spack external find`, it must define:
* an `executables` class attribute containing a list of
regular expressions that match executable names.
* a `determine_spec_details(prefix, exes_in_prefix)` method
Spack will call `determine_spec_details()` once for each prefix where
executables are found, passing in the path to the prefix and the path to
all found executables. The package is responsible for invoking the
executables and figuring out what type of installation(s) are in the
prefix, and returning one or more specs (each with version, variants or
whatever else the user decides to include in the spec).
The found specs and prefixes will be added to the user's `packages.yaml`
file. Providing the `--not-buildable` option will mark all generated
entries in `packages.yaml` as `buildable: False`
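A condensed sketch of such a package (a hypothetical `footool`; the real `cmake` implementation is similar in spirit):
```python
import os
import re

from spack import *
from spack.spec import Spec
from spack.util.executable import Executable


class Footool(Package):
    executables = [r'^footool$']

    @classmethod
    def determine_spec_details(cls, prefix, exes_in_prefix):
        exes = [x for x in exes_in_prefix
                if os.path.basename(x) == 'footool']
        if not exes:
            return None
        out = Executable(exes[0])('--version', output=str, error=str)
        match = re.search(r'footool version (\S+)', out)
        if not match:
            return None  # not actually an instance of footool
        return Spec('footool@{0}'.format(match.group(1)))
```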
Cray has two machine types. "XC" machines are the larger
machines more common in HPC, but "Cluster" machines are
also cropping up at some HPC sites. Cluster machines run
a slightly different form of the CrayPE programming environment,
and often come without default modules loaded. Cluster
machines also run different versions of some software, and run
a linux distro on the backend nodes instead of running Compute
Node Linux (CNL).
Below are the changes made to support "Cluster" machines in
Spack. Some of these changes are semi-related general upkeep
of the cray platform.
* cray platform: detect properly after module purge
* cray platform: support machines running OSs other than CNL
Make the Cray backend OS delegate to LinuxDistro when no cle_release file
is present; favor the backend OS over the frontend OS when names clash
* cray platform: target detection uses multiple strategies
This commit improves the robustness of target
detection on Cray by trying multiple strategies.
The first one that produces results wins. If
nothing is found only the generic family of the
frontend host is used as a target.
* cray-libsci: add package from NERSC
* build_env: unload cray-libsci module when not explicitly needed
cray-libsci is a package in Spack. The cray PrgEnv
modules load it implicitly when we set up the compiler.
We now unload it after setting up the compiler and
only reload it when requested via external package.
* util/module_cmd: more robust module parsing
Cray modules have documentation inside the module
that is visible to the `module show` command.
Spack module parsing is now robust to documentation
inside modules.
* cce compiler: uses clang flags for versions >= 9.0
* build_env: push CRAY_LD_LIBRARY_PATH into everything
Some Cray modules add paths to CRAY_LD_LIBRARY_PATH
instead of LD_LIBRARY_PATH. This has performance benefits
at load time, but leads to Spack builds not finding their
dependencies from external modules.
Spack now prepends CRAY_LD_LIBRARY_PATH to
LD_LIBRARY_PATH before beginning the build.
* mvapich2: setup cray compilers when on cray
previously, mpich was the only mpi implementation to support
cray systems (because it is the MPI on Cray XC systems).
Cray cluster systems use mvapich2, which now supports cray
compiler wrappers.
* build_env: clean pkgconf from environment
Cray modules silently add pkgconf to the user environment.
This can break builds that do not use pkgconf.
Now we remove it from the environment and add it again if it
is in the spec.
* cray platform: cheat modules for rome/zen2 module on naples/zen node
Cray modules for naples/zen architecture currently specify
rome/zen2. For now, we detect this and return zen for modules
named `craype-x86-rome`.
* compiler: compiler default versions
When detecting compiler default versions for target/compiler
compatibility checks, Spack previously ran the compiler without
setting up its environment. Now we setup a temporary environment
to run the compiler with its modules to detect its version.
* compilers/cce: improve logic to determine C/C++ std flags
* tests: fix existing tests to play nicely with new cray support
* tests: test new functionality
Some new functionality can only be tested on a Cray system.
Add tests for what can be tested on a Linux system.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Since #9481 Python's None is not permitted as a value for
multi-valued (MV) variants. The string 'none' is used instead.
Add the same fix for the amgx and lammps packages.
If spack is checked out in a git worktree (see [1]), all git-related
commands fail because the `spack_is_git_repo()`-check is not thorough
enough.
When developing in a feature branch in a separate worktree, this is
annoying as all unittests regarding git-related spack commands fail,
cluttering the test results with false-positives.
[1]: https://git-scm.com/docs/git-worktree
* dev-build: --drop-in <shell>
Add a `--drop-in <shell>` option to `spack dev-build`.
This option will automatically run a
`spack build-env <spec> -- <shell>` at the end of a `dev-build`, e.g.
to quickly drop-and-devel into a build phase of a package.
Example usage:
```
spack dev-build --before cmake --drop-in bash openpmd-api@develop
```
* build_env: drop in unit test
Co-authored-by: Greg Becker <becker33@llnl.gov>
Generally speaking, errors that are encountered when attempting to load
command extensions now terminate the running Spack instance.
* Added new exceptions `spack.cmd.PythonNameError` and
`spack.cmd.CommandNameError`.
* New functions `spack.cmd.require_python_name(pname)` and
`spack.cmd.require_cmd_name(cname)` check that `pname` and `cname`
respectively meet requirements, throwing the appropriate error if not
(a simplified sketch follows this list).
* `spack.cmd.get_module()` uses `require_cmd_name()` and passes through
exceptions from module load attempts.
* `spack.cmd.get_command()` uses `require_cmd_name()` and invokes
`get_module()` with the correct command-name form rather than the
previous (incorrect) Python name.
* Added new exceptions `spack.extensions.CommandNotFoundError` and
`spack.extensions.ExtensionNamingError`.
* `_extension_regexp` has a new leading underscore to indicate expected
privacy.
* `spack.extensions.extension_name()` raises an `ExtensionNamingError`
rather than using `tty.warn()`.
* `spack.extensions.load_command_extension()` checks command source
existence early and bails out if missing. Also, exceptions raised by
`load_module_from_file()` are passed through.
* `spack.extensions.get_module()` raises `CommandNotFoundError` as
appropriate.
* Spack `main()` allows `parser.add_command()` exceptions to cause
program end.
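The name checks might look roughly like this (a simplified sketch, not the exact code):
```python
import re


class PythonNameError(Exception):
    """Raised when a name is not a valid Python module name."""


_python_name = re.compile(r'^[a-zA-Z_][a-zA-Z0-9_]*$')


def python_name(cmd_name):
    # Spack command names may contain hyphens; Python module names may not
    return cmd_name.replace('-', '_')


def require_python_name(pname):
    if not _python_name.match(pname):
        raise PythonNameError(pname)


def require_cmd_name(cname):
    require_python_name(python_name(cname))
```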
Tests:
* More common boilerplate has been pulled out into fixtures including
`sys.modules` dictionary cleanup and resource-managed creation of a
simple command extension with specified contents in the source file
for a single named command.
* "Hello, World!" test now uses a command named `hello-world` instead of
`hello` in order to verify correct handling of commands with hyphens.
* New tests for:
* Missing (or misnamed) command.
* Badly-named extension.
* Verification that errors encountered during import of a command are
propagated upward.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This PR introduces trivial refactoring in:
- `get_existing_elf_rpaths`
- `get_relative_elf_rpaths`
- `get_normalized_elf_rpaths`
- `set_placeholder`
mainly to be more consistent with practices used in other
parts of the code and to simplify functions locally. It also
adds or reworks unit tests for these functions and extends
their docstrings.
Co-authored-by: Patrick Gartung <gartung@fnal.gov>
Co-authored-by: Peter J. Scheibel <scheibel1@llnl.gov>
Packages in Spack are classes, and we need to be able to execute class
methods on mock packages. The previous design used instances of a single
MockPackage class; this version gives each package its own class that can
spider dependencies. This allows us to implement class methods like
`possible_dependencies()` on mock packages.
This design change moves mock package creation into the
`MockPackageMultiRepo`, and mock packages now *must* be created from a
repo. This is required for us to mock `possible_dependencies()`, which
needs to be able to get dependency packages from the package repo.
Changes include:
* `MockPackage` is now `MockPackageBase`
* `MockPackageBase` instances must now be created with
`MockPackageMultiRepo.add_package()`
* add `possible_dependencies()` method to `MockPackageBase`
* refactor tests to use new code structure
* move package mocking infrastructure into `spack.util.mock_package`,
as it's becoming a more sophisticated class and it gets lost in `conftest.py`
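A sketch of the new usage (the exact `add_package` signature is abbreviated here and may differ from the real code):
```python
from spack.util.mock_package import MockPackageMultiRepo

repo = MockPackageMultiRepo()
leaf = repo.add_package('leaf', dependencies=[], dependency_types=[])
root = repo.add_package('root', dependencies=[leaf],
                        dependency_types=[('build', 'link')])
# class methods like possible_dependencies() now work on mock packages
print(root.possible_dependencies())
```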
The variants table in `spack info` is cramped, as the *widest* it can be
is 80 columns. And that's actually only sort of true -- the padding
calculation is off, so it still wraps on terminals of size 80 because it
comes out *slightly* wider.
This change looks at the terminal size and calculates the width of the
description column based on it. On larger terminals, the output looks
much nicer, and on small terminals, the output no longer wraps.
Here's an example for `spack info qmcpack` with 110 columns.
Before:
```
Name [Default]       Allowed values       Description
==================== ==================== ==============================
afqmc [off]          on, off              Install with AFQMC support.
                                          NOTE that if used in
                                          combination with CUDA, only
                                          AFQMC will have CUDA.
build_type [Release] Debug, Release,      The build type to build
                     RelWithDebInfo
complex [off]        on, off              Build the complex (general
                                          twist/k-point) version
cuda [off]           on, off              Build with CUDA
```
After:
```
Name [Default]       Allowed values       Description
==================== ==================== ========================================================
afqmc [off]          on, off              Install with AFQMC support. NOTE that if used in
                                          combination with CUDA, only AFQMC will have CUDA.
build_type [Release] Debug, Release,      The build type to build
                     RelWithDebInfo
complex [off]        on, off              Build the complex (general twist/k-point) version
cuda [off]           on, off              Build with CUDA
```
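The sizing logic amounts to something like this (a simplified sketch using Spack's bundled `llnl.util.tty` helper; the column widths are illustrative):
```python
from llnl.util.tty import terminal_size

rows, cols = terminal_size()
name_width, values_width, gutters = 20, 20, 4
# give the description whatever space is left, with a sane floor
desc_width = max(cols - name_width - values_width - gutters, 30)
```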
Update the compiler config with the bootstrapped compiler when it was already installed, and add config defaults to the code so the mutable_config test fixture works.
To specify an environment for a command, the user can specify
"spack -e <env>". The documentation incorrectly specified "-E" (which
is actually used to ignore any implicit use of environments).
If the Spack compiler wrapper encounters any "-isystem" option, then
when adding include directories for Spack dependencies, Spack will
use "-isystem" instead of "-I". This prevents Spack-generated "-I"
options from overriding the "-isystem" options generated by the build
system. To ensure that build-system "-isystem" directories are
searched first, Spack places all of its inserted "-isystem"
directories after.
The new ordering of -isystem includes is:
* -isystem from build system (not system directories)
* -isystem from Spack
* -isystem from build system (for directories like /usr/include)
The prior order of "-I" arguments is preserved (although as of this
commit Spack no longer generates -I if -isystem is detected):
* -I from build system (not system directories)
* -I from Spack (only if there are no "-isystem" options)
* -I from build system (for directories like /usr/include)
Since #16132, we've consolidated the setting of FORCE_UNSAFE_CONFIGURE to
`autotools.py`, so we don't need to use it in packages like `coreutils`,
in our commands, or in our container recipes.
- [x] Remove FORCE_UNSAFE_CONFIGURE from packages
- [x] Remove FORCE_UNSAFE_CONFIGURE from container recipes
- [x] Remove FORCE_UNSAFE_CONFIGURE from `spack ci` command
This commit sets the `FORCE_UNSAFE_CONFIGURE` environment variable to 1 in autotools builds.
We see a lot of builds popping up and complaining about `FORCE_UNSAFE_CONFIGURE`. This behavior is not actually part of `autoconf` per se. It comes from this patch to `mknod.m4`, which is used by a lot of autoconf builds:
* https://lists.gnu.org/archive/html/bug-gnulib/2010-07/msg00282.html
Which originated from this problem that someone had on AIX:
* https://lists.gnu.org/archive/html/bug-gnulib/2010-07/msg00279.html
The gist of the problem seems to be that they want to check whether `mknod` can do something as root, but instead of checking whether they're running as root and using `su` or something to test this, they just made it harder to run `configure` as root.
This seems very ad hoc and this is one of many checks that are run as root in `configure`. Many of them run before this check, so it's not clear that the `FORCE_UNSAFE_CONFIGURE` thing is even preventing bad things from happening.
So:
1. This only happens in `autotools` builds, so we should go ahead and put it into `autotools.py` instead of in the global build environment, and
2. The variable does too little and provides a false sense of security in the first place, so we'll just disable it and avoid the nuisance. If we really feel strongly about this we can put some warnings in Spack about running as root, but at the top level, not in the middle of an already running script like `configure`.
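A sketch of the consolidated behavior (assuming the standard `setup_build_environment` package hook; the surrounding class and any conditions are elided):
```python
def setup_build_environment(self, env):
    # whitelist running `configure` as root for all autotools builds
    env.set('FORCE_UNSAFE_CONFIGURE', '1')
```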
* SourceForge: Mirror Mixin
Add a mixin class for direct `CNAME`s to sourceforge mirrors.
Since the main gateway servers are often down, this could reduce
timeouts and fetch errors for sourceforge.net hosted software.
* SourceForge: unspectacular mirror replacement
add mirrors to all sourceforge packages with trivial
download logic.
tested fetch of latest version of each of these packages
with various mirrors before committing.
* SourceForge: xz
the author's homepage is chronically overrun and this is the official
upload with many mirrors.
`DYLD_LIBRARY_PATH` can frequently break builtin macOS software when
pointed at Spack libraries. This is because it takes *higher* precedence
than the default library search paths, which are used by system software.
`DYLD_FALLBACK_LIBRARY_PATH`, on the other hand, takes lower precedence.
At first glance, this might seem bad, because the software installed by
Spack in an environment needs to find *its* libraries, and it should not
use the defaults. However, Spack's installations are always `RPATH`'d,
so they do not have this problem.
`DYLD_FALLBACK_LIBRARY_PATH` is thus useful for things built in an
environment that need to use Spack's libraries, that don't set *their*
RPATHs correctly for whatever reason. We now prefer it to
`DYLD_LIBRARY_PATH` in modules and in environments because it helps a
little bit, and it is much less intrusive.
provided (#15662).
Prior to this fix, the checked Spec object would not be populated, and
concretization would fail.
Co-authored-by: Marc Allen <mrcall@amazon.com>
`spack test` has a spurious '[+] ' in the output:
```
lib/spack/spack/test/install.py .........[+] ......
```
Output is properly suppressed:
```
lib/spack/spack/test/install.py ...............
```
Makes the following changes:
* (Fixes#15620) tty configuration was failing when stdout was
redirected. The implementation now creates a pseudo terminal for
stdin and checks stdout properly, so redirections of stdin/out/err
should be handled now.
* Handles terminal configuration when the Spack process moves between
the foreground and background (possibly multiple times) during a
build.
* Spack adjusts terminal settings to allow users to enable/disable
build process output to the terminal using a "v" toggle, abnormal
exit cases (like CTRL-C) could leave the terminal in an unusable
state. This is addressed here with a special-case handler which
restores terminal settings.
Significantly extend testing of process output logger:
* New PseudoShell object for setting up a master and child process
and configuring file descriptor inheritance between the two
* Tests for "v" verbosity toggle making use of the added PseudoShell
object
* Added `uniq` function which takes a list of elements and replaces
any consecutive sequence of duplicate elements with a single
instance (e.g. "112211" -> "121")
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The performance improvements done in #14693 were leaving the DB in an inconsistent state when specs were removed from it. This PR updates the DB internal state whenever the DB is written to a file.
Note that we still cannot properly enumerate installed dependents, so there is a TODO in this code. Fixing that will require the dependents dictionaries in specs to be re-keyed (either by hash, or not keyed at all -- a list would do). See #11983 for details.
Reading the database repeatedly can be quite slow. We need a way to speed
up Spack when it reads the DB multiple times, but the DB has not been
modified between reads (which is nearly all the time).
- [x] Add a file containing a unique uuid that is regenerated at database
write time. Use this uuid to suppress re-parsing the database
contents if we know a previous uuid and the uuid has not changed.
- [x] Fix mutable_database fixture so that it resets the last seen
verifier when it resets.
- [x] Enable not rereading the database immediately after a write. Make
the tests reset the last seen verifier in between tests that use the
database fixture.
- [x] make presence of uuid module optional
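A compact sketch of the verifier mechanism (names are illustrative, not Spack's exact attributes):
```python
import os
import uuid


class VerifiedDatabase(object):
    def __init__(self, root):
        self._verifier_path = os.path.join(root, '.spack-db', 'verification')
        self.last_seen_verifier = ''

    def _write(self):
        # regenerate the uuid on every database write
        with open(self._verifier_path, 'w') as f:
            f.write(str(uuid.uuid4()))

    def _reread_needed(self):
        # skip re-parsing the DB contents if the uuid hasn't changed
        if not os.path.exists(self._verifier_path):
            return True
        with open(self._verifier_path) as f:
            current = f.read()
        changed = current != self.last_seen_verifier
        self.last_seen_verifier = current
        return changed
```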
Removed the code that was converting the old index.yaml format into
index.json. Since the change happened in #2189 it should be
considered safe to drop this (untested) code.
* only override spec prefix for non-external packages
* add test that environment shell modifications respect explicitly-specified prefixes for external packages
* add clarifying comment
spack.util.environment_after_sourcing_files compares the local
environment against a shell environment after having sourced a
file; but this ends up including the default shell profile and
rc, which might differ from the local environment.
To change this, compare against the default shell environment,
expressed here as 'source /dev/null'.
According to my nightly CI/CD tests, x.org is another large provider
of software in common build chains that is often down.
Added a hand-selected set of mirrors that are well in sync.
Tested with `util-macros` that has a quite "recent" patch release.
Other packages to follow in an individual PR.
* Moved link to the right place in the docs
* Fixed a few minor issues in extensions docs
Fixed a typo, added a subsubsection for better
navigation, reworded "modules in Python" as
"Python packages"
sourceware.org is often quite overrun and times out or results in
certificate errors.
Since libffi, bzip2, elfutils, etc. are quite fundamental in
build chains, lets add some official mirrors.
libffi, bzip2, elfutils, lvm2, valgrind: add mirrors
* Skip collection of compiler link paths if compiler does not define a verbose flag
* modules config bug: allow user to configure a compiler without an explicit entry for loaded modules
* Add capability for detecting build number for Arm compilers
* Fixing flake8 errors and updating test_arm_version_detection function for more detailed Arm compiler version detection
* Ran flake8 locally and corrected errors
* Altering Arm compiler version check to remove else clause and be more consistent with other compiler version checks. Added test case so both the 'if' and 'else' conditionals of the Arm compiler version check have a test case
Co-authored-by: EC2 Default User <ec2-user@ip-172-31-7-135.us-east-2.compute.internal>
Currently, to force Spack to use an external MPI, you have to specify `buildable: False`
for every MPI provider in Spack in your packages.yaml file. This is both tedious and
fragile, as new MPI providers can be added and break your workflow when you do a
git pull.
This PR allows you to specify an entire virtual dependency as non-buildable, and
specify particular implementations to be built:
```yaml
packages:
  all:
    providers:
      mpi: [mpich]
  mpi:
    buildable: false
    paths:
      mpich@3.2 %gcc@7.3.0: /usr/packages/mpich-3.2-gcc-7.3.0
```
will force all Spack builds to use the specified `mpich` install.
Removed provider_index's use of 'import from' statements and refactored a few routines into a further subclassing of _IndexBase, implementing user-defined bindings of provider specs.
* relocate: removed import from statements
* relocate: renamed *Exception to *Error
This aims at consistency in naming with both
the standard library (ValueError, AttributeError,
etc.) and other errors in 'spack.error'.
Improved existing docstrings
* relocate: simplified search function by un-nesting conditionals
The search function that searches for patchelf has been
refactored to remove deeply nested conditionals.
Extended docstring.
* relocate: removed a condition specific to unit tests
* relocate: added test for _patchelf
Our unit tests run many times. Any unit test which actually installs
a package (which involves fetching code on the internet) is a severe
bug because it runs an installation many times (i.e. re-downloading
the same package for each version of Python that we run unit tests
for).
This reverts commit 25893f1, which added tests that install real
packages.
If the Python used by Spack does not include Setuptools, then
'spack test' will fail because Spack's vendored pytest dependency
imports and uses Setuptools in some of its functions. It turns out
that Spack doesn't use the functionality those methods enable, so
this PR removes those functions and thereby allows 'spack test' to
run without Setuptools.
For any Spack test using Spack's YAML configuration, avoid using real
Spack configuration that has been cached by other tests and Spack
startup logic. Previously this was only done for tests using
'mutable_config' (i.e. those which expected to *change* the
configuration of Spack), but in fact all tests that read Spack config
should use it.
This was an issue when running tests within an environment, because
compiler configuration ends up being queried earlier, and the user's
real config "leaks" into the cache. Outside an environment, the cache
is never set until tests touch it, so we weren't seeing this issue.
`spack test` has a spurious '[+] ' in the output:
```
lib/spack/spack/test/install.py .........[+] ......
```
Output is properly suppressed:
```
lib/spack/spack/test/install.py ...............
```
Spack currently cannot run as a background process uninterrupted because some of the logging functions used in the install method (especially to create the dynamic verbosity toggle with the v key) cause the OS to issue a SIGTTOU to Spack when it's backgrounded.
This PR puts the necessary gatekeeping in place so that Spack doesn't do anything that will cause a signal to stop the process when operating as a background process.
This makes sure that a package's fetch_options are used when fetching
new versions to checksum. This allows working around problems with
slow servers or those requiring a cookie to be set.
Bug: Spack hangs on some Cray machines
Reason: The TERM environment variable is necessary to run `bash -lc "echo $CRAY_CPU_TARGET"`, but we run that command within `env -i`, which wipes the environment.
Fix: Manually forward the TERM environment variable to `env -i /bin/bash -lc "echo $CRAY_CPU_TARGET"`
When trying to use an upstream Spack repository, as of f2aca86 Spack
was attempting to write to the upstream DB based on a new metadata
directory added in that commit. Upstream DBs are read-only, so this
should not occur.
This adds a check to prevent Spack from writing to the upstream DB
fixes #15449
Before this PR, a call to pkg.url_for_version was modifying
class attributes, producing different results for subsequent
calls and an error when the url list was empty.
This recovers the old behavior of replace_prefix_bin, which was
modified to work with ELF binaries by prepending os.sep to the new
prefix until its length matches that of the old prefix.
Testing the install StopIteration exception resulted in an attribute error:
AttributeError: 'StopIteration' object has no attribute 'message'
This PR adds a unit test and resolves that error.
The new build process, introduced in #13100 , relies on a spec's dependents in addition to their dependencies. Loading a spec from a yaml file was not initializing the dependents.
- [x] populate dependents when loading from yaml
The distributed build PR (#13100) -- did not check the install status of dependencies when using the `--only package` option so would refuse to install a package with the claim that it had uninstalled dependencies whether that was the case or not.
- [x] add install status checks for the `--only package` case.
- [x] add initial set of tests
This change stores packages' configure arguments during build and makes
use of them while refreshing module files. This fixes problems such as in
#10716.
- [x] move some logic for handling virtual packages from the `spack
dependencies` command into `spack.package.possible_dependencies()`
- [x] rework possible dependencies tests so that expected and actual
output are on the left/right respectively
* try extend path to solve PyQt5.sip not found issue
* disable private sip installation in sippackage class
* undo manual PyQt5 dir creation in py-sip site-packages dir
* fix typo
* fix typo
* also apply fix to PyQt4
* tidy up
* flake8 and tidy up
* tidy and undo hardcoding of python_include_dir
* replace hardcoded python inc dir
* fix minor issues
* rethink include dir variable name
* improve style
* add new versions
* implement new sip setup to qsci installation
* set sip-incdir correctly for the new setup
* setup extend_path thing before qsci python bindings
* take care of conflict
* flake8
* also extend for PyQt4
* improve style
* improve style
* SipPackage build sys should depend on py-sip
* consolidate extend_path fixes into SipPackage
* fix typo
* fix bugs
* flake8
* revert sip doc to pre-resource setup
* import os module
* flake8
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
Add a `define_from_variant` helper function to CMake-based Spack
packages to convert package variants into CMake arguments. For
example:
args.append('-DFOO=%s' % ('ON' if '+foo' in self.spec else 'OFF'))
can be replaced with:
args.append(self.define_from_variant('foo'))
The following conversions are handled automatically:
* Flag variants will be converted to CMake booleans
* Multivalued variants will be converted to semicolon-separated strings
* Other variant values are converted to CMake string arguments
This also adds a 'define' helper method to convert any variable to
a CMake argument. It has the same conversion rules as
'define_from_variant' (but operates directly on values rather than
requiring the user to supply the name of a package variant).
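As a sketch of the resulting package code (the values are illustrative, and the exact CMake flag spellings may differ):
```python
class MyPackage(CMakePackage):  # hypothetical package
    def cmake_args(self):
        return [
            self.define_from_variant('foo'),         # '+foo' -> '-DFOO:BOOL=ON'
            self.define('MY_LIST', ['a', 'b']),      # -> '-DMY_LIST:STRING=a;b'
            self.define('BUILD_SHARED_LIBS', True),  # -> '-DBUILD_SHARED_LIBS:BOOL=ON'
        ]
```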
* Buildcache: Install into non-default directory layouts
Store a dictionary mapping of original dependency prefixes to dependency hashes
Use the loaded spec to grab the new dependency prefixes in the new directory layout.
Map the original dependency prefixes to the new dependency prefixes using the dependency hashes.
Use the dependency prefixes map to replace original rpaths with new rpaths preserving the order.
For Mach-O binaries, use the dependency prefixes map to replace the dependency library entries for libraries and executables, and to replace the library id for libraries.
On Linux, patchelf is used to replace the rpaths of elf binaries.
On macOS, install_name_tool is used to replace the rpaths and dependency libraries of mach-o binaries and the id of mach-o libraries.
On Linux, macholib is used to replace the dependency libraries of mach-o binaries and the id of mach-o libraries.
Binary text with padding replacement is attempted for all binaries for the following paths:
- spack layout root
- spack prefix
- sbang script location
- dependency prefixes
- package prefix
Text replacement is attempted for all text files using the paths above.
Symbolic links to the absolute path of the package install prefix are replaced, all others produce warnings.
PR #15212 added a new connect_timeout option that can be overridden
using fetch_options but had to be specified per-version. This adds a new
per-package variable that can be used to override fetch_options for
all versions in the package. This includes connect_timeout as well
as 'cookie' (e.g. for the jdk package).
Packages can combine package-level fetch_options with per-version
fetch_options, in which case the version fetch_options completely
override the package-level fetch_options.
This commit includes tests for the added behavior.
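A sketch of what this could look like in a package (the option names are taken from the description above; the package and values are made up):
```python
class MyPackage(Package):  # hypothetical package
    # package-level options, applied when fetching any version
    fetch_options = {'cookie': 'license=accept'}

    # a version's own fetch_options completely override the package-level dict
    version('1.1.0', sha256='<sha256>', fetch_options={'cookie': 'other=yes'})
    version('1.0.0', sha256='<sha256>')
```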
* add --skip-unstable-versions option to 'spack mirror create' which skips sources/resource for packages if their version is not stable (i.e. if they are the head of a git branch rather than a fixed commit)
* '--skip-unstable-versions' should skip all VCS sources/resources, not just those which are not cachable
Allows spack.config InternalConfigScope and Configuration.set() to
handle keys with trailing ':' to indicate replacement vs merge
behavior with respect to lower priority scopes.
Lists may now be replaced rather than merged (this behavior was
previously only available for dictionaries).
This commit adds tests for the new behavior.
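A sketch of the difference (the `spack.config.set` entry point and path spelling are assumed here from the description above):
```python
import spack.config

# merge this list with lists from lower-priority scopes (previous behavior)
spack.config.set('modules:enable', ['lmod'])

# trailing ':' on the last key: replace the lower-priority value outright
spack.config.set('modules:enable:', ['lmod'])
```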
Spack's fflags are meant for both f77 and fc. Therefore, they must
be passed as FFLAGS and FCFLAGS to the configure scripts of
Autotools-based packages.
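Conceptually (a sketch of the mapping, not the actual code path in Spack's build environment setup):
```python
def fortran_flag_env(spec):
    # the same Spack fflags feed both Autotools variables (f77 and fc)
    fflags = ' '.join(spec.compiler_flags.get('fflags', []))
    return {'FFLAGS': fflags, 'FCFLAGS': fflags}
```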
connect_timeout can be used to increase the time Spack waits for the
server to answer. This can be used to work around slow connections or
servers.
Fixes #14700
* CudaPackage: add support for Tesla K80 and older CUDA
* Flake8 fixes
* Fix cuda_arch when no arch is set
* Fine-tune cuda_arch=37,50 supported CUDA versions
* CUDA 6.5+ supports SM_37
* Add @svenevs as a maintainer
* Buildcache command: add install option -o/--otherarch
This will allow matching specs from other archs, for example
installing macOS buildcaches on linux hosts.
* spack commands --update-completion
args.specs is a list, which results in output like this:
```
eval `spack load --sh ['libxml2', 'xz']`
```
We want this instead:
```
eval `spack load --sh libxml2 xz`
```
* Emit a sensible error message if compiler's target is overly specific
fixes #14798, fixes #13733
Compiler specifications require a generic architecture family as
their target. This commit improves the error message that is
displayed to users if they edit compilers.yaml and use an overly
specific name.
The hashing logic looks for function calls that are Spack directives.
It expects that when a Spack directive is used that it is referenced
directly by name, and that the directive function is not itself
retrieved by calling another function. When the hashing logic
encountered a function call where the function was determined
dynamically, it would fail (attempting to access a name attribute
that does not happen to exist in this case).
This updates the hashing logic to filter out function calls where the
function is determined dynamically when looking for uses of Spack
directives.
Spack now requires an exact match of the compiler version
requested by the user. A loose constraint can be given to
Spack by using a version range instead of a concrete version
(e.g. `4.5:` instead of `4.5`).
Sometimes one needs to preserve the (relative) order of
mtimes on installed files. So it's better to just copy
over all the metadata from the source tree to the install
tree. If permissions need fixing, that will be done anyway
afterwards.
One major use case is resource()s:
They're unpacked in one place and then copied to their
final place using install_tree(). If the resource is a
source tree using autoconf/automake, resetting mtimes
incorrectly might force unwanted autoconf/etc calls.
If the mimetype returned from `file -h -b --mime-type` contains slashes
in its subtype, the tuple returned from `spack.relocate.mime_type` will
have a size larger than two, which leads to errors.
Fixes #9394. Closes #13217.
## Background
Spack provides the ability to enable/disable parallel builds through two options: package `parallel` and configuration `build_jobs`. This PR changes the algorithm to allow multiple, simultaneous processes to coordinate the installation of the same spec (and specs with overlapping dependencies).
The `parallel` (boolean) property sets the default for its package though the value can be overridden in the `install` method.
Spack's current parallel builds are limited to build tools supporting `jobs` arguments (e.g., `Makefiles`). The number of jobs actually used is calculated as `min(config:build_jobs, # cores, 16)`, which can be overridden in the package or on the command line (i.e., `spack install -j <# jobs>`).
This PR adds support for distributed (single- and multi-node) parallel builds. The goals of this work include improving the efficiency of installing packages with many dependencies and reducing the repetition associated with concurrent installations of (dependency) packages.
## Approach
### File System Locks
Coordination between concurrent installs of overlapping packages to a Spack instance is accomplished through bottom-up dependency DAG processing and file system locks. The runs can be a combination of interactive and batch processes affecting the same file system. Exclusive prefix locks are required to install a package while shared prefix locks are required to check if the package is installed.
Failures are communicated through a separate exclusive prefix failure lock, for concurrent processes, combined with a persistent store, for separate, related build processes. The resulting file contains the failing spec to facilitate manual debugging.
### Priority Queue
Management of dependency builds changed from reliance on recursion to use of a priority queue where the priority of a spec is based on the number of its remaining uninstalled dependencies.
Using a queue required a change to dependency build exception handling with the most visible issue being that the `install` method *must* install something in the prefix. Consequently, packages can no longer get away with an install method consisting of `pass`, for example.
## Caveats
- This still only parallelizes a single-rooted build. Multi-rooted installs (e.g., for environments) are TBD in a future PR.
Tasks:
- [x] Adjust package lock timeout to correspond to value used in the demo
- [x] Adjust database lock timeout to reduce contention on startup of concurrent
`spack install <spec>` calls
- [x] Replace (test) package's `install: pass` methods with file creation since post-install
`sanity_check_prefix` will otherwise error out with `Install failed .. Nothing was installed!`
- [x] Resolve remaining existing test failures
- [x] Respond to alalazo's initial feedback
- [x] Remove `bin/demo-locks.py`
- [x] Add new tests to address new coverage issues
- [x] Replace built-in package's `def install(..): pass` to "install" something
(i.e., only `apple-libunwind`)
- [x] Increase code coverage
* Buildcache creation: change the way the prefix is copied to the workdir.
* install_tree copies hardlinked files
* tarfile creates hardlinked files on extraction.
* create a temporary tarfile from prefix and extract it to workdir
* Use temp tarfile to move workdir to prefix to preserve hardlinks instead of copying
It's often useful to run a module with `python -m`, e.g.:
python -m pyinstrument script.py
Running a python script this way was hard, though, as `spack python` did
not have a similar `-m` option. This PR adds a `-m` option to `spack
python` so that we can do things like this:
spack python -m pyinstrument ./test.py
This makes it easy to write a script that uses a small part of Spack and
then profile it. Previously the easiest way to do this was to write a
custom Spack command, which is often overkill.
Fixes#10019
If multiple instances of a package were installed in a single
instance of Spack, and they differed in terms of dependencies, then
"spack find" would not distinguish specs based on their dependencies.
For example if two instances of X were installed, one with Y and one
with Z, then "spack find X ^Y" would display both instances of X.
Using `sys.executable` to run Python in a sub-shell doesn't always work in a virtual environment as the `sys.executable` Python is not necessarily compatible with any loaded spack/other virtual environment.
- revert use of sys.executable to print out subshell environment (#14496)
- try instead to use an available python, then if there *is not* one, use `sys.executable`
- this addresses RHEL8 (where there is no `python`) and the `PYTHONHOME` issue in a simpler way
When removing packages from a view, extensions were being deactivated
in an arbitrary order. Extensions must be deactivated in preorder
traversal (dependents before dependencies), so when this order was
violated the view update would fail.
This commit ensures that views deactivate extensions based on a
preorder traversal and adds a test for it.
Despite trying very hard to keep dicts out of our hash algorithm, we seem
to still accidentally add them in ways that the tests can't catch. This
can cause errors when hashes are not computed deterministically.
This fixes an error we saw with Python 3.5, where dictionary iteration
order is random. In this instance, we saw a bug when reading Spack
environment lockfiles -- The load would fail like this:
```
...
File "/sw/spack/lib/spack/spack/environment.py", line 1249, in concretized_specs
yield (s, self.specs_by_hash[h])
KeyError: 'qcttqplkwgxzjlycbs4rfxxladnt423p'
```
This was because the hashes differed depending on whether we wrote `path`
or `module` first when recomputing the build hash as part of reading a
Spack lockfile. We can fix it by ensuring a deterministic iteration order.
- [x] Fix two places (one that caused an issue, and one that did
not... yet) where our to_node_dict-like methods were using regular python
dicts.
- [x] Also add a check that statically analyzes our to_node_dict
functions and flags any that use Python dicts.
The test found the two errors fixed here, specifically:
```
E AssertionError: assert [] == ['Use syaml_dict instead of ...pack/spack/spec.py:1495:28']
E Right contains more items, first extra item: 'Use syaml_dict instead of dict at /Users/gamblin2/src/spack/lib/spack/spack/spec.py:1495:28'
E Full diff:
E - []
E + ['Use syaml_dict instead of dict at '
E + '/Users/gamblin2/src/spack/lib/spack/spack/spec.py:1495:28']
```
and
```
E AssertionError: assert [] == ['Use syaml_dict instead of ...ack/architecture.py:359:15']
E Right contains more items, first extra item: 'Use syaml_dict instead of dict at /Users/gamblin2/src/spack/lib/spack/spack/architecture.py:359:15'
E Full diff:
E - []
E + ['Use syaml_dict instead of dict at '
E + '/Users/gamblin2/src/spack/lib/spack/spack/architecture.py:359:15']
```
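The fix swaps plain dicts for Spack's order-preserving YAML dict in those spots; a minimal sketch (the key/value pairs are illustrative):
```python
from spack.util.spack_yaml import syaml_dict

# syaml_dict preserves insertion order on every Python version, so
# 'path' is always serialized (and therefore hashed) before 'module'
node = syaml_dict([('path', '/usr/bin/gcc'), ('module', None)])
```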
This commit introduces a `--no-check-signature` option for
`spack install` so that unsigned packages can be installed. It is
off by default (signatures required).
VSX AltiVec extensions are supported by PowerISA from v2.06 (Power7+), but might
not be listed in features.
FMA has been supported by PowerISA since Power1, but might not be listed in
features.
This commit adds these features to all the power ISA family sets.
Add an optional 'submodules_delete' field to Git versions in Spack
packages that allows them to remove specific submodules.
For example: the nervanagpu submodule has become unavailable for the
PyTorch project (see issue 19457 at
https://github.com/pytorch/pytorch/issues/). Removing this submodule
allows 0.4.1 to build.
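In a package this might look like the following sketch (the repository URL, tag, and submodule path are illustrative):
```python
class PyTorch(Package):  # hypothetical excerpt
    git = 'https://github.com/pytorch/pytorch.git'

    # remove the now-unavailable submodule after cloning
    version('0.4.1', tag='v0.4.1', submodules=True,
            submodules_delete=['third_party/nervanagpu'])
```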
* Initialize _cached_specs at the file level and check for spec in it before searching mirrors in try_download_spec.
* Make _cached_specs a set to avoid duplicates
* Fix packaging test
* Ignore build_cache in stage when spec.yaml files are downloaded.
`spack -V` previously always returned the version of spack from
`spack.spack_version`. This gives us a general idea of what version
users are on, but if they're on `develop` or on some branch, we have to
ask more questions.
This PR makes `spack -V` check whether this instance of Spack is a git
repository, and if it is, it appends useful information from `git
describe --tags` to the version. Specifically, it adds:
- number of commits since the last release tag
- abbreviated (but unique) commit hash
So, if you're on `develop` you might get something like this:
$ spack -V
0.13.3-912-3519a1762
This means you're on commit 3519a1762, which is 912 commits ahead of
the 0.13.3 release.
If you are on a release branch, or if you are using a tarball of Spack,
you'll get the usual `spack.spack_version`:
$ spack -V
0.13.3
This should help when asking users what version they are on, since a lot
of people use the `develop` branch.
This PR adds a new command to Spack:
```console
$ spack containerize -h
usage: spack containerize [-h] [--config CONFIG]

creates recipes to build images for different container runtimes

optional arguments:
  -h, --help       show this help message and exit
  --config CONFIG  configuration for the container recipe that will be generated
```
which takes an environment with an additional `container` section:
```yaml
spack:
  specs:
  - gromacs build_type=Release
  - mpich
  - fftw precision=float
  packages:
    all:
      target: [broadwell]
  container:
    # Select the format of the recipe e.g. docker,
    # singularity or anything else that is currently supported
    format: docker
    # Select from a valid list of images
    base:
      image: "ubuntu:18.04"
      spack: prerelease
    # Additional system packages that are needed at runtime
    os_packages:
    - libgomp1
```
and turns it into a `Dockerfile` or a Singularity definition file, for instance:
```Dockerfile
# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-bionic:prerelease as builder

# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&&  (echo "spack:" \
&&   echo "  specs:" \
&&   echo "  - gromacs build_type=Release" \
&&   echo "  - mpich" \
&&   echo "  - fftw precision=float" \
&&   echo "  packages:" \
&&   echo "    all:" \
&&   echo "      target:" \
&&   echo "      - broadwell" \
&&   echo "  config:" \
&&   echo "    install_tree: /opt/software" \
&&   echo "  concretization: together" \
&&   echo "  view: /opt/view") > /opt/spack-environment/spack.yaml

# Install the software, remove unnecessary deps and strip executables
RUN cd /opt/spack-environment && spack install && spack autoremove -y
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
    xargs file -i | \
    grep 'charset=binary' | \
    grep 'x-executable\|x-archive\|x-sharedlib' | \
    awk -F: '{print $1}' | xargs strip -s

# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
    spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh

# Bare OS image to run the installed executables
FROM ubuntu:18.04

COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh

RUN apt-get -yqq update && apt-get -yqq upgrade \
 && apt-get -yqq install libgomp1 \
 && rm -rf /var/lib/apt/lists/*

ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
```
* Add binary_distribution::get_spec which takes a concretized spec
Add binary_distribution::try_download_specs for downloading of spec.yaml files to cache
get_spec is used by package::try_install_from_binary_cache to download only the spec.yaml
for the concretized spec if it exists.
The Spec parser currently calls `spec.traverse()` after every parse, in
order to set the platform if it's not set. We don't need to do a full
traverse -- we can just check the platform as new specs are parsed.
This takes about a second off the time required to import all packages in
Spack (from 8s to 7s).
- [x] simplify platform-setting logic in `SpecParser`.
`filename_for_package_name()` and `dirname_for_package_name()`
automatically construct a Spec from their arguments, which adds a fair
amount of overhead to importing lots of packages. Removing this removes
about 11% of the runtime of importing all packages in Spack (9s -> 8s).
- [x] `filename_for_package_name()` and `dirname_for_package_name()` now
take a string `pkg_name` argument instead of a spec.
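Call sites now look like this sketch (the repo accessor spelling is assumed):
```python
import spack.repo

# pass the package name directly instead of constructing a Spec
path = spack.repo.path.filename_for_package_name('zlib')
```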
* `Environment.__init__` is now synchronized with all writing operations
* `spack uninstall` now synchronizes its updates to any associated environment
* A side effect of this is that the environment is no longer updated piecemeal as specs are uninstalled - all specs are removed from the environment before they are uninstalled
This commit makes two fundamental corrections to tests:
1) Changes 'matches' to the correct 'match' argument for 'pytest.raises' (for all affected tests except those checking for 'SystemExit');
2) Replaces the 'match' argument for tests expecting 'SystemExit' (since the exit code is retained instead) with 'capsys' error message capture.
Both changes are needed to ensure the associated exception message is actually checked.
Updates to environments were not multi-process safe, which prevented them from taking advantage of parallel builds as implemented in #13100. This is a minimal set of changes to enable `spack install` in an environment to be parallelized:
- [x] add an internal lock, stored in the `.spack-env` directory,
to synchronize updates to `spack.yaml` and `spack.lock`
- [x] add `Environment.write_transaction` interface for this lock
- [x] makes use of `Environment.write_transaction` in `install`,
`add`, and `remove` commands
- `uninstall` is not synchronized yet; that is left for a future PR.
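A sketch of how a command can use the `Environment.write_transaction` interface above (the path and spec are illustrative):
```python
import spack.environment as ev

env = ev.Environment('/path/to/env')  # illustrative location
with env.write_transaction():         # serializes concurrent writers
    env.add('zlib')                   # update spack.yaml in memory
    env.write()                       # persist while still holding the lock
```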
Spack commands referring to upstream-installed specs by hash have
been broken since 6b619da (merged September 2019), which added a new
Database function specifically for parsing hashes from command-line
specs; this function was inappropriately attempting to acquire locks
on upstream databases.
This PR updates the offending function to avoid locking upstream
databases and also updates associated tests to catch regression
errors: the upstream database created for these tests was not
explicitly set as an upstream (i.e. initialized with upstream=True)
so it was not guarding against inappropriate accesses.
* Unified environment modifications in config files
fixes #13357
This commit factors all the code that is involved in
the validation (schema) and parsing of environment modifications
from configuration files in a single place. The factored out
code is then used for module files and compiler configuration.
Attributes were separated by dashes in `compilers.yaml` files and
by underscores in `modules.yaml` files. This PR unifies the syntax
on attributes separated by underscores.
Unit testing of environment modifications in compilers
has been refactored and simplified.
Openblas target is now determined automatically upon inspection of
`TargetList.txt`. If the spack target is a generic architecture family
(like x86_64 or aarch64) the DYNAMIC_ARCH setting is used
instead of targeting a specific microarchitecture.
Instead of another script, this adds a simple argument to `spack
commands` that updates the completion script. Developers can now just
run:
spack commands --update-completion
This should make it simpler for developers to remember to run this
*before* the tests fail. Also, this version tab-completes.
Previously the `spack load` command was a wrapper around `module load`. This required some bootstrapping of modules to make `spack load` work properly.
With this PR, the `spack` shell function handles the environment modifications necessary to add packages to your user environment. This removes the dependence on environment modules or lmod and removes the requirement to bootstrap spack (beyond using the setup-env scripts).
Included in this PR is support for MacOS when using Apple's System Integrity Protection (SIP), which is enabled by default in modern MacOS versions. SIP clears the `LD_LIBRARY_PATH` and `DYLD_LIBRARY_PATH` variables on process startup for executables that live in `/usr` (but not `/usr/local`), `/System`, `/bin`, and `/sbin`, among other system locations. Spack cannot know the `LD_LIBRARY_PATH` of the calling process when executed using `/bin/sh` and `/usr/bin/python`. The `spack` shell function now manually forwards these two variables, if they are present, as `SPACK_<VAR>` and recovers those values on startup.
- [x] spack load/unload no longer delegate to modules
- [x] refactor user_environment modification calculations
- [x] update documentation for spack load/unload
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This PR adds a `--format=bash` option to `spack commands` to
auto-generate the Bash programmable tab completion script. It can be
extended to work for other shells.
Progress:
- [x] Fix bug in superclass initialization in `ArgparseWriter`
- [x] Refactor `ArgparseWriter` (see below)
- [x] Ensure that output of old `--format` options remains the same
- [x] Add `ArgparseCompletionWriter` and `BashCompletionWriter`
- [x] Add `--aliases` option to add command aliases
- [x] Standardize positional argument names
- [x] Tests for `spack commands --format=bash` coverage
- [x] Tests to make sure `spack-completion.bash` stays up-to-date
- [x] Tests for `spack-completion.bash` coverage
- [x] Speed up `spack-completion.bash` by caching subroutine calls
This PR also necessitates a significant refactoring of
`ArgparseWriter`. Previously, `ArgparseWriter` was mostly a single
`_write` method which handled everything from extracting the information
we care about from the parser to formatting the output. Now, `_write`
only handles recursion, while the information extraction is split into a
separate `parse` method, and the formatting is handled by `format`. This
allows subclasses to completely redefine how the format will appear
without overriding all of `_write`.
Co-Authored-by: Todd Gamblin <tgamblin@llnl.gov>
The gpg2 command isn't always around; it's sometimes called gpg. This is
the case with the brew-installed version, and it's breaking our tests.
- [x] Look for both 'gpg2' and 'gpg' when finding the command
- [x] If we find 'gpg', ensure the version is 2 or higher
- [x] Add tests for version detection.
- [x] Factored to a common place the fixture `testing_gpg_directory`, renamed it as
`mock_gnupghome`
- [x] Removed altogether the function `has_gnupg2`
For `has_gnupg2`, since we were not trying to parse the version from the output of:
```console
$ gpg2 --version
```
this is effectively equivalent to check if `spack.util.gpg.GPG.gpg()` was found. If we need to ensure version is `^2.X` it's probably better to do it in `spack.util.gpg.GPG.gpg()` than in a separate function.
Rework Spack's continuous integration workflow to be environment-based.
- Add the `spack ci` command, which replaces the many scripts in `bin/`
- `spack ci` decouples the CI workflow from the spack instance:
- CI is defined in a spack environment
- environment is in its own (single) git repository, separate from Spack
- spack instance used to run the pipeline is up to the user
- A new `gitlab-ci` section in environments allows users to configure how
specs in the environment should be mapped to runners
- Compilers can be bootstrapped in the new pipeline workflow
- Add extensive documentation on pipelines (see `pipelines.rst` for further details)
- Add extensive tests for pipeline code
* Reorder GNU mirrors (#14395)
As @adamjstewart commented in #14395, GNU suggests using
their mirrors, so the mirror URL is reordered to the top.
GNU Doc: https://www.gnu.org/prep/ftp.en.html
* Use spack.util.url.join for URLs in GNU mirrors (#14395)
One should not use os.path.join for URLs; it only works
on POSIX systems. Instead, use spack.util.url.join so that
every part of Spack uses the same URL-joining method.
* Spack can uninstall unused specs
fixes #4382
Added an option to spack uninstall that removes all unused specs i.e.
build dependencies or transitive dependencies that are left
in the store after the specs that pulled them in have been removed.
* Moved the functionality to its own command
The command has been named 'spack autoremove' to follow the naming used
for the same functionality by other widely known package managers i.e.
yum and apt.
* Speed-up autoremoving specs by not locking and re-reading the scratch DB
* Make autoremove work directly on Spack's store
* Added unit tests for the new command
* Display a terser output to the user
* Renamed the "autoremove" command "gc"
Following discussion there's more consensus around
the latter name.
* Preserve root specs in env contexts
* Instead of preserving specs, restrict gc to the active environment
* Added docs
* Added a unit test for gc within an environment
* Updated copyright to 2020
* Updated documentation according to review
Rephrased a couple of sentences, added references to
`spack find` and dependency types.
* Updated function naming and docstrings
* Simplified computation of unused specs
Since the new approach uses private attributes of the DB
it has been coded as a method of that class rather than a
freestanding function.
The imports in `spec.py` are getting to be pretty unwieldy.
- [x] Remove all of the `import from` style imports and replace them with
`import` or `import as`
- [x] Remove a number of names that were exported by `spack.spec` that
weren't even in `spack.spec`
Previously, `spack test` automatically passed all of its arguments to
`pytest -k` if no options were provided, and to `pytest` if they were.
`spack test -l` also provided a list of test filenames, but they didn't
really let you completely narrow down which tests you wanted to run.
Instead of trying to do our own weird thing, this passes `spack test`
args directly to `pytest`, and omits the implicit `-k`. This means we
can now run, e.g.:
```console
$ spack test spec_syntax.py::TestSpecSyntax::test_ambiguous
```
This wasn't possible before, because we'd pass the fully qualified name
to `pytest -k` and get an error.
Because `pytest` doesn't have the greatest ability to list tests, I've
tweaked the `-l`/`--list`, `-L`/`--list-long`, and `-N`/`--list-names`
options to `spack test` so that they help you understand the names
better. You can combine these options with `-k` or other arguments to do
pretty powerful searches.
This one makes it easy to get a list of names so you can run tests in
different orders (something I find useful for debugging `pytest` issues):
```console
$ spack test --list-names -k "spec and concretize"
cmd/env.py::test_concretize_user_specs_together
concretize.py::TestConcretize::test_conflicts_in_spec
concretize.py::TestConcretize::test_find_spec_children
concretize.py::TestConcretize::test_find_spec_none
concretize.py::TestConcretize::test_find_spec_parents
concretize.py::TestConcretize::test_find_spec_self
concretize.py::TestConcretize::test_find_spec_sibling
concretize.py::TestConcretize::test_no_matching_compiler_specs
concretize.py::TestConcretize::test_simultaneous_concretization_of_specs
spec_dag.py::TestSpecDag::test_concretize_deptypes
spec_dag.py::TestSpecDag::test_copy_concretized
```
You can combine any list option with keywords:
```console
$ spack test --list -k microarchitecture
llnl/util/cpu.py modules/lmod.py
```
```console
$ spack test --list-long -k microarchitecture
llnl/util/cpu.py::
test_generic_microarchitecture
modules/lmod.py::TestLmod::
test_only_generic_microarchitectures_in_root
```
Or just list specific files:
```console
$ spack test --list-long cmd/test.py
cmd/test.py::
test_list test_list_names_with_pytest_arg
test_list_long test_list_with_keywords
test_list_long_with_pytest_arg test_list_with_pytest_arg
test_list_names
```
Hopefully this stuff will help with debugging test issues.
- [x] make `spack test` send args directly to `pytest` instead of trying
to do fancy things.
- [x] rework `--list`, `--list-long`, and add `--list-names` to make
searching for tests easier.
- [x] make it possible to mix Spack's list args with `pytest` args
(they're just fancy parsing around `pytest --collect-only`)
- [x] add docs
- [x] add tests
- [x] update spack completion
Test configuration files (except modules.yaml) were in the root level of
test/data, but should really just be in their own directory. The absence
of modules.yaml was also breaking module tests if we got module
preferences after tests started, as the mock modules.yaml was not in the
test directory.
The module hook would previously fail if there were no enabled module types.
- Instead of looking for a `KeyError`, default to empty list when the
config variable is not present.
- Convert lambdas to real functions for clarity.
- Remove legacy yaml_version_check() hook
- Remove the pre_run hook from `hook/__init__.py` and `main.py`
We want to discourage the use of pre-run hooks because they have to run
at startup. To keep Spack fast, we should do things like this lazily
instead of in hooks that require spidering directories full of modules.
Continuing to shave small bits of time off startup --
`spack.cmd.common.arguments` constructs many `Args` objects at module
scope, which has to be done for all commands that import it. Instead of
doing this at load time, do it lazily.
- [x] construct Args objects lazily
- [x] remove the module-scoped argparse fixture
- [x] make the mock config scope set dirty to False by default (like the
regular scope)
This *seems* to reduce load time slightly
Previously, fixtures like `config`, `database`, and `store` were
module-scoped, but frequently used as test function arguments. These
fixtures swap out global on setup and restore them on teardown. As
function arguments, they would do the right set-up, but they'd leave the
global changes in place for the whole module the function lived in. This
meant that if you use `config` once, other functions in the same module
would inadvertently inherit the mock Spack configuration, as it would
only be torn down once all tests in the module were complete.
In general, we should module- or session-scope the *STATE* required for
these global objects (as it's expensive to create), but we shouldn't
module- or session-scope the activation/use of them, or things can get
really confusing.
- [x] Make generic context managers for global-modifying fixtures.
- [x] Make session- and module-scoped fixtures that ONLY build filesystem
state and create objects, but do not swap out any variables.
- [x] Make separate function-scoped fixtures that *use* the session
scoped fixtures and actually swap out (and back in) the global
variables like `config`, `database`, and `store`.
These changes make it so that global changes are *only* ever alive for a
single test function, and we don't get weird dependencies because a
global fixture hasn't been destroyed.
`PackagePrefs` has had a class-level cache of data from `packages.yaml` for
a long time, but it complicates testing and leads to subtle errors,
especially now that we frequently manipulate custom config scopes and
environments.
Moving the cache to instance-level doesn't slow down concretization or
the test suite, and it just caches for the life of a `PackagePrefs`
instance (i.e., for a single concretization) so we don't need to worry
about global state anymore.
- [x] Remove class-level caches from `PackagePrefs`
- [x] Add a cached _spec_order object on each `PackagePrefs` instance
- [x] Remove all calls to `PackagePrefs.clear_caches()`
Commands like `spack blame` were printing poorly when redirected to files,
as colify reverts to a single column when redirected. This works for
list data but not tables.
- [x] Force a table by always passing `tty=True` from `colify_table()`
In "spack info" the Variants header currently has two blank
lines under it. That's too much. It looks like the actual
content belongs to something else.
Instead underline the headers to make things more obvious.
This commit removes the `python_version.py` unit test module
and the vendored dependencies `pyqver2.py` and `pyqver3.py`.
It substitutes them with an equivalent check done using
`vermin` that is run as a separate workflow via Github Actions.
This allows us to delete 2 vendored dependencies that are unmaintained
and substitutes them with a maintained tool.
Also, updates the list of vendored dependencies.
`ViewDescriptor.regenerate()` calls `get_all_specs()`, which reads
`spec.yaml` files, which is slow. It's fine to do this once, but
`view.remove_specs()` *also* calls it immediately afterwards.
- [x] Pass the result of `get_all_specs()` as an optional parameter to
`view.remove_specs()` to avoid reading `spec.yaml` files twice.
`ViewDescriptor.regenerate()` was copying specs and stripping build
dependencies, which clears `_hash` and other cached fields on concrete
specs, which causes a bunch of YAML hashes to be recomputed.
- [x] Preserve the `_hash` and `_normal` fields on stripped specs, as
these will be unchanged.
`spack install` previously concretized, wrote the entire environment
out, regenerated views, then wrote and regenerated views
again. Regenerating views is slow, so ensure that we only do that once.
- [x] add an option to env.write() to skip view regeneration
- [x] add a note on whether regenerate_views() shouldn't just be a
separate operation -- not clear if we want to keep it as part of write
to ensure consistency, or take it out to avoid performance issues.
Environments need to read the DB a lot when installing all specs.
- [x] Put a read transaction around `install_all()` and `install()`
to avoid repeated locking
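The pattern is simply to hold one read transaction across the whole operation (a sketch):
```python
import spack.store

def install_env(env):
    # one read transaction covers all installed-or-not queries below
    with spack.store.db.read_transaction():
        env.install_all()
```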
Our `LockTransaction` class was reading overly aggressively. In cases
like this:
```
1 with spack.store.db.read_transaction():
2 with spack.store.db.write_transaction():
3 ...
```
The `ReadTransaction` on line 1 would read in the DB, but the
WriteTransaction on line 2 would read in the DB *again*, even though we
had a read lock the whole time. `WriteTransaction`s were only
considering nested writes to decide when to read, but they didn't know
when we already had a read lock.
- [x] `Lock.acquire_write()` return `False` in cases where we already had
a read lock.
If a write transaction was nested inside a read transaction, it would not
write properly on release, e.g., in a sequence like this, inside our
`LockTransaction` class:
```
1 with spack.store.db.read_transaction():
2 with spack.store.db.write_transaction():
3 ...
4 with spack.store.db.read_transaction():
...
```
The WriteTransaction on line 2 had no way of knowing that its
`__exit__()` call was the last *write* in the nesting, and it would skip
calling its write function.
The `__exit__()` call of the `ReadTransaction` on line 1 wouldn't know
how to write, and the file would never be written.
The DB would be correct in memory, but the `ReadTransaction` on line 4
would re-read the whole DB assuming that other processes may have
modified it. Since the DB was never written, we got stale data.
- [x] Make `Lock.release_write()` return `True` whenever we release the
*last write* in a nest.
Lock transactions were actually writing *after* the lock was
released. The code was looking at the result of `release_write()` before
writing, then writing based on whether the lock was released. This is
pretty obviously wrong.
- [x] Refactor `Lock` so that a release function can be passed to the
`Lock` and called *only* when a lock is really released.
- [x] Refactor `LockTransaction` classes to use the release function
instead of checking the return value of `release_read()` / `release_write()`
`ViewDescriptor.regenerate()` checks repeatedly whether packages are
installed and also does a lot of DB queries. Put a read transaction
around the whole thing to avoid repeatedly locking and unlocking the DB.
`Environment.added_specs()` has a loop around calls to
`Package.installed()`, which can result in repeated DB queries. Optimize
this with a read transaction in `Environment`.
Checks for deprecated specs were repeatedly taking out read locks on the
database, which can be very slow.
- [x] put a read transaction around the deprecation check
BundlePackages use a no-op fetch strategy. The mirror logic was assuming
that the fetcher had a resource to cache after performing a fetch. This adds
a special check to skip caching if the stage is associated with a
BundleFetchStrategy. Note that this should still allow caching resources
associated with BundlePackages.
When updating a mirror, Spack was re-retrieving all patches (since the
fetch logic for patches is separate). This updates the patch logic to
allow the mirror logic to avoid this.
Since cache_mirror does the fetch itself, it also needs to do the
checksum itself if it wants to verify that the source stored in the
mirror is valid. Note that this isn't strictly required because fetching
(including from mirrors) always separately verifies the checksum.
The targets for the cosmetic paths in mirrors were being calculated
incorrectly as of fb3a3ba: the symlinks used relative paths as targets,
and the relative path was computed relative to the wrong directory.
When creating a cosmetic symlink for a resource in a mirror, remove
it if it already exists. The symlink is removed in case the logic to
create the symlink has changed.
* Some packages (e.g. mpfr at the time of this patch) can have patches
with the same name but different contents (which apply to different
versions of the package). This appends part of the patch hash to the
cache file name to avoid conflicts.
* Some exceptions which occur during fetching are not a subclass of
SpackError and therefore do not have a 'message' attribute. This
updates the logic for mirroring a single spec (add_single_spec)
to produce an appropriate error message in that case (where before
it failed with an AttributeError)
* In various circumstances, a mirror can contain the universal storage
path but not a cosmetic symlink; in this case it would not generate
a symlink. Now "spack mirror create" will create a symlink for any
package that doesn't have one.
Users can now list mirrors of the main url in packages.
- [x] Instead of just a single `url` attribute, users can provide a list (`urls`) in the package, and these will be tried in order by the fetch strategy.
- [x] To handle one of the most common mirror cases, define a `GNUMirrorPackage` mixin to handle all the standard GNU mirrors. GNU packages can set `gnu_mirror_path` to define the path within a mirror, and the mixin handles setting up all the requisite GNU mirror URLs.
- [x] update all GNU packages in `builtin` to use the `GNUMirrorPackage` mixin.
- Add an optional argument so that `possible_dependencies()` will report
missing dependencies.
- Add a test to ensure it works.
- Ignore missing dependencies in `possible_dependencies()` by default.
- this version allows getting possible dependencies of multiple packages
or specs at once.
- New method handles calling `PackageBase.possible_dependencies` multiple
times and passing `visited` dict around.