Other parts of the concretizer code build up lists of things we can't
know without traversing all specs and packages, and they output these
lists at the very end.
The code for this, for variant values from spec literals, was intertwined
with the code for traversing the input specs. This only covers the input
specs and misses variant values that might come from directives in
packages.
- [x] move ad-hoc value handling code into spec_clauses so we do it in
one place for CLI and packages
- [x] move handling of `variant_possible_value`, etc. into
`concretize.lp`, where we can automatically infer variant existence
more concisely.
- [x] simplify/clarify some of the code for variants in `spec_clauses()`
* [cmd versions] add spack versions --new flag to only fetch new versions
* format
* [cmd versions] rename --latest to --newest and add --remote-only
* [cmd versions] add tests for --remote-only and --new
* format
* [cmd versions] update shell tab completion
* [cmd versions] remove test for --remote-only --new which gives empty output
* [cmd versions] final rename
* format
* add brillig mock package
* add test for spack versions --new
* [brillig] format
* [versions] increase test coverage
* Update lib/spack/spack/cmd/versions.py
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Update lib/spack/spack/cmd/versions.py
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* allow install of build-deps from cache via --include-build-deps switch
* make clear that --include-build-deps is useful for CI pipeline troubleshooting
fixes #20055
Compilers with custom versions like gcc@foo are not currently
matched to the appropriate targets. This is because the
version of the spec doesn't match the "real" version of the
compiler.
This PR replicates the strategy used in the original
concretizer to deal with that and tries to detect the real
version of compilers if the version in the spec returns no
results.
* AOCC-2.3.0 is now added to spack
Change-Id: I18fd9606e6fd9a288cc7dc6c6ead11ea17839a7c
* Added flag and version tests for AOCC-2.3.0
* Addressed review comments
Co-authored-by: vkallesh <Vijay-teekinavar.Kallesh@amd.com>
fixes #20040
Matching compilers among nodes has been prioritized
in #20020. Selection of default variants has been
tuned in #20182. With this setup there is no need
to have an ad-hoc rule for external packages. On
the contrary it should be removed to prefer having
default variant values over more external nodes in
the DAG.
refers #20040
Before this PR optimization rules would have selected default
providers at a higher priority than default variants. Here we
swap this priority and we consider variants that are forced by
any means (root spec or spec in depends_on clause) the same as
if they were with a default value.
This prevents the solver from avoiding expected configurations
just because they contain directives like:
depends_on('pkg+foo')
and `+foo` is not the default variant value for pkg.
fixes #19981
This commit adds support for target ranges in directives,
for instance:
conflicts('+foo', when='target=x86_64:,aarch64:')
If any target in a spec body is not a known target, the following
clause will be emitted when traversing the spec:
node_target_satisfies(Package, TargetConstraint)
A definition of the clause will then be printed at the end, similarly
to what is done for package and compiler versions.
fixes #20019
Before this modification having a newer version of a node came
at higher priority in the optimization than having matching
compilers. This could result in unexpected configurations for
packages with conflict directives on compilers of the type:
conflicts('%gcc@X.Y:', when='@:A.B')
where changing the compiler for just that node is preferred to
lower the node version to less than 'A.B'. Now the priority has
been switched so the solver will try to lower the version of the
nodes in question before changing their compiler.
refers #20079
Added docstrings to 'concretize' and 'concretized' to
document the format for tests.
Added tests for the activation of test dependencies.
refers #20040
This modification emits rules like:
provides_virtual("netlib-lapack","blas") :- variant_value("netlib-lapack","external-blas","False").
for packages that provide virtual dependencies conditionally instead
of a fact that doesn't account for the condition.
This PR fixes two problems with clang/llvm's version detection. clang's
version output looks like this:
```
clang version 11.0.0
Target: x86_64-unknown-linux-gnu
```
This caused clang's version to be misdetected as:
```
clang@11.0.0
Target:
```
This resulted in errors when trying to actually use it as a compiler.
When using `spack external find`, we couldn't determine the compiler
version, resulting in errors like this:
```
==> Warning: "llvm@11.0.0+clang+lld+lldb" has been detected on the system but will not be added to packages.yaml [reason=c compiler not found for llvm@11.0.0+clang+lld+lldb]
```
Changing the regex to only match until the end of the line fixes these
problems.
Fixes: #19473
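For illustration only (this is not the exact pattern Spack uses), matching only to the end of the line behaves like this:
```python
import re

output = "clang version 11.0.0\nTarget: x86_64-unknown-linux-gnu\n"

# Restricting the match to the end of the line yields "11.0.0" instead of
# also swallowing the following "Target: ..." line.
match = re.search(r"clang version ([^\n]+)", output)
print(match.group(1))  # 11.0.0
```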
This adds a new `mark` command that can be used to mark packages as either
explicitly or implicitly installed. Apart from fixing the package
database after installing a dependency manually, it can be used to
implement upgrade workflows as outlined in #13385.
The following commands demonstrate how the `mark` and `gc` commands can be
used to only keep the current version of a package installed:
```console
$ spack install pkgA
$ spack install pkgB
$ git pull # Imagine new versions for pkgA and/or pkgB are introduced
$ spack mark -i -a
$ spack install pkgA
$ spack install pkgB
$ spack gc
```
If there is no new version for a package, `install` will simply mark it as
explicitly installed and `gc` will not remove it.
Co-authored-by: Greg Becker <becker33@llnl.gov>
Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). spack test is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run smoke
tests on all of its packages. Historical test logs can be perused with
`spack test results`. Generic smoke tests are included for MPI implementations
and for C, C++, and Fortran compilers, as well as specific smoke tests for 18
packages.
Inside the test method, individual tests can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running and checking return codes, checking output, and error handling.
This handles the following trickier aspects of testing with direct
support in Spack's package API:
- [x] Caching source or intermediate build files at build time for
use at test time.
- [x] Test dependencies.
- [x] Packages that require a compiler for testing (such as library-only
packages).
See the packaging guide for more details on using Spack testing support.
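A sketch of what a package-level smoke test might look like (hypothetical package; the `run_test` keyword arguments follow the packaging guide and may differ in detail):
```python
from spack import *


class Mypkg(Package):
    """Hypothetical package used only to illustrate the new test support."""

    homepage = "https://example.com/mypkg"
    url = "https://example.com/mypkg-1.0.tar.gz"

    version("1.0", sha256="0" * 64)  # placeholder checksum

    def test(self):
        # Each run_test call is best-effort: a failure is recorded and the
        # remaining checks still run.
        self.run_test(
            "mypkg",                 # executable to find and run
            options=["--version"],   # arguments to pass
            expected=["1.0"],        # strings that must appear in the output
            purpose="check that the installed binary reports its version",
        )
```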
Included is support for package.py files for virtual packages. This does
not change the Spack interface, but is a major change in internals.
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The deprecatedProperties custom validator now can accept a function
to compute a better error message.
Improve error/warning message for deprecated properties
As of #18260, `spack load` and `spack env activate` now use
`prefix_inspections` from the modules configuration to decide
how to modify environment variables.
This updates the modules configuration documentation to describe
how to update environment variables with the `prefix_inspections`
section. This also updates the `spack load` and environments
documentation to refer to the new `prefix_inspections` documentation.
`spack load` and `spack env activate` now use the prefix inspections
defined in `modules.yaml`. This allows users to customize/override
environment variable modifications if desired.
If no `prefix_inspections` configuration is present, Spack uses the
values in the default configuration.
This PR reworks a few attributes in the container subsection of
spack.yaml to permit the injection of custom base images when
generating containers with Spack. In more detail, users can still
specify the base operating system and Spack version they want to use:
    spack:
      container:
        images:
          os: ubuntu:18.04
          spack: develop
in which case the generated recipe will use one of the Spack images
built on Docker Hub for the build stage and the base OS image in the
final stage. Alternatively, they can specify explicitly the two
base images:
    spack:
      container:
        images:
          build: spack/ubuntu-bionic:latest
          final: ubuntu:18.04
and it will be up to them to ensure their consistency.
Additional changes:
* This commit adds documentation on the two approaches.
* Users can now specify OS packages to install (e.g. with apt or yum)
prior to the build (previously this was only available for the
finalized image).
* Options to avoid updating the available system packages have been
added to the configuration, to facilitate the generation of recipes
permitting deterministic builds.
This commit addresses the case of concretizing a root spec with a
transitive conditional dependency on a virtual package, provided
by an external. Before these modifications default variant values
for the dependency bringing in the virtual package were not
respected, and the external package providing the virtual was added
to the DAG.
The issue stems from two facts:
- Selecting a provider has higher precedence than selecting default variants
- To ensure that an external is preferred, we used a negative weight
To solve it we shift all the providers weight so that:
- External providers have a weight of 0
- Non-external providers have a weight of 10 or more
Using a weight of zero for external providers ensures that having
an external provider, if present, or not having a provider at all
has the same effect on the higher-priority minimization.
Also fixed a few minor bugs in concretize.lp, that were causing
spurious entries in the final answer set.
Cleaned concretize.lp from leftover rules.
If the default of a multi-valued variant is set to
multiple values either in package.py or in packages.yaml
we need to ensure that all the values are present in the
concretized spec.
Since each default value has a weight of 0 and the
variant value is set implicitly by the concretizer
we need to add a rule to maximize on the number of
default values that are used.
This commit introduces a new rule:
real_node(Package) :- not external(Package), node(Package).
that makes it possible to distinguish between an external node and a
real node whose dependencies shouldn't be trimmed. It solves the
case of concretizing ninja with an external Python.
`node_compiler_hard()` means that something explicitly asked for a node's
compiler to be set -- i.e., it's not inherited, it's required. We're
generating this in spec_clauses even for specs in rule bodies, which
results in conditions like this for optional dependencies:
In py-torch/package.py:

    depends_on('llvm-openmp', when='%apple-clang +openmp')

In the generated ASP:

    declared_dependency("py-torch","llvm-openmp","build")
        :- node("py-torch"),
           variant_value("py-torch","openmp","True"),
           node_compiler("py-torch","apple-clang"),
           node_compiler_hard("py-torch","apple-clang"),
           node_compiler_version_satisfies("py-torch","apple-clang",":").
The `node_compiler_hard` there means we would have to *explicitly* set
py-torch's compiler to trigger the llvm-openmp dependency, rather than
just letting it be set by preferences. This is wrong; the dependency
should be there regardless of how the compiler was set.
- [x] remove fn.node_compiler_hard() call from spec_clauses when
generating rule body clauses.
If the version list passed to one_of_iff is empty, it still generates rules
like these:
node_compiler_version_satisfies("fujitsu-mpi", "arm", ":") :- 1 { } 1.
1 { } 1 :- node_compiler_version_satisfies("fujitsu-mpi", "arm", ":").
The cardinality constraints on the right- and left-hand sides above can never
be satisfied, so these rules do nothing.
- [x] Skip generating any rules at all for empty version lists.
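A sketch of the guard as a standalone function (hypothetical names and text-driver style; the actual change lives in Spack's solver code):
```python
import sys


def one_of_iff(head, version_clauses, out=sys.stdout):
    """Emit 'head :- 1 { ... } 1.' and its converse for a version constraint."""
    # An empty clause list would produce '1 { } 1' cardinality rules that can
    # never be satisfied, so emit nothing at all in that case.
    if not version_clauses:
        return
    body = "; ".join(version_clauses)
    out.write("%s :- 1 { %s } 1.\n" % (head, body))
    out.write("1 { %s } 1 :- %s.\n" % (body, head))


one_of_iff('version_satisfies("pkg", "@1.0:")',
           ['version("pkg", "1.0")', 'version("pkg", "1.1")'])
one_of_iff('node_compiler_version_satisfies("fujitsu-mpi", "arm", ":")', [])  # emits nothing
```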
As reported, conflicts with compiler ranges were not treated
correctly. This commit adds tests to verify the expected behavior
for the new concretizer.
The new rules to enforce a correct behavior involve:
- Adding a rule to prefer the compiler selected for
the root package, if no other preference is set
- Give a strong negative weight to compiler preferences
expressed in packages.yaml
- Maximize on compiler AND compiler version match
Variants of this kind don't have a list of possible
values encoded in the ASP facts. Since all we have
is a validator, the list of possible values includes
just the default value and possibly the value passed
from packages.yaml or the CLI.
This is done after the builder has actually built
the specs, to respect the semantics used with the
old concretizer.
Later we could move this to the solver as
a multivalued variant.
This is done after the builder has actually built
the specs, to respect the semantics used with the
old concretizer.
A better approach is to substitute the spec
directly in concretization.
The "none" variant value cannot be combined with
other values.
The '*' wildcard matches anything, including "none".
It's thus relevant in queries, but disregarded in
concretization.
- The test on concretization of anonymous dependencies
has been fixed by raising the expected exception.
- The test on compiler bootstrap has been fixed by
updating the version of GCC used in the test.
Since gcc@2.0 does not support targets later than
x86_64, the new concretizer was looking for a
non-existing spec, i.e. it was correctly trying
to retrieve 'gcc target=x86_64' instead of
'gcc target=core2'.
- The test on gitlab CI needed an update of the target
This commit adds support for specifying rules in
packages.yaml that refer to virtual packages.
The approach is to normalize in memory each
configuration and turn it into an equivalent
configuration without rules on virtuals. This
is possible if the set of packages to be handled
is considered fixed.
The weight of the target used in concretization is, in order:
1. A specific per package weight, if set in packages.yaml
2. Inherited from the parent, if possible
3. The default target weight (always set)
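As a rough Python sketch of that priority order (names and values are made up for illustration; the real logic is encoded in the ASP program):
```python
DEFAULT_TARGET_WEIGHT = 100
per_package_weights = {"gromacs": 10}  # hypothetical packages.yaml overrides


def target_weight(pkg, parent=None):
    if pkg in per_package_weights:     # 1. per-package weight from packages.yaml
        return per_package_weights[pkg]
    if parent is not None:             # 2. inherited from the parent, if possible
        return target_weight(parent)
    return DEFAULT_TARGET_WEIGHT       # 3. the default target weight (always set)


print(target_weight("gromacs"))            # 10
print(target_weight("fftw", "gromacs"))    # 10, inherited from the parent
print(target_weight("zlib"))               # 100, the default
```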
Generate facts on externals by inspecting
packages.yaml. Added rules in concretize.lp
Added extra logic so that external specs
disregard any conflict encoded in the
package.
In ASP this would be a simple addition to
an integrity constraint:
:- c1, c2, c3, not external(pkg)
Using the Backend API from Python requires some
scaffolding to obtain a default-negated statement.
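For illustration, a minimal sketch of emitting such an integrity constraint through clingo's Python Backend (the atom names are placeholders, not Spack's actual code):
```python
import clingo

ctl = clingo.Control()
with ctl.backend() as backend:
    # Program literals for the conflict conditions and the external(pkg) fact.
    c1 = backend.add_atom(clingo.Function("c1"))
    c2 = backend.add_atom(clingo.Function("c2"))
    external = backend.add_atom(clingo.Function("external", [clingo.String("pkg")]))

    # An empty head makes this an integrity constraint; a negative literal in
    # the body is the default-negated "not external(pkg)".
    backend.add_rule([], [c1, c2, -external])
```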
Conflict rules from packages are added as integrity
constraints in the ASP formulation. Most of the code
to generate them has been reused from PyclingoDriver.rules
The new concretizer and the old concretizer solve constraints
in a different way. Here we ensure that a SpackError is raised,
instead of a specific error that made sense in the old concretizer
but probably not in the new.
Instead of python callbacks, use cardinality constraints for package
versions. This is slightly faster and has the advantage that it can be
written to an ASP program to be executed *outside* of Spack. We can use
this in the future to unify the pyclingo driver and the clingo text
driver.
This makes use of add_weight_rule() to implement cardinality constraints.
add_weight_rule() only has a lower bound parameter, but you can implement
a strict "exactly one of" constraint using it. In particular, wee want to
define:
1 {v1; v2; v3; ...} 1 :- version_satisfies(pkg, constraint).
version_satisfies(pkg, constraint) :- 1 {v1; v2; v3; ...} 1.
And we do that like this, for every version constraint:
atleast1(pkg, constr) :- 1 {version(pkg, v1); version(pkg, v2); ...}.
morethan1(pkg, constr) :- 2 {version(pkg, v1); version(pkg, v2); ...}.
version_satisfies(pkg, constr) :- atleast1(pkg, constr), not morethan1(pkg, constr).
:- version_satisfies(pkg, constr), morethan1(pkg, constr).
:- version_satisfies(pkg, constr), not atleast1(pkg, constr).
v1, v2, v3, etc. are computed on the Python side by comparing every
possible package version with the constraint.
Computing things like this has the added advantage that if v1, v2, v3,
etc. comprise *all* possible versions of a package, we can just omit the
rules for the constraint under consideration. This happens pretty
frequently in the Spack mainline.
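A self-contained sketch of this encoding through clingo's Python Backend, with made-up versions and package names (Spack's actual code derives the literals from its own data structures):
```python
import clingo

versions = ["1.0", "1.1", "2.0"]  # versions matching the constraint, computed in Python

ctl = clingo.Control()
with ctl.backend() as backend:
    version_lits = [
        backend.add_atom(clingo.Function("version", [clingo.String("pkg"), clingo.String(v)]))
        for v in versions
    ]
    satisfies = backend.add_atom(
        clingo.Function("version_satisfies", [clingo.String("pkg"), clingo.String("constr")])
    )
    atleast1 = backend.add_atom()   # helper atoms without symbols
    morethan1 = backend.add_atom()

    # atleast1 :- 1 { version(pkg, v1); version(pkg, v2); ... }.
    backend.add_weight_rule([atleast1], 1, [(lit, 1) for lit in version_lits])
    # morethan1 :- 2 { version(pkg, v1); version(pkg, v2); ... }.
    backend.add_weight_rule([morethan1], 2, [(lit, 1) for lit in version_lits])
    # version_satisfies(pkg, constr) :- atleast1, not morethan1.
    backend.add_rule([satisfies], [atleast1, -morethan1])
    # :- version_satisfies(pkg, constr), morethan1.
    backend.add_rule([], [satisfies, morethan1])
    # :- version_satisfies(pkg, constr), not atleast1.
    backend.add_rule([], [satisfies, -atleast1])
```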
- [x] Solver now uses the Python interface to clingo
- [x] can extract unsatisfiable cores from problems when things go wrong
- [x] use Python callbacks for versions instead of choice rules (this may
ultimately hurt performance)
There are now three parts:
- `SpackSolverSetup`
- Spack-specific logic for generating constraints. Calls methods on
`AspTextGenerator` to set up the solver with a Spack problem. This
shouldn't change much from solver backend to solver backend.
- ClingoDriver
- The solver driver provides methods for SolverSetup to generate an ASP
program, send it to `clingo` (run as an external tool), and parse the
output into function tuples suitable for `SpecBuilder`.
- The interface is generic and should not have to change much for a
driver for, say, the Clingo Python interface.
- SpecBuilder
- Builds Spack specs from function tuples parsed by the solver driver.
The original implementation was difficult to read, as it only had
single-letter variable names. This converts all of them to descriptive
names, e.g., P -> Package, V -> Virtual/Version/Variant, etc.
To handle unknown compilers properly in tests (and elsewhere), we need to
add unknown compilers from the spec to the list of possible compilers.
Rework how the compiler list is generated so that it includes compilers from
specs if the existence check is disabled.
Specs like hdf5 ^mpi were unsatisfiable because we added a requirement
for `node("mpi").`. This can't be resolved because "mpi" is not a
package.
- [x] Introduce `virtual_node()`, which says *some* provider must be in
the DAG.
This adds compiler flags to the ASP solve so that we can have conditions
based on them in the solve. But, it keeps order out of the solve to
avoid unneeded complexity and combinatorial explosions.
The solver determines which flags are on a spec, but the order is
determined by DAG precedence (children's flags take precedence over
parents' and are added on the right) and by command-line order (the order
in which flags were specified on the command line is respected).
The solver is responsible for determining when to propagate flags, when
to inherit them from other nodes, when to take them from compiler
preferences, etc.
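A toy illustration of the DAG-precedence ordering (not Spack's actual flag-handling code):
```python
def ordered_flags(parent_flags, child_flags):
    # Parents' flags come first and children's flags are appended on the right,
    # so that, read left to right by the compiler, the child's values win.
    return list(parent_flags) + list(child_flags)


print(ordered_flags(["-O2", "-g"], ["-O3"]))  # ['-O2', '-g', '-O3']
```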
Weight microarchitectures and prefer more recent ones. Also disallow
nodes where the compiler does not support the selected target.
We should revisit this at some point as it seems like if I play around
with the compiler support for different architectures, the solver runs
very slowly. See notes in comments -- the bad case was gcc supporting
broadwell and skylake with clang maxing out at haswell.
We didn't have a cardinality constraint for multi-valued variants, so the
solver wasn't filling them in.
- [x] add a requirement for at least one value for multi-valued variants
Variants like `cpu_target` on `openblas` don't have defined values, but
they have a default. Ensure that the default is always a possible value
for the solver.
Spack was generating the same dependency constraints twice in the output ASP:
```
declared_dependency("abinit", "hdf5", "link")
:- node("abinit"),
variant_value("abinit", "mpi", "True"),
variant_value("abinit", "mpi", "True").
```
This was because `AspFunction` was modifying itself when called.
- [x] fix `AspFunction` so that every call returns a new object
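A minimal sketch of the fix (an illustrative class, not Spack's actual `AspFunction`): calling the object builds a new function term instead of mutating the receiver:
```python
class AspFunction:
    def __init__(self, name, args=None):
        self.name = name
        self.args = tuple(args or ())

    def __call__(self, *args):
        # Return a fresh object; mutating self here is what caused the same
        # clause to pick up arguments again when reused in another rule body.
        return AspFunction(self.name, self.args + args)

    def __str__(self):
        return "%s(%s)" % (self.name, ", ".join('"%s"' % a for a in self.args))


node = AspFunction("node")
print(node("abinit"))  # node("abinit")
print(node("hdf5"))    # node("hdf5"), unaffected by the previous call
```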
- [x] Add support for packages.yaml and command-line compiler preferences.
- [x] Rework compiler version propagation to use optimization rather than
hard logic constraints
Technically the ASP output order does not matter, but it's hard to diff
two different solve formulations unless we order it.
- [x] make sure ASP output is emitted in a deterministic order (by
sorting all hash keys)
This needs more thought, as I am pretty sure the weights are not correct.
Or, at least, I'm not convinced that they do what we want in all cases.
See note in concretize.lp.
The solver now prefers newer versions, like the old concretizer. Preferences
are taken from packages.yaml, preferred=True, the package definition, and
finally each version itself, in that order.
Competition output only prints out one model, so we do not have to
unnecessarily parse all the non-optimal models. We'll just look at the
best model and bring that in.
In practice, this saves a lot of JSON parsing and spec construction time.
Clingo actually has an option to output JSON -- use that instead of
parsing the raw output ourselves.
This also allows us to pick the best answer -- modify the parser to
*only* construct a spec for that one rather than building all of them
like we did before.
- Instead of using default logic, handle variant defaults by minimizing
the number of non-default variants in the solution.
- This actually seems to be pretty fast, and it fixes the long-standing
issue that writing this:
spack install hdf5 ^mpich
will fail if you don't specify hdf5+mpi. With optimization and
allowing enums to be enumerated, the solver seems to be able to quickly
discover that +mpi is the only way hdf5 can depend on mpich, and it
forces the switch to be thrown.
Use '1 { version(x); version(y); version(z) } 1.' instead of declaring
conflicts for non-matching versions. This keeps the sense of version
clauses positive, which will allow them to be used more easily in
conditionals later.
Also refactor `spec_clauses()` method to return clauses that can be used
in conditions, etc. instead of just printing out facts.
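Schematically, with made-up minimal types (the real method handles many more clause kinds), `spec_clauses()` now returns clause objects that callers can either assert as facts or embed as conditions:
```python
from collections import namedtuple

Clause = namedtuple("Clause", ["name", "args"])


def spec_clauses(spec):
    """Return clauses for a spec instead of printing facts directly, so callers
    can also reuse them as conditions in rule bodies."""
    clauses = [Clause("node", (spec["name"],))]
    for variant, value in spec.get("variants", {}).items():
        clauses.append(Clause("variant_value", (spec["name"], variant, value)))
    return clauses


print(spec_clauses({"name": "hdf5", "variants": {"mpi": "True"}}))
```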
- This handles setting the compiler and falling back to a default
compiler, as well as providing default values for compilers/compiler
versions.
- Versions still aren't quite right -- you can't properly override
versions on compiler specs.
- Model architecture default settings and propagation off of variants
- Leverage ASP default logic to set architecture to default if it's not
set otherwise.
- Move logic out of Python and into concretize.lp as first-order rules.
We are relying on default logic in the variant handling in that we set a
default value if we never see `variant_set(P, V, X)`.
- Move the logic for this into `concretize.lp` instead of generating it
for every package.
- For programs that don't have explicit variant settings, clingo warns
that variant_set(P, V, X) doesn't appear in any rule head, because a
setting is never generated.
- Specifically suppress this warning.
- moving the dump logic into spack.solver.asp.solve() allows us to print
out useful debug info sooner
- prior approach required a successful solve to print out anything.
According to the documentation for spack and pkg-config,
$view/share/pkgconfig should also be a valid place to look
for package config files. This commit ensures that when
`spack env activate $dir` is called, the environment has this
directory in PKG_CONFIG_PATH.
As of #13100, Spack installs the dependencies of a _single_ spec in parallel.
Environments, when installed, can only get parallelism from each individual
spec, as they're installed in order. This PR makes entire environments build
in parallel by extending Spack's package installer to accept multiple root
specs. The install command and Environment class have been updated to use
the new parallel install method.
The specs and kwargs for each *uninstalled* package (when not force-replacing
installations) of an environment are collected, passed to the `PackageInstaller`,
and processed using a single build queue.
This introduces a `BuildRequest` class to track install arguments, and it
significantly cleans up the code used to track package ids during installation.
Package ids in the build queue are now just DAG hashes, as you would expect.
Other tasks:
- [x] Finish updating the unit tests based on `PackageInstaller`'s use of
`BuildRequest` and the associated changes
- [x] Change `environment.py`'s `install_all` to use the `PackageInstaller` directly
- [x] Change the `install` command to leverage the new installation process for multiple specs
- [x] Change install output messages for external packages, e.g.:
`[+] /usr` -> `[+] /usr (external bzip2-1.0.8-<dag-hash>)`
- [x] Fix incomplete environment install's view setup/update and not confirming all
packages are installed (?)
- [x] Ensure externally installed package dependencies are properly accounted for in
remaining build tasks
- [x] Add tests for coverage (if insufficient and can identify the appropriate, uncovered non-comment lines)
- [x] Add documentation
- [x] Resolve multi-compiler environment install issues
- [x] Fix issue with environment installation reporting (restore CDash/JUnit reports)
This change makes improvements to the `spack ci rebuild` command
which supports running gitlab pipelines on PRs from forks. Much
of this has to do with making sure we can run without the secrets
previously required for running gitlab pipelines (e.g. signing key,
aws credentials, etc). Specific improvements in this PR:
Check if spack has precisely one signing key, and use that information
as an additional constraint on whether or not we should attempt to sign
the binary package we create.
Also, if spack does not have at least one public key, add the install
option "--no-check-signature"
If we are running a pipeline without any profile or environment
variables allowing us to push to S3, the pipeline could still
successfully create a buildcache in the artifacts and move on. So
just print a message and move on if pushing either the buildcache
entry or cdash id file to the remote mirror fails.
When we attempt to generate a package or gpg key index on an S3
mirror, and there is nothing to index, just print a warning and
exit gracefully rather than throw an exception.
Support the use of PR-specific mirrors for temporary binary pkg
storage. This will allow quality-of-life improvement for developers,
providing a place to store binaries over the lifetime of a PR, so
that they must only wait for packages to rebuild from source when
they push a new commit that causes it to be necessary.
Replace two-pass install with a single pass and the new option:
--require-full-hash-match. Doing this also removes the need to
save a copy of the spack.yaml to be copied over the one spack
rewrites in between the two spack install passes.
Work around a mirror configuration issue caused by using
spack.util.executable to do the package installation.
* Update pipeline trigger jobs for PRs from forks
Moving to PRs from forks relies on external synchronization script
pushing special branch names. Also secrets will only live on the
spack mirror project, and must be propagated to the E4S project via
variables on the trigger jobs.
When this change is merged, pipelines will not run until we update
the "Custom CI configuration path" in the Gitlab CI Settings, as the
name of the file has changed to better reflect its purpose.
* Arg to MirrorCollection is used exclusively, so add main remote mirror to it
* Compute full hash less frequently
* Add tests covering index generation error handling code
Since #11598 sbang has been installed within the install_tree. This doesn’t play
nicely with install_tree padding, since sbang can’t do its job if it is installed in a
long path (this is the whole point of sbang).
This PR changes the padding specification. Instead of $padding inside paths,
we now have a separate `padding:` field in the `install_tree` configuration.
Previously, the `install_tree` looked like this:
```
/path/to/opt/spack_padding_padding_padding_padding_padding/
    bin/
        sbang
    .spack-db/
        ...
    linux-rhel7-x86_64/
        ...
```
This PR updates things to look like this:
```
/path/to/opt/
    bin/
        sbang
    spack_padding_padding_padding_padding_padding/
        .spack-db/
            ...
        linux-rhel7-x86_64/
            ...
```
So padding is added at the start of all install prefixes *within* the unpadded
root. The database and all installations still go under the padded root.
This ensures that `sbang` is in the shortest possible path while also allowing
us to make long paths for relocatable binaries.