Commit graph

4274 commits

Todd Gamblin
cde10692b0 concretizer: prioritize versions by package pref, newest, preferred, actual
Solver now prefers newer versions, like the old concretizer.  Version
preferences are applied in order: packages.yaml preferences first, then
versions marked preferred=True, then versions from the package
definition, and finally the versions themselves.
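
One way to picture that ordering is as a sort key (a minimal sketch,
not Spack's actual API; versions are modeled as plain int tuples and
all helper names here are hypothetical):

```python
def version_priority_key(version, yaml_prefs, defined_versions):
    """Sort key for the preference order above: packages.yaml
    preferences, then preferred=True, then package-definition order,
    then the versions themselves (newest first)."""
    yaml_rank = (
        yaml_prefs.index(version) if version in yaml_prefs else len(yaml_prefs)
    )
    preferred_rank = 0 if defined_versions[version].get("preferred") else 1
    definition_rank = list(defined_versions).index(version)
    # Negate components so newer (larger) versions sort first.
    newest_rank = tuple(-part for part in version)
    return (yaml_rank, preferred_rank, definition_rank, newest_rank)

# Example: (1, 10) is declared preferred, so it beats the newer (1, 12).
defined = {(1, 12): {}, (1, 10): {"preferred": True}, (1, 8): {}}
best_first = sorted(defined, key=lambda v: version_priority_key(v, [], defined))
print(best_first)  # [(1, 10), (1, 12), (1, 8)]
```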
2020-11-17 10:04:13 -08:00
Todd Gamblin
18fba433f6 concretizer: Use "competition" output format to avoid extra parsing
Competition output only prints out one model, so we do not have to
unnecessarily parse all the non-optimal models.  We'll just look at the
best model and bring that in.

In practice, this saves a lot of JSON parsing and spec construction time.
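
For illustration, the same "only keep the best model" idea via the
clingo Python API (a sketch; at this point Spack actually drives the
clingo executable as a subprocess). During an optimizing solve, clingo
reports successively better models, so the last one seen is the best:

```python
import clingo  # assumes the clingo Python bindings are installed

def best_model(program):
    """Solve and keep only the final (best) model's atoms."""
    ctl = clingo.Control()
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    best = []
    def on_model(model):
        # Overwritten on each improvement; the last call is optimal.
        best[:] = [str(atom) for atom in model.symbols(shown=True)]
    ctl.solve(on_model=on_model)
    return best

print(best_model("a. b :- a. #minimize { 1 : b }."))
```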
2020-11-17 10:04:13 -08:00
Todd Gamblin
b4e6d9d28e concretizer: handle virtual provider preferences from packages.yaml 2020-11-17 10:04:13 -08:00
Todd Gamblin
36ec66d997 concretizer: use clingo json output instead of text
Clingo actually has an option to output JSON -- use that instead of
parsing the raw output ourselves.

This also allows us to pick the best answer -- modify the parser to
*only* construct a spec for that one rather than building all of them
like we did before.
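
A sketch of the subprocess-plus-JSON shape (clingo's `--outf=2` emits
JSON; option plumbing and error handling are omitted):

```python
import json
import subprocess

def solve_best(lp_file):
    """Run clingo with JSON output and return the last witness; for an
    optimizing solve, the last model reported is the best one found."""
    proc = subprocess.run(
        ["clingo", "--outf=2", lp_file], capture_output=True, text=True
    )
    data = json.loads(proc.stdout)
    witnesses = data["Call"][-1]["Witnesses"]
    return witnesses[-1]["Value"]  # list of atom strings for the best answer
```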
2020-11-17 10:04:13 -08:00
Todd Gamblin
a332981f2f concretizer: require only one provider for any virtual in the DAG 2020-11-17 10:04:13 -08:00
Todd Gamblin
501cb371c9 concretizer: handle variant defaults with optimization
- Instead of using default logic, handle variant defaults by minimizing
  the number of non-default variants in the solution.

- This actually seems to be pretty fast, and it fixes the long-standing
  issue that writing this:

      spack install hdf5 ^mpich

  will fail if you don't specify hdf5+mpi.  With optimization and
  allowing enums to be enumerated, the solver seems to be able to quickly
  discover that +mpi is the only way hdf5 can depend on mpich, and it
  forces the switch to be thrown.
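
A sketch of that optimization as ASP emitted from Python (the predicate
names here are illustrative, not necessarily the ones in concretize.lp):

```python
# Each variant set to a non-default value costs 1; the solver minimizes
# the total, so +mpi on hdf5 gets flipped only when the DAG requires it.
MINIMIZE_NON_DEFAULT_VARIANTS = """
#minimize {
    1, Package, Variant
  : variant_value(Package, Variant, Value),
    not variant_default_value(Package, Variant, Value),
    node(Package)
}.
"""
```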
2020-11-17 10:04:13 -08:00
Todd Gamblin
1cab1b1994 concretizer: support conditional dependencies 2020-11-17 10:04:13 -08:00
Todd Gamblin
51af590e64 variants: allow MultiValuedVariants to be constructed incrementally 2020-11-17 10:04:13 -08:00
Todd Gamblin
be10568a6a concretizer: initial support for virtual dependencies
Add initial support for virtual dependencies.  Solver now knows about all
virtuals and can choose one to resolve a dependency.
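
The choice itself can be sketched as a cardinality rule (illustrative
predicate names):

```python
# For every virtual something in the DAG depends on, pick exactly one
# package that can provide it.
CHOOSE_PROVIDER = """
1 { provider(Package, Virtual) : provides_virtual(Package, Virtual) } 1
  :- virtual_node(Virtual).
"""
```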
2020-11-17 10:04:13 -08:00
Todd Gamblin
3f93553a08 concretizer: print out virtuals 2020-11-17 10:04:13 -08:00
Todd Gamblin
8a6207aa70 concretizer: handle versions with choice construct rather than conflicts
Use '1 { version(x); version(y); version(z) } 1.' instead of declaring
conflicts for non-matching versions.  This keeps the sense of version
clauses positive, which will allow them to be used more easily in
conditionals later.

Also refactor `spec_clauses()` method to return clauses that can be used
in conditions, etc. instead of just printing out facts.
2020-11-17 10:04:13 -08:00
Todd Gamblin
6e31430bec concretizer: add another definition pragma.
- single_value_variant may not be defined by the generated program.  Mark
  it to avoid warnings.
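
For context, a sketch of such a pragma (the arity here is an
assumption):

```python
# "#defined" tells clingo a predicate may legitimately have no defining
# rule, silencing "atom does not occur in any rule head" warnings.
DEFINED_PRAGMA = """
#defined single_value_variant/2.
"""
```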
2020-11-17 10:04:13 -08:00
Todd Gamblin
a81258663c concretizer: cleanup 2020-11-17 10:04:13 -08:00
Todd Gamblin
6bbc64555b concretizer: use conditional literals for versions. 2020-11-17 10:04:13 -08:00
Todd Gamblin
4288639770 concretizer: mark depends_on/2 defined for solves without dependencies. 2020-11-17 10:04:13 -08:00
Todd Gamblin
60cf3fdb34 concretizer: add basic semantics for compilers
- This handles setting the compiler and falling back to a default
  compiler, as well as providing default values for compilers/compiler
  versions.

- Versions still aren't quite right -- you can't properly override
  versions on compiler specs.
2020-11-17 10:04:13 -08:00
Todd Gamblin
f7dce19754 concretizer: simplify and move architecture semantics into concretize.lp
- Model architecture default settings and propagation off of variants

- Leverage ASP default logic to set architecture to default if it's not
  set otherwise.

- Move logic out of Python and into concretize.lp as first-order rules.
2020-11-17 10:04:13 -08:00
Todd Gamblin
34ea3d20cf concretizer: break output up into easier-to-understand sections 2020-11-17 10:04:13 -08:00
Todd Gamblin
d94d957536 concretizer: simplify and suppress warnings for variant handling
Variant handling relies on default logic: we set a default value if we
never see `variant_set(P, V, X)`.

- Move the logic for this into `concretize.lp` instead of generating it
  for every package.

- For programs that don't have explicit variant settings, clingo warns
  that variant_set(P, V, X) doesn't appear in any rule head, because a
  setting is never generated.

  - Specifically suppress this warning.
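
A sketch of the default rule plus the warning suppression (the
default-value predicate name is an assumption; `variant_set` is from
the message above):

```python
VARIANT_DEFAULTS = """
% Project away the value so the negative literal stays safe.
variant_set(P, V) :- variant_set(P, V, _).

% Use the default unless the variant was set explicitly.
variant_value(P, V, X) :-
    node(P),
    variant_default_value(P, V, X),
    not variant_set(P, V).

% variant_set/3 may never appear in a rule head; don't warn about it.
#defined variant_set/3.
"""
```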
2020-11-17 10:04:13 -08:00
Todd Gamblin
b171ac5050 concretizer: split long lines in ASP programs 2020-11-17 10:04:13 -08:00
Todd Gamblin
573a2612dc concretizer: split main logic program out into files
- Add `concretize.lp` and `display.lp` as independent files
- Dump them instead of embedded strings
2020-11-17 10:04:13 -08:00
Todd Gamblin
8bc1092f41 concretizer: colorize ASP output 2020-11-17 10:04:13 -08:00
Todd Gamblin
3637b611a7 concretizer: move dump logic into solver.asp
- moving the dump logic into spack.solver.asp.solve() allows us to print
  out useful debug info sooner

- the prior approach required a successful solve to print out anything.
2020-11-17 10:04:13 -08:00
Todd Gamblin
81e187e410 concretizer: first rudimentary round-trip with asp-based solver 2020-11-17 10:04:13 -08:00
Todd Gamblin
c7812f7e10 concretizer: add rudimentary variants with defaults to ASP solve 2020-11-17 10:04:13 -08:00
Todd Gamblin
a8a6d943d6 concretizer: beginnings of solve() command
- `spack solve` command outputs a really basic ASP program that handles
  unconditional dependencies, architecture and versions

- doesn't yet handle conflicts, picking latest versions, preferred
  versions, compilers, etc.

- doesn't handle variants
2020-11-17 10:04:13 -08:00
Todd Gamblin
5b725a37bc repo: Add all_package_classes() method.
- We were able to get names and instances previously
- Add a convenience function to get package classes
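
A hedged usage sketch (the method name is from this commit; reaching it
through `spack.repo.path` is an assumption):

```python
import spack.repo

# Iterate over package *classes* without instantiating any packages.
for pkg_class in spack.repo.path.all_package_classes():
    print(pkg_class.__name__)
```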
2020-11-17 10:04:13 -08:00
Robert Underwood
f359664493 include share/pkgconfig in user environments (#19909)
According to the documentation for Spack and pkg-config,
$view/share/pkgconfig should also be a valid place to look
for package config files.  This commit ensures that when
`spack env activate $dir` is called, the environment has this
directory in PKG_CONFIG_PATH.
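
The lookup paths in question, as a small sketch (`lib64` is an extra
common case, not named in the commit):

```python
import os

def pkg_config_dirs(view_root):
    """Directories under a view where pkg-config files may live."""
    return [
        os.path.join(view_root, "lib", "pkgconfig"),
        os.path.join(view_root, "lib64", "pkgconfig"),
        os.path.join(view_root, "share", "pkgconfig"),  # added by this commit
    ]

os.environ["PKG_CONFIG_PATH"] = os.pathsep.join(pkg_config_dirs("/path/to/view"))
```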
2020-11-17 11:10:28 -06:00
Tamara Dahlgren
6fa6af1070 Support parallel environment builds (#18131)
As of #13100, Spack installs the dependencies of a _single_ spec in parallel.
Environments, when installed, can only get parallelism from each individual
spec, as they're installed in order.  This PR makes entire environments build
in parallel by extending Spack's package installer to accept multiple root
specs.  The install command and Environment class have been updated to use
the new parallel install method.

The specs and kwargs for each *uninstalled* package (when not force-replacing
installations) of an environment are collected, passed to the `PackageInstaller`,
and processed using a single build queue.

This introduces a `BuildRequest` class to track install arguments, and it
significantly cleans up the code used to track package ids during installation.
Package ids in the build queue are now just DAG hashes, as you would expect.

Other tasks:

- [x] Finish updating the unit tests based on `PackageInstaller`'s use of
      `BuildRequest` and the associated changes
- [x] Change `environment.py`'s `install_all` to use the `PackageInstaller` directly
- [x] Change the `install` command to leverage the new installation process for multiple specs
- [x] Change install output messages for external packages, e.g.:
       `[+] /usr` -> `[+] /usr (external bzip2-1.0.8-<dag-hash>)`
- [x] Fix incomplete environment install's view setup/update and not confirming all 
       packages are installed (?)
- [x] Ensure externally installed package dependencies are properly accounted for in 
       remaining build tasks
- [x] Add tests for coverage (if insufficient and we can identify the appropriate, uncovered non-comment lines)
- [x] Add documentation
- [x] Resolve multi-compiler environment install issues
- [x] Fix issue with environment installation reporting (restore CDash/JUnit reports)
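
A hypothetical sketch of the resulting flow (class names follow the
description above; the import path, accessor, and signatures are
assumptions, not Spack's exact API):

```python
from spack.installer import PackageInstaller  # assumed import path

def install_all(env, **install_args):
    """Hand every uninstalled root of an environment to one
    PackageInstaller, so a single build queue schedules them all."""
    installs = [
        (spec.package, install_args)
        for spec in env.concrete_roots()   # hypothetical accessor
        if not spec.installed              # skip what's already there
    ]
    PackageInstaller(installs).install()
```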
2020-11-17 02:41:07 -08:00
Wouter Deconinck
423e80af23 spack edit: accept readonly packages (#19949) 2020-11-16 17:14:46 -08:00
Scott Wittenburg
ef0a555ca2 pipelines: support testing PRs from forks (#19248)
This change makes improvements to the `spack ci rebuild` command
which supports running gitlab pipelines on PRs from forks.  Much
of this has to do with making sure we can run without the secrets
previously required for running gitlab pipelines (e.g. signing key,
aws credentials, etc.).  Specific improvements in this PR:

Check if spack has precisely one signing key, and use that information
as an additional constraint on whether or not we should attempt to sign
the binary package we create.

Also, if spack does not have at least one public key, add the install
option "--no-check-signature".

If we are running a pipeline without any profile or environment
variables allowing us to push to S3, the pipeline could still
successfully create a buildcache in the artifacts and move on.  So
just print a message and move on if pushing either the buildcache
entry or cdash id file to the remote mirror fails.

When we attempt to generate a package or gpg key index on an S3
mirror and there is nothing to index, just print a warning and
exit gracefully rather than throwing an exception.

Support the use of PR-specific mirrors for temporary binary package
storage.  This is a quality-of-life improvement for developers,
providing a place to store binaries over the lifetime of a PR, so
that they only wait for packages to rebuild from source when a new
commit makes that necessary.

Replace the two-pass install with a single pass and a new option,
`--require-full-hash-match`.  This also removes the need to save a
copy of the spack.yaml to be copied over the one spack rewrites
between the two install passes.

Work around a mirror configuration issue caused by using
spack.util.executable to do the package installation.

* Update pipeline trigger jobs for PRs from forks

Moving to PRs from forks relies on external synchronization script
pushing special branch names.  Also secrets will only live on the
spack mirror project, and must be propagated to the E4S project via
variables on the trigger jobs.

When this change is merged, pipelines will not run until we update
the "Custom CI configuration path" in the Gitlab CI Settings, as the
name of the file has changed to better reflect its purpose.

* Arg to MirrorCollection is used exclusively, so add main remote mirror to it

* Compute full hash less frequently

* Add tests covering index generation error handling code
2020-11-16 15:16:24 -08:00
Adam J. Stewart
3536433c41 macOS: Big Sur reports as either 10.16 or 11.0 (#19900) 2020-11-15 11:54:39 -06:00
Greg Becker
fafff0c6c0 move sbang to unpadded install tree root (#19640)
Since #11598 sbang has been installed within the install_tree. This doesn’t play
nicely with install_tree padding, since sbang can’t do its job if it is installed in a
long path (this is the whole point of sbang).

This PR changes the padding specification.  Instead of $padding inside paths,
we now have a separate `padding:` field in the `install_tree` configuration.

Previously, the `install_tree` looked like this:

```
    /path/to/opt/spack_padding_padding_padding_padding_padding/
        bin/
            sbang
        .spack-db/
            ...
        linux-rhel7-x86_64/
            ...
```

This PR updates things to look like this:

```
    /path/to/opt/
        bin/
            sbang
        spack_padding_padding_padding_padding_padding/
            .spack-db/
                ...
            linux-rhel7-x86_64/
                ...
```

So padding is added at the start of all install prefixes *within* the unpadded
root.  The database and all installations still go under the padded root.

This ensures that `sbang` is in the shortest possible path while also
allowing us to make long paths for relocatable binaries.
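
A hypothetical config.yaml sketch of the new field (the `padding:` name
is from this PR; the nesting and the length 128 are example values):

```python
# The padding moves out of the root path into its own setting.
CONFIG_YAML = """
config:
  install_tree:
    root: /path/to/opt
    padding: 128
"""
```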
2020-11-12 16:08:55 -08:00
Peter Scheibel
32bfe0a001 Testing: ensure that all packages can be pickled (#19890)
As of #18205, all packages must be pickle-able to be installed by
Spack.

This adds a test to check that each package can be pickled. If any
package fails to pickle, the test keeps going and collects the names
of all failed packages; it then takes the first one that failed and
attempts to re-pickle it, generating the full stack trace for the
failed pickle attempt.
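
The shape of that test, as a sketch (the `packages` iterable stands in
for Spack's real package fixture):

```python
import pickle

def check_all_packages_pickle(packages):
    """Collect every package that fails to pickle, then re-pickle the
    first failure so the underlying error's full traceback is raised."""
    failures = []
    for pkg in packages:
        try:
            pickle.dumps(pkg)
        except Exception:
            failures.append(pkg)
    if failures:
        pickle.dumps(failures[0])  # re-raises with a full stack trace
```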
2020-11-12 15:55:34 -08:00
Adam J. Stewart
02281a891d MavenPackage: allow additional build args (#19676) 2020-11-12 12:57:49 -08:00
Peter Scheibel
bb42470211 macos: update build process to use spawn instead of fork (#18205)
Spack creates a separate process to do package installation. Different
operating systems and Python versions use different methods to create
it but up until Python 3.8 both Linux and Mac OS used "fork" (which
duplicates process memory, file descriptor table, etc.).

Python >= 3.8 on Mac OS prefers creating an entirely new process
(referred to as the "spawn" start method) because "fork" was found to
cause issues (in other words "spawn" is the default start method used
by multiprocessing.Process). Spack was dependent on the particular
behavior of fork to replicate process memory and transmit file
descriptors.

This PR refactors the Spack internals to support starting a child
process with the "spawn" method. To achieve this, it makes the
following changes:

- ensure that the package repository and other global state are
  transmitted to the child process
- ensure that file descriptors are transmitted to the child process in
  a way that works with multiprocessing and spawn
- make all the state needed for the build process and tests picklable
  (package, stage, etc.)
- move a number of locally-defined functions into global scope so that
  they can be pickled
- rework tests where needed to avoid using local functions

This PR also reworks sbang tests to work on macOS, where temporary
directories are deeper than the Linux sbang limit. We make the limit
platform-dependent (macOS supports 512-character shebangs).

See: #14102
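
A minimal standard-library illustration of the difference (not Spack
code): under "spawn" the child starts in a fresh interpreter, so its
arguments must arrive by pickling rather than by inherited memory.

```python
import multiprocessing

def child(pkg_state):
    # Under "spawn" this runs in a brand-new interpreter: pkg_state got
    # here via pickle, not by inheriting parent memory as with "fork".
    print("building", pkg_state["name"])

if __name__ == "__main__":
    ctx = multiprocessing.get_context("spawn")
    proc = ctx.Process(target=child, args=({"name": "zlib"},))
    proc.start()
    proc.join()
```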
2020-11-12 12:26:23 -08:00
Scott Wittenburg
fbbd71d3d7 Pipelines: Compare target family instead of architecture (#19884)
In compiler bootstrapping pipelines, we add an artificial dependency
between jobs for packages to be built with a bootstrapped compiler
and the job building the compiler.  To find the right bootstrapped
compiler for each spec, we compared not only the compiler spec to
that required by the package spec, but also the architectures of
the compiler and package spec.

But this prevented us from finding the bootstrapped compiler for a
spec in cases where the architecture of the compiler wasn't exactly
the same as the spec's.  For example, a gcc@4.8.5 might have
bootstrapped a compiler with haswell as the architecture, while the
spec had broadwell.  By comparing the families instead of the
architecture itself, we know that we can build the zlib for broadwell
with the gcc for haswell.
2020-11-12 10:46:15 -08:00
Greg Becker
527a81b469 Keep output machine readable using spack find --format in an env (#19698)
Currently, full JSON output is the only machine readable option for `spack find`
in an environment.

`spack find --format` is also designed to be machine readable, but we print extra
headers in environments.

- [x] don't print headers in `spack find` output when in an environment
2020-11-11 22:13:51 -08:00
Satish Balay
33469414a5 fix typo wrt target=graviton (#19865)
* fix typo wrt target=graviton

This fixes the spack build on an aarch64 box

* update archspec hash
2020-11-11 19:29:13 -06:00
Massimiliano Culpo
e80276cd14 spack env deactivate/spack unload: demote warning message to debug message (#19864) 2020-11-11 12:02:03 -08:00
Adam J. Stewart
d1ca322aef Restore spack checksum verbosity (#19480) 2020-11-11 11:26:17 -06:00
Peter Scheibel
9d5f4f9c6f Binary caching: fix buildcache list (multiple invocations) (#19848)
When invoking "buildcache list" multiple times, the command was
reporting no specs in the cache the second time around. The
presence of an up-to-date index was causing the internal
representation to be left uninitialized.
2020-11-10 23:24:18 -08:00
Greg Becker
0183a51c6c tutorial cmd: fix gpg invocation (#19829) 2020-11-09 21:09:36 -08:00
Adam J. Stewart
228a4d353c Fix minor typo in function comment (#19804) 2020-11-09 20:25:45 -05:00
Emir İşman
98e709da3b [docs] getting_started.rst: fix typo (#19815) 2020-11-09 16:31:53 -06:00
Todd Gamblin
4092c90b57 commands: add spack tutorial command (#19808)
Added a command to set up Spack for our tutorial at
https://spack-tutorial.readthedocs.io.

The command does some common operations we need first-time users to do.
Specifically:

- checks out a particular branch of Spack
- deletes spurious configuration in `~/.spack` that might be
  left over from prior parts of the tutorial
- adds a mirror and trusts its public key
2020-11-09 12:47:08 +01:00
Greg Becker
8b96e10ecc Remove hardcoded version numbers from container logic (#19716)
Previously, we hardcoded a list of Spack versions which could be used by the containerize command.

This PR removes that list. It's a maintenance burden when cutting a release, and prevents older versions of Spack from creating containers to be used by newer versions.
2020-11-05 18:59:44 +01:00
Tamara Dahlgren
619eb6c08b bug fix: Display error when curl is missing even in non-debug mode (#19695) 2020-11-04 15:47:08 -08:00
Shahzeb Siddiqui
75e73d7fcc documentation: fix formatting of code-block section (#19693) 2020-11-03 12:15:46 +01:00
Todd Gamblin
ecc3bfd484 Bugfix - hashing: don't recompute full_hash or build_hash (#19672)
There was an error introduced in #19209 where `full_hash()` and
`build_hash()` are called on older specs that we've read in from the DB;
older specs may not be able to compute these hashes (e.g. if patches
used in computing the full_hash have been removed).

When serializing a Spec, we want to generate the full/build hash when
possible, but we need a mechanism to skip it for Specs that have
themselves been read from YAML (and may not support this).

To get around this ambiguity and to fix the issue, we:

- Add an attribute to the spec called `_hashes_final` that is `True`
  if we can't lazily compute `build_hash` and `full_hash`.
- Set `_hashes_final` to `False` for new specs (i.e., lazily
  computing hashes is ok)
- Set `_hashes_final` to `True` for concrete specs read in via
  `from_node_dict`, as it may be too late to recompute hashes.
- Compute and write out all hashes in `node_dict_with_hashes` *if
  possible*.

Effectively what this means is that we can round-trip specs that are
missing `_build_hash` and `_full_hash` without recomputing them, but for
all new specs, we'll compute them and store them. So Spack should work
fine with old DBs now.
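
A condensed sketch of that rule (attribute and method names follow the
message above; the hash computation is stubbed out):

```python
class Spec:
    def __init__(self):
        self._hashes_final = False  # new spec: lazy recomputation is ok
        self._build_hash = None
        self._full_hash = None

    @classmethod
    def from_node_dict(cls, node):
        spec = cls()
        spec._build_hash = node.get("build_hash")
        spec._full_hash = node.get("full_hash")
        spec._hashes_final = True   # may be too late to recompute these
        return spec

    def _compute_hash(self, kind):
        return "<%s>" % kind        # stub for the real hash computation

    def node_dict_with_hashes(self):
        if not self._hashes_final:  # compute and store *if possible*
            self._build_hash = self._compute_hash("build_hash")
            self._full_hash = self._compute_hash("full_hash")
        node = {}
        if self._build_hash:
            node["build_hash"] = self._build_hash
        if self._full_hash:
            node["full_hash"] = self._full_hash
        return node
```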
2020-11-02 13:21:11 -08:00