Commit graph

3841 commits

Author SHA1 Message Date
Patrick Gartung
4dc67b79aa
Replace direct call to patchelf with get_existing_elf_rpaths which handles exceptions. (#14929)
* Replace direct call to patchelf with get_existing_elf_rpaths which handles exceptions.

* Remove unused patchelf definition.

* Convert to set.
2020-02-13 11:34:20 -06:00
Todd Gamblin
a7b43f1015 spack python: add -m option to run modules as scripts
It's often useful to run a module with `python -m`, e.g.:

    python -m pyinstrument script.py

Running a python script this way was hard, though, as `spack python` did
not have a similar `-m` option.  This PR adds a `-m` option to `spack
python` so that we can do things like this:

    spack python -m pyinstrument ./test.py

This makes it easy to write a script that uses a small part of Spack and
then profile it.  Previously the easiest way to do this was to write a
custom Spack command, which is often overkill.
2020-02-12 16:45:41 -08:00
Todd Gamblin
c56c4b334d bugfix: spack -V should use working_dir() instead of git -C
- `git -C` doesn't work on git before 1.8.5
- `working_dir` gets us the same effect
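A minimal sketch of the pattern, assuming a `working_dir` context manager like the one in `llnl.util.filesystem` (a self-contained stand-in is included below):

```python
import os
import subprocess
from contextlib import contextmanager

@contextmanager
def working_dir(path):
    """Stand-in for llnl.util.filesystem.working_dir."""
    old = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(old)

def describe(spack_prefix):
    # Same effect as `git -C <prefix> describe --tags`, but works with
    # git releases older than 1.8.5.
    with working_dir(spack_prefix):
        return subprocess.check_output(
            ["git", "describe", "--tags"], universal_newlines=True
        ).strip()
```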
2020-02-11 16:52:06 -08:00
Massimiliano Culpo
357786ce6b
Spack find: fix queries that specify dependencies (#14757)
Fixes #10019

If multiple instances of a package were installed in a single
instance of Spack, and they differed in terms of dependencies, then
"spack find" would not distinguish specs based on their dependencies.
For example if two instances of X were installed, one with Y and one
with Z, then "spack find X ^Y" would display both instances of X.
2020-02-10 11:22:21 -08:00
Todd Gamblin
6d2e6e1f4d Merge branch 'releases/v0.13' into develop 2020-02-09 15:51:39 -08:00
Todd Gamblin
a8d5c6ccf2 version bump: 0.13.4 2020-02-07 16:51:44 -06:00
Massimiliano Culpo
010f9451c9 bugfix: make _source_single_file work in venvs (#14569)
Using `sys.executable` to run Python in a sub-shell doesn't always work in a virtual environment as the `sys.executable` Python is not necessarily compatible with any loaded spack/other virtual environment.

- revert use of sys.executable to print out subshell environment (#14496)
- try instead to use an available python, then if there *is not* one, use `sys.executable`
- this addresses RHEL8 (where there is no `python`) and the `PYTHONHOME` issue in a simpler way
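A rough sketch of the fallback order described above (the helper name is hypothetical, and `shutil.which` stands in for however Spack locates executables):

```python
import sys
from shutil import which

def python_for_subshell():
    # Prefer a python found on PATH (RHEL8 may only provide `python3`);
    # fall back to sys.executable only when neither is available.
    return which("python") or which("python3") or sys.executable
```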
2020-02-07 16:36:23 -06:00
Adam J. Stewart
f9f28e8fba Fix use of sys.executable for module/env commands (#14496)
* Fix use of sys.executable for module/env commands

* Fix unit tests

* More consistent quotation, less duplication

* Fix import syntax
2020-02-07 16:36:18 -06:00
Sajid Ali
4da8f7fcef RHEL8 bugfix for module_cmd (#14349) 2020-02-07 16:35:57 -06:00
Jeffrey Salmond
5397d500c8 Remove extensions from view in the correct order (#12961)
When removing packages from a view, extensions were being deactivated
in an arbitrary order. Extensions must be deactivated in preorder
traversal (dependents before dependencies), so when this order was
violated the view update would fail.

This commit ensures that views deactivate extensions based on a
preorder traversal and adds a test for it.
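An illustrative toy example of the ordering requirement (not the actual view code): a preorder traversal yields each package before its dependencies, so dependents are always deactivated first.

```python
def preorder(pkg, deps, seen=None):
    """Yield pkg, then its dependencies, depth-first (dependents first)."""
    seen = set() if seen is None else seen
    if pkg in seen:
        return
    seen.add(pkg)
    yield pkg
    for dep in deps.get(pkg, ()):
        for child in preorder(dep, deps, seen):
            yield child

deps = {"py-foo": ["py-setuptools"], "py-setuptools": ["python"], "python": []}
print(list(preorder("py-foo", deps)))  # ['py-foo', 'py-setuptools', 'python']
```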
2020-02-07 16:12:20 -06:00
Todd Gamblin
b442b21751 bugfix: hashes should use ordered dictionaries (#14390)
Despite trying very hard to keep dicts out of our hash algorithm, we seem
to still accidentally add them in ways that the tests can't catch. This
can cause errors when hashes are not computed deterministically.

This fixes an error we saw with Python 3.5, where dictionary iteration
order is random.  In this instance, we saw a bug when reading Spack
environment lockfiles -- The load would fail like this:

```
...
File "/sw/spack/lib/spack/spack/environment.py", line 1249, in concretized_specs
  yield (s, self.specs_by_hash[h])
KeyError: 'qcttqplkwgxzjlycbs4rfxxladnt423p'
```

This was because the hashes differed depending on whether we wrote `path`
or `module` first when recomputing the build hash as part of reading a
Spack lockfile.  We can fix it by ensuring a deterministic iteration order.

- [x] Fix two places (one that caused an issue, and one that did
  not... yet) where our to_node_dict-like methods were using regular python
  dicts.

- [x] Also add a check that statically analyzes our to_node_dict
  functions and flags any that use Python dicts.

The test found the two errors fixed here, specifically:

```
E       AssertionError: assert [] == ['Use syaml_dict instead of ...pack/spack/spec.py:1495:28']
E         Right contains more items, first extra item: 'Use syaml_dict instead of dict at /Users/gamblin2/src/spack/lib/spack/spack/spec.py:1495:28'
E         Full diff:
E         - []
E         + ['Use syaml_dict instead of dict at '
E         +  '/Users/gamblin2/src/spack/lib/spack/spack/spec.py:1495:28']
```

and

```
E       AssertionError: assert [] == ['Use syaml_dict instead of ...ack/architecture.py:359:15']
E         Right contains more items, first extra item: 'Use syaml_dict instead of dict at /Users/gamblin2/src/spack/lib/spack/spack/architecture.py:359:15'
E         Full diff:
E         - []
E         + ['Use syaml_dict instead of dict at '
E         +  '/Users/gamblin2/src/spack/lib/spack/spack/architecture.py:359:15']
```
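A toy demonstration of the failure mode, with `OrderedDict` standing in for Spack's `syaml_dict`: hashing a serialized node is only deterministic when key order is fixed.

```python
import hashlib
import json
from collections import OrderedDict

def node_hash(node):
    return hashlib.sha1(json.dumps(node).encode("utf-8")).hexdigest()

a = OrderedDict([("path", "/opt/gcc"), ("module", None)])
b = OrderedDict([("module", None), ("path", "/opt/gcc")])
print(node_hash(a) == node_hash(b))  # False: insertion order changes the hash
```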
2020-02-07 16:11:06 -06:00
Oliver Breitwieser
22c9f5cbd8
Allow installing unsigned binary packages (#11107)
This commit introduces a `--no-check-signature` option for
`spack install` so that unsigned packages can be installed. It is
off by default (signatures required).
2020-02-06 18:59:16 -08:00
Matt Belhorn
b43f658c39
Adds fma and vsx features to entire power arch family. (#14759)
VSX AltiVec extensions are supported by PowerISA from v2.06 (Power7+), but might
not be listed in features.

FMA has been supported by PowerISA since Power1, but might not be listed in
features.

This commit adds these features to all the power ISA family sets.
2020-02-06 16:42:05 +01:00
Andrew W Elble
4accc78409
Git fetching: add option to remove submodules (#14370)
Add an optional 'submodules_delete' field to Git versions in Spack
packages that allows them to remove specific submodules.

For example: the nervanagpu submodule has become unavailable for the
PyTorch project (see issue 19457 at
https://github.com/pytorch/pytorch/issues/). Removing this submodule
allows 0.4.1 to build.
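A hypothetical package recipe sketch using the new field; the class name, tag, and submodule path are placeholders, while `submodules_delete` is the field added by this change.

```python
from spack import *

class PytorchLike(Package):
    """Placeholder package illustrating `submodules_delete`."""

    git = "https://github.com/pytorch/pytorch.git"

    version("0.4.1", tag="v0.4.1", submodules=True,
            submodules_delete=["third_party/nervanagpu"])  # drop the dead submodule
```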
2020-02-03 19:02:45 -08:00
Patrick Gartung
5ad44477b2
buildcache list: restore original behavior of allowing constraints like @version. (#14732) 2020-02-03 13:40:14 -06:00
Patrick Gartung
ab36008635
binary_distribution: Initialize _cached_specs at the module level and only search the mirrors in get_spec if spec is not in _cached_specs. (#14714)
* Initialize _cached_specs at the file level and check for spec in it before searching mirrors in try_download_spec.

* Make _cached_specs a set to avoid duplicates

* Fix packaging test

* Ignore build_cache in stage when spec.yaml files are downloaded.
2020-01-31 20:08:47 -06:00
Todd Gamblin
c029c8ff89 spack -V is now more descriptive for dev branches
`spack -V` previously always returned the version of spack from
`spack.spack_version`.  This gives us a general idea of what version
users are on, but if they're on `develop` or on some branch, we have to
ask more questions.

This PR makes `spack -V` check whether this instance of Spack is a git
repository, and if it is, it appends useful information from `git
describe --tags` to the version.  Specifically, it adds:

  - number of commits since the last release tag
  - abbreviated (but unique) commit hash

So, if you're on `develop` you might get something like this:

    $ spack -V
    0.13.3-912-3519a1762

This means you're on commit 3519a1762, which is 912 commits ahead of
the 0.13.3 release.

If you are on a release branch, or if you are using a tarball of Spack,
you'll get the usual `spack.spack_version`:

    $ spack -V
    0.13.3

This should help when asking users what version they are on, since a lot
of people use the `develop` branch.
2020-01-31 20:59:21 +01:00
Adam J. Stewart
09e318fc84
Document how to use Spack to replace Homebrew/Conda (#13083)
* Document how to use Spack to replace Homebrew/Conda
* Initial draft; can iterate more as features become available
2020-01-31 19:31:14 +01:00
Massimiliano Culpo
9635ff3d20
spack containerize generates containers from envs (#14202)
This PR adds a new command to Spack:
```console
$ spack containerize -h
usage: spack containerize [-h] [--config CONFIG]

creates recipes to build images for different container runtimes

optional arguments:
  -h, --help       show this help message and exit
  --config CONFIG  configuration for the container recipe that will be generated
```
which takes an environment with an additional `container` section:
```yaml
spack:
  specs:
  - gromacs build_type=Release 
  - mpich
  - fftw precision=float
  packages:
    all:
      target: [broadwell]

  container:
    # Select the format of the recipe e.g. docker,
    # singularity or anything else that is currently supported
    format: docker
    
    # Select from a valid list of images
    base:
      image: "ubuntu:18.04"
      spack: prerelease

    # Additional system packages that are needed at runtime
    os_packages:
    - libgomp1
```
and turns it into a `Dockerfile` or a Singularity definition file, for instance:
```Dockerfile
# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-bionic:prerelease as builder

# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&&  (echo "spack:" \
&&   echo "  specs:" \
&&   echo "  - gromacs build_type=Release" \
&&   echo "  - mpich" \
&&   echo "  - fftw precision=float" \
&&   echo "  packages:" \
&&   echo "    all:" \
&&   echo "      target:" \
&&   echo "      - broadwell" \
&&   echo "  config:" \
&&   echo "    install_tree: /opt/software" \
&&   echo "  concretization: together" \
&&   echo "  view: /opt/view") > /opt/spack-environment/spack.yaml

# Install the software, remove unecessary deps and strip executables
RUN cd /opt/spack-environment && spack install && spack autoremove -y
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
    xargs file -i | \
    grep 'charset=binary' | \
    grep 'x-executable\|x-archive\|x-sharedlib' | \
    awk -F: '{print $1}' | xargs strip -s


# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
    spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh

# Bare OS image to run the installed executables
FROM ubuntu:18.04

COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh

RUN apt-get -yqq update && apt-get -yqq upgrade                                   \
 && apt-get -yqq install libgomp1 \
 && rm -rf /var/lib/apt/lists/*

ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
```
2020-01-30 17:19:55 -08:00
Patrick Gartung
ed501eaab2
Bypass build_cache/index.html read when trying to download spec.yaml for concretized spec. (#14698)
* Add binary_distribution::get_spec which takes concretized spec
Add binary_distribution::try_download_specs for downloading of spec.yaml files to cache
get_spec is used by package::try_install_from_binary_cache to download only the spec.yaml
for the concretized spec if it exists.
2020-01-30 16:06:50 -06:00
Patrick Gartung
12a99f4a2d
Use non-mutable default for names in binary_distribution::get_specs call (#14696)
* Use non-mutable default for names

* Make suggested change
2020-01-30 15:17:55 -06:00
Peter Scheibel
7b2895109c
Document how to add conditional dependencies (#14694)
* add short docs section on conditional dependencies
* add reference to spec syntax
* add note that conditional dependencies can save time
2020-01-30 12:34:54 -08:00
Peter Scheibel
b2adcdb389
Bugfix: put environment lock in the right place (#14692)
Locate the environment lock in the hidden environment directory
rather than the root of the environment.
2020-01-30 11:13:36 -08:00
Patrick Gartung
23a7feb917
Limit the number of spec files downloaded to find matches for buildcaches (#14659)
* Limit the number of spec files downloaded to find matches
2020-01-30 10:56:10 -06:00
Todd Gamblin
3519a17624 specs: avoid traversing specs when parsing
The Spec parser currently calls `spec.traverse()` after every parse, in
order to set the platform if it's not set.  We don't need to do a full
traverse -- we can just check the platform as new specs are parsed.

This takes about a second off the time required to import all packages in
Spack (from 8s to 7s).

- [x] simplify platform-setting logic in `SpecParser`.
2020-01-29 21:15:58 -08:00
Todd Gamblin
a2f8a2321d repo: avoid unnecessary spec parsing in filename_for_package_name()
`filename_for_package_name()` and `dirname_for_package_name()`
automatically construct a Spec from their arguments, which adds a fair
amount of overhead to importing lots of packages.  Removing this removes
about 11% of the runtime of importing all packages in Spack (9s -> 8s).

- [x] `filename_for_package_name()` and `dirname_for_package_name()` now
  take a string `pkg_name` arguments instead of specs.
2020-01-29 21:15:58 -08:00
Peter Scheibel
85ef1be780
environments: synchronize read and uninstall (#14676)
* `Environment.__init__` is now synchronized with all writing operations
* `spack uninstall` now synchronizes its updates to any associated environment
  * A side effect of this is that the environment is no longer updated piecemeal as specs are uninstalled - all specs are removed from the environment before they are uninstalled
2020-01-29 17:22:44 -08:00
Tamara Dahlgren
60ed6d2012
bugfix: correct exception message matching in tests (#14655)
This commit makes two fundamental corrections to tests:
1) Changes 'matches' to the correct 'match' argument for 'pytest.raises' (for all affected tests except those checking for 'SystemExit');
2) Replaces the 'match' argument for tests expecting 'SystemExit' (since the exit code is retained instead) with 'capsys' error message capture.

Both changes are needed to ensure the associated exception message is actually checked.
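A generic pytest illustration of the two patterns (not Spack's actual tests): `match` checks the exception message as a regex, while `SystemExit` cases read the captured error output instead.

```python
import pytest

def fail_with_message():
    raise ValueError("no such spec")

def test_match_argument():
    # 'match' is a regex checked against the exception message
    with pytest.raises(ValueError, match="no such spec"):
        fail_with_message()

def test_systemexit_with_capsys(capsys):
    # SystemExit only carries an exit code, so check the printed error instead
    with pytest.raises(SystemExit):
        print("error: no such spec")
        raise SystemExit(1)
    assert "no such spec" in capsys.readouterr().out
```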
2020-01-28 22:57:26 -08:00
t-karatsu
f7ec09d30b
Fujitsu compiler: Defining option that is always added. (#14657) 2020-01-28 21:02:40 -06:00
Peter Scheibel
69feea280d
env: synchronize updates to environments (#14621)
Updates to environments were not multi-process safe, which prevented them from taking advantage of parallel builds as implemented in #13100.  This is a minimal set of changes to enable `spack install` in an environment to be parallelized:

- [x] add an internal lock, stored in the `.spack-env` directory, 
      to synchronize updates to `spack.yaml` and `spack.lock`
- [x] add `Environment.write_transaction` interface for this lock
- [x] makes use of `Environment.write_transaction` in `install`, 
      `add`, and `remove` commands

- `uninstall` is not synchronized yet; that is left for a future PR.
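A hedged sketch of the usage pattern, assuming the environment is obtained via `spack.environment.read()`; `write_transaction` is the interface added here.

```python
import spack.environment as ev

def add_spec_safely(env_name, spec_str):
    env = ev.read(env_name)
    # Serialize updates to spack.yaml / spack.lock against other spack
    # processes operating on the same environment.
    with env.write_transaction():
        env.add(spec_str)
        env.write()
```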
2020-01-28 17:26:26 -08:00
Glenn Johnson
48a12c8773 Note about Intel compiler segfault with long paths (#14652)
This PR adds a note about segfaults with the Intel compiler when the
install paths are long and the dependencies many.
2020-01-28 14:57:06 -06:00
Greg Becker
52ab2421bb
Fix handling of filter_file exceptions (#14651) 2020-01-28 12:49:26 -08:00
Andrew W Elble
3f5bed2e36 make the new 'spack load' faster (#14628)
Before, `time spack load singularity`:
4.129u 0.346s 0:04.47 99.7%	0+0k 0+8io 0pf+0w

After, `time spack load singularity`:
0.844u 0.319s 0:01.16 99.1%	0+0k 0+16io 0pf+0w
2020-01-27 20:53:52 -08:00
Owen Solberg
f58004e436 fix spack env loads example (#14558) 2020-01-27 20:49:53 -08:00
Andrew W Elble
d86816bc1a Fix: hash-based references to upstream specs (#14629)
Spack commands referring to upstream-installed specs by hash have
been broken since 6b619da (merged September 2019), which added a new
Database function specifically for parsing hashes from command-line
specs; this function was inappropriately attempting to acquire locks
on upstream databases.

This PR updates the offending function to avoid locking upstream
databases and also updates associated tests to catch regression
errors: the upstream database created for these tests was not
explicitly set as an upstream (i.e. initialized with upstream=True)
so it was not guarding against inappropriate accesses.
2020-01-27 18:25:23 -08:00
Patrick Gartung
7badd69d1e
Package source ID cannot be determined when the url can't be extrapolated for older version. (#14237) 2020-01-27 20:10:01 -06:00
Patrick Gartung
d0523ca087
Follow the example of spack arch (#14642) 2020-01-27 20:04:48 -06:00
Patrick Gartung
0ce4eef256
Only set tcl default. Remove lmod default. (#14640) 2020-01-27 14:55:09 -06:00
Patrick Gartung
9ffa053f18
Fix bug introduced by pull request 14467 being merged (#14639)
* Fix bug introduced by pull request 14467 being merged

* Only filter on platform and OS
2020-01-27 14:17:15 -06:00
Patrick Gartung
893b0792e4
Set module_roots in test/config/config.yaml to defaults. (#14517) 2020-01-27 14:03:15 -06:00
Massimiliano Culpo
b9629c36f2 Unified environment modifications in config files (#14372)
* Unified environment modifications in config files

fixes #13357

This commit factors all the code involved in
the validation (schema) and parsing of environment modifications
from configuration files into a single place. The factored-out
code is then used for module files and compiler configuration.

Attributes were separated by dashes in `compilers.yaml` files and
by underscores in `modules.yaml` files. This PR unifies the syntax
on attributes separated by underscores.

Unit testing of environment modifications in compilers
has been refactored and simplified.
2020-01-27 08:40:47 -08:00
Adam J. Stewart
eb79c82cba Fix Python version compatibility tests for vermin 0.10.0 (#14632) 2020-01-26 22:31:13 -08:00
Patrick Gartung
d2098d337a
When spack install checks for buildcaches only add urls for current arch. (#14467) 2020-01-25 21:15:12 -06:00
Adam J. Stewart
dcd8d7a620 Add spack config list command for tab completion (#14474)
* Add spack config list command for tab completion
* Update tab completion scripts
2020-01-24 17:28:20 -08:00
Massimiliano Culpo
4d7d657366 bugfix: make _source_single_file work in venvs (#14569)
Using `sys.executable` to run Python in a sub-shell doesn't always work in a virtual environment as the `sys.executable` Python is not necessarily compatible with any loaded spack/other virtual environment.

- revert use of sys.executable to print out subshell environment (#14496)
- try instead to use an available python, then if there *is not* one, use `sys.executable`
- this addresses RHEL8 (where there is no `python`) and the `PYTHONHOME` issue in a simpler way
2020-01-24 16:49:45 -08:00
Seth R. Johnson
ca6e75c9f6 Use Spack target architecture to determine OpenBLAS target (#14380)
The OpenBLAS target is now determined automatically upon inspection of
`TargetList.txt`. If the spack target is a generic architecture family
(like x86_64 or aarch64) the DYNAMIC_ARCH setting is used
instead of targeting a specific microarchitecture.
2020-01-24 15:19:05 +01:00
Todd Gamblin
04a6a55cf8
commands: add simple spack commands --update-completion argument (#14607)
Instead of another script, this adds a simple argument to `spack
commands` that updates the completion script.  Developers can now just
run:

    spack commands --update-completion

This should make it simpler for developers to remember to run this
*before* the tests fail.  Also, this version tab-completes.
2020-01-23 14:48:06 -08:00
Greg Becker
c9e01ff9d7 shell support: spack load no longer needs modules (#14062)
Previously the `spack load` command was a wrapper around `module load`. This required some bootstrapping of modules to make `spack load` work properly.

With this PR, the `spack` shell function handles the environment modifications necessary to add packages to your user environment. This removes the dependence on environment modules or lmod and removes the requirement to bootstrap spack (beyond using the setup-env scripts).

Included in this PR is support for macOS when using Apple's System Integrity Protection (SIP), which is enabled by default in modern macOS versions. SIP clears the `LD_LIBRARY_PATH` and `DYLD_LIBRARY_PATH` variables on process startup for executables that live in `/usr` (but not `/usr/local`), `/System`, `/bin`, and `/sbin`, among other system locations. Spack cannot know the `LD_LIBRARY_PATH` of the calling process when executed using `/bin/sh` and `/usr/bin/python`. The `spack` shell function now manually forwards these two variables, if they are present, as `SPACK_<VAR>` and recovers those values on startup.

- [x] spack load/unload no longer delegate to modules
- [x] refactor user_environment modification calculations
- [x] update documentation for spack load/unload

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2020-01-22 22:36:02 -08:00
Adam J. Stewart
11f2b61261 Use spack commands --format=bash to generate shell completion (#14393)
This PR adds a `--format=bash` option to `spack commands` to
auto-generate the Bash programmable tab completion script. It can be
extended to work for other shells.

Progress:

- [x] Fix bug in superclass initialization in `ArgparseWriter`
- [x] Refactor `ArgparseWriter` (see below)
- [x] Ensure that output of old `--format` options remains the same
- [x] Add `ArgparseCompletionWriter` and `BashCompletionWriter`
- [x] Add `--aliases` option to add command aliases
- [x] Standardize positional argument names
- [x] Tests for `spack commands --format=bash` coverage
- [x] Tests to make sure `spack-completion.bash` stays up-to-date
- [x] Tests for `spack-completion.bash` coverage
- [x] Speed up `spack-completion.bash` by caching subroutine calls

This PR also necessitates a significant refactoring of
`ArgparseWriter`. Previously, `ArgparseWriter` was mostly a single
`_write` method which handled everything from extracting the information
we care about from the parser to formatting the output. Now, `_write`
only handles recursion, while the information extraction is split into a
separate `parse` method, and the formatting is handled by `format`. This
allows subclasses to completely redefine how the format will appear
without overriding all of `_write`.

Co-Authored-by: Todd Gamblin <tgamblin@llnl.gov>
2020-01-22 21:31:12 -08:00
Todd Gamblin
8011fedd9c bugfix: gpg2 is called 'gpg' on macOS
The gpg2 command isn't always around; it's sometimes called gpg.  This is
the case with the brew-installed version, and it's breaking our tests.

- [x] Look for both 'gpg2' and 'gpg' when finding the command
- [x] If we find 'gpg', ensure the version is 2 or higher
- [x] Add tests for version detection.
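A rough sketch of the detection logic (the helper name and version regex are illustrative): look for `gpg2` first, fall back to `gpg`, and accept it only if it reports major version 2 or higher.

```python
import re
import subprocess
from shutil import which

def find_gpg():
    for name in ("gpg2", "gpg"):
        path = which(name)
        if not path:
            continue
        out = subprocess.check_output([path, "--version"],
                                      universal_newlines=True)
        # e.g. "gpg (GnuPG) 2.2.27" on the first line
        match = re.search(r"\(GnuPG\)\s+(\d+)\.", out)
        if match and int(match.group(1)) >= 2:
            return path
    return None
```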
2020-01-22 17:34:31 -08:00
Massimiliano Culpo
74266ea789 tests: removed code duplication (#14596)
- [x] Factored the fixture `testing_gpg_directory` out to a common place and renamed it
      `mock_gnupghome`
- [x] Removed altogether the function `has_gnupg2`

For `has_gnupg2`, since we were not trying to parse the version from the output of:
```console
$ gpg2 --version
```
this is effectively equivalent to checking whether `spack.util.gpg.GPG.gpg()` was found. If we need to ensure the version is `^2.X` it's probably better to do it in `spack.util.gpg.GPG.gpg()` than in a separate function.
2020-01-22 14:04:16 -08:00
Todd Gamblin
2eadfa24e9
bugfix: hashes should use ordered dictionaries (#14390)
Despite trying very hard to keep dicts out of our hash algorithm, we seem
to still accidentally add them in ways that the tests can't catch. This
can cause errors when hashes are not computed deterministically.

This fixes an error we saw with Python 3.5, where dictionary iteration
order is random.  In this instance, we saw a bug when reading Spack
environment lockfiles -- The load would fail like this:

```
...
File "/sw/spack/lib/spack/spack/environment.py", line 1249, in concretized_specs
  yield (s, self.specs_by_hash[h])
KeyError: 'qcttqplkwgxzjlycbs4rfxxladnt423p'
```

This was because the hashes differed depending on whether we wrote `path`
or `module` first when recomputing the build hash as part of reading a
Spack lockfile.  We can fix it by ensuring a deterministic iteration order.

- [x] Fix two places (one that caused an issue, and one that did
  not... yet) where our to_node_dict-like methods were using regular python
  dicts.

- [x] Also add a check that statically analyzes our to_node_dict
  functions and flags any that use Python dicts.

The test found the two errors fixed here, specifically:

```
E       AssertionError: assert [] == ['Use syaml_dict instead of ...pack/spack/spec.py:1495:28']
E         Right contains more items, first extra item: 'Use syaml_dict instead of dict at /Users/gamblin2/src/spack/lib/spack/spack/spec.py:1495:28'
E         Full diff:
E         - []
E         + ['Use syaml_dict instead of dict at '
E         +  '/Users/gamblin2/src/spack/lib/spack/spack/spec.py:1495:28']
```

and

```
E       AssertionError: assert [] == ['Use syaml_dict instead of ...ack/architecture.py:359:15']
E         Right contains more items, first extra item: 'Use syaml_dict instead of dict at /Users/gamblin2/src/spack/lib/spack/spack/architecture.py:359:15'
E         Full diff:
E         - []
E         + ['Use syaml_dict instead of dict at '
E         +  '/Users/gamblin2/src/spack/lib/spack/spack/architecture.py:359:15']
```
2020-01-21 23:36:10 -08:00
Scott Wittenburg
8283d87f6a pipelines: spack ci command with env-based workflow (#12854)
Rework Spack's continuous integration workflow to be environment-based.

- Add the `spack ci` command, which replaces the many scripts in `bin/`

- `spack ci` decouples the CI workflow from the spack instance:
  - CI is defined in a spack environment
  - environment is in its own (single) git repository, separate from Spack
  - spack instance used to run the pipeline is up to the user
  - A new `gitlab-ci` section in environments allows users to configure how
    specs in the environment should be mapped to runners
  - Compilers can be bootstrapped in the new pipeline workflow

- Add extensive documentation on pipelines (see `pipelines.rst` for further details)
- Add extensive tests for pipeline code
2020-01-21 22:35:18 -08:00
Dr. Christian Tacke
5eed196f74 Use util.url.join for URLs in GNU Mirrors / reorder Mirrors (#14395)
* Reorder GNU mirrors (#14395)

As @adamjstewart commented in #14395, GNU suggests to use
their mirror. So reorder the mirror to the top.

GNU Doc: https://www.gnu.org/prep/ftp.en.html

* Use spack.util.url.join for URLs in GNU mirrors (#14395)

One should not use os.path.join for URLs, as it only
works correctly on POSIX systems.

Instead, use spack.util.url.join,
so that every part of Spack uses the same URL-joining method.
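A quick demonstration of why `os.path.join` is unsuitable for URLs (using `ntpath` to show the Windows behavior); `spack.util.url.join` is the portable alternative named above.

```python
import ntpath  # os.path as it behaves on Windows

print(ntpath.join("https://ftp.gnu.org/gnu", "hello"))
# prints https://ftp.gnu.org/gnu\hello -- a backslash, not a URL separator
```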
2020-01-21 13:14:38 -06:00
Adam J. Stewart
808c80d65a
Fix use of sys.executable for module/env commands (#14496)
* Fix use of sys.executable for module/env commands

* Fix unit tests

* More consistent quotation, less duplication

* Fix import syntax
2020-01-16 15:46:18 -06:00
Adam J. Stewart
3cd6938d80
Fix typo in modules docstring (#14521) 2020-01-15 15:16:12 -06:00
Adam J. Stewart
240a9e6284
Fix parsing of rocketmq URL (#14490) 2020-01-14 11:59:10 -06:00
Tamara Dahlgren
eb7a4e1029 Fixes #14402 (#14483)
Check if patchelf is executable, not binary, in case a site is wrapping it.
2020-01-13 13:00:14 -08:00
Jeffrey Salmond
6b3e173331 Remove extensions from view in the correct order (#12961)
When removing packages from a view, extensions were being deactivated
in an arbitrary order. Extensions must be deactivated in preorder
traversal (dependents before dependencies), so when this order was
violated the view update would fail.

This commit ensures that views deactivate extensions based on a
preorder traversal and adds a test for it.
2020-01-08 15:52:39 -08:00
Tim Haines
2028687efe spack.compilers.clang: add new version check (#14365) 2020-01-08 07:13:36 +01:00
eugeneswalker
7546ca6d4d bugfix: Issue #14346, buildcache create s3 push fails when package w same DAG hash already exists at mirror (#14412) 2020-01-07 12:40:37 -06:00
Massimiliano Culpo
08d0267c9a Spack can automatically remove unused specs (#13534)
* Spack can uninstall unused specs

fixes #4382

Added an option to spack uninstall that removes all unused specs i.e.
build dependencies or transitive dependencies that are left
in the store after the specs that pulled them in have been removed.

* Moved the functionality to its own command

The command has been named 'spack autoremove' to follow the naming used
for the same functionality by other widely known package managers i.e.
yum and apt.

* Speed-up autoremoving specs by not locking and re-reading the scratch DB

* Make autoremove work directly on Spack's store

* Added unit tests for the new command

* Display a terser output to the user

* Renamed the "autoremove" command "gc"

Following discussion there's more consensus around
the latter name.

* Preserve root specs in env contexts

* Instead of preserving specs, restrict gc to the active environment

* Added docs

* Added a unit test for gc within an environment

* Updated copyright to 2020

* Updated documentation according to review

Rephrased a couple of sentences, added references to
`spack find` and dependency types.

* Updated function naming and docstrings

* Simplified computation of unused specs

Since the new approach uses private attributes of the DB
it has been coded as a method of that class rather than a
freestanding function.
2020-01-07 08:16:54 -08:00
Adam J. Stewart
adffa45264 Reference spack help --spec in spack spec --help 2020-01-06 00:20:19 -08:00
Sajid Ali
c3c1cf13e7 RHEL8 bugfix for module_cmd (#14349) 2020-01-02 15:30:11 -06:00
Adam J. Stewart
3f190a432e
MKL: set appropriate CMake env vars (#14274) 2020-01-02 12:41:42 -06:00
Todd Gamblin
6e828206b6 refactor: cleanup imports in spec.py
The imports in `spec.py` are getting to be pretty unwieldy.

- [x] Remove all of the `import from` style imports and replace them with
  `import` or `import as`
- [x] Remove a number names that were exported by `spack.spec` that
  weren't even in `spack.spec`
2020-01-02 08:05:00 -08:00
Todd Gamblin
9ed34f686f bugfix: cdash tests shouldn't modify working directory
The latest cdash test creates a local cdash_reports directory, but it
should do that in a tmpdir.
2020-01-02 00:01:15 -08:00
Todd Gamblin
4beb9fc5d3 tests: improved spack test command line options
Previously, `spack test` automatically passed all of its arguments to
`pytest -k` if no options were provided, and to `pytest` if they were.
`spack test -l` also provided a list of test filenames, but they didn't
really let you completely narrow down which tests you wanted to run.

Instead of trying to do our own weird thing, this passes `spack test`
args directly to `pytest`, and omits the implicit `-k`.  This means we
can now run, e.g.:

```console
$ spack test spec_syntax.py::TestSpecSyntax::test_ambiguous
```

This wasn't possible before, because we'd pass the fully qualified name
to `pytest -k` and get an error.

Because `pytest` doesn't have the greatest ability to list tests, I've
tweaked the `-l`/`--list`, `-L`/`--list-long`, and `-N`/`--list-names`
options to `spack test` so that they help you understand the names
better.  You can combine these options with `-k` or other arguments to do
pretty powerful searches.

This one makes it easy to get a list of names so you can run tests in
different orders (something I find useful for debugging `pytest` issues):

```console
$ spack test --list-names -k "spec and concretize"
cmd/env.py::test_concretize_user_specs_together
concretize.py::TestConcretize::test_conflicts_in_spec
concretize.py::TestConcretize::test_find_spec_children
concretize.py::TestConcretize::test_find_spec_none
concretize.py::TestConcretize::test_find_spec_parents
concretize.py::TestConcretize::test_find_spec_self
concretize.py::TestConcretize::test_find_spec_sibling
concretize.py::TestConcretize::test_no_matching_compiler_specs
concretize.py::TestConcretize::test_simultaneous_concretization_of_specs
spec_dag.py::TestSpecDag::test_concretize_deptypes
spec_dag.py::TestSpecDag::test_copy_concretized
```

You can combine any list option with keywords:

```console
$ spack test --list -k microarchitecture
llnl/util/cpu.py  modules/lmod.py
```

```console
$ spack test --list-long -k microarchitecture
llnl/util/cpu.py::
    test_generic_microarchitecture

modules/lmod.py::TestLmod::
    test_only_generic_microarchitectures_in_root
```

Or just list specific files:

```console
$ spack test --list-long cmd/test.py
cmd/test.py::
    test_list                       test_list_names_with_pytest_arg
    test_list_long                  test_list_with_keywords
    test_list_long_with_pytest_arg  test_list_with_pytest_arg
    test_list_names
```

Hopefully this stuff will help with debugging test issues.

- [x] make `spack test` send args directly to `pytest` instead of trying
  to do fancy things.
- [x] rework `--list`, `--list-long`, and add `--list-names` to make
  searching for tests easier.
- [x] make it possible to mix Spack's list args with `pytest` args
  (they're just fancy parsing around `pytest --collect-only`)
- [x] add docs
- [x] add tests
- [x] update spack completion
2020-01-01 21:37:02 -08:00
Glenn Johnson
1ac0c51dad Modify create clue list so R packages are detected (#12277)
R packages can contain configure scripts so R needs to be before
autotools in the clue list.
2019-12-31 16:44:31 -06:00
Todd Gamblin
ffc91bd86e tests: move mock config.yaml files to common directory
Test configuration files (except modules.yaml) were in the root level of
test/data, but should really just be in their own directory.  The absence
of modules.yaml was also breaking module tests if we got module
preferences after tests started, as the mock modules.yaml was not in the
test directory.
2019-12-31 13:48:01 -08:00
Todd Gamblin
3017584c48 config: remove all module-scope calls to spack.config.get()
This avoids parsing modules.yaml on startup.
2019-12-31 13:48:01 -08:00
Todd Gamblin
9cc013cc0f modules: make the module hook more robust
The module hook would previously fail if there were no enabled module types.

- Instead of looking for a `KeyError`, default to empty list when the
  config variable is not present.

- Convert lambdas to real functions for clarity.
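A small sketch of the pattern (the configuration dictionary and key names are illustrative): treat a missing setting as an empty list rather than catching `KeyError`.

```python
config = {}  # e.g. no modules.yaml present at all

# Before: config["modules"]["enable"] would raise KeyError here.
# After: an absent setting simply means "no module types enabled".
enabled = config.get("modules", {}).get("enable", [])
for module_type in enabled:
    print("would regenerate", module_type, "modules")
```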
2019-12-31 13:48:01 -08:00
Todd Gamblin
58cb4e5241 hooks: remove pre_run hook to improve startup time.
- Remove legacy yaml_version_check() hook
- Remove the pre_run hook from `hook/__init__.py` and `main.py`

We want to discourage the use of pre-run hooks because they have to run
at startup.  To keep Spack fast, we should do things like this lazily
instead of in hooks that require spidering directories full of modules.
2019-12-31 13:48:01 -08:00
Todd Gamblin
4af6303086
copyright: update copyright dates for 2020 (#14328) 2019-12-30 22:36:56 -08:00
Todd Gamblin
98ad6e39b5
bugfix: add required fixture for CDash authentication test (#14325) 2019-12-30 16:55:05 -08:00
Zack Galbreath
cc96758fdc Add support for authenticated CDash uploads (#14200) 2019-12-30 15:54:56 -08:00
Todd Gamblin
65ef6d5dcb refactor: rename mock_config fixture to mock_low_high_config
This avoids confusion with mock_configuration.
2019-12-30 13:01:31 -08:00
Todd Gamblin
b2e9696052 argparse: lazily construct common arguments
Continuing to shave small bits of time off startup --
`spack.cmd.common.arguments` constructs many `Args` objects at module
scope, which has to be done for all commands that import it.  Instead of
doing this at load time, do it lazily.

- [x] construct Args objects lazily

- [x] remove the module-scoped argparse fixture

- [x] make the mock config scope set dirty to False by default (like the
  regular scope)

This *seems* to reduce load time slightly
2019-12-30 13:01:31 -08:00
Todd Gamblin
e7dc8a2bea tests: refactor tests to avoid persistent global state
Previously, fixtures like `config`, `database`, and `store` were
module-scoped, but frequently used as test function arguments.  These
fixtures swap out globals on setup and restore them on teardown.  As
function arguments, they would do the right set-up, but they'd leave the
global changes in place for the whole module the function lived in.  This
meant that if you use `config` once, other functions in the same module
would inadvertently inherit the mock Spack configuration, as it would
only be torn down once all tests in the module were complete.

In general, we should module- or session-scope the *STATE* required for
these global objects (as it's expensive to create), but we shouldn't
module- or session-scope the activation/use of them, or things can get
really confusing.

- [x] Make generic context managers for global-modifying fixtures.

- [x] Make session- and module-scoped fixtures that ONLY build filesystem
  state and create objects, but do not swap out any variables.

- [x] Make separate function-scoped fixtures that *use* the session
  scoped fixtures and actually swap out (and back in) the global
  variables like `config`, `database`, and `store`.

These changes make it so that global changes are *only* ever alive for a
single test function, and we don't get weird dependencies because a
global fixture hasn't been destroyed.
2019-12-30 13:01:31 -08:00
Todd Gamblin
e839432472 tests: make env tests that use configs non-destructive
Environment tests pushed config scopes but didn't properly remove them.

- [x] use `with env:` context manager instead of `env.prepare_config_scopes()`
2019-12-30 13:01:31 -08:00
Todd Gamblin
8e8235043d package_prefs: move class-level cache to PackagePref instance
`PackagePrefs` has had a class-level cache of data from `packages.yaml` for
a long time, but it complicates testing and leads to subtle errors,
especially now that we frequently manipulate custom config scopes and
environments.

Moving the cache to instance-level doesn't slow down concretization or
the test suite, and it just caches for the life of a `PackagePrefs`
instance (i.e., for a single concretization) so we don't need to worry
about global state anymore.

- [x] Remove class-level caches from `PackagePrefs`
- [x] Add a cached _spec_order object on each `PackagePrefs` instance
- [x] Remove all calls to `PackagePrefs.clear_caches()`
2019-12-30 13:01:31 -08:00
Todd Gamblin
4d6462247e
externals: avoid importing jinja2 on startup (#14308)
Jinja2 costs a tenth to a few tenths of a second to import, so we should avoid importing it on startup.

- [x] only import jinja2 within functions
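A generic illustration of the lazy-import pattern (assuming jinja2 is installed): the import cost is paid only when a template is actually rendered, not at startup.

```python
def render_template(template_string, **context):
    import jinja2  # deferred so startup doesn't pay for this import
    return jinja2.Template(template_string).render(**context)

print(render_template("Hello {{ name }}!", name="Spack"))
```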
2019-12-28 14:43:23 -08:00
Todd Gamblin
2dafeaf819
bugfix: colify_table should not revert to 1 column for non-tty (#14307)
Commands like `spack blame` were printing poorly when redirected to files,
as colify reverts to a single column when redirected.  This works for
list data but not tables.

- [x] Force a table by always passing `tty=True` from `colify_table()`
2019-12-28 11:26:31 -08:00
Dr. Christian Tacke
8ee75e19bd Improve info variant header (#14275)
In "spack info" the Variants header currently has two blank
lines under it. That's too much. It looks like the actual
content belongs to something else.

Instead underline the headers to make things more obvious.
2019-12-27 15:21:15 -08:00
Todd Gamblin
61b4ad1837
tests: finish removing pyqver from the repository (#14294)
Remove a few remaining mentions of the pyqver package, which was removed in #14289.
2019-12-24 17:37:03 -08:00
Massimiliano Culpo
d333e14721 tests: check min required python version with vermin (#14289)
This commit removes the `python_version.py` unit test module
and the vendored dependencies `pyqver2.py` and `pyqver3.py`.
It substitutes them with an equivalent check done using
`vermin` that is run as a separate workflow via Github Actions.

This allows us to delete 2 vendored dependencies that are unmaintained
and substitutes them with a maintained tool.

Also, updates the list of vendored dependencies.
2019-12-24 09:28:33 -08:00
t-karatsu
1e2c9d960c a64fx: fix typo in GCC flags (#14286) 2019-12-24 17:45:03 +01:00
Todd Gamblin
7652d1a4c1
Merge branch 'releases/v0.13' into develop 2019-12-24 01:04:41 -08:00
Todd Gamblin
231e237764
version bump: 0.13.3 2019-12-23 23:48:11 -08:00
Todd Gamblin
e22d3250dd
performance: don't read spec.yaml files twice in view regeneration
`ViewDescriptor.regenerate()` calls `get_all_specs()`, which reads
`spec.yaml` files, which is slow.  It's fine to do this once, but
`view.remove_specs()` *also* calls it immediately afterwards.

- [x] Pass the result of `get_all_specs()` as an optional parameter to
  `view.remove_specs()` to avoid reading `spec.yaml` files twice.
2019-12-23 23:18:47 -08:00
Todd Gamblin
e3939b0c72
performance: don't recompute hashes when regenerating environments
`ViewDescriptor.regenerate()` was copying specs and stripping build
dependencies, which clears `_hash` and other cached fields on concrete
specs, which causes a bunch of YAML hashes to be recomputed.

- [x] Preserve the `_hash` and `_normal` fields on stripped specs, as
  these will be unchanged.
2019-12-23 23:18:46 -08:00
Todd Gamblin
f013687397
performance: reduce system calls required for remove_dead_links
`os.path.exists()` will report False if the target of a symlink doesn't
exist, so we can avoid a costly call to realpath here.
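An illustrative helper (not the exact Spack function): for a symlink, `os.path.exists()` already returns `False` when the target is missing, so no `realpath()` call is needed.

```python
import os

def is_dead_link(path):
    # exists() follows the link, so a broken symlink reports False
    return os.path.islink(path) and not os.path.exists(path)

def remove_dead_links(root):
    for name in os.listdir(root):
        path = os.path.join(root, name)
        if is_dead_link(path):
            os.unlink(path)
```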
2019-12-23 23:18:46 -08:00
Todd Gamblin
79ddf6cf0d
performance: only regenerate env views once in spack install
`spack install` previously concretized, wrote the entire environment
out, regenerated views, then wrote and regenerated views
again. Regenerating views is slow, so ensure that we only do that once.

- [x] add an option to env.write() to skip view regeneration

- [x] add a note on whether regenerate_views() shouldn't just be a
  separate operation -- not clear if we want to keep it as part of write
  to ensure consistency, or take it out to avoid performance issues.
2019-12-23 23:18:45 -08:00
Todd Gamblin
be6d7db2a8
performance: add read transactions for install_all() and install()
Environments need to read the DB a lot when installing all specs.

- [x] Put a read transaction around `install_all()` and `install()`
  to avoid repeated locking
2019-12-23 23:18:45 -08:00
Todd Gamblin
d87ededddc
lock transactions: avoid redundant reading in write transactions
Our `LockTransaction` class was reading overly aggressively.  In cases
like this:

```
1  with spack.store.db.read_transaction():
2    with spack.store.db.write_transaction():
3      ...
```

The `ReadTransaction` on line 1 would read in the DB, but the
WriteTransaction on line 2 would read in the DB *again*, even though we
had a read lock the whole time.  `WriteTransaction`s were only
considering nested writes to decide when to read, but they didn't know
when we already had a read lock.

- [x] `Lock.acquire_write()` now returns `False` in cases where we already had
       a read lock.
2019-12-23 23:18:45 -08:00
Todd Gamblin
b3a5f2e3c3
lock transactions: ensure that nested write transactions write
If a write transaction was nested inside a read transaction, it would not
write properly on release, e.g., in a sequence like this, inside our
`LockTransaction` class:

```
1  with spack.store.db.read_transaction():
2    with spack.store.db.write_transaction():
3      ...
4  with spack.store.db.read_transaction():
   ...
```

The WriteTransaction on line 2 had no way of knowing that its
`__exit__()` call was the last *write* in the nesting, and it would skip
calling its write function.

The `__exit__()` call of the `ReadTransaction` on line 1 wouldn't know
how to write, and the file would never be written.

The DB would be correct in memory, but the `ReadTransaction` on line 4
would re-read the whole DB assuming that other processes may have
modified it.  Since the DB was never written, we got stale data.

- [x] Make `Lock.release_write()` return `True` whenever we release the
      *last write* in a nest.
2019-12-23 23:18:44 -08:00
Todd Gamblin
98577e3af5
lock transactions: fix non-transactional writes
Lock transactions were actually writing *after* the lock was
released. The code was looking at the result of `release_write()` before
writing, then writing based on whether the lock was released.  This is
pretty obviously wrong.

- [x] Refactor `Lock` so that a release function can be passed to the
      `Lock` and called *only* when a lock is really released.

- [x] Refactor `LockTransaction` classes to use the release function
  instead of checking the return value of `release_read()` / `release_write()`
2019-12-23 23:18:44 -08:00
Todd Gamblin
a85b9070cb
performance: avoid repeated DB locking on view generation
`ViewDescriptor.regenerate()` checks repeatedly whether packages are
installed and also does a lot of DB queries.  Put a read transaction
around the whole thing to avoid repeatedly locking and unlocking the DB.
2019-12-23 23:18:44 -08:00
Todd Gamblin
91ea90c253
performance: speed up spack find in environments
`Environment.added_specs()` has a loop around calls to
`Package.installed()`, which can result in repeated DB queries.  Optimize
this with a read transaction in `Environment`.
2019-12-23 23:17:59 -08:00
Todd Gamblin
5bdba98837
performance: spack spec should use a read transaction with -I
`spack spec -I` queries the database for installation status and should
use a read transaction around calls to `Spec.tree()`.
2019-12-23 23:17:59 -08:00
Todd Gamblin
cbf8553406
concretization: improve performance by avoiding database locks
Checks for deprecated specs were repeatedly taking out read locks on the
database, which can be very slow.

- [x] put a read transaction around the deprecation check
2019-12-23 23:17:58 -08:00
Todd Gamblin
48befd67b5
performance: memoize spack.architecture.get_platform()
`get_platform()` is pretty expensive and can be called many times in a
spack invocation.

- [x] memoize `get_platform()`
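A generic memoization sketch; Spack has its own `memoized` decorator in `llnl.util.lang`, but `functools.lru_cache` shows the same effect.

```python
import functools

@functools.lru_cache(maxsize=None)
def get_platform():
    print("detecting platform (expensive)...")
    return "linux"

get_platform()  # detection runs once
get_platform()  # subsequent calls return the cached result
```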
2019-12-23 23:17:58 -08:00
Sajid Ali
37eac1a226
use sys.executable instead of python in _source_single_file (#14252) 2019-12-23 23:16:30 -08:00
Peter Josef Scheibel
639156130b
Patch fetching: remove unnecessary argument 2019-12-23 23:03:10 -08:00
Peter Josef Scheibel
587c650b88
Mirrors: skip attempts to fetch BundlePackages
BundlePackages use a noop fetch strategy. The mirror logic was assuming
that the fetcher had a resource to cache after performing a fetch. This adds
a special check to skip caching if the stage is associated with a
BundleFetchStrategy. Note that this should allow caching resources
associated with BundlePackages.
2019-12-23 23:03:10 -08:00
Peter Josef Scheibel
d71428622b
Mirrors: avoid re-downloading patches
When updating a mirror, Spack was re-retrieving all patches (since the
fetch logic for patches is separate). This updates the patch logic to
allow the mirror logic to avoid this.
2019-12-23 23:03:10 -08:00
Peter Josef Scheibel
a69b3c85b0
Mirrors: perform checksum of fetched sources
Since cache_mirror does the fetch itself, it also needs to do the
checksum itself if it wants to verify that the source stored in the
mirror is valid. Note that this isn't strictly required because fetching
(including from mirrors) always separately verifies the checksum.
2019-12-23 23:03:09 -08:00
Peter Josef Scheibel
98b498c671
Mirrors: fix cosmetic symlink targets
The targets for the cosmetic paths in mirrors were being calculated
incorrectly as of fb3a3ba: the symlinks used relative paths as targets,
and the relative path was computed relative to the wrong directory.
2019-12-23 23:03:09 -08:00
Peter Josef Scheibel
64209dda97
Allow repeated invocations of 'mirror create'
When creating a cosmetic symlink for a resource in a mirror, remove
it if it already exists. The symlink is removed in case the logic to
create the symlink has changed.
2019-12-23 23:03:09 -08:00
Paul Ferrell
c15e55c668
mirror bug fixes: symlinks, duplicate patch names, and exception handling (#13789)
* Some packages (e.g. mpfr at the time of this patch) can have patches
  with the same name but different contents (which apply to different
  versions of the package). This appends part of the patch hash to the
  cache file name to avoid conflicts.
* Some exceptions which occur during fetching are not a subclass of
  SpackError and therefore do not have a 'message' attribute. This
  updates the logic for mirroring a single spec (add_single_spec)
  to produce an appropriate error message in that case (where before
  it failed with an AttributeError)
* In various circumstances, a mirror can contain the universal storage
  path but not a cosmetic symlink; in this case it would not generate
  a symlink. Now "spack mirror create" will create a symlink for any
  package that doesn't have one.
2019-12-23 23:03:03 -08:00
Todd Gamblin
d7f2a32887 performance: don't read spec.yaml files twice in view regeneration
`ViewDescriptor.regenerate()` calls `get_all_specs()`, which reads
`spec.yaml` files, which is slow.  It's fine to do this once, but
`view.remove_specs()` *also* calls it immediately afterwards.

- [x] Pass the result of `get_all_specs()` as an optional parameter to
  `view.remove_specs()` to avoid reading `spec.yaml` files twice.
2019-12-23 18:36:56 -08:00
Todd Gamblin
78b84e4ade performance: don't recompute hashes when regenerating environments
`ViewDescriptor.regenerate()` was copying specs and stripping build
dependencies, which clears `_hash` and other cached fields on concrete
specs, which causes a bunch of YAML hashes to be recomputed.

- [x] Preserve the `_hash` and `_normal` fields on stripped specs, as
  these will be unchanged.
2019-12-23 18:36:56 -08:00
Todd Gamblin
9b90d7e801 performance: reduce system calls required for remove_dead_links
`os.path.exists()` will report False if the target of a symlink doesn't
exist, so we can avoid a costly call to realpath here.
2019-12-23 18:36:56 -08:00
Todd Gamblin
c83e365c59 performance: only regenerate env views once in spack install
`spack install` previously concretized, wrote the entire environment
out, regenerated views, then wrote and regenerated views
again. Regenerating views is slow, so ensure that we only do that once.

- [x] add an option to env.write() to skip view regeneration

- [x] add a note on whether regenerate_views() shouldn't just be a
  separate operation -- not clear if we want to keep it as part of write
  to ensure consistency, or take it out to avoid performance issues.
2019-12-23 18:36:56 -08:00
Todd Gamblin
0fb3280011 performance: add read transactions for install_all() and install()
Environments need to read the DB a lot when installing all specs.

- [x] Put a read transaction around `install_all()` and `install()`
  to avoid repeated locking
2019-12-23 18:36:56 -08:00
Todd Gamblin
6c9467e8c6 lock transactions: avoid redundant reading in write transactions
Our `LockTransaction` class was reading overly aggressively.  In cases
like this:

```
1  with spack.store.db.read_transaction():
2    with spack.store.db.write_transaction():
3      ...
```

The `ReadTransaction` on line 1 would read in the DB, but the
WriteTransaction on line 2 would read in the DB *again*, even though we
had a read lock the whole time.  `WriteTransaction`s were only
considering nested writes to decide when to read, but they didn't know
when we already had a read lock.

- [x] `Lock.acquire_write()` now returns `False` in cases where we already had
       a read lock.
2019-12-23 18:36:56 -08:00
Todd Gamblin
bb517fdb84 lock transactions: ensure that nested write transactions write
If a write transaction was nested inside a read transaction, it would not
write properly on release, e.g., in a sequence like this, inside our
`LockTransaction` class:

```
1  with spack.store.db.read_transaction():
2    with spack.store.db.write_transaction():
3      ...
4  with spack.store.db.read_transaction():
   ...
```

The WriteTransaction on line 2 had no way of knowing that its
`__exit__()` call was the last *write* in the nesting, and it would skip
calling its write function.

The `__exit__()` call of the `ReadTransaction` on line 1 wouldn't know
how to write, and the file would never be written.

The DB would be correct in memory, but the `ReadTransaction` on line 4
would re-read the whole DB assuming that other processes may have
modified it.  Since the DB was never written, we got stale data.

- [x] Make `Lock.release_write()` return `True` whenever we release the
      *last write* in a nest.
2019-12-23 18:36:56 -08:00
Todd Gamblin
eb8fc4f3be lock transactions: fix non-transactional writes
Lock transactions were actually writing *after* the lock was
released. The code was looking at the result of `release_write()` before
writing, then writing based on whether the lock was released.  This is
pretty obviously wrong.

- [x] Refactor `Lock` so that a release function can be passed to the
      `Lock` and called *only* when a lock is really released.

- [x] Refactor `LockTransaction` classes to use the release function
  instead of checking the return value of `release_read()` / `release_write()`
2019-12-23 18:36:56 -08:00
Todd Gamblin
779ac9fe3e performance: avoid repeated DB locking on view generation
`ViewDescriptor.regenerate()` checks repeatedly whether packages are
installed and also does a lot of DB queries.  Put a read transaction
around the whole thing to avoid repeatedly locking and unlocking the DB.
2019-12-23 18:36:56 -08:00
Sajid Ali
96063f9168 use sys.executable instead of python in _source_single_file (#14252) 2019-12-21 00:02:28 -08:00
Massimiliano Culpo
80495d83ed microarchitectures: fix ppc flags for clang (#14196) 2019-12-20 14:40:54 -08:00
Massimiliano Culpo
497fddfcb9 Fetching from URLs falls back to mirrors if they exist (#13881)
Users can now list mirrors of the main url in packages.

- [x] Instead of just a single `url` attribute, users can provide a list (`urls`) in the package, and these will be tried in order by the fetch strategy.

- [x] To handle one of the most common mirror cases, define a `GNUMirrorPackage` mixin to handle all the standard GNU mirrors.  GNU packages can set `gnu_mirror_path` to define the path within a mirror, and the mixin handles setting up all the requisite GNU mirror URLs (see the sketch below).

- [x] update all GNU packages in `builtin` to use the `GNUMirrorPackage` mixin.
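A hypothetical recipe sketch of the mixin usage; the package name, tarball path, and checksum are placeholders, while `GNUMirrorPackage` and `gnu_mirror_path` come from this change.

```python
from spack import *

class Hello(AutotoolsPackage, GNUMirrorPackage):
    """Toy GNU package using the standard mirror list."""

    # Path of the tarball within any GNU mirror.
    gnu_mirror_path = "hello/hello-2.10.tar.gz"

    version("2.10", sha256="0" * 64)  # placeholder checksum
```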
2019-12-20 14:32:18 -08:00
Chris Green
dc17d548c8 Add missing __init__.py under test, and correct bad file name from #13889. (#14228) 2019-12-19 17:27:53 -06:00
Todd Gamblin
af249d3cf6 package_sanity: add a test to enforce no nonexisting dependencies in builtin
We shouldn't allow packages to have missing dependencies in the mainline.

- [x] Add a test to enforce this.
2019-12-18 21:10:31 -08:00
Todd Gamblin
531f370e0d possible_dependencies() now reports missing dependencies
- Add an optional argument so that `possible_dependencies()` will report
  missing dependencies.
- Add a test to ensure it works.
- Ignore missing dependencies in `possible_dependencies()` by default.
2019-12-18 21:10:31 -08:00
Todd Gamblin
81b147cc0a package: add spack.package.possible_dependencies method
- this version allows getting possible dependencies of multiple packages
  or specs at once.

- New method handles calling `PackageBase.possible_dependencies` multiple
  times and passing `visited` dict around.
2019-12-18 21:10:31 -08:00
Todd Gamblin
a3799b2c7b performance: speed up spack find in environments
`Environment.added_specs()` has a loop around calls to
`Package.installed()`, which can result in repeated DB queries.  Optimize
this with a read transaction in `Environment`.
2019-12-18 16:07:28 -08:00
Todd Gamblin
0e9c8d236c performance: spack spec should use a read transaction with -I
`spack spec -I` queries the database for installation status and should
use a read transaction around calls to `Spec.tree()`.
2019-12-18 16:07:28 -08:00
Todd Gamblin
f73cdac731 concretization: improve performance by avoiding database locks
Checks for deprecated specs were repeatedly taking out read locks on the
database, which can be very slow.

- [x] put a read transaction around the deprecation check
2019-12-18 16:07:28 -08:00
Todd Gamblin
33335c9d0a performance: memoize spack.architecture.get_platform()
`get_platform()` is pretty expensive and can be called many times in a
spack invocation.

- [x] memoize `get_platform()`
2019-12-18 16:07:28 -08:00
Todd Gamblin
52ebc19b4e bugfix: don't fail if checking for "real" compiler version
Spack's "real" compiler version check doesn't understand a custom, user-defined compiler version.  However, if
the compiler's version check fails, you can't build anything with the
custom compiler.

- [x] Be more lenient: fall back to the custom compiler version and use
  it verbatim if the version check fails.
2019-12-18 11:37:59 -08:00
Todd Gamblin
4eb54b6358 bugfix: pgcc -V returns 2 on power machines
`pgcc -V` was failing on power machines because it returns 2 (despite
correctly printing version information).  On x86_64 machines the same
command returns 0 and doesn't cause an error.

- [x] Ignore return value of 2 for pgcc when doing a version check
2019-12-18 11:37:59 -08:00
Adam J. Stewart
18c2029fef Fix argparse rST parsing of help messages (#14014)
Thanks!
2019-12-17 10:23:22 -08:00
Massimiliano Culpo
5eca4f1470 microarchitectures: readable names for AArch64 vendors (#13825)
Vendors for ARM come out of `/proc/cpuinfo` as hex numbers instead of readable strings.

- Add support for associating vendor names with the hex numbers.
- Also move these mappings from Python code to `microarchitectures.json`
- Move darwin feature name mappings to `microarchitectures.json` as well
2019-12-17 00:47:50 -08:00
Greg Becker
9f1d728646 match bootstrapped compiler to architecture (#14059) 2019-12-14 16:42:46 -08:00
Peter Scheibel
60580f5871 package hash: gracefully handle @when with non-string args (#14153)
* when constructing package hash, default to including a method in the content hash if we can't determine whether it would be included by examining the AST
* add a test for updated content-hash calculations
* refactor content hash tests to eliminate repeated lines
2019-12-14 14:31:39 -08:00
Zack Galbreath
0f5724e908 Split out CDash options to a separate help document (#13704)
Prevent `spack help install` from getting too cluttered with CDash-specific documentation.
2019-12-13 10:15:22 -08:00
Peter Josef Scheibel
8c2305e867 Patch fetching: remove unnecessary argument 2019-12-13 08:38:50 +01:00
Peter Josef Scheibel
8199f22e7c Mirrors: skip attempts to fetch BundlePackages
BundlePackages use a noop fetch strategy. The mirror logic was assuming
that the fetcher had a resource to cache after performing a fetch. This adds
a special check to skip caching if the stage is associated with a
BundleFetchStrategy. Note that this should allow caching resources
associated with BundlePackages.
2019-12-13 08:38:50 +01:00
Peter Josef Scheibel
b62ba7609d Mirrors: avoid re-downloading patches
When updating a mirror, Spack was re-retrieving all patches (since the
fetch logic for patches is separate). This updates the patch logic to
allow the mirror logic to avoid this.
2019-12-13 08:38:50 +01:00
Peter Josef Scheibel
754dd6eb1f Mirrors: perform checksum of fetched sources
Since cache_mirror does the fetch itself, it also needs to do the
checksum itself if it wants to verify that the source stored in the
mirror is valid. Note that this isn't strictly required because fetching
(including from mirrors) always separately verifies the checksum.
2019-12-13 08:38:50 +01:00
Peter Josef Scheibel
b2cc50aa6a Mirrors: fix cosmetic symlink targets
The targets for the cosmetic paths in mirrors were being calculated
incorrectly as of fb3a3ba: the symlinks used relative paths as targets,
and the relative path was computed relative to the wrong directory.
2019-12-13 08:38:50 +01:00
Peter Josef Scheibel
b64f458102 Allow repeated invocations of 'mirror create'
When creating a cosmetic symlink for a resource in a mirror, remove
it if it already exists. The symlink is removed in case the logic to
create the symlink has changed.
2019-12-13 08:38:50 +01:00
Greg Becker
917224cb3c pytest: add __init__ files for all test subdirs (#13889)
* pytest: add __init__ files for all test subdirs

* add licenses to empty files

* Fix Sphinx warning message about comment within docstring

* Further fixes to Sphinx docstring
2019-12-10 15:19:09 -06:00
Massimiliano Culpo
59d222c172 Better error message when setting unknown variants during concretization (#13128)
fixes #13124
2019-12-10 11:21:45 -08:00
Omar Padron
0592c58030
Follow up/11117 fixes and testing (#13607)
* fix docstring in generate_package_index() referring to "public" keys as "signing" keys

* use explicit kwargs in push_to_url()

* simplify url_util.parse() per tgamblin's suggestion

* replace standardize_header_names() with the much simpler get_header()

* add some basic tests

* update s3_fetch tests

* update S3 list code to strip leading slashes from prefix

* correct minor warning regression introduced in #11117

* add more tests

* flake8 fixes

* add capsys fixture to mirror_crud test

* add get_header() tests

* use get_header() in more places

* incorporate review comments
2019-12-09 17:23:33 -05:00
Greg Becker
da9a562182 environments: allow 'add' command to add virtuals (#13787)
This PR allows virtual packages to be added to the specs list using
the add command.

Virtual packages are already allowed in named lists in spack
environments/stacks, and they are already allowed in the specs list
when added using the yaml directly.
2019-12-09 12:23:03 -08:00
Andras Wacha
b33b8a3e29 Apply URLFetchStrategy to ftp:// and ftps:// url schemes (#13939)
* Apply URLFetchStrategy to ftp:// and ftps:// url schemes

* Corrected trailing whitespace error
2019-12-09 11:18:08 -06:00
George Hartzell
1d06949306 Tuneup docs re setting up sphinx for building docs (#14005)
I have, more than once, tried to install the list of things needed
to build the docs, only to discover that the list doesn't use Spack's
package names.  I'm tired of facepalming....

While I was there I touched up the prose about activating the new
Python packages; activating a python package doesn't add anything to
your PYTHONPATH, it links things into a directory that's *already* on
your PYTHONPATH.  Note that this all presupposes that you're using
that same python....
2019-12-08 16:22:25 -06:00
Axel Huebl
d705e96a63
Spec Header Dirs: Only first include/ (#13991)
* CUDA HeaderList: Unit Test

* Spec Header Dirs: Only first include/

Avoid matching recurringly nested include paths that usually
refer to internally shipped libraries in packages.
Example in CUDA Toolkit, shipping a libc++ fork internally
with libcu++ since 10.2.89:
`<prefix>/include/cuda/some/more/details/include/` or
`<prefix>/include/cuda/std/detail/libcxx/include`

regex: non-greedy first match of include

Co-Authored-By: Massimiliano Culpo <massimiliano.culpo@gmail.com>

* CUDA: Re-Enable 10.2.89 as Default
2019-12-06 23:47:03 -08:00