connect_timeout can be used to increase the time Spack waits for the
server to answer. This can be used to work around slow connections or
servers.
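A rough sketch of what the option controls (the helper below is
hypothetical; only the `connect_timeout` idea comes from the change):
```python
import urllib.request

def read_from_url(url, connect_timeout=10):
    # urlopen's `timeout` bounds how long we wait for the server to
    # answer; raising it works around slow connections or servers
    with urllib.request.urlopen(url, timeout=connect_timeout) as response:
        return response.read()
```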
Fixes #14700
* CudaPackage: add support for Tesla K80 and older CUDA
* Flake8 fixes
* Fix cuda_arch when no arch is set
* Fine-tune cuda_arch=37,50 supported CUDA versions
* CUDA 6.5+ supports SM_37
* Add @svenevs as a maintainer
The new build process, introduced in #13100, relies on a spec's dependents in addition to its dependencies. Loading a spec from a yaml file was not initializing the dependents.
- [x] populate dependents when loading from yaml
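A minimal sketch of the inversion involved (illustrative node objects, not Spack's real `Spec` class):
```python
class Node(object):
    def __init__(self, name, dependencies=()):
        self.name = name
        self.dependencies = list(dependencies)   # read from the yaml file
        self.dependents = {}                     # NOT stored in the yaml

def populate_dependents(nodes):
    """Invert dependency edges so each node also knows its dependents."""
    for node in nodes.values():
        for dep_name in node.dependencies:
            nodes[dep_name].dependents[node.name] = node

nodes = {"hdf5": Node("hdf5", ["zlib"]), "zlib": Node("zlib")}
populate_dependents(nodes)
assert "hdf5" in nodes["zlib"].dependents
```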
* Buildcache command: add install option -o/--otherarch
This will allow matching specs from other archs, for example
installing macOS buildcaches on linux hosts.
* spack commands --update-completion
args.specs is a list, which results in output like this:
```
eval `spack load --sh ['libxml2', 'xz']`
```
We want this instead:
```
eval `spack load --sh libxml2 xz`
```
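The fix boils down to joining the parsed spec names before printing (illustrative):
```python
specs = ['libxml2', 'xz']                        # what args.specs holds
print("spack load --sh {0}".format(" ".join(specs)))
# -> spack load --sh libxml2 xz
```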
This change stores packages' configure arguments during build and makes
use of them while refreshing module files. This fixes problems such as in
#10716.
* Emit a sensible error message if compiler's target is overly specific
fixes #14798, fixes #13733
Compiler specifications require a generic architecture family as
their target. This commit improves the error message that is
displayed to users if they edit compilers.yaml and use an overly
specific name.
The hashing logic looks for function calls that are Spack directives.
It expects that when a Spack directive is used that it is referenced
directly by name, and that the directive function is not itself
retrieved by calling another function. When the hashing logic
encountered a function call where the function was determined
dynamically, it would fail (attempting to access a name attribute
that does not happen to exist in this case).
This updates the hashing logic to filter out function calls where the
function is determined dynamically when looking for uses of Spack
directives.
Spack now requires an exact match of the compiler version
requested by the user. A loose constraint can be given to
Spack by using a version range instead of a concrete version
(e.g. `4.5:` instead of `4.5`).
Sometimes one needs to preserve the relative order of mtimes on
installed files, so it's better to just copy over all the metadata
from the source tree to the install tree. If permissions need
fixing, that will be done afterwards anyway.
One major use case is resource()s:
they're unpacked in one place and then copied to their
final place using install_tree(). If the resource is a
source tree using autoconf/automake, resetting mtimes
incorrectly might force unwanted autoconf/etc. calls.
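The core of the change, in stdlib terms (a sketch; Spack's `install_tree` lives in `llnl.util.filesystem`):
```python
import shutil

# shutil.copy2 preserves mtimes and other metadata for each file, and
# copytree applies copystat to the directories as well, so the relative
# order of timestamps in the source tree survives the install
shutil.copytree("src_tree", "install_tree", copy_function=shutil.copy2)
```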
If the mimetype returned from `file -h -b --mime-type` contains slashes
in its subtype, the tuple returned from `spack.relocate.mime_type` will
have more than two elements, which leads to errors.
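The fix is simply to cap the split so everything after the first slash stays together (a minimal sketch):
```python
def mime_type(output):
    # partition splits on the first '/' only, so a subtype that itself
    # contains slashes still yields exactly (type, subtype)
    mimetype, _, subtype = output.strip().partition("/")
    return mimetype, subtype

assert mime_type("text/x-shellscript") == ("text", "x-shellscript")
assert mime_type("weird/sub/type") == ("weird", "sub/type")
```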
Fixes #9394. Closes #13217.
## Background
Spack provides the ability to enable/disable parallel builds through two options: package `parallel` and configuration `build_jobs`. This PR changes the algorithm to allow multiple, simultaneous processes to coordinate the installation of the same spec (and specs with overlapping dependencies).
The `parallel` (boolean) property sets the default for its package though the value can be overridden in the `install` method.
Spack's current parallel builds are limited to build tools supporting `jobs` arguments (e.g., `Makefiles`). The number of jobs actually used is calculated as `min(config:build_jobs, # cores, 16)`, which can be overridden in the package or on the command line (i.e., `spack install -j <# jobs>`).
This PR adds support for distributed (single- and multi-node) parallel builds. The goals of this work include improving the efficiency of installing packages with many dependencies and reducing the repetition associated with concurrent installations of (dependency) packages.
## Approach
### File System Locks
Coordination between concurrent installs of overlapping packages to a Spack instance is accomplished through bottom-up dependency DAG processing and file system locks. The runs can be a combination of interactive and batch processes affecting the same file system. Exclusive prefix locks are required to install a package while shared prefix locks are required to check if the package is installed.
Failures are communicated through a separate exclusive prefix failure lock, for concurrent processes, combined with a persistent store, for separate, related build processes. The resulting file contains the failing spec to facilitate manual debugging.
### Priority Queue
Management of dependency builds changed from reliance on recursion to use of a priority queue where the priority of a spec is based on the number of its remaining uninstalled dependencies.
Using a queue required a change to dependency build exception handling with the most visible issue being that the `install` method *must* install something in the prefix. Consequently, packages can no longer get away with an install method consisting of `pass`, for example.
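A minimal sketch of the bottom-up processing (illustrative; the real installer keeps all tasks in the queue and reprioritizes, while this simplified version only enqueues packages once they are ready):
```python
import heapq
from collections import defaultdict

def install_order(deps):
    """Yield packages of a dependency DAG bottom-up.

    `deps` maps package -> set of direct dependencies; a package's
    priority is its count of uninstalled dependencies, and it becomes
    ready to build once that count drops to zero.
    """
    count = {pkg: len(d) for pkg, d in deps.items()}
    dependents = defaultdict(list)
    for pkg, d in deps.items():
        for dep in d:
            dependents[dep].append(pkg)
    ready = [pkg for pkg, c in count.items() if c == 0]
    heapq.heapify(ready)
    while ready:
        pkg = heapq.heappop(ready)
        yield pkg                        # "install" the package
        for parent in dependents[pkg]:   # dependents move closer to ready
            count[parent] -= 1
            if count[parent] == 0:
                heapq.heappush(ready, parent)

# dependencies come out before their dependents:
print(list(install_order({"mpich": set(), "hdf5": {"mpich"},
                          "netcdf": {"hdf5", "mpich"}})))
```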
## Caveats
- This still only parallelizes a single-rooted build. Multi-rooted installs (e.g., for environments) are TBD in a future PR.
Tasks:
- [x] Adjust package lock timeout to correspond to value used in the demo
- [x] Adjust database lock timeout to reduce contention on startup of concurrent
`spack install <spec>` calls
- [x] Replace (test) package's `install: pass` methods with file creation since post-install
`sanity_check_prefix` will otherwise error out with `Install failed .. Nothing was installed!`
- [x] Resolve remaining existing test failures
- [x] Respond to alalazo's initial feedback
- [x] Remove `bin/demo-locks.py`
- [x] Add new tests to address new coverage issues
- [x] Replace built-in package's `def install(..): pass` to "install" something
(i.e., only `apple-libunwind`)
- [x] Increase code coverage
* Buildcache creation: change the way the prefix is copied to the workdir.
* install_tree copies hardlinked files
* tarfile creates hardlinked files on extraction.
* create a temporary tarfile from prefix and extract it to workdir
* Use temp tarfile to move workdir to prefix to preserve hardlinks instead of copying
It's often useful to run a module with `python -m`, e.g.:
python -m pyinstrument script.py
Running a python script this way was hard, though, as `spack python` did
not have a similar `-m` option. This PR adds a `-m` option to `spack
python` so that we can do things like this:
spack python -m pyinstrument ./test.py
This makes it easy to write a script that uses a small part of Spack and
then profile it. Previously the easiest way to do this was to write a
custom Spack command, which is often overkill.
Fixes #10019
If multiple instances of a package were installed in a single
instance of Spack, and they differed in terms of dependencies, then
"spack find" would not distinguish specs based on their dependencies.
For example if two instances of X were installed, one with Y and one
with Z, then "spack find X ^Y" would display both instances of X.
Using `sys.executable` to run Python in a sub-shell doesn't always work in a virtual environment as the `sys.executable` Python is not necessarily compatible with any loaded spack/other virtual environment.
- revert use of sys.executable to print out subshell environment (#14496)
- try instead to use an available python, then if there *is not* one, use `sys.executable`
- this addresses RHEL8 (where there is no `python`) and the `PYTHONHOME` issue in a simpler way
When removing packages from a view, extensions were being deactivated
in an arbitrary order. Extensions must be deactivated in preorder
traversal (dependents before dependencies), so when this order was
violated the view update would fail.
This commit ensures that views deactivate extensions based on a
preorder traversal and adds a test for it.
Despite trying very hard to keep dicts out of our hash algorithm, we seem
to still accidentally add them in ways that the tests can't catch. This
can cause errors when hashes are not computed deterministically.
This fixes an error we saw with Python 3.5, where dictionary iteration
order is random. In this instance, we saw a bug when reading Spack
environment lockfiles -- The load would fail like this:
```
...
File "/sw/spack/lib/spack/spack/environment.py", line 1249, in concretized_specs
yield (s, self.specs_by_hash[h])
KeyError: 'qcttqplkwgxzjlycbs4rfxxladnt423p'
```
This was because the hashes differed depending on whether we wrote `path`
or `module` first when recomputing the build hash as part of reading a
Spack lockfile. We can fix it by ensuring a deterministic iteration order.
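The pattern that avoids the bug, sketched with the standard library (Spack uses its own `syaml_dict`, but `OrderedDict` shows the same idea):
```python
from collections import OrderedDict

def to_node_dict(path, module):
    # a plain dict gave no iteration-order guarantee on Python 3.5, so
    # 'path' and 'module' could serialize in either order and the node
    # could hash differently run to run; a fixed key order is reproducible
    node = OrderedDict()
    node["path"] = path
    node["module"] = module
    return node
```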
- [x] Fix two places (one that caused an issue, and one that did
not... yet) where our to_node_dict-like methods were using regular python
dicts.
- [x] Also add a check that statically analyzes our to_node_dict
functions and flags any that use Python dicts.
The test found the two errors fixed here, specifically:
```
E AssertionError: assert [] == ['Use syaml_dict instead of ...pack/spack/spec.py:1495:28']
E Right contains more items, first extra item: 'Use syaml_dict instead of dict at /Users/gamblin2/src/spack/lib/spack/spack/spec.py:1495:28'
E Full diff:
E - []
E + ['Use syaml_dict instead of dict at '
E + '/Users/gamblin2/src/spack/lib/spack/spack/spec.py:1495:28']
```
and
```
E AssertionError: assert [] == ['Use syaml_dict instead of ...ack/architecture.py:359:15']
E Right contains more items, first extra item: 'Use syaml_dict instead of dict at /Users/gamblin2/src/spack/lib/spack/spack/architecture.py:359:15'
E Full diff:
E - []
E + ['Use syaml_dict instead of dict at '
E + '/Users/gamblin2/src/spack/lib/spack/spack/architecture.py:359:15']
```
This commit introduces a `--no-check-signature` option for
`spack install` so that unsigned packages can be installed. It is
off by default (signatures required).
VSX AltiVec extensions are supported by PowerISA from v2.06 (Power7+), but might
not be listed in features.
FMA has been supported by PowerISA since Power1, but might not be listed in
features.
This commit adds these features to all the power ISA family sets.
Add an optional 'submodules_delete' field to Git versions in Spack
packages that allows them to remove specific submodules.
For example: the nervanagpu submodule has become unavailable for the
PyTorch project (see issue 19457 at
https://github.com/pytorch/pytorch/issues/). Removing this submodule
allows 0.4.1 to build.
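In a package, the new field might be used like this (a sketch; the submodule path is an assumption, not taken from the builtin recipe):
```python
from spack import *

class PyTorch(Package):
    homepage = "https://pytorch.org"
    git = "https://github.com/pytorch/pytorch.git"

    # submodules=True checks out all submodules; submodules_delete then
    # removes the listed ones (path within the repo assumed here)
    version('0.4.1', tag='v0.4.1', submodules=True,
            submodules_delete=['third_party/nervanagpu'])
```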
* Initialize _cached_specs at the file level and check for spec in it before searching mirrors in try_download_spec.
* Make _cached_specs a set to avoid duplicates
* Fix packaging test
* Ignore build_cache in stage when spec.yaml files are downloaded.
`spack -V` previously always returned the version of spack from
`spack.spack_version`. This gives us a general idea of what version
users are on, but if they're on `develop` or on some branch, we have to
ask more questions.
This PR makes `spack -V` check whether this instance of Spack is a git
repository, and if it is, it appends useful information from `git
describe --tags` to the version. Specifically, it adds:
- number of commits since the last release tag
- abbreviated (but unique) commit hash
So, if you're on `develop` you might get something like this:
$ spack -V
0.13.3-912-3519a1762
This means you're on commit 3519a1762, which is 912 commits ahead of
the 0.13.3 release.
If you are on a release branch, or if you are using a tarball of Spack,
you'll get the usual `spack.spack_version`:
$ spack -V
0.13.3
This should help when asking users what version they are on, since a lot
of people use the `develop` branch.
This PR adds a new command to Spack:
```console
$ spack containerize -h
usage: spack containerize [-h] [--config CONFIG]
creates recipes to build images for different container runtimes
optional arguments:
-h, --help show this help message and exit
--config CONFIG configuration for the container recipe that will be generated
```
which takes an environment with an additional `container` section:
```yaml
spack:
specs:
- gromacs build_type=Release
- mpich
- fftw precision=float
packages:
all:
target: [broadwell]
container:
# Select the format of the recipe e.g. docker,
# singularity or anything else that is currently supported
format: docker
# Select from a valid list of images
base:
image: "ubuntu:18.04"
spack: prerelease
# Additional system packages that are needed at runtime
os_packages:
- libgomp1
```
and turns it into a `Dockerfile` or a Singularity definition file, for instance:
```Dockerfile
# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-bionic:prerelease as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - gromacs build_type=Release" \
&& echo " - mpich" \
&& echo " - fftw precision=float" \
&& echo " packages:" \
&& echo " all:" \
&& echo " target:" \
&& echo " - broadwell" \
&& echo " config:" \
&& echo " install_tree: /opt/software" \
&& echo " concretization: together" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps and strip executables
RUN cd /opt/spack-environment && spack install && spack autoremove -y
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
xargs file -i | \
grep 'charset=binary' | \
grep 'x-executable\|x-archive\|x-sharedlib' | \
awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM ubuntu:18.04
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
RUN apt-get -yqq update && apt-get -yqq upgrade \
&& apt-get -yqq install libgomp1 \
&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
```
* Add binary_distribution::get_spec, which takes a concretized spec
Add binary_distribution::try_download_specs for downloading of spec.yaml files to cache
get_spec is used by package::try_install_from_binary_cache to download only the spec.yaml
for the concretized spec if it exists.
The Spec parser currently calls `spec.traverse()` after every parse, in
order to set the platform if it's not set. We don't need to do a full
traverse -- we can just check the platform as new specs are parsed.
This takes about a second off the time required to import all packages in
Spack (from 8s to 7s).
- [x] simplify platform-setting logic in `SpecParser`.
`filename_for_package_name()` and `dirname_for_package_name()`
automatically construct a Spec from their arguments, which adds a fair
amount of overhead to importing lots of packages. Removing this removes
about 11% of the runtime of importing all packages in Spack (9s -> 8s).
- [x] `filename_for_package_name()` and `dirname_for_package_name()` now
take a string `pkg_name` argument instead of a spec.
* `Environment.__init__` is now synchronized with all writing operations
* `spack uninstall` now synchronizes its updates to any associated environment
* A side effect of this is that the environment is no longer updated piecemeal as specs are uninstalled - all specs are removed from the environment before they are uninstalled
This commit makes two fundamental corrections to tests:
1) Changes 'matches' to the correct 'match' argument for 'pytest.raises' (for all affected tests except those checking for 'SystemExit');
2) Replaces the 'match' argument for tests expecting 'SystemExit' (since the exit code is retained instead) with 'capsys' error message capture.
Both changes are needed to ensure the associated exception message is actually checked.
Updates to environments were not multi-process safe, which prevented them from taking advantage of parallel builds as implemented in #13100. This is a minimal set of changes to enable `spack install` in an environment to be parallelized:
- [x] add an internal lock, stored in the `.spack-env` directory,
to synchronize updates to `spack.yaml` and `spack.lock`
- [x] add `Environment.write_transaction` interface for this lock
- [x] makes use of `Environment.write_transaction` in `install`,
`add`, and `remove` commands
- `uninstall` is not synchronized yet; that is left for a future PR.
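Commands can then wrap their updates roughly like this (a sketch of the interface described above, not the exact command code):
```python
def add_to_environment(env, user_spec):
    # the internal lock in .spack-env serializes concurrent writers, so
    # several `spack install`/`spack add` processes can't clobber each
    # other's spack.yaml / spack.lock updates
    with env.write_transaction():
        env.add(user_spec)
        env.write()
```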
Spack commands referring to upstream-installed specs by hash have
been broken since 6b619da (merged September 2019), which added a new
Database function specifically for parsing hashes from command-line
specs; this function was inappropriately attempting to acquire locks
on upstream databases.
This PR updates the offending function to avoid locking upstream
databases and also updates associated tests to catch regression
errors: the upstream database created for these tests was not
explicitly set as an upstream (i.e. initialized with upstream=True)
so it was not guarding against inappropriate accesses.
* Unified environment modifications in config files
fixes #13357
This commit factors all the code that is involved in
the validation (schema) and parsing of environment modifications
from configuration files into a single place. The factored out
code is then used for module files and compiler configuration.
Attributes were separated by dashes in `compilers.yaml` files and
by underscores in `modules.yaml` files. This PR unifies the syntax
on attributes separated by underscores.
Unit testing of environment modifications in compilers
has been refactored and simplified.
The OpenBLAS target is now determined automatically upon inspection of
`TargetList.txt`. If the spack target is a generic architecture family
(like x86_64 or aarch64) the DYNAMIC_ARCH setting is used
instead of targeting a specific microarchitecture.
Instead of another script, this adds a simple argument to `spack
commands` that updates the completion script. Developers can now just
run:
spack commands --update-completion
This should make it simpler for developers to remember to run this
*before* the tests fail. Also, this version tab-completes.
Previously the `spack load` command was a wrapper around `module load`. This required some bootstrapping of modules to make `spack load` work properly.
With this PR, the `spack` shell function handles the environment modifications necessary to add packages to your user environment. This removes the dependence on environment modules or lmod and removes the requirement to bootstrap spack (beyond using the setup-env scripts).
Included in this PR is support for MacOS when using Apple's System Integrity Protection (SIP), which is enabled by default in modern MacOS versions. SIP clears the `LD_LIBRARY_PATH` and `DYLD_LIBRARY_PATH` variables on process startup for executables that live in `/usr` (but not `/usr/local`, `/System`, `/bin`, and `/sbin`, among other system locations). Spack cannot know the `LD_LIBRARY_PATH` of the calling process when executed using `/bin/sh` and `/usr/bin/python`. The `spack` shell function now manually forwards these two variables, if they are present, as `SPACK_<VAR>` and recovers those values on startup.
- [x] spack load/unload no longer delegate to modules
- [x] refactor user_environment modification calculations
- [x] update documentation for spack load/unload
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This PR adds a `--format=bash` option to `spack commands` to
auto-generate the Bash programmable tab completion script. It can be
extended to work for other shells.
Progress:
- [x] Fix bug in superclass initialization in `ArgparseWriter`
- [x] Refactor `ArgparseWriter` (see below)
- [x] Ensure that output of old `--format` options remains the same
- [x] Add `ArgparseCompletionWriter` and `BashCompletionWriter`
- [x] Add `--aliases` option to add command aliases
- [x] Standardize positional argument names
- [x] Tests for `spack commands --format=bash` coverage
- [x] Tests to make sure `spack-completion.bash` stays up-to-date
- [x] Tests for `spack-completion.bash` coverage
- [x] Speed up `spack-completion.bash` by caching subroutine calls
This PR also necessitates a significant refactoring of
`ArgparseWriter`. Previously, `ArgparseWriter` was mostly a single
`_write` method which handled everything from extracting the information
we care about from the parser to formatting the output. Now, `_write`
only handles recursion, while the information extraction is split into a
separate `parse` method, and the formatting is handled by `format`. This
allows subclasses to completely redefine how the format will appear
without overriding all of `_write`.
Co-Authored-by: Todd Gamblin <tgamblin@llnl.gov>
The gpg2 command isn't always around; it's sometimes called gpg. This is
the case with the brew-installed version, and it's breaking our tests.
- [x] Look for both 'gpg2' and 'gpg' when finding the command
- [x] If we find 'gpg', ensure the version is 2 or higher
- [x] Add tests for version detection.
- [x] Factored to a common place the fixture `testing_gpg_directory`, renamed it as
`mock_gnupghome`
- [x] Removed altogether the function `has_gnupg2`
For `has_gnupg2`, since we were not trying to parse the version from the output of:
```console
$ gpg2 --version
```
this is effectively equivalent to checking whether `spack.util.gpg.GPG.gpg()` was found. If we need to ensure the version is `^2.X`, it's probably better to do it in `spack.util.gpg.GPG.gpg()` than in a separate function.
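The detection amounts to something like this (a stdlib sketch, not the exact `spack.util.gpg` code):
```python
import re
import shutil
import subprocess

def find_gpg():
    """Prefer 'gpg2'; accept 'gpg' only if it is version 2 or higher."""
    for name in ('gpg2', 'gpg'):
        path = shutil.which(name)
        if path is None:
            continue
        out = subprocess.check_output([path, '--version']).decode()
        match = re.search(r'GnuPG.* (\d+)\.', out)   # e.g. "gpg (GnuPG) 2.2.19"
        if match and int(match.group(1)) >= 2:
            return path
    raise RuntimeError('gpg (version 2 or higher) not found')
```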
Rework Spack's continuous integration workflow to be environment-based.
- Add the `spack ci` command, which replaces the many scripts in `bin/`
- `spack ci` decouples the CI workflow from the spack instance:
- CI is defined in a spack environment
- environment is in its own (single) git repository, separate from Spack
- spack instance used to run the pipeline is up to the user
- A new `gitlab-ci` section in environments allows users to configure how
specs in the environment should be mapped to runners
- Compilers can be bootstrapped in the new pipeline workflow
- Add extensive documentation on pipelines (see `pipelines.rst` for further details)
- Add extensive tests for pipeline code
* Reorder GNU mirrors (#14395)
As @adamjstewart commented in #14395, GNU suggests using
their mirror, so move the mirror to the top of the list.
GNU Doc: https://www.gnu.org/prep/ftp.en.html
* Use spack.util.url.join for URLs in GNU mirrors (#14395)
One should not use os.path.join for URLs, as it only works
on POSIX systems. Instead use spack.util.url.join, so every
part of Spack uses the same URL-joining method.
* Spack can uninstall unused specs
fixes #4382
Added an option to spack uninstall that removes all unused specs, i.e.
build dependencies or transitive dependencies that are left
in the store after the specs that pulled them in have been removed.
* Moved the functionality to its own command
The command has been named 'spack autoremove' to follow the naming used
for the same functionality by other widely known package managers, e.g.
yum and apt.
* Speed up autoremoving specs by not locking and re-reading the scratch DB
* Make autoremove work directly on Spack's store
* Added unit tests for the new command
* Display a terser output to the user
* Renamed the "autoremove" command "gc"
Following discussion there's more consensus around
the latter name.
* Preserve root specs in env contexts
* Instead of preserving specs, restrict gc to the active environment
* Added docs
* Added a unit test for gc within an environment
* Updated copyright to 2020
* Updated documentation according to review
Rephrased a couple of sentences, added references to
`spack find` and dependency types.
* Updated function naming and docstrings
* Simplified computation of unused specs
Since the new approach uses private attributes of the DB
it has been coded as a method of that class rather than a
freestanding function.
The imports in `spec.py` are getting to be pretty unwieldy.
- [x] Remove all of the `import from` style imports and replace them with
`import` or `import as`
- [x] Remove a number of names that were exported by `spack.spec` but
weren't even in `spack.spec`
Previously, `spack test` automatically passed all of its arguments to
`pytest -k` if no options were provided, and to `pytest` if they were.
`spack test -l` also provided a list of test filenames, but they didn't
really let you completely narrow down which tests you wanted to run.
Instead of trying to do our own weird thing, this passes `spack test`
args directly to `pytest`, and omits the implicit `-k`. This means we
can now run, e.g.:
```console
$ spack test spec_syntax.py::TestSpecSyntax::test_ambiguous
```
This wasn't possible before, because we'd pass the fully qualified name
to `pytest -k` and get an error.
Because `pytest` doesn't have the greatest ability to list tests, I've
tweaked the `-l`/`--list`, `-L`/`--list-long`, and `-N`/`--list-names`
options to `spack test` so that they help you understand the names
better. You can combine these options with `-k` or other arguments to do
pretty powerful searches.
This one makes it easy to get a list of names so you can run tests in
different orders (something I find useful for debugging `pytest` issues):
```console
$ spack test --list-names -k "spec and concretize"
cmd/env.py::test_concretize_user_specs_together
concretize.py::TestConcretize::test_conflicts_in_spec
concretize.py::TestConcretize::test_find_spec_children
concretize.py::TestConcretize::test_find_spec_none
concretize.py::TestConcretize::test_find_spec_parents
concretize.py::TestConcretize::test_find_spec_self
concretize.py::TestConcretize::test_find_spec_sibling
concretize.py::TestConcretize::test_no_matching_compiler_specs
concretize.py::TestConcretize::test_simultaneous_concretization_of_specs
spec_dag.py::TestSpecDag::test_concretize_deptypes
spec_dag.py::TestSpecDag::test_copy_concretized
```
You can combine any list option with keywords:
```console
$ spack test --list -k microarchitecture
llnl/util/cpu.py modules/lmod.py
```
```console
$ spack test --list-long -k microarchitecture
llnl/util/cpu.py::
test_generic_microarchitecture
modules/lmod.py::TestLmod::
test_only_generic_microarchitectures_in_root
```
Or just list specific files:
```console
$ spack test --list-long cmd/test.py
cmd/test.py::
test_list test_list_names_with_pytest_arg
test_list_long test_list_with_keywords
test_list_long_with_pytest_arg test_list_with_pytest_arg
test_list_names
```
Hopefully this stuff will help with debugging test issues.
- [x] make `spack test` send args directly to `pytest` instead of trying
to do fancy things.
- [x] rework `--list`, `--list-long`, and add `--list-names` to make
searching for tests easier.
- [x] make it possible to mix Spack's list args with `pytest` args
(they're just fancy parsing around `pytest --collect-only`)
- [x] add docs
- [x] add tests
- [x] update spack completion
Test configuration files (except modules.yaml) were in the root level of
test/data, but should really just be in their own directory. The absence
of modules.yaml was also breaking module tests if we got module
preferences after tests started, as the mock modules.yaml was not in the
test directory.
The module hook would previously fail if there were no enabled module types.
- Instead of looking for a `KeyError`, default to empty list when the
config variable is not present.
- Convert lambdas to real functions for clarity.
- Remove legacy yaml_version_check() hook
- Remove the pre_run hook from `hook/__init__.py` and `main.py`
We want to discourage the use of pre-run hooks because they have to run
at startup. To keep Spack fast, we should do things like this lazily
instead of in hooks that require spidering directories full of modules.
Continuing to shave small bits of time off startup --
`spack.cmd.common.arguments` constructs many `Args` objects at module
scope, which has to be done for all commands that import it. Instead of
doing this at load time, do it lazily.
- [x] construct Args objects lazily
- [x] remove the module-scoped argparse fixture
- [x] make the mock config scope set dirty to False by default (like the
regular scope)
This *seems* to reduce load time slightly
Previously, fixtures like `config`, `database`, and `store` were
module-scoped, but frequently used as test function arguments. These
fixtures swap out globals on setup and restore them on teardown. As
function arguments, they would do the right set-up, but they'd leave the
global changes in place for the whole module the function lived in. This
meant that if you use `config` once, other functions in the same module
would inadvertently inherit the mock Spack configuration, as it would
only be torn down once all tests in the module were complete.
In general, we should module- or session-scope the *STATE* required for
these global objects (as it's expensive to create), but we shouldn't
module- or session-scope the activation/use of them, or things can get
really confusing.
- [x] Make generic context managers for global-modifying fixtures.
- [x] Make session- and module-scoped fixtures that ONLY build filesystem
state and create objects, but do not swap out any variables.
- [x] Make separate function-scoped fixtures that *use* the session
scoped fixtures and actually swap out (and back in) the global
variables like `config`, `database`, and `store`.
These changes make it so that global changes are *only* ever alive for a
single test function, and we don't get weird dependencies because a
global fixture hasn't been destroyed.
`PackagePrefs` has had a class-level cache of data from `packages.yaml` for
a long time, but it complicates testing and leads to subtle errors,
especially now that we frequently manipulate custom config scopes and
environments.
Moving the cache to instance-level doesn't slow down concretization or
the test suite, and it just caches for the life of a `PackagePrefs`
instance (i.e., for a single concretization) so we don't need to worry
about global state anymore.
- [x] Remove class-level caches from `PackagePrefs`
- [x] Add a cached _spec_order object on each `PackagePrefs` instance
- [x] Remove all calls to `PackagePrefs.clear_caches()`
Commands like `spack blame` were printing poorly when redirected to files,
as colify reverts to a single column when redirected. This works for
list data but not tables.
- [x] Force a table by always passing `tty=True` from `colify_table()`
In "spack info" the Variants header currently has two blank
lines under it. That's too much. It looks like the actual
content belongs to something else.
Instead, underline the headers to make things more obvious.
This commit removes the `python_version.py` unit test module
and the vendored dependencies `pyqver2.py` and `pyqver3.py`.
It substitutes them with an equivalent check done using
`vermin` that is run as a separate workflow via Github Actions.
This allows us to delete 2 vendored dependencies that are unmaintained
and substitutes them with a maintained tool.
Also, updates the list of vendored dependencies.
`ViewDescriptor.regenerate()` calls `get_all_specs()`, which reads
`spec.yaml` files, which is slow. It's fine to do this once, but
`view.remove_specs()` *also* calls it immediately afterwards.
- [x] Pass the result of `get_all_specs()` as an optional parameter to
`view.remove_specs()` to avoid reading `spec.yaml` files twice.
`ViewDescriptor.regenerate()` was copying specs and stripping build
dependencies, which clears `_hash` and other cached fields on concrete
specs, which causes a bunch of YAML hashes to be recomputed.
- [x] Preserve the `_hash` and `_normal` fields on stripped specs, as
these will be unchanged.
`spack install` previously concretized, wrote the entire environment
out, regenerated views, then wrote and regenerated views
again. Regenerating views is slow, so ensure that we only do that once.
- [x] add an option to env.write() to skip view regeneration
- [x] add a note on whether regenerate_views() shouldn't just be a
separate operation -- not clear if we want to keep it as part of write
to ensure consistency, or take it out to avoid performance issues.
Environments need to read the DB a lot when installing all specs.
- [x] Put a read transaction around `install_all()` and `install()`
to avoid repeated locking
Our `LockTransaction` class was reading overly aggressively. In cases
like this:
```
1 with spack.store.db.read_transaction():
2 with spack.store.db.write_transaction():
3 ...
```
The `ReadTransaction` on line 1 would read in the DB, but the
WriteTransaction on line 2 would read in the DB *again*, even though we
had a read lock the whole time. `WriteTransaction`s were only
considering nested writes to decide when to read, but they didn't know
when we already had a read lock.
- [x] `Lock.acquire_write()` return `False` in cases where we already had
a read lock.
If a write transaction was nested inside a read transaction, it would not
write properly on release, e.g., in a sequence like this, inside our
`LockTransaction` class:
```
1 with spack.store.db.read_transaction():
2 with spack.store.db.write_transaction():
3 ...
4 with spack.store.db.read_transaction():
...
```
The WriteTransaction on line 2 had no way of knowing that its
`__exit__()` call was the last *write* in the nesting, and it would skip
calling its write function.
The `__exit__()` call of the `ReadTransaction` on line 1 wouldn't know
how to write, and the file would never be written.
The DB would be correct in memory, but the `ReadTransaction` on line 4
would re-read the whole DB assuming that other processes may have
modified it. Since the DB was never written, we got stale data.
- [x] Make `Lock.release_write()` return `True` whenever we release the
*last write* in a nest.
Lock transactions were actually writing *after* the lock was
released. The code was looking at the result of `release_write()` before
writing, then writing based on whether the lock was released. This is
pretty obviously wrong.
- [x] Refactor `Lock` so that a release function can be passed to the
`Lock` and called *only* when a lock is really released.
- [x] Refactor `LockTransaction` classes to use the release function
instead of checking the return value of `release_read()` / `release_write()`
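The shape of the refactor (a sketch; the real classes live in `llnl.util.lock`):
```python
class Lock(object):
    def __init__(self):
        self._writes = 0

    def acquire_write(self):
        self._writes += 1
        return self._writes == 1            # True only for the outermost acquire

    def release_write(self, release_fn=None):
        # run release_fn *before* the last write lock is dropped, instead
        # of letting callers inspect a return value after the fact -- this
        # guarantees the write happens while the lock is still held
        self._writes -= 1
        if self._writes == 0:
            if release_fn is not None:
                release_fn()                # e.g., flush the DB to disk
            return True
        return False
```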
`ViewDescriptor.regenerate()` checks repeatedly whether packages are
installed and also does a lot of DB queries. Put a read transaction
around the whole thing to avoid repeatedly locking and unlocking the DB.
`Environment.added_specs()` has a loop around calls to
`Package.installed()`, which can result in repeated DB queries. Optimize
this with a read transaction in `Environment`.
Checks for deprecated specs were repeatedly taking out read locks on the
database, which can be very slow.
- [x] put a read transaction around the deprecation check
BundlePackages use a noop fetch strategy. The mirror logic was assuming
that the fetcher had a resource to cache after performing a fetch. This adds
a special check to skip caching if the stage is associated with a
BundleFetchStrategy. Note that this should allow caching resources
associated with BundlePackages.
When updating a mirror, Spack was re-retrieving all patches (since the
fetch logic for patches is separate). This updates the patch logic to
allow the mirror logic to avoid this.
Since cache_mirror does the fetch itself, it also needs to do the
checksum itself if it wants to verify that the source stored in the
mirror is valid. Note that this isn't strictly required because fetching
(including from mirrors) always separately verifies the checksum.
The targets for the cosmetic paths in mirrors were being calculated
incorrectly as of fb3a3ba: the symlinks used relative paths as targets,
and the relative path was computed relative to the wrong directory.
When creating a cosmetic symlink for a resource in a mirror, remove
it if it already exists. The symlink is removed in case the logic to
create the symlink has changed.
* Some packages (e.g. mpfr at the time of this patch) can have patches
with the same name but different contents (which apply to different
versions of the package). This appends part of the patch hash to the
cache file name to avoid conflicts.
* Some exceptions which occur during fetching are not a subclass of
SpackError and therefore do not have a 'message' attribute. This
updates the logic for mirroring a single spec (add_single_spec)
to produce an appropriate error message in that case (where before
it failed with an AttributeError)
* In various circumstances, a mirror can contain the universal storage
path but not a cosmetic symlink; in this case it would not generate
a symlink. Now "spack mirror create" will create a symlink for any
package that doesn't have one.
Users can now list mirrors of the main url in packages.
- [x] Instead of just a single `url` attribute, users can provide a list (`urls`) in the package, and these will be tried by in order by the fetch strategy.
- [x] To handle one of the most common mirror cases, define a `GNUMirrorPackage` mixin to handle all the standard GNU mirrors. GNU packages can set `gnu_mirror_path` to define the path within a mirror, and the mixin handles setting up all the requisite GNU mirror URLs.
- [x] update all GNU packages in `builtin` to use the `GNUMirrorPackage` mixin.
- Add an optional argument so that `possible_dependencies()` will report
missing dependencies.
- Add a test to ensure it works.
- Ignore missing dependencies in `possible_dependencies()` by default.
- this version allows getting possible dependencies of multiple packages
or specs at once.
- New method handles calling `PackageBase.possible_dependencies` multiple
times and passing `visited` dict around.
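The traversal's shape, roughly (illustrative; the real method lives on `PackageBase`, and missing dependencies are only reported when asked for):
```python
def possible_dependencies(*pkg_names, deps_by_name, visited=None):
    """Collect transitive dependency names of several packages at once.

    `deps_by_name` maps a package name to its direct dependency names;
    one shared `visited` dict avoids re-walking common subtrees.
    """
    if visited is None:
        visited = {}
    for name in pkg_names:
        if name in visited:
            continue
        children = deps_by_name.get(name)
        if children is None:
            visited[name] = None     # missing package: note it, don't recurse
            continue
        visited[name] = set(children)
        possible_dependencies(*children, deps_by_name=deps_by_name,
                              visited=visited)
    return visited
```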
Spack doesn't understand a custom, user-defined compiler version. However, if
the compiler's version check fails, you can't build anything with the
custom compiler.
- [x] Be more lenient: fall back to the custom compiler version and use
it verbatim if the version check fails.
`pgcc -V` was failing on power machines because it returns 2 (despite
correctly printing version information). On x86_64 machines the same
command returns 0 and doesn't cause an error.
- [x] Ignore a return value of 2 for pgcc when doing a version check
Vendors for ARM come out of `/proc/cpuinfo` as hex numbers instead of readable strings.
- Add support for associating vendor names with the hex numbers.
- Also move these mappings from Python code to `microarchitectures.json`
- Move darwin feature name mappings to `microarchitectures.json` as well
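The mapping is just a small lookup table (0x41 and 0x43 are real ARM implementer codes; treat the table as illustrative, not the actual `microarchitectures.json` contents):
```python
# /proc/cpuinfo reports the "CPU implementer" as a hex code on ARM;
# associate the codes with readable vendor names
ARM_VENDORS = {
    '0x41': 'ARM',
    '0x43': 'Cavium',
}

def vendor_name(implementer_hex):
    return ARM_VENDORS.get(implementer_hex, 'generic')
```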
* when constructing package hash, default to including a method in the content hash if we can't determine whether it would be included by examining the AST
* add a test for updated content-hash calculations
* refactor content hash tests to eliminate repeated lines
* pytest: add __init__ files for all test subdirs
* add licenses to empty files
* Fix Sphinx warning message about comment within docstring
* Further fixes to Sphinx docstring
* fix docstring in generate_package_index() referring to "public" keys as "signing" keys
* use explicit kwargs in push_to_url()
* simplify url_util.parse() per tgamblin's suggestion
* replace standardize_header_names() with the much simpler get_header()
* add some basic tests
* update s3_fetch tests
* update S3 list code to strip leading slashes from prefix
* correct minor warning regression introduced in #11117
* add more tests
* flake8 fixes
* add capsys fixture to mirror_crud test
* add get_header() tests
* use get_header() in more places
* incorporate review comments
This PR allows virtual packages to be added to the specs list using
the add command.
Virtual packages are already allowed in named lists in spack
environments/stacks, and they are already allowed in the specs list
when added using the yaml directly.
I have, more than once, tried to install the list of things that need
to build the docs, only to discover that the list doesn't use Spack's
package names. I'm tired of facepalming....
While I was there I touched up the prose about activating the new
Python packages; activating a python package doesn't add anything to
your PYTHONPATH, it links things into a directory that's *already* on
your PYTHONPATH. Note that this all presupposes that you're using
that same python....
* CUDA HeaderList: Unit Test
* Spec Header Dirs: Only first include/
Avoid matching recursively nested include paths that usually
refer to internally shipped libraries in packages.
Example in CUDA Toolkit, shipping a libc++ fork internally
with libcu++ since 10.2.89:
`<prefix>/include/cuda/some/more/details/include/` or
`<prefix>/include/cuda/std/detail/libcxx/include`
regex: non-greedy first match of include
Co-Authored-By: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* CUDA: Re-Enable 10.2.89 as Default
* apply strict constraint checks for patches, otherwise Spack may incorrectly treat a version range constraint as satisfied when mixing x.y and x.y.z versions
* add mixed version checks to version comparison tests
`spack module loads` and `spack module find` previously failed if any upstream modules were missing. This prevented it from being used with upstreams (or, really, any spack instance) that blacklisted modules.
This PR makes module finding more lenient (especially for blacklisted modules).
- `spack module find` now does not report an error if the spec is blacklisted
- instead, it prints a single warning if any modules will be omitted from the loads file
- It comments the missing modules out of the loads file so the user can see what's missing
- Debug messages are also printed so users can check this with `spack -d...`
- also added tests for new functionality
* Fixed x86-64 optimization flags for clang
* Fixed expected results in unit tests
Before, the flags used were the ones for llc, the underlying compiler from LLVM IR to machine assembly. It turns out that the semantics of `-march`, `-mtune` and `-mcpu` change between the clang front-end and llc.
I found no definitive reference for the flags submitted in this PR, but I checked the assembly of a vectorizable function on the Godbolt website.
* Add a transaction around repeated calls to `spec.prefix` in the activation process
* cache the computation of home in the python package to speed up setting deps
* ensure that module-scope variables are only set *once* per module
`mirror_archive_path` was failing to account for the case where the fetched version isn't known to Spack.
- [x] don't require the fetched version to be in `Package.versions`
- [x] add regression test for mirror paths when package does not have a version
Extensions have been available for a while and the overall design
seems solid enough to be feasible for extensions without losing
backward compatibility.
* Add process to determine aarch64 microarchitecture
* add microarchitectures for thunderx2 and a64fx
* Add optimize flags for gcc on aarch64 family processors thunderx2 and a64fx.
* Add optimize flags for clang on aarch64 family processors thunderx2 and a64fx
* Add testing for thunderx2 and a64fx microarchitectures
* Make relative binaries relocate text files properly
* rb strings aren't valid in python 2
* move perl to new interface for setup_environment family methods
* remove reference to `spack.store` in method definition
Referencing `spack.store` in method definition will cache the `spack.config.config` singleton variable too early, before we have a chance to add command line and environment scopes.
Add a configuration option to suppress gpg warnings during binary
package verification. This only suppresses warnings: a gpg failure
will still fail the install. This allows users who have already
explicitly trusted the gpg key they are using to avoid seeing
repeated warnings that it is self-signed.
When making a package relative, relocate links relative to the link
directory rather than the full link path (which includes the file name),
because `os.path.relpath` expects a directory.
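The distinction in stdlib terms (a minimal sketch):
```python
import os

link = '/prefix/bin/tool-link'           # path of the symlink itself
target = '/prefix/lib/libtool.so'

# wrong: relpath against the full link path treats the file name as a
# directory component, producing one '..' too many
os.path.relpath(target, link)                    # '../../lib/libtool.so'

# right: compute the target relative to the directory holding the link
os.path.relpath(target, os.path.dirname(link))   # '../lib/libtool.so'
```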
Binaries with relative RPATHs currently do not relocate strings
hard-coded in binaries.
This PR extends the best-effort relocation of strings hard-coded
in binaries to those whose RPATHs have been relativized.
* Docs update for deprecated `spack sha256`
* Added macOS shasum
* Update lib/spack/docs/packaging_guide.rst
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
- [x] Use higher contrast terminal output font
- [x] Use higher contrast code block background color than default
- [x] Use a noticeable prompt character
See also https://github.com/spack/spack-tutorial/pull/10.
This fixes a regression introduced in #10792. `spack uninstall` in an
environment would not match concrete query specs properly after the index
hash of environments changed.
- [x] Search by DAG hash for specs to remove instead of by build hash
If you do this in a spack environment:
spack add hdf5+hl
hdf5+hl will be the root added to the `spack.yaml` file, and you should
really expect `hdf5+hl` to display as a root in the environment.
- [x] Add decoration to roots so that you can see the details about what
is required to build.
- [x] Add a test.
- [x] insert at beginning of list so fetch grabs local mirrors before remote resources
- [x] update the S3FetchStrategy so that it throws a SpackError if the fetch fails.
Before, it was throwing URLError, which was not being caught in stage.py.
- [x] move error handling out of S3FetchStrategy and into web_util.read_from_url()
- [x] pass string instead of URLError to SpackWebError
This changes Spack environments so that the YAML file associated with the environment is *only* written when necessary (i.e., if it is changed *by spack*). The lockfile is still written out as before.
There is a larger question here of which part of Spack should be responsible for setting defaults in config files, and how we can get rid of empty lists and data structures currently cluttering files like `compilers.yaml`. But that probably requires a rework of the default-setting validator in `spack.config`, as well as the code that uses `spack.config`. This will at least help for `spack.yaml`.
Commands like "spack mirror list" were displaying mirrors in a
different order than what was listed in the corresponding mirrors.yaml
file.
This restores commands to iterate over mirrors in the order that
they appear in the config file.
* Travis CI: Test Python 3.8
* Fix use of deprecated cgi.escape method
* Fix version comparison
* Fix flake8 F811 change in Python 3.8
* Make flake8 happy
* Use Python 3.8 for all test categories
Currently, query arguments in the Spack core are documented on the
Database._query method, where the functionality is defined.
For users of the spack python command, this makes the python builtin
method help less than ideally useful, as help(spack.store.db.query)
and help(spack.store.db.query_local) do not show relevant information.
This PR updates the doc attributes for the Database.query and
Database.query_local arguments to mirror everything after the first
line of the Database._query docstring.
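A minimal sketch of one way to do this mirroring (illustrative, not the actual patch):
```
from spack.database import Database

# Append everything after the first line of _query's docstring to the
# public wrappers so help(db.query) shows the argument documentation.
_, _, arg_docs = Database._query.__doc__.partition('\n')
Database.query.__doc__ += arg_docs
Database.query_local.__doc__ += arg_docs
```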
* cuda: fix conflict statements for x86-64 targets
fixes#13462
This build system mixin was not updated after support for specific
targets was merged.
* Updated the version range of cuda that conflicts with gcc@8:
* Updated the version range of cuda that conflicts with gcc@8: for ppc64le
* Relaxed conflicts for version > 10.1
* Updated versions in conflicts
Co-Authored-By: Axel Huebl <axel.huebl@plasma.ninja>
4af4487 added a mirror_id function to most FetchStrategy
implementations that is used to calculate resource locations in
mirrors. It left out BundleFetchStrategy which broke all packages
making use of BundlePackage (e.g. xsdk). This adds a noop
implementation of mirror_id to BundleFetchStrategy so that the
download/installation of BundlePackages can proceed as normal.
The `test_changed_files` in `test/cmd/flake8.py` was failing because it calls
`ArgumentParser.parse_args()` without arguments. Normally that would just
parse `sys.argv` but it seems to fail because of something in either `spack test`
or `pytest`. Call it with an empty array so that it doesn't try to touch `sys.argv`
at all.
- [x] allow `-d` spack option for `test_changed_files`
* docs: add a spack environment for building the docs
* docs: remove tutorial and link to spack-tutorial.readthedocs.io
The tutorial now has its own standalone website, versioned by instances
of the tutorial. Link to that instead of versioning it directly with Spack.
Support mirroring all packages with `spack mirror create --all`.
In this mode there is no concretization:
* Spack pulls every version of every package into the created mirror.
* It also makes multiple attempts for each package/version combination
(if there is a temporary connection failure).
* Continues if all attempts fail, i.e., it makes a best effort to
fetch everything, even if all attempts to fetch one package fail.
This also changes mirroring logic to prefer storing sources by their hash
or by a unique name derived from the source. For example:
* Archives with checksums are named by the sha256 sum, i.e.,
`archive/f6/f6cf3bd233f9ea6147b21c7c02cac24e5363570ce4fd6be11dab9f499ed6a7d8.tar.gz`
vs the previous `<package-name>-<package-version>.tar.gz`
* VCS repositories are stored by a path derived from their URL,
e.g. `git/google/leveldb.git/master.tar.gz`.
The new mirror layout allows different packages to refer to the same
resource or source without duplicating that download in the
mirror/cache. This change is not essential to mirroring everything but is
expected to save space when mirroring packages that all use the same
resource.
The new structure of the mirror is:
```
<base directory>/
_source-cache/ <-- the _source-cache directory is new
archive/ <-- archives/resources/patches stored by hash
00/ <-- 2-letter sha256 prefix
002748bdd0319d5ab82606cf92dc210fc1c05d0607a2e1d5538f60512b029056.tar.gz
01/
0154c25c45b5506b6d618ca8e18d0ef093dac47946ac0df464fb21e77b504118.tar.gz
0173a74a515211997a3117a47e7b9ea43594a04b865b69da5a71c0886fa829ea.tar.gz
...
git/
OpenFAST/
openfast.git/
master.tar.gz <-- repo by branch name
PHASTA/
phasta.git/
11f431f2d1a53a529dab4b0f079ab8aab7ca1109.tar.gz <-- repo by commit
...
svn/ <-- each fetch strategy has its own subdirectory
...
openmpi/ <-- the remaining package directories have the old format
openmpi-1.10.1.tar.gz <-- human-readable name is symlink to _source-cache
```
In addition to the archive names as described above, `mirror create` now
also creates symlinks with the old format to help users understand which
package each mirrored archive is associated with, and to allow mirrors to
work with old spack versions. The symlinks are relative so the mirror
directory can still itself be archived.
Other improvements:
* `spack mirror create` will not re-download resources that have already
been placed in it.
* When creating a mirror, the resources downloaded to the mirror will not
be cached (things are not stored twice).
reindexing takes a significant amount of time, and there's no reason to
do it from DB version 0.9.3 to version 5. The only difference is that v5
can contain "deprecated_for" fields.
- [x] Add a `_skip_reindex` list at the start of `database.py`
- [x] Skip the reindex for upgrades in this list. The new version will
just be written to the file the first time we actually have to write
the DB out (e.g., after an install), and reads will still work fine.
Previously, spack would error out if we tried to fetch something with no
code, but that would prevent fetching dependencies. In particular, this
would fail:
spack fetch --dependencies xsdk
- [x] Instead of raising an error, just print a message that there is nothing
to be fetched for packages like xsdk that do not have code.
- [x] Make BundleFetchStrategy a bit more quiet about doing nothing.
We've had `spack spec --yaml` for a while, and we've had methods for JSON
for a while as well. We just haven't had a `--json` argument for `spack spec`.
- [x] Add a `--json` argument to `spack spec`, just like `--yaml`
New entry for K10 microarchitecture.
Reorder Zen* microarchitectures to avoid triggering as k10.
Remove some desktop-specific flags that were preventing Opteron Bulldozer/Piledriver/Steamroller/Excavator CPUs from being recognized as such.
Remove one or two flags which weren't produced in /proc/cpuinfo on older OSes (RHEL6 and friends).
Rename the `spack diy` command to `spack dev-build` to make the use case clearer.
The `spack diy` command has some useful functionality for developers using Spack to build their dependencies and configure/build/install the code they are developing. Developers do not notice it, partly because of the obscure name.
The `spack dev-build` command has a `-u/--until PHASE` option to stop after a given phase of the build. This can be used to configure your project, run cmake on your project, or similarly stop after any stage of the build the user wants. These options are analogous to the existing `spack configure` and `spack build` commands, but for developer builds.
To unify the syntax, we have deprecated the `spack configure` and `spack build` commands, and added a `-u/--until PHASE` option to the `spack install` command as well.
The functionality in `spack dev-build` (specifically `spack dev-build -u cmake`) may be able to supersede the `spack setup` command, but this PR does not deprecate that command as that will require slightly more thought.
fd58c98 formats the `Stage`'s `archive_path` in `Stage.archive` (as part of `web.push_to_url`). This is not needed, and if the formatted path differs from the original (for example if the archive file name contains a URL query suffix), the copy fails.
This removes the formatting that occurs in `web.push_to_url`.
We should figure out a way to handle bad cases like this *and* to have nicer filenames for downloaded files. One option that would work in this particular case would be to also pass `-J` / `--remote-header-name` to `curl`. We'll need to do follow-up work to determine if we can use `-J` everywhere.
See also: https://github.com/spack/spack/pull/11117#discussion_r338301058
Add a new entry in `config.yaml`:
config:
shared_linking: 'rpath'
If this variable is set to `rpath` (the default) Spack will set RPATH in ELF binaries. If set to `runpath` it will set RUNPATH.
Details:
* Spack's cc wrapper explicitly adds `--disable-new-dtags` when linking
* The cc wrapper also strips `--enable-new-dtags` from the compile line
when disabling (and vice versa)
* We specifically do *not* add any dtags flags on macOS, which uses
Mach-O binaries, not ELF, so there is no RUNPATH (see the sketch below)
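A minimal sketch, with assumed names, of how a wrapper can map the setting to linker flags:
```
# Assumed names: translate config:shared_linking into the dtags flag a
# compiler wrapper would inject on ELF platforms.
def dtags_flags(shared_linking, platform):
    if platform == 'darwin':
        return []                          # Mach-O: no dtags at all
    if shared_linking == 'runpath':
        return ['-Wl,--enable-new-dtags']  # sets RUNPATH
    return ['-Wl,--disable-new-dtags']     # default: sets RPATH
```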
`spack deprecate` allows for the removal of insecure packages with minimal impact to their dependents. It allows one package to be symlinked into the prefix of another to provide seamless transition for rpath'd and hard-coded applications using the old version.
Example usage:
spack deprecate /hash-of-old-openssl /hash-of-new-openssl
The spack deprecate command is designed for use only in extraordinary circumstances; it makes no promises about binary compatibility. It is up to the user to ensure the replacement is suitable for the deprecated package.
Previously this command only showed total counts for each regular
expression. This doesn't give you a sense of which regexes are working
well and which ones are not. We now display the number of right, wrong,
and total URL parses per regex.
It's easier to see where we might improve the URL parsing with this
change.
This updates the configuration loading/dumping logic (now called
load_config/dump_config) in spack_yaml to preserve comments (by using
ruamel.yaml's RoundTripLoader). This has two effects:
* environment spack.yaml files expect to retain comments, which
load_config now supports. By using load_config, users can now use the
':' override syntax that was previously unavailable for environment
configs (but was available for other config files).
* config files now retain user comments by default (although in cases
where Spack updates/overwrites config, the comments can still be
removed).
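For illustration, ruamel.yaml's round-trip mode preserves comments across a load/edit/dump cycle (this is the library behavior being built on, not Spack's wrapper code):
```
import ruamel.yaml

text = """\
# a user comment that should survive editing
config:
  build_jobs: 8  # inline comment
"""
data = ruamel.yaml.load(text, Loader=ruamel.yaml.RoundTripLoader)
data['config']['build_jobs'] = 4
print(ruamel.yaml.dump(data, Dumper=ruamel.yaml.RoundTripDumper))
# both comments are still present in the dumped YAML
```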
Details:
* Subclasses `RoundTripLoader`/`RoundTripDumper` to parse yaml into
ruamel's `CommentedMap` and analogous data structures
* Applies filename info directly to ruamel objects in cases where the
updated loader returns those
* Copies management of sections in `SingleFileScope` from #10651 to allow
overrides to occur
* Updates the loader/dumper to handle the processing of overrides by
specifically checking for the `:` character
* Possibly the most controversial aspect, but without that, the parsed
objects have to be reconstructed (i.e. as was done in
`mark_overrides`). It is possible that `mark_overrides` could remain
and a deep copy will not cause problems, but IMO that's generally
worth avoiding.
* This is also possibly controversial because Spack YAML strings can
include `:`. My reckoning is that this only occurs for version
specifications, so it is safe to check for `endswith(':') and not
('@' in string)`
* As a consequence, this PR ends up reserving spack yaml functions
load_config/dump_config exclusively for the purpose of storing spack
config
`test_environment_status()` was printing extra output during tests.
- [x] disable output only for `env('status')` calls instead of disabling
it for the whole test.
This PR ensures that environment activation sets all environment variables set by the equivalent `module load` operations, except that the spec prefixes are "rebased" to the view associated with the environment.
Currently, Spack blindly adds paths relative to the environment view root to the user environment on activation. Issue #12731 points out ways in which this behavior is insufficient.
This PR changes that behavior to use the `setup_run_environment` logic for each package to augment the prefix inspections (as in Spack's modulefile generation logic) to ensure that all necessary variables are set to make use of the packages in the environment.
See #12731 for details on the previous problems in behavior.
This PR also updates the `ViewDescriptor` object in `spack.environment` to have a `__contains__` method. This allows for checks like `if spec in self.default_view`. The `__contains__` operator for `ViewDescriptor` objects checks whether the spec satisfies the filters of the View descriptor, not whether the spec is already linked into the underlying `FilesystemView` object.
This PR ensures that on Darwin we always append /sbin and /usr/sbin to PATH, if they are not already present, when looking for sysctl.
* Make sure we look into /sbin and /usr/sbin for sysctl
* Refactor sysctl for better readability
* Remove marker to make test pass
These changes update our gcc microarchitecture descriptions based on manuals found here https://gcc.gnu.org/onlinedocs/ and assuming that new architectures are not added during patch releases.
This extends Spack functionality so that it can fetch sources and binaries from-, push sources and binaries to-, and index the contents of- mirrors hosted on an S3 bucket.
High level to-do list:
- [x] Extend mirrors configuration to add support for `file://`, and `s3://` URLs.
- [x] Ensure all fetching, pushing, and indexing operations work for `file://` URLs.
- [x] Implement S3 source fetching
- [x] Implement S3 binary mirror indexing
- [x] Implement S3 binary package fetching
- [x] Implement S3 source pushing
- [x] Implement S3 binary package pushing
Important details:
* refactor URL handling to handle S3 URLs and mirror URLs more gracefully.
- updated parse() to accept already-parsed URL objects. an equivalent object
is returned with any extra s3-related attributes intact. Objects created with
urllib can also be passed, and the additional s3 handling logic will still be applied.
* update mirror schema/parsing (mirror can have separate fetch/push URLs)
* implement s3_fetch_strategy/several utility changes
* provide more feature-complete S3 fetching
* update buildcache create command to support S3
* Move the core logic for reading data from S3 out of the s3 fetch strategy and into
the s3 URL handler. The s3 fetch strategy now calls into `read_from_url()`. Since
`read_from_url` can now handle S3 URLs, the S3 fetch strategy is redundant. It's
not clear whether the ideal design is to have S3 fetching functionality in a fetch
strategy, directly implemented in read_from_url, or both.
* expanded what can be passed to `spack buildcache` via the -d flag: In addition
to a directory on the local filesystem, the name of a configured mirror can be
passed, or a push URL can be passed directly.
fixes#13073
Since #3206 was merged bootstrapping environment-modules was using the architecture of the current host or the best match supported by the default compiler. The former case is an issue since shell integration was looking for a spec targeted at the host microarchitecture.
1. Bootstrap an env modules targeted at generic architectures
2. Look for generic targets in shell integration scripts
3. Add a new entry in Travis to test shell integration
Custom string versions for compilers were raising a ValueError on
conversion to int. This commit fixes the behavior by trying to detect
the underlying compiler version when in presence of a custom string
version.
* Refactor code that deals with custom versions for better readability
* Partition version components with a regex
* Fix semantic of custom compiler versions with a suffix
* clang@x.y-apple has been special-cased
* Add unit tests
We've been doing this for quite a while now, and it does not seem to
cause issues.
- [x] Switch the noisy warning to a debug to make Spack a bit quieter
while building.
* Added architecture specific optimization flags for Clang / LLVM
* Disallow compiler optimizations for mixed toolchains
* We emit a warning when building for a mixed toolchain
* Fixed issues with suffixed versions of compilers; Apple's Clang will,
for the time being, fall back on x86-64 for every compilation.
* Methods setting the environment now do it separately for build and run
Before this commit the `*_environment` methods were setting
modifications to both the build-time and run-time environment
simultaneously. This might cause issues as the two environments
inherently rely on different preconditions:
1. The build-time environment is set before building a package, thus
the package prefix doesn't exist and can't be inspected
2. The run-time environment instead is set assuming the target package
has been already installed
Here we split each of these functions into two: one setting the
build-time environment, one the run-time.
We also adopt a fallback strategy that inspects for old methods and
executes them as before, but prints a deprecation warning to tty. This
permits porting packages to the new methods gradually, rather than
having to modify all the packages at once.
* Added a test that fails if any package uses the old API
Marked the test xfail for now as we have a lot of packages in that
state.
* Added a test to check that a package modified by a PR is up to date
This test can be used any time we deprecate a method call to ensure
that during the first modification of the package we update also
the deprecated calls.
* Updated documentation
Python 3 metaclasses have a `__prepare__` method that lets us save the
class's dictionary before it is constructed. In Python 2 we had to walk
up the stack using our `caller_locals()` method to get at this. Using
`__prepare__` is much faster as it doesn't require us to use `inspect`.
This makes multimethods use the faster `__prepare__` method in Python3,
while still using `caller_locals()` in Python 2. We try to reduce the
use of caller locals using caching to speed up Python 2 a little bit.
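A toy example of the mechanism (not Spack's actual metaclass):
```
class DirectiveMeta(type):
    @classmethod
    def __prepare__(mcs, name, bases):
        # Runs *before* the class body executes; the mapping returned
        # here becomes the class namespace, so it can be pre-seeded.
        return {'directives': []}

class Package(metaclass=DirectiveMeta):
    directives.append('version')  # resolves via the pre-seeded namespace

print(Package.directives)  # ['version']
```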
Our importer was always parsing from source (which is considerably
slower) because the source size recorded in the .pyc file differed from
the size of the input file.
Override path_stats in the prepending importer to fool it into thinking
that the source size is the size *with* the prepended code.
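A sketch of the override, with the loader and attribute names assumed:
```
from importlib.machinery import SourceFileLoader

class PrependingLoader(SourceFileLoader):
    prepend = b"# injected preamble\n"  # hypothetical prepended code

    def path_stats(self, path):
        # Report the size *with* the prepended code so the bytecode
        # cache's source-size check passes and .pyc files are reused.
        stats = dict(super().path_stats(path))
        stats['size'] += len(self.prepend)
        return stats
```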
Since the backup file is only created on the first invocation, it will
contain the original file without any modifications. Further invocations
will then read the backup file, effectively reverting prior invocations.
This can be reproduced easily by trying to install likwid, which will
try to install into /usr/local. Work around this by creating a temporary
file to read from.
* This updates stage names to use "spack-stage-" as a prefix.
This avoids removing non-Spack directories in "spack clean" as
c141e99 did (in this case so long as they don't contain the
prefix "spack-stage-"), and also addresses a follow-up issue
where Spack stage directories were not removed.
* Spack now does more-stringent checking of expected permissions for
staging directories. For a given stage root that includes a user
component, all directories before the user component that are
created by Spack are expected to match the permissions of their
parent; the user component and all deeper directories are expected
to be accessible to the user (read/write/execute).
This feature generates a verification manifest for each installed
package and provides a command, "spack verify", which can be used to
compare the current file checksums/permissions with those recorded
at install time.
Verification includes
* Checksums of files
* File permissions
* Modification time
* File size
Packages installed before this PR will be skipped during verification.
To verify such a package you must reinstall it.
The spack verify command has three modes.
* With the -a,--all option it will check every installed package.
* With the -f,--files option, it will check some specific files,
determine which package they belong to, and confirm that they have
not been changed.
* With the -s,--specs option, or by default, it will check specific
packages and confirm that none of their files have changed.
fixes#13005
This commit fixes an issue with the name of the root directory for
module file hierarchies. Since #3206 the root folder was named after
the microarchitecture used for the spec, which is too specific and
not backward compatible for lmod hierarchies. Here we compute the
root folder name using the target family instead of the target name
itself and we add target information in the 'whatis' portion of the
module file.
From Python docs:
--
'surrogateescape' will represent any incorrect bytes as code points in
the Unicode Private Use Area ranging from U+DC80 to U+DCFF. These
private code points will then be turned back into the same bytes when
the surrogateescape error handler is used when writing data. This is
useful for processing files in an unknown encoding.
--
This will allow us to process files with unknown encodings.
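A self-contained illustration of the round-trip property:
```
# Unknown bytes survive a decode/encode round trip with surrogateescape:
raw = b"text with a stray byte: \xff"
text = raw.decode("utf-8", errors="surrogateescape")  # \xff -> U+DCFF
assert text.encode("utf-8", errors="surrogateescape") == raw
```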
To accommodate the case of self-extracting bash scripts, filter_file
can now stop filtering text input if a certain marker is found. The
marker must be passed at call time via the "stop_at" function argument.
At that point the file will be reopened in binary mode and copied
verbatim.
* use "surrogateescape" error handling to ignore unknown chars
* permit stopping the filter if a marker is found (see the usage sketch below)
* add unit tests for non-ASCII and mixed text/binary files
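A hedged usage sketch; the marker string is hypothetical:
```
from llnl.util.filesystem import filter_file

# Rewrite the interpreter line of a self-extracting script, but stop
# filtering at the payload marker; the remainder is copied verbatim.
filter_file('^#!/bin/sh', '#!/bin/bash', 'installer.sh',
            stop_at='__ARCHIVE_BELOW__')
```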
- Add a test that verifies checksums on all packages
- Also add an attribute to packages that indicates whether they need a
manual download or not, and add an exception in the tests for these
packages until we can verify them.
Both floating-point and NEON are required in all standard ARMv8
implementations. In theory, though, specialized markets can support
no NEON or floating-point at all. Source:
https://developer.arm.com/docs/den0024/latest/aarch64-floating-point-and-neon
On the other hand the base procedure call standard for Aarch64
"assumes the availability of the vector registers for passing
floating-point and SIMD arguments". Further "the Arm 64-bit
architecture defines two mandatory register banks: a general-purpose
register bank which can be used for scalar integer processing and
pointer arithmetic; and a SIMD and Floating-Point register bank".
Source:
https://developer.arm.com/docs/ihi0055/latest/procedure-call-standard-for-the-arm-64-bit-architecture
This makes customization of Aarch64 with no NEON instruction set
available so unlikely that we can consider them a feature of the
generic family.
This PR adds a 'concretize' entry to an environment's spec.yaml file
which controls how user specs are concretized. By default it is
set to 'separately' which means that each spec added by the user is
concretized separately (the behavior of environments before this PR).
If set to 'together', the environment will concretize all of the
added user specs together; this means that all specs and their
dependencies will be consistent with each other (for example, a
user could develop code linked against the set of libraries in the
environment without conflicts).
If the environment was previously concretized, this will re-concretize
all specs, in which case previously-installed specs may no longer be
used by the environment (in this sense, adding a new spec to an
environment with 'concretize: together' can be significantly more
expensive).
The 'concretize: together' setting is not compatible with Spec
matrices; this PR adds a check to look for multiple instances of the
same package added to the environment and fails early when
'concretize: together' is set (to avoid confusing messages about
conflicts later on).
While the build environment already takes share/pkgconfig into account,
the generated module files etc. only consider lib/pkgconfig and
lib64/pkgconfig.
When removing support for dotkit in #11986 the code trying to set the
paths of the various module files was not updated to skip it. This
results in a failure because of a key error after the deprecation
warning is displayed to user.
This commit fixes the issue and adds a unit test for regression.
Note that code for Spack chains has been updated accordingly but
no unit test has been added for that case.
Dotkit is being used only at a few sites and has been deprecated on new
machines. This commit removes all the code that provide support for the
generation of dotkit module files.
A new validator named "deprecatedProperties" has been added to the
jsonschema validators. It can emit a warning message, or exit with an
error, if a property that has been marked as deprecated is
encountered.
* Removed references to dotkit in the docs
* Removed references to dotkit in setup-env-test.sh
* Added a unit test for the 'deprecatedProperties' schema validator
fixes#12915
closes#12916
Since Spack has support for specific targets it might happen that
software is built for targets that are not exactly the host because
it was either an explicit user request or the compiler being used is
too old to support the host.
Modules for different targets are written into different directories
and by default Spack was adding to MODULEPATH only the directory
corresponding to the current host. This PR modifies this behavior to
add all the directories that are **compatible** with the current host.
Sometimes when remove_file is called on a link, that link is missing
(perhaps ctrl-C happened halfway through a previous action). As
removing a non-existent file is no problem, this patch changes the
behavior so Spack continues rather than stopping with an error.
Currently you would see
ValueError: /path/to/dir is not a link tree!
and now it continues with a warning.
bin/spack now needs to have a "-*- python -*-" line after the shebang, so
that emacs will interpret it as a python file instead of as a shell
script. Add one line to the license check limit to accommodate this.
The output of subprocess.check_output is a byte string in Python 3. This causes dictionary lookup to fail later on.
A try-except around this function prevented this error from being noticed. Removed this so that more errors can propagate out.
Preferred targets were failing because we were looking them up by
Microarchitecture object, not by string.
- [x] Add a call to `str()` to fix target lookup.
- [x] Add a test to exercise this part of concretization.
- [x] Add documentation for setting `target` in `packages.yaml`
* microarchitectures: zen starts from x86_64, not from excavator
* Unit tests: fixed a test that is wrong with the new modeling
* microarchitectures: fixed features and inheritance for 15h family
bulldozer doesn't inherit from barcelona (10h) + added xop, lwp and tbm
instruction sets to the 15h family (they distinguish the family from 17h)
Addresses #12804
This PR adds the creation of the remaining (16) templates to ensure we can create them with expected content. The goal is to facilitate catching template errors during testing.
Spack doesn't need `requests`, and neither does `jsonschema`, but
`jsonschema` tries to import it, and it'll succeed if `requests` is on
your machine (which is likely, given how popular it is). This commit
removes the import to improve Spack's startup time a bit.
On a mac with SSD, the import of requests is ~28% of Spack's startup time
when run as `spack --print-shell-vars sh,modules` (.069 / .25 seconds),
which is what `setup-env.sh` runs.
On a Linux cluster where Python is mounted from NFS, this reduces
`setup-env.sh` source time from ~1s to .75s.
Note: This issue will be eliminated if we upgrade to a newer `jsonschema`
(we'd need to drop Python 2.6 for that). See
https://github.com/Julian/jsonschema/pull/388.
- This is needed to support Cray machines -- we need an architecture
mic_knl > x86_64
- We used Cray's naming scheme for this target to make it work seamlessly
with the module-based detection scheme on Cray. mic_knl is pretty
much dead, so this will be the last such target. We will need to work
with Cray and other vendors in the future.
Seamless translation from 'target=<generic>' to either
- target.family == <generic> (in methods)
- 'target=<generic>:' (in directives)
Also updated docs to show ranges in directives.
Spack can now:
- label ppc64, ppc64le, x86_64, etc. builds with specific
microarchitecture-specific names, like 'haswell', 'skylake' or
'icelake'.
- detect the host architecture of a machine from /proc/cpuinfo or similar
tools.
- Understand which microarchitectures are compatible with which (for
binary reuse)
- Understand which compiler flags are needed (for GCC, so far) to build
binaries for particular microarchitectures.
All of this is managed through a JSON file (microarchitectures.json) that
contains detailed auto-detection, compiler flag, and compatibility
information for specific microarchitecture targets. The `llnl.util.cpu`
module implements a library that allows detection and comparison of
microarchitectures based on the data in this file.
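A hedged usage sketch of that library, assuming it exposes a `host()` detection function and a `targets` dictionary as described:
```
import llnl.util.cpu as cpu

host = cpu.host()  # detect the current microarchitecture
print(host.name, host.vendor, sorted(host.features))

# partial-order comparisons express binary compatibility; on an x86_64
# machine the detected host is at least the generic x86_64 target
assert host >= cpu.targets['x86_64']
```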
The `target` part of Spack specs is now essentially a Microarchitecture
object, and Specs' targets can be compared for compatibility as well.
This allows us to label optimized binary packages at a granularity that
enables them to be reused on compatible machines. Previously, we only
knew that a package was built for x86_64, NOT which x86_64 machines it
was usable on.
Currently this feature supports Intel, Power, and AMD chips. Support for
ARM is forthcoming.
Specifics:
- Add microarchitectures.json with descriptions of architectures
- Relaxed semantic of compiler's "target" attribute. Before this change
the semantic to check if a compiler could be viable for a given target
was exact match. This made sense as the finest granularity of targets
was architecture families. As now we can target micro-architectures,
this commit changes the semantic by interpreting as the architecture
family what is stored in the compiler's "target" attribute. A compiler
is then a viable choice if the target being concretized belongs to the
same family. Similarly when a new compiler is detected the architecture
family is stored in the "target" attribute.
- Make Spack's `cc` compiler wrapper inject target-specific flags on the
command line
- Architecture concretization updated to use the same algorithm as
compiler concretization
- Micro-architecture features, vendor, generation etc. are included in
the package hash. Generic architectures, such as x86_64 or ppc64, are
still dumped using the name only.
- If the compiler for a target is not supported exit with an intelligible
error message. If the compiler support is unknown don't try to use
optimization flags.
- Support and define feature aliases (e.g., sse3 -> ssse3) in
microarchitectures.json and on Microarchitecture objects. Feature
aliases are defined in targets.json and map a name (the "alias") to a
list of rules that must be met for the test to be successful. The rules
that are available can be extended later using a decorator.
- Implement subset semantics for comparing microarchitectures (treat
microarchitectures as a partial order, i.e. (a < b), (a == b) and (b <
a) can all be false).
- Implement logic to automatically demote the default target if the
compiler being used is too old to optimize for it. Updated docs to make
this behavior explicit. This avoids surprising the user if the default
compiler is older than the host architecture.
This commit adds unit tests to verify the semantics of target ranges and
target lists in constraints. The implementation to allow target ranges
and lists is minimal and doesn't add any new type. A more careful
refactor that takes into account the type system might be due later.
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Add llnl.util.cpu_name, with initial support for detecting different
microarchitectures on Linux. This also adds preliminary changes for
compiler support and variants to control the optimization levels by
target.
This does not yet include translations of targets to particular
compilers; that is left to another PR.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Move verbose messages to debug level
get_patchelf should return None for test platform as well because create_buildinfo invokes patchelf to get rpaths.
Update command-line (CLI) parsing to understand references to yaml
files that store Spack specs. Where a file reference is encountered,
the full Spec in the file will be read in. A file reference may
appear anywhere that a spec could appear before. For example, if you
write "spack spec -y openmpi > openmpi.yaml" you may then install the
spec using the yaml file by running "spack install ./openmpi.yaml";
you can also refer to dependencies in this way (e.g.
"spack install foo^./openmpi.yaml").
There are two requirements for file references:
* A file path entered on the CLI must include a "/" even if the file
exists in your current working directory. For example, if you
create an openmpi.yaml file as above and run
"spack install openmpi.yaml" from the same directory, it will
report an error.
* A file path entered on the CLI must end with ".yaml"
This commit adds error messages to clearly inform the user of both
violations.
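A hedged sketch of the two rules (not the actual parser code):
```
# A CLI argument is treated as a spec file only if it contains a '/'
# and ends with '.yaml':
def looks_like_spec_file(arg):
    return arg.endswith('.yaml') and '/' in arg

print(looks_like_spec_file('./openmpi.yaml'))  # True
print(looks_like_spec_file('openmpi.yaml'))    # False: needs a '/'
```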
* implicit_rpaths are now removed from compilers.yaml config and are always instantiated dynamically, this occurs one time in the build_environment module
* Maintain a per-compiler list of required libraries (e.g. libstdc++, libgfortran) and whitelist the rpath directories containing those libraries; remove non-whitelisted implicit rpaths. Some libraries are defaults for all compilers.
* reintroduce 'implicit_rpaths' as a config variable that can be used to disable Spack insertion of compiler RPATHs generated at build time.
Fixes#12732
Fixes#12767
c22a145 added automatic detection and RPATHing of compiler libraries
to Spack builds. However, in cases where the parsing/detection logic
fails this was terminating the build. This makes the compiler library
detection "best-effort" and reports an issue when the detection fails
rather than terminating the build.
This is similar to #10191. The Ubuntu package for clang 8.0.0 displays
a very unusual version string, and we need this new regex to detect it
as just 8.0.0
Unit tests have been complemented with the output that was failing
detection.
- Fix trailing whitespace missed by the bug described in #12755.
- Fix other style issues that have crept in over time (this can happen
when flake8 adds new checks with new versions)
E501 (line too long) exemptions are probably our most common ones -- we
add them for directives, URLs, hashes, etc. in packages. But we
currently add them even when a line *doesn't* need them, which can mask
trailing whitespace errors.
This changes `spack flake8` so that it will only add E501 exemptions if
the line is *actually* too long.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
mock_archive can now take multiple extension / tar option pairs (default matches old behavior).
url_fetch.test_fetch tests more archive types.
compression.EXTS split into EXTS and NOTAR_EXTS to avoid unwanted, non-meaningful combinatoric extensions such as .tar.tbz2.
- previously spec parsing didn't allow you to look up missing (but still
known) specs by hash
- This allows you to reference and potentially reinstall
force-uninstalled dependencies
- add testing for force uninstall and for reference by spec
- cmd/install tests now use mutable_database
* When cleaning the stage root, only remove directories that appear
to be used for staging Spack packages. Previously Spack was clearing
all directories in the stage root, which could remove content not
related to Spack if the user chose a staging root which contains
files/directories not managed by Spack.
* The documentation is updated with warnings about choosing a stage
directory that is only managed by Spack (although generally the
check added in this PR for "spack clean" should avoid removing
content that was not created by Spack)
* The default stage directory (in config.yaml) is now
$tempdir/$user/spack-stage and the logic is updated to omit the
$user portion of this path if $tempdir already contains a $user
directory.
* When creating stage root assign user read/write permissions to all
directories in the path under $user. Previously Spack was assigning
the permissions of the first existing parent directory
`spec.prefix` reads from Spack's database, and if you do this with
multiple consecutive read transactions, it can take a long time. Or, at
least, you can see the paths get written out one by one.
This uses an outer read transaction to ensure that actual disk locks are
acquired only once for the whole `spack find` operation, and that each
transaction inside `spec.prefix` is an in-memory operation. This speeds
up `spack find -p` a lot.
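The pattern, sketched with Spack's database API:
```
import spack.store

# One outer read transaction: the disk lock is acquired once, and every
# spec.prefix lookup inside is an in-memory read.
with spack.store.db.read_transaction():
    for spec in spack.store.db.query():
        print(spec.prefix)
```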
Refactor `spack.cmd.display_specs()` and `spack find` so that any options
can be used together with -d. This cleans up the display logic
considerably, as there are no longer multiple "modes".
This is another machine-readable version of `spack find`. Supplying the
`--json` argument causes specs to be written out as json records,
easily filtered with tools like jq.
e.g.:
$ spack find --json python | jq -C ".[] | { name, version } "
[
{
"name": "python",
"version": "2.7.16"
},
{
"name": "bzip2",
"version": "1.0.8"
}
]
- spack find --format allows you to supply a format string and have specs
output in a more machine-readable way, without decoration
e.g.:
spack find --format "{name}-{version}-{hash}"
autoconf-2.69-icynozk7ti6h4ezzgonqe6jgw5f3ulx4
automake-1.16.1-o5v3tc77kesgonxjbmeqlwfmb5qzj7zy
bzip2-1.0.6-syohzw57v2jfag5du2x4bowziw3m5p67
...
or:
spack find --format "{hash}"
icynozk7ti6h4ezzgonqe6jgw5f3ulx4
o5v3tc77kesgonxjbmeqlwfmb5qzj7zy
syohzw57v2jfag5du2x4bowziw3m5p67
...
This is intended to make it much easier to script with `spack find`
When Spack installs a package it writes the package.py file and
patches to a separate repository (which reflects the state of the
package at the time it was installed). Previously, Spack only wrote
patches that were used at installation time. This updates the
archiving step to include all patch files that are relevant to the
package (in case that repository is used in another context).
This commit removes redundant calls to `libtoolize` and `aclocal`.
Some configurations, such as a Spack user using macOS with a
Homebrew-installed `libtool` added to their `packages.yaml`, have
`autoreconf` and GNU libtoolize installed as `glibtoolize`, but not
`libtoolize`. While Spack installations of `libtool` built from source
would install `glibtoolize` and symlink `libtoolize` to `glibtoolize`,
an external installation of GNU libtoolize as `glibtoolize` will not
have such a symlink, and thus the call `m.libtoolize()` will throw an
error because `libtoolize` does not exist at the path referenced by
`m.libtoolize()` (i.e.,
`self.spec['libtool'].prefix.bin.join('libtoolize')`).
However, on these same systems, `autoreconf` runs correctly, and calls
`glibtoolize` instead of `libtoolize`, when appropriate. Thus,
removing the call to `libtoolize` should resolve the error mentioned
above.
The redundant call to `aclocal` is also removed in this commit because
the maintainers of GNU Automake state that "`aclocal` is expected to
disappear" and suggest that downstream users never call `aclocal`
directly -- rather, they suggest calling `autoreconf` instead.
Uses code from CMake to detect implicit link paths from compilers
System paths are filtered out of implicit link paths
Implicit link paths added to compiler config and object under `implicit_rpaths`
Implicit link paths added as rpaths to compile line through env/cc wrapper
Authored by: "Ben Boeckel <ben.boeckel@kitware.com>"
Co-authored by: "Peter Scheibel <scheibel1@llnl.gov>"
Co-authored by: "Gregory Becker <becker33@llnl.gov>"
c9e214f updated template creation by passing **kwargs to package
template classes but the template classes were not updated to accept
them; this adds **kwargs to package template initializers where they
are needed.
Having a non-directory invisible file causes `spack find` to die. This
fixes the logic to ignore invalid module names but only warn if they're
visible.
```
NotADirectoryError: [Errno 20] Not a directory: '/spack/var/spack/repos/builtin/packages/.DS_Store/package.py'
```
This adds a special package type to Spack which is used to aggregate
a set of packages that a user might commonly install together; it
does not include any source code itself and does not require a
download URL like other Spack packages. It may include an 'install'
method to generate scripts, and Spack will run post-install hooks
(including module generation).
* Add new BundlePackage type
* Update the Xsdk package to be a BundlePackage and remove the
'install' method (previously it had a noop install method)
* "spack create --template" now takes "bundle" as an option
* Rename cmd_create_repo fixture to "mock_test_repo" and relocate it
to shared pytest fixtures
* Add unit tests for BundlePackage behavior
This allows "spack spec --yaml" to generate a spec YAML file that can
be used with "spack install -f". Before, this would fail in cases
where the spec had build dependencies.
* All fetch strategies now accept the Boolean version keyword option `no_cache` in order to allow per-version control of cache-ability.
* New git-specific version keyword option `get_full_repo` (Boolean). When true, disables the default `--depth 1` and `--single-branch` optimizations that are applied if supported by the git version and (in the former case) transport protocol.
* The try / catch block attempting `--depth 1` and retrying on failure has been removed in favor of more accurately ascertaining when the `--depth` option should work based on git version and protocol choice. Any failure is now treated as a real problem, and the clone is only attempted once.
* Test improvements:
* `mock_git_repository.checks[type_of_test].args['git']` is now specified as the URL (with leading `file://`) in order to avoid complaints when using `--depth`.
* New type_of_test `tag-branch`.
* mock_git_repository now provides `git_exe`.
* Improved the action of the `git_version` fixture, which was previously hard-wired.
* New tests of `--single-branch` and `--depth 1` behavior.
* Add documentation of new options to the packaging guide.
- mkdirp now takes arguments to allow it to properly set permissions on created directories.
- Two arguments (group and mode) set permissions for the leaf directory.
- Intermediate directories can inherit permissions from either the topmost existing directory (the parent) or the leaf (see the usage sketch below).
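A hedged usage sketch; the keyword names follow the description above but may differ from the final API:
```
from llnl.util.filesystem import mkdirp

# group/mode apply to the leaf directory; default_perms selects whether
# intermediate directories inherit from the parent or from the leaf.
mkdirp('/opt/spack/linux/gcc-9/pkg', group='spack', mode=0o750,
       default_perms='parents')
```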
On machines where $TMP is owned by a gid with no name, this avoids the
following error when the default spack stage does not exist:
(spackbook):spack$ spack clean
==> Removing all temporary build stages
==> Error: 'getgrgid(): gid not found: 57095'
Spack needs to deal with gids directly unless users pass them in.
Compiler caching was using the `id()` function to refer to configuration dictionary objects. If these objects are garbage-collected, this can produce incorrect results (false positive cache hits). This change replaces `id()` with an object that keeps a reference to the config dictionary so that it is not garbage-collected.
Fixes#11163
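Why `id()`-keyed caching is fragile, in a short illustration:
```
# id() values are unique only among *live* objects, so a collected
# dict's id can be reused by a newly allocated one.
cache = {}
cfg = {'spec': 'gcc@9.2.0'}
cache[id(cfg)] = 'cached compiler'
del cfg                      # cfg may now be garbage-collected
other = {'spec': 'clang@9'}  # may land at the same address
# cache.get(id(other)) could wrongly return 'cached compiler'
```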
The goal of this work is to simplify stage directory structures by eliminating use of symbolic links. This means, among other things, that` $spack/var/spack/stage` will no longer be the core staging directory. Instead, the first accessible `config:build_stage` path will be used.
Spack will no longer automatically append `spack-stage` (or the like) to configured build stage directories so the onus of distinguishing the directory from other work -- so the other work is not automatically removed with a `spack clean` operation -- falls on the user.
Fixes#12062
406c791 addressed "spack module load" for upstream modules but not
the "spack module loads" command. This applies the same fixes from
406c791 to "spack module loads".
It's no longer possible to set compiler flags as an entry under
"paths" in compilers.yaml; instead the user must list these under the
"flags" section. This updates the docs accordingly.
Spack stacks drop invalid dependencies applied to packages by a
spec_list matrix operation
Without this fix, Spack would raise an error if orthogonal dependency
constraints and non-dependency constraints were applied to the same
package by a matrix and the dependency constraint was invalid for
that package. This is an error, fixed by this PR.
An example failing configuration:
spack:
definitions:
- packages: [libelf, hdf5+mpi]
- compilers: ['%gcc']
- mpis: [^openmpi]
specs:
- matrix:
- $packages
- $compilers
- $mpis
5f74f22 enabled installing compilers for dependencies but not for the root package (and in particular not for DAGs which consist of one package)
This enables bootstrapping compilers for both types of DAGs.
Using "compilers" with the "s" is an invalid config section and throws an error.
Traceback (most recent call last):
File "spack/bin/spack", line 48, in <module>
sys.exit(spack.main.main())
File "/home/omsai/src/libkmap/spack/lib/spack/spack/main.py", line 633, in main
env = ev.find_environment(args)
File "/home/omsai/src/libkmap/spack/lib/spack/spack/environment.py", line 263, in find_environment
return Environment(env)
File "/home/omsai/src/libkmap/spack/lib/spack/spack/environment.py", line 534, in __init__
self._read_manifest(f)
File "/home/omsai/src/libkmap/spack/lib/spack/spack/environment.py", line 561, in _read_manifest
self.yaml = _read_yaml(f)
File "/home/omsai/src/libkmap/spack/lib/spack/spack/environment.py", line 402, in _read_yaml
validate(data, filename)
File "/home/omsai/src/libkmap/spack/lib/spack/spack/environment.py", line 395, in validate
e, data, filename, e.instance.lc.line + 1)
spack.config.ConfigFormatError: /home/omsai/src/libkmap/spack.yaml:15: Additional properties are not allowed ('compilers' was unexpected)
Environment.concretize returns newly-concretized specs rather than
printing them; as a result, the _display argument is removed from
Environment.concretize (originally only used to avoid printing specs
during unit testing). Command logic which invokes
Environment.concretize prints these explicitly.
This updates the Spack QT package to enable building qt version 4 on
MacOS.
This includes the following changes to the qt package:
* add version 4.8.7
* add option to build with or without shared libs
* add options to disable tools, ssl, sql, and freetype support
* add qt4-tools patch when building qt@4+tools
* add option to build as a framework (only available on MacOS)
* replace qt4-el-capitan patch with qt4-mac patch (which includes the
edits from qt4-el-capitan)
* apply qt4-pcre-include-conflict.patch only for version 4.8.6
(rather than all 4.x versions)
* apply qt4-gcc-and-webkit.patch for 4.x versions before 4.8.7 and
create a separate qt4-gcc-and-webkit-487.patch for version 4.8.7
* update patch function for qt@4 on MacOS to update configure
variables relevant to Spack (e.g. PREFIX)
* add option to build freetype with Spack, as a vendored dependency
of QT, or not at all (default is to build with Spack)
This includes the following edits outside of the qt package:
* Update MacOS version utility function to return all parts of the
Mac version (rather than just the first two)
* gettext package: implement "libs"
* python package: add gettext as a dependency
* Raise an exception and exit with a meaningful message when binary path substitution fails.
* Skip binary text replacement with padding and issue a warning when the new install path is longer than the old install path.
- We don't currently make enough use of the maintainers field on
packages, though we could use it to assign reviews.
- add a command that allows maintainers to be queried
- can ask who is maintaining a package or packages
- can ask what packages users are maintaining
- can list all maintained or unmaintained packages
- add tests for the command
* Added a unit test reproducing the failure in #12085
* Fixed name clash in the 'from_environment_diff' function
The bug reported in #12085 stemmed from a name clash among variables,
introduced during the refactor in #10753 and not caught by unit tests
and reviews.
- Setting specs from lockfiles was not correctly stringifying concretized
user specs.
- Fix `_set_user_specs_from_lockfile`
- Add some validation code to `SpecList` constructor
Spack has evolved to have three types of hash functions, and it's
becoming hard to tell when each one is called. While we aren't yet ready
to get rid of them, we can refactor them so that the code is clearer and
easier to track.
- Add a `hash_types` module with concise descriptors for hashes.
- Consolidate hashing logic in a private `Spec._spec_hash()` function.
- `dag_hash()`, `build_hash()`, and `full_hash()` all call `_spec_hash()`
- `to_node_dict()`, `to_dict()`, `to_yaml()` and `to_json()` now take a
`hash` parameter consistent with the one that `_spec_hash()` requires.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
- ensure that Spec._build_hash attr is defined
- add logic to compute _build_hash when it is not already computed (e.g. for specs prior to this PR)
- add test to ensure that different instance of a build dep are preserved
- test conversion of old env lockfile format to new format
- tests: avoid view creation, DAG display in tests using MockPackage
- add regression test for more-general bug also fixed by this PR
- update lockfile version since the way we are maintaining hashes has changed
- write out backup for version-1 lockfiles and test that
The database and mutable_database fixtures were installing and uninstalling the same specs multiple times to ensure the database for tests has the correct state.
This commit optimizes the procedure by caching the state in an external directory, and copying it in instead of going through the installation or uninstallation again.
The database fixture is meant not to be modified by tests. This commit enforces this invariant by making the database read-only before starting the test.
* Added missing db markers to tests
* Added test for uninstall_by_spec
* `database` fixture now returns a read-only database
* Tests that modify the DB now use `mutable_database` fixture
Summary:
- Allow multiple definitions of compiler in compilers.yaml (use first instance)
- Still print debug messages when there are duplicates, to assist users in finding this issue.
Merging configs from different scopes can result in multiple compiler being present in the same configuration list. Instead of raising when there are duplicates, take the one with highest precedence.
Print a debug message instead of raising, so that we can still diagnose this. We don't have a good way of warning the user about inconsistent configuration *in the same file* -- we'd need to dig into YAML file/line info for that.
- [x] Add shell tests to ensure that `spack env activate`, `spack env
deactivate`, and `despacktivate` continue to work.
- [x] Also ensure that activate and deactivate both work with `set -u`
* extends mkdirs with permissions for intermediate folders
Does not use os.makedirs mode parameter because its behavior is changed
with Python 3.7 (it ignores it for intermediate dirs), and moreover it
was not possible to set different modes for newly-created folders
and leaf folder.
reference:
- https://bugs.python.org/issue19930
- https://docs.python.org/3.7/library/os.html#os.makedirs
* comment mkdirp step easing code understanding
* revert mkdir to default for package metapath
since metapath is nested in package folder, there is no need
to specify permissions for intermediate folders because the prefix
already exists.
* comment create_install_directory package modes
Bug relates to the interplay between:
1. random dict orders in python 3.5
2. bugfix in initial implementation of stacks for `_concretize_dependencies`
when `self._dependencies` is empty
3. bug in coconcretization algorithm computation of split specs
Result was transient hang in coconcretization.
Fixed (3), the bug in the coconcretization algorithm, to resolve the hang.
- remove redundant code in Environment.__init__
- use socket.gethostname() instead of which('hostname')
- refactor updating SpecList references
- refactor 'specs' literals to a single variable for default list name
- use six.string_types for python 2/3 compatibility
* from_sourcing_file: fixed a bug + added a few ignored variables
closes#7536
Credits for this change goes to mgsternberg (original author of #7536)
The new variables being ignored are specific to Modules v4.
* Use Spack Executable in 'EnvironmentModifications.from_sourcing_file'
Using this class avoids duplicating lower level logic to decode
stdout and handle non-zero return codes
* Extracted a function that returns the environment after sourcing files
The logic in `EnvironmentModifications.from_sourcing_file` has been
simplified by extracting a function that returns a dictionary with the
environment one would have after sourcing the files passed as argument.
* Further refactoring of EnvironmentModifications.from_sourcing_file
Extracted a function that sanitizes a dictionary removing keys that are
blacklisted, but keeping those that are whitelisted. Blacklisting and
whitelisting can be done on literals or regex.
Extracted a new factory that creates an instance of
EnvironmentModifications from a diff of two environments.
* Added unit tests
* PS1 is blacklisted + more readable names for some variables
All documentation mentions that `build_jobs` is limited by the number of
cores available in the system. This is also enforced when setting it via
`--jobs`. However, when setting it via `config.yaml`, it can exceed the
number of cores available, making builds run out of memory.
This PR adds the ability to specify the auto-dispatch targets that can
be used by the Intel compilers. The `-ax` flag will be written to the
respective compiler configuration files. This ability is handy for
building optimized binaries for various architectures. This PR
does not set any optimization flags, however.
Fixes#3690
Fixes#5637
Uninstalling dependents of a spec was relying on a traversal of the
parents done by inspecting spec._dependents. This is in turn a
DependencyMap that maps a package name to a single DependencySpec object
(an edge in the DAG) and cannot thus model the case where a spec has
multiple configurations of the same parent package installed (for
example if different versions of the same Python library depend on
the same Python installation).
This commit works around this issue by constructing the list of specs to
be uninstalled in an alternative way, and adds tests to verify the
behavior. The core issue with DependencyMap is not resolved here.
The default library search for a package checks the lib/ and lib64/
directories for libraries before the root prefix, in order to save
time when searching for libraries provided by externals (which e.g.
may have '/usr/' as their root).
This moves that logic into the "find_libraries" utility method so
packages implementing their own custom library search logic can
benefit from it.
This also updates packages which appear to be replicating this logic
exactly, replacing it with a single call to "find_libraries".
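For illustration, a package that needs custom search logic can now lean on the shared helper along these lines (hypothetical package and library names; the exact keyword arguments of `find_libraries` may differ by Spack version):
```python
from llnl.util.filesystem import find_libraries
from spack import *


class Foo(Package):  # hypothetical package
    @property
    def libs(self):
        # find_libraries checks lib/ and lib64/ under the root first,
        # then falls back to searching the rest of the prefix.
        return find_libraries('libfoo', root=self.prefix, recursive=True)
```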
Fixes #11782
Spack was not properly resolving relative paths to absolute paths
when a relative path was passed to "spack compiler add [PATH]".
Now, if provided a relative path, the absolute path is written to
compilers.yaml rather than the relative path.
* Add template creation test
* Added --skip-editor option to "spack create": normally
"spack create" opens an editor for the user after generating a
package file; when the --skip-editor option is used, "spack create"
only generates the package file and does not open an editor
* Added --skip-editor option to bash completion
- Namespaces were shown without dots after the new format strings were
added.
- Add a test for `spack find` to ensure that find -N shows the right
output.
Fixes #11781
* Rename build log to spack-build-log.txt
* Rename environment variables file to spack-build-env.txt
* The name of the log and env files is now the same during the build
and after the build completes
* Update packages which referred to the build log/env files
* For packages installed before this commit using older names for the
build and env files, search for the older names
- Fix a bug introduced by removing parse_anonymous_spec()
- Conflicts' `when=` specs are now *actually* anonymous, and the name of the
package is implicit, so we need to remember to add it back to error
messages.
- `parse_anonymous_spec()` is a vestige of the days when Spack didn't
support nameless specs. We don't need it anymore because now we can
write Spec() for a spec that will match anything, and satisfies()
semantics work properly for anonymous specs.
- Delete `parse_anonymous_spec()` and replace its uses with simple calls
to the Spec() constructor.
- make the handling of when='...' specs in directives more consistent.
- clean up Spec.__contains__()
- refactor directives and tests slightly to accommodate the change.
- CNL OS previously used the *Cray PE* version to determine the OS
version. Cray does not synchronize PE and CLE releases; you can run
CLE7 with PrgEnv 6 (and NERSC currently does).
- Fix Spack's OS detection by using the cle-release file to detect the OS
version. This file is updated with every CLE OS release.
- Add some tests for our parsing logic
Add an example of a 'modules:' entry for an external package in
packages.yaml. The 'External Packages' section of 'Build
Customization' mentions 'paths:' and 'modules:' and gives an
example of paths, but not modules.
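A sketch of such an entry (package, spec, and module names are illustrative):
```yaml
packages:
  openmpi:
    buildable: False
    modules:
      openmpi@3.1.3: openmpi/3.1.3
```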
Fixes #11816
Allow packages to refer to non-expanded downloads (e.g. a single
script) using Stage.archive_file. This addresses a regression from
#11688 and adds a unit test for it.
This change reverts to the previous behavior of only looking for pgcc
and friends, not pgcc-llvm and friends.
The llvm variant doesn't support all the same features as the
traditional variant of the pgi code generator; this change avoids
treating the llvm variant as a default pgi compiler.
This retains the changes in #10704 which accept the "LLVM" suffix of
the version string of the PGI compiler, which allows users to
explicitly add the llvm-pgi compiler if desired.
For resources, it is desirable to use the expanded archive name of
the resource as the name of the directory when adding it to the root
staging area.
#11528 established 'spack-src' as the universal directory where
source files are placed, which also affected the behavior of
resources managed with Stages.
This adds a new property ('srcdir') to Stage to remember the name of
the expanded source directory, and uses this as the default name when
placing a resource directory in the root staging area.
This also:
* Ensures that downloaded sources are archived using the expanded
archive name (otherwise Spack will not be able to determine the
original directory name when using a cached archive).
* Updates working_dir context manager to guarantee restoration of
original working directory when an exception occurs
* Adds a "temp_cwd" context manager which creates a temporary
directory and sets it as the working directory
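In outline, the two context managers behave like this (a simplified sketch, not the exact Spack code):
```python
import contextlib
import os
import shutil
import tempfile


@contextlib.contextmanager
def working_dir(dirname):
    # The finally block guarantees the original working directory is
    # restored even if the body raises an exception.
    orig_dir = os.getcwd()
    os.chdir(dirname)
    try:
        yield
    finally:
        os.chdir(orig_dir)


@contextlib.contextmanager
def temp_cwd():
    # Create a temporary directory, use it as the working directory,
    # and remove it afterwards.
    tmp_dir = tempfile.mkdtemp()
    try:
        with working_dir(tmp_dir):
            yield tmp_dir
    finally:
        shutil.rmtree(tmp_dir)
```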
The regression test for #11678 fails on at least some Mac OS systems
because they have a /usr/bin/gcc that is secretly clang.
This PR replaces the dependency on a system gcc executable with a
test-generated script that generates the expected output for the
compiler logic.
Some tests introduced in #11528 temporarily set the user's `config:build_stage`, which affected (or created) a config.yaml file in the user's `$HOME/.spack` directory that could leave entries behind if the tests fail.
This change ensures only temporary configuration files are used/affected by these tests.
The "spack location" command was previously untested. This also adds
a check to ensure that composite Stages can report whether they were
expanded (this property was previously only recorded in Stage but not
in CompositeStage).
DIYStage, used to treat a user-managed directory as a staging area,
should always be considered expanded (i.e. the source has been
decompressed if it was stored in an archive).
This also:
* Adds checks to ensure that the path used to instantiate a
DIYStage refers to an existing directory.
* Adds tests to check the behavior of DIYStage (including behavior
added here, but it was generally untested before).
#11528 updated Stage to always store a Package's source in a fixed
directory accessible via `Stage.source_path`. This left behind a
number of packages which were expecting to access the source code
via `Stage.path`. This updates those packages to use
`Stage.source_path` instead.
This also updates the name of the fixed directory: The original name
of the fixed directory was "src", so if an expanded archive created a
"src" directory, then users inspecting the directory structure could
see paths like "src/src" (which wasn't wrong but could be confusing).
Therefore this also updates the name of the fixed directory to
"spack-src".
Fixes #11678
`spack compiler find` was not searching `PATH` when provided with no
arguments. ea7910a updated the API for the search function and the
command logic did not update how it called this function. This also
adds a test to ensure that `spack compiler find` will collect
compilers from `PATH`.
"spack module tcl find -r <spec>" (and equivalents for other module
systems) was failing when a dependency was installed in an upstream
Spack instance. This updates the module index to handle locating
module files for upstream Spack installations (encapsulating the
logic in a new class called UpstreamModuleIndex); the updated index
handles the case where a Spack installation has multiple upstream
instances.
Note that if a module is not available locally but we are using the
local package, then we shouldn't use a module (i.e. if the package is
also installed upstream, and there is a module file for it, Spack
should not use that module). Likewise, if we are instance X using
upstreams Y and Z like X->Y->Z, and if we are using a package from
instance Y, then we should only use a module from instance Y. This
commit includes tests to check that this is handled properly.
Spack currently tries to unify everything in the DAG, but this is too strict for build dependencies, where it is fine to build a dependency with a tool that conflicts with the version of that tool used in a dependent's build.
To enable a workaround for conflicts among build dependencies, so that users can install in multiple steps to avoid these conflicts, make the following changes:
* Don't apply package dependency constraints for build deps of installed packages
* Avoid applying constraints for installed packages vs. concrete packages
* Mark all dependencies of installed packages as visited in normalization method
* don't remove dependency links for concrete specs in flat_dependencies
Also add tests:
* Update test to ensure that link dependencies of installed packages have constraints applied
* Add test to check for proper handling of transitive dependencies (which is currently not the case)
- spack.compilers.find_compilers now uses a multiprocessing.pool.ThreadPool to execute
system commands for the detection of compiler versions.
- A few memoized functions have been introduced to avoid poking the filesystem multiple
times for the same results.
- Performance is much improved, and Spack no longer fork-bombs the system when doing a `compiler find`
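The gist of the parallel detection (simplified; the probe below is illustrative, not Spack's exact code):
```python
import multiprocessing.pool
import subprocess


def detect_version(exe):
    # One slow, I/O-bound probe: run the compiler and keep the first
    # line of its version output.
    out = subprocess.check_output([exe, '--version'])
    return exe, out.decode('utf-8').splitlines()[0]


def detect_versions(executables, jobs=8):
    # Run the probes concurrently in a thread pool instead of forking
    # one process at a time for each candidate executable.
    pool = multiprocessing.pool.ThreadPool(jobs)
    try:
        return dict(pool.map(detect_version, executables))
    finally:
        pool.close()
        pool.join()
```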
- We use `spack list --format=html` now, as it is much faster and doesn't
make the docs build take forever.
- Remove `spack list --format=rst` as it is no longer used.
- `stage.source_path` was previously overloaded; it returned `None` if it
didn't exist and this was used by client code
- we want to be able to know the `source_path` before it's created
- make stage.source_path available before it exists.
- use a well-known stage source path name, `$stage_path/src` that is
available when `Stage` is instantiated but does not exist until it's
"expanded"
- client code can now use the variable before the stage is created.
- client code can test whether the tarball is expanded by using the new
`stage.expanded` property instead of testing whether `source_path` is
`None`
- add tests for the new source_path semantics
- make tty.msg, tty.info, etc. print the exception type and stringified
message if the message argument is an exception.
- simplify parts of the code that call tty.debug(str(e))
- add extra tty.debug statements in places where exceptions were
previously ignored
- `spack graph --static` (and `spack.graph.dot_graph`) now do the "right
thing" and print the possible dependency graph of provided packages.
- `spack graph --static` no longer concretizes specs, as it only relies
on class level metadata
- Previously the behavior was not consistent -- `spack graph --static`
would graph possible dependencies of concrete specs, but would only
include some of them. The new code properly pursues all possible
dependencies, and allows traversing by different dependency types.
- `spack dependencies` can now take a --deptype argument to only traverse
particular deptypes
- add a new "common" argument for deptype in spack.cmd.common.arguments
- Database.installed_relatives() can now also take a deptype argument
- this is used by `spack dependencies --installed`
- `PackageBase.possible_dependencies` now:
- accepts a deptype param that controls dependency types traversed
- returns a dict mapping possible depnames to their immediate possible
dependencies (this lets you build a graph easily; see the sketch below)
- Add tests for PackageBase
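A rough model of that traversal (illustrative only; `get_class` stands in for the repository lookup Spack actually performs):
```python
def possible_dependencies(pkg_class, get_class, visited=None):
    # Map each reachable package name to the names of its immediate
    # possible dependencies, using only class-level metadata.
    if visited is None:
        visited = {}
    if pkg_class.name in visited:
        return visited
    immediate = set(pkg_class.dependencies)
    visited[pkg_class.name] = immediate
    for dep_name in immediate:
        possible_dependencies(get_class(dep_name), get_class, visited)
    return visited
```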
- The 'name' attribute for packages was being set in DirectiveMeta, which
wasn't consistent with other class properties (like fullname, etc.)
- Move it to be a class property of `PackageMeta`, and add the
corresponding property method wrapper on `PackageBase`
* add c99_flag, c11_flag to compiler class
* implement c99_flag, c11_flag for gcc
* implement c99_flag, c11_flag for arm
* implement c99_flag for cce
* implement c99_flag, c11_flag for clang
* implement c99_flag, c11_flag for intel
* implement c99_flag, c11_flag for xl
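In each compiler class this amounts to a property along these lines (a sketch; the version bound and exception arguments are illustrative):
```python
from spack.compiler import Compiler, UnsupportedCompilerFlag
from spack.version import ver


class Gcc(Compiler):
    @property
    def c99_flag(self):
        # Raise a descriptive error for compilers too old to know C99.
        if self.version < ver('4.5'):
            raise UnsupportedCompilerFlag(
                self, 'the C99 standard', 'c99_flag', '< 4.5')
        return '-std=c99'
```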
Previously, module files were not set with the same permissions as the package installation. For world-readable packages, this would not cause a problem. For group readable packages, it does:
```
packages:
mypackage:
permissions:
group: mygroup
read: group
write: group
```
In this case, the modulefile is unreadable by members of the group other than the one who installed it. Add logic to the modulefile writers to set the permissions based on the configuration in `packages.yaml`.
* Build cache: relocate path to spack/bin/sbang in text files.
* Found in testing.
* update packaging test
* Make the sbang replacement also cover #!/bin/bash. Add an additional spack prefix replacement to fix stage directory references.
* flake8
* Use buildinfo.get() so old buildcaches without buildinfo['spackprefix'] can be read.
* config:build_jobs now controls the number of parallel jobs to spawn during
builds, but cannot ever exceed the number of cores on the machine.
* The default is set to 16 or the number of available cores, whichever
is lower.
* Updated docs to reflect the changes done to limit parallel builds
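For example, in `config.yaml`:
```yaml
config:
  build_jobs: 8   # capped at the number of available cores
```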
- `gettext_uuid=True` makes every commit update every .pot file in spack/localized-docs,
while speeding up the internationalized doc build only slightly.
- Optimize for less repository churn, and use `python-levenshtein` to accelerate
the build instead.
- make all Spack paths relative to a `_spack_root` symlink, so that we
can easily relocate the docs build *outside* lib/spack/docs
- set some useful defaults for gettext translation variables in conf.py
- update `relativeinclude` and other references to the spack root in the
RST files to use _spack_root
- Add a `--update FILE` option to `spack list`
- Output is written to the file only if any package is newer than the file
- Simplify the code in docs/conf.py using this new option
The Spack documentation currently hard-codes some functionality in
`conf.py`, which makes the doc build less "pluggable" for things like
localized doc builds.
In particular, we unconditionally generate an index of commands and a
package list as part of the docs, but those should really only be done if
things are not up to date.
This commit does the following:
- Add `--header` option to `spack commands` so that it can do the work of
prepending text to its output.
- Add `--update FILE` option to `spack commands` that makes it generate a
new command index *only* if FILE is out of date w.r.t. commands in the
Spack source.
- Simplify code in `conf.py` to use these options and only update the
command index when needed.
This PR implements several refactors requested in #11373, specifically:
- Config scopes are used to handle builtin defaults, command line overrides
and package overrides (`parallel=False`)
- `Package.make_jobs` attribute has been removed; `make_jobs` remains
as a module-scope variable in the build environment.
- The use of the argument `-j` has been rationalized across commands
- move '-j'/'--jobs' argument into `spack.cmd.common.arguments`
- Add unit tests to check that setting parallel jobs works as expected
- add new test to ensure that build job setting is isolated to each build
- Fix packages that used `Package.make_jobs` (i.e. `bazel`)
* Add Fujitsu compiler to Spack.
* Fixes for flake8
* Changes location of FCC to a subdirectory called case-insensitive
* Add compiler tests for Fujitsu compiler
* Modify the logic of taking compiler version for new version of Fujitsu compiler
The regex used for finding the Cray OS version from the PrgEnv-cray
module was not exact and was at times pulling the version from other
PrgEnv modules. This updates the regular expression to be more exact.
Adds executable=/bin/bash into Popen. We discovered this bug while
working in a csh/tcsh environment. By executing with /bin/bash we ensure
that the module command works.
#8612 added command extensions to Spack: a command implemented in a
separate directory. This improves the implementation by allowing
the command to import additional utility code stored within the
established directory structure for commands.
This also:
* Adds tests for command extensions
* Documents command extensions (including the expected directory
layout)
- `svn info` prints different results depending on the system locale
- in particular, Japanese output doesn't contain "Revision:"
- Change Spack code to use XML output instead of using the human output
Add fixes to support multiple installs and dependents using a subset
of IntelPackage functionality.
* Update IntelPackage to only return scalapack libraries if the root
spec depends on MPI: scalapack requires MPI to be mentioned as a
dependency in the DAG. Package builds using intel-mkl for its
blas/lapack implementations but not for scalapack were failing to
build.
Ideally it would be possible to ask if any of the packages in the
DAG are actually requesting the scalapack functionality provided by
the IntelPackage and only return scalapack libs in that case, but
that is not easily done at this time.
Fixes #11314 Fixes #11289
* set HOME when the intel silent installer is run. This prevents the
installer from using the ~/intel directory (which can cause
conflicts for multiple installs of the same IntelPackage)
Fixes #9713
Use new `module` function instead of `get_module_cmd`
Previously, Spack relied on either examining the bash `module()` function or using the `which` command to find the underlying executable for modules. More complicated module systems do not allow for the sort of simple analysis we were doing (see #6451).
Spack now uses the `module` function directly and copies environment changes from the resulting subprocess back into Spack. This should provide a future-proof implementation for changes to the logic underlying the module system on various HPC systems.
Add two functions to the EnvironmentModifications object to help
users sanitize environment variables in their package definitions:
* deprioritize_system_paths: this keeps system paths in the
environment variable but moves them to the end.
* prune_duplicate_paths: remove any duplicate paths from the
variable
This includes testing for the new functions as well as for
(previously-untested) old convenience functions for environment
variable manipulation.
This also adds special handling for bash functions so they
will be defined when the exported environment file is sourced.
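Conceptually the two helpers act on a path list like this (a list-level sketch; the real methods record modifications to a named environment variable):
```python
# Illustrative system directories; Spack keeps its own list.
SYSTEM_DIRS = ['/bin', '/usr/bin', '/usr/local/bin']


def prune_duplicate_paths(paths):
    # Keep the first occurrence of each path, preserving order.
    seen = set()
    return [p for p in paths if not (p in seen or seen.add(p))]


def deprioritize_system_paths(paths):
    # Keep system paths in the list, but move them to the end.
    return ([p for p in paths if p not in SYSTEM_DIRS] +
            [p for p in paths if p in SYSTEM_DIRS])
```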
Fixes #11335
Update the Spack compiler wrappers to add the headerpad_max_install_names
linker flag on macOS. This allows the install_name_tool to rewrite
the RPATH entry of the binary to be longer if needed. This is
primarily useful for creating and distributing binary caches of
packages (i.e. using the "spack buildcache" command); binary caches
created on macOS before this commit may not successfully relocate
(if the target root path is larger).
* Added a function that concretizes specs together
* Specs concretized together are copied instead of being referenced
This makes the specs different objects and removes any reference to the
fake root package that is needed currently for concretization.
* Factored creating a repository for concretization into its own function
* Added a test on overlapping dependencies
* extend Version class so that 2.0 > 1.develop > 1.1
* add concretization tests, with preferences and preferred version.
* add master, head, trunk as develop-like versions, develop > master > head > trunk
* update documentation on version comparison
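The resulting ordering can be checked directly:
```python
from spack.version import Version

assert Version('2.0') > Version('1.develop') > Version('1.1')
assert (Version('develop') > Version('master')
        > Version('head') > Version('trunk'))
```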
- `spack edit` previously used `spack.util.executable` `Executable` objects,
and didn't `exec` the editor like you'd expect it to
- This meant that Spack was still running while your editor was, and
stdout/stdin were being set up in weird ways
- e.g. on macOS, if you call `spack edit` with `EDITOR` set to the
builtin `emacs` command, then type `Ctrl-g`, the whole thing dies with
a `==> Error: Keyboard interrupt`
- Fix all this by changing spack.util.editor to use `os.execv` instead of
Spack's `Executable` object
Also add constructor to NoLibrariesError which can either take an
error message (like other SpackErrors) or a name and prefix (in
which case the error message is constructed).
PR #10758 made a slight change to find_versions_of_archive() which included
archive_url in the search process. While this fixed `spack create` and
`spack checksum` missing command-line arguments, it caused `spack
install` to prefer those URLs over those it found in the scrape process.
As a result, the package url was treated as a list_url causing all R
packages to stop fetching once the package was updated on CRAN.
This patch is more selective about including the archive_url in the
remote versions, explicitly overriding it with matching versions found
by the scraper.
f242f5f8 changed the format strings but maintained backwards
compatibility in all cases except one: The list of valid tokens for
the module naming schemes was not updated properly to contain both
the new and old styles for compilers and package names.
This PR re-adds the old tokens into the list of valid tokens.
#11152 added documentation for #8772 but some details were based on
an earlier implementation that had changed by the time #8772 was
merged. In particular, #11152 mentioned that upstream Spack instances
were configured in config.yaml, when in fact they should be placed in
a separate upstreams.yaml config file; this PR updates the
documentation accordingly.
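A minimal `upstreams.yaml` might look like this (name and path are illustrative):
```yaml
upstreams:
  shared-spack:                          # each entry has a name
    install_tree: /shared/spack/opt/spack
```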
fixes #11159
The 'namespace' argument to both Repo and RepoPath were used to set the
"super namespace". Currently it seems to be vestigial as the only
"super namespace" allowed for packages is 'spack.pkg' since 39c9bbf
* Make a separate CDash report for each package installed
Previously, we generated a single CDash report ("build") for the complete results
of running a `spack install` command. Now we create a separate CDash build for
each package that was installed.
This commit also changes some of the tests related to CDash reporting.
Now only one of the tests exercises the code path of uploading to a
(nonexistent) CDash server. The rest of the related tests write their reports
to disk without trying to upload them.
* Don't report errors to CDash for successful packages
Convert errors detected by our log scraper into warnings when the package
being installed reports that it was successful.
* Report a maximum of 50 errors/warnings to CDash
This is in line with what CTest does. The idea is that if you have more than
50 errors/warnings you probably aren't going to read through them all anyway.
This change reduces the amount of data that we need to transfer and store.
* Update spec format to simpler syntax, maintain backwards compatibility
* Switch to new spec.format method throughout internals
* update package files for new format strings
* documentation and minor code cleanup. removed nonsensical variant sigils
Fixes #11070 #11010
Spack attempts to intercede on behalf of all compiler invocations for
a build. This involves adding its wrappers to PATH. Cray systems
include a "ftn" executable and Spack was only redirecting this call
when the Spec was built with cce. This updates the compiler wrappers
to add "ftn" in all cases.
The default (implied) behavior for all environments, as of ea1de6b,
is that an environment will maintain a view in a location of its
choosing. ea1de6b explicitly recorded all three possible states of
maintaining a view:
1. Maintain a view, and let the environment decide where to put it
(default)
2. Maintain a view, and let the user decide
3. Don't maintain a view
This commit updates the config writer so that for case [1], nothing
will be written to the config.yaml. This will not change any existing
behavior, it just serves to keep the config more compact.
Compilers are treated separately from other dependencies in Spack.
#10761 added the option to automatically install compilers when a
package specifies using a compiler that is not available in Spack.
However, this did not work correctly for dependency packages (it
would only build a compiler for the root of an install DAG). This
commit enables the building of compilers for dependency packages.
Environments are now, by default, created with views. When activated, if an environment includes a view, this view will be added to `PATH`, `CPATH`, and other shell variables to expose the Spack environment in the user's shell.
Example:
```
spack env create e1 #by default this will maintain a view in the directory Spack maintains for the env
spack env create e1 --with-view=/abs/path/to/anywhere
spack env create e1 --without-view
```
The `spack.yaml` manifest file now looks like this:
```
spack:
specs:
- python
view: true #or false, or a string
```
These commands can be used to control the view configuration for the active environment, without hand-editing the `spack.yaml` file:
```
spack env view enable
spack env view enable /abs/path/to/anywhere
spack env view disable
```
Views are automatically updated when specs are installed to an environment. A view only maintains one copy of any package. An environment may refer to a package multiple times, in particular if it appears as a dependency. This PR establishes a prioritization for which environment specs are added to views: a spec has higher priority if it was concretized first. This does not necessarily exactly match the order in which specs were added, for example, given `X->Z` and `Y->Z'`:
```
spack env activate e1
spack add X
spack install Y # immediately concretizes and installs Y and Z'
spack install # concretizes X and Z
```
In this case `Z'` will be favored over `Z`.
Specs in the environment must be concrete and installed to be added to the view, so there is another minor ordering effect: by default the view maintained for the environment ignores file conflicts between packages. If packages are not installed in order, and there are file conflicts, then the version chosen depends on the order.
Both ordering issues are avoided if `spack install`/`spack add` and `spack install <spec>` are not mixed.
When providing a track, the cdash reporter will format the stamp
itself, as it has always done, and register the build during the
package installation process. When providing a stamp, it should
first be formatted as cdash expects, and then cdash will be sure
to report results to the same build id which was registered manually
elsewhere.
* Update Spec.prefix to have special case for 'None' in database path; regression test
* Update in database reader rather than spec
* Change assertion to conditional + raise
* Added test for concrete check in Spec.prefix
The module_parsing test checks whether the module function is available
by looking for the string 'not found'. If the user has set a different
locale, the test can assume that the module function is available when
it actually is not.
* Split get_compiler_version into two functions:
get_compiler_version_output runs the compiler with the relevant
option to print the version; extract_version_from_output determines
the version by examining this output. This makes it easier to test
the customized version detection for each compiler. Users can
customize this by overriding the following:
* version_argument: this is the argument that tells the compiler to
print its version. It assumes that the compiler will report its
version if invoked with a single option (like "--version")
* version_regex: the regular expression used to extract the version
from the compiler output. This assumes that a regular
expression is sufficient to extract the version, and that the
version can be extracted from a single capture group (Spack uses
the first capture group)
* default_version: allows you to completely override all version
detection logic
* get_compiler_version_output: if getting the compiler to report
its version is more complex than invoking it with a single arg
* extract_version_from_output: if it is difficult to define a regex
that can be used to extract the version from the output
* Added tests for version detection of most compilers
* Removed redundant code from xl_r compiler class (by inheriting
from xl compiler definition)
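For a typical compiler, the two class attributes are all that need customizing (hypothetical compiler shown):
```python
from spack.compiler import Compiler


class MyCompiler(Compiler):  # hypothetical compiler class
    # Single option that makes the compiler print its version.
    version_argument = '--version'
    # Spack extracts the version from the first capture group.
    version_regex = r'MyCompiler version ([0-9.]+)'
```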
Replace the original implementation of the "memoized" decorator with
an implementation that exposes the docstring and arguments of the
wrapped function. This is achieved using functools.wraps.
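The shape of the new decorator (a sketch consistent with the description):
```python
import functools


def memoized(func):
    """Cache results per positional-argument tuple."""
    cache = {}

    @functools.wraps(func)  # preserves __name__, __doc__, etc.
    def _memoized_function(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    return _memoized_function
```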
This provides a mechanism to implement a new Spack command in a
separate directory, and with a small configuration change point Spack
to the new command.
To register the command, the directory must be added to the
"extensions" section of config.yaml. The command directory name must
have the prefix "spack-", and have the following layout:
spack-X/
  pytest.ini               # optional, for testing
  X/
    cmd/
      name-of-command1.py
      name-of-command2.py
      ...
  tests/                   # optional
    conftest.py
    test_name-of-command1.py
  templates/               # optional jinja templates, if needed
And in config.yaml:
config:
  extensions:
    - /path/to/spack-X
If the extension includes tests, you can run them via spack by adding
the --extension option, like "spack test --extension=X"
* initial work to make use of an 'upstream' spack installation: this uses the DB of the upstream installation to check if a package is installed
* need to query upstream dbs when adding new record to local db
* prevent reindexing upstream DBs
* set prefix on specs read from DB based on path stored in install record
* check that Spack does not install packages that are recorded as installed in an upstream db
* externals do not add their path to install records - need to use 'external_path' to get path of upstream externals
* views need to check for upstream installations when linking metadata
* package and spec now calculate upstream installation properties on-demand themselves rather than depending on concretization to set these properties up-front. The added tests for upstream installations don't work with this new strategy so they need to be updated
* only refresh modules for local specs (not those in upstream packages); optionally generate local module files for packages installed upstream
* when a user tries to locate a module file for a package installed upstream, tell them to use the upstream spack instance to locate it
* support recursive upstream databases (allow upstream databases to use their own upstream databases)
* separate upstream config into separate file with its own schema; each entry now also includes a name
* metadata_dir is no longer customizable on a per-instance basis for YamlDirectoryLayout
* treat metadata_dir as an instance variable but don't set it from kwargs; this follows several other hardcoded variables which must be consistent between upstream and downstream DBs. Also update DirectoryLayout.metadata_path to work entirely with Spec.prefix, since Spec.prefix is set from the DB when available (so metadata_path was duplicating that logic)
Change the location of the CMake build area from the staged source
directory to the stage base directory.
This change allows CMake packages to refer to the build directory in
setup_environment (e.g. if tests need to have a directory in PATH):
Staging happens after the call to setup_environment(), and if the
stage area does not exist, then spec.stage.source_path returns None.
To accommodate this change, archived files (like config.log for
Autotools packages) are archived relative to the stage base directory
rather than the expanded source directory.
Other packages (those not using CMake) will still use the staged
source directory as the default working directory for builds (and
will still be unable to reference this directory in
setup_environment())
When multiple instances of environment-modules were installed with
different architectures, Spack was not retrieving the installation
appropriate for the current architecture when finding the module
prefix.
* Fixed some issues with CUDA-Intel compiler conflicts.
* Comment about expressing CUDA-compiler conflicts.
* More precise conflicts and also add support for Intel 19.0
If the user has set the environment variable VISUAL, it will be used
in preference to EDITOR for all Spack editing activities. If VISUAL
is not set or fails (perhaps due to a lack of graphical editing
capabilities), EDITOR will be used instead. We fall back to one of
several common editors if neither bears fruit.
This feature has been tailored to:
* Provide identical behavior to the previous implementation in the
case that VISUAL is not set.
* Not require any change to code utilizing the editor feature.
* Follow usual UNIX behavior concerning VISUAL and EDITOR.
* Fix clearing EnvironmentModifications with python2
* Add EnvironmentModifications::clear unit test
Use re-assignment rather than del to clear array
* Fix flake issues
Fixes #10191
* Add more regular expressions to detect clang versions that were
not being picked up
* Add a test for parsing versions from the output of Clang (this
does not run Clang, but rather uses example outputs from Clang)
* Separate Clang version parsing into its own method (to make it
easier to test)
Currently, only C headers are considered, causing build failures for
packages depending on, e.g., netcdf-fortran and xerces-c. Additionally,
the regex used to look for the include path component did not consider
word boundaries, causing false matches.
* Create option to build missing compilers and add them to config before installing packages that use them
* Clean up kwarg passing for do_install, put compiler bootstrapping in separate method
* Rework of buildcache creation and install prefix checking using the functions introduced in
https://github.com/spack/spack/pull/9199
Instead of replacing rpaths with a placeholder and then checking strings, make use of the functions
relocate.is_relocatable and relocate.is_file_relocatable to decide if a package needs the allow-root option.
This fixes a problem where the placeholder path was not in the first rpath entry. This was seen in c++ libraries and binaries because the compiler was outside the spack install base path and always appears first in the rpath.
Instead of checking the first rpath entry, all rpaths have the placeholder path and the old install path (if it exists) replaced with the new install path.
* flake8
* Added the `spack buildcache preview` sub-command
This is similar to `spack spec -I` but highlights which nodes in a DAG
are relocatable and which are not.
spec.tree has been generalized a little to accept a status function,
instead of always showing the install status
The current implementation works only for ELF, and needs to be
generalized to other platforms.
* Added a test to check if an executable is relocatable or not
This test requires a few commands to be present in the environment.
Currently it will run only under python 3.7 (which uses Xenial instead
of Trusty).
* Added tests for the 'buildcache preview' command.
* Fixed codebase after rebase
* Fixed the list of apt addons for Python 3.7 in travis.yaml
* Only check ELF executables and shared libraries. Skip checking virtual or external packages. (#229)
* Fixed flake8 issues
* Add handling for macOS mach binaries (#231)
The environment modules package has been updated to include
versions up to 4.0.0. The url of the package and the homepage
have been updated accordingly.
The `spack bootstrap` command now builds version 3.2.10 of
the environment-modules package, and will do so until #10708
is fixed.
This restores the use of Package.headers when computing -I options
for building a package that was added in #8136 and reverted in
#10604. #8136 used utility logic that located all header files in
an installation prefix, and calculated the -I options as the
immediate roots containing those header files.
In some cases, for a package containing a directory structure like
prefix/
  include/
    ex1.h
    subdir/
      ex2.h
dependents may expect to include ex2.h relative to 'include', and
adding 'prefix/include/subdir' as a -I was causing errors,
in particular if ex2.h has the same name as a system header.
This updates header utility logic to by default return the base
"include" directory when it exists, rather than subdirectories.
It also makes it possible for package implementers to override
Package.headers to return the subdirectory when it is required
(for example with libxml2).
Spack warns users when a dependency package updates CPATH. This
warning message is generating bug reports and alarm in cases where
there is no problem. For now this downgrades the warning message to
the debug level, so it only shows up if something goes wrong for the
user and they ask for more information from Spack.
This adds a new spack command and a schema for a file which describes the
builder containers available, along with the compilers available on
each builder. The release-jobs command then generates the .gitlab-ci.yml
file by first expanding the release spec set, concretizing each spec
(in an appropriate docker container if --this-machine-only argument is
not provided on command line), and then combining and staging all the
concrete specs as jobs to be run by gitlab.
Adds four new sub-commands to the buildcache command:
1. save-yaml: Takes a root spec and a list of dependent spec names,
along with a directory in which to save yaml files, and writes out
the full spec.yaml for each of the dependent specs. This only needs
to concretize the root spec once, then indexes it with the names of
the dependent specs.
2. check: Checks a spec (via either an abstract spec or via a full
spec.yaml) against remote mirror to see if it needs to be rebuilt.
Compares full_hash stored on remote mirror with full_hash computed
locally to determine whether spec needs to be rebuilt. Can also
generate list of specs to check against remote mirror by expanding
the set of release specs expressed in etc/spack/defaults/release.yaml.
3. get-buildcache-name: Makes it possible to attempt to read directly
the spec.yaml file on a remote or local mirror by providing the path
where the file should live based on concretizing the spec.
4. download: Downloads all buildcache files associated with a spec
on a remote mirror, including any .spack, .spec, and .cdashid files
that might exist. Puts the files into the local path provided on
the command line, and organizes them in the same hierarchy found on
the remote mirror
This commit also refactors lib/spack/spack/util/web.py to expose
functionality allowing other modules to read data from a url.
- add CombinatorialSpecSet in spack.util.spec_set module.
- the class is iterable and encapsulates YAML parsing and validation.
- Adjust YAML format to be more generic
- YAML spec-set format now has a `matrix` section, which can contain
multiple lists of specs, generated different ways. Including:
- specs: a raw list of specs.
- packages: a list of package names and versions
- compilers: a list of compiler names and versions
- All of the elements of `matrix` are dimensions for the build matrix;
we take the cartesian product of these lists of specs to generate a
build matrix. This means we can add things like [^mpich, ^openmpi]
to get builds with different MPI versions. It also means we can
multiply the build matrix out with lots of different parameters.
- Add a schema format for spec-sets
Fixes #10617 Fixes #10624 Closes #10619
#8136 depended entirely on spec.libs to retrieve library directories
from dependencies. By default this function only retrieves libraries if
their name is something like lib<package> (e.g. "libfoo.so" for a
package called "Foo"). This unconditionally adds lib/lib64 directories
for each dependency as link/rpath directories.
This also filters system paths from link/rpaths/include directories and
removes duplicated paths that #8136 could add.
If the -f <specyamlfile> argument to install is used (rather than
providing package specs on the command line), CDash throws an exception
due to missing the installation command (the packages targeted for
install). This fixes that behavior so CDash reporting succeeds in
either case.
fixes #10601
Due to a bug this attribute is wrong for packages that use directories
as namespaces. For instance it will add "<boost-prefix>/include/boost"
instead of "<boost-prefix>/include" to the include path.
As a minor addition a few loops in the compiler wrappers have been
simplified.
Fixes #7855 Closes #8070 Closes #2645
When searching for library directories (e.g. to add "-L" arguments to
the compiler wrapper) Spack was only trying the "lib/" and "lib64/"
directories for each dependency install prefix; this missed cases
where packages would install libraries to subdirectories and also was
not customizable. This PR makes use of the ".headers" and ".libs"
properties for more-advanced location of header/library directories.
Since packages can override the default behavior of ".headers" and
".libs", it also allows package writers to customize.
The following environment variables which used to be set by Spack
for a package build have been removed:
* Remove SPACK_PREFIX and SPACK_DEPENDENCIES environment variables as
they are no-longer used
* Remove SPACK_INSTALL environment variable: it was not used before
this PR
* fix permission setter
Fix a typo in islink test when applied to files.
* os.walk explicitly set not to follow links
The algorithm relies strongly on not following links.
* Note that `none` is the default for lmod autoload
Save a bit of confusion by *explicitly* pointing out that `none` is
the default value for autoload in the lmod module file generator.
* Add a tip re building software externally
Add a tip about using `autoload: all` when building packages outside
of the tree that use artifacts (e.g. libraries, includes) within the
tree.
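For example, in `modules.yaml` (a sketch of the relevant setting):
```yaml
modules:
  lmod:
    all:
      autoload: all   # the default is 'none'
```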
CMake supports the notion of secondary generators which provide extra
information to (e.g.) IDEs over and above that normally provided by
the primary generator. Spack only supports the 'Unix Makefiles' and
'Ninja' primary generators but was not parsing out the primary
generator when a secondary generator was also included (e.g. for
a generator attribute like 'Codeblocks - Ninja'). This adds a regex
for extracting the primary generator for validation.
Since the secondary generator is irrelevant to a Spack build, it is
passed on to CMake without further validation.
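A regex of roughly this shape does the extraction (illustrative, not the exact pattern Spack uses):
```python
import re

# A secondary generator appears as a prefix, e.g. 'CodeBlocks - Ninja';
# only the trailing primary generator is validated.
_generator_re = re.compile(r'(?:.+ - )?(Unix Makefiles|Ninja)$')


def primary_generator(generator):
    match = _generator_re.match(generator)
    if not match:
        raise ValueError('unsupported CMake generator: ' + generator)
    return match.group(1)
```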
* CUDA compiler conflicts for Linux.
* Add Volta and Turing GPUs.
* Add mandatory conflict for Volta and Turing GPUs.
* Revert "CUDA compiler conflicts for Linux."
This reverts commit 7d4ff654ac53aad272c59e9f7f8bb3fbb32bcec4.
* Compiler conflicts introduced in the previous commit moved into the CUDA package and integrated into the CUDA build system.
* More conservative with compiler conflicts for cuda 10.0.130, since I don't know what will happen with future cuda 10.x releases.
* Correct off-by-one errors in clang conflicts for x86_64 Linux.
* No restrictions on Apple Clang compiler until we are able to distinguish Xcode clang from github clang more easily. Note to fix this in the future.
* Change comment to clarify that github clang refers to LLVM clang.
* Fix and simplify index range.
* Fix overlapping conflicts for CUDA 10.0.130
* Removed extra ^cuda from conflict.
Debug output now includes the output of modulecmd executions. Only
output module content when a failure occurs; always report when a
module is loaded/unloaded.
"spack install" will install all packages added to the current
environment. When this included external packages, the environment
update would fail because it would attempt to copy log files that
were only generated if Spack handled the install itself. This skips
that step for external packages.
* Allow overwriting nonexistent and multiple packages
initial implementation
give one prompt to users instead of a prompt per spec
testing
* flake
* bugfix: install overwrite check each spec against installed
* python3 compliance for filter/map
* Remove Cray CC compilers causing problems on case-insensitive filesystems
* cray -> cce
* Ensure that compiler-specific directory comes first in build-env
* Point to compiler-specific symlinks
Binary caches of packages with absolute symlinks had broken symlinks.
As a stopgap measure, #9747 addressed this by replacing symlinks with
copies of files when creating binary cached packages.
This reverts #9747 and instead, either relative-izes the symlink or
rewrites the target. If the binary cache is created using '--rel' (as
in "spack buildcache create --rel...") then absolute symlinks will be
replaced with relative symlinks (in addition to making RPATHs relative
as before); otherwise they are rewritten (when the binary cache is
unpacked and installed).
The current output of buildcache list is very verbose and I feel like
some details are getting lost. By making the output similar to find, I
think users will be able to get a better overview of what is stored in
the cache.
* dealii: fix concretization of xsdk package
* tests: add concretization tests for deal.II and xSDK, which are often broken due to limitations in the concretizer
* use pytest.mark.parametrize
Allow customizing views with Spec-formatted directory structure
Allow views to specify projections that are more complicated than
merging every package into a single shared prefix. This will allow
sites to configure a view for the way they want to present packages
to their users; for example this can be used to create a prefix for
each package but omit the DAG hash from the path.
This includes a new YAML format file for specifying the simplified
prefix for a spec in a view. This configuration allows the use of
different prefix formats for different specs (i.e. specs depending
on MPI can include the MPI implementation in the prefix).
Documentation on usage of the view projection configuration is
included.
Depending on the projection configuration, paths are not guaranteed
to be unique and it may not be possible to add multiple installs of
a package to a view.
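A projection configuration along these lines (format strings illustrative):
```yaml
projections:
  ^mpi: '{name}-{version}/{^mpi.name}-{^mpi.version}'
  all: '{name}-{version}'
```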
Fixes #10284
#10152 replaced shutil.move with llnl's copy and copy_tree for
resources. This did not copy permissions so led to later failures
if an executable was copied (e.g. a configure script). This uses
install/install_tree instead, which preserve permissions.
* Initial compiler support
* added arm.py
* Changed licence to Arm suggested header
* Changed licence to the same as clang.py
Main author of file is Nick Forrington <Nick.Forrington@arm.com>
Minor changes by Srinath Vadlamani <srinath.vadlamani@arm.com>
* compilers: add arm compiler detection to Spack
- added arm.py with support for detecting `armclang` and `armflang`
Co-authored-by: Srinath Vadlamani <srinath.vadlamani@arm.com>
* Changed to using get_compiler_version
* linking to general cc for arm compiler
* For arm compiler add CFLAGS to use compiler-rt rtlib.
* Escape special characters in regexp
* Cleaned up for Flake8 to pass.
* libcompiler-rt should be part of the LDFLAGS not CFLAGS
* fixed m4 when using clang to use LDFLAGS. Fixed comments for arm.py to display compiler --version output with # NOQA so flake8 passes.
* added arm compilers
* proper linked names
This enforces conventions that allow for correct handling of
multi-valued variants where specifying no value is an option,
and adds convenience functionality for specifying multi-valued
variants with conflicting sets of values. This also adds a notion
of "feature values" for variants, which are those that are understood
by the build system (e.g. those that would appear as configure
options). In more detail:
* Add documentation on variants to the packaging guide
* Forbid usage of '' or None as a possible variant value, in
particular as a default. To indicate choosing no value, the user
must explicitly define an option like 'none'. Without this,
multi-valued variants with default set to None were not parsable
from the command line (Fixes #6314)
* Add "disjoint_sets" function to support the declaration of
multi-valued variants with conflicting sets of options. For example
a variant "foo" with possible values "a", "b", and "c" where "c"
is exclusive of the other values ("foo=a,b" and "foo=c" are
valid but "foo=a,c" is not).
* Add "any_combination_of" function to support the declaration of
multi-valued variants where it is valid to choose none of the
values. This automatically defines "none" as an option (exclusive
with all other choices); this value does not appear when iterating
over the variant's values, for example in "with_or_without" (which
constructs autotools option strings from variant values).
* The "disjoint_sets" and "any_combination_of" methods return an
object which tracks the possible values. It is also possible to
indicate that some of these values do not correspond to options
understood by the package's build system, such that methods like
"with_or_without" will not define options for those values (this
occurs automatically for "none")
* Add documentation for usage of new functions for specifying
multi-valued variants
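In a package file the two helpers are used roughly like this (hypothetical package and values):
```python
from spack import *


class Foo(Package):  # hypothetical package
    # 'none' is added automatically and is exclusive with other values.
    variant('libs', values=any_combination_of('shared', 'static'),
            description='Which libraries to build')

    # 'auto' may not be combined with any of the manual choices.
    variant('layout', values=disjoint_sets(('auto',), ('v1', 'v2')),
            description='Directory layout to use')
```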
Non-expanded resources were being deleted from the cache on account
of two behaviors:
* ResourceStage was moving files rather than copying them, and uses
"os.path.realpath" to resolve symlinks
* CacheFetchStrategy creates a symlink to a cached resource rather
than copying it
This alters the first behavior: ResourceStage now copies the file
rather than moving it.
"mirror create" was invoking a package's do_patch method in order to
retrieve and archive URL patches. If a package implements a "patch"
method, this is also called as part of do_patch; this failed when the
package-specific implementation referred to environment variables
that are only available at the time the package is built
(e.g. "spack_cc").
This change introduces fetch and clean methods for patches. They are
no-ops for FilePatch but perform the appropriate actions for
UrlPatch. This allows "mirror create" to invoke do_fetch, which does
not call the package's patch method.
- in many files, regular strings were used in places where raw strings
should've been used.
- convert these to raw strings and get rid of new flake8 errors
This PR improves the validation of `modules.yaml` by introducing a custom validator that checks if an attribute listed in `properties` or `patternProperties` is a valid spec. This new check applied to the test case in #9857 gives:
```console
$ spack install szip
==> Error: /home/mculpo/.spack/linux/modules.yaml:5: "^python@2.7@" is an invalid spec [Invalid version specifier]
```
Details:
* Moved the set-up of a custom validator class to spack.schema
* In Spack we use `jsonschema` to validate configuration files
against a schema. We also need custom validators to enforce
writing default values within "properties" or "patternProperties"
attributes.
* Currently, validators were customized at the place of use and with the
recent introduction of environments that meant we were setting-up and
using 2 different validator classes in two different modules.
* This commit moves the set-up of a custom validator class in the
`spack.schema` module and refactors the code in `spack.config` and
`spack.environments` to use it.
* Added a custom validator to check if an attribute is a valid spec
* Added a custom validator that can be used on objects, which yields an
error if the attribute is not a valid spec.
* Updated the schema for modules.yaml
* Updated modules.yaml to fix a few inconsistencies:
- a few attributes were not tested properly using 'anyOf'
- suffixes has been updated to also check that the attribute is a spec
- hierarchical_scheme has been updated to hierarchy
* Removed $ref from every schema
* $ref is not composable or particularly legible
* Use python dicts and regular old variables instead.
- The nested directive implementation was broken for python 3
- directive results were not properly removed from the directive list
when it was processed in the DirectiveMeta metaclass.
- the issue was that remove_directives only descended into a list or
tuple, but in Python3, the initial value passed to the function is a
view of dictionary values.
- make it a list to fix things, and add a regression test.
- currently just looks at patches
- allows you to find out which package applied a patch to a spec
- intended to work with tarballs and resources in the future.
- add tab completion for `spack resource` and subcommands
- previously, if a concrete sub-DAG with patched specs was written out
and read back in, its patches would not be found because the dependent
that patched it was no longer in the DAG.
- Add a test to ensure that the PatchCache handles this case.
- Also add tests to ensure that patch objects are properly created from
Specs -- previously we only checked that the patches were on the Spec.
- this fixes a bug where if we save a concretized sub-DAG where a package
had been patched by a dependent, and the dependent was not in the DAG,
we would not read in all patches correctly.
- Rather than looking up patches in the DAG, we look them up globally
from an index created from the entire repository.
- The patch cache is a bit tricky for several reasons:
- we have to cache information from packages, specifically, the patch
level and working directory.
- FilePatches need to know which package owns them, so that they can
figure out where the patch lives. The repo can change locations from
run to run, so we have to store relative paths and restore them when
the cache is reloaded.
- Patch files can change underneath the cache, because repo indexes
only update on package changes. We currently punt on this -- there
are stub methods for needs_update() that will need to check patch
files when packages are loaded. There isn't an easy way to do this
at global indexing time without making the FastPackageChecker a lot
slower. This is TBD for a future commit.
- Currently, the same patch can only be used one way in a package. That
is, if it appears twice with different level/working_dir settings,
bad things will happen. There's no package that currently uses the
same patch two different ways, so we've punted on this as well, but
we may need to fix this in the future by moving a lot of the metadata
(level, working dir) to the spec, and *only* caching sha256sums in
the PatchCache. That would require some much more complicated tweaks
to the Spec, so we're holding off on that til later.
- This required patches to be refactored somewhat -- the difference
between a UrlPatch and a FilePatch is still not particularly clean.
- indexes should use json, not YAML, to optimize for speed
- only use YAML in human-editable files
- this makes ProviderIndex consistent with other indexes
- virtual provider cache and tags were previously generated by nearly
identical but separate methods.
- factor out an Indexer interface for updating repository caches, and
provide implementations for each type of index (TagIndex,
ProviderIndex) so that more can be added if needed.
- Among other things, this allows all indexes to be updated at once.
This is an advantage because loading package files is the real
overhead, and building the indexes once the packages are loaded is
trivial. We avoid extra bulk read-ins by generating all package indexes
at once.
- This can be extended for dependents (reverse dependencies) and patches
later.
- cleanup patch.py:
- make patch.py constructors more understandable
- loosen coupling of patch.py with package
- in Package: make package_dir, module, and namespace class properties
- These were previously instance properties and couldn't be called from
directives, e.g. in patch.create()
- make them class properties so that they can be used in class definition
- also add some instance properties to delegate to class properties so
that prior usage on Package objects still works
- When returning string output, use text_type and decode utf-8 in Python
2 instead of using `str`
- This properly handles unicode, whereas before we would pass bad strings
to colify in `spack blame` when reading git output
- add a test that round-trips some unicode through an Executable object
* Remove /nfs/tmp2 from default configuration
* /nfs/tmp2 is going away from LC... and doesn’t exist for the rest of the world.
* update documentation to remove /nfs/tmp2 as well
* Record build output as an array of lines rather than concatenating to a
single large string.
* Use string.find to avoid running re.search on every line of output.
- some commands were missed in the rollout of spack environments
- this makes all commands that need to disambiguate specs restrict the
disambiguation to installed packages in the active environment, as
users would expect
* This fixes a number of bugs:
* Patches were not properly downloaded and added to mirrors.
* Mirror create didn't respect `list_url` in packages
* Update the `spack mirror` command to add all packages in the
concretized DAG (where originally it only added the package specified
by the user). This is required in order to collect patches that are specified
by dependents. Example:
* if X->Y and X requires a patch on Y called Pxy, then Pxy will only
be discovered if you create a mirror with X.
* replace confusing --one-version-per-spec option for `spack mirror create`
with --versions-per-spec; support retrieving multiple versions for
concrete specs
* Implementation details:
* `spack mirror create` now uses regular staging logic to download files
into a mirror, instead of reimplementing it in `add_single_spec`.
* use a separate resource caching object to keep track of new
resources and already-existing resources; also accepts storing
resources retrieved from a cache (unlike the local cache)
* mirror cache object now stores resources that are considered
non-cachable (e.g. the tip of a branch);
* the 'create' function of the mirror module no longer traverses
dependencies since this was already handled by the 'mirror' command;
* Change handling of `--no-checksum`:
* now that 'mirror create' uses stages, the mirror tests disable
checksums when creating the mirror
* remove `no_checksum` argument from library functions - this is now
handled at the Spack-command-level (like for 'spack install')
- all multimethod tests are now run for both `multimethod` and
`multimethod-inheritor`
- do this with a parameterized fixture (pkg_name) that runs the same
tests on both
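Roughly, the fixture looks like this (the test body is illustrative):

```
import pytest


@pytest.fixture(params=['multimethod', 'multimethod-inheritor'])
def pkg_name(request):
    """Every test that takes this fixture runs once per package name."""
    return request.param


def test_multimethod_dispatch(pkg_name):
    # illustrative body; the real tests instantiate the package and
    # check which multimethod implementation gets selected
    assert pkg_name.startswith('multimethod')
```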
- Since early Spack versions, the SpecParser has (weirdly) been
responsible for initializing Spec fields.
- This refactors initialization to take place in Spec.__init__, as it
probably should have originally.
- This makes the code easier to read, the parser easier to understand,
and removes the use of __new__ in the parser to initialize the Spec.
- This also makes it possible to make a completely empty Spec with
`Spec()` -- this is an abstract Spec that will match anything.
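For example (assuming `satisfies()` treats an anonymous spec as imposing no constraints):

```
from spack.spec import Spec

anything = Spec()           # completely empty, abstract spec
zlib = Spec('zlib@1.2.11')  # ordinary named spec

# An empty spec constrains nothing, so any spec should satisfy it:
assert zlib.satisfies(anything)
```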
* "spack install" now uses cache by default, update examples accordingly
* Replace some example packages with others
* Packaging tutorial reference to "spack env" replaced with "spack build-env"
* Command line prompts in examples are shortened
* Example output (including paths) is updated to be more relevant to the training environment
Update all examples that need an MPI provider to build with MPICH; reorganize so that fixing MPICH (as part of the environment section) comes first in the tutorial, since most examples in the tutorial use an MPI provider.
- previously, uninstall would complain if a spec was needed by an
environment.
- Now, we analyze dependents and dependent environments and simply remove
(not uninstall) specs that are needed by environments
- with no arguments, these commands will now edit or dump the
environment's `spack.yaml` file.
- users may not know where named environments live
- this makes it convenient for users to get to the spack.yaml
configuration file for their named environment.
* Update Makefile to use property methods ("build_targets"/"install_targets")
to demonstrate their usage (a sketch appears after this list)
* Fix highlighting
* Change cbench example to ESMF:
The CBench package file was changed and no longer matches the example
shown in the old docs
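The property-method sketch referenced above, for a hypothetical MakefilePackage (target names are illustrative):

```
from spack import *


class Example(MakefilePackage):
    """Hypothetical package overriding the Makefile target properties."""

    @property
    def build_targets(self):
        # targets passed to `make` during the build phase
        return ['all', 'V=1']

    @property
    def install_targets(self):
        # targets passed to `make` during the install phase
        return ['install', 'PREFIX={0}'.format(self.prefix)]
```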
Scopes added with -C are now referred to as "custom scopes"
rather than "command line scopes". "command line scope" now refers
to specific config options that are set on the command line (like
"--insecure")
- default is still to use the cache, but we've added back the
`--use-cache` argument so that scripts that used it are still correct.
- `--no-cache` is still present and is mutually exclusive with `--use-cache`
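Flag pairs like this are typically wired up as a mutually exclusive argparse group sharing one destination; a generic sketch, not Spack's exact parser code:

```
import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument('--use-cache', action='store_true', dest='use_cache',
                   default=True, help='check for pre-built packages (default)')
group.add_argument('--no-cache', action='store_false', dest='use_cache',
                   help='do not check for pre-built packages')

args = parser.parse_args(['--no-cache'])
assert args.use_cache is False
# passing both flags at once is rejected by argparse automatically
```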
* Introduce FFTW2 and FFTW3 providers for the Intel-MKL and FFTW Spack packages.
* make fftw the default package for the fftw-api virtual package
* virtual package test assertion now reports the location of default virtual packages.
* Change the name of the virtual package to fftw-api and use a versioned interface.
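Versioned virtual providers are expressed with the `provides` directive; roughly (the version constraints are illustrative, and the usual `version()` directives are elided):

```
from spack import *


class Fftw(Package):
    homepage = "http://www.fftw.org"
    url = "http://www.fftw.org/fftw-3.3.8.tar.gz"

    # one virtual package, versioned by API generation:
    provides('fftw-api@2', when='@2')
    provides('fftw-api@3', when='@3:')
```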
- all commands (except `spack find`, through `ConstraintAction`) now go
through get_env() to get the active environment
- ev.active was hard to read -- and the name wasn't descriptive.
- rename it to _active_environment to be more descriptive and to strongly
indicate that spack.environment manages it
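Conceptually, the module keeps a single private handle that every command goes through; a simplified sketch (the `get_env` signature is illustrative):

```
_active_environment = None  # managed only by spack.environment


def activate(env):
    global _active_environment
    _active_environment = env


def deactivate():
    global _active_environment
    _active_environment = None


def get_env(args=None, cmd_name=None):
    """Single entry point commands use to find the active environment."""
    return _active_environment
```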
- to avoid changing spec hashes drastically, only add this attribute to
differentiated abstract specs.
- otherwise assume that read-in specs are concrete
- spack.yaml files in the current directory were picked up inconsistently
-- make this a sure thing by moving that logic into find_environment()
and moving find_environment() to main()
- simplify arguments to Spack command:
- remove short args for infrequently used commands (--pdb/-D, -P, -s)
- `spack -D` now forces an env with a directory
- The `Spec` class maintains a special `_patches_in_order_of_appearance`
attribute on patch variants, but it was not preserved when specs were
copied.
- This caused issues for some builds
- Add special logic to `Spec` to preserve this variant on copy
- TODO: in the long term we should get rid of the special variant and
make it the responsibility of one of the variant classes.
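The preservation fix amounts to carrying the hidden attribute over by hand when variants are copied; schematically (the helper name and placement are illustrative, the real logic lives in `Spec`):

```
def copy_variants(dst_spec, src_spec):
    for name, variant in src_spec.variants.items():
        new_variant = variant.copy()
        # the 'patches' variant carries hidden state a plain copy drops:
        hidden = getattr(variant, '_patches_in_order_of_appearance', None)
        if name == 'patches' and hidden is not None:
            new_variant._patches_in_order_of_appearance = hidden
        dst_spec.variants[name] = new_variant
```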
- split 'environment' section into 'environments' and 'modules'
- move location to 'query packages' section
- move cd to developer section
- --env-dir no longer has a short option (was -E)
- -E now means "run without an environment" (no longer same as --env-dir)
- -D now means "run with this directory environment"
- remove short options for many infrequently used top-level commands
- `spack env status` used to show install status; consolidate that into
`spack find`.
- `spack env status` will still print out whether there is an active
environment
- uninstall now:
- restricts its spec search to the current environment
- removes uninstalled specs from the current environment
- reports envs that still need specs you're trying to uninstall
- removed spack env uninstall command
- updated tests