This change makes sure that when we run the pipeline job that updates
the buildcache package index on the remote mirror, we also update the
key index. The public keys corresponding to the signing keys used to
sign the packages were pushed to the mirror as a part of creating the
buildcache index, so this is just ensuring those keys are reflected
in the key index.
Also, this change makes sure the "spack buildcache update-index"
job runs even when there may have been pipeline failures, since we
would like the index always to reflect the true state of the mirror.
* Rework spack.util.web.list_url()
list_url() now accepts an optional recursive argument (default: False)
for controlling whether to only return files within the prefix url or to
return all files whose path starts with the prefix url. Allows for the
most efficient implementation for the given prefix url scheme. For
example, only recursive queries are supported for S3 prefixes, so the
returned list is trimmed down if recursive == False, but the native
search is returned as-is when recursive == True. Suitable
implementations for each case are also used for file system URLs.
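As a sketch of how the reworked function might be called (the mirror URL and returned paths are illustrative):
```python
from spack.util.web import list_url

# Only files directly within the prefix (non-recursive listing).
files = list_url("s3://my-mirror/build_cache", recursive=False)

# All files whose path starts with the prefix; for S3 this is the
# native (always recursive) listing, returned as-is.
all_files = list_url("s3://my-mirror/build_cache", recursive=True)
```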
* Switch to using an explicit index for public keys
Switches to maintaining a build cache's keys under build_cache/_pgp.
Within this directory is an index.json file listing all the available
keys and a <fingerprint>.pub file for each such key.
- Adds spack.binary_distribution.generate_key_index()
- (re)generates a build cache's key index
- Modifies spack.binary_distribution.build_tarball()
- if tarball is signed, automatically pushes the key used for signing
along with the tarball
- if regenerate_index == True, automatically (re)generates the build
cache's key index along with the build cache's package index; as in
spack.binary_distribution.generate_key_index()
- Modifies spack.binary_distribution.get_keys()
- a build cache's key index is now used instead of programmatic
listing
- Adds spack.binary_distribution.push_keys()
- publishes keys from Spack's keyring to a given list of mirrors
- Adds new spack subcommand: spack gpg publish
- publishes keys from Spack's keyring to a given list of mirrors
- Modifies spack.util.gpg.Gpg.signing_keys()
- Accepts optional positional arguments for filtering the set of keys
returned
- Adds spack.util.gpg.Gpg.public_keys()
- As spack.util.gpg.Gpg.signing_keys(), except public keys are
returned
- Modifies spack.util.gpg.Gpg.export_keys()
- Fixes an issue where GnuPG would prompt for user input if trying to
overwrite an existing file
- Modifies spack.util.gpg.Gpg.untrust()
- Fixes an issue where GnuPG would fail for inputs that were not key
fingerprints
- Modifies spack.util.web.url_exists()
- Fixes an issue where url_exists() would throw instead of returning
False
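A hedged sketch of the key-publishing entry point listed above (the keyword names below are assumptions inferred from the bullets, not verified signatures):
```python
import spack.binary_distribution as bindist

# Publish selected keys from Spack's keyring to two mirrors, then
# (re)generate each mirror's key index under build_cache/_pgp.
# Keyword names are illustrative.
bindist.push_keys('s3://my-mirror', 'file:///tmp/mirror',
                  keys=['0A1B2C3D...'], regenerate_index=True)
```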
* rework gpg module/fix error with very long GNUPGHOME dir
* add a shim for functools.cached_property
* handle permission denied error in gpg util
* fix tests/make gpgconf optional if no socket dir is available
* trigger ascent e4s pipeline on merge to spack develop
* change pipeline name: ecpcitest/e4s is the pipeline that will be triggered on merge to develop; it is the E4S use case.
`spack install --yes-to-all` doesn't actually make the build non-interactive,
but that is why people typically use it. This documents that you must also
specify `--no-checksum` for a fully non-interactive build.
* spack config: default modification scope can be an environment
The previous model was that environments are the highest priority config
scope for config reading operations, but were not considered for config
writing operations. Now, the active environment is the highest priority
config scope for both reading and writing operations.
Now the `spack config add`, `spack external find` and `spack compiler` commands
set configuration in the environment by default if an environment is active. This is a
change in default behavior for these routines, but better matches the mental
model for an environment taking precedence over the user's default config file.
* add scope argument to 'spack external find' to choose non-default scope
* Increase testing for config modifications on environments
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Packages can implement `determine_version` to support detection
of external instances of a package. This is generally easier
than implementing `determine_spec_details`. The API for
`determine_version` is similar: for example, you can return
`None` to indicate that an executable is not an instance
of a package.
Users may implement a `determine_variants` method for a package.
When doing external detection, executables are grouped by version
and each group results in a single invocation of `determine_variants`
for the associated spec. The method returns a string specifying
the variants for the package. The method may additionally return
a dictionary representing extra attributes for the package.
These will be stored in the spec yaml and can be retrieved
from self.spec.extra_attributes.
The Spack GCC package has been updated with an implementation
of `determine_variants` which adds the following extra
attributes to the package: c, cxx, fortran
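As a sketch, a hypothetical package implementing both hooks (the package name, regexes, and attribute names are invented for illustration):
```python
import re

from spack import *  # brings Package, Executable, etc. into package modules


class Foo(Package):  # hypothetical package, for illustration only
    """Example of the external-detection hooks described above."""

    executables = [r'^foo$']  # regexes matched against names found on PATH

    @classmethod
    def determine_version(cls, exe):
        # Return the version string, or None to signal that this
        # executable is not an instance of the package.
        output = Executable(exe)('--version', output=str, error=str)
        match = re.search(r'foo version (\S+)', output)
        return match.group(1) if match else None

    @classmethod
    def determine_variants(cls, exes, version_str):
        # Called once per group of executables sharing a version; the
        # optional dict ends up in the spec yaml and is readable later
        # via self.spec.extra_attributes.
        return '+shared', {'binaries': sorted(exes)}
```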
The YAML config for paths and modules of external packages has
changed: the new format allows a single spec to load multiple
modules. Spack will automatically convert from the old format
when reading the configs (the updates do not add new essential
properties, so this change in Spack is backwards-compatible).
With this update, Spack cannot modify existing configs/environments
without updating them (e.g. “spack config add” will fail if the
configuration is in a format that predates this PR). The user is
prompted to do this explicitly and commands are provided. All
config scopes can be updated at once. Each environment must be
updated one at a time.
* Run Python2.6 unit tests on Github Actions
* Skip url tests on Python 2.6 to reduce waiting times
* Skip foreground background tests on Python 2.6 to reduce waiting times
* Removed references to Travis in the documentation
* Deleted install_patchelf.sh (can be installed from repo on CentOS 6)
* add tutorial setup script to share/spack
* Add check for Ubuntu 18, fix xvda check, fix apt-get errors
- now works on t2.micro, t2.small, and m instances
- apt-get needs retries around it to work
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Buildcache:
* Try mocking an install of quux, corge and garply using prebuilt binaries
* Put patchelf install after ccache restore
* Add script to install patchelf from source so it can be used on Ubuntu:Trusty, which does not have a patchelf package. The script will skip building on macOS
* Remove mirror at end of bindist test
* Add patchelf to Ubuntu build env
* Revert mock patchelf package to allow other tests to run.
* Remove depends_on('patchelf', type='build'), relying instead on a
test fixture to ensure patchelf is available.
* Call g++ command to build libraries directly during test build
* Flake8
* Install patchelf in before_install stage using apt unless on Trusty where a build is done.
* Add some symbolic links between packages
* Flake8
* Flake8:
* Update mock packages to write their own source files
* Create the stage because spec search does not create it any longer
* updates after change of list command arguments
* cleanup after merge
* flake8
On Cray platforms, we rely heavily on the module system to figure out
what targets, compilers, etc. are available. This unfortunately means
that we shell out to the `module` command as part of platform
initialization.
Because we run subcommands in a shell, we can get infinite recursion if
`setup-env.sh` and friends are in some init script like `.bashrc`.
This fixes the infinite loop by adding guards around `setup-env.sh`,
`setup-env.csh`, and `setup-env.fish`, to prevent recursive
initializations of Spack. This is safe because Spack never shells out to
itself, so we do not need it to be initialized in subshells.
- [x] add recursion guard around `setup-env.sh`
- [x] add recursion guard around `setup-env.csh`
- [x] add recursion guard around `setup-env.fish`
* Move flake8 tests to Github Actions
* Move shell test to Github Actions
* Moved documentation build to Github Actions
* Don't run coverage on Python 2.6
Since we get connection errors consistently on Travis
when trying to upload coverage results for Python 2.6,
avoid computing coverage entirely to speed-up tests.
* Activate environment in container file
This PR will ensure that the container recipes will build the spack
environment by first activating the environment.
* Deactivate environment before environment collection
For Singularity, the environment must be deactivated before running the
command to collect the environment variables. This is because the
environment collection uses `spack env activate`.
* share/spack/setup-env.fish file to setup environment in fish shell
* setup-env.fish testing script
* Update share/spack/setup-env.fish
Co-Authored-By: Elsa Gonsiorowski, PhD <gonsie@me.com>
* Update share/spack/qa/setup-env-test.fish
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* updates completions using `spack commands --update-completion`
* added stderr-nocaret warning
* added fish shell tests to CI system
Co-authored-by: becker33 <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Elsa Gonsiorowski, PhD <gonsie@me.com>
* Start moving toward a json buildcache index
* Add spec and database index schemas
* Add a schema for buildcache spec.yaml files
* Provide a mode for database class to generate buildcache index
* Update db and ci tests to validate object w/ new schema
* Remove unused temporary upload-s3 command
* Use database class to generate buildcache index
* Do not generate index with each buildcache creation
* Make buildcache index mode into a couple of constructor args to Database class
* Use keyword args for _createtarball
* Parse new json index when we get specs from buildcache
Now that only one index file per mirror needs to be fetched in
order to have all the concrete specs for binaries available on the
mirror, we can just fetch and refresh the cached specs every time
instead of needing to use the '-f' flag to force re-reading.
spack config add <value>: add a nested value to the specified configuration scope
spack config remove/rm: remove specified configuration from the relevant scope
* Added unit tests to Github Actions
* Set user e-mail and name for git tests to succeed
* Simplify setup.sh logic
* Replicate Travis script on Github Actions
* Update flags since '.' is not allowed
* Added badge, simplified workflow
* Remove pinning of coverage
* Remove unit tests run on Github Actions from Travis
This fixes a fork bomb in `spack versions`. Recursive generation of pools
to scrape URLs in `_spider` was creating large numbers of processes.
Instead of recursively creating process pools, we now use a single
`ThreadPool` with a concurrency limit.
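A minimal sketch of the bounded approach (function names and the fetch callback are illustrative, not Spack's actual `_spider` internals):
```python
from multiprocessing.pool import ThreadPool


def spider(urls, fetch_page, max_workers=16):
    # One shared thread pool with a fixed concurrency limit, instead
    # of recursively spawning a new process pool per level of links.
    pool = ThreadPool(processes=max_workers)
    try:
        return pool.map(fetch_page, urls)
    finally:
        pool.terminate()
        pool.join()
```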
More on the issue: having ~10 users running `spack versions` at the
same time on front-end nodes caused kernel lockups due to the high
number of sockets opened (sys-admin reports ~210k distributed over 3
nodes). Users were internal, so they had `ulimit -n` set to ~70k.
The forking behavior could be observed by just running:
$ spack versions boost
and checking the number of processes spawned. Number of processes
per se was not the issue, but each one of them opens a socket
which can stress `iptables`.
In the original issue the kernel watchdog was reporting:
Message from syslogd@login03 at May 19 12:01:30 ...
kernel:Watchdog CPU:110 Hard LOCKUP
Message from syslogd@login03 at May 19 12:01:31 ...
kernel:watchdog: BUG: soft lockup - CPU#110 stuck for 23s! [python3:2756]
Message from syslogd@login03 at May 19 12:01:31 ...
kernel:watchdog: BUG: soft lockup - CPU#94 stuck for 22s! [iptables:5603]
* add an --exclude-file option to 'spack mirror create' which allows a user to specify a file of specs to exclude when creating a mirror. This is anticipated to be useful especially when using the '--all' option
* allow specifying number of versions when mirroring all packages
* when mirroring all specs within an environment, include dependencies of root specs
* add '--exclude-specs' option to allow user to specify that specs should be excluded on the command line
* add test for excluding specs
This change also adds a code path through the spack ci pipelines
infrastructure which supports PR testing on the Spack repository.
Gitlab pipelines run as a result of a PR (either creation or pushing
to a PR branch) will only verify that the packages in the environment
build without error. When the PR branch is merged to develop,
another pipeline will run which results in the generated binaries
getting pushed to the binary mirror.
Modifications:
- [x] Travis now uses `bionic` as a default (`xenial` used for Python 3.5, `trusty` for Python 2.6)
- [x] Shell unit tests have been factored into their own run
- [x] `kcov` is built only for tests that upload coverage results
Overall with this we shave 3-4 mins. on each run and add an additional run of about 3 min. For some reason `kcov` 38 fails forwarding output when used with Python unit tests, so I used v34 for that and v38 (latest) for shell testing. Previously we were using v25.
* Non-interactive mode for spack checksum; allow passing 'package@version' to spack checksum
* Flake8 fixes
* Update checksum.py
Fix typo
* Update spack-completion script
* Automatically set non-interactive mode if more than one version passed
* Update lib/spack/spack/cmd/checksum.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add documentation and update spack-completion
* Flake8
* Rename option
* Update spack-completion
* Update lib/spack/spack/cmd/checksum.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update checksum.py
* Update stage.py
* Update create.py
Use batch mode when adding a new package
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Add a `spack external find` command that tries to populate
`packages.yaml` with external packages from the user's `$PATH`. This
focuses on finding build dependencies. Currently, support has only been
added for `cmake`.
For a package to be discoverable with `spack external find`, it must define:
* an `executables` class attribute containing a list of
regular expressions that match executable names.
* a `determine_spec_details(prefix, specs_in_prefix)` method
Spack will call `determine_spec_details()` once for each prefix where
executables are found, passing in the path to the prefix and the path to
all found executables. The package is responsible for invoking the
executables and figuring out what type of installation(s) are in the
prefix, and returning one or more specs (each with version, variants or
whatever else the user decides to include in the spec).
The found specs and prefixes will be added to the user's `packages.yaml`
file. Providing the `--not-buildable` option will mark all generated
entries in `packages.yaml` as `buildable: False`
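A compact sketch of the lower-level hook (the package name, regexes, and prefix handling are illustrative; the cmake support mentioned above follows this general shape):
```python
import os
import re

from spack import *  # Package, Spec, Executable in package modules


class Bar(Package):  # hypothetical package, for illustration only
    executables = [r'^bar$']

    @classmethod
    def determine_spec_details(cls, prefix, exes_in_prefix):
        # Invoked once per prefix; inspect the found executables and
        # return one or more specs describing what is installed there.
        exe = next(e for e in exes_in_prefix
                   if os.path.basename(e) == 'bar')
        output = Executable(exe)('--version', output=str)
        match = re.search(r'bar version (\S+)', output)
        if match:
            return Spec('bar@{0}'.format(match.group(1)))
```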
* dev-build: --drop-in <shell>
Add a `--drop-in <shell>` option to `spack dev-build`.
This option will automatically run a
`spack build-env <spec> -- <shell>` at the end of a `dev-build`, e.g.
to quickly drop-and-devel into a build phase of a package.
Example usage:
```
spack dev-build --before cmake --drop-in bash openpmd-api@develop
```
* build_env: drop in unit test
Co-authored-by: Greg Becker <becker33@llnl.gov>
Since #16132, we've consolidated the setting of FORCE_UNSAFE_CONFIGURE to
`autotools.py`, so we don't need to use it in packages like `coreutils`,
in our commands, or in our container recipes.
- [x] Remove FORCE_UNSAFE_CONFIGURE from packages
- [x] Remove FORCE_UNSAFE_CONFIGURE from container recipes
- [x] Remove FORCE_UNSAFE_CONFIGURE from `spack ci` command
`DYLD_LIBRARY_PATH` can frequently break builtin macOS software when
pointed at Spack libraries. This is because it takes *higher* precedence
than the default library search paths, which are used by system software.
`DYLD_FALLBACK_LIBRARY_PATH`, on the other hand, takes lower precedence.
At first glance, this might seem bad, because the software installed by
Spack in an environment needs to find *its* libraries, and it should not
use the defaults. However, Spack's installations are always `RPATH`'d,
so they do not have this problem.
`DYLD_FALLBACK_LIBRARY_PATH` is thus useful for things built in an
environment that need to use Spack's libraries, that don't set *their*
RPATHs correctly for whatever reason. We now prefer it to
`DYLD_LIBRARY_PATH` in modules and in environments because it helps a
little bit, and it is much less intrusive.
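For illustration, an environment modification along these lines (the view path is an assumption; `EnvironmentModifications` is Spack's environment-editing helper):
```python
from spack.util.environment import EnvironmentModifications

env = EnvironmentModifications()
# Prefer the fallback variable: system binaries keep their default
# search paths, while software built in the environment still falls
# back to Spack's libraries when its own RPATHs miss something.
env.prepend_path('DYLD_FALLBACK_LIBRARY_PATH', '/path/to/env/.view/lib')
env.apply_modifications()
```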
If a user invoked "spack env activate example-henv", Spack would
mistakenly interpret the "-h" from "example-henv" as the "-h" option.
This commit allows users to create and activate environments with
"-h" in the name.
This issue existed for bash shell support as well as csh support, and
this commit addresses both, along with some other unrelated csh
support issues.
* add --skip-unstable-versions option to 'spack mirror create' which skips sources/resources for packages if their version is not stable (i.e. if they are the head of a git branch rather than a fixed commit)
* '--skip-unstable-versions' should skip all VCS sources/resources, not just those which are not cachable
* Buildcache command: add install option -o/--otherarch
This will allow matching specs from other archs, for example
installing macOS buildcaches on linux hosts.
* spack commands --update-completion
It's often useful to run a module with `python -m`, e.g.:
python -m pyinstrument script.py
Running a python script this way was hard, though, as `spack python` did
not have a similar `-m` option. This PR adds a `-m` option to `spack
python` so that we can do things like this:
spack python -m pyinstrument ./test.py
This makes it easy to write a script that uses a small part of Spack and
then profile it. Previously the easiest way to do this was to write a
custom Spack command, which is often overkill.
This commit introduces a `--no-check-signature` option for
`spack install` so that unsigned packages can be installed. It is
off by default (signatures required).
This PR adds a new command to Spack:
```console
$ spack containerize -h
usage: spack containerize [-h] [--config CONFIG]
creates recipes to build images for different container runtimes
optional arguments:
-h, --help show this help message and exit
--config CONFIG configuration for the container recipe that will be generated
```
which takes an environment with an additional `container` section:
```yaml
spack:
specs:
- gromacs build_type=Release
- mpich
- fftw precision=float
packages:
all:
target: [broadwell]
container:
# Select the format of the recipe e.g. docker,
# singularity or anything else that is currently supported
format: docker
# Select from a valid list of images
base:
image: "ubuntu:18.04"
spack: prerelease
# Additional system packages that are needed at runtime
os_packages:
- libgomp1
```
and turns it into a `Dockerfile` or a Singularity definition file, for instance:
```Dockerfile
# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-bionic:prerelease as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - gromacs build_type=Release" \
&& echo " - mpich" \
&& echo " - fftw precision=float" \
&& echo " packages:" \
&& echo " all:" \
&& echo " target:" \
&& echo " - broadwell" \
&& echo " config:" \
&& echo " install_tree: /opt/software" \
&& echo " concretization: together" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps and strip executables
RUN cd /opt/spack-environment && spack install && spack autoremove -y
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
xargs file -i | \
grep 'charset=binary' | \
grep 'x-executable\|x-archive\|x-sharedlib' | \
awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM ubuntu:18.04
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
RUN apt-get -yqq update && apt-get -yqq upgrade \
&& apt-get -yqq install libgomp1 \
&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
```
Instead of another script, this adds a simple argument to `spack
commands` that updates the completion script. Developers can now just
run:
spack commands --update-completion
This should make it simpler for developers to remember to run this
*before* the tests fail. Also, this version tab-completes.
Previously the `spack load` command was a wrapper around `module load`. This required some bootstrapping of modules to make `spack load` work properly.
With this PR, the `spack` shell function handles the environment modifications necessary to add packages to your user environment. This removes the dependence on environment modules or lmod and removes the requirement to bootstrap spack (beyond using the setup-env scripts).
Included in this PR is support for MacOS when using Apple's System Integrity Protection (SIP), which is enabled by default in modern MacOS versions. SIP clears the `LD_LIBRARY_PATH` and `DYLD_LIBRARY_PATH` variables on process startup for executables that live in `/usr` (but not `/usr/local`), `/System`, `/bin`, and `/sbin`, among other system locations. Spack cannot know the `LD_LIBRARY_PATH` of the calling process when executed using `/bin/sh` and `/usr/bin/python`. The `spack` shell function now manually forwards these two variables, if they are present, as `SPACK_<VAR>` and recovers those values on startup.
- [x] spack load/unload no longer delegate to modules
- [x] refactor user_environment modification calculations
- [x] update documentation for spack load/unload
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This PR adds a `--format=bash` option to `spack commands` to
auto-generate the Bash programmable tab completion script. It can be
extended to work for other shells.
Progress:
- [x] Fix bug in superclass initialization in `ArgparseWriter`
- [x] Refactor `ArgparseWriter` (see below)
- [x] Ensure that output of old `--format` options remains the same
- [x] Add `ArgparseCompletionWriter` and `BashCompletionWriter`
- [x] Add `--aliases` option to add command aliases
- [x] Standardize positional argument names
- [x] Tests for `spack commands --format=bash` coverage
- [x] Tests to make sure `spack-completion.bash` stays up-to-date
- [x] Tests for `spack-completion.bash` coverage
- [x] Speed up `spack-completion.bash` by caching subroutine calls
This PR also necessitates a significant refactoring of
`ArgparseWriter`. Previously, `ArgparseWriter` was mostly a single
`_write` method which handled everything from extracting the information
we care about from the parser to formatting the output. Now, `_write`
only handles recursion, while the information extraction is split into a
separate `parse` method, and the formatting is handled by `format`. This
allows subclasses to completely redefine how the format will appear
without overriding all of `_write`.
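A sketch of what the split enables (the subclass, module path, and `prog` attribute are assumptions based on the description above, not the exact implementation):
```python
import json

from llnl.util.argparsewriter import ArgparseWriter  # path assumed


class JsonWriter(ArgparseWriter):
    """Hypothetical subclass: presentation is redefined by overriding
    `format` alone; `parse` still extracts parser info and `_write`
    still handles recursion."""

    def format(self, cmd):
        # `cmd` holds what `parse` extracted from the argparse parser.
        return json.dumps({'prog': cmd.prog}) + '\n'
```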
Co-Authored-by: Todd Gamblin <tgamblin@llnl.gov>
The pathadd function was using setopt to configure zsh for word
splitting, which leaks out of the function and breaks default
functionality in a number of external zsh plugins and packages. This
switches to emulate -L, just as the spack function uses, to keep the
setting local to the function.
Previously, `spack test` automatically passed all of its arguments to
`pytest -k` if no options were provided, and to `pytest` if they were.
`spack test -l` also provided a list of test filenames, but they didn't
really let you completely narrow down which tests you wanted to run.
Instead of trying to do our own weird thing, this passes `spack test`
args directly to `pytest`, and omits the implicit `-k`. This means we
can now run, e.g.:
```console
$ spack test spec_syntax.py::TestSpecSyntax::test_ambiguous
```
This wasn't possible before, because we'd pass the fully qualified name
to `pytest -k` and get an error.
Because `pytest` doesn't have the greatest ability to list tests, I've
tweaked the `-l`/`--list`, `-L`/`--list-long`, and `-N`/`--list-names`
options to `spack test` so that they help you understand the names
better. You can combine these options with `-k` or other arguments to do
pretty powerful searches.
This one makes it easy to get a list of names so you can run tests in
different orders (something I find useful for debugging `pytest` issues):
```console
$ spack test --list-names -k "spec and concretize"
cmd/env.py::test_concretize_user_specs_together
concretize.py::TestConcretize::test_conflicts_in_spec
concretize.py::TestConcretize::test_find_spec_children
concretize.py::TestConcretize::test_find_spec_none
concretize.py::TestConcretize::test_find_spec_parents
concretize.py::TestConcretize::test_find_spec_self
concretize.py::TestConcretize::test_find_spec_sibling
concretize.py::TestConcretize::test_no_matching_compiler_specs
concretize.py::TestConcretize::test_simultaneous_concretization_of_specs
spec_dag.py::TestSpecDag::test_concretize_deptypes
spec_dag.py::TestSpecDag::test_copy_concretized
```
You can combine any list option with keywords:
```console
$ spack test --list -k microarchitecture
llnl/util/cpu.py modules/lmod.py
```
```console
$ spack test --list-long -k microarchitecture
llnl/util/cpu.py::
test_generic_microarchitecture
modules/lmod.py::TestLmod::
test_only_generic_microarchitectures_in_root
```
Or just list specific files:
```console
$ spack test --list-long cmd/test.py
cmd/test.py::
test_list test_list_names_with_pytest_arg
test_list_long test_list_with_keywords
test_list_long_with_pytest_arg test_list_with_pytest_arg
test_list_names
```
Hopefully this stuff will help with debugging test issues.
- [x] make `spack test` send args directly to `pytest` instead of trying
to do fancy things.
- [x] rework `--list`, `--list-long`, and add `--list-names` to make
searching for tests easier.
- [x] make it possible to mix Spack's list args with `pytest` args
(they're just fancy parsing around `pytest --collect-only`)
- [x] add docs
- [x] add tests
- [x] update spack completion
This PR moves build smoke tests from TravisCI and migrates them to Github Actions. The result is that build tests are performed in parallel with unit tests and they don't hog additional resources on Travis. The workflow will not run if a PR only changes packages in the built-in repository, but will always run on pushes to develop or master.
* Removed build tests from Travis and passed them to Github Actions
* Store ~/.ccache in Github Actions cache
* Add filters on paths and make sure this workflow doesn't run
* Use paths-ignore and exclude only files in the built-in repo
* Added a badge to README.md
Before this commit we used to run the entire unit test suite
even in the presence of a failure. Since we currently rely a lot
on the state of the filesystem etc. the end report was most
of the time showing spurious failures that were a consequence
of the first failing test.
This PR makes unit tests exit at the first failing test.
Also, pin codecov at v4.5.4 (last one supporting Python 2.6)
* docker: add missing module to ubuntu images
* docker: fix issue with missing locale
* docker: one package per line + rm python2 support
* docker: ubuntu image also needs 'file' for buildcache creation
fixes #13073
Since #3206 was merged, bootstrapping environment-modules was using the architecture of the current host or the best match supported by the default compiler. The former case is an issue since shell integration was looking for a spec targeted at the host microarchitecture.
1. Bootstrap an env modules targeted at generic architectures
2. Look for generic targets in shell integration scripts
3. Add a new entry in Travis to test shell integration
fixes #13005
This commit fixes an issue with the name of the root directory for
module file hierarchies. Since #3206 the root folder was named after
the microarchitecture used for the spec, which is too specific and
not backward compatible for lmod hierarchies. Here we compute the
root folder name using the target family instead of the target name
itself and we add target information in the 'whatis' portion of the
module file.
Dotkit is being used only at a few sites and has been deprecated on new
machines. This commit removes all the code that provides support for
the generation of dotkit module files.
A new validator named "deprecatedProperties" has been added to the
jsonschema validators. It can emit a warning message or exit with an
error if a property that has been marked as deprecated is
encountered (see the sketch after the list below).
* Removed references to dotkit in the docs
* Removed references to dotkit in setup-env-test.sh
* Added a unit test for the 'deprecatedProperties' schema validator
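The validator could be wired up roughly like this, using jsonschema's extension API (the keys read from the "deprecatedProperties" value are assumptions, not the exact schema Spack uses):
```python
import warnings

import jsonschema


def _deprecated_properties(validator, deprecated, instance, schema):
    # 'deprecated' is the value of the "deprecatedProperties" keyword
    # in the schema; warn or error when a listed property shows up.
    if not validator.is_type(instance, 'object'):
        return
    for name in deprecated['properties']:
        if name in instance:
            msg = deprecated['message'].format(property=name)
            if deprecated.get('error', False):
                yield jsonschema.ValidationError(msg)
            else:
                warnings.warn(msg)


Validator = jsonschema.validators.extend(
    jsonschema.Draft4Validator,
    {'deprecatedProperties': _deprecated_properties})
```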
fixes #12915, closes #12916
Since Spack has support for specific targets it might happen that
software is built for targets that are not exactly the host, either
because of an explicit user request or because the compiler being used
is too old to support the host.
Modules for different targets are written into different directories
and by default Spack was adding to MODULEPATH only the directory
corresponding to the current host. This PR modifies this behavior to
add all the directories that are **compatible** with the current host.
* Fix CD: Packages Service First
Build the packages.spack.io service images first, so they are
guaranteed to be pushed even if further images fail to build.
Fix the query to the `spack` script executed in later builds.
* CD: Remove Spack Images
Now done on Dockerhub.
- We don't currently make enough use of the maintainers field on
packages, though we could use it to assign reviews.
- add a command that allows maintainers to be queried
- can ask who is maintaining a package or packages
- can ask what packages users are maintaining
- can list all maintained or unmaintained packages
- add tests for the command
* fix docker builds/remove extra builds/add ci builds
* preinstall vim in CI builder images
* simplify & streamline docker resources
* restore os-container-mapping.yaml file
- [x] Add shell tests to ensure that `spack env activate`, `spack env
deactivate`, and `despacktivate` continue to work.
- [x] Also ensure that activate and deactivate both work with `set -u`
* Add template creation test
* Added --skip-editor option to "spack create": normally
"spack create" opens an editor for the user after generating a
package file; when the --skip-editor option is used, "spack create"
only generates the package file and does not open an editor
* Added --skip-editor option to bash completion
- `setup-env.sh` was not properly detecting a bash shell when bash was run
as /bin/sh.
- Detection routine now always reports bash when bash is run as sh, and
no longer parses the path to the executable indicated in `$BASH`.
- Add set -u to the setup-env.sh test script
- Refactor lines in setup-env.sh that tested potentially undefined
variables to use the `[ -z ${var+x} ]` construct
- tests use a shell-script harness and test all Spack commands that
require special shell support.
- tests work in bash, zsh, and dash
- run setup-env.sh tests on macos and linux builds.
- we run them on macos and linux
- replace use of [[ with [
- replace function foo { .. } with foo() { .. }
- wrap some long lines
- add lsof and /proc/fd magic so that we can find the sourced file even in dash
- only do the complicated shell checks in one place; test $_sp_shell
elsewhere.
- We've seen this a few times now where users have set up `cd` to echo
the new directory, and it screws up `setup-env.sh`
- In the past we've said this is user error.
- Here, we just fix it by sending `cd` output to /dev/null where needed.
- this works in bash, zsh, and dash
- Codecov cannot handle as many coverage reports as we are generating
- as a result, our PR coverage pages have been broken for a while, and
it's hard to tell people where to enhance their testing in PR reviews.
- Scale back to only running coverage for 3.7 and 2.7 unit tests
- This is *probably* better. We run the build tests for good measure,
but we do not need to evaluate them for coverage. The coverage reports
are about unit tests.
* Added a function that concretizes specs together
* Specs concretized together are copied instead of being referenced
This makes the specs different objects and removes any reference to the
fake root package that is needed currently for concretization.
* Factored creating a repository for concretization into its own function
* Added a test on overlapping dependencies
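Usage might look like this (the function and module names follow the description above; the specs are illustrative):
```python
import spack.concretize
from spack.spec import Spec

# Concretize abstract specs as a group so they agree on shared
# dependencies; each returned spec is an independent copy.
hdf5, mpi = spack.concretize.concretize_specs_together(
    Spec('hdf5+mpi'), Spec('openmpi'))
```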
Usage of double quotes was preventing word-splitting when parsing
module roots in setup-env.sh, which led to an error when multiple
module roots are used (in particular when Spack is pointed to use
an upstream module root in addition to its own).
Still look for BASH_SOURCE[0] first, but if it's not set,
_sp_source_file is initialized to an empty value addressing the
unset parameter error (line 217).
* initial work to make use of an 'upstream' spack installation: this uses the DB of the upstream installation to check if a package is installed
* need to query upstream dbs when adding new record to local db
* prevent reindexing upstream DBs
* set prefix on specs read from DB based on path stored in install record
* check that Spack does not install packages that are recorded as installed in an upstream db
* externals do not add their path to install records - need to use 'external_path' to get path of upstream externals
* views need to check for upstream installations when linking metadata
* package and spec now calculate upstream installation properties on-demand themselves rather than depending on concretization to set these properties up-front. The added tests for upstream installations don't work with this new strategy so they need to be updated
* only refresh modules for local specs (not those in upstream packages); optionally generate local module files for packages installed upstream
* when a user tries to locate a module file for a package installed upstream, tell them to use the upstream spack instance to locate it
* support recursive upstream databases (allow upstream databases to use their own upstream databases)
* separate upstream config into separate file with its own schema; each entry now also includes a name
* metadata_dir is no longer customizable on a per-instance basis for YamlDirectoryLayout
* treat metadata_dir as an instance variable but dont set it from kwargs; this follows several other hardcoded variables which must be consistent between upstream and downstream DBs. Also update DirectoryLayout.metadata_path to work entirely with Spec.prefix, since Spec.prefix is set from the DB when available (so metadata_path was duplicating that logic)
This spack command adds a new schema for a file which describes the
builder containers available, along with the compilers available on
each builder. The release-jobs command then generates the .gitlab-ci.yml
file by first expanding the release spec set, concretizing each spec
(in an appropriate docker container if --this-machine-only argument is
not provided on command line), and then combining and staging all the
concrete specs as jobs to be run by gitlab.
The built images are set up with fairly recent versions of gcc and
clang:
- centos_7: [ gcc@5.5.0 (built from src), clang@6.0.0 (spack-built from src) ]
- ubuntu_18.04: [ gcc@5.5.0 (system), clang@6.0.0-1ubuntu2 (system) ]
Adds four new sub-commands to the buildcache command:
1. save-yaml: Takes a root spec and a list of dependent spec names,
along with a directory in which to save yaml files, and writes out
the full spec.yaml for each of the dependent specs. This only needs
to concretize the root spec once, then indexes it with the names of
the dependent specs.
2. check: Checks a spec (via either an abstract spec or via a full
spec.yaml) against remote mirror to see if it needs to be rebuilt.
Compares full_hash stored on remote mirror with full_hash computed
locally to determine whether spec needs to be rebuilt. Can also
generate list of specs to check against remote mirror by expanding
the set of release specs expressed in etc/spack/defaults/release.yaml.
3. get-buildcache-name: Makes it possible to attempt to read directly
the spec.yaml file on a remote or local mirror by providing the path
where the file should live based on concretizing the spec.
4. download: Downloads all buildcache files associated with a spec
on a remote mirror, including any .spack, .spec, and .cdashid files
that might exist. Puts the files into the local path provided on
the command line, and organizes them in the same hierarchy found on
the remote mirror
This commit also refactors lib/spack/spack/util/web.py to expose
functionality allowing other modules to read data from a url.
Spack shell detection in setup-env.sh was originally based on
examining the executable name of $$ (from "ps"). In some cases this
does not actually give the name of the shell used, for example when
setup-env.sh was invoked from a script using "#!". To make shell
detection more robust, this adds a preliminary check for shell
variables which indicate that the shell is bash or zsh; the
executable name of $$ is retained as a fallback if those variables
are not defined.
- currently just looks at patches
- allows you to find out which package applied a patch to a spec
- intended to work with tarballs and resources in the future.
- add tab completion for `spack resource` and subcommands
- fixed an issue where some undesirable parts of
the spack source tree were being copied into
the image context.
- added a workaround for a tty ioctl warning on
ubuntu
- adjusted how the main images are built so that
`RUN spack ...` works automatically for child
images that base themselves on them.
Lately many CI runs for PRs are failing due to the `mpich` build that
times out on Travis (10 mins. without output). As the timeout seems to
happen consistently during the build phase, increasing the verbosity of
that test can help work around the issue.
- `spack env create <name>` works as before
- `spack env create <path>` now works as well -- environments can be
created in their own directories outside of Spack.
- `spack install` will look for a `spack.yaml` file in the current
directory, and will install the entire project from the environment
- The Environment class has been refactored so that it does not depend on
the internal Spack environment root; it just takes a path and operates
on an environment in that path (so internal and external envs are
handled the same)
- The named environment interface has been hoisted to the
spack.environment module level.
- env.yaml is now spack.yaml in all places. It was easier to go with one
name for these files than to try to handle logic for both env.yaml and
spack.yaml.
- `spack env activate foo`: sets SPACK_ENV to the current active env name
- `spack env deactivate`: unsets SPACK_ENV, deactivates the environment
- added support to setup_env.sh and setup_env.csh
- other env commands work properly with SPACK_ENV, as with environment
arguments.
- command-line --env arguments take precedence over the active
environment, if given.
setup-env includes a call to 'ps' to determine what shell is being
used. 'ps' can be instructed to use a different default output format
via the 'PS_FORMAT' env variable. Thus unset this variable before
calling 'ps'.
* Unite Dockerfiles - add build/run/push scripts
* update docker documentation
* update .travis.yml
* switch to using a preprocessor on Dockerfiles
* skip building docker images on pull requests
* update files with copyright info
* tweak when travis builds for docker files are done
- `spack license list-files`: list all files that should have license headers
- `spack license list-lgpl`: list files still under LGPL-2.1
- `spack license verify`: check that license headers are correct
- Added `spack license verify` to style tests
- remove the old LGPL license headers from all files in Spack
- add SPDX headers to all files
- core and most packages are (Apache-2.0 OR MIT)
- a very small number of remaining packages are LGPL-2.1-only
- Many container builds are timing out frequently during Spack tests in
Travis CI.
- Travis recommends to try `sudo: required` to see whether this is an
infrastructure issue or something else.
- added `sudo: required` to all Linux builds.
- added --verbose to `spack test` invocation so that we can see more
easily what tests it's timing out on.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
As requested in the review all the commands meant to manage module
files have been grouped under the `spack module` command.
Unit tests have been refactored to match the new command structure.
fixes #2215, fixes #2570, fixes #6676, fixes #7281, closes #3827
This PR reverts the use of `spack module loads` in favor of
`spack module find` when loading module files via Spack. After this PR
`spack load` will accept a single spec at a time, and will be able
to interpret correctly the `--dependencies` option.
- The setup-env.sh script currently makes two calls to spack, but it
should only need to make one.
- Add a fast-path shell setup routine in `main.py` to allow the shell
setup to happen in a single, fast call that doesn't load more than it
needs to.
- This simplifies setup code, as it has to eval what Spack prints
- TODO: consider eventually making the whole setup script the output of a
spack command
- replace `spack.config.get_configuration()` with `spack.config.config()`
- replace `get_config`/`update_config` with `get`, `set`
- add a path syntax that can be used to refer to specific config options
without first getting the entire configuration dict
- update usages of `get_config` and `update_config` to use `get` and `set`
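For example (the option paths shown are illustrative):
```python
import spack.config

# Path syntax: fetch one option without reading the whole config dict.
jobs = spack.config.get('config:build_jobs')

# Set a single option in a specific scope.
spack.config.set('config:build_jobs', 8, scope='user')
```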
fixes #7593
Unit tests on OSX are trying to concretize mpileaks, and they fail due
to a conflict in the package:
"%gcc@7.2.0:" conflicts with "elfutils@0.163"
This solves the issue by explicitly asking to concretize against
elfutils@0.170
Fixes#7593
By default MacOS concretizes using the clang compiler. The unit tests
include a call to "spack spec mpileaks", which has elfutils as a
dependency; #7096 added a conflict in elfutils to avoid building
with clang, which caused the MacOS unit tests to start failing.
This updates the concretization to force using gcc when concretizing
mpileaks.
* Revert "Travis: use --concurrency=multiprocessing only on build tests (#6872)"
This reverts commit 596d463714.
* Removing 'coverage combine' in test script
According to what was discovered in #6887, one of the problems is
calling 'coverage combine' twice without the '-a' flag. This removes
the first call within our test scripts.
On a local workstation, it seems that tracking multiple processes during
coverage may result in malformed coverage reports for unit tests and not
for build tests.
Given that multiple processes make a difference in coverage mainly for
build tests, try to disable the tracking for unit tests to see if we get
more stable coverage results.
* Reduce the calls to the python interpreter during initialization
This should reduce the delay the users experience when sourcing the
setup file to activate shell support. It works by generating at once
all the commands that need to be evaluated (they are stored in
a string and later `eval`ed by the shell).
* setup_env.sh: replaced `read` with an equivalent magic
For some reason `read` breaks when sourced from a running script.
Change the incantation we use to construct the unique python command
that will be evaluated.
* setup_env.sh: python command now constructed with `printf` for portability
This recovers the support for `zsh` that was broken in previous commits.
* Reworked module file tutorial section
First draft for the SC17 update. This includes:
- adding an introduction on module files + Spack's module
generation blueprints
- adding a set-up section and provide a docker image for easy set-up
- updating all the relevant snippets
- extending a bit some of the concepts that were already touched
* Added reference to #5582 + committed Dockerfiles
Also fixed a couple of typos spotted by Denis.
* module file tutorial: added section on template customization
* module file tutorial: fixed minor typos + rephrased a sentence
* module file tutorial: made explicit that Docker image comes with software
* module file tutorial: improved phrasing and layout.
Thanks Hartzell!
* module file tutorial: added vim and nano to editors
* module file tutorial: fixed typo
* Fixed typos
Thanks Adam!
* module file tutorial: updated Dockerfile + minor changes in introduction
- This isn't one of those autogenerated SVGs from a drawing program!
- This is a completely re-traced, minimalist SVG file with clearly
delineated pieces so that your favorite renderer can draw a Spack logo
at whatever resolution you want.
- Included versions with text, as well.
setup-env.sh adds the 'module' command to the user's environment
if it is not defined and if there is a Spack installation of
environment-modules available. This commit updates that logic to
perform these checks and updates quietly.
These were discovered with bash 4.1.2.
Add quotations around a variable to prevent the destruction of a
newline. Without this fix a conditional doesn't work properly.
Remove square brackets around a conditional meant to be evaluated based
on the return code of a command. This wasn't working properly with an
old bash.
Fix a typo.
Renames the existing bootstrap command to 'clone'. Repurposes
'spack bootstrap' to install packages that are useful to the
operation of Spack (for now this is just environment-modules).
For bash and ksh users running setup-env.sh, if a Spack-installed
instance of environment-modules is detected and environment modules
and dotkit are not externally available, Spack will define the
'module' command in the user's shell to use the environment-modules
built by Spack.
- This should speed up Travis CI tests and refers to #5049
- Travis uses build-stages to group tests together
- The idea is to let fast tests fail first, then move to longer ones.
- Added external perl to avoid download failure from CPAN and reduce build time
- Disabling perl-dbi: continues to fail with (504 Gateway Time-out) on Travis
- We now cover all the build systems in tests:
- Add back `openblas` to Travis as a separate package.
- Switched `openblas` for `astyle` to build a simpler MakefilePackage.
- Added 'tut' (WafPackage)
- Added 'py-setuptools' (PythonPackage)
- Added 'perl-dbi' (PerlPackage)
- Added 'build_systems' directory to the ones for which we get a summary
- Added 'openjpeg' (CMakePackage)
- Added 'r-rcpp' (RPackage)
- Added comments to build tests to show the covered build system
* Merged 'purge' command with 'clean'. Deleted 'purge'. fixes #2942
'spack purge' has been merged with 'spack clean'. Documentation has been
updated accordingly. The 'clean' and 'purge' behaviors are not mutually
exclusive, and they log brief information to tty as they go.
* Fixed a wrong reference to spack clean in the docs
* Added tests for 'spack clean'. Updated bash completion.
* Disable spec colorization when redirecting stdout and add command line flag to re-enable
* Add command line `--color` flag to control output colorization
* Add options to `llnl.util.tty.color` to allow color to be auto/always/never
* Add `Spec.cformat()` function to be used when `format()` should have auto-coloring
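A sketch of how the pieces fit together (the `set_color_when` name follows the auto/always/never description above; the spec is illustrative):
```python
import llnl.util.tty.color as color
from spack.spec import Spec

# 'auto' colors only when stdout is a tty; 'always' / 'never' override.
color.set_color_when('never')

# Spec.cformat() colorizes only when color is currently enabled, so
# redirected output stays clean.
print(Spec('zlib@1.2.11').cformat())
```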