To make it easier to see how package hashes change and how they are computed, add two
commands:
* `spack pkg source <spec>`: dumps source code for a package to the terminal
* `spack pkg source --canonical <spec>`: dumps canonicalized source code for a
package to the terminal. It strips comments, directives, and known-unused
multimethods from the package. It is used to generate package hashes.
* `spack pkg hash <spec>`: gives the package hash for a particular spec.
It is generated from the canonical source code for the spec.
- [x] add `spack pkg source` and `spack pkg hash`
- [x] add tests
- [x] fix bug in multimethod resolution with boolean `@when` values
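For example (output elided):
```console
$ spack pkg source zlib              # dump zlib's package.py
$ spack pkg source --canonical zlib  # dump the canonical source used for hashing
$ spack pkg hash zlib                # print the package hash
```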
Co-authored-by: Greg Becker <becker33@llnl.gov>
* llvm: make targets a multivalued variant
* Fix the targets variant values
1. Make them lowercase and add a mapping to cmake equivalent
2. auto -> all
3. Restore composability by using a multivalued variant, so that
`targets=all` and `targets=x86` are combined into `targets=all,x86`,
which is then transformed into `LLVM_TARGETS_TO_BUILD=all`.
* use targets=x86 in iwyu
* Default to nvptx/amdgpu/host arch targets
* default to none
* Update var/spack/repos/builtin/packages/zig/package.py
This command pokes at the environment, the Python interpreter,
and the bootstrap store to check whether the dependencies needed
by Spack are available.
If any are missing, it shows a comprehensible message.
This commit introduces the command
spack module tcl setdefault <package>
similar to the one already available for lmod
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
spack monitor now requires authentication, as each build must be associated
with a user. The --monitor-no-auth flag therefore no longer makes sense,
and this commit removes it.
* Fix building container images
Patchelf is bootstrapped from sources, so we cannot
disable that mechanism until a finer-grained selection is
possible in the configuration.
* Build on changes to the Dockerfile
* Don't login to Dockerhub on PRs
This PR is meant to move code with "business logic" from `spack.cmd.buildcache` to appropriate core modules[^1].
Modifications:
- [x] Add `spack.binary_distribution.push` to create a binary package from a spec and push it to a mirror
- [x] Add `spack.binary_distribution.install_root_node` to install only the root node of a concrete spec from a buildcache (may check the sha256 sum if it is passed in as input)
- [x] Add `spack.binary_distribution.install_single_spec` to install a single concrete spec from a buildcache
- [x] Add `spack.binary_distribution.download_single_spec` to download a single concrete spec from a buildcache to a local destination
- [x] Add `Spec.from_specfile` that constructs a spec given the path of a JSON or YAML spec file
- [x] Removed logic from `spack.cmd.buildcache`
- [x] Removed calls to `spack.cmd.buildcache` in `spack.bootstrap`
- [x] Deprecate `spack buildcache copy` with a message that says it will be removed in v0.19.0
[^1]: The rationale is that commands should be lightweight wrappers of the core API, since that helps with both testing and scripting (easier mocking and no need to invoke `SpackCommand`s in a script).
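As a rough illustration of the scripting benefit, a script can now call the core API directly (a minimal sketch; the exact signatures here are assumptions, not documented API):
```python
import spack.binary_distribution as bindist
from spack.spec import Spec

# Construct a concrete spec from a spec file (added in this PR).
spec = Spec.from_specfile("/path/to/zlib.spec.json")

# Download the binary package to a local directory, then install just the
# root node from the buildcache (argument names are assumptions).
bindist.download_single_spec(spec, "/tmp/downloads")
bindist.install_root_node(spec, unsigned=False)
```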
* Add connection specification to mirror creation
This allows each mirror to contain information about the credentials
used to access it.
Update command and tests based on comments
Switch to only "long form" flags for the s3 connection information.
Use the "any" function instead of checking for an empty list when looking
for s3 connection information.
Split test to use the access token separately from the access id and key.
Use long flag form in test.
Add endpoint_url to available S3 options.
Extend the special parameters for an S3 mirror to accept the
endpoint_url parameter.
Add a test.
* Add connection information per URL not per mirror
Expand the mirror-based connection information to be per-URL.
This will allow a user to specify different S3 connection information
for both the fetch and the push URLs.
Add a parameter for "profile", another way of storing the id/secret pair.
* Switch from "access_profile" to "profile"
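The resulting per-URL configuration in `mirrors.yaml` might look roughly like this (a sketch; key names follow the flags described above and may differ in detail):
```yaml
mirrors:
  my-s3-mirror:
    fetch:
      url: s3://my-bucket/spack-cache
      access_pair: [MY_ACCESS_ID, MY_ACCESS_KEY]  # or access_token / profile
      endpoint_url: https://s3.example.com
    push:
      url: s3://my-bucket/spack-cache
      profile: spack-push
```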
See #25249 and https://github.com/spack/spack/pull/27159#issuecomment-958163679.
This adds `spack load --list` as an alias for `spack find --loaded`. The new command is
not as powerful as `spack find --loaded`, as you can't combine it with all the queries or
formats that `spack find` provides. However, it is more intuitively located in the command
structure in that it appears in the output of `spack load --help`.
The idea here is that people can use `spack load --list` for simple stuff but fall back to
`spack find --loaded` if they need more.
- add help to `spack load --list` that references `spack find`
- factor some parts of `spack find` out to be called from `spack load`
- add shell tests
- update docs
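For instance:
```console
$ spack load zlib
$ spack load --list        # quick view of loaded packages
$ spack find --loaded -lv  # the more powerful equivalent
```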
Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Richarda Butler <39577672+RikkiButler20@users.noreply.github.com>
* GnuPG: allow bootstrapping from buildcache and sources
* Add a test to bootstrap GnuPG from binaries
* Disable bootstrapping in tests
* Add e2e test to bootstrap GnuPG from sources on Ubuntu
* Add e2e test to bootstrap GnuPG on macOS
The `--generic` argument allows printing the best generic target for the
current machine. This can be quite handy when wanting to find the
generic architecture to use when building a shared software stack for
multiple machines.
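For example (output is illustrative and depends on the machine):
```console
$ spack arch --target
haswell
$ spack arch --generic --target
x86_64_v3
```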
This PR adds a "spack tags" command to output package tags or
(available) packages with those tags. It also ensures each package
is listed in the tag cache ONLY ONCE per tag.
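For example (a hypothetical session; output elided):
```console
$ spack tags                # list all available package tags
$ spack tags build-tools    # list packages tagged with build-tools
```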
* ci: Enable more packages in the DVSDK CI pipeline
* doxygen: Add conflicts for gcc bugs
* dray: Add version constraints for api breakage with newer deps
Previously, if your `spack.yaml` was not formatted correctly, `spack config edit`
would fail and you had to edit your `spack.yaml` manually.
- [x] Add some code to `_main()` to defer `ConfigFormatError` when loading the
environment, until we know what command is being run.
- [x] Make `spack config edit` use `SPACK_ENV` instead of the config scope
object to find `spack.yaml`, so it can work even if the environment is bad.
Co-authored-by: scheibelp <scheibel1@llnl.gov>
* Deactivate previous env before activating new one
Currently on develop you can run `spack env activate` multiple times to switch
between environments, but they leave traces, even though Spack only supports
one active environment at a time.
Currently:
```console
$ spack env create a
$ spack env create b
$ spack env activate -p a
[a] $ spack env activate -p b
[b] [a] $ spack env activate -p b
[b] [b] [a] $ spack env activate -p a
[a] [b] [b] [a] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
/path/to/environments/b/.spack-env/view/share/man
/path/to/environments/b/.spack-env/view/man
```
This PR fixes that:
```console
$ spack env activate -p a
[a] $ spack env activate -p b
[b] $ spack env activate -p a
[a] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
```
Currently spack is a bit of a bad actor as a zsh plugin, and it was my
fault. The autoload and compinit should really be handled by the user,
as was made abundantly clear when I found spack was doing completion
initialization for *all* of my plugins due to a deferred setup that was
getting messed up by it.
Making this conditional took spack load time from 1.5 seconds (with
module loading disabled) to 0.029 seconds. I can actually afford to load
spack by default with this change in.
Hopefully someday we'll do proper zsh completion support, but for now
this helps a lot.
* use zsh hist expansion in place of dirname
* only run (bash)compinit if compdef/complete missing
* add zsh compiled files to .gitignore
* move changes to .in file, because spack
This PR makes it possible to specify the `url` and `ref` of the Spack instance used in a container recipe, simply by expanding the YAML schema as outlined in #20442:
```yaml
container:
  images:
    os: amazonlinux:2
    spack:
      ref: develop
      resolve_sha: true
```
The `resolve_sha` option, if true, verifies the `ref` by cloning the Spack repository in a temporary directory and transforming any tag or branch name into a commit sha. When this new ability is leveraged, an additional "bootstrap" stage is added, which builds an image with Spack set up and ready to install software. The Spack repository to be used can be customized with the `url` keyword under `spack`.
Modifications:
- [x] Permit pinning the version of Spack, either by branch, tag, or sha
- [x] Added a few new OSes (centos:8, amazonlinux:2, ubuntu:20.04, alpine:3, cuda:11.2.1)
- [x] Permit printing the bootstrap image on its own
- [x] Add documentation on the new part of the schema
- [x] Add unit tests for different use cases
Creates an environment in a temporary directory and activates it, which
is useful for a quick ephemeral environment:
```
$ spack env activate -p --temp
[spack-1a203lyg] $ spack add zlib
==> Adding zlib to environment /tmp/spack-1a203lyg
==> Updating view at /tmp/spack-1a203lyg/.spack-env/view
```
Modifications:
- Modify the workflow to build container images without pushing when the workflow file itself is modified
- Strip the leading ghcr.io/spack/ from env.container and env.versioned to prepare for pushing to multiple registries
- Fixed CentOS 7 and Amazon Linux builds
- Log in and push to Docker Hub as well as the GitHub Container Registry
- Add a badge to README.md with the status of docker images
Modifications:
- [x] Change `defaults/config.yaml`
- [x] Add a fix for bootstrapping patchelf from sources if `compilers.yaml` is empty
- [x] Make `SPACK_TEST_SOLVER=clingo` the default for unit-tests
- [x] Fix package failures in the e4s pipeline
Caveats:
1. CentOS 6 still uses the original concretizer as it can't connect to the buildcache due to issues with `ssl` (bootstrapping from sources requires a C++14 capable compiler)
1. I had to update the image tag for GitlabCI in e699f14.
1. libtool v2.4.2 has been deprecated and other packages received some updates
This is a major rework of Spack's core `spec.yaml` metadata format. It moves from `spec.yaml` to `spec.json` for speed, and it changes the format in several ways. Specifically:
1. The spec format now has a `_meta` section with a version (now set to version `2`). This will simplify major changes like this one in the future.
2. The node list in spec dictionaries is no longer keyed by name. Instead, it is a list of records with no required key. The name, hash, etc. are fields in the dictionary records like any other.
3. Dependencies can be keyed by any hash (`hash`, `full_hash`, `build_hash`).
4. `build_spec` provenance from #20262 is included in the spec format. This means that, for spliced specs, we preserve the *full* provenance of how to build, and we can reproduce a spliced spec from the original builds that produced it.
**NOTE**: Because we have switched the spec format, this PR changes Spack's hashing algorithm. This means that after this commit, Spack will think a lot of things need rebuilds.
There are two major benefits this PR provides:
* The switch to JSON format speeds up Spack significantly, as Python's builtin JSON implementation is orders of magnitude faster than YAML.
* The new Spec format will soon allow us to represent DAGs with potentially multiple versions of the same dependency -- e.g., for build dependencies or for compilers-as-dependencies. This PR lays the necessary groundwork for those features.
The old `spec.yaml` format continues to be supported, but is now considered a legacy format, and Spack will opportunistically convert these to the new `spec.json` format.
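Schematically, a `spec.json` file now looks something like this (a heavily trimmed sketch; hashes and fields are illustrative):
```json
{
  "spec": {
    "_meta": { "version": 2 },
    "nodes": [
      {
        "name": "zlib",
        "version": "1.2.11",
        "hash": "abcdef...",
        "dependencies": [ { "name": "pkgconf", "hash": "123456..." } ]
      },
      { "name": "pkgconf", "version": "1.8.0", "hash": "123456..." }
    ]
  }
}
```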
* tests: make `spack url [stats|summary]` work on mock packages
Mock packages have historically had mock hashes, but this means they're also invalid
as far as Spack's hash detection is concerned.
- [x] convert all hashes in mock package to md5 or sha256
- [x] ensure that all mock packages have a URL
- [x] ignore some special cases with multiple VCS fetchers
* url stats: add `--show-issues` option
`spack url stats` tells us how many URLs are using what protocol, type of checksum,
etc., but it previously did not tell us which packages and URLs had the issues. This
adds a `--show-issues` option to show URLs with insecure (`http`) URLs or `md5` hashes
(which are now deprecated by NIST).
This PR will add a new audit, specifically for spack package homepage urls (and eventually
other kinds I suspect) to see if there is an http address that can be changed to https.
Usage is as follows:
```bash
$ spack audit packages-https <package>
```
And in list view:
```bash
$ spack audit list
generic:
  Generic checks relying on global variables
configs:
  Sanity checks on compilers.yaml
  Sanity checks on packages.yaml
packages:
  Sanity checks on specs used in directives
packages-https:
  Sanity checks on https checks of package urls, etc.
```
I think it would be unwise to include these with the `packages` audits because, when run for all packages, the network requests take a long time. I also like the idea of more tightly scoped checks - likely there will be other addresses for http/https within a package that we eventually check. For now, there are two error cases - one is when an https url is tried but there is some SSL error (or other error that means we cannot update to https):
```bash
$ spack audit packages-https zoltan
PKG-HTTPS-DIRECTIVES: 1 issue found
1. Error with attempting https for "zoltan":
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'www.cs.sandia.gov'. (_ssl.c:1125)>
```
This is either not fixable, or could be fixed with a change to the url or (better) by contacting the site owners to ask about their certificate.
The second case is when there is an http that needs to be https, which is a huge issue now, but hopefully not after this spack PR.
```bash
$ spack audit packages-https xman
Package "xman" uses http but has a valid https endpoint.
```
And then when a package is fixed:
```bash
$ spack audit packages-https zlib
PKG-HTTPS-DIRECTIVES: 0 issues found.
```
And that's mostly it. :)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* Bootstrap clingo from binaries
* Move information on clingo binaries to a JSON file
* Add support to bootstrap on Cray
Bootstrapping on Cray requires, at the moment,
swapping the platform when looking for binaries,
due to #22800.
* Add SHA256 verification for bootstrapped software
Use sha256 verification for binaries necessary to bootstrap
the concretizer and gpg for signature verification
* patchelf: use Spec._old_concretize() to bootstrap
As noted in #24450 we may happen to need the
concretizer when bootstrapping clingo. In that case
only the old concretizer is available.
* Add a schema for bootstrapping methods
Two fields have been added to bootstrap.yaml:
"sources", which lists the methods available for
bootstrapping software, and
"trusted", which records whether a source is trusted or not.
A subcommand has been added to "spack bootstrap" to list
the sources currently available (see the example after this list).
* Methods used for bootstrapping are configurable from bootstrap:sources
The function that tries to ensure a given Python module
is importable now tries bootstrapping methods in the same
order as they are defined in `bootstrap.yaml`
* Permit trusting/untrusting bootstrapping methods
* Add binary tests for macOS and Ubuntu
* Add documentation
* Add a note on bash
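For example (source names are illustrative):
```console
$ spack bootstrap list                   # show available bootstrapping sources
$ spack bootstrap untrust github-actions # distrust binary bootstrapping
$ spack bootstrap trust github-actions   # re-enable it
```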
This pull request adds a new workflow to build and deploy Spack Docker containers
from GitHub Actions. In comparison with our current system where we use Dockerhub's
CI to build our Docker containers, this workflow will allow us to now build for multiple
architectures and deploy to multiple registries. (At the moment x86_64 and Arm64 because
ppc64le is throwing an error within archspec.)
As currently set up, the PR will build all of the current containers (minus CentOS 6, because
those yum repositories are no longer available?) as both x86_64 and Arm64 variants. The
workflow is currently set up to build and deploy containers nightly from develop as well as
on tagged releases. The workflow will also build, but NOT deploy, containers on a pull request
for the purposes of testing this PR. At the moment it is set up to deploy the built containers to
GitHub's Container Registry, although support for also uploading to Dockerhub/Quay can be
included easily if we decide to keep releasing on Dockerhub or want to begin releasing on Quay.
Gitlab truncates job trace output (even the complete raw output) at 4MB,
so this change captures it to a file under "user_data" artifacts as well,
to make sure we can debug output from the end of the rebuild job.
`spack diff` takes two specs, uses `spack.solver.asp.SpackSolverSetup` to generate
lists of facts about each (e.g., nodes, variants, etc.), and then takes the set difference
between the two to show the user the differences.
Example output:
```console
$ spack diff python@2.7.8 python@3.8.11
==> Warning: This interface is subject to change.
--- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
+++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
@@ variant_value @@
-  python patches a8c52415a8b03c0e5f28b5d52ae498f7a7e602007db2b9554df28cd5685839b8
+  python patches 0d98e93189bc278fbc37a50ed7f183bd8aaf249a8e1670a465f0db6bb4f8cf87
@@ version @@
-  openssl Version(1.0.2u)
+  openssl Version(1.1.1k)
-  python Version(2.7.8)
+  python Version(3.8.11)
```
Currently this uses diff-like output but we will attempt to improve on this in the future.
One use case for `spack diff` is whenever a user faces a disambiguation situation and cannot
remember how two different installs differ. The command can also output `--json` for a more
analysis-oriented use case where we want to save complete data with all diffs and the
intersection. However, the command is really intended for command-line use, and we will
likely have an analyzer more suited to saving data.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Modifications:
- Remove the "build tests" workflow from GitHub Actions
- Setup a similar e2e test on Gitlab
In this way we'll reduce the load on GitHub Actions workflows, and e2e tests will
benefit from the buildcache reuse granted by pipelines.
`spack style` previously used a Travis CI variable to figure out
what the base branch of a PR was, and this was apparently also set
on `develop`. We switched to `GITHUB_BASE_REF` to support GitHub
Actions, but it looks like this is set to `""` in pushes to develop,
so `spack style` breaks there.
This PR does two things:
- [x] Remove `GITHUB_BASE_REF` knowledge from `spack style` entirely
- [x] Handle `GITHUB_BASE_REF` in style scripts instead, and explicitly
pass the base ref if it is present, but don't otherwise.
This makes `spack style` *not* dependent on the environment and fixes
handling of the base branch in the right place.
This adds a `--root` option so that `spack style` can check style for
a spack instance other than its own.
We also change the inner workings of `spack style` so that `--config FILE`
(and similar options for the various tools) is passed to each tool. This ensures
that when `spack style` runs, it always uses the config from the running spack,
and does *not* pick up configuration from the external root.
- [x] add `--root` option to `spack style`
- [x] add `--config` (or similar) option when invoking style tools
- [x] add a test that verifies we can check an external instance
This uses our bootstrapping logic to automatically install dependencies for
`spack style`. Users should no longer have to pre-install all of the tools
(`isort`, `mypy`, `black`, `flake8`). The command will do it for them.
- [x] add logic to bootstrap specs with specific version requirements in `spack style`
- [x] remove style tools from CI requirements (to ensure we test bootstrapping)
- [x] rework dependencies for `mypy` and `py-typed-ast`
- `py-typed-ast` needs to be a link dependency
- it needs to be at 1.4.1 or higher to work with python 3.9
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* trilinos: rename basker variant
The Basker solver is part of amesos2 but is clearer without the extra
scoping.
* trilinos: automatically enable teuchos and remove variant
Basically everything in trilinos needs teuchos
* trilinos: group top-level dependencies
* trilinos: update dependencies, removing unused
- GLM, X11 are unused (x11 lacks dependency specs too)
- Python variant is more like a TPL so rearrange that
- Gtest internal package shouldn't be compiled or exported
- Add MPI4PY requirement for pytrilinos
* trilinos: remove package meta-options
- XSDK settings and "all opt packages" are not used anywhere
- all optional packages are dangerous
* trilinos: Use hwloc iff kokkos
See #19119, also the HWLOC tpl name was misspelled so this was being ignored before.
* Flake
* Fix trilinos +netcdf~mpi
* trilinos: default to disabling external dependencies
* Remove teuchos from downstream dependencies
* fixup! trilinos: Use hwloc iff kokkos
* Add netcdf requirements to packages with ^trilinos+exodus
* trilinos: disable exodus by default
* fixup! Add netcdf requirements to packages with ^trilinos+exodus
* trilinos: only enable hwloc when @13: +kokkos
* xyce: propagate trilinos dependencies more simply
* dtk: fix missing boost dependency
* trilinos: remove explicit metis dependency
* trilinos: require metis/parmetis for zoltan
Disable zoltan by default to minimize default dependencies
* trilinos: mark mesquite disabled and fix kokkos arch
* xsdk: fix trilinos to also list zoltan [with zoltan2]
* ci: remove nonexistent variant from trilinos
* trilinos: add missing boost dependency
Co-authored-by: Satish Balay <balay@mcs.anl.gov>
* Permit enabling/disabling bootstrapping and customizing the store location
This PR adds configuration handles to allow enabling
and disabling bootstrapping, and to customize the store
location.
* Move bootstrap related configuration into its own YAML file
* Add a bootstrap command to manage configuration
* fix remaining flake8 errors
* imports: sort imports everywhere in Spack
We enabled import order checking in #23947, but fixing things manually drives
people crazy. This used `spack style --fix --all` from #24071 to automatically
sort everything in Spack so PR submitters won't have to deal with it.
This should go in after #24071, as it assumes we're using `isort`, not
`flake8-import-order` to order things. `isort` seems to be more flexible and
allows `llnl` imports to be in their own group before `spack` ones, so this
seems like a good switch.
This consolidates code across tools in `spack style` so that each
`run_<tool>` function can be called indirectly through a dictionary
of handlers, and so that checks like finding the executable for the
tool can be shared across commands.
- [x] rework `spack style` to use decorators to register tools
- [x] define tool order in one place in `spack style`
- [x] fix python 2/3 issues to get `isort` checks working
- [x] make isort error regex more robust across versions
- [x] remove unused output option
- [x] change vestigial `TRAVIS_BRANCH` to `GITHUB_BASE_REF`
- [x] update completion
Spack pipelines need to take specific actions internally that depend
on whether the pipeline is being run on a PR to spack or a merge to
the develop branch. Pipelines can also run in other repositories,
which represents other possible use cases than just the two mentioned
above. This PR creates a "SPACK_PIPELINE_TYPE" gitlab variable which
is propagated to rebuild jobs, and is also used internally to determine
which pipeline-specific tasks to run.
One goal of the PR is to fix an issue where rebuild jobs that failed on
develop pipelines did not properly report the broken full hash to the
"broken-specs-url".
* remove blueos check on cuda variant, fix typo
* restore necessary compiler guard
* remove axom+cuda from testing because it only partially works outside ppc systems
Add a new "spack audit" command. This command can check for issues
with configuration or with packages and is intended to help a
user debug a failed Spack build.
In some cases the reported issues are always errors but are too
costly to check for (e.g. packages that specify missing variants on
dependencies). In other cases the issues may be legitimate but
uncommon usage of Spack and we want to be sure the user intended the
behavior (e.g. duplicate compiler definitions).
Audits are grouped by theme, and for now the two themes are packages
and configuration. For example you can run all available audits
on packages with "spack audit packages". It is intended that in
the future users will be able to define their own audits.
The package audits are good candidates for running in package_sanity
(i.e. they could catch bugs in user-submitted packages before they
are merged) but that is left for a later PR.
Building magma has been failing consistently and is currently
blocking PRs from being merged. Disable that spec while we
investigate the failure and work on a fix.
This should get us most of the way there to support using monitor during a spack container build, for both Singularity and Docker. Some quick notes:
### Docker
Docker works by way of BUILDKIT and being able to specify --secret. What this means is that you can prefix a line with a mount of type secret as follows:
```bash
# Install the software, remove unnecessary deps
RUN --mount=type=secret,id=su --mount=type=secret,id=st cd /opt/spack-environment && spack env activate . && export SPACKMON_USER=$(cat /run/secrets/su) && export SPACKMON_TOKEN=$(cat /run/secrets/st) && spack install --monitor --fail-fast && spack gc -y
```
Where the id for one or more secrets corresponds to the file mounted at `/run/secrets/<name>`. So, for example, to build this container with su (spackmon user) and st (spackmon token) defined I would export them on my host and do:
```bash
$ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .
```
And when we add `env` to the secret definition that tells the build to look for the secret with id "st" in the environment variable `SPACKMON_TOKEN` for example.
If the user is building locally with a local spack monitor, we also need to set the `--network` to be the host, otherwise you can't connect to it (a la isolation of course.)
### Singularity
Singularity doesn't have as nice an ability to clearly specify secrets, so (hoping this eventually gets implemented) what I'm doing now is providing the user instructions to write the credentials to a file, add it to the container to source, and remove when done.
### Tags
Note that the tags PR https://github.com/spack/spack/pull/23712 will need to be merged before `--monitor-tags` will actually work because I'm checking for the attribute (that doesn't exist yet):
```bash
"tags": getattr(args, "monitor_tags", None)
```
So when that PR is merged to update the argument group, it will work here, and I can either update the PR here to not check if the attribute is there (it will be) or open another one in case this PR is already merged.
Finally, I added a bunch of documentation for how to use monitor with containerize. I say "mostly working" because I can't do a full test run with this new version until the container base is built with the updated spack (the request to the monitor server for an env install was missing, so I had to add it here).
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
This will first support uploads for spack monitor, and eventually could be
used for other kinds of spack uploads.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* Update of Flecsi Spackage
Update of flecsi spackage to reconcile differences between flecsi@1:1.9
and flecsi@2: for future support purposes
* Removing Unnecessary Conditional
Removing unused conditional. Initially the plan was to switch based on
version in `cmake_args`, but this was not necessary, as build system
variable names remained mostly the same and conflicts prevent the rest.
For the most part, if a variant is there, it does not need to be checked
against the version of the code being built.
* Updated CI To Reconcile Flecsi Changes
Updated CI to target flecsi@1.4.2 which best matches the previous
release version and reconciled change in variant name
* e4s ci: enable full e4s
* add llvm-amdgpu to list of specs needing an xlarge tagged runner
* comment out qt and qwt because of intermittent build failures
* remove +rocm specs because rocblas job consistently fails due to infrastructure
This PR allows users to pass `--export`, `--export-secret`, or both to export GPG keys
from Spack. The docs are updated to include a warning that this usually does not
need to be done.
This addresses an issue brought up in slack, and also represented in #14721.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
### Overview
The goal of this PR is to make gitlab pipeline builds (especially build failures) more reproducible outside of the pipeline environment. The two key changes here which aim to improve reproducibility are:
1. Produce a `spack.lock` during pipeline generation which is passed to child jobs via artifacts. This concretized environment is used by generated child jobs and is also uploaded as an artifact to be used when reproducing the build locally.
2. In the `spack ci rebuild` command, if a spec needs to be rebuilt from source, do this by generating and running an `install.sh` shell script which is then also uploaded as a job artifact to be run during local reproduction.
To make it easier to take advantage of improved build reproducibility, this PR also adds a new subcommand, `spack ci reproduce-build`, which, given a url to job artifacts:
- fetches and unzips the job artifacts to a local directory
- looks for the generated pipeline yaml and parses it to find details about the job to reproduce
- attempts to provide a copy of the same version of spack used in the ci build
- if the ci build used a docker image, the command prints a `docker run` command you can run to get an interactive shell for reproducing the build
#### Some highlights
One consequence of this change will be much smaller pipeline yaml files. By encoding the concrete environment in a `spack.lock` and passing to child jobs via artifacts, we will no longer need to encode the concrete root of each spec and write it into the job variables, greatly reducing the size of the generated pipeline yaml.
Additionally `spack ci rebuild` output (stdout/stderr) is no longer internally redirected to a log file, so job output will appear directly in the gitlab job trace. With debug logging turned on, this often results in log files getting truncated because they exceed the maximum amount of log output gitlab allows. If this is a problem, you still have the option to `tee` command output to a file within the artifacts directory, as now each generated job exposes a `user_data` directory as an artifact, which you can fill with whatever you want in your custom job scripts.
There are some changes to be aware of in how pipelines should be set up after this PR:
#### Pipeline generation
Because the pipeline generation job now writes a `spack.lock` artifact to be consumed by generated downstream jobs, `spack ci generate` takes a new option `--artifacts-root`, inside which it creates a `concrete_env` directory to place the lockfile. This artifacts root directory is also where the `user_data` directory will live, in case you want to generate any custom artifacts. If you do not provide `--artifacts-root`, the default is for it to create a `jobs_scratch_dir` within your `CI_PROJECT_DIR` (a gitlab predefined environment variable) or whatever is your current working directory if that variable isn't set. Here's the diff of the PR testing `.gitlab-ci.yml` taking advantage of the new option:
```
$ git diff develop..pipelines-reproducible-builds share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
diff --git a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
index 579d7b56f3..0247803a30 100644
--- a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
+++ b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
@@ -28,10 +28,11 @@ default:
- cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
- spack env activate --without-view .
- spack ci generate --check-index-only
+ --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
artifacts:
paths:
- - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+ - "${CI_PROJECT_DIR}/jobs_scratch_dir"
tags: ["spack", "public", "medium", "x86_64"]
interruptible: true
```
Notice how we replaced the specific pointer to the generated pipeline file with its containing folder, the same folder we passed as `--artifacts-root`. This way anything in that directory (the generated pipeline yaml, as well as the concrete environment directory containing the `spack.lock`) will be uploaded as an artifact and available to the downstream jobs.
#### Rebuild jobs
Rebuild jobs now must activate the concrete environment created by `spack ci generate` and provided via artifacts. When the pipeline is generated, a directory called `concrete_environment` is created within the artifacts root directory, and this is where the `spack.lock` file is written to be passed to the generated rebuild jobs. The artifacts root directory can be specified using the `--artifacts-root` option to `spack ci generate`, otherwise, it is assumed to be `$CI_PROJECT_DIR`. The directory containing the concrete environment files (`spack.yaml` and `spack.lock`) is then passed to generated child jobs via the `SPACK_CONCRETE_ENV_DIR` variable in the generated pipeline yaml file.
When you don't provide custom `script` sections in your `mappings` within the `gitlab-ci` section of your `spack.yaml`, the default behavior of rebuild jobs is now to change into `SPACK_CONCRETE_ENV_DIR` and activate that environment. If you do provide custom rebuild scripts in your `spack.yaml`, be aware those scripts should do the same thing: assume `SPACK_CONCRETE_ENV_DIR` contains the concretized environment to activate. No other changes to existing custom rebuild scripts should be required as a result of this PR.
As mentioned above, one key change made in this PR is the generation of the `install.sh` script by the rebuild jobs, as that same script is both run by the CI rebuild job as well as exported as an artifact to aid in subsequent attempts to reproduce the build outside of CI. The generated `install.sh` script contains only a single `spack install` command with arguments computed by `spack ci rebuild`. If the install fails, the job trace in gitlab will contain instructions on how to reproduce the build locally:
```
To reproduce this build locally, run:
spack ci reproduce-build https://gitlab.next.spack.io/api/v4/projects/7/jobs/240607/artifacts [--working-dir <dir>]
If this project does not have public pipelines, you will need to first:
export GITLAB_PRIVATE_TOKEN=<generated_token>
... then follow the printed instructions.
```
When run locally, the `spack ci reproduce-build` command shown above will download and process the job artifacts from gitlab, then print out instructions you can copy-paste to run a local reproducer of the CI job.
This PR includes a few other changes to the way pipelines work, see the documentation on pipelines for more details.
This PR relies on
~- [ ] #23194 to be able to refer to uninstalled specs by DAG hash~
EDIT: that is going to take longer to come to fruition, so for now, we will continue to install specs represented by a concrete `spec.yaml` file on disk.
- [x] #22657 to support installing a single spec already present in the active, concrete environment
I would like to be able to export (and save, and then load programmatically)
spack blame metadata, so this commit adds a `spack blame --json` argument,
along with developer docs for it.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
This work will come in two phases. The first here is to allow saving of a local result
with spack monitor, and the second will add a spack monitor command so the user can
do spack monitor upload.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).
This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.
As part of this change, the module roots configuration moved from the `config` section to inside each module configuration.
Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
TODO:
- [x] code changes to support multiple module sets
- [x] code changes to support modules relative to a view
- [x] Tests for multiple module configurations
- [x] Tests for modules relative to a view
- [x] Backwards compatibility for module roots from config section
- [x] Backwards compatibility for default module set without the name specified
- [x] Tests for backwards compatibility
In an active concretized environment, support installing one or more
cli specs only if they are already present in the environment. The
`--no-add` option is the default for root specs, but optional for
dependency specs. I.e. if you `spack install <depspec>` in an
environment, the dependency-only spec `depspec` will be added as a
root of the environment before being installed. In addition,
`spack install --no-add <spec>` fails if it does not find an
unambiguous match for `spec`.
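For example (hypothetical environment and spec):
```console
$ spack env activate myenv
$ spack install zlib           # adds zlib as a root of myenv, then installs it
$ spack install --no-add zlib  # fails unless zlib unambiguously matches a spec already in myenv
```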
This provides initial support for [spack monitor](https://github.com/spack/spack-monitor), a web application that stores information and analysis about Spack installations. Spack can now contact a monitor server and upload analysis -- even after a build is already done.
Specifically, this adds:
- [x] monitor options for `spack install`
- [x] `spack analyze` command
- [x] hook architecture for analyzers
- [x] separate build logs (in addition to the existing combined log)
- [x] docs for spack analyze
- [x] reworked developer docs, with hook docs
- [x] analyzers for:
- [x] config args
- [x] environment variables
- [x] installed files
- [x] libabigail
There is a lot more information in the docs contained in this PR, so consult those for full details on this feature.
Additional tests will be added in a future PR.
PRs that change only package recipes will only run tests under "package_sanity.py" and without coverage. This should result in a huge drop in the CPU time spent in CI for most PRs.
* unit tests: mark slow tests as "maybeslow"
This commit also removes the "network" marker and
marks every "network" test as "maybeslow". Tests
marked as db are maintained, but they're not slow
anymore.
* GA: require style tests to pass before running unit-tests
* GA: make MacOS unit tests fail fast
* GA: move all unit tests into the same workflow, run style tests as a prerequisite
All the unit tests have been moved into the same workflow so that a single
run of the dorny/paths-filter action can be used to ask for coverage based
on the files that have been changed in a PR. The basic idea is that for PRs
that introduce only changes to packages, coverage is not necessary; this
results in a faster execution of the tests.
Also, for package-only PRs slow unit tests are skipped.
Finally, macOS and Linux unit tests are now conditional on style tests passing,
meaning that e.g. we won't waste a macOS worker if we know that the PR has
flake8 issues.
* Addressed review comments
* Skipping slow tests on MacOS for package only recipes
* QA: make tests on changes correct before merging
* Rewrite relative dev_spec paths internally to absolute paths in case of relocation of the environment file
* Test relative paths for dev_path in environments
* Add a --keep-relative flag to spack env create
This ensures that relative paths of develop paths are not expanded to
absolute paths when initializing the environment in a different location
from the spack.yaml init file.
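For example (paths hypothetical):
```console
$ spack env create --keep-relative myenv /path/to/spack.yaml
```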
This pull request adds the ability for a user to add a configuration argument on the fly, on the command line, e.g.:
```bash
$ spack -c config:install_tree:root:/path/to/config.yaml -c packages:all:compiler:[gcc] list --help
```
The above command doesn't do anything (I'm just getting help for list), but you can imagine having another root of packages and updating it on the fly for a command (something I'd like to do in the near future!).
I've moved the logic for config_add that used to be in spack/cmd/config.py into spack/config.py proper, and now both main.py (where spack commands live) and spack/cmd/config.py use these functions. I only needed spack config add, so I didn't move the others. We can move the others if they are also needed in multiple places.
This adds a `--path` option to `spack python` that shows the `python`
interpreter that Spack is using.
e.g.:
```console
$ spack python --path
/Users/gamblin2/src/spack/var/spack/environments/default/.spack-env/view/bin/python
```
This is useful for debugging, and we can ask users to run it to
understand what python Spack is picking up via preferences in `bin/spack`
and via the `SPACK_PYTHON` environment variable introduced in #21222.
`spack test list` will show you which *installed* packages can be tested
but it won't show you which packages have tests.
- [x] add `spack test list --all` to show which packages have test methods
- [x] update `has_test_method()` to handle package instances *and*
package classes.
* Allow the bootstrapping of clingo from sources
Allow Python builds with system Python as external
on macOS
* Ensure consistent configuration when bootstrapping clingo
This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.
* Github actions: test clingo with bootstrapping from sources
* Add command to inspect and clean the bootstrap store
Prevent users from setting the install tree root to the bootstrap store
* clingo: documented how to bootstrap from sources
Co-authored-by: Gregory Becker <becker33@llnl.gov>
This allows for quickly configuring a spack install/env to use upstream packages by default. This is particularly important when upstreaming from a set of officially supported spack installs on a production cluster. By configuring package preferences to match the upstream, you ensure maximal reuse of existing package installations.
Fixes for gitlab pipelines
* Remove accidentally retained testing branch name
* Generate pipeline w/out debug mode
* Make jobs interruptible for auto-cancel pending
* Work around concretization conflicts
Drops:
* C_INCLUDE_PATH
* CPLUS_INCLUDE_PATH
* LIBRARY_PATH
* INCLUDE
We already decided to use C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, INCLUDE over CPATH here:
https://github.com/spack/spack/pull/14749
However, none of these flags apply to Fortran on Linux. So for consistency it seems better to make the user use -I and -L flags by hand or through pkgconfig.
Before this change, in pipeline environments where runners do not have access
to persistent shared file-system storage, the only way to pass buildcaches to
dependents in later stages was by using the "enable-artifacts-buildcache" flag
in the gitlab-ci section of the spack.yaml. This change supports a second
mechanism, named "temporary-storage-url-prefix", which can be provided instead
of the "enable-artifacts-buildcache" feature, but the two cannot be used at the
same time. If this prefix is provided (only "file://" and "s3://" urls are
supported), the gitlab "CI_PIPELINE_ID" will be appended to it to create a url
for a mirror where pipeline jobs will write buildcache entries for use by jobs
in subsequent stages. If this prefix is provided, a cleanup job that deletes
the contents of the temporary mirror will be generated to run after all the
rebuild jobs have finished. To support this behavior a new mirror
sub-command has been added: "spack mirror destroy", which can take either a
mirror name or url.
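In the `gitlab-ci` section of `spack.yaml`, this might look like the following (a sketch; the url is illustrative):
```yaml
gitlab-ci:
  # mutually exclusive with enable-artifacts-buildcache
  temporary-storage-url-prefix: s3://my-bucket/pipeline-storage
```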
This change also fixes a bug in generation of "needs" list for each job. Each
jobs "needs" list is supposed to only contain direct dependencies for scheduling
purposes, unless "enable-artifacts-buildcache" is specified. Only in that case
are the needs lists supposed to contain all transitive dependencies. This
change fixes a bug that caused the needs lists to always contain all transitive
dependencies, regardless of whether or not "enable-artifacts-buildcache" was
specified.
Pipelines: DAG pruning
During the pipeline generation staging process we check each spec against all configured mirrors to determine whether it is up to date on any of the mirrors. By default, and with the --prune-dag argument to "spack ci generate", any spec already up to date on at least one remote mirror is omitted from the generated pipeline. To generate jobs for up to date specs instead of omitting them, use the --no-prune-dag argument. To speed up the pipeline generation process, pass the --check-index-only argument. This will cause spack to check only remote buildcache indices and avoid directly fetching any spec.yaml files from mirrors. The drawback is that if the remote buildcache index is out of date, spec rebuild jobs may be scheduled unnecessarily.
This change removes the final-stage-rebuild-index block from gitlab-ci section of spack.yaml. Now rebuilding the buildcache index of the mirror specified in the spack.yaml is the default, unless "rebuild-index: False" is set. Spack assigns the generated rebuild-index job runner attributes from an optional new "service-job-attributes" block, which is also used as the source of runner attributes for another generated non-build job, a no-op job, which spack generates to avoid gitlab errors when DAG pruning results in empty pipelines.
The SPACK_PYTHON environment variable can be set to a python interpreter to be
used by the spack command. This allows the spack command itself to use a
consistent and separate interpreter from whatever python might be used for package
building.
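For example:
```console
$ SPACK_PYTHON=/usr/bin/python3 spack spec zlib
```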
* Procedure to deprecate old versions of software
* Add documentation
* Fix bug in logic
* Update tab completion
* Deprecate legacy packages
* Deprecate old mxnet as well
* More explicit docs
This commit adds an option to the `external find`
command that allows it to search by tags. In this
way, groups of executables with a common purpose can
be grouped under a single name, and a simple command
can be used to detect all of them.
As an example, this introduces the 'build-tools' tag to
search for common development tools on a system.
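A hypothetical invocation (flag spelling assumed from the description above):
```console
$ spack external find --tag build-tools
```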
This adds a -i option to "spack python" which allows use of the
IPython interpreter; it can be used with "spack python -i ipython".
This assumes it is available in the Python instance used to run
Spack (i.e. that you can "import IPython").
- [x] add `concretize.lp`, `spack.yaml`, etc. to licensed files
- [x] update all licensed files to say 2013-2021 using
`spack license update-copyright-year`
- [x] appease mypy with some additions to package.py that were needed
for oneapi.py
This adds a new subcommand to `spack license` that automatically updates
the copyright year in files that should have a license header.
- [x] add `spack license update-copyright-year` command
- [x] add test
I lost my mind a bit after getting the completion stuff working and
decided to get Mypy working for spack as well. This adds a
`.mypy.ini` that checks all of the spack and llnl modules, though
not yet packages, and fixes all of the identified missing types and
type issues for the spack library.
In addition to these changes, this includes:
* rename `spack flake8` to `spack style`
Aliases flake8 to style, and just runs flake8 as before, but with
a warning. The style command runs both `flake8` and `mypy`,
in sequence. Added --no-<tool> options to turn off one or the
other, they are on by default. Fixed two issues caught by the tools.
* stub typing module for python2.x
We don't support typing in Spack for python 2.x. To allow 2.x to
support `import typing` and `from typing import ...` without a
try/except dance to support old versions, this adds a stub module
*just* for python 2.x. Doing it this way means we can only reliably
use all type hints in python3.7+, and `.mypy.ini` has been updated to
reflect that.
* add non-default black check to spack style
This is a first step to requiring black. It doesn't enforce it by
default, but it will check it if requested. Currently enforcing the
line length of 79 since that's what flake8 requires, but it's a bit odd
for a black formatted project to be quite that narrow. All settings are
in the style command since spack has no pyproject.toml and I don't
want to add one until more discussion happens. Also re-format
`style.py` since it no longer passed the black style check
with the new length.
* use style check in github action
Update the style and docs action to use `spack style`, adding in mypy
and black to the action even if it isn't running black right now.
This PR does three related things to try to improve developer tooling quality of life:
1. Adds new options to `.flake8` so it applies the rules of both `.flake8` and `.flake_package` based on paths in the repository.
2. Adds a re-factoring of the `spack flake8` logic into a flake8 plugin so using flake8 directly, or through editor or language server integration, only reports errors that `spack flake8` would.
3. Allows star import of `spack.pkgkit` in packages, since this is now the thing that needs to be imported for completion to work correctly in package files, it's nice to be able to do that.
I'm sorely tempted to sed over the whole repository and put `from spack.pkgkit import *` in every package, but at least being allowed to do it on a per-package basis helps.
As an example of what the result of this is:
```
~/Workspace/Projects/spack/spack develop* ⇣
❯ flake8 --format=pylint ./var/spack/repos/builtin/packages/kripke/package.py
./var/spack/repos/builtin/packages/kripke/package.py:6: [F403] 'from spack.pkgkit import *' used; unable to detect undefined names
./var/spack/repos/builtin/packages/kripke/package.py:25: [E501] line too long (88 > 79 characters)
~/Workspace/Projects/spack/spack refactor-flake8*
1 ❯ flake8 --format=spack ./var/spack/repos/builtin/packages/kripke/package.py
~/Workspace/Projects/spack/spack refactor-flake8*
❯ flake8 ./var/spack/repos/builtin/packages/kripke/package.py
```
* qa/flake8: update .flake8, spack formatter plugin
Adds:
* Modern flake8 settings for per-path/glob error ignores, allows
packages to use the same `.flake8` as the rest of spack
* A spack formatter plugin to flake8 that implements the behavior of
`spack flake8` for direct invocations. Makes integration with
developer tooling nicer, linting with flake8 reports only errors that
`spack flake8` would report. Using pyls and pyls-flake8, or any other
non-format-dependent flake8 integration, now works with spack's rules.
* qa/flake8: allow star import of spack.pkgkit
To get working completion of directives and spack components it's
necessary to import the contents of spack.pkgkit. At the moment doing
this makes flake8 displeased. For now, allow spack.pkgkit and spack
both, next step is to ban spack * and require spack.pkgkit *.
* first cut at refactoring spack flake8
This version still copies all of the files to be checked as before, and
some other things that probably aren't necessary, but it relies on the
spack formatter plugin to implement the ignore logic.
* keep flake8 from rejecting itself
* remove separate packages flake8 config
* fix failures from too many files
I ran into this in the PR converting pkgkit to std. The solution in
that branch does not work in all cases as it turns out, and all the
workarounds I tried to use generated configs to get a single invocation
of flake8 with a filename option to work failed. It's an astonishingly
frustrating config option.
Regardless, this removes all temporary file creation from the command
and relies on the plugin instead. To work around the huge number of
files in spack and still allow the command to control what gets checked,
it scans files in batches of 100. This is a completely arbitrary number
but was chosen to be safely under common line-length limits. One
side-effect of this is that every 100 files the command will produce
output, rather than only at the end, which doesn't seem like a terrible
thing.
Since zsh can load bash completion files natively, seems reasonable to just turn this on.
The only changes are to switch from `type -t` which zsh doesn't support to using `type`
with a regex and adding a new arm to the sourcing of the completions to allow it to work
for zsh as well as bash.
Could use more bash/dash/etc testing probably, but everything I've thought to try has
worked so far.
Notes:
* unit-test zsh support, fix issues
Specifically fixed word splitting in completion-test, use a different
method to apply sh emulation to zsh loaded bash completion, and fixed
an incompatibility in regex operator quoting requirements.
* compinit now ignores insecure directories
Completion isn't meant to be enabled in non-interactive environments, so
by default compinit will ask the user if they want to ignore insecure
directories or load them anyway. To pass the spack unit tests in GH
actions, this prompt must be disabled, so ignore explicitly until a
better solution can be found.
* debug functions test also requires bash emulation
COMP_WORDS is a bash-ism that zsh doesn't natively support, turn on
emulation for just that section of tests to allow the comparison to
work. Does not change the behavior of the functions themselves since
they are already pinned to sh emulation elsewhere.
* propagate change to .in file
* fix comment and update script based on .in
* [cmd versions] add spack versions --new flag to only fetch new versions
* [cmd versions] rename --latest to --newest and add --remote-only
* [cmd versions] add tests for --remote-only and --new
* [cmd versions] update shell tab completion
* [cmd versions] remove test for --remote-only --new, which gives empty output
* [cmd versions] final rename
* add brillig mock package
* add test for spack versions --new
* [brillig] format
* [versions] increase test coverage
* Update lib/spack/spack/cmd/versions.py
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Update lib/spack/spack/cmd/versions.py
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* allow install of build-deps from cache via --include-build-deps switch
* make clear that --include-build-deps is useful for CI pipeline troubleshooting
* added dockerfile for opensuse leap 15
* updated maintainer info
* Update share/spack/docker/leap-15.dockerfile
* move copies and symlinks after package install
also use ${SPACK_ROOT} for spack calls as
this works with buildah
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This adds a new `mark` command that can be used to mark packages as either
explicitly or implicitly installed. Apart from fixing the package
database after installing a dependency manually, it can be used to
implement upgrade workflows as outlined in #13385.
The following commands demonstrate how the `mark` and `gc` commands can be
used to only keep the current version of a package installed:
```console
$ spack install pkgA
$ spack install pkgB
$ git pull # Imagine new versions for pkgA and/or pkgB are introduced
$ spack mark -i -a
$ spack install pkgA
$ spack install pkgB
$ spack gc
```
If there is no new version for a package, `install` will simply mark it as
explicitly installed and `gc` will not remove it.
Co-authored-by: Greg Becker <becker33@llnl.gov>
Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). spack test is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run smoke
tests on all of its packages. Historical test logs can be perused with
`spack test results`. Generic smoke tests are included for MPI
implementations and for C, C++, and Fortran compilers, as well as
specific smoke tests for 18 packages.
Inside the test method, individual tests can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running and checking return codes, checking output, and error handling.
This handles the following trickier aspects of testing with direct
support in Spack's package API:
- [x] Caching source or intermediate build files at build time for
use at test time.
- [x] Test dependencies.
- [x] Packages that require a compiler for testing (such as library-only
packages).
See the packaging guide for more details on using Spack testing support.
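To make the API concrete, here is a minimal sketch of a package-level
smoke test, assuming a package that installs an `example` executable
(the executable name, option, and expected output are illustrative):
```python
def test(self):
    # Hypothetical smoke test in a package.py, using the run_test
    # method described above.  Finds `example` in the installation,
    # runs `example --version`, checks that the installed version
    # appears in its output, and expects a zero return code.
    self.run_test(
        "example",
        options=["--version"],
        expected=[str(self.spec.version)],
        status=0,
        purpose="check that example reports its version",
    )
```
Because each `run_test` call continues best-effort after a failure,
several such checks can be chained inside one `test` method.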
Included is support for package.py files for virtual packages. This does
not change the Spack interface, but is a major change in internals.
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This PR reworks a few attributes in the container subsection of
spack.yaml to permit the injection of custom base images when
generating containers with Spack. In more detail, users can still
specify the base operating system and Spack version they want to use:
```yaml
spack:
  container:
    images:
      os: ubuntu:18.04
      spack: develop
```
in which case the generated recipe will use one of the Spack images
built on Docker Hub for the build stage and the base OS image in the
final stage. Alternatively, they can specify explicitly the two
base images:
```yaml
spack:
  container:
    images:
      build: spack/ubuntu-bionic:latest
      final: ubuntu:18.04
```
and it will be up to them to ensure their consistency.
Additional changes:
* This commit adds documentation on the two approaches.
* Users can now specify OS packages to install (e.g. with apt or yum)
prior to the build (previously this was only available for the
finalized image).
* Configuration options to skip updating the available system packages
have been added, to facilitate the generation of recipes that permit
deterministic builds.
- [x] Solver now uses the Python interface to clingo
- [x] can extract unsatisfiable cores from problems when things go wrong
- [x] use Python callbacks for versions instead of choice rules (this may
ultimately hurt performance)
This change makes improvements to the `spack ci rebuild` command
which supports running gitlab pipelines on PRs from forks. Much
of this has to do with making sure we can run without the secrets
previously required for running gitlab pipelines (e.g signing key,
aws credentials, etc). Specific improvements in this PR:
Check if spack has precisely one signing key, and use that information
as an additional constraint on whether or not we should attempt to sign
the binary package we create.
Also, if spack does not have at least one public key, add the install
option "--no-check-signature"
If we are running a pipeline without any profile or environment
variables allowing us to push to S3, the pipeline could still
successfully create a buildcache in the artifacts and move on. So
just print a message and move on if pushing either the buildcache
entry or cdash id file to the remote mirror fails.
When we attempt to generate a package or gpg key index on an S3
mirror, and there is nothing to index, just print a warning and
exit gracefully rather than throw an exception.
Support the use of PR-specific mirrors for temporary binary pkg
storage. This will allow quality-of-life improvement for developers,
providing a place to store binaries over the lifetime of a PR, so
that they only wait for packages to rebuild from source when they
push a new commit that makes a rebuild necessary.
Replace two-pass install with a single pass and the new option:
--require-full-hash-match. Doing this also removes the need to
save a copy of the spack.yaml to be copied over the one spack
rewrites in between the two spack install passes.
Work around a mirror configuration issue caused by using
spack.util.executable to do the package installation.
* Update pipeline trigger jobs for PRs from forks
Moving to PRs from forks relies on external synchronization script
pushing special branch names. Also secrets will only live on the
spack mirror project, and must be propagated to the E4S project via
variables on the trigger jobs.
When this change is merged, pipelines will not run until we update
the "Custom CI configuration path" in the Gitlab CI Settings, as the
name of the file has changed to better reflect its purpose.
* Arg to MirrorCollection is used exclusively, so add main remote mirror to it
* Compute full hash less frequently
* Add tests covering index generation error handling code
Added a command to set up Spack for our tutorial at
https://spack-tutorial.readthedocs.io.
The command does some common operations we need first-time users to do.
Specifically:
- checks out a particular branch of Spack
- deletes spurious configuration in `~/.spack` that might be
left over from prior parts of the tutorial
- adds a mirror and trusts its public key
* "spack install" now has a "--require-full-hash-match" option, which
forces Spack to skip an available binary package when the full hash
doesn't match. Normally only a DAG-hash match is required, which
ensures equivalent Specs, but does not account for changing logic
inside the associated package.
* Add a local binary cache index which tracks specs that have a binary
install available in a remote binary cache. It is updated with
"spack buildcache list" or for a given spec when a binary package
is retrieved for that Spec.
Zsh and newer versions of bash have a builtin `which` function that will
show you if a command is actually an alias or a function. For functions,
the entire function is printed, and our `spack()` function is quite long.
Instead of printing out all that, make the `spack()` function a wrapper
around `_spack_shell_wrapper()`, and include some no-ops in the
definition so that users can see where it was created and where Spack is
installed.
Here's what the new output looks like in zsh:
```console
$ which spack
spack () {
: this is a shell function from: /Users/gamblin2/src/spack/share/spack/setup-env.sh
: the real spack script is here: /Users/gamblin2/src/spack/bin/spack
_spack "$@"
return $?
}
```
Note that `:` is a no-op in Bourne shell; it simply ignores its
arguments. We use it here to embed paths in the function definition
(as comments are stripped when shells print function definitions).
* ADD: testing to dev-build command
* RM: mutually exclusive group for testing in parser
* FIX: test option to subparser and not testing
* ADD: spack-completion.bash
* RM: local devbuildcosmo cmd
* FIX: bad merge --drop-in -b --before options forgotten
* FIX: --test place in spack-completion.bash
* FIX: typo
* FIX: blank line removing
* FIX: trailing white space
Co-authored-by: Elsa Germann <egermann@tsa-ln002.cm.cluster>
* allow environments to specify dev-build packages (see the sketch after this list)
* spack develop and spack undevelop commands
* never pull dev-build packages from bincache
* reinstall dev_specs when code has changed; reinstall dependents too
* preserve dev info paths and versions in concretization as special variant
* move install overwrite transaction into installer
* move dev-build argument handling to package.do_install
now that specs are dev-aware, package.do_install can add
necessary args (keep_stage=True, use_cache=False) to dev
builds. This simplifies driving logic in cmd and env._install
* allow 'any' as wildcard for variants
* spec: allow anonymous dependencies
raise an error when constraining by or normalizing an anonymous dep
refactor concretize_develop to remove dev_build variant
refactor tests to check for ^dev_path=any instead of +dev_build
* fix variant class hierarchy
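A hedged sketch of what the dev-build support above might look like in
an environment's spack.yaml (the `develop` section layout is assumed
from this PR; the package name and path are illustrative):
```yaml
spack:
  specs:
  - mypackage@develop
  develop:
    mypackage:
      spec: mypackage@develop
      path: /path/to/local/mypackage
```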
This reverts #18359 and follow-on PRs intended to address issues with
#18359 because that PR changes the hash of all specs. A future PR will
reintroduce the changes.
* Revert "Fix location in spec.yaml where we look for full_hash (#19132)"
* Revert "Fix fetch of spec.yaml files from buildcache (#19101)"
* Revert "Merge pull request #18359 from scottwittenburg/add-binary-distribution-cache-manager"
This change makes sure that when we run the pipeline job that updates
the buildcache package index on the remote mirror, we also update the
key index. The public keys corresponding to the signing keys used to
sign the packages were pushed to the mirror as a part of creating the
buildcache index, so this is just ensuring those keys are reflected
in the key index.
Also, this change makes sure the "spack buildcache update-index"
job runs even when there may have been pipeline failures, since we
would like the index always to reflect the true state of the mirror.
* Rework spack.util.web.list_url()
list_url() now accepts an optional recursive argument (default: False)
for controlling whether to only return files within the prefix url or to
return all files whose path starts with the prefix url. This allows the
most efficient implementation for the given prefix url scheme. For
example, only recursive queries are supported for S3 prefixes, so the
returned list is trimmed down if recursive == False, but the native
search is returned as-is when recursive == True. Suitable
implementations for each case are also used for file system URLs. (See
the usage sketch after this list.)
* Switch to using an explicit index for public keys
Switches to maintaining a build cache's keys under build_cache/_pgp.
Within this directory is an index.json file listing all the available
keys and a <fingerprint>.pub file for each such key.
- Adds spack.binary_distribution.generate_key_index()
- (re)generates a build cache's key index
- Modifies spack.binary_distribution.build_tarball()
- if tarball is signed, automatically pushes the key used for signing
along with the tarball
- if regenerate_index == True, automatically (re)generates the build
cache's key index along with the build cache's package index; as in
spack.binary_distribution.generate_key_index()
- Modifies spack.binary_distribution.get_keys()
- a build cache's key index is now used instead of programmatic
listing
- Adds spack.binary_distribution.push_keys()
- publishes keys from Spack's keyring to a given list of mirrors
- Adds new spack subcommand: spack gpg publish
- publishes keys from Spack's keyring to a given list of mirrors
- Modifies spack.util.gpg.Gpg.signing_keys()
- Accepts optional positional arguments for filtering the set of keys
returned
- Adds spack.util.gpg.Gpg.public_keys()
- As spack.util.gpg.Gpg.signing_keys(), except public keys are
returned
- Modifies spack.util.gpg.Gpg.export_keys()
- Fixes an issue where GnuPG would prompt for user input if trying to
overwrite an existing file
- Modifies spack.util.gpg.Gpg.untrust()
- Fixes an issue where GnuPG would fail on inputs that were not key
fingerprints
- Modifies spack.util.web.url_exists()
- Fixes an issue where url_exists() would throw instead of returning
False
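A usage sketch of the reworked `list_url()` mentioned at the top of
this list (the mirror URL is illustrative; the `recursive` keyword is
as described above):
```python
import spack.util.web as web

# only entries directly under the prefix
shallow = web.list_url("s3://my-mirror/build_cache", recursive=False)

# every entry whose path starts with the prefix
deep = web.list_url("s3://my-mirror/build_cache", recursive=True)
```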
* rework gpg module/fix error with very long GNUPGHOME dir
* add a shim for functools.cached_property
* handle permission denied error in gpg util
* fix tests/make gpgconf optional if no socket dir is available
* trigger ascent e4s pipeline on merge to spack develop
* change pipeline name: ecpcitest/e4s is the pipeline that will be triggered on merges to develop; it is the E4S use-case.
`spack install --yes-to-all` doesn't actually make the build non-interactive,
but that is why people typically use it. This documents that you must also
specify `--no-checksum` for a fully non-interactive build.
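Per that note, a fully non-interactive install combines both flags,
e.g. (spec illustrative):
```console
$ spack install --yes-to-all --no-checksum zlib
```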
* spack config: default modification scope can be an environment
The previous model was that environments are the highest priority config
scope for config reading operations, but were not considered for config
writing operations. Now, the active environment is the highest priority
config scope for both reading and writing operations.
Now spack config add, spack external find and spack compiler set environment
configuration in the environment by default if an environment is active. This is a
change in default behavior for these routines, but better matches the mental
model for an environment taking precedence over the user's default config file.
* add scope argument to 'spack external find' to choose non-default scope
* Increase testing for config modifications on environments
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Packages can implement `determine_version` to support detection
of external instances of a package. This is generally easier
than implementing `determine_spec_details`. The API for
`determine_version` is similar: for example you can return
`None` to indicate that an executable is not an instance
of a package.
Users may implement a `determine_variants` method for a package.
When doing external detection, executables are grouped by version
and each group results in a single invocation of `determine_variants`
for the associated spec. The method returns a string specifying
the variants for the package. The method may additionally return
a dictionary representing extra attributes for the package.
These will be stored in the spec YAML and can be retrieved
from `self.spec.extra_attributes`.
The Spack GCC package has been updated with an implementation
of `determine_variants` which adds the following extra
attributes to the package: c, cxx, fortran
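A hedged sketch of these hooks in a package.py (the package name,
regexes, variant logic, and extra-attribute keys are illustrative, not
Spack's actual GCC implementation):
```python
import re

from spack.package import Package  # base class, as in any package.py
from spack.util.executable import Executable


class Foo(Package):
    # regular expressions matched against executable names on PATH
    executables = [r"^foo$"]

    @classmethod
    def determine_version(cls, exe):
        # return a version string, or None if `exe` is not really foo
        output = Executable(exe)("--version", output=str, error=str)
        match = re.search(r"foo version (\S+)", output)
        return match.group(1) if match else None

    @classmethod
    def determine_variants(cls, exes, version_str):
        # called once per group of executables sharing a version;
        # optionally return extra attributes, later available as
        # self.spec.extra_attributes
        variants = "+shared"
        extra_attributes = {"paths": {"foo": exes[0]}}
        return variants, extra_attributes
```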
The YAML config for paths and modules of external packages has
changed: the new format allows a single spec to load multiple
modules. Spack will automatically convert from the old format
when reading the configs (the updates do not add new essential
properties, so this change in Spack is backwards-compatible).
With this update, Spack cannot modify existing configs/environments
without updating them (e.g. `spack config add` will fail if the
configuration is in a format that predates this PR). The user is
prompted to do this explicitly and commands are provided. All
config scopes can be updated at once. Each environment must be
updated one at a time.
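A sketch of the new format (package, version, and module names are
illustrative), showing a single external spec that loads multiple
modules:
```yaml
packages:
  mpich:
    externals:
    - spec: mpich@3.3
      modules:
      - cray-mpich/3.3
      - craype-network-aries
```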
* Run Python2.6 unit tests on Github Actions
* Skip url tests on Python 2.6 to reduce waiting times
* Skip foreground background tests on Python 2.6 to reduce waiting times
* Removed references to Travis in the documentation
* Deleted install_patchelf.sh (can be installed from repo on CentOS 6)
* add tutorial setup script to share/spack
* Add check for Ubuntu 18, fix xvda check, fix apt-get errors
- now works on t2.micro, t2.small, and m instances
- apt-get needs retries around it to work
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Buildcache:
* Try mocking an install of quux, corge and garply using prebuilt binaries
* Put patchelf install after ccache restore
* Add script to install patchelf from source so it can be used on Ubuntu:Trusty, which does not have a patchelf package. The script will skip building on macOS
* Remove mirror at end of bindist test
* Add patchelf to Ubuntu build env
* Revert mock patchelf package to allow other tests to run.
* Remove depends_on('patchelf', type='build'), relying instead on a
test fixture to ensure patchelf is available.
* Call g++ command to build libraries directly during test build
* Flake8
* Install patchelf in before_install stage using apt unless on Trusty where a build is done.
* Add some symbolic links between packages
* Flake8
* Flake8:
* Update mock packages to write their own source files
* Create the stage because spec search does not create it any longer
* updates after change of list command arguments
* cleanup after merge
* flake8
On Cray platforms, we rely heavily on the module system to figure out
what targets, compilers, etc. are available. This unfortunately means
that we shell out to the `module` command as part of platform
initialization.
Because we run subcommands in a shell, we can get infinite recursion if
`setup-env.sh` and friends are in some init script like `.bashrc`.
This fixes the infinite loop by adding guards around `setup-env.sh`,
`setup-env.csh`, and `setup-env.fish`, to prevent recursive
initializations of Spack. This is safe because Spack never shells out to
itself, so we do not need it to be initialized in subshells.
- [x] add recursion guard around `setup-env.sh`
- [x] add recursion guard around `setup-env.csh`
- [x] add recursion guard around `setup-env.fish`
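The guard itself is the usual shell idiom, sketched here with a
hypothetical variable name (the real scripts choose their own):
```sh
# at the top of the sourced script: bail out if already initializing
if [ -n "${_spack_initializing:-}" ]; then
    return 0
fi
_spack_initializing=true

# ... the rest of setup-env.sh ...

unset _spack_initializing
```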
* Move flake8 tests on Github Actions
* Move shell test to Github Actions
* Moved documentation build to Github Action
* Don't run coverage on Python 2.6
Since we get connection errors consistently on Travis
when trying to upload coverage results for Python 2.6,
avoid computing coverage entirely to speed-up tests.
* Activate environment in container file
This PR will ensure that the container recipes will build the spack
environment by first activating the environment.
* Deactivate environment before environment collection
For Singularity, the environment must be deactivated before running the
command to collect the environment variables. This is because the
environment collection uses `spack env activate`.
* share/spack/setup-env.fish file to setup environment in fish shell
* setup-env.fish testing script
* Update share/spack/setup-env.fish
Co-Authored-By: Elsa Gonsiorowski, PhD <gonsie@me.com>
* Update share/spack/qa/setup-env-test.fish
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* updates completions using `spack commands --update-completion`
* added stderr-nocaret warning
* added fish shell tests to CI system
Co-authored-by: becker33 <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Elsa Gonsiorowski, PhD <gonsie@me.com>
* Start moving toward a json buildcache index
* Add spec and database index schemas
* Add a schema for buildcache spec.yaml files
* Provide a mode for database class to generate buildcache index
* Update db and ci tests to validate object w/ new schema
* Remove unused temporary upload-s3 command
* Use database class to generate buildcache index
* Do not generate index with each buildcache creation
* Make buildcache index mode into a couple of constructor args to Database class
* Use keyword args for _createtarball
* Parse new json index when we get specs from buildcache
Now that only one index file per mirror needs to be fetched in
order to have all the concrete specs for binaries available on the
mirror, we can just fetch and refresh the cached specs every time
instead of needing to use the '-f' flag to force re-reading.
spack config add <value>: add the given nested value to the specified
configuration scope
spack config remove/rm: remove the specified configuration from the
relevant scope
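For example, assuming the colon-separated path syntax (the path is
illustrative):
```console
$ spack config add config:install_tree:/path/to/tree
$ spack config remove config:install_tree
```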
* Added unit tests to Github Actions
* Set user e-mail and name for git tests to succeed
* Simplify setup.sh logic
* Replicate Travis script on Github Actions
* Update flags since '.' is not allowed
* Added badge, simplified workflow
* Remove pinning of coverage
* Remove unit tests run on Github Actions from Travis
This fixes a fork bomb in `spack versions`. Recursive generation of pools
to scrape URLs in `_spider` was creating large numbers of processes.
Instead of recursively creating process pools, we now use a single
`ThreadPool` with a concurrency limit.
More on the issue: having ~10 users running `spack versions` at the
same time on front-end nodes caused a kernel lockup due to the high
number of sockets opened (sys-admins reported ~210k distributed over 3
nodes). Users were internal, so they had `ulimit -n` set to ~70k.
The forking behavior could be observed by just running:
$ spack versions boost
and checking the number of processes spawned. Number of processes
per se was not the issue, but each one of them opens a socket
which can stress `iptables`.
In the original issue the kernel watchdog was reporting:
Message from syslogd@login03 at May 19 12:01:30 ...
kernel:Watchdog CPU:110 Hard LOCKUP
Message from syslogd@login03 at May 19 12:01:31 ...
kernel:watchdog: BUG: soft lockup - CPU#110 stuck for 23s! [python3:2756]
Message from syslogd@login03 at May 19 12:01:31 ...
kernel:watchdog: BUG: soft lockup - CPU#94 stuck for 22s! [iptables:5603]
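The fix's pattern, sketched generically (this is not Spack's actual
`_spider` code; `fetch_page` and the worker count are illustrative):
one bounded `ThreadPool` shared across the whole crawl, instead of a
new pool per recursion level.
```python
from multiprocessing.pool import ThreadPool
from urllib.request import urlopen

MAX_WORKERS = 16  # illustrative concurrency limit


def fetch_page(url):
    # each call opens one socket; the pool bounds how many are open at once
    with urlopen(url) as response:
        return response.read()


def spider(urls):
    # one shared, fixed-size pool for the entire crawl
    pool = ThreadPool(processes=MAX_WORKERS)
    try:
        return pool.map(fetch_page, urls)
    finally:
        pool.close()
        pool.join()
```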
* add an --exclude-file option to 'spack mirror create' which allows a user to specify a file of specs to exclude when creating a mirror. this is anticipated to be useful especially when using the '--all' option
* allow specifying number of versions when mirroring all packages
* when mirroring all specs within an environment, include dependencies of root specs
* add '--exclude-specs' option to allow user to specify that specs should be excluded on the command line
* add test for excluding specs
This change also adds a code path through the spack ci pipelines
infrastructure which supports PR testing on the Spack repository.
Gitlab pipelines run as a result of a PR (either creation or pushing
to a PR branch) will only verify that the packages in the environment
build without error. When the PR branch is merged to develop,
another pipeline will run which results in the generated binaries
getting pushed to the binary mirror.
Modifications:
- [x] Travis now uses `bionic` as a default (`xenial` used for Python 3.5, `trusty` for Python 2.6)
- [x] Shell unit tests have been factored into their own run
- [x] `kcov` is built only for tests that upload coverage results
Overall with this we shave 3-4 mins. on each run and add an additional run of about 3 min. For some reason `kcov` 38 fails forwarding output when used with Python unit tests, so I used v34 for that and v38 (latest) for shell testing. Previously we were using v25.
* Non-interactive mode for spack checksum; allow passing 'package@version' to spack checksum
* Flake8 fixes
* Update checksum.py
Fix typo
* Update spack-completion script
* Automatically set non-interactive mode if more than one version passed
* Update lib/spack/spack/cmd/checksum.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add documentation and update spack-completion
* Flake8
* Rename option
* Update spack-completion
* Update lib/spack/spack/cmd/checksum.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update checksum.py
* Update stage.py
* Update create.py
Use batch mode when adding a new package
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Add a `spack external find` command that tries to populate
`packages.yaml` with external packages from the user's `$PATH`. This
focuses on finding build dependencies. Currently, support has only been
added for `cmake`.
For a package to be discoverable with `spack external find`, it must define:
* an `executables` class attribute containing a list of
regular expressions that match executable names.
* a `determine_spec_details(prefix, specs_in_prefix)` method
Spack will call `determine_spec_details()` once for each prefix where
executables are found, passing in the path to the prefix and the path to
all found executables. The package is responsible for invoking the
executables and figuring out what type of installation(s) are in the
prefix, and returning one or more specs (each with version, variants or
whatever else the user decides to include in the spec).
The found specs and prefixes will be added to the user's `packages.yaml`
file. Providing the `--not-buildable` option will mark all generated
entries in `packages.yaml` as `buildable: False`
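A minimal sketch of a discoverable package under this API (`cmake` is
the example the text mentions; the version-parsing details are
illustrative):
```python
import os.path
import re

from spack.package import Package  # base class, as in any package.py
from spack.spec import Spec
from spack.util.executable import Executable


class Cmake(Package):
    # regular expressions matched against executable names on the PATH
    executables = ["cmake"]

    @classmethod
    def determine_spec_details(cls, prefix, exes_in_prefix):
        # called once per prefix in which matching executables were found
        exe = next(
            (x for x in exes_in_prefix if os.path.basename(x) == "cmake"),
            None,
        )
        if exe is None:
            return None
        output = Executable(exe)("--version", output=str, error=str)
        match = re.search(r"cmake version (\S+)", output)
        if match:
            return Spec("cmake@{0}".format(match.group(1)))
```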
* dev-build: --drop-in <shell>
Add a `--drop-in <shell>` option to `spack dev-build`.
This option will automatically run a
`spack build-env <spec> -- <shell>` at the end of a `dev-build`, e.g.
to quickly drop-and-devel into a build phase of a package.
Example usage:
```
spack dev-build --before cmake --drop-in bash openpmd-api@develop
```
* build_env: drop in unit test
Co-authored-by: Greg Becker <becker33@llnl.gov>
Since #16132, we've consolidated the setting of FORCE_UNSAFE_CONFIGURE to
`autotools.py`, so we don't need to use it in packages like `coreutils`,
in our commands, or in our container recipes.
- [x] Remove FORCE_UNSAFE_CONFIGURE from packages
- [x] Remove FORCE_UNSAFE_CONFIGURE from container recipes
- [x] Remove FORCE_UNSAFE_CONFIGURE from `spack ci` command
`DYLD_LIBRARY_PATH` can frequently break builtin macOS software when
pointed at Spack libraries. This is because it takes *higher* precedence
than the default library search paths, which are used by system software.
`DYLD_FALLBACK_LIBRARY_PATH`, on the other hand, takes lower precedence.
At first glance, this might seem bad, because the software installed by
Spack in an environment needs to find *its* libraries, and it should not
use the defaults. However, Spack's installations are always `RPATH`'d,
so they do not have this problem.
`DYLD_FALLBACK_LIBRARY_PATH` is thus useful for things built in an
environment that need to use Spack's libraries, that don't set *their*
RPATHs correctly for whatever reason. We now prefer it to
`DYLD_LIBRARY_PATH` in modules and in environments because it helps a
little bit, and it is much less intrusive.
If a user invoked "spack env activate example-henv", Spack would
mistakenly interpret the "-h" from "example-henv" as the "-h" option.
This commit allows users to create and activate environments with
"-h" in the name.
This issue existed for bash shell support as well as csh support, and
this commit addresses both, along with some other unrelated csh
support issues.
* add --skip-unstable-versions option to 'spack mirror create' which skips sources/resources for packages if their version is not stable (i.e. if they are the head of a git branch rather than a fixed commit)
* '--skip-unstable-versions' should skip all VCS sources/resources, not just those which are not cachable
* Buildcache command: add install option -o/--otherarch
This will allow matching specs from other archs, for example
installing macOS buildcaches on linux hosts.
* spack commands --update-completion
It's often useful to run a module with `python -m`, e.g.:
python -m pyinstrument script.py
Running a python script this way was hard, though, as `spack python` did
not have a similar `-m` option. This PR adds a `-m` option to `spack
python` so that we can do things like this:
spack python -m pyinstrument ./test.py
This makes it easy to write a script that uses a small part of Spack and
then profile it. Previously the easiest way to do this was to write a
custom Spack command, which is often overkill.
This commit introduces a `--no-check-signature` option for
`spack install` so that unsigned packages can be installed. It is
off by default (signatures required).
This PR adds a new command to Spack:
```console
$ spack containerize -h
usage: spack containerize [-h] [--config CONFIG]
creates recipes to build images for different container runtimes
optional arguments:
-h, --help show this help message and exit
--config CONFIG configuration for the container recipe that will be generated
```
which takes an environment with an additional `container` section:
```yaml
spack:
specs:
- gromacs build_type=Release
- mpich
- fftw precision=float
packages:
all:
target: [broadwell]
container:
# Select the format of the recipe e.g. docker,
# singularity or anything else that is currently supported
format: docker
# Select from a valid list of images
base:
image: "ubuntu:18.04"
spack: prerelease
# Additional system packages that are needed at runtime
os_packages:
- libgomp1
```
and turns it into a `Dockerfile` or a Singularity definition file, for instance:
```Dockerfile
# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-bionic:prerelease as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - gromacs build_type=Release" \
&& echo " - mpich" \
&& echo " - fftw precision=float" \
&& echo " packages:" \
&& echo " all:" \
&& echo " target:" \
&& echo " - broadwell" \
&& echo " config:" \
&& echo " install_tree: /opt/software" \
&& echo " concretization: together" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps and strip executables
RUN cd /opt/spack-environment && spack install && spack autoremove -y
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
xargs file -i | \
grep 'charset=binary' | \
grep 'x-executable\|x-archive\|x-sharedlib' | \
awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM ubuntu:18.04
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
RUN apt-get -yqq update && apt-get -yqq upgrade \
&& apt-get -yqq install libgomp1 \
&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
```