Commit graph

320 commits

Adam J. Stewart
aa8e026242
spack setup: remove the command for v0.17.0 (#20277)
spack setup was deprecated in 0.16 and will be removed in 0.17

Follow-up to #18240
2021-01-27 09:24:09 +01:00
Jamie Finney
3f0f79d2c9
Remove ascent gitlab trigger (#20755)
Remove the ORNL Ascent gitlab trigger 

CI will now be done internally via periodic builds.
2021-01-08 12:57:50 -07:00
Vanessasaurus
67ce1939a3
spack python: allow use of IPython (#20329)
This adds a -i option to "spack python" which allows use of the
IPython interpreter; it can be used with "spack python -i ipython".
This assumes it is available in the Python instance used to run
Spack (i.e. that you can "import IPython").
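
A minimal usage sketch of the new option (assuming IPython is importable
in Spack's Python):

```console
$ spack python -i ipython
```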
2021-01-05 16:54:47 -08:00
Todd Gamblin
a8ccb8e116 copyrights: update all files with license headers for 2021
- [x] add `concretize.lp`, `spack.yaml`, etc. to licensed files
- [x] update all licensed files to say 2013-2021 using
      `spack license update-copyright-year`
- [x] appease mypy with some additions to package.py that were needed
      for oneapi.py
2021-01-02 12:12:00 -08:00
Todd Gamblin
78f39bdfee commands: add spack license update-copyright-year
This adds a new subcommand to `spack license` that automatically updates
the copyright year in files that should have a license header.

- [x] add `spack license update-copyright-year` command
- [x] add test
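
A usage sketch of the new subcommand described above:

```console
$ spack license update-copyright-year
```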
2021-01-02 12:12:00 -08:00
Adam J. Stewart
a4accff266
PythonPackage: url -> pypi (#20610)
* Convert all `url` attributes in `PythonPackage`s to `pypi` attributes
* add `pypi =` to flake8 exceptions
2020-12-29 16:44:04 -08:00
Tom Scogland
857749a9ba
add mypy to style checks; rename spack flake8 to spack style (#20384)
I lost my mind a bit after getting the completion stuff working and
decided to get Mypy working for spack as well. This adds a 
`.mypy.ini` that checks all of the spack and llnl modules, though
not yet packages, and fixes all of the identified missing types and
type issues for the spack library.

In addition to these changes, this includes:

* rename `spack flake8` to `spack style`

Aliases flake8 to style, and just runs flake8 as before, but with
a warning.  The style command runs both `flake8` and `mypy`
in sequence. Added `--no-<tool>` options to turn off one or the
other; both are on by default.  Fixed two issues caught by the tools.

* stub typing module for python2.x

We don't support typing in Spack for python 2.x. To allow 2.x to
support `import typing` and `from typing import ...` without a
try/except dance to support old versions, this adds a stub module
*just* for python 2.x.  Doing it this way means we can only reliably
use all type hints in python3.7+, and `.mypy.ini` has been updated to
reflect that.

* add non-default black check to spack style

This is a first step to requiring black.  It doesn't enforce it by
default, but it will check it if requested.  Currently enforcing the
line length of 79 since that's what flake8 requires, but it's a bit odd
for a black formatted project to be quite that narrow.  All settings are
in the style command since spack has no pyproject.toml and I don't
want to add one until more discussion happens. Also re-format
`style.py` since it no longer passed the black style check
with the new length.

* use style check in github action

Update the style and docs action to use `spack style`, adding in mypy
and black to the action even if it isn't running black right now.
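
A usage sketch of the renamed command and its `--no-<tool>` switches
(tool names per the description above):

```console
$ spack style              # runs flake8 and mypy in sequence
$ spack style --no-mypy    # flake8 only
$ spack style --no-flake8  # mypy only
```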
2020-12-22 21:39:10 -08:00
Tom Scogland
c1e4f3e131
Refactor flake8 handling and tool compatibility (#20376)
This PR does three related things to try to improve developer tooling quality of life:

1. Adds new options to `.flake8` so it applies the rules of both `.flake8` and `.flake_package` based on paths in the repository.
2. Adds a re-factoring of the `spack flake8` logic into a flake8 plugin so using flake8 directly, or through editor or language server integration, only reports errors that `spack flake8` would.
3. Allows star import of `spack.pkgkit` in packages; since this is now the thing that needs to be imported for completion to work correctly in package files, it's nice to be able to do that.

I'm sorely tempted to sed over the whole repository and put `from spack.pkgkit import *` in every package, but at least being allowed to do it on a per-package basis helps.

As an example of what the result of this is:

```
~/Workspace/Projects/spack/spack develop* ⇣
❯ flake8 --format=pylint ./var/spack/repos/builtin/packages/kripke/package.py
./var/spack/repos/builtin/packages/kripke/package.py:6: [F403] 'from spack.pkgkit import *' used; unable to detect undefined names
./var/spack/repos/builtin/packages/kripke/package.py:25: [E501] line too long (88 > 79 characters)

~/Workspace/Projects/spack/spack refactor-flake8*
1 ❯ flake8 --format=spack ./var/spack/repos/builtin/packages/kripke/package.py

~/Workspace/Projects/spack/spack refactor-flake8*
❯ flake8 ./var/spack/repos/builtin/packages/kripke/package.py
```

* qa/flake8: update .flake8, spack formatter plugin

Adds:
* Modern flake8 settings for per-path/glob error ignores, allows
  packages to use the same `.flake8` as the rest of spack
* A spack formatter plugin to flake8 that implements the behavior of
  `spack flake8` for direct invocations.  Makes integration with
  developer tooling nicer: linting with flake8 reports only errors that
  `spack flake8` would report.  Using pyls and pyls-flake8, or any other
  non-format-dependent flake8 integration, now works with spack's rules.

* qa/flake8: allow star import of spack.pkgkit

To get working completion of directives and spack components it's
necessary to import the contents of spack.pkgkit.  At the moment doing
this makes flake8 displeased.  For now, allow spack.pkgkit and spack
both; the next step is to ban spack * and require spack.pkgkit *.

* first cut at refactoring spack flake8

This version still copies all of the files to be checked as before, and
some other things that probably aren't necessary, but it relies on the
spack formatter plugin to implement the ignore logic.

* keep flake8 from rejecting itself

* remove separate packages flake8 config

* fix failures from too many files

I ran into this in the PR converting pkgkit to std.  The solution in
that branch does not work in all cases as it turns out, and all the
workarounds I tried to use generated configs to get a single invocation
of flake8 with a filename option to work failed.  It's an astonishingly
frustrating config option.

Regardless, this removes all temporary file creation from the command
and relies on the plugin instead.  To work around the huge number of
files in spack and still allow the command to control what gets checked,
it scans files in batches of 100.  This is a completely arbitrary number
but was chosen to be safely under common command-line length limits.  One
side-effect of this is that every 100 files the command will produce
output, rather than only at the end, which doesn't seem like a terrible
thing.
2020-12-22 09:28:46 -08:00
Tom Scogland
71c77fa8fa
minimal zsh completion (#20253)
Since zsh can load bash completion files natively, it seems reasonable to just turn this on.
The only changes are to switch from `type -t`, which zsh doesn't support, to using `type`
with a regex and adding a new arm to the sourcing of the completions to allow it to work
for zsh as well as bash.

Could use more bash/dash/etc testing probably, but everything I've thought to try has
worked so far.

Notes:
* unit-test zsh support, fix issues
Specifically fixed word splitting in completion-test, used a different
method to apply sh emulation to zsh-loaded bash completion, and fixed
an incompatibility in regex operator quoting requirements.

* compinit now ignores insecure directories
Completion isn't meant to be enabled in non-interactive environments, so
by default compinit will ask the user if they want to ignore insecure
directories or load them anyway.  To pass the spack unit tests in GH
actions, this prompt must be disabled, so ignore explicitly until a
better solution can be found.

* debug functions test also requires bash emulation
COMP_WORDS is a bash-ism that zsh doesn't natively support, turn on
emulation for just that section of tests to allow the comparison to
work.  Does not change the behavior of the functions themselves since
they are already pinned to sh emulation elsewhere.

* propagate change to .in file

* fix comment and update script based on .in
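
A sketch of enabling Spack and its completions from zsh, using the same
files bash users source (paths relative to the Spack prefix):

```console
$ source share/spack/setup-env.sh
$ source share/spack/spack-completion.bash   # now works in zsh as well as bash
```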
2020-12-18 17:26:15 -08:00
Christian Kniep
e4bb85cd27
Add amazonlinux (x86/arm) dockerfile (#20320)
Co-authored-by: Christian Kniep <kniec@amazon.com>
2020-12-11 11:11:33 +01:00
vvolkl
ed258ca9e9
Add "spack versions --new" flag to only show new versions (#20030)
* [cmd versions] add spack versions --new flag to only fetch new versions
* [cmd versions] rename --latest to --newest and add --remote-only
* [cmd versions] add tests for --remote-only and --new
* [cmd versions] update shell tab completion
* [cmd versions] remove test for --remote-only --new which gives empty output
* [cmd versions] final rename
* formatting fixes

* add brillig mock package

* add test for spack versions --new

* [brillig] format

* [versions] increase test coverage

* Update lib/spack/spack/cmd/versions.py

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>

* Update lib/spack/spack/cmd/versions.py

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
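
A usage sketch of the new flags (zlib chosen as an arbitrary example
package):

```console
$ spack versions --new zlib          # only fetch new versions
$ spack versions --remote-only zlib  # only versions found remotely
```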
2020-12-07 09:29:10 -06:00
eugeneswalker
badf3368ad
allow install of build-deps from cache via --include-build-deps switch (#19955)
* allow install of build-deps from cache via --include-build-deps switch

* make clear that --include-build-deps is useful for CI pipeline troubleshooting
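
A usage sketch of the new switch (zlib chosen as an arbitrary example
spec):

```console
$ spack install --include-build-deps zlib
```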
2020-12-03 15:27:01 -08:00
Christian Goll
235558df11
dockerfiles: add dockerfile for opensuse leap 15 (#20091)
* added dockerfile for opensuse leap 15
* updated maintainer info
* Update share/spack/docker/leap-15.dockerfile
* move copies and symlinks after package install
also use ${SPACK_ROOT} for spack calls as
this works with buildah

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2020-12-01 14:03:35 -08:00
Shahzeb Siddiqui
f30aeb35ae
[WIP] nersc e4s pipeline trigger (#19688)
* nersc e4s pipeline trigger

* Update nersc_pipeline.yml

* Update nersc_pipeline.yml
2020-11-20 13:31:25 -08:00
Michael Kuhn
20367e472d
cmd: add spack mark command (#16662)
This adds a new `mark` command that can be used to mark packages as either
explicitly or implicitly installed. Apart from fixing the package
database after installing a dependency manually, it can be used to
implement upgrade workflows as outlined in #13385.

The following commands demonstrate how the `mark` and `gc` commands can be
used to only keep the current version of a package installed:
```console
$ spack install pkgA
$ spack install pkgB
$ git pull # Imagine new versions for pkgA and/or pkgB are introduced
$ spack mark -i -a
$ spack install pkgA
$ spack install pkgB
$ spack gc
```

If there is no new version for a package, `install` will simply mark it as
explicitly installed and `gc` will not remove it.

Co-authored-by: Greg Becker <becker33@llnl.gov>
2020-11-18 03:20:56 -08:00
Greg Becker
77b2e578ec
spack test (#15702)
Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). spack test is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run smoke
tests on all of its packages. Historical test logs can be perused with
`spack test results`. Generic smoke tests are included for MPI
implementations and for C, C++, and Fortran compilers, as well as
specific smoke tests for 18 packages.

Inside the test method, individual tests can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running and checking return codes, checking output, and error handling.

This handles the following trickier aspects of testing with direct
support in Spack's package API:

- [x] Caching source or intermediate build files at build time for
      use at test time.
- [x] Test dependencies.
- [x] Packages that require a compiler for testing (such as library-only
      packages).

See the packaging guide for more details on using Spack testing support.
Included is support for package.py files for virtual packages. This does
not change the Spack interface, but is a major change in internals.

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
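
A sketch of the environment-aware workflow described above:

```console
$ spack install            # install the active environment
$ spack test run           # run smoke tests on all of its packages
$ spack test results       # peruse historical test logs
$ spack unit-test          # the old `spack test`
```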
2020-11-18 02:39:02 -08:00
Massimiliano Culpo
5f636fc317
spack containerize: allow users to customize the base image (#15028)
This PR reworks a few attributes in the container subsection of
spack.yaml to permit the injection of custom base images when
generating containers with Spack. In more detail, users can still
specify the base operating system and Spack version they want to use:

  spack:
    container:
      images:
        os: ubuntu:18.04
        spack: develop

in which case the generated recipe will use one of the Spack images
built on Docker Hub for the build stage and the base OS image in the
final stage. Alternatively, they can specify explicitly the two
base images:

  spack:
    container:
      images:
        build: spack/ubuntu-bionic:latest
        final: ubuntu:18.04

and it will be up to them to ensure their consistency.

Additional changes:

* This commit adds documentation on the two approaches.
* Users can now specify OS packages to install (e.g. with apt or yum)
  prior to the build (previously this was only available for the
  finalized image).
* Configuration handles to skip updating the available system packages
  have been added, to facilitate the generation of recipes permitting
  deterministic builds.
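
A usage sketch, assuming it is run from a directory containing a
spack.yaml like the ones shown above (the recipe is written to stdout):

```console
$ spack containerize > Dockerfile
```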
2020-11-17 11:25:13 -08:00
Todd Gamblin
0ed019d4ef concretizer: first working version with pyclingo interface
- [x] Solver now uses the Python interface to clingo
- [x] can extract unsatisfiable cores from problems when things go wrong
- [x] use Python callbacks for versions instead of choice rules (this may
      ultimately hurt performance)
2020-11-17 10:04:13 -08:00
Scott Wittenburg
ef0a555ca2
pipelines: support testing PRs from forks (#19248)
This change makes improvements to the `spack ci rebuild` command
which supports running gitlab pipelines on PRs from forks.  Much
of this has to do with making sure we can run without the secrets
previously required for running gitlab pipelines (e.g signing key,
aws credentials, etc).  Specific improvements in this PR:

Check if spack has precisely one signing key, and use that information
as an additional constraint on whether or not we should attempt to sign
the binary package we create.

Also, if spack does not have at least one public key, add the install
option "--no-check-signature"

If we are running a pipeline without any profile or environment
variables allowing us to push to S3, the pipeline could still
successfully create a buildcache in the artifacts and move on.  So
just print a message and move on if pushing either the buildcache
entry or cdash id file to the remote mirror fails.

When we attempt to generate a package or gpg key index on an S3
mirror, and there is nothing to index, just print a warning and
exit gracefully rather than throw an exception.

Support the use of PR-specific mirrors for temporary binary package
storage.  This is a quality-of-life improvement for developers,
providing a place to store binaries over the lifetime of a PR, so
that they only need to wait for packages to rebuild from source when
they push a new commit that makes a rebuild necessary.

Replace two-pass install with a single pass and the new option:
 --require-full-hash-match.  Doing this also removes the need to
save a copy of the spack.yaml to be copied over the one spack
rewrites in between the two spack install passes.

Work around a mirror configuration issue caused by using
spack.util.executable to do the package installation.

* Update pipeline trigger jobs for PRs from forks

Moving to PRs from forks relies on an external synchronization script
pushing special branch names.  Also, secrets will only live on the
spack mirror project, and must be propagated to the E4S project via
variables on the trigger jobs.

When this change is merged, pipelines will not run until we update
the "Custom CI configuration path" in the Gitlab CI Settings, as the
name of the file has changed to better reflect its purpose.

* Arg to MirrorCollection is used exclusively, so add main remote mirror to it

* Compute full hash less frequently

* Add tests covering index generation error handling code
2020-11-16 15:16:24 -08:00
Todd Gamblin
4092c90b57
commands: add spack tutorial command (#19808)
Added a command to set up Spack for our tutorial at
https://spack-tutorial.readthedocs.io.

The command does some common operations we need first-time users to do.
Specifically:

- checks out a particular branch of Spack
- deletes spurious configuration in `~/.spack` that might be
  left over from prior parts of the tutorial
- adds a mirror and trusts its public key
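
A usage sketch of the command described above:

```console
$ spack tutorial
```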
2020-11-09 12:47:08 +01:00
Scott Wittenburg
31f57e56bb
Binary caching: use full hashes (#19209)
* "spack install" now has a "--require-full-hash-match" option, which
  forces Spack to skip an available binary package when the full hash
  doesn't match. Normally only a DAG-hash match is required, which
  ensures equivalent Specs, but does not account for changing logic
  inside the associated package.
* Add a local binary cache index which tracks specs that have a binary
  install available in a remote binary cache. It is updated with
  "spack buildcache list" or for a given spec when a binary package
  is retrieved for that Spec.
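
A usage sketch of the new option and the local index (zlib chosen as an
arbitrary example spec):

```console
$ spack buildcache list                          # also updates the local binary cache index
$ spack install --require-full-hash-match zlib   # skip binaries whose full hash differs
```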
2020-10-30 12:53:33 -07:00
Todd Gamblin
965ccb78cf sbang: use bashcov in sbang on Linux 2020-10-27 13:59:46 -07:00
Todd Gamblin
560beb098e
csh: don't require SPACK_ROOT for sourcing setup-env.csh (#18225)
Don't require SPACK_ROOT for sourcing setup-env.csh and make output more consistent
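
A sketch of the simplified setup (the install path is a placeholder; no
SPACK_ROOT needs to be set first):

```console
% source /path/to/spack/share/spack/setup-env.csh
```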
2020-10-23 18:54:34 -07:00
Todd Gamblin
16e75ecac0
shell support: make which spack output intelligible (#19256)
Zsh and newer versions of bash have a builtin `which` function that will
show you if a command is actually an alias or a function. For functions,
the entire function is printed, and our `spack()` function is quite long.
Instead of printing out all that, make the `spack()` function a wrapper
around `_spack_shell_wrapper()`, and include some no-ops in the
definition so that users can see where it was created and where Spack is
installed.

Here's what the new output looks like in zsh:

```console
$ which spack
spack () {
	: this is a shell function from: /Users/gamblin2/src/spack/share/spack/setup-env.sh
	: the real spack script is here: /Users/gamblin2/src/spack/bin/spack
	_spack "$@"
	return $?
}
```

Note that `:` is a no-op in Bourne shell; it just discards anything after
it on the line. We use it here to embed paths in the function definition
(as comments are stripped).
2020-10-21 17:04:42 -07:00
elsagermann
4750d479a0
Add testing option to dev-build command (#17293)
* ADD: testing to dev-build command

* RM: mutually exclusive group for testing in parser

* FIX: test option to subparser and not testing

* ADD: spack-completion.bash

* RM: local devbuildcosmo cmd

* FIX: bad merge --drop-in -b --before options forgotten

* FIX: --test place in spack-completion.bash

* FIX: typo

* FIX: blank line removing

* FIX: trailing white space

Co-authored-by: Elsa Germann <egermann@tsa-ln002.cm.cluster>
2020-10-18 23:17:07 -05:00
Greg Becker
7a6268593c
Environments: specify packages for developer builds (#15256)
* allow environments to specify dev-build packages

* spack develop and spack undevelop commands

* never pull dev-build packages from bincache

* reinstall dev_specs when code has changed; reinstall dependents too

* preserve dev info paths and versions in concretization as special variant

* move install overwrite transaction into installer

* move dev-build argument handling to package.do_install

now that specs are dev-aware, package.do_install can add
necessary args (keep_stage=True, use_cache=False) to dev
builds. This simplifies driving logic in cmd and env._install

* allow 'any' as wildcard for variants

* spec: allow anonymous dependencies

raise an error when constraining by or normalizing an anonymous dep
refactor concretize_develop to remove dev_build variant
refactor tests to check for ^dev_path=any instead of +dev_build

* fix variant class hierarchy
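
A sketch of the developer workflow enabled above (package name and
version are placeholders):

```console
$ spack env activate .
$ spack develop mypkg@1.0    # register a dev-build spec in the environment
$ spack install              # builds mypkg from the dev source; reinstalls when the code changes
$ spack undevelop mypkg      # stop developing the package
```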
2020-10-15 17:23:16 -07:00
Scott Wittenburg
438f80d19e
Revert binary distribution cache manager (#19158)
This reverts #18359 and follow-on PRs intended to address issues with
#18359 because that PR changes the hash of all specs. A future PR will
reintroduce the changes.

* Revert "Fix location in spec.yaml where we look for full_hash (#19132)"
* Revert "Fix fetch of spec.yaml files from buildcache (#19101)"
* Revert "Merge pull request #18359 from scottwittenburg/add-binary-distribution-cache-manager"
2020-10-05 16:02:37 -07:00
Scott Wittenburg
a44135dccf
Update buildcache key index when we update the package index (#19117)
This change makes sure that when we run the pipeline job that updates
the buildcache package index on the remote mirror, we also update the
key index.  The public keys corresponding to the signing keys used to
sign the package were pushed to the mirror as a part of creating the
buildcache index, so this is just ensuring those keys are reflected
in the key index.

Also, this change makes sure the "spack buildcache update-index"
job runs even when there may have been pipeline failures, since we
would like the index always to reflect the true state of the mirror.
2020-10-02 11:00:42 -06:00
Scott Wittenburg
075c3e0d92
Merge pull request #18359 from scottwittenburg/add-binary-distribution-cache-manager
Add binary distribution cache manager
2020-09-30 16:37:35 -06:00
Omar Padron
2d93154119
Streamline key management for build caches (#17792)
* Rework spack.util.web.list_url()

list_url() now accepts an optional recursive argument (default: False)
for controlling whether to only return files within the prefix url or to
return all files whose path starts with the prefix url.  Allows for the
most efficient implementation for the given prefix url scheme.  For
example, only recursive queries are supported for S3 prefixes, so the
returned list is trimmed down if recursive == False, but the native
search is returned as-is when recursive == True.  Suitable
implementations for each case are also used for file system URLs.

* Switch to using an explicit index for public keys

Switches to maintaining a build cache's keys under build_cache/_pgp.
Within this directory is an index.json file listing all the available
keys and a <fingerprint>.pub file for each such key.

 - Adds spack.binary_distribution.generate_key_index()
   - (re)generates a build cache's key index

 - Modifies spack.binary_distribution.build_tarball()
   - if tarball is signed, automatically pushes the key used for signing
     along with the tarball
   - if regenerate_index == True, automatically (re)generates the build
     cache's key index along with the build cache's package index; as in
     spack.binary_distribution.generate_key_index()

 - Modifies spack.binary_distribution.get_keys()
   - a build cache's key index is now used instead of programmatic
     listing

 - Adds spack.binary_distribution.push_keys()
   - publishes keys from Spack's keyring to a given list of mirrors

 - Adds new spack subcommand: spack gpg publish
   - publishes keys from Spack's keyring to a given list of mirrors

 - Modifies spack.util.gpg.Gpg.signing_keys()
   - Accepts optional positional arguments for filtering the set of keys
     returned

 - Adds spack.util.gpg.Gpg.public_keys()
   - As spack.util.gpg.Gpg.signing_keys(), except public keys are
     returned

 - Modifies spack.util.gpg.Gpg.export_keys()
   - Fixes an issue where GnuPG would prompt for user input if trying to
     overwrite an existing file

 - Modifies spack.util.gpg.Gpg.untrust()
   - Fixes an issue where GnuPG would fail for inputs that were not key
     fingerprints

 - Modifies spack.util.web.url_exists()
   - Fixes an issue where url_exists() would throw instead of returning
     False

* rework gpg module/fix error with very long GNUPGHOME dir

* add a shim for functools.cached_property

* handle permission denied error in gpg util

* fix tests/make gpgconf optional if no socket dir is available
2020-09-25 12:54:24 -04:00
eugeneswalker
9eb87d1026
OLCF Ascent gitlab ci trigger: pass SPACK_REF (#18875) 2020-09-23 09:35:29 -07:00
Shahzeb Siddiqui
58fb6cdaad
trigger ascent e4s pipeline on merge to spack develop (#18655)
* trigger ascent e4s pipeline on merge to spack develop

* change pipeline name: ecpcitest/e4s is the pipeline that will be triggered for merges to develop; it's the E4S use case.
2020-09-18 10:38:29 -07:00
Scott Wittenburg
f537d5bb58 Make sure each develop pipeline tests associated commit 2020-09-14 10:37:42 -06:00
Scott Wittenburg
ace52bd476 Provide your own script, before_script, and after_script 2020-09-14 10:37:42 -06:00
Johannes Blaschke
757dad370f
Bugfix for fish support: overly zealous arg matching (#18528)
* bugfix for issue 18369

* fix typo

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2020-09-10 10:01:44 -05:00
Richarda Butler
d721bd8070
commands: update help for spack install --yes-to-all (#18367)
`spack install --yes-to-all` doesn't actually make the build non-interactive,
but that is why people typically use it. This documents that you must also
specify `--no-checksum` for a fully non-interactive build.
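
A sketch of a fully non-interactive install, per the documentation change
above (zlib chosen as an arbitrary example spec):

```console
$ spack install --yes-to-all --no-checksum zlib
```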
2020-09-08 13:18:25 -07:00
Robert Blake
ea57171712
Make spack environment configurations writable from spack external and spack compiler find (#18165)
* spack config: default modification scope can be an environment

The previous model was that environments are the highest priority config
scope for config reading operations, but were not considered for config
writing operations. Now, the active environment is the highest priority
config scope for both reading and writing operations.

Now `spack config add`, `spack external find`, and `spack compiler find` write
configuration to the environment by default if an environment is active. This is a
change in default behavior for these routines, but better matches the mental
model of an environment taking precedence over the user's default config file.

* add scope argument to 'spack external find' to choose non-default scope

* Increase testing for config modifications on environments

Co-authored-by: Gregory Becker <becker33@llnl.gov>
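
A sketch of the new default and the scope override described above (the
environment name is a placeholder, and the `--scope` flag spelling is
assumed from "add scope argument"):

```console
$ spack env activate myenv
$ spack external find                # writes externals into the active environment
$ spack external find --scope user   # write to the user scope instead
```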
2020-09-05 01:12:26 -07:00
Scott Wittenburg
597b43e30a Rely on E4S project variable for SPACK_REPO 2020-09-04 11:18:56 -06:00
Adam J. Stewart
443407cda5
Add new RubyPackage build system base class (#18199)
* Add new RubyPackage build system base class

* Ruby: add spack external find support

* Add build tests for RubyPackage
2020-09-02 16:26:36 -07:00
Massimiliano Culpo
c0d490ffbe Simplify the detection protocol for packages
Packages can implement `determine_version` to support detection
of external instances of a package. This is generally easier
than implementing `determine_spec_details`. The API for
`determine_version` is similar: for example, you can return
`None` to indicate that an executable is not an instance
of a package.

Users may implement a `determine_variants` method for a package.
When doing external detection, executables are grouped by version
and each group results in a single invocation of `determine_variants`
for the associated spec. The method returns a string specifying
the variants for the package. The method may additionally return
a dictionary representing extra attributes for the package.

These will be stored in the spec yaml and can be retrieved
from `self.spec.extra_attributes`.

The Spack GCC package has been updated with an implementation
of `determine_variants` which adds the following extra
attributes to the package: c, cxx, fortran
Massimiliano Culpo
193e8333fa Update packages.yaml format and support configuration updates
The YAML config for paths and modules of external packages has
changed: the new format allows a single spec to load multiple
modules. Spack will automatically convert from the old format
when reading the configs (the updates do not add new essential
properties, so this change in Spack is backwards-compatible).

With this update, Spack cannot modify existing configs/environments
without updating them (e.g. `spack config add` will fail if the
configuration is in a format that predates this PR). The user is
prompted to do this explicitly and commands are provided. All
config scopes can be updated at once. Each environment must be
updated one at a time.
2020-08-10 11:59:05 -07:00
Massimiliano Culpo
9dbad500bc
Move Python 2.6 unit tests to Github Actions (#17279)
* Run Python2.6 unit tests on Github Actions
* Skip url tests on Python 2.6 to reduce waiting times
* Skip foreground background tests on Python 2.6 to reduce waiting times
* Removed references to Travis in the documentation
* Deleted install_patchelf.sh (can be installed from repo on CentOS 6)
2020-07-31 15:01:12 -07:00
Todd Gamblin
cefb4ba014
tutorial: Add boto3 installation to setup script (#17722) 2020-07-27 16:55:33 -07:00
Greg Becker
1ceec31422
add tutorial setup script to share/spack (#17705)
* add tutorial setup script to share/spack

* Add check for Ubuntu 18, fix xvda check, fix apt-get errors
  - now works on t2.micro, t2.small, and m instances
  - apt-get needs retries around it to work

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2020-07-27 01:17:58 -07:00
Greg Becker
cdab4bdee0
add tutorial public key to share/spack/keys dir (#17684) 2020-07-23 14:35:25 -07:00
Harmen Stoppels
3449087284
Make the largest layer of the docker image cacheable (#17553) 2020-07-16 13:15:04 -04:00
Paul
d25c7ddd6f
spack containerize: added --fail-fast argument to containerize install. (#17533) 2020-07-15 11:13:04 +02:00
Patrick Gartung
8c41173678
Buildcache: bindist test without invoking spack compiler wrappers. (#15687)
* Buildcache:
   * Try mocking an install of quux, corge and garply using prebuilt binaries
   * Put patchelf install after ccache restore
   * Add script to install patchelf from source so it can be used on Ubuntu:Trusty which does not have a patchelf package. The script will skip building on macOS
   * Remove mirror at end of bindist test
   * Add patchelf to Ubuntu build env
   * Revert mock patchelf package to allow other tests to run.
   * Remove depends_on('patchelf', type='build'), relying instead on a
     test fixture to ensure patchelf is available.

* Call g++ command to build libraries directly during test build

* Flake8

* Install patchelf in before_install stage using apt unless on Trusty where a build is done.

* Add some symbolic links between packages

* Flake8

* Flake8:

* Update mock packages to write their own source files

* Create the stage because spec search does not create it any longer

* updates after change of list command arguments

* cleanup after merge

* flake8
2020-07-08 15:05:58 -05:00
Adam J. Stewart
207e496162
spack create: ask how many to download (#17373) 2020-07-08 09:38:42 +02:00
Todd Gamblin
c00a05bfba bugfix: no infinite recursion in setup-env.sh on Cray
On Cray platforms, we rely heavily on the module system to figure out
what targets, compilers, etc. are available. This unfortunately means
that we shell out to the `module` command as part of platform
initialization.

Because we run subcommands in a shell, we can get infinite recursion if
`setup-env.sh` and friends are in some init script like `.bashrc`.

This fixes the infinite loop by adding guards around `setup-env.sh`,
`setup-env.csh`, and `setup-env.fish`, to prevent recursive
initializations of Spack. This is safe because Spack never shells out to
itself, so we do not need it to be initialized in subshells.

- [x] add recursion guard around `setup-env.sh`
- [x] add recursion guard around `setup-env.csh`
- [x] add recursion guard around `setup-env.fish`
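
A minimal sketch of the guard pattern described above (the variable name
is illustrative, not the one used in the actual scripts):

```sh
if [ -z "${_SP_GUARD_EXAMPLE:-}" ]; then
    _SP_GUARD_EXAMPLE=true
    # ... real initialization goes here; anything it shells out to that
    # re-sources this file will see the guard set and skip initialization ...
    unset _SP_GUARD_EXAMPLE
fi
```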
2020-07-06 13:55:14 -07:00