Commit graph

373 commits

Author SHA1 Message Date
Scott Wittenburg
09fa155333
pipelines: build warpx on instance with more memory (#24592) 2021-06-29 18:06:30 +02:00
Scott Wittenburg
d7405ddd39
Pipelines: Set a pipeline type variable (#24505)
Spack pipelines need to take specific actions internally that depend
on whether the pipeline is being run on a PR to spack or a merge to
the develop branch.  Pipelines can also run in other repositories,
which represents possible use cases beyond the two mentioned
above.  This PR creates a "SPACK_PIPELINE_TYPE" gitlab variable which
is propagated to rebuild jobs, and is also used internally to determine
which pipeline-specific tasks to run.

One goal of the PR is to fix an issue where rebuild jobs that failed on
develop pipelines did not properly report the broken full hash to the
"broken-specs-url".
2021-06-24 16:15:19 -06:00
Chris White
c9932b2d1e
Axom: Remove blueos check on cuda variant (#24349)
* remove blueos check on cuda variant, fix typo

* restore necessary compiler guard

* remove axom+cuda from testing because it only partially works outside ppc systems
2021-06-22 01:29:18 +00:00
G-Ragghianti
94d6d3951a
Removed unofficial MAGMA release and enabled MAGMA in e4s (#24400) 2021-06-18 17:28:35 +00:00
Massimiliano Culpo
32f1aa607c
Add an audit system to Spack (#23053)
Add a new "spack audit" command. This command can check for issues
with configuration or with packages and is intended to help a
user debug a failed Spack build. 

In some cases the reported issues are always errors but are too
costly to check for (e.g. packages that specify missing variants on
dependencies). In other cases the issues may reflect legitimate but
uncommon usage of Spack, and we want to be sure the user intended the
behavior (e.g. duplicate compiler definitions).

Audits are grouped by theme, and for now the two themes are packages
and configuration. For example you can run all available audits
on packages with "spack audit packages". It is intended that in
the future users will be able to define their own audits.
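
For illustration, the invocation described above looks like this (only the subcommand named in this description is shown):

```bash
# Run all available audits on package recipes and report any issues found
$ spack audit packages
```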

The package audits are good candidates for running in package_sanity
(i.e. they could catch bugs in user-submitted packages before they
are merged) but that is left for a later PR.
2021-06-18 07:52:08 -06:00
Massimiliano Culpo
57467d05e1
Disable magma in the E4S pipeline (#24395)
Building magma has been failing consistently and is currently
blocking PRs from being merged. Disable that spec while we
investigate the failure and work on a fix.
2021-06-18 12:55:31 +00:00
Vanessasaurus
e7ac422982
Adding support for spack monitor with containerize (#23777)
This should get us most of the way toward supporting the use of monitor during a spack container build, for both Singularity and Docker. Some quick notes:

### Docker
Docker works by way of BuildKit and its ability to specify --secret mounts. What this means is that you can prefix a line with a mount of type secret as follows:

```bash
# Install the software, remove unnecessary deps
RUN --mount=type=secret,id=su --mount=type=secret,id=st cd /opt/spack-environment && spack env activate . && export SPACKMON_USER=$(cat /run/secrets/su) && export SPACKMON_TOKEN=$(cat /run/secrets/st) && spack install --monitor --fail-fast && spack gc -y
```
Where the id for one or more secrets corresponds to the file mounted at `/run/secrets/<name>`. So, for example, to build this container with su (spackmon user) and st (spackmon token) defined I would export them on my host and do:

```bash
$ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container . 
```
Adding `env` to the secret definition tells the build to read the secret with id "st" from the environment variable `SPACKMON_TOKEN`, for example.

If the user is building locally against a local spack monitor, we also need to set `--network` to the host, otherwise you can't connect to it (because of network isolation, of course).

### Singularity

Singularity doesn't have as clean a way to specify secrets, so (hoping this eventually gets implemented) what I'm doing now is providing the user with instructions to write the credentials to a file, add it to the container to be sourced, and remove it when done.
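
A minimal sketch of that manual flow, assuming hypothetical file names and credential values (the real steps are spelled out in the docs added by this PR):

```bash
# On the host: write the monitor credentials to a file that will be added to the container
$ printf 'export SPACKMON_USER=myuser\nexport SPACKMON_TOKEN=mytoken\n' > monitor-creds.sh
# Inside the container build: source the credentials, install with monitoring, then remove them
$ . /opt/monitor-creds.sh && spack install --monitor && rm /opt/monitor-creds.sh
```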

### Tags

Note that the tags PR https://github.com/spack/spack/pull/23712 will need to be merged before `--monitor-tags` will actually work because I'm checking for the attribute (that doesn't exist yet):

```bash
"tags": getattr(args, "monitor_tags", None)
```
So when that PR is merged to update the argument group, it will work here, and I can either update this PR to stop checking for the attribute (it will be there) or open another PR in case this one is already merged.

Finally, I added a bunch of documentation for how to use monitor with containerize. I say "mostly working" because I can't do a full test run with this new version until the container base is built with the updated spack (the request to the monitor server for an env install was missing, so I had to add it here).

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-06-17 17:15:22 -07:00
eugeneswalker
e916b699ee
e4s ci env: package preferences: use newer versions (#24371) 2021-06-17 15:17:49 -06:00
eugeneswalker
b330474a13
e4s ci: specs: add datatransferkit (#24325) 2021-06-15 18:37:37 -07:00
Vanessasaurus
53dae0040a
adding spack upload command (#24321)
this will first support uploads for spack monitor, and eventually could be
used for other kinds of spack uploads

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-06-15 14:36:02 -07:00
eugeneswalker
229247c899
e4s ci environment: packages: update to newer versions (#24308) 2021-06-14 19:26:30 -07:00
Adam J. Stewart
11f370e7be
setup-env: allow users to skip module function setup (#24236)
* setup-env: allow users to skip module function setup

* Add documentation on SPACK_SKIP_MODULES
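
For example (the variable name comes from the added documentation; setting it to any value before sourcing is assumed to be sufficient):

```bash
# Skip the module function setup when sourcing the setup script
$ export SPACK_SKIP_MODULES=1
$ source share/spack/setup-env.sh
```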
2021-06-11 19:19:24 +00:00
Robert Pavel
29c4d5901a
Update of Flecsi Spackage (#24106)
* Update of Flecsi Spackage

Update of flecsi spackage to reconcile differences between flecsi@1:1.9
and flecsi@2: for future support purposes

* Removing Unnecessary Conditional

Removing unused conditional. Initially the plan was to switch based on
version in `cmake_args`, but this was not necessary, as the build system
variable names remained mostly the same and conflicts prevent the rest.

For the most part, if a variant is present it does not need to be checked
against the version of the code being built.

* Updated CI To Reconcile Flecsi Changes

Updated CI to target flecsi@1.4.2, which best matches the previously
targeted release version, and reconciled the change in variant name
2021-06-09 11:03:50 -07:00
eugeneswalker
ac3b46fc95
e4s ci: re-enable veloc builds after recent fixes (#24190) 2021-06-08 14:10:44 -07:00
eugeneswalker
bd6145589d
CI: E4S: enable full E4S (#24011)
* e4s ci: enable full e4s
* add llvm-amdgpu to list of specs needing an xlarge tagged runner
* comment out qt and qwt because of intermittent build failures
* remove +rocm specs because rocblas job consistently fails due to infrastructure
2021-05-30 13:09:07 -07:00
Vanessasaurus
6f534acbef
adding support for export of private gpg key (#22557)
This PR allows users to use `--export`, `--export-secret`, or both to export GPG keys
from Spack. The docs are updated to include a warning that this usually does not
need to be done.

This addresses an issue brought up in slack, and also represented in #14721.
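
A hedged sketch of the resulting usage; the exact flag spelling on `spack gpg export` is an assumption based on the description above:

```bash
# Export public keys to a file; --secret additionally exports private keys
$ spack gpg export keys.pub
$ spack gpg export --secret keys.priv
```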

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-05-28 23:32:57 -07:00
Greg Becker
7490d63c38
Separable module configuration -- without the bugs this time (#23703)
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).

This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.

As part of this change, the module roots configuration moved from the config section to inside each module configuration.

Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
2021-05-28 14:12:05 -07:00
Scott Wittenburg
91f66ea0a4
Pipelines: reproducible builds (#22887)
### Overview

The goal of this PR is to make gitlab pipeline builds (especially build failures) more reproducible outside of the pipeline environment.  The two key changes here which aim to improve reproducibility are: 

1. Produce a `spack.lock` during pipeline generation which is passed to child jobs via artifacts.  This concretized environment is used both by generated child jobs as well as uploaded as an artifact to be used when reproducing the build locally.
2. In the `spack ci rebuild` command, if a spec needs to be rebuilt from source, do this by generating and running an `install.sh` shell script which is then also uploaded as a job artifact to be run during local reproduction.  

To make it easier to take advantage of improved build reproducibility, this PR also adds a new subcommand, `spack ci reproduce-build`, which, given a url to job artifacts:

- fetches and unzips the job artifacts to a local directory
- looks for the generated pipeline yaml and parses it to find details about the job to reproduce
- attempts to provide a copy of the same version of spack used in the ci build
- if the ci build used a docker image, the command prints a `docker run` command you can run to get an interactive shell for reproducing the build

#### Some highlights

One consequence of this change will be much smaller pipeline yaml files.  By encoding the concrete environment in a `spack.lock` and passing to child jobs via artifacts, we will no longer need to encode the concrete root of each spec and write it into the job variables, greatly reducing the size of the generated pipeline yaml.

Additionally `spack ci rebuild` output (stdout/stderr) is no longer internally redirected to a log file, so job output will appear directly in the gitlab job trace.  With debug logging turned on, this often results in log files getting truncated because they exceed the maximum amount of log output gitlab allows.  If this is a problem, you still have the option to `tee` command output to a file within the artifacts directory, as now each generated job exposes a `user_data` directory as an artifact, which you can fill with whatever you want in your custom job scripts.

There are some changes to be aware of in how pipelines should be set up after this PR:

#### Pipeline generation

Because the pipeline generation job now writes a `spack.lock` artifact to be consumed by generated downstream jobs, `spack ci generate` takes a new option `--artifacts-root`, inside which it creates a `concrete_env` directory to place the lockfile.  This artifacts root directory is also where the `user_data` directory will live, in case you want to generate any custom artifacts.  If you do not provide `--artifacts-root`, the default is for it to create a `jobs_scratch_dir` within your `CI_PROJECT_DIR` (a gitlab predefined environment variable) or whatever is your current working directory if that variable isn't set. Here's the diff of the PR testing `.gitlab-ci.yml` taking advantage of the new option:

```
$ git diff develop..pipelines-reproducible-builds share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
diff --git a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
index 579d7b56f3..0247803a30 100644
--- a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
+++ b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
@@ -28,10 +28,11 @@ default:
     - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
     - spack env activate --without-view .
     - spack ci generate --check-index-only
+      --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
       --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
   artifacts:
     paths:
-      - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+      - "${CI_PROJECT_DIR}/jobs_scratch_dir"
   tags: ["spack", "public", "medium", "x86_64"]
   interruptible: true
```

Notice how we replaced the specific pointer to the generated pipeline file with its containing folder, the same folder we passed as `--artifacts-root`.  This way anything in that directory (the generated pipeline yaml, as well as the concrete environment directory containing the `spack.lock`) will be uploaded as an artifact and available to the downstream jobs.

#### Rebuild jobs

Rebuild jobs now must activate the concrete environment created by `spack ci generate` and provided via artifacts.  When the pipeline is generated, a directory called `concrete_environment` is created within the artifacts root directory, and this is where the `spack.lock` file is written to be passed to the generated rebuild jobs.  The artifacts root directory can be specified using the `--artifacts-root` option to `spack ci generate`, otherwise, it is assumed to be `$CI_PROJECT_DIR`.  The directory containing the concrete environment files (`spack.yaml` and `spack.lock`) is then passed to generated child jobs via the `SPACK_CONCRETE_ENV_DIR` variable in the generated pipeline yaml file.

When you don't provide custom `script` sections in your `mappings` within the `gitlab-ci` section of your `spack.yaml`, the default behavior of rebuild jobs is now to change into `SPACK_CONCRETE_ENV_DIR` and activate that environment.   If you do provide custom rebuild scripts in your `spack.yaml`, be aware those scripts should do the same thing: assume `SPACK_CONCRETE_ENV_DIR` contains the concretized environment to activate.  No other changes to existing custom rebuild scripts should be required as a result of this PR. 

As mentioned above, one key change made in this PR is the generation of the `install.sh` script by the rebuild jobs, as that same script is both run by the CI rebuild job as well as exported as an artifact to aid in subsequent attempts to reproduce the build outside of CI.  The generated `install.sh` script contains only a single `spack install` command with arguments computed by `spack ci rebuild`.  If the install fails, the job trace in gitlab will contain instructions on how to reproduce the build locally:

```
To reproduce this build locally, run:
  spack ci reproduce-build https://gitlab.next.spack.io/api/v4/projects/7/jobs/240607/artifacts [--working-dir <dir>]
If this project does not have public pipelines, you will need to first:
  export GITLAB_PRIVATE_TOKEN=<generated_token>
... then follow the printed instructions.
```

When run locally, the `spack ci reproduce-build` command shown above will download and process the job artifacts from gitlab, then print out instructions you  can copy-paste to run a local reproducer of the CI job.

This PR includes a few other changes to the way pipelines work, see the documentation on pipelines for more details.

This PR relies on:
~- [ ] #23194 to be able to refer to uninstalled specs by DAG hash~
EDIT: that is going to take longer to come to fruition, so for now, we will continue to install specs represented by a concrete `spec.yaml` file on disk.
- [x] #22657 to support installing a single spec already present in the active, concrete environment
2021-05-28 09:38:07 -07:00
Vanessasaurus
3cef5663d8
adding json export for spack blame (#23417)
I would like to be able to export (and save and then load programmatically)
spack blame metadata, so this commit adds a spack blame --json argument,
along with developer docs for it.
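
For example (the package name is an arbitrary placeholder; `--json` is the flag added here):

```bash
# Emit blame metadata for a package recipe as JSON instead of the usual table
$ spack blame --json zlib > zlib-blame.json
```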

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-05-25 12:40:08 -06:00
Vanessasaurus
b44bb952eb
first set of work to allow for saving local results with spack monitor (#23804)
This work will come in two phases. The first, here, is to allow saving a local result
with spack monitor, and the second will add a spack monitor command so the user can
do spack monitor upload.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-05-25 11:29:34 -07:00
vsoch
f2b362b5b3 adding support to tag a build
This will be useful for running multiple build experiments and organizing them by name

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
2021-05-19 07:01:18 -07:00
Harmen Stoppels
8446bebdd9
Revert "Separable module configurations (#22588)" (#23674)
This reverts commit cefbe48c89.
2021-05-17 06:42:48 -07:00
Greg Becker
cefbe48c89
Separable module configurations (#22588)
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).

This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.

As part of this change, the module roots configuration moved from the `config` section to inside each module configuration.

Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.

TODO:
- [x] code changes to support multiple module sets
- [x] code changes to support modules relative to a view
- [x] Tests for multiple module configurations
- [x] Tests for modules relative to a view
- [x] Backwards compatibility for module roots from config section
- [x] Backwards compatibility for default module set without the name specified
- [x] Tests for backwards compatibility
2021-05-14 15:03:28 -07:00
Scott Wittenburg
91de23ce65 install cmd: --no-add in an env installs w/out concretize and add
In an active, concretized environment, support installing one or more
cli specs only if they are already present in the environment.  The
`--no-add` option is the default for root specs, but optional for
dependency specs.  I.e. if you `spack install <depspec>` in an
environment, the dependency-only spec `depspec` will be added as a
root of the environment before being installed.  In addition,
`spack install --no-add <spec>` fails if it does not find an
unambiguous match for `spec`.
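
A short sketch of the behavior described above, with placeholder environment and spec names:

```bash
# Inside an activated, concretized environment, install a spec only if it
# already matches something in the environment; without --no-add, a
# dependency-only spec would first be added as a new root
$ spack env activate myenv
$ spack install --no-add openmpi
```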
2021-05-07 10:07:53 -07:00
Tim Haines
cad06a15e0
Dyninst: add elfutils versioning (#19648) 2021-04-27 07:15:37 -05:00
Tim Haines
c23ffd89ff
Dyninst: Add dependencies for v11.0.0 (#23121)
Also update the mpileaks unit test to avoid a conflict on CentOS 6
where Dyninst >=11.0.0 no longer builds due to a compiler version
conflict.
2021-04-26 13:53:53 -07:00
Chuck Atkins
7317b04318
ci: Remove leftover duplicate gitlab yaml (#23248) 2021-04-26 16:42:15 +00:00
Chuck Atkins
e3054c3318
ci: Generalize the GitLab CI pipeline yaml (#23225)
* ci: Generalize the GitLab CI pipeline yaml

* ci: Rename cloud_e4s_pipelines to the more general cloud_pipelines
2021-04-26 08:13:16 -06:00
Vanessasaurus
7f91c1a510
Merge pull request #21930 from vsoch/add/spack-monitor
This provides initial support for [spack monitor](https://github.com/spack/spack-monitor), a web application that stores information and analysis about Spack installations.  Spack can now contact a monitor server and upload analysis -- even after a build is already done.

Specifically, this adds:
- [x] monitor options for `spack install`
- [x] `spack analyze` command
- [x] hook architecture for analyzers
- [x] separate build logs (in addition to the existing combined log)
- [x] docs for spack analyze
- [x] reworked developer docs, with hook docs
- [x] analyzers for:
  - [x] config args
  - [x] environment variables
  - [x] installed files
  - [x] libabigail

There is a lot more information in the docs contained in this PR, so consult those for full details on this feature.

Additional tests will be added in a future PR.
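
A rough usage sketch: `spack install --monitor` and the `spack analyze` command are named above, but the `run` subcommand spelling below is an assumption:

```bash
# Install with monitoring enabled, then run the analyzers on the installed spec
$ spack install --monitor zlib
$ spack analyze run zlib
```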
2021-04-15 00:38:36 -07:00
Gregory Becker
c4141e16a7 update tutorial public key 2021-04-14 23:53:07 -07:00
Harmen Stoppels
8b16728fd9
Add "spack [cd|location] --source-dir" (#22321) 2021-03-29 17:31:24 +02:00
Massimiliano Culpo
629f94b4e1
CI: drastically reduce the number of tests for package only PRs (#22410)
PRs that change only package recipes will only run tests under "package_sanity.py" and without coverage. This should result in a huge drop in the CPU time spent in CI for most PRs.
2021-03-19 11:04:53 -07:00
Harmen Stoppels
15645147ed
Tab to spaces (#22362) 2021-03-18 06:20:06 +00:00
Massimiliano Culpo
b304b4bdb0
Speed-up CI by reorganizing tests (#22247)
* unit tests: mark slow tests as "maybeslow"

This commit also removes the "network" marker and
marks every "network" test as "maybeslow". Tests
marked as db are maintained, but they're not slow
anymore.

* GA: require style tests to pass before running unit-tests

* GA: make MacOS unit tests fail fast

* GA: move all unit tests into the same workflow, run style tests as a prerequisite

All the unit tests have been moved into the same workflow so that a single
run of the dorny/paths-filter action can be used to ask for coverage based
on the files that have been changed in a PR. The basic idea is that for PRs
that introduce only changes to packages, coverage is not necessary,
resulting in faster execution of the tests.

Also, for package only PRs slow unit tests are skipped.

Finally, MacOS and linux unit tests are now conditional on style tests passing
meaning that e.g. we won't waste a MacOS worker if we know that the PR has
flake8 issues.

* Addressed review comments

* Skipping slow tests on MacOS for package only recipes

* QA: make tests on changes correct before merging
2021-03-16 08:16:31 -07:00
Harmen Stoppels
195341113e
Expand relative dev paths in environment files (#22045)
* Rewrite relative dev_spec paths internally to absolute paths in case of relocation of the environment file

* Test relative paths for dev_path in environments

* Add a --keep-relative flag to spack env create

This ensures that relative paths of develop paths are not expanded to
absolute paths when initializing the environment in a different location
from the spack.yaml init file.
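
For example (environment name and init file path are placeholders):

```bash
# Create an environment from an existing spack.yaml without rewriting
# develop paths from relative to absolute
$ spack env create --keep-relative myenv /path/to/spack.yaml
```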
2021-03-15 15:38:35 -05:00
Harmen Stoppels
4f1d9d6095
Propagate --test= for environments (#22040)
* Propagate --test= for environments

* Improve help comment for spack concretize --test flag

* Add tests for --test with environments
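
A hedged example of the propagated flag; the accepted values are assumed to mirror those of `spack install --test`:

```bash
# Concretize an environment with build-time tests enabled for root specs
$ spack -e myenv concretize --test root
```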
2021-03-15 15:34:18 -05:00
Vanessasaurus
746081e933
adding spack -c to set one off config arguments (#22251)
This pull request will add the ability for a user to add a configuration argument on the fly, on the command line, e.g.,:

```bash
$ spack -c config:install_tree:root:/path/to/config.yaml -c packages:all:compiler:[gcc] list --help
```
The above command doesn't do anything (I'm just getting help for list) but you can imagine having another root of packages, and updating it on the fly for a command (something I'd like to do in the near future!)

I've moved the logic for config_add that used to be in spack/cmd/config.py into spack/config.py proper, and now both main.py (where spack commands live) and spack/cmd/config.py use these functions. I only needed spack config add, so I didn't move the others. We can move the others if they are also needed in multiple places.
2021-03-13 05:31:26 +00:00
Danny McClanahan
f0275e84ad
fix setup-env.sh on older linux zsh (#21721) 2021-03-10 09:44:50 -08:00
Todd Gamblin
8d3272f82d
spack python: add --path option (#22006)
This adds a `--path` option to `spack python` that shows the `python`
interpreter that Spack is using.

e.g.:

```console
$ spack python --path
/Users/gamblin2/src/spack/var/spack/environments/default/.spack-env/view/bin/python
```

This is useful for debugging, and we can ask users to run it to
understand what python Spack is picking up via preferences in `bin/spack`
and via the `SPACK_PYTHON` environment variable introduced in #21222.
2021-03-07 13:37:26 -08:00
Todd Gamblin
7aa5cc241d
add spack test list --all (#22032)
`spack test list` will show you which *installed* packages can be tested
but it won't show you which packages have tests.

- [x] add `spack test list --all` to show which packages have test methods
- [x] update `has_test_method()` to handle package instances *and*
      package classes.
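
For example:

```bash
# Show installed packages that can be tested
$ spack test list
# Show every package that defines a test method, installed or not
$ spack test list --all
```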
2021-03-07 11:44:17 -08:00
Massimiliano Culpo
10e9e142b7
Bootstrap clingo from sources (#21446)
* Allow the bootstrapping of clingo from sources

Allow python builds with system python as external
for MacOS

* Ensure consistent configuration when bootstrapping clingo

This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.

* Github actions: test clingo with bootstrapping from sources

* Add command to inspect and clean the bootstrap store

 Prevent users from setting the install tree root to the bootstrap store

* clingo: documented how to bootstrap from sources

Co-authored-by: Gregory Becker <becker33@llnl.gov>
2021-03-03 09:37:46 -08:00
Scott Wittenburg
704eadbda1
Temporarily reduce pr stack size (#21998)
Gitlab pipelines fixes

* add arch tag to avoid picking up UO power9 runners
* temporarily reduce PR workload
2021-02-27 09:43:29 -07:00
Paul Ferrell
e85a8cde37
Config prefer upstream (#21487)
This allows for quickly configuring a spack install/env to use upstream packages by default. This is particularly important when upstreaming from a set of officially supported spack installs on a production cluster. By configuring such that package preferences match the upstream, you ensure maximal reuse of existing package installations.
2021-02-24 10:57:50 -08:00
Scott Wittenburg
ee5992783c
Gitlab fix pr workflow (#21786)
Fixes for gitlab pipelines

* Remove accidentally retained testing branch name
* Generate pipeline w/out debug mode
* Make jobs interruptible for auto-cancel pending
* Work around concretization conflicts
2021-02-23 19:19:06 -07:00
Harmen Stoppels
0664b90751
Drop compiler variables from spack load (#21699)
Drops:

* C_INCLUDE_PATH
* CPLUS_INCLUDE_PATH
* LIBRARY_PATH
* INCLUDE

We already decided to use C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, INCLUDE over CPATH here:

https://github.com/spack/spack/pull/14749

However, none of these variables apply to Fortran on Linux. So for consistency it seems better to have the user pass -I and -L flags by hand or through pkg-config.
2021-02-23 13:35:19 +01:00
Scott Wittenburg
6b509a95da
Pipelines: Move PR testing stacks (currently only E4S) into spack (#21714) 2021-02-18 18:50:57 -07:00
Severin Strobl
0da2b82df2
Fixed conditional in match_flag for fish env (#21679)
An attempt to fix the conditional was made in 5a771bc8ad, yet this broke
the conditional completely.
2021-02-18 23:29:23 +00:00
Scott Wittenburg
5b0507cc65
Pipelines: Temporary buildcache storage (#21474)
Before this change, in pipeline environments where runners do not have access
to persistent shared file-system storage, the only way to pass buildcaches to
dependents in later stages was by using the "enable-artifacts-buildcache" flag
in the gitlab-ci section of the spack.yaml.  This change supports a second
mechanism, named "temporary-storage-url-prefix", which can be provided instead
of the "enable-artifacts-buildcache" feature, but the two cannot be used at the
same time.  If this prefix is provided (only "file://" and "s3://" urls are
supported), the gitlab "CI_PIPELINE_ID" will be appended to it to create a url
for a mirror where pipeline jobs will write buildcache entries for use by jobs
in subsequent stages.  If this prefix is provided, a cleanup job that deletes the
contents of the temporary mirror will be generated to run after all the rebuild
jobs have finished.  To support this behavior a new mirror
sub-command has been added: "spack mirror destroy" which can take either a
mirror name or url.
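
A hedged example of the new sub-command; the description above only says it accepts a mirror name or url, so the flag spellings here are assumptions:

```bash
# Delete everything under the temporary per-pipeline mirror, by url...
$ spack mirror destroy --mirror-url "s3://my-bucket/temporary-storage/${CI_PIPELINE_ID}"
# ...or by configured mirror name
$ spack mirror destroy --mirror-name my-temporary-mirror
```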

This change also fixes a bug in generation of the "needs" list for each job.  Each
job's "needs" list is supposed to only contain direct dependencies for scheduling
purposes, unless "enable-artifacts-buildcache" is specified.  Only in that case
are the needs lists supposed to contain all transitive dependencies.  This
change fixes a bug that caused the needs lists to always contain all transitive
dependencies, regardless of whether or not "enable-artifacts-buildcache" was
specified.
2021-02-16 18:21:18 -07:00
Chuck Atkins
2870cc4c92
Add RHEL8 Universal Base Image with platform-python to CI unit tests (#21655) 2021-02-16 13:49:05 -05:00
Scott Wittenburg
428f831899
Pipelines: DAG Pruning (#20435)
Pipelines: DAG pruning

During the pipeline generation staging process we check each spec against all configured mirrors to determine whether it is up to date on any of the mirrors.  By default, and with the --prune-dag argument to "spack ci generate", any spec already up to date on at least one remote mirror is omitted from the generated pipeline.  To generate jobs for up to date specs instead of omitting them, use the --no-prune-dag argument.  To speed up the pipeline generation process, pass the --check-index-only argument.  This will cause spack to check only remote buildcache indices and avoid directly fetching any spec.yaml files from mirrors.  The drawback is that if the remote buildcache index is out of date, spec rebuild jobs may be scheduled unnecessarily.
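
For example, combining the options described above (the output path is a placeholder):

```bash
# Generate jobs even for specs that are already up to date on a mirror,
# checking only the remote buildcache indices rather than individual spec.yaml files
$ spack ci generate --no-prune-dag --check-index-only --output-file pipeline.yml
```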

This change removes the final-stage-rebuild-index block from gitlab-ci section of spack.yaml.  Now rebuilding the buildcache index of the mirror specified in the spack.yaml is the default, unless "rebuild-index: False" is set.  Spack assigns the generated rebuild-index job runner attributes from an optional new "service-job-attributes" block, which is also used as the source of runner attributes for another generated non-build job, a no-op job, which spack generates to avoid gitlab errors when DAG pruning results in empty pipelines.
2021-02-16 09:12:37 -07:00