Spack is a multi-platform package manager that builds and installs multiple versions and configurations of software. It works on Linux, macOS, and many supercomputers. Spack is non-destructive: installing a new version of a package does not break existing installations, so many configurations of the same package can coexist.
Spack offers a simple "spec" syntax that allows users to specify versions and configuration options. Package files are written in pure Python, and specs allow package authors to write a single script for many different builds of the same package. With Spack, you can build your software all the ways you want to.
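A few illustrative specs show how the syntax composes (the package names, versions, and compilers below are placeholders; what is actually available depends on your Spack installation):

```shell
# Request a specific version with @
spack install hdf5@1.14

# Turn variants on/off with + and ~
spack install hdf5 +mpi ~shared

# Pick a compiler with %
spack install hdf5 %gcc@12

# Constrain a dependency after ^
spack install hdf5 ^openmpi@4.1
```

Each clause narrows the build Spack concretizes, so the same package file serves many different configurations.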
See the Feature Overview for examples and highlights.
To install Spack and your first package, make sure you have Python and Git. Then:
$ git clone -c feature.manyFiles=true https://github.com/spack/spack.git
$ cd spack/bin
$ ./spack install zlib
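Once the install finishes, a couple of common follow-up commands (a sketch; zlib mirrors the example above):

```shell
# List everything Spack has installed
spack find

# Show the install prefix of the new package
spack find --paths zlib
```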
Documentation
Full documentation is available, or run spack help or spack help --all.
For a cheat sheet on Spack syntax, run spack help --spec.
Tutorial
We maintain a hands-on tutorial. It covers basic to advanced usage, packaging, developer features, and large HPC deployments. You can do all of the exercises on your own laptop using a Docker container.
Feel free to use these materials to teach users at your organization about Spack.
Community
Spack is an open source project. Questions, discussion, and contributions are welcome. Contributions can be anything from new packages to bugfixes, documentation, or even new core features.
Resources:
- Slack workspace: spackpm.slack.com. To get an invitation, visit slack.spack.io.
- Matrix space: #spack-space:matrix.org: bridged to Slack.
- GitHub Discussions: for Q&A and discussions. Note the pinned discussions for announcements.
- Twitter: @spackpm. Be sure to @mention us!
- Mailing list: groups.google.com/d/forum/spack: only for announcements. Please use other venues for discussions.
Contributing
Contributing to Spack is relatively easy. Just send us a pull request. When you send your request, make develop the destination branch on the Spack repository.
Your PR must pass Spack's unit tests and documentation tests, and must be PEP 8 compliant. We enforce these guidelines with our CI process. To run these tests locally, and for helpful tips on git, see our Contribution Guide.
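The Contribution Guide covers the full workflow; as a rough sketch, the local checks mentioned above can be run with Spack's own commands (assuming a recent Spack checkout):

```shell
# Check style compliance (PEP 8 and friends)
spack style

# Run the unit test suite
spack unit-test
```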
Spack's develop branch has the latest contributions. Pull requests should target develop, and users who want the latest package versions, features, etc. can use develop.
Releases
For multi-user site deployments or other use cases that need very stable software installations, we recommend using Spack's stable releases.
Each Spack release series also has a corresponding branch, e.g. releases/v0.14 has 0.14.x versions of Spack, and releases/v0.13 has 0.13.x versions. We backport important bug fixes to these branches, but we do not advance the package versions or make other changes that would change the way Spack concretizes dependencies within a release branch. So, you can base your Spack deployment on a release branch and git pull to get fixes, without the package churn that comes with develop.
The latest release is always available with the releases/latest tag.
See the docs on releases for more details.
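For example, a deployment pinned to a release branch might look like this (the branch name reuses the releases/v0.14 example above and is illustrative):

```shell
# Clone and check out a release branch
git clone https://github.com/spack/spack.git
cd spack
git checkout releases/v0.14   # or: git checkout releases/latest

# Later, pull backported bug fixes without package churn
git pull
```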
Code of Conduct
Please note that Spack has a Code of Conduct. By participating in the Spack community, you agree to abide by its rules.
Authors
Many thanks go to Spack's contributors.
Spack was created by Todd Gamblin, tgamblin@llnl.gov.
Citing Spack
If you are referencing Spack in a publication, please cite the following paper:
- Todd Gamblin, Matthew P. LeGendre, Michael R. Collette, Gregory L. Lee, Adam Moody, Bronis R. de Supinski, and W. Scott Futral. The Spack Package Manager: Bringing Order to HPC Software Chaos. In Supercomputing 2015 (SC’15), Austin, Texas, November 15-20 2015. LLNL-CONF-669890.
On GitHub, you can copy this citation in APA or BibTeX format via the "Cite this repository" button. Or, see the comments in CITATION.cff for the raw BibTeX.
License
Spack is distributed under the terms of both the MIT license and the Apache License (Version 2.0). Users may choose either license, at their option.
All new contributions must be made under both the MIT and Apache-2.0 licenses.
See LICENSE-MIT, LICENSE-APACHE, COPYRIGHT, and NOTICE for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
LLNL-CODE-811652