The default URL could not be the one with `v0.0.0-aws`, since Spack was
replacing `v0.0.0-aws` with `v<version_number>`, thereby deleting the
`-aws` suffix. I used the `url_for_version` method to specify this suffix.
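A minimal sketch of such an override, assuming tarball URLs of the form
`.../v{version}-aws.tar.gz` (the URL below is illustrative, not the package's real one):
```python
def url_for_version(self, version):
    # Keep the "-aws" suffix that Spack's URL extrapolation would drop.
    return f"https://github.com/example/project/archive/refs/tags/v{version}-aws.tar.gz"
```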
This adds support for prereleases. Alpha, beta and release candidate
suffixes are ordered in the intuitive way:
```
1.2.0-alpha < 1.2.0-alpha.1 < 1.2.0-beta.2 < 1.2.0-rc.3 < 1.2.0 < 1.2.0-xyz
```
Alpha, beta and rc prereleases are defined as follows: split the version
string into components as before (on delimiters and string boundaries).
If there is a string component `alpha`, `beta` or `rc`, followed by an optional
numeric component at the end, then the version is a prerelease.
So `1.2.0-alpha.1 == 1.2.0alpha1 == 1.2.0.alpha1` are all the same, as usual.
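As an illustration of this rule (a sketch, not Spack's actual parser):
```python
import re

# Split into alternating string/numeric components, as described above.
COMPONENT = re.compile(r"[a-zA-Z]+|[0-9]+")

def is_prerelease(version_string):
    parts = COMPONENT.findall(version_string)
    # Trailing "alpha"/"beta"/"rc", optionally followed by a number.
    if parts and parts[-1].isdigit():
        parts = parts[:-1]
    return bool(parts) and parts[-1] in ("alpha", "beta", "rc")

print(is_prerelease("1.2.0-alpha.1"))  # True
print(is_prerelease("1.2.0alpha1"))    # True
print(is_prerelease("1.2.0"))          # False
print(is_prerelease("1.2.0-xyz"))      # False
```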
The strings `alpha`, `beta` and `rc` are chosen because they match semver,
they are sufficiently long to be unambiguous, and they all contain at least
one non-hex character to distinguish them from shasum/digest type suffixes.
The comparison key is now stored as `(release_tuple, prerelease_tuple)`, so in
the above example:
```
((1,2,0),(ALPHA,)) < ((1,2,0),(ALPHA,1)) < ((1,2,0),(BETA,2)) < ((1,2,0),(RC,3)) < ((1,2,0),(FINAL,)) < ((1,2,0,"xyz"), (FINAL,))
```
The version ranges `@1.2.0:` and `@:1.1` do *not* include prereleases of
`1.2.0`.
So for packaging, if the `1.2.0alpha` and `1.2.0` versions have the same constraints on
dependencies, it's best to write
```python
depends_on("x@1:", when="@1.2.0alpha:")
```
However, `@1.2:` does include `1.2.0alpha`. This is because Spack considers
`1.2 < 1.2.0` as distinct versions, with `1.2 < 1.2.0alpha < 1.2.0` as a consequence.
Alternatively, the above `depends_on` statement can therefore be written
```python
depends_on("x@1:", when="@1.2:")
```
which can be useful too: it is a short-hand that includes prereleases, and you
can still be explicit and exclude prereleases by spelling out the patch version
number.
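A quick sanity check of these range semantics, assuming Spack's
`spack.version` API behaves as described above:
```python
from spack.version import ver

# Prereleases sort before the final release they precede...
assert ver("1.2.0alpha1") < ver("1.2.0")
# ...so "@1.2.0:" excludes them, while "@1.2:" includes them.
assert not ver("1.2.0alpha1").satisfies(ver("1.2.0:"))
assert ver("1.2.0alpha1").satisfies(ver("1.2:"))
```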
### Concretization
Concretization uses a different version order than `<`. Prereleases are ordered
between final releases and develop versions. That way, users should not
have to set `preferred=True` on every final release if they add just one
prerelease to a package. The concretizer is unlikely to pick a prerelease when
final releases are possible.
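For instance, given versions `1.1`, `1.2.0`, `1.2.0alpha1` and `develop`, the
preference order implied by the rule above would roughly be (most preferred
first; an illustration, not actual solver output):
```
1.2.0 > 1.1 > 1.2.0alpha1 > develop
```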
### Limitations
1. You can't express a range that includes all alpha releases but excludes all beta
releases. The only alternative is good old repeated nines: `@:1.2.0alpha99`.
2. The Python ecosystem defaults to the `a`, `b`, `rc` strings, so translating Python versions to
Spack versions requires expanding them to `alpha`, `beta`, `rc`. This is mildly annoying, because
it means we may need to compute URLs differently (not done in this commit).
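For example, a straightforward translation could look like this (an
illustrative sketch, not Spack's actual implementation):
```python
import re

PEP440_PRE = re.compile(r"(a|b|rc)([0-9]+)$")
EXPAND = {"a": "alpha", "b": "beta", "rc": "rc"}

def pep440_to_spack(version_string):
    # Expand a trailing a1/b2/rc3 into alpha1/beta2/rc3.
    return PEP440_PRE.sub(lambda m: EXPAND[m.group(1)] + m.group(2), version_string)

print(pep440_to_spack("1.2.0a1"))   # 1.2.0alpha1
print(pep440_to_spack("1.2.0b2"))   # 1.2.0beta2
print(pep440_to_spack("1.2.0rc3"))  # 1.2.0rc3
```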
### Hash
Care is taken not to break hashes of versions that do not have a prerelease
suffix.
Generate CI scripts as PowerShell on Windows. This is intended to
output exactly the same bash scripts as before on Linux.
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
As the CMake build is triggered by scikit-build, the usual Spack option
for enabling tests had no effect and the heavy test suite ran every time.
I used https://github.com/scikit-build/cmake-python-distributions/issues/172#issuecomment-890322263
to implement passing options to the actual `cmake` build.
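A minimal sketch of that mechanism, assuming scikit-build honors the
`SKBUILD_CONFIGURE_OPTIONS` environment variable and that `-DBUILD_TESTING=OFF`
is the relevant switch (both are assumptions here):
```python
def setup_build_environment(self, env):
    # Forward CMake options through scikit-build to the underlying
    # cmake configure step (assumed variable and option names).
    options = []
    if not self.run_tests:
        options.append("-DBUILD_TESTING=OFF")
    env.set("SKBUILD_CONFIGURE_OPTIONS", " ".join(options))
```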
I also excluded some tests that failed for me on alma9 (gcc 11.4.1),
so the rest of the test suite can be run.
Enable OpenBLAS's built-in CPU capability detection and kernel selection.
This allows run-time selection of the "best" kernels for the running CPU, rather
than the kernels specified at build time. For example, it allows OpenBLAS to use
AVX512 kernels when running on ZEN4, even when built targeting the "ZEN" architecture.
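OpenBLAS exposes this through its `DYNAMIC_ARCH` make flag; a sketch of how a
package might wire it up (the variant name is an assumption):
```python
variant("dynamic_dispatch", default=True,
        description="Enable runtime CPU detection and kernel dispatch")

@property
def make_defs(self):
    defs = []
    if self.spec.satisfies("+dynamic_dispatch"):
        # DYNAMIC_ARCH builds kernels for many CPUs and picks at runtime.
        defs.append("DYNAMIC_ARCH=1")
    return defs
```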
Co-authored-by: Branden Moore <branden.moore@amd.com>
* Changes to re-enable aws-pcluster pipelines
- Use compilers from the pre-installed Spack store so that compiler path relocation works when downloading from the buildcache.
- Install gcc from hash so there is no risk of building gcc from source in the pipeline.
- `packages.yaml` files are now part of the pipelines.
- No more external `postinstall.sh`. The necessary steps are in `setup-pcluster.sh` and will be version controlled within this repo.
- Re-enable pipelines.
* Debugging output & mv skylake -> skylake_avx512
* Explicitly check for packages
* Handle case with no intel compiler
* Fix compatibility when using setup-pcluster.sh on a pre-installed cluster.
* Disable palace as parser cannot read require clause at the moment
* ifort cannot build superlu in buildcache
`ifort` is unable to handle the long file names used when cmake compiles
test programs inside the build cache.
* Fix spack commit for intel compiler installation
* Need to fetch other commits before using them
* fix style
* Add TODO
* Update packages.yaml to not use 'compiler:', 'target:' or 'provider:'
Synchronize with changes in https://github.com/spack/spack-configs/blob/main/AWS/parallelcluster/
* Use Intel compiler from later version (orig commit no longer found)
* Use envsubst to deal with quoted newlines
This is cleaner than the `eval` command used before.
* Need to fetch tags for checkout on version number
* Intel compiler needs to be from version that has compatible DB
* Install intel compiler with commit that has DB ver 7
* Decouple the intel compiler installation from current commit
- Use a completely different Spack installation so that the current pipeline
commit remains untouched.
- Make the script succeed even if the compiler installation fails (e.g. because
the database version has been updated).
- Make the install targets fall back to gcc in case the compiler did not install
correctly.
* Use generic target for x86_64_vX
There is no way to provision a skylake/icelake/zen runner. They are all in the
same pools under x86_64_v3 and x86_64_v4.
* Find the intel compiler in the current spack installation
* Remove SPACK_TARGET_ARCH
* Fix virtual package index & use packages.yaml for intel compiler
* Use only one stack & pipeline per generic architecture
* Fix yaml format
* Cleanup typos
* Include fix for ifx.cfg to get the right gcc toolchain when linking
* [removeme] Adding timeout to debug hang in make (palace)
* Revert "[removeme] Adding timeout to debug hang in make (palace)"
This reverts commit fee8a01580489a4ea364368459e9353b46d0d7e2.
* palace on x86_64_v4 gets stuck when compiling; try a newer oneapi
* Update comment
* Use the latest container image
* Update gcc_hashes to match new container
* Use only one job providing tags per extends call
Also removed an unnecessary tag.
* Move generic setup script out of individual stack
* Cleanup from last commit
* Enable checking signature for packages available on the container
* Remove commented packages / Add comment for palace
* Enable openmpi@5 which needs pmix>3
* Don't look for the intel compiler on aarch64
* py-python-lsp-server: add v1.10.0
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Remove py-wheel from package
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
---------
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>