Generate CI scripts as PowerShell on Windows. This is intended to
output exactly the same bash scripts as before on Linux.
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
As the cmake build is triggered by scikit-build, the usual spack option
for enabling tests had no effect and the heavy test suite ran all the time.
Followed https://github.com/scikit-build/cmake-python-distributions/issues/172#issuecomment-890322263
to pass options to the actual `cmake` build.
I also excluded some tests that failed for me on AlmaLinux 9 (gcc 11.4.1),
so that the rest of the test suite can run.
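A minimal sketch of the idea, assuming the scikit-build backend honors a `CMAKE_ARGS`-style environment variable (the variable and option names below are assumptions, not the recipe's actual code):
```python
# Hypothetical sketch: forward options to the CMake build that scikit-build
# drives, keyed on whether Spack was asked to run tests.
def setup_build_environment(self, env):
    cmake_args = [
        "-DBUILD_TESTING:BOOL={0}".format("ON" if self.run_tests else "OFF"),
    ]
    # Assumption: the build backend reads CMAKE_ARGS and appends it to its
    # own cmake invocation.
    env.set("CMAKE_ARGS", " ".join(cmake_args))
```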
Enable OpenBLAS's built-in CPU capability detection and kernel selection.
This allows run-time selection of the "best" kernels for the running CPU, rather
than what is specified at build time. For example, it allows OpenBLAS to use
AVX512 kernels when running on ZEN4, even when built targeting the "ZEN" architecture.
Co-authored-by: Branden Moore <branden.moore@amd.com>
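In OpenBLAS terms this is the `DYNAMIC_ARCH` build option; a hedged sketch of how a Spack recipe might pass it through (the variant name and the make-argument plumbing below are illustrative, not the exact openblas package.py):
```python
# Illustrative sketch: translate a Spack variant into OpenBLAS make arguments.
# DYNAMIC_ARCH=1 builds kernels for many CPUs and dispatches at run time.
variant("dynamic_dispatch", default=True,
        description="Enable run-time CPU detection and kernel selection")

@property
def make_defs(self):
    defs = []
    if self.spec.satisfies("+dynamic_dispatch"):
        defs.append("DYNAMIC_ARCH=1")
    return defs
```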
* Changes to re-enable aws-pcluster pipelines
- Use compilers from the pre-installed spack store such that compiler path relocation works when downloading from the buildcache.
- Install gcc from hash so there is no risk of building gcc from source in the pipeline.
- `packages.yaml` files are now part of the pipelines.
- No more external `postinstall.sh`. The necessary steps are in `setup-pcluster.sh` and will be version controlled within this repo.
- Re-enable pipelines.
* Add and
* Debugging output & mv skylake -> skylake_avx512
* Explicitly check for packages
* Handle case with no intel compiler
* Ensure compatibility when using setup-pcluster.sh on a pre-installed cluster.
* Disable palace as the parser cannot read the require clause at the moment
* ifort cannot build superlu in the buildcache
`ifort` is unable to handle file names as long as those used when cmake compiles
test programs inside the build cache.
* Fix spack commit for intel compiler installation
* Need to fetch other commits before using them
* fix style
* Add TODO
* Update packages.yaml to not use 'compiler:', 'target:' or 'provider:'
Synchronize with changes in https://github.com/spack/spack-configs/blob/main/AWS/parallelcluster/
* Use Intel compiler from later version (orig commit no longer found)
* Use envsubst to deal with quoted newlines
This is cleaner than the `eval` command used previously.
* Need to fetch tags for checkout on version number
* Intel compiler needs to be from version that has compatible DB
* Install intel compiler with commit that has DB ver 7
* Decouple the intel compiler installation from current commit
- Use a completely different spack installation such that this current pipeline
commit remains untouched.
- Make the script succeed even if the compiler installation fails (e.g. because
the database version has been updated)
- Make the install targets fall back to gcc in case the compiler did not install
correctly.
* Use generic target for x86_64_vX
There is no way to provision a skylake/icelake/zen runner. They are all in the
same pools under x86_64_v3 and x86_64_v4.
* Find the intel compiler in the current spack installation
* Remove SPACK_TARGET_ARCH
* Fix virtual package index & use package.yaml for intel compiler
* Use only one stack & pipeline per generic architecture
* Fix yaml format
* Cleanup typos
* Include fix for ifx.cfg to get the right gcc toolchain when linking
* [removeme] Adding timeout to debug hang in make (palace)
* Revert "[removeme] Adding timeout to debug hang in make (palace)"
This reverts commit fee8a01580489a4ea364368459e9353b46d0d7e2.
* palace gets stuck when compiling on x86_64_v4; try a newer oneapi
* Update comment
* Use the latest container image
* Update gcc_hashes to match new container
* Use only one tags-providing entry per `extends` call
Also removed an unnecessary tag.
* Move generic setup script out of individual stack
* Cleanup from last commit
* Enable checking signature for packages available on the container
* Remove commented packages / Add comment for palace
* Enable openmpi@5 which needs pmix>3
* don't look for intel compiler on aarch64
* py-python-lsp-server: add v1.10.0
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Remove py-wheel from package
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
---------
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Running a `spack-python` script like this:
```python
import spack
import multiprocessing
def echo(args):
    print(args)

if __name__ == "__main__":
    pool = multiprocessing.Pool(2)
    pool.map(echo, range(10))
```
will fail in `develop` with an error like this:
```console
_pickle.PicklingError: Can't pickle <function echo at 0x104865820>: attribute lookup echo on __main__ failed
```
Python expects to be able to look up the function `echo` in `sys.modules["__main__"]` in
subprocesses spawned by `multiprocessing`, but because we use `InteractiveConsole` to
run `spack python`, the executed file isn't considered to be the `__main__` module, and
lookups in subprocesses fail. We tried to fake this by setting `__name__` to `__main__`
in the `spack python` command, but that doesn't fix the fact that no `__main__` module
exists.
Another annoyance with `InteractiveConsole` is that `__file__` is not defined in the
main script scope, so you can't use it in your scripts.
We can use the [runpy.run_path()](https://docs.python.org/3/library/runpy.html#runpy.run_path) function,
which has been around since Python 3.2, to fix this.
- [x] Use `runpy` module to launch non-interactive `spack python` invocations
- [x] Only use `InteractiveConsole` for interactive `spack python`
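A minimal sketch of the `runpy`-based approach (illustrative only; the names are not the exact implementation in the `spack python` command):
```python
import runpy
import sys

def run_python_script(path, argv=()):
    # Run the script as a real __main__ module so that functions defined in it
    # can be pickled by multiprocessing and __file__ is set as expected.
    sys.argv = [path, *argv]
    runpy.run_path(path, run_name="__main__")
```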
Often in containers, the files we use to detect whether a cray system supports new features are not available.
Given that the cray containers only support the newer versions, and that these versions have been
around for a while at this point and only a few sites don't support them, this PR changes the logic for
detecting cray systems so that:
1. Don't even consider whether something is the `cray` platform if `/opt/cray` is not in `MODULEPATH`
2. Only use the `cray` platform if we can read files in `/opt/cray/pe` and positively detect an older version
3. Otherwise, assume we're *not* on a cray (includes newer Cray PE's, which we treat as Linux)
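An illustrative sketch of that detection order (helper and marker names are assumptions, not the exact spack code):
```python
import os

def detect_old_cray_pe():
    # 1. Don't even consider "cray" unless /opt/cray shows up in MODULEPATH.
    if "/opt/cray" not in os.environ.get("MODULEPATH", ""):
        return False
    # 2./3. Only report "cray" when /opt/cray/pe is readable and an older PE
    # version is positively detected; otherwise treat the host as plain Linux.
    try:
        entries = os.listdir("/opt/cray/pe")
    except OSError:
        return False
    # Assumption: a "cdt" directory marks the older Cray PE layout.
    return any(name.startswith("cdt") for name in entries)
```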
Compilation with the old flags fails on PowerPC (power8le) due to syntax
errors in the output from the preprocessor. Compilation with the
extended set of flags works both on PowerPC and x86_64.
The correct set of flags was suggested by the berkeleygw developers:
https://groups.google.com/a/berkeleygw.org/g/help/c/ewi3RZgOyeE/m/jSIoe45PAgAJ
Uses the spack-provided compiler prefix to call `cpp` when compiling berkeleygw with gcc.
Adds variants to turn off tests
Add variants for some missing TPL options
Add the variables required to build in ~shared
* Add pamgen to Trilinos as a variant to support SEACAS
This adds the ability to turn pamgen on and off as needed
through the variant interface of the Trilinos package.py.
Adapt the seacas package.py to build with the appropriate
Trilinos variants.
Use zlib-api as the depends_on target instead of zlib directly in the SEACAS
package.py
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
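A hedged sketch of the variant plumbing described above (variant and CMake option names are illustrative, not necessarily the exact Trilinos and SEACAS recipes):
```python
# In the Trilinos package.py: expose pamgen as a variant and map it to the
# corresponding Trilinos CMake enable flag.
variant("pamgen", default=False, description="Build the pamgen mesh generator")

def cmake_args(self):
    return [self.define_from_variant("Trilinos_ENABLE_Pamgen", "pamgen")]

# In the SEACAS package.py: request the variant from the Trilinos dependency
# and depend on the zlib-api virtual rather than zlib directly.
depends_on("trilinos+pamgen", when="+pamgen")
depends_on("zlib-api")
```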
* Add variant typescript for py-jupyter-server@:1, which then requires npm/node. Patch the build system for ~typescript so that it doesn't find any npm/node installation and attempt to build the typescript extension even though it shouldn't
* Fix formatting in var/spack/repos/builtin/packages/py-jupyter-server/package.py
* Constrain typescript variant to py-jupyter-server versions 1.10.2:1
* `with when(...)` not needed if the variant doesn't exist for other versions
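An illustrative sketch of constraining such a variant to a version range in a Spack package.py (assumed wording; not the exact py-jupyter-server recipe):
```python
# The variant only exists for the affected versions, so a `with when(...)`
# block around the dependent directives is not needed.
variant(
    "typescript",
    default=False,
    when="@1.10.2:1",
    description="Build the typescript extension (requires npm/node)",
)
depends_on("node-js", type="build", when="+typescript")
depends_on("npm", type="build", when="+typescript")
```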