The buildcache is now extracted into a temporary folder within the
current store, then moved to its final location and relocated.
`spack clean -s` has been extended to also clean the temporary
extraction directory.
Add hardlinks with absolute paths for libraries in the corge, garply,
and quux packages, so that tests detect incorrect handling of
hardlinks.
Problem: Flux expects `FLUX_PMI_LIBRARY_PATH` to point directly at
the `libpmi.so` installed by Flux. When the env var is unset,
prepending to it happens to produce exactly that. In the rare case
that the env var is already set, however, the Spack `libpmi.so` path
is prepended together with a `:`, and Flux then attempts to interpret
the combined string as a single path.
Solution: don't prepend to the path; instead set it to point at the
`libpmi.so` (which will be undone when Flux is unloaded).
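A minimal sketch of that change in flux-core's
`setup_run_environment` (the exact install location of `libpmi.so` is
an assumption here):
```python
def setup_run_environment(self, env):
    # Flux treats FLUX_PMI_LIBRARY_PATH as a single file path, not a
    # search path, so set it outright instead of prepending to it; the
    # set is undone when the Flux module is unloaded.
    env.set(
        "FLUX_PMI_LIBRARY_PATH",
        join_path(self.prefix.lib, "flux", "libpmi.so"),  # assumed path
    )
```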
* flux-core: remove deprecated environment variables
The earliest checksummed version in this package is 0.15.0. As of
0.12.0, wreck (and its associated paths) no longer exists in Flux. As
of 0.13.0, the `FLUX_RCX_PATH` variables are no longer used. So remove
these env vars from `setup_run_environment`.
gromacs@2018:2020.6 is fixed to build with gcc@11.2.0
by adding `#include <limits>` to a few header files.
Thanks to Maciej Wójcik <w8jcik@gmail.com> for testing versions.
The `find` command was missing from the examples that force colorized
output (e.g. something like `spack --color always find | less -R`).
Without this (or another suitable) command, spack produces output
without any color, so one cannot see any difference between forced
colorized and non-colorized output.
There was a bug in 2.36.* where Makefile dependencies were missing.
The previous workaround was to require 2.36 to be built serially.
This is now fixed upstream in 2.37, and this PR adds the patch to
restore parallel make to 2.36.
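In Spack terms, the change boils down to a directive along these
lines in the binutils package (the patch file name is hypothetical):
```python
# Hypothetical file name; carries the upstream Makefile-dependency fix
# back to the 2.36 point releases so they build with parallel make.
patch("binutils-2.36-parallel-make.patch", when="@2.36.0:2.36")
```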
* py-niworkflows: add new package
* Update var/spack/repos/builtin/packages/py-niworkflows/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* remove unnecessary comment
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-nistats: add new package
* Update var/spack/repos/builtin/packages/py-nistats/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* remove `conflicts`
* remove test dependencies
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When deployed on Kubernetes, the server sends back permanent redirect
responses. This is handled elegantly by the requests library, but not
by the urllib we have to use here, so I have to handle it manually by
parsing the exception to get the Location header and then retrying the
request there.
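A minimal sketch of that fallback, assuming a urllib-based helper
(the names here are illustrative, not the actual client code):
```python
import urllib.error
import urllib.request

def urlopen_following_permanent_redirect(request):
    """Retry a urllib request once at the Location of a redirect."""
    try:
        return urllib.request.urlopen(request)
    except urllib.error.HTTPError as err:
        # Unlike requests, urllib surfaces the permanent redirect as an
        # HTTPError; recover the target from the Location header.
        location = err.headers.get("Location")
        if err.code in (301, 308) and location:
            request.full_url = location  # re-aim the same request
            return urllib.request.urlopen(request)
        raise
```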
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* packages/phist, re #26002: force phist to use MPI compiler wrappers (copied from trilinos package)
* packages/phist re #26002, use cmake-provided FindMPI module only
* packages/phist source code formatting
* packages/phist: set MPI_HOME rather than MPI_BASE_DIR, thanks @sethri.
* phist: delete own FindMPI.cmake for older versions (rather than patching it away)
* packages/phist: remove blank line
* phist: adjust sorting of imports
* phist: change order of imports
The ASP-based solver maximizes the number of values in multi-valued
variants (if other higher-order constraints are met), to avoid cases
where only a subset of the values that have been specified on the
command line or imposed by another constraint is selected.
Here we swap the priority of this optimization target with the
selection of the default providers, to avoid unexpected results
like the one in #26598.
Seems like https://bugs.python.org/issue29699 is relevant. Better to
just ignore errors when removing the tmpdir; the OS will remove it
anyway.
Errors are happening randomly in tests that use this fixture.
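A minimal sketch of the change, assuming the fixture's cleanup goes
through `shutil.rmtree`:
```python
import shutil
import tempfile

tmpdir = tempfile.mkdtemp()
# ... the test runs inside tmpdir ...
# Swallow the racy removal errors from bpo-29699; the OS reaps
# leftover temporary directories anyway.
shutil.rmtree(tmpdir, ignore_errors=True)
```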
TL;DR: there are matching groups trying to match 1 or more occurrences
of something. We don't use the matching group, so it's sufficient to
test for 1 occurrence. This reduces quadratic complexity to linear
time.
---
When parsing logs of an mpich build, I'm getting a 4 minute (!!) wait
with 16 threads for regexes to run:
```
In [1]: %time p.parse("mpich.log")
Wall time: 4min 14s
```
That's really unacceptably slow...
After some digging, it seems a few regexes tend to have `O(n^2)` scaling
where `n` is the string / log line length. I don't think they *necessarily*
should scale like that, but it seems that way. The common pattern is this
```
([^:]+): error
```
which matches `: error` literally, and then one or more non-colons before that. So
for a log line like this:
```
abcdefghijklmnopqrstuvwxyz: error etc etc
```
Any of these are potential group matches when using `search` in Python:
```
abcdefghijklmnopqrstuvwxyz
bcdefghijklmnopqrstuvwxyz
cdefghijklmnopqrstuvwxyz
⋮
yz
z
```
but clearly the capture group should return the longest match.
My hypothesis is that Python has a very bad implementation of `search`
that somehow considers all of these, even though it can be implemented
in linear time by scanning for `: error` first, and then greedily expanding
the longest possible `[^:]+` match to the left. If Python indeed considers
all possible matches, then with `n` matches of length `1 .. n` you
see the `O(n^2)` slowness (I verified this by replacing `+` with
`{1,k}` and doubling `k`: the execution time indeed doubles).
This PR fixes this by removing the `+`, effectively turning the
O(n^2) worst case into O(n).
The reason we are fine with dropping `+` is that we don't use the
capture group anywhere, so we just need to ensure `:: error` is not a
match but `x: error` is.
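An illustrative micro-benchmark (not from the PR) comparing the two
patterns on a long line without `: error`, which is where the
backtracking cost piles up:
```python
import re
import time

line = "a" * 10_000  # long log line that never matches

for pattern in (r"([^:]+): error", r"([^:]): error"):
    start = time.perf_counter()
    re.search(pattern, line)
    # The `+` version re-scans the tail from every offset: O(n^2);
    # the single-character version fails in O(n).
    print(f"{pattern!r}: {time.perf_counter() - start:.3f}s")
```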
After going from O(n^2) to O(n), the 15MB mpich build log is parsed
in `1.288s`, so about 200x faster.
Just to be sure I've also updated `^CMake Error.*:` to `^CMake Error`,
so that it does not match with all the possible `:`'s in the line.
Another option is to use `.*?` there to make it stop scanning as soon
as possible, but a line that starts with `CMake Error` and has no
colon at all is hardly going to be a false positive...
Once PR #26659 is merged, boost@1.54.0 will be available to build as
well.
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>