As of #18205, all packages must be picklable in order to be installed by
Spack.
This adds a test to check that each package can be pickled. If any
package fails to pickle, the test keeps going and collects the names
of all failing packages; it then takes the first failure and attempts
to re-pickle it, generating the full stack trace for the failed pickle
attempt.
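A minimal sketch of that approach (the `packages` iterable is a stand-in for however the test enumerates Spack packages, not the test's actual fixture):

    import pickle

    def check_all_packages_pickle(packages):
        # "packages" yields (name, package) pairs; collect every failure
        # instead of stopping at the first one.
        failures = []
        for name, pkg in packages:
            try:
                pickle.dumps(pkg)
            except Exception:
                failures.append((name, pkg))
        if failures:
            # Re-pickle the first failing package so the full stack trace
            # of the pickling error is reported.
            name, pkg = failures[0]
            pickle.dumps(pkg)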
Spack creates a separate process to do package installation. Different
operating systems and Python versions use different methods to create
it, but up until Python 3.8 both Linux and macOS used "fork" (which
duplicates process memory, the file descriptor table, etc.).
Python >= 3.8 on macOS prefers creating an entirely new process
(referred to as the "spawn" start method) because "fork" was found to
cause issues; in other words, "spawn" is the default start method used
by multiprocessing.Process there. Spack was dependent on the particular
behavior of fork to replicate process memory and transmit file
descriptors.
This PR refactors the Spack internals to support starting a child
process with the "spawn" method. To achieve this, it makes the
following changes:
- ensure that the package repository and other global state are
transmitted to the child process
- ensure that file descriptors are transmitted to the child process in
a way that works with multiprocessing and spawn
- make all the state needed for the build process and tests picklable
(package, stage, etc.)
- move a number of locally-defined functions into global scope so that
they can be pickled (see the sketch after this list)
- rework tests where needed to avoid using local functions
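A minimal, standalone illustration of what "spawn" requires (not Spack's actual code): the child's entry point must be defined at module scope, and everything passed to it must be picklable, because the arguments are serialized and sent to a brand-new interpreter.

    import multiprocessing

    def _child(message):
        # Runs in a freshly started interpreter; "message" arrived via pickle.
        print("child received:", message)

    if __name__ == '__main__':
        ctx = multiprocessing.get_context('spawn')  # macOS default on Python >= 3.8
        proc = ctx.Process(target=_child, args=("hello",))
        proc.start()
        proc.join()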
This PR also reworks the sbang tests to work on macOS, where temporary
directories are deeper than the Linux sbang limit. We make the limit
platform-dependent (macOS supports 512-character shebangs).
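A minimal sketch of a platform-dependent limit (the constant name is hypothetical; 127 is the historical Linux kernel shebang limit):

    import sys

    # hypothetical constant, not Spack's actual variable name
    shebang_limit = 512 if sys.platform == 'darwin' else 127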
See: #14102
In compiler bootstrapping pipelines, we add an artificial dependency
between jobs for packages to be built with a bootstrapped compiler
and the job building the compiler. To find the right bootstrapped
compiler for each spec, we compared not only the compiler spec to
that required by the package spec, but also the architectures of
the compiler and package spec.
But this prevented us from finding the bootstrapped compiler for a
spec in cases where the compiler's architecture wasn't exactly the
same as the spec's. For example, gcc@4.8.5 might have bootstrapped a
compiler targeting haswell, while the spec targeted broadwell. By
comparing the architecture families instead of the exact architectures,
we know that we can build zlib for broadwell with the gcc built for
haswell.
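A hedged illustration using the archspec library that Spack vendors (the actual call sites in the pipeline code differ):

    import archspec.cpu as cpu

    haswell = cpu.TARGETS['haswell']
    broadwell = cpu.TARGETS['broadwell']

    # The exact microarchitectures differ, but both belong to the x86_64
    # family, so a compiler bootstrapped on haswell can still build for
    # broadwell.
    print(haswell == broadwell)                # False
    print(haswell.family == broadwell.family)  # True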
* py-json-get: new package at 1.1.1
* r-bigalgebra: new package at 0.8.4
* r-bigalgebra: new package at 0.8.4 with corrections
* Added an additional change to tarball and dependencies
* Removed accidentally added file
* Added tarball that uses mirror and removed redundant dependencies
* Fixed version and added dependency.
* Updated checksum
* Fixed urls
* Added list_url
Co-authored-by: las_djorton <las_djorton@build.las.iastate.edu>
* Add CUDA support to superlu-dist
* Use spec['cuda'].libs.directories[0] instead of spec['cuda'].prefix.lib
  so it works for both lib and lib64.
  The suggested
      args.append('-DTPL_CUDA_LIBRARIES=' + spec['cuda'].libs.ld_flags)
  did not work because it does not link against cuBLAS.
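A minimal sketch of the resulting logic (not the package's exact code; the flag value shown is illustrative):

    def cmake_args(self):
        args = []
        if '+cuda' in self.spec:
            # directories[0] works whether CUDA's libraries live in lib or
            # lib64, and cuBLAS is linked explicitly.
            cuda_libdir = self.spec['cuda'].libs.directories[0]
            args.append('-DTPL_CUDA_LIBRARIES=-L{0} -lcublas -lcudart'
                        .format(cuda_libdir))
        return args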
Currently, full JSON output is the only machine-readable option for `spack find`
in an environment.
`spack find --format` is also designed to be machine-readable, but we print extra
headers in environments.
- [x] don't print headers in `spack find` output when in an environment
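For example (the environment name and output here are illustrative):

    $ spack -e myenv find --format '{name}@{version}'
    zlib@1.2.11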
* No version of yaml-cpp in Spack can build shared AND
static libraries at the same time, so drop the "static"
variant and let "shared" handle that alone.
In other words, no version handles the
BUILD_STATIC_LIBS flag.
* The flag for building shared libraries changed from
BUILD_SHARED_LIBS to YAML_BUILD_SHARED_LIBS at some
point, so just pass both flags (see the sketch below).
* Use the newer define_from_variant.
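A minimal sketch of the resulting cmake_args (not necessarily the package's exact code):

    def cmake_args(self):
        # Older and newer yaml-cpp spell the option differently; passing both
        # is harmless, and both are driven by the "shared" variant.
        return [
            self.define_from_variant('BUILD_SHARED_LIBS', 'shared'),
            self.define_from_variant('YAML_BUILD_SHARED_LIBS', 'shared'),
        ]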
* [py-cuml] created template
* [py-cuml] setup phases and added build_directory
* [py-cuml] added dependencies
* [py-cuml] depends on libcumlprims
* [py-cuml] requiring multigpu version
* [py-cuml] figuring out the best way to get concretization to happen cleanly
* [py-cuml] removed singlegpu variant from libcuml
* [py-cuml] depends on py-cudf
* [py-cuml] depends on cupy
* [py-cuml] fixed typo
* [py-cuml] depends on py-scipy
* [py-cuml] depends on py-treelite
* [py-cuml] py-treelite is now a variant of treelite
* [py-cuml] depends on joblib
* [py-cuml] depends on py-scikit-learn
* [py-cuml] flake8
* [py-cuml] added homepage and description. removed fixmes
* [py-cuml] updated checksum
* Enabling build of the v1.9.x development branch.
* v1.8.1 is the preferred (stable) version.
* Fixing code style
Co-authored-by: Filippo Spiga <fspiga@nvidia.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [podio] put python dir in python path
* Update var/spack/repos/builtin/packages/podio/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When invoking "buildcache list" multiple times, the command was
reporting no specs in the cache the second time around. The
presence of an up-to-date index was causing the internal
representation to be left uninitialized.
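A generic sketch of the bug pattern with hypothetical names (not Spack's actual code):

    class BinaryIndex:
        def __init__(self):
            self._specs = None               # in-memory representation

        def _index_is_up_to_date(self):
            return True                      # pretend the on-disk index is fresh

        def update(self):
            if self._index_is_up_to_date():
                return                       # bug: self._specs is never populated
            self._specs = self._read_index_from_disk()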