Add a `build_type` variant, which allows building optimized compilers,
as well as target libraries (libstdc++ and friends).
The default is `build_type=RelWithDebInfo`, which corresponds to GCC's
default of `-O2 -g`.
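For illustration, a hedged sketch of how such a variant might be declared in the package's `package.py` (the exact set of allowed values is an assumption, mirroring CMake-style build types):
```python
# Sketch only: value names assumed to mirror CMake-style build types.
variant(
    "build_type",
    default="RelWithDebInfo",
    values=("Release", "RelWithDebInfo", "Debug"),
    description="Optimization and debug flags for the compiler and target libraries",
)
```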
When building with `+bootstrap %gcc`, also add Spack's arch-specific
flags, using the common denominator between the host GCC and the new GCC.
This is done by generating a `config/spack.mk` file in the package's
`patch()` method, which looks as follows:
```make
BOOT_CFLAGS := $(filter-out -O% -g%, $(BOOT_CFLAGS)) -O2 -g -march=znver2 -mtune=znver2
CFLAGS_FOR_TARGET := $(filter-out -O% -g%, $(CFLAGS_FOR_TARGET)) -O2 -g -march=znver2 -mtune=znver2
CXXFLAGS_FOR_TARGET := $(filter-out -O% -g%, $(CXXFLAGS_FOR_TARGET)) -O2 -g -march=znver2 -mtune=znver2
```
The oneapi and dpcpp compilers are essentially the same, except for which
binary is used for CXX. Spack will detect them as a "mixed toolchain" and
not inject compiler optimization flags. That injection will be needed once
archspec has entries for the oneapi and dpcpp compilers. This PR detects
when dpcpp and oneapi are in the toolchain list and explicitly sets
`is_mixed_toolchain` to `False`.
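A minimal sketch of the idea, assuming it lives in the oneapi compiler class (class placement and wording are assumptions, not the exact change):
```python
from spack.compiler import Compiler


class Oneapi(Compiler):
    @property
    def is_mixed_toolchain(self):
        # dpcpp and oneapi differ only in the CXX binary, so never report
        # the pair as a mixed toolchain; this keeps optimization-flag
        # injection possible once archspec knows these compilers.
        return False
```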
* [py-openslide-python] added version 1.1.2 and set the max py-setuptools version for 1.1.1
* [py-openslide-python]
- setuptools is required for all possible newer versions
- the python dependency is of type ('build', 'run')
* [py-openslide-python] use the `pil` provider
* Add versions 3.0 and 3.1 and preliminary OpenMP support
* Fix flag handler missing the spec variable
* Use `self.compiler.openmp_flag` instead of `-fopenmp` (see the sketch after this list)
* Fix whitespace
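A hedged sketch of such a flag handler inside the package class (the `+openmp` variant name here is an assumption):
```python
def flag_handler(self, name, flags):
    # Ask the active compiler for its OpenMP flag instead of hard-coding
    # -fopenmp, which not every compiler accepts.
    if name in ("cflags", "cxxflags") and self.spec.satisfies("+openmp"):
        flags.append(self.compiler.openmp_flag)
    # (flags, None, None) means: inject the flags via the compiler wrappers.
    return (flags, None, None)
```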
Fixes qt configure errors with external openssl on older systems (rhel7)
See
efc02f9cc3/dist/changes-5.15.0 (L346)
This means that from now on, `qt ^openssl@1.0` gets you `qt@5.15.4 ~ssl`:
clingo chooses latest qt version **but disables ssl support**.
Error messages for the clingo concretizer have proven challenging. The current messages are incredibly vague and often don't help users at all. Unsat cores in clingo are not guaranteed to be minimal, and lead to cores that are either not useful or need to be post-processed for hours to reach a minimal core.
Following up on an idea from a Slack conversation with kwryankrattiger, this PR takes a new approach. We eliminate most integrity constraints and minima/maxima on choice rules in clingo, and instead force invalid states to imply an error predicate. The error predicate can include context on the cause of the error (package, version, etc.). These error predicates are then heavily optimized against, to ensure that we do not include error facts in the solution when a solution without them could be generated. When post-processing the clingo solution to construct specs, any error facts cause the program to raise an exception. This leads to much more legible error messages.
Each error predicate includes a priority and an error message; the message is formatted with the predicate's remaining arguments. The priority is used to ensure that when clingo has a choice of which rules to violate, it chooses the one that will be most informative to the user.
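For illustration, a hedged Python sketch of that post-processing step; the tuple shape of the extracted error facts is an assumption, not the exact Spack code:
```python
def raise_if_errors(error_facts):
    """Raise a readable exception if the model contains error facts."""
    if not error_facts:
        return
    # Each fact is (priority, message, *format_args); show the most
    # informative (highest-priority) messages first.
    error_facts.sort(key=lambda fact: fact[0], reverse=True)
    messages = (msg.format(*args) for _priority, msg, *args in error_facts)
    raise RuntimeError("concretization failed:\n" + "\n".join(messages))
```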
Performance:
"fresh" concretizations appear to suffer a ~20% performance penalty under this branch, while "reuse" concretizations see a speedup of around 33%.
Possible optimizations if users still see unhelpful messages:
There are currently 3 levels of priority of the error messages. Additional priorities are possible, and can allow us finer granularity to ensure more informative error messages are provided in lieu of less informative ones.
Future work:
Improve tests to ensure that every possible rule implying an error message is exercised.
A non-existent upstream should not be fatal: it may simply not be
deployed yet. In the meantime, it should not prevent the user from
rebuilding anything they need.
A warning is still emitted, so the user can decide whether this is acceptable.
* Fix for xtensor-xsimd
* Add sha256 for all new releases
* renamed ufcx package
* Update sha for ffcx
* fixed hashes and modified fenics-dolfinx to depend on ufcx
* cleaned and fixed dependency types
* use `spec.satisfies` in `cmake_args` (pattern sketched after this list)
* bumped to ufcx@0.4.1
* address PR comments
* fix hashes
* update parmetis in cmake_args to reflect default setting
* update versions
* Add dependency fix
* bump basix to 0.4.2 and address PR comments
* Versioning fixes
* Use xtensor-0.24: and loosen pybind11
* Add conflicts for partitioners
* Updates on partitioners
* use define_from_variant
* Tidy up some dependencies
* Work on multi-variants for graph partitioners
* Fix KaHIP issue.
KaHIP changed the name of its library from 'interface' to 'kahip'. Pin earlier versions of DOLFINx to earlier versions of KaHIP for proper detection.
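A hedged sketch of the `spec.satisfies`/`define_from_variant` pattern used in `cmake_args` (variant and CMake option names are illustrative, not the exact DOLFINx ones):
```python
def cmake_args(self):
    args = [
        # define_from_variant maps a boolean variant to -DOPT=ON/OFF
        self.define_from_variant("DOLFINX_ENABLE_KAHIP", "kahip"),
    ]
    # spec.satisfies covers logic beyond a one-to-one variant mapping
    if self.spec.satisfies("+parmetis"):
        args.append(self.define("DOLFINX_ENABLE_PARMETIS", True))
    return args
```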
Co-authored-by: Chris Richardson <chris@bpi.cam.ac.uk>
Co-authored-by: Garth N. Wells <gnw20@cam.ac.uk>
Fixes missing chgrp on symlinks in package installations, and errors on
symlinks referencing non-existent or non-writable locations.
Note: `os.chown(..., follow_symlinks=False)` is Python 3 only, but
`os.lchown` exists in both Python versions.
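A minimal sketch of the portable call (the helper name is hypothetical):
```python
import os


def chgrp_symlink(path, gid):
    # os.lchown changes the link itself rather than its target and is
    # available on both Python 2 and 3; -1 leaves the owner unchanged.
    os.lchown(path, -1, gid)
```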
* Change license dir from hard-coded to a configurable item (see the sketch below)
* Change the config item to be a string, not an array
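For illustration, a hedged sketch of reading the new string-valued entry (the key name `config:license_dir` is an assumption):
```python
import spack.config

# Returns None if unset; the real code would fall back to Spack's
# traditional default license directory.
license_dir = spack.config.get("config:license_dir")
```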
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Trying to compute `dag_hash()` or `package_hash()` on a concrete spec that doesn't have
a `_package_hash` attribute would attempt to recompute the package hash.
This most commonly manifests as a failed lookup of a namespace if you attempt to uninstall
or compute the hashes of packages in external repositories that aren't registered, e.g.:
```console
> spack spec --json c/htno
==> Error: Unknown namespace: myrepo
```
While it wouldn't change the already-assigned `dag_hash` value, this behavior is
incorrect, since the package file for a previously concrete spec:
1. might have changed since concretization,
2. might not exist anymore, or
3. might just not be findable by Spack.
This PR ensures that the package hash can't be computed on older concrete specs. Instead
of calling `package_hash()` from within `to_node_dict()`, we now check for the `_package_hash`
attribute and only add the `package_hash` to the spec record if it's there.
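A hedged sketch of that check as it might appear in `to_node_dict()` (surrounding serialization code elided; names as described above):
```python
# Only record a package hash that was assigned at concretization time;
# never recompute it from a package file that may have changed since.
if getattr(spec, "_package_hash", None):
    node["package_hash"] = spec._package_hash
```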
This PR also handles the tricky semantics of computing `package_hash()` at concretization
time. We have to compute it *before* marking the spec concrete so that `to_node_dict` can
use it. But this means that the logic for `package_hash()` can't rely on `spec.concrete`,
as it is called *during* concretization. Instead of checking for concreteness, `package_hash()`
now checks `_patches_assigned()` to determine whether it should add them to the package
hash.
- [x] Add an assert to `package_hash()` so it can't be called on specs for which it
would be wrong.
- [x] Add an `_assign_hash()` method to handle tricky semantics of `package_hash`
and `dag_hash`.
- [x] Rework concretization to call `_assign_hash()` before and after marking specs
concrete.
- [x] Rework content hash part of package hash to check for `_patches_assigned()`
instead of `spec.concrete`.
- [x] regression test
* [py-tensorflow-hub] applied patch for newer versions of zlib
* [py-tensorflow-hub] patch also applies to 0.11.0
* [py-tensorflow-hub] Audit fix
1. the patch URL in package py-tensorflow-hub must end with `?full_index=1` (see the sketch after this list)
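A hedged sketch of the directive form this audit enforces (URL, checksum, and version range are placeholders):
```python
# Placeholder URL and sha256: GitHub patch URLs must carry ?full_index=1
# so the generated diff, and hence its checksum, stays stable over time.
patch(
    "https://github.com/tensorflow/hub/commit/<commit>.patch?full_index=1",
    sha256="<sha256-of-the-patch>",
    when="@0.11.0:",
)
```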
* py-pytecplot: new package
* fix copyright year
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* use one variant for all extras
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Newer versions of gobject-introspection require Meson to build. Convert
the package into a hybrid one that still supports older versions using
Autotools.
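A hedged sketch of such a hybrid package, assuming Spack's multi-build-system support; the version boundary is illustrative:
```python
from spack.package import *


class GobjectIntrospection(MesonPackage, AutotoolsPackage):
    """Newer releases build with Meson; older ones still use Autotools."""

    build_system(
        conditional("meson", when="@1.61:"),      # boundary illustrative
        conditional("autotools", when="@:1.60"),
        default="meson",
    )
```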