Add a config option to strip `-Werror*` or `-Werror=*` from compile lines everywhere.
```yaml
config:
keep_werror: false
```
By default, we strip all `-Werror` arguments out of compile lines, to avoid unwanted
failures when upgrading compilers. You can re-enable `-Werror` in your builds if
you really want to, with either:
```yaml
config:
keep_werror: all
```
or to keep *just* specific `-Werror=XXX` args:
```yaml
config:
keep_werror: specific
```
This should make swapping in newer versions of compilers much smoother when
maintainers have decided to enable `-Werror` by default.
Parse error information is kept for specs, but it doesn't seem like we propagate it
to the user when we encounter an error. This fixes that.
e.g., for this error in a package:
```python
depends_on("python@:3.8", when="0.900:")
```
Before, with no context and no clue that it's even from a particular spec:
```
==> Error: Unexpected token: ':'
```
With this PR:
```
==> Error: Unexpected token: ':'
Encountered when parsing spec:
0.900:
^
```
* Introduce concretizer:unify option to replace spack:concretization
* Deprecate concretization
* Make spack:concretization overrule concretize:unify for now
* Add environment update logic to move from spack:concretization to spack:concretizer:unify
* Migrate spack:concretization to spack:concretizer:unify in all locations
* For new environments make concretizer:unify explicit, so that defaults can be changed in 0.19
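For reference, a minimal sketch of the new option in an environment's `spack.yaml`, assuming `true`/`false` correspond to the old `concretization: together`/`separately` values:
```yaml
spack:
  specs: [perl, zlib]
  concretizer:
    unify: true   # replaces the deprecated `concretization: together`
```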
The oneapi and dpcpp compilers are essentially the same except for which
binary is used for CXX. Spack will detect them as "mixed toolchain" and
not inject compiler optimization flags. This will be needed once
archspec has entries for the oneapi and dpcpp compilers. This PR detects
when dpcpp and oneapi are in the toolchains list and explicitly sets
`is_mixed_toolchain` to `False`.
Error messages for the clingo concretizer have proven challenging. The current messages are incredibly vague and often don't help users at all. Unsat cores in clingo are not guaranteed to be minimal, and lead to cores that are either not useful or need to be post-processed for hours to reach a minimal core.
Following up on an idea from a Slack conversation with kwryankrattiger, this PR takes a new approach. We eliminate most integrity constraints and minima/maxima on choice rules in clingo, and instead force invalid states to imply an error predicate. The error predicate can include context on the cause of the error (Package, Version, etc.). These error predicates are then heavily optimized against, to ensure that we do not include error facts in the solution when a solution with no error facts could be generated. When post-processing the clingo solution to construct specs, any error facts cause the program to raise an exception. This leads to much more legible error messages. Each error predicate includes a priority and an error message; the message is a format string filled in with the predicate's remaining arguments. The priority is used to ensure that when clingo has a choice of which rules to violate, it chooses the one that will be most informative to the user.
Performance:
"fresh" concretizations appear to suffer a ~20% performance penalty under this branch, while "reuse" concretizations see a speedup of around 33%.
Possible optimizations if users still see unhelpful messages:
There are currently 3 levels of priority of the error messages. Additional priorities are possible, and can allow us finer granularity to ensure more informative error messages are provided in lieu of less informative ones.
Future work:
Improve tests to ensure that every possible rule implying an error message is exercised
A non-existent upstream should not be fatal: it could simply mean it is
not deployed yet. In the meantime, it should not block the user from
rebuilding anything they need.
A warning is still emitted, to let the user decide if this is ok or not.
Fixes missing chgrp on symlinks in package installations, and errors on
symlinks referencing non-existent or non-writable locations.
Note: `os.chown(.., follow_symlinks=False)` is python3 only, but
`os.lchown` exists in both versions.
* Change license dir from hard-coded to a configurable item
* Change config item to be a string not an array
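A sketch of the resulting configuration, assuming the key lands under the top-level `config` section (the path shown is illustrative):
```yaml
config:
  license_dir: /opt/spack/licenses
```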
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Trying to compute `dag_hash()` or `package_hash()` on a concrete spec that doesn't have
a `_package_hash` attribute would attempt to recompute the package hash.
This most commonly manifests as a failed lookup of a namespace if you attempt to uninstall
or compute the hashes of packages in external repositories that aren't registered, e.g.:
```console
> spack spec --json c/htno
==> Error: Unknown namespace: myrepo
```
While it wouldn't change the already-assigned `dag_hash` value, this behavior is
incorrect, since the package file for a previously concrete spec:
1. might have changed since concretization,
2. might not exist anymore, or
3. might just not be findable by Spack.
This PR ensures that the package hash can't be computed on older concrete specs. Instead
of calling `package_hash()` from within `to_node_dict()`, we now check for the `_package_hash`
attribute and only add the package_hash to the spec record if it's there.
This PR also handles the tricky semantics of computing `package_hash()` at concretization
time. We have to compute it *before* marking the spec concrete so that `to_node_dict` can
use it. But this means that the logic for `package_hash()` can't rely on `spec.concrete`,
as it is called *during* concretization. Instead of checking for concreteness, `package_hash()`
now checks `_patches_assigned()` to determine whether it should add them to the package
hash.
- [x] Add an assert to `package_hash()` so it can't be called on specs for which it
would be wrong.
- [x] Add an `_assign_hash()` method to handle tricky semantics of `package_hash`
and `dag_hash`.
- [x] Rework concretization to call `_assign_hash()` before and after marking specs
concrete.
- [x] Rework content hash part of package hash to check for `_patches_assigned()`
instead of `spec.concrete`.
- [x] regression test
Previously we sorted by hash values for `spack graph`, but changing hashes can make the
test brittle and the node order seem nondeterministic to users.
- [x] Sort nodes in `spack graph` by the default edge order, which takes into account
parent and child names as well as dependency types.
- [x] Update ASCII test output for new order.
The dependency check currently checks whether there are only build
dependencies left for a particular package. However, the database also
contains uninstalled packages, which can cause the check to fail.
For instance, with `bison` and `flex` having already been uninstalled,
`m4` will have the following dependents:
```
bison ('build', 'run')--> m4
flex ('build',)--> m4
libnl ('build',)--> m4
```
`bison` and `flex` should be ignored in this case because they are not
installed anymore.
Fixes #30673
#24556 merged in support for extracting .zip files via Python's ZipFile.
However, as per #30200, ZipFile does not preserve file permissions of
the extracted contents. This PR returns to using the `unzip`
executable on non-Windows systems (as was the case before #24556)
and now uses `tar` on Windows to extract .zip files.
We previously had checks in `directory_layout` to check for build-dependency
conflicts when we weren't storing build dependencies. We don't need
those anymore; we can just rely on the DAG hash now that it includes everything
we know about each spec.
- [x] Remove vestigial code for checking installed spec against concrete spec
in `ensure_installed()`
- [x] Remove `SpecHashCollisionError` -- if specs have the same hash now, they're
the same as far as `DirectoryLayout` should be concerned.
- [x] Convert spec comparison to `dag_hash()` comparison when adding extensions.
The database now stores full hashes, so we need to adjust the criteria we use to
determine if something can be uninstalled. Specifically, it's ok to uninstall things that
have remaining build-only dependents.
With the original DAG hash, we did not store build dependencies in the database, but
with the full DAG hash, we do. Previously, we'd never tell the concretizer about build
dependencies of things used by hash, because we never had them. Now, we have to avoid
telling the concretizer about them, or they'll unnecessarily constrain build
dependencies for new concretizations.
- [x] Make database track all dependencies included in the `dag_hash`
- [x] Modify spec_clauses so that build dependency information is optional
and off by default.
- [x] `spack diff` asks `spec_clauses` for build dependencies for completeness
- [x] Modify `concretize.lp` so that reuse optimization doesn't affect fresh
installations.
- [x] Modify concretizer setup so that it does *not* prioritize installed versions
over package versions. We don't need this with reuse, so they're low priority.
- [x] Fix `test_installed_deps` for full hash and new concretizer (does not work
for old concretizer with full hash -- leave this for later if we need it)
- [x] Move `test_installed_deps` mock packages to `builtin.mock` for easier debugging
with `spack -m`.
- [x] Fix `test_reuse_installed_packages_when_package_def_changes` for full hash
- [x] update test to use `build_hash` instead of `dag_hash`, as we're testing for
graph structure, and specifically NOT testing for package changes.
- [x] make hash descriptors callable on specs to simplify syntax for invoking them
- [x] make `Spec.spec_hash()` public
This removes all but one usage of runtime hash. The runtime hash was being used to write
historical lockfiles for tests, but we don't need it for that; we can just save those
lockfiles.
- [x] add legacy lockfiles for v1, v2, v3
- [x] fix bugs with v1 lockfile tests (the dummy lockfile we were writing was not actually
a v1 lockfile because it used the new spec file format).
- [x] remove all but one runtime_hash usage -- that one needs a small rework of the
concretizer to really fix, as it's about separate concretization of build
dependencies.
- [x] Document the history of the lockfile format in `environment/__init__.py`
Some test cases had to be modified in a kludgy way so that abstract specs made
concrete would have versions on them. We shouldn't *need* to do this, as the
only reason we care is because the content hash has to be able to get an archive
for a version.
This modifies the content hash so that it can be called on abstract specs,
including only relevant content.
This does NOT add a partial content hash to the DAG hash, as we do not really
want that -- we don't need in-memory spec hashes to need to load package files.
It just makes `Package.content_hash()` less prickly and tests easier to
understand.
`spack monitor` expects a field called `spec_full_hash`, so we shouldn't change that.
Instead, we can pass a `dag_hash` (which is now the full hash) but not change the field
name.
`hashes_final` was used to indicate when a spec was concrete but possibly lacked
`full_hash` or `build_hash` fields. This was only necessary because older Spacks
didn't generate them, and we want to avoid recomputing them, as we likely do not
have the same package files as existed at concretization time.
Now, we don't need to do that -- there is only the DAG hash and specs are either
concrete and have a `dag_hash`, or not concrete and have no `dag_hash`. There's
no middle ground.
Without some enforcement of spec ordering, python 2 produced
different results in the affected test than did python 3. This
change makes the arbitrary but reproducible decision to sort
the specs by their lockfile key alphabetically.
The full hash now appears twice in the spec dict, so replacing just
the value would replace it under both "hash" and "full_hash". Only replace
the one that appears after "full_hash".
I'm actually not sure what purpose this test served, so maybe it
could be removed, as it may be testing some distinction between
full and dag hash which no longer exists.
For a long time, Spack has used a coarser hash to identify packages
than it likely should. Packages are identified by `dag_hash()`, which
includes only link and run dependencies. Build dependencies are
stripped before hashing, and we have not included hashes of build
artifacts or the `package.py` files used to build. This means the
DAG hash actually doesn't represent all the things Spack can build,
and it reduces reproducibility.
We did this because, in the early days, users were (rightly) annoyed
when a new version of CMake, autotools, or some other build dependency
would necessitate a rebuild of their entire stack. Coarsening the hash
avoided this issue and enabled a modicum of stability when only reusing
packages by hash match.
Now that we have `--reuse`, we don't need to be so careful. Users can
avoid unnecessary rebuilds much more easily, and we can add more
provenance to the spec without worrying that frequent hash changes
will cause too many rebuilds.
This commit starts the refactor with the following major change:
- [x] Make `Spec.dag_hash()` include build, run, and link
dependencies and the package hash (it is now equivalent to
`full_hash()`).
It also adds a couple of bugfixes for problems discovered during
the switch:
- [x] Don't add a `package_hash()` in `to_node_dict()` unless
the spec is concrete (fixes breaks on abstract specs)
- [x] Don't add source ids to the package hash for packages without
a known fetch strategy (many mock packages are like this)
- [x] Change how `Spec.patches` is memoized. Using
`llnl.util.lang.memoized` on `Spec` objects causes specs to
be stored in a `dict`, which means they need a hash. But,
`dag_hash()` now includes patch `sha256`'s via the package
hash, which can lead to infinite recursion.
`spack pkg list` tests were broken by #29593 for cases when your `builtin.mock` repo
still has stale backup files (or, really, stale directories) sitting around. This
happens if you switch branches a lot. In this case, things like this were causing
erroneous packages in the mock listing:
```
var/spack/repos/builtin.mock/packages/
foo/
package.py~
```
- [x] make `list_packages` consider only directories with one-deep `package.py` files.
Reworking lua to allow easier substitution of the base lua implementation.
Also adding in a maintained version of luajit and re-factoring the entire stack
to use a custom build-system to centralize functionality like environment
variable management and luarocks installation.
The `lua-lang` virtual is now versioned so that a package that requires
Lua 5.1 semantics can get any lua, but one that requires 5.2 will only
get upstream lua.
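For illustration, dependent packages' directives might look roughly like this (a sketch inside hypothetical `package.py` files; the exact spec strings are assumptions):
```python
# in a package that requires only Lua 5.1 semantics (any lua satisfies this):
depends_on("lua-lang@5.1")

# in a package that requires 5.2 semantics (only upstream lua satisfies this):
depends_on("lua-lang@5.2:")
```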
The luaposix package requires lua-bit32, but only when built with a
lua conforming to version 5.1. This adds the package, and the
dependencies, but exposed a problem with luarocks dependency
detection. Since we're installing each package in its own "tree" and
there's no environment variable to list extra trees, spack now
generates a luarocks config file that lists all the trees of all the
dependencies, and references it by setting `LUAROCKS_CONFIG`
in the build environment of every LuaPackage. This allows luarocks
to find the Spack-installed dependencies correctly rather than
trying (and failing) to download them.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Tom Scogland <tscogland@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Some of our `git` tests still fail when `init.defaultBranch` is set to something other
than `master`.
- [x] get rid of all hard-coded `master` refs
- [x] Use `'default'` to key tests that use the default branch
When running on Windows, Spack may generate files in the stage/install
prefixes that do not have write permissions, which prevents the
removal of those directories (e.g. when cleaning stages or uninstalling).
There should be a refactoring to avoid this in the first place, but that
is assumed to be longer term, so the temporary fix is to make such files
writable if they are not. This PR:
* Automatically handles these permissions errors when uninstalling
packages from the Spack root (makes them writable)
* Updates similar already-existing logic when removing Spack-managed
stage directories (the error-handling was assuming all errors were
permissions errors and was therefore handling other errors
inappropriately)
Note: these permissions issues only appear on Windows so this logic is
only applied there (permissions are not modified for this purpose on
Linux etc.).
This also adds special handling for a case where calling `isdir`
on an `os.DirEntry` object would fail for improperly-created symlinks
(e.g. on Windows, using `os.symlink` without `target_is_directory=True`).
Note this specific issue only came up when enabling link_tree tests
(specifically `source_merge_visitor_cant_be_cyclical`).
* create function for translating compiler names on specs/compiler entries in manifest
* add tests for translating compiler names on spec/compiler entries
* use higher-level function in test and add comment to prefer testing via higher-level function
* opensuse clingo check should not fail on account of this PR, but I cannot get it to pass by restarting via the CI UI
* Force GCC to always provide a C++14 flag
Updated gnu logic so that the c++14 flag for g++ is always propagated.
This fixes issues with build systems that error out if passed an empty
string for a flag.
Engaging in the best kind of software engineering by updating the unit
test to pass with the value it is now passed. This should better match
the expected flag for g++ compiling with the C++14 standard
This ensures that multiple spack instances called from `make` will respect the maximum number of jobs in the POSIX jobserver across packages.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* use the init.defaultBranch name, not master
* make tcl and modules/common independent
Both used to use not just the same directory, but the same *file* for
their outputs. In parallel this can cause problems, but it can also
accidentally allow expected failures to pass if the file is left around
by mistake.
* use a non-global misc_cache in tests
* make pkg tests resilient to gitignore
* make source cache and module directories non-global
`make` solves a lot of headaches that would otherwise have to be implemented in Spack:
1. Parallelism over packages through multiple `spack install` processes
2. Orderly output of parallel package installs thanks to `make --output-sync=recurse` or `make -Orecurse` (works well in GNU Make 4.3; macOS is unfortunately on a 16-year-old 3.x version, but it's one `spack install gmake` away...)
3. Shared jobserver across packages, which means a single `-j` to rule them all, instead of manually finding a balance between `#spack install processes` & `#jobs per package` (See #30302).
This PR adds the `spack env depfile` command that generates a Makefile with DAG hashes as
targets, and DAG hashes of dependencies as prerequisites, and a command
along the lines of `spack install --only=packages /hash` to just install
a single package.
It exposes two convenient phony targets: `all`, `fetch-all`. The former installs the environment, the latter just fetches all sources. So one can either use `make all -j16` directly or run `make fetch-all -j16` on a login node and `make all -j16` on a compute node.
Example:
```yaml
spack:
specs: [perl]
view: false
```
running
```
$ spack -e . env depfile --make-target-prefix env | tee Makefile
```
generates
```Makefile
SPACK ?= spack
.PHONY: env/all env/fetch-all env/clean
env/all: env/env
env/fetch-all: env/fetch
env/env: env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww
@touch $@
env/fetch: env/.fetch/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.fetch/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.fetch/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.fetch/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.fetch/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.fetch/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.fetch/73t7ndb5w72hrat5hsax4caox2sgumzu env/.fetch/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.fetch/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.fetch/c4go4gxlcznh5p5nklpjm644epuh3pzc
@touch $@
env/dirs:
@mkdir -p env/.fetch env/.install
env/.fetch/%: | env/dirs
$(info Fetching $(SPEC))
$(SPACK) -e '/tmp/tmp.7PHPSIRACv' fetch $(SPACK_FETCH_FLAGS) /$(notdir $@) && touch $@
env/.install/%: env/.fetch/%
$(info Installing $(SPEC))
+$(SPACK) -e '/tmp/tmp.7PHPSIRACv' install $(SPACK_INSTALL_FLAGS) --only-concrete --only=package --no-add /$(notdir $@) && touch $@
# Set the human-readable spec for each target
env/%/cdqldivylyxocqymwnfzmzc5sx2zwvww: SPEC = perl@5.34.1%gcc@10.3.0+cpanm+shared+threads arch=linux-ubuntu20.04-zen2
env/%/gv5kin2xnn33uxyfte6k4a3bynhmtxze: SPEC = berkeley-db@18.1.40%gcc@10.3.0+cxx~docs+stl patches=b231fcc arch=linux-ubuntu20.04-zen2
env/%/cuymc7e5gupwyu7vza5d4vrbuslk277p: SPEC = bzip2@1.0.8%gcc@10.3.0~debug~pic+shared arch=linux-ubuntu20.04-zen2
env/%/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk: SPEC = diffutils@3.8%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws: SPEC = libiconv@1.16%gcc@10.3.0 libs=shared,static arch=linux-ubuntu20.04-zen2
env/%/yfz2agazed7ohevqvnrmm7jfkmsgwjao: SPEC = gdbm@1.19%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/73t7ndb5w72hrat5hsax4caox2sgumzu: SPEC = readline@8.1%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/trvdyncxzfozxofpm3cwgq4vecpxixzs: SPEC = ncurses@6.2%gcc@10.3.0~symlinks+termlib abi=none arch=linux-ubuntu20.04-zen2
env/%/sbzszb7v557ohyd6c2ekirx2t3ctxfxp: SPEC = pkgconf@1.8.0%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/c4go4gxlcznh5p5nklpjm644epuh3pzc: SPEC = zlib@1.2.12%gcc@10.3.0+optimize+pic+shared patches=0d38234 arch=linux-ubuntu20.04-zen2
# Install dependencies
env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww: env/.install/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.install/c4go4gxlcznh5p5nklpjm644epuh3pzc
env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p: env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk
env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk: env/.install/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao: env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu
env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu: env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs
env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs: env/.install/sbzszb7v557ohyd6c2ekirx2t3ctxfxp
env/clean:
rm -f -- env/env env/fetch env/.fetch/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.fetch/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.fetch/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.fetch/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.fetch/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.fetch/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.fetch/73t7ndb5w72hrat5hsax4caox2sgumzu env/.fetch/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.fetch/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.fetch/c4go4gxlcznh5p5nklpjm644epuh3pzc env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.install/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.install/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.install/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.install/c4go4gxlcznh5p5nklpjm644epuh3pzc
```
Then with `make -O` you get very nice orderly output when packages are built in parallel:
```console
$ make -Orecurse -j16
spack -e . install --only-concrete --only=package /c4go4gxlcznh5p5nklpjm644epuh3pzc && touch c4go4gxlcznh5p5nklpjm644epuh3pzc
==> Installing zlib-1.2.12-c4go4gxlcznh5p5nklpjm644epuh3pzc
...
Fetch: 0.00s. Build: 0.88s. Total: 0.88s.
[+] /tmp/tmp.b1eTyAOe85/store/linux-ubuntu20.04-zen2/gcc-10.3.0/zlib-1.2.12-c4go4gxlcznh5p5nklpjm644epuh3pzc
spack -e . install --only-concrete --only=package /sbzszb7v557ohyd6c2ekirx2t3ctxfxp && touch sbzszb7v557ohyd6c2ekirx2t3ctxfxp
==> Installing pkgconf-1.8.0-sbzszb7v557ohyd6c2ekirx2t3ctxfxp
...
Fetch: 0.00s. Build: 3.96s. Total: 3.96s.
[+] /tmp/tmp.b1eTyAOe85/store/linux-ubuntu20.04-zen2/gcc-10.3.0/pkgconf-1.8.0-sbzszb7v557ohyd6c2ekirx2t3ctxfxp
```
For Perl, at least for me, using `make -j16` versus `spack -e . install -j16` speeds up the builds from 3m32.623s to 2m22.775s, as some configure scripts run in parallel.
Another nice feature is you can do Makefile "metaprogramming" and depend on packages built by Spack. This example fetches all sources (in parallel) first, prints a message, and only then builds packages (in parallel).
```Makefile
SPACK ?= spack
.PHONY: env
all: env
spack.lock: spack.yaml
$(SPACK) -e . concretize -f
env.mk: spack.lock
$(SPACK) -e . env depfile -o $@ --make-target-prefix spack
fetch: spack/fetch
@echo Fetched all packages && touch $@
env: fetch spack/env
@echo This executes after the environment has been installed
clean:
rm -rf spack/ env.mk spack.lock
ifeq (,$(filter clean,$(MAKECMDGOALS)))
include env.mk
endif
```
Added support for finding the OpenCV package via the find external
command. Included support for identifying variants based on available
shared libraries.
Added support for finding the OpenBLAS package via the find external
command.
Enabled packages to indicate, in the info message, that they can be
discovered via the find external command.
Updated the OpenCV and OpenBLAS packages to use the extensible search
mechanism for library extensions on multiple OS platforms.
Corrected how find externals works on Darwin for OpenCV and OpenBLAS
to accommodate that the version numbers are placed before the file
extension instead of after it, as on Linux.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This is an amended version of https://github.com/spack/spack/pull/24894 (reverted in https://github.com/spack/spack/pull/29603). https://github.com/spack/spack/pull/24894
broke all instances of `spack external find` (namely when it is invoked without arguments/options)
because it was mandating the presence of a file which most systems would not have.
This allows `spack external find` to proceed if that file is not present and adds tests for this.
- [x] Add a test which confirms that `spack external find` successfully reads a manifest file
if present in the default manifest path
--- Original commit message ---
Adds `spack external read-cray-manifest`, which reads a json file that describes a
set of package DAGs. The parsed results are stored directly in the database. A user
can see these installed specs with `spack find` (like any installed spec). The easiest
way to use them right now as dependencies is to run
`spack spec ... ^/hash-of-external-package`.
Changes include:
* `spack external read-cray-manifest --file <path/to/file>` will add all specs described
in the file to Spack's installation DB and will also install described compilers to the
compilers configuration (the expected format of the file is described in this PR as well, including examples of the file)
* Database records now may include an "origin" (the command added in this PR
registers the origin as "external-db"). In the future, it is assumed users may want
to be able to treat installs registered with this command differently (e.g. they may
want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec
is concrete
* I don't think the hashes of installed-and-concrete specs should change and this
was the easiest way to handle that
* also specs that are concrete preserve their `.normal` property when copied
(external specs may mention compilers that are not registered, and without this
change they would fail in `normalize` when calling `validate_or_raise`)
* it might be that this should only be the case if the spec was installed
- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do
something like "uninstall all packages added with `spack read-external-db`)
* This is now possible with `spack uninstall --all --origin=external-db` (this will
remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package
* ASP-based solver: discard unknown packages from reuse
This is an add-on to #28259 that covers the case of
a single package.py being removed from a repository,
rather than an entire custom repository being removed.
* Add unit test
CTest determines whether to enable tests using the BUILD_TESTING variable.
This should be used by projects to conditionally enable the compilation of tests.
Spack knows which packages have to run tests and can thus automatically define this variable.
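A sketch of how the variable might be passed, assuming the flag is derived from whether tests were requested (the `cmake_args` placement is illustrative, not necessarily where Spack defines it):
```python
class MyCMakePackage(CMakePackage):  # hypothetical package
    def cmake_args(self):
        # self.run_tests is True when tests were requested,
        # e.g. `spack install --test=root my-cmake-package`
        return [self.define("BUILD_TESTING", self.run_tests)]
```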
I tried to use --overwrite on nvhpc, but nvhpc's install size is 16GB. It seems
better to do os.rename in the same directory than to move the directory to
`/tmp`.
- [x] install --overwrite: use rename instead of tmpdir
- [x] use tempfile
fixes #28259
This commit discards specs from unknown namespaces from the
ones that can be "reused" during concretization. Previously
Spack would just error out when encountering them.
The parent thread in the process stdout redirection logic on Windows
was closing a file that was being read in child thread, which lead to
error-based termination of the reader thread. This updates the
interaction to avoid the error.
* ASP-based solver: allow configuring target selection
This commit adds a new "concretizer:targets" configuration
section, and two options under it.
- "concretizer:targets:granularity" allows switching from
considering only generic targets to consider all possible
microarchitectures.
- "concretizer:targets:host_compatible" instead controls
whether we can concretize for microarchitectures that
are incompatible with the current host.
* Add documentation
* Add unit-tests
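A minimal sketch of the new section (values assumed from the descriptions above):
```yaml
concretizer:
  targets:
    # consider all microarchitectures instead of only generic targets
    granularity: microarchitectures
    # disallow microarchitectures incompatible with the current host
    host_compatible: true
```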
* ASP-based solver: always consider version of installed packages
fixes#29201
Explicitly add facts for versions of installed software when
using the --reuse option, so that we can consider versions
that are not declared in package.py
The parser is already committing a crime of querying the database for
specs when it encounters a `/hash`. It's helpful, but unfortunately not
helpful when trying to install a specific spec in an environment by
hash. Therefore, consider the environment first, then the database.
This allows the following:
```console
$ spack -e . concretize
==> Starting concretization
==> Environment concretized in 0.27 seconds.
==> Concretized diffutils
- 7vangk4 diffutils@3.8%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
- hyb7ehx ^libiconv@1.16%gcc@10.3.0 libs=shared,static arch=linux-ubuntu20.04-zen2
$ spack -e . install /hyb7ehx
==> Installing libiconv-1.16-hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
...
==> libiconv: Successfully installed libiconv-1.16-hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
Fetch: 0.01s. Build: 17.54s. Total: 17.55s.
[+] /tmp/tmp.VpvYApofVm/store/linux-ubuntu20.04-zen2/gcc-10.3.0/libiconv-1.16-hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
```
Fix bug introduced in #30191. `Spec.installed` and `Spec.installed_upstream` should just return
`False` for abstract specs, as they can be called in that context.
- [x] `Spec.installed` returns `False` now instead of asserting that the `Spec`
is concrete.
- [x] `Spec.installed_upstream` returns `False` now instead of asserting that the `Spec`
is concrete.
- [x] `Spec.installed_upstream` no longer caches its result, as install status seems
like a bad thing to cache -- it can easily be invalidated. Calling code should
use transactions if there are performance issues, as with other places in Spack.
- [x] add tests for `Spec.installed` and `Spec.installed_upstream`
This PR moves the `installed` and `installed_upstream` properties from `PackageBase` to `Spec` and is a step towards being able to reuse specs for which we don't have a `package.py` available. It _should_ be sufficient to complete the concretization step and see the spec in the concretized DAG.
To fully reuse a spec without a package.py though we need a way to serialize enough data to reconstruct the results of calls to:
- `Spec.libs`, `Spec.headers` and `Spec.command`
- `Package.setup_dependent_*_environment` and `Package.setup_run_environment`
- [x] Add stub methods to packages with warnings
- [x] Add a missing "root=False" in cmd/fetch.py
- [x] Assert that a spec is concrete before checking installation status
This PR updates the list of images we build nightly, deprecating
Ubuntu 16.04 and CentOS 8 and adding Ubuntu 20.04, Ubuntu 22.04
and CentOS Stream. It also removes a lot of duplication by generating
the Dockerfiles during the CI workflow and uploading them as artifacts
for later inspection or reuse.
Fix test_ci_generate_prune_untouched(), which would fail if run when
the latest commit changed the .gitlab-ci.yml. This change mocks the
get_stack_changed() method in that test to disregard the state of
the current spack repo in favor of a mock repo under test control.
gitlab ci: Remove code for relating CDash builds
Relating CDash builds to their dependencies was a seldom used feature. Removing
it will make it easier for us to reorganize our CDash projects & build groups in the
future by eliminating the need to keep track of CDash build ids in our binary mirrors.
* Allow packages to add a 'submodules' property that determines when ad-hoc Git-commit-based versions should initialize submodules
* add support for ad-hoc git-commit-based versions to instantiate submodules if the associated package has a 'submodules' property and it indicates this should happen for the associated spec
* allow Package-level submodule request to influence all explicitly-defined version() in the Package
* skip test on windows which fails because of long paths
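A sketch of the package-level property (package body abridged; attribute placement as described in the bullets above):
```python
class MyPkg(Package):  # hypothetical package
    git = "https://example.com/my-pkg.git"

    # initialize submodules both for explicitly-defined version() entries
    # and for ad-hoc Git-commit-based versions
    submodules = True
```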
Spack added support in #24639 for ad-hoc Git-commit-hash-based
versions: A user can install a package `x@hash`, where `x` is a package
that stores its source code in a Git repository, and the hash refers
to a commit in that repository which is not recorded as an explicit
version in the package.py file for X.
A couple issues were found relating to this:
* If an environment defines an alternative package repo (i.e. with
repos.yaml), and spack.yaml contains user Specs with ad-hoc
Git-commit-hash-based versions for packages in that repo,
then as part of retrieving the data needed for version comparisons
it will attempt to retrieve the package before the environment's
configuration is instantiated.
* The bookkeeping information added to compare ad-hoc git versions was
being stripped from Specs during concretization (such that user
Specs which succeeded before concretizing would then fail after)
This addresses the issues:
* The first issue is resolved by deferring access to the associated
Package until the versions are actually compared to one another.
* The second issue is resolved by ensuring that the Git bookkeeping
information is explicitly applied to Specs after they are concretized.
This also:
* Resolves an ambiguity in the mock_git_version_info fixture used to
create a tree of Git commits and provide a list where each index
maps to a known commit.
* Isolates the cache used for Git repositories in tests using the
mock_git_version_info fixture
* Adds a TODO which points out that if the remote Git repository
overwrites tags, that Spack will then fail when using
ad-hoc Git-commit-hash-based versions
This commit updates the `gpg publish` command to work with the mirror
arguments, when trying to push keys to a mirror.
- [x] update the `gpg publish` command
- [x] add test for publishing GPG keys and rebuilding the key index within a mirror
In a typical call to spack, the OperatingSystem gets instantiated
multiple times. For macOS, each one requires a call to `sw_vers`, which
is done through the Executable helper class. Memoizing
reduces the call count during "spack spec" from three to one.
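A minimal sketch of the idea using Spack's `memoized` decorator (the wrapper function here is illustrative):
```python
from llnl.util.lang import memoized
from spack.util.executable import Executable

@memoized
def macos_version():
    # `sw_vers` runs only on the first call; later calls hit the cache
    return Executable("sw_vers")("-productVersion", output=str).strip()
```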
Currently environments are indexed by build hashes. When looking into this bug I noticed there is a disconnect between environments that are concretized in memory for the first time and environments that are read from a `spack.lock`. The issue is that specs read from a `spack.lock` don't have a full hash, since they are indexed by a build hash which is strictly coarser. They are also marked "final" as they are read from a file, so we can't compute additional hashes.
This bugfix PR makes "first concretization" equivalent to re-reading the specs from a corresponding `spack.lock`, and doing so unveiled a few tests where we were making wrong assumptions and relying on the fact that a `spack.lock` file was not there already.
* Add unit test
* Modify mpich to trigger jobs in pipelines
* Fix two failing unit tests
* Fix another full_hash vs. build_hash mismatch in tests
* Ignore top-level module config; add auto-update
In Spack 0.17 we got module sets (modules:[name]:[prop]), and for
backwards compat modules:[prop] was short for modules:default:[prop].
But this makes it awkward to define default config for the "default"
module set.
Since 0.17 is branched off, we can now deprecate top-level module config
(that is, just ignore it with a warning).
This PR does that, and it implements `spack config update modules` to
make upgrading easy (we should have added that to 0.17 already...)
It also removes references to `dotkit` stuff which was already
deprecated in 0.13 and could have been removed in 0.14.
Prefix inspections are the only exception, since the top-level prefix inspections
are used for `spack load` and `spack env activate`.
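A sketch of the migration that `spack config update modules` performs (the `tcl:hash_length` property is just an example):
```yaml
# before: deprecated top-level properties
modules:
  tcl:
    hash_length: 7

# after: the same properties under the "default" module set
modules:
  default:
    tcl:
      hash_length: 7
```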
Spack currently allows dependencies to be concretized for an
architecture incompatible with the root. This commit adds rules
to make this situation impossible by design.
* Extract the MetaPathFinder and Loaders for packages in their own classes
https://peps.python.org/pep-0451/
Currently, RepoPath and Repo implement the (deprecated) interface of
MetaPathFinder (find_module) and of Loader (load_module). This commit
extracts both of them and places the code in their own classes.
The MetaPathFinder interface is updated to contain both the deprecated
"find_module" (for Python 2.7 support) and the recommended "find_spec".
Update of the Loader interface is deferred to a subsequent commit.
* Move the lines to be prepended inside "RepoLoader"
Also adjust the naming of a few variables
* Remove spack.util.imp, since code is only used in spack.repo
* Remove support for loading Python modules on Python > 3 but < 3.5
* Remove `Repo._create_namespace`
This function was interacting badly with the MetaPathFinder
and causing issues with "normal" imports. Removing the
function makes it possible to do things like:
```python
import spack.pkg.builtin.mpich
cls = spack.pkg.builtin.mpich.Mpich
```
* Remove code needed to trigger the Singleton evaluation
The finder is coded in a way to trigger the Singleton,
so we don't need external code now that we register it
at module level into `sys.meta_path`.
* Add unit tests
Some servers require `User-Agent` to be set, and otherwise error with
access denied. One such example is mpich.
To fix this, set `User-Agent: Spackbot/[version]` as a header.
Apparently by convention, it should include the word `bot`.
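A minimal sketch of such a request (URL and version string are illustrative):
```python
from urllib.request import Request, urlopen

req = Request(
    "https://example.com/mpich-4.0.tar.gz",
    # some servers (e.g. the one hosting mpich) deny requests without this
    headers={"User-Agent": "Spackbot/0.18.0"},
)
response = urlopen(req)
```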
#27021 broke fetching for CVS-based packages because:
- The mirror logic was using URL parsing to extract a path from the
CVS repository location
- #27021 added sanity checks to enforce that strings passed to the
URL parser were actually URLs
This replaces the call to "url_util.parse" with logic that is
customized for CVS. This implies that VCSFetchStrategy should
rename the "url_attr" attribute to something more generic, but
that should be handled separately.
Allow declaring possible values for variants with an associated condition. If the variant takes one of those values, the condition is imposed as a further constraint.
The idea of this PR is to implement part of the mechanisms needed for modeling [packages with multiple build-systems](https://github.com/spack/seps/pull/3). After this PR the build-system directive can be implemented as:
```python
variant(
'build-system',
default='cmake',
values=(
'autotools',
conditional('cmake', when='@X.Y:')
),
description='...',
)
```
Modifications:
- [x] Allow conditional possible values in variants
- [x] Add a unit-test for the feature
- [x] Add documentation
* tests for rewiring pure specs to spliced specs
* relocate text, binaries, and links
* using llnl.util.symlink for windows compat.
Note: This does not include CLI hooks for relocation.
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
- Add variants for various common build flags, including support for both versions of the Racket VM environment.
- Prevent `-j` flags to `make`, which has been known to cause problems with Racket builds.
- Prefer the minimal release to improve install times. Bells and whistles carry their own runtime dependencies and should be installed via `raco`. An enterprising user may even create a `RacketPackage` class to make spack aware of `raco` installed packages.
- Match the official version numbering scheme.
Update "spack external find --all" to also find library-only packages.
A Package can add a ".libraries" attribute, which is a list of regular
expressions to use to find libraries associated with the Package.
"spack external find --all" will search LD_LIBRARY_PATH for potential
libraries.
This PR adds examples for NCCL, RCCL, and hipblas packages. These
examples specify the suffix ".so" for the regular expressions used
to find libraries, so generally are only useful for detecting library
packages on Linux.
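A sketch of the attribute on one such package (body abridged; the expression follows the ".so" suffix note above):
```python
class Rccl(CMakePackage):
    """ROCm communication collectives library (abridged)."""

    # regular expressions that `spack external find --all` matches against
    # library names found on LD_LIBRARY_PATH
    libraries = ["librccl.so"]
```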
Do not prompt user with checksum warning when using git commit hashes
as versions. Spack was incorrectly reporting this as a potential
problem: it would display a prompt asking the user whether they
want to proceed if Spack was running in a terminal, or it would
terminate the running instance of Spack if running as part of a
script.
* Add pl2bat to PATH: Perl on Windows requires the script pl2bat.bat
and Perl to be available to the installer via the PATH. The build
and dependent environments of Perl on Windows have the install
prefix's bin directory added to the PATH.
* symlink with win32file module instead of using Executable to
call mklink (mklink is a shell function and so is not accessible
in this manner).
We've previously generated CI pipelines for PRs, and they rebuild any packages that don't have
a binary in an existing build cache. The assumption we were making was that ALL prior merged
builds would be in cache, but due to the way we do security in the pipeline, they aren't. `develop`
pipelines can take a while to catch up with the latest PRs, and while it does that, there may be a
bunch of redundant builds on PRs that duplicate things being rebuilt on `develop`. Until we can
do better caching of PR builds, we'll have this problem.
We can do better in PRs, though, by *only* rebuilding things in the CI environment that are actually
touched by the PR. This change computes exactly what packages are changed by a PR branch and
*only* includes those packages' dependents and dependencies in the generated pipeline. Other
as-yet unbuilt packages are pruned from CI for the PR.
For `develop` pipelines, we still want to build everything to ensure that the stack works, and to ensure
that `develop` catches up with PRs. This is especially true since we do not do rebuilds for *every* commit
on `develop` -- just the most recent one after each `develop` pipeline finishes. Since we skip around,
we may end up missing builds unless we ensure that we rebuild everything.
We differentiate between `develop` and PR pipelines in `.gitlab-ci.yml` by setting
`SPACK_PRUNE_UNTOUCHED` for PRs. `develop` will still have the old behavior.
- [x] Add `SPACK_PRUNE_UNTOUCHED` variable to `spack ci`
- [x] Refactor `spack pkg` command by moving historical package checking logic to `spack.repo`
- [x] Implement pruning logic in `spack ci` to remove untouched packages
- [x] add tests
* cmake: use CMAKE_INSTALL_RPATH_USE_LINK_PATH
Spack has a heuristic to add rpaths for packages it knows are required,
but it's really a heuristic, and it does not work when the dependencies
put their libraries in a different folder than `<prefix>/lib{64,}`.
CMake patches binaries after install with the "install rpaths", which by
default are provided by Spack and its heuristic through
`CMAKE_INSTALL_RPATH`.
CMake however knows better what libraries are effectively being linked
to, and has an option to include those in the install rpath too, through
`CMAKE_INSTALL_RPATH_USE_LINK_PATH`.
These two CMake options are complementary, repeated rpaths seem to be
filtered, and the "use link path" paths are appended to Spack's
heuristic "install rpath".
So, it seems like a good idea to enable "use link path" by default, so
that:
- `dlopen` by library name uses Spack's heuristic search paths
- linked libraries in non-standard locations within a prefix get an
rpath thanks to CMake.
* docs
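Illustratively, the net effect of the change above on a package's CMake invocation is to pass both options (the rpath list is a placeholder):
```python
heuristic_rpaths = ["/spack/store/pkg-a/lib", "/spack/store/pkg-b/lib64"]
cmake_args = [
    # Spack's heuristic rpaths still seed the install rpath ...
    "-DCMAKE_INSTALL_RPATH:STRING=" + ";".join(heuristic_rpaths),
    # ... and CMake appends the directories of libraries actually linked
    "-DCMAKE_INSTALL_RPATH_USE_LINK_PATH:BOOL=ON",
]
```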
Add output of build- and install-time tests to info command
Enable dependencies, variants, and versions by default (i.e., provide --no*
options); add gcc to test_info_fields to increase coverage for c_names->v_names.
We shouldn't be using "remove_linked_tree" to remove the lock file,
since that function expects to receive a directory path as an
argument.
Also, as a further measure to avoid regression, this commit restores
the "ignore_errors=True" argument on linux and adds a unit test
checking that "remove_linked_tree" doesn't change file permissions
as a side effect of a failure to remove.
Reduces the number of stat calls to a bare minimum:
- Single pass over src prefixes
- Handle projection clashes in memory
Symlinked directories in the src prefixes are now conditionally
transformed into directories with symlinks in the dst dir. Notably
`intel-mkl`, `cuda` and `qt` have top-level symlinked directories that
previously resulted in empty directories in the view. We now avoid
cycles and possible exponential blowup by only expanding symlinks that:
- point to dirs deeper in the folder structure;
- are at a fixed depth of 2.
Currently `old_root` is computed by reading the symlink at `self.root`.
We should be more defensive in removing it by checking that it is in the
same directory as the new root. Otherwise, in the worst case, when
someone runs `spack env create --with-view=./view -d .` and `view`
already exists and is a symlink to `/`, Spack effectively runs `rm -rf /`.
`file` was used to detect Python scripts with shebangs, so that the interpreter could be changed from <python prefix> to <view path>. With this change, we detect shebangs using Python instead, so that `file` is no longer required.
The number of commit characters in patch files fetched from GitHub can change,
so we should use `full_index=1` to enforce full commit hashes (and a stable
patch `sha256`).
Similarly, URLs for branches like `master` don't give us stable patch files,
because branches are moving targets. Use specific tags or commits for those.
- [x] update all github patch URLs to use `full_index=1`
- [x] don't use `master` or other branches for patches
- [x] add an audit check and a test for `?full_index=1`
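For instance, a patch directive following these rules might look like this (commit hash and sha256 are placeholders):
```python
patch(
    # a specific commit, with ?full_index=1 for a stable patch file
    "https://github.com/org/project/commit/<commit-sha>.patch?full_index=1",
    sha256="<sha256 of the patch file>",
    when="@1.2.0",
)
```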
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
The docs' "Known issues" section reports only 2 of the bugs reported on GitHub.
One of the two is also outdated, since the issue has been solved
with the new concretizer. Thus, this commit removes the section.
When you install Spack from a tarball, it will always show an exact
version for Spack itself, even when you don't download a tagged commit:
```
$ wget -q https://github.com/spack/spack/archive/refs/heads/develop.tar.gz
$ tar -xf develop.tar.gz
$ ./spack-develop/bin/spack --version
0.16.2
```
This PR sets the Spack version to `0.18.0.dev0` on develop, following [PEP440](https://github.com/spack/spack/pull/25267#issuecomment-896340234) as
suggested by Adam Stewart.
```
spack (fix/set-dev-version)$ spack --version
0.18.0.dev0 (git 0.17.1-1526-e270464ae0)
spack (fix/set-dev-version)$ mv .git .git_
spack $ spack --version
0.18.0.dev0
```
- [x] Update the release guide
- [x] Add __version__ to spack's __init__.py
- [x] Use PEP 440 canonical version strings
- [x] Make spack --version output [actual version] (git version)
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Add tests to ensure google cloud storage urls work as mirrors
This commit adds two tests to track that GCS buckets can work as
mirrors, and can be parsed as valid URLs.
Currently, gs:// format URLs are not correctly parsed.
* Fix URL parsing for GCS buckets
This commit adds GCS bucket URLs as valid URLs.
* lower priority of package-provided urls
This change favors urls found in a scraped page over those provided by
the package from `url_for_version`. In most cases this doesn't matter,
but R specifically returns known bad URLs in some cases, and the
fallback path for a failed fetch uses `fetch_remote_versions` to find a
substitute. This fixes that problem.
fixes #29204
* consider what links actually exist in all cases
Checksum was only actually scraping when called with no versions. It
now always scrapes and then selects URLs from the set of URLs known to
exist whenever possible.
fixes #25831
* bow to the wrath of flake8
* test-fetch urls from package, prefer if successful
* Update lib/spack/spack/package.py
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* reword as suggested
* re-enable mypy specific ignore and ignore pyflakes
* remove flake8 ignore from .flake8
* address review comments
* address comments
* add sneaky missing substitute
I missed this one because we call substitute on a URL that doesn't
contain a version component. I'm not sure how that's supposed to work,
but apparently it's required by at least one mock package, so back in it
goes.
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
Adds `spack external read-cray-manifest`, which reads a json file that describes a set of package DAGs. The parsed results are stored directly in the database. A user can see these installed specs with `spack find` (like any installed spec). The easiest way to use them right now as dependencies is to run `spack spec ... ^/hash-of-external-package`.
Changes include:
* `spack external read-cray-manifest --file <path/to/file>` will add all specs described in the file to Spack's installation DB and will also install described compilers to the compilers configuration (the expected format of the file is described in this PR as well, including examples of the file)
* Database records now may include an "origin" (the command added in this PR registers the origin as "external-db"). In the future, it is assumed users may want to be able to treat installs registered with this command differently (e.g. they may want to uninstall all specs added with this command)
* Hash properties are now always preserved when copying specs if the source spec is concrete
* I don't think the hashes of installed-and-concrete specs should change and this was the easiest way to handle that
* also specs that are concrete preserve their `.normal` property when copied (external specs may mention compilers that are not registered, and without this change they would fail in `normalize` when calling `validate_or_raise`)
* it might be that this should only be the case if the spec was installed
- [x] Improve testing
- [x] Specifically mark DB records added with this command (so that users can do something like "uninstall all packages added with `spack read-external-db`")
* This is now possible with `spack uninstall --all --origin=external-db` (this will remove all specs added from manifest files)
- [x] Strip variants that are listed in json entries but don't actually exist for the package
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
This PR removes a few outdated sections from the "Basics" part of the
documentation. It also makes a few topics under the environment section
more prominent by removing an unneeded spack.yaml subsection and
promoting everything under it.
Consolidate Spack's internal filepath logic to a select
few places and refactor to consistent internal usage of
os.path utilities. Creates a prefix, and a series of utilities
in the path utility module that facilitate handling paths
in a platform agnostic manner.
Convert Windows paths to posix paths internally
Prefer posixpath.join instead of os.path.join
Updated util/ directory to account for Windows integration
Co-authored-by: Stephen Crowell <stephen.crowell@khq.kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
Module template format for windows (#23041)
* Incorporate new search location
* Add external user option
* proper doc string
* Explicit commands in getting started
* raise during chgrp on Win
recover installer changes
Notate admin privileges
Windows phase install hooks
Find external python and install ninja (#23496)
Allow external find python to find windows python and spack install ninja
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Fixup common tests
* Remove requirement for Python 2.6
* Skip new failing test
Windows: Update url util to handle Windows paths (#27959)
* update url util to handle windows paths
* Update tests to handle fixed url handling
* canonicalize path only when the path type matches the host platform
* Skip some url tests on Windows
Co-authored-by: Omar Padron <omar.padron@kitware.com>
Use threading.TIMEOUT_MAX when available (#24246)
This value was introduced in Python 3.2. Specifying a timeout greater than
this value will raise an OverflowError.
Co-authored-by: Lou Lawrence <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Add compiler hint to the root spec for Windows
Reporters on Windows (#26038)
Reporters use Jinja2 as the templating engine, and Jinja2 indexes
templates by Unix separators, even on Windows, so search using Unix paths
on all systems.
Support patching on win via git (#25871)
Handle GRP on windows
CMake - Windows Bootstrap (#25825)
Remove hardcoded cmake compiler (#26410)
Revert breaking cmake changes
Ensure no autotools on Windows
Perl on Windows (#26612)
Python source build windows (#26313)
Reconfigure sysconf for Windows
Python2.6 compatibility
Fixup new sbang tests for Windows
Ruby support (#28287)
Add NASM support (#28319)
Add mock Ninja package for testing
* Style fixes
* Use Python's zipfile, if available
The compression libs are optional in Python. Rely on Python as a
first attempt, then fall back to `unzip`.
MSVC's internal CMake and Ninja are now detected by spack external find and added to packages.yaml
Saving progress on packaging zlib for Windows
Fixing the shared CMake flag
* Loading Intel's ifx Fortran compiler into MSVC; if there are multiple
versions of MSVC installed and detected, ifx will only be placed into
the first block written in compilers.yaml. The version number of ifx can
be detected using MSVC's version flag (instead of /QV) by using
ignore_version_errors. This commit also provides support for detection
of Intel compilers in their own compiler block by adding ifx.exe to the
fc/f77_name blocks inside intel.py
* Giving CMake a Fortran compiler argument
* Adding patch file for removing duplicated mangling header for versions 3.9.1 and older; static and shared now successfully building on Windows
* Have netlib-lapack depend on ninja@1.10
Co-authored-by: John R. Cary <cary@txcorp.com>
Co-authored-by: Jared Popelar <jpopelar@txcorp.com>
Making a default config.yaml for Windows
Small path length for build_stage
Provide more prerequisite details, mention default config.yaml
Killing an unnecessary setvars call
Replacing some lost changes, proofreading, updating windows-supported package list
Co-authored-by: John Parent <john.parent@kitware.com>
* Add 'make-installer' command for Windows
* Add '--bat' arg to env activate, env deactivate and unload commands
* An equivalent script to setup-env on linux: spack_cmd.bat. This script
has a wrapper to evaluate cd, load/unload, env activate/deactivate. (#21734)
* Add spacktivate and config editor (#22049)
* spack_cmd: will find python and spack on its own. It preferentially
tries to use python on your PATH (#22414)
* Ignore Windows python installer if found (#23134)
* Bundle git in windows installer (#23597)
* Add Windows section to Getting Started document
(#23131), (#23295), (#24240)
Co-authored-by: Stephen Crowell <stephen.crowell@kitware.com>
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Co-authored-by: Jared Popelar <jpopelar@txcorp.com>
Co-authored-by: Ben Cowan <benc@txcorp.com>
Update Installer CI
Co-authored-by: John Parent <john.parent@kitware.com>
Made the vcvars batch script location a member variable of the msvc compiler subclass, initialized from the compiler executable path. Added a setup_custom_environment() method to the msvc subclass that sources the vcvars script, dumps the environment, and copies the relevant environment variables to the Spack environment. Added class variables to the Windows OS and MSVC compiler subclasses to enable finding the compiler executables and determining their versions.
* Fixed path and uid issues.
* Added needed import statement; kluged .exe extension.
* Got package to build. Some manual intervention necessary, including sourcing the MSVC setup script and having certain configuration parameters.
* Removed CMake executable suffix hack.
To provide Windows-compatible functionality, spack code should use
llnl.util.symlink instead of os.symlink. On non-Windows platforms
and on Windows where supported, os.symlink will still be used.
Use junctions when symlinks aren't supported on Windows (#22583)
Support islink for junctions (#24182)
Windows: Update llnl/util/filesystem
* Use '/' as path separator on Windows.
* Recognizing that Windows paths start with '<Letter>:/' instead of '/'
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
os.rename() fails on Windows if file already exists.
Create getuid utility function (#21736)
On Windows, replace os.getuid with ctypes.windll.shell32.IsUserAnAdmin().
Tests: Use getuid util function
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
1. Forwarding sys.stdin, e.g. use input_multiprocess_fd,
gives an error on Windows. Skipping for now
2. subprocess_context needs to serialize for Windows, like it does
for Mac.
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
* Snapshot of some MSVC infrastructure added during experiments a while ago. Rebasing from spack/develop.
* Added platform and OS definitions for Windows.
* Updated Windows platform file to conform to new archspec use.
* Added Windows as a platform; introduced some debugging code.
* Added type annotations.
* Fixed copyright.
* Removed print statements.
* Ensure `spack arch` returns correctly on Windows (#21428)
* Correctly identify windows as 'windows-Windows10-AMD64'
Re-work the checks and comparisons around commit versions. When no
commit version is involved the overhead is now in the noise; where one
is involved, the overhead is now constant rather than linear.
fixes #29446
The new setup_*_environment functions have been falling back
to calling the old functions and warning the user since #11115.
This commit removes the fallback behavior and any use of:
- setup_environment
- setup_dependent_environment
in the codebase
Change the internal representation of `Spec` to allow for multiple dependencies or
dependents stemming from the same package. This change makes it possible to represent cases
that are frequent in cross-compiled environments or when bootstrapping compilers.
Modifications:
- [x] Substitute `DependencyMap` with `_EdgeMap`. The main differences are that the
latter does not support direct item assignment and can be modified only through its
API. It also provides a `select_by` method to query items.
- [x] Reworked a few public APIs of `Spec` to get list of dependencies or related edges.
- [x] Added unit tests to prevent regression on #11983 and prove the synthetic construction
of specs with multiple deps from the same package.
Since #22845 went in first, this PR reuses that format and thus it should not change hashes.
The same package may be present multiple times in the list of dependencies with different
associated specs (each with its own hash).
* environment.py: allow link:run
Some users want minimal views, excluding run-type dependencies, since
those types of dependencies are covered by rpaths and the symlinked
libraries in the view aren't used anyway.
With this change, an environment like this:
```
spack:
specs: ['py-flake8']
view:
default:
root: view
link: run
```
includes python packages and python, but no link type deps of python.
Speeds up comparison on `Version` by ~2.5x, e.g.
```python
In [1]: v = spack.version.Version('1.0.0'); w = spack.version.Version('1.0.2')
In [2]: %timeit v < w
1.47 µs ± 5.59 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
535 ns ± 1.75 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
fixes #29203
This PR fixes a subtle bug we have when importing
Spack packages as Python modules that can lead to
multiple module objects being created for the same
package.
It also fixes all the places in unit tests where relying
on the old bug was crucial to getting a fresh "clean" state
of the package class.
This commit reverts the GCS fetch strategy to before commit:
d759612523
The previous commit added some s3 syntax to handle connections, but
added them into the GCS fetch strategy in a way that prevents GCS from
working anymore.
* rocmcc compiler: initial commit based on aocc and clang
Co-authored-by: luker <luke.roskop@hpe.com>
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
The status displayed in the terminal title could be wrong when doing
distributed builds. For instance, doing `spack install glib` in two
different terminals could lead to the current package being reported as
`40/29` due to the way Spack handles retrying locks.
Work around this by keeping track of the package IDs that were already
encountered to avoid counting packages twice.