1. Forwarding sys.stdin, e.g. using input_multiprocess_fd,
gives an error on Windows. Skipping for now.
3. subprocess_context needs to serialize for Windows, like it does
for Mac.
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
* Snapshot of some MSVC infrastructure added during experiments a while ago. Rebasing from spack/develop.
* Added platform and OS definitions for Windows.
* Updated Windows platform file to conform to new archspec use.
* Added Windows as a platform; introduced some debugging code.
* Added type annotations.
* Fixed copyright.
* Removed print statements.
* Ensure `spack arch` returns correctly on Windows (#21428)
* Correctly identify windows as 'windows-Windows10-AMD64'
Re-work the checks and comparisons around commit versions. When no
commit version is involved the overhead is now in the noise; when one
is involved, the overhead is now constant rather than linear.
fixes #29446
The new setup_*_environment functions have been falling back
to calling the old functions and warning the user since #11115.
This commit removes the fallback behavior and any use of:
- setup_environment
- setup_dependent_environment
in the codebase
Change the internal representation of `Spec` to allow for multiple dependencies or
dependents stemming from the same package. This change makes it possible to represent
cases that are frequent in cross-compiled environments or when bootstrapping compilers.
Modifications:
- [x] Substitute `DependencyMap` with `_EdgeMap`. The main differences are that the
latter does not support direct item assignment and can be modified only through its
API. It also provides a `select_by` method to query items.
- [x] Reworked a few public APIs of `Spec` to get list of dependencies or related edges.
- [x] Added unit tests to prevent regression on #11983 and prove the synthetic construction
of specs with multiple deps from the same package.
Since #22845 went in first, this PR reuses that format and thus it should not change hashes.
The same package may be present multiple times in the list of dependencies with different
associated specs (each with its own hash).
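The shape of the new container can be pictured with a small sketch (the class name and the `select_by` signature below are illustrative, not the actual implementation):
```python
class _EdgeMapSketch:
    """Conceptual sketch only: edges keyed by package name, possibly several
    per name, modified through the API rather than by item assignment."""

    def __init__(self):
        self._edges = {}  # name -> list of edge objects

    def add(self, edge):
        self._edges.setdefault(edge.spec.name, []).append(edge)

    def select_by(self, predicate):
        return [e for edges in self._edges.values() for e in edges if predicate(e)]
```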
* environment.py: allow link:run
Some users want minimal views, excluding run-type dependencies, since
those types of dependencies are covered by rpaths and the symlinked
libraries in the view aren't used anyway.
With this change, an environment like this:
```
spack:
specs: ['py-flake8']
view:
default:
root: view
link: run
```
includes python packages and python, but no link type deps of python.
Speeds up comparison on `Version` by ~2.5x, e.g.
```python
In [1]: v = spack.version.Version('1.0.0'); w = spack.version.Version('1.0.2')
In [2]: %timeit v < w
1.47 µs ± 5.59 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
535 ns ± 1.75 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
fixes #29203
This PR fixes a subtle bug we have when importing
Spack packages as Python modules that can lead to
multiple module objects being created for the same
package.
It also fixes all the places in unit-tests where
"relying" on the old bug was crucial to have a new
"clean" state of the package class.
This commit reverts the GCS fetch strategy to before commit:
d759612523
The previous commit added some s3 syntax to handle connections, but
added them into the GCS fetch strategy in a way that prevents GCS from
working anymore.
* rocmcc compiler: initial commit based on aocc and clang
Co-authored-by: luker <luke.roskop@hpe.com>
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
The status displayed in the terminal title could be wrong when doing
distributed builds. For instance, doing `spack install glib` in two
different terminals could lead to the current package being reported as
`40/29` due to the way Spack handles retrying locks.
Work around this by keeping track of the package IDs that were already
encountered to avoid counting packages twice.
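A minimal sketch of the workaround (names are hypothetical): only count a package ID the first time it is encountered, even if its install is retried after a lock timeout.
```python
already_counted = set()

def count_package(pkg_id, installed_count):
    # Bump the terminal-title counter only the first time we see this package.
    if pkg_id in already_counted:
        return installed_count
    already_counted.add(pkg_id)
    return installed_count + 1
```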
See https://github.com/spack/spack/pull/28468/files#r809156986
If we exit before generating the:
error("Dependencies must have compatible OS's with their dependents").
...
facts, we'll output a problem that is effectively
different from the one solved by clingo.
* cmd/checksum: prefer url matching url_from_version
This is a minimal change toward getting the right archive from places
like github. The heuristic is:
* if an archive url exists, take its version
* generate a url from the package with pkg.url_from_version
* if they match
* stop considering other URLs for this version
* otherwise, continue replacing the url for the version
I doubt this will always work, but it should address a variety of
versions of this bug. A good test right now is `spack checksum gh`,
which checksums macos binaries without this, and the correct source
packages with it.
fixes #15985
related to #14129
related to #13940
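Roughly, that heuristic could be sketched as follows (assuming the `url_from_version` helper named above; the function itself is illustrative):
```python
def pick_url(pkg, version, archive_url, candidate_url):
    # If the archive URL is exactly what the package would generate for this
    # version, keep it and stop considering other URLs.
    if archive_url and archive_url == pkg.url_from_version(version):
        return archive_url
    # Otherwise, keep replacing the URL for this version.
    return candidate_url
```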
* add heuristics to help create as well
Since create can't rely on an existing package, this commit adds another
pair of heuristics:
1. if the current version is a specifically listed archive, don't
replace it
2. if the current url matches the result of applying
`spack.url.substitute_version(a, ver)` for any a in archive_urls,
prefer it and don't replace it
fixes #13940
* clean up style and a lingering debug import
* ok flake8, you got me
* document reference_package argument
* Update lib/spack/spack/util/web.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* try to appease sphinx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
We can see what is in the bootstrap store with `spack find -b`, and you can clean it with `spack
clean -b`, but we can't do much else with it, and if there are bootstrap issues they can be hard to
debug.
We already have `spack --mock`, which allows you to swap in the mock packages from the command
line. This PR introduces `spack -b` / `spack --bootstrap`, which runs all of spack with
`ensure_bootstrap_configuration()` set. This means that you can run `spack -b find`, `spack -b
install`, `spack -b spec`, etc. to see what *would* happen with bootstrap configuration, to remove
specific bootstrap packages, etc. This will hopefully make developers' lives easier as they deal
with bootstrap packages.
This PR also uses a `nullcontext` context manager. `nullcontext` has been implemented in several
other places in Spack, and this PR consolidates them to `llnl.util.lang`, with a note that we can
delete the function if we ever require a new enough Python.
- [x] introduce `spack --bootstrap` option
- [x] consolidated all `nullcontext` usages to `llnl.util.lang`
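For reference, the consolidated helper is essentially the no-op context manager below (a sketch; `contextlib.nullcontext` itself only exists on Python 3.7+):
```python
import contextlib

@contextlib.contextmanager
def nullcontext(*args, **kwargs):
    # Do nothing on enter and exit; useful when a context manager is optional.
    yield
```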
Some "concrete" versions on the command line, e.g. `qt@5` are really
meant to satisfy some actual concrete version from a package. We should
only assume the user is introducing a new, unknown version on the CLI
if we, well, don't know of any version that satisfies the user's
request. So, if we know about `5.11.1` and `5.11.3` and they ask for
`5.11.2`, we'd ask the solver to consider `5.11.2` as a solution. If
they just ask for `5`, though, `5.11.1` or `5.11.3` are fine solutions,
as they satisfy `@5`, so use them.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
See https://github.com/spack/spack/issues/25353#issuecomment-1041868116
This commit changes the default behavior of
```
$ spack external find
```
from searching all the possible packages Spack knows about to
search only for the ones tagged as being a "build-tool".
It also introduces a `--all` option to restore the old behavior.
Prefer `sw_vers` to `platform.mac_ver`. In an anaconda3 installation, for example, the latter reports 10.16 on Monterey -- I think this is affected by how and where the python instance was built.
Use MACOSX_DEPLOYMENT_TARGET if present to override the operating system choice.
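A sketch of the resulting lookup order (the function name is illustrative): deployment-target override first, then `sw_vers`, then `platform.mac_ver` as a fallback.
```python
import os
import platform
import subprocess

def macos_version():
    override = os.environ.get("MACOSX_DEPLOYMENT_TARGET")
    if override:
        return override
    try:
        return subprocess.check_output(
            ["sw_vers", "-productVersion"], universal_newlines=True
        ).strip()
    except (OSError, subprocess.CalledProcessError):
        return platform.mac_ver()[0]
```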
It will be useful for metrics gathering and possibly debugging to
have this environment variable available in the runner pods that
do the actual rebuilds.
Since Spack does not install external packages, this commit skips them by
default when running stand-alone tests. The assumption is that such packages
have likely undergone an acceptance test process.
However, the tests can be run against installed externals using
```
% spack test run --externals ...
```
fixes #28260
Since we iterate over different variants from many packages, the variant
values may have types which are not comparable, which causes errors
at runtime. This is not a real issue though, since we don't need the facts
to be ordered. Thus, to avoid needless sorting, the sorted function has
been removed, and a comment has been added advising any developer who
might need to inspect these clauses for debugging to add back sorting
on the first two items only.
It's kind of difficult to add a test for this, since the error depends on
whether Python's sorting algorithm ever needs to compare the third
value of a tuple being ordered.
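If that debugging sort is ever needed, restricting the key to the first two items keeps the comparison safe even when third items mix types, e.g.:
```python
facts = [("variant_value", "pkg", True), ("variant_value", "pkg", "baz")]
# sorted(facts) would raise TypeError when it reaches the third element;
# sorting on the first two items only never compares the mixed-type values.
facts.sort(key=lambda fact: fact[:2])
```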
* extensions: allow multiple "extends" directives
This will allow multiple extends directives in a package as long as only one of
them is selected as a dependency in the concrete spec.
* document the option to have multiple extends
Reuse previously was a very invasive change that required parameters to be added to all
the methods that called `concretize()` on a `Spec` object. With the addition of
concretizer configuration, we can use the config system to simplify this argument
passing and keep the code cleaner.
We decided that concretizer config options should be read at `Solver` instantiation
time, and if config changes between instantiation of a particular solver and
`solve()` invocation, the `Solver` should use the settings from `__init__()`.
- [x] remove `reuse` keyword argument from most concretize functions
- [x] refactor usages to use `spack.config.override("concretizer:reuse", True)`
- [x] rework argument passing in `Solver` so that parameters are set from config
at instantiation time
`--reuse` was previously handled individually by each command that
needed it. We are growing more concretization options, and they'll
need their own section for commands that support them.
Now there are two concretization options:
* `--reuse`: Attempt to reuse packages from installs and buildcaches.
* `--fresh`: Opposite of reuse -- traditional spack install.
To handle these, this PR adds a `ConfigSetAction` for `argparse`, so
that you can write argparse code like this:
```
subgroup.add_argument(
'--reuse', action=ConfigSetAction, dest="concretizer:reuse",
const=True, default=None,
help='reuse installed dependencies/buildcaches when possible'
)
```
With this, you don't need to add logic to pull the argument out and
handle it; the `ConfigSetAction` just does it for you. This can probably
be used to clean up some other commands later, as well.
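A hedged sketch of what such an action could look like (the real `ConfigSetAction` may differ):
```python
import argparse

import spack.config

class ConfigSetAction(argparse.Action):
    """Set a Spack config entry (dest like 'concretizer:reuse') to `const`."""

    def __init__(self, option_strings, dest, const=None, default=None, **kwargs):
        kwargs.setdefault("nargs", 0)  # the flag takes no value on the command line
        super().__init__(option_strings, dest, const=const, default=default, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        # Write the flag's value straight into config instead of the namespace.
        spack.config.set(self.dest, self.const, scope="command_line")
```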
Code that was previously passing `reuse=True` around everywhere has
been refactored to use config, and config is set from the CLI using
a new `add_concretizer_args()` function in `spack.cmd.common.arguments`.
- [x] Add `ConfigSetAction` to simplify concretizer config on the CLI
- [x] Refactor code so that it does not pass `reuse=True` to every function.
- [x] Refactor commands to use `add_concretizer_args()` and to pass
concretizer config using the config system.
Config scopes were different for `config` and `mutable_config`,
and `mutable_config` did not have a command line scope.
- [x] Fix by consolidating the creation logic for the two fixtures.
The concretizer is going to grow to have many more configuration options,
and we really need some structured config for that.
* We have the `config:concretizer` option that chooses the solver,
but extending that is awkward (we'd need to replace a string with
a `dict`) and the solver choice will be deprecated eventually.
* We have the `concretization` option in environments, but it's
not a top-level config section -- it's just for environments,
and it also only admits a string right now.
To avoid overlapping with either of these and to allow the most
extensibility in the future, this adds a new `concretizer` config
section that can be used in and outside of environments. There
is only one option right now: `reuse`. This can expand to include
other options later.
Likely, we will soon deprecate `config:concretizer` and warn when
the user doesn't use `clingo`, and we will eventually (sometime later)
move the `together` / `separately` options from `concretization` into
the top-level `concretizer` section.
This commit just adds the new section and schema. Fully wiring it
up is TBD.
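As a sketch, the new section's schema might look roughly like this (Spack config schemas are plain jsonschema dictionaries; only `reuse` is defined so far):
```python
# Hedged sketch of a top-level 'concretizer' section schema
properties = {
    "concretizer": {
        "type": "object",
        "additionalProperties": False,
        "properties": {
            "reuse": {"type": "boolean"},
        },
    }
}
```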
The solver has a lot of configuration associated with it. Rather
than adding arguments to everything, we should encapsulate that
in a class. This is the start of that work; it replaces `solve()`
and its kwargs with a class and properties.
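Schematically (names illustrative), the class reads its settings once at construction instead of threading kwargs through every call:
```python
import spack.config

class Solver:
    def __init__(self):
        # Concretizer options are snapshotted from config at instantiation time.
        self.reuse = spack.config.get("concretizer:reuse", False)

    def solve(self, specs):
        # ... run the ASP solve, consulting self.reuse and other properties ...
        raise NotImplementedError("sketch only")
```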
* Add 'stable' to the list of infinity version names.
Rename libunwind 1.5-head to 1.5-stable.
* Add stable to the infinite version list in packaging_guide.rst.
* core: Make platform environment an instance not class method
In preparation for accessing data constructed in __init__.
* macos: set consistent macosx deployment target
This should silence numerous warnings from mixed gcc/macos toolchains.
* perl: prevent too-new deployment target version
```
*** Unexpected MACOSX_DEPLOYMENT_TARGET=11
***
*** Please either set it to a valid macOS version number (e.g., 10.15) or to empty.
```
* Stylin'
* Add deployment target overrides to failing autoconf packages
* Move configure workaround to base autoconf package
This reverts commit 3c119eaf8b4fb37c943d503beacf5ad2aa513d4c.
* Stylin'
* macos: add utility functions for SDK
These aren't yet used but should probably be added to spack debug
report.
* Remove node_target_satisfies/3 in favor of target_satisfies/2
When emitting input facts we don't need to couple target with
packages, but we can emit fewer facts independently and let
the grounder combine them.
* Remove compiler_version_satisfies/4 in favor of compiler_version_satisfies/3
When emitting input facts we don't need to couple compilers with
packages, but we can emit fewer facts independently and let
the grounder combine them.
* Introduce heuristic in the ASP-program
With heuristic we can drive clingo to make better
initial guesses, which lead to fewer choices and
conflicts in the overall solve
* Fix reindex with uninstalled deps
When a prefix of a dep is removed, and the db is reindexed, it is added
through the dependent, but until now it incorrectly listed the spec as
'installed'.
There was also some questionable behavior in the db when the same spec
was added multiple times: it would always be marked installed.
* Always reserve path
* Only add installed spec's prefixes to install prefixes set
* Improve warning, and ensure ensure only ensures
* test: reindex with every file system remnant removed except for the old index; it should give a database with nothing installed, including records with installed==False, external==False, ref_count==0, explicit=True, and these should be removable from the database
* stacks: add regression tests for matrix expansion
* Use constrain semantics to construct spec lists for stacks
* Fix semantics for constraining an anonymous spec. Add tests
* Add sticky variants
* Add unit tests for sticky variants
* Add documentation for sticky variants
* Revert "Revert 19736 because conflicts are avoided by clingo by default (#26721)"
This reverts commit 33ef7d57c1.
* Add stickiness to "allow-unsupported-compiler"
`spack license update-copyright-year` was updating license headers but not the MIT
license file. Make it do that and add a test.
Also simplify the way we bump the latest copyright year so that we only need to
update it in one place.
* Use pip to bootstrap pip
* Bootstrap wheel from source
* Update PythonPackage to install using pip
* Update several packages
* Add wheel as base class dep
* Build phase no longer exists
* Add py-poetry package, fix py-flit-core bootstrapping
* Fix isort build
* Clean up many more packages
* Remove unused import
* Fix unit tests
* Don't directly run setup.py
* Typo fix
* Remove unused imports
* Fix issues caught by CI
* Remove custom setup.py file handling
* Use PythonPackage for installing wheels
* Remove custom phases in PythonPackages
* Remove <phase>_args methods
* Remove unused import
* Fix various packages
* Try to test Python packages directly in CI
* Actually run the pipeline
* Fix more packages
* Fix mappings, fix packages
* Fix dep version
* Work around bug in concretizer
* Various concretization fixes
* Fix gitlab yaml, packages
* Fix typo in gitlab yaml
* Skip more packages that fail to concretize
* Fix? jupyter ecosystem concretization issues
* Solve Jupyter concretization issues
* Prevent duplicate entries in PYTHONPATH
* Skip fenics-dolfinx
* Build fewer Python packages
* Fix missing npm dep
* Specify image
* More package fixes
* Add backends for every from-source package
* Fix version arg
* Remove GitLab CI stuff, add py-installer package
* Remove test deps, re-add install_options
* Function declaration syntax fix
* More build fixes
* Update spack create template
* Update PythonPackage documentation
* Fix documentation build
* Fix unit tests
* Remove pip flag added only in newer pip
* flux: add explicit dependency on jsonschema
* Update packages that have been added since this was branched off of develop
* Move Python 2 deprecation to a separate PR
* py-neurolab: add build dep on py-setuptools
* Use wheels for pip/wheel
* Allow use of pre-installed pip for external Python
* pip -> python -m pip
* Use python -m pip for all packages
* Fix py-wrapt
* Add both platlib and purelib to PYTHONPATH
* py-pyyaml: setuptools is needed for all versions
* py-pyyaml: link flags aren't needed
* Appease spack audit packages
* Some build backend is required for all versions, distutils -> setuptools
* Correctly handle different setup.py filename
* Use wheels for py-tomli to avoid circular dep on py-flit-core
* Fix busco installation procedure
* Clarify things in spack create template
* Test other Python build backends
* Undo changes to busco
* Various fixes
* Don't test other backends
When `spack compiler list` is run without being restricted to a
particular scope, and no compilers are found, say that none are
available, and hint that the user should run `spack compiler find` to
auto-detect compilers.
* Improve docs
* Check if stdin is a tty
* add a test
Many packages implement logic at the class level to handle complex dependencies and
conflicts. Others have started using `with when("@1.0"):` blocks since we added that
capability. The loops and other control logic can cause some pure directive logic not to
be removed by our package hashing logic -- and in many cases that's a lot of code that
will cause unnecessary rebuilds.
This commit changes the unparser so that it will descend into these blocks. Specifically:
1. Descend into loops, if statements, and with blocks at the class level.
2. Don't look inside function definitions (in or outside a class).
3. Don't look at nested class definitions (they don't have directives)
4. Add logic to *remove* empty loops/with blocks/if statements if all directives
in them were removed.
This allows our package hash to ignore a lot of pure metadata that it was not ignoring
before, and makes it less sensitive.
In addition, we add `maintainers` and `tags` to the list of metadata attributes that
Spack should remove from packages when constructing canonical source for a package
hash.
- [x] Make unparser handle if/for/while/with at class level.
- [x] Add tests for control logic removal.
- [x] Add a test to ensure that all packages are not only unparseable, but also
that their canonical source is still compilable. This is a test for
our control logic removal.
- [x] Add another unparse test package that has complex logic.
These are the unit tests from astunparse, converted to pytest, with a few backports from
upstream cpython. These should hopefully keep `unparser.py` well covered as we change it.
We can't tell `print(a, b, c)` and `print((a, b, c))` apart -- both of these expressions
generate different ASTs in Python 2 and Python 3. However, we can decide that we don't
care. This commit treats both of them the same when `py_ver_consistent` is set with
`unparse()`.
This means that the package hash won't notice changes from printing a tuple to printing
multiple values, but we don't care, because this is extremely unlikely to affect the build.
More than likely this is just an error message for the user of the package.
- [x] treat `print(a, b, c)` and `print((a, b, c))` the same in py2 and py3
- [x] add another package parsing test -- legion -- that exercises this feature
To make it easier to see how package hashes change and how they are computed, add two
commands:
* `spack pkg source <spec>`: dumps source code for a package to the terminal
* `spack pkg source --canonical <spec>`: dumps canonicalized source code for a
package to the terminal. It strips comments, directives, and known-unused
multimethods from the package. It is used to generate package hashes.
* `spack pkg hash <spec>`: This gives the package hash for a particular spec.
It is generated from the canonical source code for the spec.
- [x] `add spack pkg source` and `spack pkg hash`
- [x] add tests
- [x] fix bug in multimethod resolution with boolean `@when` values
Co-authored-by: Greg Becker <becker33@llnl.gov>
We are planning to switch to using full hashes for Spack specs, which means that the
package hash will be included in the deployment descriptor. This means we need a more
robust package hash than simply dumping the `repr` of the AST.
The AST repr that we previously used for package content is unreliable because it can
vary between python versions (Python's AST actually changes fairly frequently).
- [x] change `package_hash`, `package_ast`, and `canonical_source` to accept a string for
alternate source instead of a filename.
- [x] consolidate package hash tests in `test/util/package_hash.py`.
- [x] remove old `package_content` method.
- [x] make `package_hash` do what `canonical_source_hash` was doing before.
- [x] modify `content_hash` in `package.py` to use the new `package_hash` function.
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Our package hash is supposed to be consistent from python version to python version.
Test this by adding some known unparse inputs and ensuring that they always have the
same canonical hash. This test relies on the fact that we run Spack's unit tests
across many python versions. We can't compute for several python versions within the
same test run so we precompute the hashes and check them in CI.
Package hashing was not properly handling multimethods. In particular, it was removing
any functions that had decorators from the output, so we'd miss things like
`@run_after("install")`, etc.
There were also problems with handling multiple `@when`'s in a single file, and with
handling `@when` functions that *had* to be evaluated dynamically.
- [x] Rework static `@when` resolution for package hash
- [x] Ensure that functions with decorators are not removed from output
- [x] Add tests for many different @when scenarios (multiple @when's,
combining with other decorators, default/no default, etc.)
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Previously we used `directives.__all__` to get directive names, but it wasn't
quite right -- it included `DirectiveMeta`, etc. It's not wrong, but it's also
not the clearest way to do this.
- [x] Refactor `@directive` to track names in `directive_names` global
- [x] Rename `_directive_names` to `_directive_dict_names` in `DirectiveMeta`
- [x] Add a test for `RemoveDirectives`
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Some packages use top-level unassigned strings instead of comments, either just after a
docstring or in the body somewhere else. Ignore those strings because they have no
effect on package behavior.
- [x] adjust RemoveDocstrings to remove all free-standing strings.
- [x] move tests for util/package_hash.py to test/util/package_hash.py
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
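The removal can be pictured with a small `ast` sketch (illustrative, not the exact visitor used by Spack):
```python
import ast

class RemoveFreeStrings(ast.NodeTransformer):
    def visit_Expr(self, node):
        # A bare string used as a statement (docstring or stray "comment" string)
        # contributes nothing to package behavior, so drop it from the tree.
        if isinstance(node.value, ast.Constant) and isinstance(node.value.value, str):
            return None
        return node

tree = ast.parse('"""docstring"""\n"loose note"\nx = 1\n')
tree = RemoveFreeStrings().visit(tree)
assert len(tree.body) == 1  # only the assignment remains
```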
Python 2 and 3 represent string literals differently in the AST. Python 2 requires '\x'
literals, and Python 3 source is always unicode, and allows unicode to be written
directly. These also unparse differently by default.
- [x] modify unparser to write both out the way `repr` would in Python 2 when
`py_ver_consistent` is provided.
Backport operator precedence algorithm from here:
397b96f6d7
This eliminates unnecessary parentheses from our unparsed output and makes Spack's unparser
consistent with the one in upstream Python 3.9+, with one exception.
Our parser normalizes argument order when `py_ver_consistent` is set, so that star arguments
in function calls come last. We have to do this because Python 2's AST doesn't have information
about their actual order.
If we ever support only Python 3.9 and higher, we can easily switch over to `ast.unparse`, as
the unparsing is consistent except for this detail (modulo future changes to `ast.unparse`)
Previously, there were differences in the unparsed code for Python 2.7 and for 3.5-3.10.
This makes unparsed code the same across these Python versions by:
1. Ensuring there are no spaces between unary operators and
their operands.
2. Ensuring that *args and **kwargs are always the last arguments,
regardless of the python version.
3. Always unparsing print as a function.
4. Not putting an extra comma after Python 2 class definitions.
Without these changes, the same source can generate different code for different
Python versions, depending on subtle AST differences.
One place where single source will generate an inconsistent AST is with
multi-argument print statements, e.g.:
```
print("foo", "bar", "baz")
```
In Python 2, this prints a tuple; in Python 3, it is the print function with
multiple arguments. Use `from __future__ import print_function` to avoid
this inconsistency.
Add `astunparse` as `spack_astunparse`. This library unparses Python ASTs and we're
adding it under our own name so that we can make modifications to it.
Ultimately this will be used to make `package_hash` consistent across Python versions.
* Python: set default config_vars
* Add missing commas
* dso_suffix not present for some reason
* Remove use of default_site_packages_dir
* Use config_vars during bootstrapping too
* Catch more errors
* Fix unit tests
* Catch more errors
* Update docstring
This reports the kernel version (vs. the distro version) on Linux and
returns a valid Version (stripping characters like '+' which may be
present for custom-built kernels).
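A sketch of the idea (the helper name is illustrative):
```python
import platform
import re

def kernel_version():
    release = platform.release()  # e.g. "5.4.0+" on a custom-built kernel
    match = re.match(r"\d+(\.\d+)*", release)
    return match.group(0) if match else release
```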
Add a new check to `spack audit` to scan and verify that version constraints may be satisfied
Modifications:
- [x] Add a new check to `spack audit` to scan and verify that version constraints may be satisfied by some version declared in the built-in repository
- [x] Fix issues found by CI
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This command pokes the environment, Python interpreter
and bootstrap store to check if dependencies needed by
Spack are available.
If any are missing, it shows a comprehensible message.
* locks: allow locks to work under high contention
This is a bug found by Harshitha Menon.
The `lock=None` line shouldn't be a release but should be
```
return (lock_type, None)
```
to inform the caller it couldn't get the lock type requested without
disturbing the existing lock object in the database. There were also a
couple of bugs due to taking write locks at the beginning without any
checking or release, and not releasing read locks before requeueing.
This version no longer gives me read upgrade to write errors, even
running 200 instances on one box.
* Change lock in check_deps_status to read, release if not installed,
not sure why this was ever write, but read definitely is more
appropriate here, and the read lock is only held out of the scope if
the package is installed.
* Release read lock before requeueing to reduce chance of livelock, the
timeout that caused the original issue now happens in roughly 3 of 200
workers instead of 199 on average.
With this commit:
```
$ spack env activate --temp
$ spack install zlib
==> All of the packages are already installed
==> Updating view at /tmp/spack-faiirgmt/.spack-env/view
$ spack install zlib
==> All of the packages are already installed
```
Before this PR:
```
$ spack env activate --temp
$ spack install zlib
==> All of the packages are already installed
$ spack install zlib
==> All of the packages are already installed
```
No view was generated
This commit introduces the command
spack module tcl setdefault <package>
similar to the one already available for lmod
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
When running `spack install --log-format junit|cdash ...`, install
errors were ignored. This made spack continue building dependents of a
failed install, ignoring `--fail-fast`, and exit 0 at the end.
Fixes #27652
Ensure that mirror's to_dict function returns a syaml_dict object for all code
paths.
Switch to using the .get function for accessing the potential information from
the S3 mirror objects. If the key is not there, it will gracefully return
None instead of failing with a KeyError
Additionally, check that the connection object is a dictionary before trying
to "get" from it.
Add a test for the capturing of the new S3 information.
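The defensive access pattern can be sketched roughly as follows (key names are illustrative):
```python
def s3_connection_info(url_entry):
    # A mirror URL entry may be a plain string or a dict carrying S3 details.
    if not isinstance(url_entry, dict):
        return {}
    # .get() returns None for missing keys instead of raising KeyError.
    return {
        "access_id": url_entry.get("access_id"),
        "access_token": url_entry.get("access_token"),
        "profile": url_entry.get("profile"),
        "endpoint_url": url_entry.get("endpoint_url"),
    }
```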
Updates to installer.py did not account for spack monitor, so as currently implemented
there are three cases of failure that spack monitor will not account for. To fix this we add
additional hooks, including an on-cancel hook, and also perform a custom action on concretization failure.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
The latest version of `jsonschema` fails if we're not specific about which schema draft
specification we're using. Update all of them to use the latest one (draft-07).
Our `jsonschema` external won't support Python 3.10, so we need to upgrade it.
It currently generates this warning:
lib/spack/external/jsonschema/compat.py:6: DeprecationWarning: Using or importing the ABCs
from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and
in 3.10 it will stop working
This upgrades `jsonschema` to 3.2.0, the latest version with support for Python 2.7. The next
version after this (4.0.0) drops support for 2.7 and 3.6, so we'll have to wait to upgrade to it.
Dependencies have been added in prior commits.
spack monitor now requires authentication, as each build must be associated
with a user, so it does not make sense to allow the --monitor-no-auth flag;
this commit removes it.
This PR also slightly changes the behavior in ci_rebuild().
We now still attempt to submit `spack install` results to CDash
even if the initial registration failed due to connection issues.
This commit follows in the spirit of #24299. We do not want `spack install`
to exit with a non-zero status when something goes wrong while attempting to
report results to CDash.
This PR is meant to move code with "business logic" from `spack.cmd.buildcache` to appropriate core modules[^1].
Modifications:
- [x] Add `spack.binary_distribution.push` to create a binary package from a spec and push it to a mirror
- [x] Add `spack.binary_distribution.install_root_node` to install only the root node of a concrete spec from a buildcache (may check the sha256 sum if it is passed in as input)
- [x] Add `spack.binary_distribution.install_single_spec` to install a single concrete spec from a buildcache
- [x] Add `spack.binary_distribution.download_single_spec` to download a single concrete spec from a buildcache to a local destination
- [x] Add `Spec.from_specfile` that constructs a spec given the path of a JSON or YAML spec file
- [x] Removed logic from `spack.cmd.buildcache`
- [x] Removed calls to `spack.cmd.buildcache` in `spack.bootstrap`
- [x] Deprecate `spack buildcache copy` with a message that says it will be removed in v0.19.0
[^1]: The rationale is that commands should be lightweight wrappers of the core API, since that helps with both testing and scripting (easier mocking and no need to invoke `SpackCommand`s in a script).
After this PR, an error in a single package while detecting
external software won't abort the entire procedure.
The error is reported to screen as a warning.
Remove a try/catch for an error with no handling. If the affected
code doesn't execute successfully, then the associated variable
is undefined and another (more-obscure) error occurs shortly after.
Remove a custom bootstrapping procedure to
use spack.bootstrap instead
Modifications:
* Reference count the bootstrap context manager
* Avoid SpackCommand to make the bootstrapping
procedure more transparent
* Put back requirement on patchelf being in PATH for unit tests
* Add an e2e test to check bootstrapping patchelf
I think this test should be removed, but as long as it stays it should at
least follow the symlink, because it fails for me if I let spack build
patchelf and have a symlink in a view.
Modifications:
- [x] Removed `centos:6` unit test, adjusted vermin checks
- [x] Removed backport of `collections.OrderedDict`
- [x] Removed backport of `functools.total_ordering`
- [x] Removed Python 2.6 specific skip markers in unit tests
- [x] Fixed a few minor Python 2.6 related TODOs in code
Updating the vendored dependencies will be done in separate PRs
* Make CUDA and ROCm architecture conditional
fixes#14337
The variants to specify which architecture to use
for CUDA and ROCm are now conditional on +cuda and
+rocm, respectively.
* cp2k: make all CUDA related variants conditional on +cuda
* Add connection specification to mirror creation
This allows each mirror to contain information about the credentials
used to access it.
Update command and tests based on comments
Switch to only "long form" flags for the s3 connection information.
Use the "any" function instead of checking for an empty list when looking
for s3 connection information.
Split test to use the access token separately from the access id and key.
Use long flag form in test.
Add endpoint_url to available S3 options.
Extend the special parameters for an S3 mirror to accept the
endpoint_url parameter.
Add a test.
* Add connection information per URL not per mirror
Expand the mirror-based connection information to be per-URL.
This will allow a user to specify different S3 connection information
for both the fetch and the push URLs.
Add a parameter for "profile", another way of storing the id/secret pair.
* Switch from "access_profile" to "profile"
Remove the "get_executable" function from the
spack.bootstrap module. Now "flake8", "isort",
"mypy" and "black" will use the same
bootstrapping method as GnuPG.
Currently Spack vendors `pytest` at a version which is three major
versions behind the latest (3.2.5 vs. 6.2.4). We do that since v3.2.5
is the latest version supporting Python 2.6. Staying so far
behind the currently supported versions, though, might introduce
incompatibilities and is surely technical debt.
This PR modifies Spack to:
- Use the vendored `pytest@3.2.5` only as a fallback solution,
if the Python interpreter used for Spack doesn't provide a newer one
- Be able to parse `pytest --collect-only` in all the different output
formats from v3.2.5 to v6.2.4 and use it consistently for `spack unit-test --list-*`
- Update the unit tests in GitHub Actions to use a more recent `pytest` version
This type of error is skipped:
make[1]: *** [Makefile:222: /tmp/user/spack-stage/.../spack-src/usr/lib/julia/libopenblas64_.so.so] Error 1
but it's useful to have it, especially when a package sets a variable
incorrectly in makefiles.
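A pattern along these lines would catch such messages in the build log (a sketch, not the exact expression added to the log parser):
```python
import re

MAKE_ERROR = re.compile(r"^make(\[\d+\])?: \*\*\* \[.*\] Error \d+")

line = "make[1]: *** [Makefile:222: /tmp/.../libopenblas64_.so.so] Error 1"
assert MAKE_ERROR.search(line)
```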
Intel mpi comes with an installation of libfabric (which it needs as a
dependency). It can use other implementations of libfabric at runtime
though, so if you install a package that depends on `mpi` and
`libfabric`, you can specify `intel-mpi+external-libfabric` and ensure
that the Spack-built instance is used (both by `intel-mpi` and the
root).
Apply analogous change to intel-oneapi-mpi.
* Python tests: allow importing weirdly-named modules
e.g. with dashes in name
* SIP tests: allow importing weirdly-named modules
* Skip modules with invalid names
* Changes from review
* Update from review
* Update from review
* Cleanup
* Prevent additional properties from being in the answer set when reusing specs
fixes #27237
The mechanism to reuse concrete specs relies on imposing
the set of constraints stemming from the concrete spec
being reused.
We also need to prevent that other constraints get added
to this set.
See #25249 and https://github.com/spack/spack/pull/27159#issuecomment-958163679.
This adds `spack load --list` as an alias for `spack find --loaded`. The new command is
not as powerful as `spack find --loaded`, as you can't combine it with all the queries or
formats that `spack find` provides. However, it is more intuitively located in the command
structure in that it appears in the output of `spack load --help`.
The idea here is that people can use `spack load --list` for simple stuff but fall back to
`spack find --loaded` if they need more.
- add help to `spack load --list` that references `spack find`
- factor some parts of `spack find` out to be called from `spack load`
- add shell tests
- update docs
Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Richarda Butler <39577672+RikkiButler20@users.noreply.github.com>
Reformulate variant rules so that we minimize both
1. The number of non-default values being used
2. The number of default values not being used
This is crucial for MV variants where we may have
more than one default value
In our tests, we use concrete specs generated from mock packages,
which *only* occur as inputs to the solver. This fixes two problems:
1. We weren't previously adding facts to encode the necessary
`depends_on()` relationships, and specs were unsatisfiable on
reachability.
2. Our hash lookup for reconstructing the DAG does not
consider that a hash may have come from the inputs.
Concrete specs that are already installed or that come from a buildcache
may have compilers and variant settings that we do not recognize, but that
shouldn't prevent reuse (at least not until we have a more detailed compiler
model).
- [x] make sure compiler and variant consistency rules only apply to
built specs
- [x] don't validate concrete specs on input, either -- they're concrete
and we shouldn't apply today's rules to yesterday's build
In switching to hash facts for concrete specs, we lost the transitive facts
from dependencies. This was fine for solves, because they were implied by
the imposed constraints from every hash. However, for `spack diff`, we want
to see what the hashes mean, so we need another mode for `spec_clauses()` to
show that.
This adds a `expand_hashes` argument to `spec_clauses()` that allows us to
output *both* the hashes and their implications on dependencies. We use
this mode in `spack diff`.
- [x] Get rid of forgotten maximize directive.
- [x] Simplify variant handling
- [x] Fix bug in treatment of defaults on externals (don't count
non-default variants on externals against them)
Variants in concrete specs are "always" correct -- or at least we assume
them to be, because they were concretized before. Their variants need not match
the current version of the package.
Multi-valued variants previously maximized default values to handle
cases where the default contained two values, e.g.:
variant("foo", default="bar,baz")
This is because previously we were minimizing non-default values, and
`foo=bar`, `foo=baz`, and `foo=bar,baz` all had the same score, as
none of them had any "non-default" values.
This commit changes the approach and considers a non-default value
to be either a value set to something not default *or* the absence
of a default value from the set value. This allows multi- and
single-valued variants to be handled the same way, with the same
minimization criterion. It also means that the "best" value for every
optimization criterion is now zero, which allows us to make useful
assumptions about the optimization criteria.
Minimizing builds is tricky. We want a minimizing criterion because
we want to reuse the available installs, but we also want things that
have to be built to stick to *default preferences* from the package
and from the user. We therefore treat built specs differently and
apply a different set of optimization criteria to them. Spack's *first*
priority is to reuse what it can, but if it builds something, the built
specs will respect defaults and preferences.
This is implemented by bumping the priority of optimization criteria
for built specs -- so that they take precedence over the otherwise
topmost-priority criterion to reuse what is installed.
The scheme relies on all of our optimization criteria being minimizations.
That is, we need the case where all specs are reused to be better than
any built spec could be. Basically, if nothing is built, all the build
criteria are zero (the best possible) and the number of built packages
dominates. If something *has* to be built, it must be strictly worse
than full reuse, because:
1. it increases the number of built specs
2. it must have either zero or some positive number for all criteria
Our optimization criteria effectively sum into two buckets at once to
accomplish this. We use a `build_priority()` number to shift the
priority of optimization criteria for built specs higher.
The constraints in the `spack diff` test were very specific and assumed
a lot about the structure of what was being diffed. Relax them a bit to
make them more resilient to changes.
Make the first minimization conditional on whether `--reuse` is enabled in the solve.
If `--reuse` is not enabled, there will be nothing in the set to minimize and the
objective function (for this criterion) will be 0 for every answer set.
Many of the integrity constraints in the concretizer are there to restrict how solves are done, but
they ignore that past solves may have had different initial conditions. For example, for things
we're building, we want the allowed variants to be restricted to those currently in Spack packages,
but if we are reusing a concrete spec, we need to be flexible about names that may have existed in
old packages.
Similarly, restrictions around compatibility of OS's, compiler versions, compiler OS support, etc.
are really only about what is supported by the *current* set of compilers/build tools known to
Spack, not about what we may get from concrete specs.
- [x] restrict certain integrity constraints to only apply to packages that we need to build, and
omit concrete specs from consideration.
The OS logic in the concretizer is still the way it was in the first version.
Defaults are implemented in a fairly inflexible way using straight logic. Most
of the other sections have been reworked to leave these kinds of decisions to
optimization. This commit does that for OS's as well.
As with targets, we optimize for target matches. We also try to optimize for
OS matches between nodes. Additionally, this commit adds the notion of
"OS compatibility" where we allow for builds to depend on binaries for certain
other OS's. e.g, for macos, a bigsur build can depend on an already installed
(concrete) catalina build. One cool thing about this is that we can declare
additional compatible OS's later, e.g. CentOS and RHEL.
If we don't rename, Spack will fail with:
```
ImportError: cannot bootstrap the "clingo" Python module from spec "clingo-bootstrap@spack+python %gcc target=x86_64" due to the following failures:
'spack-install' raised ValueError: Invalid config scope: 'bootstrap'. Must be one of odict_keys(['_builtin', 'defaults', 'defaults/cray', 'bootstrap/cray', 'disable_modules', 'overrides-0'])
Please run `spack -d spec zlib` for more verbose error messages
```
in case bootstrapping from binaries fails and we are
falling back to bootstrapping from sources.
A common question from users has been how to model variants
that are new in new versions of a package, or variants that are
dependent on other variants. Our stock answer so far has been
an unsatisfying combination of "just have it do nothing in the old
version" and "tell Spack it conflicts".
This PR enables conditional variants, on any spec condition. The
syntax is straightforward, and matches that of previous features.
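For example, a package recipe fragment can now declare a variant that only exists under a condition (schematic; the surrounding package boilerplate and imports are omitted, and the values are illustrative):
```python
class Example(Package):
    variant("cuda", default=False, description="Build with CUDA support")
    # Only meaningful -- and only present -- when the spec has +cuda:
    variant("cuda_arch", values=any, default="none", when="+cuda",
            description="CUDA architecture to build for")
```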
* GnuPG: allow bootstrapping from buildcache and sources
* Add a test to bootstrap GnuPG from binaries
* Disable bootstrapping in tests
* Add e2e test to bootstrap GnuPG from sources on Ubuntu
* Add e2e test to bootstrap GnuPG on macOS
This PR adds error message sentinels to the clingo solve, attached to each of the rules that could fail a solve. The unsat core is then restricted to these messages, which makes the minimization problem tractable. Errors that can only be generated by a bug in the logic program or generating code are prefaced with "Internal error" to make clear to users that something has gone wrong on the Spack side of things.
* minimize unsat cores manually
* only errors messages are choices/assumptions for performance
* pre-check for unreachable nodes
* update tests for new error message
* make clingo concretization errors show up in cdash reports fully
* clingo: make import of clingo.ast parsing routines robust to clingo version
Older `clingo` has `parse_string`; newer `clingo` has `parse_files`. Make the
code work with both.
* make AST access functions backward-compatible with clingo 5.4.0
Clingo AST API has changed since 5.4.0; make some functions to help us
handle both versions of the AST.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
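The compatibility shim can be sketched like so (assuming the two entry points named above; the fallback wrapper is illustrative):
```python
try:
    from clingo.ast import parse_files        # newer clingo
except ImportError:
    from clingo.ast import parse_string       # older clingo

    def parse_files(paths, callback):
        # Emulate parse_files on top of parse_string for older releases.
        for path in paths:
            with open(path) as f:
                parse_string(f.read(), callback)
```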
After #26608 I got a report about missing rpaths when building a
downstream package independently using a spack-installed toolchain
(@tmdelellis). This occurred because the spack-installed libraries were
being linked into the downstream app, but the rpaths were not being
manually added. Prior to #26608 autotools-installed libs would retain
their hard-coded path and would thus propagate their link information
into the downstream library on mac.
We could solve this problem *if* the mac linker (ld) respected
`LD_RUN_PATH` like it does on GNU systems, i.e. adding `rpath` entries
to each item in the environment variable. However on mac we would have
to manually add rpaths either using spack's compiler wrapper scripts or
manually (e.g. using `CMAKE_BUILD_RPATH` and pointing to the libraries of
all the autotools-installed spack libraries).
The easier and safer thing to do for now is to simply stop changing the
dylib IDs.
The `--generic` argument allows printing the best generic target for the
current machine. This can be quite handy when wanting to find the
generic architecture to use when building a shared software stack for
multiple machines.
This PR adds a "spack tags" command to output package tags or
(available) packages with those tags. It also ensures each package
is listed in the tag cache ONLY ONCE per tag.