This commit reverts the GCS fetch strategy to before commit:
d759612523
The previous commit added some S3 syntax to handle connections, but
added it to the GCS fetch strategy in a way that prevented GCS from
working.
* rocmcc compiler: initial commit based on aocc and clang
Co-authored-by: luker <luke.roskop@hpe.com>
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
The status displayed in the terminal title could be wrong when doing
distributed builds. For instance, doing `spack install glib` in two
different terminals could lead to the current package being reported as
`40/29` due to the way Spack handles retrying locks.
Work around this by keeping track of the package IDs that were already
encountered to avoid counting packages twice.
See https://github.com/spack/spack/pull/28468/files#r809156986
If we exit before generating the:
error("Dependencies must have compatible OS's with their dependents").
...
facts, we'll output a problem that is effectively
different from the one solved by clingo.
* cmd/checksum: prefer url matching url_from_version
This is a minimal change toward getting the right archive from places
like github. The heuristic is:
* if an archive url exists, take its version
* generate a url from the package with `pkg.url_from_version`
* if they match, stop considering other URLs for this version
* otherwise, continue replacing the url for the version
I doubt this will always work, but it should address a variety of
versions of this bug. A good test right now is `spack checksum gh`,
which checksums macOS binaries without this change, and the correct
source packages with it.
fixes #15985
related to #14129
related to #13940
* add heuristics to help create as well
Since create can't rely on an existing package, this commit adds another
pair of heuristics:
1. if the current version is a specifically listed archive, don't
replace it
2. if the current url matches the result of applying
`spack.url.substitute_version(a, ver)` for any a in archive_urls,
prefer it and don't replace it
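In rough terms, the pair of heuristics might look like this (a hedged sketch; `keep_current_url` is a hypothetical helper, and only `spack.url.substitute_version` is named above):
```python
import spack.url

def keep_current_url(url, version, archive_urls):
    """Hypothetical helper implementing the two heuristics above."""
    # 1. the current version is a specifically listed archive
    if url in archive_urls:
        return True
    # 2. the current url matches substitute_version(a, version) for some a
    return any(
        spack.url.substitute_version(a, version) == url for a in archive_urls
    )
```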
fixes #13940
* clean up style and a lingering debug import
* ok flake8, you got me
* document reference_package argument
* Update lib/spack/spack/util/web.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* try to appease sphinx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
We can see what is in the bootstrap store with `spack find -b`, and we can clean it with `spack
clean -b`, but we can't do much else with it, and if there are bootstrap issues they can be hard to
debug.
We already have `spack --mock`, which allows you to swap in the mock packages from the command
line. This PR introduces `spack -b` / `spack --bootstrap`, which runs all of spack with
`ensure_bootstrap_configuration()` set. This means that you can run `spack -b find`, `spack -b
install`, `spack -b spec`, etc. to see what *would* happen with bootstrap configuration, to remove
specific bootstrap packages, etc. This will hopefully make developers' lives easier as they deal
with bootstrap packages.
This PR also uses a `nullcontext` context manager. `nullcontext` has been implemented in several
other places in Spack, and this PR consolidates them to `llnl.util.lang`, with a note that we can
delete the function if we ever require a new enough Python.
- [x] introduce `spack --bootstrap` option
- [x] consolidated all `nullcontext` usages to `llnl.util.lang`
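For reference, a minimal sketch of what the consolidated `nullcontext` in `llnl.util.lang` might look like (the actual implementation may differ):
```python
from contextlib import contextmanager

@contextmanager
def nullcontext(*args, **kwargs):
    """Do-nothing context manager, standing in for contextlib.nullcontext,
    which only exists in new enough Pythons."""
    yield
```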
Some "concrete" versions on the command line, e.g. `qt@5` are really
meant to satisfy some actual concrete version from a package. We should
only assume the user is introducing a new, unknown version on the CLI
if we, well, don't know of any version that satisfies the user's
request. So, if we know about `5.11.1` and `5.11.3` and they ask for
`5.11.2`, we'd ask the solver to consider `5.11.2` as a solution. If
they just ask for `5`, though, `5.11.1` or `5.11.3` are fine solutions,
as they satisfy `@5`, so use them.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
See https://github.com/spack/spack/issues/25353#issuecomment-1041868116
This commit changes the default behavior of
```
$ spack external find
```
from searching all the possible packages Spack knows about to searching
only for the ones tagged as being a "build-tool".
It also introduces a `--all` option to restore the old behavior.
Prefer `sw_vers` to `platform.mac_ver`. In an anaconda3 installation, for example, the latter reports 10.16 on Monterey -- I think this is affected by how and where the Python instance was built.
Use MACOSX_DEPLOYMENT_TARGET if present to override the operating system choice.
It will be useful for metrics gathering and possibly debugging to
have this environment variable available in the runner pods that
do the actual rebuilds.
Since Spack does not install external packages, this commit skips them by
default when running stand-alone tests. The assumption is that such packages
have likely undergone an acceptance test process.
However, the tests can be run against installed externals using
```
% spack test run --externals ...
```
fixes #28260
Since we iterate over different variants from many packages, the variant
values may have types which are not comparable, which causes errors
at runtime. This is not a real issue though, since we don't need the facts
to be ordered. Thus, to avoid needless sorting, the sorted function has
been removed, and a comment has been added to tip off any developer who
might need to inspect these clauses for debugging to add back sorting
on the first two items only.
It's difficult to add a test for this, since the error depends on
whether Python's sorting algorithm ever needs to compare the third
value of a tuple being ordered.
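To illustrate the failure mode (plain Python, not Spack code): tuple comparison is lexicographic and lazy, so the error only surfaces when the earlier items tie:
```python
# third items are only compared when the first two are equal
sorted([(1, "a", None), (2, "b", 3)])  # fine: None and 3 never compared
sorted([(1, "a", None), (1, "a", 3)])  # TypeError: '<' not supported
                                       # between 'int' and 'NoneType'
```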
* extensions: allow multiple "extends" directives
This will allow multiple extends directives in a package as long as only one of
them is selected as a dependency in the concrete spec.
* document the option to have multiple extends
Reuse previously was a very invasive change that required parameters to be added to all
the methods that called `concretize()` on a `Spec` object. With the addition of
concretizer configuration, we can use the config system to simplify this argument
passing and keep the code cleaner.
We decided that concretizer config options should be read at `Solver` instantiation
time, and if config changes between instantiation of a particular solver and
`solve()` invocation, the `Solver` should use the settings from `__init__()`.
- [x] remove `reuse` keyword argument from most concretize functions
- [x] refactor usages to use `spack.config.override("concretizer:reuse", True)`
- [x] rework argument passing in `Solver` so that parameters are set from config
at instantiation time
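A hedged sketch of the resulting pattern (the config override is quoted from the list above; the call sites are illustrative):
```python
import spack.config

# instead of passing reuse=True through every concretize() call,
# set the config and let the Solver read it at instantiation time:
with spack.config.override("concretizer:reuse", True):
    spec.concretize()  # any Solver created in here sees reuse=True
```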
`--reuse` was previously handled individually by each command that
needed it. We are growing more concretization options, and they'll
need their own section for commands that support them.
Now there are two concretization options:
* `--reuse`: Attempt to reuse packages from installs and buildcaches.
* `--fresh`: Opposite of reuse -- traditional spack install.
To handle these, this PR adds a `ConfigSetAction` for `argparse`, so
that you can write argparse code like this:
```
subgroup.add_argument(
'--reuse', action=ConfigSetAction, dest="concretizer:reuse",
const=True, default=None,
help='reuse installed dependencies/buildcaches when possible'
)
```
With this, you don't need to add logic to pull the argument out and
handle it; the `ConfigSetAction` just does it for you. This can probably
be used to clean up some other commands later, as well.
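For reference, a minimal sketch of how such an action might be implemented (hedged; the real `ConfigSetAction` may differ in details):
```python
import argparse
import spack.config

class ConfigSetAction(argparse.Action):
    """argparse Action that writes `const` into the Spack config path
    stored in `dest` (e.g. "concretizer:reuse") when the flag is given."""

    def __init__(self, option_strings, dest, const=None, default=None, **kwargs):
        super(ConfigSetAction, self).__init__(
            option_strings, dest, nargs=0, const=const, default=default, **kwargs
        )

    def __call__(self, parser, namespace, values, option_string=None):
        spack.config.set(self.dest, self.const, scope="command_line")
```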
Code that was previously passing `reuse=True` around everywhere has
been refactored to use config, and config is set from the CLI using
a new `add_concretizer_args()` function in `spack.cmd.common.arguments`.
- [x] Add `ConfigSetAction` to simplify concretizer config on the CLI
- [x] Refactor code so that it does not pass `reuse=True` to every function.
- [x] Refactor commands to use `add_concretizer_args()` and to pass
concretizer config using the config system.
Config scopes were different for `config` and `mutable_config`,
and `mutable_config` did not have a command line scope.
- [x] Fix by consolidating the creation logic for the two fixtures.
The concretizer is going to grow to have many more configuration options,
and we really need some structured config for that.
* We have the `config:concretizer` option that chooses the solver,
but extending that is awkward (we'd need to replace a string with
a `dict`) and the solver choice will be deprecated eventually.
* We have the `concretization` option in environments, but it's
not a top-level config section -- it's just for environments,
and it also only admits a string right now.
To avoid overlapping with either of these and to allow the most
extensibility in the future, this adds a new `concretizer` config
section that can be used in and outside of environments. There
is only one option right now: `reuse`. This can expand to include
other options later.
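Once it is wired up, a hedged sketch of how code would consume the new section (using Spack's standard config accessor):
```python
import spack.config

# "concretizer:reuse" is the only option in the new section for now
reuse = spack.config.get("concretizer:reuse", default=False)
```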
Likely, we will soon deprecate `config:concretizer` and warn when
the user doesn't use `clingo`, and we will eventually (sometime later)
move the `together` / `separately` options from `concretization` into
the top-level `concretizer` section.
This commit just adds the new section and schema. Fully wiring it
up is TBD.
The solver has a lot of configuration associated with it. Rather
than adding arguments to everything, we should encapsulate that
in a class. This is the start of that work; it replaces `solve()`
and its kwargs with a class and properties.
* Add 'stable' to the list of infinity version names.
Rename libunwind 1.5-head to 1.5-stable.
* Add stable to the infinite version list in packaging_guide.rst.
* core: Make platform environment an instance not class method
In preparation for accessing data constructed in __init__.
* macos: set consistent macosx deployment target
This should silence numerous warnings from mixed gcc/macos toolchains.
* perl: prevent too-new deployment target version
```
*** Unexpected MACOSX_DEPLOYMENT_TARGET=11
***
*** Please either set it to a valid macOS version number (e.g., 10.15) or to empty.
```
* Stylin'
* Add deployment target overrides to failing autoconf packages
* Move configure workaround to base autoconf package
This reverts commit 3c119eaf8b4fb37c943d503beacf5ad2aa513d4c.
* Stylin'
* macos: add utility functions for SDK
These aren't yet used but should probably be added to spack debug
report.
* Remove node_target_satisfies/3 in favor of target_satisfies/2
When emitting input facts we don't need to couple target with
packages, but we can emit fewer facts independently and let
the grounder combine them.
* Remove compiler_version_satisfies/4 in favor of compiler_version_satisfies/3
When emitting input facts we don't need to couple compilers with
packages, but we can emit fewer facts independently and let
the grounder combine them.
* Introduce heuristic in the ASP-program
With a heuristic we can drive clingo to make better
initial guesses, which lead to fewer choices and
conflicts in the overall solve.
* Fix reindex with uninstalled deps
When a prefix of a dep is removed, and the db is reindexed, it is added
through the dependent, but until now it incorrectly listed the spec as
'installed'.
There was also some questionable behavior in the db when the same spec
was added multiple times: it would always be marked installed.
* Always reserve path
* Only add installed spec's prefixes to install prefixes set
* Improve warning, and ensure ensure only ensures
* test: reindex with every file system remnant removed except for the old index; it should give a database with nothing installed, including records with installed==False, external==False, ref_count==0, explicit==True, and these should be removable from the database
* stacks: add regression tests for matrix expansion
* Use constrain semantics to construct spec lists for stacks
* Fix semantics for constraining an anonymous spec. Add tests
* Add sticky variants
* Add unit tests for sticky variants
* Add documentation for sticky variants
* Revert "Revert 19736 because conflicts are avoided by clingo by default (#26721)"
This reverts commit 33ef7d57c1.
* Add stickiness to "allow-unsupported-compiler"
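For illustration, a sticky variant in a package recipe might look like this (a hedged sketch; the package context is hypothetical, `sticky=True` is the new directive argument):
```python
class Example(Package):
    # sticky=True: the concretizer may never flip this variant away from
    # its default to find a solution; only an explicit request can.
    variant(
        "allow-unsupported-compiler",
        default=False,
        sticky=True,
        description="Allow compilers not officially supported",
    )
```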
`spack license update-copyright-year` was updating license headers but not the MIT
license file. Make it do that and add a test.
Also simplify the way we bump the latest copyright year so that we only need to
update it in one place.
* Use pip to bootstrap pip
* Bootstrap wheel from source
* Update PythonPackage to install using pip
* Update several packages
* Add wheel as base class dep
* Build phase no longer exists
* Add py-poetry package, fix py-flit-core bootstrapping
* Fix isort build
* Clean up many more packages
* Remove unused import
* Fix unit tests
* Don't directly run setup.py
* Typo fix
* Remove unused imports
* Fix issues caught by CI
* Remove custom setup.py file handling
* Use PythonPackage for installing wheels
* Remove custom phases in PythonPackages
* Remove <phase>_args methods
* Remove unused import
* Fix various packages
* Try to test Python packages directly in CI
* Actually run the pipeline
* Fix more packages
* Fix mappings, fix packages
* Fix dep version
* Work around bug in concretizer
* Various concretization fixes
* Fix gitlab yaml, packages
* Fix typo in gitlab yaml
* Skip more packages that fail to concretize
* Fix? jupyter ecosystem concretization issues
* Solve Jupyter concretization issues
* Prevent duplicate entries in PYTHONPATH
* Skip fenics-dolfinx
* Build fewer Python packages
* Fix missing npm dep
* Specify image
* More package fixes
* Add backends for every from-source package
* Fix version arg
* Remove GitLab CI stuff, add py-installer package
* Remove test deps, re-add install_options
* Function declaration syntax fix
* More build fixes
* Update spack create template
* Update PythonPackage documentation
* Fix documentation build
* Fix unit tests
* Remove pip flag added only in newer pip
* flux: add explicit dependency on jsonschema
* Update packages that have been added since this was branched off of develop
* Move Python 2 deprecation to a separate PR
* py-neurolab: add build dep on py-setuptools
* Use wheels for pip/wheel
* Allow use of pre-installed pip for external Python
* pip -> python -m pip
* Use python -m pip for all packages
* Fix py-wrapt
* Add both platlib and purelib to PYTHONPATH
* py-pyyaml: setuptools is needed for all versions
* py-pyyaml: link flags aren't needed
* Appease spack audit packages
* Some build backend is required for all versions, distutils -> setuptools
* Correctly handle different setup.py filename
* Use wheels for py-tomli to avoid circular dep on py-flit-core
* Fix busco installation procedure
* Clarify things in spack create template
* Test other Python build backends
* Undo changes to busco
* Various fixes
* Don't test other backends
When `spack compiler list` is run without being restricted to a
particular scope, and no compilers are found, say that none are
available, and hint that the user should run `spack compiler find` to
auto-detect compilers.
* Improve docs
* Check if stdin is a tty
* add a test
Many packages implement logic at the class level to handle complex dependencies and
conflicts. Others have started using `with when("@1.0"):` blocks since we added that
capability. The loops and other control logic can cause some pure directive logic not to
be removed by our package hashing logic -- and in many cases that's a lot of code that
will cause unnecessary rebuilds.
This commit changes the unparser so that it will descend into these blocks. Specifically:
1. Descend into loops, if statements, and with blocks at the class level.
2. Don't look inside function definitions (in or outside a class).
3. Don't look at nested class definitions (they don't have directives)
4. Add logic to *remove* empty loops/with blocks/if statements if all directives
in them were removed.
This allows our package hash to ignore a lot of pure metadata that it was not ignoring
before, and makes it less sensitive.
In addition, we add `maintainers` and `tags` to the list of metadata attributes that
Spack should remove from packages when constructing canonical source for a package
hash.
- [x] Make unparser handle if/for/while/with at class level.
- [x] Add tests for control logic removal.
- [x] Add a test to ensure that all packages are not only unparseable, but also
that their canonical source is still compilable. This is a test for
our control logic removal.
- [x] Add another unparse test package that has complex logic.
These are the unit tests from astunparse, converted to pytest, with a few backports from
upstream cpython. These should hopefully keep `unparser.py` well covered as we change it.
We can't tell `print(a, b, c)` and `print((a, b, c))` apart -- both of these expressions
generate different ASTs in Python 2 and Python 3. However, we can decide that we don't
care. This commit treats both of them the same when `py_ver_consistent` is set with
`unparse()`.
This means that the package hash won't notice changes from printing a tuple to printing
multiple values, but we don't care, because this is extremely unlikely to affect the build.
More than likely this is just an error message for the user of the package.
- [x] treat `print(a, b, c)` and `print((a, b, c))` the same in py2 and py3
- [x] add another package parsing test -- legion -- that exercises this feature
To make it easier to see how package hashes change and how they are computed, add two
commands:
* `spack pkg source <spec>`: dumps source code for a package to the terminal
* `spack pkg source --canonical <spec>`: dumps canonicalized source code for a
package to the terminal. It strips comments, directives, and known-unused
multimethods from the package. It is used to generate package hashes.
* `spack pkg hash <spec>`: This gives the package hash for a particular spec.
It is generated from the canonical source code for the spec.
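Typical invocations (the spec `zlib` is just an example):
```
$ spack pkg source zlib              # package.py as written
$ spack pkg source --canonical zlib  # canonical source used for hashing
$ spack pkg hash zlib                # the package hash itself
```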
- [x] add `spack pkg source` and `spack pkg hash`
- [x] add tests
- [x] fix bug in multimethod resolution with boolean `@when` values
Co-authored-by: Greg Becker <becker33@llnl.gov>
We are planning to switch to using full hashes for Spack specs, which means that the
package hash will be included in the deployment descriptor. This means we need a more
robust package hash than simply dumping the `repr` of the AST.
The AST repr that we previously used for package content is unreliable because it can
vary between python versions (Python's AST actually changes fairly frequently).
- [x] change `package_hash`, `package_ast`, and `canonical_source` to accept a string for
alternate source instead of a filename.
- [x] consolidate package hash tests in `test/util/package_hash.py`.
- [x] remove old `package_content` method.
- [x] make `package_hash` do what `canonical_source_hash` was doing before.
- [x] modify `content_hash` in `package.py` to use the new `package_hash` function.
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Our package hash is supposed to be consistent from python version to python version.
Test this by adding some known unparse inputs and ensuring that they always have the
same canonical hash. This test relies on the fact that we run Spack's unit tests
across many Python versions. We can't compute hashes for several Python versions
within the same test run, so we precompute the hashes and check them in CI.
Package hashing was not properly handling multimethods. In particular, it was removing
any functions that had decorators from the output, so we'd miss things like
`@run_after("install")`, etc.
There were also problems with handling multiple `@when`'s in a single file, and with
handling `@when` functions that *had* to be evaluated dynamically.
- [x] Rework static `@when` resolution for package hash
- [x] Ensure that functions with decorators are not removed from output
- [x] Add tests for many different @when scenarios (multiple @when's,
combining with other decorators, default/no default, etc.)
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Previously we used `directives.__all__` to get directive names, but it wasn't
quite right -- it included `DirectiveMeta`, etc. It's not wrong, but it's also
not the clearest way to do this.
- [x] Refactor `@directive` to track names in `directive_names` global
- [x] Rename `_directive_names` to `_directive_dict_names` in `DirectiveMeta`
- [x] Add a test for `RemoveDirectives`
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Some packages use top-level unassigned strings instead of comments, either just after a
docstring or in the body somewhere else. Ignore those strings because they have no
effect on package behavior.
- [x] adjust RemoveDocstrings to remove all free-standing strings.
- [x] move tests for util/package_hash.py to test/util/package_hash.py
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Python 2 and 3 represent string literals differently in the AST. Python 2 requires '\x'
literals, and Python 3 source is always unicode, and allows unicode to be written
directly. These also unparse differently by default.
- [x] modify unparser to write both out the way `repr` would in Python 2 when
`py_ver_consistent` is provided.
Backport operator precedence algorithm from here:
397b96f6d7
This eliminates unnecessary parentheses from our unparsed output and makes Spack's unparser
consistent with the one in upstream Python 3.9+, with one exception.
Our unparser normalizes argument order when `py_ver_consistent` is set, so that star arguments
in function calls come last. We have to do this because Python 2's AST doesn't have information
about their actual order.
If we ever support only Python 3.9 and higher, we can easily switch over to `ast.unparse`, as
the unparsing is consistent except for this detail (modulo future changes to `ast.unparse`)
Previously, there were differences in the unparsed code for Python 2.7 and for 3.5-3.10.
This makes unparsed code the same across these Python versions by:
1. Ensuring there are no spaces between unary operators and
their operands.
2. Ensuring that *args and **kwargs are always the last arguments,
regardless of the python version.
3. Always unparsing print as a function.
4. Not putting an extra comma after Python 2 class definitions.
Without these changes, the same source can generate different code for different
Python versions, depending on subtle AST differences.
One place where single source will generate an inconsistent AST is with
multi-argument print statements, e.g.:
```
print("foo", "bar", "baz")
```
In Python 2, this prints a tuple; in Python 3, it is the print function with
multiple arguments. Use `from __future__ import print_function` to avoid
this inconsistency.
Add `astunparse` as `spack_astunparse`. This library unparses Python ASTs and we're
adding it under our own name so that we can make modifications to it.
Ultimately this will be used to make `package_hash` consistent across Python versions.
* Python: set default config_vars
* Add missing commas
* dso_suffix not present for some reason
* Remove use of default_site_packages_dir
* Use config_vars during bootstrapping too
* Catch more errors
* Fix unit tests
* Catch more errors
* Update docstring
This reports the kernel version (vs. the distro version) on Linux and
returns a valid Version (stripping characters like '+' which may be
present for custom-built kernels).
Add a new check to `spack audit` to scan and verify that version constraints may be satisfied
Modifications:
- [x] Add a new check to `spack audit` to scan and verify that version constraints may be satisfied by some version declared in the built-in repository
- [x] Fix issues found by CI
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This command pokes the environment, Python interpreter
and bootstrap store to check if dependencies needed by
Spack are available.
If any are missing, it shows a comprehensible message.
* locks: allow locks to work under high contention
This is a bug found by Harshitha Menon.
The `lock=None` line shouldn't be a release but should be
```
return (lock_type, None)
```
to inform the caller it couldn't get the lock type requested without
disturbing the existing lock object in the database. There were also a
couple of bugs due to taking write locks at the beginning without any
checking or release, and not releasing read locks before requeueing.
This version no longer gives me read upgrade to write errors, even
running 200 instances on one box.
* Change lock in check_deps_status to read, release if not installed;
not sure why this was ever write, but read definitely is more
appropriate here, and the read lock is only held out of the scope if
the package is installed.
* Release read lock before requeueing to reduce chance of livelock, the
timeout that caused the original issue now happens in roughly 3 of 200
workers instead of 199 on average.
With this commit:
```
$ spack env activate --temp
$ spack install zlib
==> All of the packages are already installed
==> Updating view at /tmp/spack-faiirgmt/.spack-env/view
$ spack install zlib
==> All of the packages are already installed
```
Before this PR:
```
$ spack env activate --temp
$ spack install zlib
==> All of the packages are already installed
$ spack install zlib
==> All of the packages are already installed
```
No view was generated
This commit introduces the command
spack module tcl setdefault <package>
similar to the one already available for lmod
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
When running `spack install --log-format junit|cdash ...`, install
errors were ignored. This made spack continue building dependents of
failed install, ignoring `--fail-fast`, and exit 0 at the end.
Fixes #27652
Ensure that a mirror's `to_dict` function returns a `syaml_dict` object for all
code paths.
Switch to using the `.get` function for accessing the potential information from
the S3 mirror objects. If the key is not there, it will gracefully return
None instead of failing with a KeyError.
Additionally, check that the connection object is a dictionary before trying
to "get" from it.
Add a test for the capturing of the new S3 information.
Updates to installer.py did not account for spack monitor, so as currently implemented
there are three cases of failure that spack monitor will not account for. To fix this we add
additional hooks, including one for cancellation, and perform a custom action on
concretization failure.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
The latest version of `jsonschema` fails if we're not specific about which schema draft
specification we're using. Update all of them to use the latest one (draft-07).
Our `jsonschema` external won't support Python 3.10, so we need to upgrade it.
It currently generates this warning:
lib/spack/external/jsonschema/compat.py:6: DeprecationWarning: Using or importing the ABCs
from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and
in 3.10 it will stop working
This upgrades `jsonschema` to 3.2.0, the latest version with support for Python 2.7. The next
version after this (4.0.0) drops support for 2.7 and 3.6, so we'll have to wait to upgrade to it.
Dependencies have been added in prior commits.
spack monitor now requires authentication, as each build must be associated
with a user, so it no longer makes sense to allow the --monitor-no-auth flag;
this commit removes it.
This PR also slightly changes the behavior in ci_rebuild().
We now still attempt to submit `spack install` results to CDash
even if the initial registration failed due to connection issues.
This commit follows in the spirit of #24299. We do not want `spack install`
to exit with a non-zero status when something goes wrong while attempting to
report results to CDash.
This PR is meant to move code with "business logic" from `spack.cmd.buildcache` to appropriate core modules[^1].
Modifications:
- [x] Add `spack.binary_distribution.push` to create a binary package from a spec and push it to a mirror
- [x] Add `spack.binary_distribution.install_root_node` to install only the root node of a concrete spec from a buildcache (may check the sha256 sum if it is passed in as input)
- [x] Add `spack.binary_distribution.install_single_spec` to install a single concrete spec from a buildcache
- [x] Add `spack.binary_distribution.download_single_spec` to download a single concrete spec from a buildcache to a local destination
- [x] Add `Spec.from_specfile` that construct a spec given the path of a JSON or YAML spec file
- [x] Removed logic from `spack.cmd.buildcache`
- [x] Removed calls to `spack.cmd.buildcache` in `spack.bootstrap`
- [x] Deprecate `spack buildcache copy` with a message that says it will be removed in v0.19.0
[^1]: The rationale is that commands should be lightweight wrappers of the core API, since that helps with both testing and scripting (easier mocking and no need to invoke `SpackCommand`s in a script).
After this PR an error in a single package while detecting
external software won't abort the entire procedure.
The error is reported to screen as a warning.
Remove a try/catch for an error with no handling. If the affected
code doesn't execute successfully, then the associated variable
is undefined and another (more-obscure) error occurs shortly after.
Remove a custom bootstrapping procedure to
use spack.bootstrap instead
Modifications:
* Reference count the bootstrap context manager
* Avoid SpackCommand to make the bootstrapping
procedure more transparent
* Put back requirement on patchelf being in PATH for unit tests
* Add an e2e test to check bootstrapping patchelf
I think this test should be removed, but while it stays, it should at
least follow the symlink, because it fails for me if I let spack build
patchelf and have a symlink in a view.
Modifications:
- [x] Removed `centos:6` unit test, adjusted vermin checks
- [x] Removed backport of `collections.OrderedDict`
- [x] Removed backport of `functools.total_ordering`
- [x] Removed Python 2.6 specific skip markers in unit tests
- [x] Fixed a few minor Python 2.6 related TODOs in code
Updating the vendored dependencies will be done in separate PRs
* Make CUDA and ROCm architecture conditional
fixes #14337
The variants to specify which architecture to use
for CUDA and ROCm are now conditional on +cuda and
+rocm respectively.
* cp2k: make all CUDA related variants conditional on +cuda
* Add connection specification to mirror creation
This allows each mirror to contain information about the credentials
used to access it.
Update command and tests based on comments
Switch to only "long form" flags for the s3 connection information.
Use the "any" function instead of checking for an empty list when looking
for s3 connection information.
Split test to use the access token separately from the access id and key.
Use long flag form in test.
Add endpoint_url to available S3 options.
Extend the special parameters for an S3 mirror to accept the
endpoint_url parameter.
Add a test.
* Add connection information per URL not per mirror
Expand the mirror-based connection information to be per-URL.
This will allow a user to specify different S3 connection information
for both the fetch and the push URLs.
Add a parameter for "profile", another way of storing the id/secret pair.
* Switch from "access_profile" to "profile"
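As an example, the long-form flags might be used like this (hedged; flag names as I understand them from this PR):
```
$ spack mirror add --s3-access-key-id <id> --s3-access-key-secret <secret> \
    --s3-endpoint-url https://s3.example.com mymirror s3://mybucket/mirror
```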
Remove the "get_executable" function from the
spack.bootstrap module. Now "flake8", "isort",
"mypy" and "black" will use the same
bootstrapping method as GnuPG.
Currently Spack vendors `pytest` at a version which is three major
versions behind the latest (3.2.5 vs. 6.2.4). We do that since v3.2.5
is the latest version supporting Python 2.6. Remaining so far
behind the currently supported versions, though, might introduce
some incompatibilities and is surely technical debt.
This PR modifies Spack to:
- Use the vendored `pytest@3.2.5` only as a fallback solution,
if the Python interpreter used for Spack doesn't provide a newer one
- Be able to parse `pytest --collect-only` in all the different output
formats from v3.2.5 to v6.2.4 and use it consistently for `spack unit-test --list-*`
- Update the unit tests in GitHub Actions to use a more recent `pytest` version
This type of error is skipped:
make[1]: *** [Makefile:222: /tmp/user/spack-stage/.../spack-src/usr/lib/julia/libopenblas64_.so.so] Error 1
but it's useful to have it, especially when a package sets a variable
incorrectly in makefiles
Intel MPI comes with an installation of libfabric (which it needs as a
dependency). It can use other implementations of libfabric at runtime
though, so if you install a package that depends on `mpi` and
`libfabric`, you can specify `intel-mpi+external-libfabric` and ensure
that the Spack-built instance is used (both by `intel-mpi` and the
root).
Apply analogous change to intel-oneapi-mpi.
* Python tests: allow importing weirdly-named modules
e.g. with dashes in name
* SIP tests: allow importing weirdly-named modules
* Skip modules with invalid names
* Changes from review
* Update from review
* Update from review
* Cleanup
* Prevent additional properties to be in the answer set when reusing specs
fixes #27237
The mechanism to reuse concrete specs relies on imposing
the set of constraints stemming from the concrete spec
being reused.
We also need to prevent other constraints from being added
to this set.
See #25249 and https://github.com/spack/spack/pull/27159#issuecomment-958163679.
This adds `spack load --list` as an alias for `spack find --loaded`. The new command is
not as powerful as `spack find --loaded`, as you can't combine it with all the queries or
formats that `spack find` provides. However, it is more intuitively located in the command
structure in that it appears in the output of `spack load --help`.
The idea here is that people can use `spack load --list` for simple stuff but fall back to
`spack find --loaded` if they need more.
- add help to `spack load --list` that references `spack find`
- factor some parts of `spack find` out to be called from `spack load`
- add shell tests
- update docs
Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Richarda Butler <39577672+RikkiButler20@users.noreply.github.com>
Reformulate variant rules so that we minimize both:
1. The number of non-default values being used
2. The number of default values not being used
This is crucial for MV variants where we may have
more than one default value
In our tests, we use concrete specs generated from mock packages,
which *only* occur as inputs to the solver. This fixes two problems:
1. We weren't previously adding facts to encode the necessary
`depends_on()` relationships, and specs were unsatisfiable on
reachability.
2. Our hash lookup for reconstructing the DAG does not
consider that a hash may have come from the inputs.
Concrete specs that are already installed or that come from a buildcache
may have compilers and variant settings that we do not recognize, but that
shouldn't prevent reuse (at least not until we have a more detailed compiler
model).
- [x] make sure compiler and variant consistency rules only apply to
built specs
- [x] don't validate concrete specs on input, either -- they're concrete
and we shouldn't apply today's rules to yesterday's build
In switching to hash facts for concrete specs, we lost the transitive facts
from dependencies. This was fine for solves, because they were implied by
the imposed constraints from every hash. However, for `spack diff`, we want
to see what the hashes mean, so we need another mode for `spec_clauses()` to
show that.
This adds an `expand_hashes` argument to `spec_clauses()` that allows us to
output *both* the hashes and their implications on dependencies. We use
this mode in `spack diff`.
- [x] Get rid of forgotten maximize directive.
- [x] Simplify variant handling
- [x] Fix bug in treatment of defaults on externals (don't count
non-default variants on externals against them)
Variants in concrete specs are "always" correct -- or at least we assume
them to be, because they were concretized before. Their variants need not match
the current version of the package.
Multi-valued variants previously maximized default values to handle
cases where the default contained two values, e.g.:
variant("foo", default="bar,baz")
This is because previously we were minimizing non-default values, and
`foo=bar`, `foo=baz`, and `foo=bar,baz` all had the same score, as
none of them had any "non-default" values.
This commit changes the approach and considers a non-default value
to be either a value set to something other than the default *or* the
absence of a default value from the set of values. This allows multi- and
single-valued variants to be handled the same way, with the same
minimization criterion. It also means that the "best" value for every
optimization criterion is now zero, which allows us to make useful
assumptions about the optimization criteria.
Minimizing builds is tricky. We want a minimizing criterion because
we want to reuse the available installs, but we also want things that
have to be built to stick to *default preferences* from the package
and from the user. We therefore treat built specs differently and
apply a different set of optimization criteria to them. Spack's *first*
priority is to reuse what it can, but if it builds something, the built
specs will respect defaults and preferences.
This is implemented by bumping the priority of optimization criteria
for built specs -- so that they take precedence over the otherwise
topmost-priority criterion to reuse what is installed.
The scheme relies on all of our optimization criteria being minimizations.
That is, we need the case where all specs are reused to be better than
any built spec could be. Basically, if nothing is built, all the build
criteria are zero (the best possible) and the number of built packages
dominates. If something *has* to be built, it must be strictly worse
than full reuse, because:
1. it increases the number of built specs
2. it must have either zero or some positive number for all criteria
Our optimization criteria effectively sum into two buckets at once to
accomplish this. We use a `build_priority()` number to shift the
priority of optimization criteria for built specs higher.
The constraints in the `spack diff` test were very specific and assumed
a lot about the structure of what was being diffed. Relax them a bit to
make them more resilient to changes.
Make the first minimization conditional on whether `--reuse` is enabled in the solve.
If `--reuse` is not enabled, there will be nothing in the set to minimize and the
objective function (for this criterion) will be 0 for every answer set.
Many of the integrity constraints in the concretizer are there to restrict how solves are done, but
they ignore that past solves may have had different initial conditions. For example, for things
we're building, we want the allowed variants to be restricted to those currently in Spack packages,
but if we are reusing a concrete spec, we need to be flexible about names that may have existed in
old packages.
Similarly, restrictions around compatibility of OS's, compiler versions, compiler OS support, etc.
are really only about what is supported by the *current* set of compilers/build tools known to
Spack, not about what we may get from concrete specs.
- [x] restrict certain integrity constraints to only apply to packages that we need to build, and
omit concrete specs from consideration.
The OS logic in the concretizer is still the way it was in the first version.
Defaults are implemented in a fairly inflexible way using straight logic. Most
of the other sections have been reworked to leave these kinds of decisions to
optimization. This commit does that for OS's as well.
As with targets, we optimize for target matches. We also try to optimize for
OS matches between nodes. Additionally, this commit adds the notion of
"OS compatibility" where we allow for builds to depend on binaries for certain
other OS's. e.g, for macos, a bigsur build can depend on an already installed
(concrete) catalina build. One cool thing about this is that we can declare
additional compatible OS's later, e.g. CentOS and RHEL.
If we don't rename, Spack will fail with:
```
ImportError: cannot bootstrap the "clingo" Python module from spec "clingo-bootstrap@spack+python %gcc target=x86_64" due to the following failures:
'spack-install' raised ValueError: Invalid config scope: 'bootstrap'. Must be one of odict_keys(['_builtin', 'defaults', 'defaults/cray', 'bootstrap/cray', 'disable_modules', 'overrides-0'])
Please run `spack -d spec zlib` for more verbose error messages
```
in case bootstrapping from binaries fails and we are
falling back to bootstrapping from sources.
A common question from users has been how to model variants
that are new in new versions of a package, or variants that are
dependent on other variants. Our stock answer so far has been
an unsatisfying combination of "just have it do nothing in the old
version" and "tell Spack it conflicts".
This PR enables conditional variants, on any spec condition. The
syntax is straightforward, and matches that of previous features.
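For example (a hedged sketch; the package and variant are made up, `when=` on `variant` is the new capability):
```python
class Example(Package):
    version("2.0")
    version("1.0")

    # the variant only exists when the spec condition holds
    variant("shiny", default=True, when="@2.0:",
            description="Enable the shiny subsystem introduced in 2.0")
```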
* GnuPG: allow bootstrapping from buildcache and sources
* Add a test to bootstrap GnuPG from binaries
* Disable bootstrapping in tests
* Add e2e test to bootstrap GnuPG from sources on Ubuntu
* Add e2e test to bootstrap GnuPG on macOS
This PR adds error message sentinels to the clingo solve, attached to each of the rules that could fail a solve. The unsat core is then restricted to these messages, which makes the minimization problem tractable. Errors that can only be generated by a bug in the logic program or generating code are prefaced with "Internal error" to make clear to users that something has gone wrong on the Spack side of things.
* minimize unsat cores manually
* only errors messages are choices/assumptions for performance
* pre-check for unreachable nodes
* update tests for new error message
* make clingo concretization errors show up in cdash reports fully
* clingo: make import of clingo.ast parsing routines robust to clingo version
Older `clingo` has `parse_string`; newer `clingo` has `parse_files`. Make the
code work with both.
* make AST access functions backward-compatible with clingo 5.4.0
Clingo AST API has changed since 5.4.0; make some functions to help us
handle both versions of the AST.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
After #26608 I got a report about missing rpaths when building a
downstream package independently using a spack-installed toolchain
(@tmdelellis). This occurred because the spack-installed libraries were
being linked into the downstream app, but the rpaths were not being
manually added. Prior to #26608 autotools-installed libs would retain
their hard-coded path and would thus propagate their link information
into the downstream library on mac.
We could solve this problem *if* the mac linker (ld) respected
`LD_RUN_PATH` like it does on GNU systems, i.e. adding `rpath` entries
to each item in the environment variable. However on mac we would have
to manually add rpaths either using spack's compiler wrapper scripts or
manually (e.g. using `CMAKE_BUILD_RPATH` and pointing to the libraries of
all the autotools-installed spack libraries).
The easier and safer thing to do for now is to simply stop changing the
dylib IDs.
The `--generic` argument allows printing the best generic target for the
current machine. This can be quite handy when wanting to find the
generic architecture to use when building a shared software stack for
multiple machines.
This PR adds a "spack tags" command to output package tags or
(available) packages with those tags. It also ensures each package
is listed in the tag cache ONLY ONCE per tag.
- [x] Allow adding enumerated types and types whose default value is forbidden by the schema
- [x] Add a test for using enumerated types in the tests for `spack config add`
- [x] Make `config add` tests use the `mutable_config` fixture so they do not
affect other tests
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
If you don't format `spack.yaml` correctly, `spack config edit` still fails and
you have to edit your `spack.yaml` manually.
- [x] Add some code to `_main()` to defer `ConfigFormatError` when loading the
environment, until we know what command is being run.
- [x] Make `spack config edit` use `SPACK_ENV` instead of the config scope
object to find `spack.yaml`, so it can work even if the environment is bad.
Co-authored-by: scheibelp <scheibel1@llnl.gov>
`spack config get <section>` was erroneously returning just the `spack.yaml`
for the environment.
It should return the combined configuration for that section (including
anything from `spack.yaml`), even in an environment.
- [x] reorder conditions in `cmd/config.py` to fix
`spack --debug config edit` was not working properly -- it would not show a
stack trace for configuration errors.
- [x] Rework `_main()` and add some notes for maintainers on where things need
to go for configuration to work properly.
- [x] Move config setup to *after* command-line parsing is done.
Co-authored-by: scheibelp <scheibel1@llnl.gov>
`main()` has grown, and in some cases code that can generate errors has gotten
outside the top-level try/catch in there. This means that simple errors like
config issues give you large stack traces, which shouldn't happen without
`--debug`.
- [x] Split `main()` into `main()` for the top-level error handling and
`_main()` with all logic.
There were some loose ends left in #26735 that cause errors when
using `SPACK_DISABLE_LOCAL_CONFIG`.
- [x] Fix hard-coded `~/.spack` references in `install_test.py` and `monitor.py`
Also, if `SPACK_DISABLE_LOCAL_CONFIG` is used, there is the issue that
`$user_config_path`, when used in configuration files, makes no sense,
because there is no user config scope.
Since we already have `$user_cache_path` in configuration files, and since there
really shouldn't be *any* data stored in a configuration scope (which is what
you'd configure in `config.yaml`/`bootstrap.yaml`/etc.), this just removes
`$user_config_path`.
There will *always* be a `$user_cache_path`, as Spack needs to write files, but
we shouldn't rely on the existence of a particular configuration scope in the
Spack code, as scopes are configurable, both in number and location.
- [x] Remove `$user_config_path` substitution.
- [x] Fix reference to `$user_config_path` in `etc/spack/defaults/bootstrap.yaml`
to refer to `$user_cache_path`, which is where it was intended to be.
* Deactivate previous env before activating new one
Currently on develop you can run `spack env activate` multiple times to switch
between environments, but they leave traces, even though Spack only supports
one active environment at a time.
Currently:
```console
$ spack env create a
$ spack env create b
$ spack env activate -p a
[a] $ spack env activate -p b
[b] [a] $ spack env activate -p b
[b] [b] [a] $ spack env activate -p a
[a] [b] [b] [a] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
/path/to/environments/b/.spack-env/view/share/man
/path/to/environments/b/.spack-env/view/man
```
This PR fixes that:
```console
$ spack env activate -p a
[a] $ spack env activate -p b
[b] $ spack env activate -p a
[a] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
```
* Drastically improve YamlFilesystemView file removal via batching
The `remove_file` routine has to check if the file is owned by multiple packages, so it doesn't
remove necessary files. This is done by the `get_all_specs` routine, which walks the entire
package tree. With large numbers of packages on shared file systems, this can take seconds
per file tree traversal, which adds up extremely quickly. For example, a single deactivate
of a largish python package in our software stack on GPFS took approximately 40 minutes.
This patch simply replaces `remove_file` with a batch `remove_files` routine. This routine
removes a list of files rather than a single file, requiring only one traversal per batch. In
practice this means a package can be removed in seconds rather than potentially hours,
essentially a ~100x speedup (ignoring initial deactivation logic, which takes about 3 minutes
in our test setup).
* Fix sbang hook for non-writable files
PR #26793 seems to have broken the sbang hook for files with missing
write permissions. Installing perl now breaks with the following error:
```
==> [2021-10-28-12:09:26.832759] Error: PermissionError: [Errno 13] Permission denied: '$SPACK/opt/spack/linux-fedora34-zen2/gcc-11.2.1/perl-5.34.0-afuweplnhphcojcowsc2mb5ngncmczk4/bin/cpanm'
```
Temporarily add write permissions to the original file so it can be
overwritten with the patched one.
And test that file permissions are preserved in sbang even for non-writable files
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
When relocating a binary distribution, Spack only checks files to see
if they are a link that needs to be relocated. Directories can be
such links as well, however, and need to undergo the same checks
and potential relocation.
`spack list` tests are not using mock packages for some reason, and many
are marked as potentially slow. This isn't really necessary; we don't need
6,000 packages to test the command.
- [x] update tests to use `mock_packages` fixture
- [x] remove `maybeslow` annotations
Currently Spack reads full files containing shebangs into memory as
strings, meaning Spack has to guess their encoding (the current fixed
guess is UTF-8).
This is unnecessary, since e.g. the Linux kernel does not assume an
encoding on paths at all: they are just bytes and some delimiters on the
byte level.
This commit does the following:
1. Shebangs are treated as bytes, so that e.g. latin-1 encoded files do
not throw Unicode decoding errors; a test for this is added.
2. No more bytes than necessary are read into memory: we only have to read
until the first newline, and from there on we can copy the file byte by
byte instead of decoding and re-encoding text.
3. We cap the number of bytes read at 4096; if no newline is found
before that, we don't attempt to patch the file.
4. Add support for luajit too.
This should make Spack both more efficient and usable for non-UTF8
files.
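A minimal sketch of the byte-level approach (an assumption, not the actual Spack implementation):
```python
def read_shebang(path, cap=4096):
    """Return the shebang line as bytes, or None if there is no shebang
    or no newline within the first `cap` bytes (then we don't patch)."""
    with open(path, "rb") as f:
        prefix = f.read(cap)  # never read more than `cap` bytes
    if not prefix.startswith(b"#!"):
        return None
    end = prefix.find(b"\n")
    if end == -1:
        return None  # no newline within the cap: leave the file alone
    return prefix[: end + 1]
```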
Spack's `system` and `user` scopes provide ways for administrators and
users to set global defaults for all Spack instances, but for use cases
where one wants a clean Spack installation, these scopes can be undesirable.
For example, users may want to opt out of global system configuration, or
they may want to ignore their own home directory settings when running in
a continuous integration environment.
Spack also, by default, keeps various caches and user data in `~/.spack`,
but users may want to override these locations.
Spack provides three environment variables that allow you to override or
opt out of configuration locations:
* `SPACK_USER_CONFIG_PATH`: Override the path to use for the
`user` (`~/.spack`) scope.
* `SPACK_SYSTEM_CONFIG_PATH`: Override the path to use for the
`system` (`/etc/spack`) scope.
* `SPACK_DISABLE_LOCAL_CONFIG`: set this environment variable to completely
disable *both* the system and user configuration directories. Spack will
only consider its own defaults and `site` configuration locations.
And one that allows you to move the default cache location:
* `SPACK_USER_CACHE_PATH`: Override the default path to use for user data
(misc_cache, tests, reports, etc.)
With these settings, if you want to isolate Spack in a CI environment, you can do this:
    export SPACK_DISABLE_LOCAL_CONFIG=true
    export SPACK_USER_CACHE_PATH=/tmp/spack
This is a stop-gap approach until we have figured out how to deal with
the system and user config scopes more generally, as there are plans to
potentially / eventually get rid of them.
**User config**
Spack is a bit of a pain when you have:
- a shared $HOME folder across different systems.
- multiple Spack versions on the same system.
**System config**
- On shared systems with a versioned programming environment / toolkit,
system administrators want to provide config for each version (e.g.
21.09, 21.10) of the programming environment, and the user Spack
instance should be able to pick this up without a steep learning
curve.
- On shared systems the user should be able to opt out of the
hard-coded config scope in /etc/spack, since it may be incompatible
with their particular instance. Currently Spack can only opt out of all
config scopes through overrides with `"config:":`, `"packages:":`, but that
also drops the defaults config, which would then have to be repeated; this
is undesirable, especially for the lengthy packages.yaml.
An example use case is: having config in this folder:
```
/path/to/programming/environment/{version}/{compilers,packages}.yaml
```
and have `module load spack-system-config` set the variable
```
SPACK_SYSTEM_CONFIG_PATH=/path/to/programming/environment/{version}
```
where the user no longer has to worry about what `{version}` they are
on.
**Continuous integration**
Finally, there is the use case of continuous integration, which may
clone an arbitrary Spack version, which optimally should not pick up
system or user config from the previous run (like may happen in
classical bare metal non-containerized filesystem side effect ridden
jenkins pipelines). In fact this is very similar to how spack itself
tries to avoid picking up system dependencies during builds...
**But environments solve this?**
- You could do `include`s in environment files to get similar behavior
to the `SPACK_SYSTEM_CONFIG_PATH` example, but environments require you
to:
1) list paths to individual config files, not directories.
2) ensure every listed config file exists, or they fail.
- They allow you to override config scopes, but this is generally too
rigorous, as it requires you to repeat the default config, in
particular packages.yaml, and just defies the point of layered config.
Co-authored-by: Tom Scogland <tscogland@llnl.gov>
Co-authored-by: Tim Fuller <tjfulle@sandia.gov>
Co-authored-by: Steve Leak <sleak@lbl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Any spec satisfying a default will be symlinked to `default`
If multiple specs have modulefiles in the same directory and satisfy
configured module defaults, then whichever was written last will be
the default.
This PR permits to specify the `url` and `ref` of the Spack instance used in a container recipe simply by expanding the YAML schema as outlined in #20442:
```yaml
container:
images:
os: amazonlinux:2
spack:
ref: develop
resolve_sha: true
```
The `resolve_sha` option, if true, verifies the `ref` by cloning the Spack repository in a temporary directory and transforming any tag or branch name to a commit sha. When this new ability is leveraged, an additional "bootstrap" stage is added, which builds an image with Spack set up and ready to install software. The Spack repository to be used can be customized with the `url` keyword under `spack`.
Modifications:
- [x] Permit to pin the version of Spack, either by branch or tag or sha
- [x] Added a few new OSes (centos:8, amazonlinux:2, ubuntu:20.04, alpine:3, cuda:11.2.1)
- [x] Permit to print the bootstrap image as a standalone
- [x] Add documentation on the new part of the schema
- [x] Add unit tests for different use cases