This PR removes a few outdated sections from the "Basics" part of the
documentation. It also makes a few topics under the environment section
more prominent by removing an unneeded spack.yaml subsection and
promoting everything under it.
Consolidate Spack's internal filepath logic into a select
few places and refactor toward consistent internal usage of
os.path utilities. Creates a prefix and a series of utilities
in the path utility module that facilitate handling paths
in a platform-agnostic manner.
Convert Windows paths to posix paths internally
Prefer posixpath.join instead of os.path.join
Updated util/ directory to account for Windows integration
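A minimal sketch of the kind of helper this adds, assuming hypothetical names (the real utilities live in Spack's path module and may differ):
```python
import posixpath

def convert_to_posix_path(path):
    # Normalize separators so Windows paths can be handled uniformly
    return path.replace("\\", "/")

def join_path(*parts):
    # Join components with posix separators regardless of the host OS
    return posixpath.join(*(convert_to_posix_path(p) for p in parts))

print(join_path("C:\\Users\\spack", "stage", "build"))  # C:/Users/spack/stage/build
```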
Co-authored-by: Stephen Crowell <stephen.crowell@khq.kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
Module template format for windows (#23041)
* Incorporate new search location
* Add external user option
* proper doc string
* Explicit commands in getting started
* raise during chgrp on Win
recover installer changes
Notate admin privileges
Windows phase install hooks
Find external python and install ninja (#23496)
Allow external find python to find windows python and spack install ninja
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Fixup common tests
* Remove requirement for Python 2.6
* Skip new failing test
Windows: Update url util to handle Windows paths (#27959)
* update url util to handle windows paths
* Update tests to handle fixed url handling
* canonicalize path only when the path type matches the host platform
* Skip some url tests on Windows
Co-authored-by: Omar Padron <omar.padron@kitware.com>
Use threading.TIMEOUT_MAX when available (#24246)
This value was introduced in Python 3.2. Specifying a timeout greater than
this value will raise an OverflowError.
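For illustration, a hedged sketch of capping a timeout at this limit (not the exact Spack code):
```python
import threading

timeout = 1e12  # a "wait forever" style timeout requested by a caller
if hasattr(threading, "TIMEOUT_MAX"):  # available on Python >= 3.2
    timeout = min(timeout, threading.TIMEOUT_MAX)  # avoid OverflowError

lock = threading.Lock()
lock.acquire(timeout=timeout)
lock.release()
```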
Co-authored-by: Lou Lawrence <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Add compiler hint to the root spec for Windows
Reporters on Windows (#26038)
Reporters use Jinja2 as the templating engine, and Jinja2 indexes
templates by Unix separators, even on Windows, so search using Unix paths
on all systems.
Support patching on win via git (#25871)
Handle GRP on windows
CMake - Windows Bootstrap (#25825)
Remove hardcoded cmake compiler (#26410)
Revert breaking cmake changes
Ensure no autotools on Windows
Perl on Windows (#26612)
Python source build windows (#26313)
Reconfigure sysconf for Windows
Python2.6 compatibility
Fixup new sbang tests for windows
Ruby support (#28287)
Add NASM support (#28319)
Add mock Ninja package for testing
* Style fixes
* Use Python's zipfile, if available
The compression libs are optional in Python. Rely on Python's zipfile as a
first attempt, then fall back to `unzip`.
MSVC's internal CMake and Ninja now detected by spack external find and added to packages.yaml
Saving progress on packaging zlib for Windows
Fixing the shared CMake flag
* Loading Intel's ifx Fortran compiler into MSVC; if there are multiple
versions of MSVC installed and detected, ifx will only be placed into
the first block written in compilers.yaml. The version number of ifx can
be detected using MSVC's version flag (instead of /QV) by using
ignore_version_errors. This commit also provides support for detection
of Intel compilers in their own compiler block by adding ifx.exe to the
fc/f77_name blocks inside intel.py
* Giving CMake a Fortran compiler argument
* Adding patch file for removing duplicated mangling header for versions 3.9.1
  and older; static and shared now successfully building on Windows
* Have netlib-lapack depend on ninja@1.10
Co-authored-by: John R. Cary <cary@txcorp.com>
Co-authored-by: Jared Popelar <jpopelar@txcorp.com>
Making a default config.yaml for Windows
Small path length for build_stage
Provide more prerequisite details, mention default config.yaml
Killing an unnecessary setvars call
Replacing some lost changes, proofreading, updating windows-supported package list
Co-authored-by: John Parent <john.parent@kitware.com>
* Add 'make-installer' command for Windows
* Add '--bat' arg to env activate, env deactivate and unload commands
* An equivalent script to setup-env on linux: spack_cmd.bat. This script
has a wrapper to evaluate cd, load/unload, and env activate/deactivate. (#21734)
* Add spacktivate and config editor (#22049)
* spack_cmd: will find python and spack on its own. It preferentially
tries to use python on your PATH (#22414)
* Ignore Windows python installer if found (#23134)
* Bundle git in windows installer (#23597)
* Add Windows section to Getting Started document
(#23131), (#23295), (#24240)
Co-authored-by: Stephen Crowell <stephen.crowell@kitware.com>
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
Co-authored-by: Jared Popelar <jpopelar@txcorp.com>
Co-authored-by: Ben Cowan <benc@txcorp.com>
Update Installer CI
Co-authored-by: John Parent <john.parent@kitware.com>
Made the vcvars batch script location a member variable of the msvc compiler
subclass, initialized from the compiler executable path. Added a
setup_custom_environment() method to the msvc subclass that sources the vcvars
script, dumps the environment, and copies the relevant environment variables
to the Spack environment. Added class variables to the Windows OS and MSVC
compiler subclasses to enable finding the compiler executables and determining
their versions.
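A rough sketch of the described behavior, with assumed names (the class, the vcvars file name, and the env mapping are illustrative, not Spack's actual code):
```python
import os
import subprocess

class MsvcSketch:
    def __init__(self, cc_path):
        # vcvars batch script location derived from the compiler executable path
        self.vcvars_call = os.path.join(os.path.dirname(cc_path), "vcvarsall.bat")

    def setup_custom_environment(self, env):
        # Source vcvars, dump the resulting environment, and copy variables over
        out = subprocess.check_output(
            'call "{}" x64 && set'.format(self.vcvars_call),
            shell=True, universal_newlines=True,
        )
        for line in out.splitlines():
            key, sep, value = line.partition("=")
            if sep and key not in env:
                env[key] = value
```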
* Fixed path and uid issues.
* Added needed import statement; kluged .exe extension.
* Got package to build. Some manual intervention necessary, including sourcing the MSVC setup script and having certain configuration parameters.
* Removed CMake executable suffix hack.
To provide Windows-compatible functionality, spack code should use
llnl.util.symlink instead of os.symlink. On non-Windows platforms
and on Windows where supported, os.symlink will still be used.
Use junctions when symlinks aren't supported on Windows (#22583)
Support islink for junctions (#24182)
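A simplified sketch of the fallback logic (illustrative only; the real implementation lives in llnl.util.symlink):
```python
import os
import subprocess
import sys

def symlink(real_path, link_path):
    if sys.platform != "win32":
        os.symlink(real_path, link_path)
        return
    try:
        # Works when the user has symlink privileges (e.g. developer mode)
        os.symlink(real_path, link_path)
    except OSError:
        # Junctions work for directories and need no special privilege
        subprocess.check_call(["cmd", "/C", "mklink", "/J", link_path, real_path])
```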
Windows: Update llnl/util/filesystem
* Use '/' as path separator on Windows.
* Recognizing that Windows paths start with '<Letter>:/' instead of '/'
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
os.rename() fails on Windows if the file already exists.
Create getuid utility function (#21736)
On Windows, replace os.getuid with ctypes.windll.shell32.IsUserAnAdmin().
Tests: Use getuid util function
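A possible shape for the utility, as a hedged sketch (not necessarily the exact Spack code):
```python
import sys

def getuid():
    # Treat "is an administrator" as the Windows analogue of uid 0
    if sys.platform == "win32":
        import ctypes
        return 0 if ctypes.windll.shell32.IsUserAnAdmin() else 1
    import os
    return os.getuid()
```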
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
1. Forwarding sys.stdin, e.g. use input_multiprocess_fd,
gives an error on Windows. Skipping for now
2. subprocess_context needs to serialize for Windows, like it does
for Mac.
Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
* Snapshot of some MSVC infrastructure added during experiments a while ago. Rebasing from spack/develop.
* Added platform and OS definitions for Windows.
* Updated Windows platform file to conform to new archspec use.
* Added Windows as a platform; introduced some debugging code.
* Added type annotations.
* Fixed copyright.
* Removed print statements.
* Ensure `spack arch` returns correctly on Windows (#21428)
* Correctly identify windows as 'windows-Windows10-AMD64'
Re-work the checks and comparisons around commit versions. When no
commit version is involved the overhead is now in the noise; when one
is involved, the overhead is now constant rather than linear.
fixes #29446
The new setup_*_environment functions have been falling back
to calling the old functions and warn the user since #11115.
This commit removes the fallback behavior and any use of:
- setup_environment
- setup_dependent_environment
in the codebase
Change the internal representation of `Spec` to allow for multiple dependencies or
dependents stemming from the same package. This change makes it possible to represent
cases that are frequent in cross-compiled environments or when bootstrapping compilers.
Modifications:
- [x] Substitute `DependencyMap` with `_EdgeMap`. The main differences are that the
latter does not support direct item assignment and can be modified only through its
API. It also provides a `select_by` method to query items.
- [x] Reworked a few public APIs of `Spec` to get list of dependencies or related edges.
- [x] Added unit tests to prevent regression on #11983 and prove the synthetic construction
of specs with multiple deps from the same package.
Since #22845 went in first, this PR reuses that format and thus it should not change hashes.
The same package may be present multiple times in the list of dependencies with different
associated specs (each with its own hash).
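A toy illustration of the idea, not Spack's `_EdgeMap` itself: edges are grouped by package name, several edges may share the same name, and a `select_by`-style query filters them.
```python
import collections

Edge = collections.namedtuple("Edge", ["parent", "child", "deptypes"])

class EdgeMap:
    def __init__(self):
        self._edges = collections.defaultdict(list)

    def add(self, edge):
        self._edges[edge.child].append(edge)

    def select_by(self, child=None, deptype=None):
        selected = []
        for name, edges in self._edges.items():
            if child is not None and name != child:
                continue
            selected.extend(e for e in edges if deptype is None or deptype in e.deptypes)
        return selected

emap = EdgeMap()
emap.add(Edge("gcc-bootstrap", "gcc", ("build",)))
emap.add(Edge("app", "gcc", ("build", "link")))
print(len(emap.select_by(child="gcc")))  # 2 edges pointing to the same package
```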
* environment.py: allow link:run
Some users want minimal views, excluding run-type dependencies, since
those types of dependencies are covered by rpaths and the symlinked
libraries in the view aren't used anyway.
With this change, an environment like this:
```
spack:
specs: ['py-flake8']
view:
default:
root: view
link: run
```
includes python packages and python, but no link type deps of python.
Speeds up comparison on `Version` by ~2.5x, e.g.
```python
In [1]: v = spack.version.Version('1.0.0'); w = spack.version.Version('1.0.2')
In [2]: %timeit v < w   # before
1.47 µs ± 5.59 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [3]: %timeit v < w   # after
535 ns ± 1.75 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
fixes #29203
This PR fixes a subtle bug we have when importing
Spack packages as Python modules that can lead to
multiple module objects being created for the same
package.
It also fixes all the places in unit-tests where
"relying" on the old bug was crucial to have a new
"clean" state of the package class.
This commit reverts the GCS fetch strategy to before commit:
d759612523
The previous commit added some s3 syntax to handle connections, but
added them into the GCS fetch strategy in a way that prevents GCS from
working anymore.
* rocmcc compiler: initial commit based on aocc and clang
Co-authored-by: luker <luke.roskop@hpe.com>
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
The status displayed in the terminal title could be wrong when doing
distributed builds. For instance, doing `spack install glib` in two
different terminals could lead to the current package being reported as
`40/29` due to the way Spack handles retrying locks.
Work around this by keeping track of the package IDs that were already
encountered to avoid counting packages twice.
See https://github.com/spack/spack/pull/28468/files#r809156986
If we exit before generating the
error("Dependencies must have compatible OS's with their dependents")
...
facts, we'll output a problem that is effectively
different from the one solved by clingo.
* cmd/checksum: prefer url matching url_from_version
This is a minimal change toward getting the right archive from places
like github. The heuristic is:
* if an archive url exists, take its version
* generate a url from the package with pkg.url_from_version
* if they match
* stop considering other URLs for this version
* otherwise, continue replacing the url for the version
I doubt this will always work, but it should address a variety of
versions of this bug. A good test right now is `spack checksum gh`,
which checksums macos binaries without this, and the correct source
packages with it.
fixes #15985
related to #14129
related to #13940
* add heuristics to help create as well
Since create can't rely on an existing package, this commit adds another
pair of heuristics:
1. if the current version is a specifically listed archive, don't
replace it
2. if the current url matches the result of applying
`spack.url.substitute_version(a, ver)` for any a in archive_urls,
prefer it and don't replace it
fixes #13940
* clean up style and a lingering debug import
* ok flake8, you got me
* document reference_package argument
* Update lib/spack/spack/util/web.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* try to appease sphinx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
We can see what is in the bootstrap store with `spack find -b`, and you can clean it with `spack
clean -b`, but we can't do much else with it, and if there are bootstrap issues they can be hard to
debug.
We already have `spack --mock`, which allows you to swap in the mock packages from the command
line. This PR introduces `spack -b` / `spack --bootstrap`, which runs all of spack with
`ensure_bootstrap_configuration()` set. This means that you can run `spack -b find`, `spack -b
install`, `spack -b spec`, etc. to see what *would* happen with bootstrap configuration, to remove
specific bootstrap packages, etc. This will hopefully make developers' lives easier as they deal
with bootstrap packages.
This PR also uses a `nullcontext` context manager. `nullcontext` has been implemented in several
other places in Spack, and this PR consolidates them to `llnl.util.lang`, with a note that we can
delete the function if we ever require a new enough Python.
- [x] introduce `spack --bootstrap` option
- [x] consolidated all `nullcontext` usages to `llnl.util.lang`
Some "concrete" versions on the command line, e.g. `qt@5` are really
meant to satisfy some actual concrete version from a package. We should
only assume the user is introducing a new, unknown version on the CLI
if we, well, don't know of any version that satisfies the user's
request. So, if we know about `5.11.1` and `5.11.3` and they ask for
`5.11.2`, we'd ask the solver to consider `5.11.2` as a solution. If
they just ask for `5`, though, `5.11.1` or `5.11.3` are fine solutions,
as they satisfy `@5`, so use them.
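A toy sketch of that rule (illustrative, not the concretizer's actual code):
```python
known = ["5.11.1", "5.11.3"]

def needs_new_version(requested, known_versions):
    # Only treat the CLI version as new if no known version satisfies it
    return not any(v == requested or v.startswith(requested + ".")
                   for v in known_versions)

print(needs_new_version("5", known))       # False: 5.11.1 / 5.11.3 satisfy @5
print(needs_new_version("5.11.2", known))  # True: nothing known satisfies it
```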
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
See https://github.com/spack/spack/issues/25353#issuecomment-1041868116
This commit changes the default behavior of
```
$ spack external find
```
from searching all the possible packages Spack knows about to
search only for the ones tagged as being a "build-tool".
It also introduces a `--all` option to restore the old behavior.
Prefer `sw_vers` to `platform.mac_ver`. In an anaconda3 installation, for example, the latter reports 10.16 on Monterey -- I think this is affected by how and where the Python instance was built.
Use MACOSX_DEPLOYMENT_TARGET if present to override the operating system choice.
It will be useful for metrics gathering and possibly debugging to
have this environment variable available in the runner pods that
do the actual rebuilds.
Since Spack does not install external packages, this commit skips them by
default when running stand-alone tests. The assumption is that such packages
have likely undergone an acceptance test process.
However, the tests can be run against installed externals using
```
% spack test run --externals ...
```
fixes #28260
Since we iterate over different variants from many packages, the variant
values may have types which are not comparable, which causes errors
at runtime. This is not a real issue though, since we don't need the facts
to be ordered. Thus, to avoid needless sorting, the sorted function has
been removed and a comment has been added to tip any developer that
might need to inspect these clauses for debugging to add back sorting
on the first two items only.
It's kind of difficult to add a test for this, since the error depends on
whether Python's sorting algorithm ever needs to compare the third
value of a tuple being ordered.
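A small demonstration of the failure mode (toy tuples, not the real facts):
```python
facts = [("pkg", "feature", True), ("pkg", "feature", "on")]
try:
    sorted(facts)  # comparing True with "on" is not allowed in Python 3
except TypeError as err:
    print("sorting failed:", err)
```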
* extensions: allow multiple "extends" directives
This will allow multiple extends directives in a package as long as only one of
them is selected as a dependency in the concrete spec.
* document the option to have multiple extends
Reuse previously was a very invasive change that required parameters to be added to all
the methods that called `concretize()` on a `Spec` object. With the addition of
concretizer configuration, we can use the config system to simplify this argument
passing and keep the code cleaner.
We decided that concretizer config options should be read at `Solver` instantiation
time, and if config changes between instantiation of a particular solver and
`solve()` invocation, the `Solver` should use the settings from `__init__()`.
- [x] remove `reuse` keyword argument from most concretize functions
- [x] refactor usages to use `spack.config.override("concretizer:reuse", True)`
- [x] rework argument passing in `Solver` so that parameters are set from config
at instantiation time
`--reuse` was previously handled individually by each command that
needed it. We are growing more concretization options, and they'll
need their own section for commands that support them.
Now there are two concretization options:
* `--reuse`: Attempt to reuse packages from installs and buildcaches.
* `--fresh`: Opposite of reuse -- traditional spack install.
To handle these, this PR adds a `ConfigSetAction` for `argparse`, so
that you can write argparse code like this:
```
subgroup.add_argument(
'--reuse', action=ConfigSetAction, dest="concretizer:reuse",
const=True, default=None,
help='reuse installed dependencies/buildcaches when possible'
)
```
With this, you don't need to add logic to pull the argument out and
handle it; the `ConfigSetAction` just does it for you. This can probably
be used to clean up some other commands later, as well.
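As a hedged sketch of what such an action might look like (the real class lives in `spack.cmd.common.arguments` and calls into `spack.config`; this stand-in just prints):
```python
import argparse

class ConfigSetAction(argparse.Action):
    def __init__(self, option_strings, dest, const=True, default=None, **kwargs):
        kwargs["nargs"] = 0  # the flag takes no argument
        super(ConfigSetAction, self).__init__(
            option_strings, dest, const=const, default=default, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        # In Spack this would be spack.config.set(self.dest, self.const, ...)
        print("setting {} = {}".format(self.dest, self.const))

parser = argparse.ArgumentParser()
parser.add_argument("--reuse", action=ConfigSetAction,
                    dest="concretizer:reuse", const=True)
parser.parse_args(["--reuse"])
```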
Code that was previously passing `reuse=True` around everywhere has
been refactored to use config, and config is set from the CLI using
a new `add_concretizer_args()` function in `spack.cmd.common.arguments`.
- [x] Add `ConfigSetAction` to simplify concretizer config on the CLI
- [x] Refactor code so that it does not pass `reuse=True` to every function.
- [x] Refactor commands to use `add_concretizer_args()` and to pass
concretizer config using the config system.
Config scopes were different for `config` and `mutable_config`,
and `mutable_config` did not have a command line scope.
- [x] Fix by consolidating the creation logic for the two fixtures.
The concretizer is going to grow to have many more configuration options,
and we really need some structured config for that.
* We have the `config:concretizer` option that chooses the solver,
but extending that is awkward (we'd need to replace a string with
a `dict`) and the solver choice will be deprecated eventually.
* We have the `concretization` option in environments, but it's
not a top-level config section -- it's just for environments,
and it also only admits a string right now.
To avoid overlapping with either of these and to allow the most
extensibility in the future, this adds a new `concretizer` config
section that can be used in and outside of environments. There
is only one option right now: `reuse`. This can expand to include
other options later.
Likely, we will soon deprecate `config:concretizer` and warn when
the user doesn't use `clingo`, and we will eventually (sometime later)
move the `together` / `separately` options from `concretization` into
the top-level `concretizer` section.
This commit just adds the new section and schema. Fully wiring it
up is TBD.
The solver has a lot of configuration associated with it. Rather
than adding arguments to everything, we should encapsulate that
in a class. This is the start of that work; it replaces `solve()`
and its kwargs with a class and properties.
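A minimal sketch of that shape, assuming a hypothetical `get_config` helper in place of `spack.config.get`:
```python
_config = {"concretizer:reuse": True}

def get_config(path, default=None):  # stand-in for spack.config.get
    return _config.get(path, default)

class Solver:
    def __init__(self):
        # concretizer options are read once, at instantiation time
        self._reuse = get_config("concretizer:reuse", False)

    @property
    def reuse(self):
        return self._reuse

    def solve(self, specs):
        print("solving with reuse={}".format(self.reuse))

Solver().solve(["zlib"])
```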
* Add 'stable' to the list of infinity version names.
Rename libunwind 1.5-head to 1.5-stable.
* Add stable to the infinity version list in packaging_guide.rst.
* core: Make platform environment an instance not class method
In preparation for accessing data constructed in __init__.
* macos: set consistent macosx deployment target
This should silence numerous warnings from mixed gcc/macos toolchains.
* perl: prevent too-new deployment target version
```
*** Unexpected MACOSX_DEPLOYMENT_TARGET=11
***
*** Please either set it to a valid macOS version number (e.g., 10.15) or to empty.
```
* Stylin'
* Add deployment target overrides to failing autoconf packages
* Move configure workaround to base autoconf package
This reverts commit 3c119eaf8b4fb37c943d503beacf5ad2aa513d4c.
* Stylin'
* macos: add utility functions for SDK
These aren't yet used but should probably be added to spack debug
report.
* Remove node_target_satisfies/3 in favor of target_satisfies/2
When emitting input facts we don't need to couple target with
packages, but we can emit fewer facts independently and let
the grounder combine them.
* Remove compiler_version_satisfies/4 in favor of compiler_version_satisfies/3
When emitting input facts we don't need to couple compilers with
packages, but we can emit fewer facts independently and let
the grounder combine them.
* Introduce heuristic in the ASP-program
With heuristic we can drive clingo to make better
initial guesses, which lead to fewer choices and
conflicts in the overall solve
* Fix reindex with uninstalled deps
When a prefix of a dep is removed, and the db is reindexed, it is added
through the dependent, but until now it incorrectly listed the spec as
'installed'.
There was also some questionable behavior in the db when the same spec
was added multiple times, it would always be marked installed.
* Always reserve path
* Only add installed spec's prefixes to install prefixes set
* Improve warning, and ensure ensure only ensures
* test: reindex with every file system remnant removed except for the old index;
  it should give a database with nothing installed, including records with
  installed==False, external==False, ref_count==0, explicit=True, and these
  should be removable from the database
* stacks: add regression tests for matrix expansion
* Use constrain semantics to construct spec lists for stacks
* Fix semantics for constraining an anonymous spec. Add tests
* Add sticky variants
* Add unit tests for sticky variants
* Add documentation for sticky variants
* Revert "Revert 19736 because conflicts are avoided by clingo by default (#26721)"
This reverts commit 33ef7d57c1.
* Add stickiness to "allow-unsupported-compiler"
`spack license update-copyright-year` was updating license headers but not the MIT
license file. Make it do that and add a test.
Also simplify the way we bump the latest copyright year so that we only need to
update it in one place.
* Use pip to bootstrap pip
* Bootstrap wheel from source
* Update PythonPackage to install using pip
* Update several packages
* Add wheel as base class dep
* Build phase no longer exists
* Add py-poetry package, fix py-flit-core bootstrapping
* Fix isort build
* Clean up many more packages
* Remove unused import
* Fix unit tests
* Don't directly run setup.py
* Typo fix
* Remove unused imports
* Fix issues caught by CI
* Remove custom setup.py file handling
* Use PythonPackage for installing wheels
* Remove custom phases in PythonPackages
* Remove <phase>_args methods
* Remove unused import
* Fix various packages
* Try to test Python packages directly in CI
* Actually run the pipeline
* Fix more packages
* Fix mappings, fix packages
* Fix dep version
* Work around bug in concretizer
* Various concretization fixes
* Fix gitlab yaml, packages
* Fix typo in gitlab yaml
* Skip more packages that fail to concretize
* Fix? jupyter ecosystem concretization issues
* Solve Jupyter concretization issues
* Prevent duplicate entries in PYTHONPATH
* Skip fenics-dolfinx
* Build fewer Python packages
* Fix missing npm dep
* Specify image
* More package fixes
* Add backends for every from-source package
* Fix version arg
* Remove GitLab CI stuff, add py-installer package
* Remove test deps, re-add install_options
* Function declaration syntax fix
* More build fixes
* Update spack create template
* Update PythonPackage documentation
* Fix documentation build
* Fix unit tests
* Remove pip flag added only in newer pip
* flux: add explicit dependency on jsonschema
* Update packages that have been added since this was branched off of develop
* Move Python 2 deprecation to a separate PR
* py-neurolab: add build dep on py-setuptools
* Use wheels for pip/wheel
* Allow use of pre-installed pip for external Python
* pip -> python -m pip
* Use python -m pip for all packages
* Fix py-wrapt
* Add both platlib and purelib to PYTHONPATH
* py-pyyaml: setuptools is needed for all versions
* py-pyyaml: link flags aren't needed
* Appease spack audit packages
* Some build backend is required for all versions, distutils -> setuptools
* Correctly handle different setup.py filename
* Use wheels for py-tomli to avoid circular dep on py-flit-core
* Fix busco installation procedure
* Clarify things in spack create template
* Test other Python build backends
* Undo changes to busco
* Various fixes
* Don't test other backends
When `spack compiler list` is run without being restricted to a
particular scope, and no compilers are found, say that none are
available, and hint that the user should run `spack compiler find` to
auto-detect compilers.
* Improve docs
* Check if stdin is a tty
* add a test
Many packages implement logic at the class level to handle complex dependencies and
conflicts. Others have started using `with when("@1.0"):` blocks since we added that
capability. The loops and other control logic can cause some pure directive logic not to
be removed by our package hashing logic -- and in many cases that's a lot of code that
will cause unnecessary rebuilds.
This commit changes the unparser so that it will descend into these blocks. Specifically:
1. Descend into loops, if statements, and with blocks at the class level.
2. Don't look inside function definitions (in or outside a class).
3. Don't look at nested class definitions (they don't have directives)
4. Add logic to *remove* empty loops/with blocks/if statements if all directives
in them were removed.
This allows our package hash to ignore a lot of pure metadata that it was not ignoring
before, and makes it less sensitive.
In addition, we add `maintainers` and `tags` to the list of metadata attributes that
Spack should remove from packages when constructing canonical source for a package
hash.
- [x] Make unparser handle if/for/while/with at class level.
- [x] Add tests for control logic removal.
- [x] Add a test to ensure that all packages are not only unparseable, but also
that their canonical source is still compilable. This is a test for
our control logic removal.
- [x] Add another unparse test package that has complex logic.
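To make the descent concrete, here is a tiny standalone example of looking inside a class-level loop with Python's `ast` module (illustrative only; the real work happens in Spack's unparser):
```python
import ast

src = """
class Example:
    for ver in ["1.0", "1.1"]:
        version(ver)
"""
cls = ast.parse(src).body[0]
loop = cls.body[0]
print(type(loop).__name__)           # For: a class-level loop we now descend into
print(ast.dump(loop.body[0].value))  # the version(...) directive call inside it
```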
These are the unit tests from astunparse, converted to pytest, with a few backports from
upstream cpython. These should hopefully keep `unparser.py` well covered as we change it.
We can't tell `print(a, b, c)` and `print((a, b, c))` apart -- both of these expressions
generate different ASTs in Python 2 and Python 3. However, we can decide that we don't
care. This commit treats both of them the same when `py_ver_consistent` is set with
`unparse()`.
This means that the package hash won't notice changes from printing a tuple to printing
multiple values, but we don't care, because this is extremely unlikely to affect the build.
More than likely this is just an error message for the user of the package.
- [x] treat `print(a, b, c)` and `print((a, b, c))` the same in py2 and py3
- [x] add another package parsing test -- legion -- that exercises this feature
To make it easier to see how package hashes change and how they are computed, add two
commands:
* `spack pkg source <spec>`: dumps source code for a package to the terminal
* `spack pkg source --canonical <spec>`: dumps canonicalized source code for a
package to the terminal. It strips comments, directives, and known-unused
multimethods from the package. It is used to generate package hashes.
* `spack pkg hash <spec>`: This gives the package hash for a particular spec.
It is generated from the canonical source code for the spec.
- [x] `add spack pkg source` and `spack pkg hash`
- [x] add tests
- [x] fix bug in multimethod resolution with boolean `@when` values
Co-authored-by: Greg Becker <becker33@llnl.gov>
We are planning to switch to using full hashes for Spack specs, which means that the
package hash will be included in the deployment descriptor. This means we need a more
robust package hash than simply dumping the `repr` of the AST.
The AST repr that we previously used for package content is unreliable because it can
vary between python versions (Python's AST actually changes fairly frequently).
- [x] change `package_hash`, `package_ast`, and `canonical_source` to accept a string for
alternate source instead of a filename.
- [x] consolidate package hash tests in `test/util/package_hash.py`.
- [x] remove old `package_content` method.
- [x] make `package_hash` do what `canonical_source_hash` was doing before.
- [x] modify `content_hash` in `package.py` to use the new `package_hash` function.
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Our package hash is supposed to be consistent from python version to python version.
Test this by adding some known unparse inputs and ensuring that they always have the
same canonical hash. This test relies on the fact that we run Spack's unit tests
across many Python versions. We can't compute hashes for several Python versions within the
same test run, so we precompute the hashes and check them in CI.
Package hashing was not properly handling multimethods. In particular, it was removing
any functions that had decorators from the output, so we'd miss things like
`@run_after("install")`, etc.
There were also problems with handling multiple `@when`'s in a single file, and with
handling `@when` functions that *had* to be evaluated dynamically.
- [x] Rework static `@when` resolution for package hash
- [x] Ensure that functions with decorators are not removed from output
- [x] Add tests for many different @when scenarios (multiple @when's,
combining with other decorators, default/no default, etc.)
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Previously we used `directives.__all__` to get directive names, but it wasn't
quite right -- it included `DirectiveMeta`, etc. It's not wrong, but it's also
not the clearest way to do this.
- [x] Refactor `@directive` to track names in `directive_names` global
- [x] Rename `_directive_names` to `_directive_dict_names` in `DirectiveMeta`
- [x] Add a test for `RemoveDirectives`
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Some packages use top-level unassigned strings instead of comments, either just after a
docstring or in the body somewhere else. Ignore those strings because they have no
effect on package behavior.
- [x] adjust RemoveDocstrings to remove all free-standing strings.
- [x] move tests for util/package_hash.py to test/util/package_hash.py
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
Python 2 and 3 represent string literals differently in the AST. Python 2 requires '\x'
literals, and Python 3 source is always unicode, and allows unicode to be written
directly. These also unparse differently by default.
- [x] modify unparser to write both out the way `repr` would in Python 2 when
`py_ver_consistent` is provided.
Backport operator precedence algorithm from here:
397b96f6d7
This eliminates unnecessary parentheses from our unparsed output and makes Spack's unparser
consistent with the one in upstream Python 3.9+, with one exception.
Our parser normalizes argument order when `py_ver_consistent` is set, so that star arguments
in function calls come last. We have to do this because Python 2's AST doesn't have information
about their actual order.
If we ever support only Python 3.9 and higher, we can easily switch over to `ast.unparse`, as
the unparsing is consistent except for this detail (modulo future changes to `ast.unparse`)
Previously, there were differences in the unparsed code for Python 2.7 and for 3.5-3.10.
This makes unparsed code the same across these Python versions by:
1. Ensuring there are no spaces between unary operators and
their operands.
2. Ensuring that *args and **kwargs are always the last arguments,
regardless of the python version.
3. Always unparsing print as a function.
4. Not putting an extra comma after Python 2 class definitions.
Without these changes, the same source can generate different code for different
Python versions, depending on subtle AST differences.
One place where single source will generate an inconsistent AST is with
multi-argument print statements, e.g.:
```
print("foo", "bar", "baz")
```
In Python 2, this prints a tuple; in Python 3, it is the print function with
multiple arguments. Use `from __future__ import print_function` to avoid
this inconsistency.
Add `astunparse` as `spack_astunparse`. This library unparses Python ASTs and we're
adding it under our own name so that we can make modifications to it.
Ultimately this will be used to make `package_hash` consistent across Python versions.
* Python: set default config_vars
* Add missing commas
* dso_suffix not present for some reason
* Remove use of default_site_packages_dir
* Use config_vars during bootstrapping too
* Catch more errors
* Fix unit tests
* Catch more errors
* Update docstring
This reports the kernel version (vs. the distro version) on Linux and
returns a valid Version (stripping characters like '+' which may be
present for custom-built kernels).
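A hedged sketch of the sanitization (the real code lives in Spack's platform/OS detection):
```python
import platform
import re

def kernel_version():
    release = platform.release()        # e.g. "5.15.0-91-generic" or "5.4.0+"
    match = re.match(r"[0-9.]+", release)
    # keep only the leading numeric version, dropping suffixes like '+'
    return match.group(0).rstrip(".") if match else release

print(kernel_version())
```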
Add a new check to `spack audit` to scan and verify that version constraints may be satisfied
Modifications:
- [x] Add a new check to `spack audit` to scan and verify that version constraints may be satisfied by some version declared in the built-in repository
- [x] Fix issues found by CI
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This command pokes the environment, Python interpreter
and bootstrap store to check if dependencies needed by
Spack are available.
If any are missing, it shows a comprehensible message.
* locks: allow locks to work under high contention
This is a bug found by Harshitha Menon.
The `lock=None` line shouldn't be a release but should be
```
return (lock_type, None)
```
to inform the caller it couldn't get the lock type requested without
disturbing the existing lock object in the database. There were also a
couple of bugs due to taking write locks at the beginning without any
checking or release, and not releasing read locks before requeueing.
This version no longer gives me read upgrade to write errors, even
running 200 instances on one box.
* Change lock in check_deps_status to read, release if not installed,
not sure why this was ever write, but read definitely is more
appropriate here, and the read lock is only held out of the scope if
the package is installed.
* Release read lock before requeueing to reduce chance of livelock, the
timeout that caused the original issue now happens in roughly 3 of 200
workers instead of 199 on average.
With this commit:
```
$ spack env activate --temp
$ spack install zlib
==> All of the packages are already installed
==> Updating view at /tmp/spack-faiirgmt/.spack-env/view
$ spack install zlib
==> All of the packages are already installed
```
Before this PR:
```
$ spack env activate --temp
$ spack install zlib
==> All of the packages are already installed
$ spack install zlib
==> All of the packages are already installed
```
No view was generated
This commit introduces the command
spack module tcl setdefault <package>
similar to the one already available for lmod
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
When running `spack install --log-format junit|cdash ...`, install
errors were ignored. This made spack continue building dependents of
failed install, ignoring `--fail-fast`, and exit 0 at the end.
Fixes #27652
Ensure that mirror's to_dict function returns a syaml_dict object for all code
paths.
Switch to using the .get function for accessing the potential information from
the S3 mirror objects. If the key is not there, it will gracefully return
None instead of failing with a KeyError
Additionally, check that the connection object is a dictionary before trying
to "get" from it.
Add a test for the capturing of the new S3 information.
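The access pattern, roughly (key names here are illustrative):
```python
def s3_connection_info(connection):
    # Tolerate missing keys and non-dict objects instead of raising KeyError
    if not isinstance(connection, dict):
        return None, None
    return connection.get("access_token"), connection.get("endpoint_url")

print(s3_connection_info({"endpoint_url": "https://s3.example.com"}))
```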
Updates to installer.py did not account for spack monitor, so as currently implemented
there are three cases of failure that spack monitor will not account for. To fix this we add additional
hooks, including an on-cancel hook, and perform a custom action on concretization failure.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
The latest version of `jsonschema` fails if we're not specific about which schema draft
specification we're using. Update all of them to use the latest one (draft-07).
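For example, a schema dict that pins the draft explicitly (the schema content here is only illustrative):
```python
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "properties": {"reuse": {"type": "boolean"}},
    "additionalProperties": False,
}
```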
Our `jsonschema` external won't support Python 3.10, so we need to upgrade it.
It currently generates this warning:
lib/spack/external/jsonschema/compat.py:6: DeprecationWarning: Using or importing the ABCs
from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and
in 3.10 it will stop working
This upgrades `jsonschema` to 3.2.0, the latest version with support for Python 2.7. The next
version after this (4.0.0) drops support for 2.7 and 3.6, so we'll have to wait to upgrade to it.
Dependencies have been added in prior commits.
spack monitor now requires authentication as each build must be associated
with a user, so it no longer makes sense to allow the --monitor-no-auth flag;
this commit removes it.
This PR also slightly changes the behavior in ci_rebuild().
We now still attempt to submit `spack install` results to CDash
even if the initial registration failed due to connection issues.
This commit follows in the spirit of #24299. We do not want `spack install`
to exit with a non-zero status when something goes wrong while attempting to
report results to CDash.
This PR is meant to move code with "business logic" from `spack.cmd.buildcache` to appropriate core modules[^1].
Modifications:
- [x] Add `spack.binary_distribution.push` to create a binary package from a spec and push it to a mirror
- [x] Add `spack.binary_distribution.install_root_node` to install only the root node of a concrete spec from a buildcache (may check the sha256 sum if it is passed in as input)
- [x] Add `spack.binary_distribution.install_single_spec` to install a single concrete spec from a buildcache
- [x] Add `spack.binary_distribution.download_single_spec` to download a single concrete spec from a buildcache to a local destination
- [x] Add `Spec.from_specfile` that constructs a spec given the path of a JSON or YAML spec file
- [x] Removed logic from `spack.cmd.buildcache`
- [x] Removed calls to `spack.cmd.buildcache` in `spack.bootstrap`
- [x] Deprecate `spack buildcache copy` with a message that says it will be removed in v0.19.0
[^1]: The rationale is that commands should be lightweight wrappers of the core API, since that helps with both testing and scripting (easier mocking and no need to invoke `SpackCommand`s in a script).
After this PR an error in a single package while detecting
external software won't abort the entire procedure.
The error is reported to screen as a warning.
Remove a try/catch for an error with no handling. If the affected
code doesn't execute successfully, then the associated variable
is undefined and another (more-obscure) error occurs shortly after.
Remove a custom bootstrapping procedure to
use spack.bootstrap instead
Modifications:
* Reference count the bootstrap context manager
* Avoid SpackCommand to make the bootstrapping
procedure more transparent
* Put back requirement on patchelf being in PATH for unit tests
* Add an e2e test to check bootstrapping patchelf
I think this test should be removed, but when it stays, it should at
least follow the symlink, because it fails for me if I let spack build
patchelf and have a symlink in a view.
Modifications:
- [x] Removed `centos:6` unit test, adjusted vermin checks
- [x] Removed backport of `collections.OrderedDict`
- [x] Removed backport of `functools.total_ordering`
- [x] Removed Python 2.6 specific skip markers in unit tests
- [x] Fixed a few minor Python 2.6 related TODOs in code
Updating the vendored dependencies will be done in separate PRs
* Make CUDA and ROCm architecture conditional
fixes #14337
The variants that specify which architecture to use
for CUDA and ROCm are now conditional on +cuda and
+rocm, respectively.
* cp2k: make all CUDA related variants conditional on +cuda
* Add connection specification to mirror creation
This allows each mirror to contain information about the credentials
used to access it.
Update command and tests based on comments
Switch to only "long form" flags for the s3 connection information.
Use the "any" function instead of checking for an empty list when looking
for s3 connection information.
Split test to use the access token separately from the access id and key.
Use long flag form in test.
Add endpoint_url to available S3 options.
Extend the special parameters for an S3 mirror to accept the
endpoint_url parameter.
Add a test.
* Add connection information per URL not per mirror
Expand the mirror-based connection information to be per-URL.
This will allow a user to specify different S3 connection information
for both the fetch and the push URLs.
Add a parameter for "profile", another way of storing the id/secret pair.
* Switch from "access_profile" to "profile"
Remove the "get_executable" function from the
spack.bootstrap module. Now "flake8", "isort",
"mypy" and "black" will use the same
bootstrapping method as GnuPG.
Currently Spack vendors `pytest` at a version which is three major
versions behind the latest (3.2.5 vs. 6.2.4). We do that since v3.2.5
is the latest version supporting Python 2.6. Remaining so much
behind the currently supported versions though might introduce
some incompatibilities and is surely a technical debt.
This PR modifies Spack to:
- Use the vendored `pytest@3.2.5` only as a fallback solution,
if the Python interpreter used for Spack doesn't provide a newer one
- Be able to parse `pytest --collect-only` in all the different output
formats from v3.2.5 to v6.2.4 and use it consistently for `spack unit-test --list-*`
- Update the unit tests in GitHub Actions to use a more recent `pytest` version
This type of error is skipped:
make[1]: *** [Makefile:222: /tmp/user/spack-stage/.../spack-src/usr/lib/julia/libopenblas64_.so.so] Error 1
but it's useful to have it, especially when a package sets a variable
incorrectly in makefiles
Intel mpi comes with an installation of libfabric (which it needs as a
dependency). It can use other implementations of libfabric at runtime
though, so if you install a package that depends on `mpi` and
`libfabric`, you can specify `intel-mpi+external-libfabric` and ensure
that the Spack-built instance is used (both by `intel-mpi` and the
root).
Apply analogous change to intel-oneapi-mpi.
* Python tests: allow importing weirdly-named modules
e.g. with dashes in name
* SIP tests: allow importing weirdly-named modules
* Skip modules with invalid names
* Changes from review
* Update from review
* Update from review
* Cleanup
* Prevent additional properties to be in the answer set when reusing specs
fixes#27237
The mechanism to reuse concrete specs relies on imposing
the set of constraints stemming from the concrete spec
being reused.
We also need to prevent that other constraints get added
to this set.
See #25249 and https://github.com/spack/spack/pull/27159#issuecomment-958163679.
This adds `spack load --list` as an alias for `spack find --loaded`. The new command is
not as powerful as `spack find --loaded`, as you can't combine it with all the queries or
formats that `spack find` provides. However, it is more intuitively located in the command
structure in that it appears in the output of `spack load --help`.
The idea here is that people can use `spack load --list` for simple stuff but fall back to
`spack find --loaded` if they need more.
- add help to `spack load --list` that references `spack find`
- factor some parts of `spack find` out to be called from `spack load`
- add shell tests
- update docs
Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Richarda Butler <39577672+RikkiButler20@users.noreply.github.com>
Reformulate variant rules so that we minimize both
1. The number of non-default values being used
2. The number of default values not being used
This is crucial for MV variants where we may have
more than one default value
In our tests, we use concrete specs generated from mock packages,
which *only* occur as inputs to the solver. This fixes two problems:
1. We weren't previously adding facts to encode the necessary
`depends_on()` relationships, and specs were unsatisfiable on
reachability.
2. Our hash lookup for reconstructing the DAG does not
consider that a hash may have come from the inputs.
Concrete specs that are already installed or that come from a buildcache
may have compilers and variant settings that we do not recognize, but that
shouldn't prevent reuse (at least not until we have a more detailed compiler
model).
- [x] make sure compiler and variant consistency rules only apply to
built specs
- [x] don't validate concrete specs on input, either -- they're concrete
and we shouldn't apply today's rules to yesterday's build
In switching to hash facts for concrete specs, we lost the transitive facts
from dependencies. This was fine for solves, because they were implied by
the imposed constraints from every hash. However, for `spack diff`, we want
to see what the hashes mean, so we need another mode for `spec_clauses()` to
show that.
This adds an `expand_hashes` argument to `spec_clauses()` that allows us to
output *both* the hashes and their implications on dependencies. We use
this mode in `spack diff`.
- [x] Get rid of forgotten maximize directive.
- [x] Simplify variant handling
- [x] Fix bug in treatment of defaults on externals (don't count
non-default variants on externals against them)
Variants in concrete specs are "always" correct -- or at least we assume
them to be, because they were concretized before. Their variants need not match
the current version of the package.
Multi-valued variants previously maximized default values to handle
cases where the default contained two values, e.g.:
variant("foo", default="bar,baz")
This is because previously we were minimizing non-default values, and
`foo=bar`, `foo=baz`, and `foo=bar,baz` all had the same score, as
none of them had any "non-default" values.
This commit changes the approach and considers a non-default value
to be either a value set to something not default *or* the absence
of a default value from the set value. This allows multi- and
single-valued variants to be handled the same way, with the same
minimization criterion. It also means that the "best" value for every
optimization criterion is now zero, which allows us to make useful
assumptions about the optimization criteria.
Minimizing builds is tricky. We want a minimizing criterion because
we want to reuse the available installs, but we also want things that
have to be built to stick to *default preferences* from the package
and from the user. We therefore treat built specs differently and
apply a different set of optimization criteria to them. Spack's *first*
priority is to reuse what it can, but if it builds something, the built
specs will respect defaults and preferences.
This is implemented by bumping the priority of optimization criteria
for built specs -- so that they take precedence over the otherwise
topmost-priority criterion to reuse what is installed.
The scheme relies on all of our optimization criteria being minimizations.
That is, we need the case where all specs are reused to be better than
any built spec could be. Basically, if nothing is built, all the build
criteria are zero (the best possible) and the number of built packages
dominates. If something *has* to be built, it must be strictly worse
than full reuse, because:
1. it increases the number of built specs
2. it must have either zero or some positive number for all criteria
Our optimization criteria effectively sum into two buckets at once to
accomplish this. We use a `build_priority()` number to shift the
priority of optimization criteria for built specs higher.
The constraints in the `spack diff` test were very specific and assumed
a lot about the structure of what was being diffed. Relax them a bit to
make them more resilient to changes.
Make the first minimization conditional on whether `--reuse` is enabled in the solve.
If `--reuse` is not enabled, there will be nothing in the set to minimize and the
objective function (for this criterion) will be 0 for every answer set.
Many of the integrity constraints in the concretizer are there to restrict how solves are done, but
they ignore that past solves may have had different initial conditions. For example, for things
we're building, we want the allowed variants to be restricted to those currently in Spack packages,
but if we are reusing a concrete spec, we need to be flexible about names that may have existed in
old packages.
Similarly, restrictions around compatibility of OS's, compiler versions, compiler OS support, etc.
are really only about what is supported by the *current* set of compilers/build tools known to
Spack, not about what we may get from concrete specs.
- [x] restrict certain integrity constraints to only apply to packages that we need to build, and
omit concrete specs from consideration.
The OS logic in the concretizer is still the way it was in the first version.
Defaults are implemented in a fairly inflexible way using straight logic. Most
of the other sections have been reworked to leave these kinds of decisions to
optimization. This commit does that for OS's as well.
As with targets, we optimize for target matches. We also try to optimize for
OS matches between nodes. Additionally, this commit adds the notion of
"OS compatibility" where we allow for builds to depend on binaries for certain
other OS's. e.g, for macos, a bigsur build can depend on an already installed
(concrete) catalina build. One cool thing about this is that we can declare
additional compatible OS's later, e.g. CentOS and RHEL.
If we don't rename Spack will fail with:
```
ImportError: cannot bootstrap the "clingo" Python module from spec "clingo-bootstrap@spack+python %gcc target=x86_64" due to the following failures:
'spack-install' raised ValueError: Invalid config scope: 'bootstrap'. Must be one of odict_keys(['_builtin', 'defaults', 'defaults/cray', 'bootstrap/cray', 'disable_modules', 'overrides-0'])
Please run `spack -d spec zlib` for more verbose error messages
```
in case bootstrapping from binaries fails and we are
falling back to bootstrapping from sources.
A common question from users has been how to model variants
that are new in new versions of a package, or variants that are
dependent on other variants. Our stock answer so far has been
an unsatisfying combination of "just have it do nothing in the old
version" and "tell Spack it conflicts".
This PR enables conditional variants, on any spec condition. The
syntax is straightforward, and matches that of previous features.
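Roughly, the syntax looks like this (the `variant` stub below only stands in for the real directive to show its shape):
```python
def variant(name, default=None, when=None, **kwargs):
    # stand-in for spack.directives.variant, just to show the call shape
    print("variant {!r} (when={!r})".format(name, when))

class Example(object):  # stand-in for a Spack package class
    variant("shared", default=True, when="@2.0:")       # only exists in 2.0 and later
    variant("cuda_arch", default="none", when="+cuda")  # only meaningful with +cuda
```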
* GnuPG: allow bootstrapping from buildcache and sources
* Add a test to bootstrap GnuPG from binaries
* Disable bootstrapping in tests
* Add e2e test to bootstrap GnuPG from sources on Ubuntu
* Add e2e test to bootstrap GnuPG on macOS
This PR adds error message sentinels to the clingo solve, attached to each of the rules that could fail a solve. The unsat core is then restricted to these messages, which makes the minimization problem tractable. Errors that can only be generated by a bug in the logic program or generating code are prefaced with "Internal error" to make clear to users that something has gone wrong on the Spack side of things.
* minimize unsat cores manually
* only errors messages are choices/assumptions for performance
* pre-check for unreachable nodes
* update tests for new error message
* make clingo concretization errors show up in cdash reports fully
* clingo: make import of clingo.ast parsing routines robust to clingo version
Older `clingo` has `parse_string`; newer `clingo` has `parse_files`. Make the
code work with both.
* make AST access functions backward-compatible with clingo 5.4.0
Clingo AST API has changed since 5.4.0; make some functions to help us
handle both versions of the AST.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
After #26608 I got a report about missing rpaths when building a
downstream package independently using a spack-installed toolchain
(@tmdelellis). This occurred because the spack-installed libraries were
being linked into the downstream app, but the rpaths were not being
manually added. Prior to #26608 autotools-installed libs would retain
their hard-coded path and would thus propagate their link information
into the downstream library on mac.
We could solve this problem *if* the mac linker (ld) respected
`LD_RUN_PATH` like it does on GNU systems, i.e. adding `rpath` entries
to each item in the environment variable. However on mac we would have
to manually add rpaths either using spack's compiler wrapper scripts or
manually (e.g. using `CMAKE_BUILD_RPATH` and pointing to the libraries of
all the autotools-installed spack libraries).
The easier and safer thing to do for now is to simply stop changing the
dylib IDs.
The `--generic` argument allows printing the best generic target for the
current machine. This can be quite handy when wanting to find the
generic architecture to use when building a shared software stack for
multiple machines.
This PR adds a "spack tags" command to output package tags or
(available) packages with those tags. It also ensures each package
is listed in the tag cache ONLY ONCE per tag.
- [x] Allow adding enumerated types and types whose default value is forbidden by the schema
- [x] Add a test for using enumerated types in the tests for `spack config add`
- [x] Make `config add` tests use the `mutable_config` fixture so they do not
affect other tests
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
If you don't format `spack.yaml` correctly, `spack config edit` still fails and
you have to edit your `spack.yaml` manually.
- [x] Add some code to `_main()` to defer `ConfigFormatError` when loading the
environment, until we know what command is being run.
- [x] Make `spack config edit` use `SPACK_ENV` instead of the config scope
object to find `spack.yaml`, so it can work even if the environment is bad.
Co-authored-by: scheibelp <scheibel1@llnl.gov>
`spack config get <section>` was erroneously returning just the `spack.yaml`
for the environment.
It should return the combined configuration for that section (including
anything from `spack.yaml`), even in an environment.
- [x] reorder conditions in `cmd/config.py` to fix
`spack --debug config edit` was not working properly -- it would not show a
stack trace for configuration errors.
- [x] Rework `_main()` and add some notes for maintainers on where things need
to go for configuration to work properly.
- [x] Move config setup to *after* command-line parsing is done.
Co-authored-by: scheibelp <scheibel1@llnl.gov>
`main()` has grown, and in some cases code that can generate errors has gotten
outside the top-level try/catch in there. This means that simple errors like
config issues give you large stack traces, which shouldn't happen without
`--debug`.
- [x] Split `main()` into `main()` for the top-level error handling and
`_main()` with all logic.
There were some loose ends left in #26735 that cause errors when
using `SPACK_DISABLE_LOCAL_CONFIG`.
- [x] Fix hard-coded `~/.spack` references in `install_test.py` and `monitor.py`
Also, if `SPACK_DISABLE_LOCAL_CONFIG` is used, there is the issue that
`$user_config_path`, when used in configuration files, makes no sense,
because there is no user config scope.
Since we already have `$user_cache_path` in configuration files, and since there
really shouldn't be *any* data stored in a configuration scope (which is what
you'd configure in `config.yaml`/`bootstrap.yaml`/etc.), this just removes
`$user_config_path`.
There will *always* be a `$user_cache_path`, as Spack needs to write files, but
we shouldn't rely on the existence of a particular configuration scope in the
Spack code, as scopes are configurable, both in number and location.
- [x] Remove `$user_config_path` substitution.
- [x] Fix reference to `$user_config_path` in `etc/spack/defaults/bootstrap.yaml`
to refer to `$user_cache_path`, which is where it was intended to be.
* Deactivate previous env before activating new one
Currently on develop you can run `spack env activate` multiple times to switch
between environments, but they leave traces, even though Spack only supports
one active environment at a time.
Currently:
```console
$ spack env create a
$ spack env create b
$ spack env activate -p a
[a] $ spack env activate -p b
[b] [a] $ spack env activate -p b
[a] [b] [a] $ spack env activate -p a
[a] [b] [c] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
/path/to/environments/b/.spack-env/view/share/man
/path/to/environments/b/.spack-env/view/man
```
This PR fixes that:
```console
$ spack env activate -p a
[a] $ spack env activate -p b
[b] $ spack env activate -p a
[a] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
```
* Drastically improve YamlFilesystemView file removal via batching
The `remove_file` routine has to check if the file is owned by multiple packages, so it doesn't
remove necessary files. This is done by the `get_all_specs` routine, which walks the entire
package tree. With large numbers of packages on shared file systems, this can take seconds
per file tree traversal, which adds up extremely quickly. For example, a single deactivate
of a largish python package in our software stack on GPFS took approximately 40 minutes.
This patch simply replaces `remove_file` with a batch `remove_files` routine. This routine
removes a list of files rather than a single file, requiring only one traversal per batch. In
practice this means a package can be removed in seconds rather than potentially hours,
essentially a ~100x speedup (ignoring initial deactivation logic, which takes about 3 minutes
in our test setup).
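Roughly, the batching looks like this (the helper names below are hypothetical, not the actual view code); the point is that the expensive ownership scan moves out of the per-file loop:
```python
import os

def remove_files(view_root, files, all_specs, spec_files):
    """Remove a batch of files from a view with a single ownership scan.

    `all_specs` / `spec_files` stand in for the expensive get_all_specs()
    traversal, which now runs once per batch instead of once per file.
    """
    owned = set()
    for spec in all_specs:
        owned.update(spec_files(spec))

    for rel_path in files:
        if rel_path in owned:
            continue  # still needed by another installed package
        full_path = os.path.join(view_root, rel_path)
        if os.path.lexists(full_path):
            os.remove(full_path)
```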
* Fix sbang hook for non-writable files
PR #26793 seems to have broken the sbang hook for files with missing
write permissions. Installing perl now breaks with the following error:
```
==> [2021-10-28-12:09:26.832759] Error: PermissionError: [Errno 13] Permission denied: '$SPACK/opt/spack/linux-fedora34-zen2/gcc-11.2.1/perl-5.34.0-afuweplnhphcojcowsc2mb5ngncmczk4/bin/cpanm'
```
Temporarily add write permissions to the original file so it can be
overwritten with the patched one.
And test that file permissions are preserved in sbang even for non-writable files
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
When relocating a binary distribution, Spack only checks files to see
if they are a link that needs to be relocated. Directories can be
such links as well, however, and need to undergo the same checks
and potential relocation.
`spack list` tests are not using mock packages for some reason, and many
are marked as potentially slow. This isn't really necessary; we don't need
6,000 packages to test the command.
- [x] update tests to use `mock_packages` fixture
- [x] remove `maybeslow` annotations
Currently Spack reads full files containing shebangs to memory as
strings, meaning Spack would have to guess their encoding. Currently
Spack has a fixed guess of UTF-8.
This is unnecessary, since e.g. the Linux kernel does not assume an
encoding on paths at all, it's just bytes and some delimiters on the
byte level.
This commit does the following:
1. Shebangs are treated as bytes, so that e.g. latin1 encoded files do
not throw UnicodeEncoding errors, and adds a test for this.
2. No more bytes than necessary are read to memory, we only have to read
until the first newline, and from there on we can copy the file byte by
byte instead of decoding and re-encoding text.
3. We cap the number of bytes read to 4096, if no newline is found
before that, we don't attempt to patch it.
4. Add support for luajit too.
This should make Spack both more efficient and usable for non-UTF8
files.
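A minimal sketch of the byte-level shebang read described above (not the exact hook code):
```python
def read_shebang(path, limit=4096):
    """Return the shebang line as raw bytes, without assuming an encoding.

    Returns None if the file should not be patched: either it does not
    start with b'#!' or no newline is found within `limit` bytes.
    """
    with open(path, 'rb') as f:
        chunk = f.read(limit)
    if not chunk.startswith(b'#!'):
        return None
    newline = chunk.find(b'\n')
    if newline == -1:
        return None  # cap reached without a newline: leave the file alone
    return chunk[:newline]
```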
Spack's `system` and `user` scopes provide ways for administrators and
users to set global defaults for all Spack instances, but for use cases
where one wants a clean Spack installation, these scopes can be undesirable.
For example, users may want to opt out of global system configuration, or
they may want to ignore their own home directory settings when running in
a continuous integration environment.
Spack also, by default, keeps various caches and user data in `~/.spack`,
but users may want to override these locations.
Spack provides three environment variables that allow you to override or
opt out of configuration locations:
* `SPACK_USER_CONFIG_PATH`: Override the path to use for the
`user` (`~/.spack`) scope.
* `SPACK_SYSTEM_CONFIG_PATH`: Override the path to use for the
`system` (`/etc/spack`) scope.
* `SPACK_DISABLE_LOCAL_CONFIG`: set this environment variable to completely
disable *both* the system and user configuration directories. Spack will
only consider its own defaults and `site` configuration locations.
And one that allows you to move the default cache location:
* `SPACK_USER_CACHE_PATH`: Override the default path to use for user data
(misc_cache, tests, reports, etc.)
With these settings, if you want to isolate Spack in a CI environment, you can do this:
```console
export SPACK_DISABLE_LOCAL_CONFIG=true
export SPACK_USER_CACHE_PATH=/tmp/spack
```
This is a stop-gap approach until we have figured out how to deal with
the system and user config scopes more generally, as there are plans to
potentially / eventually get rid of them.
**User config**
Spack is a bit of a pain when you have:
- a shared $HOME folder across different systems.
- multiple Spack versions on the same system.
**System config**
- On shared systems with a versioned programming environment / toolkit,
system administrators want to provide config for each version (e.g.
21.09, 21.10) of the programming environment, and the user Spack
instance should be able to pick this up without a steep learning
curve.
- On shared systems the user should be able to opt out of the
hard-coded config scope in /etc/spack, since it may be incompatible
with their particular instance. Currently Spack can only opt out of all
config scopes through overrides with `"config:":`, `"packages:":`, but that
also drops the defaults config, which would have to be repeated, which
is undesirable, especially the lengthy packages.yaml.
An example use case is: having config in this folder:
```
/path/to/programming/environment/{version}/{compilers,packages}.yaml
```
and have `module load spack-system-config` set the variable
```
SPACK_SYSTEM_CONFIG_PATH=/path/to/programming/environment/{version}
```
where the user no longer has to worry about what `{version}` they are
on.
**Continuous integration**
Finally, there is the use case of continuous integration, which may
clone an arbitrary Spack version, which optimally should not pick up
system or user config from the previous run (as may happen in
classical bare-metal, non-containerized Jenkins pipelines with
filesystem side effects). In fact this is very similar to how Spack itself
tries to avoid picking up system dependencies during builds...
**But environments solve this?**
- You could do `include`s in environment files to get similar behavior
to the spack_system_config_path example, but environments:
1) require paths to individual config files, not directories, and
2) fail if a listed config file does not exist.
- They allow you to override config scopes, but this is generally too
rigid, as it requires you to repeat the default config (in
particular packages.yaml), which defeats the point of layered config.
Co-authored-by: Tom Scogland <tscogland@llnl.gov>
Co-authored-by: Tim Fuller <tjfulle@sandia.gov>
Co-authored-by: Steve Leak <sleak@lbl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Any spec satisfying a default will be symlinked to `default`
If multiple specs have modulefiles in the same directory and satisfy
configured module defaults, then whichever was written last will be
default.
This PR permits to specify the `url` and `ref` of the Spack instance used in a container recipe simply by expanding the YAML schema as outlined in #20442:
```yaml
container:
images:
os: amazonlinux:2
spack:
ref: develop
resolve_sha: true
```
The `resolve_sha` option, if true, verifies the `ref` by cloning the Spack repository in a temporary directory and transforming any tag or branch name to a commit sha. When this new ability is leveraged an additional "bootstrap" stage is added, which builds an image with Spack setup and ready to install software. The Spack repository to be used can be customized with the `url` keyword under `spack`.
Modifications:
- [x] Permit to pin the version of Spack, either by branch or tag or sha
- [x] Added a few new OSes (centos:8, amazonlinux:2, ubuntu:20.04, alpine:3, cuda:11.2.1)
- [x] Permit to print the bootstrap image as a standalone
- [x] Add documentation on the new part of the schema
- [x] Add unit tests for different use cases
1. Currently it prints not just the spec name, but the dependencies +
their variants + their compilers + their architectures + ...
2. It's clear from the context what spec the message applies to, so,
let's not print the spec at all.
These three rules in `concretize.lp` are overly complex:
```prolog
:- not provider(Package, Virtual),
provides_virtual(Package, Virtual),
virtual_node(Virtual).
```
```prolog
:- provides_virtual(Package, V1), provides_virtual(Package, V2), V1 != V2,
provider(Package, V1), not provider(Package, V2),
virtual_node(V1), virtual_node(V2).
```
```prolog
provider(Package, Virtual) :- root(Package), provides_virtual(Package, Virtual).
```
and they can be simplified to just:
```prolog
provider(Package, Virtual) :- node(Package), provides_virtual(Package, Virtual).
```
- [x] simplify virtual rules to just one implication
- [x] rename `provides_virtual` to `virtual_condition_holds`
fixes #26866
This semantics fits with the way Spack currently treats providers of
virtual dependencies. It needs to be revisited when #15569 is reworked
with a new syntax.
* py-vermin: add latest version 1.3.1
* Exclude line from Vermin since version is already being checked for
Vermin 1.3.1 finds that `encoding` kwarg of builtin `open()` requires Python 3+.
The OS should only interpret shebangs, if a file is executable.
Thus, there should be no need to modify files where no execute bit is set.
This solves issues that are e.g. encountered while packaging software as
COVISE (https://github.com/hlrs-vis/covise), which includes example data
in Tecplot format. The sbang post-install hook is applied to every installed
file that starts with the two characters #!, but this fails on the binary Tecplot
files, as they happen to start with #!TDV. Decoding them with UTF-8 fails
and an exception is thrown during post_install.
Co-authored-by: Martin Aumüller <aumuell@reserv.at>
This commit contains changes to support Google Cloud Storage
buckets as mirrors, meant for hosting Spack build-caches. This
feature is beneficial for folks that are running infrastructure on
Google Cloud Platform. On public cloud systems, resources are
ephemeral and in many cases, installing compilers, MPI flavors,
and user packages from scratch takes up considerable time.
Giving users the ability to host a Spack mirror that can store build
caches in GCS buckets offers a clean solution for reducing
application rebuilds for Google Cloud infrastructure.
Co-authored-by: Joe Schoonover <joe@fluidnumerics.com>
* Update cray architecture detection for milan
Update the cray architecture module table with x86-milan -> zen3
Make cray architecture more robust to back off from frontend
architecture to a recent ancestor if necessary. This should make
future cray updates less painful for users.
Co-authored-by: Gregory Becker <becker33.llnl.gov>
Co-authored-by: Todd Gamblin <gamblin2@llnl.gov>
1. Don't use 16 digits of precision for the seconds; round to 2 digits after the decimal point
2. Don't print if we don't concretize (i.e. `spack concretize` without `-f` doesn't have to tell me it did nothing in `0.00` seconds)
* Speed-up environment concretization with a process pool
We can exploit the fact that the environment is concretized
separately and use a pool of processes to concretize it.
* Add module spack.util.parallel
Module includes `pool` and `parallel_map` abstractions,
along with implementation details for both.
* Add a new hash type to pass specs across processes
* Add tty msg with concretization time
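Conceptually, the speed-up comes from mapping the per-root concretization over a process pool; a simplified sketch (the real `spack.util.parallel` abstractions add more machinery than this):
```python
import multiprocessing

def _concretize_root(abstract_spec):
    # Each worker process concretizes one root spec independently.
    return abstract_spec.concretized()

def concretize_separately(abstract_specs, processes=None):
    """Concretize each root in its own process and collect the results."""
    with multiprocessing.Pool(processes=processes) as pool:
        return pool.map(_concretize_root, abstract_specs)
```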
We use POSIX `patch` to apply patches to files when building, but
`patch` by default prompts the user when it looks like a patch
has already been applied. This means that:
1. If a patch lands in upstream and we don't disable it
in a package, the build will start failing.
2. `spack develop` builds (which keep the stage around) will
fail the second time you try to use them.
To avoid that, we can run `patch` with `-N` (also called
`--forward`, but the long option is not in POSIX). `-N` causes
`patch` to just ignore patches that have already been applied.
This *almost* makes `patch` idempotent, except that it returns 1
when it detects already applied patches with `-N`, so we have to
look at the output of the command to see if it's safe to ignore
the error.
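Sketched with plain `subprocess` (the actual code goes through Spack's `Executable` wrapper, and the exact "already applied" message can vary between `patch` implementations):
```python
import subprocess

def forward_patch(patch_file, working_dir):
    """Run `patch -N` and treat 'already applied' as success."""
    proc = subprocess.run(
        ['patch', '-N', '-p1', '-i', patch_file],
        cwd=working_dir, capture_output=True, text=True)
    if proc.returncode == 0:
        return
    # With -N, patch exits 1 when it skips an already-applied hunk; only
    # then is the non-zero status safe to ignore.
    if proc.returncode == 1 and 'Skipping patch' in proc.stdout:
        return
    raise RuntimeError(proc.stderr or proc.stdout)
```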
- [x] Remove non-POSIX `-s` option from `patch` call
- [x] Add `-N` option to `patch`
- [x] Ignore error status when `patch` returns 1 due to `-N`
- [x] Add tests for applying a patch twice and applying a bad patch
- [x] Tweak `spack.util.executable` so that it saves the error that
*would have been* raised with `fail_on_error=True`. This lets
us easily re-raise it.
Co-authored-by: Greg Becker <becker33@llnl.gov>
* relocate: call install_name_tool less
* zstd: fix race condition
Multiple times on my mac, trying to install in parallel led to failures
from multiple tasks trying to simultaneously create `$PREFIX/lib`.
* PackageMeta: simplify callback flush
* Relocate: use spack.platforms instead of platform
* Relocate: code improvements
* fix zstd
* Automatically fix rpaths for packages on macOS
* Only change library IDs when the path is already in the rpath
This restores the hardcoded library path for GCC.
* Delete nonexistent rpaths and add more testing
* Relocate: Allow @executable_path and @loader_path
* downgrade_docutils_version
* invalid version
* Update requirements.txt
* Improve spelling and shorten the reference link
* Update spack.yaml
* update version requirement
* update version to maximum of 0.16
Co-authored-by: bernhardkaindl <43588962+bernhardkaindl@users.noreply.github.com>
Currently Spack keeps track of the origin in the code of any
modification to the environment variables. This is very slow
and enabled unconditionally even in code paths where the
origin of the modification is never queried.
The only place where we inspect the origins of environment
modifications is before we start a build, to warn if there's an override
of the type `e.set(...)` after incremental changes like
`e.append_path(..)`, which is a "suspicious" change.
This is very rare though.
If an override like this ever happens, it might mean a package is
broken. If that leads to build errors, we can just ask the user to run
`spack -d install ...` and check the warnings issued by Spack to find
the origins of the problem.
It can be frustrating to successfully run `spack test run --alias <name>` only to find you cannot get the results because you already use `<name>` in some previous stand-alone test execution. This PR prevents that from happening.
Using the Spec.constrain method doesn't work since it might
trigger a repository lookup which could break our directives
and triggers a circular import error.
To fix that we introduce a function to merge abstract anonymous
specs, based only on package names, which does not perform any
lookup in the repository.
The buildcache is now extracted in a temporary folder within the current store,
moved to its final place and relocated.
"spack clean -s" has been extended to also clean the temporary extraction directory.
Add hardlinks with absolute paths for libraries in the corge, garply and quux packages
to detect incorrect handling of hardlinks in tests.
The `find` command was missing for the examples forcing colorized output. Without this (or another suitable) command, spack produces output that is not using any color. Thus, without the `find` command one does not see any difference between forced colorized and non-colorized output.
when deployed on kubernetes, the server sends back permanent redirect responses.
This is elegantly handled by the requests library, but not urllib that we have
to use here, so I have to manually handle it by parsing the exception to
get the Location header, and then retrying the request there.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
The ASP-based solver maximizes the number of values in multi-valued
variants (if other higher order constraints are met), to avoid cases
where only a subset of the values that have been specified on the
command line or imposed by another constraint are selected.
Here we swap the priority of this optimization target with the
selection of the default providers, to avoid unexpected results
like the one in #26598
Seems like https://bugs.python.org/issue29699 is relevant. Better to
just ignore errors when removing the tmpdir. The OS will remove it
anyway.
Errors are happening randomly from tests that are using this fixture.
TL;DR: there are matching groups trying to match 1 or more occurrences of
something. We don't use the matching group. Therefore it's sufficient to test
for 1 occurrence. This reduces quadratic complexity to linear time.
---
When parsing logs of an mpich build, I'm getting a 4 minute (!!) wait
with 16 threads for regexes to run:
```
In [1]: %time p.parse("mpich.log")
Wall time: 4min 14s
```
That's really unacceptably slow...
After some digging, it seems a few regexes tend to have `O(n^2)` scaling
where `n` is the string / log line length. I don't think they *necessarily*
should scale like that, but it seems that way. The common pattern is this
```
([^:]+): error
```
which matches `: error` literally, and then one or more non-colons before that. So
for a log line like this:
```
abcdefghijklmnopqrstuvwxyz: error etc etc
```
Any of these are potential group matches when using `search` in Python:
```
abcdefghijklmnopqrstuvwxyz
bcdefghijklmnopqrstuvwxyz
cdefghijklmnopqrstuvwxyz
⋮
yz
z
```
but clearly the capture group should return the longest match.
My hypothesis is that Python has a very bad implementation of `search`
that somehow considers all of these, even though it can be implemented
in linear time by scanning for `: error` first, and then greedily expanding
the longest possible `[^:]+` match to the left. If Python indeed considers
all possible matches, then with `n` matches of length `1 .. n` you
see the `O(n^2)` slowness (I verified this by replacing `+` with `{1,k}`
and doubling `k`: the execution time indeed doubles).
This PR fixes this by removing the `+`, effectively changing the
O(n^2) worst case into O(n).
The reason we are fine with dropping `+` is that we don't use the
capture group anywhere, so, we just ensure `:: error` is not a match
but `x: error` is.
After going from O(n^2) to O(n), the 15MB mpich build log is parsed
in `1.288s`, so about 200x faster.
Just to be sure I've also updated `^CMake Error.*:` to `^CMake Error`,
so that it does not match with all the possible `:`'s in the line.
Another option is to use `.*?` there to make it quit scanning as soon as
possible, but what line that starts with `CMake Error` that does not have
a colon is really a false positive...
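The change itself is tiny; an illustrative sketch of the before/after patterns and the behavior we rely on:
```python
import re

# Before: the unused capture group can make `search` O(n^2) on long lines.
slow = re.compile(r'([^:]+): error')
# After: one occurrence is enough since the group is never used;
# ':: error' still fails to match while 'x: error' still matches.
fast = re.compile(r'[^:]: error')

line = 'abcdefghijklmnopqrstuvwxyz: error etc etc'
assert slow.search(line) and fast.search(line)
assert fast.search(':: error') is None
```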
Installing packages with a lot of dependencies does not have an easy way
of judging the current progress (apart from running `spack spec -I pkg`
in another terminal). This change allows Spack to update the terminal's
title with status information, including its current progress as well as
information about the current and total number of packages.
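The mechanism for updating the title is the standard xterm escape sequence; a minimal sketch (not the exact implementation, which is more careful about when it writes):
```python
import sys

def set_terminal_title(text):
    """Write the xterm title escape sequence, only when stdout is a tty."""
    if sys.stdout.isatty():
        sys.stdout.write('\033]0;{0}\007'.format(text))
        sys.stdout.flush()

# e.g. set_terminal_title('spack: 12/48 packages installed')
```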
- Do not store the full list of environment variables in
<prefix>/.spack/spack-build-env.txt because it may contain user secrets.
- Only store environment variable modifications upon installation.
- Variables like PATH may still contain user and system paths to make
spack-build-env.txt sourceable. Variables containing paths are
modified through prepending/appending, and if we don't apply these
to the current environment variable, we end up with statements like
`export PATH=/path/to/spack/bin` with system paths missing, meaning
no system binaries are in the path, which is a bad user experience.
- Do write the full environment to spack-build-env.txt in the staging dir,
but ensure it is readonly for the current user, to make it a bit safer
on shared systems.
Creates an environment in a temporary directory and activates it, which
is useful for a quick ephemeral environment:
```
$ spack env activate -p --temp
[spack-1a203lyg] $ spack add zlib
==> Adding zlib to environment /tmp/spack-1a203lyg
==> Updating view at /tmp/spack-1a203lyg/.spack-env/view
```
The DB should be what is trusted for certain operations.
If it is not present when read we should assume the
corresponding store is empty, rather than trying a
write operation during a read.
* Add a unit test
* Document what needs to be there in tests
When a symlink to a license file exists but is broken, the license symlink post-install hook fails
because os.path.exists() checks the existence of the target not the symlink itself.
os.path.lexists() is the proper function to use.
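A small standalone demonstration of the difference:
```python
import os

os.symlink('/nonexistent/target', 'license.lic')  # broken symlink
assert not os.path.exists('license.lic')   # follows the link: False
assert os.path.lexists('license.lic')      # checks the link itself: True
```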
Environments push/pop scopes upon activation. If some lazily
evaluated value depending on the current configuration was
computed and cached before the scopes are pushed / popped,
there will be an inconsistency in the current state.
This PR fixes the issue for stores, but it would be better
to move away from global state.
The `spack.architecture` module contains an `Arch` class that is very similar to `spack.spec.ArchSpec` but points to platform, operating system and target objects rather than "names". There's a TODO in the class since 2016:
abb0f6e27c/lib/spack/spack/architecture.py (L70-L75)
and this PR basically addresses that. Since there are just a few places where the `Arch` class was used, here we query the relevant platform objects where they are needed directly from `spack.platforms`. This permits to clean the code from vestigial logic.
Modifications:
- [x] Remove the `spack.architecture` module and replace its use by `spack.platforms`
- [x] Remove unneeded tests
* Use gnuconfig package for config file replacement for RISC-V.
This extends the changes in #26035 to handle RISC-V. Before this change,
many packages fail to configure on riscv64 due to config.guess being too
old to know about RISC-V. This is seen out of the box when clingo fails
to build from source due to pkgconfig failing to configure, throwing
error: "configure: error: cannot guess build type; you must specify one".
* Add riscv64 architecture
* Update vendored archspec from upstream project.
These archspec updates include changes needed to support riscv64.
* Update archspec's __init__.py to reflect the commit hash of archspec being used.
Cherry-picked from #25564 so this is standalone.
With this PR we can activate an environment in Spack itself, without computing changes to environment variables only necessary for "shell aware" env activation.
1. Activating an environment:
```python
spack.environment.activate(Environment(xyz)) -> None
```
this basically just sets `_active_environment` and modifies some config scopes.
2. Activating an environment **and** getting environment variable modifications for the shell:
```python
spack.environment.shell.activate(Environment(xyz)) -> EnvironmentModifications
```
This should make it easier/faster to do unit tests and scripting with spack, without the cli interface.
* Isolate bootstrap configuration from user configuration
* Search for build dependencies automatically if bootstrapping from sources
The bootstrapping logic will search for build dependencies
automatically if bootstrapping anything from sources. Any
external spec, if found, is written in a scope that is specific
to bootstrapping.
* Don't clean the bootstrap store with "spack clean -a"
* Copy bootstrap.yaml and config.yaml in the bootstrap area
- [x] Our wrapper error messages are sometimes hard to differentiate from other build
output, so prefix all errors from `die()` with '[spack cc] ERROR:'
- [x] The error we raise when running, say, `fc` without a Fortran compiler was not
clear enough. Clarify the message and the comment.
This converts everything in cc to POSIX sh, except for the parts currently
handled with bash arrays. Tests are still passing.
This version tries to be as straightforward as possible. Specifically, most conversions
are kept simple -- convert ifs to ifs, handle indirect expansion the way we do in
`setup-env.sh`, only mess with the logic in `cc`, and don't mess with the python code at
all.
The big refactor is for arrays. We can't rely on bash's nice arrays and be ignorant of
separators anymore. So:
1. To avoid complicated separator logic, there are three types of lists. They are:
* `$lsep`-separated lists, which end with `_list`. `lsep` is customizable, but we
picked `^G` (alarm bell) for `$lsep` because it's ASCII and it's unlikely that it
would actually appear in any arguments. If we need to get fancier (and I will lose
faith in the world if we do) then we could consider XON or XOFF.
* `:`-separated directory lists, which end with `_dirs`, `_DIRS`, `PATH`, or `PATHS`
* Whitespace-separated lists (like flags), which can have any other name.
Whitespace and colon-separated lists come with the territory with PATHs from env
vars and lists of flags. `^G` separated lists are what we use for most internal
variables, b/c it's more likely to work.
2. To avoid subshells, use a bunch of functions that do dirty `eval` stuff instead. This
adds 3 functions to deal with lists:
* `append LISTNAME ELEMENT [SEP]` will put `ELEMENT` at the end of the list called
`LISTNAME`. You can optionally say what separator you expect to use. Note that we
are taking advantage of everything being global and passing lists by name.
* `prepend LISTNAME ELEMENT [SEP]` like append, but puts `ELEMENT` at the start of
`LISTNAME`
* `extend LISTNAME1 LISTNAME2 [PREFIX]` appends everything in LISTNAME2 to
LISTNAME1, and optionally prepends `PREFIX` to every element (this is useful for
things like `-I`, `-isystem `, etc.)
* `preextend LISTNAME1 LISTNAME2 [PREFIX]` prepends everything in LISTNAME2 to
LISTNAME1 in order, and optionally prepends `PREFIX` to every element.
The routines determine the separator for each argument by its name, so we don't have to
pass around separators everywhere. Amazingly, as long as you do not expand variables'
values within an `eval` environment, you can do all this and still preserve quoting.
When iterating over lists, the user of this API still has to set and unset `IFS`
properly.
We ended up having to ignore shellcheck SC2034 (unused variable), because using evals
all over the place means that shellcheck doesn't notice that our list variables are
actually used.
So far this is looking pretty good. I took the most complex unit test I could find
(which runs a sample link line) and ran the same command line 200 times in a shell
script. Times are roughly as follows:
For this invocation:
```console
$ bash -c 'time (for i in `seq 1 200`; do ~/test_cc.sh > /dev/null; done)'
```
I get the following performance numbers (the listed shells are what I put in `cc`'s
shebang):
**Original**
* Old version of `cc` with arrays and `bash v3.2.57` (macOS builtin): `4.462s` (`.022s` / call)
* Old version of `cc` with arrays and `bash v5.1.8` (Homebrew): `3.267s` (`.016s` / call)
**Using many subshells (#26408)**
* with `bash v3.2.57`: `25.302s` (`.127s` / call)
* with `bash v5.1.8`: `27.801s` (`.139s` / call)
* with `dash`: `15.302s` (`.077s` / call)
This version didn't seem to work with zsh.
**This PR (no subshells)**
* with `bash v3.2.57`: `4.973s` (`.025s` / call)
* with `bash v5.1.8`: `4.984s` (`.025s` / call)
* with `zsh`: `2.995s` (`.015s` / call)
* with `dash`: `1.890s` (`.0095s` / call)
Dash, with the new posix design, is easily the winner.
So there are several interesting things to note here:
1. Running the posix version in `bash` is slower than using `bash` arrays. That is to be
expected because it's doing a bunch of string processing where it likely did not have
to before, at least in `bash`.
2. `zsh`, at least on macOS, is significantly faster than the ancient `bash` they ship
with the system. Using `zsh` with the new version also makes the posix wrappers
faster than `develop`. So it's worth preferring `zsh` if we have it. I suppose we
should also try this with newer `bash` on Linux.
3. `bash v5.1.8` seems to be significantly faster than the old system `bash v3.2.57` for
arrays. For straight POSIX stuff, it's a little slower. It did not seem to matter
whether `--posix` was used.
4. `dash` is way faster than `bash` or `zsh`, so the real payoff just comes from being
able to use it. I am not sure if that is mostly startup time, but it's significant.
`dash` is ~2.4x faster than the original `bash` with arrays.
So, doing a lot of string stuff is slower than arrays, but converting to posix seems
worth it to be able to exploit `dash`.
- [x] Convert all but array-related portions to sh
- [x] Fix basic shellcheck issues.
- [x] Convert arrays to use a few convenience functions: `append` and `extend`
- [x] Get `cc` tests passing.
- [x] Add `cc` tests where needed.
- [x] Benchmarking.
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
Co-authored-by: Danny McClanahan <1305167+cosmicexplorer@users.noreply.github.com>
When using modules for compiler (and/or external package), if a
package's `setup_[dependent_]build_environment` sets `PYTHONHOME`, it
can influence the python subprocess executed to gather module
information.
The error seen was:
```
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
But the actual hidden error happened in the `python -c 'import
json...'` subprocess, which made it return an empty string as json:
```
ModuleNotFoundError: No module named 'encodings'
```
This fix uses `python -E` to ignore `PYTHONHOME` and
`PYTHONPATH`. Should be safe here because the python subprocess code
only use packages built-in python.
The python subprocess in `environment.py` was also patched to be safe
and consistent.
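The gist of the fix, sketched with `subprocess` (the helper name is hypothetical; the real code builds its command through Spack's module-handling machinery):
```python
import json
import subprocess

def query_python(python_exe, code):
    """Run a snippet with -E so PYTHONHOME/PYTHONPATH set by loaded modules
    cannot redirect the interpreter to a broken installation."""
    out = subprocess.check_output([python_exe, '-E', '-c', code])
    return json.loads(out)
```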
* Remove redundant preserve environment code in build environment
* Remove fix for a bug in a module
See https://github.com/spack/spack/issues/3153#issuecomment-280460041,
this shouldn't be part of core spack.
* Don't module unload cray-libsci on all platforms
Spack has logic to preserve an installation prefix when it is being
overwritten: if the new install fails, the old files are restored.
This PR adds error handling for when this backup restoration fails
(i.e. the new install fails, and then some unexpected error prevents
restoration from the backup).
* Remove vestigial code to be compatible with Spack v0.9.X
* ArchSpec: reworked __repr__ to be more adherent to common Python idioms
* ArchSpec: simplified __init__.py and copy()
The logic to perform detection of already installed
packages has been extracted from cmd/external.py
and put into the spack.detection package.
In this way it can be reused programmatically for
other purposes, like bootstrapping.
The new implementation accounts for cases where the
executables are placed in a subdirectory within <prefix>/bin
* Use gnuconfig package for config file replacement
Currently the autotools build system tries to pick up config.sub and
config.guess files from the system (in /usr/share) on arm and power.
This is introduces an implicit system dependency which we can avoid by
distributing config.guess and config.sub files in a separate package,
such as the new `gnuconfig` package which is very lightweight/text only
(unlike automake where we previously pulled these files from as a
backup). This PR adds `gnuconfig` as an unconditional build dependency
for arm and power archs.
In case the user needs a system version of config.sub and config.guess,
they are free to mark `gnuconfig` as an external package with the prefix
pointing to the directory containing the config files:
```yaml
gnuconfig:
externals:
- spec: gnuconfig@master
prefix: /tmp/tmp.ooBlkyAKdw/lol
buildable: false
```
Apart from that, this PR gives some better instructions for users when
replacing config files goes wrong.
* Mock needs this package too now, because autotools adds a depends_on
* Add documentation
* Make patch_config_files a prop, fix the docs, add integrations tests
* Make macOS happy
- Match failed autotest tests, which show the word "FAILED" near the end
- Match "FAIL: ", "FATAL: ", "failed ", "Failed test" of other suites
- autotest " ok"$ means the test passed, independent of the text before it.
- autoconf messages showing missing tools are fatal later, so show them.
* autotoolspackage.rst: No depends_on('m4') with depends_on('autoconf')
- Remove `m4` from the example depends_on() lines for the autoreconf phase.
- Change the branch used as example from develop to master as it is
far more common in the packages of spack's builtin repo.
- Fix the wrong info that libtoolize and aclocal are run explicitly
in the autoreconf phase by default. autoreconf calls these internally
as needed, thus autotools.py also does not call them directly.
- Add that autoreconf() also adds -I<aclocal-prefix>/share/aclocal.
- Add an example how to set autoreconf_extra_args.
- Add an example of a custom autoreconf phase for running autogen.sh.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
This commit makes the error message show a template that can be cut-and-pasted into the package to fix it:
```console
==> fast-global-file-status: Executing phase: 'autoreconf'
==> Error: RuntimeError: Cannot generate configure: missing dependencies autoconf, automake, libtool.
Please add the following lines to the package:
depends_on('autoconf', type='build', when='@master')
depends_on('automake', type='build', when='@master')
depends_on('libtool', type='build', when='@master')
Update the version (when='@master') as needed.
```
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
clean_environment(): Unset three more environment variables:
MAKEFLAGS: Affects make, e.g. it can indirectly inhibit enabling parallel builds
DISPLAY: Tests of GUI widget libraries might try to connect to an X server
TERM: Could make test suites attempt to color their output
fixes #25992
Currently the bootstrapping process may need a compiler.
When bootstrapping from sources the need is obvious, while
when bootstrapping from binaries it's currently needed in
case patchelf is not on the system (since it will be then
bootstrapped from sources).
Before this PR we were searching for compilers as the
first operation, in case they were not declared in
the configuration. This fails in case we start
bootstrapping from within an environment.
The fix is to defer the search until we have swapped
configuration.
While debugging #24508, I noticed that we call `basename` in `cc`. The
same can be achieved by using Bash's parameter expansion, saving one
external process per call.
Parameter expansion cannot replace basename for directories in some
cases, but is guaranteed to work for executables.
Git 2.24 introduced a feature flag for repositories with many files, see:
https://github.blog/2019-11-03-highlights-from-git-2-24/#feature-macros
Since Spack's Git repository contains roughly 8,500 files, it can be
worthwhile to enable this, especially on slow file systems such as NFS:
```
$ hyperfine --warmup 3 'cd spack-default; git status' 'cd spack-manyfiles; git status'
Benchmark #1: cd spack-default; git status
Time (mean ± σ): 3.388 s ± 0.095 s [User: 256.2 ms, System: 625.8 ms]
Range (min … max): 3.168 s … 3.535 s 10 runs
Benchmark #2: cd spack-manyfiles; git status
Time (mean ± σ): 168.7 ms ± 10.9 ms [User: 98.6 ms, System: 126.1 ms]
Range (min … max): 144.8 ms … 188.0 ms 19 runs
Summary
'cd spack-manyfiles; git status' ran
20.09 ± 1.42 times faster than 'cd spack-default; git status'
```
Modifications:
- [x] Change `defaults/config.yaml`
- [x] Add a fix for bootstrapping patchelf from sources if `compilers.yaml` is empty
- [x] Make `SPACK_TEST_SOLVER=clingo` the default for unit-tests
- [x] Fix package failures in the e4s pipeline
Caveats:
1. CentOS 6 still uses the original concretizer as it can't connect to the buildcache due to issues with `ssl` (bootstrapping from sources requires a C++14 capable compiler)
1. I had to update the image tag for GitlabCI in e699f14.
1. libtool v2.4.2 has been deprecated and other packages received some update
This will allow a user to (from anywhere a Spec is parsed including both name and version) refer to a git commit in lieu of
a package version, and be able to make comparisons with releases in the history based on commits (or with other commits). We do this by way of:
- Adding a property, is_commit, to a version, meaning I can always check if a version is a commit and then change some action.
- Adding an attribute to the Version object which can lookup commits from a git repo and find the last known version before that commit, and the distance
- Construct new Version comparators, which are tuples. For normal versions, they are unchanged. For commits with a previous version x.y.z, d commits away, the comparator is (x, y, z, '', d). For commits with no previous version, the comparator is ('', d) where d is the distance from the first commit in the repo.
- Metadata on git commits is cached in the misc_cache, for quick lookup later.
- Git repos are cached as bare repos in `~/.spack/git_repos`
- In both caches, git repo urls are turned into file paths within the cache
If a commit cannot be found in the cached git repo, we fetch from the repo. If a commit is found in the cached metadata, we do not recompare to newly downloaded tags (assuming repo structure does not change). The cached metadata may be thrown out by using the `spack clean -m` option if you know the repo structure has changed in a way that invalidates existing entries. Future work will include automatic updates.
# Finding previous versions
Spack will search the repo for any tags that match the string of a version given by the `version` directive. Spack will also search for any tags that match `v + string` for any version string. Beyond that, Spack will search for tags that match a SEMVER regex (i.e., tags of the form x.y.z) and interpret those tags as valid versions as well. Future work will increase the breadth of tags understood by Spack
For each tag, Spack queries git to determine whether the tag is an ancestor of the commit in question or not. Spack then sorts the tags that are ancestors of the commit by commit-distance in the repo, and takes the nearest ancestor. The version represented by that tag is listed as the previous version for the commit.
Not all commits will find a previous version, depending on the package workflow. Future work may enable more tangential relationships between commits and versions to be discovered, but many commits in real world git repos require human knowledge to associate with a most recent previous version. Future work will also allow packages to specify commit/tag/version relationships manually for such situations.
# Version comparisons.
The empty string is a valid component of a Spack version tuple, and is in fact the lowest-valued component. It cannot be generated as part of any valid version. These two characteristics make it perfect for delineating previous versions from distances. For any version x.y.z, (x, y, z, '', _) will be less than any "real" version beginning x.y.z. This ensures that no distance from a release will cause the commit to be interpreted as "greater than" a version which is not an ancestor of it.
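A toy illustration of why the empty-string component gives the desired ordering (plain tuples plus a small comparator standing in for Spack's real Version comparison machinery):
```python
def key(component):
    # '' sorts below any numeric component: it is the lowest-valued one.
    return (0,) if component == '' else (1, component)

def compare(a, b):
    ka, kb = tuple(map(key, a)), tuple(map(key, b))
    return (ka > kb) - (ka < kb)

release       = (1, 2, 3)          # plain version 1.2.3
commit        = (1, 2, 3, '', 7)   # commit 7 commits past the 1.2.3 tag
later_release = (1, 2, 3, 1)       # version 1.2.3.1

assert compare(commit, release) > 0        # newer than 1.2.3 itself...
assert compare(commit, later_release) < 0  # ...but older than any 1.2.3.x
```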
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This PR coincides with tiny changes to spack to support spack monitor using the new spec
the corresponding spack monitor PR is at https://github.com/spack/spack-monitor/pull/31.
Since there are no changes to the database we can actually update the current server
fairly easily, so either someone can test locally or we can just update and then
test from that (and update as needed).
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
#22845 revealed a long-standing bug that had never been triggered before, because the
hashing algorithm had been stable for multiple years while the bug was in production. The
bug was that when reading a concretized environment, Spack did not properly read in the
build hashes associated with the specs in the environment. Those hashes were recomputed
(and as long as we didn't change the algorithm, were recomputed identically). Spack's
policy, though, is never to recompute a hash. Once something is installed, we respect its
metadata hash forever -- even if internally Spack changes the hashing method. Put
differently, once something is concretized, it has a concrete hash, and that's it -- forever.
When we changed the hashing algorithm for performance in #22845 we exposed the bug.
This PR fixes the bug at its source by properly reading in the cached build hash attributes
associated with the specs. I've also renamed some variables in the Environment class
methods to make a mistake of this sort more difficult to make in the future.
* ensure environment build hashes are never recomputed
* add comment clarifying reattachment of env build hashes
* bump lockfile version and include specfile version in env meta
* Fix unit-test for v1 to v2 conversion
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Refactor platform etc. to avoid circular dependencies
All the base classes in spack.architecture have been
moved to the corresponding specialized subpackages,
e.g. Platform is now defined within spack.platforms.
This resolves a circular dependency where spack.architecture
was both:
- Defining the base classes for spack.platforms, etc.
- Collecting derived classes from spack.platforms, etc.
Now it does only the latter.
* Move a few platform related functions to "spack.platforms"
* Removed spack.architecture.sys_type()
* Fixup for docs
* Rename Python modules according to review
Currently as part of installing a package, we lock a prefix, check if
it exists, and create it if not; the logic for creating the prefix
included a check for the existence of that prefix (and raised an
exception if it did), which was redundant.
This also includes removal of tests which were not verifying
anything (they pass with or without the modifications in this PR).
Modifications:
- Export platforms from spack.platforms directly, so that client modules don't have to import submodules
- Use only plain imports in test/architecture.py
- Parametrized test in test/architecture.py and put most of the setup/teardown in fixtures
This is a major rework of Spack's core `spec.yaml` metadata format. It moves from `spec.yaml` to `spec.json` for speed, and it changes the format in several ways. Specifically:
1. The spec format now has a `_meta` section with a version (now set to version `2`). This will simplify major changes like this one in the future.
2. The node list in spec dictionaries is no longer keyed by name. Instead, it is a list of records with no required key. The name, hash, etc. are fields in the dictionary records like any other.
3. Dependencies can be keyed by any hash (`hash`, `full_hash`, `build_hash`).
4. `build_spec` provenance from #20262 is included in the spec format. This means that, for spliced specs, we preserve the *full* provenance of how to build, and we can reproduce a spliced spec from the original builds that produced it.
**NOTE**: Because we have switched the spec format, this PR changes Spack's hashing algorithm. This means that after this commit, Spack will think a lot of things need rebuilds.
There are two major benefits this PR provides:
* The switch to JSON format speeds up Spack significantly, as Python's builtin JSON implementation is orders of magnitude faster than YAML.
* The new Spec format will soon allow us to represent DAGs with potentially multiple versions of the same dependency -- e.g., for build dependencies or for compilers-as-dependencies. This PR lays the necessary groundwork for those features.
The old `spec.yaml` format continues to be supported, but is now considered a legacy format, and Spack will opportunistically convert these to the new `spec.json` format.
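For illustration only, the overall shape could look roughly like the following; apart from `_meta`, the node records, and the hash-keyed dependencies described above, the field names here are guesses rather than the exact schema:
```python
spec_json = {
    "spec": {
        "_meta": {"version": 2},      # format version to ease future changes
        "nodes": [                    # list of records, no longer keyed by name
            {
                "name": "zlib",
                "version": "1.2.11",
                "hash": "abc123...",  # name/hash are ordinary fields now
                "dependencies": [
                    # each edge may reference hash, full_hash or build_hash
                ],
            },
        ],
    },
}
```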
This modification accounts for:
1. Bootstrapping from sources using system, non-standard Python
2. Using later an ABI compatible standard Python interpreter
* tests: make `spack url [stats|summary]` work on mock packages
Mock packages have historically had mock hashes, but this means they're also invalid
as far as Spack's hash detection is concerned.
- [x] convert all hashes in mock package to md5 or sha256
- [x] ensure that all mock packages have a URL
- [x] ignore some special cases with multiple VCS fetchers
* url stats: add `--show-issues` option
`spack url stats` tells us how many URLs are using what protocol, type of checksum,
etc., but it previously did not tell us which packages and URLs had the issues. This
adds a `--show-issues` option to show URLs with insecure (`http`) URLs or `md5` hashes
(which are now deprecated by NIST).
Fixes removal of SPACK_ENV_PATH from PATH in the presence of trailing
slashes in the elements of PATH:
The compiler wrapper has to ensure that it is not called nested like
it would happen when gcc's collect2 uses PATH to call the linker ld,
or else the compilation fails.
To prevent nested calls, the compiler wrapper removes the elements
of SPACK_ENV_PATH from PATH.
Sadly, the autotest framework appends a slash to each element
of PATH when adding AUTOTEST_PATH to the PATH for the tests,
and some tests like those of GNU bison run cc inside the test.
Thus, ensure that PATH cleanup works even with trailing slashes.
This fixes the autotest suite of bison, compiling hundreds of
bison-generated test cases in an autotest-generated testsuite.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
This PR will add a new audit, specifically for spack package homepage urls (and eventually
other kinds I suspect) to see if there is an http address that can be changed to https.
Usage is as follows:
```bash
$ spack audit packages-https <package>
```
And in list view:
```bash
$ spack audit list
generic:
Generic checks relying on global variables
configs:
Sanity checks on compilers.yaml
Sanity checks on packages.yaml
packages:
Sanity checks on specs used in directives
packages-https:
Sanity checks on https checks of package urls, etc.
```
I think it would be unwise to include this with the `packages` audit, because running it for all packages makes network requests and takes a long time. I also like the idea of more well scoped checks - likely there will be other addresses for http/https within a package that we eventually check. For now, there are two error cases - one is when an https url is tried but there is some SSL error (or other error that means we cannot update to https):
```bash
$ spack audit packages-https zoltan
PKG-HTTPS-DIRECTIVES: 1 issue found
1. Error with attempting https for "zoltan":
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'www.cs.sandia.gov'. (_ssl.c:1125)>
```
This is either not fixable, or could be fixed with a change to the url or (better) contacting the site owners to ask about some certificate or similar.
The second case is when there is an http that needs to be https, which is a huge issue now, but hopefully not after this spack PR.
```bash
$ spack audit packages-https xman
Package "xman" uses http but has a valid https endpoint.
```
And then when a package is fixed:
```bash
$ spack audit packages-https zlib
PKG-HTTPS-DIRECTIVES: 0 issues found.
```
And that's mostly it. :)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* Add a __reduce__ method to Spec
fixes #23892
The recursion limit seems to be due to the default
way in which a Spec is serialized, following all
the attributes. It's still not clear to me why this
is related to being in an environment, but in any
case we already have methods to serialize Specs to
disk in JSON and YAML format. Here we use them to
pickle a Spec instance too.
* Downgrade to build-hash
Hopefully nothing will change the package in
between serializing the spec and sending it
to the child process.
* Add support for Python 2
* Make sure PackageInstaller does not remove the just-restored
install dir after failure in spack install --overwrite
* Remove cryptic error message and rethrow actual error
The gcc compiler can be configured to use `ld.gold` by default. It will
then call `ld.gold` explicitly when linking. When it does, Spack needs to have
an `ld.gold` wrapper in PATH to inject rpaths, link flags, etc.
Also I wouldn't be surprised to see some package calling `ld.gold`
directly.
As with `ld.gold`, the argument could be made that we want to support any
package that could call `ld.lld`.
* Add a __reduce__ method to SpecBuildInterface
This class was confusing pickle when being serialized,
due to its scary nature of being an object that disguise
as another type.
* Add more MacOS tests, switch them to clingo
* Fix condition syntax
* Remove Python v3.6 and v3.9 with macOS
* Conditionally remove 'context' from kwargs in _urlopen
Previously, 'context' is purged from kwargs in _urlopen to
conform to varying support for 'context' in different versions
of urllib. This fix tries to use 'context', and then removes
it if an exception is thrown and tries again.
* Specify error type in try statement in _urlopen
Specify TypeError when checking if 'context' is in kwargs
for _urlopen. Also, if try fails, check that 'context' is
in the error message before removing from kwargs.
This is a direct follow-up to #13557, which caches additional attributes that were added in #24095 and are expensive to compute. I had to reopen #25556 in another PR to invalidate the GitLab CI cache, but see #25556 for prior discussion.
### Before
```console
$ time spack env activate .
real 2m13.037s
user 1m25.584s
sys 0m43.654s
$ time spack env view regenerate
==> Updating view at /Users/Adam/.spack/.spack-env/view
real 16m3.541s
user 10m28.892s
sys 4m57.816s
$ time spack env deactivate
real 2m30.974s
user 1m38.090s
sys 0m49.781s
```
### After
```console
$ time spack env activate .
real 0m8.937s
user 0m7.323s
sys 0m1.074s
$ time spack env view regenerate
==> Updating view at /Users/Adam/.spack/.spack-env/view
real 2m22.024s
user 1m44.739s
sys 0m30.717s
$ time spack env deactivate
real 0m10.398s
user 0m8.414s
sys 0m1.630s
```
Fixes #25555. Fixes #25541.
* Speedup environment activation, part 2
* Only query distutils a single time
* Fix KeyError bug
* Make vermin happy
* Manual memoize
* Add comment on cross-compiling
* Use platform-specific include directory
* Fix multiple bugs
* Fix python_inc discrepancy
* Fix import tests
* Set pubkey trust to ultimate during `gpg trust`
Tries to solve the same problem as #24760 without suppressing stderr
from gpg commands.
This PR makes every imported key trusted in the gpg database.
Note: I've outlined
[here](https://github.com/spack/spack/pull/24760#issuecomment-883183175)
that gpg's trust model makes sense, since how can we trust a random
public key we download from a binary cache?
* Fix test
Fixes#25603
This commit adds a new context manager to temporarily
deactivate active environments. This context manager
is used when setting up bootstrapping configuration to
make sure that the current environment is not affected
by operations on the bootstrap store.
* Preserve exit code 1 if nothing is found
* Use context manager for the environment
This commit adds a regression test for version selection
with preferences in `packages.yaml`. Before PR 25585 we
used negative weights in a minimization to select the
optimal version. This may lead to situations where a
dependency may make the version score of dependents
"better" if it is preferred in packages.yaml.
PackageInstaller and Package.installed disagree over what it means
for a package to be installed: PackageInstaller believes it should be
enough for a database entry to exist, whereas Package.installed
requires a database entry & a prefix directory.
This leads to the following niche issue:
* a develop spec in an environment is successfully installed
* then somehow its install prefix is removed (e.g. through a bug fixed
in #25583)
* you modify the sources and reinstall the environment
1. spack checks pkg.installed and realizes the develop spec is NOT
installed, therefore it doesn't need to have 'overwrite: true'
2. the installer gets the build task and checks the database and
realizes the spec IS installed, hence it doesn't have to install it.
3. the develop spec is not rebuilt.
The solution is to make PackageInstaller and pkg.installed agree over
what it means to be installed, and this PR does that by dropping the
prefix directory check from pkg.installed, so that it only checks the
database.
As a result, spack will create a build task with overwrite: true for
the develop spec, and the installer in fact handles overwrite requests
fine even if the install prefix doesn't exist (it just does a normal
install).
see #25563
When we have a concrete environment and we ask to install a
concrete spec from a file, currently Spack returns a list of
specs that are all the one that match the argument DAG hash.
Instead we want to compare build hashes, which also account
for build-only dependencies.
#25303 filtered padding from build output, but it's still there in binary install/relocate output,
so our CI logs are still quite long and frequently hit the limit.
- [x] add context handler from #25303 to buildcache installation as well
This allows you to run `spack graph --installed` from within an environment and get a dot graph of
its concrete specs.
- [x] make `spack graph -i` environment-aware
- [x] add code to the generated dot graph to ensure roots have min rank (i.e., they're all at the
top or left of the DAG)
Bootstrapping clingo on macOS on `develop` gives errors like this:
```
==> Error: RuntimeError: Unable to locate python command in /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/bin
/Users/gamblin2/Workspace/spack/var/spack/repos/builtin/packages/python/package.py:662, in command:
659 return Executable(path)
660 else:
661 msg = 'Unable to locate {0} command in {1}'
>> 662 raise RuntimeError(msg.format(self.name, self.prefix.bin))
```
On macOS, `python` is laid out differently. In particular, `sys.executable` is here:
```console
Python 2.7.16 (default, May 8 2021, 11:48:02)
[GCC Apple LLVM 12.0.5 (clang-1205.0.19.59.6) [+internal-os, ptrauth-isa=deploy on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.executable
'/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python'
```
Based on that, you'd think that
`/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents` would be
where you'd look for a `bin` directory, but you (and Spack) would be wrong:
```console
$ ls /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/
Info.plist MacOS/ PkgInfo Resources/ _CodeSignature/ version.plist
```
You need to look in `sys.exec_prefix`
```
>>> sys.exec_prefix
'/System/Library/Frameworks/Python.framework/Versions/2.7'
```
Which looks much more like a standard prefix, with understandable `bin`, `lib`, and `include`
directories:
```console
$ ls /System/Library/Frameworks/Python.framework/Versions/2.7
Extras/ Mac/ Resources/ bin/ lib/
Headers@ Python* _CodeSignature/ include/
$ ls -l /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python
lrwxr-xr-x 1 root wheel 7B Jan 1 2020 /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python@ -> python2
```
- [x] change `bootstrap.py` to use the `sys.exec_prefix` as the external prefix, instead of just
getting the parent directory of the executable.
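An illustrative stdlib-only sketch of the difference (not the exact `bootstrap.py` code):
```python
import os
import sys

# Old approach: derive the prefix from the executable's location.  On a macOS
# framework build this lands in ...Python.app/Contents, which has no bin/ directory.
guessed_prefix = os.path.dirname(os.path.dirname(sys.executable))

# New approach: sys.exec_prefix already points at a conventional prefix layout,
# with bin/, lib/ and include/ underneath it.
external_prefix = sys.exec_prefix

print(guessed_prefix)
print(external_prefix)
```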
This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.
The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.
Because of this, we need to track open lock files so that we only close them when
a process no longer needs them. We do this by tracking each lockfile by its
inode and process id. This has several nice properties:
1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
number of times necessary for the locks we have.
Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.
Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
needed (this avoids inadvertently releasing locks that should not be released).
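A rough sketch of the reference-counting idea (hypothetical class and method names, not Spack's actual implementation): descriptors are keyed by (pid, inode) and only closed when the last reference goes away.
```python
import os
from collections import defaultdict


class OpenFileTracker:
    """Reference-count open lock files by (pid, inode)."""

    def __init__(self):
        self._by_key = {}             # (pid, inode) -> open file descriptor
        self._refs = defaultdict(int)

    def get_fd(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        if key not in self._by_key:
            self._by_key[key] = os.open(path, os.O_RDWR)
        self._refs[key] += 1
        return self._by_key[key]

    def release(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        self._refs[key] -= 1
        if self._refs[key] == 0:
            # Last user of this inode in this process: safe to close without
            # dropping fcntl locks another part of the process still needs.
            os.close(self._by_key.pop(key))
            del self._refs[key]
```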
This commit reworks version facts so that:
1. All the information on versions is collected
before emitting the facts
2. The same kind of atom is emitted for versions
stemming from different origins (package.py
vs. packages.yaml)
In the end all the possible versions for a given
package are totally ordered and they are given
different and increasing weights starting from zero.
This refactor allows us to avoid using negative
weights, which in some configurations may make
a parent node's score "better" and lead to unexpected
"optimal" results.
Once PR binary graduation is deployed, the shared PR mirror will
contain binaries just built by a merged PR, before the subsequent
develop pipeline has had time to finish. Using the shared PR mirror
as a source of binaries will reduce the number of times we have to
rebuild the same full hash.
* Refactor active environment getters
- Make `spack.environment.active_environment` a trivial getter for the active
environment, replacing `spack.environment.get_env` when the arguments are
not needed
- New method `spack.cmd.require_active_environment(cmd_name)` for
commands that require an environment (rather than abusing
get_env/active_environment)
- Clean up calling code to call spack.environment.active_environment or
spack.cmd.require_active_environment as appropriate
- Remove the `-e` parsing from `active_environment`, because `main.py` is
responsible for processing `-e` and already activates the environment.
- Move `spack.environment.find_environment` to
`spack.cmd.find_environment`, to avoid having spack.environment aware
of argparse.
- Refactor `spack install` command so argument parsing is all handled in the
command, no argparse in spack.environment or spack.installer
- Update documentation
* Python 2: toplevel import errors only with 'as ev'
In two files, `import spack.environment as ev` leads to errors.
These errors are not well understood ("'module' object has no attribute
'environment'"). All other files standardize on the above syntax.
* Bootstrap clingo from binaries
* Move information on clingo binaries to a JSON file
* Add support to bootstrap on Cray
Bootstrapping on Cray requires, at the moment, swapping
the platform when looking for binaries - due
to #22800.
* Add SHA256 verification for bootstrapped software
Use sha256 verification for binaries necessary to bootstrap
the concretizer and gpg for signature verification
* patchelf: use Spec._old_concretize() to bootstrap
As noted in #24450 we may happen to need the
concretizer when bootstrapping clingo. In that case
only the old concretizer is available.
* Add a schema for bootstrapping methods
Two fields have been added to bootstrap.yaml:
"sources" which lists the methods available for
bootstrapping software
"trusted" which records if a source is trusted or not
A subcommand has been added to "spack bootstrap" to list
the sources currently available.
* Methods used for bootstrapping are configurable from bootstrap:sources
The function that tries to ensure a given Python module
is importable now tries bootstrapping methods in the same
order as they are defined in `bootstrap.yaml`
* Permit to trust/untrust bootstrapping methods
* Add binary tests for MacOS, Ubuntu
* Add documentation
* Add a note on bash
Spack is internally using a patched version of `argparse` mainly to backport Python 3 functionality
into Python 2. This PR makes it such that for the supported Python 3 versions we use `argparse`
from the standard Python library. This PR has been extracted from #25371 where it was needed
to be able to use recent versions of `pytest`.
* Fixed formatting issues when using a pristine argparse.py
* Fix error message for Python 3.X when missing positional arguments
* Account for the change of API in Python 3.7
* Layout multi-valued args into columns in error messages
* Seamless transition in develop if argparse.pyc is in external
* Be more defensive in case we can't remove the file.
Add link type to spack.yaml format
Add tests to verify link behavior is correct for installed files
for all three view types
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
The commands have been deprecated in #7098, and have
been failing with an error message since then.
Cleaning the code since it is unlikely that somebody
is still using them.
Preferred providers had a non-zero weight because, in an earlier formulation of the logic program, that was needed to prefer external providers over default providers. With the current formulation for externals this is not needed anymore, so we can give a weight of zero to both default choices and providers that are externals. _Using zero ensures that we don't introduce any drift towards having fewer providers, which was happening when minimizing positive weights_.
Modifications:
- [x] Default weight for providers starts at 0 (instead of 10, needed before to prefer externals)
- [x] Rules to compute the `provider_weight` have been refactored. There are multiple possible weights for a given `Virtual`. Only one gets selected by the solver (the one that minimizes the objective function).
- [x] `provider_weight` are now accounting for each different `Virtual`. Before there was a single weight per provider, even if the package was providing multiple virtuals.
* Give preferred providers a weight of zero
Preferred providers had a non-zero weight because in an earlier
formulation of the logic program that was needed to prefer
external providers over default providers.
With the current formulation for externals this is not needed anymore,
so we can give a weight of zero to default choices. Using zero
ensures that we don't introduce any drift towards having
fewer providers, which was happening when minimizing positive weights.
* Simplify how we compute weights for providers
Rewrite rules so that specific events (i.e. being
an external) unlock the possibility to use certain
weights. The weight being considered is then selected
by the minimization process to be the one that gives
the best score.
* Allow providers to have different weights for different virtuals
Before this change we didn't differentiate providers based on
the virtual they provide, which meant that packages providing
more than one virtual had nonetheless a single weight.
With this change there will be a weight per virtual.
This is both a bugfix and a generalization of #25168. In #25168, we attempted to filter padding
*just* from the debug output of `spack.util.executable.Executable` objects. It turns out we got it
wrong -- filtering the command line string instead of the arg list resulted in output like this:
```
==> [2021-08-05-21:34:19.918576] ["'", '/', 'b', 'i', 'n', '/', 't', 'a', 'r', "'", ' ', "'", '-', 'o', 'x', 'f', "'", ' ', "'", '/', 't', 'm', 'p', '/', 'r', 'o', 'o', 't', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '-', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '-', 'w', 'p', 'h', 'p', 't', 'l', 'h', 'w', 'u', 's', 'e', 'i', 'a', '4', 'k', 'p', 'g', 'y', 'd', 'q', 'l', 'l', 'i', '2', '4', 'q', 'b', '5', '5', 'q', 'u', '4', '/', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '.', 't', 'a', 'r', '.', 'b', 'z', '2', "'"]
```
Additionally, plenty of builds output padded paths in other places -- e.g., not just command
arguments, but in other `tty` messages via `llnl.util.filesystem` and other places. `Executable`
isn't really the right place for this.
This PR reverts the changes to `Executable` and moves the filtering into `llnl.util.tty`. There is
now a context manager there that you can use to install a filter for all output.
`spack.installer.build_process()` now uses this context manager to make `tty` do path filtering
when padding is enabled.
- [x] revert filtering in `Executable`
- [x] add ability for `tty` to filter output
- [x] install output filter in `build_process()`
- [x] tests
`compare_specs()` had a `colorful` keyword argument, but everything else in
spack uses `color` for this.
- [x] rename the argument
- [x] make the default follow spack's `--color=always/never/auto` setting
Add a workflow to test bootstrapping clingo on
different platforms so that we can detect changes
that break it.
Compute `site_packages_dir` in `bootstrap.py` as it was
before #24095, until we figure a better way to override
that attribute.
Padded install paths can get very long in the verbose install
output. This has to be filtered out by the Executable class, as it
generates these debug messages.
- [x] add ability to filter paths from Executable output.
- [x] add a context manager that can enable path filtering
- [x] make `build_process` in `installer.py` enable path filtering when padding is used
This should hopefully allow us to see most of the build output in
Gitlab pipeline builds again.
`build_process` has been around a long time but it's become a very large,
unwieldy method. It's hard to work with because it has a lot of local
variables that need to persist across all of the code.
- [x] To address this, convert it into its own `BuildInfoProcess` class.
- [x] Start breaking the method apart by factoring out the main
installation logic into its own function.
When context managers are used to save and restore values, we need to remember
to use try/finally around the yield in case an exception is thrown. Otherwise,
the cleanup will be skipped.
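For example, a generator-based context manager that restores a saved value must wrap the `yield` in `try`/`finally`, otherwise an exception raised in the body skips the restore (illustrative snippet, not code from this PR):
```python
import contextlib


@contextlib.contextmanager
def swap_value(obj, attr, new_value):
    old_value = getattr(obj, attr)
    setattr(obj, attr, new_value)
    try:
        yield
    finally:
        # Runs even if the body raised, so the old value is always restored.
        setattr(obj, attr, old_value)
```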
- Change config from the undocumented `use_curl: true/false` to `url_fetch_method: urllib/curl`.
- Documentation of `url_fetch_method` in `defaults/config.yaml`
- Default fetch option explicitly set to `urllib` for users who may not have curl on their system
To upgrade from `use_curl` to `url_fetch_method`, run `spack config update config`
The output order for `spack diff` is nondeterministic for larger diffs -- if you
run it several times it will not put the fields in the spec in the same order on
successive invocations.
This makes a few fixes to `spack diff`:
- [x] Implement the change discussed in https://github.com/spack/spack/pull/22283#discussion_r598337448
to make `AspFunction` comparable in and of itself and to eliminate the need for `to_tuple()`
- [x] Sort the lists of diff properties so that the output is always in the same order.
- [x] Make the output for different fields the same as what we use in the solver. Previously, we
would use `Type(value)` for non-string values and `value` for strings. Now we just use
the value. So the output looks a little cleaner:
```
== Old ========================== == New ====================
@@ node_target @@ @@ node_target @@
- gdbm Target(x86_64) - gdbm x86_64
+ zlib Target(skylake) + zlib skylake
@@ variant_value @@ @@ variant_value @@
- ncurses symlinks bool(False) - ncurses symlinks False
+ zlib optimize bool(True) + zlib optimize True
@@ version @@ @@ version @@
- gdbm Version(1.18.1) - gdbm 1.18.1
+ zlib Version(1.2.11) + zlib 1.2.11
@@ node_os @@ @@ node_os @@
- gdbm catalina - gdbm catalina
+ zlib catalina + zlib catalina
```
I suppose if we want to use `repr()` in the output we could do that and could be
consistent but we don't do that elsewhere -- the types of things in Specs are
all stringifiable so the string and the name of the attribute (`version`, `node_os`,
etc.) are sufficient to know what they are.
When a spec fails to build on `develop`, instead of storing an empty file as the entry in the broken specs list, this change stores the full spec yaml as well as links to the failing pipeline and job.
A `spack diff` will take two specs, and then use the spack.solver.asp.SpackSolverSetup to generate
lists of facts about each (e.g., nodes, variants, etc.) and then take a set difference between the
two to show the user the differences.
Example output:
$ spack diff python@2.7.8 python@3.8.11
==> Warning: This interface is subject to change.
--- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
+++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
@@ variant_value @@
- python patches a8c52415a8b03c0e5f28b5d52ae498f7a7e602007db2b9554df28cd5685839b8
+ python patches 0d98e93189bc278fbc37a50ed7f183bd8aaf249a8e1670a465f0db6bb4f8cf87
@@ version @@
- openssl Version(1.0.2u)
+ openssl Version(1.1.1k)
- python Version(2.7.8)
+ python Version(3.8.11)
Currently this uses diff-like output but we will attempt to improve on this in the future.
One use case for `spack diff` is whenever a user needs to disambiguate two installs and cannot
remember how they are different. The command can also output `--json` in
the case of a more analysis-type use case where we want to save complete data with all
diffs and the intersection. However, the command is really more intended for a command
line use case, and we will likely have an analyzer more suited to saving data.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Catch ConnectionError from CDash reporter
Catch ConnectionError when attempting to upload the results of `spack install`
to CDash. This follows in the spirit of #24299. We do not want `spack install`
to exit with a non-zero status when something goes wrong while attempting to
report results to CDash.
* Catch HTTP Error 400 (Bad Request) in relate_cdash_builds()
`spack style` previously used a Travis CI variable to figure out
what the base branch of a PR was, and this was apparently also set
on `develop`. We switched to `GITHUB_BASE_REF` to support GitHub
Actions, but it looks like this is set to `""` in pushes to develop,
so `spack style` breaks there.
This PR does two things:
- [x] Remove `GITHUB_BASE_REF` knowledge from `spack style` entirely
- [x] Handle `GITHUB_BASE_REF` in style scripts instead, and explicitly
pass the base ref if it is present, but don't otherwise.
This makes `spack style` *not* dependent on the environment and fixes
handling of the base branch in the right place.
This adds a `--root` option so that `spack style` can check style for
a spack instance other than its own.
We also change the inner workings of `spack style` so that `--config FILE`
(and similar options for the various tools) options are used. This ensures
that when `spack style` runs, it always uses the config from the running spack,
and does *not* pick up configuration from the external root.
- [x] add `--root` option to `spack style`
- [x] add `--config` (or similar) option when invoking style tools
- [x] add a test that verifies we can check an external instance
Intel oneAPI installs maintain a lock file in XDG_RUNTIME_DIR,
which by default exists in /tmp (and is shared by all component
installs). This prevented multiple oneAPI components from being
installed in parallel. This commit sets XDG_RUNTIME_DIR to exist
within Spack's installation Stage, which allows multiple components
to be installed at the same time.
This uses our bootstrapping logic to automatically install dependencies for
`spack style`. Users should no longer have to pre-install all of the tools
(`isort`, `mypy`, `black`, `flake8`). The command will do it for them.
- [x] add logic to bootstrap specs with specific version requirements in `spack style`
- [x] remove style tools from CI requirements (to ensure we test bootstrapping)
- [x] rework dependencies for `mypy` and `py-typed-ast`
- `py-typed-ast` needs to be a link dependency
- it needs to be at 1.4.1 or higher to work with python 3.9
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
#24095 introduced a couple of bugs, which are fixed here:
1. The module path is computed incorrectly for bootstrapped clingo
2. We remove too many paths from `sys.path` in case of failures
Third-party Python libraries may be installed in one of several directories:
1. `lib/pythonX.Y/site-packages` for Spack-installed Python
2. `lib64/pythonX.Y/site-packages` for system Python on RHEL/CentOS/Fedora
3. `lib/pythonX/dist-packages` for system Python on Debian/Ubuntu
Previously, Spack packages were hard-coded to use (1). Now, we query the Python installation itself and ask it which to use. Ever since #21446 this is how we've been determining where to install Python libraries anyway.
Note: there are still many packages that are hard-coded to use (1). I can change them in this PR, but I don't have the bandwidth to test all of them.
* Python: handle dist-packages and site-packages
* Query Python to find site-packages directory
* Add try-except statements for when distutils isn't installed
* Catch more errors
* Fix root directory used in import tests
* Rely on site_packages_dir property
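For illustration, querying the interpreter itself (e.g. via `sysconfig`) returns the right directory on all three layouts; this is the general idea, not necessarily the exact query the package performs:
```python
import sysconfig

# Ask the interpreter where pure-Python packages should be installed.
# On RHEL system Python this can be lib64/pythonX.Y/site-packages, on
# Debian/Ubuntu system Python lib/python3/dist-packages, and on a
# Spack-built Python lib/pythonX.Y/site-packages.
purelib = sysconfig.get_paths()["purelib"]
print(purelib)
```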
* Permit to enable/disable bootstrapping and customize store location
This PR adds configuration handles to allow enabling
and disabling bootstrapping, and to customize the store
location.
* Move bootstrap related configuration into its own YAML file
* Add a bootstrap command to manage configuration
Spack allows users to set `padded_length` to pad out the installation path in
build farms so that any binaries created are more easily relocatable. The issue
with this is that the padding dominates installation output and makes it
difficult to see what is going on. The padding also causes logs to easily
exceed size limits for things like GitLab artifacts.
This PR fixes this by adding a filter in the logger daemon. If you use a
setting like this:
config:
  install_tree:
    padded_length: 512
Then lines like this in the output:
==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_pla/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga
will be replaced with the much more readable:
==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/[padded-to-512-chars]/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga
You can see that the padding has been replaced with `[padded-to-512-chars]` to
indicate the total number of characters in the padded prefix. Over a long log
file, this should save a lot of space and allow us to see error messages in
GitHub/GitLab log output.
The *actual* build logs still have full paths in them. Also lines that are
output by Spack and not by a package build are not filtered and will still
display the fully padded path. There aren't that many of these, so the change
should still help reduce file size and improve readability quite a bit.
015e29efe1 that introduced this section to the
documentation said “two” here instead of the actual count, three.
9f54cea5c5 then added a fourth, BLAS/LAPACK.
Rather than trying to keep this leading count in sync, this change just replaces
the wording with something more generic/stable.
* fix remaining flake8 errors
* imports: sort imports everywhere in Spack
We enabled import order checking in #23947, but fixing things manually drives
people crazy. This used `spack style --fix --all` from #24071 to automatically
sort everything in Spack so PR submitters won't have to deal with it.
This should go in after #24071, as it assumes we're using `isort`, not
`flake8-import-order` to order things. `isort` seems to be more flexible and
allows `llnl` imports to be in their own group before `spack` ones, so this
seems like a good switch.
`dateutil.parser` was an optional dependency for CVS tests. It was failing on macOS
because the dateutil types were not being installed, and mypy was failing *even when the
CVS tests were skipped*. This seems like it was an oversight on macOS --
`types-dateutil-parser` was not installed there, though it was on Linux unit tests.
It takes 6 lines of YAML and some weird test-skipping logic to get `python-dateutil` and
`types-python-dateutil` installed in all the tests where we need them, but it only takes
4 lines of code to write the date parser we need for CVS, so I just did that instead.
Note that CVS date format can vary from system to system, but it seems like it's always
pretty similar for the parts we care about.
- [x] Replace dateutil.parser with a simpler date regex
- [x] Lose the dependency on `dateutil.parser`
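A minimal sketch of the kind of regex-based parser that replaces `dateutil.parser` here (illustrative only; the exact pattern in Spack may differ):
```python
import re
from datetime import datetime

# Matches timestamps like "2021-07-22 18:30:05" or "2021/07/22 18:30:05"
_DATE_RE = re.compile(
    r"(\d{4})[-/](\d{2})[-/](\d{2})\s+(\d{2}):(\d{2}):(\d{2})"
)


def parse_cvs_date(string):
    match = _DATE_RE.search(string)
    if not match:
        raise ValueError("not a recognized CVS date: %s" % string)
    return datetime(*(int(part) for part in match.groups()))


print(parse_cvs_date("2021/07/22 18:30:05"))
```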
Previous tests of `spack style` didn't really run the tools --
they just ensure that the commands worked enough to get coverage.
This adds several real tests and ensures that we hit the corner
cases in `spack style`. This also tests success as well as failure
cases.
This consolidates code across tools in `spack style` so that each
`run_<tool>` function can be called indirectly through a dictionary
of handlers, and so that checks like finding the executable for the
tool can be shared across commands.
- [x] rework `spack style` to use decorators to register tools
- [x] define tool order in one place in `spack style`
- [x] fix python 2/3 issues to get `isort` checks working
- [x] make isort error regex more robust across versions
- [x] remove unused output option
- [x] change vestigial `TRAVIS_BRANCH` to `GITHUB_BASE_REF`
- [x] update completion
We should not fail the generate stage simply due to the presence of
a broken-spec somewhere in the DAG. Only fail if the known broken
spec needs to be rebuilt.
This PR adds a context manager that permits grouping the common part of a `when=` argument and adding it to the context:
```python
class Gcc(AutotoolsPackage):
with when('+nvptx'):
depends_on('cuda')
conflicts('@:6', msg='NVPTX only supported in gcc 7 and above')
conflicts('languages=ada')
conflicts('languages=brig')
conflicts('languages=go')
```
The above snippet is equivalent to:
```python
class Gcc(AutotoolsPackage):
depends_on('cuda', when='+nvptx')
conflicts('@:6', when='+nvptx', msg='NVPTX only supported in gcc 7 and above')
conflicts('languages=ada', when='+nvptx')
conflicts('languages=brig', when='+nvptx')
conflicts('languages=go', when='+nvptx')
```
which requires repeating the `when='+nvptx'` argument. The context manager should help improve readability and permits grouping together directives related to the same semantic aspect (e.g. all the directives needed to model the behavior of `gcc` when `+nvptx` is active).
Modifications:
- [x] Added a `when` context manager to be used with package directives
- [x] Add unit tests and documentation for the new feature
- [x] Modified `cp2k` and `gcc` to show the use of the context manager
ci: only write to broken-specs list on SpackError
Only write to the broken-specs list when `spack install` raises a SpackError,
instead of writing to this list unnecessarily when infrastructure-related problems
prevent a develop job from completing successfully.
If two Specs have the same hash (and prefix) but are not equal, Spack
originally had logic to detect this and raise an error (since both
cannot be installed in the same place). Recently this has eroded and
the check no longer works; moreover, when defining projections (which
may truncate the hash or other distinguishing properties from the
prefix) Spack was also failing to detect collisions (in both of these
cases, Spack would overwrite the old prefix with the new Spec).
This PR maintains a list of all "taken" prefixes: if a hash is not
registered (i.e. recorded as installed in the database) but the prefix
is occupied, that is a collision. This can detect collisions created
by defining projections (specifically when they omit the hash).
The PR does not detect collisions where specs have the same hash
(and prefix) but are not equal.
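A rough sketch of the collision check described above (hypothetical helper; the real check lives in Spack's installer/database code): if a prefix is occupied on disk but its hash is not recorded as installed, treat it as a collision.
```python
import os


def is_prefix_collision(prefix, registered_prefixes):
    """Return True if ``prefix`` is occupied but not registered in the database.

    ``registered_prefixes`` stands in for the set of prefixes belonging to
    specs recorded as installed.
    """
    return os.path.exists(prefix) and prefix not in registered_prefixes
```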
Prior to any Spack build, Spack modifies PATH etc. to help the build
find the dependencies it needs. It also allows any package to define
custom environment modifications (and furthermore a package can
specify environment modifications to apply when it is used as a
dependency). If an external package defines custom environment
modifications that alter PATH, and the external package is in a merged
or system prefix, then that prefix could "override" the Spack-built
packages.
This commit reorders environment modifications so that PrependPath
actions which expose Spack-built packages override PrependPath actions
for custom environment modifications of external packages.
In more detail, the original order of environment modifications is:
* Modules
* Compiler flag variables
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH for dependencies
* Custom package.py modifications in the following order:
* dependencies
* root
This commit changes the order:
* Modules
* Compiler flag variables
* For each external dependency
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH modifications
* Custom modifications
* For each Spack-built dependency
* PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH modifications
* Custom modifications
Spack pipelines need to take specific actions internally that depend
on whether the pipeline is being run on a PR to spack or a merge to
the develop branch. Pipelines can also run in other repositories,
which represents other possible use cases than just the two mentioned
above. This PR creates a "SPACK_PIPELINE_TYPE" gitlab variable which
is propagated to rebuild jobs, and is also used internally to determine
which pipeline-specific tasks to run.
One goal of the PR is to fix an issue where rebuild jobs which failed on
develop pipelines did not properly report the broken full hash to the
"broken-specs-url".
* Add Externally Findable section to info command
* Use comma delimited detection attributes in addition to boolean value
* Unit test externally detectable part of spack info
* Force the Python interpreter with an env variable
This commit forces the Python interpreter with an
environment variable, to ensure that the Python set
by the "setup-python" action is the one being used.
Due to the policy adopted by Spack to prefer python3
over python we may end up picking a Python 3.X
interpreter where Python 2.7 was meant to be used.
* Revert "Update conftest.py (#24473)"
This reverts commit 477c8ce820.
* Make python-dateutil a soft dependency for unit tests
Before #23212 people could clone spack and run
```
spack unit-tests
```
while now this is not possible, since python-dateutil is
a required but not vendored dependency. This change makes
it not a hard requirement, i.e. it will be used if found
in the current interpreter.
* Workaround mypy complaint
This commit fixes a subtle bug that may occur when
a package is a "possible_provider" of a virtual but
no "provides_virtual" can be deduced. In that case
the cardinality constraint on "provides_virtual"
may arbitrarily assign a package the role of provider
even if the constraints for it to be one are not fulfilled.
The fix reworks the logic around three concepts:
- "possible_provider": a package may provide a virtual if some constraints are met
- "provides_virtual": a package meet the constraints to provide a virtual
- "provider": a package selected to provide a virtual
Spack packages can now fetch versions from CVS repositories. Note
this fetch mechanism is unsafe unless using :extssh:. Most public
CVS repositories use an insecure protocol implemented as part of CVS.
Here we are adding an install_times.json into the spack install metadata folder.
We record a total, global time, along with the times for each phase. The type
of phase or install start / end is included (e.g., build or fail)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Add a new "spack audit" command. This command can check for issues
with configuration or with packages and is intended to help a
user debug a failed Spack build.
In some cases the reported issues are always errors but are too
costly to check for (e.g. packages that specify missing variants on
dependencies). In other cases the issues may be legitimate but
uncommon usage of Spack and we want to be sure the user intended the
behavior (e.g. duplicate compiler definitions).
Audits are grouped by theme, and for now the two themes are packages
and configuration. For example you can run all available audits
on packages with "spack audit packages". It is intended that in
the future users will be able to define their own audits.
The package audits are good candidates for running in package_sanity
(i.e. they could catch bugs in user-submitted packages before they
are merged) but that is left for a later PR.
This should get us most of the way there to support using monitor during a spack container build, for both Singularity and Docker. Some quick notes:
### Docker
Docker works by way of BUILDKIT and being able to specify --secret. What this means is that you can prefix a line with a mount of type secret as follows:
```bash
# Install the software, remove unnecessary deps
RUN --mount=type=secret,id=su --mount=type=secret,id=st cd /opt/spack-environment && spack env activate . && export SPACKMON_USER=$(cat /run/secrets/su) && export SPACKMON_TOKEN=$(cat /run/secrets/st) && spack install --monitor --fail-fast && spack gc -y
```
Where the id for one or more secrets corresponds to the file mounted at `/run/secrets/<name>`. So, for example, to build this container with su (spackmon user) and st (spackmon token) defined I would export them on my host and do:
```bash
$ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .
```
And when we add `env` to the secret definition that tells the build to look for the secret with id "st" in the environment variable `SPACKMON_TOKEN` for example.
If the user is building locally with a local spack monitor, we also need to set the `--network` to be the host, otherwise you can't connect to it (a la isolation of course.)
## Singularity
Singularity doesn't have as nice an ability to clearly specify secrets, so (hoping this eventually gets implemented) what I'm doing now is providing the user instructions to write the credentials to a file, add it to the container to source, and remove when done.
## Tags
Note that the tags PR https://github.com/spack/spack/pull/23712 will need to be merged before `--monitor-tags` will actually work because I'm checking for the attribute (that doesn't exist yet):
```bash
"tags": getattr(args, "monitor_tags", None)
```
So when that PR is merged to update the argument group, it will work here, and I can either update the PR here to not check if the attribute is there (it will be) or open another one in the case this PR is already merged.
Finally, I added a bunch of documentation for how to use monitor with containerize. I say "mostly working" because I can't do a full test run with this new version until the container base is built with the updated spack (the request to the monitor server for an env install was missing so I had to add it here).
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
When running executables from build dependencies, we want to avoid
`LD_PRELOAD` and `DYLD_INSERT_LIBRARIES` mixing any of their shared
libraries built by Spack with system libraries.
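A minimal sketch of the idea (hypothetical helper, not the actual Spack change): strip the preloading variables from the environment used to run build-dependency executables.
```python
import os


def without_preloads(environ=None):
    """Copy of the environment with library-preloading variables removed."""
    env = dict(os.environ if environ is None else environ)
    for var in ("LD_PRELOAD", "DYLD_INSERT_LIBRARIES"):
        env.pop(var, None)
    return env
```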
This will first support uploads for spack monitor, and eventually could be
used for other kinds of spack uploads
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
* extending example for buildcaches
I was attempting to create a local build cache from a directory, and I found the
docs for both buildcaches and mirrors, but did not realize from them that the
url variable could point to the local filesystem. I am extending the docs for
buildcaches with an example of creating and interacting with one on the filesystem,
because I suspect other users will run into this need and possibly not find what
they are looking for.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* adding as follows to spack mirror list
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
It is currently kind of confusing to the reader to distinguish spack buildcache install
and spack install, and it is not clear how to use a build cache once a mirror is added.
Hopefully this little bit of description can help (and I hope I got it right!)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Use the 'version_yearlike' attribute instead of 'version' to
check if the SPACK_COMPILER_EXTRA_RPATHS should be set to include
the built-in 'libfabrics'.
When using the bare 'version', the comparison is wrong when
building with 'intel-parallel-studio', which has the version
format '<edition>.YYYY.Nupdate', due to the leading '<edition>'.
Extracting specs for the result of a solve has been factored
as a method into the asp.Result class. The method accounts for
virtual specs being passed as initial requests.
Minimizing compiler mismatches in the DAG and preferring newer
versions of packages are now higher priority than trying to use as
many default values as possible in multi-valued variants.
Since the module roots were removed from the config file,
`--print-shell-vars` cannot find the module roots anymore. Fix it by
using the new `root_path` function. Moreover, the roots for lmod and
modules seem to have been flipped by accident.
The VALID_VERSION regex didn't check that the version string was
completely valid, only that a prefix of it was. This change ensures
the entire string represents a valid version.
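For illustration, the difference is between matching a prefix and matching the whole string; anchoring the pattern (or using `re.fullmatch`) rejects trailing garbage (simplified pattern, not Spack's actual one):
```python
import re

VERSION = re.compile(r"[A-Za-z0-9_.-]+")

print(bool(VERSION.match("1.2.3 oops")))      # True: only a prefix matched
print(bool(VERSION.fullmatch("1.2.3 oops")))  # False: the whole string must be valid
print(bool(VERSION.fullmatch("1.2.3")))       # True
```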
This makes a few related changes.
1. Make the SEGMENT_REGEX identify *which* arm it matches by what groups
are populated, including whether it's a string or int component or a
separator all at once.
2. Use the updated regex to parse the input once with a findall rather
than twice, once with findall and once with split, since the version
components and separators can be distinguished by their group status.
3. Rather than "convert to int, on exception stay string," if the int
group is set then convert to int, if not then construct an instance
of the VersionStrComponent class
4. VersionStrComponent now implements all of the special string
comparison logic as part of its __lt__ and __eq__ methods to deal
with infinity versions and also overloads comparison with integers.
5. Version now uses direct tuple comparison since it has no per-element
special logic outside the VersionStrComponent class.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Passing absolute paths from pipeline generate job to downstream rebuild jobs
causes problems when the CI_PROJECT_DIR is not the same for the generate and
rebuild jobs. This has happened, for example, when gitlab checks out the
project into a runner-specific directory and different runners are chosen
for the generate and rebuild jobs.
* ensure that the stage root exists for `spack stage -p <PATH>`
* add test to verify `spack stage -p <PATH>` works!
* move out shared tmp staging path setup to a fixture to fix the test
* Simplified the spack.util.gpg implementation
All the classes defined in this Python module,
which were previously used to construct singleton
instances, have been removed in favor of four
global variables. These variables are initialized
lazily, like before.
The API of the module has remained unchanged for the
most part. A few tests have been modified to use
the new global names.
For me the buildcache force overwrite option does not work. It tries to
delete a file, but errors with a key error, apparently because the
leading / has to be removed.
* util.tty.log: read up to 100 lines if ready
Rework to read up to 100 lines from the captured stdin as long as data
is ready to be read immediately. Adds a helper function to poll with
`select` for ready data. This showed a roughly 5-10x perf improvement
for high-rate writes through the logger with relatively short lines.
* util.tty.log: Defer flushes to end of ready reads
Rather than flush per line, flush per set of reads. Since this is a
non-blocking loop, the total perceived wait is short.
* util.tty.log: only scan each line once, usually
Rather than always finding all control characters and then substituting them all,
use `subn` to count the number of control characters replaced. Only if
control characters exist do we find out what they are. This could be made
truly single pass with sub with a function, but it's a more intrusive
change and this got 99%ish of the performance improvement (roughly
another 2x in some cases).
* util.tty.log: remove check for `readable`
Python < 3 does not support a readable check on streams; the check should not be
necessary here since we control the only use and it's explicitly a
stream to be read.
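A small sketch of the `select`-based readiness helper described above (illustrative, not the exact Spack code):
```python
import select


def _input_available(fd, timeout=0):
    """True if ``fd`` has data ready to be read right now."""
    readable, _, _ = select.select([fd], [], [], timeout)
    return bool(readable)


# Usage inside the read loop: keep draining lines (up to a cap) while data is
# immediately available, then flush once per batch instead of once per line.
```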
This PR allows users to `--export`, `--export-secret`, or both to export GPG keys
from Spack. The docs are updated that include a warning that this usually does not
need to be done.
This addresses an issue brought up in slack, and also represented in #14721.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).
This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.
As part of this change, the module roots configuration moved from the config section to inside each module configuration.
Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
### Overview
The goal of this PR is to make gitlab pipeline builds (especially build failures) more reproducible outside of the pipeline environment. The two key changes here which aim to improve reproducibility are:
1. Produce a `spack.lock` during pipeline generation which is passed to child jobs via artifacts. This concretized environment is used both by generated child jobs as well as uploaded as an artifact to be used when reproducing the build locally.
2. In the `spack ci rebuild` command, if a spec needs to be rebuilt from source, do this by generating and running an `install.sh` shell script which is then also uploaded as a job artifact to be run during local reproduction.
To make it easier to take advantage of improved build reproducibility, this PR also adds a new subcommand, `spack ci reproduce-build`, which, given a url to job artifacts:
- fetches and unzips the job artifacts to a local directory
- looks for the generated pipeline yaml and parses it to find details about the job to reproduce
- attempts to provide a copy of the same version of spack used in the ci build
- if the ci build used a docker image, the command prints a `docker run` command you can run to get an interactive shell for reproducing the build
#### Some highlights
One consequence of this change will be much smaller pipeline yaml files. By encoding the concrete environment in a `spack.lock` and passing to child jobs via artifacts, we will no longer need to encode the concrete root of each spec and write it into the job variables, greatly reducing the size of the generated pipeline yaml.
Additionally `spack ci rebuild` output (stdout/stderr) is no longer internally redirected to a log file, so job output will appear directly in the gitlab job trace. With debug logging turned on, this often results in log files getting truncated because they exceed the maximum amount of log output gitlab allows. If this is a problem, you still have the option to `tee` command output to a file within the artifacts directory, as now each generated job exposes a `user_data` directory as an artifact, which you can fill with whatever you want in your custom job scripts.
There are some changes to be aware of in how pipelines should be set up after this PR:
#### Pipeline generation
Because the pipeline generation job now writes a `spack.lock` artifact to be consumed by generated downstream jobs, `spack ci generate` takes a new option `--artifacts-root`, inside which it creates a `concrete_env` directory to place the lockfile. This artifacts root directory is also where the `user_data` directory will live, in case you want to generate any custom artifacts. If you do not provide `--artifacts-root`, the default is for it to create a `jobs_scratch_dir` within your `CI_PROJECT_DIR` (a gitlab predefined environment variable) or whatever is your current working directory if that variable isn't set. Here's the diff of the PR testing `.gitlab-ci.yml` taking advantage of the new option:
```
$ git diff develop..pipelines-reproducible-builds share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
diff --git a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
index 579d7b56f3..0247803a30 100644
--- a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
+++ b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
@@ -28,10 +28,11 @@ default:
- cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
- spack env activate --without-view .
- spack ci generate --check-index-only
+ --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
artifacts:
paths:
- - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+ - "${CI_PROJECT_DIR}/jobs_scratch_dir"
tags: ["spack", "public", "medium", "x86_64"]
interruptible: true
```
Notice how we replaced the specific pointer to the generated pipeline file with its containing folder, the same folder we passed as `--artifacts-root`. This way anything in that directory (the generated pipeline yaml, as well as the concrete environment directory containing the `spack.lock`) will be uploaded as an artifact and available to the downstream jobs.
#### Rebuild jobs
Rebuild jobs now must activate the concrete environment created by `spack ci generate` and provided via artifacts. When the pipeline is generated, a directory called `concrete_environment` is created within the artifacts root directory, and this is where the `spack.lock` file is written to be passed to the generated rebuild jobs. The artifacts root directory can be specified using the `--artifacts-root` option to `spack ci generate`, otherwise, it is assumed to be `$CI_PROJECT_DIR`. The directory containing the concrete environment files (`spack.yaml` and `spack.lock`) is then passed to generated child jobs via the `SPACK_CONCRETE_ENV_DIR` variable in the generated pipeline yaml file.
When you don't provide custom `script` sections in your `mappings` within the `gitlab-ci` section of your `spack.yaml`, the default behavior of rebuild jobs is now to change into `SPACK_CONCRETE_ENV_DIR` and activate that environment. If you do provide custom rebuild scripts in your `spack.yaml`, be aware those scripts should do the same thing: assume `SPACK_CONCRETE_ENV_DIR` contains the concretized environment to activate. No other changes to existing custom rebuild scripts should be required as a result of this PR.
As mentioned above, one key change made in this PR is the generation of the `install.sh` script by the rebuild jobs, as that same script is both run by the CI rebuild job as well as exported as an artifact to aid in subsequent attempts to reproduce the build outside of CI. The generated `install.sh` script contains only a single `spack install` command with arguments computed by `spack ci rebuild`. If the install fails, the job trace in gitlab will contain instructions on how to reproduce the build locally:
```
To reproduce this build locally, run:
spack ci reproduce-build https://gitlab.next.spack.io/api/v4/projects/7/jobs/240607/artifacts [--working-dir <dir>]
If this project does not have public pipelines, you will need to first:
export GITLAB_PRIVATE_TOKEN=<generated_token>
... then follow the printed instructions.
```
When run locally, the `spack ci reproduce-build` command shown above will download and process the job artifacts from gitlab, then print out instructions you can copy-paste to run a local reproducer of the CI job.
This PR includes a few other changes to the way pipelines work, see the documentation on pipelines for more details.
This PR relies on
~- [ ] #23194 to be able to refer to uninstalled specs by DAG hash~
EDIT: that is going to take longer to come to fruition, so for now, we will continue to install specs represented by a concrete `spec.yaml` file on disk.
- [x] #22657 to support installing a single spec already present in the active, concrete environment
- [x] add `in_buildcache` field to DB records to indicate what parts of an index,
which includes roots and dependencies, are in the buildcache.
- [x] add `mark()` method to DB for setting values on single nodes of the DAG.
I would like to be able to export (and save and then load programmatically)
spack blame metadata, so this commit adds a spack blame --json argument,
along with developer docs for it
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
This work will come in two phases. The first here is to allow saving of a local result
with spack monitor, and the second will add a spack monitor command so the user can
do spack monitor upload.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Currently if one package does `depends_on('pkg default_library=shared')`
and another does `depends_on('pkg default_library=both')`, you'd get a
concretization error.
With this PR one package can do `depends_on('pkg default_library=shared')`
and another `depends_on('pkg default_library=static')`, and it would concretize to
`pkg default_library=shared,static`
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Bash has a builtin `fc` that will override the compiler if you use "fc",
so it's better to use the full spack-supplied compiler path.
Additionally, the filter regex in the docs was wrong: it replaced the
entire assignment operation with the RHS.
* Modification to R environment
This PR modifies how the R environment is presented, and fixes
installing the standalone Rmath library.
- The Rmath build and install methods are combined into one
- Set parallel=False when installing Rmath
- remove the run environment that set up variables for libraries and
headers that are not really needed, and pollute the environment.
* Add setup_run_environment back
- Add back the setup_run_environment with LD_LIBRARY_PATH and
PKG_CONFIG_PATH.
- Adjust documentation to reflect the current code.
Spack uses curl to fetch URL resources. For locally-stored resources
it uses curl's file protocol; when using this protocol, curl expects
that the URL encoding conforms to RFC 3986 (which reserves characters
like '?' and '=' for special use).
We were not performing this encoding, and found a resource where
curl was interpreting this in an unfavorable way (succeeding, but
producing an empty file). This commit properly encodes URLs when
using curl's file protocol.
This error did not likely come up before because in most contexts
Spack was either fetching via http or it was using URLs without
offending characters (for example, the sha-based URLs in mirrors
never contain these characters).
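For illustration, RFC 3986-style quoting of a local path before handing it to curl's file protocol might look like this (stdlib sketch, not necessarily the exact helper Spack uses):
```python
import urllib.parse

path = "/tmp/mirror/patch?name=foo=bar.diff"

# Percent-encode reserved characters such as '?' and '=' so curl does not try
# to interpret them; '/' is left alone so the path structure survives.
url = "file://" + urllib.parse.quote(path, safe="/")
print(url)  # file:///tmp/mirror/patch%3Fname%3Dfoo%3Dbar.diff
```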
Spack doesn't require users to manually index their repos; it reindexes the indexes automatically when things change. To determine when to do this, it has to `stat()` all package files in each repository to make sure that indexes are up to date with packages. We currently index virtual providers, patches by sha256, and tags on packages.
When this was originally implemented, we ran the checker all the time, at startup, but that was slow (see #7587). But we didn't go far enough -- it still consults the checker and does all the stat operations just to see if a package exists (`Repo.exists()`). That might've been a wash in 2018, but as the number of packages has grown, it's gotten slower -- checking 5k packages is expensive and users see this for small operations. It's a win now to make `Repo.exists()` check files directly.
**Fix:**
This PR does a number of things to speed up `spack load`, `spack info`, and other commands:
- [x] Make `Repo.exists()` check files directly again with `os.path.exists()` (this is the big one)
- [x] Refactor `Spec.satisfies()` so that checking for virtual packages only happens if needed
(avoids some calls to exists())
- [x] Avoid calling `Repo.exists(spec)` in `Repo.get()`. `Repo.get()` will ultimately try to load
a `package.py` file anyway; we can let the failure to load it indicate that the package doesn't
exist, and avoid another call to exists().
- [x] Fix up some comments in spec parsing
- [x] Call `UnknownPackageError` more consistently in `repo.py`
- [x] `analyze` isn't commonly used; move it to long help
(`spack -H` vs `spack -h`). Give it its own section.
- [x] make it clear from `spack -h` that `spack module` can generate
module files
- [x] shorten help for `spack style`
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).
This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.
As part of this change, the module roots configuration moved from the `config` section to inside each module configuration.
Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
TODO:
- [x] code changes to support multiple module sets
- [x] code changes to support modules relative to a view
- [x] Tests for multiple module configurations
- [x] Tests for modules relative to a view
- [x] Backwards compatibility for module roots from config section
- [x] Backwards compatibility for default module set without the name specified
- [x] Tests for backwards compatibility
The implementation of __str__ has been simplified to traverse the spec directly,
and no longer calls the flat_dependencies method. Dead code has been
removed.
For configure (e.g. for hdf5) to pass, this option needs to be pulled out when invoked in ccld mode.
I thought it had fixed the issue but I still saw it after that. After some digging, my guess is that I was able
to get hdf5 to build with ifort instead of ifx. A lot of overlapping changes were occurring at the time, as it were.
There are still outstanding issues building hdf5 with ifx, and Intel is looking into what appears to be a
compiler bug, but this manifests during build and is likely a separate issue.
I have verified that making the edit in 'ccld' mode removes the -loopopt=0 and enables hdf5 to pass
configure. It should be fine to make the edit in 'ld' mode as well, but I have not tested that and didn't
include an -or- condition for it.
Currently, environment views blink out of existence during the view regeneration, and are slowly built back up to their new and improved state. This is not good if other processes attempt to access the view -- they can see it in an inconsistent state.
This PR makes environment view updates atomic. This requires a level of indirection (via symlink, similar to nix or guix) from the view root to the underlying implementation on the filesystem.
Now, an environment view at `/path/to/foo` is a symlink to `/path/to/._foo/<hash>`, where `<hash>` is a hash of the contents of the view. We construct the view in its content-keyed hash directory, create a new symlink to this directory, and atomically replace the symlink with one to the new view.
This PR has a couple of other benefits:
* It future-proofs environment views so that we can implement rollback.
* It ensures that we don't leave users in an inconsistent state if building a new view fails for some reason.
For background:
* there is no atomic operation in posix that allows for a non-empty directory to be replaced.
* There is an atomic `renameat2` in the linux kernel starting in version 3.15, but many filesystems don't support the system call, including NFS3 and NFS4, which makes it a poor implementation choice for an HPC tool, so we use the symlink approach that other tools like nix and guix have used successfully.
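A condensed sketch of the atomic swap (illustrative, using only `os` primitives; the real implementation also content-hashes the view directory):
```python
import os


def atomically_point_symlink(link_path, new_target):
    """Make ``link_path`` point at ``new_target`` with no window where it is missing."""
    tmp_link = link_path + ".tmp"
    if os.path.lexists(tmp_link):
        os.unlink(tmp_link)
    os.symlink(new_target, tmp_link)
    # rename() atomically replaces an existing symlink on POSIX filesystems.
    os.rename(tmp_link, link_path)
```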
fixes #22351
The ASP-based solver now accounts for the presence
in the DAG of deprecated versions and tries to minimize
their number at highest priority.
Variants explicitly set in an abstract root spec are considered
as defaults for the package they refer to, and they override
what is in packages.yaml and in package.py. This is relevant
only for multi-valued variants, where a constraint may extend
an already default value.
The code for guessing the cpu architecture based on craype module names got confused,
at least on LLNL RZ prototype systems. In particular a (L) or (D) at the end of a craype-x86-xxx or other
cpu architecture module was getting the logic confused.
With this patch, any white space plus remaining characters in the module name are removed.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
There have been a lot of questions and some confusion recently surrounding Spack installation test capabilities so this PR is intended to clean up and refine the documentation for "Checking an installation".
It aims to better distinguish between checks that are performed during an installation (i.e., build-time tests) and those that can be done days and weeks after the software has been installed (i.e., install (or smoke) tests).
When we first merged the ASP-based solver, unit tests
were run in a Docker container with root permissions,
and that was preventing a few tests from succeeding.
For some time now, though, clingo has been tested as a regular
user within GitHub Actions VMs, so we should start running
these checks again.
In an active, concretized environment, support installing one or more
cli specs only if they are already present in the environment. The
`--no-add` option is the default for root specs, but optional for
dependency specs. I.e. if you `spack install <depspec>` in an
environment, the dependency-only spec `depspec` will be added as a
root of the environment before being installed. In addition,
`spack install --no-add <spec>` fails if it does not find an
unambiguous match for `spec`.
Like compilers, targets now try to minimize
mismatches instead of maximizing matches.
Deduction of mismatches is reworked to be
the opposite of a match, since computing
that is faster.
The ASP-based solver can natively manage cases where more than one root spec is given, and is able to concretize all the roots together (ensuring one spec per package at most).
Modifications:
- [x] When concretizing an environment together, the ASP-based solver calls its `solve` method directly rather than constructing a temporary fake root package.
The loading protocol mandates that the module we are going
to import must already be in sys.modules before its code is
executed, to prevent unbounded recursion and multiple loading.
Loading a module from file now exits early if the module is already
in sys.modules.
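A minimal sketch of that protocol using the standard `importlib` machinery (names are illustrative, not Spack's actual loader):
```python
import importlib.util
import sys

def load_module_from_file(module_name, path):
    if module_name in sys.modules:  # early exit: already loaded
        return sys.modules[module_name]
    spec = importlib.util.spec_from_file_location(module_name, path)
    module = importlib.util.module_from_spec(spec)
    # Register the module *before* executing its code, so that imports of
    # the same name during execution find it instead of recursing.
    sys.modules[module_name] = module
    spec.loader.exec_module(module)
    return module
```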
When installing OneAPI packages as root (e.g. in a container), the
installer places cache files in /var/intel/installercache that
interfere with future Spack installs. This change ensures that the
cache is removed when an installation is run as the root user.
The function we coded in Spack to load Python modules with arbitrary
names from a file seems to have issues with local imports. For
loading hooks, though, it is unnecessary to use such a function, since
we don't need to bind a custom name to a module, nor do we have to load
it from an unknown location.
This PR thus modifies spack.hook in the following ways:
- Use __import__ instead of spack.util.imp.load_source (this
addresses #20005)
- Sync module docstring with all the hooks we have
- Avoid using memoization in a module function
- Marked with a leading underscore all the names that are supposed
to stay local
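As a rough illustration of the `__import__`-based approach (the helper name and module path are assumptions, not the exact code in spack.hooks):
```python
import sys

def _load_hook_module(hook_name):
    # __import__ returns the top-level package, so look the fully
    # qualified submodule up in sys.modules afterwards.
    module_name = "spack.hooks." + hook_name
    __import__(module_name)
    return sys.modules[module_name]
```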
This is as much a question as it is a minor fine-tuning of the docs. I've been known to add things to an environment by editing the `spack.yaml` file directly. When I read the previous version of this sentence, I was afraid that `spack add` was actually doing *two* things, modifying the `spack.yaml` and updating something else that defined the roots of the Environment. A bit of experimentation suggests that editing the `spack.yaml` file is sufficient to change the roots.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
fixes #22786
Trying to get optimization flags for a specific target from
a compiler may trigger warnings. In the context of constructing
facts for the ASP-based solver we don't want to show these
warnings to the user, so here we simply ignore them.
This isn't a significant issue, but I noticed that the docstring incorrectly references "tty.fail" and I wanted to quickly fix it to reflect the correct command, tty.die. I also wanted to fix the docstrings so they are not large clumps, following what @tgamblin suggested after I wrote this: one quick summary line at the top, with more verbose detail after that.
This provides initial support for [spack monitor](https://github.com/spack/spack-monitor), a web application that stores information and analysis about Spack installations. Spack can now contact a monitor server and upload analysis -- even after a build is already done.
Specifically, this adds:
- [x] monitor options for `spack install`
- [x] `spack analyze` command
- [x] hook architecture for analyzers
- [x] separate build logs (in addition to the existing combined log)
- [x] docs for spack analyze
- [x] reworked developer docs, with hook docs
- [x] analyzers for:
- [x] config args
- [x] environment variables
- [x] installed files
- [x] libabigail
There is a lot more information in the docs contained in this PR, so consult those for full details on this feature.
Additional tests will be added in a future PR.
In debug mode, processes taking an exclusive lock write out their node name to
the lock file. We were using `getfqdn()` for this, but it seems to produce
inconsistent results when used from within some github actions containers.
We get this error because getfqdn() seems to return a short name in one place
and a fully qualified name in another:
```
File "/home/runner/work/spack/spack/lib/spack/spack/test/llnl/util/lock.py", line 1211, in p1
assert lock.host == self.host
AssertionError: assert 'fv-az290-764....cloudapp.net' == 'fv-az290-764'
- fv-az290-764.internal.cloudapp.net
+ fv-az290-764
!!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!
== 1 failed, 2547 passed, 7 skipped, 22 xfailed, 2 xpassed in 1238.67 seconds ==
```
This seems to stem from https://bugs.python.org/issue5004.
We don't really need to get a fully qualified hostname for debugging, so use
`gethostname()` because its results are more consistent. This seems to fix the
issue.
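Both calls come from the standard library, so the difference is easy to see interactively:
```python
import socket

# gethostname() returns the bare node name and is stable across calls;
# getfqdn() may or may not append the domain, depending on resolver state.
print(socket.gethostname())
print(socket.getfqdn())
```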
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
* Clarify stub compiler definition in compilers.yaml
* Update explanation of why stub compiler definition is needed
* Add note about required module definition when using Spack-installed
intel-parallel-studio as intel-compiler
* Add suggestion about updating package config preferences based on
choice of variants when installing intel-parallel-studio to avoid
reinstallation
We remove system paths from search variables like PATH and
from -L options because they may contain many packages and
could interfere with Spack-built packages. External packages
may be installed to prefixes that are not actually system paths
but are still "merged" in the sense that many other packages are
installed there. To avoid conflicts, this PR places all external
packages at the end of search paths.
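A sketch of the reordering idea (function and argument names are made up for illustration):
```python
def with_externals_last(paths, external_prefixes):
    # Keep Spack-built prefixes in front and push external prefixes to the
    # back, so heavily shared external install roots cannot shadow them.
    externals = set(external_prefixes)
    spack_built = [p for p in paths if p not in externals]
    external = [p for p in paths if p in externals]
    return spack_built + external
```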
We set LC_ALL=C to encourage a build process to generate ASCII
output (so our logger daemon can decode it). Most packages
respect this but it appears that intel-oneapi-compilers does
not in some cases (see #22813). This reads the output of the build
process as UTF-8, which still works if the build process respects
LC_ALL=C but also works if the process generates UTF-8 output.
For Python >= 3.7 all files are opened with UTF-8 encoding by
default. Python 2 does not support the encoding argument on
'open', so to support Python 2 the files would have to be
opened in byte mode and explicitly decoded (as a side note,
this would be the only way to handle other encodings without
being informed of them in advance).
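A sketch of the decoding behavior described above (the `errors='replace'` policy is an assumption of the sketch, not necessarily what Spack does):
```python
def decode_build_output(raw):
    # ASCII produced under LC_ALL=C is a subset of UTF-8, so decoding as
    # UTF-8 handles both the well-behaved and the misbehaving build.
    if isinstance(raw, bytes):
        return raw.decode("utf-8", errors="replace")
    return raw
```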
* bugfix: fix representation of null in spack_yaml output
Nulls were previously printed differently by `spack config blame config`
and `spack config get config`. Fix this in the `spack_yaml` dumpers.
* bugfix: `spack config blame` should print all lines of config
`spack config blame` was not printing all lines of configuration because
there were no annotations for empty lines in the YAML dump output. Fix
this by removing empty lines.
- Use debugoptimized as default build type, just like RelWithDebInfo for cmake
- Do not strip by default, and add a default_library variant which conveniently supports both shared and static
By default, clingo doesn't show any optimization criteria (maximized or
minimized sums) if the set they aggregate is empty. Per the clingo
mailing list, we can get around that by adding, e.g.:
```
#minimize{ 0@2 : #true }.
```
for the 2nd criterion. This forces clingo to print out the criterion but
does not affect the optimization.
This PR adds directives as above for all of our optimization criteria, as
well as facts with descriptions of each criterion, like this:
```
opt_criterion(2, "number of non-default variants")
```
We use facts in `concretize.lp` rather than hard-coding these in `asp.py`
so that the names can be maintained in the same place as the other
optimization criteria.
The now-displayed weights and the names are used to display optimization
output like this:
```console
(spackle):solver> spack solve --show opt zlib
==> Best of 0 answers.
==> Optimization Criteria:
Priority Criterion Value
1 version weight 0
2 number of non-default variants (roots) 0
3 multi-valued variants + preferred providers for roots 0
4 number of non-default variants (non-roots) 0
5 number of non-default providers (non-roots) 0
6 count of non-root multi-valued variants 0
7 compiler matches + number of nodes 1
8 version badness 0
9 non-preferred compilers 0
10 target matches 0
11 non-preferred targets 0
zlib@1.2.11%apple-clang@12.0.0+optimize+pic+shared arch=darwin-catalina-skylake
```
Note that this is all hidden behind a `--show opt` option to `spack
solve`. Optimization weights are no longer shown by default, but you can
at least inspect them and more easily understand what is going on.
- [x] always show optimization criteria in `clingo` output
- [x] add `opt_criterion()` facts for all optimization criteria
- [x] make display of opt criteria optional in `spack solve`
- [x] rework how optimization criteria are displayed, and add a `--show opt`
option to `spack solve`
CachedCMakePackage is a CMakePackage subclass for using CMake initial
cache. This feature of CMake allows packages to increase reproducibility,
especially between spack builds and manual builds. It also allows
packages to sidestep certain parsing bugs in extremely long cmake
commands, and to avoid system limits on the length of the command line.
Co-authored by: Chris White <white238@llnl.gov>
In the face of two consecutive spaces in the command line, the compiler wrapper would skip all remaining arguments, causing problems building py-scipy with the Intel compiler. This PR solves the problem.
* Fixed compiler wrapper in the face of extra spaces between arguments
Co-authored-by: Elizabeth Fischer <elizabeth.fischer@alaska.edu>
Original commit message:
This feature of CMake allows packages to increase reproducibility, especially between
Spack- and manual builds. It also allows packages to sidestep certain parsing bugs in
extremely long ``cmake`` commands, and to avoid system limits on the length of the
command line.
Adding:
Co-authored by: Chris White <white238@llnl.gov>
This reverts commit c4f0a3cf6c.
CachedCMakePackage is a specialized class for packages built using CMake initial cache.
This feature of CMake allows packages to increase reproducibility, especially between
Spack- and manual builds. It also allows packages to sidestep certain parsing bugs in
extremely long ``cmake`` commands, and to avoid system limits on the length of the
command line.
Autoconf before 2.70 will erroneously pass ifx's -loopopt argument to the
linker, requiring all packages to use autoconf 2.70 or newer to use ifx.
This is a hotfix enabling ifx to be used in Spack. Instead of bothering
to upgrade autoconf for every package, we'll just strip out the
problematic flag if we're in `ld` mode.
- [x] Add a conditional to the `cc` wrapper to skip `-loopopt` in `ld`
mode. This can probably be generalized in the future to strip more
things (e.g., via an environment variable we can control from
Spack) but it's good enough for now.
- [x] Add a test ensuring that `-loopopt` arguments are stripped in link
mode, but not in compile mode.
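The change itself lives in the shell-based `cc` wrapper; the filtering idea, expressed in Python for brevity (names are illustrative):
```python
def filter_link_args(mode, args):
    # Drop ifx's -loopopt=<n> flags only when linking; compile lines keep them.
    if mode != "ld":
        return list(args)
    return [a for a in args if not a.startswith("-loopopt")]
```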
Since `lazy_lexicographic_ordering` handles `None` comparison for us, we
don't need to adjust the spec comparators to return empty strings or
other type-specific empty types. We can just leverage the None-awareness
of `lazy_lexicographic_ordering`.
- [x] remove "or ''" from `_cmp_iter` in `Spec`
- [x] remove setting of `self.namespace` to `''` in `MockPackage`
We have been using the `@llnl.util.lang.key_ordering` decorator for specs
and most of their components. This leverages the fact that in Python,
tuple comparison is lexicographic. It allows you to implement a
`_cmp_key` method on your class, and have `__eq__`, `__lt__`, etc.
implemented automatically using that key. For example, you might use
tuple keys to implement comparison, e.g.:
```python
class Widget:
    # author implements this
    def _cmp_key(self):
        return (
            self.a,
            self.b,
            (self.c, self.d),
            self.e,
        )

    # operators are generated by @key_ordering
    def __eq__(self, other):
        return self._cmp_key() == other._cmp_key()

    def __lt__(self, other):
        return self._cmp_key() < other._cmp_key()

    # etc.
```
The issue there for simple comparators is that we have to build the
tuples *and* we have to generate all the values in them up front. When
implementing comparisons for large data structures, this can be costly.
This PR replaces `@key_ordering` with a new decorator,
`@lazy_lexicographic_ordering`. Lazy lexicographic comparison maps the
tuple comparison shown above to generator functions. Instead of comparing
based on pre-constructed tuple keys, users of this decorator can compare
using elements from a generator. So, you'd write:
```python
@lazy_lexicographic_ordering
class Widget:
    def _cmp_iter(self):
        yield self.a
        yield self.b

        def cd_fun():
            yield self.c
            yield self.d

        yield cd_fun
        yield self.e

    # operators are added by decorator (but are a bit more complex)
```
There are no tuples that have to be pre-constructed, and the generator
does not have to complete. Instead of tuples, we simply make functions
that lazily yield what would've been in the tuple. If a yielded value is
a `callable`, the comparison functions will call it and recursively
compare it. The comparator just walks the data structure like you'd expect
it to.
The ``@lazy_lexicographic_ordering`` decorator handles the details of
implementing comparison operators, and the ``Widget`` implementor only
has to worry about writing ``_cmp_iter``, and making sure the elements in
it are also comparable.
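For illustration only, here is a simplified equality walk over two `_cmp_iter()` generators; the decorator's real operators also implement ordering and hashing, but the recursion into callables is the key idea:
```python
import itertools

_END = object()  # sentinel for generators of different lengths

def lazy_eq(cmp_iter_a, cmp_iter_b):
    # Walk two _cmp_iter() generators in lockstep, recursing into callables.
    for a, b in itertools.zip_longest(cmp_iter_a, cmp_iter_b, fillvalue=_END):
        if callable(a) and callable(b):
            if not lazy_eq(a(), b()):
                return False
        elif a != b:
            return False
    return True
```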
Using this PR shaves another 1.5 sec off the runtime of `spack buildcache
list`, and it also speeds up Spec comparison by about 30%. The runtime
improvement comes mostly from *not* calling `hash()` on `_cmp_iter()` results.
* Make -j flag less exceptional
The -j flag in spack behaves differently from make, ctest, ninja, etc.,
because it caps the number of jobs at an arbitrary number, 16.
Spack will behave like other tools if `spack install` uses a reasonable
default, and `spack install -j <num>` *overrides* that default.
This will be particularly useful for Spack usage outside of a traditional
HPC context and for HPC centers that encourage users to compile on
login nodes with many cores instead of on compute nodes, which has
become increasingly common as individual nodes have more cores.
This maintains the existing default value of min(num_cpus, 16). However,
as it is right now, Spack does a poor job at determining the number of
cpus on linux, since it doesn't take cgroups into account. This is
particularly problematic when using distributed builds with slurm. This PR
also introduces `spack.util.cpus.cpus_available()` to consolidate
knowledge on determining the number of available cores, and improves
core detection for linux. This should also improve core detection for Docker/
Kubernetes, which also use cgroups.
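A sketch of the consolidated helper (the real one lives in `spack.util.cpus`; details may differ):
```python
import multiprocessing
import os

def cpus_available():
    # sched_getaffinity reflects the CPU mask imposed by cpuset cgroups,
    # taskset, or Slurm binding; fall back where it does not exist (e.g. macOS).
    try:
        return len(os.sched_getaffinity(0))
    except AttributeError:
        return multiprocessing.cpu_count()
```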
This commit extends the API of the __call__ method of the
SpackCommand class to permit passing global arguments
like those interposed between the main "spack" command
and the subsequent subcommand.
The functionality is used to fix an issue where running
```spack -e . location -b some_package```
ends up printing the name of the environment instead of
the build directory of the package, because the location arg
parser also stores this value as `arg.env`.
fixes #22294
A combination of the swapping order for global variables and
the fact that most of them are lazily evaluated resulted in
custom install tree not being taken into account if clingo
had to be bootstrapped.
This commit fixes that particular issue, but a broader refactor
may be needed to ensure that similar situations won't affect us
in the future.
Remote buildcache indices need to be stored in a place that does not
require writing to the Spack prefix. Move them from the install_tree to
the misc_cache.
fixes #22565
This change enforces the uniqueness of the version_weight
atom per node(Package) in the DAG. It does so by applying
FTSE and adding an extra layer of indirection with the
possible_version_weight/2 atom.
Before this change it may have happened that for the same
node two different version_weight/2 were in the answer set,
each of which referred to a different spec with the same
version, and their weights would sum up.
This led to unexpected results, like preferring to build a
new version of an external package if the external version was
older.
* Make stage use concrete specs from environment
Same as in https://github.com/spack/spack/pull/21642, the idea is that
we want to easily stage a package that fails to build in a complex
environment. Instead of making the user create a spec by hand (basically
transforming all the rules in the environment manifest into a spec,
defying the purpose of the environment...), use the provided spec as a
filter for the already concretized specs. This also speeds things up,
because we don't have to reconcretize.
* clingo: modify recipe for bootstrapping
Modifications:
- clingo builds with shared Python only if ^python+shared
- avoid building the clingo app for bootstrapping
- don't link to libpython when bootstrapping
* Remove option that breaks on linux
* Give more hints for the current Python
* Disable CLINGO_BUILD_PY_SHARED for bootstrapping
* bootstrapping: try to detect the current python from std library
This is much faster than calling external executables
* Fix compatibility with Python 2.6
* Give hints on which compiler and OS to use when bootstrapping
This change hints which compiler to use for bootstrapping clingo
(either GCC or Apple Clang on MacOS). On Cray platforms it also
hints to build for the frontend system, where software is meant
to be installed.
* Use spec_for_current_python to constrain module requirement
* ASP-based solver: avoid adding values to variants when they're set
fixes #22533, fixes #21911
Added a rule that prevents any value from slipping into a variant when the
variant is set explicitly. This is relevant for multi-valued variants,
in particular for those that have disjoint sets of values.
* Ensure disjoint sets have a clear semantics for external packages
fixes #22547
SingleFileScope was not able to repopulate its cache before this
change. This was affecting the configuration seen by environments
using clingo bootstrapped from sources, since the bootstrapping
operation involved a few cache invalidations for config files.
This change accounts for platform specific configuration scopes,
like ~/.spack/linux, during bootstrapping. These scopes were
previously not accounted for and that was causing issues e.g.
when searching for compilers.
* Replace URL computation in base IntelOneApiPackage class with
defining URLs in component packages (this is expected to be
simpler for now)
* Add component_dir property that all oneAPI component packages must
define. This property names a directory that should exist after
installation completes (useful for making sure the install was
successful) and also defines the search location for the
component's environment update script.
* Add needed dependencies for components (e.g. intel-oneapi-dnn
requires intel-oneapi-tbb). The compilers provided by
intel-oneapi-compilers need some components under certain
circumstances (e.g. when enabling SYCL support) but these were
omitted since the libraries should only be linked when a
dependent package requests that feature
* Remove individual setup_run_environment implementations and use
IntelOneApiPackage superclass method which sources vars.sh
(located in a subdirectory of component_dir)
* Add documentation for IntelOneApiPackage build system
Co-authored-by: Vasily Danilin <vasily.danilin@yandex.ru>
* unit tests: mark slow tests as "maybeslow"
This commit also removes the "network" marker and
marks every "network" test as "maybeslow". Tests
marked as db are maintained, but they're not slow
anymore.
* GA: require style tests to pass before running unit-tests
* GA: make MacOS unit tests fail fast
* GA: move all unit tests into the same workflow, run style tests as a prerequisite
All the unit tests have been moved into the same workflow so that a single
run of the dorny/paths-filter action can be used to ask for coverage based
on the files that have been changed in a PR. The basic idea is that for PRs
that only change packages, coverage is not necessary,
resulting in a faster execution of the tests.
Also, for package only PRs slow unit tests are skipped.
Finally, MacOS and linux unit tests are now conditional on style tests passing
meaning that e.g. we won't waste a MacOS worker if we know that the PR has
flake8 issues.
* Addressed review comments
* Skipping slow tests on MacOS for package only recipes
* QA: make tests on changes correct before merging
In most cases, we want condition_holds(ID) to imply any imposed
constraints associated with the ID. However, the dependency relationship
in Spack is special because it's "extra" conditional -- a dependency
*condition* may hold, but we have decided that externals will not have
dependencies, so we need a way to avoid having imposed constraints appear
for nodes that don't exist.
This introduces a new rule that says that constraints are imposed
*unless* we define `do_not_impose(ID)`. This allows rules like
dependencies, which rely on more than just spec conditions, to cancel
imposed constraints.
We add one special case for this: dependencies of externals.
We only consider test dependencies some of the time. Some packages are
*only* test dependencies. Spack's algorithm was previously generating
dependency conditions that could hold, *even* if there was no potential
dependency type.
- [x] change asp.py so that this can't happen -- we now only generate
dependency types for possible dependencies.
This builds on #20638 by unifying all the places in the concretizer where
things are conditional on specs. Previously, we duplicated a common spec
conditional pattern for dependencies, virtual providers, conflicts, and
externals. That was introduced in #20423 and refined in #20507, and
roughly looked as follows.
Given some directives in a package like:
```python
depends_on("foo@1.0+bar", when="@2.0+variant")
provides("mpi@2:", when="@1.9:")
```
We handled the `@2.0+variant` and `@1.9:` parts by generating
`dependency_condition()`, `required_dependency_condition()`, and
`imposed_dependency_condition()` facts to trigger rules like this:
```prolog
dependency_conditions_hold(ID, Parent, Dependency) :-
attr(Name, Arg1) : required_dependency_condition(ID, Name, Arg1);
attr(Name, Arg1, Arg2) : required_dependency_condition(ID, Name, Arg1, Arg2);
attr(Name, Arg1, Arg2, Arg3) : required_dependency_condition(ID, Name, Arg1, Arg2, Arg3);
dependency_condition(ID, Parent, Dependency);
node(Parent).
```
And we handled `foo@1.0+bar` and `mpi@2:` parts ("imposed constraints")
like this:
```prolog
attr(Name, Arg1, Arg2) :-
dependency_conditions_hold(ID, Package, Dependency),
imposed_dependency_condition(ID, Name, Arg1, Arg2).
attr(Name, Arg1, Arg2, Arg3) :-
dependency_conditions_hold(ID, Package, Dependency),
imposed_dependency_condition(ID, Name, Arg1, Arg2, Arg3).
```
These rules were repeated with different input predicates for
requirements (e.g., `required_dependency_condition`) and imposed
constraints (e.g., `imposed_dependency_condition`) throughout
`concretize.lp`. In #20638 it got to be a bit confusing, because we used
the same `dependency_condition_holds` predicate to impose constraints on
conditional dependencies and virtual providers. So, even though the
pattern was repeated, some of the conditional rules were conjoined in a
weird way.
Instead of repeating this pattern everywhere, we now have *one* set of
consolidated rules for conditions:
```prolog
condition_holds(ID) :-
condition(ID);
attr(Name, A1) : condition_requirement(ID, Name, A1);
attr(Name, A1, A2) : condition_requirement(ID, Name, A1, A2);
attr(Name, A1, A2, A3) : condition_requirement(ID, Name, A1, A2, A3).
attr(Name, A1) :- condition_holds(ID), imposed_constraint(ID, Name, A1).
attr(Name, A1, A2) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2).
attr(Name, A1, A2, A3) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2, A3).
```
this allows us to use `condition(ID)` and `condition_holds(ID)` to
encapsulate the conditional logic on specs in all the scenarios where we
need it. Instead of defining predicates for the requirements and imposed
constraints, we generate the condition inputs with generic facts, and
define predicates to associate the condition ID with a particular
scenario. So, now, the generated facts for a condition look like this:
```prolog
condition(121).
condition_requirement(121,"node","cairo").
condition_requirement(121,"variant_value","cairo","fc","True").
imposed_constraint(121,"version_satisfies","fontconfig","2.10.91:").
dependency_condition(121,"cairo","fontconfig").
dependency_type(121,"build").
dependency_type(121,"link").
```
The requirements and imposed constraints are generic, and we associate
them with their meaning via the id. Here, `dependency_condition(121,
"cairo", "fontconfig")` tells us that condition 121 has to do with the
dependency of `cairo` on `fontconfig`, and the conditional dependency
rules just become:
```prolog
dependency_holds(Package, Dependency, Type) :-
dependency_condition(ID, Package, Dependency),
dependency_type(ID, Type),
condition_holds(ID).
```
Dependencies, virtuals, conflicts, and externals all now use similar
patterns, and the logic for generating condition facts is common to all
of them on the python side, as well. The more specific routines like
`package_dependencies_rules` just call `self.condition(...)` to get an id
and generate requirements and imposed constraints, then they generate
their extra facts with the returned id, like this:
```python
def package_dependencies_rules(self, pkg, tests):
    """Translate 'depends_on' directives into ASP logic."""
    for _, conditions in sorted(pkg.dependencies.items()):
        for cond, dep in sorted(conditions.items()):
            # create a condition and get its id
            condition_id = self.condition(cond, dep.spec, pkg.name)
            # associate specifics about the dependency with the id
            self.gen.fact(fn.dependency_condition(
                condition_id, pkg.name, dep.spec.name
            ))
            # etc.
```
- [x] unify generation and logic for conditions
- [x] use unified logic for dependencies
- [x] use unified logic for virtuals
- [x] use unified logic for conflicts
- [x] use unified logic for externals
* Rewrite relative dev_spec paths internally to absolute paths in case of relocation of the environment file
* Test relative paths for dev_path in environments
* Add a --keep-relative flag to spack env create
This ensures that relative paths of develop paths are not expanded to
absolute paths when initializing the environment in a different location
from the spack.yaml init file.
Currently, regardless of a spec being concrete or not, we validate its variants in `spec_clauses` (part of `SpackSolverSetup`).
This PR skips the check if the spec is concrete.
The reason we want to do this is so that the solver setup class (really, `spec_clauses`) can be used for cases when we just want the logic statements / facts (is that what they are called?) and we don't need to re-validate an already concrete spec. We can't change existing concrete specs, and we have to be able to handle them *even if they violate constraints in the current spack*. This happens in practice if we are doing the validation for a spec produced by a different spack install.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
This pull request will add the ability for a user to add a configuration argument on the fly, on the command line, e.g.,:
```bash
$ spack -c config:install_tree:root:/path/to/config.yaml -c packages:all:compiler:[gcc] list --help
```
The above command doesn't do anything (I'm just getting help for list) but you can imagine having another root of packages, and updating it on the fly for a command (something I'd like to do in the near future!)
I've moved the logic for config_add that used to be in spack/cmd/config.py into spack/config.py proper, and now both the main.py (where spack commands live) and spack/cmd/config.py use these functions. I only needed spack config add, so I didn't move the others. We can move the others if they are also needed in multiple places.
Was getting the following error:
```
$ spack test list
==> Error: issubclass() arg 1 must be a class
```
This PR adds a check in `has_test_method` (in case it is re-used elsewhere such as #22097) and ensures a class is passed to the method from `spack test list`.
This is a workaround for an issue with how "spack install" is invoked from within "spack ci rebuild". Because we don't get an exception, or even the actual return code, when using the object returned by spack.util.executable.which('spack') to install the target spec, we get no indication of failures from the install command itself. Instead, we rely on the subsequent buildcache creation failure to fail the job.
Unlike the other commands of the `R CMD` interface, the `INSTALL` command
will read `Renviron` files. This can potentially break builds of r-
packages, depending on what is set in the `Renviron` file. This PR adds
the `--vanilla` flag to ensure that neither `Rprofile` nor `Renviron` files
are read during Spack builds of r- packages.
This adds a `--path` option to `spack python` that shows the `python`
interpreter that Spack is using.
e.g.:
```console
$ spack python --path
/Users/gamblin2/src/spack/var/spack/environments/default/.spack-env/view/bin/python
```
This is useful for debugging, and we can ask users to run it to
understand what python Spack is picking up via preferences in `bin/spack`
and via the `SPACK_PYTHON` environment variable introduced in #21222.
`spack test list` will show you which *installed* packages can be tested
but it won't show you which packages have tests.
- [x] add `spack test list --all` to show which packages have test methods
- [x] update `has_test_method()` to handle package instances *and*
package classes.
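One way to make `has_test_method()` accept both, sketched with assumed helper names:
```python
import inspect

def _ensure_class(pkg_or_class):
    # `spack test list` may hand over an installed package *instance*,
    # while other callers pass the package *class*; normalize to a class
    # so issubclass()-style checks do not blow up.
    return pkg_or_class if inspect.isclass(pkg_or_class) else type(pkg_or_class)
```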
* Improve R package creation
This PR adds the `list_url` attribute to CRAN R packages when using
`spack create`. It also adds the `git` attribute to R Bioconductor
packages upon creation.
* Switch over to using cran/bioc attributes
The cran/bioc entries are set to have the '=' line up with homepage
entry, but homepage does not need to exist in the package file. If it
does not, that could affect the alignment.
* Do not have to split bioc
* Edit R package documentation
Explain Bioconductor packages and add `cran` and `bioc` attributes.
* Update lib/spack/docs/build_systems/rpackage.rst
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update lib/spack/docs/build_systems/rpackage.rst
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Simplify the cran attribute
The version can be faked so that the cran attribute is simply equal to
the CRAN package name.
* Edit the docs to reflect new `cran` attribute format
* Use the first element of self.versions() for url
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This allows users to use relative paths for mirrors and repos and other things that may be part of a Spack environment. There are two ways to do it.
1. Relative to the file
```yaml
spack:
  repos:
  - local_dir/my_repository
```
Which will refer to a repository like this in the directory where `spack.yaml` lives:
```
env/
  spack.yaml          <-- the config file above
  local_dir/
    my_repository/    <-- this repository
      repo.yaml
      packages/
```
2. Relative to the environment
```yaml
spack:
  repos:
  - $env/local_dir/my_repository
```
Both of these would refer to the same directory, but they differ for included files. For example, if you had this layout:
```
env/
  spack.yaml
  repository/
  includes/
    repos.yaml
    repository/
```
And this `spack.yaml`:
```yaml
spack:
  include: includes/repos.yaml
```
Then, these two `repos.yaml` files are functionally different:
```yaml
repos:
- $env/repository # refers to env/repository/ above
repos:
- repository # refers to env/includes/repository/ above
```
The $env variable will not be evaluated if there is no active environment. This generally means that it should not be used outside of an environment's spack.yaml file. However, if other aspects of your workflow guarantee that there is always an active environment, it may be used in other config scopes.
* Allow the bootstrapping of clingo from sources
Allow python builds with system python as external
for MacOS
* Ensure consistent configuration when bootstrapping clingo
This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.
* Github actions: test clingo with bootstrapping from sources
* Add command to inspect and clean the bootstrap store
Prevent users from setting the install tree root to the bootstrap store
* clingo: documented how to bootstrap from sources
Co-authored-by: Gregory Becker <becker33@llnl.gov>
If a user creates a wrapper for the ifx binary called ifx_orig,
this causes the ifx --version command to produce:
$ ifx --version
ifx_orig (IFORT) 2021.1 Beta 20201113
Copyright (C) 1985-2020 Intel Corporation. All rights reserved.
The regex for ifx currently expects the output to begin with
"ifx (IFORT)..." so the wrapper would not be detected as ifx. This
PR removes the need for the static "ifx" string which allows wrappers
to be detected as ifx.
In general, the Intel compiler regexes do not include the invoked
executable name (i.e., ifort, icc, icx, etc.), so this is not
expected to cause any issues.
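For illustration, a relaxed pattern that keys on the `(IFORT) <version>` portion rather than a leading literal `ifx` (not necessarily the exact regex Spack uses):
```python
import re

version_regex = re.compile(r"\(IFORT\)\s+([\d.]+)")
match = version_regex.search("ifx_orig (IFORT) 2021.1 Beta 20201113")
print(match.group(1))  # -> 2021.1
```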
* make `spack fetch` work with environments
* previously: `spack fetch` required the explicit statement of
the specs to be fetched, even when in an environment
* now: if no specs are provided to `spack fetch`, we check whether
an environment is active and, if so, fetch all of its
uninstalled specs.
When using an external package with the old concretizer, all
dependencies of that external package were severed. This was not
performed bidirectionally though, so for an external package W with
a dependency on Z, if some other package Y depended on Z, Z could
still pull properties (e.g. compiler) from W since it was not
severed as a parent dependency.
This performs the severing bidirectionally, and adds tests to
confirm expected behavior when using config from DAG-adjacent
packages during concretization.
This allows for quickly configuring a spack install/env to use upstream packages by default. This is particularly important when upstreaming from a set of officially supported spack installs on a production cluster. By configuring such that package preferences match the upstream, you ensure maximal reuse of existing package installations.
Fixes for gitlab pipelines
* Remove accidentally retained testing branch name
* Generate pipeline w/out debug mode
* Make jobs interruptible for auto-cancel pending
* Work around concretization conflicts
* Support clingo when used with cffi
Clingo recently merged in a new Python module option based on cffi.
Compatibility with this module requires a few changes to spack - it does not automatically convert strings/ints/etc to Symbol and clingo.Symbol.string throws on failure.
* manually convert str/int to clingo.Symbol types
* catch stringify exceptions
* add job for clingo-cffi to Spack CI
* switch to potassco-vendored wheel for clingo-cffi CI
* on_unsat argument when using cffi
* Spec.splice feature
Construct a new spec with a dependency swapped out. Currently can only swap dependencies of the same name, and can only apply to concrete specs.
This feature is not yet attached to any install functionality, but will eventually allow us to "rewire" a package to depend on a different set of dependencies.
Docstring is reformatted for git below
Splices dependency "other" into this ("target") Spec, and returns the result as a concrete Spec.
If transitive, then other and its dependencies will be extrapolated to a list of Specs and spliced in accordingly.
For example, let there exist a dependency graph as follows:
```
  T
  | \
  Z<-H
```
In this example, Spec T depends on H and Z, and H also depends on Z.
Suppose, however, that we wish to use a differently-built H, known as H'. This function will splice in the new H' in one of two ways:
1. transitively, where H' depends on the Z' it was built with, and the new T* also directly depends on this new Z', or
2. intransitively, where the new T* and H' both depend on the original Z.
Since the Spec returned by this splicing function is no longer deployed the same way it was built, any such changes are tracked by setting the build_spec to point to the corresponding dependency from the original Spec.
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
If you install packages using spack install in an environment with
complex spec constraints, and the install fails, you may want to
test out the build using spack build-env; one issue (particularly
if you use concretization: together) is that it may be hard to pass
the appropriate spec that matches what the environment is
attempting to install.
This updates the build-env command to default to pulling a matching
spec from the environment rather than concretizing what the user
provides on the command line independently.
This makes a similar change to spack cd.
If the user-provided spec matches multiple specs in the environment,
then these commands will now report an error and display all
matching specs (to help the user specify).
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* Improve error message for inconsistencies in package.py
Sometimes directives refer to variants that do not exist.
Make it such that:
1. The name of the variant
2. The name of the package which is supposed to have
such variant
3. The name of the package making this assumption
are all printed in the error message for easier debugging.
* Add unit tests
The signature for configure_args in the template for new
RPackage packages was incorrect (different than what is
defined and used in lib/spack/spack/build_systems/r.py).
See issue #21774.
Keep spack.store.store and spack.store.db consistent in unit tests
* Remove calls to monkeypatch for spack.store.store and spack.store.db:
tests that used these called one or the other, which led to
inconsistencies (the tests passed regardless but were fragile as a
result)
* Fixtures making use of monkeypatch with mock_store now use the
updated use_store function, which sets store.store and store.db
consistently
* subprocess_context.TestState now serializes and
restores spack.store.store (without the monkeypatch changes this
would have created inconsistencies)