Since #34821 we are annotating virtual dependencies on
DAG edges, and reconstructing virtuals in memory when
we read a concrete spec from previous formats.
Therefore, we can remove a TODO in asp.py, and rely on
"virtual_on_edge" facts to be imposed.
Computing str(spec) is faster than computing hash(spec), and
since all the abstract specs we deal with come from user configuration,
they cannot cover DAG structures that are captured by hash() but not
by str().
Delay lookup for abstract hashes until concretization time, instead of
until Spec comparison. This has a few advantages:
1. `satisfies` / `intersects` etc don't always know where to resolve the
abstract hash (in some cases it's wrong to look in the current env,
db, buildcache, ...). Better to let the call site dictate it.
2. Allows search by abstract hash without triggering a database lookup,
which could cause quadratic complexity issues (accidental nested loop
during search)
3. Simplifies queries against the buildcache: they can now use Spec
instances instead of strings.
The rules are straightforward:
1. a satisfies b when b's hash is a prefix of a's hash
2. a intersects b when either a's hash is a prefix of b's hash or
vice versa
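A minimal sketch of these two checks (hypothetical helper functions, not the actual `Spec` methods):
```python
def hash_satisfies(a_hash: str, b_hash: str) -> bool:
    # a satisfies b when b's (possibly abbreviated) hash is a prefix of a's hash
    return a_hash.startswith(b_hash)


def hash_intersects(a_hash: str, b_hash: str) -> bool:
    # a intersects b when either hash is a prefix of the other
    return a_hash.startswith(b_hash) or b_hash.startswith(a_hash)
```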
The median length of this list is 1. For reasons I don't know, `.sort()`
still likes to call the key function.
This saves ~9% of total database read time, and the number of calls
goes from 5305 -> 1715.
* Do not impose provider conditions if the node is not a provider
fixes #39455
When a node can be a provider of a spec, but is not selected as
a provider, we should not be imposing provider conditions on the
virtual.
* Adjust the integrity constraint, by using the correct atom
* Add "only_clingo", "only_original" and "not_on_windows" markers
* Modify tests to use the "not_on_windows" marker
* Mark tests that run only with clingo
* Mark tests that run only with the original concretizer
To avoid paying the cost of setup and of a full grounding again,
move cycle detection into a separate program and check first if
the solution has cycles.
If it has, ground only the integrity constraint preventing cycles
and solve again.
The "concretizer" section has been extended with a "duplicates:strategy"
attribute, that can take three values:
- "none": only 1 node per package
- "minimal": allow multiple nodes opf specific packages
- "full": allow full duplication for a build tool
This refactor introduces extra indices for the triggers and
effects of a condition, so that the corresponding clauses
are evaluated once for every condition they apply to.
All the solution modes we use imply that we have to solve for all
the literals, except for "when possible".
Here we remove a minimization on the number of literals not
solved, and emit directly a fact when a literal *has* to be
solved.
Introduce the concept of "condition sets", i.e. the set of packages on which
a package can require / impose conditions. This currently maps to the link/run
sub-dag of each package + its direct build dependencies.
Parametrize the "condition" and "requirement" logic to multiple nodes.
So far the encoding has a single ID per package, i.e. all the
facts will be node(0, Package). This sets the stage for
extending this logic and having multiple nodes from the same
package in a DAG.
Each fact that is deduced from package rules, and starts with
a bare package atom, is transformed into a "facts" atom containing
a nested function.
For instance, we transform
version_declared(Package, ...) -> facts(Package, version_declared(...))
This allows us to clearly mark facts that represent a rule on the package,
and will be of help later when we'll have to distinguish the cases where
the atom "Package" refers to package rules rather than to a node in
the DAG.
Windows executable paths can have spaces in them, which was leading to
errors when constructing Executable objects: the parser was intended
to handle cases where users could provide an executable along with one
or more space-delimited arguments.
* Executable now assumes that it is constructed with a string argument
that represents the path to the executable, but no additional arguments.
* Invocations of Executable.__init__ that depended on this have been
updated (this includes the core, tests, and one instance of builtin
repository package).
* The error handling for failed invocations of Executable.__call__ now
includes a check for whether the executable name/path contains a
space, to help users debug cases where they (now incorrectly)
concatenate the path and the arguments.
* The module-level skip for tests in `cmd.install` on Windows is removed.
A few classes of errors still persist:
* Cdash tests are not working on Windows
* Tests for failed installs are also not working (this will require
investigating bugs in output redirection)
* Environments are not yet supported on Windows
Overall, though, this enables testing of most basic uses of "spack install".
* Git repositories cached for version lookups were using a layout that
mimicked the URL as much as possible. This was useful for listing the
cache directory and understanding what was present at a glance, but
the paths were overly long on Windows. On all systems, the layout is
now a single directory based on a hash of the Git URL and is shortened
(which ensures a consistent and acceptable length, and also avoids
special characters).
* In particular, this removes util.url.parse_git_url and its associated
test, which were used exclusively for generating the git cache layout
* Bootstrapping is now enabled for unit tests on Windows
#36770 added git as a dependency to `setuptools-scm`. This in turn makes `git` a
transitive dependency for our bootstrapping process.
Since `git` may take a long time to build, and is found on most systems, try to
detect it as an external.
This makes the name of the global variable representing
the repository currently in use uppercase. Doing so is advised
by pylint rules, and helps to identify where the global is used.
* Prefix conflict messages with package name
This patch prefixes all conflict messages with the package name to
alleviate what was otherwise a very manual process. Note that this patch
is a one line change but has a fairly outsized impact.
* same for requires directive
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
* Ensure that all variants have a description
* Update mock packages too
* Fix test invocations
* Black fix
* mgard: update variant descriptions
* flake8 fix
* black fix
* Add to audit tests
* Relax type hints
* Older Python support
* Undo all changes to mock packages
* Flake8 fix
This PR extracts two responsibilities from the `Database` class:
1. Managing locks for prefixes during an installation
2. Marking installation failures
and pushes them into their own classes (`SpecLocker` and `FailureMarker`). These responsibilities are also pushed up into the `Store`, leaving `Database` with only the duty of managing `index.json` files.
`SpecLocker` classes no longer share a global list of locks; locks are now per instance. Their identifier is simply `(dag hash, package name)`, and not the spec prefix path, to avoid circular dependencies across Store / Database / Spec.
FastPackageChecker.modified_since should use a default number < 0
When the repo cache does not exist, Spack uses mtime 0. This causes the repo
cache not to be generated when the repo has mtime 0.
Some popular package managers, such as Spack itself, normalize mtime to 0 for
reproducible tarballs. So when installing Spack with Spack from a buildcache, the
repo cache is not generated.
Also add some type hints.
* Inform mypy that tty.die is noreturn
* avoid temporary allocation in env
* update spack buildcache save-specfile
* fix spack buildcache check/download/get-buildcache-name
- ensure that required args and mutually exclusive ones are marked as
such in argparse for better error messages
- deprecate --spec-file everywhere
- use disambiguate for better error messages
* CI: Refactor ci reproducer
* Autostart container
* Reproducer paths match CI paths
* Generate start scripts for docker and reproducer
* CI: Add interactive and gpg options to reproduce-build
* Interactive will determine if the docker container persists
after running reproduction.
* GPG path/url allow downloading GPG keys needed for binary
cache download validation. This is important for running
reproducer for protected CI jobs.
* Add exit_on_failure option to CI scripts
* CI: Add runtime option for reproducer
- Run `mkdirp` on `spec.prefix`
- Extract directly into `spec.prefix`
1. No need for `$store/tmp.xxx` where we extract the tarball directly, pray that it has one subdir `<name>-<version>-<hash>`, and then `rm -rf` the package prefix followed by `mv`.
2. No need to clean up this temp dir in `spack clean`.
3. Instead figure out package directory prefix from the tarball contents, and strip the tarinfo entries accordingly (kinda like tar --strip-components but more strict)
- Set package dir permissions
- Don't error during error handling when files cannot be removed
- No need to "enrich" spec.json with this tarball-toplevel-path
After this PR, we can in fact tarball packages relative to `/` instead of `spec.prefix/..`, which makes it possible to use Spack tarballs as container layers, where relocation is impossible, and rootfs tarballs are expected.
`spack buildcache list` did not have a way to display the namespace of
packages in the buildcache. This PR adds that functionality.
For consistency's sake, it moves the `-N/--namespace` arg definition to
the `common/arguments.py` and modifies `find`, `solve`, `spec` to use
the common definition.
Previously, `find` was using `--namespace` (singular) to control whether
to display the namespace (it doesn't restrict the search to that
namespace). The other commands were using `--namespaces` (plural). For
backwards compatibility and for consistency with `--deps`, `--tags`,
etc, the plural `--namespaces` was chosen. The argument parser ensures
that `find --namespace` will continue to behave as before.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
* Add rewrite of spack checksum to include --verify and better add versions to package.py files
* Fix formatting and remove unused import
* Update checksum unit-tests to correctly test multiple versions and add to package
* Remove references to latest in stage.py
* Update bash-completion scripts to fix unit tests failures
* Fix docs generation
* Remove unused url_dict argument from methods
* Reduce chance of redundant remote_versions work
* Add print() before tty.die() to increase error readability
* Update version regular expression to allow for multi-line versions
* Add a few unit tests to improve test coverage
* Update command completion
* Add type hints to added functions and fix a few py-lint suggestions
* Add @no_type_check to prevent mypy from failing on pkg.versions
* Add type hints to format.py and fix unit test
* Black format lib/spack/spack/package_base.py
* Attempt ignoring type errors
* Add optional dict type hint and declare versions in PackageBase
* Refactor util/format.py to allow for url_dict as an optional parameter
* Directly reference PackageBase class instead of using TypeVar
* Fix comment typo
---------
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
The sanitization function is completely bogus as it tries to replace /
on unix after ... splitting on it. The way it's implemented is very
questionable: the input is a file name, not a path. It doesn't make
sense to interpret the input as a path and then make the components
valid -- you'll interpret / in a filename as a dir separator.
It also fails to deal with path components that contain just unsupported
characters (resulting in an empty component).
The correct way to deal with this is to have a function that takes a
potential file name and replaces unsupported characters.
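A minimal sketch of that approach (hypothetical helper, not the exact code added in this change):
```python
import re


def sanitize_filename(filename: str) -> str:
    # replace characters that are commonly unsupported in file names;
    # the set of forbidden characters is larger on Windows than on POSIX
    return re.sub(r'[<>:"/\\|?*]', "-", filename)
```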
I'm not going to fix the other issues on Windows, such as reserved file
names, but left a note, and hope that @johnwparent can fix that
separately.
(Obviously we wouldn't have this problem at all if we just fixed the
filename in a safe way instead of trying to derive something from
the url; we could use the content digest when available for example)
* commands: provide more information to Command
* fish: Add script to generate fish completion
* fish: auto prepend `spack` command to avoid duplication
* fish: improve completion generation code readability
* commands: replace match-case with if-else
* fish: fix optspec variable name prefix
* fish: fix return value in get_optspecs
* fish: fix return value in get_optspecs
* format: split long line and trim trailing space
* bugfix: replace f-string with interpolation
* fish: complete more specs and some fixes
* fish: complete hash specs starting with /
* fish: improve compatibility
* style: trim trailing whitespace
* commands: add fish to update args and update tests
* commands: add fish completion file
* style: merge imports
* fish: source completion in setup-env
* fish: caret only completes dependencies
* fish: make sure we always get same order of output
* fish: spack activate
only show installed packages that have extensions
* fish: update completion file
* fish: make dict keys sorted
* Blacken code
* Fix bad merge
* Undo style changes to setup-env.fish
* Fix unit tests
* Style fix
* Compatible with fish_indent
* Use list for stability of order
* Sort one more place
* Sort more things
* Sorting unneeded
* Unsort
* Print difference
* Style fix
* Help messages need quotes
* Arguments to -a must be quoted
* Update types
* Update types
* Update types
* Add type hints
* Change order of positionals
* Always expand help
* Remove shared base class
* Fix type hints
* Remove platform-specific choices
* First line of help only
* Remove unused maps
* Remove suppress
* Remove debugging comments
* Better quoting
* Fish completions have no double dash
* Remove test for deleted class
* Fix grammar in header file
* Use single quotes in most places
* Better support for remainder nargs
* No magic strings
* * and + can also complete multiple
* lower case, no period
---------
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The user store is lazily evaluated. The change
in #38975 made it such that the first evaluation
was happening in the middle of swapping to user
configuration.
Ensure we construct the user store before that.
Use curly braces instead of quotes to enclose values or text in Tcl
modulefiles. Within curly braces, Tcl special characters like [, ] or $
are treated verbatim, whereas they are evaluated within quotes.
Curly braces are the Tcl-recommended way to enclose verbatim content [1].
Note: if curly brace characters are used within content, they must be
balanced. This point has been checked against the current repository and no
unbalanced curly braces have been spotted.
Fixes #24243
[1] https://wiki.tcl-lang.org/page/Tcl+Minimal+Escaping+Style
* Fetching patches wouldn't result in acquiring a stage lock during install
* The installer would acquire a stage lock *after* fetching instead of
before, leading to races
* The name of the stage for patches was random, so on build failure
(where stage dirs are not removed), these directories would continue
to exist after a second successful install.
* There was this redundant "composite fetch" object -- there's already
a composite stage. Remove this.
* For some reason we do *double* shasum validation of patches, before
and after compression -- that's just too much? I removed it.
Spack heuristically adds `<install prefix>/lib` and `<install prefix>/lib64` as rpath entries, as it doesn't know what the install dir is going to be ahead of the build. This PR cleans up non-existing, absolute paths[^1], which
1. avoids redundant stat calls at runtime
2. drops redundant rpaths in `patchelf`, making it relocatable -- you don't need patchelf recursively then.
[^1]: It also removes relative paths not starting with `$` (so, `$ORIGIN/../lib` is retained -- we _could_ interpolate `$ORIGIN`, but that's hard to get right when symlinks have to be taken into account). Relative paths _are_ supported in glibc, but are relative to _the current working directory_, which is madness, and it would be better to drop those paths.
A LazyReference object is a reference to an attribute of a
lazily evaluated singleton. Its only purpose is to let developers
use shorter names to refer to such an attribute.
This class does more harm than good, as it obfuscates the fact
that we are using the attribute of a global object. Also, it can easily
go out of sync with the singleton it refers to if, for instance, the
singleton is updated but the references are not.
This commit removes the LazyReference class entirely, and accesses
the attributes explicitly, going through the global object to which
they are attached.
These tests now work without any changes to core. Furthermore, it is
surprising that they had to be disabled (at least, as long as the
installer.py tests are run on Windows: these tests are more basic
and their functionality would have been exercised automatically).
Without --allow-root, Spack cannot push binaries that contain absolute
paths in them. This flag is almost always needed, so there is no point in
requiring users to spell it out.
Even without --allow-root, rpaths would still have to be patched, so the
flag is not there to guarantee binaries are not modified on install.
This commit makes --allow-root the default, and drops the code
required for it. It also deprecates `spack buildcache preview`, since
the command made sense only with --allow-root.
As a side effect, Spack no longer depends on binutils for relocation
Add support for conflict directives in Lua modulefiles, as was done for
Tcl modulefiles.
Note that conflicts are correctly honored on Lmod and Environment
Modules <4.2 only if mutually expressed on both modulefiles that
conflict with each other.
Migrate conflict code from Tcl-specific classes to the common part. Add
tests for Lmod and split the conflict test case in two.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* When using system tools to unpack a .gz file, the input file needs a
different name than the output file. Normally, we generate this new
name by stripping off the .gz extension off of the file name.
This was not sufficient if the file name did not have an extension,
so we temporarily rename the file in that case.
* When using system tar utility to untar on Windows, we were (erroneously)
skipping the actual untar step if the filename was lacking a .tar
extension
* For foo.txz, we were not changing the extension of the decompressed file
(i.e. we would decompress foo.txz to foo.txz). This did not cause any
problems, but is confusing, so has been updated such that the output
filename reflects its decompressed state (i.e. foo.tar).
* Added test for strip_compression_extension
* Update test_native_unpacking to test each archive type with and without
an extension as part of the file name (i.e. we test "foo.tar.gz", but
also make sure we decompress properly if it is named "foo").
Running `spack test run <python package>` resulted in the error
```
'str' object is not callable
```
because the python executable was not set correctly.
Lock objects can now be instantiated independently,
without being tied to the global configuration. The
same is true for database and store objects.
The database __init__ method has been simplified to
take a single lock configuration object. Some common
lock configurations (e.g. NO_LOCK or NO_TIMEOUT) have
been named and are provided as globals.
The use_store context manager keeps the configuration
consistent by pushing and popping an internal scope.
It can also be tuned by passing extra data to set up
e.g. upstreams or anything else that might be related
to the store.
* Bugfix: spack.yaml concretizer:unify needs to be read and used
* Optional: add environment test to ensure configuration scheme is used
* Activate environment in unit tests
A more proper solution would be to keep
an environment instance configuration as
an attribute, but that is a bigger refactor
* Delay evaluation of Environment.unify
* Slightly simplify unit tests
---------
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Allow the following formats:
```yaml
mirrors:
name: <url>
```
```yaml
mirrors:
name:
url: s3://xyz
access_pair: [x, y]
```
```yaml
mirrors:
name:
fetch: http://xyz
push:
url: s3://xyz
access_pair: [x, y]
```
And reserve two new properties to indicate the mirror type (e.g.
mirror.spack.io is a source mirror, not a binary cache)
```yaml
mirrors:
spack-public:
source: true
binary: false
url: https://mirror.spack.io
```
A few packages have version directives evaluated
within if statements, conditional on the value of
`platform.platform()`.
Sometimes there are no cases for e.g. platform=darwin and that
causes a lot of spurious failures with version existence
audits.
This PR allows expressing conditions to skip version
existence checks in audits and avoid these spurious reports.
### Rationale
While working on #29549, I noticed a lot of inconsistencies in our argparse help messages. This is important for fish where these help messages end up as descriptions in the tab completion menu. See https://github.com/spack/spack/pull/29549#issuecomment-1627596477 for some examples of longer or more stylized help messages.
### Implementation
This PR makes the following changes:
- [x] Help messages start with a lowercase letter
- [x] Help messages do not end with a period
- [x] The first line of a help message is short and simple;
longer text is separated by an empty line
- [x] "help messages do not use triple quotes"
"""(except docstrings)"""
- [x] Parentheses not needed for string concatenation inside function call
- [x] Remove "..." "..." string concatenation leftover from black reformatting
- [x] Remove Sphinx argument docs from help messages
The first 2 choices aren't very controversial, and are designed to match the syntax of the `--help` flag automatically added by argparse. The 3rd choice is more up for debate, and is designed to match our package/module docstrings. The 4th choice is designed to avoid excessive newline characters and indentation. We may actually want to go even further and disallow docstrings altogether.
### Alternatives
Choice 3 in particular has a lot of alternatives. My goal is solely to ensure that fish tab completion looks reasonable. Alternatives include:
1. Get rid of long help messages, only allow short simple messages
2. Move longer help messages to epilog
3. Separate by 2 newline characters instead of 1
4. Separate by period instead of newline. First sentence goes into tab completion description
The number of commands with long help text is actually rather small, and is mostly relegated to `spack ci` and `spack buildcache`. So 1 isn't actually as ridiculous as it sounds.
Let me know if there are any other standardizations or alternatives you would like to suggest.
Refactor `TermTitle` into `InstallStatus` and use it to show progress
information both in the terminal title and inline. This also
turns on the terminal title status by default.
The inline output will look like the following after this change:
```
==> Installing m4-1.4.19-w2fxrpuz64zdq63woprqfxxzc3tzu7p3 [4/4]
```
People frequently ask us how to pipe `spack find` output to other commands, and we tell
them to do things like this:
```console
$ spack find --format "/{hash}" | spack uninstall -ay
```
Sometimes users don't know about hash references and come up with potentially ambiguous
formulations like this:
```console
spack find --format {name}@{version}%{compiler} | spack uninstall -ay
```
Since this is a common enough thing to want to do, and to make it more obvious how, this
PR adds a `-H` / `--hashes` as a shortcut, so you can now just do:
```console
spack find -H | spack uninstall -ay
```
`"%s" % spec` formats the spec with deps included, which produces sometimes KBs
of data and is slow to run in pure Python. It can delay otherwise very short-lived
read/write locks on the database.
Discovered in #38762 where profile output showed about 2 seconds is spent in
`spec.format`, which is significant overhead when using multiprocessing to install
from binary cache in parallel (installation often takes <5s for small packages). With
this change, `spec.format` no longer shows up in profile output.
(This line hasn't changed since Spack v0.9 ;p)
* move format() call to custom NoSuchSpecError exception
* add a comment saying why, so we can eventually change `Spec.__str__`
1. Fix O(n^2) iteration in `_get_overwrite_specs`
2. Early exit `get_by_hash` on full hash
3. Fix O(n^2) double lookup in `all_matching_specs` with hashes
4. Fix some legibility issues
* When installing a package Spack will attempt to set group permissions on
the install prefix even when the configuration does not specify a group.
Co-authored-by: David Gomez <dvdgomez@users.noreply.github.com>
Move the logic checking which mirrors have the specs we need closer
to where that information is needed. Also update the staging summary
to contain a brief description of why we scheduled or pruned each
job. If a spec was found on any mirrors, regardless of whether
we scheduled a job for it, print those mirrors.
PowerShell requires explicit shell and env support in Spack.
This is due to the distinct differences in shell interactions between
cmd and pwsh. Add a doskey in pwsh piping 'spack' commands to a
powershell script similar to the sh function 'spack'. Add
support for PowerShell-specific shell interactions from Spack
(set/unset shell variables).
* Support hardlinks/junctions on Windows systems without developer
mode enabled
* Generally, use of llnl.util.symlink.symlink is preferred over
os.symlink since it handles this automatically
* Generally an error is now reported if a user attempts to create a
symlink to a file that does not exist (this was previously allowed
on Linux/Mac).
* One exception to this: when Spack installs files from the source
into their final prefix, dangling symlinks are allowed (on
Linux/Mac - Windows does not allow this in any circumstance).
The intent behind this is to avoid generating failures for
installations on Linux/Mac that were succeeding before.
* Because Windows is strict about forbidding dangling symlinks,
`traverse_tree` has been updated to skip creating symlinks if they
would point to a file that is ignored. This check is not
transitive (i.e., a symlink to a symlink to an ignored file would
not be caught appropriately)
* Relocate function: resolve_link_target_relative_to_the_link
(this is not otherwise modified)
Co-authored-by: jamessmillie <smillie@txcorp.com>
Update the list of excluded variables in the `from_sourcing_file` function to
cover all variables specific to Environment Modules or Lmod. Specifically, add
the variables related to the definition of the `module()`, `ml()`
and `_module_raw()` Bash functions.
Fixes #13504
Update the `env.set` command and underlying `SetEnv` object to add the `raw`
boolean attribute. `raw` is optional and set to False by default. When
set to True, value formatting is skipped for the object when generating
environment modifications.
With this change it is now possible to define environment variables
whose value contains variable reference syntax (like `{foo}` or `{}`)
that should be set as-is.
Fixes #29578
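A short sketch of the intended usage (hypothetical package code; only the `raw` keyword comes from the description above):
```python
def setup_run_environment(self, env):
    # with raw=True the braces are kept verbatim instead of being treated
    # as a format placeholder
    env.set("MY_TEMPLATE", "{name}-{version}", raw=True)
```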
Spack flags supplied by users should supersede flags from package build systems and
other places in Spack. However, Spack currently adds user-supplied flags to the
beginning of the compile line, which means that in some cases build system flags will
supersede user-supplied ones.
The right place to add a flag to ensure it has highest precedence for the compiler really
depends on the type of flag. For example, search paths like `-L` and `-I` are examined
in order, so adding them first is highest precedence. Compilers take the *last* occurrence
of optimization flags like `-O2`, so those should be placed *after* other such flags. Shim
libraries with `-l` should go *before* other libraries on the command line, so we want
user-supplied libs to go first, etc.
`lib/spack/env/cc` already knows how to split arguments into categories like `libs_list`,
`rpath_dirs_list`, etc., so we can leverage that functionality to merge user flags into
the arg list correctly.
The general rules for injected flags are:
1. All `-L`, `-I`, `-isystem`, `-l`, and `*-rpath` flags from `spack_flags_*` appear
before their regular counterparts.
2. All other injected flags are ordered after their regular counterparts,
i.e. `other_flags` before `spack_flags_other_flags`
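As a toy illustration of these rules (made-up flags, not actual `cc` output): if the build system passes `-I/pkg/include -O2` and the user supplies `-I/custom/include -O3` through Spack, the merged command line puts the injected include path first and the injected optimization flag last:
```console
cc -I/custom/include -I/pkg/include ... -O2 -O3 ...
```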
- [x] Generalize argument categorization into its own function in the `cc` shell script
- [x] Apply the same splitting logic to injected flags and flags from the original compile line.
- [x] Use the resulting flag lists to merge user- and build-system-supplied flags by category.
- [x] Add tests.
Signed-off-by: Andrey Parfenov <andrey.parfenov@intel.com>
Co-authored-by: iermolae <igor.ermolaev@intel.com>
The `unparser` that Spack uses for package hashing had several tweaks to ensure compatibility
with Python 2.7:
1. Currently, the unparser automatically moves `*` and `**` args to the end to preserve
compatibility with `python@:3.4`
2. `print a, b, c` statements and single-tuple `print((a, b, c))` function calls were
remapped to `print(a, b, c)` in the unparsed output for consistency across versions.
(1) is causing issues in our tests because a recent patch to the Python source code
(https://github.com/python/cpython/pull/102953/files#diff-7972dffec6674d5f09410c71766ac6caacb95b9bccbf032061806ae304519c9bR813-R823)
has a `**` arg before a named argument, and we round-trip the core python source code
as a test of our unparser. This isn't actually a break with our consistent unparsing -- it's still
consistent, the python source just doesn't unparse to the same thing anymore. It does make
it harder to test, so it's not worth maintaining the Python2-specific stuff anymore.
Since we only support `python@3.6:`, this PR removes (1) and (2) from the unparser, but keeps
one last tweak for unicode AST inconsistencies, as it's still needed for Python 3.5-3.7.
This fixes the CI error we've been seeing on `python@3.11.4` and `python@3.10.12`. Again, that
bug exists only in the test system and doesn't affect our canonical hashing of Python code.
* DependencySpec: add virtuals attribute on edges
This works for both the new and the old concretizer. Also,
added type hints to involved functions.
* Improve virtual reconstruction from old format
* Reconstruct virtuals when reading from Cray manifest
* Reconstruct virtual information on test dependencies
Update Tcl modulefile template to use the `depends-on` command to
autoload modules if Lmod is the current module tool.
Autoloading modules with the `module load` command in a Tcl modulefile does
not work well with Lmod to some extent. An attempt to unload then load the
designated module is performed each time such a command is encountered. This
may lead to a load storm that may not end correctly with a large number of
module dependencies.
The `depends-on` command should be used for Lmod instead of `module load`,
as it checks whether the module is already loaded, and does not attempt to
reload it.
The Lua modulefile template already uses the `depends_on` command to autoload
dependencies. Thus it is already assumed that, to use Lmod with Spack, it
must support the `depends_on` command (version 7.6+).
Environment Modules copes well with the `module load` command to autoload
dependencies (version 3.2+). The `depends-on` command is supported starting
with version 5.1 (as an alias of the `prereq-all` command), which was
released last year.
This change introduces a test to determine if the module tool that currently
evaluates the modulefile is Lmod. If so, autoload dependencies are defined
with the `depends-on` command. Otherwise the `module load` command is used.
The test is based on the `LMOD_VERSION_MAJOR` environment variable, which is
set by Lmod starting with version 5.1.
Fixes #36764
When interpreting local paths as relative URL endpoints, they were
formatted as Windows paths on Windows (i.e. with '\'). URLs should
always be POSIX-style.
Update modulefile templates to append a trailing delimiter to the MANPATH
environment variable, if the modulefile sets it.
With a trailing delimiter at the end of MANPATH's value, man will search
the system man pages after searching the specific paths set.
Using append-path/append_path to add this element, the module tool
ensures it is appended only once. When the modulefile is unloaded, the
number of append attempts is decreased, thus the trailing delimiter is
removed only if this number equals 0.
Disclaimer: no path element should be appended to MANPATH by generated
modulefiles. It should always be prepended to ensure this variable's
value ends with the trailing delimiter.
Fixes #11355.
* Improve lib/spack/spack/test/cmd/compiler.py
* Use "tmp_path" in the "mock_executable" fixture
* Return a pathlib.Path from mock_executable
* Fix mock_executable fixture on Windows
"mock_gcc" was very similar to mock_executable, so use the latter to reduce code duplication
* Remove wrong compiler cache, fix compiler removal
fixes #37996
_CACHE_CONFIG_FILES was both unneeded and wrong, if called
subsequently with different scopes.
Here we remove that cache, and we fix an issue with compiler
removal triggered by having the same compiler spec in multiple
scopes.
* e4s cray ci stack
* e4s ci: add cray
* add zen4 tag
* WIP: new definitions just for cray
* updates
* remove ci signing job overrride, not necessary
* echo $PATH and show modules loaded
* add mirror
* add external def for cray-libsci
* comment out quantum-espresso
* use /etc/protected-runner as key path
* cray ci stack: do not remove tags: [spack, public]
* make cray stack composable
* generate job should run on public tagged runner, override default config:install_tree:root
* CI: Use relative path in default script
* CI: Use relative includes paths for shell runners
* Use concrete_env_dir for relpath
* ml-darwin-aarch64-mps: jax has bazel codesign issue
---------
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
#37592 updated cached cmake packages to set CMAKE_CUDA_ARCHITECTURES.
The condition `if archs != "none"` led to `CMAKE_CUDA_ARCHITECTURES=none`
when cuda_arch=none (incorrect check on the value of a multi-valued
variant), i.e. CMAKE_CUDA_ARCHITECTURES was always set. This PR updates
the condition to `if archs[0] != "none"` to ensure CMAKE_CUDA_ARCHITECTURES
is only set if cuda_arch is not none (which seems to be the pattern used
in other packages).
This does the same for HIP (although in general ROCmPackage disallows
amdgpu_target=none when +rocm).
* Add CMake options for building with CUDA/HIP support to
CachedCMakePackages (intended to reduce duplication across packages
building with +hip/+cuda and using CachedCMakePackage)
* Define generic variables like CMAKE_PREFIX_PATH for
CachedCMakePackages (so that a user may invoke "cmake" themselves
without needing to set them on the command line).
* Make `lbann` a CachedCMakePackage.
Co-authored-by: Chris White <white238@llnl.gov>
fa7719a changed syntax for specifying exact versions, which are
required for some compiler specs (including those read as part
of parsing a Cray manifest). This fixes that and also makes a
couple other improvements to manifest parsing.
* Instantiate compiler specs with exact versions (fixes #37893)
* fix slingshot network detection (CPE 22.10+ has libcxi.so
in /usr/lib64)
* "spack external find": add arg to ignore default dir for cray
manifests
Change default naming scheme for tcl modules for a more user-friendly
experience.
Change from flat projection to "per software name" projection.
The flat naming scheme restrains module selection capabilities. The
`{name}/{version}...` scheme makes it possible to use user-friendly
mechanisms:
* implicit defaults (`module load git`)
* extended default (`module load git/2`)
* advanced version specifiers (`module load git@2:`)
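The new default roughly corresponds to a projection of the following form in `modules.yaml` (illustrative snippet; the actual default may include additional tokens):
```yaml
modules:
  default:
    tcl:
      projections:
        all: '{name}/{version}'
```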
Note the win-sdk package is not installable and reports an error
which instructs the user how to add it. Without this fix, a
(more confusing) error occurs before this message can be generated.
"spack build-env" was not generating proper environment variable
definitions on Windows; this commit updates the generated commands
to succeed with batch/PowerShell.
Ensure that spack compiler add/find/list and lists of concrete specs
print the compiler effectively as {compiler.name}{@compiler.version}.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Make it clear that copy-only pipelines are not supported while still
using the deprecated ci config format. Also ensure that the deprecated
stack does not fail on spack pipelines for tags.
* Fix reporting of packageless specs as having no tests
* Add test_test_output_multiple_specs with update to simple-standalone-test (and tests)
* Refactored test status summary; added more tests or checks
MSVC compiler logic was using string parsing to extract the version
from the compiler spec, which was fragile. This broke in #37572, so it has
been fixed and made more robust by using attribute access.
Ensure that requirements `packages:*:require:@x` and preferences `packages:*:version:[x]`
fail concretization when no version defined in the package satisfies `x`. This always holds
except for git versions -- they are defined on the fly.
Two bugs came in from #37438
1. `unify: when_possible` was broken, because of an incorrect assertion. Abstract/concrete
spec pairs were compared against the results that were in the process of being computed,
rather than against the previous results.
2. `unify: true` had an ordering bug that could mix the association between abstract and
concrete specs
- [x] 1 is resolved by creating a lookup from old concrete specs to old abstract specs,
and we use that to associate the "new" concrete specs that happen to be the old
ones with their abstract specs (since those are stripped out for concretization)
- [x] 2 is resolved by combining the new and old abstract as lists instead of combining
them as sets. This is important because `set() | set()` does not make any ordering
promises, even though set ordering is otherwise guaranteed in `python@3.7:`
Spack displays package code context when it shouldn't (e.g., on `FetchError`s)
and doesn't display it when it should (e.g., when errors occur in builder classes).
The line attribution can sometimes be off by one, as well.
- [x] Display package context when errors occur in a subclass of `PackageBase`
- [x] Display package context when errors occur in a subclass of `BaseBuilder`
- [x] Do not display package context when errors occur in `PackageBase`,
`BaseBuilder` or other core code that is not in a `package.py` file.
- [x] Fix off-by-one error for core code (don't subtract one from the line number *unless*
it's in an actual `package.py` file).
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
We currently throw a nasty error if you try to reuse packages from some other namespace
(e.g., OLCF), but we should be able to reuse patched local versions of builtin packages.
Right now the only obstacle to that is that we try to look up virtual info for unknown
namespaces, and we can't get the package from the repo to do that. We *can* assume that
a package with a known namespace is similar, and that its virtual provider information
is reasonably accurate, so we now do that. This isn't 100% accurate, but neither is
relying on the package itself, as it may have gone out of date.
The real solution here is virtual edge information, but this is a stopgap until we have
that.
`spec_clauses()` attempts to look up package information for concrete specs in order to
determine which virtuals they may provide. This fails for renamed/deleted dependencies
of buildcaches and installed packages.
This will eventually be fixed by #35258, which adds virtual information on edges, but we
need a workaround to make older buildcaches usable.
- [x] make an exception for renamed packages and omit their virtual constraints
- [x] add a note that this will be solved by adding virtuals to edges
The concretizer can fail with `reuse:true` if a buildcache or installation contains a
package with a dependency that has been renamed or deleted in the main repo (e.g.,
`netcdf` was refactored to `netcdf-c`, `netcdf-fortran`, etc., but there are still
binary packages with dependencies called `netcdf`).
We should still be able to install things for which we are missing `package.py` files.
`Spec.inject_patches_variant()` was failing this requirement by attempting to look up
the package class for concrete specs. This isn't needed -- we can skip it.
- [x] swap two conditions in `Spec.inject_patches_variant()`
The @= in `spack find` output adds a bit of noise. Remove it as we
did for `spack spec` and `spack concretize`.
This modifies display_specs so it actually covers other places we use that routine, as
well, e.g., `spack buildcache list`.
before:
```
-- linux-ubuntu20.04-aarch64 / gcc@=11.1.0 -----------------------
ofdlcpi libpressio@0.88.0
```
after:
```
-- linux-ubuntu20.04-aarch64 / gcc@11.1.0 -----------------------
ofdlcpi libpressio@0.88.0
```
If a user does not explicitly `--force` the concretization of an entire environment,
Spack will try to reuse the concrete specs that are already in the lockfile.
---------
Co-authored-by: becker33 <becker33@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* gitlab ci: release fixes and improvements
- use rules to reduce boilerplate in .gitlab-ci.yml
- support copy-only pipeline jobs
- make pipelines for release branches rebuild everything
- make pipelines for protected tags copy-only
* gitlab ci: remove url changes used in testing
* gitlab ci: tag mirrors need public key
Make sure that mirrors associated with release branches and tags
contain the public key needed to verify the signed binaries. This
also ensures that when stack-specific mirror contents are copied
to the root, the root mirror has the public key as well.
* review: be more specific about tags, curl flags
* Make the check in ci.yaml consistent with the .gitlab-ci.yml
---------
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
Currently, specs on buildcache mirrors must be referenced by their full description. This PR allows buildcache specs to be referenced by their hashes, rather than their full description.
### How it works
Hash resolution has been moved from `SpecParser` into `Spec`, and now includes the ability to execute a `BinaryCacheQuery` after checking the local store, but before concluding that the hash doesn't exist.
### Side-effects of Proposed Changes
Failures will take longer when nonexistent hashes are parsed, as mirrors will now be scanned.
### Other Changes
- `BinaryCacheIndex.update` has been modified to fail appropriately only when mirrors have been configured.
- Tests of hash failures have been updated to use `mutable_empty_config` so they don't needlessly search mirrors.
- Documentation has been clarified for `BinaryCacheQuery`, and more documentation has been added to the hash resolution functions added to `Spec`.
This PR ensures that we'll get a comprehensible error message whenever an old
version of Spack tries to use a DB or a lockfile that is "too new".
* Fix error message when using a too new DB
* Add a unit-test to ensure we have a comprehensible error message
Add a section to the lock file to track the Spack version/commit that produced
an environment. This should (eventually) enhance reproducibility, though we
do not currently do anything with the information. It just adds to provenance
at the moment.
Changes include:
- [x] adding the version/commit to `spack.lock`
- [x] refactor `spack.main.get_version()`
- [x] fix a couple of environment lock file-related typos
The flags --mirror-name / --mirror-url / --directory were deprecated in
favor of just passing a positional name, url or directory, and letting spack
figure it out.
---------
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Prior to this PR, the HOMEDRIVE environment variable was used to
detect what drive we are operating in. This variable is not available
for service account logins (like what is used for CI), so switch to
extracting the drive from PROGRAMFILES (which is more widely defined).
On Windows, several commonly available system tools for decompression
are unreliable (gz/bz2/xz). This commit refactors `decompressor_for`
to call out to a Windows or Unix-specific method:
* The decompressor_for_nix method behaves the same as before and
generally treats the Python/system support options for decompression
as interchangeable (although avoids using Python's built-in tar
support since that has had issues with permissions).
* The decompressor_for_win method can only use Python support for
gz/bz2/xz, although for a tar.gz it does use system support for
untar (after the decompression step). .zip uses the system tar
utility, and .Z depends on external support (i.e. that the user
has installed 7zip).
A naming scheme has been introduced for the various _decompression
methods:
* _system_gunzip means to use a system tool (and fail if it's not
available)
* _py_gunzip means to use Python's built-in support for decompressing
.gzip files (and fail if it's not available)
* _gunzip is a method that can do either
This is a refactor of Spack's stand-alone test process to be more spack- and pytest-like.
It is more spack-like in that test parts are no longer "hidden" in a package's run_test()
method, and pytest-like in that any package method whose name starts with test_
(i.e., a "test" method) is a test part. We also support the ability to embed test parts in a
test method when that makes sense.
Test methods are now implicit test parts. The docstring is the purpose for the test part.
The name of the method is the name of the test part. The working directory is the active
spec's test stage directory. You can embed test parts using the test_part context manager.
Functionality added by this commit:
* Adds support for multiple test_* stand-alone package test methods, each of which is
an implicit test_part for execution and reporting purposes;
* Deprecates package use of run_test();
* Exposes some functionality from run_test() as optional helper methods;
* Adds a SkipTest exception that can be used to flag stand-alone tests as being skipped;
* Updates the packaging guide section on stand-alone tests to provide more examples;
* Restores the ability to run tests "inherited" from provided virtual packages;
* Prints the test log path (like we currently do for build log paths);
* Times and reports the post-install process (since it can include post-install tests);
* Corrects context-related error message to distinguish test recipes from build recipes.
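A rough sketch of what a stand-alone test looks like under the new convention (hypothetical package; `Executable` is assumed to be available through the usual `spack.package` imports):
```python
from spack.package import *  # assumed, as in any package.py


class Mypkg(Package):
    """Hypothetical package used only to illustrate the new convention."""

    homepage = "https://example.com"
    url = "https://example.com/mypkg-1.0.tar.gz"

    version("1.0", sha256="0" * 64)  # placeholder checksum

    def test_version_flag(self):
        """check that the installed binary reports its version"""
        # the method name is the test part name, the docstring its purpose,
        # and it runs in the active spec's test stage directory
        mypkg = Executable(self.prefix.bin.mypkg)
        mypkg("--version")
```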
fixes #22341
Using double quotes creates issues with shell variable substitutions,
in particular when the manifest has "definitions:" in it. Use single
quotes instead.
Add a "require" directive to packages, which functions exactly like
requirements specified in packages.yaml (uses the same fact-generation
logic); update both to allow making the requirement conditional.
* Packages may now use "require" to add constraints. This can be useful
for something like "require(%gcc)" (where before we had to add a
conflict for every compiler except gcc).
* Requirements (in packages.yaml or in a "require" directive) can be
conditional on a spec, e.g. "require(%gcc, when=@1.0.0)" (version
1.0.0 can only build with gcc).
* Requirements may include a message which clarifies why they are needed.
The concretizer assigns a high priority to errors which generate these
messages (in particular over errors for unsatisfied requirements that
do not produce messages, but also over a number of more-generic
errors).
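A sketch of the directive in a package, following the examples above (the directive name and argument spellings are taken from this description and may not match the final API exactly):
```python
class Foo(Package):
    # always constrain the compiler to gcc
    require("%gcc")

    # version 1.0.0 can only build with gcc
    require("%gcc", when="@1.0.0")
```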
## Version types, parsing and printing
- The version classes have changed: `VersionBase` is removed, there is now a
`ConcreteVersion` base class. `StandardVersion` and `GitVersion` both inherit
from this.
- The public api (`Version`, `VersionRange`, `ver`) has changed a bit:
1. `Version` produces either `StandardVersion` or `GitVersion` instances.
2. `VersionRange` produces a `ClosedOpenRange`, but this shouldn't affect the user.
3. `ver` produces any of `VersionList`, `ClosedOpenRange`, `StandardVersion`
or `GitVersion`.
- No unexpected type promotion, so that `Version(x) == VersionRange(x, x)`
no longer holds.
- `VersionList.concrete` now returns a version if it contains only a single element
subtyping `ConcreteVersion` (i.e. `StandardVersion(...)` or `GitVersion(...)`)
- In version lists, the parser turns `@x` into `VersionRange(x, x)` instead
of `Version(x)`.
- The above also means that `ver("x")` produces a range, whereas
`ver("=x")` produces a `StandardVersion`. The `=` is part of _VersionList_
syntax.
- `VersionList.__str__` now outputs `=x.y.z` for specific version entries,
and `x.y.z` as a short-hand for ranges `x.y.z:x.y.z`.
- `Spec.format` no longer aliases `{version}` to `{versions}`, but pulls the
concrete version out of the list and prints that -- except when the list
is not concrete, then it falls back to `{versions}` to avoid a pedantic error.
For projections of concrete specs, `{version}` should be used to render
`1.2.3` instead of `=1.2.3` (which you would get with `{versions}`).
The default `Spec` format string used in `Spec.__str__` now uses
`{versions}` so that `str(Spec(string)) == string` holds.
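A small sketch of the resulting behavior (illustrative asserts; import names are assumptions based on the description above):
```python
from spack.version import StandardVersion, ver  # assumed import path

pinned = ver("=1.2.3")   # a specific version (StandardVersion)
ranged = ver("1.2.3")    # the range 1.2.3:1.2.3 (ClosedOpenRange)

# no implicit promotion between a specific version and the range
# containing only that version
assert isinstance(pinned, StandardVersion)
assert pinned != ranged
```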
## Changes to `GitVersion`
- `GitVersion` is a small wrapper around `StandardVersion` which enriches it
with a git ref. It no longer inherits from it.
- `GitVersion` _always_ needs to be able to look up an associated Spack version
if it was not assigned (yet). It throws a `VersionLookupError` whenever `ref_version`
is accessed but it has no means to look up the ref; in the past Spack would
not error and use the commit sha as a literal version, which was incorrect.
- `GitVersion` is never equal to `StandardVersion`, nor is satisfied by it. This
is such that we don't lose transitivity. This fixes the following bug on `develop`
where `git_version_a == standard_version == git_version_b` does not imply
`git_version_a == git_version_b`. It also ensures equality always implies equal
hash, which is also currently broken on develop; inclusion tests of a set of
versions + git versions would behave differently from inclusion tests of a
list of the same objects.
- The above means `ver("ref=1.2.3") != ver("=1.2.3")` could break packages that branch
on specific versions, but that was brittle already, since the same happens with
externals: `pkg@1.2.3-external` suffixes wouldn't be exactly equal either. Instead,
those checks should be `x.satisfies("@1.2.3")` which works both for git versions and
custom version suffixes.
- `GitVersion` from commit will now print as `<hash>=<version>` once the
git ref is resolved to a spack version. This is for reliability -- version is frozen
when added to the database and queried later. It also improves performance
since there is no need to clone all repos of all git versions after `spack clean -m`
is run and something queries the database, triggering version comparison, such
as during reuse concretization.
- The "empty VerstionStrComponent trick" for `GitVerison` is dropped since it wasn't
representable as a version string (by design). Instead, it's replaced by `git`,
so you get `1.2.3.git.4` (which reads 4 commits after a tag 1.2.3). This means
that there's an edge case for version schemes `1.1.1`, `1.1.1a`, since the
generated git version `1.1.1.git.1` (1 commit after `1.1.1`) compares larger
than `1.1.1a`, since `a < git` are compared as strings. This is currently a
wont-fix edge case, but if really required, could be fixed by special casing
the `git` string.
- Saved, concrete specs (database, lock file, ...) that only had a git sha as their
version, but have no means to look up the effective Spack version anymore, will
now see their version mapped to `hash=develop`. Previously these specs
would always have their sha literally interpreted as a version string (even when
it _could_ be looked up). This only applies to databases, lock files and spec.json
files created before Spack 0.20; after this PR, we always have a Spack version
associated with the relevant GitVersion.
- Fixes a bug where previously `to_dict` / `from_dict` (de)serialization would not
reattach the repo to the GitVersion, causing the git hash to be used as a literal
(bogus) version instead of the resolved version. This was in particular breaking
version comparison in the build process on macOS/Windows.
## Installing or matching specific versions
- In the past, `spack install pkg@3.2` would install `pkg@=3.2` if it was a
known specific version defined in the package, even when newer patch releases
`3.2.1`, `3.2.2`, `...` were available. This behavior was only there because
there was no syntax to distinguish between `3.2` and `3.2.1`. Since there is
syntax for this now through `pkg@=3.2`, the old exact matching behavior is
removed. This means that `spack install pkg@3.2` constrains the `pkg` version
to the range `3.2`, and `spack install pkg@=3.2` constrains it to the specific
version `3.2`.
- Also in directives such as `depends_on("pkg@2.3")` and their when
conditions `conflicts("...", when="@2.3")`, versions are interpreted as ranges, and
specific version matches require `@=2.3`.
- No matching version: in the case `pkg@3.2` matches nothing, concretization
errors. However, if you run `spack install pkg@=3.2` and this version
doesn't exist, Spack will define it; this allows you to install non-registered
versions.
- For consistency, you can now do `%gcc@10` and let it match a configured
`10.x.y` compiler. It errors when there is no matching compiler.
In the past it was interpreted like a specific `gcc@=10` version, which
would get bootstrapped.
- When compiler _bootstrapping_ is enabled, `%gcc@=10.2.0` can be used to
bootstrap a specific compiler version.
## Other changes
- Externals, compilers, and develop spec definitions are backwards compatible.
They are typically defined as `pkg@3.2.1` even though they should be
saying `pkg@=3.2.1`. Spack now transforms `pkg@3` into `pkg@=3` in those cases.
- Finally, fix strictness of `version(...)` directive/declaration. It just does a simple
type check, and now requires strings/integers. Floats are not allowed because
they are ambiguous: `str(3.10) == "3.1"`.
`spack buildcache create` is a misnomer because it's the only way to push to
an existing buildcache (and it in fact calls binary_distribution.push).
Also we have `spack buildcache update-index`, but for create the flag is
`--rebuild-index`, which is confusing (and also... why "rebuild"
something if the command is "create" in the first place? That implies it
wasn't there to begin with).
So, after this PR, you can use either
```
spack buildcache create --rebuild-index
```
or
```
spack buildcache push --update-index
```
Also, alias `spack buildcache rebuild-index` to `spack buildcache
update-index`.
Spack never parsed `nagfor` linker arguments put on the compiler line:
```
nagfor -Wl,-Wl,,-rpath,,/path
```
so, let's continue not attempting to parse that.
`buildcache create --rel`: deprecate this because there is no point in
making things relative before tarballing; on install you need to expand
`$ORIGIN` / `@loader_path` / relative symlinks anyways because some
dependencies may actually be in an upstream, or have different
projections.
`buildcache install --allow-root`: this flag was propagated through a
lot of functions but was ultimately unused.
This switches the default CMake build type to `build_type=Release`.
This offers:
- higher optimization level, including loop vectorization on older GCC
- adds NDEBUG define, which disables assertions, which could cause speedups if assertions are in loops etc
- no `-g` means smaller install size
Downsides are:
- worse backtraces (though this does NOT strip symbols)
- perf reports may be useless
- no function arguments / local variables in debugger (could be of course)
- no file path / line numbers in debugger
The downsides can be mitigated by overriding to `build_type=RelWithDebInfo` in `packages.yaml`,
if needed. The upside is that builds will be MUCH smaller (and faster) with this change.
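For instance, debug info can be restored globally with something along these lines in `packages.yaml` (illustrative snippet):
```yaml
packages:
  all:
    variants: build_type=RelWithDebInfo
```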
---------
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* Vendor ruamel.yaml v0.17.21
* Add unit test for whitespace regression
* Add an abstraction layer in Spack to wrap ruamel.yaml
All YAML operations are routed through spack.util.spack_yaml
The custom classes have been adapted to the new ruamel.yaml
class hierarchy.
Fixed line annotation issue in "spack config blame"
This ensures that:
a) no externals are added to the tarball metadata file
b) no externals are added to the prefix to prefix map on install, also
for old tarballs that did include externals
c) the prefix -> prefix map is always string to string, and
doesn't contain None in case a hash is missing for some reason
* Disable module generation by default (#35564)
a) It's used by site administrators, so it's niche
b) If it's used by site administrators, they likely need to modify the config anyhow, so the default config only serves as an example to get started
c) it's too arbitrary to enable tcl, but disable lmod
* Remove leftover from old module file schema
* Warn if module file config is detected and generation is disabled
---------
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Change the signature of the Environment.__init__ method to have
a single argument, i.e. the directory where the environment manifest
is located. Initializing that directory is now delegated to a function
taking care of all the error handling upfront. Environment objects
require a "spack.yaml" to be available to be constructed.
Add a class to manage the environment manifest file. The environment
now delegates to an attribute of that class the responsibility of keeping
track of changes modifying the manifest. This simplifies the
updates of the manifest file, and helps keep the spec lists in
memory in sync with the spack.yaml on disk.