This commit allows version specifiers to refer to git branches that contain
forward slashes. For example, the following is now valid syntax:
`pkg@git.releases/1.0`
It also adds a new method `Spec.format_path(fmt)` which is like `Spec.format`,
but also maps unsafe characters to `_` after interpolation. The difference is
as follows:
>>> Spec("pkg@git.releases/1.0").format("{name}/{version}")
'pkg/git.releases/1.0'
>>> Spec("pkg@git.releases/1.0").format_path("{name}/{version}")
'pkg/git.releases_1.0'
The `format_path` method is used in all projections. Notice that this method
also maps `=` to `_`
>>> Spec("pkg@git.main=1.0").format_path("{name}/{version}")
'pkg/git.main_1.0'
which should avoid syntax issues when `Spec.prefix` is literally copied into a
Makefile, as sometimes happens in AutotoolsPackage or MakefilePackage.
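A minimal sketch of the idea behind `format_path` (the attribute lookup and the character set here are illustrative, not Spack's exact implementation):
```python
import re

def format_path(spec, fmt: str) -> str:
    # Interpolate each placeholder separately and sanitize only the
    # substituted text, so separators written in `fmt` itself survive.
    def substitute(match):
        value = str(getattr(spec, match.group(1)))  # e.g. spec.version
        return re.sub(r"[/\\=]", "_", value)        # unsafe chars -> "_"
    return re.sub(r"\{(\w+)\}", substitute, fmt)
```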
Currently `spack env activate --with-view` exists, but is a no-op.
So, it is not too much of a breaking change to make this redundant flag
accept a value `spack env activate --with-view <name>` which activates
a particular view by name.
The view name is stored in `SPACK_ENV_VIEW`.
This also fixes an issue where deactivating an environment that was activated
with `--without-view` could remove entries from PATH, since we now keep track
of whether the default view was "enabled" or not.
* spack checksum: improve interactive filtering
* fix signature of executable
* Fix restart when using editor
* Don't show [x version(s) are new] when no known versions (e.g. in spack create <url>)
* Test ^D in test_checksum_interactive_quit_from_ask_each
* formatting
* colorize / skip header on invalid command
* show original total, not modified total
* use colify for command list
* Warn about possible URL changes
* show possible URL change as comment
* make mypy happy
* drop numbers
* [o]pen editor -> [e]dit
Because those end up being passed to `ar`, which does not understand linker
arguments. This was making ldflags largely unusable for statically
linked CMake packages.
* Allow branching out of the "generic build" unification set
For cases like the one in https://github.com/spack/spack/pull/39661
we need to relax rules on unification sets.
The issue is that, right now, nodes in the "generic build" unification
set are unified together with their build dependencies. This was done
out of caution to avoid the risk of circular dependencies, which would
ultimately cause a very slow solve.
For build tools like Cython, however, the build dependencies are masked
by a long chain of "build, run" dependencies that belong in the
"generic build" unification space.
To allow splitting on cases like this, we relax the rule disallowing
branching out of the "generic build" unification set.
* Fix issue with pure build virtual dependencies
Pure build virtual dependencies were not accounted for properly in the
list of possible virtuals. This caused some facts connecting virtuals
to the corresponding providers not to be emitted, which in the end
led to unsat problems.
* Fixed a few issues in packages
py-gevent: restore dependency on py-cython@3
jsoncpp: fix typo in build dependency
ecp-data-vis-sdk: update spack.yaml and cmake recipe
py-statsmodels: add v0.13.5
* Make dependency on "blt" of type "build"
We run pip with `--no-build-isolation` because we don't want to let pip
install build deps.
As a consequence, when pip runs hooks, it runs hooks of *any* package it
can find in `sys.path`.
For Spack-built Python this includes user site packages -- there
shouldn't be any system site packages. So in this case it suffices to
set the environment variable PYTHONNOUSERSITE=1.
For external Python, more needs to be done, because there is no environment
variable that disables both system and user site packages; setting the
`python -S` flag doesn't work, because pip runs subprocesses that don't
inherit this flag (and there is no API to know if -S was passed).
So, for external Python, an empty venv is created before invoking pip in
Spack's build env, which ensures that pip can no longer see anything but
standard libraries and `PYTHONPATH`.
The downside of this is that pip will generate shebangs that point to
the python executable from the venv. So, for external python an extra
step is necessary where we fix up shebangs post install.
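A minimal sketch of the venv step for external Python, using only the standard library (the directory name is illustrative):
```python
import venv

def isolate_build_env(venv_dir: str = "spack-pip-venv") -> None:
    # Create an empty venv: no pip, no seed packages, and no access to
    # system site-packages. Running pip's subprocesses with this venv's
    # interpreter leaves only the stdlib and PYTHONPATH visible.
    builder = venv.EnvBuilder(system_site_packages=False, with_pip=False)
    builder.create(venv_dir)
```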
Two changes in this PR:
1. Register absolute paths in tarballs, which makes it easier
to use them as container image layers, or rootfs in general, outside
of Spack. Spack supports this already on develop.
2. Assemble the tarfile entries "by hand", which has a few advantages:
   1. Avoid reading `/etc/passwd`, `/etc/groups`, `/etc/nsswitch.conf`,
      which `tar.add(dir)` does _for each file it adds_
   2. Reduce the number of stat calls per file added by a factor of two
      compared to `tar.add`, which should help with slow, shared filesystems
      where these calls are expensive
   3. Create normalized `TarInfo` entries from the start, instead of letting
      Python create them and patching them after the fact
   4. Don't recurse into subdirs before processing files, to avoid
      keeping nested directories opened (this changes the tar entry
      order slightly; it's like sorting by `(not is_dir, name)`)
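A rough sketch of the "by hand" approach (the normalization choices shown are illustrative, not Spack's exact ones):
```python
import os
import stat
import tarfile

def add_file(tar: tarfile.TarFile, path: str, arcname: str) -> None:
    # Build a normalized TarInfo directly instead of using tar.add(),
    # which stats the file again and looks up uname/gname via /etc/passwd.
    st = os.lstat(path)
    info = tarfile.TarInfo(arcname)
    info.size = st.st_size
    info.mode = 0o755 if st.st_mode & stat.S_IXUSR else 0o644
    info.mtime = 0                    # reproducible timestamps
    info.uid = info.gid = 0
    info.uname = info.gname = ""      # no ownership lookups at all
    with open(path, "rb") as f:
        tar.addfile(info, f)
```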
For a long time, the docs have generated a huge, static HTML package list. It has some
disadvantages:
* It's slow to load
* It's slow to build
* It's hard to search
We now have a nice website that can tell us about Spack packages, and it's searchable so
users can easily find the one or two packages out of 7400 that they're looking for. We
should link to this instead of including a static package list page in the docs.
- [x] Replace package list link with link to packages.spack.io
- [x] Remove `package_list.html` generation from `conf.py`.
- [x] Add a new section for "Links" to the docs.
- [x] Remove docstring notes from contribution guide (we haven't generated RST
for package docstrings for a while)
- [x] Remove references to `package-list` from docs.
Currently, Windows SDK detection will only pick up SDK versions
related to the current version of Windows Spack is running on.
However, in some circumstances, we want to detect other versions
of the SDK, for example, when compiling on Windows 11 for Windows
10 to ensure an API is compatible with Win10.
* Make use of `prefix` in the Cray manifest schema (prepend it to
the relative CC etc.) - this was a Spack error.
* Warn people when wrong-looking compilers are found in the manifest
(i.e. non-existent CC path).
* Bypass compilers that we fail to add (don't allow a single bad
compiler to terminate the entire read-cray-manifest action).
* Refactor Cray manifest tests: module-level variables have been
replaced with fixtures, specifically using the `test_platform`
fixture, which allows the unit tests to run with the new
concretizer.
* Add unit test to check case where adding a compiler raises an
exception (check that this doesn't prevent processing the
rest of the manifest).
If you `spack install x ^y` where `y` is a pure build dep of `x`, and
then uninstall `y`, and then `spack install --overwrite x ^y`, the build
fails because `y` is not re-installed.
The same can happen when you install a develop spec, run `spack gc`,
modify sources, and install again; develop specs rely on overwrite
installs to work correctly.
This PR adds a new audit sub-command to check that detection of relevant packages
is performed correctly in a few scenarios mocking real use-cases. The data for each
package being tested is in a YAML file called `detection_test.yaml` alongside the
corresponding `package.py` file.
This is to allow encoding detection tests for compilers and other widely used tools,
in preparation for compilers as dependencies.
Modifications:
- [x] Move `spack.util.string` to `llnl.string`
- [x] Remove dependency of `llnl` on `spack.error`
- [x] Move path of `spack.util.path` to `llnl.path`
- [x] Move `spack.util.environment.get_host_*` to `spack.spec`
* msvc.py: don't import distutils
Introduced in #27021, this import makes Spack forward-incompatible with newer Python versions.
The module was already deprecated at the time of the PR.
* update spack package
Fixes #39622
Add a timeout to compiler detection and allow Spack to proceed when
this timeout occurs.
In all cases, the timeout is 120 seconds: it is assumed that any compiler
invocation we do for verification purposes will complete within that
amount of time.
Also refine executables that are tested as being possible MSVC
instances, and limit where we try to detect MSVC. In more detail:
* Compiler detection should time out after a certain period of time.
Because compiler detection executes arbitrary executables on the
system, we could encounter a program that just hangs, or even a
compiler that hangs on a license key or similar. A timeout
prevents this from hanging Spack.
* Prevent things like `cl-.*` from being detected as potential MSVC
  installs. `cl` is always just `cl` in all cases that Spack supports.
  Change the MSVC class to indicate this.
* Prevent compilers unsupported on certain platforms from being
detected there (i.e. don't look for MSVC on systems other than
Windows).
The first point alone is sufficient to address #39622, but the
next two reduce the likelihood of timeouts (which is useful since
those slow down the user even if they are survivable).
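A minimal sketch of the timeout behavior (the helper name and the probing flag are illustrative):
```python
import subprocess

def probe_compiler(exe: str, timeout: float = 120.0):
    # Run a single version query; a hung binary (e.g. one waiting on a
    # license server) raises TimeoutExpired instead of stalling Spack.
    try:
        proc = subprocess.run(
            [exe, "--version"], capture_output=True, text=True, timeout=timeout
        )
    except (subprocess.TimeoutExpired, OSError):
        return None  # skip this candidate and keep going
    return proc.stdout
```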
Put back normalization of the "virtuals" input as a sorted tuple.
Without this we might get edges that differ just for the order of
virtuals, or that have lists, which are not hashable.
Add unit-tests to prevent regressions.
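In sketch form, the normalization is just:
```python
def normalized_virtuals(virtuals) -> tuple:
    # Sorting removes order differences between otherwise-equal edges;
    # converting to a tuple keeps the annotation hashable (lists aren't).
    return tuple(sorted(virtuals))
```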
By default, do not let deprecated versions enter the solve.
Previously you could concretize to something deprecated, only to get errors on install.
With this commit, we get errors on concretization, so the issue is caught earlier.
PythonExtension is a base class for PythonPackage, and
is meant to be used for any package that is a Python
extension but is not built using "python_pip".
The "update_external_dependency" method in the base
class calls another method that is defined in the derived
class.
Push "get_external_python_for_prefix" up in the hierarchy
to make method calls consistent.
This commit replaces the internal representation of deptypes with `int`, which is more compact
and faster to operate with.
Double loops like:
```
any(x in ys for x in xs)
```
are replaced by the constant-time operation `bool(xs & ys)`, where `xs` and `ys` are dependency types.
Global constants are exposed for convenience in `spack.deptypes`.
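A sketch of the idea (the flag names and values are illustrative, not the actual constants in `spack.deptypes`):
```python
# Each dependency type is one bit; a combination is a plain int.
BUILD, LINK, RUN, TEST = 1, 2, 4, 8

def intersects(xs: int, ys: int) -> bool:
    # Replaces the double loop `any(x in ys for x in xs)` with a single
    # constant-time bitwise AND.
    return bool(xs & ys)

assert intersects(BUILD | LINK, LINK | RUN)   # both contain LINK
assert not intersects(BUILD, RUN | TEST)      # disjoint types
```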
Currently, the concretizer emits facts for all versions known to Spack, including deprecated versions, and has a specific optimization objective to minimize their use.
This commit simplifies how deprecated versions are handled by considering possible versions for a spec only if they appear in a spec literal, or if `config:deprecated:true` is set directly or through the `--deprecated` flag. The optimization objective has also been removed, in favor of just ordering versions and placing deprecated ones last.
This results in:
a) no delayed errors on install, but concretization errors when deprecated versions would be the only option. This is particularly relevant for CI, where it's better to get errors early
b) a slight concretization speed-up due to fewer facts
c) a simplification of the logic program.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
NMake makefiles are still called makefiles. The corresponding builder
variable was called "nmakefile", which is a bit unintuitive and led
to a few easy-to-make, hard-to-notice mistakes when creating packages.
This commit renames the builder property to be "makefile"
Extensionless archives requiring two-stage decompression and extraction
require intermediate archives to be renamed after decompression/extraction
to prevent collisions. Prior behavior attempted to clean up the intermediate
archive under its original name; this PR ensures the renamed folder is
cleaned up instead.
Co-authored-by: Dan Lipsa <dan.lipsa@khq.kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
* Perform external spec detection with multiple workers
The logic to perform external spec detection has been refactored
into classes. These classes use the GoF "template" pattern to account
for the small differences between searching for "executables" and
for "libraries", while unifying the larger part of the algorithm.
A ProcessPoolExecutor is used to parallelize the work.
* Speed-up external find by tagging detectable packages automatically
Querying packages by tag is much faster than inspecting the repository,
since tags are cached. This commit adds a "detectable" tag to every
package that implements the detection protocol, and external detection
uses it to search for packages.
* Pass package names instead of package classes to workers
The slowest part of the search is importing the Python modules
associated with candidate packages. The import is done serially
before we distribute the work to the pool of executors.
This commit pushes the import of the Python module to the job
performed by the workers, and passes just the name of the packages
to the executors.
In this way imports can be done in parallel.
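In sketch form (the helpers are hypothetical, standing in for Spack's repository and detection machinery):
```python
from concurrent.futures import ProcessPoolExecutor

def detect_by_name(pkg_name: str):
    # The expensive module import happens here, inside the worker, so
    # imports of candidate packages run in parallel across processes.
    pkg_cls = load_package_class(pkg_name)   # hypothetical helper
    return pkg_name, run_detection(pkg_cls)  # hypothetical detection call

def detect_all(candidate_names):
    with ProcessPoolExecutor() as pool:
        return dict(pool.map(detect_by_name, candidate_names))
```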
* Rework unit-tests for Windows
Some unit tests were doing a full e2e run of a command
just to check input handling. Make the tests more
focused by just stressing a specific function.
Mark two tests as xfail on Windows; they will be fixed
by a PR in the queue. The tests are failing because we
monkeypatch internals in the parent process, but the
monkeypatching is not done in the "spawned" child
process.
* Write timing information for installs from cache
* CI: aggregate and upload install_times.json to artifacts
* CI: Don't change root directory for artifact generation
* Flat event based timer variation
The event-based timer allows timers to be started and stopped easily without
wiping sub-timer data. It also requires less branching logic when
tracking time.
The json output is non-hierarchical in this version and hierarchy is
less rigidly enforced between starting and stopping.
* Add and write timers for top level install
* Update completion
* remove unused subtimer api
* Fix unit tests
* Suppress timing summary option
* Save timers summaries to user_data artifacts
* Remove completion from fish
* Move spack python to script section
* Write timer correctly for non-cache installs
* Re-add hash to timer file
* Fish completion updates
* Fix null timer yield value
* fix type hints
* Remove timer-summary-file option
* Add "." in front of non-package timer name
---------
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
This is a fixed version of b72a268
* That commit would discard the final key component (so if you set
"config:install_tree:root", it would discard "root" and just set
install tree).
* When setting key:"value", with the quotes, that commit would
discard the quotes, which would confuse the system if adding a
value like "{example}" (the "{" character indicates a dictionary).
This commit retains the quotes.
These commands are currently broken on powershell (Windows) due to
improper use of the Invoke-Command cmdlet and a lack of direct
support for the `--pwsh` argument in `spack load`, `spack unload`,
and `spack env deactivate`.
If you wanted to set a configuration option like
`config:install_tree:root` to "C:/path/to/config.yaml", Spack had
trouble parsing this because of the ":" in the value. This adds
logic to allow using quotes to enclose the value, so you can add
`config:install_tree:root:"C:/path/to/config.yaml"`.
Configuration keys should never contain a quote character, so the
presence of any quote is taken to mean that the rest of the string
is specifying the value.
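A minimal sketch of the parsing rule (the function name is illustrative):
```python
def parse_config_arg(arg: str):
    # Keys never contain quotes, so the first quote begins the value,
    # which may itself contain ":" (e.g. a Windows drive letter).
    for quote in ('"', "'"):
        if quote in arg:
            head, value = arg.split(quote, 1)
            return head.rstrip(":").split(":"), value.rstrip(quote)
    *keys, value = arg.split(":")
    return keys, value

assert parse_config_arg('config:install_tree:root:"C:/path/to/tree"') == (
    ["config", "install_tree", "root"], "C:/path/to/tree"
)
```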
Setting the undocumented variable SPACK_CONCRETIZER_REQUIRE_CHECKSUM
now causes the solver to avoid accounting for versions that are not checksummed.
This feature is used in CI to avoid spurious concretization against e.g. develop branches.
Currently, OneAPI's setvars scripts effectively disregard any arguments
we're passing to the MSVC vcvars env setup script, and additionally,
completely ignore the requested version of OneAPI, defaulting to whatever
the latest installed on the system is.
This leads to a scenario where we have improperly constructed Windows
native development environments, with potentially multiple versions of
MSVC and OneAPI being loaded or called in the same env. Obviously this is
far from ideal and leads to some fairly inscrutable errors such as
overlapping header files between MSVC and OneAPI and a different version
of OneAPI being called than the env was setup for.
This PR solves this issue by creating a structured invocation of each
relevant script in an order that ensures the correct values are set in
the resultant build env.
The order needs to be:
1. MSVC vcvarsall
2. The compiler specific env.bat script for the relevant version of
the oneapi compiler we're looking for. The root setvars script seems
to respect this as well, although it is less explicit
3. The root oneapi setvars script, which sets up everything else the
oneapi env needs and seems to respect previous env invocations.
Bash completion is now smarter about handling aliases. In particular, if all completions
for some input command are aliased to the same thing, we'll just complete with that thing.
If you've already *typed* the full alias for a command, we'll complete the alias.
So, for example, here there's more than one real command involved, so all aliases are
shown:
```console
$ spack con
concretise concretize config containerise containerize
```
Here, there are two possibilities: `concretise` and `concretize`, but both map to
`concretize` so we just complete that:
```console
$ spack conc
concretize
```
And here, the user has already typed `concretis`, so we just go with it as there is only
one option:
```console
$ spack concretis
concretise
```
From a user:
> Aargh.
> ```
> ==> Error: concretise is not a recognized Spack command or extension command; check with `spack commands`.
> ```
To make things easier for our friends in the UK, this adds `concretise` and
`containerise` aliases for the `spack concretize` and `spack containerize` commands.
- [x] add aliases
- [x] update completions
This reapplies 66f7540, which adds support for hardlinks/junctions on
Windows systems where developer mode is not enabled.
The commit was reverted on account of multiple issues:
* Checks added to prevent dangling symlinks were interfering with
existing CI builds on Linux (i.e. builds that otherwise succeed were
failing for creating dangling symlinks).
* The logic also updated symlinking to perform redirection of relative
  paths, which led to malformed symlinks.
This commit fixes these issues.
#35042 introduced lazy hash parsing, but didn't remove a
few attributes from the parser that were needed only for
concrete specs.
This commit removes them, since they are effectively
dead code.
The heuristic for duplicate nodes contains a few typos, and
apparently slows down the solve for specs that have a lot of
sub-optimal choices to be taken.
This is likely because, with many sub-optimal choices, the
low-priority, flawed heuristic is being used by clingo.
Here I split the heuristic, so complex rules that matter only
if we allow multiple nodes from the same package are used
only in that case.
Since #34821 we are annotating virtual dependencies on
DAG edges, and reconstructing virtuals in memory when
we read a concrete spec from previous formats.
Therefore, we can remove a TODO in asp.py, and rely on
"virtual_on_edge" facts to be imposed.
Computing `str(spec)` is faster than computing `hash(spec)`, and
since all the abstract specs we deal with come from user configuration,
they cannot cover DAG structures that are not captured by `str()` but
are captured by `hash()`.
Delay lookup for abstract hashes until concretization time, instead of
until Spec comparison. This has a few advantages:
1. `satisfies` / `intersects` etc don't always know where to resolve the
abstract hash (in some cases it's wrong to look in the current env,
db, buildcache, ...). Better to let the call site dictate it.
2. Allows search by abstract hash without triggering a database lookup,
causing quadratic complexity issues (accidental nested loop during
search)
3. Simplifies queries against the buildcache, they can now use Spec
instances instead of strings.
The rules are straightforward:
1. `a` satisfies `b` when `b`'s hash is a prefix of `a`'s hash
2. `a` intersects `b` when either `a`'s or `b`'s hash is a prefix of the other's
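In sketch form, over the full hash strings:
```python
def hash_satisfies(a_hash: str, b_hash: str) -> bool:
    # `a` satisfies `b` when b's (possibly abbreviated) hash is a
    # prefix of a's hash.
    return a_hash.startswith(b_hash)

def hash_intersects(a_hash: str, b_hash: str) -> bool:
    # Two abstract hashes intersect when either is a prefix of the other.
    return a_hash.startswith(b_hash) or b_hash.startswith(a_hash)
```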
The median length of this list is 1. For reasons I don't know, `.sort()`
still likes to call the key function.
This saves ~9% of total database read time, and the number of calls
drops from 5305 to 1715.
* Do not impose provider conditions if the node is not a provider
fixes #39455
When a node can be a provider of a spec, but is not selected as
a provider, we should not be imposing provider conditions on the
virtual.
* Adjust the integrity constraint, by using the correct atom
* Add "only_clingo", "only_original" and "not_on_windows" markers
* Modify tests to use the "not_on_windows" marker
* Mark tests that run only with clingo
* Mark tests that run only with the original concretizer
To avoid paying the cost of setup and of a full grounding again,
move cycle detection into a separate program and check first if
the solution has cycles.
If it has, ground only the integrity constraint preventing cycles
and solve again.
The "concretizer" section has been extended with a "duplicates:strategy"
attribute, that can take three values:
- "none": only 1 node per package
- "minimal": allow multiple nodes opf specific packages
- "full": allow full duplication for a build tool
This refactor introduces extra indices for the triggers and
effects of a condition, so that the corresponding clauses
are evaluated once for every condition they apply to.
All the solution modes we use imply that we have to solve for all
the literals, except for "when possible".
Here we remove a minimization on the number of literals not
solved, and directly emit a fact when a literal *has* to be
solved.
Introduce the concept of "condition sets", i.e. the set of packages on which
a package can require / impose conditions. This currently maps to the link/run
sub-dag of each package + its direct build dependencies.
Parametrize the "condition" and "requirement" logic to multiple nodes.
So far the encoding has a single ID per package, i.e. all the
facts will be node(0, Package). This will prepare the stage for
extending this logic and having multiple nodes from the same
package in a DAG.
Each fact that is deduced from package rules and starts with
a bare package atom is transformed into a "facts" atom containing
a nested function.
For instance, we transformed:
`version_declared(Package, ...)` -> `facts(Package, version_declared(...))`
This allows us to clearly mark facts that represent a rule on the package,
and it will help later when we have to distinguish the cases where
the atom "Package" refers to package rules rather than to a
node in the DAG.
Windows executable paths can have spaces in them, which was leading to
errors when constructing Executable objects: the parser was intended
to handle cases where users could provide an executable along with one
or more space-delimited arguments.
* Executable now assumes that it is constructed with a string argument
that represents the path to the executable, but no additional arguments.
* Invocations of Executable.__init__ that depended on this have been
updated (this includes the core, tests, and one instance of builtin
repository package).
* The error handling for failed invocations of Executable.__call__ now
includes a check for whether the executable name/path contains a
space, to help users debug cases where they (now incorrectly)
concatenate the path and the arguments.
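A sketch of the new contract and the debugging hint (the class shape is illustrative, not Spack's full `Executable`):
```python
import subprocess

class Executable:
    def __init__(self, path: str):
        # `path` is the full path to the executable, spaces and all;
        # it is no longer split into program + arguments.
        self.path = path

    def __call__(self, *args: str) -> str:
        try:
            proc = subprocess.run(
                [self.path, *args], check=True, capture_output=True, text=True
            )
        except FileNotFoundError as e:
            hint = ""
            if " " in self.path:
                # Help users who (now incorrectly) concatenated the
                # path and the arguments into one string.
                hint = "; the path contains a space, pass arguments separately"
            raise RuntimeError(f"cannot run {self.path!r}{hint}") from e
        return proc.stdout
```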
* The module-level skip for tests in `cmd.install` on Windows is removed.
A few classes of errors still persist:
* Cdash tests are not working on Windows
* Tests for failed installs are also not working (this will require
investigating bugs in output redirection)
* Environments are not yet supported on Windows
Overall, though, this enables testing of most basic uses of `spack install`.
* Git repositories cached for version lookups were using a layout that
mimicked the URL as much as possible. This was useful for listing the
cache directory and understanding what was present at a glance, but
the paths were overly long on Windows. On all systems, the layout is
now a single directory based on a hash of the Git URL and is shortened
(which ensures a consistent and acceptable length, and also avoids
special characters).
* In particular, this removes util.url.parse_git_url and its associated
test, which were used exclusively for generating the git cache layout
* Bootstrapping is now enabled for unit tests on Windows
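A sketch of the layout scheme (the hash algorithm and truncation length are illustrative):
```python
import hashlib

def git_cache_dirname(git_url: str) -> str:
    # One flat directory per repository, named by a shortened hash of
    # the URL: consistent length, no special characters, Windows-safe.
    return hashlib.sha256(git_url.encode("utf-8")).hexdigest()[:10]
```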
#36770 added git as a dependency to `setuptools-scm`. This in turn makes `git` a
transitive dependency for our bootstrapping process.
Since `git` may take a long time to build, and is found on most systems, try to
detect it as an external.
This makes the name of the global variable representing
the repository currently in use uppercase. Doing so is advised
by pylint rules, and helps to identify where the global is used.
* Prefix conflict messages with package name
This patch prefixes all conflict messages with the package name to
alleviate what was otherwise a very manual process. Note that this patch
is a one-line change but has a fairly outsized impact.
* same for requires directive
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
* Ensure that all variants have a description
* Update mock packages too
* Fix test invocations
* Black fix
* mgard: update variant descriptions
* flake8 fix
* black fix
* Add to audit tests
* Relax type hints
* Older Python support
* Undo all changes to mock packages
* Flake8 fix