Local `git` tests will fail with `fatal: transport 'file' not allowed` when using git 2.38.1 or higher, due to a fix for `CVE-2022-39253`.
This was fixed in CI in #33429, but that doesn't help the issue for anyone's local environment. Instead of fixing this with git config in CI, we should ensure that the tests run anywhere.
- [x] Introduce `spack.util.git`.
- [x] Use `spack.util.git.get_git()` to get a git executable, instead of `which("git")` everywhere.
- [x] Make all `git` tests use a `git` fixture that goes through `spack.util.git.get_git()`.
- [x] Add `-c protocol.file.allow=always` to all `git` invocations under `pytest`.
- [x] Revert changes from #33429, which are no longer needed.
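A minimal sketch of what the helper and the test-time flag from the checklist above could look like, assuming a `get_git()` wrapper in `spack.util.git` built on `spack.util.executable.which`; the exact names and the pytest detection in the PR may differ:
```python
# Sketch of spack/util/git.py; names and the pytest check are illustrative.
import sys
from typing import Optional

import spack.util.executable as exe


def get_git(required: bool = False) -> Optional[exe.Executable]:
    """Return a git Executable, or None if git is not found."""
    git = exe.which("git", required=required)
    if git and "pytest" in sys.modules:
        # Under pytest, allow file:// transports so repository fixtures keep
        # working with git >= 2.38.1, where CVE-2022-39253 made 'file' opt-in.
        git.add_default_arg("-c")
        git.add_default_arg("protocol.file.allow=always")
    return git
```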
`spack graph` has been reworked to use:
- Jinja templates
- builder objects to construct the template context when DOT graphs are requested.
This made it possible to add a new colored output for DOT graphs that highlights both
the dependency types and the nodes that are needed at runtime for a given spec.
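As a rough illustration (not Spack's actual template or builder classes), the DOT output can be produced by rendering a Jinja template from a context assembled by a builder object:
```python
# Illustrative only: a builder-style context fed into a Jinja template for DOT.
import jinja2

DOT_TEMPLATE = jinja2.Template(
    """digraph G {
{% for node in nodes %}
  "{{ node.name }}" [color="{{ node.color }}"]
{% endfor %}
{% for edge in edges %}
  "{{ edge.parent }}" -> "{{ edge.child }}" [label="{{ edge.deptypes }}"]
{% endfor %}
}
""",
    trim_blocks=True,
    lstrip_blocks=True,
)


class DotGraphBuilder:
    """Collects nodes and edges, then renders the template context."""

    def __init__(self):
        self.nodes, self.edges = [], []

    def node(self, name, color="black"):
        self.nodes.append({"name": name, "color": color})

    def edge(self, parent, child, deptypes):
        self.edges.append({"parent": parent, "child": child, "deptypes": ",".join(deptypes)})

    def render(self):
        return DOT_TEMPLATE.render(nodes=self.nodes, edges=self.edges)
```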
`spack solve` is supposed to show you timings you can compare: setup, ground, solve, etc.,
all in a list. You're also supposed to be able to compare them easily across runs. With
`pretty_seconds()` (introduced in #33900), it's easy to miss the units; e.g., try to spot the
bottleneck here:
```console
> spack solve --timers tcl
setup 22.125ms
load 16.083ms
ground 8.298ms
solve 848.055us
total 58.615ms
```
It's easier to see what matters if these are all in the same units, e.g.:
```
> spack solve --timers tcl
setup 0.0147s
load 0.0130s
ground 0.0078s
solve 0.0008s
total 0.0463s
```
And the units won't fluctuate from run to run as you make changes.
- [x] make `spack solve` timings consistent, as they were before (see the sketch below)
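A minimal sketch of the kind of fixed-unit formatting shown above; the real timer output code in Spack may differ:
```python
# Sketch: always report phase timings in seconds with a fixed precision,
# instead of auto-scaling each line to ms/us.
def format_timings(phases):
    width = max(len(name) for name in phases)
    return "\n".join(f"{name:<{width}}  {seconds:.4f}s" for name, seconds in phases.items())


print(format_timings({"setup": 0.0147, "load": 0.0130, "ground": 0.0078, "solve": 0.0008}))
```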
Avoid text decoding and encoding when combining log files; instead,
combine them in binary mode.
Also do a buffered copy, which is sometimes faster for large log files.
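A sketch of the approach, assuming the logs are simply concatenated file by file (not the exact code in the PR):
```python
# Sketch: concatenate log files in binary mode with a buffered copy, avoiding
# any decode/encode round-trip.
import shutil


def merge_logs(sources, destination, buffer_size=1024 * 1024):
    with open(destination, "wb") as dst:
        for source in sources:
            with open(source, "rb") as src:
                shutil.copyfileobj(src, dst, length=buffer_size)
```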
Currently, the Spack docs show documentation for submodules *before* the documentation for
the package itself on package doc pages. This means that if you put docs in `__init__.py` in
some package, they will be shown *after* the docs for all submodules of the
package instead of at the top, as an intro to the package. See, e.g.,
[the lockfile docs](https://spack.readthedocs.io/en/latest/spack.environment.html#module-spack.environment),
which should be at the
[top of that page](https://spack.readthedocs.io/en/latest/spack.environment.html).
- [x] add the `--module-first` option to Sphinx so that it generates module docs at the top of the page.
The sticky property will prevent clingo from changing the `amdgpu_target` variant
to work around conflicts. This is the same behaviour that was adopted for
`cuda_arch` in 055c9d125d.
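For illustration, a sticky variant looks roughly like this in a package recipe; the package, values, and checksum below are placeholders, not real Spack code:
```python
# Hypothetical package snippet: with sticky=True, the concretizer will not
# flip the variant's value on its own to work around conflicts; the user has
# to change it explicitly.
from spack.package import *


class ExampleRocmApp(Package):
    """Placeholder package showing a sticky variant."""

    homepage = "https://example.com"
    url = "https://example.com/example-1.0.tar.gz"

    version("1.0", sha256="0" * 64)  # placeholder checksum

    variant(
        "amdgpu_target",
        default="none",
        values=("none", "gfx906", "gfx908", "gfx90a"),
        multi=True,
        sticky=True,
        description="AMD GPU architecture",
    )
```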
Implement an alternative strategy for `index.json` invalidation.
The current approach of paired `index.json` / `index.json.hash` files is
problematic because it leads to races.
The standard solution for cache invalidation is ETags, which are
supported by both the HTTP and S3 protocols and allow conditional fetches.
This PR implements that for the http/https schemes. It should also work
for the s3 scheme, but that requires other PRs to be merged.
It also improves the unit tests for `index.json` fetches.
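A minimal sketch of an ETag-based conditional fetch over HTTP (a hypothetical helper, not the PR's actual code):
```python
# Sketch: fetch index.json only if it changed, using the ETag returned by the
# server and an If-None-Match header on subsequent requests.
import urllib.error
import urllib.request


def fetch_index(url, etag=None):
    """Return (contents, etag), or (None, etag) if the cached copy is current."""
    headers = {"If-None-Match": etag} if etag else {}
    request = urllib.request.Request(url, headers=headers)
    try:
        with urllib.request.urlopen(request) as response:
            return response.read(), response.headers.get("ETag")
    except urllib.error.HTTPError as e:
        if e.code == 304:  # Not Modified: the cached index.json is still valid
            return None, etag
        raise
```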
Interim fix for #34559
Spack's MSVC compiler definition uses ifx as the Fortran compiler.
Prior to #33385, the Spack MSVC compiler definition required the
executable to be called "ifx.exe"; #33385 replaced this with just
"ifx", which inadvertently led to ifx falsely indicating the
presence of MSVC on non-Windows systems (which leads to future
errors when attempting to query/use those compiler objects).
This commit applies a short-term fix by updating MSVC Fortran
version detection to always indicate a failure on non-Windows.
fixes #34518
Fix an issue caused by the MRO chain of the package wrapper
during build. Before this PR, we always returned
`False` when the builder object was created before the
`run_tests` method was monkey-patched.
This reverts commit 8035eeb36d.
It also removes the logic around an additional HEAD request meant to prevent
a more expensive GET request on the wrong content type. Since large files
are typically attachments and only downloaded when reading the
stream, it's not an optimization that helps much, and in fact the logic
was broken since the GET request was done unconditionally.
The main issue that's fixed is that Spack passes paths (as strings) to
functions that require URLs. That wasn't an issue on Unix, since there
you can simply concatenate `file://` and the path and all is good, but on
Windows that gives invalid file URLs. Also, on Unix, Spack would not handle URI encoding such as `x%20y` in file paths.
This PR also removes Spack's custom `url.parse` function, which had its own incorrect interpretation of file URLs, taking `file://x/y` to mean the relative path `x/y` instead of hostname `x` and path `/y`. It also automatically interpolated variables, which is surprising for a function that parses URLs.
Instead of all sorts of ad-hoc `if windows: fix_broken_file_url` logic, this PR
adds two helper functions around Python's own `pathname2url` and its reverse.
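A rough sketch of such helpers on top of the standard library; the names are illustrative, not necessarily those added by the PR:
```python
# Sketch: convert between local paths and file:// URLs portably, instead of
# concatenating "file://" + path (which breaks on Windows drive letters and
# ignores percent-encoding like x%20y).
import os
import urllib.parse
import urllib.request


def path_to_file_url(path):
    return urllib.parse.urljoin("file:", urllib.request.pathname2url(os.path.abspath(path)))


def file_url_to_path(url):
    return urllib.request.url2pathname(urllib.parse.urlparse(url).path)
```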
It also fixes a bug where some `spack buildcache` commands
used `-d` to mean `--mirror-url`, requiring a URL, and others to mean
`--directory`, requiring a path. It is now the latter consistently.
When installing binary tarballs, Spack has to download from its
binary mirrors.
Sometimes Spack has a local cache for these mirrors.
That cache helps order the mirrors to increase the likelihood of
getting a direct hit.
However, currently, when Spack can't find a spec in any local mirror cache,
it's very dumb:
- A while ago, it used to query each mirror to see if it had the spec,
  and use that information to order the mirrors again, only to then
  redo part of what it just did: fetch the spec
  from that mirror.
- Recently, it was changed to download a full index.json, which
can be multiple dozens of MBs of data and may take a minute to
process thanks to the blazing fast performance you get with
Python.
In the typical use case of concretizing with reuse, the full index.json
is already available, and it is likely that the local cache gives a perfect
mirror ordering on install. (There's typically no need to update any
caches.)
However, in the GitLab CI use case, the build jobs don't have a cache,
and it would be smarter to just do direct fetches instead of all the
redundant work of (1) and/or (2).
Also, direct fetches from mirrors will soon be fast enough to
prefer these direct fetches over the excruciating slowness of
index.json files.
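As a sketch of what such a direct fetch could look like; the mirror layout and file name used below are hypothetical, not necessarily Spack's real build cache layout:
```python
# Sketch: try each mirror directly for a spec file instead of fetching and
# parsing every mirror's full index.json first.
import urllib.error
import urllib.request


def find_spec_on_mirrors(mirror_urls, spec_hash):
    for mirror in mirror_urls:
        # "build_cache/<hash>.json" is a placeholder path for illustration.
        url = f"{mirror.rstrip('/')}/build_cache/{spec_hash}.json"
        try:
            with urllib.request.urlopen(url) as response:
                return mirror, response.read()
        except urllib.error.URLError:
            continue  # not on this mirror (or unreachable): try the next one
    return None, None
```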
Writing a long dependency like:
```python
depends_on(
"llvm"
"targets=amdgpu,bpf,nvptx,webassembly"
"version_suffix=jl +link_llvm_dylib ~internal_unwind"
)
```
when it should be formatted like this:
```python
depends_on(
"llvm"
" targets=amdgpu,bpf,nvptx,webassembly"
" version_suffix=jl +link_llvm_dylib ~internal_unwind"
)
```
can cause really subtle errors. Specifically, you'll get something like this in
the package sanity tests:
```
AttributeError: 'NoneType' object has no attribute 'rpartition'
```
because Spack happily constructs a class that has a dependency with name `None`.
We can catch this earlier by banning anonymous dependency specs directly in
`depends_on()`. This causes the package itself to fail to parse, and emits
a much better error message:
```
==> Error: Invalid dependency specification in package 'julia':
llvmtargets=amdgpu,bpf,nvptx,webassemblyversion_suffix=jl +link_llvm_dylib ~internal_unwind
```
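Conceptually, the check is just that the dependency spec parsed inside `depends_on()` has a name; a rough sketch, not the exact directive code:
```python
# Sketch: reject anonymous dependency specs at directive-evaluation time, so
# that package parsing fails with a clear message instead of silently creating
# a dependency whose name is None.
import spack.spec


def check_dependency_spec(spec_string, pkg_name):
    dep = spack.spec.Spec(spec_string)
    if not dep.name:
        raise ValueError(
            f"Invalid dependency specification in package '{pkg_name}': {spec_string}"
        )
    return dep
```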
* `url_exists` improvements (take 2)
Make `url_exists` do a HEAD request for the http/https/s3 protocols.
Rework the opener: construct it once and only once, and dispatch dynamically
to the right one based on config.
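For the http/https case, the HEAD request can be as simple as the following illustrative sketch (not the PR's actual opener code):
```python
# Sketch: ask only for headers instead of downloading the body to decide
# whether a URL exists.
import urllib.error
import urllib.request


def url_exists(url, timeout=10):
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout):
            return True
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return False
        raise
    except urllib.error.URLError:
        return False
```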
It's very common for us to tell users to grep through the existing Spack packages to
find examples of what they want, and it's also very common for package developers to do
it. Now, searching packages is even easier.
`spack pkg grep` runs grep on all `package.py` files in repos known to Spack. It has no
special options other than the search string; all options passed to it are forwarded
along to `grep`.
```console
> spack pkg grep --help
usage: spack pkg grep [--help] ...
positional arguments:
grep_args arguments for grep
options:
--help show this help message and exit
```
```console
> spack pkg grep CMakePackage | head -3
/Users/gamblin2/src/spack/var/spack/repos/builtin/packages/3dtk/package.py:class _3dtk(CMakePackage):
/Users/gamblin2/src/spack/var/spack/repos/builtin/packages/abseil-cpp/package.py:class AbseilCpp(CMakePackage):
/Users/gamblin2/src/spack/var/spack/repos/builtin/packages/accfft/package.py:class Accfft(CMakePackage, CudaPackage):
```
```console
> spack pkg grep -Eho '(\S*)\(PythonPackage\)' | head -3
AwsParallelcluster(PythonPackage)
Awscli(PythonPackage)
Bueno(PythonPackage)
```
## Return Value
This retains the return value semantics of `grep`:
* 0 for found
* 1 for not found
* >1 for error
## Choosing a `grep`
You can set the ``SPACK_GREP`` environment variable to choose the ``grep``
executable this command should use.
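A sketch of the executable selection and exit-code forwarding described above, simplified relative to the real command:
```python
# Sketch: run the grep chosen via SPACK_GREP (or plain "grep") over package.py
# files and propagate grep's exit code (0 found, 1 not found, >1 error).
import os
import subprocess
import sys


def pkg_grep(grep_args, package_files):
    grep = os.environ.get("SPACK_GREP", "grep")
    result = subprocess.run([grep, *grep_args, *package_files])
    return result.returncode


if __name__ == "__main__":
    sys.exit(pkg_grep(sys.argv[1:], ["package.py"]))
```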
Unit tests on Windows are supposed to pass for any PR to pass CI.
However, the return code for the unit test command was not being
checked, which meant this check was always passing (effectively
disabled). This PR
* Properly checks the result of the unit tests and fails if the
unit tests fail
* Fixes (or disables on Windows) a number of tests which have
"drifted" out of support on Windows since this check was
effectively disabled
## Motivation
Our parser grew to be quite complex, with a 2-state lexer and logic in the parser
that has up to 5 levels of nested conditionals. In the future, to turn compilers into
proper dependencies, we'll have to increase the complexity further as we foresee
the need to add:
1. Edge attributes
2. Spec nesting
to the spec syntax (see https://github.com/spack/seps/pull/5 for an initial discussion of
those changes). The main attempt here is thus to _simplify the existing code_ before
we start extending it later. We try to do that by adopting a different token granularity
and by using more complex regexes for tokenization. This allows us to have a "flatter"
encoding for the parser, i.e., it has fewer nested conditionals and a near-trivial lexer.
There are places, namely in `VERSION`, where we have to use negative lookahead judiciously
to avoid ambiguity. Specifically, this parse is ambiguous without `(?!\s*=)` in `VERSION_RANGE`
and an extra final `\b` in `VERSION`:
```
@ 1.2.3 : develop # This is a version range 1.2.3:develop
@ 1.2.3 : develop=foo # This is a version range 1.2.3: followed by a key-value pair
```
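To make the ambiguity concrete, here is a heavily simplified, hypothetical pair of token regexes (Spack's real `VERSION` and `VERSION_RANGE` definitions are more involved); without the `(?!\s*=)` lookahead, `develop` on the second line would be swallowed into the range:
```python
import re

# Simplified, illustrative token definitions.
VERSION = r"[a-zA-Z0-9_][a-zA-Z_0-9.\-]*\b"
VERSION_RANGE = rf"(?:{VERSION})?\s*:\s*(?:{VERSION}(?!\s*=))?"

for text in ["@ 1.2.3 : develop", "@ 1.2.3 : develop=foo"]:
    print(repr(re.match(rf"@\s*({VERSION_RANGE})", text).group(1)))
# '1.2.3 : develop'  -> the whole range is a single token
# '1.2.3 : '         -> 'develop=foo' is left for a key-value token
```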
## Differences with the previous parser
~There are currently 2 known differences with the previous parser, which have been added on purpose:~
- ~No spaces allowed after a sigil (e.g. `foo @ 1.2.3` is invalid while `foo @1.2.3` is valid)~
- ~`/<hash> @1.2.3` can be parsed as a concrete spec followed by an anonymous spec (before was invalid)~
~We can recover the previous behavior on both ones but, especially for the second one, it seems the current behavior in the PR is more consistent.~
The parser is currently 100% backward compatible.
## Error handling
Being based on more complex regexes, we can possibly improve error
handling by adding regexes for common issues and giving users hints based on them.
I'll leave that for a follow-up PR, but there's a stub for this approach in this PR.
## Performance
To be sure we don't add any performance penalty with this new encoding, I measured:
```console
$ spack python -m timeit -s "import spack.spec" "spack.spec.Spec(<spec>)"
```
for different specs on my machine:
* **Spack:** 0.20.0.dev0 (c9db4e50ba045f5697816187accaf2451cb1aae7)
* **Python:** 3.8.10
* **Platform:** linux-ubuntu20.04-icelake
* **Concretizer:** clingo
results are:
| Spec | develop | this PR |
| ------------- | ------------- | ------- |
| `trilinos` | 28.9 usec | 13.1 usec |
| `trilinos @1.2.10:1.4.20,2.0.1` | 131 usec | 120 usec |
| `trilinos %gcc` | 44.9 usec | 20.9 usec |
| `trilinos +foo` | 44.1 usec | 21.3 usec |
| `trilinos foo=bar` | 59.5 usec | 25.6 usec |
| `trilinos foo=bar ^ mpich foo=baz` | 120 usec | 82.1 usec |
so this new parser seems to be consistently faster than the previous one.
## Modifications
In this PR we just substituted the Spec parser, which means:
- [x] Deleted the `SpecParser` and `SpecLexer` classes from `spec.py`, and deleted `spack/parse.py`
- [x] Added a new parser in `spack/parser.py`
- [x] Hooked the new parser in all the places the previous one was used
- [x] Adapted unit tests in `test/spec_syntax.py`
## Possible future improvements
Random thoughts while working on the PR:
- Currently we transform hashes and files into specs during parsing. I think
we might want to introduce an additional step and parse special objects like
a `FileSpec` etc. in-between parsing and concretization.
This commit reworks the bootstrapping procedure to use Spack environments
as much as possible.
The `spack.bootstrap` module has also been reorganized into a Python package.
A distinction is made between "core" Spack dependencies (clingo, GnuPG, patchelf)
and other dependencies. For a number of reasons, explained in the `spack.bootstrap.core`
module docstring, "core" dependencies are still bootstrapped with the current ad-hoc
method.
All the other dependencies are instead bootstrapped using a Spack environment
that lives in a directory specific to the interpreter and the architecture being used.
All the vermin annotations we were using were for optional features introduced in early
Python 3 versions. We no longer need any of them, as we only support Python 3.6+. If we
start optionally using features from newer Pythons than 3.6, we will need more vermin
annotations.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
We no longer support Python <3.6, so we don't need to check whether Python supports SSL
verification in `spack.util.web`.
- [x] Remove a bunch of logic we needed to appease Python 2
We've stopped supporting Python 2, and contributors are noticing that our CI no longer
allows Python 2.7 comment type hints. They end up having to adapt them, but this adds
extra unrelated work to PRs.
- [x] Move to 3.6 type hints across the entire code base