So far the encoding has a single ID per package, i.e. all the
facts will be node(0, Package). This prepares the ground for
extending the logic so that multiple nodes from the same package
can appear in a DAG.
Each fact that is deduced from package rules, and starts with
a bare package atom, is transformed into a "facts" atom containing
a nested function.
For instance:
version_declared(Package, ...) -> facts(Package, version_declared(...))
This allows us to clearly mark facts that represent a rule on the package,
and will help later when we need to distinguish cases where the atom
"Package" refers to package rules rather than to a node in the DAG.
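A minimal Python sketch of the idea, using a hypothetical `AspFn` stand-in rather than Spack's actual solver classes:

```python
# Hypothetical stand-in for an ASP atom builder; not Spack's actual solver code.
class AspFn:
    def __init__(self, name, *args):
        self.name = name
        self.args = args

    def __str__(self):
        return "{}({})".format(self.name, ", ".join(str(a) for a in self.args))


def pkg_fact(package, fact):
    """Wrap a package-level fact as facts(Package, original_fact(...))."""
    return AspFn("facts", package, fact)


# Before the change: version_declared("zlib", "1.2.13", 0)
# After the change:  facts("zlib", version_declared("1.2.13", 0))
print(pkg_fact('"zlib"', AspFn("version_declared", '"1.2.13"', 0)))
```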
Windows executable paths can contain spaces, which led to errors when
constructing Executable objects: the parser tried to handle the case where
users provide an executable along with one or more space-delimited arguments.
* Executable now assumes that it is constructed with a single string argument
  representing the path to the executable, with no additional arguments (see
  the usage sketch below).
* Invocations of Executable.__init__ that depended on the old behavior have
  been updated (this includes the core, tests, and one instance in the builtin
  package repository).
* The error handling for failed invocations of Executable.__call__ now
includes a check for whether the executable name/path contains a
space, to help users debug cases where they (now incorrectly)
concatenate the path and the arguments.
* The module-level skip for tests in `cmd.install` on Windows is removed.
A few classes of errors still persist:
* CDash tests are not working on Windows
* Tests for failed installs are also not working (this will require
investigating bugs in output redirection)
* Environments are not yet supported on Windows
Overall, though, this enables testing of most basic uses of "spack install".
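As a usage sketch of the new Executable behavior (the Windows path below is made up, not from the change itself):

```python
# Sketch of the intended usage after this change; the Windows path is made up.
from spack.util.executable import Executable

git = Executable(r"C:\Program Files\Git\cmd\git.exe")  # path only, may contain spaces
git("--version")  # arguments are passed at call time, not in the constructor

# The old pattern of packing arguments into the constructor string, e.g.
# Executable("git --version"), is no longer supported; when such a call fails,
# the error message now points out that the name/path contains a space.
```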
* Git repositories cached for version lookups used a layout that mimicked the
  URL as closely as possible. This was useful for listing the cache directory
  and understanding at a glance what was present, but the paths were overly
  long on Windows. On all systems, the layout is now a single directory named
  after a shortened hash of the Git URL, which ensures a consistent, acceptable
  length and avoids special characters (see the sketch after this list).
* In particular, this removes util.url.parse_git_url and its associated
test, which were used exclusively for generating the git cache layout
* Bootstrapping is now enabled for unit tests on Windows
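A minimal sketch of the new naming scheme; the hash function and length here are illustrative, not necessarily what Spack uses:

```python
# Illustrative only: derive a short, filesystem-safe cache directory name from a
# Git URL by hashing it, instead of mirroring the URL as a directory tree.
import hashlib


def git_cache_dirname(git_url: str, length: int = 7) -> str:
    return hashlib.sha256(git_url.encode("utf-8")).hexdigest()[:length]


print(git_cache_dirname("https://github.com/spack/spack.git"))
```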
#36770 added git as a dependency of `setuptools-scm`, which in turn makes `git` a
transitive dependency of our bootstrapping process.
Since `git` may take a long time to build and is found on most systems, we now try
to detect it as an external.
This makes the name of the global variable that holds the repository currently
in use uppercase. Doing so is advised by pylint's naming rules and makes it
easier to spot where the global is used.
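For illustration, this is the convention pylint's invalid-name (C0103) check expects; the variable name here is just an example:

```python
# Module-level globals that act as constants/singletons are expected to be
# UPPER_CASE, which also makes them easy to grep for.
repo_path = None  # flagged by pylint at module scope (invalid-name)
REPO_PATH = None  # conforms to the naming convention
```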
* Prefix conflict messages with package name
This patch prefixes all conflict messages with the package name, which alleviates
what was otherwise a very manual process (a sketch of the idea follows below).
Note that this patch is a one-line change but has a fairly outsized impact.
* Same for the `requires` directive
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
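A minimal sketch of the idea (not the actual Spack code): the package name is prepended in one place, so individual messages no longer need to repeat it.

```python
# Hypothetical helper: prefix a conflict/requirement message with the package name.
def prefixed_msg(pkg_name: str, msg: str) -> str:
    return "{}: {}".format(pkg_name, msg)


print(prefixed_msg("mgard", "CUDA support requires a newer release"))
# -> mgard: CUDA support requires a newer release
```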
* Ensure that all variants have a description (see the example after this list)
* Update mock packages too
* Fix test invocations
* Black fix
* mgard: update variant descriptions
* flake8 fix
* black fix
* Add to audit tests
* Relax type hints
* Older Python support
* Undo all changes to mock packages
* Flake8 fix
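An illustrative `package.py` fragment showing what the audit now expects; the package and its variants are made up:

```python
# Made-up package: every variant() call carries a description, which the new
# audit check enforces.
from spack.package import *


class Example(Package):
    """Example package with described variants."""

    homepage = "https://example.com"
    url = "https://example.com/example-1.0.tar.gz"

    variant("shared", default=True, description="Build shared libraries")
    variant("cuda", default=False, description="Enable CUDA-accelerated kernels")
```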
This PR extracts two responsibilities from the `Database` class:
1. Managing locks for prefixes during an installation
2. Marking installation failures
and pushes them into their own classes (`SpecLocker` and `FailureMarker`). These responsibilities are also pushed up into the `Store`, leaving the `Database` with only the duty of managing `index.json` files.
`SpecLocker` instances no longer share a global list of locks; locks are per instance. Their identifier is simply `(dag hash, package name)` rather than the spec prefix path, to avoid circular dependencies across Store / Database / Spec.
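A minimal sketch of that keying scheme, using an in-memory lock map rather than Spack's actual file-based locks:

```python
# Not the real implementation: a sketch of per-instance locks keyed by
# (dag hash, package name) instead of by the spec's install prefix path.
import threading
from typing import Dict, Tuple


class SpecLockerSketch:
    def __init__(self) -> None:
        self._locks: Dict[Tuple[str, str], threading.Lock] = {}

    def lock(self, dag_hash: str, pkg_name: str) -> threading.Lock:
        return self._locks.setdefault((dag_hash, pkg_name), threading.Lock())


locker = SpecLockerSketch()
with locker.lock("abcdef", "zlib"):
    pass  # work on zlib's prefix while holding its lock
```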
FastPackageChecker.modified_since should use a default number < 0
When the repo cache does not exist, Spack uses mtime 0. This causes the repo
cache never to be regenerated when the repo itself has mtime 0.
Some popular package managers, such as spack, normalize mtimes to 0 for
reproducible tarballs. So when installing spack with spack from a buildcache,
the repo cache is never generated.
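A sketch of the failure mode and the fix; the names are illustrative, not Spack's exact code:

```python
import os

# With a default of 0, a repo whose files are mtime-normalized to 0 never looks
# newer than the missing cache, so the cache is never regenerated. A default
# strictly below zero makes any real mtime, including 0, count as modified.
DEFAULT_MTIME = -1  # previously 0


def modified_since(path: str, cached_mtime: float = DEFAULT_MTIME) -> bool:
    return os.stat(path).st_mtime > cached_mtime
```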
Also add some typehints
* Inform mypy that tty.die is noreturn
* avoid temporary allocation in env
* update spack buildcache save-specfile
* fix spack buildcache check/download/get-buildcache-name
- ensure that required args and mutually exclusive ones are marked as such in
  argparse, for better error messages (see the sketch after this list)
- deprecate --spec-file everywhere
- use disambiguate for better error messages
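An illustrative argparse pattern for the first point above; this is not the exact subcommand code:

```python
import argparse

# Marking arguments as required and mutually exclusive lets argparse itself emit
# a clear error message, instead of the command failing later in a confusing way.
parser = argparse.ArgumentParser(prog="spack buildcache check")
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("--spec", help="spec to check against the build cache")
group.add_argument("--spec-file", help="(deprecated) read the spec from a JSON/YAML file")

args = parser.parse_args(["--spec", "zlib"])
```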
* CI: Refactor ci reproducer
* Autostart container
* Reproducer paths match CI paths
* Generate start scripts for docker and reproducer
* CI: Add interactive and gpg options to reproduce-build
* The interactive option determines whether the docker container persists
  after the reproduction run.
* The GPG path/URL option allows downloading the GPG keys needed to validate
  binary cache downloads. This is important when running the reproducer for
  protected CI jobs.
* Add exit_on_failure option to CI scripts
* CI: Add runtime option for reproducer
- Run `mkdirp` on `spec.prefix`
- Extract directly into `spec.prefix`
1. No need for a `$store/tmp.xxx` directory where we extract the tarball, pray that it has a single subdir `<name>-<version>-<hash>`, and then `rm -rf` the package prefix followed by a `mv`.
2. No need to clean up this temp dir in `spack clean`.
3. Instead, figure out the package directory prefix from the tarball contents, and strip the tarinfo entries accordingly (similar to `tar --strip-components`, but stricter; see the sketch at the end of this section).
- Set package dir permissions
- Don't error during error handling when files cannot be removed
- No need to "enrich" spec.json with this tarball-toplevel-path
After this PR, we can in fact tarball packages relative to `/` instead of `spec.prefix/..`, which makes it possible to use Spack tarballs as container layers, where relocation is impossible, and rootfs tarballs are expected.
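A minimal sketch of the extraction step described in point 3; this is not Spack's buildcache code, just the general technique under the stated assumptions:

```python
import os
import tarfile


def extract_into_prefix(tarball: str, prefix: str) -> None:
    """Extract a buildcache tarball directly into `prefix`, stripping the single
    top-level directory (like `tar --strip-components=1`, but stricter)."""
    with tarfile.open(tarball) as tar:
        members = tar.getmembers()
        top_levels = {m.name.split("/", 1)[0] for m in members}
        if len(top_levels) != 1:
            raise ValueError("expected one top-level dir, got: {}".format(top_levels))
        (top,) = top_levels

        stripped = []
        for m in members:
            rel = os.path.relpath(m.name, top)
            if rel == ".":
                continue  # skip the top-level directory entry itself
            m.name = rel
            stripped.append(m)

        os.makedirs(prefix, exist_ok=True)  # the PR also runs mkdirp on spec.prefix
        tar.extractall(path=prefix, members=stripped)
```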