For a long time, the docs have generated a huge, static HTML package list. It has some
disadvantages:
* It's slow to load
* It's slow to build
* It's hard to search
We now have a nice website that can tell us about Spack packages, and it's searchable so
users can easily find the one or two packages out of 7400 that they're looking for. We
should link to this instead of including a static package list page in the docs.
- [x] Replace package list link with link to packages.spack.io
- [x] Remove `package_list.html` generation from `conf.py`.
- [x] Add a new section for "Links" to the docs.
- [x] Remove docstring notes from contribution guide (we haven't generated RST
for package docstrings for a while)
- [x] Remove references to `package-list` from docs.
Currently, Windows SDK detection will only pick up SDK versions
related to the current version of Windows Spack is running on.
However, in some circumstances, we want to detect other versions
of the SDK, for example, when compiling on Windows 11 for Windows
10 to ensure an API is compatible with Win10.
* Make use of `prefix` in the Cray manifest schema (prepend it to
the relative CC etc.) - this was a Spack error. A sketch of the idea
appears after this list.
* Warn people when wrong-looking compilers are found in the manifest
(e.g. a non-existent CC path).
* Bypass compilers that we fail to add (don't allow a single bad
compiler to terminate the entire read-cray-manifest action).
* Refactor Cray manifest tests: module-level variables have been
replaced with fixtures, specifically using the `test_platform`
fixture, which allows the unit tests to run with the new
concretizer.
* Add unit test to check case where adding a compiler raises an
exception (check that this doesn't prevent processing the
rest of the manifest).
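A minimal sketch of the prefix handling, assuming a hypothetical helper and
manifest layout (neither is Spack's actual code):
```python
import os
import warnings

def resolve_compiler_paths(entry: dict) -> dict:
    """Hypothetical helper: join the manifest `prefix` onto relative
    compiler paths and warn about wrong-looking entries."""
    prefix = entry.get("prefix", "")
    resolved = {}
    for name, path in entry["executables"].items():  # e.g. {"cc": "bin/cc"}
        if not os.path.isabs(path):
            path = os.path.join(prefix, path)  # prepend prefix to relative CC
        if not os.path.exists(path):
            # warn but keep going: one bad compiler should not terminate
            # the whole read-cray-manifest action
            warnings.warn(f"compiler executable does not exist: {path}")
        resolved[name] = path
    return resolved
```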
If you run `spack install x ^y` where `y` is a pure build dep of `x`, then
uninstall `y`, and then run `spack install --overwrite x ^y`, the build
fails because `y` is not re-installed.
The same can happen when you install a develop spec, run `spack gc`,
modify sources, and install again; develop specs rely on overwrite
installs to work correctly.
This PR adds a new audit sub-command to check that detection of relevant packages
is performed correctly in a few scenarios mocking real use-cases. The data for each
package being tested is in a YAML file called `detection_test.yaml` alongside the
corresponding `package.py` file.
This is to allow encoding detection tests for compilers and other widely used tools,
in preparation for compilers as dependencies.
Modifications:
- [x] Move `spack.util.string` to `llnl.string`
- [x] Remove dependency of `llnl` on `spack.error`
- [x] Move `spack.util.path` to `llnl.path`
- [x] Move `spack.util.environment.get_host_*` to `spack.spec`
* msvc.py: don't import distutils
The import was introduced in #27021 and makes Spack forward-incompatible
with newer Python versions; the module was already deprecated at the time
of that PR.
* update spack package
Fixes #39622
Add a timeout to compiler detection and allow Spack to proceed when
this timeout occurs.
In all cases, the timeout is 120 seconds: it is assumed that any compiler
invocation we perform for verification purposes will complete within that
amount of time.
Also refine executables that are tested as being possible MSVC
instances, and limit where we try to detect MSVC. In more detail:
* Compiler detection should timeout after a certain period of time.
Because compiler detection executes arbitrary executables on the
system, we could encounter a program that just hangs, or even a
compiler that hangs on a license key or similar. A timeout
prevents this from hanging Spack.
* Prevents things like `cl-.*` from being detected as potential MSVC
installs. `cl` is always just `cl` in all cases that Spack supports.
Change the MSVC class to indicate this.
* Prevent compilers unsupported on certain platforms from being
detected there (i.e. don't look for MSVC on systems other than
Windows).
The first point alone is sufficient to address #39622, but the
next two reduce the likelihood of timeouts (which is useful since
those slow down the user even if they are survivable).
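A minimal sketch of the timeout idea, using a hypothetical helper rather
than Spack's actual detection code:
```python
import subprocess

DETECTION_TIMEOUT = 120  # seconds; any healthy compiler should answer by then

def try_version_query(exe: str):
    """Run `exe --version` and return its output, or None if the
    executable hangs (e.g. on a license prompt) or fails to run."""
    try:
        result = subprocess.run(
            [exe, "--version"],
            capture_output=True,
            text=True,
            timeout=DETECTION_TIMEOUT,
        )
    except (subprocess.TimeoutExpired, OSError):
        return None  # skip this candidate instead of hanging all of Spack
    return result.stdout
```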
Put back normalization of the "virtuals" input as a sorted tuple.
Without this we might get edges that differ only in the order of their
virtuals, or that contain lists, which are not hashable.
Add unit-tests to prevent regressions.
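A minimal sketch of why the normalization matters (plain Python, not
Spack's edge classes):
```python
# Sorted tuples make the virtuals order-insensitive and hashable.
edge_a = tuple(sorted(["mpi", "lapack"]))
edge_b = tuple(sorted(["lapack", "mpi"]))

assert edge_a == edge_b              # same virtuals => same edge key
assert hash(edge_a) == hash(edge_b)  # usable in sets and as dict keys
# With raw lists, ["mpi", "lapack"] != ["lapack", "mpi"], and hash() raises.
```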
By default, do not let deprecated versions enter the solve.
Previously you could concretize to something deprecated, only to get errors on install.
With this commit, we get errors on concretization, so the issue is caught earlier.
PythonExtension is a base class for PythonPackage, and
is meant to be used for any package that is a Python
extension but is not built using "python_pip".
The "update_external_dependency" method in the base
class calls another method that is defined in the derived
class.
Push "get_external_python_for_prefix" up in the hierarchy
to make method calls consistent.
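A minimal sketch of the hierarchy issue (class and method names mirror the
description above; the bodies are illustrative):
```python
class PythonExtension:
    def update_external_dependency(self, spec):
        # This call is now always resolvable, because the method it needs
        # lives on the same class instead of only on the derived one.
        python = self.get_external_python_for_prefix()
        ...

    def get_external_python_for_prefix(self):
        ...  # previously defined only on the derived class

class PythonPackage(PythonExtension):
    """A Python extension that is built using "python_pip"."""
```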
This commit replaces the internal representation of deptypes with `int`, which is more compact
and faster to operate on.
Double loops like:
```python
any(x in ys for x in xs)
```
are replaced by constant-time operations like `bool(xs & ys)`, where `xs` and `ys` are dependency type masks.
Global constants are exposed for convenience in `spack.deptypes`.
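A minimal sketch of the bitmask idea (the constant names are illustrative,
not necessarily those in `spack.deptypes`):
```python
# Each dependency type is one bit, so a set of types is a plain int.
BUILD, LINK, RUN, TEST = 1, 2, 4, 8
ALL = BUILD | LINK | RUN | TEST

xs = BUILD | LINK  # deptypes of one edge
ys = LINK | RUN    # deptypes of another

assert bool(xs & ys)          # replaces any(x in ys for x in xs)
assert not bool(BUILD & RUN)  # disjoint masks share no types
```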
Currently, the concretizer emits facts for all versions known to Spack, including deprecated versions, and has a specific optimization objective to minimize their use.
This commit simplifies how deprecated versions are handled: they are considered possible for a spec only if they appear in a spec literal, or if `config:deprecated:true` is set directly or through the `--deprecated` flag. The optimization objective has also been removed, in favor of just ordering versions and having deprecated ones last.
This results in:
a) no delayed errors on install: concretization fails when deprecated versions would be the only option, which is particularly relevant for CI, where it's better to get errors early;
b) a slight concretization speed-up due to fewer facts;
c) a simpler logic program.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
NMake makefiles are still called makefiles. The corresponding builder
variable was called "nmakefile", which is a bit unintuitive and led
to a few easy-to-make, hard-to-notice mistakes when creating packages.
This commit renames the builder property to "makefile".
Extensionless archives requiring two-stage decompression and extraction
require intermediate archives to be renamed after decompression/extraction
to prevent collisions. Prior behavior attempted to clean up the intermediate
archive under its original name; this PR ensures the renamed folder is
cleaned up instead (a sketch follows below).
Co-authored-by: Dan Lipsa <dan.lipsa@khq.kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
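A minimal sketch of the rename-then-cleanup idea, assuming a gzipped tarball
and hypothetical paths (this is not Spack's actual fetch code):
```python
import gzip
import os
import shutil
import tarfile

def two_stage_extract(archive: str):
    # Stage 1: decompress the extensionless archive into an intermediate
    # file whose name cannot collide with the original.
    intermediate = archive + ".decomp"
    with gzip.open(archive, "rb") as src, open(intermediate, "wb") as dst:
        shutil.copyfileobj(src, dst)
    # Stage 2: extract the intermediate tarball, then remove the *renamed*
    # intermediate rather than looking for the original name.
    with tarfile.open(intermediate) as tar:
        tar.extractall()
    os.remove(intermediate)
```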
* Perform external spec detection with multiple workers
The logic to perform external spec detection has been refactored
into classes. These classes use the GoF "template method" pattern to account
for the small differences between searching for "executables" and
for "libraries", while unifying the larger part of the algorithm.
A `ProcessPoolExecutor` is used to parallelize the work.
* Speed-up external find by tagging detectable packages automatically
Querying packages by tag is much faster than inspecting the repository,
since tags are cached. This commit adds a "detectable" tag to every
package that implements the detection protocol, and external detection
uses it to search for packages.
* Pass package names instead of package classes to workers
The slowest part of the search is importing the Python modules
associated with candidate packages, and the import was done serially
before the work was distributed to the pool of executors.
This commit pushes the import of the Python module into the job
performed by the workers, and passes just the names of the packages
to the executors.
In this way imports can be done in parallel (see the sketch after
this list).
* Rework unit-tests for Windows
Some unit tests were doing a full end-to-end run of a command
just to check input handling. Make those tests more
focused by stressing a specific function directly.
Mark 2 tests as xfail on Windows; they will be fixed
by a PR in the queue. The tests are failing because we
monkeypatch internals in the parent process, but the
monkeypatching is not done in the "spawned" child
process.
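A minimal sketch of the parallel-import idea (the module path, detection
entry point, and package names below are illustrative, not Spack's actual
layout):
```python
import concurrent.futures
import importlib

def detect_in_worker(pkg_name: str):
    # The expensive import happens here, in the child process, so the
    # pool performs imports in parallel instead of serially in the parent.
    module = importlib.import_module(f"packages.{pkg_name}")  # assumed path
    return pkg_name, module.detect()  # assumed detection entry point

if __name__ == "__main__":
    names = ["cmake", "gcc", "perl"]  # names are cheap to pickle; classes are not
    with concurrent.futures.ProcessPoolExecutor() as pool:
        for name, result in pool.map(detect_in_worker, names):
            print(name, result)
```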
* Write timing information for installs from cache
* CI: aggregate and upload install_times.json to artifacts
* CI: Don't change root directory for artifact generation
* Flat event-based timer variation
The event-based timer allows easily starting and stopping timers without
wiping sub-timer data, and it requires less branching logic when
tracking time.
The JSON output is non-hierarchical in this version, and hierarchy is
less rigidly enforced between starting and stopping (a sketch of the
idea follows this list).
* Add and write timers for top level install
* Update completion
* remove unused subtimer api
* Fix unit tests
* Suppress timing summary option
* Save timers summaries to user_data artifacts
* Remove completion from fish
* Move spack python to script section
* Write timer correctly for non-cache installs
* Re-add hash to timer file
* Fish completion updates
* Fix null timer yield value
* fix type hints
* Remove timer-summary-file option
* Add "." in front of non-package timer name
---------
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
This is a fixed version of b72a268.
* That commit would discard the final key component (so if you set
"config:install_tree:root", it would discard "root" and just set the
install tree).
* When setting key:"value", with the quotes, that commit would
discard the quotes, which would confuse the system when adding a
value like "{example}" (the "{" character indicates a dictionary).
This commit retains the quotes (see the sketch below).
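A minimal sketch of why the quotes matter, using plain PyYAML rather than
Spack's config code:
```python
import yaml

# Unquoted, "{example}" is YAML flow-mapping syntax and parses as a dict;
# quoted, it stays the literal string the user asked for.
assert yaml.safe_load("{example}") == {"example": None}
assert yaml.safe_load('"{example}"') == "{example}"
```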