Fixes #5778.
Spack uses 'gcc -dumpversion' to determine the full version of gcc.
As of gcc 7.2.0, 'gcc -dumpversion' no longer reports the full
version; 'gcc -dumpfullversion' is required instead. This PR detects
when 'gcc -dumpversion' returns a truncated version such as '7' and
in that case retrieves the full version with 'gcc -dumpfullversion'.
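A rough sketch of the detection logic (illustrative only; the function
name and exact checks are assumptions, not the PR's code):
```
import subprocess


def gcc_full_version(gcc='gcc'):
    """Return the full gcc version, even on gcc >= 7 where
    -dumpversion may report only the major version."""
    version = subprocess.check_output(
        [gcc, '-dumpversion']).decode().strip()
    if '.' not in version:
        # Truncated output such as '7': ask for the full version.
        version = subprocess.check_output(
            [gcc, '-dumpfullversion']).decode().strip()
    return version
```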
The name of the debug log written by the cc compiler wrapper was
derived from Spec.short_spec, which includes the architecture.
Somewhere along the line, Spec.format started adding spaces around
the architecture property, so the filenames started including spaces;
the cc wrapper script appears to ignore such names, so files like
spack-cc-bzip2-....in.log (which record the wrapped compiler
invocations) were not being generated. This change generates the
wrapper log file names from a different spec format string, one that
does not include spaces.
* when a user-provided spec refers to an already-installed package, packages with patches applied were causing validation errors against the variants recorded in the package's class
* skip those checks for all reserved variants (not just 'patches')
* basic functionality to install a spec from a binary cache when one is available; this spiders each cache for each package and could likely be made more efficient by caching the results of the first check
* add spec to db after installing from binary cache
* cache (in memory) spec listings retrieved from binary caches (see the sketch after this list)
* print a warning instead of failing when no mirrors are configured to retrieve pre-built Spack packages
* make automatic retrieval of pre-built spack packages from mirrors optional
* no code was using the links stored in the dictionary returned by get_specs, so this simplifies the logic to return only a set of specs
* print package prefix after installing from binary cache
* provide more information to user on install progress
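A minimal sketch of the in-memory caching idea (`get_specs`,
`mirror_urls`, and `list_fn` are illustrative stand-ins, not Spack's
actual API):
```
_cached_specs = None


def get_specs(mirror_urls, list_fn, force=False):
    """Return the set of specs available across all binary caches,
    spidering each mirror at most once per process."""
    global _cached_specs
    if _cached_specs is not None and not force:
        return _cached_specs
    specs = set()
    for url in mirror_urls:
        # list_fn stands in for whatever retrieves a mirror's spec index.
        specs.update(list_fn(url))
    _cached_specs = specs
    return _cached_specs
```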
This updates the logic for Package.extends so that if the spec
associated with the package is not concrete, it will report true if
the package *could extend* the given spec; generally speaking a
package could extend a spec as long as none of the details associated
with its extendee spec conflict with the given spec. When the spec
associated with the package is concrete, this function will only
report whether the package *does extend* the given spec. When both
the specs are concrete, the semantics are the same as before.
* When creating a tar of a package for a build cache, symlinks are
preserved (the corresponding path in the newly-created tarfile will
be a symlink rather than a copy of the file); see the sketch after
this list
* Don't add external packages to a build cache
* When installing from binary cache, don't create install prefix until
verification is complete
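A sketch of how the symlink preservation can be done when building the
tarball (illustrative, not the exact build-cache code):
```
import os
import tarfile


def tar_install_prefix(prefix, out_path):
    """Archive an install prefix, keeping symlinks as symlinks."""
    with tarfile.open(out_path, 'w:gz') as tar:
        # dereference defaults to False, so symlinks inside the prefix
        # are stored as symlink entries, not copies of their targets.
        tar.add(prefix, arcname=os.path.basename(prefix))
```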
* Fixes #5754
Previously when RepoPath.repo_for_pkg was invoked with a string,
it did not check if the string included a namespace. Any
namespace-qualified package provided as a string would not be found
(at which point the behavior was to return the highest-precedence
repository).
* handle nested namespaces for packages specified as strings in repo_for_pkg (see the sketch after this list)
* add preliminary repository tests
* add test which replicates #5754
* refactor repo tests with fixtures
* define repo_path equivalent at test-level scope for repo tests
* add tests for unknown namespace/package
* rename fixture function (no longer prefixed with 'test_')
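A rough sketch of the namespace handling (illustrative only;
`repos_by_namespace` and `ordered_repos` are hypothetical stand-ins
for RepoPath's internal structures):
```
def repo_for_pkg(name, repos_by_namespace, ordered_repos):
    """Resolve 'namespace.pkg' (possibly nested, e.g. 'a.b.pkg') or a
    bare 'pkg' to a repository."""
    namespace, _, pkg_name = name.rpartition('.')
    if namespace:
        try:
            return repos_by_namespace[namespace]
        except KeyError:
            raise KeyError('unknown namespace: ' + namespace)
    # No namespace: return the highest-precedence repository that
    # provides the package, or the first repo as a last resort.
    for repo in ordered_repos:
        if pkg_name in repo:
            return repo
    return ordered_repos[0]
```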
Internally we work against a branch named 'llnl/develop', which
mirrors the public repository's `develop` branch.
It's useful to be able to run flake8 on our changes, using
`llnl/develop` as the base branch instead of `develop`.
Internally the flake8 subcommand generates the list of changed files
using a hardcoded range of `develop...`.
This makes the base of that range a command line option, with a
default of `develop`.
That lets us do this:
```
spack flake8 --base llnl/develop
```
which uses a range of `llnl/develop...`.
'spack install' can now reinstall a spec even if it has dependents, via
the --overwrite option. This option moves the current installation to a
temporary directory. If the reinstallation is successful, the temporary
copy is removed; otherwise a rollback restores the original installation.
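A minimal sketch of the move-aside/rollback idea (not the actual
implementation):
```
import os
import shutil
import tempfile
from contextlib import contextmanager


@contextmanager
def replace_prefix_transaction(prefix):
    """Move ``prefix`` aside; delete the copy on success, restore it
    if the body (the reinstallation) raises."""
    tmp_root = tempfile.mkdtemp()
    backup = os.path.join(tmp_root, os.path.basename(prefix))
    shutil.move(prefix, backup)
    try:
        yield backup
    except Exception:
        # Roll back: remove any partial install and restore the backup.
        if os.path.exists(prefix):
            shutil.rmtree(prefix)
        shutil.move(backup, prefix)
        shutil.rmtree(tmp_root, ignore_errors=True)
        raise
    else:
        shutil.rmtree(tmp_root)
```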
- new E741 flake8 checks disallow 'l', 'O', and 'I' as variable names
- rework parts of the code that use this, to be compliant
- we could add exceptions for this, but we're trying to mostly keep up
with PEP8 and we already have more than a few exceptions.
- When you don't use wildcard imports, flake8 can find places where
an undefined name is used.
- This commit contains all the bug fixes resulting from this static check.
There are now separate flake8 configs for core vs. packages:
- core has a smaller set of flake8 exceptions
- packages allow `from spack import *` and module globals
- Allows core to take advantage of static checking for undefined names
- Allows packages to keep using Spack tricks like `from spack import *`
and dependencies setting globals for dependents.
* Update Getting Started docs to clarify that full Xcode suite is required for qt
* Better error message when only the command-line tools are installed
I'm tracking down a problem with the perl package that's been
generating this error:
```
OSError: OSError: [Errno 2] No such file or directory: '/blah/blah/blah/lib/5.24.1/x86_64-linux/Config.pm~'
```
The real problem is upstream, but it's being masked by an exception
raised in `filter_file`'s `finally` block.
In my case, `backup` is `False`.
The backup is created around line 127, the `re.sub()` call fails
(working on that), the `except` block fires and moves the backup
file back, and then the `finally` block tries to remove the now
non-existent backup file.
This change just avoids trying to remove the non-existent file.
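A condensed sketch of the flow and the fix (illustrative, not Spack's
exact `filter_file`):
```
import os
import re
import shutil


def filter_file_sketch(regex, repl, path, backup=True):
    backup_filename = path + '~'
    shutil.copy(path, backup_filename)
    try:
        with open(path) as f:
            text = f.read()
        with open(path, 'w') as f:
            f.write(re.sub(regex, repl, text))
    except BaseException:
        # On failure, restore the original file from the backup.
        shutil.move(backup_filename, path)
        raise
    finally:
        # The fix: only delete the backup if it still exists; after a
        # failed substitution it has already been moved back.
        if not backup and os.path.exists(backup_filename):
            os.remove(backup_filename)
```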
- The shell script uses arrays and hence only works with more capable shells, not the default `sh`; for clarity the shebang has been changed to `#!/bin/bash`.
- This fakes out GitFetchStrategy to try code paths for different git
versions.
- This allows us to test code paths for old versions using a newer git
version.
- Tests use a session-scoped mock stage directory so as not to interfere
with the real install.
- Every test is forced to clean up after itself with an additional check:
  we now automatically assert that no new files have been added to
  `spack.stage_path` during each test (see the fixture sketch below).
- This means that tests that fail installs now need to clean up their
stages, but in all other cases the check is useful.
- Be explicit about stage creation during the install process.
- This avoids accidental creation of stages
- e.g., during `spack install --fake`, stages were erroneously recreated
after being destroyed
- This prevents the main spack install from being cluttered by
  invocations of `spack test`.
- This uses a session-scoped stage fixture to ensure tests don't
interfere.
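An illustrative pytest sketch of the per-test check (fixture names are
assumptions, not Spack's actual test code):
```
import os

import pytest


@pytest.fixture(scope='session')
def mock_stage_path(tmp_path_factory):
    # Session-scoped mock stage directory, kept away from the real install.
    return str(tmp_path_factory.mktemp('mock-stage'))


@pytest.fixture(autouse=True)
def check_for_stage_leftovers(mock_stage_path):
    before = set(os.listdir(mock_stage_path))
    yield
    # Fail any test that added files to the stage and did not clean up.
    assert set(os.listdir(mock_stage_path)) == before
```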
- Spack's core package interface was previously overly stateful, in that
calling methods like `do_stage()` could change your working directory.
- This removes Stage's `chdir` and `chdir_to_source` methods and replaces
  their usages with `with working_dir(stage.path)` and `with
  working_dir(stage.source_path)`, respectively. These ensure the
  original working directory is preserved (a sketch of `working_dir`
  is shown below).
- This not only makes the API more usable, it makes the tests more
deterministic, as previously a test could leave the current working
directory in a bad state and cause subsequent tests to fail
mysteriously.
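A minimal sketch of the `working_dir` idea (Spack's actual helper may
differ):
```
import os
from contextlib import contextmanager


@contextmanager
def working_dir(path):
    """Temporarily change into ``path``, restoring the old cwd on exit."""
    orig = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(orig)


# e.g., instead of stage.chdir_to_source():
#     with working_dir(stage.source_path):
#         ...
```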
- Assertion would search for root through all possible paths.
- It's also really slow.
- This isn't needed anymore. We're pretty good at ensuring single-rooted
  DAGs, and this assertion has never been triggered.
- This shaves another 6 seconds off r-rminer concretization
- This reduces concretization time for r-rminer from over 1 minute to
only 16 seconds.
- OrderedDict ends up taking a *lot* of time to compare larger specs.
- An OrderedDict isn't actually needed here: duplicates cannot occur,
  and we end up sorting the contents of the OrderedDict anyway.
- This is an optimization to the way traverse_edges iterates over
successors.
- The previous version called dependencies_dict(), which involved a lot
  of redundant work (creating dicts and calling canonical_deptype).
- Spack ends up constructing compilers frequently from YAML data.
- This caches the result of parsing the compiler config (see the sketch
  below).
- The logic in compilers/__init__.py could use a bigger cleanup, but this
makes concretization much faster for now.
- on macOS, this also ensures that xcrun is called only twice, as opposed
to every time a new compiler object is constructed.
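An illustrative memoization sketch (not Spack's exact internals): parse
the compiler configuration once per scope and reuse the result.
```
_compiler_config_cache = {}


def cached_compiler_config(scope, load_fn):
    """Return the parsed compiler config for ``scope``, parsing it only
    the first time it is requested."""
    if scope not in _compiler_config_cache:
        # load_fn stands in for the YAML-parsing path described above.
        _compiler_config_cache[scope] = load_fn(scope)
    return _compiler_config_cache[scope]
```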
This adds a workflow section on how to use Spack on Docker.
It provides an example of the best practices I have collected over the
last few months and avoids the common pitfalls I ran into.
Works with MPI, CUDA, Modules, execution as root, etc.
Background: developed initially for PIConGPU.