'spack install' can now reinstall a spec even if it has dependents, via
the --overwrite option. This option moves the current installation to a
temporary directory. If the reinstallation succeeds, the temporary
directory is removed; otherwise a rollback is performed.
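A minimal sketch of the overwrite flow, assuming a hypothetical `do_install`
callable and a plain prefix path (the real logic lives in Spack's install
machinery):

```python
import os
import shutil
import tempfile

def overwrite_install(prefix, do_install):
    """Reinstall into `prefix`, rolling the old installation back on failure."""
    # Move the existing installation aside into a temporary directory.
    backup_dir = tempfile.mkdtemp(dir=os.path.dirname(prefix))
    saved = os.path.join(backup_dir, os.path.basename(prefix))
    shutil.move(prefix, saved)
    try:
        do_install()                                # attempt the reinstallation
    except Exception:
        shutil.rmtree(prefix, ignore_errors=True)   # discard any partial install
        shutil.move(saved, prefix)                  # rollback
        raise
    else:
        shutil.rmtree(backup_dir)                   # success: drop the saved copy
```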
- new E741 flake8 checks disallow 'l', 'O', and 'I' as variable names
- rework parts of the code that use these names, to be compliant
- we could add exceptions for this, but we're trying to mostly keep up
with PEP8 and we already have more than a few exceptions.
- When you don't use wildcards, flake8 will find places where you used an
undefined name.
- This commit has all the bugfixes resulting from this static check.
There are now separate flake8 configs for core vs. packages:
- core has a smaller set of flake8 exceptions
- packages allow `from spack import *` and module globals
- Allows core to take advantage of static checking for undefined names
- Allows packages to keep using Spack tricks like `from spack import *`
and dependencies setting globals for dependents.
* Update Getting Started docs to clarify that full Xcode suite is required for qt
* Better error message when only the command-line tools are installed
I'm tracking down a problem with the perl package that's been
generating this error:
```
OSError: OSError: [Errno 2] No such file or directory: '/blah/blah/blah/lib/5.24.1/x86_64-linux/Config.pm~'
```
The real problem is upstream, but it's being masked by an exception
raised in `filter_file`'s `finally` block.
In my case, `backup` is `False`.
The backup is created around line 127, the `re.sub()` call fails
(working on that), the `except` block fires and moves the backup file
back, and then the `finally` block tries to remove the now non-existent
backup file.
This change just avoids trying to remove the non-existent file.
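A sketch of the guarded cleanup (not the real `filter_file`, just the
relevant try/except/finally shape):

```python
import os
import re
import shutil

def filter_file_sketch(regex, repl, filename, backup=False):
    backup_filename = filename + '~'
    shutil.copy(filename, backup_filename)
    try:
        with open(backup_filename) as f:
            text = f.read()
        with open(filename, 'w') as f:
            f.write(re.sub(regex, repl, text))
    except BaseException:
        # Restore the original file and re-raise the real error.
        shutil.move(backup_filename, filename)
        raise
    finally:
        # Guard the removal: if the except block already moved the backup
        # back, removing it here would mask the original exception.
        if not backup and os.path.exists(backup_filename):
            os.remove(backup_filename)
```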
- The shell script uses arrays and hence only works in more capable shells, not the default `sh`. For clarity, the shebang `#!/bin/bash` has been used instead.
- This fakes out GitFetchStrategy to try code paths for different git
versions.
- This allows us to test code paths for old versions using a newer git
version.
- Tests use a session-scoped mock stage directory so as not to interfere
with the real install.
- Every test is forced to clean up after itself with an additional check.
We now automatically assert that no new files have been added to
`spack.stage_path` during each test.
- This means that tests that fail installs now need to clean up their
stages, but in all other cases the check is useful.
- Be explicit about stage creation during the install process.
- This avoids accidental creation of stages
- e.g., during `spack install --fake`, stages were erroneously recreated
after being destroyed
- This prevents the main spack install from being cluttered by
  invocations of `spack test`.
- This uses a session-scoped stage fixture to ensure tests don't
interfere.
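A sketch of what such fixtures can look like (fixture names here are
illustrative; the real ones live in Spack's test `conftest.py`):

```python
import os
import pytest
import spack   # the text above refers to `spack.stage_path`

@pytest.fixture(scope='session')
def mock_stage(tmpdir_factory):
    """Point spack.stage_path at a throwaway directory for the whole session."""
    new_stage = str(tmpdir_factory.mktemp('mock_stage'))
    old_stage, spack.stage_path = spack.stage_path, new_stage
    yield new_stage
    spack.stage_path = old_stage

@pytest.fixture(autouse=True)
def check_stage_is_clean(mock_stage):
    """Fail any test that leaves new files behind in the stage directory."""
    before = set(os.listdir(mock_stage))
    yield
    after = set(os.listdir(mock_stage))
    assert after == before, 'test leaked files into the stage directory'
```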
- Spack's core package interface was previously overly stateful, in that
calling methods like `do_stage()` could change your working directory.
- This removes Stage's `chdir` and `chdir_to_source` methods and replaces
their usages with `with working_dir(stage.path)` and `with
working_dir(stage.source_path)`, respectively. These ensure the
original working directory is preserved.
- This not only makes the API more usable, it makes the tests more
deterministic, as previously a test could leave the current working
directory in a bad state and cause subsequent tests to fail
mysteriously.
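Conceptually, `working_dir` is a small context manager around `os.chdir`;
a sketch roughly equivalent to the helper in `llnl.util.filesystem`:

```python
import os
from contextlib import contextmanager

@contextmanager
def working_dir(dirname):
    """Temporarily chdir into `dirname`, restoring the old cwd afterwards."""
    orig_dir = os.getcwd()
    os.chdir(dirname)
    try:
        yield
    finally:
        os.chdir(orig_dir)   # restored even if the body raises

# Usage: nothing outside the block sees the directory change.
# with working_dir(stage.source_path):
#     ... build steps ...
```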
- The assertion would search for the root through all possible paths.
- It's also really slow.
- This isn't needed anymore. We're pretty good at ensuring single-rooted
DAGs, and this assertion has never been thrown.
- This shaves another 6 seconds off r-rminer concretization
- This reduces concretization time for r-rminer from over 1 minute to
only 16 seconds.
- OrderedDict ends up taking a *lot* of time to compare larger specs.
- An OrderedDict isn't actually needed here. It's actually not possible
to find duplicates, and we end up sorting the contents of the
OrderedDict anyway.
- This is an optimization to the way traverse_edges iterates over
successors.
- The previous version called dependencies_dict(), which involved a lot
  of redundant work (creating dicts and calling canonical_deptype).
- Spack ends up constructing compilers frequently from YAML data.
- This caches the result of parsing the compiler config
- The logic in compilers/__init__.py could use a bigger cleanup, but this
makes concretization much faster for now.
- on macOS, this also ensures that xcrun is called only twice, as opposed
to every time a new compiler object is constructed.
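A sketch of the caching idea; `_build_compiler` and the cache key are
illustrative, not Spack's actual internals:

```python
_compiler_cache = {}

def compiler_from_config_entry(entry):
    """Build a compiler object from a compilers.yaml dict, caching repeats."""
    # Dicts aren't hashable; key on a stable serialization of the entry.
    key = str(sorted(entry.items()))
    compiler = _compiler_cache.get(key)
    if compiler is None:
        compiler = _build_compiler(entry)   # hypothetical construction helper
        _compiler_cache[key] = compiler
    return compiler
```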
This adds a workflow section on how to use Spack with Docker.
It provides an example of the best practices I have collected over the
last few months and avoids the common pitfalls I ran into.
Works with MPI, CUDA, Modules, execution as root, etc.
Background: developed initially for PIConGPU.
* Make --trusted default when running spack gpg list
Currently, running `spack gpg list` with no arguments returns nothing: you must supply either the `--trusted` or the `--signing` option. The idea here is to return some initial data to the user when the command is run; the alternative would be to return an error telling the user to select one of the two options.
* Add an extra test case for the empty list command
Fixes the issue with code coverage
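A sketch of the defaulting logic (argument and helper names are
illustrative):

```python
def gpg_list(parser, args):
    # With no flags given, behave as if --trusted had been passed.
    if not args.trusted and not args.signing:
        args.trusted = True
    if args.trusted:
        list_keys(trusted=True)    # hypothetical helper
    if args.signing:
        list_keys(signing=True)    # hypothetical helper
```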
This isn't a rework of the concretizer but it speeds things up a LOT.
The main culprits were:
1. Variant code, `provider_index`, and `concretize.py` were calling
`spec.package` when they could use `spec.package_class`
- `spec.package` looks up a package instance by `Spec`, which requires a
(fast-ish but not that fast) DAG compare.
- `spec.package_class` just looks up the package's class by name, and you
should use this when all you need is metadata (most of the time).
- not really clear that the current way packages are looked up is
necessary -- we can consider refactoring that in the future.
2. `Repository.repo_for_pkg` parses a `str` argument into a `Spec` when
called with one, via `@_autospec`, but this is not needed.
- Add some faster code to handle strings directly and avoid parsing
This speeds up concretization 3-9x in my limited tests. Still not super
fast but much more bearable:
Before:
- `spack spec xsdk` took 33.6s
- `spack spec dealii` took 1m39s
After:
- `spack spec xsdk` takes 6.8s
- `spack spec dealii` takes 10.8s
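A sketch of the distinction; the call sites are made up, but the rule of
thumb is real: use `package_class` when all you need is class-level
metadata:

```python
def uses_mpi(spec):
    # Fast path: look up the Package *class* by name; no DAG comparison.
    pkg_cls = spec.package_class
    return 'mpi' in pkg_cls.variants          # class-level metadata only

def install_prefix(spec):
    # Slow path: a Package *instance* is keyed by the whole Spec and needs
    # a (relatively expensive) spec comparison to look up.
    return spec.package.prefix                # instance-level, spec-specific
```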
* Do not call setup_package for fake installs
- `setup_package` could fail if `setup_dependent_environment` or other
  routines expected to use executables from dependencies
- petsc and boost try to get python config variables in
  `setup_dependent_package`; this would cause them not to be
  fake-installable
* Remove vestigial deptype_query argument to Spec.traverse()
- The `deptype_query` argument isn't used anymore -- it's only passed
around and causes confusion when calling traverse.
- Get rid of it and just keep the `deptypes` argument
* Don't print redundant messages when installing dependencies
- `do_install()` was originally depth-first recursive, and printed "<pkg>
already installed in ..." multiple times for packages as recursive
calls encountered them.
- For much cleaner output, use spec.traverse(order='post') to install
dependencies instead
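A sketch of the post-order approach, with `_install_one` standing in for
the per-package install logic:

```python
def do_install(spec, **kwargs):
    """Install dependencies first, then the package itself, without recursion."""
    for dep in spec.traverse(order='post', root=False):
        if dep.package.installed:
            continue                     # already installed; say so at most once
        _install_one(dep, **kwargs)
    _install_one(spec, **kwargs)
```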
* Add package for aspell and assorted dictionaries
Add a package definition for aspell.
Add a handful of dictionaries to convince myself that the support for
a bunch of dictionaries works.
* Flake8 cleanup
* Use six's version of urlparse
`urlparse` is not python3 friendly. This works around it (stolen from
`.../cmd/md5.py`).
* Fix incorrect trimming regexp
* Clean up dictionary build
- more parsimonious use of `which` (`make()` is already available)
- use `sh` instead of `bash`
* Use a helper method to generate info for variants
I figured out my issues with static methods. I *think* that this is
pythonic.
* Convert aspell to an extendable package
Convert aspell to be extendable and rework the dictionaries to be
extensions.
As it stands, there's a great deal of cut and paste in the
dictionaries; I'll abstract that out next.
The {de,}activate methods copy a great deal of code out of
package.py. Perhaps there's a better way....
* Create AspellDictPackage and use it for the dictionaries
Reduce the repeated code, pull it into a base class.
I'm confused about why 'from spack import *' wasn't more useful in the
base class.
* Oops, -de & -es should be AspellDictPackages too
* Typo: pakcage -> package
* Address some commentary
* Update copyright dates, 2016->2017
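A rough sketch of the shape of the base class and one dictionary built
on it; method bodies, URLs, and versions are illustrative placeholders,
not the final package code:

```python
from spack import *   # packages rely on Spack's DSL globals (Package, extends, which, make, ...)


class AspellDictPackage(Package):
    """Shared install logic for aspell dictionaries."""
    extends('aspell')

    def install(self, spec, prefix):
        # Dictionaries configure against the aspell installation they extend.
        aspell = spec['aspell'].prefix.bin.aspell
        sh = which('sh')
        sh('./configure', '--vars', 'ASPELL={0}'.format(aspell))
        make()
        make('install')


class AspellDictEn(AspellDictPackage):
    """English dictionary for aspell."""
    homepage = 'http://aspell.net/'
    # url = 'https://ftp.gnu.org/gnu/aspell/dict/en/aspell6-en-<version>.tar.bz2'
    # version('<version>', '<checksum>')   # placeholders, not real values
```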
* spec and spec.package.spec can refer to different objects in the
database. When these two instances of spec differ in terms of
the value of the 'concrete' property, Spec._mark_concrete can
fail when checking Spec.package.installed (which requires
package.spec to be concrete). This skips the check for
spec.package.installed when _mark_concrete is called with
'True' (in other words, when the database is marking all specs
as being concrete).
* add test to confirm this fixes #5293
* edits to address issues where spack concretization attempts to set properties on already-installed specs
* most added checks only need to check if the spec is concrete; they don't also need to check if the package is installed
* add test to ensure that patches are not applied to an installed spec
* add test to ensure that an error is detected when a dependent requests a dependency constraint which conflicts with a requested installed dependency
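A sketch of the guarded marking logic, simplified from
`Spec._mark_concrete`:

```python
def _mark_concrete(self, value=True):
    """Mark this spec and its dependencies as (non-)concrete."""
    for s in self.traverse():
        # Only consult package.installed when un-marking (value=False);
        # when the database marks everything concrete, package.spec may be
        # a different, not-yet-concrete object and the check would fail.
        if (not value) and s.concrete and s.package.installed:
            continue
        s._normal = value
        s._concrete = value
```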
Fixes #5455
All methods within setup_package use an EnvironmentModifications object
to control the environment. Those modifications are applied at the end
of setup_package. Module loads for the build environment need to be
done after the rest of the environment modifications are applied, as
otherwise Spack may unset variables set by those modules (for example
LD_LIBRARY_PATH).
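A sketch of the ordering, with illustrative names; the key point is that
module loads happen only after `apply_modifications()`:

```python
from spack.environment import EnvironmentModifications  # module path may vary by Spack version

def setup_package(pkg, dirty=False):
    env = EnvironmentModifications()

    # ... compiler variables, build variables, and the packages'
    # setup_dependent_environment() hooks all append to `env` here ...

    env.apply_modifications()

    # Load external modules *after* applying the modifications above, so
    # Spack doesn't unset variables the modules just set (e.g. LD_LIBRARY_PATH).
    for dep in pkg.spec.traverse(root=False):
        if getattr(dep, 'external_module', None):
            load_module(dep.external_module)   # illustrative helper
```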
closes #2884, closes #4684
In #1848 we decided to use `Spec.format` to expand certain tokens in
the module file naming scheme or in the environment variable name.
Not all the tokens that are allowed in `Spec.format` make sense in
module file generation. This PR restricts the set of tokens that can
be used, and adds tests to check that the intended behavior is respected.
Additionally, the names of environment variables set/modified by module
files were, up to now, always uppercase. There are packages, though, that
require case-sensitive variable names to honor certain behaviors (e.g.
OpenMPI). This PR restricts the uppercase transformation in variable
names to `Spec.format` tokens.
fixes #5587
In trying to preserve patch ordering, #5476 made equality inconsistent
for the added 'patches' variant. This commit maintains the original
weak ordering of patch applications while preserving consistency of
comparisons. The ordering DOES NOT enter the hashing mechanism. It's
supposed to be a hotfix while we think of a cleaner and more permanent
solution.
- This steals the magic regular expressions that CTest uses to parse log
  files and adds them to Spack. See here:
  https://github.com/Kitware/CMake/blob/master/Source/CTest/cmCTestBuildHandler.cxx
  These are BSD licensed, so the port is in `external/ctest_log_parser.py`
- We currently use these to do better filtering of errors from build
  output. The plan is to use them to generate good CDash output.
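A sketch of the kind of filtering this enables; the regexes below are
simplified stand-ins for the full CTest pattern list in the ported
module:

```python
import re

# Simplified stand-ins for the CTest error patterns.
ERROR_PATTERNS = [re.compile(r) for r in (
    r'^[Ee]rror:',
    r'\berror\b.*:',
    r'Segmentation fault',
)]

def find_build_errors(log_filename):
    """Return (line_number, line) pairs that look like build errors."""
    matches = []
    with open(log_filename, errors='replace') as f:
        for lineno, line in enumerate(f, start=1):
            if any(p.search(line) for p in ERROR_PATTERNS):
                matches.append((lineno, line.rstrip()))
    return matches
```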