- This moves var/spack/packages to var/spack/repos/builtin/packages.
- Packages that did not exist in the source branch, or were changed in
develop, were moved into var/spack/repos/builtin/packages as part of
the integration.
Conflicts:
    lib/spack/spack/test/unit_install.py
    var/spack/repos/builtin/packages/clang/package.py
- mirrors.yaml now uses dict order for precedence, instead of lists of
dicts.
- spack.cmd now specifies default scope for add/remove and for list
with `default_modify_scope` and `default_list_scope`.
- commands that only read or list default to all scopes (merged)
- commands that modify configs modify user scope (highest
precedence) by default
- These vars are used in setup_parser for mirror/repo/compiler (see the sketch after this list).
- Spack's argparse supports aliases now.
- added 'rm' alias for `spack [repo|compiler|mirror] remove`
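For illustration, here is a minimal argparse sketch of how these defaults might wire into a subcommand parser. The default values and parser layout are assumptions, and modern argparse's native subparser aliases stand in for Spack's vendored argparse:
```python
import argparse

# Assumed values for the two defaults described above.
default_modify_scope = 'user'   # modifying commands write the user scope
default_list_scope = None       # None means the merged view of all scopes

def setup_parser(subparser):
    sp = subparser.add_subparsers(dest='mirror_command')

    add = sp.add_parser('add')
    add.add_argument('--scope', default=default_modify_scope,
                     help="configuration scope to modify")

    # subparser aliases give us `spack mirror rm` for free
    remove = sp.add_parser('remove', aliases=['rm'])
    remove.add_argument('--scope', default=default_modify_scope)

    lst = sp.add_parser('list')
    lst.add_argument('--scope', default=default_list_scope,
                     help="configuration scope to read (default: all)")

parser = argparse.ArgumentParser(prog='spack mirror')
setup_parser(parser)
print(parser.parse_args(['rm', '--scope', 'site']))
```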
This update significantly reworks the llvm and clang packages. The llvm
package now includes variants allowing it to build and install any and
all of:
* clang
* lldb
* llvm's libunwind (why, WHY did they name it this?!?)
* polly (including building it directly into the clang tools, 3.7.0 only)
* clang extra tools
* compiler-rt (sanitizers)
* clang lto (the gold linker plugin that allows same to work)
* libcxx/libcxxabi
* libopenmp, also setting the default openmp runtime to it; when
parameterized variants land, this should become an option of libomp or libgomp
Ideally, this should have rpath setup like the gcc package does, but
clang's driver has no support for specs as such, and no clearly
equivalent mechanism either. If anyone has ideas on this, they would be
welcome.
One significant note related to gcc, though: if you test this on LLNL
systems, or anywhere that has multiple GCCs straddling the dwarf2
boundary and sharing a libstdc++, build a gcc with spack and use that to
build clang. If you use gcc 4.8+ to build this against an older
libstdc++, it will fail on missing unwind symbols because of the
discrepancy.
Resource handling has been changed slightly: the unpacked archive is now
moved into the target rather than symlinked, because symlinks break
certain kinds of relative paths, and resource staging is now ordered so
that nested resources are unpacked after outer ones.
This solution doesn't really make me happy, but does seem to work. It
sorts the resources by the length of the string representing their
destination. Since any nested resource must contain another resource's
name in its path, it seems that should work, but there should be a
better way to do this.
This allows resources to be placed into subdirectory trees that may not
exist in the base package and may depend on other resources having been
staged first.
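As a sketch of the length-sort trick, assuming a hypothetical `Resource` object with a `destination` attribute and a `stage_resource` helper (neither is Spack's real API):
```python
def stage_resources(resources):
    # Any nested resource's destination path contains its parent
    # resource's name, so shorter destination strings always sort,
    # and therefore stage, before the resources nested inside them.
    for res in sorted(resources, key=lambda r: len(str(r.destination))):
        stage_resource(res)   # hypothetical: fetch, unpack, move into place
```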
- All of these work:
- `spack mirror add`
- `spack mirror remove`
- `spack mirror list`
- `spack mirror` subcommands (except create) now have their own
--scope argument.
- Mirror config is now stored sanely as an ordered list.
- `spack compiler` subcommands now take an optional --scope argument.
- no more `remove_from_config` in `config.py` -- `update` just
overwrites b/c it's easier to just call `get_config`, modify YAML
structures directly, and then call `update`.
- Implemented `spack compiler remove`.
- Configs are now parsed with `spack.util.spack_yaml.load/dump`
- Parser annotates returned data with `_start_mark` and `_end_mark`
properties, so that we can recover what lines/files they came from.
- Parser uses `OrderedDict` instead of `dict`. This will help
maintain some sanity when round-tripping config files.
- User and site config are now kept separately in memory.
- Merging is done on demand when client code requests the configuration.
- Allows user/site config to be updated independently of each other by commands.
- simplifies config logic (no more tracking merged files); see the sketch below
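A rough sketch of both ideas together, built directly on PyYAML; `AnnotatedDict`, the scope names, and `get_config` are illustrative stand-ins for Spack's internals:
```python
import yaml
from collections import OrderedDict

class AnnotatedDict(OrderedDict):
    """Ordered mapping that can carry _start_mark/_end_mark attributes."""

class LineAnnotatedLoader(yaml.SafeLoader):
    pass

def _construct_mapping(loader, node):
    loader.flatten_mapping(node)
    data = AnnotatedDict(loader.construct_pairs(node))
    data._start_mark = node.start_mark   # file/line the mapping began on
    data._end_mark = node.end_mark
    return data

LineAnnotatedLoader.add_constructor(
    yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, _construct_mapping)

# Scopes stay separate in memory; merging happens only on request.
scopes = OrderedDict()   # scope name -> parsed data, lowest precedence first
scopes['site'] = yaml.load("mirrors:\n  remote: https://example.com\n",
                           Loader=LineAnnotatedLoader)
scopes['user'] = yaml.load("mirrors:\n  local: /path/to/mirror\n",
                           Loader=LineAnnotatedLoader)

def get_config(section):
    merged = {}
    for data in scopes.values():          # later (higher) scopes win
        merged.update(data.get(section, {}))
    return merged

print(get_config('mirrors'))              # user entries override site's
print(scopes['user']._start_mark.line)    # 0: where that mapping started
```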
- Stage and fetcher were not being set up properly when fetching using
a different fetch strategy than the default one for the package.
- This is fixed but fetch/stage/mirror logic is still too complicated
and long-term needs a rethink.
- Spack will now print a warning when fetching a checksum-less tarball
from a mirror -- users should be careful to use https or local
filesystem mirrors for this.
- Move `find_versions_of_archive` from spack.package to `spack.util.web`.
- `spider` function now just uses the link parsing it already does to
return links. We evaluate actual links found in the scraped pages
instead of trying to reconstruct them naively.
- Add `spack url-parse` command, which you can use to show how Spack
interprets the name and version in a URL.
Versions found by wildcard URLs are different from versions found by
parse_version, etc. The wildcards are constructed more haphazardly
than the very specific URL patterns in url.py, so they can get things
wrong. e.g., for this URL:
https://software.lanl.gov/MeshTools/trac/attachment/wiki/WikiStart/mstk-2.25rc1.tgz
We miss the 'rc' and only return 2.25r as the version if we ONLY use
URL wildcards.
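A contrived regex makes the failure mode concrete. This is not Spack's actual wildcard construction, just a pattern haphazard enough to reproduce the '2.25r' result:
```python
import re

url = ("https://software.lanl.gov/MeshTools/trac/attachment/wiki/"
       "WikiStart/mstk-2.25rc1.tgz")

# A naive wildcard: digits and dots, then at most one trailing letter.
naive = re.compile(r'mstk-([0-9.]+[a-z]?)')
print(naive.search(url).group(1))   # prints '2.25r'; the 'c1' is lost
```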
Future: Maybe use the regexes from url.py to scrape web pages, and
then compare them for similarity with the original URL, instead of
trying to make a structured wildcard URL pattern? This might yield
better results.
- remove getcwd() check (seems arbitrary -- if users set their TMPDIR
to this why stop them?)
- try a number of common locations and try per-user directories in
them first.
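Sketching that lookup order with a hypothetical candidate list (the real set of common locations may differ):
```python
import getpass
import os
import tempfile

candidates = [tempfile.gettempdir(), '/tmp', '/var/tmp']   # assumed list

# Per-user subdirectories come first so users on shared systems don't
# collide; fall back to the shared directories themselves.
paths = [os.path.join(d, getpass.getuser()) for d in candidates] + candidates

stage_root = None
for path in paths:
    try:
        os.makedirs(path)
    except OSError:
        pass                     # already exists, or not creatable here
    if os.path.isdir(path) and os.access(path, os.W_OK):
        stage_root = path
        break
```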
- Adding `preferred=True` to a version directive will change its sort
order in concretization.
- This provides us a rudimentary ability to keep the Spack stack
stable as new versions are added.
- Having multiple stacks will come next, but this at least allows us
to specify default versions of things instead of always taking the
newest.
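For example, a hypothetical package (name and checksums made up) where concretization should pick 1.0 even though 2.3 is newer:
```python
from spack import *

class Mpileaks(Package):
    """Hypothetical package demonstrating the `preferred` flag."""
    homepage = "http://www.example.com/mpileaks"
    url = "http://www.example.com/mpileaks-1.0.tar.gz"

    version('2.3', '0123456789abcdef0123456789abcdef')   # newest version
    version('1.0', 'fedcba9876543210fedcba9876543210',
            preferred=True)   # sorts first during concretization
```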
This changes the compiler wrappers so that they are called by the same
name as the wrapped compiler. Many builds make assumptions about
compiler names, and we need the spack compilers to be recognizable so
that build systems will get their flags right.
This adds per-compiler subdirectories to the lib/spack/spack/env directory
that contain symlinks to cc for the C, C++, F77, and F90
compilers. The build now sets CC, CXX, F77, and F90 to point to these
links instead of to the generically named cc, c++, f77, and f90
wrappers.
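A sketch of how such links might be created; `make_wrapper_links` and its arguments are illustrative, not the actual build-environment code:
```python
import os

def make_wrapper_links(env_dir, compiler_name, names):
    """Create <env_dir>/<compiler_name>/<name> links to the cc wrapper."""
    subdir = os.path.join(env_dir, compiler_name)
    if not os.path.isdir(subdir):
        os.makedirs(subdir)
    for name in names:
        link = os.path.join(subdir, name)
        if not os.path.lexists(link):
            os.symlink(os.path.join(env_dir, 'cc'), link)

# e.g. for gcc; the build would then set CC=<env_dir>/gcc/gcc, etc.
make_wrapper_links('lib/spack/spack/env', 'gcc',
                   ['gcc', 'g++', 'gfortran'])
```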
- Extension logic didn't take conditional deps into account.
- Extension methods now check for whether the extension is
in the extendee map AND whether the dependency is actually present
in the spec yet.
This changes the hash algorithm so that it does much less object
allocation and copying, and so that it is correct.
The old version of `_cmp_key()` would call `sorted_deps`, which would
call `flat_dependencies` to get a list of dependencies so that it
could sort them in alphabetical order. This isn't necessary in the
`_cmp_key()`, and in fact we want more DAG structure than that to be
included in the `_cmp_key()`.
The new version constructs a tuple without copying the Spec DAG, and
the tuple contains hashes of sub-DAGs that are computed recursively
in-place. This is way faster than the previous algorithm and reduces
the number of copies significantly. It is also a correct DAG hash.
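In outline, the new key looks something like this simplification (not the exact code); `name`, `versions`, `variants`, and `dependencies` stand in for the real Spec fields:
```python
def dep_hash(spec):
    """Hash of a sub-DAG, computed recursively in place -- no copies."""
    return hash(_cmp_key(spec))

def _cmp_key(spec):
    # Local fields plus hashes of child sub-DAGs; the tuple is built
    # directly from the existing Spec objects, never from copies.
    return (spec.name,
            spec.versions,
            spec.variants,
            tuple(sorted((name, dep_hash(dep))
                         for name, dep in spec.dependencies.items())))
```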
Example timing and copy counts for the different hashing algorithms
we've tried:
Original (wrong) Spec hash:
```
106,170 copies
real 0m5.024s
user 0m4.949s
sys 0m0.104s
```
Spec hash using YAML `dag_hash()`:
```
3,794 copies
real 0m5.024s
user 0m4.949s
sys 0m0.104s
```
New no-copy, no-YAML hash:
```
3,594 copies
real 0m2.543s
user 0m2.435s
sys 0m0.104s
```
So now we have a hash that is correct AND faster.
The remaining ~3k copies happen mostly during concretization, and as
all packages are initially loaded. I believe this is because Spack
currently has to load all packages to figure out virtual dependency
information; it could also be because there are a lot of lookups of
partial specs in concretize. I can investigate this further.
- _cross_provider_maps() had suffered some bit rot (map returned was
ill-formed but still worked for cases with one vdep)
- ProviderIndex.satisfies() was only checking whether the result map
was non-empty. It should check whether all common vdeps are *in*
the result map, as that indicates there is *some* way to satisfy
*all* of them. We were checking whether there was some way to
satisfy *any one* of them, which is wrong.
- Above would cause a problem when there is more than one vdep provider.
- Added test that covers this case.
- Added `constrained()` method to Spec. Analogous to `normalized()`:
`constrain():constrained() :: normalize():normalized()`
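Given the analogy, `constrained()` presumably has the obvious shape, a sketch assuming Spec's existing `copy()` and `constrain()`:
```python
def constrained(self, other):
    """Return a copy of self constrained by other, leaving self intact."""
    clone = self.copy()
    clone.constrain(other)
    return clone
```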
Added test for, e.g.:

    import spack.pkg.builtin.mock.mpich
    import spack.pkg.builtin.mock.mpich as mpich
    from spack.pkg.builtin.mock.mpich import Mpich

Among others. These ensure that direct package imports work so that
packages can be extended.
Package repositories now look like this:
top-level-dir/
repo.yaml
packages/
libelf/
package.py
mpich/
package.py
...
This leaves room at the top level for additional metadata, source,
per-repo configs, indexes, etc., and it makes it easy to see that
something is a spack repo (just look for repo.yaml and packages).
unit tests, so tracking tests with sets wouldn't work unless I extracted the
details relevant to the particular test. For now a simple count will work, so
using a set was unnecessary anyway.
1. Adding a plugin to keep track of the total number of tests run as well as the
number of tests with failures/errors.
2. Some nose plugins (including xunit, which will be added in a future commit)
assign stdout to a stream object that does not have a .fileno attribute.
spack.util.executable.Executable now avoids passing stdout to subprocess (and
always uses subprocess.PIPE)
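The workaround amounts to something like this sketch (not `Executable`'s exact code): capture with `PIPE` and write the output to whatever `sys.stdout` currently is:
```python
import subprocess
import sys

def run(*args):
    # Never hand sys.stdout to Popen directly: a plugin may have
    # replaced it with an object lacking .fileno(), which subprocess
    # needs for redirection.
    proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    sys.stdout.write(out.decode('utf-8'))
    return out
```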
TODO:
1. Still need to figure out how to activate the plugin (as of now it is
being ignored by nose). Newer versions of nose appear to make this simpler
(e.g. the "addplugins" argument to nose.run)
2. Need to include new version of nose in order to use xunit