This update significantly reworks the llvm and clang packages. The llvm
package now includes variants allowing it to build and install any
combination of the following (a sketch of the declarations follows the list):
* clang
* lldb
* llvm's libunwind (why, WHY did they name it this?!?)
* polly (including building it directly into the clang tools, 3.7.0 only)
* clang extra tools
* compiler-rt (sanitizers)
* clang lto (the gold linker plugin that allows LTO to work)
* libcxx/libcxxabi
* libopenmp, also setting the default openmp runtime to it; when
  parameterized variants land, this should become an option selecting
  either libomp or libgomp
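A hedged sketch of how those variants might be declared in
`llvm/package.py`; the names, defaults, and descriptions here are
illustrative approximations, not the exact set:

```python
# Illustrative sketch only; variant names, defaults, and descriptions
# are approximations of the real package, not the exact set.
from spack import *


class Llvm(Package):
    """The LLVM compiler infrastructure (sketch)."""
    homepage = 'http://llvm.org/'
    url      = 'http://llvm.org/releases/3.7.0/llvm-3.7.0.src.tar.xz'

    version('3.7.0')  # checksum omitted in this sketch

    variant('clang',       default=True, description='Build clang')
    variant('lldb',        default=True, description='Build lldb')
    variant('libunwind',   default=True, description="Build LLVM's libunwind")
    variant('polly',       default=True, description='Build polly')
    variant('clang_extra', default=True, description='Build the clang extra tools')
    variant('compiler-rt', default=True, description='Build compiler-rt (sanitizers)')
    variant('gold',        default=True, description='Build the LTO gold plugin')
    variant('libcxx',      default=True, description='Build libcxx/libcxxabi')
    variant('libomp',      default=True, description='Build libopenmp')

    def install(self, spec, prefix):
        # Each '+variant' in the spec would toggle the matching CMake
        # flags here, e.g. only stage the polly resource when '+polly'.
        pass
```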
Ideally, this should have rpath setup like the gcc package does, but
clang's driver has no support for specs as such, and no clearly
equivalent mechanism either. If anyone has ideas on this, they would be
welcome.
One significant note related to gcc: if you test this on LLNL systems,
or anywhere else with multiple GCC versions straddling the dwarf2
boundary and sharing a libstdc++, build a gcc with spack and use that to
build clang. If you use gcc 4.8+ to build this against an older
libstdc++, it will fail on missing unwind symbols because of the
discrepancy.
Resource handling has been changed slightly: the unpacked archive is
now moved into the target rather than symlinked, because symlinks break
certain kinds of relative paths, and resource staging is now ordered so
that nested resources are unpacked after outer ones.
This solution doesn't really make me happy, but it does seem to work. It
sorts the resources by the length of the string representing their
destination. Since any nested resource's destination must contain the
path of the resource it nests within, its destination string is strictly
longer and sorts after it. That seems to work, but there ought to be a
better way to do this.
This allows resources to be placed into subdirectory trees that may not
exist in the base package, and that may depend on other resources having
been staged before them.
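A minimal sketch of the length-sort idea, with made-up resource names
and destinations (the real code sorts the package's resource objects by
their computed destination paths):

```python
# Made-up resources illustrating the sort: a nested destination contains
# its parent's path, so it is strictly longer and therefore stages later.
resources = {
    'clang':             'tools/clang',
    'clang-tools-extra': 'tools/clang/tools/extra',
    'compiler-rt':       'projects/compiler-rt',
}

for name in sorted(resources, key=lambda n: len(resources[n])):
    print('staging %s into %s' % (name, resources[name]))
```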
A pile of libraries and tools. libedit is actually important as a
replacement for readline in non-GPL projects, and ninja may be
worthwhile for some of the larger CMake projects, like llvm/clang.
Yay for non-portable declaration syntax. After the previous screwiness
I ran this through a number of shells, and this is the most portable
version I could come up with.
- All of these work:
- `spack mirror add`
- `spack mirror remove`
- `spack mirror list`
- `spack mirror` subcommands (except create) now have their own
--scope argument.
- Mirror config is now stored sanely as an ordered list (see the sketch below).
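In memory, that looks roughly like the following; the mirror names and
URLs are made up, and the exact shape of the stored data is an
assumption based on the description above:

```python
# Rough illustration (names/URLs made up; exact schema assumed): keeping
# the mirrors ordered means precedence follows their position in the file.
from collections import OrderedDict

mirrors = OrderedDict([
    ('local',  'file:///var/spack/mirror'),
    ('remote', 'https://mirror.example.com/spack'),
])

for name, url in mirrors.items():
    print(name, url)
```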
- `spack compiler` subcommands now take an optional --scope argument.
- no more `remove_from_config` in `config.py` -- `update` just
  overwrites, because it's easier to call `get_config`, modify the YAML
  structures directly, and then call `update` (see the sketch after this
  list).
- Implemented `spack compiler remove`.
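The resulting update pattern looks roughly like this; the exact
signatures and the shape of the `compilers` section are assumptions
based on the description above:

```python
# Rough sketch of the overwrite pattern (signatures and section shape
# assumed): read the section, edit the YAML data, write it all back.
import spack.config

compilers = spack.config.get_config('compilers')
# ... drop or edit entries in the returned structure here ...
spack.config.update('compilers', compilers)
```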
- Configs are now parsed with `spack.util.spack_yaml.load/dump`
- Parser annotates returned data with `_start_mark` and `_end_mark`
properties, so that we can recover what lines/files they came from.
- Parser uses `OrderedDict` instead of `dict`. This will help
maintain some sanity when round-tripping config files.
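Reading the annotations back looks something like this; the file name is
hypothetical, and the mark attributes are assumed to follow PyYAML's
`Mark` (`.name`, `.line`, `.column`):

```python
# Hypothetical example: recover where a config value was defined.
import spack.util.spack_yaml as syaml

with open('compilers.yaml') as f:
    data = syaml.load(f)

# _start_mark/_end_mark record the source file and line of each node.
print(data._start_mark.name, data._start_mark.line)
```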
- User and site config are now kept separately in memory.
- Merging is done on demand when client code requests the configuration.
- Allows user/site config to be updated independently of each other by commands.
- Simplifies config logic (no more tracking merged files).
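The merge-on-demand idea in miniature, as a hypothetical helper (the
real logic merges nested YAML structures, not just top-level keys):

```python
# Hypothetical helper: later (higher-precedence) scopes win on conflict.
def get_merged_section(section, scopes):
    merged = {}
    for scope_data in scopes:  # low to high precedence, e.g. [site, user]
        merged.update(scope_data.get(section, {}))
    return merged

site = {'mirrors': {'remote': 'https://mirror.example.com/spack'}}
user = {'mirrors': {'local': 'file:///var/spack/mirror'}}
print(get_merged_section('mirrors', [site, user]))
```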
- Stage and fetcher were not being set up properly when fetching using
a different fetch strategy than the default one for the package.
- This is fixed but fetch/stage/mirror logic is still too complicated
and long-term needs a rethink.
- Spack will now print a warning when fetching a checksum-less tarball
from a mirror -- users should be careful to use https or local
filesystem mirrors for this.
- Move `find_versions_of_archive` from `spack.package` to `spack.util.web`.
- `spider` function now just uses the link parsing it already does to
  return links. We evaluate actual links found in the scraped pages
  instead of trying to reconstruct them naively.
- Add `spack url-parse` command, which you can use to show how Spack
interprets the name and version in a URL.
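After the move, callers use the helper roughly like this; the return
shape (a mapping from detected versions to archive URLs) is assumed from
the description, and the URL is just an example:

```python
# Rough usage sketch of the relocated helper (signature assumed).
from spack.util.web import find_versions_of_archive

versions = find_versions_of_archive(
    'http://www.cmake.org/files/v3.4/cmake-3.4.0.tar.gz')
for version, url in versions.items():
    print(version, url)
```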
Versions found by wildcard URLs are different from versions found by
`parse_version`, etc. The wildcards are constructed more haphazardly
than the very specific URL patterns in `url.py`, so they can get things
wrong. For example, for this URL:
https://software.lanl.gov/MeshTools/trac/attachment/wiki/WikiStart/mstk-2.25rc1.tgz
If we use ONLY URL wildcards, we miss the 'rc' and return just 2.25r as
the version.
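For contrast, the structured parser in `url.py` recovers the full
version from the same URL (illustrative; `parse_version` is the relevant
entry point):

```python
# Illustrative contrast: the specific url.py patterns get the 'rc' right.
from spack.url import parse_version

url = ('https://software.lanl.gov/MeshTools/trac/attachment/'
       'wiki/WikiStart/mstk-2.25rc1.tgz')
print(parse_version(url))  # expected: 2.25rc1, not the wildcard's 2.25r
```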
Future: Maybe use the regexes from url.py to scrape web pages, and
then compare them for similarity with the original URL, instead of
trying to make a structured wildcard URL pattern? This might yield
better results.