This allows resources to be placed into subdirectory trees that may not
exist in the base package, and that may depend on other resources being
staged later.
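For illustration, a minimal sketch of what this enables in an old-style
package recipe; the package name, URLs, and checksums here are all
hypothetical:

```python
from spack import *

class Example(Package):
    """Hypothetical package that stages a resource into a subdirectory
    tree that the base tarball does not contain."""
    homepage = "https://example.com"
    url = "https://example.com/example-1.0.tar.gz"

    version('1.0', '0123456789abcdef0123456789abcdef')

    resource(
        name='extras',
        url='https://example.com/extras-0.1.tar.gz',
        md5='fedcba9876543210fedcba9876543210',
        # deps/extras need not exist in the base package; the tree is
        # created when the resource is staged.
        destination='deps/extras')
```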
A pile of libraries and tools. libedit is actually important as a
replacement for readline in non-GPL projects, and ninja may be
worthwhile for some of the larger CMake projects, like llvm/clang.
Yay for non-portable declaration syntax. After the previous screwiness,
I ran this through a number of shells and found that this is the most
portable version I could come up with.
- All of these work:
- `spack mirror add`
- `spack mirror remove`
- `spack mirror list`
- `spack mirror` subcommands (except `create`) now have their own
  `--scope` argument.
- Mirror config is now stored sanely as an ordered list.
- `spack compiler` subcommands now take an optional `--scope` argument.
- No more `remove_from_config` in `config.py` -- `update` just
  overwrites, because it's easier to call `get_config`, modify the
  YAML structures directly, and then call `update` (sketched after
  this list).
- Implemented `spack compiler remove`.
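A minimal sketch of that get/modify/update pattern, assuming signatures
like `get_config(section, scope)` and `update(section, data, scope)`;
the shape of each mirror entry is hypothetical:

```python
import spack.config

# Read the mirror list from the user scope, drop one entry, and write
# the whole section back; update() simply overwrites the old section.
mirrors = spack.config.get_config('mirrors', scope='user')
mirrors = [m for m in mirrors if 'stale-mirror' not in m]
spack.config.update('mirrors', mirrors, scope='user')
```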
- Configs are now parsed with `spack.util.spack_yaml.load/dump`
- Parser annotates returned data with `_start_mark` and `_end_mark`
  properties so that we can recover which lines/files they came from
  (see the sketch after this list).
- Parser uses `OrderedDict` instead of `dict`. This will help
maintain some sanity when round-tripping config files.
- User and site config are now kept separately in memory.
- Merging is done on demand when client code requests the configuration.
- Allows commands to update user and site config independently of
  each other.
- Simplifies config logic (no more tracking of merged files).
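A sketch of the annotation technique using stock PyYAML; the class and
constructor names are illustrative, not Spack's actual implementation:

```python
from collections import OrderedDict
import yaml

class syaml_dict(OrderedDict):
    """OrderedDict subclass so instances can carry extra attributes."""

class OrderedLineLoader(yaml.SafeLoader):
    def construct_yaml_map(self, node):
        data = syaml_dict()
        yield data                          # PyYAML's two-step construction
        # construct_mapping builds entries in document order, which the
        # OrderedDict preserves (insertion-ordered dicts on Python 3.7+).
        data.update(self.construct_mapping(node))
        data._start_mark = node.start_mark  # file/line where the map starts
        data._end_mark = node.end_mark      # ... and where it ends

OrderedLineLoader.add_constructor(
    'tag:yaml.org,2002:map', OrderedLineLoader.construct_yaml_map)

data = yaml.load("foo:\n  bar: baz\n", Loader=OrderedLineLoader)
print(type(data).__name__, data._start_mark.line)  # syaml_dict 0
```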
- Stage and fetcher were not being set up properly when fetching using
a different fetch strategy than the default one for the package.
- This is fixed, but the fetch/stage/mirror logic is still too
  complicated and needs a long-term rethink.
- Spack will now print a warning when fetching a checksum-less tarball
from a mirror -- users should be careful to use https or local
filesystem mirrors for this.
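A hedged sketch of the check behind that warning; the function name and
arguments are illustrative, not the actual fetch-strategy code:

```python
import llnl.util.tty as tty

def warn_if_unverifiable(url, digest, from_mirror):
    """Warn when a mirror fetch has no checksum to verify against."""
    if from_mirror and digest is None:
        tty.warn(
            "Fetching %s from a mirror without a checksum!" % url,
            "The tarball cannot be verified; prefer https or local "
            "filesystem mirrors in this case.")

warn_if_unverifiable('http://mirror/example-1.0.tar.gz', None, True)
```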
- Move `find_versions_of_archive` from spack.package to `spack.util.web`.
- `spider` function now just uses the link parsing it already does to
  return links. We evaluate the actual links found in the scraped
  pages instead of trying to reconstruct them naively (see the sketch
  after this list).
- Add `spack url-parse` command, which you can use to show how Spack
interprets the name and version in a URL.
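A minimal sketch of the link-scraping idea using only the Python 3
standard library; Spack's real `spider` additionally handles recursion
depth, errors, and de-duplication:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collect href targets from anchor tags as a page is parsed."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.links.extend(v for k, v in attrs if k == 'href' and v)

def spider(url):
    """Return the absolute form of every link actually found on the
    page, rather than reconstructing links from naming conventions."""
    parser = LinkParser()
    page = urlopen(url).read().decode('utf-8', errors='replace')
    parser.feed(page)
    return {urljoin(url, link) for link in parser.links}
```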
Versions found via wildcard URLs can differ from versions found by
`parse_version`, etc. The wildcards are constructed more haphazardly
than the very specific URL patterns in `url.py`, so they can get
things wrong. For example, for this URL:
https://software.lanl.gov/MeshTools/trac/attachment/wiki/WikiStart/mstk-2.25rc1.tgz
the wildcard truncates the 'rc1' suffix and returns only 2.25r as the
version if we rely solely on URL wildcards.
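A small demonstration of the failure mode; both regexes here are
simplified stand-ins for the real wildcard and `url.py` patterns:

```python
import re

url = ('https://software.lanl.gov/MeshTools/trac/attachment/'
       'wiki/WikiStart/mstk-2.25rc1.tgz')

# A haphazard wildcard-style pattern stops as soon as it cannot
# classify a character, truncating the 'rc1' suffix.
print(re.search(r'\d+(\.\d+)*[a-z]?', url).group(0))  # 2.25r

# A specific pattern like those in url.py knows about 'rc' suffixes.
print(re.search(r'\d+(\.\d+)*rc\d+', url).group(0))   # 2.25rc1
```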
Future: maybe use the regexes from `url.py` to scrape web pages, and
then compare them for similarity with the original URL, instead of
trying to build a structured wildcard URL pattern? This might yield
better results.
- Remove the `getcwd()` check (it seems arbitrary -- if users set
  their `TMPDIR` to this, why stop them?).
- Try a number of common locations, and try per-user directories in
  them first.
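A minimal sketch of that probing order; the candidate list is
illustrative, not the actual set of locations Spack tries:

```python
import getpass
import os
import tempfile

def first_writable_tmp(candidates=(tempfile.gettempdir(), '/tmp', '/var/tmp')):
    """Try per-user subdirectories of common temp locations first,
    then fall back to the shared locations themselves."""
    user = getpass.getuser()
    for base in candidates:
        for path in (os.path.join(base, user), base):
            try:
                os.makedirs(path, exist_ok=True)  # may already exist
            except OSError:
                continue
            if os.access(path, os.W_OK):
                return path
    return None

print(first_writable_tmp())
```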