* bowtie: new versions and %gcc@8.0.0: fix
Bowtie development shifted from Sourceforge to GitHub.
This commit adds several versions from GitHub, using the archive
tarballs. Note that the URL for the 1.2.2 tarball actually points at
an '_p1' tag.
It leaves the old 1.2 release download info in place.
Versions 1.2.0 and newer come from GitHub (I'm unsure whether 1.2 and
1.2.0 are equivalent).
Include a fix, taken from
https://github.com/BenLangmead/bowtie/issues/87, that enables building
with %gcc@8.0.0:.
But v1.2.2 has trouble with "newer" GCCs, so this only adds v1.2.2 for
%gcc@6.0.0:.
Feel free to tighten this. I know that:
- 1.2 through 1.2.2 work with %gcc@5.5.0;
- 1.2 through 1.2.1.1 work with %gcc@8.2.0; and
- 1.2.2 fails with %gcc@8.2.0.
* Tighten to `conflicts('%gcc@8:', when='@1.2.2')`
* Point 1.2.2 and 1.2.2_p1 at the 1.2.2_p1 tarball
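A minimal sketch of the resulting `package.py` declarations, assuming the GitHub archive-tarball layout described above; the checksums are placeholders, not real values:
```python
from spack import *

class Bowtie(MakefilePackage):
    """Bowtie: an ultrafast, memory-efficient short read aligner."""

    homepage = "https://github.com/BenLangmead/bowtie"
    url      = "https://github.com/BenLangmead/bowtie/archive/v1.2.2_p1.tar.gz"

    # both 1.2.2 and 1.2.2_p1 resolve to the '_p1' tag's tarball
    version('1.2.2_p1', sha256='<placeholder>')
    version('1.2.2',    sha256='<placeholder>',
            url='https://github.com/BenLangmead/bowtie/archive/v1.2.2_p1.tar.gz')

    # 1.2.2 fails with gcc 8 (works with gcc 5.5.0; see the notes above)
    conflicts('%gcc@8:', when='@1.2.2')
```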
* Update package.py
Doxygen has migrated from a private SVN repository to GitHub. This PR updates the URLs and pins the versions to GitHub commit hashes. It also adds version 1.8.15 as the latest option.
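A hedged sketch of what the updated version stanzas might look like; the commit hashes are placeholders, not the values from the PR:
```python
from spack import *

class Doxygen(CMakePackage):
    homepage = "https://github.com/doxygen/doxygen"
    git      = "https://github.com/doxygen/doxygen.git"

    # versions pinned to the commits behind the GitHub release tags
    version('1.8.15', commit='<commit-hash>')  # placeholder
    version('1.8.14', commit='<commit-hash>')  # placeholder
```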
* Update package.py
This enforces conventions that allow for correct handling of
multi-valued variants where specifying no value is an option,
and adds convenience functionality for specifying multi-valued
variants with conflicting sets of values. This also adds a notion
of "feature values" for variants, which are those that are understood
by the build system (e.g. those that would appear as configure
options). In more detail (a usage sketch follows the list):
* Add documentation on variants to the packaging guide
* Forbid usage of '' or None as a possible variant value, in
particular as a default. To indicate choosing no value, the user
must explicitly define an option like 'none'. Without this,
multi-valued variants with default set to None were not parsable
from the command line (Fixes #6314)
* Add "disjoint_sets" function to support the declaration of
multi-valued variants with conflicting sets of options. For example,
a variant "foo" with possible values "a", "b", and "c" where "c"
is exclusive of the other values ("foo=a,b" and "foo=c" are
valid but "foo=a,c" is not).
* Add "any_combination_of" function to support the declaration of
multi-valued variants where it is valid to choose none of the
values. This automatically defines "none" as an option (exclusive
with all other choices); this value does not appear when iterating
over the variant's values, for example in "with_or_without" (which
constructs autotools option strings from variant values).
* The "disjoint_sets" and "any_combination_of" methods return an
object which tracks the possible values. It is also possible to
indicate that some of these values do not correspond to options
understood by the package's build system, such that methods like
"with_or_without" will not define options for those values (this
occurs automatically for "none")
* Add documentation for usage of new functions for specifying
multi-valued variants
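A usage sketch of the two helpers under the semantics described above; exact import locations and signatures may differ:
```python
from spack import *

class Example(AutotoolsPackage):
    # "c" is exclusive of "a" and "b": foo=a,b and foo=c are valid,
    # but foo=a,c is rejected
    variant('foo', values=disjoint_sets(('a', 'b'), ('c',)),
            default='a,b', description='conflicting sets of values')

    # any subset of x, y, z is valid, including none of them; 'none'
    # is defined automatically and is exclusive with all other values
    variant('bar', values=any_combination_of('x', 'y', 'z'),
            description='optional multi-valued variant')

    def configure_args(self):
        # 'none' is not a feature value, so it is skipped here
        return self.with_or_without('bar')
```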
Non-expanded resources were being deleted from the cache on account
of two behaviors:
* ResourceStage was moving files rather than copying them, and used
"os.path.realpath" to resolve symlinks
* CacheFetchStrategy creates a symlink to a cached resource rather
than copying it
This alters the first behavior: ResourceStage now copies the file
rather than moving it.
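An illustrative reproduction of how the two behaviors interacted (not the actual Spack code):
```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
cache, stage, dest = (os.path.join(root, d) for d in ('cache', 'stage', 'dest'))
for d in (cache, stage, dest):
    os.mkdir(d)

cached = os.path.join(cache, 'resource.tar.gz')
open(cached, 'w').close()

# CacheFetchStrategy: the stage receives a symlink to the cached file
link = os.path.join(stage, 'resource.tar.gz')
os.symlink(cached, link)

# ResourceStage resolves the symlink back to the file inside the cache
src = os.path.realpath(link)

# old behavior: shutil.move(src, ...) pulled the file out of the cache
# new behavior: copying leaves the cache entry intact
shutil.copy2(src, os.path.join(dest, 'resource.tar.gz'))
assert os.path.exists(cached)
```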
"mirror create" was invoking a package's do_patch method in order to
retrieve and archive URL patches. If a package implements a "patch"
method, this is also called as part of do_patch; this failed when the
package-specific implementation referred to environment variables
that are only available at the time the package is built
(e.g. "spack_cc").
This change introduces fetch and clean methods for patches. They are
no-ops for FilePatch but perform the appropriate actions for
UrlPatch. This allows "mirror create" to invoke do_fetch, which does
not call the package's patch method.
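A hedged sketch of the new hooks; the shapes follow the description above rather than the exact implementation:
```python
class Patch(object):
    def fetch(self, stage):
        """No-op for FilePatch: the patch already exists on disk."""

    def clean(self):
        """No-op for FilePatch: nothing was fetched."""


class UrlPatch(Patch):
    def fetch(self, stage):
        # download and archive the patch from its URL, so "mirror create"
        # can call do_fetch without running the package's patch() method
        ...

    def clean(self):
        # remove whatever fetch() staged
        ...
```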
* PARALLEL-NETCDF: add new version and update download location
PnetCDF-1.11.0 is released.
Also, the canonical download area has changed, and they are now using git, so we can also provide develop and master checkouts.
One issue is that they changed the names of the tar files, so 1.11.0 needs special handling (and future versions will, too).
All checksums at new location match the checksums from the old location.
* Address concerns in review
Added a `url_for_version` function to handle the tar file name change at version 1.11.0 and later. Updated the description to match the one currently shown on the website. Updated the `homepage` setting since it has recently moved.
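A sketch of such a `url_for_version` handler, assuming the rename pattern described; the URLs here are illustrative:
```python
def url_for_version(self, version):
    # tarballs are named pnetcdf-* from 1.11.0 on, parallel-netcdf-* before
    base = 'https://parallel-netcdf.github.io/Release'
    if version >= Version('1.11.0'):
        return '{0}/pnetcdf-{1}.tar.gz'.format(base, version)
    return '{0}/parallel-netcdf-{1}.tar.gz'.format(base, version)
```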
* Address issues in second review
- in many files, regular strings were used in places where raw strings
should've been used.
- convert these to raw strings and get rid of new flake8 errors
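For example, flake8 flags invalid escape sequences in regular strings as W605:
```python
import re

# before: '\d' and '\.' are invalid escape sequences in a regular string
# pattern = '\d+\.\d+'

# after: the raw string hands the backslashes to the regex engine intact
pattern = r'\d+\.\d+'
assert re.match(pattern, '1.11')
```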
This PR improves the validation of `modules.yaml` by introducing a custom validator that checks if an attribute listed in `properties` or `patternProperties` is a valid spec. This new check applied to the test case in #9857 gives:
```console
$ spack install szip
==> Error: /home/mculpo/.spack/linux/modules.yaml:5: "^python@2.7@" is an invalid spec [Invalid version specifier]
```
Details:
* Moved the set-up of a custom validator class to spack.schema
* In Spack we use `jsonschema` to validate configuration files
against a schema. We also need custom validators to enforce
writing default values within "properties" or "patternProperties"
attributes.
* Previously, validators were customized at the point of use, and with
the recent introduction of environments that meant we were setting up
and using two different validator classes in two different modules.
* This commit moves the set-up of a custom validator class into the
`spack.schema` module and refactors the code in `spack.config` and
`spack.environments` to use it.
* Added a custom validator to check if an attribute is a valid spec (a
sketch follows this list)
* Added a custom validator that can be used on objects, which yields an
error if the attribute is not a valid spec.
* Updated the schema for modules.yaml
* Updated modules.yaml to fix a few inconsistencies:
- a few attributes were not tested properly using 'anyOf'
- suffixes has been updated to also check that the attribute is a spec
- hierarchical_scheme has been updated to hierarchy
* Removed $ref from every schema
* $ref is not composable or particularly legible
* Use python dicts and regular old variables instead.
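A hedged sketch of the spec-checking validator, using `jsonschema`'s standard extension mechanism; names and details are illustrative:
```python
import jsonschema

import spack.spec

def _validate_spec(validator, is_spec, instance, schema):
    """Yield an error for every attribute name that is not a valid spec."""
    if not (is_spec and validator.is_type(instance, 'object')):
        return
    for key in instance:
        try:
            spack.spec.Spec(key)
        except Exception as err:
            yield jsonschema.ValidationError(
                '"{0}" is an invalid spec [{1}]'.format(key, err))

Validator = jsonschema.validators.extend(
    jsonschema.Draft4Validator, {'validate_spec': _validate_spec})
```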
* express: new version, use tags and fix gcc@6.0.0:
Express fails to build with gcc@6.0.0:.
The Debian project [has a
fix](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=811859), but
they don't seem to have pushed it upstream.
I've opened an issue and a PR in the eXpress repo, but eXpress isn't
actively developed, so I'm fixing it here too.
Since the Spack package was created, the eXpress team tagged their
releases. I've updated the package to use the tags.
Version 1.5.1 used to be known as 2015-11-29 (same commit). 1.5.2 is
new(er).
* Make flake8 happ{y,ier}
Apply the fix from aspell issue #519 for a pointer dereference bug that
newer versions of gcc won't let slip past.
There hasn't been a release that includes the fix, so this applies the
change to the latest release.
There's a missing break in a switch statement that newer GCCs
dislike.
Our #4696 simply disallowed newer GCCs.
This fixes the problem instead.
It's been [PR'ed upstream](https://github.com/agordon/fastx_toolkit/pull/22).
Tested with gcc@5.5.0 and gcc@8.2.0 on CentOS.
The most recent release of bamutil that we support uses an embedded
copy of libStatGen that has several issues that keep it from building
with newer releases of gcc.
They've all been fixed upstream and the latest release of bamutil
would pick them up if/when we support it. The build process has
changed though, plus my team needs *this* version.
This commit backports those fixes.
- The nested directive implementation was broken for Python 3
- directive results were not properly removed from the directive list
when it was processed in the DirectiveMeta metaclass.
- the issue was that remove_directives only descended into a list or
tuple, but in Python 3, the initial value passed to the function is a
view of dictionary values (see the miniature example below).
- make it a list to fix things, and add a regression test.
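The Python 3 difference, in miniature (illustrative):
```python
directives = {'pkg_a': [1], 'pkg_b': [2]}

values = directives.values()       # a list in Python 2, a view in Python 3
isinstance(values, (list, tuple))  # False on Python 3, so remove_directives
                                   # never descended into the values

list(values)                       # the fix: materialize the view first
```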
- currently just looks at patches
- allows you to find out which package applied a patch to a spec
- intended to work with tarballs and resources in the future.
- add tab completion for `spack resource` and subcommands
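Hypothetical usage, assuming `list` and `show` subcommands:
```console
$ spack resource list
$ spack resource show <sha256>
```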
- previously, if a concrete sub-DAG with patched specs was written out
and read back in, its patches would not be found because the dependent
that patched it was no longer in the DAG.
- Add a test to ensure that the PatchCache handles this case.
- Also add tests to ensure that patch objects are properly created from
Specs -- previously we only checked that the patches were on the Spec.
- this fixes a bug where, if we saved a concretized sub-DAG in which a
package had been patched by a dependent, and the dependent was not in
the DAG, we would not read in all patches correctly.
- Rather than looking up patches in the DAG, we look them up globally
from an index created from the entire repository.
- The patch cache is a bit tricky for several reasons:
- we have to cache information from packages, specifically, the patch
level and working directory.
- FilePatches need to know which package owns them, so that they can
figure out where the patch lives. The repo can change locations from
run to run, so we have to store relative paths and restore them when
the cache is reloaded (see the sketch after this list).
- Patch files can change underneath the cache, because repo indexes
only update on package changes. We currently punt on this -- there
are stub methods for needs_update() that will need to check patch
files when packages are loaded. There isn't an easy way to do this
at global indexing time without making the FastPackageChecker a lot
slower. This is TBD for a future commit.
- Currently, the same patch can only be used one way in a package. That
is, if it appears twice with different level/working_dir settings,
bad things will happen. There's no package that currently uses the
same patch two different ways, so we've punted on this as well, but
we may need to fix this in the future by moving a lot of the metadata
(level, working dir) to the spec, and *only* caching sha256sums in
the PatchCache. That would require some much more complicated tweaks
to the Spec, so we're holding off on that until later.
- This required patches to be refactored somewhat -- the difference
between a UrlPatch and a FilePatch is still not particularly clean.
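A hedged sketch of what one cached FilePatch entry might look like; the field names and the example package are hypothetical:
```python
import os

# one cached entry for a FilePatch
entry = {
    'owner': 'builtin.mvapich2',            # package that owns the patch
    'relative_path': 'mvapich2/fix.patch',  # relative to the repo, so the
                                            # repo can move between runs
    'level': 1,                             # -p level from the directive
    'working_dir': '.',                     # where to apply the patch
    'sha256': '<hash>',                     # placeholder
}

def restore(repo_root, entry):
    # prepend the repo's *current* location when the cache is reloaded
    return os.path.join(repo_root, entry['relative_path'])
```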
- indexes should use json, not YAML, to optimize for speed
- only use YAML in human-editable files
- this makes ProviderIndex consistent with other indexes
- virtual provider cache and tags were previously generated by nearly
identical but separate methods.
- factor out an Indexer interface for updating repository caches
(sketched after this list), and provide implementations for each type
of index (TagIndex, ProviderIndex) so that more can be added if needed.
- Among other things, this allows all indexes to be updated at once.
This is an advantage because loading package files is the real
overhead, and building the indexes once the packages are loaded is
trivial. We avoid extra bulk read-ins by generating all package indexes
at once.
- This can be extended for dependents (reverse dependencies) and patches
later.
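A sketch of what the Indexer interface might look like; the method names follow the description above, not necessarily the exact code:
```python
class Indexer(object):
    """Adapter for building and maintaining one kind of repository index."""

    def create(self):
        """Create a new, empty index."""

    def read(self, stream):
        """Read an existing index (JSON) from a stream."""

    def update(self, pkg_fullname):
        """Update the index with information from one package."""

    def write(self, stream):
        """Write the index (JSON) back to a stream."""


class TagIndexer(Indexer):
    """Maintains the TagIndex."""


class ProviderIndexer(Indexer):
    """Maintains the virtual-provider ProviderIndex."""
```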
- cleanup patch.py:
- make patch.py constructors more understandable
- loosen coupling of patch.py with package
- in Package: make package_dir, module, and namespace class properties
- These were previously instance properties and couldn't be called from
directives, e.g. in patch.create()
- make them class properties so that they can be used in class definition
(one possible mechanism is sketched below)
- also add some instance properties to delegate to class properties so
that prior usage on Package objects still works
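One possible mechanism for class-level properties, shown as an illustrative descriptor (not necessarily Spack's exact approach):
```python
class classproperty(object):
    """Descriptor whose getter runs against the class, not an instance."""

    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        return self.func(owner)


class Package(object):
    @classproperty
    def namespace(cls):
        # available from class context (e.g. inside directives) as well
        # as from instances
        return cls.__module__.rsplit('.', 1)[0]
```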
* Always build glib with iconv
My earlier PR, #10165, added a variant to configure glib to use
libiconv, defaulting to false; it seems to be causing more trouble than
the knob is worth.
This changes the glib package to always depend on and use libiconv.
* libiconv depends_on is no longer conditional
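The net effect on the package is a one-line, unconditional dependency; the 'before' line is my reconstruction of the variant-gated form:
```python
# before (reconstruction): depends_on('libiconv', when='+iconv')
depends_on('libiconv')
```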