Commands like "spack mirror list" were displaying mirrors in a
different order than what was listed in the corresponding mirrors.yaml
file.
This change makes those commands iterate over mirrors in the order in which
they appear in the config file.
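A minimal sketch of the intended behavior, using a plain OrderedDict as a stand-in for the parsed mirrors.yaml (the mirror names and URLs are made up):
```python
from collections import OrderedDict

# Stand-in for the parsed mirrors.yaml; 'remote' is listed before 'local'.
mirrors = OrderedDict([
    ('remote', 'https://mirror.example.com/spack'),
    ('local', 'file:///opt/spack-mirror'),
])

# Iterating the mapping directly preserves the file order;
# sorting the names (or any other reordering) would not.
for name, url in mirrors.items():
    print('{0}: {1}'.format(name, url))
```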
4af4487 added a mirror_id function, used to calculate resource locations
in mirrors, to most FetchStrategy implementations. It left out
BundleFetchStrategy, which broke all packages making use of BundlePackage
(e.g. xsdk). This adds a no-op implementation of mirror_id to
BundleFetchStrategy so that the download/installation of BundlePackages
can proceed as normal.
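A rough sketch of what the no-op implementation might look like (the base class here is a stand-in for Spack's actual FetchStrategy, and the method body is illustrative):
```python
class FetchStrategy(object):
    """Stand-in for spack.fetch_strategy.FetchStrategy."""

    def mirror_id(self):
        raise NotImplementedError


class BundleFetchStrategy(FetchStrategy):
    """Fetch strategy for bundle packages, which have no code of their own."""

    def mirror_id(self):
        # There is nothing to download, so there is no resource to place
        # in a mirror; callers skip entries whose mirror_id is None.
        return None
```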
* Travis CI: Test Python 3.8
* Fix use of deprecated cgi.escape method
* Fix version comparison
* Fix flake8 F811 change in Python 3.8
* Make flake8 happy
* Use Python 3.8 for all test categories
Currently, query arguments in the Spack core are documented on the
Database._query method, where the functionality is defined.
For users of the `spack python` command, this makes the builtin help()
less useful than it could be, since help(spack.store.db.query)
and help(spack.store.db.query_local) do not show the relevant information.
This PR updates the doc attributes for the Database.query and
Database.query_local arguments to mirror everything after the first
line of the Database._query docstring.
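One way this could be done, sketched with illustrative argument names (the real `Database._query` signature is more involved):
```python
class Database(object):
    def _query(self, query_spec=None, installed=True):
        """Run a query against the database.

        Args:
            query_spec: anonymous spec that installed records must match
            installed: if True, return only installed specs
        """

    def query(self, *args, **kwargs):
        """Query the union of this database and its upstreams."""
        return self._query(*args, **kwargs)

    def query_local(self, *args, **kwargs):
        """Query only this database, ignoring upstreams."""
        return self._query(*args, **kwargs)


# Append everything after the first line of _query's docstring to the
# public methods so that help(db.query) shows the argument documentation.
_args_doc = Database._query.__doc__.partition('\n')[2]
Database.query.__doc__ += '\n' + _args_doc
Database.query_local.__doc__ += '\n' + _args_doc
```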
* cuda: fix conflict statements for x86-64 targets
fixes #13462
This build system mixin was not updated after the support for specific
targets was merged.
* Updated the version range of cuda that conflicts with gcc@8:
* Updated the version range of cuda that conflicts with gcc@8: for ppc64le
* Relaxed conflicts for version > 10.1
* Updated versions in conflicts
Co-Authored-By: Axel Huebl <axel.huebl@plasma.ninja>
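The conflicts described above might be expressed roughly like this in the CUDA mixin. The `conflicts` stub below stands in for Spack's directive of the same name, and the version ranges are purely illustrative, not the exact ones that were merged:
```python
def conflicts(constraint, when=None):
    """Stand-in for Spack's conflicts() directive (records a conflict rule)."""
    print('conflict: {0} when {1}'.format(constraint, when))


class CudaPackage(object):
    # Tie each GCC conflict to a CUDA version range *and* a target family,
    # since the supported compilers differ between x86-64 and ppc64le.
    # (Version ranges below are illustrative.)
    conflicts('%gcc@8:', when='+cuda ^cuda@:10.0 target=x86_64:')
    conflicts('%gcc@8:', when='+cuda ^cuda@:10.1 target=ppc64le:')
```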
The `test_changed_files` in `test/cmd/flake8.py` was failing because it calls
`ArgumentParser.parse_args()` without arguments. Normally that would just
parse `sys.argv` but it seems to fail because of something in either `spack test`
or `pytest`. Call it with an empty list so that it doesn't try to touch `sys.argv`
at all.
- [x] allow `-d` spack option for `test_changed_files`
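A standalone illustration of why the empty list matters (plain argparse, not the actual Spack test):
```python
import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument('-d', '--debug', action='store_true')

# Simulate options that an outer test runner might have put on the
# command line; this parser knows nothing about them.
sys.argv = ['pytest', '--cov=lib']
try:
    parser.parse_args()            # falls back to sys.argv[1:] and errors out
except SystemExit:
    print('parse_args() choked on the runner options')

args = parser.parse_args([])       # empty list: sys.argv is never consulted
print(args.debug)                  # False
```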
* docs: add a spack environment for building the docs
* docs: remove tutorial and link to spack-tutorial.readthedocs.io
The tutorial now has its own standalone website, versioned by instances
of the tutorial. Link to that instead of versioning it directly with Spack.
Support mirroring all packages with `spack mirror create --all`.
In this mode there is no concretization:
* Spack pulls every version of every package into the created mirror.
* It also makes multiple attempts for each package/version combination
(if there is a temporary connection failure).
* Continues even if all attempts for a given package fail, i.e., it makes a
best effort to fetch everything rather than stopping at the first package
that cannot be fetched.
This also changes mirroring logic to prefer storing sources by their hash
or by a unique name derived from the source. For example:
* Archives with checksums are named by the sha256 sum, i.e.,
`archive/f6/f6cf3bd233f9ea6147b21c7c02cac24e5363570ce4fd6be11dab9f499ed6a7d8.tar.gz`
vs. the previous `<package-name>-<package-version>.tar.gz`
* VCS repositories are stored by a path derived from their URL,
e.g. `git/google/leveldb.git/master.tar.gz`.
The new mirror layout allows different packages to refer to the same
resource or source without duplicating that download in the
mirror/cache. This change is not essential to mirroring everything but is
expected to save space when mirroring packages that all use the same
resource.
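A sketch of how an archive's cache path could be derived from its checksum (the helper and its argument names are illustrative; the directory layout matches the listing below):
```python
import hashlib
import os


def source_cache_path(sha256_digest, extension='tar.gz'):
    # Group archives under a two-character prefix of the sha256 so that
    # no single directory accumulates every mirrored file.
    return os.path.join('_source-cache', 'archive', sha256_digest[:2],
                        '{0}.{1}'.format(sha256_digest, extension))


digest = hashlib.sha256(b'archive contents go here').hexdigest()
print(source_cache_path(digest))
# -> _source-cache/archive/<first two hex chars>/<full sha256>.tar.gz
```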
The new structure of the mirror is:
```
<base directory>/
_source-cache/ <-- the _source-cache directory is new
archive/ <-- archives/resources/patches stored by hash
00/ <-- 2-letter sha256 prefix
002748bdd0319d5ab82606cf92dc210fc1c05d0607a2e1d5538f60512b029056.tar.gz
01/
0154c25c45b5506b6d618ca8e18d0ef093dac47946ac0df464fb21e77b504118.tar.gz
0173a74a515211997a3117a47e7b9ea43594a04b865b69da5a71c0886fa829ea.tar.gz
...
git/
OpenFAST/
openfast.git/
master.tar.gz <-- repo by branch name
PHASTA/
phasta.git/
11f431f2d1a53a529dab4b0f079ab8aab7ca1109.tar.gz <-- repo by commit
...
svn/ <-- each fetch strategy has its own subdirectory
...
openmpi/ <-- the remaining package directories have the old format
openmpi-1.10.1.tar.gz <-- human-readable name is symlink to _source-cache
```
In addition to the archive names as described above, `mirror create` now
also creates symlinks with the old format to help users understand which
package each mirrored archive is associated with, and to allow mirrors to
work with old spack versions. The symlinks are relative so the mirror
directory can still itself be archived.
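A sketch of how such a relative symlink can be created (paths reuse the examples above; the helper code is illustrative):
```python
import os

mirror_root = '/path/to/mirror'
cache_entry = os.path.join(
    mirror_root, '_source-cache', 'archive', 'f6',
    'f6cf3bd233f9ea6147b21c7c02cac24e5363570ce4fd6be11dab9f499ed6a7d8.tar.gz')
alias = os.path.join(mirror_root, 'openmpi', 'openmpi-1.10.1.tar.gz')

os.makedirs(os.path.dirname(alias), exist_ok=True)
# Point the link at a *relative* target so it still resolves if the whole
# mirror directory is tarred up and unpacked somewhere else.
os.symlink(os.path.relpath(cache_entry, os.path.dirname(alias)), alias)
```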
Other improvements:
* `spack mirror create` will not re-download resources that have already
been placed in it.
* When creating a mirror, the resources downloaded to the mirror will not
be cached (things are not stored twice).
Reindexing takes a significant amount of time, and there's no reason to
do it from DB version 0.9.3 to version 5. The only difference is that v5
can contain "deprecated_for" fields.
- [x] Add a `_skip_reindex` list at the start of `database.py`
- [x] Skip the reindex for upgrades in this list. The new version will
just be written to the file the first time we actually have to write
the DB out (e.g., after an install), and reads will still work fine.
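A hedged sketch of the skip list and its check (the constant names are assumptions based on the description above):
```python
# database.py (sketch)
_db_version = '5'

# (version on disk, current version) pairs whose formats are close enough
# that reindexing every installed package is unnecessary.
_skip_reindex = [
    ('0.9.3', '5'),
]


def needs_reindex(version_on_disk):
    return (version_on_disk, _db_version) not in _skip_reindex
```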
Previously, spack would error out if we tried to fetch something with no
code, but that would prevent fetching dependencies. In particular, this
would fail:
spack fetch --dependencies xsdk
- [x] Instead of raising an error, just print a message that there is nothing
to be fetched for packages like xsdk that do not have code.
- [x] Make BundleFetchStrategy a bit more quiet about doing nothing.
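A sketch of the friendlier behavior, assuming a `has_code` flag on the package (the function and message text are illustrative):
```python
def fetch_with_dependencies(specs):
    for spec in specs:
        pkg = spec.package
        if not getattr(pkg, 'has_code', True):
            # Bundle-style packages (e.g. xsdk) only aggregate dependencies.
            print('Nothing to fetch for {0}: package has no code.'.format(pkg.name))
            continue
        pkg.do_fetch()
```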
We've had `spack spec --yaml` for a while, and we've had methods for JSON
for a while as well. We just haven't had a `--json` argument for `spack spec`.
- [x] Add a `--json` argument to `spack spec`, just like `--yaml`
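A minimal sketch of the argument wiring (flag spellings and the spec serialization calls are assumptions):
```python
import argparse

parser = argparse.ArgumentParser(prog='spack spec')
group = parser.add_mutually_exclusive_group()
group.add_argument('-y', '--yaml', action='store_true',
                   help='print the concretized spec as YAML')
group.add_argument('-j', '--json', action='store_true',
                   help='print the concretized spec as JSON')

args = parser.parse_args(['--json'])
# if args.json:   print(spec.to_json())
# elif args.yaml: print(spec.to_yaml())
```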
New entry for K10 microarchitecture.
Reorder the Zen* microarchitectures so they are not misdetected as k10.
Remove some desktop-specific flags that were preventing Opteron Bulldozer/Piledriver/Steamroller/Excavator CPUs from being recognized as such.
Remove one or two flags that weren't present in /proc/cpuinfo on older OSes (RHEL 6 and friends).
Rename the `spack diy` command to `spack dev-build` to make the use case clearer.
The `spack diy` command has some useful functionality for developers using Spack to build their dependencies and configure/build/install the code they are developing, but developers often miss it, partly because of the obscure name.
The `spack dev-build` command has a `-u/--until PHASE` option to stop after a given phase of the build. This can be used to configure your project, run cmake on your project, or stop after any other phase of the build the user wants. These options are analogous to the existing `spack configure` and `spack build` commands, but for developer builds.
To unify the syntax, we have deprecated the `spack configure` and `spack build` commands, and added a `-u/--until PHASE` option to the `spack install` command as well.
The functionality in `spack dev-build` (specifically `spack dev-build -u cmake`) may be able to supersede the `spack setup` command, but this PR does not deprecate that command as that will require slightly more thought.
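A rough sketch of how stopping after a phase can work (the phase names and the driver function are illustrative, not Spack's actual implementation):
```python
def run_phases(pkg, phases, until=None):
    """Run build phases in order, optionally stopping early."""
    for phase in phases:                 # e.g. ['cmake', 'build', 'install']
        getattr(pkg, phase)()
        if phase == until:
            print('Stopping after phase: {0}'.format(phase))
            break
```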
fd58c98 formats the `Stage`'s `archive_path` in `Stage.archive` (as part of `web.push_to_url`). This is not needed, and if the formatted path differs from the original (for example, when the archive file name contains a URL query suffix), the copy fails.
This removes the formatting that occurs in `web.push_to_url`.
We should figure out a way to handle bad cases like this *and* to have nicer filenames for downloaded files. One option that would work in this particular case would be to also pass `-J` / `--remote-header-name` to `curl`. We'll need to do follow-up work to determine if we can use `-J` everywhere.
See also: https://github.com/spack/spack/pull/11117#discussion_r338301058
Add a new entry in `config.yaml`:
config:
shared_linking: 'rpath'
If this variable is set to `rpath` (the default), Spack will set RPATH in ELF binaries. If set to `runpath`, it will set RUNPATH.
Details:
* Spack cc wrapper explicitly adds `--disable-new-dtags` when linking
* cc wrapper also strips `--enable-new-dtags` from the compile line
when disabling (and vice versa)
* We specifically do *not* add any dtags flags on macOS, which uses
Mach-O binaries, not ELF, so there's no RUNPATH
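A sketch of the decision the wrapper makes (the real wrapper is a shell script; this just restates the logic above in Python):
```python
import platform


def dtags_flags(shared_linking='rpath'):
    if platform.system() == 'Darwin':
        # Mach-O has no RUNPATH concept, so never pass dtags flags on macOS.
        return []
    if shared_linking == 'runpath':
        return ['--enable-new-dtags']
    return ['--disable-new-dtags']       # default: classic RPATH behavior
```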
`spack deprecate` allows for the removal of insecure packages with minimal impact to their dependents. It allows one package to be symlinked into the prefix of another to provide seamless transition for rpath'd and hard-coded applications using the old version.
Example usage:
spack deprecate /hash-of-old-openssl /hash-of-new-openssl
The spack deprecate command is designed for use only in extraordinary circumstances; it makes no promises about binary compatibility. It is up to the user to ensure the replacement is suitable for the deprecated package.
Previously this command only showed total counts for each regular
expression. This doesn't give you a sense of which regexes are working
well and which ones are not. We now display the number of right, wrong,
and total URL parses per regex.
It's easier to see where we might improve the URL parsing with this
change.
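A sketch of the per-regex bookkeeping (the counter layout is illustrative):
```python
from collections import defaultdict

counts = defaultdict(lambda: {'right': 0, 'wrong': 0, 'total': 0})


def record(regex, parsed, expected):
    stats = counts[regex]
    stats['total'] += 1
    stats['right' if parsed == expected else 'wrong'] += 1

# After the run, print right/wrong/total per regex instead of one grand total.
```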
This updates the configuration loading/dumping logic (now called
load_config/dump_config) in spack_yaml to preserve comments (by using
ruamel.yaml's RoundTripLoader). This has two effects:
* Environment spack.yaml files are expected to retain comments, which
load_config now supports. By using load_config, users can now use the
':' override syntax that was previously unavailable for environment
configs (but was available for other config files).
* config files now retain user comments by default (although in cases
where Spack updates/overwrites config, the comments can still be
removed).
Details:
* Subclasses `RoundTripLoader`/`RoundTripDumper` to parse yaml into
ruamel's `CommentedMap` and analogous data structures
* Applies filename info directly to ruamel objects in cases where the
updated loader returns those
* Copies management of sections in `SingleFileScope` from #10651 to allow
overrides to occur
* Updates the loader/dumper to handle the processing of overrides by
specifically checking for the `:` character
* Possibly the most controversial aspect, but without that, the parsed
objects have to be reconstructed (i.e. as was done in
`mark_overrides`). It is possible that `mark_overrides` could remain
and a deep copy will not cause problems, but IMO that's generally
worth avoiding.
* This is also possibly controversial because Spack YAML strings can
include `:`. My reckoning is that this only occurs for version
specifications, so it is safe to check for `endswith(':') and not
('@' in string)`
* As a consequence, this PR ends up reserving the spack_yaml functions
load_config/dump_config exclusively for the purpose of storing Spack
config.
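A minimal round-trip example with ruamel.yaml's legacy loader/dumper interface (the subclassing needed to attach filename marks is omitted here):
```python
import ruamel.yaml as yaml

text = """\
# prefer gcc for everything
packages:
  all:
    compiler: [gcc]
"""

data = yaml.load(text, Loader=yaml.RoundTripLoader)     # CommentedMap
data['packages']['all']['compiler'] = ['clang']

# The leading comment survives because the round-trip dumper keeps the
# comment tokens that the round-trip loader attached to the data.
print(yaml.dump(data, Dumper=yaml.RoundTripDumper))
```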
`test_environment_status()` was printing extra output during tests.
- [x] disable output only for `env('status')` calls instead of disabling
it for the whole test.
This PR ensures that environment activation sets all environment variables set by the equivalent `module load` operations, except that the spec prefixes are "rebased" to the view associated with the environment.
Currently, Spack blindly adds paths relative to the environment view root to the user environment on activation. Issue #12731 points out ways in which this behavior is insufficient.
This PR changes that behavior to use the `setup_run_environment` logic for each package to augment the prefix inspections (as in Spack's modulefile generation logic) to ensure that all necessary variables are set to make use of the packages in the environment.
See #12731 for details on the previous problems in behavior.
This PR also updates the `ViewDescriptor` object in `spack.environment` to have a `__contains__` method. This allows for checks like `if spec in self.default_view`. The `__contains__` operator for `ViewDescriptor` objects checks whether the spec satisfies the filters of the View descriptor, not whether the spec is already linked into the underlying `FilesystemView` object.
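A hedged sketch of the membership check (the attribute names on `ViewDescriptor` are assumptions):
```python
class ViewDescriptor(object):
    def __init__(self, root, select=None, exclude=None):
        self.root = root
        self.select = select or []    # specs the view opts in
        self.exclude = exclude or []  # specs the view filters out

    def __contains__(self, spec):
        # Membership means "this spec passes the view's filters", not
        # "this spec is already linked into the FilesystemView".
        if self.select and not any(spec.satisfies(s) for s in self.select):
            return False
        return not any(spec.satisfies(s) for s in self.exclude)
```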
This PR ensures that on Darwin we always append /sbin and /usr/sbin to PATH, if they are not already present, when looking for sysctl.
* Make sure we look into /sbin and /usr/sbin for sysctl
* Refactor sysctl for better readability
* Remove marker to make test pass
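A standalone sketch of the PATH adjustment described above:
```python
import os
import platform


def sysctl_search_path():
    path = os.environ.get('PATH', '').split(os.pathsep)
    if platform.system() == 'Darwin':
        for sbin in ('/sbin', '/usr/sbin'):
            if sbin not in path:
                path.append(sbin)     # sysctl usually lives here on macOS
    return os.pathsep.join(path)
```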
These changes update our gcc microarchitecture descriptions based on the manuals found at https://gcc.gnu.org/onlinedocs/, assuming that new architectures are not added in patch releases.
This extends Spack so that it can fetch sources and binaries from, push sources and binaries to, and index the contents of mirrors hosted in an S3 bucket.
High level to-do list:
- [x] Extend mirrors configuration to add support for `file://` and `s3://` URLs.
- [x] Ensure all fetching, pushing, and indexing operations work for `file://` URLs.
- [x] Implement S3 source fetching
- [x] Implement S3 binary mirror indexing
- [x] Implement S3 binary package fetching
- [x] Implement S3 source pushing
- [x] Implement S3 binary package pushing
Important details:
* refactor URL handling to handle S3 URLs and mirror URLs more gracefully.
- updated parse() to accept already-parsed URL objects. An equivalent object
is returned with any extra s3-related attributes intact. Objects created with
urllib can also be passed, and the additional s3 handling logic will still be applied.
* update mirror schema/parsing (mirror can have separate fetch/push URLs)
* implement s3_fetch_strategy/several utility changes
* provide more feature-complete S3 fetching
* update buildcache create command to support S3
* Move the core logic for reading data from S3 out of the s3 fetch strategy and into
the s3 URL handler. The s3 fetch strategy now calls into `read_from_url()`. Since
read_from_url can now handle S3 URLs, the S3 fetch strategy is redundant. It's
not clear whether the ideal design is to have S3 fetching functionality in a fetch
strategy, implemented directly in read_from_url, or both.
* expanded what can be passed to `spack buildcache` via the -d flag: In addition
to a directory on the local filesystem, the name of a configured mirror can be
passed, or a push URL can be passed directly.
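A hedged sketch of scheme-based dispatch for reads (boto3 is used purely for illustration; Spack's actual S3 handling lives in its own URL utilities):
```python
import urllib.parse
import urllib.request

import boto3


def read_from_url(url):
    parsed = urllib.parse.urlparse(url)
    if parsed.scheme == 's3':
        # s3://<bucket>/<key>
        response = boto3.client('s3').get_object(
            Bucket=parsed.netloc, Key=parsed.path.lstrip('/'))
        return response['Body'].read()
    # file://, http://, https:// all go through urllib.
    return urllib.request.urlopen(url).read()
```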
fixes #13073
Since #3206 was merged, bootstrapping environment-modules has been using the architecture of the current host or the best match supported by the default compiler. The former case is an issue since shell integration was looking for a spec targeted at the host microarchitecture.
1. Bootstrap an env modules targeted at generic architectures
2. Look for generic targets in shell integration scripts
3. Add a new entry in Travis to test shell integration
Custom string versions for compilers were raising a ValueError on
conversion to int. This commit fixes the behavior by trying to detect
the underlying compiler version when a custom string version is present.
* Refactor code that deals with custom versions for better readability
* Partition version components with a regex
* Fix semantic of custom compiler versions with a suffix
* clang@x.y-apple has been special-cased
* Add unit tests
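A sketch of splitting a custom version string into its numeric part and suffix (the regex is illustrative):
```python
import re


def partition_version(version_string):
    """Split '7.2.0-apple' into ('7.2.0', 'apple'); plain versions pass through."""
    match = re.match(r'^([\d.]+)(?:-(.*))?$', version_string)
    if not match:
        return None, version_string    # fully custom string, no numeric part
    return match.group(1), match.group(2)


print(partition_version('7.2.0-apple'))   # ('7.2.0', 'apple')
print(partition_version('10.1'))          # ('10.1', None)
```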
We've been doing this for quite a while now, and it does not seem to
cause issues.
- [x] Switch the noisy warning to a debug message to make Spack a bit quieter
while building.
* Added architecture specific optimization flags for Clang / LLVM
* Disallow compiler optimizations for mixed toolchains
* We emit a warning when building for a mixed toolchain
* Fixed issues with suffixed versions of compilers; Apple's Clang will,
for the time being, fall back on x86-64 for every compilation.
* Methods setting the environment now do it separately for build and run
Before this commit the `*_environment` methods were setting
modifications to both the build-time and run-time environment
simultaneously. This might cause issues as the two environments
inherently rely on different preconditions:
1. The build-time environment is set before building a package, thus
the package prefix doesn't exist and can't be inspected
2. The run-time environment instead is set assuming the target package
has been already installed
Here we split each of these functions into two: one setting the
build-time environment, one the run-time.
We also adopt a fallback strategy that checks for the old methods and
executes them as before, but prints a deprecation warning to tty (see the
sketch below). This makes it possible to port packages to the new methods
gradually, rather than having to modify all of them at once.
* Added a test that fails if any package uses the old API
Marked the test xfail for now as we have a lot of packages in that
state.
* Added a test to check that a package modified by a PR is up to date
This test can be used any time we deprecate a method call, to ensure
that the deprecated calls are also updated the first time the package
is modified.
* Updated documentation
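A hedged sketch of the split and the deprecation fallback mentioned above (the dispatch helper and the environment object are illustrative):
```python
import warnings


class ExamplePackage(object):
    # New API: build-time and run-time modifications are declared separately.
    def setup_build_environment(self, env):
        # Called before the build; self.prefix does not exist yet.
        env.set('EXAMPLE_VERBOSE_BUILD', '1')

    def setup_run_environment(self, env):
        # Called for an installed package; the prefix can be inspected.
        env.prepend_path('PATH', self.prefix.bin)


def apply_environment(pkg, context, env):
    """Prefer the new per-context method, falling back to the old API."""
    new_method = getattr(pkg, 'setup_{0}_environment'.format(context), None)
    if new_method is not None:
        new_method(env)
    elif hasattr(pkg, 'setup_environment'):
        warnings.warn('setup_environment() is deprecated; define '
                      'setup_build_environment/setup_run_environment instead')
        pkg.setup_environment(env, env)  # old signature took (build_env, run_env)
```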