* Some packages (e.g. mpfr at the time of this patch) can have patches
with the same name but different contents (which apply to different
versions of the package). This appends part of the patch hash to the
cache file name to avoid conflicts.
* Some exceptions that occur during fetching are not subclasses of
SpackError and therefore do not have a 'message' attribute. This
updates the logic for mirroring a single spec (add_single_spec)
to produce an appropriate error message in that case (where before
it failed with an AttributeError); see the sketch after this list.
* In various circumstances, a mirror can contain the universal storage
path but not the cosmetic symlink; previously "spack mirror create"
would not regenerate the missing symlink. Now it creates a symlink for
any package that doesn't have one.
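A minimal sketch of the error-message fallback from the second bullet, assuming a hypothetical helper name (`describe_fetch_error` is ours, not Spack's API): prefer a 'message' attribute when present, otherwise fall back to `str(exc)`.
```
def describe_fetch_error(exc):
    """Return a human-readable message for any fetch exception."""
    return getattr(exc, "message", None) or str(exc)


try:
    raise OSError("connection reset by peer")
except Exception as e:
    print("Unable to fetch: {0}".format(describe_fetch_error(e)))
```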
* Add process to determine aarch64 microarchitecture
* add microarchitectures for thunderx2 and a64fx
* Add optimize flags for gcc on aarch64 family processors thunderx2 and a64fx.
* Add optimize flags for clang on aarch64 family processors thunderx2 and a64fx
* Add testing for thunderx2 and a64fx microarchitectures
* Make relative binaries relocate text files properly
* rb strings aren't valid in python 2
* move perl to new interface for setup_environment family methods
* remove reference to `spack.store` in method definition
Referencing `spack.store` in a method definition caches the `spack.config.config` singleton too early, before we have a chance to add command line and environment scopes.
Add a configuration option to suppress gpg warnings during binary
package verification. This only suppresses warnings: a gpg failure
will still fail the install. This allows users who have already
explicitly trusted the gpg key they are using to avoid seeing
repeated warnings that it is self-signed.
When making a package relative, relocate links relative to the link directory
rather than the full link path (which includes the file name), because `os.path.relpath` expects a directory.
Binaries with relative RPATHs currently do not have hard-coded
strings relocated.
This PR extends the best-effort relocation of strings hard-coded
in binaries to those whose RPATHs have been relativized.
* Docs update for deprecated `spack sha256`
* Added macOS shasum
* Update lib/spack/docs/packaging_guide.rst
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
- [x] Use higher contrast terminal output font
- [x] Use higher contrast code block background color than default
- [x] Use a noticeable prompt character
See also https://github.com/spack/spack-tutorial/pull/10.
`mirror_archive_path` was failing to account for the case where the fetched version isn't known to Spack.
- [x] don't require the fetched version to be in `Package.versions`
- [x] add regression test for mirror paths when package does not have a version
This fixes a regression introduced in #10792. `spack uninstall` in an
environment would not match concrete query specs properly after the index
hash of environments changed.
- [x] Search by DAG hash for specs to remove instead of by build hash
If you do this in a spack environment:
spack add hdf5+hl
hdf5+hl will be the root added to the `spack.yaml` file, and you should
really expect `hdf5+hl` to display as a root in the environment.
- [x] Add decoration to roots so that you can see the details about what
is required to build.
- [x] Add a test.
- [x] insert at beginning of list so fetch grabs local mirrors before remote resources
- [x] update the S3FetchStrategy so that it throws a SpackError if the fetch fails.
Before, it was throwing URLError, which was not being caught in stage.py.
- [x] move error handling out of S3FetchStrategy and into web_util.read_from_url()
- [x] pass string instead of URLError to SpackWebError
This changes Spack environments so that the YAML file associated with the environment is *only* written when necessary (i.e., if it is changed *by spack*). The lockfile is still written out as before.
There is a larger question here of which part of Spack should be responsible for setting defaults in config files, and how we can get rid of empty lists and data structures currently cluttering files like `compilers.yaml`. But that probably requires a rework of the default-setting validator in `spack.config`, as well as the code that uses `spack.config`. This will at least help for `spack.yaml`.
Commands like "spack mirror list" were displaying mirrors in a
different order than what was listed in the corresponding mirrors.yaml
file.
This restores commands to iterate over mirrors in the order that
they appear in the config file.
* Travis CI: Test Python 3.8
* Fix use of deprecated cgi.escape method
* Fix version comparison
* Fix flake8 F811 change in Python 3.8
* Make flake8 happy
* Use Python 3.8 for all test categories
Currently, query arguments in the Spack core are documented on the
Database._query method, where the functionality is defined.
For users of the spack python command, this makes the python builtin
method help less than ideally useful, as help(spack.store.db.query)
and help(spack.store.db.query_local) do not show relevant information.
This PR updates the doc attributes for the Database.query and
Database.query_local arguments to mirror everything after the first
line of the Database._query docstring.
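A minimal sketch of the docstring-mirroring idea described above (the class and methods here are simplified stand-ins, not Spack's actual Database; Python 3):
```
class Database(object):
    def _query(self, query_spec=None):
        """Run a query against the database.

        Args:
            query_spec: optional spec constraint to match installed specs.
        """
        return []

    def query(self, *args, **kwargs):
        """Query the installed-package database."""
        return self._query(*args, **kwargs)


# Mirror everything after the first line of _query's docstring.
_args_doc = Database._query.__doc__.split("\n", 1)[1]
Database.query.__doc__ += "\n" + _args_doc
```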
* cuda: fix conflict statements for x86-64 targets
fixes #13462
This build system mixin was not updated after the support for specific
targets was merged.
* Updated the version range of cuda that conflicts with gcc@8:
* Updated the version range of cuda that conflicts with gcc@8: for ppc64le
* Relaxed conflicts for version > 10.1
* Updated versions in conflicts
Co-Authored-By: Axel Huebl <axel.huebl@plasma.ninja>
4af4487 added a mirror_id function to most FetchStrategy
implementations that is used to calculate resource locations in
mirrors. It left out BundleFetchStrategy which broke all packages
making use of BundlePackage (e.g. xsdk). This adds a noop
implementation of mirror_id to BundleFetchStrategy so that the
download/installation of BundlePackages can proceed as normal.
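A minimal sketch of the no-op idea (class bodies simplified; not Spack's real fetch-strategy hierarchy):
```
class FetchStrategy(object):
    def mirror_id(self):
        """Return a unique, mirror-relative id for the fetched resource."""
        raise NotImplementedError


class BundleFetchStrategy(FetchStrategy):
    """Fetch strategy for bundle packages, which have no code to download."""

    def fetch(self):
        return False  # nothing to fetch

    def mirror_id(self):
        return None  # no-op: a bundle has no resource to place in a mirror
```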
The `test_changed_files` in `test/cmd/flake8.py` was failing because it calls
`ArgumentParser.parse_args()` without arguments. Normally that would just
parse `sys.argv` but it seems to fail because of something in either `spack test`
or `pytest`. Call it with an empty array so that it doesn't try to touch `sys.argv`
at all.
- [x] allow `-d` spack option for `test_changed_files`
* docs: add a spack environment for building the docs
* docs: remove tutorial and link to spack-tutorial.readthedocs.io
The tutorial now has its own standalone website, versioned by instances
of the tutorial. Link to that instead of versioning it directly with Spack.
Support mirroring all packages with `spack mirror create --all`.
In this mode there is no concretization:
* Spack pulls every version of every package into the created mirror.
* It also makes multiple attempts for each package/version combination
(if there is a temporary connection failure).
* It continues even if all attempts for a package fail, i.e., it makes its
best effort to fetch everything.
This also changes mirroring logic to prefer storing sources by their hash
or by a unique name derived from the source. For example:
* Archives with checksums are named by the sha256 sum, i.e.,
`archive/f6/f6cf3bd233f9ea6147b21c7c02cac24e5363570ce4fd6be11dab9f499ed6a7d8.tar.gz`
vs the previous `<package-name>-<package-version>.tar.gz`
* VCS repositories are stored by a path derived from their URL,
e.g. `git/google/leveldb.git/master.tar.gz`.
The new mirror layout allows different packages to refer to the same
resource or source without duplicating that download in the
mirror/cache. This change is not essential to mirroring everything but is
expected to save space when mirroring packages that all use the same
resource.
The new structure of the mirror is:
```
<base directory>/
_source-cache/ <-- the _source-cache directory is new
archive/ <-- archives/resources/patches stored by hash
00/ <-- 2-letter sha256 prefix
002748bdd0319d5ab82606cf92dc210fc1c05d0607a2e1d5538f60512b029056.tar.gz
01/
0154c25c45b5506b6d618ca8e18d0ef093dac47946ac0df464fb21e77b504118.tar.gz
0173a74a515211997a3117a47e7b9ea43594a04b865b69da5a71c0886fa829ea.tar.gz
...
git/
OpenFAST/
openfast.git/
master.tar.gz <-- repo by branch name
PHASTA/
phasta.git/
11f431f2d1a53a529dab4b0f079ab8aab7ca1109.tar.gz <-- repo by commit
...
svn/ <-- each fetch strategy has its own subdirectory
...
openmpi/ <-- the remaining package directories have the old format
openmpi-1.10.1.tar.gz <-- human-readable name is symlink to _source-cache
```
In addition to the archive names as described above, `mirror create` now
also creates symlinks with the old format to help users understand which
package each mirrored archive is associated with, and to allow mirrors to
work with old spack versions. The symlinks are relative so the mirror
directory can still itself be archived.
Other improvements:
* `spack mirror create` will not re-download resources that have already
been placed in it.
* When creating a mirror, the resources downloaded to the mirror will not
be cached (things are not stored twice).
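A minimal sketch of the hash-based archive layout shown above (the helper name is ours; path layout only):
```
import os


def source_cache_archive_path(sha256, extension="tar.gz"):
    """Place an archive under a two-letter sha256 prefix in _source-cache."""
    return os.path.join("_source-cache", "archive", sha256[:2],
                        "{0}.{1}".format(sha256, extension))


print(source_cache_archive_path(
    "f6cf3bd233f9ea6147b21c7c02cac24e5363570ce4fd6be11dab9f499ed6a7d8"))
# _source-cache/archive/f6/f6cf3bd2...ed6a7d8.tar.gz
```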
reindexing takes a significant amount of time, and there's no reason to
do it from DB version 0.9.3 to version 5. The only difference is that v5
can contain "deprecated_for" fields.
- [x] Add a `_skip_reindex` list at the start of `database.py`
- [x] Skip the reindex for upgrades in this list. The new version will
just be written to the file the first time we actually have to write
the DB out (e.g., after an install), and reads will still work fine.
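A minimal sketch of the skip-list check (version strings follow the description above; the function name is illustrative):
```
_skip_reindex = [("0.9.3", "5")]


def needs_reindex(old_version, new_version):
    """Reindex on upgrade unless the version pair is known to be cheap."""
    return (old_version, new_version) not in _skip_reindex


print(needs_reindex("0.9.3", "5"))  # False: just rewrite the DB lazily
```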
Previously, spack would error out if we tried to fetch something with no
code, but that would prevent fetching dependencies. In particular, this
would fail:
spack fetch --dependencies xsdk
- [x] Instead of raising an error, just print a message that there is nothing
to be fetched for packages like xsdk that do not have code.
- [x] Make BundleFetchStrategy a bit more quiet about doing nothing.
We've had `spack spec --yaml` for a while, and we've had methods for JSON
for a while as well. We just haven't had a `--json` argument for `spack spec`.
- [x] Add a `--json` argument to `spack spec`, just like `--yaml`
New entry for K10 microarchitecture.
Reorder Zen* microarchitectures to avoid triggering as k10.
Remove some desktop-specific flags that were preventing Opteron Bulldozer/Piledriver/Steamroller/Excavator CPUs from being recognized as such.
Remove one or two flags which weren't produced in /proc/cpuinfo on older OS (RHEL6 and friends).
Rename the `spack diy` command to `spack dev-build` to make the use case clearer.
The `spack diy` command has some useful functionality for developers using Spack to build their dependencies and configure/build/install the code they are developing. Developers do not notice it, partly because of the obscure name.
The `spack dev-build` command has a `-u/--until PHASE` option to stop after a given phase of the build. This can be used to configure your project, run cmake on your project, or similarly stop after any stage of the build the user wants. These options are analogous to the existing `spack configure` and `spack build` commands, but for developer builds.
To unify the syntax, we have deprecated the `spack configure` and `spack build` commands, and added a `-u/--until PHASE` option to the `spack install` command as well.
The functionality in `spack dev-build` (specifically `spack dev-build -u cmake`) may be able to supersede the `spack setup` command, but this PR does not deprecate that command as that will require slightly more thought.
fd58c98 formats the `Stage`'s `archive_path` in `Stage.archive` (as part of `web.push_to_url`). This is not needed, and if the formatted path differs from the original (for example if the archive file name contains a URL query suffix), then the copy fails.
This removes the formatting that occurs in `web.push_to_url`.
We should figure out a way to handle bad cases like this *and* to have nicer filenames for downloaded files. One option that would work in this particular case would be to also pass `-J` / `--remote-header-name` to `curl`. We'll need to do follow-up work to determine if we can use `-J` everywhere.
See also: https://github.com/spack/spack/pull/11117#discussion_r338301058
Add a new entry in `config.yaml`:
  config:
    shared_linking: 'rpath'
If this variable is set to `rpath` (the default) Spack will set RPATH in ELF binaries. If set to `runpath` it will set RUNPATH.
Details:
* Spack cc wrapper explicitly adds `--disable-new-dtags` when linking
* cc wrapper also strips `--enable-new-dtags` from the compile line
when disabling (and vice versa)
* We specifically do *not* add any dtags flags on macOS, which uses
Mach-O binaries, not ELF (so there's no RUNPATH).
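A minimal sketch of the flag handling described in the details above (the real logic lives in Spack's cc shell wrapper; this Python stand-in only illustrates the filtering):
```
def adjust_dtag_flags(args, shared_linking="rpath", is_elf=True):
    dtag_flags = ("--enable-new-dtags", "--disable-new-dtags")
    if not is_elf:  # e.g. Mach-O on macOS: no dtag flags at all
        return [a for a in args if a not in dtag_flags]
    wanted = ("--enable-new-dtags" if shared_linking == "runpath"
              else "--disable-new-dtags")
    unwanted = dtag_flags[dtag_flags.index(wanted) - 1]
    return [a for a in args if a != unwanted] + [wanted]


print(adjust_dtag_flags(["-lm", "--enable-new-dtags"]))
# ['-lm', '--disable-new-dtags']
```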
`spack deprecate` allows for the removal of insecure packages with minimal impact to their dependents. It allows one package to be symlinked into the prefix of another to provide seamless transition for rpath'd and hard-coded applications using the old version.
Example usage:
spack deprecate /hash-of-old-openssl /hash-of-new-openssl
The spack deprecate command is designed for use only in extraordinary circumstances. It makes no promises about binary compatibility; it is up to the user to ensure the replacement is suitable for the deprecated package.
Previously this command only showed total counts for each regular
expression. This doesn't give you a sense of which regexes are working
well and which ones are not. We now display the number of right, wrong,
and total URL parses per regex.
It's easier to see where we might improve the URL parsing with this
change.
This updates the configuration loading/dumping logic (now called
load_config/dump_config) in spack_yaml to preserve comments (by using
ruamel.yaml's RoundTripLoader). This has two effects:
* environment spack.yaml files expect to retain comments, which
load_config now supports. By using load_config, users can now use the
':' override syntax that was previously unavailable for environment
configs (but was available for other config files).
* config files now retain user comments by default (although in cases
where Spack updates/overwrites config, the comments can still be
removed).
Details:
* Subclasses `RoundTripLoader`/`RoundTripDumper` to parse yaml into
ruamel's `CommentedMap` and analogous data structures
* Applies filename info directly to ruamel objects in cases where the
updated loader returns those
* Copies management of sections in `SingleFileScope` from #10651 to allow
overrides to occur
* Updates the loader/dumper to handle the processing of overrides by
specifically checking for the `:` character
* Possibly the most controversial aspect, but without that, the parsed
objects have to be reconstructed (i.e. as was done in
`mark_overrides`). It is possible that `mark_overrides` could remain
and a deep copy will not cause problems, but IMO that's generally
worth avoiding.
* This is also possibly controversial because Spack YAML strings can
include `:`. My reckoning is that this only occurs for version
specifications, so it is safe to check for `endswith(':') and not
('@' in string)`
* As a consequence, this PR ends up reserving spack yaml functions
load_config/dump_config exclusively for the purpose of storing spack
config
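A minimal sketch of the round-trip loading described above, using ruamel.yaml's legacy load/dump API (which Spack vendors); the YAML content is illustrative:
```
import ruamel.yaml as yaml

text = """\
# this comment should survive a load/dump cycle
config:
  build_jobs: 4
"""

data = yaml.load(text, Loader=yaml.RoundTripLoader)   # a CommentedMap
data["config"]["build_jobs"] = 8
print(yaml.dump(data, Dumper=yaml.RoundTripDumper), end="")
```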
`test_environment_status()` was printing extra output during tests.
- [x] disable output only for `env('status')` calls instead of disabling
it for the whole test.
This PR ensures that environment activation sets all environment variables set by the equivalent `module load` operations, except that the spec prefixes are "rebased" to the view associated with the environment.
Currently, Spack blindly adds paths relative to the environment view root to the user environment on activation. Issue #12731 points out ways in which this behavior is insufficient.
This PR changes that behavior to use the `setup_run_environment` logic for each package to augment the prefix inspections (as in Spack's modulefile generation logic) to ensure that all necessary variables are set to make use of the packages in the environment.
See #12731 for details on the previous problems in behavior.
This PR also updates the `ViewDescriptor` object in `spack.environment` to have a `__contains__` method. This allows for checks like `if spec in self.default_view`. The `__contains__` operator for `ViewDescriptor` objects checks whether the spec satisfies the filters of the View descriptor, not whether the spec is already linked into the underlying `FilesystemView` object.
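A minimal sketch of the `__contains__` semantics described above (the filter logic here is deliberately simplified and not `ViewDescriptor`'s real implementation):
```
class ViewDescriptor(object):
    def __init__(self, select=None, exclude=None):
        self.select = select or []    # package names to include
        self.exclude = exclude or []  # package names to exclude

    def __contains__(self, spec_name):
        if self.select and spec_name not in self.select:
            return False
        return spec_name not in self.exclude


view = ViewDescriptor(exclude=["python"])
print("hdf5" in view)    # True: satisfies the view's filters
print("python" in view)  # False: excluded, even if not yet linked
```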
This PR ensures that on Darwin we always append /sbin and /usr/sbin to PATH, if they are not already present, when looking for sysctl.
* Make sure we look into /sbin and /usr/sbin for sysctl
* Refactor sysctl for better readability
* Remove marker to make test pass
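A minimal sketch of the search-path handling (the helper name is ours):
```
import os


def sysctl_search_path():
    """PATH entries to search for sysctl, always including the sbin dirs."""
    path = os.environ.get("PATH", "").split(os.pathsep)
    for extra in ("/sbin", "/usr/sbin"):
        if extra not in path:
            path.append(extra)
    return path
```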
These changes update our gcc microarchitecture descriptions based on manuals found here https://gcc.gnu.org/onlinedocs/ and assuming that new architectures are not added during patch releases.
This extends Spack so that it can fetch sources and binaries from, push sources and binaries to, and index the contents of mirrors hosted on an S3 bucket.
High level to-do list:
- [x] Extend mirrors configuration to add support for `file://`, and `s3://` URLs.
- [x] Ensure all fetching, pushing, and indexing operations work for `file://` URLs.
- [x] Implement S3 source fetching
- [x] Implement S3 binary mirror indexing
- [x] Implement S3 binary package fetching
- [x] Implement S3 source pushing
- [x] Implement S3 binary package pushing
Important details:
* refactor URL handling to handle S3 URLs and mirror URLs more gracefully.
- updated parse() to accept already-parsed URL objects. an equivalent object
is returned with any extra s3-related attributes intact. Objects created with
urllib can also be passed, and the additional s3 handling logic will still be applied.
* update mirror schema/parsing (mirror can have separate fetch/push URLs)
* implement s3_fetch_strategy/several utility changes
* provide more feature-complete S3 fetching
* update buildcache create command to support S3
* Move the core logic for reading data from S3 out of the s3 fetch strategy and into
the s3 URL handler. The s3 fetch strategy now calls into `read_from_url()` Since
read_from_url can now handle S3 URLs, the S3 fetch strategy is redundant. It's
not clear whether the ideal design is to have S3 fetching functionality in a fetch
strategy, directly implemented in read_from_url, or both.
* expanded what can be passed to `spack buildcache` via the -d flag: In addition
to a directory on the local filesystem, the name of a configured mirror can be
passed, or a push URL can be passed directly.
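A minimal sketch of the parse() behavior described in the first bullet above (Python 3 standard-library names only; Spack's real helper also attaches s3-specific attributes):
```
from urllib.parse import ParseResult, urlparse


def parse(url):
    """Accept either a URL string or an already-parsed URL object."""
    if isinstance(url, ParseResult):
        return url
    return urlparse(url)


print(parse("s3://my-bucket/mirror").scheme)          # s3
print(parse(urlparse("file:///tmp/mirror")).scheme)   # file
```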
fixes #13073
Since #3206 was merged, bootstrapping environment-modules was using the architecture of the current host or the best match supported by the default compiler. The former case is an issue since shell integration was looking for a spec targeted at the host microarchitecture.
1. Bootstrap environment-modules targeted at generic architectures
2. Look for generic targets in shell integration scripts
3. Add a new entry in Travis to test shell integration
Custom string versions for compilers were raising a ValueError on
conversion to int. This commit fixes the behavior by trying to detect
the underlying compiler version when in presence of a custom string
version.
* Refactor code that deals with custom versions for better readability
* Partition version components with a regex
* Fix semantic of custom compiler versions with a suffix
* clang@x.y-apple has been special-cased
* Add unit tests
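A minimal sketch of partitioning a custom version string with a regex (the pattern is illustrative, not necessarily Spack's exact one):
```
import re


def version_components(version_string):
    """Split '<dotted-version>[-<suffix>]' into its parts."""
    match = re.match(r"([\d.]*)(-?)(.*)", version_string)
    return match.groups() if match else ("", "", "")


print(version_components("9.1.0-apple"))  # ('9.1.0', '-', 'apple')
print(version_components("myclang"))      # ('', '', 'myclang')
```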
We've been doing this for quite a while now, and it does not seem to
cause issues.
- [x] Switch the noisy warning to a debug to make Spack a bit quieter
while building.
* Added architecture specific optimization flags for Clang / LLVM
* Disallow compiler optimizations for mixed toolchains
* We emit a warning when building for a mixed toolchain
* Fixed issues with suffixed versions of compilers; Apple's Clang will,
for the time being, fall back on x86-64 for every compilation.
* Methods setting the environment now do it separately for build and run
Before this commit the `*_environment` methods were setting
modifications to both the build-time and run-time environment
simultaneously. This might cause issues as the two environments
inherently rely on different preconditions:
1. The build-time environment is set before building a package, thus
the package prefix doesn't exist and can't be inspected
2. The run-time environment instead is set assuming the target package
has been already installed
Here we split each of these functions into two: one setting the
build-time environment, one the run-time.
We also adopt a fallback strategy that inspects for old methods and
executes them as before, but prints a deprecation warning to tty. This
permits porting packages to the new methods incrementally,
rather than having to modify all the packages at once.
* Added a test that fails if any package uses the old API
Marked the test xfail for now as we have a lot of packages in that
state.
* Added a test to check that a package modified by a PR is up to date
This test can be used any time we deprecate a method call to ensure
that during the first modification of the package we update also
the deprecated calls.
* Updated documentation
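A minimal sketch of the split described above (method names follow the new interface; the package and environment classes are simplified stand-ins):
```
class EnvironmentModifications(object):
    def __init__(self):
        self.mods = []

    def set(self, name, value):
        self.mods.append(("set", name, value))

    def prepend_path(self, name, path):
        self.mods.append(("prepend_path", name, path))


class MyPackage(object):
    prefix = "/opt/spack/mypackage"  # illustrative install prefix

    def setup_build_environment(self, env):
        # build time: the prefix may not exist yet, so don't inspect it
        env.set("MYPKG_BUILD_TYPE", "Release")

    def setup_run_environment(self, env):
        # run time: the package is installed, its prefix can be used
        env.prepend_path("PATH", self.prefix + "/bin")
```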
Python 3 metaclasses have a `__prepare__` method that lets us save the
class's dictionary before it is constructed. In Python 2 we had to walk
up the stack using our `caller_locals()` method to get at this. Using
`__prepare__` is much faster as it doesn't require us to use `inspect`.
This makes multimethods use the faster `__prepare__` method in Python3,
while still using `caller_locals()` in Python 2. We try to reduce the
use of caller locals using caching to speed up Python 2 a little bit.
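A minimal sketch of capturing the class namespace with `__prepare__` (Python 3 only; this is not Spack's multimethod code, just the mechanism it relies on):
```
class NamespaceCapturingMeta(type):
    captured = {}

    @classmethod
    def __prepare__(metacls, name, bases, **kwargs):
        namespace = {}
        metacls.captured[name] = namespace  # saved before the body runs
        return namespace


class Demo(metaclass=NamespaceCapturingMeta):
    def install(self):
        pass


print("install" in NamespaceCapturingMeta.captured["Demo"])  # True
```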
Our importer was always parsing from source (which is considerably
slower) because the source size recorded in the .pyc file differed from
the size of the input file.
Override path_stats in the prepending importer to fool it into thinking
that the source size is the size *with* the prepended code.
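A minimal sketch of the override using standard importlib names (the prepended code and loader wiring are illustrative, not Spack's importer):
```
from importlib.machinery import SourceFileLoader

PREPEND = b"# prepended by the loader\n"


class PrependingLoader(SourceFileLoader):
    def get_data(self, path):
        data = super(PrependingLoader, self).get_data(path)
        if path.endswith(".py"):  # only prepend to source, not .pyc data
            data = PREPEND + data
        return data

    def path_stats(self, path):
        stats = dict(super(PrependingLoader, self).path_stats(path))
        stats["size"] += len(PREPEND)  # report size *with* prepended code
        return stats
```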
Since the backup file is only created on the first invocation, it will
contain the original file without any modifications. Further invocations
will then read the backup file, effectively reverting prior invocations.
This can be reproduced easily by trying to install likwid, which will
try to install into /usr/local. Work around this by creating a temporary
file to read from.
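A minimal sketch of the workaround (function name and behavior simplified relative to Spack's filter_file): read from a temporary snapshot so repeated invocations see the current contents rather than a stale backup.
```
import os
import shutil
import tempfile


def filter_file(path, old, new):
    """Rewrite 'path', reading from a temporary snapshot of its contents."""
    fd, snapshot = tempfile.mkstemp()
    os.close(fd)
    try:
        shutil.copy(path, snapshot)
        with open(snapshot) as src, open(path, "w") as dst:
            for line in src:
                dst.write(line.replace(old, new))
    finally:
        os.remove(snapshot)
```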
* This updates stage names to use "spack-stage-" as a prefix.
This avoids removing non-Spack directories in "spack clean" as
c141e99 did (in this case so long as they don't contain the
prefix "spack-stage-"), and also addresses a follow-up issue
where Spack stage directories were not removed.
* Spack now does more-stringent checking of expected permissions for
staging directories. For a given stage root that includes a user
component, all directories before the user component that are
created by Spack are expected to match the permissions of their
parent; the user component and all deeper directories are expected
to be accessible to the user (read/write/execute).
This feature generates a verification manifest for each installed
package and provides a command, "spack verify", which can be used to
compare the current file checksums/permissions with those calculated
at installed time.
Verification includes
* Checksums of files
* File permissions
* Modification time
* File size
Packages installed before this PR will be skipped during verification.
To verify such a package you must reinstall it.
The spack verify command has three modes.
* With the -a,--all option it will check every installed package.
* With the -f,--files option, it will check some specific files,
determine which package they belong to, and confirm that they have
not been changed.
* With the -s,--specs option or by default, it will check specific
packages to confirm that no files have changed.
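A minimal sketch of one manifest entry covering the attributes listed above (the JSON layout is illustrative, not the on-disk format used by `spack verify`):
```
import hashlib
import json
import os


def manifest_entry(path):
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    st = os.stat(path)
    return {"sha256": digest, "mode": st.st_mode,
            "mtime": int(st.st_mtime), "size": st.st_size}


print(json.dumps(manifest_entry(__file__), indent=2))
```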
fixes #13005
This commit fixes an issue with the name of the root directory for
module file hierarchies. Since #3206 the root folder was named after
the microarchitecture used for the spec, which is too specific and
not backward compatible for lmod hierarchies. Here we compute the
root folder name using the target family instead of the target name
itself and we add target information in the 'whatis' portion of the
module file.
From Python docs:
--
'surrogateescape' will represent any incorrect bytes as code points in
the Unicode Private Use Area ranging from U+DC80 to U+DCFF. These
private code points will then be turned back into the same bytes when
the surrogateescape error handler is used when writing data. This is
useful for processing files in an unknown encoding.
--
This will allow us to process files with unknown encodings.
To accommodate the case of self-extracting bash scripts, filter_file
can now stop filtering text input if a certain marker is found. The
marker must be passed at call time via the "stop_at" function argument.
At that point the file will be reopened in binary mode and copied
verbatim.
* use "surrogateescape" error handling to ignore unknown chars
* permit to stop filtering if a marker is found
* add unit tests for non-ASCII and mixed text/binary files
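A minimal sketch of the two behaviors above (simplified: it stays in text mode past the marker instead of reopening the file in binary mode):
```
def filter_lines(path, old, new, stop_at=None):
    with open(path, errors="surrogateescape") as f:
        lines = f.readlines()
    out, stopped = [], False
    for line in lines:
        if stopped:
            out.append(line)  # copied verbatim after the marker
            continue
        out.append(line.replace(old, new))
        if stop_at is not None and stop_at in line:
            stopped = True
    with open(path, "w", errors="surrogateescape") as f:
        f.writelines(out)
```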
- Add a test that verifies checksums on all packages
- Also add an attribute to packages that indicates whether they need a
manual download or not, and add an exception in the tests for these
packages until we can verify them.
Both floating-point and NEON are required in all standard ARMv8
implementations. In theory, though, specialized markets can support
implementations with no NEON or floating-point at all. Source:
https://developer.arm.com/docs/den0024/latest/aarch64-floating-point-and-neon
On the other hand the base procedure call standard for Aarch64
"assumes the availability of the vector registers for passing
floating-point and SIMD arguments". Further "the Arm 64-bit
architecture defines two mandatory register banks: a general-purpose
register bank which can be used for scalar integer processing and
pointer arithmetic; and a SIMD and Floating-Point register bank".
Source:
https://developer.arm.com/docs/ihi0055/latest/procedure-call-standard-for-the-arm-64-bit-architecture
This makes an Aarch64 customization with no NEON instruction set
so unlikely that we can consider NEON and floating-point features of
the generic family.
This PR adds a 'concretize' entry to an environment's spack.yaml file
which controls how user specs are concretized. By default it is
set to 'separately' which means that each spec added by the user is
concretized separately (the behavior of environments before this PR).
If set to 'together', the environment will concretize all of the
added user specs together; this means that all specs and their
dependencies will be consistent with each other (for example, a
user could develop code linked against the set of libraries in the
environment without conflicts).
If the environment was previously concretized, this will re-concretize
all specs, in which case previously-installed specs may no longer be
used by the environment (in this sense, adding a new spec to an
environment with 'concretize: together' can be significantly more
expensive).
The 'concretize: together' setting is not compatible with Spec
matrices; this PR adds a check to look for multiple instances of the
same package added to the environment and fails early when
'concretize: together' is set (to avoid confusing messages about
conflicts later on).
While the build environment already takes share/pkgconfig into account,
the generated module files etc. only consider lib/pkgconfig and
lib64/pkgconfig.
When removing support for dotkit in #11986 the code trying to set the
paths of the various module files was not updated to skip it. This
results in a failure because of a key error after the deprecation
warning is displayed to the user.
This commit fixes the issue and adds a unit test for regression.
Note that code for Spack chains has been updated accordingly but
no unit test has been added for that case.
Dotkit is being used only at a few sites and has been deprecated on new
machines. This commit removes all the code that provide support for the
generation of dotkit module files.
A new validator named "deprecatedProperties" has been added to the
jsonschema validators. It permits printing a warning message or exiting
with an error if a property that has been marked as deprecated is
encountered.
* Removed references to dotkit in the docs
* Removed references to dotkit in setup-env-test.sh
* Added a unit test for the 'deprecatedProperties' schema validator
fixes #12915, closes #12916
Since Spack has support for specific targets it might happen that
software is built for targets that are not exactly the host because
it was either an explicit user request or the compiler being used is
too old to support the host.
Modules for different targets are written into different directories
and by default Spack was adding to MODULEPATH only the directory
corresponding to the current host. This PR modifies this behavior to
add all the directories that are **compatible** with the current host.
Sometimes when remove_file is called on a link, that link is missing
(perhaps ctrl-C happened halfway through a previous action). As
removing a non-existent file is no problem, this patch changes the
behavior so Spack continues rather than stopping with an error.
Previously you would see
ValueError: /path/to/dir is not a link tree!
and now it continues with a warning.
bin/spack now needs to have a "-*- python -*-" line after the shebang, so
that emacs will interpret it as a python file instead of as a shell
script. Add one line to the license check limit to accommodate this.
The output of subprocess.check_output is a byte string in Python 3. This causes dictionary lookup to fail later on.
A try-except around this function prevented this error from being noticed. Removed this so that more errors can propagate out.
Preferred targets were failing because we were looking them up by
Microarchitecture object, not by string.
- [x] Add a call to `str()` to fix target lookup.
- [x] Add a test to exercise this part of concretization.
- [x] Add documentation for setting `target` in `packages.yaml`
* microarchitectures: zen starts from x86_64, not from excavator
* Unit tests: fixed a test that is wrong with the new modeling
* microarchitectures: fixed features and inheritance for the 15h family:
bulldozer doesn't inherit from barcelona (10h); added the xop, lwp and tbm
instruction sets to the 15h family (they distinguish it from the 17h family)
Addresses #12804
This PR adds the creation of the remaining (16) templates to ensure we can create them with the expected content. The goal is to make it easier to catch template errors during testing.
Spack doesn't need `requests`, and neither does `jsonschema`, but
`jsonschema` tries to import it, and it'll succeed if `requests` is on
your machine (which is likely, given how popular it is). This commit
removes the import to improve Spack's startup time a bit.
On a mac with SSD, the import of requests is ~28% of Spack's startup time
when run as `spack --print-shell-vars sh,modules` (.069 / .25 seconds),
which is what `setup-env.sh` runs.
On a Linux cluster where Python is mounted from NFS, this reduces
`setup-env.sh` source time from ~1s to .75s.
Note: This issue will be eliminated if we upgrade to a newer `jsonschema`
(we'd need to drop Python 2.6 for that). See
https://github.com/Julian/jsonschema/pull/388.
- This is needed to support Cray machines -- we need an architecture
mic_knl > x86_64
- We used Cray's naming scheme for this target to make it work seamlessly
with the module-based detection scheme on Cray. mic_knl is pretty
much dead, so this will be the last such target. We will need to work
with Cray and other vendors in the future.
Seamless translation from 'target=<generic>' to either
- target.family == <generic> (in methods)
- 'target=<generic>:' (in directives)
Also updated docs to show ranges in directives.
Spack can now:
- label ppc64, ppc64le, x86_64, etc. builds with specific
microarchitecture-specific names, like 'haswell', 'skylake' or
'icelake'.
- detect the host architecture of a machine from /proc/cpuinfo or similar
tools.
- Understand which microarchitectures are compatible with which (for
binary reuse)
- Understand which compiler flags are needed (for GCC, so far) to build
binaries for particular microarchitectures.
All of this is managed through a JSON file (microarchitectures.json) that
contains detailed auto-detection, compiler flag, and compatibility
information for specific microarchitecture targets. The `llnl.util.cpu`
module implements a library that allows detection and comparison of
microarchitectures based on the data in this file.
The `target` part of Spack specs is now essentially a Microarchitecture
object, and Specs' targets can be compared for compatibility as well.
This allows us to label optimized binary packages at a granularity that
enables them to be reused on compatible machines. Previously, we only
knew that a package was built for x86_64, NOT which x86_64 machines it
was usable on.
Currently this feature supports Intel, Power, and AMD chips. Support for
ARM is forthcoming.
Specifics:
- Add microarchitectures.json with descriptions of architectures
- Relaxed semantic of compiler's "target" attribute. Before this change
the semantic to check if a compiler could be viable for a given target
was exact match. This made sense as the finest granularity of targets
was architecture families. Now that we can target micro-architectures,
this commit changes the semantics to interpret what is stored in the
compiler's "target" attribute as an architecture family. A compiler
is then a viable choice if the target being concretized belongs to the
same family. Similarly when a new compiler is detected the architecture
family is stored in the "target" attribute.
- Make Spack's `cc` compiler wrapper inject target-specific flags on the
command line
- Architecture concretization updated to use the same algorithm as
compiler concretization
- Micro-architecture features, vendor, generation etc. are included in
the package hash. Generic architectures, such as x86_64 or ppc64, are
still dumped using the name only.
- If the compiler for a target is not supported exit with an intelligible
error message. If the compiler support is unknown don't try to use
optimization flags.
- Support and define feature aliases (e.g., sse3 -> ssse3) in
microarchitectures.json and on Microarchitecture objects. Feature
aliases are defined in targets.json and map a name (the "alias") to a
list of rules that must be met for the test to be successful. The rules
that are available can be extended later using a decorator.
- Implement subset semantics for comparing microarchitectures (treat
microarchitectures as a partial order, i.e. (a < b), (a == b) and (b <
a) can all be false); see the sketch after this list.
- Implement logic to automatically demote the default target if the
compiler being used is too old to optimize for it. Updated docs to make
this behavior explicit. This avoids surprising the user if the default
compiler is older than the host architecture.
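A minimal sketch of the partial-order comparison mentioned in the list above (the ancestry data is invented for illustration):
```
ANCESTORS = {
    "x86_64": set(),
    "haswell": {"x86_64"},
    "skylake": {"x86_64", "haswell"},
    "aarch64": set(),
    "thunderx2": {"aarch64"},
}


def strictly_older(a, b):
    """True if 'a' is an ancestor of (more generic than) 'b'."""
    return a in ANCESTORS[b]


a, b = "haswell", "thunderx2"
print(strictly_older(a, b), a == b, strictly_older(b, a))
# False False False -> the two targets are incomparable
```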
This commit adds unit tests to verify the semantics of target ranges and
target lists in constraints. The implementation to allow target ranges
and lists is minimal and doesn't add any new type. A more careful
refactor that takes into account the type system might be due later.
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Add llnl.util.cpu_name, with initial support for detecting different
microarchitectures on Linux. This also adds preliminary changes for
compiler support and variants to control the optimization levels by
target.
This does not yet include translations of targets to particular
compilers; that is left to another PR.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Move verbose messages to debug level
get_patchelf should return None for test platform as well because create_buildinfo invokes patchelf to get rpaths.
Update command-line (CLI) parsing to understand references to yaml
files that store Spack specs. Where a file reference is encountered,
the full Spec in the file will be read in. A file reference may
appear anywhere that a spec previously could. For example, if you
write "spack spec -y openmpi > openmpi.yaml" you may then install the
spec using the yaml file by running "spack install ./openmpi.yaml";
you can also refer to dependencies in this way (e.g.
"spack install foo^./openmpi.yaml").
There are two requirements for file references:
* A file path entered on the CLI must include a "/" even if the file
exists in your current working directory. For example, if you
create an openmpi.yaml file as above and run
"spack install openmpi.yaml" from the same directory, it will
report an error.
* A file path entered on the CLI must end with ".yaml"
This commit adds error messages to clearly inform the user of both
violations.
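A minimal sketch of the two rules (the helper name is ours, not the CLI's internal function):
```
import os


def is_spec_file_reference(arg):
    """A CLI token names a spec file only if it ends in .yaml and has a /."""
    return arg.endswith(".yaml") and os.sep in arg


print(is_spec_file_reference("./openmpi.yaml"))  # True
print(is_spec_file_reference("openmpi.yaml"))    # False: missing "/"
```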
* implicit_rpaths are now removed from compilers.yaml config and are always instantiated dynamically, this occurs one time in the build_environment module
* per-compiler lists of required libraries (e.g. libstdc++, libgfortran) are used to whitelist directories containing those libraries for rpaths; non-whitelisted implicit rpaths are removed. Some libraries are defaults for all compilers.
* reintroduce 'implicit_rpaths' as a config variable that can be used to disable Spack insertion of compiler RPATHs generated at build time.
Fixes #12732, fixes #12767. c22a145 added automatic detection and RPATHing of compiler libraries
to Spack builds. However, in cases where the parsing/detection logic
fails this was terminating the build. This makes the compiler library
detection "best-effort" and reports an issue when the detection fails
rather than terminating the build.
This is similar to #10191. The Ubuntu package for clang 8.0.0 displays
a very unusual version string, and we need this new regex to detect it
as just 8.0.0
Unit tests have been complemented by the output that was failing
detection.
- Fix trailing whitespace missed by the bug described in #12755.
- Fix other style issues that have crept in over time (this can happen
when flake8 adds new checks with new versions)
E501 (line too long) exemptions are probably our most common ones -- we
add them for directives, URLs, hashes, etc. in packages. But we
currently add them even when a line *doesn't* need them, which can mask
trailing whitespace errors.
This changes `spack flake8` so that it will only add E501 exemptions if
the line is *actually* too long.
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
mock_archive can now take multiple extension / tar option pairs (default matches old behavior).
url_fetch.test_fetch tests more archive types.
compression.EXTS split into EXTS and NOTAR_EXTS to avoid unwanted, non-meaningful combinatoric extensions such as .tar.tbz2.
- previously spec parsing didn't allow you to look up missing (but still
known) specs by hash
- This allows you to reference and potentially reinstall
force-uninstalled dependencies
- add testing for force uninstall and for reference by spec
- cmd/install tests now use mutable_database
* When cleaning the stage root, only remove directories that appear
to be used for staging Spack packages. Previously Spack was clearing
all directories in the stage root, which could remove content not
related to Spack if the user chose a staging root which contains
files/directories not managed by Spack.
* The documentation is updated with warnings about choosing a stage
directory that is only managed by Spack (although generally the
check added in this PR for "spack clean" should avoid removing
content that was not created by Spack)
* The default stage directory (in config.yaml) is now
$tempdir/$user/spack-stage and the logic is updated to omit the
$user portion of this path if $tempdir already contains a $user
directory.
* When creating stage root assign user read/write permissions to all
directories in the path under $user. Previously Spack was assigning
the permissions of the first existing parent directory
`spec.prefix` reads from Spack's database, and if you do this with
multiple consecutive read transactions, it can take a long time. Or, at
least, you can see the paths get written out one by one.
This uses an outer read transaction to ensure that actual disk locks are
acquired only once for the whole `spack find` operation, and that each
transaction inside `spec.prefix` is an in-memory operation. This speeds
up `spack find -p` a lot.
Refactor `spack.cmd.display_specs()` and `spack find` so that any options
can be used together with -d. This cleans up the display logic
considerably, as there are no longer multiple "modes".
This is another machine-readable version of `spack find`. Supplying the
`--json` argument causes specs to be written out as json records,
easily filtered with tools like jq.
e.g.:
$ spack find --json python | jq -C ".[] | { name, version } "
[
{
"name": "python",
"version": "2.7.16"
},
{
"name": "bzip2",
"version": "1.0.8"
}
]
- spack find --format allows you to supply a format string and have specs
output in a more machine-readable way, without decoration
e.g.:
spack find --format "{name}-{version}-{hash}"
autoconf-2.69-icynozk7ti6h4ezzgonqe6jgw5f3ulx4
automake-1.16.1-o5v3tc77kesgonxjbmeqlwfmb5qzj7zy
bzip2-1.0.6-syohzw57v2jfag5du2x4bowziw3m5p67
...
or:
spack find --format "{hash}"
icynozk7ti6h4ezzgonqe6jgw5f3ulx4
o5v3tc77kesgonxjbmeqlwfmb5qzj7zy
syohzw57v2jfag5du2x4bowziw3m5p67
...
This is intended to make it much easier to script with `spack find`
When Spack installs a package it writes the package.py file and
patches to a separate repository (which reflects the state of the
package at the time it was installed). Previously, Spack only wrote
patches that were used at installation time. This updates the
archiving step to include all patch files that are relevant to the
package (in case that repository is used in another context).
This commit removes redundant calls to `libtoolize` and `aclocal`.
Some configurations, such as a Spack user using macOS with a
Homebrew-installed `libtool` added to their `packages.yaml`, have
`autoreconf` and GNU libtoolize installed as `glibtoolize`, but not
`libtoolize`. While Spack installations of `libtool` built from source
would install `glibtoolize` and symlink `libtoolize` to `glibtoolize`,
an external installation of GNU libtoolize as `glibtoolize` will not
have such a symlink, and thus the call `m.libtoolize()` will throw an
error because `libtoolize` does not exist at the path referenced by
`m.libtoolize()` (i.e.,
`self.spec['libtool'].prefix.bin.join('libtoolize')`).
However, on these same systems, `autoreconf` runs correctly, and calls
`glibtoolize` instead of `libtoolize`, when appropriate. Thus,
removing the call to `libtoolize` should resolve the error mentioned
above.
The redundant call to `aclocal` is also removed in this commit because
the maintainers of GNU Automake state that "`aclocal` is expected to
disappear" and suggest that downstream users never call `aclocal`
directly -- rather, they suggest calling `autoreconf` instead.
Uses code from CMake to detect implicit link paths from compilers
System paths are filtered out of implicit link paths
Implicit link paths added to compiler config and object under `implicit_rpaths`
Implicit link paths added as rpaths to compile line through env/cc wrapper
Authored by: "Ben Boeckel <ben.boeckel@kitware.com>"
Co-authored by: "Peter Scheibel <scheibel1@llnl.gov>"
Co-authored by: "Gregory Becker <becker33@llnl.gov>"
c9e214f updated template creation by passing **kwargs to package
template classes but the template classes were not updated to accept
them; this adds **kwargs to package template initializers where they
are needed.
Having a non-directory invisible file causes `spack find` to die. This
fixes the logic to ignore invalid module names but only warn if they're
visible.
```
NotADirectoryError: [Errno 20] Not a directory: '/spack/var/spack/repos/builtin/packages/.DS_Store/package.py'
```
This adds a special package type to Spack which is used to aggregate
a set of packages that a user might commonly install together; it
does not include any source code itself and does not require a
download URL like other Spack packages. It may include an 'install'
method to generate scripts, and Spack will run post-install hooks
(including module generation).
* Add new BundlePackage type
* Update the Xsdk package to be a BundlePackage and remove the
'install' method (previously it had a noop install method)
* "spack create --template" now takes "bundle" as an option
* Rename cmd_create_repo fixture to "mock_test_repo" and relocate it
to shared pytest fixtures
* Add unit tests for BundlePackage behavior
This allows "spack spec --yaml" to generate a spec YAML file that can
be used with "spack install -f". Before, this would fail in cases
where the spec had build dependencies.
* All fetch strategies now accept the Boolean version keyword option `no_cache` in order to allow per-version control of cache-ability.
* New git-specific version keyword option `get_full_repo` (Boolean). When true, disables the default `--depth 1` and `--single-branch` optimizations that are applied if supported by the git version and (in the former case) transport protocol.
* The try / catch block attempting `--depth 1` and retrying on failure has been removed in favor of more accurately ascertaining when the `--depth` option should work based on git version and protocol choice. Any failure is now treated as a real problem, and the clone is only attempted once.
* Test improvements:
* `mock_git_repository.checks[type_of_test].args['git']` is now specified as the URL (with leading `file://`) in order to avoid complaints when using `--depth`.
* New type_of_test `tag-branch`.
* mock_git_repository now provides `git_exe`.
* Improved the action of the `git_version` fixture, which was previously hard-wired.
* New tests of `--single-branch` and `--depth 1` behavior.
* Add documentation of new options to the packaging guide.
- mkdirp now takes arguments to allow it to properly set permissions on created directories.
- Two arguments (group and mode) set permissions for the leaf directory.
- Intermediate directories can inherit permissions from either the topmost existing directory (the parent) or the leaf.
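A minimal sketch of the leaf-directory part of the new signature (intermediate-directory inheritance is omitted; argument names follow the description above):
```
import grp
import os


def mkdirp(path, group=None, mode=None):
    os.makedirs(path, exist_ok=True)
    if mode is not None:
        os.chmod(path, mode)  # leaf directory permissions
    if group is not None:
        gid = grp.getgrnam(group).gr_gid
        os.chown(path, -1, gid)  # leave the owner unchanged
```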
On machines where $TMP is owned by a gid with no name, this avoids the
following error when the default spack stage does not exist:
(spackbook):spack$ spack clean
==> Removing all temporary build stages
==> Error: 'getgrgid(): gid not found: 57095'
Spack needs to deal with gids directly unless users pass them in.
Compiler caching was using the `id()` function to refer to configuration dictionary objects. If these objects are garbage-collected, this can produce incorrect results (false positive cache hits). This change replaces `id()` with an object that keeps a reference to the config dictionary so that it is not garbage-collected.
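A minimal sketch of the fix's key idea (the class name is illustrative): hold a reference to the dictionary so its id() can never be recycled while it is used as a cache key.
```
class CacheReference(object):
    """Hashable stand-in for an unhashable object used as a cache key."""

    def __init__(self, obj):
        self.obj = obj        # keeping the reference prevents GC
        self._id = id(obj)

    def __hash__(self):
        return self._id

    def __eq__(self, other):
        return isinstance(other, CacheReference) and self.obj is other.obj
```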
Fixes #11163
The goal of this work is to simplify stage directory structures by eliminating the use of symbolic links. This means, among other things, that `$spack/var/spack/stage` will no longer be the core staging directory. Instead, the first accessible `config:build_stage` path will be used.
Spack will no longer automatically append `spack-stage` (or the like) to configured build stage directories, so the onus of distinguishing the stage directory from other work (so that the other work is not automatically removed by a `spack clean` operation) falls on the user.