- We use `spack list --format=html` now, as it is much faster and doesn't
make the docs build take forever.
- Remove `spack list --format=rst` as it is no longer used.
- `stage.source_path` was previously overloaded; it returned `None` if it
didn't exist and this was used by client code
- we want to be able to know the `source_path` before it's created
- make stage.source_path available before it exists.
- use a well-known stage source path name, `$stage_path/src` that is
available when `Stage` is instantiated but does not exist until it's
"expanded"
- client code can now use the variable before the stage is created.
- client code can test whether the tarball is expanded by using the new
`stage.expanded` property instead of testing whether `source_path` is
`None`
- add tests for the new source_path semantics
- make tty.msg, tty.info, etc. print the exception type and stringified
message if the message argument is an exception.
- simplify parts of the code that call tty.debug(str(e))
- add extra tty.debug statements in places where exceptions were
previously ignored
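A minimal sketch of the exception-aware message formatting described above (the helper name is hypothetical; Spack's real `llnl.util.tty` functions also handle indentation and output streams):
```python
def format_message(message):
    """Render a tty message; exceptions include their type name."""
    if isinstance(message, Exception):
        return "{0}: {1}".format(type(message).__name__, str(message))
    return str(message)

# e.g. format_message(ValueError("bad version")) -> "ValueError: bad version"
```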
- `spack graph --static` (and `spack.graph.dot_graph`) now do the "right
thing" and print the possible dependency graph of provided packages.
- `spack graph --static` no longer concretizes specs, as it only relies
on class level metadata
- Previously the behavior was not consistent -- `spack graph --static`
would graph possible dependencies of concrete specs, but would only
include some of them. The new code properly pursues all possible
dependencies, and allows traversing by different dependency types.
- `spack dependencies` can now take a --deptype argument to only traverse
particular deptypes
- add a new "common" argument for deptype in spack.cmd.common.arguments
- Database.installed_relatives() can now also take a deptype argument
- this is used by `spack dependencies --installed`
- `PackageBase.possible_dependencies` now:
- accepts a deptype param that controls dependency types traversed
- returns a dict mapping possible depnames to their immediate possible
dependencies (this lets you build a graph easily)
- Add tests for PackageBase
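Because `possible_dependencies` returns an adjacency dict, callers can build a graph directly. A hedged sketch (the dict literal is made-up illustration, not real package data):
```python
def adjacency_to_dot(possible):
    """Turn {pkg: [immediate possible dependencies]} into DOT edge lines."""
    lines = ["digraph possible_dependencies {"]
    for pkg, deps in sorted(possible.items()):
        for dep in sorted(deps):
            lines.append('  "{0}" -> "{1}";'.format(pkg, dep))
    lines.append("}")
    return "\n".join(lines)

print(adjacency_to_dot({"callpath": ["mpi"], "mpileaks": ["callpath", "mpi"]}))
```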
- The 'name' attribute for packages was being set in DirectiveMeta, which
wasn't consistent with other class properties (like fullname, etc.)
- Move it to be a class property of `PackageMeta`, and add the
corresponding property method wrapper on `PackageBase`
* add c99_flag, c11_flag to compiler class
* implement c99_flag, c11_flag for gcc
* implement c99_flag, c11_flag for arm
* implement c99_flag for cce
* implement c99_flag, c11_flag for clang
* implement c99_flag, c11_flag for intel
* implement c99_flag, c11_flag for xl
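The per-compiler implementations are simple properties that return the right flag (or raise when unsupported). A sketch of the general shape, with a stand-in base class (illustrative, not the exact Spack code):
```python
class Compiler(object):
    """Stand-in for Spack's compiler base class (illustrative only)."""


class Gcc(Compiler):
    @property
    def c99_flag(self):
        return "-std=c99"

    @property
    def c11_flag(self):
        # Real compiler classes raise an "unsupported flag" error when the
        # installed compiler version is too old for the requested standard.
        return "-std=c11"
```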
Previously, module files were not created with the same permissions as the package installation. For world-readable packages this does not cause a problem, but for group-readable packages it does:
```
packages:
  mypackage:
    permissions:
      group: mygroup
      read: group
      write: group
```
In this case, the module file is unreadable by group members other than the user who installed the package. Add logic to the module file writers to set permissions based on the configuration in `packages.yaml`.
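A hedged sketch of the permission-matching step (function name and call site are hypothetical): after writing a module file, copy the group and non-executable mode bits from the installation prefix.
```python
import os
import stat


def match_permissions(module_file, install_prefix):
    """Give a module file the same group and read/write bits as the prefix."""
    st = os.stat(install_prefix)
    os.chown(module_file, -1, st.st_gid)  # keep owner, copy group
    # copy permission bits, but strip execute bits (module files are data)
    mode = stat.S_IMODE(st.st_mode)
    mode &= ~(stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    os.chmod(module_file, mode)
```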
* Build cache: relocate path to spack/bin/sbang in text files.
* Found in testing.
* update packaging test
* Make sbang replacement also handle `#!/bin/bash` shebangs. Add an additional Spack prefix replacement to fix stage directory references.
* flake8
* Use buildinfo.get() so old buildcaches without buildinfo['spackprefix'] can be read.
* config:build_jobs now controls the number of parallel jobs to spawn during
builds, but cannot ever exceed the number of cores on the machine.
* The default is set to 16 or the number of available cores, whichever
is lower.
* Updated docs to reflect the changes done to limit parallel builds
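The effective job count reduces to roughly the following (a sketch of the policy, not the actual config-reading code):
```python
import multiprocessing


def effective_build_jobs(configured_jobs=16):
    """config:build_jobs, capped at the number of cores on the machine."""
    return min(configured_jobs, multiprocessing.cpu_count())
```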
- `gettext_uuid=True` makes every commit update every .pot file in spack/localized-docs,
even though it speeds up the internationalized doc build slightly.
- Optimize for less repository churn instead, and use `python-levenshtein` to
accelerate the build.
- make all Spack paths relative to a `_spack_root` symlink, so that we
can easily relocate the docs build *outside* lib/spack/docs
- set some useful defaults for gettext translation variables in conf.py
- update `relativeinclude` and other references to the spack root in the
RST files to use _spack_root
- Add a `--update FILE` option to `spack list`
- Output is written to the file only if any package is newer than the file
- Simplify the code in docs/conf.py using this new option
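The freshness check behind `--update FILE` amounts to a modification-time comparison, roughly (helper name is hypothetical):
```python
import os


def needs_update(output_file, package_files):
    """True if output_file is missing or older than any package file."""
    if not os.path.exists(output_file):
        return True
    out_mtime = os.path.getmtime(output_file)
    return any(os.path.getmtime(p) > out_mtime for p in package_files)
```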
The Spack documentation currently hard-codes some functionality in
`conf.py`, which makes the doc build less "pluggable" for things like
localized doc builds.
In particular, we unconditionally generate an index of commands and a
package list as part of the docs, but those should really only be done if
things are not up to date.
This commit does the following:
- Add `--header` option to `spack commands` so that it can do the work of
prepending text to its output.
- Add `--update FILE` option to `spack commands` that makes it generate a
new command index *only* if FILE is out of date w.r.t. commands in the
Spack source.
- Simplify code in `conf.py` to use these options and only update the
command index when needed.
This PR implements several refactors requested in #11373, specifically:
- Config scopes are used to handle builtin defaults, command line overrides
and package overrides (`parallel=False`)
- `Package.make_jobs` attribute has been removed; `make_jobs` remains
as a module-scope variable in the build environment.
- The use of the argument `-j` has been rationalized across commands
- move '-j'/'--jobs' argument into `spack.cmd.common.arguments`
- Add unit tests to check that setting parallel jobs works as expected
- add new test to ensure that build job setting is isolated to each build
- Fix packages that used `Package.make_jobs` (i.e. `bazel`)
* Add Fujitsu compiler to Spack.
* Fixes for flake8
* Changes location of FCC to a subdirectory called case-insensitive
* Add compiler tests for Fujitsu compiler
* Modify the logic for obtaining the compiler version to handle new versions of the Fujitsu compiler
The regex used for finding the Cray OS version from the PrgEnv-cray
module was not exact and was at times pulling the version from other
PrgEnv modules. This updates the regular expression to be more exact.
Adds executable=/bin/bash into Popen. We discovered this bug while
working in a csh/tcsh environment. By executing with /bin/bash we ensure
that the module command works.
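The essence of the fix, sketched: force the subprocess that relies on the bash `module` function to run under bash, regardless of the user's login shell.
```python
import subprocess

proc = subprocess.Popen(
    "module avail",           # any command that relies on the 'module' function
    shell=True,
    executable="/bin/bash",   # do not inherit csh/tcsh as the shell
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
output, _ = proc.communicate()
```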
#8612 added command extensions to Spack: a command implemented in a
separate directory. This improves the implementation by allowing
the command to import additional utility code stored within the
established directory structure for commands.
This also:
* Adds tests for command extensions
* Documents command extensions (including the expected directory
layout)
- `svn info` prints different results depending on the system locale
- in particular, Japanese output doesn't contain "Revision:"
- Change Spack code to use XML output instead of using the human output
Add fixes to support multiple installs and dependents using a subset
of IntelPackage functionality.
* Update IntelPackage to only return scalapack libraries if the root
spec depends on MPI: scalapack requires MPI to be mentioned as a
dependency in the DAG. Package builds using intel-mkl for its
blas/lapack implementations but not for scalapack were failing to
build.
Ideally it would be possible to ask if any of the packages in the
DAG are actually requesting the scalapack functionality provided by
the IntelPackage and only return scalapack libs in that case, but
that is not easily done at this time.
Fixes #11314, Fixes #11289
* set HOME when the intel silent installer is run. This prevents the
installer from using the ~/intel directory (which can cause
conflicts for multiple installs of the same IntelPackage)
Fixes #9713
Use new `module` function instead of `get_module_cmd`
Previously, Spack relied on either examining the bash `module()` function or using the `which` command to find the underlying executable for modules. More complicated module systems do not allow for the sort of simple analysis we were doing (see #6451).
Spack now uses the `module` function directly and copies environment changes from the resulting subprocess back into Spack. This should provide a future-proof implementation for changes to the logic underlying the module system on various HPC systems.
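A hedged sketch of the "run `module` in a subshell and copy the environment back" idea (the real implementation is more careful about parsing, error handling, and exported shell functions):
```python
import json
import os
import subprocess


def module(*args):
    """Run the bash 'module' function, then import the resulting environment."""
    cmd = ("module " + " ".join(args) + " >/dev/null 2>&1; "
           "python -c 'import json, os; print(json.dumps(dict(os.environ)))'")
    out = subprocess.check_output(cmd, shell=True, executable="/bin/bash")
    new_env = json.loads(out.decode("utf-8"))
    os.environ.clear()
    os.environ.update(new_env)
```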
Add two functions to the EnvironmentModifications object to help
users sanitize environment variables in their package definitions:
* deprioritize_system_paths: this keeps system paths in the
environment variable but moves them to the end.
* prune_duplicate_paths: remove any duplicate paths from the
variable
This includes testing for the new functions as well as for
(previously-untested) old convenience functions for environment
variable manipulation.
This also adds special handling for bash functions so they
will be defined when the exported environment file is sourced.
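Standalone sketches of what the two helpers do to a colon-separated variable (the real methods live on `EnvironmentModifications` and record modifications rather than returning strings; the system-path list here is illustrative):
```python
SYSTEM_PATHS = ("/usr/lib", "/usr/lib64", "/usr/local/lib", "/lib", "/lib64")


def deprioritize_system_paths(value, sep=":"):
    """Keep system paths in the variable, but move them to the end."""
    paths = value.split(sep)
    non_system = [p for p in paths if not p.startswith(SYSTEM_PATHS)]
    system = [p for p in paths if p.startswith(SYSTEM_PATHS)]
    return sep.join(non_system + system)


def prune_duplicate_paths(value, sep=":"):
    """Remove duplicate entries, keeping the first occurrence of each."""
    seen, result = set(), []
    for p in value.split(sep):
        if p not in seen:
            seen.add(p)
            result.append(p)
    return sep.join(result)
```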
Fixes #11335
Update the Spack compiler wrappers to add the headerpad_max_install_names
linker flag on macOS. This allows install_name_tool to rewrite
the RPATH entries of the binary to be longer if needed. This is
primarily useful for creating and distributing binary caches of
packages (i.e. using the "spack buildcache" command); binary caches
created on macOS before this commit may not successfully relocate
(if the target root path is longer than the original).
* Added a function that concretizes specs together
* Specs concretized together are copied instead of being referenced
This makes the specs different objects and removes any reference to the
fake root package that is needed currently for concretization.
* Factored creating a repository for concretization into its own function
* Added a test on overlapping dependencies
* extend Version class so that 2.0 > 1.develop > 1.1
* add concretization tests, with preferences and preferred version.
* add master, head, trunk as develop-like versions, develop > master > head > trunk
* update documentation on version comparison
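Illustration of the intended ordering with Spack's `Version` class (treat the import path as an assumption; the comparisons follow the rules listed above):
```python
from spack.version import Version

assert Version("2.0") > Version("1.develop") > Version("1.1")
assert Version("develop") > Version("master") > Version("head") > Version("trunk")
```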
- `spack edit` previously used `spack.util.executable` `Executable` objects,
and didn't `exec` the editor like you'd expect it to
- This meant that Spack was still running while your editor was, and
stdout/stdin were being set up in weird ways
- e.g. on macOS, if you call `spack edit` with `EDITOR` set to the
builtin `emacs` command, then type `Ctrl-g`, the whole thing dies with
a `==> Error: Keyboard interrupt`
- Fix all this by changing spack.util.editor to use `os.execv` instead of
Spack's `Executable` object
Also add constructor to NoLibrariesError which can either take an
error message (like other SpackErrors) or a name and prefix (in
which case the error message is constructed).
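A minimal sketch of the exec-based editor launch described above: replace the Spack process with the editor so stdin/stdout are inherited directly (argument handling is simplified, and `os.execvp` is used here for brevity instead of resolving a full path for `os.execv`):
```python
import os
import shlex


def editor(*filenames):
    """Exec $EDITOR on the given files, replacing the current process."""
    args = shlex.split(os.environ.get("EDITOR", "vi")) + list(filenames)
    os.execvp(args[0], args)
```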
PR #10758 made a slight change to find_versions_of_archive() which included
archive_url in the search process. While this fixed `spack create` and
`spack checksum` missing command-line arguments, it caused `spack
install` to prefer those URLs over those it found in the scrape process.
As a result, the package url was treated as a list_url causing all R
packages to stop fetching once the package was updated on CRAN.
This patch is more selective about including the archive_url in the
remote versions, explicitly overriding it with matching versions found
by the scraper.
f242f5f8 changed the format strings but maintained backwards
compatibility in all cases except one: The list of valid tokens for
the module naming schemes was not updated properly to contain both
the new and old styles for compilers and package names.
This PR re-adds the old tokens into the list of valid tokens.
#11152 added documentation for #8772 but some details were based on
an earlier implementation that had changed by the time #8772 was
merged. In particular, #11152 mentioned that upstream Spack instances
were configured in config.yaml, when in fact they should be placed in
a separate upstreams.yaml config file; this PR updates the
documentation accordingly.
fixes #11159
The 'namespace' argument to both Repo and RepoPath was used to set the
"super namespace". Currently it seems to be vestigial, as the only
"super namespace" allowed for packages is 'spack.pkg' since 39c9bbf
* Make a separate CDash report for each package installed
Previously, we generated a single CDash report ("build") for the complete results
of running a `spack install` command. Now we create a separate CDash build for
each package that was installed.
This commit also changes some of the tests related to CDash reporting.
Now only one of the tests exercises the code path of uploading to a
(nonexistent) CDash server. The rest of the related tests write their reports
to disk without trying to upload them.
* Don't report errors to CDash for successful packages
Convert errors detected by our log scraper into warnings when the package
being installed reports that it was successful.
* Report a maximum of 50 errors/warnings to CDash
This is in line with what CTest does. The idea is that if you have more than
50 errors/warnings you probably aren't going to read through them all anyway.
This change reduces the amount of data that we need to transfer and store.
* Update spec format to simpler syntax, maintain backwards compatibility
* Switch to new spec.format method throughout internals
* update package files for new format strings
* documentation and minor code cleanup. removed nonsensical variant sigils
Fixes #11070, #11010
Spack attempts to intercede on behalf of all compiler invocations for
a build. This involves adding its wrappers to PATH. Cray systems
include a "ftn" executable and Spack was only redirecting this call
when the Spec was built with cce. This updates the compiler wrappers
to add "ftn" in all cases.
The default (implied) behavior for all environments, as of ea1de6b,
is that an environment will maintain a view in a location of its
choosing. ea1de6b explicitly recorded all three possible states of
maintaining a view:
1. Maintain a view, and let the environment decide where to put it
(default)
2. Maintain a view, and let the user decide
3. Don't maintain a view
This commit updates the config writer so that for case [1], nothing
will be written to the config.yaml. This will not change any existing
behavior, it just serves to keep the config more compact.
Compilers are treated separately from other dependencies in Spack.
#10761 added the option to automatically install compilers when a
package specifies using a compiler that is not available in Spack.
However, this did not work correctly for dependency packages (it
would only build a compiler for the root of an install DAG). This
commit enables the building of compilers for dependency packages.
Environments are now, by default, created with views. When activated, if an environment includes a view, this view will be added to `PATH`, `CPATH`, and other shell variables to expose the Spack environment in the user's shell.
Example:
```
spack env create e1 #by default this will maintain a view in the directory Spack maintains for the env
spack env create e1 --with-view=/abs/path/to/anywhere
spack env create e1 --without-view
```
The `spack.yaml` manifest file now looks like this:
```
spack:
  specs:
  - python
  view: true #or false, or a string
```
These commands can be used to control the view configuration for the active environment, without hand-editing the `spack.yaml` file:
```
spack env view enable
spack env view enable /abs/path/to/anywhere
spack env view disable
```
Views are automatically updated when specs are installed to an environment. A view only maintains one copy of any package. An environment may refer to a package multiple times, in particular if it appears as a dependency. This PR establishes a prioritization for which environment specs are added to views: a spec has higher priority if it was concretized first. This does not necessarily exactly match the order in which specs were added, for example, given `X->Z` and `Y->Z'`:
```
spack env activate e1
spack add X
spack install Y # immediately concretizes and installs Y and Z'
spack install # concretizes X and Z
```
In this case `Z'` will be favored over `Z`.
Specs in the environment must be concrete and installed to be added to the view, so there is another minor ordering effect: by default the view maintained for the environment ignores file conflicts between packages. If packages are not installed in order, and there are file conflicts, then the version chosen depends on the order.
Both ordering issues are avoided if `spack install`/`spack add` and `spack install <spec>` are not mixed.
When providing a track, the CDash reporter will format the stamp
itself, as it has always done, and register the build during the
package installation process. When providing a stamp, it should
first be formatted as CDash expects, and then CDash will be sure
to report results to the same build id which was registered manually
elsewhere.
* Update Spec.prefix to have special case for 'None' in database path; regression test
* Update in database reader rather than spec
* Change assertion to conditional + raise
* Added test for concrete check in Spec.prefix
The module_parsing test checks whether the module function is available
by looking for the string 'not found'. If the user has set a different
locale, the test can assume that the module function is available when
it actually is not.
* Split get_compiler_version into two functions:
get_compiler_version_output runs the compiler with the relevant
option to print the version; extract_version_from_output determines
the version by examining this output. This makes it easier to test
the customized version detection for each compiler. Users can
customize this by overriding the following:
* version_argument: this is the argument that tells the compiler to
print its version. It assumes that the compiler will report its
version if invoked with a single option (like "--version")
* version_regex: the regular expression used to extract the version
from the compiler output. This assumes that a regular
expression is sufficient to extract the version, and that the
version can be extracted from a single capture group (Spack uses
the first capture group)
* default_version: allows you to completely override all version
detection logic
* get_compiler_version_output: if getting the compiler to report
its version is more complex than invoking it with a single arg
* extract_version_from_output: if it is difficult to define a regex
that can be used to extract the version from the output
* Added tests for version detection of most compilers
* Removed redundant code from xl_r compiler class (by inheriting
from xl compiler definition)
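A sketch of the split and the override points (the class and regex below are illustrative, not a real Spack compiler definition):
```python
import re
import subprocess


class ExampleCompiler(object):
    # Option that makes the compiler print its version.
    version_argument = "--version"
    # The first capture group must contain the version.
    version_regex = r"version\s+([0-9.]+)"

    @classmethod
    def get_compiler_version_output(cls, compiler_path):
        return subprocess.check_output(
            [compiler_path, cls.version_argument]).decode("utf-8")

    @classmethod
    def extract_version_from_output(cls, output):
        match = re.search(cls.version_regex, output)
        return match.group(1) if match else "unknown"

    @classmethod
    def default_version(cls, compiler_path):
        output = cls.get_compiler_version_output(compiler_path)
        return cls.extract_version_from_output(output)
```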
Replace the original implementation of the "memoized" decorator with
an implementation that exposes the docstring and arguments of the
wrapped function. This is achieved using functools.wraps.
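A typical shape for such a decorator (a sketch that caches only hashable positional arguments):
```python
import functools


def memoized(func):
    """Cache results per argument tuple; functools.wraps preserves metadata."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    return wrapper
```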
This provides a mechanism to implement a new Spack command in a
separate directory and, with a small configuration change, point Spack
to the new command.
To register the command, the directory must be added to the
"extensions" section of config.yaml. The command directory name must
have the prefix "spack-", and have the following layout:
```
spack-X/
  pytest.ini                   # optional, for testing
  X/
    cmd/
      name-of-command1.py
      name-of-command2.py
      ...
    tests/                     # optional
      conftest.py
      test_name-of-command1.py
    templates/                 # optional jinja templates, if needed
```
And in config.yaml:
```
config:
  extensions:
  - /path/to/spack-X
```
If the extension includes tests, you can run them via spack by adding
the --extension option, like "spack test --extension=X"
* initial work to make use of an 'upstream' spack installation: this uses the DB of the upstream installation to check if a package is installed
* need to query upstream dbs when adding new record to local db
* prevent reindexing upstream DBs
* set prefix on specs read from DB based on path stored in install record
* check that Spack does not install packages that are recorded as installed in an upstream db
* externals do not add their path to install records - need to use 'external_path' to get path of upstream externals
* views need to check for upstream installations when linking metadata
* package and spec now calculate upstream installation properties on-demand themselves rather than depending on concretization to set these properties up-front. The added tests for upstream installations don't work with this new strategy so they need to be updated
* only refresh modules for local specs (not those in upstream packages); optionally generate local module files for packages installed upstream
* when a user tries to locate a module file for a package installed upstream, tell them to use the upstream spack instance to locate it
* support recursive upstream databases (allow upstream databases to use their own upstream databases)
* separate upstream config into separate file with its own schema; each entry now also includes a name
* metadata_dir is no longer customizable on a per-instance basis for YamlDirectoryLayout
* treat metadata_dir as an instance variable but dont set it from kwargs; this follows several other hardcoded variables which must be consistent between upstream and downstream DBs. Also update DirectoryLayout.metadata_path to work entirely with Spec.prefix, since Spec.prefix is set from the DB when available (so metadata_path was duplicating that logic)
Change the location of the CMake build area from the staged source
directory to the stage base directory.
This change allows CMake packages to refer to the build directory in
setup_environment (e.g. if tests need to have a directory in PATH):
Staging happens after the call to setup_environment(), and if the
stage area does not exist, then spec.stage.source_path returns None.
To accommodate this change, archived files (like config.log for
Autotools packages) are archived relative to the stage base directory
rather than the expanded source directory.
Other packages (those not using CMake) will still use the staged
source directory as the default working directory for builds (and
will still be unable to reference this directory in
setup_environment())
When multiple instances of environment-modules were installed with
different architectures, Spack was not retrieving the installation
appropriate for the current architecture when finding the module
prefix.
* Fixed some issues with CUDA-Intel compiler conflicts.
* Comment about expressing CUDA-compiler conflicts.
* More precise conflicts and also add support for Intel 19.0
If the user has set the environment variable VISUAL, it will be used
in preference to EDITOR for all Spack editing activities. If VISUAL
is not set or fails (perhaps due to a lack of graphical editing
capabilities), EDITOR will be used instead. We fall back to one of
several common editors if neither bears fruit.
This feature has been tailored to:
* Provide identical behavior to the previous implementation in the
case that VISUAL is not set.
* Not require any change to code utilizing the editor feature.
* Follow usual UNIX behavior concerning VISUAL and EDITOR.
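The selection order reduces to something like the sketch below (the fallback editor list is illustrative, and the real implementation also falls back to EDITOR if launching the VISUAL editor fails):
```python
import os


def choose_editor():
    """Prefer VISUAL, then EDITOR, then the first common editor on PATH."""
    for var in ("VISUAL", "EDITOR"):
        value = os.environ.get(var)
        if value:
            return value
    for candidate in ("vim", "vi", "emacs", "nano"):
        for directory in os.environ.get("PATH", "").split(os.pathsep):
            if os.access(os.path.join(directory, candidate), os.X_OK):
                return candidate
    return None
```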
* Fix clearing EnvironmentModifications with python2
* Add EnvironmentModifications::clear unit test
Use re-assignment rather than del to clear array
* Fix flake issues
Fixes #10191
* Add more regular expressions to detect clang versions that were
not being picked up
* Add a test for parsing versions from the output of Clang (this
does not run Clang, but rather uses example outputs from Clang)
* Separate Clang version parsing into its own method (to make it
easier to test)
Currently, only C headers are considered, causing build failures for
packages depending on, e.g., netcdf-fortran and xerces-c. Additionally,
the regex used to look for the include path component did not consider
word boundaries, causing false matches.
* Create option to build missing compilers and add them to config before installing packages that use them
* Clean up kwarg passing for do_install, put compiler bootstrapping in separate method
* Rework of buildcache creation and install prefix checking using the functions introduced in
https://github.com/spack/spack/pull/9199
Instead of replacing rpaths with a placeholder and then checking strings, make use of the functions
relocate.is_relocatable and relocate.is_file_relocatable to decide if a package needs the allow-root option.
This fixes a problem where the placeholder path was not in the first rpath entry. This was seen in c++ libraries and binaries because the compiler was outside the spack install base path and always appears first in the rpath.
Instead of checking the first rpath entry, all rpaths have the placeholder path and the old install path (if it exists) replaced with the new install path.
* flake8
* Added the `spack buildcache preview` sub-command
This is similar to `spack spec -I` but highlights which nodes in a DAG
are relocatable and which are not.
spec.tree has been generalized a little to accept a status function,
instead of always showing the install status
The current implementation works only for ELF, and needs to be
generalized to other platforms.
* Added a test to check if an executable is relocatable or not
This test requires a few commands to be present in the environment.
Currently it will run only under python 3.7 (which uses Xenial instead
of Trusty).
* Added tests for the 'buildcache preview' command.
* Fixed codebase after rebase
* Fixed the list of apt addons for Python 3.7 in travis.yaml
* Only check ELF executables and shared libraries. Skip checking virtual or external packages. (#229)
* Fixed flake8 issues
* Add handling for macOS mach binaries (#231)
The environment modules package has been updated to include
versions up to 4.0.0. The url of the package and the homepage
have been updated accordingly.
The `spack bootstrap` command now builds version 3.2.10 of
the environment-modules package, and will do so until #10708
is fixed.
This restores the use of Package.headers when computing -I options
for building a package that was added in #8136 and reverted in
#10604. #8136 used utility logic that located all header files in
an installation prefix, and calculated the -I options as the
immediate roots containing those header files.
In some cases, for a package containing a directory structure like
prefix/
  include/
    ex1.h
    subdir/
      ex2.h
dependents may expect to include ex2.h relative to 'include', and
adding 'prefix/include/subdir' as a -I was causing errors,
in particular if ex2.h has the same name as a system header.
This updates the header utility logic to return the base "include"
directory by default when it exists, rather than subdirectories.
It also makes it possible for package implementers to override
Package.headers to return the subdirectory when it is required
(for example with libxml2).
Spack warns users when a dependency package updates CPATH. This
warning message is generating bug reports and alarm in cases where
there is no problem. For now this downgrades the warning message to
the debug level, so it only shows up if something goes wrong for the
user and they ask for more information from Spack.
This spack command adds a new schema for a file which describes the
builder containers available, along with the compilers available on
each builder. The release-jobs command then generates the .gitlab-ci.yml
file by first expanding the release spec set, concretizing each spec
(in an appropriate docker container if --this-machine-only argument is
not provided on command line), and then combining and staging all the
concrete specs as jobs to be run by gitlab.
Adds four new sub-commands to the buildcache command:
1. save-yaml: Takes a root spec and a list of dependent spec names,
along with a directory in which to save yaml files, and writes out
the full spec.yaml for each of the dependent specs. This only needs
to concretize the root spec once, then indexes it with the names of
the dependent specs.
2. check: Checks a spec (via either an abstract spec or via a full
spec.yaml) against remote mirror to see if it needs to be rebuilt.
Compares full_hash stored on remote mirror with full_hash computed
locally to determine whether spec needs to be rebuilt. Can also
generate list of specs to check against remote mirror by expanding
the set of release specs expressed in etc/spack/defaults/release.yaml.
3. get-buildcache-name: Makes it possible to attempt to read directly
the spec.yaml file on a remote or local mirror by providing the path
where the file should live based on concretizing the spec.
4. download: Downloads all buildcache files associated with a spec
on a remote mirror, including any .spack, .spec, and .cdashid files
that might exist. Puts the files into the local path provided on
the command line, and organizes them in the same hierarchy found on
the remote mirror
This commit also refactors lib/spack/spack/util/web.py to expose
functionality allowing other modules to read data from a url.
- add CombinatorialSpecSet in spack.util.spec_set module.
- the class is iterable and encapsulates YAML parsing and validation.
- Adjust YAML format to be more generic
- YAML spec-set format now has a `matrix` section, which can contain
multiple lists of specs, generated different ways. Including:
- specs: a raw list of specs.
- packages: a list of package names and versions
- compilers: a list of compiler names and versions
- All of the elements of `matrix` are dimensions for the build matrix;
we take the cartesian product of these lists of specs to generate a
build matrix. This means we can add things like [^mpich, ^openmpi]
to get builds with different MPI versions. It also means we can
multiply the build matrix out with lots of different parameters.
- Add a schema format for spec-sets
Fixes #10617, Fixes #10624, Closes #10619
#8136 depended entirely on spec.libs to retrieve library directories
from dependencies. By default this function only retrieves libraries if
their name is something like lib<package> (e.g. "libfoo.so" for a
package called "Foo"). This unconditionally adds lib/lib64 directories
for each dependency as link/rpath directories.
This also filters system paths from link/rpaths/include directories and
removes duplicated paths that #8136 could add.
If the -f <specyamlfile> argument to install is used (rather than
providing package specs on the command line), CDash throws an exception
due to missing the installation command (the packages targeted for
install). This fixes that behavior so CDash reporting succeeds in
either case.
fixes #10601
Due to a bug this attribute is wrong for packages that use directories
as namespaces. For instance it will add "<boost-prefix>/include/boost"
instead of "<boost-prefix>/include" to the include path.
As a minor addition a few loops in the compiler wrappers have been
simplified.
Fixes #7855, Closes #8070, Closes #2645
When searching for library directories (e.g. to add "-L" arguments to
the compiler wrapper) Spack was only trying the "lib/" and "lib64/"
directories for each dependency install prefix; this missed cases
where packages would install libraries to subdirectories and also was
not customizable. This PR makes use of the ".headers" and ".libs"
properties for more-advanced location of header/library directories.
Since packages can override the default behavior of ".headers" and
".libs", it also allows package writers to customize.
The following environment variables which used to be set by Spack
for a package build have been removed:
* Remove SPACK_PREFIX and SPACK_DEPENDENCIES environment variables as
they are no-longer used
* Remove SPACK_INSTALL environment variable: it was not used before
this PR
* fix permission setter
Fix a typo in islink test when applied to files.
* os.walk explicitly set not to follow links
The algorithm relies strongly on not following links.
* Note that `none` is the default for lmod autoload
Save a bit of confusion by *explicitly* pointing out that `none` is
the default value for autoload in the lmod module file generator.
* Add a tip re building software externally
Add a tip about using `autoload: all` when building packages outside
of the tree that use artifacts (e.g. libraries, includes) within the
tree.
CMake supports the notion of secondary generators which provide extra
information to (e.g.) IDEs over and above that normally provided by
the primary generator. Spack only supports the 'Unix Makefiles' and
'Ninja' primary generators but was not parsing out the primary
generator when a secondary generator was also included (e.g. for
a generator attribute like 'Codeblocks - Ninja'). This adds a regex
for extracting the primary generator for validation.
Since the secondary generator is irrelevant to a Spack build, it is
passed on to CMake without further validation.
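A sketch of extracting the primary generator from a possibly compound generator string (the regex is illustrative; only the primary part is validated):
```python
import re

PRIMARY_GENERATOR = re.compile(r"(?:.*-\s*)?(Unix Makefiles|Ninja)$")


def primary_generator(generator):
    """Return 'Unix Makefiles' or 'Ninja' from e.g. 'CodeBlocks - Ninja'."""
    match = PRIMARY_GENERATOR.match(generator)
    if not match:
        raise ValueError("unsupported CMake generator: " + generator)
    return match.group(1)


assert primary_generator("CodeBlocks - Ninja") == "Ninja"
assert primary_generator("Unix Makefiles") == "Unix Makefiles"
```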
* CUDA compiler conflicts for Linux.
* Add Volta and Turing GPUs.
* Add mandatory conflict for Volta and Turing GPUs.
* Revert "CUDA compiler conflicts for Linux."
This reverts commit 7d4ff654ac53aad272c59e9f7f8bb3fbb32bcec4.
* Compiler conflicts introduced in the previous commit were moved into the CUDA package and integrated into the CUDA build system.
* More conservative with compiler conflicts for cuda 10.0.130, since I don't know what will happen with future cuda 10.x releases.
* Correct off-by-one errors in clang conflicts for x86_64 Linux.
* No restrictions on Apple Clang compiler until we are able to distinguish Xcode clang from github clang more easily. Note to fix this in the future.
* Change comment to clarify that github clang refers to LLVM clang.
* Fix and simplify index range.
* Fix overlapping conflicts for CUDA 10.0.130
* Removed extra ^cuda from conflict.
Debug output now includes the output of modulecmd executions. Only
output module content when a failure occurs; always report when a
module is loaded/unloaded.
"spack install" will install all packages added to the current
environment. When this included external packages, the environment
update would fail because it would attempt to copy log files that
were only generated if Spack handled the install itself. This skips
that step for external packages.
* Allow overwrite nonexistent and multiple packages
initial implementation
give one prompt to users instead of a prompt per spec
testing
* flake
* bugfix: install overwrite check each spec against installed
* python3 compliance for filter/map
* Remove Cray CC compilers causing problems on case-insensitive filesystems
* cray -> cce
* Ensure that compiler-specific directory comes first in build-env
* Point to compiler-specific symlinks
Binary caches of packages with absolute symlinks had broken symlinks.
As a stopgap measure, #9747 addressed this by replacing symlinks with
copies of files when creating binary cached packages.
This reverts #9747 and instead, either relative-izes the symlink or
rewrites the target. If the binary cache is created using '--rel' (as
in "spack buildcache create --rel...") then absolute symlinks will be
replaced with relative symlinks (in addition to making RPATHs relative
as before); otherwise they are rewritten (when the binary cache is
unpacked and installed).
The current output of buildcache list is very verbose and I feel like
some details are getting lost. By making the output similar to find, I
think users will be able to get a better overview of what is stored in
the cache.
* dealii: fix concretization of xsdk package
* tests: add concretization tests for deal.II and xSDK, which are often broken due to limitations in the concretizer
* use pytest.mark.parametrize
Allow customizing views with Spec-formatted directory structure
Allow views to specify projections that are more complicated than
merging every package into a single shared prefix. This will allow
sites to configure a view for the way they want to present packages
to their users; for example this can be used to create a prefix for
each package but omit the DAG hash from the path.
This includes a new YAML format file for specifying the simplified
prefix for a spec in a view. This configuration allows the use of
different prefix formats for different specs (i.e. specs depending
on MPI can include the MPI implementation in the prefix).
Documentation on usage of the view projection configuration is
included.
Depending on the projection configuration, paths are not guaranteed
to be unique and it may not be possible to add multiple installs of
a package to a view.
Fixes #10284
#10152 replaced shutil.move with llnl's copy and copy_tree for
resources. This did not copy permissions so led to later failures
if an executable was copied (e.g. a configure script). This uses
install/install_tree instead, which preserve permissions.
* Initial compiler support
* added arm.py
* Changed licence to Arm suggested header
* Changed licence to the same as clang.py
Main author of file is Nick Forrington <Nick.Forrington@arm.com>
Minor changes by Srinath Vadlamani <srinath.vadlamani@arm.com>
* compilers: add arm compiler detection to Spack
- added arm.py with support for detecting `armclang` and `armflang`
Co-authored-by: Srinath Vadlamani <srinath.vadlamani@arm.com>
* Changed to using get_compiler_version
* linking to general cc for arm compiler
* For arm compiler add CFLAGS to use compiler-rt rtlib.
* Escape special characters in regexp
* Cleaned up for Flake8 to pass.
* libcompiler-rt should be part of the LDFLAGS not CFLAGS
* fixed m4 when using clang to use LDFLAGS. Fixed comments for arm.py to display compiler --version output with # NOQA for flake8 pass.
* added arm compilers
* proper linked names
This enforces conventions that allow for correct handling of
multi-valued variants where specifying no value is an option,
and adds convenience functionality for specifying multi-valued
variants with conflicting sets of values. This also adds a notion
of "feature values" for variants, which are those that are understood
by the build system (e.g. those that would appear as configure
options). In more detail:
* Add documentation on variants to the packaging guide
* Forbid usage of '' or None as a possible variant value, in
particular as a default. To indicate choosing no value, the user
must explicitly define an option like 'none'. Without this,
multi-valued variants with default set to None were not parsable
from the command line (Fixes #6314)
* Add "disjoint_sets" function to support the declaration of
multi-valued variants with conflicting sets of options. For example
a variant "foo" with possible values "a", "b", and "c" where "c"
is exclusive of the other values ("foo=a,b" and "foo=c" are
valid but "foo=a,c" is not).
* Add "any_combination_of" function to support the declaration of
multi-valued variants where it is valid to choose none of the
values. This automatically defines "none" as an option (exclusive
with all other choices); this value does not appear when iterating
over the variant's values, for example in "with_or_without" (which
constructs autotools option strings from variant values).
* The "disjoint_sets" and "any_combination_of" methods return an
object which tracks the possible values. It is also possible to
indicate that some of these values do not correspond to options
understood by the package's build system, such that methods like
"with_or_without" will not define options for those values (this
occurs automatically for "none")
* Add documentation for usage of new functions for specifying
multi-valued variants
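The semantics of `any_combination_of` can be modeled as follows; this is an illustrative, self-contained sketch of the allowed value sets (not Spack's implementation), where choosing nothing is spelled "none":
```python
from itertools import chain, combinations


def any_combination_of(*values):
    """Model the allowed settings: every non-empty subset, plus 'none'."""
    subsets = chain.from_iterable(
        combinations(values, n) for n in range(1, len(values) + 1))
    return {"none"} | {",".join(s) for s in subsets}


print(sorted(any_combination_of("c", "c++", "fortran")))
```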
Non-expanded resources were being deleted from the cache on account
of two behaviors:
* ResourceStage was moving files rather than copying them, and uses
"os.path.realpath" to resolve symlinks
* CacheFetchStrategy creates a symlink to a cached resource rather
than copying it
This alters the first behavior: ResourceStage now copies the file
rather than moving it.
"mirror create" was invoking a package's do_patch method in order to
retrieve and archive URL patches. If a package implements a "patch"
method, this is also called as part of do_patch; this failed when the
package-specific implementation referred to environment variables
that are only available at the time the package is built
(e.g. "spack_cc").
This change introduces fetch and clean methods for patches. They are
no-ops for FilePatch but perform the appropriate actions for
UrlPatch. This allows "mirror create" to invoke do_fetch, which does
not call the package's patch method.
- in many files, regular strings were used in places where raw strings
should've been used.
- convert these to raw strings and get rid of new flake8 errors
This PR improves the validation of `modules.yaml` by introducing a custom validator that checks if an attribute listed in `properties` or `patternProperties` is a valid spec. This new check applied to the test case in #9857 gives:
```console
$ spack install szip
==> Error: /home/mculpo/.spack/linux/modules.yaml:5: "^python@2.7@" is an invalid spec [Invalid version specifier]
```
Details:
* Moved the set-up of a custom validator class to spack.schema
* In Spack we use `jsonschema` to validate configuration files
against a schema. We also need custom validators to enforce
writing default values within "properties" or "patternProperties"
attributes.
* Previously, validators were customized at the point of use; with the
recent introduction of environments, that meant we were setting up and
using 2 different validator classes in two different modules.
* This commit moves the set-up of a custom validator class in the
`spack.schema` module and refactors the code in `spack.config` and
`spack.environments` to use it.
* Added a custom validator to check if an attribute is a valid spec
* Added a custom validator that can be used on objects, which yields an
error if the attribute is not a valid spec.
* Updated the schema for modules.yaml
* Updated modules.yaml to fix a few inconsistencies:
- a few attributes were not tested properly using 'anyOf'
- suffixes has been updated to also check that the attribute is a spec
- hierarchical_scheme has been updated to hierarchy
* Removed $ref from every schema
* $ref is not composable or particularly legible
* Use python dicts and regular old variables instead.
- The nested directive implementation was broken for python 3
- directive results were not properly removed from the directive list
when it was processed in the DirectiveMeta metaclass.
- the issue was that remove_directives only descended into a list or
tuple, but in Python3, the initial value passed to the function is a
view of dictionary values.
- make it a list to fix things, and add a regression test.
- currently just looks at patches
- allows you to find out which package applied a patch to a spec
- intended to work with tarballs and resources in the future.
- add tab completion for `spack resource` and subcommands
- previously, if a concrete sub-DAG with patched specs was written out
and read back in, its patches would not be found because the dependent
that patched it was no longer in the DAG.
- Add a test to ensure that the PatchCache handles this case.
- Also add tests to ensure that patch objects are properly created from
Specs -- previously we only checked that the patches were on the Spec.
- this fixes a bug where if we save a concretized sub-DAG where a package
had been patched by a dependent, and the dependent was not in the DAG,
we would not read in all patches correctly.
- Rather than looking up patches in the DAG, we look them up globally
from an index created from the entire repository.
- The patch cache is a bit tricky for several reasons:
- we have to cache information from packages, specifically, the patch
level and working directory.
- FilePatches need to know which package owns them, so that they can
figure out where the patch lives. The repo can change locations from
run to run, so we have to store relative paths and restore them when
the cache is reloaded.
- Patch files can change underneath the cache, because repo indexes
only update on package changes. We currently punt on this -- there
are stub methods for needs_update() that will need to check patch
files when packages are loaded. There isn't an easy way to do this
at global indexing time without making the FastPackageChecker a lot
slower. This is TBD for a future commit.
- Currently, the same patch can only be used one way in a package. That
is, if it appears twice with different level/working_dir settings,
bad things will happen. There's no package that current uses the
same patch two different ways, so we've punted on this as well, but
we may need to fix this in the future by moving a lot of the metadata
(level, working dir) to the spec, and *only* caching sha256sums in
the PatchCache. That would require some much more complicated tweaks
to the Spec, so we're holding off on that til later.
- This required patches to be refactored somewhat -- the difference
between a UrlPatch and a FilePatch is still not particularly clean.
- indexes should use json, not YAML, to optimize for speed
- only use YAML in human-editable files
- this makes ProviderIndex consistent with other indexes
- virtual provider cache and tags were previously generated by nearly
identical but separate methods.
- factor out an Indexer interface for updating repository caches, and
provide implementations for each type of index (TagIndex,
ProviderIndex) so that more can be added if needed.
- Among other things, this allows all indexes to be updated at once.
This is an advantage because loading package files is the real
overhead, and building the indexes once the packages are loaded is
trivial. We avoid extra bulk read-ins by generating all package indexes
at once.
- This can be extended for dependents (reverse dependencies) and patches
later.
- cleanup patch.py:
- make patch.py constructors more understandable
- loosen coupling of patch.py with package
- in Package: make package_dir, module, and namespace class properties
- These were previously instance properties and couldn't be called from
directives, e.g. in patch.create()
- make them class properties so that they can be used in class definition
- also add some instance properties to delegate to class properties so
that prior usage on Package objects still works
- When returning string output, use text_type and decode utf-8 in Python
2 instead of using `str`
- This properly handles unicode, whereas before we would pass bad strings
to colify in `spack blame` when reading git output
- add a test that round-trips some unicode through an Executable object
* Remove /nfs/tmp2 from default configuration
* /nfs/tmp2 is going away from LC... and doesn’t exist for the rest of the world.
* update documentation to remove /nfs/tmp2 as well
* Record build output as an array of lines rather than concatenating to a
single large string.
* Use string.find to avoid running re.search on every line of output.
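The optimization amounts to a cheap substring test before any regex work, roughly (keywords and pattern are illustrative):
```python
import re

ERROR_RE = re.compile(r"(error|fatal)[: ]", re.IGNORECASE)


def matching_lines(lines):
    """Scan output kept as a list of lines; regex only on likely matches."""
    matches = []
    for line in lines:
        lowered = line.lower()
        if lowered.find("error") == -1 and lowered.find("fatal") == -1:
            continue  # fast path: no regex needed for this line
        if ERROR_RE.search(line):
            matches.append(line)
    return matches
```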
- some commands were missed in the rollout of spack environments
- this makes all commands that need to disambiguate specs restrict the
disambiguation to installed packages in the active environment, as
users would expect
* This fixes a number of bugs:
* Patches were not properly downloaded and added to mirrors.
* Mirror create didn't respect `list_url` in packages
* Update the `spack mirror` command to add all packages in the
concretized DAG (where originally it only added the package specified
by the user). This is required in order to collect patches that are specified
by dependents. Example:
* if X->Y and X requires a patch on Y called Pxy, then Pxy will only
be discovered if you create a mirror with X.
* replace confusing --one-version-per-spec option for `spack mirror create`
with --versions-per-spec; support retrieving multiple versions for
concrete specs
* Implementation details:
* `spack mirror create` now uses regular staging logic to download files
into a mirror, instead of reimplementing it in `add_single_spec`.
* use a separate resource caching object to keep track of new
resources and already-existing resources; also accepts storing
resources retrieved from a cache (unlike the local cache)
* mirror cache object now stores resources that are considered
non-cachable (e.g. the tip of a branch);
* the 'create' function of the mirror module no longer traverses
dependencies since this was already handled by the 'mirror' command;
* Change handling of `--no-checksum`:
* now that 'mirror create' uses stages, the mirror tests disable
checksums when creating the mirror
* remove `no_checksum` argument from library functions - this is now
handled at the Spack-command-level (like for 'spack install')
- all multimethod tests are now run for both `multimethod` and
`multimethod-inheritor`
- do this with a parameterized fixture (pkg_name) that runs the same
tests on both
- Since early Spack versions, the SpecParser has (weirdly) been
responsible for initializing Spec fields.
- This refactors initialization to take place in Spec.__init__, as it
probably should have originally.
- This makes the code easier to read, the parser easier to understand,
and removes the use of __new__ in the parser to initialize the Spec.
- This also makes it possible to make a completely empty Spec with
`Spec()` -- this is an abstract Spec that will match anything.
* "spack install" now uses cache by default, update examples accordingly
* Replace some example packages with others
* Packing tutorial reference to "spack env" replaced with "spack build-env"
* Command line prompts in examples are shortened
* Example output (including paths) are updated to be more relevant to training environment
Update all examples that need an MPI provider to build with MPICH; reorganize so that fixing MPICH (as part of environment section) comes first in the tutorial (most examples in the tutorial use an MPI provider).
- previously, uninstall would complain if a spec was needed by an
environment.
- Now, we analyze dependents and dependent environments and simply remove
(not uninstall) specs that are needed by environments
- with no arguments, these commands will now edit or dump the
environment's `spack.yaml` file.
- users may not know where named environments live
- this makes it convenient for users to get to the spack.yaml
configuration file for their named environment.
* Update Makefile to use property methods ("build_targets"/"install_targets")
to demonstrate their usage
* Fix highlighting
* Change cbench example to ESMF:
CBench package file was changed and no longer uses the example shown in
the old docs
Scopes added with -C are now referred to as "custom scopes"
rather than "command line scopes". "command line scope" now refers
to specific config options that are set on the command line (like
"--insecure")
- default is still to use the cache, but we've added back the
`--use-cache` argument so that scripts that used it are still correct.
- `--no-cache` is still present and is mutually exclusive with `--use-cache`
* Introduce FFTW2 and FFTW3 providers for Intel-MKL and FFTW Spack packages.
* make fftw default package for fftw-api virtual package
* virtual package test assertion now provides location of default virtual packages.
* Change name of virtual package to fftw-api and used versioned interface.
- all commands (except `spack find`, through `ConstraintAction`) now go
through get_env() to get the active environment
- ev.active was hard to read -- and the name wasn't descriptive.
- rename it to _active_environment to be more descriptive and to strongly
indicate that spack.environment manages it
- to avoid changing spec hashes drastically, only add this attribute to
differentiated abstract specs.
- otherwise assume that read-in specs are concrete
- spack.yaml files in the current directory were picked up inconsistently
-- make this a sure thing by moving that logic into find_environment()
and moving find_environment() to main()
- simplify arguments to Spack command:
- remove short args for infrequently used commands (--pdb/-D, -P, -s)
- `spack -D` now forces an env with a directory
- The `Spec` class maintains a special `_patches_in_order_of_appearance`
attribute on patch variants, but it was not preserved when specs are
copied.
- This caused issues for some builds
- Add special logic to `Spec` to preserve this variant on copy
- TODO: in the long term we should get rid of the special variant and
make it the responsibility of one of the variant classes.
- split 'environment' section into 'environments' and 'modules'
- move location to 'query packages' section
- move cd to developer section
- --env-dir no longer has a short option (was -E)
- -E now means "run without an environment" (no longer same as --env-dir)
- -D now means "run with this directory environment"
- remove short options for many infrequently used top-level commands
- `spack env status` used to show install status; consolidate that into
`spack find`.
- `spack env status` will still print out whether there is an active
environment
- uninstall now:
- restricts its spec search to the current environment
- removes uninstalled specs from the current environment
- reports envs that still need specs you're trying to uninstall
- removed spack env uninstall command
- updated tests
- moved get_env from cmd/env.py to environment.py
- spack install will now install into the active environment when no
arguments are provided. It looks:
1. at the command line
2. for a local spack.yaml file
3. for any currently activated environment
- `spack env create <name>` works as before
- `spack env create <path>` now works as well -- environments can be
created in their own directories outside of Spack.
- `spack install` will look for a `spack.yaml` file in the current
directory, and will install the entire project from the environment
- The Environment class has been refactored so that it does not depend on
the internal Spack environment root; it just takes a path and operates
on an environment in that path (so internal and external envs are
handled the same)
- The named environment interface has been hoisted to the
spack.environment module level.
- env.yaml is now spack.yaml in all places. It was easier to go with one
name for these files than to try to handle logic for both env.yaml and
spack.yaml.
- `spack env activate foo`: sets SPACK_ENV to the current active env name
- `spack env deactivate`: unsets SPACK_ENV, deactivates the environment
- added support to setup_env.sh and setup_env.csh
- other env commands work properly with SPACK_ENV, as with explicit
environment arguments.
- command-line --env arguments take precedence over the active
environment, if given.
- env.yaml is now meaningful; it contains authoritative user specs
- concretize diffs user specs in env.yaml and env.json to allow user to
add/remove by simply updating env.yaml
- comments are preserved when env.yaml is updated by add/unadd
- env.yaml can contain configuration and include external configuration
either from merged files or from config scopes
- there is only one file format to remember (env.yaml, no separate init
format)
- env.json is now env.lock, and it stores the *last* user specs to be
concretized, along with full provenance.
- internal structure was modified slightly for readability
- env.lock contains a _meta section with metadata, in case needed
- added more tests for environments
- env commands follow Spack conventions; no more `spack env foo install`
- add `SingleFileScope` to configuration, which allows us to pull config
sections from a single file.
- update `env.yaml` and tests to ensure that the env.yaml schema works
when pulling configuration from the env file.
- Each schema now has a top-level `properties` and `schema` attribute.
- The `properties` is a fragment that can be included in other
jsonschemas, via Python, not via '$ref'
- The `schema` is a complete `jsonschema` with `title` and `$schema`
properties.
- add a common argument for `-e/--env`
- modify the database to support queries on subsets of hashes
- allow `spack find` to be filtered by hashes in an environment
- logic used in `spack find` was hiding duplicate installations if their
hashes were different
- short hash doesn't work in this scenario, since specs are structurally
identical
- ConstraintAction always works on a DB query, so use the DAG hash to
ensure uniqueness
- `spack.environment` is now the home for most of the infrastructure
around Spack environments
- refactor `cmd/env.py` to use everything from spack.environment
- refactor the cmd/env test to use pytest and fixtures
- `spack.util.environment` is the new home for routines that modify
environment variables.
- This is to make room for `spack.environment` to contain new routines
for dealing with spack environments
- Instead of one method with all parsers, each subcommand gets two
functions: `setup_<cmd>_parser()` and `environment_<cmd>()`
- the `setup_parser()` and `env()` functions now generate the parser
based on these and a list of subcommands.
- it is now easier to associate the arguments with the subcommand.
* modified tutorial packages
* update hint in hdf5 tutorial file (typo for suggested argument)
* add repo.yaml to tutorial repository
* update tutorial docs to refer user to tutorial package repository
* flake edits
* recommend site scope vs. defaults
* you don't specify the repo's name when adding a repo, just the path
* omit symlinks and create file copies when making a binary cache of a package
* unrelated flake edits involving regexes that recent flake is now angry about
* Record stdout for packages without errors
Previously our reporter only stored stdout if something went wrong
while installing a package. This prevented us from properly reporting
on steps where everything went as expected.
* More robustly report all phases to CDash
Previously if a phase generated no output it would not be reported to CDash.
For example, consider the following output:
    ==> Executing phase: 'configure'
    ==> Executing phase: 'build'
This would not generate a report for the configure phase. Now it does.
* Add test case for CDash reporting clean builds
* Fix default directory for CDash reports
The default 'cdash_report' directory name was getting overwritten
by 'junit-report'.
* Upload the build phase first to CDash
Older versions of CDash expect Build.xml to be the first file uploaded
for any given build.
* Define cdash_phase before referring to it
fixes #9739
The non-daemonic pool relies heavily on implementation details of the
multiprocessing package. In this commit we provide an implementation
that fits recent python versions.
This allows installing software on a per-namespace basis by including ${NAMESPACE} in `install_path_scheme`, e.g.:
```
$ cat ~/.spack/config.yaml
config:
  install_path_scheme:
    "${ARCHITECTURE}/${NAMESPACE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}"
```
The 'static_to_shared_library' function takes a compiler Executable,
which is intended to be invoked with a list of arguments; the
arguments must be separated from their values in the list, given
the way that 'Executable.__call__' invokes the underlying executable.
'static_to_shared_library' was not doing this, which this commit fixes.
Clang has support for using different fortran compilers with the Clang executable.
Spack includes logic to select a compiler wrapper symlink which refers to the fortran executable (since some build systems depend on the name of the compiler, e.g. 'gfortran' or 'flang').
This selection was previously based on the architecture, and chose incorrectly in some situations (e.g. for clang/gfortran on Linux). This replaces architecture-based wrapper selection with a selection that is based on the name of the Fortran compiler executable.
* Unite Dockerfiles - add build/run/push scripts
* update docker documentation
* update .travis.yml
* switch to using a preprocessor on Dockerfiles
* skip building docker images on pull requests
* update files with copyright info
* tweak when travis builds for docker files are done
fixes #9624
merge_config_rules was using `strict=False` to check if a spec
satisfies a constraint, which loosely translates to "this spec has
no conflict with the constraint, so I can potentially add it to the
spec". We want instead `strict=True` which means "the spec satisfies
the constraint right now".
- #8773 made mkdirp's default mode 0o777, which is what the documentation
  says, but mkdirp previously deferred to the OS default (i.e., the umask)
- revert to the Python default, and only set the mode when asked explicitly
#9100 added a warning message when a path extracted from a module file
did not appear to be a valid filesystem path. This check was applied
to a variable which could be a list of paths, which would erroneously
trigger the warning. This commit updates the check to run at the
actual point where the path has been extracted.
* Add a build_language config.yaml option which controls the language
of compiler messages
* build_language defaults to "C", in which case the compiler messages
will be in English. This allows Spack log parsing to detect and
highlight error messages (since the regular expressions to find
error messages are in English)
* The user can use the default language in their environment by setting
the build_language config variable to null or ''
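A minimal config.yaml sketch of this option, with the values described above
(the placement follows the usual config-scope layout):
```
config:
  # force compiler messages into English so log parsing can highlight errors
  build_language: C
  # or use the user's own locale instead:
  # build_language: null
```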
- `spack license list-files`: list all files that should have license headers
- `spack license list-lgpl`: list files still under LGPL-2.1
- `spack license verify`: check that license headers are correct
- Added `spack license verify` to style tests
- remove the old LGPL license headers from all files in Spack
- add SPDX headers to all files
- core and most packages are (Apache-2.0 OR MIT)
- a very small number of remaining packages are LGPL-2.1-only
compilers.yaml can track a module that is needed for a compiler, but
Spack does not fill this in automatically. This adds a note to the
documentation informing the user how to do this.
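As a rough illustration of what the documentation now describes, a compilers.yaml
entry with a hand-filled module might look like the sketch below; the compiler
spec, operating system, paths, and module name are all placeholders:
```
compilers:
- compiler:
    spec: gcc@8.2.0              # placeholder
    operating_system: rhel7      # placeholder
    modules:                     # filled in by hand, per the documentation note
    - gcc/8.2.0
    paths:
      cc: /opt/gcc/8.2.0/bin/gcc
      cxx: /opt/gcc/8.2.0/bin/g++
      f77: /opt/gcc/8.2.0/bin/gfortran
      fc: /opt/gcc/8.2.0/bin/gfortran
```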
If we do not specify libdir explicitly, Meson chooses something like
lib/x86_64-linux-gnu, which causes problems when trying to find libraries
and pkg-config files.
Spack packages installed using spack buildcache were not running
post-install hooks, which create module files and manage licenses
(if necessary).
This was already occurring for Spack packages installed with
spack install --use-cache
Spack can now be configured to assign permissions to the files installed by a package.
In the `packages.yaml` file under `permissions`, the attributes `read`, `write`, and `group` control the package permissions. These attributes can be set per-package, or for all packages under `all`. If permissions are set under `all` and for a specific package, the package-specific settings take precedence. The `read` and `write` attributes take one of `user`, `group`, and `world`.
packages:
  all:
    permissions:
      write: group
      group: spack
  my_app:
    permissions:
      read: group
      group: my_team
* Better default CLI arguments for CDash reporting
--log-format=cdash is now implied if you specify the --cdash-upload-url
option to spack install.
We also now default to writing CTest XML files to cdash_report/ when using
the CDash reporter if no --log-file argument was specified.
* Improved documentation on how to use the CDash reporter
* Push default flag handlers into module scope
* Preserve backwards compatibility of builtin flag handler names
Ensure Spack continues to work for packages using the `Package.env_flags` idiom and equivalent.
* update docs and tests to match
* Update packages to match new syntax
Fix two bugs with module file parsing:
* Detection of the CRAY_LD_LIBRARY_PATH variable was broken by #9100.
This fixes it and adds a test for it.
* For module names like "foo-bar/1.0", the associated PACKAGE_DIR
environment variable name would be "FOO_BAR_DIR", but Spack was not
parsing the components and not converting "-" to "_"
Fixes #9166
This is intended to reduce errors related to lock timeouts by making
the following changes:
* Improves error reporting when acquiring a lock fails (addressing
#9166) - there is no longer an attempt to release the lock if an
acquire fails
* By default locks taken on individual packages no longer have a
timeout. This allows multiple spack instances to install overlapping
dependency DAGs. For debugging purposes, a timeout can be added by
setting 'package_lock_timeout' in config.yaml
* Reduces the polling frequency when trying to acquire a lock, to
reduce impact in the case where NFS is overtaxed. A simple
adaptive strategy is implemented, which starts with a polling
interval of .1 seconds and quickly increases to .5 seconds
(originally it would poll up to 10^5 times per second).
A test is added to check the polling interval generation logic.
* The timeout for Spack's whole-database lock (e.g. for managing
information about installed packages) is increased from 60s to
120s
* Users can configure the whole-database lock timeout using the
'db_lock_timeout' setting in config.yaml (see the sketch below)
Generally, Spack locks (those created using llnl.util.lock.Lock)
now have no timeout by default
This does not address implementations of NFS that do not support file
locking, or detect cases where services that may be required
(nfslock/statd) aren't running.
Users may want to be able to more-aggressively release locks when
they know they are the only one using their Spack instance, and they
encounter lock errors after a crash (e.g. a remote terminal disconnect
mentioned in #8915).
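A config.yaml sketch of the two timeout settings mentioned above (values are
in seconds; the numbers here are only examples):
```
config:
  # per-package locks have no timeout by default; set one for debugging
  package_lock_timeout: 30
  # whole-database lock timeout (default raised from 60s to 120s)
  db_lock_timeout: 120
```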
When a Spack Executable was configured to capture stderr and the
process failed, the error messages of the process were discarded.
This made it difficult to understand why the process failed. The
exception is now updated to include the stderr of the process when
the Executable captures stderr.
Adds 'code' to the list of suffixes that are excluded from version
parsing of URLs, such that if a URL contains the string
'cistem-1.0.0-beta-source-code', a version X will substitute in to
produce a URL with cistem-X-source-code ('source' was already excluded).
The 'cistem' package version is updated to make use of this (and fix
a fetching bug with the cistem package). A unit test is added to check
this parsing case.
Improve Spack's parsing of module show to eliminate some false
positives (e.g. accepting MODULEPATH when it is in fact looking for
PATH). This makes the following changes:
* Updates the pattern searching for several paths to avoid the case
where they are prefixes of unwanted paths
* Adds a warning message when an extracted path doesn't exist (which
may help catch future module parsing bugs faster)
* Adds a test with the content mentioned in #9083
Spack originally handled environment modifications in the following
order:
1. clear environment variables
(unless Spack was invoked with --dirty)
2. apply spack-specific environment variable updates,
including variables set by Spack core like CC/PKG_CONFIG_PATH
and those set by installed dependencies (e.g. in
setup_dependent_environment)
3. load all external/compiler modules
1 and 2 were done together. This splits 1 into its own function and
imposes the following order for environment modifications:
1. clear environment variables
2. load all external/compiler modules
3. apply spack-specific environment variable updates
As a result, prepend-path actions taken by Spack (or installed Spack
dependencies) take precedence over prepend-path actions from compiler
and external modules. Additionally, when Spack (or a package
dependency) sets/unsets an environment variable, that will override
the actions of external/compiler modules.
* Add 'extra_env' argument to Executable.__call__: this will be added
to the environment but does not affect whether the current
environment is reused. If 'env' is not set, then the current
environment is copied and the variables from 'extra_env' are added
to it.
* MakeExecutable can take a 'jobs_env' parameter that specifies the
name of an environment variable used to set the level of parallelism.
This is added to 'extra_env' (so does not affect whether the current
environment is reused).
* CMake-based Spack packages set 'jobs_env' when executing the 'test'
target for make and ninja (which does not use -j)
Consolidate prefix calculation logic for intel packages into the
IntelPackage class.
Add documentation on installing Intel packages with Spack and
(alternatively) adding them as external packages in Spack.
The functions returning the default scope to be modified or listed
have been moved from spack.cmd to spack.config.
Lmod now writes the guessed core compiler in the default modify scope
instead of the 'site' scope.
closes #8916
Currently Spack ends with an error if asked to write lmod modules files
and the 'core_compilers' entry is not found in `modules.yaml`. After
this PR an attempt will be made to guess that entry and the site
configuration file will be updated accordingly.
This is similar to what Spack already does to guess compilers on first
run.
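For reference, the kind of entry Spack now tries to guess would look roughly
like this in modules.yaml (the compiler spec is a placeholder):
```
modules:
  lmod:
    core_compilers:
    - 'gcc@4.8.5'   # placeholder; guessed automatically if the entry is missing
```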
- Support for Python 3.3 isn't really needed, as nothing uses it as the
default system Python, and nearly everyone will have a newer Python 3
version installed.
#8223 replaced regex-based makefile target parsing with an invocation of
"make -q". #8818 discovered that "make -q" can result in an error for some
packages.
Also, the "make -q" strategy relied on interpreting the error code, which only
worked for GNU Make and not BSD Make (which was deemed acceptable at
the time). As an added bonus, this implementation ignores the exit code and
instead parses STDERR for any indications that the target does not exist; this
works for both GNU Make and BSD Make.
#8223 also updated ninja target detection to use "ninja -t targets". This does
not change that behavior but makes it more-explicit with "ninja -t targets all"
This also adds tests for detection of "make" and "ninja" targets.
Fixes #9001
#8289 added support for install_tree and copy_tree to merge into an existing
directory structure. However, it did not properly handle relative symlinks and
also removed support for the 'ignore' keyword. Additionally, some of the tests
were overly-strict when checking the permissions on the copied files.
This updates the install_tree/copy_tree methods and their tests:
* copy_tree/install_tree now preserve relative link targets (if the symlink in the
source directory structure is relative, the symlink created in the destination
will be relative)
* Added support for 'ignore' argument back to copy_tree/install_tree (removed
in #8289). It is no longer the object output by shutil.ignore_patterns: you pass a
function that accepts a path relative to the source and returns whether that
path should be copied.
* The openfoam packages (currently the only ones making use of the 'ignore'
argument) are updated for the new API
* When a symlink target is absolute, copy_tree and install_tree now rewrite the
source prefix to be the destination prefix
* copy_tree tests no longer check permissions: copy_tree doesn't enforce
anything about permissions so its tests don't check for that
* install_tree tests no longer check for exact permission matching since it can add
file permissions
- `imp` is deprecated and seems to have started having some weird
issues on certain Linux versions.
- In particular, the file argument to `load_source` is ignored on
arch linux with Python 3.7.
- `imp` is the only way to do imports in 2.6, so we'll keep it around for
now and use it if importlib won't work.
- `importlib` is the new import system, and it allows us to get
lower-level access to the import implementation.
- This consolidates all import logic into `spack.util.imp`, and makes it
use `importlib` if it's available.
Replace use of `shutil.copytree` with `copy_tree` and `install_tree` functions in `llnl.util.filesystem`.
- `copy_tree` copies without setting permissions. It should be used to copy files around in the build directory.
- `install_tree` copies files and sets permissions. It should be used to copy files into the installation directory.
- `install` and `copy` are analogous single-file functions.
- add more extensive tests for these functions
- update packages to use these functions.
- dependency patching test didn't attempt to apply patches; just to see
whether they were on the spec.
- it applies the patch now and verifies that that patch was applied.
* Branch with the meson build-system
* Fix build_environment for dual loads and add create code
* Add documentation
* Fixed option list
* Update build_system_guess for meson
* Fixed documentation errors
* Added meson to build and configure and updated documentation
* fix typos
- cc cleanup caused a parsing regression in flag handling
- We added proper quoting to array expansions, but flag variables were
never actually converted to arrays. Old code relied on this.
This commit:
- Adds reads to convert flags to arrays.
- Makes the cc test check for improper space handling to prevent future
regressions.
- flags were prepended in reverse order to args, but this makes it hard
to see what order they'll be in on the final command line.
- add them in the order they'll appear to make cc easier to maintain.
- simplify code for assembling the command line
- fix separator used in SPACK_SYSTEM_DIRS test
- This corrects most of the issues found by shellcheck
- This also uses ':' as the delimiter for SPACK_SYSTEM_DIRS, for
consistency with other variables.
- filtering using sed causes most builds to slow down quite a bit, as the
compiler wrapper has to run sed many times, and *it* runs many times
- do the system directory parsing directly in bash
- Add tests to ensure that RPATHs are not added in cc mode, which can
cause some builds to fail.
- Change cc.py to use pytest style
- Instead of writing out all the flags, break the flags down into
variables so that it's easy to read what each test is supposed to
check. This should make cc.py more maintainable in the future.
- Adding -L and -Wl,-rpath to compile-only command lines ("cc mode" or
"-c") causes clang (if not also other compilers) to emit warnings that
confuse configure systems.
- Clang will print warnings about unused command-line arguments.
- This fix ensures that -L and -Wl,-rpath are not added if the compile
line is just building an object file with -c
- This also cleans up the cc script in several places.
Spack currently prepends include paths, library paths, and rpaths to the
compile line. This causes problems when a header or library in the package
has the same name as one exported by one of its dependencies. The
*dependency's* header will be preferred over the package's, which is not
what most builds expect. This also breaks some of our production codes.
This restores the original cc behavior (from *very* early Spack) of parsing
compiler arguments out by type (`-L`, `-I`, `-Wl,-rpath`) and reconstituting
the full command at the end.
`<includes> <other_args> <library dirs> <rpaths>`
This differs from the original behavior in one significant way, though: it
*appends* the library arguments so that dependency libraries do not shadow
those in the build.
This is safe because semantics aren't affected by *interleaving* `-I`, `-L`,
and `-Wl,-rpath` arguments with others, only with each other (so the order of
two `-L` args affects the search path, but we search for all libraries on the
command line using the same search path).
We preserve the following:
1. Any system directory in the paths will be listed last.
2. The root package's include/library/RPATH flags come before flags of the
same type for any dependency.
3. Order will be preserved within flags passed by the build (except system
paths, which are moved to be last)
4. Flags for dependencies will appear between the root flags and the system
flags, and the flags for any dependency will come before those for *its*
dependencies (this is for completeness -- we already guarantee this in
`build_environment.py`)
* Fix performance issue when compiling.
Spack was doing an active wait when compiling, wasting one core.
The fix is to not set any timeout for select, instead of the
previous 0 seconds.
* Fix comments about select.select timeout
- This was a nasty workaround due to the way our compiler wrappers used
to work. We don't want to have to modify our elfutils installation to
install libdwarf.
- Since cd9691de5, we no longer need this because the package will always
come before dependencies in our include order.
- previously, output could be confusing when deptypes were only shown for
one dependent when a node had *multiple* dependents
- also fix default coverage of `Spec.tree()`: it previously defaulted to
cover only build and link dependencies, but this is a holdover from
when those were the only types.
- Previously, Spack didn't check the arguments you put in version()
directives.
- So, you could do something like this, where there are arguments for a
URL fetcher AND for a git fetcher:
version('1.0', md5='abc123', git='https://foo.bar', commit='feda2343')
- Now, we check the arguments before constructing a fetcher, to ensure
that each package has *only* arguments for a single type of fetcher.
- Also added `test_package_version_consistency()` to the `package_sanity`
test, so that all builtin packages are required to have valid
`version()` directives.
- packagers can specify two top-level fetch URLs if one is `url`
- e.g., `url` and `git` or `url` and `svn`
- allow only one VCS fetcher so we can differentiate between URL and VCS.
- also clean up fetcher logic and class structure
- Packages can remove the top-level `url` attribute and still work
- These are now legal:
- Packages with *only* version-specific URLs (even with gaps)
- Packages with a top-level git/hg/svn attribute and `version`
directives for that.
- If a package has both a top-level hg/git/svn attribute AND a top-level
url attribute, the url attribute takes precedence.
Some packages do not have a `url` and are instead downloaded via `git`,
`hg`, or `svn`. Some packages like `spectrum-mpi` cannot be downloaded at
all, and are placeholder packages for system installations. Previously,
`__init__()` in `PackageBase` crashed if a package did not have a `url`
attribute defined.
I hacked this section of code out, but I have no idea what the
repercussions of that are.
- This hard-codes the hash lengths rather than computing them on import.
- Also cleans up the code in `spack.util.crypto` to make it easier to
understand.
Fix this output error:
```
$ spack -m module loads mpileaks
==> Error: `spack module loads -m t -m c -m l ...` has been moved. Try this instead:
$ spack module t loads mpileaks
$ spack module c loads mpileaks
$ spack module l loads mpileaks
```
In case a deprecated form of the module command is used, the program
will exit non-zero and print an informative error message suggesting
which command should be used instead.
As requested in the review all the commands meant to manage module
files have been grouped under the `spack module` command.
Unit tests have been refactored to match the new command structure.
fixes #2215, fixes #2570, fixes #6676, fixes #7281, closes #3827
This PR reverts the use of `spack module loads` in favor of
`spack module find` when loading module files via Spack. After this PR
`spack load` will accept a single spec at a time, and will be able
to interpret correctly the `--dependencies` option.
fixes #4400
The feature requested in #4400 was already part of the module file
configuration, but it was neither tested nor documented. This
commit takes care of adding a few lines in the documentation and a
regression test.
This is needed because the fixture has been moved one level above where
it was originally defined.
than one configuration file that could be patched for tests.
'spack module' has been split into multiple commands, each one tied to a
specific module type. This permits the specialization of the new
commands with features that are module type specific (e.g. set the
default module file in lmod when multiple versions of the same package
are installed at the same time).
- repo membership test was broken by the refactor of spack/__init__.py
- refactor singleton so that 'spec in repo' works again for `spack.repo.path`
- fix spec command and add basic tests for `spack spec` and `spack spec --yaml`
- There was a lot of documentation in `PackageBase` dating back to the
very first versions of Spack.
- It was repetitive and out of date, and the docs at spack.readthedocs.io
are better.
- Remove the outdated specifics, and leave the minimal useful set of
developer docs in `package.py`.
- This changes `get_checksums_for_versions` to generate code that uses an
explicit `sha256` argument instead of the bare `md5` hash we used to
generate.
- also use a generic digest parameter for the `version` directive, rather
than a specific `md5` parameter.
- Add command-line scope option to Spack
- Rework structure of main to allow configuration system to raise
errors more naturally
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
- Fixes a bug in `llnl.util.lock`
- Locks in the current directory would fail because the parent directory
was the empty string.
- Fix this and return '.' for the parent of locks in the current
directory.
Replace regex-based target detection for Makefiles with a preliminary "make -q"
to check if a target exists. This does not work for NetBSD make; additional work
is required to detect if NetBSD make is present and to use a regex in that case.
The affected makefile target checks are only performed when the "--test" flag is
added to a "spack install" invocation.
Fixes #8036
Before this PR Package.installed was returning True if the spec prefix
existed, without checking the DB. This is wrong for external packages,
whose prefix exists before being registered into the DB. Now the property
checks for both the prefix and a DB entry.
Spack provides a number of classes based on commonly-used build systems
that users can extend when writing packages; the classes provide functionality
to perform the actions relevant to the build system (e.g. running "configure" for
an Autotools-based package). This adds documentation for classes supporting the
following build systems:
* Makefile
* Autotools
* CMake
* QMake
* SCons
* Waf
This includes build systems for managing extensions of the following packages:
* Perl
* Python
* R
* Octave
This also adds documentation on implementing packages that use a custom build
system (e.g. Perl/CMake).
Spack also provides extendable classes which aggregate functionality for related
sets of packages, e.g. those using CUDA. Documentation is added for
CudaPackage.
- The setup-env.sh script currently makes two calls to spack, but it
should only need to make one.
- Add a fast-path shell setup routine in `main.py` to allow the shell
setup to happen in a single, fast call that doesn't load more than it
needs to.
- This simplifies setup code, as it has to eval what Spack prints
- TODO: consider eventually making the whole setup script the output of a
spack command
* update help of `clean --all` to include `-p`
* remove old orphaned `.pyc` removal
* restrict removal of orphaned pyc files to `lib/spack` and `var/spack`
- Clean up error messages for when a lock can't be created, or when an
exclusive (write) lock can't be taken on a file.
- Add a number of subclasses of LockError to distinguish timeouts from
permission issues.
- Add an explicit check to prevent the user from taking a write lock on a
read-only file.
- We had a check for this for when we try to *upgrade* a lock on an RO
file, but not for an initial write lock attempt.
- Add more tests for different lock permission scenarios.
- write locks previously wrote information about the lock holder (host
and pid), and read locks would read this in.
- This is really only for debugging, so only enable it then
- add some tests that target debug info, and improve multiproc lock test
output
When a user specifies a URL for a specific version of a package, Spack originally
would use that URL for all newer versions of the package. This behavior has
proven to be generally more harmful than useful, so this PR removes the feature
such that a version-specific URL override affects only that version.
If the user sets "ccache: true" in spack's config.yaml, Spack will use an available
ccache executable when compiling c/c++ code. This feature is disabled by default
(i.e. "ccache: false") and the documentation is updated with how to enable
ccache support
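A one-line config.yaml sketch of enabling this:
```
config:
  ccache: true   # defaults to false; when true, Spack uses an available ccache executable
```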
Functional updates:
- `python` now creates a copy of the `python` binaries when it is added
to a view
- Python extensions (packages which subclass `PythonPackage`) rewrite
their shebang lines to refer to python in the view
- Python packages in the same namespace will not generate conflicts if
both have `...lib/site-packages/namespace-example/__init__.py`
- These `__init__` files will also remain when removing any package in
the namespace until the last package in the namespace is removed
Generally (Updated 2/16):
- Any package can define `add_files_to_view` to customize how it is added
to a view (and at the moment custom definitions are included for
`python` and `PythonPackage`)
- Likewise any package can define `remove_files_from_view` to customize
which files are removed (e.g. you don't always want to remove the
namespace `__init__`)
- Any package can define `view_file_conflicts` to customize what it
considers a merge conflict
- Global activations are handled like views (where the view root is the
spec prefix of the extendee)
- Benefit: filesystem-management aspects of activating extensions are
now placed in views (e.g. now one can hardlink a global activation)
- Benefit: overriding `Package.activate` is more straightforward (see
`Python.activate`)
- Complication: extension packages which have special-purpose logic
*only* when activated outside of the extendee prefix must check for
this in their `add_files_to_view` method (see `PythonPackage`)
- `LinkTree` is refactored to have separate methods for copying a
directory structure and for copying files (since it was found that
generally packages may want to alter how files are copied but still
wanted to copy directories in the same way)
TODOs (updated 2/20):
- [x] additional testing (there is some unit testing added at this point
but more would be useful)
- [x] refactor or reorganize `LinkTree` methods: currently there is a
separate set of methods for replicating just the directory structure
without the files, and a set for replicating everything
- [x] Right now external views (i.e. those not used for global
activations) call `view.add_extension`, but global activations do not
to avoid some extra work that goes into maintaining external views. I'm
not sure if addressing that needs to be done here but I'd like to
clarify it in the comments (UPDATE: for now I have added a TODO and in
my opinion this can be merged now and the refactor handled later)
- [x] Several method descriptions (e.g. for `Package.activate`) are out
of date and reference a distinction between global activations and
views, they need to be updated
- [x] Update aspell package activations
- Spack was assuming that a group with gid == current uid would always exist.
- This was breaking the travis build for macos.
- also fix issue with the DB tarball test finding coverage files
- pytest was not reporting the correct version from pytest.__version__.
It reported 'unknown'
- this fixes issues on some systems where system-installed pytest plugins
would try to use the version and convert it to an int
The following improvements are made to cxx standard support
(e.g. compiler.cxxNN_flag functions) in compilers:
* Add cxx98_flag property
* Add support for throwing an exception when a flag is not supported (previously
if a flag was not supported the application was terminated with tty.die)
* The name of the flag associated with e.g. c++14 standard support changes for
different compiler versions (e.g. c++1y vs c++14). This makes a few corrections
on what flag to return for which version.
* Added tests to confirm that versions report expected flags for various c++
standards (or raise an exception for versions that don't provide a given cxx
standard)
Note that if a given cxx standard is the default, the associated flag property will
return ""; cxx98 is assumed to be the default standard so this is the behavior for
the associated property in the base compiler class.
Package changes:
* Improvements to the boost spec to take advantage of the improved standard
flag facility.
* Update the clingo spec to catch the new exception rather than look for an
empty flag to indicate non-support (which is not part of the compiler flag API)
Fixes #7885
#7193 added the patches_to_apply function to collect patches which are then
applied in Package.do_patch. However this only collects patches that are
associated with the Package object and does not include Spec-related patches
(which are applied by dependents, added in #5476).
Spec.patches already collects patches from the package as well as those applied
by dependents, so the Package.patches_to_apply function isn't necessary. All
uses of Package.patches_to_apply are replaced with Package.spec.patches.
This also updates Package.content_hash to require the associated spec to be
concrete: Spec.patches is only set after concretization. Before this PR, it was
possible for Package.content_hash to be valid before concretizing the associated
Spec if all patches were associated with the Package (vs. being applied by
dependents). This behavior was unreliable though so the change is unlikely to
be disruptive.
Fixes #8345
Spack environment modifications are applied before modules are loaded; this
includes settings to CC, FC, F77, and CXX, which point to the Spack compiler
wrappers. If the loaded modules set CC, this overrides the Spack compiler
wrappers. This PR adds a context manager to preserve the values of CC etc. that
are set by Spack: any effects on the CC, FC, F77, and CXX variables from modules
are undone and their original values are restored.
* pybind11: test support
Add a test functionality to pybind11.
* CMake: test also on "make check"
Some projects use non-CTest manual targets for tests.
* extend Prefix class with join() member to support dynamic directories
* add more tests for Prefix.join()
* more tests for Prefix.join()
* add docstring
* add example to docstring of Prefix class
* cleanup Prefix.join() tests
* use Prefix.join() in Packaging Guide
Fixes #8217
Trying to relocate a distribution when the new and old paths are
equal leads to failure, because the test that ensures that no
unrelocated bits are left over always fails. As an example, this
occurs if a user installs a package, generates a binary with it
using 'spack buildcache', uninstalls it, and then attempts to
reinstall into the same spack installation using the generated
binary package.
This updates the relocation check to accept the presence of the
old prefix in binaries if the package is being reinstalled into
its original location.
* allow user to constrain dependencies that are added conditionally
* remove check for not-visited deps from normalize, move it to concretize. The check now runs after the concretization loop completes (so an error is only reported if the user-mentioned spec doesn't appear anywhere in the DAG)
* remove separate full_spec_deps variable; rename spec_deps to all_spec_deps to clarify that it merges user-specified dependencies with derived dependencies
* add unit test to confirm new functionality
- `spack config blame` is similar to `spack config get`, but it prints
out the config file and line number that each line of the merged
configuration came from.
- This is a debugging tool for understanding where Spack config settings
come from.
- add tests for config blame
Fixes: #8258
#8090 altered import behavior so that import spack no longer
provides access to many other Spack modules. This addresses
a case which depended on the prior behavior and was not
updated as part of #8090. This particular import error only
came up when users were setting compiler flags on specs.
See also: #8194
- there were some leftover spack.* names being used after we removed
globals and moved everything in the top-level namespace to spack.pkgkit
- point those references to their new homes
- remove most `import spack` statements, except for files that need
`spack_version`
- import spack is no longer sufficient to use submodules
(e.g. spack.directives).
- these submodules must be imported directly. Update references
accordingly.
- Spack packages were originally expected to call `from spack import *`
themselves, but it has become difficult to manage imports in the
Spack core.
- the top-level namespace was polluted by package symbols, and it's not
possible to avoid circular dependencies and unnecessary module loads in
the core, given all the stuff the packages need.
- This makes the top-level `spack` package essentially empty, save for a
version tuple and a version string, and `from spack import *` is now
essentially a no-op.
- The common routines and directives that packages need are now in
`spack.pkgkit`, and the import system forces packages to automatically
include this so that old packages that call `from spack import *`
will continue to work without modification.
- Since `from spack import *` is no longer required, we could consider
removing ``from spack import *`` from packages in the future and
shifting to ``from spack.pkgkit import *``, but we can wait a while to
do this.
- spack.util.lock behaves the same as llnl.util.lock, but Lock._lock and
Lock._unlock do nothing.
- can be disabled with a control variable.
- configuration options can enable/disable locking:
- `locks` option in spack configuration controls whether Spack will use filesystem locks or not.
- `-l` and `-L` command-line options can force-disable or force-enable locking.
- Spack will check for group- and world-writability before disabling
locks, and it will not allow a group- or world-writable instance to
have locks disabled.
- update documentation
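A sketch of the corresponding config.yaml setting (only honored when the
instance is not group- or world-writable, per the check above):
```
config:
  locks: false   # disable filesystem locks; the -l/-L flags can override this per invocation
```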
- Spack core has long used llnl.util.filesystem.join_path, but
os.path.join is pretty much the same thing, and is more efficient.
- Use os.path.join in the core Spack code from now on.
- simplify the singleton pattern across the codebase
- reduce lines of code needed for crufty initialization
- reduce functions that need to mess with a global
- Singletons whose semantics changed:
- spack.store.store() -> spack.store
- spack.repo.path() -> spack.repo.path
- spack.config.config() -> spack.config.config
- spack.caches.fetch_cache() -> spack.caches.fetch_cache
- spack.caches.misc_cache() -> spack.caches.misc_cache
- `spack.cmd.all_commands` does a directory listing on
`lib/spack/spack/cmd`, regardless of whether it is needed
- make this lazy so that the directory listing won't happen unless it's
necessary.
- It turns out that jsonschema is one of the more expensive imports.
- move imports of jsonschema into functions to avoid the performance hits
for calls that don't need config.
- spack.store was previously initialized at the spack.store module level,
but this means the store has to be initialized on every spack call.
- this moves the state in spack.store to a singleton so that the store is
only initialized when needed.
- spack.repository module is now spack.repo
- `spack.repo` is now `spack.repo.path()` and loaded lazily
- Added `spack.repo.get()` and `spack.repo.all_package_names()` as
convenience functions to simplify the new lazy interface.
- updated tests and code
- no longer require `spack_version` to be a Version (it isn't used that
way anyway)
- use a simple tuple `spack_version_info` with major, minor, patch
versions
- generate `spack_version` from the tuple
- replace `spack.config.get_configuration()` with `spack.config.config()`
- replace `get_config`/`update_config` with `get`, `set`
- add a path syntax that can be used to refer to specific config options
without first getting the entire configuration dict
- update usages of `get_config` and `update_config` to use `get` and `set`
- Current configuration code forces the config system to be initialized
at module scope, so configs are parsed on every Spack run, essentially
before anything else.
- We need more control over configuration init order, so move the config
scopes into a class and reduce global state in config.py
Fixes #2781
This PR introduces a new attribute for packages called
`archive_files`, which designates files that should be saved from
a package build (e.g. the config.log generated during autotools
builds).
The attribute contains a list of glob expressions; Any file that
matches will be archived in the `<prefix>/.spack/archived-files`
directory. Errors that occur when archiving files are collected and
reported in a file named `<prefix>/.spack/archived-files/errors.txt`.
`AutotoolsPackage` and `CMakePackage` provide a sensible default
override for this attribute.
fixes #7941
Modified string representation of Specs to add a space before deps
Unit-tests have been modified accordingly
Added a test for regression on #7941
Fixes #7924
Spack's yaml schema validator was initializing all instances of
unspecified properties with the same object, so that updating the
property for one entry was updating it for others (e.g. updating
versions for one package would update it for other packages).
This updates the schema validator to instantiate defaults with
separate object instances and adds a test to confirm this behavior
(and also confirms #7924 without this change).
* Use reported version of IBM XL Fortran compiler for compiler versions >= 16.0.
Starting with the April 2018 release, the IBM XL C and Fortran compilers report the same version, 16.0. Consequently, there is no need to downgrade the Fortran compiler version to match that of the C compiler.
* Use GitLab's API endpoint for fetching a git snapshot.
* More GitLab packages use the API.
* find_list_url for GitLab's API URLs.
* Flake8
* Url for 'hacckernels'.
* Check GitLab API regexps before the non-API ones.
Activating a package that is already activated now sends a `tty.msg`
and returns.
```
-bash-4.2$ ~/spack/bin/spack activate aspell6-en
==> Package aspell6-en/lc4v24f is already activated.
```
* Better error message for spack providers
fixes #1355
`spack providers` now outputs a sensible error message if non-virtual
specs are provided as arguments:
```
$ spack providers mpi zlib petsc
==> Error: non-virtual specs cannot be part of the query [zlib, petsc]
```
Formatting of the output changed slightly.
* Calling 'spack providers' without arguments prints the virtual package list
Also, the error message in case of a wrong parameter has been improved
to show the list of valid packages.
* Avoid printing headers if stdout is not a tty
* The provider list is formatted with colify if not in a tty
* Added a test to check the list of providers returned from the command
Popping the when spec from kwargs in the extends directive breaks
class inheritance. Inheriting classes do not find their when spec.
We now get the when spec from kwargs instead, leaving it to be found
by any downstream package classes.
fixes #7705
Package.provides now checks constraints to ensure that a spec provides
a given virtual package. Note that 'strict=True' is not passed to
satisfies as this function is also used during concretization.
Fixes #7548
This updates the "spack view" command to use the same parsing logic
as "spack install" on the user-provided specs. For example you can
provide a DAG hash to refer to an exact installed spec instead of
specifying name, compiler, etc.
This fixes a check that decides when to skip buildcache relocation.
Originally the check was flawed in two ways: it would skip if the
source prefix matched the destination prefix, which no longer
matters since the source prefix is replaced with a placeholder
(so it always needs to be updated); it also would skip relocation
if the rpaths were not relative, when in fact it should be the
opposite (binaries without relative rpaths *should* be relocated,
and those without don't need it).
- FastPackageChecker was being called at startup every time Spack runs,
which takes a long time on networked filesystems. Startup was taking
5-7 seconds due to this call.
- The checker was intended to avoid importing all packages (which is
really expensive) when all it needs is to stat them. So it's only
"fast" for parts of the code that *need* it.
- This commit makes repositories instantiate the checker lazily, so it's
only constructed when needed.
- This was needed when we transitioned to all lowercase packages because
git didn't handle case changes well on case-insensitive filesystems.
- Now it just adds extra stat calls to startup, and we check for
all-lowercase package names in tests, so we'll remove it.
- people using really old versions of Spack can re-clone.
* Create unload_module method
Extract code from load_module into unload_module.
* Unload modules to create a clean env on Cray
removes cray-libsci, cray-mpich and darshan to prevent any silent
linking with those packages.
* Add format to separate target and os for path
spec format can now handle separations of target and os for setting
up the path.
* Added ${PLATFORM} et al to spec.format()
${PLATFORM}, ${OS}, ${TARGET}
* Update tests
Updated tests and got rid of unnecessary code.
* Also update documentation to reflect this new ability.
* Add default path scheme to config.yaml
Added default path scheme to config.yaml. Users can overwrite this
section if they want.
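A hedged example of how the new tokens might be used in a config.yaml
path-scheme override; the scheme string below is illustrative, not the
shipped default:
```
config:
  install_path_scheme:
    "${PLATFORM}/${OS}/${TARGET}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}"
```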
* Speedup the default 'libs' property search - important for external
packages.
* As advised by @alalazo, use tuples instead of lists inside
_libs_default_handler.
* Added installation date and time to the database
Information on the date and time of installation of a spec is recorded
into the database. The information is retained on reindexing.
* Expose the possibility to query for installation date
The DB can now be queried for specs that have been installed in a given
time window. This query possibility is exposed to command line via two
new options of the `find` command.
* Extended docstring for Database._add
* Use timestamps since the epoch instead of formatted date in the DB
* Allow 'pretty date' formats from command line
* Substituted kwargs with explicit arguments
* Simplified regex for pretty date strings. Added unit tests.
This updates architecture concretization to
* Search for the nearest parent in the DAG for architecture information
rather than defaulting to the root of the DAG
* Propagate architecture settings transitively, such that if for
example the target is set at the root of the dag it will set the
same target on indirect dependencies (assuming no intermediate
dependency specifies a separate target). Previously this occurred
in general but under some conditions did not, for example if an
intermediate dependency specified some subset of architecture
properties.
* Create mirror for system with different compilers
Spack concretizes the spec provided by the user in
"spack mirror create" to ensure downloading the right
dependencies. Under normal circumstances concretization
requires that the chosen compiler exists on the system,
but this is not required when creating download mirrors
for other systems, so this requirement is removed in that
case.
* Add test for disabling compiler existence check
* Update compiler existence checking logic
* improve test for disabling compiler existence check
* make py-setuptools a run-time-only dep for py-basemap and patch python package to only apply setuptools flag for build deps
* py-qtconsole does not require setuptools
* This allows Spack to work with MD5 hashes on machines with openssl in FIPS mode.
* We are still using MD5 for validation in many places, and a later PR will replace all uses of MD5 with SHA256.
* This is a quick fix until that happens.
- transitive dependencies were not being handled correctly
- restructure code to do recursion and mark visited packages properly
- add `-V` option to *not* expand virtuals in spack dependencies
This re-adds the option to create and install unsigned tarballs, now
with the -u option (--unsigned) rather than the -y option.
This also changes the "keys" command, replacing the -y/--yes-to-all
option with the -t/--trust option (which has the same effect but is
more-clearly named).
Fixes #7130
shutil.move expects a source path like "/x/y/" to be a directory and
fails if "/x/y" is a symlink. This invokes realpath on the source
path to avoid the issue.
Fixes #7356
In some cases OperatingSystem (e.g. LinuxDistro) was getting
instantiated with a version that contains dashes. This breaks because
the concretizer later converts this value to a string and re-parses
it, and the '-' character is used to separate architecture components.
This adds a guard in the initializer to convert '-' to '_'.
Fixes #7237, fixes #6404, fixes #6418, fixes #6369
Identify when binary relocation fails to remove all instances of the
source prefix (and report an error to the user unless they specify
-a to allow the old root to appear). This check occurs at two stages:
during "bincache create" all instances of the root are replaced with
a special placeholder string ("@@@@..."), and a failure occurs if the
root is detected at this point; when the binary package is extracted
there is a second check. This addresses #7237 and #6418.
This is intended to be compatible with previously-created binary
packages.
This also adds:
* Better error messages for "spack install --use-cache" (#6404)
* Faster relocation on Mac OS (using a single call to
install_name_tool for all files rather than a call for each file)
* Clean up when "buildcache create" fails (addresses #6369)
* Explicit error message when the spack instance extracting the binary
package uses a different install layout than the spack instance that
created the binary package (since this is currently not supported)
* Remove the option to create unsigned binary packages with -y
This updates Cray.setup_platform_environment to use cray-specific
pkgconfig paths so that all providers of 'pkgconfig' have access
to them (previously the 'pkg-config' provider had this but the
'pkgconf' provider did not).
* [SPACK/spec.py] When a query through ForwardQueryToPackage returns
'None', treat that as query failure and raise RuntimeError with
suitable message. This overrides the current behavior to raise an
AttributeError which is now triggered only when no suitable query
property is found and there is no default handler.
* [spack/spec.py] Fix style.
* [SPACK/spec.py] In case of query failure, i.e. property returning
'None', raise AttributeError instead of RuntimeError in order to
pass the unit test. Also, small update in the logic distinguishing
query failure and lack of relevant property/attribute handling.
This updates the fix_darwin_install_name function to use the Spack
Executable object to run install_name_tool, which ensures that
process output is formatted as a 'str' for python2 and python3.
Originally fix_darwin_install_name was invoking subprocess.Popen
directly.
Fixes #5189
When working with non-normalized paths containing ".." on some
file systems, Spack was found to encounter a permission error when
writing to the path. This normalizes a path written by the
intel-parallel-studio package and also normalizes all paths
written by the license install hook (for all packages) to avoid
this issue for intel-parallel-studio.
Following the discussion with Todd and Adam, find has been modified to
accept glob expressions. This should not affect performance as every
glob implementation I inspected has 3 cases (no wildcard, wildcard but
no directories involved, wildcard and directories involved) and uses
fnmatch underneath.
Mixins have been changed to do by default a non-recursive search (but
a recursive search can still be triggered using the recursive keyword).
Following a comment from Todd, the search path for the files listed in
`filter_compiler_wrappers` can now be narrowed. Anyhow, the function
implementation still makes use of `find`, the rationale being that we
have already seen packages that install artifacts in e.g. architecture
dependent folders. The possibility to have a relative search path might
be a good compromise between the previous approach and the one suggested
in the review.
Also: 'ignore_absent' and 'backup' keyword arguments can be optionally
forwarded to `filter_file`.
Following comments from Todd:
- the call to tty.debug has been moved deeper, to log the filtering of each file
- the shadowing on the name "kwargs" is avoided
Implemented a declarative syntax for the additional behavior that can
get attached to classes. Implemented a function to filter compiler
wrappers that uses the mechanism above.
- command reference now includes usage for all Spack commands as output
by `spack help`. Each command usage links to any related section in
the docs.
- added `spack commands` command which can list command names,
subcommands, and generate RST docs for commands.
- added `llnl.util.argparsewriter`, which analyzes an argparse parser and
calls hooks for description, usage, options, and subcommands
- Shorten Spack command usage for short options. Short options are now
shown as [-abc] instead of as [-a] [-b] [-c]
- fix bug that mixed long and short options for top-level `spack help`
- Add proper help for `spack buildcache` subcommands
- Reorganize the help categories of Spack commands so that buildcache is
in packaging and diy and setup are now in build.
- previously commands with this argument showed a long list of choices
that were platform specific.
- use a better metavar: {defaults,system,site,user}[/PLATFORM]
Fixes #7159
When activating extensions in external views, the --ignore-conflicts
option was being ignored. In this particular issue the conflict was
for the duplicate __init__ file for multiple python packages in the
same namespace, but in general any conflict for extensions would
cause an error whether or not --ignore-conflicts was set.
This also renames the 'force' option of do_activate to
'with_dependencies' and updates views to call do_activate with this
set to False (since it traverses the dependency dag anyway). This
isn't strictly required, it just avoids redundant calls.
This reorganizes most sections and rewords a significant portion of
the content (including all introductions) but keeps all the examples.
* Remove section 'What happens at subscript time' from tutorial:
it is too detailed for a tutorial
* Move the 'Extra query parameters' and 'Attach attributes to other
packages' sections into a separate grouping 'Other packaging topics'
* move the 'Set variables at build time yourself' section after
'Set environment variables in dependents' section since the latter
is more motivating
* start the 'set environment variables at build-time for yourself'
section with qt as an example
* renamed section 'specs build interface' to 'retrieving library
information' and updated section introduction
* renamed section 'a motivating example' to 'accessing library
dependencies'; split out the material which deals with implementing
.libs for netlib-lapack into a separate section called 'providing
libraries to dependents'. consolidated in material from the section
'single package providing multiple virtual specs' since
netlib-lapack is an example of this (this removes the material
about intel-parallel studio)
* Allow dashes in command names and fix command name handling
- Commands should allow dashes in their names, like the rest of Spack,
e.g. `spack log-parse`
- It might be too late for `spack build-cache` (since it is already
called `spack buildcache`), but we should try a bit to avoid
inconsistencies in naming conventions
- The code was inconsistent about where commands should be called by
their python module name (e.g. `log_parse`) and where the actual
command name should be used (e.g. `log-parse`).
- This made it hard to make a command with a dash in the name, and it
made `SpackCommand` fail to recognize commands with dashes.
- The code now uses the user-facing name with dashes for function
parameters, then converts that to the module name when needed.
* Improve performance of log parsing
- A number of regular expressions from ctest_log_parser have really poor
performance, most due to untethered expressions with * or + (i.e., they
don't start with ^, so the repetition has to be checked for every
position in the string with Python's backtracking regex implementation)
- I can't verify that CTest's regexes work with an added ^, so I don't
really want to touch them. I tried adding this and found that it
caused some tests to break.
- Instead of requiring only "efficient" regular expressions, this adds a
prefilter() class that allows the parser to quickly check a cheap
precondition before evaluating any of the expensive regexes (see the
sketch after this list).
- Preconditions do things like check whether the string contains "error"
or "warning" (linear time things) before evaluating regexes that would
require them. It's sad that Python doesn't use Thompson string
matching (see https://swtch.com/~rsc/regexp/regexp1.html)
- Even with Python's slow implementation, this makes the parser ~200x
faster on the input we tried it on.
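A minimal sketch of the prefilter idea (class and method names here are
illustrative, not the exact implementation):
```
import re

class prefilter(object):
    """Run a cheap precondition before any of the expensive regexes."""
    def __init__(self, precondition, *patterns):
        self.precondition = precondition
        self.patterns = [re.compile(p) for p in patterns]

    def search(self, line):
        # skip the costly regex scan when the cheap check already fails
        if not self.precondition(line):
            return False
        return any(p.search(line) for p in self.patterns)

# only lines that contain "error" at all are handed to the regexes
error_filter = prefilter(lambda line: 'error' in line.lower(),
                         r'^Error:', r'error\s*[:#]')
```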
* Add `spack log-parse` command and improve the display of parsed logs
- Add better coloring and line wrapping to the log parse output. This
makes nasty build output look better with the line numbers.
- `spack log-parse` allows the log parsing logic used at the end of
builds to be executed on arbitrary files, which is handy even outside
of spack.
- Also provides a profile option -- we can profile arbitrary files and
show which regular expressions in the magic CTest parser take the most
time.
* Parallelize log parsing
- Log parsing now uses multiple threads for long logs
- Lines from logs are divided into chunks and farmed out to <ncpus> worker threads
- Add -j option to `spack log-parse`
* Marking database tests as slow
* Marking url command tests as slow
* Marking every test that uses database as slow
* Marking tests that import files as slow
* Marking gpg tests as slow
* Marking all versions and one list tests as slow
* Added more markers to unit tests + cli option to skip slow tests
Following a discussion with Axel, the generic 'slowtest' marker has been
split into 'db', 'network' and 'maybeslow'. A brief description of the
meaning of each marker has been added to pytest.ini.
A command line option to run only fast tests has been added to
'spack test'
* Don't use classes to group tests together
Reverted grouping tests under a class, as required in the review
* Minor style changes
This attempts to address one of the complaints at #5996 (comment):
> Repo currently caches package instances by Spec, and those Package instances have a Spec.
> This is unnecessary and causes confusion. I think I thought that we'd need to cache instances
> after loading package classes, but really just caching the classes is fine.
With this update, Repo's package cache is removed and Specs cache the package reference themselves. One consequence is that Specs which compare as equal will store separate instances of a Package class (not doing this creates issues for #4595 (comment)).
There were several references to Spec.package that could be replaced with Spec.package_class without any additional modifications. There are still a couple remaining references to Spec.package in Spec that would require adding functionality before replacing (e.g. calling Package.provides and Package.installed).
Note this makes it difficult to mock fetchers for tests which invokes code that reconstructs specs. test_packaging was one example of this where the updates caused a failure (in that case the error was avoided by not making an unnecessary call).
Details:
* Replace instances of spec.package with spec.package_class where a class method is being called
* Remove instances of Repo.get where Spec.package_class can be used in its place
* remove Repo.get caching instances of Package class for specs
* remove redundant check (which is also incorrect now that each spec stores its own copy of its package)
* avoid creating a mirror with specs because it copies specs and those copies don't refer to the mocked fetcher (and it is also not required for the test)
* remove checks that are no longer necessary since repo doesn't cache specs
* Cleaned up JUnit report generation on install
The generation of a JUnit report was previously part of the install
command. This commit factors the logic into its own module, and uses
a template for the generation of the report.
It also improves report generation, which can now deal with multiple
specs installed at once. Finally, extending the list of supported
formats is much easier than before, as it entails just writing a
new template.
* Polished report generation + added tests for failures and errors
The generation of a JUnit report has been polished, so that the
stacktrace is correctly displayed with Jenkins JUnit plugin. Standard
error is still not used.
Added unit tests to cover for installation failures and installation
errors.
Avoid adding an "outside" (local home's) python user site directory
during python package installs.
Implements #6611
Fixes packages with auto-find side effects, such as `py-setuptools`,
that cause `py-matplotlib` to fail to build (#6558).
The flag_handlers method was being set as a bound method, but when
reset in the package.py file it was being set as an unbound method
(both Python 2 issues). This gets the underlying function information,
which is the same in either case.
The bug was uncovered for parmetis in #6858. This is a partial fix.
Included are changes to the parmetis package.py file to make use of
flag_handlers.
The feature added in #4611 is currently broken. This commit fixes the
behavior of the command and adds unit tests to ensure the basic semantic
is maintained.
It also changes slightly the behavior of Spec.concretized to avoid
copying caches before the concretization (as this may result in a
wrong hash computation for the DAG).
- Generating the HTML for >2300 packages from RST in Sphinx seems to
take forever.
- Add an option to `spack list` to generate straight HTML instead.
- This reduces the doc build time to about a minute (from 5 minutes on a mac laptop).
* Vendor ordereddict for python2.6 only
This commit substitutes the custom module 'ordereddict_backport' with
the better-known 'ordereddict' and vendors it only for python 2.6. Other
supported versions of python will use 'collections.OrderedDict'.
* Use absolute imports also for python 2.6
See PEP-328 for more information on the subject
* Added provenance of vendored ordereddict
See #6794
This fixes cases where test-only dependencies were omitted from
consideration when modifying the environment at build time. This
includes an update to the python package definition to add
testing-related python extensions to its specialized environment
setup.
This updates the conflict-checking logic to require that the conflict
spec matches exactly and that all fields mentioned in the conflict
spec are present in the concretized spec in order to report a
conflict. This will automatically skip all conflict checks for
dependencies of externals (since externals strip dependencies). This
will not affect non-external packages since all fields and
dependencies are fully specified for such packages.
* Keep track of source and versions for external libraries
* Note source of more obscure libraries
* We aren't upgrading jsonschema after all
* Add note on modifications made to pytest
* add OctavePackage
1. remove import CudaPackage which is not needed anymore
2. mention CudaPackage and OctavePackage in packaging guide
3. adjust OctavePackageTemplate
4. add clue file for Octave build
5. sanity check on self.prefix
* use setup_environment
* Only specify a file as needing relocation if it contains the spack
root as a text string (this constraint also applies to binaries)
* Don't fail if there is an error retrieving RPATH information from a
binary (even if it is specified as requiring relocation)
This adds the ability for packages to apply compiler flags in one of
three ways: by injecting them into the compiler wrapper calls (the
default in this PR and previously the only automated choice);
exporting environment variable definitions for variables with
corresponding names (e.g. CPPFLAGS=...); providing them as arguments
to the build system (e.g. configure).
When applying compiler flags using build system arguments, a package
must implement the `flags_to_build_system_args` function. This is
provided for CMake and Autotools packages, so packages which subclass
those build systems need only update their flag handler method to
specify which compiler flags should be passed as arguments to the
build system (a hedged sketch follows this entry).
Convenience methods are provided to specify that all flags be applied
in one of the 3 available ways, so a custom implementation is only
required if more than one method of applying compiler flags is
needed.
This also removes redundant build system definitions from tutorial
examples
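A hedged sketch of the handler interface described above, assuming a
custom handler returns a triple saying where each kind of flag goes
(wrapper injection, environment, build system arguments); the package
name is made up:
```
from spack import *

class Mypkg(AutotoolsPackage):
    """Illustrative package with a custom flag handler."""

    def flag_handler(self, name, flags):
        # hand cflags to ./configure; keep the default (wrapper
        # injection) for everything else
        if name == 'cflags':
            return (None, None, flags)
        return (flags, None, None)
```
A package that wants every flag applied the same way can instead point
its flag handler at one of the convenience handlers mentioned above.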
Fixes #5940
This PR changes the option '--restage' of 'spack install' to
'--dont-restage', inverting its meaning and the default behavior
of the command.
Fixes #6200
For compilers that successfully run a version detection script but
don't actually return a version, Spack was keeping track of the
empty version and then failing when attempting to construct a
compiler spec. This skips any attempt to add a compiler entry when
no version is reported (but logs when a compiler fails to report
a version).
* Support pruning of vars with Env from_sourcing_file (issue #6501)
- Blacklist string literals or regular expressions of environment
variables that are to be removed from consideration as being affected
by the sourcing of the file. Conversely, whitelist modifications
that should not be ignored. Whitelisted variables have priority over
blacklisting.
E.g.,
    EnvironmentModifications.from_sourcing_file(
        bashrc,
        blacklist=['JUNK_ENV', 'OPTIONAL_.*'],
        whitelist=['OPTIONAL_REQUIRED.*'])
This modification can be used to eliminate environment variables that
are not generalized for modules (eg, user-specific variables).
* BUG: module prepend-path in wrong order (fixes #6501)
* STYLE: module variables in sorted order (issue #6501)
- looks nicer and also helps when comparing the contents of different
module files.
* ENH: remove duplicates from env paths when creating modules (issue #6501)
- this makes for a cleaner module environment and helps avoid some
unnecessary changes to the environment that are only provoked by
redundancies in the PATH.
E.g., if PATH was /usr/bin before sourcing and
/usr/bin:/usr/bin:/my/application/bin afterwards, only
/my/application/bin should be added to PATH, not
/usr/bin:/my/application/bin.
Activate via the 'clean' flag (default: False):
    EnvironmentModifications.from_sourcing_file(bashrc, clean=True, ...)
Fixes #2440
The "Getting started" guide should be short and sweet. This commit
simplifies the "Environment-Modules" section pruning:
- outdated / wrong suggestions as noted in #2440
- uncommon setups that are better treated in a reference guide
According to the documentation here:
http://www.sphinx-doc.org/en/stable/ext/autodoc.html
"For module data members and class attributes, documentation can either
be put into a comment with special formatting (using a #: to start the
comment instead of just #), or in a docstring after the definition."
* Updated function which checks if a binary file needs relocation.
Previously this was incorrectly identifying ELF binaries as symbolic
links (so they were being excluded from relocation). Added test to
check that ELF binaries are not considered symlinks.
* relocate_text was not replacing paths in text files. Added test to
check that text files are relocated properly (i.e. paths in the file
are converted to the new prefix).
* Exclude backup files created by filter_file when installing from
binary cache.
* Update write_buildinfo_file method signature to distinguish between
the spec prefix and the working directory for the binary cache
package.
"spack spec" was providing helpful error information about conflicts
that was missing from "spack install", this updates "spack install"
to provide the same information.
The original docstring had confusing wording re: what is going to be
symlinked and where when using the `extend` directive, and how exactly
the symlinking is performed (not automatically on install, but using
`spack activate`). See #5559.
Showing "Normalize" on output doesn't give users additional information,
as this step is essentially an implementation detail of concretization.
This PR skips it and shows just the input spec and the concretized one.
Printing partial hashes for input spec has been disabled.
* First draft for SC17 build systems portion
Added tutorial_buildsystems.rst file as well as example files under
the tutorial/ directory.
* Remove floating `
* Add requested changes, and examples of subclasses
Added in the requested changes to the documentation. Also added in
information about the subclasses and the defaults that they provide.
Also fixed some phrasing issues, formatting and punctuation.
* Flake8 fixes and new files for classes
Made flake8 fixes to pass tests and also added files to demonstrate code
in the classes.
* Minor edits
Edits in formatting and made some sentence changes
* Flake8 fixes
More flake8 fixes
* Flake8 fix
* Change section order on tutorial and minor edits
Placed the section at the appropriate section for the tutorial and then
added some minor edits that were requested.
* Add requested changes and more details
Added more details to CMake, Makefile, and Python packages.
* Fixed formatting and minor edits
* Fix doc build error
* Allow types and 'any' in variant definitions.
- Previously variant values had to be a tuple or a callable predicate.
- This allows 'any' as shorthand for `lambda x: True` and type objects
as shorthand for "any value of this type".
- Makes variant definitions more readable and keeps lambdas out of
packages for common cases (a short illustrative example follows this list).
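For illustration, hedged variant directives using the new shorthands, as
they might appear in a package class body (variant names are made up):
```
# any value of the given type is accepted
variant('cxxstd', default='11', values=str,
        description='C++ standard; any string is accepted')

# 'any' is shorthand for `lambda x: True`
variant('extras', default='none', values=any,
        description='accept any value at all')
```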
* Update packaging tutorial
* Fix bad file reference in packaging tutorial
* First draft of the advanced packaging tutorial
* advanced packaging tutorial: improved phrasing
Thanks Denis and Hartzell!
* Fixed typos + reworded a couple of sentences
* Reworked module file tutorial section
First draft for the SC17 update. This includes:
- adding an introduction on module files + Spack's module
generation blueprints
- adding a set-up section and provide a docker image for easy set-up
- updating all the relevant snippets
- extending a bit some of the concepts that were already touched
* Added reference to #5582 + committed Dockerfiles
Also fixed a couple of typos spotted by Denis.
* module file tutorial: added section on template customization
* module file tutorial: fixed minor typos + rephrased a sentence
* module file tutorial: made explicit that Docker image comes with software
* module file tutorial: improved phrasing and layout.
Thanks Hartzell!
* module file tutorial: added vim and nano to editors
* module file tutorial: fixed typo
* Fixed typos
Thanks Adam!
* module file tutorial: updated Dockerfile + minor changes in introduction
Fixes #6154
For compiler options which set values using the syntax "-flag value"
(with a space between the flag and the flag's value) the flag and
value were treated as separate and reordered. This updates Spack's
logic to treat the flag and value as a single unit, even if there is
a space between them. It assumes that all flags are prefixed with
"-" and that all flag values are not.
* Skip rewriting files that are links (this also avoids issues with parsing
the output of the 'file' command for symlinks)
* Fail rather than warn for several gpg signing issues (e.g. if there is no
key available to sign with)
* Installation with 'buildcache install' only retrieves link and run
dependencies (this matches 'buildcache create' which only creates tarballs
of link/run dependencies)
* Don't rewrite RPATHs for a binary if the binary cached package was created
with relative RPATHs
* Refactor relocate_binary to operate on multiple binaries (given as a
collection of paths). This was likewise done for relocate_text and
make_binary_relative
- This isn't one of those autogenerated SVGs from a drawing program!
- This is a completely re-traced, minimalist SVG file with clearly
delineated pieces so that your favorite renderer can draw a Spack logo
at whatever resolution you want.
- Included versions with text, as well.
Fixes #6126. #3183 removed the print_help function from the "modules" module.
This adds it back so that if a user invokes 'spack load foo' without
having sourced an environment setup script, they will be prompted
to do so. This also improves the placeholder message to tell the
user to invoke 'spack' as shell function rather than as an executable.
Part of the resource staging process is to place downloaded/expanded
files in the root stage. This was not happening when a resource stage
was restaged.
Fixes #5778.
Spack uses 'gcc -dumpversion' to determine the full version of gcc.
'gcc -dumpversion' no longer gives the full version on gcc 7.2.0.
'gcc -dumpfullversion' is required instead. This PR detects when
'gcc -dumpversion' gives a truncated version of '7' and in that
case retrieves the full version with 'gcc -dumpfullversion'
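A rough sketch of that fallback (not the exact detection code; Executable
is Spack's wrapper around running external commands):
```
from spack.util.executable import Executable

gcc = Executable('gcc')
version = gcc('-dumpversion', output=str).strip()
if version == '7':  # truncated, major-only answer
    version = gcc('-dumpfullversion', output=str).strip()
```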
The name of the debug log written by the cc compiler wrapper was given
by Spec.short_spec, which includes the architecture. Somewhere along
the line Spec.format started adding spaces around the architecture
property so the filename started including spaces; the cc wrapper
script appears to ignore this, so files like spack-cc-bzip2-....in.log
(which record the wrapped compiler invocations) were not being
generated. This uses a different format string from the spec to
generate the wrapper log file names (which does not include spaces).
* when a user-provided spec refers to an already-installed package, packages with patches applied were causing validation errors based on the recorded variants in the package's class
* avoid checks on all reserved variants (not just 'patches')
* basic functionality to install spec from a binary cache when it's available; this spiders each cache for each package and could likely be more efficient by caching the results of the first check
* add spec to db after installing from binary cache
* cache (in memory) spec listings retrieved from binary caches
* print a warning vs. failing when no mirrors are configured to retrieve pre-built Spack packages
* make automatic retrieval of pre-built spack packages from mirrors optional
* no code was using the links stored in the dictionary returned by get_specs, so this simplifies the logic to return only a set of specs
* print package prefix after installing from binary cache
* provide more information to user on install progress
This updates the logic for Package.extends so that if the spec
associated with the package is not concrete, it will report true if
the package *could extend* the given spec; generally speaking a
package could extend a spec as long as none of the details associated
with its extendee spec conflict with the given spec. When the spec
associated with the package is concrete, this function will only
report whether the package *does extend* the given spec. When both
the specs are concrete, the semantics are the same as before.
* When creating a tar of a package for a build cache, symlinks are
preserved (the corresponding path in the newly-created tarfile will
be a symlink rather than a copy of the file)
* Don't add external packages to a build cache
* When installing from binary cache, don't create install prefix until
verification is complete
* Fixes #5754
Previously when RepoPath.repo_for_pkg was invoked with a string,
it did not check if the string included a namespace. Any
namespace-qualified package provided as a string would not be found
(at which point the behavior was to return the highest-precedence
repository).
* handle nested namespaces for packages specified as strings in repo_for_pkg
* add preliminary repository tests
* add test which replicates #5754
* refactor repo tests with fixtures
* define repo_path equivalent at test-level scope for repo tests
* add tests for unknown namespace/package
* rename fixture function (no longer prefixed with 'test_')
Internally we work against a branch named 'llnl/develop', which
mirrors the public repository's `develop` branch.
It's useful to be able to run flake8 on our changes, using
`llnl/develop` as the base branch instead of `develop`.
Internally the flake8 subcommand generates the list of changed files
using a hardcoded range of `develop...`.
This makes the base of that range a command line option, with a
default of `develop`.
That lets us do this:
```
spack flake8 --base llnl/develop
```
which uses a range of `llnl/develop...`.
'spack install' can now reinstall a spec even if it has dependents, via
the --overwrite option. This option moves the current installation into a
temporary directory. If the reinstallation is successful the temporary
directory is removed; otherwise a rollback is performed.
- new E741 flake8 checks disallow 'l', 'O', and 'I' as variable names
- rework parts of the code that use this, to be compliant
- we could add exceptions for this, but we're trying to mostly keep up
with PEP8 and we already have more than a few exceptions.
- When you don't use wildcards, flake8 will find places where you used an
undefined name.
- This commit has all the bugfixes resulting from this static check.
There are now separate flake8 configs for core vs. packages:
- core has a smaller set of flake8 exceptions
- packages allow `from spack import *` and module globals
- Allows core to take advantage of static checking for undefined names
- Allows packages to keep using Spack tricks like `from spack import *`
and dependencies setting globals for dependents.
* Update Getting Started docs to clarify that full Xcode suite is required for qt
* Better error message when only the command-line tools are installed
I'm tracking down a problem with the perl package that's been
generating this error:
```
OSError: OSError: [Errno 2] No such file or directory: '/blah/blah/blah/lib/5.24.1/x86_64-linux/Config.pm~'
```
The real problem is upstream, but it's being masked by an exception
raised in `filter_file`'s finally block.
In my case, `backup` is `False`.
The backup is created around line 127, the `re.sub()` call
fails (working on that), the `except` block fires and moves the backup
file back, then the finally block tries to remove the non-existent
backup file.
This change just avoids trying to remove the non-existent file.
- The shell script uses arrays and hence only works on sophisticated shells and not the default `sh`. For clarity the shebang `#!/bin/bash` has been used instead.
- This fakes out GitFetchStrategy to try code paths for different git
versions.
- This allows us to test code paths for old versions using a newer git
version.
- Tests use a session-scoped mock stage directory so as not to interfere
with the real install.
- Every test is forced to clean up after itself with an additional check.
We now automatically assert that no new files have been added to
`spack.stage_path` during each test.
- This means that tests that fail installs now need to clean up their
stages, but in all other cases the check is useful.
- Be explicit about stage creation during the install process.
- This avoids accidental creation of stages
- e.g., during `spack install --fake`, stages were erroneously recreated
after being destroyed
- This prevents the main spack install from being cluttered by
invocations of `spack test`.
- This uses a session-scoped stage fixture to ensure tests don't
interfere.
- Spack's core package interface was previously overly stateful, in that
calling methods like `do_stage()` could change your working directory.
- This removes Stage's `chdir` and `chdir_to_source` methods and replaces
their usages with `with working_dir(stage.path)` and `with
working_dir(stage.source_path)`, respectively. These ensure the
original working directory is preserved (see the sketch after this list).
- This not only makes the API more usable, it makes the tests more
deterministic, as previously a test could leave the current working
directory in a bad state and cause subsequent tests to fail
mysteriously.
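For example, a build step inside a package's `install()` can now look
roughly like this (assuming the usual `configure`/`make`/`prefix` globals
of the build environment):
```
from llnl.util.filesystem import working_dir

with working_dir(self.stage.source_path):
    # the caller's working directory is restored when the block exits
    configure('--prefix={0}'.format(prefix))
    make()
    make('install')
```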
- Assertion would search for root through all possible paths.
- It's also really slow.
- This isn't needed anymore. We're pretty good at ensuring single-rooted
DAGs, and this assertion has never been thrown.
- This shaves another 6 seconds off r-rminer concretization
- This reduces concretization time for r-rminer from over 1 minute to
only 16 seconds.
- OrderedDict ends up taking a *lot* of time to compare larger specs.
- An OrderedDict isn't actually needed here. It's actually not possible
to find duplicates, and we end up sorting the contents of the
OrderedDict anyway.
- This is an optimization to the way traverse_edges iterates over
successors.
- Previous version called dependencies_dict(), which involved a lot of
redundant work (creating dicts and calling canonical_deptype)
- Spack ends up constructing compilers frequently from YAML data.
- This caches the result of parsing the compiler config
- The logic in compilers/__init__.py could use a bigger cleanup, but this
makes concretization much faster for now.
- on macOS, this also ensures that xcrun is called only twice, as opposed
to every time a new compiler object is constructed.
This adds a workflow section on how to use spack on Docker.
It provides an example of the best practices I have collected over the
last months and avoids the common pitfalls I ran into.
Works with MPI, CUDA, Modules, execution as root, etc.
Background: Developed initially for PIConGPU.
* Make --trusted default when running spack gpg list
Currently running `spack gpg list` with no arguments returns nothing. You must supply either the `--trusted` or the `--signing` options. The idea here is to return some initial data to the user when the command is run. The alternative is to return an error, telling the user to select one of the two options.
* Add an extra test case for the empty list command
Fixes the issue with code coverage
This isn't a rework of the concretizer but it speeds things up a LOT.
The main culprits were:
1. Variant code, `provider_index`, and `concretize.py` were calling
`spec.package` when they could use `spec.package_class`
- `spec.package` looks up a package instance by `Spec`, which requires a
(fast-ish but not that fast) DAG compare.
- `spec.package_class` just looks up the package's class by name, and you
should use this when all you need is metadata (most of the time); a
short sketch of the distinction follows this entry.
- not really clear that the current way packages are looked up is
necessary -- we can consider refactoring that in the future.
2. `Repository.repo_for_pkg` parses a `str` argument into a `Spec` when
called with one, via `@_autospec`, but this is not needed.
- Add some faster code to handle strings directly and avoid parsing
This speeds up concretization 3-9x in my limited tests. Still not super
fast but much more bearable:
Before:
- `spack spec xsdk` took 33.6s
- `spack spec dealii` took 1m39s
After:
- `spack spec xsdk` takes 6.8s
- `spack spec dealii` takes 10.8s
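The distinction between the two lookups, as a hedged sketch (the
attribute accesses shown are only illustrative):
```
# metadata-only queries: look the class up by name, no DAG comparison
pkg_cls = spec.package_class
has_mpi_variant = 'mpi' in pkg_cls.variants

# only code acting on this exact spec needs the bound instance
pkg = spec.package
pkg.do_install()
```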
* Do not call setup_package for fake installs
- `setup_package` could fail if ``setup_dependent_environment`` or other
routines expected to use executables from dependencies
- petsc and boost try to get python config variables in
`setup_dependent_package`; this would cause them not to be
fake-installable
* Remove vestigial deptype_query argument to Spec.traverse()
- The `deptype_query` argument isn't used anymore -- it's only passed
around and causes confusion when calling traverse.
- Get rid of it and just keep the `deptypes` argument
* Don't print redundant messages when installing dependencies
- `do_install()` was originally depth-first recursive, and printed "<pkg>
already installed in ..." multiple times for packages as recursive
calls encountered them.
- For much cleaner output, use spec.traverse(order='post') to install
dependencies instead
* Add package for aspell and assorted dictionaries
Add a package definition for aspell.
Add a handful of dictionaries to convince myself that the support for
a bunch of dictionaries works.
* Flake8 cleanup
* Use six's version of urlparse
`urlparse` is not python3 friendly. This works around it (stolen from
`.../cmd/md5.py`).
* Fix incorrect trimming regexp
* Clean up dictionary build
- more parsimonious use of `which` (`make()` has already been made)
- use `sh` instead of `bash`
* Use a helper method to generate info for variants
I figured out my issues with static methods. I *think* that this
is pythonic.
* Convert aspell to an extendable package
Convert aspell to be extendable and rework the dictionaries to be
extensions.
As it stands, there's a great deal of cut and paste in the
dictionaries, I'll abstract that out next.
The {de,}activate methods copy a great deal of code out of
package.py. Perhaps there's a better way....
* Create AspellDictPackage and use it for the dictionaries
Reduce the repeated code, pull it into a base class.
I'm confused about why 'from spack import *' wasn't more useful in the
base class.
* Oops, -de & -es should be AspellDictPackages too
* Typo: pakcage -> package
* Address some commentary
* Update copyright dates, 2016->2017
* spec and spec.package.spec can refer to different objects in the
database. When these two instances of spec differ in terms of
the value of the 'concrete' property, Spec._mark_concrete can
fail when checking Spec.package.installed (which requires
package.spec to be concrete). This skips the check for
spec.package.installed when _mark_concrete is called with
'True' (in other words, when the database is marking all specs
as being concrete).
* add test to confirm this fixes #5293
* edits to address issues where spack concretization attempts to set properties on already-installed specs
* most added checks only need to check if the spec is concrete; they don't also need to check if the package is installed
* add test to ensure that patches are not applied to an installed spec
* add test to ensure that an error is detected when a dependent requests a dependency constraint which conflicts with a requested installed dependency
Fixes #5455
All methods within setup_package use an EnvironmentModifications object
to control the environment. Those modifications are applied at the end
of setup_package. Module loads for the build environment need to be
done after the rest of the environment modifications are applied, as
otherwise Spack may unset variables set by those modules (for example
LD_LIBRARY_PATH).
Closes #2884, closes #4684
In #1848 we decided to use `Spec.format` to expand certain tokens in
the module file naming scheme or in the environment variable name.
Not all the tokens that are allowed in `Spec.format` make sense in
module file generation. This PR restricts the set of tokens that can
be used, and adds tests to check that the intended behavior is respected.
Additionally, the names of environment variables set/modified by module
files were, up to now, always uppercase. There are packages though that
require case sensitive variable names to honor certain behaviors (e.g.
OpenMPI). This PR restricts the uppercase transformation in variable
names to `Spec.format` tokens.
fixes #5587
In trying to preserve patch ordering, #5476 made equality inconsistent
for the added 'patches' variant. This commit maintains the original
weak ordering of patch applications while preserving consistency of
comparisons. The ordering DOES NOT enter the hashing mechanism. It's
supposed to be a hotfix, while we think of a cleaner and more-permanent
solution.
- This steals the magic regular expressions that CTest uses to parse log
files and adds them to Spack. See here:
https://github.com/Kitware/CMake/blob/master/Source/CTest/cmCTestBuildHandler.cxx
These are BSD licensed, so the port is in `external/ctest_log_parser.py`
- We currently use these to do better filtering of errors from build
output. Plan is to use them to generate good CDash output.
`spack blame` prints out the contributors to a package.
By modification time:
```
$ spack blame --time llvm
LAST_COMMIT LINES % AUTHOR EMAIL
3 days ago 2 0.6 Andrey Prokopenko <andrey.prok@gmail.com>
3 weeks ago 125 34.7 Massimiliano Culpo <massimiliano.culpo@epfl.ch>
3 weeks ago 3 0.8 Peter Scheibel <scheibel1@llnl.gov>
2 months ago 21 5.8 Adam J. Stewart <ajstewart426@gmail.com>
2 months ago 1 0.3 Gregory Becker <becker33@llnl.gov>
3 months ago 116 32.2 Todd Gamblin <tgamblin@llnl.gov>
5 months ago 2 0.6 Jimmy Tang <jcftang@gmail.com>
5 months ago 6 1.7 Jean-Paul Pelteret <jppelteret@gmail.com>
7 months ago 65 18.1 Tom Scogland <tscogland@llnl.gov>
11 months ago 13 3.6 Kelly (KT) Thompson <kgt@lanl.gov>
a year ago 1 0.3 Scott Pakin <pakin@lanl.gov>
a year ago 3 0.8 Erik Schnetter <schnetter@gmail.com>
3 years ago 2 0.6 David Beckingsale <davidbeckingsale@gmail.com>
3 days ago 360 100.0
```
Or by percent contribution:
```
$ spack blame --percent llvm
LAST_COMMIT LINES % AUTHOR EMAIL
3 weeks ago 125 34.7 Massimiliano Culpo <massimiliano.culpo@epfl.ch>
3 months ago 116 32.2 Todd Gamblin <tgamblin@llnl.gov>
7 months ago 65 18.1 Tom Scogland <tscogland@llnl.gov>
2 months ago 21 5.8 Adam J. Stewart <ajstewart426@gmail.com>
11 months ago 13 3.6 Kelly (KT) Thompson <kgt@lanl.gov>
5 months ago 6 1.7 Jean-Paul Pelteret <jppelteret@gmail.com>
3 weeks ago 3 0.8 Peter Scheibel <scheibel1@llnl.gov>
a year ago 3 0.8 Erik Schnetter <schnetter@gmail.com>
3 years ago 2 0.6 David Beckingsale <davidbeckingsale@gmail.com>
3 days ago 2 0.6 Andrey Prokopenko <andrey.prok@gmail.com>
5 months ago 2 0.6 Jimmy Tang <jcftang@gmail.com>
2 months ago 1 0.3 Gregory Becker <becker33@llnl.gov>
a year ago 1 0.3 Scott Pakin <pakin@lanl.gov>
3 days ago 360 100.0
```
- A package can depend on a special patched version of its dependencies.
- The `Spec` YAML (and therefore the hash) now includes the sha256 of
the patch, so applying a patch changes the spec's hash.
- The special patched version will be built separately from a "vanilla"
version of the same package.
- This allows packages to maintain patches on their dependencies
without affecting either the dependency package or its dependents.
This could previously be accomplished with special variants, but
having to add variants means the hash of the dependency changes
frequently when it really doesn't need to. This commit allows the
hash to change *just* for dependencies that need patches.
- Patching dependencies shouldn't be the common case, but some packages
(qmcpack, hpctoolkit, openspeedshop) do this kind of thing and it
makes the code structure mirror maintenance responsibilities.
- Note that this commit means that adding or changing a patch on a
package will change its hash. This is probably what *should* happen,
but we haven't done it so far.
- Only applies to `patch()` directives; `package.py` files (and their
`patch()` functions) are not hashed, but we'd like to do that in the
future.
- The interface looks like this: `depends_on()` can optionally take a
patch directive or a list of them:
    depends_on(<spec>,
               patches=patch(..., when=<cond>),
               when=<cond>)

    # or

    depends_on(<spec>,
               patches=[patch(..., when=<cond>),
                        patch(..., when=<cond>)],
               when=<cond>)
- Previously, the `patch()` directive only took an `md5` parameter. Now
it only takes a `sha256` parameter. We restrict this because we want
to be consistent about which hash is used in the `Spec`.
- A side effect of hashing patches is that *compressed* patches fetched
from URLs now need *two* checksums: one for the downloaded archive and
one for the content of the patch itself. Patches fetched uncompressed
only need a checksum for the patch. Rationale:
- we include the content of the *patch* in the spec hash, as that is
the checksum we can do consistently for patches included in Spack's
source and patches fetched remotely, both compressed and
uncompressed.
- we *still* need the checksum of the downloaded archive, because we want
to verify the download *before* handing it off to tar, unzip, or
another decompressor. Not doing so is a security risk and leaves
users exposed to any arbitrary code execution vulnerabilities in
compression tools.
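A hedged example of a compressed remote patch carrying both checksums
(the package name, URL, and the `archive_sha256` keyword are assumptions
for illustration):
```
depends_on('libelf',
           patches=[patch('https://example.com/fix-build.patch.gz',
                          sha256='<checksum of the uncompressed patch>',
                          archive_sha256='<checksum of the .gz archive>',
                          when='@:1.0')],
           when='+elf')
```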
- Functions returned by directives were all called `_execute`, which made
reading stack traces hard because you couldn't tell what directive a
frame came from.
- renamed them all to `_execute_<directive>`
- Exceptions in directives were only really used in one or two places --
get rid of the boilerplate init functions and let the callsite specify
the message.
- move `spack.cmd.checksum.get_checksums` to `spack.util.web.spider_checksums`
- move `spack.error.NoNetworkError` to `spack.util.web.NoNetworkError` since
it is only used there.
- Previously, dependencies and dependency_types were stored as separate
dicts on Package.
- This means a package can only depend on another in one specific way,
which is usually but not always true.
- Prior code unioned dependency types statically across dependencies on
the same package.
- New code stores dependency relationships as their own object, with a
spec constraint and a set of dependency types per relationship.
- Dependency types are now more precise
- There is now room to add more information to dependency relationships.
- New Dependency class lives in dependency.py, along with deptype
definitions that used to live in spack.spec.
Move deptype definitions to spack.dependency
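A rough sketch of the per-relationship record described above (attribute
names are illustrative, not necessarily the real ones):
```
class Dependency(object):
    """One dependency relationship: a constraint plus its deptypes."""
    def __init__(self, pkg, spec, type=('build', 'link')):
        self.pkg = pkg          # the dependent package (class)
        self.spec = spec        # constraint the dependency must satisfy
        self.type = set(type)   # deptypes for this particular edge
```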
* Add '--test=all' and '--test=root' options to test either the root or the root and all dependencies.
* add a test dependency type that is only used when --test is enabled.
* test dependencies are not added to the spec, but they are provided in the test environment.
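For example (the package name is illustrative), a test-only dependency
would be declared as:
```
# only needed when tests are requested at install time
depends_on('py-pytest', type='test')
```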
Closes #5473
Prior to this PR we were not exiting early for external packages, which
caused the `configure_options` property of the contexts to fail with
e.g. a key error because the DAG gets truncated for them. More
importantly Spack configure options don't make any sense for externals.
Now we exit early, and leave a message in the module file clarifying
that this package has been installed outside of Spack.
Closes #5201
Currently, if a user sets an external package to have a prefix that is
one of the system paths (like '/usr') the module files that are
generated will prepend '/usr/bin' to 'PATH', etc. This is particularly
nasty at the time when a module file is unloaded, and e.g. paths like
'/usr/bin' will be discarded from PATH.
This PR solves the issue skipping system paths when a prefix inspection
is made to generate module files.
* Module files are now generated using a template engine (refs #2902, #3173)
jinja2 has been hooked into Spack.
The python module `modules.py` has been split into several modules
under the python package `spack/modules`. Unit tests stressing module
file generation have been refactored accordingly.
The module file generator for Lmod has been extended to multiple
providers and deeper hierarchies.
* Improved the support for templates in module files.
Added an entry in `config.yaml` (`template_dirs`) to list all the
directories where Spack could find templates for `jinja2`.
Module file generators have a simple override mechanism to override
template selection ('modules.yaml' beats 'package.py' beats 'default').
* Added jinja2 and MarkupSafe to vendored packages.
* Spec.concretize() sets mutual spec-package references
The correct place to set the mutual references between spec and package
objects is at the end of concretization. After a call to concretize we
are now ensured that spec is the same object as spec.package.spec.
Code in `build_environment.py` that was performing the same operation
has been turned into an assertion to be defensive on the new behavior.
* Improved code and data layout for modules and related tests.
Common fixtures related to module file generation have been extracted
in `conftest.py`. All the mock configurations for module files have been
extracted from python code and have been put into their own yaml file.
Added a `context_property` decorator for the template engine, to make
it easy to define dictionaries out of properties.
The default for `verbose` in `modules.yaml` is now False instead of True.
* Extendable module file contexts + short description from docstring
The contexts that are used in conjunction with `jinja2` templates to
generate module files can now be extended from package.py and
modules.yaml.
Module file generators now infer the short description from the
package.py docstring (as you might expect, it's the first paragraph).
* 'module refresh' regenerates all modules by default
`module refresh` without `--module-type` specified tries to
regenerate all known module types. The same holds true for `module rm`
Configure options used at build time are extracted and written into the
module files where possible.
* Fixed python3 compatibility, tests for Lmod and Tcl.
Added test for exceptional paths of execution when generating Lmod
module files.
Fixed a few compatibility issues with python3.
Fixed a bug in Tcl with naming_scheme and autoload + unit tests
* Updated module file tutorial docs. Fixed a few typos in docstrings.
The reference section for module files has been reorganized. The idea is
to have only three topics at the highest level:
- shell support + spack load/unload use/unuse
- module file generation (a.k.a. APIs + modules.yaml)
- module file maintenance (spack module refresh/rm)
Module file generation will cover the entries in modules.yaml
Also:
- Licenses have been updated to include NOTICE and extended to 2017
- docstrings have been reformatted according to Google style
* Removed redundant arguments to RPackage and WafPackage.
All the callbacks in `RPackage` and `WafPackage` that are not build
phases have been modified not to accept a `spec` and a `prefix`
argument. This makes it possible to leverage the common `configure_args`
signature to insert the configuration arguments into the generated
module files by default. I think this is preferable to handling those
packages differently from `AutotoolsPackage`. Besides, only one package
seems to override one of these methods.
* Fixed broken indentation + improved resiliency of refresh
Fixed broken indentation in `spack module refresh` (probably a rebase
gone silently wrong?). Filter the writers for blacklisted specs before
searching for name clashes. An error with a single writer will not
stop regeneration, but instead will print a warning and continue
the command.
- converted `log_path` and `env_path` to properties of PackageBase.
- InstallErrors in build_environment are now annotated with the package
that caused them, in the 'pkg' attribute.
- Add `--show-log-on-error` option to `spack install` that catches
InstallErrors and prints the log to stderr if it exists.
Note that adding a reference to the Package allows a lot of stuff
currently handled by do_install() and build_environment to be handled
externally.
- '\b' in regular expression needs to be in a raw string (r'\b')
- Regression test that would've caught this was unintentionally disabled
- This fixes the string and the test
The correct place to set the mutual references between spec and
package objects is at the end of concretization. After a call to
concretize we should now be ensured that spec is the same object
as spec.package.spec.
Code in `build_environment.py` that was performing the same
operation has been turned into an assertion to be defensive on
the new behavior.
- Fixes bugs where concretization would fail due to an erroneously cached
_concrete attribute.
- Ripped out a bunch of code in spec.py that isn't needed/valid anymore:
- The various concrete() methods on different types of Specs would
attempt to statically compute whether the Spec was concrete.
- This dates back to when DAGs were simpler and there were no optional
dependencies. It's actually NOT possible to compute statically
whether a Spec is concrete now. The ONLY way you know is if it goes
through concretization and is marked concrete once that completes.
- This commit removes all simple concreteness checks and relies only on
the _concrete attribute. This should make thinking about
concreteness simpler.
- Fixed a couple places where Specs need to be marked concrete explicitly.
- Specs read from files and Specs that are destructively copied from
concrete Specs now need to be marked concrete explicitly.
- These spots may previously have "worked", but they were brittle and
should be explicitly marked anyway.
- Dependencies in concrete specs did not previously have their cache
fields (_concrete, _normal, etc.) preserved.
- _dup and _dup_deps weren't passing each other enough information to
preserve concreteness properly, so only the root was properly
preserved.
- cached concreteness is now preserved properly for the entire DAG, not
just the root.
- added method docs.
Fixes #4112
This commit extends the support of the AutotoolsPackage methods
`with_or_without` and `enable_or_disable` to bool-valued variants. It
also defines a convenience shortcut for those functions when the
activation parameter is the prefix of a spec (as in
`--with-{pkg}={prefix}`); a hedged sketch follows this entry.
This commit also includes:
* Updates to viennarna and adios accordingly: they have been modified to
use `enable_or_disable` and `with_or_without`
* Improved docstrings in `autotools.py`. Raise `KeyError` if name is
not a variant.
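A hedged sketch of what a package's `configure_args` can now look like
(the variant and dependency names, and the keyword used for the prefix
shortcut, are assumptions for illustration):
```
def configure_args(self):
    args = []
    # bool-valued variant -> --enable-shared / --disable-shared
    args.extend(self.enable_or_disable('shared'))
    # prefix shortcut -> --with-zlib=<prefix of the zlib dependency>
    args.extend(self.with_or_without('zlib', activation_value='prefix'))
    return args
```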
Renames the existing bootstrap command to 'clone'. Repurposes
'spack bootstrap' to install packages that are useful to the
operation of Spack (for now this is just environment-modules).
For bash and ksh users running setup-env.sh, if a Spack-installed
instance of environment-modules is detected and environment modules
and dotkit are not externally available, Spack will define the
'module' command in the user's shell to use the environment-modules
built by Spack.
First, quote the environment variable values. Second, export the
variables. Sorry, this is Bourne-shell syntax. Happy to consider a
shell-independent way to do this, but spack is already using sh-like
"env=value"
* Added support to query packages by tags.
- The querying commands `spack list`, `spack find` and `spack info` have
been modified to support querying by tags. Tests have been added to
check that the feature is working correctly under what should be the
most frequent use cases (an illustrative tagged package follows this entry).
* Refactored Repo class to make insertion of new file caches easier.
- Added the class FastPackageChecker. This class is a Mapping from
package names to stat info, that gets memoized for faster access.
- Extracted the creation of a ProviderIndex to its own factory function.
* Added a cache file for tags.
- Following what was done for providers, a TagIndex class has been added.
This class can serialize and deserialize objects from json. Repo and
RepoPath have a new method 'packages_with_tags', that uses the TagIndex
to compute a list of package names that have all the tags passed as
arguments.
On Ubuntu 14.04 the effect of the cache is to reduce the time for
`spack list` from ~3 sec. to ~0.3 sec. after the cache has been built.
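For illustration, a package advertises tags as a class-level attribute
(the package and tag names are made up), which the query commands above
can then filter on:
```
from spack import *

class MyMiniApp(Package):
    """Illustrative package carrying tags."""
    tags = ['proxy-app', 'benchmark']
```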
* Fixed colorization of `spack info`
This command broke after #5109. It was using the default value for the
"dirty" argument in `setup_package`. Now it adopts the same logic as
in `spack install`. Changed help for '--clean' and '--dirty'.
Improved coverage of spack env.
The private method `Spec._dup` was missing a line (when setting compiler
flags the parent spec was not set to `self`). This resulted in
an inconsistent state of the duplicated Spec. This problem has been
fixed here. The docstring of `Spec._dup` has been updated.
This change is done to avoid inconsistencies during refactoring. The rationale is that functions at different levels in the call stack all define a default for the 'dirty' argument. This PR removes the default value for all the functions except the top-level one (`PackageBase.do_install`).
In this way not defining 'dirty' will result in an error, instead of the default value being used. This will reduce the risk of having an inconsistent behavior after a refactoring.
* Respect --insecure when fetching list_url.
* Ensure support for Python 2.6, and that urlopen works for python versions prior to 2.7.9 and between 3.0 and 3.4.3.
* Simplified Spec.__init__ signature by removing the *dep_like argument.
The `*dep_like` argument of `Spec.__init__` is used only for tests. This
PR removes it from the call signature and introduces an equivalent
fixture to be used in tests.
* Refactored ``spec_from_dict`` to be a static method of ``Spec``
The fixture ``spec_from_dict`` has been refactored to be a static method
of ``Spec``. Test code has been updated accordingly. Added tests for
exceptional paths.
* Renamed argument `unique` to `normal` + added LazySpecCache class
As requested in the review the argument `unique` of `Spec.from_literal`
has been renamed to `normal`. To avoid eager evaluations of
`Spec(spec_like)` expressions a subclass of `collections.defaultdict`
has been introduced.
* Spec object can be keys of the dictionary for a spec literal.
Added back the possibility to use a spec directly as a key. This makes
it possible to build DAGs that are partially normalized.
- Update handling of ChildError so that its output is capturable from a
SpackCommand
- Update cmd/install test to make sure Python and build log output is
being displayed properly.
- install and probably other commands were designed to run once, but now
we want to test them from within Spack with SpackCommand
- cmd/install.py assumed that it could modify do_install in PackageBase
and leave it that way; this makes the decorator temporary
- package.py didn't properly initialize its stage if the same package had
been built successfully before (and the stage removed).
- manage stage lifecycle better and remember when Package needs to
re-create the stage
- If a failure comes from an external command and NOT the Python code,
display errors highlighted with some context.
- Add some rudimentary support for parsing errors out of the build log
(not very sophisticated yet).
- Build errors in Python code will still display with Python context.
Users can now add an optional custom message to the conflicts directive.
Layout on screen has been changed to improve readability and the long
spec is shown in tree format. Two conflicts in `espresso` have been
modified to showcase the feature.
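For illustration, a conflict with a custom message might look like this
(the constraints here are made up):
```
conflicts('%gcc@:4.8', when='+cxx14',
          msg='+cxx14 requires a compiler with full C++14 support')
```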
- Python I/O would not properly interleave (or appear) with output from
subcommands.
- Add a flushing wrapper around sys.stdout and sys.stderr when
redirecting, so that Python output is synchronous with that of
subcommands.
- 'v' toggle was previously only good for the current install.
- subsequent installs needed user to press 'v' again.
- 'v' state is now preserved across dependency installs.
- Previously we would use `os._exit()` to avoid Spack error handling
in the parent process when build processes failed. This isn't
necessary anymore since build processes propagate their exceptions to
the parent process.
- Use `sys.exit` instead of `os._exit`. This has the advantage of
automatically flushing output streams on quit, so output from child
processes is not lost when Spack exits.
- Simplify interface to log_output. New interface requires only one
context handler instead of two. Before:
    with log_output('logfile.txt') as log_redirection:
        with log_redirection:
            # do things ... output will be logged
After:
    with log_output('logfile.txt'):
        # do things ... output will be logged
If you also want the output to be echoed to ``stdout``, use the
`echo` parameter:
    with log_output('logfile.txt', echo=True):
        # do things ... output will be logged and printed out
And, if you just want to echo *some* stuff from the parent, use
``force_echo``:
    with log_output('logfile.txt', echo=False) as logger:
        # do things ... output will be logged
        with logger.force_echo():
            # things here will be echoed *and* logged
A key difference between this and the previous implementation is that
*everything* in the context handler is logged. Previously, things like
`Executing phase 'configure'` would not be logged, only output to the
screen, so understanding phases in the build log was difficult.
- The implementation of `log_output()` is different in two major ways:
1. This implementation avoids race conditions by using only one pipe (before
we had a multiprocessing pipe and a unix pipe). The logger daemon
stops naturally when the input stream is closed, which avoids a race
in the previous implementation where we'd miss some lines of output
because the parent would shut the daemon down before it was done
with all output.
2. Instead of turning output redirection on and off, which prevented
some things from being logged, this version uses control characters
in the output stream to enable/disable forced echoing. We're using
the time-honored xon and xoff codes, which tell the daemon to echo
anything between them AND write it to the log. This is how
`logger.force_echo()` works.
- Fix places where output could get stuck in buffers by flushing more
aggressively. This makes the output printed to the terminal the same
as that which would be printed through a pipe to `cat` or to a file.
Previously these could be weirdly different, and some output would be
missing when redirecting Spack to a file or pipe.
- Simplify input and color handling in both `build_environment.fork()`
and `llnl.util.tty.log.log_output()`. Neither requires an input_stream
parameter anymore; we assume stdin will be forwarded if possible.
- remove `llnl.util.lang.duplicate_stream()` and remove associated
monkey-patching in tests, as these aren't needed if you just check
whether stdin is a tty and has a fileno attribute.
- Fix issue with color formatting regular expression.
- _separators regex in spec.py could be constructed such that '^' came
first in the character matcher, e.g. '[^@#/]'. This inverts the match
and causes transient KeyErrors.
- Fixed to escape all characters in the constructed regex (see the sketch after this list).
- This bug comes up in Python3 due to its more randomized hash iteration
order, but it could probably also happen in a Python 2 implementation.
- also clean up variable docstrings in spec.py
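A small sketch of the escaping fix (the separator set here is
illustrative):
```
import re

separators = '^@#/%'
# escape every character so '^' can never land first in the class
_separators = '[{0}]'.format(''.join(re.escape(s) for s in separators))
```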
- Mac OS X Sierra has no /usr/include by default
- Instead of assuming there's an include directory in /usr, mock up a directory that looks like we expect.
This adds sbang hook support for node-js and fixes the sbang filter
for lua (the character class exclusion was swallowing newlines and
reporting a false positive if lua was mentioned anywhere in the
file).
* Docs: Travis-CI Workflow
Add a workflow how to use spack on Travis-CI.
Future work:
depending on if and how we can simplify #5101, add a multi-compiler,
multi-C++-standard, multi-software build matrix example.
* Fix Typos
* Colorize spack info. Adds prominence to preferred version. fixes #2708
This uses 'llnl.util.tty.color' to colorize the output of 'spack info'.
It also displays versions in the order the concretizer would choose
them and shows the preferred in a line on its own and in bold.
* Modified output according to Adam's and Denis's reviews.
Section titles are now bold + blue instead of bold + black. Added a new
section named "Preferred version", which prints the preferred version
in bold characters.
* Further modifications according to Adam and Denis reviews.
After "Homepage:" we now have a single space. Removed newline after each
variant. Preferred version is not in bold fonts anymore. Added a simple
test that just runs the command.
* Improved error message for unsatisfiable specs. fixes #5066
This PR improves the error message for unsatisfiable specs by showing in tree format both the spec that cannot satisfy the constraint and the spec that asked for that constraint. After that follows a readable error message.
This PR allows additional unused properties at the top-level of the config.yaml file. Having these properties makes it possible to use two different versions of Spack, one of which adds a new property, without receiving error messages due to the presence of this new property in a configuration cache stored in the user's home.
This fixes a syntax error in the index.html file generated by the
"spack buildcache" command when creating build caches. This also
fixes support for installing unsigned binaries.
* Refactor IntelInstaller into IntelPackage base class
* Move license attributes from __init__ to class-level
* Flake8 fixes: remove unused imports
* Fix logic that writes the silent.cfg file
* More specific version numbers for Intel MPI
* Rework logic that selects components to install
* Final changes necessary to get intel package working
* Various updates to intel-parallel-studio
* Add latest version of every Intel package
* Add environment variables for Intel packages
* Update env vars for intel package
* Finalize components for intel-parallel-studio package
Adds a +tbb variant to intel-parallel-studio.
The tbb package was renamed to intel-tbb.
Now both intel-tbb and intel-parallel-studio+tbb
provide tbb.
* Overhaul environment variables set by intel-parallel-studio
* Point dependent packages to the correct MPI wrappers
* Never default to intel-parallel-studio
* Gather env vars by sourcing setup scripts
* Use mpiicc instead of mpicc when using Intel compiler
* Undo change to ARCH
* Add changes from intel-mpi to intel-parallel-studio
* Add comment explaining mpicc vs mpiicc
* Prepend env vars containing 'PATH' or separators
* Flake8 fix
* Fix bugs in from_sourcing_file
* Indentation fix
* Prepend, not set if contains separator
* Fix license symlinking broken by changes to intel-parallel-studio
* Use comments instead of docstrings to document attributes
* Flake8 fixes
* Use a set instead of a list to prevent duplicate components
* Fix MKL and MPI library linking directories
* Remove +all variant from intel-parallel-studio
* It is not possible to build with MKL, GCC, and OpenMP at this time
* Found a workaround for locating GCC libraries
* Typos and variable names
* Fix initialization of empty LibraryList
Adds the "buildcache" command to spack. The buildcache command is
used to create gpg signatures for archives of installed spack
packages; the signatures and archives are placed together in a
directory that can be added to a spack mirror. A user can retrieve
the archives from a mirror and verify their integrity using the
buildcache command. It is often the case that the user's Spack
instance is located in a different path compared to the Spack
instance used to generate the package archive and signature, so
this includes logic to relocate the RPATHs generated by Spack.
The action `CleanOrDirtyAction` has been added. It sets the default
value for `dest` to `spack.dirty`, and changes it according to the flags
passed via command line. Added unit tests to check that the arguments
are parsed correctly. Removed lines in `PackageBase` that were setting
the default value of dirty according to what was in the configuration.
Popen.communicate outputs a str object for python2 and a bytes
object for python3. This updates the Executable.__call__ function
to call .decode on the output of Popen.communicate only for python3.
This ensures that Executable.__call__ returns a str for python2 and
python3.
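A minimal sketch of the decode-only-on-Python-3 behavior described above (an illustration, not Spack's actual Executable class):
```python
import subprocess
import sys

def run_and_capture(cmd):
    """Return command output as str on both Python 2 and Python 3."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    if sys.version_info >= (3, 0):
        out = out.decode('utf-8')   # communicate() gives bytes on Python 3
    return out

# e.g. run_and_capture(['echo', 'hello']) == 'hello\n' on both versions
```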
fixes #4236, fixes #5002
When a package is defined in more than one repository,
RepoPath.dirname_for_package_name may return the path
to either definition. This sidesteps that ambiguity by
accessing the module associated with the package definition.
* Merged 'purge' command with 'clean'. Deleted 'purge'. fixes#2942
'spack purge' has been merged with 'spack clean'. Documentation has been
updated accordingly. The 'clean' and 'purge' behavior are not mutually
exclusive, and they log brief information to tty while they go.
* Fixed a wrong reference to spack clean in the docs
* Added tests for 'spack clean'. Updated bash completion.
* Typo fixes in docstrings.
* Let OS classes know if the paths they get were explicitly specified by user.
* Fixed regexp for cray compiler version matching.
* Replaced LinuxDistro with CrayFrontend for the Cray platform's frontend.
Fixes#4898
Constraints that were supposed to be conditionally activated for
specified values of a single-valued variant were being activated
unconditionally in the case that the variant was associated with
an implicit dependency. For example if X->Y->Z and Y places a
conditional constraint on Z for a given single-valued variant on
Y, then it would have been applied unconditionally when
concretizing X.
* Add a QMakePackage base class
* Fix sqlite linking bug in qt-creator
* Add latest version of qt-creator
* Add latest version of qwt
* Use raw strings for regular expressions
* Increase minimum required version of qt
* Add comment about specific version of sqlite required
* Fixes for latest version of qwt and qt-creator
* Older versions of Qwt only work with older versions of Qt
* Fix crashes when running spack install under nohup
Fixes#4919
For reasons that I do not entirely understand, duplicate_stream() throws
an '[Errno 22] Invalid argument' exception when it tries to
`os.fdopen()` the duplicated file descriptor generated by
`os.dup(original.fileno())`. See spack/llnl/util/lang.py, line
394-ish.
This happens when run under `nohup`, which supposedly has hooked
`stdin` to `/dev/null`.
It seems like opening and using `devnull` on the `input_stream` in
this situation is a reasonable way to handle the problem.
* Be more specific about error being handled.
Only catch the specific error that happens when trying to dup
the stdin that nohup provides.
Catching e as a StandardError and then
`type(e).__name__` tells me that it's an OSError.
Printing e.errno tells me that it's 22
Double checking tells me that 22 is EINVAL.
Phew.
- Remove `special_types` dict in spec.py, as only 'all' is still used
- Still allow 'all' to be used as a deptype
- Simplify `canonical_deptype` function
- Clean up args in spack graph
- Add tests
For packages which contain a mix of versions with formats X.Y and
X.Y.Z, if the user entered an X.Y version as a preference in
packages.yaml, Spack would get confused and favor any version A.B.Z
where X=A and Y=B. In the case where there is a mix of these version
types, this commit updates preferences so Spack will favor an exact
match.
* Disable spec colorization when redirecting stdout and add command line flag to re-enable
* Add command line `--color` flag to control output colorization
* Add options to `llnl.util.tty.color` to allow color to be auto/always/never
* Add `Spec.cformat()` function to be used when `format()` should have auto-coloring
* Add universal build_type variant to CMakePackage
* Override build_type in some packages with different possible values
* Remove reference to no longer existent debug variant
* Update CBTF packages with new build_type variant
* Keep note on build size of LLVM
* Change version.up_to() to return Version() object
* Add unit tests for Version.up_to()
* Fix packages that expected up_to() to return a string
* Ensure that up_to() preserves separator characters
* Use version indexing instead of up_to
* Make all Version formatting properties return Version objects
* Update docs
* Tests need to test string representation
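A short sketch of the new `up_to()` semantics (assumes a Spack checkout on `sys.path`; behavior follows the bullet points above):
```python
from spack.version import Version

v = Version('3.10.2')
part = v.up_to(2)

assert isinstance(part, Version)   # previously this was a plain string
assert str(part) == '3.10'
# separator characters are preserved, e.g. Version('3_10_2').up_to(2) -> '3_10'
```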
Adds a SpackCommand class allowing Spack commands to be invoked easily from Python
Example usage:
```python
from spack.main import SpackCommand

info = SpackCommand('info')
out, err = info('mpich')
print(info.returncode)
```
This allows easier testing of Spack commands.
Also:
* Simplify command tests
* Simplify mocking in command tests.
* Simplify module command test
* Simplify python command test
* Simplify uninstall command test
* Simplify url command test
* SpackCommand uses more compatible output redirection
* Initial work on flag trapping using functions called <flag>_handler and default_flag_handler
* Update packages so they do not obliterate flags
* Added append to EnvironmentModifications class
* changed EnvironmentModifications to have append_flags method
* changed flag_val to be a tuple
* Increased test coverage
* added documentation of flag handling
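A hedged sketch of what a package-level flag handler might look like under this scheme; the exact signatures and parameter names are assumptions based on the bullets above, not the final API:
```python
from spack import *


class Mypackage(Package):   # hypothetical package
    """Illustration of <flag>_handler / default_flag_handler hooks."""

    def default_flag_handler(self, env, flag_val):
        # flag_val is assumed to be a (name, values) tuple; append the flags
        # to the build environment instead of overwriting them
        env.append_flags(flag_val[0], ' '.join(flag_val[1]))

    def cflags_handler(self, env, flag_val):
        # a package can special-case a single flag if it needs to
        env.append_flags('CFLAGS', ' '.join(flag_val[1]))
```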
* Change path to CMakeLists.txt to be relative to root, not pwd
* Changes requested during code review
* Revert back to old naming of root_cmakelists_dir
* Make relative directory more clear in docs
* Revert change causing build_type AttributeError
* Fix forgotten abs_path var
* Update CLHEP with new relative path
* Update more packages with new root_cmakelists_dir syntax
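A minimal sketch of the new usage (CLHEP is the package mentioned above; the URL is a placeholder and the exact attribute form may differ from the final code):
```python
from spack import *


class Clhep(CMakePackage):
    """CLHEP keeps its CMakeLists.txt in a subdirectory of the tarball."""
    homepage = "https://proj-clhep.web.cern.ch/proj-clhep/"
    url = "https://example.com/clhep-2.3.4.3.tgz"   # placeholder URL

    # interpreted relative to the root of the expanded source,
    # not to the current working directory
    root_cmakelists_dir = 'CLHEP'
```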
- Lock test can be run either as a node-local test or as an MPI test.
- Lock test is now parametrized by filesystem, so you can test the
locking capabilities of your NFS, Lustre, or GPFS filesystem. See docs
for details.
* mv variants: packages are now needed only during normalization
The relationships among different types of variants have been weakened,
in the sense that now it is permitted to compare MV, SV and BV among
each other. The mechanism that permits this is an implicit conversion
of the variant passed as argument to the type of the variant asking
to execute a constrain, satisfies, etc. operation.
* abstract variant: added a new type of variant
An abstract variant is like a multi valued variant, but behaves
differently on "satisfies" requests, because it will reply "True"
to requests that **it could** satisfy eventually.
Tests have been modified to reflect the fact that abstract variants
are now what get parsed from expressions like `foo=bar` given by users.
* Removed 'concrete=' and 'normal=' kwargs from Spec.__init__
These two keyword arguments were only used in one test module to force
a Spec to 'appear' concrete. I suspect they are just a leftover from
another refactoring, as now there's the private method '_mark_concrete'
that does essentially the same job. Removed them to reduce a bit the
clutter in Spec.
* Moved yaml related functions from MultiValuedVariant to AbstractVariant.
This is to fix the errors that are occurring in epfl-scitas#73, and that
I can't reproduce locally.
* Parse modules in a way that works for both lmod and tcl
* added test and made method more robust
* refactoring for pythonic clarity
* Improved detection of 'module' shell function + refactored module utilities into spack.util.module_cmd
* Improved regex to reject nested parentheses we are not prepared to handle
* make tests backwards compatible with python 2.6
* Improved regex to account for sh being aliased to bash and used in bash module definition on some systems
* Improve test compatibility with lmod
* Added error for None module_cmd
* Add test for get_module_cmd_from_which()
Add test for get_module_cmd_from_which().
Add -c argument to Popen call to typeset -f module in get_module_cmd_from_bash().
* Increased detection options
Included BASH_FUNC_module() variable outside of typeset as a detection option
This should work on bash even in restricted_shell mode
Kept the typeset detection as an option in case the module function is not exported in bash
Also added try statements to tests, with environment recreation in finally blocks.
* More tests added; some hackiness
* increased test coverage for util/module_cmd
* Code changes to enable system config scope in /etc
Files will go in either /etc/spack or /etc/spack/<platform>
Required minor changes to conftest.
* Updated documentation to match new config scope
- previous code called `which` on $EDITOR, but that doesn't work for
EDITORs like `emacs -nw` or `emacsclient -t -nw`.
- This patch just trusts EDITOR if it is set (same as previous
behavior), and only uses the defaults if it's not.
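A minimal sketch of the resulting behavior (the function and fallback default are illustrative only):
```python
import os
import shlex

def editor_command():
    """Trust $EDITOR verbatim if set (it may carry arguments, e.g.
    'emacs -nw'); only fall back to a default when it is unset."""
    editor = os.environ.get('EDITOR')
    if editor:
        return shlex.split(editor)   # don't run `which` on the whole string
    return ['vi']                    # placeholder default
```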
* issue 4492: added xfailing test, added owner to DependencyMap
* DependencyMap.concrete checks if we have unconditional dependencies
This fixes #4492 and #4107 using some heuristics to avoid an infinite
recursion among Spec.satisfies, Spec.concrete and DependencyMap.concrete
The idea, as suggested by @becker33, is to check just for unconditional
dependencies. This is not covering the whole space and a package with
just conditional dependencies can still fail in the same way. It should,
however, cover all the **real** packages we have in our repo so far.
* Check for CRAYPE_VERSION instead of path
Architecture tests would fail on Cray since it would not find
the expected path. To make the test correctly work on Cray search
for the CRAYPE version instead.
* Catch SystemExit error in case flake8 not in path
On shared systems having flake8 can involve starting own virtual env.
Skip the test if no flake8 is found to avoid failure reporting.
* Add compatibility to 1.5 svnadmin create
The flag added is needed to correctly create svn repos on NERSC systems.
This could be unnecessary for other sites. I'd like to see others
test before this change gets merged.
- Skip spack flake8 test when flake8 is not installed.
- Fix parsing of dashes in specs broken by new help parser.
- use argparse.REMAINDER instead of narg='?'
- don't interpret parts of specs like -mpi as arguments.
* Initial version of the namd package
* Modified charm to consider compile against intel/intel-mpi
* Correction of namd to compile with intel-mkl and intel compiler
* Adding include64 in the prefix
* adding property for the build directory
* removing useless function build
* During install, remove prior unfinished installs
If a user performs an installation which fails, in some cases the
install prefix is still present, and the stage path may also be
present. With this commit, unless the user specifies
'--keep-prefix', installs are guaranteed to begin with a clean
slate. The database is used to decide whether an install finished,
since a database record is not added until the end of the install
process.
* test updates
* repair_partial uses keep_prefix and keep_stage
* use of mock stage object to ensure that stage is destroyed when it should be destroyed (and otherwise not)
* add --restage option to 'install' command; when this option is not set, the default is to reuse a stage if it is found.
- Add a `spack gpg` subcommand in anticipation of signed binaries.
- GPG keys are stored in var/spack/gpg, and the spack gpg command manages them.
- Docs are included on the command.
* Touch up string expansion.
I'm chasing this:
```
$ (module purge; spack install perl %gcc/5.4.0)
==> Error: No installed spec matches the hash: '%s'
```
There's something deeper going on, but the error message isn't helpful.
After this change it tells me this:
```
$ (module purge; spack install perl %gcc/5.4.0)
==> Error: No installed spec matches the hash: '5.4.0'
```
Which is weird because `5.4.0` is not a hash... Whatever is going on here, the error message needs to be fixed.
* Flake8 whitespace
* fix parser
* Removed xfails
* cleaned up debug print statements
* make use of these changes in gcc
* Added comment explaining unreachable line, line left for added protection
* Sphinx no longer supports Python 2.6
* Update vendored sphinxcontrib.programoutput from 0.9.0 to 0.10.0
* Documentation cannot be built in parallel
* Let Travis install programoutput for us
* Remove vendored sphinxcontrib-programoutput
Recent updates to the sphinx package prevent the vendored version
from being found in sys.path. We don't vendor sphinx, so it doesn't
make sense to vendor sphinxcontrib-programoutput either.
PR #3367 inadvertently changed the semantics of _find_recursive and
_find_non_recursive so that the returned list are not ordered as the
input search list. This commit restores the original semantic, and adds
tests to verify it.
Added DFLAGS to the `make.inc` file being written.
These macros are also added to the language specific variables
like CFLAGS, CXXFLAGS and FCFLAGS. Changed `spec.satisfies('foo')`
with `'foo' in spec` in `intel-mkl`, see #4135. Added a basic
build interface to `intel-mpi`.
It seems that parse_anonymous_spec may fail if more than one part
(variant, version range, etc.) is given to the function. Added tests
covering this case to fix the problem in #4144.
- Full help is now only generated lazily, when needed.
- Executing specific commands doesn't require loading all of them.
- All commands are only loaded if we need them for help.
- There is now short and long help:
- short help (spack help) shows only basic spack options
- long help (spack help -a) shows all spack options
- Both divide help on commands into high-level sections
- Commands now specify attributes from which help is auto-generated:
- description: used in help to describe the command.
- section: help section
- level: short or long
- Clean up command descriptions
- Add a `spack docs` command to open full documentation
in the browser.
- move `spack doc` command to `spack pydoc` for clarity
- Add a `spack --spec` command to show documentation on
the spec syntax.
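A hedged sketch of a command module using the auto-generated help attributes listed above (the module path and attribute values are examples, following the `spack/cmd/` convention):
```python
# lib/spack/spack/cmd/frobnicate.py -- hypothetical command
description = "frobnicate an installed package"
section = "administration"   # high-level help section
level = "long"               # only shown by 'spack help -a'


def setup_parser(subparser):
    subparser.add_argument('package', help='package to frobnicate')


def frobnicate(parser, args):
    print('frobnicating', args.package)
```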
* SV variants are evaluated correctly in `when=` statements fixes#4113
The problem here was tricky:
```python
spec.satisfies(other)
```
changes already the MV variants in others into SV variants (where
necessary) if spec is concrete. If it is not concrete it does
nothing because we may be acting at a pure syntactical level.
When evaluating a `when=` keyword, the spec is certainly not concrete,
as it is in the middle of the concretization process. In this case we
have to manually trigger the substitution in `other` so we do not end up
comparing an MV variant "foo=bar" to an SV variant "foo=bar" and getting
False in return, which is wrong.
* sv variants: improved error message for typos in "when=" statements
Modifications:
- added support for multi-valued variants
- refactored code related to variants into variant.py
- added new generic features to AutotoolsPackage that leverage multi-valued variants
- modified openmpi to use new features
- added unit tests for the new semantics
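A hedged example of declaring a multi-valued variant in a package; the keyword names (`values`, `multi`) follow the description above but should be treated as illustrative:
```python
from spack import *


class Openmpi(AutotoolsPackage):
    """Trimmed-down sketch showing only the multi-valued variant."""
    homepage = "https://www.open-mpi.org"
    url = "https://example.com/openmpi-2.1.1.tar.bz2"   # placeholder URL

    variant(
        'schedulers',
        default='none',
        values=('none', 'slurm', 'tm'),
        multi=True,
        description='Scheduler support to enable (multiple values allowed)'
    )
```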
This allows people on systems that don't have all the fetchers to still
run Spack tests. Mark tests that require git, subversion, or mercurial to
be skipped if they're not installed.
* Filter all system paths introduced by dependencies from PATH
* Make sure path filtering works *even* for trailing slashes
* Revert some of the changes to `filter_system_paths`
* Yes, `bin64` is a real thing (sigh)
* add tests: /usr, /usr/, /usr/local/../bin, etc.
* Convert from rST to Google-style docstrings
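For reference, a small before/after illustration of the conversion (the function is made up):
```python
def fetch_rst(url, dest):
    """Download a file (old reStructuredText field style).

    :param str url: location to download from
    :param str dest: path to write the file to
    :returns: True on success
    :rtype: bool
    """


def fetch_google(url, dest):
    """Download a file (new Google style).

    Args:
        url (str): location to download from
        dest (str): path to write the file to

    Returns:
        bool: True on success
    """
```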
The required hash of a submodule might point to the
non-HEAD commit of the current main branch and hence
would lead to a "no such remote ref" at checkout in
a shallow submodule.
## Motivation
Python installations are both important and unfortunately inconsistent. Depending on the Python version, OS, and the strength of the Earth's magnetic field when it was installed, the name of the Python executable, directory containing its libraries, library names, and the directory containing its headers can vary drastically.
I originally got into this mess with #3274, where I discovered that Boost could not be built with Python 3 because the executable is called `python3` and we were telling it to use `python`. I got deeper into this mess when I started hacking on #3140, where I discovered just how difficult it is to find the location and name of the Python libraries and headers.
Currently, half of the packages that depend on Python and need to know this information jump through hoops to determine the correct information. The other half are hard-coded to use `python`, `spec['python'].prefix.lib`, and `spec['python'].prefix.include`. Obviously, none of these packages would work for Python 3, and there's no reason to duplicate the effort. The Python package itself should contain all of the information necessary to use it properly. This is in line with the recent work by @alalazo and @davydden with respect to `spec['blas'].libs` and friends.
## Prefix
For most packages in Spack, we assume that the installation directory is `spec['python'].prefix`. This generally works for anything installed with Spack, but gets complicated when we include external packages. Python is a commonly used external package (it needs to be installed just to run Spack). If it was installed with Homebrew, `which python` would return `/usr/local/bin/python`, and most users would erroneously assume that `/usr/local` is the installation directory. If you peruse through #2173, you'll immediately see why this is not the case. Homebrew actually installs Python in `/usr/local/Cellar/python/2.7.12_2` and symlinks the executable to `/usr/local/bin/python`. `PYTHONHOME` (and presumably most things that need to know where Python is installed) needs to be set to the actual installation directory, not `/usr/local`.
Normally I would say, "sounds like user error, make sure to use the real installation directory in your `packages.yaml`". But I think we can make a special case for Python. That's what we decided in #2173 anyway. If we change our minds, I would be more than happy to simplify things.
To solve this problem, I created a `spec['python'].home` attribute that works the same way as `spec['python'].prefix` but queries Python to figure out where it was actually installed. @tgamblin Is there any way to overwrite `spec['python'].prefix`? I think it's currently immutable.
## Command
In general, Python 2 comes with both `python` and `python2` commands, while Python 3 only comes with a `python3` command. But this is up to the OS developers. For example, `/usr/bin/python` on Gentoo is actually Python 3. Worse yet, if someone is using an externally installed Python, all 3 commands may exist in the same directory! Here's what I'm thinking:
If the spec is for Python 3, try searching for the `python3` command.
If the spec is for Python 2, try searching for the `python2` command.
If neither is found, try searching for the `python` command.
## Libraries
Spack installs Python libraries in `spec['python'].prefix.lib`. Except on openSUSE 13, where it installs to `spec['python'].prefix.lib64` (see #2295 and #2253). On my CentOS 6 machine, the Python libraries are installed in `/usr/lib64`. Both need to work.
The libraries themselves change name depending on OS and Python version. For Python 2.7 on macOS, I'm seeing:
```
lib/libpython2.7.dylib
```
For Python 3.6 on CentOS 6, I'm seeing:
```
lib/libpython3.so
lib/libpython3.6m.so.1.0
lib/libpython3.6m.so -> lib/libpython3.6m.so.1.0
```
Notice the `m` after the version number. Yeah, that's a thing.
## Headers
In Python 2.7, I'm seeing:
```
include/python2.7/pyconfig.h
```
In Python 3.6, I'm seeing:
```
include/python3.6m/pyconfig.h
```
It looks like all Python 3 installations have this `m`. Tested with Python 3.2 and 3.6 on macOS and CentOS 6.
Spack has really nice support for libraries (`find_libraries` and `LibraryList`), but nothing for headers. Fixed.
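A hedged sketch of how a dependent package could consume these attributes once they exist; the names follow the discussion above (`command`, `home`, `libs`, `headers`), and the helper properties on the returned objects are assumptions:
```python
from spack import *


class Boost(Package):   # illustrative fragment, not the real boost package
    def configure_args(self):
        python = self.spec['python']
        return [
            '--with-python={0}'.format(python.command),      # python/python2/python3
            '--with-python-root={0}'.format(python.home),    # real install dir, not /usr/local
            '--with-python-libs={0}'.format(python.libs.ld_flags),
            '--with-python-inc={0}'.format(python.headers.include_flags),
        ]
```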
When a compiler was not found, a stack trace was displayed to the user
because three arguments were being substituted into a format string that
had only two placeholders.
Fixes #4026. #1167 updated Database.reindex to keep old installation records to
support external packages. However, when a user manually removes a
prefix and reindexes this kept the records so the packages were
still installed according to "spack find" etc. This adds a check
for non-external packages to ensure they are properly installed
according to the directory layout.
- add Version.__format__ to support new-style formatting.
- Python3 doesn't handle this well -- it delegates to
object.__format__(), which raises an error for fancy format strings.
- not sure why it doesn't call str(self).__format__ instead, but that's
how things are.
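A minimal sketch of what the added `__format__` enables (assumes Spack on `sys.path`):
```python
from spack.version import Version

v = Version('1.2.3')

# On Python 3, str.format() calls Version.__format__; without the new
# method this delegated to object.__format__ and could raise.
print('installing version {0}'.format(v))   # -> installing version 1.2.3
```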
* Properly ignore flake8 F811 redefinition errors
* Add unit tests for flake8 command
* Allow spack flake8 to work on systems with older git
* Skip flake8 unit tests for Python 2.6 and 3.3
* treats correctly a change from `explicit=False` to `explicit=True` in an external package DB entry.
* added unit tests
* fixed issues raised by @tgamblin . In particular the PR is no more hash-changing for packages that are not external.
* added a test to check correctness of a spec/yaml round-trip for things that involve an external
* Don't find external module path at each step of concretization
* it's not necessary. The paths are retrieved at the end of concretization
* Don't find replacements for external packages.
* Test root of the DAG if external
* No reason not to test if the root of the DAG is external when external
packages are now first class citizens!
* Create `external` property for Spec (for external_path and external_module)
* Allow users to specify external package paths relative to spack
* Canonicalize external package paths so that users may specify their
locations relative to spack's directory.
* Update tests to use new external_path and external properly.
* skip license hooks on external
- Spack doesn't need eggs -- it manages its own directories
- Simplify install layout and reduce sys.path searches by installing all
packages flat (eggs are deprecated for wheels, and this is also what
wheels do).
- We now supply the --single-version-externally-managed argument to
`setup.py install` for setuptools packages and setuptools.
- modify packages to only use setuptools args if setuptools is an
immediate dependency
- Remove setuptools from packages that do not need it.
- Some packages use setuptools *only* when certain args (like
'develop' or 'bdist') are supplied to setup.py, and they specifically
do not use setuptools for installation.
- Spack never calls setup.py this way, so just removing the setuptools
dependency works for these packages.
* fetch git submodules recursively
This is useful if the submodules have submodules themselves. On
the other hand doing a recursive update doesn't hurt if there
is only one level.
* fetch submodules with depth=1 as well (fix#2190)
* use git submodule with depth only for git>=1.8.4
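A minimal sketch of the fetch logic described above (illustrative, not the actual fetch-strategy code):
```python
import subprocess

def update_submodules(git_version):
    """Recurse into nested submodules; shallow-clone them only when git
    is new enough to support --depth for submodule update (>= 1.8.4)."""
    args = ['git', 'submodule', 'update', '--init', '--recursive']
    if git_version >= (1, 8, 4):
        args += ['--depth', '1']
    subprocess.check_call(args)

# update_submodules((2, 11, 0))  # would run with --depth 1
```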
- Spack uninstall would previously fail if it could not load a package for
the thing being uninstalled.
- This reworks uninstall to handle cases where the package is no longer
known, e.g.:
a) the package has been renamed or is no longer in Spack
b) the repository the package came from is no longer registered in
repos.yaml
- gcc on macOS says it's version 4.2.1, but it's really clang, and it's
actually the *same* clang as the system clang.
- It also doesn't respond with a full path when called with
--print-file-name=libstdc++.dylib, which is expected from gcc in abi.py.
Instead, it gives a relative path and _gcc_compiler_compare doesn't
understand what to do with it. This results in errors like:
```
lib/spack/spack/abi.py, line 71, in _gcc_get_libstdcxx_version
libpath = os.readlink(output.strip())
OSError: [Errno 2] No such file or directory: 'libstdc++.dylib'
```
- This commit does two things:
1. Ignore any gcc that's actually clang in abi.py. We can probably do
better than this, but it's not clear there is a need to, since we
should handle the compiler as clang, not gcc.
2. Don't auto-detect any "gcc" that is actually clang anymore. Ignore
it and expect people to use clang (which is the default macOS
compiler anyway).
Users can still add fake gccs to their compilers.yaml if they want, but
it's discouraged.
* Checksum code wasn't opening binary files as binary.
- Fixes Python 3 issue where files are opened as unicode text by default,
and decoding fails for binary blobs.
* Simplify fetch test parametrization.
* - add tests for URL fetching and checksumming.
- fix coverage on interface functions in FetchStrategy superclass
- add some extra crypto tests.
* Package install remove prior unfinished installs
Depending on how spack is terminated in the middle of building a
package it may leave a partially installed package in the install
prefix. Originally Spack treated the package as being installed if
the prefix was present, in which case the user would have to
manually remove the installation prefix before restarting an
install. This commit adds a more thorough check to ensure that a
package is actually installed. If the installation prefix is present
but Spack determines that the install did not complete, it removes
the installation prefix and starts a new install; if the user has
enabled --keep-prefix, then Spack reverts to its old behavior.
* Added test for partial install handling
* Added test for restoring DB
* Style fixes
* Restoring 2.6 compatibility
* Relocated repair logic to separate function
* If --keep-prefix is set, package installs will continue an install from an existing prefix if one is present
* check metadata consistency when continuing partial install
* Added --force option to make spack reinstall a package (and all dependencies) from scratch
* Updated bash completion; removed '-f' shorthand for '--force' for install command
* dont use multiple write modes for completion file
* Add tests to mercurial package
* Add support for --insecure with mercurial fetching
* Install man pages and tab-completion scripts
* Add tests and latest version for all deps
* Flake8 fix
* Use certifi module to find CA certificate
* Flake8 fix
* Unset PYTHONPATH when running hg
* svn_fetch should use to svn-test, not hg-test
* Drop Python 3 support in Mercurial
Python 3 support is a work in progress and isn't currently
recommended:
https://www.mercurial-scm.org/wiki/SupportedPythonVersions
* Test both secure and insecure hg fetching
* Test both secure and insecure git and svn fetching
`set_executable` now checks whether the user, group, or other read
permission is set on a file, and if so it sets the corresponding
executable bit.
See #1483.
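A minimal sketch of the described behavior using only the standard library:
```python
import os
import stat

def set_executable(path):
    """Add an execute bit only where the matching read bit is set."""
    mode = os.stat(path).st_mode
    if mode & stat.S_IRUSR:
        mode |= stat.S_IXUSR
    if mode & stat.S_IRGRP:
        mode |= stat.S_IXGRP
    if mode & stat.S_IROTH:
        mode |= stat.S_IXOTH
    os.chmod(path, mode)
```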
Fixes#2587
The concretizer falls back on using the root architecture (followed
by the default system architecture) to fill in unspecified arch
properties for a spec. It failed to check cases where the root could
be None.
* Remove fake URLs from Spack
* Ignore long lines for URLs that start with ftp:
* Preliminary changes to version regexes
* New redesign of version regexes
* Allow letters in version-only
* Fix detection of versions that end in Final
* Rearrange a few regexes and add examples
* Add tests for common download repositories
* Add test cases for common tarball naming schemes
* Finalize version regexes
* spack url test -> spack url summary
* Clean up comments
* Rearrange suffix checks
* Use query strings for name detection
* Remove no longer necessary url_for_version functions
* Strip off extraneous information after package name
* Add one more test
* Dot in square brackets does not need to be escaped
* Move renaming outside of parse_name_offset
* Fix versions for a couple more packages
* Fix flake8 and doc tests
* Correctly parse Python, Lua, and Bio++ package names
* Use effective URLs for mfem
* Add checksummed version to mitos
* Remove url_for_version from STAR-CCM+ package
* Revert changes to version numbers with underscores and dashes
* Fix name detection for tbb
* Correctly parse Ruby gems
* Reverted mfem back to shortened URLs.
* Updated instructions for better security
* Remove preferred=True from newest version
* Add tests for new `spack url list` flags
* Add tests for strip_name_suffixes
* Add unit tests for version separators
* Fix bugs related to parseable name but unparseable version
* Remove dead code, update docstring
* Ignore 'binary' at end of version string
* Remove platform from version
* Flip libedit version numbers
* Re-support weird NCO alpha/beta versions
* Rebase and remove one new fake URL
* Add / to beginning of regex to avoid picking up similarly named packages
* Ignore weird tar versions
* Fix bug in url parse --spider when no versions found
* Less strict version matching for spack versions
* Don't rename Python packages
* Be a little more selective, version must begin with a digit
* Re-add fake URLs
* Fix up several other packages
* Ignore more file endings
* Add parsing support for Miniconda
* Update tab completion
* XFAILS are now PASSES for 2 web tests
- _spider in web.py was actually failing to spider deeper than a certain
point.
- Fixed multiprocessing pools to not use daemons and to allow recursive
spawning.
- Added detailed tests for spidering and for finding archive versions.
- left some xfail URL finding exercises for the reader.
- Fix noqa annotations for some @when decorators
- Clean up spec_syntax tests: don't depend on DB order.
- spec_syntax hash parsing tests were strongly dependent on the order the
DB was traversed.
- Tests now specifically grab the specs they want from the mock DB.
- Tests are more readable as a result.
- Add Python3 versions to Travis tests.
1. Fix#2807: Can't depend on virtual and non-virtual package
- This is tested by test_my_dep_depends_on_provider_of_my_virtual_dep in
the concretize.py test.
- This was actually working in the test suite, but it depended on the
order the dependencies were resolved in. Resolving non-virtual then
virtual worked, but virtual, then non-virtual did not.
- Problem was that an unnecessary copy was made of a spec that already
had some dependencies set up, and the copy lost half of some of the
dependency relationships. This caused the "can't depend on X twice"
error.
- Fix by eliminating unnecessary copy and ensuring that dep parameter of
_merge_dependency is always safe to own -- i.e. it's a defensive copy
from somewhere else.
2. Fix bug and simplify concretization of deptypes.
- deptypes weren't being accumulated; they were being set on each
DependencySpec. This could cause concretization to get into an infinite
loop.
- Fixed by accumulating deptypes in DependencySpec.update_deptypes()
- Also simplified deptype normalization logic: deptypes are now merged in
constrain() like everything else -- there is no need to merge them
specially or to look at dependents in _merge_dependency().
- Add some docstrings to deptype tests.
- Get rid of pkgsort() usage for preferred variants.
- Concretization is now entirely based on key-based sorting.
- Remove PreferredPackages class and various spec cmp() methods.
- Replace with PackagePrefs class that implements a key function for
sorting according to packages.yaml.
- Clear package pref caches on config test.
- Explicit compare methods instead of total_ordering in Version.
- Our total_ordering backport wasn't making Python 3 happy for some
reason.
- Python 3's functools.total_ordering and spelling the operators out
fixes the problem.
- Fix unicode issues with spec hashes, json, & YAML
- Try to use str everywhere and avoid unicode objects in python 2.
- Remove ascii encoding assumption from spack_yaml
- proc.communicate() returns bytes; convert to str before adding.
- Fix various byte string/unicode issues for Python 2/3 support
- Need to decode subprocess output as utf-8 in from_sourcing_files.
- Fix comments in strify()
- convert print, StringIO, except as, octals, izip
- convert print statement to print function
- convert StringIO to six.StringIO
- remove usage of csv reader in Spec, in favor of simple regex
- csv reader only does byte strings
- convert 0755 octal literals to 0o755
- convert `except Foo, e` to `except Foo as e`
- fix a few places `str` is used.
- may need to switch everything to str later.
- convert iteritems usages to use six.iteritems
- fix urllib and HTMLParser
- port metaclasses to use six.with_metaclass
- More octal literal conversions for Python 2/3
- Fix a new octal literal.
- Convert `basestring` to `six.string_types`
- Convert xrange -> range
- Fix various issues with encoding, iteritems, and Python3 semantics.
- Convert contextlib.nested to explicitly nested context managers.
- Convert use of filter() to list comprehensions.
- Replace reduce() with list comprehensions.
- Clean up composite: replace inspect.ismethod() with callable()
- Python 3 doesn't have "method" objects; inspect.ismethod returns False.
- Need to use callable in Composite to make it work.
- Update colify to use future division.
- Fix zip() usages that need to be lists.
- Python3: Use line-buffered logging instead of unbuffered.
- Python3 raises an error with unbuffered I/O
- See https://bugs.python.org/issue17404
- Update YAML version to support Python 3
- Python 3 support for ordereddict backport
- Exclude Python3 YAML from version tests.
- Vendor six into Spack.
- Make Python version-check tests work with Python 3
- Add ability to add version check exceptions with '# nopyqver' line
comments.
* Run python setup.py test if --run-tests
* Attempt to import the Python module after installation
* Add testing support to numpy and scipy
* Remove duplicated comments
* Update to new run-tests callback methodology
* Remove unrelated changes for another PR
* perl: make extendable and add Module::Build package
* perl: allow 'spack create' to identify perl packages from their contents
* perl-module-build: fix indenting of package docstring
* perl: split install() method for extensions into phases
* perl: auto-detect build method (Makefile.PL vs Build.PL) and define a 'check' method
* PerlPackage: use import statements similar to those in AutotoolsPackage
* PerlModule: fix detection of Build.PL
* PerlPackageTemplate: remove extraneous lines to avoid flake8 warnings
* PerlPackageTemplate: split into separate templates for Makefile.PL and Build.PL
* PerlPackage: add cross-references to docstrings
* AutotoolsPackage: fix ambiguous cross-references to avoid errors in doc tests
* PerlbuildPackageTemplate: depend on perl-module-build if Build.PL exists
- Spack find would fail with "unknown namespace" for some queries when a
package from an unknown namespace was installed.
- Solve by being conservative: assume unknown packages are NOT providers
of virtual dependencies.
- deactivate -a wouldn't work if the installation's package was no longer
available.
- Fix installed_extensions_for so that it doesn't need to look at the
package.py file.
This fixes the problem described in #3374, which describes `spack find` ignoring explicit/implicit.
I believe that this was broken in #2626.
This restores the behavior of implicit/explicit for me.
I believe that it does not screw anything else up, but ....
* Order listed compiler sections
"spack compiler list" output compiler sections in an arbitrary order.
With this commit compiler sections are ordered primarily by compiler
name and then by operating system and target.
* Compiler search lists config files with compilers
If a compiler entry is already defined in a configuration file that
the user does not know about, they may be confused when that compiler
is not added by "spack compiler find". This commit adds a message at
the end of "spack compiler find" to inform the user of the locations
of all config files where compilers are defined.
Fixes#1476
Concretization uses compilers defined in config files and if those
are not available defaults to searching typical paths where the
detected operating system would have a compiler. If there is an OS
update, the detected OS can change; in this case all compilers
defined in the config files would no longer match (because they would
be associated with the previous OS version). The error message in
this case was too vague. This commit adds logic for detecting when it
is likely that the OS has been updated (in particular when that
affects compiler concretization) and improves the information provided
to the user in the error message.
* Dont propagate flags between different compilers
Fixes#2786
Previously when a spec had no parents with an equivalent compiler,
Spack would default to adding the compiler flags associated with the
root of the DAG. This eliminates that default.
* added test for compiler flag propagation
* simplify compiler flag propagation logic
Fixes#3428
Users can run 'spack compiler find' to automatically initialize their
compilers.yaml configuration file. It also turns out that Spack will
implicitly initialize the compilers configuration file as part of
detecting compilers if none are found (so if a user were to attempt to
concretize a spec without running 'spack compiler find' it would not
fail). However, in this case Spack was overlooking its own implicit
initialization of the config files and would report that no new
compilers were found. This commit removes implicit initialization when
the user calls 'spack compiler find'.
This did not surface until #2999 because the 'spack compiler' command
defaulted to using a scope 'user/platform' that was not accounted for
in get_compiler_config (where the implicit initialization logic
predates the addition of this new scope); #2999 removed the scope
specification when checking through config files, leading to the
implicit initialization.
Previously, this would fail with a NoSuchMethodError:
```python
class Package(object):
    # this is the default implementation
    def some_method(self):
        ...

class Foo(Package):
    @when('platform=cray')
    def some_method(self):
        ...

    @when('platform=linux')
    def some_method(self):
        ...
```
This fixes the implementation of `@when` so that the superclass method
will be invoked when no subclass method matches.
Adds tests to ensure this works, as well.
* default scope for config command is made consistent with cmd/__init__ default
* dont specify a scope when looking for compilers with a matching spec (since compiler concretization is scope-independent)
* config edit should default to platform-specific file only for compilers
* when duplicate compiler specs are detected, the exception raised now points the user to the files where the duplicates appear
* updated error message to emphasize that a spec is duplicated (since multiple specs can reference the same compiler)
* 'spack compilers' is now also broken down into sections by os and target
* Added tests for new compiler methods
Modifications:
- `dump_packages` copies build dependencies into `$prefix/.spack`, as well as the link/run dependencies that we already copied there.
- fake installs copy dependency packages into `$prefix/.spack` as well
- Added a new interface for Specs to pass build information
- Calls forwarded from Spec to Package are now explicit
- Added descriptor within Spec to manage forwarding
- Added state in Spec to maintain query information
- Modified a few packages (the one involved in spack install pexsi) to showcase changes
- This uses an object wrapper to `spec` to implement the `libs` sub-calls.
- wrapper is returned from `__getitem__` only if spec is concrete
- allows packagers to access build information easily
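A hedged sketch of how a package might use the forwarding wrapper (pexsi is the example cited above; the recipe details are illustrative only):
```python
from spack import *


class Pexsi(MakefilePackage):   # trimmed-down illustration
    def edit(self, spec, prefix):
        # spec['lapack'] returns a wrapper (only for concrete specs) that
        # forwards build-information queries such as 'libs' to the package
        lapack_libs = spec['lapack'].libs.joined()
        filter_file(r'^LAPACK_LIB\s*=.*',
                    'LAPACK_LIB = {0}'.format(lapack_libs),
                    'make.inc')
```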
It seems the tests in `packages.py` were running just because we had a specific order of execution. This should fix the problem, and make the test_suite more resilient to running order.
- Fix format printing to match command line for hashes and full name formats
- Update spack graph to use new format
- Changed format string signifier for hashes from `$#` to `$/`
Modules generated by the module creation machinery currently print out
a notice that warns the user that things are being autoloaded. In
some situations those warnings are problematic. See #2754 for
discussion.
This is a first cut at optionally disabling the warning messages:
- adds a helper to the EnvModule base class that encapsulates the
config file variable;
- adds a method to the base class that provides a default (empty)
code fragment for generating a warning message;
- passes the warning fragment into the bit that formats the autoload
string;
- adds specialized autoload_warner() methods in the tcl and lmod
subclasses; and finally
- touches up the autoload_format strings in the specialized classes.
Add the ability to the modules generation process to blacklist
packages that were installed implicitly. One can still whitelist
modules that were installed implicitly.
This change adds a `blacklist_implicits` boolean as a peer to the
`whitelist` and `blacklist` arrays, e.g.:
```
modules:
enable::
- lmod
lmod:
whitelist:
- 'lua'
- 'py-setuptools'
blacklist:
- '%gcc@4.8.3'
blacklist_implicits: True
```
It adds a small helper in `spec.py` and then touches up the package
filtering code in `modules.py`.
* Replace `spack urls` and `spack url-parse` with `spack url`
* Allow spack url list to only list incorrect parsings
* Add spack url test reporting
* Add unit tests for new URL commands
* Add several new R packages
* Add a few more R packages
* Update more versions
* Convert Package to RPackage
* Add a few more packages
* Add missing dependencies
* AutotoolsPackage: added configure_directory to permit build out of source. The configure script executable is now invoked with an absolute path. Modified a few packages accordingly.
* build_systems: functions returning directories are now properties
* build_systems: fixed issues with tcl and tk
* AutotoolsPackage: reworked recipe for autoreconf
* Spec.satisfies accesses Spec.concrete as property
Fixes#2760
When copying a spec, _concrete is always set to False for each
dependency. "Spec.satisfies" was accessing the member "_concrete"
directly instead of using the property "concrete". This means that
if you copy a spec, the dependencies will be considered equal, but
did not necessarily satisfy one another. Spec.satisfies is a
prerequisite for a package to be considered an extension; as a
consequence, an extension with run-time dependencies that were also
extensions did not activate those extensions. This updates
Spec.satisfies to avoid checking the cached member "_concrete"
directly.
* Added test to check for activation of dependency extension
* Added test to check for transitive satisfiability between a spec and its copy
- Allows hashes to be specified after other parts of the spec
- Does not allow other parts of the spec to be specified after the hash
- The hash must either end input or be followed by another separate spec
- The next spec cannot be an anonymous spec (it must start with a package name or a hash)
See #2769 (after it was merged) for further discussion of this interface addition. That discussion resulted in these requirements:
```
python # 1 spec
/abc123 # 1 spec
python /abc123 # 1 spec
/456789 # 1 spec
python /abc123 /456789 # 2 specs
python /456789 /abc123 # 2 specs
/abc123 /456789 # 2 specs
/456789 /abc123 # 2 specs
/456789 /abc123 python # 3 specs
```
assuming `abc123` and `456789` are both hashes of different python specs.
* Add support for IBM threaded compilers, xl*_r
Added new compiler class, xl_r; added default flags to the compilers.yaml file.
* Add cppflags to the set of default flags to be added to the compilers stanza in compiler.yaml.
These flags are optional. Only defined flags will be listed in the compilers.yaml file.
* Fix scripting warnings revealed by flake8.
Updated __init__.py and xl_r.py to conform with flake8 rules.
* Add justification to the definition of the XL default compiler flags.
* PackageMeta: `run_before` is an alias of `precondition`, `run_after` an alias of `sanity_check`
* PackageMeta: removed `precondition` and `sanity_check`
* PackageMeta: decorators are now free-standing
* package: modified/added docstrings. Fixed the semantics of `on_package_attributes`.
* package: added unit test assertion as side effects of install
* build_systems: factored build-time test running into base class
* r: updated decorators in package.py
* docs: updated decorator names
* documentation: reworked packaging guide to add build-system phases
* documentation: improvements to AutotoolsPackage autodocs
* build_systems: updated autodocs
* run-tests: added a few information on how to run tests fixes#2606 fixes#2605
* documentation: fixed items brought up by @davydden
* typos in docs
* consistent use of 'build system' (i.e. removed 'build-system' from docs)
* added a note on possible default implementations for build-time tests
* documentation: fixed items brought up by @citibeth
* added note to explain the difference between build system and language used in a package
* capitalized bullet items
* added link to API docs
* documentation: fixed multiple cross-references after rebase
* documentation: fixed minor issues raised by @tgamblin
* documentation: added entry in table for the `PythonPackage` class
* docs: fixed issues brought up by @citybeth in the second review
Previously, fix_darwin_install_name would only handle dependencies that have no path set, and it ignored dependencies that have the build directory baked in as their path. Catch this, and replace it with the install directory.
- Add a PythonPackage class with build system support.
- Support build phases in PythonPackage
- Add a custom sanity check for PythonPackages
- Get rid of nolink dependencies in python packages
- Update spack create to use new PythonPackage class
- Port most of Python packages to new PythonPackage class
- Conducted a massive install and activate of Python packages.
- Fixed bugs introduced by install and activate.
- Update API docs on PythonPackage
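A hedged sketch of a package written against the new base class (the name, URL, and checksum are placeholders):
```python
from spack import *


class PyExample(PythonPackage):
    """Placeholder package illustrating the PythonPackage build system."""
    homepage = "https://example.com/py-example"
    url = "https://example.com/py-example-1.0.tar.gz"

    version('1.0', '00000000000000000000000000000000')   # placeholder md5

    depends_on('py-setuptools', type='build')
    depends_on('py-numpy', type=('build', 'run'))

    def build_args(self, spec, prefix):
        # per-phase arguments can be customized; the build/install phases
        # themselves are provided by the base class
        return []
```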
* Initial changes to spack create command
* Get 'spack create <url>' working again
* Simplify call to BuildSystemGuesser
* More verbose output of spack create
* Remove duplicated code from spack create and spack checksum
* Add better documentation to spack create docstrings
* Fix pluralization bug
* Flake8
* Update documentation on spack create and deprecate spack edit --force
* Make it more obvious when we are renaming a package
* Further deprecate spack edit --force
* Fix unit tests
* Rename default template to generic template
* Don't add automake/autoconf deps to Autotools packages
* Remove changes to default $EDITOR
* Completely remove all traces of spack edit --force
* Remove grammar changes to make the PR easier to review
* Fixed parser to eliminate need for escape quotes. TODO: Fix double call to shlex, fix spaces in spec __str__
* Fixed double shlex
* cleanup
* rebased on develop
* Fixed parsing for multiple specs; broken since #360
* Revoked elimination of the `-` sigil in the syntax, and added it back into tests
* flake8
* more flake8
* Cleaned up dead code and added comments to parsing code
* bugfix for spaces in arguments; new bug found in testing
* Added unit tests for kv pairs in parsing/lexing
* Even more flake8
* ... yet another flake8
* Allow multiple specs in install
* unfathomable levels of flake8
* Updated documentation to match parser fix
* Added customization for make targets in 'build' and 'install' phases for CMakePackage
* Use rst in build system docs so that Sphinx generates nice API docs
* Allow AutotoolsPackages to be built in a different directory
* Flake8
* Fix missing import
* Allow configure to be located in different directory
* Update espressopp to use build targets
* Flake8
* Sphinx fix, lists must be a new paragraph
* Back out change that allowed a configure script in a different directory than build_directory
* Add missing deps, build in parallel
* Missing space for rst list
* Removing the nobuild, nolink, and alldeps dependency types in favor of being explicit.
* This will help with maintenance going forward, as adding more dependency types won't affect existing declared dependencies in weird ways.
* default deptype is still `('build', 'link')`
* Rename packages
* Upcasing depends_on() in packages.
* Downcased extends('r')
* Fixed erroneously changed URL that had slipped through.
* Fixed typo
* Fixed link from documentation into package source code.
* Fixed another doc problem.
* Changed underscores to dashes in package names.
* Added test to enforce lowercase, no-underscore naming convention.
* Fix r-xgboost
* Downcase more instances of 'R' in package auto-creation.
* Fix test.
* Converted unit test packages to use dashes not underscores
* Downcase `r` in the docs.
* Update module_file_support.rst
Fix r->R for class R.
* Porting: substitute nose with pytest
This huge commit substitutes nose with pytest as a testing system. Things done here:
* deleted external/nose as it is no longer used
* moved mock resources in their own directory 'test/mock/'
* ported two tests (cmd/find, build_system) to pytest native syntax as an example
* build_environment, log: used monkeypatch instead of try/catch
* moved global mocking of fetch_cache to an auto-used fixture
* moved global mocking from test/__init__.py to conftest.py
* made `spack test` a wrapper around pytest
* run-unit-tests: avoid running python 2.6 tests under coverage to speed them up
* use `pytest --cov` instead of coverage run to cut down testing time
* mock/packages_test: moved mock yaml configuration to files instead of leaving it in the code as string literals
* concretize.py: ported tests to native pytest, reverted multiprocessing in pytest.ini as it was creating the wrong report for coveralls
* conftest.py, fixtures: added docstrings
* concretize_preferences.py: uses fixtures instead of subclassing MockPackagesTest
* directory_layout.py: uses fixtures instead of subclassing MockPackagesTest
* install.py: uses fixtures instead of subclassing MockPackagesTest
* optional_deps.py: uses fixtures instead of subclassing MockPackagesTest
optional_deps.py: uses fixtures instead of subclassing MockPackagesTest
* packages.py: uses fixtures instead of subclassing MockPackagesTest
* provider_index.py: uses fixtures instead of subclassing MockPackagesTest
* spec_yaml.py: uses fixtures instead of subclassing MockPackagesTest
* multimethod.py: uses fixtures instead of subclassing MockPackagesTest
* install.py: now uses mock_archive_url
* git_fetch.py: uses fixtures instead of subclassing MockPackagesTest
* hg_fetch.py: uses fixtures instead of subclassing MockPackagesTest
* svn_fetch.py, mirror.py: uses fixtures instead of subclassing MockPackagesTest
repo.py: deleted
* test_compiler_cmd.py: uses fixtures instead of subclassing MockPackagesTest
* cmd/module.py, cmd/uninstall.py: uses fixtures instead of subclassing MockDatabase
* database.py: uses fixtures instead of subclassing MockDatabase, removed mock/database
* pytest: uncluttering fixture implementations
* database: changing the scope to 'module'
* config.py: uses fixtures instead of subclassing MockPackagesTest
* spec_dag.py, spec_semantics.py: uses fixtures instead of subclassing MockPackagesTest
* stage.py: uses fixtures instead of subclassing MockPackagesTest. Removed mock directory
* pytest: added docstrings to all the fixtures
* pytest: final cleanup
* build_system_guess.py: fixed naming and docstrings as suggested by @scheibelp
* spec_syntax.py: added expected failure on parsing multiple specs closes#1976
* Add pytest and pytest-cov to Spack externals.
* Make `spack flake8` ignore externals.
* run-unit-tests runs spack test and not pytest.
* Remove all the special stuff for `spack test`
- Remove `conftest.py` magic and all the special case stuff in `bin/spack`
- Spack commands can optionally take unknown arguments, if they want to
handle them.
- `spack test` is now a command like the others.
- `spack test` now just delegates its arguments to `pytest`, but it does
it by receiving unknown arguments and NOT taking an explicit
help argument.
* Fix error in fixtures.
* Improve `spack test` command a bit.
- Now supports an approximation of the old simple interface
- Also supports full pytest options if you want them.
* Use external coverage instead of pytest-cov
* Make coverage use parallel-mode.
* change __init__.py docs to include pytest
* inheritance of directives: using meta-classes to inject attributes coming from directives into packages + lazy directives
* _dep_types -> dependency_types
* using a meta-class to inject directives into packages
* directives are lazy
fixes#2466
* directives.py: allows for multiple inheritance. Added blank lines as suggested by @tgamblin
* directives.py: added a test for simple inheritance of directives
* Minor improvement requested by @tgamblin
CMakePackage: importing names from spack.directives
directives: wrap __new__ to respect pep8
* Refactoring requested by @tgamblin
directives: removed global variables in favor of class variables. Simplified the interface for directives (they return a callable on a package or a list of them).
* Ensure that every package has a license
Also fixes URLs with http://http:// doubled.
This is a continuation of #2656.
* Add license to every file in Spack
* Make sure Todd is the author of all packages
* Fix flake8 tests
* Don't license external Sphinx docs
* Don't display licenses in tutorial example packages
Also fixes typos and converts command-line examples
from tcsh to bash, which is more common
That's because in set_build_environment_variables()
the function filter_system_paths() is actually applied to
package prefixes, and not to prefix/lib or prefix/include.
The primary goal of #2292 was to use the frontend compiler to make
build dependencies like cmake on HPC platforms. It turns out that
while this works in some cases, it did not handle cases where a
package was a link dependency of the root and of a build dependency
(and could produce incorrect concretizations which would not build).
* Better output for disambiguate_specs()
* Fix wrong exception name.
* Fix satisfies(): concrete specs require matching by hash.
- Fixes uninstall by hash and other places where we need to match a
specific spec.
- Fix an error in provider_index (satisfies() call was backwards)
- Fix an error in satisfies_dependencies(): checks were too shallow.
* Fix default args in Spec.tree()
* Move installed_dependents() to DB to avoid unknown package error.
* Make `spack find` and `spack.store.db.query()` faster for hashes.
* Add a test to ensure satisfies() respects concrete Specs' hashes.
* Customization for make targets in build and test phases for AutotoolsPackage
* Updated Blitz++ to use customized make build and test targets
* Removed flake8 error
* Removed make test customization, added make install customization, need to figure out issues with multiple make targets
* Changed build_targets and install_targets to normal attributes
* MakefilePackage: changed build_args and install_args for consistency with #2464
openblas: derives from MakefilePackage
* MakefilePackage: changed default edit behavior
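A hedged sketch of the attribute-based customization (openblas is the example cited above; the targets shown are illustrative):
```python
from spack import *


class Openblas(MakefilePackage):   # trimmed-down illustration
    homepage = "https://www.openblas.net"
    url = "https://example.com/OpenBLAS-0.2.19.tar.gz"   # placeholder URL

    # plain attributes consumed by the base class's build/install phases
    build_targets = ['libs', 'netlib', 'shared']
    install_targets = ['install']
```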
* Better cxx11/14 flags for GNU/Clang/Intel
- GCC 4.8 only supports -std=c++1y for C++14
- Use CMake's rules for AppleClang to set cxx11 and cxx14 flags based on
Apple Xcode/LLVM version
- Use CMake's rules for Intel to add support for cxx14 flags based on
Intel version.
* Add cxx17_flag property
Implement property in compiler for c++17 as per those for c++11/14.
Add concrete support for GNU/Clang:
- Return -std=c++1z for GCC 5 and above per GCC documentation
- Return -std=c++1z for Clang 3.5 and above per Clang documentation
- Return -std=c++1z for Apple LLVM 6.1 and above per CMake's rules
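A hedged sketch of what such a property might look like in the gcc compiler class (error handling simplified; Spack's real code uses its own error reporting):
```
from spack.version import ver


@property
def cxx17_flag(self):
    # GCC documents -std=c++1z starting with GCC 5.
    if self.version < ver('5.0'):
        raise ValueError('C++17 standard is not supported by GCC < 5.0')
    return '-std=c++1z'
```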
* Update `spack setup` and `spack graph` to be consistent with c557e765 and 9347f869. Fixes #2316.
* Added another "fix" necessary to make `spack setup` work.
* Added another "fix" necessary to make `spack setup` work. (reverted from commit 7f0d3ecb38c97ec00491d7cd66b4266e3018b1ca)
* Add documentation for repositories and namespaces.
* Update and extend repository documentation per review.
- Also add `-N` argument for `spack spec`
The advanced [Uninstalling Packages](spack.readthedocs.io/en/latest/tutorial_sc16_spack_basics.html#uninstalling-packages) section on uninstalling via hash is missing a couple of `.. code-block:: console` directives ;)
I have no idea what branch to direct this to though...
* clang: do the Xcode mockup only if requested by a package
* add a note
* add pkg to setup_custom_environment() and decide there whether or not to use the Xcode mockup, based on the package
* Make targets an attribute of compilers, similar to OS. Allows users to use `spack compiler find` to add compilers for the same platform/os but different targets when spack is installed on a file system mounted on machines with different targets.
* Changed get_compilers() method to treat old compilers with no target as target='any'
* flake8 changes
* Address change requests for comments/code clarity
Fixes #2306
Any dependency explicitly mentioned in a spec string ended up with the
build and link deptypes unconditionally. This fixes dependency
resolution to ensure that packages which are mentioned in the spec
string have their deptypes determined by the dependency information
in the package.py files. For example if a package has cmake as a build
dependency, and cmake is mentioned as a dependency in the spec string
for the package, then it ends up with just the build deptype.
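For example, a build-only dependency is declared in the package.py with the standard directive syntax, and that declaration now determines the deptype even when cmake also appears in the spec string:
```
depends_on('cmake', type='build')
```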
Packages built targeting a backend may depend on packages like cmake
which can be built against the frontend. With this commit, any build
dependency or child of a build dependency will target the frontend by
default. In compiler concretization when packages copy compilers from
nearby packages, build dependencies use compiler information from
other build dependencies, and link dependencies avoid using compiler
information from build dependencies.
* Use JSON for the database instead of YAML.
- JSON is much faster than YAML *and* can preserve ordered keys.
- 170x+ faster than Python YAML when using unordered dicts
- 55x faster than Python YAML (both using OrderedDicts)
- 8x faster than C YAML (with OrderedDicts)
- JSON is built into Python, unlike C YAML, so doesn't add a dependency.
- Don't need human readability for the package database.
- JSON requires no major changes to the code -- same object model as YAML.
- add to_json, from_json methods to spec.
* Add tests to ensure JSON and YAML don't need to be ordered in DB.
* Write index.json the first time it's not found, instead of requiring a reindex.
* flake8 bug.
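A minimal sketch of the idea using only the standard library (the file name and index layout here are illustrative, not the database's exact schema):
```
import json
from collections import OrderedDict

index = OrderedDict([
    ('database', OrderedDict([
        ('version', '0.9.3'),
        ('installs', OrderedDict()),
    ])),
])

with open('index.json', 'w') as f:
    # json preserves OrderedDict key order and is far faster than Python YAML.
    json.dump(index, f, indent=2, separators=(',', ': '))
```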
* Added some notes about how multiarch detection could be fixed.
* Implemented a preliminary version of the "spack.spec.ArchSpec" class.
* Updated the "spack.spec.Spec" class to use "ArchSpec" instead of "Arch".
* Fixed a number of small bugs in the "spack.spec.ArchSpec" class.
* Fixed the 'Concretizer.concretize_architecture' method so that it uses the new architecture specs.
* Updated the package class to properly use arch specs.
Removed a number of unused architecture functions.
* Fixed up a number of bugs that were causing the regression tests to fail.
Added a couple of additional regression tests related to architecture parsing/specification.
Fixed a few bugs with setting reserved os/target values on "ArchSpec" objects.
Removed a number of unnecessary functions in the "spack.architecture" and "spack.concretize" modules.
* Fixed a few bugs with reading architecture information from specs.
Updated the tests to use a uniform architecture to improve reliability.
Fixed a few minor style issues.
* Adapted the compiler component of Spack to use arch specs.
* Implemented more test cases for the extended architecture spec features.
Improved error detection for multiple arch components in a spec.
* Fix for backwards compatibility with v0.8 and prior
* Changed os to unknown for compatibility specs
* Use `spack09` instead of `spackcompat` for the platform of old specs.
Different compilers have different flags for PIC (position-independent
code). This patch provides a common ground to accessing it inside specs.
See discussions in #508 and #2373. This patch does not address the issue
of mixed compilers as mentioned in #508.
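As a hedged example of what this enables, a package's build step could pass the flag through without hard-coding a compiler (assuming the property is exposed as `self.compiler.pic_flag`; check the compiler classes for the exact spelling):
```
def build(self, spec, prefix):
    # Pass the chosen compiler's PIC flag through to the build, whichever
    # compiler this spec was concretized with.
    make('CFLAGS=-O2 ' + self.compiler.pic_flag)
```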
Packages provide a 'list_url' attribute which may be searched to find
download links. #1822 created a slowdown for all tests by always
searching this URL. This re-enables dynamic search only in cases where
all other fetchers fail, and only when 'mirror_only' is set to false.
The option -s now causes file and line number information to be printed
along with any invocation of msg, info, etc...
This will greatly ease debugging.
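A rough illustration of the technique using only the standard library (this is not Spack's actual implementation in llnl.util.tty):
```
import traceback


def caller_location(levels_up=2):
    """Return 'file:line' for the frame that invoked the logging function."""
    filename, lineno = traceback.extract_stack()[-levels_up - 1][:2]
    return '%s:%d' % (filename, lineno)
```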
- Seems like older git versions won't be able to clone an LFS repo.
- Reverting to an external link for the slides to avoid storing an 8MB
file in the repo and to avoid using git LFS.
* Allow compiler wrapper to modify environment
This adds the ability to set environment variables in the compiler
wrappers. These are specified as part of the compilers.yaml config.
The user may also specify RPATHs in compilers.yaml that should be
added.
* Minor doc tweak
* Waste less space when fetching cached archives, simplify fetch messages.
- Just symlink cached files into the stage instead of copying them with curl.
- Don't copy linked files back into the cache when done fetching.
* Fixes for review.
* more updates
* last update
This is not the desired eventual behavior, but cflag subset matching is not currently working for anonymous specs; restricting the feature provides a temporary solution until it is fixed.
* Wordsmithing/minor-edits to module tutorial
A small set of wordsmithing, spell checking and minor edits to the fancy
new modules tutorial!
* Fix typo (sneaky z key...)
* Fix "S:" and "manual<" typos
Some packages which include resources fetched from source control
repositories terminated package installs because they failed to
archive; specifically, this included all SCM resources which identify
a specific state of the repo - for example a revision in svn or a
tag/revision in git. This is because the resource stage creation
logic did not choose an appropriate archive name for these kinds of
resources.
* module files tutorial : first complete draft
- first complete draft for module files tutorial
- minor corrections to module file reference
* module file tutorial : first batch of corrections
- module avail spelled out fully
- typos from @adamjstewart
- rewording of a few sentences
* module file tutorial : first batch of corrections
- emphasized lines in yaml files
* module file tutorial : fixes according to @citibeth and @adamjstewart reviews
- used long format for command options
- reworded unclear sentence on tokens
- reworked table in reference manual to make it clearer
* module file tutorial : implemented corrections collected on site from @schlyfts
* module file tutorial : removed comment (@hartzell suggestion)
* Add options for hashes, tree depth, and YAML to `spack spec`.
- Can now display hashes with `spack spec`, like `spack find`.
- Removed the old "ids" argument to `spack spec` (which
printed numerical values)
- Can output YAML spec from `spack spec` with `-y`
- Can control depth of DAG traversal with --cover=[nodes|edges|paths]
- Can print install status (installed, missing, not installed) with -I
* Don't use YAML aliases in specs.
- Force Spack's YAML dumper to ignore aliases.
- aliases cause non-canonical YAML to be used in DAG hash, and result in
redundant hashes.
- add a test to ensure this behavior stays
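The standard PyYAML hook for this is a dumper whose ignore_aliases() always returns True, so repeated nodes are written out in full (a sketch of the mechanism; Spack applies this to its own dumper class):
```
import yaml


class NoAliasDumper(yaml.SafeDumper):
    def ignore_aliases(self, data):
        return True


data = {'a': [1, 2]}
data['b'] = data['a']  # would normally be emitted as an alias (&id001 / *id001)
print(yaml.dump(data, Dumper=NoAliasDumper, default_flow_style=False))
```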
* spack install: forward sys.stdin to child processes. Fixes #2140
- redirection process is spawned in __enter__ instead of __init__
- sys.stdin is forwarded to child processes
* log: wrapped __init__ definition
1) List gfortran as an fc and f77 compiler that can work with clang.
2) Allow a compatible gfortran to be picked up by `spack compiler find` together with clang, by matching version numbers.
This is based on the discussions in
https://github.com/LLNL/spack/issues/237 and
https://github.com/dealii/dealii/wiki/deal.II-in-Spack#mixing-gcc-and-clang-on-osx
This is not a long term solution but something to get us through the next months until the compiler
infrastructure is reworked to allow mixing and matching for C/C++ and Fortran compilers
Funded-by: IDEAS
Project: IDEAS/xSDK
Time: 1.5 hours
- Detailed debug information is now handed back to the parent process
from builds, for *any* type of exception.
- previously this only worked for Spack ProcessErrors, but now it works
for any type of error raised in a child.
- Spack will print an error message and source code context for build
errors by default.
- It will print a stack trace when using `spack -d`, even when the error
occurred in the child process.
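The general technique looks roughly like this (a sketch, not Spack's code): the child catches any exception and sends a picklable summary back to the parent, which can then print the message, source context, or full traceback.
```
import multiprocessing
import traceback


def _child(target, conn, args):
    try:
        conn.send(('ok', target(*args)))
    except BaseException as e:
        conn.send(('error', type(e).__name__, str(e), traceback.format_exc()))
    finally:
        conn.close()


def run_in_child(target, *args):
    parent_conn, child_conn = multiprocessing.Pipe()
    proc = multiprocessing.Process(target=_child, args=(target, child_conn, args))
    proc.start()
    result = parent_conn.recv()
    proc.join()
    if result[0] == 'error':
        # Re-raise in the parent with the child's type, message, and traceback.
        raise RuntimeError('%s: %s\n%s' % result[1:])
    return result[1]
```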
- Ported old run-flake8-tests qa script to `spack flake8` command.
- New command does not modify files in the source tree
- Copies files to a temp stage, modifies them there, and runs tests.
- Updated docs and `run-flake8-tests` script to call `spack flake8`.
- generalized and fixed to work with any key in YAML file
- simplified schema writing, as well
- add more unit tests for the config system
- Rename test/yaml.py to test/spack_yaml.py
- Add test/yaml.pyc to ignored pyc files.
- Added a schema for config.yaml
- Moved install tree configuration to config.yaml
- Moved etc/spack/install.yaml to etc/spack/defaults/config.yaml
- renamed install_area to "store", to use a term in common with guix/nix.
- in `config.yaml` file, it's called the `install_tree` to be more
intuitive to users.
- `install_tree` might've worked in the code, but `install_tree` is
already a global function in the spack namespace, from
llnl.util.filesystem.
Merge #2030 added a cyclic dependency between the Cray platform needing
to read a `targets.yaml` config file and `config.py` needing to get the
platform names.
This commit removes the cyclic dependency in favor of the more general
config scheme. It also removes the now functionless `targets.yaml`
config file. This breaks 'frontend' targets on the Cray platform, but
all architecture targets provided by CrayPE, including the frontend,
are added to the Platform anyway, so users can be explicit about
the architecture targeted by the Cray compiler wrappers:
```
spack spec libelf arch=cray-CNL-frontend
```
becomes
```
spack spec libelf arch=cray-CNL-mc8 # on an XK7 or
spack spec libelf arch=cray-CNL-sandybridge # on an older XC30, etc..
```
The only way the 'frontend' target can be defined after this commit is
through target environment variables.
* module file support: major rework of docs
* module file support: fixed issues found by @adamjstewart
- list or enumeration should not be indented
- use console instead of bash or csh in things that are not scripts
- other typos
* module file support: fixed other issues found by @adamjstewart
- tables should not be indented
- substitute lines with pyobject to import an entire function
- get help output running commands
- typos
* module file support: fixes according to review comments
- @citibeth moved `spack module loads` after `spack load`
- @glennpj tried to clarify installation table + changes to language
- @tgamblin Removed top level section and moved the whole thing into the reference manual
* module file support: moved directive before spack module loads
External packages do not have a spec.yaml file, so don't check for one.
Without this change, whenever you install a package that depends on an
external package, you get the error
Install prefix exists but contains no spec.yaml
This problem has haunted me since I started using Spack, because PETSc
depends on Python and I used an external python; fortunately it was
relatively easy to debug once I could reproduce it at will.
Funded-by: IDEAS
Project: IDEAS/xSDK
Time: 1 hour
This replaces a custom token-based substitution format with calls to
Spec.format in modules.py
This also resolves a couple of issues:
- LmodModules set configuration globally instead of in its initializer,
which meant test-specific configuration was not applied
- Added support for setting hash_length=0 for LmodModules. This only
affects the module filename and not the directory names for the
hierarchy tokens in the path. This includes an additional unit test.
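Illustrative only (token spellings vary between Spack versions; check the modules documentation for your release): the naming scheme becomes a plain Spec.format template.
```
naming_scheme = '${PACKAGE}-${VERSION}-${COMPILERNAME}-${COMPILERVER}'


def module_file_name(spec):
    """`spec` is assumed to be a concrete spack.spec.Spec."""
    return spec.format(naming_scheme)
```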
* add filter_system_paths()
* filter system paths in set_build_environment_variables()
* two functions: lib/inc + bin
* reverse order in bin
* fix order
* minor
* improvements of the code
* more cleanup
* alternative solution for filter_bins
* fiddle with alalazo's approach
* minor
* minor
* Fixed a bug causing config-specified compiler flags to be ignored.
Updated the compiler config so all flags are in a separate section.
* Updated the documentation for the `compilers.yaml` file spec.
* Implemented basic testing for the 'flags' section of compiler config.
* Fixed a few minor problems with the manual compiler config documentation.
* Add new version property to handle joined version numbers
* Add unit test for new joined property
* Add documentation on version.up_to() and version.joined
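A quick illustration of the two properties (values in the comments are indicative; exact return types may differ between Spack versions):
```
from spack.version import Version

v = Version('6.3.0')
str(v.up_to(2))   # '6.3'
str(v.joined)     # '630'
```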
* Removes the extra argument from Package.do_install while maintaining the changes in behavior pulled in #1603
* install : removed -i and -d shorthands (breaks backward compatibility)
* Change ':' to ','