- dependency patching test didn't attempt to apply patches; it just checked
whether they were on the spec.
- it applies the patch now and verifies that the patch was applied.
* Branch with the meson build-system
* Fix build_environment for dual loads and add create code
* Add documentation
* Fixed option list
* Update build_system_guess for meson
* Fixed documentation errors
* Added meson to build and configure and updated documentation
* fix typos
- cc cleanup caused a parsing regression in flag handling
- We added proper quoting to array expansions, but flag variables were
never actually converted to arrays. Old code relied on this.
This commit:
- Adds `read` calls to convert the flag variables to arrays.
- Makes the cc test check for improper space handling to prevent future
regressions.
- flags were prepended in reverse order to args, but this makes it hard
to see what order they'll be in on the final command line.
- add them in the order they'll appear to make cc easier to maintain.
- simplify code for assembling the command line
- fix separator used in SPACK_SYSTEM_DIRS test
- This corrects most of the issues found by shellcheck
- This also uses ':' as the delimiter for SPACK_SYSTEM_DIRS, for
consistency with other variables.
- filtering using sed causes most builds to slow down quite a bit, as the
compiler wrapper has to run sed many times, and *it* runs many times
- do the system directory parsing directly in bash
- Add tests to ensure that RPATHs are not added in cc mode, which can
cause some builds to fail.
- Change cc.py to use pytest style
- Instead of writing out all the flags, break the flags down into
variables so that it's easy to read what each test is supposed to
check. This should make cc.py more maintainable in the future.
- Adding -L and -Wl,-rpath to compile-only command lines ("cc mode" or
"-c") causes clang (if not also other compilers) to emit warnings that
confuse configure systems.
- Clang will print warnings about unused command-line arguments.
- This fix ensures that -L and -Wl,-rpath are not added if the compile
line is just building an object file with -c
- This also cleans up the cc script in several places.
Spack currently prepends include paths, library paths, and rpaths to the
compile line. This causes problems when a header or library in the package
has the same name as one exported by one of its dependencies. The
*dependency's* header will be preferred over the package's, which is not
what most builds expect. This also breaks some of our production codes.
This restores the original cc behavior (from *very* early Spack) of parsing
compiler arguments out by type (`-L`, `-I`, `-Wl,-rpath`) and reconstituting
the full command at the end.
`<includes> <other_args> <library dirs> <rpaths>`
This differs from the original behavior in one significant way, though: it
*appends* the library arguments so that dependency libraries do not shadow
those in the build.
This is safe because semantics aren't affected by *interleaving* `-I`, `-L`,
and `-Wl,-rpath` arguments with others, only with each other (so the order of
two `-L` args affects the search path, but we search for all libraries on the
command line using the same search path).
We preserve the following:
1. Any system directory in the paths will be listed last.
2. The root package's include/library/RPATH flags come before flags of the
same type for any dependency.
3. Order will be preserved within flags passed by the build (except system
paths, which are moved to be last)
4. Flags for dependencies will appear between the root flags and the system
flags, and the flags for any dependency will come before those for *its*
dependencies (this is for completeness -- we already guarantee this in
`build_environment.py`)
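As a rough illustration of the reordering (the real wrapper is a shell script; this hypothetical Python helper is not Spack code):
```
def reorder_args(args):
    # Partition arguments by type, then reassemble as
    # <includes> <other_args> <library dirs> <rpaths>.
    includes, libdirs, rpaths, other = [], [], [], []
    for arg in args:
        if arg.startswith('-I'):
            includes.append(arg)
        elif arg.startswith('-L'):
            libdirs.append(arg)
        elif arg.startswith('-Wl,-rpath'):
            rpaths.append(arg)
        else:
            other.append(arg)
    return includes + other + libdirs + rpaths
```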
* Fix performance issue when compiling.
Spack was busy-waiting while compiling, wasting one core.
My fix is to not set any timeout for select, instead of the previous
0 seconds.
* Fix comments about select.select timeout
- This was a nasty workaround due to the way our compiler wrappers used
to work. We don't want to have to modify our elfutils installation to
install libdwarf.
- Since cd9691de5, we no longer need this because the package will always
come before dependencies in our include order.
- previously, output could be confusing when deptypes were only shown for
one dependent when a node had *multiple* dependents
- also fix default coverage of `Spec.tree()`: it previously defaulted to
cover only build and link dependencies, but this is a holdover from
when those were the only types.
- Previously, Spack didn't check the arguments you put in version()
directives.
- So, you could do something like this, where there are arguments for a
URL fetcher AND for a git fetcher:
version('1.0', md5='abc123', git='https://foo.bar', commit='feda2343')
- Now, we check the arguments before constructing a fetcher, to ensure
that each package has *only* arguments for a single type of fetcher.
- Also added `test_package_version_consistency()` to the `package_sanity`
test, so that all builtin packages are required to have valid
`version()` directives.
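For illustration, a package that passes the new check keeps each version's arguments to a single fetcher type (hypothetical snippet; the names and URLs are made up):
```
from spack import *

class Example(Package):
    homepage = 'https://example.com'

    # URL fetcher: checksum-style arguments only
    version('1.0', md5='abc123',
            url='https://example.com/example-1.0.tar.gz')
    # git fetcher: VCS-style arguments only
    version('develop', git='https://foo.bar', branch='master')
```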
- packagers can specify two top-level fetch URLs if one is `url`
- e.g., `url` and `git` or `url` and `svn`
- allow only one VCS fetcher so we can differentiate between URL and VCS.
- also clean up fetcher logic and class structure
- Packages can remove the top-level `url` attribute and still work
- These are now legal:
- Packages with *only* version-specific URLs (even with gaps)
- Packages with a top-level git/hg/svn attribute and `version`
directives for that.
- If a package has both a top-level hg/git/svn attribute AND a top-level
url attribute, the url attribute takes precedence.
Some packages do not have a `url` and are instead downloaded via `git`,
`hg`, or `svn`. Some packages like `spectrum-mpi` cannot be downloaded at
all, and are placeholder packages for system installations. Previously,
`__init__()` in `PackageBase` crashed if a package did not have a `url`
attribute defined.
I hacked this section of code out, but I have no idea what the
repercussions of that are.
- This hard-codes the hash lengths rather than computing them on import.
- Also cleans up the code in `spack.util.crypto` to make it easier to
understand.
Fix this output error:
```
$ spack -m module loads mpileaks
==> Error: `spack module loads -m t -m c -m l ...` has been moved. Try this instead:
$ spack module t loads mpileaks
$ spack module c loads mpileaks
$ spack module l loads mpileaks
```
In case a deprecated form of the module command is used, the program
will exit non-zero and print an informative error message suggesting
which command should be used instead.
As requested in the review all the commands meant to manage module
files have been grouped under the `spack module` command.
Unit tests have been refactored to match the new command structure.
fixes #2215, fixes #2570, fixes #6676, fixes #7281, closes #3827
This PR reverts the use of `spack module loads` in favor of
`spack module find` when loading module files via Spack. After this PR
`spack load` will accept a single spec at a time, and will be able
to interpret correctly the `--dependencies` option.
fixes #4400
The feature requested in #4400 was already part of the module file
configuration, but it was neither tested nor documented. This
commit takes care of adding a few lines in the documentation and a
regression test.
This is just because the fixture has been moved one level above where
it was originally defined. In this more general context there's more
than one configuration file that could be patched for tests.
'spack module' has been split into multiple commands, each one tied to a
specific module type. This permits the specialization of the new
commands with features that are module type specific (e.g. set the
default module file in lmod when multiple versions of the same package
are installed at the same time).
- repo membership test was broken by the refactor of spack/__init__.py
- refactor singleton so that 'spec in repo' works again for `spack.repo.path`
- fix spec command and add basic tests for `spack spec` and `spack spec --yaml`
- There was a lot of documentation in `PackageBase` dating back to the
very first versions of Spack.
- It was repetitive and out of date, and the docs at spack.readthedocs.io
are better.
- Remove the outdated specifics, and leave the minimal useful set of
developer docs in `package.py`.
- This changes `get_checksums_for_versions` to generate code that uses an
explicit `sha256` argument instead of the bare `md5` hash we used to
generate.
- also use a generic digest parameter for the `version` directive, rather
than a specific `md5` parameter.
- Add command-line scope option to Spack
- Rework structure of main to allow configuration system to raise
errors more naturally
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
- Fixes a bug in `llnl.util.lock`
- Locks in the current directory would fail because the parent directory
was the empty string.
- Fix this and return '.' for the parent of locks in the current
directory.
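A minimal sketch of the fix (the real code lives in `llnl.util.lock`; the helper name is illustrative):
```
import os

def lock_parent_directory(lock_path):
    # os.path.dirname('foo.lock') == '', which breaks directory checks;
    # fall back to '.' for locks in the current directory.
    return os.path.dirname(lock_path) or '.'
```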
Replace regex-based target detection for Makefiles with a preliminary "make -q"
to check if a target exists. This does not work for NetBSD make; additional work
is required to detect if NetBSD make is present and to use a regex in that case.
The affected makefile target checks are only performed when the "--test" flag is
added to a "spack install" invocation.
Fixes #8036
Before this PR Package.installed was returning True if the spec prefix
existed, without checking the DB. This is wrong for external packages,
whose prefix exists before being registered into the DB. Now the property
checks for both the prefix and a DB entry.
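A hedged sketch of the new property (the actual database lookup may differ):
```
import os.path
import spack.store

class PackageBase(object):
    # only the relevant property is shown
    @property
    def installed(self):
        # require both an existing prefix and a matching DB record, so
        # unregistered external prefixes don't count as installed
        has_prefix = os.path.isdir(self.prefix)
        has_db_entry = bool(spack.store.db.query(self.spec))
        return has_prefix and has_db_entry
```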
Spack provides a number of classes based on commonly-used build systems
that users can extend when writing packages; the classes provide functionality
to perform the actions relevant to the build system (e.g. running "configure" for
an Autotools-based package). This adds documentation for classes supporting the
following build systems:
* Makefile
* Autotools
* CMake
* QMake
* SCons
* Waf
This includes build systems for managing extensions of the following packages:
* Perl
* Python
* R
* Octave
This also adds documentation on implementing packages that use a custom build
system (e.g. Perl/CMake).
Spack also provides extendable classes which aggregate functionality for related
sets of packages, e.g. those using CUDA. Documentation is added for
CudaPackage.
- The setup-env.sh script currently makes two calls to spack, but it
should only need to make one.
- Add a fast-path shell setup routine in `main.py` to allow the shell
setup to happen in a single, fast call that doesn't load more than it
needs to.
- This simplifies the setup code, since the shell script only has to eval
what Spack prints
- TODO: consider eventually making the whole setup script the output of a
spack command
* update help of `clean --all` to include `-p`
* remove old orphaned `.pyc` removal
* restrict removal of orphaned pyc files to `lib/spack` and `var/spack`
- Clean up error messages for when a lock can't be created, or when an
exclusive (write) lock can't be taken on a file.
- Add a number of subclasses of LockError to distinguish timeouts from
permission issues.
- Add an explicit check to prevent the user from taking a write lock on a
read-only file.
- We had a check for this for when we try to *upgrade* a lock on an RO
file, but not for an initial write lock attempt.
- Add more tests for different lock permission scenarios.
- write locks previously wrote information about the lock holder (host
and pid), and read locks would read this in.
- This is really only for debugging, so only enable it then
- add some tests that target debug info, and improve multiproc lock test
output
When a user specifies a URL for a specific version of a package, Spack originally
would use that URL for all newer versions of the package. This behavior has
proven to be generally more harmful than useful, so this PR removes the feature
such that a version-specific URL override affects only that version.
If the user sets "ccache: true" in spack's config.yaml, Spack will use an available
ccache executable when compiling c/c++ code. This feature is disabled by default
(i.e. "ccache: false") and the documentation is updated with how to enable
ccache support
Functional updates:
- `python` now creates a copy of the `python` binaries when it is added
to a view
- Python extensions (packages which subclass `PythonPackage`) rewrite
their shebang lines to refer to python in the view
- Python packages in the same namespace will not generate conflicts if
both have `...lib/site-packages/namespace-example/__init__.py`
- These `__init__` files will also remain when removing any package in
the namespace until the last package in the namespace is removed
Generally (Updated 2/16):
- Any package can define `add_files_to_view` to customize how it is added
to a view (and at the moment custom definitions are included for
`python` and `PythonPackage`)
- Likewise any package can define `remove_files_from_view` to customize
which files are removed (e.g. you don't always want to remove the
namespace `__init__`)
- Any package can define `view_file_conflicts` to customize what it
considers a merge conflict
- Global activations are handled like views (where the view root is the
spec prefix of the extendee)
- Benefit: filesystem-management aspects of activating extensions are
now placed in views (e.g. now one can hardlink a global activation)
- Benefit: overriding `Package.activate` is more straightforward (see
`Python.activate`)
- Complication: extension packages which have special-purpose logic
*only* when activated outside of the extendee prefix must check for
this in their `add_files_to_view` method (see `PythonPackage`)
- `LinkTree` is refactored to have separate methods for copying a
directory structure and for copying files (since it was found that
generally packages may want to alter how files are copied but still
wanted to copy directories in the same way)
TODOs (updated 2/20):
- [x] additional testing (there is some unit testing added at this point
but more would be useful)
- [x] refactor or reorganize `LinkTree` methods: currently there is a
separate set of methods for replicating just the directory structure
without the files, and a set for replicating everything
- [x] Right now external views (i.e. those not used for global
activations) call `view.add_extension`, but global activations do not
to avoid some extra work that goes into maintaining external views. I'm
not sure if addressing that needs to be done here but I'd like to
clarify it in the comments (UPDATE: for now I have added a TODO and in
my opinion this can be merged now and the refactor handled later)
- [x] Several method descriptions (e.g. for `Package.activate`) are out
of date and reference a distinction between global activations and
views, they need to be updated
- [x] Update aspell package activations
- Spack was assuming that a group with gid == current uid would always exist.
- This was breaking the travis build for macos.
- also fix issue with the DB tarball test finding coverage files
- pytest was not reporting the correct version from pytest.__version__.
It reported 'unknown'
- this fixes issues on some systems where system-installed pytest plugins
would try to use the version and convert it to an int
The following improvements are made to cxx standard support
(e.g. compiler.cxxNN_flag functions) in compilers:
* Add cxx98_flag property
* Add support for throwing an exception when a flag is not supported (previously
if a flag was not supported the application was terminated with tty.die)
* The name of the flag associated with e.g. c++14 standard support changes for
different compiler versions (e.g. c++1y vs c++14). This makes a few corrections
on what flag to return for which version.
* Added tests to confirm that versions report expected flags for various c++
standards (or raise an exception for versions that don't provide a given cxx
standard)
Note that if a given cxx standard is the default, the associated flag property will
return ""; cxx98 is assumed to be the default standard so this is the behavior for
the associated property in the base compiler class.
Package changes:
* Improvements to the boost spec to take advantage of the improved standard
flag facility.
* Update the clingo spec to catch the new exception rather than look for an
empty flag to indicate non-support (which is not part of the compiler flag API)
Fixes #7885. #7193 added the patches_to_apply function to collect patches which are then
applied in Package.do_patch. However this only collects patches that are
associated with the Package object and does not include Spec-related patches
(which are applied by dependents, added in #5476).
Spec.patches already collects patches from the package as well as those applied
by dependents, so the Package.patches_to_apply function isn't necessary. All
uses of Package.patches_to_apply are replaced with Package.spec.patches.
This also updates Package.content_hash to require the associated spec to be
concrete: Spec.patches is only set after concretization. Before this PR, it was
possible for Package.content_hash to be valid before concretizing the associated
Spec if all patches were associated with the Package (vs. being applied by
dependents). This behavior was unreliable though so the change is unlikely to
be disruptive.
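Roughly, the patching step can then iterate over the spec's patches directly (hedged sketch; the real `do_patch` also handles staging and tracking of already-applied patches):
```
class PackageBase(object):
    def do_patch(self):
        # Spec.patches already includes the package's own patches and
        # those applied by dependents; it requires a concrete spec
        assert self.spec.concrete
        for patch in self.spec.patches:
            patch.apply(self.stage)
```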
Fixes #8345
Spack environment modifications are applied before modules are loaded; this
includes settings to CC, FC, F77, and CXX, which point to the Spack compiler
wrappers. If the loaded modules set CC, this overrides the Spack compiler
wrappers. This PR adds a context manager to preserve the values of CC etc. that
are set by Spack: any effects on the CC, FC, F77, and CXX variables from modules
are undone and their original values are restored.
* pybind11: test support
Add a test functionality to pybind11.
* CMake: test also on "make check"
Some projects use non-CTest manual targets for tests.
* extend Prefix class with join() member to support dynamic directories
* add more tests for Prefix.join()
* more tests for Prefix.join()
* add docstring
* add example to docstring of Prefix class
* cleanup Prefix.join() tests
* use Prefix.join() in Packaging Guide
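Hypothetical usage of the new member (the import path is assumed):
```
from spack.util.prefix import Prefix

prefix = Prefix('/usr/local')
# join() builds dynamically-named subdirectories that the predefined
# attributes (prefix.bin, prefix.lib, ...) do not cover
pkgconfig_dir = prefix.join('share').join('pkgconfig')
```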
Fixes #8217
Trying to relocate a distribution when the new and old paths are
equal leads to failure, because the test that ensures that no
unrelocated bits are left over always fails. As an example, this
occurs if a user installs a package, generates a binary with it
using 'spack buildcache', uninstalls it, and then attempts to
reinstall into the same spack installation using the generated
binary package.
This updates the relocation check to accept the presence of the
old prefix in binaries if the package is being reinstalled into
its original location.
* allow user to constrain dependencies that are added conditionally
* remove check for not-visited deps from normalize, move it to concretize. The check now runs after the concretization loop completes (so an error is only reported if the user-mentioned spec doesn't appear anywhere in the dag)
* remove separate full_spec_deps variable; rename spec_deps to all_spec_deps to clarify that it merges user-specified dependencies with derived dependencies
* add unit test to confirm new functionality
- `spack config blame` is similar to `spack config get`, but it prints
out the config file and line number that each line of the merged
configuration came from.
- This is a debugging tool for understanding where Spack config settings
come from.
- add tests for config blame
Fixes: #8258. #8090 altered import behavior so that import spack no longer
provides access to many other Spack modules. This addresses
a case which depended on the prior behavior and was not
updated as part of #8090. This particular import error only
came up when users were setting compiler flags on specs.
See also: #8194
- there were some leftover spack.* names being used after we removed
globals and moved everything in the top-level namespace to spack.pkgkit
- point those references to their new homes
- remove most `import spack` statements, except for files that need
`spack_version`
- import spack is no longer sufficient to use submodules
(e.g. spack.directives).
- these submodules must be imported directly. Update references
accordingly.
- Spack packages were originally expected to call `from spack import *`
themselves, but it has become difficult to manage imports in the
Spack core.
- the top-level namespace was polluted by package symbols, and it's not
possible to avoid circular dependencies and unnecessary module loads in
the core, given all the stuff the packages need.
- This makes the top-level `spack` package essentially empty, save for a
version tuple and a version string, and `from spack import *` is now
essentially a no-op.
- The common routines and directives that packages need are now in
`spack.pkgkit`, and the import system forces packages to automatically
include this so that old packages that call `from spack import *`
will continue to work without modification.
- Since `from spack import *` is no longer required, we could consider
removing ``from spack import *`` from packages in the future and
shifting to ``from spack.pkgkit import *``, but we can wait a while to
do this.
- spack.util.lock behaves the same as llnl.util.lock, but Lock._lock and
Lock._unlock do nothing.
- can be disabled with a control variable.
- configuration options can enable/disable locking:
- `locks` option in spack configuration controls whether Spack will use filesystem locks or not.
- `-l` and `-L` command-line options can force-disable or force-enable locking.
- Spack will check for group- and world-writability before disabling
locks, and it will not allow a group- or world-writable instance to
have locks disabled.
- update documentation
- Spack core has long used llnl.util.filesystem.join_path, but
os.path.join is pretty much the same thing, and is more efficient.
- Use os.path.join in the core Spack code from now on.
- simplify the singleton pattern across the codebase
- reduce lines of code needed for crufty initialization
- reduce functions that need to mess with a global
- Singletons whose semantics changed:
- spack.store.store() -> spack.store
- spack.repo.path() -> spack.repo.path
- spack.config.config() -> spack.config.config
- spack.caches.fetch_cache() -> spack.caches.fetch_cache
- spack.caches.misc_cache() -> spack.caches.misc_cache
- `spack.cmd.all_commands` does a directory listing on
`lib/spack/spack/cmd`, regardless of whether it is needed
- make this lazy so that the directory listing won't happen unless it's
necessary.
- It turns out that jsonschema is one of the more expensive imports.
- move imports of jsonschema into functions to avoid the performance hits
for calls that don't need config.
- spack.store was previously initialized at the spack.store module level,
but this means the store has to be initialized on every spack call.
- this moves the state in spack.store to a singleton so that the store is
only initialized when needed.
- spack.repository module is now spack.repo
- `spack.repo` is now `spack.repo.path()` and loaded lazily
- Added `spack.repo.get()` and `spack.repo.all_package_names()` as
convenience functions to simplify the new lazy interface.
- updated tests and code
- no longer require `spack_version` to be a Version (it isn't used that
way anyway)
- use a simple tuple `spack_version_info` with major, minor, patch
versions
- generate `spack_version` from the tuple
- replace `spack.config.get_configuration()` with `spack.config.config()`
- replace `get_config`/`update_config` with `get`, `set`
- add a path syntax that can be used to refer to specific config options
without first getting the entire configuration dict
- update usages of `get_config` and `update_config` to use `get` and `set`
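For example (a sketch of the path syntax; the keys and defaults shown are illustrative):
```
import spack.config

# read a single option without fetching the whole configuration dict
jobs = spack.config.get('config:build_jobs', default=4)

# write it back to a specific scope
spack.config.set('config:build_jobs', 8, scope='user')
```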
- Current configuration code forces the config system to be initialized
at module scope, so configs are parsed on every Spack run, essentially
before anything else.
- We need more control over configuration init order, so move the config
scopes into a class and reduce global state in config.py
Fixes #2781
This PR introduces a new attribute for packages called
`archive_files`, which designates files that should be saved from
a package build (e.g. the config.log generated during autotools
builds).
The attribute contains a list of glob expressions; any file that
matches will be archived in the `<prefix>/.spack/archived-files`
directory. Errors that occur when archiving files are collected and
reported in a file named `<prefix>/.spack/archived-files/errors.txt`.
`AutotoolsPackage` and `CMakePackage` provide a sensible default
override for this attribute.
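A hypothetical package override (the defaults provided by `AutotoolsPackage`/`CMakePackage` look roughly like this):
```
import os.path
from spack import *

class Example(AutotoolsPackage):
    @property
    def archive_files(self):
        # glob expressions of build artifacts to copy into
        # <prefix>/.spack/archived-files
        return [os.path.join(self.build_directory, 'config.log')]
```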
fixes #7941
Modified string representation of Specs to add a space before deps
Unit-tests have been modified accordingly
Added a test for regression on #7941
Fixes #7924
Spack's yaml schema validator was initializing all instances of
unspecified properties with the same object, so that updating the
property for one entry was updating it for others (e.g. updating
versions for one package would update it for other packages).
This updates the schema validator to instantiate defaults with
separate object instances and adds a test to confirm this behavior
(and also confirms #7924 without this change).
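The essence of the fix is to give each instance its own copy of a schema default instead of sharing one mutable object (hedged sketch of a jsonschema properties hook, not the exact Spack code):
```
import copy

def set_defaults(validator, properties, instance, schema):
    # deep-copy each default so that updating one entry cannot mutate
    # the default object seen by other entries
    for prop, subschema in properties.items():
        if 'default' in subschema:
            instance.setdefault(prop, copy.deepcopy(subschema['default']))
```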
* Use reported version of IBM XL Fortran compiler for compiler versions >= 16.0.
Starting with the April 2018 release, the IBM XL C and Fortran compilers report the same version, 16.0. Consequently, there is no need to downgrade the Fortran compiler version to match that of the C compiler.
* Use GitLab's API endpoint for fetching a git snapshot.
* More GitLab packages use the API.
* find_list_url for GitLab's API URLs.
* Flake8
* URL for 'hacckernels'.
* Check GitLab API regexps before the non-API ones.
Activating a package that is already activated now sends a `tty.msg`
and returns.
```
-bash-4.2$ ~/spack/bin/spack activate aspell6-en
==> Package aspell6-en/lc4v24f is already activated.
```
* Better error message for spack providers
fixes #1355
`spack providers` now outputs a sensible error message if non-virtual
specs are provided as arguments:
```
$ spack providers mpi zlib petsc
==> Error: non-virtual specs cannot be part of the query [zlib, petsc]
```
Formatting of the output changed slightly.
* Calling 'spack providers' without arguments prints the virtual package list
Also, the error message in case of a wrong parameter has been improved
to show the list of valid packages.
* Avoid printing headers if stdout is not a tty
* The provider list is formatted with colify if not in a tty
* Added a test to check the list of providers returned from the command
Popping the when spec from kwargs in the extends directive breaks
class inheritance. Inheriting classes do not find their when spec.
We now get the when spec from kwargs instead, leaving it to be found
by any downstream package classes.
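In other words, the directive now reads the spec non-destructively (two-line sketch of the change):
```
# inside the extends directive (sketch)
when = kwargs.get('when')   # read the when spec, leave it in kwargs
# previously: when = kwargs.pop('when')  # popped it, hiding it from subclasses
```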
fixes #7705
Package.provides now checks constraints to ensure that a spec provides
a given virtual package. Note that 'strict=True' is not passed to
satisfies as this function is also used during concretization.
Fixes #7548
This updates the "spack view" command to use the same parsing logic
as "spack install" on the user-provided specs. For example you can
provide a DAG hash to refer to an exact installed spec instead of
specifying name, compiler, etc.
This fixes a check that decides when to skip buildcache relocation.
Originally the check was flawed in two ways: it would skip if the
source prefix matched the destination prefix, which no longer
matters since the source prefix is replaced with a placeholder
(so it always needs to be updated); it also would skip relocation
if the rpaths were not relative, when in fact it should be the
opposite (binaries without relative rpaths *should* be relocated,
and those with relative rpaths don't need it).
- FastPackageChecker was being called at startup every time Spack runs,
which takes a long time on networked filesystems. Startup was taking
5-7 seconds due to this call.
- The checker was intended to avoid importing all packages (which is
really expensive) when all it needs is to stat them. So it's only
"fast" for parts of the code that *need* it.
- This commit makes repositories instantiate the checker lazily, so it's
only constructed when needed.
- This was needed when we transitioned to all lowercase packages because
git didn't handle case changes well on case-insensitive filesystems.
- Now it just adds extra stat calls to startup, and we check for
all-lowercase package names in tests, so we'll remove it.
- people using really old versions of Spack can re-clone.
* Create unload_module method
Extract code from load_module into unload_module.
* Unload modules to create a clean env on Cray
removes cray-libsci, cray-mpich and darshan to prevent any silent
linking with those packages.
* Add format to separate target and os for path
spec format can now handle separations of target and os for setting
up the path.
* Added ${PLATFORM} et al to spec.format()
${PLATFORM}, ${OS}, ${TARGET}
* Update tests
Updated tests and got rid of unnecessary code.
* Also update documentation to reflect this new ability.
* Add default path scheme to config.yaml
Added default path scheme to config.yaml. Users can overwrite this
section if they want.
* Speedup the default 'libs' property search - important for external
packages.
* As advised by @alalazo, use tuples instead of lists inside
_libs_default_handler.
* Added installation date and time to the database
Information on the date and time of installation of a spec is recorded
into the database. The information is retained on reindexing.
* Expose the possibility to query for installation date
The DB can now be queried for specs that have been installed in a given
time window. This query possibility is exposed to command line via two
new options of the `find` command.
* Extended docstring for Database._add
* Use timestamps since the epoch instead of formatted date in the DB
* Allow 'pretty date' formats from command line
* Substituted kwargs with explicit arguments
* Simplified regex for pretty date strings. Added unit tests.
This updates architecture concretization to
* Search for the nearest parent in the DAG for architecture information
rather than defaulting to the root of the DAG
* Propagate architecture settings transitively, such that if for
example the target is set at the root of the dag it will set the
same target on indirect dependencies (assuming no intermediate
dependency specifies a separate target). Previously this occurred
in general but under some conditions did not, for example if an
intermediate dependency specified some subset of architecture
properties.
* Create mirror for system with different compilers
Spack concretizes the spec provided by the user in
"spack mirror create" to ensure downloading the right
dependencies. Under normal circumstances concretization
requires that the chosen compiler exists on the system,
but this is not required when creating download mirrors
for other systems, so this requirement is removed in that
case.
* Add test for disabling compiler existence check
* Update compiler existence checking logic
* improve test for disabling compiler existence check
* make py-setuptools a run-time-only dep for py-basemap and patch python package to only apply setuptools flag for build deps
* py-qtconsole does not require setuptools
* This allows Spack to work with MD5 hashes on machines with openssl in FIPS mode.
* We are still using MD5 for validation in many places, and a later PR will replace all uses of MD5 with SHA256.
* This is a quick fix until that happens.
- transitive dependencies were not being handled correctly
- restructure code to do recursion and mark visited packages properly
- add `-V` option to *not* expand virtuals in spack dependencies
This re-adds the option to create and install unsigned tarballs, now
with the -u option (--unsigned) rather than the -y option.
This also changes the "keys" command, replacing the -y/--yes-to-all
option with the -t/--trust option (which has the same effect but is
more-clearly named).
Fixes #7130
shutil.move expects a source path like "/x/y/" to be a directory and
fails if "/x/y" is a symlink. This invokes realpath on the source
path to avoid the issue.
Fixes #7356
In some cases OperatingSystem (e.g. LinuxDistro) was getting
instantiated with a version that contains dashes. This breaks because
the concretizer later converts this value to a string and re-parses
it, and the '-' character is used to separate architecture components.
This adds a guard in the initializer to convert '-' to '_'.
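A minimal sketch of the guard (class layout simplified):
```
class OperatingSystem(object):
    def __init__(self, name, version):
        self.name = name
        # '-' separates architecture components when a spec string is
        # re-parsed, so it must not appear inside the version
        self.version = str(version).replace('-', '_')
```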
Fixes #7237, fixes #6404, fixes #6418, fixes #6369
Identify when binary relocation fails to remove all instances of the
source prefix (and report an error to the user unless they specify
-a to allow the old root to appear). This check occurs at two stages:
during "buildcache create" all instances of the root are replaced with
a special placeholder string ("@@@@..."), and a failure occurs if the
root is detected at this point; when the binary package is extracted
there is a second check. This addresses #7237 and #6418.
This is intended to be compatible with previously-created binary
packages.
This also adds:
* Better error messages for "spack install --use-cache" (#6404)
* Faster relocation on Mac OS (using a single call to
install_name_tool for all files rather than a call for each file)
* Clean up when "buildcache create" fails (addresses #6369)
* Explicit error message when the spack instance extracting the binary
package uses a different install layout than the spack instance that
created the binary package (since this is currently not supported)
* Remove the option to create unsigned binary packages with -y
This updates Cray.setup_platform_environment to use cray-specific
pkgconfig paths so that all providers of 'pkgconfig' have access
to them (previously the 'pkg-config' provider had this but the
'pkgconf' provider did not).
* [SPACK/spec.py] When a query through ForwardQueryToPackage returns
'None', treat that as query failure and raise RuntimeError with
suitable message. This overrides the current behavior to raise an
AttributeError which is now triggered only when no suitable query
property is found and there is no default handler.
* [spack/spec.py] Fix style.
* [SPACK/spec.py] In case of query failure, i.e. property returning
'None', raise AttributeError instead of RuntimeError in order to
pass the unit test. Also, small update in the logic distinguishing
query failure and lack of relevant property/attribute handling.
This updates the fix_darwin_install_name function to use the Spack
Executable object to run install_name_tool, which ensures that
process output is formatted as a 'str' for python2 and python3.
Originally fix_darwin_install_name was invoking subprocess.Popen
directly.
Fixes #5189
When working with non-normalized paths containing ".." on some
file systems, Spack was found to encounter a permission error when
writing to the path. This normalizes a path written by the
intel-parallel-studio package and also normalizes all paths
written by the license install hook (for all packages) to avoid
this issue for intel-parallel-studio.
Following the discussion with Todd and Adam, find has been modified to
accept glob expressions. This should not affect performance as every
glob implementation I inspected has 3 cases (no wildcard, wildcard but
no directories involved, wildcard and directories involved) and uses
fnmatch underneath.
Mixins have been changed to do by default a non-recursive search (but
a recursive search can still be triggered using the recursive keyword).
Following a comment from Todd, the search path for the files listed in
`filter_compiler_wrappers` can now be narrowed. Anyhow, the function
implementation still makes use of `find`, the rationale being that we
have already seen packages that install artifacts in e.g. architecture
dependent folders. The possibility to have a relative search path might
be a good compromise between the previous approach and the one suggested
in the review.
Also: 'ignore_absent' and 'backup' keyword arguments can be optionally
forwarded to `filter_file`.
Following comments from Todd:
- the call to tty.debug has been moved deeper, to log the filtering of each file
- the shadowing on the name "kwargs" is avoided
Implemented a declarative syntax for the additional behavior that can
get attached to classes. Implemented a function to filter compiler
wrappers that uses the mechanism above.
- command reference now includes usage for all Spack commands as output
by `spack help`. Each command usage links to any related section in
the docs.
- added `spack commands` command which can list command names,
subcommands, and generate RST docs for commands.
- added `llnl.util.argparsewriter`, which analyzes an argparse parser and
calls hooks for description, usage, options, and subcommands
- Shorten Spack command usage for short options. Short options are now
shown as [-abc] instead of as [-a] [-b] [-c]
- fix bug that mixed long and short options for top-level `spack help`
- Add proper help for `spack buildcache` subcommands
- Reorganize the help categories of Spack commands so that buildcache is
in packaging and diy and setup are now in build.
- previously commands with this argument showed a long list of choices
that were platform specific.
- use a better metavar: {defaults,system,site,user}[/PLATFORM]
Fixes #7159
When activating extensions in external views, the --ignore-conflicts
option was being ignored. In this particular issue the conflict was
for the duplicate __init__ file for multiple python packages in the
same namespace, but in general any conflict for extensions would
cause an error whether or not --ignore-conflicts was set.
This also renames the 'force' option of do_activate to
'with_dependencies' and updates views to call do_activate with this
set to False (since it traverses the dependency dag anyway). This
isn't strictly required, it just avoids redundant calls.
This reorganizes most sections and rewords a significant portion of
the content (including all introductions) but keeps all the examples.
* Remove section 'What happens at subscript time' from tutorial:
it is too detailed for a tutorial
* Move the 'Extra query parameters' and 'Attach attributes to other
packages' sections into a separate grouping 'Other packaging topics'
* move the 'Set variables at build time yourself' section after
'Set environment variables in dependents' section since the latter
is more motivating
* start the 'set environment variables at build-time for yourself'
section with qt as an example
* renamed section 'specs build interface' to 'retrieving library
information' and updated section introduction
* renamed section 'a motivating example' to 'accessing library
dependencies'; split out the material which deals with implementing
.libs for netlib-lapack into a separate section called 'providing
libraries to dependents'. consolidated in material from the section
'single package providing multiple virtual specs' since
netlib-lapack is an example of this (this removes the material
about intel-parallel studio)
* Allow dashes in command names and fix command name handling
- Commands should allow dashes in their names like the rest of spack,
e.g. `spack log-parse`
- It might be too late for `spack build-cache` (since it is already
called `spack buildcache`), but we should try a bit to avoid
inconsistencies in naming conventions
- The code was inconsistent about where commands should be called by
their python module name (e.g. `log_parse`) and where the actual
command name should be used (e.g. `log-parse`).
- This made it hard to make a command with a dash in the name, and it
made `SpackCommand` fail to recognize commands with dashes.
- The code now uses the user-facing name with dashes for function
parameters, then converts that to the module name when needed.
* Improve performance of log parsing
- A number of regular expressions from ctest_log_parser have really poor
performance, most due to untethered expressions with * or + (i.e., they
don't start with ^, so the repetition has to be checked for every
position in the string with Python's backtracking regex implementation)
- I can't verify that CTest's regexes work with an added ^, so I don't
really want to touch them. I tried adding this and found that it
caused some tests to break.
- Instead of using only "efficient" regular expressions, Added a
prefilter() class that allows the parser to quickly check a
precondition before evaluating any of the expensive regexes.
- Preconditions do things like check whether the string contains "error"
or "warning" (linear time things) before evaluating regexes that would
require them. It's sad that Python doesn't use Thompson string
matching (see https://swtch.com/~rsc/regexp/regexp1.html)
- Even with Python's slow implementation, this makes the parser ~200x
faster on the input we tried it on.
* Add `spack log-parse` command and improve the display of parsed logs
- Add better coloring and line wrapping to the log parse output. This
makes nasty build output look better with the line numbers.
- `spack log-parse` allows the log parsing logic used at the end of
builds to be executed on arbitrary files, which is handy even outside
of spack.
- Also provides a profile option -- we can profile arbitrary files and
show which regular expressions in the magic CTest parser take the most
time.
* Parallelize log parsing
- Log parsing now uses multiple threads for long logs
- Lines from logs are divided into chunks and farmed out to <ncpus>
- Add -j option to `spack log-parse`
* Marking database tests as slow
* Marking url command tests as slow
* Marking every test that uses database as slow
* Marking tests that import files as slow
* Marking gpg tests as slow
* Marking all versions and one list tests as slow
* Added more markers to unit tests + cli option to skip slow tests
Following a discussion with Axel, the generic 'slowtest' marker has been
split into 'db', 'network' and 'maybeslow'. A brief description of the
meaning of each marker has been added to pytest.ini.
A command line option to run only fast tests has been added to
'spack test'
* Don't use classes to group tests together
Reverted grouping tests under a class, as required in the review
* Minor style changes
This attempts to address one of the complaints at #5996 (comment):
> Repo currently caches package instances by Spec, and those Package instances have a Spec.
> This is unnecessary and causes confusion. I think I thought that we'd need to cache instances
> after loading package classes, but really just caching the classes is fine.
With this update, Repo's package cache is removed and Specs cache the package reference themselves. One consequence is that Specs which compare as equal will store separate instances of a Package class (not doing this creates issues for #4595 (comment)).
There were several references to Spec.package that could be replaced with Spec.package_class without any additional modifications. There are still a couple remaining references to Spec.package in Spec that would require adding functionality before replacing (e.g. calling Package.provides and Package.installed).
Note this makes it difficult to mock fetchers for tests which invoke code that reconstructs specs. test_packaging was one example of this where the updates caused a failure (in that case the error was avoided by not making an unnecessary call).
Details:
* Replace instances of spec.package with spec.package_class where a class method is being called
* Remove instances of Repo.get where Spec.package_class can be used in its place
* remove Repo.get caching instances of Package class for specs
* remove redundant check (which is also incorrect now that each spec stores its own copy of its package)
* avoid creating mirror with specs because it copies specs and those copies don't refer to the mocked fetcher (and it is also not required for the test)
* remove checks that are no longer necessary since repo doesn't cache specs
* Cleaned up JUnit report generation on install
The generation of a JUnit report was previously part of the install
command. This commit factors the logic into its own module, and uses
a template for the generation of the report.
It also improves report generation, that now can deal with multiple
specs installed at once. Finally, extending the list of supported
formats is much easier than before, as it entails just writing a
new template.
* Polished report generation + added tests for failures and errors
The generation of a JUnit report has been polished, so that the
stacktrace is correctly displayed with Jenkins JUnit plugin. Standard
error is still not used.
Added unit tests to cover for installation failures and installation
errors.
Avoid adding an "outside" (local home's) python user site directory
during python package installs.
Implements #6611
Fixes packages with auto-find side effects such as `py-setuptools`
that cause `py-matplotlib` to fail to build #6558
The flag_handlers method was being set as a bound method, but when
reset in the package.py file it was being set as an unbound method
(all python2 issues). This gets the underlying function information,
which is the same in either case.
The bug was uncovered for parmetis in #6858. This is a partial fix.
Included are changes to the parmetis package.py file to make use of
flag_handlers.
The feature added in #4611 is currently broken. This commit fixes the
behavior of the command and adds unit tests to ensure the basic semantic
is maintained.
It also changes slightly the behavior of Spec.concretized to avoid
copying caches before the concretization (as this may result in a
wrong hash computation for the DAG).
- Generating the HTML for >2300 packages from RST in Sphinx seems to
take forever.
- Add an option to `spack list` to generate straight HTML instead.
- This reduces the doc build time to about a minute (from 5 minutes on a mac laptop).
* Vendor ordereddict for python2.6 only
This commit substitutes the custom module 'ordereddict_backport' with
the more known 'ordereddict' and vendors it only for python 2.6. Other
supported versions of python will use 'collections.OrderedDict'.
* Use absolute imports also for python 2.6
See PEP-328 for more information on the subject
* Added provenance of vendored ordereddict
See #6794
This fixes cases where test-only dependencies were omitted from
consideration when modifying the environment at build time. This
includes an update to the python package definition to add
testing-related python extensions to its specialized environment
setup.
This updates the conflict-checking logic to require that the conflict
spec matches exactly and that all fields mentioned in the conflict
spec are present in the concretized spec in order to report a
conflict. This will automatically skip all conflict checks for
dependencies of externals (since externals strip dependencies). This
will not affect non-external packages since all fields and
dependencies are fully specified for such packages.
* Keep track of source and versions for external libraries
* Note source of more obscure libraries
* We aren't upgrading jsonschema after all
* Add note on modifications made to pytest
* add OctavePackage
1. remove import CudaPackage which is not needed anymore
2. mention CudaPackage and OctavePackage in packaging guide
3. adjust OctavePackageTemplate
4. add clue file for Octave build
5. sanity check on self.prefix
* use setup_environment
* Only specify a file as needing relocation if it contains the spack
root as a text string (this constraint also applies to binaries)
* Don't fail if there is an error retrieving RPATH information from a
binary (even if it is specified as requiring relocation)
This adds the ability for packages to apply compiler flags in one of
three ways: by injecting them into the compiler wrapper calls (the
default in this PR and previously the only automated choice);
exporting environment variable definitions for variables with
corresponding names (e.g. CPPFLAGS=...); providing them as arguments
to the build system (e.g. configure).
When applying compiler flags using build system arguments, a package
must implement the 'flags_to_build_system_args' function. This is
provided for CMake and autotools packages, so for packages which
subclass those build systems, they need only update their flag
handler method specify which compiler flags should be specified as
arguments to the build system.
Convenience methods are provided to specify that all flags be applied
in one of the 3 available ways, so a custom implementation is only
required if more than one method of applying compiler flags is
needed.
This also removes redundant build system definitions from tutorial
examples
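A hypothetical package opting into one of the convenience methods (the method names follow the description above and are assumptions, not verified API):
```
from spack import *

class Example(CMakePackage):
    def flag_handler(self, name, flags):
        # hand every compiler flag to the build system instead of
        # injecting it through the compiler wrappers
        return self.build_system_flags(name, flags)
```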
Fixes #5940
This PR changes the option '--restage' of 'spack install' to
'--dont-restage', inverting its meaning and the default behavior
of the command.
Fixes #6200
For compilers that successfully run a version detection script but
don't actually return a version, Spack was keeping track of the
empty version and then failing when attempting to construct a
compiler spec. This skips any attempt to add a compiler entry when
no version is reported (but logs when a compiler fails to report
a version).
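Conceptually (hedged sketch, not the actual detection code):
```
import llnl.util.tty as tty

def add_compiler_entry(compilers, exe, version):
    # skip compilers whose version script ran but reported nothing,
    # instead of building a compiler spec with an empty version
    if not version:
        tty.warn('{0} did not report a version; skipping'.format(exe))
        return
    compilers.append((exe, version))
```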
* Support pruning of vars with Env from_sourcing_file (issue #6501)
- Blacklist string literals or regular expressions of environment
variables that are to be removed from consideration as being affected
by the sourcing of the file. Conversely, whitelist modifications
that should not be ignored. Whitelisted variables take priority over
blacklisted ones.
Eg,
EnvironmentModifications.from_sourcing_file(
    bashrc,
    blacklist=['JUNK_ENV', 'OPTIONAL_.*'],
    whitelist=['OPTIONAL_REQUIRED.*']
)
This modification can be used to eliminate environment variables that
are not generalized for modules (eg, user-specific variables).
* BUG: module prepend-path in wrong order (fixes #6501)
* STYLE: module variables in sorted order (issue #6501)
- looks nicer and also helps when comparing the contents of different
module files.
* ENH: remove duplicates from env paths when creating modules (issue #6501)
- this makes for a cleaner module environment and helps avoid some
unnecessary changes to the environment that are only provoked by
redundancies in the PATH.
eg,
before PATH=/usr/bin
after PATH=/usr/bin:/usr/bin:/my/application/bin
should only result in /my/application/bin being added to the PATH
and not /usr/bin:/my/application/bin
Activate via the 'clean' flag (default: False):
EnvironmentModifications.from_sourcing_file(bashrc, clean=True, ...)
Fixes#2440
The "Getting started" guide should be short and sweet. This commit
simplifies the "Environment-Modules" section pruning:
- outdated / wrong suggestions as noted in #2440
- uncommon setups that are better treated in a reference guide
According to the documentation here:
http://www.sphinx-doc.org/en/stable/ext/autodoc.html
"For module data members and class attributes, documentation can either
be put into a comment with special formatting (using a #: to start the
comment instead of just #), or in a docstring after the definition."
* Updated function which checks if a binary file needs relocation.
Previously this was incorrectly identifying ELF binaries as symbolic
links (so they were being excluded from relocation). Added test to
check that ELF binaries are not considered symlinks.
* relocate_text was not replacing paths in text files. Added test to
check that text files are relocated properly (i.e. paths in the file
are converted to the new prefix).
* Exclude backup files created by filter_file when installing from
binary cache.
* Update write_buildinfo_file method signature to distinguish between
the spec prefix and the working directory for the binary cache
package.
"spack spec" was providing helpful error information about conflicts
that was missing from "spack install", this updates "spack install"
to provide the same information.
The original docstring had confusing wording re: what is going to
symlinked and where when using the `extend` directive, and how exactly
the symlinking is performed (not automatically on install, but using
`spack activate`). See #5559.
Showing "Normalize" on output doesn't give users additional information,
as this step is essentially an implementation detail of concretization.
This PR skips it and shows just the input spec and the concretized one.
Printing partial hashes for input spec has been disabled.
* First draft for SC17 build systems portion
Added tutorial_buildsystems.rst file as well as example files under
the tutorial/ directory.
* Remove floating `
* Add requested changes, and examples of subclasses
Added in the requested changes to the documentation. Also added in
information about the subclasses and the defaults that they provide.
Also fixed some phrasing issues, formatting and punctuation.
* Flake8 fixes and new files for classes
Made flake8 fixes to pass tests and also added files to demonstrate code
in the classes.
* Minor edits
Edits in formatting and made some sentence changes
* Flake8 fixes
More flake8 fixes
* Flake8 fix
* Change section order on tutorial and minor edits
Placed the section at the appropriate section for the tutorial and then
added some minor edits that were requested.
* Add requested changes and more details
Added more details to Cmake, Makefile and Python Packages.
* Fixed formatting and minor edits
* Fix doc build error
* Allow types and 'any' in variant definitions.
- Previously variant values had to be a tuple or a callable predicate.
- This allows 'any' as shorthand for `lambda x: True` and type objects
as shorthand for "any value of this type".
- Makes variant definitions more readable, keeps lambdas out of
packages for common cases.
* Update packaging tutorial
* Fix bad file reference in packaging tutorial
* First draft of the advanced packaging tutorial
* advanced packaging tutorial: improved phrasing
Thanks Denis and Hartzell!
* Fixed typos + reworded a couple of sentences
* Reworked module file tutorial section
First draft for the SC17 update. This includes:
- adding an introduction on module files + Spack's module
generation blueprints
- adding a set-up section and provide a docker image for easy set-up
- updating all the relevant snippets
- extending a bit some of the concepts that were already touched
* Added reference to #5582 + committed Dockerfiles
Also fixed a couple of typos spotted by Denis.
* module file tutorial: added section on template customization
* module file tutorial: fixed minor typos + rephrased a sentence
* module file tutorial: made explicit that Docker image comes with software
* module file tutorial: improved phrasing and layout.
Thanks Hartzell!
* module file tutorial: added vim and nano to editors
* module file tutorial: fixed typo
* Fixed typos
Thanks Adam!
* module file tutorial: updated Dockerfile + minor changes in introduction
Fixes #6154
For compiler options that set values using the syntax "-flag value"
(with a space between the flag and the flag's value), the flag and
value were treated as separate arguments and reordered independently.
This updates Spack's logic to treat the flag and value as a single
unit, even if there is a space between them. It assumes that all flags
are prefixed with "-" and that all flag values are not.
* Skip rewriting files that are links (this also avoids issues with parsing
the output of the 'file' command for symlinks)
* Fail rather than warn for several gpg signing issues (e.g. if there is no
key available to sign with)
* Installation with 'buildcache install' only retrieves link and run
dependencies (this matches 'buildcache create' which only creates tarballs
of link/run dependencies)
* Don't rewrite RPATHs for a binary if the binary cached package was created
with relative RPATHs
* Refactor relocate_binary to operate on multiple binaries (given as a
collection of paths). This was likewise done for relocate_text and
make_binary_relative
- This isn't one of those autogenerated SVGs from a drawing program!
- This is a completely re-traced, minimalist SVG file with clearly
delineated pieces so that your favorite renderer can draw a Spack logo
at whatever resolution you want.
- Included versions with text, as well.
Fixes #6126. #3183 removed the print_help function from the "modules" module.
This adds it back so that if a user invokes 'spack load foo' without
having sourced an environment setup script, they will be prompted
to do so. This also improves the placeholder message to tell the
user to invoke 'spack' as a shell function rather than as an executable.
Part of the resource staging process is to place downloaded/expanded
files in the root stage. This was not happening when a resource stage
was restaged.
Fixes #5778.
Spack uses 'gcc -dumpversion' to determine the full version of gcc.
As of gcc 7.2.0, 'gcc -dumpversion' no longer gives the full version;
'gcc -dumpfullversion' is required instead. This PR detects when
'gcc -dumpversion' gives a truncated version such as '7' and, in that
case, retrieves the full version with 'gcc -dumpfullversion'.
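The fallback boils down to something like the sketch below (plain
subprocess calls here; the real change lives in Spack's gcc compiler
class):
```
import subprocess


def gcc_version(gcc='gcc'):
    """Return the full gcc version, sketching the fallback described above."""
    version = subprocess.check_output([gcc, '-dumpversion']).decode().strip()
    if '.' not in version:
        # gcc 7+ truncates -dumpversion to the major version (e.g. '7'),
        # so ask for the full x.y.z string instead.
        version = subprocess.check_output(
            [gcc, '-dumpfullversion']).decode().strip()
    return version
```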
The name of the debug log written by the cc compiler wrapper was given
by Spec.short_spec, which includes the architecture. Somewhere along
the line Spec.format started adding spaces around the architecture
property so the filename started including spaces; the cc wrapper
script appears to ignore this, so files like spack-cc-bzip2-....in.log
(which record the wrapped compiler invocations) were not being
generated. This uses a different spec format string, one that does not
include spaces, to generate the wrapper log file names.
* when a user-provided spec refers to an already-installed package, packages with patches applied were causing validation errors based on the recorded variants in the package's class
* avoid checks on all reserved variants (not just 'patches')
* basic functionality to install spec from a binary cache when it's available; this spiders each cache for each package and could likely be more efficient by caching the results of the first check
* add spec to db after installing from binary cache
* cache (in memory) spec listings retrieved from binary caches
* print a warning instead of failing when no mirrors are configured to retrieve pre-built Spack packages
* make automatic retrieval of pre-built Spack packages from mirrors optional
* no code was using the links stored in the dictionary returned by get_specs, so this simplifies the logic to return only a set of specs
* print package prefix after installing from binary cache
* provide more information to user on install progress
This updates the logic for Package.extends so that if the spec
associated with the package is not concrete, it will report true if
the package *could extend* the given spec; generally speaking a
package could extend a spec as long as none of the details associated
with its extendee spec conflict with the given spec. When the spec
associated with the package is concrete, this function will only
report whether the package *does extend* the given spec. When both
the specs are concrete, the semantics are the same as before.
* When creating a tar of a package for a build cache, symlinks are
preserved (the corresponding path in the newly-created tarfile will
be a symlink rather than a copy of the file)
* Don't add external packages to a build cache
* When installing from binary cache, don't create install prefix until
verification is complete
* Fixes #5754
Previously when RepoPath.repo_for_pkg was invoked with a string,
it did not check if the string included a namespace. Any
namespace-qualified package provided as a string would not be found
(at which point the behavior was to return the highest-precedence
repository).
* handle nested namespaces for packages specified as strings in repo_for_pkg
* add preliminary repository tests
* add test which replicates #5754
* refactor repo tests with fixtures
* define repo_path equivalent at test-level scope for repo tests
* add tests for unknown namespace/package
* rename fixture function (no longer prefixed with 'test_')
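The string handling amounts to splitting on the last '.', roughly as in
the sketch below (illustrative only; the real repo_for_pkg logic also
resolves the namespace against the configured repositories):
```
def split_pkg_name(fullname):
    """Split a possibly namespace-qualified name into (namespace, name)."""
    namespace, _, name = fullname.rpartition('.')
    return (namespace or None, name)


# split_pkg_name('builtin.mock.mpich') -> ('builtin.mock', 'mpich')
# split_pkg_name('mpich')              -> (None, 'mpich')
```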
Internally we work against a branch named 'llnl/develop', which
mirrors the public repository's `develop` branch.
It's useful to be able to run flake8 on our changes, using
`llnl/develop` as the base branch instead of `develop`.
Internally the flake8 subcommand generates the list of changed files
using a hardcoded range of `develop...`.
This makes the base of that range a command line option, with a
default of `develop`.
That lets us do this:
```
spack flake8 --base llnl/develop
```
which uses a range of `llnl/develop...`.
'spack install' can now reinstall a spec even if it has dependents, via
the --overwrite option. This option moves the current installation to a
temporary directory. If the reinstallation is successful the temporary
directory is removed; otherwise a rollback is performed.
- new E741 flake8 checks disallow 'l', 'O', and 'I' as variable names
- rework parts of the code that use this, to be compliant
- we could add exceptions for this, but we're trying to mostly keep up
with PEP8 and we already have more than a few exceptions.
- When you don't use wildcards, flake8 will find places where you used an
undefined name.
- This commit has all the bugfixes resulting from this static check.
There are now separate flake8 configs for core vs. packages:
- core has a smaller set of flake8 exceptions
- packages allow `from spack import *` and module globals
- Allows core to take advantage of static checking for undefined names
- Allows packages to keep using Spack tricks like `from spack import *`
and dependencies setting globals for dependents.
* Update Getting Started docs to clarify that full Xcode suite is required for qt
* Better error message when only the command-line tools are installed
I'm tracking down a problem with the perl package that's been
generating this error:
```
OSError: OSError: [Errno 2] No such file or directory: '/blah/blah/blah/lib/5.24.1/x86_64-linux/Config.pm~'
```
The real problem is upstream, but it's being masked by an exception
raised in `filter_file`'s `finally` block.
In my case, `backup` is `False`.
The backup is created around line 127, the `re.sub()` call
fails (working on that), the `except` block fires and moves the backup
file back, then the `finally` block tries to remove the non-existent
backup file.
This change just avoids trying to remove the non-existent file.
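The gist of the fix, as a self-contained sketch (not Spack's actual
filter_file, and with the regex substitution simplified to a callable):
```
import os
import shutil


def filter_file_sketch(path, transform, backup=False):
    """Rewrite `path` with `transform`, tolerating a missing backup file."""
    backup_filename = path + '~'
    shutil.copy(path, backup_filename)
    try:
        with open(path) as f:
            text = transform(f.read())
        with open(path, 'w') as f:
            f.write(text)
    except BaseException:
        # restore the original; after this the backup no longer exists
        shutil.move(backup_filename, path)
        raise
    finally:
        # only clean up the backup if it is actually still there
        if not backup and os.path.exists(backup_filename):
            os.remove(backup_filename)
```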
- The shell script uses arrays and hence only works in shells that support them, not the default `sh` on every system. For clarity, the shebang `#!/bin/bash` is used instead.
- This fakes out GitFetchStrategy to try code paths for different git
versions.
- This allows us to test code paths for old versions using a newer git
version.
- Tests use a session-scoped mock stage directory so as not to interfere
with the real install.
- Every test is forced to clean up after itself with an additional check.
We now automatically assert that no new files have been added to
`spack.stage_path` during each test.
- This means that tests that fail installs now need to clean up their
stages, but in all other cases the check is useful.
- Be explicit about stage creation during the install process.
- This avoids accidental creation of stages
- e.g., during `spack install --fake`, stages were erroneously recreated
after being destroyed
- This prevents the main Spack installation from being cluttered by
invocations of `spack test`.
- This uses a session-scoped stage fixture to ensure tests don't
interfere.
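A session-scoped fixture of this kind can be sketched as follows
(illustrative pytest code, not the actual fixture in Spack's test suite;
the real one also points spack.stage_path at the temporary directory):
```
import pytest


@pytest.fixture(scope='session')
def mock_stage_dir(tmpdir_factory):
    """One throwaway stage area shared by the whole test session."""
    return str(tmpdir_factory.mktemp('mock-stage'))
```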
- Spack's core package interface was previously overly stateful, in that
calling methods like `do_stage()` could change your working directory.
- This removes Stage's `chdir` and `chdir_to_source` methods and replaces
their usages with `with working_dir(stage.path)` and `with
working_dir(stage.source_path)`, respectively. These ensure the
original working directory is preserved.
- This not only makes the API more usable, it makes the tests more
deterministic, as previously a test could leave the current working
directory in a bad state and cause subsequent tests to fail
mysteriously.
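Usage looks roughly like this (the import location of `working_dir` is an
assumption on my part):
```
import os

from llnl.util.filesystem import working_dir  # assumed import location

before = os.getcwd()
with working_dir('/tmp'):
    # commands here see /tmp as the current directory
    assert os.getcwd() == os.path.realpath('/tmp')
assert os.getcwd() == before  # the original directory is restored on exit
```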
- Assertion would search for root through all possible paths.
- It's also really slow.
- This isn't needed anymore. We're pretty good at ensuring single-rooted
DAGs, and this assertion has never been thrown.
- This shaves another 6 seconds off r-rminer concretization
- This reduces concretization time for r-rminer from over 1 minute to
only 16 seconds.
- OrderedDict ends up taking a *lot* of time to compare larger specs.
- An OrderedDict isn't actually needed here. It's actually not possible
to find duplicates, and we end up sorting the contents of the
OrderedDict anyway.
- This is an optimization to the way traverse_edges iterates over
successors.
- The previous version called dependencies_dict(), which involved a lot of
redundant work (creating dicts and calling canonical_deptype).
- Spack ends up constructing compilers frequently from YAML data.
- This caches the result of parsing the compiler config
- The logic in compilers/__init__.py could use a bigger cleanup, but this
makes concretization much faster for now.
- on macOS, this also ensures that xcrun is called only twice, as opposed
to every time a new compiler object is constructed.
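The caching idea itself is simple; a minimal sketch (not the real
implementation, which keys its cache differently and lives in
compilers/__init__.py):
```
import yaml  # PyYAML, as used for Spack configuration files

_parsed_compiler_configs = {}


def compilers_from_yaml(yaml_text):
    """Parse a compilers.yaml snippet once and reuse the result."""
    if yaml_text not in _parsed_compiler_configs:
        _parsed_compiler_configs[yaml_text] = yaml.safe_load(yaml_text)
    return _parsed_compiler_configs[yaml_text]
```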
This adds a workflow section on how to use spack on Docker.
It provides an example based on the best practices I have collected over
the last few months and avoids the common pitfalls I ran into.
Works with MPI, CUDA, Modules, execution as root, etc.
Background: Developed initially for PIConGPU.
* Make --trusted default when running spack gpg list
Currently, running `spack gpg list` with no arguments returns nothing; you must supply either the `--trusted` or the `--signing` option. The idea here is to return some initial data to the user when the command is run with no options. The alternative would be to return an error telling the user to select one of the two options.
* Add an extra test case for the empty list command
Fixes the issue with code coverage
This isn't a rework of the concretizer but it speeds things up a LOT.
The main culprits were:
1. Variant code, `provider_index`, and `concretize.py` were calling
`spec.package` when they could use `spec.package_class`
- `spec.package` looks up a package instance by `Spec`, which requires a
(fast-ish but not that fast) DAG compare.
- `spec.package_class` just looks up the package's class by name, and you
should use this when all you need is metadata (most of the time); see the
sketch after this list.
- not really clear that the current way packages are looked up is
necessary -- we can consider refactoring that in the future.
2. `Repository.repo_for_pkg` parses a `str` argument into a `Spec` when
called with one, via `@_autospec`, but this is not needed.
- Add some faster code to handle strings directly and avoid parsing
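A sketch of the first point (illustrative only; `variants` on the package
class is assumed here to hold the variant metadata):
```
def variant_names(spec):
    """Read metadata from the package *class*, not a package instance.

    spec.package_class looks the class up by name (cheap); spec.package
    builds/looks up an instance keyed by the whole Spec, which needs a
    DAG comparison and is only worth it when more than metadata is needed.
    """
    return sorted(spec.package_class.variants)
```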
This speeds up concretization 3-9x in my limited tests. Still not super
fast but much more bearable:
Before:
- `spack spec xsdk` took 33.6s
- `spack spec dealii` took 1m39s
After:
- `spack spec xsdk` takes 6.8s
- `spack spec dealii` takes 10.8s
* Do not call setup_package for fake installs
- setup_package could fail if ``setup_dependent_environment`` or other
routines expected to use executables from dependencies
- petsc and boost try to get Python config variables in
`setup_dependent_package`; this would cause them not to be
fake-installable
* Remove vestigial deptype_query argument to Spec.traverse()
- The `deptype_query` argument isn't used anymore -- it's only passed
around and causes confusion when calling traverse.
- Get rid of it and just keep the `deptypes` argument
* Don't print redundant messages when installing dependencies
- `do_install()` was originally depth-first recursive, and printed "<pkg>
already installed in ..." multiple times for packages as recursive
calls encountered them.
- For much cleaner output, use spec.traverse(order='post') to install
dependencies instead
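A sketch of the post-order approach (illustrative only, not the actual
do_install code):
```
def install_dependencies(spec):
    """Visit each dependency exactly once, dependencies before dependents."""
    for dep in spec.traverse(order='post', root=False):
        if not dep.package.installed:
            dep.package.do_install()
```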
* Add package for aspell and ass't dictionaries
Add a package definition for aspell.
Add a handful of dictionaries to convince myself that the support for
a bunch of dictionaries works.
* Flake8 cleanup
* Use six's version of urlparse
`urlparse` is not python3 friendly. This works around it (stolen from
`.../cmd/md5.py`).
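For reference, the six-based import is a one-liner (the URL below is made
up for illustration):
```
from six.moves.urllib.parse import urlparse

parts = urlparse('https://example.com/downloads/aspell6-en-1.0.tar.bz2')
print(parts.netloc)  # 'example.com'
print(parts.path)    # '/downloads/aspell6-en-1.0.tar.bz2'
```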
* Fix incorrect trimming regexp
* Clean up dictionary build
- more parsimonious use of `which` (`make()` has already been made)
- use `sh` instead of `bash`
* Use a helper method to generate info for variants
I figured out my issues with static methods. I *think* that this
is pythonic.
* Convert aspell to an extendable package
Convert aspell to be extendable and rework the dictionaries to be
extensions.
As it stands, there's a great deal of cut and paste in the
dictionaries; I'll abstract that out next.
The {de,}activate methods copy a great deal of code out of
package.py. Perhaps there's a better way....
* Create AspellDictPackage and use it for the dictionaries
Reduce the repeated code, pull it into a base class.
I'm confused about why 'from spack import *' wasn't more useful in the
base class.
* Oops, -de & -es should be AspellDictPackages too
* Typo: pakcage -> package
* Address some commentary
* Update copyright dates, 2016->2017
* spec and spec.package.spec can refer to different objects in the
database. When these two instances of spec differ in terms of
the value of the 'concrete' property, Spec._mark_concrete can
fail when checking Spec.package.installed (which requires
package.spec to be concrete). This skips the check for
spec.package.installed when _mark_concrete is called with
'True' (in other words, when the database is marking all specs
as being concrete).
* add test to confirm this fixes #5293