Replace regex-based target detection for Makefiles with a preliminary "make -q"
to check if a target exists. This does not work for NetBSD make; additional work
is required to detect if NetBSD make is present and to use a regex in that case.
The affected makefile target checks are only performed when the "--test" flag is
added to a "spack install" invocation.
Fixes #8036
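A minimal sketch of the kind of check described above, relying on GNU make's exit-code convention (a hypothetical helper, not the exact Spack implementation):
```
import subprocess

def makefile_has_target(target, directory='.'):
    # GNU make -q exits with 2 when the target is unknown (or another error
    # occurs), and with 0 or 1 when the target exists (up to date / stale).
    returncode = subprocess.call(
        ['make', '-q', target],
        cwd=directory,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return returncode != 2
```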
Before this PR Package.installed was returning True if the spec prefix
existed, without checking the DB. This is wrong for external packages,
whose prefix exists before being registered into the DB. Now the property
checks for both the prefix and a DB entry.
Spack provides a number of classes based on commonly-used build systems
that users can extend when writing packages; the classes provide functionality
to perform the actions relevant to the build system (e.g. running "configure" for
an Autotools-based package). This adds documentation for classes supporting the
following build systems:
* Makefile
* Autotools
* CMake
* QMake
* SCons
* Waf
This includes build systems for managing extensions of the following packages:
* Perl
* Python
* R
* Octave
This also adds documentation on implementing packages that use a custom build
system (e.g. Perl/CMake).
Spack also provides extendable classes which aggregate functionality for related
sets of packages, e.g. those using CUDA. Documentation is added for
CudaPackage.
- The setup-env.sh script currently makes two calls to spack, but it
should only need to make one.
- Add a fast-path shell setup routine in `main.py` to allow the shell
setup to happen in a single, fast call that doesn't load more than it
needs to.
- This simplifies setup code, as it has to eval what Spack prints
- TODO: consider eventually making the whole setup script the output of a
spack command
* update help of `clean --all` to include `-p`
* remove old orphaned `.pyc` removal
* restrict removal of orphaned pyc files to `lib/spack` and `var/spack`
- Clean up error messages for when a lock can't be created, or when an
exclusive (write) lock can't be taken on a file.
- Add a number of subclasses of LockError to distinguish timeouts from
permission issues.
- Add an explicit check to prevent the user from taking a write lock on a
read-only file.
- We had a check for this when we try to *upgrade* a lock on an RO
file, but not for an initial write lock attempt.
- Add more tests for different lock permission scenarios.
- write locks previously wrote information about the lock holder (host
and pid), and read locks would read this in.
- This is really only useful for debugging, so it is now enabled only in debug mode
- add some tests that target debug info, and improve multiproc lock test
output
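A sketch of the lock error hierarchy described above (class names follow the description and are illustrative):
```
class LockError(Exception):
    """Base class for lock-related failures."""


class LockTimeoutError(LockError):
    """Raised when a lock could not be acquired before the timeout expired."""


class LockPermissionError(LockError):
    """Raised when file permissions prevent acquiring the lock."""


class LockROFileError(LockPermissionError):
    """Raised when a write lock is requested on a read-only file."""
```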
When a user specifies a URL for a specific version of a package, Spack originally
would use that URL for all newer versions of the package. This behavior has
proven to be generally more harmful than useful, so this PR removes the feature
such that a version-specific URL override affects only that version.
If the user sets "ccache: true" in Spack's config.yaml, Spack will use an available
ccache executable when compiling C/C++ code. This feature is disabled by default
(i.e. "ccache: false"), and the documentation is updated to explain how to enable
ccache support.
Functional updates:
- `python` now creates a copy of the `python` binaries when it is added
to a view
- Python extensions (packages which subclass `PythonPackage`) rewrite
their shebang lines to refer to python in the view
- Python packages in the same namespace will not generate conflicts if
both have `...lib/site-packages/namespace-example/__init__.py`
- These `__init__` files will also remain when removing any package in
the namespace until the last package in the namespace is removed
Generally (Updated 2/16):
- Any package can define `add_files_to_view` to customize how it is added
to a view (and at the moment custom definitions are included for
`python` and `PythonPackage`)
- Likewise any package can define `remove_files_from_view` to customize
which files are removed (e.g. you don't always want to remove the
namespace `__init__`)
- Any package can define `view_file_conflicts` to customize what it
considers a merge conflict
- Global activations are handled like views (where the view root is the
spec prefix of the extendee)
- Benefit: filesystem-management aspects of activating extensions are
now placed in views (e.g. now one can hardlink a global activation)
- Benefit: overriding `Package.activate` is more straightforward (see
`Python.activate`)
- Complication: extension packages which have special-purpose logic
*only* when activated outside of the extendee prefix must check for
this in their `add_files_to_view` method (see `PythonPackage`)
- `LinkTree` is refactored to have separate methods for copying a
directory structure and for copying files (since it was found that
generally packages may want to alter how files are copied but still
wanted to copy directories in the same way)
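A minimal sketch of the per-package view hooks described above (the method signatures, the shape of `merge_map`, and `view.link` are assumptions for illustration):
```
import os

from spack import *

class ExamplePackage(Package):
    def view_file_conflicts(self, view, merge_map):
        # merge_map is assumed to map source paths in the prefix to
        # destination paths in the view; only existing destinations conflict
        return set(dst for dst in merge_map.values() if os.path.lexists(dst))

    def add_files_to_view(self, view, merge_map):
        # the default would link every file; a package can override this to
        # copy, rewrite shebangs, skip namespace __init__ files, etc.
        for src, dst in merge_map.items():
            if not os.path.lexists(dst):
                view.link(src, dst)

    def remove_files_from_view(self, view, merge_map):
        for dst in merge_map.values():
            if os.path.lexists(dst):
                os.remove(dst)
```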
TODOs (updated 2/20):
- [x] additional testing (there is some unit testing added at this point
but more would be useful)
- [x] refactor or reorganize `LinkTree` methods: currently there is a
separate set of methods for replicating just the directory structure
without the files, and a set for replicating everything
- [x] Right now external views (i.e. those not used for global
activations) call `view.add_extension`, but global activations do not
to avoid some extra work that goes into maintaining external views. I'm
not sure if addressing that needs to be done here but I'd like to
clarify it in the comments (UPDATE: for now I have added a TODO and in
my opinion this can be merged now and the refactor handled later)
- [x] Several method descriptions (e.g. for `Package.activate`) are out
of date and reference a distinction between global activations and
views, they need to be updated
- [x] Update aspell package activations
- Spack was assuming that a group with gid == current uid would always exist.
- This was breaking the travis build for macos.
- also fix issue with the DB tarball test finding coverage files
- pytest was not reporting the correct version from pytest.__version__.
It reported 'unknown'
- this fixes issues on some systems where system-installed pytest plugins
would try to use the version and convert it to an int
The following improvements are made to cxx standard support
(e.g. compiler.cxxNN_flag functions) in compilers:
* Add cxx98_flag property
* Add support for throwing an exception when a flag is not supported (previously
if a flag was not supported the application was terminated with tty.die)
* The name of the flag associated with e.g. c++14 standard support changes for
different compiler versions (e.g. c++1y vs c++14). This makes a few corrections
on what flag to return for which version.
* Added tests to confirm that versions report expected flags for various c++
standards (or raise an exception for versions that don't provide a given cxx
standard)
Note that if a given cxx standard is the default, the associated flag property will
return ""; cxx98 is assumed to be the default standard so this is the behavior for
the associated property in the base compiler class.
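A sketch of how a compiler subclass might express this; the import paths, the ver() helper, and the UnsupportedCompilerFlag exception name are assumptions based on the description:
```
from spack.compiler import Compiler, UnsupportedCompilerFlag  # assumed locations
from spack.version import ver

class ExampleCompiler(Compiler):
    @property
    def cxx98_flag(self):
        # C++98 is assumed to be the default standard, so no flag is needed
        return ''

    @property
    def cxx14_flag(self):
        # the flag spelling changed across releases, and unsupported versions
        # now raise an exception instead of terminating with tty.die
        if self.version < ver('4.8'):
            raise UnsupportedCompilerFlag(self, 'the C++14 standard',
                                          'cxx14_flag', '< 4.8')
        elif self.version < ver('4.9'):
            return '-std=c++1y'
        return '-std=c++14'
```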
Package changes:
* Improvements to the boost spec to take advantage of the improved standard
flag facility.
* Update the clingo spec to catch the new exception rather than look for an
empty flag to indicate non-support (which is not part of the compiler flag API)
Fixes #7885
#7193 added the patches_to_apply function to collect patches, which are then
applied in Package.do_patch. However, this only collects patches that are
associated with the Package object and does not include Spec-related patches
(which are applied by dependents, added in #5476).
Spec.patches already collects patches from the package as well as those applied
by dependents, so the Package.patches_to_apply function isn't necessary. All
uses of Package.patches_to_apply are replaced with Package.spec.patches.
This also updates Package.content_hash to require the associated spec to be
concrete: Spec.patches is only set after concretization. Before this PR, it was
possible for Package.content_hash to be valid before concretizing the associated
Spec if all patches were associated with the Package (vs. being applied by
dependents). This behavior was unreliable though so the change is unlikely to
be disruptive.
Fixes #8345
Spack environment modifications are applied before modules are loaded; this
includes settings to CC, FC, F77, and CXX, which point to the Spack compiler
wrappers. If the loaded modules set CC, this overrides the Spack compiler
wrappers. This PR adds a context manager to preserve the values of CC etc. that
are set by Spack: any effects on the CC, FC, F77, and CXX variables from modules
are undone and their original values are restored.
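A minimal sketch of such a context manager (illustrative; the real helper may differ):
```
import contextlib
import os

@contextlib.contextmanager
def preserve_environment(*variables):
    """Save the named variables, run the body, then restore their values."""
    saved = dict((v, os.environ.get(v)) for v in variables)
    try:
        yield
    finally:
        for var, value in saved.items():
            if value is None:
                os.environ.pop(var, None)   # variable was unset before
            else:
                os.environ[var] = value     # undo whatever the modules did

# usage: wrap module loads so the compiler wrappers stay in place, e.g.
#   with preserve_environment('CC', 'CXX', 'FC', 'F77'):
#       load_module('some-module')
```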
* pybind11: test support
Add a test functionality to pybind11.
* CMake: test also on "make check"
Some projects use non-CTest manual targets for tests.
* extend Prefix class with join() member to support dynamic directories
* add more tests for Prefix.join()
* more tests for Prefix.join()
* add docstring
* add example to docstring of Prefix class
* cleanup Prefix.join() tests
* use Prefix.join() in Packaging Guide
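Illustrative usage (the import path is assumed; join() is the new method):
```
from spack.util.prefix import Prefix  # assumed location of the class

prefix = Prefix('/opt/example')
print(prefix.lib)            # attribute access for well-known directories
print(prefix.join('docs'))   # join() for subdirectories chosen at run time
```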
Fixes #8217
Trying to relocate a distribution when the new and old paths are
equal leads to failure, because the test that ensures that no
unrelocated bits are left over always fails. As an example, this
occurs if a user installs a package, generates a binary with it
using 'spack buildcache', uninstalls it, and then attempts to
reinstall into the same spack installation using the generated
binary package.
This updates the relocation check to accept the presence of the
old prefix in binaries if the package is being reinstalled into
its original location.
* allow user to constrain dependencies that are added conditionally
* remove check for not-visited deps from normalize and move it to concretize. The check now runs after the concretization loop completes (so an error is only reported if the user-mentioned spec doesn't appear anywhere in the DAG)
* remove separate full_spec_deps variable; rename spec_deps to all_spec_deps to clarify that it merges user-specified dependencies with derived dependencies
* add unit test to confirm new functionality
- `spack config blame` is similar to `spack config get`, but it prints
out the config file and line number that each line of the merged
configuration came from.
- This is a debugging tool for understanding where Spack config settings
come from.
- add tests for config blame
Fixes: #8258
#8090 altered import behavior so that import spack no longer
provides access to many other Spack modules. This addresses
a case which depended on the prior behavior and was not
updated as part of #8090. This particular import error only
came up when users were setting compiler flags on specs.
See also: #8194
- there were some leftover spack.* names being used after we removed
globals and moved everything in the top-level namespace to spack.pkgkit
- point those references to their new homes
- remove most `import spack` statements, except for files that need
`spack_version`
- import spack is no longer sufficient to use submodules
(e.g. spack.directives).
- these submodules must be imported directly. Update references
accordingly.
- Spack packages were originally expected to call `from spack import *`
themselves, but it has become difficult to manage imports in the
Spack core.
- the top-level namespace was polluted by package symbols, and it's not
possible to avoid circular dependencies and unnecessary module loads in
the core, given all the stuff the packages need.
- This makes the top-level `spack` package essentially empty, save for a
version tuple and a version string, and `from spack import *` is now
essentially a no-op.
- The common routines and directives that packages need are now in
`spack.pkgkit`, and the import system forces packages to automatically
include this so that old packages that call `from spack import *`
will continue to work without modification.
- Since `from spack import *` is no longer required, we could consider
removing ``from spack import *`` from packages in the future and
shifting to ``from spack.pkgkit import *``, but we can wait a while to
do this.
- spack.util.lock behaves the same as llnl.util.lock, but Lock._lock and
Lock._unlock do nothing.
- can be disabled with a control variable.
- configuration options can enable/disable locking:
- `locks` option in spack configuration controls whether Spack will use filesystem locks or not.
- `-l` and `-L` command-line options can force-disable or force-enable locking.
- Spack will check for group- and world-writability before disabling
locks, and it will not allow a group- or world-writable instance to
have locks disabled.
- update documentation
- Spack core has long used llnl.util.filesystem.join_path, but
os.path.join is pretty much the same thing, and is more efficient.
- Use os.path.join in the core Spack code from now on.
- simplify the singleton pattern across the codebase
- reduce lines of code needed for crufty initialization
- reduce functions that need to mess with a global
- Singletons whose semantics changed:
- spack.store.store() -> spack.store
- spack.repo.path() -> spack.repo.path
- spack.config.config() -> spack.config.config
- spack.caches.fetch_cache() -> spack.caches.fetch_cache
- spack.caches.misc_cache() -> spack.caches.misc_cache
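A minimal sketch of the lazy-singleton wrapper idea behind these changes (not the exact implementation):
```
class Singleton(object):
    """Wrap a factory function and build the real object on first use."""

    def __init__(self, factory):
        self.factory = factory
        self._instance = None

    @property
    def instance(self):
        if self._instance is None:
            self._instance = self.factory()
        return self._instance

    def __getattr__(self, name):
        # forward everything else to the lazily constructed instance
        return getattr(self.instance, name)

# e.g. a module-level object that is only constructed when first touched:
#   config = Singleton(_create_configuration)
```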
- `spack.cmd.all_commands` does a directory listing on
`lib/spack/spack/cmd`, regardless of whether it is needed
- make this lazy so that the directory listing won't happen unless it's
necessary.
- It turns out that jsonschema is one of the more expensive imports.
- move imports of jsonschema into functions to avoid the performance hits
for calls that don't need config.
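The pattern is a deferred import inside the function that needs it, roughly:
```
def validate_section(data, schema):
    """Validate a configuration section against its schema."""
    import jsonschema  # deferred: only pay the import cost when validation runs
    jsonschema.validate(data, schema)
```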
- spack.store was previously initialized at the spack.store module level,
but this means the store has to be initialized on every spack call.
- this moves the state in spack.store to a singleton so that the store is
only initialized when needed.
- spack.repository module is now spack.repo
- `spack.repo` is now `spack.repo.path()` and loaded lazily
- Added `spack.repo.get()` and `spack.repo.all_package_names()` as
convenience functions to simplify the new lazy interface.
- updated tests and code
- no longer require `spack_version` to be a Version (it isn't used that
way anyway)
- use a simple tuple `spack_version_info` with major, minor, patch
versions
- generate `spack_version` from the tuple
- replace `spack.config.get_configuration()` with `spack.config.config()`
- replace `get_config`/`update_config` with `get`, `set`
- add a path syntax that can be used to refer to specific config options
without first getting the entire configuration dict
- update usages of `get_config` and `update_config` to use `get` and `set`
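Illustrative use of the new interface and path syntax (the key names are examples only):
```
import spack.config

# read a single value without fetching the whole 'config' section first
jobs = spack.config.get('config:build_jobs', default=8)

# write a value into a particular configuration scope
spack.config.set('config:build_jobs', 4, scope='user')
```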
- Current configuration code forces the config system to be initialized
at module scope, so configs are parsed on every Spack run, essentially
before anything else.
- We need more control over configuration init order, so move the config
scopes into a class and reduce global state in config.py
Fixes #2781
This PR introduces a new attribute for packages called
`archive_files`, which designates files that should be saved from
a package build (e.g. the config.log generated during autotools
builds).
The attribute contains a list of glob expressions; any file that
matches will be archived in the `<prefix>/.spack/archived-files`
directory. Errors that occur when archiving files are collected and
reported in a file named `<prefix>/.spack/archived-files/errors.txt`.
`AutotoolsPackage` and `CMakePackage` provide a sensible default
override for this attribute.
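A package can override the attribute with its own glob patterns, e.g. in a hypothetical package:
```
from spack import *

class Example(AutotoolsPackage):
    """Hypothetical package archiving extra build artifacts."""

    # glob patterns; matching files end up under
    # <prefix>/.spack/archived-files after the build
    archive_files = ['config.log', 'config.status']
```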
fixes #7941
Modified string representation of Specs to add a space before deps
Unit-tests have been modified accordingly
Added a test for regression on #7941
Fixes #7924
Spack's yaml schema validator was initializing all instances of
unspecified properties with the same object, so that updating the
property for one entry was updating it for others (e.g. updating
versions for one package would update it for other packages).
This updates the schema validator to instantiate defaults with
separate object instances and adds a test to confirm this behavior
(and also confirms #7924 without this change).
* Use reported version of IBM XL Fortran compiler for compiler versions >= 16.0.
Starting with the April 2018 release, the IBM XL C and Fortran compilers report the same version, 16.0. Consequently, there is no need to downgrade the Fortran compiler version to match that of the C compiler.
* Use GitLab's API endpoint for fetching a git snapshot.
* More GitLab packages use the API.
* find_list_url for GitLab's API URLs.
* Flake8
* Url for 'hacckernels'.
* Check GitLab API regexps before the non-API ones.
Activating a package that is already activated now sends a `tty.msg`
and returns.
```
-bash-4.2$ ~/spack/bin/spack activate aspell6-en
==> Package aspell6-en/lc4v24f is already activated.
```
* Better error message for spack providers
fixes #1355
`spack providers` now outputs a sensible error message if non-virtual
specs are provided as arguments:
```
$ spack providers mpi zlib petsc
==> Error: non-virtual specs cannot be part of the query [zlib, petsc]
```
Formatting of the output changed slightly.
* Calling 'spack providers' without arguments prints the virtual package list
Also, the error message in case of a wrong parameter has been improved
to show the list of valid packages.
* Avoid printing headers if stdout is not a tty
* The provider list is formatted with colify if not in a tty
* Added a test to check the list of providers returned from the command
Popping the when spec from kwargs in the extends directive breaks
class inheritance. Inheriting classes do not find their when spec.
We now get the when spec from kwargs instead, leaving it to be found
by any downstream package classes.
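The change amounts to reading the keyword non-destructively, roughly:
```
kwargs = {'when': '@2.0:'}

# old: pop() consumed the key, so classes inheriting the directive could no
# longer find their when spec
when = kwargs.pop('when', None)
assert 'when' not in kwargs

# new: get() reads the value but leaves kwargs intact for downstream classes
kwargs = {'when': '@2.0:'}
when = kwargs.get('when', None)
assert 'when' in kwargs
```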
fixes #7705
Package.provides now checks constraints to ensure that a spec provides
a given virtual package. Note that 'strict=True' is not passed to
satisfies as this function is also used during concretization.
Fixes #7548
This updates the "spack view" command to use the same parsing logic
as "spack install" on the user-provided specs. For example you can
provide a DAG hash to refer to an exact installed spec instead of
specifying name, compiler, etc.
This fixes a check that decides when to skip buildcache relocation.
Originally the check was flawed in two ways: it would skip if the
source prefix matched the destination prefix, which no longer
matters since the source prefix is replaced with a placeholder
(so it always needs to be updated); it also would skip relocation
if the rpaths were not relative, when in fact it should be the
opposite (binaries without relative rpaths *should* be relocated,
and those with relative rpaths don't need it).
- FastPackageChecker was being called at startup every time Spack runs,
which takes a long time on networked filesystems. Startup was taking
5-7 seconds due to this call.
- The checker was intended to avoid importing all packages (which is
really expensive) when all it needs is to stat them. So it's only
"fast" for parts of the code that *need* it.
- This commit makes repositories instantiate the checker lazily, so it's
only constructed when needed.
- This was needed when we transitioned to all lowercase packages because
git didn't handle case changes well on case-insensitive filesystems.
- Now it just adds extra stat calls to startup, and we check for
all-lowercase package names in tests, so we'll remove it.
- people using really old versions of Spack can re-clone.
* Create unload_module method
Extract code from load_module into unload_module.
* Unload modules to create a clean env on Cray
removes cray-libsci, cray-mpich and darshan to prevent any silent
linking with those packages.
* Add format to separate target and os for path
spec format can now handle separations of target and os for setting
up the path.
* Added ${PLATFORM} et al to spec.format()
${PLATFORM}, ${OS}, ${TARGET}
* Update tests
Updated tests and got rid of unnecessary code.
* Also update documentation to reflect this new ability.
* Add default path scheme to config.yaml
Added default path scheme to config.yaml. Users can overwrite this
section if they want.
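Illustrative use of the new tokens (a sketch; the surrounding calls are assumed):
```
from spack.spec import Spec

spec = Spec('zlib').concretized()

# the new tokens select the individual architecture components instead of
# the whole platform-os-target triplet
print(spec.format('${PLATFORM}/${OS}/${TARGET}/${PACKAGE}-${VERSION}'))
```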
* Speed up the default 'libs' property search - important for external
packages.
* As advised by @alalazo, use tuples instead of lists inside
_libs_default_handler.
* Added installation date and time to the database
Information on the date and time of installation of a spec is recorded
into the database. The information is retained on reindexing.
* Expose the possibility to query for installation date
The DB can now be queried for specs that have been installed in a given
time window. This query possibility is exposed to command line via two
new options of the `find` command.
* Extended docstring for Database._add
* Use timestamps since the epoch instead of formatted date in the DB
* Allow 'pretty date' formats from command line
* Substituted kwargs with explicit arguments
* Simplified regex for pretty date strings. Added unit tests.
This updates architecture concretization to
* Search for the nearest parent in the DAG for architecture information
rather than defaulting to the root of the DAG
* Propagate architecture settings transitively, such that if for
example the target is set at the root of the dag it will set the
same target on indirect dependencies (assuming no intermediate
dependency specifies a separate target). Previously this occurred
in general but under some conditions did not, for example if an
intermediate dependency specified some subset of architecture
properties.
* Create mirror for system with different compilers
Spack concretizes the spec provided by the user in
"spack mirror create" to ensure downloading the right
dependencies. Under normal circumstances concretization
requires that the chosen compiler exists on the system,
but this is not required when creating download mirrors
for other systems, so this requirement is removed in that
case.
* Add test for disabling compiler existence check
* Update compiler existence checking logic
* improve test for disabling compiler existence check
* make py-setuptools a run-time-only dep for py-basemap and patch python package to only apply setuptools flag for build deps
* py-qtconsole does not require setuptools
* This allows Spack to work with MD5 hashes on machines with openssl in FIPS mode.
* We are still using MD5 for validation in many places, and a later PR will replace all uses of MD5 with SHA256.
* This is a quick fix until that happens.
- transitive dependencies were not being handled correctly
- restructure code to do recursion and mark visited packages properly
- add `-V` option to *not* expand virtuals in spack dependencies
This re-adds the option to create and install unsigned tarballs, now
with the -u option (--unsigned) rather than the -y option.
This also changes the "keys" command, replacing the -y/--yes-to-all
option with the -t/--trust option (which has the same effect but is
more-clearly named).
Fixes #7130
shutil.move expects a source path like "/x/y/" to be a directory and
fails if "/x/y" is a symlink. This invokes realpath on the source
path to avoid the issue.
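The fix is essentially to resolve the link before handing the path to shutil.move:
```
import os
import shutil

def move_resolving_symlinks(source, destination):
    # shutil.move('/x/y/', dest) expects a real directory and fails when
    # /x/y is a symlink; resolving the path first sidesteps the problem
    shutil.move(os.path.realpath(source), destination)
```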
Fixes #7356
In some cases OperatingSystem (e.g. LinuxDistro) was getting
instantiated with a version that contains dashes. This breaks because
the concretizer later converts this value to a string and re-parses
it, and the '-' character is used to separate architecture components.
This adds a guard in the initializer to convert '-' to '_'.
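A sketch of the guard (simplified; the real initializer does more):
```
class OperatingSystem(object):
    def __init__(self, name, version):
        self.name = name
        # '-' separates architecture components in spec syntax, so dashes in
        # distribution versions are rewritten as underscores
        self.version = str(version).replace('-', '_')
```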
Fixes #7237, #6404, #6418, #6369
Identify when binary relocation fails to remove all instances of the
source prefix (and report an error to the user unless they specify
-a to allow the old root to appear). This check occurs at two stages:
during "bincache create" all instances of the root are replaced with
a special placeholder string ("@@@@..."), and a failure occurs if the
root is detected at this point; when the binary package is extracted
there is a second check. This addresses #7237 and #6418.
This is intended to be compatible with previously-created binary
packages.
This also adds:
* Better error messages for "spack install --use-cache" (#6404)
* Faster relocation on Mac OS (using a single call to
install_name_tool for all files rather than a call for each file)
* Clean up when "buildcache create" fails (addresses #6369)
* Explicit error message when the spack instance extracting the binary
package uses a different install layout than the spack instance that
created the binary package (since this is currently not supported)
* Remove the option to create unsigned binary packages with -y
This updates Cray.setup_platform_environment to use cray-specific
pkgconfig paths so that all providers of 'pkgconfig' have access
to them (previously the 'pkg-config' provider had this but the
'pkgconf' provider did not).
* [SPACK/spec.py] When a query through ForwardQueryToPackage returns
'None', treat that as query failure and raise RuntimeError with
suitable message. This overrides the current behavior to raise an
AttributeError which is now triggered only when no suitable query
property is found and there is no default handler.
* [spack/spec.py] Fix style.
* [SPACK/spec.py] In case of query failure, i.e. property returning
'None', raise AttributeError instead of RuntimeError in order to
pass the unit test. Also, small update in the logic distinguishing
query failure and lack of relevant property/attribute handling.
This updates the fix_darwin_install_name function to use the Spack
Executable object to run install_name_tool, which ensures that
process output is formatted as a 'str' for python2 and python3.
Originally fix_darwin_install_name was invoking subprocess.Popen
directly.
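A sketch of the Executable-based invocation (function and arguments are illustrative):
```
from spack.util.executable import Executable

def set_install_name(binary_path, new_id):
    install_name_tool = Executable('install_name_tool')
    # output=str makes the captured output a text 'str' on python2 and python3
    return install_name_tool('-id', new_id, binary_path, output=str)
```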
Fixes #5189
When working with non-normalized paths containing ".." on some
file systems, Spack was found to encounter a permission error when
writing to the path. This normalizes a path written by the
intel-parallel-studio package and also normalizes all paths
written by the license install hook (for all packages) to avoid
this issue for intel-parallel-studio.
Following the discussion with Todd and Adam, find has been modified to
accept glob expressions. This should not affect performance as every
glob implementation I inspected has 3 cases (no wildcard, wildcard but
no directories involved, wildcard and directories involved) and uses
fnmatch underneath.
Mixins have been changed to do by default a non-recursive search (but
a recursive search can still be triggered using the recursive keyword).
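Illustrative calls (assuming the helper lives in llnl.util.filesystem):
```
from llnl.util.filesystem import find

# glob pattern with a recursive search
libraries = find('/opt/example/lib', 'libexample*.so', recursive=True)

# non-recursive search, the new default used by the mixins
headers = find('/opt/example/include', '*.h', recursive=False)
```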
Following a comment from Todd, the search path for the files listed in
`filter_compiler_wrappers` can now be narrowed. Anyhow, the function
implementation still makes use of `find`, the rationale being that we
have already seen packages that install artifacts in e.g. architecture
dependent folders. The possibility to have a relative search path might
be a good compromise between the previous approach and the one suggested
in the review.
Also: 'ignore_absent' and 'backup' keyword arguments can be optionally
forwarded to `filter_file`.
Following comments from Todd:
- the call to tty.debug has been moved deeper, to log the filtering of each file
- shadowing of the name "kwargs" is avoided
Implemented a declarative syntax for the additional behavior that can
get attached to classes. Implemented a function to filter compiler
wrappers that uses the mechanism above.
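Illustrative use of the declarative syntax in a hypothetical package (the keyword for the relative search path is an assumption):
```
from spack import *

class Example(AutotoolsPackage):
    """Hypothetical package that scrubs compiler-wrapper paths after install."""

    # after installation, references to Spack's compiler wrappers in config.log
    # are replaced with the real compilers; the search is narrowed to a
    # subdirectory of the prefix
    filter_compiler_wrappers('config.log', relative_root='share')
```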
- command reference now includes usage for all Spack commands as output
by `spack help`. Each command usage links to any related section in
the docs.
- added `spack commands` command which can list command names,
subcommands, and generate RST docs for commands.
- added `llnl.util.argparsewriter`, which analyzes an argparse parser and
calls hooks for description, usage, options, and subcommands
- Shorten Spack command usage for short options. Short options are now
shown as [-abc] instead of as [-a] [-b] [-c]
- fix bug that mixed long and short options for top-level `spack help`
- Add proper help for `spack buildcache` subcommands
- Reorganize the help categories of Spack commands so that buildcache is
in packaging and diy and setup are now in build.
- previously commands with this argument showed a long list of choices
that were platform specific.
- use a better metavar: {defaults,system,site,user}[/PLATFORM]
Fixes #7159
When activating extensions in external views, the --ignore-conflicts
option was being ignored. In this particular issue the conflict was
for the duplicate __init__ file for multiple python packages in the
same namespace, but in general any conflict for extensions would
cause an error whether or not --ignore-conflicts was set.
This also renames the 'force' option of do_activate to
'with_dependencies' and updates views to call do_activate with this
set to False (since it traverses the dependency dag anyway). This
isn't strictly required, it just avoids redundant calls.
This reorganizes most sections and rewords a significant portion of
the content (including all introductions) but keeps all the examples.
* Remove section 'What happens at subscript time' from tutorial:
it is too detailed for a tutorial
* Move the 'Extra query parameters' and 'Attach attributes to other
packages' sections into a separate grouping 'Other packaging topics'
* move the 'Set variables at build time yourself' section after
'Set environment variables in dependents' section since the latter
is more motivating
* start the 'set environment variables at build-time for yourself'
section with qt as an example
* renamed section 'specs build interface' to 'retrieving library
information' and updated section introduction
* renamed section 'a motivating example' to 'accessing library
dependencies'; split out the material which deals with implementing
.libs for netlib-lapack into a separate section called 'providing
libraries to dependents'. consolidated in material from the section
'single package providing multiple virtual specs' since
netlib-lapack is an example of this (this removes the material
about intel-parallel studio)
* Allow dashes in command names and fix command name handling
- Commands should allow dashes in their names like the rest of Spack,
e.g. `spack log-parse`
- It might be too late for `spack build-cache` (since it is already
called `spack buildcache`), but we should try a bit to avoid
inconsistencies in naming conventions
- The code was inconsistent about where commands should be called by
their python module name (e.g. `log_parse`) and where the actual
command name should be used (e.g. `log-parse`).
- This made it hard to make a command with a dash in the name, and it
made `SpackCommand` fail to recognize commands with dashes.
- The code now uses the user-facing name with dashes for function
parameters, then converts that to the module name when needed.
* Improve performance of log parsing
- A number of regular expressions from ctest_log_parser have really poor
performance, most due to untethered expressions with * or + (i.e., they
don't start with ^, so the repetition has to be checked for every
position in the string with Python's backtracking regex implementation)
- I can't verify that CTest's regexes work with an added ^, so I don't
really want to touch them. I tried adding this and found that it
caused some tests to break.
- Instead of relying only on "efficient" regular expressions, added a
prefilter() class that allows the parser to quickly check a
precondition before evaluating any of the expensive regexes (see the
sketch after this list).
- Preconditions do things like check whether the string contains "error"
or "warning" (linear time things) before evaluating regexes that would
require them. It's sad that Python doesn't use Thompson string
matching (see https://swtch.com/~rsc/regexp/regexp1.html)
- Even with Python's slow implementation, this makes the parser ~200x
faster on the input we tried it on.
* Add `spack log-parse` command and improve the display of parsed logs
- Add better coloring and line wrapping to the log parse output. This
makes nasty build output look better with the line numbers.
- `spack log-parse` allows the log parsing logic used at the end of
builds to be executed on arbitrary files, which is handy even outside
of spack.
- Also provides a profile option -- we can profile arbitrary files and
show which regular expressions in the magic CTest parser take the most
time.
* Parallelize log parsing
- Log parsing now uses multiple threads for long logs
- Lines from logs are divided into chunks and farmed out to <ncpus>
- Add -j option to `spack log-parse`
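A minimal sketch of the prefilter idea referenced above (the real class in the CTest log parser differs in details):
```
import re

class prefilter(object):
    """Run a cheap precondition before any of the expensive regexes."""

    def __init__(self, precondition, *patterns):
        self.precondition = precondition
        self.patterns = [re.compile(p) for p in patterns]

    def search(self, text):
        # e.g. precondition = lambda s: 'error' in s  (a linear-time check)
        if not self.precondition(text):
            return False
        return any(p.search(text) for p in self.patterns)
```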
* Marking database tests as slow
* Marking url command tests as slow
* Marking every test that uses database as slow
* Marking tests that import files as slow
* Marking gpg tests as slow
* Marking all versions and one list tests as slow
* Added more markers to unit tests + cli option to skip slow tests
Following a discussion with Axel, the generic 'slowtest' marker has been
split into 'db', 'network' and 'maybeslow'. A brief description of the
meaning of each marker has been added to pytest.ini.
A command line option to run only fast tests has been added to
'spack test'
* Don't use classes to group tests together
Reverted grouping tests under a class, as required in the review
* Minor style changes
This attempts to address one of the complaints at #5996 (comment):
> Repo currently caches package instances by Spec, and those Package instances have a Spec.
> This is unnecessary and causes confusion. I think I thought that we'd need to cache instances
> after loading package classes, but really just caching the classes is fine.
With this update, Repo's package cache is removed and Specs cache the package reference themselves. One consequence is that Specs which compare as equal will store separate instances of a Package class (not doing this creates issues for #4595 (comment)).
There were several references to Spec.package that could be replaced with Spec.package_class without any additional modifications. There are still a couple remaining references to Spec.package in Spec that would require adding functionality before replacing (e.g. calling Package.provides and Package.installed).
Note this makes it difficult to mock fetchers for tests which invokes code that reconstructs specs. test_packaging was one example of this where the updates caused a failure (in that case the error was avoided by not making an unnecessary call).
Details:
* Replace instances of spec.package with spec.package_class where a class method is being called
* Remove instances of Repo.get where Spec.package_class can be used in its place
* remove Repo.get caching instances of Package class for specs
* remove redundant check (which is also incorrect now that each spec stores its own copy of its package)
* avoid creating mirror with specs because it copies specs and those copies don't refer to the mocked fetcher (and it is also not required for the test)
* remove checks that are no longer necessary since repo doesn't cache specs
* Cleaned up JUnit report generation on install
The generation of a JUnit report was previously part of the install
command. This commit factors the logic into its own module, and uses
a template for the generation of the report.
It also improves report generation, which now can deal with multiple
specs installed at once. Finally, extending the list of supported
formats is much easier than before, as it entails just writing a
new template.
* Polished report generation + added tests for failures and errors
The generation of a JUnit report has been polished, so that the
stacktrace is correctly displayed with Jenkins JUnit plugin. Standard
error is still not used.
Added unit tests to cover for installation failures and installation
errors.
Avoid adding an "outside" (local home's) python user site directory
during python package installs.
Implements #6611
Fixes packages with auto-find side effects such as `py-setuptools`
that cause `py-matplotlib` to fail to build #6558
The flag_handlers method was being set as a bound method, but when
reset in the package.py file it was being set as an unbound method
(all python2 issues). This gets the underlying function information,
which is the same in either case.
The bug was uncovered for parmetis in #6858. This is a partial fix.
Included are changes to the parmetis package.py file to make use of
flag_handlers.
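The underlying-function trick is plain python2 bookkeeping, roughly:
```
def underlying_function(handler):
    # python2: a handler fetched from the class is an unbound method, while one
    # assigned in package.py may be a bare function; __func__ unwraps the former
    # so both cases can be compared and called uniformly
    return getattr(handler, '__func__', handler)
```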
The feature added in #4611 is currently broken. This commit fixes the
behavior of the command and adds unit tests to ensure the basic semantic
is maintained.
It also slightly changes the behavior of Spec.concretized to avoid
copying caches before the concretization (as this may result in a
wrong hash computation for the DAG).
- Generating the HTML for >2300 packages from RST in Sphinx seems to
take forever.
- Add an option to `spack list` to generate straight HTML instead.
- This reduces the doc build time to about a minute (from 5 minutes on a mac laptop).
* Vendor ordereddict for python2.6 only
This commit substitutes the custom module 'ordereddict_backport' with
the more known 'ordereddict' and vendors it only for python 2.6. Other
supported versions of python will use 'collections.OrderedDict'.
* Use absolute imports also for python 2.6
See PEP-328 for more information on the subject
* Added provenance of vendored ordereddict
See #6794
This fixes cases where test-only dependencies were omitted from
consideration when modifying the environment at build time. This
includes an update to the python package definition to add
testing-related python extensions to its specialized environment
setup.
This updates the conflict-checking logic to require that the conflict
spec matches exactly and that all fields mentioned in the conflict
spec are present in the concretized spec in order to report a
conflict. This will automatically skip all conflict checks for
dependencies of externals (since externals strip dependencies). This
will not affect non-external packages since all fields and
dependencies are fully specified for such packages.
* Keep track of source and versions for external libraries
* Note source of more obscure libraries
* We aren't upgrading jsonschema after all
* Add note on modifications made to pytest
* add OctavePackage
1. remove import CudaPackage which is not needed anymore
2. mention CudaPackage and OctavePackage in packaging guide
3. adjust OctavePackageTemplate
4. add clue file for Octave build
5. sanity check on self.prefix
* use setup_environment
* Only specify a file as needing relocation if it contains the spack
root as a text string (this constraint also applies to binaries)
* Don't fail if there is an error retrieving RPATH information from a
binary (even if it is specified as requiring relocation)
This adds the ability for packages to apply compiler flags in one of
three ways: by injecting them into the compiler wrapper calls (the
default in this PR and previously the only automated choice);
exporting environment variable definitions for variables with
corresponding names (e.g. CPPFLAGS=...); providing them as arguments
to the build system (e.g. configure).
When applying compiler flags using build system arguments, a package
must implement the 'flags_to_build_system_args' function. This is
provided for CMake and Autotools packages, so packages which
subclass those build systems need only update their flag
handler method to specify which compiler flags should be passed as
arguments to the build system.
Convenience methods are provided to specify that all flags be applied
in one of the 3 available ways, so a custom implementation is only
required if more than one method of applying compiler flags is
needed.
This also removes redundant build system definitions from tutorial
examples
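A sketch of how a package might opt into one of the three mechanisms; the convenience handler name is an assumption based on the description above:
```
from spack import *

class Example(AutotoolsPackage):
    """Hypothetical package routing all compiler flags to configure."""

    # instead of injecting flags through the compiler wrappers (the default),
    # hand every flag to the build system as configure arguments
    flag_handler = AutotoolsPackage.build_system_flags
```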
Fixes #5940
This PR changes the option '--restage' of 'spack install' to
'--dont-restage', inverting its meaning and the default behavior
of the command.
Fixes #6200
For compilers that successfully run a version detection script but
don't actually return a version, Spack was keeping track of the
empty version and then failing when attempting to construct a
compiler spec. This skips any attempt to add a compiler entry when
no version is reported (but logs when a compiler fails to report
a version).
* Support pruning of vars with Env from_sourcing_file (issue #6501)
- Blacklist string literals or regular expressions of environment
variables that are to be removed from consideration as being affected
by the sourcing of the file. Conversely, whitelist modifications
that should not be ignored. Whitelisted variables have priority over
blacklisted ones.
E.g.,
EnvironmentModifications.from_sourcing_file(
bashrc,
blacklist=['JUNK_ENV', 'OPTIONAL_.*'],
whitelist=['OPTIONAL_REQUIRED.*']
)
This modification can be used to eliminate environment variables that
are not generalized for modules (eg, user-specific variables).
* BUG: module prepend-path in wrong order (fixes #6501)
* STYLE: module variables in sorted order (issue #6501)
- looks nicer and also helps when comparing the contents of different
module files.
* ENH: remove duplicates from env paths when creating modules (issue #6501)
- this makes for a cleaner module environment and helps avoid some
unnecessary changes to the environment that are only provoked by
redundancies in the PATH.
e.g.,
before: PATH=/usr/bin
after:  PATH=/usr/bin:/usr/bin:/my/application/bin
should only result in /my/application/bin being added to the PATH,
and not /usr/bin:/my/application/bin
Activate via the 'clean' flag (default: False):
EnvironmentModifications.from_sourcing_file(bashrc, clean=True,..
Fixes #2440
The "Getting started" guide should be short and sweet. This commit
simplifies the "Environment-Modules" section pruning:
- outdated / wrong suggestions as noted in #2440
- uncommon setups that are better treated in a reference guide
According to the documentation here:
http://www.sphinx-doc.org/en/stable/ext/autodoc.html
"For module data members and class attributes, documentation can either
be put into a comment with special formatting (using a #: to start the
comment instead of just #), or in a docstring after the definition."
* Updated function which checks if a binary file needs relocation.
Previously this was incorrectly identifying ELF binaries as symbolic
links (so they were being excluded from relocation). Added test to
check that ELF binaries are not considered symlinks.
* relocate_text was not replacing paths in text files. Added test to
check that text files are relocated properly (i.e. paths in the file
are converted to the new prefix).
* Exclude backup files created by filter_file when installing from
binary cache.
* Update write_buildinfo_file method signature to distinguish between
the spec prefix and the working directory for the binary cache
package.
"spack spec" was providing helpful error information about conflicts
that was missing from "spack install", this updates "spack install"
to provide the same information.
The original docstring had confusing wording re: what is going to be
symlinked and where when using the `extend` directive, and how exactly
the symlinking is performed (not automatically on install, but using
`spack activate`). See #5559.
Showing "Normalize" on output doesn't give users additional information,
as this step is essentially an implementation detail of concretization.
This PR skips it and shows just the input spec and the concretized one.
Printing partial hashes for input spec has been disabled.
* First draft for SC17 build systems portion
Added tutorial_buildsystems.rst file as well as example files under
the tutorial/ directory.
* Remove floating `
* Add requested changes, and examples of subclasses
Added in the requested changes to the documentation. Also added in
information about the subclasses and the defaults that they provide.
Also fixed some phrasing issues, formatting and punctuation.
* Flake8 fixes and new files for classes
Made flake8 fixes to pass tests and also added files to demonstrate code
in the classes.
* Minor edits
Edits in formatting and made some sentence changes
* Flake8 fixes
More flake8 fixes
* Flake8 fix
* Change section order on tutorial and minor edits
Placed the section at the appropriate section for the tutorial and then
added some minor edits that were requested.
* Add requested changes and more details
Added more details to CMake, Makefile and Python Packages.
* Fixed formatting and minor edits
* Fix doc build error
* Allow types and 'any' in variant definitions.
- Previously variant values had to be a tuple or a callable predicate.
- This allows 'any' as shorthand for `lambda x: True` and type objects
as shorthand for "any value of this type".
- Makes variant definitions more readable and keeps lambdas out of
packages for common cases (a short sketch follows this list).
* Update packaging tutorial
* Fix bad file reference in packaging tutorial
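A sketch of the variant shorthand described above, in a hypothetical package:
```
from spack import *

class Example(Package):
    """Hypothetical package using the shorthand variant values."""

    # 'any' accepts every value -- shorthand for: values=lambda x: True
    variant('extra_flags', default='', values=any,
            description='extra flags passed through to the build')

    # a type object accepts any value of that type
    variant('jobs', default=2, values=int,
            description='number of build jobs to request')
```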
* First draft of the advanced packaging tutorial
* advanced packaging tutorial: improved phrasing
Thanks Denis and Hartzell!
* Fixed typos + reworded a couple of sentences
* Reworked module file tutorial section
First draft for the SC17 update. This includes:
- adding an introduction on module files + Spack's module
generation blueprints
- adding a set-up section and provide a docker image for easy set-up
- updating all the relevant snippets
- extending a bit some of the concepts that were already touched
* Added reference to #5582 + committed Dockerfiles
Also fixed a couple of typos spotted by Denis.
* module file tutorial: added section on template customization
* module file tutorial: fixed minor typos + rephrased a sentence
* module file tutorial: made explicit that Docker image comes with software
* module file tutorial: improved phrasing and layout.
Thanks Hartzell!
* module file tutorial: added vim and nano to editors
* module file tutorial: fixed typo
* Fixed typos
Thanks Adam!
* module file tutorial: updated Dockerfile + minor changes in introduction
Fixes #6154
For compiler options which set values using the syntax "-flag value"
(with a space between the flag and the flag's value) the flag and
value were treated as separate and reordered. This updates Spack's
logic to treat the flag and value as a single unit, even if there is
a space between them. It assumes that all flags are prefixed with
"-" and that all flag values are not.
* Skip rewriting files that are links (this also avoids issues with parsing
the output of the 'file' command for symlinks)
* Fail rather than warn for several gpg signing issues (e.g. if there is no
key available to sign with)
* Installation with 'buildcache install' only retrieves link and run
dependencies (this matches 'buildcache create' which only creates tarballs
of link/run dependencies)
* Don't rewrite RPATHs for a binary if the binary cached package was created
with relative RPATHs
* Refactor relocate_binary to operate on multiple binaries (given as a
collection of paths). This was likewise done for relocate_text and
make_binary_relative
- This isn't one of those autogenerated SVGs from a drawing program!
- This is a completely re-traced, minimalist SVG file with clearly
delineated pieces so that your favorite renderer can draw a Spack logo
at whatever resolution you want.
- Included versions with text, as well.
Fixes #6126
#3183 removed the print_help function from the "modules" module.
This adds it back so that if a user invokes 'spack load foo' without
having sourced an environment setup script, they will be prompted
to do so. This also improves the placeholder message to tell the
user to invoke 'spack' as a shell function rather than as an executable.
Part of the resource staging process is to place downloaded/expanded
files in the root stage. This was not happening when a resource stage
was restaged.
Fixes #5778.
Spack uses 'gcc -dumpversion' to determine the full version of gcc.
'gcc -dumpversion' no longer gives the full version on gcc 7.2.0.
'gcc -dumpfullversion' is required instead. This PR detects when
'gcc -dumpversion' gives a truncated version of '7' and in that
case retrieves the full version with 'gcc -dumpfullversion'
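A sketch of the detection logic, where gcc stands in for a callable compiler executable:
```
def gcc_full_version(gcc):
    # gcc 7.x truncates -dumpversion to the major version ('7'); fall back to
    # -dumpfullversion to recover the full '7.2.0'-style string
    version = gcc('-dumpversion', output=str).strip()
    if version == '7':
        version = gcc('-dumpfullversion', output=str).strip()
    return version
```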
The name of the debug log written by the cc compiler wrapper was given
by Spec.short_spec, which includes the architecture. Somewhere along
the line Spec.format started adding spaces around the architecture
property so the filename started including spaces; the cc wrapper
script appears to ignore this, so files like spack-cc-bzip2-....in.log
(which record the wrapped compiler invocations) were not being
generated. This uses a different format string from the spec to
generate the wrapper log file names (which does not include spaces).
* when a user-provided spec refers to an already-installed package, packages with patches applied were causing validation errors based on the recorded variants in the package's class
* avoid checks on all reserved variants (not just 'patches')
* basic functionality to install spec from a binary cache when it's available; this spiders each cache for each package and could likely be more efficient by caching the results of the first check
* add spec to db after installing from binary cache
* cache (in memory) spec listings retrieved from binary caches
* print a warning vs. failing when no mirrors are configured to retrieve pre-built Spack packages
* make automatic retrieval of pre-built spack packages from mirrors optional
* no code was using the links stored in the dictionary returned by get_specs, so this simplifies the logic to return only a set of specs
* print package prefix after installing from binary cache
* provide more information to user on install progress
This updates the logic for Package.extends so that if the spec
associated with the package is not concrete, it will report true if
the package *could extend* the given spec; generally speaking a
package could extend a spec as long as none of the details associated
with its extendee spec conflict with the given spec. When the spec
associated with the package is concrete, this function will only
report whether the package *does extend* the given spec. When both
the specs are concrete, the semantics are the same as before.
* When creating a tar of a package for a build cache, symlinks are
preserved (the corresponding path in the newly-created tarfile will
be a symlink rather than a copy of the file)
* Don't add external packages to a build cache
* When installing from binary cache, don't create install prefix until
verification is complete
* Fixes #5754
Previously when RepoPath.repo_for_pkg was invoked with a string,
it did not check if the string included a namespace. Any
namespace-qualified package provided as a string would not be found
(at which point the behavior was to return the highest-precedence
repository).
* handle nested namespaces for packages specified as strings in repo_for_pkg
* add preliminary repository tests
* add test which replicates #5754
* refactor repo tests with fixtures
* define repo_path equivalent at test-level scope for repo tests
* add tests for unknown namespace/package
* rename fixture function (no longer prefixed with 'test_')
Internally we work against a branch named 'llnl/develop', which
mirrors the public repository's `develop` branch.
It's useful to be able to run flake8 on our changes, using
`llnl/develop` as the base branch instead of `develop`.
Internally the flake8 subcommand generates the list of changes files
using a hardcoded range of `develop...`.
This makes the base of that range a command line option, with a
default of `develop`.
That lets us do this:
```
spack flake8 --base llnl/develop
```
which uses a range of `llnl/develop...`.
'spack install' can now reinstall a spec even if it has dependents, via
the --overwrite option. This option moves the current installation in a
temporary directory. If the reinstallation is successful the temporary
is removed, otherwise a rollback is performed.
- new E741 flake8 checks disallow 'l', 'O', and 'I' as variable names
- rework parts of the code that use this, to be compliant
- we could add exceptions for this, but we're trying to mostly keep up
with PEP8 and we already have more than a few exceptions.
- When you don't use wildcards, flake8 will find places where you used an
undefined name.
- This commit has all the bugfixes resulting from this static check.
There are now separate flake8 configs for core vs. packages:
- core has a smaller set of flake8 exceptions
- packages allow `from spack import *` and module globals
- Allows core to take advantage of static checking for undefined names
- Allows packages to keep using Spack tricks like `from spack import *`
and dependencies setting globals for dependents.
* Update Getting Started docs to clarify that full Xcode suite is required for qt
* Better error message when only the command-line tools are installed
I'm tracking down a problem with the perl package that's been
generating this error:
```
OSError: OSError: [Errno 2] No such file or directory: '/blah/blah/blah/lib/5.24.1/x86_64-linux/Config.pm~'
```
The real problem is upstream, but it's being masked by an exception
raised in `filter_file`'s finally block.
In my case, `backup` is `False`.
The backup is created around line 127, the `re.sub()` call
fails (working on that), the `except` block fires and moves the backup
file back, then the finally block tries to remove the non-existent
backup file.
This change just avoids trying to remove the non-existent file.
- The shell script uses arrays and hence only works on sophisticated shells and not the default `sh`. For clarity the shebang `#!/bin/bash` has been used instead.
- This fakes out GitFetchStrategy to try code paths for different git
versions.
- This allows us to test code paths for old versions using a newer git
version.
- Tests use a session-scoped mock stage directory so as not to interfere
with the real install.
- Every test is forced to clean up after itself with an additional check.
We now automatically assert that no new files have been added to
`spack.stage_path` during each test.
- This means that tests that fail installs now need to clean up their
stages, but in all other cases the check is useful.
- Be explicit about stage creation during the install process.
- This avoids accidental creation of stages
- e.g., during `spack install --fake`, stages were erroneously recreated
after being destroyed
- This prevents the main spack install from being cluttered by
invocations of `spack test`.
- This uses a session-scoped stage fixture to ensure tests don't
interfere.
- Spack's core package interface was previously overly stateful, in that
calling methods like `do_stage()` could change your working directory.
- This removes Stage's `chdir` and `chdir_to_source` methods and replaces
their usages with `with working_dir(stage.path)` and `with
working_dir(stage.source_path)`, respectively. These ensure the
original working directory is preserved.
- This not only makes the API more usable, it makes the tests more
deterministic, as previously a test could leave the current working
directory in a bad state and cause subsequent tests to fail
mysteriously.
- Assertion would search for root through all possible paths.
- It's also really slow.
- This isn't needed anymore. We're pretty good at ensuring single-rooted
DAGs, and this assertion has never been thrown.
- This shaves another 6 seconds off r-rminer concretization
- This reduces concretization time for r-rminer from over 1 minute to
only 16 seconds.
- OrderedDict ends up taking a *lot* of time to compare larger specs.
- An OrderedDict isn't actually needed here. It's actually not possible
to find duplicates, and we end up sorting the contents of the
OrderedDict anyway.
- This is an optimization to the way traverse_edges iterates over
successors.
- Previous version called dependencies_dict(), which involved a lot of
redundant work (creating dicts and calling canonical_deptype)
- Spack ends up constructing compilers frequently from YAML data.
- This caches the result of parsing the compiler config
- The logic in compilers/__init__.py could use a bigger cleanup, but this
makes concretization much faster for now.
- on macOS, this also ensures that xcrun is called only twice, as opposed
to every time a new compiler object is constructed.
This adds a workflow section on how to use spack on Docker.
It provides an example on the best-practices I collected over the
last months and circumvents the common pitfalls I tapped in.
Works with MPI, CUDA, Modules, execution as root, etc.
Background: Developed initially for PIConGPU.
* Make --trusted default when running spack gpg list
Currently running `spack gpg list` with no arguments returns nothing. You must supply either the `--trusted` or the `--signing` options. The idea here is to return some initial data to the user when the command is run. The alternative is to return an error, telling the user to select one of the two options.
* Add an extra test case for the empty list command
Fixes the issue with code coverage
This isn't a rework of the concretizer but it speeds things up a LOT.
The main culprits were:
1. Variant code, `provider_index`, and `concretize.py` were calling
`spec.package` when they could use `spec.package_class`
- `spec.package` looks up a package instance by `Spec`, which requires a
(fast-ish but not that fast) DAG compare.
- `spec.package_class` just looks up the package's class by name, and you
should use this when all you need is metadata (most of the time); see the sketch below.
- not really clear that the current way packages are looked up is
necessary -- we can consider refactoring that in the future.
2. `Repository.repo_for_pkg` parses a `str` argument into a `Spec` when
called with one, via `@_autospec`, but this is not needed.
- Add some faster code to handle strings directly and avoid parsing
This speeds up concretization 3-9x in my limited tests. Still not super
fast but much more bearable:
Before:
- `spack spec xsdk` took 33.6s
- `spack spec dealii` took 1m39s
After:
- `spack spec xsdk` takes 6.8s
- `spack spec dealii` takes 10.8s
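A minimal sketch of the substitution in item 1, using a hypothetical helper that only needs package metadata (`spec.package_class` and its class-level `variants` dict are assumed to be available):
```python
def variant_names(spec):
    # Old: spec.package instantiated a package and required a DAG-compare
    # lookup just to read class-level metadata.
    #     return list(spec.package.variants.keys())

    # New: spec.package_class looks the class up by name, which is all we
    # need when we only want metadata such as the declared variants.
    return list(spec.package_class.variants.keys())
```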
* Do not call setup_package for fake installs
- `setup_package` could fail if ``setup_dependent_environment`` or other
routines expected to use executables from dependencies
- petsc and boost try to get python config variables in
`setup_dependent_package`; this would cause them not to be
fake-installable
* Remove vestigial deptype_query argument to Spec.traverse()
- The `deptype_query` argument isn't used anymore -- it's only passed
around and causes confusion when calling traverse.
- Get rid of it and just keep the `deptypes` argument
* Don't print redundant messages when installing dependencies
- `do_install()` was originally depth-first recursive, and printed "<pkg>
already installed in ..." multiple times for packages as recursive
calls encountered them.
- For much cleaner output, use spec.traverse(order='post') to install
dependencies instead
* Add package for aspell and ass't dictionaries
Add a package definition for aspell.
Add a handful of dictionaries to convince myself that the support for
a bunch of dictionaries works.
* Flake8 cleanup
* Use six's version of urlparse
`urlparse` is not python3 friendly. This works around it (stolen from
`.../cmd/md5.py`).
* Fix incorrect trimming regexp
* Clean up dictionary build
- more parsimonious use of `which` (`make()` has already been made)
- use `sh` instead of `bash`
* Use a helper method to generate info for variants
I figured out my issues with static methods. I *think* that this
is pythonic.
* Convert aspell to an extendable package
Convert aspell to be extendable and rework the dictionaries to be
extensions.
As it stands, there's a great deal of cut and paste in the
dictionaries, I'll abstract that out next.
The {de,}activate methods copy a great deal of code out of
package.py. Perhaps there's a better way....
* Create AspellDictPackage and use it for the dictionaries
Reduce the repeated code, pull it into a base class.
I'm confused about why 'from spack import *' wasn't more useful in the
base class.
* Oops, -de & -es should be AspellDictPackages too
* Typo: pakcage -> package
* Address some commentary
* Update copyright dates, 2016->2017
* spec and spec.package.spec can refer to different objects in the
database. When these two instances of spec differ in terms of
the value of the 'concrete' property, Spec._mark_concrete can
fail when checking Spec.package.installed (which requires
package.spec to be concrete). This skips the check for
spec.package.installed when _mark_concrete is called with
'True' (in other words, when the database is marking all specs
as being concrete).
* add test to confirm this fixes #5293
* edits to address issues where spack concretization attempts to set properties on already-installed specs
* most added checks only need to check if the spec is concrete; they don't also need to check if the package is installed
* add test to ensure that patches are not applied to an installed spec
* add test to ensure that an error is detected when a dependent requests a dependency constraint which conflicts with a requested installed dependency
Fixes #5455
All methods within setup_package use an EnvironmentModifications object
to control the environment. Those modifications are applied at the end
of setup_package. Module loads for the build environment need to be
done after the rest of the environment modifications are applied, as
otherwise Spack may unset variables set by those modules (for example
LD_LIBRARY_PATH).
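A minimal sketch of the ordering described above, assuming Spack's `EnvironmentModifications` API (its module path has moved between Spack versions) and a hypothetical `load_module` helper:
```python
# Sketch only: in older Spack this class lives in spack.environment,
# in newer versions in spack.util.environment.
from spack.environment import EnvironmentModifications


def setup_build_env(module_names, load_module):
    env = EnvironmentModifications()
    env.set('CC', 'cc')
    env.prepend_path('PATH', '/example/bin')  # hypothetical path

    # Apply Spack's own modifications first ...
    env.apply_modifications()

    # ... and only then load the build-time modules, so variables they set
    # (e.g. LD_LIBRARY_PATH) are not clobbered by the modifications above.
    for name in module_names:
        load_module(name)
```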
closes #2884, closes #4684
In #1848 we decided to use `Spec.format` to expand certain tokens in
the module file naming scheme or in the environment variable name.
Not all the tokens that are allowed in `Spec.format` make sense in
module file generation. This PR restricts the set of tokens that can
be used, and adds tests to check that the intended behavior is respected.
Additionally, the names of environment variables set/modified by module
files were, up to now, always uppercase. There are packages though that
require case sensitive variable names to honor certain behaviors (e.g.
OpenMPI). This PR restricts the uppercase transformation in variable
names to `Spec.format` tokens.
fixes #5587
In trying to preserve patch ordering, #5476 made equality inconsistent
for the added 'patches' variant. This commit maintains the original
weak ordering of patch applications while preserving consistency of
comparisons. The ordering DOES NOT enter the hashing mechanism. It's
supposed to be a hotfix, while we think of a cleaner and more-permanent
solution.
- This steals the magic regular expressions that CTest uses to parse log
files and adds them to Spack. See here:
https://github.com/Kitware/CMake/blob/master/Source/CTest/cmCTestBuildHandler.cxx
These are BSD licensed, so the port is in `external/ctest_log_parser.py`
- We currently use these to do better filtering of errors from build
output. Plan is to use them to generate good CDash output.
`spack blame` prints out the contributors to a package.
By modification time:
```
$ spack blame --time llvm
LAST_COMMIT LINES % AUTHOR EMAIL
3 days ago 2 0.6 Andrey Prokopenko <andrey.prok@gmail.com>
3 weeks ago 125 34.7 Massimiliano Culpo <massimiliano.culpo@epfl.ch>
3 weeks ago 3 0.8 Peter Scheibel <scheibel1@llnl.gov>
2 months ago 21 5.8 Adam J. Stewart <ajstewart426@gmail.com>
2 months ago 1 0.3 Gregory Becker <becker33@llnl.gov>
3 months ago 116 32.2 Todd Gamblin <tgamblin@llnl.gov>
5 months ago 2 0.6 Jimmy Tang <jcftang@gmail.com>
5 months ago 6 1.7 Jean-Paul Pelteret <jppelteret@gmail.com>
7 months ago 65 18.1 Tom Scogland <tscogland@llnl.gov>
11 months ago 13 3.6 Kelly (KT) Thompson <kgt@lanl.gov>
a year ago 1 0.3 Scott Pakin <pakin@lanl.gov>
a year ago 3 0.8 Erik Schnetter <schnetter@gmail.com>
3 years ago 2 0.6 David Beckingsale <davidbeckingsale@gmail.com>
3 days ago 360 100.0
```
Or by percent contribution:
```
$ spack blame --percent llvm
LAST_COMMIT LINES % AUTHOR EMAIL
3 weeks ago 125 34.7 Massimiliano Culpo <massimiliano.culpo@epfl.ch>
3 months ago 116 32.2 Todd Gamblin <tgamblin@llnl.gov>
7 months ago 65 18.1 Tom Scogland <tscogland@llnl.gov>
2 months ago 21 5.8 Adam J. Stewart <ajstewart426@gmail.com>
11 months ago 13 3.6 Kelly (KT) Thompson <kgt@lanl.gov>
5 months ago 6 1.7 Jean-Paul Pelteret <jppelteret@gmail.com>
3 weeks ago 3 0.8 Peter Scheibel <scheibel1@llnl.gov>
a year ago 3 0.8 Erik Schnetter <schnetter@gmail.com>
3 years ago 2 0.6 David Beckingsale <davidbeckingsale@gmail.com>
3 days ago 2 0.6 Andrey Prokopenko <andrey.prok@gmail.com>
5 months ago 2 0.6 Jimmy Tang <jcftang@gmail.com>
2 months ago 1 0.3 Gregory Becker <becker33@llnl.gov>
a year ago 1 0.3 Scott Pakin <pakin@lanl.gov>
3 days ago 360 100.0
```
- A package can depend on a special patched version of its dependencies.
- The `Spec` YAML now includes the sha256 of the patch, which therefore
changes the spec's hash.
- The special patched version will be built separately from a "vanilla"
version of the same package.
- This allows packages to maintain patches on their dependencies
without affecting either the dependency package or its dependents.
This could previously be accomplished with special variants, but
having to add variants means the hash of the dependency changes
frequently when it really doesn't need to. This commit allows the
hash to change *just* for dependencies that need patches.
- Patching dependencies shouldn't be the common case, but some packages
(qmcpack, hpctoolkit, openspeedshop) do this kind of thing and it
makes the code structure mirror maintenance responsibilities.
- Note that this commit means that adding or changing a patch on a
package will change its hash. This is probably what *should* happen,
but we haven't done it so far.
- Only applies to `patch()` directives; `package.py` files (and their
`patch()` functions) are not hashed, but we'd like to do that in the
future.
- The interface looks like this: `depends_on()` can optionally take a
patch directive or a list of them:
depends_on(<spec>,
patches=patch(..., when=<cond>),
when=<cond>)
# or
depends_on(<spec>,
patches=[patch(..., when=<cond>),
patch(..., when=<cond>)],
when=<cond>)
- Previously, the `patch()` directive only took an `md5` parameter. Now
it only takes a `sha256` parameter. We restrict this because we want
to be consistent about which hash is used in the `Spec`.
- A side effect of hashing patches is that *compressed* patches fetched
from URLs now need *two* checksums: one for the downloaded archive and
one for the content of the patch itself. Patches fetched uncompressed
only need a checksum for the patch. Rationale:
- we include the content of the *patch* in the spec hash, as that is
the checksum we can do consistently for patches included in Spack's
source and patches fetched remotely, both compressed and
uncompressed.
- we *still* need the checksum of the downloaded archive, because we want
to verify the download *before* handing it off to tar, unzip, or
another decompressor. Not doing so is a security risk and leaves
users exposed to any arbitrary code execution vulnerabilities in
compression tools.
- Functions returned by directives were all called `_execute`, which made
reading stack traces hard because you couldn't tell what directive a
frame came from.
- renamed them all to `_execute_<directive>`
- Exceptions in directives were only really used in one or two places --
get rid of the boilerplate init functions and let the callsite specify
the message.
- move `spack.cmd.checksum.get_checksums` to `spack.util.web.spider_checksums`
- move `spack.error.NoNetworkError` to `spack.util.web.NoNetworkError` since
it is only used there.
- Previously, dependencies and dependency_types were stored as separate
dicts on Package.
- This means a package can only depend on another in one specific way,
which is usually but not always true.
- Prior code unioned dependency types statically across dependencies on
the same package.
- New code stores dependency relationships as their own object, with a
spec constraint and a set of dependency types per relationship.
- Dependency types are now more precise
- There is now room to add more information to dependency relationships.
- New Dependency class lives in dependency.py, along with deptype
definitions that used to live in spack.spec.
Move deptype definitions to spack.dependency
* Add '--test=all' and '--test=root' options to test either the root and all of its dependencies, or just the root.
* add a test dependency type that is only used when --test is enabled.
* test dependencies are not added to the spec, but they are provided in the test environment.
closes #5473
Prior to this PR we were not exiting early for external packages, which
caused the `configure_options` property of the contexts to fail with
e.g. a key error because the DAG gets truncated for them. More
importantly Spack configure options don't make any sense for externals.
Now we exit early, and leave a message in the module file clarifying
that this package has been installed outside of Spack.
closes #5201
Currently, if a user sets an external package to have a prefix that is
one of the system paths (like '/usr') the module files that are
generated will prepend '/usr/bin' to 'PATH', etc. This is particularly
nasty at the time when a module file is unloaded, and e.g. paths like
'/usr/bin' will be discarded from PATH.
This PR solves the issue skipping system paths when a prefix inspection
is made to generate module files.
* Module files are now generated using a template engine (refs #2902, #3173)
jinja2 has been hooked into Spack.
The python module `modules.py` has been split into several modules
under the python package `spack/modules`. Unit tests stressing module
file generation have been refactored accordingly.
The module file generator for Lmod has been extended to support multiple
providers and deeper hierarchies.
* Improved the support for templates in module files.
Added an entry in `config.yaml` (`template_dirs`) to list all the
directories where Spack could find templates for `jinja2`.
Module file generators have a simple override mechanism to override
template selection ('modules.yaml' beats 'package.py' beats 'default').
* Added jinja2 and MarkupSafe to vendored packages.
* Spec.concretize() sets mutual spec-package references
The correct place to set the mutual references between spec and package
objects is at the end of concretization. After a call to concretize we
should now be ensured that spec is the same object as spec.package.spec.
Code in `build_environment.py` that was performing the same operation
has been turned into an assertion to be defensive on the new behavior.
* Improved code and data layout for modules and related tests.
Common fixtures related to module file generation have been extracted
in `conftest.py`. All the mock configurations for module files have been
extracted from python code and have been put into their own yaml file.
Added a `context_property` decorator for the template engine, to make
it easy to define dictionaries out of properties.
The default for `verbose` in `modules.yaml` is now False instead of True.
* Extendable module file contexts + short description from docstring
The contexts that are used in conjunction with `jinja2` templates to
generate module files can now be extended from package.py and
modules.yaml.
Module file generators now infer the short description from the package.py
docstring (as you might expect, it's the first paragraph).
* 'module refresh' regenerates all modules by default
`module refresh` without `--module-type` specified tries to
regenerate all known module types. The same holds true for `module rm`
Configure options used at build time are extracted and written into the
module files where possible.
* Fixed python3 compatibility, tests for Lmod and Tcl.
Added test for exceptional paths of execution when generating Lmod
module files.
Fixed a few compatibility issues with python3.
Fixed a bug in Tcl with naming_scheme and autoload + unit tests
* Updated module file tutorial docs. Fixed a few typos in docstrings.
The reference section for module files has been reorganized. The idea is
to have only three topics at the highest level:
- shell support + spack load/unload use/unuse
- module file generation (a.k.a. APIs + modules.yaml)
- module file maintenance (spack module refresh/rm)
Module file generation will cover the entries in modules.yaml
Also:
- Licenses have been updated to include NOTICE and extended to 2017
- docstrings have been reformatted according to Google style
* Removed redundant arguments to RPackage and WafPackage.
All the callbacks in `RPackage` and `WafPackage` that are not build
phases have been modified not to accept a `spec` and a `prefix`
argument. This makes it possible to leverage the common `configure_args`
signature to insert the configuration arguments into the generated
module files by default. I think this is preferable to handling those
packages differently from `AutotoolsPackage`. Besides, only one package
seems to override one of these methods.
* Fixed broken indentation + improved resiliency of refresh
Fixed broken indentation in `spack module refresh` (probably a rebase
gone silently wrong?). Filter the writers for blacklisted specs before
searching for name clashes. An error with a single writer will not
stop regeneration, but instead will print a warning and continue
the command.
- converted `log_path` and `env_path` to properties of PackageBase.
- InstallErrors in build_environment are now annotated with the package
that caused them, in the 'pkg' attribute.
- Add `--show-log-on-error` option to `spack install` that catches
InstallErrors and prints the log to stderr if it exists.
Note that adding a reference to the Package allows a lot of stuff
currently handled by do_install() and build_environment to be handled
externally.
- '\b' in regular expression needs to be in a raw string (r'\b')
- Regression test that would've caught this was unintentionally disabled
- This fixes the string and the test
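A short illustration of the raw-string point above (plain Python behavior, nothing Spack-specific is assumed):
```python
import re

pattern = '\bfoo\b'    # here '\b' is a literal backspace character
raw     = r'\bfoo\b'   # the raw string keeps the backslash, so re sees a word boundary

assert re.search(pattern, 'foo') is None      # backspaces never match
assert re.search(raw, 'foo') is not None      # word boundaries do
```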
The correct place to set the mutual references between spec and
package objects is at the end of concretization. After a call to
concretize we should now be ensured that spec is the same object
as spec.package.spec.
Code in `build_environment.py` that was performing the same
operation has been turned into an assertion to be defensive on
the new behavior.
- Fixes bugs where concretization would fail due to an erroneously cached
_concrete attribute.
- Ripped out a bunch of code in spec.py that isn't needed/valid anymore:
- The various concrete() methods on different types of Specs would
attempt to statically compute whether the Spec was concrete.
- This dates back to when DAGs were simpler and there were no optional
dependencies. It's actually NOT possible to compute statically
whether a Spec is concrete now. The ONLY way you know is if it goes
through concretization and is marked concrete once that completes.
- This commit removes all simple concreteness checks and relies only on
the _concrete attribute. This should make thinking about
concreteness simpler.
- Fixed a couple places where Specs need to be marked concrete explicitly.
- Specs read from files and Specs that are destructively copied from
concrete Specs now need to be marked concrete explicitly.
- These spots may previously have "worked", but they were brittle and
should be explicitly marked anyway.
- Dependencies in concrete specs did not previously have their cache
fields (_concrete, _normal, etc.) preserved.
- _dup and _dup_deps weren't passing each other enough information to
preserve concreteness properly, so only the root was properly
preserved.
- cached concreteness is now preserved properly for the entire DAG, not
just the root.
- added method docs.
Fixes #4112
This commit extends the AutotoolsPackage methods `with_or_without` and
`enable_or_disable` to support bool-valued variants. It also adds a
convenience shortcut for the case where the activation value is the
prefix of a dependency (as in `--with-{pkg}={prefix}`); see the sketch
below.
This commit also includes:
* Updates to viennarna and adios accordingly: they have been modified to
use `enable_or_disable` and `with_or_without`
* Improved docstrings in `autotools.py`. Raise `KeyError` if name is
not a variant.
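A minimal sketch of how a package might use these helpers; the package, its variants, and the `activation_value='prefix'` spelling are illustrative assumptions, not taken from a real package:
```python
from spack import *


class Example(AutotoolsPackage):
    """Hypothetical package using enable_or_disable/with_or_without."""
    # url, versions, etc. omitted for brevity

    variant('shared', default=True, description='Build shared libraries')
    variant('mpi', default=False, description='Enable MPI support')
    depends_on('mpi', when='+mpi')

    def configure_args(self):
        args = []
        # bool-valued variant -> --enable-shared / --disable-shared
        args.extend(self.enable_or_disable('shared'))
        # prefix shortcut -> --with-mpi=<mpi prefix> / --without-mpi
        args.extend(self.with_or_without('mpi', activation_value='prefix'))
        return args
```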
Renames the existing bootstrap command to 'clone'. Repurposes
'spack bootstrap' to install packages that are useful to the
operation of Spack (for now this is just environment-modules).
For bash and ksh users running setup-env.sh, if a Spack-installed
instance of environment-modules is detected and environment modules
and dotkit are not externally available, Spack will define the
'module' command in the user's shell to use the environment-modules
built by Spack.
First, quote the environment variable values. Second, export the
variables. Sorry, this is Bourne-shell syntax. Happy to consider a
shell-independent way to do this, but spack is already using sh-like
"env=value"
* Added support to query packages by tags.
- The querying commands `spack list`, `spack find` and `spack info` have
been modified to support querying by tags. Tests have been added to
check that the feature is working correctly under what should be the
most frequent use cases.
* Refactored Repo class to make insertion of new file caches easier.
- Added the class FastPackageChecker. This class is a Mapping from
package names to stat info, that gets memoized for faster access.
- Extracted the creation of a ProviderIndex to its own factory function.
* Added a cache file for tags.
- Following what was done for providers, a TagIndex class has been added.
This class can serialize and deserialize objects from json. Repo and
RepoPath have a new method 'packages_with_tags', that uses the TagIndex
to compute a list of package names that have all the tags passed as
arguments.
On Ubuntu 14.04 the cache reduces the time for spack list
from ~3 sec. to ~0.3 sec. once it has been built.
* Fixed colorization of `spack info`
This command broke after #5109. It was using the default value for the
"dirty" argument in `setup_package`. Now it adopts the same logic as
in `spack install`. Changed help for '--clean' and '--dirty'.
Improved coverage of spack env.
The private method `Spec._dup` was missing a line (when setting compiler
flags the parent spec was not set to `self`). This resulted in
an inconsistent state of the duplicated Spec. This problem has been
fixed here. The docstring of `Spec._dup` has been updated.
This change is done to avoid inconsistencies during refactoring. The rationale is that functions at different levels in the call stack all define a default for the 'dirty' argument. This PR removes the default value for all the functions except the top-level one (`PackageBase.do_install`).
In this way not defining 'dirty' will result in an error, instead of the default value being used. This will reduce the risk of having an inconsistent behavior after a refactoring.
* Respect --insecure when fetching list_url.
* Ensure support for Python 2.6, and that urlopen works for python versions prior to 2.7.9 and between 3.0 and 3.4.3.
* Simplified Spec.__init__ signature by removing the *dep_like argument.
The `*dep_like` argument of `Spec.__init__` is used only for tests. This
PR removes it from the call signature and introduces an equivalent
fixture to be used in tests.
* Refactored ``spec_from_dict`` to be a static method of ``Spec``
The fixture ``spec_from_dict`` has been refactored to be a static method
of ``Spec``. Test code has been updated accordingly. Added tests for
exceptional paths.
* Renamed argument `unique` to `normal` + added LazySpecCache class
As requested in the review the argument `unique` of `Spec.from_literal`
has been renamed to `normal`. To avoid eager evaluations of
`Spec(spec_like)` expressions a subclass of `collections.defaultdict`
has been introduced.
* Spec object can be keys of the dictionary for a spec literal.
Added back the possibility to use a spec directly as a key. This makes
it possible to build DAGs that are partially normalized.
- Update handling of ChildError so that its output is capturable from a
SpackCommand
- Update cmd/install test to make sure Python and build log output is
being displayed properly.
- install and probably other commands were designed to run once, but now
we can test them from within Spack with SpackCommand
- cmd/install.py assumed that it could modify do_install in PackageBase
and leave it that way; this makes the decorator temporary
- package.py didn't properly initialize its stage if the same package had
been built successfully before (and the stage removed).
- manage stage lifecycle better and remember when Package needs to
re-create the stage
- If a failure comes from an external command and NOT the Python code,
display errors highlighted with some context.
- Add some rudimentary support for parsing errors out of the build log
(not very sophisticated yet).
- Build errors in Python code will still display with Python context.
Users can now add an optional custom message to the conflicts directive.
Layout on screen has been changed to improve readability and the long
spec is shown in tree format. Two conflicts in `espresso` have been
modified to showcase the feature.
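A small sketch of the extended directive; the package and the spec strings are hypothetical:
```python
from spack import *


class Example(Package):
    """Hypothetical package showing a conflict with a custom message."""

    variant('openmp', default=False, description='Enable OpenMP support')

    conflicts('%intel@:14', when='+openmp',
              msg='OpenMP support requires intel@15 or newer')
```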
- Python I/O would not properly interleave (or appear) with output from
subcommands.
- Add a flushing wrapper around sys.stdout and sys.stderr when
redirecting, so that Python output is synchronous with that of
subcommands.
- 'v' toggle was previously only good for the current install.
- subsequent installs needed user to press 'v' again.
- 'v' state is now preserved across dependency installs.
- Previously we would use `os._exit()` to avoid Spack error handling
in the parent process when build processes failed. This isn't
necessary anymore since build processes propagate their exceptions to
the parent process.
- Use `sys.exit` instead of `os._exit`. This has the advantage of
automatically flushing output streams on quit, so output from child
processes is not lost when Spack exits.
- Simplify interface to log_output. New interface requires only one
context handler instead of two. Before:
with log_output('logfile.txt') as log_redirection:
with log_redirection:
# do things ... output will be logged
After:
with log_output('logfile.txt'):
# do things ... output will be logged
If you also want the output to be echoed to ``stdout``, use the
`echo` parameter::
with log_output('logfile.txt', echo=True):
# do things ... output will be logged and printed out
And, if you just want to echo *some* stuff from the parent, use
``force_echo``:
with log_output('logfile.txt', echo=False) as logger:
# do things ... output will be logged
with logger.force_echo():
# things here will be echoed *and* logged
A key difference between this and the previous implementation is that
*everything* in the context handler is logged. Previously, things like
`Executing phase 'configure'` would not be logged, only output to the
screen, so understanding phases in the build log was difficult.
- The implementation of `log_output()` is different in two major ways:
1. This implementation avoids race cases by using only one pipe (before
we had a multiprocessing pipe and a unix pipe). The logger daemon
stops naturally when the input stream is closed, which avoids a race
in the previous implementation where we'd miss some lines of output
because the parent would shut the daemon down before it was done
with all output.
2. Instead of turning output redirection on and off, which prevented
some things from being logged, this version uses control characters
in the output stream to enable/disable forced echoing. We're using
the time-honored xon and xoff codes, which tell the daemon to echo
anything between them AND write it to the log. This is how
`logger.force_echo()` works.
- Fix places where output could get stuck in buffers by flushing more
aggressively. This makes the output printed to the terminal the same
as that which would be printed through a pipe to `cat` or to a file.
Previously these could be weirdly different, and some output would be
missing when redirecting Spack to a file or pipe.
- Simplify input and color handling in both `build_environment.fork()`
and `llnl.util.tty.log.log_output()`. Neither requires an input_stream
parameter anymore; we assume stdin will be forwarded if possible.
- remove `llnl.util.lang.duplicate_stream()` and remove associated
monkey-patching in tests, as these aren't needed if you just check
whether stdin is a tty and has a fileno attribute.
- Fix issue with color formatting regular expression.
- _separators regex in spec.py could be constructed such that '^' came
first in the character matcher, e.g. '[^@#/]'. This inverts the match
and causes transient KeyErrors.
- Fixed to escape all characters in the constructed regex.
- This bug comes up in Python3 due to its more randomized hash iteration
order, but it could probably also happen in a Python 2 implementation.
- also clean up variable docstrings in spec.py
- Mac OS X Sierra has no /usr/include by default
- Instead of assuming there's an include directory in /usr, mock up a directory that looks like we expect.
This adds sbang hook support for node-js and fixes the sbang filter
for lua (the character class exclusion was swallowing newlines and
reporting a false positive if lua was mentioned anywhere in the
file).
* Docs: Travis-CI Workflow
Add a workflow on how to use spack on Travis-CI.
Future Work:
depending on whether and how we can simplify #5101:
add a multi-compiler, multi-C++-standard, multi-software
build matrix example
* Fix Typos
* Colorize spack info. Adds prominence to preferred version. fixes #2708
This uses 'llnl.util.tty.color' to colorize the output of 'spack info'.
It also displays versions in the order the concretizer would choose
them and shows the preferred version on a line of its own, in bold.
* Modified output according to Adam and Denis reviews.
Section titles are now bold + blue rather than bold + black. Added a new
section named "Preferred version", which prints the preferred version
in bold characters.
* Further modifications according to Adam and Denis reviews.
After "Homepage:" we now have a single space. Removed newline after each
variant. Preferred version is not in bold fonts anymore. Added a simple
test that just runs the command.
* Improved error message for unsatisfiable specs. fixes #5066
This PR improves the error message for unsatisfiable specs by showing in tree format both the spec that cannot satisfy the constraint and the spec that asked for that constraint. After that follows a readable error message.
This PR allows additional unused properties at the top-level of the config.yaml file. Having these properties makes it possible to use two different versions of Spack, one of which adds a new property, without receiving error messages due to the presence of this new property in a configuration cache stored in the user's home.
This fixes a syntax error in the index.html file generated by the
"spack buildcache" command when creating build caches. This also
fixes support for installing unsigned binaries.
* Refactor IntelInstaller into IntelPackage base class
* Move license attributes from __init__ to class-level
* Flake8 fixes: remove unused imports
* Fix logic that writes the silent.cfg file
* More specific version numbers for Intel MPI
* Rework logic that selects components to install
* Final changes necessary to get intel package working
* Various updates to intel-parallel-studio
* Add latest version of every Intel package
* Add environment variables for Intel packages
* Update env vars for intel package
* Finalize components for intel-parallel-studio package
Adds a +tbb variant to intel-parallel-studio.
The tbb package was renamed to intel-tbb.
Now both intel-tbb and intel-parallel-studio+tbb
provide tbb.
* Overhaul environment variables set by intel-parallel-studio
* Point dependent packages to the correct MPI wrappers
* Never default to intel-parallel-studio
* Gather env vars by sourcing setup scripts
* Use mpiicc instead of mpicc when using Intel compiler
* Undo change to ARCH
* Add changes from intel-mpi to intel-parallel-studio
* Add comment explaining mpicc vs mpiicc
* Prepend env vars containing 'PATH' or separators
* Flake8 fix
* Fix bugs in from_sourcing_file
* Indentation fix
* Prepend, not set if contains separator
* Fix license symlinking broken by changes to intel-parallel-studio
* Use comments instead of docstrings to document attributes
* Flake8 fixes
* Use a set instead of a list to prevent duplicate components
* Fix MKL and MPI library linking directories
* Remove +all variant from intel-parallel-studio
* It is not possible to build with MKL, GCC, and OpenMP at this time
* Found a workaround for locating GCC libraries
* Typos and variable names
* Fix initialization of empty LibraryList
Adds the "buildcache" command to spack. The buildcache command is
used to create gpg signatures for archives of installed spack
packages; the signatures and archives are placed together in a
directory that can be added to a spack mirror. A user can retrieve
the archives from a mirror and verify their integrity using the
buildcache command. It is often the case that the user's Spack
instance is located in a different path compared to the Spack
instance used to generate the package archive and signature, so
this includes logic to relocate the RPATHs generated by Spack.
The action `CleanOrDirtyAction` has been added. It sets the default
value for `dest` to `spack.dirty`, and changes it according to the flags
passed via command line. Added unit tests to check that the arguments
are parsed correctly. Removed lines in `PackageBase` that were setting
the default value of dirty according to what was in the configuration.
Popen.communicate outputs a str object for python2 and a bytes
object for python3. This updates the Executable.__call__ function
to call .decode on the output of Popen.communicate only for python3.
This ensures that Executable.__call__ returns a str for python2 and
python3.
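A minimal sketch of the decode logic (standard library only; the real call site inside `Executable.__call__` may differ):
```python
import subprocess
import sys


def run(cmd):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    # communicate() returns str on Python 2 and bytes on Python 3;
    # decode only on Python 3 so callers always receive a str.
    if sys.version_info >= (3, 0):
        out = out.decode('utf-8')
    return out
```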
fixes #4236, fixes #5002
When a package is defined in more than one repository,
RepoPath.dirname_for_package_name may return the path
to either definition. This sidesteps that ambiguity by
accessing the module associated with the package definition.
* Merged 'purge' command with 'clean'. Deleted 'purge'. fixes #2942
'spack purge' has been merged with 'spack clean'. Documentation has been
updated accordingly. The 'clean' and 'purge' behaviors are not mutually
exclusive, and they log brief information to tty as they go.
* Fixed a wrong reference to spack clean in the docs
* Added tests for 'spack clean'. Updated bash completion.
* Typo fixes in docstrings.
* Let OS classes know if the paths they get were explicitly specified by the user.
* Fixed regexp for cray compiler version matching.
* Replaced LinuxDistro with CrayFrontend for the Cray platform's frontend.
Fixes #4898
Constraints that were supposed to be conditionally activated for
specified values of a single-valued variant were being activated
unconditionally in the case that the variant was associated with
an implicit dependency. For example if X->Y->Z and Y places a
conditional constraint on Z for a given single-valued variant on
Y, then it would have been applied unconditionally when
concretizing X.
* Add a QMakePackage base class
* Fix sqlite linking bug in qt-creator
* Add latest version of qt-creator
* Add latest version of qwt
* Use raw strings for regular expressions
* Increase minimum required version of qt
* Add comment about specific version of sqlite required
* Fixes for latest version of qwt and qt-creator
* Older versions of Qwt only work with older versions of Qt
* Fix crashes when running spack install under nohup
Fixes #4919
For reasons that I do not entirely understand, duplicate_stream() throws
an '[Errno 22] Invalid argument' exception when it tries to
`os.fdopen()` the duplicated file descriptor generated by
`os.dup(original.fileno())`. See spack/llnl/util/lang.py, line
394-ish.
This happens when run under `nohup`, which supposedly has hooked
`stdin` to `/dev/null`.
It seems like opening and using `devnull` on the `input_stream` in
this situation is a reasonable way to handle the problem.
* Be more specific about error being handled.
Only catch the specific error that happens when trying to dup
the stdin that nohup provides.
Catching e as a StandardError and then
`type(e).__name__` tells me that it's an OSError.
Printing e.errno tells me that it's 22
Double checking tells me that 22 is EINVAL.
Phew.
- Remove `special_types` dict in spec.py, as only 'all' is still used
- Still allow 'all' to be used as a deptype
- Simplify `canonical_deptype` function
- Clean up args in spack graph
- Add tests
For packages which contain a mix of versions with formats X.Y and
X.Y.Z, if the user entered an X.Y version as a preference in
packages.yaml, Spack would get confused and favor any version A.B.Z
where X=A and Y=B. In the case where there is a mix of these version
types, this commit updates preferences so Spack will favor an exact
match.
* Disable spec colorization when redirecting stdout and add command line flag to re-enable
* Add command line `--color` flag to control output colorization
* Add options to `llnl.util.tty.color` to allow color to be auto/always/never
* Add `Spec.cformat()` function to be used when `format()` should have auto-coloring
* Add universal build_type variant to CMakePackage
* Override build_type in some packages with different possible values
* Remove reference to no longer existent debug variant
* Update CBTF packages with new build_type variant
* Keep note on build size of LLVM
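A sketch of what the universal variant and a package-level override might look like; the default and the exact value list are assumptions mirroring CMake's standard build types:
```python
from spack import *


class Example(CMakePackage):
    """Hypothetical package restricting the universal build_type variant."""

    # CMakePackage itself declares something roughly like:
    #   variant('build_type', default='RelWithDebInfo',
    #           description='CMake build type',
    #           values=('Debug', 'Release', 'RelWithDebInfo', 'MinSizeRel'))
    # A package can override it to restrict the allowed values:
    variant('build_type', default='Release',
            description='CMake build type',
            values=('Release', 'Debug'))
```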
* Change version.up_to() to return Version() object
* Add unit tests for Version.up_to()
* Fix packages that expected up_to() to return a string
* Ensure that up_to() preserves separator characters
* Use version indexing instead of up_to
* Make all Version formatting properties return Version objects
* Update docs
* Tests need to test string representation
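A small illustration of the up_to() change described in this list; the values shown are what the change intends, not a captured transcript:
```python
from spack.version import Version

v = Version('3.6.1')
two = v.up_to(2)
assert isinstance(two, Version)   # no longer a plain str
assert str(two) == '3.6'          # str() when a string really is needed
assert v[:2] == two               # indexing/slicing as an alternative to up_to
```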
Adds a SpackCommand class allowing Spack commands to be easily invoked from Python.
Example usage:
from spack.main import SpackCommand
info = SpackCommand('info')
out, err = info('mpich')
print(info.returncode)
This allows easier testing of Spack commands.
Also:
* Simplify command tests
* Simplify mocking in command tests.
* Simplify module command test
* Simplify python command test
* Simplify uninstall command test
* Simplify url command test
* SpackCommand uses more compatible output redirection
* Initial work on flag trapping using functions called <flag>_handler and default_flag_handler
* Update packages so they do not obliterate flags
* Added append to EnvironmentModifications class
* changed EnvironmentModifications to have append_flags method
* changed flag_val to be a tuple
* Increased test coverage
* added documentation of flag handling
* Change path to CMakeLists.txt to be relative to root, not pwd
* Changes requested during code review
* Revert back to old naming of root_cmakelists_dir
* Make relative directory more clear in docs
* Revert change causing build_type AttributeError
* Fix forgotten abs_path var
* Update CLHEP with new relative path
* Update more packages with new root_cmakelists_dir syntax
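A minimal sketch of the new syntax, for a hypothetical package whose CMakeLists.txt lives in a src/ subdirectory of the source root:
```python
from spack import *


class Example(CMakePackage):
    """Hypothetical package whose top-level CMakeLists.txt is not at the root."""
    # url, versions, etc. omitted for brevity

    # Interpreted relative to the root of the expanded sources,
    # not relative to the current working directory of the build.
    root_cmakelists_dir = 'src'
```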
- Lock test can be run either as a node-local test or as an MPI test.
- Lock test is now parametrized by filesystem, so you can test the
locking capabilities of your NFS, Lustre, or GPFS filesystem. See docs
for details.
* mv variants: packages are now needed only during normalization
The relationships among different types of variants have been weakened,
in the sense that now it is permitted to compare MV, SV and BV among
each other. The mechanism that permits this is an implicit conversion
of the variant passed as argument to the type of the variant asking
to execute a constrain, satisfies, etc. operation.
* abstract variant: added a new type of variant
An abstract variant is like a multi valued variant, but behaves
differently on "satisfies" requests, because it will reply "True"
to requests that **it could** satisfy eventually.
Tests have been modified to reflect the fact that abstract variants
are now what get parsed from expressions like `foo=bar` given by users.
* Removed 'concrete=' and 'normal=' kwargs from Spec.__init__
These two keyword arguments were only used in one test module to force
a Spec to 'appear' concrete. I suspect they are just a leftover from
another refactoring, as now there's the private method '_mark_concrete'
that does essentially the same job. Removed them to reduce a bit the
clutter in Spec.
* Moved yaml related functions from MultiValuedVariant to AbstractVariant.
This is to fix the errors that are occurring in epfl-scitas#73, and that
I can't reproduce locally.
* Parse modules in a way that works for both lmod and tcl
* added test and made method more robust
* refactoring for pythonic clarity
* Improved detection of 'module' shell function + refactored module utilities into spack.util.module_cmd
* Improved regex to reject nested parentheses we are not prepared to handle
* make tests backwards compatible with python 2.6
* Improved regex to account for sh being aliased to bash and used in bash module definition on some systems
* Improve test compatibility with lmod
* Added error for None module_cmd
* Add test for get_module_cmd_from_which()
Add test for get_module_cmd_from_which().
Add -c argument to Popen call to typeset -f module in get_module_cmd_from_bash().
* Increased detection options
Included BASH_FUNC_module() variable outside of typeset as a detection option
This should work on bash even in restricted_shell mode
Kept the typeset detection as an option in case the module function is not exported in bash
Also added try statements to tests, with environment recreation in finally blocks.
* More tests added; some hackiness
* increased test coverage for util/module_cmd
* Code changes to enable system config scope in /etc
Files will go in either /etc/spack or /etc/spack/<platform>
Required minor changes to conftest.
* Updated documentation to match new config scope
- previous code called `which` on $EDITOR, but that doesn't work for
EDITORs like `emacs -nw` or `emacsclient -t -nw`.
- This patch just trusts EDITOR if it is set (same as previous
behavior), and only uses the defaults if it's not.
* issue 4492: added xfailing test, added owner to DependencyMap
* DependencyMap.concrete checks if we have unconditional dependencies
This fixes #4492 and #4107 using some heuristics to avoid an infinite
recursion among Spec.satisfies, Spec.concrete and DependencyMap.concrete
The idea, as suggested by @becker33, is to check just for unconditional
dependencies. This is not covering the whole space and a package with
just conditional dependencies can still fail in the same way. It should
cover though all the **real** packages we have in our repo so far.
* Check for CRAYPE_VERSION instead of path
Architecture tests would fail on Cray since it would not find
the expected path. To make the test work correctly on Cray, search
for the CRAYPE version instead.
* Catch SystemExit error in case flake8 not in path
On shared systems, having flake8 available can involve starting one's own virtual env.
Skip the test if no flake8 is found to avoid failure reporting.
* Add compatibility to 1.5 svnadmin create
The flag added is needed to correctly create svn repos on NERSC systems.
This could be unnecessary for other sites. I'd like to see others
test before this change gets merged.
- Skip spack flake8 test when flake8 is not installed.
- Fix parsing of dashes in specs broken by new help parser.
- use argparse.REMAINDER instead of narg='?'
- don't interpret parts of specs like -mpi as arguments.
* Initial version of the namd package
* Modified charm to consider compiling against intel/intel-mpi
* Corrected namd to compile with intel-mkl and the Intel compiler
* Adding include64 in the prefix
* adding property for the build directory
* removing useless function build
* During install, remove prior unfinished installs
If a user performs an installation which fails, in some cases the
install prefix is still present, and the stage path may also be
present. With this commit, unless the user specifies
'--keep-prefix', installs are guaranteed to begin with a clean
slate. The database is used to decide whether an install finished,
since a database record is not added until the end of the install
process.
* test updates
* repair_partial uses keep_prefix and keep_stage
* use of mock stage object to ensure that stage is destroyed when it should be destroyed (and otherwise not)
* add --restage option to 'install' command; when this option is not set, the default is to reuse a stage if it is found.
- Add a `spack gpg` subcommand in anticipation of signed binaries.
- GPG keys are stored in var/spack/gpg, and the spack gpg command manages them.
- Docs are included on the command.
* Touch up string expansion.
I'm chasing this:
```
$ (module purge; spack install perl %gcc/5.4.0)
==> Error: No installed spec matches the hash: '%s'
```
There's something deeper going on, but the error message isn't helpful.
After this change it tells me this:
```
$ (module purge; spack install perl %gcc/5.4.0)
==> Error: No installed spec matches the hash: '5.4.0'
```
Which is weird because `5.4.0` is not a hash... Whatever is going on here, the error message needs to be fixed.
* Flake8 whitespace
* fix parser
* Removed xfails
* cleaned up debug print statements
* make use of these changes in gcc
* Added comment explaining unreachable line, line left for added protection
* Sphinx no longer supports Python 2.6
* Update vendored sphinxcontrib.programoutput from 0.9.0 to 0.10.0
* Documentation cannot be built in parallel
* Let Travis install programoutput for us
* Remove vendored sphinxcontrib-programoutput
Recent updates to the sphinx package prevent the vendored version
from being found in sys.path. We don't vendor sphinx, so it doesn't
make sense to vendor sphinxcontrib-programoutput either.
PR #3367 inadvertently changed the semantics of _find_recursive and
_find_non_recursive so that the returned lists were not ordered like the
input search list. This commit restores the original semantics and adds
tests to verify it.
Added DFLAGS to the `make.inc` file being written.
These macros are also added to the language specific variables
like CFLAGS, CXXFLAGS and FCFLAGS. Replaced `spec.satisfies('foo')`
with `'foo' in spec` in `intel-mkl`, see #4135. Added a basic
build interface to `intel-mpi`.
It seems that parse_anonymous_spec may fail if more than one part
(variant, version range, etc.) is given to the function. Added tests to
code against, to fix the problem in #4144.
- Full help is now only generated lazily, when needed.
- Executing specific commands doesn't require loading all of them.
- All commands are only loaded if we need them for help.
- There is now short and long help:
- short help (spack help) shows only basic spack options
- long help (spack help -a) shows all spack options
- Both divide help on commands into high-level sections
- Commands now specify attributes from which help is auto-generated:
- description: used in help to describe the command.
- section: help section
- level: short or long
- Clean up command descriptions
- Add a `spack docs` command to open full documentation
in the browser.
- move `spack doc` command to `spack pydoc` for clarity
- Add a `spack --spec` command to show documentation on
the spec syntax.
* SV variants are evaluated correctly in `when=` statements fixes #4113
The problem here was tricky:
```python
spec.satisfies(other)
```
already changes the MV variants in `other` into SV variants (where
necessary) if `spec` is concrete. If it is not concrete it does
nothing, because we may be acting at a purely syntactical level.
When evaluating a `when=` keyword, `spec` is certainly not concrete,
as it is in the middle of the concretization process. In this case we
have to trigger the substitution in `other` manually, so we don't end up
comparing an MV variant "foo=bar" to an SV variant "foo=bar" and getting
False back, which is wrong.
* sv variants: improved error message for typos in "when=" statements
Modifications:
- added support for multi-valued variants
- refactored code related to variants into variant.py
- added new generic features to AutotoolsPackage that leverage multi-valued variants
- modified openmpi to use new features
- added unit tests for the new semantics
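A minimal sketch of a multi-valued variant and one way a build method might consume it; the package, variant name, and values are illustrative:
```python
from spack import *


class Example(AutotoolsPackage):
    """Hypothetical package with a multi-valued variant."""
    # url, versions, etc. omitted for brevity

    variant('schedulers', description='Batch schedulers to support',
            values=('slurm', 'pbs', 'none'), default='none', multi=True)

    def configure_args(self):
        args = []
        # .value on a multi-valued variant yields the tuple of selected values
        schedulers = self.spec.variants['schedulers'].value
        if 'slurm' in schedulers:
            args.append('--with-slurm')
        return args
```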
This allows people on systems that don't have all the fetchers to still
run Spack tests. Mark tests that require git, subversion, or mercurial to
be skipped if they're not installed.
* Filter all system paths introduced by dependencies from PATH
* Make sure path filtering works *even* for trailing slashes
* Revert some of the changes to `filter_system_paths`
* Yes, `bin64` is a real thing (sigh)
* add tests: /usr, /usr/, /usr/local/../bin, etc.
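A short illustration of the intended behavior, assuming `filter_system_paths` from `llnl.util.filesystem`; the exact result depends on the list of system directories it knows about:
```python
from llnl.util.filesystem import filter_system_paths

paths = ['/usr/bin', '/usr/', '/opt/example/bin', '/usr/local/../bin']
# System locations (including trailing-slash and un-normalized spellings
# such as /usr/local/../bin) are dropped; package-specific paths survive.
print(filter_system_paths(paths))   # expected: ['/opt/example/bin']
```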
* Convert from rST to Google-style docstrings
The required hash of a submodule might point to the
non-HEAD commit of the current main branch and hence
would lead to a "no such remote ref" at checkout in
a shallow submodule.
## Motivation
Python installations are both important and unfortunately inconsistent. Depending on the Python version, OS, and the strength of the Earth's magnetic field when it was installed, the name of the Python executable, directory containing its libraries, library names, and the directory containing its headers can vary drastically.
I originally got into this mess with #3274, where I discovered that Boost could not be built with Python 3 because the executable is called `python3` and we were telling it to use `python`. I got deeper into this mess when I started hacking on #3140, where I discovered just how difficult it is to find the location and name of the Python libraries and headers.
Currently, half of the packages that depend on Python and need to know this information jump through hoops to determine the correct information. The other half are hard-coded to use `python`, `spec['python'].prefix.lib`, and `spec['python'].prefix.include`. Obviously, none of these packages would work for Python 3, and there's no reason to duplicate the effort. The Python package itself should contain all of the information necessary to use it properly. This is in line with the recent work by @alalazo and @davydden with respect to `spec['blas'].libs` and friends.
## Prefix
For most packages in Spack, we assume that the installation directory is `spec['python'].prefix`. This generally works for anything installed with Spack, but gets complicated when we include external packages. Python is a commonly used external package (it needs to be installed just to run Spack). If it was installed with Homebrew, `which python` would return `/usr/local/bin/python`, and most users would erroneously assume that `/usr/local` is the installation directory. If you peruse through #2173, you'll immediately see why this is not the case. Homebrew actually installs Python in `/usr/local/Cellar/python/2.7.12_2` and symlinks the executable to `/usr/local/bin/python`. `PYTHONHOME` (and presumably most things that need to know where Python is installed) needs to be set to the actual installation directory, not `/usr/local`.
Normally I would say, "sounds like user error, make sure to use the real installation directory in your `packages.yaml`". But I think we can make a special case for Python. That's what we decided in #2173 anyway. If we change our minds, I would be more than happy to simplify things.
To solve this problem, I created a `spec['python'].home` attribute that works the same way as `spec['python'].prefix` but queries Python to figure out where it was actually installed. @tgamblin Is there any way to overwrite `spec['python'].prefix`? I think it's currently immutable.
## Command
In general, Python 2 comes with both `python` and `python2` commands, while Python 3 only comes with a `python3` command. But this is up to the OS developers. For example, `/usr/bin/python` on Gentoo is actually Python 3. Worse yet, if someone is using an externally installed Python, all 3 commands may exist in the same directory! Here's what I'm thinking:
If the spec is for Python 3, try searching for the `python3` command.
If the spec is for Python 2, try searching for the `python2` command.
If neither is found, try searching for the `python` command.
## Libraries
Spack installs Python libraries in `spec['python'].prefix.lib`. Except on openSUSE 13, where it installs to `spec['python'].prefix.lib64` (see #2295 and #2253). On my CentOS 6 machine, the Python libraries are installed in `/usr/lib64`. Both need to work.
The libraries themselves change name depending on OS and Python version. For Python 2.7 on macOS, I'm seeing:
```
lib/libpython2.7.dylib
```
For Python 3.6 on CentOS 6, I'm seeing:
```
lib/libpython3.so
lib/libpython3.6m.so.1.0
lib/libpython3.6m.so -> lib/libpython3.6m.so.1.0
```
Notice the `m` after the version number. Yeah, that's a thing.
## Headers
In Python 2.7, I'm seeing:
```
include/python2.7/pyconfig.h
```
In Python 3.6, I'm seeing:
```
include/python3.6m/pyconfig.h
```
It looks like all Python 3 installations have this `m`. Tested with Python 3.2 and 3.6 on macOS and CentOS 6
Spack has really nice support for libraries (`find_libraries` and `LibraryList`), but nothing for headers. Fixed.
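A sketch of how a dependent package might consume this information once the python package exposes it; treat the attribute names (`command`, `libs`, `headers`) and the injected `configure`/`make` helpers as assumptions of this proposal rather than settled API:
```python
from spack import *


class Example(Package):
    """Hypothetical dependent that needs Python's executable, libs and headers."""
    # url, versions, etc. omitted for brevity

    depends_on('python')

    def install(self, spec, prefix):
        python = spec['python']
        configure('--prefix={0}'.format(prefix),
                  # resolves python vs python2 vs python3
                  'PYTHON={0}'.format(python.command.path),
                  # hides lib vs lib64 and the trailing 'm' in library names
                  'PYTHON_LIBS={0}'.format(python.libs.ld_flags),
                  'PYTHON_CPPFLAGS={0}'.format(python.headers.include_flags))
        make()
        make('install')
```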
When a compiler was not found, a stacktrace was displayed to the user because
there were three arguments to be substituted in a string with only two
substitutions to be done.
Fixes #4026. #1167 updated Database.reindex to keep old installation records to
support external packages. However, when a user manually removes a
prefix and reindexes this kept the records so the packages were
still installed according to "spack find" etc. This adds a check
for non-external packages to ensure they are properly installed
according to the directory layout.
- add Version.__format__ to support new-style formatting.
- Python3 doesn't handle this well -- it delegates to
object.__format__(), which raises an error for fancy format strings.
- not sure why it doesn't call str(self).__format__ instead, but that's
how things are.
* Properly ignore flake8 F811 redefinition errors
* Add unit tests for flake8 command
* Allow spack flake8 to work on systems with older git
* Skip flake8 unit tests for Python 2.6 and 3.3
* correctly treats a change from `explicit=False` to `explicit=True` in an external package DB entry.
* added unit tests
* fixed issues raised by @tgamblin. In particular the PR no longer changes hashes for packages that are not external.
* added a test to check correctness of a spec/yaml round-trip for things that involve an external
* Don't find external module path at each step of concretization
* it's not necessary. The paths are retrieved at the end of concretization
* Don't find replacements for external packages.
* Test root of the DAG if external
* No reason not to test if the root of the DAG is external when external
packages are now first class citizens!
* Create `external` property for Spec (for external_path and external_module)
* Allow users to specify external package paths relative to spack
* Canonicalize external package paths so that users may specify their
locations relative to spack's directory.
* Update tests to use new external_path and external properly.
* skip license hooks on external
- Spack doesn't need eggs -- it manages its own directories
- Simplify install layout and reduce sys.path searches by installing all
packages flat (eggs are deprecated for wheels, and this is also what
wheels do).
- We now supply the --single-version-externally-managed argument to
`setup.py install` for setuptools packages and setuptools.
- modify packages to only use setuptools args if setuptools is an
immediate dependency
- Remove setuptools from packages that do not need it.
- Some packages use setuptools *only* when certain args (like
'develop' or 'bdist') are supplied to setup.py, and they specifically
do not use setuptools for installation.
- Spack never calls setup.py this way, so just removing the setuptools
dependency works for these packages.
* fetch git submodules recursively
This is useful if the submodules have submodules themselves. On
the other hand doing a recursive update doesn't hurt if there
is only one level.
* fetch submodules with depth=1 as well (fix #2190)
* use git submodule with depth only for git>=1.8.4
- Spack uninstall would previously fail if it could not load a package for
the thing being uninstalled.
- This reworks uninstall to handle cases where the package is no longer
known, e.g.:
a) the package has been renamed or is no longer in Spack
b) the repository the package came from is no longer registered in
repos.yaml
- gcc on macOS says it's version 4.2.1, but it's really clang, and it's
actually the *same* clang as the system clang.
- It also doesn't respond with a full path when called with
--print-file-name=libstdc++.dylib, which is expected from gcc in abi.py.
Instead, it gives a relative path and _gcc_compiler_compare doesn't
understand what to do with it. This results in errors like:
```
lib/spack/spack/abi.py, line 71, in _gcc_get_libstdcxx_version
libpath = os.readlink(output.strip())
OSError: [Errno 2] No such file or directory: 'libstdc++.dylib'
```
- This commit does two things:
1. Ignore any gcc that's actually clang in abi.py. We can probably do
better than this, but it's not clear there is a need to, since we
should handle the compiler as clang, not gcc.
2. Don't auto-detect any "gcc" that is actually clang anymore. Ignore
it and expect people to use clang (which is the default macOS
compiler anyway).
Users can still add fake gccs to their compilers.yaml if they want, but
it's discouraged.
* Checksum code wasn't opening binary files as binary.
- Fixes Python 3 issue where files are opened as unicode text by default,
and decoding fails for binary blobs.
* Simplify fetch test parametrization.
* Add tests for URL fetching and checksumming.
- fix coverage on interface functions in FetchStrategy superclass
- add some extra crypto tests.
* Package install removes prior unfinished installs
Depending on how Spack is terminated in the middle of building a
package, it may leave a partially installed package in the install
prefix. Originally Spack treated the package as installed if the
prefix was present, in which case the user had to remove the
installation prefix manually before restarting an install. This
commit adds a more thorough check to ensure that a package is
actually installed: if the installation prefix is present but Spack
determines that the install did not complete, it removes the
installation prefix and starts a new install; if the user has
enabled --keep-prefix, then Spack reverts to its old behavior. (A
rough sketch of this check appears after this list.)
* Added test for partial install handling
* Added test for restoring DB
* Style fixes
* Restoring 2.6 compatibility
* Relocated repair logic to separate function
* If --keep-prefix is set, package installs will continue an install from an existing prefix if one is present
* check metadata consistency when continuing partial install
* Added --force option to make spack reinstall a package (and all dependencies) from scratch
* Updated bash completion; removed '-f' shorthand for '--force' for install command
* don't use multiple write modes for the completion file
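A rough sketch of the partial-install check referenced above, assuming a database handle `db` with a query method and a spec with a `prefix` attribute (placeholders, not Spack's exact interfaces):
```
import os
import shutil

def handle_existing_prefix(spec, db, keep_prefix=False):
    """Decide what to do when an install prefix already exists."""
    recorded = db.query_one(spec) is not None     # placeholder DB lookup
    if recorded:
        return 'already-installed'
    if os.path.isdir(spec.prefix):
        if keep_prefix:
            return 'continue'                     # resume in the existing prefix
        shutil.rmtree(spec.prefix)                # discard the partial install
    return 'fresh-install'
```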
* Add tests to mercurial package
* Add support for --insecure with mercurial fetching
* Install man pages and tab-completion scripts
* Add tests and latest version for all deps
* Flake8 fix
* Use certifi module to find CA certificate
* Flake8 fix
* Unset PYTHONPATH when running hg
* svn_fetch should use svn-test, not hg-test
* Drop Python 3 support in Mercurial
Python 3 support is a work in progress and isn't currently
recommended:
https://www.mercurial-scm.org/wiki/SupportedPythonVersions
* Test both secure and insecure hg fetching
* Test both secure and insecure git and svn fetching
`set_executable` now checks whether user, group, or other has read
permission on a file and, if so, sets the corresponding executable
bit.
See #1483.
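A minimal sketch of that permission-mirroring logic using the standard `stat` bits (essentially what the description above implies):
```
import os
import stat

def set_executable(path):
    """Add an execute bit for each class (user, group, other) that
    already has read permission on the file."""
    mode = os.stat(path).st_mode
    if mode & stat.S_IRUSR:
        mode |= stat.S_IXUSR
    if mode & stat.S_IRGRP:
        mode |= stat.S_IXGRP
    if mode & stat.S_IROTH:
        mode |= stat.S_IXOTH
    os.chmod(path, mode)
```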
Fixes#2587
The concretizer falls back on using the root architecture (followed
by the default system architecture) to fill in unspecified arch
properties for a spec. It failed to check cases where the root could
be None.
* Remove fake URLs from Spack
* Ignore long lines for URLs that start with ftp:
* Preliminary changes to version regexes
* New redesign of version regexes
* Allow letters in version-only
* Fix detection of versions that end in Final
* Rearrange a few regexes and add examples
* Add tests for common download repositories
* Add test cases for common tarball naming schemes
* Finalize version regexes
* spack url test -> spack url summary
* Clean up comments
* Rearrange suffix checks
* Use query strings for name detection
* Remove no longer necessary url_for_version functions
* Strip off extraneous information after package name
* Add one more test
* Dot in square brackets does not need to be escaped
* Move renaming outside of parse_name_offset
* Fix versions for a couple more packages
* Fix flake8 and doc tests
* Correctly parse Python, Lua, and Bio++ package names
* Use effective URLs for mfem
* Add checksummed version to mitos
* Remove url_for_version from STAR-CCM+ package
* Revert changes to version numbers with underscores and dashes
* Fix name detection for tbb
* Correctly parse Ruby gems
* Reverted mfem back to shortened URLs.
* Updated instructions for better security
* Remove preferred=True from newest version
* Add tests for new `spack url list` flags
* Add tests for strip_name_suffixes
* Add unit tests for version separators
* Fix bugs related to a parseable name but an unparseable version
* Remove dead code, update docstring
* Ignore 'binary' at end of version string
* Remove platform from version
* Flip libedit version numbers
* Re-support weird NCO alpha/beta versions
* Rebase and remove one new fake URL
* Add / to beginning of regex to avoid picking up similarly named packages
* Ignore weird tar versions
* Fix bug in url parse --spider when no versions found
* Less strict version matching for spack versions
* Don't rename Python packages
* Be a little more selective: a version must begin with a digit (illustrated in the sketch after this list)
* Re-add fake URLs
* Fix up several other packages
* Ignore more file endings
* Add parsing support for Miniconda
* Update tab completion
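To illustrate the flavor of matching involved, here is a simplified, hypothetical pattern (not one of the regexes Spack actually ships) that requires the version to begin with a digit and to sit immediately before a recognized archive suffix:
```
import re

url = "https://example.com/downloads/foo-bar-1.2.3.tar.gz"
# Simplified stand-in for the version regexes: a digit-leading version
# immediately before a known archive extension.
match = re.search(r'[-_](\d[\w.]*?)\.(?:tar\.gz|tgz|tar\.bz2|tar\.xz|zip)$', url)
version = match.group(1) if match else None   # -> '1.2.3'
```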
* XFAILS are now PASSES for 2 web tests
- _spider in web.py was actually failing to spider deeper than a certain
point.
- Fixed multiprocessing pools to not use daemons and to allow recursive
spawning.
- Added detailed tests for spidering and for finding archive versions.
- left some xfail URL finding exercises for the reader.
- Fix noqa annotations for some @when decorators
- Clean up spec_syntax tests: don't depend on DB order.
- spec_syntax hash parsing tests were strongly dependent on the order the
DB was traversed.
- Tests now specifically grab the specs they want from the mock DB.
- Tests are more readable as a result.
- Add Python3 versions to Travis tests.
1. Fix#2807: Can't depend on virtual and non-virtual package
- This is tested by test_my_dep_depends_on_provider_of_my_virtual_dep in
the concretize.py test.
- This was actually working in the test suite, but it depended on the
order the dependencies were resolved in. Resolving non-virtual then
virtual worked, but virtual, then non-virtual did not.
- Problem was that an unnecessary copy was made of a spec that already
had some dependencies set up, and the copy lost half of some of the
dependency relationships. This caused the "can't depend on X twice"
error.
- Fix by eliminating unnecessary copy and ensuring that dep parameter of
_merge_dependency is always safe to own -- i.e. it's a defensive copy
from somewhere else.
2. Fix bug and simplify concretization of deptypes.
- deptypes weren't being accumulated; they were being set on each
DependencySpec. This could cause concretization to get into an infinite
loop.
- Fixed by accumulating deptypes in DependencySpec.update_deptypes()
- Also simplified deptype normalization logic: deptypes are now merged in
constrain() like everything else -- there is no need to merge them
specially or to look at dependents in _merge_dependency().
- Add some docstrings to deptype tests.
- Get rid of pkgsort() usage for preferred variants.
- Concretization is now entirely based on key-based sorting.
- Remove PreferredPackages class and various spec cmp() methods.
- Replace with PackagePrefs class that implements a key function for
sorting according to packages.yaml.
- Clear package pref caches on config test.
- Explicit compare methods instead of total_ordering in Version.
- Our total_ordering backport wasn't making Python 3 happy for some
reason.
- Python 3's functools.total_ordering and spelling the operators out
fixes the problem.
- Fix unicode issues with spec hashes, json, & YAML
- Try to use str everywhere and avoid unicode objects in python 2.
- Remove ascii encoding assumption from spack_yaml
- proc.communicate() returns bytes; convert to str before adding.
- Fix various byte string/unicode issues for Python 2/3 support
- Need to decode subprocess output as utf-8 in from_sourcing_files.
- Fix comments in strify()
- convert print statements, StringIO, `except ... as`, octal literals, and izip
(a condensed sketch of these idioms follows this list)
- convert print statement to print function
- convert StringIO to six.StringIO
- remove usage of csv reader in Spec, in favor of simple regex
- csv reader only does byte strings
- convert 0755 octal literals to 0o755
- convert `except Foo, e` to `except Foo as e`
- fix a few places `str` is used.
- may need to switch everything to str later.
- convert iteritems usages to use six.iteritems
- fix urllib and HTMLParser
- port metaclasses to use six.with_metaclass
- More octal literal conversions for Python 2/3
- Fix a new octal literal.
- Convert `basestring` to `six.string_types`
- Convert xrange -> range
- Fix various issues with encoding, iteritems, and Python3 semantics.
- Convert contextlib.nested to explicitly nested context managers.
- Convert use of filter() to list comprehensions.
- Replace reduce() with list comprehensions.
- Clean up composite: replace inspect.ismethod() with callable()
- Python 3 doesn't have "method" objects; inspect.ismethod returns False.
- Need to use callable in Composite to make it work.
- Update colify to use future division.
- Fix zip() usages that need to be lists.
- Python3: Use line-buffered logging instead of unbuffered.
- Python3 raises an error with unbuffered I/O
- See https://bugs.python.org/issue17404
- Update YAML version to support Python 3
- Python 3 support for ordereddict backport
- Exclude Python3 YAML from version tests.
- Vendor six into Spack.
- Make Python version-check tests work with Python 3
- Add ability to add version check exceptions with '# nopyqver' line
comments.
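A condensed, hypothetical before/after of the portability idioms listed above (the function names are generic, not taken from any particular Spack module):
```
import os
import subprocess

from six import StringIO, iteritems, string_types   # six is now vendored

def read_output(cmd):
    """proc.communicate() returns bytes; decode to str for Python 2/3."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    return out.decode('utf-8')

def dump_options(options, path):
    buf = StringIO()                                 # was: StringIO.StringIO
    for key, value in iteritems(options):            # was: options.iteritems()
        if isinstance(value, string_types):          # was: basestring
            buf.write('{0}={1}\n'.format(key, value))
    try:
        with open(path, 'w') as f:
            f.write(buf.getvalue())
        os.chmod(path, 0o755)                        # was: 0755
    except OSError as e:                             # was: except OSError, e
        raise RuntimeError('could not write {0}: {1}'.format(path, e))
```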
* Run python setup.py test if --run-tests
* Attempt to import the Python module after installation (see the sketch after this list)
* Add testing support to numpy and scipy
* Remove duplicated comments
* Update to new run-tests callback methodology
* Remove unrelated changes for another PR
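A hedged sketch of what such a post-install import check could look like (the test-suite call is illustrative and assumes the module exposes one, as numpy does):
```
import subprocess
import sys

def import_check(module_name, run_tests=False):
    """Try importing the freshly installed module; only run its (slow)
    test suite when --run-tests was requested."""
    code = 'import {0}'.format(module_name)
    if run_tests:
        code += '; {0}.test()'.format(module_name)   # e.g. numpy.test()
    subprocess.check_call([sys.executable, '-c', code])
```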
* perl: make extendable and add Module::Build package
* perl: allow 'spack create' to identify perl packages from their contents
* perl-module-build: fix indenting of package docstring
* perl: split install() method for extensions into phases
* perl: auto-detect build method (Makefile.PL vs Build.PL) and define a 'check' method (a sketch of the detection follows this list)
* PerlPackage: use import statements similar to those in AutotoolsPackage
* PerlModule: fix detection of Build.PL
* PerlPackageTemplate: remove extraneous lines to avoid flake8 warnings
* PerlPackageTemplate: split into separate templates for Makefile.PL and Build.PL
* PerlPackage: add cross-references to docstrings
* AutotoolsPackage: fix ambiguous cross-references to avoid errors in doc tests
* PerlbuildPackageTemplate: depend on perl-module-build if Build.PL exists
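A minimal, illustrative sketch of the Makefile.PL/Build.PL auto-detection referenced above (not PerlPackage's exact implementation):
```
import os

class PerlBuildDetectionSketch(object):
    """Illustration only: pick whichever configuration script the
    Perl distribution ships with."""

    @property
    def build_method(self):
        if os.path.isfile('Makefile.PL'):
            return 'Makefile.PL'
        if os.path.isfile('Build.PL'):
            return 'Build.PL'
        raise RuntimeError('neither Makefile.PL nor Build.PL found')
```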
- Spack find would fail with "unknown namespace" for some queries when a
package from an unknown namespace was installed.
- Solve by being conservative: assume unknown packages are NOT providers
of virtual dependencies.
- deactivate -a wouldn't work if the installation's package was no longer
available.
- Fix installed_extensions_for so that it doesn't need to look at the
package.py file.
This fixes the problem described in #3374, where `spack find` ignored the explicit/implicit distinction.
I believe that this was broken in #2626.
This restores the behavior of implicit/explicit for me.
I believe that it does not screw anything else up, but ....
* Order listed compiler sections
"spack compiler list" output compiler sections in an arbitrary order.
With this commit compiler sections are ordered primarily by compiler
name and then by operating system and target.
* Compiler search lists config files with compilers
If a compiler entry is already defined in a configuration file that
the user does not know about, they may be confused when that compiler
is not added by "spack compiler find". This commit adds a message at
the end of "spack compiler find" to inform the user of the locations
of all config files where compilers are defined.
Fixes#1476
Concretization uses compilers defined in config files and if those
are not available defaults to searching typical paths where the
detected operating system would have a compiler. If there is an OS
update, the detected OS can change; in this case all compilers
defined in the config files would no longer match (because they would
be associated with the previous OS version). The error message in
this case was too vague. This commit adds logic for detecting when it
is likely that the OS has been updated (in particular when that
affects compiler concretization) and improves the information provided
to the user in the error message.
* Don't propagate flags between different compilers
Fixes#2786
Previously when a spec had no parents with an equivalent compiler,
Spack would default to adding the compiler flags associated with the
root of the DAG. This eliminates that default.
* added test for compiler flag propagation
* simplify compiler flag propagation logic
Fixes#3428
Users can run 'spack compiler find' to automatically initialize their
compilers.yaml configuration file. It also turns out that Spack will
implicitly initialize the compilers configuration file as part of
detecting compilers if none are found (so if a user were to attempt to
concretize a spec without running 'spack compiler find' it would not
fail). However, in this case Spack was overlooking its own implicit
initialization of the config files and would report that no new
compilers were found. This commit removes implicit initialization when
the user calls 'spack compiler find'.
This did not surface until #2999 because the 'spack compiler' command
defaulted to using a scope 'user/platform' that was not accounted for
in get_compiler_config (where the implicit initialization logic
predates the addition of this new scope); #2999 removed the scope
specification when checking through config files, leading to the
implicit initialization.
Previously, this would fail with a NoSuchMethodError:
    class Package(object):
        # this is the default implementation
        def some_method(self):
            ...

    class Foo(Package):
        @when('platform=cray')
        def some_method(self):
            ...

        @when('platform=linux')
        def some_method(self):
            ...
This fixes the implementation of `@when` so that the superclass method
will be invoked when no subclass method matches.
Adds tests to ensure this works, as well.
* default scope for config command is made consistent with cmd/__init__ default
* don't specify a scope when looking for compilers with a matching spec (since compiler concretization is scope-independent)
* config edit should default to platform-specific file only for compilers
* when duplicate compiler specs are detected, the exception raised now points the user to the files where the duplicates appear
* updated error message to emphasize that a spec is duplicated (since multiple specs can reference the same compiler)
* 'spack compilers' is now also broken down into sections by os and target
* Added tests for new compiler methods
Modifications:
- `dump_packages` copies build dependencies into `$prefix/.spack`, as well as the link/run dependencies that we already copied there.
- fake installs copy dependency packages into `$prefix/.spack` as well
- Added a new interface for Specs to pass build information
- Calls forwarded from Spec to Package are now explicit
- Added descriptor within Spec to manage forwarding
- Added state in Spec to maintain query information
- Modified a few packages (the one involved in spack install pexsi) to showcase changes
- This uses an object wrapper around `spec` to implement the `libs` sub-calls.
- wrapper is returned from `__getitem__` only if spec is concrete
- allows packagers to access build information easily
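For example, a packager could now query a dependency's libraries roughly like this (the `blas` dependency, flags, and base class are illustrative; `.libs` is the wrapper interface described above):
```
from spack import *   # as in a regular package.py of this era

class ExampleApp(AutotoolsPackage):
    """Illustrative only: consumes build information from a concrete spec."""

    def configure_args(self):
        blas = self.spec['blas'].libs     # wrapper returned by __getitem__
        return [
            '--with-blas-libs={0}'.format(blas.ld_flags),
            '--with-blas-dirs={0}'.format(' '.join(blas.directories)),
        ]
```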
It seems the tests in `packages.py` were passing only because of a specific order of execution. This should fix the problem and make the test suite more resilient to execution order.