spack test (#15702)

Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). `spack test` is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run
smoke tests on all of its packages. Historical test logs can be perused
with `spack test results`. Generic smoke tests are included for MPI
implementations and for C, C++, and Fortran compilers, as well as
specific smoke tests for 18 packages.

Inside the test method, individual checks can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running them and checking return codes, checking output, and error
handling.
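As a sketch of how this looks in a package, consider the following hypothetical `package.py` (the package name `Mylib` and the `mylib-config` executable are illustrative, not from this PR; in real Spack the class would derive from `spack.package.Package`, which provides `run_test`):

```python
# Hypothetical package.py sketch; `run_test` is inherited from Spack's
# package base class in a real package.
class Mylib:
    def test(self):
        # Check that the installed `mylib-config` tool reports a version.
        # run_test finds the executable, runs it with the given options,
        # checks the return code, and matches each `expected` regex
        # against the output.
        self.run_test('mylib-config', options=['--version'],
                      expected=[r'\d+\.\d+'], status=0,
                      purpose='version reporting works')
```

If `mylib-config` exits nonzero or its output does not match the pattern, the check is recorded as a failure and the remaining checks in `test()` still run.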

This PR adds direct support in Spack's package API for the following
trickier aspects of testing:

- [x] Caching source or intermediate build files at build time for
      use at test time.
- [x] Test dependencies.
- [x] Packages that require a compiler for testing (such as
      library-only packages).

See the packaging guide for more details on using Spack's testing
support. Also included is support for package.py files for virtual
packages; this does not change the Spack interface, but it is a major
change in Spack's internals.

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Author: Greg Becker
Committed: 2020-11-18 02:39:02 -08:00 via GitHub
Commit: 77b2e578ec (parent: b81bbfb6e9)
131 changed files with 3567 additions and 644 deletions


@@ -132,7 +132,7 @@ jobs:
       . share/spack/setup-env.sh
       spack compiler find
       spack solve mpileaks%gcc
-      coverage run $(which spack) test -v
+      coverage run $(which spack) unit-test -v
       coverage combine
       coverage xml
     - uses: codecov/codecov-action@v1


@@ -35,7 +35,7 @@ jobs:
       git --version
       . .github/workflows/setup_git.sh
       . share/spack/setup-env.sh
-      coverage run $(which spack) test
+      coverage run $(which spack) unit-test
       coverage combine
       coverage xml
     - uses: codecov/codecov-action@v1


@@ -70,6 +70,10 @@ config:
     - ~/.spack/stage
   #  - $spack/var/spack/stage

+  # Directory in which to run tests and store test results.
+  # Tests will be stored in directories named by date/time and package
+  # name/hash.
+  test_stage: ~/.spack/test

   # Cache directory for already downloaded source tarballs and archived
   # repositories. This can be purged with `spack clean --downloads`.


@@ -74,7 +74,7 @@ locally to speed up the review process.
 We currently test against Python 2.6, 2.7, and 3.5-3.7 on both macOS and Linux and
 perform 3 types of tests:

-.. _cmd-spack-test:
+.. _cmd-spack-unit-test:

 ^^^^^^^^^^
 Unit Tests
@@ -96,7 +96,7 @@ To run *all* of the unit tests, use:

 .. code-block:: console

-   $ spack test
+   $ spack unit-test

 These tests may take several minutes to complete. If you know you are
 only modifying a single Spack feature, you can run subsets of tests at a
@@ -105,13 +105,13 @@ time. For example, this would run all the tests in

 .. code-block:: console

-   $ spack test lib/spack/spack/test/architecture.py
+   $ spack unit-test lib/spack/spack/test/architecture.py

 And this would run the ``test_platform`` test from that file:

 .. code-block:: console

-   $ spack test lib/spack/spack/test/architecture.py::test_platform
+   $ spack unit-test lib/spack/spack/test/architecture.py::test_platform

 This allows you to develop iteratively: make a change, test that change,
 make another change, test that change, etc. We use `pytest
@@ -121,29 +121,29 @@ pytest docs
 <http://doc.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests>`_
 for more details on test selection syntax.

-``spack test`` has a few special options that can help you understand
-what tests are available. To get a list of all available unit test
-files, run:
+``spack unit-test`` has a few special options that can help you
+understand what tests are available. To get a list of all available
+unit test files, run:

-.. command-output:: spack test --list
+.. command-output:: spack unit-test --list
    :ellipsis: 5

-To see a more detailed list of available unit tests, use ``spack test
---list-long``:
+To see a more detailed list of available unit tests, use ``spack
+unit-test --list-long``:

-.. command-output:: spack test --list-long
+.. command-output:: spack unit-test --list-long
    :ellipsis: 10

 And to see the fully qualified names of all tests, use ``--list-names``:

-.. command-output:: spack test --list-names
+.. command-output:: spack unit-test --list-names
    :ellipsis: 5

 You can combine these with ``pytest`` arguments to restrict which tests
 you want to know about. For example, to see just the tests in
 ``architecture.py``:

-.. command-output:: spack test --list-long lib/spack/spack/test/architecture.py
+.. command-output:: spack unit-test --list-long lib/spack/spack/test/architecture.py

 You can also combine any of these options with a ``pytest`` keyword
 search. See the `pytest usage docs
@@ -151,7 +151,7 @@ search. See the `pytest usage docs
 for more details on test selection syntax. For example, to see the names of all tests that have "spec"
 or "concretize" somewhere in their names:

-.. command-output:: spack test --list-names -k "spec and concretize"
+.. command-output:: spack unit-test --list-names -k "spec and concretize"

 By default, ``pytest`` captures the output of all unit tests, and it will
 print any captured output for failed tests. Sometimes it's helpful to see
@@ -161,7 +161,7 @@ argument to ``pytest``:

 .. code-block:: console

-   $ spack test -s --list-long lib/spack/spack/test/architecture.py::test_platform
+   $ spack unit-test -s --list-long lib/spack/spack/test/architecture.py::test_platform

 Unit tests are crucial to making sure bugs aren't introduced into
 Spack. If you are modifying core Spack libraries or adding new
@@ -176,7 +176,7 @@ how to write tests!
 You may notice the ``share/spack/qa/run-unit-tests`` script in the
 repository. This script is designed for CI. It runs the unit
 tests and reports coverage statistics back to Codecov. If you want to
-run the unit tests yourself, we suggest you use ``spack test``.
+run the unit tests yourself, we suggest you use ``spack unit-test``.

 ^^^^^^^^^^^^
 Flake8 Tests


@@ -363,11 +363,12 @@ Developer commands
 ``spack doc``
 ^^^^^^^^^^^^^

-^^^^^^^^^^^^^^
-``spack test``
-^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^
+``spack unit-test``
+^^^^^^^^^^^^^^^^^^^

-See the :ref:`contributor guide section <cmd-spack-test>` on ``spack test``.
+See the :ref:`contributor guide section <cmd-spack-unit-test>` on
+``spack unit-test``.

 .. _cmd-spack-python:


@@ -87,11 +87,12 @@ will be available from the command line:
    --implicit select specs that are not installed or were installed implicitly
    --output OUTPUT where to dump the result

-The corresponding unit tests can be run giving the appropriate options to ``spack test``:
+The corresponding unit tests can be run giving the appropriate options
+to ``spack unit-test``:

 .. code-block:: console

-   $ spack test --extension=scripting
+   $ spack unit-test --extension=scripting

    ============================================================== test session starts ===============================================================
    platform linux2 -- Python 2.7.15rc1, pytest-3.2.5, py-1.4.34, pluggy-0.4.0


@@ -3948,6 +3948,118 @@ using the ``run_before`` decorator.
 .. _file-manipulation:

+^^^^^^^^^^^^^
+Install Tests
+^^^^^^^^^^^^^
+
+.. warning::
+
+   The API for adding and running install tests is not yet considered
+   stable and may change drastically in future releases. Packages with
+   upstreamed tests will be refactored to match changes to the API.
+
+While build-time tests are integrated with the build system, install
+tests may be added to Spack packages to be run independently of the
+install method.
+
+Install tests may be added by defining a ``test`` method with the
+following signature:
+
+.. code-block:: python
+
+   def test(self):
+
+These tests will be run in an environment set up to provide access to
+this package and all of its dependencies, including ``test``-type
+dependencies. Inside the ``test`` method, standard python ``assert``
+statements and other error reporting mechanisms can be used. Spack
+will report any errors as a test failure.
+
+Inside the test method, individual tests can be run separately (and
+continue transparently after a test failure) using the ``run_test``
+method. The signature for the ``run_test`` method is:
+
+.. code-block:: python
+
+   def run_test(self, exe, options=[], expected=[], status=0,
+                installed=False, purpose='', skip_missing=False,
+                work_dir=None):
+
+This method will operate in ``work_dir`` if one is specified. It will
+search for an executable in the ``PATH`` variable named ``exe``, and
+if ``installed=True`` it will fail if that executable does not come
+from the prefix of the package being tested. If the executable is not
+found, it will fail the test unless ``skip_missing`` is set to
+``True``. The executable will be run with the options specified, and
+the return code will be checked against the ``status`` argument, which
+can be an integer or list of integers. Spack will also check that
+every string in ``expected`` is a regex matching part of the output of
+the executable. The ``purpose`` argument is recorded in the test log
+for debugging purposes.
+
+""""""""""""""""""""""""""""""""""""""
+Install tests that require compilation
+""""""""""""""""""""""""""""""""""""""
+
+Some tests may require access to the compiler with which the package
+was built, especially to test library-only packages. To ensure the
+compiler is configured as part of the test environment, set the
+attribute ``test_requires_compiler = True`` on the package. The
+compiler will be available through the canonical environment variables
+(``CC``, ``CXX``, ``FC``, ``F77``) in the test environment.
+
+""""""""""""""""""""""""""""""""""""""""""""""""
+Install tests that require build-time components
+""""""""""""""""""""""""""""""""""""""""""""""""
+
+Some packages cannot be easily tested without components from the
+build-time test suite. For those packages, the
+``cache_extra_test_sources`` method can be used.
+
+.. code-block:: python
+
+   @run_after('install')
+   def cache_test_sources(self):
+       srcs = ['./tests/foo.c', './tests/bar.c']
+       self.cache_extra_test_sources(srcs)
+
+This method will copy the listed files into the metadata directory of
+the package at the end of the install phase of the build. They will be
+available to the test method in the directory
+``self._extra_tests_path``.
+
+While source files are generally recommended, for many packages
+binaries may also technically be cached in this way for later testing.
+
+"""""""""""""""""""""
+Running install tests
+"""""""""""""""""""""
+
+Install tests can be run using the ``spack test run`` command. The
+``spack test run`` command will create a test suite out of the specs
+provided to it; if no specs are provided, it will test all specs in
+the active environment, or all specs installed in Spack if no
+environment is active. Test suites can be named using the ``--alias``
+option; test suites that are not aliased will use the content hash of
+their specs as their name.
+
+Packages with install tests can be queried using the ``spack test
+list`` command, which outputs all installed packages with defined
+``test`` methods.
+
+Test suites can be found using the ``spack test find`` command. It
+will list all test suites that have been run and have not been removed
+using the ``spack test remove`` command, which removes test suites to
+declutter the test stage. The ``spack test results`` command will show
+results for completed test suites.
+
+The test stage is the working directory for all install tests run with
+Spack. By default, Spack uses ``~/.spack/test`` as the test stage. The
+test stage can be set in the high-level config:
+
+.. code-block:: yaml
+
+   config:
+     test_stage: /path/to/stage
+
 ---------------------------
 File manipulation functions
 ---------------------------
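The checking semantics described for ``run_test`` above can be illustrated standalone. The following is a sketch of the behavior, not Spack's implementation; ``check_run`` is a hypothetical helper name:

```python
import re
import subprocess
import sys


def check_run(args, expected=[], status=0):
    """Mimic run_test's checks: the return code must be in `status`
    (an int or a list of ints), and every string in `expected` must be
    a regex matching part of the command's output."""
    allowed = status if isinstance(status, (list, tuple)) else [status]
    proc = subprocess.run(args, capture_output=True, text=True)
    if proc.returncode not in allowed:
        return False
    output = proc.stdout + proc.stderr
    return all(re.search(pattern, output) for pattern in expected)


# Run a small command and validate it the way run_test would.
ok = check_run([sys.executable, '-c', "print('version 1.2.3')"],
               expected=[r'version \d+\.\d+\.\d+'])
```

The same shape accepts a list of allowed exit codes, e.g. `status=[0, 2]`, which is why ``run_test`` documents ``status`` as "an integer or list of integers".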
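Putting the compiler and cached-sources pieces together, a library-only package might look like the sketch below. The class name ``MyBlas``, the ``tests/smoke.c`` file, and the ``PASSED`` output are all illustrative; ``run_after``, ``cache_extra_test_sources``, ``run_test``, and ``_extra_tests_path`` are the Spack APIs described above, with ``run_after`` stubbed out here so the sketch stands alone:

```python
import os


def run_after(phase):
    # Stand-in for Spack's @run_after decorator; in a real package.py
    # this comes from Spack and registers the hook for `phase`.
    return lambda func: func


class MyBlas:  # would derive from Spack's Package in a real package.py
    # Ask Spack to expose CC/CXX/FC/F77 in the test environment.
    test_requires_compiler = True

    @run_after('install')
    def cache_test_sources(self):
        # Copy a build-tree source into the package metadata directory
        # so it is still available at test time.
        self.cache_extra_test_sources(['tests/smoke.c'])

    def test(self):
        # Compile the cached source with the configured compiler, then
        # run the result and check its output.
        src = os.path.join(self._extra_tests_path, 'tests', 'smoke.c')
        self.run_test(os.environ['CC'], options=[src, '-o', 'smoke'],
                      purpose='compile the smoke-test driver')
        self.run_test('smoke', expected=['PASSED'])
```

Because ``test_requires_compiler`` is set, Spack configures the build compiler before running ``test``, so ``os.environ['CC']`` is defined in the test environment.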


@@ -118,6 +118,7 @@ def match(self, text):
     "([^:]+): (Error:|error|undefined reference|multiply defined)",
     "([^ :]+) ?: (error|fatal error|catastrophic error)",
     "([^:]+)\\(([^\\)]+)\\) ?: (error|fatal error|catastrophic error)"),
+    "^FAILED",
     "^[Bb]us [Ee]rror",
     "^[Ss]egmentation [Vv]iolation",
     "^[Ss]egmentation [Ff]ault",


@@ -41,6 +41,8 @@
     'fix_darwin_install_name',
     'force_remove',
     'force_symlink',
+    'chgrp',
+    'chmod_x',
     'copy',
     'install',
     'copy_tree',
@@ -52,6 +54,7 @@
     'partition_path',
     'prefixes',
     'remove_dead_links',
+    'remove_directory_contents',
     'remove_if_dead_link',
     'remove_linked_tree',
     'set_executable',
@@ -1806,3 +1809,13 @@ def md5sum(file):
     with open(file, "rb") as f:
         md5.update(f.read())
     return md5.digest()
+
+
+def remove_directory_contents(dir):
+    """Remove all contents of a directory."""
+    if os.path.exists(dir):
+        for entry in [os.path.join(dir, entry) for entry in os.listdir(dir)]:
+            if os.path.isfile(entry) or os.path.islink(entry):
+                os.unlink(entry)
+            else:
+                shutil.rmtree(entry)
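The new ``remove_directory_contents`` helper depends only on the standard library, so its behavior can be exercised in isolation; this reproduces it with its imports to show that the directory itself survives while its contents are removed:

```python
import os
import shutil
import tempfile


def remove_directory_contents(dir):
    """Remove all contents of a directory (the directory itself stays)."""
    if os.path.exists(dir):
        for entry in [os.path.join(dir, entry) for entry in os.listdir(dir)]:
            if os.path.isfile(entry) or os.path.islink(entry):
                os.unlink(entry)
            else:
                shutil.rmtree(entry)


# Populate a scratch directory with a file and a subdirectory, then clear it.
stage = tempfile.mkdtemp()
open(os.path.join(stage, 'log.txt'), 'w').close()
os.mkdir(os.path.join(stage, 'subdir'))
remove_directory_contents(stage)
leftover = os.listdir(stage)  # empty: contents gone, directory kept
```

This contrasts with ``shutil.rmtree(stage)``, which would delete the directory itself; keeping the directory is what makes the helper suitable for clearing a reusable stage.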


@@ -32,8 +32,8 @@
 Skimming this module is a nice way to get acquainted with the types of
 calls you can make from within the install() function.
 """
-import re
 import inspect
+import re
 import multiprocessing
 import os
 import shutil
@@ -53,10 +53,14 @@
 import spack.config
 import spack.main
 import spack.paths
+import spack.package
+import spack.repo
 import spack.schema.environment
 import spack.store
+import spack.install_test
 import spack.subprocess_context
 import spack.architecture as arch
+import spack.util.path
 from spack.util.string import plural
 from spack.util.environment import (
     env_flag, filter_system_paths, get_path, is_system_path,
@@ -453,7 +457,6 @@ def _set_variables_for_single_module(pkg, module):
     jobs = spack.config.get('config:build_jobs', 16) if pkg.parallel else 1
     jobs = min(jobs, multiprocessing.cpu_count())
-    assert jobs is not None, "no default set for config:build_jobs"

     m = module
     m.make_jobs = jobs
@@ -713,28 +716,42 @@ def load_external_modules(pkg):
         load_module(external_module)


-def setup_package(pkg, dirty):
+def setup_package(pkg, dirty, context='build'):
     """Execute all environment setup routines."""
-    build_env = EnvironmentModifications()
+    env = EnvironmentModifications()

     if not dirty:
         clean_environment()

-    set_compiler_environment_variables(pkg, build_env)
-    set_build_environment_variables(pkg, build_env, dirty)
-    pkg.architecture.platform.setup_platform_environment(pkg, build_env)
+    # setup compilers and build tools for build contexts
+    need_compiler = context == 'build' or (context == 'test' and
+                                           pkg.test_requires_compiler)
+    if need_compiler:
+        set_compiler_environment_variables(pkg, env)
+        set_build_environment_variables(pkg, env, dirty)

-    build_env.extend(
-        modifications_from_dependencies(pkg.spec, context='build')
-    )
+    # architecture specific setup
+    pkg.architecture.platform.setup_platform_environment(pkg, env)

-    if (not dirty) and (not build_env.is_unset('CPATH')):
-        tty.debug("A dependency has updated CPATH, this may lead pkg-config"
-                  " to assume that the package is part of the system"
-                  " includes and omit it when invoked with '--cflags'.")
+    if context == 'build':
+        # recursive post-order dependency information
+        env.extend(
+            modifications_from_dependencies(pkg.spec, context=context)
+        )

-    set_module_variables_for_package(pkg)
-    pkg.setup_build_environment(build_env)
+        if (not dirty) and (not env.is_unset('CPATH')):
+            tty.debug("A dependency has updated CPATH, this may lead pkg-"
+                      "config to assume that the package is part of the system"
+                      " includes and omit it when invoked with '--cflags'.")
+
+        # setup package itself
+        set_module_variables_for_package(pkg)
+        pkg.setup_build_environment(env)
+    elif context == 'test':
+        import spack.user_environment as uenv  # avoid circular import
+        env.extend(uenv.environment_modifications_for_spec(pkg.spec))
+        set_module_variables_for_package(pkg)
+        env.prepend_path('PATH', '.')

     # Loading modules, in particular if they are meant to be used outside
     # of Spack, can change environment variables that are relevant to the
@@ -744,15 +761,16 @@ def setup_package(pkg, dirty):
     # unnecessary. Modules affecting these variables will be overwritten anyway
     with preserve_environment('CC', 'CXX', 'FC', 'F77'):
         # All module loads that otherwise would belong in previous
-        # functions have to occur after the build_env object has its
+        # functions have to occur after the env object has its
         # modifications applied. Otherwise the environment modifications
         # could undo module changes, such as unsetting LD_LIBRARY_PATH
         # after a module changes it.
-        for mod in pkg.compiler.modules:
-            # Fixes issue https://github.com/spack/spack/issues/3153
-            if os.environ.get("CRAY_CPU_TARGET") == "mic-knl":
-                load_module("cce")
-            load_module(mod)
+        if need_compiler:
+            for mod in pkg.compiler.modules:
+                # Fixes issue https://github.com/spack/spack/issues/3153
+                if os.environ.get("CRAY_CPU_TARGET") == "mic-knl":
+                    load_module("cce")
+                load_module(mod)

     # kludge to handle cray libsci being automatically loaded by PrgEnv
     # modules on cray platform. Module unload does no damage when
@@ -766,12 +784,12 @@ def setup_package(pkg, dirty):
     implicit_rpaths = pkg.compiler.implicit_rpaths()
     if implicit_rpaths:
-        build_env.set('SPACK_COMPILER_IMPLICIT_RPATHS',
-                      ':'.join(implicit_rpaths))
+        env.set('SPACK_COMPILER_IMPLICIT_RPATHS',
+                ':'.join(implicit_rpaths))

     # Make sure nothing's strange about the Spack environment.
-    validate(build_env, tty.warn)
-    build_env.apply_modifications()
+    validate(env, tty.warn)
+    env.apply_modifications()


 def modifications_from_dependencies(spec, context):
@@ -791,7 +809,8 @@ def modifications_from_dependencies(spec, context):
     deptype_and_method = {
         'build': (('build', 'link', 'test'),
                   'setup_dependent_build_environment'),
-        'run': (('link', 'run'), 'setup_dependent_run_environment')
+        'run': (('link', 'run'), 'setup_dependent_run_environment'),
+        'test': (('link', 'run', 'test'), 'setup_dependent_run_environment')
     }
     deptype, method = deptype_and_method[context]
@@ -808,6 +827,8 @@ def modifications_from_dependencies(spec, context):
 def _setup_pkg_and_run(serialized_pkg, function, kwargs, child_pipe,
                        input_multiprocess_fd):

+    context = kwargs.get('context', 'build')
+
     try:
         # We are in the child process. Python sets sys.stdin to
         # open(os.devnull) to prevent our process and its parent from
@@ -821,7 +842,8 @@ def _setup_pkg_and_run(serialized_pkg, function, kwargs, child_pipe,
         if not kwargs.get('fake', False):
             kwargs['unmodified_env'] = os.environ.copy()
-            setup_package(pkg, dirty=kwargs.get('dirty', False))
+            setup_package(pkg, dirty=kwargs.get('dirty', False),
+                          context=context)
         return_value = function(pkg, kwargs)
         child_pipe.send(return_value)
@@ -841,13 +863,18 @@ def _setup_pkg_and_run(serialized_pkg, function, kwargs, child_pipe,
         # show that, too.
         package_context = get_package_context(tb)

-        build_log = None
-        try:
-            if hasattr(pkg, 'log_path'):
-                build_log = pkg.log_path
-        except NameError:
-            # 'pkg' is not defined yet
-            pass
+        logfile = None
+        if context == 'build':
+            try:
+                if hasattr(pkg, 'log_path'):
+                    logfile = pkg.log_path
+            except NameError:
+                # 'pkg' is not defined yet
+                pass
+        elif context == 'test':
+            logfile = os.path.join(
+                pkg.test_suite.stage,
+                spack.install_test.TestSuite.test_log_name(pkg.spec))

         # make a pickleable exception to send to parent.
         msg = "%s: %s" % (exc_type.__name__, str(exc))
@@ -855,7 +882,7 @@ def _setup_pkg_and_run(serialized_pkg, function, kwargs, child_pipe,
         ce = ChildError(msg,
                         exc_type.__module__,
                         exc_type.__name__,
-                        tb_string, build_log, package_context)
+                        tb_string, logfile, context, package_context)
         child_pipe.send(ce)

     finally:
@@ -873,9 +900,6 @@ def start_build_process(pkg, function, kwargs):
             child process for.
         function (callable): argless function to run in the child
             process.
-        dirty (bool): If True, do NOT clean the environment before
-            building.
-        fake (bool): If True, skip package setup b/c it's not a real build

     Usage::
@@ -961,6 +985,7 @@ def get_package_context(traceback, context=3):
     Args:
         traceback (traceback): A traceback from some exception raised during
             install
+        context (int): Lines of context to show before and after the line
             where the error happened
@@ -1067,13 +1092,14 @@ class ChildError(InstallError):
     # context instead of Python context.
     build_errors = [('spack.util.executable', 'ProcessError')]

-    def __init__(self, msg, module, classname, traceback_string, build_log,
-                 context):
+    def __init__(self, msg, module, classname, traceback_string, log_name,
+                 log_type, context):
         super(ChildError, self).__init__(msg)
         self.module = module
         self.name = classname
         self.traceback = traceback_string
-        self.build_log = build_log
+        self.log_name = log_name
+        self.log_type = log_type
         self.context = context

     @property
@@ -1081,26 +1107,16 @@ def long_message(self):
         out = StringIO()
         out.write(self._long_message if self._long_message else '')

+        have_log = self.log_name and os.path.exists(self.log_name)
+
         if (self.module, self.name) in ChildError.build_errors:
             # The error happened in some external executed process. Show
-            # the build log with errors or warnings highlighted.
-            if self.build_log and os.path.exists(self.build_log):
-                errors, warnings = parse_log_events(self.build_log)
-                nerr = len(errors)
-                nwar = len(warnings)
-                if nerr > 0:
-                    # If errors are found, only display errors
-                    out.write(
-                        "\n%s found in build log:\n" % plural(nerr, 'error'))
-                    out.write(make_log_context(errors))
-                elif nwar > 0:
-                    # If no errors are found but warnings are, display warnings
-                    out.write(
-                        "\n%s found in build log:\n" % plural(nwar, 'warning'))
-                    out.write(make_log_context(warnings))
+            # the log with errors or warnings highlighted.
+            if have_log:
+                write_log_summary(out, self.log_type, self.log_name)

         else:
-            # The error happened in in the Python code, so try to show
+            # The error happened in the Python code, so try to show
             # some context from the Package itself.
             if self.context:
                 out.write('\n')
@@ -1110,14 +1126,14 @@ def long_message(self):
         if out.getvalue():
             out.write('\n')

-        if self.build_log and os.path.exists(self.build_log):
-            out.write('See build log for details:\n')
-            out.write('  %s\n' % self.build_log)
+        if have_log:
+            out.write('See {0} log for details:\n'.format(self.log_type))
+            out.write('  {0}\n'.format(self.log_name))

         return out.getvalue()

     def __str__(self):
-        return self.message + self.long_message + self.traceback
+        return self.message

     def __reduce__(self):
         """__reduce__ is used to serialize (pickle) ChildErrors.
@@ -1130,13 +1146,14 @@ def __reduce__(self):
             self.module,
             self.name,
             self.traceback,
-            self.build_log,
+            self.log_name,
+            self.log_type,
             self.context)


-def _make_child_error(msg, module, name, traceback, build_log, context):
+def _make_child_error(msg, module, name, traceback, log, log_type, context):
     """Used by __reduce__ in ChildError to reconstruct pickled errors."""
-    return ChildError(msg, module, name, traceback, build_log, context)
+    return ChildError(msg, module, name, traceback, log, log_type, context)


 class StopPhase(spack.error.SpackError):
@@ -1147,3 +1164,30 @@ def __reduce__(self):

 def _make_stop_phase(msg, long_msg):
     return StopPhase(msg, long_msg)
+
+
+def write_log_summary(out, log_type, log, last=None):
+    errors, warnings = parse_log_events(log)
+    nerr = len(errors)
+    nwar = len(warnings)
+    if nerr > 0:
+        if last and nerr > last:
+            errors = errors[-last:]
+            nerr = last
+        # If errors are found, only display errors
+        out.write(
+            "\n%s found in %s log:\n" %
+            (plural(nerr, 'error'), log_type))
+        out.write(make_log_context(errors))
+    elif nwar > 0:
+        if last and nwar > last:
+            warnings = warnings[-last:]
+            nwar = last
+        # If no errors are found but warnings are, display warnings
+        out.write(
+            "\n%s found in %s log:\n" %
+            (plural(nwar, 'warning'), log_type))
+        out.write(make_log_context(warnings))


@@ -324,14 +324,21 @@ def flags_to_build_system_args(self, flags):
                 self.cmake_flag_args.append(libs_string.format(lang,
                                                                libs_flags))

+    @property
+    def build_dirname(self):
+        """Returns the directory name to use when building the package
+
+        :return: name of the subdirectory for building the package
+        """
+        return 'spack-build-%s' % self.spec.dag_hash(7)
+
     @property
     def build_directory(self):
         """Returns the directory to use when building the package

         :return: directory where to build the package
         """
-        dirname = 'spack-build-%s' % self.spec.dag_hash(7)
-        return os.path.join(self.stage.path, dirname)
+        return os.path.join(self.stage.path, self.build_dirname)

     def cmake_args(self):
         """Produces a list containing all the arguments that must be passed to


@@ -1017,6 +1017,15 @@ def setup_run_environment(self, env):
             env.extend(EnvironmentModifications.from_sourcing_file(f, *args))

+        if self.spec.name in ('intel', 'intel-parallel-studio'):
+            # this package provides compilers
+            # TODO: fix check above when compilers are dependencies
+            env.set('CC', self.prefix.bin.icc)
+            env.set('CXX', self.prefix.bin.icpc)
+            env.set('FC', self.prefix.bin.ifort)
+            env.set('F77', self.prefix.bin.ifort)
+            env.set('F90', self.prefix.bin.ifort)
+
     def setup_dependent_build_environment(self, env, dependent_spec):
         # NB: This function is overwritten by 'mpi' provider packages:
         #


@ -89,7 +89,7 @@ def configure(self, spec, prefix):
     build_system_class = 'PythonPackage'

     #: Callback names for build-time test
-    build_time_test_callbacks = ['test']
+    build_time_test_callbacks = ['build_test']

     #: Callback names for install-time test
     install_time_test_callbacks = ['import_module_test']
@ -359,7 +359,7 @@ def check_args(self, spec, prefix):
     # Testing

-    def test(self):
+    def build_test(self):
         """Run unit tests after in-place build.

         These tests are only run if the package actually has a 'test' command.


@ -33,7 +33,7 @@ class SConsPackage(PackageBase):
     build_system_class = 'SConsPackage'

     #: Callback names for build-time test
-    build_time_test_callbacks = ['test']
+    build_time_test_callbacks = ['build_test']

     depends_on('scons', type='build')
@ -59,7 +59,7 @@ def install(self, spec, prefix):
     # Testing

-    def test(self):
+    def build_test(self):
         """Run unit tests after build.

         By default, does nothing. Override this if you want to


@ -47,10 +47,10 @@ class WafPackage(PackageBase):
     build_system_class = 'WafPackage'

     # Callback names for build-time test
-    build_time_test_callbacks = ['test']
+    build_time_test_callbacks = ['build_test']

     # Callback names for install-time test
-    install_time_test_callbacks = ['installtest']
+    install_time_test_callbacks = ['install_test']

     # Much like AutotoolsPackage does not require automake and autoconf
     # to build, WafPackage does not require waf to build. It only requires
@ -106,7 +106,7 @@ def install_args(self):
     # Testing

-    def test(self):
+    def build_test(self):
         """Run unit tests after build.

         By default, does nothing. Override this if you want to
@ -116,7 +116,7 @@ def test(self):
     run_after('build')(PackageBase._run_default_build_time_test_callbacks)

-    def installtest(self):
+    def install_test(self):
         """Run unit tests after install.

         By default, does nothing. Override this if you want to
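Renaming the callbacks from `test`/`installtest` to `build_test`/`install_test` frees the `test` name for the new install-time smoke tests. A hypothetical package would override the renamed hooks like this (`WafPackageSketch` and `MyTool` are stand-ins invented for this sketch, not Spack's real classes):

```python
# Stand-in base class mirroring the renamed callback hooks.
class WafPackageSketch:
    build_time_test_callbacks = ['build_test']      # run after 'waf build'
    install_time_test_callbacks = ['install_test']  # run after 'waf install'

    def build_test(self):
        pass  # packages override this to run post-build checks

    def install_test(self):
        pass  # packages override this to run post-install checks

class MyTool(WafPackageSketch):
    def build_test(self):
        # a real package would run its test executables here
        return 'ran build checks'

print(MyTool().build_test())
```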


@ -2,86 +2,15 @@
 # Spack Project Developers. See the top-level COPYRIGHT file for details.
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
-from __future__ import print_function
-
-import argparse
-import os
-
-import llnl.util.tty as tty
-
-import spack.build_environment as build_environment
-import spack.cmd
-import spack.cmd.common.arguments as arguments
-from spack.util.environment import dump_environment, pickle_environment
+import spack.cmd.common.env_utility as env_utility

 description = "run a command in a spec's install environment, " \
     "or dump its environment to screen or file"
 section = "build"
 level = "long"

-
-def setup_parser(subparser):
-    arguments.add_common_arguments(subparser, ['clean', 'dirty'])
-    subparser.add_argument(
-        '--dump', metavar="FILE",
-        help="dump a source-able environment to FILE"
-    )
-    subparser.add_argument(
-        '--pickle', metavar="FILE",
-        help="dump a pickled source-able environment to FILE"
-    )
-    subparser.add_argument(
-        'spec', nargs=argparse.REMAINDER,
-        metavar='spec [--] [cmd]...',
-        help="spec of package environment to emulate")
-    subparser.epilog\
-        = 'If a command is not specified, the environment will be printed ' \
-        'to standard output (cf /usr/bin/env) unless --dump and/or --pickle ' \
-        'are specified.\n\nIf a command is specified and spec is ' \
-        'multi-word, then the -- separator is obligatory.'
+setup_parser = env_utility.setup_parser


 def build_env(parser, args):
-    if not args.spec:
-        tty.die("spack build-env requires a spec.")
-
-    # Specs may have spaces in them, so if they do, require that the
-    # caller put a '--' between the spec and the command to be
-    # executed. If there is no '--', assume that the spec is the
-    # first argument.
-    sep = '--'
-    if sep in args.spec:
-        s = args.spec.index(sep)
-        spec = args.spec[:s]
-        cmd = args.spec[s + 1:]
-    else:
-        spec = args.spec[0]
-        cmd = args.spec[1:]
-
-    specs = spack.cmd.parse_specs(spec, concretize=True)
-    if len(specs) > 1:
-        tty.die("spack build-env only takes one spec.")
-    spec = specs[0]
-
-    build_environment.setup_package(spec.package, args.dirty)
-
-    if args.dump:
-        # Dump a source-able environment to a text file.
-        tty.msg("Dumping a source-able environment to {0}".format(args.dump))
-        dump_environment(args.dump)
-
-    if args.pickle:
-        # Dump a source-able environment to a pickle file.
-        tty.msg(
-            "Pickling a source-able environment to {0}".format(args.pickle))
-        pickle_environment(args.pickle)
-
-    if cmd:
-        # Execute the command with the new environment
-        os.execvp(cmd[0], cmd)
-
-    elif not bool(args.pickle or args.dump):
-        # If no command or dump/pickle option act like the "env" command
-        # and print out env vars.
-        for key, val in os.environ.items():
-            print("%s=%s" % (key, val))
+    env_utility.emulate_env_utility('build-env', 'build', args)


@ -10,10 +10,11 @@
 import llnl.util.tty as tty

 import spack.caches
-import spack.cmd
+import spack.cmd.test
 import spack.cmd.common.arguments as arguments
 import spack.repo
 import spack.stage
+import spack.config
 from spack.paths import lib_path, var_path


@ -275,3 +275,53 @@ def no_checksum():
    return Args(
        '-n', '--no-checksum', action='store_true', default=False,
        help="do not use checksums to verify downloaded files (unsafe)")
def add_cdash_args(subparser, add_help):
cdash_help = {}
if add_help:
cdash_help['upload-url'] = "CDash URL where reports will be uploaded"
cdash_help['build'] = """The name of the build that will be reported to CDash.
Defaults to spec of the package to operate on."""
cdash_help['site'] = """The site name that will be reported to CDash.
Defaults to current system hostname."""
cdash_help['track'] = """Results will be reported to this group on CDash.
Defaults to Experimental."""
cdash_help['buildstamp'] = """Instead of letting the CDash reporter prepare the
buildstamp which, when combined with build name, site and project,
uniquely identifies the build, provide this argument to identify
the build yourself. Format: %%Y%%m%%d-%%H%%M-[cdash-track]"""
else:
cdash_help['upload-url'] = argparse.SUPPRESS
cdash_help['build'] = argparse.SUPPRESS
cdash_help['site'] = argparse.SUPPRESS
cdash_help['track'] = argparse.SUPPRESS
cdash_help['buildstamp'] = argparse.SUPPRESS
subparser.add_argument(
'--cdash-upload-url',
default=None,
help=cdash_help['upload-url']
)
subparser.add_argument(
'--cdash-build',
default=None,
help=cdash_help['build']
)
subparser.add_argument(
'--cdash-site',
default=None,
help=cdash_help['site']
)
cdash_subgroup = subparser.add_mutually_exclusive_group()
cdash_subgroup.add_argument(
'--cdash-track',
default='Experimental',
help=cdash_help['track']
)
cdash_subgroup.add_argument(
'--cdash-buildstamp',
default=None,
help=cdash_help['buildstamp']
)
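This shared helper gives `spack install` and `spack test run` identical CDash flags, with `--cdash-track` and `--cdash-buildstamp` mutually exclusive because a buildstamp already encodes the track. A self-contained sketch of the same argparse pattern (re-created here for illustration; the real helper lives in `spack.cmd.common.arguments`):

```python
import argparse

def add_cdash_args(subparser, add_help):
    # when add_help is False, argparse.SUPPRESS hides the options from -h
    hide = None if add_help else argparse.SUPPRESS
    subparser.add_argument(
        '--cdash-upload-url', default=None,
        help=hide or "CDash URL where reports will be uploaded")
    subparser.add_argument(
        '--cdash-build', default=None,
        help=hide or "build name reported to CDash")
    subparser.add_argument(
        '--cdash-site', default=None,
        help=hide or "site name reported to CDash")
    # track and buildstamp conflict: a buildstamp already encodes the track
    group = subparser.add_mutually_exclusive_group()
    group.add_argument(
        '--cdash-track', default='Experimental',
        help=hide or "CDash group results are reported to")
    group.add_argument(
        '--cdash-buildstamp', default=None,
        help=hide or "explicit buildstamp identifying the build")

parser = argparse.ArgumentParser()
add_cdash_args(parser, add_help=True)
args = parser.parse_args(['--cdash-track', 'Nightly'])
print(args.cdash_track)
```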


@ -0,0 +1,82 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
import argparse
import os
import llnl.util.tty as tty
import spack.build_environment as build_environment
import spack.paths
import spack.cmd
import spack.cmd.common.arguments as arguments
from spack.util.environment import dump_environment, pickle_environment
def setup_parser(subparser):
arguments.add_common_arguments(subparser, ['clean', 'dirty'])
subparser.add_argument(
'--dump', metavar="FILE",
help="dump a source-able environment to FILE"
)
subparser.add_argument(
'--pickle', metavar="FILE",
help="dump a pickled source-able environment to FILE"
)
subparser.add_argument(
'spec', nargs=argparse.REMAINDER,
metavar='spec [--] [cmd]...',
help="specs of package environment to emulate")
subparser.epilog\
= 'If a command is not specified, the environment will be printed ' \
'to standard output (cf /usr/bin/env) unless --dump and/or --pickle ' \
'are specified.\n\nIf a command is specified and spec is ' \
'multi-word, then the -- separator is obligatory.'
def emulate_env_utility(cmd_name, context, args):
if not args.spec:
tty.die("spack %s requires a spec." % cmd_name)
# Specs may have spaces in them, so if they do, require that the
# caller put a '--' between the spec and the command to be
# executed. If there is no '--', assume that the spec is the
# first argument.
sep = '--'
if sep in args.spec:
s = args.spec.index(sep)
spec = args.spec[:s]
cmd = args.spec[s + 1:]
else:
spec = args.spec[0]
cmd = args.spec[1:]
specs = spack.cmd.parse_specs(spec, concretize=True)
if len(specs) > 1:
tty.die("spack %s only takes one spec." % cmd_name)
spec = specs[0]
build_environment.setup_package(spec.package, args.dirty, context)
if args.dump:
# Dump a source-able environment to a text file.
tty.msg("Dumping a source-able environment to {0}".format(args.dump))
dump_environment(args.dump)
if args.pickle:
# Dump a source-able environment to a pickle file.
tty.msg(
"Pickling a source-able environment to {0}".format(args.pickle))
pickle_environment(args.pickle)
if cmd:
# Execute the command with the new environment
os.execvp(cmd[0], cmd)
elif not bool(args.pickle or args.dump):
# If no command or dump/pickle option then act like the "env" command
# and print out env vars.
for key, val in os.environ.items():
print("%s=%s" % (key, val))
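The spec/command split above relies on a literal `--` separator so multi-word specs stay unambiguous. A self-contained sketch of just that parsing step (`split_spec_and_cmd` is a hypothetical helper name, not part of Spack):

```python
def split_spec_and_cmd(argv):
    """Split ['spec', 'words', '--', 'cmd', 'args'] at the '--' separator.

    Mirrors emulate_env_utility: with a separator, the spec is the list of
    words before it; without one, the first word is the spec (a plain
    string) and the rest is the command.
    """
    sep = '--'
    if sep in argv:
        s = argv.index(sep)
        return argv[:s], argv[s + 1:]
    return argv[0], argv[1:]

print(split_spec_and_cmd(['mpileaks', '^mpich', '--', 'make', '-j4']))
```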


@ -167,65 +167,8 @@ def setup_parser(subparser):
         action='store_true',
         help="Show usage instructions for CDash reporting"
     )
-    subparser.add_argument(
-        '-y', '--yes-to-all',
-        action='store_true',
-        dest='yes_to_all',
-        help="""assume "yes" is the answer to every confirmation request.
-To run completely non-interactively, also specify '--no-checksum'."""
-    )
-    add_cdash_args(subparser, False)
-    arguments.add_common_arguments(subparser, ['spec'])
+    arguments.add_cdash_args(subparser, False)
+    arguments.add_common_arguments(subparser, ['yes_to_all', 'spec'])


-def add_cdash_args(subparser, add_help):
-    cdash_help = {}
-    if add_help:
-        cdash_help['upload-url'] = "CDash URL where reports will be uploaded"
-        cdash_help['build'] = """The name of the build that will be reported to CDash.
-Defaults to spec of the package to install."""
-        cdash_help['site'] = """The site name that will be reported to CDash.
-Defaults to current system hostname."""
-        cdash_help['track'] = """Results will be reported to this group on CDash.
-Defaults to Experimental."""
-        cdash_help['buildstamp'] = """Instead of letting the CDash reporter prepare the
-buildstamp which, when combined with build name, site and project,
-uniquely identifies the build, provide this argument to identify
-the build yourself. Format: %%Y%%m%%d-%%H%%M-[cdash-track]"""
-    else:
-        cdash_help['upload-url'] = argparse.SUPPRESS
-        cdash_help['build'] = argparse.SUPPRESS
-        cdash_help['site'] = argparse.SUPPRESS
-        cdash_help['track'] = argparse.SUPPRESS
-        cdash_help['buildstamp'] = argparse.SUPPRESS
-
-    subparser.add_argument(
-        '--cdash-upload-url',
-        default=None,
-        help=cdash_help['upload-url']
-    )
-    subparser.add_argument(
-        '--cdash-build',
-        default=None,
-        help=cdash_help['build']
-    )
-    subparser.add_argument(
-        '--cdash-site',
-        default=None,
-        help=cdash_help['site']
-    )
-    cdash_subgroup = subparser.add_mutually_exclusive_group()
-    cdash_subgroup.add_argument(
-        '--cdash-track',
-        default='Experimental',
-        help=cdash_help['track']
-    )
-    cdash_subgroup.add_argument(
-        '--cdash-buildstamp',
-        default=None,
-        help=cdash_help['buildstamp']
-    )
 def default_log_file(spec):
@ -283,11 +226,12 @@ def install(parser, args, **kwargs):
              SPACK_CDASH_AUTH_TOKEN
                        authentication token to present to CDash
              '''))
-        add_cdash_args(parser, True)
+        arguments.add_cdash_args(parser, True)
         parser.print_help()
         return

-    reporter = spack.report.collect_info(args.log_format, args)
+    reporter = spack.report.collect_info(
+        spack.package.PackageInstaller, '_install_task', args.log_format, args)

     if args.log_file:
         reporter.filename = args.log_file
@ -383,7 +327,7 @@ def install(parser, args, **kwargs):
     if not args.log_file and not reporter.filename:
         reporter.filename = default_log_file(specs[0])
     reporter.specs = specs

-    with reporter:
+    with reporter('build'):
         if args.overwrite:
             installed = list(filter(lambda x: x,


@ -54,6 +54,9 @@ def setup_parser(subparser):
     subparser.add_argument(
         '--update', metavar='FILE', default=None, action='store',
         help='write output to the specified file, if any package is newer')
+    subparser.add_argument(
+        '-v', '--virtuals', action='store_true', default=False,
+        help='include virtual packages in list')

     arguments.add_common_arguments(subparser, ['tags'])
@ -267,7 +270,7 @@ def list(parser, args):
     formatter = formatters[args.format]

     # Retrieve the names of all the packages
-    pkgs = set(spack.repo.all_package_names())
+    pkgs = set(spack.repo.all_package_names(args.virtuals))

     # Filter the set appropriately
     sorted_packages = filter_by_name(pkgs, args)


@ -4,166 +4,381 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
import os
import argparse
import textwrap
import fnmatch
import re
import shutil

import llnl.util.tty as tty

import spack.install_test
import spack.environment as ev
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.report
import spack.package

description = "run spack's tests for an install"
section = "administrator"
level = "long"


def first_line(docstring):
    """Return the first line of the docstring."""
    return docstring.split('\n')[0]


def setup_parser(subparser):
    sp = subparser.add_subparsers(metavar='SUBCOMMAND', dest='test_command')

    # Run
    run_parser = sp.add_parser('run', description=test_run.__doc__,
                               help=first_line(test_run.__doc__))

    alias_help_msg = "Provide an alias for this test-suite"
    alias_help_msg += " for subsequent access."
    run_parser.add_argument('--alias', help=alias_help_msg)

    run_parser.add_argument(
        '--fail-fast', action='store_true',
        help="Stop tests for each package after the first failure."
    )
    run_parser.add_argument(
        '--fail-first', action='store_true',
        help="Stop after the first failed package."
    )
    run_parser.add_argument(
        '--keep-stage',
        action='store_true',
        help='Keep testing directory for debugging'
    )
    run_parser.add_argument(
        '--log-format',
        default=None,
        choices=spack.report.valid_formats,
        help="format to be used for log files"
    )
    run_parser.add_argument(
        '--log-file',
        default=None,
        help="filename for the log file. if not passed a default will be used"
    )
    arguments.add_cdash_args(run_parser, False)
    run_parser.add_argument(
        '--help-cdash',
        action='store_true',
        help="Show usage instructions for CDash reporting"
    )

    cd_group = run_parser.add_mutually_exclusive_group()
    arguments.add_common_arguments(cd_group, ['clean', 'dirty'])

    arguments.add_common_arguments(run_parser, ['installed_specs'])

    # List
    sp.add_parser('list', description=test_list.__doc__,
                  help=first_line(test_list.__doc__))

    # Find
    find_parser = sp.add_parser('find', description=test_find.__doc__,
                                help=first_line(test_find.__doc__))
    find_parser.add_argument(
        'filter', nargs=argparse.REMAINDER,
        help='optional case-insensitive glob patterns to filter results.')

    # Status
    status_parser = sp.add_parser('status', description=test_status.__doc__,
                                  help=first_line(test_status.__doc__))
    status_parser.add_argument(
        'names', nargs=argparse.REMAINDER,
        help="Test suites for which to print status")

    # Results
    results_parser = sp.add_parser('results', description=test_results.__doc__,
                                   help=first_line(test_results.__doc__))
    results_parser.add_argument(
        '-l', '--logs', action='store_true',
        help="print the test log for each matching package")
    results_parser.add_argument(
        '-f', '--failed', action='store_true',
        help="only show results for failed tests of matching packages")
    results_parser.add_argument(
        'names', nargs=argparse.REMAINDER,
        metavar='[name(s)] [-- installed_specs]...',
        help="suite names and installed package constraints")
    results_parser.epilog = 'Test results will be filtered by space-' \
        'separated suite name(s) and installed\nspecs when provided. '\
        'If names are provided, then only results for those test\nsuites '\
        'will be shown. If installed specs are provided, then only results'\
        '\nmatching those specs will be shown.'

    # Remove
    remove_parser = sp.add_parser('remove', description=test_remove.__doc__,
                                  help=first_line(test_remove.__doc__))
    arguments.add_common_arguments(remove_parser, ['yes_to_all'])
    remove_parser.add_argument(
        'names', nargs=argparse.REMAINDER,
        help="Test suites to remove from test stage")


def test_run(args):
    """Run tests for the specified installed packages.

    If no specs are listed, run tests for all packages in the current
    environment or all installed packages if there is no active environment.
    """
    # cdash help option
    if args.help_cdash:
        parser = argparse.ArgumentParser(
            formatter_class=argparse.RawDescriptionHelpFormatter,
            epilog=textwrap.dedent('''\
environment variables:
  SPACK_CDASH_AUTH_TOKEN
                        authentication token to present to CDash
                        '''))
        arguments.add_cdash_args(parser, True)
        parser.print_help()
        return

    # set config option for fail-fast
    if args.fail_fast:
        spack.config.set('config:fail_fast', True, scope='command_line')

    # Get specs to test
    env = ev.get_env(args, 'test')
    hashes = env.all_hashes() if env else None

    specs = spack.cmd.parse_specs(args.specs) if args.specs else [None]
    specs_to_test = []
    for spec in specs:
        matching = spack.store.db.query_local(spec, hashes=hashes)
        if spec and not matching:
            tty.warn("No installed packages match spec %s" % spec)
        specs_to_test.extend(matching)

    # test_stage_dir
    test_suite = spack.install_test.TestSuite(specs_to_test, args.alias)
    test_suite.ensure_stage()
    tty.msg("Spack test %s" % test_suite.name)

    # Set up reporter
    setattr(args, 'package', [s.format() for s in test_suite.specs])
    reporter = spack.report.collect_info(
        spack.package.PackageBase, 'do_test', args.log_format, args)
    if not reporter.filename:
        if args.log_file:
            if os.path.isabs(args.log_file):
                log_file = args.log_file
            else:
                log_dir = os.getcwd()
                log_file = os.path.join(log_dir, args.log_file)
        else:
            log_file = os.path.join(
                os.getcwd(),
                'test-%s' % test_suite.name)
        reporter.filename = log_file
    reporter.specs = specs_to_test

    with reporter('test', test_suite.stage):
        test_suite(remove_directory=not args.keep_stage,
                   dirty=args.dirty,
                   fail_first=args.fail_first)


def has_test_method(pkg):
    return pkg.test.__func__ != spack.package.PackageBase.test


def test_list(args):
    """List all installed packages with available tests."""
    # TODO: This can be extended to have all of the output formatting options
    # from `spack find`.
    env = ev.get_env(args, 'test')
    hashes = env.all_hashes() if env else None

    specs = spack.store.db.query(hashes=hashes)
    specs = list(filter(lambda s: has_test_method(s.package), specs))

    spack.cmd.display_specs(specs, long=True)


def test_find(args):  # TODO: merge with status (noargs)
    """Find tests that are running or have available results.

    Displays aliases for tests that have them, otherwise test suite content
    hashes."""
    test_suites = spack.install_test.get_all_test_suites()

    # Filter tests by filter argument
    if args.filter:
        def create_filter(f):
            raw = fnmatch.translate('f' if '*' in f or '?' in f
                                    else '*' + f + '*')
            return re.compile(raw, flags=re.IGNORECASE)
        filters = [create_filter(f) for f in args.filter]

        def match(t, f):
            return f.match(t)
        test_suites = [t for t in test_suites
                       if any(match(t.alias, f) for f in filters) and
                       os.path.isdir(t.stage)]

    names = [t.name for t in test_suites]

    if names:
        # TODO: Make these specify results vs active
        msg = "Spack test results available for the following tests:\n"
        msg += "        %s\n" % ' '.join(names)
        msg += "    Run `spack test remove` to remove all tests"
        tty.msg(msg)
    else:
        msg = "No test results match the query\n"
        msg += "        Tests may have been removed using `spack test remove`"
        tty.msg(msg)


def test_status(args):
    """Get the current status for the specified Spack test suite(s)."""
    if args.names:
        test_suites = []
        for name in args.names:
            test_suite = spack.install_test.get_test_suite(name)
            if test_suite:
                test_suites.append(test_suite)
            else:
                tty.msg("No test suite %s found in test stage" % name)
    else:
        test_suites = spack.install_test.get_all_test_suites()
        if not test_suites:
            tty.msg("No test suites with status to report")

    for test_suite in test_suites:
        # TODO: Make this handle capability tests too
        # TODO: Make this handle tests running in another process
        tty.msg("Test suite %s completed" % test_suite.name)


def _report_suite_results(test_suite, args, constraints):
    """Report the relevant test suite results."""
    # TODO: Make this handle capability tests too
    # The results file may turn out to be a placeholder for future work
    if constraints:
        # TBD: Should I be refactoring or re-using ConstraintAction?
        qspecs = spack.cmd.parse_specs(constraints)
        specs = {}
        for spec in qspecs:
            for s in spack.store.db.query(spec, installed=True):
                specs[s.dag_hash()] = s
        specs = sorted(specs.values())
        test_specs = dict((test_suite.test_pkg_id(s), s) for s in
                          test_suite.specs if s in specs)
    else:
        test_specs = dict((test_suite.test_pkg_id(s), s) for s in
                          test_suite.specs)

    if not test_specs:
        return

    if os.path.exists(test_suite.results_file):
        results_desc = 'Failing results' if args.failed else 'Results'
        matching = ", spec matching '{0}'".format(' '.join(constraints)) \
            if constraints else ''
        tty.msg("{0} for test suite '{1}'{2}:"
                .format(results_desc, test_suite.name, matching))

        results = {}
        with open(test_suite.results_file, 'r') as f:
            for line in f:
                pkg_id, status = line.split()
                results[pkg_id] = status

        for pkg_id in test_specs:
            if pkg_id in results:
                status = results[pkg_id]
                if args.failed and status != 'FAILED':
                    continue

                msg = "  {0} {1}".format(pkg_id, status)
                if args.logs:
                    spec = test_specs[pkg_id]
                    log_file = test_suite.log_file_for_spec(spec)
                    if os.path.isfile(log_file):
                        with open(log_file, 'r') as f:
                            msg += '\n{0}'.format(''.join(f.readlines()))
                tty.msg(msg)
    else:
        msg = "Test %s has no results.\n" % test_suite.name
        msg += "        Check if it is running with "
        msg += "`spack test status %s`" % test_suite.name
        tty.msg(msg)


def test_results(args):
    """Get the results from Spack test suite(s) (default all)."""
    if args.names:
        try:
            sep_index = args.names.index('--')
            names = args.names[:sep_index]
            constraints = args.names[sep_index + 1:]
        except ValueError:
            names = args.names
            constraints = None
    else:
        names, constraints = None, None

    if names:
        test_suites = [spack.install_test.get_test_suite(name) for name
                       in names]
        if not test_suites:
            tty.msg('No test suite(s) found in test stage: {0}'
                    .format(', '.join(names)))
    else:
        test_suites = spack.install_test.get_all_test_suites()
        if not test_suites:
            tty.msg("No test suites with results to report")

    for test_suite in test_suites:
        _report_suite_results(test_suite, args, constraints)


def test_remove(args):
    """Remove results from Spack test suite(s) (default all).

    If no test suite is listed, remove results for all suites.

    Removed tests can no longer be accessed for results or status, and will
    not appear in `spack test list` results."""
    if args.names:
        test_suites = []
        for name in args.names:
            test_suite = spack.install_test.get_test_suite(name)
            if test_suite:
                test_suites.append(test_suite)
            else:
                tty.msg("No test suite %s found in test stage" % name)
    else:
        test_suites = spack.install_test.get_all_test_suites()

    if not test_suites:
        tty.msg("No test suites to remove")
        return

    if not args.yes_to_all:
        msg = 'The following test suites will be removed:\n\n'
        msg += '    ' + ' '.join(test.name for test in test_suites) + '\n'
        tty.msg(msg)
        answer = tty.get_yes_or_no('Do you want to proceed?', default=False)
        if not answer:
            tty.msg('Aborting removal of test suites')
            return

    for test_suite in test_suites:
        shutil.rmtree(test_suite.stage)


def test(parser, args):
    globals()['test_%s' % args.test_command](args)


@ -0,0 +1,16 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.cmd.common.env_utility as env_utility
description = "run a command in a spec's test environment, " \
"or dump its environment to screen or file"
section = "administration"
level = "long"
setup_parser = env_utility.setup_parser
def test_env(parser, args):
env_utility.emulate_env_utility('test-env', 'test', args)


@ -0,0 +1,169 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
from __future__ import division
import collections
import sys
import re
import argparse
import pytest
from six import StringIO
import llnl.util.tty.color as color
from llnl.util.filesystem import working_dir
from llnl.util.tty.colify import colify
import spack.paths
description = "run spack's unit tests (wrapper around pytest)"
section = "developer"
level = "long"
def setup_parser(subparser):
subparser.add_argument(
'-H', '--pytest-help', action='store_true', default=False,
help="show full pytest help, with advanced options")
# extra spack arguments to list tests
list_group = subparser.add_argument_group("listing tests")
list_mutex = list_group.add_mutually_exclusive_group()
list_mutex.add_argument(
'-l', '--list', action='store_const', default=None,
dest='list', const='list', help="list test filenames")
list_mutex.add_argument(
'-L', '--list-long', action='store_const', default=None,
dest='list', const='long', help="list all test functions")
list_mutex.add_argument(
'-N', '--list-names', action='store_const', default=None,
dest='list', const='names', help="list full names of all tests")
# use tests for extension
subparser.add_argument(
'--extension', default=None,
help="run test for a given spack extension")
# spell out some common pytest arguments, so they'll show up in help
pytest_group = subparser.add_argument_group(
"common pytest arguments (spack unit-test --pytest-help for more)")
pytest_group.add_argument(
"-s", action='append_const', dest='parsed_args', const='-s',
help="print output while tests run (disable capture)")
pytest_group.add_argument(
"-k", action='store', metavar="EXPRESSION", dest='expression',
help="filter tests by keyword (can also use w/list options)")
pytest_group.add_argument(
"--showlocals", action='append_const', dest='parsed_args',
const='--showlocals', help="show local variable values in tracebacks")
# remainder is just passed to pytest
subparser.add_argument(
'pytest_args', nargs=argparse.REMAINDER, help="arguments for pytest")
def do_list(args, extra_args):
"""Print a lists of tests than what pytest offers."""
# Run test collection and get the tree out.
old_output = sys.stdout
try:
sys.stdout = output = StringIO()
pytest.main(['--collect-only'] + extra_args)
finally:
sys.stdout = old_output
lines = output.getvalue().split('\n')
tests = collections.defaultdict(lambda: set())
prefix = []
# collect tests into sections
for line in lines:
match = re.match(r"(\s*)<([^ ]*) '([^']*)'", line)
if not match:
continue
indent, nodetype, name = match.groups()
# strip parametrized tests
if "[" in name:
name = name[:name.index("[")]
depth = len(indent) // 2
if nodetype.endswith("Function"):
key = tuple(prefix)
tests[key].add(name)
else:
prefix = prefix[:depth]
prefix.append(name)
def colorize(c, prefix):
if isinstance(prefix, tuple):
return "::".join(
color.colorize("@%s{%s}" % (c, p))
for p in prefix if p != "()"
)
return color.colorize("@%s{%s}" % (c, prefix))
if args.list == "list":
files = set(prefix[0] for prefix in tests)
color_files = [colorize("B", file) for file in sorted(files)]
colify(color_files)
elif args.list == "long":
for prefix, functions in sorted(tests.items()):
path = colorize("*B", prefix) + "::"
functions = [colorize("c", f) for f in sorted(functions)]
color.cprint(path)
colify(functions, indent=4)
print()
else: # args.list == "names"
all_functions = [
colorize("*B", prefix) + "::" + colorize("c", f)
for prefix, functions in sorted(tests.items())
for f in sorted(functions)
]
colify(all_functions)
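The collection pass above rebuilds pytest's node tree from the indentation of `--collect-only` output. A self-contained sketch of the same parse with simulated collector lines (the exact pytest output format is assumed here for illustration):

```python
import collections
import re

# Simulated `pytest --collect-only` lines (format assumed for illustration)
lines = [
    "<Module 'test_spec.py'>",
    "  <Function 'test_top'>",
    "  <Class 'TestSpec'>",
    "    <Function 'test_copy'>",
    "    <Function 'test_eq[param-a]'>",
]

tests = collections.defaultdict(set)
prefix = []
for line in lines:
    match = re.match(r"(\s*)<([^ ]*) '([^']*)'", line)
    if not match:
        continue
    indent, nodetype, name = match.groups()
    if "[" in name:  # strip parametrization, as do_list does
        name = name[:name.index("[")]
    depth = len(indent) // 2
    if nodetype.endswith("Function"):
        tests[tuple(prefix)].add(name)
    else:
        # non-function nodes reset the prefix to their own depth
        prefix = prefix[:depth]
        prefix.append(name)

assert tests[("test_spec.py",)] == {"test_top"}
assert tests[("test_spec.py", "TestSpec")] == {"test_copy", "test_eq"}
```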
def add_back_pytest_args(args, unknown_args):
"""Add parsed pytest args, unknown args, and remainder together.
We add some basic pytest arguments to the Spack parser to ensure that
they show up in the short help, so we have to reassemble things here.
"""
result = args.parsed_args or []
result += unknown_args or []
result += args.pytest_args or []
if args.expression:
result += ["-k", args.expression]
return result
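`add_back_pytest_args` simply concatenates the three argument sources and re-appends `-k`. A minimal sketch with an illustrative argparse namespace (the test-file path is an example, not a real invariant):

```python
import argparse

# Illustrative namespace mirroring what the spack unit-test parser produces
args = argparse.Namespace(
    parsed_args=['-s'],
    pytest_args=['lib/spack/spack/test/spec_semantics.py'],
    expression='concrete')
unknown_args = ['--tb=short']

# Same reassembly as add_back_pytest_args
result = (args.parsed_args or []) + (unknown_args or []) + (args.pytest_args or [])
if args.expression:
    result += ['-k', args.expression]

assert result == ['-s', '--tb=short',
                  'lib/spack/spack/test/spec_semantics.py',
                  '-k', 'concrete']
```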
def unit_test(parser, args, unknown_args):
if args.pytest_help:
# make the pytest.main help output more accurate
sys.argv[0] = 'spack test'
return pytest.main(['-h'])
# add back any parsed pytest args we need to pass to pytest
pytest_args = add_back_pytest_args(args, unknown_args)
# The default is to test the core of Spack. If the option `--extension`
# has been used, then test that extension.
pytest_root = spack.paths.spack_root
if args.extension:
target = args.extension
extensions = spack.config.get('config:extensions')
pytest_root = spack.extensions.path_for_extension(target, *extensions)
# pytest.ini lives in the root of the spack repository.
with working_dir(pytest_root):
if args.list:
do_list(args, pytest_args)
return
return pytest.main(pytest_args)


@ -299,8 +299,18 @@ def _depends_on(pkg, spec, when=None, type=default_deptype, patches=None):
# call this patches here for clarity -- we want patch to be a list,
# but the caller doesn't have to make it one.

# Note: we cannot check whether a package is virtual in a directive
# because directives are run as part of class instantiation, and specs
# instantiate the package class as part of the `virtual` check.
# To be technical, specs only instantiate the package class as part of the
# virtual check if the provider index hasn't been created yet.
# TODO: There could be a cache warming strategy that would allow us to
# ensure `Spec.virtual` is a valid thing to call in a directive.
# For now, we comment out the following check to allow for virtual packages
# with package files.
# if patches and dep_spec.virtual:
#     raise DependencyPatchError("Cannot patch a virtual dependency.")

# ensure patches is a list
if patches is None:


@ -0,0 +1,266 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import base64
import hashlib
import os
import re
import shutil
import sys
import llnl.util.filesystem as fs
import llnl.util.tty as tty
import spack.config
import spack.error
import spack.repo
import spack.util.path
import spack.util.prefix
import spack.util.spack_json as sjson
from spack.spec import Spec
test_suite_filename = 'test_suite.lock'
results_filename = 'results.txt'
def get_escaped_text_output(filename):
"""Retrieve and escape the expected text output from the file
Args:
filename (str): path to the file
Returns:
(list of str): escaped text lines read from the file
"""
with open(filename, 'r') as f:
# Ensure special characters are escaped as needed
expected = f.read()
# Split the lines to make it easier to debug failures when there is
# a lot of output
return [re.escape(ln) for ln in expected.split('\n')]
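The escaping matters because each expected line is later used as a regular expression against live output. A minimal round-trip sketch (the text is illustrative):

```python
import re

# Expected output containing regex metacharacters ('.', '(', ')')
expected_text = "version 1.2.3 (release)\ndone.\n"

# Escape each line, as get_escaped_text_output does
escaped = [re.escape(ln) for ln in expected_text.split('\n')]
assert len(escaped) == 3  # two lines plus the trailing empty string

# Each escaped line now matches the literal text via re.search
actual = "prefix: version 1.2.3 (release)\ndone.\n"
for pattern in escaped:
    assert re.search(pattern, actual)
```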
def get_test_stage_dir():
return spack.util.path.canonicalize_path(
spack.config.get('config:test_stage', '~/.spack/test'))
def get_all_test_suites():
stage_root = get_test_stage_dir()
if not os.path.isdir(stage_root):
return []
def valid_stage(d):
dirpath = os.path.join(stage_root, d)
return (os.path.isdir(dirpath) and
test_suite_filename in os.listdir(dirpath))
candidates = [
os.path.join(stage_root, d, test_suite_filename)
for d in os.listdir(stage_root)
if valid_stage(d)
]
test_suites = [TestSuite.from_file(c) for c in candidates]
return test_suites
def get_test_suite(name):
assert name, "Cannot search for empty test name or 'None'"
test_suites = get_all_test_suites()
names = [ts for ts in test_suites
if ts.name == name]
assert len(names) < 2, "alias shadows test suite hash"
if not names:
return None
return names[0]
class TestSuite(object):
def __init__(self, specs, alias=None):
# copy so that different test suites have different package objects
# even if they contain the same spec
self.specs = [spec.copy() for spec in specs]
self.current_test_spec = None # spec currently tested, can be virtual
self.current_base_spec = None # spec currently running do_test
self.alias = alias
self._hash = None
@property
def name(self):
return self.alias if self.alias else self.content_hash
@property
def content_hash(self):
if not self._hash:
json_text = sjson.dump(self.to_dict())
sha = hashlib.sha1(json_text.encode('utf-8'))
b32_hash = base64.b32encode(sha.digest()).lower()
if sys.version_info[0] >= 3:
b32_hash = b32_hash.decode('utf-8')
self._hash = b32_hash
return self._hash
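The content hash is a lowercased base32 encoding of the SHA-1 of the suite's JSON representation. A Python-3 sketch using the stdlib `json` module as a stand-in for `sjson`, with an illustrative dictionary:

```python
import base64
import hashlib
import json

# Hypothetical stand-in for sjson.dump(self.to_dict())
json_text = json.dumps({'specs': [{'name': 'mpileaks'}], 'alias': 'mytests'},
                       sort_keys=True)

sha = hashlib.sha1(json_text.encode('utf-8'))
b32_hash = base64.b32encode(sha.digest()).lower().decode('utf-8')

# A 160-bit SHA-1 digest always base32-encodes to 32 characters, unpadded
assert len(b32_hash) == 32
assert b32_hash == b32_hash.lower()
```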
def __call__(self, *args, **kwargs):
self.write_reproducibility_data()
remove_directory = kwargs.get('remove_directory', True)
dirty = kwargs.get('dirty', False)
fail_first = kwargs.get('fail_first', False)
for spec in self.specs:
try:
msg = "A package object cannot run in two test suites at once"
assert not spec.package.test_suite, msg
# Set up the test suite to know which test is running
spec.package.test_suite = self
self.current_base_spec = spec
self.current_test_spec = spec
# setup per-test directory in the stage dir
test_dir = self.test_dir_for_spec(spec)
if os.path.exists(test_dir):
shutil.rmtree(test_dir)
fs.mkdirp(test_dir)
# run the package tests
spec.package.do_test(
dirty=dirty
)
# Clean up on success and log passed test
if remove_directory:
shutil.rmtree(test_dir)
self.write_test_result(spec, 'PASSED')
except BaseException as exc:
if isinstance(exc, SyntaxError):
# Create the test log file and report the error.
self.ensure_stage()
msg = 'Testing package {0}\n{1}'\
.format(self.test_pkg_id(spec), str(exc))
_add_msg_to_file(self.log_file_for_spec(spec), msg)
self.write_test_result(spec, 'FAILED')
if fail_first:
break
finally:
spec.package.test_suite = None
self.current_test_spec = None
self.current_base_spec = None
def ensure_stage(self):
if not os.path.exists(self.stage):
fs.mkdirp(self.stage)
@property
def stage(self):
return spack.util.prefix.Prefix(
os.path.join(get_test_stage_dir(), self.content_hash))
@property
def results_file(self):
return self.stage.join(results_filename)
@classmethod
def test_pkg_id(cls, spec):
"""Build the standard install test package identifier
Args:
spec (Spec): instance of the spec under test
Returns:
(str): the install test package identifier
"""
return spec.format('{name}-{version}-{hash:7}')
@classmethod
def test_log_name(cls, spec):
return '%s-test-out.txt' % cls.test_pkg_id(spec)
def log_file_for_spec(self, spec):
return self.stage.join(self.test_log_name(spec))
def test_dir_for_spec(self, spec):
return self.stage.join(self.test_pkg_id(spec))
@property
def current_test_data_dir(self):
assert self.current_test_spec and self.current_base_spec
test_spec = self.current_test_spec
base_spec = self.current_base_spec
return self.test_dir_for_spec(base_spec).data.join(test_spec.name)
def add_failure(self, exc, msg):
current_hash = self.current_base_spec.dag_hash()
current_failures = self.failures.get(current_hash, [])
current_failures.append((exc, msg))
self.failures[current_hash] = current_failures
def write_test_result(self, spec, result):
msg = "{0} {1}".format(self.test_pkg_id(spec), result)
_add_msg_to_file(self.results_file, msg)
def write_reproducibility_data(self):
for spec in self.specs:
repo_cache_path = self.stage.repo.join(spec.name)
spack.repo.path.dump_provenance(spec, repo_cache_path)
for vspec in spec.package.virtuals_provided:
repo_cache_path = self.stage.repo.join(vspec.name)
if not os.path.exists(repo_cache_path):
try:
spack.repo.path.dump_provenance(vspec, repo_cache_path)
except spack.repo.UnknownPackageError:
pass # not all virtuals have package files
with open(self.stage.join(test_suite_filename), 'w') as f:
sjson.dump(self.to_dict(), stream=f)
def to_dict(self):
specs = [s.to_dict() for s in self.specs]
d = {'specs': specs}
if self.alias:
d['alias'] = self.alias
return d
@staticmethod
def from_dict(d):
specs = [Spec.from_dict(spec_dict) for spec_dict in d['specs']]
alias = d.get('alias', None)
return TestSuite(specs, alias)
@staticmethod
def from_file(filename):
try:
with open(filename, 'r') as f:
data = sjson.load(f)
return TestSuite.from_dict(data)
except Exception as e:
tty.debug(e)
raise sjson.SpackJSONError("error parsing JSON TestSuite:", str(e))
def _add_msg_to_file(filename, msg):
"""Add the message to the specified file
Args:
filename (str): path to the file
msg (str): message to be appended to the file
"""
with open(filename, 'a+') as f:
f.write('{0}\n'.format(msg))
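`_add_msg_to_file` underpins `write_test_result`, producing an append-only results log that `spack test results` can read back. A self-contained sketch using a temporary directory (the package ids are illustrative):

```python
import os
import tempfile

results_file = os.path.join(tempfile.mkdtemp(), 'results.txt')

# Append one "<pkg-id> <result>" line per spec, as write_test_result does
for msg in ['libdwarf-20180129-abcdefg PASSED',
            'mpileaks-1.0-1234567 FAILED']:
    with open(results_file, 'a+') as f:
        f.write('{0}\n'.format(msg))

with open(results_file) as f:
    lines = f.read().splitlines()

assert lines == ['libdwarf-20180129-abcdefg PASSED',
                 'mpileaks-1.0-1234567 FAILED']
```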
class TestFailure(spack.error.SpackError):
"""Raised when package tests have failed for an installation."""
def __init__(self, failures):
# Failures are all exceptions
msg = "%d tests failed.\n" % len(failures)
for failure, message in failures:
msg += '\n\n%s\n' % str(failure)
msg += '\n%s\n' % message
super(TestFailure, self).__init__(msg)


@ -1610,12 +1610,12 @@ def build_process(pkg, kwargs):
This function's return value is returned to the parent process.
"""
fake = kwargs.get('fake', False)
install_source = kwargs.get('install_source', False)
keep_stage = kwargs.get('keep_stage', False)
skip_patch = kwargs.get('skip_patch', False)
unmodified_env = kwargs.get('unmodified_env', {})
verbose = kwargs.get('verbose', False)
start_time = time.time()
if not fake:
@ -1958,6 +1958,7 @@ def __str__(self):
def _add_default_args(self):
"""Ensure standard install options are set to at least the default."""
for arg, default in [('cache_only', False),
('context', 'build'),  # installs *always* build
('dirty', False),
('fail_fast', False),
('fake', False),


@ -12,6 +12,7 @@
import spack.config
import spack.compilers
import spack.spec
import spack.repo
import spack.error
import spack.tengine as tengine
@ -125,7 +126,9 @@ def hierarchy_tokens(self):
# Check if all the tokens in the hierarchy are virtual specs.
# If not warn the user and raise an error.
not_virtual = [t for t in tokens
               if t != 'compiler' and
               not spack.repo.path.is_virtual(t)]
if not_virtual:
msg = "Non-virtual specs in 'hierarchy' list for lmod: {0}\n"
msg += "Please check the 'modules.yaml' configuration files"


@ -24,10 +24,12 @@
import textwrap
import time
import traceback
import six
import types
import llnl.util.filesystem as fsys
import llnl.util.tty as tty
import spack.compilers
import spack.config
import spack.dependency
@ -45,15 +47,14 @@
import spack.url
import spack.util.environment
import spack.util.web
from llnl.util.lang import memoized
from llnl.util.link_tree import LinkTree
from ordereddict_backport import OrderedDict
from spack.filesystem_view import YamlFilesystemView
from spack.installer import PackageInstaller, InstallError
from spack.install_test import TestFailure, TestSuite
from spack.util.executable import which, ProcessError
from spack.util.prefix import Prefix
from spack.stage import stage_prefix, Stage, ResourceStage, StageComposite
from spack.util.package_hash import package_hash
from spack.version import Version
@ -452,7 +453,21 @@ def remove_files_from_view(self, view, merge_map):
view.remove_file(src, dst)
def test_log_pathname(test_stage, spec):
"""Build the pathname of the test log file
Args:
test_stage (str): path to the test stage directory
spec (Spec): instance of the spec under test
Returns:
(str): the pathname of the test log file
"""
return os.path.join(test_stage,
'test-{0}-out.txt'.format(TestSuite.test_pkg_id(spec)))
class PackageBase(six.with_metaclass(PackageMeta, PackageViewMixin, object)):
"""This is the superclass for all spack packages. """This is the superclass for all spack packages.
***The Package class*** ***The Package class***
@ -542,6 +557,10 @@ class PackageBase(with_metaclass(PackageMeta, PackageViewMixin, object)):
#: are executed or 'None' if there are no such test functions.
build_time_test_callbacks = None
#: By default, packages are not virtual
#: Virtual packages override this attribute
virtual = False
#: Most Spack packages are used to install source or binary code while
#: those that do not can be used to install a set of other Spack packages.
has_code = True
@ -633,6 +652,18 @@ class PackageBase(with_metaclass(PackageMeta, PackageViewMixin, object)):
metadata_attrs = ['homepage', 'url', 'urls', 'list_url', 'extendable',
                  'parallel', 'make_jobs']
#: Boolean. If set to ``True``, the smoke/install test requires a compiler.
#: This is currently used by smoke tests to ensure a compiler is available
#: to build a custom test code.
test_requires_compiler = False
#: List of test failures encountered during a smoke/install test run.
test_failures = None
#: TestSuite instance used to manage smoke/install tests for one or more
#: specs.
test_suite = None
def __init__(self, spec):
# this determines how the package should be built.
self.spec = spec
@ -1001,20 +1032,23 @@ def env_path(self):
else:
return os.path.join(self.stage.path, _spack_build_envfile)
@property
def metadata_dir(self):
"""Return the install metadata directory."""
return spack.store.layout.metadata_path(self.spec)
@property
def install_env_path(self):
"""
Return the build environment file path on successful installation.
"""
# Backward compatibility: Return the name of an existing log path;
# otherwise, return the current install env path name.
old_filename = os.path.join(self.metadata_dir, 'build.env')
if os.path.exists(old_filename):
return old_filename
else:
return os.path.join(self.metadata_dir, _spack_build_envfile)
@property
def log_path(self):
@ -1031,16 +1065,14 @@ def log_path(self):
@property
def install_log_path(self):
"""Return the build log file path on successful installation."""
# Backward compatibility: Return the name of an existing install log.
for filename in ['build.out', 'build.txt']:
old_log = os.path.join(self.metadata_dir, filename)
if os.path.exists(old_log):
return old_log
# Otherwise, return the current install log path name.
return os.path.join(self.metadata_dir, _spack_build_logfile)
@property
def configure_args_path(self):
@ -1050,9 +1082,12 @@ def configure_args_path(self):
@property
def install_configure_args_path(self):
"""Return the configure args file path on successful installation."""
return os.path.join(self.metadata_dir, _spack_configure_argsfile)
@property
def install_test_root(self):
"""Return the install test root directory."""
return os.path.join(self.metadata_dir, 'test')
def _make_fetcher(self):
# Construct a composite fetcher that always contains at least
@ -1322,7 +1357,7 @@ def do_stage(self, mirror_only=False):
raise FetchError("Archive was empty for %s" % self.name)
else:
# Support for post-install hooks requires a stage.source_path
fsys.mkdirp(self.stage.source_path)
def do_patch(self):
"""Applies patches if they haven't been applied already."""
@ -1368,7 +1403,7 @@ def do_patch(self):
patched = False
for patch in patches:
try:
with fsys.working_dir(self.stage.source_path):
patch.apply(self.stage)
tty.debug('Applied patch {0}'.format(patch.path_or_url))
patched = True
@ -1377,12 +1412,12 @@ def do_patch(self):
# Touch bad file if anything goes wrong.
tty.msg('Patch %s failed.' % patch.path_or_url)
fsys.touch(bad_file)
raise
if has_patch_fun:
try:
with fsys.working_dir(self.stage.source_path):
self.patch()
tty.debug('Ran patch() for {0}'.format(self.name))
patched = True
@ -1400,7 +1435,7 @@ def do_patch(self):
# Touch bad file if anything goes wrong.
tty.msg('patch() function failed for {0}'.format(self.name))
fsys.touch(bad_file)
raise
# Get rid of any old failed file -- patches have either succeeded
@ -1411,9 +1446,9 @@ def do_patch(self):
# touch good or no patches file so that we skip next time.
if patched:
fsys.touch(good_file)
else:
fsys.touch(no_patches_file)
@classmethod
def all_patches(cls):
@ -1657,6 +1692,175 @@ def do_install(self, **kwargs):
builder = PackageInstaller([(self, kwargs)])
builder.install()
def cache_extra_test_sources(self, srcs):
"""Copy relative source paths to the corresponding install test subdir
This method is intended as an optional install test setup helper for
grabbing source files/directories during the installation process and
copying them to the installation test subdirectory for subsequent use
during install testing.
Args:
srcs (str or list of str): relative path for files and/or
subdirectories located in the staged source path that are to
be copied to the corresponding location(s) under the install
testing directory.
"""
paths = [srcs] if isinstance(srcs, six.string_types) else srcs
for path in paths:
src_path = os.path.join(self.stage.source_path, path)
dest_path = os.path.join(self.install_test_root, path)
if os.path.isdir(src_path):
fsys.install_tree(src_path, dest_path)
else:
fsys.mkdirp(os.path.dirname(dest_path))
fsys.copy(src_path, dest_path)
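The copy logic distinguishes directories (`install_tree`) from single files (`mkdirp` plus `copy`). A stdlib-only sketch of the same behavior, with `shutil` standing in for the `llnl.util.filesystem` helpers and illustrative source/destination paths:

```python
import os
import shutil
import tempfile

# Illustrative stand-ins for stage.source_path and install_test_root
source_path = tempfile.mkdtemp()
install_test_root = tempfile.mkdtemp()

os.makedirs(os.path.join(source_path, 'tests'))
with open(os.path.join(source_path, 'tests', 'smoke.c'), 'w') as f:
    f.write('int main(void) { return 0; }\n')
with open(os.path.join(source_path, 'Makefile'), 'w') as f:
    f.write('all:\n')

for path in ['tests', 'Makefile']:  # srcs may be a str or a list of str
    src = os.path.join(source_path, path)
    dest = os.path.join(install_test_root, path)
    if os.path.isdir(src):
        shutil.copytree(src, dest)  # analogue of fsys.install_tree
    else:
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copy(src, dest)      # analogue of fsys.mkdirp + fsys.copy

assert os.path.isfile(os.path.join(install_test_root, 'tests', 'smoke.c'))
assert os.path.isfile(os.path.join(install_test_root, 'Makefile'))
```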
def do_test(self, dirty=False):
if self.test_requires_compiler:
compilers = spack.compilers.compilers_for_spec(
self.spec.compiler, arch_spec=self.spec.architecture)
if not compilers:
tty.error('Skipping tests for package %s\n' %
self.spec.format('{name}-{version}-{hash:7}') +
'Package test requires missing compiler %s' %
self.spec.compiler)
return
# Clear test failures
self.test_failures = []
self.test_log_file = self.test_suite.log_file_for_spec(self.spec)
fsys.touch(self.test_log_file) # Otherwise log_parse complains
kwargs = {'dirty': dirty, 'fake': False, 'context': 'test'}
spack.build_environment.start_build_process(self, test_process, kwargs)
def test(self):
pass
def run_test(self, exe, options=[], expected=[], status=0,
installed=False, purpose='', skip_missing=False,
work_dir=None):
"""Run the test and confirm the expected results are obtained
Log any failures and continue, they will be re-raised later
Args:
exe (str): the name of the executable
options (str or list of str): list of options to pass to the runner
expected (str or list of str): list of expected output strings.
Each string is a regex expected to match part of the output.
status (int or list of int): possible passing status values
with 0 meaning the test is expected to succeed
installed (bool): if ``True``, the executable must be in the
install prefix
purpose (str): message to display before running test
skip_missing (bool): skip the test if the executable is not
in the install prefix bin directory or the provided work_dir
work_dir (str or None): path to the smoke test directory
"""
wdir = '.' if work_dir is None else work_dir
with fsys.working_dir(wdir):
try:
runner = which(exe)
if runner is None and skip_missing:
return
assert runner is not None, \
"Failed to find executable '{0}'".format(exe)
self._run_test_helper(
runner, options, expected, status, installed, purpose)
print("PASSED")
return True
except BaseException as e:
# print a summary of the error to the log file
# so that cdash and junit reporters know about it
exc_type, _, tb = sys.exc_info()
print('FAILED: {0}'.format(e))
import traceback
# remove the current call frame to exclude the extract_stack
# call from the error
stack = traceback.extract_stack()[:-1]
# Package files have a line added at import time, so we re-read
# the file to make line numbers match. We have to subtract two
# from the line number because the original line number is
# inflated once by the import statement and the lines are
# displaced one by the import statement.
for i, entry in enumerate(stack):
filename, lineno, function, text = entry
if spack.repo.is_package_file(filename):
with open(filename, 'r') as f:
lines = f.readlines()
new_lineno = lineno - 2
text = lines[new_lineno]
stack[i] = (filename, new_lineno, function, text)
# Format the stack to print and print it
out = traceback.format_list(stack)
for line in out:
print(line.rstrip('\n'))
if exc_type is spack.util.executable.ProcessError:
out = six.StringIO()
spack.build_environment.write_log_summary(
out, 'test', self.test_log_file, last=1)
m = out.getvalue()
else:
# We're below the package context, so get context from
# stack instead of from traceback.
# The traceback is truncated here, so we can't use it to
# traverse the stack.
m = '\n'.join(
spack.build_environment.get_package_context(tb)
)
exc = e # e is deleted after this block
# If we fail fast, raise another error
if spack.config.get('config:fail_fast', False):
raise TestFailure([(exc, m)])
else:
self.test_failures.append((exc, m))
return False
def _run_test_helper(self, runner, options, expected, status, installed,
purpose):
status = [status] if isinstance(status, six.integer_types) else status
expected = [expected] if isinstance(expected, six.string_types) else \
expected
options = [options] if isinstance(options, six.string_types) else \
options
if purpose:
tty.msg(purpose)
else:
tty.debug('test: {0}: expect command status in {1}'
.format(runner.name, status))
if installed:
msg = "Executable '{0}' expected in prefix".format(runner.name)
msg += ", found in {0} instead".format(runner.path)
assert runner.path.startswith(self.spec.prefix), msg
try:
output = runner(*options, output=str.split, error=str.split)
assert 0 in status, \
'Expected {0} execution to fail'.format(runner.name)
except ProcessError as err:
output = str(err)
match = re.search(r'exited with status ([0-9]+)', output)
if not (match and int(match.group(1)) in status):
raise
for check in expected:
cmd = ' '.join([runner.name] + options)
msg = "Expected '{0}' to match output of `{1}`".format(check, cmd)
msg += '\n\nOutput: {0}'.format(output)
assert re.search(check, output), msg
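`_run_test_helper` boils down to three checks: run the executable, require the exit status to be in an allowed set, and regex-match each expected string against the output. A stripped-down, stdlib-only sketch of those checks (the `check_run` helper is hypothetical, not part of Spack; `capture_output` requires Python 3.7+):

```python
import re
import subprocess
import sys

def check_run(cmd_args, expected=(), status=(0,)):
    """Hypothetical stand-in for the status/output checks in _run_test_helper."""
    proc = subprocess.run(cmd_args, capture_output=True, text=True)
    assert proc.returncode in status, \
        'expected status in {0}, got {1}'.format(status, proc.returncode)
    output = proc.stdout + proc.stderr
    for check in expected:
        assert re.search(check, output), \
            "Expected '{0}' to match output".format(check)
    return output

# A passing command whose output matches an escaped expected line
check_run([sys.executable, '-c', 'print("version 1.2.3")'],
          expected=[re.escape('version 1.2.3')])

# A command expected to fail with status 2, mirroring the non-zero status list
check_run([sys.executable, '-c', 'import sys; sys.exit(2)'], status=[2])
```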
def unit_test_check(self):
"""Hook for unit tests to assert things about package internals.
@ -1678,7 +1882,7 @@ def sanity_check_prefix(self):
"""This function checks whether install succeeded.""" """This function checks whether install succeeded."""
def check_paths(path_list, filetype, predicate): def check_paths(path_list, filetype, predicate):
if isinstance(path_list, string_types): if isinstance(path_list, six.string_types):
path_list = [path_list] path_list = [path_list]
for path in path_list: for path in path_list:
@ -2031,7 +2235,7 @@ def do_deprecate(self, deprecator, link_fn):
# copy spec metadata to "deprecated" dir of deprecator
depr_yaml = spack.store.layout.deprecated_file_path(spec,
                                                    deprecator)
fsys.mkdirp(os.path.dirname(depr_yaml))
shutil.copy2(self_yaml, depr_yaml)
# Any specs deprecated in favor of this spec are re-deprecated in # Any specs deprecated in favor of this spec are re-deprecated in
@ -2210,7 +2414,7 @@ def format_doc(self, **kwargs):
doc = re.sub(r'\s+', ' ', self.__doc__)
lines = textwrap.wrap(doc, 72)
results = six.StringIO()
for line in lines:
results.write((" " * indent) + line + "\n")
return results.getvalue()
@ -2313,6 +2517,71 @@ def _run_default_install_time_test_callbacks(self):
tty.warn(msg.format(name))
def test_process(pkg, kwargs):
with tty.log.log_output(pkg.test_log_file) as logger:
with logger.force_echo():
tty.msg('Testing package {0}'
.format(pkg.test_suite.test_pkg_id(pkg.spec)))
# use debug print levels for log file to record commands
old_debug = tty.is_debug()
tty.set_debug(True)
# run test methods from the package and all virtuals it provides;
# virtuals have to be deduped by name
v_names = list(set([vspec.name
for vspec in pkg.virtuals_provided]))
# hack for compilers that are not dependencies (yet)
# TODO: this all eventually goes away
c_names = ('gcc', 'intel', 'intel-parallel-studio', 'pgi')
if pkg.name in c_names:
v_names.extend(['c', 'cxx', 'fortran'])
if pkg.spec.satisfies('llvm+clang'):
v_names.extend(['c', 'cxx'])
test_specs = [pkg.spec] + [spack.spec.Spec(v_name)
for v_name in sorted(v_names)]
try:
with fsys.working_dir(
pkg.test_suite.test_dir_for_spec(pkg.spec)):
for spec in test_specs:
pkg.test_suite.current_test_spec = spec
# Fail gracefully if a virtual has no package/tests
try:
spec_pkg = spec.package
except spack.repo.UnknownPackageError:
continue
# copy test data into test data dir
data_source = Prefix(spec_pkg.package_dir).test
data_dir = pkg.test_suite.current_test_data_dir
if (os.path.isdir(data_source) and
not os.path.exists(data_dir)):
# We assume data dir is used read-only
# maybe enforce this later
shutil.copytree(data_source, data_dir)
# grab the function for each method so we can call
# it with the package
test_fn = spec_pkg.__class__.test
if not isinstance(test_fn, types.FunctionType):
test_fn = test_fn.__func__
# Run the tests
test_fn(pkg)
# If fail-fast was on, we error out above
# If we collect errors, raise them in batch here
if pkg.test_failures:
raise TestFailure(pkg.test_failures)
finally:
# reset debug level
tty.set_debug(old_debug)
inject_flags = PackageBase.inject_flags inject_flags = PackageBase.inject_flags
env_flags = PackageBase.env_flags env_flags = PackageBase.env_flags
build_system_flags = PackageBase.build_system_flags build_system_flags = PackageBase.build_system_flags
@ -12,7 +12,6 @@
import os import os
from llnl.util.filesystem import ancestor from llnl.util.filesystem import ancestor
#: This file lives in $prefix/lib/spack/spack/__file__ #: This file lives in $prefix/lib/spack/spack/__file__
prefix = ancestor(__file__, 4) prefix = ancestor(__file__, 4)
@ -42,6 +41,7 @@
hooks_path = os.path.join(module_path, "hooks") hooks_path = os.path.join(module_path, "hooks")
var_path = os.path.join(prefix, "var", "spack") var_path = os.path.join(prefix, "var", "spack")
repos_path = os.path.join(var_path, "repos") repos_path = os.path.join(var_path, "repos")
tests_path = os.path.join(var_path, "tests")
share_path = os.path.join(prefix, "share", "spack") share_path = os.path.join(prefix, "share", "spack")
# Paths to built-in Spack repositories. # Paths to built-in Spack repositories.
@ -59,6 +59,7 @@
from spack.installer import \ from spack.installer import \
ExternalPackageError, InstallError, InstallLockError, UpstreamPackageError ExternalPackageError, InstallError, InstallLockError, UpstreamPackageError
from spack.install_test import get_escaped_text_output
from spack.variant import any_combination_of, auto_or_any_combination_of from spack.variant import any_combination_of, auto_or_any_combination_of
from spack.variant import disjoint_sets from spack.variant import disjoint_sets
@ -91,6 +91,19 @@ def converter(self, spec_like, *args, **kwargs):
return converter return converter
def is_package_file(filename):
"""Determine whether we are in a package file from a repo."""
# Package files are named `package.py` and are not in lib/spack/spack
# We have to remove the file extension because it can be .py and can be
# .pyc depending on context, and can differ between the files
import spack.package # break cycle
filename_noext = os.path.splitext(filename)[0]
packagebase_filename_noext = os.path.splitext(
inspect.getfile(spack.package.PackageBase))[0]
return (filename_noext != packagebase_filename_noext and
os.path.basename(filename_noext) == 'package')
class SpackNamespace(types.ModuleType): class SpackNamespace(types.ModuleType):
""" Allow lazy loading of modules.""" """ Allow lazy loading of modules."""
@ -131,6 +144,11 @@ def __init__(self, packages_path):
#: Reference to the appropriate entry in the global cache #: Reference to the appropriate entry in the global cache
self._packages_to_stats = self._paths_cache[packages_path] self._packages_to_stats = self._paths_cache[packages_path]
def invalidate(self):
"""Regenerate cache for this checker."""
self._paths_cache[self.packages_path] = self._create_new_cache()
self._packages_to_stats = self._paths_cache[self.packages_path]
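`invalidate` regenerates both the shared per-path cache entry and the instance's reference to it. A toy analogue of that shared-cache-plus-invalidate shape, using a directory listing instead of package stats; the class and attribute names are illustrative:

```python
import os


class DirChecker(object):
    #: shared across instances, keyed by directory path
    _paths_cache = {}

    def __init__(self, path):
        self.path = path
        if path not in DirChecker._paths_cache:
            DirChecker._paths_cache[path] = self._scan()
        self.entries = DirChecker._paths_cache[path]

    def _scan(self):
        return sorted(os.listdir(self.path))

    def invalidate(self):
        # regenerate the shared entry, then repoint this instance at it
        DirChecker._paths_cache[self.path] = self._scan()
        self.entries = DirChecker._paths_cache[self.path]
```

Without `invalidate`, any instance created after a file is added would still see the stale shared entry, which is exactly the situation the `pkg add` test below runs into.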
def _create_new_cache(self): def _create_new_cache(self):
"""Create a new cache for packages in a repo. """Create a new cache for packages in a repo.
@ -308,6 +326,9 @@ def read(self, stream):
self.index = spack.provider_index.ProviderIndex.from_json(stream) self.index = spack.provider_index.ProviderIndex.from_json(stream)
def update(self, pkg_fullname): def update(self, pkg_fullname):
name = pkg_fullname.split('.')[-1]
if spack.repo.path.is_virtual(name, use_index=False):
return
self.index.remove_provider(pkg_fullname) self.index.remove_provider(pkg_fullname)
self.index.update(pkg_fullname) self.index.update(pkg_fullname)
@ -517,12 +538,12 @@ def first_repo(self):
"""Get the first repo in precedence order.""" """Get the first repo in precedence order."""
return self.repos[0] if self.repos else None return self.repos[0] if self.repos else None
def all_package_names(self): def all_package_names(self, include_virtuals=False):
"""Return all unique package names in all repositories.""" """Return all unique package names in all repositories."""
if self._all_package_names is None: if self._all_package_names is None:
all_pkgs = set() all_pkgs = set()
for repo in self.repos: for repo in self.repos:
for name in repo.all_package_names(): for name in repo.all_package_names(include_virtuals):
all_pkgs.add(name) all_pkgs.add(name)
self._all_package_names = sorted(all_pkgs, key=lambda n: n.lower()) self._all_package_names = sorted(all_pkgs, key=lambda n: n.lower())
return self._all_package_names return self._all_package_names
@ -679,12 +700,20 @@ def exists(self, pkg_name):
""" """
return any(repo.exists(pkg_name) for repo in self.repos) return any(repo.exists(pkg_name) for repo in self.repos)
def is_virtual(self, pkg_name): def is_virtual(self, pkg_name, use_index=True):
"""True if the package with this name is virtual, False otherwise.""" """True if the package with this name is virtual, False otherwise.
if not isinstance(pkg_name, str):
Set `use_index` False when calling from a code block that could
be run during the computation of the provider index."""
have_name = pkg_name is not None
if have_name and not isinstance(pkg_name, str):
raise ValueError( raise ValueError(
"is_virtual(): expected package name, got %s" % type(pkg_name)) "is_virtual(): expected package name, got %s" % type(pkg_name))
return pkg_name in self.provider_index if use_index:
return have_name and pkg_name in self.provider_index
else:
return have_name and (not self.exists(pkg_name) or
self.get_pkg_class(pkg_name).virtual)
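The new `use_index=False` path answers "is this name virtual?" without touching `provider_index`, which matters when the call happens while that index is still being computed. A simplified toy model of the two code paths (the real index-free path also consults the package class's `virtual` attribute; the class and data below are illustrative):

```python
class MiniRepoPath(object):
    def __init__(self, provider_index, concrete_packages):
        self.provider_index = provider_index      # names of known virtuals
        self.concrete_packages = concrete_packages

    def exists(self, name):
        return name in self.concrete_packages

    def is_virtual(self, name, use_index=True):
        have_name = name is not None
        if use_index:
            # fast path: consult the prebuilt provider index
            return have_name and name in self.provider_index
        # index-free path: safe to call while the index is being built
        return have_name and not self.exists(name)
```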
def __contains__(self, pkg_name): def __contains__(self, pkg_name):
return self.exists(pkg_name) return self.exists(pkg_name)
@ -913,10 +942,6 @@ def dump_provenance(self, spec, path):
This dumps the package file and any associated patch files. This dumps the package file and any associated patch files.
Raises UnknownPackageError if not found. Raises UnknownPackageError if not found.
""" """
# Some preliminary checks.
if spec.virtual:
raise UnknownPackageError(spec.name)
if spec.namespace and spec.namespace != self.namespace: if spec.namespace and spec.namespace != self.namespace:
raise UnknownPackageError( raise UnknownPackageError(
"Repository %s does not contain package %s." "Repository %s does not contain package %s."
@ -999,9 +1024,12 @@ def _pkg_checker(self):
self._fast_package_checker = FastPackageChecker(self.packages_path) self._fast_package_checker = FastPackageChecker(self.packages_path)
return self._fast_package_checker return self._fast_package_checker
def all_package_names(self): def all_package_names(self, include_virtuals=False):
"""Returns a sorted list of all package names in the Repo.""" """Returns a sorted list of all package names in the Repo."""
return sorted(self._pkg_checker.keys()) names = sorted(self._pkg_checker.keys())
if include_virtuals:
return names
return [x for x in names if not self.is_virtual(x)]
def packages_with_tags(self, *tags): def packages_with_tags(self, *tags):
v = set(self.all_package_names()) v = set(self.all_package_names())
@ -1040,7 +1068,7 @@ def last_mtime(self):
def is_virtual(self, pkg_name): def is_virtual(self, pkg_name):
"""True if the package with this name is virtual, False otherwise.""" """True if the package with this name is virtual, False otherwise."""
return self.provider_index.contains(pkg_name) return pkg_name in self.provider_index
def _get_pkg_module(self, pkg_name): def _get_pkg_module(self, pkg_name):
"""Create a module for a particular package. """Create a module for a particular package.
@ -1074,7 +1102,8 @@ def _get_pkg_module(self, pkg_name):
# manually construct the error message in order to give the # manually construct the error message in order to give the
# user the correct package.py where the syntax error is located # user the correct package.py where the syntax error is located
raise SyntaxError('invalid syntax in {0:}, line {1:}' raise SyntaxError('invalid syntax in {0:}, line {1:}'
''.format(file_path, e.lineno)) .format(file_path, e.lineno))
module.__package__ = self.full_namespace module.__package__ = self.full_namespace
module.__loader__ = self module.__loader__ = self
self._modules[pkg_name] = module self._modules[pkg_name] = module
@ -1205,9 +1234,9 @@ def get(spec):
return path.get(spec) return path.get(spec)
def all_package_names(): def all_package_names(include_virtuals=False):
"""Convenience wrapper around ``spack.repo.all_package_names()``.""" """Convenience wrapper around ``spack.repo.all_package_names()``."""
return path.all_package_names() return path.all_package_names(include_virtuals)
def set_path(repo): def set_path(repo):
@ -9,11 +9,13 @@
import functools import functools
import time import time
import traceback import traceback
import os
import llnl.util.lang import llnl.util.lang
import spack.build_environment import spack.build_environment
import spack.fetch_strategy import spack.fetch_strategy
import spack.package import spack.package
from spack.install_test import TestSuite
from spack.reporter import Reporter from spack.reporter import Reporter
from spack.reporters.cdash import CDash from spack.reporters.cdash import CDash
from spack.reporters.junit import JUnit from spack.reporters.junit import JUnit
@ -33,12 +35,16 @@
] ]
def fetch_package_log(pkg): def fetch_log(pkg, do_fn, dir):
log_files = {
'_install_task': pkg.build_log_path,
'do_test': os.path.join(dir, TestSuite.test_log_name(pkg.spec)),
}
try: try:
with codecs.open(pkg.build_log_path, 'r', 'utf-8') as f: with codecs.open(log_files[do_fn.__name__], 'r', 'utf-8') as f:
return ''.join(f.readlines()) return ''.join(f.readlines())
except Exception: except Exception:
return 'Cannot open build log for {0}'.format( return 'Cannot open log for {0}'.format(
pkg.spec.cshort_spec pkg.spec.cshort_spec
) )
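`fetch_log` now selects the log path by the wrapped function's name and falls back to a message when the file cannot be read. The read-with-fallback part in isolation; the function name here is illustrative:

```python
import codecs


def read_log(path, spec_name):
    """Return a log file's contents, or a message if it is unreadable."""
    try:
        with codecs.open(path, 'r', 'utf-8') as f:
            return ''.join(f.readlines())
    except Exception:
        return 'Cannot open log for {0}'.format(spec_name)
```

Swallowing the exception keeps report generation from aborting just because one package produced no log.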
@ -58,15 +64,20 @@ class InfoCollector(object):
specs (list of Spec): specs whose install information will specs (list of Spec): specs whose install information will
be recorded be recorded
""" """
#: Backup of PackageInstaller._install_task def __init__(self, wrap_class, do_fn, specs, dir):
_backup__install_task = spack.package.PackageInstaller._install_task #: Class for which to wrap a function
self.wrap_class = wrap_class
def __init__(self, specs): #: Action to be reported on
#: Specs that will be installed self.do_fn = do_fn
#: Backup of PackageBase function
self._backup_do_fn = getattr(self.wrap_class, do_fn)
#: Specs that will be acted on
self.input_specs = specs self.input_specs = specs
#: This is where we record the data that will be included #: This is where we record the data that will be included
#: in our report. #: in our report.
self.specs = [] self.specs = []
#: Record directory for test log paths
self.dir = dir
def __enter__(self): def __enter__(self):
# Initialize the spec report with the data that is available upfront. # Initialize the spec report with the data that is available upfront.
@ -98,30 +109,37 @@ def __enter__(self):
Property('compiler', input_spec.compiler)) Property('compiler', input_spec.compiler))
# Check which specs are already installed and mark them as skipped # Check which specs are already installed and mark them as skipped
for dep in filter(lambda x: x.package.installed, # only for install_task
input_spec.traverse()): if self.do_fn == '_install_task':
package = { for dep in filter(lambda x: x.package.installed,
'name': dep.name, input_spec.traverse()):
'id': dep.dag_hash(), package = {
'elapsed_time': '0.0', 'name': dep.name,
'result': 'skipped', 'id': dep.dag_hash(),
'message': 'Spec already installed' 'elapsed_time': '0.0',
} 'result': 'skipped',
spec['packages'].append(package) 'message': 'Spec already installed'
}
spec['packages'].append(package)
def gather_info(_install_task): def gather_info(do_fn):
"""Decorates PackageInstaller._install_task to gather useful """Decorates do_fn to gather useful information for
information on PackageBase.do_install for a CI report. a CI report.
It's defined here to capture the environment and build It's defined here to capture the environment and build
this context as the installations proceed. this context as the installations proceed.
""" """
@functools.wraps(_install_task) @functools.wraps(do_fn)
def wrapper(installer, task, *args, **kwargs): def wrapper(instance, *args, **kwargs):
pkg = task.pkg if isinstance(instance, spack.package.PackageBase):
pkg = instance
elif hasattr(args[0], 'pkg'):
pkg = args[0].pkg
else:
raise Exception
# We accounted before for what is already installed # We accounted before for what is already installed
installed_on_entry = pkg.installed installed_already = pkg.installed
package = { package = {
'name': pkg.name, 'name': pkg.name,
@ -135,13 +153,12 @@ def wrapper(installer, task, *args, **kwargs):
start_time = time.time() start_time = time.time()
value = None value = None
try: try:
value = do_fn(instance, *args, **kwargs)
value = _install_task(installer, task, *args, **kwargs)
package['result'] = 'success' package['result'] = 'success'
package['stdout'] = fetch_package_log(pkg) package['stdout'] = fetch_log(pkg, do_fn, self.dir)
package['installed_from_binary_cache'] = \ package['installed_from_binary_cache'] = \
pkg.installed_from_binary_cache pkg.installed_from_binary_cache
if installed_on_entry: if do_fn.__name__ == '_install_task' and installed_already:
return return
except spack.build_environment.InstallError as e: except spack.build_environment.InstallError as e:
@ -149,7 +166,7 @@ def wrapper(installer, task, *args, **kwargs):
# didn't work correctly) # didn't work correctly)
package['result'] = 'failure' package['result'] = 'failure'
package['message'] = e.message or 'Installation failure' package['message'] = e.message or 'Installation failure'
package['stdout'] = fetch_package_log(pkg) package['stdout'] = fetch_log(pkg, do_fn, self.dir)
package['stdout'] += package['message'] package['stdout'] += package['message']
package['exception'] = e.traceback package['exception'] = e.traceback
@ -157,7 +174,7 @@ def wrapper(installer, task, *args, **kwargs):
# Everything else is an error (the installation # Everything else is an error (the installation
# failed outside of the child process) # failed outside of the child process)
package['result'] = 'error' package['result'] = 'error'
package['stdout'] = fetch_package_log(pkg) package['stdout'] = fetch_log(pkg, do_fn, self.dir)
package['message'] = str(e) or 'Unknown error' package['message'] = str(e) or 'Unknown error'
package['exception'] = traceback.format_exc() package['exception'] = traceback.format_exc()
@ -184,15 +201,14 @@ def wrapper(installer, task, *args, **kwargs):
return wrapper return wrapper
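`gather_info` wraps the target method with `functools.wraps`, classifies success versus failure, records a per-package dict, and re-raises so callers still see errors. A stripped-down version of that record-and-reraise decorator; the explicit `records` list stands in for the collector's internal state:

```python
import functools
import time


def gather_info(fn, records):
    """Wrap fn so each call appends an outcome record, then re-raises."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        entry = {'name': fn.__name__}
        start = time.time()
        try:
            value = fn(*args, **kwargs)
            entry['result'] = 'success'
            return value
        except Exception as exc:
            entry['result'] = 'failure'
            entry['message'] = str(exc) or 'Unknown error'
            raise
        finally:
            # record in finally so both outcomes are captured
            entry['elapsed_time'] = time.time() - start
            records.append(entry)
    return wrapper
```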
spack.package.PackageInstaller._install_task = gather_info( setattr(self.wrap_class, self.do_fn, gather_info(
spack.package.PackageInstaller._install_task getattr(self.wrap_class, self.do_fn)
) ))
def __exit__(self, exc_type, exc_val, exc_tb): def __exit__(self, exc_type, exc_val, exc_tb):
# Restore the original method in PackageInstaller # Restore the original method in PackageBase
spack.package.PackageInstaller._install_task = \ setattr(self.wrap_class, self.do_fn, self._backup_do_fn)
InfoCollector._backup__install_task
for spec in self.specs: for spec in self.specs:
spec['npackages'] = len(spec['packages']) spec['npackages'] = len(spec['packages'])
@ -225,22 +241,26 @@ class collect_info(object):
# The file 'junit.xml' is written when exiting # The file 'junit.xml' is written when exiting
# the context # the context
specs = [Spec('hdf5').concretized()] s = [Spec('hdf5').concretized()]
with collect_info(specs, 'junit', 'junit.xml'): with collect_info(PackageBase, do_install, s, 'junit', 'a.xml'):
# A report will be generated for these specs... # A report will be generated for these specs...
for spec in specs: for spec in s:
spec.do_install() getattr(class, function)(spec)
# ...but not for this one # ...but not for this one
Spec('zlib').concretized().do_install() Spec('zlib').concretized().do_install()
Args: Args:
class: class on which to wrap a function
function: function to wrap
format_name (str or None): one of the supported formats format_name (str or None): one of the supported formats
args (dict): args passed to spack install args (dict): args passed to function
Raises: Raises:
ValueError: when ``format_name`` is not in ``valid_formats`` ValueError: when ``format_name`` is not in ``valid_formats``
""" """
def __init__(self, format_name, args): def __init__(self, cls, function, format_name, args):
self.cls = cls
self.function = function
self.filename = None self.filename = None
if args.cdash_upload_url: if args.cdash_upload_url:
self.format_name = 'cdash' self.format_name = 'cdash'
@ -253,13 +273,19 @@ def __init__(self, format_name, args):
.format(self.format_name)) .format(self.format_name))
self.report_writer = report_writers[self.format_name](args) self.report_writer = report_writers[self.format_name](args)
def __call__(self, type, dir=os.getcwd()):
self.type = type
self.dir = dir
return self
def concretization_report(self, msg): def concretization_report(self, msg):
self.report_writer.concretization_report(self.filename, msg) self.report_writer.concretization_report(self.filename, msg)
def __enter__(self): def __enter__(self):
if self.format_name: if self.format_name:
# Start the collector and patch PackageInstaller._install_task # Start the collector and patch self.function on appropriate class
self.collector = InfoCollector(self.specs) self.collector = InfoCollector(
self.cls, self.function, self.specs, self.dir)
self.collector.__enter__() self.collector.__enter__()
def __exit__(self, exc_type, exc_val, exc_tb): def __exit__(self, exc_type, exc_val, exc_tb):
@ -269,4 +295,5 @@ def __exit__(self, exc_type, exc_val, exc_tb):
self.collector.__exit__(exc_type, exc_val, exc_tb) self.collector.__exit__(exc_type, exc_val, exc_tb)
report_data = {'specs': self.collector.specs} report_data = {'specs': self.collector.specs}
self.report_writer.build_report(self.filename, report_data) report_fn = getattr(self.report_writer, '%s_report' % self.type)
report_fn(self.filename, report_data)
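`collect_info` now patches an arbitrary method with `setattr` on `__enter__`, restores the backup on `__exit__`, and dispatches to `build_report` or `test_report` via `getattr`. The patch-and-restore core reduces to a small context manager; the class name below is illustrative:

```python
class patch_method(object):
    """Swap in a replacement method on enter; restore the original on exit."""
    def __init__(self, cls, name, replacement):
        self.cls = cls
        self.name = name
        self.replacement = replacement

    def __enter__(self):
        # back up the original so __exit__ can restore it
        self._backup = getattr(self.cls, self.name)
        setattr(self.cls, self.name, self.replacement)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        setattr(self.cls, self.name, self._backup)
```

Restoring in `__exit__` guarantees the wrapped class is left untouched even when the body raises.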
@ -16,5 +16,8 @@ def __init__(self, args):
def build_report(self, filename, report_data): def build_report(self, filename, report_data):
pass pass
def test_report(self, filename, report_data):
pass
def concretization_report(self, filename, msg): def concretization_report(self, filename, msg):
pass pass
@ -72,8 +72,10 @@ def __init__(self, args):
tty.verbose("Using CDash auth token from environment") tty.verbose("Using CDash auth token from environment")
self.authtoken = os.environ.get('SPACK_CDASH_AUTH_TOKEN') self.authtoken = os.environ.get('SPACK_CDASH_AUTH_TOKEN')
if args.spec: if getattr(args, 'spec', ''):
packages = args.spec packages = args.spec
elif getattr(args, 'specs', ''):
packages = args.specs
else: else:
packages = [] packages = []
for file in args.specfiles: for file in args.specfiles:
@ -98,7 +100,7 @@ def __init__(self, args):
self.revision = git('rev-parse', 'HEAD', output=str).strip() self.revision = git('rev-parse', 'HEAD', output=str).strip()
self.multiple_packages = False self.multiple_packages = False
def report_for_package(self, directory_name, package, duration): def build_report_for_package(self, directory_name, package, duration):
if 'stdout' not in package: if 'stdout' not in package:
# Skip reporting on packages that did not generate any output. # Skip reporting on packages that did not generate any output.
return return
@ -158,8 +160,8 @@ def report_for_package(self, directory_name, package, duration):
'\n'.join(report_data[phase]['loglines']) '\n'.join(report_data[phase]['loglines'])
errors, warnings = parse_log_events(report_data[phase]['loglines']) errors, warnings = parse_log_events(report_data[phase]['loglines'])
# Cap the number of errors and warnings at 50 each. # Cap the number of errors and warnings at 50 each.
errors = errors[0:49] errors = errors[:50]
warnings = warnings[0:49] warnings = warnings[:50]
nerrors = len(errors) nerrors = len(errors)
if phase == 'configure' and nerrors > 0: if phase == 'configure' and nerrors > 0:
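The change from `errors[0:49]` to `errors[:50]` fixes an off-by-one: the old slice stops at index 48 and keeps only 49 entries, despite the "cap at 50" comment. A quick check of the two forms:

```python
events = ['event {0}'.format(i) for i in range(120)]

old_cap = events[0:49]   # indices 0..48: only 49 items
new_cap = events[:50]    # indices 0..49: exactly 50 items

assert len(old_cap) == 49
assert len(new_cap) == 50
```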
@ -250,7 +252,114 @@ def build_report(self, directory_name, input_data):
if 'time' in spec: if 'time' in spec:
duration = int(spec['time']) duration = int(spec['time'])
for package in spec['packages']: for package in spec['packages']:
self.report_for_package(directory_name, package, duration) self.build_report_for_package(
directory_name, package, duration)
self.print_cdash_link()
def test_report_for_package(self, directory_name, package, duration):
if 'stdout' not in package:
# Skip reporting on packages that did not generate any output.
return
self.current_package_name = package['name']
self.buildname = "{0} - {1}".format(
self.base_buildname, package['name'])
report_data = self.initialize_report(directory_name)
for phase in ('test', 'update'):
report_data[phase] = {}
report_data[phase]['loglines'] = []
report_data[phase]['status'] = 0
report_data[phase]['endtime'] = self.endtime
# Track the phases we perform so we know what reports to create.
# We always report the update step because this is how we tell CDash
# what revision of Spack we are using.
phases_encountered = ['test', 'update']
# Generate a report for this package.
# The first line just says "Testing package name-hash"
report_data['test']['loglines'].append(
text_type("{0} output for {1}:".format(
'test', package['name'])))
for line in package['stdout'].splitlines()[1:]:
report_data['test']['loglines'].append(
xml.sax.saxutils.escape(line))
self.starttime = self.endtime - duration
for phase in phases_encountered:
report_data[phase]['starttime'] = self.starttime
report_data[phase]['log'] = \
'\n'.join(report_data[phase]['loglines'])
errors, warnings = parse_log_events(report_data[phase]['loglines'])
# Cap the number of errors and warnings at 50 each.
errors = errors[0:49]
warnings = warnings[0:49]
if phase == 'test':
# Convert log output from ASCII to Unicode and escape for XML.
def clean_log_event(event):
event = vars(event)
event['text'] = xml.sax.saxutils.escape(event['text'])
event['pre_context'] = xml.sax.saxutils.escape(
'\n'.join(event['pre_context']))
event['post_context'] = xml.sax.saxutils.escape(
'\n'.join(event['post_context']))
# source_file and source_line_no are either strings or
# the tuple (None,). Distinguish between these two cases.
if event['source_file'][0] is None:
event['source_file'] = ''
event['source_line_no'] = ''
else:
event['source_file'] = xml.sax.saxutils.escape(
event['source_file'])
return event
# Convert errors to warnings if the package reported success.
if package['result'] == 'success':
warnings = errors + warnings
errors = []
report_data[phase]['errors'] = []
report_data[phase]['warnings'] = []
for error in errors:
report_data[phase]['errors'].append(clean_log_event(error))
for warning in warnings:
report_data[phase]['warnings'].append(
clean_log_event(warning))
if phase == 'update':
report_data[phase]['revision'] = self.revision
# Write the report.
report_name = phase.capitalize() + ".xml"
report_file_name = package['name'] + "_" + report_name
phase_report = os.path.join(directory_name, report_file_name)
with codecs.open(phase_report, 'w', 'utf-8') as f:
env = spack.tengine.make_environment()
if phase != 'update':
# Update.xml stores site information differently
# than the rest of the CTest XML files.
site_template = os.path.join(self.template_dir, 'Site.xml')
t = env.get_template(site_template)
f.write(t.render(report_data))
phase_template = os.path.join(self.template_dir, report_name)
t = env.get_template(phase_template)
f.write(t.render(report_data))
self.upload(phase_report)
def test_report(self, directory_name, input_data):
# Generate reports for each package in each spec.
for spec in input_data['specs']:
duration = 0
if 'time' in spec:
duration = int(spec['time'])
for package in spec['packages']:
self.test_report_for_package(
directory_name, package, duration)
self.print_cdash_link() self.print_cdash_link()
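The `clean_log_event` helper above escapes each text field of a parsed log event so it can be embedded safely in the CTest XML templates. The escaping step in isolation; the event-dict shape here is illustrative:

```python
import xml.sax.saxutils


def escape_event(event):
    """Return a copy of a log-event dict with XML-unsafe text escaped."""
    out = dict(event)
    out['text'] = xml.sax.saxutils.escape(event['text'])
    out['pre_context'] = xml.sax.saxutils.escape(
        '\n'.join(event['pre_context']))
    out['post_context'] = xml.sax.saxutils.escape(
        '\n'.join(event['post_context']))
    return out
```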
def concretization_report(self, directory_name, msg): def concretization_report(self, directory_name, msg):
@ -27,3 +27,6 @@ def build_report(self, filename, report_data):
env = spack.tengine.make_environment() env = spack.tengine.make_environment()
t = env.get_template(self.template_file) t = env.get_template(self.template_file)
f.write(t.render(report_data)) f.write(t.render(report_data))
def test_report(self, filename, report_data):
self.build_report(filename, report_data)
@ -45,6 +45,7 @@
{'type': 'array', {'type': 'array',
'items': {'type': 'string'}}], 'items': {'type': 'string'}}],
}, },
'test_stage': {'type': 'string'},
'extensions': { 'extensions': {
'type': 'array', 'type': 'array',
'items': {'type': 'string'} 'items': {'type': 'string'}
@ -951,7 +951,7 @@ class SpecBuildInterface(lang.ObjectWrapper):
def __init__(self, spec, name, query_parameters): def __init__(self, spec, name, query_parameters):
super(SpecBuildInterface, self).__init__(spec) super(SpecBuildInterface, self).__init__(spec)
is_virtual = Spec.is_virtual(name) is_virtual = spack.repo.path.is_virtual(name)
self.last_query = QueryState( self.last_query = QueryState(
name=name, name=name,
extra_parameters=query_parameters, extra_parameters=query_parameters,
@ -1227,12 +1227,9 @@ def virtual(self):
Possible idea: just use convention and make virtual deps all Possible idea: just use convention and make virtual deps all
caps, e.g., MPI vs mpi. caps, e.g., MPI vs mpi.
""" """
return Spec.is_virtual(self.name) # This method can be called while regenerating the provider index
# So we turn off using the index to detect virtuals
@staticmethod return spack.repo.path.is_virtual(self.name, use_index=False)
def is_virtual(name):
"""Test if a name is virtual without requiring a Spec."""
return (name is not None) and (not spack.repo.path.exists(name))
@property @property
def concrete(self): def concrete(self):
@ -68,7 +68,8 @@ def make_environment(dirs=None):
"""Returns an configured environment for template rendering.""" """Returns an configured environment for template rendering."""
if dirs is None: if dirs is None:
# Default directories where to search for templates # Default directories where to search for templates
builtins = spack.config.get('config:template_dirs') builtins = spack.config.get('config:template_dirs',
['$spack/share/spack/templates'])
extensions = spack.extensions.get_template_dirs() extensions = spack.extensions.get_template_dirs()
dirs = [canonicalize_path(d) dirs = [canonicalize_path(d)
for d in itertools.chain(builtins, extensions)] for d in itertools.chain(builtins, extensions)]
@ -15,44 +15,51 @@
@pytest.fixture() @pytest.fixture()
def mock_calls_for_clean(monkeypatch): def mock_calls_for_clean(monkeypatch):
counts = {}
class Counter(object): class Counter(object):
def __init__(self): def __init__(self, name):
self.call_count = 0 self.name = name
counts[name] = 0
def __call__(self, *args, **kwargs): def __call__(self, *args, **kwargs):
self.call_count += 1 counts[self.name] += 1
monkeypatch.setattr(spack.package.PackageBase, 'do_clean', Counter()) monkeypatch.setattr(spack.package.PackageBase, 'do_clean',
monkeypatch.setattr(spack.stage, 'purge', Counter()) Counter('package'))
monkeypatch.setattr(spack.stage, 'purge', Counter('stages'))
monkeypatch.setattr( monkeypatch.setattr(
spack.caches.fetch_cache, 'destroy', Counter(), raising=False) spack.caches.fetch_cache, 'destroy', Counter('downloads'),
raising=False)
monkeypatch.setattr( monkeypatch.setattr(
spack.caches.misc_cache, 'destroy', Counter()) spack.caches.misc_cache, 'destroy', Counter('caches'))
monkeypatch.setattr( monkeypatch.setattr(
spack.installer, 'clear_failures', Counter()) spack.installer, 'clear_failures', Counter('failures'))
yield counts
all_effects = ['stages', 'downloads', 'caches', 'failures']
@pytest.mark.usefixtures( @pytest.mark.usefixtures(
'mock_packages', 'config', 'mock_calls_for_clean' 'mock_packages', 'config'
) )
@pytest.mark.parametrize('command_line,counters', [ @pytest.mark.parametrize('command_line,effects', [
('mpileaks', [1, 0, 0, 0, 0]), ('mpileaks', ['package']),
('-s', [0, 1, 0, 0, 0]), ('-s', ['stages']),
('-sd', [0, 1, 1, 0, 0]), ('-sd', ['stages', 'downloads']),
('-m', [0, 0, 0, 1, 0]), ('-m', ['caches']),
('-f', [0, 0, 0, 0, 1]), ('-f', ['failures']),
('-a', [0, 1, 1, 1, 1]), ('-a', all_effects),
('', [0, 0, 0, 0, 0]), ('', []),
]) ])
def test_function_calls(command_line, counters): def test_function_calls(command_line, effects, mock_calls_for_clean):
# Call the command with the supplied command line # Call the command with the supplied command line
clean(command_line) clean(command_line)
# Assert that we called the expected functions the correct # Assert that we called the expected functions the correct
# number of times # number of times
assert spack.package.PackageBase.do_clean.call_count == counters[0] for name in ['package'] + all_effects:
assert spack.stage.purge.call_count == counters[1] assert mock_calls_for_clean[name] == (1 if name in effects else 0)
assert spack.caches.fetch_cache.destroy.call_count == counters[2]
assert spack.caches.misc_cache.destroy.call_count == counters[3]
assert spack.installer.clear_failures.call_count == counters[4]
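The rewritten fixture replaces five per-object `call_count` attributes with named `Counter` callables that all report into one shared dict, which the test then checks by name. The counter on its own, simplified to take the shared dict explicitly instead of closing over it:

```python
class Counter(object):
    """Callable that bumps a named slot in a shared counts dict."""
    def __init__(self, name, counts):
        self.name = name
        self.counts = counts
        counts[name] = 0          # register the slot up front

    def __call__(self, *args, **kwargs):
        # accept any signature, since it replaces arbitrary functions
        self.counts[self.name] += 1
```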
@ -102,7 +102,7 @@ def __init__(self, specs=None, all=False, file=None,
self.exclude_specs = exclude_specs self.exclude_specs = exclude_specs
def test_exclude_specs(mock_packages): def test_exclude_specs(mock_packages, config):
args = MockMirrorArgs( args = MockMirrorArgs(
specs=['mpich'], specs=['mpich'],
versions_per_spec='all', versions_per_spec='all',
@ -117,7 +117,7 @@ def test_exclude_specs(mock_packages):
assert (not expected_exclude & set(mirror_specs)) assert (not expected_exclude & set(mirror_specs))
def test_exclude_file(mock_packages, tmpdir): def test_exclude_file(mock_packages, tmpdir, config):
exclude_path = os.path.join(str(tmpdir), 'test-exclude.txt') exclude_path = os.path.join(str(tmpdir), 'test-exclude.txt')
with open(exclude_path, 'w') as exclude_file: with open(exclude_path, 'w') as exclude_file:
exclude_file.write("""\ exclude_file.write("""\
@@ -62,9 +62,9 @@ def mock_pkg_git_repo(tmpdir_factory):
     mkdirp('pkg-a', 'pkg-b', 'pkg-c')
     with open('pkg-a/package.py', 'w') as f:
         f.write(pkg_template.format(name='PkgA'))
-    with open('pkg-c/package.py', 'w') as f:
-        f.write(pkg_template.format(name='PkgB'))
     with open('pkg-b/package.py', 'w') as f:
+        f.write(pkg_template.format(name='PkgB'))
+    with open('pkg-c/package.py', 'w') as f:
         f.write(pkg_template.format(name='PkgC'))
     git('add', 'pkg-a', 'pkg-b', 'pkg-c')
     git('-c', 'commit.gpgsign=false', 'commit',
@@ -128,6 +128,8 @@ def test_pkg_add(mock_pkg_git_repo):
                 git('status', '--short', output=str))
     finally:
         shutil.rmtree('pkg-e')
+        # Removing a package mid-run disrupts Spack's caching
+        spack.repo.path.repos[0]._fast_package_checker.invalidate()

     with pytest.raises(spack.main.SpackCommandError):
         pkg('add', 'does-not-exist')


@@ -3,93 +3,181 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
+import argparse
+import os
+
+import pytest
+
+import spack.config
+import spack.package
+import spack.cmd.install
+
 from spack.main import SpackCommand

+install = SpackCommand('install')
 spack_test = SpackCommand('test')

-cmd_test_py = 'lib/spack/spack/test/cmd/test.py'
-
-def test_list():
-    output = spack_test('--list')
-    assert "test.py" in output
-    assert "spec_semantics.py" in output
-    assert "test_list" not in output
-
-def test_list_with_pytest_arg():
-    output = spack_test('--list', cmd_test_py)
-    assert output.strip() == cmd_test_py
-
-def test_list_with_keywords():
-    output = spack_test('--list', '-k', 'cmd/test.py')
-    assert output.strip() == cmd_test_py
-
-def test_list_long(capsys):
-    with capsys.disabled():
-        output = spack_test('--list-long')
-    assert "test.py::\n" in output
-    assert "test_list" in output
-    assert "test_list_with_pytest_arg" in output
-    assert "test_list_with_keywords" in output
-    assert "test_list_long" in output
-    assert "test_list_long_with_pytest_arg" in output
-    assert "test_list_names" in output
-    assert "test_list_names_with_pytest_arg" in output
-    assert "spec_dag.py::\n" in output
-    assert 'test_installed_deps' in output
-    assert 'test_test_deptype' in output
-
-def test_list_long_with_pytest_arg(capsys):
-    with capsys.disabled():
-        output = spack_test('--list-long', cmd_test_py)
-    assert "test.py::\n" in output
-    assert "test_list" in output
-    assert "test_list_with_pytest_arg" in output
-    assert "test_list_with_keywords" in output
-    assert "test_list_long" in output
-    assert "test_list_long_with_pytest_arg" in output
-    assert "test_list_names" in output
-    assert "test_list_names_with_pytest_arg" in output
-    assert "spec_dag.py::\n" not in output
-    assert 'test_installed_deps' not in output
-    assert 'test_test_deptype' not in output
-
-def test_list_names():
-    output = spack_test('--list-names')
-    assert "test.py::test_list\n" in output
-    assert "test.py::test_list_with_pytest_arg\n" in output
-    assert "test.py::test_list_with_keywords\n" in output
-    assert "test.py::test_list_long\n" in output
-    assert "test.py::test_list_long_with_pytest_arg\n" in output
-    assert "test.py::test_list_names\n" in output
-    assert "test.py::test_list_names_with_pytest_arg\n" in output
-    assert "spec_dag.py::test_installed_deps\n" in output
-    assert 'spec_dag.py::test_test_deptype\n' in output
-
-def test_list_names_with_pytest_arg():
-    output = spack_test('--list-names', cmd_test_py)
-    assert "test.py::test_list\n" in output
-    assert "test.py::test_list_with_pytest_arg\n" in output
-    assert "test.py::test_list_with_keywords\n" in output
-    assert "test.py::test_list_long\n" in output
-    assert "test.py::test_list_long_with_pytest_arg\n" in output
-    assert "test.py::test_list_names\n" in output
-    assert "test.py::test_list_names_with_pytest_arg\n" in output
-    assert "spec_dag.py::test_installed_deps\n" not in output
-    assert 'spec_dag.py::test_test_deptype\n' not in output
-
-def test_pytest_help():
-    output = spack_test('--pytest-help')
-    assert "-k EXPRESSION" in output
-    assert "pytest-warnings:" in output
-    assert "--collect-only" in output
+
+def test_test_package_not_installed(
+        tmpdir, mock_packages, mock_archive, mock_fetch, config,
+        install_mockery_mutable_config, mock_test_stage):
+
+    output = spack_test('run', 'libdwarf')
+
+    assert "No installed packages match spec libdwarf" in output
+
+@pytest.mark.parametrize('arguments,expected', [
+    (['run'], spack.config.get('config:dirty')),  # default from config file
+    (['run', '--clean'], False),
+    (['run', '--dirty'], True),
+])
+def test_test_dirty_flag(arguments, expected):
+    parser = argparse.ArgumentParser()
+    spack.cmd.test.setup_parser(parser)
+    args = parser.parse_args(arguments)
+    assert args.dirty == expected
+
+def test_test_output(mock_test_stage, mock_packages, mock_archive, mock_fetch,
+                     install_mockery_mutable_config):
+    """Ensure output printed from pkgs is captured by output redirection."""
+    install('printing-package')
+    spack_test('run', 'printing-package')
+
+    stage_files = os.listdir(mock_test_stage)
+    assert len(stage_files) == 1
+
+    # Grab test stage directory contents
+    testdir = os.path.join(mock_test_stage, stage_files[0])
+    testdir_files = os.listdir(testdir)
+
+    # Grab the output from the test log
+    testlog = list(filter(lambda x: x.endswith('out.txt') and
+                          x != 'results.txt', testdir_files))
+    outfile = os.path.join(testdir, testlog[0])
+    with open(outfile, 'r') as f:
+        output = f.read()
+    assert "BEFORE TEST" in output
+    assert "true: expect command status in [" in output
+    assert "AFTER TEST" in output
+    assert "FAILED" not in output
+
+def test_test_output_on_error(
+    mock_packages, mock_archive, mock_fetch, install_mockery_mutable_config,
+    capfd, mock_test_stage
+):
+    install('test-error')
+    # capfd interferes with Spack's capturing
+    with capfd.disabled():
+        out = spack_test('run', 'test-error', fail_on_error=False)
+
+    assert "TestFailure" in out
+    assert "Command exited with status 1" in out
+
+def test_test_output_on_failure(
+    mock_packages, mock_archive, mock_fetch, install_mockery_mutable_config,
+    capfd, mock_test_stage
+):
+    install('test-fail')
+    with capfd.disabled():
+        out = spack_test('run', 'test-fail', fail_on_error=False)

+    assert "Expected 'not in the output' to match output of `true`" in out
+    assert "TestFailure" in out
+
+def test_show_log_on_error(
+    mock_packages, mock_archive, mock_fetch,
+    install_mockery_mutable_config, capfd, mock_test_stage
+):
+    """Make sure spack prints location of test log on failure."""
+    install('test-error')
+    with capfd.disabled():
+        out = spack_test('run', 'test-error', fail_on_error=False)
+
+    assert 'See test log' in out
+    assert mock_test_stage in out
+
+@pytest.mark.usefixtures(
+    'mock_packages', 'mock_archive', 'mock_fetch',
+    'install_mockery_mutable_config'
+)
+@pytest.mark.parametrize('pkg_name,msgs', [
+    ('test-error', ['FAILED: Command exited', 'TestFailure']),
+    ('test-fail', ['FAILED: Expected', 'TestFailure'])
+])
+def test_junit_output_with_failures(tmpdir, mock_test_stage, pkg_name, msgs):
+    install(pkg_name)
+    with tmpdir.as_cwd():
+        spack_test('run',
+                   '--log-format=junit', '--log-file=test.xml',
+                   pkg_name)
+
+    files = tmpdir.listdir()
+    filename = tmpdir.join('test.xml')
+    assert filename in files
+
+    content = filename.open().read()
+
+    # Count failures and errors correctly
+    assert 'tests="1"' in content
+    assert 'failures="1"' in content
+    assert 'errors="0"' in content
+
+    # We want to have both stdout and stderr
+    assert '<system-out>' in content
+    for msg in msgs:
+        assert msg in content
+
+def test_cdash_output_test_error(
+        tmpdir, mock_fetch, install_mockery_mutable_config, mock_packages,
+        mock_archive, mock_test_stage, capfd):
+    install('test-error')
+    with tmpdir.as_cwd():
+        spack_test('run',
+                   '--log-format=cdash',
+                   '--log-file=cdash_reports',
+                   'test-error')
+        report_dir = tmpdir.join('cdash_reports')
+        print(tmpdir.listdir())
+        assert report_dir in tmpdir.listdir()
+        report_file = report_dir.join('test-error_Test.xml')
+        assert report_file in report_dir.listdir()
+        content = report_file.open().read()
+        assert 'FAILED: Command exited with status 1' in content
+
+def test_cdash_upload_clean_test(
+        tmpdir, mock_fetch, install_mockery_mutable_config, mock_packages,
+        mock_archive, mock_test_stage):
+    install('printing-package')
+    with tmpdir.as_cwd():
+        spack_test('run',
+                   '--log-file=cdash_reports',
+                   '--log-format=cdash',
+                   'printing-package')
+        report_dir = tmpdir.join('cdash_reports')
+        assert report_dir in tmpdir.listdir()
+        report_file = report_dir.join('printing-package_Test.xml')
+        assert report_file in report_dir.listdir()
+        content = report_file.open().read()
+        assert '</Test>' in content
+        assert '<Text>' not in content
+
+def test_test_help_does_not_show_cdash_options(mock_test_stage, capsys):
+    """Make sure `spack test --help` does not describe CDash arguments"""
+    with pytest.raises(SystemExit):
+        spack_test('run', '--help')
+        captured = capsys.readouterr()
+        assert 'CDash URL' not in captured.out
+
+def test_test_help_cdash(mock_test_stage):
+    """Make sure `spack test --help-cdash` describes CDash arguments"""
+    out = spack_test('run', '--help-cdash')
+    assert 'CDash URL' in out


@@ -0,0 +1,96 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.main import SpackCommand
spack_test = SpackCommand('unit-test')
cmd_test_py = 'lib/spack/spack/test/cmd/unit_test.py'
def test_list():
output = spack_test('--list')
assert "unit_test.py" in output
assert "spec_semantics.py" in output
assert "test_list" not in output
def test_list_with_pytest_arg():
output = spack_test('--list', cmd_test_py)
assert output.strip() == cmd_test_py
def test_list_with_keywords():
output = spack_test('--list', '-k', 'cmd/unit_test.py')
assert output.strip() == cmd_test_py
def test_list_long(capsys):
with capsys.disabled():
output = spack_test('--list-long')
assert "unit_test.py::\n" in output
assert "test_list" in output
assert "test_list_with_pytest_arg" in output
assert "test_list_with_keywords" in output
assert "test_list_long" in output
assert "test_list_long_with_pytest_arg" in output
assert "test_list_names" in output
assert "test_list_names_with_pytest_arg" in output
assert "spec_dag.py::\n" in output
assert 'test_installed_deps' in output
assert 'test_test_deptype' in output
def test_list_long_with_pytest_arg(capsys):
with capsys.disabled():
output = spack_test('--list-long', cmd_test_py)
print(output)
assert "unit_test.py::\n" in output
assert "test_list" in output
assert "test_list_with_pytest_arg" in output
assert "test_list_with_keywords" in output
assert "test_list_long" in output
assert "test_list_long_with_pytest_arg" in output
assert "test_list_names" in output
assert "test_list_names_with_pytest_arg" in output
assert "spec_dag.py::\n" not in output
assert 'test_installed_deps' not in output
assert 'test_test_deptype' not in output
def test_list_names():
output = spack_test('--list-names')
assert "unit_test.py::test_list\n" in output
assert "unit_test.py::test_list_with_pytest_arg\n" in output
assert "unit_test.py::test_list_with_keywords\n" in output
assert "unit_test.py::test_list_long\n" in output
assert "unit_test.py::test_list_long_with_pytest_arg\n" in output
assert "unit_test.py::test_list_names\n" in output
assert "unit_test.py::test_list_names_with_pytest_arg\n" in output
assert "spec_dag.py::test_installed_deps\n" in output
assert 'spec_dag.py::test_test_deptype\n' in output
def test_list_names_with_pytest_arg():
output = spack_test('--list-names', cmd_test_py)
assert "unit_test.py::test_list\n" in output
assert "unit_test.py::test_list_with_pytest_arg\n" in output
assert "unit_test.py::test_list_with_keywords\n" in output
assert "unit_test.py::test_list_long\n" in output
assert "unit_test.py::test_list_long_with_pytest_arg\n" in output
assert "unit_test.py::test_list_names\n" in output
assert "unit_test.py::test_list_names_with_pytest_arg\n" in output
assert "spec_dag.py::test_installed_deps\n" not in output
assert 'spec_dag.py::test_test_deptype\n' not in output
def test_pytest_help():
output = spack_test('--pytest-help')
assert "-k EXPRESSION" in output
assert "pytest-warnings:" in output
assert "--collect-only" in output


@@ -1273,3 +1273,14 @@ def _factory(name, output, subdir=('bin',)):
         return str(f)
return str(f) return str(f)
return _factory return _factory
@pytest.fixture()
def mock_test_stage(mutable_config, tmpdir):
# NOTE: This fixture MUST be applied after any fixture that uses
# the config fixture under the hood
# No need to unset because we use mutable_config
tmp_stage = str(tmpdir.join('test_stage'))
mutable_config.set('config:test_stage', tmp_stage)
yield tmp_stage


@@ -0,0 +1,4 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)


@@ -16,7 +16,7 @@
 from llnl.util.filesystem import resolve_link_target_relative_to_the_link

-pytestmark = pytest.mark.usefixtures('config', 'mutable_mock_repo')
+pytestmark = pytest.mark.usefixtures('mutable_config', 'mutable_mock_repo')

 # paths in repos that shouldn't be in the mirror tarballs.
 exclude = ['.hg', '.git', '.svn']
@@ -97,7 +97,7 @@ def check_mirror():
             # tarball
             assert not dcmp.right_only
             # and that all original files are present.
-            assert all(l in exclude for l in dcmp.left_only)
+            assert all(left in exclude for left in dcmp.left_only)

 def test_url_mirror(mock_archive):


@@ -10,7 +10,12 @@
 static DSL metadata for packages.
 """
+import os
+
 import pytest
+import shutil
+
+import llnl.util.filesystem as fs

 import spack.package
 import spack.repo
@@ -119,3 +124,72 @@ def test_possible_dependencies_with_multiple_classes(
     })

     assert expected == spack.package.possible_dependencies(*pkgs)
def setup_install_test(source_paths, install_test_root):
"""
Set up the install test by creating sources and install test roots.
The convention used here is to create an empty file if the path name
ends with an extension otherwise, a directory is created.
"""
fs.mkdirp(install_test_root)
for path in source_paths:
if os.path.splitext(path)[1]:
fs.touchp(path)
else:
fs.mkdirp(path)
@pytest.mark.parametrize('spec,sources,extras,expect', [
('a',
['example/a.c'], # Source(s)
['example/a.c'], # Extra test source
['example/a.c']), # Test install dir source(s)
('b',
['test/b.cpp', 'test/b.hpp', 'example/b.txt'], # Source(s)
['test'], # Extra test source
['test/b.cpp', 'test/b.hpp']), # Test install dir source
('c',
['examples/a.py', 'examples/b.py', 'examples/c.py', 'tests/d.py'],
['examples/b.py', 'tests'],
['examples/b.py', 'tests/d.py']),
])
def test_cache_extra_sources(install_mockery, spec, sources, extras, expect):
"""Test the package's cache extra test sources helper function."""
pkg = spack.repo.get(spec)
pkg.spec.concretize()
source_path = pkg.stage.source_path
srcs = [fs.join_path(source_path, s) for s in sources]
setup_install_test(srcs, pkg.install_test_root)
emsg_dir = 'Expected {0} to be a directory'
emsg_file = 'Expected {0} to be a file'
for s in srcs:
assert os.path.exists(s), 'Expected {0} to exist'.format(s)
if os.path.splitext(s)[1]:
assert os.path.isfile(s), emsg_file.format(s)
else:
assert os.path.isdir(s), emsg_dir.format(s)
pkg.cache_extra_test_sources(extras)
src_dests = [fs.join_path(pkg.install_test_root, s) for s in sources]
exp_dests = [fs.join_path(pkg.install_test_root, e) for e in expect]
poss_dests = set(src_dests) | set(exp_dests)
msg = 'Expected {0} to{1} exist'
for pd in poss_dests:
if pd in exp_dests:
assert os.path.exists(pd), msg.format(pd, '')
if os.path.splitext(pd)[1]:
assert os.path.isfile(pd), emsg_file.format(pd)
else:
assert os.path.isdir(pd), emsg_dir.format(pd)
else:
assert not os.path.exists(pd), msg.format(pd, ' not')
# Perform a little cleanup
shutil.rmtree(os.path.dirname(source_path))
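The `setup_install_test` helper above keys entirely off the path name: a name with an extension becomes an empty file, anything else becomes a directory. A minimal standalone sketch of that convention, using plain `os` instead of Spack's `llnl.util.filesystem` helpers (function names here are illustrative, not Spack's):

```python
import os
import tempfile


def create_test_paths(root, paths):
    """Create an empty file for names with an extension, a directory otherwise."""
    for rel in paths:
        full = os.path.join(root, rel)
        if os.path.splitext(rel)[1]:
            # 'example/a.c' has an extension: touch an empty file
            os.makedirs(os.path.dirname(full), exist_ok=True)
            with open(full, 'w'):
                pass
        else:
            # 'tests' has no extension: make a directory
            os.makedirs(full, exist_ok=True)


def demo():
    root = tempfile.mkdtemp()
    create_test_paths(root, ['example/a.c', 'tests'])
    return (os.path.isfile(os.path.join(root, 'example', 'a.c')),
            os.path.isdir(os.path.join(root, 'tests')))
```

Calling `demo()` creates both kinds of entries in a throwaway directory and returns whether each was created with the expected type.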


@@ -0,0 +1,53 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import spack.install_test
import spack.spec
def test_test_log_pathname(mock_packages, config):
"""Ensure test log path is reasonable."""
spec = spack.spec.Spec('libdwarf').concretized()
test_name = 'test_name'
test_suite = spack.install_test.TestSuite([spec], test_name)
logfile = test_suite.log_file_for_spec(spec)
assert test_suite.stage in logfile
assert test_suite.test_log_name(spec) in logfile
def test_test_ensure_stage(mock_test_stage):
"""Make sure test stage directory is properly set up."""
spec = spack.spec.Spec('libdwarf').concretized()
test_name = 'test_name'
test_suite = spack.install_test.TestSuite([spec], test_name)
test_suite.ensure_stage()
assert os.path.isdir(test_suite.stage)
assert mock_test_stage in test_suite.stage
def test_write_test_result(mock_packages, mock_test_stage):
"""Ensure test results written to a results file."""
spec = spack.spec.Spec('libdwarf').concretized()
result = 'TEST'
test_name = 'write-test'
test_suite = spack.install_test.TestSuite([spec], test_name)
test_suite.ensure_stage()
results_file = test_suite.results_file
test_suite.write_test_result(spec, result)
with open(results_file, 'r') as f:
lines = f.readlines()
assert len(lines) == 1
msg = lines[0]
assert result in msg
assert spec.name in msg


@@ -2,7 +2,7 @@
 # Spack Project Developers. See the top-level COPYRIGHT file for details.
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
+import sys
 import os
 import re
 import shlex
@@ -98,6 +98,9 @@ def __call__(self, *args, **kwargs):
         If both ``output`` and ``error`` are set to ``str``, then one string
         is returned containing output concatenated with error. Not valid
         for ``input``
+        * ``str.split``, as in the ``split`` method of the Python string type.
+          Behaves the same as ``str``, except that value is also written to
+          ``stdout`` or ``stderr``.

         By default, the subprocess inherits the parent's file descriptors.
@@ -132,7 +135,7 @@ def __call__(self, *args, **kwargs):
         def streamify(arg, mode):
             if isinstance(arg, string_types):
                 return open(arg, mode), True
-            elif arg is str:
+            elif arg in (str, str.split):
                 return subprocess.PIPE, False
             else:
                 return arg, False
@@ -168,12 +171,18 @@ def streamify(arg, mode):
             out, err = proc.communicate()

             result = None
-            if output is str or error is str:
+            if output in (str, str.split) or error in (str, str.split):
                 result = ''
-                if output is str:
-                    result += text_type(out.decode('utf-8'))
-                if error is str:
-                    result += text_type(err.decode('utf-8'))
+                if output in (str, str.split):
+                    outstr = text_type(out.decode('utf-8'))
+                    result += outstr
+                    if output is str.split:
+                        sys.stdout.write(outstr)
+                if error in (str, str.split):
+                    errstr = text_type(err.decode('utf-8'))
+                    result += errstr
+                    if error is str.split:
+                        sys.stderr.write(errstr)

             rc = self.returncode = proc.returncode
             if fail_on_error and rc != 0 and (rc not in ignore_errors):
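The `str.split` mode added above tees captured subprocess output back to the parent's streams instead of only returning it. The same idea in a self-contained sketch with the standard `subprocess` module (the helper name is mine, not Spack's):

```python
import subprocess
import sys


def run_captured(cmd, tee=False):
    """Run cmd and return its decoded stdout; with tee=True, also echo it."""
    proc = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out = proc.stdout.decode('utf-8')
    if tee:
        # str.split-style behavior: the value is captured AND written through
        sys.stdout.write(out)
    return out


# Captured either way; printed live only when tee=True
captured = run_captured([sys.executable, '-c', "print('hello')"], tee=True)
```

This mirrors the design choice in the diff: callers that want `spack test` logs both on screen and in the result string no longer have to choose one or the other.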

View file

@ -21,6 +21,8 @@ class MockPackageBase(object):
Use ``MockPackageMultiRepo.add_package()`` to create new instances. Use ``MockPackageMultiRepo.add_package()`` to create new instances.
""" """
virtual = False
def __init__(self, dependencies, dependency_types, def __init__(self, dependencies, dependency_types,
conditions=None, versions=None): conditions=None, versions=None):
"""Instantiate a new MockPackageBase. """Instantiate a new MockPackageBase.
@ -92,7 +94,7 @@ def get_pkg_class(self, name):
def exists(self, name): def exists(self, name):
return name in self.spec_to_pkg return name in self.spec_to_pkg
def is_virtual(self, name): def is_virtual(self, name, use_index=True):
return False return False
def repo_for_pkg(self, name): def repo_for_pkg(self, name):


@@ -56,7 +56,7 @@ contains 'hdf5' _spack_completions spack -d install --jobs 8 ''
 contains 'hdf5' _spack_completions spack install -v ''

 # XFAIL: Fails for Python 2.6 because pkg_resources not found?
-#contains 'compilers.py' _spack_completions spack test ''
+#contains 'compilers.py' _spack_completions spack unit-test ''

 title 'Testing debugging functions'


@@ -42,4 +42,4 @@ spack -p --lines 20 spec mpileaks%gcc ^elfutils@0.170
 #-----------------------------------------------------------
 # Run unit tests with code coverage
 #-----------------------------------------------------------
-$coverage_run $(which spack) test -x --verbose
+$coverage_run $(which spack) unit-test -x --verbose


@@ -320,7 +320,7 @@ _spack() {
     then
         SPACK_COMPREPLY="-h --help -H --all-help --color -C --config-scope -d --debug --timestamp --pdb -e --env -D --env-dir -E --no-env --use-env-repo -k --insecure -l --enable-locks -L --disable-locks -m --mock -p --profile --sorted-profile --lines -v --verbose --stacktrace -V --version --print-shell-vars"
     else
-        SPACK_COMPREPLY="activate add arch blame build-env buildcache cd checksum ci clean clone commands compiler compilers concretize config containerize create deactivate debug dependencies dependents deprecate dev-build develop docs edit env extensions external fetch find flake8 gc gpg graph help info install license list load location log-parse maintainers mirror module patch pkg providers pydoc python reindex remove rm repo resource restage setup solve spec stage test tutorial undevelop uninstall unload url verify versions view"
+        SPACK_COMPREPLY="activate add arch blame build-env buildcache cd checksum ci clean clone commands compiler compilers concretize config containerize create deactivate debug dependencies dependents deprecate dev-build develop docs edit env extensions external fetch find flake8 gc gpg graph help info install license list load location log-parse maintainers mirror module patch pkg providers pydoc python reindex remove rm repo resource restage setup solve spec stage test test-env tutorial undevelop uninstall unit-test unload url verify versions view"
     fi
 }

@@ -1020,7 +1020,7 @@ _spack_info() {
 _spack_install() {
     if $list_options
     then
-        SPACK_COMPREPLY="-h --help --only -u --until -j --jobs --overwrite --fail-fast --keep-prefix --keep-stage --dont-restage --use-cache --no-cache --cache-only --no-check-signature --require-full-hash-match --show-log-on-error --source -n --no-checksum -v --verbose --fake --only-concrete -f --file --clean --dirty --test --run-tests --log-format --log-file --help-cdash -y --yes-to-all --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp"
+        SPACK_COMPREPLY="-h --help --only -u --until -j --jobs --overwrite --fail-fast --keep-prefix --keep-stage --dont-restage --use-cache --no-cache --cache-only --no-check-signature --require-full-hash-match --show-log-on-error --source -n --no-checksum -v --verbose --fake --only-concrete -f --file --clean --dirty --test --run-tests --log-format --log-file --help-cdash --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp -y --yes-to-all"
     else
         _all_packages
     fi

@@ -1046,7 +1046,7 @@ _spack_license_verify() {
 _spack_list() {
     if $list_options
     then
-        SPACK_COMPREPLY="-h --help -d --search-description --format --update -t --tags"
+        SPACK_COMPREPLY="-h --help -d --search-description --format --update -v --virtuals -t --tags"
     else
         _all_packages
     fi

@@ -1494,9 +1494,67 @@ _spack_stage() {
 _spack_test() {
     if $list_options
     then
-        SPACK_COMPREPLY="-h --help -H --pytest-help -l --list -L --list-long -N --list-names --extension -s -k --showlocals"
+        SPACK_COMPREPLY="-h --help"
     else
-        _tests
+        SPACK_COMPREPLY="run list find status results remove"
     fi
 }
+
+_spack_test_run() {
+    if $list_options
+    then
+        SPACK_COMPREPLY="-h --help --alias --fail-fast --fail-first --keep-stage --log-format --log-file --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp --help-cdash --clean --dirty"
+    else
+        _installed_packages
+    fi
+}
+
+_spack_test_list() {
+    SPACK_COMPREPLY="-h --help"
+}
+
+_spack_test_find() {
+    if $list_options
+    then
+        SPACK_COMPREPLY="-h --help"
+    else
+        _all_packages
+    fi
+}
+
+_spack_test_status() {
+    if $list_options
+    then
+        SPACK_COMPREPLY="-h --help"
+    else
+        SPACK_COMPREPLY=""
+    fi
+}
+
+_spack_test_results() {
+    if $list_options
+    then
+        SPACK_COMPREPLY="-h --help -l --logs -f --failed"
+    else
+        SPACK_COMPREPLY=""
+    fi
+}
+
+_spack_test_remove() {
+    if $list_options
+    then
+        SPACK_COMPREPLY="-h --help -y --yes-to-all"
+    else
+        SPACK_COMPREPLY=""
+    fi
+}
+
+_spack_test_env() {
+    if $list_options
+    then
+        SPACK_COMPREPLY="-h --help --clean --dirty --dump --pickle"
+    else
+        _all_packages
+    fi
+}

@@ -1522,6 +1580,15 @@ _spack_uninstall() {
     fi
 }

+_spack_unit_test() {
+    if $list_options
+    then
+        SPACK_COMPREPLY="-h --help -H --pytest-help -l --list -L --list-long -N --list-names --extension -s -k --showlocals"
+    else
+        _tests
+    fi
+}
+
 _spack_unload() {
     if $list_options
     then


@@ -0,0 +1,27 @@
<Test>
<StartTestTime>{{ test.starttime }}</StartTestTime>
<TestCommand>{{ install_command }}</TestCommand>
{% for warning in test.warnings %}
<Warning>
<TestLogLine>{{ warning.line_no }}</TestLogLine>
<Text>{{ warning.text }}</Text>
<SourceFile>{{ warning.source_file }}</SourceFile>
<SourceLineNumber>{{ warning.source_line_no }}</SourceLineNumber>
<PreContext>{{ warning.pre_context }}</PreContext>
<PostContext>{{ warning.post_context }}</PostContext>
</Warning>
{% endfor %}
{% for error in test.errors %}
<Error>
<TestLogLine>{{ error.line_no }}</TestLogLine>
<Text>{{ error.text }}</Text>
<SourceFile>{{ error.source_file }}</SourceFile>
<SourceLineNumber>{{ error.source_line_no }}</SourceLineNumber>
<PreContext>{{ error.pre_context }}</PreContext>
<PostContext>{{ error.post_context }}</PostContext>
</Error>
{% endfor %}
<EndTestTime>{{ test.endtime }}</EndTestTime>
<ElapsedMinutes>0</ElapsedMinutes>
</Test>
</Site>


@@ -24,3 +24,8 @@ def install(self, spec, prefix):
         make('install')
         print("AFTER INSTALL")
+
+    def test(self):
+        print("BEFORE TEST")
+        self.run_test('true')  # run /bin/true
+        print("AFTER TEST")


@@ -0,0 +1,21 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class TestError(Package):
"""This package has a test method that fails in a subprocess."""
homepage = "http://www.example.com/test-failure"
url = "http://www.test-failure.test/test-failure-1.0.tar.gz"
version('1.0', 'foobarbaz')
def install(self, spec, prefix):
mkdirp(prefix.bin)
def test(self):
self.run_test('false')


@@ -0,0 +1,21 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class TestFail(Package):
"""This package has a test method that fails in a subprocess."""
homepage = "http://www.example.com/test-failure"
url = "http://www.test-failure.test/test-failure-1.0.tar.gz"
version('1.0', 'foobarbaz')
def install(self, spec, prefix):
mkdirp(prefix.bin)
def test(self):
self.run_test('true', expected=['not in the output'])


@@ -184,7 +184,7 @@ def install(self, spec, prefix):

     @run_after('install')
     @on_package_attributes(run_tests=True)
-    def test(self):
+    def install_test(self):
         # https://github.com/Homebrew/homebrew-core/blob/master/Formula/bazel.rb
         # Bazel does not work properly on NFS, switch to /tmp


@@ -3,8 +3,6 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)

-from spack import *
-
 class BerkeleyDb(AutotoolsPackage):
     """Oracle Berkeley DB"""
@@ -47,3 +45,15 @@ def configure_args(self):
             config_args.append('--disable-atomicsupport')
         return config_args
+
+    def test(self):
+        """Perform smoke tests on the installed package binaries."""
+        exes = [
+            'db_checkpoint', 'db_deadlock', 'db_dump', 'db_load',
+            'db_printlog', 'db_stat', 'db_upgrade', 'db_verify'
+        ]
+        for exe in exes:
+            reason = 'test version of {0} is {1}'.format(exe,
+                                                         self.spec.version)
+            self.run_test(exe, ['-V'], [self.spec.version.string],
+                          installed=True, purpose=reason, skip_missing=True)
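The berkeley-db `test()` above shows the common smoke-test shape: run each installed executable with a version flag, check the output, and skip tools that are missing (`skip_missing=True`). Outside of Spack's `run_test` helper, the same pattern can be sketched with the standard library (the helper name here is mine, not Spack's):

```python
import subprocess
import sys


def check_version(cmd, expected):
    """Return True/False for a version check; None when the tool is missing."""
    try:
        proc = subprocess.run(cmd, stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT)
    except FileNotFoundError:
        # skip_missing=True behavior: an absent tool is not a failure
        return None
    out = proc.stdout.decode('utf-8', errors='replace')
    return proc.returncode == 0 and expected in out


# e.g. check_version([sys.executable, '--version'], 'Python')
```

Folding stderr into stdout (`stderr=subprocess.STDOUT`) matters here because some tools print their version banner to stderr.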


@@ -129,3 +129,29 @@ def flag_handler(self, name, flags):
         if self.spec.satisfies('@:2.34 %gcc@10:'):
             flags.append('-fcommon')
         return (flags, None, None)
+
+    def test(self):
+        spec_vers = str(self.spec.version)
+
+        checks = {
+            'ar': spec_vers,
+            'c++filt': spec_vers,
+            'coffdump': spec_vers,
+            'dlltool': spec_vers,
+            'elfedit': spec_vers,
+            'gprof': spec_vers,
+            'ld': spec_vers,
+            'nm': spec_vers,
+            'objdump': spec_vers,
+            'ranlib': spec_vers,
+            'readelf': spec_vers,
+            'size': spec_vers,
+            'strings': spec_vers,
+        }
+
+        for exe in checks:
+            expected = checks[exe]
+            reason = 'test: ensuring version of {0} is {1}' \
+                .format(exe, expected)
+            self.run_test(exe, '--version', expected, installed=True,
+                          purpose=reason, skip_missing=True)


@@ -0,0 +1,27 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
class C(Package):
"""Virtual package for C compilers."""
homepage = 'http://open-std.org/JTC1/SC22/WG14/www/standards'
virtual = True
def test(self):
test_source = self.test_suite.current_test_data_dir
for test in os.listdir(test_source):
filepath = test_source.join(test)
exe_name = '%s.exe' % test
cc_exe = os.environ['CC']
cc_opts = ['-o', exe_name, filepath]
compiled = self.run_test(cc_exe, options=cc_opts, installed=True)
if compiled:
expected = ['Hello world', 'YES!']
self.run_test(exe_name, expected=expected)


@@ -0,0 +1,7 @@
#include <stdio.h>
int main()
{
printf ("Hello world from C!\n");
printf ("YES!");
return 0;
}


@@ -146,7 +146,7 @@ def build_args(self, spec, prefix):
         return args

-    def test(self):
+    def build_test(self):
         if '+python' in self.spec:
             # Tests will always fail if Python dependencies aren't built
             # In addition, 3 of the tests fail when run in parallel


@@ -2,6 +2,7 @@
 # Spack Project Developers. See the top-level COPYRIGHT file for details.
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)

 import re
@@ -250,7 +251,7 @@ def build(self, spec, prefix):

     @run_after('build')
     @on_package_attributes(run_tests=True)
-    def test(self):
+    def build_test(self):
         # Some tests fail, takes forever
         make('test')

@@ -262,3 +263,12 @@ def install(self, spec, prefix):
             filter_file('mpcc_r)', 'mpcc_r mpifcc)', f, string=True)
             filter_file('mpc++_r)', 'mpc++_r mpiFCC)', f, string=True)
             filter_file('mpifc)', 'mpifc mpifrt)', f, string=True)
+
+    def test(self):
+        """Perform smoke tests on the installed package."""
+        spec_vers_str = 'version {0}'.format(self.spec.version)
+
+        for exe in ['ccmake', 'cmake', 'cpack', 'ctest']:
+            reason = 'test version of {0} is {1}'.format(exe, spec_vers_str)
+            self.run_test(exe, ['--version'], [spec_vers_str],
+                          installed=True, purpose=reason, skip_missing=True)

View file

@ -217,7 +217,7 @@ def build(self, spec, prefix):
     @run_after('build')
     @on_package_attributes(run_tests=True)
-    def test(self):
+    def build_test(self):
         with working_dir('spack-build'):
             print("Running Conduit Unit Tests...")
             make("test")

View file

@ -0,0 +1,38 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os

import spack.compilers
import spack.spec
class Cxx(Package):
"""Virtual package for the C++ language."""
homepage = 'https://isocpp.org/std/the-standard'
virtual = True
def test(self):
test_source = self.test_suite.current_test_data_dir
for test in os.listdir(test_source):
filepath = os.path.join(test_source, test)
exe_name = '%s.exe' % test
cxx_exe = os.environ['CXX']
# standard options
# Hack to get compiler attributes
# TODO: remove this when compilers are dependencies
c_name = 'clang' if self.spec.satisfies('llvm+clang') else self.name
c_spec = spack.spec.CompilerSpec(c_name, self.spec.version)
c_cls = spack.compilers.class_for_compiler_name(c_name)
compiler = c_cls(c_spec, None, None, ['fakecc', 'fakecxx'])
cxx_opts = [compiler.cxx11_flag] if 'c++11' in test else []
cxx_opts += ['-o', exe_name, filepath]
compiled = self.run_test(cxx_exe, options=cxx_opts, installed=True)
if compiled:
expected = ['Hello world', 'YES!']
self.run_test(exe_name, expected=expected)

View file

@ -0,0 +1,9 @@
#include <stdio.h>
int main()
{
printf ("Hello world from C++\n");
printf ("YES!");
return 0;
}

View file

@ -0,0 +1,9 @@
#include <iostream>
using namespace std;
int main()
{
cout << "Hello world from C++!" << endl;
cout << "YES!" << endl;
return (0);
}

View file

@ -0,0 +1,9 @@
#include <iostream>
using namespace std;
int main()
{
cout << "Hello world from C++!" << endl;
cout << "YES!" << endl;
return (0);
}

View file

@ -0,0 +1,17 @@
#include <iostream>
#include <regex>
using namespace std;
int main()
{
auto func = [] () { cout << "Hello world from C++11" << endl; };
func(); // now call the function
std::regex r("st|mt|tr");
std::cout << "std::regex r(\"st|mt|tr\")" << " match tr? ";
if (std::regex_match("tr", r) == 0)
std::cout << "NO!\n ==> Using pre g++ 4.9.2 libstdc++ which doesn't implement regex properly" << std::endl;
else
std::cout << "YES!\n ==> Correct libstdc++11 implementation of regex (4.9.2 or later)" << std::endl;
}

View file

@ -80,3 +80,18 @@ def configure_args(self):
            args.append('--without-gnutls')

        return args
def _test_check_versions(self):
"""Perform version checks on installed package binaries."""
checks = ['ctags', 'ebrowse', 'emacs', 'emacsclient', 'etags']
for exe in checks:
expected = str(self.spec.version)
reason = 'test version of {0} is {1}'.format(exe, expected)
self.run_test(exe, ['--version'], expected, installed=True,
purpose=reason, skip_missing=True)
def test(self):
"""Perform smoke tests on the installed package."""
# Simple version check tests on known binaries
self._test_check_versions()

View file

@ -0,0 +1,28 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
class Fortran(Package):
"""Virtual package for the Fortran language."""
homepage = 'https://wg5-fortran.org/'
virtual = True
def test(self):
test_source = self.test_suite.current_test_data_dir
for test in os.listdir(test_source):
filepath = os.path.join(test_source, test)
exe_name = '%s.exe' % test
fc_exe = os.environ['FC']
fc_opts = ['-o', exe_name, filepath]
compiled = self.run_test(fc_exe, options=fc_opts, installed=True)
if compiled:
expected = ['Hello world', 'YES!']
self.run_test(exe_name, expected=expected)

View file

@ -0,0 +1,6 @@
program line
write (*,*) "Hello world from FORTRAN"
write (*,*) "YES!"
end

View file

@ -0,0 +1,6 @@
program line
write (*,*) "Hello world from FORTRAN"
write (*,*) "YES!"
end program line

View file

@ -124,7 +124,7 @@ class Gdal(AutotoolsPackage):
    depends_on('hdf5', when='+hdf5')
    depends_on('kealib', when='+kea @2:')
    depends_on('netcdf-c', when='+netcdf')
-   depends_on('jasper@1.900.1', patches='uuid.patch', when='+jasper')
+   depends_on('jasper@1.900.1', patches=[patch('uuid.patch')], when='+jasper')
    depends_on('openjpeg', when='+openjpeg')
    depends_on('xerces-c', when='+xerces')
    depends_on('expat', when='+expat')

View file

@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

import sys
import os

class Hdf(AutotoolsPackage):
@ -151,3 +152,67 @@ def configure_args(self):
    def check(self):
        with working_dir(self.build_directory):
            make('check', parallel=False)
extra_install_tests = 'hdf/util/testfiles'
@run_after('install')
def setup_build_tests(self):
"""Copy the build test files after the package is installed to an
install test subdirectory for use during `spack test run`."""
self.cache_extra_test_sources(self.extra_install_tests)
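`cache_extra_test_sources` saves parts of the build tree under the install prefix so they are still available later, when `spack test run` executes against the installed package. Conceptually it is just a guarded copy (a simplified sketch of the idea, not Spack's implementation; the directory names below are illustrative):

```python
import os
import shutil
import tempfile

def cache_extra_test_sources(build_dir, install_test_root, relative_path):
    """Copy a build-tree path into the install-test area so it survives
    until test time (simplified sketch of the idea)."""
    src = os.path.join(build_dir, relative_path)
    dest = os.path.join(install_test_root, relative_path)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    if os.path.isdir(src):
        shutil.copytree(src, dest)
    else:
        shutil.copy2(src, dest)

# Tiny demonstration with throwaway directories standing in for the
# build tree and the installed package's test root
build = tempfile.mkdtemp()
root = tempfile.mkdtemp()
os.makedirs(os.path.join(build, 'hdf/util/testfiles'))
open(os.path.join(build, 'hdf/util/testfiles/storm110.hdf'), 'w').close()
cache_extra_test_sources(build, root, 'hdf/util/testfiles')
print(os.path.isfile(os.path.join(root, 'hdf/util/testfiles/storm110.hdf')))
```

At test time, the package's `test()` method can then resolve the cached files under `install_test_root`, as the `_test_gif_converters` method below does.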
def _test_check_versions(self):
"""Perform version checks on selected installed package binaries."""
spec_vers_str = 'Version {0}'.format(self.spec.version.up_to(2))
exes = ['hdfimport', 'hrepack', 'ncdump', 'ncgen']
for exe in exes:
reason = 'test: ensuring version of {0} is {1}' \
.format(exe, spec_vers_str)
self.run_test(exe, ['-V'], spec_vers_str, installed=True,
purpose=reason, skip_missing=True)
def _test_gif_converters(self):
"""This test performs an image conversion sequence and diff."""
work_dir = '.'
storm_fn = os.path.join(self.install_test_root,
self.extra_install_tests, 'storm110.hdf')
gif_fn = 'storm110.gif'
new_hdf_fn = 'storm110gif.hdf'
# Convert a test HDF file to a gif
self.run_test('hdf2gif', [storm_fn, gif_fn], '', installed=True,
purpose="test: hdf-to-gif", work_dir=work_dir)
# Convert the gif to an HDF file
self.run_test('gif2hdf', [gif_fn, new_hdf_fn], '', installed=True,
purpose="test: gif-to-hdf", work_dir=work_dir)
# Compare the original and new HDF files
self.run_test('hdiff', [new_hdf_fn, storm_fn], '', installed=True,
purpose="test: compare orig to new hdf",
work_dir=work_dir)
def _test_list(self):
"""This test compares low-level HDF file information to expected."""
storm_fn = os.path.join(self.install_test_root,
self.extra_install_tests, 'storm110.hdf')
test_data_dir = self.test_suite.current_test_data_dir
work_dir = '.'
reason = 'test: checking hdfls output'
details_file = os.path.join(test_data_dir, 'storm110.out')
expected = get_escaped_text_output(details_file)
self.run_test('hdfls', [storm_fn], expected, installed=True,
purpose=reason, skip_missing=True, work_dir=work_dir)
def test(self):
"""Perform smoke tests on the installed package."""
# Simple version check tests on a subset of known binaries that respond
self._test_check_versions()
# Run gif converter sequence test
self._test_gif_converters()
# Run hdfls output
self._test_list()

View file

@ -0,0 +1,17 @@
File library version: Major= 0, Minor=0, Release=0
String=
Number type : (tag 106)
Ref nos: 110
Machine type : (tag 107)
Ref nos: 4369
Image Dimensions-8 : (tag 200)
Ref nos: 110
Raster Image-8 : (tag 202)
Ref nos: 110
Image Dimensions : (tag 300)
Ref nos: 110
Raster Image Data : (tag 302)
Ref nos: 110
Raster Image Group : (tag 306)
Ref nos: 110

View file

@ -6,8 +6,6 @@
import shutil
import sys
-from spack import *

class Hdf5(AutotoolsPackage):
    """HDF5 is a data model, library, and file format for storing and managing
@ -327,6 +325,9 @@ def patch_postdeps(self):
    @run_after('install')
    @on_package_attributes(run_tests=True)
    def check_install(self):
        self._check_install()

    def _check_install(self):
        # Build and run a small program to test the installed HDF5 library
        spec = self.spec
        print("Checking HDF5 installation...")
@ -375,3 +376,55 @@ def check_install(self):
            print('-' * 80)
            raise RuntimeError("HDF5 install check failed")
        shutil.rmtree(checkdir)
def _test_check_versions(self):
"""Perform version checks on selected installed package binaries."""
spec_vers_str = 'Version {0}'.format(self.spec.version)
exes = [
'h5copy', 'h5diff', 'h5dump', 'h5format_convert', 'h5ls',
'h5mkgrp', 'h5repack', 'h5stat', 'h5unjam',
]
use_short_opt = ['h52gif', 'h5repart', 'h5unjam']
for exe in exes:
reason = 'test: ensuring version of {0} is {1}' \
.format(exe, spec_vers_str)
option = '-V' if exe in use_short_opt else '--version'
self.run_test(exe, option, spec_vers_str, installed=True,
purpose=reason, skip_missing=True)
def _test_example(self):
"""This test performs copy, dump, and diff on an example hdf5 file."""
test_data_dir = self.test_suite.current_test_data_dir
filename = 'spack.h5'
h5_file = test_data_dir.join(filename)
reason = 'test: ensuring h5dump produces expected output'
expected = get_escaped_text_output(test_data_dir.join('dump.out'))
self.run_test('h5dump', filename, expected, installed=True,
purpose=reason, skip_missing=True,
work_dir=test_data_dir)
reason = 'test: ensuring h5copy runs'
options = ['-i', h5_file, '-s', 'Spack', '-o', 'test.h5', '-d',
'Spack']
self.run_test('h5copy', options, [], installed=True,
purpose=reason, skip_missing=True, work_dir='.')
reason = ('test: ensuring h5diff shows no differences between orig and'
' copy')
self.run_test('h5diff', [h5_file, 'test.h5'], [], installed=True,
purpose=reason, skip_missing=True, work_dir='.')
def test(self):
"""Perform smoke tests on the installed package."""
# Simple version check tests on known binaries
self._test_check_versions()
# Run sequence of commands on an hdf5 file
self._test_example()
# Run existing install check
# TODO: Restore once we address built vs. installed state
# self._check_install()

View file

@ -0,0 +1,45 @@
HDF5 "spack.h5" {
GROUP "/" {
GROUP "Spack" {
GROUP "Software" {
ATTRIBUTE "Distribution" {
DATATYPE H5T_STRING {
STRSIZE H5T_VARIABLE;
STRPAD H5T_STR_NULLTERM;
CSET H5T_CSET_UTF8;
CTYPE H5T_C_S1;
}
DATASPACE SCALAR
DATA {
(0): "Open Source"
}
}
DATASET "data" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 7, 11 ) / ( 7, 11 ) }
DATA {
(0,0): 0.371141, 0.508482, 0.585975, 0.0944911, 0.684849,
(0,5): 0.580396, 0.720271, 0.693561, 0.340432, 0.217145,
(0,10): 0.636083,
(1,0): 0.686996, 0.773501, 0.656767, 0.617543, 0.226132,
(1,5): 0.768632, 0.0548711, 0.54572, 0.355544, 0.591548,
(1,10): 0.233007,
(2,0): 0.230032, 0.192087, 0.293845, 0.0369338, 0.038727,
(2,5): 0.0977931, 0.966522, 0.0821391, 0.857921, 0.495703,
(2,10): 0.746006,
(3,0): 0.598494, 0.990266, 0.993009, 0.187481, 0.746391,
(3,5): 0.140095, 0.122661, 0.929242, 0.542415, 0.802758,
(3,10): 0.757941,
(4,0): 0.372124, 0.411982, 0.270479, 0.950033, 0.329948,
(4,5): 0.936704, 0.105097, 0.742285, 0.556565, 0.18988, 0.72797,
(5,0): 0.801669, 0.271807, 0.910649, 0.186251, 0.868865,
(5,5): 0.191484, 0.788371, 0.920173, 0.582249, 0.682022,
(5,10): 0.146883,
(6,0): 0.826824, 0.0886705, 0.402606, 0.0532444, 0.72509,
(6,5): 0.964683, 0.330362, 0.833284, 0.630456, 0.411489, 0.247806
}
}
}
}
}
}

Binary file not shown.

View file

@ -21,7 +21,7 @@ class Jq(AutotoolsPackage):
     @run_after('install')
     @on_package_attributes(run_tests=True)
-    def installtest(self):
+    def install_test(self):
         jq = self.spec['jq'].command
         f = os.path.join(os.path.dirname(__file__), 'input.json')

View file

@ -27,7 +27,7 @@ def cmake_args(self):
     @run_after('install')
     @on_package_attributes(run_tests=True)
-    def test(self):
+    def test_install(self):
         # The help message exits with an exit code of 1
         kcov = Executable(self.prefix.bin.kcov)
         kcov('-h', ignore_errors=1)

View file

@ -3,8 +3,6 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-from spack import *

class Libsigsegv(AutotoolsPackage, GNUMirrorPackage):
    """GNU libsigsegv is a library for handling page faults in user mode."""
@ -18,5 +16,60 @@ class Libsigsegv(AutotoolsPackage, GNUMirrorPackage):
    patch('patch.new_config_guess', when='@2.10')

    test_requires_compiler = True

    def configure_args(self):
        return ['--enable-shared']
extra_install_tests = 'tests/.libs'
@run_after('install')
def setup_build_tests(self):
"""Copy the build test files after the package is installed to an
install test subdirectory for use during `spack test run`."""
self.cache_extra_test_sources(self.extra_install_tests)
def _run_smoke_tests(self):
"""Build and run the added smoke (install) test."""
data_dir = self.test_suite.current_test_data_dir
prog = 'smoke_test'
src = data_dir.join('{0}.c'.format(prog))
options = [
'-I{0}'.format(self.prefix.include),
src,
'-o',
prog,
'-L{0}'.format(self.prefix.lib),
'-lsigsegv',
'{0}{1}'.format(self.compiler.cc_rpath_arg, self.prefix.lib)]
reason = 'test: checking ability to link to the library'
self.run_test('cc', options, [], installed=False, purpose=reason)
# Now run the program and confirm the output matches expectations
expected = get_escaped_text_output(data_dir.join('smoke_test.out'))
reason = 'test: checking ability to use the library'
self.run_test(prog, [], expected, purpose=reason)
def _run_build_tests(self):
"""Run selected build tests."""
passed = 'Test passed'
checks = {
'sigsegv1': [passed],
'sigsegv2': [passed],
'sigsegv3': ['caught', passed],
'stackoverflow1': ['recursion', 'Stack overflow', passed],
'stackoverflow2': ['recursion', 'overflow', 'violation', passed],
}
for exe, expected in checks.items():
reason = 'test: checking {0} output'.format(exe)
self.run_test(exe, [], expected, installed=True, purpose=reason,
skip_missing=True)
def test(self):
# Run the simple built-in smoke test
self._run_smoke_tests()
# Run test programs pulled from the build
self._run_build_tests()

View file

@ -0,0 +1,70 @@
/* Simple "Hello World" test set up to handle a single page fault
*
* Inspired by libsigsegv's test cases with argument names for handlers
* taken from the header files.
*/
#include "sigsegv.h"
#include <stdio.h>
#include <stdlib.h> /* for exit */
# include <stddef.h> /* for NULL on SunOS4 (per libsigsegv examples) */
#include <setjmp.h> /* for controlling handler-related flow */
/* Calling environment */
jmp_buf calling_env;
char *message = "Hello, World!";
/* Track the number of times the handler is called */
volatile int times_called = 0;
/* Continuation function, which relies on the latest libsigsegv API */
static void
resume(void *cont_arg1, void *cont_arg2, void *cont_arg3)
{
/* Go to calling environment and restore state. */
longjmp(calling_env, times_called);
}
/* sigsegv handler */
int
handle_sigsegv(void *fault_address, int serious)
{
times_called++;
/* Generate handler output for the test. */
printf("Caught sigsegv #%d\n", times_called);
return sigsegv_leave_handler(resume, NULL, NULL, NULL);
}
/* "Buggy" function used to demonstrate non-local goto */
void printit(char *m)
{
if (times_called < 1) {
/* Force SIGSEGV only on the first call. */
volatile int *fail_ptr = 0;
int failure = *fail_ptr;
printf("%s\n", m);
} else {
/* Print it correctly. */
printf("%s\n", m);
}
}
int
main(void)
{
/* Install the global SIGSEGV handler */
sigsegv_install_handler(&handle_sigsegv);
char *msg = "Hello World!";
int calls = setjmp(calling_env); /* Resume here after detecting sigsegv */
/* Call the function that will trigger the page fault. */
printit(msg);
return 0;
}

View file

@ -0,0 +1,2 @@
Caught sigsegv #1
Hello World!

View file

@ -2,6 +2,8 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

import llnl.util.filesystem as fs
import llnl.util.tty as tty

from spack import *
@ -82,3 +84,34 @@ def import_module_test(self):
        if '+python' in self.spec:
            with working_dir('spack-test', create=True):
                python('-c', 'import libxml2')
def test(self):
"""Perform smoke tests on the installed package"""
# Start with what we already have post-install
tty.msg('test: Performing simple import test')
self.import_module_test()
data_dir = self.test_suite.current_test_data_dir
# Now run defined tests based on expected executables
dtd_path = data_dir.join('info.dtd')
test_filename = 'test.xml'
exec_checks = {
'xml2-config': [
('--version', [str(self.spec.version)], 0)],
'xmllint': [
(['--auto', '-o', test_filename], [], 0),
(['--postvalid', test_filename],
['validity error', 'no DTD found', 'does not validate'], 3),
(['--dtdvalid', dtd_path, test_filename],
['validity error', 'does not follow the DTD'], 3),
(['--dtdvalid', dtd_path, data_dir.join('info.xml')], [], 0)],
'xmlcatalog': [
('--create', ['<catalog xmlns', 'catalog"/>'], 0)],
}
for exe in exec_checks:
for options, expected, status in exec_checks[exe]:
self.run_test(exe, options, expected, status)
# Perform some cleanup
fs.force_remove(test_filename)

View file

@ -0,0 +1,2 @@
<!ELEMENT info (data)>
<!ELEMENT data (#PCDATA)>

View file

@ -0,0 +1,4 @@
<?xml version="1.0"?>
<info>
<data>abc</data>
</info>

View file

@ -74,3 +74,16 @@ def configure_args(self):
            args.append('ac_cv_type_struct_sched_param=yes')

        return args
def test(self):
spec_vers = str(self.spec.version)
reason = 'test: ensuring m4 version is {0}'.format(spec_vers)
self.run_test('m4', '--version', spec_vers, installed=True,
purpose=reason, skip_missing=False)
reason = 'test: ensuring m4 example succeeds'
test_data_dir = self.test_suite.current_test_data_dir
hello_file = test_data_dir.join('hello.m4')
expected = get_escaped_text_output(test_data_dir.join('hello.out'))
self.run_test('m4', hello_file, expected, installed=True,
purpose=reason, skip_missing=False)
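`get_escaped_text_output`, used above and in several other packages, turns a file of literal expected output into patterns that `run_test` can match against actual output. A plausible sketch of the helper (this may differ from Spack's exact implementation):

```python
import re
import tempfile

def get_escaped_text_output(filename):
    """Read a file of expected literal output and return one pattern per
    line, with regex metacharacters escaped so punctuation like '!' or
    '(' matches literally rather than as regex syntax."""
    with open(filename) as f:
        return [re.escape(line.rstrip('\n')) for line in f]

# Demonstration against a throwaway expected-output file
with tempfile.NamedTemporaryFile('w', suffix='.out', delete=False) as f:
    f.write('Hello, World!\n')
    name = f.name

patterns = get_escaped_text_output(name)
print(re.search(patterns[0], 'Hello, World!') is not None)
```

Escaping matters because expected-output files such as `hello.out` contain punctuation that would otherwise be interpreted as regex operators when `run_test` compares output.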

View file

@ -0,0 +1,4 @@
define(NAME, World)
dnl This line should not show up
// macro is ifdef(`NAME', , not)defined
Hello, NAME!

View file

@ -0,0 +1,3 @@
// macro is defined
Hello, World!

View file

@ -0,0 +1,31 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
class Mpi(Package):
"""Virtual package for the Message Passing Interface."""
homepage = 'https://www.mpi-forum.org/'
virtual = True
def test(self):
for lang in ('c', 'f'):
filename = self.test_suite.current_test_data_dir.join(
'mpi_hello.' + lang)
compiler_var = 'MPICC' if lang == 'c' else 'MPIF90'
compiler = os.environ[compiler_var]
exe_name = 'mpi_hello_%s' % lang
mpirun = join_path(self.prefix.bin, 'mpirun')
compiled = self.run_test(compiler,
options=['-o', exe_name, filename])
if compiled:
self.run_test(mpirun,
options=['-np', '1', exe_name],
expected=[r'Hello world! From rank \s*0 of \s*1']
)

View file

@ -0,0 +1,16 @@
#include <stdio.h>
#include <mpi.h>
int main(int argc, char** argv) {
MPI_Init(&argc, &argv);
int rank;
int num_ranks;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &num_ranks);
printf("Hello world! From rank %d of %d\n", rank, num_ranks);
MPI_Finalize();
return(0);
}

View file

@ -0,0 +1,11 @@
c Fortran example
program hello
include 'mpif.h'
integer rank, num_ranks, err_flag
call MPI_INIT(err_flag)
call MPI_COMM_SIZE(MPI_COMM_WORLD, num_ranks, err_flag)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, err_flag)
print*, 'Hello world! From rank', rank, 'of ', num_ranks
call MPI_FINALIZE(err_flag)
end

View file

@ -51,7 +51,7 @@ def configure(self, spec, prefix):
     @run_after('configure')
     @on_package_attributes(run_tests=True)
-    def test(self):
+    def configure_test(self):
         ninja = Executable('./ninja')
         ninja('-j{0}'.format(make_jobs), 'ninja_test')
         ninja_test = Executable('./ninja_test')

View file

@ -2,7 +2,6 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-import re

class Ninja(Package):
@ -40,7 +39,7 @@ def configure(self, spec, prefix):
     @run_after('configure')
     @on_package_attributes(run_tests=True)
-    def test(self):
+    def configure_test(self):
         ninja = Executable('./ninja')
         ninja('-j{0}'.format(make_jobs), 'ninja_test')
         ninja_test = Executable('./ninja_test')

View file

@ -127,7 +127,7 @@ def build(self, spec, prefix):
     @run_after('build')
     @on_package_attributes(run_tests=True)
-    def test(self):
+    def build_test(self):
         make('test')
         make('test-addons')

View file

@ -336,6 +336,8 @@ class Openmpi(AutotoolsPackage):
    filter_compiler_wrappers('openmpi/*-wrapper-data*', relative_root='share')

    extra_install_tests = 'examples'

    @classmethod
    def determine_version(cls, exe):
        output = Executable(exe)(output=str, error=str)
@ -846,6 +848,149 @@ def delete_mpirun_mpiexec(self):
        else:
            copy(script_stub, exe)
@run_after('install')
def setup_install_tests(self):
"""
Copy the example files after the package is installed to an
install test subdirectory for use during `spack test run`.
"""
self.cache_extra_test_sources(self.extra_install_tests)
def _test_bin_ops(self):
info = ([], ['Ident string: {0}'.format(self.spec.version), 'MCA'],
0)
ls = (['-n', '1', 'ls', '..'],
['openmpi-{0}'.format(self.spec.version)], 0)
checks = {
'mpirun': ls,
'ompi_info': info,
'oshmem_info': info,
'oshrun': ls,
'shmemrun': ls,
}
for exe in checks:
options, expected, status = checks[exe]
reason = 'test: checking {0} output'.format(exe)
self.run_test(exe, options, expected, status, installed=True,
purpose=reason, skip_missing=True)
def _test_check_versions(self):
comp_vers = str(self.spec.compiler.version)
spec_vers = str(self.spec.version)
checks = {
# Binaries available in at least versions 2.0.0 through 4.0.3
'mpiCC': comp_vers,
'mpic++': comp_vers,
'mpicc': comp_vers,
'mpicxx': comp_vers,
'mpiexec': spec_vers,
'mpif77': comp_vers,
'mpif90': comp_vers,
'mpifort': comp_vers,
'mpirun': spec_vers,
'ompi_info': spec_vers,
'ortecc': comp_vers,
'orterun': spec_vers,
# Binaries available in versions 2.0.0 through 2.1.6
'ompi-submit': spec_vers,
'orte-submit': spec_vers,
# Binaries available in versions 2.0.0 through 3.1.5
'ompi-dvm': spec_vers,
'orte-dvm': spec_vers,
'oshcc': comp_vers,
'oshfort': comp_vers,
'oshmem_info': spec_vers,
'oshrun': spec_vers,
'shmemcc': comp_vers,
'shmemfort': comp_vers,
'shmemrun': spec_vers,
# Binary available in version 3.1.0 through 3.1.5
'prun': spec_vers,
# Binaries available in versions 3.0.0 through 3.1.5
'oshCC': comp_vers,
'oshc++': comp_vers,
'oshcxx': comp_vers,
'shmemCC': comp_vers,
'shmemc++': comp_vers,
'shmemcxx': comp_vers,
}
for exe in checks:
expected = checks[exe]
purpose = 'test: ensuring version of {0} is {1}' \
.format(exe, expected)
self.run_test(exe, '--version', expected, installed=True,
purpose=purpose, skip_missing=True)
def _test_examples(self):
# First build the examples
self.run_test('make', ['all'], [],
purpose='test: ensuring ability to build the examples',
work_dir=join_path(self.install_test_root,
self.extra_install_tests))
# Now run those with known results
have_spml = self.spec.satisfies('@2.0.0:2.1.6')
hello_world = (['Hello, world', 'I am', '0 of', '1'], 0)
max_red = (['0/1 dst = 0 1 2'], 0)
missing_spml = (['No available spml components'], 1)
no_out = ([''], 0)
ring_out = (['1 processes in ring', '0 exiting'], 0)
strided = (['not in valid range'], 255)
checks = {
'hello_c': hello_world,
'hello_cxx': hello_world,
'hello_mpifh': hello_world,
'hello_oshmem': hello_world if have_spml else missing_spml,
'hello_oshmemcxx': hello_world if have_spml else missing_spml,
'hello_oshmemfh': hello_world if have_spml else missing_spml,
'hello_usempi': hello_world,
'hello_usempif08': hello_world,
'oshmem_circular_shift': ring_out if have_spml else missing_spml,
'oshmem_max_reduction': max_red if have_spml else missing_spml,
'oshmem_shmalloc': no_out if have_spml else missing_spml,
'oshmem_strided_puts': strided if have_spml else missing_spml,
'oshmem_symmetric_data': no_out if have_spml else missing_spml,
'ring_c': ring_out,
'ring_cxx': ring_out,
'ring_mpifh': ring_out,
'ring_oshmem': ring_out if have_spml else missing_spml,
'ring_oshmemfh': ring_out if have_spml else missing_spml,
'ring_usempi': ring_out,
'ring_usempif08': ring_out,
}
for exe in checks:
expected = checks[exe]
reason = 'test: checking example {0} output'.format(exe)
self.run_test(exe, [], expected, 0, installed=True,
purpose=reason, skip_missing=True)
def test(self):
"""Perform smoke tests on the installed package."""
# Simple version check tests on known packages
self._test_check_versions()
# Test the operation of selected executables
self._test_bin_ops()
# Test example programs pulled from the build
self._test_examples()
def get_spack_compiler_spec(path):
    spack_compilers = spack.compilers.find_compilers([path])

View file

@ -2,8 +2,8 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack import *
import os

class Patchelf(AutotoolsPackage):
@ -18,3 +18,24 @@ class Patchelf(AutotoolsPackage):
    version('0.10', sha256='b2deabce05c34ce98558c0efb965f209de592197b2c88e930298d740ead09019')
    version('0.9', sha256='f2aa40a6148cb3b0ca807a1bf836b081793e55ec9e5540a5356d800132be7e0a')
    version('0.8', sha256='14af06a2da688d577d64ff8dac065bb8903bbffbe01d30c62df7af9bf4ce72fe')
def test(self):
# Check that patchelf is in the prefix and reports the correct version
reason = 'test: ensuring patchelf version is {0}' \
.format(self.spec.version)
self.run_test('patchelf',
options='--version',
expected=['patchelf %s' % self.spec.version],
installed=True,
purpose=reason)
# Check the rpath is changed
currdir = os.getcwd()
hello_file = self.test_suite.current_test_data_dir.join('hello')
self.run_test('patchelf', ['--set-rpath', currdir, hello_file],
purpose='test: ensuring that patchelf can change rpath')
self.run_test('patchelf',
options=['--print-rpath', hello_file],
expected=[currdir],
purpose='test: ensuring that patchelf changed rpath')

Some files were not shown because too many files have changed in this diff.