Distributed builds (#13100)

Fixes #9394
Closes #13217.

## Background
Spack provides the ability to enable/disable parallel builds through two options: the package-level `parallel` property and the `build_jobs` configuration setting.  This PR changes the installation algorithm to allow multiple, simultaneous processes to coordinate the installation of the same spec (and of specs with overlapping dependencies).

The `parallel` (boolean) property sets the default for its package, though the value can be overridden in the `install` method.

Spack's current parallel builds are limited to build tools that support `jobs` arguments (e.g., Makefiles).  The number of jobs actually used is calculated as `min(config:build_jobs, # of cores, 16)`, which can be overridden in the package or on the command line (i.e., `spack install -j <# jobs>`).
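As a rough sketch of that calculation (the function and parameter names here are illustrative, not Spack's actual API), the effective job count could be computed as:

```python
import multiprocessing


def effective_build_jobs(build_jobs=None, override=None):
    """Sketch of the effective job count: the configured ``build_jobs``
    value is capped by the machine's core count and a ceiling of 16,
    while a command-line override (``spack install -j N``) wins outright.
    """
    if override is not None:
        return override
    ncores = multiprocessing.cpu_count()
    # Ignore unset values when taking the minimum.
    candidates = [n for n in (build_jobs, ncores, 16) if n is not None]
    return min(candidates)
```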

This PR adds support for distributed (single- and multi-node) parallel builds.  The goals of this work include improving the efficiency of installing packages with many dependencies and reducing the repetition associated with concurrent installations of (dependency) packages.

## Approach
### File System Locks
Coordination between concurrent installs of overlapping packages to the same Spack instance is accomplished through bottom-up dependency DAG processing and file system locks.  The processes can be any combination of interactive and batch jobs operating on the same file system.  An exclusive prefix lock is required to install a package, while a shared prefix lock suffices to check whether the package is installed.
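The underlying primitive is a POSIX byte-range lock: every spec maps to its own byte offset within a single lock file, so one file provides readers-writer semantics for every install prefix.  A minimal sketch (plain `fcntl`, not Spack's `Lock` class; the offset `42` is an arbitrary stand-in for a spec's hash-derived offset):

```python
import fcntl
import os
import tempfile


def acquire(lockfile, op, start):
    """Non-blocking byte-range lock of type ``op`` on one byte at ``start``."""
    fcntl.lockf(lockfile, op | fcntl.LOCK_NB, 1, start, os.SEEK_SET)


lock_path = os.path.join(tempfile.mkdtemp(), 'prefix_lock')
with open(lock_path, 'w+') as f:
    acquire(f, fcntl.LOCK_EX, start=42)   # exclusive: installing the spec
    acquire(f, fcntl.LOCK_SH, start=42)   # same process re-locks as shared
# closing the file releases all of its locks
```

Re-locking the same byte range from the same process replaces the existing lock, which is what makes the write-to-read downgrade in this PR possible without a release window.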

Failures are communicated through a separate exclusive prefix failure lock (for concurrent processes) combined with a persistent store (for separate, related build processes).  The resulting file contains the failing spec to facilitate manual debugging.
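The persistent store can be pictured as a per-spec marker file.  This hypothetical helper mirrors the naming scheme used in the PR (spec name plus full hash, under the database's failure directory):

```python
import os


def failure_marker_path(failure_dir, spec_name, full_hash):
    """Sketch of the persistent failure marker: the failing spec is written
    to a file named for the spec and its full hash, so separate (even later)
    build processes can detect the failure and a human can inspect the spec.
    """
    return os.path.join(failure_dir, '{0}-{1}'.format(spec_name, full_hash))
```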

### Priority Queue
Management of dependency builds changed from recursion to a priority queue, where the priority of a spec is the number of its remaining uninstalled dependencies.
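The queue discipline can be sketched with `heapq` (illustrative only; Spack's actual installer is more involved and interleaves the lock handling described above).  Ready specs have priority 0 and pop first; installing a spec lowers the priorities of its dependents:

```python
import heapq


def build_order(deps):
    """Return an install order where priority = number of uninstalled
    dependencies.  ``deps`` maps each spec name to the set of names it
    depends on."""
    remaining = {s: set(d) for s, d in deps.items()}
    heap = [(len(d), s) for s, d in remaining.items()]
    heapq.heapify(heap)
    installed, order = set(), []
    while heap:
        prio, spec = heapq.heappop(heap)
        if spec in installed:
            continue                       # stale queue entry
        current = len(remaining[spec] - installed)
        if current > 0 and current != prio:
            heapq.heappush(heap, (current, spec))  # re-queue with new priority
            continue
        if current > 0:
            raise RuntimeError('dependency cycle involving ' + spec)
        installed.add(spec)
        order.append(spec)
        # Re-queue dependents with their new (lower) priorities.
        for s, d in remaining.items():
            if spec in d and s not in installed:
                heapq.heappush(heap, (len(d - installed), s))
    return order
```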

Using a queue required a change to dependency build exception handling.  The most visible consequence is that the `install` method *must* install something in the prefix; packages can no longer get away with an `install` method consisting of `pass`, for example.
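For example, a package whose `install` used to be `pass` now has to create something in the prefix so the post-install `sanity_check_prefix` passes (schematic stand-in below, not the real Spack package base class):

```python
import os
import tempfile


class MockInstallPackage(object):
    """Schematic stand-in for a package whose old ``install`` was ``pass``.
    With the queue-based installer, an empty prefix triggers
    ``Install failed .. Nothing was installed!``, so the method must
    create at least one file."""

    def install(self, spec, prefix):
        # 'pass' here would now fail the post-install sanity check;
        # instead, record a placeholder so the prefix is non-empty.
        os.makedirs(prefix)
        with open(os.path.join(prefix, 'empty-marker.txt'), 'w') as f:
            f.write('installed\n')


prefix = os.path.join(tempfile.mkdtemp(), 'example-1.0')
MockInstallPackage().install(None, prefix)
```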

## Caveats
- This still only parallelizes a single-rooted build.  Multi-rooted installs (e.g., for environments) are TBD in a future PR.

Tasks:
- [x] Adjust package lock timeout to correspond to value used in the demo
- [x] Adjust database lock timeout to reduce contention on startup of concurrent
    `spack install <spec>` calls
- [x] Replace (test) package's `install: pass` methods with file creation since post-install 
    `sanity_check_prefix` will otherwise error out with `Install failed .. Nothing was installed!`
- [x] Resolve remaining existing test failures
- [x] Respond to alalazo's initial feedback
- [x] Remove `bin/demo-locks.py`
- [x] Add new tests to address new coverage issues
- [x] Replace built-in package's `def install(..): pass` to "install" something
    (i.e., only `apple-libunwind`)
- [x] Increase code coverage
Committed by Tamara Dahlgren on 2020-02-19 (merged via GitHub).
parent 2f4881d582, commit f2aca86502
100 changed files with 2954 additions and 963 deletions


```diff
@@ -137,7 +137,7 @@ config:
   # when Spack needs to manage its own package metadata and all operations are
   # expected to complete within the default time limit. The timeout should
   # therefore generally be left untouched.
-  db_lock_timeout: 120
+  db_lock_timeout: 3

   # How long to wait when attempting to modify a package (e.g. to install it).
```


```diff
@@ -8,14 +8,32 @@
 import errno
 import time
 import socket
+from datetime import datetime

 import llnl.util.tty as tty
+import spack.util.string


 __all__ = ['Lock', 'LockTransaction', 'WriteTransaction', 'ReadTransaction',
            'LockError', 'LockTimeoutError',
            'LockPermissionError', 'LockROFileError', 'CantCreateLockError']

+
+#: Mapping of supported locks to description
+lock_type = {fcntl.LOCK_SH: 'read', fcntl.LOCK_EX: 'write'}
+
+#: A useful replacement for functions that should return True when not provided
+#: for example.
+true_fn = lambda: True
+
+
+def _attempts_str(wait_time, nattempts):
+    # Don't print anything if we succeeded on the first try
+    if nattempts <= 1:
+        return ''
+
+    attempts = spack.util.string.plural(nattempts, 'attempt')
+    return ' after {0:0.2f}s and {1}'.format(wait_time, attempts)
+
+
 class Lock(object):
     """This is an implementation of a filesystem lock using Python's lockf.
@@ -31,8 +49,8 @@ class Lock(object):
     maintain multiple locks on the same file.
     """

-    def __init__(self, path, start=0, length=0, debug=False,
-                 default_timeout=None):
+    def __init__(self, path, start=0, length=0, default_timeout=None,
+                 debug=False, desc=''):
         """Construct a new lock on the file at ``path``.

         By default, the lock applies to the whole file. Optionally,
@@ -43,6 +61,16 @@ def __init__(self, path, start=0, length=0, debug=False,
         not currently expose the ``whence`` parameter -- ``whence`` is
         always ``os.SEEK_SET`` and ``start`` is always evaluated from the
         beginning of the file.
+
+        Args:
+            path (str): path to the lock
+            start (int): optional byte offset at which the lock starts
+            length (int): optional number of bytes to lock
+            default_timeout (int): number of seconds to wait for lock attempts,
+                where None means to wait indefinitely
+            debug (bool): debug mode specific to locking
+            desc (str): optional debug message lock description, which is
+                helpful for distinguishing between different Spack locks.
         """
         self.path = path
         self._file = None
@@ -56,6 +84,9 @@ def __init__(self, path, start=0, length=0, debug=False,
         # enable debug mode
         self.debug = debug

+        # optional debug description
+        self.desc = ' ({0})'.format(desc) if desc else ''
+
         # If the user doesn't set a default timeout, or if they choose
         # None, 0, etc. then lock attempts will not time out (unless the
         # user sets a timeout for each attempt)
@@ -89,6 +120,20 @@ def _poll_interval_generator(_wait_times=None):
             num_requests += 1
             yield wait_time

+    def __repr__(self):
+        """Formal representation of the lock."""
+        rep = '{0}('.format(self.__class__.__name__)
+        for attr, value in self.__dict__.items():
+            rep += '{0}={1}, '.format(attr, value.__repr__())
+        return '{0})'.format(rep.strip(', '))
+
+    def __str__(self):
+        """Readable string (with key fields) of the lock."""
+        location = '{0}[{1}:{2}]'.format(self.path, self._start, self._length)
+        timeout = 'timeout={0}'.format(self.default_timeout)
+        activity = '#reads={0}, #writes={1}'.format(self._reads, self._writes)
+        return '({0}, {1}, {2})'.format(location, timeout, activity)
+
     def _lock(self, op, timeout=None):
         """This takes a lock using POSIX locks (``fcntl.lockf``).

@@ -99,8 +144,9 @@ def _lock(self, op, timeout=None):
         successfully acquired, the total wait time and the number of attempts
         is returned.
         """
-        assert op in (fcntl.LOCK_SH, fcntl.LOCK_EX)
+        assert op in lock_type

+        self._log_acquiring('{0} LOCK'.format(lock_type[op].upper()))
         timeout = timeout or self.default_timeout

         # Create file and parent directories if they don't exist.
@@ -128,6 +174,9 @@ def _lock(self, op, timeout=None):
                     # If the file were writable, we'd have opened it 'r+'
                     raise LockROFileError(self.path)

+        tty.debug("{0} locking [{1}:{2}]: timeout {3} sec"
+                  .format(lock_type[op], self._start, self._length, timeout))
+
         poll_intervals = iter(Lock._poll_interval_generator())
         start_time = time.time()
         num_attempts = 0
@@ -139,17 +188,21 @@ def _lock(self, op, timeout=None):
             time.sleep(next(poll_intervals))

+        # TBD: Is an extra attempt after timeout needed/appropriate?
         num_attempts += 1
         if self._poll_lock(op):
             total_wait_time = time.time() - start_time
             return total_wait_time, num_attempts

-        raise LockTimeoutError("Timed out waiting for lock.")
+        raise LockTimeoutError("Timed out waiting for a {0} lock."
+                               .format(lock_type[op]))

     def _poll_lock(self, op):
         """Attempt to acquire the lock in a non-blocking manner. Return whether
         the locking attempt succeeds
         """
+        assert op in lock_type
+
         try:
             # Try to get the lock (will raise if not available.)
             fcntl.lockf(self._file, op | fcntl.LOCK_NB,
@@ -159,6 +212,9 @@ def _poll_lock(self, op):
             if self.debug:
                 # All locks read the owner PID and host
                 self._read_debug_data()
+                tty.debug('{0} locked {1} [{2}:{3}] (owner={4})'
+                          .format(lock_type[op], self.path,
+                                  self._start, self._length, self.pid))

                 # Exclusive locks write their PID/host
                 if op == fcntl.LOCK_EX:
@@ -167,12 +223,12 @@ def _poll_lock(self, op):
             return True

         except IOError as e:
-            if e.errno in (errno.EAGAIN, errno.EACCES):
-                # EAGAIN and EACCES == locked by another process
-                pass
-            else:
+            # EAGAIN and EACCES == locked by another process (so try again)
+            if e.errno not in (errno.EAGAIN, errno.EACCES):
                 raise

         return False

     def _ensure_parent_directory(self):
         parent = os.path.dirname(self.path)
@@ -227,6 +283,8 @@ def _unlock(self):
                     self._length, self._start, os.SEEK_SET)
         self._file.close()
         self._file = None
+        self._reads = 0
+        self._writes = 0

     def acquire_read(self, timeout=None):
         """Acquires a recursive, shared lock for reading.
@@ -242,15 +300,14 @@ def acquire_read(self, timeout=None):
         timeout = timeout or self.default_timeout

         if self._reads == 0 and self._writes == 0:
-            self._debug(
-                'READ LOCK: {0.path}[{0._start}:{0._length}] [Acquiring]'
-                .format(self))
             # can raise LockError.
             wait_time, nattempts = self._lock(fcntl.LOCK_SH, timeout=timeout)
-            self._acquired_debug('READ LOCK', wait_time, nattempts)
             self._reads += 1
+            # Log if acquired, which includes counts when verbose
+            self._log_acquired('READ LOCK', wait_time, nattempts)
             return True
         else:
+            # Increment the read count for nested lock tracking
             self._reads += 1
             return False

@@ -268,13 +325,11 @@ def acquire_write(self, timeout=None):
         timeout = timeout or self.default_timeout

         if self._writes == 0:
-            self._debug(
-                'WRITE LOCK: {0.path}[{0._start}:{0._length}] [Acquiring]'
-                .format(self))
             # can raise LockError.
             wait_time, nattempts = self._lock(fcntl.LOCK_EX, timeout=timeout)
-            self._acquired_debug('WRITE LOCK', wait_time, nattempts)
             self._writes += 1
+            # Log if acquired, which includes counts when verbose
+            self._log_acquired('WRITE LOCK', wait_time, nattempts)

             # return True only if we weren't nested in a read lock.
             # TODO: we may need to return two values: whether we got
@@ -282,9 +337,65 @@ def acquire_write(self, timeout=None):
             # write lock for the first time. Now it returns the latter.
             return self._reads == 0
         else:
+            # Increment the write count for nested lock tracking
             self._writes += 1
             return False

+    def is_write_locked(self):
+        """Check if the file is write locked
+
+        Return:
+            (bool): ``True`` if the path is write locked, otherwise, ``False``
+        """
+        try:
+            self.acquire_read()
+
+            # If we have a read lock then no other process has a write lock.
+            self.release_read()
+        except LockTimeoutError:
+            # Another process is holding a write lock on the file
+            return True
+
+        return False
+
+    def downgrade_write_to_read(self, timeout=None):
+        """
+        Downgrade from an exclusive write lock to a shared read.
+
+        Raises:
+            LockDowngradeError: if this is an attempt at a nested transaction
+        """
+        timeout = timeout or self.default_timeout
+
+        if self._writes == 1 and self._reads == 0:
+            self._log_downgrading()
+            # can raise LockError.
+            wait_time, nattempts = self._lock(fcntl.LOCK_SH, timeout=timeout)
+            self._reads = 1
+            self._writes = 0
+            self._log_downgraded(wait_time, nattempts)
+        else:
+            raise LockDowngradeError(self.path)
+
+    def upgrade_read_to_write(self, timeout=None):
+        """
+        Attempts to upgrade from a shared read lock to an exclusive write.
+
+        Raises:
+            LockUpgradeError: if this is an attempt at a nested transaction
+        """
+        timeout = timeout or self.default_timeout
+
+        if self._reads == 1 and self._writes == 0:
+            self._log_upgrading()
+            # can raise LockError.
+            wait_time, nattempts = self._lock(fcntl.LOCK_EX, timeout=timeout)
+            self._reads = 0
+            self._writes = 1
+            self._log_upgraded(wait_time, nattempts)
+        else:
+            raise LockUpgradeError(self.path)
+
     def release_read(self, release_fn=None):
         """Releases a read lock.

@@ -305,17 +416,17 @@ def release_read(self, release_fn=None):
         """
         assert self._reads > 0

+        locktype = 'READ LOCK'
         if self._reads == 1 and self._writes == 0:
-            self._debug(
-                'READ LOCK: {0.path}[{0._start}:{0._length}] [Released]'
-                .format(self))
-            result = True
-            if release_fn is not None:
-                result = release_fn()
+            self._log_releasing(locktype)
+
+            # we need to call release_fn before releasing the lock
+            release_fn = release_fn or true_fn
+            result = release_fn()

             self._unlock()      # can raise LockError.
-            self._reads -= 1
+            self._reads = 0
+            self._log_released(locktype)
             return result
         else:
             self._reads -= 1
@@ -339,45 +450,91 @@ def release_write(self, release_fn=None):
         """
         assert self._writes > 0

+        release_fn = release_fn or true_fn
+        locktype = 'WRITE LOCK'
         if self._writes == 1 and self._reads == 0:
-            self._debug(
-                'WRITE LOCK: {0.path}[{0._start}:{0._length}] [Released]'
-                .format(self))
+            self._log_releasing(locktype)

             # we need to call release_fn before releasing the lock
-            result = True
-            if release_fn is not None:
-                result = release_fn()
+            result = release_fn()

             self._unlock()      # can raise LockError.
-            self._writes -= 1
+            self._writes = 0
+            self._log_released(locktype)
             return result
         else:
             self._writes -= 1

             # when the last *write* is released, we call release_fn here
             # instead of immediately before releasing the lock.
             if self._writes == 0:
-                return release_fn() if release_fn is not None else True
+                return release_fn()
             else:
                 return False

     def _debug(self, *args):
         tty.debug(*args)

-    def _acquired_debug(self, lock_type, wait_time, nattempts):
-        attempts_format = 'attempt' if nattempts == 1 else 'attempt'
-        if nattempts > 1:
-            acquired_attempts_format = ' after {0:0.2f}s and {1:d} {2}'.format(
-                wait_time, nattempts, attempts_format)
-        else:
-            # Dont print anything if we succeeded immediately
-            acquired_attempts_format = ''
-        self._debug(
-            '{0}: {1.path}[{1._start}:{1._length}] [Acquired{2}]'
-            .format(lock_type, self, acquired_attempts_format))
+    def _get_counts_desc(self):
+        return '(reads {0}, writes {1})'.format(self._reads, self._writes) \
+            if tty.is_verbose() else ''
+
+    def _log_acquired(self, locktype, wait_time, nattempts):
+        attempts_part = _attempts_str(wait_time, nattempts)
+        now = datetime.now()
+        desc = 'Acquired at %s' % now.strftime("%H:%M:%S.%f")
+        self._debug(self._status_msg(locktype, '{0}{1}'.
+                    format(desc, attempts_part)))
+
+    def _log_acquiring(self, locktype):
+        self._debug2(self._status_msg(locktype, 'Acquiring'))
+
+    def _log_downgraded(self, wait_time, nattempts):
+        attempts_part = _attempts_str(wait_time, nattempts)
+        now = datetime.now()
+        desc = 'Downgraded at %s' % now.strftime("%H:%M:%S.%f")
+        self._debug(self._status_msg('READ LOCK', '{0}{1}'
+                    .format(desc, attempts_part)))
+
+    def _log_downgrading(self):
+        self._debug2(self._status_msg('WRITE LOCK', 'Downgrading'))
+
+    def _log_released(self, locktype):
+        now = datetime.now()
+        desc = 'Released at %s' % now.strftime("%H:%M:%S.%f")
+        self._debug(self._status_msg(locktype, desc))
+
+    def _log_releasing(self, locktype):
+        self._debug2(self._status_msg(locktype, 'Releasing'))
+
+    def _log_upgraded(self, wait_time, nattempts):
+        attempts_part = _attempts_str(wait_time, nattempts)
+        now = datetime.now()
+        desc = 'Upgraded at %s' % now.strftime("%H:%M:%S.%f")
+        self._debug(self._status_msg('WRITE LOCK', '{0}{1}'.
+                    format(desc, attempts_part)))
+
+    def _log_upgrading(self):
+        self._debug2(self._status_msg('READ LOCK', 'Upgrading'))
+
+    def _status_msg(self, locktype, status):
+        status_desc = '[{0}] {1}'.format(status, self._get_counts_desc())
+        return '{0}{1.desc}: {1.path}[{1._start}:{1._length}] {2}'.format(
+            locktype, self, status_desc)
+
+    def _debug2(self, *args):
+        # TODO: Easy place to make a single, temporary change to the
+        # TODO: debug level associated with the more detailed messages.
+        # TODO:
+        # TODO: Someday it would be great if we could switch this to
+        # TODO: another level, perhaps _between_ debug and verbose, or
+        # TODO: some other form of filtering so the first level of
+        # TODO: debugging doesn't have to generate these messages. Using
+        # TODO: verbose here did not work as expected because tests like
+        # TODO: test_spec_json will write the verbose messages to the
+        # TODO: output that is used to check test correctness.
+        tty.debug(*args)


 class LockTransaction(object):
@@ -462,10 +619,28 @@ class LockError(Exception):
     """Raised for any errors related to locks."""


+class LockDowngradeError(LockError):
+    """Raised when unable to downgrade from a write to a read lock."""
+    def __init__(self, path):
+        msg = "Cannot downgrade lock from write to read on file: %s" % path
+        super(LockDowngradeError, self).__init__(msg)
+
+
+class LockLimitError(LockError):
+    """Raised when exceed maximum attempts to acquire a lock."""
+
+
 class LockTimeoutError(LockError):
     """Raised when an attempt to acquire a lock times out."""


+class LockUpgradeError(LockError):
+    """Raised when unable to upgrade from a read to a write lock."""
+    def __init__(self, path):
+        msg = "Cannot upgrade lock from read to write on file: %s" % path
+        super(LockUpgradeError, self).__init__(msg)
+
+
 class LockPermissionError(LockError):
     """Raised when there are permission issues with a lock."""
```


```diff
@@ -135,7 +135,9 @@ def process_stacktrace(countback):
 def get_timestamp(force=False):
     """Get a string timestamp"""
     if _debug or _timestamp or force:
-        return datetime.now().strftime("[%Y-%m-%d-%H:%M:%S.%f] ")
+        # Note inclusion of the PID is useful for parallel builds.
+        return '[{0}, {1}] '.format(
+            datetime.now().strftime("%Y-%m-%d-%H:%M:%S.%f"), os.getpid())
     else:
         return ''
```


```diff
@@ -47,7 +47,7 @@ def update_kwargs_from_args(args, kwargs):
     })

     kwargs.update({
-        'install_dependencies': ('dependencies' in args.things_to_install),
+        'install_deps': ('dependencies' in args.things_to_install),
         'install_package': ('package' in args.things_to_install)
     })
```


@ -37,6 +37,7 @@
import spack.store import spack.store
import spack.repo import spack.repo
import spack.spec import spack.spec
import spack.util.lock as lk
import spack.util.spack_yaml as syaml import spack.util.spack_yaml as syaml
import spack.util.spack_json as sjson import spack.util.spack_json as sjson
from spack.filesystem_view import YamlFilesystemView from spack.filesystem_view import YamlFilesystemView
@ -44,7 +45,9 @@
from spack.directory_layout import DirectoryLayoutError from spack.directory_layout import DirectoryLayoutError
from spack.error import SpackError from spack.error import SpackError
from spack.version import Version from spack.version import Version
from spack.util.lock import Lock, WriteTransaction, ReadTransaction, LockError
# TODO: Provide an API automatically retyring a build after detecting and
# TODO: clearing a failure.
# DB goes in this directory underneath the root # DB goes in this directory underneath the root
_db_dirname = '.spack-db' _db_dirname = '.spack-db'
@ -65,9 +68,20 @@
(Version('0.9.3'), Version('5')), (Version('0.9.3'), Version('5')),
] ]
# Timeout for spack database locks in seconds # Default timeout for spack database locks in seconds or None (no timeout).
# A balance needs to be struck between quick turnaround for parallel installs
# (to avoid excess delays) and waiting long enough when the system is busy
# (to ensure the database is updated).
_db_lock_timeout = 120 _db_lock_timeout = 120
# Default timeout for spack package locks in seconds or None (no timeout).
# A balance needs to be struck between quick turnaround for parallel installs
# (to avoid excess delays when performing a parallel installation) and waiting
# long enough for the next possible spec to install (to avoid excessive
# checking of the last high priority package) or holding on to a lock (to
# ensure a failed install is properly tracked).
_pkg_lock_timeout = None
# Types of dependencies tracked by the database # Types of dependencies tracked by the database
_tracked_deps = ('link', 'run') _tracked_deps = ('link', 'run')
@ -255,6 +269,9 @@ class Database(object):
"""Per-process lock objects for each install prefix.""" """Per-process lock objects for each install prefix."""
_prefix_locks = {} _prefix_locks = {}
"""Per-process failure (lock) objects for each install prefix."""
_prefix_failures = {}
def __init__(self, root, db_dir=None, upstream_dbs=None, def __init__(self, root, db_dir=None, upstream_dbs=None,
is_upstream=False): is_upstream=False):
"""Create a Database for Spack installations under ``root``. """Create a Database for Spack installations under ``root``.
@ -295,17 +312,29 @@ def __init__(self, root, db_dir=None, upstream_dbs=None,
# This is for other classes to use to lock prefix directories. # This is for other classes to use to lock prefix directories.
self.prefix_lock_path = os.path.join(self._db_dir, 'prefix_lock') self.prefix_lock_path = os.path.join(self._db_dir, 'prefix_lock')
# Ensure a persistent location for dealing with parallel installation
# failures (e.g., across near-concurrent processes).
self._failure_dir = os.path.join(self._db_dir, 'failures')
# Support special locks for handling parallel installation failures
# of a spec.
self.prefix_fail_path = os.path.join(self._db_dir, 'prefix_failures')
# Create needed directories and files # Create needed directories and files
if not os.path.exists(self._db_dir): if not os.path.exists(self._db_dir):
mkdirp(self._db_dir) mkdirp(self._db_dir)
if not os.path.exists(self._failure_dir):
mkdirp(self._failure_dir)
self.is_upstream = is_upstream self.is_upstream = is_upstream
# initialize rest of state. # initialize rest of state.
self.db_lock_timeout = ( self.db_lock_timeout = (
spack.config.get('config:db_lock_timeout') or _db_lock_timeout) spack.config.get('config:db_lock_timeout') or _db_lock_timeout)
self.package_lock_timeout = ( self.package_lock_timeout = (
spack.config.get('config:package_lock_timeout') or None) spack.config.get('config:package_lock_timeout') or
_pkg_lock_timeout)
tty.debug('DATABASE LOCK TIMEOUT: {0}s'.format( tty.debug('DATABASE LOCK TIMEOUT: {0}s'.format(
str(self.db_lock_timeout))) str(self.db_lock_timeout)))
timeout_format_str = ('{0}s'.format(str(self.package_lock_timeout)) timeout_format_str = ('{0}s'.format(str(self.package_lock_timeout))
@ -316,8 +345,9 @@ def __init__(self, root, db_dir=None, upstream_dbs=None,
if self.is_upstream: if self.is_upstream:
self.lock = ForbiddenLock() self.lock = ForbiddenLock()
else: else:
self.lock = Lock(self._lock_path, self.lock = lk.Lock(self._lock_path,
default_timeout=self.db_lock_timeout) default_timeout=self.db_lock_timeout,
desc='database')
self._data = {} self._data = {}
self.upstream_dbs = list(upstream_dbs) if upstream_dbs else [] self.upstream_dbs = list(upstream_dbs) if upstream_dbs else []
@ -332,14 +362,136 @@ def __init__(self, root, db_dir=None, upstream_dbs=None,
def write_transaction(self): def write_transaction(self):
"""Get a write lock context manager for use in a `with` block.""" """Get a write lock context manager for use in a `with` block."""
return WriteTransaction( return lk.WriteTransaction(
self.lock, acquire=self._read, release=self._write) self.lock, acquire=self._read, release=self._write)
def read_transaction(self): def read_transaction(self):
"""Get a read lock context manager for use in a `with` block.""" """Get a read lock context manager for use in a `with` block."""
return ReadTransaction(self.lock, acquire=self._read) return lk.ReadTransaction(self.lock, acquire=self._read)
def prefix_lock(self, spec): def _failed_spec_path(self, spec):
"""Return the path to the spec's failure file, which may not exist."""
if not spec.concrete:
raise ValueError('Concrete spec required for failure path for {0}'
.format(spec.name))
return os.path.join(self._failure_dir,
'{0}-{1}'.format(spec.name, spec.full_hash()))
def clear_failure(self, spec, force=False):
"""
Remove any persistent and cached failure tracking for the spec.
see `mark_failed()`.
Args:
spec (Spec): the spec whose failure indicators are being removed
force (bool): True if the failure information should be cleared
when a prefix failure lock exists for the file or False if
the failure should not be cleared (e.g., it may be
associated with a concurrent build)
"""
failure_locked = self.prefix_failure_locked(spec)
if failure_locked and not force:
tty.msg('Retaining failure marking for {0} due to lock'
.format(spec.name))
return
if failure_locked:
tty.warn('Removing failure marking despite lock for {0}'
.format(spec.name))
lock = self._prefix_failures.pop(spec.prefix, None)
if lock:
lock.release_write()
if self.prefix_failure_marked(spec):
try:
path = self._failed_spec_path(spec)
tty.debug('Removing failure marking for {0}'.format(spec.name))
os.remove(path)
except OSError as err:
tty.warn('Unable to remove failure marking for {0} ({1}): {2}'
.format(spec.name, path, str(err)))
def mark_failed(self, spec):
"""
Mark a spec as failing to install.
Prefix failure marking takes the form of a byte range lock on the nth
byte of a file for coordinating between concurrent parallel build
processes and a persistent file, named with the full hash and
containing the spec, in a subdirectory of the database to enable
persistence across overlapping but separate related build processes.
The failure lock file, ``spack.store.db.prefix_failures``, lives
alongside the install DB. ``n`` is the sys.maxsize-bit prefix of the
associated DAG hash to make the likelihood of collision very low with
no cleanup required.
"""
# Dump the spec to the failure file for (manual) debugging purposes
path = self._failed_spec_path(spec)
with open(path, 'w') as f:
spec.to_json(f)
# Also ensure a failure lock is taken to prevent cleanup removal
# of failure status information during a concurrent parallel build.
err = 'Unable to mark {0.name} as failed.'
prefix = spec.prefix
if prefix not in self._prefix_failures:
mark = lk.Lock(
self.prefix_fail_path,
start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
length=1,
default_timeout=self.package_lock_timeout, desc=spec.name)
try:
mark.acquire_write()
except lk.LockTimeoutError:
# Unlikely that another process failed to install at the same
# time but log it anyway.
tty.debug('PID {0} failed to mark install failure for {1}'
.format(os.getpid(), spec.name))
tty.warn(err.format(spec))
# Whether we or another process marked it as a failure, track it
# as such locally.
self._prefix_failures[prefix] = mark
return self._prefix_failures[prefix]
def prefix_failed(self, spec):
"""Return True if the prefix (installation) is marked as failed."""
# The failure was detected in this process.
if spec.prefix in self._prefix_failures:
return True
# The failure was detected by a concurrent process (e.g., an srun),
# which is expected to be holding a write lock if that is the case.
if self.prefix_failure_locked(spec):
return True
# Determine if the spec may have been marked as failed by a separate
# spack build process running concurrently.
return self.prefix_failure_marked(spec)
def prefix_failure_locked(self, spec):
"""Return True if a process has a failure lock on the spec."""
check = lk.Lock(
self.prefix_fail_path,
start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
length=1,
default_timeout=self.package_lock_timeout, desc=spec.name)
return check.is_write_locked()
def prefix_failure_marked(self, spec):
"""Determine if the spec has a persistent failure marking."""
return os.path.exists(self._failed_spec_path(spec))
@@ -354,13 +506,16 @@ def prefix_lock(self, spec):
-    def prefix_lock(self, spec):
+    def prefix_lock(self, spec, timeout=None):
         """Get a lock on a particular spec's installation directory.

         NOTE: The installation directory **does not** need to exist.

         readers-writer lock semantics with just a single lockfile, so no
         cleanup required.
         """
+        timeout = timeout or self.package_lock_timeout
         prefix = spec.prefix
         if prefix not in self._prefix_locks:
-            self._prefix_locks[prefix] = Lock(
+            self._prefix_locks[prefix] = lk.Lock(
                 self.prefix_lock_path,
                 start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
                 length=1,
-                default_timeout=self.package_lock_timeout)
+                default_timeout=timeout, desc=spec.name)
+        elif timeout != self._prefix_locks[prefix].default_timeout:
+            self._prefix_locks[prefix].default_timeout = timeout

         return self._prefix_locks[prefix]
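Both the prefix lock and the failure lock land on a single byte of one shared file, at an offset derived from the spec's DAG hash; that is what gives per-spec readers-writer semantics without a lockfile per spec. A sketch of the offset computation (`lock_offset` is a hypothetical name, and a hex hash is assumed for simplicity, whereas Spack's DAG hashes are base32):

```python
def lock_offset(dag_hash, bits=63):
    """Byte offset of a spec's lock within the shared lock file: the
    leading ``bits`` bits of its DAG hash, read as an integer.  A sketch of
    what spec.dag_hash_bit_prefix(bit_length(sys.maxsize)) feeds to the
    ``start`` argument above."""
    hex_chars = (bits + 3) // 4               # hex digits covering ``bits``
    return int(dag_hash[:hex_chars], 16) >> (hex_chars * 4 - bits)
```

Because every spec maps to its own byte, shared (read) and exclusive (write) byte-range locks at that offset coordinate concurrent installs from a single lock file, with no per-spec cleanup.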
@@ -371,7 +526,7 @@ def prefix_read_lock(self, spec):
         try:
             yield self
-        except LockError:
+        except lk.LockError:
             # This addresses the case where a nested lock attempt fails inside
             # of this context manager
             raise
@@ -388,7 +543,7 @@ def prefix_write_lock(self, spec):
         try:
             yield self
-        except LockError:
+        except lk.LockError:
             # This addresses the case where a nested lock attempt fails inside
             # of this context manager
             raise
@@ -624,7 +779,7 @@ def _read_suppress_error():
                 self._error = e
                 self._data = {}

-        transaction = WriteTransaction(
+        transaction = lk.WriteTransaction(
             self.lock, acquire=_read_suppress_error, release=self._write
         )
@@ -810,7 +965,7 @@ def _read(self):
                 self._db_dir, os.R_OK | os.W_OK):
             # if we can write, then read AND write a JSON file.
             self._read_from_file(self._old_yaml_index_path, format='yaml')
-            with WriteTransaction(self.lock):
+            with lk.WriteTransaction(self.lock):
                 self._write(None, None, None)
         else:
             # Check for a YAML file if we can't find JSON.
@@ -823,7 +978,7 @@ def _read(self):
                         " databases cannot generate an index file")

         # The file doesn't exist, try to traverse the directory.
         # reindex() takes its own write lock, so no lock here.
-        with WriteTransaction(self.lock):
+        with lk.WriteTransaction(self.lock):
             self._write(None, None, None)
         self.reindex(spack.store.layout)
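The `lk.WriteTransaction` calls above follow an acquire/mutate/release pattern: take the lock, run an acquire callback (here, a re-read that suppresses errors), and run a release callback (the DB write) before unlocking. A hypothetical simplification of that pattern (not `llnl.util.lock`'s actual implementation, which also handles lock upgrades and timeouts):

```python
class WriteTransaction:
    """Simplified sketch of the transaction pattern used above: lock, run
    ``acquire``, yield to the body, run ``release`` on success, unlock."""

    def __init__(self, lock, acquire=None, release=None):
        self.lock = lock
        self.acquire_fn = acquire
        self.release_fn = release

    def __enter__(self):
        self.lock.acquire()
        if self.acquire_fn:
            self.acquire_fn()                 # e.g. re-read current state
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        try:
            if exc_type is None and self.release_fn:
                self.release_fn()             # e.g. write the DB out
        finally:
            self.lock.release()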

lib/spack/spack/installer.py (new executable file, 1,597 lines)

File diff suppressed because it is too large.
@@ -550,9 +550,11 @@ def __call__(self, *argv, **kwargs):
             tty.debug(e)
             self.error = e
             if fail_on_error:
+                self._log_command_output(out)
                 raise

         if fail_on_error and self.returncode not in (None, 0):
+            self._log_command_output(out)
             raise SpackCommandError(
                 "Command exited with code %d: %s(%s)" % (
                     self.returncode, self.command_name,
@@ -560,6 +562,13 @@ def __call__(self, *argv, **kwargs):

         return out.getvalue()

+    def _log_command_output(self, out):
+        if tty.is_verbose():
+            fmt = self.command_name + ': {0}'
+            for ln in out.getvalue().split('\n'):
+                if len(ln) > 0:
+                    tty.verbose(fmt.format(ln.replace('==> ', '')))
def _profile_wrapper(command, parser, args, unknown_args): def _profile_wrapper(command, parser, args, unknown_args):
import cProfile import cProfile


@@ -14,7 +14,6 @@
 import contextlib
 import copy
 import functools
-import glob
 import hashlib
 import inspect
 import os
@@ -48,21 +47,16 @@
 import spack.util.environment
 import spack.util.web
 import spack.multimethod
-import spack.binary_distribution as binary_distribution

-from llnl.util.filesystem import mkdirp, touch, chgrp
-from llnl.util.filesystem import working_dir, install_tree, install
+from llnl.util.filesystem import mkdirp, touch, working_dir
 from llnl.util.lang import memoized
 from llnl.util.link_tree import LinkTree
-from llnl.util.tty.log import log_output
-from llnl.util.tty.color import colorize

 from spack.filesystem_view import YamlFilesystemView
-from spack.util.executable import which
+from spack.installer import \
+    install_args_docstring, PackageInstaller, InstallError
 from spack.stage import stage_prefix, Stage, ResourceStage, StageComposite
-from spack.util.environment import dump_environment
 from spack.util.package_hash import package_hash
 from spack.version import Version
-from spack.package_prefs import get_package_dir_permissions, get_package_group

 """Allowed URL schemes for spack packages."""
 _ALLOWED_URL_SCHEMES = ["http", "https", "ftp", "file", "git"]
@@ -430,10 +424,18 @@ class PackageBase(with_metaclass(PackageMeta, PackageViewMixin, object)):
     # These are default values for instance variables.
     #

+    #: A list or set of build time test functions to be called when tests
+    #: are executed or 'None' if there are no such test functions.
+    build_time_test_callbacks = None
+
     #: Most Spack packages are used to install source or binary code while
     #: those that do not can be used to install a set of other Spack packages.
     has_code = True

+    #: A list or set of install time test functions to be called when tests
+    #: are executed or 'None' if there are no such test functions.
+    install_time_test_callbacks = None
+
     #: By default we build in parallel. Subclasses can override this.
     parallel = True
@@ -1283,41 +1285,6 @@ def content_hash(self, content=None):
                 hashlib.sha256(bytes().join(
                     sorted(hash_content))).digest()).lower()
def do_fake_install(self):
"""Make a fake install directory containing fake executables,
headers, and libraries."""
command = self.name
header = self.name
library = self.name
# Avoid double 'lib' for packages whose names already start with lib
if not self.name.startswith('lib'):
library = 'lib' + library
dso_suffix = '.dylib' if sys.platform == 'darwin' else '.so'
chmod = which('chmod')
# Install fake command
mkdirp(self.prefix.bin)
touch(os.path.join(self.prefix.bin, command))
chmod('+x', os.path.join(self.prefix.bin, command))
# Install fake header file
mkdirp(self.prefix.include)
touch(os.path.join(self.prefix.include, header + '.h'))
# Install fake shared and static libraries
mkdirp(self.prefix.lib)
for suffix in [dso_suffix, '.a']:
touch(os.path.join(self.prefix.lib, library + suffix))
# Install fake man page
mkdirp(self.prefix.man.man1)
packages_dir = spack.store.layout.build_packages_path(self.spec)
dump_packages(self.spec, packages_dir)
    def _has_make_target(self, target):
        """Checks to see if 'target' is a valid target in a Makefile.

@@ -1461,382 +1428,17 @@ def _stage_and_write_lock(self):
         with spack.store.db.prefix_write_lock(self.spec):
             yield
def _process_external_package(self, explicit):
"""Helper function to process external packages.
Runs post install hooks and registers the package in the DB.
Args:
explicit (bool): if the package was requested explicitly by
the user, False if it was pulled in as a dependency of an
explicit package.
"""
if self.spec.external_module:
message = '{s.name}@{s.version} : has external module in {module}'
tty.msg(message.format(s=self, module=self.spec.external_module))
message = '{s.name}@{s.version} : is actually installed in {path}'
tty.msg(message.format(s=self, path=self.spec.external_path))
else:
message = '{s.name}@{s.version} : externally installed in {path}'
tty.msg(message.format(s=self, path=self.spec.external_path))
try:
# Check if the package was already registered in the DB
# If this is the case, then just exit
rec = spack.store.db.get_record(self.spec)
message = '{s.name}@{s.version} : already registered in DB'
tty.msg(message.format(s=self))
# Update the value of rec.explicit if it is necessary
self._update_explicit_entry_in_db(rec, explicit)
except KeyError:
# If not register it and generate the module file
# For external packages we just need to run
# post-install hooks to generate module files
message = '{s.name}@{s.version} : generating module file'
tty.msg(message.format(s=self))
spack.hooks.post_install(self.spec)
# Add to the DB
message = '{s.name}@{s.version} : registering into DB'
tty.msg(message.format(s=self))
spack.store.db.add(self.spec, None, explicit=explicit)
def _update_explicit_entry_in_db(self, rec, explicit):
if explicit and not rec.explicit:
with spack.store.db.write_transaction():
rec = spack.store.db.get_record(self.spec)
rec.explicit = True
message = '{s.name}@{s.version} : marking the package explicit'
tty.msg(message.format(s=self))
def try_install_from_binary_cache(self, explicit, unsigned=False):
tty.msg('Searching for binary cache of %s' % self.name)
specs = binary_distribution.get_spec(spec=self.spec,
force=False)
binary_spec = spack.spec.Spec.from_dict(self.spec.to_dict())
binary_spec._mark_concrete()
if binary_spec not in specs:
return False
tarball = binary_distribution.download_tarball(binary_spec)
# see #10063 : install from source if tarball doesn't exist
if tarball is None:
tty.msg('%s exists in binary cache but with different hash' %
self.name)
return False
tty.msg('Installing %s from binary cache' % self.name)
binary_distribution.extract_tarball(
binary_spec, tarball, allow_root=False,
unsigned=unsigned, force=False)
self.installed_from_binary_cache = True
spack.store.db.add(
self.spec, spack.store.layout, explicit=explicit)
return True
def bootstrap_compiler(self, **kwargs):
"""Called by do_install to setup ensure Spack has the right compiler.
Checks Spack's compiler configuration for a compiler that
matches the package spec. If none are configured, installs and
adds to the compiler configuration the compiler matching the
CompilerSpec object."""
compilers = spack.compilers.compilers_for_spec(
self.spec.compiler,
arch_spec=self.spec.architecture
)
if not compilers:
dep = spack.compilers.pkg_spec_for_compiler(self.spec.compiler)
dep.architecture = self.spec.architecture
# concrete CompilerSpec has less info than concrete Spec
# concretize as Spec to add that information
dep.concretize()
dep.package.do_install(**kwargs)
spack.compilers.add_compilers_to_config(
spack.compilers.find_compilers([dep.prefix])
)
     def do_install(self, **kwargs):
-        """Called by commands to install a package and its dependencies.
+        """Called by commands to install a package and/or its dependencies.

         Package implementations should override install() to describe
         their build process.

-        Args:
+        Args:"""
+        builder = PackageInstaller(self)
+        builder.install(**kwargs)
-            keep_prefix (bool): Keep install prefix on failure. By default,
-                destroys it.
keep_stage (bool): By default, stage is destroyed only if there
are no exceptions during build. Set to True to keep the stage
even with exceptions.
install_source (bool): By default, source is not installed, but
for debugging it might be useful to keep it around.
install_deps (bool): Install dependencies before installing this
package
skip_patch (bool): Skip patch stage of build if True.
verbose (bool): Display verbose build output (by default,
suppresses it)
fake (bool): Don't really build; install fake stub files instead.
explicit (bool): True if package was explicitly installed, False
if package was implicitly installed (as a dependency).
tests (bool or list or set): False to run no tests, True to test
all packages, or a list of package names to run tests for some
dirty (bool): Don't clean the build environment before installing.
restage (bool): Force spack to restage the package source.
force (bool): Install again, even if already installed.
use_cache (bool): Install from binary package, if available.
cache_only (bool): Fail if binary package unavailable.
stop_at (InstallPhase): last installation phase to be executed
(or None)
"""
if not self.spec.concrete:
raise ValueError("Can only install concrete packages: %s."
% self.spec.name)
keep_prefix = kwargs.get('keep_prefix', False) do_install.__doc__ += install_args_docstring
keep_stage = kwargs.get('keep_stage', False)
install_source = kwargs.get('install_source', False)
install_deps = kwargs.get('install_deps', True)
skip_patch = kwargs.get('skip_patch', False)
verbose = kwargs.get('verbose', False)
fake = kwargs.get('fake', False)
explicit = kwargs.get('explicit', False)
tests = kwargs.get('tests', False)
dirty = kwargs.get('dirty', False)
restage = kwargs.get('restage', False)
# install_self defaults True and is popped so that dependencies are
# always installed regardless of whether the root was installed
install_self = kwargs.pop('install_package', True)
# explicit defaults False so that dependents are implicit regardless
# of whether their dependents are implicitly or explicitly installed.
# Spack ensures root packages of install commands are always marked to
# install explicit
explicit = kwargs.pop('explicit', False)
# For external packages the workflow is simplified, and basically
# consists in module file generation and registration in the DB
if self.spec.external:
return self._process_external_package(explicit)
if self.installed_upstream:
tty.msg("{0.name} is installed in an upstream Spack instance"
" at {0.prefix}".format(self))
# Note this skips all post-install hooks. In the case of modules
# this is considered correct because we want to retrieve the
# module from the upstream Spack instance.
return
partial = self.check_for_unfinished_installation(keep_prefix, restage)
# Ensure package is not already installed
layout = spack.store.layout
with spack.store.db.prefix_read_lock(self.spec):
if partial:
tty.msg(
"Continuing from partial install of %s" % self.name)
elif layout.check_installed(self.spec):
msg = '{0.name} is already installed in {0.prefix}'
tty.msg(msg.format(self))
rec = spack.store.db.get_record(self.spec)
# In case the stage directory has already been created,
# this ensures it's removed after we checked that the spec
# is installed
if keep_stage is False:
self.stage.destroy()
return self._update_explicit_entry_in_db(rec, explicit)
self._do_install_pop_kwargs(kwargs)
# First, install dependencies recursively.
if install_deps:
tty.debug('Installing {0} dependencies'.format(self.name))
dep_kwargs = kwargs.copy()
dep_kwargs['explicit'] = False
dep_kwargs['install_deps'] = False
for dep in self.spec.traverse(order='post', root=False):
if spack.config.get('config:install_missing_compilers', False):
Package._install_bootstrap_compiler(dep.package, **kwargs)
dep.package.do_install(**dep_kwargs)
# Then install the compiler if it is not already installed.
if install_deps:
Package._install_bootstrap_compiler(self, **kwargs)
if not install_self:
return
# Then, install the package proper
tty.msg(colorize('@*{Installing} @*g{%s}' % self.name))
if kwargs.get('use_cache', True):
if self.try_install_from_binary_cache(
explicit, unsigned=kwargs.get('unsigned', False)):
tty.msg('Successfully installed %s from binary cache'
% self.name)
print_pkg(self.prefix)
spack.hooks.post_install(self.spec)
return
elif kwargs.get('cache_only', False):
tty.die('No binary for %s found and cache-only specified'
% self.name)
tty.msg('No binary for %s found: installing from source'
% self.name)
# Set run_tests flag before starting build
self.run_tests = (tests is True or
tests and self.name in tests)
# Then install the package itself.
def build_process():
"""This implements the process forked for each build.
Has its own process and python module space set up by
build_environment.fork().
This function's return value is returned to the parent process.
"""
start_time = time.time()
if not fake:
if not skip_patch:
self.do_patch()
else:
self.do_stage()
tty.msg(
'Building {0} [{1}]'.format(self.name, self.build_system_class)
)
# get verbosity from do_install() parameter or saved value
echo = verbose
if PackageBase._verbose is not None:
echo = PackageBase._verbose
self.stage.keep = keep_stage
with self._stage_and_write_lock():
# Run the pre-install hook in the child process after
# the directory is created.
spack.hooks.pre_install(self.spec)
if fake:
self.do_fake_install()
else:
source_path = self.stage.source_path
if install_source and os.path.isdir(source_path):
src_target = os.path.join(
self.spec.prefix, 'share', self.name, 'src')
tty.msg('Copying source to {0}'.format(src_target))
install_tree(self.stage.source_path, src_target)
# Do the real install in the source directory.
with working_dir(self.stage.source_path):
# Save the build environment in a file before building.
dump_environment(self.env_path)
# cache debug settings
debug_enabled = tty.is_debug()
# Spawn a daemon that reads from a pipe and redirects
# everything to log_path
with log_output(self.log_path, echo, True) as logger:
for phase_name, phase_attr in zip(
self.phases, self._InstallPhase_phases):
with logger.force_echo():
inner_debug = tty.is_debug()
tty.set_debug(debug_enabled)
tty.msg(
"Executing phase: '%s'" % phase_name)
tty.set_debug(inner_debug)
# Redirect stdout and stderr to daemon pipe
phase = getattr(self, phase_attr)
phase(self.spec, self.prefix)
echo = logger.echo
self.log()
# Run post install hooks before build stage is removed.
spack.hooks.post_install(self.spec)
# Stop timer.
self._total_time = time.time() - start_time
build_time = self._total_time - self._fetch_time
tty.msg("Successfully installed %s" % self.name,
"Fetch: %s. Build: %s. Total: %s." %
(_hms(self._fetch_time), _hms(build_time),
_hms(self._total_time)))
print_pkg(self.prefix)
# preserve verbosity across runs
return echo
# hook that allow tests to inspect this Package before installation
# see unit_test_check() docs.
if not self.unit_test_check():
return
try:
# Create the install prefix and fork the build process.
if not os.path.exists(self.prefix):
spack.store.layout.create_install_directory(self.spec)
else:
# Set the proper group for the prefix
group = get_package_group(self.spec)
if group:
chgrp(self.prefix, group)
# Set the proper permissions.
# This has to be done after group because changing groups blows
# away the sticky group bit on the directory
mode = os.stat(self.prefix).st_mode
perms = get_package_dir_permissions(self.spec)
if mode != perms:
os.chmod(self.prefix, perms)
# Ensure the metadata path exists as well
mkdirp(spack.store.layout.metadata_path(self.spec), mode=perms)
# Fork a child to do the actual installation.
# Preserve verbosity settings across installs.
PackageBase._verbose = spack.build_environment.fork(
self, build_process, dirty=dirty, fake=fake)
# If we installed then we should keep the prefix
keep_prefix = self.last_phase is None or keep_prefix
# note: PARENT of the build process adds the new package to
# the database, so that we don't need to re-read from file.
spack.store.db.add(
self.spec, spack.store.layout, explicit=explicit
)
except spack.directory_layout.InstallDirectoryAlreadyExistsError:
# Abort install if install directory exists.
# But do NOT remove it (you'd be overwriting someone else's stuff)
tty.warn("Keeping existing install prefix in place.")
raise
except StopIteration as e:
# A StopIteration exception means that do_install
# was asked to stop early from clients
tty.msg(e.message)
tty.msg(
'Package stage directory : {0}'.format(self.stage.source_path)
)
finally:
# Remove the install prefix if anything went wrong during install.
if not keep_prefix:
self.remove_prefix()
# The subprocess *may* have removed the build stage. Mark it
# not created so that the next time self.stage is invoked, we
# check the filesystem for it.
self.stage.created = False
@staticmethod
def _install_bootstrap_compiler(pkg, **install_kwargs):
tty.debug('Bootstrapping {0} compiler for {1}'.format(
pkg.spec.compiler, pkg.name
))
comp_kwargs = install_kwargs.copy()
comp_kwargs['explicit'] = False
comp_kwargs['install_deps'] = True
pkg.bootstrap_compiler(**comp_kwargs)
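In place of these recursive `do_install` calls over `spec.traverse`, the new `PackageInstaller` schedules builds from a priority queue whose priority is a package's number of remaining uninstalled dependencies, as described in the PR summary. A self-contained sketch of that scheduling idea (hypothetical `install_order`, not `installer.py`'s actual code):

```python
import heapq
import itertools

def install_order(dependencies):
    """Sketch of priority-queue build scheduling: only priority-0 packages
    (no uninstalled dependencies) are ready to build; installing a package
    reprioritizes its dependents.  ``dependencies`` maps each package name
    to the set of names it depends on; returns a bottom-up install order."""
    remaining = {name: len(deps) for name, deps in dependencies.items()}
    dependents = {}
    for name, deps in dependencies.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(name)

    counter = itertools.count()                # FIFO tie-breaker
    heap = [(count, next(counter), name) for name, count in remaining.items()]
    heapq.heapify(heap)

    order = []
    while heap:
        count, _, name = heapq.heappop(heap)
        if count != remaining[name]:
            continue                           # stale entry; fresher one queued
        if count > 0:
            raise ValueError('dependency cycle involving {0}'.format(name))
        order.append(name)
        remaining[name] = -1                   # mark installed
        for dependent in dependents.get(name, ()):
            remaining[dependent] -= 1          # one fewer uninstalled dep
            heapq.heappush(heap,
                           (remaining[dependent], next(counter), dependent))
    return order
```

Instead of lazily dropping stale heap entries as this sketch does, an implementation could also rebuild the heap after each install; the lazy approach keeps each update O(log n).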
     def unit_test_check(self):
         """Hook for unit tests to assert things about package internals.

@@ -1855,125 +1457,6 @@ def unit_test_check(self):
         """
         return True
def check_for_unfinished_installation(
self, keep_prefix=False, restage=False):
"""Check for leftover files from partially-completed prior install to
prepare for a new install attempt.
Options control whether these files are reused (vs. destroyed).
Args:
keep_prefix (bool): True if the installation prefix needs to be
kept, False otherwise
restage (bool): False if the stage has to be kept, True otherwise
Returns:
True if the prefix exists but the install is not complete, False
otherwise.
"""
if self.spec.external:
raise ExternalPackageError("Attempted to repair external spec %s" %
self.spec.name)
with spack.store.db.prefix_write_lock(self.spec):
try:
record = spack.store.db.get_record(self.spec)
installed_in_db = record.installed if record else False
except KeyError:
installed_in_db = False
partial = False
if not installed_in_db and os.path.isdir(self.prefix):
if not keep_prefix:
self.remove_prefix()
else:
partial = True
if restage and self.stage.managed_by_spack:
self.stage.destroy()
return partial
def _do_install_pop_kwargs(self, kwargs):
"""Pops kwargs from do_install before starting the installation
Args:
kwargs:
'stop_at': last installation phase to be executed (or None)
"""
self.last_phase = kwargs.pop('stop_at', None)
if self.last_phase is not None and self.last_phase not in self.phases:
tty.die('\'{0}\' is not an allowed phase for package {1}'
.format(self.last_phase, self.name))
def log(self):
"""Copy provenance into the install directory on success."""
packages_dir = spack.store.layout.build_packages_path(self.spec)
# Remove first if we're overwriting another build
# (can happen with spack setup)
try:
# log and env install paths are inside this
shutil.rmtree(packages_dir)
except Exception as e:
# FIXME : this potentially catches too many things...
tty.debug(e)
# Archive the whole stdout + stderr for the package
install(self.log_path, self.install_log_path)
# Archive the environment used for the build
install(self.env_path, self.install_env_path)
# Finally, archive files that are specific to each package
with working_dir(self.stage.path):
errors = StringIO()
target_dir = os.path.join(
spack.store.layout.metadata_path(self.spec),
'archived-files')
for glob_expr in self.archive_files:
# Check that we are trying to copy things that are
# in the stage tree (not arbitrary files)
abs_expr = os.path.realpath(glob_expr)
if os.path.realpath(self.stage.path) not in abs_expr:
errors.write(
'[OUTSIDE SOURCE PATH]: {0}\n'.format(glob_expr)
)
continue
# Now that we are sure that the path is within the correct
# folder, make it relative and check for matches
if os.path.isabs(glob_expr):
glob_expr = os.path.relpath(
glob_expr, self.stage.path
)
files = glob.glob(glob_expr)
for f in files:
try:
target = os.path.join(target_dir, f)
# We must ensure that the directory exists before
# copying a file in
mkdirp(os.path.dirname(target))
install(f, target)
except Exception as e:
tty.debug(e)
# Here try to be conservative, and avoid discarding
# the whole install procedure because of copying a
# single file failed
errors.write('[FAILED TO ARCHIVE]: {0}'.format(f))
if errors.getvalue():
error_file = os.path.join(target_dir, 'errors.txt')
mkdirp(target_dir)
with open(error_file, 'w') as err:
err.write(errors.getvalue())
tty.warn('Errors occurred when archiving files.\n\t'
'See: {0}'.format(error_file))
dump_packages(self.spec, packages_dir)
    def sanity_check_prefix(self):
        """This function checks whether install succeeded."""

@@ -2539,8 +2022,6 @@ def rpath_args(self):
         """
         return " ".join("-Wl,-rpath,%s" % p for p in self.rpath)

-    build_time_test_callbacks = None

     @on_package_attributes(run_tests=True)
     def _run_default_build_time_test_callbacks(self):
         """Tries to call all the methods that are listed in the attribute
@@ -2560,8 +2041,6 @@ def _run_default_build_time_test_callbacks(self):
                 msg = 'RUN-TESTS: method not implemented [{0}]'
                 tty.warn(msg.format(name))

-    install_time_test_callbacks = None

     @on_package_attributes(run_tests=True)
     def _run_default_install_time_test_callbacks(self):
         """Tries to call all the methods that are listed in the attribute
@@ -2652,54 +2131,6 @@ def flatten_dependencies(spec, flat_dir):
         dep_files.merge(flat_dir + '/' + name)
def dump_packages(spec, path):
"""Dump all package information for a spec and its dependencies.
This creates a package repository within path for every
namespace in the spec DAG, and fills the repos with package
files and patch files for every node in the DAG.
"""
mkdirp(path)
# Copy in package.py files from any dependencies.
# Note that we copy them in as they are in the *install* directory
# NOT as they are in the repository, because we want a snapshot of
# how *this* particular build was done.
for node in spec.traverse(deptype=all):
if node is not spec:
# Locate the dependency package in the install tree and find
# its provenance information.
source = spack.store.layout.build_packages_path(node)
source_repo_root = os.path.join(source, node.namespace)
# There's no provenance installed for the source package. Skip it.
# User can always get something current from the builtin repo.
if not os.path.isdir(source_repo_root):
continue
# Create a source repo and get the pkg directory out of it.
try:
source_repo = spack.repo.Repo(source_repo_root)
source_pkg_dir = source_repo.dirname_for_package_name(
node.name)
except spack.repo.RepoError:
tty.warn("Warning: Couldn't copy in provenance for %s" %
node.name)
# Create a destination repository
dest_repo_root = os.path.join(path, node.namespace)
if not os.path.exists(dest_repo_root):
spack.repo.create_repo(dest_repo_root)
repo = spack.repo.Repo(dest_repo_root)
# Get the location of the package in the dest repo.
dest_pkg_dir = repo.dirname_for_package_name(node.name)
if node is not spec:
install_tree(source_pkg_dir, dest_pkg_dir)
else:
spack.repo.path.dump_provenance(node, dest_pkg_dir)
def possible_dependencies(*pkg_or_spec, **kwargs):
    """Get the possible dependencies of a number of packages.

@@ -2729,28 +2160,6 @@ def possible_dependencies(*pkg_or_spec, **kwargs):
    return visited
def print_pkg(message):
"""Outputs a message with a package icon."""
from llnl.util.tty.color import cwrite
cwrite('@*g{[+]} ')
print(message)
def _hms(seconds):
"""Convert time in seconds to hours, minutes, seconds."""
m, s = divmod(seconds, 60)
h, m = divmod(m, 60)
parts = []
if h:
parts.append("%dh" % h)
if m:
parts.append("%dm" % m)
if s:
parts.append("%.2fs" % s)
return ' '.join(parts)
class FetchError(spack.error.SpackError):
    """Raised when something goes wrong during fetch."""

@@ -2758,17 +2167,6 @@ def __init__(self, message, long_msg=None):
        super(FetchError, self).__init__(message, long_msg)
class InstallError(spack.error.SpackError):
"""Raised when something goes wrong during install or uninstall."""
def __init__(self, message, long_msg=None):
super(InstallError, self).__init__(message, long_msg)
class ExternalPackageError(InstallError):
"""Raised by install() when a package is only for external use."""
class PackageStillNeededError(InstallError):
    """Raised when package is still needed by another on uninstall."""

    def __init__(self, spec, dependents):


@@ -50,7 +50,10 @@
 from spack.package import \
     install_dependency_symlinks, flatten_dependencies, \
-    DependencyConflictError, InstallError, ExternalPackageError
+    DependencyConflictError
+from spack.installer import \
+    ExternalPackageError, InstallError, InstallLockError, UpstreamPackageError
 from spack.variant import any_combination_of, auto_or_any_combination_of
 from spack.variant import disjoint_sets


@@ -44,7 +44,8 @@ def fetch_package_log(pkg):

 class InfoCollector(object):
-    """Decorates PackageBase.do_install to collect information
+    """Decorates PackageInstaller._install_task, which is called by
+    PackageBase.do_install for each spec, to collect information
     on the installation of certain specs.

     When exiting the context this change will be rolled-back.
@@ -57,8 +58,8 @@ class InfoCollector(object):
         specs (list of Spec): specs whose install information will
             be recorded
     """
-    #: Backup of PackageBase.do_install
-    _backup_do_install = spack.package.PackageBase.do_install
+    #: Backup of PackageInstaller._install_task
+    _backup__install_task = spack.package.PackageInstaller._install_task

     def __init__(self, specs):
         #: Specs that will be installed
@@ -108,15 +109,16 @@ def __enter__(self):
                 }
                 spec['packages'].append(package)

-        def gather_info(do_install):
-            """Decorates do_install to gather useful information for
-            a CI report.
+        def gather_info(_install_task):
+            """Decorates PackageInstaller._install_task to gather useful
+            information on PackageBase.do_install for a CI report.

             It's defined here to capture the environment and build
             this context as the installations proceed.
             """
-            @functools.wraps(do_install)
-            def wrapper(pkg, *args, **kwargs):
+            @functools.wraps(_install_task)
+            def wrapper(installer, task, *args, **kwargs):
+                pkg = task.pkg

                 # We accounted before for what is already installed
                 installed_on_entry = pkg.installed
@@ -134,7 +136,7 @@ def wrapper(pkg, *args, **kwargs):
                 value = None
                 try:
-                    value = do_install(pkg, *args, **kwargs)
+                    value = _install_task(installer, task, *args, **kwargs)
                     package['result'] = 'success'
                     package['stdout'] = fetch_package_log(pkg)
                     package['installed_from_binary_cache'] = \
@@ -182,14 +184,15 @@ def wrapper(pkg, *args, **kwargs):

             return wrapper

-        spack.package.PackageBase.do_install = gather_info(
-            spack.package.PackageBase.do_install
+        spack.package.PackageInstaller._install_task = gather_info(
+            spack.package.PackageInstaller._install_task
         )

     def __exit__(self, exc_type, exc_val, exc_tb):
-        # Restore the original method in PackageBase
-        spack.package.PackageBase.do_install = InfoCollector._backup_do_install
+        # Restore the original method in PackageInstaller
+        spack.package.PackageInstaller._install_task = \
+            InfoCollector._backup__install_task

         for spec in self.specs:
             spec['npackages'] = len(spec['packages'])
@@ -208,9 +211,9 @@ class collect_info(object):
     """Collects information to build a report while installing
     and dumps it on exit.

-    If the format name is not ``None``, this context manager
-    decorates PackageBase.do_install when entering the context
-    and unrolls the change when exiting.
+    If the format name is not ``None``, this context manager decorates
+    PackageInstaller._install_task when entering the context for a
+    PackageBase.do_install operation and unrolls the change when exiting.

     Within the context, only the specs that are passed to it
     on initialization will be recorded for the report. Data from
@@ -255,14 +258,14 @@ def concretization_report(self, msg):

     def __enter__(self):
         if self.format_name:
-            # Start the collector and patch PackageBase.do_install
+            # Start the collector and patch PackageInstaller._install_task
self.collector = InfoCollector(self.specs) self.collector = InfoCollector(self.specs)
self.collector.__enter__() self.collector.__enter__()
def __exit__(self, exc_type, exc_val, exc_tb): def __exit__(self, exc_type, exc_val, exc_tb):
if self.format_name: if self.format_name:
# Close the collector and restore the # Close the collector and restore the
# original PackageBase.do_install # original PackageInstaller._install_task
self.collector.__exit__(exc_type, exc_val, exc_tb) self.collector.__exit__(exc_type, exc_val, exc_tb)
report_data = {'specs': self.collector.specs} report_data = {'specs': self.collector.specs}
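The wrap/restore pattern `InfoCollector` uses above can be sketched in isolation. `Recorder` and `Pkg` here are hypothetical stand-ins, not Spack classes; the sketch only illustrates the backup-patch-restore mechanics:

```python
# Minimal sketch of InfoCollector's pattern: back up a method, install a
# recording wrapper, and roll the change back when exiting the context.
import functools


class Recorder(object):
    def __init__(self, cls, method_name):
        self.cls = cls
        self.method_name = method_name
        self.calls = []
        self._backup = None  # backup of the original method

    def __enter__(self):
        original = self._backup = getattr(self.cls, self.method_name)

        @functools.wraps(original)
        def wrapper(obj, *args, **kwargs):
            self.calls.append(args)  # collect call information
            return original(obj, *args, **kwargs)

        setattr(self.cls, self.method_name, wrapper)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Restore the original method on exit
        setattr(self.cls, self.method_name, self._backup)


class Pkg(object):
    def install(self, jobs):
        return 'built with %d jobs' % jobs


with Recorder(Pkg, 'install') as rec:
    Pkg().install(4)  # recorded

# After the context exits, calls are no longer recorded
```

Within the context every call is logged; afterwards the class behaves exactly as before, which is why the real collector can be entered and exited around an entire install run.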


@@ -307,8 +307,9 @@ def __init__(
             lock_id = prefix_bits(sha1, bit_length(sys.maxsize))
             stage_lock_path = os.path.join(get_stage_root(), '.lock')

+            tty.debug("Creating stage lock {0}".format(self.name))
             Stage.stage_locks[self.name] = spack.util.lock.Lock(
-                stage_lock_path, lock_id, 1)
+                stage_lock_path, lock_id, 1, desc=self.name)

         self._lock = Stage.stage_locks[self.name]


@@ -0,0 +1,66 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import pytest
import spack.installer as inst
import spack.repo
import spack.spec
def test_build_task_errors(install_mockery):
with pytest.raises(ValueError, match='must be a package'):
inst.BuildTask('abc', False, 0, 0, 0, [])
pkg = spack.repo.get('trivial-install-test-package')
with pytest.raises(ValueError, match='must have a concrete spec'):
inst.BuildTask(pkg, False, 0, 0, 0, [])
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
with pytest.raises(inst.InstallError, match='Cannot create a build task'):
inst.BuildTask(spec.package, False, 0, 0, inst.STATUS_REMOVED, [])
def test_build_task_basics(install_mockery):
spec = spack.spec.Spec('dependent-install')
spec.concretize()
assert spec.concrete
# Ensure key properties match expectations
task = inst.BuildTask(spec.package, False, 0, 0, inst.STATUS_ADDED, [])
assert task.priority == len(task.uninstalled_deps)
assert task.key == (task.priority, task.sequence)
# Ensure flagging installed works as expected
assert len(task.uninstalled_deps) > 0
assert task.dependencies == task.uninstalled_deps
task.flag_installed(task.dependencies)
assert len(task.uninstalled_deps) == 0
assert task.priority == 0
def test_build_task_strings(install_mockery):
"""Tests of build_task repr and str for coverage purposes."""
# Using a package with one dependency
spec = spack.spec.Spec('dependent-install')
spec.concretize()
assert spec.concrete
# Ensure key properties match expectations
task = inst.BuildTask(spec.package, False, 0, 0, inst.STATUS_ADDED, [])
# Cover __repr__
irep = task.__repr__()
assert irep.startswith(task.__class__.__name__)
assert "status='queued'" in irep # == STATUS_ADDED
assert "sequence=" in irep
# Cover __str__
istr = str(task)
assert "status=queued" in istr # == STATUS_ADDED
assert "#dependencies=1" in istr
assert "priority=" in istr
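The `(priority, sequence)` key asserted above can be illustrated with a plain `heapq`. This `Task` class is a hypothetical stand-in for `BuildTask`, not Spack code; it only demonstrates how the key orders builds:

```python
# Sketch: tasks ordered by (priority, sequence), where priority is the
# number of remaining uninstalled dependencies and sequence breaks ties.
import heapq
import itertools

_sequence = itertools.count()


class Task(object):
    def __init__(self, name, uninstalled_deps):
        self.name = name
        self.uninstalled_deps = set(uninstalled_deps)
        self.sequence = next(_sequence)

    @property
    def priority(self):
        # Fewer remaining uninstalled dependencies -> pops sooner
        return len(self.uninstalled_deps)

    @property
    def key(self):
        return (self.priority, self.sequence)


queue = []
for task in (Task('dependent-install', ['dependency-install']),
             Task('dependency-install', [])):
    heapq.heappush(queue, (task.key, task.name))

# The dependency (zero uninstalled deps) is popped before its dependent
order = []
while queue:
    order.append(heapq.heappop(queue)[1])
```

As dependencies finish installing, `flag_installed` shrinks `uninstalled_deps`, so a dependent's priority drops toward zero and it becomes eligible to pop.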


@@ -169,9 +169,11 @@ def test_env_install_same_spec_twice(install_mockery, mock_fetch, capfd):
     e = ev.read('test')
     with capfd.disabled():
         with e:
+            # The first installation outputs the package prefix
             install('cmake-client')
+            # The second installation attempt will also update the view
             out = install('cmake-client')
-    assert 'is already installed in' in out
+    assert 'Updating view at' in out


 def test_remove_after_concretize():


@@ -139,8 +139,7 @@ def test_install_output_on_build_error(mock_packages, mock_archive, mock_fetch,
     # capfd interferes with Spack's capturing
     with capfd.disabled():
         out = install('build-error', fail_on_error=False)
-    assert isinstance(install.error, spack.build_environment.ChildError)
-    assert install.error.name == 'ProcessError'
+    assert 'ProcessError' in out
     assert 'configure: error: in /path/to/some/file:' in out
     assert 'configure: error: cannot run C compiled programs.' in out
@@ -177,9 +176,10 @@ def test_show_log_on_error(mock_packages, mock_archive, mock_fetch,
     assert install.error.pkg.name == 'build-error'
     assert 'Full build log:' in out

+    # Message shows up for ProcessError (1), ChildError (1), and output (1)
     errors = [line for line in out.split('\n')
               if 'configure: error: cannot run C compiled programs' in line]
-    assert len(errors) == 2
+    assert len(errors) == 3


 def test_install_overwrite(
@@ -373,8 +373,12 @@ def just_throw(*args, **kwargs):
         exc_type = getattr(builtins, exc_typename)
         raise exc_type(msg)

-    monkeypatch.setattr(spack.package.PackageBase, 'do_install', just_throw)
+    monkeypatch.setattr(spack.installer.PackageInstaller, '_install_task',
+                        just_throw)

+    # TODO: Why does junit output capture appear to swallow the exception
+    # TODO: as evidenced by the two failing packages getting tagged as
+    # TODO: installed?
     with tmpdir.as_cwd():
         install('--log-format=junit', '--log-file=test.xml', 'libdwarf')
@@ -384,14 +388,14 @@ def just_throw(*args, **kwargs):
     content = filename.open().read()

-    # Count failures and errors correctly
-    assert 'tests="1"' in content
+    # Count failures and errors correctly: libdwarf _and_ libelf
+    assert 'tests="2"' in content
     assert 'failures="0"' in content
-    assert 'errors="1"' in content
+    assert 'errors="2"' in content

     # We want to have both stdout and stderr
     assert '<system-out>' in content
-    assert msg in content
+    assert 'error message="{0}"'.format(msg) in content


 @pytest.mark.usefixtures('noop_install', 'config')
@@ -478,9 +482,8 @@ def test_cdash_upload_build_error(tmpdir, mock_fetch, install_mockery,
 @pytest.mark.disable_clean_stage_check
-def test_cdash_upload_clean_build(tmpdir, mock_fetch, install_mockery,
-                                  capfd):
-    # capfd interferes with Spack's capturing
+def test_cdash_upload_clean_build(tmpdir, mock_fetch, install_mockery, capfd):
+    # capfd interferes with Spack's capturing of e.g., Build.xml output
     with capfd.disabled():
         with tmpdir.as_cwd():
             install(
@@ -498,7 +501,7 @@ def test_cdash_upload_clean_build(tmpdir, mock_fetch, install_mockery,
 @pytest.mark.disable_clean_stage_check
 def test_cdash_upload_extra_params(tmpdir, mock_fetch, install_mockery, capfd):
-    # capfd interferes with Spack's capturing
+    # capfd interferes with Spack's capture of e.g., Build.xml output
     with capfd.disabled():
         with tmpdir.as_cwd():
             install(
@@ -520,7 +523,7 @@ def test_cdash_upload_extra_params(tmpdir, mock_fetch, install_mockery, capfd):
 @pytest.mark.disable_clean_stage_check
 def test_cdash_buildstamp_param(tmpdir, mock_fetch, install_mockery, capfd):
-    # capfd interferes with Spack's capturing
+    # capfd interferes with Spack's capture of e.g., Build.xml output
     with capfd.disabled():
         with tmpdir.as_cwd():
             cdash_track = 'some_mocked_track'
@@ -569,7 +572,6 @@ def test_cdash_install_from_spec_yaml(tmpdir, mock_fetch, install_mockery,
     report_file = report_dir.join('a_Configure.xml')
     assert report_file in report_dir.listdir()
     content = report_file.open().read()
-    import re
     install_command_regex = re.compile(
         r'<ConfigureCommand>(.+)</ConfigureCommand>',
         re.MULTILINE | re.DOTALL)
@@ -599,6 +601,7 @@ def test_build_warning_output(tmpdir, mock_fetch, install_mockery, capfd):
         msg = ''
         try:
             install('build-warnings')
+            assert False, "no exception was raised!"
         except spack.build_environment.ChildError as e:
             msg = e.long_message
@@ -607,12 +610,16 @@ def test_build_warning_output(tmpdir, mock_fetch, install_mockery, capfd):

 def test_cache_only_fails(tmpdir, mock_fetch, install_mockery, capfd):
+    msg = ''
     with capfd.disabled():
         try:
             install('--cache-only', 'libdwarf')
-            assert False
-        except spack.main.SpackCommandError:
-            pass
+        except spack.installer.InstallError as e:
+            msg = str(e)

+    # libelf from cache failed to install, which automatically removed the
+    # libdwarf build task and flagged the package as failed to install.
+    assert 'Installation of libdwarf failed' in msg


 def test_install_only_dependencies(tmpdir, mock_fetch, install_mockery):


@@ -28,6 +28,7 @@
 import spack.database
 import spack.directory_layout
 import spack.environment as ev
+import spack.package
 import spack.package_prefs
 import spack.paths
 import spack.platforms.test
@@ -38,7 +39,6 @@
 from spack.util.pattern import Bunch
 from spack.dependency import Dependency
-from spack.package import PackageBase
 from spack.fetch_strategy import FetchStrategyComposite, URLFetchStrategy
 from spack.fetch_strategy import FetchError
 from spack.spec import Spec
@@ -329,8 +329,18 @@ def mock_repo_path():
     yield spack.repo.RepoPath(spack.paths.mock_packages_path)


+@pytest.fixture
+def mock_pkg_install(monkeypatch):
+    def _pkg_install_fn(pkg, spec, prefix):
+        # sanity_check_prefix requires something in the install directory
+        mkdirp(prefix.bin)
+
+    monkeypatch.setattr(spack.package.PackageBase, 'install', _pkg_install_fn,
+                        raising=False)
+
+
 @pytest.fixture(scope='function')
-def mock_packages(mock_repo_path):
+def mock_packages(mock_repo_path, mock_pkg_install):
     """Use the 'builtin.mock' repository instead of 'builtin'"""
     with use_repo(mock_repo_path):
         yield mock_repo_path
@@ -599,10 +609,10 @@ def mock_fetch(mock_archive):
     def fake_fn(self):
         return fetcher

-    orig_fn = PackageBase.fetcher
-    PackageBase.fetcher = fake_fn
+    orig_fn = spack.package.PackageBase.fetcher
+    spack.package.PackageBase.fetcher = fake_fn
     yield
-    PackageBase.fetcher = orig_fn
+    spack.package.PackageBase.fetcher = orig_fn


 class MockLayout(object):


@@ -14,6 +14,7 @@
 import pytest
 import json

+import llnl.util.lock as lk
 from llnl.util.tty.colify import colify

 import spack.repo
@@ -749,3 +750,118 @@ def test_query_spec_with_non_conditional_virtual_dependency(database):
     # dependency that are not conditional on variants
     results = spack.store.db.query_local('mpileaks ^mpich')
     assert len(results) == 1
def test_failed_spec_path_error(database):
"""Ensure spec not concrete check is covered."""
s = spack.spec.Spec('a')
with pytest.raises(ValueError, match='Concrete spec required'):
spack.store.db._failed_spec_path(s)
@pytest.mark.db
def test_clear_failure_keep(mutable_database, monkeypatch, capfd):
"""Add test coverage for clear_failure operation when to be retained."""
def _is(db, spec):
return True
# Pretend the spec has been failure locked
monkeypatch.setattr(spack.database.Database, 'prefix_failure_locked', _is)
s = spack.spec.Spec('a')
spack.store.db.clear_failure(s)
out = capfd.readouterr()[0]
assert 'Retaining failure marking' in out
@pytest.mark.db
def test_clear_failure_forced(mutable_database, monkeypatch, capfd):
"""Add test coverage for clear_failure operation when force."""
def _is(db, spec):
return True
# Pretend the spec has been failure locked
monkeypatch.setattr(spack.database.Database, 'prefix_failure_locked', _is)
# Ensure OSError is raised when trying to remove the non-existent marking
monkeypatch.setattr(spack.database.Database, 'prefix_failure_marked', _is)
s = spack.spec.Spec('a').concretized()
spack.store.db.clear_failure(s, force=True)
out = capfd.readouterr()[1]
assert 'Removing failure marking despite lock' in out
assert 'Unable to remove failure marking' in out
@pytest.mark.db
def test_mark_failed(mutable_database, monkeypatch, tmpdir, capsys):
"""Add coverage to mark_failed."""
def _raise_exc(lock):
raise lk.LockTimeoutError('Mock acquire_write failure')
# Ensure attempt to acquire write lock on the mark raises the exception
monkeypatch.setattr(lk.Lock, 'acquire_write', _raise_exc)
with tmpdir.as_cwd():
s = spack.spec.Spec('a').concretized()
spack.store.db.mark_failed(s)
out = str(capsys.readouterr()[1])
assert 'Unable to mark a as failed' in out
# Clean up the failure mark to ensure it does not interfere with other
# tests using the same spec.
del spack.store.db._prefix_failures[s.prefix]
@pytest.mark.db
def test_prefix_failed(mutable_database, monkeypatch):
"""Add coverage to prefix_failed operation."""
def _is(db, spec):
return True
s = spack.spec.Spec('a').concretized()
# Confirm the spec is not already marked as failed
assert not spack.store.db.prefix_failed(s)
# Check that a failure entry is sufficient
spack.store.db._prefix_failures[s.prefix] = None
assert spack.store.db.prefix_failed(s)
# Remove the entry and check again
del spack.store.db._prefix_failures[s.prefix]
assert not spack.store.db.prefix_failed(s)
# Now pretend that the prefix failure is locked
monkeypatch.setattr(spack.database.Database, 'prefix_failure_locked', _is)
assert spack.store.db.prefix_failed(s)
def test_prefix_read_lock_error(mutable_database, monkeypatch):
"""Cover the prefix read lock exception."""
def _raise(db, spec):
raise lk.LockError('Mock lock error')
s = spack.spec.Spec('a').concretized()
# Ensure subsequent lock operations fail
monkeypatch.setattr(lk.Lock, 'acquire_read', _raise)
with pytest.raises(Exception):
with spack.store.db.prefix_read_lock(s):
assert False
def test_prefix_write_lock_error(mutable_database, monkeypatch):
"""Cover the prefix write lock exception."""
def _raise(db, spec):
raise lk.LockError('Mock lock error')
s = spack.spec.Spec('a').concretized()
# Ensure subsequent lock operations fail
monkeypatch.setattr(lk.Lock, 'acquire_write', _raise)
with pytest.raises(Exception):
with spack.store.db.prefix_write_lock(s):
assert False
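The shared-versus-exclusive semantics these prefix-lock tests rely on can be sketched with plain `fcntl` locks (POSIX only). This is an illustration of the coordination described in the PR, not Spack's `Lock` class:

```python
# Sketch: a shared (read) lock on a prefix-like file blocks an exclusive
# (write) lock, mirroring "check installed" vs. "install" coordination.
import fcntl
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

reader = open(path, 'r')
fcntl.flock(reader, fcntl.LOCK_SH)  # shared: "is the package installed?"

writer = open(path, 'r+')
try:
    # exclusive, non-blocking: "install into the prefix"
    fcntl.flock(writer, fcntl.LOCK_EX | fcntl.LOCK_NB)
    conflict = False
except (IOError, OSError):
    conflict = True  # the shared holder blocks the would-be installer

fcntl.flock(reader, fcntl.LOCK_UN)  # drop the shared lock
fcntl.flock(writer, fcntl.LOCK_EX | fcntl.LOCK_NB)  # now it succeeds
writer.close()
reader.close()
os.remove(path)
```

Any number of readers can hold the shared lock at once, but the exclusive lock is granted only when no other lock is held, which is what lets concurrent Spack processes safely decide who builds a given prefix.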


@@ -100,6 +100,9 @@ def test_partial_install_delete_prefix_and_stage(install_mockery, mock_fetch):
     rm_prefix_checker = RemovePrefixChecker(instance_rm_prefix)
     spack.package.Package.remove_prefix = rm_prefix_checker.remove_prefix

+    # must clear failure markings for the package before re-installing it
+    spack.store.db.clear_failure(spec, True)
+
     pkg.succeed = True
     pkg.stage = MockStage(pkg.stage)
@@ -264,6 +267,9 @@ def test_partial_install_keep_prefix(install_mockery, mock_fetch):
         pkg.do_install(keep_prefix=True)
         assert os.path.exists(pkg.prefix)

+        # must clear failure markings for the package before re-installing it
+        spack.store.db.clear_failure(spec, True)
+
         pkg.succeed = True  # make the build succeed
         pkg.stage = MockStage(pkg.stage)
         pkg.do_install(keep_prefix=True)
@@ -300,12 +306,13 @@ def test_store(install_mockery, mock_fetch):

 @pytest.mark.disable_clean_stage_check
-def test_failing_build(install_mockery, mock_fetch):
+def test_failing_build(install_mockery, mock_fetch, capfd):
     spec = Spec('failing-build').concretized()
     pkg = spec.package

     with pytest.raises(spack.build_environment.ChildError):
         pkg.do_install()
+        assert 'InstallError: Expected Failure' in capfd.readouterr()[0]


 class MockInstallError(spack.error.SpackError):
@@ -432,7 +439,7 @@ def test_pkg_install_log(install_mockery):
     # Attempt installing log without the build log file
     with pytest.raises(IOError, match="No such file or directory"):
-        spec.package.log()
+        spack.installer.log(spec.package)

     # Set up mock build files and try again
     log_path = spec.package.log_path
@@ -445,7 +452,7 @@ def test_pkg_install_log(install_mockery):
     install_path = os.path.dirname(spec.package.install_log_path)
     mkdirp(install_path)

-    spec.package.log()
+    spack.installer.log(spec.package)

     assert os.path.exists(spec.package.install_log_path)
     assert os.path.exists(spec.package.install_env_path)
@@ -469,3 +476,14 @@ def test_unconcretized_install(install_mockery, mock_fetch, mock_packages):
     with pytest.raises(ValueError, match="only patch concrete packages"):
         spec.package.do_patch()
+
+
+def test_install_error():
+    try:
+        msg = 'test install error'
+        long_msg = 'this is the long version of test install error'
+        raise InstallError(msg, long_msg=long_msg)
+    except Exception as exc:
+        assert exc.__class__.__name__ == 'InstallError'
+        assert exc.message == msg
+        assert exc.long_message == long_msg
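The two-part message pattern exercised by `test_install_error` can be sketched as follows; `ExampleError` is a hypothetical class mirroring the `message`/`long_message` attributes the test asserts, not Spack's `InstallError`:

```python
# Sketch: an error carrying a short message plus an optional long message,
# as test_install_error expects of InstallError.
class ExampleError(Exception):
    def __init__(self, message, long_msg=None):
        super(ExampleError, self).__init__(message)
        self.message = message
        self.long_message = long_msg


try:
    raise ExampleError('test install error',
                       long_msg='this is the long version')
except ExampleError as exc:
    caught = exc
```

Keeping the short message for one-line summaries and the long message for detailed output lets callers decide how much context to print when an install fails.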


@@ -0,0 +1,579 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import pytest
import llnl.util.tty as tty
import spack.binary_distribution
import spack.compilers
import spack.directory_layout as dl
import spack.installer as inst
import spack.util.lock as lk
import spack.repo
import spack.spec
def _noop(*args, **kwargs):
"""Generic monkeypatch no-op routine."""
pass
def _none(*args, **kwargs):
"""Generic monkeypatch function that always returns None."""
return None
def _true(*args, **kwargs):
"""Generic monkeypatch function that always returns True."""
return True
def create_build_task(pkg):
"""
Create a build task for the given (concretized) package
Args:
pkg (PackageBase): concretized package associated with the task
Return:
(BuildTask) A basic package build task
"""
return inst.BuildTask(pkg, False, 0, 0, inst.STATUS_ADDED, [])
def create_installer(spec_name):
"""
Create an installer for the named spec
Args:
spec_name (str): Name of the explicit install spec
Return:
spec (Spec): concretized spec
installer (PackageInstaller): the associated package installer
"""
spec = spack.spec.Spec(spec_name)
spec.concretize()
assert spec.concrete
return spec, inst.PackageInstaller(spec.package)
@pytest.mark.parametrize('sec,result', [
(86400, "24h"),
(3600, "1h"),
(60, "1m"),
(1.802, "1.80s"),
(3723.456, "1h 2m 3.46s")])
def test_hms(sec, result):
assert inst._hms(sec) == result
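A plausible implementation consistent with the parametrized cases above (a sketch only, not necessarily Spack's `_hms`):

```python
# Sketch: format a duration in seconds as hours/minutes/seconds,
# emitting only the non-zero parts, matching the cases above.
def hms(seconds):
    m, s = divmod(seconds, 60)
    h, m = divmod(int(m), 60)
    parts = []
    if h:
        parts.append('%dh' % h)
    if m:
        parts.append('%dm' % m)
    if s:
        parts.append('%.2fs' % s)
    return ' '.join(parts)
```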
def test_install_msg():
name = 'some-package'
pid = 123456
expected = "{0}: Installing {1}".format(pid, name)
assert inst.install_msg(name, pid) == expected
def test_install_from_cache_errors(install_mockery, capsys):
"""Test to ensure cover _install_from_cache errors."""
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
# Check with cache-only
with pytest.raises(SystemExit):
inst._install_from_cache(spec.package, True, True)
captured = str(capsys.readouterr())
assert 'No binary' in captured
assert 'found when cache-only specified' in captured
assert not spec.package.installed_from_binary_cache
# Check when don't expect to install only from binary cache
assert not inst._install_from_cache(spec.package, False, True)
assert not spec.package.installed_from_binary_cache
def test_install_from_cache_ok(install_mockery, monkeypatch):
"""Test to ensure cover _install_from_cache to the return."""
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
monkeypatch.setattr(inst, '_try_install_from_binary_cache', _true)
monkeypatch.setattr(spack.hooks, 'post_install', _noop)
assert inst._install_from_cache(spec.package, True, True)
def test_process_external_package_module(install_mockery, monkeypatch, capfd):
"""Test to simply cover the external module message path."""
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
# Ensure we take the external module path WITHOUT any changes to the database
monkeypatch.setattr(spack.database.Database, 'get_record', _none)
spec.external_path = '/actual/external/path/not/checked'
spec.external_module = 'unchecked_module'
inst._process_external_package(spec.package, False)
out = capfd.readouterr()[0]
assert 'has external module in {0}'.format(spec.external_module) in out
assert 'is actually installed in {0}'.format(spec.external_path) in out
def test_process_binary_cache_tarball_none(install_mockery, monkeypatch,
capfd):
"""Tests to cover _process_binary_cache_tarball when no tarball."""
monkeypatch.setattr(spack.binary_distribution, 'download_tarball', _none)
pkg = spack.repo.get('trivial-install-test-package')
assert not inst._process_binary_cache_tarball(pkg, None, False)
assert 'exists in binary cache but' in capfd.readouterr()[0]
def test_process_binary_cache_tarball_tar(install_mockery, monkeypatch, capfd):
"""Tests to cover _process_binary_cache_tarball with a tar file."""
def _spec(spec):
return spec
# Skip binary distribution functionality since it is assumed tested elsewhere
monkeypatch.setattr(spack.binary_distribution, 'download_tarball', _spec)
monkeypatch.setattr(spack.binary_distribution, 'extract_tarball', _noop)
# Skip database updates
monkeypatch.setattr(spack.database.Database, 'add', _noop)
spec = spack.spec.Spec('a').concretized()
assert inst._process_binary_cache_tarball(spec.package, spec, False)
assert 'Installing a from binary cache' in capfd.readouterr()[0]
def test_installer_init_errors(install_mockery):
"""Test to ensure cover installer constructor errors."""
with pytest.raises(ValueError, match='must be a package'):
inst.PackageInstaller('abc')
pkg = spack.repo.get('trivial-install-test-package')
with pytest.raises(ValueError, match='Can only install concrete'):
inst.PackageInstaller(pkg)
def test_installer_strings(install_mockery):
"""Tests of installer repr and str for coverage purposes."""
spec, installer = create_installer('trivial-install-test-package')
# Cover __repr__
irep = installer.__repr__()
assert irep.startswith(installer.__class__.__name__)
assert "installed=" in irep
assert "failed=" in irep
# Cover __str__
istr = str(installer)
assert "#tasks=0" in istr
assert "installed (0)" in istr
assert "failed (0)" in istr
def test_installer_last_phase_error(install_mockery, capsys):
"""Test to cover last phase error."""
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
with pytest.raises(SystemExit):
installer = inst.PackageInstaller(spec.package)
installer.install(stop_at='badphase')
captured = capsys.readouterr()
assert 'is not an allowed phase' in str(captured)
def test_installer_ensure_ready_errors(install_mockery):
"""Test to cover _ensure_ready errors."""
spec, installer = create_installer('trivial-install-test-package')
fmt = r'cannot be installed locally.*{0}'
# Force an external package error
path, module = spec.external_path, spec.external_module
spec.external_path = '/actual/external/path/not/checked'
spec.external_module = 'unchecked_module'
msg = fmt.format('is external')
with pytest.raises(inst.ExternalPackageError, match=msg):
installer._ensure_install_ready(spec.package)
# Force an upstream package error
spec.external_path, spec.external_module = path, module
spec.package._installed_upstream = True
msg = fmt.format('is upstream')
with pytest.raises(inst.UpstreamPackageError, match=msg):
installer._ensure_install_ready(spec.package)
# Force an install lock error, which should occur naturally since
# we are calling an internal method prior to any lock-related setup
spec.package._installed_upstream = False
assert len(installer.locks) == 0
with pytest.raises(inst.InstallLockError, match=fmt.format('not locked')):
installer._ensure_install_ready(spec.package)
def test_ensure_locked_have(install_mockery, tmpdir):
"""Test to cover _ensure_locked when already have lock."""
spec, installer = create_installer('trivial-install-test-package')
with tmpdir.as_cwd():
lock = lk.Lock('./test', default_timeout=1e-9, desc='test')
lock_type = 'read'
tpl = (lock_type, lock)
installer.locks[installer.pkg_id] = tpl
assert installer._ensure_locked(lock_type, spec.package) == tpl
def test_package_id(install_mockery):
"""Test to cover package_id functionality."""
pkg = spack.repo.get('trivial-install-test-package')
with pytest.raises(ValueError, match='spec is not concretized'):
inst.package_id(pkg)
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
pkg = spec.package
assert pkg.name in inst.package_id(pkg)
def test_fake_install(install_mockery):
"""Test to cover fake install basics."""
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
pkg = spec.package
inst._do_fake_install(pkg)
assert os.path.isdir(pkg.prefix.lib)
def test_packages_needed_to_bootstrap_compiler(install_mockery, monkeypatch):
"""Test to cover most of _packages_needed_to_boostrap_compiler."""
# TODO: More work is needed to go beyond the dependency check
def _no_compilers(pkg, arch_spec):
return []
# Test path where no compiler packages returned
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
packages = inst._packages_needed_to_bootstrap_compiler(spec.package)
assert not packages
# Test up to the dependency check
monkeypatch.setattr(spack.compilers, 'compilers_for_spec', _no_compilers)
with pytest.raises(spack.repo.UnknownPackageError, match='not found'):
inst._packages_needed_to_bootstrap_compiler(spec.package)
def test_dump_packages_deps(install_mockery, tmpdir):
"""Test to add coverage to dump_packages."""
spec = spack.spec.Spec('simple-inheritance').concretized()
with tmpdir.as_cwd():
inst.dump_packages(spec, '.')
def test_add_bootstrap_compilers(install_mockery, monkeypatch):
"""Test to cover _add_bootstrap_compilers."""
def _pkgs(pkg):
spec = spack.spec.Spec('mpi').concretized()
return [(spec.package, True)]
spec, installer = create_installer('trivial-install-test-package')
monkeypatch.setattr(inst, '_packages_needed_to_bootstrap_compiler', _pkgs)
installer._add_bootstrap_compilers(spec.package)
ids = list(installer.build_tasks)
assert len(ids) == 1
task = installer.build_tasks[ids[0]]
assert task.compiler
def test_prepare_for_install_on_installed(install_mockery, monkeypatch):
"""Test of _prepare_for_install's early return for installed task path."""
spec, installer = create_installer('dependent-install')
task = create_build_task(spec.package)
installer.installed.add(task.pkg_id)
monkeypatch.setattr(inst.PackageInstaller, '_ensure_install_ready', _noop)
installer._prepare_for_install(task, True, True, False)
def test_installer_init_queue(install_mockery):
"""Test of installer queue functions."""
with spack.config.override('config:install_missing_compilers', True):
spec, installer = create_installer('dependent-install')
installer._init_queue(True, True)
ids = list(installer.build_tasks)
assert len(ids) == 2
assert 'dependency-install' in ids
assert 'dependent-install' in ids
def test_install_task_use_cache(install_mockery, monkeypatch):
"""Test _install_task to cover use_cache path."""
spec, installer = create_installer('trivial-install-test-package')
task = create_build_task(spec.package)
monkeypatch.setattr(inst, '_install_from_cache', _true)
installer._install_task(task)
assert spec.package.name in installer.installed
def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys):
"""Test _release_lock for supposed write lock with exception."""
spec, installer = create_installer('trivial-install-test-package')
pkg_id = 'test'
with tmpdir.as_cwd():
lock = lk.Lock('./test', default_timeout=1e-9, desc='test')
installer.locks[pkg_id] = ('write', lock)
assert lock._writes == 0
installer._release_lock(pkg_id)
out = str(capsys.readouterr()[1])
msg = 'exception when releasing write lock for {0}'.format(pkg_id)
assert msg in out
def test_requeue_task(install_mockery, capfd):
"""Test to ensure cover _requeue_task."""
spec, installer = create_installer('a')
task = create_build_task(spec.package)
installer._requeue_task(task)
ids = list(installer.build_tasks)
assert len(ids) == 1
qtask = installer.build_tasks[ids[0]]
assert qtask.status == inst.STATUS_INSTALLING
out = capfd.readouterr()[0]
assert 'Installing a in progress by another process' in out
def test_cleanup_all_tasks(install_mockery, monkeypatch):
"""Test to ensure cover _cleanup_all_tasks."""
def _mktask(pkg):
return create_build_task(pkg)
def _rmtask(installer, pkg_id):
raise RuntimeError('Raise an exception to test except path')
spec, installer = create_installer('a')
# Cover task removal happy path
installer.build_tasks['a'] = _mktask(spec.package)
installer._cleanup_all_tasks()
assert len(installer.build_tasks) == 0
# Cover task removal exception path
installer.build_tasks['a'] = _mktask(spec.package)
monkeypatch.setattr(inst.PackageInstaller, '_remove_task', _rmtask)
installer._cleanup_all_tasks()
assert len(installer.build_tasks) == 1
def test_cleanup_failed(install_mockery, tmpdir, monkeypatch, capsys):
"""Test to increase coverage of _cleanup_failed."""
msg = 'Fake release_write exception'
def _raise_except(lock):
raise RuntimeError(msg)
spec, installer = create_installer('trivial-install-test-package')
monkeypatch.setattr(lk.Lock, 'release_write', _raise_except)
pkg_id = 'test'
with tmpdir.as_cwd():
lock = lk.Lock('./test', default_timeout=1e-9, desc='test')
installer.failed[pkg_id] = lock
installer._cleanup_failed(pkg_id)
out = str(capsys.readouterr()[1])
assert 'exception when removing failure mark' in out
assert msg in out
def test_update_failed_no_mark(install_mockery):
"""Test of _update_failed sans mark and dependent build tasks."""
spec, installer = create_installer('dependent-install')
task = create_build_task(spec.package)
installer._update_failed(task)
assert installer.failed['dependent-install'] is None
def test_install_uninstalled_deps(install_mockery, monkeypatch, capsys):
"""Test install with uninstalled dependencies."""
spec, installer = create_installer('dependent-install')
# Skip the actual installation and any status updates
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _noop)
monkeypatch.setattr(inst.PackageInstaller, '_update_installed', _noop)
monkeypatch.setattr(inst.PackageInstaller, '_update_failed', _noop)
msg = 'Cannot proceed with dependent-install'
with pytest.raises(spack.installer.InstallError, matches=msg):
installer.install()
out = str(capsys.readouterr())
assert 'Detected uninstalled dependencies for' in out
def test_install_failed(install_mockery, monkeypatch, capsys):
"""Test install with failed install."""
spec, installer = create_installer('b')
# Make sure the package is identified as failed
monkeypatch.setattr(spack.database.Database, 'prefix_failed', _true)
# Skip the actual installation though it should never get there
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _noop)
msg = 'Installation of b failed'
with pytest.raises(spack.installer.InstallError, matches=msg):
installer.install()
out = str(capsys.readouterr())
assert 'Warning: b failed to install' in out
def test_install_lock_failures(install_mockery, monkeypatch, capfd):
"""Cover basic install lock failure handling in a single pass."""
def _requeued(installer, task):
tty.msg('requeued {0}' .format(task.pkg.spec.name))
def _not_locked(installer, lock_type, pkg):
tty.msg('{0} locked {1}' .format(lock_type, pkg.spec.name))
return lock_type, None
spec, installer = create_installer('b')
# Ensure never acquire a lock
monkeypatch.setattr(inst.PackageInstaller, '_ensure_locked', _not_locked)
# Ensure don't continually requeue the task
monkeypatch.setattr(inst.PackageInstaller, '_requeue_task', _requeued)
# Skip the actual installation though should never reach it
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _noop)
installer.install()
out = capfd.readouterr()[0]
expected = ['write locked', 'read locked', 'requeued']
for exp, ln in zip(expected, out.split('\n')):
assert exp in ln
def test_install_lock_installed_requeue(install_mockery, monkeypatch, capfd):
"""Cover basic install handling for installed package."""
def _install(installer, task, **kwargs):
tty.msg('{0} installing'.format(task.pkg.spec.name))
def _not_locked(installer, lock_type, pkg):
tty.msg('{0} locked {1}' .format(lock_type, pkg.spec.name))
return lock_type, None
def _prep(installer, task, keep_prefix, keep_stage, restage):
installer.installed.add('b')
tty.msg('{0} is installed' .format(task.pkg.spec.name))
# also do not allow the package to be locked again
monkeypatch.setattr(inst.PackageInstaller, '_ensure_locked',
_not_locked)
def _requeued(installer, task):
tty.msg('requeued {0}' .format(task.pkg.spec.name))
# Skip the actual installation though should never reach it
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _install)
# Flag the package as installed
monkeypatch.setattr(inst.PackageInstaller, '_prepare_for_install', _prep)
# Ensure don't continually requeue the task
monkeypatch.setattr(inst.PackageInstaller, '_requeue_task', _requeued)
spec, installer = create_installer('b')
installer.install()
assert 'b' not in installer.installed
out = capfd.readouterr()[0]
expected = ['is installed', 'read locked', 'requeued']
for exp, ln in zip(expected, out.split('\n')):
assert exp in ln
def test_install_read_locked_requeue(install_mockery, monkeypatch, capfd):
"""Cover basic read lock handling for uninstalled package with requeue."""
orig_fn = inst.PackageInstaller._ensure_locked
def _install(installer, task, **kwargs):
tty.msg('{0} installing'.format(task.pkg.spec.name))
def _read(installer, lock_type, pkg):
tty.msg('{0}->read locked {1}' .format(lock_type, pkg.spec.name))
return orig_fn(installer, 'read', pkg)
def _prep(installer, task, keep_prefix, keep_stage, restage):
tty.msg('preparing {0}' .format(task.pkg.spec.name))
assert task.pkg.spec.name not in installer.installed
def _requeued(installer, task):
tty.msg('requeued {0}' .format(task.pkg.spec.name))
# Force a read lock
monkeypatch.setattr(inst.PackageInstaller, '_ensure_locked', _read)
# Skip the actual installation though should never reach it
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _install)
# Flag the package as installed
monkeypatch.setattr(inst.PackageInstaller, '_prepare_for_install', _prep)
# Ensure don't continually requeue the task
monkeypatch.setattr(inst.PackageInstaller, '_requeue_task', _requeued)
spec, installer = create_installer('b')
installer.install()
assert 'b' not in installer.installed
out = capfd.readouterr()[0]
expected = ['write->read locked', 'preparing', 'requeued']
for exp, ln in zip(expected, out.split('\n')):
assert exp in ln
def test_install_dir_exists(install_mockery, monkeypatch, capfd):
"""Cover capture of install directory exists error."""
err = 'Mock directory exists error'
def _install(installer, task, **kwargs):
raise dl.InstallDirectoryAlreadyExistsError(err)
# Skip the actual installation though should never reach it
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _install)
spec, installer = create_installer('b')
with pytest.raises(dl.InstallDirectoryAlreadyExistsError, matches=err):
installer.install()
assert 'b' in installer.installed
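The tests above exercise the new `PackageInstaller`, which replaces recursive dependency installs with a queue ordered by each spec's number of remaining uninstalled dependencies. The core scheduling idea can be sketched as follows; this is a simplified illustration of the approach described above, not Spack's actual implementation, and `install_order` is a hypothetical name:

```python
def install_order(deps):
    """Return a bottom-up build order for a dependency DAG.

    ``deps`` maps each package name to the set of packages it depends on.
    A package's priority is its count of uninstalled dependencies; it
    becomes ready to build when that count drops to zero.
    """
    remaining = dict(deps)
    done = set()
    order = []
    while remaining:
        # Pop the package with the fewest uninstalled dependencies.
        pkg = min(remaining, key=lambda p: len(remaining[p] - done))
        if remaining[pkg] - done:
            # Even the best candidate has unmet dependencies: a cycle.
            raise RuntimeError(
                'Detected uninstalled dependencies for {0}'.format(pkg))
        del remaining[pkg]
        done.add(pkg)
        order.append(pkg)
    return order
```

In the real installer the priorities also change as concurrent processes mark packages installed or failed, which is why tasks can be requeued rather than simply popped once.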


@@ -1240,3 +1240,57 @@ def test_lock_in_current_directory(tmpdir):
             pass
         with lk.WriteTransaction(lock):
             pass
+
+
+def test_attempts_str():
+    assert lk._attempts_str(0, 0) == ''
+    assert lk._attempts_str(0.12, 1) == ''
+    assert lk._attempts_str(12.345, 2) == ' after 12.35s and 2 attempts'
+
+
+def test_lock_str():
+    lock = lk.Lock('lockfile')
+    lockstr = str(lock)
+    assert 'lockfile[0:0]' in lockstr
+    assert 'timeout=None' in lockstr
+    assert '#reads=0, #writes=0' in lockstr
+
+
+def test_downgrade_write_okay(tmpdir):
+    """Test the lock write-to-read downgrade operation."""
+    with tmpdir.as_cwd():
+        lock = lk.Lock('lockfile')
+        lock.acquire_write()
+        lock.downgrade_write_to_read()
+        assert lock._reads == 1
+        assert lock._writes == 0
+
+
+def test_downgrade_write_fails(tmpdir):
+    """Test failing the lock write-to-read downgrade operation."""
+    with tmpdir.as_cwd():
+        lock = lk.Lock('lockfile')
+        lock.acquire_read()
+        msg = 'Cannot downgrade lock from write to read on file: lockfile'
+        with pytest.raises(lk.LockDowngradeError, matches=msg):
+            lock.downgrade_write_to_read()
+
+
+def test_upgrade_read_okay(tmpdir):
+    """Test the lock read-to-write upgrade operation."""
+    with tmpdir.as_cwd():
+        lock = lk.Lock('lockfile')
+        lock.acquire_read()
+        lock.upgrade_read_to_write()
+        assert lock._reads == 0
+        assert lock._writes == 1
+
+
+def test_upgrade_read_fails(tmpdir):
+    """Test failing the lock read-to-write upgrade operation."""
+    with tmpdir.as_cwd():
+        lock = lk.Lock('lockfile')
+        lock.acquire_write()
+        msg = 'Cannot upgrade lock from read to write on file: lockfile'
+        with pytest.raises(lk.LockUpgradeError, matches=msg):
+            lock.upgrade_read_to_write()
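The downgrade/upgrade tests above assert on the lock's read/write reference counts. The state transitions they expect can be sketched with a minimal stand-in; this is an illustration only (Spack's `llnl.util.lock.Lock` additionally manages byte-range `fcntl` locks on the file underneath, and `ToyLock` is a hypothetical class):

```python
class ToyLock(object):
    """Track read/write ownership the way the tests above expect."""

    def __init__(self):
        self._reads = 0
        self._writes = 0

    def acquire_read(self):
        self._reads += 1

    def acquire_write(self):
        self._writes += 1

    def downgrade_write_to_read(self):
        # Only the sole writer may downgrade to a reader.
        if self._writes != 1:
            raise RuntimeError('Cannot downgrade lock from write to read')
        self._writes, self._reads = 0, self._reads + 1

    def upgrade_read_to_write(self):
        # Only the sole reader may upgrade to a writer.
        if self._reads != 1:
            raise RuntimeError('Cannot upgrade lock from read to write')
        self._reads, self._writes = 0, self._writes + 1
```

This is why the tests check `_reads`/`_writes` directly: downgrade and upgrade swap which counter holds the single reference without ever dropping the lock.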


@@ -49,4 +49,6 @@ def build(self, spec, prefix):
         pass
 
     def install(self, spec, prefix):
-        pass
+        # sanity_check_prefix requires something in the install directory
+        # Test requires overriding the one provided by `AutotoolsPackage`
+        mkdirp(prefix.bin)


@@ -13,6 +13,3 @@ class B(Package):
     url = "http://www.example.com/b-1.0.tar.gz"
 
     version('1.0', '0123456789abcdef0123456789abcdef')
-
-    def install(self, spec, prefix):
-        pass

@@ -59,6 +59,3 @@ class Boost(Package):
             description="Build the Boost Graph library")
     variant('taggedlayout', default=False,
             description="Augment library names with build options")
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class C(Package):
     url = "http://www.example.com/c-1.0.tar.gz"
 
     version('1.0', '0123456789abcdef0123456789abcdef')
-
-    def install(self, spec, prefix):
-        pass

@@ -17,6 +17,3 @@ class ConflictingDependent(Package):
     version('1.0', '0123456789abcdef0123456789abcdef')
 
     depends_on('dependency-install@:1.0')
-
-    def install(self, spec, prefix):
-        pass

@@ -25,6 +25,3 @@ class DepDiamondPatchMid1(Package):
 
     # single patch file in repo
     depends_on('patch', patches='mid1.patch')
-
-    def install(self, spec, prefix):
-        pass

@@ -28,6 +28,3 @@ class DepDiamondPatchMid2(Package):
         patch('http://example.com/urlpatch.patch',
               sha256='mid21234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234'),  # noqa: E501
     ])
-
-    def install(self, spec, prefix):
-        pass

@@ -27,6 +27,3 @@ class DepDiamondPatchTop(Package):
     depends_on('patch', patches='top.patch')
     depends_on('dep-diamond-patch-mid1')
     depends_on('dep-diamond-patch-mid2')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class DevelopTest(Package):
     version('develop', git='https://github.com/dummy/repo.git')
     version('0.2.15', 'b1190f3d3471685f17cfd1ec1d252ac9')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class DevelopTest2(Package):
     version('0.2.15.develop', git='https://github.com/dummy/repo.git')
     version('0.2.15', 'b1190f3d3471685f17cfd1ec1d252ac9')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class DirectMpich(Package):
     version('1.0', 'foobarbaz')
 
     depends_on('mpich')
-
-    def install(self, spec, prefix):
-        pass

@@ -12,6 +12,3 @@ class DtDiamondBottom(Package):
     url = "http://www.example.com/dt-diamond-bottom-1.0.tar.gz"
 
     version('1.0', '0123456789abcdef0123456789abcdef')
-
-    def install(self, spec, prefix):
-        pass

@@ -14,6 +14,3 @@ class DtDiamondLeft(Package):
     version('1.0', '0123456789abcdef0123456789abcdef')
 
     depends_on('dt-diamond-bottom', type='build')
-
-    def install(self, spec, prefix):
-        pass

@@ -14,6 +14,3 @@ class DtDiamondRight(Package):
     version('1.0', '0123456789abcdef0123456789abcdef')
 
     depends_on('dt-diamond-bottom', type=('build', 'link', 'run'))
-
-    def install(self, spec, prefix):
-        pass

@@ -15,6 +15,3 @@ class DtDiamond(Package):
     depends_on('dt-diamond-left')
     depends_on('dt-diamond-right')
-
-    def install(self, spec, prefix):
-        pass

@@ -18,6 +18,3 @@ class Dtbuild1(Package):
     depends_on('dtbuild2', type='build')
     depends_on('dtlink2')
     depends_on('dtrun2', type='run')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class Dtbuild2(Package):
     url = "http://www.example.com/dtbuild2-1.0.tar.gz"
 
     version('1.0', '0123456789abcdef0123456789abcdef')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class Dtbuild3(Package):
     url = "http://www.example.com/dtbuild3-1.0.tar.gz"
 
     version('1.0', '0123456789abcdef0123456789abcdef')
-
-    def install(self, spec, prefix):
-        pass

@@ -15,6 +15,3 @@ class Dtlink1(Package):
     version('1.0', '0123456789abcdef0123456789abcdef')
 
     depends_on('dtlink3')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class Dtlink2(Package):
     url = "http://www.example.com/dtlink2-1.0.tar.gz"
 
     version('1.0', '0123456789abcdef0123456789abcdef')
-
-    def install(self, spec, prefix):
-        pass

@@ -16,6 +16,3 @@ class Dtlink3(Package):
     depends_on('dtbuild2', type='build')
     depends_on('dtlink4')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class Dtlink4(Package):
     url = "http://www.example.com/dtlink4-1.0.tar.gz"
 
     version('1.0', '0123456789abcdef0123456789abcdef')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class Dtlink5(Package):
     url = "http://www.example.com/dtlink5-1.0.tar.gz"
 
     version('1.0', '0123456789abcdef0123456789abcdef')
-
-    def install(self, spec, prefix):
-        pass

@@ -16,6 +16,3 @@ class Dtrun1(Package):
     depends_on('dtlink5')
     depends_on('dtrun3', type='run')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class Dtrun2(Package):
     url = "http://www.example.com/dtrun2-1.0.tar.gz"
 
     version('1.0', '0123456789abcdef0123456789abcdef')
-
-    def install(self, spec, prefix):
-        pass

@@ -15,6 +15,3 @@ class Dtrun3(Package):
     version('1.0', '0123456789abcdef0123456789abcdef')
 
     depends_on('dtbuild3', type='build')
-
-    def install(self, spec, prefix):
-        pass

@@ -17,6 +17,3 @@ class Dttop(Package):
     depends_on('dtbuild1', type='build')
     depends_on('dtlink1')
     depends_on('dtrun1', type='run')
-
-    def install(self, spec, prefix):
-        pass

@@ -15,6 +15,3 @@ class Dtuse(Package):
     version('1.0', '0123456789abcdef0123456789abcdef')
 
     depends_on('dttop')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class E(Package):
     url = "http://www.example.com/e-1.0.tar.gz"
 
     version('1.0', '0123456789abcdef0123456789abcdef')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class Externalmodule(Package):
     version('1.0', '1234567890abcdef1234567890abcdef')
 
     depends_on('externalprereq')
-
-    def install(self, spec, prefix):
-        pass

@@ -11,6 +11,3 @@ class Externalprereq(Package):
     url = "http://somewhere.com/prereq-1.0.tar.gz"
 
     version('1.4', 'f1234567890abcdef1234567890abcde')
-
-    def install(self, spec, prefix):
-        pass

@@ -14,6 +14,3 @@ class Externaltool(Package):
     version('0.9', '1234567890abcdef1234567890abcdef')
 
     depends_on('externalprereq')
-
-    def install(self, spec, prefix):
-        pass

@@ -16,6 +16,3 @@ class Externalvirtual(Package):
     version('2.2', '4567890abcdef1234567890abcdef123')
 
     provides('stuff', when='@1.0:')
-
-    def install(self, spec, prefix):
-        pass

@@ -11,6 +11,3 @@ class Fake(Package):
     url = "http://www.fake-spack-example.org/downloads/fake-1.0.tar.gz"
 
     version('1.0', 'foobarbaz')
-
-    def install(self, spec, prefix):
-        pass

@@ -58,7 +58,11 @@ def install(self, spec, prefix):
         if 'really-long-if-statement' != 'that-goes-over-the-line-length-limit-and-requires-noqa':  # noqa
             pass
+        # sanity_check_prefix requires something in the install directory
+        mkdirp(prefix.bin)
 
     # '@when' decorated functions are exempt from redefinition errors
     @when('@2.0')
     def install(self, spec, prefix):
-        pass
+        # sanity_check_prefix requires something in the install directory
+        mkdirp(prefix.bin)

@@ -15,6 +15,3 @@ class GitSvnTopLevel(Package):
     svn = 'https://example.com/some/svn/repo'
 
     version('2.0')
-
-    def install(self, spec, prefix):
-        pass

@@ -11,6 +11,3 @@ class GitTest(Package):
     homepage = "http://www.git-fetch-example.com"
 
     version('git', git='to-be-filled-in-by-test')
-
-    def install(self, spec, prefix):
-        pass

@@ -12,6 +12,3 @@ class GitTopLevel(Package):
     git = 'https://example.com/some/git/repo'
 
     version('1.0')
-
-    def install(self, spec, prefix):
-        pass

@@ -16,6 +16,3 @@ class GitUrlSvnTopLevel(Package):
     svn = 'https://example.com/some/svn/repo'
 
     version('2.0')
-
-    def install(self, spec, prefix):
-        pass

@@ -38,6 +38,3 @@ class GitUrlTopLevel(Package):
     version('1.2', sha512='abc12', branch='releases/v1.2')
     version('1.1', md5='abc11', tag='v1.1')
     version('1.0', 'abc11', tag='abc123')
-
-    def install(self, spec, prefix):
-        pass

@@ -3,10 +3,10 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
 
-from spack import *
 import os
+from spack import *
 
 
 class HashTest1(Package):
     """Used to test package hashing
@@ -37,10 +37,16 @@ def install(self, spec, prefix):
         print("install 1")
         os.listdir(os.getcwd())
 
+        # sanity_check_prefix requires something in the install directory
+        mkdirp(prefix.bin)
+
     @when('@1.5:')
     def install(self, spec, prefix):
         os.listdir(os.getcwd())
 
+        # sanity_check_prefix requires something in the install directory
+        mkdirp(prefix.bin)
+
     @when('@1.5,1.6')
     def extra_phase(self, spec, prefix):
         pass

@@ -3,10 +3,10 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
 
-from spack import *
 import os
+from spack import *
 
 
 class HashTest2(Package):
     """Used to test package hashing
@@ -31,3 +31,6 @@ def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
     def install(self, spec, prefix):
         print("install 1")
         os.listdir(os.getcwd())
+
+        # sanity_check_prefix requires something in the install directory
+        mkdirp(prefix.bin)

@@ -3,10 +3,10 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
 
-from spack import *
 import os
+from spack import *
 
 
 class HashTest3(Package):
     """Used to test package hashing
@@ -32,10 +32,16 @@ def install(self, spec, prefix):
         print("install 1")
         os.listdir(os.getcwd())
 
+        # sanity_check_prefix requires something in the install directory
+        mkdirp(prefix.bin)
+
     @when('@1.5:')
     def install(self, spec, prefix):
         os.listdir(os.getcwd())
 
+        # sanity_check_prefix requires something in the install directory
+        mkdirp(prefix.bin)
+
     for _version_constraint in ['@1.5', '@1.6']:
         @when(_version_constraint)
         def extra_phase(self, spec, prefix):

@@ -11,6 +11,3 @@ class HgTest(Package):
     homepage = "http://www.hg-fetch-example.com"
 
     version('hg', hg='to-be-filled-in-by-test')
-
-    def install(self, spec, prefix):
-        pass

@@ -12,6 +12,3 @@ class HgTopLevel(Package):
     hg = 'https://example.com/some/hg/repo'
 
     version('1.0')
-
-    def install(self, spec, prefix):
-        pass

@@ -16,6 +16,3 @@ class Hypre(Package):
     depends_on('lapack')
     depends_on('blas')
-
-    def install(self, spec, prefix):
-        pass

@@ -18,6 +18,3 @@ class IndirectMpich(Package):
     depends_on('mpi')
     depends_on('direct-mpich')
-
-    def install(self, spec, prefix):
-        pass

@@ -15,6 +15,3 @@ class Maintainers1(Package):
     maintainers = ['user1', 'user2']
 
     version('1.0', '0123456789abcdef0123456789abcdef')
-
-    def install(self, spec, prefix):
-        pass

@@ -15,6 +15,3 @@ class Maintainers2(Package):
     maintainers = ['user2', 'user3']
 
     version('1.0', '0123456789abcdef0123456789abcdef')
-
-    def install(self, spec, prefix):
-        pass

@@ -12,6 +12,3 @@ class Mixedversions(Package):
     version('2.0.1', 'hashc')
     version('2.0', 'hashb')
     version('1.0.1', 'hasha')
-
-    def install(self, spec, prefix):
-        pass

@@ -12,9 +12,6 @@ class ModulePathSeparator(Package):
     version(1.0, 'foobarbaz')
 
-    def install(self, spec, prefix):
-        pass
-
     def setup_environment(self, senv, renv):
         renv.append_path("COLON", "foo")
         renv.prepend_path("COLON", "foo")

@@ -27,6 +27,3 @@ class MultiProviderMpi(Package):
     provides('mpi@3.0', when='@1.10.0')
     provides('mpi@3.0', when='@1.8.8')
     provides('mpi@2.2', when='@1.6.5')
-
-    def install(self, spec, prefix):
-        pass

@@ -15,6 +15,3 @@ class MultimoduleInheritance(si.BaseWithDirectives):
     version('1.0', '0123456789abcdef0123456789abcdef')
 
     depends_on('openblas', when='+openblas')
-
-    def install(self, spec, prefix):
-        pass

@@ -33,6 +33,3 @@ class MultivalueVariant(Package):
     depends_on('callpath')
     depends_on('a')
    depends_on('a@1.0', when='fee=barbaz')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class NetlibBlas(Package):
     version('3.5.0', 'b1d3e3e425b2e44a06760ff173104bdf')
 
     provides('blas')
-
-    def install(self, spec, prefix):
-        pass

@@ -14,6 +14,3 @@ class NetlibLapack(Package):
     provides('lapack')
     depends_on('blas')
-
-    def install(self, spec, prefix):
-        pass

@@ -3,10 +3,7 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
 
-import os
-
 from spack import *
-from llnl.util.filesystem import touch
 
 
 class NosourceInstall(BundlePackage):
@@ -24,8 +21,8 @@ class NosourceInstall(BundlePackage):
     # The install method must also be present.
     def install(self, spec, prefix):
-        touch(os.path.join(self.prefix, 'install.txt'))
+        touch(join_path(self.prefix, 'install.txt'))
 
     @run_after('install')
     def post_install(self):
-        touch(os.path.join(self.prefix, 'post-install.txt'))
+        touch(join_path(self.prefix, 'post-install.txt'))

@@ -15,6 +15,3 @@ class OpenblasWithLapack(Package):
     provides('lapack')
     provides('blas')
-
-    def install(self, spec, prefix):
-        pass

@@ -14,6 +14,3 @@ class Openblas(Package):
     version('0.2.15', 'b1190f3d3471685f17cfd1ec1d252ac9')
 
     provides('blas')
-
-    def install(self, spec, prefix):
-        pass

@@ -19,6 +19,3 @@ class OptionalDepTest2(Package):
     depends_on('optional-dep-test', when='+odt')
     depends_on('optional-dep-test+mpi', when='+mpi')
-
-    def install(self, spec, prefix):
-        pass

@@ -18,6 +18,3 @@ class OptionalDepTest3(Package):
     depends_on('a', when='~var')
     depends_on('b', when='+var')
-
-    def install(self, spec, prefix):
-        pass

@@ -30,6 +30,3 @@ class OptionalDepTest(Package):
     depends_on('mpi', when='^g')
     depends_on('mpi', when='+mpi')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class Othervirtual(Package):
     version('1.0', '67890abcdef1234567890abcdef12345')
 
     provides('stuff')
-
-    def install(self, spec, prefix):
-        pass

@@ -18,6 +18,3 @@ class OverrideContextTemplates(Package):
     tcl_template = 'extension.tcl'
     tcl_context = {'sentence': "sentence from package"}
-
-    def install(self, spec, prefix):
-        pass

@@ -14,6 +14,3 @@ class OverrideModuleTemplates(Package):
     tcl_template = 'override.txt'
     lmod_template = 'override.txt'
-
-    def install(self, spec, prefix):
-        pass

@@ -15,6 +15,3 @@ class PatchADependency(Package):
     version('1.0', '0123456789abcdef0123456789abcdef')
 
     depends_on('libelf', patches=patch('libelf.patch'))
-
-    def install(self, spec, prefix):
-        pass

@@ -36,6 +36,3 @@ class PatchSeveralDependencies(Package):
                archive_sha256='abcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcd',
                sha256='1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd')
     ])
-
-    def install(self, spec, prefix):
-        pass

@@ -21,6 +21,3 @@ class Patch(Package):
     patch('bar.patch', when='@2:')
     patch('baz.patch')
     patch('biz.patch', when='@1.0.1:1.0.2')
-
-    def install(self, spec, prefix):
-        pass

@@ -13,6 +13,3 @@ class Perl(Package):
     extendable = True
 
     version('0.0.0', 'hash')
-
-    def install(self, spec, prefix):
-        pass

@@ -15,6 +15,3 @@ class PreferredTest(Package):
     version('0.2.16', 'b1190f3d3471685f17cfd1ec1d252ac9')
     version('0.2.15', 'b1190f3d3471685f17cfd1ec1d252ac9', preferred=True)
     version('0.2.14', 'b1190f3d3471685f17cfd1ec1d252ac9')
-
-    def install(self, spec, prefix):
-        pass

@@ -19,6 +19,3 @@ class Python(Package):
     version('2.7.10', 'd7547558fd673bd9d38e2108c6b42521')
     version('2.7.9', '5eebcaa0030dc4061156d3429657fb83')
     version('2.7.8', 'd4bca0159acb0b44a781292b5231936f')
-
-    def install(self, spec, prefix):
-        pass

@@ -30,6 +30,3 @@ class SimpleInheritance(BaseWithDirectives):
     depends_on('openblas', when='+openblas')
     provides('lapack', when='+openblas')
-
-    def install(self, spec, prefix):
-        pass

@@ -15,6 +15,3 @@ class SinglevalueVariantDependent(Package):
     version('1.0', '0123456789abcdef0123456789abcdef')
 
     depends_on('multivalue_variant fee=baz')
-
-    def install(self, spec, prefix):
-        pass

```diff
@@ -11,6 +11,3 @@ class SvnTest(Package):
     url = "http://www.example.com/svn-test-1.0.tar.gz"

     version('svn', svn='to-be-filled-in-by-test')
-
-    def install(self, spec, prefix):
-        pass
```

```diff
@@ -11,6 +11,3 @@ class SvnTopLevel(Package):
     svn = 'https://example.com/some/svn/repo'

     version('1.0')
-
-    def install(self, spec, prefix):
-        pass
```

```diff
@@ -5,7 +5,6 @@
 from spack import *

-import os
 import spack.paths
@@ -13,7 +12,7 @@ class UrlListTest(Package):
     """Mock package with url_list."""

     homepage = "http://www.url-list-example.com"
-    web_data_path = os.path.join(spack.paths.test_path, 'data', 'web')
+    web_data_path = join_path(spack.paths.test_path, 'data', 'web')
     url = 'file://' + web_data_path + '/foo-0.0.0.tar.gz'
     list_url = 'file://' + web_data_path + '/index.html'
     list_depth = 3
@@ -25,6 +24,3 @@ class UrlListTest(Package):
     version('2.0.0b2', 'abc200b2')
     version('3.0a1', 'abc30a1')
     version('4.5-rc5', 'abc45rc5')
-
-    def install(self, spec, prefix):
-        pass
```
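The hunk above also swaps `os.path.join` for Spack's `join_path` convenience helper, which is available in package files via `from spack import *`, so the explicit `import os` can be dropped. For this use the two are interchangeable; a minimal sketch (the stand-in definition here is illustrative, not Spack's actual implementation):

```python
import os


def join_path(*paths):
    # Illustrative stand-in: Spack's helper joins path components
    # the same way os.path.join does.
    return os.path.join(*paths)


web_data_path = join_path('test', 'data', 'web')
assert web_data_path == os.path.join('test', 'data', 'web')
```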

```diff
@@ -11,6 +11,3 @@ class UrlTest(Package):
     homepage = "http://www.url-fetch-example.com"

     version('test', url='to-be-filled-in-by-test')
-
-    def install(self, spec, prefix):
-        pass
```

```diff
@@ -23,6 +23,3 @@ class WhenDirectivesFalse(Package):
     resource(url="http://www.example.com/example-1.0-resource.tar.gz",
              md5='0123456789abcdef0123456789abcdef',
              when=False)
-
-    def install(self, spec, prefix):
-        pass
```

```diff
@@ -23,6 +23,3 @@ class WhenDirectivesTrue(Package):
     resource(url="http://www.example.com/example-1.0-resource.tar.gz",
              md5='0123456789abcdef0123456789abcdef',
              when=True)
-
-    def install(self, spec, prefix):
-        pass
```

```diff
@@ -16,6 +16,3 @@ class Zmpi(Package):
     provides('mpi@:10.0')

     depends_on('fake')
-
-    def install(self, spec, prefix):
-        pass
```

```diff
@@ -43,7 +43,8 @@ def fetcher(self):
         raise InstallError(msg)

     def install(self, spec, prefix):
-        pass
+        # sanity_check_prefix requires something in the install directory
+        mkdirp(prefix.lib)

     @property
     def libs(self):
```
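The final hunk above shows the pattern this PR applies throughout the mock packages: because builds are now tracked through a queue, the post-install sanity check requires that `install` leaves something in the prefix, so the old `pass` bodies are replaced with `mkdirp(prefix.lib)`. A minimal sketch of the idea, using plain `os` calls in place of Spack's `mkdirp` helper (the function names here are illustrative, not Spack's actual API):

```python
import os
import tempfile


def sanity_check_prefix(prefix):
    """Illustrative check: fail if install left nothing in its prefix."""
    if not any(os.scandir(prefix)):
        raise RuntimeError('Install failed: nothing installed in %s' % prefix)


def install(prefix):
    """Mock-package install: create prefix/lib so the check passes."""
    os.makedirs(os.path.join(prefix, 'lib'), exist_ok=True)


prefix = tempfile.mkdtemp()
install(prefix)
sanity_check_prefix(prefix)  # passes: prefix/lib exists
```

An empty prefix raises, which is why the mock packages above can no longer implement `install` as a bare `pass`.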