Merge pull request #21930 from vsoch/add/spack-monitor

This provides initial support for [spack monitor](https://github.com/spack/spack-monitor), a web application that stores information and analysis about Spack installations.  Spack can now contact a monitor server and upload analysis -- even after a build is already done.

Specifically, this adds:
- [x] monitor options for `spack install`
- [x] `spack analyze` command
- [x] hook architecture for analyzers
- [x] separate build logs (in addition to the existing combined log)
- [x] docs for spack analyze
- [x] reworked developer docs, with hook docs
- [x] analyzers for:
  - [x] config args
  - [x] environment variables
  - [x] installed files
  - [x] libabigail

There is a lot more information in the docs contained in this PR, so consult those for full details on this feature.

Additional tests will be added in a future PR.
This commit is contained in:
Vanessasaurus 2021-04-15 01:38:36 -06:00 committed by GitHub
parent 613348ec90
commit 7f91c1a510
31 changed files with 2281 additions and 105 deletions

lib/spack/docs/analyze.rst Normal file
View file

@@ -0,0 +1,162 @@
.. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _analyze:
=======
Analyze
=======
The analyze command is a front-end to various tools that let us analyze
package installations. Each analyzer is a module for a different kind
of analysis that can be done on a package installation, including (but not
limited to) binary, log, or text analysis. Thus, the analyze command group
allows you to take an existing package install, choose an analyzer,
and extract some output for the package using it.
-----------------
Analyzer Metadata
-----------------
For all analyzers, we write to an ``analyzers`` folder in ``~/.spack``, or the
value that you specify in your spack config at ``config:analyzers_dir``.
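Internally, this location is resolved with a config lookup; a rough sketch of
the logic, mirroring the analyzer base class:
.. code-block:: python

    import spack.config
    import spack.util.path

    # Fall back to ~/.spack/analyzers if no custom directory is configured
    analyzer_dir = spack.util.path.canonicalize_path(
        spack.config.get('config:analyzers_dir', '~/.spack/analyzers'))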
For example, here we see the results of running an analysis on zlib:
.. code-block:: console
$ tree ~/.spack/analyzers/
└── linux-ubuntu20.04-skylake
└── gcc-9.3.0
└── zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2
├── environment_variables
│   └── spack-analyzer-environment-variables.json
├── install_files
│   └── spack-analyzer-install-files.json
└── libabigail
└── spack-analyzer-libabigail-libz.so.1.2.11.xml
This means that you can always find analyzer output in this folder, and it
is organized with the same logic as the package install it was run for.
If you want to customize this top level folder, simply provide the ``--path``
argument to ``spack analyze run``. The nested organization will be maintained
within your custom root.
-----------------
Listing Analyzers
-----------------
If you aren't familiar with Spack's analyzers, you can quickly list those that
are available:
.. code-block:: console
$ spack analyze list-analyzers
install_files : install file listing read from install_manifest.json
environment_variables : environment variables parsed from spack-build-env.txt
config_args : config args loaded from spack-configure-args.txt
abigail : Application Binary Interface (ABI) features for objects
In the above, the first three are fairly simple, each parsing a metadata file
from the package install directory and saving it as a JSON result. The last,
``abigail``, extracts Application Binary Interface (ABI) features from the
package's objects.
-------------------
Analyzing a Package
-------------------
The analyze command, akin to install, will accept a package spec to perform
an analysis for. The package must be installed. Let's walk through an example
with zlib. We first ask to analyze it. However, since we have more than one
install, we are asked to disambiguate:
.. code-block:: console
$ spack analyze run zlib
==> Error: zlib matches multiple packages.
Matching packages:
fz2bs56 zlib@1.2.11%gcc@7.5.0 arch=linux-ubuntu18.04-skylake
sl7m27m zlib@1.2.11%gcc@9.3.0 arch=linux-ubuntu20.04-skylake
Use a more specific spec.
We can then use the hash prefix to specify the exact install that we want to analyze:
.. code-block:: console
$ spack analyze run zlib/fz2bs56
If you don't provide any specific analyzer names, by default all analyzers
(shown in the ``list-analyzers`` subcommand list) will be run. If an analyzer does not
have any result, it will be skipped. For example, here is a result running for
zlib:
.. code-block:: console
$ ls ~/.spack/analyzers/linux-ubuntu20.04-skylake/gcc-9.3.0/zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2/
spack-analyzer-environment-variables.json
spack-analyzer-install-files.json
spack-analyzer-libabigail-libz.so.1.2.11.xml
If you want to run a specific analyzer, ask for it with ``--analyzer``. Here we run
the ``abigail`` analyzer on libabigail (already installed), using libabigail itself:
.. code-block:: console
$ spack analyze run --analyzer abigail libabigail
.. _analyze_monitoring:
----------------------
Monitoring An Analysis
----------------------
For any kind of analysis, you can use a
`spack monitor <https://github.com/spack/spack-monitor>`_ ("Spackmon")
server to upload the same run metadata to. You can
follow the instructions in the `spack monitor documentation <https://spack-monitor.readthedocs.org>`_
to first create a server along with a username and token for yourself.
You can then use this guide to interact with the server.
You should first export your spack monitor token and username to the environment:
.. code-block:: console
$ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
$ export SPACKMON_USER=spacky
By default, the host for your server is expected to be at ``http://127.0.0.1``
with a prefix of ``ms1``, and if this is the case, you can simply add the
``--monitor`` flag to the install command:
.. code-block:: console
$ spack analyze run --monitor wget
If you need to customize the host or the prefix, you can do that as well:
.. code-block:: console
$ spack analyze run --monitor --monitor-prefix monitor --monitor-host https://monitor-service.io wget
If your server doesn't have authentication, you can skip it:
.. code-block:: console
$ spack analyze run --monitor --monitor-disable-auth wget
Regardless of your choice, when you run analyze on an installed package (whether
it was installed with ``--monitor`` or not), you'll see the results generating as they did
before, and a message that the monitor server was pinged:
.. code-block:: console
$ spack analyze run --monitor wget
...
==> Sending result for wget bin/wget to monitor.

View file

@@ -107,8 +107,18 @@ with a high level view of Spack's directory structure:
        llnl/                  <- some general-use libraries
        spack/                 <- spack module; contains Python code
+         analyzers/           <- modules to run analysis on installed packages
+         build_systems/       <- modules for different build systems
          cmd/                 <- each file in here is a spack subcommand
          compilers/           <- compiler description files
+         container/           <- module for spack containerize
+         hooks/               <- hook modules to run at different points
+         modules/             <- modules for lmod, tcl, etc.
+         operating_systems/   <- operating system modules
+         platforms/           <- different spack platforms
+         reporters/           <- reporters like cdash, junit
+         schema/              <- schemas to validate data structures
+         solver/              <- the spack solver
          test/                <- unit test modules
          util/                <- common code
@@ -251,6 +261,22 @@ Unit tests
   This is a fake package hierarchy used to mock up packages for
   Spack's test suite.

+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Research and Monitoring Modules
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:mod:`spack.monitor`
+  Contains :class:`SpackMonitor <spack.monitor.SpackMonitor>`. This is accessed
+  from the ``spack install`` and ``spack analyze`` commands to send build
+  and package metadata up to a `Spack Monitor <https://github.com/spack/spack-monitor>`_ server.
+
+:mod:`spack.analyzers`
+  A module folder with a :class:`AnalyzerBase <spack.analyzers.analyzer_base.AnalyzerBase>`
+  that provides base functions to run, save, and (optionally) upload analysis
+  results to a `Spack Monitor <https://github.com/spack/spack-monitor>`_ server.
+
 ^^^^^^^^^^^^^
 Other Modules
 ^^^^^^^^^^^^^
.. _writing-analyzers:
-----------------
Writing analyzers
-----------------
To write an analyzer, you should add a new python file to the
analyzers module directory at ``lib/spack/spack/analyzers``.
Your analyzer should be a subclass of the :class:`AnalyzerBase <spack.analyzers.analyzer_base.AnalyzerBase>`. For example, if you want
to add an analyzer class ``Myanalyzer`` you would write it to
``spack/analyzers/myanalyzer.py``, importing and
using the base as follows:
.. code-block:: python
from .analyzer_base import AnalyzerBase
class Myanalyzer(AnalyzerBase):
Note that the class name matches the module file name, lowercased
except for the leading capital letter. You can look at other analyzers in
that directory for examples. The guide here covers the basic functions needed.
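Putting these pieces together, a minimal, hypothetical analyzer module might look
like the sketch below; the required attributes and the shape of ``run()`` are
explained in the sections that follow:
.. code-block:: python

    # lib/spack/spack/analyzers/myanalyzer.py (hypothetical example)
    from .analyzer_base import AnalyzerBase


    class Myanalyzer(AnalyzerBase):

        name = "myanalyzer"
        outfile = "spack-analyzer-myanalyzer.json"
        description = "an example analyzer that saves one metric"

        def run(self):
            # Results are keyed by the analyzer name
            result = {"name": "example-metric", "analyzer_name": self.name,
                      "install_file": "bin/example", "value": "1"}
            return {self.name: [result]}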
^^^^^^^^^^^^^^^^^^^^^^^^^
Analyzer Output Directory
^^^^^^^^^^^^^^^^^^^^^^^^^
By default, when you run ``spack analyze run``, an analyzer output directory will
be created in your spack user directory in your ``$HOME``. We write output here
because the install directory might not always be writable.
.. code-block:: console
~/.spack/
analyzers
Result files will be written here, organized in subfolders in the same structure
as the package, with each analyzer owning its own subfolder. For example:
.. code-block:: console
$ tree ~/.spack/analyzers/
/home/spackuser/.spack/analyzers/
└── linux-ubuntu20.04-skylake
└── gcc-9.3.0
└── zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2
├── environment_variables
│   └── spack-analyzer-environment-variables.json
├── install_files
│   └── spack-analyzer-install-files.json
└── libabigail
└── lib
└── spack-analyzer-libabigail-libz.so.1.2.11.xml
Notice that for the libabigail analyzer, since results are generated per object,
we honor the object's folder in case there are equivalently named files in
different folders. The result files are typically written as json so they can be easily read and uploaded in a future interaction with a monitor.
^^^^^^^^^^^^^^^^^
Analyzer Metadata
^^^^^^^^^^^^^^^^^
Your analyzer is required to have the class attributes ``name``, ``outfile``,
and ``description``. These are printed to the user when they use the subcommand
``spack analyze list-analyzers``. Here is an example.
As we mentioned above, note that this analyzer would live in a module named
``libabigail.py`` in the analyzers folder so that the class can be discovered.
.. code-block:: python
class Libabigail(AnalyzerBase):
name = "libabigail"
outfile = "spack-analyzer-libabigail.json"
description = "Application Binary Interface (ABI) features for objects"
This means that the name and output file should be unique for your analyzer.
Note that "all" cannot be the name of an analyzer, as this key is used to indicate
that the user wants to run all analyzers.
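Behind the scenes, the analyzers module builds its registry keyed by these
``name`` attributes; roughly, mirroring ``lib/spack/spack/analyzers/__init__.py``:
.. code-block:: python

    # The base analyzer has no name attribute, so it is skipped
    analyzer_types = {}
    for a in analyzers:
        if not hasattr(a, "name"):
            continue
        analyzer_types[a.name] = a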
.. _analyzer_run_function:
^^^^^^^^^^^^^^^^^^^^^^^^
An analyzer run Function
^^^^^^^^^^^^^^^^^^^^^^^^
The core of an analyzer is its ``run()`` function, which should accept no
arguments. You can assume your analyzer has the package spec of interest at ``self.spec``,
and it's up to the run function to generate whatever analysis data you need
and then return an object keyed by the analyzer name. The result data
should be a list of objects, each with a ``name``, ``analyzer_name``, ``install_file``,
and one of ``value`` or ``binary_value``. The install file should be a relative
path, not an absolute path. For example, let's say we extract a metric called
``metric`` for ``bin/wget`` using our analyzer ``thebest-analyzer``.
We might have data that looks like this:
.. code-block:: python
result = {"name": "metric", "analyzer_name": "thebest-analyzer", "value": "1", "install_file": "bin/wget"}
We'd then return it as follows - note that the key is the analyzer name at ``self.name``.
.. code-block:: python
return {self.name: result}
This will save the complete result to the analyzer metadata folder, as described
previously. If you want support for adding a different kind of metadata (e.g.,
not associated with an install file) then the monitor server would need to be updated
to support this first.
^^^^^^^^^^^^^^^^^^^^^^^^^
An analyzer init Function
^^^^^^^^^^^^^^^^^^^^^^^^^
If you don't need any extra dependencies or checks, you can skip defining an analyzer
init function, as the base class will handle it. Typically, it will accept
a spec, and an optional output directory (if the user does not want the default
metadata folder for analyzer results). The analyzer init function should call
its parent init, and then do any extra checks or validation that are required for it
to work. For example:
.. code-block:: python
def __init__(self, spec, dirname=None):
super(Myanalyzer, self).__init__(spec, dirname)
# install extra dependencies, do extra preparation and checks here
At the end of the init, you will have available to you:
- **self.spec**: the spec object
- **self.dirname**: an optional directory name the user has provided at init to save to
- **self.output_dir**: the analyzer metadata directory, where we save by default
- **self.meta_dir**: the path to the package metadata directory (``.spack``) if you need it
You can then proceed to write your analyzer.
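For instance, a hypothetical ``run()`` that loads a metadata file relative to
these attributes, much as the ``install_files`` analyzer does, could look like:
.. code-block:: python

    import os
    import spack.monitor

    def run(self):
        # self.meta_dir points at the package metadata directory (.spack)
        manifest = os.path.join(self.meta_dir, "install_manifest.json")
        return {self.name: spack.monitor.read_json(manifest)}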
^^^^^^^^^^^^^^^^^^^^^^^
Saving Analyzer Results
^^^^^^^^^^^^^^^^^^^^^^^
The analyzer will have ``save_result`` called with the generated result object
to save it to the filesystem and, if the user has added the ``--monitor`` flag,
to upload it to a monitor server. If your result follows an accepted result
format and you don't need to parse it further, you don't need to add this
function to your class. However, if your result data is large or otherwise
needs additional parsing, you can define it. If you define the function, it
is useful to know about the ``output_dir`` property, which you can join
with your output file relative path of choice:
.. code-block:: python
outfile = os.path.join(self.output_dir, "my-output-file.txt")
The directory will be provided by the ``output_dir`` property but it won't exist,
so you should create it:
.. code-block:: python
# Create the output directory
if not os.path.exists(self._output_dir):
os.makedirs(self._output_dir)
If you are generating results that match specific files in the package
install directory, you should try to maintain those paths in the case that
there are equivalently named files in different directories that would
overwrite one another. As an example of an analyzer with a custom save,
the Libabigail analyzer saves ``*.xml`` files to the analyzer metadata
folder in ``run()``, since the xml (text) results would
usually be too big to pass in one request. For this reason, the files
are saved during ``run()`` and the filenames added to the result object,
and then when the result object is passed back into ``save_result()``,
we skip saving to the filesystem, and instead read each file and send
it (separately) to the monitor:
.. code-block:: python
def save_result(self, result, monitor=None, overwrite=False):
"""ABI results are saved to individual files, so each one needs to be
read and uploaded. Result here should be the lookup generated in run(),
the key is the analyzer name, and each value is the result file.
We currently upload the entire xml as text because libabigail can't
easily read gzipped xml, but this will be updated when it can.
"""
if not monitor:
return
name = self.spec.package.name
for obj, filename in result.get(self.name, {}).items():
# Don't include the prefix
rel_path = obj.replace(self.spec.prefix + os.path.sep, "")
# We've already saved the results to file during run
content = spack.monitor.read_file(filename)
# A result needs an analyzer, value or binary_value, and name
data = {"value": content, "install_file": rel_path, "name": "abidw-xml"}
tty.info("Sending result for %s %s to monitor." % (name, rel_path))
monitor.send_analyze_metadata(self.spec.package, {"libabigail": [data]})
Notice that this function, if you define it, requires a result object (generated by
``run()``), a monitor (if you want to send), and a boolean ``overwrite`` used
to check if a result exists first, and not write to it if the result exists and
overwrite is False. Also notice that since we already saved these files to the
analyzer metadata folder, we return early if a monitor isn't defined, because
this function serves to send results to the monitor. If you haven't saved
anything to the analyzer metadata folder yet, you might want to do that here.
You should also use ``tty.info`` to give the user a message of
"Writing result to $DIRNAME."
.. _writing-commands:

----------------

@@ -345,6 +600,183 @@ Whenever you add/remove/rename a command or flags for an existing command,
 make sure to update Spack's `Bash tab completion script
 <https://github.com/adamjstewart/spack/blob/develop/share/spack/spack-completion.bash>`_.
-------------
Writing Hooks
-------------
A hook is a callback that makes it easy to run functions for
different events. We do this by defining hook types, and then
inserting calls to them at different places in the spack code base. Whenever a hook
type is triggered by a function call, we find all the hooks of that type,
and run them.
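Conceptually, a hook type is just a callable that looks up functions by name and
runs them; a rough sketch of the idea (the real ``HookRunner`` lives in
``lib/spack/spack/hooks/__init__.py``, and ``find_hooks`` here is a hypothetical
stand-in for its module scan):
.. code-block:: python

    class HookRunner(object):

        def __init__(self, hook_name):
            self.hook_name = hook_name

        def __call__(self, *args, **kwargs):
            # Run every function named after this hook type across hook modules
            for hook in find_hooks(self.hook_name):  # hypothetical helper
                hook(*args, **kwargs)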
Spack defines hooks by way of a module at ``lib/spack/spack/hooks``, where we can define
types of hooks in the ``__init__.py``, and then python files in that folder
can define hook functions. The files are automatically parsed, so if you write
a new file for some integration (e.g., ``lib/spack/spack/hooks/myintegration.py``)
you can then write hook functions in that file that will be automatically detected,
and run whenever your hook is called. This section will cover the basic kinds
of hooks, and how to write them.
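For example, a minimal, hypothetical integration file could define nothing but a
single hook function:
.. code-block:: python

    # lib/spack/spack/hooks/myintegration.py (hypothetical example)
    def post_install(spec):
        """Runs automatically after each package install."""
        print("just installed %s" % spec.name)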
^^^^^^^^^^^^^^
Types of Hooks
^^^^^^^^^^^^^^
The following hooks are currently implemented to make it easy for you,
the developer, to add hooks at different stages of a spack install or similar.
If a hook that you would like is missing, you can propose adding a new one.
"""""""""""""""""""""
``pre_install(spec)``
"""""""""""""""""""""
A ``pre_install`` hook is run within an install subprocess, directly before
the install starts. It expects a single argument of a spec, and is run in
a multiprocessing subprocess. Note that if you see ``pre_install`` functions
associated with packages, these are not hooks
as we have defined them here, but rather callback functions associated with
a package install.
""""""""""""""""""""""
``post_install(spec)``
""""""""""""""""""""""
A ``post_install`` hook is run within an install subprocess, directly after
the install finishes, but before the build stage is removed. If you
write one of these hooks, you should expect it to accept a spec as the only
argument. This is run in a multiprocessing subprocess. This ``post_install`` is
also seen in packages, but in this context not related to the hooks described
here.
""""""""""""""""""""""""""
``on_install_start(spec)``
""""""""""""""""""""""""""
This hook is run at the beginning of ``lib/spack/spack/installer.py``,
in the install function of a ``PackageInstaller``,
and importantly is not part of a build process, but runs before it. This is when
we have just newly grabbed the task and are preparing to install. If you
write a hook of this type, it should accept the spec as its only argument.
.. code-block:: python
def on_install_start(spec):
"""On start of an install, we want to...
"""
print('on_install_start')
""""""""""""""""""""""""""""
``on_install_success(spec)``
""""""""""""""""""""""""""""
This hook is run on a successful install, akin to ``post_install``. The main
difference is that this hook is run outside of the context of the stage
directory, meaning after the build stage has been removed and the user is
alerted that the install was successful. If you need to write a hook that is
run on success of a particular phase, you should use ``on_phase_success``.
""""""""""""""""""""""""""""
``on_install_failure(spec)``
""""""""""""""""""""""""""""
This hook is run given an install failure that happens outside of the build
subprocess, but somewhere in ``installer.py`` when something else goes wrong.
If you need to write a hook that is relevant to a failure within a build
process, you would want to instead use ``on_phase_failure``.
"""""""""""""""""""""""""""""""""""""""""""""""
``on_phase_success(pkg, phase_name, log_file)``
"""""""""""""""""""""""""""""""""""""""""""""""
This hook is run within the install subprocess, and specifically when a phase
successfully finishes. Since we are interested in the package, the name of
the phase, and any output from it, we require:
- **pkg**: the package variable, which also has the attached spec at ``pkg.spec``
- **phase_name**: the name of the phase that was successful (e.g., configure)
- **log_file**: the path to the file with output, in case you need to inspect or otherwise interact with it.
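A hook of this type is therefore just a function with that signature; a
hypothetical sketch:
.. code-block:: python

    def on_phase_success(pkg, phase_name, log_file):
        """Hypothetical example: report each successful phase."""
        print("%s finished phase %s, log at %s"
              % (pkg.spec.name, phase_name, log_file))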
"""""""""""""""""""""""""""""""""""""""""""""
``on_phase_error(pkg, phase_name, log_file)``
"""""""""""""""""""""""""""""""""""""""""""""
In the case of an error during a phase, we might want to trigger some event
with a hook, and this is the purpose of this particular hook. Akin to
``on_phase_success`` we require the same variables - the package that failed,
the name of the phase, and the log file where we might find errors.
"""""""""""""""""""""""""""""""""
``on_analyzer_save(pkg, result)``
"""""""""""""""""""""""""""""""""
After an analyzer has saved some result for a package, this hook is called,
and it provides the package that we just ran the analysis for, along with
the loaded result. Typically, a result is structured with the name
of the analyzer as the key, and the value as the result object that is defined
in detail in :ref:`analyzer_run_function`.
.. code-block:: python
def on_analyzer_save(pkg, result):
"""given a package and a result...
"""
print('Do something extra with a package analysis result here')
^^^^^^^^^^^^^^^^^^^^^^
Adding a New Hook Type
^^^^^^^^^^^^^^^^^^^^^^
Adding a new hook type is very simple! In ``lib/spack/spack/hooks/__init__.py``
you can simply create a new ``HookRunner`` that is named to match your new hook.
For example, let's say you want to add a new hook called ``post_log_write``
to trigger after anything is written to a logger. You would add it as follows:
.. code-block:: python
# pre/post install and run by the install subprocess
pre_install = HookRunner('pre_install')
post_install = HookRunner('post_install')
# hooks related to logging
post_log_write = HookRunner('post_log_write') # <- here is my new hook!
You then need to decide what arguments your hook will expect. Since this is
related to logging, let's say that you want a message and level. That means
that when you add a python file to the ``lib/spack/spack/hooks``
folder, it can define one or more callbacks intended to be triggered by this hook. You might
use the new hook as follows:
.. code-block:: python
def post_log_write(message, level):
"""Do something custom with the messsage and level every time we write
to the log
"""
print('running post_log_write!')
To use the hook, we would call it as follows somewhere in the logic to do logging.
In this example, we use it outside of a logger that is already defined:
.. code-block:: python
import spack.hooks
# We do something here to generate a logger and message
spack.hooks.post_log_write(message, logger.level)
This is not to say that this would be the best way to implement an integration
with the logger (you'd probably want to write a custom logger, or you could
have the hook defined within the logger) but serves as an example of writing a hook.
----------
Unit tests
----------

View file

@@ -67,6 +67,7 @@ or refer to the full manual below.
    build_settings
    environments
    containers
+   monitoring
    mirrors
    module_file_support
    repositories

@@ -77,6 +78,12 @@ or refer to the full manual below.
    extensions
    pipelines

+.. toctree::
+   :maxdepth: 2
+   :caption: Research
+
+   analyze
+
 .. toctree::
    :maxdepth: 2
    :caption: Contributing

View file

@@ -0,0 +1,94 @@
.. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _monitoring:
==========
Monitoring
==========
You can use a `spack monitor <https://github.com/spack/spack-monitor>`_ "Spackmon"
server to store a database of your packages, builds, and associated metadata
for provenance, research, or some other kind of development. You should
follow the instructions in the `spack monitor documentation <https://spack-monitor.readthedocs.org>`_
to first create a server along with a username and token for yourself.
You can then use this guide to interact with the server.
-------------------
Analysis Monitoring
-------------------
To read about how to monitor an analysis (meaning you want to send analysis results
to a server) see :ref:`analyze_monitoring`.
---------------------
Monitoring An Install
---------------------
Since an install is typically when you build packages, we logically want
to tell spack to monitor during this step. Let's start with an example
where we want to monitor the install of hdf5. Unless you have disabled authentication
for the server, you first want to export your spack monitor token and username to the environment:
.. code-block:: console
$ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
$ export SPACKMON_USER=spacky
By default, the host for your server is expected to be at ``http://127.0.0.1``
with a prefix of ``ms1``, and if this is the case, you can simply add the
``--monitor`` flag to the install command:
.. code-block:: console
$ spack install --monitor hdf5
If you need to customize the host or the prefix, you can do that as well:
.. code-block:: console
$ spack install --monitor --monitor-prefix monitor --monitor-host https://monitor-service.io hdf5
As a precaution, the spack client exits early if you have not provided
authentication credentials. For example, if you run the command above without
exporting your username or token, you'll see:
.. code-block:: console
==> Error: You are required to export SPACKMON_TOKEN and SPACKMON_USER
This extra check is to ensure that we don't start any builds
and then discover that you forgot to export your token. However, if
your monitoring server has authentication disabled, you can tell the client
to skip this step:
.. code-block:: console
$ spack install --monitor --monitor-disable-auth hdf5
If the service is not running, you'll cleanly exit early - the install will
not continue if you've asked it to monitor and there is no service.
For example, here is what you'll see if the monitoring service is not running:
.. code-block:: console
[Errno 111] Connection refused
If you want builds to continue (and not stop for failed monitoring requests),
you can set the ``--monitor-keep-going`` flag.
.. code-block:: console
$ spack install --monitor --monitor-keep-going hdf5
This could mean that if a request fails, you only have partial or no data
added to your monitoring database. This setting will not be applied to the
first request to check if the server is running, but to subsequent requests.
If you don't have a monitor server running and you want to build, simply
don't provide the ``--monitor`` flag!
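Under the hood, both ``spack install --monitor`` and ``spack analyze run --monitor``
construct the client with ``spack.monitor.get_client``; a sketch of the equivalent
programmatic call, using the default host and prefix described above:
.. code-block:: python

    import spack.monitor

    # Reads SPACKMON_TOKEN and SPACKMON_USER unless disable_auth is True
    monitor = spack.monitor.get_client(
        host="http://127.0.0.1",
        prefix="ms1",
        disable_auth=False,
    )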

View file

@@ -0,0 +1,43 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""This package contains code for creating analyzers to extract Application
Binary Interface (ABI) information, along with simple analyses that just load
existing metadata.
"""
from __future__ import absolute_import
import spack.util.classes
import spack.paths
import llnl.util.tty as tty
mod_path = spack.paths.analyzers_path
analyzers = spack.util.classes.list_classes("spack.analyzers", mod_path)
# The base analyzer does not have a name, and cannot do dict comprehension
analyzer_types = {}
for a in analyzers:
if not hasattr(a, "name"):
continue
analyzer_types[a.name] = a
def list_all():
"""A helper function to list all analyzers and their descriptions
"""
for name, analyzer in analyzer_types.items():
print("%-25s: %-35s" % (name, analyzer.description))
def get_analyzer(name):
"""Courtesy function to retrieve an analyzer, and exit on error if it
does not exist.
"""
if name in analyzer_types:
return analyzer_types[name]
tty.die("Analyzer %s does not exist" % name)

View file

@@ -0,0 +1,115 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""An analyzer base provides basic functions to run the analysis, save results,
and (optionally) interact with a Spack Monitor
"""
import spack.monitor
import spack.hooks
import llnl.util.tty as tty
import spack.util.path
import spack.config
import os
def get_analyzer_dir(spec, analyzer_dir=None):
"""
Given a spec, return the directory to save analyzer results.
We create the directory if it does not exist. We also check that the
spec has an associated package. An analyzer cannot be run if the spec isn't
associated with a package. If the user provides a custom analyzer_dir,
we use it over checking the config and the default at ~/.spack/analyzers
"""
# An analyzer cannot be run if the spec isn't associated with a package
if not hasattr(spec, "package") or not spec.package:
tty.die("A spec can only be analyzed with an associated package.")
# The top level directory is in the user home, or a custom location
if not analyzer_dir:
analyzer_dir = spack.util.path.canonicalize_path(
spack.config.get('config:analyzers_dir', '~/.spack/analyzers'))
# We follow the same convention as the spec install (this could be better)
package_prefix = os.sep.join(spec.package.prefix.split('/')[-3:])
meta_dir = os.path.join(analyzer_dir, package_prefix)
return meta_dir
class AnalyzerBase(object):
def __init__(self, spec, dirname=None):
"""
Verify that the analyzer has correct metadata.
An Analyzer is intended to run on one spec install, so the spec
with its associated package is required on init. The child analyzer
class should define an init function that super's the init here, and
also check that the analyzer has all dependencies that it
needs. If an analyzer subclass does not have dependencies, it does not
need to define an init. An Analyzer should not be allowed to proceed
if one or more dependencies are missing. The dirname, if defined,
is an optional directory name to save to (instead of the default meta
spack directory).
"""
self.spec = spec
self.dirname = dirname
self.meta_dir = os.path.dirname(spec.package.install_log_path)
for required in ["name", "outfile", "description"]:
if not hasattr(self, required):
tty.die("Please add a %s attribute on the analyzer." % required)
def run(self):
"""
Given a spec with an installed package, run the analyzer on it.
"""
raise NotImplementedError
@property
def output_dir(self):
"""
The full path to the output directory.
This includes the nested analyzer directory structure. This function
does not create anything.
"""
if not hasattr(self, "_output_dir"):
output_dir = get_analyzer_dir(self.spec, self.dirname)
self._output_dir = os.path.join(output_dir, self.name)
return self._output_dir
def save_result(self, result, overwrite=False):
"""
Save a result to the associated spack monitor, if defined.
This function is on the level of the analyzer because it might be
the case that the result is large (appropriate for a single request)
or that the data is organized differently (e.g., more than one
request per result). If an analyzer subclass needs to over-write
this function with a custom save, that is appropriate to do (see abi).
"""
# We maintain the structure in json with the analyzer as key so
# that in the future, we could upload to a monitor server
if result[self.name]:
outfile = os.path.join(self.output_dir, self.outfile)
# Only try to create the results directory if we have a result
if not os.path.exists(self._output_dir):
os.makedirs(self._output_dir)
# Don't overwrite an existing result if overwrite is False
if os.path.exists(outfile) and not overwrite:
tty.info("%s exists and overwrite is False, skipping." % outfile)
else:
tty.info("Writing result to %s" % outfile)
spack.monitor.write_json(result[self.name], outfile)
# This hook runs after a save result
spack.hooks.on_analyzer_save(self.spec.package, result)

View file

@@ -0,0 +1,32 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""A configargs analyzer is a class of analyzer that typically just uploads
already existing metadata about config args from a package spec install
directory."""
import spack.monitor
from .analyzer_base import AnalyzerBase
import os
class ConfigArgs(AnalyzerBase):
name = "config_args"
outfile = "spack-analyzer-config-args.json"
description = "config args loaded from spack-configure-args.txt"
def run(self):
"""
Load the spack-configure-args.txt file and save it as json.
The run function will find the spack-configure-args.txt file in the
package install directory, and read it into a json structure that has
the name of the analyzer as the key.
"""
config_file = os.path.join(self.meta_dir, "spack-configure-args.txt")
return {self.name: spack.monitor.read_file(config_file)}

View file

@@ -0,0 +1,51 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""An environment analyzer will read and parse the environment variables
file in the installed package directory, generating a json file that has
an index of key, value pairs for environment variables."""
from .analyzer_base import AnalyzerBase
from spack.util.environment import EnvironmentModifications
import os
class EnvironmentVariables(AnalyzerBase):
name = "environment_variables"
outfile = "spack-analyzer-environment-variables.json"
description = "environment variables parsed from spack-build-env.txt"
def run(self):
"""
Load, parse, and save spack-build-env.txt to analyzers.
Read in the spack-build-env.txt file from the package install
directory and parse the environment variables into key value pairs.
The result should have the key for the analyzer, the name.
"""
env_file = os.path.join(self.meta_dir, "spack-build-env.txt")
return {self.name: self._read_environment_file(env_file)}
def _read_environment_file(self, filename):
"""
Read and parse the environment file.
Given an environment file, we want to read it, split by semicolons
and new lines, and then parse down to the subset of SPACK_* variables.
We assume that all spack prefix variables are not secrets, and unlike
the install_manifest.json, we don't (at least to start) parse the values
to remove path prefixes specific to user systems.
"""
if not os.path.exists(filename):
return
mods = EnvironmentModifications.from_sourcing_file(filename)
env = {}
mods.apply_modifications(env)
return env

View file

@@ -0,0 +1,30 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""The install files json file (install_manifest.json) already exists in
the package install folder, so this analyzer simply moves it to the user
analyzer folder for further processing."""
import spack.monitor
from .analyzer_base import AnalyzerBase
import os
class InstallFiles(AnalyzerBase):
name = "install_files"
outfile = "spack-analyzer-install-files.json"
description = "install file listing read from install_manifest.json"
def run(self):
"""
Load in the install_manifest.json and save to analyzers.
We write it out to the analyzers folder, with key as the analyzer name.
"""
manifest_file = os.path.join(self.meta_dir, "install_manifest.json")
return {self.name: spack.monitor.read_json(manifest_file)}

View file

@@ -0,0 +1,116 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack
import spack.error
import spack.bootstrap
import spack.hooks
import spack.monitor
import spack.binary_distribution
import spack.package
import spack.repo
import llnl.util.tty as tty
from .analyzer_base import AnalyzerBase
import os
class Libabigail(AnalyzerBase):
name = "libabigail"
outfile = "spack-analyzer-libabigail.json"
description = "Application Binary Interface (ABI) features for objects"
def __init__(self, spec, dirname=None):
"""
init for an analyzer ensures we have all needed dependencies.
For the libabigail analyzer, this means Libabigail.
Since the output for libabigail is one file per object, we communicate
with the monitor multiple times.
"""
super(Libabigail, self).__init__(spec, dirname)
# This doesn't seem to work to import on the module level
tty.debug("Preparing to use Libabigail, will install if missing.")
with spack.bootstrap.ensure_bootstrap_configuration():
# libabigail won't install lib/bin/share without docs
spec = spack.spec.Spec("libabigail+docs")
spec.concretize()
self.abidw = spack.bootstrap.get_executable(
"abidw", spec=spec, install=True)
def run(self):
"""
Run libabigail, and save results to filename.
This run function differs in that we write as we generate and then
return a dict with the analyzer name as the key, and the value of a
dict of results, where the key is the object name, and the value is
the output file written to.
"""
manifest = spack.binary_distribution.get_buildfile_manifest(self.spec)
# This result will store a path to each file
result = {}
# Generate an output file for each binary or object
for obj in manifest.get("binary_to_relocate_fullpath", []):
# We want to preserve the path in the install directory in case
# a library has an equivalently named lib or executable, for example
outdir = os.path.dirname(obj.replace(self.spec.package.prefix,
'').strip(os.path.sep))
outfile = "spack-analyzer-libabigail-%s.xml" % os.path.basename(obj)
outfile = os.path.join(self.output_dir, outdir, outfile)
outdir = os.path.dirname(outfile)
# Create the output directory
if not os.path.exists(outdir):
os.makedirs(outdir)
# Sometimes libabigail segfaults and dumps
try:
self.abidw(obj, "--out-file", outfile)
result[obj] = outfile
tty.info("Writing result to %s" % outfile)
except spack.error.SpackError:
tty.warn("Issue running abidw for %s" % obj)
return {self.name: result}
def save_result(self, result, overwrite=False):
"""
Read saved ABI results and upload to monitor server.
ABI results are saved to individual files, so each one needs to be
read and uploaded. Result here should be the lookup generated in run(),
the key is the analyzer name, and each value is the result file.
We currently upload the entire xml as text because libabigail can't
easily read gzipped xml, but this will be updated when it can.
"""
if not spack.monitor.cli:
return
name = self.spec.package.name
for obj, filename in result.get(self.name, {}).items():
# Don't include the prefix
rel_path = obj.replace(self.spec.prefix + os.path.sep, "")
# We've already saved the results to file during run
content = spack.monitor.read_file(filename)
# A result needs an analyzer, value or binary_value, and name
data = {"value": content, "install_file": rel_path, "name": "abidw-xml"}
tty.info("Sending result for %s %s to monitor." % (name, rel_path))
spack.hooks.on_analyzer_save(self.spec.package, {"libabigail": [data]})

View file

@@ -58,7 +58,6 @@
 """
 import contextlib
 import functools
-import inspect
 import warnings

 import archspec.cpu

@@ -74,7 +73,7 @@
 import spack.error as serr
 import spack.util.executable
 import spack.version
-from spack.util.naming import mod_to_class
+import spack.util.classes
 from spack.util.spack_yaml import syaml_dict

@@ -502,23 +501,8 @@ def arch_for_spec(arch_spec):
 @lang.memoized
 def _all_platforms():
-    classes = []
     mod_path = spack.paths.platform_path
-    parent_module = "spack.platforms"
-
-    for name in lang.list_modules(mod_path):
-        mod_name = '%s.%s' % (parent_module, name)
-        class_name = mod_to_class(name)
-        mod = __import__(mod_name, fromlist=[class_name])
-        if not hasattr(mod, class_name):
-            tty.die('No class %s defined in %s' % (class_name, mod_name))
-        cls = getattr(mod, class_name)
-        if not inspect.isclass(cls):
-            tty.die('%s.%s is not a class' % (mod_name, class_name))
-        classes.append(cls)
-
-    return classes
+    return spack.util.classes.list_classes("spack.platforms", mod_path)

 @lang.memoized

View file

@@ -550,40 +550,38 @@ def read_buildinfo_file(prefix):
     return buildinfo

-def write_buildinfo_file(spec, workdir, rel=False):
+def get_buildfile_manifest(spec):
     """
-    Create a cache file containing information
-    required for the relocation
+    Return a data structure with information about a build, including
+    text_to_relocate, binary_to_relocate, binary_to_relocate_fullpath
+    link_to_relocate, and other, which means it doesn't fit any of previous
+    checks (and should not be relocated). We blacklist docs (man) and
+    metadata (.spack). This can be used to find a particular kind of file
+    in spack, or to generate the build metadata.
     """
-    prefix = spec.prefix
-    text_to_relocate = []
-    binary_to_relocate = []
-    link_to_relocate = []
+    data = {"text_to_relocate": [], "binary_to_relocate": [],
+            "link_to_relocate": [], "other": [],
+            "binary_to_relocate_fullpath": []}
     blacklist = (".spack", "man")
-    prefix_to_hash = dict()
-    prefix_to_hash[str(spec.package.prefix)] = spec.dag_hash()
-    deps = spack.build_environment.get_rpath_deps(spec.package)
-    for d in deps:
-        prefix_to_hash[str(d.prefix)] = d.dag_hash()

     # Do this at during tarball creation to save time when tarball unpacked.
     # Used by make_package_relative to determine binaries to change.
-    for root, dirs, files in os.walk(prefix, topdown=True):
+    for root, dirs, files in os.walk(spec.prefix, topdown=True):
         dirs[:] = [d for d in dirs if d not in blacklist]
         for filename in files:
             path_name = os.path.join(root, filename)
             m_type, m_subtype = relocate.mime_type(path_name)
+            rel_path_name = os.path.relpath(path_name, spec.prefix)
+            added = False

             if os.path.islink(path_name):
                 link = os.readlink(path_name)
                 if os.path.isabs(link):
                     # Relocate absolute links into the spack tree
                     if link.startswith(spack.store.layout.root):
-                        rel_path_name = os.path.relpath(path_name, prefix)
-                        link_to_relocate.append(rel_path_name)
-                    else:
-                        msg = 'Absolute link %s to %s ' % (path_name, link)
-                        msg += 'outside of prefix %s ' % prefix
-                        msg += 'should not be relocated.'
-                        tty.warn(msg)
+                        data['link_to_relocate'].append(rel_path_name)
+                        added = True

             if relocate.needs_binary_relocation(m_type, m_subtype):
                 if ((m_subtype in ('x-executable', 'x-sharedlib')
@@ -591,11 +589,31 @@ def write_buildinfo_file(spec, workdir, rel=False):
                     (m_subtype in ('x-mach-binary')
                         and sys.platform == 'darwin') or
                     (not filename.endswith('.o'))):
-                    rel_path_name = os.path.relpath(path_name, prefix)
-                    binary_to_relocate.append(rel_path_name)
+                    data['binary_to_relocate'].append(rel_path_name)
+                    data['binary_to_relocate_fullpath'].append(path_name)
+                    added = True

             if relocate.needs_text_relocation(m_type, m_subtype):
-                rel_path_name = os.path.relpath(path_name, prefix)
-                text_to_relocate.append(rel_path_name)
+                data['text_to_relocate'].append(rel_path_name)
+                added = True
+
+            if not added:
+                data['other'].append(path_name)
+    return data
+
+
+def write_buildinfo_file(spec, workdir, rel=False):
+    """
+    Create a cache file containing information
+    required for the relocation
+    """
+    manifest = get_buildfile_manifest(spec)
+
+    prefix_to_hash = dict()
+    prefix_to_hash[str(spec.package.prefix)] = spec.dag_hash()
+    deps = spack.build_environment.get_rpath_deps(spec.package)
+    for d in deps:
+        prefix_to_hash[str(d.prefix)] = d.dag_hash()

     # Create buildinfo data and write it to disk
     import spack.hooks.sbang as sbang
@@ -605,10 +623,10 @@ def write_buildinfo_file(spec, workdir, rel=False):
     buildinfo['buildpath'] = spack.store.layout.root
     buildinfo['spackprefix'] = spack.paths.prefix
     buildinfo['relative_prefix'] = os.path.relpath(
-        prefix, spack.store.layout.root)
-    buildinfo['relocate_textfiles'] = text_to_relocate
-    buildinfo['relocate_binaries'] = binary_to_relocate
-    buildinfo['relocate_links'] = link_to_relocate
+        spec.prefix, spack.store.layout.root)
+    buildinfo['relocate_textfiles'] = manifest['text_to_relocate']
+    buildinfo['relocate_binaries'] = manifest['binary_to_relocate']
+    buildinfo['relocate_links'] = manifest['link_to_relocate']
     buildinfo['prefix_to_hash'] = prefix_to_hash
     filename = buildinfo_file_name(workdir)
     with open(filename, 'w') as outfile:

View file

@@ -0,0 +1,118 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import sys
import llnl.util.tty as tty
import spack.analyzers
import spack.build_environment
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.environment as ev
import spack.fetch_strategy
import spack.monitor
import spack.paths
import spack.report
description = "analyze installed packages"
section = "extensions"
level = "short"
def setup_parser(subparser):
sp = subparser.add_subparsers(metavar='SUBCOMMAND', dest='analyze_command')
sp.add_parser('list-analyzers',
description="list available analyzers",
help="show list of analyzers that are available to run.")
# This adds the monitor group to the subparser
spack.monitor.get_monitor_group(subparser)
# Run Parser
run_parser = sp.add_parser('run', description="run an analyzer",
help="provide the name of the analyzer to run.")
run_parser.add_argument(
'--overwrite', action='store_true',
help="re-analyze even if the output file already exists.")
run_parser.add_argument(
'-p', '--path', default=None,
dest='path',
help="write output to a different directory than ~/.spack/analyzers")
run_parser.add_argument(
'-a', '--analyzers', default=None,
dest="analyzers", action="append",
help="add an analyzer (defaults to all available)")
arguments.add_common_arguments(run_parser, ['spec'])
def analyze_spec(spec, analyzers=None, outdir=None, monitor=None, overwrite=False):
"""
Do an analysis for a spec, optionally adding monitoring.
We also allow the user to specify a custom output directory.
analyze_spec(spec, args.analyzers, args.outdir, monitor)
Args:
    spec (Spec): spec object of installed package
    analyzers (list): list of analyzer (keys) to run
    outdir (str): optional custom output directory for results
    monitor (monitor.SpackMonitorClient): a monitor client
    overwrite (bool): overwrite result if already exists
"""
analyzers = analyzers or list(spack.analyzers.analyzer_types.keys())
# Load the build environment from the spec install directory, and send
# the spec to the monitor if it's not known
if monitor:
monitor.load_build_environment(spec)
monitor.new_configuration([spec])
for name in analyzers:
# Instantiate the analyzer with the spec and outdir
analyzer = spack.analyzers.get_analyzer(name)(spec, outdir)
# Run the analyzer to get a json result - results are returned as
# a dictionary with a key corresponding to the analyzer type, so
# we can just update the data
result = analyzer.run()
# Send the result. We do them separately because:
# 1. each analyzer might have differently organized output
# 2. the size of a result can be large
analyzer.save_result(result, overwrite)
def analyze(parser, args, **kwargs):
# If the user wants to list analyzers, do so and exit
if args.analyze_command == "list-analyzers":
spack.analyzers.list_all()
sys.exit(0)
# handle active environment, if any
env = ev.get_env(args, 'analyze')
# Get and disambiguate the spec (we should only have one)
specs = spack.cmd.parse_specs(args.spec)
if not specs:
tty.die("You must provide one or more specs to analyze.")
spec = spack.cmd.disambiguate_spec(specs[0], env)
# The user wants to monitor builds using github.com/spack/spack-monitor
# It is instantiated once here, and then available at spack.monitor.cli
monitor = None
if args.use_monitor:
monitor = spack.monitor.get_client(
host=args.monitor_host,
prefix=args.monitor_prefix,
disable_auth=args.monitor_disable_auth,
)
# Run the analysis
analyze_spec(spec, args.analyzers, args.path, monitor, args.overwrite)

View file

@@ -17,6 +17,7 @@
 import spack.cmd.common.arguments as arguments
 import spack.environment as ev
 import spack.fetch_strategy
+import spack.monitor
 import spack.paths
 import spack.report
 from spack.error import SpackError

@@ -106,6 +107,8 @@ def setup_parser(subparser):
         '--cache-only', action='store_true', dest='cache_only', default=False,
         help="only install package from binary mirrors")

+    monitor_group = spack.monitor.get_monitor_group(subparser)  # noqa
+
     subparser.add_argument(
         '--include-build-deps', action='store_true', dest='include_build_deps',
         default=False, help="""include build deps when installing from cache,

@@ -224,6 +227,7 @@ def install_specs(cli_args, kwargs, specs):

 def install(parser, args, **kwargs):
     if args.help_cdash:
         parser = argparse.ArgumentParser(
             formatter_class=argparse.RawDescriptionHelpFormatter,

@@ -236,6 +240,14 @@ def install(parser, args, **kwargs):
         parser.print_help()
         return

+    # The user wants to monitor builds using github.com/spack/spack-monitor
+    if args.use_monitor:
+        monitor = spack.monitor.get_client(
+            host=args.monitor_host,
+            prefix=args.monitor_prefix,
+            disable_auth=args.monitor_disable_auth,
+        )
+
     reporter = spack.report.collect_info(
         spack.package.PackageInstaller, '_install_task', args.log_format, args)
     if args.log_file:

@@ -378,4 +390,17 @@ def get_tests(specs):
         # overwrite all concrete explicit specs from this build
         kwargs['overwrite'] = [spec.dag_hash() for spec in specs]

+    # Update install_args with the monitor args, needed for build task
+    kwargs.update({
+        "monitor_disable_auth": args.monitor_disable_auth,
+        "monitor_keep_going": args.monitor_keep_going,
+        "monitor_host": args.monitor_host,
+        "use_monitor": args.use_monitor,
+        "monitor_prefix": args.monitor_prefix,
+    })
+
+    # If we are using the monitor, we send configs. and create build
+    # The full_hash is the main package id, the build_hash for others
+    if args.use_monitor and specs:
+        monitor.new_configuration(specs)
     install_specs(args, kwargs, zip(abstract_specs, specs))

View file

@@ -42,6 +42,10 @@ def setup_parser(subparser):
     subparser.add_argument(
         '-N', '--namespaces', action='store_true', default=False,
         help='show fully qualified package names')
+    subparser.add_argument(
+        '--hash-type', default="build_hash",
+        choices=['build_hash', 'full_hash', 'dag_hash'],
+        help='generate spec with a particular hash type.')
     subparser.add_argument(
         '-t', '--types', action='store_true', default=False,
         help='show dependency types')

@@ -83,11 +87,14 @@ def spec(parser, args):
         if spec.name in spack.repo.path or spec.virtual:
             spec.concretize()

+        # The user can specify the hash type to use
+        hash_type = getattr(ht, args.hash_type)
+
         if args.format == 'yaml':
             # use write because to_yaml already has a newline.
-            sys.stdout.write(spec.to_yaml(hash=ht.build_hash))
+            sys.stdout.write(spec.to_yaml(hash=hash_type))
         else:
-            print(spec.to_json(hash=ht.build_hash))
+            print(spec.to_json(hash=hash_type))
         continue

     with tree_context():
View file

@@ -17,6 +17,7 @@
 import spack.config
 import spack.hash_types as ht
 import spack.spec
+import spack.util.spack_json as sjson
 from spack.error import SpackError

@@ -247,6 +248,17 @@ def write_spec(self, spec, path):
             # full provenance by full hash so it's availabe if we want it later
             spec.to_yaml(f, hash=ht.full_hash)

+    def write_host_environment(self, spec):
+        """The host environment is a json file with os, kernel, and spack
+        versioning. We use it in the case that an analysis later needs to
+        easily access this information.
+        """
+        from spack.util.environment import get_host_environment_metadata
+        env_file = self.env_metadata_path(spec)
+        environ = get_host_environment_metadata()
+        with open(env_file, 'w') as fd:
+            sjson.dump(environ, fd)
+
     def read_spec(self, path):
         """Read the contents of a file and parse them as a spec"""
         try:

@@ -300,6 +312,9 @@ def disable_upstream_check(self):
     def metadata_path(self, spec):
         return os.path.join(spec.prefix, self.metadata_dir)

+    def env_metadata_path(self, spec):
+        return os.path.join(self.metadata_path(spec), "install_environment.json")
+
     def build_packages_path(self, spec):
         return os.path.join(self.metadata_path(spec), self.packages_dir)

View file

@ -9,8 +9,6 @@
import sys import sys
import shutil import shutil
import copy import copy
import socket
import six import six
from ordereddict_backport import OrderedDict from ordereddict_backport import OrderedDict
@ -33,13 +31,11 @@
import spack.user_environment as uenv
from spack.filesystem_view import YamlFilesystemView
import spack.util.environment
-import spack.architecture as architecture
from spack.spec import Spec
from spack.spec_list import SpecList, InvalidSpecConstraintError
from spack.variant import UnknownVariantError
import spack.util.lock as lk
from spack.util.path import substitute_path_variables
-from spack.installer import PackageInstaller
import spack.util.path

#: environment variable used to indicate the active environment
@ -447,21 +443,11 @@ def _write_yaml(data, str_or_file):
def _eval_conditional(string):
    """Evaluate conditional definitions using restricted variable scope."""
-    arch = architecture.Arch(
-        architecture.platform(), 'default_os', 'default_target')
-    arch_spec = spack.spec.Spec('arch=%s' % arch)
-    valid_variables = {
-        'target': str(arch.target),
-        'os': str(arch.os),
-        'platform': str(arch.platform),
-        'arch': arch_spec,
-        'architecture': arch_spec,
-        'arch_str': str(arch),
+    valid_variables = spack.util.environment.get_host_environment()
+    valid_variables.update({
        're': re,
        'env': os.environ,
-        'hostname': socket.gethostname()
-    }
+    })
    return eval(string, valid_variables)
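For reference, these variables back conditional definitions in an environment's `spack.yaml`; a hypothetical example (package names illustrative):

```yaml
spack:
  definitions:
    - mpis: [openmpi]
      when: platform == 'linux' and re.search('login', hostname)
  specs:
    - $mpis
```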
@ -1454,6 +1440,7 @@ def install_all(self, args=None, **install_args):
            args (Namespace): argparse namespace with command arguments
            install_args (dict): keyword install arguments
        """
+        from spack.installer import PackageInstaller
        tty.debug('Assessing installation status of environment packages')
        # If "spack install" is invoked repeatedly for a large environment
        # where all specs are already installed, the operation can take

View file

@ -16,6 +16,7 @@
      * post_install(spec)
      * pre_uninstall(spec)
      * post_uninstall(spec)
+     * on_install_failure(spec)

    This can be used to implement support for things like module
    systems (e.g. modules, lmod, etc.) or to add other custom
@ -59,8 +60,20 @@ def __call__(self, *args, **kwargs):
            hook(*args, **kwargs)


+# pre/post install and run by the install subprocess
pre_install = HookRunner('pre_install')
post_install = HookRunner('post_install')

+# These hooks are run within an install subprocess
pre_uninstall = HookRunner('pre_uninstall')
post_uninstall = HookRunner('post_uninstall')
+on_phase_success = HookRunner('on_phase_success')
+on_phase_error = HookRunner('on_phase_error')

+# These are hooks in installer.py, before starting install subprocess
+on_install_start = HookRunner('on_install_start')
+on_install_success = HookRunner('on_install_success')
+on_install_failure = HookRunner('on_install_failure')

+# Analyzer hooks
+on_analyzer_save = HookRunner('on_analyzer_save')
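For orientation, a hook is just a module in the `spack.hooks` package that defines functions named after these runners; a minimal hypothetical sketch (module name and messages illustrative):

```python
# Hypothetical custom hook module, e.g. lib/spack/spack/hooks/my_logger.py.
# The HookRunner machinery imports modules in spack.hooks and calls any
# function whose name matches the hook being run.
import llnl.util.tty as tty


def on_install_start(spec):
    tty.debug("install starting: %s" % spec.name)


def on_install_failure(spec):
    tty.debug("install failed: %s" % spec.name)
```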

View file

@ -0,0 +1,73 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import llnl.util.tty as tty
import spack.monitor
def on_install_start(spec):
"""On start of an install, we want to ping the server if it exists
"""
if not spack.monitor.cli:
return
tty.debug("Running on_install_start for %s" % spec)
build_id = spack.monitor.cli.new_build(spec)
tty.verbose("Build created with id %s" % build_id)
def on_install_success(spec):
"""On the success of an install (after everything is complete)
"""
if not spack.monitor.cli:
return
tty.debug("Running on_install_success for %s" % spec)
result = spack.monitor.cli.update_build(spec, status="SUCCESS")
tty.verbose(result.get('message'))
def on_install_failure(spec):
"""Triggered on failure of an install
"""
if not spack.monitor.cli:
return
tty.debug("Running on_install_failure for %s" % spec)
result = spack.monitor.cli.fail_task(spec)
tty.verbose(result.get('message'))
def on_phase_success(pkg, phase_name, log_file):
"""Triggered on a phase success
"""
if not spack.monitor.cli:
return
tty.debug("Running on_phase_success %s, phase %s" % (pkg.name, phase_name))
result = spack.monitor.cli.send_phase(pkg, phase_name, log_file, "SUCCESS")
tty.verbose(result.get('message'))
def on_phase_error(pkg, phase_name, log_file):
"""Triggered on a phase error
"""
if not spack.monitor.cli:
return
tty.debug("Running on_phase_error %s, phase %s" % (pkg.name, phase_name))
result = spack.monitor.cli.send_phase(pkg, phase_name, log_file, "ERROR")
tty.verbose(result.get('message'))
def on_analyzer_save(pkg, result):
"""given a package and a result, if we have a spack monitor, upload
the result to it.
"""
if not spack.monitor.cli:
return
# This hook runs after a save result
spack.monitor.cli.send_analyze_metadata(pkg, result)

View file

@ -46,6 +46,7 @@
import spack.compilers
import spack.error
import spack.hooks
+import spack.monitor
import spack.package
import spack.package_prefs as prefs
import spack.repo
@ -412,6 +413,25 @@ def clear_failures():
    spack.store.db.clear_all_failures()


+def combine_phase_logs(phase_log_files, log_path):
+    """Read a set or list of logs and combine them into one file.
+
+    Each phase produces its own log, so this function concatenates all of
+    the separate phase log output files into the package's log_path. It is
+    written generally to accept a list of files and a log path to combine
+    them into.
+
+    Args:
+        phase_log_files (list): a list or iterator of logs to combine
+        log_path (str): the path to combine them into
+    """
+    with open(log_path, 'w') as log_file:
+        for phase_log_file in phase_log_files:
+            with open(phase_log_file, 'r') as phase_log:
+                log_file.write(phase_log.read())
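A quick illustration of the new helper, stitching per-phase logs back into the single combined build log (paths hypothetical):

```python
from spack.installer import combine_phase_logs

logs = ['/tmp/stage/spack-build-01-configure-out.txt',
        '/tmp/stage/spack-build-02-build-out.txt']
combine_phase_logs(logs, '/tmp/stage/spack-build-out.txt')
```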
def dump_packages(spec, path):
    """
    Dump all package information for a spec and its dependencies.
@ -521,6 +541,12 @@ def log(pkg):
    # Archive the whole stdout + stderr for the package
    fs.install(pkg.log_path, pkg.install_log_path)

+    # Archive all phase log paths
+    for phase_log in pkg.phase_log_files:
+        log_file = os.path.basename(phase_log)
+        log_file = os.path.join(os.path.dirname(packages_dir), log_file)
+        fs.install(phase_log, log_file)

    # Archive the environment used for the build
    fs.install(pkg.env_path, pkg.install_env_path)
@ -1260,6 +1286,7 @@ def _requeue_task(self, task):
    def _setup_install_dir(self, pkg):
        """
        Create and ensure proper access controls for the install directory.
+        Write a small metadata file with the current spack environment.

        Args:
            pkg (Package): the package to be built and installed
@ -1285,6 +1312,9 @@ def _setup_install_dir(self, pkg):
        # Ensure the metadata path exists as well
        fs.mkdirp(spack.store.layout.metadata_path(pkg.spec), mode=perms)

+        # Always write host environment - we assume this can change
+        spack.store.layout.write_host_environment(pkg.spec)
    def _update_failed(self, task, mark=False, exc=None):
        """
        Update the task and transitive dependents as failed; optionally mark
@ -1388,8 +1418,8 @@ def install(self):
        Args:
            pkg (Package): the package to be built and installed"""

        self._init_queue()
        fail_fast_err = 'Terminating after first install failure'
        single_explicit_spec = len(self.build_requests) == 1
        failed_explicits = []
@ -1400,6 +1430,7 @@ def install(self):
            if task is None:
                continue

+            spack.hooks.on_install_start(task.request.pkg.spec)
            install_args = task.request.install_args
            keep_prefix = install_args.get('keep_prefix')
@ -1422,6 +1453,10 @@ def install(self):
                    tty.warn('{0} does NOT actually have any uninstalled deps'
                             ' left'.format(pkg_id))
                    dep_str = 'dependencies' if task.priority > 1 else 'dependency'

+                    # Hook to indicate task failure, but without an exception
+                    spack.hooks.on_install_failure(task.request.pkg.spec)

                    raise InstallError(
                        'Cannot proceed with {0}: {1} uninstalled {2}: {3}'
                        .format(pkg_id, task.priority, dep_str,
@ -1441,6 +1476,11 @@ def install(self):
                tty.warn('{0} failed to install'.format(pkg_id))
                self._update_failed(task)

+                # Mark that the package failed
+                # TODO: this should also be for the task.pkg, but we don't
+                # model transitive yet.
+                spack.hooks.on_install_failure(task.request.pkg.spec)

                if self.fail_fast:
                    raise InstallError(fail_fast_err)
@ -1550,6 +1590,7 @@ def install(self):
                # Only terminate at this point if a single build request was
                # made.
                if task.explicit and single_explicit_spec:
+                    spack.hooks.on_install_failure(task.request.pkg.spec)
                    raise

                if task.explicit:
@ -1561,10 +1602,12 @@ def install(self):
                    err = 'Failed to install {0} due to {1}: {2}'
                    tty.error(err.format(pkg.name, exc.__class__.__name__,
                              str(exc)))
+                    spack.hooks.on_install_failure(task.request.pkg.spec)
                    raise

            except (Exception, SystemExit) as exc:
                self._update_failed(task, True, exc)
+                spack.hooks.on_install_failure(task.request.pkg.spec)

                # Best effort installs suppress the exception and mark the
                # package as a failure.
@ -1662,6 +1705,7 @@ def build_process(pkg, kwargs):
    echo = spack.package.PackageBase._verbose
    pkg.stage.keep = keep_stage

    with pkg.stage:
        # Run the pre-install hook in the child process after
        # the directory is created.
@ -1679,6 +1723,7 @@ def build_process(pkg, kwargs):
        # Do the real install in the source directory.
        with fs.working_dir(pkg.stage.source_path):
            # Save the build environment in a file before building.
            dump_environment(pkg.env_path)
@ -1699,12 +1744,22 @@ def build_process(pkg, kwargs):
            debug_level = tty.debug_level()

            # Spawn a daemon that reads from a pipe and redirects
-            # everything to log_path
-            with log_output(pkg.log_path, echo, True,
-                            env=unmodified_env) as logger:
-                for phase_name, phase_attr in zip(
-                        pkg.phases, pkg._InstallPhase_phases):
+            # everything to log_path, and provide the phase for logging
+            for i, (phase_name, phase_attr) in enumerate(zip(
+                    pkg.phases, pkg._InstallPhase_phases)):
+
+                # Keep a log file for each phase
+                log_dir = os.path.dirname(pkg.log_path)
+                log_file = "spack-build-%02d-%s-out.txt" % (
+                    i + 1, phase_name.lower()
+                )
+                log_file = os.path.join(log_dir, log_file)
+
+                try:
+                    # DEBUGGING TIP - to debug this section, insert an IPython
+                    # embed here, and run the sections below without log capture
+                    with log_output(log_file, echo, True,
+                                    env=unmodified_env) as logger:

                        with logger.force_echo():
                            inner_debug_level = tty.debug_level()
@ -1715,9 +1770,22 @@ def build_process(pkg, kwargs):
                        # Redirect stdout and stderr to daemon pipe
                        phase = getattr(pkg, phase_attr)
-                        phase(pkg.spec, pkg.prefix)

+                        # Catch any errors to report to logging
+                        phase(pkg.spec, pkg.prefix)
+                        spack.hooks.on_phase_success(pkg, phase_name, log_file)

+                except BaseException:
+                    combine_phase_logs(pkg.phase_log_files, pkg.log_path)
+                    spack.hooks.on_phase_error(pkg, phase_name, log_file)
+                    raise

+                # We assume loggers share echo True/False
                echo = logger.echo

+            # After log, we can get all output/error files from the package stage
+            combine_phase_logs(pkg.phase_log_files, pkg.log_path)

            log(pkg)

        # Run post install hooks before build stage is removed.
@ -1733,6 +1801,9 @@ def build_process(pkg, kwargs):
              _hms(pkg._total_time)))
    _print_installed_pkg(pkg.prefix)

+    # Send final status that install is successful
+    spack.hooks.on_install_success(pkg.spec)

    # preserve verbosity across runs
    return echo
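Given the naming scheme above, a hypothetical AutotoolsPackage with phases autoreconf, configure, build, and install would leave logs like these in its stage directory, alongside the combined spack-build-out.txt produced by combine_phase_logs:

```
spack-build-01-autoreconf-out.txt
spack-build-02-configure-out.txt
spack-build-03-build-out.txt
spack-build-04-install-out.txt
```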

459
lib/spack/spack/monitor.py Normal file
View file

@ -0,0 +1,459 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Interact with a Spack Monitor Service. Derived from
https://github.com/spack/spack-monitor/blob/main/script/spackmoncli.py
"""
import base64
import os
import re
try:
from urllib.request import Request, urlopen
from urllib.error import URLError
except ImportError:
from urllib2 import urlopen, Request, URLError # type: ignore # novm
import spack
import spack.hash_types as ht
import spack.main
import spack.repo
import spack.store
import spack.util.spack_json as sjson
import spack.util.spack_yaml as syaml
import llnl.util.tty as tty
from copy import deepcopy
# A global client to instantiate once
cli = None
def get_client(host, prefix="ms1", disable_auth=False, allow_fail=False):
"""a common function to get a client for a particular host and prefix.
If the client is not running, we exit early, unless allow_fail is set
to true, indicating that we should continue the build even if the
server is not present. Note that this client is defined globally as "cli"
so we can istantiate it once (checking for credentials, etc.) and then
always have access to it via spack.monitor.cli. Also note that
typically, we call the monitor by way of hooks in spack.hooks.monitor.
So if you want the monitor to have a new interaction with some part of
the codebase, it's recommended to write a hook first, and then have
the monitor use it.
"""
global cli
cli = SpackMonitorClient(host=host, prefix=prefix, allow_fail=allow_fail)
# If we don't disable auth, environment credentials are required
if not disable_auth:
cli.require_auth()
# We will exit early if the monitoring service is not running
info = cli.service_info()
    # If we allow failure, the response may be None
if info:
tty.debug("%s v.%s has status %s" % (
info['id'],
info['version'],
info['status'])
)
return cli
else:
tty.debug("spack-monitor server not found, continuing as allow_fail is True.")
def get_monitor_group(subparser):
"""Since the monitor group is shared between commands, we provide a common
function to generate the group for it. The user can pass the subparser, and
the group is added, and returned.
"""
# Monitoring via https://github.com/spack/spack-monitor
monitor_group = subparser.add_argument_group()
monitor_group.add_argument(
'--monitor', action='store_true', dest='use_monitor', default=False,
help="interact with a montor server during builds.")
monitor_group.add_argument(
'--monitor-no-auth', action='store_true', dest='monitor_disable_auth',
default=False, help="the monitoring server does not require auth.")
monitor_group.add_argument(
'--monitor-keep-going', action='store_true', dest='monitor_keep_going',
default=False, help="continue the build if a request to monitor fails.")
monitor_group.add_argument(
'--monitor-host', dest='monitor_host', default="http://127.0.0.1",
help="If using a monitor, customize the host.")
monitor_group.add_argument(
'--monitor-prefix', dest='monitor_prefix', default="ms1",
help="The API prefix for the monitor service.")
return monitor_group
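Putting the flags together, a monitored install might look like this (host and credentials illustrative):

```console
$ export SPACKMON_USER=spacky
$ export SPACKMON_TOKEN=abc123   # not needed with --monitor-no-auth
$ spack install --monitor --monitor-host http://127.0.0.1 --monitor-prefix ms1 zlib
```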
class SpackMonitorClient:
"""The SpackMonitorClient is a handle to interact with a spack monitor
server. We require the host url, along with the prefix to discover the
service_info endpoint. If allow_fail is set to True, we will not exit
on error with tty.fail given that a request is not successful. The spack
version is one of the fields to uniquely identify a spec, so we add it
to the client on init.
"""
def __init__(self, host=None, prefix="ms1", allow_fail=False):
self.host = host or "http://127.0.0.1"
self.baseurl = "%s/%s" % (self.host, prefix.strip("/"))
self.token = os.environ.get("SPACKMON_TOKEN")
self.username = os.environ.get("SPACKMON_USER")
self.headers = {}
self.allow_fail = allow_fail
self.spack_version = spack.main.get_version()
self.capture_build_environment()
        # We keep a lookup of build_id by full_hash
self.build_ids = {}
def load_build_environment(self, spec):
"""If we are running an analyze command, we will need to load previously
used build environment metadata from install_environment.json to capture
what was done during the build.
"""
if not hasattr(spec, "package") or not spec.package:
tty.die("A spec must have a package to load the environment.")
pkg_dir = os.path.dirname(spec.package.install_log_path)
env_file = os.path.join(pkg_dir, "install_environment.json")
build_environment = read_json(env_file)
if not build_environment:
            tty.warn(
                "install_environment.json not found in package folder. "
                "This means that the current environment metadata will be used."
            )
else:
self.build_environment = build_environment
def capture_build_environment(self):
"""Use spack.util.environment.get_host_environment_metadata to capture the
environment for the build. This is important because it's a unique
identifier, along with the spec, for a Build. It should look something
like this:
{'host_os': 'ubuntu20.04',
'platform': 'linux',
'host_target': 'skylake',
'hostname': 'vanessa-ThinkPad-T490s',
'spack_version': '0.16.1-1455-52d5b55b65',
'kernel_version': '#73-Ubuntu SMP Mon Jan 18 17:25:17 UTC 2021'}
This is saved to a package install's metadata folder as
install_environment.json, and can be loaded by the monitor for uploading
data relevant to a later analysis.
"""
from spack.util.environment import get_host_environment_metadata
self.build_environment = get_host_environment_metadata()
def require_auth(self):
"""Require authentication, meaning that the token and username must
not be unset
"""
if not self.token or not self.username:
tty.die("You are required to export SPACKMON_TOKEN and SPACKMON_USER")
def set_header(self, name, value):
self.headers.update({name: value})
def set_basic_auth(self, username, password):
"""A wrapper to adding basic authentication to the Request"""
auth_str = "%s:%s" % (username, password)
auth_header = base64.b64encode(auth_str.encode("utf-8"))
self.set_header("Authorization", "Basic %s" % auth_header.decode("utf-8"))
def reset(self):
"""Reset and prepare for a new request.
"""
if "Authorization" in self.headers:
self.headers = {"Authorization": self.headers['Authorization']}
else:
self.headers = {}
def prepare_request(self, endpoint, data, headers):
"""Given an endpoint url and data, prepare the request. If data
is provided, urllib makes the request a POST
"""
# Always reset headers for new request.
self.reset()
headers = headers or {}
# The calling function can provide a full or partial url
if not endpoint.startswith("http"):
endpoint = "%s/%s" % (self.baseurl, endpoint)
# If we have data, the request will be POST
if data:
if not isinstance(data, str):
data = sjson.dump(data)
data = data.encode('ascii')
return Request(endpoint, data=data, headers=headers)
def issue_request(self, request, retry=True):
"""Given a prepared request, issue it. If we get an error, die. If
there are times when we don't want to exit on error (but instead
disable using the monitoring service) we could add that here.
"""
try:
response = urlopen(request)
except URLError as e:
            # If we have an authorization request, retry once with auth
            # (only HTTPError is guaranteed to have a code attribute)
            if getattr(e, 'code', None) == 401 and retry:
if self.authenticate_request(e):
request = self.prepare_request(
e.url,
sjson.load(request.data.decode('utf-8')),
self.headers
)
return self.issue_request(request, False)
# Otherwise, relay the message and exit on error
msg = ""
if hasattr(e, 'reason'):
msg = e.reason
elif hasattr(e, 'code'):
msg = e.code
if self.allow_fail:
tty.warning("Request to %s was not successful, but continuing." % e.url)
return
tty.die(msg)
return response
def do_request(self, endpoint, data=None, headers=None, url=None):
"""Do a request. If data is provided, it is POST, otherwise GET.
If an entire URL is provided, don't use the endpoint
"""
request = self.prepare_request(endpoint, data, headers)
# If we have an authorization error, we retry with
response = self.issue_request(request)
        # A 200/201 response indicates success
if response.code in [200, 201]:
return sjson.load(response.read().decode('utf-8'))
return response
def authenticate_request(self, originalResponse):
"""Given a response (an HTTPError 401), look for a Www-Authenticate
header to parse. We return True/False to indicate if the request
should be retried.
"""
authHeaderRaw = originalResponse.headers.get("Www-Authenticate")
if not authHeaderRaw:
return False
# If we have a username and password, set basic auth automatically
if self.token and self.username:
self.set_basic_auth(self.username, self.token)
headers = deepcopy(self.headers)
if "Authorization" not in headers:
tty.error(
"This endpoint requires a token. Please set "
"client.set_basic_auth(username, password) first "
"or export them to the environment."
)
return False
# Prepare request to retry
h = parse_auth_header(authHeaderRaw)
headers.update({
"service": h.Service,
"Accept": "application/json",
"User-Agent": "spackmoncli"}
)
# Currently we don't set a scope (it defaults to build)
authResponse = self.do_request(h.Realm, headers=headers)
# Request the token
token = authResponse.get("token")
if not token:
return False
# Set the token to the original request and retry
self.headers.update({"Authorization": "Bearer %s" % token})
return True
# Functions correspond to endpoints
def service_info(self):
"""get the service information endpoint"""
# Base endpoint provides service info
return self.do_request("")
def new_configuration(self, specs):
"""Given a list of specs, generate a new configuration for each. We
return a lookup of specs with their package names. This assumes
that we are only installing one version of each package. We aren't
starting or creating any builds, so we don't need a build environment.
"""
configs = {}
# There should only be one spec generally (what cases would have >1?)
for spec in specs:
# Not sure if this is needed here, but I see it elsewhere
if spec.name in spack.repo.path or spec.virtual:
spec.concretize()
as_dict = {"spec": spec.to_dict(hash=ht.full_hash),
"spack_version": self.spack_version}
response = self.do_request("specs/new/", data=sjson.dump(as_dict))
configs[spec.package.name] = response.get('data', {})
return configs
def new_build(self, spec):
"""Create a new build, meaning sending the hash of the spec to be built,
along with the build environment. These two sets of data uniquely can
identify the build, and we will add objects (the binaries produced) to
it. We return the build id to the calling client.
"""
return self.get_build_id(spec, return_response=True)
def get_build_id(self, spec, return_response=False, spec_exists=True):
"""Retrieve a build id, either in the local cache, or query the server
"""
full_hash = spec.full_hash()
if full_hash in self.build_ids:
return self.build_ids[full_hash]
# Prepare build environment data (including spack version)
data = self.build_environment.copy()
data['full_hash'] = full_hash
# If we allow the spec to not exist (meaning we create it) we need to
# include the full spec.yaml here
if not spec_exists:
meta_dir = os.path.dirname(spec.package.install_log_path)
spec_file = os.path.join(meta_dir, "spec.yaml")
data['spec'] = syaml.load(read_file(spec_file))
response = self.do_request("builds/new/", data=sjson.dump(data))
        # Add the build id to the lookup
        bid = self.build_ids[full_hash] = response['data']['build']['build_id']
# If the function is called directly, the user might want output
if return_response:
return response
return bid
def update_build(self, spec, status="SUCCESS"):
"""update task will just update the relevant package to indicate a
successful install. Unlike cancel_task that sends a cancalled request
to the main package, here we don't need to cancel or otherwise update any
other statuses. This endpoint can take a general status to update just
one
"""
data = {"build_id": self.get_build_id(spec), "status": status}
return self.do_request("builds/update/", data=sjson.dump(data))
def fail_task(self, spec):
"""Given a spec, mark it as failed. This means that Spack Monitor
marks all dependencies as cancelled, unless they are already successful
"""
return self.update_build(spec, status="FAILED")
def send_analyze_metadata(self, pkg, metadata):
"""Given a dictionary of analyzers (with key as analyzer type, and
value as the data) upload the analyzer output to Spack Monitor.
Spack Monitor should either have a known understanding of the analyzer,
or if not (the key is not recognized), it's assumed to be a dictionary
of objects/files, each with attributes to be updated. E.g.,
{"analyzer-name": {"object-file-path": {"feature1": "value1"}}}
"""
# Prepare build environment data (including spack version)
# Since the build might not have been generated, we include the spec
data = {"build_id": self.get_build_id(pkg.spec, spec_exists=False),
"metadata": metadata}
return self.do_request("analyze/builds/", data=sjson.dump(data))
def send_phase(self, pkg, phase_name, phase_output_file, status):
"""Given a package, phase name, and status, update the monitor endpoint
to alert of the status of the stage. This includes parsing the package
metadata folder for phase output and error files
"""
data = {"build_id": self.get_build_id(pkg.spec)}
# Send output specific to the phase (does this include error?)
data.update({"status": status,
"output": read_file(phase_output_file),
"phase_name": phase_name})
return self.do_request("builds/phases/update/", data=sjson.dump(data))
def upload_specfile(self, filename):
"""Given a spec file (must be json) upload to the UploadSpec endpoint.
        This function is not used in the spack to server workflow, but could
        be useful if Spack Monitor is intended to send an already generated
        file in some kind of separate analysis. For the environment file, we
        parse out SPACK_* variables to include.
"""
# We load as json just to validate it
spec = read_json(filename)
data = {"spec": spec, "spack_verison": self.spack_version}
return self.do_request("specs/new/", data=sjson.dump(data))
# Helper functions
def parse_auth_header(authHeaderRaw):
"""parse authentication header into pieces"""
    regex = re.compile('([a-zA-Z]+)="(.+?)"')
matches = regex.findall(authHeaderRaw)
lookup = dict()
for match in matches:
lookup[match[0]] = match[1]
return authHeader(lookup)
class authHeader:
def __init__(self, lookup):
"""Given a dictionary of values, match them to class attributes"""
for key in lookup:
if key in ["realm", "service", "scope"]:
setattr(self, key.capitalize(), lookup[key])
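As a quick illustration, the helper turns a Www-Authenticate header into attribute lookups (header value hypothetical):

```python
from spack.monitor import parse_auth_header

# Hypothetical 401 response header from a spack-monitor server
header = 'Bearer realm="http://127.0.0.1/auth/token",service="spackmon",scope="build"'
h = parse_auth_header(header)
print(h.Realm, h.Service, h.Scope)
```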
def read_file(filename):
"""Read a file, if it exists. Otherwise return None
"""
if not os.path.exists(filename):
return
with open(filename, 'r') as fd:
content = fd.read()
return content
def write_file(content, filename):
"""write content to file"""
with open(filename, 'w') as fd:
fd.writelines(content)
return content
def write_json(obj, filename):
"""Write a json file, if the output directory exists."""
if not os.path.exists(os.path.dirname(filename)):
return
return write_file(sjson.dump(obj), filename)
def read_json(filename):
"""Read a file and load into json, if it exists. Otherwise return None"""
if not os.path.exists(filename):
return
return sjson.load(read_file(filename))

View file

@ -15,6 +15,7 @@
import contextlib
import copy
import functools
+import glob
import hashlib
import inspect
import os
@ -1066,6 +1067,14 @@ def log_path(self):
        # Otherwise, return the current log path name.
        return os.path.join(self.stage.path, _spack_build_logfile)

+    @property
+    def phase_log_files(self):
+        """Find sorted phase log files written to the staging directory"""
+        logs_dir = os.path.join(self.stage.path, "spack-build-*-out.txt")
+        log_files = glob.glob(logs_dir)
+        log_files.sort()
+        return log_files

    @property
    def install_log_path(self):
        """Return the build log file path on successful installation."""

View file

@ -33,6 +33,7 @@
build_env_path = os.path.join(lib_path, "env")
module_path = os.path.join(lib_path, "spack")
command_path = os.path.join(module_path, "cmd")
+analyzers_path = os.path.join(module_path, "analyzers")
platform_path = os.path.join(module_path, 'platforms')
compilers_path = os.path.join(module_path, "compilers")
build_systems_path = os.path.join(module_path, 'build_systems')

View file

@ -1661,7 +1661,13 @@ def to_node_dict(self, hash=ht.dag_hash):
            d['patches'] = variant._patches_in_order_of_appearance

        if hash.package_hash:
-            d['package_hash'] = self.package.content_hash()
+            package_hash = self.package.content_hash()
+
+            # Full hashes are in bytes
+            if (not isinstance(package_hash, six.text_type)
+                    and isinstance(package_hash, six.binary_type)):
+                package_hash = package_hash.decode('utf-8')
+            d['package_hash'] = package_hash

        deps = self.dependencies_dict(deptype=hash.deptype)
        if deps:

View file

@ -0,0 +1,176 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import pytest
import spack.analyzers.analyzer_base
import spack.config
import spack.package
import spack.cmd.install
from spack.spec import Spec
import spack.util.spack_json as sjson
from spack.main import SpackCommand
install = SpackCommand('install')
analyze = SpackCommand('analyze')
def test_test_package_not_installed(mock_fetch, install_mockery_mutable_config):
# We cannot run an analysis for a package not installed
out = analyze('run', 'libdwarf', fail_on_error=False)
assert "==> Error: Spec 'libdwarf' matches no installed packages.\n" in out
def test_analyzer_get_install_dir(mock_fetch, install_mockery_mutable_config):
"""
Test that we cannot get an analyzer directory without a spec package.
"""
spec = Spec('libdwarf').concretized()
assert 'libdwarf' in spack.analyzers.analyzer_base.get_analyzer_dir(spec)
# Case 1: spec is missing attribute for package
with pytest.raises(SystemExit):
spack.analyzers.analyzer_base.get_analyzer_dir(None)
class Packageless(object):
package = None
# Case 2: spec has package attribute, but it's None
with pytest.raises(SystemExit):
spack.analyzers.analyzer_base.get_analyzer_dir(Packageless())
def test_malformed_analyzer(mock_fetch, install_mockery_mutable_config):
"""
Test that an analyzer missing needed attributes is invalid.
"""
from spack.analyzers.analyzer_base import AnalyzerBase
# Missing attribute description
class MyAnalyzer(AnalyzerBase):
name = "my_analyzer"
outfile = "my_analyzer_output.txt"
spec = Spec('libdwarf').concretized()
with pytest.raises(SystemExit):
MyAnalyzer(spec)
def test_analyze_output(tmpdir, mock_fetch, install_mockery_mutable_config):
"""
Test that an analyzer errors if requested name does not exist.
"""
    install('libdwarf')
    install('python@3.8')
# An analyzer that doesn't exist should not work
out = analyze('run', '-a', 'pusheen', 'libdwarf', fail_on_error=False)
assert '==> Error: Analyzer pusheen does not exist\n' in out
# We will output to this analyzer directory
analyzer_dir = tmpdir.join('analyzers')
out = analyze('run', '-a', 'install_files', '-p', str(analyzer_dir), 'libdwarf')
    # Ensure that if we run again without overwrite, we don't run
out = analyze('run', '-a', 'install_files', '-p', str(analyzer_dir), 'libdwarf')
assert "skipping" in out
# With overwrite it should run
out = analyze('run', '-a', 'install_files', '-p', str(analyzer_dir),
'--overwrite', 'libdwarf')
assert "==> Writing result to" in out
def _run_analyzer(name, package, tmpdir):
"""
A shared function to test that an analyzer runs.
We return the output file for further inspection.
"""
analyzer = spack.analyzers.get_analyzer(name)
analyzer_dir = tmpdir.join('analyzers')
out = analyze('run', '-a', analyzer.name, '-p', str(analyzer_dir), package)
assert "==> Writing result to" in out
assert "/%s/%s\n" % (analyzer.name, analyzer.outfile) in out
# The output file should exist
output_file = out.strip('\n').split(' ')[-1].strip()
assert os.path.exists(output_file)
return output_file
def test_installfiles_analyzer(tmpdir, mock_fetch, install_mockery_mutable_config):
"""
test the install files analyzer
"""
install('libdwarf')
output_file = _run_analyzer("install_files", "libdwarf", tmpdir)
# Ensure it's the correct content
with open(output_file, 'r') as fd:
content = sjson.load(fd.read())
basenames = set()
for key, attrs in content.items():
basenames.add(os.path.basename(key))
# Check for a few expected files
for key in ['.spack', 'libdwarf', 'packages', 'repo.yaml', 'repos']:
assert key in basenames
def test_environment_analyzer(tmpdir, mock_fetch, install_mockery_mutable_config):
"""
test the environment variables analyzer.
"""
install('libdwarf')
output_file = _run_analyzer("environment_variables", "libdwarf", tmpdir)
with open(output_file, 'r') as fd:
content = sjson.load(fd.read())
# Check a few expected keys
for key in ['SPACK_CC', 'SPACK_COMPILER_SPEC', 'SPACK_ENV_PATH']:
assert key in content
# The analyzer should return no result if the output file does not exist.
spec = Spec('libdwarf').concretized()
env_file = os.path.join(spec.package.prefix, '.spack', 'spack-build-env.txt')
assert os.path.exists(env_file)
os.remove(env_file)
analyzer = spack.analyzers.get_analyzer("environment_variables")
analyzer_dir = tmpdir.join('analyzers')
result = analyzer(spec, analyzer_dir).run()
assert "environment_variables" in result
assert not result['environment_variables']
def test_list_analyzers():
"""
test that listing analyzers shows all the possible analyzers.
"""
from spack.analyzers import analyzer_types
# all cannot be an analyzer
assert "all" not in analyzer_types
# All types should be present!
out = analyze('list-analyzers')
for analyzer_type in analyzer_types:
assert analyzer_type in out
def test_configargs_analyzer(tmpdir, mock_fetch, install_mockery_mutable_config):
"""
test the config args analyzer.
Since we don't have any, this should return an empty result.
"""
install('libdwarf')
analyzer_dir = tmpdir.join('analyzers')
out = analyze('run', '-a', 'config_args', '-p', str(analyzer_dir), 'libdwarf')
assert out == ''

View file

@ -119,7 +119,9 @@ def test_install_dirty_flag(arguments, expected):
def test_package_output(tmpdir, capsys, install_mockery, mock_fetch):
-    """Ensure output printed from pkgs is captured by output redirection."""
+    """
+    Ensure output printed from pkgs is captured by output redirection.
+    """
    # we can't use output capture here because it interferes with Spack's
    # logging. TODO: see whether we can get multiple log_outputs to work
    # when nested AND in pytest
@ -140,12 +142,15 @@ def test_package_output(tmpdir, capsys, install_mockery, mock_fetch):
@pytest.mark.disable_clean_stage_check
def test_install_output_on_build_error(mock_packages, mock_archive, mock_fetch,
                                       config, install_mockery, capfd):
+    """
+    This test used to assume receiving full output, but since we've updated
+    spack to generate logs on the level of phases, it will only return the
+    last phase, install.
+    """
    # capfd interferes with Spack's capturing
    with capfd.disabled():
-        out = install('build-error', fail_on_error=False)
-    assert 'ProcessError' in out
+        out = install('-v', 'build-error', fail_on_error=False)
+    assert 'Installing build-error' in out
+    assert 'configure: error: in /path/to/some/file:' in out
+    assert 'configure: error: cannot run C compiled programs.' in out
@pytest.mark.disable_clean_stage_check
@ -172,20 +177,17 @@ def test_install_with_source(
@pytest.mark.disable_clean_stage_check
def test_show_log_on_error(mock_packages, mock_archive, mock_fetch,
                           config, install_mockery, capfd):
-    """Make sure --show-log-on-error works."""
+    """
+    Make sure --show-log-on-error works.
+    """
    with capfd.disabled():
        out = install('--show-log-on-error', 'build-error',
                      fail_on_error=False)
    assert isinstance(install.error, spack.build_environment.ChildError)
    assert install.error.pkg.name == 'build-error'

-    assert 'Full build log:' in out
-    print(out)
+    assert '==> Installing build-error' in out
+    assert 'See build log for details:' in out

-    # Message shows up for ProcessError (1) and output (1)
-    errors = [line for line in out.split('\n')
-              if 'configure: error: cannot run C compiled programs' in line]
-    assert len(errors) == 2
def test_install_overwrite(
@ -711,7 +713,9 @@ def test_install_only_dependencies_of_all_in_env(
def test_install_help_does_not_show_cdash_options(capsys):
-    """Make sure `spack install --help` does not describe CDash arguments"""
+    """
+    Make sure `spack install --help` does not describe CDash arguments
+    """
    with pytest.raises(SystemExit):
        install('--help')
        captured = capsys.readouterr()
@ -754,7 +758,9 @@ def test_compiler_bootstrap(
def test_compiler_bootstrap_from_binary_mirror(
        install_mockery_mutable_config, mock_packages, mock_fetch,
        mock_archive, mutable_config, monkeypatch, tmpdir):
-    """Make sure installing compiler from buildcache registers compiler"""
+    """
+    Make sure installing compiler from buildcache registers compiler
+    """
    # Create a temp mirror directory for buildcache usage
    mirror_dir = tmpdir.join('mirror_dir')

View file

@ -579,6 +579,32 @@ def _raise_except(path):
    monkeypatch.setattr(os, 'remove', orig_fn)
+def test_combine_phase_logs(tmpdir):
+    """Write temporary files, and assert that combine_phase_logs
+    combines them into one file.
+    """
+    log_files = ['configure-out.txt', 'install-out.txt', 'build-out.txt']
+    phase_log_files = []
+
+    # Create and write to dummy phase log files
+    for log_file in log_files:
+        phase_log_file = os.path.join(str(tmpdir), log_file)
+        with open(phase_log_file, 'w') as plf:
+            plf.write('Output from %s\n' % log_file)
+        phase_log_files.append(phase_log_file)
+
+    # This is the output log we will combine them into
+    combined_log = os.path.join(str(tmpdir), "combined-out.txt")
+    spack.installer.combine_phase_logs(phase_log_files, combined_log)
+    with open(combined_log, 'r') as log_file:
+        out = log_file.read()
+
+    # Ensure each phase log file is represented
+    for log_file in log_files:
+        assert "Output from %s\n" % log_file in out

def test_check_deps_status_install_failure(install_mockery, monkeypatch):
    const_arg = installer_args(['a'], {})
    installer = create_installer(const_arg)

View file

@ -0,0 +1,39 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# Need this because of spack.util.string
from __future__ import absolute_import
from spack.util.naming import mod_to_class
from llnl.util.lang import memoized, list_modules
import llnl.util.tty as tty
import inspect
__all__ = [
'list_classes'
]
@memoized
def list_classes(parent_module, mod_path):
"""Given a parent path (e.g., spack.platforms or spack.analyzers),
use list_modules to derive the module names, and then mod_to_class
to derive class names. Import the classes and return them in a list
"""
classes = []
for name in list_modules(mod_path):
mod_name = '%s.%s' % (parent_module, name)
class_name = mod_to_class(name)
mod = __import__(mod_name, fromlist=[class_name])
if not hasattr(mod, class_name):
tty.die('No class %s defined in %s' % (class_name, mod_name))
cls = getattr(mod, class_name)
if not inspect.isclass(cls):
tty.die('%s.%s is not a class' % (mod_name, class_name))
classes.append(cls)
return classes
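A hypothetical usage, assuming this helper lands at `spack.util.classes` and discovers the analyzer classes that ship with spack:

```python
import spack.paths
from spack.util.classes import list_classes

for cls in list_classes("spack.analyzers", spack.paths.analyzers_path):
    print(cls.__name__)
```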

View file

@ -9,7 +9,9 @@
import inspect
import json
import os
+import platform
import re
+import socket
import sys

import os.path
@ -139,6 +141,40 @@ def pickle_environment(path, environment=None):
                open(path, 'wb'), protocol=2)
+def get_host_environment_metadata():
+    """Get the host environment, reduce to a subset that we can store in
+    the install directory, and add the spack version.
+    """
+    import spack.main
+    environ = get_host_environment()
+    return {"host_os": environ['os'],
+            "platform": environ['platform'],
+            "host_target": environ['target'],
+            "hostname": environ['hostname'],
+            "spack_version": spack.main.get_version(),
+            "kernel_version": platform.version()}
+
+
+def get_host_environment():
+    """Return a dictionary (lookup) with host information (not including the
+    os.environ).
+    """
+    import spack.spec
+    import spack.architecture as architecture
+    arch = architecture.Arch(
+        architecture.platform(), 'default_os', 'default_target')
+    arch_spec = spack.spec.Spec('arch=%s' % arch)
+    return {
+        'target': str(arch.target),
+        'os': str(arch.os),
+        'platform': str(arch.platform),
+        'arch': arch_spec,
+        'architecture': arch_spec,
+        'arch_str': str(arch),
+        'hostname': socket.gethostname()
+    }
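For a quick sanity check, the metadata subset can be dumped directly; a hypothetical snippet (output values are machine-specific):

```python
import json
from spack.util.environment import get_host_environment_metadata

print(json.dumps(get_host_environment_metadata(), indent=2))
```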
@contextlib.contextmanager
def set_env(**kwargs):
    """Temporarily sets and restores environment variables.

View file

@ -333,7 +333,7 @@ _spack() {
    then
        SPACK_COMPREPLY="-h --help -H --all-help --color -c --config -C --config-scope -d --debug --timestamp --pdb -e --env -D --env-dir -E --no-env --use-env-repo -k --insecure -l --enable-locks -L --disable-locks -m --mock -p --profile --sorted-profile --lines -v --verbose --stacktrace -V --version --print-shell-vars"
    else
-        SPACK_COMPREPLY="activate add arch blame build-env buildcache cd checksum ci clean clone commands compiler compilers concretize config containerize create deactivate debug dependencies dependents deprecate dev-build develop docs edit env extensions external fetch find flake8 gc gpg graph help info install license list load location log-parse maintainers mark mirror module patch pkg providers pydoc python reindex remove rm repo resource restage solve spec stage style test test-env tutorial undevelop uninstall unit-test unload url verify versions view"
+        SPACK_COMPREPLY="activate add analyze arch blame build-env buildcache cd checksum ci clean clone commands compiler compilers concretize config containerize create deactivate debug dependencies dependents deprecate dev-build develop docs edit env extensions external fetch find flake8 gc gpg graph help info install license list load location log-parse maintainers mark mirror module patch pkg providers pydoc python reindex remove rm repo resource restage solve spec stage style test test-env tutorial undevelop uninstall unit-test unload url verify versions view"
    fi
}
@ -355,6 +355,28 @@ _spack_add() {
    fi
}

+_spack_analyze() {
+    if $list_options
+    then
+        SPACK_COMPREPLY="-h --help --monitor --monitor-no-auth --monitor-keep-going --monitor-host --monitor-prefix"
+    else
+        SPACK_COMPREPLY="list-analyzers run"
+    fi
+}
+
+_spack_analyze_list_analyzers() {
+    SPACK_COMPREPLY="-h --help"
+}
+
+_spack_analyze_run() {
+    if $list_options
+    then
+        SPACK_COMPREPLY="-h --help --overwrite -p --path -a --analyzers"
+    else
+        _all_packages
+    fi
+}
_spack_arch() {
    SPACK_COMPREPLY="-h --help --known-targets -p --platform -o --operating-system -t --target -f --frontend -b --backend"
}
@ -1041,7 +1063,7 @@ _spack_info() {
_spack_install() {
    if $list_options
    then
-        SPACK_COMPREPLY="-h --help --only -u --until -j --jobs --overwrite --fail-fast --keep-prefix --keep-stage --dont-restage --use-cache --no-cache --cache-only --include-build-deps --no-check-signature --require-full-hash-match --show-log-on-error --source -n --no-checksum --deprecated -v --verbose --fake --only-concrete -f --file --clean --dirty --test --run-tests --log-format --log-file --help-cdash --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp -y --yes-to-all"
+        SPACK_COMPREPLY="-h --help --only -u --until -j --jobs --overwrite --fail-fast --keep-prefix --keep-stage --dont-restage --use-cache --no-cache --cache-only --monitor --monitor-no-auth --monitor-keep-going --monitor-host --monitor-prefix --include-build-deps --no-check-signature --require-full-hash-match --show-log-on-error --source -n --no-checksum --deprecated -v --verbose --fake --only-concrete -f --file --clean --dirty --test --run-tests --log-format --log-file --help-cdash --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp -y --yes-to-all"
    else
        _all_packages
    fi
@ -1505,7 +1527,7 @@ _spack_solve() {
_spack_spec() {
    if $list_options
    then
-        SPACK_COMPREPLY="-h --help -l --long -L --very-long -I --install-status -y --yaml -j --json -c --cover -N --namespaces -t --types"
+        SPACK_COMPREPLY="-h --help -l --long -L --very-long -I --install-status -y --yaml -j --json -c --cover -N --namespaces --hash-type -t --types"
    else
        _all_packages
    fi

View file

@ -20,6 +20,9 @@ class Libabigail(AutotoolsPackage):
    depends_on('libdwarf')
    depends_on('libxml2')

+    # Libabigail won't generate its bin without Python
+    depends_on('python@3.8:')

    # Documentation dependencies
    depends_on('doxygen', type="build", when="+docs")
    depends_on('py-sphinx', type='build', when="+docs")