Refactor `TermTitle` into `InstallStatus` and use it to show progress
information both in the terminal title as well as inline. This also
turns on the terminal title status by default.
The inline output will look like the following after this change:
```
==> Installing m4-1.4.19-w2fxrpuz64zdq63woprqfxxzc3tzu7p3 [4/4]
```
* Allow users to specify root env dir
Environments managed by Spack have some advantages over anonymous environments,
but they are tucked away inside Spack's directory tree. This PR gives
users the ability to specify where managed environments should live.
See #32823
This is also taken as an opportunity to ensure that all references are to "managed environments",
rather than "named environments". Prior to this PR some references to the latter persisted.
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
When running many concurrent spack install processes that need to write
to the DB, Spack regularly times out. This is because writing to the DB
after another process has written to it requires deserializing the DB,
mutating it in memory, and serializing it again, which takes some
time. On top of that, I believe there's a 1 second retry when a write
lock cannot be obtained, so I think this means only 3 processes can
really write to the DB at the same time before timing out.
Using `-Werror` is good practice for development and testing, but causes us a great
deal of heartburn supporting multiple compiler versions, especially as newer compiler
versions add warnings for released packages. This PR adds support for suppressing
`-Werror` through spack's compiler wrappers. There are currently three modes for
the `flags:keep_werror` setting:
* `none`: (default) cancel all `-Werror`, `-Werror=*` and `-Werror-*` flags by
converting them to `-Wno-error[=]*` flags
* `specific`: preserve explicitly selected warnings as errors, such as
`-Werror=format-truncation`, but reverse the blanket `-Werror`
* `all`: keeps all `-Werror` flags
These can be set globally in config.yaml, through the config command-line flags, or
overridden by a particular package (some packages use Werror as a proxy for determining
support for other compiler features). We chose to use this approach because:
1. removing `-Werror` flags entirely broke *many* build systems, especially autoconf-based
ones, because of checks like compiling with `-Werror=feature` and assuming that,
if the check did not error, other flags related to that feature would also work
2. Attempting to preserve `-Werror` in some phases but not others caused similar issues
3. The per-package setting came about because some packages, even with all these
protections, still use `-Werror` unsafely. Currently there are roughly 3 such packages
known.
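As a sketch, the new setting would look something like this in config.yaml (the nesting under `config:flags:` is my assumption; only the `keep_werror` key and its three values come from the description above):
```
config:
  flags:
    keep_werror: none       # default: rewrite -Werror* to -Wno-error[=]*
    # keep_werror: specific # keep -Werror=<warning>, reverse blanket -Werror
    # keep_werror: all      # keep every -Werror flag
```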
Adds another post-install hook that loops over the install prefix, looking for shared-library ELF files, and sets each library's soname to its own absolute path.
The idea is that whenever somebody links against those libraries, the linker copies the soname (which is the absolute path to the library) into a "needed" entry, so that at runtime the dynamic loader sees that the needed library is a path and loads it directly without searching.
As a result:
1. rpaths are not used for the fixed/static list of needed libraries in the dynamic section (only for _actually_ dynamically loaded libraries through `dlopen`), which largely solves the issue that Spack's rpaths are a heuristic (`<prefix>/lib` and `<prefix>/lib64` might not be where libraries really are...)
2. improved startup times (no library search required)
"spack install" will not update the binary index if given a concrete
spec, which causes it to fall back to direct fetches when a simple
index update would have helped. For S3 buckets in particular, this
significantly and needlessly slows down the install process.
This commit alters the logic so that the binary index is updated
whenever a by-hash lookup fails. The lookup is attempted again with
the updated index before falling back to direct fetches. To avoid
updating too frequently (potentially once for each spec being
installed), BinaryCacheIndex.update now includes a "cooldown"
option, and when this option is enabled it will not update more
than once in a cooldown window (set in config.yaml).
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Change license dir from hard-coded to a configurable item
* Change config item to be a string not an array
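For illustration, the new setting might look like this in config.yaml (the key name `license_dir` and the example path are assumptions based on the bullets above):
```
config:
  license_dir: /path/to/site/licenses   # a string, not an array
```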
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
The concretizer is going to grow to have many more configuration options,
and we really need some structured config for that.
* We have the `config:concretizer` option that chooses the solver,
but extending that is awkward (we'd need to replace a string with
a `dict`) and the solver choice will be deprecated eventually.
* We have the `concretization` option in environments, but it's
not a top-level config section -- it's just for environments,
and it also only admits a string right now.
To avoid overlapping with either of these and to allow the most
extensibility in the future, this adds a new `concretizer` config
section that can be used in and outside of environments. There
is only one option right now: `reuse`. This can expand to include
other options later.
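A minimal sketch of the new section (only `reuse` exists so far; the boolean value shown is an assumption about its type):
```
concretizer:
  reuse: true   # try to reuse already-installed specs during concretization
```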
Likely, we will soon deprecate `config:concretizer` and warn when
the user doesn't use `clingo`, and we will eventually (sometime later)
move the `together` / `separately` options from `concretization` into
the top-level `concretizer` section.
This commit just adds the new section and schema. Fully wiring it
up is TBD.
Spack's `system` and `user` scopes provide ways for administrators and
users to set global defaults for all Spack instances, but for use cases
where one wants a clean Spack installation, these scopes can be undesirable.
For example, users may want to opt out of global system configuration, or
they may want to ignore their own home directory settings when running in
a continuous integration environment.
Spack also, by default, keeps various caches and user data in `~/.spack`,
but users may want to override these locations.
Spack provides three environment variables that allow you to override or
opt out of configuration locations:
* `SPACK_USER_CONFIG_PATH`: Override the path to use for the
`user` (`~/.spack`) scope.
* `SPACK_SYSTEM_CONFIG_PATH`: Override the path to use for the
`system` (`/etc/spack`) scope.
* `SPACK_DISABLE_LOCAL_CONFIG`: set this environment variable to completely
disable *both* the system and user configuration directories. Spack will
only consider its own defaults and `site` configuration locations.
And one that allows you to move the default cache location:
* `SPACK_USER_CACHE_PATH`: Override the default path to use for user data
(misc_cache, tests, reports, etc.)
With these settings, if you want to isolate Spack in a CI environment, you can do this:
export SPACK_DISABLE_LOCAL_CONFIG=true
export SPACK_USER_CACHE_PATH=/tmp/spack
This is a stop-gap approach until we have figured out how to deal with
the system and user config scopes more generally, as there are plans to
potentially / eventually get rid of them.
**User config**
Spack is a bit of a pain when you have:
- a shared $HOME folder across different systems.
- multiple Spack versions on the same system.
**System config**
- On shared systems with a versioned programming environment / toolkit,
system administrators want to provide config for each version (e.g.
21.09, 21.10) of the programming environment, and the user Spack
instance should be able to pick this up without a steep learning
curve.
- On shared systems the user should be able to opt out of the
hard-coded config scope in /etc/spack, since it may be incompatible
with their particular instance. Currently Spack can only opt out of all
config scopes through overrides with `"config:":` and `"packages:":`, but that
also drops the defaults, which would then have to be repeated; this is
undesirable, especially for the lengthy packages.yaml.
An example use case is: having config in this folder:
```
/path/to/programming/environment/{version}/{compilers,packages}.yaml
```
and have `module load spack-system-config` set the variable
```
SPACK_SYSTEM_CONFIG_PATH=/path/to/programming/environment/{version}
```
where the user no longer has to worry about what `{version}` they are
on.
**Continuous integration**
Finally, there is the use case of continuous integration, which may
clone an arbitrary Spack version that ideally should not pick up
system or user config from a previous run (as can happen in classical
bare-metal, non-containerized Jenkins pipelines with filesystem side
effects). In fact this is very similar to how Spack itself tries to
avoid picking up system dependencies during builds.
**But environments solve this?**
- You could use `include`s in environment files to get behavior similar
to the SPACK_SYSTEM_CONFIG_PATH example above, but environments:
1) require paths to individual config files, not directories, and
2) fail if a listed config file does not exist.
- They allow you to override config scopes, but this is generally too
rigid, as it requires you to repeat the default config, in
particular packages.yaml, and defeats the point of layered config.
Co-authored-by: Tom Scogland <tscogland@llnl.gov>
Co-authored-by: Tim Fuller <tjfulle@sandia.gov>
Co-authored-by: Steve Leak <sleak@lbl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Installing packages with a lot of dependencies does not have an easy way
of judging the current progress (apart from running `spack spec -I pkg`
in another terminal). This change allows Spack to update the terminal's
title with status information, including its current progress as well as
information about the current and total number of packages.
Modifications:
- [x] Change `defaults/config.yaml`
- [x] Add a fix for bootstrapping patchelf from sources if `compilers.yaml` is empty
- [x] Make `SPACK_TEST_SOLVER=clingo` the default for unit-tests
- [x] Fix package failures in the e4s pipeline
Caveats:
1. CentOS 6 still uses the original concretizer as it can't connect to the buildcache due to issues with `ssl` (bootstrapping from sources requires a C++14 capable compiler)
1. I had to update the image tag for GitlabCI in e699f14.
1. libtool v2.4.2 has been deprecated and other packages received some updates
- Change config from the undocumented `use_curl: true/false` to `url_fetch_method: urllib/curl`.
- Documentation of `url_fetch_method` in `defaults/config.yaml`
- Default fetch option explicitly set to `urllib` for users who may not have curl on their system
To upgrade from `use_curl` to `url_fetch_method`, run `spack config update config`
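For reference, a hedged sketch of the new setting in config.yaml, matching the bullets above:
```
config:
  url_fetch_method: urllib   # or 'curl' to shell out to the curl executable
```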
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).
This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.
As part of this change, the module roots configuration moved from the `config` section to inside each module configuration.
Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
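A rough sketch of what a named module set could look like after this change (key names such as `enable` and `roots` are assumptions; only the set name "default", the tcl/lmod choice, and the relocated roots come from the description above):
```
modules:
  default:              # name of the module set
    enable: [tcl]       # each set can enable tcl or lmod modules
    roots:
      tcl: $spack/share/spack/modules   # roots now live inside each set
```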
TODO:
- [x] code changes to support multiple module sets
- [x] code changes to support modules relative to a view
- [x] Tests for multiple module configurations
- [x] Tests for modules relative to a view
- [x] Backwards compatibility for module roots from config section
- [x] Backwards compatibility for default module set without the name specified
- [x] Tests for backwards compatibility
* Make -j flag less exceptional
The -j flag in Spack behaves differently from make, ctest, ninja, etc.,
because it caps the number of jobs to an arbitrary number, 16.
Spack will behave like other tools if `spack install` uses a reasonable
default, and `spack install -j <num>` *overrides* that default.
This will be particularly useful for Spack usage outside of a traditional
HPC context and for HPC centers that encourage users to compile on
login nodes with many cores instead of on compute nodes, which has
become increasingly common as individual nodes have more cores.
This maintains the existing default value of min(num_cpus, 16). However,
as it stands, Spack does a poor job of determining the number of
CPUs on Linux, since it doesn't take cgroups into account. This is
particularly problematic when using distributed builds with Slurm. This PR
also introduces `spack.util.cpus.cpus_available()` to consolidate
knowledge on determining the number of available cores, and improves
core detection for Linux. This should also improve core detection for
Docker/Kubernetes, which also use cgroups.
* Procedure to deprecate old versions of software
* Add documentation
* Fix bug in logic
* Update tab completion
* Deprecate legacy packages
* Deprecate old mxnet as well
* More explicit docs
Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). spack test is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run smoke
tests on all of its packages. Historical test logs can be perused with
`spack test results`. Generic smoke tests are included for MPI
implementations and for C, C++, and Fortran compilers, as well as specific
smoke tests for 18 packages.
Inside the test method, individual tests can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running and checking return codes, checking output, and error handling.
This handles the following trickier aspects of testing with direct
support in Spack's package API:
- [x] Caching source or intermediate build files at build time for
use at test time.
- [x] Test dependencies.
- [x] Packages that require a compiler for testing (such as library-only
packages).
See the packaging guide for more details on using Spack testing support.
Included is support for package.py files for virtual packages. This does
not change the Spack interface, but is a major change in internals.
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Since #11598 sbang has been installed within the install_tree. This doesn’t play
nicely with install_tree padding, since sbang can’t do its job if it is installed in a
long path (this is the whole point of sbang).
This PR changes the padding specification. Instead of $padding inside paths,
we now have a separate `padding:` field in the `install_tree` configuration.
Previously, the `install_tree` looked like this:
```
/path/to/opt/spack_padding_padding_padding_padding_padding/
    bin/
        sbang
    .spack-db/
        ...
    linux-rhel7-x86_64/
        ...
```
This PR updates things to look like this:
```
/path/to/opt/
    bin/
        sbang
    spack_padding_padding_padding_padding_padding/
        .spack-db/
            ...
        linux-rhel7-x86_64/
            ...
```
So padding is added at the start of all install prefixes *within* the unpadded
root. The database and all installations still go under the padded root.
This ensures that `sbang` is in the shortest possible path while also allowing
us to make long paths for relocatable binaries.
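For illustration, the new padding configuration could be expressed like this in config.yaml (the `root:` sub-key and the numeric value are assumptions; the separate `padding:` field is what this PR adds):
```
config:
  install_tree:
    root: /path/to/opt
    padding: 128   # pad install prefixes created within the unpadded root
```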
connect_timeout can be used to increase the time Spack waits for the
server to answer. This can be used to work around slow connections or
servers.
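A hedged example of the setting in config.yaml (the value shown is arbitrary):
```
config:
  connect_timeout: 30   # seconds to wait for a server to answer
```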
Fixes #14700
Fixes #9394. Closes #13217.
## Background
Spack provides the ability to enable/disable parallel builds through two options: package `parallel` and configuration `build_jobs`. This PR changes the algorithm to allow multiple, simultaneous processes to coordinate the installation of the same spec (and specs with overlapping dependencies).
The `parallel` (boolean) property sets the default for its package though the value can be overridden in the `install` method.
Spack's current parallel builds are limited to build tools supporting `jobs` arguments (e.g., `Makefiles`). The number of jobs actually used is calculated as `min(config:build_jobs, # cores, 16)`, which can be overridden in the package or on the command line (i.e., `spack install -j <# jobs>`).
This PR adds support for distributed (single- and multi-node) parallel builds. The goals of this work include improving the efficiency of installing packages with many dependencies and reducing the repetition associated with concurrent installations of (dependency) packages.
## Approach
### File System Locks
Coordination between concurrent installs of overlapping packages to a Spack instance is accomplished through bottom-up dependency DAG processing and file system locks. The runs can be a combination of interactive and batch processes affecting the same file system. Exclusive prefix locks are required to install a package while shared prefix locks are required to check if the package is installed.
Failures are communicated through a separate exclusive prefix failure lock, for concurrent processes, combined with a persistent store, for separate, related build processes. The resulting file contains the failing spec to facilitate manual debugging.
### Priority Queue
Management of dependency builds changed from reliance on recursion to use of a priority queue where the priority of a spec is based on the number of its remaining uninstalled dependencies.
Using a queue required a change to dependency build exception handling with the most visible issue being that the `install` method *must* install something in the prefix. Consequently, packages can no longer get away with an install method consisting of `pass`, for example.
## Caveats
- This still only parallelizes a single-rooted build. Multi-rooted installs (e.g., for environments) are TBD in a future PR.
Tasks:
- [x] Adjust package lock timeout to correspond to value used in the demo
- [x] Adjust database lock timeout to reduce contention on startup of concurrent
`spack install <spec>` calls
- [x] Replace (test) package's `install: pass` methods with file creation since post-install
`sanity_check_prefix` will otherwise error out with `Install failed .. Nothing was installed!`
- [x] Resolve remaining existing test failures
- [x] Respond to alalazo's initial feedback
- [x] Remove `bin/demo-locks.py`
- [x] Add new tests to address new coverage issues
- [x] Replace built-in package's `def install(..): pass` to "install" something
(i.e., only `apple-libunwind`)
- [x] Increase code coverage
Add a configuration option to suppress gpg warnings during binary
package verification. This only suppresses warnings: a gpg failure
will still fail the install. This allows users who have already
explicitly trusted the gpg key they are using to avoid seeing
repeated warnings that it is self-signed.
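A sketch of what the new option could look like (the key name `suppress_gpg_warnings` is an assumption; the PR text does not name it):
```
config:
  suppress_gpg_warnings: true   # hide gpg warnings; gpg failures still fail the install
```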
Add a new entry in `config.yaml`:
config:
  shared_linking: 'rpath'
If this variable is set to `rpath` (the default) Spack will set RPATH in ELF binaries. If set to `runpath` it will set RUNPATH.
Details:
* Spack cc wrapper explicitly adds `--disable-new-dtags` when linking
* cc wrapper also strips `--enable-new-dtags` from the compile line
when disabling (and vice versa)
* We specifically do *not* add any dtags flags on macOS, which uses
Mach-O binaries, not ELF, so there's no RUNPATH.
Dotkit is being used only at a few sites and has been deprecated on new
machines. This commit removes all the code that provides support for the
generation of dotkit module files.
A new validator named "deprecatedProperties" has been added to the
jsonschema validators. It makes it possible to print a warning message or
exit with an error if a property that has been marked as deprecated is
encountered.
* Removed references to dotkit in the docs
* Removed references to dotkit in setup-env-test.sh
* Added a unit test for the 'deprecatedProperties' schema validator
* When cleaning the stage root, only remove directories that appear
to be used for staging Spack packages. Previously Spack was clearing
all directories in the stage root, which could remove content not
related to Spack if the user chose a staging root which contains
files/directories not managed by Spack.
* The documentation is updated with warnings about choosing a stage
directory that is only managed by Spack (although generally the
check added in this PR for "spack clean" should avoid removing
content that was not created by Spack)
* The default stage directory (in config.yaml) is now
$tempdir/$user/spack-stage and the logic is updated to omit the
$user portion of this path if $tempdir already contains a $user
directory.
* When creating the stage root, assign user read/write permissions to all
directories in the path under $user. Previously Spack was assigning
the permissions of the first existing parent directory.
Fixes #11163
The goal of this work is to simplify stage directory structures by eliminating use of symbolic links. This means, among other things, that `$spack/var/spack/stage` will no longer be the core staging directory. Instead, the first accessible `config:build_stage` path will be used.
Spack will no longer automatically append `spack-stage` (or the like) to configured build stage directories so the onus of distinguishing the directory from other work -- so the other work is not automatically removed with a `spack clean` operation -- falls on the user.
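For illustration, the corresponding config.yaml entry might look like this (the first path matches the default described above; the fallback path and the list form are assumptions):
```
config:
  build_stage:
    - $tempdir/$user/spack-stage   # first accessible path is used
    - ~/.spack/stage
```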
* config:build_jobs now controls the number of parallel jobs to spawn during
builds, but cannot ever exceed the number of cores on the machine.
* The default is set to 16 or the number of available cores, whichever
is lower.
* Updated docs to reflect the changes done to limit parallel builds
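A minimal sketch of the setting described above:
```
config:
  build_jobs: 16   # capped at the number of cores on the machine
```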
* Create option to build missing compilers and add them to config before installing packages that use them
* Clean up kwarg passing for do_install, put compiler bootstrapping in separate method
* Remove /nfs/tmp2 from default configuration
* /nfs/tmp2 is going away from LC... and doesn’t exist for the rest of the world.
* update documentation to remove /nfs/tmp2 as well
* Add a build_language config.yaml option which controls the language
of compiler messages
* build_language defaults to "C", in which case the compiler messages
will be in English. This allows Spack log parsing to detect and
highlight error messages (since the regular expressions to find
error messages are in English)
* The user can use the default language in their environment by setting
the build_language config variable to null or ''
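Sketch of the setting in config.yaml, per the bullets above:
```
config:
  build_language: C    # force English compiler messages for log parsing
  # build_language: '' # use the environment's default language instead
```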
Fixes #9166
This is intended to reduce errors related to lock timeouts by making
the following changes:
* Improves error reporting when acquiring a lock fails (addressing
#9166) - there is no longer an attempt to release the lock if an
acquire fails
* By default locks taken on individual packages no longer have a
timeout. This allows multiple spack instances to install overlapping
dependency DAGs. For debugging purposes, a timeout can be added by
setting 'package_lock_timeout' in config.yaml
* Reduces the polling frequency when trying to acquire a lock, to
reduce impact in the case where NFS is overtaxed. A simple
adaptive strategy is implemented, which starts with a polling
interval of .1 seconds and quickly increases to .5 seconds
(originally it would poll up to 10^5 times per second).
A test is added to check the polling interval generation logic.
* The timeout for Spack's whole-database lock (e.g. for managing
information about installed packages) is increased from 60s to
120s
* Users can configure the whole-database lock timeout using the
'db_lock_timeout' setting in config.yaml
Generally, Spack locks (those created using spack.llnl.util.lock.Lock)
now have no timeout by default
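Putting the timeout settings above together, a hedged config.yaml sketch:
```
config:
  db_lock_timeout: 120        # whole-database lock timeout, in seconds
  package_lock_timeout: null  # no timeout on per-package locks by default
```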
This does not address implementations of NFS that do not support file
locking, or detect cases where services that may be required
(nfslock/statd) aren't running.
Users may want to be able to more aggressively release locks when
they know they are the only one using their Spack instance, and they
encounter lock errors after a crash (e.g. a remote terminal disconnect
mentioned in #8915).
If the user sets "ccache: true" in spack's config.yaml, Spack will use an available
ccache executable when compiling c/c++ code. This feature is disabled by default
(i.e. "ccache: false") and the documentation is updated with how to enable
ccache support
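The setting itself, as described:
```
config:
  ccache: true   # use an available ccache executable when compiling C/C++
```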
- spack.util.lock behaves the same as llnl.util.lock, but Lock._lock and
Lock._unlock do nothing.
- can be disabled with a control variable.
- configuration options can enable/disable locking:
  - the `locks` option in spack configuration controls whether Spack will use filesystem locks or not.
  - the `-l` and `-L` command-line options can force-disable or force-enable locking.
- Spack will check for group- and world-writability before disabling
locks, and it will not allow a group- or world-writable instance to
have locks disabled.
- update documentation
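A minimal sketch of the configuration option:
```
config:
  locks: true   # set to false to disable filesystem locks; refused for group- or world-writable instances
```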
* Add format to separate target and os for path
The spec format can now separate target and os when setting
up the path.
* Added ${PLATFORM} et al to spec.format()
${PLATFORM}, ${OS}, ${TARGET}
* Update tests
Updated tests and got rid of unnecessary code.
* Also update documentation to reflect this new ability.
* Add default path scheme to config.yaml
Added default path scheme to config.yaml. Users can overwrite this
section if they want.
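An illustrative (not authoritative) example of such a default path scheme entry; the key name `install_path_scheme` and the non-architecture tokens are assumptions, while `${PLATFORM}`, `${OS}`, and `${TARGET}` are the tokens added by this change:
```
config:
  install_path_scheme: "${PLATFORM}/${OS}-${TARGET}/${PACKAGE}-${VERSION}-${HASH}"
```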
* Module files are now generated using a template engine (refs #2902, #3173)
jinja2 has been hooked into Spack.
The python module `modules.py` has been split into several modules
under the python package `spack/modules`. Unit tests stressing module
file generation have been refactored accordingly.
The module file generator for Lmod has been extended to multi-providers
and deeper hierarchies.
* Improved the support for templates in module files.
Added an entry in `config.yaml` (`template_dirs`) to list all the
directories where Spack could find templates for `jinja2`.
Module file generators have a simple override mechanism to override
template selection ('modules.yaml' beats 'package.py' beats 'default').
* Added jinja2 and MarkupSafe to vendored packages.
* Spec.concretize() sets mutual spec-package references
The correct place to set the mutual references between spec and package
objects is at the end of concretization. After a call to concretize we
are now ensured that spec is the same object as spec.package.spec.
Code in `build_environment.py` that was performing the same operation
has been turned into an assertion to be defensive on the new behavior.
* Improved code and data layout for modules and related tests.
Common fixtures related to module file generation have been extracted
in `conftest.py`. All the mock configurations for module files have been
extracted from python code and have been put into their own yaml file.
Added a `context_property` decorator for the template engine, to make
it easy to define dictionaries out of properties.
The default for `verbose` in `modules.yaml` is now False instead of True.
* Extendable module file contexts + short description from docstring
The contexts that are used in conjunction with `jinja2` templates to
generate module files can now be extended from package.py and
modules.yaml.
Module file generators now infer the short description from the package.py
docstring (as you may expect, it's the first paragraph).
* 'module refresh' regenerates all modules by default
`module refresh` without `--module-type` specified tries to
regenerate all known module types. The same holds true for `module rm`
Configure options used at build time are extracted and written into the
module files where possible.
* Fixed python3 compatibility, tests for Lmod and Tcl.
Added test for exceptional paths of execution when generating Lmod
module files.
Fixed a few compatibility issues with python3.
Fixed a bug in Tcl with naming_scheme and autoload + unit tests
* Updated module file tutorial docs. Fixed a few typos in docstrings.
The reference section for module files has been reorganized. The idea is
to have only three topics at the highest level:
- shell support + spack load/unload use/unuse
- module file generation (a.k.a. APIs + modules.yaml)
- module file maintenance (spack module refresh/rm)
Module file generation will cover the entries in modules.yaml
Also:
- Licenses have been updated to include NOTICE and extended to 2017
- docstrings have been reformatted according to Google style
* Removed redundant arguments to RPackage and WafPackage.
All the callbacks in `RPackage` and `WafPackage` that are not build
phases have been modified not to accept a `spec` and a `prefix`
argument. This makes it possible to leverage the common `configure_args`
signature to insert the configuration arguments into the generated
module files by default. I think this is preferable to handling those packages
differently from `AutotoolsPackage`. Besides, only one package seems
to override one of these methods.
* Fixed broken indentation + improved resiliency of refresh
Fixed broken indentation in `spack module refresh` (probably a rebase
gone silently wrong?). Filter the writers for blacklisted specs before
searching for name clashes. An error with a single writer will not
stop regeneration, but instead will print a warning and continue
the command.