* Allow users to specify root env dir
Environments managed by Spack have some advantages over anonymous environments,
but they are tucked away inside Spack's directory tree. This PR gives
users the ability to specify where these environments should live.
See #32823
This is also taken as an opportunity to ensure that all references are to "managed environments",
rather than "named environments". Prior to this PR some references to the latter persisted.
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Batch scripts in general will not function on Windows without CRLF line
endings. We rely on these scripts to support cmd, so we should not allow
them to be converted to LF.
Note: Windows 11 supports LF line endings due to the use of Windows
Terminal. Once support for Windows 10 is dropped, this change can be
reverted.
This PR enables the successful execution of the spack binary cache
tutorial on Windows. It assumes gnupg and file are available (they
can be installed with choco).
* Fix handling of args with quotes in spack.bat
* `file` utility can be installed on Windows (e.g. with choco): update
error message accordingly
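For reference, a hedged example of installing both tools with Chocolatey
(assuming the package names match the tool names; verify with `choco search`
if they differ):
```console
> choco install gnupg file
```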
Spack was running an external detection of Python during each invocation
of the setup script for Windows CMD/PWSH. This has dramatic performance
implications every time the script is invoked and is completely
unnecessary. Remove this operation.
* Remove CI jobs related to Python 2.7
* Remove Python 2.7 specific code from Spack core
* Remove externals for Python 2 only
* Remove llnl.util.compat
Add a `project` block to the TOML config along with development and CI
dependencies, plus a minimal `build-system` block that does essentially
nothing, so that spack can be bootstrapped to a full development
environment with:
```shell
$ hatch -e dev shell
```
or for a minimal environment without hatch:
```shell
$ python3 -m venv venv
$ source venv/bin/activate
$ python3 -m pip install --upgrade pip
$ python3 -m pip install -e '.[dev]'
```
This means we can re-use the requirements list throughout the workflow
YAML files and otherwise maintain this list in *one place* rather than
several disparate ones. We may be stuck with a couple of extra copies
temporarily to continue supporting Python 2.7, but aside from that there
are fewer places to get out of sync, plus a couple of new bootstrap options.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add two no-op jobs named "all-prechecks" and "all"
These are a suggestion from @tgamblin: they are stably named markers we
can use from GitLab, and possibly for required checks, to make CI more
resilient to refactors that change the names of specific checks.
* Enable parallel testing using xdist for unit testing in CI
* Normalize tmp paths to deal with macOS
* add -u flag compatibility to spack python
As of now, it is accepted and ignored. The entire reason for doing this
is the usage with xdist, where the interpreter is invoked as
`python -u spack python` and xdist then passes it `-u`. It should never be
used without explicitly passing -u to the executing python interpreter.
* use spack python in xdist to support python 2
When running on python2, spack has many import cycles unless started
through main. To allow that, this uses `spack python` as the
interpreter, leveraging the `-u` support so xdist doesn't error out when
it unconditionally requests unbuffered binary IO.
* Use shutil.move to account for tmpdir being in a separate filesystem sometimes
This patchset refactors our GitHub actions into a single top-level ci workflow that
invokes a series of reusable actions. The main goal of this is to be able to easily
control which tests run and in what order based on the success or failure of top-level
prechecks. Our previous workflows ran in three sets:
* nix tests: style and verification first, then linux and macos tests if successful
* windows tests: style and verification first, then windows tests if successful
* bootstrap tests
As a result, the bootstrap tests ran even if the style failed, and style and verification
had to run on two different platforms despite running identical checks. I'm relatively
sure that's because of the limitation on dependencies between steps in the jobs.
Reusable workflows allow us to run the style, verification and now audit checks once,
then depending on the results, and the files changed, run the appropriate nix, windows
and bootstrap tests. While it saves only a few minutes by itself, this makes it easier to
refactor checks to subset tests without having to replicate tests or other workflow
components in the future.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Many noqa's in the code are no longer necessary now that the column limit is 99
characters. Others can easily be eliminated, and still more can just be made more
specific if they do not have to do with line length.
The only E501's still in the code are in the tests for `spack.util.path` and the tests
for `spack style`.
* refactor powershell setup to make it sourceable
* only set editor if it is unset
* change directory to spack root in subshell
* Update share/spack/setup-env.ps1
Co-authored-by: John W. Parent <45471568+johnwparent@users.noreply.github.com>
Fixup common tests
* Remove requirement for Python 2.6
* Skip new failing test
Windows: Update url util to handle Windows paths (#27959)
* update url util to handle windows paths
* Update tests to handle fixed url handling
* canonicalize path only when the path type matches the host platform
* Skip some url tests on Windows
Co-authored-by: Omar Padron <omar.padron@kitware.com>
Use threading.TIMEOUT_MAX when available (#24246)
This value was introduced in Python 3.2. Specifying a timeout greater than
this value will raise an OverflowError.
Co-authored-by: Lou Lawrence <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
CMake - Windows Bootstrap (#25825)
Remove hardcoded cmake compiler (#26410)
Revert breaking cmake changes
Ensure no autotools on Windows
Perl on Windows (#26612)
Python source build windows (#26313)
Reconfigure sysconf for Windows
Python2.6 compatibility
Fixup new sbang tests for Windows
Ruby support (#28287)
Add NASM support (#28319)
Add mock Ninja package for testing
Modifications:
- [x] Removed `centos:6` unit test, adjusted vermin checks
- [x] Removed backport of `collections.OrderedDict`
- [x] Removed backport of `functools.total_ordering`
- [x] Removed Python 2.6 specific skip markers in unit tests
- [x] Fixed a few minor Python 2.6 related TODOs in code
Updating the vendored dependencies will be done in separate PRs
Spack is internally using a patched version of `argparse` mainly to backport Python 3 functionality
into Python 2. This PR makes it such that for the supported Python 3 versions we use `argparse`
from the standard Python library. This PR has been extracted from #25371 where it was needed
to be able to use recent versions of `pytest`.
* Fixed formatting issues when using a pristine argparse.py
* Fix error message for Python 3.X when missing positional arguments
* Account for the change of API in Python 3.7
* Layout multi-valued args into columns in error messages
* Seamless transition in develop if argparse.pyc is in external
* Be more defensive in case we can't remove the file.
This adds RHEL8's `/usr/libexec/platform-python` to Spack's list of preferred
pythons. It will only be used if no other `python` is available in the `PATH`.
We have been testing with this python for a while now, and it seems to do all
that we need. If Spack one day isn't able to work with it, we'll take it out,
but for now it is useful to allow Spack to be used on RHEL8 without a dedicated
`python` installation.
This is to help debug situations like #22383, where python3.4 is
accidentally preferred over python2. It will also help on systems where
there is no python2 available or some other issue.
The SPACK_PYTHON environment variable can be set to a python interpreter to be
used by the spack command. This allows the spack command itself to use a
consistent and separate interpreter from whatever python might be used for package
building.
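For example (interpreter path and spec chosen only for illustration):
```console
$ SPACK_PYTHON=/usr/bin/python3 spack spec zlib
$ export SPACK_PYTHON=/usr/bin/python3   # or set it once for the whole shell
```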
- [x] add `concretize.lp`, `spack.yaml`, etc. to licensed files
- [x] update all licensed files to say 2013-2021 using
`spack license update-copyright-year`
- [x] appease mypy with some additions to package.py that were needed
  for oneapi.py
I lost my mind a bit after getting the completion stuff working and
decided to get Mypy working for spack as well. This adds a
`.mypy.ini` that checks all of the spack and llnl modules, though
not yet packages, and fixes all of the identified missing types and
type issues for the spack library.
In addition to these changes, this includes:
* rename `spack flake8` to `spack style`
Aliases flake8 to style, and just runs flake8 as before, but with
a warning. The style command runs both `flake8` and `mypy` in
sequence. Added --no-<tool> options to turn off one or the other; both
are on by default (see the example after this list). Fixed two issues
caught by the tools.
* stub typing module for python2.x
We don't support typing in Spack for python 2.x. To allow 2.x to
handle `import typing` and `from typing import ...` without a
try/except dance for old versions, this adds a stub module *just*
for python 2.x. Doing it this way means we can only reliably
use all type hints in python3.7+, and .mypy.ini has been updated to
reflect that.
* add non-default black check to spack style
This is a first step to requiring black. It doesn't enforce it by
default, but it will check it if requested. Currently enforcing the
line length of 79 since that's what flake8 requires, but it's a bit odd
for a black formatted project to be quite that narrow. All settings are
in the style command since spack has no pyproject.toml and I don't
want to add one until more discussion happens. Also re-format
`style.py` since it no longer passed the black style check
with the new length.
* use style check in github action
Update the style and docs action to use `spack style`, adding in mypy
and black to the action even if it isn't running black right now.
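For example, assuming the tool-disabling flags follow the `--no-<tool>`
naming described above:
```console
$ spack style             # run flake8 and mypy in sequence
$ spack style --no-mypy   # run only flake8
```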
Spack creates a separate process to do package installation. Different
operating systems and Python versions use different methods to create
it but up until Python 3.8 both Linux and Mac OS used "fork" (which
duplicates process memory, file descriptor table, etc.).
Python >= 3.8 on Mac OS prefers creating an entirely new process
(referred to as the "spawn" start method) because "fork" was found to
cause issues (in other words "spawn" is the default start method used
by multiprocessing.Process). Spack was dependent on the particular
behavior of fork to replicate process memory and transmit file
descriptors.
This PR refactors the Spack internals to support starting a child
process with the "spawn" method. To achieve this, it makes the
following changes:
- ensure that the package repository and other global state are
transmitted to the child process
- ensure that file descriptors are transmitted to the child process in
a way that works with multiprocessing and spawn
- make all the state needed for the build process and tests picklable
(package, stage, etc.)
- move a number of locally-defined functions into global scope so that
they can be pickled
- rework tests where needed to avoid using local functions
This PR also reworks sbang tests to work on macOS, where temporary
directories are deeper than the Linux sbang limit. We make the limit
platform-dependent (macOS supports 512-character shebangs)
See: #14102
`sbang` now lives at https://github.com/spack/sbang, and it has its own
test suite that's more extensive than what's in Spack. We'll leave sbang
tests to sbang from now on, and just vendor `bin/sbang` directly.
Remaining `sbang` tests have to do with patching files, not with
`sbang`'s functionality.
This update also fixes a bug with `sbang` and multiple command line
arguments that was introduced in #19529. See:
* https://github.com/spack/sbang/pull/1
* https://github.com/spack/sbang/pull/2
- [x] include latest `sbang` from https://github.com/spack/sbang
- [x] remove old `sbang` tests from Spack
- [x] update `COPYRIGHT` and `cmd/license.py`
`sbang` was previously a bash script but did not need to be. This
converts it to a plain old POSIX shell script and adds some options. This
also allows us to simplify sbang shebangs to `#!/bin/sh /path/to/sbang`
instead of `#!/bin/bash /path/to/sbang`.
The new script passes shellcheck (with a few exceptions noted in the file)
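For reference, a sketch of what a patched script header looks like under this
scheme (paths are made up for illustration; `sbang` re-execs the interpreter
named on the second shebang line):
```shell
#!/bin/sh /path/to/sbang
#!/very/long/spack/install/prefix/python/bin/python3
#
# rest of the original script follows
```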
- [x] `SBANG_DEBUG` env var enables printing what *would* be executed
- [x] `sbang` checks whether it has been passed an option and fails gracefully
- [x] `sbang` will now fail if it can't find a second shebang line, or if
the second line happens to be sbang (avoid infinite loops)
- [x] add more rigorous tests for `sbang` behavior using `SBANG_DEBUG`
PHP supports an initial shebang, but its comment syntax can't handle our 2-line
shebangs. So, we need to embed the 2nd-line shebang comment to look like a
PHP comment:
<?php #!/path/to/php ?>
This adds patching support to the sbang hook and support for
instrumenting php shebangs.
This also patches `phar`, which is a tool used to create php packages.
`phar` itself has to add sbangs to those packages (as phar archives
apparently contain UTF-8, as well as binary blobs), and `phar` sets a
checksum based on the contents of the package.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
The current implementation of `spack-python` will leave an extra shell
around while it runs. That shell should really replace itself with
spack.
- [x] add exec to spack-python script
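A minimal sketch of the idea, with the path handling guessed for illustration
(the real `spack-python` wrapper may locate spack differently):
```shell
#!/bin/sh
# Without exec, this shell would linger as the parent of the spack process;
# exec makes the shell replace itself with spack instead.
exec "$(dirname "$0")/spack" python "$@"
```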
The perl binary can also be called `perlX.Y.Z` if using a development
build or simply using the versioned binary.
We were also dropping all sbang arguments, since `exec $interpreter_v`
was only using the first element of the `interpreter_v` array.
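A hedged sketch of the fix, reusing the variable name from the description
above (the real sbang code may differ):
```shell
# Before: only the first element of interpreter_v was used, and the remaining
# arguments passed to sbang were dropped:
#   exec $interpreter_v
# After: expand the full interpreter line and forward all arguments:
exec "${interpreter_v[@]}" "$@"
```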
Rework Spack's continuous integration workflow to be environment-based.
- Add the `spack ci` command, which replaces the many scripts in `bin/` (see the example after this list)
- `spack ci` decouples the CI workflow from the spack instance:
- CI is defined in a spack environment
- environment is in its own (single) git repository, separate from Spack
- spack instance used to run the pipeline is up to the user
- A new `gitlab-ci` section in environments allows users to configure how
specs in the environment should be mapped to runners
- Compilers can be bootstrapped in the new pipeline workflow
- Add extensive documentation on pipelines (see `pipelines.rst` for further details)
- Add extensive tests for pipeline code
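For example, from within an environment that has a `gitlab-ci` section, a
pipeline can be generated with something like the following (exact flags may
vary; see the pipelines documentation):
```console
$ spack env activate .
$ spack ci generate --output-file .gitlab-ci.yml
```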
This extends Spack so that it can fetch sources and binaries from, push sources and binaries to, and index the contents of mirrors hosted in an S3 bucket.
High level to-do list:
- [x] Extend mirrors configuration to add support for `file://`, and `s3://` URLs.
- [x] Ensure all fetching, pushing, and indexing operations work for `file://` URLs.
- [x] Implement S3 source fetching
- [x] Implement S3 binary mirror indexing
- [x] Implement S3 binary package fetching
- [x] Implement S3 source pushing
- [x] Implement S3 binary package pushing
Important details:
* refactor URL handling to handle S3 URLs and mirror URLs more gracefully.
- updated parse() to accept already-parsed URL objects. An equivalent object
is returned with any extra S3-related attributes intact. Objects created with
urllib can also be passed, and the additional S3 handling logic will still be applied.
* update mirror schema/parsing (mirror can have separate fetch/push URLs)
* implement s3_fetch_strategy/several utility changes
* provide more feature-complete S3 fetching
* update buildcache create command to support S3
* Move the core logic for reading data from S3 out of the s3 fetch strategy and into
the s3 URL handler; the s3 fetch strategy now calls into `read_from_url()`. Since
read_from_url can now handle S3 URLs, the S3 fetch strategy is redundant. It's
not clear whether the ideal design is to have S3 fetching functionality in a fetch
strategy, directly implemented in read_from_url, or both.
* expanded what can be passed to `spack buildcache` via the -d flag: In addition
to a directory on the local filesystem, the name of a configured mirror can be
passed, or a push URL can be passed directly.
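For illustration, with a hypothetical bucket `my-bucket` and spec `zlib`, the
new URL and `-d` handling allows invocations like:
```console
$ spack mirror add my-s3-mirror s3://my-bucket/mirror
$ spack buildcache create -d /path/to/local/mirror zlib   # local directory
$ spack buildcache create -d my-s3-mirror zlib            # configured mirror name
$ spack buildcache create -d s3://my-bucket/mirror zlib   # push URL
```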
The Python landscape is going to be changing in 2020, and Python 2 will
be end of life. Spack should *prefer* Python 3 to Python 2 by default,
but we still need to run on systems that only have Python2 available.
This is trickier than it sounds, as on some systems, the `python` command
is `python2`; on others it's `python3`, and RHEL8 doesn't even have the
`python` command. Instead, it makes you choose `python3` or
`python2`. You can thus no longer make a simple shebang to handle all the
cases.
This commit makes the `spack` script bilingual. It is still valid
Python, but its shebang is `#!/bin/sh`, and it has a tiny bit of shell
code at the beginning to pick the right python and execute itself with
what it finds.
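A rough sketch of the trick (not the exact contents of `bin/spack`; the
interpreter preference order and error handling here are illustrative):
```shell
#!/bin/sh
# -*- python -*-
#
# The next line is a no-op in sh (it runs the ":" builtin), but in Python it
# opens a string literal that hides the shell code from the interpreter.
""":"
# Shell: find a python and re-execute this same file with it.
for cmd in python3 python python2; do
    if command -v "$cmd" > /dev/null 2>&1; then
        exec "$cmd" "$0" "$@"
    fi
done
echo "Error: no python interpreter was found" >&2
exit 1
":"""
# Python: everything from here down is ordinary Python code.
```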
This has a lot of advantages. I think this will help ensure that Spack
works well in Python3 -- there are cases where we've missed things
because Python2 is still the default `python` on most systems. Also,
with this change, you do not lose the ability to execute the `spack`
script directly with a python interpreter. This is useful for forcing
your own version of python, running coverage tools, and running profiling
tools. i.e., these will not break with this change:
```console
$ python2 $(which spack) <args>
$ coverage run $(which spack) <args>
$ pyinstrument $(which spack) <args>
```
These would not work if we split `spack` into a python file and a shell
script (see #11783). So, this gives us the best of both worlds. We get
to control our interpreter *and* remain a mostly pure python executable.
- Delete references to ruamel.yaml at Spack start-up, if they are present
- ruamel.yaml generates a .pth file when installed via pip that has the
effect of always preferring the version of this package installed at
site scope (effectively preventing us from vendoring it).
- This mechanism triggers when implicitly importing the 'site' module
when the python interpreter is started. In this PR we explicitly delete
references to 'ruamel.yaml' and 'ruamel' in sys.modules, if any, after
we set 'sys.path' to search from the directory where we store vendored
packages. This ensures that the imports after those statements will be
done from our vendored version.
- See #9206 for further details
- remove the old LGPL license headers from all files in Spack
- add SPDX headers to all files
- core and most packages are (Apache-2.0 OR MIT)
- a very small number of remaining packages are LGPL-2.1-only
* update help of `clean --all` to include `-p`
* remove old orphaned `.pyc` removal
* restrict removal of orphaned pyc files to `lib/spack` and `var/spack`
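For example (a sketch of the relevant invocations):
```console
$ spack clean -p      # remove orphaned .pyc files under lib/spack and var/spack
$ spack clean --all   # now documented to include -p
```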