* add tensorflow
Change-Id: Id778c68d148cc42f0b478a9d10a8f937cb54cdc6
* make bazel and tensorflow build
Change-Id: Iae9005e8f4dcc8f1ed36ea9337d2430aeebb291f
* fix flake8
Change-Id: Ib05529dd796eab4a8855a5d7775cc4efea8e479d
* 2nd flake8 attempt
Change-Id: I46224be3a374b2a65793048b0c5178ea64adbd78
* replace md5 sums with sha256
* add version 1.13.2
* bazel() -> bazel('build',...
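As a rough illustration of the calling change above, assuming Spack's `Executable` wrapper (the job count and bazel target here are illustrative, not the package's exact invocation):

```python
from spack.util.executable import Executable  # normally implicit in a Spack package's build context

# Hedged sketch: invoke bazel with an explicit 'build' subcommand and
# arguments instead of a bare bazel() call.
bazel = Executable('bazel')
bazel('build', '--jobs=8',  # illustrative option
      '//tensorflow/tools/pip_package:build_pip_package')
```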
* specify versions of bazel dependency
* build with CUDA
* add TODOs
* add more TODOs
* improve enum34 dependency
* py-future is a dependency as of v1.14
* Update var/spack/repos/builtin/packages/tensorflow/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/tensorflow/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/tensorflow/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/tensorflow/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* enable nccl, cuda by default
* explain patches
* add todo
* remove unnecessary copt_flag
* use join
* join argument must be an iterable
* split long line; use same opts for non-cuda build
* without opt flags, configure hangs
* introduce build phases; re-arrange
* undo mistake
* restore unset tmp_path
* as of v1.14, nccl_install_path is parsed correctly, hence change ...prefix.lib to ...prefix
* now, version 1.14 compiles successfully with cuda
* add version 2.1.0
* specify bazel dependency for version 2.1.0-rc0
* account for deprecated bazel opts for v2.1.0-rc0
* disable mkldnn contraction kernel
* Flake8 fixes
* md5 -> sha256
* Fix TF and TF-estimator version deps
* Don't just comment out patch
* Add myself as a maintainer
* Patch py-astor to support newer py-setuptools
* Add more versions and bazel version constraints
* Add a build phase
* Add note about configure interactivity
* dev-build -> build-env
* Disable iOS build
* Use correct optimization flags
* Add variants for all possible features
* nccl isn't always a dependency
* Specify correct dependency versions for each release
* Libs may not be in lib or lib64
* Add py-opt-einsum package
* Add newer version of py-protobuf
* Add newer version of py-wrapt
* Fix Python 2.6 syntax error
* Code review
* Set more env vars for older versions
* Add more env vars, fix bazel versions, add conflicts
* Fix config options
* Specify version that support --config args
* Add py-future dependency for Python 2
* Fix cuda config flag and compute capabilities
* Fix installation on macOS, add unit tests
* Override cuda variant default to True on non-macOS
* Rename tensorflow to py-tensorflow
* Has to extend something
* Fix os.symlink call
* convert cuda_arch values to capabilities
* restore nccl prefix path for v1.13.1
* Revert to v2
* Remove extraneous period
* Add new version of jdk/openjdk
* More stable cuda_arch formatting
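A minimal sketch of the cuda_arch handling described above, assuming Spack's multi-valued `cuda_arch` variant and TensorFlow's `TF_CUDA_COMPUTE_CAPABILITIES` configure variable; `spec` and `env` are the usual arguments available in a package's environment-setup hook, and the exact formatting in the real package may differ:

```python
# Hedged sketch: convert Spack cuda_arch values such as '70' into the
# 'x.y' capability strings TensorFlow's configure step expects.
cuda_arch = spec.variants['cuda_arch'].value             # e.g. ('35', '70')
capabilities = ','.join('{0}.{1}'.format(c[:-1], c[-1]) for c in cuda_arch)
env.set('TF_CUDA_COMPUTE_CAPABILITIES', capabilities)    # e.g. '3.5,7.0'
```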
* Fix bazel unit tests
* Fix symlinking
* Fix unit tests
* +gcp by default until build error figured out
* apply strict constraint checks for patches; otherwise, Spack may incorrectly treat a version range constraint as satisfied when mixing x.y and x.y.z versions
* add mixed version checks to version comparison tests
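To illustrate the mixed x.y / x.y.z situation the stricter checks guard against, here is a hypothetical package fragment (placeholder class name, checksums, and patch file; not a real package):

```python
from spack import *  # package API import of that era; newer Spack uses spack.package


class Example(Package):
    """Hypothetical package showing where the version-range ambiguity arises."""

    version('1.13',   sha256='0' * 64)   # x.y style release (placeholder checksum)
    version('1.13.2', sha256='1' * 64)   # x.y.z style release (placeholder checksum)

    # The range is written against the x.y form; without strict constraint
    # checks it is easy to mis-decide whether 1.13.2 should also receive
    # the patch.
    patch('fix-build.patch', when='@:1.13')
```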
* package for Simmetrix SimModSuite
* simmodsuite: passes flake8
* simmetrix: add version, set cmake prefix path
A given install will use either the libs built on rhel7 or those built on rhel6.
For now, I'm sticking with the non-spack install convention of
placing the libraries into sub-directories named according to their
build process (os + compiler).
* simmetrix: add older version
* simmetrix: set build env paths
easier to build pumi using CMAKE_PREFIX_PATH
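A minimal sketch of the build-env path setup mentioned above, assuming Spack's dependent-environment hook; the real simmodsuite package may set additional paths for its os+compiler lib subdirectories:

```python
# Hedged sketch: expose the SimModSuite prefix to dependents (e.g. pumi)
# through CMAKE_PREFIX_PATH.
def setup_dependent_build_environment(self, env, dependent_spec):
    env.prepend_path('CMAKE_PREFIX_PATH', self.prefix)
```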
* simmetrix: address review comments
* simmetrix: add new version and remove old one
* simmetrix: flake8 fixes
* simmodsuite: oslib var is in self
* simmodsuite: update version and checksum
* simmodsuite: set LD_LIBRARY_PATH for cad kernels
* update license
* update setup_environment calls
* increase indentation for flake8
* python3.8 flake8 fixes
* use spack consistent naming
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* sha256 required, update versions and hashes
* Added build dependency on gawk
* Use virtual dependency
* Added patch to prepare libgpg-error for use with gawk@5
* Added reasoning with link for need for patch
* Add a transaction around repeated calls to `spec.prefix` in the activation process
* cache the computation of home in the python package to speed up setting deps
* ensure that module-scope variables are only set *once* per module
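A minimal sketch of the caching idea from the python-package change above; the attribute name and the cached computation are illustrative, not the actual code:

```python
# Hedged sketch: compute the value once per instance instead of re-deriving
# it (and re-querying spec.prefix) for every dependent.
@property
def home(self):
    if not hasattr(self, '_home'):
        self._home = self.prefix  # the real computation is more involved
    return self._home
```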
* amber: Improved package.py and added version 18
- Added amber 18 with ambertools 19
- Added all available patches
- Added +update variant to use the self update
- Added +openmp variant to get openmp optimizations
- Added +x11 variant when possible
- Split amber 16 and 18 dependencies
- We now detect the compiler type and compile accordingly
- Added cray variant which is a bit special (untested)
- Improved detection of possible cuda versions
- All compilation optimizations +mpi +openmp +cuda are compatible
- Updated to use setup_build_environment(), setup_run_environment()
* dealii: Added 'threads' variant that controls the TBB dependency (#13931)
* dealii: Added 'threads' variant that controls the DEAL_II_WITH_THREADS cmake option and the dependency on Intel TBB
* Update var/spack/repos/builtin/packages/dealii/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* amber: Adding missing flex and bison dependencies
* Removed cray variant; flex and bison are now build-only dependencies
* dealii: Fixed flake8 issues
* amber: corrected typo
* amber: Removed unused variant python
* Created an initial recipe for Sensei
* Cleanup syntax
* Small fixes for the Sensei recipe
* Cosmetic fixes to comply with PEP8
* More cosmetic fixes before PR
* Added more documentation before PR
* Fixed flake8 errors
* Fixes following PR review
* Fixes to pass Flake8 checks
* Some changes following PR review and support for SENSEI 3
* Update var/spack/repos/builtin/packages/sensei/package.py
Co-Authored-By: Axel Huebl <axel.huebl@plasma.ninja>
* Fixed Flake8 errors
Commit 78724357 added versions 2019.5 to 2019.8 but failed to update
the patches for these versions.
1. gcc_generic-pedantic patch -- include this up through 2019.5. This
was fixed in the TBB source tree in 2019.6.
2. tbb_cmakeConfig patch -- this needs to be modified (different file)
for 2019.5 and later.
3. tbb_gcc_rtm_key patch -- replace this with filter_file. This is
simpler and eliminates the need to update the patch whenever the
surrounding context changes.
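A minimal sketch of the filter_file approach from item 3; the file name and strings below are placeholders, not the actual TBB edit:

```python
# Hedged sketch: edit the source in place during the patch phase instead of
# carrying a patch file that breaks whenever its surrounding context changes.
def patch(self):
    filter_file('old_rtm_key_definition',
                'new_rtm_key_definition',
                join_path('build', 'linux.gcc.inc'))  # illustrative file
```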
* don't add the perl bin directory to PATH when setting up the env (this is already handled by Spack core in a way that omits system dirs); also consolidate repeated logic between build/run env setup
* the bin/ dir of each dependency is already added to PATH in Spack core, so there is no need to do this in the Perl package
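A minimal sketch of the consolidation described above, assuming one private helper shared by the dependent build/run hooks; the helper name and the PERL5LIB path are illustrative:

```python
# Hedged sketch: one code path for both environments, with no PATH
# manipulation since Spack core already adds each dependency's bin/ dir.
def setup_dependent_build_environment(self, env, dependent_spec):
    self._perl_env(env)

def setup_dependent_run_environment(self, env, dependent_spec):
    self._perl_env(env)

def _perl_env(self, env):
    env.prepend_path('PERL5LIB', join_path(self.prefix.lib, 'perl5'))
```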
* BLD: enforce C++11 std for boost + xl_r
* the spack `cxxstd` variant is not sufficient to enforce
`-std=c++11` usage in boost compile lines when `xl_r` compiler
spec is in use; while it would be nice if this were fixed
in a boost config file somewhere, for now this patch
allows boost to build on POWER9 with
an %xl_r compiler spec if the user specifies, e.g.:
`spack install boost@1.70.0+mpi cxxstd=11 %xl_r@16.1.1.5`
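One plausible way to scope such a change in boost's package.py is a patch guarded by the affected compiler; the patch file name and version constraint below are placeholders, not the actual fix:

```python
# Hedged sketch: apply the C++11-enforcing patch only for xl_r builds of
# the affected boost release.
patch('boost_xl_cxx11.patch', when='@1.70.0 %xl_r')
```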
* Update var/spack/repos/builtin/packages/boost/package.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
The documentation states that Spack builds R without the recommended
packages, with Spack handling the build of those packages to satisfy
dependencies. From the docs:
> Spack explicitly adds the --without-recommended-packages flag to
> prevent the installation of these packages. Due to the way Spack
> handles package activation (symlinking packages to the R installation
> directory), pre-existing recommended packages will cause conflicts for
> already-existing files. We could either not include these recommended
> packages in Spack and require them to be installed through
> --with-recommended-packages, or we could not install them with R and
> let users choose the version of the package they want to install. We
> chose the latter.
However, this is not what Spack is actually doing. The
`--without-recommended-packages` configure option is not passed to R, and
therefore those packages are built. This prevents R extension activation
from working, as files in the recommended packages installed with R will
block linking of files from the respective `r-` packages.
This PR adds the `--without-recommended-packages` flag to the configure options
of the R package. This will then have the Spack R build match what is
documented.
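A minimal sketch of the described fix, assuming the R package follows Spack's usual `configure_args` convention; the other configure options are elided:

```python
# Hedged sketch: pass the flag the documentation already promises, so the
# recommended packages are not built into the R installation itself.
def configure_args(self):
    args = [
        '--without-recommended-packages',
        # ... existing configure options remain unchanged ...
    ]
    return args
```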