I wrote these instructions as part of "installing PyTorch with CUDA 12.1.1".
Magma is a dependency of PyTorch, and it is sensitive to the CUDA version.
Anyway, if you still need to compile it from source, here's how.
Clone Magma:
```
# /etc/security/limits.conf
* soft nofile 999999
* hard nofile 999999
root soft nofile 999999
root hard nofile 999999
```

```
# /etc/sysctl.conf
# sysctl for maximum tuning
```
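After logging back in, you can check whether the raised `nofile` limits took effect for the current process with the standard library (a quick sketch; nothing here is specific to the config above):

```python
import resource

# Query this process's soft (current) and hard (maximum) open-file limits.
# limits.conf changes only apply to sessions started after the edit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")
```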
```python
import torchvision

# Invert torchvision's Normalize: Normalize computes y = (x - mean) / std,
# so a Normalize with mean' = -mean/std and std' = 1/std maps y back to x.
class UnNormalize(torchvision.transforms.Normalize):
    def __init__(self, mean, std, *args, **kwargs):
        new_mean = [-m / s for m, s in zip(mean, std)]
        new_std = [1 / s for s in std]
        super().__init__(new_mean, new_std, *args, **kwargs)

# imagenet_norm = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
# UnNormalize(**imagenet_norm)
```
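As a sanity check on the algebra (plain Python, no torch needed): since Normalize computes `y = (x - m) / s`, applying the same formula with mean `-m/s` and std `1/s` recovers `x`:

```python
def normalize(x, m, s):
    # torchvision-style channel normalization: y = (x - m) / s
    return (x - m) / s

def unnormalize(y, m, s):
    # same formula with the UnNormalize parameters: mean' = -m/s, std' = 1/s
    return (y - (-m / s)) / (1 / s)

# Round-trip with the ImageNet red-channel stats
x, m, s = 0.75, 0.485, 0.229
assert abs(unnormalize(normalize(x, m, s), m, s) - x) < 1e-12
```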
```python
"""
Extract the contents of a `run-{id}.wandb` database file.

These database files are stored in a custom binary format. Namely, the database
is a sequence of wandb 'records' (each representing a logging event of some
type, including compute stats, program outputs, experimental metrics, wandb
telemetry, and various other things). Within these records, some data values
are encoded with JSON. Each record is encoded with protobuf and stored over one
or more blocks in a LevelDB log. The result is the binary .wandb database file.
"""
```
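For context, the LevelDB log framing those records sit in can be parsed in a few lines. This is a sketch of the framing only (7-byte physical-record headers: 4-byte CRC, 2-byte little-endian length, 1-byte type; 32 KiB blocks); the real wandb reader also verifies CRCs and decodes the protobuf payloads, and `iter_records` is a name I've made up:

```python
import struct

BLOCK_SIZE = 32 * 1024  # LevelDB log block size
# physical record types: a logical record is either FULL,
# or a FIRST fragment, zero or more MIDDLEs, and a LAST
FULL, FIRST, MIDDLE, LAST = 1, 2, 3, 4

def iter_records(data: bytes):
    """Yield the payload of each logical record in a LevelDB-style log.

    CRC verification is skipped in this sketch.
    """
    pos = 0
    fragments = []
    while pos + 7 <= len(data):
        # a block never ends with a partial header, so the writer pads
        # the last few bytes of each block; skip that trailer
        if BLOCK_SIZE - (pos % BLOCK_SIZE) < 7:
            pos += BLOCK_SIZE - (pos % BLOCK_SIZE)
            continue
        crc, length, rtype = struct.unpack_from("<IHB", data, pos)
        pos += 7
        payload = data[pos:pos + length]
        pos += length
        if rtype == FULL:
            yield payload
        elif rtype == FIRST:
            fragments = [payload]
        elif rtype == MIDDLE:
            fragments.append(payload)
        elif rtype == LAST:
            fragments.append(payload)
            yield b"".join(fragments)
            fragments = []
```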
```bash
#!/bin/bash
# Downloads and applies a patch from Drupal.org.
if [ -z "$1" ]; then
    echo "You need to supply a URL to a patch file."
    exit 1
fi
URL=$1
```
```
; /usr/share/pulseaudio/alsa-mixer/profile-sets/astro-a50-gen4.conf
[General]
auto-profiles = yes

[Mapping analog-voice]
description = Voice
device-strings = hw:%f,0,0
channel-map = left,right
paths-output = steelseries-arctis-output-chat-common
```
```bash
# Modify apt sources lists
cd /etc/apt/sources.list.d/
sudo rm gds-11-7.conf cuda-12-3.conf cuda-12-2.conf cuda-12-1.conf 989_cuda-11.conf cuda-ubuntu2004-11-7-local.list

# Modify apt preferences
cd /etc/apt/preferences.d
sudo rm cuda-repository-pin-600 nvidia-fabricmanager

# Startup shell environment variables
sudo vim /etc/profile.d/dlami.sh  # comment out both
```
```bash
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
cd nv-codec-headers
vi Makefile  # change the first line to PREFIX = ${CONDA_PREFIX}
make install
cd ..

git clone https://git.ffmpeg.org/ffmpeg.git
cd ffmpeg
git checkout n4.2.2
conda install nasm
```
```python
"""
Creates an HDF5 file with a single dataset of shape (channels, n),
filled with random numbers.

Writing to the different channels (rows) is parallelized using MPI.

Usage:
    mpirun -np 8 python demo.py
"""
```

Small shell script to run timings with different numbers of MPI processes:
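The row-parallel write in the demo boils down to block-partitioning the channels across ranks. Here is a sketch of just that partitioning logic (pure Python; `channel_range` is a hypothetical helper, and in the real demo each rank would write its slice through h5py's `mpio` driver):

```python
def channel_range(channels, nprocs, rank):
    """Return the half-open row range [lo, hi) that `rank` writes.

    Distributes `channels` rows as evenly as possible over `nprocs`
    ranks; the first `channels % nprocs` ranks get one extra row.
    """
    base, extra = divmod(channels, nprocs)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

# With 10 channels over 4 ranks, the rows split 3, 3, 2, 2.
# Each rank would then do: dset[lo:hi, :] = rng.random((hi - lo, n))
```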
```bash
# Install HDF5 1.13.1 with parallel I/O in ~/local/hdf5,
# using ~/local/build/hdf5 as the build directory.
#
# https://github.com/HDFGroup/hdf5/blob/hdf5-1_13_1/release_docs/INSTALL_parallel
# https://docs.olcf.ornl.gov/software/python/parallel_h5py.html
# https://www.pism.io/docs/installation/parallel-io-libraries.html

version=1.13.1
prefix=$HOME/local/hdf5
build_dir=~/local/build/hdf5
hdf5_site=https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.13
```