Set up the Nx and EXLA libraries to run Nx/Axon with CUDA


Steps to set up and run machine learning with Axon, or a simple Nx script, using EXLA on CUDA (GPU).

Set up CUDA on the local machine.

If you want to run a model on the GPU (Linux/Ubuntu), set up the CUDA environment with the following steps.

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda-repo-ubuntu2204-11-8-local_11.8.0-520.61.05-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2204-11-8-local_11.8.0-520.61.05-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2204-11-8-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda

Note: remember to check that the installed CUDA driver version matches the toolkit version with these commands:

nvcc --version
nvidia-smi

Set up Livebook locally for easy access to your local environment.

git clone https://github.com/livebook-dev/livebook.git
cd livebook
mix deps.get --only prod

# Run the Livebook server
MIX_ENV=prod mix phx.server

Access Livebook from the URL printed in the terminal (typically http://localhost:8080 with an auth token appended).

(Screenshot: Livebook site)

Create and set up a new Livebook notebook.

The setup cell in Livebook for running XLA with CUDA:

Mix.install(
  [
    # ...
    {:nx, "~> 0.7"},
    {:exla, "~> 0.7"}
  ],
  config: [
    nx: [
      default_backend: EXLA.Backend
    ]
  ],
  system_env: %{"XLA_TARGET" => "cuda118"}
)
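
After the setup cell finishes, a quick sanity check (a minimal sketch, assuming it runs in a new cell of the same notebook) is to create a tensor and inspect it; if the CUDA build of XLA was fetched correctly, the inspect output should report the EXLA backend on the cuda device:

# The inspected backend should read EXLA.Backend<cuda:0, ...>.
tensor = Nx.iota({2, 3})
IO.inspect(tensor)

# Run a small computation to confirm the GPU path works end to end.
Nx.sum(Nx.multiply(tensor, tensor))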

Add the EXLA compiler option in Axon if needed.

Axon.Loop.run(test_pipeline, trained_model_state, compiler: EXLA)
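
For context, this call usually sits at the end of an evaluation loop; a minimal sketch (model, test_pipeline, and trained_model_state are assumed to come from your own training code) looks like:

# Evaluate a trained model; compiler: EXLA JIT-compiles the loop with XLA.
model
|> Axon.Loop.evaluator()
|> Axon.Loop.metric(:accuracy)
|> Axon.Loop.run(test_pipeline, trained_model_state, compiler: EXLA)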
