NVIDIA CUDA Setup on ServerMO

How to Install NVIDIA Drivers & CUDA on Ubuntu 24.04

The Enterprise Guide. System-Wide Config, No Bloatware, 100% Production Ready.

Skip this Tutorial? (The Easy Way)

Installing drivers manually involves reboots and risk. If you mess up, you might face a "Boot Loop" or version conflicts.

Let Our Experts Handle It

Don't want to deal with the terminal? We've got you covered.

Simply open a Support Ticket or request it in your Order Notes. Our engineering team will professionally install Ubuntu 24.04 + CUDA 13.1 + Docker for you, ensuring your H100 or RTX server is delivered 100% AI-Ready.

Prefer to configure it yourself? Follow the Enterprise Standard guide below.

Phase 1: Pre-Flight Check (Prerequisites)

Before we start, verify your hardware is detected.

lspci | grep -i nvidia
# Output should confirm: "NVIDIA Corporation H100" or "RTX 4090"

Also, ensure you have the necessary build tools installed:

sudo apt update && sudo apt upgrade -y
sudo apt install build-essential software-properties-common vulkan-tools curl wget -y
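The NVIDIA driver is built as a kernel module via DKMS, so headers matching your running kernel must be present. A quick hedge against a common failure mode (the `linux-headers-$(uname -r)` package name is the standard Ubuntu convention):

```shell
# DKMS needs headers for the *running* kernel; $(uname -r) expands to
# something like 6.8.0-xx-generic. Usually already installed, but cheap to confirm.
sudo apt install -y "linux-headers-$(uname -r)"
```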

Step 1: Clean Slate (Remove Old Junk)

Old installations can conflict with the new setup. Wipe them clean:

sudo apt-get --purge remove "*nvidia*" "cuda*" "*cublas*" -y
sudo apt-get autoremove -y

Disable Nouveau: The default open-source driver often blocks installation.

sudo bash -c "echo 'blacklist nouveau' > /etc/modprobe.d/blacklist-nouveau.conf"
sudo bash -c "echo 'options nouveau modeset=0' >> /etc/modprobe.d/blacklist-nouveau.conf"
sudo update-initramfs -u
sudo reboot

Wait for the server to reboot. After the reboot, log back in via SSH and continue with Step 2.

Step 2: Add NVIDIA Network Repository

We use the official repo to ensure you get the Data Center drivers, not the generic Ubuntu ones.

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update

Step 3: Install (Choose Your Path)

Most tutorials tell you to install everything. Don't: there is no reason to burn 4GB+ of disk on compilers and debuggers if you only run inference.

Option A: Production Server (Runtime Only)

Use this if you are just running AI Models (Inference). It installs only the drivers and libraries needed to run code.

sudo apt-get install -y cuda-drivers cuda-runtime-13-1

Option B: Development Server (Full Toolkit)

Use this if you are compiling code or developing models. Includes nvcc and debugging tools.

sudo apt-get install -y cuda-drivers cuda-toolkit-13-1
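Not sure which path a server ended up on? A quick sketch to tell the two apart (the `/usr/local/cuda-13.1` path assumes the version installed above; adjust if yours differs):

```shell
# The full toolkit ships the nvcc compiler; the runtime-only install does not.
if command -v nvcc >/dev/null 2>&1 || [ -x /usr/local/cuda-13.1/bin/nvcc ]; then
  echo "full toolkit detected (nvcc present)"
else
  echo "runtime-only install (no nvcc) -- fine for inference workloads"
fi
```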

Step 4: System-Wide Environment Setup

Stop editing .bashrc! That is a rookie mistake. It only works for your user.

We will configure this System-Wide so Cron Jobs, Jenkins, and other users can access the GPU without errors.

# Create a profile script in /etc/profile.d/
sudo bash -c 'echo "export PATH=/usr/local/cuda-13.1/bin:\$PATH" > /etc/profile.d/cuda.sh'
sudo bash -c 'echo "export LD_LIBRARY_PATH=/usr/local/cuda-13.1/lib64:\$LD_LIBRARY_PATH" >> /etc/profile.d/cuda.sh'

# Make it executable
sudo chmod +x /etc/profile.d/cuda.sh

# Load it immediately
source /etc/profile.d/cuda.sh
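To prove the system-wide setup works beyond your current session, simulate a fresh login shell. `bash -l` sources `/etc/profile.d/*.sh`, so this mimics what cron jobs and other users will actually see:

```shell
# Checking $PATH in your current shell only proves the 'source' worked;
# a login shell proves the profile script is picked up system-wide.
if bash -lc 'echo "$PATH"' | grep -q '/usr/local/cuda-13.1/bin'; then
  echo "CUDA is on the PATH for login shells"
else
  echo "CUDA missing from login-shell PATH -- re-check /etc/profile.d/cuda.sh"
fi
```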

Step 5: Verify Installation

Let's verify the driver communication.

nvidia-smi

If you see your GPU listed, congratulations! You have successfully performed an Enterprise-Grade installation.
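For monitoring scripts, the plain `nvidia-smi` table is awkward to parse. A hedged sketch using its machine-readable query mode (the field names below are standard `nvidia-smi` query options):

```shell
# CSV output is easier to feed into monitoring or cron-based health checks.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv
else
  echo "nvidia-smi not found -- the driver install did not complete"
fi
```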

Bonus: The Docker Setup

For Docker containers to see the GPU, install the NVIDIA Container Toolkit.

# Add NVIDIA Container Toolkit Keyring
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install Toolkit & Configure Docker
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
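A quick smoke test confirms containers can actually see the GPU. The image tag below is illustrative, not prescriptive: pick a CUDA base image from the `nvidia/cuda` repository on Docker Hub that matches the version you installed.

```shell
# If the container prints the same GPU table as the host, passthrough works.
# Tag is an example -- substitute a nvidia/cuda tag matching your install.
sudo docker run --rm --gpus all nvidia/cuda:13.1.0-base-ubuntu24.04 nvidia-smi
```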

Conclusion: Your AI Supercomputer is Ready

Congratulations! You didn't just "install drivers". You built a Production-Grade AI Node capable of Deep Learning and Rendering without limits.

The Old Way (Basic VPS)
  • Shared Resources
  • No GPU Passthrough
  • Slow Training Times
  • Noisy Neighbors
The ServerMO Way (H100 • A100 • RTX 4090 • L40S)
  • Bare Metal Power
  • NVLink Clusters
  • Unmetered Bandwidth

Why struggle with drivers when you can rent a Pre-Configured Cluster?

Deploy massive GPU power instantly. From 8x H100 NVLink clusters to budget-friendly RTX 4090 servers.

Ready to train your model?

Deploy Your GPU Server

CUDA FAQ

Why shouldn't I use .bashrc for CUDA paths?

Setting paths in .bashrc only works for the current user. It breaks cron jobs, system services, and other users. The professional method is to use /etc/profile.d/ for system-wide availability.

What is the difference between Toolkit and Runtime?

The full 'Toolkit' (4GB+) includes compilers (nvcc) and debuggers needed for development. The 'Runtime' is lightweight and contains only the libraries needed to run pre-compiled AI models. Use Runtime for Production servers to save space.

Can ServerMO install this for me?

Yes. Manual installation risks boot loops and version conflicts. Simply request a Managed Installation via a support ticket or order notes, and our engineers will ensure your server is delivered 100% AI-Ready with stable drivers.

Ready to Launch with Unmatched Power?

Deploy blazing-fast 1–100Gbps unmetered servers, high-performance GPU rigs, or game-optimized hosting custom-built for speed, reliability, and scale. Whether it's colocation, compute-intensive tasks, or latency-critical applications, ServerMO delivers. Order now and get online in minutes, fully secured and fully optimized.


Power. Performance. Precision.

99.99% Uptime Guarantee
24/7 Expert Support
Blazing-Fast NVMe SSD
