
Install and Tune PostgreSQL on Ubuntu 24.04 Bare Metal

Escape the default 128MB memory trap. Learn the brutal truths about modern RAM tuning, NVMe WAL separation, and disaster recovery on Ubuntu.

Executive Summary: Honest Engineering

Most online tutorials teach you how to install PostgreSQL, then leave you with a configuration sized for a Raspberry Pi. If you simply run apt install postgresql on a massive 128GB RAM server, PostgreSQL will default to a mere 128MB of shared_buffers for its cache. This guide bridges the gap between a basic installation and production Database Administrator (DBA) reality, stripping away outdated myths (like blindly allocating 25% of RAM, or over-relying on RAID 10) to help you build a modern, high-throughput database architecture.

Phase 1: Enterprise Installation

Operating system repositories often carry outdated versions of PostgreSQL. For production workloads, always add the official PostgreSQL Global Development Group (PGDG) repository to install the latest stable version (e.g., PostgreSQL 16 or 17).

# Import the repository signing key
sudo install -d /usr/share/postgresql-common/pgdg
sudo curl -o /usr/share/postgresql-common/pgdg/apt.postgresql.org.asc --fail https://www.postgresql.org/media/keys/ACCC4CF8.asc

# Add the official repository
sudo sh -c 'echo "deb [signed-by=/usr/share/postgresql-common/pgdg/apt.postgresql.org.asc] https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'

# Update and install PostgreSQL
sudo apt update
sudo apt -y install postgresql postgresql-contrib

Phase 2: The "25% shared_buffers" Myth

You will often read that you should set shared_buffers to 25% of your total RAM. On a 16GB server, this is great advice. On a modern 256GB Bare Metal server, allocating 64GB to shared_buffers is often a mistake that causes inefficient "double-buffering".

Modern DBAs rely heavily on the efficiency of the Linux Kernel Page Cache. Open sudo nano /etc/postgresql/16/main/postgresql.conf and tune with that in mind:

  • shared_buffers: For massive servers (128GB+ RAM), cap this between 16GB to 32GB. Let the Linux Page Cache handle the rest.
  • effective_cache_size: This does NOT allocate memory; it simply tells the query planner how much memory is available in total (OS Cache + shared_buffers). Set this to 75% of your total RAM.
  • work_mem: Memory for sorts and hash operations, allocated per operation, not per connection. Do not set this too high: with work_mem = 256MB and 1,000 active connections each running a large sort, you can consume 256GB of RAM and crash the server. A safe start is 32MB to 64MB.
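Taken together, a starting point for a hypothetical 128GB RAM server might look like the fragment below. The exact values are illustrative assumptions, not drop-in settings; benchmark against your own workload:

```ini
# postgresql.conf — illustrative starting point for a 128GB RAM server
shared_buffers = 16GB            # capped low; the Linux Page Cache handles the rest
effective_cache_size = 96GB      # ~75% of RAM; a planner hint only, allocates nothing
work_mem = 64MB                  # per sort/hash operation — keep conservative
```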

Pro-Tip: Connection Pooling (PgBouncer)

To prevent the work_mem OOM (Out-of-Memory) crash mentioned above, never let your application connect directly to PostgreSQL. Always install a lightweight connection pooler like PgBouncer in front of your database to queue and multiplex connections.
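A minimal PgBouncer sketch is shown below, assuming a database named production_db on localhost; the file paths, pool sizes, and client limits are illustrative assumptions:

```ini
; /etc/pgbouncer/pgbouncer.ini — minimal transaction-pooling sketch
[databases]
production_db = host=127.0.0.1 port=5432 dbname=production_db

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction        ; release server connections between transactions
max_client_conn = 2000         ; clients PgBouncer will accept
default_pool_size = 20         ; actual PostgreSQL connections per database/user pair
```

With transaction pooling, thousands of application clients share a few dozen real server connections, so the worst-case work_mem multiplication stays bounded.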

# Always restart the service after modifying postgresql.conf
sudo systemctl restart postgresql

Phase 3: NVMe IOPS & WAL Separation

PostgreSQL default settings assume you are running on slow, spinning Hard Disk Drives (HDD). When using Enterprise NVMe SSDs, applying old-school RAID 10 logic is often overkill for pure performance, as a single NVMe drive can easily saturate the PCIe bus.

The true architectural secret to database speed is physically separating your WAL (Write-Ahead Log). Run your main database on one NVMe drive, and point your WAL directory to a completely separate, dedicated NVMe drive. This eliminates disk contention during heavy write operations.
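One common way to do this is to move pg_wal onto the dedicated drive and symlink it back. A sketch follows, assuming the cluster is stopped and the dedicated drive is mounted (the real paths, e.g. /var/lib/postgresql/16/main and /mnt/nvme-wal, are assumptions; for a safe dry run the variables default to throwaway temp directories):

```shell
# Relocate pg_wal to a dedicated NVMe drive via symlink. Run with the cluster STOPPED.
# PGDATA and WAL_DEST default to temp dirs so this can be dry-run safely; in production,
# set e.g. PGDATA=/var/lib/postgresql/16/main WAL_DEST=/mnt/nvme-wal/pg_wal and run as postgres.
PGDATA="${PGDATA:-$(mktemp -d)/main}"
WAL_DEST="${WAL_DEST:-$(mktemp -d)/pg_wal}"
mkdir -p "$PGDATA/pg_wal"            # stands in for the existing WAL directory

mv "$PGDATA/pg_wal" "$WAL_DEST"      # 1. move WAL onto the dedicated drive
ln -s "$WAL_DEST" "$PGDATA/pg_wal"   # 2. symlink back so PostgreSQL finds it in place
ls -ld "$PGDATA/pg_wal"              # should now show a symlink -> dedicated drive
```

In production, wrap this with sudo systemctl stop postgresql / start postgresql, and perform the mv and ln as the postgres user so ownership is preserved.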

# In postgresql.conf, apply these modern NVMe optimizations:

# Default is 4.0. Lower to 1.1 to tell the planner random reads are nearly as fast as sequential.
random_page_cost = 1.1

# Increase concurrent I/O requests for enterprise NVMe drives
effective_io_concurrency = 200

# Optimize Write-Ahead Logging (WAL) for high throughput
wal_buffers = 16MB
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9  # spread checkpoint writes across the interval
max_wal_size = 4GB

Phase 4: Linux OS Huge Pages (With Warnings)

When you configure a large shared_buffers (e.g., 16GB+), the kernel must track that memory as millions of standard 4KB pages, inflating page tables and TLB misses. By enabling Huge Pages (2MB per page), you measurably reduce this CPU overhead during memory lookups.

However, this is not a magic bullet, and it comes with a severe risk:

Critical Startup Warning:

In your postgresql.conf, huge_pages = try is the safe default. If you force it to huge_pages = on, and you miscalculate the vm.nr_hugepages value in your Linux /etc/sysctl.conf, PostgreSQL will completely fail to start. Ensure you have enough contiguous free memory before enforcing this at the OS level.
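A rough sizing sketch for vm.nr_hugepages follows; the 16GB shared_buffers figure and the ~5% shared-memory overhead factor are assumptions. On PostgreSQL 15+ you can instead ask the server for the exact figure with postgres -C shared_memory_size_in_huge_pages:

```shell
# Estimate vm.nr_hugepages for 2MB huge pages: shared_buffers plus ~5% shared-memory overhead
SHARED_BUFFERS_MB=16384
HUGEPAGE_SIZE_MB=2
NR_HUGEPAGES=$(( SHARED_BUFFERS_MB * 105 / 100 / HUGEPAGE_SIZE_MB ))
echo "vm.nr_hugepages = $NR_HUGEPAGES"   # prints: vm.nr_hugepages = 8601
# To apply: echo "vm.nr_hugepages = $NR_HUGEPAGES" | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
```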

Phase 5: Hardening Network Security

Many basic tutorials instruct users to set listen_addresses = '*'. Do not do this on a public network. Exposing port 5432 to the entire internet guarantees brute-force attacks.

Best Practices for Remote Access:

  • Bind the listener only to your private VPC IP or a VPN interface: listen_addresses = '10.0.0.5'.
  • If you must allow external connections, strictly whitelist the incoming IPs in /etc/postgresql/16/main/pg_hba.conf.
  • Always use modern cryptographic hashing for authentication. Ensure your pg_hba.conf utilizes scram-sha-256 instead of the outdated md5 or insecure trust methods.
# Example pg_hba.conf hardened entry:
# TYPE  DATABASE        USER            ADDRESS                 METHOD
host    production_db   app_user        192.168.1.50/32         scram-sha-256
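Note that scram-sha-256 in pg_hba.conf only controls how clients authenticate; the server must also store SCRAM password hashes. A matching postgresql.conf fragment (any password set while md5 was active must be re-issued with ALTER USER or \password to be re-hashed):

```ini
# postgresql.conf — hash all newly set passwords with SCRAM
password_encryption = scram-sha-256
```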

After configuring pg_hba.conf, explicitly allow the port through the Uncomplicated Firewall (UFW) only for trusted IP subnets:

# Allow PostgreSQL port (5432) ONLY from your application server's IP
sudo ufw allow from 192.168.1.50 to any port 5432 proto tcp
sudo ufw enable

Phase 6: The Bare Metal Reality (Disaster Recovery)

The ultimate trade-off for unthrottled Bare Metal performance is responsibility. Unlike managed DBaaS platforms that offer automated one-click restores, a Bare Metal DBA is solely responsible for disaster recovery. A single accidental DROP TABLE can be fatal without a proper backup strategy.

  • Logical Backups: Use pg_dump for daily snapshots of smaller databases or specific tables.
  • Point-in-Time Recovery (PITR): For enterprise workloads, you must use tools like pgBackRest or WAL-G to enable continuous WAL archiving. This allows you to restore the database to any exact second before a crash.
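A minimal pgBackRest sketch is shown below, assuming a stanza named main and a repository on a separate backup volume; the paths and retention policy are illustrative assumptions:

```ini
; /etc/pgbackrest/pgbackrest.conf — illustrative stanza configuration
[global]
repo1-path = /mnt/backup/pgbackrest    ; keep this OFF the database NVMe drive
repo1-retention-full = 2               ; keep two full backups plus their WAL

[main]
pg1-path = /var/lib/postgresql/16/main

; Matching postgresql.conf lines to enable continuous WAL archiving:
;   archive_mode = on
;   archive_command = 'pgbackrest --stanza=main archive-push %p'
```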

Critical DBA Warning:

Never store your database backups on the same NVMe drive as your active database. Always stream your WAL archives and base backups to off-site object storage or a physically distinct secondary server.

Phase 7: Cloud IOPS vs. Bare Metal Economics

A common misconception is that public cloud environments (AWS, GCP, Azure) are inherently slow. That is false. Modern clouds can achieve massive IOPS and sustained high-throughput transactions using "Provisioned IOPS" (io2 Block Express) or Dedicated Hosts.

The real issue is the astronomical cost. To get the equivalent I/O performance of a single local NVMe drive on the cloud, you will pay massive premiums for provisioned storage and face unpredictable network egress fees during global database replication.

If your application relies on high-speed data ingestion (TimescaleDB), complex JOINs, or heavy AI vector searches (pgvector), you need raw unthrottled infrastructure. When architecting for global user bases, many DBAs strategically deploy their primary write-nodes on enterprise dedicated servers in the USA to leverage premium Tier-1 network blending for optimal transatlantic routing. With 100% bare metal NVMe power, massive ECC RAM, and unmetered global ports, you receive the raw performance of the cloud's highest tiers at a fraction of the economic cost.

Enterprise PostgreSQL Tuning FAQ

Should I always set shared_buffers to 25% of my RAM?

No. While 25% is a classic rule of thumb for smaller servers, on machines with 128GB or 256GB of RAM, capping shared_buffers between 16GB and 32GB is generally recommended. PostgreSQL relies heavily on the Linux OS Page Cache, and setting shared_buffers too high can lead to inefficient double-buffering.

Why shouldn't I just increase max_connections to 5000?

PostgreSQL uses a process-based architecture: every connection is served by its own heavyweight OS process. Running 5,000 active connections causes severe CPU context-switching and RAM exhaustion, crashing your server. Keep max_connections low (e.g., 200-500) and use PgBouncer to multiplex thousands of client requests into those few database connections.

Why did the Linux OOM-Killer suddenly terminate my database?

This fatal crash usually happens when work_mem is set too high. Because work_mem is allocated per operation (not per connection), a single complex query with multiple JOINs or sorts can consume gigabytes of RAM. If multiple users run complex queries simultaneously, you will exhaust your physical RAM, triggering the Linux Out-Of-Memory (OOM) killer.
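The multiplication is worth making explicit. A back-of-envelope sketch, where the connection count and operations-per-query figure are hypothetical:

```shell
# Worst-case sort memory: connections x sort/hash operations per query x work_mem
CONNECTIONS=500
OPS_PER_QUERY=4          # e.g. several sorts and hash joins inside one complex query
WORK_MEM_MB=256
PEAK_MB=$(( CONNECTIONS * OPS_PER_QUERY * WORK_MEM_MB ))
echo "potential peak: $(( PEAK_MB / 1024 ))GB"   # prints: potential peak: 500GB
```

Even at a fraction of this worst case, physical RAM is exhausted long before max_connections is reached, which is why conservative work_mem plus pooling matters.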

Why does my database freeze during heavy write operations (Bulk Inserts)?

You are likely experiencing aggressive checkpointing. By default, PostgreSQL flushes dirty buffers to disk too frequently. To fix these I/O spikes on NVMe drives, dramatically increase your max_wal_size (e.g., to 16GB or 32GB) and ensure checkpoint_completion_target is set to 0.9. This spreads the massive write load over a longer period.

Is RAID 10 required for PostgreSQL on NVMe?

For modern NVMe SSDs, RAID 10 is often not required for pure speed, as NVMe drives are fast enough to saturate the PCIe bus independently. A better performance strategy is using RAID 1 for redundancy and physically separating your WAL (Write-Ahead Log) to a dedicated NVMe drive.

Is it safe to set listen_addresses to '*' in postgresql.conf?

No. Setting it to wildcard ('*') opens your database port (5432) to the entire public internet, inviting brute-force attacks. You should only bind it to internal IP addresses, whitelist via UFW, or secure your connection through a VPN tunnel like WireGuard.

Ready to Launch with Unmatched Power?

Deploy blazing-fast 1–100Gbps unmetered servers, high-performance GPU rigs, or game-optimized hosting custom-built for speed, reliability, and scale. Whether it’s colocation, compute-intensive tasks, or latency-critical applications, ServerMO delivers. Order now and get online in minutes, fully secured, fully optimized.
