The Signal: A ProTop Blog

Beyond the Basics: The 8-Core Conundrum, The SAN Myth, and The Unbreakable Guide to Database Health

Written by Tom Bascom | January 28, 2026

We're here to talk about keeping your OpenEdge database and the server that runs it healthy. If you're looking for elite performance, you have to start with a good configuration.

At ProTop, we monitor thousands of systems, collecting literally billions (with a 'B') of daily data points to understand application and server performance. This gives us some really deep insight into what configuration options work and which are just expensive mistakes.

If your ultimate goal is performance, this is your blueprint for configuring an OpenEdge database server. If your goal is to save money, well, this might seem like strange advice. Although it is surprising how often making good performance decisions also turns out to be very cost effective!

 

Architectural Mistakes: The Cost of "More"

The biggest drains on performance come from believing you need more hardware or relying on modern infrastructure that fundamentally misunderstands the OpenEdge engine.

1. Stop Buying More Cores: The Speed Limit Problem

There’s a common tendency when you hit a performance wall to think you need to buy more cores. You are probably wrong.

The OpenEdge database engine uses latches (mutex locks): single-threaded control points that safeguard operations against shared memory. They are necessary to protect data integrity and ensure synchronization, but each latch creates an inevitable choke point, like a traffic circle.

  • You cannot fix this control point by adding more lanes (cores). In fact, more lanes make it worse.
  • You fix it by having a higher speed limit (faster cores).

Fewer, faster cores are significantly more efficient: aim for a small number of cores running at a higher clock speed for better throughput.

Within a family of CPUs, the more cores you have, the slower they’re going to run. A 24-core CPU will be slower than an 8-core CPU from the same family and generation.

2. NUMA: The Costly Performance Killer

Your hardware vendor will be happy to sell you NUMA because they think that's how they will make more money. Often, in fact, those "many core" NUMA systems are less expensive than very fast non-NUMA (or single-node NUMA) machines. Vendors have a built-in bias towards "more cores," partly because at the low end they can sell you a lot of very slow cores at quite attractive prices (and probably with pretty good margins).

The thinking is that you get the performance of all of the cores summed up. That thinking does work very well for many common workloads (web servers, for example). But that isn't how things actually work with a database server. The vendors are generally much less familiar with that, so they push what they know.

But NUMA doesn't help your database performance. NUMA spreads the load wider and introduces far memory accesses (memory requests that must cross the interconnect to another node). The coordination this requires adds massive latency and, in practice, your effective throughput is limited to less than the throughput of a single node. All of those extra nodes and their associated cores are just adding useless overhead.

The Shocking Takeaway: In our testing, a single NUMA node delivered ~3 million reads per second. As soon as a core on a second node was used, performance fell right off a cliff to half that. Turning off the 24 excess cores doubled performance.
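On Linux, one way to approximate that "single node" result without physically disabling cores is to pin the database broker to one NUMA node with numactl. This is a sketch: the node number and database path are illustrative assumptions, and you should verify the topology on your own hardware first.

```shell
# Inspect the NUMA topology: how many nodes, which CPUs and how much
# memory belong to each.
numactl --hardware

# Start the broker bound to node 0's CPUs and memory, so shared-memory
# accesses never become "far" (cross-node) accesses.
# (The database path /u1/proddb is a placeholder.)
numactl --cpunodebind=0 --membind=0 proserve /u1/proddb
```

Binding both CPUs and memory matters: pinning CPUs alone still allows the buffer pool to be allocated on a remote node, which reintroduces the far-memory penalty.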

3. There is No High-Performance SAN

This is a phrase I stole: there is no such thing as a high-performance SAN.

The core problem is the latency introduced by layers. The data request must travel through disk controllers, network controllers, switches, and fabric before it even reaches the SAN's read process.

  • All-Flash is in the Wrong Place: An all-flash SAN doesn't solve this because the flash is at the wrong end of that cable.
  • The Speed of Light Barrier: You are fundamentally stuck with the physical speed of light in fiber.

Your fastest storage will always be internal solid state drives (SSD). It will be radically faster than any SAN you can buy. And it’s cheaper because the data doesn't have to travel through all of those layers and across all of those cables.
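You don't have to take this on faith: a quick random-read test against each storage path makes the latency difference visible. Here is a sketch using fio; the file path, test size, and the 8 KB block size are assumptions (match the block size to your database block size).

```shell
# Random 8 KB reads with the filesystem cache bypassed (--direct=1),
# against a test file on the storage being evaluated.
# Run once per candidate mount point (internal SSD vs SAN-backed).
fio --name=randread-test --filename=/u1/fio.testfile \
    --rw=randread --bs=8k --size=1g --direct=1 \
    --ioengine=libaio --runtime=30 --time_based
# Compare the completion latency ("clat") figures between runs: the
# controllers, switches, and fabric show up directly as added latency.
```
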

A Warning About Storage: One of the main reasons people use SANs is ease of administration, which sounds virtuous. But the trade you are making is ease of administration versus performance.

The administrative benefits are awfully vague and unquantified, while the performance impact is easily measured and very significant. On top of that, "ease of administration" sometimes backfires. For instance, we had a customer whose SAN administrator deleted production, not test, during a migration exercise. The lesson: keep your SAN administrator close.

 

The Ideal Configuration Blueprint

If performance is your goal, here is the environment you need to build.

The Ideal Server Spec Sheet

  • Cores: A relatively small number of very fast cores. You need at least two, but you almost certainly don't need more than eight.
  • Memory: Plenty (a terabyte is looking like the low end of a purchase these days). You get a lot of memory at a relatively inexpensive price.
  • Buffer Cache (-B): 25% to 50% of total memory. Big buffer caches are way more performant. If you have -B 100,000, you’re not even trying.
  • Storage: Internal SSD. Radically faster and cheaper than any SAN. Assume data growth is out of control, and buy plenty of storage.
  • Network: Dual fast network interfaces. One for users, one for administrative tasks (replication, backups), plus redundancy.
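To make the buffer cache sizing concrete, here is a minimal startup sketch. The database path and every parameter value are illustrative assumptions for a hypothetical server, not recommendations for your system.

```shell
# Hypothetical server: 256 GB of RAM, 8 KB database block size.
# -B is counted in database blocks, so 25% of RAM works out to:
#   64 GB / 8 KB = 8,388,608 blocks.
# (-spin and -n values below are placeholders; tune with monitoring.)
proserve /u1/proddb -B 8388608 -spin 10000 -n 500
```

Note that -B is specified in database blocks, not bytes, so the right number depends on your block size; a value copied from another site with a different block size can be off by a large factor.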

 

Client-Server Latency Solutions

The only thing slower than the SAN is the network cable. Moving data across a network is always going to be slow. Moving data in small, "chatty" packets rather than streaming large chunks at a time is especially slow. 

With OpenEdge, the FIND statement is "chatty": each FIND requires no fewer than three network messages (and sometimes many more), which kills performance.
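A rough back-of-envelope shows why this matters. The three-messages-per-FIND figure comes from the behavior above; the per-message network wait is an assumption for a typical LAN (measure yours with ping).

```shell
# Assumed numbers: ~0.5 ms of network wait per message, 3 messages per
# FIND, and a loop doing 100,000 FINDs.
WAIT_PER_MSG_US=500
MSGS_PER_FIND=3
FINDS=100000
TOTAL_US=$(( WAIT_PER_MSG_US * MSGS_PER_FIND * FINDS ))
echo "Network wait alone: $(( TOTAL_US / 1000000 )) seconds"
# → Network wait alone: 150 seconds
```

Two and a half minutes of pure waiting before any actual database work happens, which is why the strategies below pay off so dramatically.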

  • Move the Code: Move code (especially for finds and for-eaches) into PASOE and run them in a shared memory configuration.
  • Cloud Proximity: If you must use client-server in the cloud, minimize ping time. Use Proximity Placement Group (Azure) or Local Zones (AWS) to ensure the servers are physically next to each other.
  • Temp Tables: Cache small, frequently read, read-only control records in a temp table on the client, and do temp-table FINDs instead of hitting the database over the wire.

 

Final Thoughts: Getting Back to the Basics

We’ve covered everything from faster cores and why a multi-node NUMA setup fails the traffic-circle test, to the non-negotiable need for after-imaging logs. The common thread is that optimizing your OpenEdge environment is about ignoring marketing hype and getting the fundamentals correct.

Remember, the ideal database server is configured with a relatively small number of very fast cores and is running on internal SSD.

If you apply these back-to-basics principles (fast cores, internal storage, etc.), you'll create an environment that not only handles peak load but is easy to maintain.

Now, go check your core count. And if you haven't already, sign up for the PANS report. It’s the easiest way to keep your environment healthy.