Solving the "Last Item" Problem: Handling Race Conditions in Order Submissions
Race conditions are the silent killers of data integrity.
In a high-traffic environment, the "double-spend" of inventory is a classic engineering hurdle. Imagine two users looking at the very last legendary skin in a store. They both see "1 in stock." They both click buy at the exact same millisecond. Without proper concurrency control, your database might end up with -1 items, leaving you with one happy customer and one very angry support ticket.
I have faced this challenge across various sectors. The solution is never "one size fits all." It is always a balance between performance, complexity, and the strictness of your consistency requirements.
The Atomic Update: Your First Line of Defense
The most efficient way to handle simple inventory decrements is to stop treating the read and the write as two separate application-level events. When you fetch the stock, check it in memory, and then send an update, you create a "gap" where the state can change.
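For contrast, here is a minimal sketch of that racy read-then-write pattern, using the same hypothetical Database provider the next example imports. Both concurrent requests can pass the in-memory check before either write lands.

// ANTI-PATTERN: the check and the write are two separate round trips.
async function submitOrderRacy(productId: string) {
  const read = await Database.query(
    `SELECT stock FROM products WHERE id = $1`,
    [productId]
  );

  if (read.rows[0].stock > 0) {
    // The "gap": another request can decrement the stock right here,
    // and both requests will proceed to the UPDATE below.
    await Database.query(
      `UPDATE products SET stock = stock - 1 WHERE id = $1`,
      [productId]
    );
  }
}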
Instead, you should delegate the logic to the database engine. SQL databases are designed for this. By using a single atomic statement, you ensure the check and the update happen as one unit of work.
import { Database } from './db-provider';

async function submitOrder(productId: string) {
  // We perform the check and the update in one single query,
  // leveraging the ACID properties of the database: the stock check
  // and the decrement happen as one unit of work.
  const result = await Database.query(
    `UPDATE products
        SET stock = stock - 1
      WHERE id = $1 AND stock > 0`,
    [productId]
  );

  // rowCount === 0 means the WHERE clause matched nothing,
  // i.e. the item is already out of stock.
  if (result.rowCount === 0) {
    throw new Error('Out of stock');
  }

  return { success: true };
}

This approach is highly performant. It avoids long-lived locks and works perfectly for simple counters.
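If the caller also needs the remaining quantity, PostgreSQL's RETURNING clause keeps this a single round trip. A variant of the function above, still assuming the same hypothetical provider:

async function submitOrderWithRemaining(productId: string) {
  const result = await Database.query(
    `UPDATE products
        SET stock = stock - 1
      WHERE id = $1 AND stock > 0
  RETURNING stock`,
    [productId]
  );

  if (result.rowCount === 0) {
    throw new Error('Out of stock');
  }

  // The post-decrement quantity, with no extra SELECT.
  return { success: true, remaining: result.rows[0].stock };
}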
When Logic Gets Complex: Pessimistic Locking
Sometimes, an order involves more than just subtracting one from a column. You might need to verify user balance, apply regional discounts, and validate shipping rules all in one go. In these cases, a simple UPDATE isn't enough.
For these scenarios, I rely on Pessimistic Locking. Using SELECT ... FOR UPDATE tells the database to lock those specific rows until your transaction finishes. Any other transaction that tries to lock or modify those rows must wait in line; in PostgreSQL, plain reads still proceed thanks to MVCC.
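A sketch of what that looks like in practice. I am assuming the db-provider exposes a hypothetical Database.transaction helper that wraps the callback in BEGIN/COMMIT and rolls back if it throws; the table and column names are illustrative.

async function submitComplexOrder(productId: string, userId: string) {
  return Database.transaction(async (tx) => {
    // Lock the rows we are about to reason about. Always lock in a
    // consistent order (products, then users) to avoid deadlocks.
    const product = await tx.query(
      `SELECT stock, price FROM products WHERE id = $1 FOR UPDATE`,
      [productId]
    );
    const user = await tx.query(
      `SELECT balance FROM users WHERE id = $1 FOR UPDATE`,
      [userId]
    );

    // These checks run while we hold the locks, so no other
    // transaction can invalidate them before we commit.
    if (product.rows[0].stock <= 0) {
      throw new Error('Out of stock');
    }
    if (user.rows[0].balance < product.rows[0].price) {
      throw new Error('Insufficient funds');
    }

    await tx.query(
      `UPDATE products SET stock = stock - 1 WHERE id = $1`,
      [productId]
    );
    await tx.query(
      `UPDATE users SET balance = balance - $2 WHERE id = $1`,
      [userId, product.rows[0].price]
    );

    return { success: true };
  });
}

Note that every code path acquires the locks in the same order; keeping that ordering consistent across the codebase is what prevents competing transactions from deadlocking.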
Pessimistic concurrency control is best used where there is high contention for data. It ensures that a transaction will not be interrupted by another transaction's attempt to modify the same data.

— PostgreSQL Documentation

While this guarantees correctness for the locked rows, it is a heavy hammer. If your transaction takes too long (perhaps because of a slow external API call), you will bottleneck your entire system.
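One way to soften the blow is to bound how long anyone waits for a lock. PostgreSQL, for example, supports NOWAIT on the locking read and a session-level lock_timeout; a sketch of both:

-- Option 1: fail immediately if the row is already locked,
-- instead of queueing behind a slow transaction.
SELECT stock FROM products WHERE id = $1 FOR UPDATE NOWAIT;

-- Option 2: cap how long this session will wait for any lock.
SET lock_timeout = '2s';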
Scaling with Optimistic Locking
In distributed systems, or in environments where we expect low contention but high volume, we can use Optimistic Locking. We don't lock the row. Instead, we "version" it: every time a record is updated, its version number increases.
The flow is simple. You read the version, and when you go to write, you include that version in your WHERE clause. If the version in the database has changed because someone else beat you to it, your UPDATE matches zero rows and fails. You then decide whether to throw an error or transparently retry the operation.
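A minimal sketch of the pattern, assuming the products table carries an integer version column and reusing the hypothetical provider from above:

async function submitOrderOptimistic(productId: string, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    // Step 1: read the current state, including its version.
    const read = await Database.query(
      `SELECT stock, version FROM products WHERE id = $1`,
      [productId]
    );
    const { stock, version } = read.rows[0];
    if (stock <= 0) {
      throw new Error('Out of stock');
    }

    // Step 2: write only if nobody changed the row since our read.
    const result = await Database.query(
      `UPDATE products
          SET stock = stock - 1, version = version + 1
        WHERE id = $1 AND version = $2`,
      [productId, version]
    );
    if (result.rowCount > 0) {
      return { success: true };
    }
    // Version mismatch: someone beat us to it. Loop and retry.
  }
  throw new Error('Contention too high, giving up');
}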
My Heuristic for Concurrency
- Use Atomic Updates if you are only modifying a single numeric value (stock, balance, likes).
- Use Pessimistic Locking if the business logic is complex and data integrity is more important than extreme throughput (e.g., banking transfers).
- Use Optimistic Locking for high-volume distributed systems where conflicts are rare but must be handled safely.
References & Resources
- Official Documentation: PostgreSQL Concurrency Control (https://www.postgresql.org/docs/current/mvcc.html)
