Database Best Practices
Your database is at the heart of your website. When it’s healthy, pages load fast and visitors stay happy. When it’s struggling, everything slows down—and in the worst cases, your site goes offline entirely.
Over the years, we’ve seen the same database problems come up again and again. A plugin that quietly fills a table with millions of rows. An import job that locks up the database during peak hours. A cleanup script that makes things worse instead of better.
This guide covers the most common database issues we encounter on freistilbox, and how to avoid them.
For CMS-specific guidance, please refer to the separate Database Best Practices for Drupal and Database Best Practices for WordPress.
Unbounded Table Growth
One of the most common causes of database performance issues is tables that grow without limits. What starts as a few hundred rows can quietly balloon to millions over months or years, until the site slows to a crawl or the database runs out of disk space.
How It Happens
Many plugins and modules store data in the database without any built-in cleanup mechanism. Common culprits include:
- Search and analytics plugins that log every search query visitors make
- Redirect plugins that store URL mappings in the database rather than in server configuration
- Form plugins that keep every submission forever
- Activity logs that record every admin action or content change
The problem is rarely obvious at launch. A site might run perfectly for a year before the accumulated data starts causing trouble.
Warning Signs
Watch out for these indicators:
- Slow admin pages or dashboard loading times
- Database backups taking longer than usual
- Disk space usage climbing steadily
- Queries timing out during peak traffic
What You Can Do
Before installing a plugin or module that stores data, ask yourself:
- Does this data need to live in the database, or could it go elsewhere (flat files, external service, server logs)?
- Is there a built-in retention policy or cleanup option?
- How much data will this generate per day, week, or month?
For existing sites, audit your largest tables regularly. You can identify them with a query like:
-- Size includes indexes; table_rows is only an estimate for InnoDB,
-- but it's accurate enough to spot outliers.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb,
       table_rows
FROM information_schema.tables
WHERE table_schema = 'your_database_name'
ORDER BY (data_length + index_length) DESC
LIMIT 20;
If you find oversized tables, check whether the responsible plugin has a cleanup option, or whether you can safely truncate historical data.
Query Anti-Patterns
Even with properly sized tables, poorly written queries can bring a database to its knees. Here are the most common patterns we see causing performance problems.
Missing Indexes
Databases use indexes to find rows quickly without scanning the entire table. When a query filters or sorts by a column that isn’t indexed, MySQL has to examine every row—which gets slower as the table grows.
This is especially common with custom meta queries in WordPress or field queries in Drupal, where developers filter by values that the CMS doesn’t index by default.
What to do: Use EXPLAIN to analyze slow queries and identify missing indexes. If you see “Using filesort” or “Using temporary” with large row counts, an index might help.
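For illustration, here's what that might look like with WordPress's wp_postmeta table, which doesn't index meta_value by default. Whether an index actually helps depends on your queries, so treat this as a sketch:
-- Filtering on an unindexed column forces a full table scan:
EXPLAIN SELECT post_id FROM wp_postmeta WHERE meta_value = 'featured';
-- If the plan shows type: ALL with a row count near the table size,
-- a prefix index on the column may help:
ALTER TABLE wp_postmeta ADD INDEX meta_value_prefix (meta_value(32));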
N+1 Queries
This pattern occurs when code loads a list of items, then runs a separate query for each item to fetch related data. For example, loading 50 posts and then running 50 individual queries to get each post’s author.
The result is dozens or hundreds of database round-trips instead of one or two.
What to do: Look for loops that contain database queries. Refactor to load related data in batches using IN (...) clauses or JOINs.
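As a sketch, using WordPress's core tables for illustration, a single JOIN replaces the 50 per-post author lookups:
-- One round-trip instead of 51: fetch posts and their authors together.
SELECT p.ID, p.post_title, u.display_name
FROM wp_posts AS p
JOIN wp_users AS u ON u.ID = p.post_author
WHERE p.post_status = 'publish'
ORDER BY p.post_date DESC
LIMIT 50;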
Full Table Scans
Some query patterns can’t use indexes at all. The most common offender is a leading wildcard in a LIKE clause:
SELECT * FROM posts WHERE title LIKE '%keyword%';
The database can't use a regular index here. An index orders values by their leading characters, and a pattern that starts with a wildcard could match anywhere in the string, so MySQL has to check every row.
What to do: If you need full-text search, consider using MySQL’s full-text indexes or an external search service like Elasticsearch.
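Sticking with the posts example above, a full-text index lets MySQL answer keyword searches without scanning the table (assuming MySQL 5.6 or later with InnoDB):
-- Build the full-text index once:
ALTER TABLE posts ADD FULLTEXT INDEX ft_title (title);

-- Then search with MATCH ... AGAINST instead of a leading wildcard:
SELECT * FROM posts WHERE MATCH(title) AGAINST ('keyword');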
Heavy Queries on Page Load
Running expensive aggregations, reports, or complex JOINs during a normal page request is a recipe for slow response times and database contention.
Another variant of this problem is loading large amounts of data on every request. WordPress’s wp_options table is particularly prone to this; we cover it in detail in our WordPress-specific guidance.
What to do: Move heavy queries to background jobs or cron tasks. Cache the results and serve them from the cache during page requests.
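A common pattern is a summary table that a scheduled task refreshes, so page requests read a few precomputed rows instead of running the aggregation live. A minimal sketch with hypothetical table names:
-- Refreshed by a cron task, e.g. hourly; never on page load.
CREATE TABLE IF NOT EXISTS author_post_counts (
  author_id  BIGINT UNSIGNED PRIMARY KEY,
  post_count INT UNSIGNED NOT NULL
);

REPLACE INTO author_post_counts (author_id, post_count)
SELECT post_author, COUNT(*)
FROM wp_posts
WHERE post_status = 'publish'
GROUP BY post_author;

-- Page requests run this cheap lookup instead of the aggregation:
SELECT post_count FROM author_post_counts WHERE author_id = 42;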
Lock Contention
When a write operation takes a long time—whether it’s a large import, a slow UPDATE, or a batch delete—it can block other queries waiting to access the same rows or tables. Visitors see slow page loads or timeouts while the database waits for locks to clear.
On freistilbox, each blocked query also ties up a Processing Unit (PU). While a PU waits for a database lock, it can’t handle other requests. In the worst case, all your cluster’s PUs end up waiting on the database, and your site stops accepting new requests entirely.
What to do: Break large write operations into smaller batches with brief pauses in between. Schedule heavy operations during low-traffic periods.
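For example, instead of archiving a year of rows in one statement, run limited chunks with a pause in between. A sketch (the archived column is hypothetical; repeat until no rows are affected):
-- Process 1,000 rows per batch instead of millions at once:
UPDATE wp_some_huge_table
SET archived = 1
WHERE archived = 0 AND created_at < '2023-01-01'
LIMIT 1000;

-- Let waiting queries through before the next batch:
DO SLEEP(0.5);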
Import and Migration Mistakes
Importing content or migrating data is a routine task, but doing it carelessly can take down a production site. Here’s what to watch out for.
No Batching
Importing thousands of records in a single transaction puts enormous strain on the database. The transaction log grows, locks are held for extended periods, and if anything goes wrong, the entire operation has to roll back.
What to do: Break imports into smaller batches—500 to 1,000 records at a time is usually a good starting point. Commit each batch separately and add a short pause between batches to let other queries through.
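One way to structure this is to load the raw data into a staging table first, then move it across in committed batches. A sketch with hypothetical table names, where an imported flag marks the rows already handled:
START TRANSACTION;

-- Copy the next 500 unprocessed rows into the live table:
INSERT INTO articles (id, title, body)
SELECT id, title, body
FROM import_staging
WHERE imported = 0
ORDER BY id
LIMIT 500;

-- Mark those same rows as done:
UPDATE import_staging
SET imported = 1
WHERE imported = 0
ORDER BY id
LIMIT 500;

COMMIT;
-- Pause briefly, then run the next batch.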
Running Imports During Peak Hours
A large import competes with your visitors for database resources. Running one during business hours can slow down the site or cause timeouts for users.
What to do: Schedule heavy imports and migrations for low-traffic periods—early morning or late evening, depending on your audience. Check your analytics to find the quietest windows.
No Progress Tracking
Long-running imports can fail partway through due to timeouts, memory limits, or network issues. Without progress tracking, you’re left guessing how far it got, and you might have to start over from scratch.
What to do: Track which records have been processed, either by marking them in the source data or by logging completed batches. If the import fails, you can resume from where it left off instead of starting over.
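If you'd rather not modify the source data, a small log table works too. A sketch (table and column names are hypothetical):
-- Written once per committed batch:
CREATE TABLE IF NOT EXISTS import_progress (
  batch_no     INT PRIMARY KEY,
  last_id      BIGINT NOT NULL,
  completed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

INSERT INTO import_progress (batch_no, last_id) VALUES (42, 21500);

-- After a failure, find where to resume:
SELECT MAX(last_id) FROM import_progress;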
Table Purging and Cleanup
When a table has grown out of control, you’ll need to clean it up. How you do this matters a lot.
The Problem with DELETE
The obvious approach is to run a DELETE statement:
DELETE FROM wp_some_huge_table WHERE created_at < '2023-01-01';
For small tables, this works fine. But when you’re deleting hundreds of thousands or millions of rows, DELETE causes several problems:
- Row-by-row processing: MySQL logs each deletion individually for transaction safety, which is extremely slow at scale.
- Lock contention: The long-running operation blocks other queries, slowing down or freezing your site. As explained earlier, this can exhaust your cluster’s Processing Units.
- Binary log bloat: Every deleted row gets logged and transmitted to replica nodes, causing substantial network traffic. Applying these changes row by row on the replicas can cause critical replication lag, where replicas fall behind the primary and serve stale data.
- Disk space not reclaimed: InnoDB doesn’t immediately free disk space after a DELETE. You’d need to run OPTIMIZE TABLE afterward, which causes additional locking. Until you do, you’re paying for storage that holds deleted data.
When to Use TRUNCATE
If you need to empty a table completely, use TRUNCATE TABLE instead:
TRUNCATE TABLE wp_some_huge_table;
TRUNCATE drops and recreates the table structure rather than deleting rows one by one. It completes almost instantly regardless of table size and immediately frees disk space.
Keep in mind that TRUNCATE is all-or-nothing: you can’t specify a WHERE clause. It also resets auto-increment counters and won’t work if foreign key constraints reference the table.
Batched Deletes for Partial Cleanup
If you need to delete a subset of rows, break the operation into smaller batches:
DELETE FROM wp_some_huge_table WHERE created_at < '2023-01-01' LIMIT 1000;
Run this repeatedly with short pauses between batches until no more rows match. This approach takes longer overall, but it avoids long locks and lets normal traffic flow between batches.
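If you'd rather run the loop on the database server than drive it from a shell script, a stored procedure can do it. A sketch; the procedure name and batch size are arbitrary:
DELIMITER //
CREATE PROCEDURE purge_old_rows()
BEGIN
  DECLARE rows_deleted INT DEFAULT 1;
  WHILE rows_deleted > 0 DO
    -- Delete one batch and remember how many rows it removed:
    DELETE FROM wp_some_huge_table
    WHERE created_at < '2023-01-01'
    LIMIT 1000;
    SET rows_deleted = ROW_COUNT();
    -- Let waiting queries through before the next batch:
    DO SLEEP(1);
  END WHILE;
END //
DELIMITER ;

CALL purge_old_rows();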