Premature Optimization Is Bad, But Your App Is Just Slow Because You're Lazy
By: Evgeny Padezhnov
Donald Knuth said premature optimization is the root of all evil. Developers took that as permission to never optimize at all.
The quote is from 1974. It was about micro-optimizations in loop counters and register allocation. It was never about shipping code that fires 500 database queries to render a single page. That is not "avoiding premature optimization." That is neglect.
The N+1 Problem: The Laziest Performance Bug
The single most common reason applications crawl under load is the N+1 query problem. It happens when code fetches a list of records, then loops through each one to fetch related data.
# N+1: 1 query for posts + N queries for authors
posts = Post.objects.all()
for post in posts:
    print(post.author.name)  # hits the database every iteration
With 100 posts, that is 101 queries. With 1,000 posts — 1,001 queries. The fix takes one line:
# Fixed: 1 query, authors joined in
posts = Post.objects.select_related('author').all()
Key point: the N+1 problem is not a database issue. It is a mismatch between object-oriented thinking and relational database mechanics. As noted in a detailed guide on Dev.to, spotting and fixing N+1 queries is what separates junior developers from senior ones.
Every ORM hides this by default. Django, SQLAlchemy, ActiveRecord, Sequelize, Prisma — all of them will happily fire hundreds of queries unless told otherwise. The abstraction is the trap.
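The trap is easy to demonstrate without any ORM at all. The sketch below uses Python's stdlib sqlite3 module with a toy posts/authors schema (table and column names are illustrative) and counts queries both ways:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT, author_id INTEGER);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO posts VALUES (1, 'p1', 1), (2, 'p2', 2), (3, 'p3', 1);
""")

query_count = 0
def run(sql, *args):
    global query_count
    query_count += 1
    return conn.execute(sql, args).fetchall()

# N+1: one query for the list, one more per row for the author
query_count = 0
posts = run("SELECT id, title, author_id FROM posts")
lazy_names = [run("SELECT name FROM authors WHERE id = ?", author_id)[0][0]
              for _, _, author_id in posts]
lazy_queries = query_count   # 1 + 3 = 4

# Eager: a single JOIN fetches the same data
query_count = 0
eager = run("SELECT p.title, a.name FROM posts p "
            "JOIN authors a ON a.id = p.author_id")
eager_queries = query_count  # 1
```

Same data, one quarter of the round trips. With real network latency between app and database, the gap is what users feel.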
How to Catch It Before Users Do
Slow queries do not announce themselves. Profiling does.
Enable Query Logging in Development
For PostgreSQL:
ALTER SYSTEM SET log_min_duration_statement = 100;
SELECT pg_reload_conf();
Every query slower than 100ms gets logged. In practice, most N+1 queries are fast individually — 2ms each — but 500 of them add up to a full second.
Count Queries Per Request
Django has django-debug-toolbar. Rails has bullet. Express has custom middleware. The tool does not matter. The habit does.
# Django: log query count per request
# note: connection.queries is only populated when DEBUG = True
import logging

from django.db import connection

logger = logging.getLogger(__name__)

def query_count_middleware(get_response):
    def middleware(request):
        initial = len(connection.queries)
        response = get_response(request)
        total = len(connection.queries) - initial
        if total > 20:
            logger.warning(f"{request.path}: {total} queries")
        return response
    return middleware
Common mistake: only profiling when something feels slow. By then, the problem has been in production for weeks. Set a threshold. Log anything above it. Twenty queries per page is a reasonable starting point.
Use EXPLAIN on Suspicious Queries
EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 42;
Look for Seq Scan on large tables. That usually means a missing index. The fix:
CREATE INDEX idx_orders_user_id ON orders(user_id);
Tested in production. A missing index on a 2M-row table turned a 1,200ms query into 3ms. No code change. One CREATE INDEX statement.
The Real Optimization Checklist
Before touching any algorithm, run through these:
1. Missing indexes. Check every WHERE, JOIN, and ORDER BY clause. If the column is not indexed and the table has more than 10K rows, add one.
2. N+1 queries. Use eager loading. select_related in Django. includes in Rails. Include in Entity Framework. with in Laravel.
3. Unbounded queries. Any query without LIMIT is a ticking bomb. Tables grow. What returns 50 rows today returns 50,000 next year.
-- Bad: returns everything
SELECT * FROM logs WHERE level = 'error';
-- Fixed: paginated
SELECT * FROM logs WHERE level = 'error' ORDER BY created_at DESC LIMIT 50 OFFSET 0;
4. Repeated identical queries. Same data fetched multiple times in one request. Cache it in a variable. Or use request-scoped caching.
5. Fat payloads. Selecting * when the frontend needs three fields. Returning 500KB of JSON when 20KB would do.
-- Bad
SELECT * FROM users;
-- Fixed
SELECT id, name, email FROM users;
None of these require algorithmic genius. They require reading the query log.
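Item 4 deserves a sketch. Request-scoped caching can be as simple as stashing the result on the request object the first time it is loaded (the names below are illustrative, not from any framework):

```python
class Request:
    """Stand-in for a framework request object."""
    pass

db_calls = 0
def load_user_from_db():
    global db_calls
    db_calls += 1                      # stands in for a real query
    return {"id": 1, "name": "Ada"}

def current_user(request):
    # Fetch once per request; every later call reuses the cached value
    if not hasattr(request, "_cached_user"):
        request._cached_user = load_user_from_db()
    return request._cached_user

req = Request()
for _ in range(5):
    user = current_user(req)           # 5 accesses, 1 query
```

The cache dies with the request, so there is no invalidation problem, which is what makes this the cheapest caching layer to add.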
Tools That Show the Problem
Guessing where slowness lives is itself a form of laziness. Profiling tools exist for every stack.
For SQL specifically, SolarWinds Database Performance Analyzer provides wait-time analysis and query-level tuning advice. For lighter setups, PostgreSQL's built-in pg_stat_statements extension is free and sufficient:
CREATE EXTENSION pg_stat_statements;
-- Top 10 slowest queries by total time
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
In plain terms: sort by total_exec_time, not mean_exec_time. A query averaging 5ms but called 100,000 times per day costs more than one averaging 500ms called twice.
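The arithmetic behind that rule, in milliseconds:

```python
# Daily cost = mean time per call x calls per day
hot_query = 5 * 100_000 / 1000   # 5 ms x 100,000 calls = 500 seconds/day
rare_query = 500 * 2 / 1000      # 500 ms x 2 calls    = 1 second/day
```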
Application-level profiling matters too. Node.js has clinic.js. Python has cProfile and py-spy. Ruby has rack-mini-profiler. The pattern is the same: measure first, then fix what actually matters.
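The measure-first habit works with nothing but the standard library. A minimal cProfile run (toy functions, illustrative names) makes the hot spot jump out of the cumulative-time column:

```python
import cProfile
import io
import pstats

def slow_part():
    return sum(i * i for i in range(200_000))

def handler():
    return [slow_part() for _ in range(3)]

profiler = cProfile.Profile()
profiler.enable()
handler()
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
report = buf.getvalue()
# slow_part dominates the report; that is the 3% worth optimizing
```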
When Knuth's Advice Actually Applies
The original quote has a second half that people conveniently forget:
"Yet we should not pass up our opportunities in that critical 3%."
Knuth was saying: do not optimize the 97% that does not matter. But absolutely optimize the 3% that does. The problem is that most developers never measure, so they never find the 3%.
Real premature optimization looks like this:
- Rewriting a function in C because "Python is slow" — when the bottleneck is a missing database index.
- Adding Redis caching before checking if the query itself is efficient.
- Switching to a faster JSON serializer when the response includes 10MB of unused data.
Key point: optimization without measurement is guessing. Guessing is not engineering.
The Lazy Defaults That Kill Performance
ORMs ship with lazy loading enabled by default. That is a design choice optimized for developer convenience, not application performance. Every post.author, order.items, user.roles access can trigger a hidden query.
Some frameworks are moving away from this. Prisma requires explicit include statements. Rust's Diesel checks queries at compile time. But most popular ORMs still default to lazy.
The fix is not abandoning ORMs. The fix is understanding what queries they generate. Every major ORM has a way to log generated SQL. Turn it on during development. Read the output.
# Django: settings.py (the django.db.backends logger emits SQL only when DEBUG = True)
LOGGING = {"version": 1, "handlers": {"console": {"class": "logging.StreamHandler"}},
           "loggers": {"django.db.backends": {"level": "DEBUG", "handlers": ["console"]}}}
# Rails
ActiveRecord::Base.logger = Logger.new(STDOUT)
# Sequelize
const sequelize = new Sequelize({ logging: console.log });
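For scripts outside any framework, Python's stdlib sqlite3 offers the same habit via set_trace_callback, which hands you every statement as it executes:

```python
import sqlite3

statements = []
# isolation_level=None (autocommit) keeps implicit BEGINs out of the trace
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.set_trace_callback(statements.append)

conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
conn.execute("SELECT name FROM users").fetchall()

# `statements` now holds the SQL that actually ran, in order
```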
Try It
Pick one endpoint in a current project. Count the queries it fires. If the number is above 10 for a single page load — fix the worst offender before closing the browser tab. That is not premature optimization. That is doing the job.
Frequently Asked Questions
How do you measure which parts of your code are actually causing performance bottlenecks?
Use pg_stat_statements for database queries and application profilers (py-spy, clinic.js, rack-mini-profiler) for code. Sort by total time, not average time. A fast query called thousands of times costs more than a slow query called once.
When should you optimize code versus leave it as-is for readability?
Fix architectural problems always: missing indexes, N+1 queries, unbounded selects. Leave micro-optimizations alone unless profiling proves they matter. If it works and the query count is reasonable — it is correct.
How do you know if you're optimizing based on measured data rather than assumptions?
If the answer to "which query is slowest?" comes from EXPLAIN ANALYZE or a profiling tool — that is data. If it comes from "I think this part is slow" — that is an assumption. Run the numbers first.
How do you identify the critical parts of code worth optimizing versus the rest?
Tested in production: the top 5 queries by total_exec_time in pg_stat_statements almost always account for over 80% of database load. Start there. Everything else can wait.