You asked for a simple todo list. AI delivered a microservices architecture with an API gateway, event-driven messaging, a separate authentication service, Redis caching, and a Kubernetes deployment configuration. The code is technically impressive—and completely unnecessary for your weekend project.
This is the over-engineering trap: AI code generators, trained on enterprise codebases and "best practice" tutorials, consistently suggest solutions far more complex than the problem requires. A 2025 Carnegie Mellon University study found that code complexity increased by more than 40% in repositories after adopting AI coding assistants—far beyond what the growth in codebase size could explain.
The pattern is consistent: long functions, heavy nesting, unnecessary abstractions, redundant checks, and architecture that looks impressive on a whiteboard but creates maintenance nightmares. According to an Ox Security report, AI-generated code is "highly functional but systematically lacking in architectural judgment."
This article examines why AI over-engineers, catalogs the most common over-complexity patterns, and offers practical countermeasures: YAGNI-enforcing prompts, complexity budgets, automated complexity metrics, and refactoring strategies that simplify AI-generated code.
The AI Complexity Crisis
Research Findings (2025)
CMU Study: AI Increases Code Complexity by 40%+
Researchers analyzed 807 open source GitHub repositories that adopted Cursor between January 2024 and March 2025. They found code complexity rose by more than 40 percent—far beyond what codebase growth could account for. The output consistently trended toward long functions, heavy conditional nesting, unnecessary comments, and over-engineered code paths.
The Architectural Judgment Gap
AI excels at local optimization—solving the specific problem directly in front of it. But it struggles with:
- Proportionality: Matching solution complexity to problem complexity
- Context awareness: Understanding project scale and team size
- Future costs: Recognizing maintenance burden of abstractions
- Trade-offs: Weighing simplicity against flexibility
A senior developer who spends an hour thinking about architecture before writing twenty lines of code is more valuable than a developer who writes two hundred lines of AI-generated code that creates technical debt.
The Hidden Cost of Over-Engineering
// Over-engineered: 200+ lines for simple user validation
class UserValidationStrategyFactory {
  private strategies: Map<string, ValidationStrategy>;
  private validatorRegistry: ValidatorRegistry;
  private configProvider: ConfigurationProvider;
  private logger: LoggingService;

  constructor(
    strategies: Map<string, ValidationStrategy>,
    validatorRegistry: ValidatorRegistry,
    configProvider: ConfigurationProvider,
    logger: LoggingService
  ) {
    this.strategies = strategies;
    this.validatorRegistry = validatorRegistry;
    this.configProvider = configProvider;
    this.logger = logger;
  }

  createValidator(type: string): UserValidator {
    const strategy = this.strategies.get(type);
    const config = this.configProvider.getValidationConfig(type);
    // ... 150 more lines of abstraction
  }
}

// What you actually needed: 10 lines
function validateUser(user) {
  if (!user.email || !user.email.includes('@')) {
    return { valid: false, error: 'Invalid email' };
  }
  if (!user.name || user.name.length < 2) {
    return { valid: false, error: 'Name too short' };
  }
  return { valid: true };
}
Why AI Over-Engineers
1. Resume-Driven Development in Training Data
AI models are trained on public repositories where impressive-looking code is overrepresented:
- Enterprise codebases: Complex patterns for large teams
- Tutorial projects: Demonstrate patterns, not solve real problems
- Framework examples: Show all features, not minimal usage
- Open source libraries: Built for maximum flexibility
"Resume-driven development" describes choosing technology because it looks good on a resume, not because it fits the problem. AI inherits this bias.
2. Pattern Matching Without Judgment
AI recognizes that "user authentication" often involves:
- JWT tokens with refresh mechanisms
- OAuth2 providers
- Role-based access control
- Session management
- Rate limiting
- Audit logging
So it suggests all of these—even for a personal project with one user.
3. "Best Practice" Cargo Culting
AI applies "best practices" without understanding when they apply:
// AI applies SOLID principles to a 50-line script
// Single Responsibility: Separate classes for everything
class EmailExtractor { ... }
class EmailValidator { ... }
class EmailFormatter { ... }
class EmailRepository { ... }
class EmailService { ... }
class EmailController { ... }
class EmailPresenter { ... }
// What you needed:
const emails = text.match(/[\w.-]+@[\w.-]+\.\w+/g) || [];
4. No Understanding of Scale
AI doesn't ask: "How many users will this have? How often will this run? What's the team size?"
Project Scale vs AI Suggestions
- Weekend project: Needs single file, SQLite — AI suggests microservices, PostgreSQL cluster
- Startup MVP: Needs monolith, managed database — AI suggests event-driven, Kafka, Redis
- Small team app: Needs modular monolith — AI suggests full DDD with bounded contexts
- Enterprise scale: Microservices finally appropriate — AI suggests the same stack it suggested for the weekend project
Common Over-Engineering Patterns
Pattern 1: Premature Microservices
Less than 5% of applications truly benefit from microservices initially. Yet AI suggests them constantly:
# AI-generated for a blog application
# docker-compose.yml with 10 containers
services:
  api-gateway:
    image: kong:latest
  user-service:
    build: ./services/user
  post-service:
    build: ./services/post
  comment-service:
    build: ./services/comment
  notification-service:
    build: ./services/notification
  search-service:
    build: ./services/search
  analytics-service:
    build: ./services/analytics
  redis:
    image: redis:alpine
  postgres:
    image: postgres:15
  rabbitmq:
    image: rabbitmq:management

# What you needed: One Next.js app with Prisma
Pattern 2: Abstraction Addiction
// AI-generated: Abstract everything
interface IDataFetcher<T> {
  fetch(id: string): Promise<T>;
}

interface IDataTransformer<T, U> {
  transform(data: T): U;
}

interface IDataValidator<T> {
  validate(data: T): ValidationResult;
}

abstract class BaseRepository<T> implements IDataFetcher<T> {
  protected abstract getEndpoint(): string;
  protected abstract getTransformer(): IDataTransformer<any, T>;
  // ... 100 lines of abstraction
}

class UserRepository extends BaseRepository<User> {
  // ... implementation inheriting complexity
}

// What you needed:
async function getUser(id) {
  const response = await fetch(`/api/users/${id}`);
  return response.json();
}
Pattern 3: Configuration Explosion
// AI-generated: Make everything configurable
interface ButtonConfig {
  size: 'xs' | 'sm' | 'md' | 'lg' | 'xl';
  variant: 'primary' | 'secondary' | 'ghost' | 'link' | 'danger' | 'warning';
  rounded: 'none' | 'sm' | 'md' | 'lg' | 'full';
  loading: boolean;
  disabled: boolean;
  fullWidth: boolean;
  icon?: ReactNode;
  iconPosition?: 'left' | 'right';
  loadingText?: string;
  spinnerSize?: 'sm' | 'md' | 'lg';
  spinnerColor?: string;
  // ... 20 more options
}

// What you needed:
function Button({ children, onClick, disabled }) {
  return (
    <button onClick={onClick} disabled={disabled}>
      {children}
    </button>
  );
}
Pattern 4: The Gas Factory
A "Gas Factory" is an overly complicated solution to a simple problem, often misapplying multiple design patterns:
// AI-generated: Factory + Strategy + Observer + Singleton
// for... formatting dates
class DateFormatterFactory {
  private static instance: DateFormatterFactory;
  private strategies: Map<string, FormattingStrategy>;
  private observers: DateFormatObserver[];

  private constructor() {
    this.strategies = new Map();
    this.observers = [];
    this.registerDefaultStrategies();
  }

  static getInstance(): DateFormatterFactory {
    if (!this.instance) {
      this.instance = new DateFormatterFactory();
    }
    return this.instance;
  }

  // ... 200 more lines

  format(date: Date, strategyName: string): string {
    const strategy = this.strategies.get(strategyName);
    if (!strategy) throw new StrategyNotFoundException(strategyName);
    return strategy.format(date);
  }
}

// What you needed:
const formatted = new Date().toLocaleDateString('en-US', {
  year: 'numeric',
  month: 'long',
  day: 'numeric'
});
Pattern 5: Defensive Over-Programming
// AI-generated: Check everything multiple times
function addNumbers(a: number, b: number): number {
  // Validate inputs
  if (a === null || a === undefined) {
    throw new InvalidArgumentError('First argument cannot be null');
  }
  if (b === null || b === undefined) {
    throw new InvalidArgumentError('Second argument cannot be null');
  }
  if (typeof a !== 'number') {
    throw new TypeError('First argument must be a number');
  }
  if (typeof b !== 'number') {
    throw new TypeError('Second argument must be a number');
  }
  if (Number.isNaN(a)) {
    throw new InvalidArgumentError('First argument cannot be NaN');
  }
  if (Number.isNaN(b)) {
    throw new InvalidArgumentError('Second argument cannot be NaN');
  }
  // ... more checks

  const result = a + b;

  // Validate output
  if (!Number.isFinite(result)) {
    throw new OverflowError('Result exceeds safe number range');
  }
  return result;
}

// What you needed (with TypeScript):
function addNumbers(a: number, b: number): number {
  return a + b;
}
YAGNI, KISS, and DRY Principles
YAGNI: You Aren't Gonna Need It
A core principle of Extreme Programming, YAGNI states: "Always implement things when you actually need them, never when you just foresee that you may need them."
// YAGNI violation: Adding "just in case" features
class UserService {
  // "We might need caching someday"
  private cache: CacheService;

  // "We might need events someday"
  private eventBus: EventEmitter;

  // "We might need metrics someday"
  private metrics: MetricsCollector;

  // Current requirement: Get user by ID
  async getUser(id: string): Promise<User> {
    // 50 lines involving all the above
  }
}

// YAGNI-compliant:
async function getUser(id: string): Promise<User> {
  return await db.users.findUnique({ where: { id } });
}
// Add caching when you actually need it
KISS: Keep It Simple, Stupid
Most systems work best when kept simple. Complexity should be avoided unless truly necessary.
// Complex: Custom state management
class StateManager<T> {
  private state: T;
  private subscribers: Set<(state: T) => void>;
  private middleware: Middleware<T>[];
  private devTools: DevToolsConnection;

  dispatch(action: Action): void {
    let newState = this.state;
    for (const mw of this.middleware) {
      newState = mw(newState, action);
    }
    this.state = newState;
    this.notifySubscribers();
    this.devTools.log(action, newState);
  }

  // ... 200 lines
}

// KISS: Use React state
const [user, setUser] = useState(null);
// Or just use Zustand if you need shared state
DRY: Don't Repeat Yourself (But Don't Over-Apply)
DRY is often over-applied, creating abstractions worse than duplication:
// Over-applied DRY: Forced abstraction
function makeApiCall<T>(
  endpoint: string,
  method: 'GET' | 'POST' | 'PUT' | 'DELETE',
  body?: unknown,
  headers?: Record<string, string>,
  retries?: number,
  timeout?: number,
  transform?: (data: unknown) => T
): Promise<T> {
  // 100 lines trying to handle every case
}

// Better: Accept some duplication for clarity
async function getUsers() {
  const response = await fetch('/api/users');
  return response.json();
}

async function createUser(data: CreateUserDTO) {
  const response = await fetch('/api/users', {
    method: 'POST',
    body: JSON.stringify(data),
    headers: { 'Content-Type': 'application/json' }
  });
  return response.json();
}

// Rule of three: Extract only after 3+ occurrences
The Rule of Three: Don't abstract until you have three instances of duplication. Two similar pieces of code might evolve differently. Wait until you truly see the pattern before extracting.
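As an illustrative sketch (the function names here are invented for the example), waiting for the third occurrence looks like this in practice:

```javascript
// Occurrences 1 and 2: tolerate the duplication. The two call sites
// might still evolve differently (currencies, rounding, locales).
function cartTotalLabel(cents) {
  return '$' + (cents / 100).toFixed(2);
}
function invoiceTotalLabel(cents) {
  return '$' + (cents / 100).toFixed(2);
}

// Occurrence 3 (say, a refund screen) proves the pattern is stable,
// so NOW extract the shared helper and migrate the call sites to it:
function formatUsd(cents) {
  return '$' + (cents / 100).toFixed(2);
}

console.log(formatUsd(199)); // "$1.99"
```

If the second call site had needed EUR formatting instead, no abstraction would ever have been forced on it.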
Complexity Metrics
Use objective metrics to catch over-engineering:
Key Metrics and Thresholds
- Cyclomatic Complexity: Target <10, Warning 10-20, Critical >20
- Cognitive Complexity: Target <15, Warning 15-25, Critical >25
- Lines per Function: Target <30, Warning 30-50, Critical >50
- Parameters per Function: Target <4, Warning 4-6, Critical >6
- Depth of Inheritance: Target <3, Warning 3-5, Critical >5
- Class Coupling: Target <10, Warning 10-20, Critical >20
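To make the cyclomatic complexity metric concrete, here is a hedged sketch (the functions are invented for illustration): each `if`/`else if` adds a decision point, while a table-driven rewrite keeps complexity constant no matter how many cases you add.

```javascript
// Cyclomatic complexity 6 (1 + 5 branch conditions). Scale this style
// to a dozen cases and you pass the warning threshold of 10.
function statusLabelBranchy(code) {
  if (code === 200) return 'OK';
  else if (code === 201) return 'Created';
  else if (code === 301) return 'Moved';
  else if (code === 404) return 'Not Found';
  else if (code === 500) return 'Server Error';
  else return 'Unknown';
}

// Table-driven rewrite: cyclomatic complexity stays low however many
// status codes the lookup table grows to.
const STATUS_LABELS = {
  200: 'OK',
  201: 'Created',
  301: 'Moved',
  404: 'Not Found',
  500: 'Server Error',
};
function statusLabel(code) {
  return STATUS_LABELS[code] ?? 'Unknown';
}
```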
ESLint Complexity Rules
// .eslintrc.js
module.exports = {
  rules: {
    // Cyclomatic complexity
    'complexity': ['error', { max: 10 }],
    // Maximum depth of nested blocks
    'max-depth': ['error', { max: 4 }],
    // Maximum lines per function
    'max-lines-per-function': ['error', {
      max: 50,
      skipBlankLines: true,
      skipComments: true
    }],
    // Maximum parameters
    'max-params': ['error', { max: 4 }],
    // Maximum statements per function
    'max-statements': ['error', { max: 15 }],
    // Maximum nested callbacks
    'max-nested-callbacks': ['error', { max: 3 }],
    // Maximum classes per file
    'max-classes-per-file': ['error', 1],
  }
};
Scope-Aware Prompting
The key to preventing AI over-engineering is explicit constraints in your prompts.
Bad Prompt (Invites Over-Engineering)
// DON'T: Vague prompt
"Create a user authentication system"
// AI interprets as: Full enterprise auth with every feature
Good Prompt (Scope-Constrained)
"Create a simple user login for a personal project.
CONSTRAINTS:
- Single user (just me)
- Username/password only (no OAuth, no SSO)
- Session stored in localStorage
- No database (hardcode credentials for now)
- No password reset functionality
- No registration flow
SIMPLICITY REQUIREMENTS:
- Under 50 lines of code total
- No external libraries except React
- No abstraction layers
- No configuration objects
- No TypeScript generics
This is a weekend project. Keep it minimal."
Scale-Appropriate Prompting Template
"Create [feature] for a [scale] project.
PROJECT CONTEXT:
- Scale: [weekend project | MVP | startup | enterprise]
- Team size: [solo | 2-5 | 5-20 | 20+]
- Expected users: [1 | 10-100 | 100-10k | 10k+]
- Lifespan: [throwaway | 1 year | 5+ years]
COMPLEXITY BUDGET:
- Maximum files: [number]
- Maximum lines per file: [number]
- Maximum function parameters: 4
- No abstraction unless 3+ usages exist
DO NOT INCLUDE:
- Features not explicitly requested
- 'Just in case' extensibility
- Configuration for hypothetical scenarios
- Design patterns for patterns' sake
PREFER:
- Direct, readable code over clever code
- Duplication over wrong abstraction
- Simple functions over class hierarchies
- Explicit code over implicit magic"
Complexity Budgets
Set explicit limits before generating code:
// complexity-budget.json
{
  "feature": "user-authentication",
  "budget": {
    "files": 3,
    "totalLines": 200,
    "maxLinesPerFile": 100,
    "maxFunctionsPerFile": 5,
    "maxParametersPerFunction": 3,
    "allowedPatterns": ["module"],
    "forbiddenPatterns": [
      "factory",
      "abstract-factory",
      "strategy",
      "observer",
      "singleton"
    ],
    "dependencies": {
      "max": 2,
      "allowed": ["bcrypt", "jsonwebtoken"]
    }
  }
}

// Use this in your prompt:
"Implement within this complexity budget: [paste budget]"
The Modular Monolith Alternative
When AI suggests microservices, consider a modular monolith instead:
src/
├── modules/
│ ├── users/
│ │ ├── user.controller.ts
│ │ ├── user.service.ts
│ │ ├── user.repository.ts
│ │ ├── user.types.ts
│ │ └── index.ts # Public API
│ ├── posts/
│ │ ├── post.controller.ts
│ │ ├── post.service.ts
│ │ ├── post.repository.ts
│ │ └── index.ts
│ └── comments/
│ └── ...
├── shared/
│ ├── database.ts
│ ├── auth.middleware.ts
│ └── errors.ts
└── app.ts # Single entry point
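The key discipline is that each module's `index.ts` is its only public surface. A minimal sketch of that boundary in plain JavaScript (collapsed into one file for illustration; the names are invented, and in the real layout these would be separate files under `modules/users/`):

```javascript
// user.repository.ts (internal): never imported outside the module.
const userRows = new Map([['u1', { id: 'u1', name: 'Ada' }]]);
function findUserRow(id) {
  return userRows.get(id) ?? null;
}

// index.ts (public API): the ONLY thing other modules may import.
const usersModule = {
  getUser(id) {
    return findUserRow(id); // internals stay hidden behind this call
  },
};

// The posts module talks to users only through the public API:
console.log(usersModule.getUser('u1').name); // "Ada"
```

Because callers never touch the repository directly, swapping the module's internals (or later extracting it into a service) does not ripple through the codebase.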
Benefits Over Microservices
- Single deployment: One binary/container to manage
- Simple testing: No distributed system complexity
- Easy refactoring: IDE support works across modules
- No network overhead: Function calls, not HTTP
- Shared database: Transactions work naturally
- Extractable: Can become microservices later if needed
When to Actually Use Microservices:
- Different parts need different scaling (compute vs. memory intensive)
- Teams need independent deployment cycles
- Different parts have different uptime requirements
- You've proven the boundaries with a modular monolith first
Refactoring Over-Engineered Code
Simplification Checklist
□ Remove unused parameters
□ Inline single-use functions
□ Collapse single-implementation interfaces
□ Remove empty abstract classes
□ Delete configuration options never used
□ Replace strategy pattern with switch (if <4 strategies)
□ Remove factory if only one product
□ Flatten unnecessary inheritance
□ Delete "just in case" code paths
□ Remove over-defensive null checks (use TypeScript)
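As one hedged before/after for the "replace strategy pattern with switch" item (the example and its names are invented): three formatting strategy classes, a registry, and a factory collapse into a single function with identical behavior.

```javascript
// Before (not shown): KbStrategy, MbStrategy, DefaultStrategy classes,
// a strategy registry, and a factory — roughly 40 lines.

// After: one switch, same behavior, trivially testable.
function formatSize(bytes, unit) {
  switch (unit) {
    case 'kb': return (bytes / 1024).toFixed(1) + ' KB';
    case 'mb': return (bytes / 1024 ** 2).toFixed(1) + ' MB';
    default:   return bytes + ' B';
  }
}

console.log(formatSize(2048, 'kb')); // "2.0 KB"
```

If a fourth or fifth strategy genuinely appears later, reintroducing the pattern is a mechanical refactor; starting with it is pure cost.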
Refactoring Prompt for AI
"Simplify this code while maintaining functionality:
[paste over-engineered code]
SIMPLIFICATION GOALS:
1. Remove abstractions with single implementations
2. Inline functions called only once
3. Remove unused parameters and options
4. Replace class hierarchies with plain functions where possible
5. Remove defensive checks that TypeScript handles
6. Collapse nested structures
CONSTRAINTS:
- Must pass existing tests
- Public API can change if simpler
- Prefer duplication over wrong abstraction
Show me the simplified version with explanation of what was removed and why."
CI/CD Complexity Gates
# .github/workflows/complexity-check.yml
name: Complexity Check

on:
  pull_request:
    branches: [main, develop]

jobs:
  complexity:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Run ESLint complexity rules
        run: npm run lint:complexity

      - name: Check file sizes
        run: |
          MAX_LINES=200
          for file in $(find src -name "*.ts" -o -name "*.tsx"); do
            lines=$(wc -l < "$file")
            if [ "$lines" -gt "$MAX_LINES" ]; then
              echo "ERROR: $file has $lines lines (max: $MAX_LINES)"
              exit 1
            fi
          done

      - name: Run complexity budget check
        run: node scripts/check-complexity-budget.js
Key Takeaways
Avoiding the Over-Engineering Trap
- AI lacks architectural judgment—CMU research shows AI increases code complexity by 40%+
- Apply YAGNI ruthlessly—don't add features "just in case" or create abstractions until you have 3+ uses
- Use scope-aware prompts—tell AI your project scale, team size, and complexity constraints explicitly
- Set complexity budgets—define limits for files, lines, and forbidden patterns before coding
- Consider modular monolith first—less than 5% of apps need microservices initially
- Measure and enforce—use ESLint complexity rules, SonarQube quality gates, and CI/CD checks
Conclusion
The over-engineering trap is one of AI coding assistants' most insidious problems. Unlike obvious bugs that crash immediately, over-engineered code works—it just creates mounting maintenance costs, slows down development, and confuses future developers (including yourself).
The solution is proactive constraint: tell AI exactly what scale you're building for, set explicit complexity limits, and ruthlessly simplify any generated code that exceeds your needs. Remember: the best code is often the code you don't write.
In our next article, we'll explore AI Documentation Generation Limitations, examining why AI-generated documentation often lacks context and accuracy.