Telerik blogs

Explore seven ways developers can apply long-established software engineering principles, like requirements gathering, iterative development and effective code reviews, to transform the output they get from AI coding tools.

It can’t have escaped your attention that AI is moving at an astonishing pace just now.

Every week sees new announcements from Anthropic, OpenAI, et al—improved models, better tools, endless pronouncements that AI will be writing all our code in the near future.

But what does that look like in practice? How do you unlock the productivity gains promised by AI, while still producing code, features and applications you can be proud of?

Well, it turns out AI isn’t magic, and the tactics you can use to make it produce better results are the same tactics and tools you’ve been honing all these years as you improved your software engineering craft.

Here are seven ways you can apply long-established software engineering principles to transform the output you get from AI.

1. Dig into the Requirement

It’s really tempting, when your job is to build stuff, to rush to the building stuff part! But if you’ve been doing this a while, you know the importance of understanding what you’re building.

AI can be extremely helpful here, as a thinking/planning partner who can help you get much clearer on what you’re building and how it relates to the existing codebase.

With a single prompt you can get AI to dig into a requirement, ask questions, identify edge cases, spot holes in the proposed UI flow and so on.

A super simple version of such a prompt for building a product requirements document (PRD) looks like this:

Create a PRD. Go through the steps below but skip if you don't consider them necessary.

1. Ask me for a detailed description of the problem I'm solving and potential ideas for solutions
2. Explore the repo to verify assertions and get a clear understanding of the current state of the codebase
3. Interview me about every aspect of this plan until we reach a shared understanding. 
4. Identify the main modules needed (these may be new, or modifications needed to existing modules) to complete the implementation
5. Write the PRD, including Problem Statement, Solution, User Stories, Implementation Decisions, Testing Decisions, Out of Scope and Further Notes

Better still, package this up as a skill and then you can use it every time you pick up a new feature or requirement.
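As a rough sketch (the skill name and wording here are illustrative, not a real published skill), packaging that prompt might look something like this:

```markdown
---
name: create-prd
description: Builds a product requirements document (PRD) for a new feature. Use when the user wants to plan or spec out a piece of work.
---

# Create a PRD

Go through the steps below, but skip any you don't consider necessary.

1. Ask me for a detailed description of the problem I'm solving and potential ideas for solutions
2. Explore the repo to verify assertions and get a clear understanding of the current state of the codebase
3. Interview me about every aspect of this plan until we reach a shared understanding
4. Identify the main modules needed (new, or modifications to existing modules) to complete the implementation
5. Write the PRD, including Problem Statement, Solution, User Stories, Implementation Decisions, Testing Decisions, Out of Scope and Further Notes
```

Drop the file wherever your agent of choice discovers skills, and the whole planning routine is one trigger phrase away every time you pick up new work.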

For more comprehensive versions of the prompts and skills we cover here I highly recommend checking out this handy collection of Skills by Matt Pocock. His version of this prompt also includes a template for AI to follow when creating the PRD. AI loves templates!

2. Break It Down into Stories and Acceptance Criteria

A PRD is useful, but the real work happens when you break that down into specific stories and acceptance criteria.

Regardless of the format, the user story remains a helpful container for getting to the right level of granularity to actually go ahead and build a feature.

These stories, with their acceptance criteria, become the unit of work.
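For illustration (the billing feature here is invented), a story at roughly the right level of granularity might look like:

```markdown
## Story: Customer can view their billing history

As a customer, I want to see a list of my past invoices, so that I can check what I've been charged.

### Acceptance Criteria

- [ ] The billing page lists invoices in reverse chronological order
- [ ] Each row shows the invoice date, amount and status (paid/unpaid)
- [ ] An empty state is shown when the customer has no invoices yet
```

Checkboxes like these double as a completion checklist for the AI (and for you, when reviewing its work).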

Again we can turn to an AI prompt for help here.

Break the PRD into stories that can be worked on independently, using vertical slices.

# Process

1. Locate the PRD
2. Explore the codebase (if you haven't already done so) to get the current state of the code
3. Draft vertical slices (break the PRD into thin vertical slices that cut through all the integration layers)
4. Quiz me (present the proposed breakdown; for each item show title, blocked by and user stories covered; ask whether the granularity feels right, the dependency relationships are correct, and whether slices should be merged or split further)

Iterate until I approve the breakdown

Where you then put these stories is up to you. I tend to add an instruction for the AI to create GitHub issues.

Notice the talk of vertical slices? That brings us to the first of our software engineering pillars (which work for both human and AI-assisted coding).

3. Vertical Slice First

If you’re working full-stack on your project, vertical slices are a powerful unlock.

The temptation may be to build out the backend first, maybe starting with the DB, then the API, then the UI code that calls that API, then the UI itself.

But tackling it this way introduces a considerable risk of scope creep, and also a temptation to overengineer or optimize one part of the whole.

In practice, if you’re “building out” the backend, that work can spiral and balloon, but until it’s wired up to the frontend, all that work is based on assumptions and future proofing for problems that haven’t happened yet.

In The Pragmatic Programmer, the authors talk about tracer bullets, the idea of creating a vertical slice through the application, from UI through API to DB (or the equivalent for your organization).

AI is just as likely as humans (perhaps more so) to go “off on one” and start building out infrastructure and plumbing horizontally when left to its own devices.

So another good use of your prompts/skills is to explicitly instruct it to write vertical slices and tracer bullets.

In the last prompt (PRD to stories) we explicitly included this requirement up front, so the stories themselves would represent these vertical slices.

Break the PRD into stories that can be worked on independently, using vertical slices.

And also referenced vertical slices several times in the process steps.

By forcing AI to complete one slice end-to-end, you catch integration issues early—before hours of work pile up on bad assumptions.

4. Tighter Feedback Loops

Imagine you dropped a junior engineer into your code, asked them to build a complex feature and explicitly told them they couldn’t compile the code, run the app, view it in the browser, run tests, etc. Without those feedback loops, there’s a good chance this wouldn’t end well!

AI is no different. Left to its own devices, AI will happily generate code, but with no way to check its work, the quality of the output can vary massively, from kinda OK to entirely broken.

Give it feedback loops and you’ll see much better results.

In general, the agent will try to compile the code after making changes as a minimum, but you can also include explicit instructions in a skill for any feedback loops you want it to run.

It pays to have it read errors in the logs (compile-time or runtime). Or direct it to run tests, use the Playwright or Chrome DevTools MCP to walk through the feature in the UI and verify it actually flows correctly, then generate Playwright tests to speed that up for next time. Run code quality analysis tools, linters, etc.
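Concretely, that might translate into a short verification section in your skill; the steps below are a sketch, with placeholders for whatever build, test and lint tooling your project actually uses:

```markdown
## Verification (after every change)

1. Build the project and fix any compile errors before continuing
2. Run the test suite; never report the task complete with failing tests
3. Check the application logs for runtime errors or new warnings
4. For UI changes, walk through the feature with the Playwright (or Chrome DevTools) MCP and confirm the flow end-to-end
5. Run the linter/code analysis and resolve any new findings
```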

AI agents are hungry for feedback loops, and the more you can direct them to the feedback that matters, the better.

5. Separate Writing Tests From Implementation

AI makes it much quicker and easier to write and maintain tests for your application, but this approach also brings its own challenges.

When AI can write tests, write the implementation code and modify the tests, it can get itself into an unfortunate loop of writing low-quality tests, or making test (and production) code objectively worse just to get back to green.

To counter this, in the last few months I’ve had notable success separating the writing of specs from the implementation.

The easy, manual(ish) way of doing this is to be clear in your prompt (or skill) which of the two you want it to do in any given coding session (write/modify tests or work on the implementation).

Or you can turn to subagents. Subagents run with their own instructions and permitted tool calls, and can often be restricted to specific folders. This means you can lock an agent down to one role: either writing/updating tests or writing/updating production code (but never both).
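In Claude Code, for example, a test-only subagent can be defined as a markdown file under `.claude/agents/`; the definition below is a sketch (exact frontmatter fields and capabilities vary between tools and versions):

```markdown
---
name: test-writer
description: Writes and updates tests only. Never touches production code.
tools: Read, Glob, Grep, Edit, Write
---

You write and maintain the tests for this codebase.

- Only create or edit files under the `tests/` folder
- Test the public contract of each feature, not its internal implementation details
- Never modify production code; if a test can't pass without a production change, report the problem instead
```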

The net positive here ends up being better tests that don’t rely on the internal details of “how” the feature is implemented, focusing instead on testing the public contract of the feature.

Then, the AI that implements the feature can work on and change the implementation as it sees fit, so long as the public API isn’t broken.

Which brings us neatly on to …

6. Be Clear on Your Boundaries (Modular Architecture)

When you look at your code, you bring a certain implicit understanding of its boundaries and shape. You know that the classes in this folder relate to billing, while the classes over there handle user onboarding, for example.

But in many systems, when AI starts exploring your codebase, it just sees the individual classes and methods.

Depending on how well your modules (like billing) are defined, AI may or may not develop an understanding of where the boundaries are in your system—or which classes should call methods on other classes.

Regardless of AI, if you’ve worked on any long-running project, you know how this ends.

Everything in the codebase is eventually wired to everything else, through a myriad of method calls across classes, across boundaries (if those boundaries even really exist) and “small changes” ripple through the codebase like a wildfire through a dry forest in the height of summer.

The alternative is to architect your system as modules with clear boundaries, each exposing a thin “public API” interface.

Define those interfaces clearly in code, tell your AI to use a module’s public interface whenever it needs to communicate across boundaries, and you’ll see considerably better results (and more reliable, resilient software in general).

How you enforce this depends on your chosen language/framework.

For example, in C#, you might give every module its own namespace and top-level service (e.g., IBillingService) that defines the thin API for that part of the system.

AI can go to town behind that interface and implement things however it wants, but when it comes to communication between modules, you can steer it with a prompt/skill.

All communication between modules happens through public interfaces (I-prefixed). Keep implementations internal. If you need to expose data across module boundaries, use dedicated request/response records — never expose internal domain entities directly.
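To make that concrete, here’s a sketch of what a billing module’s thin public API might look like in C# (the names are illustrative, not from a real codebase):

```csharp
namespace MyApp.Modules.Billing;

// The only type other modules are allowed to depend on.
public interface IBillingService
{
    Task<InvoiceSummaryResponse?> GetLatestInvoice(Guid customerId, CancellationToken ct);
}

// Dedicated cross-boundary record; internal domain entities never cross this line.
public record InvoiceSummaryResponse(Guid InvoiceId, decimal Amount, DateOnly IssuedOn);

// The implementation stays internal, so other modules can't "reach around" the interface.
internal class BillingService : IBillingService
{
    // DbContext, domain entities, etc. all live behind this boundary
    public Task<InvoiceSummaryResponse?> GetLatestInvoice(Guid customerId, CancellationToken ct)
        => throw new NotImplementedException();
}
```

The point isn’t this exact shape; it’s that each module has one obvious, named seam that both you and the AI can point at.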

7. Review, Review, Review

You’ve got your systems and processes dialed in, your AI skills and prompts mean AI starts with a plan, breaks that plan down and then implements it responsibly, using feedback loops and delegating work to different agents to keep tests and implementation details separate.

All the while, it’s architecting your app using deep modules with light interfaces, so that boundaries are maintained and your software quality remains high.

But, even then, with all this guidance, AI will make mistakes.

It will leak domain objects when you explicitly told it to use dedicated request/response records. It will couple internal classes together. It will put too much logic in the controller even though you instructed it to keep controllers thin.

So the last, crucial step is to have AI review its work against a clear set of rules.

Here’s an example of part of a review skill for C#.

---
name: csharp-conventions
description: C# Minimal API + Vertical Slice conventions. Use when the user wants to review their code, work or a feature.
tools: Read, Bash, Glob, Grep, Edit, Write
---

# C# Minimal API + Vertical Slice Conventions

Load this skill when reviewing C#/ASP.NET backend code.

## When to Trigger

- `/review-api`
- "Review the branch", "review my code"
- "Check against conventions"
- Before creating a PR on a C#/ASP.NET API project

## Conventions

### Vertical Slice Structure

Each feature slice contains everything needed for that feature in one folder:

```
Features/
  Customers/
    GetCustomer.cs          # Endpoint
    GetCustomerHandler.cs   # Handler (business logic)
    CustomerResponse.cs     # Response DTO
    Customer.cs             # Domain model
```

- **Endpoint** — defines the route and handles HTTP concerns
- **Handler** — contains business logic, orchestrates database/services
- **DTOs** — request/response types specific to this endpoint
- **Domain models** — shared across the feature

---

### Handlers

- Handler classes are injected via DI
- Constructor injection for dependencies
- Business logic lives here, not in endpoints
- Return domain models or DTOs, not `IResult`
- Throw exceptions for errors — let exception middleware handle HTTP responses

#### Template

```csharp
public class GetCustomerHandler
{
    private readonly ApplicationDbContext _db;

    public GetCustomerHandler(ApplicationDbContext db)
    {
        _db = db;
    }

    public async Task<CustomerResponse?> Execute(Guid id, CancellationToken ct)
    {
        var customer = await _db.Customers
            .Where(c => c.Id == id)
            .Select(c => new CustomerResponse(c.Id, c.Name, c.Email))
            .FirstOrDefaultAsync(ct);

        return customer;
    }
}
```
...

This is only one example and represents a small snapshot of a potential review skill. You can ask AI to generate an equivalent for your own standards and conventions.

This review stage is where all your conventions get enforced. AI walks through every changed file and helps catch violations before they ship.

The key is to give the skill a clear set of instructions for how to run these checks and balances against your changes. In my case, I have something like this toward the end of the prompt/skill.

...

## Workflow

When triggered for review:

### 1. Determine the diff

```bash
git merge-base main HEAD
git diff main...HEAD
```

If only reviewing the last N commits, use `git diff HEAD~N...HEAD` instead.

### 2. Review each changed file

For every file in the diff, check against these conventions. Organize findings by category.

### Handler Review Checklist

- [ ] Handler class registered in DI
- [ ] Constructor injection for dependencies
- [ ] Business logic lives in handler, not endpoint
- [ ] Returns domain models or DTOs (not `IResult`)
- [ ] Throws exceptions for errors (middleware handles HTTP responses)

### 3. Report findings

Present findings in a table, grouped by severity:

| # | File | Issue | Convention | Severity |
|---|------|-------|------------|----------|

**Severity levels:**
- **High** — Type mismatches, missing security, broken patterns
- **Medium** — Unnecessary boilerplate, wrong annotations, scope violations
- **Low** — Formatting, missing comments, style inconsistencies

### 4. Offer to fix

After presenting findings, ask: "Want me to fix all of these?"

It might take a few goes to tighten this review skill up, but when you do, you’ll find AI is extremely good at catching quality issues that would otherwise have slipped through (and/or required human code review to pick up).

In Summary

There’s no end to the different ways you can employ AI to help you build your systems at this point, and at times it feels like magic when you give AI a requirement and it runs off to generate lots of code.

But good software engineering is good software engineering, and long-established principles and patterns are just as important as ever.

Drop AI into your codebase with zero (or few) feedback loops, no guidance and no easy way to check its work, and you’ll almost certainly be left disappointed (and with masses of suboptimal code to clean up).

But spend time on planning, give your AI clear feedback loops and explicit instructions to maintain and enhance your codebase (including defining and respecting boundaries between modules), and you’ll see much better results.


About the Author

Jon Hilton

Jon spends his days building applications using Microsoft technologies (plus, whisper it quietly, a little bit of JavaScript) and his spare time helping developers level up their skills and knowledge via his blog, courses and books. He's especially passionate about enabling developers to build better web applications by mastering the tools available to them. Follow him on Twitter here.
