AI didn’t just change how we write code. It changed how we build products.
Over the last year, AI coding agents have gone from glorified autocomplete to autonomous systems capable of scaffolding entire features.
As our VP of Product put it:
“The speed of software engineering is no longer the bottleneck.”
— Genady Sergeev, VP Product, Progress
If that’s true—and it increasingly feels like it is—then something fundamental has shifted.
The constraint isn’t typing. It’s judgment.
These lessons come directly from conversations with our engineering and product teams. Across many interviews and internal reflections, four themes emerged.
One of our product leads described the shift this way:
“AI made refactoring and handling technical debt much faster, allowing us to focus on market features. Validating ideas, creating POCs and writing tests are also areas where we’ve seen huge impact. The whole development lifecycle changed, and it opened the door for exploration and innovation.”
— Yoana Kalaydzhieva, Senior Manager, Software Engineering, Progress
AI lowers the cost of starting. It accelerates refactoring. It scaffolds tests. It makes proof-of-concepts nearly frictionless.
But our engineers were equally honest:
“AI is better at starting than finishing.”
— Stefan Mariyanov, Senior Product Manager, Progress
AI gives you momentum. It doesn’t give you judgment.
What we realized is that developers don’t actually want magic answers—they want orientation. They want something to react to, refine and push against.
That changed how we thought about AI in our tools. Instead of “generate and disappear,” we moved toward AI that hands developers a starting point they can shape, question and build on.
And what that really opened up was exploration.
Then we ran into something humbling.
AI-generated code often looks clean, structured and convincing. The formatting is good. The structure feels familiar. The solution appears reasonable at first glance. But many times it’s simply wrong.
As one of our engineers put it:
“Code often looks right but breaks in subtle ways.”
— Stefan Mariyanov, Senior Product Manager, Progress
That’s the danger. It sounds right.
Kathryn Grayson Nanz framed this beautifully:
“I’d encourage teams to learn how AI actually works instead of treating it like a black box. Even a high-level understanding—how models predict the next word, how they gain context—demystifies the magic.”
— Kathryn Grayson Nanz, Senior Developer Advocate, Progress
The key thing to understand is that AI doesn’t understand truth. It predicts plausibility. It generates the next token based on patterns it has learned.
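That distinction, plausible versus true, can be made concrete with a toy sketch. Nothing below is a real model: `nextTokenScores` is an invented distribution, chosen to show how a frequent-but-wrong continuation beats a correct-but-rarer one under greedy decoding.

```typescript
// Toy illustration (not a real model): a language model scores candidate next
// tokens by how often they followed similar context in its training data,
// then emits a plausible one. Nothing in this loop checks what is *true*.

type TokenScore = { token: string; probability: number };

// Invented distribution for the context "the capital of Australia is"
const nextTokenScores: TokenScore[] = [
  { token: "Sydney", probability: 0.55 },   // common in training text, but wrong
  { token: "Canberra", probability: 0.4 },  // correct, but less frequently seen
  { token: "Melbourne", probability: 0.05 },
];

// Greedy decoding: always take the highest-probability token.
function predictNext(scores: TokenScore[]): string {
  return scores.reduce((best, s) => (s.probability > best.probability ? s : best))
    .token;
}

console.log(predictNext(nextTokenScores)); // "Sydney": plausible, not true
```

The point of the sketch is only that "most likely continuation" and "correct answer" are different objectives, which is exactly why confident output still needs verification.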
And when tools are confidently wrong, trust erodes fast.
For developer tools, that’s a serious problem. Developers rely on their tools to accelerate their work, not introduce hidden uncertainty. If AI suggestions consistently require double-checking or rewriting, the productivity gains disappear.
That realization forced us to rethink how AI should be integrated into the products we build.
Instead of asking AI to guess what developers might want, we started focusing on giving it better context. Rather than generating output in isolation, we began connecting AI directly to the frameworks, component libraries and development tools our users already rely on.
This is where things like our MCP server and product-aware AI tooling come in. By allowing assistants to access real project structure, real component APIs and real framework constraints, the output becomes grounded in the actual environment the developer is working in.
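The grounding idea can be sketched in a few lines. This is a toy illustration under loud assumptions: `componentCatalog` and `buildPrompt` are hypothetical names invented here, and a real MCP server exposes far richer structure than a flat string prompt.

```typescript
// Sketch of grounding: instead of letting the model guess an API, look up the
// real component surface first and put it in front of the model. The catalog
// contents below are illustrative, not the actual Kendo UI API surface.

const componentCatalog: Record<string, string[]> = {
  "kendo-grid": ["data", "pageSize", "groupable", "filterable"],
};

function buildPrompt(userRequest: string, component: string): string {
  const inputs = componentCatalog[component] ?? [];
  return [
    `Task: ${userRequest}`,
    `Component: ${component}`,
    `Valid inputs: ${inputs.join(", ")}`, // constrains generation to real APIs
  ].join("\n");
}

console.log(buildPrompt("show paged, groupable data", "kendo-grid"));
```

The design choice is the interesting part: the model gets less room to be clever and more material to be correct.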
In other words, the goal isn’t to make AI more clever. The goal is to make it more reliable.
For instance, in the AI tools for the Progress Kendo UI for Angular library, that meant building product-aware AI that understands the capabilities and APIs of the components in our libraries. It meant adding framework-aware constraints so generated code follows Angular patterns instead of generic JavaScript assumptions (or outdated Angular patterns). And it meant leaning into opinionated scaffolding, where AI helps developers start with valid structures instead of forcing them to repair generated output afterward.
What we realized is that the gap wasn’t intelligence, it was context. AI could generate something that looked clever, but without grounding in real tools and constraints, it still left developers doing the hard part.
So we shifted the question. Instead of asking how to make AI more impressive, we started asking how to make it more trustworthy. And once we framed it that way, the answer showed up everywhere: context beats cleverness.
Speed came with a cost.
As we started integrating AI more deeply into developer workflows, we began to see both the upside and the tradeoffs more clearly. The productivity gains were real, but so were the risks.
One of our VPs of Product reflected on this directly:
“Using CLI agents successfully requires certain skills and practice. The improvement rate is so fast that observations from 3–4 months ago are already outdated.”
— Genady Sergeev, VP Product, Progress
That pace of change creates a new kind of pressure. Not just to adopt AI, but to continuously relearn how to use it well.
And we observed something else happening inside teams.
“AI helps experienced developers master their productivity—it’s impressive. But less experienced developers might over-trust AI-generated code, which leads to pull requests that require much deeper senior review.”
— Yoana Kalaydzhieva, Senior Manager, Software Engineering, Progress
In other words, AI doesn’t just accelerate output. It amplifies behavior.
For experienced developers, that can mean faster iteration and deeper focus. For less experienced developers, it can mean moving quickly in the wrong direction—and creating more work downstream.
That’s when the real cost of speed started to show up. AI makes output cheaper, but not ownership. It accelerates mistakes just as efficiently as it accelerates success. And when everything moves faster, the cost of catching issues later increases. So we changed how we approached tooling.
Instead of optimizing purely for generation speed, we started designing for accountability alongside acceleration. That meant introducing clear checkpoints where human review is expected, giving teams control over the models and endpoints they use, and enabling every AI-generated action to be traced, inspected and understood.
We also shifted our mindset around AI output itself. Instead of treating it as a near-final draft, we began treating it as untrusted input—something useful, but something that still requires validation.
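One way to picture "untrusted input" is a small validation gate that every suggestion must pass before it is even eligible for human review. The `Suggestion` shape and the individual checks below are illustrative assumptions, not a Progress API; a real pipeline would compile, lint and run the test suite at this point.

```typescript
// A minimal sketch of treating AI output as untrusted input: suggestions
// enter through explicit checks, never straight into the codebase.

interface Suggestion {
  code: string;
  source: "ai" | "human";
}

// Each check returns null on success or a failure reason string.
type Check = (s: Suggestion) => string | null;

const checks: Check[] = [
  (s) => (s.code.trim().length > 0 ? null : "empty suggestion"),
  (s) => (s.code.includes("eval(") ? "disallowed call: eval" : null),
  // In a real pipeline: compile, lint and run tests here.
];

function validate(s: Suggestion): { accepted: boolean; failures: string[] } {
  const failures = checks
    .map((c) => c(s))
    .filter((r): r is string => r !== null);
  return { accepted: failures.length === 0, failures };
}

console.log(validate({ code: "const x = eval(input);", source: "ai" }));
console.log(validate({ code: "const x = 1;", source: "ai" }));
```

The gate does not make the model smarter; it makes the workflow honest about where trust actually comes from.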
This is where our AI Observability Platform became part of the solution. Not to “watch code,” but to watch reasoning.
We wanted to make it possible to see what the model did, how it arrived at a result and why certain decisions were made along the way. Because if AI is going to operate inside real workflows, its behavior can’t be opaque. Visibility becomes a requirement, not a bonus.
If AI accelerates execution, then visibility has to scale with it. Because speed, on its own, isn’t the goal. Speed is only valuable when accountability scales alongside it.
This one surprised us most.
One internal reflection stood out:
“What surprised me most is how quickly the attention span of our customers dropped, and how much the evaluation process changed.”
— Yoana Kalaydzhieva, Senior Manager, Software Engineering, Progress
Evaluation cycles compressed, tolerance for friction disappeared, and that shift affects end users too.
AI isn’t a replacement UI. It’s an augmentation layer.
This lesson shows up very clearly in how we’ve approached AI within the Kendo UI Grid—not as a replacement for the interface, but as a layer that makes it easier to use.
Instead of hiding the system behind a prompt box, we’ve focused on keeping the structure intact: columns, grouping, filtering and data visibility all remain explicit and inspectable. AI becomes a way to express intent faster—“group by this,” “filter that”—while still grounding every action in the underlying UI. You can explore the full capabilities here: Angular Smart Grid.
What this means in practice is that the grid doesn’t disappear when AI is introduced—it becomes more approachable. Users can move faster without losing their bearings, because every change is visible, reversible and tied to real UI controls.
That balance is the point: reducing cognitive load without sacrificing control. Rather than asking users to trust a black box, the grid invites them to collaborate with it—AI helps initiate actions, but the interface remains the source of truth.
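As a sketch of that collaboration model: intent arrives as data, and every AI-initiated action becomes an explicit, reversible change to plain grid state. The `GridState` and `Intent` types here are hypothetical, invented for illustration, not the Kendo UI Grid API.

```typescript
// Sketch of "augmentation layer": natural-language intent is resolved into
// inspectable state changes, so the UI stays the source of truth.

interface GridState {
  groupBy: string[];
  filters: { field: string; value: string }[];
}

type Intent =
  | { kind: "group"; field: string }
  | { kind: "filter"; field: string; value: string };

// Pure function: each intent produces a new, visible, reversible state.
function apply(state: GridState, intent: Intent): GridState {
  switch (intent.kind) {
    case "group":
      return { ...state, groupBy: [...state.groupBy, intent.field] };
    case "filter":
      return {
        ...state,
        filters: [...state.filters, { field: intent.field, value: intent.value }],
      };
  }
}

const initial: GridState = { groupBy: [], filters: [] };
// "group by country, then show only active rows"
const intents: Intent[] = [
  { kind: "group", field: "country" },
  { kind: "filter", field: "status", value: "active" },
];
const next = intents.reduce(apply, initial);

console.log(next); // state is plain data the user can inspect and undo
```

Because state is plain data, undo is just keeping the previous value, which is what makes the AI's actions feel collaborative rather than opaque.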
Over time, these lessons started to feel less like separate observations and more like different edges of the same shape. AI is incredibly effective at helping us begin, but it doesn’t carry the responsibility of finishing. It can produce outputs that look convincing, but without context, that confidence can be misplaced. And while it dramatically increases speed, it also raises the cost of mistakes when that speed isn’t paired with visibility and accountability.
Even when we bring AI into the product itself, the goal isn’t to replace the interface—it’s to make it easier to use without taking control away from the user.
What this ultimately points to is a deeper shift in how we think about building tools. AI changes how quickly we can move, but it doesn’t change who owns the outcome. Developers still make the decisions. Teams still carry the consequences. So the goal can’t just be faster output—it has to be confidence in what we’re building, even as everything around us accelerates.
We’re hosting a live Developer Roundtable on April 15 at 9 a.m. PT, where we’re unpacking all of this in real time—no slides, no polished answers, just honest conversation with developers in the trenches.
We’ll be digging into exactly these kinds of questions.
Join us on the Progress Labs Discord:
👉 Come share what’s working, what’s breaking and what still feels unresolved.
Because the truth is—we’re all still figuring this out. And the best insights aren’t coming from headlines. They’re coming from conversations like this.
Alyssa is an Angular Developer Advocate & GDE. Her two degrees (Web Design & Development and Psychology) feed her speaking career: she has spoken at over 30 conferences internationally, specializing in motivational soft talks. In her spare time she enjoys gaming on Xbox and scuba diving. Her DMs are always open, so come talk sometime.