I’m a believer in using what works and leaving the rest. Here’s what I’ll continue to use AI for and what I won’t—at least for now.
As my friends, family, coworkers and probably several people on the internet already know: I am an AI skeptic—and I’m not particularly shy about it, either. I’m not really sold on the productivity improvement angle, and I’m especially cautious (and critical) when it comes to using it to create content. I’m routinely underwhelmed by emails, blogs, illustrations, logos and more that were clearly created without human involvement. As an artist and writer myself, those aren’t things that I, personally, am looking to replace or automate away in my own life.
That said, I also work in tech, where AI is (for the time being) borderline inescapable. I also think it’s decidedly lame to write something off without giving it a genuine try. And, if I’m completely honest, I was also just plain curious: what was everyone else seeing in this that I wasn’t? Was there some trick or approach to using it that I just hadn’t mastered yet? I created a ChatGPT account, started using the Copilot account work had provided for us, installed our Kendo UI AI Coding Assistant and committed to giving “this AI thing” a real, honest shot for an extended period of time. I’ve been using it for the last couple of months, and … well, my opinions are mixed.
There were some places where AI tooling excelled and was genuinely helpful. There were many others where it wasn’t suited to what I was asking it to do, and I spent more time fighting with the machine than actually getting anything done. Ultimately, I’m not sold on AI as the do-everything tool it’s often marketed as—however, there are a handful of things it was great at that I have folded into my regular routine. With all that being said, here are the places and ways I’m using AI (as a certified AI skeptic).
I found (pretty darn quickly) that I do not like when an AI tool writes the code for me: by which I mean literally populating or auto-completing lines of code in my IDE. Vibe coding and I simply do not get along; I found it challenging to follow what was being written, I didn’t remember how I had structured or named things (because I hadn’t actually structured or named them), and it ultimately slowed me down significantly.
What I did find helpful was using the AI like a pair programming partner. My coworker Alyssa wrote about this a while ago, and it was part of what informed my approach and helped me find a middle ground that worked for me. In the past, if I was implementing something new, I’d try to find examples in the docs or a tutorial blog that walked me through it—and I’d almost always have to make some adjustments for it to work in the context of my own project. Now, it’s handy to ask the AI to generate my own step-by-step implementation tutorials, all customized to my exact tech stack and needs.
I also really like using it to replace looking stuff up in the docs. The Progress Kendo UI AI Coding Assistant is particularly useful here—believe it or not, even though I’m the KendoReact Developer Advocate, I don’t actually have every possible prop for all 120+ components memorized yet (I know, I know, I’m working on it). Being able to throw a quick syntax question into the chat sidebar in VS Code is super handy. I’m not a fan of having it write the whole app for me—although it can do that—but it does have a much better “memory” than I do for whether that prop is called `theme` or `themeColor` (spoiler: it’s `themeColor`).
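For a sense of scale, the kind of question I’m asking is usually no bigger than this (a minimal sketch using the KendoReact Button from @progress/kendo-react-buttons; the specific component is just an illustration here):

```tsx
// Minimal sketch: the prop is themeColor, not theme.
import { Button } from "@progress/kendo-react-buttons";

export function SaveButton() {
  return <Button themeColor="primary">Save</Button>;
}
```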
Of course, I’d also be remiss if I didn’t mention using AI for troubleshooting errors in my code—it’s saved me more than a few times now. However (like all troubleshooting), the trick here is not to fall down the rabbit hole. It’s shockingly easy to just ask it question after question, following the natural chain it creates and letting it suggest wilder and wilder approaches. By the time you’ve tried 4-5 of its suggestions with no solution, you’re on the AI version of the 10th page of Google: the answer just isn’t going to be here. In the pre-AI days, I would always suggest that junior devs set a time limit on their troubleshooting: if you’ve tried for 1-2 hours (max) and made no headway, it’s time to stop Stack Overflow-ing and call in another person. Although the tech is different now, I think the rule still stands; time-box your AI querying and learn to identify when you’ve hit the point of diminishing returns.
One of the places where I have actually found AI to live up to the productivity hype is doing a braindump of all my tasks and goals at the beginning of each week and letting it make a little schedule for me.
I’ll tell it my pre-existing obligations on each day (calls, appointments, etc.), personal to-dos (workouts, chores, social engagements), school assignments (when I was still finishing up my grad school work) and work tasks, and ask it to make a structured to-do list for each day of the week. It’s good at grouping mentally similar tasks, reducing the amount of context switching you need to do, and it’s also helpful for me to see where I have time set aside for each thing. I was never a time-blocking kind of person; it was just too specific, and I always felt like I wasted more time setting up the schedule than actually completing the work. Outsourcing that work to the AI has been beneficial, and my Monday mornings now usually start with a schedule dump into ChatGPT.
This one is (admittedly) a little embarrassing, but for the sake of honesty I’m going to include it: I like to tell the AI chatbot when I start and finish tasks. I think of it kind of like body doubling, but without having to bother another actual person or sync up work schedules. I know we’ve moved into full-on placebo effect here, but something about knowing I have to “report back” when I’ve finished helps keep me on task. Brains are weird.
As I mentioned above, I don’t like to have the AI create content for me—and that includes writing, which I do a lot of in my role. Conference talks, videos, blogs, ebooks: I spend a lot of time click-clacking away on my little keyboard, and I’m not terribly keen to outsource that work. Where I have found AI to be helpful in my writing process is having it read my work and then asking it questions. What was the primary message the author was communicating in this piece? What were the main steps of this tutorial? What was the author’s tone?
Rather than having it check my work for accuracy (it’s decidedly not good at that) or rewrite my words, I like to have it summarize my work back to me so I can make sure I hit all the points I intended to hit and emphasized the right stuff. After all, someone else is probably going to be doing that—even if it’s not as direct as I’m doing here, they’ll be reading the Google AI summary of it or asking a question that some AI will reference my work to answer. It’s a helpful way to confirm that my most important messages are being effectively communicated when I write.
Yes, I know I just talked about this a little bit. But even beyond literal content creation, I also avoid having AI generate outlines or overviews, emails, conference talk descriptions, DMs and so on. It’s just too opinionated, and it has a distinct tone of voice that doesn’t match my own. Plus, I know how I feel when I get an email or read a piece of work that wasn’t written by a human—and it feels bad. If it wasn’t important enough for you to write, then it’s not important enough for me to read.
Look guys, it’s just not good. Maybe it will be good in the future, but that future is not today. It’s always riddled with little mistakes: images that are supposed to look “real” have that soft-edged, glazed-over look, and just forget trying to generate anything with text in it. Websites like Unsplash offer high-quality, royalty-free images—just use those.
Until AI can say “I can’t find that,” I won’t trust it for research—it will just hallucinate an answer and present it to you as the truth. That goes for everything from actual academic paper research to “what time does this restaurant open?” I’m simply not interested in spending my time fact-checking a machine. I even switched away from Google because the (inaccurate) AI summaries at the top drove me crazy. Until AI chatbots are capable of admitting that they can’t do something, I’ll be DuckDuckGo-ing my questions (even though that doesn’t exactly roll off the tongue the way Google-ing did).
As detailed above, I could maybe see doing it for a small, one-off side project—but if it’s anything you’re going to have to work with again at literally any point in the future, it’s just not worth it. And even for that small, one-off thing, you’ll learn and retain more of what you’ve worked on if you write the code yourself.
Where have these tools fit into your life? Are you calling on them every day, or just a couple times a week? We’re all still finding our balance as AI tooling works its way into … well, just about everything. I’m not a believer in throwing the baby out with the bathwater, so I’ll keep playing with the new stuff as it comes out. After all, this isn’t an all-or-nothing game: take what works and leave the rest!
Kathryn Grayson Nanz is a developer advocate at Progress with a passion for React, UI and design, and for sharing with the community. She started her career as a graphic designer and was told by her Creative Director to never let anyone find out she could code because she’d be stuck doing it forever. She ignored his warning and has never been happier. You can find her writing, blogging, streaming and tweeting about React, design, UI and more at @kathryngrayson on Twitter.