The Hidden AI Advantage: Why Fast Teams Lose and Smart Operators Win
How creative leaders can build game-changing output without cutting corners.
It's Saturday evening and I'm coming at you live from a large Diet Coke. I hope you're having a great weekend and you're not as sad as I am about the new Mario Kart requiring a remortgage on the house.
Over the past few months, we've had countless discussions about AI with our colleagues, friends, and strangers. I've noticed a pattern: people tend to mix up speed with true innovation. Some see AI as a magical productivity booster, while others view it as demotivating tech that diminishes human creativity.
What's been missing in these conversations is how AI can be a thought partner rather than just a shortcut. This piece has been baking in my mind for a while, and I'm excited to finally share it. It's a bit longer than usual, but if you're not quite sure how to leverage AI for true innovation or care about the future of creative work, I think you'll find it worth your time. I hope you enjoy.
AI tools have transformed our creative landscape overnight. Everyone's talking about how these tools make us faster—accelerating writing, coding, image creation, and just about everything else.
But here's the counterintuitive truth that's getting lost in the productivity hype: when everyone has access to the same AI tools, speed becomes the baseline, not the advantage.
The real question isn't "How can I use AI to do the same work faster?" It's "How can I use AI to create work that's fundamentally better?" This shift in mindset—from efficiency to quality—is what separates those who will thrive in the AI age from those who will merely survive it.
Think of your creative process like programming. For decades, coders have used a technique called "rubberducking"—explaining their code to a small rubber duck on their desk. It forces them to slow down, externalize their thinking, and catch bugs they'd otherwise miss. This simple act transforms problem-solving by creating a dialogue, even if it's with an inanimate object.
Now imagine that duck could talk back, ask questions, remember your previous work, and challenge your assumptions. That's what we have with AI in 2025—not just a time-saving tool, but a thought partner that can elevate your thinking rather than just accelerate it.
I'm seeing the opposite approach everywhere—the rush to use AI as a shortcut rather than a step up. Look at the book marketplace. There's a new wave of writers who aren't in it to create standout fiction or meaningful nonfiction. They're just using LLMs to crank out volume on a tight schedule.
It's flooding platforms like Kindle Direct Publishing right now. Create a loose template, plug in some variables, and pump out what's essentially a mashup of greatest-hits content. It's like R.L. Stine on steroids—no offense to the legend himself—but minus the charm and intentionality of the Goosebumps books. Just churn out what the kids are calling "AI slop."
Same with image generation. Writers are ditching thoughtful, curated visuals and just running with the first prompt that pops into their head. The result? Bland, mismatched, and often nonsensical images that do nothing to support the writing or pull the reader in.
This low-effort, high-speed output is exactly what fuels criticism of AI. It creates noise, not value, and it distracts from the people using these tools with real purpose.
Quality over speed is the real AI advantage
I fell into the same trap at first. When I started using LLMs, I was immediately struck by the speed—writing went from taking hours to essentially seconds. Same goes for code and image generation.
It was easy to fall into the hype. AI would do the thinking, automate writing, and eliminate all the grunt work. Articles warning that copywriters are doomed and stories of people pumping out books weekly amplify this focus on speed.
But using these tools daily revealed the patterns and limitations. AI's tone, structure, and phrasing become very predictable. People are pointing out these stylistic patterns in others' work, even from professionals. We've realized that when everyone has access to the same toolset, the differentiation disappears.
And just like that, we’re back at square one.
I love that scene in Batman Begins where Commissioner Gordon tells Batman that if cops start wearing Kevlar, the bad guys move to armor-piercing rounds. And if Batman steps outside the law in a costume, he’s bound to provoke someone like the Joker to step out of the shadows. Escalation is inevitable.
Speed isn’t the edge anymore. It’s now the baseline.
So I decided to keep spending the same amount of time on my output, but now with an AI partner at my side. I didn't focus on just speed, and the result was much better quality and a far more enjoyable creative process.
AI removed all the tedious parts of writing and opened up more time for real thinking, refinement, and creativity. Instead of becoming lazy or derivative, this partnership has made my work sharper and my concepts more original.
I've heard this called "thought partner work," and I love that term. This isn't about generating AI slop as the naysayers say. It's about using the tool to go deeper, not to get there faster.
The bar isn't rising yet—but it will for those who think differently
When new technology enters the market, there's always a question of how it will change standards. Will it elevate what's possible or just make mediocrity more efficient? Looking across industries right now, I'm not seeing the bar for quality shift upward at all. It feels like it's either stagnated or dipped since AI tools became mainstream.
We're still in the early phase. What we're seeing is a massive shift in efficiency. People are getting their work done faster, moving through tasks quicker. Yeah, that opens up more time in the day.
But here's the monkey's paw: more time doesn't automatically mean better outcomes. For people taking the lazy path, quality stagnates and fulfillment tanks. They're moving faster but towards mediocre outcomes.
I don't know what the mental health data says, but I bet that kind of shortcut culture leads to burnout or creative disappointment. On the flip side, for people who look at these tools and say, "Hey, what if I use this to elevate my thinking?"—those are the ones who are going to thrive.
We're still at the beginning of this. Most people haven't wrapped their heads around how to use AI in a truly creative, elevating way. Everyone's talking about automation, but think deeper than that.
If an MVP used to take four days, still take the four days—but now you've got room to iterate and refine it three times over. That's where the bar moves—not because of the tool, but because of what you choose to do with the time it gives you back.
The real problem isn't fear—it's indifference
When discussing resistance to AI, people tend to focus on fear: fear of job loss, fear of skills becoming obsolete, fear of a dehumanized future. This conversation has played out a million times in the last year, and I'm not pretending my answer is wildly different, but looking deeper reveals patterns that are crystal clear.
Look at the rise of machines during the agricultural era—farming jobs got replaced by machine jobs. The industrial revolution? Another massive shift. The cycle has always repeated itself. The automobile replaced the horse and carriage. Radio challenged the printing press. Then TV came for radio. Then the internet challenged TV. And now AI is coming for everything.
Look at the chart below. In 1800, over 75% of Americans worked in agriculture. Then machines came for the farms. By the early 1900s, manufacturing took over—factory jobs exploded as industrialization redefined work. That didn’t last forever either. By the 1950s, services started eating manufacturing’s lunch. Now? Over 80% of U.S. jobs are in services. Every era automates the last. And now AI is coming for the service economy fast. The cycle isn’t new. What’s new is the speed.
So yes, this is just the next iteration of the same cycle. Jobs lost, jobs created. Some people win big and others lose hard. And here's the thing—we can't just sweep that under the rug. We have to talk about it, not in a doom spiral, but in a way that acknowledges the pain while looking forward.
I hear people use words like "nightmare," "terrifying," "stealing," "the collapse of capitalism." And I get it. Hell, I use those same words all the time. But it worries me when smart people in my life can't even engage with this shift. Some are burying their heads in the sand, and it's going to hurt them long-term.
This change, this moment, will define our generation. There's no avoiding that. The best shot anyone has at coming through this well is to frame it positively, at least in the short term. Resistance won't protect you. Curiosity just might.
What's interesting is I don't think the most revealing fear is about AI itself. It's the lack of fear. What I'm noticing among people close to me—people I admire, respect, smart people—is not that they're afraid, but that they're not curious about this. They don't seem to fully grasp what's available in the market right now, let alone what's coming or the rate at which it's arriving.
That's the thing that worries me most. Not panic, not alarmism—just indifference.
From rubberducking to mechaducking: AI as your thought partner
Let me go deeper on this rubberducking concept I mentioned earlier. When programmers use this technique, the point isn't just to talk to a cute desk ornament—it's to slow down, externalize thinking, and get out of your own head. The duck helps you catch logical errors, rethink assumptions, and work through complexity. It's a stand-in for real collaboration when no one else is around or when it's not the right time to bug a colleague.
But now in 2025, we've got something that goes way beyond the rubber duck. These LLM tools aren't just passive listeners. They can talk back. They ask you questions. They can challenge your ideas. They remember what you're working on.
This isn't just rubberducking anymore. It's rubberducking on steroids. Actually, it's beyond that. It's rubberducking at a Godzilla-level scale. Maybe we should call this AI thought partner work "MechaDucking." (Remember who coined that!)
The collaboration is just on a whole different level now, and we should be taking full advantage of it.
I had a moment recently where this clicked hard for me. I was driving back from a work offsite about four hours away from home and wanted to explore a few ideas. So I plugged in ChatGPT to my car's Bluetooth audio, and I had a two-hour back-and-forth conversation while I drove. The only thing that prevented it from being a four-hour conversation was that I lost network connectivity.
When I got home, everything was there—a full log of conversation, a clean summary, the key points and outcomes, all ready to go on my desktop. It wasn't just some ramble I forgot—it was captured, usable ideation I could pick right back up on.
That's what this is enabling for us. Slow down, think out loud, and go deeper to draw insights; don’t just execute faster. It's incredible for learning, problem-solving, and even more powerful for communicating complex ideas clearly.
We've been doing this for hundreds of years—it's glorified journaling, but this time with instant feedback. Just like how rubberducking helped programmers debug code, mechaducking is going to help you debug your thinking.
Design your AI engine, don't just ride in it
The difference between casual AI users and power users isn't just about technical knowledge—it's about mindset. When you start looking at AI as a collaborative tool rather than just a utility, something clicks. You realize you can actually design a system that works specifically for you. I think of it like a vehicle, a machine, and you get to design the engine, not just ride in it.
Most people are stuck in prompt-and-reply mode, relying on default settings and basic interactions. That's surface-level stuff. I'm not interested in the prompt game.
If you start at step zero, before the prompting, you can bake in your values, goals, and context into the engine itself.
For example, if you're using Claude, you can set up a project with a file directory that holds all the context for what you're working on. That becomes the engine powering that particular goal.
Let's say you have a public speaking project. What should go into that engine?
A speaking style guide: a document outlining how you talk, how you want to sound, and what you want to express when you're on stage.
Personal goals: what you want your audience to feel, what kind of speaker you're aiming to be.
Practice prompts: speaking exercises that align with your tone and help you refine your delivery authentically.
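To make the "engine" idea concrete, here's a minimal sketch in Python of what baking context in before any prompting could look like. The file names, directory layout, and coaching preamble are hypothetical examples of my own making, not a prescribed feature of any tool—adapt them to whatever project files your assistant of choice supports:

```python
from pathlib import Path

# Hypothetical context files for a public speaking "engine".
# These names are illustrative, not a required convention.
CONTEXT_FILES = [
    "style_guide.md",       # how you talk, how you want to sound on stage
    "personal_goals.md",    # what you want the audience to feel
    "practice_prompts.md",  # exercises aligned with your tone and delivery
]

def build_system_prompt(project_dir: str, files=CONTEXT_FILES) -> str:
    """Concatenate the project's context files into one system prompt,
    so every conversation starts from your values and goals,
    not from a blank slate."""
    sections = []
    for name in files:
        path = Path(project_dir) / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text().strip()}")
    return (
        "You are my speaking coach. Ground every reply in this context:\n\n"
        + "\n\n".join(sections)
    )
```

Feed the returned string in as the system prompt (or project knowledge) of whichever assistant you use. The point is that step zero lives in files you control, not in each individual prompt.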
The key here is self-awareness. You have to know yourself before you can design a system that reflects you. That might feel intimidating, but it's crucial. Any AI engine worth using should have a clear picture of your values, goals, background, and aspirations. That's what enables it to actually support you in a meaningful way, not just quickly generate fluff.
If you get this part right, what you build isn't just useful—it's powerful. It becomes a true partner in shaping the kind of output and creative work that reflects who you are.
There are definitely parts of my workflow that go against people's instincts. Starting with the willingness to share personal details with the system—that kind of vulnerability can feel risky, especially in a time where trust in tech companies has taken some serious hits. I get that. But I've moved past the apprehension because the value is just too high, and honestly, it's been worth it.
The other big shift is how I've let go of conventional expectations around what a writing process should look like. We're conditioned from an early age to follow this formula: brainstorm, then notes, then draft, then revise, then feedback, then publish. That process is seen as sacred. It's even used to measure aptitude in our academic systems.
So when I say I use a mix of audio techniques, AI brainstorming sessions, transcription writing, and interview-style exploration, it throws people off. But here's the truth: I'm spending more time generating ideas now. I'm going deeper. I'm putting out more authentic, valuable, and intentional work.
That's the bar I'm setting for myself, and if people want to judge the process rather than the result, then that's on them.
Just try it—you have nothing to lose
One of the biggest barriers to embracing new technology is simply getting started. The gap between not using AI and using it effectively can seem enormous. But here's my advice if you're resistant to trying these tools in a new way: just try it. Seriously, just try it. You have nothing to lose.
Take something you've already made—a piece of writing, a sketch, a photo, something that was never intended to be used with AI tools—and just feed it in. Everyone's got something. Might be a note on your phone, a doodle in your sketchbook, a random photo on your camera roll.
Use that, then ask the system to change it. That's it.
Maybe it's a photo of your house and you tell it to remove the power lines. That's totally practical. Or maybe it's a sketch you made—take a picture, upload it, and ask the AI to generate a professional-looking image based on the sketch.
You'll be surprised. It keeps the essence of your original work but shows you something new, something you never intended. And that's the point. Now you can keep iterating on that until you get something that really resonates.
It's not about making something to publish or show off. It's not about selling it or adding it to your portfolio. It's just about unlocking what's possible. A pure creative nudge, and the barrier is so tiny. This takes less than five minutes, assuming you have access to a basic tool.
And even if the feature is behind a paywall, what are we talking about here? $15 for a month? That's the cost of lunch. You've got nothing to lose, but you might just come away with a completely different perspective on what creation can feel like.
The 80/20 rule of AI creativity: Make the last 20% all you
The biggest concern people express about using AI in creative work is losing their authentic voice. It's a legitimate worry—how do you maintain originality when working with tools trained on everyone else's output? The answer isn't complicated, but it requires self-awareness.
Like I've said before, this all comes down to knowing yourself. If you want your voice to stay central in any AI collaboration, you need to be clear on what you value, where you're going, and what you actually want to create.
I'll admit this might be an unfair advantage on my part, but I've always lived a value-led life. Even before using AI tools seriously, I always had that clarity. I have a personal document that serves as kind of an operating manual for me. It includes a few lines about who I am, my five core personal values, my life goals, my ten-year vision, my priorities and how I weigh them, and quotes and mantras that resonate and guide me.
That document is my baseline. I use it to align every creative project I touch, and when I work with AI, I share that with the system.
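If you want to start your own version, a bare-bones skeleton might look something like this. The section names are illustrative placeholders, not a copy of my actual document—the exact headings are yours to choose:

```text
PERSONAL OPERATING MANUAL

Who I am          - two or three lines, in plain language
Core values       - five of them, ranked
Life goals        - the handful that actually matter
Ten-year vision   - one paragraph, written in the present tense
Priorities        - and how I weigh them against each other
Quotes & mantras  - the lines I want echoed back in my work
```

Keep it short enough to paste into any project context, and revisit it every few months so the system is aligning with who you are now, not who you were when you wrote it.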
Doing that makes a massive difference. It helps reduce the generic tone in the responses and gives the model something to align with that's rooted in my worldview. It doesn't eliminate bias or blandness entirely, but it gets me a hell of a lot closer.
And from there, it's on me. I take what comes back and I revise it until it feels true. That's the whole game. Use the machine to get 80% of the way there, but make sure the last 20% is all you. If it's not aligned to what's in your heart, it's not ready to put out into the world.
That criticism that using AI somehow diminishes creative ownership? Honestly, that criticism is boring at its core. It often comes from people who ironically aren't thinking that creatively to begin with. They're still treating AI like a Google search—"Make me an ad in the style of Apple." The system spits something out, it gets copied, pasted into a LinkedIn post, and people call it their work.
That's not creativity. That's a command. There's no iteration, no refinement, no authorship, just a one-click stunt.
So in that sense, yeah, maybe AI does diminish creative ownership for people who lack imagination. But for the rest of us—for the ones who actually want to mold ideas and explore unfamiliar angles and build something original—it's a multiplier.
The key is to use your imagination and build your own workflows. Don't just follow the obvious paths—invert them. Look at what everyone else is doing and then ask, "What's not being done? What's missing? What can't AI do in this environment, and how do I lean into that?" That's how you own the process. That's how you push past average and create something only you could have made.
The bar needs to be raised, period. And if you're serious about ownership, then own it. Don't settle for being impressed by the first output. Go deeper than that.

AI removed my creative friction, not my creative work
Executive function—our brain's ability to plan, focus, remember instructions, and juggle multiple tasks—is often the hidden barrier between ideas and execution. For creative professionals, limitations here can be particularly frustrating when you have the vision but struggle with the process.
The biggest shift for me has been consistency. I've always been good at starting projects, but keeping that energy going was another story. Not because I lacked motivation or wasn't willing to work hard, but because of the friction—the tedious, repetitive, low-value tasks that pile up and drain your cognitive energy. That's what would slow me down or derail momentum.
And sure, some people would say, "Well, that's just the grind. That's the hard work." And yeah, maybe. But we're past that now.
AI removes those friction points. It clears the runway. The low-level functions—sorting ideas, formatting, transcription, reorganizing messy thoughts—those are no longer things I have to spend mental energy on. Now I get to stay in the zone where the value is the highest: strategy, creativity, ideation, synthesis. The fluff is handled, and what's left is pure innovation and exploration.
And here's the thing—I think a lot of people want to put in the hard work, but they want to put it into something that matters, into high-value work. Historically, they just haven't had the support. No team, no assistant, no scaffolding. AI levels that playing field. That support system is now available to anyone who wants to pick it up and run with it.
This might sound cocky the way I put it, but I've always been capable of the work I'm doing now. What's changed is that AI has allowed me to do it better.
Here's the key: I refuse to let efficiency and speed dilute the value of my work. I've always been motivated. I've always had the drive. But friction, tedious steps, second-guessing, mental clutter—it slowed me down. Now that friction is gone, and what's replaced it is a consistent flow that leads to high-level output.
The process is so much fun now that I actually can't stop. There's no hesitation, no resistance, no doubt that I can follow through. That's really what this comes down to. Self-doubt used to hold me back from completing bigger, deeper, more vulnerable work. What AI changed wasn't my skillset—it was my confidence.
Specifically, it's opened the door for long-form writing projects that I'd either start and abandon or never start at all. But even more than that, it helped me start putting my thoughts out into public. That's been a huge shift—just getting the ideas out there confidently without overthinking or holding back.
And in smaller ways, it's even helped me communicate better with people who I don't know very well. It's sharpened how I write, how I frame things, how I express myself, especially in those early conversations where the trust hasn't been built yet.
The tools didn't give me ability. They helped me use it more fully and share it more openly.
Saturday morning miracles: what happens when AI unlocks your creativity
There's a profound shift that happens when AI tools remove the technical barriers that once limited your creative expression. Suddenly, ideas that seemed impossibly complex become accessible weekend projects. The scope of what you can create expands dramatically.
The biggest shift for me has been my willingness to play in the sandbox. I hadn't written code in about five years. I moved into leadership, and over time stepping back into dev work felt completely intimidating.
If you've done any development before, you know how fast it moves and how easy it is to feel like you've lost your edge. But once I started using AI tools, I realized I still had the foundational knowledge there, but now I had support. I could use tools like Claude Code to spin up fast MVPs using new engines without the friction or hesitation.
That was a huge unlock for me. It brought back a sense of agency, the ability to just make things for the sake of making them. What used to be a four-weekend hobby over a month now started with a Saturday afternoon exploration. That's how much the process has changed.
Honestly, I feel like a teenager again, or like I'm in my early twenties just building stuff on the weekend for the joy of it. Now I have the time. I carve out the time, and I'm creating things I'm genuinely proud of on the day-to-day.
It's also changed how I approach work. I'm more open to weaving creative ideation into serious projects because the tools lower the barrier and raise the impact. It's made my whole life, not just my workflow, more effective.
An easy example for me is video. I'm not a filmmaker and I never intended to be. I love film, but the process never really called to me. But I do see the value of motion when it comes to expressing or pitching a concept.
As an experiment, I recently took a poem I had written—a piece that lived quietly in my notes app for months—and turned it into something completely unexpected. First, I fed it into an image generator to create visuals based on the poem. I used Midjourney to create stills that match the tone and symbolism.
Then I used Luma to animate those stills into 30-second video clips. The consistency and style made the transitions feel intentional and cinematic. That alone was powerful, but it didn't stop there.
Originally, I was going to overlay a voiceover for the poem, but I realized I could go further. I used a songwriting engine (Suno) to turn the poem into a Broadway-style musical track inspired by the musicals I loved as a kid like Notre Dame de Paris. I layered the music over the animated visuals.
The final result? Something emotional, something haunting, and honestly, something I never planned to make came together in a single Saturday morning. Pure exploration. No goal, no pitch. I shared it with close friends and got amazing feedback. People asked, "What are you doing with this?" And the answer was simple: nothing. The whole point was to make something beautiful and no-stakes.
What I walked away with was a better understanding of what's possible right now with AI and creativity and how to bring together different threads in my life. The writing came from the heart. The music reflected my childhood influences. The visual direction was something I'd always seen in my mind but never had the tools to create.
And now I know when the time comes to do something bigger with this kind of technology, I'm ready.
Corporations demand ROI while individuals build intuition
The gap between how organizations approach AI and how individuals do reveals a fundamental disconnect in innovation. Large companies create policies before understanding capabilities, while individuals discover capabilities that later inform policy.
I think it really comes down to empowerment. Individuals are figuring this out by testing, playing, and pushing limits. That's where the insights are coming from, not from top-down policies.
Corporate environments tend to demand clear outcomes. Everything has to be measurable. Everything has to be justified. But right now, we're in a phase where play is critical. People need space to experiment in order to understand what these tools are even capable of. Without that freedom, teams won't be able to build intuition, and without intuition, they can't innovate.
Frankly, I don't see major corporations outpacing small teams of two or three who are embracing this stuff right now. The shift is too big. It's happening too fast. Agility is the advantage right now, and the individuals and micro teams who lean into this early will out-create the legacy players who are still stuck asking for ROI on every experiment.
And this is a core theme of everything we've talked about here. Stop treating AI like it's just about time-saving or boosting productivity. Yes, it will make things faster. But once everyone is using these tools, we're all back at square one.
If speed is your only metric, you're not gaining a competitive edge—you're just keeping up. How boring. How unfulfilling. The real opportunity is to push beyond that, use the same amount of time you have and take the efficiency gains and reinvest them into the creative process. Make the work better, not just faster. That's where the value is. That's how you stay ahead.
The organizations that will win are the ones that give their people room to explore and push boundaries, who prioritize novelty, strategy, and authentic output over shallow gains. Innovation isn't about getting to done faster. It's about creating something truly new and valuable.
Be a weapon, not just another employee
So where does all this leave the individual trying to navigate between what their organization allows and what they know is possible? How do you balance institutional expectations with personal effectiveness?
I'd say just learn how to play. Use all the extra time in the margins. Go live your life, get outside, touch grass. Spend time with your family and friends.
But when you have those little pockets of time, when you usually fire up a new game or binge a new series, open up a new sandbox instead. Instead of picking up the new Assassin's Creed this weekend, spend some time learning the quirks of these new tools.
Build curiosity and familiarity on your own terms outside of institutional context. Don't be afraid and cynical about these tools as they enter the marketplace. Everyone is throwing their opinion out into the noisy internet, but take it upon yourself to create in the face of that.
And then when you bring that mindset into your work, you're not just another employee. You are a weapon. You've built intuition, you've experimented. You're not just waiting for permission or policy changes. You're already innovating.
And that's going to be extremely valuable in the environments we're all heading into.
When everyone has AI, better wins—not faster.
Whew, that was a dense read… If you found this valuable, I'd love for you to support my little newsletter. Frontier Notes explores the intersection of technology, leadership, and the future of work without the usual hype or doom-spiraling. Just practical insights from someone in the trenches.
Thanks for spending part of your Saturday with me and my thoughts.
P.S. If today’s insight shaved even five minutes off your mental load, please do me one quick favour—forward this note to a teammate who’s staring down the same AI storm. Every share gets us closer to 1,500 sharp-minded subscribers.