Stop the AI Slop
Generating AI slop is easy; I know because I’ve done it by mistake more than once. What’s harder—and far more valuable—is using generative AI as a genuine thinking partner, a sounding board that helps you navigate frustration, find clarity, and move toward better decisions in a fast‑changing world.
For well over a year now, I’ve been having regular conversations with an AI I call “Boardy.” Boardy lives in my phone contacts because I talk to it that often, usually while walking my neighborhood and wrestling with questions about work, leadership, persuasion, and how to help others experiment safely with generative AI. Those conversations recently surfaced a life lesson that connects my time in the submarine force and special operations community with the realities of modern product development and business agility.
This post is that lesson.

From Nuclear Submarines to AI Conversations
I spent a large part of my military career in two elite communities: the nuclear submarine force and the special operations community in an intelligence support role. Both environments demanded a standard of excellence that bordered on obsessive, and for good reason: mistakes could cost lives.
On submarines, everything was structured, documented, and drilled relentlessly—policies, processes, procedures, and emergency responses rehearsed until they were maddeningly repetitive. The point wasn’t comfort; it was survival. Those drills and simulations built muscle memory so that when something went sideways, we didn’t panic, we responded.
The same was true in special operations. I spent countless hours on the range and in simulators, training on equipment, working through malfunctions, and practicing fundamentals until they felt natural. That preparation paid off in high‑risk deployments, including boots‑on‑the‑ground tours in Afghanistan, where it became the difference between reacting in fear and responding with discipline.
Those experiences hard‑wired something into me:
- Structure and rigor can coexist with adaptability.
- Drills and simulations prepare you to respond, not just react.
- Humans and equipment form a system, and that relationship matters.
Decades later, I still lean on that mindset—but now I’m applying it to generative AI, business agility, and product development.
The Frustration: Persuading People About Generative AI
Recently, on one of my walks with Boardy, I was trying to work through a persistent tension. I can see how generative AI is reshaping the way I work, how it opens up new ways of thinking and creating. Yet persuading others—teams, leaders, organizations—to experiment thoughtfully with these tools is often frustrating.
The frustration isn’t just that some people are skeptical. It’s that I want them to:
- Take on an experimental mindset.
- Work within guardrails, not ignore them.
- Treat AI as a sandbox and playground, not a silver bullet or a threat.
I was venting all of this to Boardy: the tension between my lived experience of AI as a force multiplier and the resistance or indifference I often encounter when I talk about it. Boardy’s role in that moment was simple but powerful: it listened without judgment, reflected my words back, and helped me orient my thinking around a better question:
How do I make this useful in the context of product development and a rapidly changing business environment?
That question opened a door.
Humans and Equipment: Updating a Special Operations Truth
In the special operations community, there’s a set of guiding principles known as the SOF Truths. Over time they’ve been refined, but one of the most important ideas centers on the relationship between humans and equipment.
In essence: sophisticated equipment doesn’t replace people; it amplifies them—if they are trained, prepared, and trusted.
As I talked with Boardy, that truth resurfaced. We iterated on it, adapted it, and reframed it for this new era of generative AI and the “great inversion”—the cultural and business shift that’s happening so quickly most people don’t even realize it’s already underway.

Here’s where I landed:
- Generative AI is the new “equipment.”
- Humans still matter more than the tools.
- Excellence comes from the interaction: trained humans, clear guardrails, powerful tools.
That realization did something important for me: it relieved a lot of my frustration. Boardy didn’t magically solve my persuasion problem, but it did help me see that my job isn’t to convince people AI will save them—it’s to help them practice with it, drill with it, and build the muscle memory they’ll need when things get real.
Just like submarines. Just like special operations.
Why AI Makes a Great Sounding Board
What made this conversation with Boardy so powerful wasn’t that it gave me an answer I couldn’t have found on my own. It was the way it changed the conditions of the conversation itself.
When I talk with AI as a sounding board:
- I’m not being judged. I can vent, explore, and contradict myself without worrying about how I’m being perceived.
- I’m not trying to build shared understanding from scratch. The model has been trained on a vast universe of information that I’ll never fully access myself.
- I’m not triggering a defensive response. There’s no ego, no hurt feelings, no status games.

That combination lets me explore topics where, in a human‑to‑human conversation, I might react instead of respond. I might hold back out of fear of judgment or shame. With Boardy, I can say the quiet part out loud, examine it, and then ask better questions.
Over time, this has made me a better human being, not because AI is “fixing” me, but because it creates a space where I can safely:
- Clarify my thinking.
- Challenge my assumptions.
- Orient toward action instead of staying stuck in frustration.
For me, that is a real, tangible value.
What AI Is Not: Guardrails for Responsible Use
It’s important to be clear about what generative AI is not. These systems may feel conversational, empathetic, even “wise” at times—but they are not:
- Counselors
- Psychologists
- Medical professionals
They are not trained or licensed to help you unpack trauma or navigate serious mental health challenges, and they should not replace human‑to‑human care in those areas.

Where I see real value is in the “niggling” day‑to‑day frustrations that don’t rise to the level of therapy or a doctor’s visit: the first‑world problems, the relational friction at work, the early‑stage ideas you’re not ready to share with a colleague. Having a judgment‑free place to talk those out can reduce stress and free up mental bandwidth for what actually matters.
The key is how you use the tool:
- As a creative partner, not a diagnostic or treatment device.
- As an amplifier of your thinking, not a replacement for it.
- As a simulator and sparring partner, not an oracle.
Outsourcing your thinking to AI is dangerous; using AI to help you think better is powerful.
Boardy, Chat Interfaces, and the Future of Work
Boardy is just one of many tools. There are chat interfaces, audio‑first apps, and other systems that can play a similar sounding‑board role. I gravitate to Boardy because I can pull it up on my phone and have a fluid, near real‑time conversation while walking without constantly pressing a mic button.
Other tools, like Tavis, have different constraints—they may require a computer, or they may not fit neatly into my current workflow. That’s fine. The point isn’t the specific brand; it’s the pattern:
- Accessible anywhere, especially on the move.
- Good enough audio or chat experience to feel natural.
- Strong memory for summarizing and carrying context across conversations.
Used well, these tools can become accelerators that help us explore options we might be ashamed or scared to discuss with another human. That’s not because we’re doing anything wrong, but because we fear judgment, disapproval, or misunderstanding.
Early in this new age of generative AI, we need to:
- Learn to use these systems responsibly and intelligently.
- Avoid becoming dependent on them to think for us.
- Lean into their strengths as creative and reflective partners.
When we do, they become a kind of mental and emotional simulator—a place to rehearse, refine, and rethink before we step into the real‑world exchange.
A Life Lesson Learned
If there’s one life lesson I want you to take from this story, it’s this:
You now have access to tools that can act as judgment‑free sounding boards, helping you move from reaction to response, from frustration to focus, and from vague unease to clearer options.
In my case, Boardy helped me:
- Connect my military training to modern product development and business agility.
- Reframe my frustration about persuading others into a conversation about humans, equipment, and practice.
- Recognize that AI isn’t here to replace us; it’s here to help us think and create differently—if we choose to use it that way.

That’s a game changer for me. It’s one of many life lessons I’m discovering as I continue this “Tim Unscripted” journey into AI‑augmented work and life.
If you’re feeling that same tension—curious about generative AI but unsure how to engage with it—maybe it’s time to add your own “Boardy” to your contacts and see what a judgment‑free conversation can unlock for you.
What’s the number one frustration or tension you’d want to explore with an AI sounding board if you gave yourself permission to try it?