4 posts tagged with "AI Agents"

Why Vibe Coding Makes Your Mechanical Keyboard A Learning Disability

· 6 min read
Morgan Moneywise
CEO at Morgan Moneywise, Inc.

A massive, translucent spirit of a cute robot wizard acting as a puppet master, his large luminous eyes wide with innocent focus as he uses neon-blue strings to guide a frantic industrial robot's hands into slow, deliberate Tai Chi movements over a melting mechanical keyboard

In 1623, the "water-poet" John Taylor claimed that coaches would cause men to abandon the practice of riding on horseback and ruin their morality by facilitating "riot, whoring, and drunkenness."

Taylor was a man of deep convictions and even deeper grievances. He didn't just dislike coaches; he saw them as an existential threat to the very fabric of human discipline. He famously wrote:

"A coach is common, so is a whore: a coach is costly, so is a whore; a coach is drawn with beasts, a whore is drawn away with beastly knaves. A coach has loose curtains, a whore has a loose gown, a coach is laced and fringed, so is a whore: a coach may be turned any way, so may a whore: a coach has bosses, studs, and gilded nails to adorn it: a whore has Owches, brooches, bracelets, chains and jewels to set her forth: a coach is always out of reparations, so is a whore: a coach has need of mending still, so has a whore: a coach is unprofitable, so is a whore: a coach is superfluous, so is a whore."

Anyway, why am I talking about 17th-century horses and whores in a post about vibe coding?

Because you’ll find similarly strong feelings among many Senior Devs today. To them, vibe coding isn’t just an efficiency tool; it’s the moral decay of the craft. They worry it’s creating a generation of "Imbecile Juniors" who don’t know a thing about system design—kids who can prompt but can’t architect. In their view, if you don’t type every line of code by hand, you’ll never "truly" understand how it works. They’ve mistaken the performative struggle of manual typing for learning how to build software.

And while they’re busy polishing their $500 mechanical keyboards and enjoying the "thock" of hand-typing boilerplate, they’re missing the fact that their favorite tool has become a learning disability as vibe coding goes mainstream.

Just-In-Time Prompting: A Remedy for Context Collapse

· 5 min read
Morgan Moneywise
CEO at Morgan Moneywise, Inc.

A cute robot wizard reclining on the ground, exhausted with spiral eyes, reaching out to a hand from a blue portal delivering a glowing magic scroll

Watching an AI agent struggle with a LaTeX ampersand for the tenth time isn't just boring. It’s expensive. You’re sitting there watching your automation burn through your daily quota in real-time just because the LLM can’t remember a backslash.

I tried the usual prompt engineering voodoo. I even threw the "Pro" model at it, hoping the extra reasoning would bail me out. It did not 🥲

That’s when it clicked. Between the start of the session and the final compilation, there are so many intermediate steps that the agent inevitably hits the "lost in the middle" problem. By the time it’s actually time to fix the compile issues, it has forgotten the rules I painstakingly wrote into the system prompt!

I needed a way to inject the rules after the error happens but before the agent tries to fix it. I was about to do it manually—and honestly, at that point, I might as well have just written the LaTeX myself—but then I remembered the new Skills feature in gemini-cli. It was exactly the approach I was looking for.
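To make the timing concrete, here is a minimal sketch of that just-in-time injection loop. The `RULE_BOOK` keyed on error signatures and the `fix_prompt` helper are my illustration of the idea, not the actual Skills API:

```python
# Hypothetical sketch of "just-in-time prompting": instead of front-loading
# formatting rules into the system prompt (where they get lost in the middle),
# surface them in the conversation only when a matching error appears.

import re

# Map error signatures to the rules the agent needs *right now*.
RULE_BOOK = {
    r"Misplaced alignment tab character &": (
        "LaTeX rule: a literal ampersand in text must be escaped as \\&. "
        "Only use a bare & as a column separator inside tabular/align."
    ),
}

def rules_for(error_log: str) -> list[str]:
    """Return only the rules triggered by this specific error log."""
    return [rule for pattern, rule in RULE_BOOK.items()
            if re.search(pattern, error_log)]

def fix_prompt(error_log: str) -> str:
    """Build the repair prompt, injecting relevant rules just in time."""
    injected = "\n".join(rules_for(error_log))
    return (f"The build failed with:\n{error_log}\n\n"
            f"Relevant rules:\n{injected}\n\n"
            f"Fix the file.")

log = "! Misplaced alignment tab character &. l.12 AT&T"
print(fix_prompt(log))
```

The point is the timing: the rule arrives right next to the error it addresses, so it cannot get lost in the middle of a long session.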

And now I have a blueprint for building AI agents that can reliably troubleshoot and fix their own mistakes with surgical precision!

3 Design Patterns to Stop Polluting Your AI Agent's Context Window

· 7 min read
Morgan Moneywise
CEO at Morgan Moneywise, Inc.

A cute robot wizard in a blue robe sweeping up 'context junk' with a broom

I've been watching Gemini CLI development for a while, and I started noticing a pattern that felt... redundant. First, we got custom Slash Commands. Then Custom Sub-Agents. Now, we have Skills.

It started to feel like feature bloat. On the surface, it looks like a massive violation of the single-responsibility principle: all three features just shove a prompt into the context window. If that's all they do, why do we need three separate abstractions? Is this just marketing, or is there an actual architectural reason for the redundancy?

So off I went poking around the source code again. It turns out the real difference isn't in what they can do, but in how much work they make you do to keep the session (and your sanity) from collapsing.
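My mental model of that difference, sketched as a toy. None of this is Gemini CLI's real implementation; the names and the crude word-count "tokenizer" are illustrative only:

```python
# Toy model of my reading (not official Gemini CLI architecture): the three
# features differ less in *what* text they add than in *whose* context window
# absorbs it.

from dataclasses import dataclass, field

@dataclass
class Session:
    context: list[str] = field(default_factory=list)

    def tokens(self) -> int:
        # Crude word count standing in for a real tokenizer.
        return sum(len(msg.split()) for msg in self.context)

def slash_command(session: Session, template: str) -> None:
    """Slash command: the expanded template lands directly in YOUR context."""
    session.context.append(template)

def sub_agent(session: Session, task: str) -> None:
    """Sub-agent: intermediate chatter lives in a throwaway context; only a
    short summary returns to the parent session."""
    scratch = Session()
    scratch.context += [task, "tool call ...", "tool output ...", "retry ..."]
    session.context.append(f"summary: {task} done")

main = Session()
slash_command(main, "a long reusable prompt template " * 10)
sub_agent(main, "fix the LaTeX build")
print(main.tokens())
```

A slash command pays its full token cost in the main session forever; a sub-agent pays most of it in a context that gets thrown away.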

Agent Augmentation vs. Delegation in Google's Gemini CLI Skills System

· 6 min read
Morgan Moneywise
CEO at Morgan Moneywise, Inc.

A cute robot wizard in a Matrix-style chair saying 'I know kung fu'

Documentation tells you what the developers intended to share; the main branch tells you what they are actually building.

After accidentally discovering the undocumented sub-agent feature, I found myself watching Gemini CLI's repository more closely. While reviewing the git log at SHA d3c206c, I noticed a few mentions of "skills."

At first, I thought it might just be a marketing rebrand of custom agents. But the more I looked, the more it felt like a totally different approach to architecting Agentic AI workflows. I’ve been thinking of it as Delegation vs. Augmentation.

By understanding this distinction, you can stop stuffing your system prompts with every tool imaginable and avoid the unnecessary complexity of managing a swarm of sub-agents. If you can just make your current agent a bit smarter on the fly, why bother with all the extra overhead?
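Here's the distinction as a hedged sketch. `delegate`, `augment`, and the `run_agent` stub are my names for the two shapes, not Gemini CLI's actual API:

```python
# Delegation vs. augmentation, reduced to their skeletons (conceptual only).

def run_agent(system_prompt: str, task: str) -> str:
    """Stand-in for a real LLM call."""
    return f"[agent primed with {len(system_prompt)} chars] did: {task}"

def delegate(task: str) -> str:
    """Delegation: spawn a specialist with its own prompt and context.
    The parent never sees the child's intermediate steps, only the result."""
    child_prompt = "You are a LaTeX repair specialist."
    return run_agent(child_prompt, task)

def augment(context: list[str], skill_text: str, task: str) -> str:
    """Augmentation: pull the skill's instructions into the *current*
    agent's context on demand. Same session, same state, now smarter."""
    context.append(skill_text)
    return run_agent("\n".join(context), task)

ctx = ["base system prompt"]
print(delegate("escape the stray ampersand"))
print(augment(ctx, "Rule: a literal & must be written \\&",
              "escape the stray ampersand"))
```

Delegation buys isolation at the cost of orchestration overhead; augmentation keeps everything in one session and simply makes that session smarter at the moment it needs to be.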