Just-In-Time Prompting: A Remedy for Context Collapse

Watching an AI agent struggle with a LaTeX ampersand for the tenth time isn't just boring. It's expensive. You're sitting there watching your automation burn through your daily quota in real time, all because the LLM can't remember a backslash.
I tried the usual prompt engineering voodoo. I even threw the "Pro" model at it, hoping the extra reasoning would bail me out. It did not 🥲
That's when it clicked. Between the start of the session and the final compilation, there are so many intermediate steps that the agent inevitably hits the "lost in the middle" problem. By the time it actually gets around to fixing the compile errors, it has forgotten the rules I painstakingly wrote into the system prompt!
I needed a way to inject the rules after the error happens but before the agent tries to fix it. I was about to do it manually (and honestly, at that point, I might as well have just written the LaTeX myself), but then I remembered the new Skills feature in gemini-cli. It was exactly the approach I was looking for.
And now I have a blueprint for building AI agents that can reliably troubleshoot and fix their own mistakes with surgical precision!
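To make the idea concrete, here is a minimal sketch of what "just-in-time" rule injection looks like, independent of any particular agent framework. The `JIT_RULES` mapping, the error patterns, the `paper/main.tex` path, and the pdflatex invocation are all illustrative assumptions, not the gemini-cli Skills format; the point is simply that the rules are matched against the fresh compile log and handed to the model right before the fix, instead of sitting at the top of a long-forgotten system prompt.

```python
import re
import subprocess
from pathlib import Path

# Hypothetical mapping from error signatures to the rules the agent keeps forgetting.
# Patterns and wording are illustrative, not tied to any specific tool's skill format.
JIT_RULES = {
    r"Misplaced alignment tab character \&": (
        "Inside tabular/align environments, a literal ampersand must be "
        "escaped as \\& -- a bare & is a column separator."
    ),
    r"Undefined control sequence": (
        "Check that the package providing the command is loaded in the "
        "preamble before inventing a replacement macro."
    ),
}

def compile_tex(tex_file: Path) -> str:
    """Run pdflatex non-interactively and return its combined log output."""
    result = subprocess.run(
        ["pdflatex", "-interaction=nonstopmode", tex_file.name],
        cwd=tex_file.parent,
        capture_output=True,
        text=True,
    )
    return result.stdout + result.stderr

def build_fix_prompt(log: str) -> str:
    """Inject only the rules that match the current error, right before the fix attempt."""
    matched = [rule for pattern, rule in JIT_RULES.items() if re.search(pattern, log)]
    rules_block = "\n".join(f"- {rule}" for rule in matched) or "- (no special rules matched)"
    return (
        "The LaTeX build failed. Before editing, follow these rules:\n"
        f"{rules_block}\n\n"
        "Relevant log excerpt:\n"
        f"{log[-2000:]}"
    )

if __name__ == "__main__":
    log = compile_tex(Path("paper/main.tex"))  # path is illustrative
    if "!" in log:  # pdflatex prefixes error lines with '!'
        prompt = build_fix_prompt(log)
        print(prompt)  # hand this to the agent instead of relying on the stale system prompt
```

With a Skills-style setup, the same rules live in a skill that the agent pulls in only when the matching error shows up, which is what makes the injection "just in time."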


