I write a lot of prompts. Easily dozens a day at this point. Between experimenting with ideas, vibe coding, vibe learning, asking questions, and drafting content, a good part of my day involves typing into some kind of LLM prompt box.

The odd part is that I have never been a great typist, despite thousands of hours of typing. I type reasonably fast, but accuracy has always been a bit questionable. Over the years I developed a habit that mostly compensated for that. I would type quickly, then pause for a moment to read the message before sending it. That short review step usually caught the worst mistakes.
At least it used to.
Large language models are incredibly forgiving. You can type something with missing punctuation, half-spelled words, and sentences that barely hold together, and the model still figures out what you meant.
For example, I might type something like:
“whats best way train model small dataset few hundered rows”
And the response comes back as a clean, structured explanation about training strategies for small datasets.
No confusion. No request to clarify what I meant. The model just interprets it.
After a while this starts to change how you type.
When the system on the other end automatically corrects spelling, smooths out grammar, and infers your intent, there is less reason to slow down. Speed becomes the only thing that matters. The careful review step slowly disappears.
Now I catch myself doing something slightly embarrassing in normal conversations. I will fire off a quick Teams response and immediately notice that half the words look like they were assembled by a cat walking across the keyboard.
Then I go back and edit the message.
It eventually looks fine, but there is a brief moment where the original message probably makes very little sense to the person reading it. My fingers bang out nonsense and hit send on autopilot.
The funny part is that an LLM would have understood it perfectly the first time.
I can type something that is barely recognizable as proper English and the model calmly responds with exactly the information I was trying to ask for. If only the rest of the world worked that way.
My guess is that this will become a more common problem as AI usage increases. When people spend hours each day communicating with systems that automatically interpret messy input, the incentive to type carefully starts to fade and bad habits form.
We may end up in a strange situation where humans slowly become worse at typing clearly (possibly even speaking clearly) while machines become extremely good at understanding unclear commands.
I can imagine a world where technology enthusiasts and teenagers struggle to communicate in written and spoken language with other humans without an AI interpreter.