You can’t help people you don’t understand. No understanding + no trust = undifferentiated = slop.
“Companion” seems to be a dirty word, but if – like me – you believe AI can help people with real problems, then you need to look past the word. Many agents will rely on deep user knowledge and trust – how will you build them?
To help with code, doesn’t your coding agent need code context? It builds and leverages understanding far beyond your prompting. Without knowing about you and your life, what is AI supposed to do when you ask it why your wife is treating you like that, or how much you should worry about that mole, or how to be more present for your kids? What the heck does _it_ know?
No understanding + no trust = undifferentiated = slop. Build connections strong enough for the AI to learn from people in real time, then deepen that understanding asynchronously. Use that understanding to help people, then stay engaged so you can learn from the result and help better next time.
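That loop – learn live, consolidate later, answer from memory, keep learning – can be sketched in a few lines. This is a toy illustration with hypothetical names (`UserMemory`, `help_user`), not any real system's API:

```python
from dataclasses import dataclass, field

@dataclass
class UserMemory:
    facts: list[str] = field(default_factory=list)  # raw observations
    summary: str = ""                               # consolidated understanding

    def observe(self, fact: str) -> None:
        # Real-time learning: record what the conversation reveals.
        self.facts.append(fact)

    def consolidate(self) -> None:
        # Asynchronous deepening: distill raw facts into durable context.
        # (In practice this would be summarization, not a join.)
        self.summary = "; ".join(self.facts)

def help_user(memory: UserMemory, question: str) -> str:
    # Ground the answer in accumulated understanding, not a cold start.
    context = memory.summary or "no prior context"
    return f"[context: {context}] answering: {question}"

mem = UserMemory()
mem.observe("had mole checked in 2022, benign")
mem.consolidate()
reply = help_user(mem, "how worried should I be about this mole?")
mem.observe("gave reassurance; user followed up")  # stay engaged, learn from the result
```

The point isn't the code; it's that the answer to the mole question changes entirely depending on whether `summary` is empty.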
I eschewed big parts of the AI hype cycle because I care more about finding ways to help _normal people_ (not weirdos like us). One important lesson: memory almost *always* matters. You could be accumulating medical records over years and re-examining them after every quick health catch-up, or giving good relationship advice because you know my girlfriend’s attachment style, or whatever, but you can’t help people you don’t understand.
I’m building for _understanding_.
