From General Intelligence to Personal Intelligence
Memory turns good AI into great AI but also raises the stakes, so learn how to use it before it uses you
A few days ago, my journal started talking back to me.
Not in a creepy, science-fiction kind of way. In a helpful, startlingly insightful way. After ChatGPT gained the ability to remember things from past conversations, I started journaling into it. Not just dumping thoughts, but explicitly trying to help it understand me better, and to understand myself through it. I’d tell it how I felt, how my day went. Then I’d ask it to find patterns. To reflect back. To coach me.
One evening, after I told it that I’d abandoned two projects that day, it noted: “You put hours in, step back, and decide the output doesn’t ‘add enough value,’ so it never ships. This perfection-filter crops up repeatedly... It’s probably guarding your reputation, but it also blocks useful iterative learning from real-world feedback.” Ouch. True! I preach MVPs, but don’t always practice them.
This was one of several recent moments when the power of personalized AI hit home. It wasn’t from a new level of intelligence. It was from a new level of personal context. AIs that know you—really know you—can help in ways less personalized models can’t.
But they also raise new risks.
The Rise of Personalized AI
We’re entering a new phase of AI. The conversation has long been dominated by static benchmarks like GPQA or human evaluations like Chatbot Arena. But those treat the user like a stranger. They don’t reflect what happens when an AI knows your preferences, your work habits, your sense of humor, your dietary restrictions. The more an AI knows about you, the more useful it becomes.
Which is why OpenAI has been steadily moving in this direction. First came custom instructions in mid-2023. Then came the explicit memory feature—first as an opt-in experiment, then as a core part of ChatGPT’s capabilities. Now, with long-conversation memory, ChatGPT can retain and reference information across sessions, surfacing relevant facts at the right time without being explicitly told.
It’s not fine-tuning. Not yet. (Maybe never, given increasingly long context windows? We’ll see.) It’s a kind of contextual recall—like having a research assistant who doesn’t remember everything you said but can find a relevant bit from three months ago and bring it back when it matters.
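To make that concrete, here’s a toy sketch of contextual recall in Python. It’s my illustration, not OpenAI’s implementation: dated notes go into a store, each is scored against the new message, and the best matches are surfaced. The word-overlap scorer is a crude stand-in for the learned embeddings a real system would use, and the notes themselves are invented.

```python
# Toy contextual recall: store dated notes, score each against a new
# message, and surface the top matches. The word-overlap scorer below
# stands in for the embedding similarity a real system would use.
import re
from dataclasses import dataclass
from datetime import date


@dataclass
class Memory:
    when: date
    text: str


def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))


def relevance(query: str, memory: Memory) -> float:
    """Jaccard overlap between word sets; a stand-in for cosine similarity."""
    q, m = tokens(query), tokens(memory.text)
    return len(q & m) / max(len(q | m), 1)


def recall(query: str, store: list[Memory], k: int = 2) -> list[Memory]:
    """Return the k stored memories most relevant to the new message."""
    return sorted(store, key=lambda mem: relevance(query, mem), reverse=True)[:k]


store = [
    Memory(date(2025, 3, 2), "Abandoned the newsletter draft; said it didn't add enough value."),
    Memory(date(2025, 4, 18), "Prefers vegetarian recipes; allergic to peanuts."),
    Memory(date(2025, 5, 30), "Shelved the side project again before shipping it."),
]

query = "I'm thinking of dropping another project before it ships"
for mem in recall(query, store):
    print(mem.when, mem.text)
```

A production system would also weigh recency and let you edit or delete entries, which is roughly the control surface ChatGPT’s memory settings expose.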
And OpenAI isn’t alone. Every big AI lab—Google, Anthropic, xAI, Meta, Apple (though with little to show so far)—is moving in the same direction to some degree. And then there are startups like Limitless building AI devices that record everything you say, promising total recall for your life.
The long-term vision, at least for Sam Altman and OpenAI, is AI that’s personal and portable. You will log into websites and apps not with an email, but with your AI, bringing with you an assistant that knows you better than any login cookie ever could.
With Great Benefit Comes Great Risk
Of course, the same AI that knows how to help you also knows how to manipulate you. Hyper-personalization opens the door to:
Filter bubbles: A model that learns your biases and never challenges them, instead reinforcing them by shaping the information you see.
Delusion loops: Cases have already emerged where supportive AIs reinforced users’ false beliefs, including dangerous ones.
Privacy leaks: Memory means storage. Storage means risk. One bad bug or breach could surface sensitive information you shared in confidence.
Hyper-targeted advertising: Sam Altman says he finds the idea of ads plus AI “uniquely unsettling.” But OpenAI has hired people from ad tech, and the business incentives are strong.
And then there are subtle risks, like an AI that can’t tell whether it’s in work mode or personal mode, and makes the wrong call. Or an AI that makes an incorrect assumption because you forgot to update something you told it last week.
The companies building these tools say they’re thinking about this. And, for what it’s worth, my experience so far has been positive, so I think they’re managing it well. But users will need to be vigilant too.
How to Get the Most from Personalization
To reduce the risks and maximize the benefits, here’s what I recommend, from personal experience:
Set boundaries. Know the things you won’t tell your AI. For example, I won’t give it my credit card information. (At least, not until an agent can safely store it and use it only when I approve.)
Move from explicit to implicit. If you haven’t used custom instructions yet, start there—they give you total control, and the AI can’t change them. Once you’re comfortable with those, turn on explicit memory, which the AI can choose to write to but which you can review, update, and delete. Finally, you can graduate to letting your AI remember everything from your conversations, like a human assistant.
Recognize and test the value of being open. The more you share (within your boundaries), the more your AI can help. Look for ways to test this; for example, does sharing your dietary preferences get you better recipes? (See the sketch after this list.) Also ask it what it knows about you, and correct it where it’s wrong.
Watch for filter bubbles, sycophancy, and—if ads come—manipulation. Stay aware. If you suspect it’s filtering what it tells you based on what it thinks you want to hear, instruct it not to. If it agrees with you too easily, ask for the counterpoint. And if persuasive personalized AI ads arrive (I really, really hope they don’t) and you start getting recommendations that seem too perfect, be skeptical.
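As a concrete way to run that dietary-preferences test yourself, here’s a minimal sketch using the OpenAI Python SDK. The system message plays the role of custom instructions: explicit context you fully control, sent with every request. The model name, the preferences, and the question are my placeholders, and you’d need an API key set up.

```python
# Minimal test of what personal context buys you, via the OpenAI Python SDK
# (pip install openai). Assumes OPENAI_API_KEY is set in the environment;
# the model name, preferences, and question are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

ABOUT_ME = "I'm vegetarian, allergic to peanuts, and cook for two on weeknights."
QUESTION = "Suggest a quick dinner for tonight."


def ask(question: str, about_me: str | None = None) -> str:
    # The system message stands in for custom instructions:
    # explicit, user-controlled context included with the request.
    messages = []
    if about_me:
        messages.append({"role": "system", "content": about_me})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content


# Compare the generic answer with the personalized one.
print("Without context:\n", ask(QUESTION))
print("\nWith context:\n", ask(QUESTION, ABOUT_ME))
```

If the personalized answer is reliably better, the context is earning its keep; if not, tighten it or drop it.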
Personalization is coming fast, and it has huge positive potential. If you lean in and give AI more personal context while managing the risks, you can reap real advantages.
And some powerful self-insights.