Sam Altman thinks GPT-5 will be smarter than him — but what does that mean?
![Sam Altman thinks GPT-5 will be smarter than him — but what does that mean?](https://jtimes.net/wp-content/uploads/2025/02/Screenshot-2025-02-10-at-17.21.27.jpg)
Sam Altman took part in a panel discussion at Technische Universität Berlin last week, where he predicted that GPT-5 would be smarter than him, or more accurately, that he wouldn't be smarter than GPT-5.
He also did a bit with the audience, asking who considered themselves smarter than GPT-4, and who thought they would also be smarter than GPT-5.
“I don’t think I’m going to be smarter than GPT-5. And I don’t feel sad about it because I think it just means that we’ll be able to use it to do incredible things. And you know like we want more science to get done. We want more, we want to enable researchers to do things they couldn’t do before. This is the history of, this is like the long history of humanity.”
The whole thing seemed rather prepared, especially since he forced it into a response to a fairly unrelated question. The host asked about his expectations when partnering with research organizations, and he replied “Uh… There are many reasons I am excited about AI. …The single thing I’m most excited about is what this is going to do for scientific discovery.”
He didn’t answer the host’s question at any point during his reply, and he also didn’t give any details or explanation regarding his comment. What does it mean for GPT-5 to be smarter than Sam Altman?
Does it mean GPT-5 will be trained on data covering in-depth knowledge of more subjects than Altman has experience with? That's probably already the case with GPT-4, but people don't describe it as smart because it's so bad at following instructions, retaining context, and revising its responses.
So, can we expect GPT-5 to improve in this area? It shouldn't be impossible; my experience with DeepSeek, for example, has been much more positive. If I ask for no more than 100 words, two bullet-point lists, and information taken from a certain link, it actually delivers.
Then, when I ask it to add an extra section summarizing an additional webpage I provide — I get what I asked for. I’ve never been able to achieve this kind of smooth and accurate operation with GPT-4, and I’m not even asking for anything complicated.
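Instruction-following of this kind can even be checked mechanically. As a rough illustration (the 100-word limit and two-list requirement come from my prompt above, but the function name and the bullet-parsing heuristic are my own invention, not part of any model's API), here is a sketch of a checker that verifies whether a response obeys those two constraints:

```python
def meets_constraints(response: str, max_words: int = 100, bullet_lists: int = 2) -> bool:
    """Check that a model's response stays under a word limit and
    contains exactly the requested number of bullet-point lists."""
    # Word-count check: naive whitespace split.
    if len(response.split()) > max_words:
        return False
    # Count bullet lists: each run of consecutive lines starting
    # with "-" or "*" counts as one list.
    lists = 0
    in_list = False
    for line in response.splitlines():
        is_bullet = line.lstrip().startswith(("-", "*"))
        if is_bullet and not in_list:
            lists += 1
        in_list = is_bullet
    return lists == bullet_lists
```

A harness like this makes "did the model follow my instructions?" a yes/no question rather than an impression, which is exactly the kind of concrete yardstick missing from Altman's remarks.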
These are the kinds of things I consider when assessing how "smart" an AI model is, but it's impossible to know what criteria Altman judges by. He keeps talking about science and research, and he even mentioned curing cancer at one point, but it's hard to see how ChatGPT fits into such things.
I can see how artificial intelligence as a whole might contribute, but an LLM? The official site for ChatGPT describes it as a brainstorming partner, a meeting summarizer, a code generator, and a way to search the web. Which of these features will meaningfully help a research scientist dealing with questions no human has the answers to yet?
If Altman has thoughts or answers on these topics, he isn’t sharing them. He just sticks to sweeping statements that only sound impressive until you realize you have no idea what he actually means in practical terms.