OpenAI AMA: Future AI Models, Open-Source Shifts, and What’s Next for ChatGPT
If You Missed It, Don't Worry: Here's the Rundown
Three days ago, OpenAI’s top executives, including CEO Sam Altman, Chief Research Officer Mark Chen, and Chief Product Officer Kevin Weil, took to Reddit for an Ask Me Anything (AMA) session, fielding questions from an eager AI community.
Over the course of an hour (and thousands of comments), they touched on OpenAI’s evolving roadmap, the fate of GPT-5, the state of voice AI, and a potential shift in the company’s stance on open-source AI.
Here’s a breakdown of the biggest takeaways from the session.
GPT-5, Model Improvements, and Future AI Capabilities
Perhaps the most pressing question for many users was the fate of OpenAI’s flagship model series. When asked about GPT-5, Altman confirmed that it is in development but remained tight-lipped on the timeline:
“I think we’ll just call it GPT-5, not GPT-5o. Don’t have a timeline yet.” —Sam Altman
Meanwhile, OpenAI’s GPT-4o continues to receive updates, though no major version refresh was announced. Users also pushed for improvements in context length and memory, key areas where AI models still face limitations. While OpenAI executives confirmed they are actively working on longer context windows, they stopped short of providing specifics.
“We are working on increasing context length. Don’t have a clear date/announcement yet.” —Srinivas Narayanan, VP of Engineering
One of the most striking admissions came from Altman himself, who acknowledged that recursive self-improvement, where AI models enhance their own capabilities, may not be as far off as once believed:
“I personally think a fast takeoff is more plausible than I thought a couple of years ago. Probably time to write something about this…” —Sam Altman
The Future of Advanced Voice AI and Multimodal Capabilities
Users also zeroed in on Advanced Voice Mode, requesting features like pausing AI responses mid-conversation to prevent interruptions. Kevin Weil, OpenAI’s Chief Product Officer, assured users that improvements are on the way:
“Yes! Updates to Advanced Voice Mode are coming.” —Kevin Weil
Another hot topic was multimodal AI, particularly DALL-E 4 and AI’s ability to generate and edit images natively within ChatGPT. Weil confirmed that new capabilities are being developed:
“Yes! We’re working on it. And I think it’s going to be worth the wait.” —Kevin Weil
Users also asked about Whisper, OpenAI’s speech recognition model, and whether it would gain the ability to transcribe non-speech sounds for better closed-captioning. While no firm commitment was made, OpenAI executives acknowledged the interest.
User Requests: ChatGPT, Operator, and Task Automation
Users submitted a flurry of feature requests, including:
- Cross-chat referencing in Projects
- A more seamless task management system
- Mobile support for Canvas
- The ability to attach files to reasoning models
In response, Weil acknowledged the growing list of requests and promised to pass them along:
“These are all great. I’m not even going to address them individually, just passing them to the product team.” —Kevin Weil
Another major area of interest was Operator, OpenAI's agent that can operate a web browser on a user's behalf. Weil hinted at wider availability in the future:
“I don’t have a date for you, but computer use is clearly a part of long-term AGI, and we want to bring it to everyone as soon as we can.” —Kevin Weil
OpenAI’s Competitive Edge and Changing Views on Open Source
With the rise of open-source competitors like LLaMA, DeepSeek, and Qwen, users pressed OpenAI on its stance toward open source. In a surprising moment of introspection, Altman admitted the company may have miscalculated its approach:
“Yes, we are discussing [open-sourcing more]. I personally think we have been on the wrong side of history here and need to figure out a different open-source strategy. Not everyone at OpenAI shares this view, and it’s also not our current highest priority.” —Sam Altman
This shift in perspective signals a potential future where OpenAI embraces more transparency and open research, though executives emphasized that their main focus remains improving proprietary models.
Compute power, the hardware backbone needed to train increasingly advanced models, also came up as a critical infrastructure challenge. OpenAI's Stargate project was cited as a key part of scaling up:
“Everything we’ve seen says that the more compute we have, the better the model we can build. Think of Stargate as our factory for turning power/GPUs into awesome stuff for you.” —Kevin Weil
Final Takeaways: AI’s Role in Science, Robotics, and Society
Beyond immediate product updates, OpenAI executives discussed AI's broader impact on society and scientific discovery. When asked which major breakthroughs AI should tackle first, Srinivas Narayanan pointed to two grand challenges:
“Curing diseases. Getting cheaper energy.” —Srinivas Narayanan
Sam Altman, when asked how AGI would impact humanity, expressed optimism about its role in accelerating scientific discovery:
“The most important impact, in my opinion, will be accelerating the rate of scientific discovery, which I believe contributes most to improving quality of life.” —Sam Altman
Despite growing pains, OpenAI remains committed to pushing AI forward. Whether it’s GPT-5, multimodal AI, or automation tools, the AMA made one thing clear—AI’s evolution is accelerating, and OpenAI intends to lead the charge.
BONUS: for a previous AMA with just Sam Altman from last month, check out this video.
Looking Ahead
With significant updates in the pipeline and an evolving stance on openness, OpenAI’s roadmap is more dynamic than ever.
While many questions remain unanswered, such as the exact timeline for GPT-5 or full multimodal support, the AI revolution is far from over. And OpenAI, for better or worse, remains at its forefront.
If you found this catch-up session informative, I hope you’ll consider subscribing to the Substack. This is just some of the content I’ve got planned, and I’d love for you to join me for the ride.