Shortly after completing the most significant organizational restructuring in its history, OpenAI hosted a livestream.

For the first time, it publicly shared a detailed timeline for its internal research goals, the most eye-catching of which was the target to “achieve fully autonomous AI researchers by March 2028”, a goal specified down to the month.

The information density of this launch event was extremely high, and even Sam Altman himself stated: “Given the importance of this content, we will share our specific research goals, infrastructure plans, and product strategies with unprecedented transparency.”​

Could it be that after restructuring, OpenAI has truly become “open” again?​

However, there were also some mishaps. OpenAI had put out a call for audience questions before the event, but so many people complained about the mandatory routing mechanism for sensitive GPT-4o conversations that the two hosts stumbled over their words and exchanged an awkward glance.

In the end, Altman admitted: “We messed this up this time.”​

“Our goal is to protect vulnerable users while giving adult users more freedom. We have an obligation to protect minor users and adult users who are not in a rational state of mind.​

With the establishment of age verification, we will be able to strike a better balance. This is not our best work, but we will improve.”​

OpenAI Sets Clear Timeline: AI to Conduct Independent Research by 2028​

At the start of the livestream, Altman acknowledged his past misconceptions.​

“In the past, we imagined AGI as an ‘oracle in the sky’—a superintelligence that would automatically create wonderful things for humanity.​

But now we realize that what truly matters is creating tools that people can use to build their own futures.”

This shift in thinking is no accident. Every technological revolution in human history has stemmed from better tools: from stone tools to steam engines, from computers to the internet.​

OpenAI believes that AI will be the next tool to reshape the course of civilization, and its mission is to make that tool as powerful, useful, and accessible as possible.

Next, Chief Scientist Jakub Pachocki unveiled a set of OpenAI’s internal goals and roadmap:​

  • September 2026: AI at the level of a research intern, capable of significantly accelerating researchers’ work through large-scale compute.
  • March 2028: Fully autonomous AI researchers, able to independently complete large-scale research projects.​

When introducing research progress, he emphasized in particular that OpenAI believes deep learning systems are “likely less than a decade away” from superintelligence—defined here as systems that are smarter than humans in numerous critical fields.​

Their way of quantifying progress in AI capability is the time horizon of the tasks models can complete: from early tasks that took seconds to current tasks that take around five hours (such as beating top contestants in international mathematics and informatics competitions). That horizon is expanding rapidly.

“Think about the current time models spend on problems, then think about how much time you would be willing to invest in a truly important scientific breakthrough. It would be acceptable to let models use the computing resources of an entire data center to think—there is enormous room for improvement here.”​

Pachocki also detailed an approach called “Chain of Thought Faithfulness.”

In simple terms, during training they deliberately refrain from supervising the model’s internal reasoning, so that the chain of thought remains a faithful expression of what the model is actually thinking.

“We do not guide the model to think ‘good thoughts’; instead, we let it remain faithful to its actual thoughts.​

Within OpenAI’s five-tier AI safety framework, Chain of Thought Faithfulness targets the top tier: value alignment.

What does the AI truly care about? Can it adhere to high-level principles? How will it act when facing ambiguous or conflicting goals? Does it genuinely care about humanity?

This issue is important for three reasons:​

  1. When a system engages in long-term thinking, we cannot provide detailed instructions for every step.
  2. When AI becomes extremely intelligent, it may face problems that humans cannot fully understand.
  3. When AI deals with problems beyond human capabilities, comprehensive specifications become difficult or even impossible.

In such cases, we must rely on deeper alignment. People cannot write rules for every detail; they must depend on the AI’s inherent values.​

Traditional methods, in which we review and guide the model’s thinking process during training, essentially teach it to say what we want to hear rather than stay faithful to its actual thought process.”
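To make that contrast concrete, here is a minimal sketch of the idea in Python. Everything in it (the Episode structure, the grader, the per-token signal) is a hypothetical illustration of excluding chain-of-thought tokens from the training signal, not OpenAI’s actual training code.

```python
# Minimal sketch of chain-of-thought faithfulness as described above: only the
# final answer is graded, while the chain-of-thought tokens receive no training
# signal at all, so there is no pressure to "think good thoughts."
# All names here are hypothetical illustrations, not OpenAI's training setup.

from dataclasses import dataclass


@dataclass
class Episode:
    cot_tokens: list[str]     # the model's private reasoning
    answer_tokens: list[str]  # the final answer shown to the grader


def grade_answer(answer_tokens: list[str]) -> float:
    """Hypothetical outcome grader: it sees only the visible answer."""
    return 1.0 if "42" in answer_tokens else 0.0


def training_signal(episode: Episode) -> list[tuple[str, float | None]]:
    """Per-token signal: answer tokens share the grade; chain-of-thought
    tokens are masked out (None) so they receive no supervision at all."""
    reward = grade_answer(episode.answer_tokens)
    masked = [(tok, None) for tok in episode.cot_tokens]        # unsupervised reasoning
    graded = [(tok, reward) for tok in episode.answer_tokens]   # outcome-only grading
    return masked + graded


if __name__ == "__main__":
    episode = Episode(
        cot_tokens=["hmm,", "maybe", "I", "should", "just", "guess"],
        answer_tokens=["the", "answer", "is", "42"],
    )
    print(training_signal(episode))
```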

Currently, this approach is used widely within OpenAI to understand how models evolve during training and how their tendencies develop, as well as in collaborative research with external partners. By examining chains of thought that were never directly supervised, researchers can detect potentially deceptive behavior.

However, getting the AI’s values to a point where they do not resist monitoring is only half the battle. Ideally, its values should actively assist in monitoring the model, and this is an area OpenAI plans to research heavily next.

New Architecture Unveiled: Non-Profit Foundation in Control of Everything​

The highly anticipated OpenAI restructuring plan was finally revealed, and it was surprisingly concise compared to the original proposal.​

The old structure comprised multiple interconnected and complex entities. The new one consists of just two tiers.

At its core is the OpenAI Foundation, a non-profit organization with full control over its subsidiary public benefit corporation, OpenAI Group.

Initially, the Foundation will hold approximately 26% of the public benefit corporation’s equity, and this stake can grow through stock options if the company performs exceptionally well.

Sam Altman hopes the OpenAI Foundation will become the largest non-profit organization in history, with its first major commitment being an investment of $25 billion in AI-assisted disease treatment research.​

Beyond medical research, the Foundation will also focus on a brand-new field: AI Resilience.​

OpenAI co-founder Wojciech Zaremba specifically explained this concept, which has a broader scope than traditional AI safety.​

“For example, even if OpenAI can keep its own models from being used for harmful purposes, someone could still cause trouble with other models, and society as a whole needs a rapid response mechanism when problems occur.”

Zaremba believes this is analogous to cybersecurity in the early days of the internet. Back then, people were afraid to enter credit card numbers online, and when a virus hit, they would phone one another to say “disconnect from the internet.” Now, with a full cybersecurity industry in place, people are comfortable keeping their most private data and life savings online.

In terms of infrastructure, OpenAI disclosed its investment scale publicly for the first time: it has committed to building more than 30 gigawatts (GW) of capacity so far, with total financial obligations of approximately $1.4 trillion.

Altman also revealed a long-term goal: building an infrastructure factory capable of bringing 1 GW of computing capacity online every week, and reducing the cost per gigawatt to around $20 billion over a five-year lifecycle.

To achieve this goal, OpenAI is considering investing in robotics technology to assist in building data centers.​
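A rough back-of-envelope check, using only the figures quoted above, shows why the cost target matters. The numbers are public statements from the livestream, not OpenAI’s internal accounting.

```python
# Back-of-envelope check using only the figures quoted above. These are public
# statements from the livestream, not OpenAI's internal accounting.

committed_gw = 30            # total committed buildout so far, in gigawatts
committed_cost = 1.4e12      # roughly $1.4 trillion in total obligations

implied_cost_per_gw = committed_cost / committed_gw
print(f"Implied cost today: about ${implied_cost_per_gw / 1e9:.0f}B per GW")   # ~$47B

target_cost_per_gw = 20e9    # stated target: ~$20B per GW over a 5-year lifecycle
gw_per_year = 52             # stated goal of 1 GW per week

annual_spend_at_target = gw_per_year * target_cost_per_gw
print(f"Spend at the target rate and cost: about ${annual_spend_at_target / 1e12:.1f}T per year")  # ~$1.0T
```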

To help the audience grasp the scale, OpenAI highlighted its first “Stargate” data center under construction in Abilene, Texas, the fastest-progressing of its multiple ongoing sites.

Thousands of workers are on-site every day, and the entire supply chain involves hundreds of thousands or even millions of people—from chip design and manufacturing to assembly, and then to energy supply.​

The Q&A Session Was Equally Insightful​

Q1: Technology keeps getting more addictive, yet Sora imitates TikTok and ChatGPT may add advertisements. Why are you repeating the same patterns?

Altman: Please judge us by our actions. If Sora becomes something that makes people mindlessly scroll instead of being used for creation, we will discontinue the product. We hope not to repeat the mistakes of the past, but we may make new ones—we need a rapid evolution process and close feedback loops.​

Q2: When will large-scale unemployment caused by AI occur?​

Pachocki: Many jobs will be automated in the next few years. But what jobs will replace them? What new pursuits will be worthy of everyone’s involvement?​

“I believe there will be several aspects: the ability to understand far more about the world, an incredible variety of new knowledge, new forms of entertainment, and new intelligence—all of which will provide people with considerable meaning and a sense of accomplishment.”​

Q3: How far ahead are internal models compared to publicly deployed ones?​

Pachocki expressed strong expectations for the next-generation models, predicting rapid progress over the coming months and the year ahead, but said that “there is nothing extremely crazy being hidden.”

Altman added that they have developed many components, and it is only when these components are combined that impressive results emerge.​

“Today, we only have many of these components—not a huge, undisclosed achievement waiting to be shown to the world. But we expect to have the opportunity to achieve a massive leap in AI capabilities within a year.”​

Q4: How can OpenAI provide so many features for users of the free version?​

Pachocki first explained this phenomenon from a technical perspective:

“When OpenAI develops a new generation of models (such as GPT-5), it represents a new frontier in intelligence—the highest level AI can currently reach.​

Once this frontier is achieved, cheaper methods to replicate this capability are quickly found.”​

Altman added a business perspective: “Over the past few years, the price of a given unit of intelligence has dropped by roughly 40 times per year.

There seems to be a contradiction here: why is so much infrastructure still needed? The cheaper AI becomes, the more people want to use it, and total spending is ultimately expected to keep rising.

Our commitment is that, as long as our business model remains viable, we will keep bringing the best technology we can build into the free tier.”
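A toy calculation shows why a falling unit price and a rising total bill are not contradictory. The 40x figure is the one Altman cites; the usage-growth number below is purely hypothetical, chosen only to illustrate the mechanism.

```python
# Toy illustration of why unit prices can fall while total spending rises.
# The 40x annual price drop is the figure Altman cites; the usage growth below
# is purely hypothetical, chosen only to show the mechanism.

price_drop_per_year = 40       # unit price of intelligence falls ~40x per year (cited)
usage_growth_per_year = 60     # hypothetical: demand grows even faster than price falls

price, usage = 1.0, 1.0
for year in range(1, 4):
    price /= price_drop_per_year
    usage *= usage_growth_per_year
    total_spend = price * usage
    print(f"year {year}: unit price x{price:.1e}, usage x{usage:.1e}, total spend x{total_spend:.2f}")
```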

Q5: Is ChatGPT OpenAI’s ultimate product, or a predecessor to something greater?​

Pachocki explained that as a research lab, they initially had no intention of building a chatbot.​

“But we now recognize that this product aligns with our overall mission: ChatGPT allows everyone to use powerful AI without needing programming knowledge or technical background.”​

Altman believes that the chat interface is a good interface, but it will not be the only one. “The way people use these systems will undergo tremendous changes over time.​

For tasks that take less than five minutes, the chat interface works well—you can ask questions back and forth, refining until you are satisfied.​

But for tasks that take five hours, a richer interface is needed. What about tasks that take five years or five centuries? That is almost beyond our imagination.”​

Altman then outlined what he sees as the most important direction of evolution: “An environment-aware, always-present companion—a service that observes your life and proactively helps you when you need it.”