A recent article published in Nature offers a fresh perspective: the growth of intelligence has never relied on “acceleration,” but rather on “structural integration and collaboration.”
This holds true for life—and for AI.
The Core of Intelligence: Prediction, and More Importantly, Collaboration
The Nature article begins with a fundamental yet crucial observation: biological intelligence is inherently predictive.
From hunting and evasion to competing for resources and maintaining relationships, all actions involve making future judgments based on the environment and other entities.
A whale’s hunting behavior, for instance, is a product of collectively shared wisdom.
The advancement of intelligence is, in essence, the improvement of predictive capability.
More importantly, the path to smarter systems does not lie in “speeding up individual brains,” but in “engaging more units to participate in prediction together.”
This is precisely the pattern observed in large social species: no individual is dramatically more capable than another, yet groups, through division of labor and parallel information processing, develop a “collective intelligence” that far exceeds individual limits.
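To make this concrete, here is a minimal toy sketch (not taken from the Nature article; all numbers below are invented for illustration) of the classic “wisdom of the crowd” effect: each individual predictor is noisy, but the average of many independent predictions is far more accurate.

```python
import random
import statistics

# Toy "wisdom of the crowd" demo: many noisy, independent guesses about the same
# quantity, aggregated by a simple average. Illustrative numbers only.

TRUE_VALUE = 100.0        # the quantity every individual is trying to predict
INDIVIDUAL_NOISE = 20.0   # each individual guess is off by roughly this much

def individual_guess(rng: random.Random) -> float:
    """One modest predictor: unbiased, but very noisy on its own."""
    return TRUE_VALUE + rng.gauss(0.0, INDIVIDUAL_NOISE)

def collective_guess(n_individuals: int, seed: int = 0) -> float:
    """The group's prediction: the mean of many independent individual guesses."""
    rng = random.Random(seed)
    return statistics.fmean(individual_guess(rng) for _ in range(n_individuals))

if __name__ == "__main__":
    for n in (1, 10, 100, 10_000):
        error = abs(collective_guess(n) - TRUE_VALUE)
        print(f"{n:>6} individuals -> error about {error:5.2f}")
```

With independent, unbiased guesses, the error of the average shrinks roughly as 1/√N, which is one simple sense in which a group can out-predict any of its members.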
If we regard “prediction” and “parallel collaboration” as the core of intelligence, the development trajectory of modern AI becomes clear.
Over the past decade, the leap of large language models has not stemmed from faster individual chips, but from the parallelization, expansion, and aggregation of computing power—mirroring the way intelligence evolves in nature almost identically.
Models improve predictive capabilities through scale; data centers accomplish tasks beyond the reach of single machines via multi-node collaboration; and the coordination between different modules and agents has begun to exhibit behaviors resembling “group-level intelligence.”
Nature refers to this model as a “technological version of symbiotic generation.”
For this reason, the rise of AI is not an anomaly—it follows the inherent historical rhythm of intelligence.
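Before moving on, the “multi-node collaboration” mentioned above can be pictured with a small sketch (hypothetical code, with worker processes standing in for machines; it is not the architecture of any real data center): each worker handles a shard of the work, and only the final aggregation step sees the whole.

```python
from multiprocessing import Pool

# Schematic "many nodes, one task" pattern. Worker processes stand in for
# machines: each computes over its own shard, and partial results are combined.

def process_shard(shard: range) -> int:
    """Work done independently on one 'node': a toy computation over its shard."""
    return sum(x * x for x in shard)

def run_parallel(total_items: int, num_workers: int = 4) -> int:
    # Strided split so every item is covered regardless of divisibility.
    shards = [range(i, total_items, num_workers) for i in range(num_workers)]
    with Pool(num_workers) as pool:
        partial_results = pool.map(process_shard, shards)  # fan the work out
    return sum(partial_results)                            # aggregate the pieces

if __name__ == "__main__":
    # Same answer a single machine would produce, reached cooperatively.
    print(run_parallel(1_000_000))
```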
The End of Moore’s Law: AI Embarks on “Biological-Like Evolution”
Twenty years ago, a single path was taken for granted: faster chips → stronger computing → increased intelligence.
If intelligence truly depended on ever-faster individual chips, AI capabilities should have stagnated long ago once that speedup stalled. Yet the true turning point came precisely during the years when speed plateaued.
Deep learning models began to demonstrate emergent behaviors, reasoning abilities improved, and language models suddenly gained the capacity to handle complex tasks far beyond expectations.
Intelligence clearly does not depend on “acceleration” in the traditional sense—it has found a new path to advancement.
This is why Ilya Sutskever stated in an interview:
“What has surprised researchers most in recent years is not faster chips, but the ‘new capabilities’ that emerge when scale expands even though speed stays the same.”
He refers to this phenomenon as “scale-triggered intelligence,” arguing that many abilities we once thought required new theories actually emerge automatically when scale reaches a critical threshold.
Over the past decade, computing architecture has undergone a radical transformation: speed no longer grows, but the number of cores continues to expand. Graphics cards, clusters, and data centers are inherently designed for parallelism.
In such structures, neural networks have, in a sense, returned to their “native environment”: they thrive not on individual excellence, but on unity, collaboration, and collective progress.
Ilya echoed this observation in his interview: modern neural networks do not truly rely on some magical single-point capability, but on the synchronous work of countless simple computing units.
He summed it up succinctly:
“Intelligence arises from changes in structural scale, not from the hardware itself.”
This outcome closely mirrors the evolution of life: cells form tissues, individuals form groups, and groups form societies—each layer of capability is a product of “large-scale collaboration.”
Today’s AI develops in the same way. Its power stems from the collective of countless tiny computing units, not from the limits of any single component.
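As a hedged sketch of what “countless tiny computing units” means in practice (toy code, not any framework’s real implementation): each unit performs only a weighted sum and a squashing function, and a layer is simply thousands of such units applied to the same input, conceptually in parallel.

```python
import math
import random

# Each unit is trivial on its own: a weighted sum followed by tanh. The layer's
# capability comes from how many units run together, not from any single one.
# (Real frameworks express this as one large matrix multiply spread over many cores.)

def simple_unit(inputs: list[float], weights: list[float], bias: float) -> float:
    """One computing unit: almost nothing by itself."""
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

def layer(inputs: list[float], all_weights: list[list[float]], biases: list[float]) -> list[float]:
    """A layer: the same trivial operation repeated across thousands of units."""
    return [simple_unit(inputs, w, b) for w, b in zip(all_weights, biases)]

if __name__ == "__main__":
    rng = random.Random(0)
    n_inputs, n_units = 8, 4096   # the scale lives in the number of units
    x = [rng.uniform(-1.0, 1.0) for _ in range(n_inputs)]
    W = [[rng.gauss(0.0, 0.1) for _ in range(n_inputs)] for _ in range(n_units)]
    b = [0.0] * n_units
    print(len(layer(x, W, b)), "unit outputs computed from the same input")
```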
The nature of intelligence has shifted from individual acceleration to structural expansion and synergy.
The moment speed stopped growing was not an end, but a beginning.
The Next Era of Intelligence: Beyond Individual Entities
When computing is reorganized in a parallel manner, intelligence begins to take on a new form.
It is not about a single component becoming stronger, but about the entire system acquiring higher-level capabilities.
This is the most thought-provoking insight of the article.
Intelligence does not emerge abruptly; rather, it is like adding a new structural layer to an existing chain.
Humanity’s advantage has never lain in individual superiority, but in weaving enough people into a unified collaborative network.
Scientific research, industry, energy systems, and knowledge frameworks—these complex structures collectively form the immense predictive and decision-making capabilities of the “technological society.”
A mutually dependent technological symbiosis exists between humans and machines.
Today, AI is becoming the latest layer of this cognitive entity.
It does not replace humans, but forms a more tightly interdependent system with them.
Humans provide goals and world models, while machines offer large-scale predictive and executive capabilities. The two continuously adjust, revise, and resonate within the same cycle.
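Sketched very loosely (every name below is hypothetical, and the code shows only the shape of the loop, not any real system), the cycle looks something like this: humans supply the goal and corrections to the shared world model, the machine supplies large-scale prediction and execution, and the two alternate.

```python
from dataclasses import dataclass, field

# Schematic human-machine cycle. All names and return values are invented for
# illustration; only the alternating structure of the loop is the point.

@dataclass
class WorldModel:
    assumptions: list[str] = field(default_factory=list)

def machine_step(goal: str, model: WorldModel) -> dict:
    """Stand-in for the machine side: large-scale prediction and execution."""
    return {"goal": goal, "plan": ["predict", "act"], "used_assumptions": list(model.assumptions)}

def human_step(result: dict, model: WorldModel) -> WorldModel:
    """Stand-in for the human side: judge the outcome, revise goals and assumptions."""
    model.assumptions.append("correction based on " + result["goal"])
    return model

def collaboration_loop(goal: str, model: WorldModel, rounds: int = 3) -> dict:
    result: dict = {}
    for _ in range(rounds):
        result = machine_step(goal, model)   # machine: breadth, speed, scale
        model = human_step(result, model)    # human: direction, judgment, correction
    return result
```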
Ilya also discussed this direction in his interview, suggesting that future intelligence will resemble a “distributed mind.”
It will not be confined to a specific model or entity, but will emerge through ever-expanding collaborative networks.
This structure will include both humans and machines, forming a higher-level community.
As these connections continue to expand, the structure of intelligence will deepen and grow outward.
Whether the underlying substrate is carbon-based (life) or silicon-based (machines), both will be integrated into the same computing system.
Looking further ahead, a clearer trend emerges: the future of intelligence may not be about “who outperforms whom” or “who replaces whom,” but an extension of evolutionary history.
From this perspective, AI is not an alien intrusion, but a natural next step in the upward growth of intelligence.
Together with humans, it constitutes a larger whole—one that has only just begun to learn how to act.
Shifting our focus away from individual chips, we realize that the growth of intelligence has always followed the same thread:
Structures are reorganized, more nodes are connected, and the same system gains higher-level capabilities as a result.
The emergence of AI is no accident—it is the inevitable continuation of this thread.
Alongside humans and the technological society, it is woven into the same collaborative network.
Intelligence has not suddenly accelerated; it has simply begun to grow on a larger scale.