China’s Ant Group, an affiliate of Alibaba, detailed technical data about its new model, Ring-1T, which the company said is “the first open-source reasoning model with one trillion total parameters.”
Ring-1T aims to compete with other reasoning models such as GPT-5 and OpenAI’s o-series, as well as Google’s Gemini 2.5. With the release, Ant adds to the geopolitical debate over who will dominate the AI race: China or the US.
Ant Group said Ring-1T is optimized for mathematical and logical problems, code generation and scientific problem-solving.
“With roughly 50 billion activated parameters per token, Ring-1T achieves state-of-the-art performance across multiple challenging benchmarks — despite relying solely on natural language reasoning capabilities,” Ant said in a paper.
Ring-1T, which was first released in preview in September, adopts the same architecture as Ling 2.0 and was trained on the Ling-1T-base model the company released earlier this month. Ant said this allows the model to support up to 128,000 tokens.
To train a model as large as Ring-1T, the researchers had to develop new methods to scale reinforcement learning (RL).
New training methods
Ant Group developed three “interconnected innovations” to support the RL training of Ring-1T, a challenge given the model's size and the large compute requirements that training at this scale entails. The three are IcePop, C3PO++ and ASystem.
IcePop removes noisy gradient updates to stabilize training without slowing inference, helping to eliminate catastrophic training-inference misalignment in RL. The researchers noted that when training models, particularly those using a mixture-of-experts (MoE) architecture like Ring-1T, there is often a discrepancy between the probabilities computed at training time and at inference time.
“This problem is particularly pronounced in the training of MoE models with RL due to the inherent use of the dynamic routing mechanism. Moreover, in long CoT settings, these discrepancies can gradually accumulate across iterations and become further amplified,” the researchers said.
IcePop, they wrote, “suppresses unstable training updates through double-sided masking calibration.”
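The article doesn't spell out the exact formulation, but the idea can be pictured as a per-token filter on the policy update: tokens whose probability under the training engine has drifted too far from the probability the inference engine assigned them, in either direction (hence "double-sided"), are dropped from the gradient. The sketch below is a minimal illustration of that reading in PyTorch-style Python; the function names, tensor layout and the [lower, upper] band are assumptions, not Ant's implementation.

```python
# Minimal sketch of a double-sided masking step in the spirit of IcePop.
# All names and threshold values are illustrative assumptions.
import torch

def double_sided_mask(logp_train: torch.Tensor,
                      logp_infer: torch.Tensor,
                      lower: float = 0.5,
                      upper: float = 2.0) -> torch.Tensor:
    """1.0 where the train/infer probability ratio stays inside the band, else 0.0."""
    ratio = torch.exp(logp_train - logp_infer)          # per-token train/infer discrepancy
    return ((ratio >= lower) & (ratio <= upper)).float()

def masked_policy_loss(logp_train, logp_infer, advantages):
    """Policy-gradient-style loss in which out-of-band tokens contribute nothing."""
    mask = double_sided_mask(logp_train, logp_infer)
    per_token = -logp_train * advantages * mask          # masked tokens are zeroed out
    return per_token.sum() / mask.sum().clamp(min=1.0)
```

In this reading, the out-of-band tokens are excluded from the update entirely rather than having their ratios clamped, which is how the accumulating discrepancies the researchers describe would be kept out of the gradient.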
The next new method the researchers had to develop is C3PO++, an improved version of the C3PO system Ant previously built. It manages how Ring-1T and other extra-large models generate and process training examples, known as rollouts, so that GPUs don't sit idle.
C3PO++ breaks rollouts into pieces that can be processed in parallel by two groups of workers: an inference pool, which generates new data, and a training pool, which collects the results to update the model. A token budget controls how much data is processed in each pass, keeping GPUs efficiently utilized.
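Read literally, that description amounts to a budgeted producer-consumer loop: the inference pool keeps extending rollouts until a per-iteration token budget is spent, finished rollouts are handed to the training pool, and unfinished ones carry over to the next iteration. The sketch below illustrates that reading only; the budget value, function names and carry-over behavior are assumptions rather than details from Ant's paper.

```python
# Illustrative token-budget rollout scheduler, under the assumptions stated above.
# `generate_chunk` stands in for the inference pool, `train_on` for the training
# pool; both, along with TOKEN_BUDGET, are hypothetical names for this sketch.
from collections import deque

TOKEN_BUDGET = 32_768  # max tokens the inference pool may generate per iteration

def schedule_step(pending: deque, generate_chunk, train_on) -> int:
    """Run one iteration: generate until the budget is spent, then train."""
    used = 0
    finished, carryover = [], []
    while pending and used < TOKEN_BUDGET:
        rollout, tokens, done = generate_chunk(pending.popleft())
        used += tokens
        (finished if done else carryover).append(rollout)
    pending.extendleft(reversed(carryover))  # unfinished rollouts resume next iteration
    if finished:
        train_on(finished)                   # training pool updates the model
    return used
```

A hard per-iteration token budget like this is one way to let the two pools run asynchronously without either side stalling the GPUs, which matches the article's framing of why the method exists.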
The last of the new methods, ASystem, adopts a SingleController+SPMD (Single Program, Multiple Data) architecture to enable asynchronous operations.
Benchmark results
Ant evaluated Ring-1T on benchmarks measuring performance in math, coding, logical reasoning and general tasks, testing it against models such as DeepSeek-V3.1-Terminus-Thinking, Qwen3-235B-A22B-Thinking-2507, Gemini 2.5 Pro and GPT-5 Thinking.
In benchmark testing, Ring-1T performed strongly, coming in second to OpenAI’s GPT-5 on most benchmarks. Ant said Ring-1T showed the best performance among all the open-weight models it tested.
The model posted a score of 93.4% on the AIME 25 leaderboard, second only to GPT-5. In coding, Ring-1T outperformed both DeepSeek and Qwen.
“It indicates that our carefully synthesized dataset shapes Ring-1T’s strong performance on programming applications, which forms a solid foundation for future endeavors on agentic applications,” the company said.
Ring-1T shows how much Chinese companies are investing in models
Ring-1T is just the latest model from China aiming to dethrone GPT-5 and Gemini.
Chinese companies have been releasing impressive models at a rapid pace since the surprise launch of DeepSeek in January. Alibaba, Ant’s affiliate, recently released Qwen3-Omni, a multimodal model that natively unifies text, image, audio and video. DeepSeek has also continued to improve its models and, earlier this month, released DeepSeek-OCR, a new model that reimagines how models process information.
With Ring-1T and Ant’s development of new methods to train and scale extra-large models, the battle for AI dominance between the US and China continues to heat up.

