TikTok is making headlines again today after the White House joined the popular social media app, but its parent company ByteDance, a Chinese internet giant, also had a surprise announcement up its sleeve.
The company's Seed Team of AI researchers today released Seed-OSS-36B on the AI code sharing site Hugging Face.
Seed-OSS-36B is a new line of open source large language models (LLMs) designed for advanced reasoning and developer-focused usability, with a longer token context (that is, how much information the models can accept as input and then output in a single exchange) than many competing LLMs from U.S. tech companies, even leaders such as OpenAI and Anthropic.
The collection introduces three main variants:
- Seed-OSS-36B-Base with synthetic data
- Seed-OSS-36B-Base without synthetic data
- Seed-OSS-36B-Instruct
By releasing both synthetic and non-synthetic versions of the Seed-OSS-36B-Base model, the Seed Team sought to balance practical performance with research flexibility.
The synthetic-data variant, trained with additional instruction data, consistently delivers stronger scores on standard benchmarks and is intended as a higher-performing general-purpose option.
The non-synthetic model, by contrast, omits these augmentations, creating a cleaner foundation that avoids potential bias or distortion introduced by synthetic instruction data.
By providing both, the team gives applied users access to improved results while ensuring researchers retain a neutral baseline for studying post-training methods.
Meanwhile, the Seed-OSS-36B-Instruct model differs in that it is post-trained with instruction data to prioritize task execution and instruction following, rather than serving purely as a foundation model.
All three models are released under the Apache-2.0 license, allowing free use, modification, and redistribution by researchers and developers working for enterprises.
That means they can be used to power commercial applications, whether internal to a company or external and customer-facing, without paying ByteDance any licensing fees or API usage charges.
This continues the summer 2025 trend of Chinese companies shipping powerful open source models, with OpenAI looking to catch up via its own open source gpt-oss pair released earlier this month.
The Seed Team positions Seed-OSS for international applications, emphasizing versatility across reasoning, agent-like task execution, and multilingual settings.
The Seed Team, formed in 2023, has focused on building foundation models that can serve both research and applied use cases.
Design and core features
The architecture behind Seed-OSS-36B combines familiar design choices such as causal language modeling, grouped query attention, SwiGLU activation, RMSNorm, and RoPE positional encoding.
Each model carries 36 billion parameters across 64 layers and supports a vocabulary of 155,000 tokens.
One of the defining features is its native long-context capability, with a maximum length of 512,000 tokens, designed to process lengthy documents and reasoning chains without performance loss.
That's twice the length of OpenAI's new GPT-5 model family and roughly equivalent to about 1,600 pages of text, the length of a Christian Bible.
Another distinguishing element is the introduction of a thinking budget, which lets developers specify how much reasoning the model should perform before delivering an answer.
It's something we've seen from other recent open source models as well, including Nvidia's new Nemotron-Nano-9B-v2, also available on Hugging Face.
In practice, this means teams can tune performance depending on the complexity of the task and the efficiency requirements of deployment.
Budgets are recommended in multiples of 512 tokens, with 0 providing a direct response mode.
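As a rough illustration, a budget could be passed at generation time through the chat template. The sketch below is a minimal, hypothetical example using Hugging Face Transformers; the thinking_budget keyword and the repository name are assumptions for illustration, not the team's documented API.

```python
# Minimal sketch, not the official API: "thinking_budget" is a hypothetical
# chat-template kwarg showing how a reasoning budget in multiples of 512
# tokens might be requested (0 would ask for a direct answer).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ByteDance-Seed/Seed-OSS-36B-Instruct"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How many primes are below 50?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    thinking_budget=512,  # hypothetical kwarg; multiples of 512 recommended
).to(model.device)

output = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```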
Competitive performance on third-party benchmarks
Benchmarks published with the release place Seed-OSS-36B among the stronger large open-source models. The Instruct variant, in particular, posts state-of-the-art results in several areas.
- Math and reasoning: Seed-OSS-36B-Instruct achieves 91.7 percent on AIME24 and 65 on BeyondAIME, both representing open-source state-of-the-art (SOTA) results.
- Coding: On LiveCodeBench v6, the Instruct model records 67.4, another SOTA score.
- Long-context handling: On RULER at 128K context length, it reaches 94.6, marking the highest open-source result reported.
- Base model performance: The synthetic-data Base variant delivers 65.1 on MMLU-Pro and 81.7 on MATH, both state-of-the-art results in their categories.
The no-synthetic Base version, while slightly behind on many measures, proves competitive in its own right.
It outperforms its synthetic counterpart on GPQA-D, giving researchers a cleaner, instruction-free baseline for experimentation.
For enterprises evaluating open options, these results suggest Seed-OSS offers strong potential across math-heavy, coding, and long-context workloads while still providing flexibility for research use cases.
Access and deployment
Beyond performance, the Seed Team highlights accessibility for developers and practitioners. The models can be deployed using Hugging Face Transformers, with quantization support in both 4-bit and 8-bit formats to reduce memory requirements.
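As a sketch of what 4-bit loading might look like with the bitsandbytes backend (the repository name and quantization settings below are assumptions, not the team's published configuration):

```python
# Minimal sketch of quantized loading via Hugging Face Transformers and
# bitsandbytes; settings are illustrative assumptions, not official guidance.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ByteDance-Seed/Seed-OSS-36B-Instruct"  # assumed repository name
quant_config = BitsAndBytesConfig(load_in_4bit=True)  # or load_in_8bit=True

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available accelerators
)
```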
The models also integrate with vLLM for scalable serving, and the release includes configuration examples and API server instructions.
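For serving, a minimal offline-inference sketch with vLLM's Python API might look like the following; the model name and the context-length cap are assumptions chosen for illustration:

```python
# Minimal vLLM sketch; the repository name and max_model_len value are
# illustrative assumptions (the full 512K context needs far more memory).
from vllm import LLM, SamplingParams

llm = LLM(
    model="ByteDance-Seed/Seed-OSS-36B-Instruct",  # assumed repository name
    max_model_len=32768,
)
params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(["Summarize the Apache-2.0 license in one sentence."], params)
print(outputs[0].outputs[0].text)
```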
To lower barriers further, the team includes scripts for inference, prompt customization, and tool integration.
For technical leaders managing small teams or operating under budget constraints, these provisions are positioned to make experimentation with 36-billion-parameter models more approachable.
Licensing and considerations for enterprise decision-makers
With the models offered under Apache-2.0, organizations can adopt them without restrictive licensing terms, an important factor for teams balancing legal and operational concerns.
For decision makers evaluating the open-source landscape, the release brings three takeaways:
- State-of-the-art benchmarks across math, coding, and long-context reasoning.
- A balance between higher-performing synthetic-trained models and clean research baselines.
- Accessibility features that lower operational overhead for lean engineering teams.
By placing strong performance and flexible deployment under an open license, ByteDance's Seed Team has added new options for enterprises, researchers, and developers alike.