Enterprises that have been juggling separate models for reasoning, multimodal tasks, and agentic coding may be able to simplify their stack: Mistral's new Small 4 brings all three into a single open-source model, with adjustable reasoning levels under the hood.
Small 4 enters a crowded field of small models, including Qwen and Claude Haiku, that are competing on inference cost and benchmark performance. Mistral's pitch: shorter outputs that translate to lower latency and cheaper tokens.
Mistral Small 4 updates Mistral Small 3.2, which came out in June 2025, and is available under an Apache 2.0 license. "With Small 4, users no longer need to choose between a fast instruct model, a powerful reasoning engine, or a multimodal assistant: one model now delivers all three, with configurable reasoning effort and best-in-class efficiency," Mistral said in a blog post.
The company said that despite its smaller size (Mistral Small 4 has 119 billion total parameters with only 6 billion active parameters per token), the model combines the capabilities of all of Mistral's models. It has the reasoning capabilities of Magistral, the multimodal understanding of Pixtral, and the agentic coding performance of Devstral. It also has a 256K context window that the company said works well for long-form conversations and analysis.
Rob May, co-founder and CEO of the small language model marketplace Neurometric, told VentureBeat that Mistral Small 4 stands out for its architectural flexibility. However, it joins a growing number of smaller models that he said risks adding more fragmentation to the market.
"From a technical perspective, yes, it can be competitive against other models," May said. "The bigger challenge is that it has to overcome market confusion. Mistral has to win the mindshare to get a shot at being part of that test set first. Only then can they show the technical capabilities of the model."
Reasoning on demand
Small models still offer good options for enterprise developers looking to get the same LLM experience at a lower cost.
The model is built on a mixture-of-experts architecture, much like other Mistral models. It features 128 experts with four active per token, which Mistral says allows efficient scaling and specialization.
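The sparsity behind those figures can be sanity-checked with simple arithmetic. This is an illustrative sketch using only the numbers Mistral has published (total and active parameters, expert counts); it is not an official breakdown of the architecture.

```python
# Mixture-of-experts sparsity, using Mistral Small 4's published figures:
# 119B total parameters, ~6B active per token, 128 experts, 4 routed per token.

TOTAL_PARAMS = 119e9
ACTIVE_PARAMS = 6e9
NUM_EXPERTS = 128
ACTIVE_EXPERTS = 4

# Fraction of the model's weights actually exercised on each forward pass.
active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Active parameters per token: {active_fraction:.1%}")  # ~5.0%

# Fraction of experts routed per token.
expert_fraction = ACTIVE_EXPERTS / NUM_EXPERTS
print(f"Experts routed per token: {expert_fraction:.1%}")  # ~3.1%
```

The gap between the two fractions simply reflects that some parameters (attention, embeddings, shared layers) are active on every token regardless of routing; the per-token compute cost tracks the ~5% active fraction, not the full 119B.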
This allows Mistral Small 4 to respond faster, even on more reasoning-intensive outputs. It can also process and reason about text and images, letting users parse documents and graphs.
Mistral said the model features a new parameter it calls reasoning_effort, which lets users "dynamically adjust the model's behavior." Enterprises would be able to configure Small 4 to deliver fast, lightweight responses in the same style as Mistral Small 3.2, or make it more verbose in the vein of Magistral, providing step-by-step reasoning for complex tasks, according to Mistral.
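In practice, a request toggling that behavior might look like the sketch below. Mistral has named only the parameter itself (reasoning_effort); the model identifier, endpoint shape, and exact placement of the field in the request body are assumptions for illustration.

```python
import json

# Hypothetical request body for an OpenAI-compatible endpoint serving Small 4.
# Only the `reasoning_effort` parameter name comes from Mistral's announcement;
# the model id and field placement are illustrative assumptions.
def build_request(prompt: str, reasoning_effort: str = "low") -> str:
    payload = {
        "model": "mistral-small-4",  # assumed model id
        "messages": [{"role": "user", "content": prompt}],
        # "low" for fast, instruct-style replies; "high" for
        # step-by-step reasoning on complex tasks.
        "reasoning_effort": reasoning_effort,
    }
    return json.dumps(payload)

body = build_request("Summarize this contract clause.", reasoning_effort="high")
print(body)
```

The appeal of a single knob like this is operational: one deployed model can serve both latency-sensitive and reasoning-heavy traffic without routing between separate models.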
Mistral said Small 4 runs on fewer chips than comparable models, with a recommended setup of four Nvidia HGX H100s or H200s, or two Nvidia DGX B200s.
"Delivering advanced open-source AI models requires broad optimization. Through close collaboration with Nvidia, inference has been optimized for both open-source vLLM and SGLang, ensuring efficient, high-throughput serving across deployment scenarios," Mistral said.
Benchmark performance
According to Mistral's benchmarks, Small 4 performs close to the level of Mistral Medium 3.1 and Mistral Large 3, particularly on MMLU Pro.
Mistral said the instruction-following performance makes Small 4 well suited to high-volume enterprise tasks such as document understanding.
While competitive with small models from other companies, Small 4 still performs below other popular open-source models, especially on reasoning-intensive tasks. Qwen 3.5 122B and Qwen 3-next 80B outperform Small 4 on LiveCodeBench, as does Claude Haiku in instruct mode.
Mistral Small 4 was able to beat OpenAI's GPT-OSS 120B on the LCR.
Mistral argues that Small 4 achieves these scores with "significantly shorter outputs" that translate to lower inference costs and latency than the other models. In instruct mode specifically, Small 4 produces the shortest outputs of any model tested: 2.1K characters vs. 14.2K for Claude Haiku and 23.6K for GPT-OSS 120B. In reasoning mode, outputs are much longer (18.7K), which is expected for that use case.
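The cost claim can be roughly quantified from those figures. This sketch assumes output-side inference cost scales approximately linearly with output length, which holds under typical per-token pricing but ignores differences in per-token rates between providers.

```python
# Instruct-mode output lengths from Mistral's comparison (characters).
small4 = 2_100
claude_haiku = 14_200
gpt_oss_120b = 23_600

# Under per-token pricing, these ratios approximate relative
# output-side cost for the same task.
print(f"Claude Haiku emits ~{claude_haiku / small4:.1f}x more output")   # ~6.8x
print(f"GPT-OSS 120B emits ~{gpt_oss_120b / small4:.1f}x more output")   # ~11.2x
```

Those multiples only cover output tokens; total cost also depends on input length and per-token rates, so they are an upper bound on the savings from brevity alone.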
May said that while model choice depends on an organization's goals, latency is one of the three pillars it should prioritize. "It depends on your goals and what you're optimizing your architecture to accomplish. Enterprises should prioritize these three pillars: reliability and structured output, latency-to-intelligence ratio, and fine-tunability and privacy," May said.

