In December 2025, DeepSeek launched two significant new models, DeepSeek V3.2 and DeepSeek V3.2-Speciale.
This is a major release that positions them as strong competitors to models like GPT-5.
Key Model Releases & Their Purpose
DeepSeek V3.2: Designed as an efficient, everyday reasoning assistant with integrated tool-use capabilities.
DeepSeek V3.2-Speciale: A high-powered variant focused exclusively on deep reasoning. It does not support tool calling but has achieved “gold-medal” performance in elite competitions like the 2025 International Mathematical Olympiad (IMO) and International Olympiad in Informatics (IOI).
Technical Innovation: DeepSeek Sparse Attention (DSA)
A core advancement in V3.2 is the DeepSeek Sparse Attention (DSA) mechanism. This innovation dramatically reduces the computational cost of processing long sequences (like 300-page books) while maintaining performance, leading to a roughly 50% reduction in inference costs compared to the previous model.
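To illustrate the general idea behind sparse attention, here is a minimal conceptual sketch in which each query attends only to its top-k highest-scoring keys instead of the full sequence. This is not DeepSeek's actual DSA implementation (which, per DeepSeek's materials, uses a lightweight indexer to select tokens without scoring every pair); the sketch below still computes full scores and only demonstrates the selection semantics.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(Q, K, V, k):
    """Each query attends only to its k highest-scoring keys.

    Conceptual illustration only: a real sparse-attention system
    (like DSA) avoids computing the full score matrix in the first
    place, which is where the cost savings come from. Here we compute
    it anyway and just mask out everything outside the top-k.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # (n_q, n_k) full scores
    # Indices of the (n_k - k) lowest-scoring keys per query.
    drop = np.argpartition(scores, -k, axis=-1)[:, :-k]
    masked = scores.copy()
    np.put_along_axis(masked, drop, -np.inf, axis=-1)  # exclude non-top-k keys
    return softmax(masked, axis=-1) @ V
```

When k equals the full sequence length, this reduces exactly to dense attention, which makes the mask logic easy to sanity-check.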

📊 Performance Comparison with Competitors
Mathematics: DeepSeek-V3.2-Speciale scored 96.0% on AIME 2025, slightly ahead of GPT-5-High (94.6%) and Gemini-3.0-Pro (95.0%).
Coding: On the SWE-bench Verified benchmark, which tests real-world software bugs, V3.2 resolved 73.1% of issues, close to GPT-5-High’s 74.9%.
The company notes that while competitive, its models can require longer “reasoning trajectories” to match the output quality of rivals like Gemini.
DeepSeek-V3.2-Speciale was available via a temporary API endpoint that was set to expire on December 15, 2025, with its capabilities planned to be merged into the standard model.
The company has also indicated that future work will focus on scaling pre-training compute to improve the breadth of world knowledge in its models.
Official Source:
DeepSeek.com: DeepSeek V3.2 Official Release & Features