A man takes photos of a DeepSeek display at a shopping mall in Hangzhou. Image Credits: CN-STR/AFP / Getty Images

DeepSeek previews new AI model that ‘closes the gap’ with frontier models

Chinese AI lab DeepSeek has launched two preview versions of its newest large language model, DeepSeek V4, a much-awaited update to last year’s V3.2 model and the accompanying R1 reasoning model that took the AI world by storm.

The company says both DeepSeek V4 Flash and V4 Pro are mixture-of-experts models with context windows of 1 million tokens each — enough to allow large codebases or documents to be used in prompts. The mixture-of-experts approach activates only a subset of a model's parameters for each input, lowering inference costs.
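As a rough illustration of the idea (a minimal sketch, not DeepSeek's actual architecture — the expert count, dimensions, and gating scheme here are assumptions), top-k routing in a mixture-of-experts layer can be written as:

```python
import numpy as np

def moe_forward(x, experts, gate_w, top_k=2):
    """Route input x through only the top_k highest-scoring experts.

    Illustrative sketch of mixture-of-experts routing; shapes and
    gating are simplified assumptions.
    """
    scores = x @ gate_w                      # one gating score per expert
    top = np.argsort(scores)[-top_k:]        # indices of the top_k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                 # softmax over the chosen experts
    # Only the selected experts run, so compute scales with top_k,
    # not with the total number of experts ("active" vs. total parameters).
    return sum(w * experts[i](x) for i, w in zip(top, weights))

rng = np.random.default_rng(0)
dim, n_experts = 4, 8
experts = [lambda x, W=rng.standard_normal((dim, dim)): x @ W
           for _ in range(n_experts)]
gate_w = rng.standard_normal((dim, n_experts))
out = moe_forward(rng.standard_normal(dim), experts, gate_w, top_k=2)
print(out.shape)
```

This is why a 1.6-trillion-parameter model can report only 49 billion "active" parameters: each token touches a few experts, not the whole network.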

The Pro model has a total of 1.6 trillion parameters (49 billion active), making it the biggest open-weight model available: it outstrips Moonshot AI’s Kimi K 2.6 (1.1 trillion) and MiniMax’s M1 (456 billion), and is more than double the size of DeepSeek V3.2 (671 billion). The smaller V4 Flash has 284 billion parameters (13 billion active).

DeepSeek says both models are more efficient and performant than DeepSeek V3.2 due to architectural improvements, and have almost “closed the gap” with current leading models, both open and closed, on reasoning benchmarks.

The company claims its new V4-Pro-Max model outperforms its open-source peers across reasoning benchmarks and outstrips OpenAI’s GPT-5.2 and Google’s Gemini 3.0 Pro on some tasks. In coding competition benchmarks, DeepSeek says both V4 models’ performance is “comparable to GPT-5.4.”


However, the models seem to fall slightly behind frontier models in knowledge tests, specifically OpenAI’s GPT-5.4 and Google’s latest Gemini 3.1 Pro. This lag suggests a “developmental trajectory that trails state-of-the-art frontier models by approximately 3 to 6 months,” the lab wrote.

Both V4 Flash and V4 Pro support text only, unlike many of their closed-source peers, which can understand and generate audio, video, and images.


Notably, DeepSeek V4 is much more affordable than any frontier model available today. The smaller V4 Flash model costs $0.14 per million input tokens and $0.28 per million output tokens, undercutting GPT-5.4 Nano, Gemini 3.1 Flash, GPT-5.4 Mini, and Claude Haiku 4.5. The larger V4 Pro model, meanwhile, costs $0.145 per million input tokens and $3.48 per million output tokens, also undercutting Gemini 3.1 Pro, GPT-5.5, Claude Opus 4.7, and GPT-5.4.
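To put those per-million-token rates in concrete terms, the cost of a single request is a simple weighted sum. A small sketch using the V4 Flash prices quoted above (the token counts are hypothetical):

```python
def request_cost(input_tokens, output_tokens, in_price, out_price):
    """Cost in USD for one request, with prices quoted per million tokens."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# V4 Flash rates from the article: $0.14 input / $0.28 output per million tokens.
# Example: a large 800k-token codebase prompt with a 50k-token reply.
cost = request_cost(800_000, 50_000, 0.14, 0.28)
print(f"${cost:.3f}")  # $0.126
```

Even a prompt that nearly fills the 1-million-token context window costs well under a dollar at these rates.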

The launch comes a day after the U.S. accused China of stealing American AI labs’ IP on an industrial scale using thousands of proxy accounts. DeepSeek itself has been accused by Anthropic and OpenAI of “distilling,” essentially copying, their AI models.
