MarkTechPost
DeepSeek AI Releases DeepSeek-V4: Compressed Sparse Attention and Heavily Compressed Attention Enable One-Million-Token Contexts
What Happened
DeepSeek-AI has released a preview of the DeepSeek-V4 series: two Mixture-of-Experts (MoE) language models built around one core challenge: making one-million-token context windows practical and affordable at inference time. The series consists of DeepSeek-V4-Pro, with 1.6T total parameters …
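The article attributes the affordability claim to the MoE design, where only a few experts run per token. As a rough illustration of that idea (not DeepSeek's actual architecture; all names, shapes, and the top-k routing scheme here are generic assumptions), a toy routed MoE layer in NumPy looks like this:

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Toy Mixture-of-Experts layer: route each token to its top-k experts.

    x       : (tokens, d) activations
    gate_w  : (d, n_experts) router weights
    experts : list of (d, d) expert weight matrices

    Only k experts run per token, which is why an MoE model can carry a
    very large total parameter count while activating only a small
    fraction of it for any given token.
    """
    logits = x @ gate_w                          # (tokens, n_experts) router scores
    topk = np.argsort(logits, axis=1)[:, -k:]    # indices of the top-k experts per token
    sel = np.take_along_axis(logits, topk, axis=1)
    w = np.exp(sel - sel.max(axis=1, keepdims=True))  # softmax over selected experts only
    w /= w.sum(axis=1, keepdims=True)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):                  # run just the chosen experts per token
        for j in range(k):
            e = topk[t, j]
            out[t] += w[t, j] * (x[t] @ experts[e])
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 4, 3
x = rng.normal(size=(tokens, d))
gate_w = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = moe_layer(x, gate_w, experts, k=2)
print(y.shape)  # (3, 8)
```

With k=2 of 4 experts active, each token touches half the expert parameters; at production scale the active fraction is far smaller, which is the cost argument the article is making.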