# Model Database

Pre-computed PeakWeights analyses of top open-source LLMs in 2026.

## Analyzed Models

| Model | Organization | Params | FP16 PPL | 4-bit PPL | Protected PPL | Recovery | K | Type | Download |
|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-7B | Alibaba | 7.6B | 10.62 | 11.51 | 10.63 | 99% | 50 | MLP | .pwi |
| Mistral-7B-v0.3 | Mistral AI | 7.2B | 13.44 | 13.80 | 13.58 | 61% | 100 | lm_head | .pwi |
| SmolLM2-1.7B | HuggingFace | 1.7B | 17.92 | 24.56 | 17.99 | 99% | 50 | MLP | .pwi |
| DeepSeek-R1-Distill-Qwen-7B | DeepSeek | 7.6B | 70.73 | 73.52 | 70.91 | 93% | 50 | MLP | .pwi |
| Phi-3-mini | Microsoft | 3.8B | 11.65 | 12.67 | 11.68 | 97% | 50 | MLP | .pwi |

Perplexity measured on WikiText-103. Recovery measured with the number of protected weights listed in the K column (K=50 unless noted otherwise).
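The recovery figures above appear consistent with reading recovery as the fraction of the quantization-induced perplexity gap closed by protecting the top-K weights. A minimal pure-Python sketch of that interpretation, using the table's rounded perplexities (so the computed values land within about a point of the reported percentages, not exactly on them):

```python
# Assumed interpretation (consistent with the table, not confirmed by the docs):
#   recovery = (ppl_4bit - ppl_protected) / (ppl_4bit - ppl_fp16)
# Perplexities below are copied from the table, rounded to 2 decimals.

ROWS = {
    # model: (fp16, 4-bit, protected, reported recovery %)
    "Qwen2.5-7B": (10.62, 11.51, 10.63, 99),
    "Mistral-7B-v0.3": (13.44, 13.80, 13.58, 61),
    "SmolLM2-1.7B": (17.92, 24.56, 17.99, 99),
    "DeepSeek-R1-Distill-Qwen-7B": (70.73, 73.52, 70.91, 93),
    "Phi-3-mini": (11.65, 12.67, 11.68, 97),
}

def recovery_pct(fp16: float, quant: float, protected: float) -> float:
    """Percent of the quantization perplexity gap recovered by protection."""
    return 100.0 * (quant - protected) / (quant - fp16)

for model, (fp16, quant, prot, reported) in ROWS.items():
    computed = recovery_pct(fp16, quant, prot)
    print(f"{model}: computed {computed:.1f}% vs reported {reported}%")
```

With the rounded inputs, every computed value agrees with the reported column to within one percentage point.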

## Top 25 Models (Analysis Pending)

Run locally with: `peakweights MODEL_ID --output weights.pwi`

| Model | Organization | Parameters | Architecture | License |
|---|---|---|---|---|
| DeepSeek-V3 | DeepSeek | 671B (37B active) | MoE | MIT |
| DeepSeek-R1 | DeepSeek | 671B (37B active) | MoE | MIT |
| Qwen3-235B-A22B | Alibaba | 235B (22B active) | MoE | Qwen License |
| Llama 4 Maverick | Meta | 400B (17B active) | MoE | Llama 4 License |
| Llama 4 Scout | Meta | 109B (17B active) | MoE | Llama 4 License |
| Qwen2.5-72B-Instruct | Alibaba | 72B | Dense Transformer | Qwen License |
| Llama-3.3-70B-Instruct | Meta | 70B | Dense Transformer | Llama 3.3 License |
| DeepSeek-R1-Distill-Llama-70B | DeepSeek | 70B | Dense Transformer | MIT |
| Mixtral-8x22B-Instruct-v0.1 | Mistral AI | 141B (39B active) | MoE | Apache 2.0 |
| Qwen2.5-14B-Instruct | Alibaba | 14B | Dense Transformer | Apache 2.0 |
| Qwen2.5-7B-Instruct | Alibaba | 7B | Dense Transformer | Apache 2.0 |
| DeepSeek-R1-Distill-Qwen-7B | DeepSeek | 7B | Dense Transformer | MIT |
| Mistral-7B-Instruct-v0.3 | Mistral AI | 7B | Dense Transformer | Apache 2.0 |
| Gemma-2-9B-it | Google | 9B | Dense Transformer | Gemma License |
| Gemma-2-27B-it | Google | 27B | Dense Transformer | Gemma License |
| Phi-4 | Microsoft | 14B | Dense Transformer | MIT |
| Qwen2.5-Coder-32B-Instruct | Alibaba | 32B | Dense Transformer | Apache 2.0 |
| Codestral-22B-v0.1 | Mistral AI | 22B | Dense Transformer | MNPL |
| DeepSeek-Coder-V2-Instruct | DeepSeek | 236B (21B active) | MoE | MIT |
| SmolLM2-1.7B-Instruct | HuggingFace | 1.7B | Dense Transformer | Apache 2.0 |
| Qwen2.5-3B-Instruct | Alibaba | 3B | Dense Transformer | Apache 2.0 |
| Phi-3.5-mini-instruct | Microsoft | 3.8B | Dense Transformer | MIT |
| Gemma-2-2B-it | Google | 2B | Dense Transformer | Gemma License |
| MiMo-V2-Flash | Xiaomi | 309B (15B active) | MoE | Apache 2.0 |
| Command R+ | Cohere | 104B | Dense Transformer | CC-BY-NC |
| OLMo-2-13B-Instruct | AI2 | 13B | Dense Transformer | Apache 2.0 |

## Run your own analysis

Analyze any HuggingFace model with the CLI or Python API.

### CLI

```shell
pip install peakweights

# Analyze model
peakweights Qwen/Qwen2.5-7B \
  --top_k 50 \
  --output weights.pwi
```
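To queue analyses for the pending models in the table above, the CLI invocation can be scripted. A hedged sketch that only constructs the command lines, using the documented flags (`--top_k`, `--output`); the model IDs and output naming are illustrative, and actually executing the commands requires `peakweights` to be installed plus access to the model weights:

```python
import shlex

# Hypothetical HuggingFace model IDs for a small batch; substitute your own.
MODELS = [
    "Qwen/Qwen2.5-7B",
    "mistralai/Mistral-7B-v0.3",
    "HuggingFaceTB/SmolLM2-1.7B",
]

def build_command(model_id: str, top_k: int = 50) -> list[str]:
    """Build one `peakweights` CLI invocation using only documented flags."""
    output = model_id.split("/")[-1] + ".pwi"  # e.g. Qwen2.5-7B.pwi
    return ["peakweights", model_id, "--top_k", str(top_k), "--output", output]

for model_id in MODELS:
    # Print shell-ready commands; pipe to `sh` or a job scheduler to execute.
    print(shlex.join(build_command(model_id)))
```

Printing rather than executing keeps this a dry run: the output can be reviewed, edited, or handed to a scheduler.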

### Python

```python
from peakweights import find, save

# Locate the top-K critical weights (runs on GPU via device="cuda")
critical = find(
    "Qwen/Qwen2.5-7B",
    k=50,
    device="cuda",
)

# Write the result to a .pwi file
save(critical, "weights.pwi")
```