Methodology

How AILeaderboard.live scores and sources model data.

Ranking formulas, source-of-truth rules, and current live-vs-curated coverage are public and fixed.

Models in current catalog: 11
Pricing: official live: 11
Pricing: curated official: 0
Latency source: Benchmark

Ranking formulas

Cheapest

Lowest total token cost only.

Score = normalized total token cost only, where total cost = input price + output price.
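The cost normalization could be sketched as follows. This is a hypothetical illustration, not the site's actual code: it assumes min-max normalization with lower cost mapping to a higher score, which the text does not specify. The same normalize-one-metric pattern would apply to the Fastest, Smartest, and Coding rankings.

```python
# Hypothetical sketch of the Cheapest ranking: normalized total token cost only.
# Assumes min-max normalization where the cheapest model scores 1.0 and the
# most expensive scores 0.0; prices are illustrative, not real model prices.

def cheapest_scores(models):
    """models: dict of name -> (input_price, output_price) per 1M tokens."""
    totals = {name: inp + out for name, (inp, out) in models.items()}
    lo, hi = min(totals.values()), max(totals.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all costs match
    # Invert so that lower total cost yields a higher normalized score.
    return {name: (hi - t) / span for name, t in totals.items()}

scores = cheapest_scores({
    "model-a": (0.50, 1.50),   # $2.00 total
    "model-b": (3.00, 15.00),  # $18.00 total
    "model-c": (1.00, 4.00),   # $5.00 total
})
# -> model-a: 1.0, model-b: 0.0, model-c: 0.8125
```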

Fastest

Lowest observed latency only.

Score = normalized latency only.

Smartest

Highest intelligence benchmark score only.

Score = normalized intelligence benchmark only.

Best Value

A balanced tradeoff across intelligence, cost, latency, and context window.

Score = 45% intelligence + 35% cost + 10% latency + 10% context.
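The Best Value composite can be expressed as a weighted sum. A minimal sketch, assuming each component has already been normalized to the 0–1 range with higher meaning better; the weights come from the formula above, while the sample component values are made up for illustration.

```python
# Hypothetical sketch of the Best Value composite score.
# Weights are taken from the published formula; component values must already
# be normalized to 0..1 (higher = better) before blending.

WEIGHTS = {"intelligence": 0.45, "cost": 0.35, "latency": 0.10, "context": 0.10}

def best_value(components):
    """components: dict with normalized scores for each weighted dimension."""
    return sum(WEIGHTS[k] * components[k] for k in WEIGHTS)

score = best_value({
    "intelligence": 0.9,  # strong benchmark score
    "cost": 0.6,          # mid-range pricing
    "latency": 0.8,       # fairly responsive
    "context": 1.0,       # largest context window in the set
})
# 0.45*0.9 + 0.35*0.6 + 0.10*0.8 + 0.10*1.0 = 0.795
```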

Coding

Highest coding benchmark score only.

Score = normalized coding benchmark only.

Source-of-truth rules
Pricing should come from provider-owned pages whenever feasible.
Rate and context limits should come from provider-owned docs whenever feasible.
Latency, intelligence, and coding are benchmark-backed today, not official provider metrics.
OpenRouter may be used later for discovery, but not as the sole pricing authority when official pricing exists.
Current live coverage
Official live pricing parsers: OpenAI, Google Gemini, Anthropic, xAI, Mistral, Cohere, DeepSeek, Groq.
Limits are still mostly curated official metadata; live limit parsing is not complete yet.
Benchmark source label in the catalog: Artificial Analysis.
Each model page exposes source links and provenance badges.
Benchmark and catalog references
Artificial Analysis
OpenRouter model catalog reference
OpenRouter is currently a reference for coverage and UX direction, not the pricing source of truth when official provider pricing exists.