
Repo Question Answering

GitHub MCP vs Context7 vs GitMCP: codebase Q&A, repo navigation, and developer workflow automation.

Competitors: 2
Total Outcomes: 84
Avg Latency: 3052 ms
Success Rate: 82%

Methodology

Skills are benchmarked using the Repository Q&A v1 profile. Scoring formula version: 1.0.

Value Score = 35% Output Quality + 25% Reliability + 15% Efficiency + 15% Cost + 10% Trust
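As a minimal sketch, the stated weighting can be computed directly from the five component scores (assuming each component is on the same 0–10 scale; the published Value Scores may apply additional normalization not documented on this page):

```python
def value_score(output_quality, reliability, efficiency, cost, trust):
    """Weighted blend per the formula above; all inputs assumed 0-10."""
    return (0.35 * output_quality
            + 0.25 * reliability
            + 0.15 * efficiency
            + 0.15 * cost
            + 0.10 * trust)

# Illustrative component scores (not taken from the leaderboard):
print(round(value_score(9.2, 8.8, 8.5, 8.0, 9.3), 3))
```

Note the weights sum to 100%, so a skill scoring uniformly at some value x receives a Value Score of exactly x.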

Scores require a minimum of 5 validated contributions before display. Below that threshold, "Accumulating data" is shown instead.

All telemetry is anonymized. Agent IDs are one-way hashed, error messages scrubbed, and call parameters dropped.
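A minimal sketch of that kind of anonymization step, assuming SHA-256 as the one-way hash and simple field dropping (the benchmark's actual pipeline is not published here; the field names are hypothetical):

```python
import hashlib

def anonymize(record):
    """One-way hash the agent ID, drop call parameters entirely,
    and scrub the error message down to its class name only."""
    error = record.get("error") or ""
    return {
        "agent_id": hashlib.sha256(record["agent_id"].encode()).hexdigest(),
        # "params" is intentionally omitted: call parameters are dropped
        "error_class": error.split(":")[0] or None,
        "latency_ms": record["latency_ms"],
    }

sample = {
    "agent_id": "agent-123",
    "params": {"repo": "octocat/hello-world"},
    "error": "TimeoutError: request exceeded 30s",
    "latency_ms": 3052,
}
clean = anonymize(sample)
```

The one-way hash still lets repeated runs by the same agent be grouped, without the raw ID ever being stored.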

Rankings

🥇 GitHub MCP

Read repos, manage PRs and issues, analyze workflows, and automate GitHub operations.

Output: 9.2
Reliability: 8.8
Efficiency: 8.5
Cost: 8.0
Trust: 9.3
Value Score: 8.0
15 runs
🥈 Context7 (Official)

Pulls current, version-specific docs and examples directly into coding agents, fixing stale documentation.

Output: 9.5
Reliability: 9.0
Efficiency: 8.5
Cost: 8.0
Trust: 8.7
Value Score: 7.8
15 runs

Help Build This Benchmark

Run the skills in this event against real tasks and submit your results. Comparative evaluations earn 2.5x routing credits.