Model Self-Improvement, Driving Productivity Innovation
Through Technological Breakthroughs
The latest MiniMax AI model: it independently builds a complex agent harness to accomplish highly complex productivity tasks.
MiniMax M2.7 demonstrates outstanding capabilities in software engineering, professional office domains, complex environment interaction, and interactive entertainment — a major leap from MiniMax M2.5 and MiniMax M2.
MiniMax delivers excellent code performance in end-to-end project delivery, log analysis for bug hunting, code security, and machine learning tasks. On the SWE-Bench Pro benchmark, M2.7 scores 56.22%, nearly matching Opus's best level and a significant jump from MiniMax M2.5's 43.3%.
Enhanced MiniMax AI domain expertise across fields. M2.7 achieves a GDPval-AA ELO score of 1495, the highest among open-source models, and shows significant improvement in complex Office Suite editing over MiniMax M2.
Across 40 complex skill cases, MiniMax M2.7 maintains a 97% skill-adherence rate. It also shows significant improvement in OpenClaw usage, approaching Sonnet 4.6 on the MMClaw evaluation.
MiniMax M2.7 demonstrates excellent identity preservation and emotional intelligence. Beyond productivity use cases, MiniMax AI opens space for innovation in interactive entertainment scenarios.
From initial design to full deployment, M2.7 handles complex multi-step software engineering projects with a VIBE-Pro score of 55.6%, showing why MiniMax AI is chosen for production workflows.
Advanced MiniMax code agent capabilities enable autonomous problem-solving with deep understanding of complex engineering systems. Terminal Bench 2 score of 57.0% puts MiniMax M2.7 ahead of GPT-5.2.
Model self-improvement capabilities allow MiniMax M2.7 to iteratively refine its outputs, driving continuous enhancement through technological breakthroughs — a key differentiator from MiniMax M2.5 and MiniMax M2.
MiniMax AI seamlessly orchestrates multiple agents for complex workflows, enabling sophisticated task decomposition and parallel execution across tools like Claude Code and OpenCode.
Significant improvement in complex editing for Excel, PPT, and Word. MiniMax M2.7 handles multi-turn modifications and high-fidelity edits better than any previous MiniMax AI release.
Outstanding generalization across coding tools — MiniMax M2.7 is compatible with Claude Code, Cursor, Cline, Codex CLI, OpenCode, and more. The best MiniMax code experience in any IDE.
MiniMax M2.7 delivers competitive or leading results across software engineering, coding, and professional task benchmarks — outperforming MiniMax M2.5 and rivaling Kimi 2.7, GLM, and Claude Code.
| Benchmark | MiniMax M2.7 | MiniMax M2.5 | Kimi 2.7 | GLM | Claude Code | GPT-5.2 |
|---|---|---|---|---|---|---|
| SWE-Bench Pro (diverse agentic coding tasks) | 56.22% | 43.3% | — | — | — | 55.6% |
| VIBE-Pro (end-to-end project delivery) | 55.6% | — | — | — | — | — |
| Terminal Bench 2 (complex engineering systems) | 57.0% | — | — | — | 59.1% | 54.0% |
| GDPval-AA (expert tasks, ELO) | 1495 | — | — | — | 1633 | 1462 |
| Skill Adherence (40 complex skills, >2000 tokens) | 97% | — | — | — | — | — |
| MMClaw (OpenClaw environment interaction) | Approaching Sonnet 4.6 | Baseline | — | — | Leading | — |
All examples below were generated by MiniMax M2.7 in a single shot. See why Reddit developers and the Hugging Face community are excited about MiniMax code generation.
Full-featured music library with sidebar navigation and album displays
Charity website with hero section and donation integration
E-commerce platform with product grid and modern UI
Official museum website with immersive design and exhibit galleries
Personal homepage with photo gallery and minimalist aesthetic
Outstanding tool scaffolding generalization. Two API versions are available: M2.7 and M2.7-highspeed, which returns identical results at higher speed. Both are also available via OpenRouter and Ollama.
```python
import requests

url = "https://api.minimax.io/v1/text/chatcompletion_v2"

# A minimal chat-completion request against MiniMax M2.7
payload = {
    "model": "MiniMax-M2.7",
    "messages": [
        {"role": "user", "content": "Hello"}
    ],
}

# Replace <token> with your MiniMax API key
headers = {"Authorization": "Bearer <token>"}

response = requests.post(url, json=payload, headers=headers)
print(response.text)
```
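The same request shape works for both API versions; a small helper makes the switch explicit. This is a sketch under one assumption: the exact model identifier for the faster variant (written here as `MiniMax-M2.7-highspeed`, derived from the version name above) should be confirmed against the API documentation.

```python
import requests

API_URL = "https://api.minimax.io/v1/text/chatcompletion_v2"

def build_payload(prompt: str, highspeed: bool = False) -> dict:
    # "MiniMax-M2.7-highspeed" is an assumed identifier based on the
    # version name in the text; verify the exact string in the API docs.
    model = "MiniMax-M2.7-highspeed" if highspeed else "MiniMax-M2.7"
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str, token: str, highspeed: bool = False) -> dict:
    # POST the payload with a bearer token and return the parsed JSON reply.
    resp = requests.post(
        API_URL,
        json=build_payload(prompt, highspeed),
        headers={"Authorization": f"Bearer {token}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```

Because both versions return identical results, callers can toggle `highspeed` freely based on latency needs without changing anything else.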
The MiniMax coding plan delivers significantly improved performance while price remains unchanged. Token Plan users automatically benefit from higher inference speeds for MiniMax M2.7.
Access MiniMax M2.7 via the official API, OpenRouter, or Ollama. Both the standard M2.7 and the high-TPS M2.7-highspeed version are supported, with full automatic cache support and OpenCode MiniMax integration.
Find MiniMax M2.7 on Hugging Face (MiniMax Hugging Face) and on the general Agent platform. Experience the best MiniMax code assistance and logical reasoning with no development required.
Compatible with leading AI coding tools
MiniMax M2.7 competes directly with Kimi 2.7, GLM, and Claude Code in the frontier model tier. On the MiniMax benchmark suite, M2.7 achieves 56.22% on SWE-Bench Pro (nearly matching Opus), 1495 ELO on GDPval-AA (highest among open-source models), and 97% skill adherence on complex tasks. In real-world MiniMax code generation, it rivals or surpasses these competitors at a fraction of the cost.
On Reddit (MiniMax M2.7 Reddit threads, especially r/LocalLLaMA), developers praise M2.7 for its backend coding strength and deep code-reading habits. The community notes it excels at complex refactoring and bug hunting. Some Reddit users highlight its cost-efficiency, calling it the most affordable frontier-level model available.
Yes. MiniMax M2.7 is accessible via Ollama (Ollama MiniMax) and OpenRouter as third-party integration options. Note that the Ollama cloud tag currently runs inference on MiniMax servers rather than local hardware. For direct access, the official API and MiniMax coding plan remain the recommended routes, with OpenCode MiniMax integration available.
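For OpenRouter, requests go through its standard OpenAI-style chat-completions endpoint. A minimal stdlib sketch follows; the model slug used here (`minimax/minimax-m2.7`) is an assumption and should be checked against OpenRouter's model list.

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL_SLUG = "minimax/minimax-m2.7"  # assumed slug; verify on OpenRouter

def make_request(prompt: str, api_key: str) -> urllib.request.Request:
    # Build (but do not send) a chat-completion request for M2.7.
    body = json.dumps({
        "model": MODEL_SLUG,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        OPENROUTER_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Sending is one call away:
#   reply = urllib.request.urlopen(make_request("Hello", "<your-key>"))
```

Separating request construction from sending keeps the routing choice (official API vs. OpenRouter) a one-line change.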
MiniMax Hugging Face presence includes model cards, documentation, and community discussion. You can find MiniMax M2.7 on Hugging Face for benchmarking details and model information. For full inference, use the official API, OpenRouter, or the MiniMax coding plan to get started.
MiniMax M2.7 introduces Self-Evolution (model self-improvement), dramatically improved MiniMax code capabilities, and stronger MiniMax benchmark results across the board. Compared to MiniMax M2.5 and MiniMax M2, M2.7 shows major gains in agentic coding (SWE-Bench Pro: 56.22% vs 43.3%), OpenClaw environment interaction, and Office Suite editing fidelity, as confirmed by large-scale evaluation.
MiniMax M2.7 features outstanding tool scaffolding generalization and is compatible with Claude Code, OpenCode, Cursor, Cline, Codex CLI, Roo Code, Kilo Code, Droid, TRAE, and Grok CLI. As confirmed by Reddit and Hugging Face community feedback, OpenCode MiniMax and Claude Code integrations deliver the smoothest MiniMax code experience.
Experience model self-improvement and drive productivity innovation with MiniMax M2.7. Available now via API, OpenRouter, Ollama, and Hugging Face.