New Release

MiniMax M2.7

Model Self-Improvement, Driving Productivity Innovation
Through Technological Breakthroughs

The latest MiniMax AI model: it independently builds a complex agent harness to accomplish highly complex productivity tasks.

Why MiniMax M2.7 Is Worth
Your Attention

MiniMax M2.7 demonstrates outstanding capabilities in software engineering, professional office domains, complex environment interaction, and interactive entertainment — a major leap from MiniMax M2.5 and MiniMax M2.

MiniMax Code in Real-World Software Engineering

MiniMax code performs excellently in end-to-end project delivery, log analysis for bug hunting, code security, and machine learning tasks. On the SWE-Bench Pro benchmark, M2.7 scores 56.22%, nearly matching Opus's best level and marking a significant jump from MiniMax M2.5.

56.22% SWE-Bench Pro Score

MiniMax AI for Professional Office Domains

MiniMax AI brings enhanced domain expertise across professional fields. It achieves a GDPval-AA ELO score of 1495, the highest among open-source models, and shows significant improvement in complex Office Suite editing over MiniMax M2.

OpenClaw & Complex Environment Interaction

Across 40 complex skill cases, MiniMax M2.7 maintains a 97% skill-adherence rate. It shows significant improvement in OpenClaw usage, approaching Sonnet 4.6 on the MMClaw evaluation.

MiniMax 2.7 Identity & Emotional Intelligence

MiniMax 2.7 demonstrates excellent identity preservation and emotional intelligence. Beyond productivity use cases, MiniMax AI opens space for innovation in interactive entertainment scenarios.

What MiniMax M2.7 Can Do
That Others Can't

01

MiniMax M2.7 End-to-End Project Delivery

From initial design to full deployment, M2.7 handles complex multi-step software engineering projects with a VIBE-Pro score of 55.6%, showing why MiniMax AI is chosen for production workflows.

02

MiniMax Code & Agentic Coding

Advanced MiniMax code-agent capabilities enable autonomous problem-solving with a deep understanding of complex engineering systems. A Terminal Bench 2 score of 57.0% puts MiniMax M2.7 ahead of GPT-5.2.

03

MiniMax M2.7 Self-Evolution

Model self-improvement capabilities allow MiniMax M2.7 to iteratively refine its outputs, driving continuous enhancement through technological breakthroughs — a key differentiator from MiniMax M2.5 and MiniMax M2.

04

MiniMax AI Multi-Agent Collaboration

MiniMax AI seamlessly orchestrates multiple agents for complex workflows, enabling sophisticated task decomposition and parallel execution across tools like Claude Code and OpenCode.

05

MiniMax M2.7 Office Suite Mastery

Significant improvement in complex editing for Excel, PPT, and Word. MiniMax M2.7 handles multi-turn modifications and high-fidelity edits better than any previous MiniMax AI release.

06

Claude Code, OpenCode & Tool Generalization

Outstanding generalization across coding tools — MiniMax M2.7 is compatible with Claude Code, Cursor, Cline, Codex CLI, OpenCode, and more. The best MiniMax code experience in any IDE.

MiniMax Benchmark:
M2.7 vs. the Best Models Out There

MiniMax M2.7 delivers competitive or leading results across software engineering, coding, and professional task benchmarks — outperforming MiniMax M2.5 and rivaling Kimi 2.7, GLM, and Claude Code.

Benchmark                                            MiniMax M2.7   MiniMax M2.5   Comparison models (as listed)
SWE-Bench Pro (diverse agentic coding tasks)         56.22%         43.3%          55.6%
VIBE-Pro (end-to-end project delivery)               55.6%          -              -
Terminal Bench 2 (complex engineering systems)       57.0%          -              59.1%, 54.0%
GDPval-AA (expert tasks, ELO)                        1495           -              1633, 1462
Skill Adherence (40 complex skills, >2000 tokens)    97%            -              -
MMClaw (OpenClaw environment interaction)            approaching Sonnet 4.6

Comparison models are Kimi 2.7, GLM, Claude Code, and GPT-5.2; the source does not attribute individual comparison scores to specific models.

MiniMax M2.7 Code:
From Prompt to Production, in a Single Shot

All examples below were generated by MiniMax M2.7 in a single shot. See why Reddit developers and the Hugging Face community are excited about MiniMax code generation.

Music Library Website

Full-featured music library with sidebar navigation and album displays

Wildlife Protection Charity

Charity website with hero section and donation integration

Fashion Shopping Website

E-commerce platform with product grid and modern UI

Natural History Museum

Official museum website with immersive design and exhibit galleries

Photographer Portfolio

Personal homepage with photo gallery and minimalist aesthetic

MiniMax Coding Plan:
How to Start Building with M2.7

M2.7 offers outstanding tool-scaffolding generalization and ships in two API versions: M2.7 and M2.7-highspeed, which returns identical results at higher speed. Both are also available via OpenRouter and Ollama.

import requests

# MiniMax chat-completion endpoint (v2)
url = "https://api.minimax.io/v1/text/chatcompletion_v2"

payload = {
    "model": "MiniMax-M2.7",  # or "MiniMax-M2.7-highspeed" for faster inference
    "messages": [
        {"role": "user", "content": "Hello"}
    ]
}

# Replace <token> with your MiniMax API key
headers = {"Authorization": "Bearer <token>"}

response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()  # surface HTTP errors instead of printing them
print(response.json())
01

Subscribe to the MiniMax Coding Plan

The MiniMax coding plan delivers significantly improved performance while price remains unchanged. Token Plan users automatically benefit from higher inference speeds for MiniMax M2.7.

Read More
02

OpenRouter, Ollama & Platform Integration

Access MiniMax M2.7 via the official API, OpenRouter, or Ollama. Both the standard M2.7 and the high-TPS M2.7-highspeed version are supported, with fully automatic cache support and OpenCode MiniMax integration.
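As a sketch of the OpenRouter route: OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so a request looks much like the official API example on this page. The model slug minimax/minimax-m2.7 and the OPENROUTER_API_KEY environment variable are illustrative assumptions; check OpenRouter's model list for the exact ID.

```python
import os

# OpenRouter's OpenAI-compatible chat endpoint; the default model slug
# below is an assumption -- verify it on openrouter.ai before use.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt, model="minimax/minimax-m2.7"):
    """Assemble headers and payload for an OpenRouter chat completion."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

def send(prompt):
    """Perform the actual HTTP call (network access required)."""
    import requests
    headers, payload = build_request(prompt)
    resp = requests.post(OPENROUTER_URL, headers=headers, json=payload)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

headers, payload = build_request("Hello")
```

build_request is kept separate from send so the payload shape is easy to inspect; calling send("Hello") performs the real request.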

Read More
03

Hugging Face & MiniMax Agent Integration

Find MiniMax M2.7 on Hugging Face and on the general MiniMax Agent platform. Experience the best MiniMax code assistance and logical reasoning, no development required.

Read More

Compatible with leading AI coding tools

Claude Code Cursor Cline Codex CLI OpenCode Roo Code Kilo Code Droid TRAE Grok CLI

What's Inside
MiniMax M2.7

Model Name: MiniMax M2.7 (MiniMax 2.7)
Versions: M2.7 & M2.7-highspeed
Input: Text, Image, Code
Output: Text, Code
Tool Use: Function Calling, Structured Output, Code Execution
Best For: Agentic Coding, Software Engineering, Office Productivity, Multi-Agent Systems
Availability: MiniMax API, MiniMax Agent, Token Plan, OpenRouter, Ollama, Hugging Face
Cache: Full automatic cache, no configuration needed
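Since the spec above lists function calling among M2.7's tool-use modes, here is a minimal, self-contained sketch of the client side of a tool-calling loop. The OpenAI-style tools schema and the get_weather tool are illustrative assumptions, not the documented MiniMax format; consult the official API docs for the exact field names.

```python
import json

# Hypothetical tool schema in the common OpenAI-style "tools" format.
# This would be sent alongside "messages" in the chat-completion payload.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Local implementation for the advertised tool (stub data for the sketch).
def get_weather(city):
    return {"city": city, "temp_c": 21, "condition": "clear"}

REGISTRY = {"get_weather": get_weather}

def dispatch(tool_call):
    """Execute one model-issued tool call and return a JSON result string."""
    fn = REGISTRY[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return json.dumps(fn(**args))

# Example: a tool call shaped the way such APIs typically return it.
call = {"function": {"name": "get_weather",
                     "arguments": '{"city": "Berlin"}'}}
result = dispatch(call)  # JSON string sent back in a "tool" role message
```

The dispatch step is where agent harnesses spend most of their logic: validating arguments, running the tool, and feeding the JSON result back to the model for the next turn.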

MiniMax M2.7: Questions
Developers Are Actually Asking

How does MiniMax M2.7 compare to Kimi 2.7, GLM, and Claude Code?

MiniMax M2.7 competes directly with Kimi 2.7, GLM, and Claude Code in the frontier model tier. On the MiniMax benchmark suite, M2.7 achieves 56.22% on SWE-Bench Pro (nearly matching Opus), 1495 ELO on GDPval-AA (highest among open-source models), and 97% skill adherence on complex tasks. In real-world MiniMax code generation, it rivals or surpasses these competitors at a fraction of the cost.

What does Reddit say about MiniMax M2.7?

On Reddit (MiniMax M2.7 threads, especially r/LocalLLaMA), developers praise M2.7 for its backend coding strength and deep code-reading habits. The community notes it excels at complex refactoring and bug hunting, and some users highlight its cost-efficiency, calling it the most affordable frontier-level model available.

Can I run MiniMax M2.7 through Ollama or OpenRouter?

Yes. MiniMax M2.7 is accessible via Ollama (Ollama MiniMax) and OpenRouter as third-party integration options. Note that the Ollama cloud tag currently runs inference on MiniMax servers rather than local hardware. For direct access, the official API and MiniMax coding plan remain the recommended routes, with OpenCode MiniMax integration available.
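As a stdlib-only sketch of the Ollama route: Ollama serves a local REST API, including a /api/chat endpoint. The tag minimax-m2.7 is an assumption for illustration; run `ollama list` or check the Ollama model library for the actual tag, and remember that a cloud tag executes remotely as described above.

```python
import json
import urllib.request

# Default address of a locally running Ollama daemon (`ollama serve`).
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat(prompt, model="minimax-m2.7"):
    """Payload for Ollama's /api/chat endpoint (model tag is an assumption)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete response object
    }

def chat(prompt):
    """Send the request to the local Ollama daemon (network/daemon required)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_chat(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

payload = build_chat("Hello")
```

With stream set to True instead, Ollama returns newline-delimited JSON chunks, which is the usual choice for interactive use.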

Is MiniMax M2.7 available on Hugging Face?

The MiniMax Hugging Face presence includes model cards, documentation, and community discussion. You can find MiniMax M2.7 on Hugging Face for benchmarking details and model information. For full inference, use the official API, OpenRouter, or the MiniMax coding plan to get started.

What's new in MiniMax M2.7 compared to MiniMax M2.5 and M2?

MiniMax M2.7 introduces Self-Evolution (model self-improvement), dramatically improved MiniMax code capabilities, and stronger MiniMax benchmark results across the board. Compared to MiniMax M2.5 and MiniMax M2, M2.7 shows major gains in agentic coding (SWE-Bench Pro 56.22% vs 43.3%), OpenClaw environment interaction, and Office Suite editing fidelity.

Which coding tools is MiniMax M2.7 compatible with?

MiniMax M2.7 features outstanding tool-scaffolding generalization and is compatible with Claude Code, OpenCode, Cursor, Cline, Codex CLI, Roo Code, Kilo Code, Droid, TRAE, and Grok CLI. As confirmed by Reddit and Hugging Face community feedback, the OpenCode MiniMax and Claude Code integrations deliver the smoothest MiniMax code experience.

Don't Take Our Word for It — Try MiniMax M2.7

Experience model self-improvement and drive productivity innovation with MiniMax M2.7. Available now via API, OpenRouter, Ollama, and Hugging Face.

Independent MiniMax M2.7 website. Not affiliated with MiniMax.