Overview
MiroThinker-v1.5-30B is an open-source research agent model designed to advance long-horizon reasoning and complex multi-step analysis. It supports extended context windows and deep agent-environment interaction, making it suitable for demanding tasks such as extended reasoning workflows and information-seeking applications. The model is released at the 30B parameter scale, with a focus on tool integration and reasoning depth.
Official model page:
https://huggingface.co/miromind-ai/MiroThinker-v1.5-30B
Support for this model on DeployPad is planned and will be made available soon.
Launch
[Launch on DeployPad](https://deploypad.geodd.io/launch/miromind-ai/mirothinker-v1.5-30b){: .btn .btn-primary}
Supported Hardware (Planned)
Support is planned for the following GPUs:
| GPU |
|---|
| NVIDIA H100 |
| NVIDIA H200 |
| NVIDIA RTX Pro 6000 (Blackwell) |
Model Characteristics
| Attribute | Details |
|---|---|
| Model Family | MiroThinker v1.5 |
| Parameter Scale | 30B |
| Context Window | Up to 256K tokens |
| Tool Calls Support | Up to 400 per task |
| Primary Focus | Long-horizon reasoning and research workflows |
| Architecture | Interactive agent-oriented model |
Key capabilities include long-context reasoning, deep multi-step analysis, and enhanced interaction depth with environments, making it suited for tasks where extended sequences and reasoning chains are important.
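The tool-call budget listed above implies an agentic request loop on the client side. As a rough sketch of what that loop could look like (the model call here is a stub, and the tool name and schema are placeholders rather than confirmed DeployPad or MiroThinker specifics):

```python
# Hypothetical client-side agent loop for a tool-using model such as
# MiroThinker-v1.5-30B. The model call is stubbed out; in a real deployment
# it would be an OpenAI-compatible chat-completions request.

MAX_TOOL_CALLS = 400  # per-task budget from the model characteristics table

def web_search(query: str) -> str:
    """Placeholder tool; a real deployment would call an actual search API."""
    return f"results for: {query}"

TOOLS = {"web_search": web_search}

def call_model(messages):
    """Stub standing in for a chat-completions call. Returns a tool-call
    request on the first turn and a final answer once tool results exist."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "web_search",
                              "arguments": {"query": "MiroThinker"}}}
    return {"content": "Final answer based on tool results."}

def run_agent(task: str) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(MAX_TOOL_CALLS):
        reply = call_model(messages)
        if "tool_call" not in reply:
            return reply["content"]  # model finished reasoning
        call = reply["tool_call"]
        result = TOOLS[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "content": result})
    return "Tool-call budget exhausted."
```

The loop simply caps iterations at the advertised 400-call budget; actual request and response shapes will depend on the API DeployPad exposes once support lands.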
Supported Configurations (Planned)
| Capability | Status |
|---|---|
| Precision | FP8 (planned) |
| Streaming Inference | Planned |
| OpenAI-compatible API | Planned |
| Dynamic Batching | Planned |
| GPU Sharing / Multi-tenant | Planned |
| Production Deployment | Planned |
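Since an OpenAI-compatible API and streaming inference are both planned, a request body would likely follow the standard chat-completions shape. A minimal sketch, assuming the model identifier matches the Hugging Face name (unconfirmed until support ships):

```python
import json

# Sketch of a request body for the planned OpenAI-compatible endpoint.
# The model identifier and field values are assumptions; check the official
# DeployPad documentation once MiroThinker-v1.5-30B support is released.
payload = {
    "model": "miromind-ai/MiroThinker-v1.5-30B",
    "messages": [
        {"role": "user",
         "content": "Summarise recent work on long-horizon reasoning agents."}
    ],
    "stream": True,      # streaming inference is listed as planned
    "max_tokens": 2048,
}
body = json.dumps(payload)
```

Once the endpoint is live, a body like this would typically be POSTed to the deployment's chat-completions route with an API key in the `Authorization` header.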
Performance Notes
Support for MiroThinker-v1.5-30B will enable deployment across the listed GPUs with optimized inference configurations. Official benchmarks are not yet available for DeployPad, and performance characteristics will be published once testing is complete.
Current Status
| State |
|---|
| Coming Soon |
Feedback Requested
To help refine performance profiles and deployment guidance, users are encouraged to share feedback on:
- Output behaviour and reasoning quality
- Deployment stability across hardware targets
- Any errors or unexpected behaviours
Feedback can be posted in this thread or under Support & Troubleshooting.