upstage/Solar-Open-100B

Overview

Solar-Open-100B is Upstage’s flagship large language model, built on a Mixture-of-Experts (MoE) architecture. By activating only a subset of expert parameters for each token, it pairs strong reasoning and instruction-following capabilities with efficient inference. The model was trained from scratch at large scale and is released under the Solar-Apache License 2.0.

Official model page:
https://huggingface.co/upstage/Solar-Open-100B
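
The snippet below is a minimal sketch of pulling the checkpoint from the official model page with Hugging Face Transformers. It assumes the model loads through the standard Auto classes and has a chat template; check the model card for any required `transformers` version or `trust_remote_code` flag before relying on it.

```python
# Minimal sketch: loading Solar-Open-100B with Hugging Face Transformers.
# Assumes standard AutoModel support; verify requirements on the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "upstage/Solar-Open-100B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # precision assumption; FP8 on DeployPad is planned, not confirmed here
    device_map="auto",           # shard the ~102.6B total parameters across available GPUs
)

messages = [{"role": "user", "content": "Summarize what a Mixture-of-Experts model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```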


Launch

[Launch on DeployPad](https://deploypad.geodd.io/launch/upstage/solar-open-100b){: .btn .btn-primary}


Supported Hardware (Planned)

Support on DeployPad is planned for the following GPUs:

  • NVIDIA H100

  • NVIDIA H200

  • NVIDIA RTX Pro 6000 (Blackwell)

Model Characteristics

| Attribute | Details |
| --- | --- |
| Model Family | Solar Open |
| Architecture | Mixture-of-Experts (MoE) |
| Total Parameters | ~102.6B |
| Active Parameters | ~12B (per token) |
| Context Length | Up to 128k tokens |
| Pre-training Scale | ~19.7 trillion tokens |
| License | Solar-Apache License 2.0 |

Solar-Open-100B’s MoE design routes each token to a small subset of experts, so the model retains the knowledge capacity of its full ~102.6B parameters while only ~12B are active per token, keeping inference efficient.
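
To make the routing idea concrete, here is a generic top-k gating sketch. It is purely illustrative: the layer sizes, expert count, and top-k value are placeholders and do not describe Solar-Open-100B’s actual router or expert configuration.

```python
# Illustrative top-k MoE routing (generic sketch, NOT Solar-Open-100B's actual router).
# Only k experts run per token, so per-token compute stays far below total parameter count.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # gating network scores each expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.router(x)                  # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalize over the selected experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):           # each token visits just k experts
            for e in idx[:, slot].unique():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[int(e)](x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(5, 64)
print(layer(tokens).shape)  # torch.Size([5, 64])
```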


Supported Configurations (Planned)

| Capability | Status |
| --- | --- |
| Precision | FP8 (planned) |
| Streaming Inference | Planned |
| OpenAI-compatible API | Planned |
| Dynamic Batching | Planned |
| GPU Sharing / Multi-tenant | Planned |
| Production Deployment | Planned |
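
Since the OpenAI-compatible API and streaming inference are still planned, the following is only a sketch of what a client call could look like once they ship. The base URL and API key handling are hypothetical placeholders, not a confirmed endpoint.

```python
# Hedged sketch: calling the model through the planned OpenAI-compatible API.
# The base_url below is a placeholder; the real endpoint will be documented at launch.
from openai import OpenAI

client = OpenAI(
    base_url="https://deploypad.geodd.io/v1",  # hypothetical, not a confirmed endpoint
    api_key="YOUR_DEPLOYPAD_API_KEY",
)

# Streaming inference is planned; stream=True would yield tokens as they arrive.
stream = client.chat.completions.create(
    model="upstage/Solar-Open-100B",
    messages=[{"role": "user", "content": "Explain Mixture-of-Experts in two sentences."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```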

Performance Notes

Performance characteristics on DeployPad are under evaluation. Benchmarks such as throughput, latency, and cost per token will be published once integration testing is completed.


Current Status

State: Coming Soon

Feedback Requested

To help ensure a reliable and robust release, users are encouraged to share feedback on:

  • Model behavior and reasoning quality

  • Inference stability across workloads

  • Errors, edge cases, or unexpected outputs

Please provide feedback in this thread or open a topic under Support & Troubleshooting.