| deepseek-ai/DeepSeek-R1-0528 | DeepSeek | Updated DeepSeek-R1 release with improved reasoning and inference from algorithmic post-training optimizations; designed for high-accuracy tasks. |
| meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 | Meta AI | Multimodal instruction-tuned model leveraging a mixture-of-experts (MoE) architecture for top-tier performance in both text and image understanding. |
| gpt-oss-120b | OpenAI | Open-weight 117B-parameter Mixture-of-Experts model supporting 128k context, with advanced chain-of-thought reasoning, optimized for real-world tool use, coding, and efficient local or cloud deployment. |
| Intel/Qwen3-Coder-480B-A35B-Instruct-int4-mixed-ar | Qwen | High-capacity, instruction-tuned code generation model optimized with INT4 mixed-precision for fast inference, designed for complex programming tasks on Intel hardware. |
| Qwen3-Next-80B-A3B-Instruct | Qwen | High-capacity model optimized for instruction following and knowledge-intensive tasks. |
| gpt-oss-20b | OpenAI | Compact open-weight sibling of gpt-oss-120b, suited to low-resource deployment, text generation, and general-purpose tasks. |
| Qwen3-235B-A22B-Thinking-2507 | Qwen | Powerful 235B-parameter Mixture-of-Experts model (22B activated) optimized for deep reasoning, planning, and complex multi-step tasks. |
| Mistral-Nemo-Instruct-2407 | Mistral AI | Instruction-tuned model focusing on efficient reasoning and NLP tasks. |
| mistralai/Magistral-Small-2506 | Mistral AI | Lightweight model optimized for chat, reasoning, and tool use in smaller deployment environments. |
| mistralai/Devstral-Small-2505 | Mistral AI | Developer-focused compact model designed for fast and accurate code completions and debugging tasks. |
| meta-llama/Llama-3.3-70B-Instruct | Meta AI | Large-scale transformer model fine-tuned for instruction-following, aligning responses with human preferences. |
| mistralai/Mistral-Large-Instruct-2411 | Mistral AI | Large instruction-tuned model offering strong general-purpose reasoning, summarization, and assistant-style responses. |
| Qwen/Qwen2.5-VL-32B-Instruct | Qwen | Powerful vision-language model trained to follow multimodal instructions, suitable for image understanding, captioning, and reasoning. |
| meta-llama/Llama-3.2-90B-Vision-Instruct | Meta AI | Vision-language model with instruction tuning, capable of image analysis, visual Q&A, and multimodal dialogue generation. |
| BAAI/bge-multilingual-gemma2 | BAAI | Multilingual embedding model optimized for semantic search and retrieval tasks across diverse languages. |
| zai-org/GLM-4.6 | Z.AI | Advanced large language model that expands context capacity to 200K tokens and significantly enhances coding, reasoning, and agentic capabilities. It excels in real-world coding tools, delivering more natural, human-aligned outputs. |
| moonshotai/Kimi-K2-Instruct-0905 | Moonshot AI | A state-of-the-art mixture-of-experts (MoE) language model, featuring 32 billion activated parameters and a total of 1 trillion parameters. It delivers exceptional reasoning, coding, and content-generation performance. |
| moonshotai/Kimi-K2-Thinking | Moonshot AI | A high-performance open-source thinking model built for step-by-step reasoning and dynamic tool use. It achieves state-of-the-art results on benchmarks such as Humanity’s Last Exam (HLE) and BrowseComp by dramatically scaling multi-step reasoning depth while maintaining stable tool use across 200–300 sequential calls. |
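
A minimal sketch of calling the models listed above, assuming they are served behind an OpenAI-compatible endpoint (the base URL, environment variable names, and prompts below are placeholders, not part of this listing):

```python
# Minimal sketch: invoking listed models via an assumed OpenAI-compatible endpoint.
import os

from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("INFERENCE_BASE_URL", "https://example.com/v1"),  # placeholder endpoint (assumption)
    api_key=os.environ.get("INFERENCE_API_KEY", ""),
)

# Text generation with an instruction-tuned model from the table.
chat = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Summarize the difference between MoE and dense models."}],
    max_tokens=256,
)
print(chat.choices[0].message.content)

# Semantic embeddings with the multilingual embedding model from the table.
emb = client.embeddings.create(
    model="BAAI/bge-multilingual-gemma2",
    input=["multilingual semantic search example"],
)
print(len(emb.data[0].embedding))
```

Swap the `model` string for any other ID in the table; vision-language models (e.g. Qwen/Qwen2.5-VL-32B-Instruct) additionally accept image content parts in the `messages` payload.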