Discover the range of state-of-the-art large language models provided by NeosantaraAI.
Model Name | NeosantaraAI API ID | Capabilities
---|---|---
Nusantara-Base | nusantara-base | Text, Function Calling, JSON Mode |
Archipelago-7B | archipelago-7b | Text, JSON Mode |
Bahasa-LLM | bahasa-llm | Text, Function Calling, JSON Mode |
LuminAI | luminai | Text |
Vision-Emas-2045 | vision-emas-2045 | Vision, JSON Mode, Text |
Neosantara-Gen-2045 | neosantara-gen-2045 | Image Generation |
Replicate-Stable-Diffusion-XL | replicate-stable-diffusion-xl | Image Generation |
Gemini-1.5-Flash | gemini-1.5-flash | Text, Function Calling, JSON Mode |
Gemini-2.0-Flash | gemini-2.0-flash | Text, Function Calling, JSON Mode |
Gemini-1.5-Pro | gemini-1.5-pro | Text, Function Calling, JSON Mode |
GPT-3.5-Turbo | gpt-3.5-turbo | Text, Function Calling, JSON Mode |
GPT-4o-Mini | gpt-4o-mini | Text, Function Calling, JSON Mode |
Command-R | command-r | Text, JSON Mode |
Command-R-Plus | command-r-plus | Text, Web Search, JSON Mode |
Firefunction-v1 | firefunction-v1 | Text, Function Calling, JSON Mode |
Replicate-Prediction | replicate-prediction | Text, JSON Mode |
Gemma2-9B-IT | gemma2-9b-it | Text, JSON Mode |
Nusa-Embedding-0001 | nusa-embedding-0001 | Embedding |
Together-Embed-v2-L | together-embed-v2-L | Embedding |
Llama-3.3-70B-Instruct | llama-3.3-70b-instruct | Text, Function Calling |
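The API ID in the second column is the value you pass as the `model` parameter in a request. As a minimal sketch (the base URL, endpoint path, payload shape, and environment variable name below are assumptions for illustration, not confirmed by this page; see the API reference for the exact request format), a text completion call might look like:

```python
import os
import requests

# Assumed base URL and OpenAI-style path; the real endpoint may differ.
BASE_URL = "https://api.neosantara.ai/v1"

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['NEOSANTARA_API_KEY']}"},
    json={
        "model": "nusantara-base",  # any text-capable API ID from the table above
        "messages": [
            {"role": "user", "content": "Ringkas teks berikut dalam satu kalimat: ..."}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

Embedding models such as `nusa-embedding-0001`, and image generation models such as `neosantara-gen-2045`, are typically called through their own dedicated endpoints rather than the chat endpoint; check the API reference for the corresponding routes.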
Feature | Nusantara-Base | Neosantara-Gen-2045 | Bahasa-LLM |
---|---|---|---
Description | General-purpose model with balanced performance and speed. | Our main image generation model. | Fast and efficient model for Indonesian language. |
Strengths | Versatile, good for most common tasks. | High-quality image generation, creative content. | Optimized for Indonesian language understanding. |
Multilingual | Yes | Yes (for prompts) | Yes |
Vision | Yes (via integration with underlying models such as Gemini-2.0-Flash) | No (generates images; does not analyze them) | No
Function Calling | Yes | No (not applicable to text/chat function calling) | Yes
JSON Mode | Yes | No (not applicable to structured text output) | Yes
Context Window | 64,000 tokens | N/A (not applicable to image generation) | 8,192 tokens
Max Output | 2,048 tokens | N/A (output is an image, not text tokens) | 2,048 tokens
Training Data Cut-off | Varies by underlying provider (latest available for Gemini-2.0-Flash) | Varies by underlying provider (e.g., Together AI’s FLUX.1) | Varies by underlying provider (latest available for Gemma2-9B-IT) |
Comparative Latency | Fast | Fast | Fastest |
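Because Nusantara-Base and Bahasa-LLM both support JSON Mode, you can request machine-readable output directly. The snippet below is a sketch under the same assumptions as the previous example; in particular, the OpenAI-style `response_format` flag is an assumption, and the actual parameter name may differ in the API reference.

```python
import os
import requests

BASE_URL = "https://api.neosantara.ai/v1"  # assumed base URL, as above

payload = {
    "model": "bahasa-llm",  # JSON Mode is listed for this model above
    "messages": [
        {
            "role": "user",
            "content": "Kembalikan objek JSON dengan kunci 'sentimen' untuk: 'Pelayanannya cepat sekali!'",
        }
    ],
    # Assumed OpenAI-style JSON Mode flag; verify the exact field name.
    "response_format": {"type": "json_object"},
    "max_tokens": 256,  # stays well under the 2,048-token output limit above
}

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['NEOSANTARA_API_KEY']}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```

Keep the context window and max output figures from the comparison table in mind when sizing prompts and `max_tokens` for each model.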