Together AI Frequently Asked Questions

FAQ from Together AI


What is Together AI?

Together AI is an AI Acceleration Cloud: an end-to-end platform for the full generative AI lifecycle. It provides fast inference, fine-tuning, and training for generative AI models through easy-to-use APIs and highly scalable infrastructure. Users can run and fine-tune open-source models, train and deploy models at scale on GPU clusters, and optimize for both performance and cost. The platform supports over 200 generative AI models across modalities such as chat, images, code, and embeddings, exposed through OpenAI-compatible APIs.

How do I use Together AI?

Users interact with Together AI through easy-to-use APIs for serverless inference, or by deploying models on dedicated endpoints running on custom hardware. Fine-tuning can be launched with simple commands or controlled at the hyperparameter level via the API, and GPU clusters can be requested for large-scale training. Endpoints and services are managed through a web UI, API, or CLI, and code execution environments are available for building and running AI development tasks.
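As a concrete sketch of the serverless inference API, the snippet below assembles a request to the OpenAI-compatible chat completions route. The endpoint URL follows Together's documented OpenAI-compatible base, but the model id and prompt are illustrative, and actually sending the request requires a real API key.

```python
import json
import os
import urllib.request

API_URL = "https://api.together.xyz/v1/chat/completions"  # OpenAI-compatible route

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble (but do not send) a chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "meta-llama/Llama-3-8b-chat-hf",  # illustrative model id
    "Say hello in one sentence.",
    os.environ.get("TOGETHER_API_KEY", "sk-placeholder"),  # placeholder without a real key
)
# urllib.request.urlopen(req) would send it; skipped here without a valid key.
```

Because the request body follows the OpenAI chat format, existing OpenAI client libraries can usually be pointed at the same base URL instead of hand-building requests like this.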

What types of AI models does Together AI support?

Together AI supports over 200 generative AI models, including Chat, Multimodal, Language, Image, Code, and Embedding models, with a strong focus on open-source options.

What GPU hardware is available on Together AI?

Together AI offers state-of-the-art NVIDIA GPUs, including GB200, B200, H200, H100, A100, L40, and L40S, for both inference and training workloads.

How does Together AI optimize performance and cost?

Together AI optimizes performance and cost through custom transformer-optimized kernels (e.g., FP8 inference kernels, FlashAttention-3), quality-preserving quantization (QTIP), speculative decoding, and competitive pricing models.
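Of the techniques listed, speculative decoding is the easiest to illustrate in miniature: a cheap draft model proposes several tokens ahead, and the larger target model verifies them in a single pass, keeping the longest agreeing prefix plus one corrected token. The toy below uses stand-in functions and a fixed "ground truth" sequence in place of real models; it sketches the control flow only, not Together's actual implementation.

```python
TRUTH = list("abcde")  # tokens the (toy) target model would emit on its own

def draft_propose(prefix, k=4):
    """Cheap draft model: guesses the next k tokens, with one wrong guess injected."""
    guess = TRUTH[len(prefix):len(prefix) + k]
    if len(guess) == k:
        guess[-1] = "?"  # simulate the draft model eventually going off-script
    return guess

def target_verify(prefix, proposed):
    """Target model: accept the longest matching prefix of the proposals,
    then supply one corrected token from its own (single) forward pass."""
    accepted = []
    pos = len(prefix)
    for tok in proposed:
        if pos < len(TRUTH) and tok == TRUTH[pos]:
            accepted.append(tok)
            pos += 1
        else:
            break
    if pos < len(TRUTH):
        accepted.append(TRUTH[pos])  # the "free" token from the verification pass
    return accepted

def speculative_decode():
    out = []
    while len(out) < len(TRUTH):
        out.extend(target_verify(out, draft_propose(out)))
    return out
```

Here the full five-token output is produced with only two target-model passes instead of five, which is the source of the speedup when draft guesses are usually right.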

Can I fine-tune my own models on Together AI?

Yes, Together AI provides comprehensive fine-tuning capabilities, including LoRA and full fine-tuning, allowing users to train and improve high-quality models with complete model ownership and no vendor lock-in.
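LoRA, one of the fine-tuning methods named above, freezes the base weight matrix W and trains only a low-rank pair (A, B), applying W_eff = W + (alpha / r) * (B @ A). The tiny pure-Python sketch below uses hand-written matrices purely to show the arithmetic and the parameter savings; the dimensions and values are illustrative, not tied to any real model.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

d, r, alpha = 3, 1, 2.0    # model dim, LoRA rank, scaling factor

W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]       # frozen base weights (identity, for clarity)
B = [[1.0], [0.0], [0.0]]   # d x r adapter, trained
A = [[0.0, 1.0, 0.0]]       # r x d adapter, trained

delta = matmul(B, A)        # rank-1 update, d x d
W_eff = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(d)] for i in range(d)]

# Only the adapters are trained: 2 * d * r = 6 parameters instead of d * d = 9,
# and the gap widens dramatically at transformer scale.
```

Full fine-tuning, by contrast, updates every entry of W directly; Together AI exposes both options, with the resulting weights owned by the user.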

Is Together AI suitable for enterprise use?

Yes, Together AI offers secure, reliable AI infrastructure, SOC 2 and HIPAA compliance, dedicated endpoints, and expert AI advisory services, making it suitable for enterprise-scale deployments.
