Our Groundbreaking Multimodal Model, Aya Vision, is Here!
Today, Cohere For AI, Cohere's research arm, is proud to announce Aya Vision, a state-of-the-art multimodal large language model that excels across multiple languages and modalities. Aya Vision outperforms leading open-weight models on critical benchmarks for multilingual text and image capabilities.
Built as a foundation for multilingual and multimodal communication, this groundbreaking AI model supports tasks such as image captioning, visual question answering, text generation, and translation of both text and images into coherent text output.
Find more information about Aya Vision here.