“Gemini 1.5 Pro achieves near-perfect recall on long-context retrieval tasks across modalities, improves the state-of-the-art in long-document QA, long-video QA and long-context ASR, and matches or surpasses Gemini 1.0 Ultra’s state-of-the-art performance across a broad set of benchmarks,” Google researchers wrote in a technical report (PDF).
The efficiency of Google’s latest model is attributed to its innovative Mixture-of-Experts (MoE) architecture.
“While a traditional Transformer functions as one large neural network, MoE models are divided into smaller ‘expert’ neural networks,” explained Demis Hassabis, CEO of Google DeepMind.
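In an MoE layer, a learned router sends each token to only a small subset of the experts, so most of the model’s parameters stay idle on any given input; that sparsity is where the efficiency gain comes from. Google has not disclosed the routing details of Gemini 1.5 Pro, but the idea can be illustrated with a minimal sketch of a generic top-k gated MoE feed-forward layer in PyTorch (the class name `MoELayer` and parameters such as `num_experts` and `top_k` are hypothetical, not taken from Google’s paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Illustrative top-k gated Mixture-of-Experts feed-forward layer.

    A sketch of the general MoE technique, not Gemini's implementation.
    """

    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Each "expert" is an independent feed-forward sub-network.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.ReLU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )
        # The router scores every expert for each token.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        scores = self.router(x)                           # (num_tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)              # normalize over the chosen experts
        out = torch.zeros_like(x)
        # Only the top-k selected experts run for each token; the rest of the
        # network is skipped entirely, which is the source of MoE's efficiency.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out
```

Under this scheme, a model can grow its total parameter count by adding experts while keeping the per-token compute roughly constant, since each token activates only `top_k` of them.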