Google DeepMind’s Gemini models and the Rise of Long-Context LLMs
If you have a question about this talk, please contact Pietro Lio.
Gemini 1.5 has shown that large-scale foundation models can process information across millions of tokens of context. This talk will explore the importance and applications of these long-context AI models, examining the current state of their capabilities and highlighting open research problems in the field of long-context understanding.
This talk is part of the Foundation AI series.