LLM Reference

About

LaMDA, short for Language Model for Dialogue Applications, is Google's family of conversational large language models. Built on the Transformer architecture introduced by Google researchers in 2017, LaMDA is fine-tuned specifically for dialogue, which makes it well suited to open-ended conversation. It was pre-trained on roughly 1.56 trillion words of public dialogue and web text, and the largest LaMDA model has 137 billion parameters. To improve safety and factual accuracy, Google fine-tuned the model on annotated data and enabled it to consult external knowledge sources. LaMDA powered the initial release of Google's Bard chatbot and has sparked public discussion about AI sentience.
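The 137-billion-parameter figure can be sanity-checked with a back-of-the-envelope count. The sketch below assumes the hyperparameters reported in the LaMDA paper (64 layers, d_model = 8192, d_ff = 65536, a gated feed-forward block); those values are assumptions as far as this page goes, and the estimate deliberately ignores embeddings, biases, and layer norms, so it lands near, not exactly at, the reported total.

```python
# Rough parameter count for a decoder-only Transformer at LaMDA's
# reported scale. Hyperparameters are assumptions taken from the
# LaMDA paper; embeddings and normalization layers are ignored.

def transformer_params(n_layers: int, d_model: int, d_ff: int) -> int:
    attention = 4 * d_model * d_model    # Q, K, V, and output projections
    feed_forward = 3 * d_model * d_ff    # gated (GEGLU-style) FFN: 2 in-projections + 1 out
    return n_layers * (attention + feed_forward)

estimate = transformer_params(n_layers=64, d_model=8192, d_ff=65536)
print(f"~{estimate / 1e9:.0f}B parameters")  # same order of magnitude as the reported 137B
```

The gap between this estimate and 137B comes from the terms the sketch omits (embedding tables, relative-attention parameters, and normalization), but it confirms the headline number is the right order of magnitude for a Transformer of that shape.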

Models (3)

Details

Researcher: Google DeepMind
Models: 3