LLM Reference
Tsinghua Knowledge Engineering Group (THUDM)

Leading China's LLM innovation surge

China
Academic

About

The Tsinghua Knowledge Engineering Group (THUDM) in Beijing, China, sits at the forefront of AI research, making substantial strides in generative AI and Large Language Models (LLMs). Their best-known achievement is the ChatGLM family of LLMs, which includes GLM-4, CodeGeeX, CogVLM (VisualGLM), WebGLM, and the 130-billion-parameter GLM-130B, a lineup that shows their ability to build diverse models tailored for different purposes. Beyond model creation, they also develop training and inference techniques, a more holistic approach that underscores their standing as innovators in the field.

THUDM's contributions extend to the evaluation of LLMs, an area critical for understanding and improving AI performance. They have created several benchmarks, including AgentBench, AlignBench, LongBench, and NaturalCodeBench, giving the AI community essential tools for assessing model capabilities and limitations. Their research also covers pre-training for graph neural networks, with projects such as GraphMAE, GPT-GNN, GCC, SelfKG, and CogDL, evidence of a multi-disciplinary, comprehensive research program.

In addition to foundational research, THUDM focuses on translating their work into practical applications. Models like CogVideo and CogVideoX, which generate video from text and images, mark their entry into cutting-edge generative media. These models are readily accessible through platforms like Hugging Face and ModelScope, reflecting their commitment to open source and the democratization of technology.
Their active participation in projects such as CogAgent and AutoWebGLM further demonstrates the practical impact of their work, aligning closely with the needs of the broader research and development communities. By embracing a collaborative, open-source-friendly mindset, THUDM not only pushes the envelope in AI research but also fosters an environment conducive to innovation and shared learning. Their extensive array of GitHub repositories and partnerships underscores a dedication to knowledge dissemination, reinforcing their role as a pivotal force driving progress in the evolution of generative AI and LLMs on the world stage.

Model Families

Information

Founded: 2018
Location: Beijing, China