Magic
100M-token context window innovation
About
Magic, a trailblazing AI research company based in San Francisco, has rapidly established itself as a pivotal force in generative AI and Large Language Models (LLMs) since its founding in 2022. The company's mission is to develop safe and beneficial Artificial General Intelligence (AGI) capable of tackling significant global challenges. Rather than relying solely on human effort, Magic emphasizes automating AI research and code generation, an approach intended to address alignment issues and improve model reliability, setting it apart in a crowded AI landscape.

One of Magic's most notable contributions is its work on models that process ultra-long context windows. Its earlier LTM-1 model, built on the company's Long-Term Memory Network (LTM Net) architecture, supports a context window of five million tokens, a stark contrast to competitors like Google's Gemini 1.5, which offered one million tokens in limited scenarios. Longer context lets Magic's AI take in codebases in greater depth, leading to more refined and nuanced suggestions. The company followed this with the LTM-2-mini model, offering an unprecedented context window of 100 million tokens, equivalent to roughly 10 million lines of code or 750 novels and the largest context window in any commercial model to date. These capabilities have allowed Magic's models to autonomously execute complex tasks, such as developing a password strength meter for open-source software and creating a calculator with a bespoke UI framework.

Despite a modest team of approximately two dozen engineers and researchers, Magic has drawn substantial financial backing, securing $465 million across various funding rounds. Notable investments include a $320 million round led by former Google CEO Eric Schmidt, with participation from other eminent figures and organizations.
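The scale comparisons above are back-of-envelope figures. As a minimal sketch, assuming roughly 10 tokens per line of code and about 133,000 tokens per novel (ratios implied by the article's own numbers, not values published by Magic), the arithmetic works out as follows:

```python
# Back-of-envelope scale of a long context window.
# Assumed ratios, derived from the article's figures (not official):
TOKENS_PER_LINE_OF_CODE = 10   # ~10M lines of code in 100M tokens
TOKENS_PER_NOVEL = 133_000     # ~750 novels in 100M tokens

def context_equivalents(context_tokens: int) -> dict:
    """Translate a context window size into rough real-world units."""
    return {
        "lines_of_code": context_tokens // TOKENS_PER_LINE_OF_CODE,
        "novels": context_tokens // TOKENS_PER_NOVEL,
    }

# LTM-2-mini's 100M-token window vs. the earlier 5M-token window
print(context_equivalents(100_000_000))  # ~10M lines of code, ~750 novels
print(context_equivalents(5_000_000))    # ~500k lines of code, ~37 novels
```

Under these assumptions, even the earlier five-million-token window already covers half a million lines of code in a single pass.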
Earlier rounds saw contributions from industry heavyweights such as CapitalG, Elad Gil, and Sequoia. Magic's partnership with Google Cloud to build two supercomputers, Magic-G4 and Magic-G5, further underscores its commitment to pushing the boundaries of AGI research. Powered by NVIDIA's H100 and Blackwell GPUs respectively, the collaboration aims to deliver a staggering 160 exaflops of computational power, enabling Magic to scale its AI capabilities even further.

While Magic's products are not yet available on the market, its audacious research agenda, robust funding, and strategic collaborations make it a formidable player in generative AI. The company's long-term vision transcends mere code generation, aiming to develop AGI that can reliably solve complex problems and advance the field in ways previously unimagined.