Aquila 2 7B
About
Aquila 2 7B is a bilingual (Chinese–English) large language model developed by the Beijing Academy of Artificial Intelligence (BAAI). It uses a transformer architecture with design influences from GPT-3 and LLaMA. Training employed BAAI's HeuriMentor framework, which supports adaptive learning and efficient data management. With a 100,000-token vocabulary and support for sequences up to 8,192 tokens, it handles a range of NLP tasks such as text generation, and it is reported to be more efficient than its predecessors. The model weights and code are open source, and commercial use is permitted under a specific license agreement.
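One practical consequence of the 8,192-token sequence limit is that longer inputs must be split into windows before they can be fed to the model. A minimal sketch of such chunking is below; it operates on already-tokenized IDs (the integer IDs here are placeholders, not output of the model's actual BPE tokenizer):

```python
def chunk_tokens(token_ids, max_len=8192):
    """Split a token-id sequence into consecutive windows of at most max_len tokens."""
    return [token_ids[i:i + max_len] for i in range(0, len(token_ids), max_len)]

# Example: a 20,000-token input yields windows of 8192, 8192, and 3616 tokens.
chunks = chunk_tokens(list(range(20000)))
print([len(c) for c in chunks])  # [8192, 8192, 3616]
```

A real pipeline would typically overlap adjacent windows to preserve context across boundaries; this sketch shows only the hard length constraint.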
Capabilities
Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution