Falcon H1R 7B Release by TII Demonstrates Compact Open-Source AI Leadership
The Technology Innovation Institute has launched Falcon H1R 7B, a compact reasoning model aimed at widening access to advanced AI worldwide. With 7 billion parameters, the system delivers high-level logic performance while staying efficient and openly available, supporting Abu Dhabi’s strategy to build strong in-country AI capability for research, government and industry.
Developed under the Advanced Technology Research Council, Falcon H1R 7B extends the Falcon H1-7B foundation model using a specialised training pipeline and a hybrid Transformer–Mamba architecture. This combination raises accuracy and speed together, allowing complex reasoning while keeping memory and energy demands low enough for practical deployment across data centres and constrained hardware.
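TII's announcement does not spell out the exact layer layout, but the general idea of a hybrid decoder can be illustrated. The snippet below is a minimal, hypothetical sketch of interleaving attention blocks with Mamba-style state-space blocks; the class names, the gated causal convolution standing in for a selective-scan layer, and all dimensions are assumptions for illustration, not the published Falcon H1R 7B design.

```python
# Illustrative sketch only: one way a hybrid decoder can interleave attention
# and Mamba-style state-space blocks. Class names, the gated causal convolution
# used as a stand-in for a selective-scan layer, and all dimensions are
# assumptions, not the published Falcon H1R 7B architecture.
import torch
import torch.nn as nn


class AttentionBlock(nn.Module):
    """Self-attention sub-layer with a residual connection (causal mask omitted for brevity)."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return x + out


class SSMBlock(nn.Module):
    """Placeholder for a Mamba-style block: a gated depthwise causal convolution."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=2, groups=dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        # Depthwise convolution over the sequence axis, trimmed to stay causal.
        h_conv = self.conv(h.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return x + h_conv * torch.sigmoid(self.gate(h))


class HybridDecoder(nn.Module):
    """Alternates attention and state-space blocks layer by layer."""

    def __init__(self, dim: int = 512, layers: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(
            AttentionBlock(dim) if i % 2 == 0 else SSMBlock(dim)
            for i in range(layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)
        return x


if __name__ == "__main__":
    model = HybridDecoder()
    hidden = torch.randn(2, 16, 512)   # (batch, sequence length, hidden size)
    print(model(hidden).shape)         # torch.Size([2, 16, 512])
```

The general appeal of such hybrids is that state-space layers scale roughly linearly with sequence length while attention layers preserve precise token-to-token interactions, which is one plausible reason a design of this kind can keep memory and throughput costs down.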

On public benchmarks, Falcon H1R 7B competes with and often exceeds much larger open-source systems from Microsoft, Alibaba, NVIDIA and ServiceNow AI. The model achieved 88.1 percent on the AIME-24 maths test, surpassing ServiceNow AI’s Apriel 1.5 (15B) score of 86.2 percent, showing that a 7B model can match heavyweight alternatives.
| Benchmark / Metric | Falcon H1R 7B | Comparison Model | Comparison Score / Note |
|---|---|---|---|
| AIME-24 (Math) | 88.1% | Apriel 1.5 (15B) | 86.2% |
| Code & agentic tasks (overall accuracy) | 68.6% | Models under 8B parameters | Best in the sub-8B class |
| LCB v6 / SciCode Sub / TB Hard | 34% | DeepSeek R1-0528 Qwen 3 8B | 26.9% |
| LCB v6 / SciCode Sub / TB Hard | 34% | Qwen3-32B | 33.4% |
| Throughput (batch size 64) | Up to 1,500 tokens/sec per GPU | Qwen3-8B | Roughly half the throughput of Falcon H1R 7B |
In coding and agent-style workloads, Falcon H1R 7B reached 68.6 percent accuracy, ranking among the strongest models under 8B parameters. On the LCB v6, SciCode Sub and TB Hard benchmarks it scored 34 percent, ahead of both DeepSeek R1-0528 Qwen 3 8B at 26.9 percent and Qwen3-32B at 33.4 percent.
For broader reasoning tasks, Falcon H1R 7B shows strong logical thinking and reliable instruction following. Its performance matches or comes close to larger models such as Microsoft’s Phi 4 Reasoning Plus 14B, while using about half the parameters, which supports lower computational costs, faster inference and easier scaling across regional and sector-specific deployments.
Falcon H1R 7B is designed to sit on a favourable Pareto frontier, where higher speed does not come at the cost of quality. The hybrid Transformer–Mamba design delivers throughput of up to 1,500 tokens per second per GPU at batch size 64, almost twice that of Qwen3-8B, while keeping accuracy at elite benchmark levels.
"Falcon H1R 7B marks a leap forward in the reasoning capabilities of compact AI systems," said Dr. Najwa Aaraj, CEO of TII. "It achieves near-perfect scores on elite benchmarks while keeping memory and energy use exceptionally low, critical criteria for real-world deployment and sustainability." Researchers describe this as unlocking "latent intelligence" within a relatively small parameter budget.
Falcon H1R 7B and UAE AI leadership strategy
The Falcon programme has become central to the UAE’s AI strategy, with successive Falcon generations earning top global rankings in their classes. These models show that compact, sovereign architectures can compete with, and sometimes surpass, much larger AI systems on performance, efficiency and real-world deployability, supporting national resilience and independent capability.
Faisal Al Bannai, Adviser to the UAE President and Secretary-General of the Advanced Technology Research Council, said, "Falcon H1R reflects the UAE’s commitment to building open and responsible AI that delivers real national and global value. By bringing world-class reasoning into a compact, efficient model, we are expanding access to advanced AI in a way that supports economic growth, research leadership, and long-term technological resilience."
The model is being released as open source under the Falcon TII License, in line with TII’s focus on transparency and collaboration. Developers, researchers and institutions worldwide can access Falcon H1R 7B via Hugging Face, along with a technical report detailing training strategies, architectural choices and measured performance on key reasoning benchmarks.
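For readers who want to try the model, the usual transformers workflow should apply to an open-weights release of this kind. The snippet below is a minimal sketch; the repository id shown is a placeholder assumption, so check TII's Hugging Face organisation page for the exact name.

```python
# Minimal sketch of loading an open-weights model from Hugging Face with the
# transformers library. The repo id below is a placeholder; use the id listed
# on TII's Hugging Face page for Falcon H1R 7B.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1R-7B"  # placeholder id, not confirmed by the release

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "A train covers 60 km in 45 minutes. What is its average speed in km/h?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```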
"This model is the result of world-class research and engineering. It shows how scientific precision and scalable design can go hand in hand," said Dr. Hakim Hacid, Chief Researcher at TII’s Artificial Intelligence and Digital Research Centre. "We are proud to deliver a model that enables the community to build smarter, faster, and more accessible AI systems." Together, these developments strengthen Abu Dhabi and the wider UAE’s position in frontier AI research.
With inputs from WAM