Alibaba’s answer to DeepSeek is Qwen 2.5-Max, the company’s latest large-scale Mixture-of-Experts (MoE) model.
Qwen 2.5-Max has been pretrained on more than 20 trillion tokens and further refined through state-of-the-art techniques such as supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF).
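For readers less familiar with the term, the toy sketch below illustrates the general Mixture-of-Experts idea: a router sends each token to only a few “expert” sub-networks, so a model can grow its total parameter count without every token paying the compute cost of every parameter. This is a generic, hypothetical illustration and does not describe Qwen 2.5-Max’s actual architecture, expert count, or routing scheme.

```python
# Minimal, generic sketch of top-k mixture-of-experts (MoE) routing.
# Illustrative only -- not Alibaba's Qwen 2.5-Max implementation.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

# Each "expert" is a simple linear map here; real experts are feed-forward blocks.
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(token: np.ndarray) -> np.ndarray:
    logits = token @ router                        # router score for each expert
    chosen = np.argsort(logits)[-top_k:]           # pick the top-k experts for this token
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                       # softmax over the chosen experts only
    # Weighted sum of the selected experts' outputs; unselected experts do no work.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, chosen))

print(moe_layer(rng.standard_normal(d_model)).shape)  # (16,)
```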
The model is available via an API on Alibaba Cloud and can be explored through Qwen Chat, as the Chinese tech giant invites developers and researchers to see its breakthroughs first-hand.
Outperforming its peers
Qwen 2.5-Max’s results are promising when measured against some of the most prominent AI models across a range of benchmarks.
The evaluations included metrics such as MMLU-Pro for college-level problem-solving, LiveCodeBench for coding expertise, LiveBench for overall capability, and Arena-Hard for assessing models against human preferences.
According to Alibaba: “Qwen 2.5-Max performs strongly in benchmarks such as Arena-Hard, LiveCodeBench, and GPQA-Diamond, while also delivering competitive results on MMLU-Pro.”
The instruct models, which are designed for downstream tasks such as chat and coding, were compared directly against leading models including GPT-4o, Claude 3.5 Sonnet, and DeepSeek V3. Among these, Qwen 2.5-Max was able to surpass its rivals in several key areas.
Comparisons of base models also yielded promising results. While proprietary models such as GPT-4o and Claude 3.5 Sonnet remained out of reach due to access restrictions, Qwen 2.5-Max was evaluated against the leading public options, including DeepSeek V3 and Llama-3.1-405B, the largest open-weight dense model. Once again, Alibaba’s newcomer delivered exceptional performance.
“Our base models have demonstrated significant advantages across most benchmarks,” Alibaba says.
Making Qwen 2.5-Max accessible
To make the model easier for the global community to access, Alibaba has integrated Qwen 2.5-Max into its Qwen Chat platform, where users can interact with the model directly, explore its search capabilities, and test it with complex queries.
For developers, the Qwen 2.5-Max API is now available through Alibaba Cloud under the model name “qwen-max-2025-01-25”. Interested users can get started by registering an Alibaba Cloud account, activating the Model Studio service, and generating an API key.
The API is compatible with OpenAI’s ecosystem, making it straightforward to integrate into existing projects and workflows. This compatibility lowers the barrier for those eager to test their applications against the model’s capabilities, as the sketch below illustrates.
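As a rough illustration of what that compatibility looks like in practice, the following sketch calls the model through the OpenAI Python SDK by pointing it at an Alibaba Cloud endpoint. Only the model name “qwen-max-2025-01-25” comes from the article; the base URL and the DASHSCOPE_API_KEY environment variable are assumptions that should be checked against the current Model Studio documentation.

```python
# Sketch: calling Qwen 2.5-Max via the OpenAI-compatible API on Alibaba Cloud.
# The base_url and environment-variable name below are assumptions; verify them
# in Alibaba Cloud Model Studio before use.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # key generated in Model Studio (assumed variable name)
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen-max-2025-01-25",  # model name referenced in the article
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give a one-sentence summary of mixture-of-experts models."},
    ],
)

print(response.choices[0].message.content)
```

Because the request and response shapes follow OpenAI’s Chat Completions format, switching an existing project over is largely a matter of changing the base URL, API key, and model name.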
With Qwen 2.5-Max, Alibaba is making a strong statement. The company’s continued commitment to scaling AI models is about more than improving performance benchmarks; it is also about improving the fundamental thinking and reasoning abilities of these systems.
“The scaling of data and model size not only showcases advancements in model intelligence but also reflects our unwavering commitment to pioneering research,” said Alibaba.
Looking ahead, the team aims to push the boundaries of reinforcement learning to foster more advanced reasoning skills. This, they say, could enable their models not only to match but to surpass human intelligence in solving complex problems.
The implications for the industry could be profound. As scaling methods improve and Qwen models break new ground, we are likely to see further ripples across the AI-driven fields that have flourished in recent weeks.
(Photo by Maico Amorim)
See also: ChatGPT Gov aims to modernise US government agencies
