Tencent T1 Reasoning Model Released

Chinese technology giant Tencent has officially unveiled its latest T1 Reasoning model, designed to deliver superior performance in processing text documents and executing reasoning tasks. In recent months, the company has made several strategic AI investments, and the introduction of the T1 model further strengthens its position in the increasingly competitive Chinese AI market.

The model’s outstanding performance is evident in its key features—faster response times and improved capabilities—which enable it to handle longer text documents more efficiently. According to Tencent’s official announcement, the model places a strong emphasis on maintaining clear and well-organized content logic while keeping the “hallucination” rate extremely low. These advantages allow the T1 to outperform competitor DeepSeek’s R1 model on several knowledge and inference benchmarks, although in some measures, OpenAI’s models achieve better results.

Built upon Tencent’s Turbo S base language model, the T1 has been further optimized through a hybrid architecture that combines the Transformer architecture (pioneered at Google) with Mamba state-space technology. This combination reduces both training and operational costs while ensuring faster query processing, enabling the model to handle growing demand effectively.
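Tencent has not published T1's exact layer layout, but hybrid designs of this kind typically interleave many linear-time Mamba blocks with occasional quadratic-cost attention blocks. The sketch below is purely illustrative; the layer count and interleaving interval are assumptions, not T1's real configuration.

```python
# Hypothetical hybrid layer schedule: mostly linear-time Mamba blocks,
# with a full-attention (Transformer) block every few layers.
def hybrid_layer_schedule(n_layers: int, attention_every: int = 6) -> list[str]:
    """Return an illustrative block type for each layer index."""
    return [
        "attention" if (i + 1) % attention_every == 0 else "mamba"
        for i in range(n_layers)
    ]

schedule = hybrid_layer_schedule(24)
print(schedule.count("mamba"), schedule.count("attention"))  # 20 4
```

Because Mamba blocks scale linearly with sequence length while attention scales quadratically, keeping attention sparse in the stack is what drives the cost savings on long documents.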

Currently, the T1 model is available via Tencent’s Yuanbao platform, and API access will soon be provided for developers and business partners. The pricing policy is competitive: users pay 1 yuan ($0.14) per 1 million input tokens, while output tokens are charged at 4 yuan per million. Although no specific free trial period has been designated for the T1 model, a one-week free trial is offered for the Hunyuan Turbo S model, giving interested parties an opportunity to test the technology.
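The quoted rates make cost estimation straightforward. A minimal sketch, using only the two per-million prices stated above (the token counts in the example are made up):

```python
# Estimate T1 API cost from the quoted rates:
# 1 CNY per 1M input tokens, 4 CNY per 1M output tokens.
INPUT_CNY_PER_M = 1.0
OUTPUT_CNY_PER_M = 4.0

def t1_cost_yuan(input_tokens: int, output_tokens: int) -> float:
    """Return the total cost in yuan for a given token usage."""
    return (input_tokens / 1_000_000 * INPUT_CNY_PER_M
            + output_tokens / 1_000_000 * OUTPUT_CNY_PER_M)

# Example: 2M input tokens + 500k output tokens
print(t1_cost_yuan(2_000_000, 500_000))  # 4.0
```

Note the 4:1 output-to-input price ratio: workloads that generate long answers from short prompts cost noticeably more than summarization-style workloads.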

Overall, the launch of the T1 Reasoning model is set to further boost AI development in China. With this innovation, Tencent is not only expanding its technology portfolio but also contributing to the global advancement of AI, demonstrating that both efficiency and cost reduction are key factors in future solutions. 

Google Introduces the Agent2Agent (A2A) Open Source Protocol
In a recent speech, NVIDIA CEO Jensen Huang divided the evolution of artificial intelligence into several phases and called the current phase the era of agentic AI. Although he focused mainly on the next phase, physical AI, it is worth remembering that the agentic AI era began only this year, so its fully developed form has yet to be seen. Google's recent announcement of the open-source Agent2Agent protocol hints at what that more advanced form might look like. The protocol is designed to bridge the gap between AI agents created on different platforms and frameworks and by different vendors, enabling smooth communication and collaboration.
Apple in Trouble with Artificial Intelligence Developments?
Apple appears to be facing mounting problems. Beyond Trump's tariffs, which have hit Apple's shares hard, there are internal conflicts, especially in the division responsible for AI integration. Tripp Mickle, a journalist for The New York Times, reports that Apple has not produced any notable innovations lately. Although this may not be entirely true, since after much internal debate the company finally managed to launch Apple Intelligence, there is no doubt that it is lagging behind its competitors in the field of artificial intelligence.
New Collaboration Between Netflix and OpenAI
Netflix recently began testing a new artificial intelligence-based search feature that uses OpenAI’s technology to improve content search. This feature is a significant departure from traditional search methods because it allows users to find movies and TV shows using specific terms, such as their mood or preferences, rather than only using titles, genres, or actor names.
Strong Turbulence Around Meta Llama Models
Less than a week after its market debut, Llama 4 has already drawn harsh criticism from users. As mentioned before, one of Llama 4's new features is its mixture-of-experts architecture, built from separate expert modules. This design gives the model a much larger total parameter count than the subset it actually activates at run time, so in theory it should perform much better. However, several independent user tests show that it falls short of expectations, especially on mathematical tasks and coding. Some users claim that Meta heavily manipulated benchmarks to achieve better scores, while others believe an internal version of the model was benchmarked while a more modest version was released to the public.
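The gap between a model's total and active parameters described above can be shown with a toy calculation. This is a sketch of the general mixture-of-experts idea, not Llama 4's actual configuration; the expert counts and sizes below are invented for illustration.

```python
# Toy mixture-of-experts arithmetic: total parameters span all experts,
# but each token only routes through a few, so run-time compute is
# much smaller than the headline parameter count.
N_EXPERTS = 16          # hypothetical number of expert modules
PARAMS_PER_EXPERT = 1_000_000
TOP_K = 2               # experts activated per token

total_params = N_EXPERTS * PARAMS_PER_EXPERT
active_params = TOP_K * PARAMS_PER_EXPERT

print(total_params, active_params)  # 16000000 2000000
```

This is why MoE models can advertise very large parameter counts while keeping inference cost modest, and also why "effective size" claims invite scrutiny when benchmark results disappoint.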