Several Important New Features in Llama 4

Meta’s latest family of artificial intelligence models, Llama 4, brings significant innovations to multimodal model development. In addition to the two models currently available—Llama 4 Scout and Llama 4 Maverick—a very powerful model called Llama 4 Behemoth is in development. Behemoth is expected to play a significant role in STEM-related tasks (science, technology, engineering, and mathematics) in the future.

Recently, many multimodal models have been introduced. These models can process and integrate different types of data—such as text, images, sound, and video—at the same time. This allows them to understand questions in a richer context and solve much more complex problems than previous text-only models. However, this strength can also be a drawback, as such models usually need far more resources than traditional single-modal systems. Meta addresses this issue with the Mixture of Experts (MoE) architecture used in the Llama 4 family. The MoE approach activates only a subset of the model’s parameters for each input token, which improves efficiency and greatly reduces computational cost. This method is not unique to Llama 4; many large companies are following a similar trend. Nevertheless, Llama 4’s open-source strategy clearly sets it apart from its competitors.
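To make the routing idea concrete, the sketch below implements a generic top-k Mixture-of-Experts layer in PyTorch. It is not Meta’s implementation: the layer sizes, the number of experts, and the top_k value are illustrative assumptions, chosen only to show that each token is processed by a small handful of experts while the rest of the layer stays idle.

```python
# Minimal sketch of a Mixture-of-Experts layer with top-k routing.
# NOT Meta's implementation: sizes, top_k, and the gating scheme are
# illustrative assumptions to show why only a fraction of the
# parameters is active for any given token.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=16, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts)  # router ("gating network")
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )

    def forward(self, x):                        # x: (num_tokens, d_model)
        scores = self.gate(x)                    # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalize over the chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; all others stay idle.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    w = weights[mask, slot].unsqueeze(1)   # (n_selected, 1)
                    out[mask] += w * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = TopKMoE()
    tokens = torch.randn(8, 512)   # 8 example token embeddings
    print(layer(tokens).shape)     # torch.Size([8, 512])
```

Production MoE implementations batch tokens per expert and add load-balancing terms, but the core point is the same: the router decides which experts see which tokens, so most of the network’s weights are untouched on any single forward pass.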

As mentioned earlier, only the two smaller models in the family—Scout and Maverick—are currently available. Both have 17 billion active parameters, meaning that only 17 billion parameters are actually used to process each token. Each model, however, contains far more parameters in total: Scout has 109 billion, while Maverick has 400 billion. This is a consequence of the MoE architecture: the models activate specific submodules (called “experts” by Meta) when processing input. Scout uses 16 experts, while Maverick uses 128. Although Scout is the smaller of the two, it has the distinctive feature of a context window of up to 10 million tokens, making it well suited to analyzing long texts, documents, or large codebases. Maverick does not support such an extensive context window, but several benchmarks show that it outperforms competitors such as GPT-4o and Gemini 2.0 Flash on reasoning and coding tasks while using fewer than half as many active parameters as DeepSeek V3.
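The relationship between active and total parameters follows directly from this layout. The numbers in the short calculation below (the split between shared and per-expert parameters) are hypothetical, since Meta has not published the exact breakdown; they only illustrate how a model with 16 experts can report roughly 17 billion active but over 100 billion total parameters.

```python
# Back-of-envelope arithmetic for active vs. total parameters in an MoE model.
# The shared/per-expert split below is a hypothetical assumption for
# illustration only; Meta has not published the exact breakdown for
# Scout or Maverick.

def moe_param_counts(shared_b, expert_b, num_experts, experts_per_token):
    """Return (total, active) parameter counts, all values in billions."""
    total = shared_b + num_experts * expert_b            # every expert is stored
    active = shared_b + experts_per_token * expert_b     # only a few run per token
    return total, active

# Hypothetical Scout-like layout: 16 experts, one routed expert per token.
total, active = moe_param_counts(shared_b=11.0, expert_b=6.0,
                                 num_experts=16, experts_per_token=1)
print(f"total ~ {total:.0f}B, active ~ {active:.0f}B")   # total ~ 107B, active ~ 17B
```

The same formula accounts for Maverick as well: many more, but individually smaller, experts push the total toward 400 billion while the per-token (active) count stays at 17 billion.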

Although it is still in development, Meta claims that Behemoth will outperform GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on STEM-related tasks. Like its two smaller siblings, Behemoth is an MoE model, but on a far larger scale: it will have 288 billion active parameters and, with 16 experts, roughly two trillion parameters in total. Behemoth is also notable because Meta plans to use it to train the smaller models in the family, and it could eventually be integrated into Meta services such as Messenger, Instagram Direct, and WhatsApp.
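Using a very large model to train smaller ones typically means some form of knowledge distillation, in which the smaller “student” learns to match the larger “teacher’s” output distribution. The snippet below is the textbook soft-label variant only, not Meta’s actual training recipe; the temperature and loss weighting are arbitrary illustrative choices.

```python
# Generic knowledge-distillation loss: a smaller "student" model is trained
# to match the softened output distribution of a larger "teacher".
# Textbook sketch only, not Meta's recipe; temperature and alpha are
# arbitrary illustrative values.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: the student mimics the teacher's softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard


# Toy usage with random logits over a vocabulary of 32 tokens.
student_logits = torch.randn(4, 32, requires_grad=True)
teacher_logits = torch.randn(4, 32)
labels = torch.randint(0, 32, (4,))
print(distillation_loss(student_logits, teacher_logits, labels))
```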
