Recent advances in artificial intelligence have led to significant progress in natural language processing, particularly in the area of large language models (LLMs). Tools like ChatGPT and Copilot are already widely used to assist with programming. These so-called autoregressive (AR) models generate code step by step, from left to right—much like a human would write it. However, a new approach is now fundamentally challenging this linear paradigm: diffusion language models (dLLMs), which generate content not sequentially but globally, through iterative refinement. But are they truly better suited to code generation than the well-established AR models? And what insights can we gain from DiffuCoder, the first major open-source experiment in this field?
Code Writing Isn't Always Linear
Programming rarely follows a strictly sequential thought process. Developers often revisit earlier sections of code, make modifications, and restructure elements, sometimes completely rethinking previously “finished” parts. Diffusion models operate in a way that mirrors this: they are trained to reverse a noising (masking) process applied to an entire code snippet, and at generation time they refine a fully masked draft over repeated passes rather than emitting tokens one by one. This naturally aligns more closely with a non-linear workflow, which is why growing attention is being paid to whether these models could offer a better alternative for code generation, particularly when tackling complex tasks.
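To make the contrast concrete, the toy sketch below decodes a sequence two ways: strictly left to right, as an AR model would, and by iterative refinement over the whole sequence, as a masked diffusion model would. Everything here is illustrative: `toy_logits` stands in for a trained network, and the confidence-based unmasking rule is one common scheme, not DiffuCoder's exact sampler.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, LENGTH, MASK = 50, 12, -1  # toy vocabulary size, sequence length, mask id

def toy_logits(tokens):
    """Stand-in for a trained network: returns random per-position logits."""
    return rng.normal(size=(len(tokens), VOCAB))

def ar_generate():
    """Autoregressive decoding: commit one token at a time, left to right."""
    seq = []
    for _ in range(LENGTH):
        logits = toy_logits(seq + [MASK])
        seq.append(int(np.argmax(logits[-1])))  # next slot is always the leftmost open one
    return seq

def diffusion_generate():
    """Masked-diffusion decoding: start fully masked, refine the whole sequence,
    committing whichever position the model is currently most confident about."""
    seq = [MASK] * LENGTH
    while MASK in seq:
        logits = toy_logits(seq)
        masked = [i for i, t in enumerate(seq) if t == MASK]
        conf = {i: logits[i].max() for i in masked}
        i = max(conf, key=conf.get)  # may be any position, not just the leftmost
        seq[i] = int(np.argmax(logits[i]))
    return seq

print("AR:       ", ar_generate())
print("Diffusion:", diffusion_generate())
```

The point is structural: the diffusion loop is free to commit any position on any pass, which is what makes non-linear generation possible in the first place.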
Developed through a joint effort between Apple and the University of Hong Kong, DiffuCoder is the first large-scale (7 billion parameters) diffusion model specifically trained for code generation. What sets it apart is its ability to adapt the order in which it generates tokens: it isn’t bound to the traditional left-to-right structure. Depending on the sampling temperature, it can alter the generation sequence dynamically.
The concept isn’t entirely new. Earlier this year, Inception Labs introduced Mercury Coder, part of its Mercury model family and likewise a diffusion-based model aimed at code generation. That effort already showed promising results, particularly in generation speed. Unsurprisingly, the Apple researchers cite Mercury as an initial point of reference, at the very least as a practical demonstration that the diffusion approach is viable.
How Is This Different from What We've Known So Far?
To better understand DiffuCoder's behavior, researchers introduced new metrics to measure how “autoregressive” a model is—that is, how strictly it generates content in sequence. They found that diffusion models tend to exhibit a natural causal bias, still favoring tokens that follow directly after the prompt. This bias likely stems from the structure of the training data. However, by adjusting factors such as sampling temperature, the model can gradually deviate from this sequential tendency, leading to more varied and—after multiple sampling attempts—more effective code generation.
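One way to quantify this bias is to record the order in which positions are committed during decoding and measure how often each step picks the leftmost still-masked position. The sketch below is a simplified stand-in for that idea, not the paper's exact local/global AR-ness definitions.

```python
def local_ar_ness(unmask_order):
    """Fraction of decoding steps that commit the leftmost still-masked position.

    `unmask_order` lists token positions in the order the model committed them.
    A score of 1.0 means strictly left-to-right (fully autoregressive) decoding;
    values near 0 mean the model roams freely over the sequence. This is a
    simplified stand-in for the AR-ness metrics described in the DiffuCoder
    paper, not their exact definition.
    """
    remaining = sorted(unmask_order)
    hits = 0
    for pos in unmask_order:
        if pos == remaining[0]:
            hits += 1
        remaining.remove(pos)
    return hits / len(unmask_order)

print(local_ar_ness([0, 1, 2, 3]))  # 1.0: strictly sequential
print(local_ar_ness([2, 0, 3, 1]))  # 0.5: partially scrambled
```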
In AR models, sampling temperature typically only affects the diversity of generated tokens. In diffusion models, however, it also influences where generation begins. The model doesn't just choose different tokens—it can start the generation process in different positions altogether.
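Here is a minimal sketch of how temperature can govern that position choice, assuming a confidence-based sampler like the one above; DiffuCoder's actual decoding schedule may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def pick_position(confidences, temperature):
    """Sample which masked position to commit next.

    Low temperature concentrates on the highest-confidence position (which,
    given the causal bias, tends to sit just after the prompt); higher
    temperature flattens the distribution so decoding can begin anywhere.
    Illustrative only; not DiffuCoder's exact sampler.
    """
    logits = np.asarray(confidences) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(confidences), p=probs))

conf = [2.0, 1.5, 0.5, 0.4]  # model is most confident about the earliest positions
print([pick_position(conf, 0.1) for _ in range(5)])  # almost always position 0
print([pick_position(conf, 2.0) for _ in range(5)])  # spread across the sequence
```

With the same confidences, a cold sampler nearly always starts right after the prompt, while a hot one scatters its starting points, which matches the behavioral difference described above.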
The Role of Reinforcement Learning: Learning Through Rewards
During DiffuCoder's development, the researchers employed a four-stage training process. Traditional supervised fine-tuning was supplemented with a novel form of reinforcement learning (RL). Specifically, they used an optimization method called coupled-GRPO, designed to fit the logic of diffusion-based generation. Rather than estimating token likelihoods with a single random mask, the method samples pairs of complementary masks, ensuring that every token is evaluated exactly once across each pair, and always within a meaningful partial context.
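The sketch below illustrates the complementary-masking idea at the heart of this scheme, under stated assumptions: `model` is any callable mapping token ids to per-position logits, and the surrounding GRPO machinery (group sampling, advantage weighting) is omitted. It is a reading of the published description, not Apple's reference implementation.

```python
import torch

def coupled_logprobs(model, tokens, mask_id):
    """Estimate per-token log-probs with one complementary mask pair.

    A random half of the positions is masked in the first pass and the
    complementary half in the second, so every token is scored exactly once
    while the other half of the sequence provides real context. A hedged
    sketch of the idea behind coupled-GRPO, not a reference implementation.
    """
    L = tokens.shape[-1]
    half = torch.randperm(L)[: L // 2]
    mask_a = torch.zeros(L, dtype=torch.bool)
    mask_a[half] = True
    mask_b = ~mask_a  # complementary mask: covers exactly the other positions

    logps = torch.empty(L)
    for mask in (mask_a, mask_b):
        noised = tokens.clone()
        noised[mask] = mask_id
        logits = model(noised)                  # (L, vocab)
        logp = torch.log_softmax(logits, dim=-1)
        logps[mask] = logp[mask, tokens[mask]]  # score only the masked slots
    return logps

# Toy usage: a random "model" standing in for the diffusion network.
L, V, MASK_ID = 8, 32, 0
toy = lambda ids: torch.randn(ids.shape[-1], V)
print(coupled_logprobs(toy, torch.randint(1, V, (L,)), MASK_ID))
```

Because the two masks partition the positions, every token's log-probability is estimated exactly once, with the complementary half supplying real context; these per-token estimates would then feed the advantage-weighted GRPO objective.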
According to benchmark tests, this approach improved the model’s performance across several code generation tasks. For instance, on HumanEval and MBPP, applying coupled-GRPO led to a 4–7% increase in accuracy.
A Measured Step Forward—Not a Revolution
It’s important to emphasize that DiffuCoder and other diffusion models have not yet clearly outperformed leading AR models. In fact, AR models still show more dramatic improvements when it comes to instruction tuning, whereas diffusion models only see moderate gains. For now, the diffusion approach should be seen as a complementary direction rather than a wholesale paradigm shift. Nevertheless, the research behind it offers valuable insights into how future language models might be better suited for tackling non-linear, complex tasks—such as writing software.
What’s Next?
The creation of DiffuCoder, and its release as an open-source project available on GitHub, goes beyond simply unveiling a new model. It provides a foundation for deeper research into how diffusion-based language models behave and how their generation processes can be controlled, for example through reinforcement learning. Code generation no longer needs to follow a single, linear path. While this new approach is not revolutionary, it opens up the possibility for machines to develop and follow their own internal “thinking” order. In the long run, this flexibility could benefit not only software development but also other complex content generation tasks.
The future of diffusion models is still taking shape, but they have already established themselves as a serious force in the evolution of language modeling. DiffuCoder represents a careful yet meaningful step in that direction.