OpenAI unveils the o3-pro model

OpenAI has launched its o3-pro model, which replaces o1-pro and promises significant performance gains in science, education, programming, data analysis and scripting.

The o3-pro model is designed to give ChatGPT more reliable answers by thinking for longer. Performance tests show that o3-pro excels in maths, science and coding, outperforming both the o3 and o1-pro models. Although o3-pro response times may be somewhat longer than o1-pro's, OpenAI believes the extra latency is justified, especially for complex and challenging tasks.

OpenAI stresses that o3-pro makes ChatGPT a more versatile tool. It can search web pages, analyse documents, interpret visual content, use Python, and provide personalised responses through the memory feature. Note that, for technical reasons, o3-pro does not currently support Canvas or image generation; for those, users will need to switch to other models such as GPT-4o, o3 or o4-mini. Pricing for the o3-pro API is $20 per million input tokens and $80 per million output tokens.
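For reference, the quoted prices translate into a simple back-of-the-envelope cost estimate. The sketch below is illustrative only: the function name and token counts are made up for this example, and it does not use OpenAI's SDK.

```python
# Per-token prices for o3-pro, from the figures quoted above:
# $20 per million input tokens, $80 per million output tokens.
O3_PRO_INPUT_PRICE = 20.0 / 1_000_000   # USD per input token
O3_PRO_OUTPUT_PRICE = 80.0 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single o3-pro API request."""
    return (input_tokens * O3_PRO_INPUT_PRICE
            + output_tokens * O3_PRO_OUTPUT_PRICE)

# Example: a request with 10,000 input tokens and 2,000 output tokens
# costs 10,000 * $0.00002 + 2,000 * $0.00008 = $0.36.
print(f"${estimate_cost(10_000, 2_000):.2f}")  # → $0.36
```

In practice, output tokens dominate the bill: at a 4:1 price ratio, a long reasoning-heavy answer can cost far more than the prompt that produced it.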

The o3-pro model is available to ChatGPT Pro and Team subscribers from 11 June, as well as to developers via the API. Enterprise and Edu subscribers will get access the following week.

In addition, OpenAI has updated ChatGPT's Advanced Voice Mode, giving subscribers more natural and smoother conversations. ChatGPT's voice feature now also offers intuitive, efficient language translation: simply instruct ChatGPT to translate, and it will keep translating throughout the conversation until you ask it to stop or to change language.
