Apple Plans Its Own “Vibe-Coding” Platform in Partnership with Anthropic

Apple has encountered several challenges in developing its own AI solutions recently, so it's perhaps unsurprising that the company is turning to external expertise. According to the latest reports, Apple has decided to join forces with Anthropic to create a "vibe-coding" software platform that uses generative AI to write, edit, and test code on programmers' behalf.

The Claude Sonnet model will reportedly be integrated into Xcode, Apple's primary software development environment. This unexpected partnership departs from Apple's usual in-house approach to core technology development. Bloomberg reported on May 2 that Apple plans to roll the platform out internally first and has not yet decided whether to release it publicly.

This collaboration follows Apple’s earlier attempt to build its own AI coding assistant, Swift Assist, which was delayed by issues such as hallucinations and slow performance. Swift Assist currently works alongside third-party tools like GitHub Copilot and ChatGPT, both of which are already integrated into Xcode to provide additional AI-powered support.

The term “vibe-coding” was coined in February 2025 by Andrej Karpathy, co-founder of OpenAI and former head of AI at Tesla. It describes a programming paradigm in which developers describe a problem in natural language, and a large language model tuned for coding translates that description into traditional source code. In this model, the programmer’s role shifts from manual coding to overseeing, testing, and refining AI-generated code.
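The loop described above can be sketched in a few lines. The `generate_code()` stub below stands in for a coding-tuned LLM behind an API; the function names and the canned response are illustrative assumptions, not Apple's or Anthropic's actual interface.

```python
# A minimal sketch of the vibe-coding loop: the developer states a
# problem in natural language, a model produces the code, and the
# developer's role shifts to reviewing and testing the result.

def generate_code(natural_language_spec: str) -> str:
    """Stub for an LLM call: turn a plain-English spec into source code.
    A real implementation would send the spec to a model API."""
    canned = {
        "a function that returns the square of an integer":
            "def square(n: int) -> int:\n    return n * n\n",
    }
    return canned.get(natural_language_spec, "# model produced no code\n")

def vibe_code(spec: str, review_test) -> str:
    """Generate code from a spec, then exercise only the oversight role:
    load the generated code and run the caller's acceptance check."""
    source = generate_code(spec)
    namespace: dict = {}
    exec(source, namespace)  # load the AI-generated code
    assert review_test(namespace), "generated code failed review tests"
    return source

# The developer describes the problem and supplies a check,
# rather than writing the implementation by hand.
code = vibe_code(
    "a function that returns the square of an integer",
    lambda ns: ns["square"](7) == 49,
)
```

The point of the sketch is the division of labor: the human never types the implementation, only the specification and the acceptance test.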

A key aspect of vibe-coding is that users often accept code without fully understanding it. As AI researcher Simon Willison noted, “If an LLM has written every line of code, but you have reviewed, tested, and understood it all, that is not vibe-coding in my opinion—it’s using an LLM as a typing assistant.”

Karpathy describes his own method as conversational: he issues voice commands while the AI generates the actual code. He admits that, although the technique has limitations—especially in error correction—it’s “not too bad for throwaway weekend projects,” and he finds it “quite fun.”

The partnership will reportedly leverage Anthropic's Claude 3.5 Sonnet model, which has become popular among developers for coding tasks. The model delivers strong performance and versatility: it runs twice as fast as its predecessor while maintaining high reasoning capability. Optimized for software development tasks such as code migrations, bug fixes, and translations, it handles both design work and complex coding challenges well. With a context window of 200,000 tokens, it enables comprehensive code analysis and generation.
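To make the 200,000-token figure concrete, here is a rough back-of-the-envelope check of whether a codebase fits in such a context window. The ~4 characters-per-token ratio is a common rule of thumb for English-like text and code, not Anthropic's actual tokenizer, and the reserve size is an arbitrary illustrative choice.

```python
# Rough estimate of whether a set of source files fits in a
# 200,000-token context window alongside a prompt and a reply.

CONTEXT_WINDOW = 200_000   # tokens, per the Claude Sonnet figure above
CHARS_PER_TOKEN = 4        # heuristic; real tokenization varies

def estimated_tokens(source_files: list[str]) -> int:
    """Approximate the total token count of the files' contents."""
    return sum(len(text) for text in source_files) // CHARS_PER_TOKEN

def fits_in_context(source_files: list[str], reserve: int = 8_000) -> bool:
    """Check whether the files, plus a reserve for the prompt and the
    model's generated reply, stay under the context window."""
    return estimated_tokens(source_files) + reserve <= CONTEXT_WINDOW

# Example: 300 files of ~2,000 characters each is roughly 150,000
# tokens, leaving room for a prompt and a generated patch.
files = ["x" * 2_000] * 300
```

By this estimate, a mid-sized module tree can be analyzed in a single request, which is what makes whole-codebase tasks like migrations plausible at this window size.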

For developers, this collaboration could transform workflows. The vibe-coding paradigm promises to allow even amateur programmers to build software without the extensive training traditionally required. However, for complex projects, the expectation is that vibe-coding will produce initial concepts; if those prove viable, developers will reimplement them in the conventional way, since AI-generated code is not yet fully maintainable. By using advanced AI models like Claude Sonnet, developers can spend less time writing code and more time tackling higher-level problems and creative solutions. 
