Apple’s UICoder teaches itself to write SwiftUI interfaces with automated feedback

The project, detailed in the study UICoder: Finetuning Large Language Models to Generate User Interface Code through Automated Feedback, addresses a longstanding problem in AI coding

August 16, 2025 / 19:05 IST

Apple researchers have unveiled a novel method to train a large language model (LLM) to produce high-quality SwiftUI code — and it essentially taught itself through an automated feedback loop.

The project, detailed in the study UICoder: Finetuning Large Language Models to Generate User Interface Code through Automated Feedback, addresses a longstanding problem in AI coding: while LLMs have become adept at general programming and creative writing, they often fail to generate syntactically correct, well-structured user interface code. The reason, according to the researchers, is that examples of UI code are scarce in most training datasets, sometimes making up less than one percent of the data.

Starting from scratch with minimal SwiftUI exposure
The team began with StarChat-Beta, an open-source coding-focused LLM, and gave it a list of natural-language UI descriptions. From those prompts, the model generated a large synthetic dataset of SwiftUI programs. Each generated program was first run through the Swift compiler to confirm that it built without errors.
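
To give a sense of what the model is asked to produce, here is an illustrative sketch of the kind of SwiftUI program a description such as "a login screen with a username field, a password field, and a sign-in button" might map to. The description and the code are hypothetical examples, not outputs from the paper.

import SwiftUI

// Illustrative only: a plausible SwiftUI program for the hypothetical
// description "a login screen with a username field, a password field,
// and a sign-in button".
struct LoginView: View {
    @State private var username = ""
    @State private var password = ""

    var body: some View {
        VStack(spacing: 16) {
            TextField("Username", text: $username)
                .textFieldStyle(.roundedBorder)
            SecureField("Password", text: $password)
                .textFieldStyle(.roundedBorder)
            Button("Sign In") {
                // Sign-in handling would go here.
            }
            .buttonStyle(.borderedProminent)
        }
        .padding()
    }
}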

The compiled interfaces were then analysed by GPT-4V, a vision-language model that compared them against the original descriptions. Programs that passed both checks formed a new, higher-quality training set used to finetune the model, and the cycle was repeated over multiple iterations, with each improved version of the model producing cleaner and more accurate SwiftUI code than the last.
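
To make the loop concrete, the following is a minimal Swift sketch of the filtering stage only. Candidate, compiles, renderScreenshot, matchScore and the threshold parameter are hypothetical placeholders standing in for the researchers' actual tooling, not code from the paper.

import Foundation

// A candidate pairs a natural-language UI description with the SwiftUI
// source the model generated for it.
struct Candidate {
    let description: String
    let source: String
}

// Placeholder: the real pipeline would invoke the Swift compiler here.
func compiles(_ source: String) -> Bool { true }

// Placeholder: the real pipeline would build the program and capture a
// screenshot of the rendered interface.
func renderScreenshot(_ source: String) -> Data { Data() }

// Placeholder: the real pipeline would ask a vision-language model such
// as GPT-4V how well the screenshot matches the description.
func matchScore(_ screenshot: Data, _ description: String) -> Double { 1.0 }

// Keep only candidates that compile and whose rendered interface the
// vision model judges to match the original description; the survivors
// become the next round of finetuning data.
func filterCandidates(_ candidates: [Candidate], threshold: Double) -> [Candidate] {
    candidates.filter { candidate in
        guard compiles(candidate.source) else { return false }
        let screenshot = renderScreenshot(candidate.source)
        return matchScore(screenshot, candidate.description) >= threshold
    }
}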