
Software development inside OpenAI is undergoing a shift as AI agents move from supporting roles to becoming the main interface for building, testing, and maintaining code. OpenAI co-founder Greg Brockman outlined six internal recommendations aimed at reshaping how teams work with agentic systems, following rapid improvements in tools such as Codex since December.
Brockman said engineers now rely on agents for writing most production code, debugging workflows, and handling operational tasks that previously required extensive manual effort. To adapt, OpenAI is restructuring both technical systems and team culture.
1. Make agents the default starting point
The first and most immediate change Brockman outlined is a shift in default behaviour. For any technical task, engineers are now encouraged to start by interacting with an AI agent rather than opening a traditional code editor or command-line tool.
Alongside this, OpenAI wants agent usage to be safe by default, so that most workflows do not require special permissions or additional approvals. The aim is to make agents the natural starting point for development without slowing teams through heavy governance layers.
This change reflects how quickly agent capabilities have expanded, moving from handling unit tests to producing full application logic and infrastructure scripts.
2. Teams must actively learn and assign ownership for agents
Brockman’s second recommendation focuses on adoption rather than technology.
He urged teams to spend time experimenting with the tools instead of assuming what agents can or cannot do. Many engineers who tried newer versions of Codex reported that their workflow changed significantly, while others delayed simply due to workload or habit.
To support this transition, Brockman suggested appointing an “agents captain” within each team — someone responsible for integrating agents into daily workflows. He also encouraged knowledge-sharing channels and company-wide hackathons to accelerate learning and experimentation.
3. Create AGENTS.md files and reusable skills
The third piece of advice centres on documentation designed specifically for AI systems.
OpenAI teams are now encouraged to maintain AGENTS.md files within each project. These documents act as living guides that explain how agents should interact with the codebase, including common tasks, rules, and known failure points.
Alongside this, Brockman recommended building reusable “skills” — automated workflows that agents can perform consistently — and committing them to shared repositories.
Whenever an agent struggles or makes mistakes, teams are expected to update these resources so future interactions improve.
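As a sketch of the idea, a project's AGENTS.md might look something like the following. The structure, commands, and file names here are illustrative assumptions, not a format prescribed by OpenAI:

```markdown
# AGENTS.md

## Project overview
Payment service: Python 3.11, FastAPI, Postgres.

## Common tasks
- Run tests: `make test` (full suite) or `pytest tests/unit -q` (fast loop)
- Lint and type-check before committing: `make lint`

## Rules
- Never edit files under `migrations/` by hand; use `make new-migration`.
- Every public endpoint change needs an entry in `CHANGELOG.md`.

## Known failure points
- Integration tests require the local Postgres container (`make db-up`);
  without it, agents will see connection errors rather than real test failures.
```

Because the file lives in the repository, it evolves with the code: when an agent fails a task, the fix is often a new line under "Rules" or "Known failure points".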
4. Make internal tools accessible to AI systems
Many engineering teams rely on internal dashboards, testing frameworks, deployment pipelines, and monitoring systems. Brockman’s fourth recommendation is to inventory these tools and ensure agents can directly access them.
This could involve building command-line interfaces, APIs, or lightweight servers that allow agents to trigger tests, retrieve logs, or deploy services without human intervention.
The goal is to remove friction so agents can operate across the full software lifecycle rather than being limited to code generation alone.
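To make the idea concrete, here is a minimal sketch of such a wrapper in Python. Everything in it is hypothetical: the command names (`run-tests`, `fetch-logs`) and the functions behind them are invented examples of exposing internal tooling through a machine-readable interface, not OpenAI's actual tools.

```python
#!/usr/bin/env python3
"""Illustrative CLI wrapper that exposes internal tooling to an agent.

The agent invokes named commands and receives JSON it can parse,
instead of scraping human-oriented dashboard output.
"""
import json
import subprocess
import sys


def run_tests(path: str = "tests/") -> dict:
    """Run the test suite and return a machine-readable summary."""
    result = subprocess.run(
        ["python", "-m", "pytest", path, "-q"],
        capture_output=True, text=True,
    )
    # Truncate stdout so the agent gets a bounded payload.
    return {"exit_code": result.returncode, "summary": result.stdout.strip()[-500:]}


def fetch_logs(service: str, lines="50") -> dict:
    """Placeholder: a real version would query the log store."""
    return {"service": service, "lines": int(lines), "entries": []}


COMMANDS = {"run-tests": run_tests, "fetch-logs": fetch_logs}


def main(argv: list[str]) -> int:
    if not argv or argv[0] not in COMMANDS:
        print(json.dumps({"error": "usage: tool <run-tests|fetch-logs> [args]"}))
        return 2
    # Emit JSON so the calling agent never has to parse free-form text.
    print(json.dumps(COMMANDS[argv[0]](*argv[1:])))
    return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

The design choice worth noting is the output format: returning structured JSON rather than human-formatted text is what lets an agent chain these commands reliably.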
5. ‘Say no to slop’ and protect code quality
One of Brockman’s clearest warnings focused on the risk of low-quality AI-generated code entering production systems.
Managing AI-written software at scale, he said, will require new processes and conventions. While agents can produce functional code quickly, that output can still be hard to maintain, poorly structured, or inconsistent if not carefully reviewed.
Every code change must still have a human owner. Review standards should remain as strict as they are for human-written code, and engineers must fully understand what they approve.
The objective is to prevent “functionally correct but poorly maintainable code” from accumulating over time.
6. Build core infrastructure around agent workflows
The sixth recommendation addresses the systems required to support large-scale agent usage.
Brockman said there is major scope for building foundational infrastructure, including:
- Tracking agent activity and decision paths
- Monitoring outputs beyond just committed code
- Centrally managing which tools agents can access
- Improving observability across AI-driven workflows
While core agent tools are improving rapidly, the surrounding infrastructure is still developing. Strong internal systems will be essential for reliability, accountability, and long-term scalability.
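A tiny sketch of the first item, tracking agent activity, might look like the decorator below. This is an assumed pattern for illustration only: real infrastructure would write to a durable store and carry far more context (agent identity, session, inputs), not an in-memory list.

```python
"""Illustrative audit layer: record every tool call an agent makes."""
import functools
import time

AUDIT_LOG = []  # stand-in for a durable, queryable audit store


def audited(tool_name):
    """Decorator that logs each invocation of an agent-accessible tool."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            entry = {"tool": tool_name, "args": repr((args, kwargs)),
                     "ts": time.time()}
            try:
                result = fn(*args, **kwargs)
                entry["status"] = "ok"
                return result
            except Exception as exc:
                entry["status"] = f"error: {exc}"
                raise
            finally:
                # Record the call whether it succeeded or failed.
                AUDIT_LOG.append(entry)
        return inner
    return wrap


@audited("deploy_service")
def deploy_service(name):
    """Hypothetical deployment action exposed to agents."""
    return f"deployed {name}"
```

Wrapping every agent-accessible tool this way gives teams a single trail of what agents did and when, which is the raw material for the accountability and observability goals above.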
A cultural shift in software development
Brockman framed the move toward agent-first development as comparable to earlier transitions such as cloud computing and internet-based software.
The technology alone is not enough. Teams must rethink workflows, accountability, documentation, and collaboration patterns to fully benefit from AI-driven development.
Managers are being encouraged to actively guide this shift, test new processes, and identify safeguards that preserve long-term code health.
As AI agents continue to improve, Brockman suggested that organisations that adapt early will gain speed advantages — while those that treat agents as optional tools may struggle to keep pace.
OpenAI’s six-point approach signals a future where engineers increasingly act as supervisors, system designers, and quality controllers, with AI agents handling much of the execution behind modern software development.