
Microsoft brings Copilot Actions to Windows 11, enabling secure AI agents for local tasks

Microsoft has introduced Copilot Actions on Windows 11, an experimental AI agent feature that performs real tasks on local files while ensuring privacy, user control, and security through agentic safeguards.

October 16, 2025 / 18:56 IST

Microsoft has announced Copilot Actions for Windows 11, a new experimental feature that enables AI-powered agents to perform tasks directly on local files while maintaining robust privacy and security standards. The feature expands on Microsoft’s earlier Copilot Actions on the web—first announced in May 2025—by extending these agentic capabilities beyond the browser. Initially rolling out to Windows Insiders through Copilot Labs, the preview marks a key step toward secure, AI-driven task automation within the Windows ecosystem.

Features
Copilot Actions is designed to act as an active digital collaborator rather than a passive assistant. The AI agent can perform actions like clicking, typing, and scrolling across apps and files—helping users update documents, organize folders, send emails, or even book tickets. By integrating with Windows, Copilot Actions leverages on-device apps and data, performing complex tasks once users explicitly grant permission.


The new system introduces an agent workspace—a contained environment where the AI can operate separately from the user’s session. This workspace enables runtime isolation, ensuring that agents work securely in parallel with users. During its preview phase, Copilot Actions will have access only to specific known folders such as Documents, Downloads, Desktop, and Pictures, with any additional access requiring user authorization.

Security and privacy
With this launch, Microsoft is emphasizing its focus on securing agentic AI as these systems evolve. AI agents, which can now take real-world actions on behalf of users, introduce potential security risks such as cross-prompt injection attacks (XPIA), where malicious content embedded in files or web pages could manipulate agent behavior. To address such threats, Microsoft has outlined four core Agentic Security and Privacy Principles designed to safeguard user data and control: