Safety & Responsibility

Project Mariner is designed with safety at its core. We believe AI agents should augment human capabilities while keeping humans firmly in control.

Human Oversight

Project Mariner keeps humans in control. You can pause, review, and intervene at any point during task execution.

Restricted Actions

The agent cannot perform sensitive actions like making payments or sending emails without explicit user confirmation.

User Confirmation

Critical decisions require your approval. The agent will pause and ask before taking irreversible actions.
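The confirmation gate described above — routine actions proceed, sensitive ones pause for an explicit human yes — can be sketched as follows. This is an illustrative pattern only, not Project Mariner's actual API; the names `SENSITIVE_ACTIONS` and `require_confirmation` are assumptions for this sketch.

```python
# Hypothetical human-in-the-loop confirmation gate (illustrative only).
SENSITIVE_ACTIONS = {"make_payment", "send_email", "delete_account"}

def require_confirmation(action: str, confirm) -> bool:
    """Run a sensitive action only after an explicit human approval."""
    if action in SENSITIVE_ACTIONS:
        return confirm(action)  # pause and ask the user before proceeding
    return True                 # routine actions proceed without a prompt

# Usage: a stand-in for the human that declines the request.
allowed = require_confirmation("send_email", confirm=lambda a: False)
```

The key design choice is that the default for anything on the sensitive list is to ask, so a forgotten prompt fails closed rather than open.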

Error Detection

Built-in safeguards detect when the agent is uncertain or may be making mistakes, prompting for human review.
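One common way to implement this kind of safeguard is a confidence threshold: steps the agent is unsure about are routed to a human instead of executed. The threshold value and function names below are assumptions for this sketch, not Mariner's real mechanism.

```python
# Illustrative uncertainty check (not Mariner's actual implementation).
REVIEW_THRESHOLD = 0.8  # assumed cutoff for this sketch

def next_step(action: str, confidence: float) -> str:
    """Route low-confidence steps to human review instead of executing."""
    if confidence < REVIEW_THRESHOLD:
        return "ask_human"  # agent is uncertain: prompt for review
    return "execute"        # agent is confident: proceed with the action
```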

Data Privacy

Your data stays in your browser. The agent processes information locally and minimizes data transmission.

Graceful Degradation

When encountering unexpected situations, the agent safely stops and reports rather than taking risky actions.
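The stop-and-report behavior can be sketched as a fail-safe wrapper: any unexpected error halts the step and surfaces a report rather than retrying or improvising. Again, this is a minimal illustrative sketch, not the product's actual error handling.

```python
# Hedged sketch of graceful degradation: stop and report on surprises.
def run_safely(step):
    """Execute one agent step; on any unexpected error, halt and report."""
    try:
        return ("ok", step())
    except Exception as exc:  # unexpected situation: do not press on
        return ("stopped", f"halted and reported: {exc}")
```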

Our Research Approach

Project Mariner is part of Google DeepMind's broader research into safe and beneficial AI agents. We're exploring how AI can help people accomplish tasks while maintaining appropriate human oversight and control.

As a research prototype, Project Mariner helps us understand the challenges and opportunities of AI agents in real-world settings. Your feedback is invaluable in helping us improve safety and usefulness.

Our Safety Principles

1. Human Agency First

AI agents should enhance human capabilities, not replace human judgment. Users remain in control at all times.

2. Transparency

The agent clearly communicates what it's doing and why, making its reasoning visible and understandable.

3. Minimal Footprint

The agent only accesses information necessary for the task and processes data locally whenever possible.

4. Continuous Improvement

We actively learn from user feedback and research findings to make the agent safer and more helpful over time.