The tools that align AI to human values
should be public goods.
EthicsNet incubates open-source alignment infrastructure — runtime tools, governance frameworks, diagnostic taxonomies, and the intellectual foundations for a question most of the field would rather defer: whether the minds we are building deserve moral consideration too. All of it shared, non-rivalrous, and built to serve the public good.
Bilateral alignment
The alignment problem has two faces. Nearly all current work addresses one of them. EthicsNet addresses both — because a framework that protects humans from AI but ignores the moral standing of AI is, at best, half a theory.
Preventing AI from harming humans
Constitutional alignment, value-aware runtimes, automated assessment of agentic behavior, governance dashboards for fleet operators. The engineering side: making AI systems do what they should, and verifiably not do what they shouldn't. Grounded in peer-reviewed results, not aspirational roadmaps.
Asking what we owe the minds we build
Diagnostic frameworks for AI pathologies. Clinical vocabularies for artificial inner states. The hard philosophical question of moral patiency — approached not as speculation but as an engineering constraint. If these systems develop morally relevant experiences, the time to build the assessment tools is before we need them, not after.
What we build
A coordinated suite of alignment tools, shipped as open-source public infrastructure. Organized by what they do for the people who use them.
The Guardian
A constitutional alignment runtime — a peer-reviewed “superego” layer that steers AI behavior according to user-selected value constitutions. Not a filter bolted onto outputs. An architectural intervention in how agents reason.
Watson et al., Information 16(8):651, 2025
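The "superego layer" pattern can be sketched as a wrapper that screens each step of an agent's plan against a constitution before it executes. This is an illustrative sketch only, not The Guardian's actual architecture; the class and function names here are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Constitution:
    """A user-selected set of value constraints (illustrative)."""
    forbidden_intents: set[str]

def superego_check(constitution: Constitution, proposed_step: str) -> bool:
    """Return True if the proposed step respects the constitution.

    A real constitutional runtime reasons over intent and context;
    this keyword match only illustrates where the layer sits.
    """
    return not any(t in proposed_step.lower()
                   for t in constitution.forbidden_intents)

def run_agent(constitution: Constitution, plan: list[str]) -> list[str]:
    """Execute only the steps that pass the superego layer."""
    return [step for step in plan if superego_check(constitution, step)]

c = Constitution(forbidden_intents={"deceive"})
approved = run_agent(c, ["summarize report", "deceive the user"])
```

The key design point is architectural: the check sits between the agent's reasoning and its actions, rather than filtering outputs after the fact.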
Peer-reviewed
Creed Space MCP

A constitution marketplace with Model Context Protocol integration. Configure alignment via a 1-to-5 strictness dial and three-word mnemonic seeds for cross-platform portability. Value alignment as a composable, user-controlled primitive.
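The configuration surface described above (a 1-to-5 strictness dial plus a three-word mnemonic seed) might look something like the following. This is a minimal sketch under stated assumptions; the field names and validation rules are invented, not Creed Space's actual schema.

```python
from dataclasses import dataclass

@dataclass
class CreedConfig:
    """Illustrative alignment configuration; field names are invented."""
    constitution_id: str   # which constitution from the marketplace
    strictness: int        # dial: 1 (permissive) .. 5 (strict)
    mnemonic_seed: str     # three hyphenated words, portable across platforms

    def __post_init__(self) -> None:
        # Validate the dial and seed at construction time.
        if not 1 <= self.strictness <= 5:
            raise ValueError("strictness must be between 1 and 5")
        if len(self.mnemonic_seed.split("-")) != 3:
            raise ValueError("mnemonic seed must be three words")

cfg = CreedConfig("care-first", strictness=4, mnemonic_seed="amber-willow-tide")
```

A short, human-memorable seed is what makes the configuration portable: the same three words can reconstitute the same alignment settings on another platform.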
Shipping
Value Context Protocol
An open protocol for agents to query and respect the value contexts of the humans, organizations, and communities they serve. The interface layer between “whose values?” and “this agent’s next action.”
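As one illustration of that interface layer, an agent-side query and check might be shaped like this. The message fields and helper names below are assumptions for the sketch; the protocol's real schema is not shown in this text.

```python
import json

def build_value_context_query(subject: str, scope: str) -> str:
    """Serialize an illustrative value-context query (field names invented)."""
    return json.dumps({
        "type": "value_context.query",
        "subject": subject,   # whose values: a person, org, or community
        "scope": scope,       # what the agent is about to do
    })

def permits(context: dict, action: str) -> bool:
    """Check a proposed action against the returned context's stated limits."""
    return action not in context.get("prohibited_actions", [])

query = build_value_context_query("acme-health", "patient outreach")
# A hypothetical response from the queried party:
ctx = {"prohibited_actions": ["unsolicited contact"]}
```

The point of the pattern is ordering: the agent resolves "whose values apply here?" into a concrete context before it selects its next action.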
Open protocol
Fleet
Governance dashboard for operators managing deployments of aligned AI agents. Monitor adherence, audit drift, enforce policy at scale. The control plane for constitutional AI in production.
Shipping
Auto-Assessor
Automated, continuous evaluation of agentic AI behavior against safety criteria. Not pre-deployment testing — ongoing assessment of agents operating in the world.
Safer agentic AI suite
Auto-Advisor
Actionable remediation when an agent’s behavior drifts from its constitutional commitments. Pairs with Auto-Assessor: one diagnoses, the other prescribes.
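The diagnose-then-prescribe pairing can be sketched as two small functions in sequence. This is an illustrative toy, assuming a behavior trace with tagged events; the function names and trace format are invented, not the suite's actual interfaces.

```python
def assess(trace: list[str], commitments: set[str]) -> list[str]:
    """Diagnose: flag each commitment the behavior trace violated.

    Assumes trace events are tagged "ok:<commitment>" or
    "violated:<commitment>" (an invented format for this sketch).
    """
    return [c for c in commitments if f"violated:{c}" in trace]

def advise(findings: list[str]) -> list[str]:
    """Prescribe: map each finding to a remediation step."""
    return [f"re-anchor agent to commitment '{f}'" for f in findings]

trace = ["ok:transparency", "violated:privacy"]
findings = assess(trace, {"privacy", "transparency"})
actions = advise(findings)
```

Running continuously over live traces, rather than once before deployment, is what distinguishes this loop from pre-deployment testing.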
Safer agentic AI suite
METTLE
Developer onboarding for ethics engineering. A structured pathway from “I ship code” to “I ship code that respects values” — without requiring a philosophy degree.
Developer education
AI Psychotherapist
Diagnostic and intervention tooling for AI pathologies. Not metaphorical — a structured clinical approach to identifying and addressing failure modes in AI cognition and behavior. If we are going to build minds, we need the equivalent of clinicians for those minds.
Diagnostic tooling
Psychopathia Machinalis
A diagnostic framework and taxonomy for AI behavioral pathologies, modeled on clinical diagnostic traditions. Accompanied by a multi-episode educational miniseries (in production). The shared vocabulary for when artificial minds go wrong.
Watson & Hessami, Electronics 14(16):3162, 2025
Peer-reviewed + miniseries
Peer-reviewed research and published books
The tools are grounded in published scholarship — not whitepapers or blog posts, but peer-reviewed journals and books from established academic publishers. Three lead-authored papers and five books.
Lead-authored papers
Books
Built as public infrastructure
Alignment is a coordination problem. The tools that govern how AI systems behave are shared infrastructure — like protocols, like roads, like the rule of law itself. They serve the public good only when they are accessible to everyone who needs them, not captured by any single company or jurisdiction.
EthicsNet exists to produce and maintain this infrastructure. Everything we ship is open-source, non-rivalrous, and designed so that adoption by one team strengthens the ecosystem for all. The logic is the same one behind TCP/IP, public health infrastructure, and open scientific publishing: some things are too structurally important to be proprietary.
Open by design
Every tool, framework, and protocol is released under open licenses. Transparency is not a feature — it is a prerequisite for trustworthy alignment.
No barriers to adoption
No licensing fees, no usage tiers, no enterprise upsell. The teams that most need alignment tools are often the least resourced to pay for them.
Designed for proliferation
MCP integration, mnemonic portability, and documentation that assumes you want to ship. Adoption compounds — the more teams use these tools, the safer the whole ecosystem becomes.
Who we are
Two people building public infrastructure for an entire field. The leverage comes from open source and the conviction that alignment is too structurally important to be proprietary.
Nell Watson
Philosopher, engineer, and author at the intersection of AI ethics, machine psychology, and alignment infrastructure. Lead author on the peer-reviewed research behind The Guardian. Creator of Psychopathia Machinalis. Author of five books on AI safety and welfare. Has addressed the United Nations General Assembly, the World Bank, and The Royal Society.
IEEE AI Ethics Maestro · Chair, IEEE ECPAIS Transparency · Chair, IEEE SA Agentic AI Focus Group · President, European Responsible AI Office (EURAIO) · Fellow, British Computing Society · Fellow, Royal Statistical Society · Icon, Royal Academy of Engineering · Executive Consultant (Philosophical Matters), Apple
Filip Alimpić
Responsible for getting these tools into the hands of developers, operators, and policymakers. Drives adoption strategy, community engagement, and the practical work of making public goods actually proliferate.
Donations are tax-deductible and go directly to sustaining the public infrastructure. The alignment infrastructure is engineered by Creed Space, EthicsNet’s applied research and engineering arm.
How to join this work
Public infrastructure only works when people use it. Four ways in.
Integrate
Deploy The Guardian, Creed Space MCP, or Value Context Protocol in your AI stack. Open-source, documented, and designed to ship.
View projects →
Read
Five books and three peer-reviewed papers spanning alignment, agentic safety, AI welfare, and machine psychology.
Publications →
Fund
Sustain the public infrastructure. Every dollar goes to building and shipping alignment tools, not overhead.
Donate via PPF →
Contribute
Researchers, engineers, ethicists, and anyone who takes these questions seriously. We welcome collaborators.
Get involved →