
AI Alignment in Open Source - Discussion Framework
Photo by Emilipothèse / Unsplash

I'm on my way home now from CHAOSScon and FOSDEM, where I ran two complementary sessions called 'AI Alignment for Open Source'.

AI alignment is the effort to design Artificial Intelligence systems so their goals, behaviors, and decisions are consistent with human values and intentions, making them safe, helpful, and reliable. In the context of open source, these sessions explored what it means for AI to be aligned with open source (what we have built, know, value, and expect).

One thing I wanted to accomplish was to move the conversation past specific symptoms like "AI slop" and instead ask: at what layer of open source does this misalignment show up, and whose interests are being served or ignored? Those two questions are the basis of the framework I have created for my sessions (and which may help you with yours).

You can see that, at the center, are the values of the project/community. These might not be the right layers, but I proposed them as a starting point.

A circle with layers of color moving outward from the center, where the word 'values' radiates outward to other layers: 'governance', 'labor', 'legal', 'knowledge', and 'community'
A framework to start conversations about alignment in your project/community

Below are some examples of how I have seen alignment and misalignment turning up, organized by layer. See the link at the bottom of this post for the HTML version, which includes links to the examples.

| Example | What happened | Layers | Who is served / ignored |
|---|---|---|---|
| curl bug bounty shutdown | AI-generated fake vulnerability reports flooded the project: 20 bogus reports in 3 weeks. Bounty program shut down. | Labor, Governance | ✓ User (gaming rewards) ✗ Maintainer |
| Joshua Rogers + curl | Same project, different outcome. A skilled researcher used AI as a tool; 50+ real bugs fixed. | Labor, Values | ✓ Contributor ✓ Maintainer ✓ Community |
| Castle Game Engine | Contributors submitting plausible-looking code that used obsolete or non-existent APIs. It compiled but didn't work. | Knowledge, Labor | ✓ User (wanted to help) ✗ Maintainer (review burden) ✗ Community |
| Cloudflare "Matrix" implementation | Blog announced Matrix support, but the code was full of "TODO: Check authorization." Marketing, not engineering. | Values, Governance, Community | ✓ Vendor (marketing story) ✗ User ✗ Community (protocol) |
| Stack Overflow → LLM enclosure | Questions that used to be public are now private conversations with LLMs. Training data extracted, commons hollowed out. | Knowledge, Community | ✓ Vendor (training data) ✗ Community (commons) |
| Red Hat AI principles | Published a framework emphasizing transparency, human accountability, and community respect. Explicit about trade-offs. | Values, Governance | ✓ Maintainer ✓ Contributor ✓ Community |
A table of examples of both alignment and misalignment, to get people thinking about the WHO and WHAT of alignment statements

I will write more about the themes that came out of these sessions in the near future. However, I will say that misalignment exists both on very difficult topics, like the environment and data ('described data' feeling like a brick wall), and on topics that may be easier to solve, like adding rules about AI usage to governance and/or improving contribution ladders to encourage learning and growing in knowledge of a project (before submitting a PR). I also think Red Hat is showing some early leadership in proactively designing for alignment.

The CHAOSS AI Alignment Working Group is surveying our community to learn more about their perspectives on and encounters with AI before proposing solutions. In the meantime, I hope this discussion framework can help you and your community navigate this bumpy time in OSS.

You can find the HTML deck in our ai-alignment repository.

Licensed under CC BY-SA 4.0