AI Alignment in Open Source - Discussion Framework
I'm on my way home from CHAOSScon and FOSDEM, where I ran two complementary sessions called 'AI Alignment for Open Source'.
AI alignment is the effort to design Artificial Intelligence systems so their goals, behaviors, and decisions are consistent with human values and intentions, making them safe, helpful, and reliable. In the context of open source, the sessions explored what it means for AI to be aligned with open source: what we have built, know, value, and expect.
One thing I wanted to accomplish was to move the conversation past specific symptoms like "AI slop" and instead ask: at what layer of open source does this misalignment show up, and whose interests are being served or ignored? Those two questions are the basis of the framework I have created for my sessions (and which may help you with yours).
You can see that, at the center, are the values of the project/community. These might not be the right layers, but I proposed them as a starting point.

Below are some examples of how I have seen alignment/misalignment turning up, organized by layer. See the link at the bottom of this post for the HTML version (which includes links to examples).

I will write more about the themes that came out of these sessions in the near future. However, I will say that misalignment shows up both in very difficult topics, like the environment and data ('described data' feeling like a brick wall), and in ones that may be easier to solve, like adding rules about AI usage to governance and/or improving contribution ladders to encourage learning and growing in knowledge of a project before submitting a PR. I also think Red Hat is showing some early leadership in proactively designing for alignment.
The CHAOSS AI Alignment Working Group is surveying our community to learn more about their perspectives on and encounters with AI before proposing solutions. In the meantime, I hope this discussion framework can help you and your community navigate this bumpy time in OSS.