meaning alignment institute

https://www.meaningalignment.org/
https://twitter.com/meaningaligned

folks in the metacrisis space who received a grant from OpenAI, proposing dialectical/wisdom training for AI with a methodology for harmonizing values (moral graphs).

good introductions:

see a 5-min introductory article + link to the 38-page paper: https://meaningalignment.substack.com/p/new-paper-what-are-human-values-and

a 1h30 video explaining the overarching vision/research basis: https://www.youtube.com/watch?v=hZpKdfbrd6o


people involved: joe edelman, ellie hain, oliver klingefjord