Poster in Workshop: Political Economy of Reinforcement Learning Systems (PERLS)

Demanding and Designing Aligned Cognitive Architectures

Koen Holtman


Abstract:

With AI systems becoming more powerful and pervasive, there is increasing debate about keeping their actions aligned with the broader goals and needs of humanity. This multi-disciplinary and multi-stakeholder debate must resolve many issues; here we examine two of them. The first is to clarify what demands stakeholders might usefully make on the designers of AI systems, useful in the sense that the technology already exists to implement them. We introduce the framing of cognitive architectures to make this technical topic more accessible. The second issue is how stakeholders should calibrate their interactions with modern machine learning researchers. We consider how current fashions in machine learning create a narrative pull that participants in technical and policy discussions should be aware of, so that they can compensate for it. We identify several technically tractable but currently unfashionable options for improving AI alignment.