Artificial Intelligence Safety

[Image: human-AI handshake]

Artificial Intelligence (AI) holds both great promise and great risk for human society.  Past technologies have largely managed safety through trial-and-error responses to their failures.  The power of AI demands a different approach: proactive governance.

A new paper from the Future of Life Institute outlines the safety research topics that must be addressed in advance of, or in parallel with, AI development to ensure this technology delivers a net positive for society.

Significant open problems remain in the theoretical and philosophical foundations of ethics, and in how those foundations are applied, that must be solved for AI to succeed.  Our society must get much better at developing fair and coherent methods of aggregating many people's preferences and values.  Constitutional institutions and the United Nations are adequate approximations of this aggregation while our civilization remains organized around nation-states.  However, transnationalism is already disrupting this global order, and AI will transform it beyond recognition.
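
To make the aggregation problem concrete, here is a minimal sketch of one classic aggregation rule, the Borda count, in which each ballot awards points to options by rank.  This example is not from the Future of Life Institute paper; the voters, policy options, and rankings are purely hypothetical, and it only illustrates that even a simple rule embeds contestable judgments about fairness.

```python
from collections import defaultdict

def borda_count(ballots: list[list[str]]) -> dict[str, int]:
    """Aggregate ranked preferences with the Borda rule: on each ballot,
    the first choice earns (n - 1) points, the second (n - 2), and so on."""
    scores: dict[str, int] = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for position, option in enumerate(ballot):
            scores[option] += n - 1 - position
    return dict(scores)

# Three hypothetical voters ranking three hypothetical policy priorities.
ballots = [
    ["transparency", "oversight", "speed"],
    ["oversight", "transparency", "speed"],
    ["speed", "oversight", "transparency"],
]
print(borda_count(ballots))
# {'transparency': 3, 'oversight': 4, 'speed': 2}
```

Even here, design choices matter: a plurality rule over the same ballots would pick no clear winner, and Arrow's impossibility theorem shows that no ranking rule can satisfy every reasonable fairness criterion at once, which is part of why preference aggregation remains an open problem.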

We are approaching an inflection point for our species.  May wisdom prevail.

About Paul Kavitz
Paul Kavitz has been principal of Bluecrue since 2001. He brings insight, experience, and focus to the fields of resilience, global security, and sustainable development.
