Artificial Intelligence Safety

Artificial Intelligence (AI) holds both great promise and great risk for human society.  Technologies of the past have addressed safety issues through trial-and-error responses to their failures.  The power of AI necessitates a different approach: proactive governance.

A new paper from the Future of Life Institute outlines the safety research topics that must be pursued in advance of, or in parallel with, AI development in order to ensure net positive gains from this technology.

There are significant open problems in the theoretical and philosophical foundations of ethics and its application that must be solved for AI to succeed.  Our society must get much better at developing fair and coherent methods of aggregating many people's preferences and values.  Constitutional institutions and the United Nations are adequate approximations of this aggregation while our civilization maintains a nation-state level of organization.  However, transnationalism is already disrupting this global order, and AI will transform it beyond recognition.
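
To see why aggregation is genuinely hard rather than merely under-engineered, consider the classic Condorcet paradox.  The short Python sketch below is a textbook illustration, not anything from the paper: three voters hold individually coherent rankings, yet pairwise majority voting produces a cyclic, incoherent group preference.

from itertools import combinations

# Three voters ranking three options, best first -- the classic
# Condorcet paradox (a textbook illustration, not data from the paper).
ballots = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

def pairwise_winner(x, y, ballots):
    """Return whichever option a majority of voters ranks above the other."""
    x_wins = sum(b.index(x) < b.index(y) for b in ballots)
    return x if x_wins > len(ballots) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y, ballots)}")

# Prints: A beats B, C beats A, B beats C -- a cycle.  Every voter is
# individually coherent, yet the aggregate majority preference is not.

Arrow's impossibility theorem generalizes this result: no ranked aggregation rule can satisfy even a small set of reasonable fairness criteria simultaneously, which is why fair and coherent preference aggregation remains an open design problem rather than a solved one.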

We approach an inflection point for our species.  Let wisdom prevail.

Emergent Endogenous Risk in System-Based Industries

The type of catastrophe that emerged within the global financial system will recur there in new forms, and it will also emerge within other critical systems such as food, energy, communications, and information technology.

The subprime mortgage crisis of 2008 exemplifies a new category of system-wide risk that emerges from the collective behavior of individual institutions.  This category of threat should be included within national infrastructure assurance plans.

A key response to the subprime contagion within the banking sector has been macroprudential policy: policy that seeks to improve the resilience of the entire system, not just the institutions within it.  The need for macroprudential policy is not confined to banking and finance, however; it applies to all system-based infrastructure sectors, including communications, energy, food and agriculture, and information technology.  This paper demonstrates the existence of systemic risks to information technology in order to justify extending macroprudential policy approaches to that sector.

The policy/risk mismatch summarized here arises from an aggregate decline in backup power capacity across U.S. data centers, which increases their vulnerability to sustained power outages year over year.  As backup capacity declines, a single service failure within the energy/electricity sector may exceed the tolerance thresholds of numerous data centers simultaneously.  This shared trend across many individual actors creates an emergent endogenous risk to the system as a whole.  This risk category, which 'emerges from within', stands in stark contrast to the prevailing cybersecurity focus on deterring exogenous risk.
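
To make the mechanism concrete, here is a minimal Monte Carlo sketch in Python of correlated failure under declining backup capacity.  Every parameter below (fleet size, capacity distribution, decline rate, outage model) is a hypothetical assumption chosen for illustration, not a figure from the paper.

import random

random.seed(42)

# All parameters are hypothetical assumptions for illustration.
N_CENTERS = 100            # data centers in the fleet
TRIALS = 2000              # Monte Carlo trials per year
BASE_BACKUP_HOURS = 48.0   # mean backup capacity in the first year
ANNUAL_DECLINE = 0.05      # assumed 5% aggregate decline per year
OUTAGE_MEAN_HOURS = 24.0   # assumed mean duration of a sustained outage

for i, year in enumerate(range(2010, 2021)):
    mean_backup = BASE_BACKUP_HOURS * (1 - ANNUAL_DECLINE) ** i
    mass_failures = 0
    for _ in range(TRIALS):
        # One sustained grid outage hits every center in the region at once.
        outage = random.expovariate(1 / OUTAGE_MEAN_HOURS)
        # Each center's tolerance varies around the declining fleet mean.
        failed = sum(outage > random.gauss(mean_backup, 0.3 * mean_backup)
                     for _ in range(N_CENTERS))
        # A "mass failure" is more than a quarter of the fleet down at once.
        if failed > 0.25 * N_CENTERS:
            mass_failures += 1
    print(f"{year}: mean backup {mean_backup:5.1f} h, "
          f"P(mass failure | outage) ~ {mass_failures / TRIALS:.2%}")

Under these assumed parameters, the probability of a correlated, fleet-wide failure rises year on year even though no single operator changes its behavior abruptly; that slow, collective drift is the signature of an emergent endogenous risk.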

Doing nothing has the lowest explicit cost in the short term but will eventually result in massive negative economic impact in the long term.  Innovative tactical regulatory approaches, such as policy promoting green data centers, can reduce exposure to this specific systemic risk while delivering other public benefits (e.g., a lower carbon footprint), but they fail to address the broader issue of other emergent endogenous risks.  The most effective approach is to nurture enterprise adaptive capacity, also known as enterprise resilience, through macroprudential policy.  Borrowing innovations from urban and environmental policy, such as those epitomized by the Stockholm Resilience Centre, would be a good place to start.

Resilience Defined

Resilience refers to the capacity of an entity, be it ecosystem or enterprise, to absorb disturbance and retain its basic function.