Combining both countries' economic and technical ecosystems with government pressure to develop AI, it is reasonable to conceive of an AI race dominated primarily by these two international actors. The primary difference between the Prisoner's Dilemma and Chicken, however, is that in Chicken both actors failing to cooperate is the least desired outcome of the game. In these abstractions, we assume two utility-maximizing actors with perfect information about each other's preferences and behaviors. The real peril of a hasty withdrawal of U.S. troops from Afghanistan, though, can best be understood in political, not military, terms. As the infighting continues, the impulse to forgo the elusive stag in favor of the rabbits on offer will grow stronger by the day. Often, games with a similar structure but without a risk-dominant Nash equilibrium are called assurance games.

[23] United Nations Office for Disarmament Affairs, "Pathways to Banning Fully Autonomous Weapons," United Nations, October 23, 2017, https://www.un.org/disarmament/update/pathways-to-banning-fully-autonomous-weapons/.

Gray[36] defines an arms race as "two or more parties perceiving themselves to be in an adversary relationship, who are increasing or improving their armaments at a rapid rate and structuring their respective military postures with a general attention to the past, current, and anticipated military and political behaviour of the other parties."
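The notion of risk dominance invoked above can be made concrete with a short sketch. The payoff numbers below are hypothetical, chosen only to produce a standard stag hunt; they are not taken from the essay's tables.

```python
# Symmetric 2x2 stag hunt with hypothetical payoffs for the row player:
#   stag vs. stag = 4, stag vs. hare = 0, hare vs. stag = 3, hare vs. hare = 3
a, b = 4, 0  # payoffs for playing stag against (stag, hare)
c, d = 3, 3  # payoffs for playing hare against (stag, hare)

# Both (stag, stag) and (hare, hare) are pure Nash equilibria: a > c and d > b.
assert a > c and d > b

# Harsanyi-Selten criterion: (stag, stag) risk-dominates (hare, hare) iff its
# product of unilateral deviation losses is larger, i.e. (a - c)**2 > (d - b)**2.
risk_dominant = "stag" if (a - c) ** 2 > (d - b) ** 2 else "hare"
print(risk_dominant)  # hare: the safe strategy wins on risk dominance
```

With these numbers the payoff-dominant stag equilibrium is not risk dominant, which is exactly the tension that distinguishes a stag hunt from an assurance game without it.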
An example of the Stag Hunt can be illustrated by neighbours with a large hedge that forms the boundary between their properties. Finally, if both sides defect, effectively choosing not to enter an AI Coordination Regime, we can expect their payoffs to be expressed as follows: the benefit that each actor can expect to receive from this scenario is solely the probability that it achieves a beneficial AI times that actor's perceived benefit of receiving AI (without distributional considerations): P_(b|A)(A) × b_A for Actor A and P_(b|B)(B) × b_B for Actor B. These two concepts refer to how states will act in the international community. But, at various critical junctures, including the country's highly contentious presidential elections in 2009 and 2014, rivals have ultimately opted to stick with the state rather than contest it. In the Prisoner's Dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is when both players choose to defect.

[44] Thomas C. Schelling & Morton H. Halperin, Strategy and Arms Control.

[3] Elon Musk, Twitter Post, September 4, 2017, https://twitter.com/elonmusk/status/904638455761612800.

If all the hunters work together, they can kill the stag and all eat.[9] That is, the extent to which competitors prioritize speed of development over safety (Bostrom 2014: 767). This makes the risk twofold: the risk that the stag does not appear, and the risk that another hunter takes the kill. Two players make simultaneous decisions. An individual can get a hare by himself, but a hare is worth less than a stag. Let us call a stag hunt game where this condition is met a stag hunt dilemma. In the context of international relations, this model has been used to describe the preferences of actors when deciding whether to enter an arms treaty.
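The mutual-defection payoff just described can be sketched in a few lines. The probabilities and benefit values below are assumed for illustration only; they stand in for the essay's P_(b|A)(A), b_A, P_(b|B)(B), and b_B.

```python
# Expected payoffs when both actors defect (no AI Coordination Regime):
# each actor's payoff is its solo probability of achieving a beneficial AI
# times its perceived benefit of AI. All numbers are illustrative.
p_beneficial_A, benefit_A = 0.5, 10.0  # stands in for P_(b|A)(A) and b_A
p_beneficial_B, benefit_B = 0.5, 8.0   # stands in for P_(b|B)(B) and b_B

payoff_A = p_beneficial_A * benefit_A
payoff_B = p_beneficial_B * benefit_B
print(payoff_A, payoff_B)  # 5.0 4.0
```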
It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. David Hume provides a series of examples that are stag hunts. This table contains a representation of a payoff matrix. Like the hunters in the woods, Afghanistan's political elites have a great deal, at least theoretically, to gain from sticking together. To reiterate, the primary function of this theory is to lay out a structure for identifying which game models best represent the AI Coordination Problem and, as a result, which strategies should be applied to encourage coordination and stability. As such, it will be useful to consider each model using a traditional normal-form game setup, as seen in Table 1. As a result, this tradeoff between costs and benefits has the potential to hinder prospects for cooperation under an AI Coordination Regime. Finally, Table 13 outlines an example payoff structure that results in a Stag Hunt.

[24] Defined by Bostrom as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Nick Bostrom, "How long before superintelligence?" Linguistic and Philosophical Investigations 5, 1 (2006): 11–30.

We have recently seen an increase in media acknowledgement of the benefits of artificial intelligence (AI), as well as of the negative social implications that can arise from its development. In the long term, environmental regulation in theory protects us all, but even if most countries sign the treaty and regulate, some, like China and the US, will not, for sovereignty reasons or because they are experiencing great economic gain.
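A Table-1-style normal-form game can be represented directly in code. The sketch below uses hypothetical stag-hunt payoffs of my own choosing and finds the pure-strategy Nash equilibria by checking each cell for profitable unilateral deviations.

```python
# A 2x2 normal-form game as nested tuples: matrix[row][col] = (row payoff, col payoff).

def pure_nash_equilibria(matrix):
    """Return the (row, col) cells where neither player gains by deviating unilaterally."""
    equilibria = []
    for r in range(2):
        for c in range(2):
            row_payoff, col_payoff = matrix[r][c]
            row_best = row_payoff >= matrix[1 - r][c][0]  # row cannot improve by switching
            col_best = col_payoff >= matrix[r][1 - c][1]  # column cannot improve by switching
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

stag_hunt = (((4, 4), (0, 3)),   # row plays stag
             ((3, 0), (3, 3)))   # row plays hare
print(pure_nash_equilibria(stag_hunt))  # [(0, 0), (1, 1)]: stag/stag and hare/hare
```

Plugging in other payoff tables would recover the single defect/defect equilibrium of the Prisoner's Dilemma or the two asymmetric equilibria of Chicken.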
Finally, a Stag Hunt occurs when the returns for both actors are higher if they cooperate than if either or both defect. The Nash equilibrium for each nation is to cheat, so it would be irrational to do otherwise. The intuition behind this is laid out in Armstrong et al.'s "Racing to the precipice: a model of artificial intelligence development."[55] The authors suggest each actor would be incentivized to skimp on safety precautions in order to attain the transformative and powerful benefits of AI before an opponent. War is anarchic, and intervening actors can sometimes help to mitigate the chaos. For example, international sanctions involve cooperation against target countries (Martin, 1992a; Drezner, …). This same dynamic could hold true in the development of an AI Coordination Regime, where actors can decide whether to abide by the Coordination Regime or find a way to cheat. I will apply them to IR and give an example for each. Type of game model and prospect of coordination. One example payoff structure that results in a Chicken game is outlined in Table 11. I refer to this as the AI Coordination Problem. For example, can the structure of distribution impact an actor's perception of the game as cooperation- or defection-dominated (and if so, should we focus strategic resources on developing accountability strategies that can effectively enforce distribution)? In addition to the example suggested by Rousseau, David Hume provides a series of examples that are stag hunts. The Stag Hunt is probably the more useful model, since games in life have many equilibria, and it is a question of how you can get to the good ones. This technological shock factor leads actors to increase weapons research and development and to maximize their overall arms capacity to guard against uncertainty.
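One way to see how example payoff structures like those of Tables 11 and 13 map onto the different game models is to classify a symmetric 2 × 2 game by the row player's preference ordering over the four outcomes. This is a sketch using common textbook orderings; the numeric payoffs are my own illustrative choices, not the essay's.

```python
# Classify a symmetric 2x2 game from the row player's payoffs for the four
# outcomes: CC (both cooperate), CD (I cooperate, you defect),
# DC (I defect, you cooperate), DD (both defect).

def classify(cc, cd, dc, dd):
    if dc > cc > dd > cd:
        return "Prisoner's Dilemma"  # defection strictly dominates
    if dc > cc > cd > dd:
        return "Chicken"             # mutual defection is the worst outcome
    if cc > dc >= dd > cd:
        return "Stag Hunt"           # mutual cooperation is best, but risky
    return "other"

print(classify(cc=3, cd=0, dc=5, dd=1))  # Prisoner's Dilemma
print(classify(cc=3, cd=1, dc=5, dd=0))  # Chicken
print(classify(cc=5, cd=0, dc=3, dd=1))  # Stag Hunt
```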
In this article, we employ a class of symmetric, ordinal 2 × 2 games, including the frequently studied Prisoner's Dilemma, Chicken, and Stag Hunt, to model the stability of the social contract in the face of catastrophic changes in social relations.
[4] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014).

In game theory, the stag hunt is a game that describes a conflict between safety and social cooperation. In the context of the AI Coordination Problem, a Stag Hunt is the most desirable outcome, as mutual cooperation results in the lowest risk of racing dynamics and the associated risk of developing a harmful AI. We find that individuals under the time-pressure treatment are more likely to play stag (vs. hare) than individuals in the control group: under time constraints, 62.85% of players are stag-hunters. A sudden drop in current troop levels will likely trigger a series of responses that undermine the very peace and stability the United States hopes to achieve. Before getting to the theory, I will briefly examine the literature on military technology, arms racing, and cooperation. Moreover, the AI Coordination Regime is arranged such that Actor B is more likely to gain a higher distribution of AI's benefits.

[28] Once this Pandora's Box is opened, it will be difficult to close.

In the most common account of this dilemma, which is quite different from Rousseau's, two hunters must decide separately, and without the other knowing, whether to hunt a stag or a hare. Exceptions to the prohibition on the use of force include the 'inherent' right to individual and collective self-defence recognized by Article 51 of the Charter and enforcement measures involving the use of force sanctioned by the Security Council under Chapter VII thereof. Actor A's preference order: DC > CC > CD > DD. Actor B's preference order: CD > CC > DC > DD. In this example, each player has a dominant strategy. Each player must choose an action without knowing the choice of the other.
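The dominant-strategy claim for the Prisoner's Dilemma can be checked mechanically. The payoffs below are illustrative, not taken from the essay's tables.

```python
# Check for a dominant strategy in a Prisoner's Dilemma (row payoffs only;
# by symmetry the column player faces the same incentives).
payoff = {  # (row action, column action) -> row player's payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}
# "D" strictly dominates "C" if it does better against every column action.
defect_dominates = all(payoff[("D", col)] > payoff[("C", col)] for col in ("C", "D"))
print(defect_dominates)  # True: each player defects regardless of the other
```

Running the same check on stag-hunt payoffs would return False, since the best reply there depends on what the other hunter does.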
I discuss in this final section the relevant policy and strategic implications this theory has for achieving international AI coordination, and I assess the strengths and limitations of the theory outlined above in practice. Because of its capacity to radically affect military and intelligence systems, AI research becomes an important consideration in national security and is unlikely to be ignored by political and military leaders.
[43] Edward Moore Geist, "It's already too late to stop the AI arms race: We must manage it instead," Bulletin of the Atomic Scientists 72, 5 (2016): 318–321.

Additionally, the feedback, discussion, resource recommendations, and inspiring work of friends, colleagues, and mentors in several time zones, especially Amy Fan, Carrick Flynn, Will Hunt, Jade Leung, Matthijs Maas, Peter McIntyre, Professor Nuno Monteiro, Gabe Rissman, Thomas Weng, Baobao Zhang, and Remco Zwetsloot, were vital to this paper and are profoundly appreciated.

The payoff matrix in Figure 1 illustrates a generic stag hunt. Hume's second example involves two neighbors wishing to drain a meadow. Your application of the Prisoner's Dilemma (PD) game to international trade agreements raises a few very interesting and important questions for the application of game theory to real-life strategic situations. Back to the lionesses in Etosha National Park. Payoff matrix for simulated Prisoner's Dilemma.

[52] Stefan Persson, "Deadlocks in International Negotiation," Cooperation and Conflict 29, 3 (1994): 211–244.

If, by contrast, each hunter patiently keeps his or her post, everyone will be rewarded with a lavish feast. These are a few basic examples of modeling IR problems with game theory. Using game theory as a way of modeling strategically motivated decisions has direct implications for understanding basic international relations issues. Civilians and civilian objects are protected under the laws of armed conflict by the principle of distinction. Therefore, an agreement to play (c,c) conveys no information about what the players will do, and cannot be considered self-enforcing. Different social/cultural systems are prone to clash.
Image: The Intelligence, Surveillance and Reconnaissance Division at the Combined Air Operations Center at Al Udeid Air Base, Qatar.

In addition to boasting the world's largest economies, China and the U.S. also lead the world in A.I. A person's choice to bind himself to a social contract depends entirely on his beliefs about whether or not the other person or people will choose to do the same. This democratic peace proposition not only challenges the validity of other political systems (i.e., fascism, communism, authoritarianism, totalitarianism), but also the prevailing realist account of international relations, which emphasises balance-of-power calculations and common strategic interests in order to explain the peace and stability that characterise relations between liberal democracies. Moreover, they also argue that pursuing all strategies at once would be suboptimal (or even impossible due to mutual exclusivity), making it even more important to know what sort of game you're playing before pursuing a strategy.[59] Here, both actors demonstrate high uncertainty about whether they will develop a beneficial or harmful AI alone (both actors see the likelihood as a 50/50 split), but they perceive the potential benefits of AI to be slightly greater than the potential harms. The Stag Hunt is a story that became a game. In this paper, I develop a simple theory to explain whether two international actors are likely to cooperate or compete in developing AI and analyze what variables factor into this assessment. As a result, this could reduce a rival actor's perceived relative benefits gained from developing AI. In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma, or common interest game, describes a conflict between safety and social cooperation.
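The 50/50 scenario described above can be sketched numerically. The benefit and harm magnitudes are assumptions chosen only so that perceived benefits slightly exceed perceived harms, as the scenario stipulates.

```python
# Expected value of unilaterally racing ahead when an actor believes it is
# equally likely to produce a beneficial or a harmful AI, but perceives the
# benefit as slightly larger than the harm (illustrative magnitudes).
p_beneficial = 0.5
benefit, harm = 10.0, 9.0  # assumed: benefits slightly exceed harms

expected_value = p_beneficial * benefit - (1 - p_beneficial) * harm
print(expected_value)  # 0.5: still positive, so racing remains tempting
```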
The payoff matrix is displayed as Table 12.

[31] Meanwhile, U.S. military and intelligence agencies like the NSA and DARPA continue to fund public AI research.

[18] Deena Zaidi, "The 3 most valuable applications of AI in health care," VentureBeat, April 22, 2018, https://venturebeat.com/2018/04/22/the-3-most-valuable-applications-of-ai-in-health-care/.

If the regime allows for multilateral development, for example, the actors might agree that whoever reaches AI first receives 60% of the benefit, while the other actor receives 40% of the benefit. Instead, each hunter should separately choose the more ambitious and far more rewarding goal of getting the stag, thereby giving up some autonomy in exchange for the other hunter's cooperation and added might.

[16] Google DeepMind, "DeepMind and Blizzard open StarCraft II as an AI research environment," https://deepmind.com/blog/deepmind-and-blizzard-open-starcraft-ii-ai-research-environment/.

We can see through studying the Stag Hunt that, even though we are selfish, we are still, ironically, aiming for mutual benefit, and thus we tend to follow such a social contract. This essay first appeared in the Acheson Prize 2018 Issue of the Yale Review of International Studies.

[21] Moreover, racist algorithms[22] and lethal autonomous weapons systems[23] force us to grapple with difficult ethical questions as we apply AI to more societal realms.

The area of international relations theory that is most characterized by overt metaphorical imagery is that of game theory. Although the imagery of game theory would suggest that the games were outgrowths of metaphorical thinking, the origins of game theory are actually to be found in mathematics. In international relations, examples of Chicken have included the Cuban Missile Crisis and the concept of Mutually Assured Destruction in nuclear arms development.
But cooperation is not easy.
Interestingly enough, the Stag Hunt theory can be used to describe social contracts within society, with the contract being the agreement to hunt the stag, that is, to achieve mutual benefit. As stated before, achieving a scenario where both actors perceive themselves to be in a Stag Hunt is the most desirable situation for maximizing safety from an AI catastrophe, since both actors are primed to cooperate and will maximize their benefits from doing so. The remainder of this section looks at these payoffs and the variables that determine them in more detail.[53]