Artificial Intelligence: American Attitudes and Trends
January 2019
1 Executive summary
Advances in artificial intelligence (AI)[1] could impact nearly all aspects of society: the labor market, transportation, healthcare, education, and national security. AI’s effects may be profoundly positive, but the technology entails risks and disruptions that warrant attention. While technologists and policymakers have begun to discuss AI and applications of machine learning more frequently, public opinion has so far played little role in shaping these conversations. In the U.S., public sentiments have shaped many policy debates, including those about immigration, free trade, international conflicts, and climate change mitigation. As in these other policy domains, we expect the public to become more influential over time. It is thus vital to have a better understanding of how the public thinks about AI and the governance of AI. Such understanding is essential to crafting informed policy and identifying opportunities to educate the public about AI’s character, benefits, and risks.
In this report, we present the results from an extensive look at the American public’s attitudes toward AI and AI governance. As the study of public opinion toward AI is relatively new, we aimed for breadth over depth, with our questions touching on: workplace automation; attitudes regarding international cooperation; the public’s trust in various actors to develop and regulate AI; views about the importance and likely impact of different AI governance challenges; and historical and cross-national trends in public opinion regarding AI. Our results provide preliminary insights into the character of U.S. public opinion regarding AI. However, our findings raise more questions than they answer; they are more suggestive than conclusive. Accordingly, we recommend caution in interpreting the results; we confine ourselves primarily to reporting them. More work is needed to gain a deeper understanding of public opinion toward AI.
Supported by a grant from the Ethics and Governance of AI Fund, we intend to conduct more extensive and intensive surveys in the coming years, including of residents in Europe, China, and other countries. We welcome collaborators, especially experts on particular policy domains, on future surveys. Survey inquiries can be emailed to surveys@governance.ai.
This report is based on findings from a nationally representative survey conducted by the Center for the Governance of AI, housed at the Future of Humanity Institute, University of Oxford, using the survey firm YouGov. The survey was conducted between June 6 and 14, 2018, with a total of 2,000 American adults (18+) completing the survey. The analysis of this survey was pre-registered on the Open Science Framework. Appendix A provides further details regarding the data collection and analysis process.
1.1 Select results
Below we highlight some results from our survey:[2]
Americans express mixed support for the development of AI. After reading a short explanation, a substantial minority (41%) somewhat or strongly supports the development of AI, while a smaller minority (22%) somewhat or strongly opposes it.
Demographic characteristics account for substantial variation in support for developing AI. Substantially more support for developing AI is expressed by college graduates (57%) than by those with a high school education or less (29%); by those with larger reported household incomes, such as those earning over $100,000 annually (59%), than by those earning less than $30,000 (33%); by those with computer science or programming experience (58%) than by those without (31%); and by men (47%) than by women (35%). These differences are not easily explained away by other characteristics (they are robust to our multiple regression).
The overwhelming majority of Americans (82%) believe that robots and/or AI should be carefully managed. This figure is comparable with survey results from EU respondents.
Americans consider all of the thirteen AI governance challenges presented in the survey to be important for governments and technology companies to manage carefully. The governance challenges perceived to be the most likely to impact people around the world within the next decade and rated the highest in issue importance were:[3]
- Preventing AI-assisted surveillance from violating privacy and civil liberties
- Preventing AI from being used to spread fake and harmful content online
- Preventing AI cyber attacks against governments, companies, organizations, and individuals
- Protecting data privacy
We also asked the above question, but focused on the likelihood of the governance challenge impacting solely Americans (rather than people around the world). Americans perceive that all of the governance challenges presented, except for protecting data privacy and ensuring that autonomous vehicles are safe, are slightly more likely to impact people around the world than to impact Americans within the next 10 years.
Americans have discernibly different levels of trust in various organizations to develop and manage[4] AI in the best interests of the public. Broadly, the public puts the most trust in university researchers (50% reporting “a fair amount of confidence” or “a great deal of confidence”) and the U.S. military (49%); followed by scientific organizations, the Partnership on AI, technology companies (excluding Facebook), and intelligence organizations; followed by U.S. federal or state governments, and the UN; followed by Facebook.
Americans express mixed support (1) for the U.S. investing more in AI military capabilities and (2) for cooperating with China to avoid the dangers of an AI arms race. Providing respondents with information about the risks of a U.S.-China AI arms race slightly decreases support for the U.S. investing more in AI military capabilities. Providing a pro-nationalist message or a message about AI’s threat to humanity failed to affect Americans’ policy preferences.
The median respondent predicts that there is a 54% chance that high-level machine intelligence will be developed by 2028. We define high-level machine intelligence as when machines are able to perform almost all tasks that are economically relevant today better than the median human (today) at each task. See Appendix B for a detailed definition.
Americans express weak support for developing high-level machine intelligence: 31% of Americans support its development while 27% oppose it.
Demographic characteristics account for substantial variation in support for developing high-level machine intelligence. Substantially more support for developing high-level machine intelligence is expressed by those with larger reported household incomes, such as those earning over $100,000 annually (47%), than by those earning less than $30,000 (24%); by those with computer science or programming experience (45%) than by those without (23%); and by men (39%) than by women (25%). These differences are not easily explained away by other characteristics (they are robust to our multiple regression).
More Americans think that high-level machine intelligence will be harmful to humanity than think it will be beneficial: 22% think that the technology will be “on balance bad,” and 12% think that it will be “extremely bad,” possibly leading to human extinction. Still, 21% think it will be “on balance good,” and 5% think it will be “extremely good.”
1.2 Reading notes
In all tables and charts, results are weighted to be representative of the U.S. adult population, unless otherwise specified. We use the weights provided by YouGov.
Wherever possible, we report the margins of error (MOEs), confidence regions, and error bars at the 95% confidence level.
For tabulation purposes, percentages in the figures are rounded to the nearest whole number. As a result, the percentages in a given figure may total slightly higher or lower than 100%. Summary statistics reported to two decimal places are provided in Appendix B.
1.3 Press coverage
Select press coverage includes the following:
- Vox: “The American public is already worried about AI catastrophe” (by Kelsey Piper), January 9, 2019.
- MIT Technology Review: “Americans want to regulate AI but don’t trust anyone to do it” (by Karen Hao), January 10, 2019.
- Axios: “America is split over advancing artificial intelligence” (by Kaveh Waddell), January 10, 2019.
- Bloomberg: “U.S. military trusted more than Google, Facebook to develop AI” (by Jeremy Kahn), January 10, 2019.
- Future of Life Institute Podcast: “Artificial Intelligence: American Attitudes and Trends with Baobao Zhang” (hosted by Ariel Conn), January 24, 2019.
[1] We define AI as machine systems capable of sophisticated (intelligent) information processing. For other definitions, see Footnote 2 in Dafoe (2018).
[2] These results are presented roughly in the order in which the questions were presented to respondents.
[3] Giving equal weight to the likelihood and the rated importance of each challenge.
[4] Our survey asked separately about trust in (1) building and (2) managing the development and use of AI. Results are similar and are combined here.