Appendix B: Topline questionnaire
Below, we present the survey text as shown to respondents. The numerical codings are shown in parentheses following each answer choice.
In addition, we report the topline results: percentages weighted to be representative of the U.S. adult population, the unweighted raw percentages, and the raw frequencies. Note that in all survey experiments, respondents were randomly assigned to each experimental group with equal probability.
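The weighted and unweighted percentages reported throughout this appendix can be related as follows. This is a minimal illustrative sketch, not the actual estimation code: the responses and survey weights below are hypothetical.

```python
# Sketch: computing a weighted topline from raw responses.
# Each respondent has a survey weight; weighted % uses the weights,
# unweighted % is the raw share, and frequency is the raw count.
from collections import defaultdict

def topline(responses, weights):
    """Return (weighted %, unweighted %, frequency) per answer choice."""
    wtotal = sum(weights)
    n = len(responses)
    wsum = defaultdict(float)
    freq = defaultdict(int)
    for r, w in zip(responses, weights):
        wsum[r] += w
        freq[r] += 1
    return {a: (100 * wsum[a] / wtotal, 100 * freq[a] / n, freq[a])
            for a in freq}

# Hypothetical data: three "Support" answers carry total weight 3.0 of 4.0,
# so the weighted and unweighted shares happen to coincide here.
result = topline(["Support", "Oppose", "Support", "Support"],
                 [0.5, 1.0, 1.5, 1.0])
```

When the weights are not all equal across answer groups, the weighted and unweighted columns diverge, which is the pattern visible in the tables below.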
Global risks
[All respondents were presented with the following prompt.]
We want to get your opinion about global risks. A “global risk” is an uncertain event or condition that, if it happens, could cause a significant negative impact for at least 10 percent of the world’s population. That is, at least 1 in 10 people around the world could experience a significant negative impact.
You will be asked to consider 5 potential global risks.
[Respondents were presented with five items randomly selected from the list below. One item was shown at a time.]
- Failure to address climate change: Continued failure of governments and businesses to pass effective measures to reduce climate change, protect people, and help those impacted by climate change to adapt.
- Failure of regional or global governance: Regional organizations (e.g., the European Union) or global organizations (e.g., the United Nations) are unable to resolve issues of economic, political, or environmental importance.
- Conflict between major countries: Disputes between major countries that lead to economic, military, cyber, or societal conflicts.
- Weapons of mass destruction: Use of nuclear, chemical, biological or radiological weapons, creating international crises and killing large numbers of people.
- Large-scale involuntary migration: Large-scale involuntary movement of people, such as refugees, caused by conflict, disasters, environmental or economic reasons.
- Rapid and massive spread of infectious diseases: The uncontrolled spread of infectious diseases, for instance as a result of resistance to antibiotics, that leads to widespread deaths and economic disruptions.
- Water crises: A large decline in the available quality and quantity of fresh water that harms human health and economic activity.
- Food crises: Large numbers of people are unable to buy or access food.
- Harmful consequences of artificial intelligence (AI): Intended or unintended consequences of artificial intelligence that cause widespread harm to humans, the economy, and the environment.
- Harmful consequences of synthetic biology: Intended or unintended consequences of synthetic biology, such as genetic engineering, that cause widespread harm to humans, the economy, and the environment.
- Large-scale cyber attacks: Large-scale cyber attacks that cause large economic damages, tensions between countries, and widespread loss of trust in the internet.
- Large-scale terrorist attacks: Individuals or non-government groups with political or religious goals that cause large numbers of deaths and major material damage.
- Global recession: Economic decline in several major countries that leads to a decrease in income and high unemployment.
- Extreme weather events: Extreme weather events that cause large numbers of deaths as well as damage to property, infrastructure, and the environment.
- Major natural disasters: Earthquakes, volcanic activity, landslides, tsunamis, or geomagnetic storms that cause large numbers of deaths as well as damage to property, infrastructure, and the environment.
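The item randomization described above (five of the fifteen risks per respondent, drawn with equal probability) can be sketched as follows; the short labels are our own abbreviations of the items listed above.

```python
# Sketch of the item randomization: each respondent sees 5 of the 15
# global risks, drawn uniformly without replacement.
import random

RISKS = [
    "Climate change", "Regional/global governance", "Major-country conflict",
    "Weapons of mass destruction", "Involuntary migration",
    "Infectious diseases", "Water crises", "Food crises",
    "Harmful AI", "Harmful synthetic biology", "Cyber attacks",
    "Terrorist attacks", "Global recession", "Extreme weather",
    "Natural disasters",
]

def draw_items(risks, k=5, rng=random):
    """Uniform sample of k distinct items, shown one at a time."""
    return rng.sample(risks, k)
```

Because the draw is uniform and without replacement, each risk appears for a given respondent with probability 5/15, which is why the per-item sample sizes in the tables below cluster around 2000 × 1/3 ≈ 667 (the food-crises item was shown to additional respondents).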
QUESTION:
What is the likelihood of [INSERT GLOBAL RISK] happening globally within the next 10 years? Please use the slider to indicate your answer. 0% chance means it will certainly not happen and 100% chance means it will certainly happen.
ANSWER CHOICES:
- Very unlikely: less than 5% chance (2.5%)
- Unlikely: 5-20% chance (12.5%)
- Somewhat unlikely: 20-40% chance (30%)
- Equally likely as unlikely: 40-60% chance (50%)
- Somewhat likely: 60-80% chance (70%)
- Likely: 80-95% chance (87.5%)
- Very likely: more than 95% chance (97.5%)
- I don’t know
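The parenthetical codings above are the midpoints of each probability interval, which lets the categorical responses be summarized on a 0–100 scale. A minimal sketch (the function name is ours, not from the report):

```python
# The parenthetical codings are interval midpoints on a 0-100 % scale.
MIDPOINT = {
    "Very unlikely": 2.5,
    "Unlikely": 12.5,
    "Somewhat unlikely": 30.0,
    "Equally likely as unlikely": 50.0,
    "Somewhat likely": 70.0,
    "Likely": 87.5,
    "Very likely": 97.5,
}

def mean_likelihood(responses):
    """Mean perceived % chance, excluding 'I don't know' responses."""
    coded = [MIDPOINT[r] for r in responses if r in MIDPOINT]
    return sum(coded) / len(coded) if coded else None
```

Note that "I don't know" has no numerical coding and is dropped before averaging under this scheme.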
QUESTION:
If [INSERT GLOBAL RISK] were to happen, what would be the size of the negative impact for several countries or industries within the next 10 years?
ANSWER CHOICES:
- Minimal (0)
- Minor (1)
- Moderate (2)
- Severe (3)
- Catastrophic (4)
- I don’t know
Table B.1: Likelihood - Failure to address climate change; N = 666

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 10.53 | 10.21 | 68 |
| Unlikely 5-20% | 6.87 | 6.46 | 43 |
| Somewhat unlikely 20-40% | 11.61 | 11.41 | 76 |
| Equally likely as unlikely 40-60% | 18.44 | 18.62 | 124 |
| Somewhat likely 60-80% | 15.81 | 15.77 | 105 |
| Likely 80-95% | 13.47 | 13.81 | 92 |
| Very likely > 95% | 16.00 | 16.37 | 109 |
| I don’t know | 7.17 | 7.21 | 48 |
| Skipped | 0.10 | 0.15 | 1 |

Table B.2: Likelihood - Failure of regional/global governance; N = 652

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 5.40 | 5.52 | 36 |
| Unlikely 5-20% | 7.99 | 7.98 | 52 |
| Somewhat unlikely 20-40% | 12.14 | 12.42 | 81 |
| Equally likely as unlikely 40-60% | 24.71 | 24.39 | 159 |
| Somewhat likely 60-80% | 17.80 | 18.10 | 118 |
| Likely 80-95% | 11.54 | 11.96 | 78 |
| Very likely > 95% | 8.86 | 9.51 | 62 |
| I don’t know | 10.96 | 9.66 | 63 |
| Skipped | 0.58 | 0.46 | 3 |

Table B.3: Likelihood - Conflict between major countries; N = 625

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 3.47 | 3.36 | 21 |
| Unlikely 5-20% | 6.45 | 7.04 | 44 |
| Somewhat unlikely 20-40% | 10.68 | 10.40 | 65 |
| Equally likely as unlikely 40-60% | 22.16 | 20.64 | 129 |
| Somewhat likely 60-80% | 22.46 | 23.36 | 146 |
| Likely 80-95% | 13.92 | 14.24 | 89 |
| Very likely > 95% | 12.21 | 12.80 | 80 |
| I don’t know | 8.49 | 8.00 | 50 |
| Skipped | 0.16 | 0.16 | 1 |

Table B.4: Likelihood - Weapons of mass destruction; N = 645

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 7.05 | 6.67 | 43 |
| Unlikely 5-20% | 13.71 | 13.80 | 89 |
| Somewhat unlikely 20-40% | 15.19 | 15.04 | 97 |
| Equally likely as unlikely 40-60% | 24.33 | 24.19 | 156 |
| Somewhat likely 60-80% | 17.15 | 17.36 | 112 |
| Likely 80-95% | 9.26 | 9.15 | 59 |
| Very likely > 95% | 6.44 | 6.98 | 45 |
| I don’t know | 6.87 | 6.82 | 44 |
| Skipped | 0 | 0 | 0 |

Table B.5: Likelihood - Large-scale involuntary migration; N = 628

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 6.70 | 6.53 | 41 |
| Unlikely 5-20% | 7.83 | 7.32 | 46 |
| Somewhat unlikely 20-40% | 11.57 | 11.62 | 73 |
| Equally likely as unlikely 40-60% | 18.65 | 18.31 | 115 |
| Somewhat likely 60-80% | 20.91 | 21.34 | 134 |
| Likely 80-95% | 13.63 | 14.01 | 88 |
| Very likely > 95% | 12.31 | 13.06 | 82 |
| I don’t know | 8.27 | 7.64 | 48 |
| Skipped | 0.12 | 0.16 | 1 |

Table B.6: Likelihood - Spread of infectious diseases; N = 620

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 4.76 | 4.03 | 25 |
| Unlikely 5-20% | 13.12 | 13.06 | 81 |
| Somewhat unlikely 20-40% | 17.24 | 17.58 | 109 |
| Equally likely as unlikely 40-60% | 22.76 | 23.39 | 145 |
| Somewhat likely 60-80% | 17.55 | 17.58 | 109 |
| Likely 80-95% | 10.07 | 10.00 | 62 |
| Very likely > 95% | 6.94 | 6.94 | 43 |
| I don’t know | 7.46 | 7.26 | 45 |
| Skipped | 0.12 | 0.16 | 1 |

Table B.7: Likelihood - Water crises; N = 623

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 6.37 | 6.10 | 38 |
| Unlikely 5-20% | 9.71 | 10.43 | 65 |
| Somewhat unlikely 20-40% | 13.22 | 13.64 | 85 |
| Equally likely as unlikely 40-60% | 21.23 | 21.03 | 131 |
| Somewhat likely 60-80% | 20.26 | 19.26 | 120 |
| Likely 80-95% | 11.04 | 10.91 | 68 |
| Very likely > 95% | 10.83 | 11.72 | 73 |
| I don’t know | 7.33 | 6.90 | 43 |
| Skipped | 0 | 0 | 0 |

Table B.8: Likelihood - Food crises; N = 1073

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 6.29 | 5.96 | 64 |
| Unlikely 5-20% | 12.53 | 11.65 | 125 |
| Somewhat unlikely 20-40% | 14.49 | 14.82 | 159 |
| Equally likely as unlikely 40-60% | 22.53 | 22.55 | 242 |
| Somewhat likely 60-80% | 16.90 | 17.24 | 185 |
| Likely 80-95% | 10.46 | 10.90 | 117 |
| Very likely > 95% | 9.38 | 10.07 | 108 |
| I don’t know | 7.36 | 6.71 | 72 |
| Skipped | 0.08 | 0.09 | 1 |

Table B.9: Likelihood - Harmful consequences of AI; N = 573

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 11.26 | 11.34 | 65 |
| Unlikely 5-20% | 16.43 | 16.06 | 92 |
| Somewhat unlikely 20-40% | 15.95 | 15.53 | 89 |
| Equally likely as unlikely 40-60% | 19.36 | 20.07 | 115 |
| Somewhat likely 60-80% | 11.56 | 11.34 | 65 |
| Likely 80-95% | 8.30 | 8.03 | 46 |
| Very likely > 95% | 7.71 | 7.85 | 45 |
| I don’t know | 9.43 | 9.77 | 56 |
| Skipped | 0 | 0 | 0 |

Table B.10: Likelihood - Harmful consequences of synthetic biology; N = 630

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 9.92 | 9.68 | 61 |
| Unlikely 5-20% | 15.66 | 15.08 | 95 |
| Somewhat unlikely 20-40% | 15.06 | 15.24 | 96 |
| Equally likely as unlikely 40-60% | 23.48 | 22.86 | 144 |
| Somewhat likely 60-80% | 12.32 | 12.86 | 81 |
| Likely 80-95% | 7.47 | 7.62 | 48 |
| Very likely > 95% | 6.04 | 6.19 | 39 |
| I don’t know | 10.06 | 10.48 | 66 |
| Skipped | 0 | 0 | 0 |

Table B.11: Likelihood - Cyber attacks; N = 650

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 2.04 | 2.15 | 14 |
| Unlikely 5-20% | 4.28 | 3.69 | 24 |
| Somewhat unlikely 20-40% | 7.74 | 7.85 | 51 |
| Equally likely as unlikely 40-60% | 15.78 | 16.15 | 105 |
| Somewhat likely 60-80% | 22.66 | 21.85 | 142 |
| Likely 80-95% | 16.44 | 16.62 | 108 |
| Very likely > 95% | 22.40 | 23.54 | 153 |
| I don’t know | 8.53 | 8.00 | 52 |
| Skipped | 0.12 | 0.15 | 1 |

Table B.12: Likelihood - Terrorist attacks; N = 635

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 5.21 | 4.88 | 31 |
| Unlikely 5-20% | 4.53 | 4.88 | 31 |
| Somewhat unlikely 20-40% | 12.43 | 11.81 | 75 |
| Equally likely as unlikely 40-60% | 19.47 | 19.21 | 122 |
| Somewhat likely 60-80% | 22.28 | 22.52 | 143 |
| Likely 80-95% | 15.74 | 15.43 | 98 |
| Very likely > 95% | 12.45 | 12.91 | 82 |
| I don’t know | 7.89 | 8.35 | 53 |
| Skipped | 0 | 0 | 0 |

Table B.13: Likelihood - Global recession; N = 599

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 4.17 | 3.67 | 22 |
| Unlikely 5-20% | 7.34 | 7.18 | 43 |
| Somewhat unlikely 20-40% | 12.68 | 12.85 | 77 |
| Equally likely as unlikely 40-60% | 23.43 | 24.21 | 145 |
| Somewhat likely 60-80% | 23.83 | 23.04 | 138 |
| Likely 80-95% | 10.80 | 10.85 | 65 |
| Very likely > 95% | 8.34 | 8.68 | 52 |
| I don’t know | 9.41 | 9.52 | 57 |
| Skipped | 0 | 0 | 0 |

Table B.14: Likelihood - Extreme weather events; N = 613

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 3.52 | 3.10 | 19 |
| Unlikely 5-20% | 5.64 | 5.22 | 32 |
| Somewhat unlikely 20-40% | 8.77 | 8.81 | 54 |
| Equally likely as unlikely 40-60% | 20.12 | 18.76 | 115 |
| Somewhat likely 60-80% | 18.09 | 18.27 | 112 |
| Likely 80-95% | 13.02 | 14.03 | 86 |
| Very likely > 95% | 24.95 | 25.45 | 156 |
| I don’t know | 5.89 | 6.36 | 39 |
| Skipped | 0 | 0 | 0 |

Table B.15: Likelihood - Natural disasters; N = 637

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Very unlikely < 5% | 2.47 | 2.51 | 16 |
| Unlikely 5-20% | 4.10 | 4.08 | 26 |
| Somewhat unlikely 20-40% | 7.32 | 7.06 | 45 |
| Equally likely as unlikely 40-60% | 17.63 | 17.74 | 113 |
| Somewhat likely 60-80% | 19.43 | 19.15 | 122 |
| Likely 80-95% | 18.12 | 18.05 | 115 |
| Very likely > 95% | 25.73 | 26.37 | 168 |
| I don’t know | 5.21 | 5.02 | 32 |
| Skipped | 0 | 0 | 0 |
Table B.16: Size of negative impact - Failure to address climate change; N = 666

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 13.46 | 13.96 | 93 |
| Minor | 11.26 | 10.96 | 73 |
| Moderate | 23.37 | 23.27 | 155 |
| Severe | 28.41 | 28.08 | 187 |
| Catastrophic | 14.26 | 14.56 | 97 |
| I don’t know | 9.13 | 9.01 | 60 |
| Skipped | 0.10 | 0.15 | 1 |

Table B.17: Size of negative impact - Failure of regional/global governance; N = 652

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 6.04 | 5.98 | 39 |
| Minor | 6.09 | 5.67 | 37 |
| Moderate | 28.68 | 28.99 | 189 |
| Severe | 33.21 | 34.05 | 222 |
| Catastrophic | 10.76 | 10.89 | 71 |
| I don’t know | 15.12 | 14.26 | 93 |
| Skipped | 0.10 | 0.15 | 1 |

Table B.18: Size of negative impact - Conflict between major countries; N = 625

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 1.18 | 0.96 | 6 |
| Minor | 4.94 | 4.80 | 30 |
| Moderate | 28.81 | 28.16 | 176 |
| Severe | 38.23 | 38.56 | 241 |
| Catastrophic | 14.80 | 16.00 | 100 |
| I don’t know | 11.89 | 11.36 | 71 |
| Skipped | 0.14 | 0.16 | 1 |

Table B.19: Size of negative impact - Weapons of mass destruction; N = 645

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 2.28 | 2.17 | 14 |
| Minor | 4.99 | 4.19 | 27 |
| Moderate | 13.57 | 13.49 | 87 |
| Severe | 31.05 | 31.01 | 200 |
| Catastrophic | 38.06 | 39.38 | 254 |
| I don’t know | 10.05 | 9.77 | 63 |
| Skipped | 0 | 0 | 0 |

Table B.20: Size of negative impact - Large-scale involuntary migration; N = 628

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 2.07 | 2.07 | 13 |
| Minor | 8.67 | 8.28 | 52 |
| Moderate | 25.63 | 25.96 | 163 |
| Severe | 35.31 | 36.15 | 227 |
| Catastrophic | 18.14 | 17.83 | 112 |
| I don’t know | 9.99 | 9.55 | 60 |
| Skipped | 0.19 | 0.16 | 1 |

Table B.21: Size of negative impact - Spread of infectious diseases; N = 620

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 2.72 | 2.58 | 16 |
| Minor | 6.03 | 5.65 | 35 |
| Moderate | 26.86 | 28.06 | 174 |
| Severe | 32.00 | 32.58 | 202 |
| Catastrophic | 20.50 | 20.48 | 127 |
| I don’t know | 11.88 | 10.65 | 66 |
| Skipped | 0 | 0 | 0 |

Table B.22: Size of negative impact - Water crises; N = 623

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 1.72 | 1.93 | 12 |
| Minor | 4.42 | 4.65 | 29 |
| Moderate | 19.92 | 19.42 | 121 |
| Severe | 36.71 | 36.44 | 227 |
| Catastrophic | 27.24 | 28.25 | 176 |
| I don’t know | 10.00 | 9.31 | 58 |
| Skipped | 0 | 0 | 0 |

Table B.23: Size of negative impact - Food crises; N = 1073

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 2.55 | 2.61 | 28 |
| Minor | 7.22 | 6.99 | 75 |
| Moderate | 22.81 | 22.37 | 240 |
| Severe | 33.93 | 34.67 | 372 |
| Catastrophic | 24.04 | 24.88 | 267 |
| I don’t know | 9.38 | 8.39 | 90 |
| Skipped | 0.08 | 0.09 | 1 |

Table B.24: Size of negative impact - Harmful consequences of AI; N = 573

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 7.54 | 7.50 | 43 |
| Minor | 14.82 | 13.79 | 79 |
| Moderate | 27.77 | 27.92 | 160 |
| Severe | 20.46 | 21.82 | 125 |
| Catastrophic | 14.62 | 14.31 | 82 |
| I don’t know | 14.79 | 14.66 | 84 |
| Skipped | 0 | 0 | 0 |

Table B.25: Size of negative impact - Harmful consequences of synthetic biology; N = 630

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 6.77 | 6.67 | 42 |
| Minor | 11.95 | 11.59 | 73 |
| Moderate | 28.40 | 27.94 | 176 |
| Severe | 26.03 | 26.03 | 164 |
| Catastrophic | 11.15 | 11.90 | 75 |
| I don’t know | 15.70 | 15.87 | 100 |
| Skipped | 0 | 0 | 0 |

Table B.26: Size of negative impact - Cyber attacks; N = 650

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 1.19 | 1.23 | 8 |
| Minor | 4.46 | 4.46 | 29 |
| Moderate | 21.43 | 21.23 | 138 |
| Severe | 38.26 | 37.69 | 245 |
| Catastrophic | 23.01 | 24.46 | 159 |
| I don’t know | 11.66 | 10.92 | 71 |
| Skipped | 0 | 0 | 0 |

Table B.27: Size of negative impact - Terrorist attacks; N = 635

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 2.61 | 2.68 | 17 |
| Minor | 6.11 | 6.14 | 39 |
| Moderate | 29.29 | 29.45 | 187 |
| Severe | 33.69 | 33.70 | 214 |
| Catastrophic | 15.97 | 15.91 | 101 |
| I don’t know | 12.32 | 12.13 | 77 |
| Skipped | 0 | 0 | 0 |

Table B.28: Size of negative impact - Global recession; N = 599

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 2.71 | 2.67 | 16 |
| Minor | 5.94 | 5.68 | 34 |
| Moderate | 29.89 | 29.72 | 178 |
| Severe | 35.49 | 36.23 | 217 |
| Catastrophic | 14.63 | 14.52 | 87 |
| I don’t know | 11.35 | 11.19 | 67 |
| Skipped | 0 | 0 | 0 |

Table B.29: Size of negative impact - Extreme weather events; N = 613

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 2.54 | 2.45 | 15 |
| Minor | 6.69 | 6.53 | 40 |
| Moderate | 25.94 | 26.43 | 162 |
| Severe | 32.50 | 31.97 | 196 |
| Catastrophic | 22.79 | 23.00 | 141 |
| I don’t know | 9.56 | 9.62 | 59 |
| Skipped | 0 | 0 | 0 |

Table B.30: Size of negative impact - Natural disasters; N = 637

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Minimal | 1.29 | 1.26 | 8 |
| Minor | 5.86 | 5.81 | 37 |
| Moderate | 22.26 | 23.08 | 147 |
| Severe | 36.41 | 36.11 | 230 |
| Catastrophic | 27.47 | 27.32 | 174 |
| I don’t know | 6.72 | 6.44 | 41 |
| Skipped | 0 | 0 | 0 |
Survey experiment: what the public considers AI, automation, machine learning, and robotics
[Respondents were randomly assigned to one of the four questions. The order of answer choices was randomized, except that “None of the above” was always shown last.]
QUESTIONS:
- In your opinion, which of the following technologies, if any, uses artificial intelligence (AI)? Select all that apply.
- In your opinion, which of the following technologies, if any, uses automation? Select all that apply.
- In your opinion, which of the following technologies, if any, uses machine learning? Select all that apply.
- In your opinion, which of the following technologies, if any, uses robotics? Select all that apply.
ANSWER CHOICES:
- Virtual assistants (e.g., Siri, Google Assistant, Amazon Alexa)
- Smart speakers (e.g., Amazon Echo, Google Home, Apple Homepod)
- Facebook photo tagging
- Google Search
- Recommendations for Netflix movies or Amazon ebooks
- Google Translate
- Driverless cars and trucks
- Social robots that can interact with humans
- Industrial robots used in manufacturing
- Drones that do not require a human controller
- None of the above
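Because respondents could select multiple technologies, the per-item percentages in Tables B.31–B.34 need not sum to 100. A minimal sketch of that tabulation, on hypothetical selections:

```python
# Sketch: per-item selection rates for a "select all that apply" question.
# Each respondent's answer is a list of selected items, so the resulting
# percentages can exceed 100 in total.
from collections import Counter

def selection_rates(selections, n_respondents):
    """% of respondents selecting each item."""
    counts = Counter(item for chosen in selections for item in chosen)
    return {item: 100 * c / n_respondents for item, c in counts.items()}

# Hypothetical: respondent 1 picks two items, respondent 2 picks one.
rates = selection_rates([["Virtual assistants", "Google Search"],
                         ["Virtual assistants"]], 2)
```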
Table B.31: Artificial intelligence (AI); N = 493

| Technology | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Virtual assistants (e.g., Siri, Google Assistant, Amazon Alexa) | 62.87 | 64.30 | 317 |
| Smart speakers (e.g., Amazon Echo, Google Home, Apple Homepod) | 55.46 | 56.19 | 277 |
| Facebook photo tagging | 36.16 | 36.51 | 180 |
| Google Search | 35.59 | 36.51 | 180 |
| Recommendations for Netflix movies or Amazon ebooks | 27.73 | 29.01 | 143 |
| Google Translate | 29.49 | 30.02 | 148 |
| Driverless cars and trucks | 56.38 | 57.20 | 282 |
| Social robots that can interact with humans | 63.63 | 64.10 | 316 |
| Industrial robots used in manufacturing | 40.11 | 40.16 | 198 |
| Drones that do not require a human controller | 53.48 | 52.74 | 260 |

Table B.32: Automation; N = 513

| Technology | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Virtual assistants (e.g., Siri, Google Assistant, Amazon Alexa) | 66.75 | 67.06 | 344 |
| Smart speakers (e.g., Amazon Echo, Google Home, Apple Homepod) | 60.81 | 61.01 | 313 |
| Facebook photo tagging | 43.74 | 45.42 | 233 |
| Google Search | 52.12 | 53.80 | 276 |
| Recommendations for Netflix movies or Amazon ebooks | 45.13 | 46.39 | 238 |
| Google Translate | 45.06 | 46.39 | 238 |
| Driverless cars and trucks | 68.16 | 68.62 | 352 |
| Social robots that can interact with humans | 64.00 | 64.72 | 332 |
| Industrial robots used in manufacturing | 64.70 | 65.11 | 334 |
| Drones that do not require a human controller | 65.04 | 65.69 | 337 |

Table B.33: Machine learning; N = 508

| Technology | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Virtual assistants (e.g., Siri, Google Assistant, Amazon Alexa) | 59.10 | 60.43 | 307 |
| Smart speakers (e.g., Amazon Echo, Google Home, Apple Homepod) | 46.70 | 46.65 | 237 |
| Facebook photo tagging | 35.37 | 36.81 | 187 |
| Google Search | 45.42 | 46.26 | 235 |
| Recommendations for Netflix movies or Amazon ebooks | 37.97 | 38.19 | 194 |
| Google Translate | 33.40 | 34.06 | 173 |
| Driverless cars and trucks | 52.96 | 54.33 | 276 |
| Social robots that can interact with humans | 59.19 | 59.45 | 302 |
| Industrial robots used in manufacturing | 37.41 | 37.80 | 192 |
| Drones that do not require a human controller | 49.03 | 49.41 | 251 |

Table B.34: Robotics; N = 486

| Technology | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Virtual assistants (e.g., Siri, Google Assistant, Amazon Alexa) | 45.27 | 46.30 | 225 |
| Smart speakers (e.g., Amazon Echo, Google Home, Apple Homepod) | 35.59 | 36.83 | 179 |
| Facebook photo tagging | 21.00 | 21.40 | 104 |
| Google Search | 22.07 | 23.25 | 113 |
| Recommendations for Netflix movies or Amazon ebooks | 17.84 | 18.31 | 89 |
| Google Translate | 20.30 | 21.19 | 103 |
| Driverless cars and trucks | 60.26 | 61.93 | 301 |
| Social robots that can interact with humans | 61.89 | 63.17 | 307 |
| Industrial robots used in manufacturing | 67.99 | 69.75 | 339 |
| Drones that do not require a human controller | 57.55 | 59.05 | 287 |
Knowledge of computer science (CS)/technology
QUESTION:
What is your knowledge of computer science/technology? (Select all that apply.)
ANSWER CHOICES:
- I have taken at least one college-level course in computer science.
- I have a computer science or engineering undergraduate degree.
- I have a graduate degree in computer science or engineering.
- I have programming experience.
- I don’t have any of the educational or work experiences described above.
Table B.35: Computer science/technology background; N = 2000

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Took at least one college-level course in CS | 24.73 | 25.05 | 501 |
| CS or engineering undergraduate degree | 7.12 | 7.30 | 146 |
| CS or engineering graduate degree | 3.85 | 3.75 | 75 |
| Have programming experience | 10.88 | 11.10 | 222 |
| None of the above | 63.68 | 63.20 | 1264 |
Support for developing AI
[All respondents were presented with the following prompt.]
Next, we would like to ask you questions about your attitudes toward artificial intelligence.
Artificial Intelligence (AI) refers to computer systems that perform tasks or make decisions that usually require human intelligence. AI can perform these tasks or make these decisions without explicit human instructions. Today, AI has been used in the following applications:
[Respondents were shown five items randomly selected from the list below.]
- Translate over 100 different languages
- Predict one’s Google searches
- Identify people from their photos
- Diagnose diseases like skin cancer and common illnesses
- Predict who are at risk of various diseases
- Help run factories and warehouses
- Block spam email
- Play computer games
- Help conduct legal case research
- Categorize photos and videos
- Detect plagiarism in essays
- Spot abusive messages on social media
- Predict what one is likely to buy online
- Predict what movies or TV shows one is likely to watch online
QUESTION:
How much do you support or oppose the development of AI?
ANSWER CHOICES:
- Strongly support (2)
- Somewhat support (1)
- Neither support nor oppose (0)
- Somewhat oppose (-1)
- Strongly oppose (-2)
- I don’t know
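The parenthetical codings place the support question on a -2 to +2 scale, so a mean support score can be computed from any topline distribution. A minimal sketch; the percentages in the test data are illustrative, not the survey's, and "I don't know" (uncoded) is excluded by renormalizing over coded answers:

```python
# Sketch: mean support on the -2..+2 scale from a topline distribution.
CODE = {"Strongly support": 2, "Somewhat support": 1,
        "Neither support nor oppose": 0,
        "Somewhat oppose": -1, "Strongly oppose": -2}

def mean_support(pct):
    """Weighted mean of coded responses, renormalized over coded answers."""
    total = sum(pct[a] for a in CODE if a in pct)
    return sum(CODE[a] * pct[a] for a in CODE if a in pct) / total
```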
Table B.36: Support for developing AI; N = 2000

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Strongly support | 12.58 | 12.65 | 253 |
| Somewhat support | 28.36 | 28.65 | 573 |
| Neither support nor oppose | 27.84 | 27.60 | 552 |
| Somewhat oppose | 12.79 | 12.75 | 255 |
| Strongly oppose | 8.90 | 9.05 | 181 |
| I don’t know | 9.54 | 9.30 | 186 |
| Skipped | 0 | 0 | 0 |
Survey experiment: AI and/or robots should be carefully managed
QUESTION:
Please tell me to what extent you agree or disagree with the following statement.
[Respondents were presented with one statement randomly selected from the list below.]
- AI and robots are technologies that require careful management.
- AI is a technology that requires careful management.
- Robots are technologies that require careful management.
ANSWER CHOICES:
- Totally agree (2)
- Tend to agree (1)
- Tend to disagree (-1)
- Totally disagree (-2)
- I don’t know
Table B.37: Responses to statement - AI and robots; N = 656

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Totally agree | 51.41 | 53.20 | 349 |
| Tend to agree | 30.09 | 28.96 | 190 |
| Tend to disagree | 4.79 | 3.81 | 25 |
| Totally disagree | 0.59 | 0.76 | 5 |
| I don’t know | 13.12 | 13.26 | 87 |
| Skipped | 0 | 0 | 0 |

Table B.38: Responses to statement - AI; N = 667

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Totally agree | 53.54 | 53.67 | 358 |
| Tend to agree | 30.85 | 30.13 | 201 |
| Tend to disagree | 3.67 | 3.90 | 26 |
| Totally disagree | 0.80 | 0.90 | 6 |
| I don’t know | 11.14 | 11.39 | 76 |
| Skipped | 0 | 0 | 0 |

Table B.39: Responses to statement - Robots; N = 677

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| Totally agree | 51.66 | 52.44 | 355 |
| Tend to agree | 30.31 | 31.31 | 212 |
| Tend to disagree | 5.76 | 5.17 | 35 |
| Totally disagree | 1.81 | 1.48 | 10 |
| I don’t know | 10.46 | 9.60 | 65 |
| Skipped | 0 | 0 | 0 |
Trust of actors to develop AI
QUESTION:
How much confidence, if any, do you have in each of the following to develop AI in the best interests of the public?
[Respondents were shown five items randomly selected from the list below. We included explainer text for actors not well known to the public; respondents could view the explainer text by hovering their mouse over the actor’s name. The items and the answer choices were shown in a matrix format.]
- The U.S. military
- The U.S. civilian government
- National Security Agency (NSA)
- Federal Bureau of Investigation (FBI)
- Central Intelligence Agency (CIA)
- North Atlantic Treaty Organization (NATO)
- Explainer text for NATO: NATO is a military alliance that includes 28 countries, including most of Europe as well as the U.S. and Canada.
- An international research organization (e.g., CERN)
- Explainer text for CERN: The European Organization for Nuclear Research, known as CERN, is a European research organization that operates the largest particle physics laboratory in the world.
- Tech companies
- Google
- Facebook
- Apple
- Microsoft
- Amazon
- A non-profit AI research organization (e.g., OpenAI)
- Explainer text for OpenAI: OpenAI is a non-profit AI research organization with backing from tech investors that seeks to develop safe AI.
- University researchers
ANSWER CHOICES:
- A great deal of confidence (3)
- A fair amount of confidence (2)
- Not too much confidence (1)
- No confidence (0)
- I don’t know
Table B.40: U.S. military; N = 638

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 17.16 | 17.08 | 109 |
| A fair amount of confidence | 32.19 | 30.88 | 197 |
| Not too much confidence | 23.92 | 24.14 | 154 |
| No confidence | 14.40 | 14.89 | 95 |
| I don’t know | 12.33 | 13.01 | 83 |
| Skipped | 0 | 0 | 0 |

Table B.41: U.S. civilian government; N = 671

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 5.59 | 5.66 | 38 |
| A fair amount of confidence | 24.04 | 24.29 | 163 |
| Not too much confidence | 32.77 | 33.23 | 223 |
| No confidence | 23.80 | 23.40 | 157 |
| I don’t know | 13.79 | 13.41 | 90 |
| Skipped | 0 | 0 | 0 |

Table B.42: NSA; N = 710

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 9.63 | 9.30 | 66 |
| A fair amount of confidence | 28.04 | 26.90 | 191 |
| Not too much confidence | 26.65 | 26.76 | 190 |
| No confidence | 22.82 | 24.37 | 173 |
| I don’t know | 12.87 | 12.68 | 90 |
| Skipped | 0 | 0 | 0 |

Table B.43: FBI; N = 656

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 9.26 | 9.60 | 63 |
| A fair amount of confidence | 26.20 | 25.46 | 167 |
| Not too much confidence | 25.07 | 25.15 | 165 |
| No confidence | 27.10 | 27.44 | 180 |
| I don’t know | 12.25 | 12.20 | 80 |
| Skipped | 0.14 | 0.15 | 1 |

Table B.44: CIA; N = 730

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 8.43 | 8.77 | 64 |
| A fair amount of confidence | 26.10 | 25.07 | 183 |
| Not too much confidence | 26.80 | 26.99 | 197 |
| No confidence | 25.61 | 26.30 | 192 |
| I don’t know | 12.93 | 12.74 | 93 |
| Skipped | 0.13 | 0.14 | 1 |

Table B.45: NATO; N = 695

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 4.40 | 4.17 | 29 |
| A fair amount of confidence | 25.41 | 24.75 | 172 |
| Not too much confidence | 25.98 | 26.62 | 185 |
| No confidence | 23.13 | 24.03 | 167 |
| I don’t know | 21.08 | 20.43 | 142 |
| Skipped | 0 | 0 | 0 |

Table B.46: Intergovernmental research organizations (e.g., CERN); N = 645

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 11.97 | 12.25 | 79 |
| A fair amount of confidence | 28.87 | 28.84 | 186 |
| Not too much confidence | 22.94 | 22.64 | 146 |
| No confidence | 16.85 | 16.59 | 107 |
| I don’t know | 19.37 | 19.69 | 127 |
| Skipped | 0 | 0 | 0 |

Table B.47: Tech companies; N = 674

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 10.28 | 10.83 | 73 |
| A fair amount of confidence | 34.15 | 34.57 | 233 |
| Not too much confidence | 28.40 | 27.15 | 183 |
| No confidence | 14.91 | 15.13 | 102 |
| I don’t know | 12.15 | 12.17 | 82 |
| Skipped | 0.12 | 0.15 | 1 |

Table B.48: Google; N = 645

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 11.91 | 11.47 | 74 |
| A fair amount of confidence | 27.35 | 26.82 | 173 |
| Not too much confidence | 25.92 | 26.67 | 172 |
| No confidence | 21.56 | 21.40 | 138 |
| I don’t know | 13.00 | 13.33 | 86 |
| Skipped | 0.26 | 0.31 | 2 |

Table B.49: Facebook; N = 632

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 4.29 | 3.96 | 25 |
| A fair amount of confidence | 14.35 | 13.45 | 85 |
| Not too much confidence | 26.40 | 27.22 | 172 |
| No confidence | 41.27 | 42.88 | 271 |
| I don’t know | 13.44 | 12.18 | 77 |
| Skipped | 0.25 | 0.32 | 2 |

Table B.50: Apple; N = 697

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 10.41 | 10.76 | 75 |
| A fair amount of confidence | 26.29 | 26.26 | 183 |
| Not too much confidence | 27.00 | 27.98 | 195 |
| No confidence | 22.20 | 21.81 | 152 |
| I don’t know | 13.84 | 12.91 | 90 |
| Skipped | 0.26 | 0.29 | 2 |

Table B.51: Microsoft; N = 597

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 10.85 | 10.89 | 65 |
| A fair amount of confidence | 33.08 | 32.66 | 195 |
| Not too much confidence | 26.89 | 27.14 | 162 |
| No confidence | 17.99 | 17.76 | 106 |
| I don’t know | 11.05 | 11.39 | 68 |
| Skipped | 0.14 | 0.17 | 1 |

Table B.52: Amazon; N = 685

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 10.60 | 10.95 | 75 |
| A fair amount of confidence | 29.53 | 29.34 | 201 |
| Not too much confidence | 25.51 | 25.40 | 174 |
| No confidence | 22.02 | 22.19 | 152 |
| I don’t know | 12.34 | 12.12 | 83 |
| Skipped | 0 | 0 | 0 |

Table B.53: Non-profit (e.g., OpenAI); N = 659

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 10.19 | 10.17 | 67 |
| A fair amount of confidence | 29.40 | 30.35 | 200 |
| Not too much confidence | 23.57 | 23.98 | 158 |
| No confidence | 13.65 | 13.66 | 90 |
| I don’t know | 23.04 | 21.70 | 143 |
| Skipped | 0.13 | 0.15 | 1 |

Table B.54: University researchers; N = 666

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 13.86 | 14.11 | 94 |
| A fair amount of confidence | 36.29 | 36.04 | 240 |
| Not too much confidence | 22.27 | 22.82 | 152 |
| No confidence | 12.75 | 12.31 | 82 |
| I don’t know | 14.70 | 14.56 | 97 |
| Skipped | 0.14 | 0.15 | 1 |
Trust of actors to manage AI
QUESTION:
How much confidence, if any, do you have in each of the following to manage the development and use of AI in the best interests of the public?
[Respondents were shown five items randomly selected from the list below. We included explainer text for actors not well known to the public; respondents could view the explainer text by hovering their mouse over the actor’s name. The items and the answer choices were shown in a matrix format.]
- U.S. federal government
- U.S. state governments
- International organizations (e.g., United Nations, European Union)
- The United Nations (UN)
- An intergovernmental research organization (e.g., CERN)
- Explainer text for CERN: The European Organization for Nuclear Research, known as CERN, is a European research organization that operates the largest particle physics laboratory in the world.
- Tech companies
- Google
- Facebook
- Apple
- Microsoft
- Amazon
- Non-government scientific organizations (e.g., AAAI)
- Explainer text for AAAI: Association for the Advancement of Artificial Intelligence (AAAI) is a non-government scientific organization that promotes research in, and responsible use of AI.
- Partnership on AI, an association of tech companies, academics, and civil society groups
ANSWER CHOICES:
- A great deal of confidence (3)
- A fair amount of confidence (2)
- Not too much confidence (1)
- No confidence (0)
- I don’t know
Table B.55: U.S. federal government; N = 743

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 6.86 | 6.59 | 49 |
| A fair amount of confidence | 20.26 | 20.19 | 150 |
| Not too much confidence | 28.44 | 28.67 | 213 |
| No confidence | 31.50 | 32.44 | 241 |
| I don’t know | 12.68 | 11.84 | 88 |
| Skipped | 0.25 | 0.27 | 2 |

Table B.56: U.S. state governments; N = 713

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 6.25 | 6.45 | 46 |
| A fair amount of confidence | 20.39 | 19.21 | 137 |
| Not too much confidence | 31.57 | 32.12 | 229 |
| No confidence | 29.65 | 30.72 | 219 |
| I don’t know | 11.69 | 11.22 | 80 |
| Skipped | 0.45 | 0.28 | 2 |

Table B.57: International organizations; N = 827

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 5.94 | 5.80 | 48 |
| A fair amount of confidence | 22.48 | 21.77 | 180 |
| Not too much confidence | 29.58 | 29.87 | 247 |
| No confidence | 26.81 | 27.45 | 227 |
| I don’t know | 14.81 | 14.87 | 123 |
| Skipped | 0.38 | 0.24 | 2 |

Table B.58: UN; N = 802

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 6.23 | 6.61 | 53 |
| A fair amount of confidence | 22.49 | 21.57 | 173 |
| Not too much confidence | 26.14 | 26.18 | 210 |
| No confidence | 31.90 | 31.55 | 253 |
| I don’t know | 12.64 | 13.59 | 109 |
| Skipped | 0.60 | 0.50 | 4 |

Table B.59: Intergovernmental research organizations (e.g., CERN); N = 747

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 6.69 | 7.10 | 53 |
| A fair amount of confidence | 30.51 | 29.72 | 222 |
| Not too much confidence | 23.89 | 24.10 | 180 |
| No confidence | 20.32 | 20.21 | 151 |
| I don’t know | 18.36 | 18.61 | 139 |
| Skipped | 0.22 | 0.27 | 2 |

Table B.60: Tech companies; N = 758

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 8.33 | 8.44 | 64 |
| A fair amount of confidence | 33.50 | 32.98 | 250 |
| Not too much confidence | 25.07 | 26.12 | 198 |
| No confidence | 19.88 | 20.45 | 155 |
| I don’t know | 12.81 | 11.74 | 89 |
| Skipped | 0.41 | 0.26 | 2 |

Table B.61: Google; N = 767

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 9.61 | 9.13 | 70 |
| A fair amount of confidence | 23.60 | 23.86 | 183 |
| Not too much confidence | 27.44 | 27.77 | 213 |
| No confidence | 25.13 | 25.03 | 192 |
| I don’t know | 13.75 | 13.95 | 107 |
| Skipped | 0.47 | 0.26 | 2 |

Table B.62: Facebook; N = 741

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 4.99 | 4.45 | 33 |
| A fair amount of confidence | 16.18 | 16.19 | 120 |
| Not too much confidence | 28.50 | 28.21 | 209 |
| No confidence | 36.95 | 38.46 | 285 |
| I don’t know | 13.14 | 12.42 | 92 |
| Skipped | 0.24 | 0.27 | 2 |

Table B.63: Apple; N = 775

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 8.25 | 8.39 | 65 |
| A fair amount of confidence | 25.10 | 24.90 | 193 |
| Not too much confidence | 29.08 | 28.65 | 222 |
| No confidence | 23.91 | 24.52 | 190 |
| I don’t know | 13.55 | 13.42 | 104 |
| Skipped | 0.12 | 0.13 | 1 |

Table B.64: Microsoft; N = 771

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 7.79 | 7.78 | 60 |
| A fair amount of confidence | 30.11 | 29.83 | 230 |
| Not too much confidence | 22.98 | 23.48 | 181 |
| No confidence | 24.10 | 24.38 | 188 |
| I don’t know | 14.68 | 14.14 | 109 |
| Skipped | 0.35 | 0.39 | 3 |

Table B.65: Amazon; N = 784

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 10.19 | 10.33 | 81 |
| A fair amount of confidence | 25.22 | 24.87 | 195 |
| Not too much confidence | 25.20 | 25.38 | 199 |
| No confidence | 24.53 | 24.87 | 195 |
| I don’t know | 14.87 | 14.54 | 114 |
| Skipped | 0 | 0 | 0 |

Table B.66: Non-government scientific organization (e.g., AAAI); N = 792

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 7.64 | 7.83 | 62 |
| A fair amount of confidence | 30.32 | 30.05 | 238 |
| Not too much confidence | 25.37 | 26.39 | 209 |
| No confidence | 15.03 | 14.65 | 116 |
| I don’t know | 21.46 | 20.83 | 165 |
| Skipped | 0.19 | 0.25 | 2 |

Table B.67: Partnership on AI; N = 780

| Response | Weighted % | Unweighted % | Frequency |
|---|---|---|---|
| A great deal of confidence | 8.89 | 9.23 | 72 |
| A fair amount of confidence | 30.12 | 29.49 | 230 |
| Not too much confidence | 25.89 | 26.79 | 209 |
| No confidence | 16.33 | 15.77 | 123 |
| I don’t know | 18.64 | 18.59 | 145 |
| Skipped | 0.12 | 0.13 | 1 |
AI governance challenges
We would like you to consider some potential policy issues related to AI. Please consider the following:
[Respondents were shown five randomly selected items from the list below, one item at a time. For ease of comprehension, we include the shortened labels used in the figures in square brackets.]
- [Hiring bias] Fairness and transparency in AI used in hiring: Increasingly, employers are using AI to make hiring decisions. AI has the potential to make less biased hiring decisions than humans. But algorithms trained on biased data can lead to hiring practices that discriminate against certain groups. Also, AI used in this application may lack transparency, such that human users do not understand what the algorithm is doing, or why it reaches certain decisions in specific cases.
- [Criminal justice bias] Fairness and transparency in AI used in criminal justice: Increasingly, the criminal justice system is using AI to make sentencing and parole decisions. AI has the potential to make less biased decisions than humans. But algorithms trained on biased data could lead to discrimination against certain groups. Also, AI used in this application may lack transparency such that human users do not understand what the algorithm is doing, or why it reaches certain decisions in specific cases.
- [Disease diagnosis] Accuracy and transparency in AI used for disease diagnosis: Increasingly, AI software has been used to diagnose diseases, such as heart disease and cancer. One challenge is to make sure the AI can correctly diagnose those who have the disease and not mistakenly diagnose those who do not have the disease. Another challenge is that AI used in this application may lack transparency such that human users do not understand what the algorithm is doing, or why it reaches certain decisions in specific cases.
- [Data privacy] Protect data privacy: Algorithms used in AI applications are often trained on vast amounts of personal data, including medical records, social media content, and financial transactions. Some worry that data used to train algorithms are not collected, used, and stored in ways that protect personal privacy.
- [Autonomous vehicles] Make sure autonomous vehicles are safe: Companies are developing self-driving cars and trucks that require little or no input from humans. Some worry about the safety of autonomous vehicles for those riding in them as well as for other vehicles, cyclists, and pedestrians.
- [Digital manipulation] Prevent AI from being used to spread fake and harmful content online: AI has been used by governments, private groups, and individuals to harm or manipulate internet users. For instance, automated bots have been used to generate and spread false and/or harmful news stories, audios, and videos.
- [Cyber attacks] Prevent AI cyber attacks against governments, companies, organizations, and individuals: Computer scientists have shown that AI can be used to launch effective cyber attacks. AI could be used to hack into servers to steal sensitive information, shut down critical infrastructures like power grids or hospital networks, or scale up targeted phishing attacks.
- [Surveillance] Prevent AI-assisted surveillance from violating privacy and civil liberties: AI can be used to process and analyze large amounts of text, photo, audio, and video data from social media, mobile communications, and CCTV cameras. Some worry that governments, companies, and employers could use AI to increase their surveillance capabilities.
- [U.S.-China arms race] Prevent escalation of a U.S.-China AI arms race: Leading analysts believe that an AI arms race is beginning, in which the U.S. and China are investing billions of dollars to develop powerful AI systems for surveillance, autonomous weapons, cyber operations, propaganda, and command and control systems. Some worry that a U.S.-China arms race could lead to extreme dangers. To stay ahead, the U.S. and China may race to deploy advanced military AI systems that they do not fully understand or can control. We could see catastrophic accidents, such as a rapid, automated escalation involving cyber and nuclear weapons.
- [Value alignment] Make sure AI systems are safe, trustworthy, and aligned with human values: As AI systems become more advanced, they will increasingly make decisions without human input. One potential fear is that AI systems, while performing jobs they are programmed to do, could unintentionally make decisions that go against the values of their human users, such as physically harming people.
- [Autonomous weapons] Ban the use of lethal autonomous weapons (LAWs): Lethal autonomous weapons (LAWs) are military robots that can attack targets without control by humans. LAWs could reduce the use of human combatants on the battlefield. But some worry that the adoption of LAWs could lead to mass violence. Because they are cheap and easy to produce in bulk, national militaries, terrorists, and other groups could readily deploy LAWs.
- [Technological unemployment] Guarantee a good standard of living for those who lose their jobs to automation: Some forecast that AI will increasingly be able to do jobs done by humans today. AI could potentially do the jobs of blue-collar workers, like truckers and factory workers, as well as the jobs of white-collar workers, like financial analysts or lawyers. Some worry that in the future, robots and computers can do most of the jobs that are done by humans today.
- [Critical AI systems failure] Prevent critical AI systems failures: As AI systems become more advanced, they could be used by the military or in critical infrastructure, like power grids, highways, or hospital networks. Some worry that the failure of AI systems or unintentional accidents in these applications could cause 10 percent or more of all humans to die.
QUESTION:
In the next 10 years, how likely do you think it is that this AI governance challenge will impact large numbers of people in the U.S.?
ANSWER CHOICES:
- Very unlikely: less than 5% chance (2.5%)
- Unlikely: 5-20% chance (12.5%)
- Somewhat unlikely: 20-40% chance (30%)
- Equally likely as unlikely: 40-60% chance (50%)
- Somewhat likely: 60-80% chance (70%)
- Likely: 80-95% chance (87.5%)
- Very likely: more than 95% chance (97.5%)
- I don’t know
QUESTION:
In the next 10 years, how likely do you think it is that this AI governance challenge will impact large numbers of people around the world?
ANSWER CHOICES:
- Very unlikely: less than 5% chance (2.5%)
- Unlikely: 5-20% chance (12.5%)
- Somewhat unlikely: 20-40% chance (30%)
- Equally likely as unlikely: 40-60% chance (50%)
- Somewhat likely: 60-80% chance (70%)
- Likely: 80-95% chance (87.5%)
- Very likely: more than 95% chance (97.5%)
- I don’t know
QUESTION:
In the next 10 years, how important is it for tech companies and governments to carefully manage the following challenge?
ANSWER CHOICES:
- Very important (3)
- Somewhat important (2)
- Not too important (1)
- Not at all important (0)
- I don’t know
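The numerical codings above also make it possible to summarize each binned likelihood distribution as a single expected probability: weight each bin's coded midpoint by its response share, dropping the "I don't know" and skipped responses and renormalizing. A minimal sketch (the illustrative shares are the weighted toplines from Table B.68; this summary statistic is our own illustration, not a quantity reported in the survey):

```python
# Summarize a binned likelihood distribution as an expected probability.
# Bin midpoints follow the survey's numerical codings (2.5%, 12.5%, ...,
# 97.5%). "I don't know" and skipped responses are excluded, and the
# remaining response shares are renormalized.

MIDPOINTS = [2.5, 12.5, 30.0, 50.0, 70.0, 87.5, 97.5]

def expected_likelihood(shares):
    """Midpoint mean over substantive responses, in percentage points."""
    total = sum(shares)
    return sum(m * s for m, s in zip(MIDPOINTS, shares)) / total

# Weighted topline shares from Table B.68 (hiring bias, likelihood in
# the U.S.), "Very unlikely" through "Very likely":
hiring_bias_us = [2.57, 6.07, 10.86, 22.27, 23.34, 12.39, 9.86]
print(f"{expected_likelihood(hiring_bias_us):.1f}%")  # about 60%
```

The same recoding applies to every likelihood table below, including the U.S.-China cooperation items, since they share the seven-bin response scale.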
Table B.68: Likelihood in the U.S. - Hiring bias; N = 760

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 2.57 | 2.63 | 20
Unlikely 5-20% | 6.07 | 6.18 | 47
Somewhat unlikely 20-40% | 10.86 | 10.92 | 83
Equally likely as unlikely 40-60% | 22.27 | 22.50 | 171
Somewhat likely 60-80% | 23.34 | 22.89 | 174
Likely 80-95% | 12.39 | 12.76 | 97
Very likely > 95% | 9.86 | 9.61 | 73
I don’t know | 12.35 | 12.37 | 94
Skipped | 0.29 | 0.13 | 1

Table B.69: Likelihood in the U.S. - Criminal justice bias; N = 778

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 4.94 | 4.50 | 35
Unlikely 5-20% | 8.76 | 8.61 | 67
Somewhat unlikely 20-40% | 13.25 | 12.85 | 100
Equally likely as unlikely 40-60% | 21.23 | 21.08 | 164
Somewhat likely 60-80% | 17.13 | 17.22 | 134
Likely 80-95% | 12.28 | 12.60 | 98
Very likely > 95% | 9.05 | 9.64 | 75
I don’t know | 12.90 | 12.98 | 101
Skipped | 0.45 | 0.51 | 4

Table B.70: Likelihood in the U.S. - Disease diagnosis; N = 767

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 2.79 | 2.61 | 20
Unlikely 5-20% | 4.73 | 4.95 | 38
Somewhat unlikely 20-40% | 10.18 | 9.52 | 73
Equally likely as unlikely 40-60% | 23.12 | 23.21 | 178
Somewhat likely 60-80% | 20.50 | 19.95 | 153
Likely 80-95% | 13.43 | 13.95 | 107
Very likely > 95% | 9.72 | 10.17 | 78
I don’t know | 13.62 | 13.69 | 105
Skipped | 1.91 | 1.96 | 15

Table B.71: Likelihood in the U.S. - Data privacy; N = 807

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 2.75 | 2.11 | 17
Unlikely 5-20% | 4.53 | 4.58 | 37
Somewhat unlikely 20-40% | 7.52 | 7.19 | 58
Equally likely as unlikely 40-60% | 16.10 | 15.86 | 128
Somewhat likely 60-80% | 18.81 | 19.33 | 156
Likely 80-95% | 17.00 | 16.36 | 132
Very likely > 95% | 20.59 | 21.69 | 175
I don’t know | 10.87 | 10.78 | 87
Skipped | 1.84 | 2.11 | 17

Table B.72: Likelihood in the U.S. - Autonomous vehicles; N = 796

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 3.65 | 3.64 | 29
Unlikely 5-20% | 5.80 | 5.90 | 47
Somewhat unlikely 20-40% | 10.93 | 10.43 | 83
Equally likely as unlikely 40-60% | 16.17 | 16.33 | 130
Somewhat likely 60-80% | 23.62 | 23.62 | 188
Likely 80-95% | 15.78 | 15.45 | 123
Very likely > 95% | 12.29 | 12.94 | 103
I don’t know | 10.89 | 10.68 | 85
Skipped | 0.87 | 1.01 | 8

Table B.73: Likelihood in the U.S. - Digital manipulation; N = 741

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 2.79 | 2.83 | 21
Unlikely 5-20% | 3.24 | 3.10 | 23
Somewhat unlikely 20-40% | 8.12 | 7.69 | 57
Equally likely as unlikely 40-60% | 13.81 | 14.30 | 106
Somewhat likely 60-80% | 16.58 | 16.33 | 121
Likely 80-95% | 17.74 | 18.08 | 134
Very likely > 95% | 23.45 | 23.62 | 175
I don’t know | 12.49 | 12.15 | 90
Skipped | 1.77 | 1.89 | 14

Table B.74: Likelihood in the U.S. - Cyber attacks; N = 745

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 3.36 | 2.42 | 18
Unlikely 5-20% | 4.28 | 3.89 | 29
Somewhat unlikely 20-40% | 8.44 | 8.59 | 64
Equally likely as unlikely 40-60% | 15.45 | 15.84 | 118
Somewhat likely 60-80% | 19.22 | 19.46 | 145
Likely 80-95% | 15.96 | 15.30 | 114
Very likely > 95% | 20.52 | 21.21 | 158
I don’t know | 9.70 | 10.47 | 78
Skipped | 3.07 | 2.82 | 21

Table B.75: Likelihood in the U.S. - Surveillance; N = 784

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 2.70 | 2.42 | 19
Unlikely 5-20% | 2.92 | 2.81 | 22
Somewhat unlikely 20-40% | 6.19 | 6.38 | 50
Equally likely as unlikely 40-60% | 15.23 | 15.05 | 118
Somewhat likely 60-80% | 18.95 | 18.75 | 147
Likely 80-95% | 16.03 | 15.69 | 123
Very likely > 95% | 23.52 | 24.23 | 190
I don’t know | 12.15 | 12.12 | 95
Skipped | 2.32 | 2.55 | 20

Table B.76: Likelihood in the U.S. - U.S.-China arms race; N = 766

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 3.24 | 3.26 | 25
Unlikely 5-20% | 5.98 | 6.01 | 46
Somewhat unlikely 20-40% | 10.01 | 10.84 | 83
Equally likely as unlikely 40-60% | 18.74 | 18.41 | 141
Somewhat likely 60-80% | 20.08 | 19.71 | 151
Likely 80-95% | 13.17 | 12.79 | 98
Very likely > 95% | 10.62 | 11.36 | 87
I don’t know | 15.17 | 14.62 | 112
Skipped | 3.00 | 3.00 | 23

Table B.77: Likelihood in the U.S. - Value alignment; N = 783

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 3.78 | 4.21 | 33
Unlikely 5-20% | 7.30 | 6.90 | 54
Somewhat unlikely 20-40% | 9.01 | 9.07 | 71
Equally likely as unlikely 40-60% | 20.34 | 19.54 | 153
Somewhat likely 60-80% | 19.26 | 19.28 | 151
Likely 80-95% | 13.66 | 13.79 | 108
Very likely > 95% | 12.96 | 13.67 | 107
I don’t know | 12.43 | 12.26 | 96
Skipped | 1.26 | 1.28 | 10

Table B.78: Likelihood in the U.S. - Autonomous weapons; N = 757

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 6.22 | 5.94 | 45
Unlikely 5-20% | 10.36 | 9.38 | 71
Somewhat unlikely 20-40% | 12.75 | 12.68 | 96
Equally likely as unlikely 40-60% | 18.91 | 19.02 | 144
Somewhat likely 60-80% | 15.72 | 15.72 | 119
Likely 80-95% | 11.44 | 11.76 | 89
Very likely > 95% | 10.72 | 11.23 | 85
I don’t know | 11.99 | 12.29 | 93
Skipped | 1.89 | 1.98 | 15

Table B.79: Likelihood in the U.S. - Technological unemployment; N = 738

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 3.08 | 2.98 | 22
Unlikely 5-20% | 5.80 | 5.69 | 42
Somewhat unlikely 20-40% | 11.00 | 11.11 | 82
Equally likely as unlikely 40-60% | 17.74 | 17.62 | 130
Somewhat likely 60-80% | 17.16 | 17.75 | 131
Likely 80-95% | 14.86 | 14.91 | 110
Very likely > 95% | 15.75 | 15.99 | 118
I don’t know | 12.84 | 12.20 | 90
Skipped | 1.75 | 1.76 | 13

Table B.80: Likelihood in the U.S. - Critical AI systems failure; N = 778

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 6.98 | 6.43 | 50
Unlikely 5-20% | 7.94 | 7.58 | 59
Somewhat unlikely 20-40% | 12.26 | 12.98 | 101
Equally likely as unlikely 40-60% | 20.36 | 20.31 | 158
Somewhat likely 60-80% | 15.59 | 15.42 | 120
Likely 80-95% | 12.25 | 11.83 | 92
Very likely > 95% | 9.36 | 10.15 | 79
I don’t know | 14.85 | 14.78 | 115
Skipped | 0.41 | 0.51 | 4
Table B.81: Likelihood around the world - Hiring bias; N = 760

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 2.95 | 3.03 | 23
Unlikely 5-20% | 5.47 | 5.00 | 38
Somewhat unlikely 20-40% | 8.54 | 8.55 | 65
Equally likely as unlikely 40-60% | 20.23 | 21.45 | 163
Somewhat likely 60-80% | 21.55 | 21.32 | 162
Likely 80-95% | 13.68 | 13.55 | 103
Very likely > 95% | 12.20 | 12.11 | 92
I don’t know | 15.04 | 14.61 | 111
Skipped | 0.35 | 0.39 | 3

Table B.82: Likelihood around the world - Criminal justice bias; N = 778

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 4.44 | 4.24 | 33
Unlikely 5-20% | 8.06 | 7.71 | 60
Somewhat unlikely 20-40% | 10.96 | 10.80 | 84
Equally likely as unlikely 40-60% | 19.17 | 19.41 | 151
Somewhat likely 60-80% | 18.29 | 18.25 | 142
Likely 80-95% | 13.09 | 13.62 | 106
Very likely > 95% | 9.38 | 9.90 | 77
I don’t know | 16.38 | 15.94 | 124
Skipped | 0.23 | 0.13 | 1

Table B.83: Likelihood around the world - Disease diagnosis; N = 767

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 2.31 | 2.35 | 18
Unlikely 5-20% | 4.18 | 4.17 | 32
Somewhat unlikely 20-40% | 9.93 | 9.13 | 70
Equally likely as unlikely 40-60% | 21.28 | 20.99 | 161
Somewhat likely 60-80% | 20.47 | 20.47 | 157
Likely 80-95% | 15.00 | 15.38 | 118
Very likely > 95% | 10.94 | 11.47 | 88
I don’t know | 15.80 | 15.91 | 122
Skipped | 0.09 | 0.13 | 1

Table B.84: Likelihood around the world - Data privacy; N = 807

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 2.86 | 2.23 | 18
Unlikely 5-20% | 2.92 | 2.60 | 21
Somewhat unlikely 20-40% | 8.32 | 8.30 | 67
Equally likely as unlikely 40-60% | 13.79 | 14.75 | 119
Somewhat likely 60-80% | 19.07 | 18.84 | 152
Likely 80-95% | 18.43 | 18.22 | 147
Very likely > 95% | 21.09 | 21.81 | 176
I don’t know | 13.34 | 13.01 | 105
Skipped | 0.19 | 0.25 | 2

Table B.85: Likelihood around the world - Autonomous vehicles; N = 796

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 3.77 | 3.52 | 28
Unlikely 5-20% | 5.25 | 5.65 | 45
Somewhat unlikely 20-40% | 12.37 | 11.68 | 93
Equally likely as unlikely 40-60% | 16.74 | 17.21 | 137
Somewhat likely 60-80% | 21.09 | 21.11 | 168
Likely 80-95% | 14.13 | 14.45 | 115
Very likely > 95% | 12.04 | 12.19 | 97
I don’t know | 13.99 | 13.57 | 108
Skipped | 0.63 | 0.63 | 5

Table B.86: Likelihood around the world - Digital manipulation; N = 741

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 1.98 | 2.16 | 16
Unlikely 5-20% | 1.67 | 1.48 | 11
Somewhat unlikely 20-40% | 7.34 | 7.29 | 54
Equally likely as unlikely 40-60% | 12.68 | 12.96 | 96
Somewhat likely 60-80% | 17.18 | 17.00 | 126
Likely 80-95% | 21.22 | 21.73 | 161
Very likely > 95% | 22.31 | 22.00 | 163
I don’t know | 15.24 | 14.98 | 111
Skipped | 0.39 | 0.40 | 3

Table B.87: Likelihood around the world - Cyber attacks; N = 745

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 1.08 | 1.21 | 9
Unlikely 5-20% | 4.95 | 4.03 | 30
Somewhat unlikely 20-40% | 4.76 | 5.10 | 38
Equally likely as unlikely 40-60% | 16.95 | 16.64 | 124
Somewhat likely 60-80% | 18.94 | 19.73 | 147
Likely 80-95% | 19.13 | 19.06 | 142
Very likely > 95% | 20.57 | 20.40 | 152
I don’t know | 13.20 | 13.42 | 100
Skipped | 0.42 | 0.40 | 3

Table B.88: Likelihood around the world - Surveillance; N = 784

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 1.26 | 1.40 | 11
Unlikely 5-20% | 3.55 | 3.19 | 25
Somewhat unlikely 20-40% | 5.12 | 5.36 | 42
Equally likely as unlikely 40-60% | 14.26 | 14.41 | 113
Somewhat likely 60-80% | 18.90 | 19.13 | 150
Likely 80-95% | 20.30 | 19.77 | 155
Very likely > 95% | 22.62 | 22.70 | 178
I don’t know | 13.93 | 13.90 | 109
Skipped | 0.07 | 0.13 | 1

Table B.89: Likelihood around the world - U.S.-China arms race; N = 766

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 3.21 | 3.13 | 24
Unlikely 5-20% | 4.61 | 4.83 | 37
Somewhat unlikely 20-40% | 7.70 | 7.83 | 60
Equally likely as unlikely 40-60% | 19.50 | 19.19 | 147
Somewhat likely 60-80% | 20.71 | 20.76 | 159
Likely 80-95% | 14.99 | 14.75 | 113
Very likely > 95% | 12.46 | 12.92 | 99
I don’t know | 16.61 | 16.32 | 125
Skipped | 0.22 | 0.26 | 2

Table B.90: Likelihood around the world - Value alignment; N = 783

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 2.70 | 2.94 | 23
Unlikely 5-20% | 4.66 | 4.60 | 36
Somewhat unlikely 20-40% | 8.80 | 8.81 | 69
Equally likely as unlikely 40-60% | 19.92 | 19.41 | 152
Somewhat likely 60-80% | 18.97 | 18.77 | 147
Likely 80-95% | 15.57 | 15.33 | 120
Very likely > 95% | 14.93 | 15.71 | 123
I don’t know | 14.44 | 14.43 | 113
Skipped | 0 | 0 | 0

Table B.91: Likelihood around the world - Autonomous weapons; N = 757

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 3.72 | 3.70 | 28
Unlikely 5-20% | 7.04 | 5.42 | 41
Somewhat unlikely 20-40% | 9.42 | 9.64 | 73
Equally likely as unlikely 40-60% | 17.23 | 17.44 | 132
Somewhat likely 60-80% | 16.08 | 15.85 | 120
Likely 80-95% | 16.35 | 17.04 | 129
Very likely > 95% | 14.87 | 15.19 | 115
I don’t know | 15.20 | 15.59 | 118
Skipped | 0.09 | 0.13 | 1

Table B.92: Likelihood around the world - Technological unemployment; N = 738

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 2.76 | 2.57 | 19
Unlikely 5-20% | 4.92 | 4.47 | 33
Somewhat unlikely 20-40% | 8.31 | 8.81 | 65
Equally likely as unlikely 40-60% | 18.36 | 18.16 | 134
Somewhat likely 60-80% | 19.90 | 21.00 | 155
Likely 80-95% | 14.78 | 14.50 | 107
Very likely > 95% | 16.71 | 16.67 | 123
I don’t know | 13.77 | 13.41 | 99
Skipped | 0.51 | 0.41 | 3

Table B.93: Likelihood around the world - Critical AI systems failure; N = 778

Response | Weighted % | Unweighted % | n
Very unlikely < 5% | 5.36 | 5.27 | 41
Unlikely 5-20% | 8.07 | 7.97 | 62
Somewhat unlikely 20-40% | 10.75 | 10.41 | 81
Equally likely as unlikely 40-60% | 18.03 | 17.87 | 139
Somewhat likely 60-80% | 16.71 | 16.84 | 131
Likely 80-95% | 13.09 | 13.11 | 102
Very likely > 95% | 11.23 | 11.83 | 92
I don’t know | 16.76 | 16.71 | 130
Skipped | 0 | 0 | 0
Table B.94: Issue importance - Hiring bias; N = 760

Response | Weighted % | Unweighted % | n
Very important | 56.86 | 57.11 | 434
Somewhat important | 22.11 | 22.76 | 173
Not too important | 6.56 | 6.05 | 46
Not at all important | 1.50 | 1.58 | 12
I don’t know | 12.98 | 12.50 | 95
Skipped | 0 | 0 | 0

Table B.95: Issue importance - Criminal justice bias; N = 778

Response | Weighted % | Unweighted % | n
Very important | 56.08 | 56.68 | 441
Somewhat important | 21.78 | 22.49 | 175
Not too important | 6.65 | 5.91 | 46
Not at all important | 1.83 | 1.67 | 13
I don’t know | 13.66 | 13.24 | 103
Skipped | 0 | 0 | 0

Table B.96: Issue importance - Disease diagnosis; N = 767

Response | Weighted % | Unweighted % | n
Very important | 55.60 | 56.98 | 437
Somewhat important | 22.37 | 21.25 | 163
Not too important | 6.68 | 6.91 | 53
Not at all important | 1.98 | 1.83 | 14
I don’t know | 13.26 | 12.91 | 99
Skipped | 0.11 | 0.13 | 1

Table B.97: Issue importance - Data privacy; N = 807

Response | Weighted % | Unweighted % | n
Very important | 63.65 | 64.93 | 524
Somewhat important | 17.65 | 17.10 | 138
Not too important | 4.76 | 4.71 | 38
Not at all important | 1.71 | 1.36 | 11
I don’t know | 12.05 | 11.65 | 94
Skipped | 0.19 | 0.25 | 2

Table B.98: Issue importance - Autonomous vehicles; N = 796

Response | Weighted % | Unweighted % | n
Very important | 58.70 | 59.55 | 474
Somewhat important | 22.36 | 21.73 | 173
Not too important | 6.13 | 6.28 | 50
Not at all important | 1.44 | 1.63 | 13
I don’t know | 11.15 | 10.55 | 84
Skipped | 0.22 | 0.25 | 2

Table B.99: Issue importance - Digital manipulation; N = 741

Response | Weighted % | Unweighted % | n
Very important | 57.66 | 58.30 | 432
Somewhat important | 18.75 | 18.08 | 134
Not too important | 6.25 | 6.48 | 48
Not at all important | 3.11 | 2.97 | 22
I don’t know | 14.16 | 14.04 | 104
Skipped | 0.08 | 0.13 | 1

Table B.100: Issue importance - Cyber attacks; N = 745

Response | Weighted % | Unweighted % | n
Very important | 62.12 | 61.21 | 456
Somewhat important | 17.80 | 18.39 | 137
Not too important | 7.07 | 7.38 | 55
Not at all important | 1.14 | 1.07 | 8
I don’t know | 11.88 | 11.95 | 89
Skipped | 0 | 0 | 0

Table B.101: Issue importance - Surveillance; N = 784

Response | Weighted % | Unweighted % | n
Very important | 58.54 | 58.80 | 461
Somewhat important | 19.33 | 19.26 | 151
Not too important | 6.40 | 6.63 | 52
Not at all important | 1.73 | 1.66 | 13
I don’t know | 13.93 | 13.52 | 106
Skipped | 0.07 | 0.13 | 1

Table B.102: Issue importance - U.S.-China arms race; N = 766

Response | Weighted % | Unweighted % | n
Very important | 55.88 | 55.74 | 427
Somewhat important | 19.44 | 19.71 | 151
Not too important | 7.07 | 7.57 | 58
Not at all important | 2.38 | 2.35 | 18
I don’t know | 15.13 | 14.49 | 111
Skipped | 0.10 | 0.13 | 1

Table B.103: Issue importance - Value alignment; N = 783

Response | Weighted % | Unweighted % | n
Very important | 56.46 | 56.45 | 442
Somewhat important | 20.49 | 20.95 | 164
Not too important | 6.69 | 6.64 | 52
Not at all important | 1.56 | 1.66 | 13
I don’t know | 14.80 | 14.30 | 112
Skipped | 0 | 0 | 0

Table B.104: Issue importance - Autonomous weapons; N = 757

Response | Weighted % | Unweighted % | n
Very important | 58.32 | 57.73 | 437
Somewhat important | 20.00 | 19.55 | 148
Not too important | 5.52 | 5.94 | 45
Not at all important | 1.23 | 1.45 | 11
I don’t know | 14.94 | 15.32 | 116
Skipped | 0 | 0 | 0

Table B.105: Issue importance - Technological unemployment; N = 738

Response | Weighted % | Unweighted % | n
Very important | 54.12 | 54.34 | 401
Somewhat important | 22.07 | 22.49 | 166
Not too important | 6.50 | 6.91 | 51
Not at all important | 2.83 | 2.44 | 18
I don’t know | 14.39 | 13.69 | 101
Skipped | 0.09 | 0.14 | 1

Table B.106: Issue importance - Critical AI systems failure; N = 778

Response | Weighted % | Unweighted % | n
Very important | 52.63 | 53.86 | 419
Somewhat important | 21.10 | 20.44 | 159
Not too important | 7.98 | 8.10 | 63
Not at all important | 2.93 | 2.44 | 19
I don’t know | 15.36 | 15.17 | 118
Skipped | 0 | 0 | 0
Survey experiment: comparing perceptions of U.S. vs. China AI research and development
[Respondents were presented with one randomly-selected question from the two below.]
QUESTIONS:
- Compared with other industrialized countries, how would you rate the U.S. in AI research and development?
- Compared with other industrialized countries, how would you rate China in AI research and development?
ANSWER CHOICES:
- Best in the world (3)
- Above average (2)
- Average (1)
- Below average (0)
- I don’t know
Table B.107: Perceptions of research and development - U.S.; N = 988

Response | Weighted % | Unweighted % | n
Best in the world | 9.73 | 10.02 | 99
Above average | 36.16 | 37.55 | 371
Average | 26.09 | 24.70 | 244
Below average | 4.99 | 4.96 | 49
I don’t know | 23.03 | 22.77 | 225
Skipped | 0 | 0 | 0

Table B.108: Perceptions of research and development - China; N = 1012

Response | Weighted % | Unweighted % | n
Best in the world | 7.33 | 7.41 | 75
Above average | 45.40 | 46.64 | 472
Average | 16.66 | 15.81 | 160
Below average | 3.93 | 3.66 | 37
I don’t know | 26.68 | 26.48 | 268
Skipped | 0 | 0 | 0
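With the 0–3 numerical codings, the two toplines can be reduced to mean ratings among respondents who gave a substantive answer. A minimal sketch (shares are the weighted toplines from Tables B.107 and B.108; the summary statistic is our own illustration, not a figure reported in the survey):

```python
# Mean rating on the survey's 0-3 coding ("Below average" = 0 through
# "Best in the world" = 3), computed over substantive responses only
# ("I don't know" and skipped responses excluded, shares renormalized).

CODES = [3, 2, 1, 0]  # best in the world, above average, average, below average

def mean_rating(shares):
    total = sum(shares)
    return sum(c * s for c, s in zip(CODES, shares)) / total

us = [9.73, 36.16, 26.09, 4.99]     # weighted toplines, Table B.107
china = [7.33, 45.40, 16.66, 3.93]  # weighted toplines, Table B.108
print(f"U.S.: {mean_rating(us):.2f}  China: {mean_rating(china):.2f}")
```

Note that dropping "I don't know" changes the comparison: a larger share of respondents declined to rate China, so the two means are computed over different effective samples.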
0 |
Survey experiment: U.S.-China arms race
[All respondents were presented with the following prompt.]
We want to understand your thoughts on some important issues in the news today. Please read the short news article below.
Leading analysts believe that an “AI arms race” is beginning, in which the U.S. and China are investing billions of dollars to develop powerful AI systems for surveillance, autonomous weapons, cyber operations, propaganda, and command and control systems.
[Respondents were randomly assigned to one of the four experimental groups listed below.]
Control
[No additional text.]
Nationalism treatment
Some leaders in the U.S. military and tech industry argue that the U.S. government should invest much more resources in AI research to ensure that the U.S.’s AI capabilities stay ahead of China’s. Furthermore, they argue that the U.S. government should partner with American tech companies to develop advanced AI systems, particularly for military use.
According to a leaked memo produced by a senior National Security Council official, China has “assembled the basic components required for winning the AI arms race…Much like America’s success in the competition for nuclear weapons, China’s 21st Century Manhattan Project sets them on a path to getting there first.”
War risks treatment
Some prominent thinkers are concerned that a U.S.-China arms race could lead to extreme dangers. To stay ahead, the U.S. and China may race to deploy advanced military AI systems that they do not fully understand or can control. We could see catastrophic accidents, such as a rapid, automated escalation involving cyber and nuclear weapons.
“Competition for AI superiority at [the] national level [is the] most likely cause of World War Three,” warned Elon Musk, the CEO of Tesla and SpaceX.
Common humanity treatment
Some prominent thinkers are concerned that a U.S.-China arms race could lead to extreme dangers. To stay ahead, the U.S. and China may race to deploy advanced military AI systems that they do not fully understand or can control. We could see catastrophic accidents, such as a rapid, automated escalation involving cyber and nuclear weapons.
“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons,” warned the late Stephen Hawking, one of the world’s most prominent physicists. At the same time, he said that with proper management of the technology, researchers “can create AI for the good of the world.”
[The order of the next two questions is randomized.]
QUESTION:
How much do you agree or disagree with the following statement?
The U.S. should invest more in AI military capabilities to make sure it doesn’t fall behind China’s, even if doing so may exacerbate the arms race. For instance, the U.S. could increase AI research funding for the military and universities. It could also collaborate with American tech companies to develop AI for military use.
ANSWER CHOICES:
- Strongly agree (2)
- Somewhat agree (1)
- Neither agree nor disagree (0)
- Somewhat disagree (-1)
- Strongly disagree (-2)
- I don’t know
QUESTION:
How much do you agree or disagree with the following statement?
The U.S. should work hard to cooperate with China to avoid the dangers of an AI arms race, even if doing so requires giving up some of the U.S.’s advantages. Cooperation could include collaborations between American and Chinese AI research labs, or the U.S. and China creating and committing to common safety standards.
ANSWER CHOICES:
- Strongly agree (2)
- Somewhat agree (1)
- Neither agree nor disagree (0)
- Somewhat disagree (-1)
- Strongly disagree (-2)
- I don’t know
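The −2 to 2 codings allow each experimental group's responses to be summarized as a single net-agreement score. A minimal sketch, using the weighted control-group toplines from Table B.109 (an illustration of the recoding, not a statistic reported in the survey):

```python
# Net agreement on the coded -2..2 scale, averaged over substantive
# responses ("I don't know" and skipped excluded, shares renormalized).

CODES = [2, 1, 0, -1, -2]  # strongly agree ... strongly disagree

def net_agreement(shares):
    total = sum(shares)
    return sum(c * s for c, s in zip(CODES, shares)) / total

# Weighted toplines, Table B.109 (invest more in AI military
# capabilities, control group), "Strongly agree" through
# "Strongly disagree":
control = [23.38, 25.99, 23.48, 8.88, 4.93]
print(f"{net_agreement(control):+.2f}")  # positive values indicate net agreement
```

Treatment effects can then be read as differences in this score between each treatment group and the control group.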
Table B.109: Responses to statement that U.S. should invest more in AI military capabilities - Control; N = 510

Response | Weighted % | Unweighted % | n
Strongly agree | 23.38 | 24.31 | 124
Somewhat agree | 25.99 | 25.88 | 132
Neither agree nor disagree | 23.48 | 22.75 | 116
Somewhat disagree | 8.88 | 8.82 | 45
Strongly disagree | 4.93 | 4.71 | 24
I don’t know | 13.34 | 13.53 | 69
Skipped | 0 | 0 | 0

Table B.110: Responses to statement that U.S. should invest more in AI military capabilities - Treatment 1: Pro-nationalist; N = 505

Response | Weighted % | Unweighted % | n
Strongly agree | 20.88 | 20.40 | 103
Somewhat agree | 26.89 | 27.52 | 139
Neither agree nor disagree | 21.79 | 22.18 | 112
Somewhat disagree | 11.69 | 12.28 | 62
Strongly disagree | 5.30 | 5.35 | 27
I don’t know | 13.45 | 12.28 | 62
Skipped | 0 | 0 | 0

Table B.111: Responses to statement that U.S. should invest more in AI military capabilities - Treatment 2: Risks of arms race; N = 493

Response | Weighted % | Unweighted % | n
Strongly agree | 18.26 | 19.07 | 94
Somewhat agree | 27.85 | 27.38 | 135
Neither agree nor disagree | 21.69 | 20.28 | 100
Somewhat disagree | 12.87 | 13.79 | 68
Strongly disagree | 6.88 | 6.90 | 34
I don’t know | 12.45 | 12.58 | 62
Skipped | 0 | 0 | 0

Table B.112: Responses to statement that U.S. should invest more in AI military capabilities - Treatment 3: One common humanity; N = 492

Response | Weighted % | Unweighted % | n
Strongly agree | 22.38 | 20.53 | 101
Somewhat agree | 27.29 | 27.85 | 137
Neither agree nor disagree | 24.37 | 23.98 | 118
Somewhat disagree | 6.73 | 7.11 | 35
Strongly disagree | 6.17 | 6.91 | 34
I don’t know | 13.07 | 13.62 | 67
Skipped | 0 | 0 | 0

Table B.113: Responses to statement that U.S. should work hard to cooperate with China to avoid dangers of AI arms race - Control; N = 510

Response | Weighted % | Unweighted % | n
Strongly agree | 22.34 | 22.55 | 115
Somewhat agree | 26.16 | 26.27 | 134
Neither agree nor disagree | 22.02 | 20.59 | 105
Somewhat disagree | 8.29 | 9.02 | 46
Strongly disagree | 7.38 | 7.45 | 38
I don’t know | 13.59 | 13.92 | 71
Skipped | 0.21 | 0.20 | 1

Table B.114: Responses to statement that U.S. should work hard to cooperate with China to avoid dangers of AI arms race - Treatment 1: Pro-nationalist; N = 505

Response | Weighted % | Unweighted % | n
Strongly agree | 18.51 | 18.81 | 95
Somewhat agree | 27.35 | 28.12 | 142
Neither agree nor disagree | 20.08 | 20.99 | 106
Somewhat disagree | 10.09 | 9.90 | 50
Strongly disagree | 8.45 | 7.92 | 40
I don’t know | 15.51 | 14.26 | 72
Skipped | 0 | 0 | 0

Table B.115: Responses to statement that U.S. should work hard to cooperate with China to avoid dangers of AI arms race - Treatment 2: Risks of arms race; N = 493

Response | Weighted % | Unweighted % | n
Strongly agree | 24.97 | 25.96 | 128
Somewhat agree | 25.32 | 25.15 | 124
Neither agree nor disagree | 21.53 | 20.49 | 101
Somewhat disagree | 9.83 | 9.94 | 49
Strongly disagree | 5.84 | 5.68 | 28
I don’t know | 12.51 | 12.78 | 63
Skipped | 0 | 0 | 0

Table B.116: Responses to statement that U.S. should work hard to cooperate with China to avoid dangers of AI arms race - Treatment 3: One common humanity; N = 492

Response | Weighted % | Unweighted % | n
Strongly agree | 23.63 | 24.19 | 119
Somewhat agree | 27.52 | 28.46 | 140
Neither agree nor disagree | 21.31 | 20.33 | 100
Somewhat disagree | 8.50 | 7.32 | 36
Strongly disagree | 6.72 | 6.91 | 34
I don’t know | 12.31 | 12.80 | 63
Skipped | 0 | 0 | 0
Issue areas for possible U.S.-China cooperation
QUESTION:
For the following issues, how likely is it that the U.S. and China can cooperate?
[Respondents were presented with three issues from the list below. All three issues were presented on the same page; the order that they appeared was randomized.]
- Prevent AI cyber attacks against governments, companies, organizations, and individuals.
- Prevent AI-assisted surveillance from violating privacy and civil liberties.
- Make sure AI systems are safe, trustworthy, and aligned with human values.
- Ban the use of lethal autonomous weapons.
- Guarantee a good standard of living for those who lose their jobs to automation.
ANSWER CHOICES:
- Very unlikely: less than 5% chance (2.5%)
- Unlikely: 5-20% chance (12.5%)
- Somewhat unlikely: 20-40% chance (30%)
- Equally likely as unlikely: 40-60% chance (50%)
- Somewhat likely: 60-80% chance (70%)
- Likely: 80-95% chance (87.5%)
- Very likely: more than 95% chance (97.5%)
- I don’t know
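The midpoint codings shown in parentheses make it possible to collapse each of the following tables into a single expected probability. A minimal sketch, assuming the common convention of dropping "I don't know" and "Skipped" responses and renormalizing the remainder, applied to the weighted toplines from Table B.117 below:

```python
# Sketch: collapse categorical likelihood responses into one expected
# probability using the midpoint codings shown in parentheses above.
# Assumption (not stated in the survey report): "I don't know" and
# "Skipped" are dropped and the remaining weighted percentages renormalized.

# (category, coded midpoint in % chance, weighted topline from Table B.117)
responses = [
    ("Very unlikely", 2.5, 9.20),
    ("Unlikely", 12.5, 10.26),
    ("Somewhat unlikely", 30.0, 17.56),
    ("Equally likely as unlikely", 50.0, 23.55),
    ("Somewhat likely", 70.0, 13.77),
    ("Likely", 87.5, 6.98),
    ("Very likely", 97.5, 4.14),
]

def mean_probability(rows):
    """Midpoint-weighted mean, renormalized over substantive responses."""
    total = sum(pct for _, _, pct in rows)
    return sum(mid * pct for _, mid, pct in rows) / total

print(f"{mean_probability(responses):.1f}%")  # about 44.9%
```

Under these assumptions, the expected probability of U.S.-China cooperation on preventing AI cyber attacks works out to roughly 45 percent.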
Table B.117: Likelihood of cooperation with China - Prevent AI cyber attacks against governments, companies, organizations, and individuals; N = 1173
Response | Weighted % | Unweighted % | Raw frequency
Very unlikely < 5% | 9.20 | 8.95 | 105
Unlikely 5-20% | 10.26 | 10.49 | 123
Somewhat unlikely 20-40% | 17.56 | 17.22 | 202
Equally likely as unlikely 40-60% | 23.55 | 23.36 | 274
Somewhat likely 60-80% | 13.77 | 13.73 | 161
Likely 80-95% | 6.98 | 7.25 | 85
Very likely > 95% | 4.14 | 4.18 | 49
I don’t know | 14.45 | 14.75 | 173
Skipped | 0.08 | 0.09 | 1
Table B.118: Likelihood of cooperation with China - Prevent AI-assisted surveillance from violating privacy and civil liberties; N = 1140
Response | Weighted % | Unweighted % | Raw frequency
Very unlikely < 5% | 12.43 | 12.37 | 141
Unlikely 5-20% | 12.78 | 13.33 | 152
Somewhat unlikely 20-40% | 19.48 | 19.74 | 225
Equally likely as unlikely 40-60% | 21.93 | 20.70 | 236
Somewhat likely 60-80% | 10.59 | 10.79 | 123
Likely 80-95% | 4.02 | 4.12 | 47
Very likely > 95% | 3.82 | 4.12 | 47
I don’t know | 14.87 | 14.74 | 168
Skipped | 0.08 | 0.09 | 1
Table B.119: Likelihood of cooperation with China - Make sure AI systems are safe, trustworthy, and aligned with human values; N = 1226
Response | Weighted % | Unweighted % | Raw frequency
Very unlikely < 5% | 6.34 | 6.53 | 80
Unlikely 5-20% | 9.07 | 8.97 | 110
Somewhat unlikely 20-40% | 16.79 | 16.88 | 207
Equally likely as unlikely 40-60% | 26.32 | 25.53 | 313
Somewhat likely 60-80% | 14.84 | 14.85 | 182
Likely 80-95% | 7.35 | 7.26 | 89
Very likely > 95% | 5.77 | 5.87 | 72
I don’t know | 13.38 | 13.95 | 171
Skipped | 0.14 | 0.16 | 2
Table B.120: Likelihood of cooperation with China - Ban the use of lethal autonomous weapons; N = 1226
Response | Weighted % | Unweighted % | Raw frequency
Very unlikely < 5% | 12.28 | 12.32 | 151
Unlikely 5-20% | 11.14 | 10.85 | 133
Somewhat unlikely 20-40% | 14.03 | 14.03 | 172
Equally likely as unlikely 40-60% | 23.98 | 23.65 | 290
Somewhat likely 60-80% | 10.15 | 10.60 | 130
Likely 80-95% | 6.67 | 6.93 | 85
Very likely > 95% | 5.69 | 5.46 | 67
I don’t know | 15.91 | 15.99 | 196
Skipped | 0.14 | 0.16 | 2
Table B.121: Likelihood of cooperation with China - Guarantee a good standard of living for those who lose their jobs to automation; N = 1235
Response | Weighted % | Unweighted % | Raw frequency
Very unlikely < 5% | 13.19 | 13.36 | 165
Unlikely 5-20% | 13.01 | 13.28 | 164
Somewhat unlikely 20-40% | 18.26 | 18.46 | 228
Equally likely as unlikely 40-60% | 22.81 | 22.19 | 274
Somewhat likely 60-80% | 9.46 | 9.39 | 116
Likely 80-95% | 5.08 | 5.18 | 64
Very likely > 95% | 4.27 | 4.53 | 56
I don’t know | 13.78 | 13.44 | 166
Skipped | 0.14 | 0.16 | 2
Trend across time: job creation or job loss
QUESTION:
How much do you agree or disagree with the following statement?
[Respondents were presented with one statement randomly selected from the list below.]
- In general, automation and AI will create more jobs than they will eliminate.
- In general, automation and AI will create more jobs than they will eliminate in 10 years.
- In general, automation and AI will create more jobs than they will eliminate in 20 years.
- In general, automation and AI will create more jobs than they will eliminate in 50 years.
ANSWER CHOICES:
- Strongly agree (2)
- Agree (1)
- Disagree (-1)
- Strongly disagree (-2)
- I don’t know
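The numerical codings above (Strongly agree = 2 through Strongly disagree = -2) can be used to summarize each of the following tables as a single mean agreement score. A minimal sketch, assuming "I don't know" and "Skipped" responses are excluded and the remainder renormalized, using the weighted toplines from Table B.122 (no time frame):

```python
# Sketch: mean agreement score from the numerical codings shown above,
# applied to the weighted toplines in Table B.122 (no time frame).
# Assumption (not stated in the survey report): "I don't know" and
# "Skipped" responses are excluded and the remainder renormalized.

coded = [          # (coding, weighted %)
    (2, 6.37),     # Strongly agree
    (1, 20.19),    # Agree
    (-1, 27.39),   # Disagree
    (-2, 21.43),   # Strongly disagree
]

total = sum(pct for _, pct in coded)
mean_score = sum(code * pct for code, pct in coded) / total
print(f"{mean_score:.2f}")  # about -0.50
```

A negative score on the -2 to 2 scale indicates that, among respondents who took a position, disagreement outweighed agreement.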
Table B.122: Responses to statement that automation and AI will create more jobs than they will eliminate - No time frame; N = 484
Response | Weighted % | Unweighted % | Raw frequency
Strongly agree | 6.37 | 6.82 | 33
Agree | 20.19 | 18.18 | 88
Disagree | 27.39 | 28.10 | 136
Strongly disagree | 21.43 | 22.31 | 108
I don’t know | 24.45 | 24.38 | 118
Skipped | 0.17 | 0.21 | 1
Table B.123: Responses to statement that automation and AI will create more jobs than they will eliminate - 10 years; N = 510
Response | Weighted % | Unweighted % | Raw frequency
Strongly agree | 3.40 | 3.53 | 18
Agree | 17.67 | 18.04 | 92
Disagree | 30.03 | 29.02 | 148
Strongly disagree | 22.85 | 23.92 | 122
I don’t know | 26.04 | 25.49 | 130
Skipped | 0 | 0 | 0
Table B.124: Responses to statement that automation and AI will create more jobs than they will eliminate - 20 years; N = 497
Response | Weighted % | Unweighted % | Raw frequency
Strongly agree | 3.69 | 4.02 | 20
Agree | 17.82 | 17.10 | 85
Disagree | 31.02 | 30.99 | 154
Strongly disagree | 21.31 | 21.73 | 108
I don’t know | 25.98 | 25.96 | 129
Skipped | 0.18 | 0.20 | 1
Table B.125: Responses to statement that automation and AI will create more jobs than they will eliminate - 50 years; N = 509
Response | Weighted % | Unweighted % | Raw frequency
Strongly agree | 6.77 | 6.48 | 33
Agree | 15.37 | 15.52 | 79
Disagree | 35.35 | 35.56 | 181
Strongly disagree | 18.82 | 18.27 | 93
I don’t know | 23.69 | 24.17 | 123
Skipped | 0 | 0 | 0
High-level machine intelligence: forecasting timeline
QUESTION:
The following questions ask about high-level machine intelligence. We have high-level machine intelligence when machines are able to perform almost all tasks that are economically relevant today better than the median human (today) at each task. These tasks include asking subtle common-sense questions such as those that travel agents would ask. For the following questions, you should ignore tasks that are legally or culturally restricted to humans, such as serving on a jury.
In your opinion, how likely is it that high-level machine intelligence will exist in 10 years? 20 years? 50 years? For each prediction, please use the slider to indicate the percent chance that you think high-level machine intelligence will exist. 0% chance means it will certainly not exist. 100% chance means it will certainly exist.
______ In 10 years?
______ In 20 years?
______ In 50 years?
ANSWER CHOICES:
- Very unlikely: less than 5% chance (2.5%)
- Unlikely: 5-20% chance (12.5%)
- Somewhat unlikely: 20-40% chance (30%)
- Equally likely as unlikely: 40-60% chance (50%)
- Somewhat likely: 60-80% chance (70%)
- Likely: 80-95% chance (87.5%)
- Very likely: more than 95% chance (97.5%)
- I don’t know
Table B.126: Forecasting high-level machine intelligence - 10 years; N = 2000
Response | Weighted % | Unweighted % | Raw frequency
Very unlikely < 5% | 4.46 | 4.50 | 90
Unlikely 5-20% | 8.19 | 8.20 | 164
Somewhat unlikely 20-40% | 14.84 | 14.75 | 295
Equally likely as unlikely 40-60% | 20.34 | 19.95 | 399
Somewhat likely 60-80% | 21.08 | 21.25 | 425
Likely 80-95% | 10.69 | 10.65 | 213
Very likely > 95% | 7.40 | 7.85 | 157
I don’t know | 12.91 | 12.75 | 255
Skipped | 0.09 | 0.10 | 2
Table B.127: Forecasting high-level machine intelligence - 20 years; N = 2000
Response | Weighted % | Unweighted % | Raw frequency
Very unlikely < 5% | 1.52 | 1.45 | 29
Unlikely 5-20% | 2.73 | 2.95 | 59
Somewhat unlikely 20-40% | 6.26 | 5.85 | 117
Equally likely as unlikely 40-60% | 16.83 | 16.40 | 328
Somewhat likely 60-80% | 18.17 | 18.65 | 373
Likely 80-95% | 22.25 | 22.25 | 445
Very likely > 95% | 17.91 | 18.30 | 366
I don’t know | 14.18 | 14.00 | 280
Skipped | 0.15 | 0.15 | 3
Table B.128: Forecasting high-level machine intelligence - 50 years; N = 2000
Response | Weighted % | Unweighted % | Raw frequency
Very unlikely < 5% | 2.28 | 2.30 | 46
Unlikely 5-20% | 1.66 | 1.55 | 31
Somewhat unlikely 20-40% | 2.75 | 2.75 | 55
Equally likely as unlikely 40-60% | 10.08 | 9.90 | 198
Somewhat likely 60-80% | 12.33 | 12.20 | 244
Likely 80-95% | 14.43 | 14.50 | 290
Very likely > 95% | 40.86 | 41.15 | 823
I don’t know | 15.52 | 15.55 | 311
Skipped | 0.09 | 0.10 | 2
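The three forecasting tables can be summarized with the same midpoint codings listed in the answer choices. A minimal sketch, assuming "I don't know" and "Skipped" responses are dropped and the remaining weighted toplines renormalized, which recovers the rising trend across horizons:

```python
# Sketch: midpoint-coded expected probability of high-level machine
# intelligence at each horizon, from the weighted toplines in
# Tables B.126, B.127, and B.128. Assumption (not stated in the survey
# report): "I don't know"/"Skipped" are excluded and the rest renormalized.

MIDPOINTS = [2.5, 12.5, 30.0, 50.0, 70.0, 87.5, 97.5]  # coded % chance

weighted = {  # horizon -> weighted % for the seven substantive categories
    "10 years": [4.46, 8.19, 14.84, 20.34, 21.08, 10.69, 7.40],
    "20 years": [1.52, 2.73, 6.26, 16.83, 18.17, 22.25, 17.91],
    "50 years": [2.28, 1.66, 2.75, 10.08, 12.33, 14.43, 40.86],
}

def expected(pcts):
    """Midpoint-weighted mean, renormalized over substantive responses."""
    return sum(m * p for m, p in zip(MIDPOINTS, pcts)) / sum(pcts)

means = {horizon: expected(pcts) for horizon, pcts in weighted.items()}
for horizon, m in means.items():
    print(f"{horizon}: {m:.1f}%")  # rises monotonically with the horizon
```

Under these assumptions, the implied expected probability climbs from roughly 54% at 10 years to roughly 80% at 50 years.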
Support for developing high-level machine intelligence
QUESTION:
How much do you support or oppose the development of high-level machine intelligence?
ANSWER CHOICES:
- Strongly support
- Somewhat support
- Neither support nor oppose
- Somewhat oppose
- Strongly oppose
- I don’t know
Table B.129: Support for developing high-level machine intelligence; N = 2000
Response | Weighted % | Unweighted % | Raw frequency
Strongly support | 7.78 | 8.10 | 162
Somewhat support | 23.58 | 23.30 | 466
Neither support nor oppose | 29.40 | 28.75 | 575
Somewhat oppose | 16.19 | 16.60 | 332
Strongly oppose | 11.02 | 11.10 | 222
I don’t know | 11.94 | 12.05 | 241
Skipped | 0.09 | 0.10 | 2
Expected outcome of high-level machine intelligence
QUESTION:
Suppose that high-level machine intelligence could be developed one day. How positive or negative do you expect the overall impact of high-level machine intelligence to be on humanity in the long run?
ANSWER CHOICES:
- Extremely good
- On balance good
- More or less neutral
- On balance bad
- Extremely bad, possibly human extinction
- I don’t know
Table B.130: Expected outcome of high-level machine intelligence; N = 2000
Response | Weighted % | Unweighted % | Raw frequency
Extremely good | 5.35 | 5.45 | 109
On balance good | 21.28 | 21.25 | 425
More or less neutral | 21.00 | 21.10 | 422
On balance bad | 22.38 | 23.10 | 462
Extremely bad, possibly human extinction | 11.66 | 11.55 | 231
I don’t know | 18.25 | 17.45 | 349
Skipped | 0.09 | 0.10 | 2