Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks. Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention. Russia, the United States, and China have all recently invested billions of dollars secretly developing AI weapons systems, sparking fears of an eventual "AI Cold War." In April 2024, +972 Magazine published a report detailing the Israeli Defense Forces' intelligence-based program known as "Lavender." Israeli intel…
Response rates from 17.3k American voters.
43% Yes
57% No
Trend of support over time for each answer from 17.3k American voters.
Trend of how important this issue is for 17.3k American voters.
Unique answers from American voters whose views went beyond the provided options.
@9M3KVCG · 1yr
Only some weapons... and I think that they should be able to be guided by artificial AND human intelligence - human intelligence first.
@9LFRY89 · 1yr
Yes, but only when the artificial intelligence is completely ready and foolproof, and it shouldn't be used for all weapons.
@B2W4NHW · 5mos
I would need more information regarding the efficacy of the AI capabilities regarding military weapons
@B5XCPM9 · 4 days
No, AI should never be used for possible life altering and changing and threatening situations as it is not always accurate and reliable.
@B5X23FC · 1wk
The military should be allowed to have the choice to use AI-guided weapons, but with a concrete system of checks and balances, and only at the discretion of top artificial intelligence researchers
@B5WN5NZ · 1wk
Yes and no.
• Human-in-the-Loop Systems: Many experts support AI-assisted weapons with human oversight—where a human must approve lethal decisions.
• International Laws: Groups like the UN and Human Rights Watch advocate for treaties banning or regulating fully autonomous lethal weapons.
• Transparency and Ethics Boards: Militaries can form oversight bodies to ensure ethical use and clear rules of engagement for AI systems.
@B5WDD24 · 2wks
Yes, as long as there are clear ethical regulations ensuring accountability and avoiding collateral damage
@B5VXCLH · 2wks
Yes, if the AI system can identify and track targets but a human must authorize any use of force (human-in-the-loop), or if the AI system can select and engage targets autonomously but a human supervises and can intervene or override decisions (human-on-the-loop). However, no if the AI system operates entirely autonomously, with no human intervention at the point of lethal action (human-out-of-the-loop).