AI in defense refers to the use of artificial intelligence technologies to enhance military capabilities, such as autonomous drones, cyber defense, and strategic decision-making. Proponents argue that AI can significantly increase military effectiveness, provide strategic advantages, and improve national security. Opponents argue that AI poses ethical risks, risks the loss of human control, and can lead to unintended consequences in critical situations.
@ISIDEWITH1yr1Y
No
@B5FRQBT2mos2MO
AI is just not advanced enough yet; most EVs that use AI still can't even detect children crossing the road. What if that were to happen with a missile accidentally firing and hitting civilians in an unarmed nation?
@B59PVGN2mos2MO
If we give this power to AI, how do we know that we can take it away? Once, when they tried to delete an AI app, it made another version of itself and hid it from its creators.
@B4NYQZ73mos3MO
Imagine how the military and U.S. agencies could use this technology to harm U.S. citizens and further strip our rights away.
@ISIDEWITH3mos3MO
Yes, but only to assist and not replace human decision making
@B4NYQZ73mos3MO
AI would be incredibly helpful in fighting against enemies of the United States, by being able to identify foreign adversaries on the battlefield.
@ISIDEWITH3mos3MO
Yes, but with very strict oversight and regulations
@B5V7PMWIndependent2wks2W
AI is vital for the defense of the United States, as it eliminates human error and protects America from threats quickly.
@B5PKF7K1mo1MO
Because the environmental costs of AI are ludicrous, and boiling the planet is bad for everyone's safety
@B5FRQBT2mos2MO
With how AI currently is, even with strict oversight it could just shoot off a missile at a random place by declaring it a "threat." Until AI is more advanced, it should be kept away from firing weapons.
@B59PVGN2mos2MO
AI is not as advanced as people act like it is; it is still very buggy and could lead to horrible things.
@ISIDEWITH10mos10MO
Do you think letting machines make life-and-death decisions in military conflicts is a necessary step forward or does it cross an ethical line?
@9VNR3SY9mos9MO
I'm not too sure. AI helps a lot, but I think it shouldn't be used for real-life matters such as courts, politics, and government.
@9VNR2FQ9mos9MO
No, human intelligence is more important and meaningful than AI.
@9VNQVWV9mos9MO
No, because the AI wouldn't understand all of the context for the war.
@9VNQRHN9mos9MO
No, it is absolutely unethical. Posing a question of life or death to a machine minimizes the significance of that event.
@9MM5PH41yr1Y
Yes, but increase oversight with strict regulations.
Experimentation with AI in controlled environments is fine, but it shouldn’t be applied until more regulation is put in place to ensure ethical usage and accountability for collateral damage
Experimenting with AI in controlled environments is fine, but it shouldn’t be widely applied until there are regulations put in place that ensure accountability and prevent potential unintended consequences
@9N589VHIndependent1yr1Y
Yes, if humans are making final decisions for strategic decision-making. AI is likely already used in some capacity.
@9S2PDWW11mos11MO
I am very skeptical of artificial intelligence, but I see its practicality in military uses. I support it, but in a limited capacity with the utmost caution.
@ISIDEWITH10mos10MO
What are your thoughts on who should be held responsible if an AI system makes a mistake that results in the loss of lives during a conflict?
@9TNHQ3G10mos10MO
The government should be responsible for mistakes.
@9TNHSW510mos10MO
Whoever gave the AI permission to do what caused the incident.
@9T7DM2Q8mos8MO
Anyone (the government) who approved its use. AI is not suitable for tasks such as this and is prone to mistakes. As someone with knowledge and experience in programming, I believe AI should never be used in cases where human lives are at stake.
@ISIDEWITH10mos10MO
How do you think the use of AI in national defense aligns with our values around human rights and justice?
@9TLZRSGRepublican 10mos10MO
I think at the rate we're going, AI will be dangerous. If AI has all of the information it needs, there could be national security issues, and the question of when it is too intelligent will arise.
@vwilson98 10mos10MO
AI should not be used in national defense. There are some things that humans should be directly and solely in control of.
@maadiman1170Libertarian 11mos11MO
Yes, but only to the extent that they would need to subvert such applications used by foreign adversaries - no domestic applications whatsoever.
@9TZTZSF9mos9MO
Yes, but only if it's a more reliable one than the ones that Google and other tech companies have made.
@ISIDEWITH10mos10MO
Do you think AI could help prevent wars from happening or will it just escalate arms races between countries?
@7NN387N 10mos10MO
No, these are skills that should be learned by people instead, and AI should not be trusted for critical situations affecting the lives of others
Not yet. AI is too new and experimental to utilize in situations as delicate as military and national security. Maybe later, when the technology has been refined and studied more, we can look at using it for national security.
@9P9GHJ91yr1Y
Yes, but there are far too many variables to say that this would not have adverse effects within the next century.
@9SV9J3J10mos10MO
Could be helpful, but takes away jobs and could make us attack poorly during war. It could also be hacked.
@9P52WM8 1yr1Y
Yes, but only in areas like modelling and computing that could be improved with AI. Critical missions and important decisions should still be carried out by humans.
@9NHRXVX1yr1Y
Not until there's a clear understanding of AI capability, followed by strict laws and regulations on it.
Yes and no, it can be a useful tool and we don’t want to fall behind our enemies, but we don’t want to become too reliant on something that could turn against us. It shouldn’t be incorporated in things like weapons or targeting systems.
@B5YFPTJProgressive3 days3D
Yes, but it should be with strict oversight and regulations, as well as not replacing human decision making, only assisting in that.
@B5XCPM9 1wk1W
No, because AI is not always reliable and accurate, and it is known for being used for malicious purposes. And people working in defense applications could lose their jobs.
@B5XBB461wk1W
The government should invest in defense against AI, because if it doesn't, all Americans (and also all non-Americans) will be dead by 2050.
@B5WDD242wks2W
Yes, but only with strict regulations and oversight guaranteeing accountability and limiting collateral damage
@B5SQ2QX3wks3W
The role of the government in AI is to regulate the environment in which the industry is developing. Suppose we apply AI to the defense of our nation. In that case, it needs to be tightly regulated with strict guidelines that benefit the American people exclusively, rather than corporations or those who seek to benefit from these agreements.
@B5M9VSWRepublican1mo1MO
AI within government defense applications should have strict oversight and regulations, but it should only be developed as a means of assisting human decision-making.
@B5LD8GR 1mo1MO
No, artificial intelligence is incredibly detrimental to the environment, and in its current state is nowhere close to reliable.
Yes, but with strict oversight and regulations, and make sure it doesn't replace human jobs or decision-making.
@B5H9ZZ62mos2MO
It should be invested in only to assist, not replace, human decision-making, while greenlighting more testing in controlled environments.
@B5DHWRG2mos2MO
AI is already going to do this. However, AI ethics and our understanding of AI should come first. Application without guardrails is a time bomb, even in war.
@B58NX8HIndependent2mos2MO
If they want to do that, that is fine, but they need to test it out before actually applying it to defense capabilities, so launch a beta version before anything happens.
@B57P8KDIndependent2mos2MO
Yes, but only to assist and not replace human decision making and with very strict oversight and regulations
Yes, because other governments / groups will develop competing AI that will get through our defenses
@B4KKGYL3mos3MO
No, even though AI is powerful, it can still make immoral or dehumanizing decisions by error. No AI is perfect.
@B4HWQD2Progressive3mos3MO
Yes but only enough to keep up with other countries with very strict regulation in place to prevent unwarranted and unsafe use of imperfect and unproven systems!
@B4HB5QSProgressive3mos3MO
Invest in it enough to keep up, but do not actively use it until very specific requirements are met.
@B4H5RC23mos3MO
Depends on the context. We need to invest in it and be able to counter it, but it shouldn't be completely independent.
@B4FNL7M3mos3MO
Cautious approach; while research might be necessary, prioritize ethical considerations and concerns about autonomous weapons systems; focus on diplomatic solutions.
@B4D6KHP3mos3MO
No, allow private companies to do it instead for the sake of capitalism, federalism, weak government, low taxes, low national debt, and checks and balances.
@B4CSJFP3mos3MO
Yes, the government should invest in artificial intelligence for defense applications, as long as they know how to handle the situation of losing control of a military machine and can prevent unintended consequences in critical situations.
@B4CQGX73mos3MO
Yes, but not significantly, because it poses ethical risks; it should be strictly supervised by humans.
@9FZPSHS 3mos3MO
No, as the technology is still too inconsistent and unreliable; once it improves they should consider it
@B4BK4TW3mos3MO
Yes, but it should be highly regulated and only used in extreme situations in a limited capacity. I only say yes because the US cannot be at a disadvantage to other countries who have invested in AI. AI is a scary thing; it has the potential to do great things, but it also has the power to be extremely harmful.
@B45Y2S93mos3MO
Yes, with failsafes in place for absolute redundancy (e.g. partially autonomous fighter planes, with a pilot to make sure the system is working properly and as intended)
@B45NVLCLibertarian3mos3MO
No, military strength should be privatized. But development of this technology to defend liberty is critical
@B3ZYM5D4mos4MO
No, AI should be strongly discouraged for the sake of keeping the Unemployment Rate from surging and preserving human safety
@B3X26PD4mos4MO
No, AI is still learning. Even though AI is intelligent, it can still be prone to errors and misinformation.
@B3VGV2T 4mos4MO
Yes, government investment in AI for defense applications is a strategic necessity, offering significant advantages in national security and global competitiveness, but requires careful ethical and regulatory oversight.
Here's a more detailed look at the arguments for and against AI in defense:
Arguments for Government Investment:
Enhanced Decision-Making:
AI can analyze vast amounts of data to improve the speed, accuracy, and effectiveness of military decision-making, providing a "decision advantage".
Improved Situational Awareness:
AI-powered systems can enhance threat asse…
@B3SJ67N4mos4MO
Yes, but only to not fall behind other nations which will surely implement new technologies like AI.
@B3RWGBM4mos4MO
I think AI could be useful in some circumstances, but I don't think that our country should fully invest in AI technology, as it could prove unreliable in some areas.
@B3QZNB84mos4MO
Depends on what the defense applications are for. If they are for Israel, then no. But for an occupied nation, definitely. I don’t feel like that’s going to happen, though.
@B3QM2V94mos4MO
Only if the president is of sound mind and uses it to defend our country and not against its citizens.
@B3PYZBM4mos4MO
Keep AI out of all government. It’s too prone to error, manipulation, and making decisions based on inaccurate information.
@B3NSCXT4mos4MO
Using AI for defense purposes would be very helpful, but people could hack into it and use it against whoever deployed it.
@B3MNRGF4mos4MO
It has been proven time and time again that AI has the ability to malfunction, just like every other type of technology.
@B3HF4VR4mos4MO
Yes, but only to provide aid in strategic decision-making and intelligence; not to implement autonomous, potentially dangerous actions.
@B3H2FC24mos4MO
Yes, but not for autonomous drones. AI should only be utilized for assisting in strategic decision making or designing defense technology.
@B3GT2Q94mos4MO
Yes, to an extent. We shouldn't become reliant on it, but if it works for the few we send out, maybe invest in more.
@B3GHF4K4mos4MO
Unless it is proven that AI can solve problems we have been unable to, it should be left alone and that money should go to other things.
@B3C4MBBProgressive 4mos4MO
NO! Artificial intelligence, no matter how much we work to make it subservient to us, is too unpredictable and would most probably exaggerate its directives and be our undoing.
@B39Y6CV4mos4MO
No, AI is too new and untrained; it would make as many errors as a human, since it doesn't have the best memory. This should be left to humans for right now.
@B36PBY54mos4MO
I think to an extent. Not using it for full control, but maybe for some new perspectives on strategies and whatnot.
@B354R5K4mos4MO
Yes, for very minimal use once AI is more developed. It could be used in small ways but should never become a central unit in the U.S military as it poses ethical risks and cannot make complex humane decisions.
@B3547RT4mos4MO
I think it could be beneficial, but also it could potentially be very harmful in the future if AI becomes too powerful.
@B34LF3T5mos5MO
It depends on what you mean by defense applications. If it means developing robot soldiers to replace humans, I could see the benefit in something like that, as it would save human lives in warfare. However, the things that could potentially go wrong with something like that concern me, so I lean towards no.
@B334T97Libertarian5mos5MO
It depends on what for. If the AI is tracking people, then no; if it's for military rockets, then yes.
@B2ZNDHC5mos5MO
Yes, but only if the AI is constantly monitored by trained personnel, and only if the AI's decision to launch weapons can be overruled.
@B2ZDKCV5mos5MO
Yes, AI should never be used to make a final decision, but should be able to be used for information-gathering purposes
@B2SHHWC5mos5MO
AI needs to be studied and heavily regulated before giving it any access to a military superpower. Furthermore, any access it is given needs to have failsafes and be heavily monitored and regulated.
@B2S5S6B5mos5MO
Only for detection and code, not for physical offense and defense without human control, as it can be hacked or make the wrong decision.
@B2RBLZN5mos5MO
No, I don't believe the technology is there yet for me to support this, but down the line I could see it being used.
Deleted5mos5MO
Yes, but cautiously; the technology is too young to be used commercially and presents several risks.
@B2R684N5mos5MO
No, we should not have AI defense applications ever. That would be equal to asking whether we should have more nuclear warheads, though. You already opened that box; we can't do much now.
@B2QZGN25mos5MO
Yes, but we need to understand that AI is subject to being inherently flawed. It can only make assumptions based on information given to it, much like humans. The area in which AI prevails is in bulk operation or mundane tasks.
@B2P338V5mos5MO
I mean, with AI it depends on what you're going to use it for, like wars or just asking questions, because creating robots that are taught to destroy and kill is dangerous to the world; they can turn on us quickly.
@B2MWBD65mos5MO
Only if it is used with very strict regulations and can always be reverted back to human control within seconds.
Yes, but only for the sake of its intended use, and there should be tight regulations to ensure the government does not violate our safety, privacy or infringe on our individual freedoms.
@B2K2QD75mos5MO
That depends, because it can be used for good but also can be used for bad. This is a hard question, but I think it's important.