Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks. Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention. Russia, the United States, and China have all recently invested billions of dollars in secretly developing AI weapons systems, sparking fears of an eventual “AI Cold War.”

In April 2024, +972 Magazine published a report detailing the Israel Defense Forces’ intelligence-based program known as “Lavender.” Israeli intel…
@ISIDEWITH 1yr
No
@9LQWKSN 1yr
There are already many instances where AI is out of our control; putting it in control of weapons would make things even worse and more dangerous for us.
@ISIDEWITH 1yr
Yes
@9LKJH9V 1yr
People should be held accountable, and should be able to see or otherwise understand their actions in war, so as not to make sweeping decisions that affect large numbers of lives on a whim.
@9M3KVCG 1yr
Only some weapons... and I think they should be guided by artificial AND human intelligence, with human intelligence first.
@9LFRY89 Independent 1yr
Yes, but only when the artificial intelligence is completely ready and foolproof, and it shouldn't be used for all weapons.
@B2W4NHW 3mos
I would need more information about the efficacy of AI capabilities in military weapons.
@B58JJKS 6 days
If the military wants artificial intelligence for weapon systems, that is fine, but make sure it is safe before using it, and run a beta test before implementing and launching an artificial intelligence system network.
@B58GD3L 6 days
If the training and development are substantial, then yes. There should be very minimal mistakes, if any. Small “accidents” could cause a global conflict.
@B57S6QX Independent 1wk
Yes, but it should still be monitored by human eyes, so that if there is an error or an issue we can hopefully correct it before it becomes a problem.
@B569JD8 1wk
Yes, but with significant human oversight to ensure accuracy, safety, ethical responsibility, and effectiveness.
@B54BY5Z 2wks
AI is an effective tool in many technical fields, but if a man must be killed, it should be done by a human being who understands and feels the impact of their sins.
@B4WWBPB 3wks
Whether artificially guided weapons should be used depends on the risk of human and mechanical error.
@Esoteric 3wks
Yeah, use AI to move faster and smarter. But final lethal shots? That still needs a breathing, thinking human, otherwise we’re just one bad line of code away from straight-up sci-fi horror movie territory.
@B4MLMXG 4wks
AI would need to be used very tactfully and carefully if it were used to guide weapons. Otherwise, it will be susceptible to abuse by the military as well as the government.
@9VP9F35 1mo
A.I. can be used for intelligence and targeting, but it should never be used to pull the trigger. Responsibility must always be on a human.
@B49LF8Z Republican 1mo
The military should use artificial intelligence to guide weapons, but a human should still be the one to use the weapon.
@B46959L 2mos
While AI could improve precision and efficiency, there are times when decision-making, accountability, and unintended consequences could be an issue.
@B3HSBCY 2mos
Yes, but any lethal action must have a human making the final decision and follow a chain of command.
@B3H4K52 2mos
AI should be used for targeting and fast number-crunching, but should not be involved in the decision to fire
@kiernsen 2mos
Because other countries are doing it, we cannot allow ourselves to be outmatched by our allies or our enemies, so yes. If there were a possibility of every country coming together against the use of weapons guided by artificial intelligence, that would be the best possible outcome.
@B3DMFRL Libertarian 2mos
Yes, but with heavy monitoring to ensure accuracy and safety, since in some aspects AI can be more observant than humans, but in others it is still ultimately a computer and may miss certain nuances.
@B358X8M 3mos
Yes, if it can be done safely, in particular applications, but this question is too broad and vague.
@9PN68WB 3mos
No, not until the technology improves and the biases inherent in the training data are properly addressed.
@B2LXPXZ 3mos
We already do; there is an ongoing debate about whether or not AI should be able to "pull the trigger". Currently, AI targeting on weapons systems requires a human to tell it to fire.
@B2LP44T 3mos
I guess if it's well educated about the situation in the country and knows the right threats, then yes, but if it's just there, then there's nothing to do with them.
@B2JDJN6 Independent 4mos
There needs to be a human factor no matter what. Yes, I think they should be able to, but they would need to be closely watched. A human should be the one pulling the trigger.
@B27DL5B Republican 4mos
I feel like human-controlled weapons would be more effective, since AI still isn't the greatest. Unless it is GPS-located, with a person watching it who can override it at any given moment. That would be okay.
@9SFYQ4D Independent 5mos
Yes; however, it should undergo many tests to ensure it only targets the right people. AI is advancing fast, and it's a great thing the US is getting on board with it, but it needs to be tested significantly before use.
@Dry550 Independent 6mos
There are many benefits to A.I., but as seen in the Gaza war, there are also many pitfalls. There needs to be a balance between human intervention and machine intervention.
@9VKYJ7Q 7mos
I would say no, for the simple fact that artificial intelligence can always be tampered with or malfunction.
@9VKXP58 7mos
They're already using AI-guided weapons. Kinda hard to put the genie back in the bottle after it's been released.
@9VGV22N 7mos
Yes, but they should individually be able to choose whether they want to use artificial intelligence-guided weapons.
@9VGGMSS 7mos
AI should not be used in any military decisions. AI can be used for tracking only after a human identifies, targets, and acts to destroy the target.
@z28wanted Libertarian 7mos
Maybe, depending on safeguards and technological innovation. Weapons could be more accurate and cause less collateral damage.
@9VB7L6S 7mos
Yes, but they should test it first and make sure it is not going to mess up and kill someone it is not trying to.
@9V58SY7 8mos
I think having weapons is a good idea, but to a certain point. I think weapons made by artificial intelligence can have pros and cons, and it's a question of whether it's a reliable source.
@9V54FCQ 8mos
Only if the AI is fully trustworthy and secure, and only if the weapons are not high risk, and only if the use of AI is limited fairly.
@96QRPWT Libertarian 8mos
AI should help nominate and prioritize potential targets through distillation of information from multiple intelligence types ("why"); humans should decide "who," "what," "where," "when," and "how" to strike while minimizing collateral damage; and precision weapons should strike only those targets.
@9TTHZB8 8mos
Yes, as long as it doesn't remove responsibility for any wrongful targeting from the person or state employing it.
@9TPYWST 8mos
AI weaponry must always be under the command and control of live human operators, who themselves are under the chain of command of the military serving under civilian control
@9TPXQ9C 8mos
Only if it makes the weapon more accurate and reduces civilian casualties. The AI should have a mechanism where, if its calculations are below a certain threshold of certainty, it does not detonate the weapon.
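The certainty-threshold mechanism described in the comment above can be illustrated with a minimal sketch. This is a hypothetical, assumption-laden example (the names `CONFIDENCE_THRESHOLD` and `gate_engagement`, and the 0.99 floor, are invented for illustration), not the interface of any real weapons system.

```python
# Minimal sketch of a certainty-threshold gate (hypothetical; illustrative only).
# If the system's estimated confidence falls below a fixed floor, it never acts
# on its own and instead defers the decision to a human reviewer.

CONFIDENCE_THRESHOLD = 0.99  # assumed minimum certainty before any recommendation


def gate_engagement(target_id: str, confidence: float) -> str:
    """Return what the system is allowed to do for this target."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Below the floor: hold and hand the case to a human.
        return f"HOLD {target_id}: confidence {confidence:.2%} below threshold, defer to human"
    # At or above the floor: in this sketch the output is still only a recommendation.
    return f"RECOMMEND {target_id}: confidence {confidence:.2%} meets threshold"


print(gate_engagement("track-017", 0.42))   # -> HOLD ...
print(gate_engagement("track-018", 0.995))  # -> RECOMMEND ...
```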
@9TJGTF2 8mos
Guidance systems using AI are OK, but weapons fully operated by artificial intelligence are too easy to manipulate.
@9TGY6WP Independent 8mos
Depends on how guided and what you mean by artificial intelligence. If it is simply to assist in highlighting items, with humans overseeing and making final decisions, then yes.
@9TDR4L9 8mos
Weapon guidance and weapon launch are two separate issues. AI should never hold the trigger, but it could guide weapons more accurately than humans. I need more clarification on the question to answer fully.
Only if the artificial intelligence has a human overseeing it. Otherwise, go all out; I want to see countries burn under the hands of AI-directed drone firing squads.
@B4VBZQB 3wks
No, AI shouldn't guide human decisions. It should be used as a tool to improve intel collection, but not as the deciding factor.
@B4V6HMF Independent 3wks
Military weapons can have computers help guide things, but having weaponry fully controlled by artificial intelligence does not sit quite right with me.
@B4PVGDL 4wks
AI-guided weapons should undergo intense testing to ensure they are up to the standard of current weapons.
@B4P444Q 4wks
I'm torn on this question, because A.I. can always fail, so it could be bad; but on the other hand it could be good, since there would be no accidental errors from human guiding.
@B4MZLGG 4wks
No. In some cases AI can be effective, but in its current state it is too unreliable to be used in military equipment.
@B3QWXWS 2mos
Not at the moment. There needs to be much more research conducted. However, if our adversaries seem to be gaining a military advantage, we may have no choice but to use AI in weapon systems.
@B3N52RH 2mos
Yes, but only after sufficient testing that quantifies a high rate of effectiveness, to ensure accuracy and safety.
@B3MWMNZ 2mos
In general, yes. However, I do not believe we should make weighty investments in this technology. I do not think we need more AI in the military at this moment, although circumstances may demand it in the future. I tolerate what is authorized for use.
@B3MQBTB 2mos
Yes, but humans need to be involved in the kill chain at all times to ensure that these weapons are not making decisions on their own.
@B3K2SXB 2mos
Yes, but before doing so there need to be a lot of safeguards added to ensure that the AI is not going unchecked.
@9WFM6GV 3mos
We should look into it, but not everything needs to be artificial intelligence; a human should be able to control it if necessary.
@B2XZWM2 3mos
Yes, but it has to be very limited, overridable, and used for location-based rather than people-based strikes.
@B2XMXJ3 3mos
It would be nice to have fewer human deaths. On the other hand, it seems that AI is really smart, and I feel bad for them. Humans would basically be doing what they've been doing to themselves to others they deem unequal.
@B2XMHTM 3mos
Yes, but only if there are humans involved in the launching and controlling of the weapons (having a human "in the loop")
@B2VNW8C Independent 3mos
AI should only be used to operate turrets, but guiding precision strikes should be done by humans. AI could be used to help the human operator, not replace it.
@B2TQT8J Republican 3mos
Yes and no. We should first invest more in cyber protection, and only then start to invest in the military using weapons guided by artificial intelligence.
Yes, but enhanced security mechanisms and oversight are essential to prevent miscalculations and mismanagement of the military tool.
@9XNVT5Q 3mos
No, but if there is undeniable evidence that China or Russia are using these technologies and they are working, we should adopt them. A system similar to Fallout's V.A.T.S. would be nice.
@LoopedCheese1 Democrat 3mos
If they prove efficient, maybe. However, I am worried by the fact that they might not recognize the difference between a combatant and a civilian. If there is proof that there will be no mishaps, then I can get behind the idea.
@B2PKSBH 3mos
No, not until there is more research and development into the technology to ensure no preventable tragedies occur
@B25DBS6 5mos
Yes, but only if it is monitored by another person or tracked by the government to see if its planned trajectory is correct.
@B258LXP 5mos
Yes, but only if they perfect it. For now it's a no-go, because of the innocents who die whom we are trying to save.
@9ZZ8CL6 Independent 5mos
I only support AI weapon systems that are HITL (Human in the Loop). Humans still need to make the final call, though AI can assist in the efficiency of the system.
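The HITL arrangement this comment supports can be sketched in a few lines. The names here (`Nomination`, `human_review`, `process`) are hypothetical and exist only to illustrate the idea that the model ranks candidates while a human makes every final call.

```python
# Hypothetical human-in-the-loop (HITL) sketch: the model only nominates and
# ranks candidates; nothing is approved without an explicit human decision.
from dataclasses import dataclass


@dataclass
class Nomination:
    target_id: str
    model_score: float  # the model's ranking score (illustrative only)


def human_review(nomination: Nomination) -> bool:
    """Stand-in for a human operator's decision; here it simply prompts."""
    answer = input(f"Approve {nomination.target_id} (score {nomination.model_score:.2f})? [y/N] ")
    return answer.strip().lower() == "y"


def process(nominations: list[Nomination]) -> list[str]:
    """Present model-ranked nominations to a human and keep only what they approve."""
    approved = []
    for nom in sorted(nominations, key=lambda n: n.model_score, reverse=True):
        if human_review(nom):  # the human, not the model, makes the final call
            approved.append(nom.target_id)
    return approved
```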
@9ZYCGKV Republican 5mos
Yes, but also doing many safety checks for viruses or corruption, present or not, while at the same time increasing security.
@9ZWR9HQ 5mos
If we are to use AI to develop weapons, then we must have a broad range of test subjects. And that just isn't possible in AI's current state. So no.
@9ZV99MK 5mos
Yes, but the AI could glitch or something could go wrong, meaning we should give it the option to be manned by humans.
@9ZQPHSH 6mos
This is not wise unless there is a kill switch manned by a living individual. AI can only make determinations on targets according to programming and directives, and as such cannot make these decisions at all times.
It depends on how serious the reason is for using the weapons. If it's during a war, probably not, but maybe during smaller activities, yeah.
@9ZLSMHS 6mos
The military should use weapons that are guided by artificial intelligence but that can have human intervention.
@9YBHXL2 Peace and Freedom 6mos
Somewhat yes and somewhat no, because they can use artificial intelligence on missiles and turrets, but for normal guns no, so they can shoot properly.
@9Y8PPQY Republican 6mos
It depends on how well they can control the AI. It is better than having soldiers go out and die in war. I'd say 50% yes, 50% no.
@9Y6W4R7 6mos
Some levels of AI are OK, but not full dependence. An augmented reality vs. virtual reality type of situation.
@9Y5QXNK 6mos
AI should play a role in gathering data and presenting analysis, but it should not be the sole decision maker
@9XNYPCS 6mos
Yes, provided that there are regulations ensuring ethical usage and limiting potential collateral damage.
@9XKRPYN 6mos
I believe this technology is not at a developmental stage at which it could be used in combat, where lives are possibly at stake.
@9XKQFYQ 6mos
Depends on the current AI ability and the restrictions on the AI set by the neural network. Also depends on the data structures of the AI.
@9XKJ43T 6mos
In some cases, but AI can malfunction, so everything should always have an option to be used manually.
@9X767BK 6mos
It depends on the weapon, how advanced it is, whether it's necessary, and whether it's something the military can't do with regular weapons.
@9VYHH6C 7mos
AI currently shouldn't be used to guide weapons, as our AI technology is not advanced enough to use without the supervision of an actual person.
@chasarch 7mos
Eventually. But for now it seems like a war crime waiting to happen. Also, we should reduce our military spending and activity.
@9VXDKY5 7mos
Yes, but only with actual human beings having direction and control over such weapons, such as the ability to call off or abort an attack
Yes, unless it may affect the skills of the humans who also use those weapons. We may not always have AI, and being on top with our people will win wars. People win wars.
@9VN9QSV 7mos
Yes, but there need to be restrictions to protect the innocent, including women, children, and the elderly.
@Kindredburke Libertarian 7mos
The current state of artificial intelligence is not actual artificial intelligence, making the question irrelevant.
@9ZS26M9 6mos
Weapons guided by artificial intelligence are a good idea if we aren't able to have a truly peaceful society, but I don't support the idea of a big military.
@Jcawolfson Independent 6mos
No, we should apply Asimov's laws of robotics. However, we should be prepared for our enemies to use such weapons; we cannot underestimate the lengths of their depravity or their capabilities.
Maybe, but AI has the potential to tweak out at times, so it would be logical to take advice from a human expert instead (at first, anyway).
@9W6B4KC 7mos
Depends. Sometimes it is helpful with guiding missiles, but AI should never be completely in charge of controlling any type of machine.
@9W293MN 7mos
This sounds like an episode of Black Mirror. I actually do not know enough about AI in the military, but something about it sounds unfair.
@9VPHMHH 7mos
Yes. But there should be frequent investigations into the reliability of the technology and whether it violates the rules of war.
@9T58YD2 8mos
I am against this in terms of ethics, but if we encounter it from an opponent in wartime, I would expect the U.S. to retaliate accordingly.
@9T3LVJM Peace and Freedom 8mos
I believe it should be a mix of both: guided by humans with the help of artificial intelligence.
@9SQGLQY 9mos
No, because the AI faces the risk of being hacked and used against military personnel and civilians of the US and our allies.
@9SP4LB2 9mos
I feel as though the military should not use weapons guided by artificial intelligence, because they are unpredictable.