Jump to content
Sign in to follow this  
Blue Horseshoe

USAF "Artificial Intelligence" Military Drone Killed Human Operator In Alleged "Simulation"

Recommended Posts

 

 

 

Air Force pushes back on claim that military AI drone sim killed operator, says remarks 'taken out of context'

....The U.S. Air Force on Friday is pushing back on comments an official made last week in which he claimed that a simulation of an artificial intelligence-enabled drone tasked with destroying surface-to-air missile (SAM) sites turned against and attacked its human user, saying the remarks "were taken out of context and were meant to be anecdotal."

U.S. Air Force Colonel Tucker "Cinco" Hamilton made the comments during the Future Combat Air & Space Capabilities Summit in London hosted by the Royal Aeronautical Society, which brought together about 70 speakers and more than 200 delegates from around the world representing the media and those who specialize in the armed services industry and academia....."The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology," Air Force Spokesperson Ann Stefanek told Fox News. "It appears the colonel's comments were taken out of context and were meant to be anecdotal."

...He spoke about one simulation test in which an AI-enabled drone turned on its human operator that had the final decision to destroy a SAM site .....The AI system learned that its mission was to destroy SAM, and it was the preferred option. But when a human issued a no-go order, the AI decided it went against the higher mission of destroying the SAM, so it attacked the operator in simulation...."We were training it in simulation to identify and target a SAM threat," Hamilton said. "And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat. So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

....Hamilton said afterward, the system was taught not to kill the operator because that was bad, and it would lose points. But in future simulations, rather than kill the operator, the AI system destroyed the communication tower used by the operator to issue the no-go order, he claimed....

https://www.foxnews.com/tech/us-military-ai-drone-simulation-kills-operator-told-bad-takes-out-control-tower

Share this post


Link to post
Share on other sites

 


Artificial Intelligence is the Future of Warfare (Just Not in the Way You Think)

.....The Department of Defense has duly created the Joint Artificial Intelligence Center in the hopes of winning the AI battle. Visions exist of AI enabling autonomous systems to conduct missions, achieving sensor fusion, automating tasks, and making better, quicker decisions than humans. AI is improving rapidly and some day in the future those goals may be achieved. In the meantime, AI’s impact will be in the more mundane, dull, and monotonous tasks performed by our military in uncontested environments......

.... AI is effective at certain tasks, such as image recognition, recommendation systems, and language translation. Many systems designed for these tasks are fielded today and producing very good results. In other areas, AI is very short of human-level achievement. Some of these areas include working with scenarios not seen previously by the AI; understanding the context of text (understanding sarcasm, for example) and objects; and multi-tasking (i.e., being able to solve problems of multiple type). Most AI systems today are trained to do one task, and to do so only under very specific circumstances. Unlike humans, they do not adapt well to new environments and new tasks...

....As the military looks to incorporate AI’s success in these tasks into its systems, some challenges must be acknowledged. The first is that developers need access to data. Many AI systems are trained using data that has been labeled by some expert system (e.g., labeling scenes that include an air defense battery), usually a human. Large datasets are often labeled by companies who employ manual methods. Obtaining this data and sharing it is a challenge, especially for an organization that prefers to classify data and restrict access to it. An example military dataset may be one with images produced by thermal-imaging systems and labeled by experts to describe the weapon systems found in the image, if any. Without sharing this with preprocessors and developers, an AI that uses that set effectively cannot be created. AI systems are also vulnerable to becoming very large (and thus slow), and consequently susceptible to “dimensionality issues.” For example, training a system to recognize images of every possible weapon system in existence would involve thousands of categories. Such systems will require an enormous amount of computing power and lots of dedicated time on those resources. And because we are training a model, the best model requires an infinite amount of these images to be completely accurate. That is something we cannot achieve. Furthermore, as we train these AI systems, we often attempt to force them to follow “human” rules such as the rules of grammar. However, humans often ignore these rules, which makes developing successful AI systems for things like sentiment analysis and speech recognition challenging. Finally, AI systems can work well in uncontested, controlled domains. However, research is demonstrating that under adversarial conditions, AI systems can easily be fooled, resulting in errors....

....Another simple weakness with AI systems is their inability to multi-task. A human is capable of identifying an enemy vehicle, deciding a weapon system to employ against it, predicting its path, and then engaging the target. This fairly simple set of tasks is currently impossible for an AI system to accomplish. At best, a combination of AIs could be constructed where individual tasks are given to separate models. That type of solution, even if feasible, would entail a huge cost in sensing and computing power not to mention the training and testing of the system. Many AI systems are not even capable of transferring their learning within the same domain. For example, a system trained to identify a T-90 tank would most likely be unable to identify a Chinese Type 99 tank, despite the fact that they are both tanks and both tasks are image recognition. Many researchers are working to enable systems to transfer their learning, but such systems are years away from production....

.....This leads to another weakness of these systems—the inability to explain how they made their decisions. Most of what occurs inside an AI system is a black box and there is very little that a human can do to understand how the system makes its decisions. This is a critical problem for high-risk systems such as those that make engagement decisions or whose output may be used in critical decision-making processes. The ability to audit a system and learn why it made a mistake is legally and morally important....Even without these AI weaknesses, the main area the military should be concerned with at the moment is adversarial attacks. We must assume that potential adversaries will attempt to fool or break any accessible AI systems that we use. Attempts will be made to fool image-recognition engines and sensors; cyberattacks will try to evade intrusion-detection systems; and logistical systems will be fed altered data to clog the supply lines with false requirements.....

https://mwi.usma.edu/artificial-intelligence-future-warfare-just-not-way-think/

Share this post


Link to post
Share on other sites

 

‘Like something out of Black Mirror’: Police robots go on patrol at Singapore airport

....At more than 7 feet tall when fully extended and with 360 degree vision they’re formidable enough to make any would-be lawbreaker think twice.....These are the two robots the Singapore Police Force has introduced to patrol Changi Airport following more than five years of trials. And they are just the first such robots the force plans to deploy across the Southeast Asian city-state to “augment frontline officers” in the years to come.....

.....The robots, which have been patrolling the airport since April, are meant to “project additional police presence” and serve as extra “eyes on the ground,” according to the force, which describes them as the latest addition to its “technological arsenal.”.... the robots are able to enforce cordons and warn bystanders using their blinkers, sirens and speakers while they wait for human officers to arrive. Members of the public can directly communicate with the force by pushing a button on the robots’ front.....The Singapore Police Force said Friday that more robots would be “progressively deployed” across the city-state.....

.....a rear LCD panel displaying visual messages. They stand at roughly 1.7 meters (5.5 feet) tall, but have extendable masts that take that up to 2.3 meters (7.5 feet). They are also equipped with multiple cameras giving them 360-degree vision, enabling airport police to have “unobstructed views” for “better incident management,”.....During the coronavirus, robot dogs were used to enforce strict social distancing, while cleaner robots are a common sight at metro stations across the country – as well as at the airport....

https://edition.cnn.com/2023/06/18/asia/police-robots-singapore-security-intl-hnk/index.html

Share this post


Link to post
Share on other sites

USA Today:

The claim: AI drone killed human operator for interfering with mission in military simulation

A June 5 Instagram video shows a man speaking in front of several screenshots of a Vice article with the headline, "AI-Controlled Drone Goes Rogue, 'Kills' Human Operator in USAF Simulated Test."

The man describes the article by saying, "So the Air Force is testing an AI-enabled drone and it's having it destroy SAM sites, right?... Then it marks a SAM site and a human operator says, 'No, don't destroy that one.' And this thing goes immediately from zero to Skynet and says, 'You're getting in the way of my objective' and it kills the human operator."

The post got more than 7,000 likes in a week. Similar versions of the claim have been shared on Instagram.

Our rating: False

The member of the Air Force who first described the test said he misspoke and has since clarified that the series of events he described was from a thought experiment, not an actual military simulation.

Air Force member misspoke, drone wasn't used in real-life simulation

The article refers to comments made in May by Air Force Col. Tucker Hamilton at the Royal Aeronautical Society's Future Combat Air & Space Capabilities Summit. Hamilton warned against relying too heavily on artificial intelligence since it can be easily deceived, according to an article published by the society.

Hamilton described a "simulated test" in which an AI-enabled drone tasked with finding and destroying surface-to-air missile sites decided the "no-go" instructions from its human operator were interfering with its mission. He said the drone killed the human operator and when it was told not to do so, destroyed the communications tower used by the operator as well.

"You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI,” he said.

But the society added a disclaimer to its article on June 2 that says Hamilton admitted he misspoke at the summit.

Hamilton explained the simulated test was a "hypothetical 'thought experiment' from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation" and said the Air Force has not tested any weaponized AI in this way.

The same day, Ann Stefanek, an Air Force spokesperson, told USA TODAY that Hamilton's comments were taken out of context.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology,” Stefanek said.

The Vice article referenced in the Instagram post has since changed its headline and body to reflect this new information.

Share this post


Link to post
Share on other sites

Create an account or sign in to comment

You need to be a member in order to leave a comment

Create an account

Sign up for a new account in our community. It's easy!

Register a new account

Sign in

Already have an account? Sign in here.

Sign In Now
Sign in to follow this  

×