Artificial Intelligence In The Military Research Papers

Hallaq, Bilal, Somer, Tiia, Osula, Anna-Maria, Ngo, Kim and Mitchener-Nissen, Timothy (2017) Artificial intelligence within the military domain and cyber warfare. In: 16th European Conference on Cyber Warfare and Security (ECCWS 2017), Dublin, Ireland, 29-30 June 2017. Published in: Proceedings of the 16th European Conference on Cyber Warfare and Security. ISBN 9781510845190.


Official URL: http://www.academic-bookshop.com/ourshop/prod_6119...


Abstract

The potential uses of machine learning and artificial intelligence in the cyber security domain have seen a recent surge of interest. Much of the research and discussion in this area focuses on reactive uses of the technology, such as enhancing incident-response capabilities, aiding the analysis of malware, or helping to automate defensive positions across networks. In this paper, the authors present an overview of machine learning as an enabler of artificial intelligence and of how such technology can be used within the military and cyber warfare domain. This represents a shift in focus from commercial, civilian machine learning applications, which include self-driving vehicles, speech/image/face recognition, fraud prevention, the optimisation of web searches, and so forth. While the underlying technological processes remain the same, what is altered is the focus of application: applying machine learning to create Intelligent Virtual Assistants for the battlefield, to automate the scanning of satellite imagery to detect specific vehicle types, to automate the selection of attack vectors and methods when conducting offensive cyber warfare, and so on. Machine learning solutions offer the potential to assist a Commander in making decisions in real time that are informed by the accumulated knowledge of hundreds of previous engagements and exercises, assessed at computational speeds. With these potential use cases in mind, the authors highlight some of the legal and ethical issues raised by the application of weapons enhanced with artificial intelligence, machine learning and automated processes. As the authors note, however, there are conflicting views over the ethics of weaponising these technologies. Critics question whether automated weapon systems that exclude human judgment can comply with International Humanitarian Law, charging them with threatening the fundamental right to life and the principle of human dignity. Conversely, others view this progress in weapon development as inevitable, and hold that attempts to ban autonomous weapon systems would be both premature and insupportable.
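
To make the kind of application the abstract sketches more concrete, the following is a minimal, illustrative training loop for flagging specific vehicle types in satellite-image tiles. The folder layout, class names, model choice and hyperparameters are assumptions made purely for illustration; the paper itself does not prescribe any particular implementation.

```python
# Illustrative sketch only: fine-tune a small convolutional classifier to flag
# vehicle types in satellite-image tiles. Paths, class labels and hyperparameters
# are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Tiles are assumed to be pre-cut into per-class folders, e.g.
# tiles/train/{armoured_vehicle,truck,none}/*.png  (hypothetical layout)
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("tiles/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the final layer
# with one output per vehicle class.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.3f}")
```

In a real pipeline such a classifier would be only one component, sitting behind substantial data-collection, validation and human-review stages; the sketch shows only the core supervised-learning step.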

Item Type: Conference Item (Paper)
Subjects: T Technology > TA Engineering (General). Civil engineering (General)
Divisions: Faculty of Science > WMG (Formerly the Warwick Manufacturing Group)
Library of Congress Subject Headings (LCSH): Artificial intelligence, Machine learning, Cyberterrorism, Intelligent personal assistants (Computer software)
Journal or Publication Title: Proceedings of 16th European Conference on Cyber Warfare and Security
Publisher: Academic Conferences and Publishing International Limited
ISBN: 9781510845190
Official Date: 11 September 2017
Dates: 11 September 2017 (Accepted)
Status: Peer Reviewed
Publication Status: Published
Access rights to Published version: Restricted or Subscription Access
Conference Paper Type: Paper
Title of Event: 16th European Conference on Cyber Warfare and Security (ECCWS 2017)
Type of Event: Conference
Location of Event: Dublin, Ireland
Date(s) of Event: 29-30 June 2017


An open letter calling for a ban on lethal weapons controlled by artificially intelligent machines was signed last week by thousands of scientists and technologists, reflecting growing concern that swift progress in artificial intelligence could be harnessed to make killing machines more efficient, and less accountable, both on the battlefield and off. But experts are more divided on the issue of robot killing machines than you might expect.

The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by many leading AI researchers as well as prominent scientists and entrepreneurs including Elon Musk, Stephen Hawking, and Steve Wozniak. The letter states:

“Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

Rapid advances have indeed been made in artificial intelligence in recent years, especially within the field of machine learning, which involves teaching computers to recognize often complex or subtle patterns in large quantities of data. And this is leading to ethical questions about real-world applications of the technology (see “How to Make Self-Driving Cars Make Ethical Decisions”).
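
As a concrete, if deliberately mundane, illustration of that pattern-learning idea, the sketch below trains a standard classifier on scikit-learn's bundled handwritten-digits dataset and checks how well the learned patterns generalise to unseen examples. The dataset and model are stand-ins chosen for illustration only and have nothing to do with weapons systems.

```python
# Minimal sketch of the pattern-learning loop described above, using scikit-learn's
# bundled digits dataset as stand-in data.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# The model "learns" by fitting statistical patterns in the labelled examples...
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# ...and is then judged on examples it has never seen.
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```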

Meanwhile, military technology has advanced to allow actions to be taken remotely, for example using drone aircraft or bomb disposal robots, raising the prospect that those actions could be automated.

The issue of automating lethal weapons has been a concern for scientists as well as military and policy experts for some time. In 2012, the U.S. Department of Defense issued a directive restricting the development and use of “autonomous and semi-autonomous” weapons for 10 years. Earlier this year the United Nations held a meeting to discuss the issue of lethal automated weapons, and the possibility of such a ban.

But while military drones or robots could well become more automated, some say the idea of fully independent machines capable of carrying out lethal missions without human assistance is more fanciful. With many fundamental challenges still remaining in the field of artificial intelligence, it’s far from clear when the technology needed for fully autonomous weapons might actually arrive.

“We’re pushing new frontiers in artificial intelligence,” says Patrick Lin, a professor of philosophy at California Polytechnic State University. “And a lot of people are rightly skeptical that it would ever advance to the point where it has anything called full autonomy. No one is really an expert on predicting the future.”

Lin, who gave evidence at the recent U.N. meeting, adds that the letter does not touch on the complex ethical debate behind the use of automation in weapons systems. “The letter is useful in raising awareness,” he says, “but it isn’t so much calling for debate; it’s trying to end the debate, saying ‘We’ve figured it out and you all need to go along.’”

Stuart Russell, a leading AI researcher and a professor at the University of California, Berkeley, dismisses this idea. “It’s simply not true that there has been no debate,” he says. “But it is true that the AI and robotics communities have been mostly blissfully ignorant of this issue, maybe because their professional societies have ignored it.”

One issue of debate, which the letter does acknowledge, is that automated weapons could conceivably help reduce unwanted casualties in some situations, since they would be less prone to error, fatigue, or emotion than human combatants.

Those behind the letter have little time for this argument, however.

Max Tegmark, an MIT physicist and founder member of the Future of Life Institute, which coördinated the letter signing, says the idea of ethical automated weapons is a red herring. “I think it’s rather irrelevant, frankly,” he says. “It’s missing the big point about what is this going to lead to if one starts this AI arms race. If you make the assumption that only the U.S. is going to build these weapons, and the number of conflicts will stay exactly the same, then it would be relevant.”

The Future of Life Institute has also issued a more general warning about the long-term dangers that unfettered AI could pose in the future.

“This is quite a different issue,” Russell says. “Although there is a connection, in that if one is worried about losing control over AI systems as they become smarter, maybe it’s not a good idea to turn over our defense systems to them.”

While many AI experts seem to share this broad concern, some see it as a little misplaced. For example, Gary Marcus, a cognitive scientist and artificial intelligence researcher at New York University, has argued that computers do not need to become artificially intelligent in order to pose many other serious risks, to financial markets or air-traffic systems, for example.

Lin says that while the concept of unchecked killer robots is obviously worrying, the issue of automated weapons deserves a more nuanced discussion. “Emotionally, it’s a pretty straightforward case,” says Lin. “Intellectually I think they need to do more work.”

Tagged

Elon Musk, Stephen Hawking, Steve Wozniak, Gary Marcus, Max Tegmark, Patrick Lin, International Joint Conference on Artificial Intelligence, AI, killer robots, EmTechMIT2015

Credit

Illustration by Daniel Zender

Will Knight, Senior Editor, AI

I am the senior editor for AI at MIT Technology Review. I mainly cover machine intelligence, robots, and automation, but I’m interested in most aspects of computing. I grew up in south London, and I wrote my first line of code (a spell-binding infinite loop) on a mighty Sinclair ZX Spectrum. Before joining this publication, I worked as the online editor at New Scientist magazine. If you’d like to get in touch, please send an e-mail to will.knight@technologyreview.com.
