Category: autonomous weapons

The United Nations is Considering a Possible Ban on “Killer Robots”

Still in Charge

Representatives from countries around the world met on Nov. 18 to discuss weapons systems at the United Nations’ Convention on Certain Conventional Weapons (CCW). One point of particular interest at the meeting was a call by 22 nations to place an outright ban on the development and use of autonomous weapons, also known as “killer robots.”

Leading up to the convention, hundreds of experts in the field of artificial intelligence (AI) and robotics joined in sending letters to world leaders, urging them to support a ban on autonomous weapons. Elon Musk, co-founder of OpenAI and CEO of Tesla and SpaceX, has also been pushing for the regulation of autonomous weapons development.

The meeting may have been less productive than these groups hoped. It mainly set the groundwork for future talks, likely to occur sometime next year. Mary Wareham, advocacy director of the Arms Division at Human Rights Watch and global coordinator for the Campaign to Stop Killer Robots, told AFP, “Countries do not have time…to waste just talking about this subject.” She said that militaries and defense companies are already investing heavily in bringing these weapons into reality.

However, the chair of the meeting, Amandeep Gill, India’s disarmament ambassador, tried to clear away some of the hype surrounding the issue. “Ladies and gentlemen, I have news for you: the robots are not taking over the world. Humans are still in charge,” he exclaimed, according to reporting from The Guardian. “I think we have to be careful in not emotionalizing or dramatizing this issue.”

According to the Campaign to Stop Killer Robots, the meeting did lead to two points of agreement along these lines: most nations agreed that a “legally binding instrument” controlling the use of these technologies is needed, and the majority of “states now accept that some form of human control must be maintained over weapons systems.” Talks moving forward will have to focus on what these points of agreement will look like in practice.

Setting Limits

Autonomous weapons will have a profound impact on the way war is waged, and the arms escalation they could drive has motivated some, especially nations with smaller military budgets, to call for regulation (at the least). Toby Walsh, an expert on AI at the University of New South Wales in Australia, did not mince words regarding his feelings on the topic.

“These will be weapons of mass destruction,” Walsh told reporters during a separate event at the UN. “I am actually quite confident that we will ban these weapons … My only concern is whether [countries] have the courage of conviction to do it now, or whether we will have to wait for people to die first.”

[Infographic: Futuristic Weapons: How We Will Fight in the Future]

While international agreements on the development and use of autonomous weapons are ideal, individual countries are also making their intentions known. In response to the letter from Musk and others, the United Kingdom has already decided to ban fully autonomous weapons. An announcement was handed down from the U.K. Ministry of Defense in September.

But Musk’s concerns for the future of AI are not limited to weapons applications, as he believes that AI development in general should be closely watched and regulated. “I think anything that represents a risk to the public deserves at least insight from the government, because one of the mandates of the government is the public wellbeing,” he said on a conference call with Tesla investors.

AI looms as a foreboding specter over an uncertain future. Many experts, like Ray Kurzweil, try to counter arguments for dampening AI development with promises that AI will “enhance us.” Even so, any good technology can also have destructive applications. Ensuring that the awesome potential of these technologies is developed in a way that is genuinely good for all of humanity is, unsurprisingly, the best way forward.

Today, World Leaders Will Meet to Decide the Future of “Killer Robots”

Slaughterbots

A new short film illustrating the dangers of autonomous military drones has been commissioned for an event hosted by the Campaign to Stop Killer Robots at the United Nations Convention on Conventional Weapons.

The film presents a fictionalized scenario in which a tech company showcases and deploys its latest combat drone, which is capable of distinguishing the good guys from the bad guys. A montage of mock news reports illustrates what happens next, when the device’s true abilities are revealed and the machines begin killing off politicians and activists.

Stuart Russell, an artificial intelligence (AI) scientist at the University of California, Berkeley, is part of the group that will show the film to attendees. He has stated that the technology depicted in the film already exists and that it would actually be much easier to implement than self-driving vehicles.

Military drones are nothing new, having been used for reconnaissance missions as well as attacks. However, they have largely been operated by human pilots via remote control, whereas we’re now in a position to outfit these machines with automated targeting systems. This advancement would allow them to execute missions autonomously.

This situation is troubling enough in its own right, but there are also concerns about the potential for widespread proliferation. These drones could be manufactured en masse for a relatively small amount of money – and they could be used to enact the unthinkable if they were to fall into the wrong hands.

Ban These Bots

The Campaign to Stop Killer Robots hopes to convince international authorities to establish a treaty that bans autonomous weapons. This would outlaw the large-scale manufacture of such machines, and apply oversight to any nation choosing to explore the technology.

“Pursuing the development of lethal autonomous weapons would drastically reduce international, national, local, and personal security,” argued Russell, according to a report from The Guardian. This line of thinking has been compared to the approach that prompted the Biological Weapons Convention.

As the underlying technology that facilitates this kind of weaponry has progressed, experts have realized the need to appeal to lawmakers. And, while calls for legislation have been made for years, there has been a serious increase in activity over the course of 2017.

In August, Elon Musk led a host of prominent AI experts in signing an open letter that outlined the dangers of autonomous weapons. In November, scores of experts reached out to the Australian and Canadian heads of government, urging them to take action.

This technology is very real – and if we wait too long to regulate, it might be impossible to close Pandora’s box.

Australian and Canadian AI Experts Call for Autonomous Weapons Ban

Terminating Terminators

In two letters addressed to the heads of government of Australia and Canada, hundreds of experts in the field of artificial intelligence (AI) have urged a ban on “killer robots,” artificially intelligent weapons with the ability to decide whether a person lives or dies. They join a growing crowd of scientists who have stressed the need for an autonomous weapons ban.

The Australian open letter, addressed to Prime Minister Malcolm Turnbull, carried 122 researchers’ signatures, while the Canadian letter, addressed to Prime Minister Justin Trudeau, was signed by 216.

“Delegating life-or-death decisions to machines crosses a fundamental moral line – no matter which side builds or uses them,” said Toby Walsh, Scientia Professor of AI at the University of New South Wales (UNSW) Sydney, to The Independent. “Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage autonomous weapons goes to the core of our humanity.”

The letters call for governmental support, at the upcoming United Nations Conference on the Convention on Certain Conventional Weapons (CCW), of an international ban on the development and deployment of such weaponry.

Walsh explained in a press release, “These will be weapons of mass destruction. One programmer will be able to control a whole army. Every other weapon of mass destruction has been banned: chemical weapons, biological weapons, even nuclear weapons. We must add autonomous weapons to the list of weapons that are morally unacceptable to use.”

[Image: South Korea’s Samsung SGR-A1, a sentry gun and one of the first commercialized weapons with autonomous capabilities. Credit: MarkBlackUltor/Wikimedia Commons]

Chorus of Experts

Many experts agree on the need to keep these weapons out of the sphere of war. In August, 116 experts, including Tesla and SpaceX CEO Elon Musk, sent an open letter to the United Nations that called for strong AI regulation, especially in the area of AI weaponry.

The letter stated: “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.” 

[Infographic: Futuristic Weapons: How We Will Fight in the Future]

Still, not everyone is so sure that an autonomous weapons ban is practical or even possible. “You are unlikely to achieve a full ban of autonomous weapons,” Greg Allen, coauthor of a report commissioned by the Office of the Director of National Intelligence to explore the implications of AI on war, told Wired. “The temptation for using them is going to be very intense.”

Still others don’t believe such a ban would even be effective. A study from SUNY Buffalo concluded that killer robots are not inherently a problem; instead, the problem lies in the way society is enabling and researching them: “…instead of demonizing Killer Robots as such, we need to understand the tools, processes and operating procedures that create, support and validate these objects.”

Both sides seem to agree that, at the very least, we must be watchful during this early stage of development. While these robots could preserve the lives of soldiers in any military sophisticated enough to deploy them, the potential for a new level of weapons-based brinkmanship could further agitate an already volatile state of global conflict.

Following Elon Musk Letter, UK Government Plans to Ban Fully Autonomous Weapons

Heeding the Warning

It looks like warnings about the dangers of applying artificial intelligence (AI) to weapons development have not fallen on deaf ears. In response to the open letter to the United Nations, sent by 116 experts and led by serial entrepreneur Elon Musk, the British government has decided to ban fully autonomous weapons and weapons systems. The news comes in an announcement made by the U.K. Ministry of Defense earlier this week.

Specifically, the British government’s ban extends to the development of weapons that can decide their own targets — yes, like those AI-powered missiles Russia is supposedly working on and those fully autonomous drones Russian arms developer Kalashnikov is building. Fully autonomous weapons are weapons that can “think” for themselves. This doesn’t include remotely operated drones and semi-autonomous defense systems, which armed forces from nations like the United States, South Korea, and even the U.K. currently employ.

For U.K. armed forces minister Mark Lancaster, deciding what to target is a responsibility suitable only for human soldiers. “It’s absolutely right that our weapons are operated by real people capable of making incredibly important decisions, and we are guaranteeing that vital oversight,” Lancaster said regarding the new doctrine.

The announcement was concurrent with the Defense and Security Equipment International show — one of the biggest weapons exhibitions in the world.

Human Oversight

The new doctrine published by the Ministry of Defense affirms that “human control of cutting-edge weaponry” is safer for both civilians and military personnel. Such weapons, the announcement said, “will always be under control as an absolute guarantee of human oversight and authority and accountability.” The doctrine also noted that the U.K. “does not possess fully autonomous weapon systems and has no intention of developing them.”

Developing fully autonomous weaponry is definitely appealing to military powers, and Russian President Vladimir Putin even said that he sees future wars being fought using such weapons. That’s as close as we could get to an open declaration of an AI arms race, as Putin himself said that whichever country leads in AI development — weapons included — “will be the ruler of the world.”

Even before Musk’s letter, AI experts warned about this possibility in an open letter released at the 2015 International Joint Conference on Artificial Intelligence (IJCAI). “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control,” the letter stated.

The same warning was echoed in the open letter sent to the U.N., saying that such weapons “can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

Hopefully, the issue of weapons doesn’t take away from all the positive effects AI can bring. Indeed, human control and oversight are key. As U.K. robotics expert Noel Sharkey told The Verge, he hopes this will translate to “human control of weapons in a meaningful and deliberative way.”

Expert: The U.S. Needs to Do More to Prepare for Autonomous Warfare

Arms Race

Modern warfare is set to undergo major changes, thanks to new technologies springing forth from the fields of artificial intelligence and robotics. As Jon Wolfsthal sees it, the US isn’t doing enough to ensure that these advances are made with the proper consideration.

Wolfsthal is a non-resident fellow at Harvard University’s Managing the Atom project and at the Carnegie Endowment for International Peace. Between 2014 and 2017, he was the senior director for arms control and nonproliferation at the National Security Council, serving as a special assistant to President Barack Obama.

In a guest post submitted to DefenseNews, Wolfsthal argues that while AI and autonomous weapons stand to improve national security and mitigate the risks taken by servicemen and women, the need to compete with other technologically advanced nations is resulting in a lack of oversight.

Neither the government nor the general public seems interested in having a serious discussion about the ethical ramifications and the legal basis of developing these programs, says Wolfsthal. As a result, bodies like the Department of Defense are focusing on what they can create, rather than whether they should.

He suggests that the National Security Council needs a better process for assessing the technologies the US wants to pursue, as well as what’s being investigated by other nations. He adds that Congress should be more proactive in developing policy, and that the Senate and House Armed Services committees should be fostering debate and discussion. Wolfsthal also criticizes President Trump for failing to staff the White House’s Office of Science and Technology Policy, a decision he describes as “unconscionable.”

Risk and Reward

“The possible advantages to the United States are endless,” writes Wolfsthal. “But so too are the risks.” AI and autonomous weapons aren’t necessarily something that the military should shy away from — adoption of these technologies seems like something of a foregone conclusion — but they need to be implemented with care and consideration.

This stance mirrors the one taken by Elon Musk. The Tesla and SpaceX CEO has made no secret of his concerns about AI. However, last month he clarified his position, stating that the technology offers up huge benefits if we can avoid its most perilous pitfalls.

Now is the time for these discussions to take place. We’re already seeing drones employed by the US Army, even if the hardware is sometimes imperfect. Meanwhile, Russia is thought to be developing missiles that make use of AI, and China is working on its own intelligent weapons systems.

It might seem like an exaggeration to compare the advent of AI and autonomous weapons to the introduction of nuclear weaponry, but there are some broad similarities. These are instruments of death that can be used at long range, reducing the risk of friendly casualties.

It is likely naive to think that there’s still an opportunity to reverse course and curb the implementation of these technologies in a military context. At this point, the priority has to be making sure that we don’t allow these advances to be utilized recklessly. Like nuclear armaments, these technologies stand to completely revolutionize the way nations go to war. And before a technologically augmented conflict begins in earnest, it would be wise for the government and the public to figure out where they stand on how these weapons are wielded.

The Marines’ Latest Weapon Is a Remote-Controlled Robot With a Machine Gun

Remote-Controlled Warfare

Few images of war are as iconic as the 1944 storming of the beach at Normandy, but thanks to new military technologies, tomorrow’s version of D-Day will undoubtedly look much different. Weaponized drones will likely fill the air, and instead of soldiers streaming out of the water, we could see amphibious hovercraft that move as quickly across land as they do across the sea. Aboard those craft might be the latest high-tech military tool, the Multi-Utility Tactical Transport (MUTT).

Each MUTT is roughly the size of an ATV and is operated remotely using a joystick and tablet. General Dynamics designed the vehicles to transport food, water, and other supplies, and the Marines recently tested MUTTs with a far more powerful piece of cargo on board: a .50-caliber machine gun.

With a series of cameras and sensors providing accurate imaging and heavy-duty tracks that have no trouble navigating across rough terrain, MUTTs could prove to be an invaluable military tool, particularly in conditions that would be extremely hostile for soldiers to navigate. As one soldier told Business Insider, “It’s a mobile platform where it doesn’t get fatigued. It doesn’t need water. It needs very little maintenance, and it’s always in the fight, so that’s a great asset to have.”

Armed and Autonomous

A remote-controlled MUTT does clearly have several advantages over a similarly equipped soldier, but according to a senior Marine officer, the military isn’t content with the system as it currently stands.

“Eventually, what we hope to do is go to systems that are more truly autonomous, that I can say, ‘Hey, go cover my right flank,’ and it’s going to go do the things it needs to do in order to make that happen,” the officer told Business Insider. “And then if it does get in the situation where the weapons systems malfunction, it can either fix itself or then I’ve got to send a marine to go fix it.”

[Infographic: Warfare 2040: The Future of Military Technology]

The idea of an autonomous system armed with a machine gun having its “weapons systems malfunction” will no doubt worry Elon Musk, Stephen Hawking, and the thousands of others who signed an open letter calling for a ban on autonomous weapons. They assert that we need to draw a distinct line between a robot directly controlled by a human — like these current MUTTs — and ones that operate autonomously.

“Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations, and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity,” the letter reads. “There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”

Unless a ban is enacted, however, there’s nothing stopping the military from pursuing such tech advancements. Even if the practice does become illegal in one nation, that wouldn’t necessarily prevent others from developing such tech. Truly, the wars of the future will look much different, but whether those differences keep us safer or not is still up for debate.

Eventually, Robots Will “Glitch” And Cause Fatalities

Consider the following wartime scenario: Hoping to spare the lives of soldiers, a country deploys an autonomous weapon to wipe out an enemy force. This robot has demonstrated military capabilities that far exceed even the best soldiers, but when it hits the ground, it gets confused. It can’t distinguish the civilians from the enemy soldiers and begins taking innocent lives. The military generals desperately try to stop the robot, but by the time they succeed, it has already killed dozens.

Who is responsible for this atrocity? Is it the commanders who deployed the robot, the designers and manufacturers of the robot, or the robot itself?

Liability: Autonomous Systems

As artificial intelligence improves, governments may turn to autonomous weapons — like military robots — in order to gain the upper hand in armed conflict. These weapons can navigate environments on their own and make their own decisions about who to kill and who to spare. While the example above may never occur, unintended harm is inevitable. Considering these scenarios helps formulate important questions that governments and researchers must jointly consider, namely:

How do we hold human beings accountable for the actions of autonomous systems? And how is justice served when the killer is essentially a computer?

As it turns out, there is no straightforward answer to this dilemma. When a human soldier commits an atrocity and kills innocent civilians, that soldier is held accountable. But when autonomous weapons do the killing, it’s difficult to blame them for their mistakes.

An autonomous weapon’s “decision” to murder innocent civilians is like a computer’s “decision” to freeze the screen and delete your unsaved project. Frustrating as a frozen computer may be, people rarely think the computer intended to complicate their lives.

Intention must be demonstrated to prosecute someone for a war crime, and while autonomous weapons may demonstrate outward signs of decision-making and intention, they still run on a code that’s just as impersonal as the code that glitches and freezes a computer screen. Like computers, these systems are not legal or moral agents, and it’s not clear how to hold them accountable — or if they can be held accountable — for their mistakes.

So who assumes the blame when autonomous weapons take innocent lives? Should they even be allowed to kill at all?

Liability: From Self-Driving Cars to Autonomous Weapons

Peter Asaro, a philosopher of science, technology, and media at The New School in New York City, has been working on addressing these fundamental questions of responsibility and liability with all autonomous systems, not just weapons. By exploring fundamental concepts of autonomy, agency, and liability, he intends to develop legal approaches for regulating the use of autonomous systems and the harm they cause.

At a recent conference on the Ethics of Artificial Intelligence, Asaro discussed the liability issues surrounding the application of AI to weapons systems. He explained, “AI poses threats to international law itself — to the norms and standards that we rely on to hold people accountable for [decisions, and to] hold states accountable for military interventions — as [people are] able to blame systems for malfunctioning instead of taking responsibility for their decisions.”

The legal system will need to reconsider who is held liable to ensure that justice is served when an accident happens. Asaro argues that the moral and legal issues surrounding autonomous weapons are much different than the issues surrounding other autonomous machines, such as self-driving cars.

Though researchers still expect the occasional fatal accident to occur with self-driving cars, these autonomous vehicles are designed with safety in mind. One of the goals of self-driving cars is to save lives. “The fundamental difference is that with any kind of weapon, you’re intending to do harm, so that carries a special legal and moral burden,” Asaro explains. “There is a moral responsibility to ensure that [the weapon is] only used in legitimate and appropriate circumstances.”

Furthermore, liability with autonomous weapons is much more ambiguous than it is with self-driving cars and other domestic robots.

With self-driving cars, for example, bigger companies like Volvo intend to embrace strict liability – where the manufacturer assumes full responsibility for accidental harm. Although it is not clear how all manufacturers will be held accountable for autonomous systems, strict liability and the threat of class-action lawsuits incentivize manufacturers to make their products as safe as possible.

Warfare, on the other hand, is a much messier situation.

“You don’t really have liability in war,” says Asaro. “The US military could sue a supplier for a bad product, but as a victim who was wrongly targeted by a system, you have no real legal recourse.”

Autonomous weapons only complicate this. “These systems become more unpredictable as they become more sophisticated, so psychologically commanders feel less responsible for what those systems do. They don’t internalize responsibility in the same way,” Asaro explained at the Ethics of AI conference.

To ensure that commanders internalize responsibility, Asaro suggests that “the system has to allow humans to actually exercise their moral agency.”

That is, commanders must demonstrate that they can fully control the system before they use it in warfare. Once they demonstrate control, it can become clearer who can be held accountable for the system’s actions.

Preparing for the Unknown

Behind these concerns about liability lies the overarching concern that autonomous machines might act in ways that humans never intended. Asaro asks: “When these systems become more autonomous, can the owners really know what they’re going to do?”

Even the programmers and manufacturers may not know what their machines will do. The purpose of developing autonomous machines is so they can make decisions themselves – without human input. And as the programming inside an autonomous system becomes more complex, people will increasingly struggle to predict the machine’s actions.

Companies and governments must be prepared to handle the legal complexities of a domestic or military robot or system causing unintended harm. Ensuring justice for those who are harmed may not be possible without a clear framework for liability.

Asaro explains, “We need to develop policies to ensure that useful technologies continue to be developed while ensuring that we manage the harms in a just way. A good start would be to prohibit automating decisions over the use of violent and lethal force, and to focus on managing the safety risks in beneficial autonomous systems.”
