"As far as video games go, Operation Overmatch is rather unremarkable. Players command military vehicles in eight-on-eight matches against the backdrop of rendered cityscapes -- a common setup of games that sometimes have the added advantage of hundreds of millions of dollars in development budgets. Overmatch does have something unique, though: its mission. The game's developers believe it will change how the U.S. Army fights wars. Overmatch's players are nearly all soldiers in real life. As they develop tactics around futuristic weapons and use them in digital battle against peers, the game monitors their actions.
Each shot fired and decision made, in addition to messages the players write in private forums, is a bit of information soaked up with a frequency not found in actual combat, or even in high-powered simulations without a wide network of players. The data is logged, sorted, and then analyzed, using insights from sports and commercial video games. Overmatch's team hopes this data will inform the Army's decisions about which technologies to purchase and how to develop tactics using them, all with the aim of building a more forward-thinking, prepared force... While the game currently has about 1,000 players recruited by word of mouth and outreach from the Overmatch team, the developers eventually want to involve tens of thousands of soldiers. This milestone would allow for millions of hours of game play per year, according to project estimates, enough to generate rigorous data sets and test hypotheses."
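The log-sort-analyze loop described above can be pictured as a simple event pipeline. The sketch below is purely illustrative: the event names, record fields, and aggregation are invented for this example and do not reflect Overmatch's actual telemetry schema.

```python
from collections import Counter

# Hypothetical match telemetry; the schema and event names are invented
# for illustration and are not Overmatch's real data format.
events = [
    {"player": "a1", "event": "shot_fired", "weapon": "railgun"},
    {"player": "b2", "event": "shot_fired", "weapon": "drone_swarm"},
    {"player": "b2", "event": "shot_fired", "weapon": "drone_swarm"},
    {"player": "a1", "event": "forum_post", "weapon": None},
]

def weapon_usage(events):
    """Filter the raw log down to shots and aggregate them into
    per-weapon usage counts, the kind of summary an analyst might
    use to compare how players employ different technologies."""
    return Counter(e["weapon"] for e in events if e["event"] == "shot_fired")

usage = weapon_usage(events)
```

At scale, the same pattern (filter a raw event stream, then aggregate by some dimension) is what turns millions of hours of play into testable hypotheses.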
Brian Vogt, a lieutenant colonel in the Army Capabilities Integration Center, oversees Overmatch’s development.

“Right after World War I, we had technologies like aircraft carriers we knew were going to play an important role,” he said. “We just didn’t know how to use them. That’s where we are and what we’re trying to do for robots.”
"The Pentagon may soon be unleashing a 21st-century version of locusts on its adversaries after officials on Monday said it had successfully tested a swarm of 103 micro-drones.
The important step in the development of new autonomous weapon systems was made possible by improvements in artificial intelligence, holding open the possibility that groups of small robots could act together under human direction.
Military strategists have high hopes for such drone swarms, which would be cheap to produce and able to overwhelm opponents' defenses through sheer numbers.
The test of the world's largest micro-drone swarm in California in October included 103 Perdix micro-drones measuring around six inches (16 centimeters) launched from three F/A-18 Super Hornet fighter jets, the Pentagon said in a statement.
"The micro-drones demonstrated advanced swarm behaviors such as collective decision-making, adaptive formation flying and self-healing," it said.
"Perdix are not pre-programmed synchronized individuals, they are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature," said William Roper, director of the Pentagon's Strategic Capabilities Office. "Because every Perdix communicates and collaborates with every other Perdix, the swarm has no leader and can gracefully adapt to drones entering or exiting the team."
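Roper's description of a leaderless collective maps onto a familiar idea from distributed systems: consensus by local averaging, as in Boids-style flocking. The toy sketch below illustrates that idea only; it has no connection to the actual Perdix software, and every name and parameter in it is invented for the example.

```python
import random

class Drone:
    """One member of a leaderless swarm: it sees only its peers' headings."""
    def __init__(self, heading):
        self.heading = heading  # degrees; wrap-around at 0/360 ignored for simplicity

    def step(self, peers):
        # Distributed consensus: steer toward the mean heading of the other
        # drones. No drone is special, so the swarm has no leader and keeps
        # working when members join or drop out.
        if peers:
            mean = sum(p.heading for p in peers) / len(peers)
            self.heading += 0.5 * (mean - self.heading)

def simulate(swarm, rounds):
    for _ in range(rounds):
        for d in swarm:
            d.step([p for p in swarm if p is not d])

def spread(swarm):
    headings = [d.heading for d in swarm]
    return max(headings) - min(headings)

random.seed(0)
swarm = [Drone(random.uniform(0, 180)) for _ in range(8)]
simulate(swarm, 50)
converged = spread(swarm)        # headings agree without any leader

swarm.pop()                      # a drone leaves mid-flight...
simulate(swarm, 10)
still_converged = spread(swarm)  # ...and the rest carry on unaffected
```

The point of the sketch is the architecture, not the arithmetic: because every drone runs the same local rule against whoever its current peers are, there is no single point of failure to "gracefully adapt" around.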
Defense Secretary Ash Carter—a technophile and former Harvard professor—created the SCO when he was deputy defense secretary in 2012.
The office is tasked with accelerating the integration of technological innovations into US weaponry.
It particularly strives to combine existing commercial technology, in this case micro-drones and artificial-intelligence software, into the design of new weapons.
Originally created by engineering students from the Massachusetts Institute of Technology in 2013 and continuously improved since, Perdix drones draw "inspiration from the commercial smartphone industry," the Pentagon said."
"A US Department of Defense (DoD) research programme is funding universities to model the dynamics, risks and tipping points for large-scale civil unrest across the world, under the supervision of various US military agencies. The multi-million dollar programme is designed to develop immediate and long-term "warfighter-relevant insights" for senior officials and decision makers in "the defense policy community," and to inform policy implemented by "combatant commands."
Launched in 2008 – the year of the global banking crisis – the DoD 'Minerva Research Initiative' partners with universities "to improve DoD's basic understanding of the social, cultural, behavioral, and political forces that shape regions of the world of strategic importance to the US."
High-tech robots called PackBots will be unleashed during the 2014 FIFA World Cup in Brazil to help boost security and examine suspicious objects.
The Brazilian government purportedly spent US$7.2 million to buy 30 military-grade robots from manufacturer iRobot; these will police the stadiums throughout Brazil’s 12 host cities during soccer matches.
PackBot is a hunk of metal with an extendable arm and tactile claw, jam-packed with on-board sensors, a computer with overheat protection, nine high-resolution cameras, lasers and two-way audio.
But is it overkill to deploy wartime robots at a sporting event?
Sport’s history of violence
On April 30 1993, then world number one tennis sensation Monica Seles was stabbed in the back while playing a quarter-final at Hamburg’s Rothenbaum. She was only 19.
That incident not only changed the course of women’s tennis history but also changed the face of security in sport.
Of course, we can also point to the Munich massacre of the 11 members of the Israeli Olympic team during the 1972 Summer Olympics in West Germany in rethinking approaches to the safety of high-profile athletes.
It was Seles’ plight, however, that brought attention to an ever-increasing problem of public-figure security. Her stabbing in Hamburg had nothing to do with terrorism and everything to do with her attacker’s fixation on arch-rival Steffi Graf. Player safety was about to become even bigger business.
It was floated that the Rothenbaum tournament organisers had spent A$650,000 on security, and that Seles herself had employed security guards to protect her at all her tournament appearances. So what went wrong?
The human factor
Not only are people unpredictable but intervention is almost impossible if one cannot anticipate the actions of another. On November 13 1982, one of Australia’s great wicket takers Terry Alderman made a costly mistake when he took security matters into his own hands.
The West Australian was disabled for over a year with a shoulder injury he sustained when he came off second best after attempting to tackle an English-supporting ground invader at the WACA Ground in Perth.
Such has become the concern over security that spectators can no longer spill onto the grounds after the final siren to get close to their heroes.
Pitch invasions had long been a tradition of the Australian Football League (AFL): at the end of matches, supporters could run onto the field to celebrate the game and play kick-to-kick with their family and friends.
But in recent years stricter controls were introduced, and rushing the field was finally banned, to the great disappointment of fans.
The non-human factor
What makes PackBots attractive for civilian security situations, such as large-scale sporting tournaments?
PackBots made their debut in Afghanistan as far back as 2002. During the “war on terror” these uninhabited systems had several tasks:
clear bunkers
search caves
enter collapsed buildings in search of survivors
Development continued subsequently through Iraq and other US conflicts, until recently the robots went where no human would want to go: the Fukushima nuclear facility in March 2011, after the devastation of the Japanese tsunami.
There are certainly positive uses to these uninhabited systems which few would argue against.
The PackBot can move faster than 14km/h, rotate 360 degrees, traverse rugged terrain, climb 60% grades and even swim, coping with submersion of up to two metres. It can even be operated remotely, with hardly any lag, using a joystick.
iRobot’s bots are not recent entries into the commercial market: many of us were first introduced to the domesticated robot by the company’s Roomba household cleaning machine.
And the use of electronics in sport isn’t new. Hawk-Eye rules on whether the ball was in or out, the FoxCopter hovers above spectators at the cricket to give us up-close personal shots of players, and the third umpire adjudicates challenges.
But now the PackBots are coming: ostensibly precise, they are not supposed to malfunction or act against the controller’s wishes (or those instructions that they have been programmed with) and they cannot be easily destroyed. In the not-so-distant future they could use their cameras to observe you, their chemical sensors to breathalyse you, their extended arm to trap you and their claw to handcuff you.
We are giving over control to machine entities, or better still, “objects and units” outside of ourselves.
In fact many argue we have already lost great chunks of our autonomy without the expected commensurate increase in security. Will the natural instincts and creative inputs of human beings become increasingly redundant in a world where the “tin man” has the final say?
Katina Michael receives funding from the Australian Research Council (ARC). She is affiliated with the Institute of Electrical and Electronics Engineers (IEEE) and the Australian Privacy Foundation (APF).
MG Michael does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.
War changes everything. War is an apocalypse and a technological revolution and a life-changing adventure, all rolled into one. So it's not surprising that many of science fiction's most indelible stories are about warfare.
Eliezer S. Yudkowsky wrote about an experiment concerning Artificial Intelligence. In the near future, humans will have given birth to machines able to rewrite their own code, to improve themselves and, why not, to dispense with their creators. This idea sounded far-fetched to some critics, so an experiment was proposed: keep the AI sealed in a box from which it could escape by only one means: convincing a human guardian to let it out.
What if, as Yudkowsky states, ‘humans are not secure’? Could we out-manoeuvre our best creation to secure our own survival? Would man be humble enough to accept he had been superseded, to look for primitive ways to find himself again, to cure himself of a disease that is in his own genes? How do we recapture a force we voluntarily set free? What if mankind’s worst enemy were humans?
In a near future, we will cease to be the dominant race.
In a near future, we will learn to fear what is to come.