"A German government watchdog has ordered parents to “destroy” an internet-connected doll for fear it could be used as a surveillance device. According to a report from BBC News, the German Federal Network Agency said the doll (which contains a microphone and speaker) was equivalent to a “concealed transmitting device” and therefore prohibited under German telecom law.

The doll in question is “My Friend Cayla,” a toy which has already been the target of consumer complaints in the EU and US. In December last year, privacy advocates said the toy recorded kids’ conversations without proper consent, violating the Children’s Online Privacy Protection Act.

Cayla uses a microphone to listen to questions, sending this audio over Wi-Fi to a third-party company (Nuance) that converts it to text. This is then used to search the internet, allowing the doll to answer basic questions, like “What’s a baby kangaroo called?” as well as play games. In addition to privacy concerns over data collection, security researchers found that Cayla can be easily hacked. The doll’s insecure Bluetooth connection can be compromised, letting a third party record audio via the toy, or even speak to children using its voice.
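The pipeline described above — capture audio, send it to a cloud speech-to-text service, look up an answer on the web, speak the reply — can be sketched in a few lines. This is a toy illustration only: the function names and canned answers are invented, and the real toy relies on Nuance's cloud API rather than local stubs.

```python
# Hypothetical sketch of a Cayla-style question-answering pipeline.
# The real speech-to-text (Nuance) and web-search steps are replaced
# with stand-in stubs so the flow is visible end to end.

def speech_to_text(audio: bytes) -> str:
    """Stub for the cloud speech-to-text step."""
    # A real implementation would upload `audio` to the vendor's API.
    return audio.decode("utf-8")  # pretend the audio arrives pre-transcribed

def web_answer(question: str) -> str:
    """Stub for the web-search step that produces a short spoken answer."""
    canned = {"what's a baby kangaroo called?": "A baby kangaroo is called a joey."}
    return canned.get(question.lower(), "I don't know.")

def doll_pipeline(audio: bytes) -> str:
    """Microphone -> speech-to-text -> web lookup -> spoken reply."""
    question = speech_to_text(audio)
    return web_answer(question)

print(doll_pipeline(b"What's a baby kangaroo called?"))
```

The privacy concern is visible even in the stub: every question a child asks leaves the device and transits a third party's servers.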

Although the FTC has not yet taken any action against Cayla or its maker, Genesis Toys, German data and privacy laws are more stringent than those in America. The legacy of the Stasi, the secret police force that set up one of the most invasive mass-surveillance regimes ever in Communist East Germany, has made the country’s legislators vigilant against such infringements."

Source: http://www.theverge.com/2017/2/17/14647280...
Posted by Jordan Brown

"When you browse online for a new pair of shoes, pick a movie to stream on Netflix or apply for a car loan, an algorithm likely has its word to say on the outcome.

The complex mathematical formulas are playing a growing role in all walks of life: from detecting skin cancers to suggesting new Facebook friends, deciding who gets a job, how police resources are deployed, who gets insurance at what cost, or who is on a "no fly" list.

Algorithms are being used—experimentally—to write news articles from raw data, while Donald Trump's presidential campaign was helped by behavioral marketers who used an algorithm to locate the highest concentrations of "persuadable voters."

But while such automated tools can inject a measure of objectivity into erstwhile subjective decisions, fears are rising over the lack of transparency algorithms can entail, with pressure growing to apply standards of ethics or "accountability."
 

Data scientist Cathy O'Neil cautions about "blindly trusting" formulas to determine a fair outcome.

"Algorithms are not inherently fair, because the person who builds the model defines success," she said.

O'Neil argues that while some algorithms may be helpful, others can be nefarious. In her 2016 book, "Weapons of Math Destruction," she cites some troubling examples in the United States:

- Public schools in Washington DC in 2010 fired more than 200 teachers—including several well-respected instructors—based on scores in an algorithmic formula which evaluated performance.

- A man diagnosed with bipolar disorder was rejected for employment at seven major retailers after a third-party "personality" test deemed him a high risk based on its algorithmic classification.

- Many jurisdictions are using "predictive policing" to shift resources to likely "hot spots." O'Neil says that depending on how data is fed into the system, this could lead to discovery of more minor crimes and a "feedback loop" which stigmatizes poor communities.

- Some courts rely on computer-ranked formulas to determine jail sentences and parole, which may discriminate against minorities by taking into account "risk" factors such as their neighborhoods and friend or family links to crime.

- In the world of finance, brokers "scrape" data from online and other sources in new ways to make decisions on credit or insurance. This too often amplifies prejudice against the disadvantaged, O'Neil argues.

Her findings were echoed in a White House report last year warning that algorithmic systems "are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them."

Source: https://phys.org/news/2017-02-algorithms-s...

"Researchers at Stanford and Princeton universities have found a way to connect the dots between people’s private online activity and their Twitter accounts—even for people who have never tweeted.

When the team tested the technique on 400 real people who submitted their browsing history, they were able to correctly pick out the volunteers’ Twitter profiles nearly three-quarters of the time.

Here’s how the de-anonymization system works: The researchers figured that a person is more likely to click a link that was shared on social media by a friend—or a friend of a friend—than any other random link on the internet. (Their model controls for the baseline popularity of each website.) With that in mind, and the details of an anonymous person’s browser history in hand, the researchers can compute the probability that any one Twitter user created that browsing history. People’s basic tendency to follow links they come across on Twitter unmasks them—and it usually takes less than a minute.
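The scoring idea described above can be sketched as a likelihood-ratio test: for each candidate account, ask how much better that user's feed explains the anonymous history than baseline link popularity does. All names, probabilities, and the boost factor below are invented for illustration; the actual Stanford/Princeton model is considerably more involved.

```python
import math

# Toy sketch of the de-anonymization idea: score each candidate Twitter
# user by how much better their feed explains an anonymous browsing
# history than baseline link popularity does, then pick the best match.

baseline_popularity = {"news.example": 0.5,
                       "niche-blog.example": 0.01,
                       "obscure-zine.example": 0.001}

# Links each candidate's network (friends and friends-of-friends) shared.
candidate_feeds = {
    "alice": {"news.example", "niche-blog.example"},
    "bob":   {"news.example"},
    "carol": {"news.example", "niche-blog.example", "obscure-zine.example"},
}

BOOST = 20.0  # assumed: how much likelier a user is to click a feed link

def score(history, feed):
    """Log-likelihood ratio of the history under 'this user' vs. baseline."""
    total = 0.0
    for link in history:
        p_base = baseline_popularity[link]
        p_user = min(1.0, BOOST * p_base) if link in feed else p_base
        total += math.log(p_user / p_base)
    return total

def deanonymize(history):
    """Return the candidate whose feed best explains the browsing history."""
    return max(candidate_feeds, key=lambda u: score(history, candidate_feeds[u]))

history = ["news.example", "obscure-zine.example"]
print(deanonymize(history))  # the rare link is the giveaway
```

Note how the rare, low-baseline link dominates the score: popular links say little about identity, but one click on an obscure page shared only within a small network is highly identifying.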

“You can even be de-anonymized if you just browse and follow people, without actually sharing anything.”

Source: https://www.theatlantic.com/technology/arc...

"R&D company Draper is developing an insect control "backpack" with integrated energy, guidance, and navigation systems, shown here on a to-scale dragonfly model.

To steer the dragonflies, the engineers are developing a way of genetically modifying the nervous system of the insects so they can respond to pulses of light. Once they get it to work, this approach, known as optogenetic stimulation, could enable dragonflies to carry payloads or conduct surveillance..."

Source: http://spectrum.ieee.org/automaton/robotic...

"The Pentagon may soon be unleashing a 21st-century version of locusts on its adversaries after officials on Monday said it had successfully tested a swarm of 103 micro-drones.

The important step in the development of new autonomous weapon systems was made possible by improvements in artificial intelligence, holding open the possibility that groups of small robots could act together under human direction.

Military strategists have high hopes for such drone swarms that would be cheap to produce and able to overwhelm opponents' defenses with their great numbers.

The test of the world's largest micro-drone swarm in California in October included 103 Perdix micro-drones measuring around six inches (15 centimeters) launched from three F/A-18 Super Hornet fighter jets, the Pentagon said in a statement.

"The micro-drones demonstrated advanced swarm behaviors such as collective decision-making, adaptive formation flying and self-healing," it said.

"Perdix are not pre-programmed synchronized individuals, they are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature," said William Roper, director of the Pentagon's Strategic Capabilities Office. "Because every Perdix communicates and collaborates with every other Perdix, the swarm has no leader and can gracefully adapt to drones entering or exiting the team."

Defense Secretary Ash Carter—a technophile and former Harvard professor—created the SCO when he was deputy defense secretary in 2012.

The office is tasked with accelerating the integration of technological innovations into US weaponry.

It particularly strives to marry already existing commercial technology—in this case micro-drones and artificial intelligence software—in the design of new weapons.

Originally created by engineering students from the Massachusetts Institute of Technology in 2013 and continuously improved since, Perdix drones draw "inspiration from the commercial smartphone industry," the Pentagon said."

Source: http://phys.org/news/2017-01-pentagon-succ...

"Late last year, you were introduced to real, live, remote-controlled cockroaches. Well, the insect hackers at North Carolina State University are at it again, this time with a Microsoft Kinect and a software program that can boss the bugs around without human input. In other words, we have successfully co-opted cockroach sovereignty — and given it to the machines.

The goal is to ultimately use this kind of technology to create armies of biobots capable of things bio-inspired robots can only dream of.

Now, instead of those impulses being controlled remotely by a human, they’re tapped into the software program, which takes cues from the Xbox Kinect’s tracking data. If the cockroach veers away from the target, the Kinect observes the change and relays it to the software, which in turn makes a split-second decision about how much correctional impulse should be sent to the roach. Longer stimulation is designed to produce more drastic correction, just like pulling hard on a steering wheel.
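The feedback loop described above is essentially proportional control: the bigger the heading error the Kinect reports, the longer the correctional stimulation. The sketch below illustrates that relationship; the gain, the cap, and the function names are all invented for illustration, not taken from the NC State system.

```python
# Minimal sketch of the closed-loop steering described above: the Kinect
# reports the roach's heading, the controller compares it to the target,
# and stimulation duration scales with the size of the error
# ("pulling harder on the steering wheel"). Gains and units are invented.

GAIN_MS_PER_DEG = 2.0   # assumed: milliseconds of stimulation per degree of error
MAX_PULSE_MS = 100.0    # assumed safety cap on stimulation length

def correction_pulse(target_deg: float, observed_deg: float) -> tuple:
    """Return (which antenna to stimulate, pulse length in milliseconds)."""
    error = target_deg - observed_deg
    side = "left" if error > 0 else "right"
    pulse = min(MAX_PULSE_MS, abs(error) * GAIN_MS_PER_DEG)
    return side, pulse

# A roach drifting 15 degrees off target gets a five-times-longer pulse
# than one drifting only 3 degrees.
print(correction_pulse(0.0, -15.0))
print(correction_pulse(0.0, 3.0))
```

This captures the article's point that "longer stimulation is designed to produce more drastic correction": the pulse length is proportional to the deviation, up to a cap.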

The results are pretty impressive. Their previous work with remote control yielded only about a 10 per cent success rate, but the new technology has bumped them up to 27 per cent. You can see it for yourself below with a roach that really seems to want nothing in the world but to turn right."

Source: http://www.slate.com/blogs/future_tense/20...

"Could flashing the "peace" sign in photos lead to fingerprint data being stolen? Research by a team at Japan's National Institute of Informatics (NII) says so, raising alarm bells over the popular two-fingered pose. Fingerprint recognition technology is becoming widely available to verify identities, such as when logging on to smartphones, tablets and laptop computers. But the proliferation of mobile devices with high-quality cameras and social media sites where photographs can be easily posted is raising the risk of personal information being leaked, reports said. The NII researchers were able to copy fingerprints based on photos taken by a digital camera three meters (nine feet) away from the subject."

Source: https://phys.org/news/2017-01-japan-finger...

"It comes as no surprise to any Facebook user that the social network gathers a considerable amount of information based on their actions and interests. But according to a report from ProPublica, the world’s largest social network knows far more about its users than just what they do online.

What Facebook can’t glean from a user’s activity, it’s getting from third-party data brokers. ProPublica found the social network is purchasing additional information including personal income, where a person eats out and how many credit cards they keep.

That data all comes separate from the unique identifiers that Facebook generates for its users based on interests and online behavior. A separate investigation by ProPublica in which the publication asked users to report categories of interest Facebook assigned to them generated more than 52,000 attributes.

The data Facebook pays for from other brokers to round out user profiles isn’t disclosed by the company beyond a note that it gets information “from a few different sources.” Those sources, according to ProPublica, come from commercial data brokers who have access to information about people that isn’t linked directly to online behavior."

From ProPublica:

"When asked this week about the lack of disclosure, Facebook responded that it doesn’t tell users about the third-party data because it’s widely available and was not collected by Facebook.

Facebook has been working with data brokers since 2012 when it signed a deal with Datalogix. This prompted Chester, the privacy advocate at the Center for Digital Democracy, to file a complaint with the Federal Trade Commission alleging that Facebook had violated a consent decree with the agency on privacy issues. The FTC has never publicly responded to that complaint and Facebook subsequently signed deals with five other data brokers.

Oracle’s Datalogix provides about 350 types of data to Facebook."

Source: http://www.ibtimes.com/facebook-privacy-so...
You have the right to remain silent — but your smart devices might not.

Amazon’s Echo and Echo Dot are in millions of homes now, with holiday sales more than quadrupling from 2015. Always listening for its wake word, the breakthrough smart speakers boast seven microphones waiting to take and record your commands.

Now, Arkansas police are hoping an Echo found at a murder scene in Bentonville can aid their investigation.

As first reported by The Information, investigators served Amazon with search warrants requesting any recordings made between November 21 and November 22, 2015, on the account of James A. Bates, who was charged with murder after a man was strangled in a hot tub.

While investigating, police noticed the Echo in the kitchen and pointed out that the music playing in the home could have been voice activated through the device. While the Echo records only after hearing the wake word, police are hoping that ambient noise or background chatter could have accidentally triggered the device, leading to some more clues.

Amazon stores all the voice recordings on its servers, in the hopes of using the data to improve its voice assistant services. While you can delete your personal voice data, there’s still no way to prevent any recordings from being saved on a server.

[...]

Even without Amazon’s help, police may be able to crack into the Echo, according to the warrant. Officers believe they can tap into the hardware on the smart speakers, which could “potentially include time stamps, audio files or other data.”

The investigation has focused on other smart devices as well. Officers seized Bates’ phone but were unable to break through his password, which only served to delay the investigation.

"Our agency now has the ability to utilize data extraction methods that negate the need for passcodes and efforts to search Victor and Bates’ devices will continue upon issuance of this warrant."

Police also found a Nest thermostat, a Honeywell alarm system, wireless weather monitoring in the backyard and WeMo devices for lighting at the smart home crime scene.

Ultimately, it might have been information from a smart meter that proved to be the most useful. With every home in Bentonville hooked up to a smart meter that measures hourly electricity and water usage, police looked at the data and noticed Bates used an “excessive amount of water” during the alleged drowning.
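A crude version of the check investigators ran — flag hours whose usage is far above the household's typical level — might look like the sketch below. The readings, the threshold, and the function name are invented for illustration; real forensic analysis of meter data would be far more careful.

```python
import statistics

# Toy sketch of the smart-meter check described above: flag hours whose
# water usage is far above the household's typical (median) level.
# Readings and the threshold factor are invented for illustration.

hourly_gallons = [8, 10, 7, 9, 11, 8, 140, 9]  # one suspicious spike

def flag_excessive(readings, factor=3.0):
    """Return indices of hours using more than `factor` x the median usage."""
    typical = statistics.median(readings)
    return [i for i, gallons in enumerate(readings) if gallons > factor * typical]

print(flag_excessive(hourly_gallons))  # flags the 140-gallon hour
```

The same pattern — compare each reading against a robust baseline and flag outliers — is what makes hourly metering so revealing: the data encodes when residents are home, awake, and active.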
Source: https://www.cnet.com/uk/news/police-reques...
First came the assault on privacy. Name, address, telephone, DOB, SSN, physical description, friends, family, likes, dislikes, habits, hobbies, beliefs, religion, sexual orientation, finances, every granular detail of a person’s life, all logged, indexed, analyzed and cross-referenced.

Then came the gathering of location and communication data. Cell phones, apps, metro cards, license plate readers and toll tags, credit card use, IP addresses and authenticated logins, tower info, router proximity, networked “things” everywhere reporting on activity and location, astoundingly accurate facial recognition mated with analytics and “gigapixel” cameras and, worst of all, mindlessly self-contributed posts, tweets, and “check-ins,” all constantly reporting a subject’s location 24-7-365, to such a degree of accuracy that “predictive profiling” knows where you will likely be next Thursday afternoon.

Today we are experiencing constant efforts to shred anonymity. Forensic linguistics, browser fingerprinting, lifestyle and behavior analysis, metadata of all types, HTML5, IPv6, and daily emerging “advances” in surveillance technologies - some seemingly science fiction but real - are combining to make constant, mobile identification and absolute loss of anonymity inevitable.

And, now, predictably, the final efforts to homogenize: the “siloing” and Balkanization of the Internet. As Internet use becomes more and more self-restricted to a few large providers, as users increasingly never leave the single ecosystem of a Facebook or a Google, as the massive firehose of information on the Internet is “curated” and “managed” by persons who believe that they know best what news and opinions you should have available to read, see, and believe, the bias of a few will eventually determine what you believe. What is propaganda? What is truth? You simply won’t know.
In a tradition dating back to the first HOPE conference, for three full hours Steven Rambam will detail the latest trends in privacy invasion and will demonstrate cutting-edge anonymity-shredding surveillance technologies. Drones will fly, a “privacy victim” will undergo digital proctology, a Q&A period will be provided, and fun will be had by all.
Source: https://www.youtube.com/watch?v=FHwl6AyL6j...
You could be on a secret government database or watch list for simply taking a picture on an airplane. Some federal air marshals say they’re reporting your actions to meet a quota, even though some top officials deny it.

The air marshals, whose identities are being concealed, told 7NEWS that they’re required to submit at least one report a month. If they don’t, there’s no raise, no bonus, no awards and no special assignments.

"Innocent passengers are being entered into an international intelligence database as suspicious persons, acting in a suspicious manner on an aircraft ... and they did nothing wrong," said one federal air marshal.
Source: http://www.thedenverchannel.com/news/marsh...

"Foreign travelers arriving in the United States on the visa waiver program have been presented with an "optional" request to "enter information associated with your online presence," a government official confirmed Thursday. The prompt includes a drop-down menu that lists platforms including Facebook, Google+, Instagram, LinkedIn and YouTube, as well as a space for users to input their account names on those sites.

The new policy comes as Washington tries to improve its ability to spot and deny entry to individuals who have ties to terrorist groups like the Islamic State. But the government has faced a barrage of criticism since it first floated the idea last summer. The Internet Association, which represents companies including Facebook, Google and Twitter, at the time joined with consumer advocates to argue the draft policy threatened free expression and posed new privacy and security risks to foreigners. Now that it is final, those opponents are furious the Obama administration ignored their concerns.

The question itself is included in what's known as the Electronic System for Travel Authorization, a process that certain foreign travelers must complete to come to the United States. ESTA and a related paper form specifically apply to those arriving here through the visa-waiver program, which allows citizens of 38 countries to travel and stay in the United States for up to 90 days without a visa."

Source: http://www.politico.com/story/2016/12/fore...

"Earlier this year, [ZDNet was] sent a series of large, encrypted files purportedly belonging to a U.S. police department as a result of a leak at a law firm, which was insecurely synchronizing its backup systems across the internet without a password. Among the files was a series of phone dumps created by the police department with specialist equipment, which was created by Cellebrite, an Israeli firm that provides phone-cracking technology. We obtained a number of these so-called extraction reports.

One of the more interesting reports by far was from an iPhone 5 running iOS 8. The phone's owner didn't use a passcode, meaning the phone was entirely unencrypted. The phone was plugged into a Cellebrite UFED device, which in this case was a dedicated computer in the police department. The police officer carried out a logical extraction, which downloads what's in the phone's memory at the time. (Motherboard has more on how Cellebrite's extraction process works.) In some cases, it also contained data the user had recently deleted.

To our knowledge, there are a few sample reports out there floating on the web, but it's rare to see a real-world example of how much data can be siphoned off from a fairly modern device. We're publishing some snippets from the report, with sensitive or identifiable information redacted."

Source: http://www.zdnet.com/article/israeli-firm-...

Emphasis added:

"Some people consider dolls creepy enough, but what if that deceptively cute toy was listening to everything you said and, worse yet, letting creeps speak through it?

According to The Center for Digital Democracy, a pair of smart toys designed to engage with children in new and entertaining ways are rife with security and privacy holes. The watchdog group was so concerned that it filed a complaint with the Federal Trade Commission on Dec. 6 (you can read the full complaint here). A similar one was also filed in Europe by the Norwegian Consumer Council.

“This complaint concerns toys that spy,” reads the complaint, which claims the Genesis Toys’ My Friend Cayla and i-QUE Intelligent Robot can record and collect private conversations and offer no limitations on the collection and use of personal information.

Both toys use voice recognition, internet connectivity and Bluetooth to engage with children in a conversational manner and answer questions. The CDD claims they do all of this in wildly insecure and invasive ways.

Both My Friend Cayla and i-QUE use Nuance Communications' voice-recognition platform to listen and respond to queries. On the Genesis Toy site, the manufacturer notes that while “most of Cayla’s conversational features can be accessed offline,” searching for information may require an internet connection.

The promotional video for Cayla encourages children to “ask Cayla almost anything.”

The dolls work in concert with mobile apps. Some questions can be asked directly, but the dolls maintain a constant Bluetooth connection to the apps, so they can also react to actions in the app and even appear to identify objects the child taps on screen.

The CDD takes particular issue with that app and lists all the questions it asks children (or their parents) up front during registration: everything from the child and her parent’s names to their school, and where they live."

Source: http://mashable.com/2016/12/08/hacking-toy...

"Most Americans do not see "information overload" as a problem for them despite the explosion of internet data and images, according to a Pew Research Center survey on Wednesday.

Only 20 percent of U.S. adults feel they get more information than they can handle, down from 27 percent a decade ago. Just over three-quarters like having so much information at hand, the survey of 1,520 people showed.

"Generally, Americans appreciate lots of information and access to it," said the report into how U.S. adults cope with information demands.

Roughly four in five Americans agree that they are confident about using the internet to keep up with information demands, that a lot of information gives them a feeling of more control over their lives, and that they can easily determine what information is trustworthy.

Americans who are 65 or older, have a high school diploma or less and earn less than $30,000 a year are more likely to say they face a glut of information.

Eighty-four percent of Americans with online access through three sources - home broadband, smartphone and tablet computer - say they like having so much information available.

By contrast, 55 percent of those with no online source felt overwhelmed by the amount of possible information.

The term "information overload" was popularized by author Alvin Toffler in his 1970 bestseller "Future Shock." It refers to difficulties that people face from getting too much information or data.

The Pew survey involved people over 18 interviewed by landline or cell phones from March 7 to April 4. The margin of error was 2.9 percentage points, meaning results could vary by that much either way."
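As a rough sanity check on the quoted figure, the textbook worst-case (p = 0.5) margin of error for a simple random sample of 1,520 works out to about 2.5 percentage points; Pew's reported 2.9 points is somewhat larger because its published margins also account for survey design effects such as weighting. This is a sketch of the standard formula, not Pew's exact methodology.

```python
import math

# Worst-case 95% margin of error for a simple random sample of n = 1,520.
# Pew's reported 2.9 points exceeds this raw binomial figure because the
# published margin also incorporates survey design effects (weighting).
n = 1520
moe = 1.96 * math.sqrt(0.5 * 0.5 / n) * 100  # in percentage points

print(round(moe, 1))
```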

Source: http://www.reuters.com/article/us-usa-tech...

Hossein Derakhshan, an Iranian-Canadian author, media analyst, and performance artist, writes in MIT Technology Review:

Like TV, social media now increasingly entertains us, and even more so than television it amplifies our existing beliefs and habits. It makes us feel more than think, and it comforts more than challenges. The result is a deeply fragmented society, driven by emotions, and radicalized by lack of contact and challenge from outside. This is why Oxford Dictionaries designated “post-truth” as the word of 2016: an adjective “relating to circumstances in which objective facts are less influential in shaping public opinion than emotional appeals.”

[...]

Traditional television still entails some degree of surprise. What you see on television news is still picked by human curators, and even though it must be entertaining to qualify as worthy of expensive production, it is still likely to challenge some of our opinions (emotions, that is).

Social media, in contrast, uses algorithms to encourage comfort and complaisance, since its entire business model is built upon maximizing the time users spend inside of it. Who would like to hang around in a place where everyone seems to be negative, mean, and disapproving? The outcome is a proliferation of emotions, a radicalization of those emotions, and a fragmented society. This is way more dangerous for the idea of democracy founded on the notion of informed participation.

This means we should write and read more, link more often, and watch less television and fewer videos — and spend less time on Facebook, Instagram, and YouTube.

Our habits and our emotions are killing us and our planet. Let’s resist their lethal appeal.
Source: https://www.technologyreview.com/s/602981/...

The "Investigatory Powers Act" has been passed into law in the UK, legalising a number of illegal mass surveillance programs revealed by Edward Snowden in 2013. It also introduces new powers to require ISPs to retain browsing data on all customers for 12 months, while giving police new powers to hack into computers and phones and to collect communications data in bulk.

"Jim Killock, executive director of the Open Rights Group, responded...saying: "...it is one of the most extreme surveillance laws ever passed in a democracy. The IP Act will have an impact that goes beyond the UK’s shores. It is likely that other countries, including authoritarian regimes with poor human rights records, will use this law to justify their own intrusive surveillance powers.”

"Much of the Act gives stronger legal footing to the UK's various bulk powers, including "bulk interception," which is, in general terms, the collection of internet and phone communications en masse. In June 2013, using documents provided by Edward Snowden, The Guardian revealed that GCHQ taps fibre-optic undersea cables in order to intercept emails, internet histories, calls, and a wealth of other data."

Meanwhile, FBI and NSA poised to gain new surveillance powers under Trump.
 

The Snoopers' Charter allows the State to tell lies in court.

"Charter gives virtually unrestricted powers not only to State spy organisations but also to the police and a host of other government agencies. The operation of the oversight and accountability mechanisms...are all kept firmly out of sight -- and, so its authors hope, out of mind -- of the public. It is up to the State to volunteer the truth to its victims if the State thinks it has abused its secret powers. "Marking your own homework" is a phrase which does not fully capture this...

Section 56(1)(b) creates a legally guaranteed ability -- nay, duty -- to lie about even the potential for State hacking to take place, and to tell juries a wholly fictitious story about the true origins of hacked material used against defendants in order to secure criminal convictions. This is incredibly dangerous. Even if you know that the story being told in court is false, you and your legal representatives are now banned from being able to question those falsehoods and cast doubt upon the prosecution story. Potentially, you could be legally bound to go along with lies told in court about your communications -- lies told by people whose sole task is to weave a story that will get you sent to prison or fined thousands of pounds.

Moreover, as section 56(4) makes clear, this applies retroactively, ensuring that it is very difficult for criminal offences committed by GCHQ employees and contractors over the years, using powers that were only made legal a fortnight ago, to be brought to light in a meaningful way. It might even be against the law for a solicitor or barrister to mention in court this Reg story by veteran investigative journalist Duncan Campbell about GCHQ's snooping station in Oman (covered by the section 56(1)(b) wording "interception-related conduct has occurred") – or large volumes of material published on Wikileaks.

The existence of section 56(4) makes a mockery of the "general privacy protections" in Part 1 of the IPA, which includes various criminal offences. Part 1 was introduced as a sop to privacy advocates horrified at the full extent of the act's legalisation of intrusive, disruptive and dangerous hacking powers for the State, including powers to force the co-operation of telcos and similar organisations. There is no point in having punishments for lawbreakers if it is illegal to talk about their law-breaking behaviour.

Like the rest of the Snoopers' Charter, section 56 has become law. Apart from Reg readers and a handful of Twitter slacktivists, nobody cares. The general public neither knows nor cares what abuses and perversions of the law take place in its name. Theresa May and the British government have utterly defeated advocates of privacy and security, completely ignoring those who correctly identify the zero-sum game between freedom and security in favour of those who feel the need to destroy liberty in order to "save" it.

The UK is now a measurably less free country in terms of technological security, permitted speech and ability to resist abuses of power and position by agents of the State, be they shadowy spies, police inspectors and above (ie, shift leaders in your local cop shop) and even food hygiene inspectors – no, really."

Source: https://www.theguardian.com/world/2016/nov...

Distracted. Addicted. Alone Together. Emotionally dead. Disengaged from the real world. A parody of itself.

Animation by Steve Cutts. Music by Moby & The Void Pacific Choir, These Systems Are Failing.