In early February, Google announced that its home security and alarm system, Nest Secure, would be getting an update. Users, the company said, could now enable its virtual-assistant technology, Google Assistant. The problem: Nest users didn't know a microphone existed on their security device to begin with. The existence of a microphone on the Nest Guard, which is the alarm, keypad, and motion-sensor component of the Nest Secure offering, was never disclosed in any of the product material for the device.

On Tuesday, a Google spokesperson told Business Insider the company had made an "error." "The on-device microphone was never intended to be a secret and should have been listed in the tech specs," the spokesperson said. "That was an error on our part."

Source: https://www.businessinsider.com.au/nest-mi...

Writer and artist James Bridle writes on Medium:

"Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatize, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level.

To begin: Kids' YouTube is definitely and markedly weird. I've been aware of its weirdness for some time. Last year, there were a number of articles posted about the Surprise Egg craze. Surprise Eggs videos depict, often at excruciating length, the process of unwrapping Kinder and other egg toys. That's it, but kids are captivated by them. There are thousands and thousands of these videos and thousands and thousands, if not millions, of children watching them. [...] What I find somewhat disturbing about the proliferation of even (relatively) normal kids videos is the impossibility of determining the degree of automation which is at work here; how to parse out the gap between human and machine."

Sapna Maheshwari explores this further in The New York Times:

"Parents and children have flocked to Google-owned YouTube Kids since it was introduced in early 2015. The app's more than 11 million weekly viewers are drawn in by its seemingly infinite supply of clips, including those from popular shows by Disney and Nickelodeon, and the knowledge that the app is supposed to contain only child-friendly content that has been automatically filtered from the main YouTube site. But the app contains dark corners, too, as videos that are disturbing for children slip past its filters, either by mistake or because bad actors have found ways to fool the YouTube Kids algorithms. In recent months, parents like Ms. Burns have complained that their children have been shown videos with well-known characters in violent or lewd situations and other clips with disturbing imagery, sometimes set to nursery rhymes."

Very horrible and creepy.

Source: https://medium.com/@jamesbridle/something-...

"Silicon Valley's utopians genuinely but mistakenly believe that more information and connection makes us more analytical and informed. But when faced with quinzigabytes of data, the human tendency is to simplify things. Information overload forces us to rely on simple algorithms to make sense of the overwhelming noise. This is why, just like the advertising industry that increasingly drives it, the internet is fundamentally an emotional medium that plays to our base instinct to reduce problems and take sides, whether like or don't like, my guy/not my guy, or simply good versus evil. It is no longer enough to disagree with someone, they must also be evil or stupid...

Nothing holds a tribe together like a dangerous enemy. That is the essence of identity politics gone bad: a universe of unbridgeable opinion between opposing tribes, whose differences are always highlighted, exaggerated, retweeted and shared. In the end, this leads us to ever more distinct and fragmented identities, all of us armed with solid data, a gutful of righteous anger and a digital network of likeminded people. This is not total connectivity; it is total division."

Source: http://www.newsweek.com/how-silicon-valley...

Elise Thomas writes at Hopes & Fears:

"Right now, in a handful of computing labs scattered across the world, new software is being developed which has the potential to completely change our relationship with technology. Affective computing is about creating technology which recognizes and responds to your emotions. Using webcams, microphones or biometric sensors, the software uses a person's physical reactions to analyze their emotional state, generating data which can then be used to monitor, mimic or manipulate that person’s emotions."

[...]

"Corporations spend billions each year trying to build "authentic" emotional connections to their target audiences. Marketing research is one of the most prolific research fields around, conducting thousands of studies on how to more effectively manipulate consumers’ decision-making. Advertisers are extremely interested in affective computing and particularly in a branch known as emotion analytics, which offers unprecedented real-time access to consumers' emotional reactions and the ability to program alternative responses depending on how the content is being received.

For example, if two people watch an advertisement with a joke and only one person laughs, the software can be programmed to show more of the same kind of advertising to the person who laughs while trying different sorts of advertising on the person who did not laugh to see if it's more effective. In essence, affective computing could enable advertisers to create individually-tailored advertising en masse."

"Say 15 years from now a particular brand of weight loss supplements obtains a particular girl's information and locks on. When she scrolls through her Facebook, she sees pictures of rail-thin celebrities, carefully calibrated to capture her attention. When she turns on the TV, it automatically starts on an episode of "The Biggest Loser," tracking her facial expressions to find the optimal moment for a supplement commercial. When she sets her music on shuffle, it "randomly" plays through a selection of the songs which make her sad. This goes on for weeks. 

Now let's add another layer. This girl is 14, and struggling with depression. She's being bullied in school. Having become the target of a deliberate and persistent campaign by her technology to undermine her body image and sense of self-worth, she's at risk of making some drastic choices."

Source: http://www.hopesandfears.com/hopes/now/int...
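
The feedback loop Thomas describes, in which the system keeps showing whatever provoked a reaction and keeps experimenting on whoever stayed flat, is simple to sketch. The toy Python below is purely illustrative: the class, the category labels, and the boolean "reaction" signal are my stand-ins for what would really come out of an emotion-analytics SDK reading a webcam or biometric sensor.

```python
import random
from collections import defaultdict

AD_CATEGORIES = ["humorous", "sentimental", "factual"]

class EmotionFeedbackAdSelector:
    """Pick the next ad category from each viewer's observed reactions."""

    def __init__(self):
        # Per-viewer tally of reactions to each ad category.
        self.scores = defaultdict(lambda: defaultdict(int))

    def record_reaction(self, viewer_id, category, reacted_positively):
        # A real system would infer this signal from facial coding or biometrics.
        self.scores[viewer_id][category] += 1 if reacted_positively else -1

    def next_category(self, viewer_id):
        history = self.scores[viewer_id]
        if not history or max(history.values()) <= 0:
            # No positive signal yet: keep experimenting with other categories.
            untried = [c for c in AD_CATEGORIES if c not in history]
            return random.choice(untried or AD_CATEGORIES)
        # Positive signal exists: show more of what worked.
        return max(history, key=history.get)

selector = EmotionFeedbackAdSelector()
selector.record_reaction("viewer_a", "humorous", True)   # laughed at the joke
selector.record_reaction("viewer_b", "humorous", False)  # did not laugh
print(selector.next_category("viewer_a"))  # "humorous" again
print(selector.next_category("viewer_b"))  # tries "sentimental" or "factual"
```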

"The biggest psychological experiment ever is being conducted, and we’re all taking part in it: every day, a billion people are tested online. Which ingenious tricks and other digital laws ensure that we fill our online shopping carts to the brim, or stay on websites as long as possible? Or vote for a particular candidate?

The bankruptcies of department stores and shoe shops clearly show that our buying behaviour is rapidly shifting to the Internet. An entirely new field has arisen, of 'user experience' architects and 'online persuasion officers'. How do these digital data dealers use, manipulate and abuse our user experience? Not just when it comes to buying things, but also with regard to our free time and political preferences.

Aren’t companies, which are running millions of tests at a time, miles ahead of science and government, in this respect? Now the creators of these digital seduction techniques, former Google employees among them, are themselves arguing for the introduction of an ethical code. What does it mean, when the conductors of experiments themselves are asking for their power and possibilities to be restricted?"

"Researchers at Stanford and Princeton universities have found a way to connect the dots between people’s private online activity and their Twitter accounts—even for people who have never tweeted.

When the team tested the technique on 400 real people who submitted their browsing history, they were able to correctly pick out the volunteers’ Twitter profiles nearly three-quarters of the time.

Here’s how the de-anonymization system works: The researchers figured that a person is more likely to click a link that was shared on social media by a friend—or a friend of a friend—than any other random link on the internet. (Their model controls for the baseline popularity of each website.) With that in mind, and the details of an anonymous person’s browser history in hand, the researchers can compute the probability that any one Twitter user created that browsing history. People’s basic tendency to follow links they come across on Twitter unmasks them—and it usually takes less than a minute.

“You can even be de-anonymized if you just browse and follow people, without actually sharing anything.”

Source: https://www.theatlantic.com/technology/arc...
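
The ranking logic in that description, where a history full of links that appeared in one account's feed counts for more than links that are popular everywhere, can be sketched as a naive likelihood-ratio score. This is a loose illustration under my own assumptions (the 0.5 in-feed click probability and the data shapes are invented), not the researchers' actual model.

```python
import math

def score_candidate(history, feed_links, baseline_popularity):
    """Log-likelihood-style score for one Twitter account: reward history
    links that appeared in this account's feed, discounted by how popular
    each link is overall (the model controls for baseline popularity)."""
    score = 0.0
    for url in history:
        base = baseline_popularity.get(url, 1e-6)  # overall click probability
        if url in feed_links:
            # Assumed in-feed click probability of 0.5 -- an invented number.
            score += math.log(0.5 / base)
        else:
            score += math.log(base)
    return score

def deanonymize(history, candidate_feeds, baseline_popularity):
    """Rank candidate accounts from most to least likely source of the history."""
    return sorted(candidate_feeds,
                  key=lambda c: score_candidate(history, candidate_feeds[c],
                                                baseline_popularity),
                  reverse=True)

feeds = {"alice": {"a.com/1", "b.com/2"}, "bob": {"c.com/3"}}
popularity = {"a.com/1": 0.001, "b.com/2": 0.002, "c.com/3": 0.001}
print(deanonymize({"a.com/1", "b.com/2"}, feeds, popularity))  # ['alice', 'bob']
```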

"If YOU think you are not being analysed while browsing websites, it could be time to reconsider. A creepy new website called clickclickclick has been developed to demonstrate how our online behaviour is continuously measured.

The site, which observes and comments on your behaviour in detail (and is not harmful to your computer), contains nothing but a white screen and a large green button. From the minute you visit the website, it begins detailing your actions on the screen in real time.

The site also encourages users to turn on their audio, which offers the even more disturbing experience of having an English voice comment on your behaviour.

Designer Roel Wouters said the experiment was aimed at reminding people about the serious themes of big data and privacy. “It seemed fun to thematise this in a simple and lighthearted way,” he said.

Fellow designer Luna Maurer said her own experiences with the internet had helped shape the project. “I am actually quite internet aware, but I am still very often surprised that after I watched something on a website, a second later I get instantly personalised ads,” she said.

Source: http://www.news.com.au/technology/online/s...

"Yahoo has filed a patent for a type of smart billboard that would collect people's information and use it to deliver targeted ad content in real-time."

To achieve that functionality, the billboards would use a variety of sensor systems, including cameras and proximity technology, to capture real-time audio, video and even biometric information about potential target audiences.

But the tech company doesn’t just want to know about a passing vehicle. It also wants to know who the occupants are inside of it.

That’s why Yahoo is prepared to cooperate with cell-tower and telecommunications companies to learn as much as possible about each vehicle’s occupants.

It goes on to explain in the application:

Various types of data (e.g., cell tower data, mobile app location data, image data, etc.) can be used to identify specific individuals in an audience in position to view advertising content. Similarly, vehicle navigation/tracking data from vehicles equipped with such systems could be used to identify specific vehicles and/or vehicle owners. Demographic data (e.g., as obtained from a marketing or user database) for the audience can thus be determined for the purpose of, for example, determining whether and/or the degree to which the demographic profile of the audience corresponds to a target demographic.

Source: https://www.grahamcluley.com/yahoo-creepy-...
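
The last step the application describes, "determining whether and/or the degree to which the demographic profile of the audience corresponds to a target demographic," is at heart a profile-similarity computation. A minimal sketch, with invented feature names and cosine similarity standing in for whatever matching method the patent actually envisions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sparse demographic profiles."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pick_ad(audience_profile, campaigns):
    """Choose the campaign whose target demographic best matches the audience."""
    return max(campaigns, key=lambda c: cosine_similarity(audience_profile,
                                                          campaigns[c]))

# An audience profile as might be fused from cell-tower, app-location,
# and image data (feature names are invented for the example).
audience = {"age_25_34": 0.6, "commuter": 0.9, "luxury_auto_intent": 0.2}
campaigns = {
    "suv_ad":    {"age_25_34": 0.5, "commuter": 1.0},
    "resort_ad": {"luxury_auto_intent": 1.0},
}
print(pick_ad(audience, campaigns))  # "suv_ad"
```
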
Increasing aspects of our lives are now recorded as digital data that are systematically stored, aggregated, analysed, and sold. Despite the promise of big data to improve our lives, all-encompassing data surveillance constitutes a new form of power that poses a risk not only to our privacy, but to our free will.

A more worrying trend is the use of big data to manipulate human behaviour at scale by incentivising “appropriate” activities, and penalising “inappropriate” activities. In recent years, governments in the UK, US, and Australia have been experimenting with attempts to “correct” the behaviour of their citizens through “nudge units”.

Nudge units: "In ways you don't detect [corporations and governments are] subtly influencing your decisions, pushing you towards what it believes are your (or its) best interests, exploiting the biases and tics of the human brain uncovered by research into behavioural psychology. And it is trying this in many different ways on many different people, running constant trials of different unconscious pokes and prods, to work out which is the most effective, which improves the most lives, or saves the most money. Preferably, both."

In his new book Inside the Nudge Unit, published this week in Britain, David Halpern explains his fascination with behavioural psychology.

“Our brains weren’t made for the day-to-day financial judgments that are the foundation of modern economies: from mortgages, to pensions, to the best buy in a supermarket. Our thinking and decisions are fused with emotion.”

There’s a window of opportunity for governments, Halpern believes: to exploit the gaps between perception, reason, emotion and reality, and push us the “right” way.

He gives me a recent example of the work of BI [the Behavioural Insights Team] – they were looking at police recruitment, and how to get a wider ethnic mix.

Just before applicants did an online recruitment test, in an email sending the link, BI added a line saying “before you do this, take a moment to think about why joining the police is important to you and your community”.

There was no effect on white applicants. But the pass rate for black and minority ethnic applicants moved from 40 to 60 per cent.

“It entirely closes the gap,” Halpern says. “Absolutely amazing. We thought we had good grounds in the [scientific research] literature that such a prompt might make a difference, but the scale of the difference was extraordinary.”

Halpern taught social psychology at Cambridge but spent six years in the Blair government’s strategy unit. An early think piece on behavioural policy-making was leaked to the media and caused a small storm – Blair publicly disowned it and that was that. Halpern returned to academia, but was lured back after similar ideas started propagating through the Obama administration, and Cameron was persuaded to give it a go.

Ministers tend not to like it. Once, one snapped: “I didn’t spend a decade in opposition to come into government to run a pilot.” But the technique is rife in the digital commercial world, where companies like Amazon or Google try 20 different versions of a web page.

Governments and public services should do it too, Halpern says. His favourite example is Britain’s organ donor register. They tested eight alternative online messages prompting people to join, including a simple request, different pictures, statistics, and conscience-tweaking statements like “If you needed an organ transplant, would you have one? If so, please help others.”

It’s not obvious which messages work best, even to an expert. The only way to find out is to test them. They were surprised to find that the picture (of a group of people) actually put people off, Halpern says.
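
Mechanically, what Halpern is describing is a randomized controlled message trial (A/B/n testing): assign each person to one variant, record the outcome, compare rates per arm. Here is a generic Python sketch with hypothetical variant names loosely echoing the organ-donor test; nothing in it is the Behavioural Insights Team's actual tooling.

```python
import hashlib
from collections import Counter

# Hypothetical variants, loosely echoing the organ-donor message test.
VARIANTS = ["simple request", "group photo", "statistics", "reciprocity appeal"]

def assign_variant(user_id):
    """Stable, roughly uniform assignment of a user to one message variant."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

shown, converted = Counter(), Counter()

def record(user_id, signed_up):
    """Log one exposure and whether the user signed up afterwards."""
    arm = assign_variant(user_id)
    shown[arm] += 1
    if signed_up:
        converted[arm] += 1

def report():
    """Compare sign-up rates per arm; the 'best' message is an empirical finding."""
    for arm in VARIANTS:
        rate = converted[arm] / shown[arm] if shown[arm] else 0.0
        print(f"{arm:20s} {converted[arm]:5d}/{shown[arm]:5d} = {rate:.1%}")

record("user-001", signed_up=True)
record("user-002", signed_up=False)
report()
```

Hashing the user ID keeps assignment deterministic, so a returning visitor always sees the same message, which is exactly why subjects never notice they are in a trial.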

In future they want to use demographic data to personalise nudges, Halpern says. On tax reminder notices, they had great success putting the phrase “most people pay their tax on time” at the top. But a stubborn top 5 per cent, with the biggest tax debts, saw this reminder and thought, “Well, I’m not most people”.

This whole approach raises ethical issues. Often you can’t tell people they’re being experimented on – it’s impractical, or ruins the experiment, or both.

“If we’re trying to find the best way of saying ‘don’t drop your litter’ with a sign saying ‘most people don’t drop litter’, are you supposed to have a sign before it saying ‘caution you are about to participate in a trial’?

“Where should we draw the line between effective communication and unacceptable ‘PsyOps’ or propaganda?”

Source: https://theconversation.com/data-surveilla...