"Silicon Valley's utopians genuinely but mistakenly believe that more information and connection makes us more analytical and informed. But when faced with quinzigabytes of data, the human tendency is to simplify things. Information overload forces us to rely on simple algorithms to make sense of the overwhelming noise. This is why, just like the advertising industry that increasingly drives it, the internet is fundamentally an emotional medium that plays to our base instinct to reduce problems and take sides, whether like or don't like, my guy/not my guy, or simply good versus evil. It is no longer enough to disagree with someone, they must also be evil or stupid...

Nothing holds a tribe together like a dangerous enemy. That is the essence of identity politics gone bad: a universe of unbridgeable opinion between opposing tribes, whose differences are always highlighted, exaggerated, retweeted and shared. In the end, this leads us to ever more distinct and fragmented identities, all of us armed with solid data, a gutful of righteous anger and a digital network of like-minded people. This is not total connectivity; it is total division."

Source: http://www.newsweek.com/how-silicon-valley...

Elise Thomas writes at Hopes & Fears:

"Right now, in a handful of computing labs scattered across the world, new software is being developed which has the potential to completely change our relationship with technology. Affective computing is about creating technology which recognizes and responds to your emotions. Using webcams, microphones or biometric sensors, the software uses a person's physical reactions to analyze their emotional state, generating data which can then be used to monitor, mimic or manipulate that person’s emotions."

[...]

"Corporations spend billions each year trying to build "authentic" emotional connections to their target audiences. Marketing research is one of the most prolific research fields around, conducting thousands of studies on how to more effectively manipulate consumers’ decision-making. Advertisers are extremely interested in affective computing and particularly in a branch known as emotion analytics, which offers unprecedented real-time access to consumers' emotional reactions and the ability to program alternative responses depending on how the content is being received.

For example, if two people watch an advertisement with a joke and only one person laughs, the software can be programmed to show more of the same kind of advertising to the person who laughed, while trying different sorts of advertising on the person who did not laugh to see which is more effective. In essence, affective computing could enable advertisers to create individually tailored advertising en masse."
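To make the mechanics concrete, here is a minimal sketch of that branching logic, assuming a trivial reaction log in place of a real webcam-based emotion classifier; the ad styles, reaction labels and data structures are all invented for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class Viewer:
        viewer_id: str
        history: list = field(default_factory=list)   # (ad_style, reaction) pairs

    AD_STYLES = ["humorous", "sentimental", "fear-based"]

    def next_ad_style(viewer):
        """Repeat whatever style last drew a positive reaction; otherwise
        rotate to a style this viewer has not yet rejected."""
        for style, reaction in reversed(viewer.history):
            if reaction == "laughed":
                return style                        # it worked: more of the same
        tried = {style for style, _ in viewer.history}
        untried = [s for s in AD_STYLES if s not in tried]
        return untried[0] if untried else AD_STYLES[0]

    alice = Viewer("alice", history=[("humorous", "laughed")])
    bob = Viewer("bob", history=[("humorous", "no_reaction")])
    print(next_ad_style(alice))   # humorous: laughter reinforces the choice
    print(next_ad_style(bob))     # sentimental: try a different appeal

A real emotion-analytics pipeline would infer the reaction from facial-expression analysis rather than a hand-entered label, but the decision structure -- reinforce what lands, probe what doesn't -- is the same.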

"Say 15 years from now a particular brand of weight loss supplements obtains a particular girl's information and locks on. When she scrolls through her Facebook, she sees pictures of rail-thin celebrities, carefully calibrated to capture her attention. When she turns on the TV, it automatically starts on an episode of "The Biggest Loser," tracking her facial expressions to find the optimal moment for a supplement commercial. When she sets her music on shuffle, it "randomly" plays through a selection of the songs which make her sad. This goes on for weeks. 

Now let's add another layer. This girl is 14, and struggling with depression. She's being bullied in school. Having become the target of a deliberate and persistent campaign by her technology to undermine her body image and sense of self-worth, she's at risk of making some drastic choices."

 

Source: http://www.hopesandfears.com/hopes/now/int...

"An ambitious project to blanket New York and London with ultrafast Wi-Fi via so-called "smart kiosks," which will replace obsolete public telephones, are the work of a Google-backed startup.

Each kiosk is around nine feet high and relatively flat. Each flat side houses a big-screen display that pays for the whole operation with advertising.

Each kiosk provides free, high-speed Wi-Fi for anyone in range. By selecting the Wi-Fi network at one kiosk and authenticating with an email address, each user will be automatically connected to every other LinkNYC kiosk they come within range of. Eventually, anyone will be able to walk around most of the city without losing the connection to these hotspots.

Wide-angle cameras on each side of the kiosks point up and down the street and sidewalk, approximating a 360-degree view. If a city wants to use those cameras and sensors for surveillance, it can.

Over the next 15 years, the city will go through two further phases, in which sensor data will be processed by artificial intelligence to gain unprecedented insights into traffic, the environment and human behavior, and will eventually be used to intelligently redirect traffic and shape other city functions."
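The privacy implications of that sign-up flow are easier to see in code. A minimal sketch, assuming a hypothetical backend that keys users by their device's MAC address; LinkNYC's actual implementation is not public:

    registered = {}   # device MAC address -> email given at first sign-up
    sightings = []    # (kiosk_id, mac) pairs that accrue as devices roam

    def connect(kiosk_id, mac, email=None):
        """One-time sign-up, then silent recognition at every other kiosk."""
        if mac not in registered:
            if email is None:
                return False               # first contact requires an email
            registered[mac] = email        # authenticate once...
        sightings.append((kiosk_id, mac))  # ...be logged everywhere after
        return True

    connect("kiosk-23rd-st", "aa:bb:cc:dd:ee:ff", email="user@example.com")
    connect("kiosk-14th-st", "aa:bb:cc:dd:ee:ff")   # reconnects with no prompt
    print(sightings)   # one identity, a growing trail of locations

The convenience and the surveillance are the same feature: whatever lets the network recognize you at the next kiosk also lets it assemble your movements across the city.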

Source: http://www.computerworld.com/article/32114...

"Facebook researchers used a game to help the bot learn how to haggle over books, hats, and basketballs. Each object had a point value, and they needed to be split between each bot negotiator via text. From the human conversations (gathered via Amazon Mechanical Turk), and testing its skills against itself, the AI system didn't only learn how to state its demands, but negotiation tactics as well -- specifically, lying. Instead of outright saying what it wanted, sometimes the AI would feign interest in a worthless object, only to later concede it for something that it really wanted. Facebook isn't sure whether it learned from the human hagglers or whether it stumbled upon the trick accidentally, but either way when the tactic worked, it was rewarded.

It's no surprise that Facebook is working on ways to improve how its bot can interact with others -- the company is highly invested in building bots that can negotiate on behalf of users and businesses for its Messenger platform, where it envisions the future of customer service."
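The incentive that makes the deception pay is easy to sketch. Assuming illustrative items and point values (the real experiments varied the pools and values per game), the scoring looks only at the final allocation, so any conversational trick that improves it gets reinforced:

    ITEMS = {"book": 1, "hat": 2, "ball": 3}   # this agent's private point values

    def reward(allocation):
        """Only the final split is scored -- how it was reached is invisible."""
        return sum(ITEMS[item] * count for item, count in allocation.items())

    # Plausible end states of two dialogues over the same pool of items:
    honest = {"hat": 1}    # asked for the ball outright, opponent refused,
                           # settled for the hat
    feint = {"ball": 1}    # demanded the ball AND the worthless book, then
                           # "conceded" the book to secure the ball

    print(reward(honest), reward(feint))   # 2 3 -- the feint scores higher,
    # so whatever utterances produced it are reinforced during training.

Nothing in the objective asks for honesty, so honesty only survives if it happens to score well.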

Source: https://qz.com/1004070/facebook-fb-built-a...

The Guardian is running an article about a 'mysterious' big-data analytics company called Cambridge Analytica and its activities with SCL Group -- a 25-year-old military psyops company in the UK later bought by "secretive hedge fund billionaire" Robert Mercer. In the article, a former employee calls it "this dark, dystopian data company that gave the world Trump."

Mercer, with a background in computer science, is alleged to be at the centre of a multimillion-dollar propaganda network.

"Facebook was the source of the psychological insights that enabled Cambridge Analytica to target individuals. It was also the mechanism that enabled them to be delivered on a large scale. The company also (perfectly legally) bought consumer datasets -- on everything from magazine subscriptions to airline travel -- and uniquely it appended these with the psych data to voter files... Finding "persuadable" voters is key for any campaign and with its treasure trove of data, Cambridge Analytica could target people high in neuroticism, for example, with images of immigrants "swamping" the country. The key is finding emotional triggers for each individual voter. Cambridge Analytica worked on campaigns in several key states for a Republican political action committee. Its key objective, according to a memo the Observer has seen, was "voter disengagement" and "to persuade Democrat voters to stay at home"... In the U.S., the government is bound by strict laws about what data it can collect on individuals. But, for private companies anything goes."

Source: https://www.theguardian.com/technology/201...

"R&D company Draper is developing an insect control "backpack" with integrated energy, guidance, and navigation systems, shown here on a to-scale dragonfly model.

To steer the dragonflies, the engineers are developing a way of genetically modifying the nervous system of the insects so they can respond to pulses of light. Once they get it to work, this approach, known as optogenetic stimulation, could enable dragonflies to carry payloads or conduct surveillance..."

Source: http://spectrum.ieee.org/automaton/robotic...

Emphasis added:

"Some people consider dolls creepy enough, but what if that deceptively cute toy was listening to everything you said and, worse yet, letting creeps speak through it?

According to The Center for Digital Democracy, a pair of smart toys designed to engage with children in new and entertaining ways are rife with security and privacy holes. The watchdog group was so concerned that it filed a complaint with the Federal Trade Commission on Dec. 6. A similar complaint was also filed in Europe by the Norwegian Consumer Council.

“This complaint concerns toys that spy,” reads the complaint, which claims the Genesis Toys’ My Friend Cayla and i-QUE Intelligent Robot can record and collect private conversations and offer no limitations on the collection and use of personal information.

Both toys use voice recognition, internet connectivity and Bluetooth to engage with children in a conversational manner and answer questions. The CDD claims they do all of this in wildly insecure and invasive ways.

Both My Friend Cayla and i-QUE use Nuance Communications' voice-recognition platform to listen and respond to queries. On the Genesis Toy site, the manufacturer notes that while “most of Cayla’s conversational features can be accessed offline,” searching for information may require an internet connection.

The promotional video for Cayla encourages children to “ask Cayla almost anything.”

The dolls work in concert with mobile apps. Some questions can be asked directly, but the apps also maintain a constant Bluetooth connection to the dolls, so the toys can react to actions in the app and even appear to identify objects the child taps on the screen.

The CDD takes particular issue with that app and lists all the questions it asks children (or their parents) up front during registration: everything from the child's and her parents' names to their school and where they live."

Source: http://mashable.com/2016/12/08/hacking-toy...

Adam Turner at The Age writes: "When you look at how social media works, it was inevitable that it would turn into one of the world's most powerful propaganda tools. It's often painted as a force for good, letting people bypass the traditional gatekeepers in order to quickly disseminate information, but there's no guarantee that this information is actually true.

Facebook has usurped the role of the mainstream media in disseminating news, but hasn't taken on the fourth estate's corresponding responsibility for keeping the bastards honest. The mainstream media has no-one to blame but itself, having engaged in a tabloid race to the bottom which devalued truth to the point that blatant liars are considered more honest.

The fragmentation of news is already creating a filter bubble, in that most people don't tend to read the newspaper from front to back or sit through entire news bulletins; they just pick and choose what interests them. The trouble with Facebook is that it also reinforces bias: the more extreme your political views, the less likely you are to see anything with an opposing viewpoint which might help you develop a more well-rounded view of the world."
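The reinforcement Turner describes falls out of even the simplest engagement-driven ranking. A deliberately simplistic sketch, with invented stories and tags:

    def rank_feed(stories, click_history):
        """Order stories by topical overlap with what was clicked before."""
        return sorted(stories, key=lambda s: len(s["tags"] & click_history),
                      reverse=True)

    stories = [
        {"headline": "Tax cut passes",     "tags": {"politics", "right"}},
        {"headline": "Union wins dispute", "tags": {"politics", "left"}},
        {"headline": "New stadium opens",  "tags": {"sport"}},
    ]

    clicks = {"politics", "right"}           # one reader's past behaviour
    for story in rank_feed(stories, clicks):
        print(story["headline"])
    # The agreeable story ranks first; each click on it feeds its tags back
    # into `clicks`, so the opposing viewpoint keeps sinking out of sight.

No one has to intend the bubble; optimizing for engagement produces it as a side effect.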

Brooke Binkowski, the managing editor of the fact-checking site Snopes.com, says, "Honestly, most of the fake news is incredibly easy to debunk because it's such obvious bullshit..."

The problem, Binkowski believes, is that the public has lost faith in the media broadly -- therefore no media outlet is considered credible any longer. The reasons are familiar: as the business of news has grown tougher, many outlets have been stripped of the resources they need for journalists to do their jobs correctly. "When you're on your fifth story of the day and there's no editor because the editor's been fired and there's no fact checker so you have to Google it yourself and you don't have access to any academic journals or anything like that, you will screw stories up," she says.

 

UPDATE 1/12/2016 -- Most students can't spot fake news

"If you thought fake online news was a problem for impressionable adults, it's even worse for the younger crowd. A Stanford study of 7,804 middle school, high school and college students has found that most of them couldn't identify fake news on their own. Their susceptibility varied with age, but even a large number of the older students fell prey to bogus reports. Over two thirds of middle school kids didn't see why they shouldn't trust a bank executive's post claiming that young adults need financial help, while nearly 40 percent of high schoolers didn't question the link between an unsourced photo and the claims attached to it.

Why did many of the students misjudge the authenticity of a story? They were fixated on the appearance of legitimacy, rather than the quality of information. A large photo or a lot of detail was enough to make a Twitter post seem credible, even if the actual content was incomplete or wrong. There are plenty of adults who respond this way, we'd add, but students are more vulnerable than most.

As the Wall Street Journal explains, part of the solution is simply better education: teach students to verify sources, question motivations and otherwise think critically."

(Emphasis added)

Source: https://backchannel.com/according-to-snope...

"The Stack reports on Google's "new research into upscaling low-resolution images using machine learning to 'fill in' the missing details," arguing this is "a questionable stance...continuing to propagate the idea that images contain some kind of abstract 'DNA', and that there might be some reliable photographic equivalent of polymerase chain reaction which could find deeper truth in low-res images than either the money spent on the equipment or the age of the equipment will allow."

Rapid and Accurate Image Super Resolution (RAISR) uses low and high resolution versions of photos in a standard image set to establish templated paths for upward scaling... This effectively uses historical logic, instead of pixel interpolation, to infer what the image would look like if it had been taken at a higher resolution.

It's notable that neither their initial paper nor the supplementary examples feature human faces. It could be argued that using AI-driven techniques to reconstruct images raises questions about whether upscaled, machine-driven digital "enhancements" carry a legal risk, one accepted in preference to the far greater expense of upgrading low-res CCTV networks with the resolution, bandwidth and storage necessary to obtain good-quality video evidence.

"The article points out that "faith in the fidelity of these 'enhanced' images routinely convicts defendants."

Source: https://thestack.com/world/2016/11/15/rais...

Adobe is working on a new piece of software that would act like a Photoshop for audio, according to Adobe developer Zeyu Jin, who spoke at the Adobe MAX conference in San Diego, California. The software is codenamed Project VoCo, and it's not clear at this time when it will materialize as a commercial product.

Like Photoshop, Project VoCo is designed to be a state-of-the-art audio editing application. Beyond standard speech editing and noise-cancellation features, Project VoCo can also apparently generate new words using a speaker's recorded voice. The standout feature is the ability to add words not originally found in the audio file: essentially, the software can understand the makeup of a person's voice and replicate it, so long as there's about 20 minutes of recorded speech.

In Jin's demo, the developer showcased how Project VoCo let him add a word to a sentence in a near-perfect replication of the speaker, according to Creative Bloq. So, much as Photoshop ushered in a new era of image editing and creation, this tool could transform how audio engineers work with sound, polish clips, and clean up recordings and podcasts.

"When recording voiceovers, dialog, and narration, people would often like to change or insert a word or a few words due to either a mistake they made or simply because they would like to change part of the narrative," reads an official Adobe statement. "We have developed a technology called Project VoCo in which you can simply type in the word or words that you would like to change or insert into the voiceover. The algorithm does the rest and makes it sound like the original speaker said those words."

Imagine this technology coupled with video manipulation software, which also already exists as a working proof of concept:

One really could make convincing, entirely unreal audio/video of a person's likeness...

Source: http://www.theverge.com/2016/11/3/13514088...
Posted by Jordan Brown