
In early February, Google announced that its home security and alarm system Nest Secure would be getting an update. Users, the company said, could now enable its virtual-assistant technology, Google Assistant. The problem: Nest users didn't know a microphone existed on their security device to begin with. The existence of a microphone on the Nest Guard, which is the alarm, keypad, and motion-sensor component in the Nest Secure offering, was never disclosed in any of the product material for the device. On Tuesday, a Google spokesperson told Business Insider the company had made an "error." "The on-device microphone was never intended to be a secret and should have been listed in the tech specs," the spokesperson said. "That was an error on our part."

Source: https://www.businessinsider.com.au/nest-mi...


"...It uses AI to learn which faces are important to you, then starts automatically capturing photos and videos. I was similarly excited by early promotional videos of parents in Google Glass playing with their young kids, capturing photos and videos in a hands-free way that didn’t interrupt the moment." 

Read more

https://www.theverge.com/2017/10/5/16428708/google-clips-camera-privacy-parents-children

https://www.theverge.com/2017/10/4/16405200/google-clips-camera-ai-photos-video-hands-on-wi-fi-direct

https://techcrunch.com/2017/10/04/google-clips-is-a-new-249-smart-camera-that-you-can-wear/

"One of the engineers behind Google's self-driving car has established a nonprofit religious corporation with one main aim – to create a deity with artificial intelligence. According to newly uncovered documents filed to the state of California in September 2015, Anthony Levandowski serves as the CEO and president of religious organisation Way of the Future."

Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

Source: https://www.theguardian.com/technology/201...

"On mobile, where the majority of the world's content is now consumed, Google and Facebook own eight of the top 10 apps, with apps devouring 87% of our time spent on smartphones and tablets, according to new comScore data (Figure A).

[Figure A: apps-rule.png]

"In sum, the majority of our time online is now mediated by just a few megacorporations, and for the most part their top incentive is to borrow our privacy just long enough to target an ad at us.

"Then there's Mozilla, an organization whose mantra is "Internet for people, not profit." That feels like a necessary voice to add to today's internet oligopoly, but it's not one we're hearing. Mozilla once had a commanding share of the desktop web browser market; today that share has dwindled, and on mobile devices it's virtually non-existent.

"This isn't good, but I'm not sure what to do about it. We clearly need an organization standing up for web freedom, as expecting Google to do that is like asking the fox to guard the henhouse."

Source: http://www.techrepublic.com/article/mozill...

We train the machine so well, and its use is so ubiquitous, that it can become invisible: Google is making CAPTCHAs invisible using "a combination of machine learning and advanced risk analysis that adapts to new and emerging threats," Ars Technica reports. Emphasis added.

"The old reCAPTCHA system was pretty easy -- just a simple "I'm not a robot" checkbox would get people through your sign-up page. The new version is even simpler, and it doesn't use a challenge or checkbox. It works invisibly in the background, somehow, to identify bots from humans. [...] When sites switch over to the invisible CAPTCHA system, most users won't see CAPTCHAs at all, not even the "I'm not a robot" checkbox. If you are flagged as "suspicious" by the system, then it will display the usual challenges.
[...]
reCAPTCHA was bought by Google in 2009 and was used to put unsuspecting website users to work for Google. Some CAPTCHA systems create arbitrary problems for users to solve, but older reCAPTCHA challenges actually used problems Google's computers needed to solve but couldn't. Google digitizes millions of books, but sometimes the OCR (optical character recognition) software can't recognize a word, so that word is sent into the reCAPTCHA system for solving by humans. If you've ever solved a reCAPTCHA that looks like a set of numbers, those were from Google's camera-covered Street View cars, which whizz down the streets and identify house numbers. If the OCR software couldn't figure out a house number, that number was made into a CAPTCHA for solving by humans. The grid of pictures that would ask you to "select all the cats" was used to train computer image recognition algorithms."
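Whether the challenge is visible or not, the pattern on the site's side stays the same: the widget in the browser produces a token, and the site's backend asks Google whether that token passes. A minimal sketch of that verification step in Python, against the standard siteverify endpoint (the secret key, token, and variable names below are placeholders, not anything from the article):

    # Minimal server-side reCAPTCHA verification sketch (placeholder values).
    import json
    import urllib.parse
    import urllib.request

    VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

    def verify_recaptcha(token, secret, remote_ip=None):
        """Forward the browser-issued token to Google; return True if it passes."""
        fields = {"secret": secret, "response": token}
        if remote_ip:
            fields["remoteip"] = remote_ip
        data = urllib.parse.urlencode(fields).encode()
        with urllib.request.urlopen(VERIFY_URL, data=data) as resp:
            result = json.load(resp)
        # The reply is JSON such as {"success": true, ...}; a visitor the risk
        # analysis flags as suspicious (or an expired token) comes back false.
        return bool(result.get("success"))

    # Usage in a form handler (hypothetical names):
    # ok = verify_recaptcha(form["g-recaptcha-response"], RECAPTCHA_SECRET, client_ip)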

Source: https://arstechnica.com/gadgets/2017/03/go...
First came the assault on privacy. Name, address, telephone, DOB, SSN, physical description, friends, family, likes, dislikes, habits, hobbies, beliefs, religion, sexual orientation, finances, every granular detail of a person’s life, all logged, indexed, analyzed and cross-referenced.

Then came the gathering of location and communication data. Cell phones, apps, metro cards, license plate readers and toll tags, credit card use, IP addresses and authenticated logins, tower info, router proximity, networked “things” everywhere reporting on activity and location, astoundingly accurate facial recognition mated with analytics and “gigapixel” cameras and, worst of all, mindlessly self-contributed posts, tweets, and “check-ins,” all constantly reporting a subject’s location 24-7-365, to such a degree of accuracy that “predictive profiling” knows where you will likely be next Thursday afternoon.

Today we are experiencing constant efforts to shred anonymity. Forensic linguistics, browser fingerprinting, lifestyle and behavior analysis, metadata of all types, HTML5, IPv6, and daily emerging “advances” in surveillance technologies - some seemingly science fiction but real - are combining to make constant, mobile identification and absolute loss of anonymity inevitable.

And, now, predictably, the final efforts to homogenize: the “siloing” and Balkanization of the Internet. As Internet use becomes more and more self-restricted to a few large providers, as users increasingly never leave the single ecosystem of a Facebook or a Google, as the massive firehose of information on the Internet is “curated” and “managed” by persons who believe that they know best what news and opinions you should have available to read, see, and believe, the bias of a few will eventually determine what you believe. What is propaganda? What is truth? You simply won’t know.

In a tradition dating back to the first HOPE conference, for three full hours Steven Rambam will detail the latest trends in privacy invasion and will demonstrate cutting-edge anonymity-shredding surveillance technologies. Drones will fly, a “privacy victim” will undergo digital proctology, a Q&A period will be provided, and fun will be had by all.
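Of the techniques listed above, browser fingerprinting is perhaps the easiest to illustrate: individually bland attributes that any page can read, hashed together, become a persistent identifier that survives cleared cookies. A toy sketch (the attributes and values are placeholders; real fingerprinting scripts also harvest canvas and WebGL rendering quirks, installed fonts, audio-stack behaviour and more):

    # Toy browser-fingerprint sketch: hash a few browser-visible attributes
    # into a stable identifier. Values are placeholders, not real telemetry.
    import hashlib

    def fingerprint(attributes):
        """Derive a short, stable identifier from a dict of attributes."""
        canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
        return hashlib.sha256(canonical.encode()).hexdigest()[:16]

    visitor = {
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
        "language": "en-AU",
        "timezone": "Australia/Melbourne",
        "screen": "2560x1440x24",
        "do_not_track": "1",
    }
    print(fingerprint(visitor))  # same browser, same hash -- no cookie required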
Source: https://www.youtube.com/watch?v=FHwl6AyL6j...

"The Stack reports on Google's "new research into upscaling low-resolution images using machine learning to 'fill in' the missing details," arguing this is "a questionable stance...continuing to propagate the idea that images contain some kind of abstract 'DNA', and that there might be some reliable photographic equivalent of polymerase chain reaction which could find deeper truth in low-res images than either the money spent on the equipment or the age of the equipment will allow."

Rapid and Accurate Image Super Resolution (RAISR) uses low and high resolution versions of photos in a standard image set to establish templated paths for upward scaling... This effectively uses historical logic, instead of pixel interpolation, to infer what the image would look like if it had been taken at a higher resolution.

It’s notable that neither their initial paper nor the supplementary examples feature human faces. It could be argued that using AI-driven techniques to reconstruct images raises some questions about whether upscaled, machine-driven digital enhancements are a legal risk, compared to the far greater expense of upgrading low-res CCTV networks with the necessary resolution, bandwidth and storage to obtain good quality video evidence.

"The article points out that "faith in the fidelity of these 'enhanced' images routinely convicts defendants."

Source: https://thestack.com/world/2016/11/15/rais...
Vint Cerf, the living legend largely responsible for the development of the Internet protocol suite, has some concerns about history. In his current column for the Communications of the ACM, Cerf worries about the decreasing longevity of our media, and, thus, about our ability as a civilization to self-document—to have a historical record that one day far in the future might be remarked upon and learned from. Magnetic films do not quite have the staying power of clay tablets.

At stake, according to Cerf, is “the possibility that the centuries well before ours will be better known than ours will be unless we are persistent about preserving digital content. The earlier media seem to have a kind of timeless longevity while modern media from the 1800s forward seem to have shrinking lifetimes. Just as the monks and Muslims of the Middle Ages preserved content by copying into new media, won’t we need to do the same for our modern content?”

The more ephemeral media becomes across technological generations, the more it depends on the technological generation that comes next.

It also depends on the mindset of the generation that comes next... What if we don't even want to remember?

Source: http://motherboard.vice.com/read/vint-cerf...

"Google democratized information, Uber democratized car rides, and Twitter democratized publishing a single sentence. But to the World Bank, the powerful Washington-based organisation that lends money to developing countries, Silicon Valley’s technology firms appear to be exacerbating economic inequality rather than improving it."

Source: https://www.theguardian.com/technology/201...