"Foreign travelers arriving in the United States on the visa waiver program have been presented with an "optional" request to "enter information associated with your online presence," a government official confirmed Thursday. The prompt includes a drop-down menu that lists platforms including Facebook, Google+, Instagram, LinkedIn and YouTube, as well as a space for users to input their account names on those sites. The new policy comes as Washington tries to improve its ability to spot and deny entry to individuals who have ties to terrorist groups like the Islamic State. But the government has faced a barrage of criticism since it first floated the idea last summer. The Internet Association, which represents companies including Facebook, Google and Twitter, at the time joined with consumer advocates to argue the draft policy threatened free expression and posed new privacy and security risks to foreigners. Now that it is final, those opponents are furious the Obama administration ignored their concerns. The question itself is included in what's known as the Electronic System for Travel Authorization, a process that certain foreign travelers must complete to come to the United States. ESTA and a related paper form specifically apply to those arriving here through the visa-waiver program, which allows citizens of 38 countries to travel and stay in the United States for up to 90 days without a visa."
"Earlier this year, [ZDNet was] sent a series of large, encrypted files purportedly belonging to a U.S. police department as a result of a leak at a law firm, which was insecurely synchronizing its backup systems across the internet without a password. Among the files was a series of phone dumps produced by the police department with specialist equipment made by Cellebrite, an Israeli firm that provides phone-cracking technology. We obtained a number of these so-called extraction reports. One of the more interesting reports by far was from an iPhone 5 running iOS 8. The phone's owner didn't use a passcode, meaning the phone was entirely unencrypted. The phone was plugged into a Cellebrite UFED device, which in this case was a dedicated computer in the police department. The police officer carried out a logical extraction, which downloads what's in the phone's memory at the time. (Motherboard has more on how Cellebrite's extraction process works.) In some cases, the extraction also contained data the user had recently deleted. To our knowledge, there are a few sample reports floating around on the web, but it's rare to see a real-world example of how much data can be siphoned off from a fairly modern device. We're publishing some snippets from the report, with sensitive or identifiable information redacted."
Google Home, Amazon Echo, "smart" systems... terrifying invasive futures. The Google Home has been on sale since 4 November 2016 for US$129.
"Some people consider dolls creepy enough, but what if that deceptively cute toy was listening to everything you said and, worse yet, letting creeps speak through it?
According to The Center for Digital Democracy, a pair of smart toys designed to engage with children in new and entertaining ways are rife with security and privacy holes. The watchdog group was so concerned that it filed a complaint with the Federal Trade Commission on Dec. 6 (you can read the full complaint here). A similar one was also filed in Europe by the Norwegian Consumer Council.
“This complaint concerns toys that spy,” reads the complaint, which claims the Genesis Toys’ My Friend Cayla and i-QUE Intelligent Robot can record and collect private conversations and offer no limitations on the collection and use of personal information.
Both toys use voice recognition, internet connectivity and Bluetooth to engage with children in a conversational manner and answer questions. The CDD claims they do all of this in wildly insecure and invasive ways.
Both My Friend Cayla and i-QUE use Nuance Communications' voice-recognition platform to listen and respond to queries. On the Genesis Toy site, the manufacturer notes that while “most of Cayla’s conversational features can be accessed offline,” searching for information may require an internet connection.
The promotional video for Cayla encourages children to “ask Cayla almost anything.”
The dolls work in concert with mobile apps. Some questions can be asked directly, but the apps maintain a constant Bluetooth connection to the dolls, so the toys can also react to actions in the app and even appear to identify objects the child taps on screen.
The CDD takes particular issue with that app and lists all the questions it asks children (or their parents) up front during registration: everything from the child's and her parents' names to their school and where they live.
"Most Americans do not see "information overload" as a problem for them despite the explosion of internet data and images, according to a Pew Research Center survey on Wednesday.
Only 20 percent of U.S. adults feel they get more information than they can handle, down from 27 percent a decade ago. Just over three-quarters like having so much information at hand, the survey of 1,520 people showed.
"Generally, Americans appreciate lots of information and access to it," said the report into how U.S. adults cope with information demands.
Roughly four in five Americans agree that they are confident about using the internet to keep up with information demands, that a lot of information gives them a feeling of more control over their lives, and that they can easily determine what information is trustworthy.
Americans who are 65 or older, have a high school diploma or less and earn less than $30,000 a year are more likely to say they face a glut of information.
Eighty-four percent of Americans with online access through three sources - home broadband, smartphone and tablet computer - say they like having so much information available.
By contrast, 55 percent of those with no online source felt overwhelmed by the amount of possible information.
The term "information overload" was popularized by author Alvin Toffler in his 1970 bestseller "Future Shock." It refers to difficulties that people face from getting too much information or data.
The Pew survey involved people over 18 interviewed by landline or cell phones from March 7 to April 4. The margin of error was 2.9 percentage points, meaning results could vary by that much either way."
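As a rough sanity check on the quoted figure: the textbook 95% margin of error for a simple random sample of 1,520 works out to about 2.5 percentage points; Pew's reported 2.9 is a little higher because weighted surveys carry a design effect. A minimal sketch (the function name is ours, not Pew's):

```python
import math

def simple_moe(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n,
    assuming the worst-case proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

moe = simple_moe(1520)
print(f"{moe * 100:.1f} percentage points")  # prints "2.5 percentage points"
```

The gap between 2.5 and Pew's 2.9 reflects the extra variance introduced by survey weighting, not an error in either figure.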
The "Investigatory Powers Act" has been passed into law in the UK, legalising a number of previously illegal mass surveillance programs revealed by Edward Snowden in 2013. It also introduces new powers requiring ISPs to retain browsing data on all customers for 12 months, while giving police new powers to hack into computers and phones and to collect communications data in bulk.
"Jim Killock, executive director of the Open Rights Group, responded...saying: "...it is one of the most extreme surveillance laws ever passed in a democracy. The IP Act will have an impact that goes beyond the UK’s shores. It is likely that other countries, including authoritarian regimes with poor human rights records, will use this law to justify their own intrusive surveillance powers.”
"Much of the Act gives stronger legal footing to the UK's various bulk powers, including "bulk interception," which is, in general terms, the collection of internet and phone communications en masse. In June 2013, using documents provided by Edward Snowden, The Guardian revealed that the GCHQ taps fibre-optic undersea cables in order to intercept emails, internet histories, calls, and a wealth of other data."
The Snoopers' Charter allows the State to tell lies in court.
"Charter gives virtually unrestricted powers not only to State spy organisations but also to the police and a host of other government agencies. The operation of the oversight and accountability mechanisms...are all kept firmly out of sight -- and, so its authors hope, out of mind -- of the public. It is up to the State to volunteer the truth to its victims if the State thinks it has abused its secret powers. "Marking your own homework" is a phrase which does not fully capture this...
Section 56(1)(b) creates a legally guaranteed ability -- nay, duty -- to lie about even the potential for State hacking to take place, and to tell juries a wholly fictitious story about the true origins of hacked material used against defendants in order to secure criminal convictions. This is incredibly dangerous. Even if you know that the story being told in court is false, you and your legal representatives are now banned from being able to question those falsehoods and cast doubt upon the prosecution story. Potentially, you could be legally bound to go along with lies told in court about your communications -- lies told by people whose sole task is to weave a story that will get you sent to prison or fined thousands of pounds.
Moreover, as section 56(4) makes clear, this applies retroactively, ensuring that it is very difficult for criminal offences committed by GCHQ employees and contractors over the years, using powers that were only made legal a fortnight ago, to be brought to light in a meaningful way. It might even be against the law for a solicitor or barrister to mention in court this Reg story by veteran investigative journalist Duncan Campbell about GCHQ's snooping station in Oman (covered by the section 56(1)(b) wording "interception-related conduct has occurred") – or large volumes of material published on Wikileaks.
The existence of section 56(4) makes a mockery of the "general privacy protections" in Part 1 of the IPA, which includes various criminal offences. Part 1 was introduced as a sop to privacy advocates horrified at the full extent of the act's legalisation of intrusive, disruptive and dangerous hacking powers for the State, including powers to force the co-operation of telcos and similar organisations. There is no point in having punishments for lawbreakers if it is illegal to talk about their law-breaking behaviour.
Like the rest of the Snoopers' Charter, section 56 has become law. Apart from Reg readers and a handful of Twitter slacktivists, nobody cares. The general public neither knows nor cares what abuses and perversions of the law take place in its name. Theresa May and the British government have utterly defeated advocates of privacy and security, completely ignoring those who correctly identify the zero-sum game between freedom and security in favour of those who feel the need to destroy liberty in order to "save" it.
The UK is now a measurably less free country in terms of technological security, permitted speech and ability to resist abuses of power and position by agents of the State, be they shadowy spies, police inspectors and above (ie, shift leaders in your local cop shop) or even food hygiene inspectors – no, really."
Distracted. Addicted. Alone Together. Emotionally dead. Disengaged from the real world. A parody of itself.
Animation by Steve Cutts. Music by Moby & The Void Pacific Choir, These Systems Are Failing.
Adam Turner at The Age writes: "When you look at how social media works, it was inevitable that it would turn into one of the world's most powerful propaganda tools. It's often painted as a force for good, letting people bypass the traditional gatekeepers in order to quickly disseminate information, but there's no guarantee that this information is actually true.
Facebook has usurped the role of the mainstream media in disseminating news, but hasn't taken on the fourth estate's corresponding responsibility for keeping the bastards honest. The mainstream media has no-one to blame but itself, having engaged in a tabloid race to the bottom which devalued truth to the point that blatant liars are considered more honest.
The fragmentation of news is already creating a filter bubble in that most people don't tend to read the newspaper from front to back, or sit through entire news bulletins; they just pick and choose what interests them. The trouble with Facebook is that it also reinforces bias: the more extreme your political views, the less likely you are to see anything with an opposing viewpoint which might help you develop a more well-rounded view of the world."
Brooke Binkowski, the managing editor of the fact-checking site Snopes.com, says, "Honestly, most of the fake news is incredibly easy to debunk because it's such obvious bullshit..."
The problem, Binkowski believes, is that the public has lost faith in the media broadly -- therefore no media outlet is considered credible any longer. The reasons are familiar: as the business of news has grown tougher, many outlets have been stripped of the resources they need for journalists to do their jobs correctly. "When you're on your fifth story of the day and there's no editor because the editor's been fired and there's no fact checker so you have to Google it yourself and you don't have access to any academic journals or anything like that, you will screw stories up," she says."
UPDATE 1/12/2016 -- Most students can't spot fake news
"If you thought fake online news was a problem for impressionable adults, it's even worse for the younger crowd. A Stanford study of 7,804 middle school, high school and college students has found that most of them couldn't identify fake news on their own. Their susceptibility varied with age, but even a large number of the older students fell prey to bogus reports. Over two thirds of middle school kids didn't see why they shouldn't trust a bank executive's post claiming that young adults need financial help, while nearly 40 percent of high schoolers didn't question the link between an unsourced photo and the claims attached to it.
Why did many of the students misjudge the authenticity of a story? They were fixated on the appearance of legitimacy, rather than the quality of information. A large photo or a lot of detail was enough to make a Twitter post seem credible, even if the actual content was incomplete or wrong. There are plenty of adults who respond this way, we'd add, but students are more vulnerable than most.
As the Wall Street Journal explains, part of the solution is simply better education: teach students to verify sources, question motivations and otherwise think critically."
"In 2015 alone, Indians taking selfies died while posing in front of an oncoming train, in a boat that tipped over at a picnic, on a cliff that gave way and crumbled into a 60-foot ravine and on the slippery edge of a scenic river canal. Also, a Japanese tourist trying to take a selfie fell down steps at the Taj Mahal, suffering fatal head injuries.
Researchers analysed thousands of selfies posted on Twitter and found that men were far more likely than women to take dangerous selfies. The study found 13 per cent were taken in what could be dangerous circumstances, and that the majority of victims were under the age of 24.
The most common cause of death worldwide was "falling off a building or mountain," responsible for 29 deaths. The second-most common was being hit by a train, responsible for 11 deaths.
The authors hope the study will serve as a warning of the hazards and inspire new mobile phone technology that can warn photo-takers if they are in a danger zone.
Last year, no-selfie zones were also established in certain areas of the massive Hindu religious gathering called the Kumbh Mela because organisers feared bottlenecks caused by selfie-takers could spark stampedes."
"Scientists say they can deduce the lifestyle of an individual, down to the kind of grooming products they use, food they eat and medications they take, from chemicals found on the surface of their mobile phone. Experts say analysis of someone's phone could be a boon both to healthcare professionals, and the police.
"You can narrow down male versus female; if you then figure out they use sunscreen then you pick out the [people] that tend to be outdoorsy -- so all these little clues can sort of narrow down the search space of candidate people for an investigator," said Pieter Dorrestein, co-author of the research from the University of California, San Diego.
Writing in the Proceedings of the National Academy of Sciences, researchers from the U.S. and Germany describe how they swabbed the mobile phone and right hand of 39 individuals and analyzed the samples using the highly sensitive technique of mass spectrometry.
The results revealed that each person had a distinct "signature" set of chemicals on their hands which distinguished them from each other. What's more, these chemicals partially overlapped with those on their phones, allowing the devices to be distinguished from each other, and matched to their owners.
Analysis of the chemical traces using a reference database allowed the team to match the chemicals to known substances or their relatives to reveal tell-tale clues from each individual's life -- from whether they use hair-loss treatments to whether they are taking antidepressants.
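The matching step described above, comparing the chemical profile swabbed from a phone against profiles from candidate owners, can be illustrated with a toy cosine-similarity comparison. Everything here is invented for illustration: the molecule names, intensity values and owner labels are ours; the actual study matched mass-spectrometry spectra against a reference database.

```python
import math

# Hypothetical chemical intensity profiles (molecule -> relative abundance).
# Real data would come from mass-spectrometry readings, not these made-up values.
phone_sample = {"DEET": 0.8, "sunscreen_agent": 0.6, "caffeine": 0.3}
hand_samples = {
    "owner_A": {"DEET": 0.7, "sunscreen_agent": 0.5, "caffeine": 0.4},
    "owner_B": {"nicotine": 0.9, "caffeine": 0.2},
}

def cosine(a, b):
    """Cosine similarity between two sparse chemical profiles."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

# The candidate whose hand profile best overlaps the phone's profile.
best = max(hand_samples, key=lambda k: cosine(phone_sample, hand_samples[k]))
print(best)  # prints "owner_A"
```

The point of the sketch is the one the researchers make: the profiles need only partially overlap to narrow the candidate pool, which is exactly what makes the technique attractive to investigators and unsettling for privacy.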
"Roughly two-thirds of the world's internet users live under regimes of government censorship, according to a report from Freedom House, a pro-democracy think tank. The report adds that internet freedom declined worldwide for a sixth consecutive year in 2016, as governments increasingly cracked down on social media services and messaging apps. From NPR:
"The Stack reports on Google's "new research into upscaling low-resolution images using machine learning to 'fill in' the missing details," arguing this is "a questionable stance...continuing to propagate the idea that images contain some kind of abstract 'DNA', and that there might be some reliable photographic equivalent of polymerase chain reaction which could find deeper truth in low-res images than either the money spent on the equipment or the age of the equipment will allow."
"The article points out that "faith in the fidelity of these 'enhanced' images routinely convicts defendants."
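The objection is information-theoretic: once detail has been averaged away in a low-resolution capture, any "enhanced" detail is a plausible guess, not a recovery. A one-dimensional toy sketch of the point (pure Python, illustrative only):

```python
def downsample(signal, factor):
    """Average consecutive blocks: this is where fine detail is destroyed."""
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal), factor)]

def upscale(low_res, factor):
    """Naive 'enhancement' by repeating samples. A learned model would
    produce a prettier guess, but still a guess: the detail is gone."""
    return [v for v in low_res for _ in range(factor)]

# Two different originals that collapse to the same low-res signal.
original_a = [0, 2, 0, 2]
original_b = [1, 1, 1, 1]
low_a = downsample(original_a, 2)   # [1.0, 1.0]
low_b = downsample(original_b, 2)   # [1.0, 1.0]
print(low_a == low_b)               # prints True: originals indistinguishable
print(upscale(low_a, 2))            # [1.0, 1.0, 1.0, 1.0], matching neither
```

Because two distinct scenes can produce identical low-resolution data, no upscaler can tell you which one was photographed; it can only pick a statistically likely reconstruction, which is precisely why treating such output as forensic evidence is dangerous.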
"In a small recent study, researchers from New York University found that those who considered themselves in higher classes looked at people who walked past them less than those who said they were in a lower class did. The results were published in Psychological Science, the journal of the Association for Psychological Science.
According to Pia Dietze, a social psychology doctoral student at NYU and a lead author of the study, previous research has shown that people from different social classes vary in how they tend to behave towards other people. So, she wanted to shed some light on where such behaviours could have originated. The research was divided into three separate studies.
For the first, Dietze and NYU psychology lab director Professor Eric Knowles asked 61 volunteers to walk along the street for one block while wearing Google Glass to record everything they looked at. These people were also asked to identify themselves as from a particular social class: either poor, working class, middle class, upper middle class, or upper class. An independent group watched the recordings and made note of the various people and things each Glass wearer looked at and for how long. The results showed that class identification, or what class each person said they belonged to, had an impact on how long they looked at the people who walked past them.
During Study 2, participants viewed street scenes while the team tracked their eye movements. Again, higher class was associated with reduced attention to people in the images.
For the third and final study, the results suggested that this difference could stem from the way the brain works, rather than being a deliberate decision. Close to 400 participants took part in an online test where they had to look at alternating pairs of images, each containing a different face and five objects. Higher-class participants took longer than lower-class participants to notice when the face differed between the alternating images, whereas the time it took to detect a change in the objects did not differ between them. The team concluded that faces seem to be more effective at grabbing the attention of individuals from relatively lower-class backgrounds."
"If YOU think you are not being analysed while browsing websites, it could be time to reconsider. A creepy new website called clickclickclick has been developed to demonstrate how our online behaviour is continuously measured.
The site, which observes and comments on your behaviour in detail but is not harmful to your computer, contains nothing but a white screen and a large green button. From the minute you visit the website, it begins detailing your actions on the screen in real time.
The site also encourages users to turn on their audio, which offers the even more disturbing experience of an English voice commenting on your behaviour.
Designer Roel Wouters said the experiment was aimed to remind people about the serious themes of big data and privacy. “It seemed fun to thematise this in a simple and lighthearted way,” he said.
Fellow designer Luna Maurer said her own experiences with the internet had helped shape the project. “I am actually quite internet aware, but I am still very often surprised that after I watched something on a website, a second later I get instantly personalised ads,” she said."
"When recording voiceovers, dialog, and narration, people would often like to change or insert a word or a few words due to either a mistake they made or simply because they would like to change part of the narrative," reads an official Adobe statement. "We have developed a technology called Project VoCo in which you can simply type in the word or words that you would like to change or insert into the voiceover. The algorithm does the rest and makes it sound like the original speaker said those words."
Imagine this technology coupled with a video manipulation component, which also already exists as a working proof of concept:
One really could produce convincing, entirely fabricated audio/video of a person's likeness...
Groups of citizens wielding cameras take to the streets of New York to document the systemic police brutality and racism facing the public. The cops hate it and so they push back hard.
This is how police accountability plays out in the real world. Take heed, Australia:
Of course, always in the name of "safety."