Monday, November 12, 2018

Tuesday / Wednesday / Thursday, November 13 / 14 / 15: Fake news - can you tell the difference?



Something light!
Dutch Silly Walk: silly walk in action (link)



London's international tattoo festival – in pictures
Learning Targets: I can analyze and evaluate the effectiveness of the structure an author uses in his or her exposition or argument, including whether the structure makes points clear, convincing, and engaging.

I can integrate and evaluate multiple sources of information presented in different media or formats (e.g., visually, quantitatively) as well as in words in order to address a question or solve a problem.

Class assignment: Due at the close of class on Thursday, or by midnight if you receive extended time.  Please send along as one document.

Part 1: There are three articles and a short video below relating to fake news. Please respond to the following questions as you read / watch them. (Class Participation Grade)

Part 2: Copy the following list of headlines and identify whether each is real or fake. Send it along with your Part 1 question responses.
            
1. Rosa Parks' Daughter Praises Trump's Response to Charlottesville
2. Someone just gave Donald Trump a full-moon salute
3. Delaware Cemetery Begins Exhuming Bodies of Confederate Soldiers
4. Ted Cruz pokes fun at being called The Zodiac Killer
5. Durex launching new flavour condom - eggplant
6. FBI seizes over 300 penises at morgue employee's home
7. Ivanka Trump claims she had a "punk phase"
8. Kim and Kanye's car burglarized one year after Paris
9. Female serial killer is the daughter of United States senator
10. Video poker machines taking over Las Vegas
11. Supreme Court nominee Neil Gorsuch founded the "Fascism Forever Club" in high school
12. Barack Obama orders Harvard to reverse his daughter Malia's suspension
13. Hasbro has launched a limited-edition Disney Classic Monopoly
14. Usher's herpes victim tries to drag in a Jane Doe
15. Florida governor Rick Scott critically injured during Hurricane Irma cleanup
                             

Part 1, Video: Washington Post video: FAKE NEWS (link)
1. How have Google and Twitter attempted to combat fake news?
2. List three ways one may check if news is fake.
3. How does a Chrome extension work?

Part 1, Article 1: NPR article (below)
1. How can one be media literate?
2. How can information be objectively verified?
3. What type of language in the "about us" section might make you skeptical?
4. How can you verify the quality of quotes?
5. How can you check the authenticity of an image?
6. How are The Onion and Clickhole NOT fake news?

                     

Fake Or Real? How To Self-Check The News And Get The Facts 

Wynne Davis

Fake news stories can have real-life consequences. On Sunday, police said a man with a rifle who claimed to be "self-investigating" a baseless online conspiracy theory entered a Washington, D.C., pizzeria and fired the weapon inside the restaurant.

So, yes, fake news is a big problem.
These stories have gotten a lot of attention, with headlines claiming Pope Francis endorsed Donald Trump in November's election and sites like American News sharing misleading stories or taking quotes out of context. And when sites like DC Gazette share stories about people who allegedly investigated the Clinton family being found dead, the stories go viral and some people believe them. Again, these stories are not true in any way.
Stopping the proliferation of fake news isn't just the responsibility of the platforms used to spread it. Those who consume news also need to find ways of determining if what they're reading is true. We offer several tips below.
The idea is that people should have a fundamental sense of media literacy. And based on a study recently released by Stanford University researchers, many people don't.
Sam Wineburg, a professor of education and history at Stanford and the lead author of the study, said a solution is for all readers to read like fact checkers. But how do fact checkers do their job?
Alexios Mantzarlis, director of the International Fact-Checking Network at Poynter, says fact checkers have a process for each claim they deal with.
"You'll isolate a claim that has something that can be objectively verified, you will seek the best primary sources in that topic. Find whether they match or refute or prove the claim being made, and then present with all limitations the data and what the data says about the claim being made," Mantzarlis says.
That's the framework for professionals, but there are ways for everyone to do a bit of fact checking themselves.
Melissa Zimdars is an assistant professor of communication and media at Merrimack College in North Andover, Mass. When she saw her students referencing questionable sources, she created and shared a document with them of how to think about sources, as well as a list of misleading, satirical and fake sites.
Both Mantzarlis and Zimdars agreed there are a few best practices people can use when reading articles online.
Pay attention to the domain and URL
Established news organizations usually own their domains and they have a standard look that you are probably familiar with. Sites with endings like .com.co should raise your eyebrows and tip you off that you need to dig around more to see if they can be trusted. This is true even when the site looks professional and has semi-recognizable logos. For example, abcnews.com is a legitimate news source, but abcnews.com.co is not, despite its similar appearance.
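For the technically curious, here is a minimal Python sketch of the domain check described above. The list of trusted outlets is purely illustrative; the point is that a lookalike such as abcnews.com.co is a different registered domain from abcnews.com, so an exact match rejects it.

```python
# A minimal sketch of the domain check described above.
# TRUSTED_HOSTS is a hypothetical example list, not a real product feature.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"abcnews.com", "usatoday.com", "npr.org"}

def is_trusted(url: str) -> bool:
    """Return True only when the URL's host exactly matches a trusted domain."""
    host = urlparse(url).netloc.lower().split(":")[0]  # drop any port
    if host.startswith("www."):
        host = host[4:]
    # Exact matching is the point: "abcnews.com.co" is a different
    # registered domain from "abcnews.com", so it is rejected.
    return host in TRUSTED_HOSTS

print(is_trusted("https://abcnews.com/politics"))      # True
print(is_trusted("http://abcnews.com.co/fake-story"))  # False
```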
Read the "About Us" section
Most sites will have a lot of information about the news outlet, the company that runs it, members of leadership, and the mission and ethics statement behind an organization. The language used here is straightforward. If it's melodramatic and seems overblown, you should be skeptical. Also, you should be able to find out more information about the organization's leaders in places other than that site.
Look at the quotes in a story
Or rather, look at the lack of quotes. Most publications have multiple sources in each story who are professionals and have expertise in the fields they talk about. If it's a serious or controversial issue, there are more likely to be quotes — and lots of them. Look for professors or other academics who can speak to the research they've done. And if they are talking about research, look up those studies.
Look at who said them
Then, see who said the quotes, and what they said. Are they a reputable source with a title that you can verify through a quick Google search? Say you're looking at a story and it says President Obama said he wanted to take everyone's guns away. And then there's a quote. Obama is an official who has almost everything he says recorded and archived. There are transcripts for pretty much any address or speech he has given. Google those quotes. See what the speech was about, who he was addressing and when it happened. Even if he did an exclusive interview with a publication, that same quote will be referenced in other stories, saying he said it while talking to the original publication.
Check the comments
A lot of these fake and misleading stories are shared on social media platforms. Headlines are meant to get the reader's attention, but they're also supposed to accurately reflect what the story is about. Lately, that hasn't been the case. Headlines often will be written in exaggerated language with the intention of being misleading and then attached to stories that are about a completely different topic or just not true. These stories usually generate a lot of comments on Facebook or Twitter. If a lot of these comments call out the article for being fake or misleading, it probably is.
Reverse image search
A picture should be accurate in illustrating what the story is about. This often doesn't happen. If people who write these fake news stories don't even leave their homes or interview anyone for the stories, it's unlikely they take their own pictures. Do a little detective work and reverse search for the image on Google. You can do this by right-clicking on the image and choosing to search Google for it. If the image is appearing on a lot of stories about many different topics, there's a good chance it's not actually an image of what it says it was on the first story.
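A real reverse image search relies on Google's index, but the underlying idea, recognizing the same picture even after it has been resized or recompressed, can be sketched with perceptual hashing. This assumes the third-party Pillow and imagehash packages, and the file names are placeholders.

```python
# A rough sketch of the image-matching idea behind reverse image search.
# Perceptual hashes stay similar when a picture is resized or recompressed.
# Requires the third-party Pillow and imagehash packages.
from PIL import Image
import imagehash

h1 = imagehash.phash(Image.open("story_photo.jpg"))     # placeholder file
h2 = imagehash.phash(Image.open("earlier_photo.jpg"))   # placeholder file

# Subtracting two hashes gives a Hamming distance; a small distance
# suggests the same underlying picture reused in a new context.
if h1 - h2 <= 8:
    print("Likely the same image recycled for a different story.")
else:
    print("Probably different images.")
```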
These tips are just a start at determining what type of news an article is. Zimdars outlined these and others in a guide for her students.
If you do these steps, you're helping yourself and you're helping others by not increasing the circulation of these stories.
And you won't be the only one trying to stop the spread of this false content. The company leaders behind the platforms these stories are shared on are trying to figure out how to fix the issue from their side, but they are also trying to make sure not to limit anyone's right to freedom of speech. It's a tricky position to be in, but they've said they'll try. In the end, it really does depend on taking responsibility and being an engaged consumer of news.
Here's one last thing. Satirical publications exist and serve a purpose, but are clearly labeled as exaggerated and humorous by the writers and owners. Some of the more well-known ones like The Onion and ClickHole use satire to talk about current events. If people don't understand that, they might share these articles after reading them in the literal sense.
If this happens or if you see your friends sharing blatantly fake news, be a friend and kindly tell them it's not real. Don't shy away from these conversations even if they might be uncomfortable. As said, everyone has to help fix the fake news problem.
Part 1, Article 2 (article below)
1. What is typosquatting?
2. How do cyber criminals use typosquatting?
3. Who is Paul Horner?
4. How do fraudsters use counterfeit sites?
5. How effective are security software programs?
  

Hackers use typosquatting to dupe the unwary with fake news, sites


Elizabeth Weise, USA TODAY
SAN FRANCISCO – The proliferation of fake news has shone a light on another murky corner of the web: the practice of typosquatting.
These are the URLs that pass for common ones — say Amazoon.com instead of Amazon.com — if the user isn't paying close attention to the Web address.
Always eager to capitalize on human inattention, cyber criminals have embraced this method of registering a commonly misspelled Web address to use as a base for the distribution of malware or to steal information from unsuspecting users.
“They create a site that looks essentially like the real one, at least on the surface. It’s fairly straightforward to do and then you’re simply relying on human nature to not notice,” said Steve Grobman, chief technology officer at Intel Security.
Sometimes called URL hijacking, the ploy has hit multiple media sites, including usatoday.com (usatodaycom.com) and abcnews.com (abcnews.com.co).
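As a rough illustration of how one might flag typosquats automatically, the sketch below compares a host name against a short, hypothetical list of brand domains using the standard library's difflib; real brand-protection tools are far more sophisticated.

```python
# An illustrative sketch of flagging typosquats with the standard library.
# The brand list is hypothetical.
import difflib

BRANDS = ["amazon.com", "usatoday.com", "abcnews.com"]

def possible_typosquats(host: str, cutoff: float = 0.85) -> list[str]:
    """Return brand domains that 'host' closely resembles without matching."""
    near = difflib.get_close_matches(host, BRANDS, n=3, cutoff=cutoff)
    return [brand for brand in near if brand != host]

print(possible_typosquats("amazoon.com"))      # ['amazon.com']
print(possible_typosquats("usatodaycom.com"))  # ['usatoday.com']
print(possible_typosquats("amazon.com"))       # [] (an exact match is fine)
```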
The technique can make made-up stories seem more legitimate and give them a brief but powerful ride in legitimate news sites until they're debunked. Such articles played a role in this year's presidential election, though how much they influenced the outcome is unknown.
On Oct. 17, a fake story claimed to report on someone being paid $3,500 to protest at rallies against then-presidential candidate Donald Trump. The story was credited to the Associated Press, though it was not from that legitimate news outlet, and appeared on the fake news site abcnews.com.co.
The story was in fact created by Paul Horner, who earns his living writing fake stories and who told the Washington Post he made $10,000 each month selling ads on his fake news sites.
In May, the same faked ABC site published a “story” that Michael Jordan was threatening to move his NBA team from Charlotte, N.C. unless the state repealed a recently-passed law that kept transgender people from using the bathroom of their current, as opposed to original, gender.
The fake story was picked up by multiple outlets before it was finally unmasked as a hoax.
Two years ago, a Change.org petition was created in response to a made-up article from the satirical National Report, which was later picked up by a faked nbc.com.co site. The article claimed that Arizona had passed a “self-rape” law under which a 15-year-old boy was sentenced to prison after his mother found him masturbating.
These websites are created to make money in two different ways, said Akino Chikada, senior brand protection manager with MarkMonitor, a San Francisco-based company.
Fraudsters use counterfeit sites as phishing farms, trying to entice those who visit them to fill out personal information that can be used to steal credentials and other potentially saleable information.
“If you accidentally mistype a particular brand name, it could lead you to a survey. You think it’s for a brand you love, but it’s actually a thief trying to steal information about you,” said Chikada.
Companies can’t always protect themselves against this type of fraud because they can’t register every conceivable variant on their names. “It’s too expensive and inefficient. Though they do tend to register the most common typos. Then they just have to monitor,” said Chikada.
Another common ploy is for criminals to place banners or ads that link to slightly off URLs.
“You go to your site and at the bottom, you see what looks like an Amazon ad that says there's a Macbook Pro for $299. But when you click on it, it doesn’t really go to Amazon, maybe it goes to amazoon.com. But how carefully are you going to study the URL you’re clicking?” Grobman said.
Fake news sites especially take advantage of the urgency they try to create in their readers.
“They’re using the sensationalized aspect of it to make you click much quicker than if you were going through the process rationally," he said. A sensational headline, especially if it reinforces or denounces a strongly-held belief, might cause a reader to be less cautious.
Many security software programs are fairly effective at blocking such typo-ridden URLs if they lead to a known malware-infected site, but some can slip through, he said.
But as with most things online, the key is awareness and taking an extra moment to stay safe. That includes glancing at a URL before accepting it as valid, or perhaps opening a new browser window and actually typing in a desired destination, rather than simply clicking on a link on a site that seems dubious.

Part 1, Article 3 (article below). Respond to the following based upon your reading:
1. Who created the video and what was its purpose?
2. What exactly is a "deep fake"?
3. What is the technology behind a "deep fake"?
4. What is a major concern about the use of a "deep fake"?
5. Explain the "liar's dividend".

‘When nothing is true then the dishonest person will thrive by saying what’s true is fake.’


Technology can make it look as if anyone has said or done anything. Is it the next wave of (mis)information warfare?

In May, a video appeared on the internet of Donald Trump offering advice to the people of Belgium on the issue of climate change. “As you know, I had the balls to withdraw from the Paris climate agreement,” he said, looking directly into the camera, “and so should you.”
The video was created by a Belgian political party, Socialistische Partij Anders, or sp.a, and posted on sp.a’s Twitter and Facebook. It provoked hundreds of comments, many expressing outrage that the American president would dare weigh in on Belgium’s climate policy.
One woman wrote: “Humpy Trump needs to look at his own country with his deranged child killers who just end up with the heaviest weapons in schools.”
Another added: “Trump shouldn’t blow so high from the tower because the Americans are themselves as dumb.”
But this anger was misdirected. The speech, it was later revealed, was nothing more than a hi-tech forgery.
Sp.a had commissioned a production studio to use machine learning to produce what is known as a “deep fake” – a computer-generated replication of a person, in this case Trump, saying or doing things they have never said or done.
Sp.a’s intention was to use the fake video to grab people’s attention, then redirect them to an online petition calling on the Belgian government to take more urgent climate action. The video’s creators later said they assumed that the poor quality of the fake would be enough to alert their followers to its inauthenticity. “It is clear from the lip movements that this is not a genuine speech by Trump,” a spokesperson for sp.a told Politico.
As it became clear that their practical joke had gone awry, sp.a’s social media team went into damage control. “Hi Theo, this is a playful video. Trump didn’t really make these statements.” “Hey, Dirk, this video is supposed to be a joke. Trump didn’t really say this.”
The party’s communications team had clearly underestimated the power of their forgery, or perhaps overestimated the judiciousness of their audience. Either way, this small, left-leaning political party had, perhaps unwittingly, provided the first example of the use of deep fakes in an explicitly political context.
It was a small-scale demonstration of how this technology might be used to threaten our already vulnerable information ecosystem – and perhaps undermine the possibility of a reliable, shared reality.






The fake Trump video was created using a machine learning technique called a “generative adversarial network”, or a GAN. A graduate student, Ian Goodfellow, invented GANs in 2014 as a way to algorithmically generate new types of data out of existing data sets. For instance, a GAN can look at thousands of photos of Barack Obama, and then produce a new photo that approximates those photos without being an exact copy of any one of them, as if it has come up with an entirely new portrait of the former president not yet taken. GANs might also be used to generate new audio from existing audio, or new text from existing text – it is a multi-use technology.
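To make the adversarial idea concrete, here is a toy PyTorch sketch in the same spirit: instead of photos of a president, the generator learns to mimic a simple bell-curve distribution while a discriminator tries to tell real samples from generated ones. Every size and setting here is illustrative, not anything used by actual deep fake software.

```python
# Toy sketch of the generator-vs-discriminator setup described above.
# The generator learns to mimic samples from N(3.0, 0.5); all settings
# are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: bell curve at 3.0
    fake = generator(torch.randn(64, 8))    # generated imitations

    # Discriminator: label real samples 1, generated samples 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator call its fakes real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

samples = generator(torch.randn(1000, 8))
# If training went well, these approach 3.0 and 0.5, the "real" statistics.
print(f"fake mean {samples.mean().item():.2f}, std {samples.std().item():.2f}")
```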

The use of this machine learning technique was mostly limited to the AI research community until late 2017, when a Reddit user who went by the moniker “Deepfakes” – a portmanteau of “deep learning” and “fake” – started posting digitally altered pornographic videos. He was building GANs using TensorFlow, Google’s free open source machine learning software, to superimpose celebrities’ faces on the bodies of women in pornographic movies.
A number of media outlets reported on the porn videos, which became known as “deep fakes”. In response, Reddit banned them for violating the site’s content policy against involuntary pornography. By this stage, however, the creator of the videos had released FakeApp, an easy-to-use platform for making forged media. The free software effectively democratized the power of GANs. Suddenly, anyone with access to the internet and pictures of a person’s face could generate their own deep fake.
When Danielle Citron, a professor of law at the University of Maryland, first became aware of the fake porn movies, she was initially struck by how viscerally they violated these women’s right to privacy. But once she started thinking about deep fakes, she realized that if they spread beyond the trolls on Reddit they could be even more dangerous. They could be weaponized in ways that weaken the fabric of democratic society itself.
“I started thinking about my city, Baltimore,” she told me. “In 2015, the place was a tinderbox after the killing of Freddie Gray. So, I started to imagine what would’ve happened if a deep fake emerged of the chief of police saying something deeply racist at that moment. The place would’ve exploded.”
Citron, along with her colleague Bobby Chesney, began working on a report outlining the extent of the potential danger. As well as considering the threat to privacy and national security, both scholars became increasingly concerned that the proliferation of deep fakes could catastrophically erode trust between different factions of society in an already polarized political climate.   
In particular, they could foresee deep fakes being exploited by purveyors of “fake news”. Anyone with access to this technology – from state-sanctioned propagandists to trolls – would be able to skew information, manipulate beliefs, and in so doing, push ideologically opposed online communities deeper into their own subjective realities.
“The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases,” the report reads. “Deep fakes will exacerbate this problem significantly.”
Citron and Chesney are not alone in these fears. In April, the film director Jordan Peele and BuzzFeed released a deep fake of Barack Obama calling Trump a “total and complete dipshit” to raise awareness about how AI-generated synthetic media might be used to distort and manipulate reality. In September, three members of Congress sent a letter to the director of national intelligence, raising the alarm about how deep fakes could be harnessed by “disinformation campaigns in our elections”.
The specter of politically motivated deep fakes disrupting elections is at the top of Citron’s concerns. “What keeps me awake at night is a hypothetical scenario where, before the vote in Texas, someone releases a deep fake of Beto O’Rourke having sex with a prostitute, or something,” Citron told me. “Now, I know that this would be easily refutable, but if this drops the night before, you can’t debunk it before serious damage has spread.”
She added: “I’m starting to see how a well-timed deep fake could very well disrupt the democratic process.”






While these disturbing hypotheticals might be easy to conjure, Tim Hwang, director of the Harvard-MIT Ethics and Governance of Artificial Intelligence Initiative, is not willing to bet on deep fakes having a high impact on elections in the near future. Hwang has been studying the spread of misinformation on online networks for a number of years, and, with the exception of the small-stakes Belgian incident, he is yet to see any examples of truly corrosive incidents of deep fakes “in the wild”.
Hwang believes this is partly because using machine learning to generate convincing fake videos still requires a degree of expertise and lots of data. "If you are a propagandist, you want to spread your work as far as possible with the least amount of effort," he said. "Right now, a crude Photoshop job could be just as effective as something created with machine learning."
At the same time, Hwang acknowledges that as deep fakes become more realistic and easier to produce in the coming years, they could usher in an era of forgery qualitatively different from what we have seen before.
“We have long been able to doctor images and movies,” he said. “But in the past, if you wanted to make a video of the president saying something he didn’t say, you needed a team of experts. Machine learning will not only automate this process, it will also probably make better forgeries.”
Couple this with the fact that access to this technology will spread over the internet, and suddenly you have, as Hwang put it, “a perfect storm of misinformation”.
Nonetheless, research into machine learning-powered synthetic media forges ahead.
In August, an international team of researchers affiliated with Germany’s Max Planck Institute for Informatics unveiled a technique for producing what they called “deep video portraits”, a sort of facial ventriloquism, where one person can take control of another person’s face and make it say or do things at will. A video accompanying the research paper depicted a researcher opening his mouth and a corresponding moving image of Barack Obama opening his mouth; the researcher then moves his head to the side, and so does synthetic Obama.
Christian Theobalt, a researcher involved in the study, told me via email that he imagines deep video portraits will be used most effectively for accurate dubbing in foreign films, advanced face editing techniques for post-production in film, and special effects. In a press release that accompanied the original paper, the researchers acknowledged potential misuse of their technology, but emphasized how their approach – capable of synthesizing faces that look “nearly indistinguishable from ground truth” – could make “a real difference to the visual entertainment industry”.
Hany Farid, professor of computer science at the University of California, Berkeley, believes that although the machine learning-powered breakthroughs in computer graphics are impressive, researchers should be more cognizant of the broader social and political ramifications of what they’re creating. “The special effects community will love these new technologies,” Farid told me. “But outside of this world, outside of Hollywood, it is not clear to me that the positive implications outweigh the negative.”
Farid, who has spent the past 20 years developing forensic technology to identify digital forgeries, is currently working on new detection methods to counteract the spread of deep fakes. One of Farid’s recent breakthroughs has been focusing on subtle changes of color that occur in the face as blood is pumped in and out. The signal is so minute that the machine learning software is unable to pick it up – at least for now.
As the threat of deep fakes intensifies, so do efforts to produce new detection methods. In June, researchers from the University at Albany (SUNY) published a paper outlining how fake videos could be identified by a lack of blinking in synthetic subjects. Facebook has also committed to developing machine learning models to detect deep fakes.
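The SUNY researchers' actual method runs neural networks over facial landmarks, but the core intuition can be sketched simply. Assuming a hypothetical upstream step that scores how open the eyes are in each video frame, one can count blinks and flag a clip whose subject blinks far less than a person would:

```python
# Simplified sketch of the blink-rate idea. A hypothetical upstream step
# is assumed to have produced a per-frame "eye openness" score between
# 0 (shut) and 1 (wide open); this code just counts blinks.
def count_blinks(openness, closed_below=0.2):
    """Count transitions into the 'eyes closed' state."""
    blinks, closed = 0, False
    for score in openness:
        if score < closed_below and not closed:
            blinks += 1
            closed = True
        elif score >= closed_below:
            closed = False
    return blinks

def looks_synthetic(openness, fps=30, min_blinks_per_minute=5):
    minutes = len(openness) / fps / 60
    rate = count_blinks(openness) / max(minutes, 1e-9)
    # People blink roughly 15-20 times a minute; early deep fakes blinked
    # far less, since training photos almost always show open eyes.
    return rate < min_blinks_per_minute

# Ten seconds of "video" in which the eyes never close: suspicious.
print(looks_synthetic([0.9] * 300))  # True
```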
But Farid is wary. Relying on forensic detection alone to combat deep fakes is becoming less viable, he believes, due to the rate at which machine learning techniques can circumvent them. “It used to be that we’d have a couple of years between coming up with a detection technique and the forgers working around it. Now it only takes two to three months.”
This, he explains, is due to the flexibility of machine learning. “All the programmer has to do is update the algorithm to look for, say, changes of color in the face that correspond with the heartbeat, and then suddenly, the fakes incorporate this once imperceptible sign.” (For this reason, Farid chose not to share some of his more recent forensic breakthroughs with me. “Once I spill on the research, all it takes is one asshole to add it to their system.”)
Although Farid is locked in this technical cat-and-mouse game with deep fake creators, he is aware that the solution does not lie in new technology alone. “The problem isn’t just that deep fake technology is getting better,” he said. “It is that the social processes by which we collectively come to know things and hold them to be true or untrue are under threat.”
Indeed, as the fake video of Trump that spread through social networks in Belgium earlier this year demonstrated, deep fakes don’t need to be undetectable or even convincing to be believed and do damage. It is possible that the greatest threat posed by deep fakes lies not in the fake content itself, but in the mere possibility of their existence.
This is a phenomenon that scholar Aviv Ovadya has called “reality apathy”, whereby constant contact with misinformation compels people to stop trusting what they see and hear. In other words, the greatest threat isn’t that people will be deceived, but that they will come to regard everything as deception.
Recent polls indicate that trust in major institutions and the media is dropping. The proliferation of deep fakes, Ovadya says, is likely to exacerbate this trend.
According to Danielle Citron, we are already beginning to see the social ramifications of this epistemic decay.
“Ultimately, deep fakes are simply amplifying what I call the liar’s dividend,” she said. “When nothing is true then the dishonest person will thrive by saying what’s true is fake.”



