From Fake News to DeepFakes


Stories created to deliberately misinform or deceive readers are common. The traditional name for such stories was ‘lies’. In a post-truth world, however, they go by a new name: fake news. Fake news has been around for ages, but I bet it was Donald Trump who popularized the phrase and almost gave it a new meaning. We easily detect fake news when it disagrees with what we believe, but miss it when it reinforces our own beliefs and stereotypes.

Fake news has plagued elections and swayed the masses all over the world. People have died at the hands (or words) of fake news, elections have been won and lost, and some people have built businesses powered largely by it. With social media, its impact has been felt far and wide, as algorithms and adverts deliver information wherever it is likely to attract the most clicks and likes.

But in what makes fake news look like child's play, we now have deepfakes: technology that allows one to make a video of anyone saying anything. It is the Photoshop of video. The software learns how a subject makes specific sounds, the facial expressions that accompany them, and a 3D model of the subject's face. Armed with that, it can generate footage of the subject saying whatever is fed into it.
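For the curious, the face-swapping part of that pipeline is often an autoencoder trick: one shared encoder learns to compress any face, while a separate decoder is trained per person. Below is a minimal sketch of that idea, assuming PyTorch; the layer sizes and variable names are illustrative, not taken from any real deepfake tool, and real systems add face alignment, audio synthesis, and blending on top.

```python
# A minimal sketch of the shared-encoder / two-decoder autoencoder
# behind classic face-swap deepfakes. Illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()     # shared across both people
decoder_a = Decoder()   # would be trained only on faces of person A
decoder_b = Decoder()   # would be trained only on faces of person B

# The swap: encode a frame of person A, then reconstruct it with
# person B's decoder, producing B's face with A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)   # stand-in for a cropped, aligned face frame
swapped = decoder_b(encoder(frame_of_a))
```

The swap happens on the last line: person A's expression and pose go in, but person B's decoder paints the face, which is why the output looks like B performing whatever A did.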

Various people have already been victims of deepfakes. Celebrities are the main targets, with several having been featured in fake sex tapes. There are videos of a seemingly drunk Nancy Pelosi, of Mark Zuckerberg saying that the more people use Facebook, the more Facebook owns them, and even of Barack Obama saying nasty things. It is now easy to produce a video of a politician inciting violence, a company executive claiming their product is defective, or a person from a minority group supporting terrorism. Unfortunately, most people cannot identify fake news, and they will fare even worse at identifying fake videos.

The deeper problem is that this technology makes it harder to believe anything seen on video. If I can make a politician say anything I want on video, then no video can be trusted as authentic. Reality is no longer real. People may end up believing nothing at all, or believing only the ‘truth’ that aligns with their preconceived ideas. It will become harder to pass any message across, as none will be believable.

What are the solutions?

People are already working to combat deepfakes. Tech startups and research labs are building tools to detect fake videos, which will prove useful whenever a suspicious clip needs to be authenticated.

The mainstream media will also need a way of authenticating anonymous clips provided to them. This could involve tracking a file's sharing and edit history from its source, something blockchain technology could help with. The weakest link, though, is the human urge to click and share any juicy story. That urge is what keeps social media alive and fuels fake news and rumors. We will need to be more skeptical of that unbelievable video we are so tempted to share, and instead find out the truth behind it.
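To make the edit-history idea concrete, here is a minimal sketch, using only Python's standard library, of a tamper-evident provenance log where each entry hashes the video file together with the previous entry. This illustrates the hash-chaining that underlies blockchains; it is not any real provenance system, and the function names are hypothetical.

```python
# A minimal sketch of a hash-chained provenance log for a video file.
# Each entry commits to the file's contents and to the previous entry,
# so any alteration anywhere in the chain is detectable.
import hashlib
import json
import time

def file_hash(path):
    """SHA-256 of the video file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def append_entry(chain, path, action):
    """Record a step in the clip's history (e.g. 'recorded', 'edited', 'shared')."""
    prev = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {
        "action": action,
        "file_sha256": file_hash(path),
        "prev_hash": prev,
        "timestamp": time.time(),
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

def verify(chain):
    """Recompute every link; any tampered entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if expected != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

If someone re-edits the clip or rewrites its history without appending a fresh entry, verify() fails, which would flag the video for closer scrutiny before publication.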

What do you think?
