
Tuesday, March 13, 2018

We have a problem, a serious one

By sumily

Misinformation on the Internet, driven by fake news, spreads through a familiar process no matter how polished the production is: once it enters our distribution channels on social networks, the lie is shared 50,000 times, while the factual correction published an hour later is shared only 200 times. False information gets an algorithmic boost on services such as Facebook and YouTube, while experts shout into the void.

There is no reason to believe that video montages will be any different. People will share them when they are ideologically convenient and ignore them when they are not. The unsuspecting readers who fall for The Onion's satirical stories will also be fooled by deepfakes, and the scrupulous people who care about the truth will find ways to detect and debunk them.

Until very recently, realistic computer-generated video was a format available only to big-budget Hollywood productions or to cutting-edge researchers. Today, social networking applications like Snapchat already ship primitive face-transformation technology.

An amateur community has started experimenting with more powerful tools, including FakeApp. The application, created by an anonymous developer using open-source software written by Google, makes it free and reasonably easy to create realistic face swaps that leave few traces of manipulation. Since a version of the application was released on Reddit, it has been downloaded more than 120,000 times.

Video montages are one of the newest forms of digital manipulation, and one of the most dangerous: they can be used to defame politicians, frame people for crimes, or create revenge pornography. US congressmen have begun to worry about how these videos could be used for political sabotage and propaganda.

Recently, FakeApp triggered panic after the technology site Motherboard reported that people were using the application to create fake pornographic videos of celebrities. Pornhub, Twitter and other sites soon banned the videos, and Reddit closed several deepfake groups, including one with almost 100,000 members.

Before they were closed, the Reddit groups held a mix of users who exchanged video-editing tips and showed off their latest fakes. Some users defended the videos and blamed the media for exaggerating their potential for harm. Others uploaded their videos to alternative platforms, anticipating that Reddit would close the groups under its rules against non-consensual pornography. A few voiced moral doubts about spreading this technology.

Afterwards, they kept making more. Producing a deepfake is easy; the key is choosing the right source data. Short videos are easier to manipulate than long ones, scenes filmed from a single angle give better results than those recorded from multiple angles, and resemblance matters: the more the two people look alike, the better the result. Our fears grow every time we go online, because it becomes ever easier for someone to steal our identity.
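Face-swap tools of this kind are widely reported to rely on an autoencoder with one shared encoder and a separate decoder per identity, which is why matching source data matters so much: the encoder can only learn features common to both faces. The following toy sketch illustrates only that idea; it is not FakeApp's actual code. All names and data are hypothetical, random vectors stand in for aligned face crops, and simple linear maps stand in for the convolutional networks a real tool would use.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, latent, n = 64, 8, 200
faces_a = rng.normal(size=(n, dim))  # stand-ins for "person A" face crops
faces_b = rng.normal(size=(n, dim))  # stand-ins for "person B" face crops

E = rng.normal(scale=0.1, size=(dim, latent))    # shared encoder
D_a = rng.normal(scale=0.1, size=(latent, dim))  # decoder for person A
D_b = rng.normal(scale=0.1, size=(latent, dim))  # decoder for person B

def step(X, E, D):
    """One reconstruction pass: returns loss and gradients w.r.t. E and D."""
    Z = X @ E                        # encode
    err = Z @ D - X                  # reconstruction error
    g = 2.0 * err / len(X)           # d(loss)/d(reconstruction)
    return (err ** 2).sum() / len(X), X.T @ (g @ D.T), Z.T @ g

loss_start = step(faces_a, E, D_a)[0] + step(faces_b, E, D_b)[0]

lr = 1e-3
for _ in range(500):
    la, gEa, gDa = step(faces_a, E, D_a)
    lb, gEb, gDb = step(faces_b, E, D_b)
    E -= lr * (gEa + gEb)  # one encoder serves both identities, so it is
    D_a -= lr * gDa        # pushed toward features the two faces share
    D_b -= lr * gDb

loss_end = step(faces_a, E, D_a)[0] + step(faces_b, E, D_b)[0]

# The "swap": encode person A's face, then decode it with B's decoder.
swapped = faces_a @ E @ D_b
```

Because the encoder is trained on both people at once, feeding A's face through B's decoder reinterprets A's expression with B's appearance; the more alike the two data sets are, the less the shared encoder has to compromise, which matches the advice above about resemblance.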