Google cracks down on deepfakes

Earlier this month, Google added deepfake training to the list of projects prohibited on its Colaboratory service. The change was first spotted by the developer of DFL, known on Discord as "chervonij": when he tried to train his deepfake models on the platform, he received an error message saying

"You may be running unauthorized code, which may restrict your ability to use Colab in the future. Please note the prohibited actions specified in our FAQ.

Google appears to have made the change under the radar and has remained silent on the matter since. While ethics is the first explanation that comes to mind, the real reason may be a bit more pragmatic.

Abuse of the free resource

Deepfakes are "Photoshopped" videos: fabricated clips that show people saying things they never actually said. Their creators harness artificial intelligence (AI) and machine learning (ML) to produce highly engaging videos that are increasingly difficult to distinguish from legitimate content.

However, to be convincing, deepfakes require significant computing power, much like that offered by the Colab service. This Google project allows users to run Python in their browser while using free computing resources.
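As a rough illustration of why Colab is attractive for this kind of workload, here is a minimal sketch (not Google's code, and the helper names are hypothetical) of how a notebook cell might check whether the free runtime it was assigned includes a GPU. It assumes only that GPU runtimes expose NVIDIA's `nvidia-smi` utility, which is how Colab GPU instances typically surface their hardware.

```python
import shutil
import subprocess

def gpu_available() -> bool:
    """Return True if the NVIDIA driver tool is on PATH (e.g. a GPU runtime)."""
    return shutil.which("nvidia-smi") is not None

def describe_runtime() -> str:
    """Report which accelerator, if any, this runtime appears to offer."""
    if not gpu_available():
        return "CPU-only runtime"
    # `nvidia-smi -L` lists the attached GPUs, one per line.
    out = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    return out.stdout.strip() or "GPU runtime (no devices listed)"

print(describe_runtime())
```

On a CPU-only machine this simply reports "CPU-only runtime"; on a Colab GPU instance it would list the assigned device. It is exactly this kind of free accelerator access that makes the service appealing for compute-heavy training jobs.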

Since deepfakes are often used to pull pranks, create fake news, or produce revenge porn, it's easy to assume that ethics drove Google's decision. However, it could also be that too many people were using Colab to create funny little fake videos, crowding out researchers doing more "serious" work. After all, the computing resource is free to use.

Aside from deepfakes, Google does not allow Colab to be used for projects such as cryptocurrency mining, denial-of-service attacks, password cracking, using multiple accounts to bypass access or resource-usage restrictions, using remote desktops or SSH, or connecting to remote proxy servers.

Via: BleepingComputer