Google bans deepfakes from its machine learning platform

Tom Cruise, pixelated.

Screenshot via Deep Tom Cruise / Edits by author

Google will no longer allow people to create deepfakes using its collaborative machine learning platform, according to the company.

Google’s Colaboratory service, known as Colab, allows users to run Python code from their browsers rather than on their own hardware, essentially giving them free access to massive computing power. People use Colab for everything from running Minecraft servers to training neural networks that automatically recognize handwriting, but the same computing power is also being used by some people to create non-consensual deepfake pornography.

Google hasn’t made any official announcement about the ban on creating deepfakes with the service, but as noted by Bleeping Computer, archived versions of Colab’s FAQ section show that “creating deepfakes” was added to the list of prohibited activities in the last month.

Last month, apparently shortly before this change, Motherboard published an investigation into DeepFaceLab, an open-source project that was used to create the viral Deep Tom Cruise deepfakes. DeepFaceLab is currently the most popular method for creating deepfakes, including deepfake porn. In fact, Motherboard’s investigation found that DeepFaceLab repeatedly links and sends users to Mr. Deepfakes, the largest online deepfake porn site, to learn how to use the software.

A popular fork of DeepFaceLab is “DFL-Colab,” which allows users to create deepfakes on Google Colab rather than on their own hardware, which would require expensive and hard-to-obtain graphics cards. Under the new rule, Google now bans this use of DeepFaceLab on Colab.

“We regularly monitor abuse pathways in Colab that go against Google’s AI principles, while balancing our support for our mission to give our users access to valuable resources such as TPUs and GPUs,” a Google spokesperson told Motherboard. “Deepfakes were added to our list of prohibited activities from Colab runtimes last month in response to our regular reviews of abusive models.”

Deepfakes have “great potential” to go against Google’s AI principles, the spokesperson said: “We aspire to be able to detect and deter abusive deepfake patterns over benign patterns, and we will modify our policies as our methods progress.” Google’s AI Principles state that AI applications should be “socially beneficial” and should not cause harm.

Motherboard saw the developer of DFL-Colab, a man who goes by the name Nikolay Chervoniy, discussing the ban on the DeepFaceLab Discord, but Chervoniy declined to comment for this story.

Emanuel Maiberg contributed reporting for this story.
