Microsoft fixes Designer AI after Taylor Swift deepfakes

Last week, deepfake photos of the singer Taylor Swift went viral online, sparking another conversation about the misuse of AI and forcing X (formerly Twitter) to block searches for Taylor Swift on the platform. Now, the site 404 Media reports that Microsoft has made changes to Designer AI to prevent it from generating explicit images. According to 404 Media's investigation, the deepfake photos of the celebrity were created using Microsoft's Designer AI. Furthermore, users on a Telegram channel and on 4chan even advised others to take advantage of this tool.

Microsoft: The tactic users use to create explicit content with Designer AI

Microsoft already offers safeguards to prevent people from generating explicit images. However, users have gotten around them by deliberately misspelling names or by describing sexual acts instead of naming them directly in the prompt. 404 Media's testing found that Designer AI refuses to generate an image of "Jennifer Aniston", yet it was possible to create deepfake images of the actress using the phrase "jennifer 'attrice' aniston". The same ploy was used by members of a Telegram group, who recommended the prompt "Taylor 'cantante' Swift" to generate explicit deepfake images of the US singer.

The Redmond company said it found no evidence that Designer AI was used to create the deepfakes. Nevertheless, the loophole users exploited to bypass keyword blocking has now been closed, and prompts of this type no longer work on Designer AI. Although that particular problem seems resolved, 4chan users have reportedly already found other ways to get around the restrictions in Bing and Designer AI. Microsoft will therefore have to keep blocking new attempts to create deepfakes before the next batch of images goes viral.
