Deepfakes and Photoshop
Generative AI-based deepfakes have been in the news a lot recently, partly due to concerns that a faked video of a world leader declaring war on someone might cause geopolitical issues (and/or market problems related to those issues), and partly due to concerns that they could influence election outcomes by suggesting one of the candidates said something they didn’t, or that they’re infirm, unwell, or unhinged in some way that they’re not.
In most relevant contexts, a “deepfake” is a piece of media—usually a video, but sometimes a photo—that has been modified in some way, typically by overlaying one person’s face (and possibly other attributes) onto someone else’s, and this is usually accomplished with the help of generative AI tools.
One common criticism of those concerned about the strategic, nefarious use of these tools is that we’ve had the ability to edit images, videos, and audio files for decades. Though the quality of the resulting fakes has varied, and though these tweaked bits of media have at times caused a stir, we’re still here: nothing fundamental has changed about the way politics works, because there has always been the possibility that someone could lie about something they saw or something a politician said. We’ve thus always had to be careful, and we’ve got basic BS-filters in place.
The counterargument is that these new tools represent a step-change in how easily such media can be created, and in the quality of the results.