

We've seen rapid improvement in AI image generation recently, and the next leap will be greater control over its outputs. The ability to make specific changes will open new opportunities for creators, but it will also make it harder for consumers to spot misinformation.
This incredible FacePoke demo, making detailed edits to the Mona Lisa, is a glimpse of what's coming:
This is insane. Detailed edits to the Mona Lisa using FacePoke. pic.twitter.com/Ry3eUXYCLo
— Victor Mustar (@victormustar) March 23, 2023
For creators and AI artists, more control is exciting. But with such granular editing power, the potential for subtle manipulation and misinformation grows. It's now easier to create content that looks real but is entirely fabricated or subtly altered. Seeing is no longer believing.
The impact is already being felt. Misinformation can hurt people or divert attention during critical moments, and we've seen how altered content can mislead the public. Viral deepfakes around Hurricane Helene could misdirect essential resources or waste valuable time.
AI-generated images are becoming increasingly difficult to detect. pic.twitter.com/ZM1rPjJcVP
— Mario Nawfal (@MarioNawfal) March 26, 2023
This is where Trueshot comes in: reporting on real-world events with verifiable images. We provide a timestamped NFT to prove an image is authentic from the moment it's captured. As image editing becomes easier, the need for authenticity grows stronger, especially in fast-moving news reporting.
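The core idea behind capture-time verification can be sketched in a few lines. This is an illustrative example only, not Trueshot's actual implementation: it hashes the raw image bytes and pairs the hash with a UTC timestamp, producing a record that could later be anchored on-chain as an NFT.

```python
# Illustrative sketch of capture-time fingerprinting (hypothetical, not
# Trueshot's real pipeline): hash the image bytes and record a timestamp.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint_image(image_bytes: bytes) -> dict:
    """Return a provenance record: content hash plus capture timestamp."""
    return {
        # SHA-256 of the exact bytes; any later edit changes this value.
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        # UTC capture time in ISO 8601 format.
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

record = fingerprint_image(b"raw image bytes from the camera sensor")
print(json.dumps(record, indent=2))
```

Because the hash is bound to the exact bytes, even a one-pixel edit produces a completely different fingerprint, so a published image can be checked against the original record.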
The future of digital content creation is bright, but it comes with challenges. As we gain more control over image outputs, we need to be careful about how that control is used.
Let's embrace this power—but with the tools to protect what's real.