States Rush to Criminalize AI-Powered Fraud

Image generators were some of the first AI models to make a big splash with consumers in 2022. Next came the more limited text-based chatbots that are ubiquitous today. But it wasn’t until last month that those two capabilities came together, when both Google and OpenAI released multimodal image generation in their flagship AI models. Previously, image-generating models and text-based models were separate. When you asked a chatbot to make you an image, it would simply create its own text prompt and feed it into a separate image-generating AI model. Relying on those less advanced image generators produced so-so images with telltale signs of AI creation.

However, with the recent release of multimodal image generation in GPT-4o and Gemini Flash, the frontier models can now create those images directly. The result is much higher-quality images that can be adjusted and edited with simple prompts (something the standalone image-generating models struggled with). This is all a long-winded way of saying: AI images are about to get much better, and they’ll be everywhere (see, e.g., the Studio Ghibli-style photo craze). As this improving technology extends beyond images to video and voice, it is ripe for fraud.

Scene: A finance worker receives a message from his employer’s CFO requesting a large, confidential transaction. The message seems suspicious, but the CFO sets up a video conference call. Upon recognizing the CFO, who appears alongside other executives, the finance worker is satisfied the request is legitimate and proceeds to make 15 transactions, depositing around $25 million into accounts at the CFO’s direction. It turns out the “CFO” was, in fact, a scammer who used deepfake technology to deceive the employee.
