Gov. Abbott’s repost of AI-generated photo highlights the blurred line between real and artificial images

It comes as AI is reshaping how political campaigns operate, which is raising questions about misinformation and sparking concerns about limited regulation.

AUSTIN, Texas — After a daring special forces mission over the weekend deep inside Iran to rescue a U.S. airman who was injured when his fighter jet was shot down behind enemy lines, Gov. Greg Abbott is facing criticism for a post on social media.

On Easter Sunday, Abbott praised a real event, the rescue of the pilot, but paired it with an image that wasn’t real. He reposted an AI-generated image on X that falsely depicted the rescue.

The original post, which came from an account named “Missy in So Cal,” said, “Here is the photo of the honorable Colonel being rescued yesterday-God bless him-our soldiers are ALL doing God’s work! HAPPY EASTER!”

On Sunday morning, Abbott reposted the image on X and commented: “This is so awesome.” 

The post depicted a man in military gear sitting on an aircraft of some sort, holding an American flag and smiling as other members of the military surrounded him. At first glance, it looks like an incredible moment after the daring rescue, but the scene never happened and the image is not real. X quickly flagged and labeled it as AI-generated. 

Over the weekend, the U.S. successfully located the second crew member of the F-15E fighter jet, which was shot down over Iran on Friday. The crew member was trapped in the Iranian mountains for nearly two days. During that time, he was able to maintain communications with the U.S. military. The CIA assisted in the rescue, orchestrating a deception campaign, spreading word inside Iran that U.S. forces had already found the airman and were “moving him on the ground for exfiltration out of the country.” President Trump, in a post on social media, said the colonel sustained injuries but will be just fine.

During the rescue, U.S. officials said they deliberately destroyed two C-130 aircraft that had suffered mechanical difficulties, along with four MH-6 Little Bird helicopters, so the aircraft wouldn’t fall into Iranian hands. Replacement aircraft flew the airman out of enemy territory.

Since the operation was completed, the Pentagon has not released any official images of the rescue mission or the rescued pilot.

“Similar images of smiling soldiers with flags in helicopters have been synthetic fakes,” a “community note” on the governor’s post said. 

Abbott deleted his post on Sunday afternoon, after it had been up for several hours.

“For Gov. Abbott to share this image isn’t surprising at all, because it is something that you and I will struggle to distinguish, between what’s real and what’s AI-generated,” Kevin Frazier, the director of AI Innovation and Law at the University of Texas School of Law, said.

Frazier said that during conflicts like those in the Middle East, AI-generated images tend to be more prevalent. 

“You’ll see an uptick in the number of AI-generated images during very sensitive events,” Frazier said. “In the context of war, in the context of elections, even during sporting events, there’s an uptick in AI-generated images, because that’s when folks know people are online, they’re engaged and they’re looking for that sort of persuasive emotional content.”

KVUE reached out to the governor’s office on Monday to ask if they had any comment on the governor’s post, now that it has been taken down. So far, we have not heard back.

Last month, the governor posted and quickly deleted a video that purported to show a U.S. warship shooting down an Iranian fighter jet during this conflict. It turned out that it was a video from a World War II video game. Abbott responded “bye bye” to this post, which was captioned “an Iranian plane VS a US ship.”

In the fall, Abbott reposted a social media post containing a fake quote attributed to Houston Texans quarterback CJ Stroud, in which he supposedly called for the team and the NFL as a whole to observe a moment of silence after the assassination of conservative activist Charlie Kirk.

“I think increasingly what we’re going to have to do is start treating images the way we have been treating text. I could tell you right now that the Prime Minister of Canada said some wonky stuff and you could take my word for it or not, but if you wanted to take it seriously, you’d have to verify it,” said Liam Mayes, a lecturer on media studies, politics, law and social thought in the School of Humanities and Arts at Rice University. “Just in that same way, when you see an image now, just circulating online, you cannot take it at face value. You’re going to have to try to verify it.”

This is sparking a larger conversation about how to spot AI-generated images. Experts say artificial intelligence is increasingly blurring the line between what is real and what is not.

“AI-generated images are only going to become more and more frequent over time. We’re seeing that the costs of creating compelling images is only going down,” Frazier said. “We can expect everyone from politicians to small businesses to your mom creating compelling AI images. So this is the low watermark.”

As tools like AI become easier to use, the images they generate will seem increasingly realistic, making it harder to tell the difference between something that is real and something that is AI-generated.

“Long gone are the days when you see, for example, someone with six fingers or someone missing an eyebrow,” Frazier said. “These images are more and more compelling and persuasive.”

With AI, experts said, anything can be created quickly and cheaply.

“If it’s too good to be true, then oftentimes it is,” Frazier said.

AI-generated images are all over political ads, and campaigns are using them this election season to an extent and in ways they never have before.

“Politicians have been very crafty with media for a very long time,” Mayes said. “There’ve been manipulated images before. There’s been manipulated discourse before. This seems to me to be the next step in that trajectory.”

One example is an ad created by Sen. John Cornyn’s campaign, targeting Attorney General Ken Paxton’s alleged adultery. It shows an AI-generated version of Paxton riding in a car with two alleged mistresses. Cornyn and Paxton are set to face off in a runoff in May to determine who will be the party’s nominee in the November general election.

“Because of the low barriers to entry, any candidate, from running for mayor to running for president, can generate not only quick images, but whole videos,” Frazier said.

The use of AI this election season has been especially prevalent in the high-stakes U.S. Senate race. An ad created by Paxton’s campaign used AI to show Cornyn dancing with Democratic Congresswoman Jasmine Crockett.

The National Republican Senatorial Committee released an ad that used AI to create a deepfake of Democratic Senate nominee State Rep. James Talarico reading some of his past social media posts. There is a small disclosure in the corner of the screen noting the use of AI.

“Since the dawn of time, we’ve seen people using cartoons, using fake voices, using makeup to disguise messages to make folks look different, and so using that same sort of patience and media savvy is only going to become more important with respect to AI,” Frazier said.

Texas has laws banning the use of deepfakes in campaign ads, but those laws apply only to state races. That means AI is fair game in federal races, like the U.S. Senate contest.

During last year’s regular legislative session, former Texas House Speaker and State Rep. Dade Phelan (R-Beaumont) pushed a bill that would require candidates and political committees to disclose the use of AI in political advertising.

Phelan, who announced last summer that he would not seek reelection and planned to retire from the Texas House, was the subject of AI attacks and memes during his 2024 reelection campaign. Most notably, a group sent out mailers in his community containing edited images showing Phelan’s head superimposed on House Minority Leader Hakeem Jeffries’ body, hugging Congresswoman Nancy Pelosi.

“It is my goal to prevent someone from impacting or altering an election by using fake media that never occurred in reality, be it AI or deepfakes,” Phelan said during the debate in the Texas House.

While the bill passed the House, it died in the Texas Senate.

“I’m all in favor of, you know, common sense, practical guardrails,” Mayes said. “Because right now, there really aren’t any at the federal level. I wouldn’t mind seeing some of the kinds of state regulations that we’ve seen happen at the federal level.”

Frazier said he is wary about “heavy-handed” government regulation.

“If there’s anything scarier than a candidate misusing media, for example, it’s the government saying exactly how and when someone can use a technological tool,” Frazier said. “What I instead think is a better approach is something that we’ve seen across the states leaning into AI literacy, leaning into things like transparency, so that consumers know when and how these tools are being used, and then making sure we have a competitive marketplace.”

Frazier said he believes it is ultimately up to Texas to tell candidates and politicians what the right balance and most ethical use of AI is.

“This is an environment in which I would really encourage voters to press candidates on when and how they’re using AI, because this is ultimately a question of what we want from our political system,” Frazier said. “Are we comfortable with these AI tools? Are we comfortable with how candidates use them? If not, then vote with your feet. Vote and make sure that folks know that you’re not going to support this use of AI. Or perhaps you’re OK with it, but that’s ultimately up to us.”

Mayes said his advice is for people to assume something could be AI-generated unless they are positive that it’s not. 

“I think you have to try to bring the same kind of skepticism that you would have brought to quotations to images,” Mayes said. “Unless it’s coming from a legacy media institution that you trust, from a government that you trust with a fairly high level, approach with some skepticism.”

Asked whether there are telltale signs that would show that an image is fake, Mayes said: “Emphatically, no.”

“I think at this moment in time, someone who has been looking at a lot of AI images probably has a fairly good sense of what is AI-generated and what’s not,” Mayes said. “But given the speed with which this technology is developing, I think maybe in a year, all those signs are going to be gone basically.”

Some hallmarks of an AI image include anomalies or elements that don’t look realistic, such as inconsistent lighting or shadows.

It is also important to verify the source of a photo or picture and whether that source is trustworthy.

“If you want to know if an image is real, you’re going to have to trace it back to its source and then make a good judgment call there based on the credibility of the source, not the image itself,” Mayes said.

In a world where the lines between fact and fiction are increasingly blurred, even in sensitive and life-or-death situations like what is happening in Iran, the best advice from experts is to proceed with caution.

“You also just need to make sure you are in a good media environment, talking with your friends about where they find good information, working with your city officials, your state officials, to know where they are going to post verified information,” Frazier said. “These small steps can shield people from finding themselves in very difficult or even tragic situations.”
