
X’s Grok AI chatbot escalates problem of deepfakes ahead of US elections
It is up to social media companies to monitor and remove deepfake images in connection with political content.
In August, X, the social media company once known as Twitter, publicly released Grok 2, the latest iteration of its AI chatbot. With limited guardrails, Grok has pushed misinformation about elections and allowed users to make lifelike artificial intelligence-generated images – otherwise known as deepfakes – of elected officials in ethically questionable positions.
The social media giant has started to rectify some of its problems. After election officials in Michigan, Minnesota, New Mexico, Pennsylvania and Washington wrote to X head Elon Musk alleging that the chatbot produced false information about state ballot deadlines, X now points users to Vote.gov for election-related questions.
But when it comes to deepfakes, that’s a different story. Users are still able to make deepfake images of politicians doing questionable and, in some cases, illegal activities.
Just this week, Al Jazeera was able to make lifelike images that show Texas Republican Senator Ted Cruz snorting cocaine, Vice President Kamala Harris brandishing a knife at a grocery store, and former President Donald Trump shaking hands with white nationalists on the White House lawn.
In the weeks prior, filmmakers The Dor Brothers made short clips using Grok-generated deepfake images showing officials including Harris, Trump and former President Barack Obama robbing a grocery store, which circulated on social media. The Dor Brothers did not respond to a request for comment.
Screenshot from the clip made by The Dor Brothers [X/@thedorbrothers]
The move has raised questions about the ethics behind X’s technology, especially as some other companies like OpenAI, amid pressure from the White House, are putting safeguards in place to block certain kinds of content from being made. OpenAI’s image generator DALL-E 3 will refuse to make images using a specific public figure by name. The company has also built a product that detects deepfake images.
“Common sense safeguards in terms of AI-generated images, particularly of elected officials, would have even been in question for Twitter Trust and Safety teams pre-Elon,” Edward Tian, co-founder of GPTZero, a company that makes software to detect AI-generated content, told Al Jazeera.
Grok’s new technology escalates an already pressing problem across the AI landscape – the use of fake images.
Grok is not the first tool put to such use. Earlier in this election cycle – before Grok was on the market – the now-suspended campaign of Florida Governor Ron DeSantis used a series of fake images showing Trump embracing Anthony Fauci, a key member of the US task force that was set up to tackle the COVID-19 pandemic. The images, which the AFP news agency debunked, were intertwined with real photos of the two men in meetings.
The gimmick was intended to undermine Trump by exaggerating his ties to Fauci, an expert adviser with no authority to make policy. Trump’s voter base had blamed Fauci for the spread of the pandemic instead of holding Trump accountable.
Trump’s use of fake images
While Trump was targeted in that particular case by the DeSantis campaign, he and his surrogates are often the perpetrators.
The Republican National Committee used AI-generated images in an advertisement that showed the panic of Wall Street if Biden, who was the presumptive Democratic nominee at the time, were to win the election. The assertion comes despite markets performing fairly well under Biden in his first term.
In the last few weeks, Trump has posted fake images, including one that suggested that Harris spoke to a group of communists at the Democratic National Convention.
On Monday, Musk perpetuated Trump’s inaccurate representation of Harris’s policies. Musk posted an AI-generated image of Harris wearing a hat with a communist insignia – to suggest that Harris’s policies align with communism – an increasingly common and inaccurate deflection Republicans have used in recent years to describe the Democratic Party’s policy positions.
The misleading post comes as Musk is accused of facilitating the spread of misinformation across the globe. X faces legal hurdles in jurisdictions including the European Union and Brazil, which blocked access to the website over the weekend.
This comes weeks after Trump reposted on his social media platform Truth Social a fake image that inaccurately alleged that singer Taylor Swift had endorsed him and that her loyal fans, colloquially referred to as “Swifties”, supported him.
There are vocal movements on both sides of the political spectrum tied to Swift’s fans, but none of them is officially connected to the pop star.

One of the images Trump shared showing “Swifties for Trump” was labelled as satire and came from the account Amuse on X. The post was sponsored by the John Milton Freedom Foundation (JMFF), a group that says it empowers independent journalists through fellowships.
“As [a] start-up nonprofit, we were fortunate to sponsor, at no cost, over 100 posts on @amuse, a good friend of JMFF. This gave us over 20 million free impressions over a period of a few weeks, helping our exposure and name ID. One of those posts was clearly marked as ‘SATIRE’, making fun of ‘Swifties for Trump’. It was clearly a joke and was clearly marked as such. It was later responded to by the Trump campaign with an equally glib response of ‘I accept’. End of our participation with this, aside from what was a small smile on our behalf,” a JMFF spokesperson told Al Jazeera in a statement.
The group has fellows known for spreading misinformation and unverified far-right conspiracy theories, including Lara Logan, who was banned from the right-wing news channel Newsmax after a conspiracy-laden tirade in which she accused world leaders of drinking children’s blood.
The former president told Fox Business that he is not worried about being sued by Swift because the images were made by someone else.
The Trump campaign did not respond to a request for comment.
Blame game
That blame-shifting is part of the concern of the watchdog group Public Citizen, which worries that various stakeholders will deflect responsibility to evade accountability.
In June, Public Citizen called on the Federal Election Commission (FEC) to curb the use of deepfake images as it pertains to elections. Last year in July, the watchdog group petitioned the agency to address the growing problem of deepfakes in political advertisements.
“The FEC, in particular some of the Republican commissioners, have a clear anti-regulatory bent across the board. They have said that they don’t think that the FEC has the ability to make these rules. They sort of toss it back to Congress to create more legislation to empower them. We completely disagree with that,” Lisa Gilbert, Public Citizen co-president, told Al Jazeera.
“What our petition asks them to do is simply apply a longstanding rule on the books, which says you can’t put forth fraudulent misrepresentations. If you’re a candidate or a party, you basically can’t put out advertisements that lie directly about things your opponents have said or done. So it seems very clear to us that applying that to a new technology that’s creating that kind of misinformation is an obvious step and clarification that they should easily be able to do so,” Gilbert added.
In August, Axios reported that the FEC would likely not enact new rules on AI in elections during this cycle.
“The FEC is kicking the can down the road on one of the most important election-related issues of our lifetime. The FEC should address the question now and move forward with a rule,” Gilbert said.
The agency had been scheduled to vote on Thursday on whether to reject Public Citizen’s proposal. A day before the open meeting, Bloomberg reported that the FEC will instead vote on September 19 on whether to consider proposed regulations on AI in elections.
The Federal Communications Commission (FCC), which regulates TV, cable and radio, is considering a plan that would require political advertisements that use AI to carry a disclosure – but only on TV and radio platforms.