Why do deepfake clips spread, and can they be stopped?
The Internet is filled with deepfake content – audio, images, or videos made with artificial intelligence tools in which people appear to say or do things they never said or did, appear in places they never were, or have their appearance altered.
Some involve what is known as "digital undressing," in which images are altered to show someone without clothes; other deepfakes are used to defraud consumers or to damage the reputations of politicians and other public figures.
Advances in artificial intelligence make it possible to create deepfakes with just a few keystrokes. Governments are eager to confront the phenomenon, but they appear to be losing the battle: attempted deepfake fraud has increased more than twentyfold over the past three years, according to data from the identity-verification company Signicat.
What measures have been taken to combat deepfakes?
On May 19, US President Donald Trump signed the "Take It Down" Act, which criminalizes non-consensual AI-generated pornographic material, also known as deepfake pornography, and obliges social media companies to remove such explicit sexual images upon request.
Last year, the US Federal Communications Commission made the use of AI-generated voices in robocalls illegal. The ban came two days after the commission issued a cease-and-desist order against the company responsible for a deepfake audio impersonation of President Joe Biden. Residents of New Hampshire had received a robocall ahead of the state's presidential primary in which a voice that sounded like Biden's urged them to stay home and "save your vote for the November election."
The European Union's AI Act obliges platforms to label content that is the product of deepfakes. China applied similar legislation in 2023, and on April 28 the UK government's Children's Commissioner called for a ban on the "digital undressing" apps that are widely available online.
Where have deepfakes made the news?
Deepfake images of pop star Taylor Swift spread widely on social media in January 2024, angering her fans and prompting the White House to express concern.
During the 2024 US presidential election, Elon Musk shared a campaign-style deepfake video featuring an AI-altered voice of the Democratic candidate, Kamala Harris, without the video being labeled as misleading. In it, she appeared to describe President Joe Biden as senile and to say that she did not "know the first thing about running the country." The video drew tens of millions of views. In response, California Governor Gavin Newsom pledged to ban digitally altered political deepfakes, and signed legislation to that effect in September.
How are deepfake videos made?
They are often produced using an AI algorithm trained to recognize patterns in real video recordings of a specific person, a process known as "deep learning." It then becomes possible to swap one element of a video, such as the person's face, for other content without it looking like a crude montage. These manipulations are most misleading when combined with voice-cloning techniques, which break an audio clip of someone speaking into tiny fragments that can be reassembled into new words seemingly spoken in the original speaker's voice.
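The face-swap process described above is often built on an autoencoder with a shared encoder and one decoder per person. The toy NumPy sketch below uses random, untrained weights purely to show the data flow; every name and dimension is illustrative, not taken from any real deepfake system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a "face" here is just a flat vector of pixels.
PIXELS, LATENT = 64, 8

# One shared encoder learns features common to both faces (pose,
# expression, lighting); each person gets their own decoder.
# These random weights stand in for what training on real footage
# of each person would produce.
W_enc = rng.normal(size=(LATENT, PIXELS))
W_dec_a = rng.normal(size=(PIXELS, LATENT))   # reconstructs person A
W_dec_b = rng.normal(size=(PIXELS, LATENT))   # reconstructs person B

def encode(face):
    return np.tanh(W_enc @ face)   # compress into the shared latent code

def decode(code, W_dec):
    return W_dec @ code            # expand back into pixel space

# The swap: encode a frame of person A, but decode it with person B's
# decoder, yielding B's face wearing A's pose and expression.
frame_a = rng.normal(size=PIXELS)
swapped = decode(encode(frame_a), W_dec_b)

print(swapped.shape)  # (64,)
```

In a real system, both decoders are trained against the same encoder, which is what forces the latent code to capture person-independent features; the swap then transfers those features between identities.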
How did deepfakes spread?
The technology was initially the preserve of academics and researchers. In 2017, however, Motherboard, a publication of Vice, reported that a Reddit user called "deepfakes" had devised an algorithm for making fake videos using open-source software. Reddit banned the user, but the practice spread quickly. In its early days, deepfake technology required real video footage and genuine audio of the target, as well as advanced editing skills.
Today's generative AI systems allow users to produce convincing images and videos from simple text prompts. Ask a computer to create a video that puts words in someone's mouth, and it will oblige.
How can you spot a deepfake?
These digital forgeries are becoming harder to detect as AI companies train their new tools on the vast amount of material available online, from YouTube videos to stock photo and video libraries.
Sometimes there are obvious signs that an image or video was created with AI, such as an out-of-place limb or a hand with six fingers. Colors may not match between the edited and unedited parts of an image. In deepfake clips, mouth movements sometimes fail to sync with the speech. AI can struggle to render fine details such as hair, mouths, and shadows, and the edges of objects may be jagged or visibly pixelated. But all of this may change as the underlying models improve.
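As a toy illustration of the color-mismatch cue mentioned above, the sketch below compares the mean channel values of two halves of an image. This is a contrived heuristic for demonstration only, not a real deepfake detector; the function name and the synthetic data are invented for this example:

```python
import numpy as np

def color_shift_score(image):
    """Toy heuristic: gap between mean colors of the left and right halves.

    A large gap *can* hint that one region was pasted in from elsewhere;
    real detectors are far more sophisticated. `image` is an (H, W, 3)
    float array with values in [0, 1].
    """
    half = image.shape[1] // 2
    left, right = image[:, :half], image[:, half:]
    return float(np.abs(left.mean(axis=(0, 1)) - right.mean(axis=(0, 1))).max())

# Synthetic demo: a uniform gray image vs. one with a tinted right half.
clean = np.full((32, 32, 3), 0.5)
spliced = clean.copy()
spliced[:, 16:, 0] += 0.3        # simulate a color-mismatched paste

print(color_shift_score(clean))    # 0.0
print(color_shift_score(spliced))  # ~0.3
```

Production detectors look at far subtler statistics (noise patterns, compression traces, frequency-domain artifacts), but the principle is the same: manipulated regions often differ statistically from their surroundings.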
What are the most prominent examples of deepfakes in use?
In August 2023, Chinese propagandists posted manipulated images of the wildfires on the Hawaiian island of Maui to support claims that the fires were caused by a secret "weather weapon" being tested by the United States. In May 2023, US stocks dipped briefly after an image circulated online appearing to show the Pentagon on fire. Experts said the fake image bore the hallmarks of AI generation.
In February of the same year, a faked audio clip surfaced in which Nigerian presidential candidate Atiku Abubakar appeared to be plotting to rig that month's election. In 2022, a one-minute video posted on social media showed Ukrainian President Volodymyr Zelenskyy appearing to call on his soldiers to lay down their arms and surrender to Russia. Other deepfakes are more benign, such as those showing football star Cristiano Ronaldo singing Arabic poetry.
What are the dangers associated with this technology?
The fear is that deepfake clips will become so convincing that it is impossible to tell what is real from what is fake. Imagine fraudsters manipulating share prices by producing forged videos of chief executives issuing corporate updates, or fake clips of soldiers committing war crimes. Politicians, business leaders, and celebrities are especially at risk, given how many recordings of them are available.
The UK Children's Commissioner's report published in April highlighted children's growing fear of becoming victims of scandalous deepfake content. A further concern is that, as awareness of deepfakes spreads, people who genuinely appear in recordings saying or doing objectionable or illegal things will claim the evidence against them is fabricated; some defendants have already begun raising a "deepfake defense" in court.
What other measures can be taken to reduce the spread of deepfakes?
The kind of machine learning used to produce deepfakes cannot easily be run in reverse to detect fakes, but a handful of startups, such as Sensity AI in the Netherlands and Sentinel in Estonia, are developing detection techniques, as are many of the big US technology companies.
Companies including Microsoft have pledged to embed digital watermarks in the images created with their AI tools, to identify them as synthetic content. OpenAI, the developer of ChatGPT, has devised techniques for detecting AI-generated images, as well as a way to watermark text, but it has not yet released the latter, partly because it believes bad actors could easily circumvent it.
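To illustrate the general idea of an invisible watermark (this is emphatically not the scheme Microsoft or OpenAI use, which rely on far more robust methods such as cryptographically signed provenance metadata), here is a naive least-significant-bit sketch. It is fragile by design and would not survive compression, resizing, or cropping:

```python
import numpy as np

def embed(pixels, bits):
    """Hide `bits` (a list of 0/1) in the least significant bit
    of the first len(bits) pixels. Returns a modified copy."""
    out = pixels.copy()
    flat = out.reshape(-1)             # view into `out`
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b # clear the LSB, then set it to b
    return out

def extract(pixels, n):
    """Read back the first n hidden bits."""
    return [int(v & 1) for v in pixels.reshape(-1)[:n]]

image = np.full((4, 4), 128, dtype=np.uint8)   # stand-in grayscale image
mark = [1, 0, 1, 1, 0, 1, 0, 0]                # hypothetical "AI-made" tag
stamped = embed(image, mark)

print(extract(stamped, len(mark)))  # [1, 0, 1, 1, 0, 1, 0, 0]
```

Flipping a pixel's lowest bit changes its value by at most 1 out of 255, which is invisible to the eye; the weakness, as the article notes for text watermarks too, is that such marks are easy for bad actors to strip or destroy.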