Sydney Sweeney, your new AI math teacher?

A digital clone of Sydney Sweeney explaining math has gone viral, sparking a debate about the limits of AI. Is it a harmless educational use or a dangerous identity violation? We explain the technology and its consequences.

Recently, social media, particularly TikTok, has been flooded with a series of videos featuring popular actress Sydney Sweeney in an unexpected role: as a math teacher. In the clips, distributed by the account @onclocklearning, a digital clone of the actress explains math concepts in a fun and accessible way. The realism of these videos is so striking that many users find it difficult to distinguish whether the content is authentic or artificially created.

The technology behind this phenomenon is the 'deepfake,' an artificial intelligence technique that uses deep learning to superimpose a person's likeness onto existing images and video. Machine learning models imitate not only a person's physical appearance but also their voice, intonation, and facial expressions, achieving an extraordinarily believable result.
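To make the idea concrete: the most commonly described deepfake training setup pairs one shared encoder (which compresses any face into identity-agnostic features) with a separate decoder per person. Swapping a face then means encoding person A and decoding with person B's decoder. The NumPy sketch below shows only this data flow with random linear maps; it is a toy illustration, not the convolutional networks real systems use, and certainly not the models behind these particular clips.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT = 4   # size of the shared latent representation
PIXELS = 16  # a "face" is just a 16-value vector in this sketch

# One encoder shared across identities; one decoder per person.
encoder   = rng.normal(size=(LATENT, PIXELS))
decoder_a = rng.normal(size=(PIXELS, LATENT))  # would be trained on person A
decoder_b = rng.normal(size=(PIXELS, LATENT))  # would be trained on person B

def swap(face_a: np.ndarray) -> np.ndarray:
    """Encode a face of person A, then render it with person B's decoder."""
    latent = encoder @ face_a   # identity-agnostic features
    return decoder_b @ latent   # B's appearance, A's expression/pose

face = rng.normal(size=PIXELS)
fake = swap(face)
print(fake.shape)  # (16,)
```

Because the encoder is shared, the latent vector captures pose and expression rather than identity, which is why decoding with the other person's decoder transfers the performance onto a different face.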

In this particular case, the content is presented with a seemingly educational and harmless purpose, which has sparked intense debate. While some viewers claim to have finally understood concepts they had long struggled with, others question the ethics of using a celebrity's image without their consent, even for a "kind" purpose. This case becomes a perfect example of how a technology can be perceived in radically different ways, opening up a crucial conversation about its limits.

The Sydney Sweeney case, with its educational facade, may seem like a recreational use of technology, but it's just the tip of the iceberg. The same technology is being used for much darker purposes and without the consent of those involved, leading to legal action and public denunciations.

A striking example is that of actress Scarlett Johansson, who took legal action after her image was used in a deepfake video for ideological purposes without her permission. In a statement to Vanity Fair, Johansson warned: "We must denounce the misuse of AI, regardless of its message, or we risk losing touch with reality."

Similarly, singer Céline Dion publicly denounced AI-created imitations of her voice, calling the recordings "fake and unapproved." These cases demonstrate the growing problem of non-consensual use of public figures' identities.

The harmful potential of this technology is even more alarming in cases like the one in Almendralejo, Spain, where an app was used to create fake nude images of underage girls, highlighting how deepfakes can be a tool for sexual violence.

The conflict has also reached the labor sphere. The Hollywood actors' union, SAG-AFTRA, filed an unfair labor practice charge against Epic Games, the company behind the popular video game Fortnite, for recreating the voice of the iconic character Darth Vader with AI without negotiating with the union, setting a precedent for the protection of labor rights in the digital age.

The rapid evolution of deepfake technology has left legislation far behind. Currently, there is a significant legal vacuum in most countries, making it difficult to protect victims and punish those responsible.

In the United States, action has begun. Laws like the TAKE IT DOWN Act, passed in 2025, seek to make it easier to remove non-consensual intimate content and deepfakes. Furthermore, states like California and Texas have passed legislation criminalizing the creation of deepfakes for malicious purposes, such as election interference or non-consensual pornography.

However, outside these jurisdictions, protection is lacking. In Latin America and many other parts of the world, there are no specific legal frameworks to address this problem, leaving citizens highly vulnerable to the manipulation of their digital identity.

"Deepfakes […] could cause distress and negative effects to recipients, increase misinformation and hate speech, and could even stimulate political tension, inflame the public, and incite violence or war."

While celebrity cases grab the headlines, the risks of deepfake technology affect everyone. Malicious use of AI can lead to serious privacy violations, smear campaigns, harassment, and large-scale scams.

Generative AI tools can be used to create fake nudes and commit "sextortion" crimes, where victims are threatened with publishing manipulated images if they don't comply with certain demands. They can also be used for identity theft in financial fraud or to create false evidence in legal disputes.

The fundamental problem exposed by the deepfake phenomenon is the collapse of contextual trust. Until now, a video or audio was considered relatively reliable evidence that something happened. However, we are entering an era of "falsifiable reality," where we can no longer trust what we see and hear without rigorous external verification. This impacts not only news and politics, but also personal relationships, business agreements, and legal certainty as a whole. The threat is not just "fake news," but the erosion of reality itself as a verifiable concept.


La Verdad Yucatán
