
Outrunning the law: artificial intelligence and consent

Aurelia Athanasia

Anchor Staff Writer


As artificial intelligence becomes ever more pervasive and continues to improve at a meteoric pace, many are left concerned about both its potential long-term effects and the real harms already felt in its wake. As it exists now, AI is a powerful tool worth developing and using, but as with many recent technological advancements, the ethics of how the tool is used remain largely unexplored and could spell trouble without prompt regulation.


Machine learning is the process at the core of modern AI technology, and understanding its basics is necessary to grasp the ethical and social ramifications of using AI. As Sara Brown writes in an article for the MIT Sloan School of Management, “Machine learning is a subfield of artificial intelligence, which is broadly defined as the capability of a machine to imitate intelligent human behavior. Artificial intelligence systems are used to perform complex tasks in a way that is similar to how humans solve problems.” This approximation of human thought patterns is achieved through various methods, each of which starts with a very large amount of data. Put simply, huge amounts of data, categorized and labeled by humans, are analyzed by the machine, teaching it about that group of data. If, for example, you feed a large series of images of jellyfish to a learning algorithm and tell the machine that the images contain jellyfish, it would ‘learn’ from common elements within the images what an image of a jellyfish looks like. That ‘knowledge’ could then be used to recognize jellyfish in new images, or even to synthesize a ‘new’ image of one, drawing on the patterns learned from the dataset.
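For readers curious what ‘teaching’ a model from labeled data looks like in practice, below is a minimal sketch in Python using the scikit-learn library. Because the jellyfish dataset above is hypothetical, the sketch substitutes scikit-learn’s bundled, human-labeled digit images; the label-train-recognize pattern is the same one the paragraph describes.

```python
# Minimal sketch of supervised learning: a model "learns" labels from
# human-categorized examples, then recognizes examples it has never seen.
# scikit-learn's bundled digit images stand in for the article's
# hypothetical hand-labeled jellyfish dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

images, labels = load_digits(return_X_y=True)  # 8x8 images, flattened to 64 values

# Hold out a quarter of the images so the model never sees them in training.
train_x, test_x, train_y, test_y = train_test_split(
    images, labels, test_size=0.25, random_state=0
)

# "Teach" the model: it extracts common elements shared by each label.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(train_x, train_y)

# The learned patterns generalize to images the model has never seen.
print("accuracy on unseen images:", model.score(test_x, test_y))
```

The final line reports how often the learned patterns correctly identify images that were withheld from training, which is the sense in which the machine has ‘learned’ what the labeled group looks like.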

Image credited to Tara Winstead from Pexels.com

This is the basic idea behind most generative AI models today, which take text prompts and other media as input and generate new media corresponding to the user’s request, drawing on the data the model was trained on. Some models currently available to the public are capable of scouring and learning from the internet, amassing a huge collection of media to synthesize from, without compensating or acquiring the consent of the creators or owners of that media, raising questions about copyright law and the ethics of replicating human work. An article for The Verge discusses one example: an experiment in which a student used a series of illustrations by Disney illustrator Holly Mengert to train an AI image model to reproduce art in her style. Mengert said it felt “like someone’s taking work that I’ve done, you know, things that I’ve learned… and is using it to create art that I didn’t consent to and didn’t give permission for.” Many artists share this sentiment, taking to social media to discuss instances of their own work being analyzed and replicated in similar fashion, producing material attached to them and their brand despite their having no hand in creating it.

These concerns echo throughout all forms of generative AI trained on human work, including the particularly troubling potential of voice synthesis. Someone’s voice is a powerful identifier and an implicit marker of ownership of what is said, which makes the possibility of fraud and disinformation increasingly worrying as AI voice models rapidly approach near-perfect impersonation. AI voice deepfakes emulating the voices of U.S. presidents with surprising accuracy have found viral success on YouTube and TikTok, foreshadowing the use of falsified or modified audio in political disinformation campaigns. The audio in these meme videos may not be entirely convincing on its own, but short sequences can sound alarmingly genuine; spliced into an authentic recording, such clips could convincingly change the meaning of statements a public figure actually made. Concerningly, as the technology improves and the law falls behind in adapting to it, time marches onward toward the 2024 presidential election.
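To illustrate how little the prompt-to-media flow described above demands of the user, here is a minimal, non-authoritative sketch using the open-source diffusers library to generate an image from a plain-text prompt. The model name and prompt are illustrative assumptions, not tools used by anyone mentioned in this article, and running it requires a GPU.

```python
# Hypothetical sketch: text-to-image generation with a pretrained
# diffusion model via Hugging Face's diffusers library.
# The model ID and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released text-to-image model (weights download on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes an NVIDIA GPU is available

# One sentence of text is the entire "user command"; everything else
# comes from patterns the model learned from its training data.
image = pipe("a jellyfish drifting in deep water, digital painting").images[0]
image.save("jellyfish.png")
```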


The slow response of the law to these problems is entirely unsurprising and has already presented a problem to artists, writers, voice actors, and others who have had their content taken and used in these AI models. This isn’t a new problem; as cited in an article for the Washington Post, AI-assisted deepfakes have permeated online spaces for years, and as of 2019, 96% of that content was pornography depicting women, created without their knowledge or consent. With only three states having laws combating such content, the violent, anti-consent trajectory AI is accelerating along is plain to see, with very little in the way of regulation to slow it down. AI is a tool that can be used to achieve great things, but like any other tool, careful evaluation of the ethics of its use is extremely important, both before and after its ravenous growth can be better bound by legislation.
