Artificial intelligence (AI) refers to computers performing tasks that normally require human intelligence, including both cognitive and emotional aspects of learning and development. Art is one such task, relying on human intellect and emotional understanding. Although the current craze for AI-generated art makes it seem like a novel trend, the confluence of machine-based learning and art making has a longer and deeper history than the present moment. It is important to understand the history and contemporary implications of AI (both the good and the bad), since we clearly live in a culture where humans and machines are learning simultaneously.

The first notable instance of AI in art arrived in 1972, via Harold Cohen’s AARON platform. Through code that Cohen wrote, AARON was able to replicate the process of drawing. AARON differed from much of today’s AI hardware and software in that it could not learn on its own (i.e., it did not use machine-based learning). AARON was reliant upon the code Cohen wrote, whereas current AI technology is able to make predictions and/or decisions without being explicitly programmed to do so. Nevertheless, AARON was a major demonstration of the potential for collaboration between human artists and complex computerized machines. Because AARON’s artistic abilities were derived from an experiential process of writing code, its artistic development somewhat mimicked the way we often begin to learn and express ourselves visually. Cohen explained that “in all its versions prior to 1980, AARON dealt exclusively with internal aspects of human cognition. It was intended to identify the functional primitives and differentiations used in the building of mental images and, consequently, in the making of drawings and paintings” (quoted in Garcia, 2016).
AARON’s initial drawings were very crude; Cohen embellished and expanded its vague renderings with traditional forms of painting and drawing. Beginning in the 1980s, Cohen programmed AARON to make more detailed compositions and enabled it to choose among several types of brushes, drawing utensils, and hues depending on the imagery it was tasked to make. AARON was not an open-source platform, meaning that its code was known to and used only by Cohen. Therefore, when Cohen died in 2016, AARON became inoperative.
In recent years, the use of AI to create art has taken the cultural world by storm. AI-rendered artwork is more prevalent than ever because of applications like DALL-E, MidJourney, and a whole slew of other programs that make it possible for anyone to create a work of art derived from AI. These platforms are learning models that generate digital images from natural-language descriptions, called “prompts.” In other words, the AI learns how to render images through a kind of observational learning from millions upon millions of images online (think about how we search for an image using a hashtag in a Google Image search). If you ask it to create a dog using the tag #dog, it draws on its gargantuan data sets of images tagged “dog.” In doing so, it sees all types of figures from different perspectives and media (paintings, sculpture, drawings, etc.) and builds an understanding of what defines the characteristics of a dog and what sets it apart from other animals.

Image courtesy of OpenAI
Once the prompt is entered, these AI programs are able to render the tagged imagery in the style of artworks and graphics from across visual culture (see: Sharp, 2022). In other words, a user can create an image in the style of any well-known artist, or essentially any creator who has posted their work on the internet. The image above was DALL-E’s response to the prompt: “sea otter in the style of ‘Girl with a Pearl Earring’ by Johannes Vermeer.” While there are unique examples of artists using these platforms in conjunction with their own art practice, the overall process of AI-generated imagery poses a significant problem for working artists.
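For readers curious about the mechanics, the step from a written prompt to a rendered picture is, on most of these platforms, a single programmatic request. The sketch below is a minimal, hypothetical example using OpenAI’s Python client to submit the otter prompt quoted above; the model name, image size, and account setup are assumptions that vary by platform and version, not a definitive recipe.

```python
# Minimal sketch: sending a text prompt to an image-generation service.
# Assumes the OpenAI Python client is installed (pip install openai) and
# an API key is available in the OPENAI_API_KEY environment variable.
# The model name and parameters below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "sea otter in the style of 'Girl with a Pearl Earring' by Johannes Vermeer"

response = client.images.generate(
    model="dall-e-3",   # assumed model name; defaults differ across versions
    prompt=prompt,      # the natural-language description discussed above
    n=1,                # number of images to generate
    size="1024x1024",   # output resolution
)

# Each returned item carries a URL (or base64 data) pointing to the rendered image.
print(response.data[0].url)
```

Comparable prompt-in, image-out workflows exist for the other programs mentioned above, though the exact interfaces differ.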
There is an outstanding issue of AI’s questionable sourcing of imagery, as well as a perceived and tangible threat to the intellectual property and livelihood of artists. In her op-ed, “Beware a world where artists are replaced by robots. It’s starting now,” artist Molly Crabapple lays out the indictment against AI, stating that DALL-E and its peers are coded to mine imagery and metadata from massive data sets that have been known to include original artwork taken from artists without their authorization. Crabapple (2022) alleges that “these data sets were not ethically obtained. LAION sucked up 5.8 billion images from around the internet, from art sites such as DeviantArt, and even from private medical records.” She adds, “I found my art and photos of my face on their databases. They took it all without the creator’s knowledge, compensation or consent.” This poses a major ethical concern because, as Crabapple further explains, “AIs can spit out work in the style of any artist they were trained on — eliminating the need for anyone to hire that artist again. People sometimes say ‘AI art looks like an artist made it.’ This is because it vampirized the work of artists and could not function without it.”
Another issue with letting AI go unchecked is its potential to prolong and proliferate racial, gender, and ethnic bias. Because AI relies on algorithms that were initially created by humans, the machines are essentially learning from us. This also means that these intelligent machines are capable of developing and expressing the explicit and implicit biases that we exhibit. Transdisciplinary artist Stephanie Dinkins exposes how AI upholds stereotypes and sociocultural bias. Some of her key artworks, such as Conversations with Bina48 (2014–ongoing), set out to discover whether it is possible for AI to exhibit a greater sense of social and emotional understanding and ethical behavior, or whether it will continue to mimic the systemic racial, gender, and ethnic prejudices of mainstream culture (see: “Social and Emotional Learning for Artificial Intelligence”).
Martine Syms is another transdisciplinary artist who has made works of art that incorporate computer algorithms and machine-based learning. Like Dinkins, she uses AI to make critical inquiries regarding the political underpinnings of mass-media images and the technologies that are used to produce them. An example is her immersive media installation Neural Swamp (2022), which was created for the Future Fields Commission in Time-Based Media at the Philadelphia Museum of Art. The installation incorporates facets from the worlds of sports, popular culture, and technology to expose racial and gender-based bias and how it is propagated in our hyper-digital era.
Syms’ first experience making a work of art using AI was Mythiccbeing (2018), a virtual assistant, which she calls Teeny, in the form of a three-dimensional rendering of Syms’ face that mimics her unique facial expressions. This AI likeness of Syms is a chatbot akin to Apple’s Siri or Amazon’s Alexa. Many of us have grown accustomed to virtual assistants providing us with answers to questions, giving us recommendations, and performing quotidian tasks for us. Although Teeny mimics the role of an intelligent virtual assistant, its responses are absurd, gritty, and unabashedly assertive. Teeny’s persona is the antithesis of popular virtual assistants’ gendered characteristics and is a stark reminder that systemic race and gender discrimination is ingrained in the technology our society creates and consumes. Curator Janna Keegan (2020) explains that Teeny “upends the subservience of digital assistants like Apple’s Siri and Amazon’s Alexa, whose overtly gendered identities encourage an association between obedience and femininity. Such positioning suggests that the views perpetuated by the male-dominated world of engineering are another instance of built-in bias to overcome.”
While there is a glaring need for reflection and criticism around AI art, there are ample opportunities to use it responsibly, ethically, and pedagogically. Several contemporary artists who are aware of the pitfalls of AI are working with accessible machine-based learning platforms like DALL-E to make conceptual artwork, or using them as a sketchbook and research process for their artistic practice.

Syms notes that she enjoys the playfulness and immediacy of DALL-E. She approaches the platform in a conceptual way that relates to the fusion of humor and social commentary in her multimedia installation art. Rather than typing in literal commands, she describes feeding the app prompts that are poetic in nature, such as “writhing in contorted emotion” and “whenever I do something illogical, inefficient, unproductive or nonsensical I can just smile at my innate humanity.” Syms reflects, “I’m more interested in thinking about poetics. That’s what brought me to machine learning in the first place” (quoted in Furman, 2022). Another artist who has found a connection between AI and her overarching artistic practice is Beth Frey. She created an Instagram account for her DALL-E creations called Sentient Muppet Factory. Frey is careful not to use prompts that invoke the names of other artists or their artworks. Instead, she considers this project an exercise complementary to her multimedia artwork, which combines puppetry, performance, painting, sculpture, and video to create disjointed narratives that comment on how social and cultural identity are impacted in the digital age.
Frey states that: “one thing I appreciate about this experience is that it’s connected me to a lot of artists: filmmakers and puppeteers and make-up artists, among others, and I’m exploring ways to turn these images into something ‘real’, with actual people moving and interacting. I’d like to use AI in a way that is not the ‘death of art’ but rather a way to connect artists and collaborate on something beautiful and fun” (quoted in ladycultblog, 2022).
A pedagogical example of AI’s benefits for teaching, learning, and connecting with culture comes from a blog post by writer Cedar Sanderson, “Teaching Art History with an AI.” In the post, Sanderson describes how she used MidJourney to spark her son’s appreciation for art history. Sanderson used prompts that combined the names of famous artists and artworks with topical themes her son was interested in. Doing so made artists and iconic artworks that were initially unfamiliar to her son relevant through the content and context of the AI-generated image. They treated the process like a game, combining object- and/or action-based prompts that her son generated with the names of artists, ranging from the Renaissance to the present day, that Sanderson wanted him to be aware of. She asserts, “for a teenager who has had less than no interest in artists, this was fascinating for me to watch him suddenly absorbed in looking at art in a new way. He was looking at colors – like me, he was smitten with Maxfield Parrish’s blues – shapes, composition, and more. He was interested in the different schools of art” (Sanderson, 2022).
We might justifiably conclude that even this example touches upon the serious ethical issues that Crabapple has brought to our attention. There should be a compromise that allows AI to be used to discover the work of other artists while ensuring that their work is protected and their authorship acknowledged. Some solutions might include allowing artists to opt out of AI data sets when posting work online, or paying artists for their contributions to data sets. Another idea, proposed by artist Greg Rutkowski, is to exclude living artists and their artwork from AI data sets entirely (Small, 2022).
So there we have it: several examples of the hazards and benefits of AI as an art form and educational resource. It is apparent that AI has both progressive and problematic potential and should be used as sparingly and ethically as possible. There needs to be further exploration, discovery, and insight into AI as an art medium, but there should equally be an acute degree of moderation to ensure that artists’ intellectual property is safe and that the medium is representative of diverse identities. If we are going to coexist with smart machines, we need to find a way to model good habits of mind for them to learn from.
References, Notes, Suggested Reading:
Crabapple, Molly. “Beware a world where artists are replaced by robots. It’s starting now,” Los Angeles Times, 21 December 2022. https://www.latimes.com/opinion/story/2022-12-21/artificial-intelligence-artists-stability-ai-digital-images
Furman, Anna. “From ‘Barbies scissoring’ to ‘contorted emotion’: the artists using AI,” The Guardian, 11 July 2022. https://www.theguardian.com/technology/2022/jul/10/dall-e-artificial-intelligence-art
Garcia, Chris. “Harold Cohen and AARON—A 40-Year Collaboration,” CHM Blog, 23 August 2016. https://computerhistory.org/blog/harold-cohen-and-aaron-a-40-year-collaboration/
Keegan, Janna. “Martine Syms, ‘Threat Model,’ ‘MythiccBeing,’” from Beyond the Uncanny Valley: Being Human in the Age of AI, 19 June 2020. https://www.famsf.org/stories/martine-syms-threat-model-mythiccbeing
Ladycultblog. “Artist’s Spotlight: @sentientmuppetfactory!” ladycultblog, 7 December 2022. https://ladycultblog.com/2022/12/07/artists-spotlight-sentientmuppetfactory/
Sanderson, Cedar. “Teaching Art History with an AI,” Cedar Writes, 31 August 2022. https://www.cedarwrites.com/2022/08/31/teaching-art-history-with-an-ai/
Sharp, Sarah Rose. “DALL-E, the New AI Artist Who Can Draw Anything,” Hyperallergic, 13 April 2022. https://hyperallergic.com/723877/new-ai-artist-who-can-draw-anything/
Small, Zachary. “A.I. Is Exploding the Illustration World. Here’s How Artists Are Racing to Catch Up,” artnet, 24 October 2022. https://news.artnet.com/art-world/artificial-intelligence-illustration-spawning-2195919
Fascinating & thought provoking. Thank you for posting.
LikeLiked by 1 person
Hi Adam, Thanks I really enjoyed this introduction to AI. Perhaps you can speak to my class about AI this semester. Toby
LikeLiked by 1 person