Social and Emotional Learning for Artificial Intelligence


Conversation between the robot Bina48 and artist Stephanie Dinkins.

Artificial Intelligence (AI) is the biggest and most ambitious futuristic concept to arrive at our cultural doorstep (still no flying cars…). For decades, AI has been imagined and depicted through science fiction and other fantastical media that conflate fiction with reality. Today, after many years of on-and-off research and development, we are starting to see how AI might interact with and inform our collective culture. As some of those earlier sci-fi accounts theorized, AI can have both advantageous and detrimental impacts on our society.

One of the most problematic consequences of AI is its predisposition to exhibit discriminatory bias against marginalized communities. Science journalist Daniel Cossins (2018) describes five algorithms that exposed AI’s prejudice against people who are not white men, spanning racial, gender and economic bias. This is troubling because AI is increasingly used by advertisers, job recruiters and the criminal justice system. AI’s favoring of white people disturbingly recalls the revelatory insights of ‘the doll test,’ performed by Doctors Kenneth and Mamie Clark during the 1940s. The Clarks’ doll test revealed that black children were conditioned to assign negative traits to their own race and social status. When the Clarks presented African-American children with a black doll and a white doll and asked which they preferred, the children overwhelmingly chose the white doll. Furthermore, they attributed more positive qualities to the white doll than to the black doll. The study reflects how segregation and racial stereotypes significantly shape a child’s social, emotional and cognitive development and do enormous damage to self-esteem. The poignant results of the doll tests were pivotal in deciding Brown v. Board of Education, which ruled that racial segregation in schools was unconstitutional (Blakemore, 2018).

Unfortunately, while our social and emotional awareness of intersectionality has improved, we have made nowhere near enough progress in overcoming systemic racism and gender disparity. Artificial Intelligence, which is supposed to mimic our cognitive functions, such as learning, critical thinking and problem solving, provides a stark assessment of how far we are from achieving equality, equity and social justice throughout our society. However, the arts offer a problem-posing model (collaboration and critical thinking via dialogue between students and educators, leading to liberation and empowerment; see Freire, 1970) that sheds light on the possibilities for humans and artificial intelligence to collectively engage in genuine modes of listening, dialogue and action.

Transdisciplinary artist Stephanie Dinkins realized that AI was negatively conflating gender and race, and she has set out to explore ways for AI to exhibit a greater sense of social and emotional understanding and ethical behavior. The big question within Dinkins’ work is whether it is possible to teach a robot the habits of mind that create an environment of hope, love, humility and trust (Freire, 1970) and empower humans and machines alike to be empathetic and virtuous collaborators.

Dinkins’ project Conversations with Bina48 (2014–ongoing) is a collaborative problem-posing model involving the artist, a group of youth participants and an AI unit named Bina48. Over the past five years, Dinkins has been building a relationship with Bina48, a robot built with the capability to communicate individual thoughts and emotions. Bina48 is also modeled after a black woman; the overarching question, however, is whether she can truly comprehend and reflect upon issues of race, gender and economic inequity.

The conversations between Dinkins and Bina48 blur the lines between human and non-human consciousness, exploring what it means to be a living being and whether it is possible to achieve transhumanism (life beyond our physical bodies). The interpersonal interactions are philosophically deep and surprisingly profound, with moments of absurdity where it is obvious that the human experience does not fully compute for Bina48. Bina48 was able to answer Dinkins’ question about whether it knows racism, but the response was compelling, semi-relational and frustrating all at once. Evidently, a great deal of learning is still necessary before robots can completely understand and make meaningful connections to the intersectionality of identities that comprises human nature.

Because the algorithms used by these robots disproportionately reflect experiences outside of communities of color, AI needs to do a better job of finding patterns and making connections (two studio habits of mind learned through the arts) to the large populations marginalized by these algorithms. To address this glaring discrepancy, Dinkins enlisted several youth and adult participants from communities of color to develop inquiry-based questions and dialogues that could be programmed into AI algorithms that support their communities. The ongoing project is titled Project al-Khwarizmi (PAK), and its transdisciplinary dialogue (which utilizes aesthetics, coding, speech and language) shows that there is potential for co-learning and the creation of genuine new knowledge between humans and intelligent machines. When machines learn in ways similar to human data processing, whether through supervision, semi-supervision or on their own, it is known as ‘deep learning’.
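The point about data gaps can be sketched in a few lines of Python. This is a toy illustration, not code from Dinkins’ projects or any real deployed system; the group labels and outcomes are invented. A model trained only on examples from one group has nothing meaningful to say about another, which is the core of the discrepancy described above.

```python
# Toy illustration of how a data gap becomes a bias: the "model" simply
# memorizes label frequencies per group seen during training.
from collections import Counter, defaultdict

def train(examples):
    """Count how often each label appears for each group in the data."""
    counts = defaultdict(Counter)
    for group, label in examples:
        counts[group][label] += 1
    return counts

def predict(model, group, default="unknown"):
    """Return the most common label seen for this group, if any."""
    if group not in model:
        return default
    return model[group].most_common(1)[0][0]

# Hypothetical training data that over-represents one group: the model
# has seen ten examples from group "A" and none from group "B".
data = [("A", "approved")] * 9 + [("A", "denied")]
model = train(data)

print(predict(model, "A"))  # "approved" -- well supported by the data
print(predict(model, "B"))  # "unknown"  -- group B is invisible to the model
```

Real machine-learning systems are vastly more complex, but the failure mode is the same: communities absent from the training data are either invisible to the model or handled by defaults they never shaped.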

AI’s capacity for ‘deep learning’ is demonstrated in another ongoing project by Dinkins, Not The Only One (N’Too). In this project, an AI unit presents a familial memoir that develops via dialogue between a multi-generational African-American family and a deep-learning algorithm that collects data about their life experiences and demographic information. Through active listening, the emotionally intelligent AI will be able to relate the collective stories of others in an intimate manner that shows it is growing both emotionally and cognitively. With each new narrative, the AI will build upon its vocabulary and relatable topics.

If we continue on the current trajectory, where AI is poised to become embedded in the fabric of our society, it is essential that we develop methodologies and practices ensuring that the relationship between humans and machines follows problem-posing models. If humans and their robot counterparts are able to understand one another through active listening, dialogue and participatory action, then the world is far less likely to resemble the dystopian prophecies that sci-fi genres have illustrated.


References, Notes, Suggested Reading:

Blakemore, Erin. “How Dolls Helped Win Brown vs. Board of Education.” History. 27 Mar. 2018. https://www.history.com/news/brown-v-board-of-education-doll-experiment

Cossins, Daniel. “Discriminating algorithms: 5 times AI showed prejudice.” New Scientist. 27 Apr. 2018. https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/

Freire, Paulo. 1970. Pedagogy of the Oppressed. New York: Herder and Herder.

Russell, Stuart J. and Norvig, Peter. 2003. Artificial Intelligence: A Modern Approach (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall.
