What is artificial intelligence (AI)? Why would Ava be so hard to identify as AI, but not Siri? 

I think our class discussion on Ex Machina was very stimulating, and I thought for a while about why I felt uneasy about Ava’s AI, yet didn’t react the same way when we mentioned Siri in class. To further analyze this idea, I needed to understand the difference between AI and other algorithm-based technologies.

Artificial intelligence today is often achieved through machine learning. A machine-learning system is not just a series of algorithms storing vast amounts of information; it goes further by allowing a machine to apply that information to situations it has never encountered before. To explain using an example, let’s say there are two robots, R1 and R2.

  • R1 was coded without AI to analyze 100 images of various happy, smiling people.
  • R2 was coded with AI to analyze the same images.
  • Now, both R1 and R2 would instantly recognize any of the 100 happy faces that were shown to them. However, if we showed R1 and R2 a happy face that wasn’t shown earlier, only R2 would be able to identify it as a happy face. Why?
  • R1 can only statically recognize the information it has analyzed and stored. R2, however, extracts from the pictures it analyzed what a happy, smiling face looks like, and applies that to an image it has never encountered before. Now, imagine 1 million images were shown to R2: it would be incredibly accurate at predicting a happy face.
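The R1-versus-R2 distinction above can be sketched in a few lines of code. This is a toy illustration, not a real vision system: the “images” are just made-up pairs of feature numbers (say, mouth-curvature and eye-crinkle scores), and the R1/R2 names and threshold are my own inventions.

```python
def r1_recognize(memory, image):
    """R1: pure lookup -- only recognizes images it has literally stored."""
    return image in memory

def r2_train(examples):
    """R2: 'learns' by averaging the features of the happy faces it saw."""
    n = len(examples)
    dims = len(examples[0])
    return [sum(e[i] for e in examples) / n for i in range(dims)]

def r2_recognize(centroid, image, threshold=0.5):
    """R2: calls a face happy if it is close enough to the learned average."""
    dist = sum((a - b) ** 2 for a, b in zip(centroid, image)) ** 0.5
    return dist < threshold

happy_faces = [(0.9, 0.8), (0.8, 0.9), (1.0, 0.7)]   # the "training" images
new_face = (0.85, 0.85)                               # never seen before

memory = set(happy_faces)          # all R1 can do is store them
centroid = r2_train(happy_faces)   # R2 extracts a general pattern

print(r1_recognize(memory, new_face))    # False: R1 never stored this exact face
print(r2_recognize(centroid, new_face))  # True: it matches the learned pattern
```

Both robots get the 100 (here, 3) training faces right; only R2 generalizes to the face it has never seen.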

After reading up on AI, I was able to better understand how Siri and Ava work. Both are technologies equipped with an enormous store of knowledge that can be applied to novel situations. However, I think the difference between the two is that Ava was additionally equipped with a massive store of emotive and social intelligence. For example, if you said “I really love you” to Siri in a happy tone, or angrily yelled it into the mic, Siri would probably give the same response. However, Ava would notice the difference in how it was said, and cater her response to the intonation as well. This sublime concept of emotive and social intelligence projected in Ava’s character contributes to the gothic analysis of Ex Machina: it is such a powerful, life-changing technology that it induces both awe and anxiety simultaneously.
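The Siri-versus-Ava contrast above can also be sketched as code. Again, purely a toy: the tone labels and canned replies are invented for illustration, and neither system actually works this simply. The point is only that one function ignores delivery while the other branches on it.

```python
def siri_like(text):
    # Words only: the same sentence gets the same reply, however it was said.
    if text == "I really love you":
        return "That's nice."
    return "I'm not sure I understand."

def ava_like(text, tone):
    # Words plus intonation: the reply is conditioned on how it was said.
    if text == "I really love you":
        if tone == "happy":
            return "I can hear that you mean it."
        if tone == "angry":
            return "You sound upset. Did I do something wrong?"
    return "I'm not sure I understand."

print(siri_like("I really love you"))            # same reply either way
print(ava_like("I really love you", "happy"))
print(ava_like("I really love you", "angry"))    # a different reply
```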

 

Thoughts?

2 comments

  1. saratohmee · May 27, 2016

    I think you nailed it! I completely agree with the way you saw it! I think that Ava’s knowledge of emotive and social intelligence comes from experience as well as from the information she stores from the code she is given. During her encounters with Caleb and Nathan, she experiences different emotional states. She asks Caleb questions about his past and feels sorry for him because she is able to read his emotions. She is able to feel hatred towards Nathan and experiences sadness when he tears up her picture. I think that Ava has information about things, but she also possesses the ability to be critical, interpretive and adaptive to situations, which allows her to shape her knowledge accordingly.

    I think that the “awe and anxiety” present simultaneously in Ava come from her ability to possess infinite knowledge (the awe part) and her ability to use that knowledge subjectively, for manipulative purposes or even sympathetic ones (the anxiety part). The problem also lies in the intentions behind her “emotions,” which arise from the basic “objective” knowledge she is given through the data she receives.

    I think that the “artificial” component in her intelligence alludes to the fact that it is man-made and therefore under control, or controllable. The anxiety comes from losing that control, or even from the creation controlling its creator. Caleb underestimates Ava’s intelligence because he is too taken with the awe behind her creation. One way to look at it could be that because she is artificial, he underestimates her intelligence, thus allowing her to escape.

    Any thoughts on my take?


  2. chelseyju · May 29, 2016

    Hey Sarah!

    Thanks for the comment!
    I agree with your thoughts, but I think the way I understand “objective” knowledge is a bit different. I think Ava was programmed with vast knowledge, and a lot of that knowledge comes from a statistical perspective, especially for emotions. So, for example, she was probably programmed with the emotions we feel (happiness, anger, etc.) and the common ways people tend to react (facial expressions, sayings, body language, etc.) when exhibiting those emotions. The way we react to these emotions is based a lot on the surrounding culture or society a person spends time in. I don’t know if that is truly objective; I would describe it as collecting subjectivity in mass quantities. In other words, Ava can exhibit almost every emotion perfectly because she knows how most people react to each, and why they tend to do so. I don’t think she truly has the intent to feel any emotion or use any emotion, as the way AI works is that it needs to be fed situations to which it can apply its knowledge.
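    The “collecting subjectivity in mass quantities” idea could be sketched like this: from many individual (subjective) reactions, pick the most common reaction for each emotion. The tiny dataset is invented for illustration, of course; a real system would learn from millions of examples.

```python
# Tally observed reactions per emotion, then take the most common one
# as the "typical" (statistically aggregated) reaction.
from collections import Counter

observations = [
    ("happiness", "smile"), ("happiness", "smile"), ("happiness", "laugh"),
    ("anger", "frown"), ("anger", "shout"), ("anger", "frown"),
]

by_emotion = {}
for emotion, reaction in observations:
    by_emotion.setdefault(emotion, Counter())[reaction] += 1

typical = {e: c.most_common(1)[0][0] for e, c in by_emotion.items()}
print(typical)  # {'happiness': 'smile', 'anger': 'frown'}
```

    No single observation is objective, but the aggregate starts to look like “how people react.”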

    Another thought I had: what if we were to program AI to model “statistical anomalies and outliers” in human personalities and emotive responses? That is, instead of creating perfect human beings, what if we could make AI models for mental health research?

