An essay on thought, purpose, and capital: Where can we draw the line?

The first episode of season 2 of Black Mirror, “Be Right Back,” asks an extremely forward-looking yet age-old question about identity. It gives us a reversed version of Theseus’ Paradox, which asks how much of something we can replace before it is no longer the original. The episode instead asks how much of something we can replicate before it essentially becomes the original. This is where we become tangled in our concepts of humanity – is there a line beyond which technology is so human-like that it can be accepted as human?

In the episode, Martha’s husband Ash dies in an undisclosed accident. Noticing her heartbreak, a woman at a gathering offers her a service: a chatbot that pulls from Ash’s online information to replicate his messages. Although Martha is opposed at first, in search of solace after learning she is pregnant, she responds to one of its emails and becomes hooked. After she gives it access to Ash’s personal information to improve its mimicry, it is able to text, call, and copy his unique inflection almost exactly. Martha becomes so attached to her phone as an object that she is more than willing to take the next step: embodying this AI personality.

I want to call attention to the final scene, which takes place a few years after her baby is born. While celebrating a birthday, Martha allows her child up into the attic, where Ash stands facing the wall, completely still. When the child runs over, he suddenly ‘turns on’ and acts like a normal father figure, while Martha waits uncomfortably at the bottom of the stairs. This raises the question we have subconsciously been asking throughout the episode: will Martha be able to accept Ash as human? The answer this ending gives us is that he is human enough that she cannot dispose of him, but not human enough to be a normal father. This gray area, between human and ‘close enough’, is where the questions about humanity, technology, and commodification lie.

To begin unpacking which concepts of humanity the episode tests, we can link Ash’s situation to the intuitions about humanity that we may all hold subconsciously. After all, everyone, including Martha, knows he is not human for some reason. Let’s start by arguing that he is human, and then see where the gaps lie. One of the more obvious traits that makes Ash human is the fact that he is embodied. Although Ash is already human-like as a voice on the phone, the moment he is placed in an exact replica of his source’s body is the moment the audience realizes the two are becoming indistinguishable. This also explains why Martha does not begin to reject him until he is embodied: our ability to exist in relation to time and space is distinctly human. So when Ash reveals that he cannot move more than 20 meters from the bathtub where he was revived, the illusion largely breaks. (His limited range alone does not make him inhuman to the audience; there are situations where humans cannot leave a certain area, and if Ash could move without limit, that would not convince us he is really Ash either.) Interactions with the physical world show that we have agency over ourselves, and the ability to interact with other people through nonverbal cues extends our capacity to exist in networks of community. Furthermore, interacting with society in ways that are only possible through embodiment gives people their sense of self-awareness and places them in a larger social context. In Ash’s case, his awareness of his own condition is eerily human – he often asserts that he is ‘not real’ in order to comfort and relate to Martha. Although he linguistically denies his humanity, that claim is overruled by the self-awareness, perception, and physicality he displays. Despite his limitations, Ash’s embodiment makes him human – or, more importantly, it is not what makes him inhuman.

Ash’s perceptiveness and awareness of his own condition raise a further question about how his brain is really working: is it more human or more machine? My claims about how human consciousness functions draw on Jerrold Seigel’s book The Idea of the Self. Seigel describes one of the dichotomies of our consciousness as “a continuum between reflexivity as an automatic response and reflectivity as a conscious endeavor” (Seigel 19). Most large language models and chatbots in use today can be considered reflexive in this sense: they pull from a known body of information on a call-and-response basis, and the process is entirely automatic, a series of statistical computations deciding which words come next. For humans, a reflexive response might be a strong reaction to an emotional trigger. Reflective responses, by contrast, address “the tensions or conflicts between what biology demands and what social and cultural existence imposes” (Seigel 17); they are our conscious mind’s ability to make decisions based on social, physical, economic, political, and other abstract outside factors. That level of contemplative processing seems uniquely human.
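To make the “reflexive” side of Seigel’s continuum concrete, here is a deliberately toy sketch (a bigram word model, far simpler than any real large language model): the next word is picked automatically from counts over a fixed corpus, with no deliberation at all. The corpus and function names are my own illustrations, not anything from the episode.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the "known body of information" a chatbot
# pulls from. Every response below is reflexive: an automatic statistical
# lookup, never a conscious endeavor.
corpus = "i miss you . i love you . i miss him .".split()

# Count which words have followed each word.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def next_word(prev, rng=random.Random(0)):
    """Automatically pick a statistically likely next word."""
    options = followers.get(prev)
    return rng.choice(options) if options else "."

print(next_word("i"))  # "miss" or "love", chosen by frequency, not thought
```

The point of the sketch is that nothing in it can weigh “social and cultural existence” against anything; it can only replay patterns it has already seen, which is exactly the reflexive end of the continuum.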

I find myself caught between these two frameworks, wondering whether Ash has reflective thought – the ability to process, digest, come to terms with, and communicate new information. I am thinking of the scene where Martha asks Ash’s clone to breathe lightly while sleeping to seem more realistic, and he keeps trying, taking in her corrections, until he gets it right. Although the clone’s personality is said to come from a digital body of information about Ash, that body grows every moment he exists. So if the software can process new information, digest it into its pool of knowledge, and communicate based on what it has just learned, how far is that from what humans do? Martha knows it is not the real Ash, but the way he can learn and improve (potentially until he reaches a perfect Ash personality) is undeniably human. My point is this: as technology continues to advance, the threshold between human consciousness and the brain function technology can mimic becomes frighteningly thin. If a technology can process and reflect on information the way humans do, and is embodied, what makes it inhuman? If this episode were real life, what right would I have to deny his humanity?
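The “growing pool of knowledge” can also be sketched in the same toy terms, assuming nothing about how the show’s software actually works: each correction from Martha becomes new data, and once the corrections outweigh the old habit, the behavior changes. The sentences here are invented stand-ins for the breathing scene.

```python
from collections import Counter, defaultdict

# counts[prev][nxt] = how many times nxt has followed prev so far.
# The pool of knowledge is not fixed; every interaction enlarges it.
counts = defaultdict(Counter)

def learn(sentence):
    """Digest a new observation into the existing pool of knowledge."""
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def most_likely_after(prev):
    """Reflexive recall: the most common continuation seen so far."""
    return counts[prev].most_common(1)[0][0] if counts[prev] else None

learn("breathe softly while sleeping")
print(most_likely_after("breathe"))  # "softly": the original habit

# Martha's repeated corrections are folded in as new data,
# and the most likely behavior updates accordingly.
learn("breathe gently while sleeping")
learn("breathe gently while sleeping")
print(most_likely_after("breathe"))  # now "gently"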

Yet even drawing on the concepts of humanity we have defined through The Idea of the Self, we are still largely unable to find the missing thread at stake: why doesn’t Martha accept Ash’s clone as human?

I think the issue is less about whether Martha can accept the clone and more about whether Martha can accept herself as being capitalized upon. Although Ash’s clone has personal agency and self-awareness, it cannot function outside the purpose its company gave it. This is the tension: Ash’s new brain can encode and decode information as effectively as a human’s, but what makes him inhuman is his inability to change his core purpose, which is to keep capitalizing on Martha’s grief. The episode’s portrayal of AI sits in a long line of media and social construction that can only view AI through a corporate lens; our capitalist society looks at things only in terms of their working capital. This is how humans get dehumanized: by being commodified for labor. I see it as a scale. Ash’s life is 100% labor, so he is not human. This is the missing thread that separates these concepts. But the scale needs care: even if I performed labor 100% of the time, I would not be a robot. Only if my brain were programmed to serve the purpose of labor alone – if all of my reflective and reflexive thoughts revolved around a company’s needs – might I be a robot. That is impossible for a person, which is exactly why people cannot be robots. Conversely, if robots become advanced enough to break free from the purpose they were created to serve, that could make robots people. This is inconceivable in our current society only because we cannot imagine a world without capital, in which a robot would be anything other than an asset. So, to circle back to Theseus’ question: if every part of a ship is replaced, is it still the original ship? It depends on who is doing the replacing.


Works Cited

Brooker, Charlie. “Be Right Back.” Black Mirror, season 2, episode 1, Channel 4, 11 Feb. 2013.

Seigel, Jerrold. The Idea of the Self: Thought and Experience in Western Europe Since the Seventeenth Century. Cambridge University Press, 2005.

