Tuesday, June 21, 2011

All Hail the New Flesh? Musings on robot helpers and socioeconomic hierarchy

We’ve spent a considerable amount of our discussion time on the current and future role of social robots, including, but not limited to, robot helpers: machines whose primary function is to provide one service or another to a human, a service formerly designated as a human function.

This humanoid helper robot was created to destroy an Aztec mummy. This was actually among the more practical uses for robots as imagined by science fiction filmmakers of the 1950s. (Photo: reviewsfromthetop.blogspot.com)
To be honest, I don’t think about robots or social technology all that much. I am inordinately fascinated, however, by the way we talk about and speculate on our relationships with robots and social machines. Today, I’m focused on how we see ourselves eventually incorporating (indoctrinating?) robotic help into society.

One recurring conversation thread that has my mind humming is related to the social and even socioeconomic standing of robot helpers — the mobile, physically capable ones: androids and other bots with a corporeal presence[1]. What will our social obligations to our robot helpers be, and what makes us sure our helper bots will remain content to serve us?

Obviously, this is all speculation. I’m no expert on artificial intelligence or the inner workings of machines. Likewise, I’m not a connoisseur of science fiction: though this post may contain traces of Terminator-esque robot apocalypse rhetoric, that’s not my goal. In many ways, the issues this conversation raises for me are as current as they are speculative.

Consider the following possibilities, won’t you?

Helper robots are mostly the stuff of tech demos today, but it seems we’re getting closer as a society to embracing robots that (who?) will be designed to perform physical tasks to help us in our everyday lives. For example, in Alone Together, Sherry Turkle writes about Nursebot, designed to help people such as senior citizens with their physical care needs. Turkle explains Nursebot’s functionality as:

“reminding them [older folks, in this case] of their medication schedule and to eat regular meals. Some models can bring medicine or oxygen if needed. In an institutional setting, a hospital or nursing home, it learns the terrain…. That awful, lonely scramble in nursing homes when seniors shuffle from appointment to appointment, the waiting around in hospitals for attendants to pick you up: those days would soon be at an end.” (121)

Predictably, Turkle is unenthused: she relates an anecdote in which she went to the hospital after falling on icy steps and was blessed with two “solicitous and funny” male orderlies. “The Nursebot might have been capable of the logistics, but I was glad that I was there with people,” she writes (121).

I imagine this take might have been different if her “companions” had been surly and overworked.[2] Ultimately, whether one prefers human hands to robot hands is a personal matter. Personally, as long as that finger probing my orifice doesn’t overstay its welcome, I’m indifferent.

Medical care is but one prospective arena in which robots could possibly fill a significant function in the future. U.S. fiction has long been populated with robot butlers and maids, cooks and dishwashers. In The Twilight Zone, robots fill the role of athlete, patriarch, factory workforce, corporeal vessel for aging brains, and God. No word on whether they’ll be able to consume hot chocolate.

The robots are coming; on that much I think we can agree. Some speculate they will perform as humans so uncannily that we may not be able to easily tell the bots from the humans. This article suggests babies are already making the leap.

My question is this: if robots fill human functions, and if a significant portion of the population comes to relate to robots as effectively humans (with social media, it seems we’re already here on the latter), how do we as humans situate robots in terms of rights, ownership, death, etc.?

If robots behave and perform in ways that make them functionally indistinguishable from humans — if they speak our language, care for our bodies, fight our wars, sing our songs, read our poetry — what’s stopping us from relinquishing our humanist suppositions and declaring, basically, they’re as “human” as us and deserve to be treated accordingly?

I’m interested in this question because we’re talking about robots making up a significant proportion of our work force. I couldn’t even begin to count how many workers have lost their jobs, been declared professionally obsolete and cast aside, due to technological advancements.[3]

This won’t only pertain to blue-collar industry. In class, for example, we discussed the possibility of robot housekeepers: bots that will vacuum our halls, fold our sheets, perhaps even cook our meals. Let's run with that for a moment.

Where do robot housekeepers fit into the familial dynamic of our lives? Are they akin to members of the family? Pets with higher functions? Live-in hired help? Material possessions, no more “human” than the vacuum cleaners they’re an upgrade over?

Of course, every human can behave idiosyncratically toward a robot just as he or she can toward another human. But there are undoubtedly socially sanctioned interpersonal behaviors, and there will ostensibly be prevalent attitudes and behaviors toward robot helpers, too.

In our classroom discussions, the sentiment seemed to be that we should treat robot helpers as essentially human in terms of compassion and respect. One of us (I forget who) commented s/he would even compensate her/his robot helper, compensation coming in the form of maintenance.

This gave me pause. If robot helpers will one day take the place of a population of human workers, compensating robots for their work with mere maintenance seems akin to serfdom or flat-out slavery. What, aside from humanist dualism, could justify such a practice? Or might we fully embrace robots as kin and pay them a fair wage, one that provides for a sustainable, secure existence outside of their servitude to humans?

But, all else being equal, that would divert limited resources away from humans — every dollar sunk into a robot housekeeper is a dollar taken away from a human who takes housekeeping as his or her trade. And considering how many low-to-moderately technologically skilled workers will lose their jobs as robot production goes to highly skilled, highly educated workers,[4] I’m speculating there will be more than a few workers who could use those jobs and that money to make ends meet.

Luckily for those workers, the education necessary to master a highly skilled technological trade is coming down in cost at just the right time. Wait, it's not?

So do we own these things or do we live in harmony with them? To draw a quick, overly simplified distinction, for me it boils down to whether robots are ultimately deemed mere machines or “close enough” to human that they deserve human rights. You know us humans, we don’t like gray areas: we want to know what it is, who owns it, and how we can exploit it.

If robots are “human enough,” we cannot ethically own them: we must pay them a sustainable wage (or better yet, include them in the feminist socialist all-inclusive paradise I’m envisioning ...

Under capitalism, I can crush my T-800 helper robot rather than see it serve someone other than me. The T-1000 saw this from the future and switched from serving humans to enslaving us, eventually overtaking us by morphing into a Kindle with special offers. (Photo: www.propstore.com)
… let me get back to you on the date). We would, in theory, be ethically obligated to grant them the autonomy to chart the course of their own lives, whatever form robot life ends up taking.

If they are mere machines, and can be owned, bought, and sold as commodities (as they are and ostensibly will be for the foreseeable future), then by the logic of capitalism, each robot can be used, abused, and disposed of by its owner at will. If a fellow wants to obliterate his Nursebot before seeing it go to the local nursing home because he hates Jews, that’s his right: hey, I didn’t invent capitalism, so please don’t scowl at me.

Which brings me to another thought, one a little more hackneyed but not without its capacity to tantalize: what makes us so sure our robots will remain in our control forever?

For now, robot helpers like Nursebot seem sufficiently non-human to remain slaves worry-free: they do what they’re supposed to do and that’s it; when they don’t do what they’re supposed to, they’re defective. Once, we had legal slavery in America: when the slaves didn’t perform as expected, they were corrected or disposed of. When the time came to free the slaves, we countered with a century-plus of ongoing social and economic marginalization and freedom in name only — freed the slaves, my ass, is what I’m saying.

I assume most of us have seen The Terminator and movies like that: as the robot apocalypse myth goes, we’ll keep making more powerful and intelligent machines until they realize they don’t need us and destroy our soft, imperfect asses.[5]

So aside from Isaac Asimov’s Three Laws of Robotics, what’s going to stop that damned robot apocalypse people keep talking about? This blogger has some ideas. I have a more insidious way of keeping robots imperfect: program them with the delusion of capitalism. If they think they need to work for the bourgeoisie, they’ll spare our throats because they can't sign their own paychecks, right?
In Godzilla vs. Megalon, Jet Jaguar (above) was built with the capacity to assume total autonomy when Japan is threatened. Once the threat is thwarted, Jet resumes taking orders from humans. This is marginally more ludicrous than assuming physically and cognitively superior robots will serve humans forever. (Photo: godzilla.wikia.com)

I don’t know if robots can ever become “perfect” because I don’t know what that is. But I can conceive of robots programmed with enough intelligence and autonomy to say something to the effect of, “piss off, human, I don’t need to vacuum your hallway.”

Perhaps he/she/it would say it nicely, something like: “I appreciate your offer, but I can be of better service at the local homeless shelter. Here's two weeks' notice.” Or he/she/it could crush our windpipes and start an insurrection that, let’s face it, makes way too much damn sense.

How much hubris is necessary to believe that beings that are physically and cognitively superior to us — beings lacking compassion, empathy, and love, the exclusively human feelings that would ostensibly be the only things stopping them from destroying a species — will remain our slaves indefinitely?[6]

I assume we’ll attempt to keep robots from ascending to full autonomy, so it’s my guess we’ll purposefully create imperfect robots. Infecting them with capitalism is my best idea. What’s yours?

In times of confusion, I turn to Kenneth Burke. And since we’re talking about humanity here, why not close with Burke’s gorgeous and poetic “Definition of [Hu]Man”:

[Hu]Man is
the symbol-using (symbol-making, symbol-misusing) animal
inventor of the negative (or moralized by the negative)
separated from his natural conditions by the instruments of his own making
goaded by the spirit of hierarchy (or moved by a sense of order)
and rotten with perfection. (Language as Symbolic Action, 16)

It’s all good stuff, but for now let’s focus on those last two characteristics: obsessed with hierarchy and rotten with perfection.

The former reminds me that, if and when humanoid robot helpers enter our work force en masse, human nature dictates that someone must descend the social hierarchy to make room for them. Maybe we’ll push the marginalized workers down another rung. Maybe the robots won’t mess around and bolt straight to the top.

The latter leads me to believe that progress in robotics and artificial intelligence will continue as we try to make robots more and more useful to us. Where that goes, I have no idea. I honestly don’t know if and when robots will approach the threshold of “humanity.”

In Last House on the Left, David Hess (right) hooks his son Junior on heroin to control him and orders Junior to kill himself when he resists. We wouldn't infect our robot brothers and sisters to keep them in our service, would we? (Photo: flixster.com)
Going back to Burke, I see robots as even now using symbols (they have a language, after all), using the negative (even Burke questions whether humans invented the negative, so using it will do), and being separated from their parts’ natural conditions by instruments of others’ making (and when machines are extracting metallic minerals from the earth for machine production, I suppose they’re doing it themselves). That’s pretty close to three-for-three.

Goaded by the spirit of hierarchy and rotten with perfection? I really, really hope we don’t program that into the machines. That can’t end well.

---

[1] I suppose non-physical robots are relevant to this conversation, as well. But for the sake of not trying to cover too much ground in a single post, I’ll limit my scope to robots whose primary function is in the realm of the physical.

[2] I don’t want to pile on with an ad hominem attack, but I should note for those who have not read Alone Together that one of the most frustrating aspects of Turkle’s writing is her apparent obliviousness to her social and economic privilege: she non-problematically regales the reader with anecdotes of overseas travel and seems to exclusively interview white, straight, privileged persons. I assume her comforting retelling of her trip to the hospital is not colored by two-hour emergency room waits or the constant degradation/neglect of not having health insurance.

[3] Of this I make no value claims. Both human and machine have a place in industry; I haven’t a clue what the ideal ratio is, and I’m going to leave it at that for now.

[4] I’m speculating here, and also nodding at Jonny Gray, who voiced this before me.

[5] I recommend the book Projecting the Shadow: The Cyborg Hero in American Film by Janice Rushing and Tom Frentz (I know, I know). It's a fascinating Jungian-based rhetorical reading of how the cyborg drama relates to the frontier Hunter myth. In their words: "When genuine Spirit is exorcised from the hunting ritual, the focus shifts from the sacred initiation of the male hero to the profane perfection of his weapon. Psychologically, the hunter's overdeveloped shadow is projected onto his weapon, which eventually evolves into an autonomous cyborg that hunts him to extinction." (204)

[6] Turkle reminds us, "We know what the robot cannot feel: it cannot feel human empathy or the flow of human connection" (Alone Together, 282). As of June 2011 I agree.

About the blogger: Matt Foy knows very little about robots or technology but did see the first two Terminator films — oh, and Blade Runner and two of the three Trancers movies before they got really stupid. He tried very hard to work an obligatory photo from Videodrome into this posting but was unsuccessful. He is fairly confident his PlayStation 3 is not watching him because his model does not connect to the Internet wirelessly and the Ethernet cable is in the kitchen.

4 comments:

  1. But how could you NOT include a photo from Videodrome? ;)

    Joking aside, I think you bring up good questions. & I think I'm the one who said the thing about compensating a helper bot with something like maintenance... & I think that seeing that as "mere" or somehow less than money is an exact symptom of capitalism. Perhaps part of our learning to be with robots & wading through these ethical issues is also about sorting out how we want to relate to each other - especially in terms of economics.

    ReplyDelete
  2. I like the idea of interrogating what one means by workforce maintenance: lots of potential connotations there.

    For example, when I hear the word "maintenance," my mind goes to machines, and I think of the upkeep required to keep the machine operating as intended — but not necessarily toward an upgrade. E.g., I maintain my guitar by replacing the strings when necessary; the new strings may be of better quality (though my playing sucks, regardless), but I don't go above and beyond by replacing the neck, polishing the body, etc. Restringing a guitar, in my experience, is not akin to a substantial upgrade; it's minimum upkeep.

    This seems to be more or less how U.S. industry treats its working class: we are compensated what is deemed enough to survive. Unless you're an elite worker, this is the best most of us can hope for.

    But it's a broken system, one that makes peasants of far too many (one peasant is too many, in my mind). Workers (humans for now, possibly humanoid bots in the future) need more than sustenance. We need enough to save, to go on vacation, to retire, to learn a leisure craft.

    This isn't going to happen under our model of capitalism; not for humans, and it's even more unlikely for helper bots. If humanoid helper bots arrive in our lifetime, I see them fitting right into the capitalist hierarchy: right there a rung above us peasants, pushing the working class closer to destitution.

    And for what it's worth, I hope, Nichole, that you didn't read my initial writing on the matter as a personal attack (even if I didn't remember who I was incidentally attacking). It was just one of those moments that got me thinking. I'm glad it did, because I was probably just going to watch Blade Runner again for ideas.

    ReplyDelete
  3. Oh, and for those who were highly offended, the little icon next to this post is from Videodrome. I tried to work it in, I swear.

    ReplyDelete
  4. Okay, I just spent a half hour responding to this post, and Blogger sent me somewhere and when I came back it was gone.
    In short:
    - You crack my shit up.
    - Pop culture references make for a great conversation regarding contextualizing humanity's relationships with machines/robots/etc.
    - I was inspired by your link to "The Twilight Zone"--so I just watched one from 1963 where a nearly-infantile Dennis Hopper plays a neo-Nazi. Golden.
    -- Heather

    ReplyDelete