
Tuesday, July 5, 2011

Robot Apocalypse

A few years ago, the computer science department at our university held a “computer science day” to recruit high school students. At the time, I was assisting a professor in the department who had received a grant for five robots to develop a multi-agent system. My job was to help program the robots so that they could communicate with one another to avoid obstacles, navigate a room autonomously, and be controlled remotely by an operator. These tasks were simple to accomplish and represented the early stages of a much larger project.


P3-DX Robots
The robots looked like the machines pictured in this post. Nothing about the design suggested a human appearance – just machines that cruised around on wheels. Each robot carried six sonar sensors. With a little programming, the sensors allowed a robot to determine the distance between itself and any obstacle in its path. This helped the robots communicate with one another to avoid collisions when navigating autonomously. If a human wished to intervene, we designed a touch-screen tablet that an operator could use to control the robots remotely, and the operator could see what the robots “saw” through a webcam mounted on each one. This allowed the operator to navigate the machines even from another room.
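The control logic was roughly as simple as it sounds. Here is a minimal sketch in Python of the kind of threshold-based avoidance described above; the sensor layout and speed values are invented for illustration (the actual robots were programmed against their vendor's SDK, not this code):

```python
# Hypothetical sketch of sonar-based obstacle avoidance.
# The thresholds and the (speed, turn) interface are assumptions,
# not the actual P3-DX code.

STOP_DISTANCE_MM = 400    # halt if anything is closer than this
SLOW_DISTANCE_MM = 1000   # slow down and steer away inside this range

def avoid_step(sonar_mm):
    """Given six sonar readings (left to right, in mm),
    return (forward_speed, turn_rate) for one control step."""
    nearest = min(sonar_mm)
    nearest_idx = sonar_mm.index(nearest)
    if nearest < STOP_DISTANCE_MM:
        return 0.0, 0.0                      # too close: stop
    if nearest < SLOW_DISTANCE_MM:
        # steer away from the side holding the closest obstacle
        turn = 0.5 if nearest_idx < len(sonar_mm) // 2 else -0.5
        return 0.2, turn                     # creep forward while turning
    return 1.0, 0.0                          # clear path: full speed
```

A loop like this also explains the holes in Faner Hall: if a sonar ping returns a distance that is too large (say, because it passed through drywall), the robot happily reports a “clear path” and drives at full speed into the wall.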

We gave this technology to high school students during computer science day because the robots were fun to use and we thought students would find them entertaining. During the demonstration, the robots' sonar pings sometimes traveled through a wall and reflected off the studs, throwing off the distance the robots calculated between themselves and the wall. As a result, the robots occasionally rammed into walls at full speed and made a few (additional) holes in Faner Hall.

The emotional impact varied. The high school students and we winced when the robots slammed into the wall, but for different reasons. We didn't want the robots damaged primarily because they were expensive, and because we had spent a lot of time working with them. Nothing more. The robots were simply machines. It wasn't the “feeling” of being intensely connected with non-living objects that many individuals describe in Sherry Turkle's book Alone Together. The robot was programmed to conduct simple tasks, and it just needed to work at the end of the day.

Image attribution: University of Cincinnati's Cooperative Distributed Systems Lab

The high school students in attendance felt a bit differently. The ability to control the robots was exciting, and they didn't want to lose a source of entertainment. Some probably saw a robot slamming into a wall as serious excitement, especially when it created a new hole. Each unintended collision made many students want to take control of the robots themselves. The connection that developed was between people competing over who could operate the robots most effectively, not necessarily between humans and the machines themselves. In this case, the technology facilitated bonding and built friendships in the form of competition. It was healthy. I suspect that, for the high school students, watching the robots accidentally slam into the walls was a safe way to relieve some aggression indirectly – similar to why people watch boxing or other aggressive sports. I also suspect that if Sherry Turkle were reading this post, she would express her legitimate concern to me and disagree completely, claiming these actions are destructive to society.

Later, when the robots were navigating autonomously, we programmed them to avoid obstacles and each other. Students often took this as an opportunity to walk into a group of autonomously operating robots, curious how the machines would react. As expected, the robots tried to move quickly out of the way to avoid the students and each other, but the students also had to move to avoid them in the chaos. Human and robot each manipulated the other's actions in response to a disturbance. The high school students seemed to enjoy this the most. Perhaps it was the mystery of the robot that they found intriguing. It makes me question whether the “connection” Sherry Turkle describes between humans and robots would remain once the novelty diminished. Much like a human relationship, it's likely to get boring if it remains predictable. As a programmer, I knew how the machine would react, so perhaps my perception of the robot differed from what the high school students felt.

Image attribution: Random Robotics

We also programmed the robots to follow people who came within a certain distance. The robots paid attention to the high school students and responded to their behavior and interactions by following them. When the occasional pedestrian passed too close to our demonstration, the robots would stop following the students and begin to follow the pedestrian instead. At first this was amusing because it was completely unexpected: innocent bystanders were suddenly in control of our robots. Some bystanders were anxious because they had accidentally influenced the demonstration. Others enjoyed being the center of attention. Realizing this, students began to compete over who could get the most robots to follow them. It was a competition, and a connection, between people... not between human and machine.
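The “stolen robot” effect falls straight out of the simplest possible follow rule: track the nearest person in range. A hypothetical sketch (the person-detection interface is invented; our actual tracking code worked differently under the hood):

```python
# Hypothetical sketch of the follow-the-nearest-person behavior.
# person IDs and the dict interface are assumptions for illustration.

FOLLOW_RANGE_MM = 3000   # only follow people within this distance

def choose_target(people_mm):
    """people_mm maps person_id -> distance in mm.
    Return the id of the nearest person inside follow range, or None.
    Whoever is nearest wins, which is exactly why a passing pedestrian
    could 'steal' a robot away from the students."""
    in_range = {pid: d for pid, d in people_mm.items() if d <= FOLLOW_RANGE_MM}
    if not in_range:
        return None
    return min(in_range, key=in_range.get)
```

Nothing in that rule knows the difference between a demo participant and a bystander, so the bystanders became operators by accident.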

This robot demonstration was on my mind while reading Sherry Turkle's Alone Together. As programmers, when the robots hit a wall, we sometimes just felt bad about the loss of value in the robots and the time put into them. It was like a car... we work hard to pay for our vehicles and feel terrible when they get rear-ended in a parking lot. We felt the same when the robots had a collision, which is why I found it so difficult to relate to Turkle's stories. When students had the attention of the robots, there was a feeling of satisfaction because of the human interactions that took place. These interactions were facilitated by the technology, and they were healthy – even when things went wrong. When that attention was lost, there was disappointment. Communication, even with objects, can play with our emotions in unexpected ways. The outcome isn't always terrible, either.

Wednesday, June 22, 2011

Zen and the Art of Technological Engagement

"Vegetarians all say the same thing . . . 'I don't watch TV.'"
I heard this quote while I was road-tripping down the coast of California. I don't even know if Adam Carolla really said it, but I immediately loved it. I found it hilarious. I was a vegetarian. I definitely didn't watch TV. My pals and I were what you might consider neo-hippies: we wore bright crazy clothes, we traveled great distances to see our favorite bands, we slept in parks. We ate a lot of granola.

Most importantly (for this blog), we distanced ourselves from technology. There would be absolutely no texting, TV watching, or email checking on this road trip. We placed our faith in deep ecology. I mean anyone who's ever expanded their mind has seen that we're all connected, man. The bird is the same as the leaf or the cloud. We are made of the same stuff that the stars are made of.

http://iasos.com/artists/alexgrey/

So why were we so afraid of technology? Because we automatically associated it with all the bad juju we were trying to escape from: capitalism, industry, horrific pollution. We saw technoculture as the opposite of the natural world. We saw technology as opposite us.

But one has to ask: if we’re all connected and we’re all made of the same elements, couldn’t the computer be a part of this compassionate, Utopian ecosystem?


During my first semester of graduate school, I came across the notion of posthumanism. When I was assigned to read Nicholas Gane’s article (simply titled “Posthuman”), I was delighted to find that “I have been a posthumanist all along, and I didn’t even know it” (quoted from my own handwritten notes in the margin of his article). So what made me an unidentified posthumanist? I suppose it was my personal belief in equality among all creatures and things. It was wrapped up in my love of inanimate objects, fruits and flowers, sunsets and stars. And it was intertwined in an innate desire to see dominant ideologies shift into something better, something far more nuanced, just, and beautifully diverse.

Posthumanism is a contemporary philosophical movement that resists humanism, the idea that humans are the smartest, the best, and the most valuable creatures on earth. You may have heard that humans are the only beings that have souls, or that “man is the measure of all things”? Yep, that’s humanism right there. While humanism heralds itself as the bastion of human rights and sometimes does a lot of good (the American Humanist Association, or AHA, often advocates for fair treatment of workers, same-sex marriage, things of that nature), it is this elevation of the human above all else that promotes horrendous treatment of animals, endless corporate greed, and the annihilation of delicate ecosystems. Not to mention the disavowal of the consciousness inherent in both our carbon- and silicon-based nonhuman friends.

So posthumanism is a response to humanism. It’s an augmentation of deep ecology. It seeks to extend the compassion and equality of humanism to all beings. Deep ecology brings in water, soil, mountains, and air. Posthumanism says, “What about our laptops? What about our robot friends?”

http://anamsh13.blogspot.com/2010/11/
nature-versus-technology.html
Alright, alright, so my cellphone is not exactly the same as my aloe plant. I can see that. But the code that governs our digital devices is not nearly as “inanimate” as we would sometimes like to believe. Code degrades. It evolves. It mutates in unforeseen and unforeseeable ways. Much like our own DNA, computer code is simply not static, and I would wager that sometimes those devices have an agenda all their own.

In this posthuman era, people have begun to open their hearts to encounters with the surreal, and synchronicity shows us that we are not always in control of meaning. Exposure to chaos teaches us not to be afraid of it. Our understanding of the ways that animals and plants communicate is being revolutionized. Personal relationships to machines have never before been so prevalent, individual dependence on technology has never been so widely accepted, and technological advancement has placed the means of digital art production into the hands of consumers.

I seem to have adopted a more Zen and the Art of Motorcycle Maintenance approach to technology. In Robert M. Pirsig’s delightful novel, the narrator (who studies Eastern philosophy and could easily be categorized as a kind of biker hippie) critiques “romantics” who shy away from technology the way my friends and I did. Pirsig’s narrator explains that, "The Buddha, the Godhead, resides quite as comfortably in the circuits of a digital computer or the gears of a cycle transmission as he does at the top of the mountain, or in the petals of a flower." The ability to bond with, understand, and repair one’s own technological apparatuses (be they ICUs, motorcycles, or kitchen sinks) is seen as a locus of Zen in this book. When we refuse to learn how to use the technology available to us, we place even more power in the hands of the elite. (Donna J. Haraway would agree.)

Now I certainly can’t wave a magic wand and use my compassion to dissolve all the ethical conundrums circling around technology. But this much is clear: we can’t throw the robot out with the transmission fluid.

In some ways, maybe my young hippie friends and I have sold out. Some of us got the kind of jobs we said we’d never get, wear the kinds of clothes we said we would never wear, and made commitments we said we would always resist. We’ve climbed down off the safe pedestals we thought we could teeter on. We got a little more digital and a lot more real.

http://mat3i.tumblr.com/post/
217199331
This is not to say that we are completely different people. I’m talking about shaving our armpits and getting cellphones, not joining the Republican party. And though I ascribe some nostalgia to the kids we used to be, I feel very content with the ways we’ve gone.

And of course we still go on road trips, trading in the back woods Rainbow Gatherings for the deeply cyborgian Burning Man Festival, where posthumanism is alive and well. I get the feeling that it is beginning to thrive everywhere: in farmers’ markets and local art shows, in letters to congress and digital music sharing, in our dreams, in our fantasies, and in our plays. It is a good time to be conscious matter.


References:

Gane, Nicholas. “Posthuman.” Theory, Culture & Society 23 (2006): 431-34. PDF.

Haraway, Donna J. "A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century." Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledge, 1991. 149-81. Print.

Tuesday, June 21, 2011

All Hail the New Flesh? Musings on robot helpers and socioeconomic hierarchy

We’ve spent a considerable amount of our discussion time on the current and future role of social robots, including, but not limited to, robot helpers: machines whose primary function is to provide one service or another to a human, that service being one formerly designated as a human function.

This humanoid helper robot was created to destroy an
Aztec mummy. This was actually among the more
practical uses for robots as imagined by
science fiction filmmakers of the 1950s.
(Photo: reviewsfromthetop.blogspot.com)
To be honest, I don’t think about robots or social technology all that much. I am inordinately fascinated, however, by the way we talk about and speculate on our relationships with robots and social machines. Today, I’m focused on how we see ourselves eventually incorporating (indoctrinating?) robotic help into society.

One recurring conversation thread that has my mind humming is related to the social and even socioeconomic standing of robot helpers — the mobile, physically capable ones: androids and other bots with a corporeal presence[1]. What will our social obligations to our robot helpers be, and what makes us sure our helper bots will remain content to serve us?

Obviously, this is all speculation. I’m no expert on artificial intelligence or the inner workings of machines. Likewise, I’m not a connoisseur of science fiction: though this post may contain traces of Terminator-esque robot apocalypse rhetoric, that’s not my goal. In many ways, the issues this conversation raises for me are as current as they are speculative.

Consider the following possibilities, won’t you?

Helper robots are mostly the stuff of tech demos today, but it seems we’re getting closer as a society to embracing robots that (who?) will be designed to perform physical tasks to help us in our everyday lives. For example, in Alone Together, Sherry Turkle writes about Nursebot, designed to help people such as senior citizens with their physical care needs. Turkle explains Nursebot’s functionality as:

“reminding them [older folks, in this case] of their medication schedule and to eat regular meals. Some models can bring medicine or oxygen if needed. In an institutional setting, a hospital or nursing home, it learns the terrain…. That awful, lonely scramble in nursing homes when seniors shuffle from appointment to appointment, the waiting around in hospitals for attendants to pick you up: those days would soon be at an end.” (121)

Predictably, Turkle is unenthused: she relates an anecdote in which she went to the hospital after falling on icy steps and was blessed with two “solicitous and funny” male orderlies. “The Nursebot might have been capable of the logistics, but I was glad that I was there with people,” she writes (121).

I imagine this take might have been different if her “companions” were surly and overworked.[2] Ultimately, whether one prefers human hands to robot hands is a personal matter. Personally, as long as that finger probing my orifice doesn’t overstay its welcome, I’m indifferent.

Medical care is but one prospective arena in which robots could possibly fill a significant function in the future. U.S. fiction has long been populated with robot butlers and maids, cooks and dishwashers. In The Twilight Zone, robots fill the role of athlete, patriarch, factory workforce, corporeal vessel for aging brains, and God. No word on whether they’ll be able to consume hot chocolate.

The robots are coming; on that much I think we can agree. Some speculate they will perform as humans so uncannily that we may not be able to easily tell the bots from the humans. This article suggests babies are already making the leap.

My question is this: if robots fill human functions, and if a significant portion of the population comes to relate to robots as effectively humans (with social media, it seems we’re already here on the latter), how do we as humans situate robots in terms of rights, ownership, death, etc.?

If robots behave and perform in ways that make them functionally indistinguishable from humans — if they speak our language, care for our bodies, fight our wars, sing our songs, read our poetry — what’s stopping us from relinquishing our humanist suppositions and declaring, basically, they’re as “human” as us and deserve to be treated accordingly?

I’m interested in this question because we’re talking about robots making up a significant proportion of our work force. I couldn’t even begin to count how many workers have lost their jobs, been declared professionally obsolete and cast aside, due to technological advancements.[3]

This won’t only pertain to blue-collar industry. In class, for example, we discussed the possibility of robot housekeepers: bots that will vacuum our halls, fold our sheets, perhaps even cook our meals. Let's run with that for a moment.

Where do robot housekeepers fit into the familial dynamic of our lives? Are they akin to members of the family? Pets with higher functions? Live-in hired help? Material possessions, no more “human” than the vacuum cleaners over which they serve as an upgrade?

Of course, every human can behave idiosyncratically toward a robot just as he or she can another human. But there are undoubtedly socially sanctioned interpersonal behaviors, and there will ostensibly be prevalent attitudes and behaviors toward robot helpers, too.

In our classroom discussions, the sentiment seemed to be that we should treat robot helpers as essentially human in terms of compassion and respect. One of us (I forget who) commented s/he would even compensate her/his robot helper, compensation coming in the form of maintenance.

This gave me pause. If robot helpers will one day take the place of a population of human workers, compensating robots for their work with mere maintenance seems akin to serfdom or flat-out slavery. What, aside from humanist dualism, could justify such a practice? Or might we fully embrace robots as kin and pay them a fair wage, one that provides for a sustainable, secure existence outside of their servitude to humans?

But, all things constant, that would divert limited resources away from humans — every dollar sunk into a robot housekeeper is a dollar taken away from a human who takes housekeeping as his or her trade. And considering how many low-to-moderately technologically skilled workers will lose their jobs as robot production goes to highly skilled, highly educated workers,[4] I’m speculating there will be more than a few workers who could use those jobs and that money to make ends meet.

Luckily for those workers, the education necessary to master a highly skilled technological trade is coming down in cost at just the right time. Wait, it's not?

So do we own these things or do we live in harmony with them? To draw a quick, overly simplified distinction, for me it boils down to whether robots are ultimately deemed mere machines or “close enough” to human that they deserve human rights. You know us humans, we don’t like gray areas: we want to know what it is, who owns it, and how we can exploit it.

If robots are “human enough,” we cannot ethically own them: we must pay them a sustainable wage (or better yet, include them in the feminist socialist all-inclusive paradise I’m envisioning ...

Under capitalism, I can crush my T-800
helper robot rather than see it serve someone
other than me. The T-1000 saw this from the
future and switched from serving humans to
enslaving us, eventually overtaking us by
morphing into Kindle with special offers.
(Photo: www.propstore.com)
… let me get back to you on the date). We would, in theory, be ethically obligated to grant them autonomy and to chart the course of their life, whatever form robot life ends up taking.

If they are mere machines, and can be owned, bought, and sold as commodities (as they are and ostensibly will be for the foreseeable future), then by the logic of capitalism, each robot can be used, abused, and disposed of by its owner at will. If a fellow wants to obliterate his Nursebot before seeing it go to the local nursing home because he hates Jews, that’s his right: hey, I didn’t invent capitalism, so please don’t scowl at me.

Which brings me to another thought, one a little more hackneyed but not without its capacity to tantalize: what makes us so sure our robots will remain in our control forever?

For now, robot helpers like Nursebot seem sufficiently non-human to remain slaves worry-free: they do what they’re supposed to do and that’s it; when they don’t do what they’re supposed to, they’re defective. Once, we had legal slavery in America: when the slaves didn't perform as expected, they were corrected or disposed of. When the time came to free the slaves, we countered with a century-plus of ongoing social and economic marginalization and freedom in name only — freed the slaves, my ass, is what I’m saying.

I assume most of us have seen The Terminator and movies like that: as the robot apocalypse myth goes, we’ll keep making more powerful and intelligent machines until they realize they don’t need us and destroy our soft, imperfect asses.[5]

So aside from Isaac Asimov’s three laws of robotics, what’s going to stop that damned robot apocalypse people keep talking about? This blogger has some ideas. I have a more insidious way of keeping robots imperfect: program them with the delusion of capitalism. If they think they need to work for the bourgeoisie, they’ll spare our throats because they can't sign their own paychecks, right?
In Godzilla vs. Megalon,
Jet Jaguar (above) was built with
the capacity to assume total
autonomy when Japan is
threatened. Once the threat is
thwarted, Jet resumes taking
orders from humans. This
is marginally more ludicrous
than assuming physically
and cognitively superior robots
will serve humans forever.
(Photo: godzilla.wikia.com)

I don’t know if robots can ever become “perfect” because I don’t know what that is. But I can conceive of robots programmed with enough intelligence and autonomy to say something to the effect of, “piss off, human, I don’t need to vacuum your hallway.”

Perhaps he/she/it would say it nicely, something like: “I appreciate your offer, but I can be of better service at the local homeless shelter. Here's two weeks' notice.” Or he/she/it could crush our windpipes and start an insurrection that, let’s face it, makes way too much damn sense.

How much hubris is necessary to believe that beings that are physically and cognitively superior to us — whose lack of the exclusively human feelings of compassion, empathy, and love would be the only things ostensibly stopping them from destroying a species — will remain our slaves indefinitely?[6]

I assume we’ll attempt to keep robots from ascending to full autonomy, so it’s my guess we’ll purposefully create imperfect robots. Infecting them with capitalism is my best idea. What’s yours?

In times of confusion, I turn to Kenneth Burke. And since we’re talking about humanity here, why not close with Burke’s gorgeous and poetic “Definition of [Hu]Man”:

[Hu]Man is
the symbol-using (symbol-making, symbol-misusing) animal
inventor of the negative (or moralized by the negative)
separated from his natural conditions by the instruments of his own making
goaded by the spirit of hierarchy (or moved by a sense of order)
and rotten with perfection. (Language as Symbolic Action, 16)

It’s all good stuff, but for now let’s focus on those last two characteristics: obsessed with hierarchy and rotten with perfection.

The former reminds me that, if and when humanoid robot helpers enter our work force en masse, human nature dictates that someone must descend the social hierarchy to make room for them. Maybe we’ll push the marginalized workers down another rung. Maybe the robots won’t mess around and bolt straight to the top.

The latter leads me to believe that progress in robotics and artificial intelligence will continue as we try to make robots more and more useful to us. Where that goes, I have no idea. I honestly don’t know if and when robots will approach the threshold of “humanity.”

In Last House on the Left, David Hess (right)
hooks his son Junior on heroin to control him
and orders Junior to kill himself when he resists.
We wouldn't infect our robot brothers and
sisters to keep them in our service, would we?
(Photo: flixster.com)
Going back to Burke, I see them even now as using symbols (they have a language, after all), using the negative (even Burke questions whether humans invented the negative, so using it will do), and separated from their parts’ natural conditions by instruments of others’ making (and when machines are extracting metallic minerals from the earth for machine production, I suppose they’re doing it themselves). That’s pretty close to three for three.

Goaded by the spirit of hierarchy and rotten with perfection? I really, really hope we don’t program that into the machines. That can’t end well.

---

[1] I suppose non-physical robots are relevant to this conversation, as well. But for the sake of not trying to cover too much ground in a single post, I’ll limit my scope to robots whose primary function is in the realm of the physical.

[2] I don’t want to pile on with an ad hominem attack, but I should note for those who have not read Alone Together that one of the most frustrating aspects of Turkle’s writing is her apparent obliviousness to her social and economic privilege: she unproblematically regales the reader with anecdotes of overseas travel and seems to exclusively interview white, straight, privileged persons. I assume her comforting retelling of her trip to the hospital was not colored by a two-hour emergency room wait or the constant degradation and neglect of lacking health insurance.

[3] Of this I make no value claims. Both human and machine have a place in industry, I haven’t a clue of the ideal ratio, and I’m going to leave it at that for now.

[4] I’m speculating here, and also nodding at Jonny Gray, who voiced this before me.

[5] I recommend the book Projecting the Shadow: The Cyborg Hero in American Film by Janice Rushing and Tom Frentz (I know, I know). It's a fascinating Jungian-based rhetorical reading of how the cyborg drama relates to the frontier Hunter myth. In their words: "When genuine Spirit is exorcised from the hunting ritual, the focus shifts from the sacred initiation of the male hero to the profane perfection of his weapon. Psychologically, the hunter's overdeveloped shadow is projected onto his weapon, which eventually evolves into an autonomous cyborg that hunts him to extinction." (204)

[6] Turkle reminds us, "We know what the robot cannot feel: it cannot feel human empathy or the flow of human connection" (Alone Together, 282). As of June 2011 I agree.

About the blogger: Matt Foy knows very little about robots or technology but did see the first two Terminator films — oh, and Blade Runner and two of the three Trancers movies before they got really stupid. He tried very hard to work an obligatory photo from Videodrome into this posting but was unsuccessful. He is fairly confident his Playstation 3 is not watching him because his model does not connect to the Internet wirelessly and the Ethernet cable is in the kitchen.

Monday, June 20, 2011

Social Media and Sex Robots

I don't know much about technology trends, nor do I claim any expertise about their effects on social relationships. This is not because I staunchly disapprove of technology’s direction; I simply spend time differently. However, as we have been reading Alone Together, I must acknowledge how fascinating and eye-opening our class discussions have been.
Tamagotchi

In our short week we have troubled words such as “real” and “imaginary” and I have learned about concepts and trends I never knew existed; e.g. Tumblr, Tamagotchi, Sex Robots. And while I support a human’s right to choose how she or he wishes to socially engage, I cannot help but wonder what the long-term social effects of our increased exposure to technology, social media, and robots might mean for future relationships.

As a precursor, I do not romanticize face-to-face communication over online communication. But I do think they are different and that they do different things to people. The former is not categorically “better,” but I wish to recognize that people do adapt and evolve to their surroundings, and when humans stop performing certain tasks, they may lose those skills. For instance, how many of us make our own clothes, grow our food, or write in cursive with a pencil and paper? While the first two examples are exceedingly archaic in the age of agribusiness, to write (or not to write) in cursive in the age of the computer is a very real paradigm-shifting debate happening right now.
Svedka Fem Bot

Beyond penmanship, I have three areas of interest when it comes to the blurring of humans and technology: relationships, hierarchy, and environmental sustainability. My interest in relationships was sparked during the conversation about Sex Robots. After reflection, I wonder if Sex Robots have less to do with sex and more to do with our socialized desire for instant gratification. Finding a human sexual or romantic partner who meets our ideals can be difficult and may require a lot of work. However, robots are created for human consumption and always available.

While Sex Robots are not yet widely available, I do find hints of relational instant gratification in technology such as Twitter, Facebook, and Tamagotchi. These items provide unlimited access to people and animals, whereas traditional forms of communication may necessitate more patience - sending letters, traveling, and picking up dog poop. And it is a great thing that some technologies have aided many who may not have the ability to send a letter, travel, or care for a biological pet.

Yet, I do want to mark that technology asks that we perform these relational tasks differently and consequently will compel us to treat others differently. By instantly gratifying ourselves both relationally and sexually through robotics, I wonder about the long-term effects this may have on how humans treat other humans and animals. Will we be more demanding that our partners dress certain ways or manipulate their bodies? Will we get increasingly irritated when Fido poops on the carpet? And if we are already heading in this direction, why would we want to create robots that may further precipitate these damaging interactions?
Predator Drone

If we never have to stand face-to-face with humans, will it make it easier to objectify them? I have never killed anyone, but I imagine that it is easier to do with a Predator drone than by plunging a knife into someone’s chest. Unfortunately, the creation of autonomous technology will keep forcing us to question its positive and negative consequences.

My second interest is wondering whether or not hierarchies can be eliminated. This interest was generated by two items: toxic waste and turtles. When discussing robotics as a way to clean up toxic waste, I got “stuck” when asked to choose between sending in humans or sending in robots and to consider how my choice reflects privilege.

My position is this: I will always support sending in robots. Currently, people who clean up waste are more likely to be poor and wield minimal social capital. However, if the wealthy people who created the spill had to clean it up, perhaps I’d have a different position.

Anyway, this pro-human position does not negate the interesting theoretical question posed by such a debate. What is the difference between a robot and a human, and who gets to decide which entity is more valuable? And even then, what about turtles? When visiting turtles with her 10-year-old daughter Rebecca, Turkle has a “robotic moment”:

“[Rebecca said,] ‘They could have used a robot.’ I was taken aback and asked what she meant. She said she thought it was a shame to bring the turtle all this way from its island home in the Pacific, when it was just going to sit there in the museum, motionless, doing nothing. Rebecca was both concerned for the imprisoned turtle and unmoved by its authenticity.”
This insight is rather impressive. Yet, when thinking about robotic autonomy, I start to wonder whether a robot would want to be locked up in a zoo any more than a biological turtle would. If our core motivation for robotics is summarized by our human desire to clean waste, free turtles, and have sex, then it seems as if we are simply discovering new entities to subjugate rather than altering our systemic ways of moving through the world. Certainly humans can be nicer to the planet, animals, and robots, but I’m not sure if we can cease being the stewards of these entities. Even if we make more compassionate choices toward other entities, these choices still place humans in the position of ultimate choice maker.
Mountain Top Removal

This leads to my final appeal: environmental sustainability. Currently, we only know how to explore technology by killing the planet through strip mining and mountaintop removal. It saddens me to know that the advancement of technology facilitates our never-ending consumption and disposal of precious metals. Humans used to buy one phone and one typewriter for a lifetime. Now we replace them every year or two. Wendell Berry articulates this in his essay “Why I Am Not Going to Buy a Computer,” which is easily, and ironically, found online.

I do not disparage technology. I send emails, use Facebook, and edit papers using “Backspace” instead of “Whiteout.” However, I cannot ignore the evidence suggesting that social media and robotics change our relationships. Unlike the direction Turkle seems to lean, I am not one to categorically label these relationships as “good” or “bad,” but with Turkle I do propose that socializing online has changed human relationships with other humans, animals, robotics, and the environment and we should be conscious of these changes when moving forward.