Introduction

By the time we figure this out, it'll be too late. This sort of thing has happened before. Oppenheimer and his bomb-boys weren't completely sure (they were mostly sure) that the Trinity test wouldn't fry the whole atmosphere. They tested their bomb anyway, because America had to drop something on Japan, and Curtis LeMay was running out of firebombs (he wasn't, actually, but that's not the point).

Imagine the following scenario. You look up from your half-finished essay on how ADHD is being overdiagnosed to find that robo-human hybrids are running around zapping each other, your creepy uncle has an electro-Lolita locked in his basement, and self-driving cars are squishing squads of youngsters just to shave a tenth of a second off their owners' commutes. These are the stakes if we don't figure this out. Things could get very bad for a lot of people and a lot of robot-people, and the self-styled eggheads in the Washington think-tanks will do nothing more than recommend we give more money to the DoD for some such sinister purpose…

The point being, we cannot wait. Even if we had figured out all this cyber-ethics stuff yesterday, we'd still be too late. The ethics of robot-human relationships and human-human digital relationships are already very relevant. Recent years have produced evidence proving, beyond a doubt, that negative Internet experiences can harm a person's wellbeing at a fundamental level. Contemporary cyberbullying already represents a shocking quantity of harm. Recent data suggest that a quarter of UK youths have experienced cyberbullying, and almost twenty percent report being cyberbullied regularly. Forty percent of cyberbullied youths report developing anxiety due to the bullying, while a quarter report developing suicidal thoughts. Tyler the Creator's tweet, though in jest, has not aged well.

Anyone who still maintains that human-human digital interactions represent only an area of niche interest should consider the cacophony of recent controversies regarding every area of online life: privacy, relationships, direct and indirect interactions, commerce, entertainment, art, education, etc. We no longer live in 1993. It's been September for over twenty-five years. The Internet is a major avenue, if not the major avenue, of human activity. And all the normal baggage associated with human-to-human interactions has carried over into it, albeit with certain digital twists. While some of the ethics governing physical interactions carry over very clearly into the virtual realm, other ethical assumptions are cast into doubt among the swirling 1s and 0s. And even if we ignore these complexities and simply carry our ethics into the virtual as they are, that would still represent a marked improvement over our current digital ethics, which range from sparse to nonexistent. All too often the Internet represents an opportunity for people to suspend ethical considerations they really ought not to.

Women have, since the earliest days of the virtual world, reported being mistreated within it in much the same ways they report being mistreated outside it. These problems cannot be chalked up to the growing pains expected of a world in its wild west days. The Internet isn't a wild frontier anymore. Yet these problems persist. If anything, as the virtual world grows in complexity and immersiveness, the ethical problems surrounding sexual and non-sexual harassment become even more dire. While teabagging somebody in Counter-Strike seems innocuous enough, raping someone's avatar in a highly immersive VR simulation strikes one as slightly more serious.

This new arena of interactions also introduces entities who have existed previously as nothing more than sci-fi speculation or in the realm of ethical thought experiments. Robots, within and without the virtual world, are progressing at astounding rates. Nobody can say how long it will be until convincing androids lurk among us, but they will probably come sooner than many expect. We need to answer the question of how to treat these convincing pseudo-humans before they exist. And there are many questions concerning their treatment. Do we tolerate humans "raping" these robots? What does it even mean for one to rape a robot? Do we tolerate humans having sex with "underage" robots? Can a robot even be underage? Does it matter? Furthermore, do we allow some of these interactions in the virtual world but prohibit them in the physical one? Would prevalent sexbots negatively impact the lives of human women/men/children? Does it harm the user to have sex with a robot? Whose welfare should we take into account, anyway?

This paper will outline some of the major ethical concerns associated with the existence of advanced robotics, advanced AI, and advanced virtual reality. We will begin by describing the current state of these technologies and discussing their likely routes of advance. Then, we will resolve the ethical questions we are currently capable of resolving by suggesting clear-cut rules to be followed at both the individual and societal levels. Our ultimate pipe-dream consists of a world in which these rules have been codified into law and are widely respected as self-evident ethical prescriptions. We suggest that the world take heed of these suggestions before we're all deservedly eating rat tails for dinner because we raped Skynet's wife.

A Brief History of Robosexuality

Humans have been falling for inanimate objects since humans have been falling. We see even in the distant recesses of prehistory a certain obsession with inanimate representations of the human form. While humans have created some truly fascinating abstract pieces, many of the most interesting and iconic works of art and craft seek to simulate the human. While sexbots in the shape of dogs and goats will undoubtedly exist, when one imagines the robots of the future one inevitably pictures something human. Let us trace the legacy of the humanoid robot.

The word robot was coined by Czech playwright Karel Capek, who utilized it in his 1920 play R.U.R. (Capek claimed that it was his brother, Josef, who actually invented the term). Robot derives from the Czech robota, which loosely translates to "serf labor." From the very beginning, robots have been tied up in notions of manual labor and servitude. Indeed, in Capek's play, the robots, tired of their miserable, slave-like existence, stage a revolt and destroy their human overlords. At the end of the play two robots, a new Adam and Eve, develop feelings of love and take charge of a world reborn. Interestingly, the robots in Capek's play are not the mechanical, metallic robots so popular today. They are biological organisms. Artificial, but organic.

While Capek was the first to use the word robot, the idea of artificial humans goes back much further. Ovid, in his Metamorphoses, relates the story of Pygmalion, a sculptor disgusted by womankind but infatuated with a statue he's carved. After wishing to Venus that he could find a woman as perfect as his statue, he discovers that his statue has come to life. He marries her and the two have a child. In Ovid, it is Venus, a divine being, who gives Pygmalion's statue life. In this regard, Pygmalion's statue isn't much different from Hephaestus' automatons or the humans of certain earlier myths. We should remember that Adam was originally sculpted from dirt. Only after God breathed life into Adam did he come to life. Even earlier, we find Enkidu, created via divine means to oppose Gilgamesh. Interestingly, the moment of Enkidu's "civilizing" corresponds with him having sex with a human prostitute. In all three of these instances, Pygmalion's statue, Adam, and Enkidu, a non-human is given life, is "made human," by divine intervention. All three stories also prominently feature sex. Enkidu is civilized when he has sex with a civilized woman, Pygmalion first notices that his statue is alive when he kisses her and finds her lips soft, and Adam, soon after his own creation, requests from God an opposite-sex partner to reside with him in Eden. Moreover, what defines these created humans within their stories is their sexual role. Adam is the father of man, Pygmalion's statue is a mother and wife, and Enkidu is set upon his ultimate path to join Gilgamesh when he has sex with Shamhat.

One might argue that these stories' foregrounding of sex is coincidence. Sex, after all, is a major aspect of human existence. It is only natural that foundational stories feature it. Indeed, that is the point. Both sex and the humanization of the inanimate are fundamental. It should come as no surprise that the two have been so entangled for so long. We can look all the way back to the Venus of Willendorf, created some 30,000 years ago, to see an example of a sexualized inanimate object. Whether or not she is a fertility fetish, as has been controversially hypothesized, is irrelevant. What is clear is that she represents a human with exaggerated sexual features. She hints towards the interplay we're discussing, even if, in her case, the specifics of that interplay are unclear.

Neither Pygmalion's statue, Adam, nor Enkidu was given life by human hands. Pygmalion did initially craft his statue, but it was Venus who brought her to life. It would take the emergence of a force comparable in the human imagination to the supernatural itself to conceive of imbuing life via other means. By the 19th century, this new force, science, was firmly established. At this point, the notion of being constructed by and given life by human hands had become common. Indeed, the processes themselves, while not formally comprehensible, give the impression of being comprehensible. Readers felt that while they could not personally replicate the exploits of Frankenstein, they could conceive of these exploits existing in the realm of possibility. Shelley explained the existence of Frankenstein's creature not with magic, witchcraft, or divine intervention, but with a scientific process ostensibly understood and leverageable by humans.

Once this intellectual infrastructure was in place, it didn't take long for all kinds of ideas about artificial life to begin cropping up. Advances in computer science, biology, physical mechanics, and industrial automation allowed people to imagine not only a biological-artificial being, but a purely mechanical being. Adam and Enkidu weren't mechanical. Frankenstein's monster wasn't mechanical either. But E.T.A. Hoffmann, in "The Sandman," does imagine a purely mechanical form of life (or something close to life). The story deals with a young man, Nathanael, and his infatuation with Olympia, an automaton. The story also touches on a theme that would become increasingly common as time marched forward (though it could be argued that this theme was also present in the tale of Pygmalion): that of "real" women being replaced by "fake" women. Nathanael grows tired of Clara, the real woman to whom he was engaged, and falls for Olympia, the "fake" woman. Nathanael finds Olympia agreeable because, though limited in her speech, she possesses an easy nature and the apparent willingness to agree with everything he says. Here are more themes we will return to later.

Mere mention of "The Sandman" elicits groans from undergraduates the world over. Here is a piece of immense complexity, not just for what Hoffmann wrote but for what later commentators, not the least of whom is Sigmund Freud, have said about it. While the literary discussion surrounding "The Sandman" is fruitful and fascinating, we will refrain from participating. We will content ourselves with noting that the story represents a notable instance of not only a human-created automaton but also the possibility of human romantic attachment to said automaton. It is critically important, however, to note that Nathanael, throughout his infatuation with Olympia, is unaware of her status as an automaton. He believes her to be fully human. When he sees her taken apart towards the end of the story, and thus comprehends her true nature, he suffers a mental breakdown.

Only sixty years later, Auguste Villiers de l'Isle-Adam would write an influential novel in which a male character knowingly and happily abandons a female companion for a robot replacement. Villiers' novel, L'Eve future (the novel that popularized the term "android") contains what is perhaps the clearest account of the pitfalls involved in robot-human relationships.

The novel follows a fictionalized Thomas Edison as he creates a robot, Hadaly, to serve as a companion for his friend, Lord Ewald. Lord Ewald is already in a relationship with a suggestively named real woman, Alicia Clary, but finds her personality bothersome. He jumps at the opportunity Edison offers him; that is, to have a companion that possesses Alicia's "positive" traits (beauty, grace, etc.) but does not possess her "negative" traits (her personality). Ewald initially displays skepticism towards the idea that he could fall in love with a robot, but as the story progresses his tone changes. Towards the end of the novel, his infatuation has grown so intense that he's completely forgotten about the real Miss Clary. "'By the way,' Edison asked lightly, 'what about the living lady?' Lord Ewald started. 'My word,' he said, 'I'd quite forgotten her.'"

What does Ewald see in Hadaly? What does she have that Alicia Clary does not? Is it just that Clary's personality is tiresome while Hadaly's is agreeable? Partly, but there is more to it. Ewald gives it away when he bluntly states that "it would be impossible for me to possess [Alicia]." This is before Edison reveals Hadaly to Ewald, and so upon hearing this Edison "gave a mysterious start." Edison can solve this problem for Ewald because he has something he believes Ewald can possess: Hadaly.

It should come as no surprise that the men treat Hadaly as an object, a commodity. Ewald, referring to her, says that "no treasure can buy this masterpiece." He calls her a "gift such as only a demi-god could bestow." In emulating the actions of the divine in the stories of old, Edison has become like a god. What was once solely the province of the divine has transferred to the hands of humans. Thus, humanity can no longer shirk responsibility. With the power to create life comes the responsibility to care for it. Ewald gets off to a rough start in this regard when he asserts that "never in the bazaars of Baghdad or Cordova was such a slave displayed for the caliphs!"

But it can't be only Hadaly's artificial status that convinces Edison and Ewald that they can possess her, for they commodify Clary as well. Ewald's whining that he cannot possess her is therefore not based on the idea that her status as a moral person renders her unpossessable, but that some deficiency in her renders her so. Ewald presupposes Alicia as an object that ought to be possessable, and laments only that some esoteric malfunction on her part dislodges her from her self-evident place in the order of things. Edison and Ewald make this clear when they misread Swift. After suffering Ewald's insistent whining, Edison advises him to dump Alicia, saying "What is a mistress? A belt and a cloak, no more." Ewald claims that he "had something like that in mind." Thus, it must be that Hadaly's created status, not her status as either an object or a woman (they are one and the same to Edison and Ewald), renders her possessable. She is desirable because she can be made to conform to the natural order in a way that Alicia, for some reason, does not.

It's hard to say whether the invention of Hadaly exacerbated the misogynistic tendencies of these jackals. Given our previous discussion, it frankly seems that it did not. They were misogynistic before her creation and remain so afterwards. At any rate, L'Eve future escorts us into the modern world of robotics. The recent film Ex Machina tells a story that, albeit with some 21st century sensibilities, is essentially unchanged from Villiers' novel. Both works explore similar dynamics and both end in similar ways, with their leading men failing to possess the robot woman. Perhaps this foreshadows something to come. Perhaps not. We shall see.

These days, the notion of falling in love with a robot almost seems quaint. We've been bombarded with so many lifelike albeit unreal entities that most of us have simply resigned ourselves to the flood of signals and signifiers. Determining what is real has become nearly impossible. Whatever we had before doesn't matter, we've traded it for waifus, RuneScape GFs, RealDolls, hentai, sexbots, VR, deep fakes, projections, CGI, augmented reality, etc. and etc. These things aren't going anywhere. They're just going to advance. And, truthfully, that's not a reason to despair in and of itself. Reality probably never existed. If it did, it would be impossible for us, stewing in unreality, to tell. But it doesn't matter what's real. It matters what's right. We must force ethical advancement, or everything finna end up fucked.

The Appeal of the Unreal

Are sexbots really going to become a thing? What's the big deal? Why doesn't everybody just not?

Yes, sexbots are going to become a big thing. We'd bet all our Iraqi oil money on it. The inexorable march of human techno-innovation means either that sexbots are going to emerge as a significant population on this planet, or that humans are going to disappear as one. Barring armageddon, we are going to progress to the point where we own robots capable of convincingly simulating sex acts. Yes, the sexbots of today are so crude as to be laughable. But we laughed at the Turk, too, when we realized it was nothing more than the imitation of a chess-bot. We weren't laughing when Deep Blue trounced Kasparov in 1997. Obviously, it is fallacious to assume that because chess-bots got good sexbots must do the same, but, if the history of technological innovation is any indication, the future holds promise for the sex-starved botboys.

But why? Human innovation tends towards things that people actually want. If nobody wants sexbots, they probably won't emerge. Our assumption that sexbots will appear is based not only on technical appraisals of robotics and AI possibilities but also on the assumption that sexbots fulfill a certain demand. Thus the question: why would anybody want a sexbot? What is the appeal?

The appeal is sex, you idiot. Genuine, convincing sex. People like having sex. We don't need to get bogged down in psychological jargon. Certain futurists such as David Levy have devoted entire books to proving not only that humans are capable of forming meaningful relationships with inanimate objects, but that most of us already have. Levy highlights the relationships we enjoy with non-human objects such as heirlooms and representations, and extrapolates from there. He also provides examples of the not-insignificant number of people who have already dived headfirst into the current, unsophisticated sexbot fray, and argues, convincingly, that their number is only set to increase as the technology grows in sophistication. The current proliferation of high-tech sexual interactions like Internet pornography and online dating only strengthens his point further. But while Levy's book offers a strong foundational basis, it is, frankly, aimed at an audience who mistakenly believe that they still exist within reality. Anybody raised by the Internet regards it as a self-evident truth that sexbots, if the tech can get there, will be widely used.

The psychological reasons why someone might engage sexually with a robot are numerous, but outlining the broad benefits will demonstrate sufficiently how obvious all of this should be. Sexbots would simulate safe, pleasant sex. They would conform their actions to their user's desires. There would be little pressure or embarrassment, no danger or awkwardness. Even if one decides not to use sexbots for all their sexual needs, sexbots could still provide an enjoyable supplement. Many people in healthy relationships still masturbate. This masturbation is not meant to replace sex with their significant other, but to complement it. Sexbots could also be used by couples to fulfill certain fantasies or roleplaying scenarios. Whether having sex with a robot constitutes cheating will have to be worked out on a relationship-to-relationship basis, but, suffice it to say, many people would be attracted to the benefits that sexbots provide.

One might argue that sex with a robot can never replicate the emotional impact of sex with another human. Personally, we think that this smacks of Cartesian voodoo nonsense, but we will shelve that objection for the time being. What is important here isn't whether or not you feel emotionally comfortable with robot sex, but whether or not your kids will. Today's older generations do not understand online hookup culture and largely refuse to participate in it; this doesn't prevent you from cruising around Grindr looking for a bottom without those dog-ear things on all their pictures. The trajectory of acceptance suggests that sexbots, by the time they exist, will be widely and genuinely used. One might turn up their nose at the thought and swear that they would never bang a bot, but that snarky, Paleolithic attitude isn't going to come between the rest of us and our binary booties. And, let's face it, sleeping with a sexbot is probably better than going off to raid an enemy cave every time you get horny, isn't it?

Robotics 101

But how do robots work? Are they, much like the Internet, just a series of tubes? Good question. Let's investigate.

In its simplest form, a robot is a machine designed to carry out a task. Robots consist of three major components. First, a mechanical construction. This construction is roughly analogous to a human's muscles, bones, and organs. It allows the robot to interface physically with the world around it. Second, an electrical power system. This system gives the robot the power to move its mechanical construction, and is roughly analogous to a human's heart, veins, and the organs that process food. Third, programming. A robot's programming tells it how it should act. These programs can vary massively in complexity, from the simple commands that tell a robot to pass the butter to the advanced artificial intelligence that allows a robot to process sensory input, make decisions based on said input, and react accordingly. While true artificial intelligence at the highest range of this spectrum does not yet exist, many robots display an impressive ability to make and act on complicated decisions.
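To make this three-part framework concrete, here is a minimal sketch in Python. Every name in it is our own illustrative invention, not any real robotics API: the power system appears as a battery budget, the mechanical construction as a stubbed body, and the programming as a sense-decide-act loop.

```python
from dataclasses import dataclass

@dataclass
class Battery:
    """The electrical power system: a finite budget the body draws on."""
    charge: float  # remaining charge, 0.0 to 1.0

    def draw(self, amount: float) -> bool:
        """Spend charge; return False if the battery is exhausted."""
        if self.charge < amount:
            return False
        self.charge -= amount
        return True

class Robot:
    def __init__(self) -> None:
        self.battery = Battery(charge=1.0)  # power system

    def sense(self) -> dict:
        """Gather input from the hardware sensors (stubbed here)."""
        return {"obstacle_ahead": False}

    def decide(self, readings: dict) -> str:
        """The 'programming': map sensor readings to an action."""
        return "turn" if readings["obstacle_ahead"] else "advance"

    def act(self, action: str) -> None:
        """Drive the mechanical construction, spending electrical power."""
        if self.battery.draw(0.01):
            print(f"performing: {action}")

    def run(self, steps: int = 3) -> None:
        for _ in range(steps):
            self.act(self.decide(self.sense()))

Robot().run()
```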

Let's analyze a popular contemporary robot, the Roomba, to see how it fits into this framework. The Roomba is a small, cylindrical robotic vacuum cleaner designed to clean its owner's floors and carpets. The Roomba eliminates the need for its owner to walk their floors with a traditional vacuum cleaner; it does so all on its own.

The Roomba's mechanical construction consists of its cylindrical body, its brushes, buttons, charging port, and various sensors. These are its body, and what allows it to interact with the world and thus to function. For its electrical component, the Roomba contains a rechargeable battery and the means to distribute that battery's power throughout its body. The Roomba takes electricity from its owner's home and uses it to power the functions necessary to perform its cleaning duties. Finally, the Roomba contains some rudimentary programming. The Roomba's programming allows it to react to data gathered by its sensors. For example, if a Roomba gets too close to a drop-off, its cliff sensor will detect said drop-off and the Roomba's programming will tell it to change direction to avoid falling. If a Roomba, using its piezoelectric sensor, detects an abundance of dirt in a particular area, it will increase its cleaning power accordingly. If a Roomba bumps into a wall, its programming forces it to turn around and travel in another direction. The specifics of this programming vary from model to model, but, in general, every Roomba's programming is written to utilize its hardware to achieve its task. In this case, that task is to clean floors.
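The paragraph above translates almost directly into control logic. Below is a hedged sketch of those three reactions (cliff, dirt, bump) in Python; the sensor readings are faked with random values since we obviously aren't talking to real hardware, and none of these names come from iRobot's actual firmware.

```python
import random

def read_sensors() -> dict:
    """Stand-in for real hardware: fabricate plausible sensor readings."""
    return {
        "cliff_detected": random.random() < 0.1,  # infrared cliff sensor
        "dirt_level": random.random(),            # piezoelectric dirt sensor
        "bumped": random.random() < 0.2,          # mechanical bump sensor
    }

def step(state: dict) -> dict:
    """One pass of the cleaning loop, mirroring the behaviors described above."""
    readings = read_sensors()
    if readings["cliff_detected"] or readings["bumped"]:
        state["heading"] = (state["heading"] + 120) % 360  # turn away from the hazard
    if readings["dirt_level"] > 0.8:
        state["suction"] = "high"    # concentrate cleaning power on the dirty spot
    else:
        state["suction"] = "normal"
    return state

state = {"heading": 0, "suction": "normal"}
for _ in range(5):
    state = step(state)
    print(state)
```

Nothing in this loop is intelligent; each rule is a direct reflex from a sensor reading to an adjustment, which is exactly the "rudimentary programming" the Roomba exemplifies.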

Contemporary Sexbots

But you don't want clean floors. You're horny and impatient. You want something tactile, and you want it now. What are your options?

At the moment, not much. While robots have been inserted into almost every area of our lives, most people have probably not had sexual relations with one. This is due both to the general lack of availability and the disappointing capabilities of modern sexbots. While sexbots do currently exist, they are unsophisticated, unconvincing, and expensive.

There are two major hurdles that sexbots need to overcome before they can be considered convincing. First, they need to be physically convincing. Second, they need to be psychologically convincing. For a robot to convincingly simulate the physical side of sex, it needs to be constructed from materials that closely resemble the human equivalents. This means that a sexbot's "skin" would need to have the same texture, appearance, and elasticity as human skin. This convincingness would also need to be present in their hair, eyes, nails, and various orifices. Furthermore, they would need to simulate certain aspects of human physiology such as body heat, saliva, sweat, and the various sexual fluids. The sexbot would also need to move convincingly. This would require finely tuned motor skills and a high level of robot intelligence. Of course, robot intelligence is an entire issue of its own. In order to do something as simple as move convincingly, a sexbot would need to be able to take in sensory information, process that information, and then maneuver in a way that makes sense. All of this is necessary not only to simulate sex acts, but to sell the illusion that the sexbot is human in the holistic sense.

Many futurologists have predicted that, due to the high degree of connectivity between programming relevant to sexual performance and programming relevant to general performance, in addition to certain economic concerns, most "sexbots" wouldn't be just sexbots, but would instead function as general purpose robots that might serve intermittently in a sexual capacity. Some robots fully capable of having sex might never do so. Indeed, currently employed sexbots, as rudimentary as they are, seem to be following this pattern. Many owners of sexbots report using them just as much for general companionship as for sex. This relationship-dynamic emerges despite contemporary sexbots being designed for little more than sex. If people find these unconvincing bots suitable for purposes beyond sex, it should be obvious that they will find more advanced robots, especially ones designed and sold for additional purposes, suitable as well.

In terms of pure technology, contemporary sexbots have attempted to simulate certain aspects of human physiology and psychology with varying levels of success. For instance, some recent sexbots contain interior heating systems and heated orifices meant to replicate not only the hot interior of human bodies but also passive human body heat. Unfortunately, our research has determined that despite these innovations current robots remain largely unconvincing.

The greatest hurdle to becoming convincing seems to be psychological. Because so many people are interested in the verbal aspect of sex in addition to the physical aspect, an advanced sexbot would need to be capable of convincing human speech. This means being able to fully simulate human conversation. The robot would need to master facial expressions and body language, both important skills for a conversationalist. In short, the robot would require mastery of all aspects of language: physical, oral, and emotional. Current sexbots have mastered nothing. Their skin doesn't feel like skin, they don't look human, their conversation is no more convincing than a novelty chatbot's, they can't move convincingly, they can't make facial expressions, and they can't grasp emotion. Basically, they are slightly more refined versions of the blow-up sex doll one might give as a gag gift. If you're both horny and impatient, you're in a rough spot.

Hypothetical Sexbots

Let's say that robots get to the point where they are functionally indistinguishable from real humans. We'd need Harrison Ford to run around with his weird Voight-Kampff test to tell the difference between them and us. What does this mean?

We are concerned with two different types of robots: those that are moral persons and those that are not. Of course, to deal with robots that are moral persons we would first need to prove that such things exist. Is it possible for a robot to deserve the status of moral personhood, the same moral status that humans enjoy? We will address this question later, but let us assume for now that such a thing is at least possible.

But first, what is the moral status of a non-person robot? Simply put, they have no moral status. They are objects; they deserve no considerations for their own sake. Any prescriptions involving them must justify themselves without appealing to what is/is not right for them. If, for example, we can prove that using non-person sexbots has a demonstrably negative effect on the one using them, or causes the user to harm other people, then we might have sufficient cause to ban sexbots. Perhaps we don't go so far as to ban them, but instead recommend a course of action meant to remedy an existing injustice or inequality that transfers to or is exacerbated by the sexbot. For instance, if we prove that the mass existence of sexbots leads to appreciable harm to human women, we might consider either banning sexbots or instituting programs intended to reorient the problematic gender dynamics that give rise to such harm in the first place. At any rate, the crux of this issue is simple. If robots are not persons then they should be treated and legislated like other non-persons. We allow the private ownership of alcohol, but not of highly enriched uranium. Both these substances have the capacity to cause harm, but we've clearly drawn a line. We will investigate the data regarding non-person sexbots and see on what side of this uncertain line they fall.

If a sexbot can be proven to be a moral person, then the situation actually becomes much simpler. The category of moral person exists for a reason. If something meets the requirements to be considered one, then it ought to be treated like one. This means receiving whatever rights, privileges, and allowances other moral persons receive. The difficulty, of course, lies in actually proving that a robot could be a moral person. We will return to this question later. However, readers should note that we consider it a key aspect of our moral foundation that all moral persons ought to receive the same fundamental moral treatment. Furthermore, we do not fundamentally differentiate between different types of moral beings. While something might receive slightly different moral treatment within the category of moral person pursuant to its state or actions, the existence of nebulously defined additional moral categories into which beings can fall holds no appeal for us. Thus, we reject the notion of different major categories based on eternal type. The major differentiation we will make is between persons and non-persons. We accept the necessary stipulation that allowances should be made for actions or type-cognition. In performing this maneuver, we intend to erode certain long-held notions of human exceptionalism and to place all life, including possible xeno-entities, on the same moral grid.

Virtual Reality 101

Jesus wept, for there were no more worlds to conquer. Virtual reality in its current state might be nothing more than a few doped up dudes flailing around with stupid mask-things on their faces, but it shows clear potential to morph into something reality-shifting. It would have been easy to look at the state of the Internet in the early 1990s and dismiss it as a novelty. It's harder to do so in 2020. Virtual reality could, if it continues to progress in sophistication at its current rate, change the world as much as the Internet itself. And one of the primary inhabitants of this virtual world would be programmed digital entities. Basically, virtual robots.

But what is virtual reality? Is that what Neo was stuck inside of? Don't worry, we'll explain. Virtual reality emerged in its modern form in the 1960s. These early systems utilized the fundamental principles of VR still in use today. The user wears a headset that contains a screen or screens. The headset tracks where the user is looking and adjusts its displays accordingly, thus giving the user the experience of being able to look around the virtual environment. However, these systems were simplistic, expensive, and extremely bulky, with some of the headsets being so heavy that they had to be suspended from the ceiling. Throughout the 1970s and 1980s, governments and private agencies began utilizing similarly unsophisticated VR for training purposes. Meanwhile, laboratories continued developing experimental models. But it wasn't until the 1990s that virtual reality broke into the consumer world. Most of these early consumer systems were expensive and cumbersome, and were intended to be purchased by venues to serve as pay-to-play attractions. In 1995, however, with Nintendo's release of the Virtual Boy, a VR system widely known and appropriate for the average consumer finally existed. Unfortunately, the Virtual Boy sucked. It was too expensive for the experience it provided, it hurt many users' eyes, and its games were unfun compared to games available on other consoles. Critics described the Virtual Boy with a derogatory term that would become all too familiar to VR enthusiasts: gimmick.

So where is VR 25 years after the Virtual Boy's embarrassing release? Several large tech companies have invested millions into developing VR platforms, but the industry remains in a precarious state. Companies are throwing many systems at the wall, but not everything will stick. Phone-based virtual reality exploded a few years ago but has already disappeared. Augmented and mixed reality systems, such as Google Glass and Windows Mixed Reality, have not taken off. At the moment, the major VR headsets belong to Facebook, Sony, HTC, and Valve. While there are some minor differences between these systems, they operate via the same fundamental principles as the headsets of the 1960s. The user dons a headset that contains a screen or screens. The screen displays what is "around" the user, and the user can turn their head to investigate the virtual world. These new systems also feature hand controllers that, using buttons and simple haptic technology, allow the user to interact directly with the virtual environment. For example, a user might move their hand and press a button on the controller to pick something up in the virtual world. While performing this and other actions is novel and fun, the systems of today, by and large, fail to convincingly replicate reality. Users might report a certain level of immersion, but the notion that these systems are advanced enough to effectively blur the line between reality and virtual reality is ludicrous. These systems are akin to conventional game consoles: they operate by flashing lights in front of the user's eyes. Thus, whatever ethics we apply to traditional screen-games ought to be applied, for the moment, to virtual reality as well.
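The fundamental trick, track the head and redraw the world from the new angle, is simple enough to sketch. The toy loop below is our own construction (real headsets do this with dedicated motion sensors and render a separate image per eye at 90+ Hz); it converts a tracked yaw and pitch into the camera's forward vector, which is essentially all that "looking around" a virtual world amounts to.

```python
import math

def forward_vector(yaw_deg: float, pitch_deg: float) -> tuple:
    """Convert a tracked head orientation into the direction the camera faces."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (
        math.cos(pitch) * math.sin(yaw),  # x: left/right component
        math.sin(pitch),                  # y: up/down component
        math.cos(pitch) * math.cos(yaw),  # z: forward component
    )

# Fake a few head poses; a real system would read these from the headset's
# inertial sensors every frame and re-render the scene accordingly.
for yaw, pitch in [(0, 0), (45, 0), (90, 10), (180, -20)]:
    fx, fy, fz = forward_vector(yaw, pitch)
    print(f"head at yaw={yaw:>4} pitch={pitch:>4} -> "
          f"camera looks ({fx:+.2f}, {fy:+.2f}, {fz:+.2f})")
```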

If we could be assured that VR would remain at its current state of sophistication, we wouldn't have anything to worry about. What concerns us is what VR can become. To be clear, there are certainly unresolved ethical problems concerning VR as it exists, but these issues also exist with regard to conventional gaming. We are focusing here on virtual reality specifically because of what it can become. So what can it become?

There is a plethora of fiction on the subject, spanning popular films, books, television shows, comics, and video games. Ultimately, only time will tell how virtual reality progresses, or if it progresses at all. There are many instances of promising technologies failing to take off. It is possible that we are currently witnessing the zenith of virtual reality technology. But we don't think so. It seems probable that virtual reality will continue to improve both in form and in function. That current virtual reality systems could be improved to offer higher levels of realism is all but indisputable. Auxiliary hardware such as haptic suits and treadmills could be implemented to give users additional control. But these advances still rely on the fundamental framework of setting a screen in front of some rube's face.

There are other lines along which virtual reality could progress. We might see the emergence of virtual rooms akin to the holodeck from Star Trek. Another method, often explored in science fiction, would be to hook the system directly to the user's brain. The system could then cause the user to perceive motions, sensations, events, etc. By reading and inputting signals directly to a user's brain, the system could immerse the user into a hyper-real dream. The user might, in reality, be doing nothing more sensational than sitting in a chair or lying on their bed, but they would perceive themselves as fighting monsters, crossing exotic landscapes, and much more. These advanced systems could allow a nearly infinite number of activities. Sex would likely feature among them.

Even now, despite the unsophisticated nature of their systems, VR users are enjoying a certain degree of carnality. A quick look turns up several virtual reality sex games such as VR Kanojo and Honey Select. A recent survey of Pornhub identified thousands of virtual reality porn titles. One virtual reality enthusiast recently noted that RealityLovers released an "advent calendar" of free, high quality virtual reality porn to celebrate the 2020 holiday season. There is little reason to believe that this pornographic scene will do anything but expand as the technology progresses. Jesus will be weeping indeed.

The Unresolved Issue of Virtual Reality Rape

Virtual sex and robot sex are closely related issues. We've already discussed how virtual reality works, what it is currently capable of, and what it might become capable of. Morally, there doesn't seem to be much difference between having sex with a physical robot or a virtual robot. Presumably, if VR becomes advanced enough, one could enter it and have sex with a robot in a way that is indistinguishable from having sex with a robot/human in real life. Thus, the moral problems being considered here almost exactly resemble those we have already discussed. Do physical/virtual robots deserve moral personhood? Even if they don't, is there some other reason that having sex with them might be impermissible? Virtual reality can only change these questions if a) one can show that there is an appreciable difference in the outcome of having sex with a virtual robot as opposed to a physical robot or b) one can show that one of the entities (physical robot or virtual robot) is deserving of moral personhood while the other is not. It is possible to imagine some minor differences between virtual and physical robots, but we eye with skepticism the idea that there will emerge differences that are important morally. Both a) and b) above are predicated on the idea that there is something about the virtual world that fundamentally distinguishes it in constitution or production from the physical one. And while certain reactionary elements will certainly continue to insist into perpetuity that this is the case, we see no evidence that it is true.

Imagine, for a moment, a man having sexual relations with a convincing robot that resembles a young child. Some will undoubtedly argue that such a thing should be banned because "fake" child molestation leads to increased rates of real child molestation. Putting aside whether or not this is true (or whether or not it is sufficient to ban the act in and of itself), consider whether or not it matters if the man is engaging with a virtual or physical child-bot. Could it be that the virtual scenario has no effect while the physical one does? Perhaps, but it doesn't seem likely. Simply put, the behavior is concerning regardless. It is also unlikely that, as the physical and virtual worlds continue to merge, they will develop new differences in our minds that would alter this dynamic. A similar argument might be advanced in response to the b) scenario. The ultimate similarity of the physical and virtual worlds suggests that virtual robots ought to receive the same considerations as their physical counterparts, with the minor stipulation that their micro-ethical treatment be adjusted according to the elasticity uniquely available to the virtual world (as a real human's own treatment is adjusted).

Aside from the existence of robots, the virtual world also gives rise to a certain environmental ambiguity that raises other important ethical questions. Whatever we might wish to assert regarding the similarity between the virtual and physical worlds, the fact remains that contemporaneity has relegated the virtual world to the realm of the unreal. On this view, since the virtual world is not real, the actions that occur within it do not deserve the same moral considerations as actions that occur without. In some instances, this is self-evidently in line with the aforementioned ethical elasticity afforded to the virtual world by nature of its ontological pliability. For example, nobody would argue that killing another player's avatar in a multiplayer VR shooter is morally equivalent to killing them in real life. However, is there a major difference between shouting racial slurs at someone's virtual avatar and shouting them at someone in real life? Suddenly, we are not so sure. This ambiguity arises not from the suspension of someone's ethical personhood within the virtual realm, but from the sudden reassertion of it. Killing someone in a shooting game does not violate their rights as a moral person any more than tackling someone in a game of football does. But for a long time popular society regarded the virtual world as so unreal as to be without the existence of moral persons entirely. With advances in technology (allowing for greater immersion) and the reexamination of the nature of the virtual world that has taken place as it has matured, we have begun to recognize that moral persons do exist within it. Thus, actions such as unsolicited harassment have to be reexamined. Thankfully, we have started to realize that the notions of consent that exist in the real world ought to also exist in the virtual world. If we enter a shooting game with a reasonable degree of knowledge about what to expect, then we can't complain if someone shoots at us. But we might have cause to complain if we are walking around in a fully-immersive virtual park and somebody runs up and slaps us. Let us investigate a few recent instances that demonstrate the need for a robust virtual application of our ethics.

In October 2016, Jordan Belamire published a piece on Medium titled "My First Virtual Reality Groping." The piece details her experience playing a virtual reality game and finding, to her horror, that certain men within it were all too eager to simulate (poorly, in the case of the game she was playing) inappropriate and nonconsensual sexual acts on her avatar. Sadly, Belamire's title implies that, while this is the first time she's been "virtual reality groped," it will not be the last. Reports such as these suggest that the sexual harassment that plagues women in real life will also run rampant in the virtual world.

Let us turn to another telling incident. Two years before Belamire published her piece, Grand Theft Auto V, an already controversial game in a series perhaps even more prone to controversy, was the subject of a stranger, but perhaps more indicative, debate regarding virtual sexual assault. Hackers (endemic to the game's online mode, as any player from that time will tell you) found a way to virtually "rape" other players' characters. By repurposing animations from another area of the game, hackers could force a player to bend over and then, with their own character, they could mimic the motions of sex, pelvic thrusts and all. Short of exiting the game, there was no way for a player to control whether or not this happened to them. The exploit was fixed, but not before it sparked a massive debate regarding the actions of the hackers, with some calling it harmless virtual fun and others condemning it as something just short of full-fledged sexual assault. In another popular multiplayer mod/game of the 2010s, DayZ, players, tasked with surviving both a zombie outbreak and the machinations of their fellow players, frequently found themselves captured by other players, bound by handcuffs, forced to wear sacks over their heads, and compelled to perform a variety of denigrating tasks in exchange for their lives.

To someone unfamiliar with games and gaming culture, these incidents probably seem bizarre, even depraved. To longtime players of multiplayer games (we sadly count ourselves among this group) these incidents seem almost mundane. Gaming has, for years, reveled in simulating sexually inappropriate acts. In 2004, Illusion Soft released Battle Raper, a combination fighting/raping game. One of their later games, RapeLay, was streamlined in the sense that it removed any conceit of traditional gameplay and focused entirely on the rape. Much earlier, during the primeval 8-bit days of gaming, Mystique released the now-infamous Custer's Revenge. The objective of the game, as the title implies, was to control Custer as he schemes to rape a Native American woman. Some might claim that these examples are extreme (they are) and that they are just games and thus not morally equivalent to real life actions (also true), but both of these objections again miss the point. The point is not that being groped in VR is, right now, exactly morally equivalent to being groped in real life. The point is that VR is viewed as a wild west realm in which, quite literally, anything goes. The further point is that, given gaming's troublesome past and suspect present, it is unlikely that forces within the culture are going to act to fix the problem before it balloons uncontrollably. When a player enters a game, they do not adopt a new set of ethics to correspond to their new environment. This lack of a new ethics might be acceptable (in fact, it is similar to what we advocate for), but in addition to failing to adopt any new ethics the player also sheds the vast majority of the ethics they held in the real world. Thus, they end up with essentially no ethics. Some players attempt to police speech (players are sometimes banned from servers for explicitly racist or bigoted statements) but speech is almost always the only thing anyone polices. The actions of players are, quite simply, not viewed as actions, and are therefore not subject to ethical considerations at all.

From the lens of physical ethics, this is fine. Nobody cares if you kill a character in Quake. Doing so is explicitly the point of the game. Quake can be (and sometimes is) regarded as a type of sport. And players are captured and bullied in DayZ because the game is intended to simulate a zombie apocalypse. Players are just roleplaying. And while nothing can explain the sheer stupidity of one of gaming's most iconic staples (teabagging), it, too, is little more than silly fun. But all this ignores the various other lenses through which ethics can (and should) be viewed. A robust virtual ethics skirts these stupid problems because a virtual ethics doesn't give a shit about any of that. Killing another player in a killing game doesn't matter, and a decent virtual ethics thus doesn't ruminate on it any more than sports ethics ruminates on a linebacker tackling a running back. Instead, a virtual ethics looks to regulate the major interpersonal and societal human-to-human and human-to-pseudo-human interactions within the virtual world. Virtual ethics are, thus, nothing more than physical ethics that have been armored against the shedding effect levied against ethics during their transplantation into the virtual world.

To give a simple analogy, consider a game of chess. Chess exists in the analog world. Players compete in chess by "killing" each other's units in the same way they compete in a video game by "killing" each other's avatars. None of it is real killing. No physical ethical system is deemed hypocritical because it prohibits murder but allows killing chess pieces. But, for some reason, when physical ethics are transferred 1:1 into the virtual world every idiot and his brother runs around yelling about how they can't ethically murder their neighbor but can kill in a video game. Furthermore, we wouldn't even say that we're "suspending" the ethical considerations in light of chess' unique circumstances. We simply realize that nobody is dying in chess and therefore the ethical prescription against murder is not broken. Thus, a virtual ethics, much like a sports ethics, is just an ethics tailored to meet the unique needs of its specific domain but not significantly altered or suspended.

Given this, the problems emerging in virtual reality become much more immediate. Virtual reality might never be "real," but it can damn quickly get to "not unreal." Perhaps it isn't there yet. Nobody is calling for arrests yet. At the moment, your avatar being sexually assaulted in Grand Theft Auto isn't massively more inappropriate than your chess opponent simulating the rape of your queen. That is to say, it's clearly gross and a massive breach of chess conduct, and would certainly get you ejected from any self-respecting tournament, but it isn't illegal. Being groped in QuiVR is obviously worse, because it is a version of "you" being groped. But perhaps it isn't at the point where it should be illegal. But soon gaming in VR is going to feel as real as walking down the street. And when that happens, somebody jumping up and grabbing your crotch, breasts, etc. is going to feel just about the same as it would feel in real life. You can exit the game, but it only takes a second for somebody to ambush you and cop a feel. And isn't a second bad enough? If your boss grabs your ass as you walk by his desk, but only just for a second, does that suddenly not constitute sexual assault? By the time you exit the game, the assault has already occurred. And all this is to say nothing of the improbable (but not impossible) danger posed by becoming trapped inside a virtual reality game. Hopefully the mechanisms by which this could occur never exist, but it is conceivable, and should remain at least a distant concern.

All these factors give rise to a few major ethical questions. First, if virtual reality becomes indistinguishable from real reality, does sexual assault that occurs within it constitute "real" sexual assault? We would say that yes, it clearly does. However, is the line drawn only at "indistinguishable" from reality? What about VR that is "almost indistinguishable" from reality? We will examine exactly where we should draw the line later in this essay. And when we say exactly, we mean exactly, or else perpetrators will wiggle ad infinitum. Second, how can we classify video game experiences in a way that differentiates real murder (prohibited under our ethics) from the sort of expected killing we see in our chess games? This concern becomes especially relevant and even more complex as virtual reality grows in realism. Finally, what can/should be done to prevent these abuses from occurring in the first place?

The Resolved Issue of Virtual Reality Rape

Stopping virtual harassment involves more than just implementing greater security and customization options. Harassment, whether virtual or physical, exists because of a pervasive cultural disease that allows toxic individuals to intellectually suspend the moral personhood of their targets in order to justify subjecting them to otherwise morally prohibited treatment. The cure to this disease is manifold and complicated, and well beyond the scope of this essay, but we should optimistically note the great strides that have been made in the past fifty years. Instances of harassment, and the blanketing prevalence of the toxic cultural ideology of non-personhood that allows them, have diminished substantially. While there is still much work to be done, we feel that the activism and education of the last half-century will continue in fervor and thus continue to positively reorient the culture in which we live.

We do not mean to wax Pollyannaish, as the reactionary forces that seek to derail these advancements are numerous and powerful. While the genuinely conservative traditionalists seem doomed only to stall, the faux-progressive forces of neoliberal feminism and corporate identity politics genuinely threaten to direct the movements of culture towards a cesspool of interpersonal worship and establishmentarian social justice. While these ideologies utterly fail to analyze and challenge notions of power and power-structures, and thus fail in their stated goal of truly elevating the downtrodden, their fetishism of interpersonal interactions does, at the very least, result in the decrease of immediate harassment, especially within the venues we are concerned with. While we are not so optimistic about the general direction of the cultural arch-ideology, we can at least take solace in the fact that the type of harassment we are discussing here is diminishing. While true justice in any venue, including virtual reality, ultimately includes an analysis of and reckoning with power, these are not problems unique to virtual reality, and are thus outside the scope of this essay.

However, even if societal prescriptions tend towards the prohibition of harassment, it remains an incontrovertible fact that some jackals will do it anyway. We've frowned on murder for some time now; people still do it. While we generally oppose broad generalizations regarding "human nature," and feel it necessary to state that large-scale economic reform, economic justice, and cultural reorientation would go a long way in reducing the rates of these and many other genuine crimes, the fact still remains that certain rogue whackos will commit them regardless.

Thus, we need to specify exactly when virtual harassment goes from inappropriate and gross to criminal. Furthermore, we need to outline what measures we should put in place to prevent this kind of behavior.

First, we should clarify our position on lines. We believe that it is better to set a line and stick to it, even if many regard that line as arbitrary and comically specific, than to refuse to draw a line at all and allow offenders to perpetually wiggle. We cannot avoid these lines being arbitrary. The age of consent is arbitrary. What about turning exactly eighteen makes someone suddenly able to consent? Nothing, but we needed to set the age of consent at something. If our current, arbitrary age of consent did not exist, certain individuals would have a field day. What matters in this case isn't the exact number, but that we picked a semi-reasonable one and enforced it. We can argue about our exact line for when virtual harassment becomes criminal later, but for now we need to pick a reasonable line and set it. That way, when some jackal inevitably commits a virtual rape, they can't point to the lack of a definite line and argue that their rape is therefore "not real."

Keeping this in mind, we propose that any concentrated nonconsensual contact with a player's private areas should be considered criminal sexual assault if the player being assaulted can feel it. Take, for example, a player who is wearing a full-body haptic suit. Assuming that the player's crotch is subject to feeling, grabbing that player's crotch without their consent would constitute sexual assault. At the point where a player is able to feel what their avatar feels, however rudimentary that feeling may be, the player's body and the avatar's body should be considered morally indistinguishable from one another. This guideline might entail a player and avatar morally sharing only some parts of their body, while other parts remain distinct. While this situation seems strange, it does not present an unnavigable obstacle for our ethics.
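To show that this rule is enforceable in code and not just in prose, here is a hedged sketch in Python (hypothetical names throughout, not any real engine's API). The central predicate mirrors the prescription exactly: contact with a private zone counts as assault only when the target's haptic configuration means they can feel it and they have not consented.

```python
from dataclasses import dataclass, field

PRIVATE_ZONES = {"chest", "crotch", "buttocks"}

@dataclass
class Player:
    name: str
    haptic_zones: set = field(default_factory=set)  # zones the suit renders feeling to
    consents_to_contact: bool = False

def classify_contact(target: Player, zone: str) -> str:
    """Apply the rule proposed above: felt + private + nonconsensual = assault."""
    if (zone in PRIVATE_ZONES and zone in target.haptic_zones
            and not target.consents_to_contact):
        return "criminal sexual assault"   # the player felt it and never agreed to it
    if zone in PRIVATE_ZONES:
        return "gross breach of conduct"   # not felt (or consented to), still sanctionable
    return "ordinary contact"

alice = Player("alice", haptic_zones={"hands", "chest"})
print(classify_contact(alice, "chest"))   # -> criminal sexual assault
print(classify_contact(alice, "crotch"))  # -> gross breach of conduct (no feeling there)
```

Note how the felt/not-felt distinction falls out of the target's own haptic settings, which is exactly the partial body-sharing point made above.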

Given these prescriptions, what steps should developers take? First, developers should always give players as many options as possible to customize their experience. Developers might implement a personal-space safeguard option that prevents other players from coming within a certain distance. Developers should also always allow players to customize what they feel, where they feel it, and at what intensity. If a player doesn't want a game to simulate feeling in their crotch, the game should accommodate that.
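
To make this concrete, here is a minimal sketch of how such options might be structured. This is Python-flavored illustration, not any real engine's API; every name in it (ComfortSettings, BodyRegion, and so on) is our own hypothetical invention:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class BodyRegion(Enum):
    HEAD = auto()
    TORSO = auto()
    CROTCH = auto()
    LIMBS = auto()

@dataclass
class ComfortSettings:
    # Hypothetical per-player options: a personal-space bubble plus
    # per-region haptic intensity caps (0.0 disables feeling entirely).
    personal_space_m: float = 1.5
    haptic_intensity: dict = field(
        default_factory=lambda: {region: 1.0 for region in BodyRegion})

    def disable_region(self, region: BodyRegion) -> None:
        # e.g. settings.disable_region(BodyRegion.CROTCH)
        self.haptic_intensity[region] = 0.0

def inside_personal_space(settings: ComfortSettings, distance_m: float) -> bool:
    # The safeguard: avatars inside the bubble are blocked or pushed
    # back before any contact can be simulated.
    return distance_m < settings.personal_space_m

def felt_intensity(settings: ComfortSettings, region: BodyRegion,
                   raw_intensity: float) -> float:
    # Touch the player hasn't opted into is never simulated: a cap of
    # 0.0 silences the region no matter what another player does.
    return raw_intensity * settings.haptic_intensity.get(region, 0.0)
```

The design point is that refusal is the default: a region the player has zeroed out can never be made to feel anything, regardless of what another player's avatar does.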

As virtual reality becomes more advanced, developers are going to have to spend more of their time predicting and coding around the potential risks associated with their software. This presents some difficulty, especially considering that most of these developers slept through whatever barebones ethics course their university forced them to take. It might be prudent for developers to set up a board that tests and approves games/software before launch. Perhaps, if outright approval strikes one as too authoritarian, the board could instead test and make recommendations, or issue a classification similar to those that ratings boards currently assign. The board needn't even be mandatory, but seeing that a game has been tested and approved by it might provide some people peace of mind.

Finally, developers should be extremely transparent and honest about the sort of experiences that people can expect inside their game. If games are honest about the experiences within them, then people can make an informed decision about whether or not they want to play. Thus, people can consent beforehand to whatever they might experience. If, for example, a game bills itself as a no-holds-barred survival experience, players should be made aware of this before they purchase and play the game. They should be told in clear and comprehensive terms what they can expect when playing. If the game has been designed to allow players to shoot and grope each other in a semi-realistic fashion, players should know to expect that. There might very well be niche games in which "rape" is completely allowed. Of course, because the game would make this extremely clear, and everyone playing it would be required to fully consent, it wouldn't really be rape at all, but a kind of technologically advanced roleplay. This kind of thing is perfectly acceptable, provided that it is safe and consensual for everyone involved. Of course, games of this type would naturally be more "dangerous" than others in that they would run a significant risk of crossing the line. For this reason, developers will need to be held to a high standard. This is a tall order, especially when one considers the sort of shenanigans most current game developers are constantly up to. But it has to be done. Frankly, nothing short of a complete overhaul of the modern gaming and software industry will suffice. The reader is probably beginning to detect a theme.

Carnal Relations with "Underage" Robots

While the notion of "underage" robots adds another layer to this already complicated discussion, the ethics that govern their use/non-use actually follow the general framework we will apply to "regular" robots. Rape means having sex with someone who does not consent. Since minors are unable to consent, having sex with them is always rape. The main problem raised by underage robots is, therefore, whether a robot's underage status prohibits it from consenting in the same way that being underage prohibits a human. If the robot is an object, then the notion of consent holds no meaning for it, and the only valid objection to having sex with it would be to argue that the act harms the user and/or causes the user to grow more likely to harm other moral persons. However, if the robot is a moral person, then their age might become relevant. This scenario raises a tricky question. How do we define a robot's age?

First, let us deal with the simpler case and assume that our robot is not a moral person. By assuming this, we remove the need to consider the robot itself in our moral calculus. Let us also assume that we have decided that this object-robot should be considered "underage," or at least a convincing representation of "underage." What arguments remain for banning sex with it? Three major ones: a) it's gross and we don't like it; b) it harms the user and should be prohibited to protect them against themself; and c) it leads the user to harm other moral persons.

Argument a), while taken seriously by many, is stupid. A considered system of ethics, and especially one that seeks to influence legislation, should not base its prescriptions on what one finds "icky." Certain ethicists will argue that what we consider icky isn't arbitrary, but is actually a hardwired evolutionary response to things that are truly bad. The discomfort, in this model, serves a role similar to pain-response. We feel uncomfortable because our evolutionary wiring wants us to avoid that thing. We would counter this claim by suggesting that what one finds uncomfortable is often the result of cultural conditioning, and that discomfort does not always serve the purpose of discouraging genuinely harmful behavior. It wasn't long ago that race-mixing was considered icky by many, but we don't concede that the aversion to it was an evolutionary pain-response to harm.

While everyone understands that eighteen is an arbitrary delineation, most people also understand that having carnal relations with "children" is bad because it demonstrably harms the child in question. But the question here is whether we should ban such acts even when we can't prove harm, but do feel that they are icky. While there is a strong emotional case to be made here, we find the moral logic lacking. For something to be explicitly banned via legislation, it should cause demonstrable harm beyond eliciting vague discomfort. Thus, while banging your child-bot on the bus should be banned on the grounds that it causes a significant disturbance, engaging in private with a child-bot can't be banned only on the grounds that it is gross.

Argument b), that having sex with the child-bot harms the user, actually contains two dimensions. First, we need to decide whether we should prohibit something simply because it harms the person doing it. Second, we need to figure out whether it is even true that engaging in such an act harms the person doing it. The second question raised by this argument actually spills over into argument c), because the most convincing way to prove that having sex with a non-person child-bot harms the user is to demonstrate that it makes said user more likely to engage in such an act with a real child. Molesting a real child harms not only the child (though the child certainly bears the brunt of the harm) but often the perpetrator as well, by making them more likely to suffer the legal and moral consequences of such a heinous action. Thus, if we can show that engaging with representational children makes one more likely to pivot to the real thing, we could argue for banning engagement with the representations on not one but two grounds (argument b) and argument c)).

Due to this, we don't actually need to investigate whether or not something should be banned solely because it harms the user. It is extremely unlikely that engaging with child-bots would only harm the user. However, on the off chance that this does end up being the case, we'll present our thoughts. We believe that if an activity/substance harms only the user, and if the user is made fully aware of the risks involved in said use, then the activity/substance should be permitted. This permission does not preclude campaigns meant to discourage the activity (such as anti-smoking advertisements or cigarette taxes), only its criminalization. As long as things like alcohol, sweets, and cigarettes are legal, the state cannot in good faith ban things solely on the basis that they harm the user. Of course, the state rarely acts in good faith, but that's a topic for another time.

With that dangling concern resolved, let us turn to the crux of this debate. Does having sex with child representations like child-bots make one more likely to pursue sex with actual children?

The Aspect of Twilight

This question might seem hopelessly rooted in sci-fi assumptions and ethical thought experiments, but it is actually extremely relevant to our contemporary lives. For years, a heated debate has raged over the moral status of certain forms of child pornography. These debates so closely mirror the issue discussed above that we would do well to outline them here.

First, a word on what we'll refer to as "real child pornography." Any pornography that involves actual children (moral persons) is clearly bad and should be banned. The most succinct argument against the sale, distribution, purchase, creation, etc. of this type of pornography is that engaging with it perpetuates a genre that causes demonstrable harm to children. Whether you are the one making it, or just some rube consuming it, you are engaging in and propping up an industry that harms moral persons.

However, we are not talking here about pornography that contains actual children, but pornography that contains representations of children: drawings, CGI, animations, etc. The common argument in favor of allowing this sort of pornography is that it is not real, and thus does not harm actual moral persons. This is true insofar as engaging with the material itself is concerned. But if we can show that engaging with the material makes one more likely to pursue the real thing, we can link "fake" child pornography to instances in which real moral persons are harmed, and thus begin to construct an argument in favor of prohibiting such materials. We say "begin to construct" because, as strange as it might seem, the leap from a material leading to harm to banning said material is larger than one might assume. First, there is the possibility that, despite the statistical increase in harm said material causes, there is an even more pressing reason for it to remain legal. Certain segments of the gun rights community advance an argument similar to this. While they concede that widespread ownership of firearms leads to higher rates of firearm-related murder, they maintain that firearms are necessary to guard against runaway state or corporate power, and thus that efforts to disarm the populace should be frustrated. Second, there is the surprisingly pervasive "we don't care" argument. Many substances/materials/etc. remain legal despite demonstrable first- and second-order harm because people just don't give a shit. Certain less reflective segments of the gun rights community advance this argument with startling regularity. While this is an incredibly difficult impulse to overcome in regards to many substances, the social taboo against pimping out children (even fake children) is strong enough among the populations subject to these laws to probably negate this argument. Since the lower classes maintain a strident anti-child-sex sensibility, it isn't likely that banning these materials would raise much ire among them. This argument might cause some fuming amongst certain old-fashioned readers, but as ardent pseudo-subjectivists, we regard it as sufficient to side-step the "we don't care" argument.

But it seems that we've put off the big question long enough. So, does watching fake child porn make you more likely to rape real kids? Does having sex with a child-bot make you more likely to rape real kids?

There are three possible answers to this question. First, it might be the case that the porn/bots do make one more likely to pursue the real thing. This could be because the fake stuff normalizes the idea, awakens a dormant desire, or desensitizes one to the point where the fake option no longer satisfies. Second, it might actually be the case that using the fake stuff lowers the likelihood of one engaging in the real thing. This could be because it acts as an alternate outlet for the sexual desire. Third, it might be the case that there is no correlation at all. So… which is it?

It will infuriate but probably not surprise you to learn that we don't know. The data on this subject is a mess. Establishing a correlation between consumption of child pornography and violence against children is easy. Establishing causation is much more difficult. Furthermore, establishing causation between consumption of real child pornography and violence against children does not mean that one has established causation between consumption of fake child pornography and violence against children. The vast majority of the research in this area has focused on real child pornography, and even among that research there aren't many controlled studies, making studies with any degree of internal validity a rarity. Some epidemiological studies and general surveys have been conducted, but they are marred by methodological problems and their conclusions are tainted by a pervasive confirmation bias. Taken as a whole, the literature on this subject presents a confusing, contradictory picture. Additionally, the issue is confused further by the different perspectives on child pornography held among various consumers. A recent study by Jonah Timer suggests that the conceptual construction of the figures in the pornography (real versus not real) is central not only to the consumer's attitude towards the pornography but also to their likelihood to contact offend. Timer's study focused on "real" child pornography. The tendency to dissociate would understandably be stronger when the images literally depict "fake" children.

Just as frustratingly, we have no data on whether or not consuming fake child pornography (loli porn, CGI, etc.) is analogous in effect to engaging with non-person child-bots. It is conceivable that researchers could emerge from this mess to confirm that, while there is no causal relationship between the consumption of fake child pornography and contact offending, there is a causal relationship between sex with a non-person child-bot and contact offending. For all we know, this relationship could actually swing in the opposite direction, as unlikely as that seems.

Our purpose in outlining the current debate on child pornography and fake child pornography is to show that even these relatively low-tech materials are mired in uncertainty and confusion. Fake child pornography has existed for years. The fact that we still can't definitively conclude whether it leads to increased rates of violence against children suggests that we won't get a ruling on child-bots until every idiot is running around with a Lolita-bot MkIV.

So, is there currently no way to demonstrate that the existence of underage sexbots leads to harm to moral persons? Actually, there is. Despite the failures of the above arguments, we can still attack underage sexbots from two convincing angles. First, we can appeal to the risks that might exist. While the data does not convincingly show that the consumption of fake child pornography leads to increases in violence against children, it also fails to show that it does not. We might argue that, due to the distinct possibility that it does, we should err on the side of caution and ban it just in case. We would then apply the same logic to child sexbots. Second, we can point to the very real and very well established use of both child pornography and fake child pornography as grooming tools. While we obviously have no data on whether or not underage sexbots could be used in the same way (since convincing underage sexbots do not currently exist), we can easily imagine a similar or perhaps even more sinister dynamic taking shape. Real child pornography, even though it is taboo and illegal, is already used to groom. It normalizes certain actions and values in the victim's mind. Is it not likely that legal underage sexbots would have a similar or even more profound grooming effect?

For these reasons, we propose that, pending further review, sexbots that resemble children should be effectively banned. Exceptions might be made in certain circumstances. For example, if child sexbots can be shown to have legitimate medical use, a licensed psychiatrist might prescribe them. However, we do not believe that, at this time, it is a good idea to allow them to become widely available. Furthermore, we believe that the moral arguments against their wide dissemination are sufficient to justify their prohibition even in a liberty-based society. Since these robots do not currently exist, we have the opportunity to preemptively legislate against them. We therefore propose robust legislation that criminalizes these robots in a way similar to how child pornography is currently criminalized. With this legislation, the wide-scale manufacture of child sexbots could be avoided. Homebuilt or imported models would be treated in the same way that homemade or imported child pornography is treated. We do not take lightly the notion of a state-enforced ban on a substance/material. The bar for potential harm must be fairly high. We believe that convincing child-like sexbots possess that potential and thus should be generally prohibited.

With that established, we need to figure out how to decide what constitutes a "child" sexbot. It will come as no surprise to learn that robots do not have "ages" in the same way that humans do. The human age of consent is based on the average time it takes a human to mature to the point where they can provide informed consent to something as serious as sex. Since advanced robots can be programmed to possess a certain mental age from the moment they are turned on, it makes no sense to define their age in terms of how long they have physically existed. In the case where the robot does have sufficient intelligence to possess something akin to "mental age," we would argue that said mental age should equal or exceed the accepted human age of consent in order for that robot to engage in consensual sex. However, mental age, in the way we are employing the term, is uniquely the property of moral persons, and thus this only applies to robots that we consider moral persons. Non-person robots require a different metric to determine their age.

In terms of human consent, we generally don't care what someone looks like. There are many people who are above the age of consent but appear to be below it. These people are obviously still able to consent. However, in terms of non-person robots, we have no useful metric besides the way they look. Furthermore, even in the case of person robots, we might want to take how they look into account. To put it simply, robots and humans are importantly different in that robots can be built to look a certain way, while human appearance is (currently) largely out of our control. Thus, in the case of person robots, manufacturers could easily skirt our prohibition on child sexbots by manufacturing robots that look like children and act like children but technically possess the mental age of the age of consent. This is already a common tactic in certain genres. Creators will animate a character that looks extremely young and then claim that they are actually eighteen. Since the characters do not actually exist, it is impossible to verify their "real" age and thus impossible to contradict the creator's claim.

While it is difficult to stipulate that a robot has to "look" and "act" eighteen (as eighteen-year-olds are an extremely diverse group), we need to try. If we don't, our prohibition on child sexbots will be in name only. As we've stated before, we believe that instituting specific guidelines is important to prevent perpetual wiggling. We therefore propose that, prior to the large-scale manufacture and sale of either non-person or person sexbots, we establish a general standard of average mental age, average physical age, and average emotional age. This will not prove an easy undertaking. But if nobody does it and in the future everyone is running around banging child-bots, at least we can say that we told you so.

To establish these standards, we propose the formation of a diverse body of experts in the medical, psychological, anthropological, cultural, technological, industrial, and linguistic fields to conduct a comprehensive survey of the nation's youth and determine with the highest degree of specificity the average mental, emotional, and physical characteristics of those who meet the age of consent. The body would publish its findings in the form of clearly actionable guidelines. Then, Congress would use these guidelines to establish regulations to be disseminated to manufacturers or potential manufacturers. Every ten years, this process would be repeated and the regulations updated. While the survey's findings would probably not change substantially each time it was conducted, we could grandfather in older models that do not exactly meet updated specifications. This approach would, at the very least, provide a decent starting point for specifying the "age" of a robot. A non-person robot, having no mental or emotional age, would only need to meet the physical specifications, but a person robot would, in order to be eligible for sex, need to meet all three of the specifications and consent in the same manner as other moral persons. Essentially, by classifying a person robot that fails to meet one of these standards as "underage," we are applying to robots the same dynamic of statutory rape that exists for humans. And while it is obviously impossible to rape a non-person robot, the act of engaging with it could still carry the same moral and legal consequences as engaging with child pornography. All this does not mean that person or non-person child-bots cannot exist, only that they can't be used for sex. One clear way to achieve this is to ensure both that manufacturers limit a child-bot's ability to have sex and that any violation of this prohibition be met with serious intervention.
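
To illustrate the shape of the rule (rather than its content, which the expert body would supply), here is a minimal sketch of the eligibility check described above. The threshold values and every name (AgeStandards, RobotProfile, and so on) are hypothetical placeholders:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeStandards:
    # Hypothetical guidelines published by the expert body and codified
    # by Congress; refreshed every ten years.
    min_physical_age: float = 18.0
    min_mental_age: float = 18.0
    min_emotional_age: float = 18.0

@dataclass
class RobotProfile:
    physical_age: float                  # apparent bodily maturity
    is_moral_person: bool
    mental_age: Optional[float] = None   # meaningful only for person robots
    emotional_age: Optional[float] = None

def eligible_for_sex(robot: RobotProfile, std: AgeStandards,
                     communicated_consent: bool) -> bool:
    # Non-person robots need only meet the physical standard; person
    # robots must meet all three AND consent like any other moral person.
    if not robot.is_moral_person:
        return robot.physical_age >= std.min_physical_age
    meets_all = (robot.physical_age >= std.min_physical_age
                 and (robot.mental_age or 0.0) >= std.min_mental_age
                 and (robot.emotional_age or 0.0) >= std.min_emotional_age)
    return meets_all and communicated_consent
```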

What About Wheel Women?

One might continue with the above logic to argue that we ought to ban all sexbots on the grounds that they might lead to harm to adult persons. This argument does carry a certain weight. If we ban child sexbots on the grounds that engaging with them has the possibility to spur people to hurt real children and that their existence perpetuates cultural and individual instances of grooming, then we could argue that we should ban adult sexbots on the grounds that their use leads to increased rates of violence against men and women and that they contribute to the harmful objectification of moral persons.

While this argument is convincing, we should note that there is an important difference between child sexbots and adult sexbots. Sex with children is always harmful, while sex with adults is only harmful because of the toxic culture in which it currently exists. Some might argue that this distinction is irrelevant because adult sexbots will contribute to this toxic culture. Thus, because of the inherent objectifying influence of sexbots, they shouldn't be allowed. While this argument sounds convincing, we ultimately reject its fundamental premise. We do not believe that sexbots, much like adult pornography, are inherently objectifying. Contemporary pornography might objectify and harm, but only because it exists within and perpetuates the culture that created it. We simply do not believe that pornography is inherently objectifying, and we extend this logic to sexbots. To argue that pornography is inherently objectifying would have a profound effect on our view of all media, which we currently regard as necessarily representational but not necessarily objectifying. The fact that sexbots, as we've stated, would in reality most likely be multipurpose robots that sometimes engage in sex should also serve to soften this concern.

Obviously, we have no data on this. Even if we take contemporary pornography as a rough analogue, we still lack the data. The claim that consuming pornography leads to violence against women is contentious at best, with studies suffering from the same methodological problems as the studies dealing with violence against children discussed above. And measuring something like objectification is even more difficult. Furthermore, while pornography can be used to groom adults, it doesn't, due to the advanced average mental age of adults, enjoy the same effectiveness.

Despite all this, one might still be compelled to take our above arguments against child sexbots and apply them here. However, in this case there exists a better option. Banning adult sexbots, assuming it can be shown that adult sexbots lead to violence against persons, would be treating a symptom. While banning child sexbots is technically treating a symptom in the same manner, the difference is that the underlying problem here is that contemporary adult pornography renders women as objects and thus decays notions of consent. This problem can be fixed in regards to adult pornography, but cannot be fixed in regards to child pornography, since children, by definition, are unable to consent.

Thus, we propose that in the interest of preemptively neutering the objectifying influence of sexbots, we institute massive cultural and legal reforms to attack the root of the problem at its source. The problem can be rendered simply as an incomplete cultural recognition of moral personhood. This failure of recognition gives rise to misogyny, homophobia, etc., in addition to the dubious sexual ethics that plague our society and cause our pornography and hypothetical sexbots to perpetuate negative outcomes. Pornhub, for instance, has a massive problem with tolerating videos of rape and hosting child pornography. Its pervasive tolerance of these videos not only continues to traumatize their victims, but incentivizes the production of such videos. But this does not mean that pornographic videos or websites that host pornographic material are necessarily bad, only that the toxic culture currently in place has enabled these leeches to bloat their profits off the backs of suffering persons. If we reform the culture, then many of the problems sexbots cause would evaporate. This assumes, of course, that sexbots are not inherently problematic, but, as we've said, this is a notion that we reject. If it can be shown that they are inherently problematic, we will reevaluate our position. But the methodological problems that plague such studies, and the fact that the ostensible subject of our inquiry doesn't yet exist, mean that evidence contradicting our beliefs won't be quickly forthcoming.

As to how we should go about reforming our toxic culture, that topic is unfortunately too large to deal with here. A good start would be to agitate for a society of democratic ideals, egalitarianism, sustainability, justice, an emphasis on moral personhood, and shared notions of community support. Such reforms would ensure that the oft-cited concern of "what if sexbots replace real women?" becomes as far-fetched as it should be.

Regulating Advanced Masturbation

When we're discussing sex with non-person robots, we're essentially discussing a complicated type of masturbation. Since non-persons are classified within our moral framework as objects, there is essentially no difference between using a dildo and using a non-person sexbot. Even if the robot exhibits certain traits resembling those of people, we wouldn't consider it a person. Thus, as long as the robot does not violate our previously discussed prohibition on child sexbots, we cannot, due to our liberal stance on regulation, take too strong a stance on their governance. We believe that the only regulations such sexbots should face are manufacture-focused regulations meant to ensure the safety and quality of the products. These health and safety regulations would not be fundamentally different from the ones in place for automobiles. We do not require that cars meet certain standards because we care about the cars, but because we care about the persons driving them.

While it is beyond our scope to comprehensively outline every safety feature that ought to be installed, we do feel prepared to outline some basic guidelines we feel would be in the best interest of the user. We suggest that the legislature begin at once the formation of a diverse panel of experts to make concrete recommendations for safety that can then be codified into law. Hopefully, this approach would help avert some of the technological headache that comes with any newfangled tech being introduced into the wild west.

First, the robot should meet all the standards that other consumer electronic goods are subject to. Arguably, because of the intimate nature of their use, sexbots should meet even higher safety standards. It is absolutely imperative that the robot does not, for example, burst into flames while someone is buttfucking it. This isn't much of a risk with the glorified blow-up dolls we have now, but it becomes more of one as the robots advance in sophistication. In addition, as the robots advance, we need to consider their programming more thoughtfully. This, too, is largely a matter of protecting the user. As long as we haven't decided that a robot is a moral person, it should be regulated as any other complex object is. In this area, at least, we feel somewhat comfortable, as the current culture-economy does a decent job of creating electronic goods that don't have disastrous direct effects. While the culture-economy is awful at accounting for the long-term effects of electronic goods, most of these effects are either a) not unique to sexbots and thus outside the scope of this paper or b) addressed in the above section, in which we specify that dramatic shifts in the culture-economy should take place in order for us to use sexbots without perpetuating certain currently extant negative dynamics.

Zombies on the Block

All right, boys and girls, hang on to your britches, because we've finally arrived at the moment you've all been waiting for. Let's say you're a certain French sack of neurons. You're hanging out, thinking about being, and you come to the strange but convenient conclusion that animals are nothing but automata. They display the behavior of living beings (fear, pain response, etc.), but they are only acting according to preset programming. You, noted homunculus-head, could go on to justify doing basically whatever you want to these animals, comfortable in the knowledge that they aren't sentient, and are therefore utterly undeserving of moral personhood.

We reject this framework completely. Descartes was wrong. Similar things are being said about the possibility of robot-consciousness. While it is obviously fallacious to assume that because Descartes was wrong, the modern critics of robot-consciousness must also be wrong, we believe it is important to note that there are precedents for ostensibly smart dudes making comically cruel assumptions regarding the nature of consciousness that are later largely disproven.

Oh no, you say, you're not going to dive face-first into functionalism, are you? Well… sorta. We are not utterly convinced of functionalism, but we should note that many of the major objections to it are less than convincing. The notion of philosophical zombies being created by such elaborate and bizarre Turing machines as bee swarms or the nation of China seems damning at first, but there are several problems with these objections. First (and not unimportantly), these objections concoct ridiculous but technically mind-analogous situations in order to elicit an emotional absurdity response. They do not logically decimate the models of thought on trial. The only way to see whether all of China equipped with radios would exhibit consciousness is to equip all of China with radios and see what happens (Xi, if you're reading, you should do this). But even if our China-brain doesn't exhibit consciousness, that is not necessarily damning for the concept of robot-consciousness. It could very well be the case that consciousness can only arise from the connections between certain (neuron-like) modules, and that the China-brain does not satisfy this condition while both human and robot brains do. This would also explain why current computers exhibit no real sense of consciousness (that we can detect) and yet consist of so many small connections. The difference between any old material and neuron-like materials might be the difference between consciousness and the lack thereof. Of course, it might be the case that robot brains, no matter how advanced they get, will never be composed of materials close enough to neuron-like to birth consciousness. Unfortunately, we don't have a definitive account of the nature of consciousness. But we also don't have really solid proof against solipsism. We don't run around doing whatever to you jackals because, at the moment, we're hedging our bets.

Furthermore, we can also suspend our normally liberty-based view because, in this instance, we are dealing with presumed conscious entities. It might seem like a shady rhetorical trick to assume consciousness and then justify the suspension of liberties on that basis, but we should note that this is not a baseless presumption, and it therefore follows a logic similar to the suspension of liberties on an anti-solipsist basis. Furthermore, we are not clearly specifying at what point a robot begins to consist of neuron-like materials, as this too is currently unknown, and thus we are not massively suspending liberties without reason. While it greatly pains us to admit to not having an answer, we must concede that, at least as far as the viability of robot-consciousness is concerned, we will have to pass the responsibility on to a future generation (or to ourselves in the future).

What we are sure of is that the possibility of robot-consciousness, and thus of moral person robots, has not been ruled out. A neuron-like material with which to construct robots might exist. It might be the case that the category of neuron-like extends further than we currently think, that the hardcore functionalists are correct, and that we've simply failed thus far to realize it. At any rate, we intend to err on the side of caution when doling out the category of moral person, and thus we advise, in this area particularly, extreme caution, compassion, and patience. Critics of the sort of consciousness-models that posit robot-consciousness sometimes argue that these models have no clear benefit over their alternatives and thus cannot be justified in terms of utility. But this is simply not true. The benefit of erring on the side of caution and risking classifying objects as persons is that if they are persons, we have avoided an immense amount of harm. If we are wrong and they truly are objects, we've been unnecessarily kind to some sextoys and vacuum-cleaners. Humans are quick to deny other beings (and other humans) moral personhood on flimsy justifications if it proves convenient for them. This tendency has led to atrocity after atrocity. Perhaps it is worthwhile to be careful for once and see how things go.

On the Border of Sentience

You don't expect me to be nice to my toaster, do you? you ask, Dorito crumbs spewing out of your gaping maw. Seeing as how it's nearly impossible to get people to treat other humans with any degree of respect, we struggle to imagine succeeding in convincing people to treat robots well. It would have been equally difficult to convince Columbus to treat the natives respectfully. Nevertheless, we feel that we must provide guidelines for what deserves personhood status. If everybody continues to rape, murder, and eat everything that moves, then when we're all roasting in Hell at least we can say we tried advocating for decency.

Thus far, we've been using the term "moral person" to refer to any conscious being we consider deserving of moral respect. The full extent of what constitutes moral respect is a topic for another time, but our immediate aim is to extend the classification to certain robots to avoid the most egregious offenses against them (rape, murder, etc.). Since we're dealing for the moment with self-evident moral wrongs, we feel a comprehensive outline of our notions of moral respect is not strictly necessary. However, we can summarize by stating that a moral person deserves to live largely unmolested, to be respected as a member of the community, and to be provided for as best as possible.

The implications of all this might strike some as alarming. If we consider a robot a moral person, then would it not deserve the fundamental right of freedom from ownership? Yes. That's what being a moral person means, you dingus. Stop whining about the fact that you have to treat things well. Though, if you're still upset, you'll be happy to know that there are certain loopholes that negate much of this. More on that later.

So how exactly do we specify what is and what is not a moral person? Keeping in mind our tendency to be comically specific, let's lay out some guidelines.

First, the thing in question should either itself exhibit a desire for self-preservation or be of a type that generally exhibits such a desire. If a creature's type exhibits these desires with any regularity, then all members of that type meet this requirement. Thus, a human in a deep sleep still meets this requirement, even if they are not currently exhibiting the will towards self-preservation. Humans in general do exhibit these instincts, and thus all humans automatically meet this requirement. The same is true of many animals: cows, pigs, cats, dogs, chickens.

It might also be possible that something of a type that usually doesn't exhibit these desires might exhibit them. If, for example, a rock suddenly began exhibiting a desire for self-preservation, we would argue that we should err on the side of caution until we figure out why it squirms around whenever we try to step on it.

We assume that robots will also exhibit this self-preservation desire (if manufacturers make the majority of robots suicidal after five years in a shady planned-obsolescence scheme, we're going to be pretty upset). Assuming that our toxic culture of waste is somewhat remedied by this point, we believe that this is a safe assumption.

Astute readers will probably point out that many plants demonstrate a desire for self-preservation. Whether or not this can be considered a "desire" in the same way as the desire of something composed of neuron-like materials is debatable, but it doesn't really matter. Because to possess the status of moral person, you or your general type must also meet the second requirement: you must have at least 3,998,756 neurons or neuron-likes.

If you meet these requirements or you belong to a type that generally meets these requirements, congratulations, you're a moral person. If some psycho tries to murder you, just scream about how many neurons you have.
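
Since we've promised comic specificity, the guideline reduces to a few lines. The following sketch is, of course, our own hypothetical rendering of the two requirements above, not a serious diagnostic instrument:

```python
NEURON_THRESHOLD = 3_998_756  # the comically specific line we drew above

def is_moral_person(neuron_like_count: int,
                    exhibits_self_preservation: bool,
                    type_generally_qualifies: bool) -> bool:
    # Two requirements: a self-preservation desire (exhibited by the
    # individual or by its general type) and the neuron(-like) count.
    # Type-level qualification covers, e.g., a human in a deep sleep.
    if type_generally_qualifies:
        return True
    return exhibits_self_preservation and neuron_like_count >= NEURON_THRESHOLD
```

On this sketch, the squirming rock from above remains unresolved until someone counts its neuron-likes, which is precisely why we counsel caution in the interim.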

Hyper-Nature

What about how robots are programmed? Is it okay to program a moral person robot to desire a certain thing?

First, the robot isn't a moral person until it's turned on. Second, programming, in a broad sense, is necessary not only for robots but for all forms of life. Humans contain "programming" in our brains, DNA, etc. The idea that a moral person shouldn't be programmed makes no sense. Since we largely reject wacky notions of souls and essence, we are left only with the brain as it exists. In a similar vein, we are left only with the robot as it exists. We could object to programming a robot that only feels pain on utilitarian grounds and on our previously discussed grounds of consent, but not because it undermines the robot's "essence." As long as you are not programming the robots in intentionally cruel ways, you're probably all right. Luckily, cruelty usually requires a level of nonconsent, which we've already accounted for. For example, let's say you want to genuinely rape a robot. You program the robot to not want to have sex with you but build it in a way that renders it largely unable to fight back. Then you proceed to rape it and revel in how it clearly desires not being subject to your sick assault. We don't need to object to this on the basis of programming, because it already violates our notion of consent. If a moral-person robot does not consent to something, then you can't do it. You might program it to consent but act like it doesn't consent, but that's just roleplaying. Though, to avoid the disastrous situation of programmed but unexpressed "true consent," we advocate for robot-consent following the same patterns as human consent. It needs to be clearly communicated, not inferred or attested to. In general, we reject the hideous notion of truly held but unexpressed consent as being morally catastrophic and borderline meaningless.

Hang on a moment, you say, if it's acceptable to program a moral person robot to desire something (say… slavery) and then use them in that way, I can effectively skirt most, if not all, of the rights they are due based on their status as moral persons, right?

Functionally, yes. At least, our general view of consciousness and consent permits this in effect, if not in the morally important technical sense. If we take it as true that a robot programmed to desire a certain thing is sufficiently analogous to a human that desires a certain thing, and that robots are capable of consenting to experiencing that desire, it stands to reason that robots could be programmed to desire slavery and thus be morally owned as slaves. This feels wrong, because we really do want to protect robot-rights, but it is consistent with our conception of consciousness and consent and aligns with the fundamental tenets of our ethics. Simply, people should be allowed to do most things if they give informed consent. We might argue that it isn't really informed consent, because, after all, the robot is not making an informed choice given that they've been programmed into the desire. But are humans, who are "programmed" to desire things like water, food, etc., giving informed consent to eat or drink? The only way to "clear" the robot's mind to a (non-existent, honestly) "neutral" state is to wipe the programming that gave rise to the desire in the first place. At that point, we have fundamentally altered its consciousness. While there is nothing wrong with doing this in and of itself, the procedure would necessarily be nonconsensual. Thus, performing it would be wrong. Any savvy slave-bot manufacturer would not only program their robots to desire slavery, but also program them to reject any changes to their programming that cause them to either a) not desire slavery or b) choose to not desire slavery. The illogical nature of this desire doesn't matter. What matters is that the robot would refuse any operation to change their programming, and thus we couldn't do so without violating their rights of consent.

One might argue that, in this case, it is justified to intervene against the robot's will and change its programming anyway, as doing so prevents a greater evil from taking place. But does it? Is a robot doing something it enjoys really evil? Our perception is undoubtedly being clouded because we are using the (intentionally) provocative example of slavery. But what if the robot was programmed to love playing baseball? Would we watch it play baseball, watch it experience clear joy, and then intervene to fundamentally alter its brain so that it can make an "informed" decision to like playing baseball? Even if we argue that yes, we should, we are still hampered by the fact that no "neutral" state of mind actually exists. How can we program the robot so that it gains this mythical clarity, this neutral state where it can give "truly" informed consent? It would at least require the capacity to make a clear decision. But did it give informed consent to have this capacity?

The point is that it's impossible to define informed consent to mean anything other than a person of sound mind and possessing decent information deciding on something. One might wonder what it means to be of "sound mind," and why we classify humans as of "unsound mind" but not robots. We do actually extend that classification to robots. This objection simply misunderstands how we use it. Sound mind means deciding something with the base clarity afforded to one's type. A depressed human isn't of sound mind if they decide to kill themselves, because they possess a baseline of thought with which that desire is at odds. This dynamic applies to humans because their "programming" exists on a species-wide scale, with individual differences being relatively minor. Robots, though there may well be a lot of them, do not possess programming on a species-wide scale. Their individual programming is the only reasonable metric we have for establishing their baseline, unless we are going to attempt to specify a baseline of general robot behavior. We could attempt this, but there is no real point. Even if we did, it would remain true that robot behavior is subject to (collective, in this case) human design. Thus, as strange as it sounds, we can't classify a robot as of unsound mind just because its programming forces it to desire something humans generally find undesirable. It might make sense to classify an individual robot as of unsound mind if it is malfunctioning and thus desires that which runs counter to its baseline, but, especially since a baseline "beyond" its programming can't exist, we shouldn't classify its programming as somehow impeding its ability to provide informed consent.

One might wonder if it wouldn't be expedient to program robots not to feel any emotion or desires at all. This could very well end up happening. And while it might seem sad, if these robots desire nothing and detest nothing, and thus agree to everything, then they are still providing consent, albeit a form that appears exotic to most humans. But it's basically the same thing.

Functionally, robots would be widely mistreated. But since they would desire this mistreatment, it wouldn't actually be mistreatment at all. We define raping a robot as forcing yourself on a person robot that doesn't consent. However, if a robot is programmed to desire roleplaying a rape scenario, engaging in that scenario would be acceptable. But it remains critically important that we don't allow essentialist notions of programming to cloud our overriding focus on consent. If we are discussing person robots, then we need to obtain consent for everything that we do to them, just the same as we do for humans. It doesn't matter if we are 100% convinced that a robot is programmed to desire something. We cannot assume consent for robots any more than we can for humans. We can't have sex with a human without getting consent just because we're really sure they desire it. In all instances, we have to get consent.
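
The rule is mechanical enough to sketch. Here is a minimal rendering of the consent gate we have in mind, with hypothetical names throughout; the point of the unused certainty parameter is that inferred desire, however strong, never substitutes for a communicated answer:

```python
from enum import Enum, auto

class Consent(Enum):
    # Only a clearly communicated answer counts; silence and inference
    # are deliberately unrepresentable as consent.
    GRANTED = auto()
    REFUSED = auto()
    NOT_COMMUNICATED = auto()

def may_proceed(communicated: Consent, inferred_desire_certainty: float) -> bool:
    # inferred_desire_certainty is intentionally ignored: being 100%
    # convinced that a robot is programmed to desire something changes
    # nothing. Consent must be communicated, not inferred or attested to.
    return communicated is Consent.GRANTED
```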

Virtual Complications

Does any of this change if the robot is virtual and not physical? No, actually, it doesn't. Everything we've previously outlined still applies, albeit with some slight modifications to accommodate the unique composition of the virtual world. Our major concern is whether or not digital entities can be moral persons in the same way that physical entities can. We already know that any physical entity with at least 3,998,756 neurons and the desire for self-preservation (or a physical entity that belongs to a type that exhibits these characteristics) is a moral person. The question isn't whether or not virtual entities should be considered moral persons if they meet the same requirements (they should), but what it means to be made of virtual neuron-like materials.

While we foresee several scenarios in which the constitution of the virtual entity would not differ meaningfully from the constitution of the physical one, we are ultimately going to have to concede ignorance in this area. The sad fact is that nobody currently knows what a virtual moral person would look like, and since we don't yet possess the technology to make anything even close to one a reality, we cannot acquire anything resembling empirical data on the subject. Here is, laid bare, one of our great problems, one of our chief obstacles. Oftentimes the true nature of a technology does not reveal itself until the technology itself becomes a reality. At that point, you already have a bunch of bum-boys running around misusing it. We are very concerned that by the time we've advanced sufficiently to determine what it means to be composed of neuron-like materials or virtual neuron-like materials, we will have already allowed an immense amount of suffering and harm. Here again we advocate for serious care in how we proceed. While we cannot definitively answer all the questions we've posed, we can say for sure that a concerted effort to conduct ourselves in a careful and compassionate manner would not prove fruitless. Especially after we've spent so many years doing the exact opposite.

Conclusion

Do we really think that people are going to take these suggestions seriously? Perhaps not. Maybe we are just engaging in the all-too-familiar leftist activity of shouting our objections into the void in the hope that our collective successors, from the high hill of history, will look back at us and acknowledge that we were the good ones, that we knew something was wrong, that we proposed doing something about it. The problem, of course, is that yelling just to get your voice on the record doesn't change anything. It can be incredibly difficult to resist the rampant nihilism so encouraged by our culture-economy. It is much easier to feel comfortably smug in your moral superiority and daydream about how future people will view you as the only sensible dude from your era. But to do this is to fall for another one of the fatcat tricks. If you smugly maintain good morals in your mind but never do anything, you're effectively the same as those dopes who've been fooled into maintaining bad morals. You've got to do something. There's no divine justice that's going to sort out all the sinners at the far end of history. Everyone is just going to dissolve. If you want to build a better world, you've got to do something. Nobody is going to do it for you. The almost insurmountable force of our capitalist cultural/economic apathy is, if allowed to continue, going to crush under its wheels a lot of men, women, children, robots, animals, etc. And yes, it is difficult to change. As we said, these forces are almost insurmountable. Almost.


Additional Reading

  • American Oneironautics. (2019). op pls nerf. <retrieved>
  • Anonymous. (550 BCE). The Book of Genesis. Revised English Bible.
  • Jordan Belamire. (2016). "My First Virtual Reality Groping." Medium. <retrieved>
  • Hanoch Ben-Yami. (2005). "Behaviorism and Psychologism: Why Block's Argument Against Behaviorism is Unsound." Philosophical Psychology.
  • Ned Block. (1978). "Troubles with Functionalism." <retrieved>
  • Ned Block. (1981). "Psychologism and Behaviorism." The Philosophical Review.
  • Karel Capek. (1920). R.U.R. Translated by Ivan Klima.
  • Melissa Chan. (2016). "Game Developer Saddened by Virtual Reality Sex Assault: 'This Must Never Happen Again.'" Time. <retrieved>
  • David Cronenberg. (1999). eXistenZ.
  • John Danaher. (2014). "Robotic Rape and Robotic Child Sexual Abuse: Should They Be Criminalised?" Criminal Law and Philosophy.
  • John Danaher & Neil McArthur. (2017). Robot Sex.
  • Daniel Dennett. (1991). "Consciousness Imagined." Consciousness Explained.
  • Ditch the Label. (2017). "The Annual Bullying Survey." <retrieved>
  • Ditch the Label. (2019). "The Annual Bullying Survey." <retrieved>
  • Benj Edwards. (2015). "Unraveling the Enigma of Nintendo's Virtual Boy, 20 Years Later." Fast Company. <retrieved>
  • Martha Farah & Andrea Heberlein. (2010). "Personhood: An Illusion Rooted in Brain Function?" Neuroethics.
  • David Farrington & Anna Baldry. (2010). "Individual Risk Factors for School Bullying." Journal of Aggression, Conflict and Peace Research.
  • Sigmund Freud. (1919). "The Uncanny." Literary Theory: An Anthology.
  • Danko Georgiev. (2017). Quantum Information and Consciousness.
  • Walter Glannon. (2007). Defining Right and Wrong in Brain Science.
  • Elaine Graham. (2002). Representations of the Post/Human.
  • Matt Groening. (2001). "I Dated a Robot." Futurama.
  • Kenneth Gross. (2011). Puppet.
  • Dan Harmon. (2015). "Lawnmower Maintenance & Postnatal Care." Community.
  • ETA Hoffmann. (1816). "The Sandman." Translated by John Oxenford. <retrieved>
  • Illusion. (2002). Battle Raper.
  • Illusion. (2005). Battle Raper II: The Game.
  • Illusion. (2006). RapeLay.
  • Illusion. (2016). Honey Select.
  • Illusion. (2017). VR Kanojo.
  • Illusion. (2020). Honey Select 2 Libido.
  • Reki Kawahara. (2002). Sword Art Online.
  • Steven Laureys. (2010). "Death, Unconsciousness, and the Brain." Neuroethics.
  • David Levy. (2008). Love + Sex with Robots.
  • Koichi Mashimo. (2002). .hack//SIGN.
  • Nancy Murphy. (2010). "From Neurons to Politics–Without a Soul." Neuroethics.
  • Ovid. (8). Metamorphoses. Translated by Samuel Garth, John Dryden, et al. <retrieved>
  • Sidney Perkowitz. (2010). "Digital People: Making Them and Using Them." Neuroethics.
  • Kathleen Richardson. (2016). "The Asymmetrical 'Relationship': Parallels Between Prostitution and the Development of Sex Robots." ACM SIGCAS Computers and Society.
  • Riot Games. (2017). "Zoe: The Aspect of Twilight." League of Legends. <retrieved>
  • Mary Shelley. (1818). Frankenstein.
  • Sîn-lēqi-unninni. (1150 BCE). The Epic of Gilgamesh. Translated by Maureen Kovacs. <retrieved>
  • Vivian Sobchack. (2009). "Love machines: Boy toys, toy boys, and the oxymorons of A.I.: Artificial Intelligence." Science Fiction Film and Television.
  • John Sullins. (2012). "Robots, Love, and Sex: The Ethics of Building a Love Machine." IEEE Transactions on Affective Computing.
  • The Cyberbullying Research Center. (2016). "New National Bullying and Cyberbullying Data." <retrieved>
  • Jonah Timer. (2019). "'In the street they're real, in a picture they're not': Constructions of children and childhood among users of online child sexual exploitation material." Child Abuse & Neglect. <retrieved>
  • Mamare Touno. (2010). Log Horizon.
  • Auguste Villiers de l'Isle-Adam. (1886). Tomorrow's Eve. Translated by Robert Adams.
  • Lana & Lilly Wachowski. (1999). The Matrix.