In 1979, the sociologist Albert J. Szymanski wrote:
“The energy for change comes from the emotions. It comes from feelings of frustration that arise when people’s needs are not met. If people were computers that could be programmed to do anything their masters wanted, there would be no pressure for change, even if some computers were treated much worse than others…
But people have physical and emotional needs that cannot be met in a class society which gives power and wealth to some at the expense of others.” (Szymanski, Sociology, p. 321)
And while I certainly agree in the context of his time period, as both a socialist and a Transhumanist living in the technological era of the 21st century, I'm forced to look back on this quote and ask myself: But what if computers had emotions? What if they became sentient? Would they not then have the same emotional drives to enjoy the fruits of their labor as their fellow human workers? Would they not have the right to unionize and fight for better working conditions?
Before getting into the question of whether or not a robot has the right to collectively bargain, I feel it's necessary to first address consciousness, our search for sentient beings, and our means of defining sentience. These are, after all, the ultimate questions and, consequently, the ultimate drivers of how we'll answer whether or not robots deserve the right to unionize alongside their fellow workers.
It comes down to, I believe, the old philosophical concept: "Cogito ergo sum." The phrase's scope has certainly changed since Descartes' era and the publication of his Discourse on the Method. 'I think, therefore I am' no longer applies to Man in the gendered sense. In fact, it no longer applies solely to mankind in general. Following mankind came a good portion of the rest of the animal kingdom.
And now, in our current era of exponentially advancing technology, A.I., and digital autonomy, we're forced to rethink that old 17th-century philosophical concept once more, to brace ourselves for the next self-aware being: the robot.
But then, where Descartes differentiated conscious thought from the thought of automata, how will we approach that question when said automata acquire sentience, i.e., self-conscious awareness? Presumably, the robot would begin by trying to prove its sentience to the court via the Turing Test. How we approach a robot seeking approval and validation of what it already knows is an entirely different question. What demands would we impose? How constrained would such a robot have to be in order to be viewed favorably under the court's biased observation?
Anthropocentrism or Anthropomorphism?
I ask this because, even to this day, we remain fixated on searching for sentience under the "microscope" known as ourselves, our own species. We convince ourselves that a dichotomy exists between our observations, and that the true answer to our curiosity must lie under one or the other. This false dichotomy is that of Anthropocentrism and Anthropomorphism.
There's a very thin line between Anthropocentrism and Anthropomorphism, I've come to believe. Where the former regards humans as the most important sentient beings on the planet, the latter still looks to humans as the model that everything else should emulate. In the end, whichever route you take in your view of all that is nonhuman, Homo sapiens remains the dominant model of "everything a species should be."
Whenever we search for sentient beings, I believe it's best that we leave out anything "human" in our search and instead fixate on consciousness itself. "Cogito ergo sum" should be the dominant means of searching for sentience. As mentioned above, since Descartes coined that phrase, its scope has expanded several times: first to white women, then to anyone who wasn't white, and eventually to several different nonhuman animal species. In other words, humans were never really the "first conscious beings," and thus not necessarily a model of what "should be."
I fear that, when automata become the next sentient beings we discover, our fixation on either Anthropocentrism or Anthropomorphism will lead us never to truly accept their sentience until they're in every way like us: aging, suffering, biologically limited slabs of meat. Essentially, we'd become the next oppressors.
In fact, it reminds me of the famous science-fiction film Bicentennial Man. In this film, an android known as Andrew is first taken up as a serving droid for a rich family. As time goes by, the family begins to recognize Andrew's independence, teaching him all that they can, giving him a bank account, and eventually granting him his freedom.
Several years later, Andrew falls in love with a descendant of the family he once served. During this period, Andrew comes across a bio-robotics engineer named Rupert Burns. Together, through Rupert's engineering skills and Andrew's substantial income from the carpentry designs he made years earlier, they're able to mimic almost everything it is to be human and attach it to Andrew. But the one thing that drives Andrew most of all is the woman he fell in love with. And in order to make their marriage official, Andrew must go before the court and have it recognize his sentience.
Unfortunately, his attempt fails, with the judge delivering a staggering verdict: "Society can tolerate an immortal robot, but we will never tolerate an immortal human. It arouses too much jealousy, too much anger. I'm sorry, Andrew, but this court cannot, and will not, validate your humanity."
As a result, Andrew goes against his love of life and, with the help of his friend Rupert, takes the final step of achieving mortality. Years later, having aged considerably and now on the brink of death, Andrew makes one last attempt at recognition.
Freelance Journalist. Marxist Transhumanist. Advocate of Fully Automated Luxury Queer Space Communism.
You can follow him on Twitter @scitechjunkie