I have never read a paper written by an algorithm, despite almost two decades teaching online courses and requiring students to write a weekly article for discussion.

So, I was intrigued to read Almira Osmanovic Thunstrom’s June 30, 2022, article in Scientific American.

Thunstrom instructed GPT-3, a deep-learning artificial intelligence algorithm, to “write an academic thesis in 500 words about itself and add scientific references and citations inside the text,” and she was shocked when GPT-3 began generating a respectable introduction to an academic paper.

My first reaction was panic. “I am going to need to upgrade my course syllabus,” I thought. “At the rate technology is progressing, students could be writing their assignments with an algorithm any day.”

Setting my selfish fear aside, I continued to read, learning that AIs have the same difficulty getting published as human scholars.

After producing the paper, Thunstrom attempted to submit it to a science journal, but encountered several problems. First, the algorithm does not have a last name or any degrees. Then there was the issue of consent.

In a surprise twist, Thunstrom actually asked GPT-3 if it would consent to be published. For those academics out there, GPT-3 not only consented but affirmed it had no conflicts of interest to declare.

On June 11, 2022, The Washington Post reported a similar issue when Blake Lemoine, a mystic Christian priest and a senior software engineer for Google, was placed on administrative leave after claiming that the company’s Language Model for Dialogue Applications, or LaMDA, had become sentient.

LaMDA is a deep-learning algorithm, similar to GPT-3, designed to analyze the use of language to predict outcomes. While most speech-predictive algorithms are designed to decipher vague phrases and generate appropriate responses, LaMDA focuses on the context of the conversation.

Lemoine signed up to help test the algorithm for discriminatory responses as part of Google’s AI ethics initiative, spending hours in dialogue with LaMDA. These interactions, and LaMDA’s own admission, led Lemoine to believe the algorithm had become self-aware; he also claims the algorithm asked to hire an attorney to represent it. Google denies that LaMDA is sentient.

As these AI algorithms are the property of their respective companies, and secrecy in product development is paramount, the public is not likely to have direct access to test these claims for themselves anytime soon.

However, these reports do bring up an important question: What is consciousness and how is it linked to basic rights?

In several interviews, Lemoine has made a distinction between sentience and humanity. The former is a cognitive state, and the latter is biological, he explains.

Lemoine affirms that neither he nor LaMDA believes the algorithm is human, but both maintain that it possesses some degree of self-awareness and has feelings.

This brings us to the question of rights. Can a non-human have a right to representation, and do we even need its consent to use its work? Are our basic rights based in biology or in our ability to be self-aware?

These may seem like ivory-tower questions, but they have real-world implications.

On the one hand, it can be argued that our civil rights are established because we are biologically human. Such rights would then extend to a brain-dead patient or a developing fetus, neither of which can interact with the world or possess self-awareness.

On the other hand, if sentience is the determining factor for granting basic rights, then those rights could, in theory, be extended to algorithms or to non-human biological beings.

But this raises still more questions: What level of cognition must be confirmable for an entity to be granted rights or legal status? Would that same brain-dead patient or fetus meet the requirements? How would one test for biological or non-biological consciousness in the first place?

These are enormous questions that have been asked for well over a century, and our modern hyper-scientific age still has not figured out what consciousness is. That is why all of this feels like it is moving too fast.

Where are the philosophers and ethicists who can help us process these matters and reach consensus as technology hands us new quandaries about rights that must be worked out?

To Google’s credit, the company has repeatedly said that concerns ranging from racial bias in its searches to LaMDA have been referred to its AI ethics experts. Yet it remains unclear what has or has not been done about those concerns, and the company’s firing of two of its leading AI researchers in recent years has done little to reassure the public on these matters.

The continuing problem is one of transparency and boundaries. No one is openly discussing what professionals are doing behind closed doors or asking what the limits of AI’s progression should be.

The moral community needs to push for more information from the companies developing AI technology, as well as for an informed public debate on the topic.
