
There’s nothing like receiving a card in the mail. It speaks volumes.
First and foremost, it communicates that an individual took the time to think about you. As you open the card, you discover the significant investment they have made in the correspondence.
When I send “Thank You” cards, I always handwrite them using my grandfather’s 1939 Parker Vacumatic fountain pen loaded with black Parker Quink ink. The result is distinctive, making it clear that I actually wrote the message myself.
It communicates intention, importance, and elegance. It says, “You matter enough for me to take the time and effort to write.”
This is a far cry from today’s correspondence, which typically takes place with only a few words or abbreviations over text messages in rapid succession.
As our correspondence has become shorter and faster, it has also become automated. While I appreciate an automatic email response telling me someone will be on vacation for the next few days, I do not like a Human Resources automatic reply telling me I was not selected for an interview two minutes after I submitted the application.
The latter might contain encouraging words and affirmations, but I know a real person did not review my resume, let alone consider my qualifications. The Human Resources algorithm rejected my application based on selected indicators and generated an automatic response. It feels cold, leaving me irritated and offended.
We observe similar reactions in a recent Cornell University study of 219 paired participants, which evaluated the impact of AI-generated short messages on relationships. First, researchers found when smart message replies were available, participants used them 14.3% of the time, resulting in a 10.2% increase in the total number of messages per minute.
While the use of smart replies improved the efficiency of the dialog, researchers found that the mere perception of their use damaged the relationship: if the receiver suspected the sender was using AI-generated messages, the sender was perceived as less cooperative, less connected and more dominant.
The key here is not the use of smart replies but the receiver’s assumption that AI was used to generate the replies. Thus, the mere knowledge that smart text replies are available can increase mistrust and negative feelings during a conversation, even if they are never used.
Surprisingly, when AI-generated smart replies are used but the receiver does not perceive them as being created by AI, the sender is perceived as being positive and cooperative. This may be because AI-generated speech tends to be more positive than everyday human speech. If both parties are using smart replies, then the positivity is synergistic and unrealistic.
This is significant as the long-term personal and social impact of generative AI is yet to be determined. Such synergistic cycles can impact how individuals develop unrealistic expectations for real-world human interaction.
Thus, the excess use of generative AI in communication has the potential to do two things. First, it breeds mistrust and damages relationships, short-circuiting empathetic bonding within communities. Second, while appearing to promote positive dialogue, the algorithm unintentionally creates unrealistic expectations for human interaction, preventing individuals from developing social interaction skills.
We already know the excess use of applications like ChatGPT has a measurable impact on critical thinking skills. Recently, researchers at MIT studied 54 subjects using electroencephalogram (EEG) recordings of the brain while the subjects performed cognitive tasks with and without the assistance of generative AI.
The researchers found that regular ChatGPT users exhibited lower brain engagement than those who did not regularly use generative AI. In addition, over the course of the study, regular ChatGPT users relied increasingly on generative AI to perform tasks.
While this is not a surprising outcome, it raises the question: If excess AI usage impacts cognitive development, then how much does it impact social and emotional development, and does it encourage social apnea? This is important because the long-term success of human society depends on communication that fosters trust and interconnectedness, thereby bonding community members together for mutual goals.
As seen above, excess use of generative AI is impacting how we communicate, specifically how we perceive each other, what level of trust we impart upon each other, and our own social development. This brings up serious ethical questions for the long-term use of generative AI models.
If communication and social acuity are the keys to successful societies and cooperative communities, then what happens when they break down because excessive reliance on generative AI has left our social skills apneic? Without the skills that promote relational empathy, community members come to perceive one another as algorithms or impersonal objects.
It is our naturally developed social skills that teach us that we are each unique. These skills let us know that you are distinct from me, possessing your own emotional cues and needs while perceiving the world differently. In the same manner, it is the empathetic interaction with others that helps us develop a realistic understanding of ourselves and our place in the created order.
Excessive use of generative AI potentially short-circuits this process, causing us to perceive each other as something less than fully human beings with distinct personalities, desires and concerns. The end result is the very definition of a moral crisis.