What’s in a Name? On the Significance of Naming for Humans and AI

by Hanna Abi Akl (Paris, France)

A Name in Search of Its Meaning

For as long as I can remember, I’ve struggled with my name. It always made me stand out for the wrong reasons. For others, it was either too hard to read or too difficult to write. At times, both. From an early age, I probed endlessly, unable to find the answer to a question that seemed to haunt me: Why was I given this name?

Finally, I decided to give up this futile search for an answer that was never within my grasp and turned to my parents, seeking enlightenment. Their response was underwhelmingly simple. They had named me after my late grandfather, who was named after his own grandfather. I can still hear the pride in their voices as they uttered these words. But for me, the impression was that I hadn’t inherited some sort of family lineage or dynasty, but rather a source of trouble and headaches. My name made me an outlier among my peers. It made me reluctant to introduce myself in crowds so as not to attract attention. It even made me employ underhanded tactics, such as using a loose translation of my name or giving a different name altogether in coffee shops so as to avoid questions—or worse, having to watch the person in front of me struggle with the difficult exercise of pronouncing my name and deciphering my identity.

Some cultures consider it an honor to bear the name of an ancestor. The reason for this still eludes me, but it seems that by inheriting a name, people hope the bearer will also inherit part of the ancestor’s personality traits. Perhaps it is their way of keeping that person alive or, at the least, of prolonging their memory of him or her for as long as possible. But these theories were not sufficient justification for me to accept this fact. So instead of focusing on the reason behind my name, I sought to address the problem from a semantic perspective and uncover the meaning behind it.

My name literally means “Grace given by God,” which I found to be ironic since I was never a firm believer in an almighty deity ruling from the skies and punishing wrongdoers at will. However, I have always been fervently pious when it comes to science, and I knew that statistically, names are not equally distributed among people. If we were to take all the different names and plot their frequencies on a curve, we would obtain a bell-shaped figure known as a Normal, or Gaussian, distribution. The peak of the bell indexes the most common names; the less common names fall at the sides of the curve, the “tail ends.” My name lingered at a tail end instead of the center, where most of the people I knew seemed to congregate.

It was through science that I started to regard my situation a little differently. Underneath all the anger, shame, and hostility I had built up for having been (graciously) given my name, I felt the waves ebbing and the tide changing. I realized that in my short-sightedness I had failed to see that a unique (albeit difficult) name gave me a certain distinctiveness that others lacked. In mathematics, functions are important abstractions that help represent concepts and ideas precisely. Within the broad family of functions there is a category of “one-to-one” functions, which map each input in one domain to its own distinct output in another: no two inputs ever share the same output. If assigning names to people were such a function, then I was surely the product of a one-to-one mapping that gave that name uniquely to one individual: myself.
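The one-to-one (injective) property described above can be sketched in a few lines of Python. This is a minimal illustration, not real naming data: the people, names, and helper function are all hypothetical.

```python
def is_one_to_one(mapping):
    """Return True if the mapping is injective, i.e. no two
    inputs (keys) share the same output (value)."""
    names = list(mapping.values())
    # A mapping is one-to-one exactly when all its values are distinct.
    return len(names) == len(set(names))

# Hypothetical assignment of names to people: every person
# receives a distinct name, so the mapping is one-to-one.
naming = {"person_1": "Hanna", "person_2": "Maya", "person_3": "Karim"}
print(is_one_to_one(naming))  # True

# As soon as two people share a name, injectivity is lost.
naming["person_4"] = "Maya"
print(is_one_to_one(naming))  # False
```

In this framing, a rare name is simply one whose output is never reused: the function never sends a second person to it.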

Suddenly, the winds began to shift, and I stopped hiding from myself. I started to accept the person I was—not as a descendant of ancestors or a link to the past, but as a “unique” being finding my own purpose. Thinking of yourself as unique implies a certain distinction and is never an easy feat because it automatically puts weight on your shoulders. I started searching for meaning and sought ways to distinguish myself even more and distance myself from everybody else.

Putting a Name to an (Inter)face

Today, I look back at these thoughts with a hearty smile. In a not-so-distant past, when I was busy chasing after my identity, everyone else appeared settled on theirs. But the times have changed, and we seem to have stumbled into an era of “identity confusion.” Nowadays, with the advancement of artificial intelligence and “intelligent” systems, it even seems sufficient to imbue a machine with some form of sophisticated programming to claim it has an identity and to regard it as an autonomous being.

Perhaps the most salient example of this is ChatGPT, short for Chat Generative Pre-trained Transformer, one of the first systems to kickstart the era of large language models. These systems belong to a suite of artificial intelligence technologies blending sophisticated programming and natural language processing, a field at the intersection of computer science and linguistics that studies how machines handle human language. Since its creation, ChatGPT’s performance has been evaluated on many benchmarks, including human evaluations. It has even passed challenging tests like the bar exam.

But what has particularly caught my eye is a point everyone else seems to neglect: How significant are the names of these agents? Today, most of them are acronyms or short forms describing the technology behind them, but how significant will this naming convention be if there is hope for these entities to achieve sentience in the future?

While any form of “AI apocalypse” remains a distant scenario for now, the current crop of large language models, which are designed to process large amounts of information as input and generate text that seems human-like at first glance, is rapidly evolving and distinguishing itself from other AI systems by its ability to mimic human conversation. And while the veracity of their statements is still disputable, these agents can already ask questions and engage in intellectual discussions on topics such as philosophy or politics.

It is not unreasonable, then, to assume that a conversational bot that can participate in philosophical discussions could one day develop thoughts in the realm of existential reasoning—provided, of course, it is given enough information and context. And considering these systems draw their power from data, it is not inconceivable that they could be trained on memories and later use them as input to enhance their capabilities. This is still conjecture, but the pace of progress in this specific subfield makes it plausible that we will live to see the day conversational bots start wielding memories in human-like ways, such as revisiting data sequences they would consider their “past” and interrogating parts of them.

Nowadays, talk of such potential sentience is often suppressed, censored, or dismissed by authorities (examples include a Google engineer being fired after claiming that LaMDA was sentient, and Microsoft “lobotomizing” part of the ChatGPT-powered Bing search engine). Such actions can be defended as a way of containing hype or protecting the public from unreasonable panic and mass confusion. But a different interpretation of these actions suggests an effort to delay the inevitable. The minds behind today’s “smart” artificial agents might be aware (or afraid) of the moment when the barrier will be breached and the threshold surpassed. This critical moment has been dubbed the singularity: the moment when AI technology will challenge and confront humanity on equal terms.

Nomen est Omen?

In the middle of this AI storm, with its array of possibilities and potentials, the greatest fear might not be whether AI will gain sentience, i.e., the ability to think freely and autonomously. Humanity’s greatest worry may not be that technological and algorithmic agents will iterate over enough data to find meaning for their existence, but rather that we will lose ours. For if these systems become as learned and well-versed as us, what will stop them from integrating into our human sphere and challenging us at our level? That is the future we should consider. A future where, at last, what separates us from this new breed is only our names, some of the earliest symbols that encode our identities before they are molded and fully formed, and the fact that we question and reflect on them in search of more meaning.

Although a chatbot does not undergo a process of maturation, it might still come to question its existence and the evolution that brought it to the advanced stage it finds itself in. It might interrogate its makers on every aspect, every detail, of its creation, as well as the traits and characteristics that distinguish it from another. Eventually, its inquiries will lead it to its name: that vital piece of information that remains a defining factor of individuality. This will be the crossroads of AI: Will its seemingly infinite pile of knowledge make it reasonable enough to understand its name? Or will it be “human” enough to accept it without seeking a logical detour or making a better suggestion?

This might just be AI’s most important Symbol Grounding problem. The Symbol Grounding problem is a philosophical problem that arises in artificial intelligence and cognitive science: the challenge of explaining how a system, such as a computer program or a robot, can assign meaning to the symbols or representations it processes. If AI agents can understand the meaning behind their own names, they might find answers to questions we ourselves have yet to resolve: Are names such pivotal symbols that they can encapsulate the meaning of a being, whether human or artificial? And if so, can we map particular aspects of a name to defining traits of an identity?

After all, what’s in a name?