LaMDA: Simple Chat Bot or Ghost in the Machine?

The Internet and social media have been buzzing for a couple of days about a Google engineer who seems to think LaMDA, an AI program developed by Google, is sentient. Huh?

Interestingly, the engineer in question has since been put on administrative leave, leaving us to ponder this. Are we really already THERE?

Though the story makes for a good one, the overall feeling around the digital campfire is that, NO, we are not there yet. At this point, no AI is really capable of thinking for itself and coming up with mental leaps, ideas, preferences or opinions in a way that truly approximates what we do as humans. Sure, AI can make connections, but real ideas of its own? Feelings? Opinions? A personality? No, sadly it seems all that, for the time being anyway, is still science fiction.

The reason I mention the story, though, is that if we keep collectively investing in AI, there may come a time when something does come out of it that approximates pretty well what it is to be a person. If and when that happens, we may have to reexamine how, legally, we define a person and what rights we may want to give such digital personalities. Though this may not be a pressing problem for a while yet, it may very well become an issue we're collectively forced to contend with.

Sure, for now, keyboard conversations with chat bots like LaMDA are more like parlor tricks, but it may not always remain so. Shouldn't we collectively start thinking about this eventuality, including how the law might want to handle it? This kind of story certainly raises the question.