LaMDA: Simple Chat Bot or Ghost in the Machine?

The Internet and social media have been buzzing for a couple of days about a Google engineer who seems to think an AI program developed by Google and called LaMDA is sentient. Huh?

Interestingly, the engineer at issue has since been put on administrative leave, leaving us to ponder this. Are we really already THERE?

Though the story makes for a good one, the overall feeling around the digital campfire is that, NO, we are not there yet. At this point, no AI is really capable of thinking for itself and coming up with mental leaps, ideas, preferences or opinions in a way that truly approximates what we do as humans. Sure, AI can make connections, but real ideas of its own? Feelings? Opinions? A personality? No, sadly it seems all that, for the time being anyway, is still science fiction.

The reason I mention the story, though, is that at some point, if we keep collectively investing in AI, there may come a time when something does come out of it that approximates pretty well what it is to be a person. If and when that happens, we may have to reexamine how, legally, we define a person and what rights we may want to give such digital personalities. Though this may not be a real problem for a while yet, at some point it may very well become a real issue we're collectively forced to contend with.

Sure, for now keyboard conversations with chat bots like LaMDA are more like parlor tricks, but it may not always remain so. Shouldn't we collectively start thinking about this eventuality, including as to how the law may want to handle it? This kind of story certainly raises the question.

Google Photos Class Action in Québec Derailed Off the Bat

The Québec Superior Court recently rejected a proposed class action involving Google Photos and the alleged misuse of biometric data resulting from this Google service. In the decision at issue, Homsy v. Google (2022 QCCS 722), the court refused to authorize the proposed class action because the plaintiff failed to show he had even a mere color of right. In short, he failed to demonstrate that he had a case or, rather, what could reasonably be considered a real case.

Legally, the off-hand rejection of this (proposed) class action stems from the requirement that any such proceedings in Québec must, at the very least, appear to hold water, if you will. To authorize the action, the court must be able to conclude, looking at the claim as presented, that if the alleged facts were true, a Québec court could indeed award the remedy requested by the plaintiff.

One might think this would allow anyone to sue simply by alleging X, Y and Z, but it is not so, as that would force unfounded and/or unworthy proceedings on the Québec justice system -something we collectively definitely do not need.

Indeed, jurisprudence is now teaching us that mere allegations in initial proceedings (to institute a class action) may NOT suffice to allow a class action in Québec to stand. In effect, simply alleging a bunch of suppositions and theories isn't sufficient to introduce a valid class action before Québec courts. You need more; maybe not tons more, but more. Thus, given the lack of even a modicum of evidence in the case at issue, the court agreed to throw it out (or, rather, refused to authorize this class action against Google); this case simply did not pass muster. As cases such as this one demonstrate, even though Québec rules generally seek to facilitate class actions (as compared to your ordinary proceedings, anyway), you do need more than mere conjecture, theories, suppositions and inferences. If that is all you have initially (as was the case in Homsy), then the court may simply refuse to authorize your action -sorry.

Artificial Intelligence – Very Real Creations: AI as a Generator of I.P.?

With an ever-expanding array of applications that rely on artificial intelligence (“AI”), we’re finding more and more ways to use them, sometimes with unexpected results. In fact, some AI applications can now even create new things in ways that are not dependent solely on choices and manipulations by human creators and operators. Yes, in today’s world, AI can now create things on its own, leaving the law to wonder what to do with that new reality. Can our legal system deal with the creation of inventions or works of authorship by machines (or rather AI)?

That very question is now being asked in most jurisdictions worldwide, as we collectively try to come to terms with an uncomfortable realization that, maybe, we as humans are not the only ones capable of creating intangible creations that may be worthwhile to protect as intellectual property (“I.P.”). Indeed, this is happening as to works like texts, images and music, and even as to what would be considered inventions, had a human been the creator. When such a creation comes about as a result of the operation of an AI program, should we acknowledge it or, instead, sweep it under the rug and hold that the humans responsible for the initial execution are akin to the authors or inventors? We should note that this issue exists as to copyrights (as in the case of the painting discussed below) but also as to industrial designs and inventions.

A good example of that trend is the Canadian Intellectual Property Office’s (“CIPO”) ongoing consultations as to whether we should recognize the possibility of AI acting as an actual creator of other types of I.P., such as patentable inventions or works of authorship. It may be that we do want to open that door, or maybe we just want to avoid the whole mess and stick with the status quo. The jury is still out on that one… for now.

Recently, CIPO may have cracked the door open, as it allowed the registration of copyright in a certain painting, the authors of which are presented as a human and, yes, an AI application. To my knowledge, this is a first in Canada, though it has happened elsewhere, such as in India last year as to that very painting.

So, according to Canadian copyright registration No. 1188619, the co-authors of the painting at issue are an individual named Ankit Sahni, on the one hand, and “Painting App, RAGHAV Artificial Intelligence”, on the other hand. The work of joint authorship is thus presented on the Canadian register as resulting from the combined creative work of two entities (for lack of a better term), one of whom (which?) is a computer program.

It is not yet clear how the law can/would/will deal with this kind of factual situation, including what the rules are when a “thing” is named as an author (or an inventor, if it gets to that): who the I.P. belongs to off-hand, who can be seen as the co-author (or co-inventor) and why, whether the AI could have been named as the sole author (or inventor), etc. One could also consider the extent to which an AI application must be identified as a creator when it was involved -the same as when a human creator is involved, etc. When is AI considered more than a mere tool for a human creator? As you can imagine, the potential questions abound.

Though the idea may seem simple, allowing us to consider AI as a creator or inventor does (will) lead to all sorts of consequences that we collectively would do well to think through before proceeding.

At any rate, AI creating stuff is an inescapable reality that, one way or another, we collectively have to deal with. Unfortunately, as every jurisdiction makes these kinds of decisions without necessarily paying heed to what is being done elsewhere, we may very well end up with an I.P. legal system that is even messier than it currently is, as down the line some countries may allow AI as creators and some may not. As I was writing above, every jurisdiction is currently grappling with these questions.