LaMDA: Simple Chat Bot or Ghost in the Machine?

The Internet and social media have been buzzing for a couple of days about a Google engineer who seems to think that LaMDA, an AI program developed by Google, is sentient. Huh?

Interestingly, the engineer at issue has since been placed on administrative leave, leaving the rest of us to ponder this. Are we really already THERE?

Though the story makes for a good one, the overall feeling around the digital campfire is that, NO, we are not there yet. At this point, no AI is really capable of thinking for itself and making mental leaps, or forming ideas, preferences or opinions, in a way that truly approximates what we do as humans. Sure, AI can make connections, but real ideas of its own? Feelings? Opinions? A personality? No, sadly it seems all of that, for the time being anyway, is still science fiction.

The reason I mention the story, though, is that if we keep collectively investing in AI, there may come a time when something does come out of it that approximates pretty well what it is to be a person. If and when that happens, we may have to reexamine how, legally, we define a person and what rights we may want to give such digital personalities. Though this may not be a real problem for a while yet, at some point it may very well become an issue we are collectively forced to contend with.

Sure, for now, keyboard conversations with chat bots like LaMDA are more like parlor tricks, but it may not always remain so. Shouldn't we collectively start thinking about this eventuality, including how the law may want to handle it? This kind of story certainly raises the question.

Artificial Intelligence – Very Real Creations: AI as a Generator of I.P.?

With an ever-expanding array of applications that rely on artificial intelligence (“AI”), we're finding more and more ways to use them, sometimes with unexpected results. In fact, some AI applications can now create new things in ways that do not depend solely on the choices and manipulations of human creators and operators. Yes, in today's world, AI can create things on its own, leaving the law to wonder what to do with that new reality. Can our legal system deal with the creation of inventions or works of authorship by machines (or rather, by AI)?

That very question is now being asked in most jurisdictions worldwide, as we collectively try to deal with the uncomfortable realization that, maybe, we humans are not the only ones capable of producing intangible creations that may be worthwhile to protect as intellectual property (“I.P.”). Indeed, this is happening with works like texts, images and music, and even with what would be considered inventions, had a human been the creator. When such a creation comes about as a result of the operation of an AI program, should we acknowledge it or, instead, sweep it under the rug and hold that the humans responsible for the initial execution are akin to the authors or inventors? We should note that this issue exists as to copyrights (as in the case of the painting discussed below) but also as to industrial designs and inventions.

A good example of that trend is the Canadian Intellectual Property Office's (“CIPO”) ongoing consultations as to whether we should recognize the possibility of AI acting as an actual creator of other types of I.P., such as patentable inventions or works of authorship. It may be that we do want to open that door, or maybe we just want to avoid the whole mess and stick with the status quo. The jury is still out on that one… for now.

Recently, CIPO may have cracked that door open, as it allowed the registration of copyright in a certain painting, the authors of which are presented as a human and, yes, an AI application. To my knowledge, this is a first in Canada, though it has happened elsewhere, such as in India last year, as to that very painting.

So, according to Canadian copyright registration No. 1188619, the co-authors of the painting at issue are an individual named Ankit Sahni, on the one hand, and the “Painting App, RAGHAV Artificial Intelligence”, on the other. The work of joint authorship is thus presented on the Canadian register as resulting from the combined creative work of two entities (for lack of a better term), one of whom (which?) is a computer program.

It is not yet clear how the law can/would/will deal with this kind of factual situation, including what the rules are when a “thing” is named as an author (or an inventor, if it gets to that): who the I.P. belongs to off-hand, who can be seen as the co-author (or co-inventor) and why, whether the AI could have been named as the sole author (or inventor), etc. One could also consider the extent to which an AI application must be identified as a creator when it is involved, the same as when a human creator is involved. When is AI considered more than a mere tool for a human creator? As you can imagine, the potential questions abound.

Though the idea may seem simple, allowing AI to be considered a creator or inventor does (and will) lead to all sorts of consequences that we collectively would do well to think through before proceeding.

At any rate, AI creating stuff is an inescapable reality that, one way or another, we collectively have to deal with. Unfortunately, as every jurisdiction makes these kinds of decisions without necessarily paying heed to what is being done elsewhere, we may very well end up with an I.P. legal system that is even messier than it currently is, as down the line some countries may allow AI as creators and some may not. As I wrote above, every jurisdiction is currently grappling with these questions.

European Law Makers Targeting AI

The media are reporting this week that the European Commission yesterday tabled legislation proposing to impose a legal framework on the use of artificial intelligence (“AI”) by businesses. The announcement introduces yet another bill that innovates far beyond what most (if not all) other jurisdictions are currently doing, in this case to regulate AI, somewhat akin to Europe's adoption of the GDPR, on the privacy side, a few years ago.

This time around, the proposed legislation seeks to constrain what businesses may do with AI by dividing such systems into four categories, based on the level of risk each system may pose to the rights and safety of individuals. Even though we can all agree that AI brings with it great potential to increase efficiency, it also involves substantial risks, in particular as regards the rights of individuals, including privacy, but also their security, their human rights, etc. Because of these risks, the proposed European statute would create a framework meant to curb potential abuses by imposing rules, limits and prohibitions on the worst kinds of AI systems, with a view to avoiding a nightmare scenario in which citizens' lives come to be ruled by AI systems that individuals can no longer really control or understand.

In short, Europe wants its citizens to retain confidence in AI, which it proposes doing by imposing a framework over the use of those types of systems. For example, the proposed regulation would prohibit organizations from using AI systems that represent an “Unacceptable risk,” while allowing but restricting those that represent a “High risk,” and imposing limited rules and restrictions on systems that represent merely a “Limited risk” or a “Minimal risk.”

To give you an idea, according to the announcement: “AI systems considered a clear threat to the safety, livelihoods and rights of people [i.e. “Unacceptable risk”] will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g., toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.”

The proposal would then constrain “High-risk” AI systems, namely “AI technology used in:

Critical infrastructures (e.g., transport), that could put the life and health of citizens at risk;

Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g., scoring of exams);

Safety components of products (e.g., AI application in robot-assisted surgery);

Employment, worker’s management and access to self-employment (e.g., CV-sorting software for recruitment procedures);

Essential private and public services (e.g., credit scoring denying citizens opportunity to obtain a loan);

Law enforcement that may interfere with people’s fundamental rights (e.g., evaluation of the reliability of evidence);

Migration, asylum and border control management (e.g., verification of authenticity of travel documents);

Administration of justice and democratic processes (e.g., applying the law to a concrete set of facts).”

Again according to the proposal, “High-risk AI systems will be subject to strict obligations before they can be put on the market:

Adequate risk assessment and mitigation systems;

High quality of the datasets feeding the system to minimize risks and discriminatory outcomes;

Logging of activity to ensure traceability of results;

Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;

Clear and adequate information to the user;

Appropriate human oversight measures to minimize risk;

High level of robustness, security and accuracy.”
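
For the technically inclined, the tiered logic just quoted can be summarized in a few lines of code. To be clear, this is purely my own illustration, not anything found in the proposal: the tier labels come from the announcement quoted above, but the treatment summaries are my paraphrase and every identifier is made up.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers named in the proposal (labels from the announcement)."""
    UNACCEPTABLE = "Unacceptable risk"  # e.g., government "social scoring" systems
    HIGH = "High risk"                  # e.g., AI in critical infrastructure or hiring
    LIMITED = "Limited risk"            # lighter rules and restrictions
    MINIMAL = "Minimal risk"            # essentially left alone


# Illustrative mapping of each tier to its regulatory treatment; the wording
# here is my paraphrase of the proposal, not statutory language.
TREATMENT: dict[RiskTier, str] = {
    RiskTier.UNACCEPTABLE: "banned outright",
    RiskTier.HIGH: "allowed only after meeting strict pre-market obligations "
                   "(risk assessment, dataset quality, logging, documentation, "
                   "user information, human oversight, robustness)",
    RiskTier.LIMITED: "subject to limited rules and restrictions",
    RiskTier.MINIMAL: "subject to limited rules and restrictions",
}


def describe(tier: RiskTier) -> str:
    """Return a one-line summary of how the proposal treats a given tier."""
    return f"{tier.value}: {TREATMENT[tier]}"


if __name__ == "__main__":
    for tier in RiskTier:
        print(describe(tier))
```

The point of the sketch is simply that the proposal's approach is classification-driven: you first determine which tier a system falls into, and the applicable obligations (if any) follow from that tier.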

Though I am not aware of any similar legislative initiative in Canada at the moment, I think we can safely assume something like this will crop up here as well at some point. As with the GDPR initiative (as to privacy), it is more than likely that Europe's newly proposed legislation will eventually be imported abroad, including in Canada, to a certain degree.

If you’re curious, this draft legal framework includes 85 articles spread over something like 50 pages—yeah, light reading for the beach this summer, if you see what I mean.