European Law Makers Targeting AI

The media are reporting this week that Europe’s Parliament yesterday tabled legislation proposing to impose a legal framework on the use of artificial intelligence (“AI”) by businesses. The bill innovates far beyond what most (if not all) other jurisdictions are currently doing to regulate AI, somewhat akin to what Europe did on the privacy side when it adopted the GDPR a few years ago.

This time around, the proposed legislation seeks to constrain what businesses may do with AI by dividing such systems into four categories, based on the level of risk each system may pose to the rights and safety of individuals. We can all agree that AI carries great potential to increase efficiency, but it also involves substantial risk, in particular the risk of violating individuals’ rights, including their privacy, their security and their human rights more generally. Because of these risks, the proposed European statute sets out a framework meant to curb potential abuses by imposing rules, limits and prohibitions on the worst kinds of AI systems, with a view to avoiding a nightmare scenario in which citizens’ lives come to be ruled by AI systems they can no longer really control or understand.

In short, Europe wants its citizens to retain confidence in AI, and it proposes to achieve this by imposing a framework over the use of such systems. For example, the proposed regulation would prohibit AI systems that represent an “Unacceptable risk,” allow but restrict those that represent a “High risk,” and impose limited rules and restrictions on systems that represent merely a “Limited risk” or a “Minimal risk.”

To give you an idea, according to the announcement: “AI systems considered a clear threat to the safety, livelihoods and rights of people [i.e. “Unacceptable risk”] will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g., toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.”

The proposal would then constrain “High-risk” AI systems, namely “AI technology used in:

Critical infrastructures (e.g., transport), that could put the life and health of citizens at risk;

Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g., scoring of exams);

Safety components of products (e.g., AI application in robot-assisted surgery);

Employment, worker’s management and access to self-employment (e.g., CV-sorting software for recruitment procedures);

Essential private and public services (e.g., credit scoring denying citizens opportunity to obtain a loan);

Law enforcement that may interfere with people’s fundamental rights (e.g., evaluation of the reliability of evidence);

Migration, asylum and border control management (e.g., verification of authenticity of travel documents);

Administration of justice and democratic processes (e.g., applying the law to a concrete set of facts).”

Again according to the proposal, “High-risk AI systems will be subject to strict obligations before they can be put on the market:

Adequate risk assessment and mitigation systems;

High quality of the datasets feeding the system to minimize risks and discriminatory outcomes;

Logging of activity to ensure traceability of results;

Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;

Clear and adequate information to the user;

Appropriate human oversight measures to minimize risk;

High level of robustness, security and accuracy.”

Though I am not aware of any similar legislative initiative in Canada at the moment, I think we can safely assume something like this will crop up here as well at some point. As with the GDPR on the privacy side, it is more than likely that Europe’s proposed legislation will eventually be imported abroad, to a certain degree, including in Canada.

If you’re curious, this draft legal framework includes 85 articles spread over something like 50 pages—yeah, light reading for the beach this summer, if you see what I mean.

Québec Copyright Infringement Decision About the Photograph of a Sculpture Serves a Valuable Lesson Regarding the Risk of Relying on a Mere Undocumented Permission from the Author

A Québec court issued a copyright infringement decision this week that seems worth mentioning even though it is merely a small claims court decision. Not too surprisingly, the decision at issue, Gadbois v. 2734-3540 Québec Inc. (2020 QCCQ 11186), concludes that a photograph circulated by email infringed the copyright of the author of the sculpture shown in it. Even though we are dealing with a small claims decision and a simple fact pattern, it is worth discussing a little this morning, if only for the lesson it serves us about copyright permissions.

The decision stems from a 1995 art exhibition that the organizer sought to promote through email messages including a photograph of one of the sculptures shown at the event. Twenty-five (yes, 25) years later, the sculptor sued the organizer of the exhibition for copyright infringement, based on the organizer’s alleged failure to obtain his authorization to circulate photographs of his work.

Before the court, the organizer of the event claimed that the sculptor had given his permission, even though the company was unable to produce a copy of the writing through which that authorization had allegedly been given. Unfortunately for the defendant, this prompted the judge to refuse to even entertain the possibility that permission had been given, a stance on which the judge may have been somewhat harsh.

At any rate, this case illustrates that even though the Copyright Act does not require that a reproduction permission be granted in writing (as opposed to an assignment, which must be), relying on an authorization that was never documented in writing may prove hazardous once in front of a judge. Legally, an undocumented permission does work in principle, but if litigation occurs, you may not be able to demonstrate adequately that sufficient permission really was granted by that particular author in those particular circumstances.

Even though this decision is neither revolutionary nor precedent-setting, it does serve as a good reminder that getting copyright permission in writing is a useful precaution, and may well avoid unnecessary litigation later. Failing to do so may well allow the author to later claim they had not in fact consented to whatever was done (such as sending photographs by email) and, thus, that they are entitled to damages for copyright infringement.

As to such writings, note that they need not be complicated or lengthy documents to prove effective for this purpose. Heck, a simple paragraph (or even one sentence) may sometimes be all you need to show that permission did exist. Contrary to what some may think, such permission need not be set out in long-winded documents with numerous complicated provisions. What you want to avoid is ending up in litigation where you have to testify that the author authorized you to do X while the author testifies to the contrary, and avoiding that does not normally require producing complex documentation.

Even though, in the case of Mr. Gadbois’s sculpture, we are dealing with a fairly modest damages award ($1,000), the defendant would clearly have been better off retaining a writing showing the author’s permission. The decision also shows that such problems may materialize years after the fact, the events here dating back to 1995. Given how long copyright protection lasts, failing to document a permission may come back to haunt you years and years later, which is one more reason to avoid the problem through something as cheap and easy as a written permission.

Coinsquare to Disclose to the CRA the Identity of Certain Sellers of Cryptocurrency

We learned this week that the Federal Court recently ordered Coinsquare Ltd., a cryptocurrency trading platform, to disclose the identity of some of its clients to the Canadian tax authorities.

After a similar fight south of the border involving the IRS, Canadian tax authorities have also been trying to get Coinsquare to disclose who its clients are, so as to ascertain the tax liability of profit-making users, something Coinsquare had so far refused to do. For the Canada Revenue Agency (the “CRA”), given how difficult it is to determine which taxpayers are making profits by selling cryptoassets, platforms such as Coinsquare should be required to report to the authorities transactions allowing Canadians to generate taxable profits from intangible assets of this kind, so that such taxpayers cannot avoid tax liability too easily.

Even though the CRA initially requested that Coinsquare disclose the identity of all of its customers since 2013, Coinsquare managed to negotiate much less onerous disclosure obligations. According to the recently issued Federal Court order, the company is only required to disclose the identity and assets of its largest clients, those meeting a certain threshold, which corresponds to about 10% of Coinsquare’s users.

For the CRA, this was a first foray into the realm of cryptocurrency platforms, aimed at getting them to cooperate with Canadian tax authorities in a way that increases the odds of collecting tax on profits Canadians make by selling cryptoassets. Given the substantial profits now being made by some cryptocurrency traders, one can certainly understand the CRA’s motivation in seeking the collaboration of trading platforms like Coinsquare.