Canadian Government Angling to Control Content Placed Online, including UGC and Even Apps

As you may recall, the Canadian government has been working since last fall to get its bill C-10 enacted. The bill's stated aim is to allow the taxation of streaming services such as Netflix. Though this may have been the initial impetus behind the introduction of the bill, we're now seeing that C-10 may also go so far as to allow the regulation of content placed online, including user-generated content, computer games and apps of all kinds. Yes, Canada seems to have decided to shed its laissez-faire attitude toward what's placed on the Internet.

Indeed, it now seems the Liberal government may be trying to broaden bill C-10 so as to grant the CRTC additional powers to regulate whatever is placed online, including (the latest twist in this little legislative soap opera) apps. Yes, you read that right: apps. The story was reported by Michael Geist, following a statement seemingly made by mistake by an MP while commenting on an amendment that has yet to be formally introduced. Apparently, the government may be in the process of making changes to C-10 that would allow the CRTC to regulate not only streaming services but also some of the content itself, such as apps made available on the Internet.

Though the government stated it did not intend to try to regulate computer games, it now appears C-10 may, on the contrary, end up allowing the CRTC to regulate software made available through the Internet, a prospect that has many cringing.

From a bill initially justified as a way to simply allow the taxation of streaming services (such as Netflix) in Canada, to level the playing field with other ways of making content available to Canadians, we're now faced with legislation that seems to be transmogrifying into a means of empowering the government (through the CRTC) to control what is placed or made available online by and to Canadians. This may end up being extended and/or applied to computer games, content placed on social networks, blog posts, podcasts, etc. Hmm, so much for the CRTC's 2000 position that it wouldn't mess with the Internet.

Is it just me, or are we witnessing a slight drift in the federal government's recent efforts to better control the Internet in Canada? Hmmm. To be continued, unfortunately.

Canada Opts for New Digital Services Tax

The media are reporting that Canada's 2021 budget, which was recently made public, includes a proposal to implement a digital services tax ("DST") to be imposed on foreign businesses providing digital services to Canadians. The tax would, of course, apply to companies and services such as Netflix, Spotify and Amazon Prime, but also to other businesses that generate revenue online by matching sellers with buyers, hosting user-generated content, selling advertising through online platforms, or selling or licensing user data.

The rate of this new tax would be set at 3% of the revenue these businesses generate from digital services provided to Canadians. This, the Canadian government believes, will allow Canada to obtain its fair share of the revenues generated from streaming and online services, which are often provided to Canadians by foreign (usually American) companies.

The new tax would come into effect on January 1, 2022, and is expected to bring in about 500 million dollars per year for the Canadian government. With figures like these, one has to admit it may be hard for a country like Canada to resist.
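To give a sense of the proposed rate in practice, here is a back-of-the-envelope sketch. The function name and the revenue figure are my own illustrations, not from the budget document; only the 3% rate comes from the proposal.

```python
# Hypothetical sketch of the proposed 3% DST applied to in-scope
# revenue from digital services provided to Canadians.
# The function name and example figure are illustrative only.

DST_RATE = 0.03  # 3% of in-scope revenue, per the 2021 budget proposal

def digital_services_tax(in_scope_revenue_cad: float) -> float:
    """Return the DST owed on revenue from digital services
    provided to Canadians, at the proposed 3% rate."""
    return round(in_scope_revenue_cad * DST_RATE, 2)

# e.g., a foreign streaming service earning CAD 200 million from
# Canadian subscribers would owe CAD 6 million under the proposal.
print(digital_services_tax(200_000_000))  # → 6000000.0
```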

European Law Makers Targeting AI

The media are reporting this week that the European Commission tabled legislation yesterday proposing to impose a legal framework on the use of artificial intelligence ("AI") by businesses. The announcement introduces yet another bill that innovates far beyond what most (if not all) other jurisdictions are currently doing, in this case to regulate AI, somewhat akin to Europe's adoption of the GDPR, on the privacy side, a few years ago.

This time around, the proposed legislation seeks to constrain what businesses may do with AI by dividing such systems into four categories, based on the level of risk each system may pose to the rights and safety of individuals. Even though we can all agree that AI brings with it great potential to increase efficiency, it also involves substantial risks, in particular to individuals' rights, including privacy, but also their security, other human rights, etc. Because of these risks, the proposed European regulation provides a framework meant to curb potential abuses by imposing rules, limits and prohibitions on the worst kinds of AI systems, with a view to avoiding a nightmare scenario in which citizens' lives come to be ruled by AI systems that individuals can no longer really control or understand.

In short, Europe wants its citizens to retain confidence in AI, which it proposes doing by imposing a framework over the use of such systems. For example, the proposed regulation would prohibit AI systems that represent an “Unacceptable risk,” allow but restrict those that represent a “High risk,” and impose limited rules and restrictions on those that represent merely a “Limited risk” or a “Minimal risk.”
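For readers who think in code, the four-tier logic boils down to a simple lookup. The tier names below come from the proposal itself, but the one-line "treatment" descriptions are my own shorthand, not the regulation's wording.

```python
from enum import Enum

class RiskTier(Enum):
    """The proposal's four risk categories for AI systems."""
    UNACCEPTABLE = "Unacceptable risk"
    HIGH = "High risk"
    LIMITED = "Limited risk"
    MINIMAL = "Minimal risk"

# My shorthand for how each tier is treated under the proposal;
# the phrasing here is illustrative, not the regulation's text.
TREATMENT = {
    RiskTier.UNACCEPTABLE: "banned outright",
    RiskTier.HIGH: "allowed, subject to strict obligations",
    RiskTier.LIMITED: "allowed, subject to transparency rules",
    RiskTier.MINIMAL: "allowed, essentially unregulated",
}

def treatment_of(tier: RiskTier) -> str:
    """Return the shorthand regulatory treatment for a given tier."""
    return TREATMENT[tier]

print(treatment_of(RiskTier.HIGH))  # → allowed, subject to strict obligations
```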

To give you an idea, according to the announcement: “AI systems considered a clear threat to the safety, livelihoods and rights of people [i.e. “Unacceptable risk”] will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g., toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.”

The proposal would then constrain “High-risk” AI systems, namely “AI technology used in:

Critical infrastructures (e.g., transport), that could put the life and health of citizens at risk;

Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g., scoring of exams);

Safety components of products (e.g., AI application in robot-assisted surgery);

Employment, worker’s management and access to self-employment (e.g., CV-sorting software for recruitment procedures);

Essential private and public services (e.g., credit scoring denying citizens opportunity to obtain a loan);

Law enforcement that may interfere with people’s fundamental rights (e.g., evaluation of the reliability of evidence);

Migration, asylum and border control management (e.g., verification of authenticity of travel documents);

Administration of justice and democratic processes (e.g., applying the law to a concrete set of facts).”

Again according to the proposal, “High-risk AI systems will be subject to strict obligations before they can be put on the market:

Adequate risk assessment and mitigation systems;

High quality of the datasets feeding the system to minimize risks and discriminatory outcomes;

Logging of activity to ensure traceability of results;

Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;

Clear and adequate information to the user;

Appropriate human oversight measures to minimize risk;

High level of robustness, security and accuracy.”

Though I am not aware of any similar legislative initiative in Canada at the moment, I think we can safely assume something like this will crop up here as well at some point. As with the GDPR (on the privacy side), it is more than likely that Europe's newly proposed legislation will eventually be emulated abroad, including in Canada, to a certain degree.

If you’re curious, this draft legal framework includes 85 articles spread over something like 50 pages—yeah, light reading for the beach this summer, if you see what I mean.