
Canada’s Approach to AI Regulation Walks a Perilous Tightrope

April 26, 2024
Jack Porter | Intern

Fresh off his $2.4-billion artificial intelligence (“AI”) capacity-building announcement, Prime Minister Trudeau has positioned himself as an AI evangelist, citing the technology’s ability to unlock economic activity, improve productivity, and reduce the time workers spend on repetitive tasks.1

For a country projected to have the worst-performing economy among advanced nations over the next decade, we can’t afford to sit on the sidelines of the AI ecosystem – but nor can we dive in head-first without strong, principled regulation in place. Today, the uncertain status of Bill C-27, the Digital Charter Implementation Act, suggests we are simply not ready for this moment.

Workers, vulnerable populations and wider society need assurance that Canada is safeguarded through robust AI regulatory policy. Without those assurances, we will fail to ensure the responsible development and use of AI. And the consequences for society may be irreversible.

Regulating AI is a delicate affair: too much regulation and you risk stifling major economic advances; too little and you risk perpetuating the technology’s well-documented harms. Regulators around the world are being called upon to walk this tightrope, including those here at home.

Fortunately, the EU, the U.K. and the U.S. all leave Canada with teachable lessons on how we should embark on this journey.

The EU: The earliest mover

The EU passed the first-ever legislation to comprehensively govern AI by the narrowest of margins. Despite tense negotiations and fears that the talks would amount to nothing, the EU AI Act sets a positive standard and sends a clear signal that emerging AI will exist within the purview of the public good.

Many have expressed skepticism about regulators’ ability to adequately govern AI, given knowledge barriers and a policymaking process that moves far more slowly than the technology itself.

In line with this thinking, some claim the EU AI Act is inattentive to unlocking the innovative benefits of AI. These detractors say it will lead to disastrous consequences for Europe’s economies, making firms less competitive and less profitable while driving talent to more innovation-friendly countries. In short, they assert it is a cautionary tale for other countries beginning to regulate the technology.

But every tale has at least two sides. And industry experts and academics alike are calling on governments to pass serious, comprehensive legislation to regulate AI. Those calls, no doubt, will not be easy to answer. But difficulty is never an excuse to eschew responsibility. And if Canada wants to follow through on its intention to develop AI responsibly, we must find a middle path that balances innovation with the responsible development of these powerful new tools.

The U.S.: Move fast and break things

In the U.S., the Office of Science and Technology Policy published the ‘Blueprint for an AI Bill of Rights,’ a comprehensive but unenforceable document surveying the risks posed by AI and potential solutions. While some hail this as a monumental step toward AI governance in the U.S., the excitement is unwarranted.

Without enforceability, the Bill of Rights is effectively a weak plea to AI companies. There is still a great deal of work before the U.S. is anywhere close to accomplishing a feat like the EU AI Act, and its legislators do not appear to have the ambition for it.

Instead, experts observe that the U.S. approach to tech regulation prioritizes a “relentless pursuit of innovation and uncompromised faith in markets as opposed to government regulation.”2 The ‘move fast and break things’ approach, characteristic of American enterprise, has predictably found its way into AI governance discourse. Discussions of self-regulation and industry-specific regulation suggest fundamental opposition to comprehensive AI rules. The U.S. is therefore set up for a patchwork of industry-specific regulations and enterprise-specific compliance measures. The bottom line? The private sector will ultimately take the lead and self-regulate.

Although self-regulation carries a host of potential benefits, it does not sufficiently protect citizens from the risks posed by AI, as industries tend to opt for the path of least resistance and least burden. In Canada, we cannot take the same approach; we need robust regulations in place. Yes, our innovators must move fast – but we must also have safeguards in place to protect against what they might break.

The U.K.: AI Doomerism

A relatively new school of thought on AI regulation is more concerned with the apocalyptic potential of AI than with its immediate societal impacts. It is known as AI Doomerism, and the U.K. is its ground zero.

Organizations such as the Future of Life Institute and movements such as Effective Altruism popularized this view of the future of technology and society, and they greatly influence AI governance discourse in the U.K. However, many AI policy researchers warn that AI Doomerism entirely misses the immediate risks AI poses to society, such as algorithmic bias and job displacement, and urge regulators to ignore the sci-fi scenarios and concern themselves with these immediate harms.

So, while the U.K.’s approach to AI governance might look like it is safeguarding against AI’s potential consequences, its current standard for regulation is lower and more attractive to tech companies looking to avoid government oversight. From my perspective, the philosophy of AI Doomerism allows the U.K. government to boast about being attentive to the long-term societal impacts of technology while doing little to mitigate pressing harms in the near term.

What does this mean for Canada?

Canada stands at a pivotal juncture, and the task ahead is not to take the easy path, but the right one. As we learn from the promise and perils of the EU’s groundbreaking strides, the U.S.’s market-driven ethos, and the U.K.’s emphasis on existential threats, Canada must articulate a clear and comprehensive vision for AI regulation – one that places societal well-being at its core, encourages innovation, and remains firmly anchored to ethical standards.

To achieve these aims, the mistakes and successes of our allies are vitally instructive. We must not succumb to deregulatory pressures from the U.S. and the U.K., nor abandon the ambitious project of comprehensive AI regulation that the EU has proven possible.

The Digital Charter will be more than a policy: it will be a statement of our national ethos in the digital age. As such, it must strike a balance between AI regulation grounded in firm principles and the encouragement of AI innovation. There is no inherent need to sacrifice one for the sake of the other; rather, they can coexist for everyone’s benefit.

 

References

1 https://www.pm.gc.ca/en/news/news-releases/2024/04/07/securing-canadas-ai

2 Anu Bradford, Digital Empires.


About the author:

Jack Porter | Intern

Jack is an intern in the Toronto office. He is a recent graduate of Queen’s University’s Political and Legal Thought graduate program, where his research focused on constitutional rights. Jack developed a passion for politics and government by contributing to provincial and federal political campaigns as a canvasser and campaign organizer. He is also an avid writer, with academic publications on the intersection of politics and technology.

Jack is excited to apply the strong research, writing, and critical thinking skills acquired through his education and work experience to serve Navigator’s clients.
