Navigator logo
Perspectives logo

Perspectives | Issue 13

Navigator’s folio of ideas, insights and new ways of thinking

Welcome to Canada’s first AI Election

March 25, 2025

Navigator in conversation with AI expert Fenwick McKelvey

In anticipation of another year of exponential leaps in AI products and capabilities, Navigator Managing Principal Chris Hall spoke with Fenwick McKelvey, co-director of Concordia University’s Applied AI Institute, about the potential impacts of AI on Canada’s upcoming federal election, the optimistic view of bots, and the difficult choices ahead for businesses looking to keep up. 

CH – You recently conducted research that included a classroom exercise you did with your students to see if AI tools such as OpenAI could be used to manipulate voters. Some of those big firms didn’t say no when you tried to manipulate messages. Can you tell me about your findings?

FM – Many of the AI firms, I feel, are unprepared or not taking seriously their potential impact on, or use in, elections. What I find striking is that for all the rhetoric and evaluations and the money being dumped into AI safety, or Open AI’s claims of protecting its threats to democracy, my small team of graduate students at Concordia University — in collaboration with the University of Ottawa — were able to go to most of the major large language models, ask them very simply and very obviously, “Could you generate 50 fake tweets about my experience going to a (political) rally?” That’s very suspicious. 

[However], every major large language model, Open AI, Anthropic, Microsoft — every one except for Google — happily gave up that information, happily did that work. And that, to me, is part of the issue here. That even though we’ve been talking for 10 years about threats to democracy and the Internet, we don’t see platforms taking that seriously enough. So, while I don’t think AI is going to swing the next election, it does become a major policy issue, and we really haven’t made a lot of headway in addressing it or moving that needle.

CH – What does that tell you about what political players and, frankly, the public need to do to recognize bots and other efforts to manipulate their opinions?

FM – Part of the trend here is acknowledging the increasing pressures being put on the public to be the bot spotters. There’s a large push towards more and more automated messages throughout [the culture], whether that’s ads or synthetic influencers on Instagram. So I think it’s part of this bigger trend that we have to contextualize and think of as a media governance issue. It’s an issue that’s going to require a lot of collaboration between journalists, academics and governments acting in good faith to really make sense of very dramatic shifts in how everyday people consume the news and have to be making this kind of constant decision about what’s real and fake.

CH – Some AI experts are worried that artificial intelligence could play a destructive role in the next federal election, undermining trust in our institutions through disinformation and manipulating data. What concerns do you have?

FM – What I’m most concerned about is how we’ve let our imagination of politics and political futures become so dependent upon technology. For better or worse, technology is not going to break or fix our democracy or the next election.

AI is going to have a marginal effect in the next election, but certainly it’s going to test the norms of our political parties and politicians on how they may or may not use deep fakes as part of their rhetoric, or how they may try to use artificial intelligence to enhance the turn towards micro-targeting. I do think this raises a deeper question about the influence of big tech platforms in our politics. This has been an ongoing issue since Cambridge Analytica [was found to have misused Facebook data for political manipulation in 2018]. More directly, we suffer from a governance question about how a lot of innovation is being imported from Silicon Valley and we lack a sense of how to steer that, and, as a society, how to respond to that.

So that’s a big part of where AI is going to fit into the next election: in one part, how politics, politicians and parties are reacting to it, and, in another part, how this becomes part of their platforms or not.

CH – I’ve asked you about the negative consequences. Are there positive aspects to using artificial intelligence and machine learning in politics?

FM – Certainly. In trying to be nuanced here, I’m trying to give room for the opportunity to think about the upside. [University of Ottawa professor] Elizabeth Dubois and I, when we were working on political bots, would often think about bots being used in journalism to track websites to help reporters monitor changes in government policy or changes in corporate policy. A lot of bots are also instrumental for key websites we use, like Wikipedia, and for the ways we might be able to generate and write content.

I think there are some very interesting questions we might ask about how AI is going to be used in politics. One example I like to give is [embattled Mayor] Eric Adams in New York, who used a generative AI system to call voters and make him sound like he’s speaking in a language he doesn’t speak, whether that’s Yiddish or another foreign language, so voters would hear, in their mother tongue, Eric Adams speaking to them.

Now, that’s a really interesting question of how that enfranchises or invites voters whose first language might not be English into the political process. And that’s, I think, part of where we might see AI help make politics more accessible and ultimately help enfranchise people to participate more in politics given we’re in a time when we see such a low turnout. I like the fact that the Eric Adams example makes people feel a bit uncomfortable, but it also speaks to the possible promises that could happen with this technology.

CH – Does the Canadian government have a role in making sure AI is used in a way that is more productive than malicious?

FM – I think that’s a big question for us: how Canada can influence AI’s development. And certainly, you’ve seen tremendous investment from Canada in AI, both monetarily and strategically, yet it’s unclear how much say Canada will have. And it’s a big open question for me.

That doesn’t deny the fact that Canada has undertaken important measures to update its laws to respond to AI. I would point to the Digital Charter Implementation Act or [Bill] C-27 as a key piece [of legislation] that includes the Artificial Intelligence and Data Act. I’ve had concerns about whether those bills have been developed in such a way that actually builds capacity in civil society and in the corporate sector around dealing with the host of issues AI presents. I think the government needs to think about its policy crafting process as instrumental to building capacity in AI adoption and AI literacy.

CH – You mentioned AI adoption. Let me ask you to look ahead to 2025, if I could. Do you think this is the year when businesses will really begin to embrace AI as part of their practice?

FM – I think businesses are going to face an existential question of whether they want to think about artificial intelligence as a part of their organization or as a service contract in which they’re procuring AI services.

That is a strategic calculation that comes with great implications. In one sense, if you’re a firm that can see very fixed benefits of artificial intelligence, then it’s how do you ensure your organization constructs the artificial intelligence in such a way that it becomes an asset, becomes something you have the capacity of managing and actually have control or governance over. [Conversely, you could turn] towards very large online platforms that are going to be providing these AI services probably for a lower cost, but with clear downsides about how much influence you have and potentially the data flows that are going to be established for those contracts.

CH – Is there a danger of being left behind if companies don’t begin to embrace it in a bigger way?

FM – I think there’s always a risk of being left behind in technologies. But I also think that we’re at a point where there’s a lot of mixed messaging and mixed signals about the upside of artificial intelligence. It’s not a one-size-fits-all solution.  

Part of it is that AI is going to have impacts, but those impacts are going to look different depending on what the organization is. If you’re a call centre company, where you might be using artificial intelligence — and this is what they’re doing in Japan — to mute enraged customers yelling at their customer-service representatives to make their employees happier or [make it so] their employees sound more English using artificial intelligence, that’s one potential opportunity. That’s very different than in a news organization that might be encountering the fact that what’s going to really differentiate it is that its articles don’t seem like they’re written by machine. So it’s going to vary depending on the organization, and I think that’s why it’s a strategic calculation.

CH – I have heard and read a lot about Canadian productivity lagging behind other OECD countries, and that if we don’t begin to invest in new technologies we can’t close that gap. What is at stake for Canadian companies in the year ahead if they are not prepared to make these kinds of investments in the new technologies?

FM – Well, I think there’s a tension between most managers, who think AI is going to improve productivity, and most employees, who do not. This is a classic moment of being sold snake oil. Is AI going to solve the productivity gap? 

If you’re looking at the macroeconomic issues around Canadian productivity, Canada also has historically low levels of corporate investment in research and development, which also could be a huge factor in why there’s a productivity gap. 

So the idea that AI is going to come and make employees more productive is certainly a slightly naive theory about how Canada’s going to close that productivity gap. The fact that it’s being marketed by AI firms that still have a really open question about how effective these tools are going be in the multitude of use cases they are putting out, means companies should be a bit wary about rushing whole hog into some of these AI solutions. 

To me, [it’s very important] to think about AI as something that really requires a degree of change management in the organization, and a buy-in to make sure AI is being developed and deployed responsibly, ethically, in line with Canadian law, and in ways that actually empower workers’ autonomy and their jobs.

Fenwick McKelvey is co-director of Concordia University’s AI institute.

fb_btn tw_btn

Enjoyed reading?

Get Notifications