Can We Trust Big Tech With AI?

Sonali Ravi

Experts weigh in at trial

Can Big Tech be trusted with artificial intelligence (AI)? It’s a deceptively simple question with no easy answer. This was the issue that ignited a battle of ideas in a mock trial co-presented by U of T’s Future of Law Lab, Canadian news outlet The Logic, and the Rotman School of Management. On September 27, some 200 “jury members” gathered to witness Big Tech on trial. During the proceedings, it became clear that the question was not whether governments should risk leaving AI in the hands of Big Tech. The real head-scratcher was instead how precisely to regulate these tech titans and their work with AI. Is it time for an all-out rulebook or a gentler hand?

Dany H. Assaf (Co-Chair of the Competition & Foreign Investment Group at Torys LLP) captivated the audience with his opening statements. As lead counsel for the defence, he laid out the paradoxes of AI: it can deceive and displace, but it simultaneously has the potential to solve humanity’s most pressing problems. Taking a practical and utilitarian view that he echoed throughout the trial, he argued that AI is simply too instrumental not to trust.

The prosecution, led by Fiona A. Schaeffer (partner at Milbank LLP and Chair of the American Bar Association’s Antitrust Law Section), launched a fiery attack against Big Tech’s trustworthiness in relation to AI. She argued that because AI runs on biased, untested data, it magnifies untruths and therefore cannot be left to its own devices without safeguards.

Joshua Morrison, director of the Future of Law Lab, whose efforts played a pivotal role in bringing this event to life, summed up the remarkable scope of the proceedings: “It was the first event I’ve ever been a part of that successfully linked healthcare legislation, the plight of immigrants, military drones, and the development of the radio. We covered unbelievable ground, and I imagine that everyone who attended, including the expert witnesses, was exposed to a huge amount of information they had never considered before.”

The highlight of the trial was its roster of expert witnesses, a star-studded cast who elucidated the problem from a diverse set of angles. Simon Kennedy (Deputy Minister of Innovation, Science and Economic Development Canada) was frank about the government’s limitations, admitting that developing prescriptive rules would be impossible because it is difficult to predict where AI is headed. U of T’s own Professor Gillian Hadfield underscored the importance of democratically set regulations to offset the concentration of AI in the hands of a small number of powerful tech giants. Armughan Ahmad (CEO of Appen) argued that AI management requires a human touch. He powerfully demonstrated AI’s potential to facilitate “radical abundance,” allowing societies across the globe to benefit from this technology.

However, the expert witnesses unanimously agreed that stringent regulations could stifle innovation. Avi Goldfarb (Professor of Marketing at the Rotman School of Management and Rotman Chair in Artificial Intelligence and Healthcare) warned against AI’s “death by a million regulatory cuts.” As Daniel Araya (Senior Partner of the World Legal Summit and Senior Fellow of the Centre for International Governance Innovation) explained, AI regulation is an international problem that demands an international solution.

Perhaps as compelling as the content was the format. Morrison explained why he chose a mock trial format to debate the issue: “It feels like half of all events these days are panels around the impact of AI in a particular profession. We wanted an idea that would stand out amidst a crowded field […] We were worried the witnesses (our guest experts) would object to being cross-examined, but everyone was a great sport.” Indeed, David Skok (CEO and Editor-in-Chief of The Logic) acted as judge, keeping things fair, but not without witty commentary to keep the mood lighthearted.

How did this pan out in the “courtroom”? Acting as an avatar for the jury, along with Sara Maqsood (2L) and Samir Reynolds (3L), I explained to the “court” where I stood after the trial. I, for one, was convinced by the defence and its resounding warning against too much regulation. Maqsood, however, was persuaded by the need for “checks and balances designed by democratically elected representatives.” In the end, the numbers spoke for themselves. Initial audience polls showed that 72 percent distrusted Big Tech with AI. By the culmination of the trial, that number had risen to 77 percent.

So, can we trust Big Tech with AI? Only time will tell for certain, but as future lawyers, we are in a unique position not only to watch the situation unfold, but also to play a part in shaping it.

Editor’s Note: Sonali Ravi is an executive in U of T Law’s Privacy and Cybersecurity Law Group.
