An Interview with Professor Gillian Hadfield

Angela Gu

Hadfield’s work occurs at the intersection of law and AI

On January 16, 2020, Professor Gillian Hadfield delivered the keynote presentation at the CIFAR Canada-US AI Symposium on Economic Innovation’s public forum at MaRS Discovery District. The public forum convened panels of industry experts in discussions about AI policy and how it relates to economic growth and responsible innovation. After attending the forum, I sat down with Professor Hadfield in her office at the Faculty of Law to talk about law and AI. Hadfield is the inaugural Schwartz Reisman Chair in Technology and Society and the director of the newly established Schwartz Reisman Institute for Technology and Society.

Professor Gillian Hadfield at MaRS Discovery District. Photos courtesy of CIFAR.

Ultra Vires (UV): Your work occurs at the intersection of law and AI. What brought you into the field? 

Gillian Hadfield (GH): So, this book [gesturing to her book Rules for a Flat World], which was published at the end of 2016, was the product of what I’d been working on for about 15 years up to that point, and that was thinking about our legal systems, our legal infrastructure, and looking at them from the point of view, originally, of how well they were working. Even just as an economist, because that’s my PhD, I was starting to think about, structurally, the market for lawyers and the nature of law. 

That kind of set me off on a path of thinking about whether our legal systems were working very well. And then I started looking around and saying, “Well, you know, they don’t seem to be working all that well; they can’t be working all that well if a lot of people can’t afford them, and they’re not working very well if they take too long to come to decisions.” So I started thinking about the design of our legal systems and evaluating them against how well they perform. 

The book lays out the argument that our legal institutions today are not well adapted to the complexity, the speed, and the global nature of what’s happened. I primarily focus on our economic relationships, but it’s true of our social relationships as well. And the focus on AI kind of just grows out of that. So now here’s this transformational technology, and my conclusion ten years ago was that our existing legal and regulatory structures are not going to be able to handle that. What are we going to do now that we have AI on the horizon? That’s what got me interested in it, sort of from a theoretical point of view.

The other [thing that got me thinking about this intersection between law and AI] is my oldest son, who is finishing a PhD in AI at [UC] Berkeley, and his area of interest is how we will make sure that AI is good. So he and I started having lots of conversations and found all these overlaps between his research in AI and the type of work and research I was doing in economics and law. It’s a very rare example of the son getting his parent into conferences and conversations, which introduced me to a lot of people [laughter], and it was also great to have somebody to ask really stupid questions about AI when I was still learning about it. 

AI is moving very quickly, and there’s lots of conversation about how we want to make sure it’s safe and good and fair. Then I’m looking back and saying, but we don’t have the regulatory and legal structure to manage that, and I really think there’s an urgency around how we’re going to develop that. 

UV: In your presentation, you mentioned how, if you give a privacy statute to 10 lawyers to interpret, you will get back 11 opinions. Could you elaborate on that a little more? 

GH: I talk about this in the book a little. So, part of what we’re facing in our legal systems today is that they’re highly complex, in part because of the language. When we write a statute, or a contract, those statutes and contracts generally have a lot of fuzzy language in them. They say things like “use best efforts,” “exercise good faith,” and “take reasonable steps.” We don’t write them that way because we’re not smart enough to realize that’s going to be ambiguous; we write them that way because we can’t enumerate all the possible facts and circumstances that might arise. 

The question of the formation of a contract is, what was it reasonable for the other person to think? Did they intend to be bound or not? Well, why do we express it that way? Because we can’t say anything more precise upfront about the circumstances. So it’s part of the nature of law that it’s subject to interpretation. As we get more law, more cases, more guidance, and longer statutes, we end up where we can give it to 10 lawyers and get 11 opinions. 

Those numbers came to me when I taught an advanced contracts class (Professor Drassinower was one of my students in that class), which I co-taught with a practitioner from McCarthy Tétrault. It was that lawyer who said to the class, “You know, you give a contract to 10 lawyers, and you will get 11 opinions about what it means.” 

This is why I don’t have a lot of confidence in the idea that we are going to address the problems we’re facing right now in the privacy domain [for example, privacy law blocking AI’s progress in healthcare in Ontario] with another thousand pages of privacy law. We need a different kind of system to try and figure out how we will make sure that we’re protecting people, but without cutting off our ability to do AI research. 

UV: Also, there might already be bias in the data depending on who consents to its use.

GH: There’s bias that comes from that, and then there’s also the question: does it mean anything? Can we really use our consent tools for this? We’re clicking boxes all the time, which is contracting. We enforce a contract because both parties have said, “Yes, I’d like to do this! I think this is good for me.” We’re supporting their autonomy, and we’re enabling private ordering. But most of us clicking those boxes, what do we know? Maybe we didn’t even read it. But even if you did read it, and I have read them, you have no way of evaluating whether this is a good trade-off. Most of us just want to get to the next screen, so we click. So the idea that we’re using consent as a tool is highly problematic. I don’t think it’s appropriate at all. 

Which is not to say people shouldn’t have the ability to say no. Informed consent makes a lot of sense if you’re saying, “I don’t want to participate in your clinical trial.” It may not make any sense when we’re saying, “I don’t want my test results to ever be used in the medical research community, in any way that might possibly run a risk that some researcher might be able to figure out who I was.” I mean, we want to protect against that. We want people to feel like hospitals treat them with care and dignity. But maybe we don’t want to say we can’t do the research that makes it easier for people with diabetes to avoid blindness. So these are regulatory questions. 


UV: This morning we talked a lot about public policy and how education plays into it, with an informed public. What kind of role do educational institutions like U of T have to play in this? 

GH: For a certain amount of these decisions we need experts, and we shouldn’t all have to understand all the details of how AI works in order to be protected and well treated by it. But I think AI is going to be so pervasive that it raises unique kinds of challenges. I do think that it’s going to be so pervasive that everybody on campus, frankly, should understand some basic things about artificial intelligence. 

You know, we’ve organized our legal education for a century around contracts, property, torts, these doctrinal areas of law. We cannot keep graduating students who don’t understand the basics of this technology, because lawyers are going to play a big role. Lawyers are drafting privacy legislation, and judges will be deciding cases under it. I think we absolutely have to understand that. At the same time, I think our computer science and engineering students need to understand that it’s not just somebody else’s problem to figure out: is this safe, is this fair, is this appropriate? They need basic exposure to understanding the way people work, the way societies work, and the way institutions work. 

I think that’s an ambition for the [Schwartz Reisman] Institute [for Technology and Society]. I think we should find a way to really make [this kind of education] available and knowable, so that it’s part of the repertoire of every well-educated student graduating from the University of Toronto. We really need to have everybody talking the same language and collaborating on that. One of the issues we face in our modern universities is that we’re very siloed, so we would also like to be in a place where students in graphic design, law, engineering, and nursing can find each other to talk about these issues, and to say, “Hey, I’ve got a great idea about how we could do that better.” I hope we’ll be doing some of that too. 

UV: Can you share some words of advice to law students who may be interested in thinking about how AI and law intersect?

GH: I will be teaching a course! I haven’t designed it yet, but I will be teaching a course on law and AI next year. I’d say, definitely go out and learn something. You can find a million videos on YouTube, understand how machine learning works, you can find blogs, you can read. Just start exposing yourself to all of these issues, and anticipate that this is going to be a real comparative advantage: anybody who gets out there in the law world who knows about this [AI] domain will be well served. Just start putting your cup in the stream, and drinking [from it]. Well, it’s a big stream, but you can still take a cupful at a time. 

This interview has been edited and condensed for clarity and length. 
