Dear @UTLaw, @BlueJLegal is Not an Access to Justice Tool

I have a confession to make: I hate technology.

Let me clarify. I specifically hate the idea that technology is special or, worse, that “technology will save us.” As a result of this revulsion (and hours spent aimlessly refreshing Twitter instead of studying for exams), I have a bone to pick with our dear Faculty’s Twitter account.    

If you follow the Faculty, you may have seen one of over a dozen promotional tweets for Blue J Legal. Led by Professors Alarie, Yoon, and Niblett, Blue J is a startup that sells software designed to help tax lawyers predict how a judge may rule in a given case. It must be said upfront that Blue J is an excellent and valuable contribution to tax law, one which will improve that area of practice substantially.

However, my disdain for positioning technology as a stand-alone solution for any of the world’s ills has made following @UTLaw unbearable at times.

Tweets about Blue J often emphasize the importance of artificial intelligence (AI) to the future of law and the resulting boom in the legal technology startup space. Take, for instance, this tweet, which links to a Maclean’s article highlighting the potential for AI to replace traditional lawyer roles:

[screenshot of @UTLaw tweet]

Or this one, which links to an article in PrecedentJD about how many students will be hired by the startup:

[screenshot of @UTLaw tweet]

Most shockingly, the @UTLaw account also, without explanation, promoted Blue J as a tool capable of improving access to justice at a reception for donors to the Jackman Law Building, where Blue J was given exclusive access to Minister Chrystia Freeland and MP Arif Virani:

[screenshot of @UTLaw tweet]

To me, this simply doesn’t add up.

Unlike other legal technology startups, Blue J is not primarily concerned with securing equality before the law or maintaining the rule of law in our legal system. Blue J is a “tax foresight” startup that allows professionals to simulate the judgment of a court in a new situation; it is built on IBM’s Watson, a sophisticated question-answering computer system. It parses natural language legal decisions and returns answers that are statistically related to similar tax situations the software has seen before. As Blue J itself explains, the ideal user of this tool is a tax professional who wants help navigating “uncertainty where there are competing reasonable arguments.” Thus, the ideal user of Blue J is likely already accessing justice the way I access Riverdale on Netflix: frequently and with confidence.

Why, then, is @UTLaw overstating both the nature and the promise of the AI behind Blue J, and making inflated claims about it as an access to justice tool? Does the school not know what it means by access to justice?

The Faculty of Law is surely aware of the access to justice problem in Ontario. This is evidenced by its many projects, such as the Middle Income Access to Civil Justice Project, which issued a report about “an acute lack of access to justice for the working poor and middle class in Ontario” and “the increasing phenomenon of unrepresented litigants.” There are dozens of tweets from the #FlipYourWig campaign and perhaps a hundred other tweets about access to justice (type “from:utlaw access to justice” into the search bar on Twitter to see for yourself).

It is important to ask these questions about the Faculty’s Twitter account and this seemingly cavalier use of the term “access to justice” because of the immensely close ties between the University of Toronto and the startup. Does the Faculty truly believe tax foresight tools will lead to democratization of the law? Or does it just feel safer to use a public law school’s Twitter account to promote a side business when it uses buzzwords like “access to justice”?

If our Faculty wants to enter the growing legal technology space, shouldn’t it, as a public law school, be using its Twitter account to responsibly reflect the debates in this field, which are alive and well?

Without more, these tweets exemplify a common misunderstanding about the state of machine design that permeates the thinking of AI proponents. The underpinning argument is that access to justice will improve where lower costs of producing information reduce the cost of understanding legal rights and obligations. Costs of producing information will be lower because computer programs will read and synthesize court cases instead of humans. As Prof. Alarie predicted in a recent University of Toronto Law Review article, the result of this revolution will be the “legal singularity,” which arrives “when the accumulation of a massive amount of data and dramatically improved methods of inference make legal uncertainty obsolete.”

By implying that AI-driven cost reductions will increase access to legal services as if this were self-evident, Blue J and the Faculty make the common mistake of suggesting that because something is logically possible, it is plausible. It is true that technology can do more and more things better or quicker than humans can. However, as Luciano Floridi (a professor of philosophy and the ethics of information at the University of Oxford) notes, this does not mean the technology is limitless: “It is like a two-knife system that can sharpen itself.” What’s the difference? The same as between you and the dishwasher when washing the dishes: machines are better, but not necessarily smarter. Further, who is to say these cost reductions will be passed on to clients?

The Blue J team is prepared for skeptics, like me, who think law is special and cannot be reduced to a mere algorithm. In their writings, they say that this same skepticism led people to believe a computer could never win a chess match, and that humans “underestimate the importance” of technological change. However, there is a critical difference between chess and the legal system. The algorithms that underpin machine learning can only work if you can define their limits. Chess is simple for learning machines to understand because the game has well-defined limits. The Blue J team does not address how we can specify similar limits around the entire legal system.

More importantly, by suggesting that democratization of the law can be accomplished with AI because machine learning tools are “devoid of emotion and bias; free from fatigue,” the Blue J team fails to address how difficult and resource-intensive it is to test for confirmation bias in artificial intelligence. As a significant (and growing) body of literature dedicated to this issue explains, machine-learning algorithms may be neither transparent nor predictable because they are inscrutable by their very nature. It is therefore especially important to highlight overstatements about the capabilities of AI in legal applications, because such applications can disregard the law’s built-in ethical considerations, such as the principles of transparency and openness. Without access to participate in the creation or enforcement of the law, a society cannot be said to be ruled by law, nor can it provide for equal distribution of legal statuses.

The capabilities of machine learning today simply shift the responsibility from the legal code and people that uphold it onto an algorithm and the designers of that algorithm. A claim that machine-learning algorithms will eliminate the need for ex post ethical considerations neglects to consider that relying only on one regulatory constraint, technology, requires giving up control of the legal system. Fundamentally, the motivation to promote machine learning as the path to access to justice boils down to a legal positivist anxiety: humans are imperfect and will never perfect the legal system, so we ought to code something outside of ourselves that shifts the burden of responsibility.

Promoting Blue J as an access to justice tool, without interrogating the biases (e.g. profit) underlying its development (or even those in the Tax Code itself), misunderstands an essential factor of technology in general: Technology, like the law, is neither essential nor neutral. It is predicated on social context, embedded with assumptions, and limited by the boundaries of its design. Technologically determined solutions that do not assess the power dynamics responsible for existing barriers to justice in Anglo-American law cannot democratize that law. Suggesting that technology alone can improve access to justice obscures the significant risk that these applications will solidify existing distributions of legal power, making it “easier” to access justice only if one accepts the status quo.

I am incredibly excited for the Faculty to become more involved in legal technology and to continue supporting theoretical and practical inquiries into this space (as it has with the recent Artificial Intelligence, Technology and the Law conference). However, any promotion of for-profit enterprises that claim to be access to justice tools should be qualified by specifying what is meant by that phrase, or at least by giving time to the important theoretical debates taking place.

Blue J cannot democratize the law on its own, and neither can any other artificial intelligence tool. Changing the distribution of legal resources will, as always, require significant political and social will. I simply think the Faculty’s tweets should reflect this reality.