The 2023 Grafstein Lecture Focuses on the Social Implications of AI

Tom Russell

Professor Kate Crawford discusses the ideas from her new book, Atlas of AI

On the evening of February 8, the Rosalie Abella Moot Court Room was full of academics, artificial intelligence (AI) enthusiasts, and posh tweed jackets. The Grafstein Lecture in Communications is an annual event established by retired Senator the Honourable Jerry S. Grafstein, K.C.

The speaker this year was Professor Kate Crawford, a principal researcher at Microsoft Research, co-founder and former director of research at the AI Now Institute at New York University, and associate professor at the University of New South Wales. Prof. Crawford discussed some ideas from her 2021 book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. 

Prof. Crawford began her lecture by highlighting that we are currently at an inflection point in the development of AI. ChatGPT has been the talk of the town for the past few months. Additionally, Google announced its own AI chatbot, as did Microsoft. Some news outlets have warned that this may turn into a sort of “AI arms race” with unintended consequences. Prof. Crawford’s writing aims to shed light on what these consequences might be. She describes her book as a look behind the curtain, examining critically how exactly the “magic of AI” really happens.

In her lecture, Prof. Crawford summarized four concerns, each labelled a different “ground,” about how the magic of AI comes to be.

First, Prof. Crawford identified issues with “ground truth”—otherwise known as the reality an algorithm is meant to model. Machine learning, as used in the development of AI, relies on training data to calibrate an algorithm until it can accurately label that data. However, Prof. Crawford identified a fundamental problem with how this training data is acquired. She lamented that the current philosophy is “quantity over quality.” The acquisition of training data has come to depend on click-workers, forgoing the benefits of human expertise, stripping the data of context, and ignoring the cultural and subjective judgments involved in labelling it. We are left with a ground truth that is subjective, stripped of context, and lacking any external measure of validity.

Professor Kate Crawford, author of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021). Credit: Kate Crawford

Second, Prof. Crawford pointed to the issue of what she calls “slippery ground”: the current emphasis on collecting datasets that are as massive as possible. She began her argument by asking us to imagine nouns existing along an axis from the most concrete to the most abstract (e.g., rock versus hypocrite). The problem, Prof. Crawford stated, is that training data pairing abstract nouns with images can quickly come to resemble moral judgments, capturing the culture and prejudices of the people who built that training data, especially when it is stripped of any context or external validation. Prof. Crawford brought up the example of ImageNet and the 2019 story on the dataset by The New York Times. That story highlighted concerns that people were being assigned moral judgments based on their gender and race, amongst other characteristics. In response, ImageNet made efforts to strip its datasets of dangerous image-noun pairings. However, Prof. Crawford is unconvinced. She asked the audience how we should define “safe” when we are in the business of classifying people.

Third, Prof. Crawford explained the issue of “polluted ground.” She stressed that the computing power necessary for AI is enormous, requiring specialized hardware and vast amounts of energy. The current “AI revolution” arrives at a time when the environment is already badly strained, and the tech sector is poised to overtake the aviation industry in carbon emissions. If AI ends up becoming a staple of almost every element of society, there is a risk that the day-to-day functioning of that society will dramatically increase its carbon footprint.

Finally, Prof. Crawford discussed the issue of “generative ground.” She asked the audience what will happen to ground truth when most content is produced by AI, and raised concerns about how AI will change how we understand ourselves and others when most of the language and images we consume are based on statistical representations of the past. Prof. Crawford noted that no one really knows the answer to this question, but it is worth thinking about.

At the end of her lecture, Prof. Crawford stressed a few points. First, there are no standards or quality controls for the datasets currently used to train AI, though there probably should be. Second, it is unclear how we will audit datasets that contain billions of entries. Third, there is a risk that laws and regulations will fall dangerously far behind the development of this technology. Fourth, we will struggle to make sense of creative control in this new age of AI. Finally, we do not truly know who will win and who will lose as a result of this change in information technology, and there is a danger that it could entrench power in an ever smaller set of hands.

Prof. Crawford’s lecture was insightful, interesting, and, at times, quite concerning. The promise of the AI revolution is exciting and scary at the same time, and I am grateful that there are academics like Prof. Crawford who devote so much effort to understanding the consequences of our actions and to steering us toward a better tomorrow. During the question-and-answer period, one audience member asked her whether she was an optimist or a pessimist. Prof. Crawford replied, “I think if I was a pessimist, I wouldn’t be able to do this job.” If you are interested in learning more about Professor Kate Crawford’s work and ideas, check out Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, now available online.
