
Reclaiming the Future: Privacy, Ethics & Organizing in Tech with Meredith Whittaker & Kade Crockford

Friday, June 7, 2019
7:30pm Pacific Time
KQED Broadcast: 08/18/2019, 08/20/2019, 08/21/2019

This event appeared in the series
Conversations on Science & the Future

We've made a recording of this event free to all. Please support our institution and these productions by making a tax-deductible contribution.

As Director of the Technology for Liberty Program at the ACLU of Massachusetts and MIT Media Lab Director’s Fellow, Kade Crockford works to protect and expand core First and Fourth Amendment rights and civil liberties in the digital 21st century, focusing on how systems of surveillance and control impact not just society in general but their primary targets—people of color, Muslims, immigrants, and dissidents. The Information Age produces conditions facilitating mass communication and democratization, as well as dystopian monitoring and centralized control. The Technology for Liberty Program aims to use our unprecedented access to information and communication to protect and enrich open society and individual rights. Kade has written for The Nation, The Guardian, The Boston Globe, WBUR, and many other publications, and regularly appears in local, regional, and national media as an expert on issues related to technology, policing, and surveillance.

Meredith Whittaker is a Distinguished Research Scientist at New York University, Co-founder and Co-director of the AI Now Institute, dedicated to researching the social implications of artificial intelligence and related technologies, and the founder of Google’s Open Research group. She has worked extensively on issues of privacy and security in numerous capacities, including as co-founder of M-Lab, a globally distributed network measurement system that provides the world’s largest source of open data on internet performance, and as co-founder of Simply Secure. Whittaker has advised the White House, the FCC, the City of New York, the European Parliament, and many other governments and civil society organizations on artificial intelligence, internet policy, measurement, privacy, and security.


Transcript

City Arts & Lectures presents Reclaiming the Future: Privacy, Ethics & Organizing in Tech. Meredith Whittaker & Kade Crockford. Friday, June 7, 2019. cityarts.net • 415-392-4400

Meredith Whittaker: Good evening, those of you not watching the Warriors. Don’t spoil it, I’m gonna watch it on the NBA app afterward.

Great. So we’re going to introduce ourselves and then we’re going to just start talking to each other and we hope you enjoy the conversation we’re going to have. But maybe Kade, if you want to kick it off. Like, who are you, why are you here, and what is the great work you’ve been doing? 

Kade Crockford: Hi everyone. My name is Kade Crockford. I am the director of something called the Technology for Liberty program at the ACLU of Massachusetts. Thank you very much. 

We work to essentially fight for civil rights in the digital age. So historically that has meant doing a lot of work to try to rein in government surveillance using digital technologies. And in the past few years, it has involved a lot of work on the civil and human rights implications of machine learning technologies. And that’s a lot of what we’re going to talk about tonight.

Meredith Whittaker: Yeah, I’m Meredith Whittaker. I am the co-founder of the AI Now Institute at NYU and we are an academic research institute founded to look at the social implications of AI. So how do we begin asking better questions and you know, getting more information about the way these technologies are shaping our lives right now.

I am also somebody who’s been working with a number of my colleagues at Google and across the industry to organize tech workers around some of the same issues that we’re going to talk about tonight. So that’s why I’m here and that’s who I am. 

Kade Crockford: So to begin, why don’t you–since you co-founded an Institute called AI Now–why don’t you tell us what is AI and what is particularly concerning about it?

Meredith Whittaker: Let’s start with a small question. I mean, I, you know, I might go back and do like a kind of self-indulgent history if you don’t mind, because I think you know, one, AI is an overhyped marketing term, so it’s unclear what AI is a lot of times, especially because it’s mainly being produced by corporations and companies that don’t let you in to sort of check it out.

Kade Crockford: Yeah, I’ve heard some engineers say, “when you’re talking to technical people you call it machine learning, and when you’re trying to sell something you call it AI.” 

Meredith Whittaker: Yeah, if you want that Series A, you’re selling AI. But I have been in the tech industry for over a decade. Like it’s literally my only career. I’ve been at Google almost 13 years, and I also, you know, founded the independent research institute. And it was around maybe, I don’t know, 2013, 2012, that AI suddenly became the sort of like M.O. of everything. 

Every, you know, engineering team I was talking to, any technical conference was suddenly–you know problems that had been difficult before, sort of markets that had not been available to technology, were suddenly you know, seemingly easy or seemingly open because we were just going to AI it, right. 

So I got pretty curious. Like what is this thing that you’re talking about? And it turns out that you know, AI is many things, but sort of fundamentally in the most like schematic way– and I don’t want to argue with anyone on the internet about this–

Kade Crockford: Don’t @ me please. 

Meredith Whittaker: Do not @ me. I do not have time for a reply guy on this Friday evening. Basically you have a huge amount of data. Or you have as much data as you can get. And you use this data to train an algorithmic system. So think of this as a special kind of software system to understand what’s in that data.

So like a classic example is images of cats, right. And you show this system, you know, a hundred images of a cat, a thousand images of a cat, a million, you know a hundred million images of a cat, and pretty soon the system can recognize what is cat-like about an image and whether or not an image has a cat in it, right. 

And then you have a system that once you’ve trained it to have that capability, you can point it at other data. You can point it at other images and you know, I’ll point it at Kade–Kade does not have a cat in it, right. But you can imagine beyond this sort of use case of detecting felines, there’s a lot of data out there, right? 
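
In code, the train-then-apply pattern being described here looks roughly like the minimal sketch below, assuming scikit-learn and purely synthetic stand-in data (random pixel values and made-up labels rather than real cat photos):

```python
# A minimal sketch of supervised learning: train on labeled data, then apply to new data.
# The data here is synthetic; real systems use millions of human-labeled images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each "image" is a flattened 8x8 grayscale picture (64 pixel values),
# and each label records whether a human annotator marked it as containing a cat.
images = rng.random((1000, 64))          # 1,000 labeled training "images"
labels = rng.integers(0, 2, size=1000)   # 1 = "cat", 0 = "no cat"

# Training: the system finds patterns in the pixels that co-occur with the label.
classifier = LogisticRegression(max_iter=1000)
classifier.fit(images, labels)

# Inference: point the trained system at data it has never seen before.
new_image = rng.random((1, 64))
print("cat" if classifier.predict(new_image)[0] == 1 else "no cat")
```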

And so these systems are being trained on medical data to you know, offer diagnoses and possible clinical advice. They’re being trained on policing data to detect who might be a criminal and who is less risky, right? They are being trained on education data to determine which students deserve to get into one school, and which students might not be a good fit, right. They’re being trained on past resumes and employment data to determine who is a good hire and who’s not going to be a good hire. 

And I think you know, at this moment maybe you can begin to see what some of the problems with these systems might be, because of course when we talk about the data we’re using to train these systems we’re talking about data that reflects the social world we live in right now. That reflects sort of hierarchies, inequity, racism, misogyny.

All of these patterns that you know, we’re sort of working through in our world are being embedded in these systems, often times in ways that are very hard to detect, in part because these systems are being produced by large corporations that are proprietary and that are you know, oftentimes kind of, you know, I would say laundering this type of bias, this type of view, through sophisticated computational systems that are you know, hard to contest, and that a lot of people treat as neutral and objective. 

So there are a lot of dangers with this technology and we can get into some more of those. But that’s a kind of quick version of how I got here and why I’m worried. 

But Kade I would love to hear a little on your perspective, specifically coming from the context of working so closely with criminal justice and policing and the intersection of those domains and and these technologies. 

Kade Crockford: Yeah. Sure. Before I get to that, I want to share with you something that Joi Ito, the director of the MIT media lab, has said, which I think is really smart. I worry that people who don’t work in machine learning, who are not technology experts, who don’t obsess over the civil and human rights implications of technology the way that you and I do, Meredith, have a sense that because the consumer-facing technologies that we all use all the time are so smart and effective–Google Maps, iPhones, etc.–that there’s almost this mystical or kind of like religious view that AI or technical systems generally are magic. Or that they are somehow God-like in their authority. And Joi Ito talks about AI, instead of using the term artificial intelligence, he describes it as extended intelligence.

And I think that’s a really important conceptual framework because it points us to the fact that from the beginning stages of developing an AI system all the way through the output, what you’re looking at is a series of human decisions and human choices, right. From the first human decision of what AI should we build, to the very last decision, which is when does the AI work? When is it good enough to be unleashed on to the market? 

And in between those two decisions, there are a series of human decisions that are made that shape, you know, the product and the decisions that the product makes on people like us every day. So I just want to introduce that concept, because it’s not magic. It’s not artificial at all, actually. It’s really the product of a series of human decisions. 

Yeah. So, you know, I’m interested in human decisions related to again civil rights and human rights and civil liberties, and we’re really concerned about uses of machine learning technologies and even some more crude forms of automated statistical analysis in the criminal legal system. One of which is facial surveillance, face recognition, and we’re sitting here in San Francisco tonight. I think probably you all know that San Francisco last month became the first city in the country to ban the use of face recognition in the municipal government.

So that is in large part due to my colleague Matt Cagle who is a technology lawyer at the ACLU of Northern California, who I believe is in the room right now. So shout out to Matt. And we in the Massachusetts affiliate are also working with the government in Somerville, which is a city right outside of Boston, to ban the use of face recognition by the municipal government there.

We’re also really concerned–and I think we should talk more in a little bit about some of our specific concerns related to face surveillance–but we’re also concerned about the use of machine learning in systems that are called predictive policing. And I think predictive policing technologies are a really good example of some of the dangers that are raised by these technologies.

Some people have said that if you want to make the future look like the past, machine learning is a really good tool for that. And the reason is because, as Meredith said, these systems are trained on data. So again, in a society where police, not in one part of the country, but in every part of the country, and not just for a short period, but for as long as police have existed in the United States, have disproportionately focused their attention, in terms of surveillance, investigations, arrests, prosecutions, all the way through the criminal legal system into the length of criminal sentences, whether someone will be granted parole or probation at the end of their sentence–there are racial disparities, extreme racial disparities, throughout every stage of that system. 

So in the predictive policing context, corporations are developing products that input large quantities of arrest data, for example, and then offer law enforcement advice about where they ought to schedule their patrols.

So just a very crude way of thinking about this is to think about the example of drug arrest data. We know that in basically every part of the country, you are many many times more likely to be arrested for a drug offense if you are Black than if you are white. In the city of New York where Meredith lives, something like 90% of marijuana arrests are of Black and Latinx people, despite the fact that I can tell you from personal experience, white people in New York smoke weed. And we aren’t arrested for it. 

So, you know, it’s funny ha-ha, but that’s obviously a crisis in our society. And when you are taking those data and entering them into an algorithm and expecting anything other than the reification of that bias to come out on the other side, that’s a fool’s errand. 
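
A toy illustration of that “bias in, bias out” point, using invented numbers rather than any real arrest data:

```python
# Toy example: two neighborhoods with the same underlying rate of drug use,
# but historical enforcement produced far more recorded arrests in neighborhood A.
historical_arrests = {"neighborhood_A": 900, "neighborhood_B": 100}

# A naive "predictive policing" score that learns only from past arrest counts
# will rank neighborhood A as far riskier and direct patrols there...
total = sum(historical_arrests.values())
risk_scores = {n: count / total for n, count in historical_arrests.items()}
print(risk_scores)  # {'neighborhood_A': 0.9, 'neighborhood_B': 0.1}

# ...which generates still more arrests in A, which becomes tomorrow's training
# data, reinforcing the enforcement pattern rather than measuring actual offense rates.
```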

Unfortunately, however, you know for a variety of self-interested reasons, corporations and governments are adopting these systems. And not only does the technology reproduce and sometimes even exacerbate historical inequities and biases, but on top of that is placed this veneer of technological neutrality. Sometimes I call it tech-washing, right? I’ve even heard police say “well, of course, you know, the instructions, the advice, that we get from these predictive policing systems can’t be racially biased because computers are not racist, computers can’t be racist.” You know, it sounds silly but that’s really the type of ridiculousness that we’re dealing with.

So we’ve been pushing back on the adoption of technology like predictive policing, like face surveillance. 

I’m curious to hear what you think about this though, Meredith. You know, many of us are familiar, when we log on to a website, Google will sometimes give us a Captcha right, where we are asked to please check all the boxes that have a crosswalk in them, or that have a car in them.

You probably know this, the technologically-sophisticated audience in San Francisco, but you are actually helping Google’s machine learning system learn how to identify a crosswalk, learn how to identify a car. That’s one way in which human beings are actually mobilized very specifically to develop these technologies, but can you talk about the case of the robot at Berkeley?

Meredith Whittaker: Yes. Let me go off. And I think as you previewed right, like it’s humans all the way down. There’s actually, at every stage, there are humans involved in making subjective choices that then get bundled into something we call AI or machine learning technology. 

So the robots in Berkeley, I, you know, I’m going to describe this for the potential radio audience and maybe you guys haven’t been up to the Berkeley campus where you have these little robotic creature-like things that are kind of like a mini fridge on wheels.

And you’ll be walking through the sort of pastoral environs of the UC Berkeley campus and then one of these things will sort of roll up on your heels. It’ll pause. It’ll roll a little further. And these are food delivery bots. And I guess you can order Ramen or cookie dough or whatever college students eat, and it will you know, muddle its way to your dorm and give you your food and somehow a transaction is, I don’t know.

The point of this is that everyone was sort of oohing and aahing–“these are automated food delivery. This is really, you know, we’re seeing the, you know line of technological progress…” 

Kade Crockford: “This is the future!” 

Meredith Whittaker: This is the future. It’s cookie dough. I made that up, I don’t know if they deliver that. 

Um, but you know, it turns out that these aren’t automated robots. They are robots that are piloted by very low wage workers in Colombia who make under two dollars an hour to basically look at a screen and sort of direct the robot’s next step. 

And this is a little bit surprising, but it’s actually kind of the status quo on a lot of these you know, seemingly automated technologies. You have you know, a company in China that sells an AI based service that you can walk into–imagine walking into a 7-Eleven and all you do is you grab a bunch of drinks and you leave. And it automatically deducts what you grabbed from some account you have, right. And you don’t have to pay, there’s no cashier. 

Well, you know this auto magic that is being sold as AI actually has again, a sort of dark room full of precarious low-paid workers who look at images, sort of photos of whatever you grabbed off the shelf and verify that like, it was a Sprite and not a Snapple. It was an apple and not a sandwich. Because the AI can’t do that. 

And so a lot of times what you’re seeing with this sort of, you know–the promise of automation is not the replacement of human labor. It’s the displacement of human labor. And the scholar Astra Taylor talks about this in a really lovely lovely essay in Logic Magazine, but it’s sort of hiding all of the work that needs to be done to make the system appear to be technologically sophisticated and live up to the marketing promises. 

And we haven’t even gotten into the armies of click workers–again sort of precarious labor that are paid very little–who are required to label the datasets. Say the sort of cat data set I mentioned before, you need somebody to label each one of those images, cat, cat, cat, cat, cat, or the machine won’t know what it is. So those labels are absolutely essential for supervised machine learning, which is the, you know, the flavor of machine learning we’re usually talking about when we talk about AI. 

But you need that human capacity. And you have you know, hundreds of thousands of people being paid, you know pennies, to label these images. Their determination about what is a cat and what isn’t, or in a more controversial example, what is hate speech and what is not, becomes the ground truth on which these systems rely and which they reflect back onto the world every time they sort of interpret a new image or interpret a new phrase. 
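
A small sketch of how labelers’ judgments become the “ground truth” a model learns from, using a made-up content-moderation example and a simple majority vote (one common way such labels are aggregated):

```python
# The same post labeled by two different annotator pools yields different training labels,
# and therefore a different "ground truth" for any model trained on them.
from collections import Counter

post = "you people are ruining this neighborhood"
annotator_pool_a = ["hate", "not_hate", "hate"]      # one group of click workers
annotator_pool_b = ["not_hate", "not_hate", "hate"]  # a different group

def ground_truth(votes):
    # Majority vote among annotators becomes the label the model treats as fact.
    return Counter(votes).most_common(1)[0][0]

print(ground_truth(annotator_pool_a))  # "hate"     -> the model learns to flag this phrasing
print(ground_truth(annotator_pool_b))  # "not_hate" -> the model learns the opposite
```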

So across the board there’s a lot of labor that’s required to prop up the image of an infallible technically sophisticated automated system. And you know, I think we can get into some of the sort of you know, the advent of gig work and the way in which these systems are being increasingly applied for sort of worker control and surveillance, but… 

Kade Crockford: I want to get back to classification and labeling for a second though, because it’s relatively inconsequential whether someone makes a mistake when they’re deciding is this a picture of a cat, yes or no, right. That’s a binary distinction. Probably worldwide most people have a pretty similar idea of what a cat looks like and what it doesn’t look like. 

But machine learning technologies that are based on these data sets that require human classification and labeling are also being used in contexts where people may have widely varying views on for example, what is a man or a woman? Or what is a Black person or a white person? Or what is a happy face or a sad face? What, for example, is a criminal face? 

So you’ve looked into some research on this criminality AI. Can you talk a little bit about that?

Meredith Whittaker: I mean, that is one application we’re seeing, which is fairly egregious, and again, I think is, you know–there was a paper out a couple years ago that purported to be able to detect who was a criminal based on a driver’s license photo of somebody and it purported to be able to do that with more accuracy than a human.

But again what you’re seeing–and this is sort of a brand of facial recognition called affect recognition. And basically you know, facial recognition is good for saying you know, I recognize that face. I’m going to map an image of that face to a face in our database that has Kade Crockford’s name next to it. And now I know, I recognize that face as belonging to that identity, right. That’s useful for police, that’s useful for law enforcement, I would say it’s very problematic for those cases as well. But it’s not an application you really can sell much beyond that right? There isn’t a big market for it. 

But there is a big market for sort of affect recognition. So we layer on top, you know, not only is this Kade, but Kade has the physiological characteristics of a good worker. Or maybe not a good worker, maybe this is not someone I want to offer a job to, right. Kade looks happy, Kade looks sad, you know, this is Kade’s gender, this is Kade’s race. 

And so you begin sort of adding on different detection capabilities that I will say frankly are extremely alarming to me because these mirror sort of centuries of race science and of kind of pseudo-scientific classification techniques that were often used to you know, to classify differences between human beings as markers of superiority or inferiority. And actually to justify social hierarchies and social inequality as the product of biological destiny. 

And in the case of machine learning, there is no new scientific proof that these things are true. What you have is, you know, some precarious workers in a warehouse saying like this is what a woman looks like, this is what a man looks like. Most of this happens in India and the Philippines. And then that data set is piped through an AI system. You may have some people who are specialists, you may not. 

But again, what you’re looking at is the kind of you know, rehashing of these notions and then these become sort of you know, used for extremely significant decision making. So you know, they’re used in hiring, they’re used at border control, they’re used by police now, they’re used in stores to detect you know, shoplifters. Do they have the sort of body language of somebody who’s anxious, etcetera. 

So we could go into some examples of some of those, but I think you know, again, that’s the direction we’re seeing this technology headed in, and something I think we should be alarmed by. 

Kade Crockford: Yeah, and just to give you folks an example of how poorly these technologies perform often, I think it’s Microsoft that still has a tool available online called “who is,” I believe, and I fed this system a photograph of me, and it’s supposed to return to you your age and your gender, and it said that I was a three-year-old boy, so. 

Meredith Whittaker: You look great. 

Kade Crockford: Yeah. Yeah, so they don’t necessarily work. And you know, it’s funny when it’s a totally inconsequential internet prank, but it’s not so funny, for example, if I’m trying to access a restroom that is reading my gender or something and then refuses to unlock the door to let me in. I really don’t want to pee myself in public because AI doesn’t know what to do with my gender. 

So, you know, so Meredith, you and I and many people who work in this space around, you know, the intersection of technology and society, technology and politics, technology and civil rights, get invited these days especially to a lot of conversations about AI and ethics.

You’ve probably read a whole series of articles, even in the mainstream press now, about ethics and AI, fairness in AI, bias in AI, and I’m just curious, do you think that ethics is the right way to frame a conversation about artificial intelligence? 

Meredith Whittaker: Love this setup. 

Kade Crockford: That was an alley-oop.

Meredith Whittaker: Yeah. 

Kade Crockford: Actually. 

Meredith Whittaker: Thanks. Um, a dunk, on our play being back in the game.

No, no, it’s not. I think oftentimes ethics is a smokescreen that avoids conversations about real accountability and liability and topics of power, right. I think you are seeing a kind of rapid adoption of ethical principles, the formation of ethics boards, you know, Davos had a whole section on ethics this last year, right. 

And you’re seeing that in a way because ethics, you know, it’s not pegged to anything, right. You’re not even talking about human rights law or anything where there is sort of a mutual global understanding of what we mean, what it means is generally be good. Right, and a lot of companies adopt very broad ethical principles, sort of with the promise, you know, the implicit promise, that they’re going to self-regulate, that they’re going to self-govern. 

And these I think serve as kind of a convenient proxy for some of the real, you know, I will say regulatory interventions that we need to make sure that we can validate the claims made about this technology. To make sure that the incentives that are driving a massive for-profit multinational corporation are aligned with the interests of the public who are going to bear the harms of this technology if it fails. To make sure that we can you know, test and sort of deliberate before these technologies are deployed as you know, basically on experimental test subjects often without their knowledge in context from health care to education and beyond. 

So I think ethics is not sufficient. We need regulation. We need significant accountability. We need to make sure that the people who are benefiting and profiting from these technologies have liability for their failures and for the harms that they commit. And I think that’s going to frankly require a pretty radical restructuring of the way that the technology industry functions right now.

Kade Crockford: Yeah, people get frustrated with me I think when they invite me to these kinds of events. I’m like, oops, you messed up. 

Meredith Whittaker: You too? 

Kade Crockford: Yeah because I always want to talk about politics. And you know, people say well, you know, we’re here to talk about technology and ethics and you know, bias and machine learning, and those are important conversations. 

So let me give you an example of what I mean here. Joy Buolamwini is a researcher at the MIT media lab who has done some really important research that is covered in pretty much every article you read about facial recognition in the United States, because she found, as a result of actually just playing around with some commercially available face recognition APIs, that a lot of those technologies couldn’t identify her face.

She’s a relatively dark-skinned Black woman. And she actually had to put on a white mask in order for these technologies to see her. So she started digging around and did a pretty rigorous peer-reviewed study that found that a lot of the commercially available face recognition products that are on the market today, in use, some of them by law enforcement, including Amazon’s Rekognition product, are highly inaccurate when they’re examining the faces of particularly Black women, dark-skinned women, even though they’re very accurate when looking at the faces of white men. 

So, you know, my point to introduce politics here is not to say that bias is not a problem in AI, it’s definitely a problem, but it’s insufficient to merely think about the problem of ethics and AI as a bias problem. Because even if face recognition products developed by Amazon and Microsoft and the rest of these companies were 100% accurate, we would still have a whole lot of questions about whether they ought to be used in various contexts. 

And it’s those threshold questions of, as I said in the beginning, what AI will we build? Right, I can imagine if we lived in a radically different society that we may have a democratized technology that would try to envision and plan out a kind of Dutch approach to transportation in cities like San Francisco and New York and Boston where you know, at least on the East Coast, we’re experiencing real transit crises in pretty much every metropolitan area.

We’re not mobilizing our technological know-how to imagine for example, how to get rid of cars, right. Or exactly what type of public transit needs to be built to effectively move people and goods in urban areas. Instead corporations that have a vested interest in making as much money as they possibly can, like Amazon, are building technologies that are effectively technologies of control, technologies that have the impact of further centralizing power, exacerbating existing inequalities, certainly not addressing, you know, the crisis of basically an informal system of racial apartheid, which we’ve had in this country for a long time. 

So face recognition, bias is a problem yes. But there are a number of other problems there, so I want to get into a couple of them. How does racism creep into the face recognition problem? And Joy Buolamwini’s work and others have shown that indeed, there’s built-in technological bias in a lot of these technologies, but even if we solve that problem, we’re still going to be dealing with racial justice issues at every stage here.

The second way that racial bias creeps in is when you examine the databases that law enforcement is using to compare images of people against when they, for example, have a still photo from a Walmart security camera of somebody suspected of stealing diapers. So a police officer takes a still image from that surveillance video and feeds it into what? A mugshot database. 

So again back to the problem of historically racially biased policing in the United States. That mug shot database looks a lot browner and blacker than the general population. This has the impact not only of exacerbating again, law enforcement’s historically, you know, biased obsession with policing Brown and Black and poor people, but also weirdly enough has this sort of new and ugly impact of further occluding people like me, white people, from law enforcement’s gaze. If I was the person who stole those diapers from Walmart, I wouldn’t be caught by a face surveillance system if they were only checking against a mugshot database, because I’ve never been arrested. So that’s the second way that racial bias creeps into that system. 

And the third is just really obvious. You know, I’m imagining a world if we don’t stop it, which we intend to by the way, and I hope we have your help. I’m imagining a world in which, and this is already happening in places like China, where law enforcement officials have some type of Google Glass or like AI enhanced glasses or even contact lenses that they’re wearing, and every time they walk down the street every person who passes them by is scanned by the face surveillance algorithm that’s built into those glasses and then automatically connects to a series of commercial and law enforcement databases that populate a risk score for every person who walks down the street. 

Where are police going to be wearing those goggles? You know I come from the city of Boston. Not sure who in the room is familiar with the community around Boston, but there’s a town called Brookline right next to Boston, which is very very wealthy. Largely white. And my guess is, police will never use face surveillance technology there in the way that I’ve just described, but in a poor Black neighborhood that is already over-policed, where law enforcement for many years now has invested significant resources in adopting technologies to enhance surveillance and control of marginalized and historically oppressed populations, those technologies are going to be used right away and to great effect. 

So those are three ways, even if you deal with the problem of the racial bias inherent to the technology, which I should say companies have a vested interest in doing, right. I mean they want to sell a product to law enforcement that is going to be able to identify Black people. You better believe they do. So, you know, that’s an externality I think, for the company that they want to take care of. Those two other problems though are more systemic political problems that merely addressing, you know, the fairness or bias question within the algorithm does not come close to fixing.

Meredith Whittaker: Mhm. Yes. I mean I want to get back to sort of thinking about who gets to make these technologies and who gets to use them on whom. Because I think you’re touching on a lot of that in this example, right. You know at this point we have about five to seven companies, it’s debatable, but a scant handful that actually have the resources required to create the type of AI we’re usually talking about sort of, you know soup to nuts, that, you know, from building it to deploying it at large scales you know, in different contexts. 

And you know, those are the companies that have kind of data monopolies that have you know, advanced computational systems that are extremely expensive to maintain and that are able to pay like basketball star signing bonuses to a sort of rare AI talent who is required to sort of write the kind of primitives for AI systems. Right, and it’s not a market that other people can break into. That’s just real. 

So what we’re talking about is a couple of actors that are you know, traditionally very very non-diverse. So if you look at, you know, the stats, I think 10% of AI researchers at Google are women. I think at Facebook it’s fifteen. Eighty percent of AI professors are men. If you look at the stats for people of color, I think 2.5 percent of full-time employees at Google are Black. I think it’s 4% at Facebook and Microsoft. And if you look at some of the leading AI conferences, an extremely brilliant AI researcher who works on machine vision, Timnit Gebru, found that at one of the leading conferences she counted eight Black people among 8,500 attendees in 2016. 

So I think you know, I don’t need to belabor this point, but these are extremely non-diverse places. And if you look at a number of the accounts coming out of places like Riot Games and Google and others, you’re also looking at places that are, you know, extremely hard for women, gender minorities, and people of color to work in. 

So you look at that, and then you look at sort of this concentration of power, and you recognize that the people building these systems, you know, with ill intentions or with good, are a very narrow homogeneous set of the population. The people benefiting from these systems, the people who get bonuses when they sign that contract, you know, when Amazon signs the contract with police, are a very specific subset of the population that often, you know–it’s beyond not being familiar with the contexts and the sort of, you know, diverse lives that these systems touch, they have no idea they should be familiar with them. Right? 

So I think there really is a crisis here, where we need to open up, you know, these questions of politics and power and begin to bring way more people into the room. And this is not you know, I think educating, you know, making sure STEM education is readily available, yeah, that’s fine, but I think AI needs to break out of a sort of technical mode. When we talk about, you know, police injustice and the centuries of sort of racially biased policing, we need to have people familiar with those histories in the room. 

This is not actually a technical problem. This is a political problem. It’s a problem of power and it’s something we need to treat as such. Which means that you know, it is convenient for a lot of tech companies and others to sort of focus on you know, fixing this as a technical problem, sort of tuning the knobs of bias and making sure that facial recognition systems, voila, they recognize everyone, no bias, right.

But as Kade pointed out eloquently, like this is a much bigger problem, and to address it we need to sort of break open the rooms in which these decisions are made and make sure that the people who are most at risk from these systems have the loudest say. 

Kade Crockford: Yeah, you know one of the–I’m really excited about what happened in San Francisco with the face surveillance ban because I don’t want to live in a world where everywhere I go I’m tracked by the government, you know, using my face. And everyone is tracked everywhere they go. To abortion clinics, substance abuse clinics, to visit a friend, to cheat on their partner, whatever, that’s a world that I don’t think anybody really wants to live in. It frankly flies in the face of everything that the Bill of Rights and the United States is supposed to stand for. 

But we’re not asking necessarily the threshold question of whether we ought to adopt certain technologies. What’s so exciting about the face surveillance ban that we intend to replicate in Massachusetts and all across the country, is that it throws a wrench in the gears of technological determinism, right? I mean, I think we’ve been told by fiction, by Silicon Valley, that it will be built, and it will come, that’s inevitable. I hear it all the time, talking to people about the campaigning the ACLU is doing on this very issue. “Well, you know, it’s inevitable. It’s inevitable that face surveillance is going to be used everywhere. After all the technology exists, so it will be deployed.” 

And that’s a frankly frightening approach to thinking about the future. We can certainly live in that world if we choose to, or we can choose not to. And I think we really ought to choose not to, and we ought to start thinking about technology as a system of power and a system that too often centralizes and replicates the worst parts of our society and those inequities that we’ve been talking about all night. And thinking about democratizing technology and what that would look like. 

So, you know the work that we’re doing on the face surveillance campaign is important for practical and material reasons. It’s also important I think philosophically, because I think it really woke up Silicon Valley, and the whole country, to sort of feel the power of what democracy actually looks and feels like when people intervene in these systems–government, technology systems–and say, “you know what, it’s fine that you made that, but we actually don’t want it here and we can make that decision for ourselves,” which is great.

So I want to ask you a question, Meredith, which is where do we go from here? 

Meredith Whittaker: Yeah. I mean I want to ask you that question too, frankly. I mean, I think we have a lot of places to go and we need to go to all of them. But you know, I will talk about one mode of resistance, so to speak, that I think has been surprising and gratifying and that I think complements a number of other fronts we’re going to have to develop on this problem. 

And that is sort of the organizing of tech workers and the sort of burgeoning tech workers movement, which, you know, I have been intimately involved in along with thousands and thousands of other people. But this is really people inside these companies who have a frontline view to what is being built, what the logics of these systems are, who, kind of similar to the city of San Francisco, are saying like “yeah, I know tech, and I’m going to say no to this. I don’t want to build it. I don’t want to be complicit in, you know, building technology that will then get applied in contexts like the drone war or facial recognition for police.” 

You actually saw a number of Amazon workers signing on to open letters asking Jeff Bezos to stop selling the system to police. So, you know, I think the people inside these companies are doing a lot of really brave and really good work to stand up and begin to put limits on what they are willing to participate in, and to call out publicly some of the real ethical concerns with you know, unrestrained capitalism dictating the direction of these incredibly powerful technologies with almost no accountability.

So that’s you know, that’s one part of it. I also want to sort of name-check kind of broader resistance that I don’t think gets folded into this narrative enough. And that is that I think we’re already seeing movements against automation. You’re seeing the Uber drivers who are sort of an automated atomized workforce managed by a centralized AI driven platform, right. So you’re seeing Uber drivers coordinate a strike right before the Uber IPO.

You’re seeing schoolchildren in Brooklyn and in Kentucky, you know, stage sit-ins and walkouts against the sort of Facebook-branded educational AI that had been implemented in their classrooms and that was sort of making learning into like drudgery, right.

You’re seeing Amazon workers in Minneapolis strike against the sort of algorithm that was setting the sort of terms of their employment in ways that were, you know, extremely grueling and inhumane. 

So I think we need to begin to look at, there’s a lot of social movements. There’s actually a lot of resistance. And if we scratch the surface we’re seeing people, you know, intelligently and effectively push back against the interpolation of these systems into their lives, their workplaces, and their schools. So, more of that. 

Kade Crockford: Yeah, and I just want to make a plug for the power of law in this context. I’m told, as I said before, all the time, that the widespread deployment of face surveillance is an inevitability that we ought to just figure out how to live with, and that’s just plainly wrong. 

I can give you a really good example. In the 1960s, this country had a major debate about what we ought to do with law enforcement’s desire to wiretap our phone conversations. And instead of waiting around for the Supreme Court to deal with it, Congress and State legislatures all across this country passed wiretap laws. Imagine that. 

And those wiretap laws are some of the strongest privacy laws that we have today in the United States. They require something that is basically a super warrant. It’s a very high standard that law enforcement has to meet to actually listen to your phone conversations in real time. And those laws did not just have the impact of protecting our privacy when we talk to each other on our phones. They actually changed the future of technology in a very crucial way.

So there are 50 million or so surveillance cameras all across the United States. And you may have never thought about it this way, but surveillance cameras in banks, surveillance cameras that are owned by the government, that are on light poles all around cities in this country, they don’t record audio. And the reason for that is because of the wiretap statute. The wiretap statute in most states, in many states, in my home state of Massachusetts, says that you can’t secretly record someone talking. You have to do it publicly. 

And for that simple reason, one law had the impact of changing an entire industry. So when companies started manufacturing CCTV cameras, they made those cameras without microphones. And I think that’s a really important example of how law can actually be hugely influential over the development of commercial technologies.

So please don’t let anybody tell you that there’s nothing that we can do. There’s a whole lot that we can do. The problem frankly in this country is that we have a broken political system. So, you know, we have a system in which DC is subject to regulatory capture, for a variety of reasons. We’re looking at some serious struggles right now in Washington DC where they’re finally considering what they’re calling privacy legislation. You know, it’s probably not going to happen over the next year or so. 

But those conversations are subject to intense pressure from lobbyists from Silicon Valley who are pouring millions and millions of dollars into DC to make sure that whatever lawmakers do there doesn’t interfere with their ability to maximize profit off of these surveillance databases that they have created over the past 15 or 20 years. That’s not necessary. It’s not inevitable. It’s not technologically inevitable. 

The only thing that’s required of us is to change the political system so that those lawmakers of ours in DC listen to the people and vote in the people’s interest instead of in the interest of those major corporations. And I think that that same line of thinking and reasoning applies to so many of the conversations that we’re having about technology today. 

So, you know, I get invited to these rooms and people want to talk about tech and I just want to talk about power and politics and say, you know, “if you’re concerned about ethics what you really ought to do is start organizing and make sure that you vote,” frankly.

Meredith Whittaker: Yeah. Yeah. So let’s say we can push these politicians to vote in the people’s interest, which I think you do that by making it more costly for them not to, right, which means you probably do need to organize, but it worked in San Francisco. What are the laws you want to see? Like give me your top. 

Kade Crockford: So yeah one really important one, you know, some people have asked us at the ACLU, why does the San Francisco face surveillance ban only cover government? Well my response to that is, how do you eat an elephant, right, you eat it one bite at a time. I’m not interested necessarily in having to fight the police and Amazon at the same time. 

But there’s a state law in Illinois called the Biometric Information Privacy Act, which was passed a few years ago, and that law ought to be Federal. Basically what that law says is that private companies cannot collect your biometric information without your opt-in consent. It’s a pretty straightforward simple piece of legislation. But what it’s meant is that companies like Facebook, Google, Amazon, effectively cannot deploy the types of technologies that you see in the film Minority Report.

Right, where you walk into a mall and some, you know, automated robot thing says, “hello Kade. How did you like the size extra small underwear that you bought last week?” That’s not possible in a world where we have a Federal Biometric Information Privacy Act that restricts the collection of biometric information to very narrow circumstances in which you affirmatively opt in. And by the way, that doesn’t mean by reading the terms of service and clicking a button to say I accept. You’re not allowed to force people to agree to those terms to engage in business with you. So that’s one law that I think really ought to become federal. 

Another is, we would like to see a moratorium on government use of face surveillance passed in DC. It is not inevitable that we live in a world where government agents in, you know, some secretive fusion center or bunker-type office have the capability to again, track not one person’s every movement, but every person’s every movement and association and habit, and not on one day, but on all days. We don’t have to live in that world and we can make conscious political decisions to force our representatives, our elected officials, to choose a different path. So I would really like to see that happen. 

And then finally, in these privacy debates that are taking place in Congress, I think it’s really critical for folks to know that the strongest privacy law that has been passed in the past five years has been passed in the states. DC has not passed an electronic privacy law since 1986. I was three years old in 1986. Technology that we use today was not even close to existing in 1986. I believe one gigabyte of storage in 1986 cost $75,000. It was a radically different world. 

So DC, we can’t wait for them to make the kind of change we need. We can pass laws at the local level, which is what happened here in San Francisco. We can pass really strong state law. Just last week in Maine, the Maine State Legislature passed the nation’s strongest internet consumer privacy law restricting internet service providers–so that’s Verizon, Comcast, Spectrum, Time Warner–from monetizing the personal information and sensitive data that they’re able to harvest through the use of Internet services. 

That, some of you may remember, was a rule that the Obama FCC implemented right before Obama left office. Unfortunately when Trump and the GOP Congress took over in 2017, it was one of the first things that they got rid of. So now unfortunately, in the United States if you don’t live in Maine or a couple other states, you can pay your ISP for Internet service every month for the pleasure of them turning around and monetizing your data in the same way that Facebook and Google do. So, we ought to, you know, make the Maine statute national again, a federal law as it was before Trump killed it. 

And then just really in the weeds a little bit, the number one thing that Google and Facebook are looking for in these debates over consumer privacy law in DC is to create a preemption rule that prohibits States from passing any privacy law, consumer-facing privacy law, that is stronger than the federal rule that they want to create, which is obviously going to be very weak. It is critical that we ensure our Representatives do not vote for any legislation that creates that ceiling in DC, saying that we don’t care what you want in California, we don’t care what you want in Maine or Massachusetts or Nevada, what we have in DC is the strongest rule you’re going to get. 

So those are the things that I would say for now. Yeah. And how about you? 

Meredith Whittaker: Oh, I have a laundry list. I mean I would co-sign–I think one detail here is the reason companies don’t want these sort of, you know, strong state privacy laws is because they work. 

And you know, the Illinois law–you know, it’s a small state, and you’re talking about a set of companies that are global, that scale their technologies to billions of people–why would one state’s law perturb them? Well, it perturbs them because building bespoke systems for every state and geography is really difficult, if not impossible in some cases. 

So the logics of these companies are, we build one thing and we deploy it to everyone. And things like GDPR, which is the European Privacy Law that passed recently, things like the Illinois Law, things like the San Francisco ban, are actually really meaningful because they shape the behavior of global companies and you know, can prevent the sort of untrammeled rollout of some of these techs.

So I think you know, we’re not looking at a context right now, in my view, where you’re going to see much positive movement on the national level, right? But what you are seeing is San Francisco, what you are seeing is Maine, what you are seeing is Massachusetts considering a ban that is similar to San Francisco. So I think continue pushing there. 

I don’t know, I would add to my wishlist, I think you know, when we look at these tech companies right now, one of the huge issues we have is that most of this technology is trade secret. So the stories we know about this tech, the things we think it can do, the sort of idea about what AI is, are by and large written by the marketing departments of the companies that are interested in selling it or monetizing it in some way. 

So I think we need you know, we need to think about you know, what that means and we you know, I’m looking for legislation that would require waiving trade secrecy so that people can review and audit these technologies. Super basic, right, like kind of surprising, I hope, to some people that we don’t already have that. That if you claim that this technology can do diagnostics, it’s not tested in any rigorous sort of FDA-approved way, right? But. 

Kade Crockford: You mean like a finger-prick blood testing technology? Something like that? 

Meredith Whittaker: Yeah, it’s like the Fyre Festival but for medicine. 

Kade Crockford: Yeah, something like that. Yeah, whatever. No big deal, right somebody might have died, but I’m sure it’s fine. 

Meredith Whittaker: But um, you know, just, you know, it all nets out in the disruption. 

Kade Crockford: Yeah. 

Meredith Whittaker: I think you know, I think we also need frankly, strong protections for whistleblowers within tech. One of the reasons we know…Like we know what we know because people took a huge risk to tell us about it. We would not have known Theranos was a scam. Right? We would not have known about Google’s relationship with the DOD and their plans to build an AI for drone targeting.

We would not have known about a number, you know, we would not have known about some of the Amazon stuff if it hadn’t been, you know, either disclosed by Freedom of Information requests or you know, had whistleblowers within the company. 

So again, when we are dealing with these immensely powerful corporations, with the walls of trade secrecy that are erected between sort of the consumer and the market that is fed the marketing, and the reality of what these systems do, who they’re actually made to benefit, we need to support anyone who’s going to begin to sort of, you know, allow that information to be made public. Because you know, we should not have corporations driven by shareholder value making determinations that you know, profoundly affect the well-being of millions if not billions of people. 

Kade Crockford: So, you know, when the Project Maven fracas was in the newspaper…

Meredith Whittaker: The incident. 

Kade Crockford: Yeah. The incident, if you will. So Project Maven, for those of you who don’t know, was a project that Google was working on with the US military to develop AI for drone targeting. And Meredith was one of many Googlers who rebelled against this and actually got the project shut down, so props to Meredith.

Meredith Whittaker: I think some of my co-conspirators may be in the room tonight. But yeah. 

Kade Crockford: You know there are people who said in response to that, including I believe Ash Carter, the former defense secretary, in an op-ed in the Boston Globe today, “dear young Googler, or dear young tech worker, aren’t you worried about China?”

Right. I mean, so there’s this argument that is marshaled when people raise concerns about human rights implications of technologies like these, and they basically are concern-trolling, and saying well, you know, “if the United States isn’t developing these tools, if the United States isn’t making the smartest drone targeting technology, aren’t we just all going to be speaking Chinese in 10 years? And how do you feel about that?” So what do you say to those people? 

Meredith Whittaker: Mmm. This is a really big topic to unpack as we come up against time. But I think you know, I think one of the things that I look at is, you know, if you look at China, you have a central party, you have a government that is very honest about what they want to do with this technology. Right, that is, you know, we are building, you know, face-tracking to be able to know where everyone is and better calibrate their social credit score, which, you know, the actual dimensions of the social credit score are a little hazy and I’m not going to get into it. 

But you know what we have in the US is, you know, similar technology under development, but it is again sort of shrouded, right. We don’t actually know who Amazon is selling their facial recognition or other AI services to unless we have Freedom of Information or we have whistleblowers who tell us. So in a sense, you know, the distinction there is a little hazier than it may be made to seem. 

And I think again this narrative is deployed, it is always deployed, when you know, people make the reasonable request for more democratic decision-making around whether or not these technologies should be applied for more deliberative and ethical development processes and for more accountability in the corporations that are currently dominating the AI space. 

And I don’t, you know, I think we can also get back to the fact that most of the stuff doesn’t actually work as intended. So do we want, you know, is it going to lead to sort of international dominance if, you know, people deploy a lot of AI that requires that precarious labor to run and isn’t actually as effective as the marketing may seem? There are a lot of ways to address that narrative, but I think it is, you know, it’s being used to silence a really urgent conversation around the sort of ethics and power dynamics of these techs. And Ash Carter, the architect of the Iraq War, making you know that argument is classic 2019. 

Kade Crockford: Specious at best. Yeah. So, okay. So having demolished in part the argument about, you know totalitarian race to the bottom with China, why don’t we take some questions from the audience? 

Meredith Whittaker: Yeah, shoot.

City Arts & Lectures: This question is coming from the back and center of the orchestra.

Audience Member 1: Should I stand up? Hi, my name is Erin. I’m a recent law graduate, so I’ve written notes about my questions here. I apologize. It’s kind of like a two-part question that melds into one larger question. 

So the first is, as a law student, one of the things that’s like deeply disturbing, I guess I’m a graduate now as of three weeks ago, but one of the things that’s very…

Meredith Whittaker: Congrats.

Audience Member 1: Thank you.

Very disturbing to me is a fetishization of formalization, which I see executed in legal decision-making: preferring things like algorithms because it diffuses liability. 

So one specific area I work on is child welfare. Nobody wants to be the person who takes a child from their family and nobody wants to be the person who made the decision not to take a child from their family and then that child ends up dead or whatever. 

The problem is AI doesn’t exist in a vacuum. AI exists as an alternative to human intelligence. And so we’re not asking, is AI getting it right, we’re asking is AI getting it right more often than humans are getting it right? 

And I think, just based on the empirical data we have right now, we should think that humans are doing a better job. We want caseworkers who are responding to individual families, in the case of children for example, and not a data set that’s fed to an algorithm, and California is one of many, many states that uses algorithms that are trade secrets, so that when a decision is made about a family’s placement, there’s no legal recourse for figuring out why that particular decision was made. 

So I want to know: do you think that, full stop, AI is never going to be better than a human person who’s sitting on the other side of the table and making this decision? How do you judge AI decision-making versus human decision-making? Is AI just always going to be worse? Or do we change the laws around proprietary algorithms and try to make this better? 

And then, as someone who’s also a doctoral student in philosophy, I want to make a suggestion. When I interact with my STEM colleagues, I find their illiteracy in the humanities shocking. And whether or not you think something like making competency in Foucault and familiarity with–

Meredith Whittaker: Ouch.

Audience Member 1: The panopticon necessary to graduate with a degree in computer science, as necessary as familiarity with Java? I taught myself algorithms, as someone who has no science background, because I was like, why does everyone– sorry, sorry.

Meredith Whittaker: I mean, there’s a lot to unpack there. On the first question, who gets to define “better” is the core of that question, right. 

And I would look at Virginia Eubanks’s work, which looked very specifically at child welfare algorithms in Allegheny County, Pennsylvania. I would look at Dorothy Roberts’s recent paper that engages with that work. She’s a brilliant legal scholar. 

But to get into some of these gnarly questions, I will also add that a lot of these systems are deployed as a sort of mask for austerity policies. So you will see an algorithm introduced in social welfare systems at the same time that massive budget cuts are happening. So again, it’s the same sort of diffusion of responsibility into nothingness, a kind of perfect bureaucratic tool. 

But I can’t answer whether it’s ever going to be better. Again, that’s a subjective decision, and in the case of child welfare, it needs to be deliberated with the people whose families and lives are at risk. 

And there’s a much longer conversation. Do we want to punish engineers by giving them Foucault? Sure, let’s do it. But again, that’s placing a lot of agency in individual engineers and making this a problem of ideas and not power. I think we need to look at the structures within which this pedagogy is forged and these people are working, beyond simply thinking that inoculating them with some old radical ideas is going to change those systems fundamentally.

Kade Crockford: Yeah, I mean, I would answer it slightly differently. I’ve heard a lot of calls for, and have encouraged, technical universities like MIT and Stanford to have some of their computer science students take required courses in history or anthropology, the social sciences.

But there’s another way of looking at it too, which is to say that we should probably start teaching young people, as young as elementary school, not necessarily how to code, but how machine learning systems work. I think that’s as important today as sex education, right? Because those young people are entering a world in which so many decisions made about them are going to be mediated through these technologies. So increasing the amount of technical literacy among the general population, I actually think, is just as important as educating the engineers in the humanities. 

Another thing that I thought about while listening to your question is the problem that we ran into in Massachusetts a few years ago with the nationwide movement to abolish cash bail. So, I’m wearing a t-shirt right now from the Massachusetts Bail Fund. 

Here’s the idea: tons of people in this country are locked up pretrial. 70% of the people locked up in Suffolk County Jail, which is where Boston is, are there pretrial. They’re people who haven’t been convicted of any crime, and that’s pretty typical throughout the United States. Those people are just there because they’re poor. They can’t pay bail, so they can’t go home and await their trial. 

That has really negative consequences for the outcome of people’s criminal cases. If you show up to court in your street clothes, you’re much more likely to go home than if you show up from custody, from jail. So we have to deal with the problem of cash bail. And what advocates on the left are pushing for is simply the abolition of cash bail.

Don’t hold somebody before their trial; they haven’t been convicted of anything yet. Send them home. Unless you have a really, really good reason to believe that person is a threat to their community, they ought not to be locked up before they face their trial. So that’s what people have been pushing for: a real systemic reform.

What we saw in Massachusetts was that the response of the government, of elected officials, to that pretty radical idea of a true systemic reform, one that would frankly almost empty the jails in Massachusetts, was to say, “well, you know, we’re not exactly comfortable with the idea of eliminating cash bail, but what if we mandated the use of risk assessment instruments instead?” And that would merely enable the incarceration of those same people under a different pretext, under a different justification, right. 

So I’m concerned about the application of these technologies in lieu of demands for systemic, radical reforms that would truly change the way that systems deal with people in our society. And I think that probably applies to the child welfare context a little bit too.

Meredith Whittaker: Yeah. Yep. Boo. 

City Arts & Lectures: This question is coming from the front. 

Kade Crockford: We beat it back, by the way. They did not implement mandatory risk assessments, but we still have cash bail, so. 

Audience Member 2: Hi, tech worker in SF. And I love this talk, this is really, really great. 

My question is: with things like open source, which year after year makes it easier and lowers the barrier to entry for tech workers to get their hands on this type of technology; with the hardware that gets better year by year and goes into our pockets, allowing tech-minded citizens to do things like facial recognition on their own; and with the constant year-over-year introduction of more and more data onto the internet that could be mined by any of those tech workers? A lot of the concerns that you talked about tonight are corporation-facing. How do we think about the open-source movement that is putting that code out there, the tools that we are given to analyze that data, and the data itself that’s going out onto the internet? 

And how do we think about that in the coming years, to keep this problem from getting distributed, if you will, into the hands of citizens who are potentially going to use it in ways just as bad or malicious as the corporations might? 

Meredith Whittaker: I mean, that’s a good question. I think there are two elements there. The first could be construed as: don’t people have the same access to resources that corporations do? And I think they get sort of the dividends of corporate resources, right? They can rent infrastructure from Amazon or from Google or from Microsoft, but no one, and this includes startups, this includes almost anyone, actually runs their own data centers or their own infrastructure anymore. So any AI–

Kade Crockford: Even the US government. 

Meredith Whittaker: Even the US government.

Kade Crockford: They rely on Amazon. 

Meredith Whittaker: Amazon is the CIA’s infrastructure. And much more. Like, that’s real. That’s just true. So if you scratch an AI startup, even, you see an Amazon contract, right? You see a contract with Google, you see a contract with Microsoft, because again, it all accrues back to those who have this infrastructure, which is expensive, and you can’t just bootstrap that if you want to enter the market. 

Same with data. The companies with vast social market reach, the companies that install apps on my phone, and companies like Amazon are the ones that have these constant data flows. You can’t just make that. 

So what you can do: there are a bunch of consumer technologies that let you reuse an AI model that might have been trained by one of these companies, that let you rent their infrastructure; there are data services that let you buy some data if you want to train something; there are some open data sets. But that is not the same thing as an individual having the same power as a corporation. 
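To make that concrete, here is a minimal sketch of what “reusing a model that might have been trained by one of these companies” can look like in practice, assuming Python and the torchvision library with its ImageNet-pretrained ResNet-50 weights (none of which came up on stage): the training data and compute all happened elsewhere, and you simply download and call the result.

```python
# A minimal sketch (an editorial illustration, not something discussed on stage),
# assuming Python with the torchvision library: reusing a model that someone else
# trained, with data and compute you don't have, rather than training your own.
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT        # ImageNet-pretrained weights, produced elsewhere
model = resnet50(weights=weights)         # reuse the finished model
model.eval()

preprocess = weights.transforms()         # the exact preprocessing the model expects

# Stand-in image tensor (3 x 224 x 224); in practice you would load a real photo.
image = torch.rand(3, 224, 224)
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))

print(weights.meta["categories"][logits.argmax().item()])
```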

And ultimately, whatever the bugs and biases and terms of service and fickleness of product direction a corporation has, you’ll be at the behest of that. In terms of whether this is going to enable some pretty gnarly activity by individuals who can then get their hands on these tools? Absolutely. 

I think we just saw something pretty disturbing. There was a developer who used a commercial facial recognition API to build a system that allowed people to enter the image of a person, a woman, and check it against the faces of people who’d appeared in pornography. And this was marketed as, like, check if your girlfriend had been in porn, right?

That’s buildable. You can get a commercial face recognition API, you can rent the service, you can get a data set of people who’ve appeared in porn, and you could make that, right? And there are a number of other disturbing possibilities. But yeah. 

Kade Crockford: I just want to make another plug for the law here. You know, law is not everything, but again, the example of the wiretap statute I think is really relevant. There’s a relationship between law and culture that’s really important. Obviously the law cannot constrain all bad behavior. Murder is illegal, people still kill each other, right? But generally it’s frowned on. You know, people aren’t super psyched about it. 

So I want to make an argument for a combined effort of better law and a thoughtful culture around technology. If we decide as a group to create some really great privacy laws that prohibit these types of technological forays into creepy stalking on the internet or whatever, we then have to pair that with a cultural sensibility that excludes that type of behavior from what we consider to be acceptable. Right? 

And you know, that may be kind of like an amorphous wishy-washy answer, but I actually think it’s the right one. That, you know, if somebody does something like that with a technology, even if it’s illegal, the rest of us say, “come on man, really? Like that’s not cool. You can’t be doing that kind of thing.” And that person doesn’t have a lot of friends. 

Meredith Whittaker: Yeah. Drag ’em on Twitter.

Kade Crockford: People don’t like them. Yeah. Like in the wiretapping context, people can still secretly record me in Massachusetts, but not a whole lot of people do it actually. And it’s because it’s illegal and culturally unacceptable behavior.  

City Arts & Lectures: This question is coming from your right. 

Audience Member 3: Right here. How do we get more tech workers involved in policymaking and lawmaking, and what inspires a worker with a cushy tech job to get involved in activism?

Meredith Whittaker: I mean, I’ll answer the second one first. We’re in a moment where it is very clear that tech is political and that doing nothing is also a political stance. We’ve seen these companies move from a sort of sunny disposition, which also had some problematic behaviors, maybe 10 or 20 years ago, to basically building the technical infrastructure of our core social and governmental institutions.

And the way that’s being done, the ethos under which that’s being done, I think is really disturbing to a lot of people. So I can’t say what the magic recipe is for getting people involved. 

But I think we have had a couple of years where every month there has been a litany of tech scandals, from Cambridge Analytica, to facial recognition sold to the police, to biased algorithm after biased algorithm. And it’s very clear that the PR platitudes being used by the corporate PR departments to address these, and the kind of ethical fixes and bias-busting toolkits, are simply not sufficient for the profound responsibility that these companies now have. 

So I think people get that, right? They’re inspired.

How to get them more involved in law and policy? In a sense, they’re already maybe too involved, in that they’re writing policy but calling it code. That’s a little bit of a joke, but in a sense it’s about politicizing the work and clearly drawing the lines: the database you’re building, the model you’re training, the product you’re designing has clear political implications. And if you’re not engaging with the specific, nuanced policies and laws and contexts and norms within which that’s going to be deployed, you’re actually only doing part of your job.

City Arts & Lectures: This question is coming from the front and center. 

Audience Member 4: Hi, so, in an uncharacteristically positive question: we certainly spoke a lot about the limitations of AI, but there’s also kind of a silver lining to all of this, which is that there’s inevitably a positive side to large data sets that can be processed by an algorithm to get useful results. For example, diagnosing cancer, or better medical technology that is then reviewed by a professional, etcetera. 

It seems like in order to stem some of the more nefarious uses of this technology, we have to find directions where it can be both beneficial to society and also lucrative. What do you see those being? 

Meredith Whittaker: I mean, it’s a really tricky question, because frankly I think those two things are actually in tension. I think there are possibilities. If you could collect perfect oncological data, and you could feed it into a system that was cognizant of the specific patient populations you were treating, and you had people doing rigorous testing of those systems, which doesn’t happen now, then there’s a possibility, yes. The ability to detect patterns in large data sets can be powerful. 

But all of those ingredients have to be much more thoughtfully put together. And we don’t have processes right now that are incentivizing that in any way. 

So in a sense, I think there are areas that should not be profit-driven. And some of them are these sensitive areas like health care, right, like social services, right? You look at the insurance sector, and health insurance and health care in the US are frankly a disaster, because we have something that should be a public good, where everyone should be cared for and everyone should receive health care as a right, and it has been turned into a commodity, and it is a mess and people are dying. 

So I think those things are actually in tension, yeah. And let’s not even get into the data itself: those health records are fragmented, non-commensurate data collected under one insurance billing code or another, and it’s going to be mushed together, called a data set, and fed into a system, when it may not even be commensurate. 

So I’m in a little bit of a rant, I guess. But yes, there are hypothetical possibilities of benefit that would require deep structural change to realize in a way that is actually validatable and robust, and that’s my view on it. 

Kade Crockford: Yeah, I mean, so I think it’s hard to answer the profit part of your question, because at least in my view, many of the most productive and important parts of our society are the aspects that are not profit-driven.

So for example teaching, right? I mean, there’s not a whole lot of profit in that. It’s incredibly important. So if you remove the profit part of your question, I can imagine a lot of uses of machine learning that would be hugely socially beneficial. 

There’s a sort of punk project that a friend of mine in New York developed, which is kind of a play on predictive policing. It was a project for The New Inquiry magazine. Instead of a predictive policing system aimed at predominantly poor Black and Latinx people, which is what all the systems in use by police departments today are, this system takes, I think it’s SEC data on financial crime, and tries to predict where the next financial crimes are likely to happen. You probably won’t be surprised to know that lower Manhattan was lit up on that screen. 

So yeah, that’s kind of a jokey example. But take our transportation systems, right. Trash pickup can be made more efficient with the use of machine learning algorithms. In Boston, where I live, some engineers from MIT were asked by the Boston public school system to use machine learning to come up with a better way to allocate school buses and to devise new bell times, the start and end times for public schools, to make the system fairer and more efficient.

So that’s another example of a use of the technology which I think is wholly socially beneficial. And obviously there are important questions, and accountability and transparency need to be in play anytime those systems are used to make consequential decisions for lots of people. 

But my problem isn’t with the use of data to help us make better decisions. My problem, frankly, is with the way our political system is set up to incentivize the centralization of that information and the centralization of the decision-making power over what problems we’re going to solve. I just want to get back to that. We started on that, and I think that’s really critical.

A lot of people are developing technologies for problems that we don’t really have, merely because it makes somebody some money. I think if we lived in a society where we oriented our efforts toward a more egalitarian or utilitarian or, dare I say, socialist approach, we could have really different machine learning technologies.

I was actually asking Meredith earlier, like if we lived in a socialist democracy, what kind of machine learning do you think we’d have? And the first example that I came up with was a system that would help us figure out how to move people and goods better, right? How to devise transportation networks and infrastructure that would you know, help us deal with climate change, address economic inequality, address mobility problems.

So, you know, I don’t think it’s really the technology that’s the problem. It’s the power structure.  

Meredith Whittaker: Cool.

City Arts & Lectures: This question is coming from the center. 

Meredith Whittaker: Yeah, I think we have time for one more, according to the clock I can see. 

Audience Member 5: To what extent is it practical and imaginable that people could own and control the information about themselves? Including websites they use, their medical information, their financial information, and so on.

Meredith Whittaker: So there’s a lot of talk of data ownership. This is sort of a meme, and I think it’s really attractive, right? The idea that it’s all mine and I can do with it what I will. I think it hits up against a lot of problems, because my information in a vacuum is not actually that useful to me. And how do I make an informed, consensual choice to sell it or loan it to somebody? Especially when I won’t actually know the consequences of giving up my data until my data is combined with Kade’s data, is combined with all of your data, to create a model that may then be used to make inferences or predictions or decisions that are actually harmful to me. 

So I think in the case of machine learning, this data ownership model slips very quickly. What does it actually mean when I click “yes, I will give you my data” or “no, I won’t”? How does that affect the ultimate decision of a system that has yet to be trained with my data, or of a system that’s going to see my data and make a decision about it, one that may or may not impact my life? 

So I don’t think it’s the solution to the problem that we’re facing, because ultimately what we’re talking about is not systems that are using my data and making a decision about Meredith; they’re using all of our data and then making a decision about how similar I may be to… 

Kade Crockford: About someone like Meredith. 

Meredith Whittaker: Someone like Meredith. Probability of Meredith: 90%. So a lot of the time we’re looking at collective harms, and we’re looking at the mass quantities of data that feed these systems, which are constitutive of how they have to work.

Kade Crockford: So yeah, just to close, I would say, on that note, that in the civil rights and civil liberties context, there are certain areas in the law where we really ought not to be allowing governments to make decisions about individuals based on the patterns of groups, right? 

So again, getting back to the threshold question: should we use X technology in Y circumstance? I think in a lot of the circumstances that we at the ACLU deal with, the answer to that question just has to be no. So that’s it. No. Thank you. 

Meredith Whittaker: There you have it. On a resounding no!