City Arts & Lectures: Welcome to City Arts & Lectures, a season of talks and onstage conversations with some of the most celebrated writers, artists, and thinkers of our day, recorded before an audience at the Sydney Goldstein Theater in San Francisco.
Welcome to a new season of City Arts & Lectures. In our first fall 2020 broadcast, New Yorker staff writer, Jill Lepore, talks with KQED’s Mina Kim about Lepore’s new book, If Then: How the Simulmatics Corporation Invented the Future. A guest on our stage at the Sydney Goldstein Theater in San Francisco many times in the past, Jill Lepore is the journalist behind recent New Yorker articles that have helped us make sense of this troubling time, among them, “The History of Loneliness: Is Staying In Staying Safe?”, “The Invention of the Police,” and “Ruth Bader Ginsburg: The Great Equalizer.” On September 16th, 2020 Jill Lepore was interviewed by Mina Kim of KQED. Join us now for a conversation with Jill Lepore and Mina Kim.
Mina Kim: If you’ve ever worried about the amount of data companies like Google or Facebook, Apple, or Amazon have on you, or you’ve just felt weird about all the ways your data are manipulated or commodified, and wondered, “how did we get here?”–well, Jill Lepore says there is a beginning. In the 1960s, there was a computer program designed to collect data and use it to predict human behavior, called the people machine. It was put out by a company with the awkward name Simulmatics, and Lepore wrote about it: this forgotten company that she says invented the future, this world that we now know, where algorithms and predictions of human behavior are the norm in politics and other fields.
Lepore is a Harvard University historian, New Yorker staff writer, and bestselling author of many books, including These Truths. Her new book on the Simulmatics corporation is If Then. Jill Lepore, it is great to talk with you for City Arts & Lectures.
Jill Lepore: Hey, thanks so much. Great to be here.
Mina Kim: I did not realize how refreshing it would be to read about a time when just the idea of a presidential campaign collecting data on voters, using it to predict their behaviors and determine campaign strategy, would be so controversial. I mean, it was considered so morally questionable to do this that the campaign even considered rejecting this opportunity for this kind of knowledge. It just feels so far gone, especially on the cusp of this election. And I was wondering if you were struck by that too, when you were researching all of this?
Jill Lepore: Absolutely. I mean, we take so much of this for granted and also I think we take for granted the story that Silicon Valley startups tell about themselves, and that those entrepreneurs tell about themselves, which is that, you know, they’re geniuses and they invented everything. And kind of just pulled it out of their, the tops of their heads, and sent it across the world. And there’s a real disavowal of any idea that there are origins or grandparents or founders or histories to algorithmic codes, to predictive analytics, to all the stuff that we encounter every day, kind of in a hidden way, in our daily lives.
So when I came across this story, I had that same reaction that you did. It was like, oh, it is kind of comforting to know that A, these guys didn’t invent this, and B, the way in which it got invented is useful to know. Like it kind of casts what’s going on now in a different light. And I think even in a more critical light than we might already perceive it to be.
Mina Kim: And I love the way that you used Kennedy’s campaign to really illustrate sort of the beginnings of this. And I was wondering if you could actually take us back to the Kennedy campaign and how it decided to tackle head-on the opposition to Kennedy being Catholic. And then we can kind of unfold the story of Simulmatics from there.
Jill Lepore: Sure. Yeah. So the election of 1960 was an incredibly close election, but before it came down to Kennedy and Nixon–and we think of that campaign iconically through the televised debates that Kennedy and Nixon held in the fall of 1960, the first time that had ever happened–before that, it was a big battle for Kennedy to become the Democratic nominee. And, you know, there were a number of other strong contenders, but there was also a candidate who’d run in 1952 and 1956 and who, although he didn’t declare, was expected to run for the Democratic nomination in 1960, and that was Adlai Stevenson, the former governor of Illinois.
But Kennedy went out and ran in a lot of primaries, which, you know, were optional–you didn’t have to run in them. And his big challenge early on in winning the nomination was convincing liberals that he was actually a liberal. Because Kennedy hadn’t really been a liberal, his family was very close to Joseph McCarthy, and liberals didn’t trust him for that reason. And he had a pretty weak record on civil rights, and the party was really struggling with its long-standing, very cowardly position on civil rights. So he kind of squeaked by in gaining the nomination, then named one of his rivals, Lyndon Johnson, as his running mate. And he faced a very formidable campaigner in Richard Nixon.
I mean, Nixon had already served two terms as Eisenhower’s Vice President. He had gotten himself back up on his feet after a number of scandals. He was much suspected and disliked, but he was a relentless campaigner, and a tireless one, and a great debater, known to be a great debater. So the Kennedy campaign had a lot to worry about. And they brought in the Simulmatics corporation, not altogether willingly. The Simulmatics corporation was formed in 1959 in order to advise the Democratic National Committee about what to do about civil rights. And prepared a report for the DNC that influenced the party’s platform at the Democratic National Convention in August of 1960.
And then when Kennedy became the nominee, Simulmatics went to him and said, you know, we’d like to advise your campaign. And Bobby Kennedy, who was running his brother’s campaign, was pretty reluctant to hire them. The Kennedys had their own pollster; they worked with Lou Harris. So they already had polling information–what was the added value of this computer simulation of the electorate that, you know, came out of MIT and these, you know, these distinguished social scientists? It wasn’t really clear what that would offer them.
But they paid quite a lot of money for three reports that were prepared by Simulmatics in August of 1960, and everything that Simulmatics recommended that Kennedy do in the weeks, the last weeks–really it’s right about now, right? The last weeks before the election. Everything that they recommended he do, he did. And then he won. And when he won, they claimed credit.
Mina Kim: So the things that they recommended were things like, go ahead and talk about your Catholicism, but frame it in the sense of religious prejudice and how that’s a bad thing, because that would appeal to a certain important contingent of voters, as well as some of the other things that you talked about, in terms of having a stronger stance on civil rights, and so on.
I think you asked this really interesting question though, of whether or not the campaign recognized this anyway, and would have probably done these things anyway, or whether or not Simulmatics–by pulling in data from previous elections about the electorate that it had–actually influenced Kennedy’s campaign and helped him win a squeaker of an election. So what makes you wonder about whether it was Simulmatics or something else?
Jill Lepore: Well, I mean, I think to look at sort of the two most important pieces of advice that Simulmatics gave to the democratic party and to the Kennedy campaign–one was about the importance of Black voters. And one was about the importance of being forthright about Kennedy’s Catholicism.
And if you–looking back at that now, it is easy for us to say, okay, it’s 1960. This is the spring of the Greensboro lunch counter sit-ins, and sit-ins across the South. Do we need a computer simulation of the electorate to decide that the Black vote in northern swing states might be important, and that the Democrats’ long-standing foot-dragging on civil rights might be something that the Kennedy campaign should put a stop to, and reach out to Black voters in the North?
That was the message of the first of the Simulmatics reports–that Black voters matter, really. And, you know, the people who worked for Simulmatics and founded Simulmatics were strong civil rights figures, and they were really angry that the Democratic party had taken such a long-standing, cowardly position on the question of civil rights.
So. But you kind of have to ask yourself, like, why did that require a computer–did it really require a computer simulation? And I think that’s a little bit about the kind of technofetishism of the Cold War, you know, it’s the age of the arms race and, you know, the emergence of the mainframe computer. “Well, if a giant electronic brain tells me that Black voters matter, then I believe it,” these white liberals say, instead of, you know, talking to voters.
And the other is Kennedy’s Catholicism, which he had been fairly quiet about, although he had reason to suspect that a lot of Protestants were dubious of him. We forget how much anti-Catholic sentiment there was in the United States, but, you know, there was a great suspicion that he would be answerable to the Pope, to the Vatican, and not to the American people.
And what Simulmatics did in its computer simulations was purport to be able to show that if Kennedy took a stronger position, and was more forthright about his Catholicism, he would not further alienate any Protestants, but he would earn the loyalty of Black voters and Jewish voters who would identify with him as a member of an oppressed group. So he would gain votes where he needed them most, and he would not lose votes from voters whose votes he had already lost.
Mina Kim: Whether or not it was Simulmatics that helped Kennedy win the election, the company certainly wanted to paint it that way. Because they were a fledgling company, right, and they wanted to be able to sell themselves for other campaigns and maybe other research projects and things like that–high-profile research projects. Can you talk a little bit about why the campaign denied the use of this sort of election machine, this election simulator?
Jill Lepore: Yeah. You know, polling, when it started, was also really controversial. Public opinion polling as we know it now started in 1935, and in the 1940s there were congressional hearings about whether polling was ethical. And I think the decision went the wrong way. I don’t actually think polling is ethical. I think it contaminates the political process. But those concerns, you know, had endured, and then they were raised again, and with greater vehemence, over the question of a computer simulation of the electorate, which is different than just asking voters their opinions. It’s not asking voters their opinions; it’s predicting how they will act if you send them a new message. So it’s a little bit different.
The Kennedy campaign, once Simulmatics took credit for Kennedy’s victory, denied even knowing what the company was, which was kind of an interesting rebuttal that you could get away with in a, you know, very different media environment. Partly because, much like today, automation was a very important political issue. And the Democrats, remember, are the party of labor; Republicans were the party of business. And Kennedy had taken a strong position on automation: he was endorsing all kinds of jobs programs and jobs retraining programs, and reaching out to voters who were losing their jobs because of automation, or feared they were going to lose their jobs because of automation. So when the story broke that Kennedy had hired a giant computer to tell him what to say on various issues, it looked like he was, himself, an automaton–that he had kind of outsourced the political thinking necessary for running a campaign to a computer.
So there was that problem, but then there was just the general issue of leadership. Like, I just do not believe that our elected officials should be making decisions based on predictive analytics. I don’t actually think that’s how our system of representative government was set up to work. And I don’t think it’s working well, in part because of that.
It’s also the case that when Simulmatics first went to the Stevenson campaign, when it looked like he was likely to be the nominee, and asked, you know, would you guys like to use this incredible machine that we have devised?–one of Stevenson’s very closest advisors, Newton Minow, who was a law partner of Stevenson’s, wrote a memo to Arthur Schlesinger, the American historian who was in the process of switching camps from Stevenson to Kennedy, and said, “what do you think of this machine? I think it should be illegal. I think it’s immoral. And I also think it can’t possibly work.”
And Schlesinger, who was a great, you know, presidential biographer, but very much a power broker himself, wrote back to Minow, and he said, you know, “I share your reservations. Like, I don’t think it’s good for political leadership. Like people shouldn’t be taking their instructions from a computer prediction. But on the other hand, I, you know, I don’t want to stand in the way of science, and I think it might work.” So, what the Kennedy campaign kind of later says internally is, if you’re going to use such a thing, you should hide it in a basement, you know, behind a locked door, and throw away the key, so that no one can ever discover that you used this kind of device.
Mina Kim: And so you were saying that you feel like, that you would agree with that sentiment of the time, right? That we shouldn’t be using predictive analytics to determine maybe leadership behaviors or decisions that we make. But I wonder, then, how you must feel about the fact that it is completely normalized now, and in so many other industries, besides just political campaigning, to use these kinds of tools, where you feed an entity, a computer, a lot of data, it spits out basically predictions about human behavior, and then that guides people’s decisions about, I don’t know, what they sell, what they buy, what they tell you, or even what decisions to make as a leader, absent conviction.
Jill Lepore: Yeah. I mean, I’m not a Luddite. There’s a thousand different ways that we use computer-aided prediction and modeling to make the world a better, safer place. We use it for contact tracing and pandemic research. We use it for thinking about weather, obviously we use it for weather prediction. We use it to track and find new stars. I mean, there are a thousand reasons that we should be using these kinds of methods for all kinds of things. I happen to believe–and I think there are a lot of AI researchers who believe this as well–that where it becomes ethically complicated and ultimately indefensible has to do with social outcomes, which is a little bit different than the politics of this.
So for instance, an example that I would give, you know, in my local public school system, kids in the big public high school are set up with a predictive analytics program called Naviance, like starting freshman year, that tells them what colleges they could get into based on their grades, their ethnicity, the school that they’re attending, how other students from the same school have done when applying to this set of schools.
And so by the time they get to their junior and senior year, when they’re really thinking–those who are considering going to college–are really thinking about where to go, this program spits out a list of schools to apply to. In many cases, you know, it amplifies educational inequity: it lowers the ambitions of students who are struggling to raise their ambitions. And I think it gives students the illusion that they are just cogs in a machine. Now, that’s not to say that the college application and admissions process itself isn’t really broken. It is completely broken. Like, there are a thousand problems with that process. But the idea that you tell kids, young people, “oh, we’ll just put all your data into this program and we’ll run it against a model, and we’ll come up with a set of options for you,” for a decision that is going to affect your future life.
Or the same, you know, predictive algorithms are used in assessing whether people who are arrested should be released on bail, or without bail, or what the bail level should be set at. Some courts use them in sentencing. They use them in doing risk assessment of young people who are in the foster care system. Anything to do with social outcomes, I think, is just indefensible, because, although I think many of these programs have been devised with very good intentions, to aid a process that is tangled and complicated, and caseworkers who are burdened and overworked, you basically end up in Minority Report before too long.
Mina Kim: Do you feel like your position is being more widely adopted? Or–it almost feels like sometimes–going back, like, is there an appetite to put parameters on things like that, on predictive analytics related to social outcomes?
Jill Lepore: I think, you know, there are some places where there has been an incredibly strong pushback. I think that bail setting, algorithmic bail setting, there’s been a lot of critique of, but that is because advocates seeking to reform the criminal justice process are incredibly well organized.
Similarly, a lot of police departments instituted predictive policing, which uses algorithms: you know, you gather together crime data, and neighborhood data, and census data, and economic data, and you then predict what neighborhoods are going to see crime, and then you patrol them more. Which, you know, you think about it for half a minute, and, like, well, if you patrol the neighborhood more, you’re going to make more arrests. If you make more arrests, it’s going to look like there’s more crime in that neighborhood, whether or not there actually is–you know, it becomes cyclical. So a lot of police departments that had, I think, you know, swallowed the snake oil and spent a lot of money retaining analytics companies to do predictive policing have pulled back and have canceled those contracts. So there are some venues, and again, like, it tends to be places where there’s a very well organized opposition that’s deeply concerned with the ethics of inequality.
So I don’t think, for instance–I have myself not seen a lot of pushback around the use of educational software that does similar things, that predicts social outcomes. There was a book that came out a few years ago, by a mathematician, called Weapons of Math Destruction, you may recall, which had a really interesting chapter on the educational applications of predictive analytics and how dangerous they were, partly because the people that are using these tools are a little bit more gullible about how definite their predictions are. I think, you know, in an academic environment, it might be that some of these programs can be used with great caution and they can be really useful, but out in the field you’re talking about, you know, a social worker who’s trying to decide about a placement of a very young child–whether this child, you know, is going to be a risk to a family that has a younger child, because of the history of her abuse. I mean, I would trust the caseworker there much more than I would trust the algorithm. But there’s not a social justice movement around the foster care system.
Mina Kim: Can you talk a little bit about what happened to Simulmatics? Because we’re talking about sort of where things are right now, but the company itself went bankrupt, right? It died, essentially. Can you talk about what happened to it and why?
Jill Lepore: Yeah, it’s a little bit easier to see why it went bankrupt when you think about what it tried to do after its founding. So it’s founded in 1959; in 1960 it works for the Kennedy campaign and the DNC, claims credit. It goes public in 1961 with a stock offering, and then begins lining up clients, basically retail clients and manufacturers, like Colgate-Palmolive or Ralston Purina. They’re going to use their predictive tool to set up an imaginary population on which they can test advertising messages that are aiming to get people to switch brands, you know, switch dog food. “We can come up with the best possible advice for you, what ad campaign to use to get people to buy your dog food, because we have an imaginary population and a computer model that can test out and can simulate their purchasing decisions.”
So they do that for a little while, but then Madison Avenue kind of catches up with them. People might remember, there’s an episode of Mad Men when they bring in a computer to the advertising agency. So the advertising agencies, big advertising agencies, catch up with them and so they no longer can really make money doing that. They’re too small of an outfit.
They then decide they’re going to work with media organizations. They get a big contract from the New York Times in 1962 to do election analysis on election night. They just can’t pull it off. They’re not good enough technicians. That contract collapses. And then they start turning to do more work for the government, and in particular for the Department of Defense. They undertake a contract to produce a simulation of the Venezuelan economy. And then that very quickly takes them into the world of counterinsurgency and the US involvement in the war in Vietnam, where Simulmatics opens an office in Saigon and undertakes the work of trying to measure, essentially, the public opinion of South Vietnam, of the South Vietnamese.
They then, back in the US, apply that research to a project that the Kerner Commission in 1968 is interested in, which is predicting race riots. Lyndon Johnson had established the Kerner Commission in late ’67; its report comes out at the beginning of ’68, after, you know, the racial unrest in places like Newark and Detroit. That work, too, is considered a failure by the people who hired Simulmatics to do it. The Department of Defense terminates all their contracts in Vietnam. They decide that the work is useless.
And at that point, the student antiwar movement begins to focus on companies like Simulmatics, who’ve been doing contract work in Vietnam, especially through protests against university faculty who have been doing that work. So one of the leading scientists at Simulmatics was at MIT, and when the SDS becomes extremely violent and radical in ’69, students attack him personally. They call him a war criminal. They hold a mock trial for him for war crimes, for what Simulmatics has done in Vietnam. And the faddishness of computer simulation comes to an end, partly, you know, as part of the critique of Robert McNamara, the Secretary of Defense, and his vision for running the war in Vietnam like a computer simulation.
So the company goes bankrupt in 1970, and then for a long time, simulation is actually just something in video games. Like, think about SimCity or all the simulation stuff. It doesn’t really re-emerge as a commercial product, you know, until after the dot-com boom.
Mina Kim: So then, given the fact that it did badly–and it did especially badly on social issues and predictive analytics around those kinds of things–what is its legacy, do you think? Like, why do you see it as sort of the beginning?
Jill Lepore: I think that it is a useful, kind of a missing link in how we think about the distance between, say, psychological propaganda, psychological warfare of the second World War, and of the Cold War, and our modern era of, you know, election meddling from Russia and Cambridge Analytica. And, you know, what data does the government have about you? The sort of NSA, Edward Snowden stuff, versus what data does Facebook have about you? Should you let Google collect data about you? What are your privacy rights? I think it’s important to remember that Simulmatics explains in some ways, and helps us to see, the continuity between Cold War-era psychological warfare and modern Silicon Valley. That there was a moment when there was an attempt to run this stuff as a commercial business by behavioral scientists who’d worked in psychological warfare in the second World War, and who worked in psychological warfare projects during the Cold War, and in Vietnam. A lot of these guys worked both in the study of voting behavior in the US and in the study of propaganda to third world countries, trying to prevent them from becoming communist countries.
Right, and in a way you can see why those two things are really closely related. They’re both the study of how do you get in someone’s, inside someone’s head, and change their mind. So you have to know what they’re thinking. Then you have to know what you want them to think. And then you have to figure out what message will move them from what they’re thinking now, to what you want them to think.
This used to be called psychological warfare. After the second World War, people were like, we shouldn’t call it that anymore; we’ll call it the study of mass communications. And that is the field of mass communications–it kind of comes from that. And it’s not that, you know–that’s what advertising is like. It’s not all nefarious. Like, I don’t have some kooky idea that it’s a bunch of sinister people. But I think for me, the reason it was significant and, I think–comforting is not exactly the right word, but really compelling–to page through box after box after box of archival material from this company’s history, was: I do often have a feeling, you know, sometimes, maybe at this time of day, in the middle of the afternoon, where I go to do something on my phone and some message pops up at me, and I’m kind of staggered by the uncanny feeling that someone knows exactly what I’m about to look for. And, you know, I almost kind of shriek and leap back and drop my phone, like, who is messing with my head? And it is helpful to know that that work comes out of psychological warfare. Like, how to capture your attention and change your opinion about something is both, you know, the work of psychological warfare, and the work of an advertising campaign, and the work of a political campaign.
It’s one thing when that is a fair fight, right? Like, think of all the media literacy efforts–you know, when I grew up, we learned a lot in school, this was kind of in the Ralph Nader era, about how to resist television advertisements aimed at children, so that you wouldn’t buy the junky cereal. You know, like, you needed to be sophisticated about how television was trying to control your mind.
You know, there was kind of a media literacy curriculum in a lot of schools in the seventies for that reason. We don’t really quite have that now, especially for young people, but certainly even for voters, right? Like, we know that people who get their news about the pandemic exclusively from Facebook have a large number of misconceptions about the etiology of the disease. But they don’t necessarily know that. Because we’ve just swallowed this thing whole.
Mina Kim: So it sounds like you feel like psychological warfare may be the more accurate way of describing what’s happening to us, especially given the power differential between just us as individuals signing away our permissions, I guess, versus what these companies and entities that they’re connected with can do?
Jill Lepore: I mean, I think so. It’s easy to state this position in a way that is extreme and therefore easy to dismiss, and it’s important to remember, you know, what one of the Simulmatics scientists said about the so-called people machine that they had built. Asked whether this was nefarious and would lead to the end of American politics as we know it–thinking specifically about the political use of computer simulation of the electorate–he said, “look, you know, knowledge is progress. Like, we know how to do this. We know that it works. It tells us a lot. It can tell a candidate a lot. It’s the obligation of a candidate or an office holder to understand, as best and as fully as possible, his or her constituency. And it’s great. People should do that. And, you know, I would like for every candidate and every office holder to have a people machine, and then that would be progress. Because, you know, the only thing that’s wrong with running a computer simulation of the electorate to decide how to conduct your campaign is if you’re the only one with that tool. That’s unfair. But if everyone had one, it would be fine.” This was his answer to that question.
And I think that’s a serious answer. Right? And, you know, people say, you know, you have a really hard day, and you say, I wish the internet didn’t exist, but we don’t wish the internet didn’t exist, right? Like the, the knowledge production and the diffusion of knowledge that this technology of the internet represents is extraordinary.
So the question is, you know, what about the manipulation of the attention of people who are sovereign participants in a democracy? That remains a different question, because our notion of how voters make decisions is contingent on the premise that there is a fair field. You know, Benjamin Franklin said that when truth and error wage a fight on a fair field, truth will always win. So therefore, he said–he was a printer–you know, “I’ll print anything.” Because so long as you print both truth and error, truth will win; as long as you keep printing, that means the field is a fair one. It’s not a fair field anymore. It’s not a fair field because of money, but it’s certainly also not a fair field because that basic premise, on which the freedom of the press and freedom of speech is designed–they’re dependent on that idea–it just doesn’t obtain any longer.
City Arts & Lectures: You’re listening to Jill Lepore and Mina Kim. This is City Arts & Lectures.
Mina Kim: You’ve covered so many different things. And it was interesting listening to you talk about unearthing these pages and pages of documents about Simulmatics, and connecting them to our present moment. And, just removing ourselves from this particular story, I’m wondering: what fuels your curiosity? Like, what makes you find something like this and then just, you know, enmesh yourself in it? I mean, there are so many different things that you end up writing about and delving into as a historian and as a journalist. Just on a personal level, I’m curious what drives that.
Jill Lepore: Yeah. I mean, I really–I keep expecting there’ll come a day when I’ll wake up and I won’t be curious about everything anymore. You know, I have this maybe childlike sense of wonder at the world; I really am curious about everything. But I also love finding out something that nobody else knows. I love finding out something that nobody else knows that explains how things got to be this way. And maybe that helps me to imagine that something had a beginning, and therefore it can have an end. I think what makes people feel powerless about stuff that’s hard and unfair is when it feels like it’s always been that way, and it somehow becomes naturalized.
So the story–I was just telling the story recently. I remember, it was a number of years ago. I was at an academic conference and my oldest son was four months old and it was the first time I had left him. And I really didn’t want to go, but I felt like, “Oh, I got to go give this stupid paper.” And I went into the women’s room, and I had my breast pump with me, and there were, you know, nine women in line to use the outlet to attach, to plug in their breast pump, and, you know, extract milk from their breast with these ridiculous plastic horns in front of everyone else. Cause there was no other place to do it. And you know, we were all laughing and it was ridiculous and it was embarrassing, it was miserable. And I, you know, I’m a big rule maker, so I made a rule that I would never go to a conference again when I had a child under the age of two at home, and I never did.
But, while I was attached to the stupid machine, I just remember thinking, like, how did it come to this? Like what about the story of the emancipation of women over the course of the 20th century and the history of technology led to a moment where I’m in this room in some small town in Pennsylvania, this small college, you know, a thousand miles away from my baby, attached to this machine? Like, and I just desperately wanted an answer to that question cause I kind of wanted to be able to unplug from it.
And this just happens to be another kind of technological story, but I ended up writing an essay, called “Baby Food,” you know, in which I spent a long time thinking about the history of breastfeeding and the history of breast pumps, which, you know–it began as a medical device, and really crucial for saving the lives of children being born prematurely, you know, who can’t nurse, but need their mother’s milk, or there’s a thousand reasons why there should be breast pumps, but how they became like…Do you remember, like there were like these Gucci-like bags that women had, and they’re like disguised as if you’re like, you’re off to your latest board meeting?
Mina Kim: Yeah. I never had one of those, but yeah.
Jill Lepore: No. I mean, I think I like used one from an old student. But you know, the reason for them is because we don’t have maternity leave. So it became like a big corporate employee relations thing. “Oh, you have, you can’t have maternity leave, but you know what we have? We have these really nice breast pumps. And we have a nice room with leather furniture that has wifi and you can still work while you’re hooked up, you know.” And it was–and meanwhile, they’re, you know, they’re giving away to women who are on welfare or on food stamps, you know, these really cheap, crappy ones. You know, and it was like when welfare–you had to go to work when you were on welfare to keep some benefits. Well, if you’re going to go to work and you have a baby, then you have to pump milk. So then the government’s giving you the breast pumps, and it was just a way–just unbelievably, just dystopic demeaning of women, and of motherhood and of work. And, so it was a kind of thing that I just remember being, you know, racing to the library to do the reading, to figure this out, and the thinking.
So that was a particularly intense intellectual experience for me. And I still get emails from women saying, “Oh my God. You know, my boss wants me to go back to work and”–not during the pandemic, but up until the point, you know–“my boss wants me to go back to work. You know, they told me they have this breast pump redemption, like they sent me a coupon for a breast pump, and then I read your piece and I thought, wow, there’s just not, this isn’t right.” You know? And I didn’t really know where this came from and I want to think about a way to stop it. So yeah. I often have that kind of drive about a particular story.
Mina Kim: That’s so interesting because I think, when you go back to its origins, you’re basically going back to a time when the reality that you’re in wasn’t reality, right? So it enables you to reimagine or imagine a completely different future, which is sometimes what, as you say, is really lacking. I mean, in terms of, you know, reading this book and then thinking about where we’ve landed now, with regard to data and predictive analytics and just how it really is valued, it feels like, as knowledge, and above so much, so many other things, you can’t help but almost feel like you’re in a loop all the time, because you’re providing data. They’re using that data to basically kind of affirm what you’re doing and manipulate you in these directions, which then generate more data, that then just sort of create the same outcomes over and over again. I don’t know if that’s making a lot of sense, but it was this sense that I was feeling of, yeah, I’m in a loop. And maybe it is the question of “how did we get here” that can help us get out of it a little bit.
Jill Lepore: I mean, I hope that that’s the case. And to also though, to sort between what is really great about what these technologies can do, and what is not so great, right? Like, as opposed to just, it’s all so wonderful. Those men are geniuses. Which seems to be what the culture wants us to believe. “Thank God for Elon Musk. Oh my goodness. We’re so grateful.” You know, well wait, no, let’s sort it out, you know, what are their criteria? And, you know, I find it kind of fascinating that a lot of people that I spoke to really thought that 2016 was going to be that moment of reckoning, at least for social media companies, because of the interference in the election. And it really wasn’t, you know, Hiroshima was that moment for physicists, Agent Orange was that moment for chemists. Like we have, there are reckonings with work that is done, that crosses ethical lines where, you know, we get the whole field of bioethics from that kind of a reckoning.
Now, you know, a lot of colleges and universities have a required, like, embedded ethics class for computer science majors, or in the intro to computer science. And I’ve talked–and the people who are designing these curricula are incredibly well intentioned. I don’t mean to demean it; better to do that than do nothing. But a lot of students I hear from say, well, you know, it’s like, here’s how to program, here’s how to become a great coder, but also you have to, like, jump through this hoop of answering these ethical questions. It isn’t, you know, it isn’t as embedded as we think maybe it ought to be. That reckoning really still has not happened, it seems to me.
And you know, one of the things I learned working on this book was how intensely people in the 1960s debated whether this was the right course of action. And then they really just decided they would kick it down the road. You know, there’s this book that I read, it was published in 1968–I think it’s the year of 2001, the Stanley Kubrick film, big year for futurism. The book is called Toward the Year 2018. And it brought together eminent practitioners in a number of disciplines to imagine where their field would be in 50 years’ time. And I read it in the last few weeks of 2018. And you know, a lot of the predictions–it’s really the technologists who are quite prescient, they’re really on the money. People really do know what’s going to happen: that you would have a phone that you could use to have a video conversation with somebody. That you would have a phone that you could carry in your pocket. That, you know, they knew a lot about weaponry and where it would be by 2018, for instance.
But one of the founders of Simulmatics wrote an essay in which he said, you know, “I’m pretty sure that by 2018”–I might not get this exactly, but I’m pretty sure he said, like, “I’m pretty sure that by 2018, if I wanted to hire somebody, and had a job applicant, I could, at my desk, without ever leaving my desk, by sending messages out through a network of computers, find out that person’s IQ and get their high school transcripts and their college transcripts, find out if anyone in their family had ever received government assistance or been arrested for anything. I could find out their entire military history, if the person were a veteran. You know, I could get job reports from any previous job they’d ever had, without ever leaving my desk. And that will be, I’d be able to do that by 2018. But the question is, should I be able to do that by 2018? Should I be doing that? You know, there will always be a contest between the expansion of knowledge and the demands of privacy.”
And then he said, “well, they’ll have to figure it out. Like, that will be the problem of 2018,” he said, “will be figuring out where that line is.” And it’s like, okay, but it’s 1968 and you know that, why are you not having conversations about where that line should be now? Because we haven’t drawn the line. We still haven’t drawn the lines, you know, 52 years later.
Mina Kim: Yeah, and it feels like it’s much harder to go back now than it would have been in like, 1970 or something.
Jill Lepore: Yeah.
Mina Kim: You know, it’s reminding me of–so I read a couple of pieces that you wrote about how we’re moving into this indoor life. I think that was your most recent piece for the New Yorker. And then you had one earlier in April about loneliness. And one of the things that you bring up about how we’re moving indoors was that building design is now being done through an immense amount of data collection, so that they can learn our behaviors, our habits, our preferences. And you make this point about whether or not there’s really an ethical discussion, an ethical conversation around whether or not we should be developing this type of data. And that you sense that the appetite to even have a robust discussion around it will diminish, as a result of this pandemic. I was wondering if you could just talk a little bit about, I don’t know, the connections. I mean, it all just feels so connected to me between the sense of people knowing so much about us, but yet feeling so much less understood or alone. And I, you know, I don’t know, is data really telling us anything meaningful?
Jill Lepore: Well, you know, again, I think tremendously good and energetic and smart people work in these realms and they do really meaningful work. And, you know, in the realm of people who are in the healthy building design movement, I mean, the last thing I would ever do is dispute anyone’s intentions. You know, they want to make buildings healthier. And living indoors was already really an unhealthy thing to do; it’s more unhealthy in an age of pandemic. You know, a lot of these people that work in this field are now trying to think about what public school buildings would need to reopen. You know, new ventilation systems, you know, how would you have to modify shared bathrooms? You know, they’re thinking about things that are important.
But you know, it very quickly moves into a realm that seems to me bonkers, when commercial firms are selling products like an app that goes on your phone if you’re an employee of, you know, a company that has 3,000 people in a big building, and the company will always know where you are, and we will evaluate your productivity: whether you’re closer to a window or closer to a fan, or whether you’re seated on the same floor as your four closest collaborators, or whether you’d be more productive if you were seated on a different floor from the four closest collaborators, and also more fit, because you would be encouraged to use the stairs, if the elevator is deliberately slowed, so that you’d be, you know, encouraged to use the stairs to go to meetings with your collaborators. Like, all that data can be collected on an app on your phone that your employer could hold. And it’s on some level, like, who wants to work in that company, you know? Like what is the work of management? What is the work of employee relations? I mean, what is the work of a community, of working with dignity with your fellow human beings and producing something meaningful together, if there’s just this machine in the kind of beating heart of the corporation that is always deciding where it would be best for you to be inside of a building?
And although I think some of that stuff looked kookier and less likely to be salable–you know, you imagine some of the stock offerings of some of those startup companies that were offering services like that maybe kind of flopped, because people are like, ew, you know, like that isn’t…I don’t want to run my company that way.
But now that people are thinking about bringing people back into office buildings, those tools seem like they might be really important. They might be really important to their employees’ health. They might be really important to lowering the risk of lawsuit exposure for the companies, who, you know, might know something about whether the person seated 12 feet away from you has recently seen someone who has only recently recovered from COVID–and they know that, and you don’t know that. And they’re going to send you a message and say, you have to leave the building. And if they knew that and they didn’t tell you, and then you sued–it very quickly descends into, I think, a very dark place.
Mina Kim: Yeah, I guess so. I guess, I wonder though, if, I guess if it’s just being used to create these environments that end up kind of removing us from each other. I mean, I guess there are a lot of health–of course there are a lot of health implications that are related to that. But you know, the extent to which these data can be used to benefit everybody or just people who can afford the benefits of the architecture or the design that comes as a result of collecting all this information. It just raises all these questions about what is it for and who is it for in terms of our future, I think is what I grapple with a lot.
Jill Lepore: Yeah, no, absolutely. And I don’t see where that conversation is quite happening right now. You know, there’s a lot of hoarding of resources going on.
Mina Kim: The other thing that I wanted to ask you about was, I was really fascinated by how we have gone from being a nation of so many people who are doing solo living. And when you were talking about, how did we get here, sort of that question, I was wondering if you could talk a little bit about how we got to that point, and how it connects to what, as you said, the former surgeon general described as an epidemic of loneliness? Because especially right now in this pandemic, when we’re all so isolated and we’re all trying so hard to just kind of make it, or even put a positive spin on what’s happening to us so that we can get through this all, it just feels really relevant.
Jill Lepore: Yeah. I mean, demographers have a number of different ways that they track that, and I think explanations that they offer, and I’m by no means an expert on it. And some of the contributing factors to the rise in single-person living–which, you know, can be described as an epidemic of loneliness, because of the health outcomes that are associated with it–some of them are things that, you know, many of us would think are very good things. The emancipation of women, right? That women could live alone. Women living alone is a pretty new thing, historically. Young women, women after their–when they become widows, women who never marry. A large part of single-person living has to do with the growing independence of women.
A lot of it has to do with economic growth and economic mobility. Not recently, not in recent decades, but there were decades in the 20th century when, you know, you moved away from the house that you grew up in and the town you grew up in because there were jobs to be had in another part of the country and you could afford to move there and take those jobs. And you might end up living alone if you didn’t find a partner, or your partner died, or you weren’t interested in a partner, because the rest of your family isn’t nearby. And we would think of that kind of opportunity and generational independence as probably a good thing. You know, a lot of it falls into the broader decline of community, arguments about the supplanting of the extended family with the nuclear family, which is a historical anomaly–like, living just as parents and kids in a single living unit is historically extremely unusual.
One of the things that’s fascinating about the pandemic is how that’s reversing to some degree. You know, I’m sure you know a lot of young parents who’ve moved back in with their parents, in order to have their parents take care of the kids, so that they can do their work from home. Not that it hasn’t been a continuous feature of life among a large segment of the population. But it had become like a quite disfavored mode of living among, you know, kind of aspiring white collar professionals for decades. So there’s some, you know, there’s a lot of weird kind of movement going on around that stuff.
But I think that, you know, one of the things I say in that essay about the history of loneliness is, I put the founding of Facebook kind of in the timeline of a really accelerating growth of single person living, and point out that what Facebook did when it was founded was monetizing loneliness. You know, it was a way to sell, and this is, you know, what Zuckerberg will always say, what Facebook’s mission is: “we connect people to the people that they love.” It’s like, no, actually what you do is you get people to voluntarily give you their data and then you sell it to other people and make money off of it. But yes, you do connect people. You know, that is the reason for the tremendous success, early, especially early on, at Facebook, long before Facebook news, was people’s loneliness in an increasingly atomized world.
Mina Kim: So. Right now, it feels like we’re sort of shuttered indoors, we’re lonely. I wonder what sort of optimism you have for the human condition moving forward.
Jill Lepore: You know, I don’t know. I sometimes think I have a lot of optimism, but I’ve just had this series of assignments to write about things that are really depressing at a time when things are just generally getting worse. And so I’m trapped now in these positions that I don’t entirely hold. Yeah, I think all of these things can turn really fast. I think all of them could turn really fast, honestly.
Mina Kim: Turn meaning in a good way or a bad way?
Jill Lepore: Yeah, I think a lot of things could turn really fast in a good way. They, you know, they have, they’ve been turning at a pretty tight clip in a bad way for a while now. But, if this isn’t a moment when people really confront what it is to be a member of the human family, then when is that moment?
Mina Kim: Well, Jill Lepore, it was really such a pleasure to talk with you. I don’t know if you have any final thoughts you want to leave us with, that my questions didn’t get to?
Jill Lepore: You know, I just want to thank you for doing what you’re doing, which, you know, I know it’s hard sitting in a room talking into a microphone when, you know, it’d be so thrilling to be in a big room with a thousand people and feel the thrum of that and the energy and the murmuring and the whispering and the kind of heat rising in the room from all those bodies. Like, I think it’s really incredibly generous of you to, you know, to do what you’re doing in these really different conditions and do it so well. And I’m sure that your listeners value it immensely, and it’s been a real treat to speak with you. So thank you.
Mina Kim: Wow, Jill, thank you so much. That’s not what I expected you to say, but I really appreciate that. And thank you for this book. Jill Lepore’s new book is If Then: How the Simulmatics Corporation Invented the Future. Thanks to everyone for tuning in.
City Arts & Lectures: You’ve been listening to Jill Lepore in conversation with Mina Kim. This program was recorded on September 16th, 2020. We had research help from Elizabeth Levy, a sophomore at Yale University, studying art history. City Arts & Lectures can’t wait to be back at our theater with hundreds of you. We too miss the thrum and the energy, the murmuring, and the whispering and the heat rising. While we wait for a time when it’s safe to assemble again, we’ll continue to bring you voices from leading figures in arts and ideas, politics, and food, from their homes to yours.
The programs we are working on this fall include a conversation with Yaa Gyasi, author of the explosive debut novel Homegoing. She is out with a new novel, Transcendent Kingdom. The book intimately describes a very particular story of the relationship between an immigrant mother from Ghana and a first-generation American daughter.
Yaa Gyasi: I think the main thing is, again, this question of what we make of, what we do with trauma that we have inherited–maybe trauma that isn’t ours, but that we take on. Both of these books are, I think, interested in looking at how trauma moves within a family. And then the other thing I would say is that they are both, in some ways they are both books that are asking how we make sense of a life and a world in which senseless things happen, when we understand that there is a kind of randomness and senselessness to some of the aspects of being alive.
City Arts & Lectures: Later this fall, we’ll hear from poet and author Claudia Rankine, who has written a lyrical, genre-confounding book. It scrapes away at the sedimentary layers that make up our muddy US culture. Just Us: An American Conversation explores the most pressing questions about race in America today, through essays, poetry and pictures. You can watch a live stream of Claudia Rankine on October 1st at our website, cityarts.net, or hear her in an upcoming broadcast.
Chanel Miller, artist and writer, is joined by Jia Tolentino, New Yorker staff writer and author of Trick Mirror. Miller’s memoir, Know My Name, is about trauma and transcendence and not being defined by sexual assault. Alice Wong, disabled media maker and founder and director of the Disability Visibility Project, is joined by comedian Kamau Bell. They’ll talk about amplifying the voices and experiences of disabled people. And Alicia Garza, Black Lives Matter co-founder, will talk about her forthcoming book, The Purpose of Power: How We Come Together When We Fall Apart.
These are just some of the programs coming to you from City Arts & Lectures this fall. For more information, visit cityarts.net. That’s cityarts.net.
These broadcasts are produced by City Arts & Lectures in association with KQED public radio, San Francisco. Executive producers are Kate Goldstein-Breyer and Holly Mulder-Wollan. Director of communications and design is Alexandra Washkin. Production and communications assistant is Juliet Gelfman-Randazzo. The post production director is Nina Thorsen. The Sydney Goldstein Theater technical director, Steve Echerd. House manager, Lucie Faulknor. The recording engineer is Jim Bennett. The music composed and performed by Pat Gleason. The founding producer is Sydney Goldstein. City Arts & Lectures programs are supported by Grants for the Arts of the San Francisco Hotel Tax Fund.
Additional funding provided by the Wallace Alexander Gerbode foundation, the Mimi and Peter Haas Fund, the Bernard Osher Foundation and the friends of City Arts & Lectures. Support for recording and post production of City Arts & Lectures is provided by Robert Mailer Anderson and Nicola Miner.
To attend a live program, see who is coming next, or find out more about our podcast, visit our website at cityarts.net. That’s cityarts.net. Special thanks to Ann Oyama for making our programs possible as we shelter in place at home.