Me Myself and AI Transcript

Megan Figueroa: Hi, welcome to the Vocal Fries Podcast. The podcast about linguistic discrimination.

Carrie Gillon: I’m Carrie Gillon.

Megan Figueroa: And I’m Megan Figueroa.

Carrie Gillon: I don’t know if you saw this, but in the UK, Lord Digby Jones accused Alex Scott of spoiling Olympic coverage by dropping her g’s.

Megan Figueroa: What?

Carrie Gillon: Yeah. So Alex Scott is a television presenter on the BBC.

Megan Figueroa: Okay. What dialect of British English is it that you drop your g’s? Is that not a common thing or is it a specific dialect that only does it?

Carrie Gillon: Well, first of all, it’s not dropping g’s.

Megan Figueroa: Well, yes, yes, yes.

Carrie Gillon: I think we have to push back on this because I just really hate the description. It doesn’t make any sense if you actually understand sounds. So instead of pronouncing it /ɪŋ/, she pronounces it /ɪn/. There are many dialects that do that in the UK. There are also dialects in the UK that pronounce at /ɪŋg/, and there are also dialects that pronounce it /ɪŋk/. So you have all these varieties. Some that actually have a g.

Megan Figueroa: Right.

Carrie Gillon: Most don’t though, but the prestigious dialect, just like in North America, the most prestigious dialect pronounces it /ɪŋ/ with the engma, which we spell ng.

Megan Figueroa: Right. Okay, and this is the same in the US where it’s a denigrated variety if they don’t pronounce it with a g sound, like the south.

Carrie Gillon: It’s a denigrated accent for a lot of reasons. And one of the things that you pick up on is pronouncing it /ɪn/.  

Megan Figueroa: right. Yeah.

Carrie Gillon: Yeah. So, this is what he said. He used to be a Minister – kind of like a Congressperson, right. The equivalent of. So he used to be a Minister. And he said that she needs elocution lessons and spoils Olympics coverage. Yeah.

Megan Figueroa: No, this is why, was it Madeleine Albright that got, uh, not Madeleine Albright. The person that got elocution lessons, the famous British lady.

Carrie Gillon: Oh, are you talking about Margaret Thatcher? Oh boy. Oh boy. Madeleine Albright.

Megan Figueroa: Yeah, I know. I’m sorry.

Carrie Gillon: Same generation, I guess, or similar or close. And a woman. Otherwise pretty much very different.

Megan Figueroa: Yes. Yes. I apologize to everyone from that part of the world, I was probably very insulting. Yeah. No, it’s a problem all across the board if, you know, Thatcher had to also worry about this.

Carrie Gillon: Yeah. I mean, it was for a different reason. It was because I guess she just sounded too nice or something. I can’t remember exactly what it was, but it wasn’t that she was speaking- I don’t think it was because she had a non-prestigious dialect. I don’t think, anyway. Yes, in the UK there are lots of things going on, but I would say probably the major thing going on is class. And this is just an example of class.

Megan Figueroa: This is a class thing.

Carrie Gillon: Yeah. So, this is the whole thing. I think you actually have to hear it all.

Megan Figueroa: Okay.

Carrie Gillon: “Enough. I can’t stand it anymore. Alex Scott spoils a good presentational job on the BBC Olympics team with her very noticeable, inability to pronounce her g’s at the end of a word. Competitors are not taking part, Alex, in the fencin, rowin, boxin, and kayakin, weightliftin and swimmin.”

Megan Figueroa: I…. am horrified.

Carrie Gillon: Yeah.

Megan Figueroa: I, I, I’m almost speechless because… I don’t know it. It’s not, why am I surprised by this?  I don’t know. I don’t know why I’m surprised by this.

Carrie Gillon: It shouldn’t be a thing that is said anymore at this point. But anyway, so her response was: “I’m from a working-class family in East London, Poplar, Tower Hamlets, & I am PROUD. Proud of the young girl who overcame obstacles and proud of my accent! It’s me, it’s my journey, It’s my grit.”

Megan Figueroa: Get it, girl. I love it! I mean, it’s so easy to, I don’t know. I feel like 10 years ago it would have been easy to fall in the trap of, “oh yeah. I need to change the way I speak.” You’re right. I’m glad that that’s what her response was, but I just feel like it was, it’s novel to me.

Carrie Gillon: So Lord Digby Jones-

Megan Figueroa: Yeah. Fuck you.

Carrie Gillon: responded to this, this morning. Well, this morning, our time; afternoon, their time. “Alex Scott, please don’t play the working class card. You are worthy of much better than that! I admire & often publicly praised the adversity you faced & defeated to achieve all the success you deserve. Not sounding a g at the end of a word is wrong, period.” And the reason why I bring this up is because he says period here, and half of the comments in response to him are like, “you should be saying full stop,” which is so funny. So, so funny. You’re being American.

Megan Figueroa: That is amazing. Oh my God. I’m over here in, you know, the desert in the US and I’m wondering how the thoughts about class-based discrimination against dialects or varieties are changing over in the UK. What that looks like these days.

Carrie Gillon: I hope it’s changing. I think it probably is, but it’s really, really ingrained in the UK. I mean, it’s ingrained here in North America, too. It’s kind of masked by racism – there is still classism here – but there it’s just very, very much more obvious, I guess.

Megan Figueroa: Well, they have royalty, right? I mean, there’s nothing more obvious to me about class-based distinctions.

Carrie Gillon: I guess that’s true. But there are other places with monarchies that are slightly less intense on the class

Megan Figueroa: Sure. Yeah. Well, and it’s probably related to how I grew up thinking that the most prestigious way you could speak would be to sound like someone from the UK. And, you know, of course they only sounded one way when I was little, but I was like, those are the rich fancy people. I always associated it with wealth and with, with having power.

Carrie Gillon: Well, you weren’t completely wrong. I mean, RP is the dialect of wealth and social status.

Megan Figueroa: I mean, Lord Digby, I don’t think- I mean, he’s not going to move on this. He can go fuck himself. Cause he’s not going to move an inch on this, I don’t think.

Carrie Gillon: Probably not.

Megan Figueroa: Did your dad at the CBC go through, have you talked to him about what they may, if they ever said anything about his voice?

Carrie Gillon: I haven’t really, like, had a long conversation about it or anything, but there was a list of words that they had to pronounce a certain way. So, for example, instead of schedule /skɛʤʊl/, he was supposed to say schedule /ʃɛʤʊl/. So that’s how he pronounces it. That’s not the way most Canadians pronounce it. That’s the way Brits pronounce it.

Megan Figueroa: What about aluminium?

Carrie Gillon: No, no. We say aluminum. Aluminium is everywhere else. All the other Commonwealth countries, except for Canada. And by the way, it reflects the scientific naming convention better to call it aluminium.

Megan Figueroa: Yeah, that’s true. Yeah. That’s yup. You’re right. But what is it schedule /ʃkɛʤʊl/? How does he say it?

Carrie Gillon: /ʃɛʤʊl/ Yeah.

Megan Figueroa: to this day.

Carrie Gillon: I haven’t heard him say it in a long time, but I definitely remember the last time I heard him say it. It was after he retired and he was still saying schedule /ʃɛʤʊl/. So yeah. But I’m sure there were like- I mean, I used to not be able to tell my dad apart from other male announcers on CBC. Cause they had this kind of CBC voice. I don’t think it’s as bad now. Like I think there’s more variety, but there’s still like a cadence to it.

Megan Figueroa: Yeah, no, I know what you mean.

Carrie Gillon: Yeah. It’s different from NPR, but it’s closer to NPR than probably anything else you can think of.

Megan Figueroa: Yeah, we should have them on one day.

Carrie Gillon: Could be fun.

Megan Figueroa: Yeah. If it’s not, we’ll just delete the episode, never let anyone see it or hear it, and pretend it never happened. Well, Lord Digby. Jeez. I don’t know. Alex Scott, you’re amazing. And it reminds me of talking with Deion in our episode a couple episodes back. It’s just kind of, you know, thinking you have to sound one way, but then owning it – in his case, going to a speech pathologist.

Carrie Gillon: Yeah. He learned some things to make himself, um, more legible to other brothers in English, but it didn’t completely change his accent.

Megan Figueroa: Exactly. Yeah. And like, I like to say, I just think that we need to start perceiving people differently instead of expecting everyone to change, to make us feel better.

Carrie Gillon: Yeah. I mean, my dream is that one day it will be exactly a two-way street. Yeah. I have to change a little and you have to hear a little differently. Everyone’s doing their best. That’s what I want, but we’re not there yet.

Megan Figueroa: The changing a little like that doesn’t mean you’re giving up something, because you know what, I don’t say fuck all the time when I’m around complete strangers in a certain setting. Like there is a little bit of, you know, give and take. I’m like, okay, maybe I would’ve said fuck here, but I’m not, because I’m in this situation. So, if you think about it that way, you’re not giving in to white supremacy or, you know, these kinds of things by giving a little.

Carrie Gillon: Right. Uh, you know, once things are actually on an equal footing, it will just be reciprocal and it won’t be all one direction.

Megan Figueroa: Yeah. If anyone wants to tweet at us, um, or message us at Gmail about, you know, what’s going on in the UK when it comes to, like, class-based varieties and perceptions of them, I would love to know more. And we’d love to hear from you over there.

This is an important episode.

Carrie Gillon: Um, oh, absolutely important. Yes. Incredibly important. I’ve learned so much about this area in the past month, cause I’m writing the chapter for our book about this, and the conversation obviously was part of that. But yeah, it’s an incredibly important part of what’s going on in the world, and I can’t believe how destructive it is. So yeah,

Megan Figueroa: It can destroy lives, absolutely.

Carrie Gillon: Well, literally burning the world up at one level, like you bring up. But then in other ways as well. Yeah.

Megan Figueroa: Well, anyway, it seems funny to say enjoy, but we mean it because our guest is amazing. So

Carrie Gillon: Yeah. It’s a really fun episode, even though we’re talking about pretty dire things.

Megan Figueroa: Yes, it should be basically a subtitle of our show.

Carrie Gillon: Yeah, almost always. It’s pretty dire.

Megan Figueroa: It’s a pretty, it’s a pretty fun episode, even though it’s dire.


Megan Figueroa: Okay. So, we’re so excited today to have Dr. Emily M. Bender, who is a Professor in the Department of Linguistics at the University of Washington. She is also an Adjunct Professor in the Department of Computer Science and Engineering and the Faculty Director of the Computational Linguistics Program and the Director of the Computational Linguistics Laboratory. Thank you so much for being here, Emily.

Carrie Gillon: Yeah. Thank you.

Emily M. Bender: I’m so excited. I’m one of the cool kids now.

Megan Figueroa: Oh, I feel cool talking to you.

Carrie Gillon: Yeah.

Megan Figueroa: Kind of my hero – which I’ve never really said, because I don’t know if anyone knows this. Maybe even Carrie doesn’t, but I have this thing where I wish my life would have went a little bit differently, cause I think I would have been more into the computational part of things, because I was so into like learning HTML when I was like nine years old. I made my own websites and all these things. And I feel like if I, perhaps if I weren’t a girl, maybe someone would have like actually pushed me to pursue it. So, I’m just like, I love women in tech. It’s just, it makes me so happy.

Carrie Gillon: Also, you have like a rule named after you.

Emily M. Bender: Yes.

Megan Figueroa: Wait, what? I don’t know. I don’t know how, I don’t know.

Emily M. Bender: So, it’s the hashtag Bender rule.

Megan Figueroa: Okay.

Emily M. Bender: I didn’t name it, but I have to say that once other people named it – it was like three or four people at the same time came up with that – I was like, okay, I’m going to run with this. And basically the Bender rule states that you should always name the language you’re studying, even if it’s English. It seems kind of like a bare minimum thing to do, but yeah, you’d be surprised. And it comes from the context of computational linguistics and natural language processing, where people will talk about how “we’re working on machine reading,” or “we’re working on question answering,” or “we’re working on speech recognition,” and they don’t say English. And then you’ll have “we’re working on machine reading in Italian,” and those papers have a harder time getting published, where people go, oh, well, that’s just specific to Italian – as if the other stuff weren’t specific to English. Right?

Megan Figueroa: It’s like being specific to white or Anglo as the-

Carrie Gillon: It’s very similar. Yeah.

Emily M. Bender: Yeah. Yeah, exactly. Exactly. And Nicole Holliday has a, has a Holliday rule now, which is that it’s all right to say they’re white.

Megan Figueroa: Yes. Yeah, absolutely.

Carrie Gillon: If people are going to say it’s all right to be white, you should be able to say it then, too.

Megan Figueroa: I know.

Emily M. Bender: Yeah, exactly. It’s like, who are you studying? And, you know, don’t let one group of people be normative and sort of unmarked. Like you have to say, this is English. This is white people we’ve been listening to. This is, you know, has to be, it has to be specific. Otherwise, you get this fake generalization, that’s just, you know, scientifically invalid and leads to all kinds of other problems.

Megan Figueroa: absolutely.

Carrie Gillon: So, before we get into the stuff that we actually really want to talk to you about, I just realized we need to ask the very basic question. What is computational linguistics?

Emily M. Bender: That is a great question. So computational linguistics very broadly is using computers to deal with language. And you can be doing that for the purposes of linguistic research. So, my colleague Alicia Beckford Wassink has a service called CLOX, C-L-O-X, where she’s got sort of an interface to commercial speech recognition set up so that it helps sociolinguists do their transcription faster, so they can get on with the science. She doesn’t consider herself a computational linguist, I don’t think, but that application is computational linguistics. You can be talking about doing corpus searches. You can be talking about grammar engineering, which is sort of my initial home within computational linguistics. Or you could be working on practical applications – getting computers to deal with language for things like autocorrect or machine translation or speech recognition or speech synthesis, or, oh, there are applications like matching patients to clinical trials based on the notes in their medical files, for example. So, a huge range of things. Uh, just lots and lots of different applications. Most of it happens unfortunately on and for English. But, you know, even work that’s not specific to English tends to be specific to languages with a written form; there’s relatively less going on that doesn’t involve writing. Like you might have speech to text or text to speech, tiny amounts of work on signed languages, but most computational linguistics involves either mapping from spoken to written or just dealing with the written.
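The corpus searches mentioned above can be as simple as a keyword-in-context (KWIC) concordance: show every occurrence of a word together with its surrounding words. As a rough sketch in Python – a toy illustration, not any of the tools named here:

```python
# Minimal keyword-in-context (KWIC) concordance: for every occurrence of a
# target word in a corpus, show a window of the surrounding words.
def kwic(text, keyword, window=3):
    words = text.lower().split()
    lines = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            lines.append(f"{left} [{w}] {right}")
    return lines

# Invented mini-corpus for illustration.
corpus = "the cat sat on the mat and the dog sat by the door"
for line in kwic(corpus, "sat"):
    print(line)
# → the cat [sat] on the mat
# → and the dog [sat] by the door
```

Real concordancers handle tokenization, punctuation, and much larger corpora, but the core operation is just this kind of windowed search.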

Megan Figueroa: What is AI and how does that relate back to what computational linguistics is?

Emily M. Bender: That is another great question. So AI stands for artificial intelligence, and what gets called AI these days – so, if you think of AI as this research program of creating computers that have capabilities that are similar to what we consider intelligence in humans, then there’s this whole interesting and problematic set of questions about sort of what is intelligence. And if we think about using computers to model intelligence, what kind of reductionist idea of how humans work is that giving us? That’s sort of AI as a research program. AI right now – we’ve probably started hearing a lot more about it in the last three or four years, I would imagine – is largely pattern recognition at scale, applied to different specific contexts. So, pattern recognition is useful in classifying things. You’ve probably, a million times now, answered a question of which of these photographs has a stop sign in it. All right. Do you know why you’re doing that? You know what for?

Carrie Gillon: Is that to train the AI, to learn what, what things are?

Emily M. Bender: Yeah, it is. It is creating training data for computer vision systems. So, basically, we as humans doing a CAPTCHA are providing labels for “this picture has a stop sign and that one doesn’t” – so that someone trying to build autonomous vehicles, which need to be able to reliably detect stop signs and pedestrians and bicycles and strollers, has labeled data to train on.

Megan Figueroa: The test that they use to see if we’re human is training the computer to be more human. Yeah.

Emily M. Bender: Yeah, exactly. I think the initial idea there – it’s called CAPTCHA – was: we need people to perform some task to prove that they’re human, and along the way let’s make that useful. You know, I’m on board with that. But where I start sort of parting ways with the people who are big AI boosters these days is this idea that the pattern recognition is showing that the machine can do these things, and do them in a superhuman fashion or a sufficiently human fashion, depending on the task – when oftentimes the things are things that aren’t even possible tasks at all. So, there’s an awful lot going on that gets called modern phrenology or digital phrenology, and people who are really in the know point out that it was actually physiognomy, I think, is the right word. So,

Carrie Gillon: That’s right.

Emily M. Bender: This is, like, pretending that you can tell something about a person – personality or other characteristics – based on the shape of their head or the shape of their face. So, you know, there’s a whole bunch of articles recently, people claiming they can do emotion detection using computer vision. And it’s like, no, you can’t. You can’t even as a human know what someone else is feeling just by looking at their face. You can make some guesses based on what they may or may not be showing you there, but there are all kinds of reasons those signals could be scrambled. And so, you know, the best a computer vision system could do would be to assign the same labels that the human who labeled the picture in the first place did. And that human might well not be a person who knows the data subject very well at all. So, you have a picture of somebody smiling. Somebody else says they’re happy, right? That person’s mom looks at that picture: no, no, no, no. That’s the face my kid makes when they’re told to smile for the camera and they’re grumpy in it.

Carrie Gillon: So, speaking of this, I did that- I think it was the Financial Times that did this thing where it was trying to figure out your emotions from your face, and you’re supposed to react to things. And it told me that I disliked cats, and I was like, a) that is completely false, I have four of them; b) I know for myself that when I’m smiling, some people don’t know that I’m actually happy – they think that I’m upset. So, okay, this thing doesn’t work on me. Not surprising to me.

Emily M. Bender: Exactly. Exactly. So, it’s pattern recognition, and there are scenarios where pattern recognition is a useful thing, right? So, speech recognition is a kind of pattern recognition, and especially in a context where you’ve got a bunch of stuff you’ve got to transcribe, and the speech recognition system now is fast enough that it is easier to edit and fix its mistakes than it is to do it by hand – like, that is useful. But then there’s plenty of other places where people claim that this pattern recognition is telling things about the world that are true, or that maybe you can’t even tell as a human, but the computer can tell now. And that’s completely spurious. And then on top of that, people will get sold on this and they say, yeah, I’m going to believe the machine. And the machine of course makes mistakes. The mistakes that it makes are not randomly distributed; they are going to reproduce all kinds of discrimination and inequality in our society and have those same discriminatory things come out as well – but it was the machine, so therefore, you know, it must be right. And in fact, it’s really easy to pick on the applications of face recognition, but there’s one that’s also been in the news recently: apparently some universities and other schools decided it was a good idea to use computer vision to make sure that the students have not walked away from their computers during exams.

Carrie Gillon: Oh, right.

Emily M. Bender: It’s like, just tell me there’s not a better way to do your teaching practice than to, like, you know – anyway. So that was all really problematic, but you have darker-skinned students who have to find extra, super bright lights in their learning space so that the computer can even see them.

Carrie Gillon: Oh, my God. Of course. I didn’t even think of that.

Emily M. Bender: So, it’s like, this is bad for everybody, but it’s worse for the people who are probably already getting the worst end of the stick of everything else already. Like, yeah, let’s just not.

Megan Figueroa: Right. We tell the computer what the stop signs are. That’s the training set that gets fed into the machine that’s learning these patterns of what a stop sign is?

Emily M. Bender: Yeah, exactly. So it’s sometimes called pattern recognition, but machine learning is the more common term, and people will refer to it as AI. And there’s this hilarious web comic where there’s like a crack in the wall, and someone says that’s statistics; and then someone puts a frame around it, and now it’s machine learning; and then there’s this whole audience in front of the person with the frame around the crack, and it’s like, and this is AI, right?

Carrie Gillon: Yup.

Megan Figueroa: That’s actually really helpful for me.

Emily M. Bender: I don’t want to say that there’s nothing there. Like, the algorithmic work of building the computer systems that can take in all that data, process it efficiently enough, and be able to produce answers on more inputs in a way that is, in some cases, useful – or, in other cases, all we know is that it’s better than chance according to the test data – anyway, doing that is real research, right, and requires a certain skillset. So that’s what machine learning is about. It is a way to train the machines to do pattern recognition, and the claim is that that is an approach to building artificial intelligence. And then you get these people who want to talk about artificial general intelligence, as this big idea of building a system that is not just like, okay, give me training data, now I know how to do that task – but a system that can actually learn new tasks on the fly or be taught new tasks with a very small amount of instruction, the way humans can. And so, this is this idea of what they call artificial general intelligence, which is snake oil. Absolutely snake oil.
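The “labeled training data in, pattern recognizer out” loop described here can be sketched with a toy classifier. This is a deliberately minimal illustration – the 2-D feature vectors and labels are invented, and real computer vision systems are vastly more complex:

```python
# Toy "machine learning as pattern recognition": a nearest-centroid classifier.
# Training = computing one average feature vector (centroid) per label;
# prediction = giving a new point the label of the closest centroid.

def train(examples):
    """examples: list of ((x, y), label). Returns {label: centroid}."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Return the label whose centroid is nearest to `point`."""
    px, py = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - px) ** 2
                             + (centroids[lab][1] - py) ** 2)

# Human-provided labels (the CAPTCHA step, in miniature).
training_data = [((1.0, 1.0), "stop_sign"), ((1.2, 0.8), "stop_sign"),
                 ((5.0, 5.0), "not_stop_sign"), ((4.8, 5.2), "not_stop_sign")]
model = train(training_data)
print(predict(model, (1.1, 0.9)))  # → stop_sign
```

The point of the sketch is the shape of the pipeline: the system only reproduces regularities in the labels humans gave it, which is also why biased labels come back out as biased predictions.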

And now coming back to computational linguistics, right? So, we can use pattern recognition productively in many ways. You can use pattern recognition to do automatic speech recognition, speech to text. You can use pattern recognition to do machine translation. And what comes out is not the kind of artistry and faithfulness that a human translator would produce, but it is a useful thing – especially if it is appropriately contextualized, so that a person reading the output in a language they know, when the input is in a language they don’t know, knows to take it with a grain of salt, because this is pattern recognition and it may have made a mistake, may have picked up a pattern that wasn’t there.

Megan Figueroa: Does this mean this is all kind of like little forms of the Turing test, just repeated over and over again?

Emily M. Bender: So, the closest thing to the Turing test is chatbots. The sort of big thing these days in computational linguistics, natural English processing, is what’s called large language models. And fundamentally a language model is basically a pattern recognition system whose job it is to predict the likelihood of some word or string, given the words coming up to it, or maybe the words around it. That’s all that it is. You give it lots and lots of text, and you either say, keep guessing what’s coming next, and then as you get it wrong, update your parameters so that you do better next time; or, here’s lots and lots of text, we are making a cloze task out of it – we’re going to drop these words out, you have to guess what they were – and then same thing, update your parameters as you’re told what the answer is. And those have now gotten big enough that they can spit out seemingly coherent text based on some kind of a prompt. So that prompt might be a chatbot setup. So, you might tell it something, and then, okay, its job is to guess what comes next. And so, it spits something out and it feels like having a conversation with it, but you’re not, right? It’s just coming out with what comes next. And when I say lots and lots of text – uh, shoot, I don’t have the numbers off the top of my head, I could look them up in our paper, but it’s enormous. It’s web-scale, and it’s the sort of thing that’s only available for a handful of languages and mostly really only done so far at web-scale for English, because, I mean, web-scale for other languages is going to be smaller.
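The objective described here – predict a likely next word given the words before it – can be sketched at miniature scale with a bigram counter. Real large language models use neural networks trained on web-scale text; this toy version, with an invented corpus and simple counting in place of parameter updates, only illustrates the objective:

```python
from collections import Counter, defaultdict

# Toy bigram language model: estimate which word most often follows a given
# word by counting word pairs in a tiny corpus. There is no understanding or
# communicative intent here - only statistics over the training text.

def train_bigrams(corpus):
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1
    return counts

def most_likely_next(counts, word):
    """Return the most frequent continuation of `word` in the training text."""
    return counts[word].most_common(1)[0][0]

# Invented mini-corpus for illustration.
corpus = ("the cat sat on the mat . the cat sat by the door . "
          "the cat saw the dog .")
model = train_bigrams(corpus)
print(most_likely_next(model, "the"))  # → cat
print(most_likely_next(model, "cat"))  # → sat
```

Chaining such predictions produces text that looks plausible because it mirrors the training data, which is the scaled-down version of the “seemingly coherent” output discussed next.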

Carrie Gillon: I totally thought I was talking to a chatbot yesterday. I can’t prove it, but I just, I had a, an issue with our VPN and I had to like contact the company. And I don’t know, just something about the way that every sentence was structured. I was like, I think this is a chatbot, but I’m going to pretend it’s a human so that I’m not accidentally rude to a human.

Megan Figueroa: You said the words “seemingly coherent.” Uh, I feel like we should bring up your paper really quick, cause you also mentioned your paper. Just to give context, it’s called “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” And then amazingly there’s the emoji of the parrot in the title. And I want to have an emoji in a title of one of my papers now. But anyway – so, this paper, I was reading it, and I’m bringing up coherence because you talk about coherence. Can you talk a little bit about what you might mean by seemingly coherent, and how what’s coherent to certain people may not be coherent to others?

Emily M. Bender: Yeah. So, for me, seemingly coherent is actually distinguishing the human reading the text and making sense of it from the computer, which doesn’t have any communicative intent, doesn’t have any meaning it’s trying to express. And for the chatbot that you encountered – so, let’s say that you’re talking to a chatbot. That really should be disclosed, right? This is one of the fundamental things: don’t pretend to be a human and try to fool the humans, except if you’re in the context of, like, an IRB-approved study where you can tell the humans afterwards. But you could imagine setting up that chatbot where it has a particular set of common, like, FAQs that it knows the answer to. And it’s going to try to answer your questions before sending you on to the more expensive human. And so it’s set up to try to classify your questions into the range of possible answers that it knows about. Now, it is still a computer. It has no consciousness. It has no communicative intent. But it is at least programmed to try to provide you specific information on behalf of the company. As opposed to a language-model-based one, which is just going to come up with whatever sounds good, where no one has put any thought into “this is the information we need to convey.” In that context, where it’s just like “I’m out here synthesizing text because that’s what I’ve been programmed to do,” there’s nothing there. No person has sat down and said, “this is the information that we need out in the world – you know, as the company, we want our customers to know this, or, as the public health authority, we need the public to know that.” It’s just, let’s let this program run. And because it’s gotten very good at closely modeling what looks like probable English, what comes out is something that we, as English-speaking humans, say, oh yeah, I know how to make sense of that. And then we immediately, and without reflecting on it, do what we always do with language, which is to imagine the mind on the other side of that, and their motivations and their beliefs, and use that as context to fit the meanings of those words and sentence structures in as, “okay, this must be what they were trying to convey.” So that’s what I mean by seemingly coherent. It’s the humans who make it coherent; what the machines have managed to do is produce a facsimile of that at this point.

Carrie Gillon: So, would a case of the latter be that that bot that turned into a Nazi in like 24 hours?

Emily M. Bender: Yeah. So-

Megan Figueroa: Excuse me. What? What?

Emily M. Bender: This was, oh, what – 2016? I want to say 2016.

Carrie Gillon: Something like that.

Emily M. Bender: Yeah. Before we did the terrible thing with our own elections. Yeah. So Microsoft had this idea – it came out of Microsoft – to create a Twitter-based bot that would learn from the people interacting with it. And the idea was, wouldn’t it be cool that it can sort of learn to talk like the people that are tweeting at it. And so, within 24 hours, basically Nazis got ahold of this and started feeding it all of these hateful, racist messages. And of course, it did what it was programmed to do, which was to tweet back the same kinds of things. And so it wasn’t even out for 24 hours before they yanked it. And this is one of those, like, how did you not know that that was going to happen?

Carrie Gillon: Do you think it’s naivety, or…

Emily M. Bender: So, I mean, it could be that the people who programmed that, like, just aren’t very online and don’t know what’s out there on Twitter. But it could also be – I think that, you know, there’s a whole culture within tech of salesmanship. Basically, you know, it’s all about talking about how great your product is going to be and the problems it’s going to solve and the, you know, the cool things it’s going to be able to do. And there are very few, if any, incentives to spend time thinking about what could go wrong, and – bare minimum, sort of capitalistic view on this – how are we going to keep from embarrassing the company with it?

Carrie Gillon: That’s the part I don’t understand, but they don’t even do that.

Emily M. Bender: Yeah, exactly. So, I’ve gotten to work over the last few years with Batya Friedman, who’s in the iSchool here at UW, and she is one of the leading developers of a framework called value sensitive design, which is a bunch of techniques for engaging with stakeholders – so, people who are going to use the technology and people who are going to be affected by the technology, whether or not they’re using it – and seeing what their values are, and then figuring out how to design to support those values. And one of the ideas that came out of value sensitive design – the paper, I think, is Nathan et al. 2007 – is something called design noir, where you basically say, okay, imagine that this technology became pervasive and everybody’s using it all the time. Now imagine you are somebody who is in some way marginalized. Maybe you have limited mobility, maybe you’re a small child, maybe you are experiencing poverty or homelessness. How do you move through this world where the technology has become pervasive, and what could go wrong? And sort of thinking through those things would allow you to say, aha, okay: we’re going to put this thing out on the web and everyone’s gonna have fun with it and start tweeting at it. Who does “everyone” mean, and what could go wrong, and whose values should we be looking out for in this context?

Megan Figueroa: Why aren’t we all starting from that foundation?

Emily M. Bender: Yeah. So, the deal is the large language models are recognizing patterns in the form of language. And when we as humans encounter that, we see that form, and those are the same forms that we use in communicating with each other to convey communicative intent. And so we make sense of it in that way, even if we know it’s an artificial agent, right? And so you get all these people who are building this technology. They are excited about the idea of turning it into a general purpose – so remember artificial general intelligence, this idea. With the very large language models you can say, okay, absorb all these patterns; now we are going to show you a couple of small patterns. Like, if I say “what’s two plus two,” you’re supposed to say “four” – and give it two or three examples like that. And then it can do a few of them to like 80% accuracy. I think for single-digit addition it’s fine, but if you go up to three digits, it falls apart. So that gets called few-shot learning. And then there’s something else called zero-shot learning, which I find to be just like this incredibly macho term.

Megan Figueroa: Yeah.

Emily M. Bender: So, zero-shot learning is the idea that the system has been given no explicit instruction on the task. You just ask it a question or give it the prompt and it goes with it. And so people see this – and it really is a large system manipulating form, sort of “what’s a probable string given its training data” – but it looks like it’s doing reasoning or learning tasks on the fly. And so people actually ask with a straight face: if we keep building larger and larger language models, are we going to get AGI, artificial general intelligence? And from where I sit, that makes about as much sense as running faster and faster and thinking you’re going to learn how to teleport.
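A hypothetical sketch (mine, not from the conversation): the difference between few-shot and zero-shot prompting, as described above, is just what goes into the prompt string handed to the model – no weights are updated in either case. The function names and the Q/A format here are my own illustration, not any particular system’s API.

```python
# Few-shot vs. zero-shot prompting: the model only ever sees a string.
# "Few-shot" means the prompt carries a handful of worked examples;
# "zero-shot" means it carries none.

def few_shot_prompt(question: str) -> str:
    """A few-shot prompt: a couple of worked examples, then the query."""
    examples = [
        ("What is 2 plus 2?", "4"),
        ("What is 3 plus 5?", "8"),
        ("What is 7 plus 1?", "8"),
    ]
    demos = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{demos}\nQ: {question}\nA:"

def zero_shot_prompt(question: str) -> str:
    """A zero-shot prompt: no examples at all, just the query."""
    return f"Q: {question}\nA:"

print(few_shot_prompt("What is 4 plus 4?"))
print(zero_shot_prompt("What is 4 plus 4?"))
```

Either way, the system completes the string with whatever continuation is probable given its training data – which is why few-shot arithmetic can look like it works for single digits and then fall apart at three.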

Carrie Gillon: I love that analogy.

Megan Figueroa: That’s perfect. So, going back to the values – do you think people are asking the wrong questions with this? Like, what questions should they be asking instead?

Emily M. Bender: I think they’re asking the wrong questions. And I think the problem is: if this whole academic area of research towards artificial intelligence were just academic – just a few people in computer science departments plugging away on this problem – then fine, go ahead. We want to have a diversity of research questions. But that’s not what it is, right? This is driving lots and lots of money and affecting people’s lives all over the place. And the problem is that we’ve ended up in a place where AI is supposed to solve everything, rather than being a tool that can be appropriately scoped and vetted for specific use cases. And so I think that’s where it’s gone off the rails.

Carrie Gillon: Yeah. So, speaking of going off the rails, where do you think it’s gone off the rails in terms of language data?

Emily M. Bender: So, off the rails: first of all, assuming that large language models can do anything and therefore are artificial intelligence in and of themselves makes no sense. So that’s one thing. But, much more importantly, our general approach to dealing with any language problem has become “we can do better by getting more and more data.” We have now gotten into territory where the amount of data we believe is required – and that’s a “we” that I don’t really include myself in – is too big for us as humans to actually understand what’s in there. At the same time, we understand that machines, when they’re doing pattern recognition from the data, are recognizing all the patterns in there, including all the racist garbage, and all the sexist garbage, and all the ableist garbage. That’s going to play out in various, you know, subtle and not so subtle ways. So there’s a lot of excellent work on this – Black women scholars are definitely leading the way; one important work is Safiya Noble’s book, Algorithms of Oppression. And, you know, you’ve got these patterns that come both from what people have written on the web and from the questions that people ask. And there’s this attitude from a lot of the tech world that says, well, the data is just the data; that’s the world that’s out there. And it’s like, no, it’s not, actually. And that’s part of–

Megan Figueroa: No, no, I hate that. I hate that so much. I hear it so much in psycholinguistics too. “The data is the data. What can we do about it? It’s telling us something racist, but it’s the data.”

Emily M. Bender: Deb Raji has a wonderful opinion piece that came out in the MIT Tech Review at the very end of last year, talking about how that is absolutely not the case and how, you know, the data lies all the time, especially about Black women. I highly recommend that, and I’ll send you the link for it. So it’s not just that, you know, “the data is the data.” And that’s part of what we do in the Stochastic Parrots paper. We say, okay: what ends up being the data? Who’s participating on the internet in the first place? That’s not representative. Who gets to continue to participate comfortably, and who’s getting pushed out by the moderation policies on platforms that tend to basically make it harder for marginalized people to stay, and somehow really don’t like to come down on the people who are harassing others? Then there’s the second layer: how was the data actually selected? Because it’s never everything. And then finally there are some filtering steps, because the people doing this large scraping approach know that they’re getting some garbage, so they want to try at least to remove things. But that tends to be very naively done, with word lists that include things like the word twink, and then you lose all of the sites that have, you know, people inhabiting LGBTQ identities speaking about their own lives positively. Oh, it uses the word twink? Well, this site’s not safe, we can’t use this site. So all of that.

So the data is never neutral. And then on top of that, algorithms aren’t neutral, right? There are design decisions in algorithms that represent values, so that’s not neutral either. So instead of starting with “how can I make money?” or “what cool thing can I build?” – what if we started instead with what needs are there? Not what needs do I think are there, but, given the populations that I mean to serve, what do they tell me that they need? And then, how can I build to support that, where the people I’m serving are really the ones whose needs and ideas are centered? It’s not about what’s the coolest thing I can build and what’s the most recognition I can get for building, like, the one true solution that’s going to solve everything.

Megan Figueroa: So, can you tell us a little bit about the effect of your paper at Google, and what happened – I guess last summer at this point, or winter?

Emily M. Bender: So this starts last summer. One of the things that happens at computational linguistics conferences is something called shared tasks, where someone will say, “Okay, here’s the dataset. Now, everyone who wants to participate can make their algorithm to try to score best on this task.” And there was a conference in Switzerland last summer that had an associated shared task called GermEval, because it’s for German, and one of the shared tasks last summer was initially framed as predicting IQ scores from text.

Carrie Gillon: Uh, what?

Emily M. Bender: Yeah, exactly. And so a bunch of us on the corpora mailing list – which is this really interesting email list that’s sort of a meeting point between corpus linguists and natural language processing people, like one of the few places where those groups mix – a bunch of us were saying, “This is a really bad idea.” And it falls into that digital-phrenology class of bad idea: you cannot measure IQ from short snippets of text. The information’s just not there – a human can’t do it, you know? But in the machine learning world, you have a dataset where you’ve got a bunch of IQ scores and short texts written by the same people, so you can set that up as input-output and probably do better than chance, because IQ scores are going to be correlated with something that’s also correlated with language variety, right? I raised a stink, and other people did too, and the net result was that they did reframe the description of the task a little bit. And then there was this panel at the conference discussing the ethics of the task. It was me, a philosopher of ethics, a psychologist, the shared task organizer – who was a PhD student, so he was really on the spot, I think – and one of the other shared task organizers. Anyway, we had this meeting ahead of time, where I suggested that we each propose something of our own writing to read, so that we’d know where everyone else was coming from. And the philosopher says, “Oh, no, no, no. I think it’ll be more fun if we just come at it cold.” I was like–

Carrie Gillon: Oh, God, of course.

Megan Figueroa: Oh,

Carrie Gillon: Fun. Fun is the most important part.

Emily M. Bender: Yeah. And somehow in that context, in that pre-meeting, I mentioned that IQ tests are racially biased, and the psychologist was like, “I have no idea what you’re talking about. I’ve never heard of that.” Yeah.

Carrie Gillon: This is common knowledge.

Emily M. Bender: I ended up with the assignment of going into the psychology literature to find an article saying that, and what I found was this enormous mess. So there are the people doing race science with IQ tests, who firmly believe, and are trying to show, that the thing this test is measuring is, first of all, intelligence, and secondly is correlated with race – which also requires assuming that race is a thing, instead of racism being a thing. So there’s that literature. And then the people arguing against them are taking this deficit perspective: that racialized people aren’t doing as well because they have impoverished learning environments. And I’m like, where are the people saying, “no, this test is basically just testing for–”

Carrie Gillon: Whiteness.

Emily M. Bender: “Whiteness, white culture, right? White formative experiences.” And I eventually, through ranting on Twitter, found something along those lines and shared it. And the actual panel event was just a terrible experience. There were two moderators, one of whom was a woman, and I was the only other woman there. And the philosopher sort of led with – you’ve got to do a little introductory thing, right – and his framing of it was, “is some knowledge dangerous?”

Carrie Gillon: Oh, I should have known that the philosopher was going to do that.

Emily M. Bender: Oh, it was, yeah. And then his first example of dangerous knowledge involves somebody being told that they were unattractive in some way. He uses, of course, a female person being told that she’s unattractive – and he uses the name of the female moderator.

Carrie Gillon: What?

Megan Figueroa: Oh,

Carrie Gillon: I’m sorry. I’m sorry. I

Megan Figueroa: Oh,

Carrie Gillon: I’m normally not the one who is as shocked as you – it’s usually Megan – it’s like–

Megan Figueroa: I know, it is.

Carrie Gillon: oh my God.

Emily M. Bender: So anyway, all that to say that the literature on standardized testing is buried so deeply down the race science hole that apparently the only way people have managed to come up a little bit out of that hole is the deficit thing. And it’s like, no, there’s a broader picture here, right?

Carrie Gillon: It’s so strange to me, because I feel like this information has been known since I was a teenager, which is, you know, many decades ago now. So, like, what? How are academics not even remotely aware of this? I’m just so angered and so upset.

Emily M. Bender: So the psychologist on the panel – I’m there trying to get this point across – said something like, “well, you know, my field is defined as the field that studies IQ, and nobody here believes blah, blah, blah.” And I was just like, okay, so you are so deeply committed to these racist ideas that you just can’t see them.

Megan Figueroa: Absolutely.

Emily M. Bender: It’s

Megan Figueroa: It’s so much cleaner, and it gets you off the hook, because you don’t have to put hard work in when you can believe something like that. I mean, at this point it’s just so embarrassing.

Carrie Gillon: You also don’t have to like interrogate your own whiteness, you know?

Megan Figueroa: Exactly. So yeah, absolutely.

Emily M. Bender: And actually – I’ll come back around to Google in a moment – but one of the things that, fortunately, has calmed down: for the first couple of years that I was working in this space, anytime you wanted to talk about AI ethics, you ended up with this stupid trolley problem, right? This philosophical thought experiment: if you flip the switch, then only one person dies, but you’ve caused them to die, as opposed to if you don’t flip it, right? And somehow that gets applied to self-driving cars – should the self-driving car avoid the baby but hit the old lady, or something. It’s so far removed from most of the harm that’s actually going on, and it always sucks all the air out of the discussion. And I finally realized what’s going on there: that thought experiment does not challenge anybody’s privilege.

Megan Figueroa: Right.

Emily M. Bender: It’s really easy to spend time talking about that, because, yeah. Anyway, so what happened at Google? So we wrote this paper, and the paper initially was written very quickly. It started as a Twitter DM conversation between Dr. Timnit Gebru and me. She wrote to me and said, “Hey, have you ever written about sort of the possible risks of large language models, or do you know of anybody who has?” And I said, “no, why?” She said, “well, I keep having these conversations at work and pointing people towards your tweets, but it’d be really nice to have a paper.” And I said, “oh yeah, that would be a good paper to have. Here are the, like, seven things I can think of off the top of my head.” And then the next day: “This actually feels like a paper. Do you want to write it together?” And she said, “oh, I dunno, I’m kind of busy.” And we were looking at the FAccT conference deadline, which was like a month away at that point. And I said, “well, here’s an outline.” And she went to her team and got Meg Mitchell and three other people on board, who are all wonderful, amazing scholars with sort of different disciplinary backgrounds. And I brought on board my PhD student, Angelina McMillan-Major, and put that outline in Overleaf. And everyone just started contributing things from their different expertise, and the paper came together, and we got it submitted by the deadline. It went through Google’s internal publication approval process. And then it was, okay, fine – that was a busier month than we expected, but nice paper, let’s see what happens. And at that point we thought the notable thing about this paper was that it was going to be the one with the emoji in the title, right. And then, while we were waiting – so it was due October 8th for the submission to the conference, and in computer-science-affiliated conferences, the conferences do full paper review.
So that’s like a dense eight- or nine-page paper, and the reviewers read the whole thing. And then, if it’s accepted, you can do some revisions, but the revisions are never checked, and it goes into the proceedings. Well, I shouldn’t say never checked – this conference actually had a shepherding process, but our paper wasn’t shepherded. So we’re waiting to hear back whether it’s accepted, and out of the blue, the Google coauthors are told, “retract this paper, take your names off.” And initially they’re told nothing else – just, “you must do this.” And they were told that the Friday before Thanksgiving, with the deadline to decide being Thanksgiving Friday – US Thanksgiving, right? – when all Googlers in the US were actually supposed to be off that entire week. And all of the people involved in this story are in the US, so it wasn’t someone outside the US not tracking when our weird autumn holiday is. Initially the Google coauthors were like, well, okay, I guess we’ll take our names off of it. And we at UW were like, well, you know, we feel kind of weird claiming all this work as our own, but we’ll be guided by your decisions. And they said, well, we want the paper out there in the world, so just go ahead and publish it as just you two. And then Timnit said, “you know, this is not okay. The way we’re being treated is not appropriate.” So she spoke up and she pushed back. And she said, “look, I need to know how this decision was made. I need to know what the objections to the paper were, and who was making these objections. If I know all that, then okay, I’ll go along with it and take my name off. But if not, then this is an untenable way to be a researcher.
Because to be told by shadowy forces, ‘no, we’re not going to publish your stuff,’ is not okay.” And so she said something quite diplomatic, like, “you know, if I can’t know these things, then I will work with my manager to find an appropriate time to move along without disrupting my team.” The next thing that happens is she finds out through her reports – because she was a manager herself – that they were told that she had resigned. She’s like, yeah, I did not resign. So it gets called being “resignated.” This was the Wednesday after Thanksgiving. She says “I’ve been fired” on Twitter Wednesday night; Thursday morning, there start being news articles about it. She did an absolutely amazing job, and continues to, of keeping the focus of all that media attention on the discriminatory practices within big tech, and it has been amazing. For me, the first couple of days were like: okay, I’m talking to the media, this is super intrusive, it’s really weird to have all of this intense focus on my not-even-finished paper – because you always do some revisions before the final version, and we still didn’t know if it was accepted to the conference. But then within about 48 hours, I figured out what it was that she was doing and how I could support it. Since then, you know, it’s been work. And I was like, okay, well, this will be a busy couple of weeks – I’m still talking to the media, you know? So it’s been exciting and interesting. And I think it’s really valuable for the world at large to know the contents of the paper, and what’s going on with the problems that come when you have pattern recognition at scale applied to language, presented to people in this way where it looks like it may have been a person, or some other actual intelligent agent, crafting that language.
But also, it’s been super important – and just a wonderful experience – to be able to do something to help work towards a more equitable workforce. So it’s been a real pleasure, but with this sort of sadness at the core, because, you know, she shouldn’t have lost her job over this. And then her colleague Meg Mitchell was also fired, about three months later.

And when you look at the paper, it’s kind of – why? It’s a survey paper, right. We gathered together a bunch of perspectives and we weave them into a narrative; the value-add is that gathering together. There’s no new analysis, we didn’t collect any new data. It’s really about–

Megan Figueroa: Oh, it’s a lovely paper. As someone who doesn’t understand everything, I didn’t know that there was such a huge environmental impact. All these things – it’s just a really important paper, and, you know, it’s some hard truths.

Emily M. Bender: And there are no particular pointed remarks about Google in there. It’s about technology that, yes, Google is building, but so are other institutions and companies. And one of the first journalists we spoke to was Karen Hao at the MIT Tech Review, who’s an absolutely amazing tech journalist. And she approached it with this question: “I want to find out what it is in this paper that made Google so mad.” I’m like, “well, here you go, good luck. You’re not gonna find it in the paper.” But I think part of what’s going on there is that Timnit and Meg were both working really hard to improve the work environment for themselves and the diverse team that they had constructed. And that causes friction – not because it’s their fault in any way, but because they’re pushing back against, you know, racism and misogyny and ableism and all of that. And I think the reason they ended up being fired had much less to do with the paper and much more to do with the company not being able to handle the important work they were doing internally.

Carrie Gillon: So this was just a pretext. I didn’t even think of that.

Emily M. Bender: Yeah. I mean, I don’t know the details internally, but it looks like it to me. You don’t get fired over writing a paper.

Carrie Gillon: No, it made no sense. When I first saw other people tweeting about it, it was so bizarre. I couldn’t even make sense of it, because it doesn’t make sense – there’s no reason to have fired her. And then when they fired Margaret Mitchell a few months later, I knew it had to be something shady. I just couldn’t put my finger on what the shadiness was.

Emily M. Bender: And you know, early on I said to Timnit, “I’m really sorry.” She’s like, “what are you doing? There’s nothing to apologize for here.” And every once in a while we sort of check in with each other, and it’s always, “you know, this is terrible, but it’s been a real pleasure to be on this journey with you,” and I wholeheartedly agree. They’re fantastic researchers – Dr. Gebru, Dr. Mitchell, their whole team are just amazing. Google could have supported and celebrated them and gotten all the brownie points for having that team, and instead they just kept doubling down on terrible choices.

Carrie Gillon: Yeah, that’s what a lot of people were saying on Twitter too – this was their shining jewel. It was giving them all this credibility, and now they have zero. I just don’t get it. I don’t get why they would do that.

Emily M. Bender: You know, it’s just like the psychologists who are completely wedded to this ridiculous view of IQ tests. I think that once you have a certain kind of power, it is really hard to convince yourself – or unconvince yourself – of certain things. And power protects itself, you know?

Carrie Gillon: Power does protect power. Yes.

Megan Figueroa: Why do you think so many people are able to disentangle AI from human behavior, and from the fact that humans are the ones that – you know, the training sets for machine learning are sets of human data. Why do you think it’s so easy for so many people to conveniently forget that, or not know it in the first place?

Emily M. Bender: So that is an excellent question, and I think it connects with these ideas around IQ also. So, what is AI supposed to be? And here I’m influenced very much by a book by Joseph Weizenbaum from 1976, which is like completely applicable today, called Computer Power and Human Reason. He’s the guy who made the ELIZA program – you’ve maybe seen that, the Rogerian-psychotherapist chatbot from the 1960s. And what he saw is that people were taken in by it, even though they knew it was a computer, because it was presenting the stimulus that we recognize as coming from other humans. Having been the programmer of it, and seen people get hoodwinked by it, and then seeing what the people around him were saying about what AI was doing, he wrote this wonderful critical book about why this is a problem. And one of the ideas that comes out of that book is that the whole idea around artificial intelligence – and I think also IQ as a measure of intelligence – takes this notion that the highest form of cognition is abstract logical thinking that is above the fray of any specific data. It can be applied to particular data, but the data is just the messy details, and the real value, the real hard problems, the real shining jewel, is this abstract algorithm that can do the logical thinking. In Weizenbaum’s time, that was literally programming first-order logic into computers. And now it’s these neural nets and deep learning, where the people building the machines have difficult work to do around making the algorithms efficient enough that they can train on lots and lots of data – but the whole conceit is that the algorithm is actually learning itself from the data, so the human doesn’t even have to do that part.
And I think there’s this focus on the abstract, quote-unquote, learning capabilities, placing all of the value there. One of the things we’re starting to see nice pushback on, in natural language processing and other areas that machine learning is applied to, is people pushing back on the way that dataset construction is devalued while models and algorithms are treated as where it’s at. And I see this all as part of the same problem. So you’ve got to keep the people in focus, and keeping people in focus means, you know, data is people, right? And the systems affect people. But also, instead of looking to machines as a model of what it means to be intelligent, actually look at people, and see people as embodied and social and emotional – all of these things are part of what it means to be human, and they can’t be reduced to the abstract intelligence stuff.

Megan Figueroa: Absolutely a hundred percent.

Carrie Gillon: Can you explain what a neural net is? This is one of the areas I feel like I understand the least.

Emily M. Bender: All right. So, this is an outsider’s understanding of neural nets, because I don’t work on them directly. A neural net is basically a system for going from inputs to outputs, like any other computer program, that is inspired by a very simplistic notion of what a neuron is. So the neuroscience of the 1970s and 1980s – and again, total outsider perspective here; this is just what I know second- and third-hand – had this idea that neurons are cells that, when they get enough input, will send an electrical spike on to the other neurons they’re connected to. And so a neural net is a series of pieces – they’re not physical circuits, because it’s an abstract computer program – but the idea is that individual elements within the net, if they get enough input from the things they’re connected to on one side, will send their pulse to the things on the other side. This idea has been around since the 1980s, but back then computers weren’t powerful enough: the clock speed wasn’t fast enough and the memory wasn’t big enough to really get much out of it. Whereas now you can make very, very big ones of these, with lots of so-called neurons inside of them, and train them – have them look at lots and lots of data. And the idea is that, if their job is to predict what word comes next, they start from a sort of random initialization and say, “okay, you’ve told me ‘the cat is on the,’ and I’m going to predict ‘window.’” And then what’s actually there? “Oh, it’s ‘mat.’ Okay.
So let me go back and tweak a few parameters so that ‘mat’ is what I would have predicted as the most likely thing here.” And tweaking parameters has to do with how much input a given unit has to get from a previous one, or maybe which units are connected to which other ones – but now I’m making stuff up, because that’s beyond what I know about these things.
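A hypothetical toy illustration (mine, not from the episode): the “simplistic notion of a neuron” described above – a unit that fires only if the weighted sum of its inputs crosses a threshold – plus one crude hand-made parameter tweak of the kind that training loops automate at enormous scale. All the numbers here are made up for illustration.

```python
# One artificial "neuron": it sends its pulse onward only if it gets
# enough input, i.e. the weighted sum of its inputs crosses a threshold.

def fires(inputs, weights, threshold):
    """Return True if the weighted sum of inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

inputs = [1.0, 0.0, 1.0]          # which upstream units are active
weights = [0.2, 0.9, 0.1]         # random-ish initialization
threshold = 0.5

# Weighted sum is 0.2 + 0.1 = 0.3, below the threshold: no pulse.
print(fires(inputs, weights, threshold))

# Suppose the desired output was a pulse ("mat", not "window").
# "Go back and tweak a few parameters" so the unit would have fired:
weights = [w + 0.2 for w in weights]

# Now the weighted sum is 0.4 + 0.3 = 0.7, so the unit fires.
print(fires(inputs, weights, threshold))
```

A real network has millions or billions of such parameters, and the tweaks are computed automatically from the prediction error rather than by hand – but the mechanism is this same “enough input, send a pulse” idea.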

Carrie Gillon: Oh, okay.

Emily M. Bender: Oh, and the people who are doing machine learning love to say it’s learning just like babies do. And like–

Megan Figueroa: Yes, they do.

Carrie Gillon: No.

Emily M. Bender: Babies learn language by being shown just lots and lots and lots of the form. Not! Like.

Carrie Gillon: Yeah. Like, on what planet is that relevant?

Emily M. Bender: And if you spend any time in the language acquisition literature, you find all these really cool things about how child language learning is predicated on joint attention and co-situatedness. It’s not just, “I hear lots of words, and so I learn those words.” That’s not how it works.

Megan Figueroa: Exactly. I know there’s definitely a disconnect between the language acquisition people and the machine learning people. And then when they do try to get together, I still think they’re not asking the right questions anyway. So that’s another issue.

Emily M. Bender: Yeah.

Megan Figueroa: This has been a really interesting conversation. I’m really excited for our listeners to hear it.

Emily M. Bender: I have had a lot of fun too.

Megan Figueroa: Yeah. I don’t think we think about this stuff enough. I could say that about everything we talk to our guests about, but.

Emily M. Bender: Yeah, Everybody should think about linguistics all the time.

Megan Figueroa: Yes.

Carrie Gillon: Before you go, you had something to tell us about medical interpretation.

Emily M. Bender: Oh, yeah. So, I enjoyed the show that you just posted this morning with Elena Costello, where you were talking about medical interpretation in the context of COVID vaccines. And I just wanted to give you a report back on what it was like at the site that I volunteer at, which is Lumen Field in Seattle – supposedly the biggest civilian-run mass vaccination site in the US. I was there most recently on Wednesday, when we were going to administer 9,000 to 11,000 shots that day, so super exciting. And the way the process works there is, when people arrive outside, they are greeted first by somebody with this big, giant lollipop-shaped thing – a big pink circle on a stick – that asks, in a bunch of languages spoken in the Seattle area (Spanish and Vietnamese and Russian and Amharic and Korean and Tagalog, I think, and Cantonese), whether you need interpretation. And there’s even a picture of the sign for “interpretation” in American Sign Language. So they get asked there, and at every point where you encounter somebody, there’s this offer: would you like interpretation services? But, to the points from your previous guest, the interpreters who are there on site are volunteers – just people who happen to have competence in these languages and can, you know, try to do the interpretation. And then, for languages that are not handled on site, one of the volunteer roles is basically to hold the device that connects to the offsite interpreter, and I imagine those people are professionals. In all the times I’ve volunteered there, I’ve seen like two or three people actually having an interpreter with them, as opposed to just having, you know, older parents with adult children doing the interpretation. But it’s happening, at least in Seattle. So that’s the good news.

Megan Figueroa: That is definitely good news, especially since you’re serving so many people.

Emily M. Bender: Yeah. Yeah. Super exciting is so much fun to get to do that work.

Megan Figueroa: Yeah, that is. Well, thank you so much for adding that context for us. And we–

Emily M. Bender: my pleasure.

Megan Figueroa: –leave you with a final message: don’t be an asshole.

Carrie Gillon: Don’t be an asshole.

Emily M. Bender: Very good words all the time.

Megan Figueroa: Yes. Thank you.


Carrie Gillon: All right. So this month we would like to thank Caneva.

Megan Figueroa: Yay.

Carrie Gillon: Stephanie Zielenkievicz and Drew Crosby.

Megan Figueroa: Thank you so much, you three. That’s so lovely. Thank you all so much. And if you would like to be a Patreon supporter, you can go to–

Carrie Gillon: And you can get a bunch of bonus episodes – 40-plus at this point.

Megan Figueroa: That’s where all the deep dark secrets are. So get in on it.

Carrie Gillon: Definitely all the salty goodness is in there.

Megan Figueroa: Yes.
