Probable Causation


Episode 63: Elizabeth Luh

Elizabeth Luh

Elizabeth Luh is a Post-doctoral Fellow with the Criminal Justice Administrative Records System (CJARS) at the University of Michigan.

Date: December 21, 2021

Bonus segment on Dr. Luh’s career path and life as a researcher.

A transcript of this episode is available here.



Episode Details:

In this episode, we discuss Dr. Luh's work on detecting racial bias in police searches:

“Not so Black and White: Uncovering Racial Bias from Systematically Misreported Trooper Reports” by Elizabeth Luh.


OTHER RESEARCH WE DISCUSS IN THIS EPISODE:


Transcript of this episode:

Jennifer [00:00:08] Hello and welcome to Probable Causation, a show about law, economics, and crime. I'm your host, Jennifer Doleac of Texas A&M University, where I'm an Economics Professor and the Director of the Justice Tech Lab. 

 

Jennifer [00:00:18] My guest this week is Elizabeth Luh. Elizabeth is a Postdoctoral Fellow at the University of Michigan, where she works with the Criminal Justice Administrative Records System, or CJARS. Elizabeth, welcome to the show. 

 

Elizabeth [00:00:30] Thank you. 

 

Jennifer [00:00:31] Today, we're going to talk about your research on how to detect racial bias in police stops. But before we get into that, could you tell us about your research expertise and how you became interested in this topic? 

 

Elizabeth [00:00:41] Sure. So I would call myself an applied microeconomist with a focus on labor and public economics, and my specific research interests are policing and the criminal justice system. So specifically, I look at disparate treatment and racial bias, and then at the economic consequences of this disparate treatment. I actually just stumbled on this topic late one night when I was searching the internet for some cool data sets to potentially do some research on. The Stanford Open Policing Project announced that they were releasing their data on motor vehicle stops by police across the United States. So when I was scrolling through their website, I noticed that for Texas, there was a small footnote that said that highway troopers in Texas had been caught misreporting minority civilians as far back as 2010, and this was figured out in 2015. And so, in that data set, they were going to be estimating drivers' race. And so I thought this misreporting problem in Texas would be a very interesting research project. That's kind of how I found this topic and how I started researching it. 

 

Jennifer [00:01:43] Love it. So it's a combination of Twitter and footnotes. 

 

Elizabeth [00:01:46] Yeah.

 

Jennifer [00:01:48] Excellent story. 

 

Jennifer [00:01:50] Okay, so your paper is titled "Not so Black and White: Uncovering Racial Bias from Systematically Misreported Trooper Reports." And in it, you consider the discretion that police officers have when reporting the race of civilians they interact with. And you're specifically considering state troopers in Texas, as you mentioned. So let's start with some context. What types of interactions are you considering and what information do troopers record about these incidents? 

 

Elizabeth [00:02:16] Right. So in Texas, I'm looking at highway searches, and I think this is advantageous for two reasons. The first is that, unlike in highway stops, troopers don't know if the motorist is guilty prior to searching the vehicle. With speeding citations, for example, if you're going 75 miles per hour in a 60 mile per hour zone and the trooper stops you, the trooper already knows that you were speeding, so he knows you're guilty. Whereas with highway searches, the trooper doesn't really know that. So guilt is kind of unknown. The other nice thing about highways is that cars tend to travel much faster than in residential areas. So troopers are actually less likely to observe the motorist's race prior to flagging down the motorist, which means stops are less likely to be selected on race. So those are the two main reasons why highway searches are kind of a nice scenario or context to study racial bias. 

 

Elizabeth [00:03:02] Troopers report a lot of information from these incidents. At least in Texas, they do. So they record a lot of stop information: date and time of the stop, the specific location of the stop, the violation that triggered the stop, so think speeding or failing to signal a lane change, whether a search was conducted, for what reasons the search was conducted, the search outcome, so whether or not any contraband was found and sometimes what contraband was found. And then they also record a lot of the driver's personally identifiable information: full name, recorded race, gender, and home address. These are all very important for my study, for reasons I'll elaborate on later. 

 

Jennifer [00:03:38] What are troopers looking for when they're searching? Is it drugs, guns? What would prompt a search? 

 

Elizabeth [00:03:44] Yeah. So drugs and guns are all things that are not good to be found in a search if you're a civilian. They also look for large amounts of money, especially because we're by the border. If someone is carrying a lot of cash, for example, one big concern is that they're handling drug money. But the two main things are really just drugs and guns, like you said. 

 

Jennifer [00:04:07] Okay, so this type of data on police-civilian interactions has been used in many studies to try to test for racial bias. How do those studies typically use this information, and what have they found in the past? 

 

Elizabeth [00:04:19] So many of the studies use the search information - whether a search is conducted and the search outcome - along with the recorded driver's race and then probably a bunch of other driver and stop characteristics, so think the location or time of day. The main paper that studies racial bias in highway searches, though, is Knowles, Persico, and Todd, which I consider to be the seminal paper on racial bias in highway searches. Other papers that have followed and pushed the frontier forward, I think, are Antonovics and Knight and Anwar and Fang, which look at characteristics of the officer, for example. All these studies use what's called the hit rate test. The hit rate test builds off of Becker's model of racial bias, which looks at, in equilibrium, the profitability of the search. The idea is that officers who are unbiased are going to adjust their search behavior so that all the searches across race are equally profitable. So pretty much, if someone is racially biased, you're going to find unequal search success rates across motorist race. Whereas if there is no bias, then the search success rates are going to be equal across motorist race. Knowles, Persico, and Todd find in their paper that there is no bias against African-Americans, which is surprising, I think, given the present context with the Black Lives Matter movement. They do find evidence of bias against Hispanics, but they're underpowered. There's also a whole bunch of other literature that looks at racial bias outside the context of highway searches. Goncalves and Mello have a very great paper that was recently published, I think in 2021 - it's been a working paper for a while, so I might get the date wrong - and Anbarci and Lee look at officers' discounting of speed. So are officers more likely to write down a lower speed for whites compared to minorities, and they find that that's indeed true. There are also other tests, pioneered by Grogger and Ridgeway, called the veil of darkness test, where they look at the change in officer behavior to measure bias, using visibility at night as their exogenous change. So all these papers look at civilian stops and racial bias. 
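
To make the hit rate test concrete, here is a minimal sketch in Python. The data frame and column names (driver_race, contraband_found) are illustrative placeholders, not the actual fields used in these papers.

```python
import pandas as pd

# Minimal sketch of a Knowles-Persico-Todd style hit rate comparison.
# Column names below are hypothetical, not the Texas DPS field names.
searches = pd.DataFrame({
    "driver_race": ["white", "white", "white", "hispanic", "hispanic", "hispanic"],
    "contraband_found": [1, 0, 1, 0, 1, 0],
})

# Under the unbiased benchmark, the share of successful ("hit") searches
# should be roughly equal across recorded driver race.
hit_rates = searches.groupby("driver_race")["contraband_found"].mean()
print(hit_rates)
```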

 

Jennifer [00:06:21] Yeah. And so the basic challenge that researchers are trying to overcome is that you want to be able to see if kind of similar black and white drivers might be searched or stopped at different rates. And in general, if you're just looking at raw correlational data, we might worry that white and black drivers might have different search or stop rates because they're behaving in different ways. Maybe they live in different neighborhoods, have different wealth levels, or, right, drive different cars or something like that. And so all of these studies are trying to find ways to compare very similar black and white drivers - the veil of darkness test is a great example of that. 

 

Elizabeth [00:06:59] Yeah, it's always trying to disentangle the racial bias from statistical discrimination. 

 

Jennifer [00:07:03] Yeah. Other things might be correlated with race that the officer can see but the researcher can't see. And so those hit rate tests in particular, in addition to all the academic studies, I think they've just become really popular tests for police departments to use as a way to test for racial bias. That's become sort of a really normal test to do with local data to see if you've got a bias problem - see if the results of black and white searches are equally productive, I guess. You said profitable, but in this context, of course, police aren't profiting. The idea is basically that if you're finding contraband in 30 percent of the searches you do on white drivers, then you'd also want to see a 30 percent success rate for black drivers. Otherwise, these officers should be adjusting who they search somehow, if they're really just trying to maximize the success rate. So these hit rate tests are important, I think, in the current literature, which is why your paper is so important. 

 

Jennifer [00:08:03] So, you know, detecting racial bias becomes much more difficult using these kinds of data if police are misreporting race, since race is such an important variable. And so that's the problem you focus on in this paper. So why would troopers misreport civilians' race in the first place? What's the story you have in mind for why you might expect racial bias to affect misreporting? 

 

Elizabeth [00:08:22] So as you kind of mentioned before, hit rate tests are actually used quite commonly in law enforcement agencies as kind of a quick and dirty way of looking at racial bias. It's very easily understood, and it's very easy to present the statistics - you just look at the search success rate across race. So the story I have in my mind is that there's a biased officer who needs to hide his racial bias. He knows about the hit rate test because his law enforcement agency uses it as a measure of racial bias, and he knows that he needs to adjust his search behavior in some way so that he doesn't get accused of racial bias. He can do this in two ways. One, he can search fewer minorities, so he needs to be a little bit more prudent about his search decisions for minorities. Or he can search more white drivers to equalize his search success rates, because right now, this biased trooper has a much higher search success rate for whites compared to minorities. The other thing that he can do, which is a lot easier to implement, is that he can just start recording the race of his failed minority searches as white. That way, he's going to boost his search success rate for minorities, because his failures are now not going to be recorded as minority searches, and he's going to reduce his search success rate for whites, because the failed minority searches are now going to be hidden in the white search success rate. The nice thing about misreporting the race for his failed searches is that really no one looks at these failed searches - it's really just the motorist, because he didn't find anything. It's not going to show up later in the criminal justice system, like in a case disposition, for example. 
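
As a toy illustration of the mechanism described here, with made-up numbers rather than estimates from the paper, relabeling failed minority searches as white mechanically pushes the two measured hit rates toward each other:

```python
# Hypothetical counts for one biased trooper (not from the paper).
white_searches, white_hits = 100, 30        # 30% hit rate for white drivers
hispanic_searches, hispanic_hits = 100, 20  # 20% hit rate for Hispanic drivers

# The trooper relabels 25 of his failed Hispanic searches as white.
relabeled_failures = 25
white_searches += relabeled_failures        # those failures now sit in the white pool
hispanic_searches -= relabeled_failures

print(f"white hit rate:    {white_hits / white_searches:.2f}")      # 30/125 = 0.24
print(f"hispanic hit rate: {hispanic_hits / hispanic_searches:.2f}") # 20/75 is about 0.27
```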

 

Jennifer [00:09:50] Right. And so to measure whether officers misreported race, you need to know what the driver's true race was. And as you said, that's going to be tough to detect. And we only see the reported race in this case. So that's going to be a real challenge. So how do you get around this? 

 

Elizabeth [00:10:06] So I was really fortunate that the Stanford Open Policing Project was so generous with their data, because they gave me the raw data, which has the driver's full name and their home address, which isn't normally publicly available for obvious reasons. To estimate the driver's true race, I'm going to use this personally identifiable information combined with various statistical methods. Certain racial and ethnic groups, like Asian or Hispanic, have very distinct last names. For example, Chang is a very distinctively Asian last name, so I'm going to use the 2000 Census Surnames dataset and look at the likelihood that someone is Asian or Hispanic given a certain last name. Think Gomez or Lopez or Lou, for example - they're all distinctly ethnic names. For black drivers, I'm not going to use their last name. Instead, I'm going to map the driver's home address to a block FIPS code and look at the proportion of black individuals within that block. Since black Americans tend to cluster geographically, this is a more accurate way of estimating race than using a black driver's last name, for example. 
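
A rough sketch of the two prediction steps described here might look like the following. The file names, column names, and probability cutoffs are all assumptions for illustration, not the exact data formats or thresholds used in the paper.

```python
import pandas as pd

# Hypothetical inputs: the 2000 Census surname file and block-level race shares.
surnames = pd.read_csv("census_2000_surnames.csv")   # columns: name, pct_hispanic, pct_api
blocks = pd.read_csv("census_block_race.csv")        # columns: block_fips, pct_black

def predict_race(last_name: str, block_fips: str, cutoff: float = 0.75) -> str:
    """Crude classifier: surnames for Hispanic/Asian drivers, home block for black drivers."""
    row = surnames.loc[surnames["name"] == last_name.upper()]
    if not row.empty:
        if row["pct_hispanic"].iloc[0] >= cutoff:
            return "hispanic"
        if row["pct_api"].iloc[0] >= cutoff:      # Asian / Pacific Islander share
            return "asian"
    blk = blocks.loc[blocks["block_fips"] == block_fips]
    if not blk.empty and blk["pct_black"].iloc[0] >= cutoff:
        return "black"
    return "white_or_unclassified"
```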

 

Jennifer [00:11:07] And just to clarify, a FIPS code is what? 

 

Elizabeth [00:11:10] Oh, I don't know what the acronym is. But it's built up from the Census tract. It's kind of like a census neighborhood. 

 

Jennifer [00:11:16] Yeah, it's like a geocode, basically. 

 

Elizabeth [00:11:17] Geocode, yeah. 

 

Jennifer [00:11:18] Okay. So black drivers have more common names, so it's just not going to be as easy to use this - you're going to use where they're living as a way to predict race for them. So you're going to be predicting race. And of course, reported race might not match the predicted race you have for someone for many reasons. Maybe your prediction's wrong, or maybe the trooper just made a mistake - it was dark outside or something. So the key insight that you bring in this paper is that these errors should not depend on the outcome of the search. So tell us more about what you have in mind here and how you use this insight to test for racial bias. 

 

Elizabeth [00:11:53] Right. So as you mentioned, troopers can make mistakes recording race for reasons aside from bias. For example, there could be poor visibility that day, so he misidentifies someone as white, or a motorist with a Hispanic-sounding last name might actually self-identify as white. This isn't a problem for me, though, since both of these scenarios are going to happen independently of the search outcome. People are not going to be more likely to identify as white with Lopez as their last name and have search failures, for example. But on the other hand, if the trooper is misreporting because he's trying to hide his bias, he's going to be more likely to misreport his failed minority searches as white. And so I'm going to rely on that differential behavior as my measure of bias. I'm going to take his misreporting likelihood for search failures and compare it to his misreporting likelihood for his search successes. The search successes are going to act as a sort of benchmark, a way of differencing out those mistakes, if that makes sense. 
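
In code, that differencing idea could be sketched like this. Column names such as predicted_race, recorded_race, and contraband_found are placeholders; the paper's actual measure involves more structure.

```python
import pandas as pd

def trooper_misreport_gap(searches: pd.DataFrame) -> pd.Series:
    """For each trooper: misreport rate in failed searches minus in successful ones.

    Honest mistakes (bad visibility, self-identification) should appear equally in
    both columns and difference out; a positive gap flags bias-driven misreporting.
    """
    minority = searches[searches["predicted_race"] != "white"].copy()
    minority["recorded_white"] = (minority["recorded_race"] == "white").astype(int)
    rates = (minority
             .groupby(["trooper_id", "contraband_found"])["recorded_white"]
             .mean()
             .unstack("contraband_found"))
    return rates[0] - rates[1]   # failed searches (0) minus successful searches (1)
```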

 

Jennifer [00:12:52] Yeah. So since you're using, you know, just name and location to predict race, there's really no reason to think that whether the search in a particular instance was successful is going to be correlated with whether your prediction was right or wrong. It's got to be something else about what the officer is doing - something else that is correlated with the search outcome. 

 

Elizabeth [00:13:16] Yeah, exactly. 

 

Jennifer [00:13:17] Which is that officer decision. 

 

Jennifer [00:13:19] Okay. So what data are you using for this? 

 

Elizabeth [00:13:22] So, as I mentioned before, a big part of the data I'm using comes from the Stanford Open Policing Project and originated from the Texas Department of Public Safety. That's my highway search data. I also combine this with employment data from the Department of Public Safety for troopers employed from 2013 to 2015, which I obtained via FOIA. This employment data has trooper salary, trooper rank by year, when they joined the force and when they left the force, and their full name and badge number, which is what I used to link to the highway stop data. 

 

Jennifer [00:13:54] Amazing. And all of this data is public record at some level, right? 

 

Elizabeth [00:13:58] Yes. I think, though, the stop data is getting a little bit harder to get, because from what I heard from someone who works at CJARS, they're now not releasing the personally identifiable information of the driver. So -

 

Jennifer [00:14:10] Interesting. Okay. So they got it - I think the Stanford Open Policing Project did get all of their data through public records requests, which is why they're able to just post it, right. Otherwise - if they had DUAs, or data use agreements - it would be really tough to share. 

 

Elizabeth [00:14:23] Right.

 

Jennifer [00:14:23] Yeah. So it must have been a recent change, which is unfortunate that they've made it more difficult to get. But good for you that you have it. 

 

Elizabeth [00:14:31] Thank you. Thank you. 

 

Jennifer [00:14:32] Okay, great. So let's talk about the results. What do you find when you consider how reported race varied across search outcomes? Whose race was more frequently misreported and when? 

 

Elizabeth [00:14:41] So from 2010 to 2015, I find that Hispanic motorists are misreported the most - Hispanic motorists are two percentage points more likely to be reported as white if the search ends in failure. For motorists of other races, I also find significantly higher likelihoods of being misreported as white conditional on search failure, but I don't have enough of these searches to look deeply at the trooper-level measure of racial bias, so I actually omit them. But I find that Asians, for example, are five percentage points more likely to be recorded as white when the search ends in failure. For black drivers, I find a lower likelihood of being misreported conditional on the search outcome - they're only about 0.3 percentage points more likely to be misrecorded as white if the search ends in failure. 

 

Jennifer [00:15:24] So most of the story in Texas, at least, is for Hispanic drivers. 

 

Elizabeth [00:15:28] Yeah, and that's not surprising to me, I think, just because of the political context - the proximity to the border, the high population of Hispanics within Texas. 

 

Jennifer [00:15:39] Yeah. And again, when you're saying more likely to be misreported, you're looking at this gap between your prediction and what was reported and whether the gap is bigger or smaller, basically. 

 

Elizabeth [00:15:51] Exactly.

 

Jennifer [00:15:52] Great. Okay. So this alone is a really nice contribution because it provides a new way that police departments can test for racial bias in stops and searches without incentivizing their officers to misreport civilians' race in order to cover up their bias. So you can imagine a whole bunch of police departments doing this, right - basically using drivers' names to predict their race, then comparing the success rates of searches across the groups with different predicted races, seeing for whom there's a bigger gap, and doing exactly what you did, basically. It wouldn't be that hard to share some code or have an analyst help these police departments with these tests. So I, for one, am really hoping that police departments replace all their hit rate tests with your test. I hope that's the change we see. 

 

Jennifer [00:16:38] But then what's also really cool, as you alluded to earlier in the conversation, is that then you take the analysis a step further by looking at the effect of a policy change. So tell us more about what happened in Texas and how it changed how troopers reported the race of the civilians they stopped. 

 

Elizabeth [00:16:54] Right. So yeah, as I mentioned earlier, Texas troopers were caught misreporting in 2015. What happened was that a news station in Austin called KXAN published an article on November 8th claiming that troopers had been wildly misreporting minorities, mostly Hispanics, though this misreporting was caught for Asian and black motorists also, and this was a big deal in Texas. The article was published on November 8th, and by November 15th, the House Committee on County Affairs held a hearing with DPS where DPS claimed that they had no idea what was going on - that the misreporting was, in fact, a computer glitch. But we know that a computer glitch would not cause more misreporting for search failures compared to search successes. What was nice about the hearing, even though DPS was kind of able to escape any sort of responsibility, was that DPS was now required to change the race recording rules for their troopers. Prior to the rule change, Texas troopers had kind of used their own best judgment for determining the driver's race. Post rule change, they would be required to always verbally ask drivers for their race. This policy went into effect November 23rd, which was a pretty quick turnaround, especially because the article was published on November 8th. 

 

Jennifer [00:18:05] That is some very quick policymaking. Sort of amazing. 

 

Jennifer [00:18:09] Okay, so in the past, basically, troopers would just fill out this form and take their best guess at the race of the civilian sitting in front of them. And after this, they are technically required to ask the person what their race is when they're filling out the form. So it's actually not that big a change. It is kind of amazing it changed behavior, frankly. Like, you would think, if there are all these biased troopers, they would just not do it. But -

 

Elizabeth [00:18:32] Right.

 

Jennifer [00:18:32] Police officers are rule followers in many cases. And so as we'll see, it did change behavior. So how do you use this change to measure the effects on misreporting?

 

Elizabeth [00:18:42] Right. So I treat this change as a sort of natural experiment, because the rule change happened so quickly compared to when the article was first published - the turnaround is like 15 days. So I imagine that troopers couldn't really change their behavior in anticipation of the rule change, which makes it nice. What I'm going to look at is troopers I estimate to be racially biased, so those who are more likely to misreport Hispanic drivers as white when the search ends in failure compared to a search success. I'm going to look at their race recording behavior conditional on search outcomes - how likely they are to record their failed searches as white compared to their unbiased peers, before and after the rule change. So I'm going to use that in a difference-in-differences. 
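
A minimal version of that difference-in-differences, with placeholder variable names and far fewer controls than the paper's actual specification, might look like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trooper-by-search file of failed searches of predicted-minority drivers.
df = pd.read_csv("failed_minority_searches.csv")
df["post"] = (pd.to_datetime(df["stop_date"]) >= pd.Timestamp("2015-11-23")).astype(int)

# recorded_white: 1 if the failed search was recorded as a white driver
# biased: 1 if the trooper's pre-period misreporting gap flags him as biased
model = smf.ols("recorded_white ~ biased * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["trooper_id"]}
)
print(model.params["biased:post"])   # the diff-in-diff coefficient of interest
```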

 

Jennifer [00:19:25] Yeah. So basically seeing how these gaps change after the policy changes. 

 

Elizabeth [00:19:29] Exactly. Yeah. 

 

Jennifer [00:19:31] And so what do you find? What happened to the misreporting rates when troopers were told to ask drivers their race rather than guessing it? 

 

Elizabeth [00:19:37] Okay, so what I find is actually really neat. In economics, we always talk about the money-making graph - the graph that describes your whole paper. To me, this is my money-making graph. 

 

Jennifer [00:19:48] Totally.

 

Elizabeth [00:19:49] Yeah. Because in this graph, what I find is that biased troopers are on average nine percentage points more likely to record their failed searches as white compared to their unbiased peers. So prior to 2015, we see that there is this difference between people I estimate to be biased and people I estimate to have no bias - there's definitely a significant difference in their race recording behavior for their failed searches. But after the rule change, what I find is that biased troopers are equally likely to record their failed searches as white compared to their unbiased peers. So this means that prior to the rule change, they could kind of do their own thing for reporting race, but after the rule change, they start recording race similarly to their unbiased peers. This shows me, one, that the rule change works, because we see biased troopers no longer misreporting significantly based on the search outcome - there's no differential behavior if their search ends in failure compared to a search success. We also see that my estimate of trooper bias is actually capturing biased misreporting behavior, too, because otherwise we wouldn't see any change. 

 

Jennifer [00:20:53] Right, exactly. Yeah. So basically the gaps just close, the gaps disappear. And, yeah, what you said at the end is exactly what I was about to follow up with - this really does give us confidence that that gap was meaningful, right? The fact that the troopers changed their behavior means the gap between the biased and unbiased troopers, in terms of how accurate your prediction was, wasn't just picking up something else that we don't really understand. It must have been that they were misreporting the race, because that's what changed. 

 

Jennifer [00:21:25] You're also able to see what happened to the officers after the policy change. And you mentioned before what data you used, but tell us again what the data are that you have for that analysis. 

 

Elizabeth [00:21:35] Right. So to look at troopers' employment outcomes after the rule change, I use data from the Texas Tribune - every year they FOIA all the public servants' salaries in Texas and publish them online. The year I did this analysis was 2019, so I have 2019 employment information for troopers in Texas. This is the exact same information that I got from the Department of Public Safety, except that I don't have the year they started work and whether or not they left the force. But I have their salary, their rank, and their name, and I use that to link them to the past employment data. 

 

Jennifer [00:22:10] And so what happened to the officers once their bias was more visible after the policy change? 

 

Elizabeth [00:22:15] If I compare their employment outcomes before and after the policy change, I find worse labor outcomes for troopers I estimate to be biased compared to their unbiased peers, conditional on staying in the force up to 2019. One standard deviation of bias is correlated with $300 lower salary growth and a four percent lower likelihood of being promoted up in rank. I interpret this as evidence that misreporting is a pretty effective shield protecting biased troopers from negative labor outcomes. And then, once misreporting becomes significantly harder - once it's much harder to hide your racial bias - they start facing these negative labor outcomes. 
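
A stripped-down version of that correlation, with hypothetical file and variable names and none of the paper's controls, could be sketched as:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trooper panel: 2019 salary/rank linked back to the 2013-2015 records.
troopers = pd.read_csv("troopers_linked_2019.csv")
troopers = troopers[troopers["still_employed_2019"] == 1]   # condition on staying in force

# salary_growth: change in salary through 2019; bias_std: standardized bias measure;
# promoted: 1 if the trooper moved up in rank
growth = smf.ols("salary_growth ~ bias_std", data=troopers).fit(cov_type="HC1")
promote = smf.ols("promoted ~ bias_std", data=troopers).fit(cov_type="HC1")
print(growth.params["bias_std"], promote.params["bias_std"])
```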

 

Jennifer [00:22:53] Yeah, and it turns out there are consequences in the departments - you know, to the extent that we might worry that maybe departments don't care about this or aren't taking it seriously, I think one takeaway from your results is that they did care and there were real consequences for these officers, which I think is also really interesting. 

 

Elizabeth [00:23:12] It's yeah, it's a good one. 

 

Jennifer [00:23:13] Yeah. So speaking of implications, what other policy implications are there of these results? What should policymakers and practitioners take away from this? 

 

Elizabeth [00:23:23] So you kind of alluded to this earlier, but I think one big thing that is very practical, that a policymaker should learn immediately, is that changing the race recording rule is very effective. Requiring troopers to always verbally ask drivers for their race is a very effective way of reducing misreporting. It's very cost effective - it doesn't require any new technology, like new pens or computers, for example. All it requires is asking a question, so there's not a high time cost either. I think with the higher use of body-worn cameras and dashcam footage, it's becoming easier and easier to assess whether or not troopers are following this rule and also to look at whether troopers are misreporting. So I think both of these things are things that policymakers should keep in mind. The last thing is about data quality. Not to toot my own horn, but I think my paper is the first to really consider whether or not the data in front of us, in the context of racial bias, is the truth. We as researchers oftentimes take it for granted that the data in front of us is correct and factual, and I think in the context of racial bias this is a major problem, because oftentimes when we're studying racial bias, the data comes from individuals who are racially biased. So it kind of makes it harder. I mean, you should question, at minimum, whether or not there's a potential for that, and then where the misreporting might be. Race is just one aspect of the data that we look at - there could be others. 

 

Jennifer [00:24:55] Yeah, absolutely. I mean, I think even if people had realized before that there could be this misreporting, usually we just sort of say, well, there's nothing we can do about it. This is the data we have. We'll just think about how this might affect results and it could be biased or whatever. But the really nice contribution you're making here is you show us what to do about it. You basically like, you could use these tools to predict race and then see how that compares with the reporting. And in that way be able to uncover this biased misreporting in a way that's really useful for being able to detect bias. And so, yeah, I mean, that's another - in my mind another big policy implication here is just, you know, as I said before, this is a tool that police departments could just take off the shelf and use, right? 

 

Elizabeth [00:25:42] Right.

 

Jennifer [00:25:42] So now we have an alternative bias test to the standard hit rate test, which is problematic for various reasons. And so departments that are interested in trying this out should email you, I guess. 

 

Elizabeth [00:25:57] Yeah, I've actually had one email me. 

 

Jennifer [00:25:59] Oh, great. 

 

Elizabeth [00:25:59] Oregon. The state police did, but they don't record their drivers' PII, which makes it hard. So departments should also take note - record that information. 

 

Jennifer [00:26:09] You need names, I guess, in order to do this. Yes. 

 

Elizabeth [00:26:12] Yeah. Or at least last names. 

 

Jennifer [00:26:15] At least last names, right. That's interesting that they didn't have names. 

 

Jennifer [00:26:19] Excellent. Okay. Well, so that is all your paper. Have any other papers related to this topic come out since you first started working on the study? 

 

Elizabeth [00:26:25] Yeah. So one major paper that I think actually maybe has a bigger contribution in terms of thinking about the data and racial bias is Knox, Lowe, and Mummolo - and I might have messed up the order, and I apologize to the authors if I did - but they have an amazing contribution where they think about how the officer's choice to even engage in an interaction with the civilian can itself hide bias. So even the choice itself might hide bias, and they come up with a cool correction for it. I think thinking about just the choice of interaction is an important first step in thinking about racial bias between civilians and law enforcement officers. Another interesting article that I just read recently - I think it came out last week - is that ProPublica just released an article about how Louisiana police officers are recording their Hispanic civilians as white. In the parish with the most Hispanic residents, none of the tickets issued to individuals with the last names Lopez, Martinez, Rodriguez, and Gomez are recorded as Hispanic, which is obviously not possible in the real world. But in terms of leveraging misreporting and misrecording, I think Goncalves and Mello have kind of similar work, where they look at officer leniency - so are officers going to underreport the recorded speed compared to the actual speed? But that's pretty much it. I think there's definitely a lot more work to be done on this topic. 

 

Jennifer [00:27:45] Yeah. And so speaking of that, what is the research frontier? What are the next big questions in this area that you and others will be thinking about going forward? 

 

Elizabeth [00:27:53] That is a very good question. I hope, going forward, to see a lot more papers that question the veracity of the data and how that might be affecting research on civilian interactions with law enforcement officers. At the Southerns, which was last weekend - so that's why it's fresh on my mind - Katie Bollman, who's on the job market this year, has a paper that looks at how the rollout of body-worn cameras can affect case disposition. What she finds is that there's a 10 percent reduction in offenses initiated during an arrest - so things like resisting arrest or unruly behavior go down, because body-worn camera footage can show that that incident didn't happen. I think that sort of research is nice because you can see how having some sort of benchmark, some ability to see the truth, can change officer behavior and might even affect future outcomes. I mean, case disposition, I think, is a pretty big deal in terms of the individual's outcomes. 

 

Jennifer [00:28:50] It is interesting to think about how surveillance data are really useful for all of this, right? Because for a lot of the questions that we might have about whether data are accurate, or whether the information that police write in their reports is accurate, for the most part we've had to just say, well, it might not be, but there's nothing we can do about it, right? We just kind of have to acknowledge that and shrug and go on with the data we have. But now, as you mention, more and more cops are wearing body cameras and there are surveillance cameras everywhere, so you could imagine having regular audits, or researchers being able to look at the footage and actually check whether what happened matches the report. And so it just opens up more avenues for creative ways to check the accuracy of data and measure bias in various settings. 

 

Jennifer [00:29:41] One other research frontier I think is worth mentioning is, you know, what are other ways that we can reduce racial bias in policing, right? And I think one nice example here from your paper - I guess you have two pieces to this. One is the policy change that forced troopers to ask civilians their race, so the bias was easier to detect. But then you also had the departments responding to that and actually, you know, not promoting people who are biased and not paying them as much. That, I think, would probably be surprising to a lot of people, that it worked out that way. But in general, we just know so little about how, or which types of, policies are effective at increasing accountability for police officers. And I know that that's a space that a lot of researchers are thinking really hard about right now. But we don't have the answers we'd like to have yet. 

 

Jennifer [00:30:32] Well, my guest today has been Elizabeth Luh from the University of Michigan. Elizabeth, thank you so much for talking with me. 

 

Elizabeth [00:30:37] Thank you for having me. 

 

Jennifer [00:30:44] You can find links to all the research we discussed today on our website, probablecausation.com. You can also subscribe to the show there or wherever you get your podcasts to make sure you don't miss a single episode. Big thanks to Emergent Ventures for supporting the show, and thanks also to our Patreon subscribers and other contributors. Probable Causation is now part of Doleac Initiatives, a 501(c)(3) nonprofit, so all contributions are tax deductible. If you enjoy the podcast, please consider supporting us via Patreon or with a one-time donation. You can find links on our website. Please also consider leaving us a rating and review on Apple Podcasts. This helps others find the show, which we very much appreciate. Our sound engineer is Jon Keur with production assistance from Haley Grieshaber. Our music is by Werner, and our logo is designed by Carrie Throckmorton. Thanks for listening, and I'll talk to you in two weeks.