The UMB Pulse Podcast

This Isn't 'CSI': Not All Forensic Evidence is Bulletproof

November 04, 2022 | Maneka Sinha | Season 2, Episode 13

Shows like "CSI" have made forensic evidence flashy, brought to TVs with cases wrapped in under an hour. Some types of forensic evidence, like pattern matching, aren't as reliable as what a TV show might make it appear to be, and it's a challenge to address the reliability of certain pattern matching evidence in courtrooms, according to Maneka Sinha, JD, associate professor at the University of Maryland Francis King Carey School of Law. Sinha goes in-depth on one area of her research addressing ShotSpotter — a gunfire detection technology used in Baltimore and in cities across the U.S.

Listen to The UMB Pulse on Apple, Spotify, Amazon Music, and wherever you like to listen. The UMB Pulse is also now on YouTube.

Visit our website at umaryland.edu/pulse or email us at umbpulse@umaryland.edu.

Charles:

Alexa, play The UMB Pulse. Hmm. Amazon Alexa: "I don't know that one." Well, that's interesting. I mean, that was clear as day. That was my voice, and I'm sitting right here. Why doesn't she know the Pulse? Can you imagine if you had a microphone like that trying to detect a gunshot in a community where there's lots of noise everywhere, where there's cars, fireworks? I can't imagine the ability to pick something like that up.

Jena:

Yeah, it must be tough zeroing in on exactly what something is. I mean, even the most sophisticated AI and technology can make mistakes a lot of the time.

Dana:

So, forensic evidence, forensic data: how it's changed over the years and how we tend to rely on it. So many of us have watched "CSI" and take that as fact, and we're gonna talk about how that really might not be as factual as we think it is.

Charles:

Maneka Sinha from the University of Maryland Francis King Carey School of Law really goes in depth about this technology called ShotSpotter that's used in Baltimore. Is it a car exhaust backfiring? Is it a firecracker? And what happens when the police arrive, and how the technology is deployed, may not be what you expect.

Dana:

With us today we have Maneka Sinha. She is an associate professor at the University of Maryland Carey School of Law, and she has often been recognized for her expertise in forensic science. In fact, her research specifically explores the connection between forensic science evidence and outcomes in criminal cases, which is what we're gonna be talking about today. Maneka, thank you for joining the Pulse.

Maneka:

Thank you all so much for having me. I'm excited to be here.

Dana:

We were very interested in booking you after hearing you at convocation this fall, and we look forward to the conversation today. Before we get into the nitty-gritty of your research, let's back up and start at the beginning. You have a long background working in criminal litigation, which is what you did before you came to UMB, right?

Maneka:

Yeah, that's exactly right. When I graduated from law school, I joined the Public Defender Service in Washington, D.C., where I practiced as a public defender for 10 years. I started out defending kids charged with felonies in juvenile court, and I worked my way up to all kinds of felony cases, from armed robberies to burglaries. Then, toward the end of my practice, my caseload was exclusively homicides and serious sex cases.

Dana:

Oh boy. And then what brought you back? Was it 2019 when you returned to UMB?

Maneka:

I came to UMB in 2019. That's exactly right.

Dana:

Okay. And what brought you here at that time?

Maneka:

A couple of things. The main thing was that Carey Law was restarting a lapsed criminal defense clinic that had been, I think, not in operation for about eight or nine years at that point. And I felt like it was a really great point in my career to transition from active practice to teaching the next generation how to do that work and the importance of that work, and getting them ready to be able to do that as they move out of law school and into the world of practice.

Charles:

Great. So let's really get into detail about all of that forensic practice: what's the good, what's the bad. What was really entertaining, though, and maybe put it into perspective more, was that convocation speech you gave, that little TED Talk, as President Jarrell said, where you said forensics might be just as scientific as a Tinder match. So explain exactly what you mean by that.

Maneka:

Sure. What I was trying to get at was the idea that apps like Tinder, and lots of technology in general, can appear on the surface to be really scientific: look, it's an algorithm. But beneath the surface there may not be a lot of science, or the tech may not really be as good as we want it to be at doing what it's trying to do, which in the case of Tinder is finding you a suitable date. And forensics can be like that. Most of us think that the technologies we use in the carceral system are very, very reliable. There's actually quite a bit of research that shows that jurors give tremendous weight to expert testimony even when it's not reliable. And beneath what I'll call that sheen of reliability, a lot of what we use in criminal cases hasn't been tested in the ways we think it might have been, to establish that it's scientifically valid or that it's being used in reliable ways. Even if a technique is in general reliable, we see a lot of pushing the limits of techniques in ways that can make them unreliable. So that's what I was trying to get at: things that may appear on the surface to be high tech and fancy and scientific when, underneath the surface, there may not be as much there as we think.

Charles:

That's really scary to think about.

Jena:

Yeah, definitely. So, separating it into categories, maybe forensics that are a bit more reliable and ones that are less reliable, can you explain what would be in each category?

Maneka:

Yeah, sure. I think it's helpful to start with how we're defining forensics. Some people use a slightly narrower definition and distinguish it as basically the analysis of evidence found at crime scenes: say, when you find blood or fingerprints or expended ammunition after a crime has occurred, and you try to identify who might be responsible for that crime based on that evidence. I use a little bit of a broader definition. I think about forensics as just the application of science or technology to legal questions generally. The reason I say that's a little broader is because it can include techniques that precede a crime occurring and evidence actually being collected. So, for example, forensics under my definition would include things like surveillance technologies that are not employed only after a crime has occurred; they're employed all the time. For me, it's any sort of science, technology, or technique that is applied to a legal question, and usually that is in criminal cases. So when we think about good and bad, I'm not sure good and bad is the best way to phrase it. There are forensic methods that are more reliable than others, but the really important thing to emphasize here is that every forensic method has limits. There's no one forensic technique that is reliable all the time in every case, including DNA, which is the one we think of as the most reliable, and there's no scientist who would disagree with me on that. What that means is that any method, when pushed past its limits or used in cases it wasn't intended for, is gonna produce unreliable results. To get back to your question, some forensic techniques are more susceptible to unreliability than others. For example, there's a category of forensic methods called pattern matching disciplines, where the idea is that you're comparing characteristics in something from a crime scene to a known sample. Fingerprinting is a pattern matching discipline: you recover a fingerprint at a crime scene and you compare it to the print of a known suspect. Bite mark analysis is one of them. Microscopic hair comparison analysis is another. Firearms analysis is another. The reason those techniques are more susceptible to unreliability than others is because what you're doing is you have a human examiner looking for features that correspond between two items, and there's a tremendous amount of subjectivity in that process.
What I see, or what I think of as an important feature, might be different than, for example, what Charles sees when he's looking at the same two items or sets of items. So they're more susceptible to unreliability. The types of forensic methods we think of as classically unreliable, the ones that have been established to be unreliable, are bite mark analysis and hair comparison analysis. Those are your classic, really unreliable disciplines. But it's important to remember that other disciplines we think of as more reliable are susceptible to the same problems, because they still have that subjectivity involved. Take something we think of as more reliable, like DNA analysis. Again, you have to parse out what we mean by that. If I prick my finger and leave some blood on this table and you send it to the lab, I'm not gonna have any question that they're gonna be able to interpret that profile and say that I left it. That's very reliable. But it's very different when all of us are touching this table, and then somebody comes in later and uses this table, and then you swab the table and you don't have a lot of DNA and you have a lot of different DNA. That's a much more complex process, and when you push the method past its limits like that, that's when you start to produce unreliable results. So I think the main takeaway is that good and bad isn't really the right question. It's: how are we using the discipline, and are we using it in ways that push it past its limits? And has the discipline been established as scientifically valid to start with? Which many of these disciplines, I would say, have not.

Jena:

Yeah. And I'm sure it gets even more complicated when crimes happen in public places that lots of people have been in and walked on and touched and who knows what else. You had mentioned that a lot of times jurors and juries put a lot of weight on expert testimony, or forensic evidence and DNA evidence, that type of thing. Do you have an example, maybe from your criminal defense days, when forensic evidence was really heavily relied on but maybe wasn't super reliable and could have affected the case in a specific way?

Maneka:

Yes. I think the answer to that question is that I can't give you one specific example, because there are so many. There are really common problems with the way experts present evidence and the way forensic testimony can be exaggerated. The classic example of the way examiners exaggerate their testimony is by testifying to an absolute certainty or a 0% error rate in the discipline, which is just scientifically unsupported. Even DNA analysts don't say there's a 0% error rate; that's just false. There's always an error rate. We don't see that as much anymore. We don't see as many experts saying in court "0% error rate" or "this is a match to the exclusion of all others." We don't see language like that so much anymore. But what we do see is "this shell casing came from this specific gun" or "this bullet came from this specific gun." What I mean by that is we see unqualified source identifications without any sort of statistical confidence or statistical weight that lets a jury assess how strong the correspondence is. And courts have started to say that's problematic, because "it came from this gun" implies the same thing as that problematic testimony we used to hear years ago. It implies the same thing as absolute certainty. It implies the same thing as a 0% error rate, and there's no statistical basis for a statement like that. So that's the type of testimony I heard in my cases all the time, and it's really difficult for a juror to be critical of that when the person the judge has told them is an expert is saying, without any qualification, this thing came from this thing. It's my opinion, and it's an issue that I've litigated over the years, that there's no valid basis in the science to make claims like that.

Charles:

That's really interesting. One area of your research where you address claims that maybe aren't as true as they appear is ShotSpotter, and that technology is used in Baltimore. It's widely criticized by some, but for those who don't know what ShotSpotter is, explain what that technology is and how it falls into a forensic category.

Maneka:

Sure. ShotSpotter is an automated gunshot detection system. You can think of it as having three parts. The first part is microphones that are attached to buildings and various structures around the city, and these microphones are listening for loud, impulsive sounds like gunshots. If they hear a loud, impulsive sound like a gunshot, then software, the second component, attempts to determine where the sound originated, like a precise location the sound came from, and whether or not it's gunfire. They get the location essentially by triangulating: you're basically cross-referencing how long it took for the sound to reach multiple sensors. And they won't register something as an actual gunshot unless it's been picked up by a minimum number of sensors, so if only one sensor hears the sound, you're not gonna get an alert that it's gunfire. The third component is a human analyst, who listens to the sound, looks at the waveform, and makes a determination about whether the software was accurate in categorizing it as gunfire. If they do confirm it as gunfire, it gets pushed out as an alert to police. Police can receive that alert on their dash computers or on their phones, it varies, or a dispatcher can call it in and say we got a ShotSpotter alert at such-and-such location. Then the police might respond to that location in search of people who might have been affected by gunfire, so victims, people who might have been responsible for gunfire, or evidence of gunfire like expended ammunition, shell casings, bullets, things like that. So that's essentially how ShotSpotter aims to work and what it tries to do.
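
For readers curious what that cross-referencing looks like in principle, the following is a minimal sketch of time-of-arrival localization, assuming idealized sensors and written in Python with NumPy and SciPy. The sensor layout, timing noise, and three-sensor minimum are illustrative stand-ins, not ShotSpotter's actual parameters or algorithm.

import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # meters per second in air at roughly 20 C

# Hypothetical sensor positions (x, y) in meters; real deployments are denser.
sensors = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])

def simulate_arrivals(source, t0=0.0, jitter=0.002):
    """Arrival time at each sensor: emission time plus travel time, plus timing noise."""
    dist = np.linalg.norm(sensors - source, axis=1)
    rng = np.random.default_rng(seed=0)
    return t0 + dist / SPEED_OF_SOUND + rng.normal(0.0, jitter, len(sensors))

def locate(arrivals, min_sensors=3):
    """Least-squares estimate of (x, y, t0); declines to alert on too few sensors."""
    if len(arrivals) < min_sensors:
        return None  # mirrors the minimum-sensor rule described above
    def residuals(params):
        x, y, t0 = params
        dist = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        return t0 + dist / SPEED_OF_SOUND - arrivals
    guess = [sensors[:, 0].mean(), sensors[:, 1].mean(), float(arrivals.min())]
    return least_squares(residuals, guess).x

true_source = np.array([320.0, 140.0])
estimate = locate(simulate_arrivals(true_source))
print("true:", true_source, "estimated:", estimate[:2].round(1))

With four sensors and millisecond-scale timing noise, the least-squares fit recovers the source to within a few meters. Real urban acoustics, with echoes, occlusion, and overlapping noise, make the problem considerably harder, which is part of why accuracy claims are hard to verify.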

Dana:

So what is the success rate of finding who fired the gun? When you look at their website, they claim to have a really good rate of detecting where the firing happens.

Maneka:

Yeah. The short answer to that question is that the success rate in finding who fired a gun based on a ShotSpotter alert is very, very low. The data we have shows that finding evidence of a shooting, so forget about finding a person responsible, arresting them, let alone convicting them, even before we get to any of that, finding evidence of a shooting in response to a ShotSpotter alert occurs in just a tiny fraction of alerts. Some of the data I've uncovered as I've been researching this: we have data from Baltimore from 2018 to 2021. I'm giving you rough numbers, but there were over 8,500 alerts, and there was evidence of a shooting or the discharge of a gun in about 1,700 of those alerts. So just 20%, roughly. An older investigation in San Francisco, covering 2013 to 2016, found no evidence of gunshots in over two-thirds of more than 3,000 calls for alerts, and there were two arrests in all of those calls, one of which was not even related to gunfire. There's even more astounding data in St. Louis and St. Louis County: there was a five-year period they studied in which fewer than 1% of alerts resulted in enough evidence to even generate a police report, and a longer study from 2008 to 2018 in which over 19,000 alerts resulted in 13, one-three, arrests. Chicago is probably the most telling data of all the data we have. There have been two major studies of ShotSpotter in Chicago. One was done by the MacArthur Justice Center at Northwestern Law School. They looked at ShotSpotter alerts from a period in 2019 to a period in 2021 and found that 89% of alerts resulted in no evidence of a gun crime, and over 40,000 alerts were unfounded. The Inspector General of Chicago, a nonpartisan watchdog, did a study in an overlapping period from 2020 to 2021. They looked at over 50,000 alerts and found evidence of a gun crime in around 4,000, less than 10%. So the short answer, again, is they find evidence of gun crimes or evidence of a discharge in a small fraction of ShotSpotter alerts.

Jena:

It sounds like there's a lot of margin for error for something that's used pretty widely around the country.

Maneka:

Yeah, I think that's a really good point, and I think we have to separate out a couple of questions. One is: what are we trying to accomplish with ShotSpotter? As Dana was saying, their website indicates what their goals are: we wanna help police solve gun crimes, we wanna get to victims faster, we want to be part of solving the problem of gun violence in the country generally. And then you have to parse all of those things out, right? So one: how much evidence of gun crimes is ShotSpotter actually helping us recover? But then: what are the harms? And there are a whole bunch of harms associated with ShotSpotter.

Charles:

Your smart speaker in your apartment might pick up noises and alert you that there has been a noise: we think it might be this, but it might not be. So it almost sounds like you're taking an Amazon Echo or a Siri or something and putting it on a street pole out in public, where there's just all sorts of noises, and trying to decipher whether we think it's a gun. Is that almost what's happening here?

Maneka:

Yeah. I think it's really helpful to break it up and think about ShotSpotter in two ways. The first question is: is it even scientifically valid in doing what it's supposed to do, which is detect and locate gunfire? That's part of the question you're asking: is it even accurate in detecting gunfire and distinguishing it from other things that might sound like gunfire, like fireworks or cars backfiring or nail guns or whatever? And there is evidence that shows it mistakes sounds that are not gunfire for gunfire, i.e., it registers false positives. It falsely identifies something that is not gunfire as gunfire. That brings us to a bigger question of how well the system has been tested, and there's a ton of research suggesting that it has not been tested in a way that really tells us what its accuracy is. A lot of the data from all of the studies I was mentioning suggests that perhaps it's not as accurate as we think it is, or as they say it is. Some of the research that does exist isn't as robust as we would expect or want it to be. What I mean by that is ShotSpotter has commissioned a couple of studies of its effectiveness, and those studies often rely on police reporting, meaning police reporting to the company when ShotSpotter has not registered something they confirm to be gunfire. But that's not always the appropriate question, right? Sometimes the appropriate question is exactly what you're asking, which is: when is it getting it wrong? When is it saying that a sound that has nothing to do with gunfire at all is gunfire? Its studies don't actually incorporate that question. So two answers: one, there's not good data on how effective ShotSpotter is, and a lot of the data that does exist suggests it's not as effective as they claim. And two, there's a real opacity to what we know about ShotSpotter, because in a lot of jurisdictions we, and by we I mean the public, people who wanna investigate this, are restricted from having access to that data. ShotSpotter has asked jurisdictions that employ the system not to disclose the data. So there are questions about how much testing has been done, the appropriateness of the testing that has been done, and the transparency of the data that does exist. There's a whole host of issues there, and that's just the first part of how we think about ShotSpotter. The second part relates to the questions we were talking about earlier, which is: how good is it really at doing the thing it says it's doing, which is helping deal with gun crime? The evidence there is much more clear, and it's that it's not as effective as we are often told it is.
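
As a concrete illustration of the testing point, the following is a minimal sketch, in Python, of the kind of ground-truth evaluation being described: scoring a detector against controlled test sounds whose true labels are known. The labels and detector outputs below are invented for illustration; they are not real ShotSpotter data.

def evaluate(truth, detected):
    """Compare ground-truth labels (True = real gunfire) with detector output."""
    tp = sum(t and d for t, d in zip(truth, detected))          # gunfire, correctly flagged
    fn = sum(t and not d for t, d in zip(truth, detected))      # gunfire, missed
    fp = sum(d and not t for t, d in zip(truth, detected))      # non-gunfire, wrongly flagged
    tn = sum(not t and not d for t, d in zip(truth, detected))  # non-gunfire, correctly ignored
    return {
        "sensitivity": tp / (tp + fn),          # share of real gunfire it catches
        "false_positive_rate": fp / (fp + tn),  # share of non-gunfire it flags anyway
    }

# Hypothetical controlled test: 6 recorded gunshots, 6 other loud impulsive sounds.
truth = [True] * 6 + [False] * 6
detected = [True, True, True, True, True, False,     # one missed gunshot
            True, True, False, False, False, False]  # two fireworks flagged as gunfire

print(evaluate(truth, detected))  # sensitivity ~0.83, false positive rate ~0.33

A study built on police reports can learn about gunfire the system missed, because officers sometimes confirm gunfire independently, but without ground truth it has no way to observe how often non-gunfire sounds were flagged, which is exactly the false positive question raised above.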

Charles:

So let's go back to Chicago for a minute. You mentioned the MacArthur Justice Center at Northwestern University; they're actually challenging the use of ShotSpotter. What have they found to be the biggest issue?

Maneka:

Essentially, the MacArthur Justice Center is claiming that the city of Chicago, and in particular the Chicago Police Department, is using ShotSpotter to falsely manufacture reasonable suspicion to conduct unlawful stops of people on the street, and possibly also unlawful arrests. The idea is that under the Fourth Amendment, we all have a right to be free from unreasonable searches and seizures. The very short, non-technical version is that in order for a police officer to stop you on the street, they need to have what's called reasonable suspicion that criminal activity is occurring, has occurred, or is about to occur, and that the person they're stopping specifically is responsible for it. They're saying that, in violation of folks' Fourth Amendment rights, the city is using ShotSpotter to artificially manufacture reasonable suspicion to justify these stops. That's one of their big claims. They're also saying that their individual clients have been falsely detained or arrested or imprisoned without the requisite legal justification, which would be either reasonable suspicion or, in the case of an arrest, probable cause, in violation of their Fourth Amendment right to be free from unreasonable searches and seizures. And then they have a separate claim alleging a violation of their clients' 14th Amendment equal protection rights, which essentially says that these particular communities, people, and their clients are being subjected to police surveillance in this unfounded, discriminatory way because ShotSpotter is deployed only in these predominantly Black and Brown communities. So they're being subjected to this surveillance, in violation of their equal protection rights, because it's deployed in this racialized manner.

Charles:

So they see it as an excuse to go into the neighborhood?

Maneka:

Yeah, it's two things. The Fourth Amendment claim is that they're using it as an excuse to go into the neighborhood and then justify these stops. And then there's a separate equal protection claim, which is that the way ShotSpotter is deployed in this racially disparate way means that, for the most part, only Black and Brown communities are being subjected to this type of surveillance. That's a violation of their equal protection rights because of the facially discriminatory way in which the system is deployed in that city.

Jena:

Is that an issue in just the Chicago area, or is that just the only area where we have data showing that ShotSpotter is primarily being used in Black and Brown neighborhoods? Is it like that in other places as well, or do we just not know?

Maneka:

The answer to that question is no, that is absolutely not unique to Chicago. There's evidence that ShotSpotter is deployed disparately in predominantly Black and Brown communities, as opposed to others, in a whole bunch of cities. VICE has reported that they found evidence of that in Kansas City, Cleveland, Atlanta, and Chicago, and there has been evidence found of that in Washington, D.C. So this is a widespread problem, and it makes sense, because ShotSpotter is extremely expensive. I would imagine that if you are a decision maker trying to figure out how you're gonna deploy ShotSpotter, one of the considerations is how much it costs to implement. So you might not implement it throughout a city, and the problem that's resulted is this racially disparate deployment of the system.

Charles:

How far around a ShotSpotter alert do the police search? Because I'm wondering, a criminal's not just gonna be hanging around the scene of the crime. They're probably hitting it in the car as fast as they can, or on foot, to get out of there. So I'm not imagining they're sticking around the places that police are going to be searching immediately after that notification.

Maneka:

That's such a huge common sense point. I think the answer to the question of how large an area they're going to search is: it's gonna vary from case to case and jurisdiction to jurisdiction. We can't give a blanket answer for how wide that area is gonna be. But it comes back to the discussion Dana and I were having earlier, which is that it makes perfect sense that you're not finding the people responsible for firearm discharges, to the extent that ShotSpotter has accurately alerted to a firearm discharge in the first place. Again, the testing around that question isn't sufficient for us to be able to say that in any singular incident ShotSpotter is really pointing to an actual discharge. But even assuming it is, it makes sense that you're not necessarily finding the person responsible, because, exactly as you said, Charles, people aren't sticking around. If somebody has shot a weapon, they're not waiting on the scene for the police to arrive. They're in their getaway car, they're running away, or they're jumping into a building or whatever. So that makes a lot of sense.

Jena:

We had talked about how reliable ShotSpotter is in finding the person who fired a gun or was allegedly committing the crime that was going on. But some studies found that ShotSpotter was helpful in locating victims of gunshot wounds: ShotSpotter heard the gunshot, police came to that area and ended up finding a gunshot victim, and they were able to transport and treat them more efficiently than they otherwise would have. And the Baltimore City Police, who actually use the ShotSpotter technology, say that since 2018 it's helped them locate about 804 gunshot victims and get them transported and sent to hospitals for treatment much faster than they would have otherwise. So in theory, it seems that ShotSpotter alerts would be a useful tool, especially in neighborhoods that are a little more reluctant to call 911 at the sound of a gunshot. But in practice, the results may not always be as positive. Can you paint us a picture of where that can go wrong?

Maneka:

Yeah. Let me break down your question a little bit. You mentioned that there is a study that found ShotSpotter can help police respond to and transport victims of gunfire more quickly. That is true, there is a study that says that, and I think it's worth noting that the study itself notes that one of its limitations is that it was conducted in a really small jurisdiction, I believe Camden, New Jersey, and they note that as a limitation. Would these results hold in a different jurisdiction? We don't know the answer to that question. The study also found that, after adjusting for various variables, mortality wasn't significantly different between ShotSpotter and non-ShotSpotter incidents. I say that only to say that it's difficult to know how far we can extrapolate the results of that single study. So then the question becomes: what about our own data? You said that Baltimore City Police have said that since 2018 ShotSpotter has helped them locate about 804 gunshot victims. I have a lot more questions about that before I can say how effective and how useful ShotSpotter really is. One is just: what were the outcomes of those cases? When we say it helped locate them, what does that mean? Did it help locate them in the absence of anything else, meaning was ShotSpotter the but-for reason they were able to locate these victims? Or were people calling them or flagging them down? Or were victims already being transported to the hospital by friends and family? So the question is what role ShotSpotter really played. I think it's not enough to just say that it helped them locate these victims; I also wanna know what the outcomes were for each of those victims. The second piece is that we have to ask ourselves what harm ShotSpotter itself has caused. I gave you these anecdotes about how ShotSpotter can drive police into communities after a single alert in ways that create harm and violence, and how ShotSpotter can reinforce biased policing practices that have been entrenched in communities around the country for decades, right? Just the fact that we have ShotSpotter alerts going off in these neighborhoods brings more police and drives more policing in those communities. So we have to ask what the harms are. The fact that they responded, or were able to locate those particular sets of victims, doesn't answer the question of whether there were corollary harms we need to be concerned about. And then I think there's a third piece, and I'm at fault here too, because the conversation we've been having so far hasn't included it either, which is: what does the community want? Has the community been engaged in the conversation about whether they want it or not? I'm not here to speak for what community members would say in that regard, but I don't know that they've been consulted and have had a voice in the question of whether they want this technology, which is a surveillance technology, in their neighborhoods or not. And the answer, because all communities are not the same, might be that some do and some don't.
So those are a couple of ways to give a backdrop to how to answer that question, because you're right that it does seem like ShotSpotter would be a useful tool in these particular instances. The picture of how this can go wrong: first, let's start with a basic premise, which is that ShotSpotter actually tells police that a gunshot has happened or gunfire has occurred. We don't actually know that's true. The testing is insufficient to tell us about the accuracy of ShotSpotter in any specific instance. So that's a baseline problem: you have police responding to an alert that may or may not be accurate in any individual case. On top of that, we have the problem of how rushing in in response to a ShotSpotter alert can lead to violence. You can have police coming in thinking they might be facing an armed suspect, primed for that, and creating a scenario that's really volatile for people in the area. Then you have a problem we've seen happen in court cases many times already, which is: you get a ShotSpotter alert, you go to the supposed location of the alert, there's only one person there, so I'm going to stop that person, which, let's be clear, is in and of itself a harm. I'm gonna subject you to a seizure, and that is harmful to the person being seized. Sometimes a seizure can be totally innocuous; sometimes it can be forceful. It can often lead to an arrest. You can have situations in which ShotSpotter alerts go off, police come into a neighborhood, and they end up arresting somebody for something like, I don't know, possession of alcohol or possession of drug paraphernalia, which is one of the things reported by the Chicago Inspector General in their study of ShotSpotter. Meaning you can have police responding to a ShotSpotter alert who are not able to find the person responsible, to the extent that it really was a gunshot, and who then arrest people who are totally unrelated to the triggering alert in the first place. And that creates a whole domino effect of harms, right? The second you're sucked into the criminal legal system, that's hugely problematic in the ways we all already know, because you can go to prison, you can lose your job, you can lose your housing, you can be unable to get loans. There's a whole host of corollary problems with that. That's the baseline problem of what happens with a singular alert. Then there's the second problem: the fact that alerts go off in a neighborhood is now being used by police to justify stops in that neighborhood even when there's not an alert in the moment. And that's problematic too, because all of the stop-and-frisk policing we've been hearing about in the news, which has been applied disparately to Black and Brown communities and has led to violence and other harms, it reinforces that, it allows that to continue. So there's a whole corollary list of harms that can occur when you get a ShotSpotter alert or when you use ShotSpotter in these particular communities.

Dana:

So far, everything you've talked about, these studies, are showing the negative sides of using the technology. Have there been any studies showing good sides?

Maneka:

I think ShotSpotter would tell you that it's commissioned studies that have found the system to be effective. My opinion is that those are not independent, meaning they're not conducted by people who are unaffiliated with the system or the company. And they're also not well designed, in the sense we talked about: basing a study on police reports of how effective ShotSpotter is isn't a really meaningful way to test the system, because for any given ShotSpotter instance, unless there's independent evidence that gunfire did occur, police don't know. They don't know if there was really gunfire or not, because they don't know ground truth. You need to do testing in a controlled environment where you know the answer, and you need to test all of these variables we've been talking about. You need to test its ability to distinguish gunshots from other loud, impulsive noises. You need to test in a manner that covers the full range of guns that exist. You need to test in conditions where there's a lot of ambient noise or other loud noises and see how well the software can distinguish that and how well the analyst performs. You need to test not just the software but the analyst too; they're all part of the system. I wouldn't say there has been a study that I know of that has been well designed and has tested all of these issues in the robust manner we're talking about. ShotSpotter would disagree with me.

Dana:

Okay. And one other thing, based on what you were talking about before: we didn't touch on gun tracing. What if law enforcement manages to find a gun or a bullet from the use of ShotSpotter? This raises issues like fingerprinting, firearm matching, and much more. There's a lot we could unpack, but briefly, tell us why this creates a domino effect.

Maneka:

Yeah, that's a great question. Imagine a scenario where a ShotSpotter alert goes off, police respond to the scene, and they do recover evidence of gunfire. That could be a bullet or a shell casing, for example. And maybe they have some other information about a suspect, or they find a gun, and they wanna figure out if they can associate that ammunition with that particular weapon. That's firearms analysis, and it's a pattern matching discipline like we were talking about before. If they wanted to figure out whether a particular piece of ammunition came from a particular weapon, typically they would test fire that weapon and then do a microscopic comparison of the evidence items from the crime scene to the test-fired items. It's a detailed process, but the short version is they will try to see if they can find enough correspondence between things like scratches and dents, things they can observe under the microscope, between the evidence items and the known items. And this is one of those disciplines that, in my perspective, has not reached the level of scientific validity that should allow someone to say this particular casing came from this particular gun. You can say things like they're the same caliber, which is pretty powerful evidence, or they have characteristics in common that allow me to say I can't say this wasn't the gun. But it's a different matter to say it was, and definitely was, the gun. So that's one of those forensic methods that has the potential to be unreliable, and that's exactly how you can have a domino effect, a series of forensic failures that can result from one of these alerts. And the same thing as you were describing: maybe you find fingerprints on that crime scene evidence as well, and you're doing that same sort of comparison technique, and you could have those types of problems arise in that method as well.

Jena:

Bringing it back to Baltimore, has there been a legal challenge here to the use of ShotSpotter at all?

Maneka:

It's my understanding that there has not been such a challenge in Baltimore. I could be wrong about that; there may have been one I haven't heard about. My understanding is that the MacArthur Justice Center challenge in Chicago is the first of its kind. I wouldn't be surprised if we see litigation like that, or mimicking that, pop up all over the country, but my expectation is that the folks who might be thinking about such challenges are gonna wait to see how that one plays out before launching challenges of their own.

Charles:

And what's interesting, too, about that challenge in Chicago is that ShotSpotter itself, the company, isn't a defendant in that case; it's just against the police department that's using the technology. Do you think there'll ever be a case where ShotSpotter is put directly, I guess, on the stand?

Maneka:

I don't know. The reason for that in the MacArthur Justice Center challenge is actually really straightforward. I think we've all heard of what we call, in short form, 1983 challenges; this is primarily a 1983 challenge. What I mean by that is that Section 1983 of the U.S. Code protects against government violations of constitutional rights. When you have an actor acting under color of state law in a way that violates a constitutional right, that's what allows you to bring a challenge under 1983. So it makes sense that the challenge is against the Chicago PD and the City of Chicago rather than ShotSpotter, because it's those actors who are acting under color of state law, not ShotSpotter, in that particular instance. So that makes perfect sense in this case. Will we see some sort of lawsuit against ShotSpotter? I don't know.

Charles:

It's designed to pick up sound. Has there been any case where it's picked up conversations, and those conversations have been used as evidence?

Maneka:

Yeah, one of the early concerns privacy advocates had about ShotSpotter was that it's got these microphones that are listening all the time and can pick up conversations and things people are saying. ShotSpotter commissioned an audit of its whole system by the Policing Project at NYU Law School, and that audit didn't find major concerns with that particular privacy implication of ShotSpotter. They found that because it's calibrated not to record people's voices, it's calibrated to record loud, impulsive sounds that could be gunfire, and because ShotSpotter the company has put some protections in place, widespread recording and collection of people's voices and conversations would be prevented. That's the backdrop. There have, though, been at least a few known cases in which evidence has been admitted in court that contains people's conversations or voices, which is part of that privacy concern. ShotSpotter has absolutely recorded voices and conversations, and those voices and conversations have been used in court cases, contrary to what ShotSpotter says the system is designed and calibrated to do. A lot of that has been in association with a loud, impulsive sound; in the wake of a loud, impulsive sound, there have also been voices. But again, I think it is possibly a slightly lesser concern than some of the other things we've been talking about, although you're exactly right that it absolutely can happen and has happened.

Jena:

What changes would you like to see in the use of ShotSpotter? Does it need to be entirely scrapped from communities in general, or just revamped in some way?

Maneka:

The short answer to that question is that I haven't seen any persuasive evidence that would convince me we should continue using it. I think it should be scrapped. I don't think it's effective, as a baseline. I don't think it's been scientifically validated. The testing, as of today, is in my perspective inadequate. The transparency around the data that exists is very, very problematic. And that's just the question of whether it's scientifically valid. On the secondary question, whether it's actually effective in accomplishing the goals it seeks to accomplish, that data's pretty clear: it's not effective. The numbers we've been talking about throughout this conversation suggest that it is not valuable in recovering evidence of gun crimes, particularly in helping police find the actual perpetrators of such crimes. That's a huge problem, and again, that's just a backdrop. When you combine that with the harms it causes, from the over-policing, from driving more police into these communities, to the use of ShotSpotter to justify stop and frisk in the ways we know are already problematic from years of data, not just in Baltimore but in lots of cities around the country, it really, really troubles me in deep, deep ways. So I'm not at a point where I think it's technology we should be using. To the extent that I can't stop people from using it, or to the extent that communities themselves want this technology in their communities, that's a different story. I would be happy to have those conversations with folks in communities to make sure it's being used in the ways they deem best for them. But as of this moment, I don't have the data. I'm not convinced that it's a useful tool.

Dana:

So before we sign off, let's circle back to the opening of the conversation, where we were talking about forensic evidence in general. What would you say your hope is that people pull away from your research in that area?

Maneka:

I think the main thing to pull away is just to be critical. We wanna recognize that our carceral system is harmful, it's punitive, and over many, many decades it has disproportionately harmed Black people especially, and other people of color, poor people, Indigenous people, and other marginalized communities, in really significant ways. All of these technologies and tools and forensics are a part of that system. You can't extricate them and you can't put them in a bubble separately. They enable us to do the type of policing and prosecution that we do, and we need to start to confront that. So I think the number one takeaway is: be really critical of this stuff, and think about whether it makes the problems we're talking about worse or whether it's really effective at solving them. And then also think about having community members be involved in these conversations, because they're the ones, our communities, including people who are sucked into the criminal legal system by the use of these tools, all of our community members, who are most affected by them. So those are the takeaways: we need to step back a little bit and take a critical look at what their role is in what, overall, I think most of us agree is a harmful criminal legal system.

Dana:

And what advice do you have for any students who might be listening who want to become public defenders, who have grown up watching "CSI" and thinking that it's all that?

Maneka:

Well, that's all of my students! My advice is just put one foot in front of the other, keep an open mind, and don't believe everything you hear. For folks who are thinking about being public defenders, I know a lot of us went to law school to get away from science and technology and math. You can do it. It's gonna be fine. Just put one foot in front of the other; you learn it bit by bit, you learn it throughout your entire career, and you never stop learning it. And remember that the reason you're doing it is that there's a person standing next to you that you're fighting for. That's the motivation to keep at it. And just reach out. I'm happy to talk to anyone who is thinking about that as their career.

Charles:

Thank you so much for everything that you've shared today. It's a really interesting conversation and an honest look at an issue that affects this community. We appreciate you coming on.

Maneka:

Thank you so much for having me I really enjoyed the conversation.

Dana:

Be sure to tune in next month we'll be talking with Dr. Karen Kotloff. She is a pediatric infectious disease expert here at the University of Maryland School of Medicine, and she has made major contributions to the field of vaccine development in the developing world. She is a fellow with the American Society for Tropical Medicine and Hygiene, so she's going to take us on a global conversational tour about epidemiological studies that she's worked on. So be sure to tune in!

Charles:

You never know when we're gonna drop a bonus episode. So if you're listening to us on Spotify and Apple, go ahead and hit that follow button. And thank you for listening to The UMB Pulse.
