> records showed that ShotSpotter actually initially picked up what sounded like a firework a mile away, and this was later reclassified by ShotSpotter staff to be a gunshot at the intersection where and when Williams was seen on camera.
So the AI didn’t even make the call. The staff did, manually. I assume that means the actual audio is available and entered into evidence?
If humans are making the call then blaming AI seems like a stretch. That’s almost like blaming the motion detection algorithms for triggering video recordings that were later reviewed by humans. It’s still humans reviewing the recordings and making decisions.
Later it's presented as AI to a jury. It's basically the same scam as lots of forensics, junk science to get a desired result.
When a scientist proposes a new test, they include measurement techniques as well as a method of interpretation. If a medical test is used in a way that is frequently prone to misinterpretation, it would be fair to call that entire test useless.
And yet there are many examples of this with malpractice, unnecessary deaths, etc. Lyme tests are notorious for false negatives depending on the lab you use. It's also a widely held belief that you want tests done in the middle of the week instead of on a Friday or weekend because some techs just want to get out of there and are more prone to mistakes.
The reason I'm drawing a hard line is because of fact vs opinion. This same line gets drawn in court. They will ask experts if it's a fact or an opinion that they testified about (usually to attack a witness). Autopsies are full of facts, and then there are opinions on what those facts mean. If it's an opinion, it's possible to counter it with another professional opinion.
The only sort of opinion they should be sharing is included, excluded, or undetermined. Hair is not a "match", but it could include or exclude someone, and perhaps support mathematical probabilities.
1. Was it that he said statistics sometimes lie?
2. Or that he questioned the 90% efficacy of some Covid vaccines?
Given this history it is absolutely reasonable to reject most forensic evidence unless there is actually good science supporting the reliability of expert interpretations.
It's basically impossible for someone who isn't pretty handy with Bayes to work out the significance for themselves.
The average jury member has no chance.
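For illustration, here's what that Bayes work looks like with made-up numbers (the false-positive rate and suspect pool below are hypothetical, purely to show the shape of the calculation):

```python
# Hypothetical numbers: a forensic "match" with a 1-in-1000 false-positive
# rate, and 10,000 people who could plausibly have left the evidence.
false_positive_rate = 1 / 1000
pool_size = 10_000
guilty = 1  # one true source of the evidence

# Expected number of innocent people who would also "match":
expected_false_matches = (pool_size - guilty) * false_positive_rate

# Probability the matched person is actually the source, assuming the
# true source always matches:
p_source_given_match = guilty / (guilty + expected_false_matches)

print(f"Expected innocent matches: {expected_false_matches:.1f}")  # ~10
print(f"P(source | match): {p_source_given_match:.0%}")            # ~9%
```

A "1 in 1000" error rate sounds damning to a juror, but in a pool of 10,000 it leaves the matched person more likely innocent than not.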
> The science is generally sound (procedures and theory). Usually it's the prosecutors that present the forensics in a way that misrepresents what the results actually mean.
Look at this damning quote - "ShotSpotter strongly insisted it had not improperly altered any data to favor the police's case". That shows they don't have a system in place to prevent data from being altered, or to prevent themselves from knowing what the police want, so all their findings are suspect.
It's only supposed to be used to include or exclude someone, and based on the numbers/probabilities it's generally only useful for excluding someone. Of course that's not how most prosecutors use it...
Basically, the majority of forensics are based on sound theories and practices. The issues are when professionals and prosecutors try to "interpret" the facts.
I would be interested in any citations showing that forensics are generally unsound.
Do you have a link for the fingerprinting claim?
The Justice Department and FBI have formally acknowledged that nearly every examiner in an elite FBI forensic unit gave flawed testimony in almost all trials in which they offered evidence against criminal defendants over more than a two-decade period before 2000.
Of 28 examiners with the FBI Laboratory’s microscopic hair comparison unit, 26 overstated forensic matches in ways that favored prosecutors in more than 95 percent of the 268 trials reviewed so far
This isn't the case I originally recalled, and doesn't say how many points were matched, but I remember a 5 point match being used to convict a person who was later proven innocent.
This is the case for many types of evidence; witness statements for example can also be notoriously unreliable. It's not bad to use this kind of evidence, but you do have a problem if you consider any witness statement to be absolute truth and never consider any "reasonable doubt" there may be, and this is how you end up convicting people based purely on "I saw the suspect commit the crime from 50 metres away in the dark and I'm sure it was him!" or some such.
If you have six independent witness statements like that, or a witness statement and fingerprints then the reasonable doubt starts becoming a lot less reasonable.
This is also the problem with the case in this article: a piece of evidence over which it seems like reasonable doubt should exist is the singular piece of evidence tying this person to the crime. The problem here isn't the evidence as such IMHO, but that juries and/or defenders (depends on the case) seem exceedingly bad at sceptically examining physical evidence.
Fingerprints are directly-recorded facts. What may be junk science is what expert witnesses present about the interpretation of fingerprints.
> The problem is that many juries, judges, prosecutors, and even defenders forget about this.
Juries as triers of fact, and judges insofar as they are ruling on the facts of the case, aren't, in any case, supposed to allow themselves to be guided by extrinsic knowledge; they are supposed to act based on facts (both direct and the expert knowledge relating to the import of direct facts) provided as evidence, both testimony and exhibits, in the case.
That's not really how it works; false positives exist, especially if you consider that a lot of times you have incomplete or smudged fingerprints and there is actually some amount of subjectivity involved in determining a match. See e.g. : "only two properly designed studies of latent fingerprint analysis had been conducted. These both found the rate of false matches (known as “false positives”) to be very high: 1 in 18 and 1 in 30."
Again: useful evidence for sure, but considering them undisputable facts is not wise.
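To put a number on it, here's a quick sketch of what the 1-in-18 false-positive rate from that study implies. The prior is hypothetical, and sensitivity is generously assumed to be perfect (the examiner never misses a true match):

```python
# Bayes' rule applied to the 1-in-18 false-positive rate quoted above.
def p_true_match(prior, fp_rate, sensitivity=1.0):
    """P(print really came from the suspect | examiner reports a match)."""
    p_match = sensitivity * prior + fp_rate * (1 - prior)
    return sensitivity * prior / p_match

# With only a 1-in-100 prior chance the print is the suspect's, a reported
# match still leaves it far more likely that it is someone else's:
print(f"{p_true_match(prior=0.01, fp_rate=1/18):.0%}")  # ~15%
```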
Yes, it is.
> false positives exist
Matches (positive or negative) are not the same thing as fingerprints. Fingerprints are directly recorded facts. Matches are matters of interpretation (the fact of a match reported by a particular system is a distinct direct fact from the fingerprint itself, but again its significance is what expert testimony exists to establish or challenge.)
No evidence is foolproof. My entire point is that the issue isn't with “fingerprints” or the knowledge that juries and judges bring extrinsically (which was the explicit claim made upthread), but with the expert testimony that contextualizes fingerprints.
(This was even more clearly the problem with fiber evidence when the FBI crime lab was presenting pure bunk expert testimony in virtually every case.)
Your pedantism is wholly off base here.
How then do you explain the success of Touch ID on Apple devices and the equivalent systems on other phones and tablets?
Random people can't just walk up and unlock a Touch ID device with their fingerprint, which suggests that fingerprints are in fact a very good way to tell people apart.
Yes, it is possible to make a fake fingerprint that Touch ID will not be able to distinguish from my fingerprint. But that fake fingerprint will only unlock my device, not your device, so it is still correctly distinguishing our fingerprints.
Because on those devices the fingerprint is only used as a second factor (the first factor being physical possession of a particular device).
If Apple used fingerprints like law enforcement do, that is running a fingerprint against the big DB of all prints, then they would certainly get some arbitrary matches.
late edit: how often do people try to unlock your unattended apple device?
Another way fingerprints are used is to compare fingerprints found at a crime scene to people who have been identified using other methods that have nothing to do with fingerprints.
E.g., if you have someone murdered in his office when working late at night, and security cameras show that five other people were in the building at the time of the murder, and you are able to get a good set of fingerprints off the murder weapon you don't need a database. You take the fingerprints from all five of those other people, and if one is a very good match and four do not match, you concentrate most of your effort on the one that matched.
It's actually quite similar to how DNA evidence has been used and misused. I don't know how they compare DNA nowadays, but when DNA evidence was first making waves getting people convicted they only compared two DNA samples at a few points. For a given sample there would likely be several people in the country that matched.
That's fine if used right, like fingerprints in the earlier murder hypothetical. Narrow it down to only 5 people who could have committed the crime, get a DNA sample from the crime scene that must be from the criminal, and if that matches exactly one of those 5 suspects it is strong evidence they are the criminal.
Have no suspects yet, run that same sample through a database, get exactly one match, and conclude that must be the criminal. Totally bogus. A lot of people were convicted in the early days of DNA matching that way.
The database method can be made sound, but only if the database includes everybody. Match against a database that includes everyone, only get one hit, and you've probably got your criminal. But if the database includes everyone you are likely to get several matches.
As I said, that was how it used to be. I know DNA sequencing has gotten faster and cheaper over the years, but I have no idea if routine forensic DNA matching now matches enough to make matches unique except in the case of identical siblings.
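A rough sketch of why the cold-hit database search is so different from testing five known suspects (the 1-in-a-million random-match probability is hypothetical, standing in for an early-days DNA profile):

```python
# Hypothetical: an early DNA profile matches a random, unrelated person
# with probability 1 in 1,000,000.
random_match_prob = 1 / 1_000_000

for db_size in (5, 100_000, 10_000_000):
    # Expected number of innocent people in the pool who match by chance:
    expected_chance_hits = db_size * random_match_prob
    # Probability that at least one innocent person matches:
    p_any_chance_hit = 1 - (1 - random_match_prob) ** db_size
    print(f"{db_size:>10,} people: {expected_chance_hits:.6f} expected "
          f"chance hits, P(at least one) = {p_any_chance_hit:.2%}")
```

With five suspects a match is strong evidence; trawl a ten-million-person database with the same profile and you should *expect* around ten innocent hits.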
Do you think the forensic prints would meet the standard required by Touch ID?
Given how much R&D goes into making better fingerprint scanners, it seems odd to then claim that a fingerprint taken off of a victim has anything near that level of detail. I guess it's sound to say that it may be used as a piece of evidence to further narrow down the target, but it's completely disingenuous to claim that it's sound evidence.
Like are you joking?
Saying TouchID validates fingerprinting is like saying MRI machines validate psychics.
It seems you two have a different idea of what is being discussed. The underlying fundamentals of fingerprints are solid. The part that can make them ineffective is when they are using insufficiently tested tools or matching on too few points. And of course misrepresenting what a match actually means and how it pertains to the case.
What exactly are these underlying 'fundamentals'? Nobody even knows if everybody has unique fingerprints, it's just an assumption.
Beyond that, it's not all that relevant since as you mentioned how the matching is done is largely subjective, and it depends a lot on the number of "points" used and who does the matching, and that should really bring the accuracy even more into question. What exactly is the statistical probability of an incorrect match based on the number of points used? Good luck answering that question.
IMO the situation is made a lot worse by the fact that the public generally assumes fingerprinting to be extremely accurate (or 100% accurate).
That said, human in the loop ML can be extremely useful. Surfacing possible positive examples for human review and/or cutting down 'obvious' negatives can multiply the productivity of human reviewers in many contexts.
Human-in-the-loop ML can be extremely useful in avoiding obvious mistakes.
Human-in-the-loop ML can be extremely useful in fabricating desired results.
Misusing science is a problem regardless of whether ML is the science in question. It's /already/ a big problem within criminal justice (polygraphs, psychopath tests, etc).
Avoiding 'obvious' mistakes is helpful in reducing the amount of work human raters/evaluators need to do. Automatically classifying 90% of data and leaving 10% for humans can reduce human workloads by 90%, or enable new use-cases where doing 100% of the work simply wasn't feasible. Using ML in this way doesn't make it 'fake AI.'
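A minimal sketch of that kind of triage, with hypothetical confidence thresholds:

```python
# Hypothetical confidence-based triage: auto-classify the confident cases,
# send only the ambiguous slice to a human reviewer.
def triage(scored_items, high=0.9, low=0.1):
    auto_positive, auto_negative, for_review = [], [], []
    for item, score in scored_items:
        if score >= high:
            auto_positive.append(item)
        elif score <= low:
            auto_negative.append(item)
        else:
            for_review.append(item)  # humans only see these
    return auto_positive, auto_negative, for_review

scored = [("a", 0.97), ("b", 0.03), ("c", 0.55), ("d", 0.02), ("e", 0.91)]
pos, neg, review = triage(scored)
print(pos, neg, review)  # ['a', 'e'] ['b', 'd'] ['c']
```

The danger, per the comments above, is in how the auto-classified buckets get presented downstream: "for review" is honest; "confirmed by AI" is not.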
I'm ignorant on the topic. Got links?
For a more recent example, here's an article on denim analysis...
The underlying problems seem to be bad incentives (the prosecution is paid+evaluated on its ability to put people in jail) and the court's willingness to accept insufficiently validated tech.
>sales people ready to make a call any way their customer (police) wants it made.
You really believe people signed up for a dangerous and thankless job with high liability and low pay so they can feverishly conspire to have this guy arrested?
Edit: This place is becoming reddit. Someone posting links which don't substantiate their statements gets upvoted, meanwhile my replies questioning this get hit by a downvote brigade.
Also, while police do have a more-dangerous-than-average job (and face unique risks - they are probably the profession most likely to die of a gunshot on the job), there are much more dangerous jobs.
So if you have been excusing police because you think they are accepting poverty for danger, I'd ask you to consider that Police are more like extremely well paid garbage collectors.
The 'Avg Total pay & benefits' sorted by title has Chief Information Officer/Registrar of Voters as the entry into that level of remuneration. Police-specific above that you only have the Sheriff, which is an elected official.
Police pay for the vast majority of roles is significantly below $450k.
Edit: I can no longer edit my original post so this will have to do.
Second, it would be absurd to account for extra income to say they make what you're saying. That's like saying a cashier at walmart works three jobs to make an average of 70k a year, so cashiers at walmart commonly make 70k a year.
Except it's extra income that they make by being a police officer. A business that hires an off-duty police officer for security isn't hiring them because they're a big burly-looking dude; they're being hired because they are a police officer. An off-duty officer will typically wear their police uniform while working "off duty", and can take actions which an ordinary civilian could not (like conducting arrests).
It's not what I say they make, it's what the public records say they make. You can go look yourself.
For example, from my first link, Sergeant Stuart Eugene Barnes made $125,380.76 in salary and $172,330.17 in overtime. I think it fair to say that's compensation from their job as a police officer.
Kind of makes sense though, most officers start there then move to a safer place once they get enough experience. Remember seeing billboards advertising $80k starting salary 10+ years ago while driving through town.
As for your second part, I guess it would seem unlikely for some positions, but I know a doctor who made like 3x his base salary each year through all sorts of absolutely legal, ethical, and hospital-sanctioned means. Taking every single weekend he could on call (even offering to "relieve" other doctors who had been assigned that weekend) for example allowed him to make much more than you would ever think possible.
An administration that squashes all overtime faces extreme wrath. They can be cutting employees' take home in half. I and plenty of others have seen people lose their houses or suffer other financial crumbles when the expectation of built-in pay far beyond their published salaries were cut off. You can argue that it is foolish to depend on such extra pay and I would agree. But when that significant extra has been their reality for all 10 years of their employment with the department, you can see how they would just plan it into their future finances.
From your source (for the rank of Sergeant):
Regular pay: $139,197.60
Overtime pay: $160,477.73
Other pay: $13,292.65
Total pay: $312,967.98
Total pay & benefits: $440,208.56
The way I interpret that is that they're paid with the idea that they'll have two kids or a Sergeant is a "Manager" (Management $124,884, from the living wage source). I'm curious what the reason for overtime is, but I suspect that it's staffing requirements.
Debating pay to danger ratios is a pretty fruitless effort. People don't get paid for danger.
TLDR: "Service members will receive $7.50 for each day they are on duty in an IDP area up to the maximum monthly rate of $225."
Moreover, I was a 4 year Corporal living in the barracks with the equivalent of two deployments under my belt raking in about $20k a year.
To my knowledge, this is a fairly standard rate. Side note, we used to crack all kinds of jokes around, "I guess I earned my 7.50 today."
I think what people do get paid on frequently is the availability of people who can and will do the job. A limited and capable talent pool will always make more, especially when demand is high.
I do wonder if the pay negatively affects the confrontations that the individual officer will risk, e.g. let a suspect escape due to not having good enough backup.
People are terrible with nuance. OP did a good job at contextualizing danger of policing, without claiming it's "safe", as you'd see on reddit, but I'm guessing something is getting lost here.
I'm very sceptical someone is making multiple six figures without serious responsibility and credentials. In my area officers make similar (lower) pay to tech jobs, and every area I've seen is similar. So people on reddit will be complaining about officers making 80k-100k out west, when in reality that's just marginally better than bartenders.
So yeah, they're making like 60k-100k, or more out west. Here in the midwest it's more like 40k-80k. This compares to developer jobs I've seen, but it's not apples to apples. Cops end up years behind in pay, but end up with better benefits. So they'll be at 70k 3-10 years in, instead of 0-5 in.
In either case there's other tiers and roles, was assuming grunts. There might be other cases where "super grunt" jobs exist, and get paid very well. Just like with engineers it seems like the way to advance is increased specialization and moving to management.
You can spin your wheels at low level engineering jobs for decades if you want, too. Usually, people also have to move towards leadership roles to advance in engineering...
So I digress, but I guess it's fair to say police officer is of similar value as engineer in terms of job perks... but engineer is considered a good job.
1. Police officers (and more importantly, prosecutors) are incentivized to make arrests and have those arrests stick.
2. It's a well understood psychological phenomenon that individuals are much more lenient of their own actions. They call this attribution theory in psychology .
3. While policing is a risky job, it comes with privileged treatment, powers, and trust that a common citizen does not have. As such, and as public servants, we hold those with these powers to a higher degree of responsibility and behavior than those without such powers. To hold police accountable, we need a minimum level of oversight and control. Historically, this has either not happened or been circumvented by way of police unions establishing ill-advised contracts with the local governments they work with. 
I'll agree on points that individual police officers aren't evil, but I'd need a lot of convincing to believe that many policing organizations within the United States are not being held accountable to an acceptable level; nor are they being trained in ways that encourage appropriate interactions with the public they serve.
2. That applies to everyone, not just police.
3. This is a fallacy due to media portrayal. If you are late to your job, what happens? Probably nothing, or you get asked not to do it again. For the same mistakes you make, a police officer will get investigated by internal affairs and punished.
Policing is also a lot less risky a job than is commonly perceived.
Hopefully cases like this will eventually stop the system being used in court.
By all other accounts, it doesn't work at all and is losing contracts.
> "...a study of Chicago police data found that over a nearly 22-month period ending in mid-April, almost 90% of ShotSpotter alerts didn’t result in officers reporting evidence of shots fired or of any gun crime."
> "In four years, police have made two arrests while responding to a ShotSpotter activation. Lt. Shawn Takeuchi, an SDPD spokesman, could only confirm that one of those arrests was directly related to the activation, but declined to give more information on both."
I think this part is pretty key, and has always been a big open question in my mind:
"The court further recognized a potential problem in assessing reliability related to handler cuing, as even well-trained dogs could respond to subconscious cues from their handlers. For this reason, the court determined that a critical factor in determining reliability is the record of false positive alerts made by the dog."
The subconscious cues, or even deliberate cues that only the handler and dog know, could allow an officer with ill intentions to make some kind of signal only known to him and the dog, which causes the dog to make an "alert", giving the cop probable cause.
If judges continue to allow this quality of evidence, the police will continue to abuse the legal environment that creates.
Your average lunch near the courthouse.
I'm assuming that's an unfortunate Freudian slip of a typo.
not how the system should work, even though it is now how it seems to work.
If anything, I think the expense of defense and the poor quality of public defenders probably accounts for bias.
My impression is that if they do recognize it, they don't care. I also think many don't recognize it. After all, are they going to side with a layman citizen or the police and prosecutor? Even with a defense attorney, it often comes down to the police officer's word vs the accused. Who are they going to believe?
These are not just traffic violations and minor issues. And even if they were, that doesn't mean that people's rights stop mattering, nor is that a valid excuse for gross incompetence.
Backs up the GP claim but I can’t find any evidence of this being replicated in another independent experiment. It’s at least a little bit strange that no one tried to replicate these rather concerning findings in the past 10 years since publication.
Maybe it will stop this system from being used in court, but new ones will take its place. It would be great if we could change the structure of the justice system to prevent these types of issues from popping up again. I have no idea what a solution would look like.
I don't see any incentive here. The only consequence suffered by the prosecution was having to drop that evidence. That's it. Maybe there's a civil case later, but that will be separated by a large time gap, etc.
I agree that it is disturbing they tried to use this as evidence.
Serious question: what part of this technology is amazing? It's just sounds trained against audio labeled as gunshots. How is this any different than detecting faces in images or whatever?
"Strongly insisted" links to: https://www.theregister.com/2021/08/02/nvidia_cuda_openai/
> One of the pieces of evidence against Williams claims ShotSpotter's sensors in Chicago identified gunfire where surveillance cameras had seen Williams stop his car by a south-side Chicago block, right when and where the cops said Herring was shot.
> However, Williams’ lawyer submitted paperwork [PDF] claiming ShotSpotter actually detected a firework a mile away from that location, and that ShotSpotter later reclassified the bang as a gunshot and the location as being where Williams was seen on camera, Vice first reported.
> Williams' lawyer demanded the court hold an inquiry into the ShotSpotter evidence, and the prosecutors simply withdrew it.
> ShotSpotter responded by denying at length it improperly altered any data or evidence, and hit back at any suggestion it had done so to help the police make a case. It said its software generates real-time alerts automatically, and staff later analyze the microphone readings to submit forensic reports for the courts, and these final reports can therefore differ from the initial alerts.
> "The idea that ShotSpotter 'alters' or 'fabricates' evidence in any way is an outrageous lie and would be a criminal offense," it said in a statement. "We follow the facts and data for our forensic analysis. Period."
> Update: The case against Williams was dismissed by the judge at the request of the prosecution, which admitted it now had insufficient evidence. Williams had spent the best part of a year in jail awaiting trial.
Yet the system will let them. I've had a prosecutor and trooper hold a charge that they know was incorrect and subject us to pre trial restrictions found only under that charge. The prosecutor also used their position to block our ability to have a witness remotely testify by telling court scheduling not to even speak to us (inappropriately using their position to influence the court). The state police said they see nothing wrong with knowingly holding an incorrect charge and subjecting people to pretrial under it. The Bar said that even though the conduct we described would constitute prosecutorial misconduct, they won't pursue complaints against prosecutors unless the court makes a determination supporting that. That would cost thousands of dollars to hire a lawyer, make an appeal, etc. The ACLU said they have bigger issues to deal with. I submitted a complaint to DOJ, but I'm not holding my breath.
Literally everyone we dealt with as a part of the system made a bunch of mistakes and didn't give a damn about correcting them or protecting people's rights. It's us vs them and all about the money.
(a) The staff of the company who makes the AI, which would have an incentive to make the AI's original judgement appear accurate?
(b) The staff of a company paid by the state/police, which aims to please its customers?
(c) Was the staff trained/expert in recognizing such sounds, or were they random AI company employees who just made a judgement call? If so, was this call presented as AI-supported at the trial, and was it sold as anything beyond "some devs, no gun experts, mind you, heard this recording and said this must be a gunshot"?
The modern criminal investigation has so much negative baggage attached, from historical shenanigans, that it's easy to find ways to question the prosecution's case.
Another factor here is that simply evaluating more data is also likely to lead to more erroneous evaluations since random chance has a much larger impact.
If that's what happened in these circumstances, it would be: police check ShotSpotter for more evidence, ShotSpotter hasn't automatically marked anything, police request manual review for the gunshot they are sure must be there, ShotSpotter's manual review process comes up with something and marks it as a shot with a human override.
There's a lot of places bias, or outright corruption, could cause problems in that process, if it's what happened here.
I think a crucial distinction is to clarify who “they” is: Who was making the claim? From the article, it seems that the prosecutors may have been the ones exaggerating the evidence:
> Prosecutors said ShotSpotter picked up a gunshot sound where Williams was seen on surveillance camera footage in his car, putting it all forward as proof that Williams shot Herring right there and then.
Naturally the defense wanted to see the actual evidence, so that’s what they got down to.
I know everyone wants to vilify AI, but I think the actual bad actor in this article might be the prosecutor trying to misrepresent the evidence.
The prosecutors are the "they". The further removed you get from people who know the 'AI' the more the validity of the evidence provided by the AI relies merely on what others have said.
The people working on the system know that it can be faulty. Other software developers have an idea that it could be faulty. Prosecutors are told that it's alright. They turn around and argue it in a court of law. And the further you get the less people have any idea that the system is faulty. Down the line people will go "the machine says so, therefore it must be so".
That's the real feature of AI, decision laundering: a flawed decision made by someone for their own gain now appears impartial and trustworthy, and comes without accountability.
I wonder how much of "AI" introduced in court (or for a warrant) involves some element of individual human attention though... They just call it "AI" without qualification until they get caught (if even then).
Would it be better though if it had been purely the output of a machine learning algorithm? (Which after all is not infallible). In that case, the guy probably would have been convicted on that alone...
Which is mostly unrelated to whether or not it's accurate.
There is a very, very, very long history of police and prosecutors using bad (in the sense of accuracy) evidence to put people in jail.
Prosecution over thumbnails, file hash matches with tenuous connections to individuals - crime tech has given law enforcement a new position from which to craft fanciful narratives to support prosecutions of just about anyone. I'm not specifically citing or implying there have been *targeted* instances of this, but it happens very often to US citizens already.
Stingrays, Cellebrite, Forensic "science", ShotSpotter, it's absolute chaos and anyone can be a victim here.
My goal here is not anything about Apple and its CSAM strategy, but to make it very clear that the imagined scenario happens now and happens frequently.
It seems... far fetched.
"Our CSAM matching found 29 matches that are either confirmed or strongly suspected in your iCloud drive. But because Apple values your privacy, we are not turning this information over."
No. Expect that number to go to '1', or very close, and very soon. Because as much as this is a PR nightmare for Apple right now, the above scenario is even worse.
Do not waive a speedy trial ever. The judge will want you to, your attorney will want you to ($), the DA will want you to. Tell them to go fuck themselves.
Knowing that, I think I would demand a speedy trial, but most people have horrible lawyers.
I'm unsure about federal guidelines. I think many states have laws that could be challenged in federal court, and their de facto behavior most certainly can, if you have money.
> The rapid growth of functional magnetic resonance imaging (fMRI) research has led to new insights on neuroanatomical structure and function, which has led to a greater understanding of human behavior and cognition. As a response, there has been an emergence of questions regarding how these findings can be applied to criminology and legal processes.
I hope they have a more solid foundation of evidence than fMRI.
Eg from https://www.fastcompany.com/90520750/duke-university-researc... :
> The researchers reexamined 56 peer-reviewed, published papers that conducted 90 fMRI experiments, some by leaders in the field, and also looked at the results of so-called “test/retest” fMRIs, where 65 subjects were asked to do the same tasks months apart. They found that of seven measures of brain function, none had consistent readings.
Also https://www.pnas.org/content/113/28/7900 :
> Functional MRI (fMRI) is 25 years old, yet surprisingly its most common statistical methods have not been validated using real data. Here, we used resting-state fMRI data from 499 healthy controls to conduct 3 million task group analyses. Using this null data with different experimental designs, we estimate the incidence of significant results. In theory, we should find 5% false positives (for a significance threshold of 5%), but instead we found that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of a number of fMRI studies and may have a large impact on the interpretation of weakly significant neuroimaging results.
I have a bad feeling this will be a theme over the next decade as governments and law enforcement around the world start to integrate "AI" (or rather, call it what it is: statistical models plus mass surveillance) without enough scepticism built into their processes. Some of us know better, but it looks like society and government will still have to learn the hard way that these tools are not a highlighter pen; they are a very crude spray can, and even then they will miss spots.
House arrest, with deferral of your debts and obligations and mandatory reinstatement of your job if you are exonerated or the charges are dropped, seems like the minimum needed to live up to "innocent until proven guilty".
The person in this case, however, had a criminal history including attempted homicide so the judge decided to opt for detention while awaiting trial instead.
Sounds like perfectly normal procedure to me and happens other countries as well.
I don't think the answer is to do away with these systems, but as we yield more and more authority to machines, regulation will need to catch up so they aren't just used as a fig leaf for corroboratory-evidence-as-a-service.
I am struggling with the use of the word "forced"; the researchers put their software out there to be used by anyone with an internet connection. The researchers created that dynamic.
And how are they not seeing that AI-created, no-humans-were-used-to-make-this porn could be a net good?
Do the microphones record conversations too? Can the government spy on its own citizens? I guess some secret court ( https://en.m.wikipedia.org/wiki/United_States_Foreign_Intell... ) allowed it?
 - https://www.smartcitiesdive.com/news/san-diego-faces-lawsuit...
If you want to read up on all the ways this kind of thing can go wrong, the technical term here is "source localization."
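The core idea behind acoustic source localization is time-difference-of-arrival (TDOA): each microphone pair's arrival-time difference constrains the source to a hyperbola, and you solve for the intersection. Below is a toy sketch of that idea; this is NOT ShotSpotter's proprietary algorithm, and the sensor positions, grid search, and fixed sound speed are all invented for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at ~20 C; real systems must model temperature/wind

# Hypothetical sensor positions (meters) and a hypothetical true source.
sensors = [(0.0, 0.0), (500.0, 0.0), (250.0, 400.0)]
true_source = (180.0, 150.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Simulated arrival times. The unknown emission time cancels out when we
# work with arrival-time *differences* relative to the first sensor.
arrivals = [dist(true_source, s) / SPEED_OF_SOUND for s in sensors]
observed_tdoa = [t - arrivals[0] for t in arrivals]

def tdoa_residual(x, y):
    """Sum of squared mismatches between predicted and observed TDOAs."""
    pred = [dist((x, y), s) / SPEED_OF_SOUND for s in sensors]
    pred_tdoa = [t - pred[0] for t in pred]
    return sum((p - o) ** 2 for p, o in zip(pred_tdoa, observed_tdoa))

# Brute-force grid search over a 500 m x 500 m area. Production systems use
# least-squares solvers; errors in assumed sound speed, echoes, or a shot
# fired indoors/in a car all distort the estimate.
best = min(((x, y) for x in range(0, 501, 5) for y in range(0, 501, 5)),
           key=lambda p: tdoa_residual(*p))
print("estimated source:", best)
```

With clean, noiseless data the search recovers the true location exactly; the literature on source localization is largely about what happens when the data is not clean, which is precisely where the disputes in cases like this one live.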
These are two very different approaches to solve different problems.
Who would have thought?
Shocking, shocking I tell you.
It identified it as a firework a mile away! The humans at the software company reclassified it as a gunshot at Williams' location.
Interesting that there's no mention of the actual gun shot, which appears to have happened some time later?
Not only is this "tool" bullshit, it seems the employees of said org are morally bankrupt enough to falsify evidence in order to put an innocent man in jail.
The whole thing would be utterly unbelievable if it wasn't happening in America.
The outcome, a guy being charged with a crime using bogus evidence, is the story. The software may have been right, but the people who operate the software didn't trust it. Regardless, the story is trying to point out that we might not be ready to put our faith in AI for criminal prosecutions at this time.
AI is just not advanced enough to jail people, by a very large margin. And yet we will see it happen.
I don't understand what ShotSpotter has to do with this case at all. The question the case seems to hinge on is "who fired the gun?", not "was a gun fired at this location?".
What am I missing?
The fact that some of the evidence the police used to say who did what where was manufactured by the company.
> A man spent a year in jail on a murder charge that hinged on disputed AI evidence.
But after reading the article, the evidence seemed irrelevant?
Maybe the headline should have been:
> AI evidence was manually reclassified after human review in murder case
The police used the altered location of the shot to imply that the defendant's story about what happened wasn't true. His story was that another person, in another car at a different location, did the shooting.
They used the altered location data alongside video evidence to imply the defendant and the victim were alone at the shot location and therefore the defendant was the only possible suspect.
Ok that would make more sense, but I definitely did not take that away from the article. I’m sure there are better sources on this case though.
Yet another example of why you cannot get your information from one source. Even if they are not trying to mislead, one author might not have all of the information to provide in their writing.
Ah yes, I remember hearing that a lot growing up.
Editors, do your job!
- The use of the passive “It’s said” has no business in a news item. Who said it?
- Idiomatically “It’s said” is typically used to identify a bit of folk wisdom, not a particular incident.
It’s terrible writing and it should appall a decent editor. This isn’t a blog post or a comment, it’s an item in an English language professional online publication. Do better.
It's like saying I was driving through the United States to go to my local corner store.
I wouldn't have thought about the poor writing if OP didn't mention it, but it is still a bit pedantic in my opinion.
It's not good journalism. If someone said something, say who said it. If it's an anonymous source say so. "It's said..." in an article like this means "I vaguely think this has been said somewhere but I can't be bothered to check or quote my sources".
The "AI" is just used to detect whether a sound is a gunshot, for early alerting. Obviously anyone can listen later to the recordings (a minimum of 3) and judge the noise for themselves.
The gunshot happened inside a car, so the mathematics fails. This is why the evidence was withdrawn.
So if you were blaming AI, it's important that you blame the mathematics instead, now that you are better informed. Because the only thing I hate more than the fake AI industry is people with an inconsistent stance.