Investigation Uncovers Police Relying Solely on Facial Recognition Software
As we have previously covered, law enforcement agencies across the country have begun to use emerging and sophisticated technology in their investigations, sometimes to catastrophic effect for individuals misidentified by these new tools. The Washington Post recently uncovered numerous instances in which investigators used facial recognition technology powered by artificial intelligence to identify and charge individuals with crimes based solely on those automated results. In some cases, investigators relied on grainy or blurry images captured by surveillance systems at the crime scene, which the software then cross-referenced against databases of mugshot images, even though the AI company behind the software warns that any results are "nonscientific" and "should not be used as the sole basis for any decision," including an arrest.

Despite this warning, officers in St. Louis, Missouri brought charges against one man, Christopher Gatlin, eight months after a security guard was beaten on a remote train platform. Gatlin was identified through the AI matching software, and police then improperly coerced the victim into identifying him in a photo lineup, a judge would later find in dismissing the case. Gatlin ultimately spent 16 months in jail while his charges were pending. The over-reliance on AI in identifying him was not disclosed until months after his arrest, during a deposition of an officer, and the detective assigned to the case eventually admitted in court, under questioning from Gatlin's public defender, that the sequence of events leading to the arrest was not a "reliable way to get a legitimate identification of a suspect."

In another instance, police officers in Miami, Florida arrested an individual for cashing a fraudulent check after facial recognition software identified him as the culprit, without any other basis and without using other investigative tools, such as checking the arrestee's bank accounts, the time stamps on footage of the fraudulent check being cashed, or any other evidence connecting him to the crime. In fact, his image was shared with investigators only because he had legitimately cashed a check at the same bank on the same day. He spent three days in jail after his arrest, which reportedly remains on his record even though all charges were eventually dropped.

These false identifications do not just disrupt individuals' lives; they carry lasting real-world consequences. Some individuals arrested based on faulty artificial intelligence technology report losing jobs, missing car and home payments, and suffering emotional harm after their children watched them arrested for crimes they had no connection to. The examples the Post uncovered reveal that while artificial intelligence and emerging technologies offer investigators another tool, and some argue they simplify everyday tasks, over-reliance on such technology risks a dystopian future in which human judgment is discarded in favor of automation that should supply, at most, one small detail in an investigation. Indeed, one study found that fingerprint examiners were more likely to erroneously match crime-scene fingerprints to a candidate when a computer system had already reported a potential match, a result of the examiners' confidence in the technology.
Because many law enforcement agencies do not voluntarily disclose what technology they rely on in their investigations, anyone accused of a crime should retain a dedicated and experienced defense attorney who is familiar with investigative procedures and can uncover any missteps that may have led to a false arrest or unreliable evidence.