I was moved by the recent news about a woman being shot in her home by a police officer who had come to check up on her. Once again, I found it frustrating that people immediately jumped to assign blame before all the facts were in, which tends to do more damage than good.
From my reading, three mistakes contributed to the incident. The first was by the neighbor, who should have called the homeowner before calling the police. The second was by the police, who should have called the woman before going to the front door. Finally, the woman made the last mistake by picking up a gun without being ready to use it -- although had she been ready, the result might have been the police officer's death, which also would have been tragic.
My personal view is that the problem had to do with poor police training, poor police management, poor weapons training for the homeowner, and poor judgment by both the officers responding and the neighbor who called in the open door.
Jumping to a solution before even attempting to define a problem is common, but using this event as a backdrop, I've come up with three advances we need in order to avoid some of the incredibly stupid -- and in this case deadly -- mistakes that keep recurring. The first is a reliable people-stopper (a stun gun), the second is a general purpose artificial intelligence, and the third is a reliable lie detector.
I'll close with my product of the week: an inexpensive security camera that could have saved this woman's life.
Reliable Stun Gun

We put our police in an impossible position. They must make life-or-death decisions at a moment's notice, during a shift that, for the most part, is spent waiting for something to happen. The result is that a lot of folks seem to be getting shot for being in the wrong place at the wrong time, which really shouldn't be a death sentence. I was a sheriff myself years ago, and you couldn't pay me enough to put on a badge today.
We need a more reliable stun gun. Tasers aren't good enough for officers who fear it's likely they'll be shot. They need a weapon that will drop people and render them harmless almost instantly. If it could be used en masse, it also could be a solution for mass shootings, particularly in schools.
Teachers could be issued that kind of weapon, because if they got it wrong, the likely result would be someone falling after passing out, which probably wouldn't be deadly.
I should point out that the woman who died likely would not have been shot if she had not been holding a gun, which kind of goes against the NRA's message that guns make you safe. In today's world, they are far more likely to get you shot, because the police have no way of telling the good guy with a gun from the bad guy with a gun. If the police had reliable stun guns, many outcomes would be far less dire, and if private citizens had stun guns, their liability would be far lower.
There are some interesting advancements, like this Lasso gun tested by the New York Police Department, but we have a long way to go on this front. Given how critical it is to find a solution like this, the research spending seems surprisingly light.
Some police departments are replacing guns with stun guns to eliminate accidental shootings, but the existing technology, I'm afraid, is likely not up to the task.
General Purpose AI
The general purpose AI is the holy grail of artificial intelligence -- we don't have it yet. The concept involves rapid deployment of an AI that could provide real-time advice on the best practice in almost any situation.
In the context of police use, it would know what the officers were walking into and could advise them on the best approach to protect both the officers and the homeowner. It might even call the homeowner itself before the police arrived, or deploy a drone to see if there was a danger to the officers or homeowner before that danger resulted in the death of either.
A general purpose AI, particularly as IBM imagines it, is there to help and assist, not replace, so it would guide you as the homeowner to shelter in place (away from the windows and with your child safe beside you), and advise the officers on the best approach. It even could have advised the neighbor that it would be better to call his neighbor than call the police.
We all make mistakes, but an AI would learn from those mistakes -- both ours and others' -- and use that information to aid decision making. That not only should increase our life expectancy, but also significantly reduce the number of avoidable mistakes we make.
Tied into the sensors on our phones and wearable tech, it could get us medical help proactively, keep us out of danger in the first place, and make us more successful in both our public and private lives.
Reliable Lie Detector
There are two types of lies that concern me. One involves the speaker knowingly lying, and one involves the speaker just being wrong. We are close to solving the latter case, because you could get there with a digital assistant tied back to a decent decision engine.
Of course, the obvious test case would be politicians, and technology could at least provide a sense of whether they were telling the truth. We still wouldn't know whether they believed what they were saying or were intentionally lying. However, with the amount of data captured about each of us, an AI should be able to determine with high accuracy whether we believe what we are saying, just by observing us.
We don't lie only in person. We lie in email and on social media, and we undoubtedly have tells when that happens. For years, databases have been collecting massive amounts of data on us, and an AI should be able to analyze it to build a profile of what we do when we aren't telling the truth.
The result would be that those in power no longer could make up crap when they wanted to manipulate us. Neither could our kids, siblings, parents, bosses, coworkers, or significant others. That doesn't mean lying would become impossible -- with enough work, any system can be overcome -- but it would get more difficult, particularly at scale.
You'd still need a fact system to back this up, because the person lying to you could have been misled themselves and thus be telling the truth as they know it, although it would still be false. Granted, this likely would end more than a few political careers, but that is probably a good thing. We are making advancements here as well -- but as with the other categories, we are still years off from a truly reliable lie detector.
Wrapping Up
I believe that technology can make us safer, smarter, and far less likely to be taken advantage of by a con artist (and we have lots of con artists in and out of government). A reliable stun weapon would allow do-overs, should a shooting decision go wrong. At scale, it could make mass shootings a thing of the past (you could stun the building and wouldn't need to worry much about collateral damage beyond someone falling down a staircase).
A general purpose AI could provide us all with the advice and help we need when we need it, and a reliable lie detector could help us pick honest politicians. In an accident or other situation resulting in harm, it could more reliably determine who or what was at fault.
I think those three improvements not only could reduce the number of deadly mistakes massively, but also make us all safer, smarter, and less likely to be conned.