Robots could be the answer to stopping car break-ins

LoDoMus Prime is one of more than 7,000 robots deployed by the company Knightscope nationwide to supplement security efforts in what the company deems “high-risk areas.” The robots record 360-degree 4K video, carry thermal imaging, can detect gunshots, and can contact emergency responders, all powered by AI and other onboard technology.

In a garage in Denver, the company that hired LoDoMus Prime claims car break-ins have fallen by nearly 70%.

“Our long-term mission is to see if we can make the United States of America the safest country in the world,” said William Santana Li, CEO of Knightscope. 

While the company has been around for nearly a decade, it has only started ramping up its crime-fighting robot task force in recent years. Li says the goal is to have a million robots on college campuses, corporate offices, parking garages, and transportation centers in the coming years. In early January, the U.S. government granted Knightscope the authority to operate at federal agencies around the country.

“We want, over the long-term, to make all these machines be able to see, hear, feel, smell, and speak. And if you have that unprecedented capability at your fingertips, then an officer or guard can have their really smart eyes, ears, and voice on the ground in multiple locations. And you need to have the machines — these are tools. Have them do the monotonous, computationally heavy stuff that no human can do, and you want the humans to do the enforcement,” said Li. 

Pretty cool, right? But there’s another side to the rise of the robots — concerns that have been building in tech circles for years.

“Right now, we are at the mercy of companies being ethical and operating and sending products that are ethical, which is a major risk in potential side effects,” said Alessandro Roncone, computer science professor at CU Boulder.

Because the technology is evolving so quickly, progress is outpacing legislation; for now, it falls to the companies themselves to ensure these machines operate ethically. For example, a review of artificial intelligence from Chapman University showed that AI can act with bias depending on the data it learns from. That makes it imperative that these companies train on diverse datasets so robots such as Knightscope’s don’t misidentify threats, particularly on the grounds of race or creed.

“These systems are going to be deployed in society and they’re going to be collecting data at all times. We, as the general public, are not going to have control over which type of data gets recorded and which kind of data gets stored because all this information is going to belong to the company. So we don’t have any mechanism, at the moment, to have transparent access to this data and to transparently understand how this data gets used,” said Roncone. 

There have been pushes over the last couple of months to make AI risks a government priority, but as with many advancements, safeguards lag innovation. 

A 2021 study on AI by the University of California Davis concluded that “bias in artificial intelligence can lead to biased outcomes, particularly for minorities and women.”

The study went on to find: “Facial recognition technologies, for example, have come under increasing scrutiny because they’ve been shown to better detect White faces than they do the faces of people with darker skin. They also do a better job of detecting men’s faces than those of women. Mistakes in these systems have been implicated in a number of false arrests due to mistaken identity.”

That’s not to say this is happening with companies like Knightscope, but it is a possibility that must be weighed with the same degree of importance as profit and innovation, according to experts like Roncone.

“We need the entire country to change the way it’s been operating, and this is going to take some time. You know, the country is over 200 years old. We’re on our 46th president — no one’s ever fixed this problem before,” said Roncone. 

It’s weird, it’s cool, and to some it might be scary, but it’s here. And if this technology is going to have the benevolent impact its creators claim, getting there will depend on all of us.