
Virtual Spike Strips and AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

In many parking lots where you need to pay to get into the parking area, the exits often have a series of metal spikes that are pointed toward any cars that might be tempted to sneak into the lot. That small strip of spikes is likely enough to puncture the tires of any trespassing car whose driver is intent on avoiding paying any fees to use the parking lot. Cars that are properly exiting the parking lot are able to readily roll over the spikes, since the spikes are angled away from that direction and usually on a spring action, so they depress under the pressure of the car tires and the weight of the car.

I’ve never actually seen someone try to drive the wrong way over those metal spikes and it seems to be a pretty good deterrent. We all readily accept the idea that they are there for the purpose of preventing interlopers. They are handy too since those leaving the parking lot don’t need to do anything special to get out. Unlike a gate arm that might need to lift up, those metal teeth just sit there, grimacing at those that want to tempt fate and see if they can somehow ride across their lethal smile.

There’s another use for these metal teeth.

Here in Southern California, we are somewhat known as the capital of car chases for the United States. Some statistics claim that we average one car chase a day throughout the year. Our sunny weather probably helps encourage drivers to consider going on a car chase, along with our apparent obsession with cars and driving them.

The police usually catch the driver of the car that’s leading the chase, but this fact doesn’t seem to overly discourage people from launching into a car chase. When the person being chased has a long track record of lawbreaking and has committed some crime that prompted the pursuit, such as robbing a liquor store, it makes a certain sense that they might try to run, even if the odds of successfully getting away are relatively low. The chases that are especially peculiar involve situations wherein the person fleeing has no criminal track record, the crime they committed, such as rolling through a stop sign, is not commensurate with the escalation into a car chase, and the legal troubles they’ll get into for the chase are so severe that it defies logic that the person would take such rash action.

You might be aware that there is controversy about whether or not the police should even undertake such chases. A wild car chase through populated areas can be highly dangerous for everyone involved. The person being sought is often a desperate driver, willing to ram other cars, hit pedestrians, drive on sidewalks, drive the wrong way on the roads, and otherwise do anything to get away. Besides the police being endangered as they try to follow such illegal antics, there is the danger to bystanders and other innocents that had nothing to do with the matter other than being in the wrong place at the wrong time.

In some jurisdictions, the police will at times stay back from the frantic driver and try not to pressure the driver into especially dangerous maneuvers. In some cases, the police follow by helicopter rather than by police car. There have been instances in which the police let the person zoom along and get away, doing so because they knew the person had a limited criminal record and also knew their identity and where they live. Some would argue it is a safer approach to call off the chase and try to catch the person later on, rather than “provoke” them into a car chase mode and have something untoward occur. It’s all debatable.

A recent car chase was one for the record books since the driver opted to go onto train tracks. This is a rather unusual attempt to get away. Furthermore, the driver then kept on the rails and went into an underground train tunnel. The driver was going the wrong way into the tunnel, which meant that a train would be coming in the other direction and could potentially smash head-on into the car. While watching the car chase unfold on TV, there were many that thought the driver was pretty stupid to go into the tunnel since the police could quickly place guards at either end and the driver would have no means of escape.

When the car did not pop out of the other side of the tunnel, it began to dawn on people that perhaps the desperado was going to hide inside the tunnel. Turns out, those inside abandoned the car and tried to come up to street level via the tunnel’s escape hatches. The police caught them, mostly right at the escape hatches, and one of them at some distance from a hatch.

Some might wonder why the police need to undertake a car chase when presumably they can simply try to disable the car involved. Indeed, you’ve probably seen that the police often will try to lay down a strip of metal spikes in the hopes that the wayward driver will drive over the spikes. These traffic spikes are also sometimes called tire shredders. Other names for the spikes include stop sticks, jack rocks, and stingers.

These strips function in the same way that the parking lot strips do. The notion is that a car will try to roll over the spikes, the spikes will puncture the tires, and the tires will either deflate or be torn apart. Without viable tires, in theory the car chase should come to an end. The driver would not be able to control the car or drive at high speeds without viable tires. Trying to continue driving would be difficult, and the odds are that relatively soon the driver would be driving on the rims.

Observations from the Capital of Car Chases

Again, as the apparent capital of car chases, I’ll say that we have had quite a number of car chases that involve the driver still trying to drive the car, in spite of having had the tires punctured. One car chase involved the driver continuing for several miles, and sparks were flying as the rims were essentially acting as tires. Eventually, the sparks ignited some other parts of the car, and the entire car became engulfed in flames. It was actually impressive how long the car withstood this treatment, and was maybe a testament to the makers of the car that it was able to continue that long.

In any case, the metal barbs or spikes are supposed to either stop the car nearly dead in its tracks or at least cause enough damage that the car presumably will not be drivable much further. In some cases, the teeth are made to detach. This is thought to be a further means of damaging the tire, since the detached teeth are intended to embed into the tire and continue causing damage. If a tire merely rolls across the teeth, there might or might not be deep damage done to the tire, but if the teeth embed into the tire and stay with it, the thinking is that the metal spike will be able to do ongoing damage as long as the driver continues to drive the car.

You might find it of interest that the teeth on a stop strip are often in the shape of a caltrop. For those of you that are history buffs, you might know that during the time of the Romans, caltrops were used to slow down horses during wars, and possibly to impede war elephants and foot soldiers. Essentially, a caltrop is a four-pronged spike that, when thrown onto the ground, lands with three points facing the ground and one point facing up. In the case of horses, the notion was that a horse might step onto the upward-facing spike, injuring the horse or at least frightening it into panic.

Perhaps it is remarkable that in this day and age, we still use the same kind of mechanism, doing so when trying to prevent cars from going improperly into a parking lot or when trying to stop a getaway car. Those Romans, they certainly were clever and had some long-lasting inventions (well, one cannot give them all the credit, there were others that had used caltrops even before them!).

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that is a rather “thorny” topic, so to speak, involves the matter of externally stopping an AI self-driving car.

Allow me to elaborate.

First, let’s clarify that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

Let’s consider two separate overall use cases: one involving a Level 5 self-driving car, which as I’ve mentioned would be a self-driving car driven entirely and only by the AI, and the other involving a less than Level 5 self-driving car, for which there is a co-sharing between the human driver and the AI.

In a Level 5 self-driving car, there will be some form of conversational dialogue between the AI and the human occupants that involves the humans making requests of the AI that’s driving the car. You get into a Level 5 self-driving car and tell it you want to be driven to the baseball stadium. The AI proceeds accordingly. Perhaps during the driving journey, you tell the AI to stop at the Starbucks that’s down the street, so you can use their drive-thru to get some coffee before you get to the ballgame. And so on.

What kind of latitude do you have as the human directing the AI self-driving car?

Can you tell it to drive illegally? Maybe you are late getting to work one day, and so you tell the AI to exceed the posted speed limit on the highway that you take to get to work. The posted speed is 45 miles per hour, but you tell the AI to go 55 miles per hour, in hopes of getting to work on time. Should the AI obey such a command?

I’m betting that you are tempted to say that no, the AI should not obey a command to undertake an illegal driving act. Your wanting to get to work on time is not much of a reason to have the AI perform an illegal driving action, and it could be dangerous too, since the rest of the traffic might be going 45 mph while the AI self-driving car is swerving around other cars to go the desired 55 mph. That could be unsafe and produce a car crash of some kind.

But suppose you are in the Level 5 self-driving car and bleeding profusely because you somehow got cut, or maybe someone in the self-driving car is pregnant and about to deliver a baby. Under those circumstances, would it make sense to allow the AI to go ahead at 55 mph rather than 45 mph, even though it would be an illegal driving act? I’m assuming you are more sympathetic in such instances to allowing the AI to “break the law” as part of its driving efforts.

The point being that we as a society have yet to wrestle with the range of legal and “illegal” acts of what an AI of a self-driving car is going to be “allowed” to do in terms of the driving task. It’s going to be a crucial matter once we start to see the advent of AI self-driving cars on our roadways.

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For the illegal acts of an AI self-driving car, see my article: https://aitrends.com/selfdrivingcars/illegal-driving-self-driving-cars/

In terms of conversing with AI self-driving cars, see my article: https://aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

So far, I think you can see that there’s going to be a fine line between the strict legal kind of driving that we assume an AI self-driving car will do and the potential need for the AI to be permitted to go beyond the normally stated constraints.

Let’s consider the car chase predicament.

If you were to tell your AI self-driving car to proceed to lead a car chase, telling it to go at very high speeds and try to be evasive as it drives, should it do so?

I’m sure you are saying that even the suggestion that the AI would abide by such a command is absurd on the face of things. Have an AI self-driving car that starts a car chase? Nuts! This should never happen.

Reasons You May Tell Your AI Self-Driving Car to Engage in Evasive Action

Suppose that you have just survived an attempted carjacking and are desperate to get away from attackers that are trying to get you. Maybe you are a celebrity or a very wealthy person being sought by some bad people. Maybe it’s a gang that just wanted your wallet and your car. Indeed, in terms of cars, there are some that believe we’ll begin to see “robojacking” of AI self-driving cars, an obviously undesirable trend that might arise as self-driving cars become more prevalent.

For my article about robojacking, see: https://aitrends.com/features/robojacking-self-driving-cars-prevention-better-ai/

For defensive driving tactics and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

Would it be OK then, in this instance, for the AI self-driving car to proceed as though it were being chased and therefore be at the forefront of a car chase? I’m guessing you are now more sympathetic to the notion. Sure, if an innocent person’s life might depend on it, maybe having the AI proceed to try to get away makes sense.

If the AI was trying to do a getaway because the person had committed an illegal act, such as robbing a bank, I’m sure we would all agree this kind of car chase effort should not be condoned. Or, maybe the human is drunk and blurts out to the AI that it should start driving evasively because Martians are on the way to beam them up to Mars. This is another circumstance in which I believe we would all agree the AI should not take such rash driving tactics.

For the moment, can we agree that there might be valid cases of the AI undertaking evasive driving action for which we could construe the driving to be the rudiments of a car chase? The AI would be driving the car at a fast pace, attempting to elude followers, and likely would be committing “illegal” driving efforts as it does so.

If you agree with that aforementioned notion, we then need to figure out how far we are willing to have the AI go on this matter.

Can the AI drive the self-driving car on the wrong side of the street? Can it drive on sidewalks? Can it swerve around other cars? These are all dangerous acts that could potentially harm others.

Here’s your conundrum. If you say that the AI cannot do any kind of driving that might harm others, I challenge you to then explain what exactly the AI can do during this evasive driving. It’s not much of an evasive effort if the self-driving car is driving along just as a normal car does, and I’d dare say that’s not evasive driving at all.

You might try to suggest that the AI needs to be astute enough to ferret out a legitimate request for driving evasively from one that is not legitimate. If the car was being driven by a human chauffeur, we would likely expect that human to be able to discern between a person that gets into the car having just robbed a bank versus someone that gets into the car and is about to deliver a baby.

This logic though doesn’t especially do us much good, since even if the human chauffeur realized that the person had just robbed a bank, and suppose the chauffeur then refused to drive evasively, the bank robber might point a gun at the chauffeur and say drive evasively anyway. The chauffeur then has to make a choice between presumably staying alive and driving evasively, versus refusing to do so and possibly getting shot and killed. Maybe I’ve seen too many movies, but I think most humans would opt to drive evasively and hope that somehow they will be able to escape the situation.

The point here is that the AI is unlikely to be able to ferret out whether driving evasively is warranted or not. Maybe far in the future, when someday there is AI more akin to the “singularity” that some speak of; but for now, the common-sense reasoning of the AI is quite limited, and there’s not much likelihood it will become so advanced in time for the advent of AI self-driving cars (though, some AI pundits say that only via breakthroughs in common-sense reasoning for AI will we even achieve true Level 5 self-driving cars).

For my article about the singularity, see: https://aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

For my article about the common sense reasoning topic, see: https://aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/

For my article about whether an AI self-driving car might become a modern day Frankenstein, see: https://aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/

At this juncture of the discussion, I hope you are at least open to the notion that we might end up with AI self-driving cars that are driving evasively, doing so with a presumed purpose, whether or not society considers that purpose legitimate.

Though the focus herein involves an AI self-driving car that is seemingly driving at a fast clip and employing dangerous driving tactics, I’ll point out that my next comments could even apply to a situation of an AI self-driving car that appears to be driving in a perfectly normal, everyday way. Please keep that in mind.

Suppose that an AI self-driving car is driving in a manner that we don’t want it to be doing, and we want to essentially stop the self-driving car.

How could that be accomplished?

You could use those metal spike strips. In other words, regardless of whether we are trying to stop a human driven car or an AI self-driving car of a Level 5, the use of the tire shredders could still be invoked.

Toss those strips in front of an oncoming Level 5 self-driving car. What happens? Assuming that the AI is not able to avoid rolling over the metal teeth, the tires would presumably get punctured. At this point, the AI continues to try to drive the self-driving car, which is rough going at this juncture. The self-driving car is going to lurch and have the same difficulties that any normal car would have when the tires have been punctured.

Few of the auto makers and tech firms are working on how to have the AI deal with circumstances such as having the tires shredded and still be able to safely drive the car. They consider this kind of problem to be an “edge” problem. An edge problem is one that is at the periphery of the core problem that you are trying to solve. At this time, the AI developers are primarily concerned about getting an AI self-driving car to drive along properly on a properly working self-driving car on a properly devised roadway in properly good weather. That’s considered core.

Do we need the AI to ultimately be able to deal with other problems associated with the physical aspects of the self-driving car? Absolutely. A self-driving car is like any other car in that it will have physical breakdowns and problems. A human driver would need to accommodate such difficulties, and so should the AI. We cannot assume that all self-driving cars will always work just dandy, which for the roadway trials taking place now is pretty much the assumption. Today’s AI self-driving cars are being pampered, but once they get into the real-world and no longer have a dedicated car pit crew, it will be a different story.

For my article about self-driving car recalls, see: https://aitrends.com/selfdrivingcars/auto-recalls/

For the freezing robot problem and self-driving cars, see my article: https://aitrends.com/selfdrivingcars/freezing-robot-problem-and-ai-self-driving-cars/

On the towing aspects of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/towing-and-ai-self-driving-cars/

For the responsibility aspects of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/responsibility-and-ai-self-driving-cars/

Besides the usual physical ways to stop a car, which would seem to apply to an AI self-driving car too, would we possibly have other means to try and stop an AI self-driving car?

Virtual Spike Strip Alternatives for AI Self-Driving Cars

Perhaps we might use a virtual spike strip.

By this, I mean that we could somehow convince the AI that it should bring the AI self-driving car to a stop.

We could possibly do so then without necessarily tossing a physical metal strip of spikes in front of the self-driving car. In lieu of that rather more blunt approach, we could make the AI bring the self-driving car to a halt simply because we told it to do so. In a sense, it is like a virtual kind of spike strip.

Presumably, this would be a lot safer too than a physical spike strip. The self-driving car is still intact, and thus rather than shredding the tires and hoping that the self-driving car doesn’t do a barrel roll and injure anyone, including bystanders, the AI could presumably bring the self-driving car to a safe halt instead. Imagine how much safer this is for the police as well, wherein with physical strips they need to take a heightened risk of getting in front of the car to deploy the strips (not as easy as it seems in the movies).

As an aside, we can consider even having the AI self-driving car take some other action. Some have suggested that if an AI self-driving car is being used as a getaway vehicle by criminals that just robbed a bank, we might instruct the AI to bring them to the nearest police station. Wouldn’t it be nice if the AI could wrap up those dastards in a nice tight bow and deliver them directly to the police? Well, I have my doubts about the practical nature of this suggestion, but that’s something I’ll tackle for you another day.

Let’s then pursue the notion that we, whoever the “we” is, might commandeer the AI of the AI self-driving car and have the AI do something other than what perhaps the human occupants have told the AI to do.

How could we control the AI in this manner, doing so externally of the AI self-driving car?

One somewhat obvious way might be to use the OTA (Over The Air) capability of the AI self-driving car. The OTA is normally used to get data from the self-driving car, such as sensory data, and also to provide updates to the self-driving car. When a new version of the AI software is needed for your AI self-driving car, the car can connect via OTA to the cloud set up by the auto maker or tech firm and receive those updates via electronic communication. No need to take your car into the auto shop for such updates.

For my article about OTA, see: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

For my article about API’s, see: https://aitrends.com/selfdrivingcars/apis-and-ai-self-driving-cars/

For my article about back-doors into AI self-driving cars, see: https://aitrends.com/selfdrivingcars/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/

Suppose the police see an AI self-driving car that is rocketing down the freeway, presumably being chased or potentially a car that the police will want to chase. The police might instead make contact with the auto maker or tech firm that set up the cloud OTA for this particular brand of AI self-driving car, and have the auto maker or tech firm send an electronic command to the self-driving car that instructs it to come to an immediate safe stop.

Lest you think this might take a long time to do, it could all be orchestrated beforehand. The police might already have a ready means to send such a request to an auto maker or tech firm. Perhaps the police enter the license plate number of the self-driving car into a special app, and then the rest happens as fast as any usual electronic communication. The auto maker or tech firm might not have any human intervention involved at all, simply verifying that the request is bona fide and then automatically sending out the command via the OTA. It could happen in seconds.
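To make the electronic route a bit more concrete, here is a minimal sketch of how the vehicle side of such a scheme might validate a remote stop request. Everything in it is an assumption for illustration: the message fields, the shared-secret HMAC scheme, and the `initiate_safe_stop` hook are invented stand-ins, not any auto maker’s actual OTA protocol.

```python
import hmac
import hashlib
import json
import time

# Hypothetical shared secret provisioned at manufacture; a production system
# would more plausibly use public-key signatures and a hardware security module.
VEHICLE_SECRET = b"provisioned-at-manufacture"

MAX_COMMAND_AGE_SECONDS = 30  # reject stale or replayed commands

def initiate_safe_stop() -> None:
    # Stub: in a real vehicle this would hand a halt request to the AI action planner.
    print("Authenticated safe-stop request accepted; planning a controlled halt.")

def verify_stop_command(message: bytes, signature_hex: str) -> bool:
    """Return True only if the OTA stop command is authentic and fresh."""
    expected = hmac.new(VEHICLE_SECRET, message, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_hex):
        return False  # not signed by the provisioning authority
    payload = json.loads(message)
    if payload.get("command") != "SAFE_STOP":
        return False
    # Freshness window guards against replaying a previously captured command.
    return abs(time.time() - payload.get("issued_at", 0)) <= MAX_COMMAND_AGE_SECONDS

def handle_ota_message(message: bytes, signature_hex: str) -> None:
    if verify_stop_command(message, signature_hex):
        initiate_safe_stop()
```

Notice that the authentication scheme and the freshness window are precisely the design choices that determine how hackable this virtual spike strip would be, which speaks directly to the concerns raised next.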

Problem solved! Or, should I say, problem solved?

You can imagine that this raises all sorts of societal entanglements. Should just any police officer be able to issue such a command, doing so at any time? That seems a bit Big Brother like. Suppose too that someone somehow got ahold of the police capability and opted to use it for their own devious purposes? Furthermore, there’s now a chance of a security breach, in that if you could hack this system then you might be able to stop an AI self-driving car. Maybe you could stop thousands of them all at once.

We’re also somewhat overlooking a technological aspect: suppose the OTA is not able to send a signal to the AI self-driving car, perhaps because the self-driving car is not in an area where it has connectivity. Or, maybe the OTA was intentionally disabled by the human using the self-driving car, either by sabotaging it or by putting in place some kind of dampening mechanism to prevent the OTA from functioning.

For my article about 5G wireless and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/5g-and-ai-self-driving-cars/

For my article about edge computing, see: https://aitrends.com/selfdrivingcars/edge-computing-ai-self-driving-cars/

Some believe that maybe our government or others want us to become dependent upon AI self-driving cars so they can then control us; see my article about these conspiracy theories: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

Another approach might be to have the police physically display something that the AI self-driving car would “see” and therefore invoke the halt command in that manner. This is handy since it does not rely on any electronic communication over the airwaves. Forget about the OTA, and instead just physically present something to the attention of the AI self-driving car. This presumes that you are physically near to the AI self-driving car, which of course if you were using actual metal strips you would need to be anyway.

Keeping in mind that the AI self-driving car has cameras for visual processing of the surroundings, you might have agreed beforehand with the auto maker or tech firm that if a certain kind of image is seen by the sensors, the AI self-driving car will come to an immediate safe halt. Thus, the police arrange to get in front of where the AI self-driving car is driving toward, and they hold up a sign that has this special image on it (actually, since there are cameras pointing to the rear of the car too, the police could do this from behind the self-driving car and don’t even need to try to get in front of it; that’s a “nice” difference in comparison to actual physical metal strips, wherein you need to get in front of a speeding car!).
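As a rough sketch of the camera-side half of this idea, the snippet below uses OpenCV template matching, plus a multi-frame confirmation rule so that a single spurious match cannot halt the car. The marker, thresholds, and frame counts are all invented for illustration; a real deployment would need something far more robust to viewing angle, lighting, and scale.

```python
import cv2
import numpy as np

MATCH_THRESHOLD = 0.90   # assumption: minimum normalized correlation score
CONFIRM_FRAMES = 5       # assumption: consecutive detections required

class HaltMarkerDetector:
    """Watches camera frames for a pre-agreed halt marker."""

    def __init__(self, marker_template: np.ndarray):
        self.template = marker_template  # grayscale image of the special sign
        self.consecutive_hits = 0

    def frame_contains_marker(self, frame: np.ndarray) -> bool:
        scores = cv2.matchTemplate(frame, self.template, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, _ = cv2.minMaxLoc(scores)
        return best_score >= MATCH_THRESHOLD

    def update(self, frame: np.ndarray) -> bool:
        """Returns True only after the marker is seen in several frames in a row."""
        if self.frame_contains_marker(frame):
            self.consecutive_hits += 1
        else:
            self.consecutive_hits = 0
        return self.consecutive_hits >= CONFIRM_FRAMES
```

The consecutive-frames rule is a deliberate trade-off: it adds a fraction of a second of latency in exchange for resistance to one-frame glitches or a briefly glimpsed look-alike image.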

Here are the usual steps involved in the AI driving task:

• Sensor data collection and interpretation
• Sensor fusion
• Virtual world model updating
• AI action planning
• Car controls command issuance

Let’s assume then that the police hold up the special image, the camera sensors detect the image, and during the sensor data collection and interpretation the AI system realizes it is indeed the special image. The virtual world model, which is used to keep track of the surroundings of where the AI self-driving car is driving, is then used by the AI to try to identify a safe place to come to an immediate halt. Maybe it determines that a quarter mile up ahead would be the safest spot, allowing for a gradual reduction in speed rather than slamming on the brakes. The AI action planning routine then devises the driving tasks to do so and sends those commands to the car controls.
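Mapping that onto the five-stage pipeline above, a skeletal sketch might look like the following. The class and function names are illustrative stand-ins for whatever a given vendor’s architecture actually uses.

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    speed_mps: float               # current speed in meters per second
    safe_stop_distance_m: float    # distance to the identified safe stopping spot

def halt_trigger_confirmed(camera_frames, detector) -> bool:
    """Stages 1-2: interpret camera data; True once the halt marker is confirmed."""
    return any(detector.update(frame) for frame in camera_frames)

def plan_halt(world: WorldModel) -> dict:
    """Stage 4: choose a gradual deceleration to stop at the safe spot.
    From v^2 = 2 * a * d, the needed deceleration is a = v^2 / (2 * d)."""
    decel = world.speed_mps ** 2 / (2 * world.safe_stop_distance_m)
    return {"action": "pull_over", "decel_mps2": decel}

def issue_controls(plan: dict) -> None:
    """Stage 5: hand the plan to the car controls (stubbed as a print here)."""
    print(f"Commanding {plan['action']} at {plan['decel_mps2']:.2f} m/s^2")

# Example: ~65 mph (about 29 m/s) with a safe spot a quarter mile (~400 m) ahead.
issue_controls(plan_halt(WorldModel(speed_mps=29.0, safe_stop_distance_m=400.0)))
```

At those numbers, the gentle quarter-mile stop needs only about 1 m/s² of deceleration, which is exactly why the AI might prefer a distant safe spot over slamming on the brakes.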

Voila, the AI self-driving car now comes to a safe halt.

I mentioned that the sign might be a special visual image. Since most AI self-driving cars also have radar sensors, sonic sensors, and LIDAR, we don’t necessarily need to use a physical visual sign at all. It could be something else that might trigger the same kind of response. Perhaps an electronic signal might be used. Or a shape of some kind that might be detected by the radar. Or, there might be triggers established beforehand for each of the various sensors, thus increasing the odds that the AI self-driving car will one way or another be able to detect the command.
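If triggers exist for several sensor modalities, a natural follow-on question is how to combine them. One plausible rule, sketched below with invented modality names and an assumed quorum size, is to require at least two independent modalities to agree before the halt request is accepted.

```python
REQUIRED_AGREEMENT = 2  # assumption: at least two modalities must confirm

def halt_request_confirmed(detections: dict) -> bool:
    """detections maps each modality to whether its trigger fired this cycle."""
    return sum(1 for fired in detections.values() if fired) >= REQUIRED_AGREEMENT

# Example: the camera and LIDAR triggers fired, but radar and sonic did not.
print(halt_request_confirmed(
    {"camera": True, "radar": False, "lidar": True, "sonic": False}))  # True
```

A quorum raises the bar for spoofing any single sensor, at the cost of failing when only one modality can perceive the signal; where to set that dial feeds straight into the authorization questions below.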

One advantage of this approach would be that there’s no need to rely upon an electronic communication that is remotely being beamed to the AI self-driving car. Instead, if you are within physical proximity, you can trigger it to abide by the signal. The odds are that the sensors of the AI self-driving car are going to be working, since otherwise the AI self-driving car is generally not going to be able to be driven anyway. It relies upon those sensors to be able to detect the environment and drive the car accordingly.

We’re once again though finding ourselves facing the issue of who can rightfully make use of these special images or signals? Can any police or authority do so? Suppose it gets leaked out how these triggers work, and someone opts to use them for devious purposes. And so on.

You might also find it of interest that there have already been some efforts undertaken to demonstrate that you can potentially fool a Machine Learning algorithm by doing something like this. Suppose you’ve used Machine Learning to train toward detecting stop signs. It has been shown possible, via sneaky means, to train that Machine Learning model to ignore a stop sign if there’s a certain image on the sign, something as simple as a yellow sticky note.
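One modest countermeasure is to probe a trained model’s sensitivity to exactly this kind of small occlusion before fielding it. Here is a sketch of such a check; the `classify` function is a placeholder for whatever trained model is under test, and the patch parameters are arbitrary assumptions.

```python
import numpy as np

def occlusion_sensitivity(classify, image: np.ndarray,
                          patch_size: int = 32, stride: int = 16) -> list:
    """Slide a bright square patch (a stand-in for a sticky note) over the
    image and record every position where the predicted label flips."""
    baseline = classify(image)
    flips = []
    height, width = image.shape[:2]
    for y in range(0, height - patch_size + 1, stride):
        for x in range(0, width - patch_size + 1, stride):
            patched = image.copy()
            patched[y:y + patch_size, x:x + patch_size] = 255
            if classify(patched) != baseline:
                flips.append((y, x))
    return flips
```

Many label flips from such a tiny occlusion would suggest the model is leaning on fragile features, which is the same fragility the sticky-note trick exploits.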

For more about Machine Learning and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/

And here’s another of my articles on Machine Learning: https://aitrends.com/ai-insider/machine-learning-benchmarks-and-ai-self-driving-cars/

On this overall topic, some AI developers worry that even having a special program or routine embedded in the AI system, one that can be triggered to bring the AI self-driving car to a halt, is itself a dangerous “hole.” In essence, once such a routine or program exists within the AI that’s on-board the AI self-driving car, it means that one way or another it can somehow potentially be invoked. This could be done by someone considered authorized to do so, or by someone that is not authorized (or, one supposes, it could even be accidentally invoked by the AI itself, unexpectedly and erratically coming to a halt for no apparent sensible reason).

Some would argue that the AI should not have a specific routine or program for this purpose per se, and instead would merely take as input the “suggestion” of coming to an immediate safe halt. In essence, the communication to the AI self-driving car, whether electronic or via visual image or whatever, is not so much a command as it is a request. Someone or something is making a request of the AI, and the AI would need to decide whether or not to comply with the request.

This might even involve the AI asking the human occupants whether or not they want the AI to bring the self-driving car to a safe halt. It might be that the AI tells them that a request has been issued to do so, and it wants to hear their side of the story. They might tell the AI, yes, please go ahead and bring the AI self-driving car to a safe halt. Probably, and much more likely, they would tell the AI to ignore the request. In which case, what does the AI do then?
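In that design, the on-board logic becomes an arbitration problem rather than a kill switch. A toy policy might look like the sketch below; the inputs, such as whether the request’s credentials check out and what the occupants answered, are invented for illustration, and a real policy would surely be far more involved.

```python
from enum import Enum, auto

class HaltDecision(Enum):
    COMPLY = auto()
    REFUSE = auto()
    ESCALATE = auto()  # e.g., slow down and flag the conflict for human review

def arbitrate_halt_request(request_authenticated: bool,
                           occupants_consent: bool,
                           onboard_emergency_declared: bool) -> HaltDecision:
    """Toy arbitration: comply with an authenticated request unless the
    occupants object and claim an emergency, in which case escalate."""
    if not request_authenticated:
        return HaltDecision.REFUSE
    if occupants_consent:
        return HaltDecision.COMPLY
    if onboard_emergency_declared:
        return HaltDecision.ESCALATE  # conflicting claims; neither stop nor ignore
    return HaltDecision.COMPLY  # a bare objection doesn't outweigh the request

# Example: authenticated request, occupants object, claiming a medical emergency.
print(arbitrate_halt_request(True, False, True))  # HaltDecision.ESCALATE
```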

Admittedly, using physical metal strips to stop a car is a bit more straightforward. Most of the time, those strips are only in the possession of the police. Most of the time, the police only use the strips when the circumstances seem generally to merit it. This is usually done in public view; everyone can see that it is taking place. The virtual spike strips, which can be deployed “in secret,” seem to raise all sorts of other thorny questions, as I suggested earlier in this discussion.

You might note that I have not covered so far in this discussion the question of what to do when the self-driving car is less than a Level 5.

I’ll leave it to you to ponder this, but the heart of the issue would be whether the AI in a co-sharing driving task should be able to exert control over the self-driving car such that even if the human driver does not want to come to a halt, the AI would force it to happen anyway. As food for thought, if we are going to say that the human driver of a less than Level 5 AI self-driving car is considered the responsible driver, and yet we are willing to have the AI completely take over control of the self-driving car, this seems to open a can of worms.

We are also making an assumption throughout all of this discussion that the AI would readily be able to bring the AI self-driving car to an “immediate” and “safe” halt.

What timeframe constitutes the word immediate? Within seconds, minutes, or how long is permitted to execute the halt? The word “safe” is also a somewhat tenuous notion. In a utopian world, sure, the AI might be able to ensure a completely safe stop. In the real world, suppose the AI pulls to the side of the freeway, and meanwhile a car that was behind the self-driving car gets confused and rams into the back of the self-driving car, killing the occupants. Was that a safe means to stop the car?
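To put rough numbers on “immediate”: under constant deceleration a, a car at speed v stops in time v/a over distance v²/(2a). A quick back-of-the-envelope calculation, assuming a comfortable deceleration of about 3 m/s²:

```python
MPH_TO_MPS = 0.44704  # miles per hour to meters per second

def stopping_profile(speed_mph: float, decel_mps2: float) -> tuple:
    """Time (seconds) and distance (meters) to stop at constant deceleration."""
    v = speed_mph * MPH_TO_MPS
    return v / decel_mps2, v ** 2 / (2 * decel_mps2)

# Assumption: ~3 m/s^2 is a comfortable stop; hard braking is roughly 8 m/s^2.
t, d = stopping_profile(65, 3.0)
print(f"Gentle stop from 65 mph: about {t:.0f} s over {d:.0f} m")  # ~10 s, ~141 m
```

So even a prompt, gentle halt from freeway speed plays out over roughly ten seconds and more than a tenth of a kilometer, which is part of why “immediate” and “safe” pull in opposite directions.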

Right now, most of this kind of discussion is not yet taking place overtly. As mentioned, it is generally considered an edge problem for now.

Once we have prevalent AI self-driving cars, the topic will become more mainstream. Do we need some kind of virtual spike strips for AI self-driving cars? If so, what would they consist of? How would they be used? These and other such questions involve not just technological but also societal aspects that will need some very thought-provoking consideration. In the meantime, please do watch out for those metal strips!

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 

