
Rear-End Collisions and AI Self-Driving Cars: Plus Apple-Lexus Incident


By Lance Eliot, the AI Trends Insider

Rear-end collisions. They can be a doozy. Let me share with you an example that happened to me some years ago.

I was stopped at a red light and idly looking around, awaiting a green light. My driver’s side window was rolled down and it was a nice summer day with the sounds of birds chirping and I was on my way down to the beach for some R&R.

All of a sudden, in a seemingly split second, I heard a revving engine and I began to uncontrollably lurch forward and flop back-and-forth in the driver’s seat – my car had been hit from behind by an accelerating car that had been approaching the red light (turns out, the senior citizen driver mistakenly hit the accelerator instead of the brake). He barreled into the rear of my car, shoving my car into the car ahead of me. The hit was so severe that it ruptured the gas tank of my car and petrol began to spill onto the ground. I was momentarily unconscious or somehow blacked out and nearby pedestrians rushed to help pull me out of my car. I lived to tell the tale.

As I mentioned, rear-end collisions can be a doozy. In this case, I was lucky that I was unhurt. My car had quite a bit of damage and it got repaired via my car insurance. The car ahead of me had damage. The car that hit me had damage. Miraculously, no one was actually injured in this rather scary cascading event. We were all lucky. It could have been a lot worse.

According to the National Highway Traffic Safety Administration (NHTSA), in the United States alone there are something like 6 million car accidents per year, and about 40% of those involve rear-end collisions. The math there is that there are approximately 2.4 million rear-end collisions each year. In that case, there's a rear-end collision in the United States on average about every 13 seconds (approximately 31,536,000 seconds per year, divided by the approximately 2,400,000 rear-enders per year).
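As a quick sanity check on that arithmetic, here's the computation in a few lines of Python, using the same approximate figures cited above:

```python
# Rough frequency of U.S. rear-end collisions, per the figures above.
accidents_per_year = 6_000_000            # approximate annual U.S. car accidents
rear_end_share = 0.40                     # roughly 40% are rear-end collisions
seconds_per_year = 365 * 24 * 60 * 60     # 31,536,000 seconds

rear_enders_per_year = accidents_per_year * rear_end_share   # 2,400,000
interval_s = seconds_per_year / rear_enders_per_year         # ~13.1
print(f"About one rear-end collision every {interval_s:.0f} seconds")
```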

Statistics suggest that most rear-end collisions occur at speeds less than 10 miles per hour. Typical injuries to the occupants of the colliding cars include whiplash, damage to the knees, spinal twisting, and sometimes brain concussions. Fortunately, at lower speeds, rear-enders tend not to involve death; instead it is mainly mild injuries and sometimes severe ones. That being said, anyone who's harmed or crippled due to a rear-end collision has every right to be upset and angry that the situation arose.

How do rear-end collisions arise?

Let’s consider that rear-end collisions involve a lead car and a follower car. The lead car is the one that’s going to suffer being hit at the rear-end of the vehicle. The follower car is going to be the one that rams into the back of the lead car. That’s the fundamental setup.

We can make this more complex by considering multiple rear-end collisions, like my situation that had a car behind me (a follower) that hit my car (the lead car), after which I slid into the rear of the car ahead of me (in which case I became a follower car and the car ahead of me became the lead car). Cascading rear-end collisions do happen, and it's almost like a game of dominoes: knocking down one domino causes it to hit the next, and the next, and so on. For the moment, allow me to concentrate on the fundamental use case of just two cars.

The two cars in a fundamental rear-end scenario can be in motion or at a standstill just prior to the initiation of the rear-end collision event.

The senior citizen driving the car behind me was in motion and smacked into the rear of my car, which was motionless at the point in time of the impact. My car then went into motion as a result of the impact and rammed into the motionless car ahead of me. This caused the car ahead of me to go into motion, but that car was at the front of the pack and fortunately, after getting pushed forward, there were no other nearby cars to get hit.

For the two cars involved in the foundational case, the motions can involve acceleration or deceleration. The senior citizen behind me had confused his accelerator pedal for the brake pedal, and he was trying to come to a full stop, so he was pushing down onto the pedal with great force. This meant he was pushing down forcibly on the accelerator pedal due to his confusion. That’s also why I heard an engine revving noise just moments before the crash. It’s also why the crash was strong enough to rupture my gas tank and push my car forward into the back of the car ahead of me.

In general, we then have the situation of a lead car and a follower car, either of which might be in motion (one, both, or neither), and either of which might be accelerating or decelerating, just prior to the rear-end collision event.

This structure will help to explain a variety of rear-end collision circumstances.

Suppose a lead car is in motion and going at a speed of, say, 1 mile per hour (meaning that it is just barely crawling forward, and presumably not at a full stop). Suppose that a follower car is going around 15 miles per hour and coming up upon the lead car. The follower car, for whatever reason, rams into the lead car, producing a rear-end collision. In this example, both cars are merging into traffic on an expressway.

This seems pretty much like an everyday kind of example of a rear-ender, and on the surface there's nothing particularly noteworthy about it. Turns out this is a real example. Furthermore, it is noteworthy and newsworthy because the lead car was one of Apple's AI self-driving cars, a specially outfitted 2016 Lexus RX450h SUV.

Apple AI Self-Driving Car in a Rear-End Collision

Yes, in case you hadn't already heard about it, an Apple AI self-driving car was recently involved in a rear-end collision, wherein it was hit from behind by a human-driven Nissan Leaf.

Both cars were merging from Kifer Road onto the Lawrence Expressway in Sunnyvale, California. This is a relatively popular roadway location that involves lots of car traffic. The incident occurred on Friday, August 24, 2018 at a reported 2:58 p.m. PDT. I'd guess that a late mid-afternoon incident on a Friday would not be out of the ordinary per se, and there are often people rushing around on Friday afternoons in that location, wanting to get home from work and taking off a bit early, or maybe rushing back to the office after other off-site meetings, or maybe heading to a baseball game, etc.

Because Apple has registered to operate its AI self-driving cars on California public roadways, it is also required to report accidents involving its cars, doing so via filing the Department of Motor Vehicles (DMV) form number OL316, entitled the “Report of Traffic Collision Involving an Autonomous Vehicle.” The law requires that the report be filed within 10 business days of any such incident. In this case, the report was signed on Wednesday, August 29, 2018, about 5 days after the accident itself.

It was reported that both of the cars had some relatively minor damage. There were no reported injuries to humans. The weather at the time of the incident was clear and sunny, during daylight, and the road surface was dry. Both of the cars were proceeding straight. Thus, it was a textbook-style rear-end collision.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars, and we are earnestly interested in any incidents involving self-driving cars. There are always handy lessons to be learned, both for us and for the industry as a whole, and one might say for the public at large too.

When dealing with a mix of human driven cars and AI driven cars, there are four distinct combinations of the rear-end collision scenario – see Figure 1.

Just after the Apple Lexus incident was broadcast on the news, someone asked me an interesting question that perhaps illuminates the public's understanding, and misunderstanding, of AI self-driving cars.

I was asked this: “How could an AI self-driving car be involved in getting hit from behind in a rear-end collision, shouldn’t it be smart enough to have avoided it?”

There's a lot to unpack in that question. It seemingly suggests that AI self-driving cars will be omnipresent and presumably never get into car accidents. In my writings and my speeches at industry conferences, I've said repeatedly that we need to be careful not to over-ascribe incredible superpowers to AI self-driving cars. They are still cars, and they are still subject to the laws of physics.

For my article that debunks the myth of zero fatalities due to the advent of AI self-driving cars, please see: https://aitrends.com/selfdrivingcars/self-driving-cars-zero-fatalities-zero-chance/

My article about the public perception of AI self-driving cars provides insights about what people think AI self-driving cars can do, see: https://aitrends.com/selfdrivingcars/roller-coaster-public-perception-ai-self-driving-cars/

For the nature of how omnipresence comes into play for AI systems of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/omnipresence-ai-self-driving-cars/

Now, admittedly, it would be useful to know whether or not the Lexus was able to detect that the follower car was about to hit it. One would hope that the rear-facing sensors were able to discern that the Leaf was coming up at a fast pace and that there was a high likelihood that the Leaf was about to strike the Lexus.

For AI self-driving cars, per my framework, here are the key driving task aspects involved (a minimal sketch of this cycle follows the list):

•  Sensor data collection and interpretation
•  Sensor fusion
•  Virtual world model updating
•  AI action planning
•  Car controls command issuance
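To make the cycle concrete, here's a minimal runnable sketch in Python; every class and function name is an illustrative placeholder of my own devising, not any vendor's actual API:

```python
# A minimal, hypothetical sketch of the five-stage driving-task cycle above.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    objects: dict = field(default_factory=dict)  # tracked objects near the car

    def update(self, fused):
        self.objects.update(fused)

def interpret(raw_readings):
    # Stage 1: turn raw sensor readings into labeled detections.
    return [{"kind": "vehicle", "reading": r} for r in raw_readings]

def fuse(detections):
    # Stage 2: reconcile overlapping detections into one coherent picture.
    return {i: d for i, d in enumerate(detections)}

def plan_action(world_model):
    # Stage 4: pick a maneuver given the updated model (trivially simplified).
    return "maintain_speed" if world_model.objects else "proceed"

def issue_controls(plan):
    # Stage 5: translate the plan into throttle/brake/steering commands.
    print(f"command issued: {plan}")

def driving_cycle(sensor_readings, world_model):
    detections = interpret(sensor_readings)  # Stage 1: collect and interpret
    fused = fuse(detections)                 # Stage 2: sensor fusion
    world_model.update(fused)                # Stage 3: update virtual world model
    plan = plan_action(world_model)          # Stage 4: AI action planning
    issue_controls(plan)                     # Stage 5: issue car controls commands

driving_cycle(["radar_blip", "lidar_return"], WorldModel())
```

In a real system this cycle would repeat many times per second; the point here is just the ordering of the five stages.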

For my overarching framework about AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

There are lots of questions that would be fascinating to ask of Apple in this situation.

Did the Lexus detect the approaching Leaf, and did it also try to identify a means of escape to avoid the incident or otherwise lessen the impact? The sensors hopefully detected the Leaf coming up behind the Lexus and identified it as an approaching car. The Lexus should have known it was doing the alleged 1 mile per hour and been able to detect the Leaf going at the reported 15 miles per hour. Simple math would have ascertained that the Leaf was going to overtake the Lexus, based on the distance between them and the detectable speeds of both cars.
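To illustrate that simple math, here's a small Python sketch using the reported speeds; the 30-foot following gap is purely my own assumption, since the actual separation between the cars wasn't disclosed:

```python
# Closing-speed and time-to-collision estimate from the reported speeds.
# The 30 ft gap is an assumed figure for illustration only.
MPH_TO_FPS = 5280 / 3600                 # 1 mph is about 1.47 ft/s

lead_speed_fps = 1 * MPH_TO_FPS          # Lexus, reportedly ~1 mph
follower_speed_fps = 15 * MPH_TO_FPS     # Leaf, reportedly ~15 mph
gap_ft = 30                              # assumed following gap, in feet

closing_speed_fps = follower_speed_fps - lead_speed_fps   # ~20.5 ft/s
time_to_collision_s = gap_ft / closing_speed_fps          # ~1.5 seconds
print(f"Estimated time to collision: {time_to_collision_s:.1f} s")
```

Even at these modest speeds, the window to react is on the order of a second or two, which frames how much time the AI would have had to act.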

The virtual world model of the AI system for the Lexus should have been able to plot the approaching Leaf and predict that the rear-end collision would likely occur. The AI action plan should then have tried to identify ways to avoid the incident, if feasible. For example, had the Lexus opted to rapidly accelerate, it might have had time to avoid getting hit from behind, or it might have lessened the impact by more closely matching the Leaf's speed.

It could be that accelerating was not a viable choice, given the distances and time available; maybe the Lexus would have accelerated into a car ahead and thus generated its own rear-end collision. Another aspect to consider might be veering out of the path of the Leaf. This, though, might not have been a good solution, depending upon whether there was available room to the left or right to swerve and avoid being hit by the Leaf. Also, any such sudden maneuver could have other adverse consequences, possibly hitting something else such as a pole or another car, and since there were humans inside the Lexus, a radical maneuver that might itself harm the occupants would potentially be precluded. This is an ongoing ethical question about AI self-driving cars, as to the AI having to make tough choices of this nature.
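Purely as a thought experiment (nothing here reflects Apple's actual system), a toy version of that action-planning trade-off might look like this, with all scores and options being invented for illustration:

```python
# A toy, hypothetical scoring of the escape options discussed above.
def score_option(option, time_to_collision_s, clear_lane, clear_ahead):
    """Return a rough desirability score; higher is better, None means infeasible."""
    if option == "accelerate":
        # Only helps if there's room ahead and enough time to build speed.
        return 0.6 if clear_ahead and time_to_collision_s > 1.0 else None
    if option == "swerve":
        # Requires an open adjacent lane; a radical maneuver risks the occupants.
        return 0.4 if clear_lane else None
    if option == "take_the_hit":
        # Always feasible; acceptable at low closing speeds.
        return 0.3
    return None

options = ["accelerate", "swerve", "take_the_hit"]
scores = {o: score_option(o, time_to_collision_s=1.5,
                          clear_lane=False, clear_ahead=False)
          for o in options}
feasible = {o: s for o, s in scores.items() if s is not None}
best = max(feasible, key=feasible.get)
print(f"chosen action: {best}")   # with no room ahead or beside: take_the_hit
```

Note that with no clear lane and no room ahead, the "best" remaining option is to absorb the hit, which echoes the point made below about seemingly taking no action.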

For my article about the ethics issues facing AI self-driving cars, see: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For my article analyzing the Uber crash in Arizona, see: https://aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/

We don't yet know what was happening inside the AI of the Lexus. Presumably, the AI development team at Apple has taken a close look to figure out what occurred. Maybe the AI did not live up to the task at hand, so to speak, and needs more work to be able to deal with such incidents. Maybe the AI did everything it could have. If no other viable options seemed reasonable, it could be that the AI opted to just go ahead and take the rear-end collision. This could be a sound choice. Out of the myriad of options, sometimes the one that seems to involve no action might be the best, and the result in this case was apparently no human injuries, so in that sense the AI might have made a good choice. We don't know.

I've mentioned many times that just because we might not see any overt action by an AI self-driving car, it doesn't mean that the AI didn't nonetheless run through all sorts of permutations on what to do. In the few seconds or split seconds leading up to a car crash, there might be a tremendous volley of computational analyses, all of which end up at the “best” choice being to take no action. Thus, an outsider observing the self-driving car might be misled into believing that the AI did nothing at all. It's hard to say without cracking open the AI to ascertain what it did or did not do in the circumstance.

For my article about cognitive timing aspects of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/

It's like a human driver who decided not to turn the wheel or hit the brakes: afterward you might ask why, and the human might be able to tell you that they overtly made such a decision, or they might not have been aware of what was happening and thus took no action because they weren't able to think it through. As mentioned about my case of getting hit from behind by the senior citizen driver, I was blissfully unaware that the car was about to hit me (I was listening to chirping birds instead!). I did hear the revving of an engine, but it was a blur in my mind and the whole event unfolded so fast that I had no mental awareness of it occurring.

Now, you might counter-argue that I should have been looking in my rear-view mirror to see any cars coming up from behind me. I admit this is something I do from time to time, but not all of the time. How many of us continually look in the rear-view mirror while sitting at a red light, doing so because we think that a car behind us might ram into our car? I dare say few human drivers would do this. I did so for a few weeks after the incident, but gradually have let my guard down. But, that's a human for you.

For why defensive driving tactics are essential for an AI self-driving car, see my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

For the levels of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

The AI self-driving car has no such difficulty and should always be looking behind itself. Its sensors should always be on, and it should be continuously undertaking the cycle of sensor data collection and interpretation, sensor fusion, virtual world model updating, AI action planning, and issuance of car controls commands. Was the Lexus doing so? That's in the hands of the AI developers at Apple.

One of the weakest portions of most contemporary AI self-driving cars is their lack of looking behind themselves. Nearly all of the sensors and attention go toward what's in front of the self-driving car. This certainly makes sense in terms of the rudiments of what an AI self-driving car needs to do. Some therefore consider looking behind the self-driving car as a kind of “edge” problem and less crucial. But, however you want to classify it, I think we would all agree that a well-rounded and properly proficient self-driving car should be doing some quite significant look-behinds.

Here's my article about the weaknesses of today's look-behind capabilities: https://aitrends.com/selfdrivingcars/looking-behind-self-driving-car-neglected-blind-spot/

For more about edge problems in the AI self-driving car realm, see my article: https://aitrends.com/selfdrivingcars/edge-problems-core-true-self-driving-cars-achieving-last-mile/

Some have criticized the DMV regulations as being inadequate because they do not require the company that reports an incident to also explain what happened inside of the AI. There are those who believe it would be beneficial to all of us, the high-tech developers and the public and the regulators, if we knew why the AI might have faltered or not faltered in any given situation. The firms themselves suggest that this would not be helpful to anyone other than the firm itself, given the idiosyncratic aspects of each AI system and also that it could divulge proprietary secrets of the AI self-driving car involved.

For more about the DMV reporting requirements, see my article: https://aitrends.com/business-applications/disingenuous-disengagements-reporting-ai-self-driving-cars/

For my article about the proprietary nature of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/

I realize that some AI self-driving car pundits would say that there should never be any rear-end collisions involving AI self-driving cars, due to the belief that we should have only and exclusively AI self-driving cars on our roadways. In the case of the Apple Lexus, these pundits would point at the Leaf being driven by a human and assert that it provides further proof that we need to get human drivers out of the equation, by getting those pesky human drivers off the roadways.

Please be aware that in the United States alone there are more than 200 million conventional cars. Those conventional cars are driven by humans. Those conventional cars are not going to suddenly become AI self-driving cars overnight. We are going to have a mix of human-driven cars and AI self-driving cars, which will last for many years, likely decades.

In fact, some question whether we will ever get to a point of having only AI self-driving cars, and there are some who cling to the belief that humans will still insist on being able to drive a car. This would become a regulatory issue of some likely contention, at least for the foreseeable future. Perhaps at some point down the road, we'll all be so used to AI self-driving cars that no one will want to actually humanly drive a car. Hard to foresee, but possible (or maybe humans can drive cars on private roads and at car race tracks, but not on public roadways!).

Let's all agree that there are going to be situations of AI self-driving cars getting hit from behind by human-driven cars, and that human-driven cars are a reality for the foreseeable future. Sometimes an AI self-driving car might be able to avoid such a collision, and sometimes not; it depends partially on the circumstances and on whether the AI is good enough to avoid a collision, when such avoidance is plausible.

Take a look at Figure 2 to see a matrix of the rear-end collision scenarios.


Will AI self-driving cars ever be the culprit in a rear-end collision? Sure. I know that some pundits might say it won't ever happen, but again they fail to grasp the physics involved. Suppose a human-driven car is ahead of the AI self-driving car. Thus, the human-driven car is the lead car, and the AI self-driving car is the follower car. The lead car decides to slam on the brakes. What does the AI self-driving car do?

Assume that there's no place to swerve the AI self-driving car to avoid hitting the rear-end of the lead car. Let's go with a straight-ahead scenario. You might argue that the AI self-driving car should be able to detect the sudden deceleration of the lead car. I'd agree that it should be detectable by the AI self-driving car, and just to be on the safe side, let's say it is detected 99% of the time (I'm going to leave 1% for situations wherein the sensors either fail or are otherwise unable to detect the car ahead; I realize that some will argue with me, in that some will say it should do so 100% of the time, while others might say it's more like 90% and 10%, and so on).

Okay, so we've got a suddenly decelerating lead car, and the AI self-driving car is the follower car. Can the AI self-driving car stop in time? You might contend that yes, it will always be able to do so, assuming that the AI is properly keeping a safe stopping distance from the car ahead of it. Some believe that AI self-driving cars should always maintain the proper stopping distances. We know that humans do not do so, and that routinely we humans drive with too little stopping distance ahead of us.

Is it realistic to assume that an AI self-driving car will always be maintaining the proper stopping distance ahead of it? No. That’s a completely unrealistic expectation. I’ll give you a simple example as illustration.

While on a freeway, suppose an AI self-driving car has a proper stopping distance ahead of it, and meanwhile a car veers into the gap, cutting the distance between the AI self-driving car and the car now ahead of it. The AI self-driving car would presumably need to slow down to try to arrive at a proper stopping distance from the interloper car. Even if the AI self-driving car is viably able to slow down in the given situation, for some moment in time there will be an improper stopping distance.

If the interloper car suddenly jams on its brakes, the AI self-driving car is going to rear-end it, assuming that there's no other viable alternative. There's too little stopping distance for the AI self-driving car to come to a halt without hitting the interloper car.
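To put rough numbers on the stopping-distance point, here's a small Python sketch; the reaction latency, braking deceleration, speed, and gap figures are all illustrative assumptions of mine:

```python
# Rough stopping-distance check; all figures here are illustrative assumptions.
def stopping_distance_ft(speed_mph, reaction_s=0.5, decel_g=0.7):
    """Distance to stop: reaction-time travel plus braking distance.

    reaction_s: assumed sensing/actuation latency in seconds
    decel_g:    assumed braking deceleration as a fraction of gravity (dry road)
    """
    speed_fps = speed_mph * 5280 / 3600
    reaction_dist = speed_fps * reaction_s
    braking_dist = speed_fps ** 2 / (2 * decel_g * 32.174)  # v^2 / (2a), in feet
    return reaction_dist + braking_dist

needed_ft = stopping_distance_ft(65)   # roughly 250 ft at an assumed 65 mph
interloper_gap_ft = 80                 # assumed gap remaining after the cut-in
print(f"need {needed_ft:.0f} ft, have {interloper_gap_ft} ft -> "
      f"{'rear-end likely' if interloper_gap_ft < needed_ft else 'can stop'}")
```

With an 80-foot gap against roughly 250 feet of required stopping distance, no amount of clever planning saves the situation; the physics simply don't allow it.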

Pundits would say that it's because the stupid human became the interloper. Stupid or not, the human driver is part of the equation and, as I mentioned earlier, will be for many years to come. I suppose if we had a utopian world of all and only AI self-driving cars, the AI self-driving cars in theory could arrange themselves and communicate via, say, V2V (vehicle-to-vehicle) to avoid such situations. That's not going to happen for a very long time.

Some pundits would also say it's unfair to “blame” the AI self-driving car in such a situation, since the human caused the incident. I'm not talking herein about the blame game, and am only responding to the notion that somehow AI self-driving cars “cannot” ever rear-end another car. I believe that I've now well articulated that it is quite easily possible for an AI self-driving car to ram into the rear of another car, regardless of who caused it to happen.

We’ve got these key situations:

•  Rear-end scenario #01: Human-driven follower car rams into a human-driven lead car
•  Rear-end scenario #02: AI-driven follower car rams into a human-driven lead car
•  Rear-end scenario #03: Human-driven follower car rams into an AI-driven lead car
•  Rear-end scenario #04: AI-driven follower car rams into an AI-driven lead car

For rear-end scenario #01, we already know that in the United States this happens around 2.4 million times a year (i.e., human driver rear-ends another human driven car).

For rear-end scenario #02, as discussed herein, we're going to have AI self-driving cars that will ram into human-driven cars, either due to the human driver taking some untoward action, or perhaps due to something in the AI that goes awry, presumably unintentionally.

For rear-end scenario #03, we've seen the Apple Lexus incident as an example of what's to come, namely that we're going to have human-driven cars that ram into AI self-driving cars, which hopefully the AI self-driving cars will try to avoid if at all feasible.

For rear-end scenario #04, pundits would say that there should never be an instance of an AI self-driving car ramming into another AI self-driving car. In theory, the two AI self-driving cars should communicate and coordinate in such a manner that it would never occur. I don't think this is a very reasonable expectation. You cannot assume perfect communication, nor can you assume that the AI self-driving cars will be operating perfectly. I say wake up and smell the coffee: in the real world this is going to happen, so get ready for it.

As mentioned, there are rear-end collisions that are doozies and others that are not. The Apple Lexus incident was a somewhat fortunate occurrence in that it reportedly involved no human injuries, luckily so (of course, let's all hope for no such incidents). Had there been human injuries, I'm sure that the news media would have rocketed the story to front-page headlines.

For my article about fake news and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/ai-fake-news-about-self-driving-cars/

For AI self-driving car related accidents, see: https://aitrends.com/ai-insider/accidents-contagion-and-ai-self-driving-cars/

Sadly, such incidents involving AI self-driving cars are going to happen, and we and the public need to be mentally prepared for it. The key will be to learn from the incidents and see what can be done to try to prevent them or mitigate the injuries and damages when they occur. Let's be safe out there.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

