
Reframing AI Levels for Self-Driving Cars: Bifurcation of Autonomy


By Lance Eliot, the AI Trends Insider

The emergence of Level 3 self-driving cars is going to endanger the advent of AI self-driving cars. There, I've said it. Not many have been willing to stand up and make that statement. There is, though, a growing contingent of industry insiders gradually voicing their concerns on this matter. It is a serious matter worthy of explicit attention and discussion.

At the recently held World Safety Summit on Autonomous Technology, there were concerted deliberations and debate about Level 3. Indeed, a white paper by the company Velodyne LiDAR entitled "A Safety-First Approach to Developing and Marketing Driver Assistance Methodology" was distributed at the Summit, and it essentially asserted that it might be timely to consider reframing Level 2 and Level 3 of AI self-driving cars. I agree with their assertion.

For my article about safety of AI self-driving cars and remarks about the World Safety Summit, see: https://aitrends.com/ai-insider/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

I’d like to walk you through the nature of the Level 3 perils and get you involved in this crucial discussion.

This topic also pertains to the nature of AI and how we can best express the spectrum or range of AI capabilities for a particular task.

In the case of AI self-driving cars, there is the SAE (Society of Automotive Engineers) standard known as the "Surface Vehicle Recommended Practice," numbered as document J3016, which provides a taxonomy and vocabulary for expressing the AI capabilities of self-driving cars. It is a cornerstone of the field of AI self-driving cars. Without it, we'd all have a difficult time even discussing AI self-driving cars, since we wouldn't have a common set of definitions and meanings.

Some might say that a rose is a rose by any other name, but I assure you that if I am calling a rose an apple, and you are calling a rose an orange, we would have a lot of confusion whenever you said the word "apple" and I said the word "orange." Thus, we do need a foundation of the words we will use to describe something, and in the case of AI capabilities it is essential, since we otherwise cannot readily compare one AI system to another.

Cars like the Tesla outfitted with AutoPilot are currently considered Level 2 on the SAE standard. Some argue that it is more like 2.5 rather than 2, but I point out that it is nonsensical to refer to fractions since the standard defines an integer range of 0 to 5, and there is no such thing as fractionally meeting the defined levels. Furthermore, it confuses discussions, since making up a fractional level implies there is such a thing, and it also opens a can-of-worms as to what exactly a 2.5 level consists of. This might also create a slippery slope toward people opting to refer to 2.8 or 3.1 or any other kind of made-up fractional amount. So, I urge those of you wanting to refer to levels by fractions to stop doing so, thanks (or, I suppose, work toward changing the SAE standard to include fractional levels, if you feel such a compelling need to have them!).

For more about the Tesla and the AutoPilot naming, see my article: https://aitrends.com/selfdrivingcars/crossing-the-rubicon-and-ai-self-driving-cars/

For my article about the marketing of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/marketing-self-driving-cars-new-paradigms/

Many of the auto makers and tech firms are now aiming toward Level 3 self-driving cars. I'll explain in a moment the nature of a Level 3 in terms of the AI self-driving capabilities. This will shift us from a world somewhat accustomed to Level 2 (though only a small fraction of society has experienced it) into a world of Level 3 self-driving cars. It is going to happen somewhat incrementally, meaning that we'll see auto maker after auto maker bringing their Level 3 self-driving cars to the marketplace. This will also lead to an uptick in consumer expectations about what AI self-driving cars can and cannot do.

Once I've explained the Level 3 perils, you'll perhaps see why some are worried that Level 3 is akin to the famous story about the frog in the boiling water. If you don't know the frog story, here it is. In the mid-1800s, some scientists reported that if you put a frog into water and very slowly brought the water to a boil, the frog would not try to jump out of the pot and instead would die in the boiling water, presumably unable to anticipate that its death would arrive over the course of the upticks in temperature. On the other hand, if you tossed a frog into a pot of boiling water, it would immediately react and try to escape.

The overall lesson is that we can sometimes gradually get used to something that in the end proves disastrous for us, whereas had we been abruptly tossed into it at that late stage, we would have readily seen that the end was near.

These frog experiments have since been pretty much debunked, but in any case, it is a handy metaphor for referring to situations in which something slowly happens and you kind of gradually fall into it (even if it isn’t really true about frogs in boiling water!). I often refer instead to such situations as a form of quicksand. Once you get yourself into quicksand, it at first doesn’t seem overly alarming if you are unaware of what quicksand can do. You figure at first that you can somehow readily work your way out of it. Unfortunately, often it is not the case that you can, and so instead you gradually and inexorably get pulled under.

So, Level 3 for some industry participants and observers is the frog in the progressively boiling water or if you like instead it is the quicksand that will sink us all — not only in Level 3, but perhaps sink the rest of AI self-driving car adoption too (or at least substantially undermine the acceptance of AI self-driving cars).

I would dare say that there could be an even larger spillover into AI systems of many varieties, in the sense that if the public and regulators become disturbed at what happens with the AI aspects of AI self-driving cars, it could readily carry over into AI systems of other kinds.

For my article about the public perception of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/roller-coaster-public-perception-ai-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

There are some auto makers and tech firms that are purposely avoiding Level 3 and instead jumping straight to Level 4 and Level 5. One reason to skip past Level 3 entails the perils that I'll be describing for you momentarily. Another reason is that some simply believe in what would be called truly autonomous cars and don't want to mess around with, or get mired in, anything less than that. You could say they are doing so based on an overall technological and business philosophy about the nature of AI self-driving cars.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. We too are desirous of achieving truly autonomous cars. This aim at truly autonomous cars is more of a moonshot than might seem to be the case on the surface (for those of you that aren’t directly involved in developing such AI).

Allow me to elaborate about the levels of AI self-driving cars.

The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It's all on the shoulders of the AI to drive the car.

For a Level 4, the self-driving car is allowed to have various self-imposed restrictions on when the AI can and cannot perform the self-driving of the car. An auto maker might define an Operational Design Domain (ODD) such that their particular version of a Level 4 self-driving car can do the self-driving when the roads are dry and there is no snow. If a driving situation arises outside of the defined ODD (let's say it begins to snow), the AI is supposed to check whether a driver is present who might want to intervene, and if not, the AI will perform a fallback effort and put the car into a minimal risk condition (such as pulling off to the side of the road).
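
To make that ODD boundary logic concrete, here is a minimal sketch of how a Level 4 fallback decision might be structured. All of the names here (OddSpec, VehicleState, and so on) are hypothetical stand-ins of my own; the SAE standard defines the behavior, not any particular code interface.

```python
# A minimal sketch of Level 4 ODD-boundary handling; invented names.
from dataclasses import dataclass

@dataclass
class OddSpec:
    """Hypothetical Operational Design Domain limits chosen by an auto maker."""
    allows_snow: bool = False
    requires_dry_road: bool = True

@dataclass
class VehicleState:
    snowing: bool
    road_is_dry: bool
    attentive_driver_present: bool

def within_odd(odd: OddSpec, state: VehicleState) -> bool:
    # Inside the ODD only if current conditions satisfy the defined limits.
    if state.snowing and not odd.allows_snow:
        return False
    if odd.requires_dry_road and not state.road_is_dry:
        return False
    return True

def level4_decision(odd: OddSpec, state: VehicleState) -> str:
    # If the situation leaves the ODD, offer the driving task to a
    # present driver; otherwise fall back to a minimal risk condition.
    if within_odd(odd, state):
        return "continue_autonomous_driving"
    if state.attentive_driver_present:
        return "request_driver_takeover"
    return "fallback_minimal_risk_condition"  # e.g., pull off to the side of the road

# Example: it begins to snow and no driver wants to intervene.
print(level4_decision(OddSpec(), VehicleState(snowing=True, road_is_dry=False,
                                              attentive_driver_present=False)))
```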

For self-driving cars less than a Level 4, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

Here are the usual steps involved in the AI driving task (a toy sketch of this pipeline follows the list):

• Sensor data collection and interpretation
• Sensor fusion
• Virtual world model updating
• AI action planning
• Car controls command issuance
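
Below is a toy, self-contained sketch of those five stages wired together. Every function name and value here is a made-up illustration, not any auto maker's actual architecture; real systems run these stages many times per second in far more sophisticated forms.

```python
# A toy rendering of the five-stage driving task pipeline listed above.

def collect_sensor_data():
    # Stage 1: sensor data collection and interpretation (toy values).
    return {"camera": {"obstacle_ahead": True}, "radar": {"range_m": 22.0}}

def fuse_sensors(readings):
    # Stage 2: sensor fusion, reconciling modalities into one estimate.
    return {"obstacle_ahead": readings["camera"]["obstacle_ahead"],
            "distance_m": readings["radar"]["range_m"]}

def update_world_model(world, fused):
    # Stage 3: virtual world model updating.
    world["nearest_obstacle_m"] = fused["distance_m"] if fused["obstacle_ahead"] else None
    return world

def plan_action(world):
    # Stage 4: AI action planning (crudely: brake when an obstacle is near).
    if world["nearest_obstacle_m"] is not None and world["nearest_obstacle_m"] < 30.0:
        return {"brake": 0.5, "steer": 0.0}
    return {"brake": 0.0, "steer": 0.0}

def issue_controls(command):
    # Stage 5: car controls command issuance.
    print(f"Issuing controls: {command}")

world_model = {}
readings = collect_sensor_data()
issue_controls(plan_action(update_world_model(world_model, fuse_sensors(readings))))
```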

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are more than 250 million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other. Period.

Human Attention to the Driving Task

As mentioned, the levels less than Level 4 have a clear-cut requirement that a human driver must be present in the AI self-driving car. The human driver must be licensed to drive. The human driver must be ready at all times to take over the driving task. The human driver must be attentive to the driving of the car by the AI system, since otherwise the human driver might be unaware of the driving situation and therefore unable to readily and immediately take over the driving task.

You’ve likely seen YouTube videos of human drivers that are sitting in the driver’s seat of an AI self-driving car at a Level 2, and those human drivers are not watching the road. Instead, you see them trying to type on their smartphone, or they are reading a book in their lap, or they are looking at the backseat of the car where their cute baby is cooing at them, etc. There are many such videos, including ones that show the human driver putting their hands outside the driver’s side window to convince us that they are not driving the car.

Imagine that at the moment their arms are waving outside of the car, all of a sudden a truck veers unexpectedly into the path of the self-driving car and the AI urgently requests that the human driver take over the driving task. Do you think that the human driver would be able to pull their arms back into the car, put their hands onto the steering wheel, and then sufficiently hit the brakes or maneuver the car to safety, doing so in the likely split seconds before a fatal crash? I think not.

For my analysis of the time it takes for human drivers to react to driving situations, see my article: https://aitrends.com/selfdrivingcars/not-fast-enough-human-factors-ai-self-driving-cars-control-transitions/

For taking responsibility for an AI self-driving cars actions, see my article: https://aitrends.com/ai-insider/responsibility-and-ai-self-driving-cars/

I say that the videos help illustrate a very crucial aspect about human behavior. If humans believe that an AI system is going to be able to undertake a certain task, it is going to be the case that the humans will get lax in performing the task and become complacent or ill-prepared to undertake the task when the time comes for them to do so.

Take a look at Figure 1.

In the diagram, there is a line that starts at the High position of the vertical axis and then proceeds downward across the diagram toward the rightmost edge. This line represents the attention level of the human driver. There is a second line on the diagram, which starts at the Low position of the vertical axis at the leftmost edge, and this line makes its way upwards across the diagram. This is a line representing the AI driving capabilities of a self-driving car.

What this diagram is suggesting consists of the notion that as the AI driving capabilities rise, the human attention to the driving task decreases. At the leftmost edge of the diagram, the human driver attention is quite high, which makes sense since the AI driving capabilities are quite low. The human realizes that they cannot rely upon the AI, and so they drive the car as though they are the driver of the car (which, they are). At the rightmost edge of the diagram, the AI driving capabilities are quite high, and as a result the human driving attention is low since the human is fully expecting the AI to handle the driving task.

For those of you that are mathematicians or statisticians, I realize you might want to argue about the diagram in terms of whether there is a direct inverse correlation between these two lines. Also, you might want to argue that the lines shouldn’t be portrayed as straight arrows but perhaps have some other more fluid shape to them. Sure, I’ll go along with those aspects and want to emphasize that the diagram is meant to overall showcase the nature of the phenomena, and not necessarily be an exact portrayal of it.

Now, please take a look at Figure 2.

In this diagram, I have highlighted the gaps between the two lines. The lines eventually cross each other. For the portion that exists to the left of the crossover point, we have a “positive” gap between the level of the human driving attention and the level of the AI driving capability. Once we’ve reached the crossover point, there is essentially a “negative” gap to the right of the crossover, meaning that the human attention level has now fallen below the level of the AI capabilities (this is a region of the graph that I refer to as the zone of attention deficit).
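
As a toy illustration of that crossover, here is a small numeric sketch. It assumes, purely for the sake of the picture, a straight-line inverse relationship between attention and capability; as noted above, the real curves would be more fluid, and none of these numbers are empirical.

```python
# A toy numeric rendering of Figure 2's crossover and gaps.
def human_attention(ai_capability: float) -> float:
    return 1.0 - ai_capability  # invented straight-line inverse relationship

for c in (0.2, 0.4, 0.6, 0.8):
    gap = human_attention(c) - c  # positive left of the crossover, negative right
    zone = "zone of attention deficit" if gap < 0 else "attentive region"
    print(f"AI capability {c:.1f}: gap {gap:+.1f} -> {zone}")
# Here the crossover sits where attention equals capability (c = 0.5);
# everything to the right of it is the zone of attention deficit.
```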

I’ve labeled four examples, shown as A, B, C, and D on the diagram, indicating gaps between the human driving attention level and the AI driving capability. Let’s consider each of those gap instances.

For the example labeled as A, we have a relatively large gap of a “positive” nature that shows the human to be at a relatively high level of attention, and this corresponds to a relatively low level of AI driving capability. You can consider this as a good situation generally since it implies the human is aware of the need to be attentive to the driving task when the AI is not up-to-snuff.

For the example labeled as B, we have a smaller gap of a "positive" nature that shows the human level of attention getting less and less, while the AI driving capability becomes more and more. This could be okay if the AI driving capability is sufficiently up-to-snuff and if the human driver can safely rely upon the AI to do the driving task.

For the example labeled as C, we now have the human attention to the driving task that has dropped below the level of the AI driving capability. The human is now allowing their attention to lapse as they believe that the AI driving capability is able to handle the driving task.

For the example labeled as D, there is now a large gap between the human level of attention to the driving task, which has dropped quite low, and meanwhile the AI driving capability is indicated as quite high. This is a situation of potential heightened risk since the human is assuming that the AI will indeed be able to handle the driving task.

One aspect to keep in mind about this diagram is that there is the actual AI capability of driving versus the human perceived AI capability of driving. The human might or might not accurately understand what the AI self-driving capability actually consists of. There are many that are concerned that mixed messages are being portrayed to human drivers about the nature of the AI capabilities in self-driving cars. As such, the human is potentially basing their own attentive levels on what might be a false understanding or muddled interpretation of what the AI self-driving car can actually do.

For confusion about the use of the word “autopilot” and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/airplane-autopilot-systems-self-driving-car-ai/

For issues of irreproducibility and AI self-driving cars, see my article:  https://aitrends.com/selfdrivingcars/irreproducibility-and-ai-self-driving-cars/

For my analysis of the Uber crash in Arizona, see: https://aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/

Let’s see how this indication of human attention levels and AI driving capabilities can apply to driving situations.

Take a look at Figure 3.

During a driving journey, you are likely to have times that the driving is somewhat monotonous. Whenever I drive from Southern California up to Silicon Valley, much of the drive consists of an open highway or freeway that usually has little traffic. The driving becomes a rather mindless task and consists of being on the watch for outliers.

You can have times during a driving journey that there is sporadic activity. While on the freeway during my morning commute, the traffic is snarled and so much of it takes place on a stop-and-go basis. You come to a stop, you wait, and traffic continues. This repeats itself. In that sense, you could say it is sporadic. It happens with a certain amount of regularity.

There are also situations wherein driving consists of sudden actions. I'll refer to these as spiky driving conditions. Suppose you're in stop-and-go traffic, and all of a sudden the traffic opens up and there is a brief period that allows you to quickly accelerate and cover some ground. But then it comes to a somewhat abrupt halt and you find yourself once again in the classic stop-and-go formation.

Next, take a look at Figure 4.

A driving journey can consist of many segments, each segment being monotonous, sporadic, or spiky. While driving home at night after work, I have stretches of my drive that are monotonous, then find myself in a spiky situation, then maybe back to being monotonous, and then it perhaps becomes sporadic. And so on.

Shown in the diagram is a driving journey that began as sporadic, and then became monotonous thereafter.

Take a look at Figure 5.

I now want to bring your focus to the human attention that might occur during each of those driving segments. Depending upon the arrangement or series of driving segments involved, a human driver might become increasingly less attentive to the driving task over time. In other words, perhaps at the start of the driving journey the driver was highly attentive, but then became lulled due to the nature of the driving effort involved.

I am illustrating this without regard for any kind of AI driving capability. This aspect of driving attention applies to situations of conventional car driving, along with the advent of AI self-driving cars.

As indicated in the diagram, the driver attention follows a decay curve.

In this example, the elongated set of monotonous driving has led to the human attention span dropping. Unfortunately, a spiky instance then arises, and the driver is at a low point in their driving attention. You've perhaps experienced this in your own driving. It might be late at night, you've been driving for a while, and the road is relatively open and uneventful. All of a sudden, a drunk driver pulls onto the roadway. Your attention has been dulled and your reaction time is sluggish. Your ability to react quickly and aptly might be a lot less than if the drunk driver had pulled onto the road when you first began your driving trip and were more alert and aware.

This is a bad combination in that the driver attention is very low and yet the need to react and respond is very immediate.
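
For readers who like to see the shape of such a curve, here is a toy sketch that models attention as exponential decay whose rate depends on the driving segment type. The rates are invented purely for illustration and are not measured human-factors data.

```python
# A toy sketch of the attention decay curve in Figure 5; invented rates.
import math

DECAY_RATE = {"monotonous": 0.030, "sporadic": 0.010, "spiky": -0.050}  # per minute; spiky re-arouses

def attention_after(segments, start=1.0):
    """segments: list of (segment_type, minutes); returns attention in [0, 1]."""
    attention = start
    for seg_type, minutes in segments:
        attention *= math.exp(-DECAY_RATE[seg_type] * minutes)
        attention = min(attention, 1.0)  # attention can recover, but caps out
    return attention

# A long monotonous stretch leaves the driver near the bottom of the
# curve exactly when a spiky event demands an immediate response.
print(attention_after([("sporadic", 10), ("monotonous", 60)]))  # ~0.15
```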

The Level 3 Perils

Having set up a bit of a foundation for you about the driving task, let's now get back to considering the nature of Level 3 and the perils it entails.

For an AI self-driving car of a Level 3, the SAE standard indicates that a human driver must be present during a driving journey and be ready for two major possibilities of driving action: (1) the human driver must be receptive to an AI-issued request for the human to intervene, and (2) the human driver needs to be aware of the status of the self-driving car such that if there is a need to intervene due to a vehicle system failure, the human driver will be ready and able to do so (regardless of whether the AI informs the driver, since it might be that the AI does not, and the human driver is supposed to take overt action on their own anyway).
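
Here is a minimal sketch of those two takeover paths, under the assumption (mine, for illustration only) that the decision can be boiled down to a few booleans. The SAE standard defines the obligations; the function and return names here are invented.

```python
# A minimal sketch of the two Level 3 takeover paths described above.
def level3_step(ai_detects_failure: bool,
                human_senses_problem: bool,
                human_receptive: bool) -> str:
    # Path 1: the AI issues a request to intervene; the human must be receptive.
    if ai_detects_failure:
        if human_receptive:
            return "human_takes_over_on_ai_request"
        return "unmet_request_to_intervene"  # the Level 3 peril in a nutshell
    # Path 2: a vehicle system failure the AI never notices (e.g., a broken
    # tie rod); the human is expected to take overt action on their own.
    if human_senses_problem:
        return "human_disengages_ai_voluntarily"
    return "ai_continues_driving"
```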

I realize there’s a lot packed into the definition for a Level 3. So, let’s consider some examples.

One example provided by the SAE standard involves the aspect that suppose the radar sensors on a self-driving car fail or falter and suppose that the AI realizes this has happened. The AI would alert the human driver and ask that the human driver take over the driving task. It is open-ended as to how the AI would notify the human driver about this matter and each of the auto makers and tech firms are devising their own means of how to alert the human driver as to such matters.

For my article about the use of Natural Language Processing (NLP), see: https://aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

For conversing with an AI self-driving car, see my article:  https://aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

For explanation-based Machine Learning and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/explanation-ai-machine-learning-for-ai-self-driving-cars/

It is not necessarily the case, though, that the AI will even realize that something is amiss on the self-driving car. Another example in the SAE standard consists of a tie rod that breaks during a driving journey. Would the AI realize that the tie rod has broken? Maybe, if the self-driving car has the kind of sensors that might detect such an anomaly. But it could be that the self-driving car does not have sufficient sensors to detect this particular issue.

Suppose the human driver in the self-driving car could "feel" that the self-driving car was driving in a rather poor manner, possibly pulling hard to one side of the road or the other. The human might not know that the tie rod in particular is broken, but overall might realize that the car has suffered some kind of malfunction or breakdown. As such, the human driver might need to make a choice: do they continue with the AI driving the self-driving car, or would it be better to disengage the AI and take over the driving themselves?

This is a bit of a conundrum for the human driver.

Which is better, for the human to continue to allow the AI to drive the self-driving car, or for the human to instead take over the driving of the self-driving car? The human doesn't know for sure what is wrong with the car. The human is unsure whether the AI can handle the driving of the car at this juncture. It is a bit disconcerting that the AI hasn't even apparently realized that the car is not as drivable as it once was.

Imagine you were in an Uber or Lyft, and the ridesharing car was being driven by a human driver. You are the passenger. All of a sudden, the car begins to lurch somewhat and you can readily discern that something is amiss with the car. You look expectantly at the Uber or Lyft driver and wait to see what they are going to do about it. You'd be quite surprised if the Uber or Lyft driver seemed completely unaware that the car is now lurching. Say what, the driver doesn't know that the car is in trouble? That is a scary proposition!

Well, that’s exactly what can happen with a Level 3 self-driving car. The AI doing the driving might not be aware that there is something amiss with the car. Meanwhile, you, the human driver, sensing that there is something wrong with the car, need to make what could be a potentially life-or-death kind of choice.

If you decide to disengage the AI, you are hoping and betting that you’ll be able to do a better job of driving the self-driving car than would the AI in this circumstance. If you decide to not disengage the AI, you are sitting there in fear that at some moment the AI is going to get completely thrown for a loop by the problem brewing and it might be that the AI will make a wrong decision and get things into an even worse predicament.

In short, we have the instance of the human driver having to be ready to respond and take over the driving if the AI asks the human driver to intervene. But this is not as easy as it might seem. How will the AI identify what is wrong and thus the reason for having the human driver take over? The human driver is expected to be able to take over in a timely fashion, but what is the nature of the time involved? And what kind of action should the human driver take once they have taken over the driving task (this is entirely up to the human at that point)?

We have the other instance whereby the human suspects that there is something amiss with the self-driving car and so the human has to decide whether or not to overtly, and on their own, opt to take over the driving task. The human driver might become concerned and even confused that the AI itself has not requested the human driver to take over the driving. This might imply to the human driver that they themselves are perhaps falsely sensing that something is amiss, and the human driver might shrug off something serious, wasting precious moments that might have made a difference in taking over the driving of the self-driving car.

With the Level 2 self-driving cars, by-and-large most human drivers get the idea that they as a human driver need to remain well-connected and engaged in the driving task (I’m not saying everyone does this, just the preponderance seem to do this).

With the Level 3 self-driving cars, we are upping the ante in that the human drivers will be potentially lulled into assuming that the AI is able to handle the driving task (since it is more able to do so than with the Level 2), and thus the human drivers won’t be ready when the AI either asks them to intervene or they won’t be ready when on their own they should take over the driving task due to some kind of vehicle system failure.

You might say that in my Figure 2 we are going from the left side of the crossover point, consisting perhaps of Level 2 self-driving cars, and now with Level 3 we will be entering into the right side of the crossover point. We are entering into the zone of attention deficits. The danger zone.

Devices for Driver Attention Detection

Some believe that the solution for potential attention deficits of human drivers consists of using various attention detection devices.

In a self-driving car, the steering wheel might be augmented by a sensing device that would ascertain whether the human driver has their hands on the wheel. If the driver seems to not have their hands on the steering wheel, the steering wheel might light up to remind the driver to put their hands back onto it, or there might be a buzzing sound or some other means to alert the human driver about needing to keep their hands on the wheel.

The means of prompting the driver could include audio alerts, visual alerts, and tactile alerts (any variation thereof, or possibly all three modes at once).

See my article about steering wheel detection and reminders aspects: https://aitrends.com/selfdrivingcars/steering-wheel-gets-self-driving-car-attention-ai/

Another sensing mechanism might be to have a camera mounted in front of the driver and facing toward the driver. The camera might keep track of the driver's head, being able to detect whether the head appears to be tilted or turned away from looking forward at the road ahead. This might also include a capability to detect eye movements. Thus, even if the head is aiming forward, it could be that the eyes of the driver are not focused on the road ahead, and so the facial detection of eye movement could be another element for tracking. As with the steering wheel reminders, if the camera detection determines that the driver does not seem to be focused ahead, it will emit a sound or light up an alert to jog the driver into compliance.

As a penalty for a driver that seems to repeatedly be lax in their physical attention forward, the self-driving car might either restrict the driver from being able to drive or the AI might opt to perform a fallback operation to bring the car to a minimal risk condition such as pulling over and parking at the side of the road. Though this seems like an appropriate precaution, it is not risk free as the driver might otherwise rebel against the system or take untoward action as a result of this kind of monitoring and system action.
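
To tie the alerting and the fallback penalty together, here is a minimal sketch of an escalation policy of the kind described above. The threshold of three lapses and all of the names are my own invented illustration, not any vendor's actual system.

```python
# A minimal sketch of a driver attention-monitoring escalation policy.
def monitor_step(hands_on_wheel: bool, eyes_on_road: bool,
                 lapse_count: int, max_lapses: int = 3):
    # Physical attentiveness only: these sensors cannot see mental state.
    if hands_on_wheel and eyes_on_road:
        return "ok", 0  # physically attentive; reset the lapse counter
    lapse_count += 1
    if lapse_count < max_lapses:
        # Escalating prompts: audio, visual, and/or tactile alerts.
        return "alert_driver", lapse_count
    # Repeatedly lax: restrict driving or fall back to a minimal risk
    # condition, e.g., pull over and park at the side of the road.
    return "fallback_minimal_risk_condition", lapse_count

# Example: an increasingly lax driver triggers escalating responses.
state = 0
for hands, eyes in [(True, True), (False, True), (False, False), (False, False)]:
    action, state = monitor_step(hands, eyes, state)
    print(action)
```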

Of course, we all know that distracted drivers are a problem on our roadways. The advent of smartphones has seemed to exacerbate the problem of drivers that are tempted to try and do two things at once. The driver wants to look at the latest tweet and at the same time be driving the car. If the roadway appears to be of a monotonous nature in terms of traffic, the driver figures that they can handle both the reading of their chats and the driving of the car.

Distracted driving is not limited to simply looking at your smartphone while driving. A distracted driver can be focusing on a crying baby in the backseat of the car and be therefore no longer focused solely on the driving task. I’ve seen drivers that were engrossed in a heated debate with a passenger in the front seat of their car and had become so preoccupied with the acrimonious discussion that they no longer knew what was happening on the road ahead of them. There are drivers that put their make-up on while heading to work, and they too are distracted. There are drivers that are looky-loos that cannot help but look at a disabled car on the side of the road, whisking past at 80 miles per hour, and having their heads turned and looking away from the upcoming traffic. Etc.

Let's clarify, though, the difference between a so-called "distracted" driver and an inattentive driver.

In the case of a distracted driver, there is some other activity or aspect occurring that has drawn their attention. It could be that the driver themselves have opted to focus on the distractor, or it could be that the distraction itself has spurred the driver to become distracted. Either way, we can say that the driver has their attention elsewhere other than solely the driving task.

Thus, there can be a driver that has become inattentive, and the basis for the inattentiveness is due to distraction to something else.

We can also have an inattentive driver that is not necessarily “distracted” in the purist sense of distraction. An inattentive driver might be looking straight ahead and seemingly focused on the road, and not be talking with anyone and not be doing any other apparent activity, and yet they could still be inattentive to the driving task. Their mental state is the final determiner of whether or not they are providing attention to the driving task.

In case you think I am splitting hairs on this point, I am not. It is crucial to understand that an inattentive driver is not always, nor necessarily, a distracted driver (if by "distracted" we mean there is some other physical manifestation that has drawn away the attention of the driver, which is the usual meaning most commonly applied).

Take a look at Figure 6.

In the Driver Attention Matrix, the rows constitute the physical attention provided by the human driver and the columns represent the mental attentiveness of the human driver.

Suppose the human driver is physically aligned with watching the road ahead. Our steering wheel sensor says that the hands of the driver are present on the steering wheel, and the camera says that the driver is facing forward, and their eyes are on the road. Does this physical positioning and posture of the human driver then guarantee that they are attentive to the driving task?

No, it does not.

These handy gadgets to detect and warn the driver about being attentive are primarily dealing with the physical aspects of the human driver. By-and-large, they will catch the “distracted” driver that is looking down at their smartphone or turning their head to coo at the baby in the back seat. The hope is that by potentially preventing the driver from physically being inattentive, they will realign themselves physically and then presumably (hopefully) put their mind into the game too.

There is certainly a much greater chance of the driver being attentive to the driving task when their body is aligned with the needs of the driving task. Keep in mind though that it is somewhat like the proverbial line that you can bring a horse to water but you can’t necessarily make it drink. Just because you force the driver to be physically “attentive” it does not ergo bring their mind to the matter as well.

The Driver Attention Matrix shows the four different possibilities of the circumstance when a driver is physically attentive versus not physically attentive, and combined with the driver’s aspects of being mentally attentive versus not being mentally attentive.

The optimum or “best” driving setting is when the human driver is both physically attentive and mentally attentive (see the Matrix square labeled as “01”).

When the driver is physically attentive but not mentally attentive (see the Matrix square labeled as "02"), you have a driver that is not mentally engaged in the driving task. This mental inattentiveness can mean that even though their hands, feet, and eyes are in the right spots to react to a driving urgency, their mind is not ready. The lack of mental attentiveness can therefore delay a timely response, and their mental drift might prevent them from even knowing that action is needed, or what the proper action would be, negating the fact that their body was at least in a position to take the needed action.

If the driver is mentally attentive but their physical attentiveness is not properly in place (see the Matrix square labeled as "03"), this lack of "eyes on the road" can mean that their mind won't even get a chance to determine that something is amiss. If their hands and feet are also misplaced, the mind that tries to command the body to take action will likely face delays in physically getting into place in time. Plus, the act of trying to get their body properly in place could cause other adverse consequences, such as steering in the wrong direction or jamming on the accelerator when they meant to hit the brakes (this happened to me when an elderly driver got confused and rear-ended my car at a stop after inadvertently putting his foot on the accelerator rather than the brake pedal).

The most dangerous driving attention deficit consists of the Matrix square labeled as “04” and involves a human driver that is both physically inattentive and mentally inattentive.
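
For a compact recap of the four squares, here is a toy classifier over the matrix. The quadrant labels 01 through 04 follow the article's Figure 6, while the code itself is merely my own summary device.

```python
# A toy classifier over the Driver Attention Matrix quadrants (Figure 6).
def attention_quadrant(physically_attentive: bool, mentally_attentive: bool) -> str:
    if physically_attentive and mentally_attentive:
        return "01: optimum, body and mind both on the driving task"
    if physically_attentive:
        return "02: body in place, mind elsewhere; delayed or absent reactions"
    if mentally_attentive:
        return "03: mind engaged, but eyes/hands/feet out of position"
    return "04: the most dangerous deficit, inattentive in body and mind"

print(attention_quadrant(True, False))
```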

Today's devices that try to detect when a driver is "distracted" from the driving task are dealing with the physical attentiveness aspects. This is helpful, but it is only part of the story, one side of the coin. As mentioned before, the horse still needs to drink the water once it has been led to the trough.

The emergence and advancement of the various physical attention detection devices are really just a surrogate for deriving mental attentiveness. It is as yet unknown how often forcing someone to be physically attentive also forces them to mentally reengage in the driving situation.

I'd suggest that we cannot say that a physically inattentive driver who was sparked into becoming physically attentive will 100% of the time also become a mentally attentive one. As such, if the physical attentiveness gets us to, say, mental attentiveness 90% of the time, it is still cause for concern that 10% of the time the driver remains mentally inattentive. Or, it could be 80% and 20%, or maybe even 50% and 50%. Or worse.

You might be wondering how we could ascertain whether a human driver is mentally attentive to the driving task.

It's a hard nut to crack. There is no particular means to somehow use sensors to detect that the human is actively thinking about the driving task, though some predict that in the future we might be able to do so via brainjacking.

For my article about brainjacking and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/brainjacking-self-driving-cars-mind-matter/

Another approach could consist of having the AI converse with the human driver and try to determine whether the driver is aware of the driving situation at-hand.

This would be somewhat akin to when you are helping a novice teenage driver learn to drive a car. The novice sits in the driver's seat, and you try to coach them as to what is taking place. This is a kind of co-sharing arrangement of the driving task, though you usually don't have your own driving controls and must rely upon the teenager to undertake the maneuvering of the car.

If you’ve ever done this kind of driving assist, you know that part of the time you are gauging whether the teenager is aware of the driving situation, and at other times you might be telling them what to do or urging them to take certain actions. The more savvy they become, the less you are likely to offer commands and instead just probe their mental state to ensure that they know there are pedestrians nearby or that they are getting rather close to the car ahead of them.

The AI would potentially be able to ascertain whether the human driver is mentally ready or engaged and is indeed ready to undertake the driving task if needed. The AI could periodically interact with the human driver. If the physical attention sensors detect that the human driver is physically becoming inattentive, the AI could then be part of the alert for the driver and also then act secondarily to try and ensure that the driver mentally gets back into the appropriate driving mode too.

For conversing with an AI self-driving car to give driving commands, see my article: https://aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

For the socio-behavioral aspects of humans instructing AI self-driving cars, see my article: https://aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

For Machine Learning aspects about self-driving cars, see my article: https://aitrends.com/ai-insider/explanation-ai-machine-learning-for-ai-self-driving-cars/

For more about how humans interact with AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/shiggy-challenge-and-dangers-of-an-in-motion-ai-self-driving-car/

This is no silver bullet, unfortunately. First, the AI would need to be good enough to be able to carry on such a dialogue and be “smart” enough to judge whether the human was immersed in the driving moment or not. Second, the act of the AI conversing with the human driver can itself be considered a form of potential distraction. Third, the method of communicating, if verbal, might be too slow and crude to effectively convey in a timely fashion whatever urgency might be arising. And so on.

Reframing the Standard Levels

By now, I hope that you are grasping the concerns associated with Level 3 self-driving cars. It is somewhat ironic that though a Level 3 is more capable than a Level 2, it also tends to cause the human driver to become more inattentive, which then gets us into the perils of Level 3.

If we get a lot of Level 3 self-driving cars on our roadways, what might happen?

Well, we could see a lot of unfortunate car crashes that injure or kill people. This could happen during that boundary time of when the AI is trying to get the human driver to intervene, but the human driver is seemingly caught unaware. Or, the AI or the self-driving car has experienced some kind of failure, but the human driver did not take over the driving controls of their own volition and therefore the situation took a turn for the worse.

The auto makers and tech firms that make the Level 3 self-driving cars would undoubtedly try to cast these incidents as a human failing. It wasn't the fault of the AI per se, they would contend, it was those human drivers that were lazy, distracted, inattentive, confused, or whatever. Even if this could be used as a defense, the end result of having injuries and deaths is likely going to be enough to put the kibosh on the self-driving car momentum that right now seems to be rolling forward. One could expect a public backlash along with a regulatory backlash.

For my article about product liability issues of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/product-liability-self-driving-cars-looming-cloud-ahead/

For the crossing of the Rubicon and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/crossing-the-rubicon-and-ai-self-driving-cars/

Unfortunately, the odds are that if the Level 3 self-driving cars become the bad apple in the barrel, it might well spoil the rest of the barrel. If the public and regulators perceive that Level 3 is an “autonomous” self-driving car, they will naturally extend their concerns to the Level 4 and Level 5 self-driving cars. All self-driving cars will be tossed into the same bucket, regardless of their capabilities.

Take a look at Figure 7.

This diagram indicates the six levels of self-driving cars as ranging from a Level 0 to a Level 5. I’ve tried to indicate that the AI capability increases as the levels get higher in number.

Some would argue that the Level 4 and Level 5 should be much higher and tower well above the levels below Level 4, doing so to signify how much an improvement in self-driving there is after you get past Level 3. I wanted to keep the diagram readable and so I have shown the Level 4 and Level 5 as at least somewhat taller than Level 3. If you like, think of them as a lot taller.

One of the concerns about the levels is that they are all part of the overall definition associated with self-driving cars. But the reality is that Levels 0, 1, 2, and 3 are not really truly autonomous cars, and yet they are being associated overall with self-driving cars.

Take a look at Figure 8.

As shown, I’ve divided up the levels into two major groupings. The Level 3 and the levels below it are all considered as not autonomous. The Level 4 and Level 5 are considered to be autonomous.

This is a bifurcation of the autonomous driving levels.

By bifurcating or splitting the levels into two groups, we can then perhaps differentiate between the two groups. It is a potential means of reframing the underlying nature of these cars.

This might help when dealing with any dilemmas associated with Level 3. If the public and regulators, and the media, began to realize that the Level 3 is in the “not autonomous” camp, it might be instructive when the time inevitably comes for them to have qualms about “self-driving cars” – since one might then clarify that the Level 3 is not actually a true self-driving car at all.

The Velodyne white paper that I mentioned earlier has proposed that any vehicle that is less than Level 4 should not be construed as, nor referred to as, an autonomous vehicle, nor as a self-driving or driverless vehicle.

Going even further, any such vehicle that is less than Level 4 should overtly and directly clarify that it is not an autonomous vehicle, nor a self-driving or driverless vehicle. Being overt about this would hopefully counter the implication that if a vehicle does not say it isn't self-driving, one could infer that it is (an inference the marketing would likely encourage, I'd wager).

And, instead of separately referring to Level 2 and Level 3, it has been proposed that those two levels be collectively referred to as Level 2+. Meanwhile, the Level 4 and Level 5 might be referred to as Level 4+.

Take a look at Figure 9.

As shown in the diagram, we would have a bifurcation of the levels into two major groups, consisting of the not autonomous levels and the autonomous levels.

The autonomous levels would be collectively referred to as L4+. Within the not autonomous levels, Level 2 and Level 3 would be referred to as L2+.

Presumably, the lowest levels, Level 0 and Level 1, which are included in the not autonomous grouping, would simply be referred to as Level 0 and Level 1 (though, I suppose we could try to come up with something catchy like say “L1-“ or something like that).
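
As a one-glance summary of this proposed renaming, here is a tiny sketch mapping SAE levels to the bifurcated labels. The grouping follows the proposal as described above; the exact label strings are my own phrasing, and since the "L1-" idea is only a tongue-in-cheek aside, the lowest levels keep their plain names.

```python
# A tiny sketch of the proposed bifurcated nomenclature.
def reframed_label(sae_level: int) -> str:
    if sae_level >= 4:
        return "L4+ (autonomous)"
    if sae_level >= 2:
        return "L2+ (not autonomous)"
    return f"Level {sae_level} (not autonomous)"

for level in range(6):
    print(level, "->", reframed_label(level))
```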

Can we get the auto makers and tech firms to adopt this kind of nomenclature?

It will certainly be hard to do. The jockeying for position in the autonomous car realm is tremendous and those that are marketing their wares will often go to rather sketchy lengths to do so.

For my article about the misuse of sizzle reels for AI self-driving cars, see: https://aitrends.com/selfdrivingcars/sizzle-reel-trickery-ai-self-driving-car-hype/

For my article about the fake news about AI self-driving cars, see: https://aitrends.com/selfdrivingcars/ai-fake-news-about-self-driving-cars/

Efforts to seek near-term glory need to be balanced with the future of the industry. Auto makers and tech firms need to take a longer-term perspective and realize that if the public and regulators and media turn toward blocking progress in the self-driving car industry, it will hurt everyone involved.

By being more mindful about how we describe these innovations, it might provide some semblance of a chance to keep the entire barrel from being cast as no good. This is not just trying to call a rose by some other name. This is differentiating what one kind of “automation equipped” car can do versus what a truly autonomous self-driving car can do. We need to be able to call an orange an orange, and an apple an apple. Right now, we appear to be heading toward a boiling point (recall the story of the frog!), and it would make sense to do something before it is too late and the pot boils over.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

