
Ethics Review Boards and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

As a driver of a car, you are continually making judgments that involve life-or-death matters. We don’t tend to think explicitly about this aspect of driving and take it for granted most of the time. Whenever there is a car accident, the topic comes up about what the driver did or did not do, and any aspects of how judgment came into play in the accident usually come to light.

Suppose you are driving down a street at nighttime. You have your radio on. It has been a long hard day at work and you are heading home for the evening. How well are you paying attention to the driving task? Perhaps your thoughts are focused on a difficult problem at work that you are hopeful of solving. The radio is meanwhile tuned to a talk show and it covers a topic of keen interest to you.

You normally take the main highway to get home, but tonight you opted to use a less common road that you hope has little traffic and will allow you to get home faster. The speed limit is 45 miles per hour, and you are doing about 55 mph. Going over the speed limit on this particular road happens all the time and going just 10 mph over the speed limit is actually not much of an excess in comparison to what other drivers do.

Suddenly, via your headlight beams, you see what might be a figure in the road up ahead. There’s not a crosswalk nearby and so you weren’t anticipating that any pedestrians would be in the roadway. You weren’t looking for pedestrians, plus with your thoughts on the problems at work and with your somewhat rapt listening to the radio talk show, it all added up that you didn’t notice the shadowy figure at first.

Your mind races as to whether it really is a person or not. The roadside lighting in this area is rather poor. You have only a few seconds of time to decide what to do. Should you slam on the brakes? But, if so, there is a car behind you and they might ram into your car. Plus, perhaps by slamming on the brakes you might lose control of the car and not be able to maneuver it. You could instead try to swing wide, out of your lane, and do so in a somewhat frantic manner under the belief that the shadowy figure is headed in the other direction. You might just skirt the figure by going to the left, if you can swerve just enough and if the shadowy figure continues to move to the right.

Swinging over into the other lane isn’t so easy though. There is another car in that lane. You might cause the other driver to react and they might then swerve into the median. You could maybe try to go to your right, up on the sidewalk, doing so to avoid the figure in the street. But, it is so dark that you aren’t sure if there might be anyone on the sidewalk and besides the idea of driving on the sidewalk seems almost crazy, really just a desperate last resort to avoid hitting the figure in the street.

This is a relatively realistic scenario and one that any of us could encounter.

Let’s analyze the situation.

The driver is faced with a rather untoward dilemma. There might or might not be a pedestrian in the path of their car. Whatever is in the path, the driver only has a few seconds to decide what it is and what action to take.

If the driver opts to use their brakes, it could lead to the car behind the driver doing a rear-ender and it might injure or kill the human occupants in either or both cars.

If the driver opts to swing into the next lane to the left, it could lead to the car in that lane becoming concerned and possibly veering into the median, which could injure or kill the human occupants, and the car might careen further into traffic, injuring or killing other humans in nearby cars.

If the driver opts to drive up onto the sidewalk in hopes of avoiding the figure in the street, there might be pedestrians there that could get injured or killed, plus the driver might lose control of the car and get injured or killed too.

If the driver decides to stay the course and continue forward, they will potentially hit the shadowy figure. This might injure or kill the figure, assuming it is a human, and the driver might also get injured or killed in the process of striking the figure.

Is there a proper and precise equation or some form of calculus that we can use to identify what the correct course of action is?

I don’t think so.

Suppose you had time to try and develop some kind of calculation, what would it consist of? You might try to find out the ages of the various “participants” such as the driver of the car, the driver of the car behind the dilemma-facing car, the driver of the car in the next lane over, etc. Maybe you could reason that the older the driver, the more of their life they have already lived, and so the less weight they get when deciding who should take the brunt of the situation. In other words, you might say that moving into the lane to the left is the “better” option because the driver in that car is the oldest of those involved and thus has already lived their life.

Some would say that your use of age in this manner is outrageous and absolutely wrong. You might instead try to calculate the societal value of each participant, somehow trying to encompass what they do and how they are helping our society. Or, maybe you come up with some other factors to try and weigh the value of their human lives.

You might instead just decide to use probabilities regarding the various actions involved. If the approach of slamming your brakes has a 30% chance of injury or death, while if you swing into the next lane there is a 60% chance of injury or death, perhaps you should go with the brakes option since it has the lower probability of an adverse outcome.
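The probability-based approach amounts to picking the option with the lowest estimated chance of an adverse outcome. As a minimal sketch, assuming illustrative risk numbers (the option names and probabilities below are made-up assumptions, not real risk data):

```python
# Hypothetical sketch: comparing evasive maneuvers by estimated probability
# of injury or death. All option names and numbers are illustrative
# assumptions, not actual risk estimates.

options = {
    "brake_hard": 0.30,      # assumed risk (rear-end collision from behind)
    "swerve_left": 0.60,     # assumed risk (car in the adjacent lane)
    "mount_sidewalk": 0.50,  # assumed risk (possible pedestrians, loss of control)
    "continue_ahead": 0.70,  # assumed risk (the figure may be a pedestrian)
}

def least_risky(option_risks):
    """Return the option with the lowest estimated probability of harm."""
    return min(option_risks, key=option_risks.get)

print(least_risky(options))  # "brake_hard" given the assumed numbers above
```

Of course, the hard part is not the comparison itself but producing defensible probability estimates in the first place, and in real-time.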

These analytic methods could be handy, and yet it seems unlikely that any of us could individually come up with an agreeable set of equations or formulas to cover such circumstances for ourselves and others. As far as we all know, the method used by today’s human drivers is the nebulous notion of “human judgment.” None of us can really say whether our brains do some kind of mathematical calculation, nor can we explain directly why we did something. We can rationalize what we did by offering an explanation, but the explanation itself might have little to do with what really happened inside our heads.

Explanations are provided as a means to try and turn our mental aspects into something that can be elaborated to other people. Usually, our explanations are intended to suggest a logical means of how we arrived at a decision. No one, though, can definitively say or prove that their mind actually carried out the logical steps offered. Instead, the explanation is an after-the-fact reconstruction that might match what our minds considered, or it might be completely concocted.

Suppose the driver in this case decides to go ahead and run into the shadowy figure. Did they do so after carefully considering all of the other options?

The driver might after-the-fact claim they considered the various options, but perhaps they did and maybe they did not. It could be that the after-the-fact explanation is an attempt to rationalize what took place. The driver might not want to seem as though they just mindlessly rammed into the shadowy figure, and as such, provide instead an elaborated indication of the other options, which might allude to the notion that the driver tried to find a means to avoid the incident, even though maybe they just froze-up or maybe didn’t even notice the shadowy figure beforehand at all.

Ponder for a moment the number of times that each of us as car drivers make these kinds of spur of the moment decisions, doing so in real-time, in order to try and avoid causing some kind of car incident that might injure or kill others. It’s not just limited to those occasions when you get into a car accident. You undoubtedly have lots of situations that fortunately don’t lead to an accident per se, and yet you had to make some tough decisions anyway.

In this case of the driver, suppose it turns out that the shadowy figure was actually a large tumbleweed that was blowing across the street. If the driver opted to plow ahead and into the tumbleweed, perhaps it led to no car accident. The driver just kept going. Meanwhile, the car behind also kept going, and the car in the lane to the left kept going. None of them are injured or killed. Yet, there was a split second or so when a decision might have been made that could have led to their injury or death. No one would have likely recorded this non-event and no explanation or rationalization was sought or tendered.

I’d like to suggest that with the millions of cars on our roads on a daily basis, we are all involved in millions upon millions of such judgment calls, continually, and those of us in the cars, either as drivers or passengers, are subject to the outcomes of those judgments. So too are the pedestrians nearby to wherever cars are driving.

It is actually a bit staggering that we don’t have more car accidents. With this many people making those millions upon millions of judgments, it is somewhat of a miracle that their judgments are good enough and sound enough that we don’t experience even more car incidents, and more injuries and deaths accordingly.

I hope this doesn’t scare you from getting into your car. Also, I hope that this discussion hasn’t been overly macabre or ghastly. As I suggested earlier, the reliance on human judgment permeates our car driving and determines life-and-death matters. We don’t usually overtly consider this aspect in our daily driving and tend to take it in stride.

What does this have to do with AI self-driving cars?

AI Self-Driving Cars Will Need to Make Life-or-Death Judgements

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One crucial aspect to the AI of self-driving cars is the need for the AI to make “judgments” about driving situations, ones that involve life-and-death matters.

I’ve had some AI developers tell me that there isn’t a need for the AI to make such judgments. When I ask why the AI does not need to do so, the answer is that the AI won’t get itself into such predicaments.

I am flabbergasted that someone could have such a belief. In the scenario that I just described, I would assert that the AI could readily have gotten itself into exactly the same predicament that I had indicated the human driver was involved in.

Some might say that the AI would not be distracted by the radio playing and would not be thinking about problems at work. Okay, let’s subtract that entirely from the scenario. Some might say that the self-driving car would not be driving over the speed limit. I’d tend to debate that aspect, but anyway, let’s go ahead and assume that the self-driving car was doing the 45-mph speed limit.

We still have the situation of the car approaching the shadowy figure, and we need to consider the matter of the car behind the self-driving car and the car that is to the left of the self-driving car, all in real-time, with just a few seconds to decide, and with people’s lives hanging in the balance.

If you were to suggest that the self-driving car would be better able to detect the shadowy figure because the self-driving car has not only cameras but also radar, sonic, and perhaps LIDAR capabilities, I’d say that yes there is a chance of having a more robust indication, but in practical terms those sensors won’t guarantee a better detection. Anyone who knows much about those sensors would concede that you can still have an imperfect indication of what is ahead of the self-driving car. There are many factors that can limit the capabilities of those sensors.

Some would say that the self-driving car would make sure to have sufficient distance between it and other cars so that it could have the needed stopping distance unimpeded. I don’t quite see how that is feasible per se. If the car behind you is on your tail, how do you ensure that there is sufficient stopping distance without getting rear-ended by that other car?

The answer usually is that the other car is being driven by a human and the “stupid” human has not allowed for the proper stopping distance. Therefore, the argument goes, the problem is the human driver: if we just remove all of the pesky human drivers and have only AI self-driving cars, we would not need to be concerned with cars being too close on our tails.

This will require me to take you on a related tangent about the nature of self-driving cars.

There are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

Let’s focus herein on the true Level 5 self-driving car. Much of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here’s the usual steps involved in the AI driving task:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance
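
As a rough illustration, the steps above can be sketched as a processing cycle. Every name below is a hypothetical placeholder using trivial stand-ins for each stage, not the actual architecture of any real system:

```python
# Hypothetical sketch of the AI driving-task cycle listed above, with
# trivial stand-ins for each stage. All names are illustrative placeholders.

def fuse(readings):
    # 2. sensor fusion: merge per-sensor interpretations into one picture
    return {k: v for r in readings for k, v in r.items()}

def driving_cycle(sensors, world_model):
    # 1. sensor data collection and interpretation (each sensor is a callable)
    readings = [sensor() for sensor in sensors]
    fused = fuse(readings)
    # 3. virtual world model updating
    world_model.update(fused)
    # 4. AI action planning (grossly simplified to a single rule)
    plan = "brake" if world_model.get("obstacle_ahead") else "cruise"
    # 5. hand the plan off to car-controls command issuance
    return plan

world = {}
camera = lambda: {"obstacle_ahead": True}
radar = lambda: {"range_m": 38.0}
print(driving_cycle([camera, radar], world))  # "brake"
```

In a real system, each of these stages is of course an enormous subsystem in its own right; the point of the sketch is only the flow from sensing to commands.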

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other. Period.

Returning then to the matter at hand of the scenario about the driver and the shadowy figure in the roadway, we need to dispense with the notion that the cars around the self-driving car will be only AI self-driving cars. Realistically, there will be a mix of human driven cars and AI self-driving cars.

I say this to clarify that the scenario I’ve painted remains the same, namely the AI is faced with the matter of having to try and determine whether to hit the brakes but might get rear-ended, or swing into the next lane but might cause the other driver to veer into the median, or the AI might drive onto the sidewalk but maybe harm pedestrians, or the AI might continue straight ahead and potentially plow into the shadowy figure.

As mentioned, there are AI developers that claim that an AI self-driving car would not let itself get into such a predicament, but there doesn’t seem to be any realistic world in which the AI could have magically avoided this situation and many other such situations. I’m putting a stake in the ground and will unabashedly say that there are going to be unavoidable crashes that AI self-driving cars will need to confront (and, of course, there will be avoidable crashes too, for which hopefully the AI will be astute enough to avoid).

I’ve stated many times that there are crucial ethical decisions or judgments that the AI will need to make when driving a self-driving car. I don’t believe you can hide behind the matter by saying that the AI will never get itself into a situation involving an ethical decision or judgment. Saying this belies the very act of driving a car. Anyone developing an AI self-driving car that seems to think that the AI won’t get itself mired into such situations has their head in the sand, and, worse still, they are developing an AI system that presumably cannot handle the real-world driving tasks that the AI will face.

For the moment, please go with me on the notion that the AI will need to cope with ethical decisions or judgments as part of the driving task. If that is indeed the case that the AI will need to deal with the matter, the question then becomes how it will do so.

You might suggest that the AI needs to use common sense reasoning.

Common Sense Reasoning for AI Self-Driving Cars Not Available

As humans, we seem to have the ability to use common sense about the world around us. We somehow know that a chair is a chair and that the sky is blue. We also presumably use common sense to decide when to slam on our brakes in the car versus swerving into another lane. Well, sad to report that we don’t yet have any true semblance of common-sense reasoning for AI systems, and so let’s count out for now the “solution” that we could just plug in common-sense reasoning and have thereby dealt with the ethical choices matter swiftly.

You might say that the AI should use Machine Learning (ML) to figure out how to cope with these ethics related decisions. Are you suggesting that we let AI self-driving cars drive around and sometimes they hit and injure or kill someone, and sometimes they don’t, and by the collection of such driving instances that somehow over time the ML “learns” which approach to take in these dicey situations? This seems impracticable. I would wager that most of us would not want to be one of the humans injured or killed during the thousands of such instances that the ML needed to collect to be able to find patterns and “learn” from the experiences.

In short, the better approach would be to explicitly design, develop, test, and field the ethical decision making or judgment aspects into the AI.

Thus, since we don’t yet have available any kind of automated common-sense reasoning, and since relying upon ML to somehow miraculously figure out what to do over time is impractical (grave results are apt to occur along the way), it would seem prudent to overtly tackle the problem and devise a system capability for the AI to rely upon.

If we do nothing, the AI will be unable to adequately perform when such moments arise, and the result will be likely random chances of the self-driving car either managing to avoid an incident or getting involved in an incident and doing so without any explainable rhyme or reason for it. I don’t think we want self-driving cars to become clueless rockets of potential destruction.

Now, assuming that indeed the appropriate approach would be to devise a system component for this purpose of ethical decision making, this raises a slew of technological and societal considerations.

Should this be left to the auto makers and tech firms to devise on their own, each independently creating such system components? This would seem somewhat questionable. If you have brand X self-driving car driving around and it is going to decide one way as to how to ascertain whether to proceed forward toward the potential pedestrian or weave or hit the brakes, and there is brand Y self-driving car that decides another way, it would be potentially confusing for the public at large as to what to expect from the AI of these self-driving cars.

Besides the aspect that each of the auto makers or tech firms would need to reinvent the wheel, as it were, in terms of trying to come up with a viable approach, it would seem more consistent and transparent if some overarching approach were used. This too would deal with the potential thorny aspect that involves the crux of how the decisions are being made.

The thorny aspect involves how to decide what the “best” course of action might be in these ethical dilemmas. I had earlier asked whether humans use some kind of mental calculus to determine which choice to make. Do humans weigh each factor? Do they consider the age of those that might get injured or killed? How do humans do this? We can’t say for sure.

This makes trying to have an AI system do something similar a problematic issue. It would be handy to know how humans make such decisions so that we could just pattern the AI to do the same. I’ve had some AI developers tell me that all it will take is to ask people how they decide, and then essentially “program” this into the system. As pointed out earlier, the rationalizations that people provide are not necessarily how they truly decide, and we are not even close as yet to being able to probe into the mind to discover how people really make such decisions.

Perhaps this takes us toward the ML approach and the need to collect sufficient data, though doing so via car accidents themselves would seem dubious. Another approach would be the use of simulations and have humans that gauge and make choices in the car driving simulations, out of which the ML might “learn” the approach being used by humans (even if we don’t know what’s actually happening in their minds).
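The simulation-based idea amounts to logging what human participants chose in simulated dilemmas and then mimicking the prevailing choice. A minimal sketch, assuming a made-up log format (scenario labels and choices below are illustrative assumptions):

```python
from collections import Counter, defaultdict

# Hypothetical sketch: aggregating choices made by human participants in a
# driving simulator so a learning component can mimic the majority choice
# per scenario. Scenario labels and choices are illustrative assumptions.

def learn_policy(simulation_log):
    """simulation_log: iterable of (scenario, human_choice) pairs."""
    tallies = defaultdict(Counter)
    for scenario, choice in simulation_log:
        tallies[scenario][choice] += 1
    # Policy: the most common human choice observed for each scenario.
    return {s: c.most_common(1)[0][0] for s, c in tallies.items()}

log = [
    ("figure_ahead_car_behind", "brake_hard"),
    ("figure_ahead_car_behind", "swerve_left"),
    ("figure_ahead_car_behind", "brake_hard"),
]
print(learn_policy(log))  # {'figure_ahead_car_behind': 'brake_hard'}
```

A real learner would need to generalize across scenarios rather than tally exact matches, but the sketch shows the core idea of distilling human choices without knowing the reasoning behind them.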

Another approach would involve using an actuary’s kind of analytics method. As emotionally difficult as it might seem, there might well be a need to identify and agree to factors that should come to play in these decision moments. The result would be developed as part of the AI for use in the on-board system of the self-driving car. The same kind of gut-wrenching aspects are involved in trying to decide actuarial matters and thus it seems potentially fitting to use the same kind of methods for these purposes.

Rather than leaving this task to the auto makers or tech firms alone, some have proposed that an Ethics Review Board mechanism should be utilized. These would presumably be special committees or boards that would meet to aid in determining the parameters and thresholds for use in the ethics aspect components of the AI self-driving car systems. It might be something crafted by industry or it might be something created via potential regulations and regulatory bodies.

These Ethics Review Boards might be established at a federal level and/or a state level. They would be tasked with the rather daunting and solemn task of trying to guide how the AI should be established for these tough decision-making moments (providing the policies and procedures, rather than somehow “coding” such aspects). They might also be involved in assessing incidents involving AI self-driving cars that appear to go outside the scope of what was already established, and thus be an ongoing aid in the re-adjustment and calibration of the implemented approach.

Some have suggested that if there was an AI component for these ethical decision-making moments, and if there is a desire to standardize it across self-driving cars, perhaps the component should be housed in the cloud. Similar to how self-driving cars will be using OTA (Over The Air) electronic connections to update the AI systems, perhaps the AI component would not be embedded into the on-board system of the self-driving car and instead be accessed remotely.

Of course, the remote access aspects might get in the way of the decision making itself. It is more than likely that the ethics component would need to be accessed in real-time with split seconds to render a choice. Doing so via electronic connection seems dicey and prone to being inaccessible at the moment that the aspect is urgently needed.

What would seem prudent would be to have an on-board capability that could be updated via a cloud or centrally based standard. The on-board component would then be honed to presumably be able to render a choice in whatever sufficient time is available in a given circumstance. If insufficient time existed in any particular instance, there would need to be some shortcut choice capability, which I mention since once again the thought is to avoid an arbitrary choice and one-way-or-another have a “reasoned” choice that can be understood and explained.
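One way to frame that shortcut mechanism is a deadline-aware decision loop: evaluate options while time permits, and if the deadline hits, fall back to a pre-agreed default. The sketch below is purely illustrative; the fallback choice, deadline, and evaluator are all assumed placeholders, not anyone's actual design:

```python
import time

# Hypothetical sketch: an on-board ethics component that must answer within
# a hard deadline, falling back to a pre-set "shortcut" choice if the full
# evaluation would overrun. All names and numbers are illustrative.

FALLBACK_CHOICE = "brake_hard"  # assumed pre-agreed default, updatable via the cloud

def decide(option_risks, deadline_s, evaluate):
    start = time.monotonic()
    best, best_risk = FALLBACK_CHOICE, float("inf")
    for option, prior in option_risks.items():
        if time.monotonic() - start > deadline_s:
            return FALLBACK_CHOICE        # out of time: use the reasoned default
        risk = evaluate(option, prior)    # refine the risk estimate per option
        if risk < best_risk:
            best, best_risk = option, risk
    return best

# Usage with an instantaneous evaluator: finishes in time, picks the lowest risk.
risks = {"brake_hard": 0.3, "swerve_left": 0.6, "continue_ahead": 0.7}
print(decide(risks, deadline_s=0.05, evaluate=lambda opt, prior: prior))  # "brake_hard"
```

The design point is that the fallback is itself a deliberate, explainable choice set by policy, rather than whatever the system happened to be doing when time ran out.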

One question that some have posed is whether such an ethics decision-making component could truly handle all of the many variants of the Trolley problem. For example, I’ve outlined the case of the driver that is not sure if a pedestrian is in the road, and there is a car immediately behind, and there is a car to their left. Surely there are thousands of such potential instances, all of which would have variants. How could a system possibly contend with so many variations?

I’ll bring us back to the aspect that humans seem to be able to contend with this multitude of variants. I’d guess that the driver faced with the situation I’ve outlined has not experienced that exact situation before. Instead, they have an overall experience base and need to use whatever they can to try and apply it to the moment and the situation at hand. Presumably, the AI component would need to do the same. Plus, the AI component would be adjusted and enhanced over time (via the use of the Ethics Review Boards).

There is unquestionably bound to be controversy about the notion of the Ethics Review Boards. Some suggest that they should be called Safety Review Boards or perhaps Institutional Review Boards, providing a naming that might be more palatable. There are some that have pointed out that there is the possibility of having them become labeled as “death panels” as per the political term that arose during the 2009 debates about aspects of the federal healthcare legislation (this phrasing seemed to strike a chord with the public at large, though there is quite a dispute about the merits of the labeling).

In one sense, it could be argued that the Ethics Review Boards would be shaping how the AI will respond to dicey driving incidents, and as such, those Boards are deciding how life-or-death decisions will be made. It would be no easy matter for the members to serve on such a committee. Careful selection and criteria for participation would need to be figured out.

As unsettling as it might be to have such Boards, the alternatives are to allow whatever happens to just happen, or to allow particular auto makers or tech firms to make those a priori choices for us all. It would seem that society would likely prefer the more open, transparent, and collective approach of using the Boards, but this is something yet to be ascertained.

A few final comments that I’d like to cover on this topic encompass various security related aspects.

One concern would be that a hacker might somehow be able to mess with the on-board ethics choice component and alter it so that it would do something untoward. When the ethics component is involved in a dire situation, the hacked version might make a choice that purposely seeks to maximize injury and death, rather than minimize it. Of course, systems security does need to be paramount for the on-board AI, and in fact I’d suggest that if the hacker could hack pretty much any part of the AI of the self-driving car, the odds are they can produce an untoward result in some fashion.

In essence, rather than focusing solely on the ethics component, nearly any other element of the AI system, if hacked, can likewise produce adverse consequences. As such, all I’m saying is that you cannot argue that there should not be an ethics component due to the potential for it being hackable, since you could make the same argument for nearly all other components of the AI system for a self-driving car. If you then are making the argument that any of those components could be hacked and therefore they are inherently untrustworthy, you might as well then say that there is no such viable thing as an AI self-driving car.

In a somewhat similar manner, let’s consider the cloud and the OTA. One might argue that suppose a hacker gets to the cloud version of the centralized ethics component and messes with it. The hacker has made things presumably easier for themselves in that they didn’t need to try and access any particular self-driving car, and instead they will let the OTA do so for them. The OTA would presumably blindly and dutifully send the updates to the on-board AI systems and thus allow a viral-like spread of the untoward ethics component aspects.

I’ll invoke the same argument as before. Yes, if a hacker could hack the centralized version, it would potentially produce this kind of calamity. I would submit though that if the hacker could alter nearly any aspect of the centralized patches that are going to be pushed down to the AI self-driving cars, you can have an untoward result. As such, the security needs to be quite tight at both the on-board self-driving car and at the OTA cloud-based elements. Either one can allow for something untoward if the security is not sufficiently tight.
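Tightening the OTA side typically means the on-board system authenticates every update before installing it. As a minimal sketch of that principle (a real deployment would use asymmetric signatures from a vetted crypto library; the key and payload below are illustrative assumptions):

```python
import hashlib
import hmac

# Hypothetical sketch: verifying an OTA payload before the on-board system
# applies it. Uses a symmetric HMAC purely to illustrate the principle that
# unauthenticated updates must be rejected; not a production design.

SHARED_KEY = b"example-key-not-for-production"  # assumed provisioning secret

def sign(payload: bytes) -> str:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def apply_update(payload: bytes, signature: str) -> bool:
    if not hmac.compare_digest(sign(payload), signature):
        return False  # tampered or unsigned update: refuse to install
    # ... install payload into the on-board ethics component ...
    return True

update = b"ethics-policy-v2"
good_sig = sign(update)
print(apply_update(update, good_sig))       # True
print(apply_update(b"tampered", good_sig))  # False
```

Note the use of a constant-time comparison (`hmac.compare_digest`) rather than `==`, which avoids leaking signature information through timing.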

I’ve had some AI developers tell me that their “solution” to these ethical choice situations involves having as a default that if the self-driving car cannot decide what to do, it will simply slow down and come to a halt. I hope that you can readily see that such an approach is nonsensical. Using my earlier example, would we have wanted the driver to have simply slowed down and come to a halt? This is quite impractical in the given situation and as I say is a nonsensical way of thinking.

Another idea that has been offered would be to ask the humans in the self-driving car as to what the AI should do. Again, a nonsensical answer. First, suppose there aren’t any human occupants in the AI self-driving car at the time of such a decision-making moment? We are going to have AI self-driving cars driving around on their own, quite a bit.

Second, even if there is a human on-board, would they be able, out of the blue, to make such a decision? Let’s assume they aren’t driving and aren’t paying attention to the driving task, which in a Level 5 self-driving car is indeed their prerogative.

Third, suppose the human on-board is drunk? Suppose the human on-board is a child? Suppose there are humans on-board and yet the decision needs to be made within 2 seconds – how could the humans be told the problem and offer an answer in a mere two seconds’ worth of time? And so on.

Another point some make is that maybe we should set up remote human operators that would make these decisions. Sorry, it’s a nonsensical idea. Suppose the remote operator could not fully grasp the nature of the situation? Suppose they only had two seconds to decide and meanwhile somehow needed to “review” what the situation is and what options to consider? Suppose there are electronic communication delays or snafus and the remote operator is not able to participate in the time needed? And so on.

I’d say that the automation is what is going to get us into this predicament, and it would seem like the automation is the only means to get out of it (as coupled with the Boards and the approach to devising the solution). Though, when I say get out of it, let’s be clear that however this is devised, the odds are that the AI system will be second-guessed about the choices made. This would be true of humans and it will certainly be the same about the AI. The AI might “perfectly” execute whatever the AI ethics component consists of, and yet still human lives might be lost.

There are unavoidable crashes: situations in which, no matter what you do, a crash is going to occur. For my earlier example, suppose it really was a pedestrian in the roadway. And suppose that each of the choices involved either injuring or killing someone: the pedestrian, the driver in the car behind you, the driver in the car to your left, or you as the driver. There is not going to be a magical way to get out of unavoidable crashes unscathed.
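In such a scenario, whatever the ethics component decides reduces to picking the least-bad option. A minimal sketch of that selection follows; the maneuver names and harm scores are made-up numbers for illustration only, and no real ethics component would be this simple:

```python
# Hypothetical sketch: when every maneuver carries some expected harm,
# the ethics component can only pick the least-bad option.
# All names and weights are invented for illustration.
def least_harm(maneuvers: dict[str, float]) -> str:
    """Pick the maneuver with the lowest expected-harm score."""
    return min(maneuvers, key=maneuvers.get)

# Expected-harm scores for the unavoidable-crash scenario above
# (entirely made-up numbers):
scenario = {
    "hard_brake": 0.7,        # risk: the car behind rams you
    "swerve_left": 0.5,       # risk: sideswipe the car to your left
    "continue_straight": 0.9, # risk: strike the pedestrian
}
print(least_harm(scenario))  # -> "swerve_left"
```

Note that even this toy version makes the hard question visible: someone has to decide how those harm scores are assigned, which is exactly the kind of judgment an ethics review board would be asked to weigh in on.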

Would we prefer as a society to pretend it won’t happen and then wait and see? Or would we rather step up to the matter and address it head-on? Time will tell.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.
