
Pranking of AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

When you pull a prank on someone, it is hopefully done in jest and without any particular adverse consequences. You’ve likely seen the many YouTube videos of people pulling pranks on other people. Sometimes these are innocent pranks and, other than the surprise to the person being pranked, there aren’t any lasting negative impacts. There are, though, pranks that go over the line, so to speak, and at times cause harm to the target of the prank and possibly to others too.

Some would say that pranks are fun, interesting, and not a big deal. Even those that pull pranks would likely, if reluctantly, admit that you can take things too far. A good friend of mine got hurt when someone at a bar opted to suddenly pull their chair away from the table after they had gotten up, and the friend, upon trying to sit back down, unknowingly and shockingly went all the way to the floor. The impact with the floor hurt their back and neck, and nearly caused a concussion. My friend went to the local emergency room for a quick check-up. A seemingly “fun” joke that was meant to be harmless turned out to have serious consequences.

Of course, pranks can be purposely designed to be foul. Some years ago, when I was a professor, a colleague was upset with someone and opted to “prank them” by messing with a research project the person had labored on for several months. The colleague went into the lab where the experiment had been gradually evolving and made some changes, unbeknownst to the principal researcher.

The principal researcher later found that her results seemed quite odd, and eventually traced it to someone having intervened in the experiment. This set her back many months on the work. This was not an accidental kind of prank result, but one done deliberately. The person that did the prank tried to pretend that they had no idea that doing it would cause such difficulty. Most everyone saw through that rather hollow attempt to act naïve about the situation.

Let’s shift attention from the notion of pranks to something similar to a prank, but we’ll recast it in different terms.

Sometimes in a sport like basketball or football, you might use a feint or dodge to try to fool your opponent and gain a competitive advantage over them. While dribbling a basketball, you might move your body to the right as though you are going to head in that direction, and then suddenly swerve instead to your left. This feint or “fake” move can be instrumental to how you play the sport. It isn’t something you do only on rare occasions; instead, it’s an ongoing tactic or even strategy for playing the game.

When you consider the sport of fencing, feints are the lifeblood of the competition. You want your opponent to think you are going to lunge at them, but you don’t actually do so. Or, you want them to think you are not going to lunge at them, but you do so. Typically, a so-called feint attack consists of making the appearance of an attack to provoke a response from the other fencer. If they fall for the trap, you’ve then got them in a posture wherein you can potentially undertake a true attack.

There’s also the feint retreat, which usually consists of actively engaging your opponent and then making a fast retreat, even though they might think you are coming on strong toward them and have reacted accordingly. When you watch a fencing match, it is fast paced and consists of a dizzying series of rapid feints. In a sport like basketball or football, feints tend to play out over many seconds, while in fencing they can occur in split seconds. Each fencer is trying to feint the other, and each is trying to detect the feints and react in a manner best suited to counteract the feint of the other.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. One of the current concerns about AI self-driving cars is that some people are trying to prank them.

Allow me to elaborate.

The AI of today’s self-driving cars is still quite crude in comparison to where we all hope to be down-the-road. Generally, you can consider the AI to be a very timid driver. It would be as though you had a novice teenager that’s first learning to drive. You’ve probably seen teenage drivers that go very slowly, take turns with great caution, ride their brakes, come to full and lengthy stops at stop signs, and so on. In some sense, the AI for self-driving cars is currently performing in a like manner.

You need to be aware that there are various levels of AI self-driving cars. The topmost level is referred to as Level 5. A Level 5 self-driving car is one that is supposed to be able to drive the car without any human driver needed. Thus, the AI needs to be able to drive the car as though it was as proficient as a human driver. This is not easy to do. I’ve mentioned many times that to have AI that’s good enough to be like a proficient human driver is nearly akin to achieving a moonshot.

For self-driving cars less than a Level 5, it is assumed and required that a human driver be present. Furthermore, the human driver is considered responsible for the driving task, despite the fact that the driving is co-shared by the human driver and the AI system. I’ve said many times that this notion of a co-shared driving effort is problematic and that we’re going to have lots of deadly consequences. In any case, for the purposes of this pranking topic, I’m going to focus on the Level 5 self-driving car, though in many respects my comments are applicable to the less than Level 5 self-driving cars too.

Early AI Self-Driving Cars Cautious Like Teenage Drivers

So, imagine that you’ve got an AI system that drives a Level 5 self-driving car in the simplest of ways, being at times akin to a teenage driver (though, don’t over-ascribe that analogy; the AI is not at all “thinking” and thus not similar to a human mind, even that of a teenager!). The AI is driving the self-driving car and taking lots of precautions. This makes sense in that the auto makers and tech firms don’t want an AI self-driving car to be driving in a manner that could add to the risk of an incident occurring. The media is poised to clamor about self-driving car incidents, plus of course the auto makers and tech firms don’t want to cause human injuries or deaths (though their AI self-driving cars might lead to such results).

You’ve got a kind of grand convergence in that some people have figured out how timid these AI self-driving cars are, and of those people, there are some that have opted to take advantage of the circumstances. As I’ll emphasize in a moment, more and more people are going to similarly opt to “prank” AI self-driving cars.

When AI self-driving cars first started appearing, they were considered a novelty and most people kept clear of them. They did so because they were surprised to even encounter one. It was like suddenly seeing an aardvark, a creature you’d heard existed but had never seen with your own eyes. You would give such a creature a wide berth, wanting to see what it does and how it does things. This was the “amazement” phase of people reacting to AI self-driving cars.

In addition, most of the time, the AI self-driving cars were being tried out on public roads in relatively high-tech areas. Places like Sunnyvale, California and Palo Alto. These are geographical areas that are dominated by tech firms and tech employees. As a tech person, if you saw an AI self-driving car, you were somewhat in the “amazement” category, but perhaps more so in the “this is someone else’s experiment and I respect their efforts” category. You weren’t necessarily in awe, but more so curious and also figured that you’d prefer that people don’t mess with your tech creations, so you should do the same about their tech creations.

One of the early-on stories about how the AI reacts in a timid manner consisted of the now famous four-way stop tale. It is said that an AI self-driving car would come to a four-way stop, and do what’s expected, namely come to a full and complete stop. Do humans also come to a full and complete stop? Unless you live in some place that consists of rigorously law-abiding human drivers, I dare say that many people do a rolling stop. They come up to the stop sign and if it looks presumably safe to do so, they continue rolling forward and into the intersection.

In theory, we are all supposed to come to a complete stop and then judge as to which car should proceed if more than one car has now stopped at the four-way stop. I’m sure you’ve had situations wherein you arrived at the stop sign just a second or two before another car, and yet that other human driver decided to move ahead, even though they should have deferred to you. It can be exasperating and quite irritating. There are some human drivers that think other human drivers are like sheep, and if they are a wolf they will be happy to dominate over the sheep.

Well, the AI self-driving car detected that other cars were coming up to or at the stop sign on the other sides of the four-way stop. The AI then calculated that it should identify what those other cars are going to do. If a human inadvertently misreads a stop sign and maybe doesn’t even realize it is there, and therefore barrels into an intersection, you would certainly want to have the AI self-driving car not mistakenly enter into the intersection and get into an untoward incident with that ditzy human driver. Crash, boom.

But, suppose those human drivers weren’t necessarily ditzy and were just driving as humans do. They came up to the stop sign and did a traditional rolling stop. The AI of the self-driving car would likely interpret the rolling stop as an indicator that the other car is not going to keep the intersection clear. The right choice then would be for the AI to keep the self-driving car at the stop sign, waiting until the coast is clear.

Suppose though that human driven cars, one after another, all did the same rolling stop. The AI, being timid or cautious, allegedly sat there, waiting patiently for its turn. I think we can all envision a teenage driver doing something similar. The teenage driver might not want to assert themselves over the more seasoned drivers. The teenage driver probably would figure that waiting was better than taking a chance on getting rammed in the middle of the intersection. In a manner of consideration, the AI was doing something similar.

Was it coincidental that the other cars, the human driven cars, each proceeded to do a rolling stop? It could be. But, it could also be that they noticed that the car waiting at the stop sign was an AI self-driving car. These clever humans might have wondered whether the AI was going to move forward or not. If it appeared that the self-driving car was sitting still, it would be like realizing that a teenage driver has frozen at the wheel and won’t take action. You might as well then just proceed. No need to wait for the teenage driver to realize they can go. This same logic was likely used by some of the human drivers, in effect never letting the AI get its turn to proceed.
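To make the dynamic concrete, here is a minimal sketch, in Python, of the kind of overly cautious four-way-stop policy described above. This is purely illustrative; the class names, fields, and threshold are my own assumptions and not any auto maker’s actual decision logic.

```python
# A hypothetical, overly timid four-way-stop policy. All names and thresholds
# are illustrative assumptions, not any vendor's real decision logic.

from dataclasses import dataclass

@dataclass
class ObservedCar:
    speed_mps: float          # current speed of the other car (meters per second)
    arrived_before_us: bool   # did it reach its stop line before we did?

def safe_to_proceed(others: list[ObservedCar], full_stop_threshold: float = 0.1) -> bool:
    """Proceed only if every other car is fully stopped and none has priority."""
    for car in others:
        # A rolling stop (speed above the threshold) is read as "that car will
        # enter the intersection", so the timid policy keeps waiting.
        if car.speed_mps > full_stop_threshold:
            return False
        if car.arrived_before_us:
            return False
    return True

# If human drivers keep doing rolling stops one after another, this policy
# never returns True and the self-driving car simply sits at the stop sign.
print(safe_to_proceed([ObservedCar(speed_mps=1.5, arrived_before_us=False)]))  # False
```

Notice that nothing in the sketch lets the car assert itself; a steady stream of rolling-stop drivers starves it indefinitely, which is exactly the bind the pranksters exploit.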

Is this a prank then by those human drivers upon an AI self-driving car?

I suppose you might argue with the use of the word “prank” in this case. Were those humans trying to pull away the seat of someone before they sat down? Were these humans messing with someone’s experiment as a means to get revenge? In one sense, you could argue that they were pranking the AI self-driving car, and doing so to gain an advantage over the AI self-driving car (didn’t want to wait at a four-way stop). You could also argue that it wasn’t a prank per se, but more like a maneuver to keep traffic flowing (in their minds they might have perceived this), and perhaps it is like a feint in a sport.

Imagine if the human driver coming up to the four-way stop tried to do a rolling stop, but meanwhile another human driven car did the same thing. You’d likely end-up with a game of chicken. Each would challenge the other. If you dare to move forward, I will too. The other driver is thinking the same.

I’ve seen this actually happen. At one point, as ridiculous as it might seem, I saw two cars that were in the middle of an intersection, each crawling forward, each unwilling to give up territory for the other car. It’s crazy too because they were holding up other traffic and making things go slowly even for their own progress. What idiots! People at times get into a possessive mode when behind the wheel of a car. They are willing to play games of chicken, in spite of the rather apparent dangers of doing so with multi-ton cars that can harm or kill people.

The four-way stop example showcases the situation of an AI self-driving car and its relationship to human driven cars. There’s also the circumstance of a pedestrian messing around with an AI self-driving car.

Pedestrians Likely to Mess with Self-Driving Cars

Depending upon where you live in the world, you’ve probably seen pedestrians that try to mess with human drivers of conventional cars. In New York, it seems an essential part of life to stare down human drivers when you are crossing the street, especially when jaywalking. There are some New Yorkers that seem to think that the mere act of making eye contact with a human driver will promote a safe journey across a street. It’s as though your eyes are laser beams or something like that. This is also why many New York drivers won’t make eye contact with a pedestrian, since it’s a means of pretending the pedestrian doesn’t exist. You can just blindly proceed, and the pedestrian better stay out of the way.

There is no ready means to do a traditional stare down with an AI self-driving car. Presumably, a pedestrian would then be mindful to take fewer risks when trying to negotiate the crossing of a street. They can no longer make the eye contact that says don’t you dare drive there and get in my pedestrian way. Instead, the AI self-driving car is possibly going to do whatever it darned well pleases.

I’d gauge that most pedestrians right now are willing to give an AI self-driving car a wide berth, but this is only if they even realize it is an AI self-driving car. There are many AI self-driving cars that are easily recognizable because they have a conehead shape on the top of the self-driving car (usually containing the LIDAR sensor). Once again fitting into the “amazement” category, pedestrians are in awe to see one drive past them. Give it room, is the thought most people likely have.

But, if you see them all the time and instantly recognize them, the awe factor is gone. Furthermore, if they are traveling slowly and acting in a timid manner, well, you’ve got better things to do than wait around for some stupid AI self-driving car to make its way down the street. It could also be that you don’t even realize it is an AI self-driving car, either because it doesn’t have that look of an AI self-driving car, or because you aren’t paying attention to the traffic and just opt to do what pedestrians often do, namely jaywalk.

In a manner similar to the four-way stop, you can often get an AI self-driving car to halt or change its course, doing so by some simple trickery. The AI is likely trying to detect pedestrians that appear to be a “threat” to the driving task. If you are standing on the sidewalk and a few feet from the curb, and you are standing still, you would be likely marked as a low threat or non-threat. If you instead were at the curb, your threat level increases. If you are in-motion and going in the direction of the street and where the AI self-driving car is headed, your threat risk increases further.

Knowing this, you can potentially fool the AI into assuming that you are going to jaywalk. Given the timid nature of the AI, it will likely then calculate that it might be safer to come to a stop and let you do so, or maybe swerve to let you do so. If one pedestrian tries this, and it works to halt the AI self-driving car, and there are more pedestrians nearby that witness it, they too will likely opt to play the same trick. Similar again to the four-way stop, you might have person after person, each of them making motions to get into the street, and the AI opting to just keep waiting until those pesky pedestrians are no longer considered a threat to proceeding.
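As a rough illustration of the threat-level idea just described, here is a small Python sketch. The factors and weights are hypothetical assumptions chosen to mirror the narrative, not a real perception pipeline.

```python
# A hypothetical pedestrian "threat" score based on position and motion.
# Factors and weights are illustrative assumptions only.

def pedestrian_threat_score(distance_from_curb_m: float,
                            moving_toward_street: bool,
                            speed_mps: float) -> float:
    """Higher score means the pedestrian is more likely to enter the car's path."""
    score = 0.0
    if distance_from_curb_m < 0.5:        # standing right at the curb
        score += 0.4
    elif distance_from_curb_m < 2.0:      # near the curb but back from it
        score += 0.2
    if moving_toward_street:
        score += 0.3
        score += min(speed_mps * 0.1, 0.3)  # faster motion toward the street adds risk
    return min(score, 1.0)

# A prankster lunging toward the curb scores high and triggers the timid
# response (brake or swerve), even if they never intend to step into the road.
print(pedestrian_threat_score(0.3, True, 2.0))   # 0.9 -> stop or yield
print(pedestrian_threat_score(3.0, False, 0.0))  # 0.0 -> low threat
```

The point of the sketch is that the prankster never has to leave the sidewalk; a convincing lunge is enough to push the score over whatever yield threshold a timid policy uses.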

You don’t need an AI self-driving car to see this same kind of phenomenon occurring. Drive to any downtown area that is filled with pedestrians. If those pedestrians sense that you are a sheep, they will take advantage of the situation. Many pedestrians use age as a factor in ascertaining whether to assert their pedestrian rights, such as if they see a teenage driver or a senior citizen driver. Some will gauge the situation by the type of car or how the car is moving down the street. And so on.

As we increasingly have more AI self-driving cars on our roadways, I’ll predict that the “amazement” category will fade and instead will be replaced with the “prank” an AI self-driving car mindset.

Some AI developers that are working on AI self-driving cars are dumbfounded when I make such a prediction. They believe fervently in the great value to society that AI self-driving cars will bring. They cannot fathom why people would mess around with this. Don’t those pedestrians realize that they are potentially undermining the future of mankind?

Why would people mess around with AI self-driving cars in this “prank” kind of way? I have lots of reasons why people would do so.

Here’s some of the likely reasons:

• Why did the chicken cross the road? To get to the other side. If humans, whether as drivers or pedestrians, perceive that an AI self-driving car is essentially in their way, some of those humans are going to find a means to keep it from getting in their way. These humans will simply outmaneuver the AI self-driving car, and one of the easiest means will be to do a feint; the AI self-driving car will do the rest for the human.
• Humans at times like to show that they are as smart as or even smarter than automation. What can be more ego boosting than to outwit a seemingly supreme AI system that’s driving a car?
• Some humans are frankly show-offs. Watch me, a pedestrian says to anyone nearby, as I trick a human driver into thinking I’m going to jump into the street and that driver freaks out. Equally treasured if it’s an AI system that gets freaked out.
• Humans love sports. They invent new sports all the time. Remember planking? Well, one brand new sport will be pranking an AI self-driving car. Imagine the YouTube videos and the number of views for the most outlandish pranks that successfully confounded an AI self-driving car.
• They say that curiosity killed the cat. Don’t know about that. A human, though, is likely to be curious about AI self-driving cars and what makes them tick. Wouldn’t you, as a pedestrian, be tempted to wave your arms in the air and see if it causes an AI self-driving car to react? Of course you would.
• There are some that believe AI is a potential doomsday path for society. In that case, you’d want to provide tangible examples of how AI can get things goofed-up. Making an AI self-driving car do something that we would not expect a human driven car to do, or that shows how much less capable the AI is than a human, would be a gold star for anyone aiming to take down AI.

Some Likely to Envision Their Pranks Are Helpful

• There could be some people that will ponder whether they might somehow help AI self-driving cars by purposely trying to prank them. If you come up behind your friend and say boo, the next time someone else does it, they’ll hopefully be better prepared. Some people will assume that if they prank an AI self-driving car, it will learn from it, and then no longer be so gullible. They might be right, or they might be mistaken to believe that the AI will get anything out of it (this depends on how the AI developers developed the AI for the self-driving car).

There you have it, a plethora of reasons that people will be tempted to prank an AI self-driving car. I can come up with more reasons, but I think you get the idea that we are heading toward a situation wherein a lot of people will be motivated to undertake such pranks.

What’s going to stop these pranksters?

Some auto makers and tech firms, and especially some AI developers, believe that we should go to the root of the problem. What is that root? In their minds, it’s the pesky and bothersome human that’s the problem.

As such, these advocates say that we should enact laws that will prevent humans from pranking AI self-driving cars.

In this view, if you have tough enough penalties, whether monetary fines or jail time, it will make pranksters think twice and stop their dastardly ways. I don’t want to seem unsympathetic to this notion, but can you imagine two people in prison, one saying to the other that he committed armed robbery, and the other saying that he waved his arms at an AI self-driving car and got busted for it?

Overall, it’s not clear that a regulatory means of solving the problem will be much help in the matter. I’m sure that law abiding people will certainly abide by such a new law. Lawbreakers would seem less likely, unless there’s a magical way to readily catch them at their crime and prosecute them for it. The AI developers would say that it’s easy to capture that the person did a prank, since the AI self-driving car will undoubtedly record video and other sensory data that could presumably be used to support the assertion that the human pulled a prank on the AI self-driving car.

If we go down that rabbit hole, how exactly are we to ascertain that someone was carrying out a prank? Maybe the person was waving their arms or making their way into the street, and they had no idea an AI self-driving car was there. Also, are we going to outlaw the prankster doing the same thing to a human driver? If not, could the prankster claim they were making the motions toward a human driver and not the AI? Or, perhaps they thought the AI self-driving car was being driven by a human and couldn’t see into the car well enough to tell whether there was a human driver there or not.

I’d say that the legal approach would be an untenable morass.

There are some though that counter-argue that when trains first became popular, people eventually figured out to not prank trains. It was presumably easy for someone to stand in the train tracks and possibly get an entire train to come to a halt. But, this supposedly never took root. Some say there are laws against it, depending upon which geographical area you are in. Certainly, one could also say that there are more general laws that could apply in terms of endangering others and yourself.

Culturally, we could try to blast those that conduct pranks. Make them outcasts of society. This might potentially have some impact on the pranksters. If you go along with the idea that AI self-driving cars are overall a boon for society, it’s conceivable that there could become a cultural momentum towards wanting to “help” the beleaguered AI and try to castigate the human pranksters that go the opposite direction by trying to confound the AI.

Some say that we should have a consumer education campaign to make people aware of the limitations of AI self-driving cars. Perhaps the government could sponsor such a campaign, maybe even making it mandatory viewing by government workers. It could be added into school programs. Businesses maybe would be incentivized to educate their employees about fooling around with pranking of AI self-driving cars.

Some are a bit more morbid and suggest that once a few people are injured by having pranked an AI self-driving car, and once some people are killed, it will cause people generally to realize that doing a prank on an AI self-driving car has some really bad consequences. People will realize that it makes no sense to try to fool AI self-driving cars since it can cause lives to be lost.

These and similar arguments are all predicated on the same overarching theme, namely that the AI is the AI, and that the thing that needs to be changed is humans and human behavior.

I’d be willing to wager that people will not accept an AI system that can be so readily pranked. If it comes down to whether to enact laws to stop people from pranking, or culturally trying to stop them from pranking, or providing consumer education, in the end it’s more likely that there will be a clamor for better AI. Indeed, there is a greater chance of people saying keep the AI off-the-road, rather than a willingness to change the behavior of people due to AI that can’t cope with pranksters.

I know that this disappoints many of those AI developers that are prone to pointing the finger at the humans, and in their view it’s better, easier, faster to change human behavior. I would suggest that we ought to be looking instead at the AI and not delude ourselves into believing that mediocre AI will carry the day and force society to adjust to it.

I realize there are some that contend people won’t really figure out that they can prank AI self-driving cars, or that maybe only a few people here or there will do so and it won’t become a mainstream activity.

I’d like to suggest we burst that bubble. I assure you that once AI self-driving cars start becoming prevalent, people will use social media to share readily all the ways to trick, fool, deceive, or prank an AI self-driving car. Word will spread like wildfire.

You know how some software systems have hidden Easter eggs? In a manner of speaking, the weaknesses of the AI systems for self-driving cars will be viewed in the same light. People will delight in finding these eggs. Say, did you hear that the AI self-driving car model X will pull over to the curb if you run directly at it while the self-driving car is going less than 5 miles per hour?

Look Forward to Being Duped by a False Prank

This though is also going to create its own havoc. The tips about how to prank an AI self-driving car will include suggestions that aren’t even true. People will make them up out-of-the-blue. You’ll then have some dolt that will try it on an AI self-driving car, and when the self-driving car nearly hits them, they’ll maybe realize they were duped into believing a false prank.

It could be that true prank tips also no longer work on an AI self-driving car. As mentioned earlier, there is a chance that the Machine Learning (ML) of the AI might catch onto a prank and then be able to avoid falling victim to it again. There’s also the OTA (Over The Air) updating of AI self-driving cars, wherein the auto maker or tech firm can beam various updates and patches into the AI self-driving car. If the auto maker or tech firm gets wind of a prank, they might be able to come up with a fix and have it sent to the AI self-driving cars.
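For readers unfamiliar with OTA updating, here is a simplified sketch of how a behavior patch might be applied once a fleet operator learns of a prank pattern. The payload format and parameter names are purely hypothetical and serve only to illustrate the idea of a versioned update.

```python
# A hypothetical OTA parameter patch; the update format is an assumption
# for illustration, not any auto maker's actual mechanism.

import json

current_params = {"policy_version": 7, "pedestrian_feint_hold_seconds": 30}

def apply_ota_update(params: dict, update_json: str) -> dict:
    """Apply an over-the-air parameter patch if it is newer than what we run."""
    update = json.loads(update_json)
    if update["policy_version"] <= params["policy_version"]:
        return params                      # already up to date; ignore the patch
    patched = dict(params)
    patched.update(update)                 # overwrite the tuned behavior parameters
    return patched

# Example: shorten how long the car yields to a suspected feint from 30s to 10s.
ota_payload = json.dumps({"policy_version": 8, "pedestrian_feint_hold_seconds": 10})
print(apply_ota_update(current_params, ota_payload))
```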

This though has its own difficulties. People may not yet realize that AI self-driving cars are not homogeneous and that the nature of the AI systems differs by auto maker or tech firm. Thus, you might learn of a prank that works on the AI self-driving car brand X, but not on another brand Y. Or, one that works on brand X model 2, but not on brand X model 3.

In short, though I am not one to say that a technological problem must always have a technological solution, I’d vote in this case that there should be more attention toward having the AI be good enough that it cannot be readily pranked.

We need to focus on anti-pranking capabilities for AI self-driving cars.

I say this realizing that in so doing I am laying down a gauntlet for others to help pick up and run with. We are doing the same.

This is a difficult problem to solve and not one that lends itself to any quick or easy solution. I know that some of you might say that an AI self-driving car needs to shed its tepidness. By being more brazen, it would not only be able to overcome most pranks, it would create a reputation that says don’t mess with me, I’m AI that’s not to be played for a fool.

Here’s an example of why that’s not so easy to achieve. A pedestrian walks into the middle of the street, right in front of where an AI self-driving car is heading. Let’s assume we don’t know whether it’s a prank. Maybe the pedestrian is drunk. Maybe the pedestrian is looking at their smartphone and is unaware of the approaching car. Or, maybe it is indeed a prank.

What would you have the AI do? If it was a human driver, we’d assume and expect that the human driver will try to stop the car or maneuver to avoid hitting the pedestrian. Is this because the human is timid? Not really. Even the most brazen of human drivers is likely to take evasive action. They might first honk the horn, and maybe shine their headlights, and do anything they can to get the human to get out of the way, but if it comes down to hitting the pedestrian, most human drivers will try to avoid doing so.

Indeed, for just the same reasons, I’m a strong proponent of having AI self-driving cars become more conspicuous in such circumstances. My view is that an AI self-driving car should use the same means that humans do when trying to warn someone or draw attention to their car. Honk the horn. Make a scene. This is something that we’re already working on and urge the auto makers and tech firms to do likewise.

Nonetheless, if that human pedestrian won’t budge, the car, whether human driven or self-driving, will have to do something to try to avoid hitting the pedestrian.

That being said, some humans play such games with other humans, by first estimating whether they believe the human driver will back-down or not. As such, there is some credence to the idea that the AI needs to be more firm about what it is doing. If it is seen as a patsy, admittedly people will rely upon that. This doesn’t take us to the extreme posture that the AI needs to therefore hit or run someone down to intentionally prove its mettle.

In the case of the four-way stop situation, I’ve commented many times that if the other human drivers realized that the AI self-driving car was willing to play the same game of doing a rolling stop, it would cause those human drivers to be less prone to pulling the stunt of the rolling stop to get the AI self-driving car into a bind. I’ve indicated over and over that AI self-driving cars are going to, from time to time, be “illegal” drivers. I know this makes some go nuts since they are living in a Utopian world whereby no AI self-driving car ever breaks the law, but that’s not so easily applied in the real world of driving.

Some say too that two illegal acts, one by the human driver and one by the AI, do not make a right. I’d agree with that overall point, but would also note that “small” illegal driving acts happen every day by nearly every human driver. I know it’s tempting to say that we should hold AI self-driving cars to a higher standard, but this does not comport with the realities of driving in a world of mixed human driving and AI driving. We are not going to have only and exclusively AI self-driving cars on our roadways, and no human driven cars, for a very long time.

There’s also the viewpoint that AI self-driving cars can team-up with each other to either avoid pranks or learn from pranks on a shared basis. With the advent of V2V (vehicle-to-vehicle communications), AI self-driving cars will be able to electronically communicate with each other. In the case of a prank, it could be that one self-driving car detects a prankster trying a prank on it, and then the AI shares this with the next self-driving cars coming down that same street. As such, then all of those AI self-driving cars might be ready to contend with the prank.

Unfortunately, there’s also another side of that coin. Suppose the AI of a self-driving car inadvertently misleads another AI self-driving car into anticipating a prank when in fact there isn’t one coming up. It’s a false positive. This could readily occur. The forewarned AI self-driving car has to be savvy enough to determine what action to take, or not take, rather than simply relying on the word shared with it by the other AI self-driving car.
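One way to temper that false-positive risk is to weigh a shared alert rather than blindly trust it. The sketch below assumes a hypothetical alert message with a trust score and an age; real V2V message formats will differ, so treat this strictly as an illustration of the reasoning.

```python
# Weighing a hypothetical V2V "prank alert" instead of blindly acting on it.
# Message fields and thresholds are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class PrankAlert:
    location_id: str       # road segment where the prank was reported
    reporter_trust: float  # 0..1, how much we trust the reporting vehicle
    age_seconds: float     # how stale the report is

def should_heighten_caution(alert: PrankAlert,
                            own_sensors_confirm: bool,
                            max_age: float = 300.0) -> bool:
    """Act on a shared alert only if it is locally confirmed, or fresh and trusted."""
    if own_sensors_confirm:
        return True                       # our own sensors see it: act regardless
    if alert.age_seconds > max_age:
        return False                      # stale report: ignore it
    return alert.reporter_trust >= 0.7    # otherwise require a well-trusted reporter

alert = PrankAlert(location_id="elm_st_block_4", reporter_trust=0.5, age_seconds=60.0)
print(should_heighten_caution(alert, own_sensors_confirm=False))  # False: don't overreact
```

The design choice here is simply that a shared warning adjusts the car’s posture rather than dictating it, which keeps one false alarm from cascading through every car on the street.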

If you introduced a slew of teenage drivers into our roadways, doing so all at once, what would happen? Presumably, you’d have tons of timid human drivers that would not take the kinds of shortcuts that more seasoned human drivers have learned over time. Some hope or believe that the AI self-driving cars will do the same. In essence, over time, with the use of Machine Learning and via OTA updates by the AI developers, the AI self-driving cars will get better at more “brazen” driving aspects.

Depending upon the pace at which AI self-driving cars are adopted, some think that maybe the initial small population of AI self-driving cars will take the brunt of the pranking and this will then be overcome by those AI self-driving cars getting us to the next generation of AI self-driving cars. It will be a blip that people at one time pranked the AI self-driving cars in their early days of roadway trials (remember when you could stick out your leg and pretend you were kicking toward an AI self-driving car, and it would honk its horn at you – what a fun prank that was!).

I’d suggest we need to take a more overt approach to this matter and not just hope or assume that the “early day” AI self-driving cars will come through on getting better at dealing with pranks. We need to be building anti-pranking into AI self-driving cars. We need to be boosting the overall driving capabilities of AI self-driving cars to be more human-like. Having AI self-driving cars on our roadways that can too easily fall for a feint attack or a feint retreat, well, those kinds of AI self-driving cars are going to potentially spoil the public’s interest in and desire for having AI self-driving cars on our roadways. There will always be human pranksters; it’s likely in the human DNA. Face reality and let’s make sure the “DNA” of AI self-driving cars is anti-prank encoded.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.