Recently, one of Google’s self-driving Lexus cars was involved in an accident with a bus. Google’s car assumed that, as it slowly nudged its way into traffic, the bus would slow down and let it pass. The assumption was wrong: the bus did not yield, and the two collided. The car sustained damage to the left front fender, the front left wheel, and the driver’s-side sensors; no injuries were reported. Google later acknowledged that its autonomous cars will get into more accidents, and that’s okay.
Google said it will use incidents like this one to teach the cars to think more like humans. Regarding the incident, Google stated: “Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day. This is a classic example of the negotiation that’s a normal part of driving – we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.”
Google has stated that the main goal of its self-driving car program is to eliminate the roughly 40,000 deaths caused each year by human driving error. Some believe that the only way for self-driving cars to be 100% trustworthy is for every car on the road to be self-driven. Google, however, argues that even a single self-driving car on the road improves driving safety for all.