Forums before death by AOL, social media and spammers... "We can't have nice things"
|    rec.autos.driving    |    Automobile discussion (general)    |    162,178 messages    |
|    Message 161,451 of 162,178    |
|    When Retards Drive    |    to All    |
|    Google's Driverless Cars Run Into Problem    |
|    02 Sep 15 07:32:08    |
XPost: austin.general, sac.politics, alt.fan.rush-limbaugh
XPost: alt.google-sucks
From: wrd@google.com

MOUNTAIN VIEW, Calif. — Google, a leader in efforts to create driverless cars, has run into an odd safety conundrum: humans.

Last month, as one of Google's self-driving cars approached a crosswalk, it did what it was supposed to do when it slowed to allow a pedestrian to cross, prompting its "safety driver" to apply the brakes. The pedestrian was fine, but not so much Google's car, which was hit from behind by a human-driven sedan.

Google's fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn't get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google's robot.

It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don't behave by the book. "The real problem is that the car is too safe," said Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles.

"They have to learn to be aggressive in the right amount, and the right amount depends on the culture."

Traffic wrecks and deaths could well plummet in a world without any drivers, as some researchers predict. But wide use of self-driving cars is still many years away, and testers are still sorting out hypothetical risks — like hackers — and real-world challenges, like what happens when an autonomous car breaks down on the highway.

For now, there is the nearer-term problem of blending robots and humans. Already, cars from several automakers have technology that can warn or even take over for a driver, whether through advanced cruise control or brakes that apply themselves. Uber is working on self-driving car technology, and Google expanded its tests in July to Austin, Tex.

Google cars regularly take quick, evasive maneuvers or exercise caution in ways that are at once the most cautious approach but also out of step with the other vehicles on the road.

"It's always going to follow the rules, I mean, almost to a point where human drivers who get in the car and are like 'Why is the car doing that?'" said Tom Supple, a Google safety driver, during a recent test drive on the streets near Google's Silicon Valley headquarters.

Since 2009, Google cars have been in 16 crashes, mostly fender-benders, and in every single case, the company says, a human was at fault. This includes the rear-end crash on Aug. 20, which Google reported on Tuesday. The Google car slowed for a pedestrian, then the Google employee manually applied the brakes. The car was hit from behind, sending the employee to the emergency room for mild whiplash.
Google's report on the incident adds another twist: While the safety driver did the right thing by applying the brakes, if the autonomous car had been left alone, it might have braked less hard and traveled closer to the crosswalk, giving the car behind a little more room to stop. Would that have prevented the collision? Google says it's impossible to say.

There was a single case in which Google says the company was responsible for a crash. It happened in August 2011, when one of its cars collided with another moving vehicle. But, remarkably, the Google car was being piloted at the time by an employee. Another human at fault.

Humans and machines, it seems, are an imperfect mix. Take lane-departure technology, which uses a beep or steering-wheel vibration to warn a driver if the car drifts into another lane. A 2012 insurance industry study found, to researchers' surprise, that cars with these systems had a slightly higher crash rate than cars without them.

Bill Windsor, a safety expert with Nationwide Insurance, said that drivers who grew irritated by the beep might turn the system off. That highlights a clash between how humans actually behave and how the cars interpret that behavior: the car beeps when the driver drifts into another lane, but often the driver meant to change lanes and simply had not signaled, so the driver, irked by the beep, turns the technology off.

Mr. Windsor recently experienced firsthand one of the challenges that arise as sophisticated car technology clashes with actual human behavior. He was on a road trip in his new Volvo, which comes equipped with "adaptive cruise control." The technology causes the car to automatically adapt its speed when traffic conditions warrant.

But the technology, like Google's car, drives by the book. It leaves what is considered the safe distance between itself and the car ahead. That also happens to be enough space for a car in an adjoining lane to squeeze into, and, Mr. Windsor said, other drivers often tried.

Dmitri Dolgov, head of software for Google's Self-Driving Car Project, said that one thing he had learned from the project was that human drivers needed to be "less idiotic."

On a recent outing with New York Times journalists, the Google driverless car took two evasive maneuvers that showed at once how the car errs on the cautious side and how jarring that experience can be. In one maneuver, it swerved sharply in a residential neighborhood to avoid a car that was parked so awkwardly that the Google sensors couldn't tell whether it might pull into traffic.

More jarring for human passengers was a maneuver the Google car took as it approached a red light in moderate traffic. The laser system mounted on top of the driverless car sensed that a vehicle coming from the other direction was approaching the red light at a higher-than-safe speed. The Google car immediately jerked to the right in case it had to avoid a collision. In the end, the oncoming car was just doing what human drivers so often do: not approaching a red light cautiously enough, though the driver did stop well in time.
Courtney Hohne, a spokeswoman for the Google project, said current testing was devoted to "smoothing out" the relationship between the car's software and humans. For instance, at four-way stops, the program lets the car inch forward, as the rest of us might, asserting its turn while looking for signs that it is being allowed to go.

The way humans often deal with these situations is that "they make eye contact. On the fly, they make agreements about who has the right of way," said John Lee, a professor of industrial and systems engineering and an expert in driver safety and automation at the University of Wisconsin.

"Where are the eyes in an autonomous vehicle?" he added.

But Mr. Norman, from the design lab in San Diego, after years of urging caution on driverless cars, now welcomes quick adoption because he says other motorists are increasingly

[continued in next message]

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)