ChatterBank · 1 min ago
Self Driving Cars, 80% Off Insurance?
It has been estimated that, as 94% of accidents are caused by human error, insurance premiums could be reduced by 80% once 'autonomous cars' are the only ones on the road.
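The arithmetic behind a headline figure like that can be sketched roughly: if claims fall in line with accidents but part of the premium is fixed overhead, the saving comes out somewhat below the accident reduction. The 70/30 claims-to-overhead split below is purely an illustrative assumption, not an industry figure:

```python
def premium_reduction(accident_reduction: float, claims_share: float = 0.7) -> float:
    """Fractional premium cut if claims fall by `accident_reduction`
    and only the claims-funded share of the premium shrinks."""
    return accident_reduction * claims_share

# If autonomous cars removed the ~94% of accidents blamed on human error:
print(round(premium_reduction(0.94) * 100))  # 66 (% cheaper, under these assumptions)
```

On these toy numbers the cut is nearer two-thirds than 80%; the point is only that the saving depends heavily on how much of a premium is actually claims-driven.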
http://blogs.breakeryard.com/blog/legal-issues-autonomous-cars/?utm_source=Customers-10-2013&utm_campaign=ec2b4876a4-February_2016_p_to_z_Newsletter_2016&utm_medium=email&utm_term=0_c085751039-ec2b4876a4-78798353
What do you think?
Answers
Best Answer
No best answer has yet been selected by EDDIE51. Once a best answer has been selected, it will be shown here.
The image recognition ability of modern computer systems is phenomenal; could you identify a person from his iris, for example, or pick out a person from a crowd? It is not beyond the realms of possibility for a computer to differentiate between a dog, a cat or a child, at least as well as a human can, and take the appropriate action. The difference being it could do it more quickly and, probably, more reliably.
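As a toy illustration of "differentiate and take the appropriate action": once the recognition step has named the object, the decision layer can be as simple as a lookup from class to manoeuvre. The classes and responses below are invented for illustration; a real system would also weigh speed, distance and recognition confidence:

```python
# Hypothetical mapping from a recognised object class to a driving response.
RESPONSES = {
    "child": "emergency_brake",
    "dog": "slow_and_cover_brake",
    "cat": "slow_and_cover_brake",
    "plastic_bag": "continue",
}

def respond(detected_class: str) -> str:
    # Anything the system cannot classify gets the most cautious response.
    return RESPONSES.get(detected_class, "emergency_brake")

print(respond("child"))         # emergency_brake
print(respond("traffic_cone"))  # emergency_brake (unknown -> cautious)
```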
So are they going to feed it a compendium on the animal kingdom? According to a link above they cannot distinguish a bus from fresh air! PMSL. I don't buy it. A plane can now take off and land on its own; would you get on one without a pilot? Even if the software is 100% reliable it cannot defy the laws of physics, hence it will hit things, and that causes huge legal problems.
The system does not have to be perfect - better than a very good driver is good enough!
There is the evidence of 1.2 million miles for the Google car, on US roads, with no worse problem than hitting a bus at 2mph for which it was only partially at fault (and probably only then as they were trying to solve the problem of weighting "common sense" vs "by the book").
This suggests that this is a technology that is theoretically possible. It could be introduced by stealth, e.g.
ABS - computer braking better than a human
collision warning systems
lane departure warnings
automatic braking
automatic parking
(All of the above are already on the market.) Then ...
collision avoidance
automated driving in controlled environments
automated motorway driving
automated urban driving
by which stage we may be ready for fully driverless cars.
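The automatic braking step in the list above is typically driven by a time-to-collision estimate: brake when the predicted seconds to impact drop below a threshold. A minimal sketch, where the 1.5-second threshold and the sensor values are illustrative assumptions rather than any manufacturer's figures:

```python
def time_to_collision(gap_m: float, closing_speed_ms: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_ms <= 0:  # not closing on the obstacle
        return float("inf")
    return gap_m / closing_speed_ms

def should_brake(gap_m: float, closing_speed_ms: float,
                 threshold_s: float = 1.5) -> bool:
    # Brake automatically when predicted impact falls inside the threshold.
    return time_to_collision(gap_m, closing_speed_ms) < threshold_s

# 30 m gap, closing at 25 m/s (~56 mph): TTC = 1.2 s, so brake.
print(should_brake(30.0, 25.0))  # True
```

The same comparison, with a longer threshold, gives you the earlier rungs of the ladder (collision warning) before the car is trusted to act on its own.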
"According to a link above they cannot distinguish a bus from fresh air! "
Actually no, it recognised the bus for what it was but pulled out in front anyway, "assuming" that the bus would slow down to let the car merge into the traffic. Instead, the bus didn't slow down, for some unknown reason. That's rather different from not realising there was a bus at all. In fact it's arguably a rather... human... mistake.
Lawyers and insurance people would, I suspect, rather enjoy trying to overcome those obstacles, rather than regarding them as something not even worth trying to surmount.
In the end, no matter how you try and sell it, the idea of banning self-driving cars because of some weird hypothetical conditions in which they fail is akin to saying that the current human cost now doesn't matter because self-driving cars won't be perfect. No doubt. But they're probably rather better at driving than we are.
What causes accidents? Human error, nothing else, and when they sort that out accidents will continue to be caused by human error. The majority of accidents on motorways are rear-end shunts from travelling too close: human error. Fog accidents: human error. Road junction accidents: human error. Until they educate the human being, accidents will keep being caused.
TTT, a driverless car would recognise both the dog and the bus queue; it would not swerve to avoid the dog as a human driver might do.
I am pretty sure that the 'driver' would have to be in a fit state to take over from the computer if there was a failure, so 'in charge of a vehicle while drunk or incapable' would still be illegal. In fact I think there would be a built-in breathalyser that you needed to blow into before you could start the car.
If you want to drive home p**** then you still need a Taxi.
"What causes accidents? human error nothing else" - really, only human error? So kids running out into the road leaving the driver no chance are somehow avoidable with autonomous cars? OK, so it's the kid's error, and yes that is human, but the kid is just as squashed; no software can overcome the laws of physics. I think you are putting too much store in software written by the same fallible humans whose errors you think the software will miraculously overcome. Every single program ever written has bugs, lots of them; mostly they don't kill people, but they are there because, as you so rightly say, humans are fallible. Anyway, Eddie, interesting discussion, and to answer your original question: no, insurance will not get cheaper; if anything it will get a lot higher. Tell you what, guys, when I can buy a driverless car and use it as described in my first post on this, I'll eat my words.
"I am pretty sure that the 'driver' would have to be in a fit state to take over from the computer if there was a failure, so 'in charge of a vehicle while drunk or incapable' would still be illegal" - then there is no point at all, none, zilch. I'd rather drive myself if I'm going to get prosecuted for software shortcomings.
TTT, the haulage firm I used to work for had a device installed that told you when to change gear, at the right revs. We used to pull 44 tonnes from Lancs to Glasgow every other night, and as you may be aware the M6>>M74 has a few big gradients; if we drove the way this stupid thing wanted us to it would have taken all night to get there. In theory it worked; in reality, a load of bull.
Well TTT, I don't share your pessimism. And, as I've said, this rather sad obsession with "what happens if I'm drunk as a newt?" instead of thinking about the potential (and highly likely) massive reduction in traffic accidents and deaths seems to me to be misplaced priorities.
Self-driving technology doesn't need to be infallible. It just needs to be better than we are at coping with dangers, and it's already not that far away. Never is a very long time for technology and computing.