
Self-Driving Cars: Pros, Cons, and Predictions

Evaluate Self-Driving Cars on a scale of 1-5 (1 = Terrible, 3 = Meh, 5 = Great)

  • 1: 10 votes (6.6%)
  • 2: 11 votes (7.2%)
  • 3: 24 votes (15.8%)
  • 4: 28 votes (18.4%)
  • 5: 79 votes (52.0%)

Total voters: 152. Poll closed.
Oh, I forgot too: the way the headlights are mounted. It's a little light bar basically right above the front bumper, right where the snow piles up. Pretty serious safety concern IMO.

My understanding is that isn't the case: that light bar is just the DRLs, and the actual headlights are somewhere else.
 
Oh, where are they mounted? I don't really see anything else on the truck.
 
I could see SDCs being safer in usual conditions, less safe in unusual conditions, and statistically safer overall.
I think that's likely true, and likely to be a factor in various mandates going forward, but it's also one of the faults of using overall statistics to judge what you should or should not do. Overall statistics are great for some things, like allocating funds and determining insurance premiums, but not so great for deciding what's best for an individual. I think Stephen Jay Gould went into this to some degree in "The Median Isn't the Message," his essay on cancer prognoses, trying to figure his own chances with the cancer he had. The statistical fact that autonomous cars are safer than human-driven ones is cold comfort if you do most of your driving in the edge conditions where they aren't.
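
To make that point concrete, here's a toy calculation (Python; every rate and mileage split below is invented purely for illustration):

[code]
# Toy numbers only: an AV fleet can beat the human average overall while
# still being the worse choice for a driver who lives in its weak conditions.

human_rate = {"clear": 2.0, "snow": 6.0}    # crashes per million miles (invented)
av_rate    = {"clear": 0.5, "snow": 9.0}    # AV better in clear, worse in snow (invented)

fleet_mix = {"clear": 0.95, "snow": 0.05}   # how the whole fleet's miles are split
my_mix    = {"clear": 0.30, "snow": 0.70}   # a snow-country driver's miles

def expected_rate(rates, mix):
    """Crash rate per million miles, weighted by a mileage mix."""
    return sum(rates[c] * mix[c] for c in mix)

print(expected_rate(av_rate, fleet_mix))     # 0.925 - AV wins fleet-wide...
print(expected_rate(human_rate, fleet_mix))  # 2.2
print(expected_rate(av_rate, my_mix))        # 6.45  - ...but loses for this driver
print(expected_rate(human_rate, my_mix))     # 4.8
[/code]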
 
Thanks Bruto. I read that essay years ago, and because of you I read it again. I have always been a lover of facts over spin, and I am fully aware that while statistics are facts, they must be treated with a skeptical eye. Unfortunately, that skepticism often finds that statistics are being abused, which makes them difficult to decipher.
 
Skepticism is one thing, denial is another.

Let's put aside any 'skepticism' you may have and assume the numbers are accurate enough for our purposes. The idea that policy shouldn't be applied according to statistics because there are 'edge' cases is silly. We don't allow non-licensed people to drive even though some of them might be perfectly fine, because we know the vast majority aren't. We still have seat belts and airbags even though in some cases they may increase rather than reduce injury.

But it's a typical denier tactic. Well sorry fella, but your edge case isn't going to stop us from making things better. In the future self driving cars will be safer in almost every circumstance. People will still be allowed to drive without it of course, because they prefer it and/or for those 'edge' cases where manual operation is needed. But the cars will become more and more autonomous to the point where most people won't be bothered 'driving' them most of the time.
 
I agree with all of that. Not sure why you are sorry.

I was saying I have read studies that suggest autonomous vehicles are safer than human drivers, not that they are. I was also saying it was challenging to understand what those studies actually tell us. What I get from those studies, right or wrong on my part, is that they generally are safer, but not in every circumstance. And as the Bard says, "ay, there's the rub."
 
So basically, at the end of the day, yeah, they're safer on the highway, unless it's dusk or dawn, and unless there's a traffic event, cf. the same article. When you're in a city, with lots of turns and humans around and braking at traffic lights, yeeaah, no, things ain't so great any more. And it's worse if it's a Tesla, I guess.
Any comparisons you see today are already out of date. FSD V13 - released a few days ago - is much better than 12, which was much better than 11 etc. By the time Cybercabs are rolling off the production line it will be so good that all prior studies are worthless.

Part of the reason for this is increased data collection for training, and part is increased compute power. And of course the engineers are constantly identifying and solving problems as they come up, so any complaints or 'issues' you hear about now will probably soon be history.

Tesla's FSD has also proved itself to be more flexible than other systems. Tesla recently added it to the Cybertruck, and there is evidence of it being tested on the Semi too. Eventually Tesla may license it to other car makers, who will be able to integrate FSD into their models without much effort. The more vehicles that have it, the more feedback and the better it gets.
 
When autonomous cars drive more safely than humans there will be no reason not to let them, and reason to ask whether we should still let humans do it.

But it won't be so clear cut. If they're better than average, then <shrug> so is everyone who isn't drunk or playing with their phone. And they won't all be the same: different models will have different strengths and weaknesses. I wonder how we'll type-approve each startup brand to decide it's adequately safe on our own country's roads. A car trained in one country is like a foreign tourist in another. Will there be an aftermarket tuner business selling assertiveness upgrades to make your car get through traffic faster than the standard model? And if that's banned, will it go on illegally anyway?

The more I think about it, the messier it feels like it's going to get.
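
To see what the type-approval question above actually involves, here's a rough sketch of the statistics a regulator would face (Python; the baseline rate, mileage and crash count are invented, and the normal approximation is crude compared with what a real authority would use):

[code]
import math

def rate_ci(crashes, million_miles, z=1.96):
    """Approximate 95% CI for crashes per million miles (normal approx. to Poisson)."""
    rate = crashes / million_miles
    half = z * math.sqrt(crashes) / million_miles
    return rate - half, rate + half

human_baseline = 2.0   # crashes per million miles (invented for the example)

lo, hi = rate_ci(crashes=30, million_miles=25.0)
print(f"AV rate 95% CI: [{lo:.2f}, {hi:.2f}] per million miles")  # [0.77, 1.63]

if hi < human_baseline:
    print("Credibly better than the baseline - in this crude, aggregate model")
else:
    print("Not proven better yet - and this is before splitting by condition")
[/code]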
 
If current trends continue I'd expect a subscription model. You won't actually own your car (you'll own the maintenance but not the control) and will have to subscribe to wherever you take it or be always connected to a source which will detect where you are and charge you for the localized learning. Just a guess, but I'm guessing that whatever happens will cost more and require greater connectivity, and will somehow just happen also to end up generating targeted spam.
 
Any comparisons you see today are already out of date. FSD V13 - released a few days ago - is much better than 12, which was much better than 11 etc. By the time Cybercabs are rolling off the production line it will be so good that all prior studies are worthless.
I don't see any reason to believe v13 is better than v12 in terms of safety. Most of the hype is around better parking performance, somewhat smoother driving, and (maybe?) fewer driver interventions. That's precisely the kind of question we need data to answer, and Tesla doesn't provide it.
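
A back-of-the-envelope sketch of why the data question is so hard (Python; the human fatality rate is the commonly cited US figure of roughly 1.1 deaths per 100 million vehicle miles, so treat it as approximate):

[code]
# Fatal crashes are rare, so proving an AV is safer takes enormous mileage.
# "Rule of three": zero events in n trials gives a 95% upper bound of 3/n,
# so zero fatalities in m miles only rules out rates above 3/m.

human_fatal_rate = 1.1e-8   # fatalities per mile, approximate US figure

miles_needed = 3 / human_fatal_rate
print(f"~{miles_needed / 1e6:.0f} million fatality-free miles")   # ~273 million
[/code]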

Part of the reason for this is increased data collection for training, and part is increased compute power. And of course the engineers are constantly identifying and solving problems as they come up, so any complaints or 'issues' you hear about now will probably soon be history.
v13 is being rolled out to cars already running v12 on HW4, and Autopilot doesn't rely on cloud compute in operation, so I'm not sure where you think the increased compute power is coming from. Improved performance doesn't translate directly into increased safety in any case.

One of the problems in getting from ADAS to full automation is that you pass through a period where the system is good enough that it seems like you don't need to pay attention, so people stop paying attention (and Tesla does very little to ensure that drivers are paying attention, despite this being a relatively trivial problem to solve). In one recent fatality involving a Tesla with FSD engaged, a motorcyclist was killed when the car rear-ended him. The driver only noticed when he heard the impact, because he wasn't paying attention; he was using his phone instead. Distracted driving is a problem with unassisted driving too, of course, but most manufacturers don't do their best to imply that the car can drive itself. This should probably be considered a marketing defect.
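
The attention enforcement really is simple at the logic level. A minimal sketch (Python; the thresholds and the driver_attentive() sensor hook are hypothetical, and a real system would live in the car's safety controller):

[code]
import time

# Escalate from a visual nag to an audible alarm to a controlled stop if
# the attention checks keep failing. All thresholds are illustrative.
WARN_AFTER, ALARM_AFTER, STOP_AFTER = 5.0, 10.0, 20.0   # seconds

def monitor(driver_attentive):
    """driver_attentive: callable returning True if e.g. the cabin camera
    sees eyes on the road or the wheel registers torque (hypothetical hook)."""
    last_ok = time.monotonic()
    while True:
        if driver_attentive():
            last_ok = time.monotonic()
        lapse = time.monotonic() - last_ok
        if lapse > STOP_AFTER:
            return "initiate controlled stop"    # hand off to the driving stack
        elif lapse > ALARM_AFTER:
            print("audible alarm")
        elif lapse > WARN_AFTER:
            print("visual warning")
        time.sleep(0.1)
[/code]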

And "constantly identifying and solving the problems" understates the severity of the problem. The number of edge cases is basically uncountable, and they're trying to brute force a solution to the problem, which is that these systems are incapable of common sense reasoning when presented with novel circumstances.

Tesla's FSD has also proved itself to be more flexible than other systems. Tesla recently added it to the Cybertruck, and there is evidence of it being tested on the Semi too. Eventually Tesla may license it to other car makers, who will be able to integrate FSD into their models without much effort. The more vehicles that have it, the more feedback and the better it gets.
Just like how Apple licenses their OS to other manufacturers.
 
When autonomous cars drive more safely than humans there will be no reason not to let them, and reason to ask whether we should still let humans do it.

Which, as you also seem to say, is not that clear cut a question. The data presented in this thread suggests that, yes, they may or may not be better under ideal conditions.

Unfortunately, conditions can get less than ideal through something as small as someone wearing a t-shirt with a stop sign printed on it. Seriously, there is plenty of video evidence that you can get some AV taxis to brake simply by wearing one and turning towards them.

Or, as a weeb, I'd like to introduce you to Japan's "itasha" (literally "painful" or "cringeworthy" + "car") culture: decorating your car with lots of images of anime, manga or video game characters. Even serious companies have their own itasha wraps and characters; IIRC a tire company has its own character Miu, a pun on the µ symbol for the friction coefficient. If you think that's a trivial problem: not just car cameras but also facial-recognition cameras have problems with "cloak of invisibility" apparel featuring lots of faces. You can literally become invisible to most of that software by wearing an ahegao hoodie (apparel printed with lots of hentai women's orgasm faces; maybe don't google it at work), or at least get it to lock onto a lot of faces that aren't real.

You COULD maybe distinguish between a car painted with people and actual people with LIDAR and ultrasound, but again, that doesn't exactly describe a Tesla, does it?


...and then there are the truly one-of-a-kind situations that I'd assume most cars were not trained on. A few years ago, I was in a taxi that almost got t-boned on my side by one of those two-ton SUVs, going way over the speed limit too, at an intersection. I had already made my "I go to Odin" peace when I saw it coming, but the taxi driver's quick reflexes and a MAD swerve into the grass lane in the middle denied me getting picked up by a Valkyrie. Honestly, the biggest scare on that trip was the driver being pretty much marinated in adrenaline afterwards. Do I think an AI car is trained on exactly that scenario? Nope. I'd probably have met the Allfather in one of those.
 
Really?

Why aren't you concerned with the vastly greater waste from coal mining and coal-fired power plants? The lead, the cyanide, the arsenic, etc. They are a significantly bigger threat. Per unit of energy produced, nuclear causes roughly 99.9% fewer deaths than coal and roughly 99.8% fewer than oil. And I'm not including the problem of global warming. And when breeder reactors are developed, they will be able not only to reduce the amount of nuclear waste by 90%, but also to reduce the radioactive lifetime of transuranic waste from hundreds of thousands of years to a few hundred.
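
For what it's worth, those percentages line up with commonly cited mortality estimates per unit of electricity generated (e.g. Our World in Data); a quick check in Python, treating the figures as rough rather than authoritative:

[code]
# Approximate deaths per terawatt-hour of electricity, as commonly cited
# (e.g. Our World in Data); treat these as rough estimates.
deaths_per_twh = {"coal": 24.6, "oil": 18.4, "nuclear": 0.03}

for fuel in ("coal", "oil"):
    fewer = 100 * (1 - deaths_per_twh["nuclear"] / deaths_per_twh[fuel])
    print(f"nuclear vs {fuel}: ~{fewer:.1f}% fewer deaths per TWh")
# nuclear vs coal: ~99.9% fewer; nuclear vs oil: ~99.8% fewer
[/code]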
You are doing the cheap thing here. I EXPLICITLY compared nuclear with renewables, so that was just straight-up disingenuous.

At this point, nuclear power is pretty much irrelevant.

Sure, when all the problems are solved, especially with long-term storage, we should use nuclear power, including using up the waste of other reactors, but it is not the salvation people think it is or could be - it is just way too expensive and inflexible - and the providers have a history of being irresponsible and a drain on public funds.
 
You said the market has spoken, but even by the source you cited, the market has been corrupted by politics. That said, ever-increasing energy demand has not ended the demand for coal and other fossil fuels. And renewables like wind and solar fail to address the need for baseload power; they are intermittent.
But your argument was that nuclear power is unsafe, which is a flat-out untruth, a bald-faced deception that has gone on for 60 years.

That said, I do agree the further development and deployment of solar, wind and energy storage might destroy any economic argument for nuclear energy to be included in the mix.
 
May I point out again that this is a completely irrelevant side-track, and not even a valid analogy? The argument from analogy is the weakest kind anyway, but even it relies on transferring a relevant common attribute from context X to context Y; as in, an attribute that is actually relevant to the claim at hand, such as what makes the thing safe.

An argument that:

P1: Hawks have wings.
P2: Hawks fly.
P3: Penguins have wings.
Therefore:
C: Penguins fly.

Is obviously weak, but it at least relies on the supposed relevance of wings to flight. If your argument were "crows are black, crows fly, penguins are black, therefore penguins fly," it would rest on an attribute irrelevant to flight, and wouldn't even be a valid analogy.

If all you've got is literally just "but they opposed/laughed at X too", then see Carl Sagan's quote, "They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown." You literally don't have an analogy, and you're just wasting everyone else's time with irrelevant handwaving.
 
If all you've got is literally just "but they opposed/laughed at X too", then see Carl Sagan's quote, "They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown." You literally don't have an analogy, and you're just wasting everyone else's time with irrelevant handwaving.
Actually it is relevant. Nuclear is unsafe. So is coal, gasoline, electricity, fire and water. Cars are unsafe, airplanes are unsafe, trains and ships are unsafe. Guns are unsafe, knives are unsafe, scissors are unsafe. Every year ~12,000 people in the US die from falling down stairs. We are surrounded by unsafe technologies which we continue to use despite the danger.

But there's a lot we can do to make them safer. Nuclear is made safer with strict protocols, highly trained operators, regular inspections, multiple redundancies and massive containment buildings. Despite all that, the government has to provide insurance because no private company can take on the risk. If we didn't do all those things to make nuclear safe, then... imagine a Chernobyl every few months.

Nuclear and radiation accidents and incidents
As of 2014, there have been more than 100 serious nuclear accidents and incidents from the use of nuclear power. 57 accidents or severe incidents have occurred since the Chernobyl disaster, and about 60% of all nuclear-related accidents/severe incidents have occurred in the USA.
Now compare that to the number of serious car accidents. Nuclear has the potential to kill millions in a single incident, but it doesn't because billions have been spent on making it safe. Same goes for commercial airplanes, which have the potential to kill hundreds at a time (and sometimes do). But cars are (individually) less lethal, so we don't do so much to make them safer.

There's a lot more we could do to make cars safer. Unfortunately many of the most effective measures have not been implemented due to cost and/or impinging on drivers' 'freedoms'. If cars were nuclear power plants, 90% of them would be off the road for various violations. The accident rate would drop to almost zero, but the backlash would be political suicide. Enter autonomous driving...

Previously we didn't have the technology to do it, but now we do - and with the aid of advanced AI and powerful computer chips it's improving exponentially. Most cars produced today have some level of autonomy built in, but to make them truly safe we need full autonomy to remove the human element that causes the vast majority of accidents. It also makes driving more enjoyable and less stressful, as well as saving money in insurance premiums, repairs and medical bills.

There's just one problem, the same one we've always had with cars: 'freedom'. People want to do what they want when they want, without any 'safety' features getting in the way. This is why mandatory seatbelt interlocks failed in the US, and why airbags were introduced. However, eventually most of the public accepted seatbelts, as it turned out they weren't a significant restriction on their 'freedoms'. Today we have the same problem with autonomous driving. People fear change, and don't trust a machine to do their thinking for them. It will take a while for people to be comfortable with it, just like it took a while for them to be comfortable wearing seatbelts. Meanwhile we have to put up with 'skeptics' inventing arguments against self-driving cars to hide their fear of new technology.
 
Meanwhile we have to put up with 'skeptics' inventing arguments against self-driving cars to hide their fear of new technology.
Can I ask you to stop imputing motivations like this? People are not 'inventing' arguments, they're articulating them.
 
