The Ethics and User Experience Behind Programming Cars

By Charles Costa

[Image: Google self-driving car]

Cars have come a long way since the days of the Ford Model T. While it used to be enough for a car to have Bluetooth and phone integration, software now controls the entire driving experience, from entertainment systems to engine operations and safety systems. Technology has led to innovations such as intelligent cruise control, parallel parking assist and even automatic overtaking, yet none of these are as impressive as the self-driving features that Google's cars and some Tesla vehicles sport.

Programming cars used to be limited to engineers with years of experience in the automotive industry. Today, with advanced programming interfaces, coding for cars is much more accessible. It’s still not as easy as coding a website, but at least you don’t need to master embedded systems programming to make modifications to cars. Unfortunately, that’s a whole different topic beyond the scope of this article. Instead we’re going to focus on a few non-technical challenges to programming cars.

Programming Judgement

Self-driving cars are supposed to be safer than traditional cars; however, the recent batch of Google cars is proving to be overly cautious. From cyclists sending the cars haywire to vehicles being rear-ended at traffic lights, getting cars to drive the way humans expect remains a major hurdle for these machines.

One of the biggest challenges to self-driving cars can't be solved by technology. Nothing in life is foolproof, and as self-driving cars gain popularity, one of the most pressing issues is how the car should handle worst-case scenarios. The MIT Technology Review uses the example of a self-driving car losing control and heading towards a crowd of people. The car could go into the crowd or swerve into a wall. It's easy to say it's best to sacrifice the driver to save lives, yet you probably wouldn't have that attitude if you were behind the wheel.

People aren't going to buy cars that turn them into martyrs, yet nobody wants to be killed by a car that has lost control. The decision is tough enough in a traditional car, but in a self-driving car it's the programmers who determine who lives and dies. Right now that question remains unanswered, but it illustrates how much code can impact lives.
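To make the dilemma concrete, here's a minimal, entirely hypothetical sketch of what such a decision might look like in code. The maneuver list and harm estimates are invented for illustration, and that's exactly the problem: a programmer has to write them.

```python
# Hypothetical sketch: in an unavoidable-crash scenario, some piece of code
# must rank the available maneuvers. The probabilities and casualty counts
# below are invented -- the point is that someone has to choose them.

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected harm score."""
    return min(options, key=lambda o: o["probability"] * o["casualties"])

options = [
    {"name": "continue into crowd", "probability": 0.9, "casualties": 5},
    {"name": "swerve into wall",    "probability": 0.8, "casualties": 1},
]

print(choose_maneuver(options)["name"])  # the code, not the driver, decides
```

However the weights are chosen, the ethical judgement ends up baked into a few lines of arithmetic long before the crash happens.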

Intentionally Hiding Defects

Car manufacturers are now able to cover up gaps in safety simply by adding code to their systems. Volkswagen recently came under fire because its diesel engines detected emissions test equipment and adjusted engine behavior for the duration of the test. Although emissions aren't directly a safety issue, if you can program a car to fool emissions testing, what's stopping a programmer from hiding defects in engine sensor accuracy? Drive-by-wire means virtually all the mechanics within modern cars are controlled by code: from the throttle to the brakes, everything runs on digital signals.
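Reports on the Volkswagen case suggested the software inferred a test from signals such as steering angle and wheel speed. The sketch below is invented (it is not the actual defeat device), but it shows how little code such a mode switch requires:

```python
# Invented illustration: on a dyno test stand the wheels turn while the
# steering wheel stays centered -- an easy signature for software to detect.

def on_test_stand(speed_kmh, steering_angle_deg, duration_s):
    """Guess whether the car is likely on an emissions test stand."""
    return speed_kmh > 0 and abs(steering_angle_deg) < 1.0 and duration_s > 60

def emissions_mode(speed_kmh, steering_angle_deg, duration_s):
    # A single branch is all it takes to behave differently under test.
    if on_test_stand(speed_kmh, steering_angle_deg, duration_s):
        return "clean"        # full exhaust treatment, reduced performance
    return "performance"      # treatment dialed back on the open road
```

A branch this small is trivial to hide in millions of lines of firmware, which is why intentional defects are so hard to catch in an audit.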

Whether the vehicle is self-driving or traditional, you can't cut corners when writing code for cars. Physical defects cause enough trouble on their own: in a world where a sliding floor mat can make a car accelerate uncontrollably, fixing faulty code could require far more effort than installing a hook to hold a floor mat in place.



Technology Is Ahead of Its Time

Autonomous cars might be safer than human drivers, yet the recent launch of Tesla's Autopilot feature illustrates what happens when drivers are given powerful features without the right training. For example, automatic steering isn't useful if it causes you to nearly drive off a highway off-ramp. This isn't a case of programming gone wrong; it's a case of drivers overwhelmed by new technologies. Just because you can code something doesn't mean it's worth doing.

When it comes to traditional apps, the worst that comes from new features is confused users switching to something else. On the other hand, when you’re programming a car, overwhelming users can have fatal results. Even the user experience of in-car entertainment systems plays a huge role in distracted driving. When it comes to automotive technology it’s best to only add essential features, rather than putting in half-baked code (or novelty features) to make your car sexier.

The Silver Lining

Of course, there are plenty of positives to cars becoming more advanced. Self-driving cars could virtually eliminate car accidents, while advanced sensors result in more robust safety systems. Technology isn't a bad thing; it's just a new field which hasn't matured yet. Until the space becomes more mature, it's best to keep your development cycles moving at a modest pace, especially with driver-assist technologies. The Google cars show self-driving vehicles are practical; as the Tesla Autopilot case shows, it's really a matter of training the driver.

What are your thoughts?

  • Patrick Catanzariti

    Great article! It’s a really intriguing topic I plan on covering one day on my own blog, so I was thrilled to see a SitePoint piece on this! I’ll be sharing it in my weekly Dev Diner newsletter.

    I think there are so many factors to making intelligent cars. Planes are incredibly safe due to their intelligence, whilst I feel like their job could be less complex (no need to navigate in response to other drivers and pedestrians), so cars may not reach the same safety for a while!

    I’d like to ask – do you think we’ll get to a point where it is just a given that all cars are self-driving and, if a pedestrian walks in front, it’s their own responsibility? Similar to walking in front of trains?

    • Hey Patrick,

      Glad you enjoyed the piece! As far as the scenario you mentioned, I know the Google cars are engineered with a cushioning on the front so that even if a person was hit, the impact would be much less severe than a traditional car.

      As far as responsibility goes, that’s really tough to say because if the computer code isn’t open sourced, it’s impossible to tell whether there was a coding error or something of that sort. I’m not an insurance expert but I imagine governments will be working closely with insurance companies to determine fault guidelines.

      In the meantime though – I believe modern cars are equipped with black boxes (similar to the ones in aircraft) which log the last minute or so of activity before there’s a major impact. I imagine today they could use those to provide some insights to help determine fault. Only thing is that I’m sure most people are going to be skeptical of the data – so it really goes back to the cultural aspects of this. There’s only so much which can be done on the technical side.

  • simon codrington

    Thanks for the article Charles.

    I’ve been looking at how everything has been progressing with car automation, and even though on paper it seems great (they lower the risk of hitting other cars, increase general safety on the road by following road rules all the time, etc.), I’m worried when it seems like basic things like cyclists or pedestrians cause it to flip out.

    I think people won’t trust fully autonomous cars for a long time, not until the technology has reached a good state of maturity (it’s not like a new gadget where it’s fine if it fails – if your car fails to drive correctly, you’ll die).

    Thanks again for sharing :)

    • No problem, glad you enjoyed the piece! I’m sure in time those issues will be resolved, but still, like you said, trusting your life to a car is a bit different than relying on a smartwatch to tell the time.

  • George

    Am betting the lawyers can’t wait to get a client rear-ended by a self-driving car and be able to sue a multi-billion-dollar corporation instead of the (literally) poor schmuck “behind the wheel”.

    And I have yet to hear or see anyone talking about programming for winter conditions. We do have some of that up here in Canada. No way I trust a car to think its way out of a spin out on a downward slope, or to see the black ice conditions 100 metres ahead of me. Do they know the difference between a highway and a country road, which will usually have a very pronounced curved profile? Or more importantly, how to correct a slide on each?

    • Hey George,

      One of the reasons Google is running all their tests out here is because we barely get rain or any other imperfect weather. I believe I have seen the cars driving in the rain, but even that’s a major challenge. The cars right now don’t go above 25 mph.

      From what I’ve heard, getting the cars to drive effectively on snow or ice isn’t going to happen anytime soon.

      The point about the lawyers – I’ve seen plenty of the Google cars up close (I even saw three go by while at a traffic light today) – they are covered in cameras specifically due to liability concerns. They have had eleven accidents since the program was first launched, but the cameras combined with the computer data pretty much prevented any lawsuits and put the other driver at fault.

      Here in CA, car insurance uses comparative fault – so if you’re 100% at fault for an accident, the other party (typically) doesn’t see their premiums rise, and you have to pay 100% of the damages. Even if Google had some fault (say 20%), the other driver would still be paying 80% of the damages.
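      To put rough numbers on it (illustrative only – I’m not a lawyer, and the dollar figure is made up), pure comparative fault just splits damages in proportion to each party’s share of the blame:

      ```python
      # Illustrative only: pure comparative fault splits damages by fault share.
      def damages_owed(total_damages, fault_share):
          """Amount a party pays, given their fraction of fault (0.0-1.0)."""
          return total_damages * fault_share

      # With $10,000 in damages, the other driver 80% at fault pays $8,000
      # and the 20%-at-fault party pays the remaining $2,000.
      print(damages_owed(10_000, 0.80))
      print(damages_owed(10_000, 0.20))
      ```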

      I’m not a lawyer but as far as courts go – yeah, you could always demand a trial and pursue damages, but even getting into court would cost tens of thousands in fees.
