I've never seen anything to indicate Google is arbitrarily penalising sites that have invalid code. Their goal is to give searchers the most relevant results, and dropping sites because they have technical errors in their code isn't going to fulfil that goal. (Besides, it would be more than a little hypocritical if they did!)
That doesn't mean code errors can't harm your search position, though – the key word above is 'arbitrarily'. Google reads your code and uses it to work out what your page is about, and how and where it should rank. If your code is scrappy and all over the place, riddled with mistakes, there's a fair risk that Google won't be able to understand it properly, and that's what hurts you.
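To make that concrete, here's a hypothetical snippet (invented for this example) showing how a single missing closing tag can change what a parser thinks the page says:

    <!-- Invalid: the <h1> is never closed. Under HTML's error
         recovery, the paragraph ends up nested inside the heading,
         so a crawler can read the whole block as heading text. -->
    <h1>Cheap Widgets
    <p>We sell widgets at great prices.</p>

    <!-- Valid: heading and body copy are clearly separated. -->
    <h1>Cheap Widgets</h1>
    <p>We sell widgets at great prices.</p>

One stray error like that won't sink you, but a page full of them gives Google a much murkier picture of your content.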
The main reason for caring about validation (apart from your own professional high standards, of course) is that sites with invalid code are much more likely to display incorrectly in some or all browsers. A page might look fine in one browser and break in another – and the browser it breaks in might not even exist yet. You can test in every browser available today and find no problems, then tomorrow a new version of (whatever) launches and chokes on your errors. It's much easier to check that the code is valid than to test it in every version of every browser!
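A classic example of the kind of error that bites here (hypothetical markup, but a well-known failure mode) is misnested inline tags:

    <!-- Invalid: <b> and <i> overlap instead of nesting cleanly -->
    <p>Some <b>bold and <i>bold-italic</b> text</i> here.</p>

Before HTML5 pinned down error recovery, different engines rebuilt that overlap into different DOM trees, so the same page could render differently from one browser to the next. A validator flags it in seconds.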
On the other hand, proprietary code, whether it's -moz- prefixes in the CSS or Google/Facebook markup in the HTML, is deliberately built from names that aren't part of any spec – prefixed properties, extra elements and attributes. That way, agents that support them handle them correctly, and all others simply ignore them. So errors resulting from proprietary code are no big deal, as long as you know why they're there.
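As a sketch of what that looks like in practice (the Open Graph line is just an illustration, and the validator's complaints are paraphrased):

    <head>
      <!-- Facebook's Open Graph markup: 'property' isn't a standard
           attribute for <meta>, so a validator will flag it, but
           agents that don't understand it simply ignore it. -->
      <meta property="og:title" content="My Page">
      <style>
        .box {
          -moz-border-radius: 5px;    /* Firefox-only prefix; others skip it */
          -webkit-border-radius: 5px; /* WebKit-only prefix; others skip it */
          border-radius: 5px;         /* the standard property, as a fallback */
        }
      </style>
    </head>

Run the validator once on otherwise-clean code and note which errors come from proprietary bits you added deliberately – then anything new that appears is a genuine mistake.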