Off Topic:

Quote Originally Posted by sg707
I guess you're entitled to your opinion but it's only making it look like OS because everyone is familiar with Windows OS UI. This reduces the learning curve to understand the UI.
Not everyone: *Windows users* are familiar with the Windows OS UI. This absolutely MUST be kept in mind. Also, which Windows OS UI? They keep bashing it with a baseball bat every release.

Very off-topic but I think interesting: I've been reading Tog's stuff for a while now, and he explains on many levels what went "wrong" (inefficient) with the Microsoft and Sun UIs (partially because of Apple being copyright/patent pigs, take that as you wish). He mentions some things specifically here.

Regarding #5: Now I personally like my menus on my windows, either because I grew up that way (familiarity) or because when I have multiple windows open, each should clearly have its menus with it. But as Ubuntu embraces Unity, it's bringing more and more Appley stuff to the interface. Shared menus (so that the menus sit at the top or on an edge) are one. The horrid, horrid, horrid mistake that is the App Bar/Task Bar appearing from a hidden location on the edge is another: many of us have been conditioned to click on the usually-empty left side of Youtube videos to gain our focus back from the Flash, but in Unity this brings up the unwanted applications bar. Same when trying to hit the Back button, or reach the application's File menu. Arg arg arg. Horrid. So now we waste time figuring out how to move it to a more sensible location like the bottom.

Anyway, I guess I'll say putting the mistakes of an OS UI on the web is a bad thing, and putting the things it got right on the web is only a good thing if the reasons for those things hold on both Desktop and Web. Sg707 is right that familiarity *is* good, in that the learning curve for new people is lower. But the way to go is to take what works and makes sense on the web and cloak it in familiar things where appropriate, so people can recognise what the new things are (skeuomorphic design).

About 10 years ago: remember when DHTML came out and people used Javascript for dropdown menus? What were those dropdown menus emulating? Hierarchical application dropdown menus. Except, on desktops, the menus usually stayed on-screen after a click, which was necessary because 1. it frees the pointer and 2. if there are submenus, humans consistently suck at moving along perfectly horizontal lines. So DHTML imitated that: submenus often appeared onclick. Fine mouse control wasn't necessary.
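For anyone who never wrote one: a minimal sketch of that click-driven DHTML pattern. All the ids, names and URLs here are invented for illustration; real period code would also have had browser-sniffing cruft I've left out.

```html
<!-- DHTML-style dropdown: the submenu opens on click and stays
     open until clicked again, so no fine mouse control is needed. -->
<ul>
  <li>
    <a href="#" onclick="toggle('fileMenu'); return false;">File</a>
    <ul id="fileMenu" style="display: none;">
      <li><a href="/new">New</a></li>
      <li><a href="/open">Open</a></li>
    </ul>
  </li>
</ul>
<script type="text/javascript">
  // Flip a submenu between hidden and shown.
  function toggle(id) {
    var menu = document.getElementById(id);
    menu.style.display = (menu.style.display === 'none') ? 'block' : 'none';
  }
</script>
```

Note the `return false;`: the top-level link deliberately goes nowhere, because its click is spent on opening the menu, which is exactly the behaviour users got trained on.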

When CSS (with :hover) started taking over the job of DHTML, though, a problem appeared: many sites had been training users to click to open menus. With CSS menus, the top-level item should itself be clickable and lead somewhere (in case the dropdown cannot be opened for one reason or another), so clicking on it brought unexpected results to experienced users. Meanwhile, slow new users got a chance to see the dropdown appear on :hover because they hadn't had time to click on it yet.
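The CSS-era equivalent, sketched (class names and URLs are made up): the submenu appears on :hover with no script at all, and the top-level item is a real link so the section is still reachable if the dropdown can't open.

```html
<!-- CSS-only dropdown: submenu appears on hover, no JavaScript.
     The top-level <a> is a real link, so clicking it navigates,
     which is exactly the surprise for users trained to click-to-open. -->
<style>
  .menu li          { position: relative; list-style: none; }
  .menu li ul       { display: none; position: absolute; left: 0; }
  .menu li:hover ul { display: block; }  /* hover opens the submenu */
</style>
<ul class="menu">
  <li>
    <a href="/products">Products</a>  <!-- clickable fallback -->
    <ul>
      <li><a href="/products/widgets">Widgets</a></li>
      <li><a href="/products/gadgets">Gadgets</a></li>
    </ul>
  </li>
</ul>
```

(Old IE only honoured :hover on links, which is one of the "one reason or another"s the dropdown might not open, and why the clickable top level mattered.)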

Also, finer mouse control became a must, and nested submenus became increasingly difficult to use. Go three levels down, but oops, you went a pixel too far off the sub-sub-sub-menu and now it's all gone and you have to start all over again. WUT!

Hovering to show submenus started becoming the norm for the web (though it still isn't for any of my applications on the Desktop, for good reason: hovering isn't considered enough of an intent, while clicking is, so hovering without intent should not trigger unexpected or unwanted actions). So we have familiarity, and then we have training. We train users, and we ourselves get trained by the applications we use.

And now we have touch interfaces actually in the hands of the regular public, rather than limited to strange museum kiosks and ticket machines and whatnot. Touch has different ideas of intent, does not have :hover, and has a different idea of :focus. We haven't been touching our Desktops (Metro, I hear, wants to change this, and Dell and some others have been offering combo touch/mouse/keyboard machines), nor have we been using pens on them, but we might in the near future, or already are on our tablets. This means our interfaces can't simply be copied over from mouse-and-keyboard OS UIs. It's not that simple, even if it's familiar to people. I would rather trade in some familiarity for some works-better-with-this-setup. If something is intuitive or easily discoverable, then it doesn't have to be so familiar.

An interesting thing mentioned in a lot of usability texts: people figure out a way to use something, and it may be arduous and take many steps. It turns out users tend to find the first way that works and stick to it, even if it's longer and harder than another way; they do not generally explore new ways of doing something once they've found one. So I'd say when building an interface, make the first way they're likely to find the easiest one.

We're nerds, right? How many of us have watched someone using a mouse to fill in a web form? Zomg it's painful.