Why Your Website Statistics Reports Are Wrong, Part 3
In my previous posts, we examined the advantages and drawbacks of client-side and server-side data collation for website statistics. In this final post of the series, we'll look at a few more statistical anomalies.
There are several online services that provide general website statistics reports. StatCounter is one of the better known: it analyses data from 3 million websites across the world to produce aggregate statistics. The information is useful, but you should be careful when using the data to make business decisions. Just remember:
There is no such thing as a generic website.
All websites experience different usage and demographic figures depending on their content and target audience. We can draw some obvious conclusions:
- Mozilla.org is likely to have a higher than normal incidence of Firefox users.
- Apple.com will experience higher Mac and Safari usage than average.
- Opera.com will probably have more Opera users than most.
- The Google Chrome Extensions page will be dominated by Google Chrome users.
- A site written in Chinese will have a larger proportion of users from Asia.
For example, you might spend significant time and effort supporting IE6 because it has a global market share of 14%. However, if you’re running a technology-based website with primarily North American users, IE6 usage is likely to be less than 5%.
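The gap between a global figure and a site-specific one is just weighted averaging: a site's expected browser share depends on which audience segments it actually attracts. The sketch below illustrates this with invented numbers (the segment shares and audience mix are assumptions for demonstration, not real market data).

```python
# Illustrative only: the shares below are made-up numbers, not real market data.
# A global browser share is a weighted average over very different audiences,
# so it can diverge sharply from the share a single site observes.

def weighted_share(segment_shares, audience_mix):
    """Expected browser share for a site, given the browser's share within
    each audience segment and the site's audience composition (weights sum to 1)."""
    return sum(segment_shares[seg] * weight for seg, weight in audience_mix.items())

# Hypothetical IE6 share within each audience segment
ie6_by_segment = {
    "general": 0.14,                # matches the quoted 14% global figure
    "tech_north_america": 0.03,     # assumed lower share among tech-savvy users
}

# A technology site whose visitors are mostly North American technologists
tech_site_audience = {"general": 0.2, "tech_north_america": 0.8}

print(weighted_share(ie6_by_segment, tech_site_audience))
```

With this (hypothetical) audience mix the site's expected IE6 share lands around 5%, well below the 14% global figure, which matches the intuition above.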
It is possible to generate meaningful statistics from web applications that require user actions (such as form posts) and record the resulting data in a database or files.
For example, sites such as Flickr and YouTube will know how many users have registered and how many photographs/videos those individuals have uploaded. However, the majority of visitors are viewers who may not be making data changes; their statistics will be affected by the same drawbacks discussed for client-side and server-side data collation.
Are Website Statistics Useful?
Website statistics reports have evolved. They were originally used by web developers and administrators to assess server loads and performance bottlenecks. They may now be prettier and more accessible, but that does not mean they are correct. They can certainly be useful for assessing general trends, but all statistics are meaningless unless you understand how the data was collated and the caveats of the analysis methods used.
For example, you may discover your website has a low number of Opera users and decide to abandon testing in that browser. But what if your current site does not support Opera? The low figure might be because those users can’t view your home page.
Website reports may look impressive, but it's easy to misinterpret the underlying data. I'd recommend obtaining reports from two or more separate sources and being wary of basing your marketing decisions solely on pretty line graphs or bar charts!
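One simple way to act on the "two or more sources" advice is to compare the same metric across reports and flag large divergence before trusting either number. The figures below are invented, and the 25% threshold is an arbitrary assumption for illustration:

```python
# Sanity-check two analytics sources against each other.
# All figures are invented; real client-side and server-side reports
# rarely agree exactly because they count different things.
client_side = {"visits": 10200, "opera_share": 0.004}
server_side = {"visits": 13900, "opera_share": 0.012}

THRESHOLD = 0.25  # arbitrary cut-off for "suspiciously different"

for metric in client_side:
    a, b = client_side[metric], server_side[metric]
    divergence = abs(a - b) / max(a, b)
    if divergence > THRESHOLD:
        print(f"{metric}: sources differ by {divergence:.0%} -- investigate before acting")
```

Here both metrics would be flagged, which is exactly the point: the Opera share in particular differs threefold between sources, so neither report alone is a safe basis for dropping browser support.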
Do your clients understand their statistics? Have they ever made dangerous decisions based on the information?