Just What Are We Optimizing Anyway?

With a 3-day weekend coming up, at least here in beautiful Frisco, TX… I’d like to close the week out with an idea that we hammer on relentlessly in my SEO and SEM training classes.

If your SEO efforts are solely dedicated to optimizing for higher rankings or placement on a limited number of search terms, you are missing out on the real opportunity. Instead of optimizing for rankings, why not think about the problem a little differently? What we’re really trying to optimize is our business results.

When you make this change in your thinking, you will start to see that:

  • Improving a site’s conversion rate is often more productive than trying to improve the ranking for a specific search term. For most websites, it’s a lot easier to double the conversion rate than it is to double the amount of traffic coming in.
  • Writing rich copy with relevant search term modifiers (additional words that are often used by searchers alongside your primary search terms) can provide a big boost to your search engine traffic, as well as conversions.
  • Investing resources in improving a site’s conversion rate will deliver more resources for SEO and PPC in the long run, improving a site’s ability to compete in the marketplace.

When you implement the right strategies to improve conversion, you can actually boost your search engine profile at the same time. Here are a few good resources for conversion rate improvement:

1) Streetwise Relationship Marketing on the Internet by Roger C. Parker. This book focuses on understanding the information needs of your prospects, and developing websites to build long-term relationships with customers and prospects. The best $20 I’ve ever spent! By following Parker’s strategy, we were able to accomplish a four-fold improvement in our conversion rate at SEO Research Labs, and that was just Phase I.

2) The Grok Dot Com conversion rate improvement newsletter, published by Bryan and Jeffrey Eisenberg of FutureNow, Inc. This newsletter is simply a must read, every week. There are others publishing newsletters and books in this category, but the big difference is that these guys understand search engine marketing, including SEO.

3) Conversion Chronicles, edited by Steve Jackson. This newsletter will teach you a lot about conversion rate improvement, but be careful about implementing the kind of tracking links they use on their site. There’s a better way to implement this kind of tracking; the way they have done it can create crawlability and duplicate content issues for search engines.

  • http://www.mjswebsolutions.com type0

    Thank you for the links and post.

    I bought the book and signed up for the newsletters.

    Conversion is really the whole point isn’t it?

  • http://www.dynamicfunctions.com Kadence

I think you can learn more from examples of effectively converting sites than from books. However, data on how different sites convert is almost impossible to find.

  • Steve Jackson

    Thanks for the kind referral, Dan. You make a point about our search optimization: “the way they have done it can create crawlability and duplicate content issues for search engines.”

    You’re right, but I thought I’d best expand on this point. We measured the search engines and found that a URL with 2 variables is spidered just as well as an HTML page name. So in effect, page.php?PageID=36&tracking=article_about_search_engines works as well as article_about_search_engines.html as far as spidering is concerned.

    We found that when you use 3 variables, spiders tended to ignore the link more often. For the record, every one of our articles is ranked on Google.

    Our problem now is irrelevant traffic from the engines, because of the wide variation of quotes and analogies in the articles. So we know our visibility is good, but in our case the problem you described wasn’t an issue. We stick to 2 variables throughout the URLs in most cases (3 variables on 5 or 6 URLs).

    We are going to change the way we name pages, using mod_rewrite on Apache to generate a better name for each page. I would add that, from an SEO perspective, article_about_search_engines.html is more likely to get a higher listing than our pages as they are currently named, because of the keywords in the URL. This is the main reason we’re changing the way we name pages, not spidering or content issues.
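
    A rewrite rule along these lines is roughly the idea (a sketch only, reusing the example URLs from earlier in this comment; real configurations will vary):

    # .htaccess sketch: serve the keyword-rich URL from the existing dynamic script
    RewriteEngine On
    RewriteRule ^article_about_search_engines\.html$ page.php?PageID=36 [L]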

  • http://www.practicalapplications.net bwarrene

    Great reminder that conversions lead to dollars, while massive traffic does not necessarily do the same unless it’s a purely ad-driven site. Even then, the volume of ads required to fund a business is substantial.

  • Steve Jackson

    Another amazing book on the subject of conversion, which I have just begun reading, is from Bryan and Jeff Eisenberg: Call to Action (http://www.calltoactionbook.com/).

    The real difference with these guys is not just that they know SEO and SEM; it’s that they understand that developing personas is crucial to the whole persuasion process. SEO and SEM keywords stem from the development of personas.

  • http://nervecentral.com Nerveman

    Always good to remember the main objective… money, money, and more money! All I have to do now is find the time to implement all this advice!

  • mhdoc

    Nice to see some advice other than targeting popular keywords.

  • http://www.homeorchardsociety.org SRTech

    There’s a better way to implement this kind of tracking; the way they have done it can create crawlability and duplicate content issues for search engines.

    Can you post your better way? I have been trying to figure out how to track links without hurting SEO and would love to know how I should do it.

  • craig34

    I’m with SRTech, I’d love to see some information on how to accomplish this task better.

  • http://www.seoresearchlabs.com DanThies

    Steve,

    The main issue with your tracking links isn’t the number of variables, it’s the use of a variable to track which page the clicked link is on. I emailed you about this a few months ago, when I was working on my book.

    Let’s say you have 3 pages: A, B, and C.
    If you link to page C from both A & B, your tracking system involves adding a variable to the links on both A & B. From A, you use something like ?page=c&from=a; from B, you use ?page=c&from=b. This creates two URLs that both point to the same resource, page C. Two problems arise from this practice.

    First, you now have duplicate content (the same content presented on more than one URL). Second, because search engine spiders also follow these links, you need to add an extra step in your path analysis to make sure you’re only tracking human clicks.

    The approach I recommended (based on advice from Alan Perkins of Silverdisk) was to use JavaScript to add the from=whatever variable to the URLs. This allows you to track your human visitors (or at least a very high percentage of them) without confusing the spiders, or worrying about which clicks are humans and which are spiders.
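
    A minimal sketch of this approach, assuming the tracking variable is named from= as in the example above (the function name and page identifier below are placeholders, not anyone's production code):

    // Sketch: decorate internal links with a "from" variable after the page
    // loads; spiders crawling the raw HTML never see the tracking URLs.
    function addTrackingParams() {
      var pageId = 'a'; // identifier for the current page (placeholder)
      var links = document.getElementsByTagName('a');
      for (var i = 0; i < links.length; i++) {
        var href = links[i].getAttribute('href');
        // Only touch internal dynamic links that are not already tagged
        if (href && href.indexOf('page.php') === 0 && href.indexOf('from=') === -1) {
          var sep = (href.indexOf('?') === -1) ? '?' : '&';
          links[i].setAttribute('href', href + sep + 'from=' + pageId);
        }
      }
    }
    window.onload = addTrackingParams;

    Because spiders generally don’t execute JavaScript, the from= URLs are only ever requested by human visitors, so there is nothing extra for the engines to index and nothing to filter out of the path analysis.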

  • http://jrickards.ca jrickards

    Slightly OT: where does Google Local factor into SEO? My website (http://jrickards.ca) certainly doesn’t use all of the SEO tricks, so it doesn’t get ranked very high, but I figured that it would appear in Google Local for my city (Sudbury, Ontario, Canada). However, I recently checked and it doesn’t appear at all within the Google Local listings, despite my use of the <address> tag around the address block on my contact page.

    Any ideas?

  • http://www.pbfan.com/blogs/ Automatik

    Good info, and I’ll put those books on my wishlist.

    Now I’d love to see an article on Trustrank :)

  • http://www.conversionchronicles.com aboavista

    First, you now have duplicate content (the same content presented on more than one URL). Second, because search engine spiders also follow these links, you need to add an extra step in your path analysis to make sure you’re only tracking human clicks.

    The duplicate content issue may be a problem. I didn’t think Google penalized you for referencing the same PageID from different places. The only case where this may arise is from embedded links on our site, and so far we’ve had no problems. However, we are changing this in the next couple of months to have simple URLs and no tracking variables.

    Spiders following our links is not an issue for us: we have log file tracking to follow what the search engines do on our site, and ASP browser tracking to follow what the people do. The ASP tracking ignores search engine activity, so the tracking variables on the URLs have been helpful. However, I do see your point more clearly now.

  • http://www.seoresearchlabs.com DanThies

    [QUOTE=aboavista]The duplicate content issue may be a problem. I didn’t think Google penalized you for referencing the same PageID from different places. The only case this may arise is from embedded links on our site and so far we’ve had no problems.[/QUOTE]
    I’d hesitate to refer to it as a penalty, but any search engine can have problems indexing your content when you have duplication. At the time I reviewed this some months ago, the Conversion Chronicles site did in fact have a large number of URLs that Google was aware of (the URLs were listed in a site:domain search) but had not indexed.

    You can do a quick check for duplicate content and indexing issues by comparing the results from a few searches using Google’s advanced operators:
    - site:ConversionChronicles.com returns 251 results, indicating that Google knows of 251 unique URLs.

    - site:ConversionChronicles.com sitemap returns 191 results, indicating that 60 (251-191) of the 251 known URLs are not indexed. I am assuming that the word “sitemap” appears on all pages. We usually use the word “copyright” for this search, but I saw no copyright notice on the pages of this particular site.

    - site:conversionchronicles.com inurl:115 shows two copies of the same page indexed under different URLs: a print-friendly version and the main version.

    This last search shows duplicate content issues beyond the use of embedded tracking IDs in links. I would recommend using robots.txt to keep spiders from indexing the print.php URLs, since the print version is a less navigable entry page for visitors. The print version also doesn’t contain newsletter sign-up info, which no doubt reduces the site’s conversion rate from search engine referrals.

    Just put this in your robots.txt file if you’d like to eliminate this duplicate content and conversion rate problem:


    User-agent: *
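    # Keep spiders away from the print-friendly duplicates (robots.txt paths need a leading slash)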
    Disallow: /print.php

  • http://www.hypertextdesign.co.uk kajax101

    Great links; I must admit I was already signed up for the Grok :)
