On Page SEO Checklist for the Optimized Page

By Greg Snow-Wasserman

This article is part of an SEO series from WooRank. Thank you for supporting the partners who make SitePoint possible.

The content on your page, and the technical elements behind the scenes, are the most accessible and controllable SEO factors for your site. They’re the best place to start when you’re looking to improve your rankings or redesign your site. Plus, putting together a page with well-optimized on page elements will help you with your off page efforts.

Since your goal is to make your site as search engine friendly as possible, we’ve got a checklist of how to optimize each on page element, broken up into technical and content optimizations.

Technical On Page Elements

URL Optimization


Optimize your URLs for both human and search engine usability — both affect your SEO. Make sure they clearly show where the page stands in the site’s hierarchy of information so users always know where they are and include the page’s most important keyword. Take WooRank’s SEO guide on link juice for example: https://www.woorank.com/en/edu/seo-guides/link-juice. Users can look at that URL and know that:

  1. It’s about link juice.
  2. It’s part of the SEO Guide category, which is part of the Educational category.
  3. The page is in English, and the fact that this is specified is a pretty strong hint that at least one other language option is available.

A more generic URL, like this one for a Banana Republic product page, http://bananarepublic.gap.com/browse/product.do?cid=1050641&vid=1&pid=306193012, is a lot less useful for users, who can’t see what product or category it’s for.

URL structure is also important to search engines because it helps them understand the importance of a page in relation to the rest of the website. Including keywords in the URL tells them how relevant the page is to a certain topic or keyword. Having well-optimized URLs also helps attract more relevant backlinks since people are more likely to use relevant anchor text for URLs that include keywords.

Title Tag

Title tags are one of the most important parts of on page SEO. Search engines see them as maybe the strongest hint regarding the page’s topic. Put your most important keywords in title tags so search engines can see how relevant a page is to a search. It’s important to write unique title tags for each page since, in theory, all of your pages are different (otherwise you’d have duplicate content).

A well-optimized title tag is no more than 60 characters long (with 50-60 characters as the ideal) and uses keywords at the beginning of the tag. If you want to include more than one keyword, or put your brand in the title as well, separate them with pipes (the | character). A well-optimized title for this article might look like:

<title>On Page SEO Checklist | SEO Best Practices | SitePoint</title>

Be careful when writing your titles, though, because search engines have gotten really good at figuring out when you’re trying to manipulate them. Stuffing your title tag with keywords will end up doing more harm than good: this page crammed its title tag with keywords related to deals on the iPhone 5 and could only get to the 54th page on Google.

Over-optimized title tag

If you’re optimizing for local search, include not only your keyword but also your target location, business and industry.
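
For example, a local-search title tag for a hypothetical business (the business name and location below are made up for illustration) might look like:

```
<title>Emergency Plumber in Austin, TX | Smith & Sons Plumbing</title>
```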

Meta Description

Search engines sometimes use meta descriptions to help determine the topic of a page, but don’t really use them as a ranking factor. However, they do use them, along with title tags, to form a page’s search snippet. Search snippets are the titles, links and descriptions displayed by search engines in SERPs.

Think of search snippets and meta descriptions as free text ads for your website — use them to entice users to click through to your page. Keywords used in your description will appear in bold when they match keywords in user search queries, so use them when appropriate, and, if possible, include prices and commercial words like cheap, free shipping, deals and reviews to attract in-market users.

Having a high click through rate (CTR) looks good to search engines and will help increase your rank. However, having a high bounce rate will look bad, so clearly and accurately describe the information on your page.
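
Put together, a meta description for this article might look something like the tag below (the copy is just an illustration, not the page’s actual description):

```
<meta name="description" content="Use this on page SEO checklist to optimize your URLs, title tags, meta descriptions, robots.txt, sitemaps and content for better rankings.">
```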


Robots.txt

A robots.txt file is a simple text file, stored in your website’s root directory, that specifies which pages can and cannot be crawled by search engine bots. It’s generally used to keep search engines from crawling and indexing pages you wouldn’t necessarily want to show up in search results, such as private folders, temporary folders or your legacy site after a migration. You can block all bots, no bots or individual bots to save on bandwidth.

There are two parts to a robots.txt file: User-agent and Disallow. User-agent names the crawler a rule applies to. It’s most often set to a wildcard, represented with an asterisk (*), which means the rule applies to all bots, but it can also name an individual bot. Disallow specifies which pages, directories or files the named user-agent cannot access. A blank Disallow line means bots can access the whole site, while a lone slash blocks the whole server.

So to block all bots from the whole server, your robots.txt should look like this:

User-agent: *
Disallow: /

To allow all crawlers access to the whole site use:

User-agent: *
Disallow:

Blocking a bot (in this example, Googlebot) from accessing certain folders, files and file types would look like this:

User-agent: Googlebot
Disallow: /tmp/
Disallow: /folder/file.html
Disallow: /*.ppt$

Some crawlers even let you get granular with an Allow directive, so you can permit access to individual files stored in a disallowed folder:

User-agent: *
Disallow: /folder/
Allow: /folder/file.html

Note that robots.txt controls crawling, not indexing: if a search engine finds a link to your URL somewhere outside of your site, it can still index that page. If you want extra blocking power, use a robots meta tag with a "noindex" value:

<meta name="robots" content="noindex">

Unfortunately, not all bots are legitimate, so some spammers may still ignore your robots.txt and robots meta tag. The only way to stop those bots is by blocking IP access through your server configuration or network firewall.
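
As a sketch, blocking a misbehaving bot’s IP address in an Apache 2.4 .htaccess file might look like the snippet below (the IP is taken from the 192.0.2.0/24 documentation-reserved range, so substitute the actual offending address):

```
# Allow everyone except the known bad bot's IP address
<RequireAll>
    Require all granted
    Require not ip 192.0.2.50
</RequireAll>
```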


XML Sitemaps

Sitemaps are XML files that list every URL on a website and provide a few key details such as date of last modification, change frequency and priority. If you’ve got a multilingual site you can also list your hreflang links in your sitemap. A (very simple) sitemap for a site with one page looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
    <url>
        <loc>https://www.example.com/</loc>
        <lastmod>2017-01-01</lastmod>
        <changefreq>monthly</changefreq>
        <priority>0.8</priority>
        <xhtml:link rel="alternate" hreflang="fr" href="https://www.example.com/fr/"/>
    </url>
</urlset>

<urlset> opens and closes the sitemap and declares the current protocol. <loc> is the address of the web page and is required. Always use uniform URLs in your sitemap (https, www resolve, etc.). <changefreq> tells search engines how often you update your page, which can encourage them to crawl your site more often. Don’t lie, though: they can tell when the <changefreq> doesn’t match up with actual changes, which may cause them to simply ignore this parameter. <priority> tells how important a URL is compared to the others, which helps bots crawl your site more intelligently, giving preference to higher-priority pages. <xhtml:link> tags give URLs for alternate versions of the page.

The <loc> element is required since it tells search engines where to find the page, while the other values are optional. Learn more about URL attributes and other types of sitemaps in our sitemaps beginners’ guide.

Canonical URLs

It’s easy to wind up inadvertently hosting duplicate content due to your content management system, syndicated content or e-commerce shopping systems. In these cases, use the rel="canonical" tag to point search engines to the original content. When search engines see this annotation, they know the current page is a copy and to pass link juice, trust and authority to the original page.

When deciding on a canonical URL, pick the one that’s best optimized for users and search engines, and points to content that has optimized on page elements. To implement a www resolve, set a preferred domain in Google Search Console. Google will take your preferred domain into account while crawling the web, so when it encounters a link to example.com, it will pass the link juice, trust and authority to your preferred domain www.example.com. It will also display your preferred domain in search results.

Set preferred domain in Google Search Console
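
Outside of Search Console, you can also enforce the www resolve at the server level. A minimal sketch for Apache’s mod_rewrite (assuming an example.com site served over HTTPS) might look like:

```
# 301-redirect the bare domain to the preferred www version
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```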

On HTML pages the rel="canonical" tag is implemented in the <head> of the page, while non-HTML pages should put it in the HTTP header:

  • HTML: <link rel="canonical" href="https://www.example.com"/>
  • HTTP: Link: <https://www.example.com>; rel="canonical"

The URLs you use in your canonical tags have to be 100% exact matches of the actual canonical URLs. Google will see http://www.example.com, https://www.example.com and example.com as three separate pages, and won’t be able to tell that http should really be https. You can only use one rel="canonical" tag per page. If you use more than one, or point to a URL that returns a 404 error, Google will simply ignore your canonical annotations.

Optimizing Page Content

Content Optimization

There are a couple of elements that need to be present in your page’s copy for it to be optimal for search engines. It needs to include your target keywords throughout, of course, but it also needs to be unique, high quality and in-depth.

  • Keywords: There is no magic number of times you should use keywords in your content; it really varies with how long the content is. What matters is that you use them naturally throughout and at each level of content: titles, headers and body copy. Do some research to find latent semantic keywords and use those on your page as well. Don’t overdo it! Packing too many keywords onto your page will make it look like spam, which could cause you to lose rankings or even incur a penalty.
  • Unique and high quality: Living in a post-Panda world, you can’t get away with duplicating content for SEO. Duplicating, or spinning, content won’t cause a penalty, but it will keep you from ranking highly. Straight-up duplicated content could even cause Google to omit you from results altogether. Your content also needs to be high quality, so proofread it before publishing. Spelling, vocabulary, grammar and punctuation mistakes will hurt your rankings too.
  • In-depth: Google likes long content. In fact, the average top ten search result has almost 2,000 words per page. Now, obviously you can still rank at the top with fewer than 2,000 words on your page. The point is that you need to cover your subject in enough depth to make it valuable to users. As an added benefit, if you’re creating authoritative, detailed content you can leverage it as valuable evergreen content.

Creating quality, well-optimized content can also serve as linkbait, so you can really set yourself up for success here.

Internal Links

Internal links are often overlooked when optimizing your pages, but they’re a great way to spread link juice around your site. Chances are you’ve got pools of link juice sitting on your more popular pages, waiting to be passed along through internal linking. Find the pages on your site that feature your keywords using the site: and intext: search operators. You can treat the top results as reservoirs of authority to pass along, or you can use a tool like Majestic to check each page’s authority and trust.

Add links to your high-authority pages pointing to your target pages using the target keywords as anchor text. Avoid hyperlinking with only exact match (or irrelevant) anchor text because this will look spammy. Use LSI keywords and synonyms instead.

Note that you used to be able to use nofollow links to influence how much link juice you passed through each link, called “PageRank sculpting”. That’s no longer the case. Every link passes an equal amount of equity; link juice from nofollow links is simply lost into the abyss.

Outbound Links

Backlinks are the bread and butter of SEO and one of the most important ranking signals. However, some studies suggest that outbound links can help your pages rank as well. Linking to authoritative external sources makes your content look more trustworthy to your readers, just like citing sources does in an academic setting, and Google likes websites that help their users. Take advantage of outbound links by pointing to an authoritative website that’s similar to yours, on topic, and gives your readers better context or a deeper understanding of the topic.

Some good general rules to follow when adding links to your content are:

  • Include the rel=”nofollow” attribute when linking to a site that you wouldn’t necessarily vouch for.
  • Use descriptive contextual anchor text to help both users and search engines understand the context of your links. Again, avoid using too much exact match anchor text so you don’t look like spam.
  • Include internal links as well as external. Don’t pass on all your link juice to someone else.
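
Putting those rules together, a nofollowed outbound link with descriptive anchor text might look like this (the URL and anchor copy are placeholders for illustration):

```
<a href="https://example.com/outbound-links-study" rel="nofollow">study on outbound links and rankings</a>
```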

Image Optimization

Even though search engines can’t “see” images, images can still be optimized to help your page rank, and to rank in image search themselves. This is done through the HTML alt attribute. In your page’s HTML it looks like this:

<img src="exampleimage.jpg" alt="Sample image alt text" />

Your image’s alt attribute and file names are absolutely relevant to SEO. They help tell search engines what the image is and whether it’s relevant to the rest of the content on the page. They also help your images appear in Google Images search results. As we’ve pointed out before, getting your images into Image results can help to get backlinks.

Filenames and alt attributes are also important for human readability. They help users with slow internet connections that fail to load images, text-only browsers, or those who have disabled images on their browsers. They’re also important for visually impaired users who use screen readers to surf the web.

Wrapping Up

There’s no single item on this list that will make or break your SEO. They work in combination with more than 200 other factors Google and other search engines look at when ranking pages. Many of these factors are beyond your control. However, if you take care of your on page SEO, you’ll start to see improvements in your search rankings, and traffic, soon.

How have you optimized the above on page elements for SEO? Have you encountered challenges with on page SEO? What did you do to resolve those challenges?


