When you’re first trying to understand how search engine optimization works, it’s tough to see where to start. There’s a lot to digest. And just when you think you “know” something, Google releases another algorithm update that changes things on you.
But if you’re a web developer, there’s good news: Understanding some basics will go a long way to helping you code pages that will perform well when Google crawls them.
To that end, keep the following five SEO guidelines for developers in mind as you’re working on your website.
1. Understand the Search Process
The first thing a web developer needs to keep in mind is the relationship between websites and search engines.
Most people think of the Internet as a bridge that users walk across to get to their destinations.
In fact, it’s more like a restaurant without a menu. When a user conducts a search, they write their order and hand it to the waiter (the search engine). The waiter then looks for the chef (the web page) best suited to fill the order.
This is what happens every time a user conducts a search. The engine goes back and forth between the user and the page. There’s no solid connection between the two, just constant exchanges.
The developer’s job is to do everything possible to make sure the search engine is able to accurately assess whether the page will fulfill the user’s request.
2. Learn How to Write a Good URL
The first step in making a page easier to find is writing a proper URL.
The URL is the first thing search engines see, and it also gives users their first impression of the site. How a search engine reads a website’s address is a determining factor in the effectiveness of its SEO.
Making a mistake in your URLs can hurt your rankings, so no pressure.
A URL has eight parts:
- The protocol represents the set of rules that the browser and the web server use to communicate with each other. This is the "https://" (or "http://") part of the URL. The search engine adds this part automatically so it can start the search.
- The root domain is the overarching structure that everything else in the URL springs from, hence the name. Usually, this is the name of the website.
- A subdomain is a subdivision of the root domain. The best examples of subdomains are when websites have different locations for different languages and regions. If the URL were a building, subdomains would be the different floors.
- The top-level domain is the highest element in the hierarchy of the domain name system on the Internet. It’s the last label of a fully qualified domain name. The most common TLD is .com.
- Subfolders are the first divisions in the content of a website. They allow developers to organize their site to make it easier for users and browsers to navigate.
- A page is what the user is actually looking for, at least according to the search engine. This is the title of a specific web page. Your page names should be as user-friendly as possible to improve rankings.
- Optional parameters allow developers to control which pages Google can crawl on a website. This is useful when a site needs multiple pages for related content, while avoiding penalties for duplicate content.
- Adding name anchors allows the search engine to focus on a specific part of a page. This is a useful SEO device because it’s like a giant neon sign in the URL that says “This is what you’re looking for!” Users appreciate pages with name anchors since they don’t have to scroll through blocks of information they don’t need.
In order to maximize the page’s potential, write the URL in this order:
Protocol > Subdomain > Root Domain > Top-Level Domain > Subfolder > Page > Parameter > Name Anchor
Writing URLs this way organizes the content and preserves domain authority.
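To see these parts concretely, here is a short sketch using Python’s standard urllib.parse module. The URL itself is made up for illustration:

```python
from urllib.parse import urlparse

# A hypothetical URL containing most of the parts described above.
url = "https://en.example.com/blog/seo-basics?page=2#redirects"

parts = urlparse(url)

print(parts.scheme)    # protocol -> "https"
print(parts.hostname)  # subdomain + root domain + TLD -> "en.example.com"
print(parts.path)      # subfolder + page -> "/blog/seo-basics"
print(parts.query)     # optional parameters -> "page=2"
print(parts.fragment)  # name anchor -> "redirects"
```

Splitting a URL like this is a quick way to sanity-check that each component sits where search engines expect it.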
3. Watch Your Meta Tags
There’s a lot of debate over whether bad coding has an effect on search engine rankings.
Here’s the short answer: It does.
Most developers already worry about code that slows page load times or causes usability issues. Those problems can hurt your conversion rates and increase your bounce rate, but a poorly coded site can also have a profound effect on the ranking potential of any page.
In some cases, coding errors can confuse search engines when they try to read the page, which is bad in every way possible. Search engines can’t rank what they can’t even understand.
Developers also need to be aware of the importance of meta tags.
Search engines want to give users the best online experience, and that means serving up unique content. Search engines use meta tags to help determine which pages are most relevant, so developers need to play by these rules and keep the meta tags interesting and compact.
Developers don’t usually create the content of the meta tags, but they still need to understand how they work.
The most important meta tags are the title and description tags. Whenever a user conducts a search, the content from these two tags is the first thing that users see in the search results.
As a developer, one way you can hurt your page rankings is by accidentally creating duplicate meta tags. Search engines generally frown on duplicate content, so avoid this.
It’s also important to keep meta tags short. Remember, it’s a title and a description, not a novel. Limit the title tag to roughly 60 characters, and cap the description at about 160 characters.
If you do wind up writing the content for some meta tags yourself, be sure to place important keywords near the beginning of the title and description, and make each entry as unique as possible.
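The length and duplicate checks above are easy to automate. Below is a minimal sketch in Python; the check_meta helper, the sample pages, and the exact limits are illustrative assumptions, not a standard API:

```python
# Common guideline limits for title and description tags, not hard rules.
TITLE_MAX = 60
DESC_MAX = 160

def check_meta(pages):
    """pages: dict mapping URL path -> (title, description).

    Returns a list of human-readable problems: tags that are too long,
    and titles duplicated across pages.
    """
    problems = []
    seen_titles = {}
    for path, (title, desc) in pages.items():
        if len(title) > TITLE_MAX:
            problems.append(f"{path}: title is {len(title)} chars (max {TITLE_MAX})")
        if len(desc) > DESC_MAX:
            problems.append(f"{path}: description is {len(desc)} chars (max {DESC_MAX})")
        if title in seen_titles:
            problems.append(f"{path}: duplicate title (also on {seen_titles[title]})")
        else:
            seen_titles[title] = path
    return problems

# Hypothetical site: two pages accidentally sharing one title.
pages = {
    "/": ("Acme Widgets - Home", "Hand-made widgets shipped worldwide."),
    "/about": ("Acme Widgets - Home", "Who we are and why we make widgets."),
}
for problem in check_meta(pages):
    print(problem)  # flags the duplicate title on /about
```

Running a check like this in a build step catches duplicate tags before search engines ever see them.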
4. Mind Your Redirects
Developers are often called upon to move content around on a site, and that’s where redirects come into play.
Redirects are tools that allow developers to divert a user from an old URL to a newly created page.
There are five basic types of redirects:
- 300 – Multiple Choices
- 301 – Moved Permanently
- 302 – Found (a temporary move)
- 303 – See Other (forces a GET request)
- 307 – Temporary Redirect (preserves the request method)
Among these options, the most important redirects are 301 and 302.
You should use a 301 redirect when you:
- Take a page down
- Move a page or entire site somewhere else
- Point users to the original page when you take down duplicate content
You should use a 302 redirect when:
- A page is temporarily unavailable
- You want to experiment with moving to a new domain without damaging history and rankings
- You need to send users to a temporary site while the old one undergoes renovation
Some developers argue that the 302 is unnecessary because of the 303 and 307 redirects. Both can perform the same function as the 302, with specific effects: the 303 forces the browser to make a GET request even if it originally made a POST, while the 307 repeats the original request, whether GET or POST, at the new URL.
There’s just one problem: for everyday page moves, nobody needs to care about the GET-versus-POST distinction.
Stick with 302 redirects for temporary moves.
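To make the 301/302 distinction concrete, here is a minimal sketch of a redirect handler built on Python’s standard http.server module. The redirect table and paths are hypothetical:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical redirect table: old path -> (status code, new location).
# Use 301 for permanent moves and 302 for temporary ones.
REDIRECTS = {
    "/old-pricing": (301, "/pricing"),   # page moved for good
    "/shop": (302, "/maintenance"),      # page temporarily unavailable
}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in REDIRECTS:
            code, location = REDIRECTS[self.path]
            self.send_response(code)              # 301 or 302 status line
            self.send_header("Location", location)  # where to send the visitor
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"ok")

# To serve: HTTPServer(("localhost", 8000), RedirectHandler).serve_forever()
```

In production you would normally configure this in the web server (Apache, nginx) rather than application code, but the status codes and the Location header work the same way.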
Avoid these common redirect errors
It’s common to use 302 redirects when you move a page permanently, but it’s a mistake to do so.
Using a 302 redirect in a permanent move is bad because “link juice” doesn’t transfer over a 302. Link juice is the extra authority a page gains when external sites point a link towards that page. Since a 302 is supposed to be temporary, search engines don’t transfer the link juice to the new page.
Developers keep making this mistake because users never notice: whether a developer uses a 301 or a 302, visitors still get redirected. Search engines, however, know the difference, and using the wrong one ruins a page’s ranking potential and can cause severe drops in traffic.
Another frequently committed error is redirecting all of the pages from the old site to the homepage of the new site. Not only does it frustrate users who have to scour the site for a specific page, but it also deprives existing pages of their link juice.
Your site can lose a lot of traffic if you do this because you’ll be hiding “long tail” pages, or pages of a website that cater to highly specific searches.
5. Maximize Crawler Access
Search engines employ “bots” or “spiders” that crawl through established sites and look for useful content.
Crawler access is a frequently overlooked part of SEO because it’s tricky to implement, and its effects are hard to see. Search engines like people to think that they have every piece of information on the internet on record. But even mega-companies have limited resources, so they need to be selective about which pages they index.
Developers can use this to their advantage and steer search engines toward the pages they care about.
The best way to do this is to provide a solid site architecture. Work with your SEO expert as you build the site to ensure that crawlers can find what’s important every time they visit your site.
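One common lever for crawler access is the robots.txt file, which tells crawlers which paths to skip so their crawl budget goes to the pages that matter. Python’s standard urllib.robotparser can test those rules; the rules and URLs below are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: keep crawlers out of low-value pages
# (internal search results, shopping carts) so they spend their
# time on the content you want indexed.
robots_txt = """\
User-agent: *
Disallow: /search
Disallow: /cart
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("*", "https://example.com/blog/seo-basics"))  # True
print(parser.can_fetch("*", "https://example.com/cart"))             # False
```

Checking your own rules this way helps catch an overly broad Disallow line before it hides pages you actually want crawled.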
While SEO can seem intimidating, it’s not as hard as it looks.
It’s true that there are many variables involved in getting good search engine rankings, but keeping the five points above in mind will give you a solid head start.