Developers aren’t responsible for Search Engine Optimization strategy, but they do make sure their websites comply with SEO best practices. That means web developers have the power to make or break the optimization of a page or an entire site.

It’s true, isn’t it? If you’re a developer and you’re reading this, feel free to laugh maniacally. You have complete authority over your destiny.

You control three things: viability, visibility, and site flexibility. This post provides guidelines for all three.

What’s A Developer?

This isn’t a navel-gazing philosophical question.

A web developer connects a website to a database (or whatever passes for one), creates pages using the supplied design, and performs all of the tasks involved in those two jobs.

A web developer does not do web design. They do not write content. If you wear all three hats, tell the designer and content-writer parts of your brain to take a break. This post isn’t for them.


SEO Viability: Actions taken on the server and in early software configuration that prepare a site for future SEO strategy.

Mostly I chose this word because the other two ended with “ility,” and it just works.

Generate And Store HTTP Server Logs

Log files are an SEO source of truth: crawler chaos shows up in log file analysis. Nearly every web server on the planet has an HTTP log file. Someone will inevitably tweet me about a platform that, in defiance of all sense and reason, doesn’t create log files. Fine: more than 99 percent of web servers have some sort of log file.

Happy? Great. Now go make sure your server generates and saves HTTP logs.

In most cases, servers are set up correctly without any configuration. However, just to be safe, make sure log files include:

  • The referring domain and URL, date, time, response code, user agent, requested resource, file size, and request type
  • IP address helps, too
  • Relevant errors

Also make sure that:

  • The server does not remove log files. Someone will need to do a year-over-year analysis at some point. They won’t be able to if your server wipes logs every 72 hours or similar nonsense. Instead, archive the logs. If they’re really big, make the SEO team pay for an Amazon Glacier account.
  • It’s simple to access the logs. I get it if you don’t want your SEOs meddling with the server. But make life easier for yourself and the rest of the development team by making sure HTTP logs are readily available. It will save you time in the long run, and ensure that when you win the lottery and quit, your replacement can find them.

Log files, folks. Love ’em. Keep ’em. Share ’em.
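If your SEO team needs a starting point for log analysis, a few lines of Python will do. This is a minimal sketch assuming the common Apache/nginx “combined” log format; the field names and the sample line are illustrative, so adjust the pattern to match your server’s actual LogFormat.

```python
import re

# Combined Log Format: IP, identity, user, [time], "request", status,
# size, "referrer", "user agent". Adjust to your server's configuration.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse_log_line(line):
    """Return a dict of fields from one combined-format line, or None."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

# A hypothetical Googlebot hit, for illustration:
line = ('66.249.66.1 - - [10/Oct/2023:13:55:36 +0000] '
        '"GET /products/shoes HTTP/1.1" 200 5120 '
        '"https://example.com/" "Googlebot/2.1"')
entry = parse_log_line(line)
```

From here it’s a short hop to counting crawler hits per URL or flagging 404s by user agent.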

Don’t “Turn On” Analytics. Configure It.

Why does everyone assume analytics is a light switch? Simply paste the code, walk away, and bam! You’ve got data.


Before you add that JavaScript, make sure your analytics toolset—Google, Adobe, whatever—can:

  • Track onsite search. People do use the magnifying glass hidden in your site navigation. Onsite query data can teach your SEO (and UX) teams a thing or two. Track it now, rather than apologizing later when you discover you didn’t.
  • Track across domains and subdomains. If your company operates multiple domains or splits content across subdomains, creepily stalk users across all of those properties. Your SEO experts can then see how organic traffic flows from website to website.
  • Filter by IP address. Filter out people from your company, competitors, or the pesky neighbor who keeps asking you for a job. Your analytics users will appreciate a single internal IP filter. Set it up, and they’ll buy you whatever drink you choose, except Southern Comfort.
  • Track events on the page. Even if your analytics team isn’t ready for it yet, put the “hooks” in place now, saving everyone time later.

Is this all SEO stuff? Not exactly. But it all helps the SEO team. Is this your job? Maybe not. But you’re a web developer. You know you’re at the top of the escalation tree for everything from analytics data to printer malfunctions. When they can’t find the data they need, the SEO team will end up at your door.

Consider Your Robots.txt File

Hopefully, you already know all about the robots.txt file. If not, read this guide.

Even if you do, keep in mind:

  • Robots.txt file tells bots not to crawl a URL or page. The page might remain in the search index if it was previously crawled (at least, in my experience)
  • Robots.txt noindex probably won’t work much longer
  • The meta robots tag tells bots not to index a page, and/or not follow links from that page. The bot has to crawl the page to find the tag
  • When you launch the site, remember to remove the robots.txt disallow and the noindex meta tags. Please, gods, please, I beg you
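For reference, a typical pre-launch lockdown looks like this (a sketch; remove it at launch, per the plea above):

```
# robots.txt on the staging site: keep all well-behaved bots out
User-agent: *
Disallow: /
```

The per-page equivalent is a meta robots tag in the `<head>`, such as `<meta name="robots" content="noindex, nofollow">`. Remember: robots.txt blocks crawling; the meta tag blocks indexing, and a bot has to crawl the page to see it.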

Set The Correct Response Codes

Use the right response codes:

200: Everything’s OK, and the resource exists

301: The resource you requested is gone forever. Poof. Look at this other one instead

302: The resource you requested is gone, but it might be back. Look at this other one for now

40x: The resource you requested can’t be found. Oops

50x: Gaaaahhhhh help gremlins are tearing my insides out in a very not-cute way. Nothing’s working. Everything’s hosed. We’re doomed. Check back later just in case

Some servers return 200 or 30x responses for missing resources. This makes Tim Berners-Lee cry salty tears. It also makes me cry, but I don’t matter. Fix it now.

Worse yet, some CMSes and shopping carts ship preconfigured to return a 200 code for broken links and missing resources. When you visit a nonexistent page, the server loads an error page and responds with 200 ‘OK’ instead of a 404, keeping you on that page.

The server then shows the user a “page not found” message. Crawlers index every occurrence of that notice, resulting in significant duplication. But it begins as a response code problem.

Yes, Google claims they’ll eventually figure out whether you meant a 302 or a 301. Keyword: eventually. Don’t wait for Google. Do it right the first time.
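The soft-404 fix is usually a one-line change in your app or server config: send the real status. Here’s a minimal WSGI sketch (routes and copy are hypothetical) showing the difference between a proper 404 and the soft “200 OK” error page described above:

```python
# Minimal WSGI app: unknown paths get a genuine 404 status line,
# not a "page not found" page served with 200 OK.
PAGES = {"/": b"<h1>Home</h1>", "/about": b"<h1>About</h1>"}

def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    body = PAGES.get(path)
    if body is None:
        # The crucial part: the status line says 404, not 200.
        start_response("404 Not Found", [("Content-Type", "text/html")])
        return [b"<h1>Page not found</h1>"]
    start_response("200 OK", [("Content-Type", "text/html")])
    return [body]

def get_status(path):
    """Call the app directly and capture the status it would send."""
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    app({"PATH_INFO": path}, start_response)
    return captured["status"]
```

Whatever your stack, the principle is the same: the error page is cosmetic; the status code is what crawlers act on.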

Configure Headers

I’m not passing judgment on the benefits or drawbacks of each. But before you go live, plan ahead and set them up:

  • last-modified
  • rel canonical
  • hreflang
  • X-Robots-Tag
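A response carrying all four might look like this (the hostname and dates are illustrative, and `X-Robots-Tag: noindex` belongs only on resources you actually want kept out of the index):

```
HTTP/1.1 200 OK
Last-Modified: Tue, 10 Oct 2023 13:55:36 GMT
Link: <https://example.com/products/shoes>; rel="canonical"
Link: <https://example.com/en/products/shoes>; rel="alternate"; hreflang="en"
X-Robots-Tag: noindex
```

The `Link` header forms are especially handy for non-HTML resources (PDFs, images) that can’t carry tags in a `<head>`.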

Other Random Things

Check them off now, so you don’t have to deal with them later:

  • Put your site on an SSD-based server. The performance is superior. Argue if you wish, but a faster server means a faster website, and a faster website makes ranking easier. I’ll explain further when I get to Performance.
  • Be wary of crowded virtual servers. I’m old-fashioned, but putting my site on a server with 900 others gives me a rash. I’m not worried about shared IPs or search reputation. What worries me is what happens when someone else’s infinite loop hiccups my website.

Viability: It’s Like Good Cholesterol

I just learned that I have high cholesterol, which is annoying because I eat carefully and ride 50–100 miles each week.

Moving on: server viability prevents potential blockages by ensuring that your SEO team can get going right away. OK, it’s an inapt metaphor. You get the idea.


Visibility is all about the software: how you construct a website determines whether search engines can find, crawl, and index your material. The site’s construction has a direct impact.

Get Canonicalization Right

Every resource on your site should have a single valid address. Only one. Every page, every image, and so forth.

Canonicalization problems can lead to duplicate content, which consumes crawl budget, dilutes authority, and damages relevance. Don’t take my word for it. Google’s advice is worth reading.

If you follow these recommendations, you’ll avoid 90% of canonicalization problems:

Home Page Has a Single URL

If your domain is example.com, then your home page should “live” at a single address, say https://www.example.com/.

It shouldn’t also respond at https://example.com/, https://www.example.com/index.html, https://www.example.com/default.aspx,

or anything else. Those are all canonically different from the home page URL. Make sure all links back to the home page point to the one canonical address.

Don’t depend on rel canonical or 301 redirects for this. Make sure all internal site references point to the same canonical home page address. Internal links should never route visitors to the home page through a 301 redirect.
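The normalization rule is simple enough to express in code. This sketch assumes the canonical form is “https://” plus the bare domain; your site might standardize on “www.” instead, and the list of home-page aliases is an assumption to tune:

```python
from urllib.parse import urlsplit

# Paths that commonly alias the home page; extend for your platform.
HOME_PATHS = ("", "/", "/index.html", "/index.php", "/default.aspx", "/home")

def canonical_home(url):
    """Collapse common home page variants into one canonical URL."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    if parts.path.lower() in HOME_PATHS and not parts.query:
        return "https://" + host + "/"
    return url  # not a home page variant; leave it alone
```

Run something like this over your internal link inventory and every hit is a link that should be edited at the source, not patched with a redirect.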

Pagination Has One Start Page

Make sure that the link to page one of a pagination tunnel always links to the untagged URL. For example: If you have paginated content that starts at /tag/foo.html, make sure that clicking ‘1’ in the pagination links takes me back to /tag/foo.html, not /tag/foo.html?page=1.

No Hard-Coded Relative Links

Friends don’t let friends create links like this:

<a href="~">

Those can create infinitely-expanding URLs: each click resolves the relative path against an ever-deeper base, so the URL grows without bound.
Unless you want to be the comic relief in an SEO presentation, avoid hard-coding relative links.
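You can watch the expansion happen with the standard library. The URLs below are hypothetical; the point is that the same relative href resolves against a deeper base on every “click”:

```python
from urllib.parse import urljoin

# A hard-coded relative link like href="shoes/red/" on a page that
# itself lives under /shoes/ compounds on every navigation.
url = "https://example.com/shoes/"
for _ in range(3):
    url = urljoin(url, "shoes/red/")
# url is now /shoes/shoes/red/shoes/red/shoes/red/ ... and growing
```

Root-relative (`/shoes/red/`) or absolute URLs don’t have this failure mode.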

No Query Attributes For Analytics

Use direct links, not query attributes, to tag and monitor navigation. Say you have three different links leading to /foo.html. To track which links get clicked, it seems like a good idea to add ?loc=value to each link, then segment your analytics reports by that attribute.

You don’t need to do that. Instead, try out Hotjar, which records where people click and generates scroll, click, and heat maps of your page.

If you really must use tags, use a fragment (#) instead of ?, and configure your analytics software to recognize it, so that ?loc=value becomes #loc=value. Crawlers ignore everything after the hash.

Things to Do

Whether you have canonicalization issues or not, make sure you:

  • Update your preferred domain in Google Search Console and Bing Webmaster Tools (as of right now, you can do this in both).
  • Set rel=canonical for all pages in one fell swoop. It’s better to deal with it ahead of time.
  • Set the canonical HTTP header (Link: rel="canonical") so every URL declares its canonical address.

Quick Fixes

It’s best to fix canonicalization problems at the source: build your website so that each page has a single address.

If you can’t do that, though, use these:

  • Use rel=canonical to point search engines at the best version of a page. It doesn’t cure crawl budget concerns, but it’s something. Make sure you use it properly! Incorrect rel=canonical settings can do more harm than good.
  • Use the Google Search Console URL Parameters tool to filter parameters that result in duplication. Take caution. This tool has a lot of dangers.

Get Canonicalization Right From The Start

Please don’t do these things:

  • Use robots.txt or meta robots to conceal duplicate content. This wrecks the site’s link structure, fails to hide the material, and loses you authority.
  • Point rel=canonical tags for one set of duplicates at different target pages.
  • To delete duplicate URLs, use Google Search Console or Bing Webmaster Tools.

In other words, no shortcuts. Do it correctly the first time.

Pay Attention To Performance

Performance is a tired topic, so I’ll keep it simple. First, a little sermon: page speed is a straightforward improvement that delivers many victories. Faster load times mean higher rankings, sure. But they also mean greater conversion rates and better UX.

The first step is to run Lighthouse. Examine several pages; batch them with the command line. Everything you need is in the Lighthouse GitHub repository.

Lighthouse isn’t flawless, but it’s a helpful optimization checklist. It also checks for accessibility in a handy 2-in-1 package.

Do all the stuff.

Regardless of the test results:

  • If you’re able, use HTTP/2. It has a number of advantages, including increased bandwidth and fewer connection issues.
  • Use hosted libraries. You don’t have to use Google’s, but they are available.
  • Look at code coverage, and start there: trim the fluff from your included files.
  • Compress your photos. Make sure your team uses squoosh. They’ll remember to use it for approximately a day. At that point, either punish them frequently or automate the process with Gulp.
  • Defer render-blocking CSS and JavaScript. Because I told you so.

You might also consider adding page speed modules to your server. I’d never do it; I don’t want Google software running on my own server. But they do have a lot of utility. Make the call based on your needs.

I suggest reading my team’s recent complete guide to page speed, which I’ve linked to below.

A few other quick tips:

Third-Party Scripts

More likely, someone else will add a slew of third-party scripts that degrade site performance. You can at least get off to a good start:

  • When third-party scripts have to load, defer them.
  • Ask the service provider for a minified copy of the script. They’re frequently available.
  • Use CDN versions wherever feasible. For example, you can load jQuery from Google’s CDN.

Use DNS Prefetch

Consider using DNS prefetch if you’re importing assets from a third-party site. It performs the DNS lookup in advance: <link rel="dns-prefetch" href="//cdn.example.com"> (hostname illustrative). This reduces the time it takes to resolve domains.

Use Prefetch

If certain resources on your site are heavily used, consider prefetch (not to be confused with DNS prefetch, above): fetch the most popular resources while the browser is idle instead of waiting until they’re needed. Prefetching during idle time decreases perceived load time.
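Both hints are single tags in the `<head>`. The paths and hostname here are illustrative:

```html
<!-- Resolve a third-party hostname ahead of time -->
<link rel="dns-prefetch" href="//cdn.example.com">

<!-- Fetch a heavily used asset during browser idle time -->
<link rel="prefetch" href="/assets/product-gallery.js">
```

Prefetch the handful of assets your analytics say almost everyone loads next; prefetching everything defeats the purpose.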

Engineer Away ‘Thin’ Content

Build your site so that it isn’t full of ‘thin’ content: pages with little substance and no distinct information.

Avoid these things. Don’t laugh. I still find this kind of stuff in audits all the time:

  • Send-to-a-friend links with unique query attributes
  • Member bios with little or no other valuable information
  • “More reviews” pages that are completely blank or have a very low value. Some websites provide links to separate review pages for each item. That’s handy until there aren’t any reviews, or the bulk of the reviews are useless, such as “great product.”
  • Empty, paged photo galleries. I have no idea how sites manage it, but they somehow do.
  • Tag pages created for tags with a single piece of content

Don’t wait for an SEO to make you go back and fix it. Build to prevent this kind of stuff:

  • Use fragments or similar code if you must have send-to-a-friend links. Crawlers will disregard anything that follows the hash.
  • Exclude member profiles with short or empty bios, or require a minimum-length biography.
  • Don’t create additional review pages unless you have a certain amount of reviews.
  • If the tags have more than N pieces of content, don’t create or link to tag pages. You may choose “N.” Please make sure it’s not “1.”
  • For numerous SKUs, forms, and anything else that might generate thin content, use rel=canonical. This isn’t a fix; it’s just a lousy workaround. However, it’s better than nothing, and it’ll catch things you wouldn’t have caught otherwise.

Use Standard Page Structure

We’ve already dealt with title elements and such, so this is a lot easier. Every page should:

Have a Single H1

Headings may not directly affect rankings, but they communicate page structure. The H1 element represents the top level of the page hierarchy.

Have a single H1 that automatically uses the page headline, whether it’s a product description, an article title, or some other unique page heading. In an H1 element, do not use the logo, photos, or content that repeats across pages.

Make H2, H3, and H4 Available to Content Creators

Allow multiple H2, H3, and H4 elements on the page, and let authors use them. You could allow even deeper nesting, but I’ve found that leads to some rather unusual page layouts.

Use <p> Elements for Paragraph Content, Not Hard Breaks or DIVs

This is common knowledge among developers. Some content producers are oblivious to it: I still see many writers use double line breaks between paragraphs. It’s not easy, but if you can ensure that <p> elements are used for paragraphs, future style changes will be considerably easier.

Use Relevant Structured Data

At a minimum, generate structured markup for:

  • Places
  • Products
  • Reviews
  • People

The HTML5 spec defines the microdata syntax; schema.org provides the vocabularies. JSON-LD is currently the most popular way to add structured data to a page. It’s the simplest, and if you use a tag manager correctly, you can add structured data without editing page code.
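A JSON-LD block for a product might look like this (every value here is a made-up example using schema.org vocabulary):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Trail Running Shoe",
  "image": "https://example.com/images/trail-shoe.jpg",
  "description": "A cushioned shoe for rocky trails.",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "128"
  }
}
</script>
```

Because it’s a self-contained script block, your templates can generate it from the same data that renders the page, with no microdata attributes scattered through the markup.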

Oh, Come On Ian

I can hear you. No need to mutter. You’re saying, “None of this impacts rankings.”

Maybe, maybe not. But a standard page structure improves consistency for every content manager and designer who works on the site. That leads to better site practices, which lead to a better user experience. And that’s good for SEO.

So there.

Put Videos On Their Own Pages

Video libraries are fantastic, but putting all of your videos on a single page makes search engines weep. Give each video its own page, with a description and, if feasible, a transcript. Link each video page back to the library. That gives search engines something to consider when ranking the site.

Generate Readable URLs

Where possible, create URLs that make sense. /products/shoes/running is better than /products?blah=1231323

Readable URLs may not directly impact rankings. But they improve clickthrough: people are more likely to click a URL they can read.

Also, Google bolds keywords in URLs.

Finally, what are you more likely to link to: /products/shoes/running, or /asdf/shoes/?

Use Subfolders, Not Subdomains

Yes, go ahead and insult me. I’ve heard it all before. If you want to discuss it, read this essay first.

Quality material should all live on the same domain. Subfolders work: the blog should be at /blog, and the store at /store or a similar URL. I always get pushback on this one. Google has previously stated that subdomains are fine. Yes, they’re acceptable. They’re not the greatest thing ever. Google claims subdomains are sometimes just as good as separate domains. Sometimes. Not always.

When Googlebot encounters a subdomain, it decides whether to treat it as part of the main site. Google is ambiguous about how, and the outcomes differ. I have no test data, but I can say that in most circumstances, moving material to a subfolder helps, if we interpret ‘most’ as ‘every site I’ve ever worked on.’

Why take a chance when you can take control? Now is the time to use a subfolder, which will save you time and aggravation later.

There are two exceptions to the rule:

  • If you’re working on your reputation, one of the most important things you can do is control as many listings as possible on the first page of a search result. Google frequently ranks subdomain material separately. A subdomain may help you occupy an additional spot on that page.
  • If you’re having difficulties with a lot of low-quality or thin content, consider moving it to a subdomain. You might notice rankings improvements as a result.

The most common reason for subdomains is blogging: the CMS, server, or some other component doesn’t support a blog, so you set one up on a subdomain like blog.example.com (hostname illustrative).

A blog you actually keep updated is better than nothing, right? If you have to do it, though, consider putting everything under one domain using a reverse proxy. And if you truly have no other choice, a blog on a subdomain beats no blog at all.

Don’t Use Nofollow

Don’t. Nofollow was designed to protect you from penalties on links in comments and advertising. It does not help shape how PageRank flows through a site. It just burns PageRank. Using it for sculpting is a terrible idea.

The only time to use nofollow is when you could be penalized for a link: advertising or other paid placements on your site. A good rule of thumb is to ask whether you’re doing something “only” for SEO. Nofollow sculpting is a classic example.

Make Navigation Clickable

Clicking a top-level menu item should take me to something other than ‘/#,’ which, no matter where I click, returns me to the same page.

Top-level nav that expands subnav but isn’t clickable creates three problems:

  • The site’s main navigation is a hidden rollover. It will be given less weight by Google and Bing.
  • You give up a top-level link from every other page on your site. That’s a lot of lost internal authority.
  • Users will likely be dissatisfied with the result.

Make sure clicking any visible navigation takes me somewhere.

Link All Content

I need to be able to reach a page through clickable links for it to be indexed. Forms, JavaScript maps, and so on aren’t enough. If you have a store directory, keep the map and ZIP code search, but also:

Make sure the index has a clickable list of stores I can use to locate them. That also means I can link to it. This rule is especially significant when working with JavaScript frameworks. For further information on that, see Chapter 19.

Don’t Hide Content (If You Want To Rank for It)

Until recently, Google gave no consideration to content that appeared only as a result of user action: content hidden behind tabs, loaded via AJAX on click, and so on.

Now (seriously, Google just changed this) they say they do examine that material and take it into account when determining relevancy. I believe them, but as always, they’ve left out some details:

  • Do they give the same value to material that requires user involvement?
  • Do they distinguish between hidden content (such as tabs) and content that doesn’t load automatically without user interaction?

Oh, also: The old tiny-content-at-the-bottom-of-the-page trick still doesn’t work. That’s not what they meant.

JavaScript & Frameworks

JavaScript isn’t inherently bad for indexing or crawling. Careless JavaScript is bad for SEO.

Instead of typing yet another diatribe about the evils of Javascript, I’ll link to mine and add a few quick notes:

Ask Yourself Why

Before trying to fix the SEO problems caused by frameworks and JavaScript widgets, ask why you’re building the site that way in the first place. If there’s no compelling reason, if the framework doesn’t provide important features, consider doing something else.

Only Hide Content When Essential

This one is hard: don’t hide anything on the page behind a tab, an accordion, or whatever else unless you must. On a well-designed page, people who want to see everything will scroll. If they didn’t want to see it, they weren’t going to click the tab anyway.

Don’t Deliver Content Based on User Events

Don’t deliver content via a user event if you want Google to index it. Yes, Google says they now index material that becomes visible after a user action. Play it safe anyway, if you can.

Show Content Before the Load Event

Examine your site’s HAR. Anything that renders after the ‘load’ event is unlikely to be indexed.

Make sure whatever you want indexed appears before then.

Use Indexable URLs

See Link All Content, above. URLs with /#! and similar won’t get crawled. Google deprecated that as an indexing method.


If you must use JavaScript content delivery, try to mitigate the damage.


Site Flexibility

This is the most overlooked part of SEO. No one gives it any thought. Nobody. Content managers, analysts, designers, and other non-developers make never-ending modifications and changes. If they can’t do the work themselves, they flood the resource-strapped development team with requests.

SEO grinds to a halt, and organic performance falls.

If developer time is no object for you, no worries. You may skip the rest of this post. Return to feeding your rainbow-crapping pet unicorn.

Otherwise, keep reading this relatively brief section.

Have One, Editable Title Tag on Each Page

The title element is a strong on-page organic ranking signal.

  • Each page must have one <title> element
  • It must be a separate, editable field. Have the title element default to the page headline, but make it separately editable
  • The ideal title tag is currently around 60 characters long, but don’t hard-code a limit; the ideal length fluctuates from day to day. Because it’s the finest thing since Nestle KitKats, have your users check titles with the Portent SERP Preview Tool. Right? Right? RIGHT?

Make Meta Tags Editable in the CMS

The meta keywords tag is obsolete; no one has used it seriously in years. Get rid of it. If your SEO insists on keeping it, find a new SEO. With that out of the way, be sure to include the following editable meta tags on each page.


Meta Description

Every page should have a description meta tag that can be edited. The description tag has no impact on rankings. It does, however, affect clickthrough rates, which can grow organic traffic even if rankings don’t improve. Make the description tag a separate, editable field, like the title tag.

On product pages, the description tag should default to the short product description. If the page is a longer document, have the description tag default to the first 150 characters of page content. Never ship a blank meta description! If you do, Google and Bing will choose whatever they think is best. Don’t rely on them.
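The default-description logic described above fits in a few lines. This is a sketch: the 150-character budget, the field names, and the dict-shaped page are all assumptions about a hypothetical CMS:

```python
MAX_DESCRIPTION = 150

def default_meta_description(page):
    """Prefer an explicit description, then short product copy, then the
    first 150 characters of body content. Never silently return an empty
    string; raise instead, so a blank description can't ship."""
    for field in ("meta_description", "short_description"):
        text = (page.get(field) or "").strip()
        if text:
            return text[:MAX_DESCRIPTION]
    body = " ".join((page.get("body") or "").split())  # collapse whitespace
    if body:
        return body[:MAX_DESCRIPTION]
    raise ValueError("page has no content to build a meta description from")
```

The raise-instead-of-blank choice is deliberate: it turns a silent SEO gap into a loud build error.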

Open Graph Protocol (OGP)

Facebook uses OGP tags to construct the title, text, and image of shared content. Without them, Facebook picks a title, description, and picture on its own, and it may choose badly. OGP tags let you control what appears on Facebook and, like the meta description tag, can help improve clickthrough rates.

The OGP tags should default to the page’s title, meta description, and featured image; then let the author modify them. At a bare minimum, include og:title, og:type, og:image, and og:url.
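The minimum set looks like this in the `<head>` (all values here are made-up examples; your templates should populate them from the page’s own fields):

```html
<meta property="og:title" content="Trail Running Shoe">
<meta property="og:type" content="product">
<meta property="og:image" content="https://example.com/images/trail-shoe.jpg">
<meta property="og:url" content="https://example.com/products/trail-shoe">
<meta property="og:description" content="A cushioned shoe for rocky trails.">
```

Defaulting these from the title, meta description, and featured image means authors only touch them when they want a share to look different from the page.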

Twitter Card Markup

Twitter card markup gives you more control over your Twitter audience. It isn’t strictly necessary, because Twitter falls back to OGP tags. But if you can include it, it gives content producers even greater control over what Twitter displays when material is tweeted.

Twitter cards can increase clicks and other engagement. They’re well worth the effort.

Make Image ALT Attributes Editable in the CMS

Another significant ranking signal is the ALT attribute. When a user uploads an image as part of page content, the ALT text must be editable. If they don’t enter one, default to:

  • “Image:” + product name, if this is a product page
  • “Image:” + image caption, if entered
  • “Image:” + file name, otherwise

I recommend including the “Image:” prefix in the generated text so that users of screen readers and other assistive devices know they’re hearing an ALT attribute.
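The fallback chain above, as a sketch. The field names (“is_product_page”, “caption”, and so on) are assumptions about a hypothetical CMS:

```python
def default_alt_text(image, page):
    """Pick ALT text for an image: author-entered text wins, then the
    product name, then the caption, then the file name as a last resort."""
    if image.get("alt"):
        return image["alt"]
    if page.get("is_product_page") and page.get("product_name"):
        return "Image: " + page["product_name"]
    if image.get("caption"):
        return "Image: " + image["caption"]
    return "Image: " + image["filename"]
```

Because the function always returns something, no image ever ships with an empty ALT attribute.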

Keep Your CSS Clean

Classes can lead to problems. Use semantic markup wherever feasible: in place of a classed element like <div class="h2">, use an actual <h2>.