The 5-Point Technical SEO Checklist For Bloggers

1. Crawl your website with Screaming Frog and clean your untidy URL structure

● Start by crawling your website with Screaming Frog (the free version is limited to 500 URLs).

● It will fetch the key on-site elements, surfacing duplicate page titles, duplicate meta descriptions and your URL structure.

● To get started, enter your root domain in the blank box and press the start button.

● How long the tool takes to process results depends on the size of your website. There’s a progress bar at the top that you can check.

● Once the crawl is complete, start with the ‘Response Codes’ tab. It shows you the HTTP status code of each URL.

● You should also cross-check your website’s crawl errors by logging into your Google Webmaster Tools account. Here’s a guide on fixing various kinds of crawl errors.

● Next, check the tool’s URL tab and sort the column by length.

● A good URL is short (4 to 5 words) and describes the content behind it.

● Spot any unnaturally long URLs, and URLs with non-ASCII characters, that can cause indexation problems.

● Screaming Frog has filters to sort such URLs at the click of a button.
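● To illustrate, here’s a hypothetical before-and-after (both URLs are made up for this example):

```
Hard to crawl and index:
https://www.example.com/index.php?id=438&cat=7&sess=9F2Cæøå

Short and descriptive:
https://www.example.com/technical-seo-checklist/
```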

● Ensuring a compelling, punchy title, appropriately sized images and a persuasive meta description is part of on-page SEO.

● The last, and a critical, feature of the tool is creating an XML sitemap – a roadmap that shows search engines how your content is organized.

● You’ll find the feature in the top navigation bar, under ‘Create XML Sitemap’.

● A large website may need multiple sitemaps, because a single XML sitemap can’t contain more than 50,000 URLs or exceed 50MB.
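● If you do split your sitemap, a sitemap index file ties the pieces together. Here’s a minimal sketch that follows the sitemaps.org protocol (the file names are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Each entry points to one child sitemap, itself capped at 50,000 URLs / 50MB -->
  <sitemap>
    <loc>https://www.example.com/sitemap-posts.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-pages.xml</loc>
  </sitemap>
</sitemapindex>
```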

● Once created, you can submit the sitemap in Search Console under “Crawl >> Sitemaps >> Add/Test Sitemap.”

● It’s also advisable to create an HTML sitemap that your users can read (it helps search engines too).

● If you’re on WordPress, you can use Yoast’s SEO plugin to create an HTML sitemap.

● In 2007, Google and Yahoo agreed on a system of sitemap auto-discovery, so you don’t have to submit your sitemap to every search engine individually.

● So you also need to declare your sitemap’s location in your website’s robots.txt file.

● This article explains how to create a robots.txt file that includes your sitemap location.
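● For reference, the declaration itself is a single line. A minimal robots.txt sketch (assuming your sitemap sits at the root of your domain):

```
# Allow all crawlers, then point them to the sitemap
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```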

● Pro Tip: Silo your website content to help search engines understand what your website is about. This means breaking your content into easily understandable categories, with every page on your website accessible within 3-4 clicks.

● Pages buried deeper than 4 clicks may have difficulty getting crawled, so use the right filters and faceted navigation.

● And you might just narrow down millions of results to the essential few in a couple of clicks.

2. Properly implement redirects and fix your broken links to optimize your crawl budget

● Let’s start with understanding how to use the correct type of redirect. Many webmasters confuse the two:

❏ 301 – a permanent redirect that passes 90-99% of the original page’s authority.

❏ 302 – a temporary redirect that tells search engines the original page will be restored soon.

● If you use a 302 redirect where a 301 belongs, you’re going to confuse search engines. And if that 302 stays in place for a while, you’ll lose some traffic.

● As a rule of thumb: if you no longer want to keep a page on your website, 301 redirect it to a relevant, up-to-date one.
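● How you set up a 301 depends on your server. If your blog runs on Apache, a minimal .htaccess sketch looks like this (the paths are hypothetical); WordPress users can get the same result with a redirect plugin instead of editing server files:

```apache
# Permanently send visitors and link equity from the retired page to its replacement
Redirect 301 /old-post/ https://www.example.com/updated-post/
```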

● You should also pay special attention to the pages that return a 404 error code (found through Google Webmaster Tools and the Screaming Frog crawl from point 1).

● They might be pages that:

❏ have been moved to new locations,

❏ were incorrectly linked to by other webmasters,

❏ have been deleted from your website.

● The bottom line is that these incorrect URLs waste Google’s time crawling non-existent content on your website. How?

● Google allocates a crawl budget to every website based on its PageRank (an internal metric used by Google’s search team, not the public score you see).

● As per Botify logs analyzer, the crawl ratio is 81% for large websites and only 11% for smaller websites.

● So if you waste your valuable budget on links that don’t even exist on your website, then the bandwidth to scan important URLs will become limited.

● So either create a user-friendly 404 page that shows your brand’s personality.

● Or 301 redirect the 404 error page to a relevant page on your website for preserving link juice.
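● If you go the custom 404 route and you’re on Apache, one directive points the error code at your branded page (the file name here is just an example):

```apache
# Serve the custom, branded 404 page whenever a URL isn't found
ErrorDocument 404 /404.html
```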

● To ensure that Google crawls your website’s most important URLs, you need meaningful, user-friendly internal linking.

● So use keywords naturally and channel your users, as well as bots, to other relevant pages from within an article.

● You’ll build keyword relevancy and vastly improve your website’s crawlability.
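● In practice, that’s nothing more than descriptive, keyword-relevant anchor text pointing to a related post. A made-up example:

```html
<!-- The anchor text tells both readers and crawlers what the linked page is about -->
<p>Slow pages hurt rankings too, so read our guide to
  <a href="/speed-up-your-blog/">speeding up your blog</a> next.</p>
```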

● If you want to determine which pages are the most important (based on internal links), you can access your internal link report inside Google’s Search Console.

● Read more tips for crawl budget optimization here.

3. Remove junk from your code and ensure your site loads fast

● Let’s start by understanding the issues plaguing your website with Google’s PageSpeed Insights tool.

● Just plug your URL inside the tool and hit the “Analyze” button.

● A score above 80 is fine. But you should consider fixing the issues presented by the tool.

● If you want to know exactly how long your website takes to load, go to Pingdom and run their speed test.

● You’ll get your website’s load time along with the page size, grade and the number of requests your website is sending.

● If your website takes 7+ seconds to load, you need to work through the tool’s recommendations.

● Start by installing CloudFlare – a free Content Delivery Network. It can save 60% of your bandwidth and substantially reduce the number of requests your website has to serve.

● Next, WordPress users can install a cache plugin like W3 Total Cache.
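● If you’re curious what part of that caching boils down to (or you’re not on WordPress), browser caching is largely a matter of sending expiry headers for static assets. A minimal Apache sketch, assuming mod_expires is enabled:

```apache
<IfModule mod_expires.c>
  # Let browsers reuse static assets instead of re-downloading them on every visit
  ExpiresActive On
  ExpiresByType image/png              "access plus 1 month"
  ExpiresByType image/jpeg             "access plus 1 month"
  ExpiresByType text/css               "access plus 1 week"
  ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```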

● Note that improving your website’s load speed on mobile and desktop involves different steps.

● Since a great mobile experience is equally important, check out some tips for speeding up your mobile website in this mobile landing page article.

4. Resolve duplicate and thin content issues

● Start by scanning your website with Siteliner. It will crawl up to 250 pages and show you the duplicate content, sorted by percentage match.

● You can also use Google Search Console to check for duplicate content issues.

● Head over to “Search Appearance >> HTML Improvements” and click on the number to see the specific duplicate content cases.

● Depending on your situation, you can resort to any of the following 3 steps for resolving duplicate content issues:

1. Implement “noindex” tags – If you’re facing duplicate content issues due to pagination, you can set the meta robots tag value to “noindex, follow”. This ensures that search engines crawl the links on the specified page but don’t include the page itself in their index (see the snippet after this list).

2. Implement a rel=“canonical” tag in the <head> of the duplicate page – This is a way of showing Google the preferred version of your page.

● And it resolves the unintentional duplicate content issues created as a result of session IDs, affiliate codes and URL parameters.

● Here are 3 URLs that will take a visitor to the same page, but might get indexed as separate pages in search engines and lead to a penalty:

❏ neilpatel.com/2015/09/27/13-secrets-thatll-boost-your-facebook-organic-reach/

❏ neilpatel.com/2015/09/27/13-secrets-thatll-boost-your-facebook-organic-reach/?sessionid=5349312

❏ neilpatel.com/2015/09/27/13-secrets-thatll-boost-your-facebook-organic-reach?ref=email

3. Delete or reduce the quantity of duplicate content – This is the simplest solution, if it’s feasible for you to implement.
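● For reference, here’s what the tags from options 1 and 2 look like – a minimal sketch, using one of the URLs above as the preferred version:

```html
<!-- Option 1: on a paginated or parameter-based page you want crawled but not indexed -->
<meta name="robots" content="noindex, follow">

<!-- Option 2: inside the <head> of the duplicate URL, pointing to the preferred version -->
<link rel="canonical" href="https://neilpatel.com/2015/09/27/13-secrets-thatll-boost-your-facebook-organic-reach/">
```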

5. Get the edge with rich snippets in SERP

● You can get started with Google’s Structured Data Markup Helper. It’ll take you step by step through the process of tagging the data on a URL.

● You can go through the detailed schema markup tutorial at KISSMetrics.

● If you’re on WordPress, the Schema Creator plugin can also help you feed in some of the values.
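● To give you an idea of the end result, here’s a minimal JSON-LD Article markup sketch you could place in a post’s <head> (all of the values are placeholders to swap for your own):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "The 5-Point Technical SEO Checklist For Bloggers",
  "author": { "@type": "Person", "name": "Your Name" },
  "datePublished": "2016-01-15",
  "image": "https://www.example.com/images/checklist-cover.jpg"
}
</script>
```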