SEO Monitoring for Your Developers

5th June 2019

By Ben Goodsell, lead SEO and owner of Tight Ship Consulting
And Clara Li, Product Manager, GrubHub

Over the last few years, the SEO industry has seen best-practice knowledge transfer from agencies to in-house teams, and to a large extent into the website development process itself. Especially for those of us working on big brands or larger websites, an SEO's modern-day responsibilities no longer stop at adapting our own workflows to ever-changing best practice. They also include ensuring the data, and the information consumed from it, are:

  1. Comprehensive and reflective of SEO best practices.
  2. Integrated into developer education and monitoring processes.
  3. Used to make informed decisions in times of catastrophic crisis.

This article is a breakdown of how we leveraged Botify to generate comprehensive data according to the latest best practices, and how that data informed critical decisions both pre- and post-production, ultimately leading to huge savings in developer resources and avoided revenue loss.

Having critical technical SEO monitoring in place is what establishes the business case for building out a system and process that identifies and fixes issues quickly. It often takes illustrating that a drop in organic traffic was preventable, and framing the lost revenue as ongoing cost savings in an executive-level business case. We hope this article provides an outline to help you take advantage of these opportunities and establish minimum requirements for SEO monitoring and developer collaboration.

1. SEO data for decision making should be comprehensive and reflect best practices.

Producing quality, comprehensive data and setting a cadence for generating it is the foundation for success. That means answering the following questions:

  • How many URLs need to be crawled to get a comprehensive view of all links on the site?
  • Are there top performing pages generating enough value to justify an additional project for daily checks on a set list of URLs?
  • Does the site have pre-production or staging servers that can be checked for issues before changes are pushed live?
  • Are you using a prerender service, or is the server configured to change HTML depending on the user agent requesting a page (Googlebot Smartphone vs. Desktop)?
  • Does the site generate content or links via JavaScript?

Answers to these questions will inform the ideal setup and configuration of crawls and data pulls.

Screenshot 1

How many URLs need to be crawled to get a comprehensive view of links on the site?

Comprehensive crawls help quantify the value of internal linking changes and are a key starting point for identifying which pages are linked on the site. With log files integrated, they also provide a comparison view of the pages linked on the site versus what search engines like Google actually find, answering questions such as “If this page is linked to on the site, why isn’t Googlebot crawling it?” and “Shouldn’t this page be linked to on the site if Googlebot is crawling it?”
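
If you want a quick, tool-agnostic way to sanity-check that comparison, two exported URL lists and a few lines of Python are enough. This is a minimal sketch assuming you have one file of URLs discovered by your crawler and one of URLs Googlebot requested according to your access logs; the file names are placeholders.

```python
# Minimal sketch: compare URLs found by a site crawl against URLs Googlebot
# actually requested (from server logs). File names and formats are assumptions;
# adapt them to however you export the two lists.

def load_urls(path):
    """Read one URL per line, ignoring blank lines."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

crawled = load_urls("crawl_urls.txt")            # URLs discovered via internal links
googlebot = load_urls("log_googlebot_urls.txt")  # URLs Googlebot requested, per the logs

linked_but_not_crawled = crawled - googlebot     # "Why isn't Googlebot crawling it?"
crawled_but_not_linked = googlebot - crawled     # "Shouldn't this page be linked?"

print(f"Linked on site but never crawled by Googlebot: {len(linked_but_not_crawled)}")
print(f"Crawled by Googlebot but not linked on site:   {len(crawled_but_not_linked)}")
```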

Screenshot 2

Are there top performing pages generating enough value to justify an additional project for daily checks on a set list of URLs?

At a quick glance, Botify’s latest DataModel includes 3,628 data points, but they’ve rolled up the most important into one: non-indexable.

A page is non-indexable if it’s no longer eligible, or significantly less eligible, to perform in search results. A quick daily check on top performing pages complements comprehensive crawls, which are typically scheduled weekly or bi-weekly and can take considerable time and server resources.

If a spike is spotted, or an issue is found to have occurred, the daily crawl can often save a few days of lost revenue, and it comes in handy for retroactively identifying the specific date an issue was introduced. A comprehensive crawl can then be used to understand the issue at scale.
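
Outside of a dedicated crawler, the daily spot check on a short, hand-picked list of top performers can also be covered by a lightweight script. This is a minimal sketch using the requests and BeautifulSoup libraries; the URL list is a placeholder, and the three signals checked (status code, robots meta tag, canonical) are only the most basic non-indexability causes.

```python
# Minimal daily check sketch for a hand-picked list of top performing URLs.
# It flags the most basic non-indexability signals: a non-200 status, a noindex
# robots meta tag, or a canonical pointing elsewhere. The URL list is a placeholder.

import requests
from bs4 import BeautifulSoup

TOP_URLS = [
    "https://www.example.com/",
    "https://www.example.com/top-category/",
]

def indexability_issues(url):
    """Return a list of human-readable issues found for the URL."""
    resp = requests.get(url, timeout=10)
    issues = []
    if resp.status_code != 200:
        issues.append(f"status {resp.status_code}")
    soup = BeautifulSoup(resp.text, "html.parser")
    robots = soup.find("meta", attrs={"name": "robots"})
    if robots and "noindex" in robots.get("content", "").lower():
        issues.append("noindex meta tag")
    canonical = soup.find("link", rel="canonical")
    if canonical and canonical.get("href", "").rstrip("/") != url.rstrip("/"):
        issues.append(f"canonical points to {canonical.get('href')}")
    return issues

for url in TOP_URLS:
    problems = indexability_issues(url)
    if problems:
        print(f"ALERT {url}: {', '.join(problems)}")
```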

Does the site have pre-production or staging servers that can be checked for issues before changes are pushed live?

Depending on the site, developers might batch multiple commits, or changes, and push them to a staging server before they go live on the production site.

Catching critical issues like non-indexable pages, significant internal linking changes, or canonical changes before they go live is an obvious win.

Are you using a prerender service or is the server configured to change HTML depending on user agent requesting a page?

In the wake of mobile-first indexing over the last couple of years, the Googlebot Smartphone user agent has taken over, but Google still uses the desktop crawler too! While one or the other may carry more weight for indexing and ranking signals, it’s safest to keep an eye on both and continually verify parity between mobile and desktop HTML.
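
A rough way to spot-check that parity is to request the same URL with a smartphone and a desktop Googlebot user agent and diff a few indexing-relevant fields. The sketch below is illustrative only: the user agent strings and the compared fields are assumptions, and a prerender service that verifies Googlebot by IP may respond differently to a script like this than to the real crawler.

```python
# Rough parity check: request the same page as Googlebot Smartphone and as
# desktop Googlebot, then diff a few signals that matter for indexing. The user
# agent strings and compared fields are illustrative assumptions.

import requests
from bs4 import BeautifulSoup

USER_AGENTS = {
    "smartphone": (
        "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36 "
        "(compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    ),
    "desktop": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

def snapshot(url, user_agent):
    """Fetch the page with a given user agent and pull out a few key signals."""
    html = requests.get(url, headers={"User-Agent": user_agent}, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    canonical = soup.find("link", rel="canonical")
    return {
        "title": soup.title.string.strip() if soup.title and soup.title.string else None,
        "canonical": canonical.get("href") if canonical else None,
        "link_count": len(soup.find_all("a", href=True)),
    }

url = "https://www.example.com/top-category/"  # placeholder
mobile = snapshot(url, USER_AGENTS["smartphone"])
desktop = snapshot(url, USER_AGENTS["desktop"])
for field in mobile:
    if mobile[field] != desktop[field]:
        print(f"Parity gap on {field}: mobile={mobile[field]!r} vs desktop={desktop[field]!r}")
```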

Does the site generate content or links via JavaScript?

At Google I/O last year, John Mueller confirmed what many of us already suspected: Google typically uses the initial HTML from the server for indexing, then runs a second round of rendering once there are enough computing resources to execute the page’s JavaScript.

“We crawl a page, we fetch the server-side rendered content and then rerun some initial indexing on that document but rendering the JavaScript powered web pages takes processing power and memory. And this effectively means that if the site is using a heavy amount of client-side JavaScript for rendering, you could be tripped up at times when your content is being indexed due to the nature of this two phase indexing process. And so ultimately, what I’m really trying to say is: because Google’s Googlebot actually runs two waves of indexing across web content, it’s possible some details might be missed.”

With this in mind, it’s essential to understand variance in compliance signals and how content and links change between HTML-only and JavaScript crawling, and to advocate to developers for parity between the two.
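
To see how much of a page depends on that second, JavaScript-rendering wave, you can compare the raw server HTML against the rendered DOM. This is a hedged sketch assuming a headless browser via Playwright (our choice for illustration, not something the original setup requires); a large gap in links or visible text between the two versions is a signal worth raising with developers.

```python
# Sketch comparing the raw server HTML (what the first indexing wave sees) with
# the rendered DOM (what the second, JavaScript wave sees). Assumes Playwright
# is installed: pip install playwright && playwright install chromium.

import requests
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright

def links_and_words(html):
    """Count links and visible words as a rough content footprint."""
    soup = BeautifulSoup(html, "html.parser")
    return len(soup.find_all("a", href=True)), len(soup.get_text(" ", strip=True).split())

url = "https://www.example.com/js-heavy-page/"  # placeholder

raw_html = requests.get(url, timeout=10).text   # initial HTML, no JavaScript executed
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(url, wait_until="networkidle")
    rendered_html = page.content()              # DOM after client-side JavaScript runs
    browser.close()

raw_links, raw_words = links_and_words(raw_html)
js_links, js_words = links_and_words(rendered_html)
print(f"Links: raw {raw_links} vs rendered {js_links}")
print(f"Words: raw {raw_words} vs rendered {js_words}")
```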

2. Integrate into developer education and monitoring processes.

Even with comprehensive best-practice knowledge applied to the configuration and generation of data, incorporating it into developer workflows is what helps justify the cost and quickly prove value. Developers are intimately familiar with the site’s infrastructure and are often the only route to fixing the root causes of issues, helping to offset the all-too-common band-aid solutions. Once you have created an ideal SEO monitoring setup, use it as an opportunity to collaborate with developers.

Workshop a Sweet Name and Establish Process

  1. Start efforts with a brainstorm of fun team names to represent the collaboration of SEO and developers on technical SEO checks, troubleshooting, and relevant industry news.

Screenshot 3

  2. Set up a rotation of all team members to take turns looking through established SEO checks.
  3. Create a Slack channel, similar group chat, or email distribution list for communication.
  4. When a major issue is identified and fixed, establish a process for drafting a write-up, reviewing it, and communicating it to the larger company or executive level.
  5. Establish a weekly or bi-weekly meeting to go over ways to improve the process.

Establish Thresholds for Critical Technical SEO KPIs by Page Type

Start out by creating a Custom Report in Botify, run through why each metric was included, think through the anticipated thresholds for key metrics, then work together over time to improve them. These Custom Reports can be made specific to each page type and scheduled via email. Bonus: create (or have the developers create) a Slack alert to notify the team that an updated report is available; a minimal sketch follows below.

Screenshot 4
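
For the Slack bonus mentioned above, a standard Slack incoming webhook is usually enough. This is a minimal sketch; the webhook URL and the report link are placeholders.

```python
# Minimal sketch of the Slack nudge mentioned above, using a standard Slack
# incoming webhook. The webhook URL and the report link are placeholders.

import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def notify(report_name, report_url):
    """Post a short message to the team channel when a report refreshes."""
    text = f":bar_chart: Updated SEO report available: *{report_name}*\n{report_url}"
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)

notify("Weekly crawl - Product pages", "https://example.com/link-to-custom-report")
```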

Create a shared Google Doc or Sheet that can be used to help establish the workflow and can be improved over time. Link out both to a Botify Custom Report containing graphs and to the page-level detail (Botify URL Explorer) to make it quick and easy to dig in. Note: be sure to take sampling into consideration, especially when linking out to Analytics reports.

Consider establishing thresholds by page type for important KPIs. This will make it easier for the team to raise a flag when something is off. Use year-over-year percent change to avoid flagging normal seasonality; a simple check is sketched after the examples below.

Threshold Examples:

  • Logs Organic Visits: 35K < x < 45K
  • Analytics Visits: 45K < x < 55K
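
Building on the examples above, a threshold check can be as simple as a small script the rotation owner (or a scheduled job) runs against the latest numbers. The page types, metric names, and bounds below are assumptions; the year-over-year comparison is included so normal seasonality doesn’t trip the alarm.

```python
# Threshold check sketch for the examples above. Page types, metric names, and
# bounds are assumptions; in practice the numbers would come from your logs,
# analytics export, or the Botify API.

THRESHOLDS = {
    # page type -> {metric: (lower bound, upper bound)}
    "product": {"log_organic_visits": (35_000, 45_000)},
    "category": {"analytics_visits": (45_000, 55_000)},
}

def check_thresholds(page_type, metrics, last_year_metrics):
    """Return alert strings for any metric outside its expected range."""
    alerts = []
    for metric, (low, high) in THRESHOLDS.get(page_type, {}).items():
        value = metrics.get(metric)
        if value is None:
            continue
        if not low <= value <= high:
            # Show year-over-year change so normal seasonality doesn't trip the alarm.
            last_year = last_year_metrics.get(metric, value)
            yoy = (value - last_year) / max(last_year, 1)
            alerts.append(
                f"{page_type}/{metric}={value:,} outside [{low:,}, {high:,}] (YoY {yoy:+.1%})"
            )
    return alerts

print("\n".join(check_thresholds(
    "product",
    metrics={"log_organic_visits": 28_000},
    last_year_metrics={"log_organic_visits": 41_000},
)) or "All clear")
```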

Top KPI examples:

  • Avg Ratings Average
  • Avg Title Length
  • Count Active from Bing from Desktop Devices Yes
  • Count Active from Bing from Mobile Devices Yes
  • Count Active from Bing from Tablet Devices Yes
  • Count Active from Google from Desktop Devices Yes
  • Count Active from Google from Mobile Devices Yes
  • Count Active from Google from Tablet Devices Yes
  • Count of Blank Pages
  • Count Canonical Points to Self Yes
  • Count In Sitemap Yes
  • Count URL is Part of Redirect Loop Yes
  • Sum of No. of Crawls from Bing Search Bot (Logs)
  • Sum of No. of Crawls from Bing Search Bot With Bad HTTP Status Code (Logs)
  • Sum of No. of Crawls from Google Search Bot (Logs)
  • Sum of No. of Crawls from Google Search Bot With Bad HTTP Status Code (Logs)
  • Sum of No. of Crawls from Google Smartphone Bot (Logs)
  • Sum of No. of Crawls from Google Smartphone Bot With Bad HTTP Status Code (Logs)
  • Sum of No. of Duplicate Title (Among All URLs)
  • Sum of No. of Redirection Hops To Ultimate Destination
  • Sum of No. of Similar Pages (Score >= 90%)
  • Sum of No. of Visits from Bing (Logs)
  • Sum of No. of Visits from Bing from Desktop Devices
  • Sum of No. of Visits from Bing from Mobile Devices
  • Sum of No. of Visits from Bing from Tablet Devices
  • Sum of No. of Visits from Bing with Bad HTTP Status Code (Logs)
  • Sum of No. of Visits from Google (Logs)
  • Sum of No. of Visits from Google from Desktop Devices
  • Sum of No. of Visits from Google from Mobile Devices
  • Sum of No. of Visits from Google from Tablet Devices
  • Sum of No. of Visits from Google with Bad HTTP Status Code (Logs)
  • Sum of Number of Ratings
  • Sum of Number of Reviews

But I’m just a human! Can’t robots do this? Please?

You might be thinking to yourself, “There’s not enough time in the world to run through all those checks for multiple sites.” Good point! Another advantage of working with developers is that they can help automate these checks. If you have a Business Intelligence or Data team, get them involved to pull data from the Botify API into a database, automate threshold checks, and create custom views.
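
As a sketch of what “API to database” can look like, the script below pulls a few fields per URL and stores them in SQLite so BI tooling can query the history. The endpoint path, parameters, and field names follow the general pattern of Botify’s URL data API but should be verified against the current API documentation; the token and the org/project/analysis slugs are placeholders.

```python
# Hedged sketch of the "API to database" idea: pull a few fields per URL from
# the Botify API and store them in SQLite so BI tooling can query the history.
# Verify the endpoint path, parameters, and field names against the current
# Botify API docs; the token and slugs below are placeholders.

import sqlite3
import requests

TOKEN = "YOUR_BOTIFY_API_TOKEN"
ORG, PROJECT, ANALYSIS = "your-org", "your-project", "20190601"

resp = requests.post(
    f"https://api.botify.com/v1/analyses/{ORG}/{PROJECT}/{ANALYSIS}/urls",
    headers={"Authorization": f"Token {TOKEN}"},
    params={"size": 1000},
    json={"fields": ["url", "compliant.is_compliant", "http_code"]},
    timeout=30,
)
resp.raise_for_status()
rows = [
    (r.get("url"), r.get("compliant", {}).get("is_compliant"), r.get("http_code"))
    for r in resp.json().get("results", [])
]

# Append today's snapshot so threshold checks and custom views can query trends.
conn = sqlite3.connect("seo_monitoring.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS url_checks (url TEXT, is_indexable INTEGER, http_code INTEGER)"
)
conn.executemany("INSERT INTO url_checks VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()
```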

The process outlined above gets everyone on the same page about how changes made to the website can affect organic search and establishes a unique set of applicable critical SEO KPI checks.

Our developers innovated further and created a severity score (“sev-score”) based on how many pages were affected, the percent loss of organic traffic, and so on. They even have it displayed on flat screens, so anyone walking in the door can see at a glance how each site is performing for SEO.
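
The exact formula behind that sev-score is internal to the team, but the idea is easy to sketch: weight the share of pages affected and the organic traffic drop, then bucket the result into severity levels. The weights and cutoffs below are purely hypothetical.

```python
# Hypothetical stand-in for a sev-score: weight the share of pages affected and
# the organic traffic drop, then bucket the result. Weights and cutoffs are made up.

def sev_score(pct_pages_affected, pct_traffic_drop):
    """Both inputs are fractions, e.g. 0.25 for 25%."""
    score = round(0.4 * pct_pages_affected + 0.6 * pct_traffic_drop, 2)
    if score >= 0.30:
        return score, "SEV-1 (all hands)"
    if score >= 0.10:
        return score, "SEV-2 (fix this sprint)"
    return score, "SEV-3 (backlog)"

print(sev_score(pct_pages_affected=0.20, pct_traffic_drop=0.40))  # (0.32, 'SEV-1 (all hands)')
```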

3. Make informed decisions in times of catastrophic crisis.

Having a blue-sky SEO monitoring process in place will help identify and fix issues as they arise, limiting potential organic loss. All too often, though, a system hasn’t been put in place, or a new, unanticipated issue occurs. The ability to retroactively look back at reports containing comprehensive data is most critical at these times.

In this example, we were leveraging a prerender service to serve Googlebot the post-JavaScript-execution HTML of our pages when a tweak to its settings broke the display of content.

“A massive number of our pages got de-indexed by Google in March 2018, which was caused by Prerender change to clear local storage after each load which broke pages. While it was not reported by our internal monitoring system, this was first reported in Botify’s custom report of blank pages.”

While this was around the time Google said they would stop supporting the old AJAX crawling scheme, it turned out to be caused by a new prerender configuration (and yes, we had been working on moving away from this deprecated method for months 🙂).

Initial indications of a problem were found in our log data showing Googlebot not crawling these escaped_fragment pages.

Screenshot 5

Google stopped including affected top performing pages in search results.

Screenshot 6

Ultimately this resulted in an 8% to 40% drop in organic sessions across sites.

Screenshot 10

Though it shows only a small sample of the larger issue, the following Botify custom view allowed us to pull a list of problem URLs and, more importantly, to identify the exact date range the issue was present, helping to resolve it quickly with minimal loss in revenue. The savings in lost revenue were clear, but narrowing the timeframe also saved the developers a significant amount of time searching through weeks of commits to find the bug.

Screenshot 8

Screenshot 9

Possibly more significantly, it led to executives asking what we could do to prevent this from happening in the future and how we could catch and fix issues as fast as possible. As a result, we were able to set up an extensive monitoring system with the help of top-notch developers and the support of management.

We hope this article provides an answer and an outline for others to take advantage of these opportunities. For many, organic revenue is taken for granted. If you haven’t already, it’s time to communicate the need for SEO Insurance, and transfer your SEO knowledge into the data, developer collaboration, and processes of the future.
