
H1, Title, Description: Your SEO Check List In a Few Clicks!

16th September 2014 · Annabelle

Spend your time deciding what to do, not gathering the facts! Checking title tags, H1, H2 tags and meta-descriptions is basic SEO. Basic, but necessary, to give search engines clear indications about a page's content and allow them to present it accurately in SERPs.
We need to ask ourselves:

  • Are they set on each page?
  • Are they unique to each page?
  • Are there several on the same page?

Getting answers to these questions, and making to-do lists to get the tags right, can be extremely time-consuming without the right tools.
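Conceptually, the first check (are the tags set, and how many are there per page?) boils down to parsing each page's HTML and collecting its title, H1 and meta-description tags. Here is a minimal sketch of that idea using only Python's standard-library HTML parser; the class and function names are hypothetical, not part of any Botify API:

```python
from html.parser import HTMLParser


class TagAuditor(HTMLParser):
    """Collects <title> text, <h1> text and meta descriptions from one page."""

    def __init__(self):
        super().__init__()
        self.titles, self.h1s, self.meta_descriptions = [], [], []
        self._open = None  # text-bearing tag we are currently inside

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "h1"):
            self._open = tag
            (self.titles if tag == "title" else self.h1s).append("")
        elif tag == "meta":
            d = dict(attrs)
            if d.get("name", "").lower() == "description":
                self.meta_descriptions.append(d.get("content", ""))

    def handle_endtag(self, tag):
        if tag == self._open:
            self._open = None

    def handle_data(self, data):
        # Accumulate text into the most recently opened title/h1
        if self._open == "title":
            self.titles[-1] += data
        elif self._open == "h1":
            self.h1s[-1] += data


def audit(html):
    """Return (titles, h1s, meta_descriptions) found in an HTML document."""
    parser = TagAuditor()
    parser.feed(html)
    return parser.titles, parser.h1s, parser.meta_descriptions
```

Run against a page, empty lists reveal missing tags and lists longer than one reveal multiple tags; comparing values across pages then reveals duplicates.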

To allow you to concentrate on making decisions and planning what you need to do, Botify Analytics points you directly to notable URLs:

  • Pages with a unique title / H1 / meta description (no other crawled URL uses it)
  • Pages with a duplicate title / H1 / meta description (at least one other URL uses it)
  • Pages with multiple title tags / H1 tags / meta descriptions (there are several in the page)

...and lets you explore some more (H2, H3, other criteria) through the URL Explorer.

Duplicate tags can be a symptom of several different situations. A very large number of pages sharing the same tag value often indicates that the tag was placed on an element of the page template, instead of on an expression specific to the page. A smaller number of duplicate tags can point to duplicate or near-duplicate content.
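The duplicate detection itself is simple to picture: group pages by tag value, and any value shared by more than one URL is a duplicate (with very large groups suggesting a template-level tag). A sketch of that grouping, assuming page data is already available as a URL-to-H1 mapping (a hypothetical input shape, not Botify's format):

```python
from collections import defaultdict


def classify_h1s(pages):
    """pages: {url: first_h1}. Returns (unique, duplicates), where duplicates
    maps each shared H1 value to the sorted list of URLs that use it."""
    by_h1 = defaultdict(list)
    for url, h1 in pages.items():
        by_h1[h1].append(url)
    unique = {h1: urls[0] for h1, urls in by_h1.items() if len(urls) == 1}
    duplicates = {h1: sorted(urls) for h1, urls in by_h1.items() if len(urls) > 1}
    return unique, duplicates
```

A duplicate group containing tens of thousands of URLs is the telltale sign of a template-level tag; a group of two or three more likely points to duplicate content.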

*Unique tags*, which is what we expect, are useful for excluding pages from our to-do list.

Multiple tags need a more detailed assessment; they are rarely ideal.
In our experience, only the first title or meta description is used in SERPs. As far as H1s are concerned, multiple H1s can make sense in some cases (although if there are two H1s for two separate parts of the page, we may wonder whether it wouldn't be better to create two pages). In many situations, however, multiple H1s are not the result of careful editorial choices but of siloed approaches: one H1 is placed on a template element, another on an expression specific to the page, for instance.
But an H1 on a template element defeats its purpose: describing that specific page's content as precisely as possible.

Assess the situation

Go to your Botify Analytics report's "html tags" tab. Let's look at the Library of Congress website as an example:

Summary: good or bad?

The overall quality of the metadata:

Diagnose duplicate or missing tags problems

Let's take duplicate H1s as an example - the approach is the same for titles and meta descriptions, and for missing or multiple tags.

In the html tags tab of the report, click on the H1 duplicates block in the URLs section:

You will get the list of H1s found on several pages, the number of pages each appears on, and a sample of 10 URLs:

From there, you can go to the URL Explorer to see and export:

  • All pages for a given H1 (114,841 pages for the first H1 listed above): click on "view all" at the bottom of the sample list
  • All pages with a duplicate H1 (458K URLs in our example): click on "Explore all URLs" in the upper right above the table.

Let's look at the full list of pages with duplicate H1s in the URL Explorer:
The columns are the same, with an additional column for the URL - the URL Explorer, as its name indicates, lets you explore data associated with URLs. To obtain a view similar to the table above, URLs are grouped by H1: only one URL is listed in the URL column for each duplicated H1. All other URLs for that same H1 are listed in the "pages with the same H1" column. That is why, in our example, the number of URLs found is 1,631 (which corresponds to the number of distinct duplicated H1s):

There may be several H1s in the same page. All are displayed, but duplicates are identified based on the first H1.

Data formatted as above is intended to be displayed, browsed, and sorted in the online report. But such formatting is not the best fit for exports: there would be a great number of columns, and you wouldn't be able to filter by URL.

Before exporting, transform the view above into a table with all URLs in the URL column (the 458K URLs in our example), and H1 information repeated on every line. To do so, remove the "First duplicate H1 found" filter that grouped URLs for each H1 (click on the red cross):

And remove the "pages with the same H1" selected field:

If there are several H1s on a page, the content of all H1 tags will appear in the export in separate columns. You can also add "number of H1" to the displayed fields for additional insight (start typing "number" in the "Fields to display" area).

Here is what you get after clicking on "Apply":
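The transformation the two steps above perform (ungrouping, then repeating the H1 on every line) can be pictured as turning the grouped mapping back into one row per URL, ready for a CSV export. A sketch, assuming a hypothetical grouped structure of the form `{h1: [urls]}`:

```python
import csv
import io


def flatten(grouped):
    """grouped: {h1: [urls]} (the grouped view). Returns one (url, h1) row
    per URL, with the H1 repeated on every line -- the export-friendly layout."""
    return [(url, h1) for h1, urls in sorted(grouped.items()) for url in urls]


def to_csv(rows):
    """Render the flat rows as CSV text with a header line."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["url", "h1"])
    writer.writerows(rows)
    return buf.getvalue()
```

In this flat layout every URL gets its own line, so the export can be filtered by URL in a spreadsheet, which the grouped view does not allow.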

There are two main possibilities when you find a large number of duplicate meta tags:

How about H2 and H3 tags?

The same information is available for H2 and H3 tags, directly through the URL Explorer. Simply change the filters and displayed fields accordingly.

How to get a full HTML tag inventory

Here is another example of custom data we can get with the URL Explorer: let's look at all pages to see whether they each have a title, an H1, and a meta description (or, perhaps, several of each). We're going to extract the number of each type of tag for each page. Such an inventory can help prioritize corrective actions and plan more efficiently: if there are many pages where several types of tags are missing, for instance, it makes sense to update all tags at the same time.

Go to the URL Explorer, and select the following filters and displayed fields settings. To select displayed fields, click in the fields area and select from the drop-down list (start typing any part of the field name you want to select to narrow the list down).

Click on "Apply" to get the results table (you can order by a column by clicking on the column header, the little arrow indicates how the list is sorted):

Click on "export as CSV" to export the table.
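The resulting export amounts to one line per URL with the count of each tag type, plus (optionally) a flag for pages that deviate from the expected one-of-each. A sketch of producing that CSV, assuming tag counts are already available as a hypothetical `{url: (nb_title, nb_h1, nb_meta_desc)}` mapping:

```python
import csv
import io


def inventory(pages):
    """pages: {url: (nb_title, nb_h1, nb_meta_desc)}. Returns CSV text with
    one line per URL; needs_review is True when any count differs from
    the expected exactly-one."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["url", "nb_title", "nb_h1", "nb_meta_description", "needs_review"])
    for url, (nb_title, nb_h1, nb_meta) in sorted(pages.items()):
        writer.writerow([url, nb_title, nb_h1, nb_meta,
                         (nb_title, nb_h1, nb_meta) != (1, 1, 1)])
    return buf.getvalue()
```

Sorting the `needs_review` column then surfaces the pages worth acting on first.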

Advanced approach for large websites:

If the total number of URLs shown in the URL Explorer results is too large to be exported in a single file (more than 100K URLs), you can for instance add filters to:

  • Exclude, or export separately, cases which correspond to what you expect or don't expect (for instance, in our last example, pages with one title, one description, and one H1, which is what we expect in most cases)
  • Export only key sections of your website at a time, by filtering on a specific URL pattern: for instance path starts with x, ends with x, contains x...(path is the part of the URL that begins after the domain, starting with "/" and ending before the query string [?xxxx], if any)
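The path-based filter described in the second bullet can be pictured with the standard library's URL parser, which isolates the path (after the domain, before any `?query`) so a prefix test becomes trivial. A small sketch; the function name and example URLs are illustrative only:

```python
from urllib.parse import urlparse


def filter_by_path(urls, prefix):
    """Keep URLs whose path (the part after the domain, before any
    query string) starts with the given prefix."""
    return [u for u in urls if urlparse(u).path.startswith(prefix)]
```

Exporting one such section at a time keeps each file under the size limit while still covering the whole site.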