In recent years, SEOs have been hearing more and more about JavaScript and its impact on search performance, leading many to wonder whether they need to become JavaScript developers to do their jobs effectively.
Thankfully, that’s not the case.
While SEOs may not need to learn how to code in JavaScript, there are benefits to understanding how it’s used and when it could have an impact on search performance.
Although JavaScript has been around for a while, it’s begun to come up with increasing frequency in SEO circles – why is that?
JavaScript has become a popular choice for coding websites for a few reasons:
- It makes pages more interactive for visitors
- It can save development time
- It can save money
So while some JavaScript can make an SEO’s life more difficult, it’s usually implemented with good intentions (i.e., increased interactivity, time savings, cost savings).
Definitely not! JavaScript can be used in a number of different ways, so the impact it will have on search will differ depending on exactly how it’s used.
Here are the three main types of JavaScript websites, and whether they affect SEO:
In short, rendering delays.
Google has two waves of indexing. The first wave is essentially instant because Google is only looking at the page’s HTML. Google will then come back for a second wave, where they’ll render your JavaScript, but this could be a few days or even a few weeks later.
In a recent Google webmaster hangout, Martin Splitt said that, while the two-waved indexing is still a reality, crawling, rendering, and indexing will occur closer together in the future. He went on to explain that rendering JavaScript is cheaper than Google initially thought, and therefore rendering delays would play less of a role in the future.
For now though, Google still continues to index JavaScript websites in two phases, which means that they may initially miss any content or links that you’re loading with JavaScript. If you have pages where the content is updated frequently, it means that by the time Google sees the content on your page, it may have changed already! This is an especially important consideration for publishers who are posting timely content such as breaking news, or e-commerce websites that have a constantly changing inventory.
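To make the difference between the two waves concrete, here’s a rough sketch (not Google’s actual pipeline) that compares a page’s raw HTML response with its DOM after JavaScript has run. It assumes Node 18+ for the global fetch, uses Puppeteer as a stand-in for a rendering crawler, and https://example.com is just a placeholder URL:

```js
// Rough sketch of the two waves: wave 1 sees only the raw HTML response;
// wave 2 sees the DOM after JavaScript has executed.
// Assumes Node 18+ (global fetch) and the puppeteer package.
const puppeteer = require('puppeteer');

async function compareWaves(url) {
  // Wave 1: the unrendered HTML, fetched without executing any JavaScript.
  const rawHtml = await (await fetch(url)).text();

  // Wave 2: the rendered DOM, which in Google's case may come days
  // or weeks after the first wave.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });
  const renderedHtml = await page.content();
  await browser.close();

  // Any content or links that exist only in renderedHtml are invisible
  // to the first wave of indexing.
  console.log(`Raw: ${rawHtml.length} chars, rendered: ${renderedHtml.length} chars`);
}

compareWaves('https://example.com');
```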
It’s also important to remember that, while Google holds the majority of search engine market share, other major search engines like Bing have a much harder time rendering JavaScript.
There are four main ways to render the content of a website: client-side, server-side, pre-rendering, and dynamic rendering.
Client-side rendering is the least SEO-friendly of all the rendering options, but when done properly, it can have some advantages when it comes to user experience.
Client-side rendering means all the burden of rendering the content is on the “client” – that means you! Instead of the page being assembled at the server and then sent to your browser, the page is sent to your browser disassembled, leaving your browser to do all the work of loading the content. This means that your browser has to make multiple round trips to a server to grab and then load all of the content on the page.
Client-side rendered content is subject to Google’s “rendering budget,” which means there will be a delay before Google accesses it, and it can be nearly inaccessible to other search engines. Still, many businesses (particularly the people handling infrastructure and finance) prefer this option because it reduces the load on their own servers.
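For illustration, here’s a minimal sketch of what client-side rendering can look like in practice. The /api/products endpoint and the #app element are hypothetical; the point is that the content only exists after this script runs in your browser:

```js
// Minimal client-side rendering sketch. The server sends a near-empty HTML
// shell containing <div id="app"></div>; this script runs in the browser
// and makes an extra round trip to fetch the real content.
// (The /api/products endpoint is hypothetical.)
document.addEventListener('DOMContentLoaded', async () => {
  const response = await fetch('/api/products');
  const products = await response.json();

  // The content and links below don't exist in the HTML source at all;
  // they're only added to the page once this JavaScript executes.
  document.getElementById('app').innerHTML = products
    .map((p) => `<a href="/products/${p.slug}">${p.name}</a>`)
    .join('');
});
```

A crawler that doesn’t execute JavaScript would see only the empty shell here, with no content or internal links at all.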
Most SEOs prefer server-side rendering because all the meaningful content gets rendered on the website’s server, which means it’s not subject to the two waves of indexing. Both users and bots will receive a fully-loaded page, without the need to request additional resources.
Server-side rendering takes the burden of rendering a page off of you (your browser) and places it on the website’s server.
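As a contrast to the client-side sketch above, here’s a minimal server-side rendering sketch using Node and Express. The getProducts() data source is hypothetical; what matters is that the complete HTML, content and links included, is assembled before anything is sent to the browser or bot:

```js
// Minimal server-side rendering sketch with Express.
// getProducts() is a hypothetical data source (e.g., a database query).
const express = require('express');
const app = express();

app.get('/products', async (req, res) => {
  const products = await getProducts();

  // The full page is assembled here on the server, so both users and
  // bots receive complete HTML with no additional requests required.
  const items = products
    .map((p) => `<li><a href="/products/${p.slug}">${p.name}</a></li>`)
    .join('');
  res.send(`<!DOCTYPE html><html><body><ul>${items}</ul></body></html>`);
});

app.listen(3000);
```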
Pre-rendering can be a viable option for some websites, but it’s somewhat of a workaround. It works like this: pages are rendered ahead of time, and the resulting static HTML snapshots are cached and served to search engine bots when they request a page.
Because pre-rendering is a solution for search engines, there’s no real benefit for users. Third-party pre-rendering solutions like prerender.io can also be expensive, and they’re prone to bugs and occasional breakage.
Pre-rendering also may not be a great solution for websites with pages that change often. For example, we worked with an e-commerce business whose prerender solution was causing product prices to get out of sync — Google was seeing one price while users were seeing another. It works best when used on sites that don’t often change or are completely static.
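To illustrate the mechanics (this is a generic sketch, not how prerender.io works internally), here’s how a simple pre-rendering step might generate static snapshots with Puppeteer:

```js
// Generic pre-rendering sketch: render pages ahead of time with Puppeteer
// and save static HTML snapshots to serve to search engine bots.
const puppeteer = require('puppeteer');
const fs = require('fs');

async function prerender(urls) {
  fs.mkdirSync('snapshots', { recursive: true });
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  for (const url of urls) {
    await page.goto(url, { waitUntil: 'networkidle0' }); // let JS finish
    const html = await page.content(); // the fully rendered HTML

    // If the live page changes after this runs (say, a price update),
    // the snapshot goes stale, which is the sync problem described above.
    fs.writeFileSync(`snapshots/${encodeURIComponent(url)}.html`, html);
  }
  await browser.close();
}
```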
Dynamic rendering is a method that uses a different rendering process depending on who’s trying to access the page. Search engine crawlers receive fully server-rendered HTML, while human visitors receive the initial HTML and make all the additional requests on their end.
While this solution helps search engines see your fully-loaded page instantly, it still places the entire burden of loading the page on the user.
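Here’s a minimal sketch of what dynamic rendering might look like as an Express middleware. The BOT_PATTERN regex and getSnapshot() helper are hypothetical; real implementations typically maintain a much more complete list of crawler user agents:

```js
// Minimal dynamic rendering sketch: route known crawlers to pre-rendered
// HTML, and everyone else to the normal client-side rendered app.
// (BOT_PATTERN and getSnapshot() are illustrative, not a standard API.)
const BOT_PATTERN = /googlebot|bingbot|baiduspider|yandexbot/i;

function dynamicRendering(req, res, next) {
  const userAgent = req.headers['user-agent'] || '';

  if (BOT_PATTERN.test(userAgent)) {
    // Crawlers get the fully rendered HTML immediately.
    res.send(getSnapshot(req.path));
  } else {
    // Human visitors get the HTML shell and render the page themselves.
    next();
  }
}
```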
If you’ve gotten to this point and are still unclear on what rendering means, that’s understandable! Rendering is a somewhat foreign concept to non-developers, but easy enough to understand once you break it down.
Rendering is the process your phone, computer, tablet, or other device’s browser has to go through in order to actually “build” a web page.
Most of the time, this requires your computer to go out and get hundreds of different resources in order to make the page work the way the site intended it to.
The “rendering” process can take a long time, depending on the size and quantity of those different resources your browser has to go and fetch.
This comes at a cost to you (battery, speed, data, etc.) as well as search engines.
Remember, not all JavaScript websites are created equal. In a lot of cases, websites that use JavaScript have plenty of resources that are not necessarily used to render content or links.
When it comes to crawling your site for SEO analysis, there are certain resources you’d want to ignore, such as:
- Analytics and tracking scripts
- Advertising scripts
- Social media widgets and other third-party embeds
When you’re crawling a site for the purpose of SEO analysis, you’ll primarily be interested in simulating a search engine’s experience. That’s why it’s a good idea to configure your crawler so that it’s looking at content (text) and internal links to other pages.
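As a rough illustration of that kind of configuration (a generic Puppeteer sketch, not Botify’s implementation), a rendering crawler can skip resources that don’t contribute content or links; the blocklist entries below are examples only:

```js
// Generic sketch: block non-essential resources during a JavaScript crawl
// using Puppeteer request interception.
const puppeteer = require('puppeteer');

const BLOCKED_URLS = [/google-analytics\.com/, /doubleclick\.net/];
const BLOCKED_TYPES = new Set(['image', 'font', 'media']);

async function crawlPage(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setRequestInterception(true);

  page.on('request', (request) => {
    const skip =
      BLOCKED_TYPES.has(request.resourceType()) ||
      BLOCKED_URLS.some((pattern) => pattern.test(request.url()));
    // Only fetch what's needed to evaluate content and internal links.
    if (skip) request.abort();
    else request.continue();
  });

  await page.goto(url, { waitUntil: 'networkidle2' });
  const html = await page.content(); // what a rendering crawler would see
  await browser.close();
  return html;
}
```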
At Botify, we often work with customers to decide which (if any) of the resources that load on a page are needed for a proper SEO analysis. By eliminating unnecessary resources, we ensure we’re only looking at the elements that matter for SEO, and not wasting time crawling things that aren’t important for your analysis.
For example, take a look at a site before and after we configured its JavaScript crawl: it went from 354 resources down to just 13 that were critical for SEO evaluation!
Botify can crawl your full website, including JavaScript, with JavaScript Crawl!
What does JavaScript Crawl do?
Why is this functionality important? Because it allows you to identify, at scale, the most common SEO issues caused by JavaScript.
Because JavaScript Crawl can do all of these things, Botify will:
Can’t get enough of JavaScript SEO? We don’t blame you! Check out some of our other articles on the topic: