Understand the JavaScript SEO basics
JavaScript is an important part of the web platform because it provides many features that turn the web into a powerful application platform. Making your JavaScript-powered web applications discoverable via Google Search can help you find new users and re-engage existing users as they search for the content your web app provides. While Google Search runs JavaScript with an evergreen version of Chromium, there are a few things that you can optimize.
This guide describes how Google Search processes JavaScript and best practices for improving JavaScript web apps for Google Search.
How Googlebot processes JavaScript
Googlebot processes JavaScript web apps in three main phases:
- Crawling
- Rendering
- Indexing
Googlebot crawls, renders, and indexes a page.
When Googlebot fetches a URL from the crawling queue by making an HTTP request, it first checks whether you allow crawling by reading the robots.txt file. If the file marks the URL as disallowed, Googlebot skips the HTTP request and skips the URL.
Googlebot then parses the response for other URLs in the href attribute of HTML links and adds those URLs to the crawl queue. To prevent link discovery, use the nofollow mechanism.
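The nofollow mechanism can be applied per link with the rel attribute, or page-wide with a robots meta tag. For example (URL is illustrative):

```html
<!-- Googlebot won't queue this URL from this link -->
<a href="https://example.com/signup" rel="nofollow">Sign up</a>

<!-- Or page-wide: Googlebot won't follow any links on this page -->
<meta name="robots" content="nofollow">
```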
Crawling a URL and parsing the HTML response works well for classical websites or server-side rendered pages where the HTML in the HTTP response contains all content. Some JavaScript sites may use the app shell model where the initial HTML does not contain the actual content and Googlebot needs to execute JavaScript before being able to see the actual page content that JavaScript generates.
Googlebot queues all pages for rendering, unless a robots meta tag or header tells Googlebot not to index the page. The page may stay on this queue for a few seconds, but it can take longer than that. Once Googlebot’s resources allow, a headless Chromium renders the page and executes the JavaScript. Googlebot parses the rendered HTML for links again and queues the URLs it finds for crawling. Googlebot also uses the rendered HTML to index the page.
Keep in mind that server-side or pre-rendering is still a great idea because it makes your website faster for users and crawlers, and not all bots can run JavaScript.
Describe your page with unique titles and snippets
Unique, descriptive titles and helpful meta descriptions help users quickly identify the best result for their goal. Our guidelines explain what makes good titles and descriptions.
You can use JavaScript to set or change the meta description as well as the title.
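One way to do this is sketched below; describePage is a hypothetical helper that takes the page's document as a parameter (which also keeps the sketch testable outside a browser).

```javascript
// Hypothetical helper: sets the document title and the meta description,
// creating the meta tag if the page doesn't have one yet.
function describePage(doc, title, description) {
  doc.title = title;

  var meta = doc.querySelector('meta[name="description"]');
  if (!meta) {
    meta = doc.createElement('meta');
    meta.setAttribute('name', 'description');
    doc.head.appendChild(meta);
  }
  meta.setAttribute('content', description);
}

// In the browser you would call, for example:
// describePage(document, 'Espresso machines', 'Compare our espresso machines.');
```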
Google Search might show a different title or description based on the user’s query. This happens when the title or description has low relevance for the page content, or when we find alternatives in the page that better match the search query. See this page for more information on titles and the description snippet.
Write compatible code
Browsers offer many APIs and JavaScript is a quickly-evolving language. Googlebot has some limitations regarding which APIs and JavaScript features it supports. To make sure your code is compatible with Googlebot, follow our guidelines for troubleshooting JavaScript problems.
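One common pattern for staying compatible is feature detection with a graceful fallback. A minimal sketch (deepCopy is a hypothetical helper; structuredClone is the newer API being detected):

```javascript
// Feature-detect a newer API before using it, and fall back when it's absent.
function deepCopy(value) {
  if (typeof structuredClone === 'function') {
    return structuredClone(value);
  }
  // Fallback for environments without structuredClone;
  // sufficient for JSON-serializable data only.
  return JSON.parse(JSON.stringify(value));
}
```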
Use meaningful HTTP status codes
Googlebot uses HTTP status codes to find out if something went wrong when crawling the page.
Use a meaningful status code to tell Googlebot if a page should not be crawled or indexed, such as a 404 for a page that could not be found or a 401 for a page behind a login. You can also use HTTP status codes to tell Googlebot when a page has moved to a new URL, so that the index can be updated accordingly.
Here’s a list of HTTP status codes and when to use them:
| HTTP status | When to use |
|---|---|
| 301 / 302 | The page has moved to a new URL. |
| 401 / 403 | The page is unavailable due to permission issues. |
| 404 / 410 | The page is no longer available. |
| 5xx | Something went wrong on the server side. |
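On the server, this often reduces to a small mapping from the page lookup result to a status code. A hedged sketch, assuming a hypothetical lookup result shaped like { movedTo, requiresLogin }:

```javascript
// Hypothetical sketch: choose a meaningful HTTP status for a requested page.
function statusFor(page) {
  if (!page) return 404;              // the page could not be found
  if (page.movedTo) return 301;       // permanently moved; redirect to page.movedTo
  if (page.requiresLogin) return 401; // behind a login
  return 200;                         // serve the page normally
}
```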
You can prevent Googlebot from indexing a page or following links through the meta robots tag. For example, adding the following meta tag to the top of your page blocks Googlebot from indexing the page:
```html
<!-- Googlebot won't index this page or follow links on this page -->
<meta name="robots" content="noindex, nofollow">
```
You can use JavaScript to add a meta robots tag to a page or change its content. The following example code shows how to change the meta robots tag with JavaScript to prevent indexing of the current page if an API call doesn’t return content.
```javascript
fetch('/api/products/' + productId)
  .then(function (response) { return response.json(); })
  .then(function (apiResponse) {
    if (apiResponse.isError) {
      // get the robots meta tag
      var metaRobots = document.querySelector('meta[name="robots"]');
      // if there was no robots meta tag, add one
      if (!metaRobots) {
        metaRobots = document.createElement('meta');
        metaRobots.setAttribute('name', 'robots');
        document.head.appendChild(metaRobots);
      }
      // tell Googlebot to exclude this page from the index
      metaRobots.setAttribute('content', 'noindex');
      // display an error message to the user
      errorMsg.textContent = 'This product is no longer available';
      return;
    }
    // display product information
    // ...
  });
```
When Googlebot encounters noindex in the robots meta tag before running JavaScript, it doesn’t render or index the page.