
Robots.txt File: What It Is, Examples & Best Practices

The robots.txt file is fundamental to website management and SEO, yet it's often misunderstood or overlooked. This article aims to provide a clear understanding of the robots.txt file, its purpose, and how to create and manage it effectively.

1. What is a Robots.txt File?

The robots.txt file is a simple service file placed in the root directory of a website, containing a set of directives that control how the site is crawled. When a crawler visits your site, the robots.txt file is one of the first documents it checks. It provides instructions to search engine crawlers, or "robots," about which pages or sections of your website should not be crawled. This file helps web admins control search engine behavior and manage the visibility of the site's content. By using robots.txt, you can also limit the number of crawl requests, helping to reduce the load on your server.
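
For illustration, a minimal robots.txt file might look like the sketch below (the /admin/ path and sitemap URL are placeholders):

User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml

Here, all crawlers are asked to stay out of the /admin/ directory, and the sitemap location is declared so the remaining pages are easier to discover.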

2. How Does the Robots.txt File Work?

When a search engine bot visits your website, it first looks for the robots.txt file to see if there are any specific instructions on how to crawl the site. The file can allow or disallow bots from accessing certain areas of the site. For instance, you can prevent search engines from crawling pages that are under development or that contain sensitive information.

The robots.txt file is not designed to hide your content from Google search results. To prevent specific pages from being listed in Google, use a noindex directive on those pages or restrict access with a password.
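
For example, a noindex rule can be added either in the page's HTML or in the HTTP response header (both snippets below are generic illustrations):

<meta name="robots" content="noindex">

X-Robots-Tag: noindex

Note that a page must remain crawlable, i.e., not blocked in robots.txt, for search engines to see the noindex rule and drop the page from their index.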

Robots.txt is primarily used to prevent duplicate content, service pages, deleted pages, and other non-essential pages from being indexed by search engines. Additionally, it can be used to specify the sitemap location for search engines. In certain situations, robots.txt can block a site from being crawled by unwanted search engines. Proper management of robots.txt helps direct search engine bots to the correct pages, avoiding the creation of duplicate content that can negatively impact search rankings.

You can use robots.txt to prevent the crawling of files like images, scripts, and stylesheets if they have a minimal impact on your page's design. However, avoid blocking these files if it could hinder search engines from properly interpreting your content, as this might result in incomplete or incorrect crawling of your pages.
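
As a simple sketch (the paths are hypothetical), the following rules block purely decorative images and PDF downloads while leaving rendering-critical CSS and JavaScript untouched:

User-agent: *
Disallow: /images/backgrounds/
Disallow: /*.pdf

If your CSS or JavaScript affects how pages render, keep those files crawlable so search engines can interpret your content correctly.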

Even if a page is blocked by robots.txt, it can still be indexed through links from other websites. Google won’t directly crawl or index content blocked by robots.txt, but if other sites link to the URL, it may still be discovered and added to the index. In such cases, the page could appear in search results, often accompanied by the anchor text of the linking page.

3. Is a Robots.txt File Necessary?

While a robots.txt file is not mandatory for all websites, it is highly recommended, especially for larger sites or those with specific sections you don't want to be indexed. Without a robots.txt file, search engine bots will assume they can crawl and index every page on your site, which might not always be desirable.

4. Where Should the Robots.txt File Be Placed?

The robots.txt file must be placed directly in the site's root directory, and a site should have only one robots.txt file. This location ensures that search engine bots can easily find and read the file.

For example, on the site https://www.example.com/, the file should be located at https://www.example.com/robots.txt. It should not be placed in a subdirectory (such as https://example.com/pages/robots.txt).

Robots.txt files can also be used with URLs that include subdomains (e.g., https://website.example.com/robots.txt) or non-standard ports (e.g., http://example.com:8181/robots.txt).

5. What Happens If You Don’t Use a Robots.txt File?

If you don't use a robots.txt file, search engines will default to crawling and indexing all accessible pages of your website. This might lead to indexing pages you didn't intend to be publicly visible, such as internal search results pages, staging areas, or private directories.

6. Best Practices for Creating a Robots.txt File

  1. Be Specific: Clearly define which areas of your site should not be crawled. Overly broad disallow rules might unintentionally block important content.
  2. Use Allow Rules Wisely: Use allow rules to permit specific pages within a disallowed directory to be crawled.
  3. Include a Sitemap: Always include a link to your sitemap to help search engines index your site effectively.
  4. Test Your Robots.txt File: Before deploying your robots.txt file, use tools like Google Search Console to test it and ensure it’s working as expected.
  5. Keep It Updated: Regularly review and update your robots.txt file to reflect changes in your site structure.

7. Understanding Robots.txt File Syntax

The robots.txt file is a simple text file that provides instructions to search engine crawlers about which pages or sections of your website they can access. Understanding its syntax is crucial for properly managing your site's visibility in search engines.

Basic Structure

  • User-agent: This directive specifies the crawler's name to which the rule applies. For example, User-agent: Googlebot applies to Google's crawler, while User-agent: * applies to all crawlers.
  • Disallow: This directive tells the crawler which URLs it should not access. For example, Disallow: /private/ blocks the /private/ directory from being crawled.
  • Allow: This directive can override a Disallow directive, allowing specific pages or directories within a disallowed directory to be crawled. For instance, Allow: /private/public-page.html allows access to public-page.html within the disallowed /private/ directory.
  • Sitemap: This directive specifies your sitemap's location, helping crawlers find and index your pages more efficiently. Example: Sitemap: https://www.example.com/sitemap.xml.

Syntax Rules

  • Case Sensitivity: Paths in the robots.txt file are case-sensitive. For example, Disallow: /file.asp differs from Disallow: /File.asp.
  • Wildcards: The * wildcard represents any sequence of characters and can block or allow URL patterns. For example, Disallow: /*.pdf blocks all URLs ending in .pdf.
  • End-of-Line Comments: Use the # symbol to add comments within the file. Crawlers will ignore anything following the # on a line.
  • Blank Lines: For readability, it's recommended that different User-agent sections be separated by a blank line, though it's not required (see the example below).
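
The short file below (with placeholder paths) illustrates these rules, combining comments, a wildcard pattern, and a blank line between groups:

# Keep crawlers out of internal search results and PDF files
User-agent: *
Disallow: /search/
Disallow: /*.pdf

# A separate group for one specific crawler
User-agent: Googlebot
Disallow: /drafts/

Keep in mind that most crawlers, including Googlebot, follow only the most specific User-agent group that matches them, so the rules in the * group are not automatically inherited by the Googlebot group.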

8. Practical Robots.txt Examples

User-agent: Googlebot
Disallow: /private/
Allow: /private/public-page.html

User-agent: *
Disallow: /tmp/

Sitemap: https://www.example.com/sitemap.xml

This example prevents Googlebot from accessing the /private/ directory, except for the public-page.html file. All other crawlers are blocked from accessing the /tmp/ directory, and the sitemap's location is provided.

Adhering to these syntax guidelines can help you effectively manage how search engines interact with your website, optimizing its accessibility and visibility in search results.


Here is another example of a simple robots.txt file with two rules:

User-agent: Googlebot
Disallow: /nogooglebot/

User-agent: *
Allow: /

Sitemap: http://www.example.com/sitemap.xml

Explanation:

  • The user agent Googlebot is restricted from crawling URLs that begin with http://example.com/nogooglebot/.
  • All other user agents are permitted to crawl the entire site. This rule is optional, as the default setting allows user agents to crawl the whole site.
  • The sitemap for the site is located at http://www.example.com/sitemap.xml.

9. Pros and Cons of Using Robots.txt

Pros:
  • Control Over Crawling: Lets you manage which parts of your site are crawled and indexed.
  • Improves Crawl Efficiency: Directs bots to focus on the most important content, improving the overall efficiency of the crawling process.
Cons:
  • Incorrect Rules Can Harm SEO: If not configured correctly, a robots.txt file can accidentally block essential pages from being indexed.
  • Not a Secure Method: Do not rely on robots.txt to protect sensitive information; it only hides pages from compliant search engines, not from users.

10. Key Recommendations for Creating a Robots.txt File

The robots.txt file is essential for controlling how search engine crawlers interact with your website. Here are some key guidelines to follow when creating a robots.txt file:

1. Place in Root Directory: The robots.txt file must be located in the root directory of your website. For example, it should be accessible at https://www.example.com/robots.txt. The robots.txt file should be saved in plain text format, encoded in UTF-8, and named "robots.txt."

2. Specify User Agents: Search engines look for entries in the robots.txt file that begin with the User-agent field. This directive specifies which search engine robot the indexing rule applies to.

You can target specific crawlers or use * to apply rules to all crawlers.

According to the standard, inserting an empty line before each User-agent directive is advisable.

You can use the * wildcard in the file to represent "any sequence of characters." This is useful for specifying prefixes or suffixes in your site's path to a directory or page.

3. Use Disallow and Allow Directives:

  • Disallow: This directive blocks crawlers from accessing specific directories or pages. Ensure that directory paths end with a slash.
  • Allow: This directive permits access to specific pages or subdirectories within a directory that has been disallowed.

4. Limit Directives: The number of directives (such as Disallow or Allow) should not exceed 1024 to maintain efficiency and avoid errors.

5. Include Sitemap: Add a Sitemap directive to specify the location of your sitemap file, which will help search engines discover and index your content more effectively.

6. Comments and Formatting: The # symbol is used to include comments; search engines ignore anything following this symbol until the next line break. Remember that paths in the robots.txt file are case-sensitive, so ensure they are correctly formatted.

For example, the directive disallow: /file.asp will block access to https://www.example.com/file.asp, but not to https://www.example.com/FILE.asp.

7. Regular Updates: Regularly review and update your robots.txt file to reflect any changes in your site structure, ensuring that search engines have the most accurate instructions for crawling your site.

By adhering to these guidelines, you can effectively manage how search engines interact with your website, improving your site's indexing and visibility in search results.

The following rules can be used within a User-agent group:

  • At least one directive is required: Each rule must include at least one Disallow: or Allow: directive.
  • Disallow: Indicates a directory or page in the root domain that the specified crawler should not crawl. If specifying a directory, the path must end with a slash. The * wildcard can represent a path prefix, suffix, or the entire path.
  • Allow: Identifies a directory or page in the root domain that the crawler should crawl. It can also override a Disallow: directive, allowing specific subdirectories or pages within a restricted directory to be crawled. If specifying a directory, the path must end with a slash. The * wildcard is supported for path prefixes, suffixes, or entire paths.
  • Sitemap: This optional directive specifies the location of the sitemap file. You can include multiple sitemap files, each listed on a separate line, as shown in the example below.
  • Unknown directives are ignored: Crawlers skip any lines they do not recognize, so unsupported directives will not break the file; use the # symbol if you need to add comments.
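
As a brief illustration of these rules (all URLs are placeholders), a group can be followed by more than one Sitemap line:

User-agent: *
Disallow: /archive/
Allow: /archive/whitepaper.html

Sitemap: https://www.example.com/sitemap-pages.xml
Sitemap: https://www.example.com/sitemap-posts.xml

The Sitemap directive is not tied to any particular User-agent group and can appear anywhere in the file.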

11. Typical Mistakes in Robots.txt

1.   Mixed-Up Instructions: A frequent mistake is confusing the order of instructions in the robots.txt file. For example:

Incorrect:
Disallow: googlebot

Correct:
User-agent: googlebot
Disallow: /

2.   Specifying Multiple Directories in One Disallow Instruction: Some users try to list multiple directories in a single Disallow command, which is incorrect. For example:

Incorrect:
Disallow: /css/ /cgi-bin/ /images/

Correct:
Disallow: /css/
Disallow: /cgi-bin/
Disallow: /images/

3.   File Name with Capital Letters: The file should be named robots.txt, not Robots.txt or ROBOTS.TXT.

4.   Using the Wrong Filename: The correct filename is robots.txt, not robot.txt.

5.   Empty User-agent Value: Leaving the User-agent directive without a value is incorrect.

Incorrect:
User-agent:
Disallow:

Correct:
User-agent: *
Disallow:

6.   Site Mirrors and Host Directive: Use 301 redirects and Google Search Console to indicate the primary site and its mirrors, rather than relying on the Host directive.

7.   Redirecting to a 404 Error Page: If no robots.txt file exists, ensure the server returns a 404 status rather than redirecting to another page.

8.   Using CAPITAL LETTERS: Although directive names in robots.txt are case-insensitive, file and directory paths are not.

Incorrect:
USER-AGENT: GOOGLEBOT
DISALLOW:

Correct:
User-agent: googlebot
Disallow:

9. Listing All Files Individually: Instead of listing each file, it's better to block the entire directory:

Incorrect:
User-agent: *
Disallow: /AL/Alabama.html
Disallow: /AL/AR.html

Correct:
User-agent: *
Disallow: /AL/

10. Missing Disallow Instruction: Even if no pages are to be disallowed, an empty Disallow directive is necessary.

Incorrect:
User-agent: googlebot
Host: www.example.com

Correct:
User-agent: googlebot
Disallow:
Host: www.example.com

11. Missing Slashes in Directories: Without slashes, the rule may match both files and directories whose names begin with the specified string. To target only a directory, include the slashes:

Incorrect:
User-agent: googlebot
Disallow: john

Correct:
User-agent: googlebot
Disallow: /john/

12. Incorrect HTTP Header: The server must return "Content-Type: text/plain" for robots.txt. An incorrect header, like "Content-Type: text/html", can prevent some robots from processing the file.

13. Logical Errors: Logical errors can occur in complex site structures when blocking content. Google applies the most specific rule, determined by the length of the path, and the order of precedence for rules with wildcards is undefined.
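
For instance, in the hypothetical file below, the longer (more specific) rule decides the outcome:

User-agent: *
Disallow: /shop/
Allow: /shop/sale/

A URL such as /shop/sale/boots.html remains crawlable because Allow: /shop/sale/ is longer, and therefore more specific, than Disallow: /shop/, while the rest of the /shop/ directory stays blocked.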

Additionally, ensure that you block sensitive pages and directories (see the example after this list), such as:

  • Action pages (e.g., add to cart, compare products)
  • Shopping carts
  • Site search pages
  • Confidential directories (e.g., delivery addresses, contact information)
  • Session IDs and UTM tags
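
A hedged sketch of such rules might look like this (the paths and parameter names are placeholders that must be adapted to your site's URL structure):

User-agent: *
Disallow: /cart/
Disallow: /compare/
Disallow: /search/
Disallow: /*sessionid=
Disallow: /*utm_

The last two patterns use the * wildcard to match any URL containing sessionid= or utm_ anywhere in its path or query string; test broad patterns like these before deploying so that legitimate pages are not blocked by accident.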

Proper creation and management of robots.txt are crucial. If neglected, search engines may index irrelevant or sensitive content, like empty pages or test versions of your site.

12. Testing and Submitting Your Robots.txt File in Google Search Console

After creating your robots.txt file, it is crucial to test it using Google Search Console's robots.txt tester. This tool lets you check whether your rules block or allow content correctly. Once tested, submit your robots.txt file to Google via Google Search Console to ensure search engines know your crawling preferences.
Testing your robots.txt file

  1. Access Google Search Console: Log in to your Google Search Console account and navigate to the appropriate property.
  2. Use the robots.txt Tester Tool: Within the Search Console, find the "robots.txt Tester" under the "Legacy tools and reports" section. This tool allows you to view and test your robots.txt file.
  3. Check for Errors: The tester will highlight any syntax errors or misconfigurations. You can also enter specific URLs to see if the current robots.txt settings block them.
  4. Review and Modify: If you identify any issues, such as unintentional page blocking, you can modify the robots.txt file directly within the tester to see how changes will impact the site's accessibility.

Fixing issues and submitting

  1. Edit the File: If the tester highlights any problems, edit your robots.txt file using a plain text editor. Correct any syntax errors, update paths, and adjust directives as needed.
  2. Upload the Updated File: After making changes, upload the corrected robots.txt file to your website's root directory.
  3. Re-test the File: Return to the robots.txt Tester in Google Search Console and re-test your file to ensure all issues have been resolved.
  4. Submit to Google: Once satisfied with the changes, use the "Submit" button in the tester tool to inform Google of the updated robots.txt file. This prompts Google to re-crawl the file and apply the new directives.

It's important to periodically check your robots.txt file for any issues, especially after changing your website's structure. Regular testing in Google Search Console ensures that your file is always correctly configured, helping maintain optimal search engine indexing and site performance.

Following these steps, you can effectively manage and troubleshoot your robots.txt file, ensuring that your site is accessible to search engines while protecting sensitive or unimportant areas from being crawled.

13. Check If Your Robots.txt File is Alive with Atomseo

The robots.txt file is a powerful tool for managing how search engines interact with your website. Understanding how to create, test, and implement this file can improve your site's SEO and ensure that only the most relevant content is indexed. Regularly review your robots.txt file to adapt to any changes in your website's structure or content strategy, and always test it in Google Search Console to avoid potential issues.

It is crucial to verify the availability of your robots.txt file regularly. The file should consistently return a 200 response code. If it becomes inaccessible, a 404 response tells search engines that there are no crawl restrictions, so pages you intended to block may be crawled and indexed, while persistent server errors (5xx) can disrupt crawling of your site.

We recommend checking the robots.txt file's availability daily or weekly to prevent this. For convenience, you can use the Atomseo Broken Link Checker at https://error404.atomseo.com/SeoListCheck. Enter your robots.txt file's URL, and the service will monitor its status. If any issues arise, you will be promptly notified, allowing you to address them quickly.

Check your site's robots.txt file now—it's free.

14. Relevant Links