

Robots.txt is a plain text file on the server that controls crawler behavior on the site. It can be accessed by navigating to: /robots.txt

Where to Manage Robots

The robots file is managed globally under SEO Manager > Robots.

Who Can Manage Robots

Only users in one of the following user groups can access the robots manager:

  • WebAdmins: Full admin access to all of SEO Manager.
  • RobotAdmins (+ WebEditors): Full admin access to the robots manager.

WebEditors without the RobotAdmins user group assigned will not be able to access the robots manager at all.

Properties

Domain *

Required. The domain of the site the robots file applies to.

Crawling Instructions *

Required. The crawling directives for the site. For example, the baseline that allows everything to be crawled is:

User-agent: *
Disallow:
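
Directives can also be scoped to a specific crawler by naming it in the User-agent line. As a sketch (the bot name below is illustrative, not a real crawler):

User-agent: ExampleBot
Disallow: /

User-agent: *
Disallow:

Here ExampleBot is blocked from the entire site, while all other crawlers remain unrestricted.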

How to Block Bots From Crawling Specific Sections

Each URL path that you want to block crawlers from is entered as a separate Disallow directive on its own line. For example:

User-agent: * 
Disallow: /test-content/
Disallow: /utility-pages/

In this case, any URL whose path begins with /test-content/ or /utility-pages/ will be blocked from crawling.
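
Note that Disallow rules match by URL path prefix. As a sketch (the paths below are hypothetical; lines starting with # are robots.txt comments):

User-agent: *
Disallow: /test-content/

# /test-content/draft-page  - blocked (path begins with /test-content/)
# /about/                   - crawlable (no matching Disallow prefix)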

Don't Forget the Sitemap Link

Always link the sitemap at the bottom of the robots file, for example:

User-agent: *
Disallow: /test-content/
Disallow: /utility-pages/

Sitemap: https://www.eisneramper.com/sitemap.xml

NOTE: If no robots file has been configured in the CMS, one is automatically served that blocks crawling of the entire site. While this is useful for non-production domains such as UAT, the production site should always have an active robots file configured through the robots manager.
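
A robots file that blocks crawling of the entire site follows the standard block-all pattern below (the exact file the CMS serves automatically may differ):

User-agent: *
Disallow: /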