Navigate to: Stores and Reps > Stores > Manage > Robots.txt
The Robots.txt section allows administrators to instruct web robots (typically search engine crawlers) which pages of the website to crawl. The Znode Admin Console provides two fields for every store to manage the robots.txt settings:
- Robots.txt
  - Administrators can add directives specifying which web pages they want web robots to crawl or not crawl.
- Default Page Level Robot Tag
  - There are five options available under this field:
    - None - This is the default option. When this option is selected, the behavior defined in the Robots.txt field (above) is followed.
    - INDEX, FOLLOW - Allows search engines to index a web page that includes this instruction and to crawl all the links on the page.
    - NOINDEX, NOFOLLOW - Prevents search engines from indexing a web page that includes this instruction and from crawling any of the links on the page.
    - NOINDEX, FOLLOW - Prevents search engines from indexing a web page that includes this instruction but allows all the links on the page to be crawled.
    - INDEX, NOFOLLOW - Allows search engines to index a web page that includes this instruction but prevents the links on the page from being crawled.
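For reference, the Robots.txt field accepts standard robots exclusion directives. A minimal sketch of what might be entered there (the paths shown are hypothetical examples, not actual Znode store routes):

```
User-agent: *
Disallow: /cart/
Disallow: /checkout/
Allow: /
```

Here `User-agent: *` applies the rules to all robots, and each `Disallow` line asks compliant crawlers to skip that path.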
Important: The Robot Tag value saved at the store level is used on the webstore if and only if no Robot Tag value is saved at the page level.
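The page-level robot tag options above correspond to the standard robots meta tag. For example, a page whose effective Robot Tag is NOINDEX, FOLLOW would typically render markup like the following in its `<head>` section (shown here as an illustration of the standard tag, not Znode's exact output):

```html
<meta name="robots" content="NOINDEX, FOLLOW">
```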