This robots.txt generator is designed for practical crawler-control work, not just for spitting out a boilerplate file. The current screen walks through setup, advanced overrides, the generated result, and URL testing against the rules.

Setup
Build a valid robots.txt with crawler-specific rules and sitemap lines.
Configure default access, crawler overrides, and restricted paths.
Default - All Robots are: the default access policy applied to every crawler.
Crawl-Delay: optional delay, in seconds, between successive crawler requests.
Sitemap: full URL of your XML sitemap (leave blank if you don't have one).
Search & Crawlers: Google, Google Image, Bing, DuckDuckGo, Applebot, Baidu, Yandex, Naver
Restricted Directories: paths are relative to the site root and must end with a trailing slash "/".
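For example, a hypothetical configuration that refuses Baidu, sets a 10-second crawl delay, and restricts two directories might produce output along these lines (a sketch; the domain, the paths, and the Baiduspider token are placeholders, and exact output depends on your settings):

    User-agent: Baiduspider
    Disallow: /

    User-agent: *
    Crawl-delay: 10
    Disallow: /cgi-bin/
    Disallow: /tmp/

    Sitemap: https://example.com/sitemap.xml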
Advanced Options

Useful for exceptions, for example allowing one path or file while the paths around it stay blocked.
Create extra groups with a custom User-agent and optional Allow/Disallow rules.
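A custom group like the following (a sketch; the bot token and paths are placeholders) unblocks a single file inside an otherwise disallowed directory. Allow is honored by major crawlers such as Googlebot and Bingbot, though not by every robot:

    User-agent: Googlebot
    Allow: /downloads/press-kit.pdf
    Disallow: /downloads/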
Result
Diff mode: save the current content, regenerate, then compare; added and removed lines are reported.
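The same comparison can be reproduced offline. Here is a minimal sketch using Python's standard difflib, with hypothetical before/after contents:

    import difflib

    # Hypothetical saved and regenerated robots.txt contents.
    saved = """\
    User-agent: *
    Disallow: /tmp/
    """.splitlines()

    regenerated = """\
    User-agent: *
    Disallow: /tmp/
    Disallow: /admin/
    """.splitlines()

    # unified_diff prefixes added lines with "+" and removed lines with "-".
    for line in difflib.unified_diff(saved, regenerated,
                                     fromfile="saved", tofile="regenerated",
                                     lineterm=""):
        print(line)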
Test URL Against Current Rules
Generate the robots.txt content first, then test whether a given URL would be allowed or blocked by those rules.
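The same check can be scripted with Python's standard urllib.robotparser. One caveat: this parser applies the first matching rule, while Google uses the most specific (longest-path) match, so listing Allow lines before the broader Disallow keeps both interpretations in agreement. The rules below are a hypothetical example:

    import urllib.robotparser

    # Hypothetical generated content; paste your own output here.
    robots_txt = """\
    User-agent: *
    Allow: /admin/help.html
    Disallow: /admin/
    """

    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())

    # can_fetch() reports whether the named user agent may fetch the URL.
    print(rp.can_fetch("Googlebot", "https://example.com/admin/"))           # False
    print(rp.can_fetch("Googlebot", "https://example.com/admin/help.html"))  # True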

Save the output as robots.txt at your site root and verify it with search engine tools after deployment.
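After deployment, a quick spot-check against the live file is possible with the same module (a sketch; the domain and path are placeholders):

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetch and parse the deployed file
    print(rp.can_fetch("Googlebot", "https://example.com/admin/"))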
