robots.txt Generator
Build a robots.txt file for your website — presets, rules, sitemaps
Quick Presets
Rule #1
Additional Settings
robots.txt Output
# robots.txt generated by OmniWebKit
# Generated: 2026-04-05

User-agent: *
Allow: /
Free Online robots.txt Generator — Build Your File in Seconds
Every website needs a robots.txt file. It tells search engine crawlers which pages they can access and which ones they should skip. Without it, crawlers may index pages you want to keep private — admin panels, checkout flows, search result pages, or duplicate content. A properly configured robots.txt file improves your SEO, reduces server load, and keeps your site clean in search results.
This free robots.txt Generator lets you build the file visually. Add rules for different user agents, specify Allow and Disallow paths, set a crawl delay, add your sitemap URLs, and see the output in real time. Five presets are included: Allow All, Block All, WordPress, Next.js, and E-commerce. Each preset is a one-click starting point that you can customise.
When you are finished, copy the output to your clipboard or download it as a robots.txt file. Upload it to the root directory of your website. All processing runs in your browser — no data is sent to any server.
Understanding robots.txt Directives
User-agent
Specifies which crawler the rules apply to. Use * for all crawlers, or name a specific bot like Googlebot, Bingbot, or Yandex.
Disallow
Tells the crawler NOT to access a specific path. Disallow: /admin/ blocks the entire /admin/ directory. Disallow: / blocks the entire site. Note that robots.txt is advisory: well-behaved crawlers honour it, but it is not access control, so truly sensitive pages still need authentication.
Allow
Overrides a Disallow rule for a specific path. Useful when you block a directory but want to allow a specific file within it.
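For example, to block a directory while still exposing one file inside it (the paths here are illustrative):

```
User-agent: *
Disallow: /private/
Allow: /private/press-kit.pdf
```

Crawlers that follow Google's matching rules apply the most specific (longest) matching rule, so the Allow wins for that one file while the rest of /private/ stays blocked.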
Sitemap
Points crawlers to your XML sitemap. This helps search engines discover all your pages. Use the full URL: https://example.com/sitemap.xml.
Crawl-delay
Requests that crawlers wait a number of seconds between requests. Not all crawlers honour this directive, but it can reduce server load.
Host
Specifies the preferred domain for your site. It was used mainly by Yandex, which has since deprecated it in favour of redirects. Most sites do not need this directive.
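Putting the directives together, a small but complete robots.txt might look like this (the domain and paths are placeholders):

```
User-agent: *
Disallow: /admin/
Allow: /admin/help.html
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
```

The Sitemap line is independent of any User-agent group, so it can appear anywhere in the file.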
Five Presets Explained
Allow All
Allows all crawlers to access all pages. This is the most open configuration. Use it if you have nothing to hide and want maximum indexing.
Block All
Blocks all crawlers from accessing any page. Use this for staging sites, development servers, or sites not ready for public indexing.
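The Block All configuration is only two lines:

```
User-agent: *
Disallow: /
```

If you use this on a staging site, remember to replace it before launch, or search engines will never index the live site.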
WordPress
Blocks common WordPress admin directories, plugin files, trackbacks, feeds, and internal search results. Includes a sitemap directive.
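A sketch of a typical WordPress configuration follows; the paths below are common community conventions and may differ from the preset's exact output:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /trackback/
Disallow: /feed/
Disallow: /?s=

Sitemap: https://example.com/sitemap.xml
```

Allowing admin-ajax.php is the usual exception, since many WordPress themes and plugins call it from public pages.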
Next.js
Blocks Next.js internal routes (_next), API routes, and error pages (404, 500). Includes a sitemap directive.
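A sketch of what a Next.js configuration commonly looks like; the exact paths are assumptions, not necessarily the preset's literal output:

```
User-agent: *
Disallow: /_next/
Disallow: /api/
Disallow: /404
Disallow: /500

Sitemap: https://example.com/sitemap.xml
```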
E-commerce
Blocks cart, checkout, account, admin, search queries, wishlist, and URL parameters for sorting and filtering. Keeps product pages indexed.
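A sketch of an e-commerce configuration along those lines; the paths and query-parameter patterns are illustrative assumptions, not necessarily the preset's exact rules:

```
User-agent: *
Disallow: /cart/
Disallow: /checkout/
Disallow: /account/
Disallow: /admin/
Disallow: /wishlist/
Disallow: /search
Disallow: /*?sort=
Disallow: /*?filter=

Sitemap: https://example.com/sitemap.xml
```

Blocking parameterised sorting and filtering URLs prevents crawlers from indexing thousands of near-duplicate listing pages while product pages remain crawlable.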
