Robots.txt is a text file that web administrators create to instruct search engine robots how to crawl and index the pages of their website.
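For example, a minimal robots.txt file (the /private/ path here is just an illustration) looks like this:

    User-agent: *
    Disallow: /private/

The User-agent: * line addresses all crawlers, and the Disallow line asks them not to fetch any URL whose path starts with /private/, while the rest of the site stays crawlable.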
Did you know that a single text file is powerful enough to destroy your entire website's presence in search? The error is damaging, and the way it happens is pretty simple to understand:
- The sneaky text file that is ruining your life could tell search engines not to crawl your website at all (the two-line example after this list shows exactly how).
- If search engines can't crawl your website, then your pages (that you've worked so hard on) won't appear in any search results.
- If your website can't be found on search engines, nobody will know it exists.
- As a result, your business will suffer.
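For illustration, the damage described above usually comes down to these two lines, which ask every crawler to skip every URL on the site:

    User-agent: *
    Disallow: /

That single Disallow: / rule matches the whole site, which is why one misplaced line in this small text file can wipe your pages out of search results.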
Don't worry. We won't let you or your website suffer. The best way to avoid this issue is to use our FREE robots.txt generator, a tool designed to generate a proper robots.txt file for your website.
Key Features and Benefits
- The generated robots.txt file can be uploaded directly to your website's root directory (see the note after this list on why the location matters).
- The new file will tell Google and other search engines which of your website's pages or directories should and should not show up in search results.
- The tool gives you recommendations whenever you add a new directive, file, or file path to either a new or an existing robots.txt file.
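A note on the first point: crawlers only look for robots.txt at the root of your domain, never in a subdirectory. Using example.com as a placeholder domain:

    https://www.example.com/robots.txt        (crawlers read this)
    https://www.example.com/blog/robots.txt   (crawlers ignore this)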
Wouldn't you like a FREE and easy way to create a new robots.txt file, or edit an existing file, for your website?
When using our tool, you can specify which search engines to include in your personalized criteria.
Once your new robots.txt file is generated, Google and any other specified search engines will know which pages or directories of your website should and should not be shown in searches.
How to use?
Step 1: Enter your website domain name.
Step 2: Click the "Import Robots.txt" button. If a robots.txt file is present, the tool will fetch and display its content; if you want to change anything, update it by adding new directives or modifying existing ones. If no file is present, create a new one by adding directives.
Examples of adding new directives
Allow: allows crawling of a particular path
Input: Allow/Disallow is "Allow", User agent is "All", Directory or File is "/lxrmarketplace"
That means all robots may crawl URLs whose paths begin with /lxrmarketplace.
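With those inputs, the generated directives would look like this standard robots.txt snippet (the tool's exact formatting may differ):

    User-agent: *
    Allow: /lxrmarketplace

In practice, an Allow rule matters most when it carves an exception out of a broader Disallow; on its own, crawling is already permitted by default.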
Disallow: blocks crawling of a particular path
Input: Allow/Disallow is "Disallow", User agent is "Googlebot", Directory or File is "/xyz"
That means Googlebot won't crawl URLs whose paths begin with /xyz.
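Those inputs correspond to this snippet (again, the tool's exact formatting may differ):

    User-agent: Googlebot
    Disallow: /xyz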
Step 3: Click the "Get Result" button to view the output. You can also download the result as a text file by clicking the "Download" button.
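Putting the two example rules above together, the downloaded text file would look something like this:

    User-agent: *
    Allow: /lxrmarketplace

    User-agent: Googlebot
    Disallow: /xyz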