= Site Server Search Does Not Read Robots.txt Files =

Article ID: 272553

Article Last Modified on 10/23/2000

-

APPLIES TO


 * Microsoft Site Server 3.0 Standard Edition

-



This article was previously published under Q272553



SYMPTOMS
When you build a catalog for a Web site with Site Server 3.0 Search, it may appear that the Robots.txt file for the site is not being read.



CAUSE
This behavior occurs because the Site Server Gatherer service caches the previously requested Robots.txt file for 24 hours. Site Server 3.0 Search follows the rules in the Robots.txt file placed in the root of a Web site; this file tells the search engine which areas of the site it may or may not crawl. However, if you rebuild the search catalog, or delete and re-create the catalog, the Robots.txt file may not be re-requested, so recent changes to the file are not picked up.
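For illustration, a minimal Robots.txt file placed at the root of the site might look like the following sketch. The paths shown are hypothetical examples, not taken from this article:

```
# Rules for all crawlers, including Site Server Search
User-agent: *
# Hypothetical paths; replace with the areas of your site to exclude
Disallow: /private/
Disallow: /temp/
```

An empty `Disallow:` line under `User-agent: *` would instead permit crawling of the entire site.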



RESOLUTION
To force the file to be re-read, stop and restart the Site Server Gatherer service. The Robots.txt file is then re-requested on the next site crawl.
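On the server, this restart can be performed from a command prompt with the net command. The service display name shown below is an assumption; confirm the exact name of the Gatherer service in the Services administrative tool before running these commands:

```shell
:: Stop and restart the Gatherer service so Robots.txt is re-requested
:: "Site Server Gatherer" is an assumed service name; verify it in the Services tool
net stop "Site Server Gatherer"
net start "Site Server Gatherer"
```

Restarting the service clears the 24-hour cache, so the next crawl fetches the current Robots.txt file.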



MORE INFORMATION
For additional information about Robots.txt files, click the article number below to view the article in the Microsoft Knowledge Base:

217103 How to Write a Robots.txt file

Keywords: kbfix kbprb KB272553

-


© Microsoft Corporation. All rights reserved.