I have a crawling issue with my client's WordPress site, http://www.metroprinting.biz, which is hosted on DreamHost. The site serves a default robots.txt that stops bots from crawling it, yet that robots.txt file cannot be found anywhere in the site's source files. If we add our own physical robots.txt, it shows up on the live site, but bots still do not crawl the site. It seems the system-generated default robots.txt is still taking effect.
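For context, WordPress generates its robots.txt virtually (on the fly, which is why no file appears in the source tree), and when the Settings → Reading option "Discourage search engines from indexing this site" is enabled, the virtual file it serves blocks everything. Assuming that setting is the cause here, the served output would look like this:

```
User-agent: *
Disallow: /
```

A physical robots.txt at the site root should normally be served instead of the virtual one, which is why it is worth checking whether a caching layer or the search-visibility setting is still in play.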
Please help me resolve this.