There were several controversial issues surrounding the Grub project shortly after LookSmart acquired it. Grub had a tendency to ignore some misconfigured robots.txt files on the sites it crawled.[citation needed] Even after the development team addressed these issues, some webmasters continued to blame it for crawling their sites too aggressively and for not respecting their robots.txt files.[citation needed]
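The robots.txt compliance at issue here is straightforward to implement: before fetching a URL, a well-behaved crawler checks the site's robots.txt rules for its user agent. A minimal sketch using Python's standard `urllib.robotparser` (the function name and user-agent string are illustrative, not Grub's actual code):

```python
from urllib.robotparser import RobotFileParser

def allowed_to_fetch(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if the given robots.txt text permits `user_agent`
    to fetch `url`. A polite crawler skips any URL this rejects."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# Example: a robots.txt that blocks everyone from /private/
rules = "User-agent: *\nDisallow: /private/\n"
allowed_to_fetch(rules, "ExampleCrawler", "http://example.com/index.html")   # permitted
allowed_to_fetch(rules, "ExampleCrawler", "http://example.com/private/x")    # blocked
```

A crawler that skips this check, or mishandles malformed rule files, produces exactly the over-crawling complaints described above.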
Another issue was the closing of the source code base and the apparent failure to use the crawled data for anything useful, such as a searchable index of the crawled sites. Grub appears to have been used briefly to seed the URL list for NetNanny, another LookSmart acquisition.
Operations of Grub were shut down in late 2005. The site was reactivated on July 27, 2007, and is currently being updated. The original developers are assisting with the new deployment and investigating the robots.txt issue to ensure the earlier problems do not recur.
Users of Grub can download the peer-to-peer Grub client software and let it run during computer idle time. The client indexes the URLs it crawls and sends them back to the main Grub server in a highly compressed form. The collective crawl could then, in theory, be used by an indexing system, such as the one proposed at Wikia Search. Grub can quickly build a large snapshot of the web by asking thousands of clients each to crawl and analyze a small portion of it.
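The client-side reporting step described above can be sketched simply: batch the URLs discovered during idle-time crawling, compress the batch, and ship it to the coordinating server. The function names and payload format below are illustrative assumptions, not Grub's actual wire protocol; standard zlib compression stands in for whatever scheme Grub used.

```python
import zlib

def pack_crawl_results(urls: list[str]) -> bytes:
    """Client side: join the crawled URLs into one payload and
    compress it before uploading to the central server."""
    payload = "\n".join(urls).encode("utf-8")
    return zlib.compress(payload, level=9)

def unpack_crawl_results(blob: bytes) -> list[str]:
    """Server side: decompress a client's payload back into URLs."""
    return zlib.decompress(blob).decode("utf-8").split("\n")

# A client's small portion of the crawl, compressed for upload.
batch = [f"http://example.com/page/{i}" for i in range(1000)]
blob = pack_crawl_results(batch)
```

URL lists compress very well because entries share long common prefixes, which is why shipping them "in a highly compressed form" keeps per-client upload bandwidth low.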