Indexing/crawling failure during creation of your demo

If you signed up and created a demo, things may not have gone as expected and you received an error message from us. There are a number of possible issues that might have prevented AddSearch from properly crawling and indexing your site.
The most common errors, and their solutions, are described below.

Your website might have a file called robots.txt that gives instructions to crawling software. It could be that your robots.txt tells our crawlers that crawling the site is forbidden. As our crawlers are very well behaved, they listen.

If you have access to the robots.txt file, you can change it to allow our crawlers to enter. Read more here: http://www.robotstxt.org/robotstxt.html
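
For reference, a minimal robots.txt that allows all well-behaved crawlers, including ours, to visit every page looks like this:

User-agent: *
Disallow:

An empty Disallow line means nothing is blocked, whereas a line reading Disallow: / blocks the entire site.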

Your website might also contain an HTML tag, most likely in the <head> section of your page, that forbids our crawlers from indexing the site. In that case you will find a meta tag that looks like this:

<meta name="robots" content="noindex">

If you want our crawlers to index your site, you can either remove this tag or replace "noindex" with "index", so that it looks like this:

<meta name="robots" content="index">

This will allow our crawlers to crawl your site and index it properly.

It is possible that part of the HTML code on your website is incomplete or incorrect. This might not cause any visual problems, so your site can appear to work fine, while our crawlers are unable to crawl and index it properly.

You can do an automatic check of your code at: http://validator.w3.org/
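
As an illustration, an error as small as a missing quotation mark in an attribute can still render acceptably in a browser while confusing a crawler:

<a href="/products>Our products</a>

Here the closing quote of the href value is missing, so everything up to the next quotation mark may be treated as part of the link address. The /products path is just a placeholder; the validator above will point out issues like this in your own markup.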

If your website has a main menu that depends on JavaScript, this can also cause problems for our crawlers. This is not limited to AddSearch's crawlers: web search engines such as Google can struggle with it too, so this practice is generally not recommended for website owners (see the example further below).
If your website consists mainly of Flash files, our crawlers are unable to read and index them.
If your website consists only of images, with the text stored inside .jpg/.gif/.png and similar image files, our crawlers cannot read that text, as it has technically become part of an image.
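
As a rough sketch of the JavaScript-menu issue mentioned above, compare a menu that only exists after a script runs with one written as plain HTML links (the /products and /contact paths are placeholders):

<!-- Hard for crawlers: the links only appear after the script runs -->
<nav id="menu"></nav>
<script>
  document.getElementById('menu').innerHTML = '<a href="/products">Products</a> <a href="/contact">Contact</a>';
</script>

<!-- Crawler-friendly: the links are present in the HTML itself -->
<nav>
  <a href="/products">Products</a>
  <a href="/contact">Contact</a>
</nav>

Providing plain HTML links like the latter gives our crawlers, and web search engines, something to follow even if your visible menu is built with JavaScript.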

It could be that your website is so small that our crawlers have overlooked your pages and reported the site back as empty.

It could be that many other users are making a similar request at the same time. This can make our servers unavailable, but only very temporarily. Just try again, and it should work.