I just found out about donotlink and thought it might be of interest to others here. We are always talking about j w dot org, and I am worried that linking to it is helping them. Is that true?
Let me try.
What is donotlink?
Linking to dubious websites
You've all heard there's no such thing as bad publicity. On the internet this is doubly true. When you link to a website — regardless of the reason — this strengthens its position in search engines. This means that a bad review of a website makes it more popular.
When you are discussing or alerting others to a website that promotes a fraud, scam, cult, or other questionable business, and you link to that site, search engines will (after a while) improve the offending site's ranking.
Therefore, more people will find these shady websites and be exposed to their content without getting the proper context. That's where donotlink comes in.
With donotlink.com, you can link to sites without giving them "Google juice".
Donotlink uses three different methods to keep search engines from crawling a link, so you can post it on forums, message boards, Facebook, Twitter, Reddit, and other public places without giving shady websites any undue credibility.
What does donotlink do?
Using donotlink.com instead of linking to questionable websites directly will prevent your links from improving these websites' position in search engines.
How does this work?
Much like standard URL shorteners, donotlink creates a shortened URL for the website you submit. Just use this URL instead of linking directly to the website, and we'll do the rest.
If you don't want to visit donotlink.com every time you use this service, you can also put "http://www.donotlink.com/" before the website's URL, like this:
http://www.donotlink.com/www.example.com/shady/stuff.html
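If you want to build those prefixed URLs programmatically, the small Python helper below shows the idea. It is only a sketch based on the example above; the helper name and the assumption that donotlink accepts the target with its scheme stripped are mine, not documented behaviour of the service.

```python
# Hypothetical helper: prepend the donotlink prefix to a target URL.
# Assumes the target is appended with its scheme ("http://"/"https://")
# stripped, as in the example above; this is not donotlink's documented API.
from urllib.parse import urlparse

DONOTLINK_PREFIX = "http://www.donotlink.com/"

def donotlink_url(target: str) -> str:
    """Return a donotlink-prefixed version of `target`."""
    scheme = urlparse(target).scheme
    stripped = target[len(scheme) + 3:] if scheme else target  # drop "scheme://"
    return DONOTLINK_PREFIX + stripped

print(donotlink_url("http://www.example.com/shady/stuff.html"))
# -> http://www.donotlink.com/www.example.com/shady/stuff.html
```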
How does this prevent search engines from crawling the website?
Donotlink routes links to questionable sites through a unique intermediate URL that forwards the visitor to the destination through JavaScript (a rough sketch of this flow follows the list below).
- This URL is blocked in our robots.txt file, so (search engine) robots are discouraged from crawling it.
- The "nofollow" attribute on the link and on the intermediate page gives robots another reminder not to crawl the link.
- If a known robot does decide to crawl the link, our code will identify it and serve it a blank page (403 Forbidden) instead of redirecting to the destination URL.
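To make the mechanism concrete, here is a minimal sketch of such an intermediate page. It is not donotlink's actual code: the framework (Flask), the bot User-Agent substrings, the token-to-URL store, and the page markup are all assumptions made for illustration.

```python
# Illustrative sketch of an intermediate redirect page, not donotlink's code.
# Assumptions: Flask, a hand-picked list of bot User-Agent substrings, and a
# JavaScript redirect on the served page. The "/<token>" path would also be
# disallowed in robots.txt so well-behaved crawlers skip it entirely.
from flask import Flask, request, abort

app = Flask(__name__)

KNOWN_BOT_SUBSTRINGS = ("googlebot", "bingbot", "baiduspider", "yandexbot")

# Pretend store mapping short tokens to destination URLs (illustrative only).
LINKS = {"abc123": "http://www.example.com/shady/stuff.html"}

@app.route("/<token>")
def intermediate(token):
    ua = (request.headers.get("User-Agent") or "").lower()
    # Known crawlers get a blank 403 page instead of the redirect.
    if any(bot in ua for bot in KNOWN_BOT_SUBSTRINGS):
        abort(403)
    destination = LINKS.get(token)
    if destination is None:
        abort(404)
    # Humans are forwarded via JavaScript; the visible fallback link carries
    # rel="nofollow", and the meta tag tells robots not to index or follow.
    return f"""<!doctype html>
<html>
  <head><meta name="robots" content="noindex, nofollow"></head>
  <body>
    <script>window.location.replace({destination!r});</script>
    <a href="{destination}" rel="nofollow">Continue to the destination</a>
  </body>
</html>"""
```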
But we don't stop there. We are continually improving our algorithms (a clever combination of blacklists and Bayesian inference) that identify crawlers and bots, so even search engines and scrapers that don't play by the rules will get caught out.
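Donotlink doesn't publish that detection code, so the sketch below is only one plausible shape for a "blacklist plus Bayesian inference" filter: a hard substring blacklist backed by a tiny naive Bayes classifier over User-Agent tokens. Every feature, training example, and threshold here is invented for illustration.

```python
# Illustrative only: one plausible shape for "blacklists + Bayesian inference".
# The tokenizer, training examples, and threshold are invented for this sketch.
import math
from collections import Counter

def tokens(ua: str):
    """Crude feature extraction: lower-cased words from a User-Agent string."""
    return ua.lower().replace("/", " ").replace(";", " ").split()

class NaiveBayesBotFilter:
    def __init__(self):
        self.counts = {"bot": Counter(), "human": Counter()}
        self.totals = {"bot": 0, "human": 0}

    def train(self, ua, label):
        for t in tokens(ua):
            self.counts[label][t] += 1
            self.totals[label] += 1

    def is_bot(self, ua, threshold=0.0):
        # Log-odds of "bot" vs "human" with add-one (Laplace) smoothing.
        vocab = len(set(self.counts["bot"]) | set(self.counts["human"])) or 1
        score = 0.0
        for t in tokens(ua):
            p_bot = (self.counts["bot"][t] + 1) / (self.totals["bot"] + vocab)
            p_hum = (self.counts["human"][t] + 1) / (self.totals["human"] + vocab)
            score += math.log(p_bot) - math.log(p_hum)
        return score > threshold

# A hard blacklist handles the obvious cases before any inference runs.
BLACKLIST = ("googlebot", "ahrefsbot", "semrushbot")

clf = NaiveBayesBotFilter()
clf.train("Mozilla/5.0 compatible Googlebot crawler", "bot")
clf.train("Mozilla/5.0 Windows NT 10.0 Chrome Safari", "human")

ua = "SomeCrawler/2.1 crawler"
blocked = any(b in ua.lower() for b in BLACKLIST) or clf.is_bot(ua)
print("blocked:", blocked)  # True for this crawler-like User-Agent
```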