The other day I wrote about a challenge I was facing with a client of mine who was using a WordPress site.
Their web designer had possibly nefariously tagged the entire site to be invisible to the search engines…like a vampire unable to see its own reflection.
I worked for a while on unravelling this mystery, looking into the deepest, darkest corners of the code and then the web that encased it, but found no solutions. I tried a workaround that failed, even though sucking the blood out of other methods had worked for me before. Ultimately, getting my hands dirty in the cobwebs of WordPress brought success.
As I suspected, the culprit was a simple checkbox that applied to the entire site. Unfortunately, I didn’t know where to look. The WordPress gurus I consulted had no idea either. To be fair, I only fired off quick questions. Channa is awesome; I was looking for answers, not explorations.
While setting up a new WordPress site last night, I discovered the answer to my question. I’ll save the screenshot for another time.
Settings → Privacy → “I would like my blog to be visible to everyone, including search engines (like Google, Sphere, Technorati) and archivers.”
If you have this checked, your website is golden. If not, your website is dying an anonymous death. Like a rare disease, nobody but the experts knows why your site is dying from a dearth of traffic. I do. Let’s fix it.
I’ve written before about how web designers don’t know how to do search engine optimization and why bringing an SEO company into your project as early as possible is a good idea.
This last week I saw one of the worst uses of meta tags on a new client’s site. The company that set up the WordPress blog for my client coded in a meta tag that tells the search engine spiders to ignore every page and link on the site.
The worst thing is that every method I tried to change that code failed. However, I know a workaround for everything. Since I couldn’t change the meta tag, I added a robots.txt file. Spiders look at two things when they visit a site: the meta tags and the robots.txt file. On my client’s site the meta tags say “go away,” but the robots.txt file says “index my entire site.”
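For reference, a site-wide “ignore me” directive of the kind described usually takes this form in each page’s head (a generic example, not the client’s exact markup):

```
<!-- Tells every spider: do not index this page and do not follow its links -->
<meta name="robots" content="noindex, nofollow">
```

When WordPress’s privacy checkbox is unchecked, this is essentially what gets stamped onto every page of the site.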
It’s entirely likely that a robot might find the same links on some other page without a NOFOLLOW (perhaps on some other site), and so still arrive at your undesired page.
The other thing I did to fix the situation was submit the URL to Google for indexing and create a directory listing for the site at Merchant Circle. Tomorrow a press release hits that also contains links, so I know we’ll be getting a lot of quality inbound links for my client in a very short time.
In the meantime, I’ll still be figuring out how to remove that irresponsible meta tag. The reason the tag exists is to avoid the so-called “duplicate content penalty.” However, that penalty no longer exists.
There is one reason and one reason alone to ever use the noindex, nofollow meta tag. It’s when you’re running a paid search campaign to a specific landing page and you don’t want organic traffic to skew your numbers. That’s it!
Where do you place the robots.txt file?
It goes in the root directory, like this: http://www.seobyswaby.com/robots.txt. It’s just like a page, except it’s plain text instead of .html.
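If you want to check how a spider will interpret the file once it’s in place, Python’s standard library ships a parser for exactly this format. A quick sketch, using the example domain above (the page name is made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# An allow-all robots.txt: every user agent may crawl everything.
robots_txt = """User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Nothing is disallowed, so any spider may fetch any page.
print(parser.can_fetch("Googlebot", "http://www.seobyswaby.com/robots-welcome.html"))  # True
```

The same `can_fetch` call against a file containing `Disallow: /` would come back `False`, which is a fast way to confirm you haven’t locked the spiders out by accident.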
What’s in a robots.txt file?
Very simple commands for the spider. Here’s what my client’s file looks like.
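The file itself isn’t reproduced above, but an allow-all robots.txt of the kind described needs only two lines:

```
User-agent: *
Disallow:
```

`User-agent: *` addresses every spider, and an empty `Disallow:` means nothing is off limits, so the whole site is open for crawling.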
If you’re doing a site redesign or having a site built for the first time, be sure to get someone who understands SEO involved early. You will save money and possibly avoid having your site tagged to be ignored by Google.