In the current landscape of search engine marketing, it isn’t enough to get your content published, crawled, and indexed. You want to own it. You want it working for you. But there is a major obstacle to that happening for many webmasters.
It’s called duplicate content.
Duplicate content is a phrase that has scared a lot of webmasters into unnecessary paranoia. The problem with duplicate content has always been scraping, not two articles by the same author that are somewhat similar.
Look at it this way. You have two articles that overlap. They are both on your website and clearly have you as the author. What's the worst that can happen? In Google's world, one of the articles could be de-indexed. While that would be an inconvenience, it pales in comparison to an article you wrote being de-indexed while the same article under someone else's byline is catapulted to a No. 1 ranking. That would hurt.
Google's challenge with duplicate content is determining which version of an article came first. If they get it right, no problem; if they get it wrong, that's when you suffer.
When you publish your content on the web, article directories may not be the best destination. You are competing with thousands of articles there, and if your article also appears elsewhere on the web, there's no guarantee the search engines will credit the directory copy as the original. Instead, send original content to niche publishers that link back to you with a bio, and make sure those articles are indexed fairly quickly.