Panelists at SES Chicago’s “Duplicate Content” session gave tips on how to avoid duplicate-content penalties – cases where identical homepages are reachable at different URLs, or where links point to several URLs for a single Web site.
Anne Kennedy, managing partner at Portland, ME-based search engine optimization firm Beyond Ink, said content duplication is a problem because it violates Google’s Webmaster Guidelines. Yahoo also does not want multiple sites offering the same content.
“Don’t confuse the spider,” Ms. Kennedy said. “Choose one canonical domain and link all internal pages on the site to it. Exclude landing pages for tracking from search engines using robots.txt. Use 301 redirects to point all your domains to a single site.”
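Ms. Kennedy’s robots.txt advice might look like the following sketch, assuming tracking landing pages live under a directory such as /landing/ (the path is a placeholder, not one cited at the session):

```
# robots.txt at the site root
# Keep spiders out of tracking landing pages so they are
# never indexed as duplicates of the canonical pages.
# (/landing/ is an example path; substitute your own.)
User-agent: *
Disallow: /landing/
```

A Disallow rule only blocks crawling of matching paths; the rest of the site remains fully indexable.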
Search engines look to content properties, linkage properties, content evolution, host name resolution and shingle comparison to determine duplicate content, said Shari Thurow, Webmaster and marketing director at Grantastic Designs Inc., Carpentersville, IL.
“Be proactive – if you know that your content management system is delivering duplicate content, use the robots exclusion protocol and 301 redirects,” Ms. Thurow said.
Use Web analytics software and other tools to determine the pages that have the highest conversion rates, she said. And don’t exploit the search engines.
Mikkel deMib Svendsen, creative director at Denmark’s RedZoneGlobal, addressed technical duplicate issues.
Common technical duplication issues include URLs with and without www, session IDs, URL rewriting, and many-to-one problems in forum and breadcrumb navigation.
If your breadcrumb navigation is reflected in your URLs, you may have a problem, Mr. Svendsen said. A page should have only one location; store user navigation paths in a cookie instead.
“Remember, there are infinite ways to create multiple URLs to a single page,” Mr. Svendsen said. “Whatever you do, don’t leave it to the engines to deal with.”
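The with-and-without-www case Mr. Svendsen describes can, on Apache, be collapsed with a mod_rewrite rule that 301-redirects the bare domain to the www host – again a sketch with a placeholder domain:

```
# .htaccess — canonicalize the host name
# Requests for example.com are permanently redirected to
# www.example.com, preserving the requested path.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The same pattern works in reverse for sites that prefer the bare domain; the point is to pick one host and redirect the other.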