Internal Linking: Link Juice Sculpting SMX East October 7, 2008
The Syntax • NoFollow Attribute • <a href="/someurl.html" rel="nofollow"> • NoFollow Metatag • <meta name="robots" content="nofollow"> • NoIndex Metatag • <meta name="robots" content="noindex"> • Combined NoFollow and NoIndex • <meta name="robots" content="noindex, nofollow">
Robots.txt Syntax • User-agent: * • Disallow: / • Prevents all crawling of your site • User-agent: * • Disallow: /test/ • Disallow: /cgi-bin/ • Prevents crawling of your /test/ and /cgi-bin/ folders
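The rules on this slide can be verified with Python's standard-library robots.txt parser — a minimal sketch (not part of the deck), using the /test/ and /cgi-bin/ rules above:

```python
# Check which paths the robots.txt rules from the slide actually block,
# using Python's built-in robots.txt parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /test/
Disallow: /cgi-bin/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# /test/ and /cgi-bin/ are blocked for every crawler...
print(parser.can_fetch("*", "/test/page.html"))     # False
print(parser.can_fetch("*", "/cgi-bin/script.pl"))  # False
# ...but the rest of the site stays crawlable.
print(parser.can_fetch("*", "/about-us.html"))      # True
```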
NoIndex Illustrated • Diagram: the SE robot crawls the page (YES) and follows Links 1–3 (YES), but the page is kept out of the SE index (NO)
NoFollow Metatag Illustrated • Diagram: the SE robot crawls the page (YES) and the page lands in the SE index (YES), but Links 1–3 are not followed (NO)
NoFollow Attribute Illustrated • Diagram: the SE robot crawls the page (YES) and the page lands in the SE index (YES); Links 1 and 3 are followed (YES) while the NoFollowed Link 2 is not (NO)
Robots.txt Illustrated • Diagram: the SE robot never crawls the page (NO), so Links 1–3 are never followed (NO), yet the URL can still end up in the SE index (YES)
Duplicate Content Scenario • PR 7 site - aggregates press release content • About 60% of the press releases also show up on other sites • The value add of the site was to aggregate and organize the press releases • Google was having none of it • Hit by an algorithmic penalty
Duplicate Content Solution • NoIndex the pages • Let the crawler find and remove the pages • Can speed this up using the URL Removal Tool in WMT • Once removed, NoFollow links to the pages • DON'T use Robots.txt • What if someone else links to them? • You want the page to still be able to pass juice
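Before relying on the crawler to drop the pages, you can verify that each page actually carries the NoIndex metatag. An illustrative checker (the class name and sample page below are made up for this sketch), using only Python's standard library:

```python
# Extract the directives from any <meta name="robots"> tag on a page,
# so we can confirm "noindex" is present before waiting on the crawler.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of every <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            content = attrs.get("content") or ""
            self.directives += [d.strip().lower() for d in content.split(",")]

page = """<html><head>
<meta name="robots" content="noindex">
</head><body>A syndicated press release.</body></html>"""

parser = RobotsMetaParser()
parser.feed(page)
print("noindex" in parser.directives)  # True
```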
Content Syndication • Syndicate tens of thousands of pages • Exact copies of content for a major media site • To a PageRank 8 site • Sounds like major trouble, no? • SOLUTION: • NoIndex the syndicated pages • Prevents dupe content problem • Still passes link juice
Link Building • E-commerce site • Hard to get links for it • SOLUTION: • Build a rich content tree • Get links to that • Incorporate links from the content tree to key parts of the site • NoFollow all the other links
Link Building for E-Commerce • Diagram: Other Site Pages link to the Quality Content, which in turn passes juice through E-Comm Links to the site's key pages
Http and Https Dupe Content • E-commerce site • User lands on http://www.yourdomain.com • Puts a product in the shopping cart • Goes to check out • Gets sent to https://www.yourdomain.com/shoppingcart.asp • That page uses relative links (instead of absolute)
Http and Https Dupe Content - 2 • Example link: • <a href="/about-us.asp"> - Instead of: • <a href="http://www.yourdomain.com/about-us.asp"> • Click About Us on the home page: • <a href="http://www.yourdomain.com/about-us.asp"> • Click About Us on the shopping cart page: • <a href="https://www.yourdomain.com/about-us.asp">
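Python's `urljoin` shows exactly how the same relative href splits into two URLs depending on which page the visitor clicks from — an illustrative sketch, not part of the deck:

```python
# A relative href resolves against whichever base URL the visitor is
# currently on, so the same link yields an http URL and an https URL.
from urllib.parse import urljoin

relative_href = "/about-us.asp"

# Clicked from the plain-http home page:
print(urljoin("http://www.yourdomain.com/", relative_href))
# → http://www.yourdomain.com/about-us.asp

# Clicked from the https shopping-cart page:
print(urljoin("https://www.yourdomain.com/shoppingcart.asp", relative_href))
# → https://www.yourdomain.com/about-us.asp

# An absolute href resolves the same way from both pages:
print(urljoin("https://www.yourdomain.com/shoppingcart.asp",
              "http://www.yourdomain.com/about-us.asp"))
# → http://www.yourdomain.com/about-us.asp
```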
Http and Https Solution • Serve a separate robots.txt on https (https://www.yourdomain.com/robots.txt) that Disallows crawling of the https pages • NoFollow all links to the https pages • Make sure you use https only where you really need it
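Assuming your server can return a different robots.txt depending on the protocol (typically done with a rewrite rule), the https version would simply block everything while the http version keeps its normal rules:

```
# robots.txt served ONLY at https://www.yourdomain.com/robots.txt
# (the plain-http robots.txt is unchanged)
User-agent: *
Disallow: /
```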
Thank You! Eric Enge President Stone Temple Consulting eenge@stonetemple.com