Algorithm change launched

I just wanted to give a quick update on one thing I mentioned in my search engine spam post.

My post mentioned that “we’re evaluating multiple changes that should help drive spam levels even lower, including one change that primarily affects sites that copy others’ content and sites with low levels of original content.” That change was approved at our weekly quality launch meeting last Thursday and launched earlier this week.

This was a pretty targeted launch: slightly over 2% of queries change in some way, but less than half a percent of search results change enough that someone might really notice. The net effect is that searchers are more likely to see the sites that wrote the original content rather than a site that scraped or copied the original site’s content.

Thanks to Jeff Atwood and the team at Stack Overflow for providing feedback to Google about this issue. I mentioned the update over on Hacker News as well, because folks on that site had been discussing specific queries.

366 Responses to Algorithm change launched

  1. So when are you going to launch this new algorithm worldwide?

  2. Great to hear. I’ve been noticing more and more that content scrapers are winning. Another interesting strategy: rather than scraping the entire thing, they scrape excerpts and drop a link to the original source, apparently to look more legitimate. They just become an intermediate click as a way of trying to evade detection.

  3. That’s why you are the spam King Leonidas!

    Seriously, keep up the good work. There has been lots of murmuring lately about how much “web spam” is in the SERPs; HOWEVER, people tend to forget how much spam there is on a daily basis that you have to fight against.

    If you don’t end up using Google, what do you have? Bing? Even I can’t find anything on Bing.

  4. Nice 😉

  5. Thanks, Matt. Too bad you can’t short Demand Media for another 30 days.

  6. Is this separate from the new focus on “content farms”? Can you give a more specific definition of what exactly Google considers a content farm at that? Is eHow, for example considered a content farm?

  7. I am a regular reader of technology and web design related blogs. As far as I know, more than 70% of blogs or sites rewrite content. Will they all be affected by this change, or what will happen to them?

  8. Flavio Luiz Melo

    How can Google decide which is the original and which is the scraped content?

  9. Great news Matt. This has been a problem for a long, long time. I don’t hate aggregators, but I don’t like scrapers who just reproduce others’ work with no additional benefit to the visitors. I hope this is worse news for scrapers than aggregators, right Matt?

  10. That’s cool. Low quality content detection seems difficult, but for users it’s the key to finding good value on long tail (and medium tail) searches. More signal, less noise. That should be a time-saver, with a productivity effect many times bigger than Instant or previews. The top ten should be the top ten, always (if that’s possible some day).

    Thanks a lot for your efforts. And for connecting with users.

  11. THANK you!! When I first started my blog in 2007, I decided I’d submit copies of the articles to some directories to try and get more exposure. Lo and behold: the directories grabbed the search engine spot for my articles… wth! Glad that this will be fixed at last 🙂

  12. I noticed the difference already. Some of the more spammy and content-stagnant competitors for certain keywords are measurably affected. This is a great update that will only hurt websites with poor content, or those who take advantage of others’ hard work. Nice work on this one Matt and all at Google.

  13. Does this only affect scraper-type, unauthorized use of content? Or will it also affect articles where the third party has permission to republish the content?

    For instance, I have many articles where I have allowed other sites to run them, but I would still presume that my original article would be the one that shows up in a relevant search. It seems this is not always the case (even with your new algorithm… assuming it’s in place).

    Do we give up that right when we allow others to republish our work?


  14. Yeah, finally something fresh! Nice move, Matt. I have been worried about this issue because a lot of my websites were losing positions to other websites that copied my content. Really great news. But I do have a question: what about e-commerce websites that have a lot of similar content (products)? Are they going to be penalized?

    Thanks again Matt, you major 🙂 !!!

  15. How is Google determining original from scraped? Is it just whoever was picked up by Googlebot first?

    Also, if you’re a large company that creates original content and then contractually syndicates that content to large partners (perhaps portals like AOL or Yahoo), will those sites be affected as well? Most likely a large portal (seed site) would get picked up before the original, right?

  16. A search query I did yesterday still shows ServerFault aggregators in position 3,4,5 (but at least 1 and 2 point to ServerFault).

    Google “How to get IIS7 to release a locked file”

  17. Finally!
    I believe that, at least in Brazil, many things will change.
    Most big news sites draw their content from smaller sites as a reference…
    So the idea of prioritizing those source sites is cool.

  18. This is excellent. There’s nothing that I hate more than going to some site that has 200 different articles from some database that don’t have any relevancy at all. And although you say that no one will really notice, I’ll still sleep better at night knowing that the spammers are losing (still).

  19. All the best to you and your teammates… We are waiting to see a spam-free SERP. I’m happy to be a cuttlet.

  20. @Robert Accettura, I’ve noticed people doing this as well. I’ve seen a lot of people using autoblogging scripts/plugins to scrape portions of people’s articles from article directories and combine them to make “unique” content. This gets annoying when they’re outranking original content. Hopefully this algo change will help.

  21. Matt,
    I’m glad to see Google taking action against these kinds of sites. However, if it only impacts “less than half a percent of search results” in a way that “someone might really notice” isn’t this only a very small first step?

    I would hope that you continue to address this issue, because I’d guess it impacts well more than half a percent of the SERPs. I mean, any question you type into Google spits out several sites I’d qualify as “shallow or low quality content” that very often is just scraped from other sites.

  22. Great news indeed!

    Since you’re on it, can we push down w3schools too? 🙂 They have all the right search voodoo (first on JS searches for almost everything!) and none of the right content. What they have that is technically correct is underwhelming in its usefulness.

  23. How were those sites showing up in the first place since no one of import links to them?

  24. This is all well and good for the spammy sites that scrape and steal other content, but I worry about how this will affect sites with a legitimate ‘mirror’ of their content :/

  25. Sweet. So what’s the next 30 day challenge?

  26. I write 99% of the content on my site using my knowledge as a professional, and recently I have noticed a few other sites with terrible articles getting top-ten positioning. The articles were written by either a bot or a professional writer, not someone who has actually been there and done that. I really hope sites like that get hit by this new algorithm change. However, it seems every time a change is made my site takes three steps back and I have to struggle to get the traffic up again. Every time I hear Google is going to make a change I shudder, because it seems I will have more work to do to get the traffic back rather than focusing on writing for my visitors. I truly hope the next change won’t have that effect.

  27. Thank you thank you thank you! from a plain ol’ user. I hate those content-thief sites.

  28. Glad to see Google cracking down on this and putting more focus on this in 2011. Thanks for killing my productivity today while I go check the SERPs to see the new results 😉

  29. It seems like my project got into trouble with this update. Is it a result of thin content on some of the coupon pages? Damn…

  30. I like the fact that the new algorithm will promote original content in organic search results.

  31. Hi Matt:

    Long time fan, lots of respect for your work. Google is a great company, and creates incredible value for lots of users. In terms of creating ‘original content’, it produces none, which (somewhat ironically) puts it in the same category as the ‘scrapers’ you’re trying to punish. My question is this: do you lump product search engines in with this class of scraper sites? Our product search is competitive with Google Product Search. In my mind, we provide the same value as Google, but in a more limited way. I was hoping for some assurance that this measure is not used in a punitive way against legitimate competitors through a ‘guilt-by-association’ charge. Can you comment? Thanks!

  32. The concerns/questions I’m hearing mostly surround issues where content is legitimately republished and any negative impact.

    Going to stick my neck out and say that Matt et al. at Google are well versed in the popularity of re-blogging popular articles and these tactics will likely not get dinged unless the site as a whole is doing so from the top down.

    Positive signals such as feed subscribers, sharing, commenting will likely be key differentiators when similar/duplicate content comes into play.

    I hope that the algo is or becomes sophisticated enough to identify the original source when an article from a small site gets picked up on a big one. A recent example would be AJ Kohn’s “Quora button” post: that was picked up by TechCrunch. Hypothetically, AJ’s article should rank first since it received attribution but I’m not seeing that consistently.

  33. I think many people will welcome this change. What I wanted to know was: how will Google decide which content really is the original, and to what degree will this impact rankings and overall search results?

  34. Anything which filters out a lot of the crap online will be good and will make the job of professional web developers much easier. Being able to focus more on content and less on the latest tricks and optimizations sounds like a veritable utopia.

    In regards to Rick’s comment above, I’m not sure in which order such a story should come up in the results… While the original source may have been first, reading it on a more established news site may provide a better experience for the user and showcase the content better.

    So long as the aggregating / reproducing website gives credit where due, I think that is fair. Content should probably be number one, not just first-to-press glory…

  35. Hi Matt!
    Everything is good and the site is up!
    But we do have copied text on the site: it is official information from our partners, and we cannot change the contract descriptions… We also upload our partners’ videos to our YouTube channel. Is what we do really so bad? Of course not! So why should we receive a filter? Maybe you need to think further about the new algorithm? Or what can I do in my case?
    Also, I’d request a YouTube feature that permits embedding the player only on certain sites; others steal our videos onto their sites, and we are against that!

  36. Rick, I think in AJ’s case, a lot would depend on the exact query that you are searching for as well as how it is presented on the page. AJ’s page has pretty much no content on it, while the TC page is full of it…content, that is. :0


  37. Finally, the people that put in the effort, time and money will be rewarded.

  38. I think WordPress is one of the main problems, and I think starting there would be a good way to stop this stuff. Don’t get me wrong, WordPress is great! But a blog is supposed to be personal. Make a plugin that will verify who people are — for those on very cheap hosting, with tons of blogs on the same IP ranges — when they register for the hosting, in order to be shown in search results. Make search for blogs even more personal: show a picture of the blog owner in the search results next to their blog, so readers can see who’s giving this advice.

    I do not know where or how something like this would be started, but I think it is the only way. I hope you understand what I mean; you can probably think of more. But something needs to make it harder than buying a host for $5 and one-click installing software.

  39. Matt, it’s good to know that Google is focusing on search result improvements 🙂

  40. Very happy to see Google’s continued focus on quality.

    It was clear that Mahalo was getting grouped into the “content farm” space, so a year ago we pow-wowed as a company and asked “how can we do better?” Make fewer pages, but spend more on each, was the answer!

    We started spending $100-$10,000 on each page WITH HD videos, in order to EARN our place in the rankings as the best site on the internet for LEARNING.

    That’s why we pivoted this past Tuesday from human-powered search to “learn anything.”

    Video from DLD:

    We knew we had to be able to say with a straight face “this page deserves the #1 or #2 or #3 ranking on Google because it is better than anything in front of it.”

    I’m so glad we invested *millions* of dollars in our 23,000+ video library that now extends our already strong pages.

    All excellent pages, with videos to back them up. This is a great moment for quality sites, because as the content farms are devalued, the great content producers will move up as the algo does its work over the coming weeks/months!

    Long live the algorithm!

    @Jason Calacanis
    CEO/founder – Learn Anything!

  41. Same question as Fabio, really; I haven’t seen anything in Sweden yet.

  42. There’s something I don’t understand in all of this. If this algorithm update is so focused and targeted that it would only affect a small percentage of results to the point where a user might notice something, then why would that small percentage of results be targeted? I know even a small percentage can have a big impact when economies of scale are factored in, but this seems too small to even be worth considering, unless the results are so full of spam and/or duplicate content (which should be filtered out anyway, as much as is reasonably possible) that an algorithm update would provide significant filtering and a quality improvement as a result. The only two things I can come up with that might make an impact that way are MFA sites that regurgitate content and parked domains that do the same, and I personally almost never see those anymore (which probably puts me in the 99.5% of people who are unaffected by this).

    I guess in a long winded way, I’m wondering why announce it if you’ve gotten the feedback and the algorithm update would presumably be of such little consequence that no one would likely notice or comment on it unless you told everyone.

  43. But will the original web page suffer from the clones? I mean, if the information on my blog (original text) appears after a month on 1,000 other pages, does that give my blog less or more impact on the web (in search)?

  44. Hi Matt,

    Can you please fix whatever you did? I wasn’t spamming; I’d come up with a funny comparison of Mark Zuckerberg with Augustus Aurelius, “Marcus Augustus”, which was generating site traffic. My blog was ranking 5 for this, and another blog was linking to mine with the same comparison.
    My funny picture ranks number two on Bing and Yahoo for “Marcus Zuckerberg”. Pages with lots of spammy links now rank way higher than my blog, which focuses on social media.

    tweet me?

  45. Well, you’re flagging the wrong sites. My content has been copied all over the Internet (to places like Yahoo Answers, all day long) since 2004, and you’re going to accuse me of farming content? PLEASE!

    Fix the algorithm.

  46. Oneal,
    Agreed; perhaps not an ideal example, but one I could pull off the top of my head quickly.
    It was meant more as an example of what could be than what is.

  47. Matt, I posted this at Hacker News and it seems to be getting some attention. Would love your thoughts on it!


  48. And by the way, I’m not a content farm.

    I’m self-employed and write everything myself since 2004. I don’t play the article directory game either and none of my content is for sale. If anything is getting scraped, it’s scraped OFF my blogs.

    I quote a few articles here and there (fair use only), but only once in a while.

    I’m not using a standard theme either; just about every line of my WP theme has been customized in some way.

    I don’t have a big problem with this update; yes, it’s affecting my family in a big way, but that’s not your concern. My problem is Google accusing me of plagiarism and a lack of original content.

  49. @jason @mattcutts

    Mahalo’s (newly pivoted, with a UI overhaul) content is several levels of quality above Q&A sites from Associated Content or Demand Media. Mahalo seems sincere and motivated in delivering quality content to individuals seeking quality tutorials and answers across a broad range of subjects.

    Content farms like those out of Associated Content or Demand Media, with their algorithmically fueled, AdSense-driven Q&A business model, may be a short-term revenue generator, but the old adage rings true: if you “create quality content” (i.e., Mahalo), it will not only get you the highly coveted Google traffic but also let you enhance the quality of life of all those seeking to learn.

  50. I love this change! One thing I wonder, though: will it make it harder for us to find the sites that have copied our content? We run about 10 or so sites with 100% original content and often find that our competitors have copied/stolen our work. Our usual way of detecting this is to paste our content into a Google search and go after anyone who shows up for copyright infringement. From how you have described the algorithm change, we’ll now show up and the copyright violators will not, assuming Google correctly credits the content to us.

    Will Google be providing a channel for us to correct incorrectly attributed content?

  51. Hi Matt,

    Thanks for the info. I have lost one position for my site, so I have deleted some articles that I published on my website with Project Syndicate’s authorisation. I had thought they would be useful for some of our readers as good quality data.

  52. @Will: completely agree, but the problem with w3schools is that too many idiots and newbie programmers find it #1 in search results and then link to it from other places (forums, Q&A sites, etc.), which keeps it ranking #1… it’s a vicious circle.

    @MW Adam: it’s 0.5% of *searches* that are changed in any significant way. Maybe that doesn’t sound like much but when you have at least half a BILLION searches per day that’s still several million search results improved every single day. That’s definitely worthwhile.

  53. This is great news, Matt. I’m so glad to hear that Google is finally taking a stand against those who scrape and aggregate content. For those of us who actually take the time to write great content, it is finally good to see sites being rewarded.

  54. Great news, Matt. Overall, I think this speed of response against, say, the DecorMyEyes fiasco (of what, about seven weeks ago?) is great work by you guys!

    Jeff Yablon
    President & CEO
    Answer Guy and Virtual VIP Computer Support, Business Change Coaching and SEO Consulting/Search Engine Optimization Services

  55. One of the most frustrating things about the SERPs recently has been the huge swell in the number of fake Q+A websites where millions of pages have been auto-generated as optimised exact matches for questions – “How do I {VERB} to {NOUN}?” – these provide a poor user experience, and are plastered with CPC ads where the answer should be.
    It’s refreshing to see Google respond definitively to what has been a serious issue for a long time, but I worry about the timing in the same week as the DM IPO.
    A warning to others?

  56. I am so glad to see this taking place. There are so many autoblogs out there that make absolutely no sense whatsoever. Keep up the good work.

  57. Rick,

    Perhaps the original source of the content does carry more weight now, but it still will not trump the authority factor when it comes to the SERPs. I think the point of this update was to get rid of those content scrapers more than it was to give credit where credit SHOULD be due.


  58. It’s been a crazy week for the SERPs; when can we expect the new algo change to stabilize? Also, what’s the recourse if someone is ripping your content but outranking you?


    Josh Ziering

  59. The issue I can see here is identifying where the original copy was posted.

    Will this be a big boost for sites like EzineArticles, which regularly have their content published elsewhere? I’ll be watching the Alexa stats.

  60. I already saw a bunch of people complaining in forums about this. I’m all for this.

  61. The best part of this post is the fact that Jason came in and defended Mahalo before anybody even said “wow, I wonder if this change was made to combat mahalo/aol/demand media type sites?”

    It’s almost like a little kid randomly running into the kitchen and yelling “I didn’t do it, Mom!”

    Which makes me wonder, Jason: when you say “We knew we had to be able to say with a straight face ‘this page deserves the #1…’”, are you admitting that your previous efforts were actually spam? It sure sounds like it.

    Honestly though, I’m glad this change was made. Perhaps I no longer have to create “trap content” (made-up words) to feed to the scraper sites so that I can find and report them all with a simple Google search.

  62. Good. I hope the high-authority site that scraped my content will not rank before me; that means Google should know who published the web page first.

  63. @Scott: I would agree, except that’s only half a percent of searches where someone might notice, which means most people would probably miss it, so the percentage of people who would notice is probably a lot lower than that. It also doesn’t indicate whether the user would notice an improvement or a decline in quality. If you factor out those who notice a decline, is the net improvement really big enough to even comment on? It just seems too small, even with billions of searches (which is what I was referencing with economies of scale), unless we’re supposed to be reading between the lines. And if they’re targeting MFA-style sites, then why not just quietly roll this out, let the MFAers gradually lose the traffic and interest in running AdSense on low-grade sites, and be done with it?

  64. Awesome to hear Matt, looking forward to the changes. Thanks for the link to Jeff’s blog as well, good find for my interests.

  65. Hmmm… I wonder if Jeff Atwood brought up this issue after I voiced my concerns a few days ago about Stack Overflow copycats’ high rankings on Google.

  66. When will these algorithm changes be implemented worldwide? In Romania many spam sites still hold the same top places for a lot of keywords.

  67. Suppose one operates a classifieds website, allowing users to post their items for sale with photos and detailed descriptions. How can you really control what other sites the user has posted that information on, and whether or not your site is the first one the user posted on?

  68. Jill Whalen

    You wrote:

    “I have many articles where I have allowed other sites to run them, but I would still presume that my original article would be the one that shows up in a relevant search. It seems this is not always the case (even with your new algorithm… assuming it’s in place).”

    Please carefully study the Google Webmaster Guidelines under “Duplicate content”:

    “Syndicate carefully: If you syndicate your content on other sites, Google will always show the version we think is most appropriate for users in each given search, which may or may not be the version you’d prefer. However, it is helpful to ensure that each site on which your content is syndicated includes a link back to your original article. You can also ask those who use your syndicated material to use the noindex meta tag to prevent search engines from indexing their version of the content. ”

    I hope this helps.
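    For anyone following that guideline, the noindex tag it mentions is a one-line addition to the `<head>` of the syndicated (republished) copy of the page — not the original:

    ```html
    <!-- On the republished copy only: keeps search engines from indexing
         this version, so the original article ranks instead. -->
    <meta name="robots" content="noindex">
    ```

    Combined with a visible link back to the original article, this is the guideline's suggested way to keep the syndicated copy from competing with your own page.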

  69. Matt Cutts, it may be related to this link:

  70. All I have to say is: it’s about time! Scrapers are the bane of my existence; I discover a new one stealing my original content every single day. How is Google going to be able to determine what the “original” source of the content is?

  71. Either we have fixed masses of our clients’ websites for the better, or this is the best update I’ve seen in a long time. We noticed some movement in the search results but attributed it to other sites above ours dropping off. Clearly they fell foul of this update. Considering that all our clients are squeaky clean, I think this was an excellent move. At least this update improved things straight away, whereas the May Day update ruined nearly all searches until it settled in, or got changed!

  72. I have a travel site with accommodation and tours, and I can’t really change the details of the original tour, so that part is copied. I try to add more useful info around it and link to useful info. Does Google penalize travel sites?

  73. Cheers for the update Matt! Good to hear that Google is taking action on scrapers and giving a possible boost to those who take the time and effort to create their own content.

  74. Does anyone have any actual examples? I have not seen any changes whatsoever. Is it only happening in the USA?

  75. That is a good move from Google. It needs to keep a check on copy-paste bloggers.

  76. Is this affecting sites that have copied, say, 80% of others’ content, or how does it work? Because I assume that just having similar content to another site won’t be punishable. Also, won’t this bring down all the article directories even more?

  77. Glad to see Google continually changing the algo. Hopefully this puts an end to the rash of recent attacks on the quality of Google search.

  78. Aussiewebmaster

    Can we name this change Taco Bell?

  79. Great news. For those of us who put a lot of work into creating the material for our site, it’s nice to know we might start getting a reward for our efforts.

  80. Thank you for doing this 2011 Quality update. Quality relevant search results, is what everyone wants, and you are now helping to deliver.

  81. I am a Sydney based website designer and Google is paramount to my company’s success. Every change to Google lately seems to be a good one. Thank you for your eternal struggle against spammers and your infinite tips on SEO. You are a legend Matt.

  82. Fantastic news. I started using Google in the late 90s, because it could find me the best information. In the 2000s, I was disappointed by more and more garbage search results (where the same information was regurgitated on scraper sites and appearing in the results, wasting my precious time). As a creator of original content and a Google user, I applaud any effort to make search results better.

  83. I like how they seem to be attacking the ‘curating sites’, and yet have about 500 different services providing access into their data APIs (YouTube, blogs, search, videos, pics, custom search), which give you insane access to limitless sets of data, info and content I can ‘chop up’ to basically provide a new UI on not just Google services, but all the social networks, everyone’s Q&A feeds, forums, directories, product databases, reviews, local business info, etc. Their entire growth model is dependent on wide adoption by the development world at the enterprise and consumer level. Look at their Chrome app store: ‘webapps’ (many seem to be glorified bookmarks to websites) are being pushed aggressively. Chrome extensions are another access point directly into the Chrome browser; wonder what tidbits of data mining you can do there. Android? Don’t get me started. But their Google Apps Marketplace, with reasonably priced enterprise/small business apps built on the Google platform, can be a real threat to Microsoft and maybe even Oracle, which might be behind their attack on Google. Google has actively perpetuated the ‘syndication’ of data while denouncing ‘scraper sites’, yet it basically behaves as a giant structured database of information and data freely available to ‘scraper sites’ in a pretty CSS theme.

  84. Love the idea of reducing search spam, but could some companies be caught in the crossfire? My industry, promotional product distribution, relies on product descriptions provided by manufacturers. It is not feasible for a small or medium-sized company to re-write 20,000 product descriptions.

    Will my site be penalized as a site scraper because my product descriptions match the descriptions on many other pages?

  85. Cool. It would be useful to understand a little bit more about proposed future changes. Giving original content writers higher priority seems only fair; I can’t see how you can argue against that…

  86. What if a website has both original content and canned content that is on thousands of sites? I do not do this, but I know of companies that do.

  87. OK, not so happy you deleted my comment… but THANK YOU! It’s back. I think I’m going to update it with a Photoshop version.
    THANK YOU SO MUCH. You’re wonderful.

  88. One thing I don’t understand about Google’s aggressive spam fighting is…
    how is it possible that once in a while Google Calendar reminders sent to my own email get flagged as spam? What’s up with that?

  89. Can we name this change Taco Bell?

    Why would we do that? Is the change going to take input from webmasters and output diarrhea?

  90. I’m kind of wondering if this will affect sites like mine. I’m kind of a latecomer to an e-commerce sector. So, in effect, if I’m selling the same things as my competitors, would I have to change the descriptions to something other than what they have? Not that I’m copying word for word what they have for a description, but there are only so many ways to describe a 15-inch plunger. So would that mean I would get knocked down in the rankings?

  91. That’s great news, but there is still the concern of who will actually show up in the search results: the good guy or the bad one. I hope for the best.

  92. Matt –

    Will this affect real estate sites that are allowing Google to index property listings from the board of Realtors? Many agent and broker sites in the same market are allowing Google to do that, and they basically all have the same information. They might have 100 unique pages, but thousands of the same real estate listings other sites have. Even big sites like Trulia and Zillow are doing it as well. Can that hurt a site’s overall rankings?

  93. Am I stupid, or is this a relatively easy fix? Presumably every site has the opportunity to present itself to Google’s spiders to be indexed. So why not make your site auto-submit every time new content is added? Then in Google’s database your content is timestamped as being the first to appear on the net. When someone scrapes your content, it’s timestamped at a later time and automatically shows up lower than the original in ANY search result where the two sites come up.

    Also, we should absolutely make a distinction between reporting and stealing. Any blog or news outlet that excerpts part of an article with a link back to the original is OK by me. News should propagate. But in any event the reporting service should not rank higher than the site with the original article – EXCEPT in very specific situations.

    For example if I have a page which lists excerpts from a dozen sources on how to refinish a cabinet. Say I crawled the web myself and picked the best ones which are most helpful and accurate then I think it’s not only possible that I could outrank any one of those original articles but I probably SHOULD outrank them. If that “aggregate” page is then part of my woodworking tools website then so be it. I am providing useful content that fits the need of the viewer by which they can complete their task and hopefully they will buy some sandpaper from me for providing them that information.

    Scraping for link-juice and diversions to viagra sites? Bad. Scraping to provide more paths to good info? Good.

    Tough job for Big G to discern which is which.
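
The auto-submit-and-timestamp idea in the comment above can be sketched as a toy first-seen index. Everything here (the class, the hashing, the timestamps) is an illustrative assumption, not a description of how any search engine actually works:

```python
import hashlib
import time

class FirstSeenIndex:
    """Toy index that timestamps content the first time it is submitted,
    so later copies of the same text rank below the original."""

    def __init__(self):
        self._first_seen = {}  # content fingerprint -> (url, timestamp)

    def submit(self, url, text, now=None):
        """Record a page; only the first submission of a fingerprint wins."""
        now = now if now is not None else time.time()
        fp = hashlib.sha256(text.encode("utf-8")).hexdigest()
        self._first_seen.setdefault(fp, (url, now))
        return self._first_seen[fp]

    def original_of(self, text):
        """Return (url, timestamp) of the earliest submitter, or None."""
        fp = hashlib.sha256(text.encode("utf-8")).hexdigest()
        return self._first_seen.get(fp)

idx = FirstSeenIndex()
idx.submit("blog.example/post", "how to refinish a cabinet ...", now=100)
idx.submit("scraper.example/copy", "how to refinish a cabinet ...", now=200)
print(idx.original_of("how to refinish a cabinet ..."))  # the earlier URL wins
```

The obvious weakness, which later commenters also raise: an exact hash only catches verbatim copies, and a scraper that happens to be crawled first could still claim the earlier timestamp.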

  94. I have a blog, and its feed is used by many top-rated websites like FriendFeed, Technorati, MyBlogLog etc. So who will rank well: my blog or those high-authority sites?

  95. Hey Matt & the Google spam guys – absolutely love this change. At ROI we put heaps of time and effort into providing content that we think people want or will enjoy reading. If it’s not useful we don’t write it – I’m looking forward to seeing those sites that work the same way as we do ranking up top, above the rubbish. After all, it’s about finding quality info online – so many people lose sight of this fact! 🙂

  96. This is long overdue. Content should always have been the main focus, not backlinks and all the other nonsense that is so easy to manipulate. Kudos to Google!

  97. Content online has been really stale lately, with the same content being spread around different websites. It would be really nice to see some fresh new topics and content coming around. But it makes you wonder: is this the end of article directories? Who holds the original content there?

  98. Lots of news coming from Google this month. CEO switch, Page Rank boost, content farm torching…

  99. So happy about this change Matt. I’ve seen plenty of sites performing very well in the SERPs where all they do is scrape content. Seeing people crow about their AdSense revenue and good rankings from Google for these ‘auto-content’ sites had left me wondering about how committed to quality Google was. It’s nice to finally see some action, not just words on a policy page. 🙂

    Like others have asked, I’d like to know in more detail what constitutes a content farm. I would consider eHow and wisegeek content farms but do they fit Google’s description of one too?

  100. I think this is good for Google and good for the user! Users get good results, so they will keep using Google.

  101. Really great news, Matt! Thank you.

  102. This change will definitely bring positive changes on the Internet.

  103. That’s definitely good for webmasters who work really hard for them to earn reputation on the internet. This would definitely kill autoblogs and content spinners if the algorithm is good

  104. wow there’s a lot of changes going on in google, wonder how it will effect us, the webmasters. Hope none of my sites get penalized:D

  105. We have seen that many websites have copied our content, so it sounds like there will be serious and strict action from Google on copied content.

    Looking forward to your response.

  106. Hi Matt,

    In Holland there are many websites that copy the original source and use it to rank cheaply in the search results, and unfortunately they have success.
    In August I started a post about a copy case.

    And Barry Schwartz discusses it in his Video Recap of Weekly Search Buzz

    Good luck with the fight against spam

    Best regards,

    Imre Bernath

  107. Thanks for the update Matt. I hope this change will be a positive one.

  108. Well hopefully it’s improved soon! A search for “pet store” returns: Java Pet Store
    The Java Pet Store Demo is a sample application from the Java 2 Platform, Enterprise Edition (

    Oh, exactly what I was looking for! Not.

    Perhaps I will try the link below it.

    Pet Store – Blizzard Store – Blizzard Store
    Blizzard Store » Collectibles » Pet Store. 10 product(s) found. 2 1. World of Warcraft® Pet: Lil’ Ragnaros. $10.00. World of Warcraft® Pet: Moonkin …,c:33

    Again, another great search result about what I was searching for. Not.

    Overall impression? Of course google is still the best, but disappointed with new search results.

    People who type in “pet store” don’t want to play WoW or develop with Java.

  109. Hi Matt,
    Will you release a video about this, with a full explanation and details of the latest changes?

  110. As others have echoed, how do you determine who copied whose content? The update looks good though; I’ve noticed the spammier and less useful sites are ranking lower. Good job. 🙂

  111. Thanks for this update. Not!
    My original articles disappeared from the SERP, and all the scraper sites and aggregators are ranking in top positions, even when searching for “[article title]”, no matter that some of them link back to me (source).

    Same search on Yahoo and my article is on 1st position, followed by sites with high authority. How come Yahoo manage to spot the difference between original content and scraper sites?

    It is very frustrating – and Matt, you know what’s even more frustrating than not seeing your own work in the Google SERP? NOT being able to talk to someone from Google about it. I’m sick of automated replies from Webmaster Tools and the like.

    Good job!

  112. Glad to see this change come into place. Made the mistake of posting content on my site to several article directories. Google still sees them as the original source.

  113. that’s brilliant, when i created my blog, i was wondering about this and on 26th i added a related question on google forums
    Now if a blogger knows that his eforts are going to be appropriately rewarded, he / she will get most out of his / her results

  114. Hi Matt. While I generally agree with you, there is also something called branding.

    Sometimes I go and read from website A not because they produce the most original content but because I like to be associated with them and the look and feel of their site. Please don’t forget this. Some authors may write the best articles, but it’s a pain browsing their site.

    Anyway, thanks for the update.

  115. I hope the algorithm improvement will give us better results in the SERP; I was very bothered by the recent conditions, with so many auto-generated content sites. I’m looking forward to the newest update, Matt.

  116. As much as I want to believe this will work and help reduce spam in the SERP, I strongly feel this will cause more harm, as authority sites with a much longer online presence will still bury the smaller sites, whether they own the content or not!

    If this works, then SEO will work!

  117. I think it is excellent, but wonder, are you able to nip article spinning too? I use google alerts to follow the competition and realize that there is a ton of spun content out there.

  118. When does this algorithm launch?
    And what about the effect on sites that have been spammed?

  119. I have just noticed all my sites going down about 30 positions, although they were built with very legitimate link building and have decent content, so I cannot say that the change seems to make much sense to me!!

  120. Thanks for the update Matt. Even though it targets a relatively small number of sites, the change strengthens the direction by articulating a ‘policy’ about what the algorithm values.

  121. This is great news for those dealing with SEO. It will definitely change the world of SERPs and bring positive results. Hope this will work soon…

  122. Looks like a nice move… let’s clean up that mess. Those spamming websites got some of my sites banned by Google, and there has been no response from Google about it…

    Clean up all that mess so my sites aren’t so hurt.

  123. Good news for all, but not for the sites that copy content.

    I noticed this year that half a dozen sites have started auto-generated pic sites, pretending to be picture search engines, when in fact they are nothing more than ‘ad’ pages with no original content. Row after row of graphic content from other people’s sites, including my own.

    Being an electrical engineer, my hardest search is trying to find an outdated component data sheet – a PDF for a transistor, for example. The SERP is full of auto-generated pages from many different sites that seem to provide that data, but most don’t. Many just recopy the original PDF from the manufacturer (three ad-filled pages deep).
    In reality they are all copies of each other.

  124. Great news, matt!
    I just wonder about the social site..
    I hope social site backlink (eg. twitter, facebook, etc) is “more worthed” on the new algorithm.
    World is changing.. its social site era.. (I think..)

  125. Re: real estate websites. Listings are typically published first in the MLS and then syndicated to 100s of sites, including our own. Do we lose the juice and authority that comes from our company’s 10,000 listings to the MLS?

  126. There is a constant battle going on between the creators and the ones who take, spam, and hack.

    Thanks for keeping us updated.


  127. This is good news Matt!

  128. Excellent, though like Jill Whelan I would be interested in some info on how this affects syndicated content (I am the in-house SEO for a major publisher and have more than a few sites whose content is republished).

  129. This is good news, as it should be. I have two more questions:
    How can Google recognize original content versus copied content? And when will this be added to the Google algorithm?

  130. This sucks… I’m still not clear on the way they rank pages. One week ago my PageRank dropped to 0, and all my content is original. There is actually a spammer page that copies my content from my RSS feed, and he ranks higher than I do.

  131. I work for a manufacturer. We supply businesses with products to sell in our niche through our datafeed, to help them get their sites up. The datafeed contains our content, so our clients have the same product descriptions that we have. (We also sell retail.)

    Is this going to hurt us, since there are other sites using our content, rendering our content non-unique?

  132. I’d like to make sure two cases don’t cause trouble:

    1. We changed CMS’s on one of our major sites. All the post dates got updated. This site is scraped *heavily*. I could send DMCA notices all day every day.

    2. We repost older posts on some of our sites (on the same web site, at the same URL). I’d hate to see the scrapers suddenly be seen as _older_ than some of our best posts.

  133. The Google rankings are not fair, no matter how much tweaking is done. The good guys will finish last unless Matt can tell me how to create and recognize unique content related to a specific topic. Create unique content, he says… but how can I create unique content if I don’t know whether it is unique relative to what someone else has written? What is more important to rankings: allintitle, allinanchor, or allinbody? I’ve seen many of my websites rank higher due to strong allinbody rankings. But how do I create unique content – where am I going to get it? I can only scrape bits and pieces together from other websites on the Internet. I don’t have the money to create it out of thin air. I see Google now punishes scraping too. What can we do? We can’t do anything anymore. Everything is becoming hell.

  134. We write the content ourselves, but our English is not so good. Could that get us regarded as an article farm? Thanks!

  135. If this means leaving out aggregators, it’ll be a disservice and cut down usage of Google as a research engine…

  136. Yes, I think I notice it, now pages which were purposely built to rank low are out-ranking pages which have copied the content to supplement the vendors sales material.

    Good thing some people sell the product, it is too bad that the people who do not have a sales force now outrank those who supply the suppliers with their product, for resale.

    Doesn’t seem too bad… doubt I will notice traffic changes.

    So why does a webpage that resizes the browser window on landing get a better ranking than a webpage that is formatted for any browser window size?

    Surely this has to be an indication that the page is a pop up window.

  137. Hey Matt Cutts, that’s a good idea, but in reality this will bite Google’s butt down the road. There are far too many lyrics sites, among others, that duplicate content without the intention of being an autoblog or auto-content-generation site. The algorithm update will in reality hurt the integrity of the Google search engine by misinterpreting real sites as auto-scraped content, because the fact is, for now, there are more dupe-content sites that don’t have the intention of spamming Google.

    I love the idea, of course, but it has too many negative side effects that far outweigh the possibility of removing spam from the SERPs.

  138. Dear Mr. Cutts and Mr. Page,

    I would like to make a suggestion to Google, which I personally feel is one way of looking at solving the problem it has with spam and rank manipulation. The Google algorithm is facing strong headwinds.

    May I make a suggestion: every, say, 500th viewing of a search query, a specific website gets a chance to move to the very top – the #1 position. Then Google starts taking note: how long did people spend reading the page? Did they click through to another page, or did they back out? Based upon a few indicators such as these, every web page gets its moment in the limelight. The longer people spend on the page, and whether they click on a link on it (which means it engages their attention), influences ranking, and that link should then get some credit. No manipulation can influence the time a person spends or their responses: people are either satisfied with what they read, or they back out of the page, or they proceed to click on a link. User attention and user behavior, measured in time and responses through, say, Google Chrome – rather than trying to put a value on a page (the algorithm, I understand, can’t measure user responses to pics, images, sounds and good writing) – should become part of ranking. That’s my suggestion: make Google Chrome reactions a larger part of ranking. Give a website the limelight and start measuring audience responses. Good luck in any case. The other possibility I thought of was to create unique user IDs: Google creates a login for everyone, everyone publishes their content through their login, and that code is attached to the page as a meta tag.

    Contact me if you have any comments or questions about these ideas.

    Wynand Meyering.
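
The “limelight plus dwell time” proposal above can be turned into a toy scoring sketch. Every name, threshold, and weighting below is an illustrative assumption layered on the commenter’s idea, not a real ranking signal:

```python
from dataclasses import dataclass

@dataclass
class PageStats:
    """Aggregated user reactions to one page's trial appearances at #1."""
    impressions: int = 0
    total_dwell_seconds: float = 0.0
    clickthroughs: int = 0  # visits that clicked a link on the page (engaged)
    bounces: int = 0        # visits that backed out quickly

    def record(self, dwell_seconds, clicked_through):
        """Log one trial viewing of the page."""
        self.impressions += 1
        self.total_dwell_seconds += dwell_seconds
        if clicked_through:
            self.clickthroughs += 1
        elif dwell_seconds < 10:  # assumed bounce threshold, in seconds
            self.bounces += 1

    def score(self):
        """Longer mean dwell and more clickthroughs raise the score;
        bounces lower it."""
        if self.impressions == 0:
            return 0.0
        mean_dwell = self.total_dwell_seconds / self.impressions
        ctr = self.clickthroughs / self.impressions
        bounce_rate = self.bounces / self.impressions
        return mean_dwell * (1 + ctr) * (1 - bounce_rate)

page = PageStats()
page.record(dwell_seconds=120, clicked_through=True)  # engaged reader
page.record(dwell_seconds=5, clicked_through=False)   # quick bounce
```

The obvious objection, raised elsewhere in this thread, is that click and dwell data can itself be gamed by bots, so a real system would need the same spam defenses this post is about.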

  139. So… if a page that had duped/scraped content on it deletes it and adds unique content, would that page regain its original rank?

  140. I have a unique-content website. There is absolutely nothing on my site that was copied from someone else, including text and images. One of the reasons I made this website was to see if search engine companies are telling the truth. Some of the pages have poor diction, which would be typical for someone who has a disability or who is not a professor of English. I see Google Search is actually interested in getting fresh content, because it is spidering and indexing much better and faster than any other search engine.
    BTW, I personally see Google Search and Google AdSense as completely separate entities. I have great respect for Google Search but not much for AdSense. I hope that the Google Search people will be strong and not allow accountants to tell them how to run their search engine.

  141. Many people make free translations from English into local languages without quoting or linking to the original material. It is duplicated content with different words, and everyone is competing to post it first and make it ‘unique’. I do not think Google can detect such pages as duplicated content any time soon.

  142. While this is a great improvement, what about sites that publish fiction, lyrics, or recite from, say, the text of a Michael Jackson song or the like? How do you evaluate content that’s been duplicated just because there is a fan base that shares it?

    What are your thoughts on this? Thanks!

  143. Yay! I am so tired of splogs stealing my original content. I know because I get a Google Alert on it. When they start coming up before mine do, it is annoying and frustrating.

    For a while I tried to track them down via WHOIS, call them up, and email them, to no avail.

    Looking forward to this update !

  144. @Matt,
    I was wondering why you had to specifically announce this change; you certainly don’t announce all algo changes, not even major ones (like you came out with the nofollow no-sculpting-benefit announcement around a year after it was implemented – in your own words).

    No wonder FUD works wonders where the algorithms fail!

  145. Good news, another step forward to cleaner results. Anyone care to guess what’s next for the spammers though?

  146. Thanks for the tip on the recent algorithm change. There have been a number of times I’ve found “word for word” paragraphs of mine on other sites with no link back to my site, and in a few cases they took the entire article. I was equally angered and flattered. It seems that with a finite number of topics and a limited number of ways to spin the info, actual original content will be relatively hard to come by.

  147. This sounds excellent, but I am more interested in hearing news about Google dealing with link farms. I have started to see several sites jump to the top of page 1, beating out long-established authority sites.

    When doing a link analysis on these new sites, the sites are linked in almost exclusively by 100+ other sites with little content and 8-10 non-relevant spammy links at the bottom for other unrelated companies. A few organizations seem to be generating massive link farms for large groups of pages and are dominating the results for practically every category.

    When can we expect to see these kinds of tactics dealt with by Google?

  148. Hi Matt,

    Really very good. This way we can restrict web spamming, but I have a doubt: if two sites have the same content, which site’s content is considered the original?


  149. As the editor of a news and reviews site I welcome this change with open arms. Our content is scraped mercilessly, it is an impossible task to try and prevent it. Even worse, many of these sites are offshore and flagrantly ignore requests to cease.

    I do have some questions. My site is part of Google News. We do not publish syndicated articles (AP or Reuters for example), but many organizations do. How will this change the Google News world?

    In fact one of these days I would love for you to put an article or two together on the whole subject of Google News, SEO, ranking, spam, etc. Although it is a relatively niche part of Google, it is a subject that affects thousands of sites. That is particularly true in light of these new changes.


  150. “This site is scraped *heavily*. I could send DMCA notices all day every day.”


    Same here, and all these years I’m thinking Google would figure out the correct source. Apparently not. So these scrape sites can erase years of your life off the Internet (at least in Google) due to this algo update, while they reap the benefits of years of work.

    Almost makes me want to join Facebook 😉

    Because of this algorithm update I’m considering dropping my RSS feed and switching to a paid newsletter format. (Bad news for Google if everyone has the same idea.)

  151. I have to agree something needed doing but I am not sure how this is going to clamp down on it.

    Are article directories going to get hit? They are in essence all duplicate content…

    I have a lot of sites where I have pulled articles from other sites and left all the links and resource boxes intact (as per the terms of the writer). Are these types of sites going to get slapped?

    I will be keen to see how this is going to affect bloggers around the globe, but I am sure it will work out the way Google intended.


  152. Thanks for the update, Matt; it’s great that Google is fighting web spam. Lately a lot of spam websites were ranking better than legit ones. I also saw several websites with scraped content ranking better than the content’s authors :/

  153. In concept I think the idea is right, and hopefully google will continue to refine the code / algo to actually achieve this. I noticed significant changes in the top 10 of some phrases I follow closely. And the sites that dropped were not ones that you’d anticipate dropping from this change.

  154. I personally think this is a great move. From the perspective of users, who wants spam, and what quality does it offer? From a content production point of view, most authentic businesses and content producers should be able to produce something that’s rich, engaging and their own!

  155. Wouldn’t a review of AdSense and it’s abuse be appropriate here? Although it’s probably a mammoth task, surely it would be possible to relate some of the SPAM detection department to the databases you keep on AdSense publishers. You could then close as many of their accounts as possible. It might at least send a signal about the quality you expect from your publishers.

    An even better step to take would be to review publications before allowing ads to show on them. Probably an even larger task but it’s what many of the smaller ad networks do. I think their focus is mainly on protecting their advertisers from deceptive affiliate marketing tactics but it could be used to protect the quality of search results as well.

  156. Hi Matt –

    How does this impact content syndication which isn’t scraped but rather syndicated from site to site, i.e., real estate listings to Trulia, Zillow, Realtor and then posted on our own sites. Will the highest domain authority win?

  157. Does the new algorithm reduce corporate spamming as well? Many corporations have been launching 10-20 thin-content websites using the format “brandxxxxx dot com”, because only they can use the “brand keyword” in their URL.

    The current situation often results in Google giving these sites 6 out of 10 spots on the first page of organic results, and 2 of 10 spots on the second page. The improved Google algorithm can be smarter in countering corporate spam websites, right?

  158. An awesome initiative, if it worked as intended. Like many other commenters, my perfectly legitimate blog has disappeared from the SERP for no apparent reason, until I found this page. Could you please let us know how to get our work back in the SERP, and would you please also send a message in Webmaster Tools when a site is hit by this spam hammer?


  159. Does this change the way Google passes “link juice”?

  160. Probably a small step along the way to fighting spammers. But will it really make a significant difference? For all of us with good content websites I sincerely hope so, but I think there will be many more battles to fight in the future. It seems to be a never-ending problem.

  161. Most excellent! I have had so much trouble with content farmers on simple things like finding information about a used fridge. Anything that can help produce the real result, instead of a link that leads to some lame site that says “what were you looking for?” …
    Thank you! (times 1000, but I’m not wanting to type that much)

  162. Matt, this is a change I welcome with open arms. As someone that is responsible for editing user submitted data, I find that unique and compelling information is rarely used. While that’s great for me (keeping me employed), it is not good for the web and search results. Thanks for informing the webmaster community of this change.

  163. @San Nayak, check out the post on Hacker News; an issue was raised there with regard to your comment about how this change will affect scrapers and aggregators. A lot of useful info in that thread.

  164. That’s cool Matt, but does it cover spinned content? Come on straight out scraping is like 5 years ago.

    Google is so sad. I want to be good but you punish me for it. I want to concentrate on creating great content but Google punishes me for focusing on content and not some black hat seo.

    I honestly feel Google is killing the creative spirit of the Internet. So many crappy BS sites sit at the top of the serps. Yes I report my competitors spam and your reporting tools are a black hole.

    The only possible explanation is you really don’t care matt. You just don’t care about anything but money.

  165. Matt, your Google update sucks. I now have another site ranking higher than me for a unique sentence of a unique article that I wrote. It links to my page and copies the first 3 sentences.

    I think you should check that your search update works correctly when a website has been redirected to a new domain.

  166. Sounds great, but how do you find out which site has the original content and which is a copy of it? So are the article sites and news sites gone?!

  167. Matt: I was thrilled to hear about your attempt to eliminate content farms from search results. Keep up the good work.

    My #1 frustration with google search in the past, however, is not with content farms (e.g. the people who mimic wikipedia or stackoverflow content) but is with online stores and reviews. Suppose that I want to buy some product, say, a new flash drive. And suppose that I know the exact model number, such as Crucial’s RealSSD C300. At this point, what I want is good hard hitting reviews for this drive from decent technical websites. I usually found that if I search for keywords like

    Crucial RealSSD C300 review

    that what I get is a bunch of online stores, not genuine review sites, which is very frustrating.

    Did the changes that you mention cover cases like this too? I ask because when I did a search with the keywords listed above, I now get a lot of genuine review sites on the first page, which is hugely encouraging. Not believing my luck, I tried a search for review sites for a specific Samsung LCD TV model. Again I got many genuine review sites. Hope that you have solved this problem…

  168. I think this is a great change, I was noticing this a LOT lately. Matt, how do we report more of such instances? Since last 1.5 years I have been trying to report in ‘Give us feedback’ section of the bottom of Search result page but obviously no change yet.

    I know it is tough for Google to hear every possible report, but I have seen about 500+ sites with 100% duplicate content, adding no value, ranked higher solely based on keywords in the domain, with practically no other SEO element or reason.

    Will appreciate if you can create any such mechanism… at least let me know how to report one such instance for now 🙂

    Thank you!

  169. But what about product description pages for software, eBooks etc.? They are usually the same on all pages…

  170. Thank you for your effort. I am looking forward to seeing this change live, since my original content is currently on thousands of thieves’ sites…

  171. A nice step forward….

  172. this is a great change, and I hope social site backlink (eg. twitter, facebook, etc) is “more worthed” on the new algorithm of Google.

  173. That’s really nice stuff… 🙂

  174. > slightly over 2% of queries change in some way, but less than half a percent of search results change enough that someone might really notice

    I’m still seeing machine-generated scraped-content splogs cropping up in Google Alerts. Do alerts count as search results in this half a percent number?

    Spam in Google Alerts is worse than spam in the regular search results.

  175. it’s about time

  176. That’s good news. There are so many sites which are posting duplicate content but have mentioned resource link from where they copied, so is it right way? or It will effected by new changes?

  177. Hello Matt. Love this update as I have been a victim myself of those spammy sites.

    Having said that, I wanted to ask you the following: I own a site where website owners want their content to be duplicated; it is one of the top RSS feed directories. It has been banned as well.

    Any idea how to proceed with directories like these, where website owners list their RSS feeds? Or are these types of sites no longer allowed as well?

  178. What qualifies as low levels of original content? Should we be beefing up our product descriptions and page content? If so, to what threshold?

    Or do you mean sites with few original pages?

  179. If I wish to add professional articles from other blogs, and I give credit to the originating website, will I be hit by this algorithm change?

  180. I am happy about this decision, because clean content will be worth more in the future. But there are tools out on the web that change original content into similar phrases… You can set how strongly the original text should be regenerated into new sentences. I think those tools will be worth much more in the future.

    It will be interesting to see how the rating happens. Which site is the original one? There might be content verification on the basis of a timestamp. Perhaps websites built on blog software like WordPress might get a validating tool.

  181. Anyone else notice “more worthed” popping up twice here? What does that even mean?

  182. How does this affect e-commerce sites that use manufacturer product descriptions?

  183. Does Google even know where ‘original’ content is first posted and always attribute it to the original site? What if a scraper is indexed before the original for some reason, will that devalue the original content?
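
Several comments ask how a search engine can even tell that two pages are copies of each other. One standard, publicly documented technique (word-shingle comparison, independent of whatever Google actually does) measures the overlap of word n-grams:

```python
def shingles(text, k=4):
    """Return the set of k-word shingles (overlapping word n-grams)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def resemblance(a, b, k=4):
    """Jaccard similarity of shingle sets: 1.0 = identical, 0.0 = no overlap."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "there are only so many ways to describe a fifteen inch plunger"
scraped = "there are only so many ways to describe a fifteen inch plunger"
different = "a completely unrelated page about refinishing a kitchen cabinet"

print(resemblance(original, scraped))    # 1.0 for a verbatim copy
print(resemblance(original, different))  # 0.0 for unrelated text
```

Once two pages are known to be near-duplicates, deciding which one is the original (crawl date, link patterns, and so on) is the separate, harder problem this comment is asking about.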

  184. I wonder if this gives great bloggers some relief on link building; their fresh, original content should reach page 1 sooner than before, without much promotional effort. Another issue: a less frequently crawled site’s original content may be scraped by some crooked WordPress blogger with changed dates. Does Google have the eyes to detect this when the cache is more recent for both?

  185. I’m glad to see you guys working on this. Scrapers are the worst.

  186. That’s good news for everybody who puts a lot of effort in writing good articles! 🙂

  187. Hi Matt, we run over 1100 sites for independent community pharmacies in the UK who have neither the expertise nor the time nor the money to produce and run sites themselves; they are already under huge pressure from changes in our health system and government spending cuts. Health content is impossible to write uniquely for so many sites, as it has to be approved and reviewed every 6 months to comply with best practice etc. We provide them with our own content, which has a good reputation, as well as (officially) syndicating content from the NHS into their sites. We also provide a white-label e-commerce service. We are not content spammers, but this algorithm change has already had a massive negative effect on the pharmacies on our network, with traffic from Google collapsing by 90% in the last 3 or 4 days. How do we tell Google we are not spammers?

  188. If that’s how G’s gonna play then the following is true. News sites that “scrape” content from Associate Press are bad and Associated Press is bad because it’s a content farm. Gotta apply the rules evenly, right?

  189. Haha, crack down on the spammers! I’m new to blogging, so I didn’t even know scraping existed until recently. It sounds like a huge pain in the butt for both Google and the person getting ripped off.

  190. Apologies if above comment is too commercial – wasn’t meant to be … i guess from conceptual point of view, what i am getting at is this change seems to negate high quality syndicated content – particularly in an area like medical content. It is simply impossible to write reference content about breast cancer or acne or whatever in lots of unique ways. This is people’s health so one mistake can mean horrendous liability issues. At the end of the day there are only so many ways you can write about the description, causes, symptoms, treatment of a condition and when to seek help from different types of health professionals. Again without being commercial, there are best practice and regulatory issues for medical content. You have to provide sources and have content provided by qualified professionals. In the UK If you want to provide on-line pharmacy services you HAVE to provide on-line medical advice as part of your contract with the National Health Service. So where does this leave a “Mom and Pop” pharmacy who wants to provide a comprehensive on-line service to try and survive competition from giant chains with all their resources. Also from an e-commerce point of view in terms of retail pharmacy products you cannot change the manufacturer product descriptions as they all have to be approved by various regulatory bodies. Does the new algorithm simply regard all this as spam? Or does the structure of how the content is held in the site need to be changed?

  191. Hi Matt, last week on Thursday we saw one of our sites drop considerably in the rankings. It is an affiliate site; we publish coupons, and we also write product reviews. The site specializes in covering a technology brand.

    All of the content is unique, hand written and posted by hand!

    The last 3 days we have seen sales go from $10,000 per day to zero, along, obviously, with income. A high-traffic brand-related keyword that we were ranking number 2 for has dropped to page 4, along with lots of other similar results.

    The site is a genuine attempt to provide unique content to people searching for reviews and offers relating to the brand in question. We have two in house staff in the US that look after the site content.

    So the question is, why has your new Algorithm change affected our site and many others like it?

    Nobody likes spam, but when a change that’s designed to reduce spam affects a genuine site offering genuine content, then surely it’s time to reassess what you are doing?

    All the coupon feed aggregators are STILL there: the sites with no content, no genuine well-written copy, no reviews.

    We feel like an ant that has just been rolled over by a bulldozer! And who pays the staff at the end of the month?

    Not good Google

  192. About time …

    Honestly, I could never have imagined that for you to take action, you needed to be told that searchers actually prefer to see the sites that wrote the original content rather than a site that scraped or copied it.

  193. Sounds great on most fronts. However, what is the treatment of sites that publish intentionally syndicated content such as press releases and RSS feeds? And will the links within that content be discounted? Websites like TechCrunch, Yahoo, and tons of online newspapers do this all the time, using content from sites like PRWeb, the Associated Press, etc.

  194. I would assume that the specific change in the algo was in reference to the percentage of spin on any given article, after date-sourcing the original content. I wonder if the percentage of “acceptable” spin was just raised, so the SEOers now have to chase that higher rate?

    I think if Googlebot could find a way to accurately measure LSI elements, it would be a better direction to go in: instead of demoting poor content, reward quality content. This could be done by increasing vocabulary comparisons in the algo, as well as usage.

    It’s software vs. software some days, haha.

    Great post, thanks for the update.

  195. When will we, here in France, have a French Matt Cutts?

  196. Matt, all you have written is excellent in theory, and I can only hope that some good changes will happen soon in Croatian Google, because in reality I haven’t seen any progress for a long time.

    It even seems that the situation has gotten worse in the last two months. In 95% of cases, the sites which take over our content – and give us credit – have a better position in Google search than our site.

    It’s especially frustrating that Google gives us no way to report these issues, which cause damage to authors who try hard to write their own content.

  197. Hi Matt,

    Would this also affect news sites that use syndicated content? Please note also that most news sites use the same provider.

  198. I recently discussed incidents of copied posts showing up on others’ websites with a fellow blogger. Copying without attribution is basically plagiarism, isn’t it? Seems like it is: word-for-word copying, claiming it is their own work, and reaping any benefits (rankings, hits, readers) that rightfully belong to the original author. So, I hope this Google adjustment will help???

  199. This will surely mark a new era in the overall search engine industry. It’s really ironic how Stack Overflow is getting outranked by copycats and autoblogs which solely steal/scrape their content.

    Matt, when do you think we will be able to see some actual results? Or will the fight against blackhatters always be a tough one?

  200. I think this is way overdue. I’m so over having these thieving wombats just copy and paste my copy.

  201. That’s good news! But I wonder if this has been launched worldwide, and especially for Google in our country. SERPs here are really messed up with meta title, description, and content spam.

  202. Very nice. Anyone trying to search technical keywords ends up with a lot of spam recently. Can’t wait to get rid of it.

  203. I agree, Theo. Nice work, Matt and company.

  204. Oh how I welcome this change! It’s nice to see the ones that do the work and provide value come out on top. Nice work Matt & the Web Spam Team!


  205. Hi Matt,

    So, can you tell us when this is going to roll out? What I am curious about is how you know who the original content is from. I have a customer with a blog site, plus my own, and I see pingbacks and trackbacks on WordPress sites that are nothing but SPAM. I am tired of it. But, really, how would you know that my article is the real one? Live or Memorex?
    Thanks, KP

  206. Hi Matt,

    If these series of algo updates do in fact clean up the spam results that we have been seeing for quite some time, then I am all for this. One of the sites that I work on produces useful and unique content but quite often has our content scraped by other sites. We work hard at providing timely, useful and unique content for our audience and while we have seen our traffic from Google slide in the past few months, we work hard on not contributing to all the spam, noise and fluff that is out on the Web. Good luck with this battle against spam. If Google can do their part and if Webmasters can do their part, then perhaps we can clean up the Web a little. From a user perspective, that would be a great thing.

  207. Great news. I have been dealing with a lot of this lately and it is certainly frustrating. Keep up the great work!

  208. @Will,

    I don’t think that Google should do anything about W3Schools; it’s not their problem. This whole approach from W3Fools makes me suspicious; it seems like a marketing ploy from one of the several for-profit sites they “recommended”.

    If people are so desperate for better tutorials, they should go and do the work, and promote the site. If all they want is to whinge about it, they should go back to their caves and quit trolling.

    All W3Fools is doing is trolling. They freely admit they won’t do anything productive, and they also have numerous errors on their own site, but he who is without blame… oh wait… they should put up or shut up, since they aren’t doing anything productive.

  209. Yay, stop spam…
    Great to see a change, but I believe part of the problem is Google’s over-reliance on links.
    This has created an explosion in sites that do and say nothing. Apparently I need to build or pay for links on sites that do nothing but profit from or exist to sell links in order to do well in search.
    These sites do nothing but scrape and spin articles and are useless for most net users.
    It is a Google-driven industry. See below:

    SEO Web 2.0. Revolutionary New
    PR4-PR8 Contextual Link Building Service
    Improve your site rankings on Google with the most quality links on the Internet, which will: blah blah blah.

  210. Finally! I’m sick of those losers who steal my content and use it for their own profit! It was about time!

  211. SEO spammers will have no chance of progress now!

  212. That is a good thing from Google, because nowadays, if we search for information, lots of spam shows up. I can’t even find the relevant information with Google.

    With this algorithm, I hope Google shows people what they search for.

  213. This is a nice update. I have seen authoritative sites rank well for content that they scraped from smaller sites. I hope this helps filter those bugs out as well, so that smaller sites can be rewarded for their strong original content.

  214. I was definitely expecting to see this happening, it is a logical step to combat duplicate content. Nice!

  215. The change is noticeable already. Excellent update.
    This is not a “fight against spam”. I see it more as a “fight for quality content”, or at least a first step in that direction. Hopefully, content farms are next.

  216. Thanks, Matt, for updating us! This change will help surfers get more and more unique content every time.

  217. All my sites seem to have survived the new algorithm… It almost feels as good as a blue ribbon saying your sites have passed the spam test.

  218. Google should use pings to detect the original content: the first URL with the same content that sends a ping to Google is the original. It’s easy 😐
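    The first-ping scheme this commenter proposes can be sketched in a few lines. This is purely a toy model of the idea (not how Google actually attributes content, and the names here are invented for illustration): whoever pings a given piece of content first is credited as its source.

    ```python
    import hashlib
    from datetime import datetime, timezone

    class PingRegistry:
        """Toy model of the first-ping-wins proposal: the first URL to
        ping a given piece of content is treated as the original."""

        def __init__(self):
            # content hash -> (url, timestamp of first ping)
            self._first_seen = {}

        def ping(self, url: str, content: str) -> str:
            """Record a ping; return the URL credited as the original."""
            key = hashlib.sha256(content.encode("utf-8")).hexdigest()
            if key not in self._first_seen:
                self._first_seen[key] = (url, datetime.now(timezone.utc))
            return self._first_seen[key][0]

        def original_of(self, content: str):
            """Look up the credited original URL, or None if never pinged."""
            key = hashlib.sha256(content.encode("utf-8")).hexdigest()
            entry = self._first_seen.get(key)
            return entry[0] if entry else None
    ```

    The obvious flaw, and presumably one reason search engines don’t rely on this, is that a scraper who pings faster than the original publisher gets credited as the source.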

  219. A problem we have on our site is real estate agents posting the same information to our site as they do on other similar sites. I’m concerned this will negatively impact our site’s rank if Google somehow thinks we’re scraping content.

    I certainly support what Google is doing, but not all identical site content is scraped content; sometimes it’s republished by its authors on several sites.

  220. I’d love to see all those fake article sites go. Very good move.

  221. Matt,

    I hope that you or someone else from Google reads this…

    There is something I would love to see from search in Google:

    More control over the results I am getting!

    Today, I can hide some of the hits I am getting. But this really does not help, as most spam sites are great at generating links based on keywords or search terms.

    What I would like to see is the possibility to hide whole domains from the search results. This should of course follow my logged-in user so that it is saved for my next search. This way I can hide the whole spam domain and avoid all hits from it.

    If you made this a kind of social feature where I could make my list public and let others subscribe to it, then this could effectively become a spam-prevention system for Google. And if Google then monitored these lists, you could have a top-100 list where you analyzed what made people block these domains.

    I use Google because I cannot avoid it, but it is soooo much work to find the good stuff among all the rubbish…
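    The block-list feature described above is straightforward to model. The sketch below is hypothetical (not any real Google API): a per-user domain filter applied to a result list, plus the aggregation across many users’ public lists that could feed the proposed top-blocked report.

    ```python
    from collections import Counter
    from urllib.parse import urlparse

    def domain_of(url: str) -> str:
        """Extract the host part of a URL, lowercased."""
        return urlparse(url).netloc.lower()

    def filter_results(results, blocked_domains):
        """Drop every result whose domain is on the user's block list."""
        blocked = {d.lower() for d in blocked_domains}
        return [r for r in results if domain_of(r) not in blocked]

    def top_blocked(user_block_lists, n=3):
        """Aggregate many users' public block lists into a
        most-blocked ranking, as the commenter suggests."""
        counts = Counter(d for lst in user_block_lists for d in lst)
        return counts.most_common(n)
    ```

    Google did in fact ship a Personal Blocklist Chrome extension shortly after this post, and reportedly used its data as a signal when evaluating later quality updates.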

  222. Errioxa, it’s not that easy; we have at times manually inspected items to ensure they’re not spam or bots.

    Realtors post the same information about a home across a number of sites, and now this will look like scraping. We need human verification to resolve the issue.

  223. This is great news. Nothing irritates me more than searching Google and getting 20 results that are all duplicate content from SPAM sites, while the original article/blog is nowhere to be found without looking on page 20.

  224. This sounds good in theory, but what about helpful directories? Some of them are based on duplicate copy in a way, and then they lead the user to the proper website for full details.

  225. Do you believe that now honest content producers will beat the competition coming from content thieves?

    I don’t think content thieves will stop having the same rankings because they have many incoming links. Google cares more about link popularity than anything else. Besides that, there are numerous ways to rewrite a document. Thieves will keep distorting original documents without being discovered.

    Wish this change would really benefit all daily content producers.

  226. It would be great if this algo change filtered out the sites copying content, but I have yet to see any changes in the results. For example, this is a search for a specific line in one of our articles (published here: ); however, all the sites that have either pulled an excerpt from our feed or scraped the content dominate the results, and we are dead last on page 2.

  227. Great work. If this functions as well as I hope it does, it will make sites with original content much more profitable!

  228. A good start, but I hope you are also looking at exact/partial domain matches?

    I can provide examples of keyword searches that return 80% more keyword-rich domains than a year ago. Is this a good move? Some sites are less than six months old…

  229. This is great news! As a small business that publishes a lot of original content, it’s always frustrating when we see our articles copied, reproduced and then appearing in the search results.

  230. Unfortunately, any action has side effects. I’m one of those affected, along with a few others who responded above.

    There is a spam site that copied content from one of my niche content sites. It ranks ABOVE my site when I search for the exact content phrase, with only 2 results being shown: my site and theirs. Nearly all the traffic to my site from Google has stopped. Totally sucks.

    Also, let’s consider the case of a real estate or classifieds web site. These types of sites by default have nearly exact content, because it is syndicated from the MLS, or because users post the same content to multiple sites while they are attempting to sell something.

    How does Google determine which was the “first”, aka the original, content? I don’t think it can, and it probably spends its resources monitoring the more popular sites.

    So if a user posts something on a niche classifieds site, and then posts it to Craigslist who gets the credit? I’m guessing it is most likely Craigslist because it’s “overall” more popular in terms of traffic and probably gets much more Google attention while the niche site that has a niche following may suffer.

    In the case of an MLS situation, the Realtor who posts the content is the original creator. Their content should be attributed to their site alone, but there’s no way to determine that with how content is syndicated. I presume the larger sites in this space will benefit while the smaller, individual Realtors or real estate companies suffer. I know some of my friends who are Realtors are seeing a huge drop in traffic from this change, and I presume there are others.

    I’m interested to hear how these types of scenarios might be addressed for legitimate sites.

  231. I’m sorry, but what is this doing on your personal blog? Surely this is 100% Google-related. If you want feedback, which I assume is the reason this is here, then you should let this happen on Google’s official blog. Enough said.

  232. Is it possible that these new algorithm changes detect instances when another search engine is farming Google’s SERPs?

  233. I am wondering why this change has been necessary. Content duplication was addressed years ago, and this change looks to me no different than any other action against duplication. I am asking myself whether content duplication was actually ever addressed before. This doubt makes sense, especially given that sites with copied content never had any trouble outranking sites with original content.
    Maybe this change is now the first, more than necessary, action against content duplication/multiplication?

  234. Completely lame from Jason Calacanis. “Hey Matt! Seriously, Mahalo is clean.” I think Ryan Jones sums it up with the whole “I didn’t do it, Mom!” comment above.

  235. I like the sound of this change. Hopefully it will prevent blatant plagiarism and give more credit to original writers/owners of content. Sounds great to me.

  236. Weird question related to this: since this is being done to target low-quality sites, are there any plans, now or in the future, to tie this to AdSense and either increase the number of PSAs, decrease the PPC value of the ads displayed on these sites, or outright ban these users from the program? Reducing their traffic is one thing, but they still have Binghoo! et al. as sources. And while they could go to AdBrite or some other network, it’s unlikely that the ROI would be as strong. And the remaining PPC ads could be better distributed among quality content publishers, which is a win-win-win in the long run.

  237. Great news for original bloggers and web marketers, as Google said:
    “we’re evaluating multiple changes that should help drive spam levels even lower, including one change that primarily affects sites that copy others’ content and sites with low levels of original content.”

    Because of this, real change will now come to the search results, and good original content will rise to the top of Google’s result pages.

    Another change: the many people offering copy/paste content services to make money will now decline; in 2010 so many spam/copy-paste sites were started.

    Original sites were facing ranking problems; now this will solve them.

    So just write original content, come out on top of the search results, and make money as you wish.

  238. I’m wondering how quotes websites will be affected by this new feature. We can’t create our own content, because we post quotes; our content will be everywhere 🙂 I’m looking forward to your feedback. Thanks!

  239. Great news. I’ve actually seen the results of this change already.. I’ve had a continuing issue with one site that was stealing my content and images and ranking above me for every document… I even filled out a DMCA notice to send along. Thankfully, I no longer need to…

  240. Hi Matt,

    That’s a good initiative. What about the people that just spin articles to make them unique? That content is hard to read and understand.


  241. Hi Matt: I’m involved in a variety of hand-created sites that do not rank as well as sites that simply repost material from other sources. This discourages people with real expertise from blogging or otherwise sharing information. The question they ask is, “why bother?”
    Having said that, we “do” use the URLs of research and we “do” use excerpts from newspaper articles; we add our analysis and comments, which make up the bulk of the articles we create.
    So my question is: what about the combination of URLs, short excerpts, and originally created content? Will we be penalized for the combination?
    Google (and social media sites like Reddit) MUST do a better job of separating the crap from the substance. At the moment, the crap is winning, and that’s hurting citizen journalism.
    Thanks for listening. Len.

  242. Matt-

    Talk about content farms and abuse to gain backlinks. Today, 2/3/11, I did a Google search for “Masters Nursing Online”, set the search to “Everything”, and set the time to look within the last 24 hours. Of the first ten pages, 80% or more is the exact same article from an essay-writing company. I mean word-for-word the same article, titled “Introduction to the Online Masters in Nursing”. Perhaps as you guys work on this it can be arranged that the exact same article is not shown in search results this many times. I got the point after page three.

  243. I just got a quote from an SEO company that would like to sell me a tool that dynamically generates website pages to increase my presence in Google SERPs. I don’t want to mention them – to give them any publicity – but I was asked to weigh in with my opinion. The COO loves the idea – anything which will help him “legally” get higher in the SERPs – but I am afraid that even though I am being told this is all completely within Google’s acceptable parameters, it won’t be for long, and we could get penalized for using it.

    Is it possible to send you – offline – the link to the product, and get your take on how safe it would be for us to subscribe to?
    Thanks for your continuing help in allowing me to learn from the gurus out there who are willing to share their knowledge.

  244. Hi Matt

    Great news – but do you think this may affect job aggregator websites? (Think Indeed, SimplyHired, etc.) Whilst some do build up decent unique content on top of the huge bulk of copied job content, many (especially in the UK) have very little beyond the massive amount of copied job content they pull in. The value to the user is an often hotly debated topic for these types of services. What is NOT under debate, however, is that these websites are very heavily present at the top of most job search results pages.

    Interested in your thoughts.

  245. Sounds like good news, although when I search on your entire 2nd paragraph, wrapped in quotes, it returns this page in 3rd position.

    Ho hum .. 🙂

  246. @Ewan Kennedy

    Thanks for pointing that out. I have seen the same issues with sites that are perceived by Google as more authoritative, or that are indexed first.

    I know spam sites need to be eradicated, but I fear this new change, while helping in some ways, has a lot of unintended consequences for legitimate sites.

    Apparently even this one.

  247. Something is going terribly wrong with the recent changes. Articles from the German Wikipedia are now disappearing from the SERPs – if there is an English equivalent. I’m German, and I’m searching from Germany. Until a few days ago, Google would often rank the German Wikipedia article No.1. Now they send me to an English article that has far less content than the German one – and it’s the wrong language!

    Wake up please!

  248. Many articles from the German Wikipedia now seem to be banned from Google. E.g.:

    Angelina Jolie is gone!

    Eva Mattes is gone!

    Galileo Galilei, an article rated “excellent”, banned by Google!

    Looks like a disaster…

  249. Where I am confused is: I have an article website where I post tons of articles on many different subjects with permission from the writers, and I give credit back to the writers’ websites. I also get my info from RSS feeds and other sources; again, I give full credit back to my sources. Is this now a huge no-no? Thanks, Beth 🙂

  250. Will this algorithm update affect the copied product listings that most affiliate websites use?

  251. Great job Google!! NOT… What about ecommerce sites that ‘have’ to use the manufacturer’s content? We can’t get away from that or even have our own unique content, not for the products we sell, so I see this as a BIG FAIL on Google’s part. This is one of those things that benefits bloggers and yet screws the rest of us over.

    Even a friend who blogs about a specific product lost his rankings. He ranked number 3 for a term that drew a ton of traffic, had unique content, and runs a blog, and yet got penalized and lost his rankings while others copy his content. Hmm, how long does Google ‘test’ these algo updates before implementing them? This isn’t going to stop the spamming sites; it just hurts thousands of innocent people and ecommerce sites.

    SO, GREAT JOB GOOGLE. HUGE FAIL. Time to look to Yahoo and Bing, where we get 10% of our traffic.

  252. Hi Matt,

    How does this affect sites for multiple countries that essentially have the same content, but with slight alterations for each country?

    For example, we have one site with our content in English spellings and prices in pounds, and another with the same content but prices in dollars and some spellings altered to the American versions, e.g. Color, Maximize, etc.

    Cheers, and keep up the great work.


  253. What will you do with eHow? It’s painful to see how their “contributors” are stealing original content, including graphics, from my site.

  254. Great job google!! NOT.. thought of ecommerce sites that ‘have’ to use the manufacturers content. We can’t get away from that or even have our own unique content. Not for the products we sell so I see this as a BIG FAIL on google’s part.

    Well, why don’t you stop complaining and use some of that time to be CREATIVE and TO ENGAGE THE USER!! Create compelling designs, offers, and promotions and GET THEM TO SHARE IT WITH THEIR FRIENDS and SPREAD THE WORD! Find bloggers in the same niche and make friends, send them a sample and ask for an honest review, and engage your audience/community. Ask for their opinions and LISTEN TO WHAT THEY SAY. GIVE THEM THAT! Stop complaining and get to work on your site and interaction in every way. You might lose a few leads due to Google trying to straighten stuff out, but I’m sure there are other ways to drive traffic, and REPEAT traffic at that. Take care of your customers and they will take care of you. Enough said.

  255. Yoda,
    We too work within the realm of e-commerce, wherein many of our competitors simply use the manufacturer’s provided description. We attempt to enhance their copy with information about the different benefits, customer reviews and comments, and some good ol’ fashioned legwork. If you’re passionate about your business, you’ll take the time to come up with meaningful information to differentiate your product/website from another that simply takes what the mfg. provides. Customers appreciate the additional information and help, which coincides with what Google is attempting to find and provide in the SERPs.

    To me, a huge change I would like to see Google make is to factor in, with greater priority, important consumer matters when it comes to an ecommerce site, like the listing of a valid address, phone number, and email address rather than a generic contact form. In my 10 years of experience competing online, it’s become pretty obvious which sites are run by shady outfits: they simply never provide a way to determine who owns the company, nor an address or phone number. In my humble opinion, drop those sites way down. Additionally, Google should be looking at a means to factor in businesses that are doing good and giving back, or that have received legitimate awards or honors for good business practices. Just a few thoughts.

  256. Since this new algo was initiated, our 10+-year-old original-content site’s traffic has dropped like a rock. In the meantime, scraper sites in my niche prosper and rise to #1. A copycat site is also rising in the ranks. Although our listings still seem to be there, traffic and converting sales have plummeted to nearly zero. That’s a first in our site’s history. Is it possible this algo is finding and penalizing false positives?

  257. Our traffic has also taken a major hit as a result of scrapers and feed-based sites. For example, you can search for “The Apple and Google brands had the most significant impact online in the fourth quarter of 2010” (in quotes), which is the first line of one of our articles published today. The top listing is a site that is scraping our content and provides no original content. I post this as a very specific example. The actual article page on our site is also omitted as the duplicate. This is happening for almost all our articles, and we have thousands of original articles. It doesn’t seem like the algo change is able to identify the original source well, as stated. I’m hopeful that something can be done to fix this problem.
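    The quoted-phrase check this commenter runs by hand (searching for an article’s first line in quotes to find scrapers) is easy to automate over any set of pages you can fetch yourself. A minimal sketch, purely illustrative, operating on a local corpus of (url, text) pairs rather than live search results:

    ```python
    def find_exact_phrase(corpus, phrase):
        """Return URLs of pages containing the phrase verbatim,
        after collapsing whitespace and ignoring case --
        the same check as a quoted search-engine query."""
        needle = " ".join(phrase.split()).lower()
        hits = []
        for url, text in corpus:
            haystack = " ".join(text.split()).lower()
            if needle in haystack:
                hits.append(url)
        return hits
    ```

    Running this periodically over suspected scraper pages gives a concrete list of copies to cite in a DMCA notice.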

  258. I am wondering if anyone is going to explain any of this to us greenhorns???

    My question is: “Where I am confused at is: I have a article web site where I post tons of articles on many different subjects with permission from the writers and I give credit back to the writers web site. I also get my info by RRS feeds and other sources – again I give full credit back to my sources. Is this now a huge no no? Thanks, Beth :)”

    My question is much like another poster’s, whose website only posts quotes from different sources, with the sources’ permission.

    Hope someone who has some knowledge on this will respond 🙂

  259. I think there are definitely going to be a lot of false positives. I hope Google differentiates, in a meaningful way, between sites that legitimately republish content (with proper crediting of the original source) and people who scrape content. I hope things settle down soon. Also, I like going to some sites even though their content is not unique, simply because I like the look and feel of the site, like Whiz mentioned. One original source had just white/red text on a black background (it hurt my eyes to read it); I would rather read it on the republisher’s site (good navigation, images, visuals, etc.).

  260. How will this change impact MLS sites? We have no option to change the content for properties. I noticed recently that pages on my site have fallen out of the index completely. Do I now need to write blog posts to keep my online business alive?

  261. If our sites are no longer allowed to post articles that are posted elsewhere online without facing penalties, then what is the point of RSS feeds and similar sites? Thanks, Beth 🙂

  262. Also wondering if it is normal during these major algo updates for “good” sites to expect a drop in traffic while Google goes through its “shake & bake” process. I’ve run several websites since 1997, and every time a major change comes down the line, we see slowdowns that severely affect sales conversions, as if traffic quality has greatly decreased. Days or weeks later things improve. Can’t these algo updates be released incrementally, without this level of economic impact?

  263. Hello Matt,
    This is excellent news and something I have been praying for, for a long time…
    It is unconscionable for these scraper sites to take our original content and links and post them blatantly, without permission, on their wretched no-content sites filled with unrelated ads.
    What I haven’t been able to find is where to officially file a complaint with Google, because, unfortunately, as the owner of two blogs, I have been baffled and abused by the boldness of some of these sites. I even have a screenshot of one scraper site’s blatant listing of my Blogger site’s posts; I am neither a forum member nor a consenting participant on their site. They did not ask my permission, and I would not have granted it. Please advise!
    Thank you and kudos for the decision to end this dreadful method of traffic building! 🙂

  264. Hello Matt,
    As a follow-up to my comment above, I truly believe that the original content/site of origin should always – ALWAYS – precede the sites that have been given legitimate permission to publish our posts. I spend long, tiring hours writing original content for my blogs, and it irks me to type in my blog title and find my post ranking higher on another site. While legitimate article directories are useful, I believe they should never supersede the site of origin in a Google search.
    Please keep in mind – I presume the recalculations will handle this issue – that some of those scraper sites pretend to be legitimate article directories, which they are NOT! Legitimate article directories require participants to develop a relationship by signing up, confirming that we have signed up, and, in many cases, adding their logo to the listed blog.
    The scraper sites don’t bother with building a legitimate relationship because they are too busy stealing our content and know that most of us would ignore such a request to become a member of their sites. Please note the glaring difference…
    Again, thanks and kudos for the effort Google is making to end the problem with scraper sites. 🙂

  265. It is great that Google is trying to eliminate the major problems created through dupe content!

    But since the algo change there seems to be a little confusion as to who the original content actually belongs to!

  266. There are websites out there which can detect content stealing (e.g. Copyscape). Or are these plagiarism checkers not working well?

  267. Hi,
    Thank you for this update. It is going to help the REAL owners of content. Content on my sites is already being copied to many other websites through RSS feeds, so I’m glad to see this update, since my investment in content will be worth something.
    I had been thinking this update would happen eventually.
    Please tell us when this update will roll out.

  268. Regarding search engine SPAM: a few colleagues and I have uncovered a rather large link-wheeling scheme which includes a popular search directory. I understand Google has guidelines in place which caution webmasters about participating in linking schemes and any manipulation intended to boost one’s rankings.

    I have submitted an abuse report via the search page, but it’s really not capable of conveying all the information which has been gathered – info which details these serious abuses. Further, and quite unfortunately, the way the form works I have no idea if what I was able to send has actually been delivered to anyone and/or if it will even get reviewed.

    No doubt you are a very busy guy, but if you or one of your colleagues have a few free minutes, please contact me – I would love to share this information with someone who might have the ability to properly address the issue.

  269. That is good news, and I have seen some people complain that their earnings suddenly dropped at the beginning of Feb. I have one question: so many people use re-written content, and there are so many software tools for re-writing. Is it going to affect all those websites too? What about Ezine articles? So many articles submitted there are re-written. I would like to hear more on this topic, thank you.

  270. Another thing I forgot to mention here. One of my blogs had a sudden drop in traffic at the end of January. I wrote the content by hand; I searched various blogs for ideas, but didn’t re-write their content. Eager to know more about the topic!

  271. Very welcome change, good news for original content creators…

  272. Whatever the recent changes to your algorithm, they have taken my noncommercial site and driven its trickle of traffic into the ground. This after spending 4-8 hours working hard on each of these articles over almost three years. And your comments about original content… Well, let’s just say I wrote all my articles the hard way and applied no SEO to them, so your algorithm seems to have some serious flaws.

    By the way, you really need to work on a means to introduce your algorithms incrementally, so sites have a chance to adapt to your ever-changing and unwritten rules. You mess with a lot of lives with your algorithmic dabbling.

  273. From my impression, the worst thing isn’t the sites offering little original content.

    Much more of a problem are the content farms or article sites that have all original content, but without any value. Useless babbling that’s unique but doesn’t tell you anything you really want to know – like Wikipedia, but at a very low quality level.

    The problem is the quality, irrespective of the fact if it’s original or not.

    So, basically, I’d prefer finding something useful three times, even if copied, instead of wasting my time with something that looks unique but turns out to be unique rubbish.

  274. Because of this algo update, I tweaked our content a bit to reduce the mentions of the term we’re targeting by two or three, just to be extra careful we don’t get affected by it. A few days later, I see that we’ve moved down a notch in the SERPs, replaced by a site that mentions our target term repeatedly, about 36 times on that page. Hmmm…

  275. I’m glad to see this update. I’ve noticed a fair increase in spam results, especially when trying to attain rankings for our clients, who have legitimate businesses, against spam sites.

    Thanks for the update Matt.

  276. Thanks to Matt.

    I really like this post. Google has really needed an update for a long time, and it’s a good time for a change in the algorithm.
    I would also like to know the exact difference between the old and new Google algorithms.
    Please share that link if you have it!

    Thanks to all

    Warm Regards
    Denish verma

  277. Well, unfortunately my previous comment is still in the moderation queue and has been for 5 days so I’m going to try again since a subsequent comment is showing.

    Unfortunately any action has side effects. I’m one of those, along with a few others who responded above.

    There is a spam site that copied content from one of my niche content sites. It ranks ABOVE my site when I search for the exact content phrase with only 2 results being shown. My site and theirs. Nearly all the traffic to my site from Google has stopped. Totally sucks.

    Also, let’s consider the case of a real estate website or a classifieds website. These types of sites have nearly identical content by default, because listings are syndicated from the MLS, or users post the same content to multiple sites while attempting to sell something.

    How does Google determine which was the “first”, a.k.a. the original content? I don’t think it can, and it probably spends its resources monitoring the more popular sites.

    So if a user posts something on a niche classifieds site, and then posts it to Craigslist who gets the credit? I’m guessing it is most likely Craigslist because it’s “overall” more popular in terms of traffic and probably gets much more Google attention while the niche site that has a niche following may suffer.

    In the case of an MLS situation, the Realtor who posts the content is the original creator. Their content should be attributed to their site alone, but there’s no way to determine that with how content is syndicated. I presume the larger sites in this space such as realtor (dot) com, trulia (dot) com and so on will benefit while the smaller, individual realtors or real estate companies will suffer. I know some of my friends who are Realtors are seeing a huge drop in traffic from this change and I presume there are others.

    I’m interested to hear how these types of scenarios might be addressed for legitimate sites.
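The “who was first” question above is often approximated by crawl-timestamp attribution: among near-identical copies, credit the URL the crawler saw earliest. A toy sketch (all data and names invented for illustration, not Google’s actual method) of why that heuristic fails in exactly the syndication scenario described: a frequently crawled big site gets “seen” before a weekly-crawled niche site that actually published first.

```python
from datetime import datetime

# first_seen: when a crawler first fetched each copy of the same listing.
# The niche classifieds site is crawled weekly, so its copy is "seen"
# later than the hourly-crawled big site, even though it published first.
first_seen = {
    "nicheclassifieds.example/listing/42": datetime(2011, 2, 10, 9, 0),
    "craigslist.example/post/9001": datetime(2011, 2, 9, 14, 0),
}

def credited_original(first_seen):
    """Attribute 'original' status to the earliest-crawled URL."""
    return min(first_seen, key=first_seen.get)

# The big site wins the credit despite not being the true origin.
print(credited_original(first_seen))  # craigslist.example/post/9001
```

Any real system would have to blend crawl time with other signals (links, publisher feeds, declared sources), which is presumably why the commenter’s concern is hard to resolve cleanly.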

  278. Matt, just a question: will the 2% increase over time? I mean, is this release the first step toward something bigger that could affect more of the search results?

  279. I don’t know if I agree with this philosophy of “original content”. The e-commerce world is filled with duplicated content, because many product descriptions are what the manufacturer submits, and many sites are either too small or, due to legal issues, unable to change those descriptions.

    Same deal with many legal websites. They can’t change the wording of the law and policies but that doesn’t mean that those sites should be dinged for it.

  280. That’s great, hearing about the changes in the algo. Will the algo consider removing the fake backlinks purposely created for a PR boost? (i.e., links from sites created purely for the sake of improving PR. I have noticed several websites register fake domains with content farms and link them to the original website.)

  281. So the classifieds sites are good candidates for change … whoever wants to sell a house or a car repeats the same ad on dozens of classifieds sites …

  282. Thanks for the update. Does this apply to Germany as well, or rather to all markets?
    Best regards from München

  283. I simply don’t see an improvement in your results. For a term I ranked for, I’m now on the fourth page, while sites that didn’t research their content or work hard to write original content are on the first page. In fact, your first-page results for that term are now very poor and include one “content farm” after another. I think this algorithm change isn’t for the best, at least not for my term.
    I know this sounds like someone upset they dropped, and I am, because I tried hard to avoid spam, wrote good content based on real research, and followed Google’s TOS to a T; but now I get replaced by article sites. Come on, really?

  284. I think there are definitely going to be a lot of false positives. I hope Google differentiates in a meaningful way between sites that legitimately republish content (with proper crediting of the original source) and people who scrape content. I hope things settle down soon. Also, I like going to some sites even though their content is not unique, simply because I like the look and feel of the site, like Whiz mentioned. One original source had just white/red text on a black background (it hurt my eyes to read), so I would rather read it on the republisher’s site (good navigation, images, visuals, etc.). I suppose finding answers to queries is Google’s primary concern, but the original source is not always the best source. I would rather read from a republisher that publishes daily/weekly content from different sources than an original source that may only update once every couple of weeks… saves me a lot of clicks lol

  285. Heh, that is just what I was looking for!
    Noticed some sites’ SERPs drop(((

  286. Let’s see how it goes. Young, creative sites may suffer unfair treatment, if an older more trusted site copies its content, but hopefully the algorithm will minimize such possibilities and the bans will be fair most of the time.

  287. Well, I’m glad I found this article. One of my clients was position 1 on page 1 for a particularly strong keyword, and now it’s dropped to position 7. She’s not happy about it. At least now I’ll have something concrete to tell her without sounding like I’m full of ***t. I guess I’ll have to improve the relevance and quality of her website content 🙂

  288. Matt, I hope you can help. In the Webmaster Tools for our site we have many, many “not found” errors with really weird links from sites that we have never heard of. Here is an example:” width=”75″ height=”50″ alt=”image”/> <a href="/m/imgres?q=western+sams+club. Here is the site it is from:
    How could anyone link to our site with that, and why is Google considering this an error or even a legitimate link to our site? Is this happening to anyone else? What do we do?

  289. Just to make this very clear the “link” to our site is all of the following:” width=”75″ height=”50″ alt=”image”/> <a href="/m/imgres?q=western+sams+club
    Yes, the spaces, width, alt and part of an <a href are all part of the "link" to our site. This started a few weeks ago and now we have way more. Will we have thousands soon?

  290. Good to know that Google is taking steps to get rid of the spam. Lately I have been really struggling with Google search results because of the irrelevant sites that invariably come out on top of the results.

  291. Great news for us and bad news for spammers. Thanks for updating the Google algorithm.

  292. This new change is killing us. We just added a new employee in January, and three days ago our SERPs dropped from page 1 to page 5 for many pages. We’ve lost nearly 50 percent of our traffic. We have long webpage names, but back when we built the site this is what a lot of books recommended. We’ve invested a lot of money into our site and things are looking really bleak. Help

  293. This news is fantastic. I have had a blog for a few years that is opinion-based, with totally unique content. As a freelance writer, I’m thinking this could also be good news for my profession. Instead of people buying $4 crap articles from the likes of Textbroker et al., maybe the value of my profession will return. I can only hope.

  294. Hi Matt,

    Thank you for the information regarding this update at Google. I have a question that is boggling my mind like a Rubik’s Cube. When I am logged in to Google and I search for the company I work for using our #1 search term, we have dropped from #2 to #3. However, running the same search in another browser, where I am not logged in, shows us at #2… what’s up with that? Is it a regional thing?

  295. I just noticed that someone is intentionally copying our web pages and putting them on other sites. I wonder if this is a reason for a drastic reduction in page rankings.

  296. Two comments. First, be careful with the algorithm tuning. There is a local Minnesota site, a Blogger blog, Dump Michele Bachmann, dedicated to reasons to remove the woman from Congress. Its team tracks Bachmannalia around the web, and links and excerpts AND analyzes, as well as generating fresh content. I know the site and such, but I would hate to see it drop too far behind the Representative’s own House homepage. It has a very sound First Amendment political-speech purpose and should not suffer from algorithm tuning.

    Second, the recent Google outlier thing, the sting on Bing: bogus search words, then seeding Microsoft Bing by downloading IE8 with the Bing bar and enabling Bing site-traffic tracking for 20 Google engineers, who then ran the outlier searches; how does that fit into SEO offensiveness?

    My guess is that Google did it AFTER tuning its own algorithm to not be waylaid if Microsoft did the same thing in reverse, to attempt outlier search bias interjection into Google usage tracking, via the same bogus outlier attack.

    Isn’t that like the old western movies, “We’ll ambush them at the pass?”

    Wasn’t it spamming?

  297. Hi Google and Webmasters,
    How quickly some people forget. Well, I have not. Google News was started just like this… They posted AP and AFP news without proper permission. The three were at odds and arguing about principles. Google claimed they were sending AP and AFP traffic and that they should pay Google. This dispute went on and on. The dispute was in 2006. This is how Google News began….
    Here is the news article about it….
    Google has agreed to pay the Associated Press news agency for using its news stories and pictures in Google News.
    The deal settles a dispute between the two over copyright issues and has implications for a dispute between Google and Agence France Presse (AFP).
    Agence France Presse is seeking £10m in damages after alleging Google broke its copyright by “aggregating” its stories, headlines and pictures in Google News.
    Google has denied any copyright infringement with the AFP content, but it was treating AFP copy in the same way as Associated Press content before its deal with that agency.
    The new partners say the deal will also allow Google to use the content in future services for Google users. The terms of the deal have not been disclosed.

    How can Google get away with this and not the smaller sites??? This is just a move to get more advertising dollars out of the smaller companies, news sites, content-sharing sites and educational sites who share content. I don’t think Google really cares about protecting content. They just care about protecting their power on the internet. Why can’t smaller news sites be given the same opportunity on which Google founded its News section? This is just all about power and money. The bottom line. Smaller news sites give traffic to other content sites. They should get paid for it.

    If an author or owner of the content has a problem with their content on a website, then they should contact that site and request removal. And never do it again. There are a lot of sites that have the rights and permission to have content on their websites. This is just going to squeeze the legitimate sites out of 60% or more of the search market.

    Well, I had my say.

  298. I am still fighting with content scrapers. It is quite difficult for Google to determine which one is original and which is the duplicate; that is why original and duplicate articles are sometimes ranking on the same page on Google.

  299. I run sites that are based around other content; however, I only use content from sites like article directories and YouTube, which are supposed to distribute content.

    I follow all the rules, I play fair.

    I do my research and work on my sites to give my audience content they want to see.

    Since this update I have had a 700% or more drop in new traffic across the board.

    So let it be known that while this may be attacking sites that steal content, it is also hurting anyone with sites that are based around fairly used, distributed content.

    Considering YouTube, Facebook and most other major sites online have never created a single piece of original content in their existence, since they get their visitors to do it for them, I consider this to be hypocritical in the extreme. Even Google is just a glorified middleman for websites. It just happens to be life and death for a lot of them. Google has made ALL of its money showing people other people’s content.

    The difference is that it isn’t stolen, so maybe in the future the fact that shared content is what drives all the big sites online should be taken into account.

  300. Wake up Matt! This update *SUCKS*. Here is Why!

    The new algorithm has a fatal defect: it penalizes the less authoritative site when a relatively more authoritative site copies the less authoritative site’s content, or when many sites copy your site!!!

    Quite a few of our competitors actually copy our content. This is the bloody truth. However, OUR site is ending up being penalized by this update. (Our URL is ) Some of our product pages were literally copied, paragraph by paragraph, by a leading competitor. However, our site is now being penalized. Our attorneys actually sent them notice, and they took down OUR PICTURE from their website. In their product descriptions, they had “if you need a custom item, call OUR website”! At first they did not even change our phone number to theirs. This is not a joke!!! We launched many products at least a year or two before them.

    I certainly applaud Google’s intent. But this result for us is just *hideous* and *plainly wrong*. We truly try our best to provide better products and better service. But now the spammiest site, which copies us, is ranked #1 for most keywords. Matt, if you don’t believe me, just google their site name + “complaint” or “review” yourself and see if you would ever recommend our #1-ranking spammy competitor to your worst enemy.

    Matt, now please *do something* about this update, because right now it lets big bad Pete murder and get away with it too. Please let me know if you have any questions. Thanks.

  301. About time this was launched and pushed harder by G. We have worked for years on creating unique content, only for others to copy literally the entire listing and out-index us through using specific URL strings. Will be watching carefully to see how this takes effect in UK G search rankings over the coming weeks.

  302. I thought the results were getting better, but I have just been looking for “Matalan Reviews” in the UK, and from page 2 onwards the results are all the same: istaffordshire, ibedfordshire, etc. No reviews, just simple listings of a store address in a town, and the results are dominated by dozens and dozens of these for the next few pages. The results on Yahoo for the same term are far more relevant, so I hope this is just a Google search glitch.

  303. Matt,

    My name is Adam Fendelman. I’m the founder and publisher of, which is a credible, 3-year-old publisher of daily entertainment news, reviews and interviews. We are staffed by multiple accredited Chicago Film Critics Association critics.

    Shortly after this Google algorithm change went live and starting on 2/9/11, we have lost a destructive amount of our Google search traffic from regular Google. Our Google search ranking is clearly being penalized and we’re still not sure why. We’ve opened a long Google help thread here:

    We’ve been thinking this has had to do with our paid text link relationship with Text-Ad-Links since there are some other threads on this. Based on your post here and what some others have told us, this may have to do with our content syndication relationships. We are one of the many credible publishers syndicated by IMDb, which has a higher PageRank than we do.

    Since 2/9/11, though, OUR content that’s syndicated (just the headline, first few paragraphs and a link back to US) by THEM is appearing in Google and our original content is not. It appears as if Google thinks IMDb’s syndicated content from us is the original content and we’re not originating it. This is wrong. We are the originators and IMDb is simply syndicating us. This is our syndication feed from IMDb:

    We have already filed two site reconsideration requests with Google. Our first didn’t know much about what was going on and our second is believing that Text-Ad-Links is our culprit. I’m not sure if I should file a third discussing our content partnership with IMDb. We can only continue to guess. This is very damaging to our business and an official response from Google would be very beneficial. We would very much appreciate your feedback so we can get our Google search traffic and ranking corrected. Thank you for your time.

    ~ Adam M. Fendelman

  304. This is good news for a lot of people… not such good news for people who earn their money doing SEO.

  305. Scenario: a company creates a press release and does not post it on their website. Instead, they distribute it via a press release distribution service. Is it acceptable for the press release distribution service to use an ‘original source’ meta tag, since they were the first to post it online?

  306. Matt,

    Clearly there was another update on February 23/24. We got hit big time at our site. We use 100% google-approved techniques – we have never bought links, we never sell links, we never “compensate” people in any way for links. We always try to market our content to relevant, smaller, independent sites. We link to relevant, non-spam sites in our articles. We have 100% unique content in more than 1000 articles we wrote over 4 years on individual home improvement projects.

    We lost about 50% traffic in the latest update which appears to be targeted at low-quality content. We have sat in the top few SERP results for certain terms for years… and now … sometimes we’re on the second page – and the sites that rank ahead of us – spammy, poor links, or just “big” sites. It just doesn’t make sense to me, Matt.

    Now, some sites that are at the top look just like the sites Google doesn’t want there. Big, spammy, half-irrelevant sites… This is killing our business and is really discouraging after years of working hard to do “the right kind” of web marketing, playing by all the rules, and growing slowly. Some of our competitors go out and buy 1000s of pages of spam links or can afford major media campaigns or own 100s of other, mostly-irrelevant sites on their own. These sites continue to sit at the top, unchallenged except for other massive sites competing with them.

    What advice can you offer? Will there be a change that corrects some of this?

  307. I only hope that this content farm update includes sites like Experts Exchange that all too often show up high in the serps for a question but the answer is hidden behind a paywall. It gets tiresome having to add -“experts-exchange” to every query

  308. Love the move in this direction, but I’m sure there will be some working out of the kinks in this update.

    The one concern I’d have, knowing that some people ‘will be evil,’ is this part about the ‘personal block list’ for Chrome. Can you imagine some of the evil crowd going ‘Hey great! Let’s fire up a ton of personal proxies or botnets…and hammer our competition’ — Ugh, man that could get messy. Of course I’m certain you’ve had the pleasure of dealing with these type of logistical web-spam fighting issues before when pondering the weight applied to social loving of sites & inputs from users. :/

    Overall I’m jazzed about the move. Getting people focused back on content being king & producing quality work is good for us all — users and producers. :thumbsup

  309. It’s good to see that there is a major crackdown against content scrapers! Good work, Matt!! 🙂

  310. Damn, the new algorithm kicked me right in the b.. My site dropped to 1/3 of its traffic. Are you just going to cripple software directories with the new algorithm?

  311. Not too thrilled with this 23/24 update. A bunch of our sites dropped from the SERPs while spammy sites jumped to the top. We never scrape content or use blackhat scammy links, yet we are clearly being hit. We employ a team of US based writers to create our copy, and if this update isn’t rolled back we’ll be forced to lay off a dozen employees on Monday.

    I can just imagine how bad it will be after this is pushed out internationally.

    Thanks for killing our business, Google.

  312. This is bullying. On what basis do you decide whether content is useful or not? I have original content, ZERO link building, no scraping from anywhere – just a clean and pure website.
    You brainy guys come up with some algorithm and decide to trash sites that your dumb algorithm “thinks” are not suitable for people to read? My site had 25k visits per day, 450+ comments and discussions per day, and was making money with AdSense, of which you keep a portion too. Suddenly my revenues are trashed and you keep making money from someone else’s website. Pathetic. I am feeling very bitter because I did everything by the book, hired subject-matter experts, got everything done right, followed all the instructions for webmasters – and now this. I really wish Facebook comes up with some idea that takes away your visitors too, and I wish you go bankrupt overnight. Pretty sad – I really understand now why a “monopoly” in business is not good. I think your days are numbered too.

  313. I only hope that this content farm update includes sites like Experts Exchange that all too often show up high in the serps for a question but the answer is hidden behind a paywall. It gets tiresome having to add -“experts-exchange” to every query

    Use the Google cache for the term. It will reveal the answer. It’s a bit of a pain to do it that way, but it does work.

  314. I am a bit miffed over this update that killed many of the better article directories, like EzineArticles. Though I did despise the concentration of ads on the pages, I did look at the site as a public forum of sorts for publishing high-quality content. Considering how strict they were with the editing of each article, and considering they check to see if it is original, I am surprised you decided to kill them off. What stinks about this is the fact that I DID create worthwhile, high-quality content, and it seems the quality of that content has been devalued just because of where it was posted. You are basically telling me what I wrote was garbage because of where I published it.

    Content I posted on my other blogs ranks as well as it did before, on page 30 as usual, because the domain does not have 2,000 pages, 7 years of age, a bunch of blog-comment links pointing to it, nor is it registered in 4,000 directories… so yes, this is very much a situation where I, as the content creator, suffer ONLY because of whom I chose to publish my original content with.

    My business survives off of traffic, traffic I cannot get by simply creating a wonderful site, because Google hates new sites. I have to ride the authority coattails of others in order to be seen, or I will die of old age (or of starvation) before my new quality site is ever seen. It seems now there are no real worthwhile ways to get people to link to you unless you beg…

    I look at sites that have been around for years that I compete with; I make it a point to create better content than they do, but my sites are never seen because of nothing more than their lack of authority, their lack of age, the lack of self-created links pointing back to them, and I stand very confused as a result.

    You, Matt Cutts, want quality. Some of us actually do produce that quality. But because we did not produce that quality years ago, using all the link tricks that were available years ago, we STILL lose the game, and now you have taken away what chance we did have to drive traffic to our sites by writing a well-crafted article and publishing it on sites like to at least be seen. It was the quality article that caused the reader to click the link to my site…

    If we cannot be found because your algo keeps us in the hole, how can we show the world that what we are producing today is better quality than what a PR5 site produced years ago that is garbage by comparison? Are new standards impossible to set?

    It sure would be nice if you could find a way to take a page, apply some grammar filter and spell checker to at least attempt to test for quality, count the words and the photos, filter for excessive keywords, allow a small percentage of dup content to cover citations… and have a eureka moment and realize that this content must be OK if it passes your test. Regardless of age, links, or where it was published.

    One could only hope…

    It seems your ranking factors focus on everything BUT the quality of the content itself.
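For what it’s worth, the sort of mechanical on-page quality test this comment asks for (word counts, spell/grammar filters, keyword-stuffing checks) is easy to sketch; the hard part is that anything this simple is also simple to game. A toy illustration, with thresholds and function names invented for this example only, not anything Google is known to use:

```python
import re

def quality_check(text, max_keyword_share=0.05, min_words=300):
    """Toy content-quality heuristic: length and keyword-stuffing checks.

    Returns a dict of pass/fail signals. Thresholds are illustrative only.
    """
    words = re.findall(r"[a-z']+", text.lower())
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    # Ignore short stopword-ish tokens when looking for a stuffed keyword.
    content_words = {w: c for w, c in counts.items() if len(w) > 3}
    top_share = max(content_words.values()) / len(words) if content_words else 0.0
    return {
        "long_enough": len(words) >= min_words,
        "not_stuffed": top_share <= max_keyword_share,
    }

stuffed = "buy widgets " * 200  # "widgets" is half of all words on the page
report = quality_check(stuffed)
assert report["long_enough"] and not report["not_stuffed"]
```

A page can pass every check like this and still be worthless prose, which is presumably why ranking leans on off-page signals (age, links) that individual authors cannot fake as easily.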

  315. When are the new changes coming to Sweden?
    I heard that you have made the algo update in some parts of the world.

    Petter Hedman

  316. I’d be interested to see something done about those sites which steal content and get away without any contact details for their domains.
    A domain being used for personal use is fine – Nominet allows non-trading individuals (consumers) to opt out of having their details shown. But there are lots of sites that are obviously commercial which hide their details. If showing details were made compulsory, we’d at least know whom to contact to ask for copyrighted content to be removed.
    I will continue to report the horrible scraper sites I find in results, as I find it very frustrating to constantly come across them! I’d like the option to remove them from the results reinstated too! Or perhaps Google could add a “this is spam” option to the icons next to results?

  317. I have a question and hope I will get an answer here. I write product reviews after buying the product I am reviewing. What is happening is that lots of people copy my content and paste it on their sites, which results in duplicate content. So then I also have duplicate content. Who will get the penalty? Me, or the people who are copy-pasting my content?

    And if I can also get a penalty, how can I keep myself safe from it? Because I cannot go after everyone and file a case.


  318. Aussiewebmaster

    Mate, it seems a few really relevant sites have been impacted by this new change. Why would DaniWeb (an excellent programmers’ forum) and Complete Review (a unique and valuable review of books, despite having a crappy domain name) be hit? It seems there may be a need to make some tweaks to the change.

    How best should one of the sites that believes it warrants a second look proceed?

  319. Looking forward to the change; this is going to eliminate millions of sites.

  320. Matt,

    Don’t you think it is time to make the news “original-source” meta tag global and not only for news? Would that make things easier?


  321. Matt, I love that you’re providing us with more detail on the reasoning behind some of the changes. There’s been a lot of discussion whether Google should disclose their algorithm in order to reduce spam levels or in order to provide transparency. I’m a clear advocate of not doing this, since IMO it will allow for spammers to find loopholes they can then abuse.
    Having said that: I’m from the Netherlands. I’ve been using the personal block list for some time and think it’s awesome.

    In the US you probably have something similar, but we have something called, and what it does is provide users with a collection of links around a certain topic. It’s not a content farm, for that matter; it’s just links, or rather: a link farm with expensive paid links. But they are abusing the network they manage in such a way that high-quality sites are being pushed down. I mean: when I search on Google, I want results from sites that tell me more about the subject, not more links that I have to look at individually to see if they’re valuable. (BTW, having your link on costs a LOT of money.) I would say that is typically the task of Google, which I believe is better capable of determining a website’s content value than a link farm using paid links.
    I ‘challenge’ you to perform ANY random search for a major topic on Google Netherlands, e.g. “muziek” (= music), “liefde” (= love), “auto” (= car), “woning” (= home), and I’m willing to bet my girlfriend (whom I LOVE and who is EXTREMELY gorgeous) that within the first 20 results you will see either or a similar variant.

    I was wondering if and when Google is going to reduce the occurrence of those sites, since they really clog up valuable results!

    I notice you don’t reply to the posts here, which is understandable – I know you don’t have the time to answer all posts individually – but this is something I just HAD to say! Hopefully you take this major annoyance into account when further improving the Google search experience.
    Anyways…its late here now, so keep up the good work!

  322. John has got a point. Why don’t you show the original source specifically at the top of the SERP and then list the other sources? For this it would be best to use the date of content submission, i.e. whichever website posted the content first becomes the original source.

  323. Hey Matt,

    Nice to read your blog and pick up some good insights. I read in an article on SearchWarp something like, “Google is doing a new semantic / grammatical analysis to filter out pages they suspect aren’t well written or are written by someone who doesn’t primarily speak English.” So my point is: if that statement is true, will searches be affected from now on by bad English? I mean, is there a kind of punishment for bad grammar?

  324. The new algorithm sounds pretty interesting, but the thing I have noticed with the search results Google shows is that I am not able to get the latest information. For example, if I search for a term or a technology-related query on a topic that has undergone many advancements to date, it still shows me results from articles/discussion boards from way back in 2003 or earlier. Hence the results shown are useless to me…

  325. What about scrapebox, xrumer, seonuke and other software like them, which are totally based on spamming? These tools create many different-looking versions of content from a single article.

  326. Thank you for sharing your knowledge with us. It’s quite helpful to get such internals, giving small websites a chance to reach a position where they become visible to users.

  327. You still appear to be rewarding ‘trusted’ sites with reasonable SERPs that are using syndicated content, above sites that have original descriptions for similar content. No doubt there are other factors at play here, but surely duplicate content is duplicate content, regardless of who serves it up, and favouring a trusted site is not providing searchers with the best possible search results.

  328. Dear Matt,

    I very much appreciate what your team has done, and the results are not just nonsense. But your team has FAILED to kick out the USER-GENERATED-CONTENT websites that produce pages from Bing search results and Google document search pages, especially the spammy websites in my country, Indonesia.

  329. Sorry, have to post in parts:
    Subject: Search spam in the Netherlands, a Google fan’s perspective

    Lately, the search results in the Netherlands are getting clogged by link farm results. The link farm pages in this case are pages with collections of between 15 and 80 links. They provide no relevant content, just links to other websites.
    Main user objection: people use Google to be directed quickly to relevant content, not to be redirected to yet another overview of links, which are not only NOT the most relevant links, but often also paid links.


    I did some research on random terms. Below are the Google search results, performed as anonymous (not logged in) user.
    Dutch search term (English translation) Position of linkfarms + linkfarm name
    Bruiloft muziek (wedding music) *,,,
    Werk (work) *
    Ziekenhuis (hospital) *
    Trouwen (wedding) *,
    Fietsen (cycling) *
    Watersportartikelen (watersport products) *
    Dating (dating) *
    Meubelen (furniture) *
    Woning (home) *
    Luckily there’s the Google Chrome extension which I use, but Chrome still has a relatively small market share and plugins like these are more for power users.
    I’ve told and explained the internals to my parents and many of my friends, but I notice they still keep struggling through the forest of bad links which have been turning up in Google lately.

  330. The sites I’ve blocked in ONE Google search on “bruiloftmuziek” while paging through 16 pages:

    I hope the sheer number of these types of sites in the search results (~23/160 = 14%) demonstrates their impact on the relevancy of search results.
    Once again, these sites are completely redundant: Google is already the list of the most relevant search results, so why would I want to visit ANOTHER list – one that’s even less relevant?
    All these sites do is keep visitors a click away from finding truly relevant content.

    As you can see, Dutch people, including me, are very loyal Google users (Google has over 97% search market share here).
    I DO know we are a very small country, but it would be awesome if results like these could be reduced for all users and Google would give us the relevant content once again!

  332. No doubt Google will have covered this, but what if you publish external RSS feeds on a page? Is it spam to collate a series of industry RSS feeds onto a page and publish them – is that spam or copied text, to be viewed as a scraper site, or is it putting something useful on the web? I guess the Google engineers have this covered; I just thought it would be an interesting question, as I’m all for scraper sites getting screwed over good and proper. I think that if you take genuine content from another site for your own benefit, then you should be penalized.
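
    To be concrete, by “collating feeds” I just mean republishing each feed’s headlines and links – something like this minimal sketch (the feed content here is made up):

```python
import xml.etree.ElementTree as ET

# A tiny inline RSS 2.0 document standing in for a fetched industry feed.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Industry News</title>
  <item><title>Widget prices fall</title><link>http://example.com/a</link></item>
  <item><title>New widget standard</title><link>http://example.com/b</link></item>
</channel></rss>"""

def collate(feeds):
    """Extract (title, link) pairs from each RSS 2.0 feed for display on one page."""
    items = []
    for xml_text in feeds:
        root = ET.fromstring(xml_text)
        for item in root.iter("item"):
            items.append((item.findtext("title"), item.findtext("link")))
    return items

for title, link in collate([SAMPLE_FEED]):
    print(title, "->", link)
```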

  333. Matt,
    Websites using content farming and massive doorway and linking campaigns have corrupted our primary keyword search results. I am not saying we deserve the #1 rank; I am just saying the playing field is now so far from level that it seems (at least for our category of results) that, rather than follow the webmaster guidelines, it is more rewarding to add 2,000 pages a day to a website and pay services to run link campaigns for your anchor text on hundreds of sites per day. By every measure we can think of – from age of site, to coding standards, to avoiding schemes to inflate our rank – we should be near the top of our primary keyword search results.
    Who ranks above us? The majority of those sites are using blackhat SEO that is outright detectable. This obviously sounds like a rant and a cry for top ranking, but it is simply an observation of the devastating effect the recent algorithm changes have had.

    It is not just J.C. Penney anymore, we are getting overrun by sites using techniques we are deathly afraid to use for fear of the WRATH OF GOOGLE.

    Who does one cry out to for justice?


  334. Dear Matt,

    I hope you can provide some help. I have already commented on this problem on Google’s Webmasters forum (did this on 3/18/11) but as of today have not received a response.

    We have noticed that since last Thursday, Mar. 17, 2011, Google’s search engine results have been absolutely flooded with spam URLs optimizing for our brand name, Soft Surroundings, and different permutations of it. These spam URLs are not merely annoying; they are downright destructive, since they attempt to launch malware on a person’s machine when the link is clicked from Google’s SERPs. Amazingly, many of these URLs are showing up in the top 10 results and are pushing down legitimate and relevant pages, such as our own Facebook page and affiliate websites. This is potentially costing us a lot of money, since affiliate sites are getting pushed below these spam URLs. I have looked at the backlinks on these spam URLs, and they are using easily identified spammy cross-linking and back-linking among auto-generated pages to propel themselves higher in the results.

    We are dutifully reporting these spam URLs to Google one by one, but are actually having better success contacting the web hosting companies, which then scrub the sites clean of the malicious code. Quite frankly, we are flabbergasted that these results are appearing as high as they are, given the recent algorithm update that was supposed to better identify sites doing backlinking like this. These results currently cannot be found in Bing or Yahoo. Have the hackers found a crack in the recent update that they are exploiting in order to get to the top?

    Could you please share some insights on what is going on, and why I am not getting a response from Google to my post in the Webmaster Help Forum?

    Thank you,
    e-Commerce Dept
    Soft Surroundings

  335. After watching things for a while now across many of our clients’ sites, I can say that overall the changes are neutral to very positive for search engine traffic. I am still finding content-scraping sites that use some of our content and products, but they do not seem to be outranking us on any of the keyword terms we consider important. So… current opinion: nice change, Google.

  336. Matt, I am glad to see you guys address this and hopefully it will get better.

  337. We are one of the sites adversely affected and are still reeling after a 50% decline in traffic overnight. Like a lot of sites that are still struggling to make a living and are at the mercy of Google rankings, we’ve added content that we think serves our target market: some 4,000 articles on parenting from dads’ perspective. We have articles written by published authors as well as many articles and thoughts written by myself, based on my experiences with my kids. Ironically, the GreatDad news feed is used by Google News.

    I can’t help thinking that Google search results now might have fewer link farms in them, but that a bad result from a major site now appears more frequently (for example, a review of a stroller from Amazon instead of the result the searcher really wanted: a “review of stroller x for dads”).

    One result that particularly shocked and saddened us was the result for the word “dad.” We used to have the #1 position here as the leading parenting website for fathers. Now the first result is the Wikipedia entry for the Danish rock band D.A.D., followed by an IMDb entry for a second-rate movie, “Dad.” While I wouldn’t have been pleased had I been aced out by Dr. Sears or even Babycenter, losing to two off-topic choices was disappointing.

    I was hoping by now the algorithm would be adjusted, but after 5 years of hard work getting to a fairly decent 100,000 uniques per month, I find myself again just struggling to find an audience.

    While much is made of all the things we should do to satisfy Google (meta tags, descriptions, internal links), somewhere it’s lost that real publishers spend time writing and creating content. The ones who know how to play all the crazy Google games are the spammers and gamers who ruin the results for everyone.

  338. Hi Matt,

    After some unusual oscillation in our audience, we believe that we’re victims of other sites using or copying our content without authorization. Do you know of ways to avoid, or at least reduce, this kind of action? The site “ESPBR.COM” is using our content without authorization and affecting our Google position.


  339. Link Buyer and Affiliate Link Farmer…. how do sites like that get through a Panda update?

    This person spends tens of thousands of dollars a year buying links and ranks very well doing so. I thought that was all supposed to be against the rules, yet this person is getting rich doing it.

  340. Google rocks! Thanks.

  341. Content was considered King even before this change was made. When it comes to SEO, content occupies a huge space and plays a key role in developing a stable image on the web. I was a bit confused about the recent change in the Google algo; thank you for sharing this short but deep analysis and bringing the key elements into the limelight. This recent change in the algo damaged a huge number of spammy pages full of rephrased, copied content, and it worked like an energy drink for quality web pages.

  342. Original content:

    – Are quotes in a blog post or article (quoting other blogs, news articles, websites, etc.) perceived as duplicate content by the algorithm?

  343. The original source of the content is very important now, but it still will not trump the authority factor when it comes to the SERPs. I think the point of this update was to penalize content scrapers more than it was to give credit where credit SHOULD be due.

  344. Thanks for the info. All in all, I think we can say Google remains a mystery. Anyone know how they manage to keep things so secret?

  345. From what I’ve heard so far, this update affected only sites based in the U.S.
    As an SEO guy, I would say this is something I saw coming a few months ago, since this will be the change that makes the SERPs as relevant as possible.
    Now, after I warned most of my clients that their websites might get affected by this update, they keep waiting for it to happen – and it doesn’t, for now.
    When do you think Google will apply the update to the rest of the world?

  346. Very useful algorithm change. The web is full of copied-content sites, and some are highly ranked. I think the results will now be fairer.

  347. I have run a very legitimate online clothing business for 10 years. I write and photograph all of my own content and have been written up in all the major fashion mags, etc… but ever since your index update my site has disappeared from the search engine. I have studied SEO practices carefully – no paid links or anything even slightly fishy. I’m actually worried about my future for the first time ever. Maybe you guys can do something to help us small businesses more? I tried the forum once, but the Google webmaster there was so rude and clueless – no help at all. Maybe monitor them so they act professionally, are better educated, or something. Some of us legitimate sites are getting screwed in your new update.

  348. Somewhat new to this, but I must say it’s great to have a voice from Google, Matt – feedback, advice & explanations are rare these days. One thing I’ve noticed a lot (which was apparently addressed by one of the previous updates) is that there is still a proliferation of link farms out there – very obvious link farms, too. I know a number of sites that are falsely benefiting from these, and a more level playing field would be welcomed. However, let’s be positive – I think Google is doing a good job; it is, after all, a mammoth task.

  349. Is it possible that someone actually figured out Google’s algorithm? This site says they reverse-engineered it – could this be true?

    Not really sure what to think of this… anyone read this?


  350. Matt, one important question, the answer to which I can’t find on the net.
    Does the Google algorithm fade the value of a blog post over time? That is, does a post from two weeks ago remain as important today – is the date of posting a criterion in the valuation of blog articles? (This I have not experienced with websites.)

  351. Interesting developments, Matt. It’s good to see updates that place a positive light on search engine results pages rather than showing pages with low-quality results.
    I thank you for your hard work with the Google spam team =)

  352. Matt, truly a great post with lots of valuable information regarding Google’s algorithm – though I can’t see how anyone could get hold of this kind of information. It has been said Google switches it monthly, but since the details are hush-hush, no one actually knows when the switch happens. It’s a good thing Google decided to start detecting duplicate content and punishing those sites; this will prevent people from gaining SERP status by using others’ copyrighted works.

  353. Nice changes made 🙂

  354. Hi Matt, I am a great fan and enjoy reading the posts. My question (or comment) is as follows: on my site I get a lot of spam comments that don’t read well but have keywords in them. I understood that the nofollow attribute built into WordPress makes these comments worthless anyway, but does Google actually index these comments, and with the new algorithm change does it benefit the people doing it? I don’t approve the silly comments, but I have often wondered why people keep persisting in posting them. Perhaps if they were told they have no value they would stop doing it!! 😉 Cheers, Lindsay
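
    For anyone curious, you can sanity-check that your theme really does emit nofollow on commenter links with a few lines of the standard library – a minimal sketch (the markup shown is only illustrative of what WordPress typically emits):

```python
from html.parser import HTMLParser

class LinkRelCollector(HTMLParser):
    """Collect the rel attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.rels = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.rels.append(dict(attrs).get("rel", ""))

def all_links_nofollow(html):
    """Return True if the snippet has links and every one carries rel=nofollow."""
    collector = LinkRelCollector()
    collector.feed(html)
    return bool(collector.rels) and all("nofollow" in rel for rel in collector.rels)

# Typical markup for a commenter's link (illustrative):
snippet = '<a href="http://example.com" rel="external nofollow">commenter</a>'
print(all_links_nofollow(snippet))  # True
```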

  355. Thanks for such a nice post. I hope Google will take more serious action against auto-blogs and MFA sites. Landing on these sites, especially when you are trying to find solid results in a short time span, is a really pathetic experience. Anyway, whatever Google is doing is great for us, and thanks for the Panda update – it worked positively for me.

  356. OK, I discovered why my PageRank was 0: I was allowing XML-RPC pingbacks on my WordPress blog and I was being spammed… this resulted in multiple links from my blog to bad web pages.

  357. Google has done a great job keeping content as original as possible and bringing you related content that is dynamic and exactly what you searched for. This is just one more value-added feature Google provides. In my opinion, Google is the greatest search engine ever created.

  358. I have an old site that had been down for a month or so. I now have it up and running again, and I had to change the link structure. I added 301 redirects, but I was wondering how original content is judged: I have kept the content the same, but because these are new files, do I lose the original status?
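
    In case it helps anyone doing a similar move, the usual pattern is a one-to-one map from each retired URL to its new home, answered with a permanent 301 – a minimal sketch (the paths here are hypothetical):

```python
# Map each retired URL path to its new location (paths are hypothetical).
REDIRECTS = {
    "/old-article.html": "/articles/old-article/",
    "/category.php?id=7": "/category/widgets/",
}

def resolve(path):
    """Return (status, location): 301 plus the new URL for a moved page,
    or 200 plus the original path when no redirect applies."""
    if path in REDIRECTS:
        return 301, REDIRECTS[path]
    return 200, path

print(resolve("/old-article.html"))  # (301, '/articles/old-article/')
print(resolve("/about/"))            # (200, '/about/')
```

    The same mapping can be expressed as Apache RewriteRules or nginx `return 301` directives; the point is that every old URL answers with a single permanent redirect rather than a 404.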

  359. I am 100% in agreement with filtering out useless duplicate content. The one issue I have seen with this process, though, is that ecommerce sites, which often use manufacturers’ content as their baseline, have gotten hurt. The reason these sites use manufacturers’ content is that it is the most accurate description of the product. I believe sites with thousands of products like this can get badly hurt, even though they are not trying to be deceptive or tricky.

  360. Hi Matt

    My traffic dropped about 3 weeks ago. I noticed my homepage does not show up until page 3 when I issue the site:domain.tld command. I took a snippet of text from my homepage and entered it into Google, and a lot of scraper sites come up before mine. I have issued DMCA requests and managed to get some removed, but for others I have had no response. I have submitted a reinclusion request and also rewrote the homepage text, but still no luck getting my rankings or homepage back. Is there anything else I can do other than wait for a response to my reinclusion request? I’ve already had to lay off one member of staff and may have to lay someone else off next week. Your help would be greatly appreciated.

  361. If you want to see REAL low quality results, try a search on the latest Android build ID.

    At the moment, ‘GRK39F’

  362. Nice share, Matt. I think this Google algorithm change reminds us that content is still King.

  363. I absolutely agree with the recent comment on ecommerce sites that use manufacturers’ product descriptions. Try searching for a new release of a video game – for example FIFA 12, PES 2012, or Call of Duty: Modern Warfare 3. You will get dozens of pages of absolutely similar content, because everybody uses the manufacturer’s description. It doesn’t make sense to rewrite product descriptions if you have over a thousand products in your eshop. Besides, the manufacturer’s description is indeed the most accurate one, and in the case of a future release it’s the only description, as nobody has seen or played the game yet! The manufacturers have spent millions on marketing, including the development of product descriptions, text and pictures, and this is intended for use by resellers so that the product sells better. I think it would be right if search engines allowed different sites to use similar product descriptions without banning them. The true competition among these eshops should be the price they can offer and the quality of their service.

  364. My website traffic was also affected; it seems certain pages were hurt more than others. I realize Google has a tremendous job displaying reputable sites, but the way they go about it seems to benefit whomever or whatever topic they want to interfere with at the time.
    1) Judging a site’s credibility by who links to it is hideous. Anyone with a large pocketbook can purchase links, even on large PR9 websites. If you want to judge my company’s credibility, look at how long I have been online and whether I have purchased VeriSign seals to make certain my website is safe for my visitors – and, last but not least, pick up the phone and call my advertisers; don’t assume I am not a credible business, or that I’m a crook.
    2) Google is now an English professor, judging sites by misspelled words. You see misspelled words every day in this fast-paced, multi-tasking world of QWERTY keyboards. Weren’t we taught in the SEO world from the beginning to sometimes use misspellings, since the public is not perfect? (Why doesn’t Google put its money where its mouth is and pay taxes like it’s supposed to, so the ranks of American teachers will grow, not shrink? Or force products such as your own to have better spelling- and grammar-checking capabilities?)
    3) Google is also saying that if you sell links on your website you will be punished. Does this not include Google, the largest link seller in the world? And it seems sites that do nothing but sell Google AdWords are performing better than sites trying to help other small businesses. Could this be greed, Google?
    4) What about site scraping – how does Google know who wrote what first? I tried getting legal copyrights for my website to protect me from being copied by everyone in my market. Yes, they took my $35.00, but I was not able to get the copyrights, because there are no laws that protect legitimate, honest people like me who come up with their own ideas rather than copying everyone else in the market. You are saying you have more power than the U.S. government.
    5) Last but not least, I apologize if I may not have written this post to an A++ grade. After all, I depend on computer programs to double-check me.

  365. I’m not sure how well the algorithm is doing at penalizing scrapers. After being hit badly by Panda 2.2, I noticed that for 3 months my large, well-maintained website didn’t even rank in the first 500 results for a keyword where I used to be on the front page for years. I performed searches on the first sentence of article content on our site and just discovered a site that copied 98 articles this past July. Not only do they rank higher than we do for our own articles in certain searches, but I’ve also discovered that sometimes Google even excludes my site from the results! I put together a little presentation if anyone is curious:

  366. Nice work on this, Matt, and to all at Google for controlling spam.