Google Panda Update: Short Clicks and Long Clicks / Pogosticking
OK, folks, we have some old terms, and some old signals, that have become more important in the post-Panda era.
“Short Clicks – Long Clicks” – and “Pogosticking”.
Add these to your SEO dictionary if you don’t have them in there already.
I’ve started reading In the Plex (April 2011) by Steven Levy, and I came to a couple of interesting paragraphs that I’d like to share:
On the most basic level, Google could see how satisfied users were. To paraphrase Tolstoy, happy users were all the same. The best sign of their happiness was the “long click.” This occurred when someone went to a search result, ideally the top one, and did not return. That meant Google had successfully fulfilled the query. But unhappy users were unhappy in their own ways. Most telling were the “short clicks,” where a user followed a link and immediately returned to try again. “If people type something and then go and change their query, you could tell they aren’t happy,” says (Amit) Patel. “If they go to the next page of results, it’s a sign they’re not happy. You can use those signs that someone’s not happy with what we gave them to go back and study those cases and find places to improve search.”
We’ve known that Google has been looking at “short clicks” and “long clicks” for years… I just think that with the Google Panda updates, the measurement of those signals became much, much stronger.
There are two old articles worth reviewing as well. The first is by everyone’s favorite search-patent translator, Bill Slawski (SEO by the Sea). Bill wrote about Search Pogosticking and Search Previews in reference to a Yahoo patent back in November of 2008, where Bill says:
Search pogosticking is when a searcher bounces back and forth between a search results page at a search engine for a particular query and the pages listed in those search results.
A search engine could keep track of that kind of pogosticking activity in the data it collects in its log files or through a search toolbar, and use it to rerank the pages that show up in a search for that query.
And the second article is from Blind Five Year Old, from back in 2009, where he wrote about “Short Clicks vs. Long Clicks”.
They’re not peeking at bounce rates. Instead Google is measuring pogosticking activity by leveraging their current tracking mechanisms. Remember, Google already tracks the user, the search and the result clicked. All Google needed to do was to accurately model the time dimension.
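To make that “time dimension” idea concrete, here’s a minimal sketch of how short and long clicks could be separated in a click log. This is purely illustrative Python: the record fields and the 30-second cutoff are my assumptions; Google has never published its actual log format or thresholds.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical click-log record; the real log schema is not public.
@dataclass
class ClickEvent:
    query: str
    result_url: str
    clicked_at: float             # seconds since epoch
    returned_at: Optional[float]  # when the user got back to the SERP; None = never

SHORT_CLICK_SECONDS = 30.0        # assumed cutoff, purely illustrative

def classify(click: ClickEvent) -> str:
    """Label a click 'long' (satisfied) or 'short' (pogostick candidate)."""
    if click.returned_at is None:
        return "long"             # never came back: the happiest signal
    dwell = click.returned_at - click.clicked_at
    return "short" if dwell < SHORT_CLICK_SECONDS else "long"

def short_click_rate(clicks: list) -> float:
    """Fraction of short clicks on a result; per the Yahoo patent, a rate
    like this could feed back into reranking that query's results."""
    if not clicks:
        return 0.0
    return sum(classify(c) == "short" for c in clicks) / len(clicks)
```

Aggregated per query-and-result pair, a rate like that is exactly the kind of number the Yahoo patent describes using to rerank the pages that show up for that query.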
Just a few days ago, Microsoft was granted a patent called Automated satisfaction measurement for web search. Bill nicely translates this patent in a post, where he states:
Implicit feedback about how satisfied a searcher is with a web page that they found in a search result might be collected by a search engine. This kind of information isn’t provided explicitly by a searcher, but rather is implicit in the searcher’s actions or inactions.
And in that same post, Bill concludes:
Google’s Amit Singhal and Matt Cutts told us in The ‘Panda’ That Hates Farms: A Q&A With Google’s Top Search Engineers that the Panda update looks “for signals that recreate that same intuition, that same experience that you have as an engineer and that users have.” It’s possible that these signals are using some kind of classification system that might either incorporate user behavior signals into page rankings, or use it as feedback to evaluate the signals chosen to rerank pages in search results.
The kind of algorithmic approach that I pointed to in Searching Google for Big Panda and Finding Decision Trees may be in part what’s behind the Panda update, but it’s clear that user behavior plays a role in how a page or site might be evaluated by Google.
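Bill’s mention of decision trees invites a toy illustration. To be clear, this is my own sketch: the features, thresholds, and tree shape are entirely made up, and there’s no public confirmation that Panda works this way.

```python
# Toy, hand-rolled "decision tree" over invented quality signals, purely to
# illustrate how user-behavior features could feed a page classifier.
def panda_quality_label(short_click_rate: float, ad_density: float,
                        content_depth: float) -> str:
    """Classify a page with made-up thresholds; not Google's actual logic."""
    if short_click_rate > 0.6:                      # most visitors pogostick away
        return "low quality" if ad_density > 0.3 else "borderline"
    if content_depth < 0.2:                         # thin, shallow content
        return "low quality"
    return "ok"

# Example: heavy pogosticking plus heavy ads trips the first branch.
print(panda_quality_label(short_click_rate=0.7, ad_density=0.4, content_depth=0.5))
```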
I also thought I’d include another paragraph from the In the Plex book worth noting:
In between the major rewrites, Google’s search quality teams constantly produced incremental improvements. “We’re looking at queries all the time and we find failures and say, ‘Why, why, why?'” says Singhal, who himself became involved in a perpetual quest to locate poor results that might have indicated bigger problems in the algorithm. He got into the habit of sampling the logs kept by Google on its users’ behavior and extracting random queries. When testing a new version of the search engine, his experimentation intensified. He would compile a list of tens of thousands of queries, simultaneously running them on the current version of Google search and the proposed revision. The secondary benefit of such a test was that it often detected a pattern of failure in certain queries.
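What Levy describes amounts to a side-by-side evaluation harness. A rough sketch of the idea, where the two search functions are hypothetical stand-ins rather than anything of Google’s:

```python
# Illustrative sketch of the side-by-side evaluation Levy describes: run the
# same sampled queries through two engine versions and surface the diffs.
# search_current and search_proposed are hypothetical stand-ins for ranking
# functions that return an ordered list of result URLs.
def compare_engines(queries, search_current, search_proposed, top_n=10):
    """Yield each query whose top results changed between the two versions."""
    for q in queries:
        old = search_current(q)[:top_n]
        new = search_proposed(q)[:top_n]
        if old != new:
            yield q, old, new

# Grouping the flagged queries (by topic, length, language, etc.) is what would
# surface the "pattern of failure in certain queries" Levy mentions.
```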
I don’t have much to add to all these quotes… except that I still support my original theory that the biggest factor in the Panda update was the tweaking of the importance of this signal:
Those who search Google… click on a search result listing… then go back to Google, and click on some other result… I think this is what can hurt you the most… this is not bounce rate (bounce rate is when someone leaves your site and goes anywhere)… I am only concerned with those who leave your site and go back to the Google search and click on someone else… Google can give all sorts of great content advice… and we’ll take it and say “Thanks for the tips”… but I still think that the biggest factor in Panda is “short clicks” and “long clicks” and pogosticking.
I’ve been writing a lot lately on the Google Panda update… including a few posts that I did on Saturday and Sunday that you may have missed… see these:
- Google Panda Update – User Behavior and Other Signals – Examines user behavior as a factor, query entry, SERP behavior, and click-backs. Also looks into when you can get out of being Pandasized.
- Thoughts and Solutions from Jim Boykin – Post discussing the background of the Panda update, including the supplemental results, Caffeine, and beyond. This post also looks at “if I were Google, what I would look at,” and at solutions.
- Google Panda Update – An Overview of Analytics of 5 Panda II Affected Sites – This post discusses the analytics of five affected sites.
- Google Panda Update – Google’s Content Guidance and Jim’s Take – This article goes over the list of 23 questions of the ‘Google Mindset’ for the Panda update and outlines Jim’s thoughts on each.
- Google Panda Update: Panda’s Punitive Punishment of Good Content – This post discusses how Panda will punish your good content if you have bad content as well.
- Losing Clients to Panda. I Just Lost $17,500/Month – Sharing of experience on the Panda update, as well as reflection on previous updates.
- Google Agent Rank and Reputational Scores… It’s About Content and Writers and Panda! – Breakdown of what ‘agent rank’ means and a view through a Google Panda lens.
- Google Panda Update: Content + Design = Usable, Trustworthy Websites – Discussion of website usability and its implications for the Google Panda update.
- Google Panda Update: Your Site is Going to Survive (funny) – Jim’s Panda-update remake of “Country Boy Can Survive.”
12 Responses
One possible test for the effect of pogosticking would be to test the annoying exit/abandon popups. They would “slow” a user who tried to return to the search. My sites are “brand” sites and I’m not about to do this, but I’d be curious to know if anyone has tried.
Hi Jim, you may well be right that pogosticking is a key factor, but how do you stop visitors doing it?
Jim, this whole bounce rate thing has nothing to do with this. I can show you results where the top site is 100% PURE STOLEN CONTENT/JUNK CONTENT, with an AdSense leaderboard 3x at the top, followed by link units. They are still ranking and raking in money.
By the time Google cleans up the mess, they’ll have made enough.
The Panda update is a mess: my homepage is not ranking for its main term, but some junk site is, and for that same query a deep page of mine is ranking, but not the homepage.
This is a website with links from the NYTimes, hundreds of universities, and tons of .govs, all editorial, earned because of the content.
Bounce rate is 40% for our site and site speed is 93 according to PageSpeed. When this type of thing happens to an authority site, you know the update is more of a mess than anything.
AziAsimi,
A few things… your bounce rate that shows in your analytics is 40%… but that doesn’t mean anything… we had a client hit who had a bounce rate of 17% showing… bounce rate is NOT the issue.
The issue is when someone visits your site… and then goes back to Google and clicks on another result and doesn’t return to you.
There are also other measurements that are not calculated into bounce rate… see my other posts… and the one on short clicks and long clicks…
People are confusing what I’m saying… it’s not bounce rate… it’s another measurement…
FYI… the guy with the 17% bounce rate had a site where visitors went 2 clicks in… then backed right out to Google… at first glance you’d see 17%… hey, great… because any page you’d land on, you’d go 1 or 2 more clicks in… read a sales pitch… 3% would convert… and 97% probably went right back to Google and never returned.
The number we’d need we don’t have… only Google has… think usability… think unique… think author… think user… and then use your gut instincts to tell you what you need to change… that’s the answer.
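To see why a low bounce rate can hide exactly what Jim describes above, here’s a small illustrative sketch that computes both numbers from the same set of visits. The session fields and the traffic mix are invented to mirror his 17% example:

```python
from dataclasses import dataclass

# Invented session record for illustration; analytics packages expose similar data.
@dataclass
class Session:
    pages_viewed: int        # pages seen on your site during this visit
    returned_to_serp: bool   # went back to Google and clicked a competitor

def bounce_rate(sessions):
    """Classic bounce rate: one-page visits, regardless of where they went next."""
    return sum(s.pages_viewed == 1 for s in sessions) / len(sessions)

def pogostick_rate(sessions):
    """Share of visits that ended back at the SERP, however many pages were viewed."""
    return sum(s.returned_to_serp for s in sessions) / len(sessions)

# Jim's example: visitors click 2 pages in, read the pitch, then back out to Google.
sessions = (
    [Session(pages_viewed=1, returned_to_serp=True)] * 17    # the "bounces"
    + [Session(pages_viewed=2, returned_to_serp=True)] * 80  # 2 clicks in, then gone
    + [Session(pages_viewed=3, returned_to_serp=False)] * 3  # the ~3% who convert
)
print(f"bounce rate:    {bounce_rate(sessions):.0%}")    # 17% -- looks great
print(f"pogostick rate: {pogostick_rate(sessions):.0%}") # 97% -- the number only Google has
```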
I think Google can use more than that to judge what I would call ‘bad bounces’. Suppose you rank #1 for a keyword, and Google finds your ‘bounce-back rate’ (the rate at which visitors go back to that same Google search) is higher than that of the pages ranking below you. That may be one signal of a bad bounce. Then suppose the people who visited your #1 ranking page, went back, and visited any other page below you returned to that same search only (say) 20% of the time.
But when someone went first to another ranking page, then came back to the search results and went to your #1 ranking page, and then returned to the search results (say) 50% of the time, that means on average you are not a good result even as a second choice.
Now, this was just one of your pages for just one keyword, and Panda hits sitewide. So suppose Google finds you have lots of pages that behave similarly: all have a high ‘bad bounce rate’ AS COMPARED TO OTHERS. I think it’s essential for Google to compare you to others. That’s the only way they can discount other things, like the user not being in shopping mode and just researching, or just reading articles (like ones on the Panda update!). If you are performing worse than most others ranked below you, and this is happening on lots of keywords and lots of pages, that is a strong confirmation signal for Google.
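That comparative logic could be sketched roughly like this. Everything here is hypothetical: the function names, the 50%/20% figures taken from the comment, and the simple averaging are for illustration only.

```python
# Hypothetical sketch of the commenter's idea: a page's return-to-SERP rate
# only means something relative to the other results ranked for the same query.
def bad_bounce_rate(clicks_on_page: int, returns_to_serp: int) -> float:
    """Fraction of clicks on a result that bounced back to the same search."""
    return returns_to_serp / clicks_on_page if clicks_on_page else 0.0

def worse_than_peers(your_rate: float, peer_rates: list) -> bool:
    """True if you pogostick worse than the results ranked around you.
    Comparing to peers discounts query-level noise (research mode, casual
    article reading), since it affects every result for that query equally."""
    if not peer_rates:
        return False
    return your_rate > sum(peer_rates) / len(peer_rates)

# The comment's numbers: 50% of your visitors bounce back to the search,
# while the results below you see only ~20% bounce-backs.
yours = bad_bounce_rate(clicks_on_page=200, returns_to_serp=100)  # 0.50
peers = [0.20, 0.22, 0.18]
print(worse_than_peers(yours, peers))  # True -> one confirmation signal
```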
But this may be just confirmation. And I think this is something that they may have incorporated on April 11th, when they claim to have incorporated user behavior signals. On Feb 24th they may not have used this, which is one reason why some sites (in the US) were hit on April 11. Maybe those sites (like eHow) had been violating some things like shallow content, many ads, etc., but somehow they got the benefit of the doubt. But when Google found them guilty in the Apr 11 user behavior update, it had all the info it needed to hit them again.
It’s also worth noting here that Google Analytics turned off the Benchmarking option in March 2011 (before this Apr 11 update). It may be coincidence, but the Benchmarking report was a good way to compare your bounces and time on site with other sites like yours (I know those were only typical bounces, but it was still better than nothing).
In Matt’s latest Twitter comment to Critchlow, he mentioned that G needs to ‘regenerate data’; that likely means the user behavior data, because that, I assume, would ideally be collected over a period of time so that they are able to discount the noise.