Martin here at Arena has just been looking into how Google go about blocking the keyword in the referrer. We did a search for the term “seo” at https://www.google.com whilst logged into a Google Account. With Google Instant turned on, it created the following URL…

https://www.google.com/search?sclient=psy-ab&hl=en&site=&source=hp&q=seo&btnK=Google+Search

Wikipedia was top of the results, so we had a look at the HTML element containing the result, which looked like this…
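The screenshot isn’t reproduced here, but a simplified reconstruction of that anchor looks something like this (the real markup carries more attributes; the two tracking values shown are lifted from the referrer URL further down):

    <h3 class="r">
      <a href="http://en.wikipedia.org/wiki/Search_engine_optimization"
         onmousedown="return rwt(this,'0CGoQFjAA','AFQjCNHfIpCo_Ap336oSDlmNqh1STSriIg')">
        Search engine optimization - Wikipedia, the free encyclopedia
      </a>
    </h3>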

You’ll see a bit of JavaScript there bound to the ‘onmousedown’ event, which calls a function called ‘rwt(…)’. Here is the function declaration…
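The real declaration is minified and takes many more arguments; what follows is only a stripped-down, hypothetical sketch of its effect:

    // Hypothetical, simplified sketch of rwt(); the real function is
    // minified and takes far more parameters than shown here.
    function rwt(link, ved, usg) {
      var dest = link.href;
      // Rewrite the href so the click goes via /url?..., with q= left empty
      link.href = '/url?sa=t&rct=j&q=&esrc=s&source=web'
                + '&ved=' + encodeURIComponent(ved)
                + '&url=' + encodeURIComponent(dest)
                + '&usg=' + encodeURIComponent(usg);
      return true; // allow the click to proceed to the rewritten URL
    }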

 
There is a lot going on in there, but essentially it redirects the browser to a new URL with the ‘q=’ parameter wiped, which then redirects to the desired page you clicked on in the search result (passed via the ‘url=’ parameter). So, from the search result itself (Wikipedia in this case), the referrer becomes…

http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CGoQFjAA&url=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FSearch_engine_optimization&ei=GpqeTqSzJsuFhQettaBN&usg=AFQjCNHfIpCo_Ap336oSDlmNqh1STSriIg

Meaning the ‘q=’ parameter exists, but is empty! This is very bad news for our natural search scripts (and analytics packages full stop!): the only thing we’ll be able to tell from the referrer is that the visitor came from Google, so any brand/generic keyword logic goes out of the window. The only thing we can do is isolate this traffic and track it differently, as we know it’ll be coming from ‘www.google.com/url…’ instead of ‘www.google.com/search…’.
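A minimal sketch of how you might isolate that traffic client-side (illustrative only; adapt it to whatever your analytics stack expects):

    // Classify a Google referrer: keyword present, or withheld via /url?q=
    function classifyGoogleReferrer(referrer) {
      var a = document.createElement('a');
      a.href = referrer; // let the browser parse the URL for us
      if (!/(^|\.)google\./.test(a.hostname)) return { source: 'other' };
      var match = referrer.match(/[?&]q=([^&]*)/);
      var q = match ? match[1] : '';
      if (a.pathname.indexOf('/url') === 0 || q === '') {
        // Keyword withheld: count the visit, skip brand/generic logic
        return { source: 'google', keyword: null };
      }
      return { source: 'google',
               keyword: decodeURIComponent(q.replace(/\+/g, ' ')) };
    }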

This is one of my favourite SEO mythbusters.

It is the notion that preventing Google from crawling a page via your robots.txt will prevent it from appearing in organic search results. It’s rare to come across examples, but today I have, prompting me to write this blog post. In case there is any confusion, Matt Cutts explains it clearly in this Webmaster video. He also runs over the use of the meta noindex tag.

Here is the example. ESPN Shop are disallowing all user agents (UAs) from crawling the domain ESPNshop.co.uk.
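A blanket disallow like theirs looks like this:

    # robots.txt on ESPNshop.co.uk: stops all user agents crawling,
    # but does not stop the URL itself appearing in Google's index
    User-agent: *
    Disallow: /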

However, if you do some very strict searches, Google still shows the URL in search results.

I love quick wins. Remember, if there is a page you want removed from Google organic search results, either:

- Add a meta noindex tag within the page’s <head> element

- Remove the URL in Google Webmaster Tools
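For the first option, the tag sits in the page’s <head>; crucially, the page must remain crawlable so that Google can actually see it:

    <head>
      <!-- Tells Google to drop this page from its index. Do NOT also
           block the page in robots.txt, or the crawler will never
           see this tag. -->
      <meta name="robots" content="noindex">
    </head>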

 

Big blue arrow pointing to Google Plus

Posted by bowdeni in SEO Added September 20, 2011 - (7 Comments)

It looks like Google are giving users a subtle hint as to how they can access Google Plus. The arrow is animated; when you load the page, the arrow is drawn.

The History of F-Commerce

Posted by bowdeni in General Added September 18, 2011 - (0 Comments)


An infographic about F-Commerce

Catching and Resuscitating Dropped Domains

Posted by bowdeni in SEO Added August 29, 2011 - (1 Comment)

Catching dropped domains can instantly provide you not only with a solid backlink profile but also decent referral traffic. In this blog post I provide some advice on how to catch them and bring them back to life, illustrated with a real-life example of my own.

Within a niche one of my clients works in, a satirical one-page website generated hundreds of authoritative links (TBPR 5, if that floats your boat). I first saw it on Reddit and wished I had come up with the idea myself, as it was a rare testament to the fact that content is king.

In a moment of idle web surfing, I went back to revisit the site only to see that the domain was pending deletion. For those unaware, this is generally the ‘lifecycle’ of a domain.

  1. Available
  2. Registered
  3. Expiration (around 40 days)
  4. Redemption period (around 30 days)
  5. Pending deletion (around 5 days)
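Using the approximate windows above, you can roughly estimate when a domain will finally drop (a sketch only; exact timings vary by registrar and TLD):

    // Rough estimate of when an expired domain becomes available,
    // based on the approximate windows listed above
    function estimateDropDate(expiryDate) {
      var day = 24 * 60 * 60 * 1000;
      var graceDays = 40;      // expiration / grace period
      var redemptionDays = 30; // redemption period
      var pendingDays = 5;     // pending deletion
      return new Date(expiryDate.getTime()
                    + (graceDays + redemptionDays + pendingDays) * day);
    }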

Until it hits pending deletion, the owner can claw back their domain. Fortunate timing for me: it was already pending deletion, so I knew it would just be a matter of time before it became available.

At this point, I wouldn’t recommend just hanging around. Instead, use a number of backordering services and, where possible, all of them. Three notable companies are Pool.com, NameJet.com and SnapNames.com. Generally you don’t pay unless they catch it, in which case you are quids in. If two or more people attempt to backorder it, it goes to auction. That’s what happened to me on Pool.com, and so it was set to go to the highest bidder.

My auction went on for about 45 minutes and ended up at around £230 ($400). Anyone who is familiar with paid linking will know this to be good value. Not that I was concerned; I wasn’t buying it for link equity, just for fun and lulz.

I haven’t done this technique enough times to say that how I resuscitated it was definitely the cause, but there is logic behind it. When I speak of resuscitating a dropped domain, I mean that its TBPR returns; from this, I take it that Google is algorithmically valuing the domain’s page and domain authority, plus TrustRank, as it did before it dropped. Here is what I did…

  • Visit archive.org and recover what content you can (see the sketch after this list)
  • This includes page titles and meta descriptions
  • My site was ODP listed, so I matched natural search copy with that
  • Don’t add any links (yet) until the domain has been brought back to life
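For the first step, the Wayback Machine’s availability API will tell you the closest snapshot it holds of a URL; a minimal sketch:

    // Minimal sketch: find the most recent Wayback Machine snapshot
    // of a dropped domain before rebuilding its content
    fetch('https://archive.org/wayback/available?url=example.com')
      .then(function (res) { return res.json(); })
      .then(function (data) {
        var snap = data.archived_snapshots.closest;
        if (snap && snap.available) {
          console.log('Closest snapshot:', snap.url, snap.timestamp);
        } else {
          console.log('No snapshot found');
        }
      });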

Sure enough, come the next TBPR update, that little green box was back. The site was receiving around 3,000 visits a month from referral links and continues to grow. Think about how you can use this approach, but don’t abuse it.

  1. Scrape around the Internet for sites with authoritative links that have dropped
  2. Use automated tools to keep an eye on content that goes genuinely, unintentionally viral but may be likely to drop in the future
  3. Harvest a list of dropping domains and pull SEOmoz data in to analyse their strength, drawing up a list of acquisitions (see the sketch after this list)
  4. (My favourite) Take the referral traffic the site was getting and use it to get eyeballs on your new content. If you’ve got a ton of referral traffic from places like Reddit, invite people to check out new content. With that, you can amplify the new content you are creating and leverage more benefit.
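For idea 3, something along these lines would do the sorting; fetchMozMetrics() is a hypothetical helper standing in for whichever link-data API you actually use:

    // Sketch: rank dropping domains by authority, strongest first.
    // fetchMozMetrics() is hypothetical; swap in your real data source.
    function shortlistDomains(droppingDomains) {
      return Promise.all(droppingDomains.map(function (domain) {
        return fetchMozMetrics(domain).then(function (metrics) {
          return { domain: domain, authority: metrics.domainAuthority };
        });
      })).then(function (scored) {
        return scored.sort(function (a, b) {
          return b.authority - a.authority;
        });
      });
    }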

Has anyone else been served these results? We’ve also been seeing video results in P1-P3.

Haven’t seen this before.

This is what appears when you click on request call…

… and finally request email.

EDIT: Our Google rep has just confirmed this to be new :) They’re called communication ad extensions and are a free way to get leads; they therefore won’t appear in the AdWords UI, and they only appear for 10% of queries. Currently in alpha.

Another awesome infographic Arena Quantum have produced for a client; congratulations to those who worked on it.

 

Grow your own

Grow your own infographic from LoveTheGarden.com

Here at Arena Quantum we like to do multi-click attribution, providing an insight into the true value of generics. To do this, we require an ad server. Through a mistake made while putting tracking on, we discovered a very unlikely ranking signal we had not considered before. We uncovered evidence to suggest that Google treats the URL you specify when tracking a pageview in the same way as a canonical tag.

For example, we placed the following code on superwidgets.com/redwidgets. Note that the domain we placed it on differs from the URL we wanted to track it as.
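The snippet itself isn’t reproduced here, but it was along these lines (the account ID and path are placeholders; the point is the full URL passed to _trackPageview, which sits on a different domain to the page hosting the code):

    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXXXX-1']); // placeholder account ID
      // The code sits on superwidgets.com/redwidgets, yet the pageview
      // is tracked as a URL on cheapwidgets.com (path illustrative)
      _gaq.push(['_trackPageview', 'http://www.cheapwidgets.com/redwidgets']);
    </script>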

In this example, we also own cheapwidgets.com. Only when examining the inbound links being reported by Google Webmaster Tools did I notice that a link from cheapwidgets.com was being reported.

I have scoured the web page on superwidgets.com, and the only reference to cheapwidgets.com is in the tracking code. Therefore it looks like _trackPageview can act like a cross-domain canonical. Key takeaway? Double-check your tracking code to make sure you aren’t leaking any link juice.

Historically, Google has used links as a proxy to determine the most relevant and authoritative websites to return for a user’s search query. In late 2010, Google and Bing confirmed that they do indeed now use social signals as a ranking factor, but only now are those in the SEO community starting to identify case studies where social signals have a clear influence on search results.

A new case study can be added: Money Supermarket. Between 10th and 16th January, Money Supermarket held a free prize draw. Users had to retweet a message (see below) containing a link to the car insurance product page for a chance of winning a year’s free car insurance.

Money Supermarket Tweet

I believe this generated around 2,500 RTs over the seven days. The impact it had on Money Supermarket’s ranking for the search query ‘car insurance’ is most interesting.

Between 21st September and 11th January, Money Supermarket had an average position of 6th for the search query ‘car insurance’. During this time, their best position was 4th, held for just a couple of days, while their lowest was 9th. Just two days after the competition ended, Money Supermarket started ranking 1st.

Rankings Graph

This 1st place ranking was held until 14th March, when Money Supermarket dropped back down to 3rd. How did Money Supermarket react to this drop? Another Twitter competition, running from 14th to 20th March. Money Supermarket are now ranking 2nd, and I’ll update this post after the competition has ended.

As always, correlation does not necessarily imply causation. There may have been other signals playing an influential role, but this case study certainly adds further evidence of the importance of social signals.