Archives for April 2007

Five tweaks Amazon needs to make

Amazon, I love ya. Amazon Prime is a fantastic program. Once you’ve paid your Amazon Prime fee, two-day shipping is free for a year. Not having to worry about shipping fees means that I buy a ton more stuff. What’s that? Tara Calishain has a new book on Information Trapping? I just buy it on impulse now.

Amazon, my affection for you means that I’ll give you some feedback for the low, low price of free!

  1. I like certain authors (William Gibson, Terry Pratchett, Neil Gaiman, etc.). I’ll basically buy every new book from about 7-10 different authors. Give me a better way to track new books from a favorite author. At one point in the past I created Amazon email alerts for new books from a few specific authors, but then you started sending me emails for “related” authors. As far as I could tell, there was no way to get alerts about new books without getting the unwanted “you might also like this author” emails. If you let me watch specific authors, I’d buy the new John Brockman book from Amazon instead of stumbling across the book in a bookstore.
  2. While you’re at it, provide an RSS feed for that “new books by this author” info instead of just an email. C’mon, I can get an RSS feed for that funky Gold Box feature but not for my favorite authors?
  3. My bank just sent me a new credit card because “security lapses have occurred involving … a retail merchant where you recently used your card.” Never mind that my bank doesn’t tell me which retail merchant had the security lapse; that’s a whinge for another time. But now I’m doing the “go to places that have my credit card and change the credit card number” dance. To make a long story short, I click on “My account” and the first thing I see is “Change payment method,” so I click it, but it takes me to the “Open and recently shipped orders” screen. That link is quite poor/annoying. I’d 1) change it to say “Change payment method for an order,” or 2) change the UI of the page so that the “Change payment method” link in the “Where’s My Stuff?” section is visually distinct from the global payment settings, or 3) make the link go to the right place: a screen to actually change the payment method for an order. In fact, of the 14 links in the “Where’s My Stuff?” section, eight take me to the exact same page. Fix that.
  4. Deputize an Amazon blogger to stop by and give some feedback on posts like this. 🙂
  5. Okay, set aside the UI advice from point #3 for a moment. I can delete a credit card, but I can’t add a new one without ordering something? From an Amazon page:

    Note: if you’d like to add a completely new credit or debit card or update a debit card issue number, you need to wait until the next time you place an order. On the “pay” page, select the radio button below “Or enter a new card” and enter the full card details. You can then use Your Account to delete any out-of-date cards.

    PayPal and Google Checkout let me add a new card in seconds, but I have to go find something to buy before Amazon will let me add a new card? That’s really bad. Let me add a new credit card without ordering a book. Can anyone at Amazon explain to me why this is?

If this stuff annoyed me, it probably annoys other people too. I’m an Amazon fan, so someone stopping by and saying “we’re looking into it” is half the battle, but fixing some of this stuff would make me happier with Amazon and let me spend more money there. Heck, I’m thinking of trying out Amazon’s S3 storage service to make an offsite backup of my blog database, so if someone at Amazon would let me know that they’re listening and responding, it’s more likely that I’ll try other Amazon services. That leads me to my bonus feedback, which is:

6. Make Amazon’s S3 storage service compatible with scp from a Unix command line. It’s a storage service. Why should I need to study code samples in Python or Ruby? If you made it “just work” on port 22, where ssh/scp hang out, backups would be so much easier. Lots more people would use it, not just smart, technical people who are willing to code neat hacks.

The S3 team already did a smart thing by making it possible to host public web images on Amazon’s S3 service. Not offering scp effectively hampers the growth/uptake of the system. Why would Amazon want to do that? The only reason I can think of not to allow scp uploads is that maybe Amazon wants to host higher-value data (e.g., data belonging to startups) rather than become the mass backup service for the world. But if that’s the issue, Amazon could still offer scp-upload storage at a slightly higher cost. Anyway, I’m thinking out loud now. C’mon, S3 team, add scp support. Amazon, lemme know if I’m wrong about anything or if any of these suggested tweaks come true. 🙂
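For a sense of the contrast, here’s a rough Python sketch of what a minimal programmatic S3 upload looks like (it uses boto3, an AWS library that came along later, and the bucket name, file name, and key are made-up placeholders), compared with the single scp command I’m wishing for:

import boto3

# Credentials are assumed to come from the environment or AWS config.
s3 = boto3.client("s3")

# Upload a local database dump to a (hypothetical) bucket.
s3.upload_file(
    Filename="blog-db-backup.sql.gz",    # local file to upload
    Bucket="my-offsite-backups",         # placeholder bucket name
    Key="backups/blog-db-backup.sql.gz"  # object key inside the bucket
)

# Versus the hypothetical one-liner if S3 spoke scp on port 22:
#   scp blog-db-backup.sql.gz s3.amazonaws.com:backups/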

Anyone have other suggested tweaks for Amazon?

Url removal: yah!

For now, I’m just going to say “hot damn.” The smart folks on the webmaster console team have migrated Google’s url removal tool into the webmaster console. Along the way, it’s picking up a *lot* of nice new functionality. I’ll talk about it more pretty soon, because I have a fun story to tell, but in the meantime you can read more about it from the official webmaster blog or on Search Engine Land.

Yah!

SEO Lookalikes

How did I miss SEO Wife, the blog of Barbara Boser? Barbara is doing a very funny series of “this SEO looks like this celebrity” posts, e.g. Todd Friesen looks like Brendan Fraser.

In that spirit, I’d like to say that I think David Naylor looks like Nick Frost:

[Photos: David Naylor and Nick Frost]

Fookin’ yeh, right? I haven’t watched Hot Fuzz yet, but I was watching Shaun of the Dead a while ago and kept getting DaveN flashbacks. 🙂

Google Maps for walks

I like that maps are so much more fun than they were a couple years ago. You can drag them, zoom them, annotate them, mash them up, and fly over where you grew up. One site I’ve been enjoying lately is Gmaps Pedometer. You just click to mark out where you walked and it will give you
– how far you walked
– the change in elevation
– how many calories you burned

It’s also easy to save a map and send it to friends, with no registration required. For example, here’s a lap around a chunk of the Googleplex:
http://www.gmap-pedometer.com/?r=866570

To make a map, use the normal controls to move or zoom. When you’re ready to make a walking record, click “Start recording.” Each double-click adds a new waypoint, and you can still single-click and drag the map just like you’re used to. If you make a mistake, there’s an “Undo last point” button. The UI is great.

Here’s a walk that I did a couple weeks ago:
http://www.gmap-pedometer.com/?r=829446. Seven miles, ~1200 feet of elevation, ~1000 calories, in 2.5 hours. That’s nothing for some of the more fit people here, but I was proud. 🙂 It helps to have an iPod with energetic music. What’s your favorite maps mashup?

Update 7/15/2007: I was clearing off my desktop and found this picture. Might as well put it up:

[Image: Google Maps pedometer]

Robots.txt analysis tool

This is just a reminder that if you see a problem with your site, one of the first places you may want to look is our webmaster console. In some cases, Google can alert site owners in the webmaster console if we see issues such as hidden text. In a case I saw just yesterday, the robots.txt analysis tool in the webmaster console was a huge help in solving a problem. Here’s an example of debugging a robots.txt issue.

Someone was asking about a particular result in our search results. The result didn’t show a description, and the “Cached” link was missing too. Often when that happens, it’s because the page wasn’t crawled, so the first thing I check is the robots.txt file. Loading that file in the browser showed me something that looked like this:

# robots.txt for http://www.example.com

User-agent: *

User-agent: Wget
Disallow: /

At first glance, the robots.txt file looked okay, but I did notice one strange thing. Normally robots.txt files have pairs of “User-Agent:” and “Disallow:” lines, e.g.

User-agent: Googlebot
Disallow: /cgi-bin/

In this case, there was a “User-agent: *” by itself (which matches every search engine crawler that abides by robots.txt), and the next Disallow line was “Disallow: /” (which blocks an entire site). I wasn’t positive how Google would treat that file, so I hopped over to the webmaster console and clicked on the “robots.txt analysis” link. I copied and pasted the robots.txt file into the text box as if I were going to use it on my own site. When I clicked “Check,” here’s what Google told me:

[Image: example of a site blocking itself with robots.txt]

Sure enough, that “User-Agent: *” followed by the “Disallow: /” (even with a different user-agent in between) was enough for Googlebot not to crawl the site.

In a way, it makes sense. If you removed some whitespace in the robots.txt file, it could also look like

User-agent: *
User-agent: Wget
Disallow: /

and it’s pretty understandable that our crawler would interpret that conservatively.
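If you want to see that conservative grouping outside of the webmaster console, here’s a small illustrative sketch using Python’s standard urllib.robotparser module (just an ordinary robots.txt parser, not the code Googlebot uses): given the whitespace-collapsed file above, it treats both User-agent lines as one group sharing the single Disallow rule, so the whole site comes back as blocked for both agents.

from urllib.robotparser import RobotFileParser

# The whitespace-collapsed robots.txt from above: two User-agent lines
# grouped with a single Disallow rule.
robots_txt = """\
User-agent: *
User-agent: Wget
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Both agents fall under the grouped record, so nothing may be crawled.
print(parser.can_fetch("Googlebot", "http://www.example.com/"))  # False
print(parser.can_fetch("Wget", "http://www.example.com/"))       # False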

The takeaway is that if you see a page show up as url-only, with no snippet or cached-page link, I’d check for problems with your robots.txt file first. The Google webmaster console also shows crawl errors, which can be another way to self-diagnose crawl issues.

P.S. I promised Vanessa that I’d mention that the robots.txt tool doesn’t support the autodiscovery aspect of sitemaps yet, but it will soon. 🙂 I’ll talk about autodiscovery and sitemaps at some point, but personally I think it’s a great development for site owners, because it makes it easier to tell many search engines about your site’s urls.
