Open Kinect Contest: $2000 in prizes

I’m starting a contest for people who do cool things with a Kinect. See the details below.

Open Kinect Logo

Before I joined Google, I was a grad student interested in topics like computer vision, motion self-tracking, laser scanners–basically any neat or unusual sensing device. That’s why I was so excited to hear about the Kinect, which is a low-cost ($150) peripheral for the Xbox. The output from a Kinect includes:
– a 640×480 color video stream.
– a 320×240 depth stream. Depth is recovered by projecting invisible infrared (IR) dots into a room. You should watch this cool video to see how the Kinect projects IR dots across a room. Here’s a single frame from the video:

IR Projection

but you should really watch the whole video to get a feel for what the Kinect is doing.
– a 3-axis accelerometer.
– a controllable motor to tilt the sensor up and down, plus four microphones.

What’s even better is that people have figured out how to access data from the Kinect without requiring an Xbox to go with it. In fact, open drivers for the Kinect have now been released. The always-cool Adafruit Industries, which offers all sorts of excellent do-it-yourself electronics kits, sponsored a contest to produce open-source drivers for the Kinect:

First person / group to get RGB out with distance values being used wins, you’re smart – you know what would be useful for the community out there. All the code needs to be open source and/or public domain.

Sure enough, within a few days the contest was won by Héctor Martín Cantero, who is rolling his reward into tools and devices for the fellow white-hat hackers and reverse engineers he works with, which is a great gesture. Okay, so where are we now? If I were still in grad school, I’d be incredibly excited–there’s now a $150 off-the-shelf device that provides depth + video and a lot more.

It’s time for a new contest

I want to kickstart neat projects, so I’m starting my own contest with $2000 in prizes. There are two $1000 prizes. The first $1000 prize goes to the person or team that writes the coolest open-source app, demo, or program using the Kinect. The second prize goes to the person or team that does the most to make it easy to write programs that use the Kinect on Linux.

Enter the contests by leaving a comment on this blog post with a link to your project, along with a very short description of what your project does or your contribution to Kinect hacking. The contest runs until the end of the year: that’s Dec. 31st, 2010 at midnight Pacific time. I may ask for outside input on who should win, but I’ll make the final call.

To get your ideas flowing, I’ll offer a few suggestions. Let’s start with the second contest: making the Kinect more accessible. In my ideal world, would-be hackers would type a single command line, e.g. “sudo apt-get install openkinect”, and when that command finishes, several tools for the Kinect would be installed: maybe a “Kinect snapshot” program that dumps a picture, a depth map, and the accelerometer values to a few files, and probably some sort of openkinect library plus header files so that people can write their own Kinect programs. I would *love* some bindings to a high-level language like Python so that would-be hobbyists can write 3-4 lines (“import openkinect”) and start trying ideas with minimal fuss. To win the second contest, you could write any of these libraries, utilities, or bindings, or simplify installing them on recent versions of Linux/Ubuntu (say, 10.04 or greater).
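To make the Python-bindings idea concrete, here’s a sketch of one helper such a library might ship: converting the Kinect’s raw 11-bit depth readings into meters. The conversion constants below are the rough rational approximation that came out of early community reverse engineering; treat them as placeholders rather than calibrated values.

```python
import numpy as np

def raw_depth_to_meters(raw):
    """Convert raw 11-bit Kinect depth values to approximate meters.

    Uses a community-derived rational approximation; the constants
    are an assumption here, not factory calibration.
    """
    raw = np.asarray(raw, dtype=np.float64)
    depth = 1.0 / (raw * -0.0030711016 + 3.3309495161)
    depth[raw >= 2047] = np.nan  # 2047 means "no reading" at that pixel
    return depth
```

A “Kinect snapshot” tool would then just grab a raw frame from whatever driver bindings exist, run it through a function like this, and dump the result to a file.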

Okay, how about some ideas for cool things to do with a Kinect? I’ll throw out a few to get you thinking.

Idea 1: A Minority Report-style user interface where you can open, move, and close windows with your movements.

Idea 2: What if you move the Kinect around or mount it to something that moves? The Kinect has an accelerometer plus depth sensing plus video. That might be enough to reconstruct the position and pose of the Kinect as you move it around. As a side benefit, you might end up reconstructing a 3D model of your surroundings as a byproduct. The folks at UNC-Chapel Hill where I went to grad school built a wide-area self-tracker that relied on a Kalman filter to estimate a person’s position and pose. See this PDF paper for example.
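As a toy illustration of the Kalman-filter piece of that idea, here’s a minimal 1-D constant-velocity filter. A real tracker like the UNC one estimates full position and pose and fuses accelerometer data, but the predict/update structure is the same; all the noise parameters below are made-up defaults.

```python
import numpy as np

def kalman_track(measurements, dt=1 / 30.0, q=1e-3, r=1e-2):
    """Toy 1-D constant-velocity Kalman filter.

    State is [position, velocity]; `measurements` are noisy positions
    (think: one coordinate of an estimated Kinect pose per frame).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)                      # process noise (assumed)
    R = np.array([[r]])                    # measurement noise (assumed)
    x = np.zeros(2)
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```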

Idea 3: Augmented reality. Given a video stream plus depth, look for discontinuities in depth to get a sort of 2.5 dimensional representation of a scene with layers. Then add new features into the video stream, e.g. a bouncing ball that goes between you and the couch, or behind the couch. The pictures at the end of this PDF paper should get you thinking.
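The “look for discontinuities in depth” step is easy to sketch: flag any pixel whose depth jumps by more than some threshold from a neighbor. The 0.25 m threshold here is an arbitrary guess, not a tuned value.

```python
import numpy as np

def depth_edges(depth, jump=0.25):
    """Flag depth discontinuities bigger than `jump` meters.

    Returns a boolean mask marking pixels whose depth differs by more
    than `jump` from the neighbor above or to the left -- the layer
    boundaries of a crude 2.5-D scene decomposition.
    """
    d = np.asarray(depth, dtype=float)
    dx = np.abs(np.diff(d, axis=1))  # horizontal neighbor jumps
    dy = np.abs(np.diff(d, axis=0))  # vertical neighbor jumps
    edges = np.zeros(d.shape, dtype=bool)
    edges[:, 1:] |= dx > jump
    edges[1:, :] |= dy > jump
    return edges
```

Everything inside a closed boundary is one layer; a bouncing ball just gets composited in front of or behind each layer depending on its depth.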

Idea 4: Space carver. Like the previous idea, but instead of learning the 2.5D layers of a scene from a singe depth map, use the depth map over time. For example, think about a person walking behind a couch. When you can see the whole person, you can estimate how big they are. When they walk behind the couch, they’re still just as big, so you can guess that the couch is occluding that person and therefore the couch is in front of the person. Over time, you could build up much more accurate discontinuities and layers for a scene by watching who walks behind or in front of what.
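A minimal version of that over-time reasoning: every depth reading proves the space in front of it is empty, so the furthest depth ever observed through a pixel is a lower bound on where the static scene at that pixel can be. This is a deliberately tiny sketch of the idea, not a full space carver.

```python
import numpy as np

def carve(depth_frames):
    """Per-pixel furthest depth ever observed, across frames.

    If a person is ever seen 5 m away through some pixel, any static
    surface behind them at that pixel must be at least 5 m away -- the
    running maximum "carves out" known-empty space over time.
    """
    furthest = None
    for d in depth_frames:
        furthest = d.copy() if furthest is None else np.maximum(furthest, d)
    return furthest
```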

Idea 5: A 3D Hough transform. A vanilla Hough transform takes a 2D image, looks for edges in the image, and then accumulates votes in a parameter space to find lines in the image. A 3D Hough transform finds planes in range data. I’ve done this with laser rangefinder data and it works. So you could take depth data from a Kinect and reconstruct planes for the ground or walls in a room.
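A full 3D Hough transform votes over plane parameters, but once you’ve isolated the points belonging to one candidate surface, a least-squares fit recovers the plane directly. Here’s a sketch using SVD, as a simpler stand-in for the voting stage:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an Nx3 point cloud.

    Returns (unit normal, centroid). The normal is the direction of
    least variance in the centered points -- the last right singular
    vector from the SVD.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return normal, centroid
```

For a whole room you’d run this inside a RANSAC loop or a real Hough accumulator so that one wall doesn’t pollute the fit of another.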

Idea 6: What if you had two or more Kinects? You’d have depth or range data from the viewpoint of each Kinect and you could combine or intersect that data. If you put two Kinects at right angles (or three or four Kinects around a room, all pointing into the room), could you reconstruct a true 3D scene or 3D object from intersecting the range data from each Kinect?
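The first step in combining Kinects is turning each depth map into 3D points. Here’s a pinhole back-projection sketch; the focal-length and principal-point defaults are rough guesses standing in for real calibrated Kinect intrinsics. Once each cloud is expressed in a common coordinate frame (apply each unit’s rotation and translation), merging is just concatenation.

```python
import numpy as np

def depth_to_points(depth, fx=594.0, fy=591.0, cx=320.0, cy=240.0):
    """Back-project a depth image (in meters) into an Nx3 point cloud.

    Standard pinhole model; the intrinsics defaults are placeholder
    guesses, not calibrated values. Pixels with depth <= 0 are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = np.asarray(depth, dtype=float)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
    return pts[pts[:, 2] > 0]
```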

I hope a few of these ideas get you thinking about all the fun things you could do with a Kinect. I’m looking forward to seeing what cool ideas, applications, and projects people come up with!

Mini-review of the iPad

I played with an iPad yesterday. Here’s my mini-review. The screen is bright and the touch sensitivity is fantastic. Given that it reminds me the most of an iPhone, it’s surprisingly heavy. It feels dense with potential.

On the childlike-sense-of-wonder scale (as fake Steve Jobs would say), the iPad is better than the MacBook Air but not as stunning as the iPhone when the iPhone first came out. I played with my wife’s iPhone for just a few minutes before I knew I had to have one. But I never really cared about the MacBook Air, mainly because its screen resolution was worse than my current laptop’s. The iPad fits between those two products in the spectrum of desirability for me.

The form factor is… weird. You’re going to feel strange carrying one of these into the grocery store, in the same way you felt weird using your cell phone in the grocery store at first. Leave it to Apple to blaze a trail of coolness, though; the iPad will make this form factor acceptable, so you won’t feel quite as strange carrying a tablet into a meeting in a few months. The form factor is fundamentally awkward, though: the iPad is book-sized, but much more delicate than a book. A screen this big with no protection will get scratched or scuffed. I’d expect plenty of articles about dropped iPads, just like the ones about Wiimotes getting thrown into TVs and windows.

The gadget lover in me wants one, but the part of me that cares about open source and tinkering is stronger. I’m with Cory Doctorow on this one. The iPad is gorgeous, but it’s still not worth it for me. Yesterday, I also bought two books at the bookstore to read on a trip. Walking back to my car with “paper media” felt a bit dorky–why am I buying books on paper in 2010? If I could buy a book digitally and really own it (not just obtain a license to read a book, where the license could be revoked), I’d quickly switch to buying my books digitally. But the success of the Kindle shows that a lot of people care more about the convenience than completely owning what they’re buying digitally.

I think the iPad will be a huge hit. Non-tech-savvy consumers will love it because of the user experience, the simplicity, and the lack of viruses/malware/trojans. It’s like a computer without all the hassles of a typical computer (pre-installed crapware, anti-virus software, inconvenient software upgrades). Lots of tech-savvy consumers will love the iPad for the same reasons, and especially for the polish and user experience. The current iPad lacks a few things (such as a camera), which ensures that future generations of the iPad will also be a huge hit.

But the iPad isn’t for me. I want the ability to run arbitrary programs without paying extra money or getting permission from the computer manufacturer. Almost the only thing you give up when buying an iPad is a degree of openness, and tons of people couldn’t care less about that if they get a better user experience in return. I think the iPad is a magical device built for consumers, but less so for makers or tinkerers. I think the world needs more makers, which is why I don’t intend to buy an iPad. That said, I think the typical consumer will love it.

Finding the best cell phone carrier

Okay, someone tell me if this device exists (or build it!). I want to pay $10-15 to get a gadget in the mail. The gadget would sit in my pocket for a week, wherever I go, recording cell phone signal strength for each of the four major U.S. carriers every few seconds. After a week or so, the device would deliver its verdict on which carrier has the strongest signal for me. Then I could mail the device back so someone else could use it — sort of a Netflix-like model for temporarily borrowing the device.

At any point, I could go to a web page to view a map of where I’d been. The page would show a “heat map” of signal strength for each carrier or frequency band. Maybe I could also slice/dice by time or see the total number of readings in each location. I’m pretty sure you could rig this up out of 2-3 cell phones running Android in the worst case.
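The heat-map page could start from something as simple as binning readings into a coarse latitude/longitude grid and averaging. A sketch, assuming a made-up reading format of (lat, lon, dBm) tuples:

```python
from collections import defaultdict

def signal_heatmap(readings, cell=0.01):
    """Bin (lat, lon, dBm) readings into a coarse grid of averages.

    `cell` is the bin size in degrees (0.01 degrees is roughly a
    kilometer). Returns {(lat_bin, lon_bin): mean dBm}, which a map
    page could color by signal strength.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for lat, lon, dbm in readings:
        key = (round(lat / cell), round(lon / cell))
        sums[key][0] += dbm
        sums[key][1] += 1
    return {k: total / count for k, (total, count) in sums.items()}
```

A real version would keep one grid per carrier (and probably per frequency band), but the binning is the same.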

So far, I’ve found:

– RF Signal Tracker is a nice Android app to collect and map signal-strength data. It looks like it can upload to OpenCellID, a project to create an open database of cell IDs (numbers that correspond to cells).
– Antennas is a pretty cool free app that shows you nearby antennas and signal strength. It can even export some data in KML for use with Google Maps/Earth, but it doesn’t seem to make an easily grokked heat map.
– Sensorly has a free Android app, but they seem to want you to pay to zoom in closer than city level. I’m willing to do that, but didn’t see the for-pay add-on in the Android Market.
– I also found an iPhone app called Signals that will continuously collect signal data and upload it.
– AT&T offers an iPhone app called Mark the Spot to report dropped calls, no coverage, etc. I have to admit I don’t understand why this is manual, though. Personally, I’d want my phone to ping my carrier with its location every time it dropped a call.
– SignalMap is a website where you (manually!) submit the number of bars at a location; it doesn’t appear to have any mobile app to back it up. Likewise, Dead Cell Zones and Got Reception? appear to rely on manual reports. I don’t think manual reports are the best way to tackle cell phone coverage maps, though; you really want an app for this. One other site has the standard manual-report data but also maps cell phone towers registered with the FCC.
– Root Wireless powers the cell phone signal-strength maps that CNET uses, but I didn’t see any apps I could download or install on a phone. I registered to be a beta tester a long time ago, but no one ever contacted me.

That’s what I could find. Do you know of any good Android (or iPhone) programs to collect, map, or upload cell phone strength measurements? If so, let me know in the comments.

30 Day checkin: book challenge

So how did I do on the “15 books in 30 days” challenge? Not too badly–I made it through 12 books. I could probably have squeezed in three more books, but I’d rather take my time and enjoy books than artificially force things for a deadline. I’ll make up those last three books later. :)

This month is really busy with some internal Google projects–don’t worry, not related to webspam–so I’m not planning to do a new 30 day challenge this month. I have kept biking in to work and I’m enjoying it more lately. I think I’ll enjoy biking even more after I bling my bike out with the full-color LED lights I bought from MonkeyLectric at Maker Faire. Here’s an image from MonkeyLectric’s gallery to show you what they look like:

MonkeyLectric LED Bike lights

I have to say, they’re a big step up from my Tireflys, which are just LEDs that stick on the valve stem of your bike tire.

Hidden Google Gem: My Tracks

I’ve really enjoyed making videos for webmasters. In the most recent recording session, we decided that it would be fun to talk about some of the “hidden gems” of Google: features, products, or tips that you might not know about, but you might like.

One of my favorite hidden Google gems is a program for Android phones called My Tracks. I like it enough that we made a short video about it. Enjoy!

As always, you can watch more videos on the official webmaster video channel on YouTube.