Cheap internet-connected scale: Wii Balance Board + Linux

You can ignore this ancient “hairball” blog post. Gather round, kids, and witness this blog post from a time *before internet-connected scales*. That’s right. Back then, we had to hack our Wii balance boards to connect them to the internet. Of course now you can buy wifi-connected scales from Fitbit and Withings. But in the olden days, you had to hack something up or even write it down on paper!

You can easily make an internet-connected scale out of a Wii Balance Board and a Linux machine:

First, find a Bluetooth dongle and configure your Linux machine to talk to the Wiimote.

Next, apply a few extra patches so that your Linux machine can talk to a Wii Balance Board.

Finally, use some Python code to upload your weight to a Google Spreadsheet.
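If you’re curious what that last step can look like, here’s a minimal sketch using the modern gspread library (which postdates this post). The weight value and the “Weight log” spreadsheet name are placeholders, and actually reading the weight from the balance board is left out.

```python
# A minimal sketch of the upload step, assuming the weight has already been
# read from the balance board. Uses the gspread library with a service
# account; "Weight log" is a hypothetical spreadsheet name.
import datetime
import gspread

def log_weight(weight_kg):
    gc = gspread.service_account()        # credentials from the default path
    sheet = gc.open("Weight log").sheet1  # hypothetical spreadsheet name
    sheet.append_row([datetime.datetime.now().isoformat(), weight_kg])

log_weight(80.6)
```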

If you’d like to hear me describe how to hook everything together, you can watch me give a 7-8 minute talk about it (more info in that post), or you can watch it here:

Special thanks to Kevin Kelly and Gary Wolf for kickstarting the Quantified Self movement and encouraging me to talk about this project.

Wanted: bookmarks.html merging program

You can ignore this “hairball” blog post. This post dates back to a time when people actually curated, saved, and managed their bookmarks.html file. Then Google Chrome introduced the ability to save and sync all your bookmarks, extensions, etc. in the cloud. Now I just sign in to Chrome and everything syncs automatically.

Over the years, I’ve accumulated lots of bookmarks.html files. I’d love someone to write an App Engine program that would let you upload bookmarks.html files and would merge them all into one master file. After that, you could prune/remove useless bookmarks, especially any bookmark items that are installed by default on a new browser but are useless.

Why do it on the Google App Engine?

Because it would be an easy way to get started. Essentially you want to upload a small set of files to one web location from several different computers, and then do something interesting with that data. App Engine is perfect for that kind of thing.

Can App Engine’s version of Python parse bookmarks.html files?

The Mozilla/Firefox bookmarks.html file format is a little strange, but not too strange. I found a few programs to parse bookmarks.html files. For example, one fellow wrote a Python program to merge bookmarks using sgmllib, which I’m guessing would work on App Engine.

Digging into it more, it looks like several people like Beautiful Soup as a parser. First off, you can download it as a single Python file to work in App Engine. It also looks pretty easy to use. I like this short example of extracting favicons to .ico files from a bookmarks.html file using Beautiful Soup. At least one other person has released tools to manipulate bookmarks.html files with Beautiful Soup.
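To make the idea concrete, here’s a minimal sketch of parsing and merging bookmarks.html files with Beautiful Soup. I’ve written it against today’s bs4 package; the single-file Beautiful Soup of that era has a nearly identical API (findAll instead of find_all). Deduplicating by URL is my own assumption about how the merge should work.

```python
# A minimal sketch: parse Netscape-format bookmarks.html files with
# Beautiful Soup and merge them, deduplicating by URL.
from bs4 import BeautifulSoup

def parse_bookmarks(html):
    soup = BeautifulSoup(html, "html.parser")
    bookmarks = {}
    for a in soup.find_all("a"):
        href = a.get("href")
        if href:
            bookmarks[href] = a.get_text()  # keyed by URL, so later files win
    return bookmarks

def merge(paths):
    merged = {}
    for path in paths:
        with open(path, encoding="utf-8", errors="replace") as f:
            merged.update(parse_bookmarks(f.read()))
    return merged
```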

Can you upload files to Google App Engine?

Yes! There’s evidently a limit of 10MB on uploaded files, but my biggest bookmark file was about 500K, and I suspect most people have much smaller bookmark files. Stack Overflow has a good example of file uploading in Google App Engine, plus there are official examples, as well as people helping other people to the point of showing live examples.
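As a sketch of how simple the server side can be, here’s a bare-bones upload handler using the legacy Python webapp framework that App Engine shipped at the time; the “bookmarks” form-field name is made up for this example.

```python
# A minimal sketch of file upload on legacy (Python 2-era) App Engine.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class UploadHandler(webapp.RequestHandler):
    def get(self):
        # bare-bones upload form
        self.response.out.write(
            '<form method="post" enctype="multipart/form-data">'
            '<input type="file" name="bookmarks">'
            '<input type="submit" value="Upload"></form>')

    def post(self):
        html = self.request.get('bookmarks')  # file contents as a string
        self.response.out.write('Got %d bytes' % len(html))

application = webapp.WSGIApplication([('/', UploadHandler)])

if __name__ == '__main__':
    run_wsgi_app(application)
```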

Plus browsers are getting better at uploading files to the web. Google Chrome supports really easy drag-and-drop file upload. I think Safari supports drag-and-drop file upload as well, and I know Firefox has the dragdropupload extension that eases uploading files to the web.

What about uploading Google Chrome bookmarks files?

Ah, a person after my own heart. The short answer is that Google Chrome can export bookmarks in a format that looks like Firefox’s to me. Click on the Wrench, then “Bookmark manager,” then Tools->Export Bookmarks… to get a bookmarks.html file. The more fun answer is that “C:\Documents and Settings\{$USER}\Local Settings\Application Data\Google\Chrome\User Data\Default” appears to have a “Bookmarks” file, and it appears to be in JSON format. Can Python parse JSON? It can; Yahoo mentions that simplejson is a great library to use, and it turns out that Google App Engine supports simplejson very easily. Just say “from django.utils import simplejson” to use simplejson. So it wouldn’t be hard to upload raw Chrome bookmark files either.
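Here’s a rough sketch of walking that JSON file. The roots/children/type/url/name keys follow Chrome’s Bookmarks format as I understand it, but check your own file before trusting this.

```python
# A hedged sketch of flattening Chrome's JSON "Bookmarks" file into a
# {url: title} dict. On App Engine of that era, replace the import with
# `from django.utils import simplejson as json`.
import json

def walk(node, out):
    if node.get('type') == 'url':
        out[node['url']] = node.get('name', '')
    for child in node.get('children', []):
        walk(child, out)

def parse_chrome_bookmarks(raw):
    out = {}
    for root in json.loads(raw)['roots'].values():
        if isinstance(root, dict):  # skip non-folder metadata entries
            walk(root, out)
    return out
```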

Aren’t there existing websites to do this?

Maybe, but I don’t know of them. I thought that Foxmarks might be able to do this. Foxmarks (like the now-defunct Google Browser Sync) can synchronize bookmarks across multiple computers. And Foxmarks provides a my.foxmarks.com web interface that lets you manipulate and export your bookmarks, but you can’t upload a raw bookmarks.html file to Foxmarks; instead, you have to upload/sync bookmarks via a browser extension. If Foxmarks added the ability to upload bookmarks.html files (vote for that idea here), that would be pretty sweet.

TimeTrax gone, SXRecorder lives

What’s that? You’ve never heard of an XMPCR? Don’t worry, the rest of the world hasn’t either. You can ignore this “hairball” post as I do spring cleaning on my blog.

TimeTrax was a program that allowed XMPCR owners to listen to XM Radio on their computer. Even nicer, the program would “time shift” a channel by recording its raw audio to MP3 files tagged with the artist/title from the XM metadata.

TimeTrax is not really viable anymore (see the Wikipedia page), but another program called SXRecorder will do much of the same thing.

If you’re using an XMPCR 100, you’ll need to install USB drivers for your device:

http://www.ftdichip.com/Drivers/VCP.htm

From the page: “Virtual COM port (VCP) drivers cause the USB device to appear as an additional COM port available to the PC.” So if you have a device with an FTDI chip (FTDI = Future Technology Devices International), like the XMPCR, the driver makes that USB device look like a serial port device that you can talk to as a COM port.
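As a tiny illustration of what “looks like a COM port” buys you, here’s how you’d open that virtual serial port from Python with pyserial. The port name, baud rate, and command byte below are placeholders; the actual XMPCR protocol isn’t covered here.

```python
# A minimal sketch, assuming the FTDI VCP driver has exposed the XMPCR
# as a serial port. Port name and settings are placeholders to adjust.
import serial  # the pyserial package

port = serial.Serial('COM3', baudrate=9600, timeout=1)  # /dev/ttyUSB0 on Linux
port.write(b'\x00')       # XMPCR commands are raw bytes; see the protocol docs
response = port.read(16)  # read up to 16 bytes of the radio's reply
port.close()
```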

Searching on Google for SXRecorder finds www.backpocket.com/sxrecorder/. The software is free, but you can donate $25 or buy extra plugins for SXRecorder for $35.

You’ll need to activate your XMPCR receiver; you can refresh your radio’s activation at xmradio.com.

Announcing the winners of the Kinect contest

When the Kinect launched, Adafruit Industries ran a contest for the first person who released open-source code to extract video and depth from the Kinect. Adafruit also ended up donating to the EFF after the contest was over.

When I was in grad school, I would have loved to have a device like the Kinect. So I decided to run my own contest:

The first $1000 prize goes to the person or team that writes the coolest open-source app, demo, or program using the Kinect. The second prize goes to the person or team that does the most to make it easy to write programs that use the Kinect on Linux.

It’s time to announce the prize winners. There have been so many cool things going on with the Kinect that instead of two winners, I ended up declaring seven $1000 winners.

Open-source Application or Demo

I picked two winners in this category.

People that have made it easier to write programs for the Kinect

A ton of people have made the Kinect more accessible on Linux or helped the Kinect community. I ended up picking five winners.

All of these individuals pushed things forward so others can develop great programs on the Kinect more easily. Congratulations to all the winners, and to everyone doing neat things with their Kinect!

Open Kinect Contest: $2000 in prizes

I’m starting a contest for people that do cool things with a Kinect. See the details below.


Before I joined Google, I was a grad student interested in topics like computer vision, motion self-tracking, laser scanners–basically any neat or unusual sensing device. That’s why I was so excited to hear about the Kinect, which is a low-cost ($150) peripheral for the Xbox. The output from a Kinect includes:
- a 640×480 color video stream.
- a 320×240 depth stream. Depth is recovered by projecting invisible infrared (IR) dots into a room.
- a 3-axis accelerometer.
- a controllable motor to tilt the sensor up and down, plus four microphones.

You should watch this cool video to see how the Kinect projects IR dots across a room. Here’s a single frame from the video:

[Image: a single frame of the Kinect’s IR dot projection]

…but you should really watch the whole video to get a feel for what the Kinect is doing.

What’s even better is that people have figured out how to access data from the Kinect without requiring an Xbox to go with it. In fact, open drivers for the Kinect have now been released. The always-cool Adafruit Industries, which offers all sorts of excellent do-it-yourself electronics kits, sponsored a contest to produce open-source drivers for the Kinect:

First person / group to get RGB out with distance values being used wins, you’re smart – you know what would be useful for the community out there. All the code needs to be open source and/or public domain.

Sure enough, within a few days, the contest was won by Héctor Martín Cantero, who is actually rolling his reward into tools and devices for fellow white-hat hackers and reverse engineers that he works with, which is a great gesture. Okay, so where are we now? If I were still in grad school, I’d be incredibly excited–there’s now a $150 off-the-shelf device that provides depth + stereo and a lot more.

It’s time for a new contest

I want to kickstart neat projects, so I’m starting my own contest with $2000 in prizes. There are two $1000 prizes. The first $1000 prize goes to the person or team that writes the coolest open-source app, demo, or program using the Kinect. The second prize goes to the person or team that does the most to make it easy to write programs that use the Kinect on Linux.

Enter the contests by leaving a comment on this blog post with a link to your project, along with a very short description of what your project does or your contribution to Kinect hacking. The contest runs until the end of the year: that’s Dec. 31st, 2010 at midnight Pacific time. I may ask for outside input on who should be the winner, but I’ll make the final call on who wins.

To get your ideas flowing, I’ll offer a few suggestions. Let’s start with the second contest: making the Kinect more accessible. In my ideal world, would-be hackers would type a single command line, e.g. “sudo apt-get install openkinect”, and after that command finishes, several tools for the Kinect would be installed. Maybe a “Kinect snapshot” program that dumps a picture, a depth map, and the accelerometer values to a few files. Probably some sort of openkinect library plus header files so that people can write their own Kinect programs. I would *love* some bindings to a high-level language like Python so that would-be hobbyists can write 3-4 lines of Python (“import openkinect”) and start trying ideas with minimal fuss. To win the second contest, you could write any of these libraries, utilities, or bindings, or simplify installing them on recent versions of Linux/Ubuntu (let’s say 10.04 or greater).
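For a sense of the level of simplicity I’m hoping for, here’s a sketch of what grabbing frames could look like. I’ve borrowed the synchronous function names from libfreenect’s Python wrapper (sync_get_depth/sync_get_video); treat the details as illustrative.

```python
# A minimal sketch, assuming Python bindings along the lines of libfreenect's
# `freenect` module; each sync_get_* call returns (numpy array, timestamp).
import freenect

depth, _ = freenect.sync_get_depth()  # per-pixel depth values for one frame
video, _ = freenect.sync_get_video()  # RGB pixels for one frame
print(depth.shape, video.shape)
```

That’s the whole program: a hobbyist could be looking at live depth data within minutes of plugging the Kinect in.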

Okay, how about some ideas for cool things to do with a Kinect? I’ll throw out a few to get you thinking.

Idea 1: A Minority Report-style user interface where you can open, move, and close windows with your movements.

Idea 2: What if you move the Kinect around or mount it to something that moves? The Kinect has an accelerometer plus depth sensing plus video. That might be enough to reconstruct the position and pose of the Kinect as you move it around. As a side benefit, you might end up reconstructing a 3D model of your surroundings as a byproduct. The folks at UNC-Chapel Hill where I went to grad school built a wide-area self-tracker that relied on a Kalman filter to estimate a person’s position and pose. See this PDF paper for example.

Idea 3: Augmented reality. Given a video stream plus depth, look for discontinuities in depth to get a sort of 2.5 dimensional representation of a scene with layers. Then add new features into the video stream, e.g. a bouncing ball that goes between you and the couch, or behind the couch. The pictures at the end of this PDF paper should get you thinking.

Idea 4: Space carver. Like the previous idea, but instead of learning the 2.5D layers of a scene from a single depth map, use the depth map over time. For example, think about a person walking behind a couch. When you can see the whole person, you can estimate how big they are. When they walk behind the couch, they’re still just as big, so you can guess that the couch is occluding that person and therefore the couch is in front of the person. Over time, you could build up much more accurate discontinuities and layers for a scene by watching who walks behind or in front of what.

Idea 5: A 3D Hough transform. A vanilla Hough transform takes a 2D image, looks for edges in the image, and then runs some computation to determine lines in the image. A 3D Hough transform finds planes in range data. I’ve done this with laser rangefinder data and it works. So you could take depth data from a Kinect and reconstruct planes for the ground or walls in a room; see the sketch below.
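To make that concrete, here’s a rough brute-force sketch. The (N, 3) points array, the grid resolutions, and the 5-meter range cap are all assumptions of the sketch; a real implementation would want non-maximum suppression to find more than one plane.

```python
# A rough sketch of a 3D Hough transform for plane finding, assuming
# `points` is an (N, 3) numpy array of Kinect range data in meters.
# Planes are parameterized as x . n(theta, phi) = rho.
import numpy as np

def hough_planes(points, n_theta=45, n_phi=45, n_rho=100, rho_max=5.0):
    thetas = np.linspace(0, np.pi, n_theta)  # azimuth of the plane normal
    phis = np.linspace(0, np.pi, n_phi)      # elevation of the plane normal
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=np.int32)
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            normal = np.array([np.cos(th) * np.sin(ph),
                               np.sin(th) * np.sin(ph),
                               np.cos(ph)])
            rho = points @ normal            # signed distance of every point
            bins = ((rho + rho_max) / (2 * rho_max) * n_rho).astype(int)
            np.add.at(acc[i, j], np.clip(bins, 0, n_rho - 1), 1)  # votes
    # the fullest accumulator cell is the dominant plane in the scene
    i, j, k = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[i], phis[j], (k + 0.5) * 2 * rho_max / n_rho - rho_max
```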

Idea 6: What if you had two or more Kinects? You’d have depth or range data from the viewpoint of each Kinect and you could combine or intersect that data. If you put two Kinects at right angles (or three or four Kinects around a room, all pointing into the room), could you reconstruct a true 3D scene or 3D object from intersecting the range data from each Kinect?

I hope a few of these ideas get you thinking about all the fun things you could do with a Kinect. I’m looking forward to seeing what cool ideas, applications, and projects people come up with!
