I was overall very happy with these bulbs: decent Android and iOS apps and, compared to fancier solutions (e.g., Philips Hue or Belkin WeMo), they do not require any proprietary base stations, and you can’t beat the price!  However, switching off the lights before falling asleep involved hunting for the phone, opening the app, and waiting for it to scan the network; not an ideal user experience.  I was actually missing our old X10 alarm clock controller (remember those?), so I decided to make one from scratch, because… why not?
Although the X10 Powerhouse controller’s faux-wood styling and 7-segment LED had a certain… charm, I decided to go more modern and use a touchscreen.  I also designed a 3D printed enclosure with simple geometric shapes and used it as a further excuse to play with 3D print finishing techniques.  Here is the final result:
And here it is in action:
If this seems interesting, read on for details. The source code for everything is available on GitHub. Edit: You can also check the Hackaday.io project page for occasional updates.
Component selection. There are several boards with the ESP8266, most of them using the ESP-12 module. I decided to go with the SparkFun Thing (which directly incorporates the ESP chip), as it also includes a LiPo charge controller.  Perhaps overkill for battery backup, but nice to have.  If you do use the charge controller, then the price is very reasonable (e.g., an Adafruit ESP breakout and Micro-LiPo combo will cost about the same, although their flash is 4x larger and the ESP-12 module is FCC-approved). Also, it’s a very nice board for experimentation and it’s become my go-to ESP board: nice header layout, and the easiest to program (tip: instead of fiddling with the DTR jumper on the Thing, just cut your DTR wire and insert a pin header pair; once esptool starts uploading, just pull the pin and… done!).
For the display, modules with a parallel interface were out of the question, since the ESP does not have enough pins. After some googling, I found Digole’s IPS touchscreen, which incorporates a PIC MCU and can connect over UART, I2C, or SPI (selectable via a solder jumper). Several users really like Digole’s display modules, and their older models in particular seem quite popular. The display itself is very nice.  However, touchscreen support appears to be relatively recent and isn’t that great (more later).  It is also a bit on the expensive side, the firmware is not upgradeable (so you’re basically stuck with whatever version your module comes loaded with; I got one with an older version that has some bugs with 18-bit color support), and manufacturing quality could have been a bit better (mine had poor reflow).  Still, for prototype experimentation, this isn’t a bad module, and the company is generally responsive to customer inquiries.
I also picked up a DS3231 RTC module off of Amazon, but I ended up not using it; periodically synchronizing with an NTP server is more than good enough.
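For the curious, the NTP exchange itself is tiny, which is part of why the RTC felt unnecessary. Here is a minimal sketch in Python (the firmware does the equivalent in C++ over the ESP's UDP API): the client request is a single 48-byte packet, and the server reply carries its transmit timestamp at byte offset 40.

```python
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_DELTA = 2208988800

def ntp_request_packet() -> bytes:
    """Build a minimal NTP client request: LI=0, VN=3, Mode=3 in the
    first byte (0x1B), remaining 47 bytes zeroed."""
    return bytes([0x1B]) + bytes(47)

def parse_ntp_time(packet: bytes) -> float:
    """Extract the transmit timestamp (bytes 40-47 of the response:
    32-bit seconds + 32-bit fraction) and convert to a Unix timestamp."""
    secs, frac = struct.unpack("!II", packet[40:48])
    return secs - NTP_DELTA + frac / 2**32
```

Sending `ntp_request_packet()` over UDP to port 123 of any NTP server and feeding the 48-byte reply to `parse_ntp_time` gives the current Unix time; doing that every few hours keeps the clock plenty accurate for an alarm clock.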
Total cost. The first version of this device comes to about $45 including everything: SparkFun Thing ($15), touchscreen (highest cost at $21.50), and 500mAh LiPo cell ($8.50 off eBay). However, in retrospect, it could be done for much less: about $13 total (!) if you skip the LiPo (and charge controller), use a $5-6 ESP module instead, and get a much cheaper ILI9341 touchscreen module (not IPS, but just $7 off eBay; I have one on the way from China). This does not include plastic filament (maybe a dollar?), paint (it doesn’t use much, assuming you already have some), and labor.
3D-printed enclosure. I mocked a couple of profiles in 2D CAD to see what I like, and then did the actual design in OpenSCAD, which is my go-to CAD tool, because… code! (Who has time for point-and-click? :)  It’s a fairly standard affair, with simple geometric shapes, designed in multiple pieces for printing.
The picture above shows the parts in their printing orientation. The standoffs are conical to eliminate the need for supports (alternatively, I could have printed them as separate parts, but getting them inserted is too fiddly, especially in tight spaces like this). The cylindrical sections (middle right) are support ribs which I ABS-glued to the main enclosure’s vertices. They serve two purposes. First, to hold the endcaps in place (the ribs are slightly shorter than the enclosure). Second, to provide some extra support (after gluing, rib layers are perpendicular to enclosure layers). Printing them separately may be unnecessary overkill (there is also a version of the enclosure and ribs in one piece, which requires some extra support to print, but not much), but gluing them is easy enough so… why not. The little clip (top right) is for holding the LiPo cell in place, and is also glued inside the main enclosure.  It’s printed separately to eliminate the need for support (and, at the prototyping stage, also make it a little easier to try different battery sizes, without having to re-print the whole thing or do separate test-prints).
One part of the design that does require a lot of support is the opening for the display.  I entertained the idea of printing the main enclosure in three vertical sections (and gluing them together), but eventually decided against it. Printing that successfully took a bit of trial and error.  I use both Cura and Slic3r.  For most parts I used Slic3r (mainly because it produces smoother outer perimeters, and also integrates better with OctoPrint).  However, for the life of me, I haven’t managed to get Slic3r to print supports that break off easily.  Even with the new pillar mode, most parts are fine, but there is always some bit that’s fused too tightly to the print to separate!  Cura, on the other hand, always does an excellent job with supports.
Finally, when designing cases like this, one of the (many!) things I like about open-source hardware is that I can download the PCB layout and get precise component positions and dimensions; no calipers and almost-there test prints! Sometimes it’s the little things…
You will note, however, that there are holes to insert hex nuts, which are visible from the outside and would have been rather ugly (you don’t see exposed fasteners in “real” products). Which brings me to the next trick.
Friction welding: dos and don’ts. I first heard about friction welding through Make magazine’s excellent article on 3D print post-processing, and since then it has become one of my favorite techniques. I’ve seen a few tutorials on YouTube about friction welding plastics using a rotary tool.  However, at least the ones I found seem to be from people who recently learned the technique themselves and are excited to share it.  I wish Make: had placed more emphasis on step 2c (pre-heating the surfaces); it would have saved me several failed attempts. Do not skip it; this is crucial! And, no matter what you do, do not immediately press the spinning, cold filament onto the cold pieces (as some of these tutorials appear to suggest), since you’ll most likely gouge them.  An alternative I’ve found to pre-heating with a heat gun is to use friction itself to do the preheating. Initially, just barely touch the spinning filament onto the plastic surfaces (not on the metal).  Without applying any pressure, wait until you see a tiny bit of plastic start flowing.  Only then gradually increase the pressure and start moving the filament, to keep a consistent flow.  Also, if blobs form on the tip of the filament, it’s best to stop and lightly spin it against some sandpaper to clean them off.
Embedding nuts with friction welding. Using friction welding to embed nuts is a trick I came upon by accident.  When I was building my Kossel-based printer, I overtightened the screws holding the rods to the effector, stripping the cutouts for the nylocs.  I was too lazy/impatient to print another effector, so I just quickly filled the gaps using friction welding.  I’m still using that effector, which has held the nuts very nicely for over a year (and after having taken the effector apart several times to tweak various things).
I now regularly use this technique to also hold magnets and, generally, anything inserted that needs to stay put.  Superglue is the easiest, but it develops stress cracks and invariably fails over time (and, if you’re thinking threadlocker, don’t: it will craze the plastic, especially ABS).  Next easiest is using a soldering iron to press the nuts/items into the plastic, which I use very often (I regularly design all my holes undersized and do this anyway).  However: (i) you can’t use it on non-metal items; (ii) you can’t use it on magnets (the necessary heat will demagnetize them); (iii) if you don’t have a steady hand, you may loosen the hole enough to cause the part to fall out eventually, even if it seems fine at first.  Friction welding takes a bit more time, but it’s the best solution I’ve found so far, and it’s also very easy after just a little bit of practice.  I haven’t tried threaded inserts yet. The McMaster-Carr “heat-set inserts for plastics” (it appears their site does not support direct linking!?) that Werner Berry uses look really nice and I’ve been itching to try them, but that’s another piece of hardware I need to keep around.
Another nice thing is that you can use this trick to embed blind nuts that are not visible from the outside. This is rather obvious (once you’ve done all the above :). First, insert the nuts (I used the soldering iron) and make sure the surface is flat (lightly file, if necessary):
Make sure that the fastener axis is oriented properly (if not, adjust). Then, fully thread the holes from the outside. Use a proper tap (not a screw) to cut threads, especially for finer pitches. Do not skip this step (more later).
Finally, apply molten plastic, starting from the outside (i.e., touching the perimeter of the hole, plastic-to-plastic) and working your way towards the center. Once you’re done (if you do it right, you should end up with a very clean-looking plug, without any gouges or streaks), lightly file to make the surface flat.
Done! Now the nuts are not visible from the outside, and you have a very clean finish. Additional advantages of this approach: you do not need easy access to the hole from the fastener side (in this enclosure it would have been very difficult to insert the nuts and/or tap the holes from the inside), and you can use a regular taper or plug tap (rather than a bottoming tap).
3D-print finishing. Although I often like the surface finish of 3D printed layers, in this case I wanted a smoother, more “product-like” finish.  Some time ago I bought some XTC-3D and this was a good opportunity to play with it a little more.  Overall, XTC works very well; especially on organic/curved shapes, you’re pretty much done after applying. Do follow the instructions about applying very thin coats (it will even out, even if it does not look like it at first).  However, in this case (no pun intended) there were two issues. First, I used an older printer (a Solidoodle 2; my Kossel is not yet set up for long ABS prints) which has significant banding. XTC is good, but it’s not magic; I did some initial sanding (and cleaning with denatured alcohol) before applying the XTC resin.  Second, on large flat surfaces, you will get some minor unevenness and some tiny bubbles here and there. Light sanding (with a sanding block!) will address most of these issues, but in some places you may need to use a little filler.  One-part Bondo spot putty is sufficient for this.  Apply it generously, and after it is dry, sand most of it off (it sands very easily).  Do wait for it to set, though.  Especially on thick coats, the manufacturer’s recommended set time (25 minutes) may not be sufficient; rule of thumb is to wait until it turns light pink everywhere and then wait some more.
All things considered, XTC-3D works great (unfortunately, I forgot to take pictures after applying just the XTC-3D). It definitely beats sanding (substantially reducing it), as well as two-part body fillers (which I haven’t used with prints, but I’ve used in another project a long time ago).  And for smaller surfaces or organic shapes, you’re pretty much done after applying.
Spray painting. I’m very new at this; I had done it once in the past (again for this project) and, surprisingly, it had gone very smoothly.  I still don’t know why (maybe too much false confidence?), but it’s always the second time that gets you burned, isn’t it?  To cut a long story short, I learned about the difference between lacquers and enamels (simplifying, the former just dry by evaporation, the latter cure by reacting with air), got distracted by paint chemistry (if you’re curious look, e.g., here or here, and if you’re really curious try this), and found the following paint compatibility chart, which is worth its weight in gold:
Furthermore, in the past I had used Krylon, which is not available at big box stores (we have one two blocks from home), so I decided to try Rustoleum instead.  Although people are often happier with Rustoleum (and, these days, they’re also cheaper), for the life of me I couldn’t get an even spray with their nozzles. Maybe they work well on large items like chairs and tables, or maybe it’s my (lack of) technique, but on this small enclosure I couldn’t get even coverage, and always got spots with too much paint (not enough to cause drips, but enough to affect the surface finish). More importantly, Rustoleum takes forever to dry and, if you’re doing your spraying in all sorts of weird places with temporary setups (we live in an apartment), that’s an issue.
So, I wiped it all off (tip perhaps worth sharing: I found that, at least if the paint hasn’t completely cured, white spirit works well and it doesn’t attack the plastic at all), went to an auto parts store, and got some Krylon.  I think their newer non-rotating nozzles spray a bit more like a firehose (just have to live with overspray), but other than that, the second attempt went pretty well.
I chose a satin finish both because I like it, and also because it’s a bit more forgiving with improper spraying distance (you can err on keeping the nozzle too far from the surface, and it won’t have an ill effect, within reason). Skipping the intermediate steps (nothing to be proud of :), here is the end result — not bad for a rookie:
Putting it together. The last bits were easy: soldering headers on the Thing (whatever fits in the enclosure, some straight and some raised right-angle pins) and on the display module.  Also, the right-angle JST header soldered onto the Thing wouldn’t work in this enclosure (the LiPo wire collides with the endcap), so I desoldered it and replaced it with a vertical JST header.  Finally, I had to solder a wire to the reset pads on the Digole module (the reset signal is not broken out, but it’s accessible through an unpopulated reset pushbutton).
After fiddling with the screws (long nose pliers and balldrive Allen keys FTW!) and wires, the mechanical assembly was done — whewww!
Epic fail(s). So far I’ve omitted an epic fail from the story.  The enclosure shown above is actually the second attempt.  The first one ended in disaster, all within a couple of hours.  The first attempt was printed in PLA.  First fail and lesson: PLA really does melt under the sun, and it takes less than you’d think.  I sprayed the endcaps first and temporarily set them down on a piece of cardboard on top of a metal outdoor table, under the sun.  In the few minutes it took me to spray the first coat on the main enclosure, the endcaps had seriously warped!  You can see this in the picture on the left (and that is after I spent half an hour re-shaping them with a temperature-controlled hot air gun at low heat!).  The second fail was even worse: for the first attempt I did not have an M2 tap, so I decided to let the M2 screws cut the threads themselves (in slightly oversized holes).  Unfortunately, this does not cut the threads properly, and the screws still meet substantial resistance.  Since the nuts will never be perfectly aligned, when inserting the screws from the inside, what happened is what you see in the left photo. Doh!
So, definitely use a tap to properly cut threads (or, make the holes really oversize, and make sure you clean any molten plastic if you use a soldering iron). Furthermore, measuring your fastener lengths twice and hand-tightening them is not a bad idea either.
You may also notice that the finish here is a little glossier; that’s what happens when you over-apply paint and/or spray from too close.
Finally, to top it all off, I hadn’t realized that the standoffs for the RTC module were on the wrong side (double-doh!), and when test-fitting it also turned out that the wires I had crimped were a couple of cm too short. Oh well, it had been a while since I had an epic fail like this! :)
Protocol sniffing. On to the software part.  First thing was to reverse-engineer the WiFi bulbs’ protocol.  It appears that, although there are several variants of the hardware that look identical, not all of them run the same protocol (e.g., see links in sniffing notes on GitHub).  I’m not even sure they are all made by the same OEM (FWIW, MAC vendor lookup on my bulbs says Hi-flying Electronics). Of course, none of these protocols are published, but all of them are very similar and quite simple.  In my case, since I’m running OpenWRT on our router, I just installed ngrep and sniffed the iOS app’s traffic.  I’m pretty sure it’s possible to sniff traffic even if you don’t have access to the router (but I didn’t have to find out).  Edit: Root access on the router makes sniffing much easier (otherwise you’ll probably need a sniffer on your tablet/phone).
For on and off commands, I can just copy them verbatim. For commands to set color, the structure is easy to figure out. First is an opcode byte, followed by RGBW values (the bulbs have both RGB as well as warm-white LEDs, and it seems you can turn on either one or the other), a constant(?), and a checksum byte. Nothing too fancy.
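The command framing described above can be sketched like this. Note that the opcode and constant byte values here are made up for illustration (the real ones came out of the sniffed traffic), and the checksum is assumed to be the low byte of the byte sum, a common scheme in protocols like this:

```python
def set_color_cmd(r: int, g: int, b: int, w: int) -> bytes:
    """Build a set-color packet: opcode, RGBW, constant, checksum.
    OPCODE and CONST are placeholder values, not the actual protocol."""
    OPCODE = 0x31  # hypothetical opcode byte
    CONST = 0x0F   # hypothetical trailing constant
    payload = bytes([OPCODE, r & 0xFF, g & 0xFF, b & 0xFF, w & 0xFF, CONST])
    checksum = sum(payload) & 0xFF  # low byte of the sum of all prior bytes
    return payload + bytes([checksum])

pkt = set_color_cmd(255, 0, 0, 0)  # e.g., full red, warm-white LEDs off
```

The resulting bytes would then just be written to the bulb’s TCP socket.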
The iOS app uses UDP broadcast for bulb discovery (that protocol is also easy to figure out). This step does take some time (and was one of the annoyances with the user experience, since this information is not cached by the iOS app). However, after that, all communication happens over TCP. To keep things simple, I decided to skip the device discovery step (at least for now), and just assign fixed hostnames/IPs to the bulbs.
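A discovery step along those lines boils down to “broadcast a probe, collect whoever answers before a timeout.” Here is a sketch in Python; the probe payload and port are placeholders (the real ones depend on the bulb firmware and came from sniffing), and the firmware-side equivalent would use the ESP’s UDP API:

```python
import socket

def discover_bulbs(probe: bytes, port: int, timeout: float = 1.0,
                   addr: str = "255.255.255.255"):
    """Broadcast a UDP probe and collect (ip, reply) pairs until timeout.
    `probe` and `port` are placeholders for the bulb-specific values."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(probe, (addr, port))
    found = []
    try:
        while True:  # keep collecting replies until the timeout fires
            data, (ip, _) = sock.recvfrom(1024)
            found.append((ip, data))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return found
```

Since this blocks for the full timeout, you can see why caching the results (which the iOS app doesn’t do) would have helped the user experience.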
Firmware. The firmware is fairly standard stuff. It’s written using the ESP port of Arduino (many thanks, @igrr et al!), and it currently occupies about 70% of the Thing’s flash.
First, the display driver and UI code. Touch handling uses a combination of interrupts to detect the first touch, and then polling and debouncing to detect finger down/move/up events and update the UI accordingly (this is probably the most complex bit here, and it’s actually pretty simple). While at it, I did a gratuitous rewrite of the Digole library, inspired by a very cool hack I had seen. Then an NTP client, the WiFi bulb client, and a webserver for configuring the device over a web browser. Settings are stored in “EEPROM” (which, on the ESP, is just a sector of the flash memory). The web UI is pretty simple for now:
Arduino on ESP has a great set of libraries for networking stuff, which makes all this quite easy! I decided to write a proper HTML5 frontend and a simple REST API (using the excellent ArduinoJson library), with basic Bootstrap and Knockout.js to make it look a little prettier. However, upon first boot, the device has no Internet access.  If it fails to connect to WiFi, the firmware switches the device to AP mode, so initial configuration can be done over WiFi.  Since the flash chip is not large enough to store Bootstrap and Knockout locally, there is a separate, minimal UI (not shown) that uses regular HTML forms (no AJAX) and just allows setting the SSID and password.
One problem (that I eventually worked around, rather than solved) was getting the Digole module to talk back to the ESP.  I2C was a fail (and it cost me a couple of days; I’m still not sure if the problem is on the display’s end, with the Arduino core’s clock stretching implementation, or something else), and SPI I didn’t really try. I finally got UART to work (except that you can’t turn off the ESP’s 74880-baud boot messages, hence the need for accessing the reset signal on the Digole).  The downside is that reflashing the firmware is now a PITA (I have to fiddle inside the case to disconnect the display and connect my FTDI adapter), but that happens relatively infrequently (the display stuff is mostly done, and the network stuff I test on a spare Thing first).
Conclusion. After all this, I think the result is not bad for a completely home-made device. Could I have gotten a used Chumby for a comparable price (they go for about $60 used), or just used an old/cheap Android tablet (and perhaps just 3D print a stand)?  Aside from the Chumby service’s ups and downs… sure, but where’s the fun in that? :)  Also, there is no way to reduce the cost of these alternatives down to $13.
What’s next? Well, you may have noticed there is a zipcode setting. That’s for weather information (planning to use OpenWeatherMap, which returns reasonably-sized responses; parsing anything more than 1KB, maybe 2KB, is probably a bit iffy).  Also, a web UI to control the lamps would be nice (the REST API endpoints are there, I just need to get around to writing and refactoring the HTML bits).  Maybe adapt the whole thing to a cheaper display module (as discussed in the beginning; I’ve already started a port of the ucglib library to the ESP, but need an actual device to finish it). Finally, one could perhaps re-write it in Lua (NodeMCU?) with support for pluggable modules (à la the true Chumby). That probably won’t be me, though; by that time, I’m pretty sure a new hack will have “distracted” me. :)
Before I continue, let me say that, yes, I know Matlab has cell arrays and even objects, but still… you wouldn’t really use Matlab for, e.g., text processing or web scraping. Yes, I know Matlab has distributed computing toolboxes, but I’m only considering main memory here; these days 256GB RAM is not hard to come by and that’s good enough for 99% of (non-production) data exploration tasks. Finally, yes, I know you can interface Java to Matlab, but that’s still two languages and two codebases.
Storing matrix data in Matlab is easy.  The .MAT format works great, it is pretty efficient, and can be used with almost any language (including Python).  At the other extreme, arbitrary objects can be stored in Python as pickles (the de-facto Python standard?), however (i) they are notoriously inefficient (even with cPickle), and (ii) they are not portable.  I could perhaps live with (ii), but (i) is a problem.  At some point, I tried out SQLAlchemy (on top of sqlite), which is quite feature-rich, but also quite inefficient, since it does a lot of things I don’t need. I had expected to pay a performance penalty, but hadn’t realized how large until measuring it.  So, I decided to do some quick-n-dirty measurements of various options.
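To make “quick-n-dirty measurements” concrete, here is the general shape of such a micro-benchmark: time a bulk write of the same rows through pickle and through raw sqlite, and compare. The schema and record count are scaled-down illustrations, not the actual benchmark code:

```python
import os
import pickle
import sqlite3
import tempfile
import time

# 50k two-float records: enough to see relative overheads, small enough
# to run anywhere (sizes here are illustrative, not the real benchmark's).
rows = [(float(i), i * 0.5) for i in range(50_000)]

with tempfile.TemporaryDirectory() as tmp:
    # pickle: a single dump of the whole list
    pkl_path = os.path.join(tmp, "data.pkl")
    t0 = time.perf_counter()
    with open(pkl_path, "wb") as f:
        pickle.dump(rows, f, protocol=pickle.HIGHEST_PROTOCOL)
    t_pickle = time.perf_counter() - t0
    pkl_size = os.path.getsize(pkl_path)

    # sqlite: bulk insert inside a single transaction
    db_path = os.path.join(tmp, "data.db")
    t0 = time.perf_counter()
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE t (a REAL, b REAL)")
    con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    con.commit()
    t_sqlite = time.perf_counter() - t0
    n_stored = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    con.close()
    db_size = os.path.getsize(db_path)
```

Both file size and wall-clock time come out of the same run, which is essentially what the plots below report (normalized against raw sqlite).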
The goal was to compare Python overheads (due to the interpreted nature of Python, the GIL, etc etc), not raw I/O performance. Furthermore, I’m looking for a simple data storage solution, not for a distributed, fault-tolerant, scalable solution (so DHTs, column stores, etc like memcached, Riak, Redis, HBase, Cassandra, Impala, MongoDB, Neo4j, etc etc etc, are out). Also, I’m looking for something that’s as “Pythonic” as possible and with reasonably mature options (so I left things like LevelDB and Tokyo Cabinet out). And, in any case, this is not meant to be an exhaustive list (or a comprehensive benchmark, for that matter); I had to stop somewhere.
In the end, I ended up comparing the following storage options:
Furthermore, I also wanted to get an idea of how easily Python code can be optimized.  In the past, I’d hand-coded C extensions when really necessary, I had played a little bit with Cython, and I had heard of PyPy (but never tried it).  So, while at it, I also considered the following Python implementations and tools:
The dataset used was very simple, consisting of five columns/fields of random floating point numbers (so the data are, hopefully, incompressible), with sizes of up to 500M records. The dataset size is quite modest, but should be sufficient for the goals stated above (comparing relative Python overheads, not actual disk I/O performance). File sizes (relative to sqlite, again) are shown below. For the record, the ‘raw’ data (500,000 rec x 5 floats/rec x 8 bytes/float) would have stood at 0.74, same as pytables, which has zero overhead (well, 64KB to be exact); sqlite has a 36% overhead. ZODB size includes the index, but that’s just 2.7% of the total (caveat: although records were only added, never deleted, I’m not familiar with ZODB and didn’t check if I should still have done any manual garbage collection).
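Generating such a dataset is a one-liner; here is a sketch with the record count scaled down for illustration (the field count and 8-bytes-per-double arithmetic follow the description above):

```python
import random
import struct

def make_dataset(n_records: int, n_fields: int = 5, seed: int = 42):
    """Rows of uniform random floats -- random so the data are
    (hopefully) incompressible, mirroring the benchmark dataset."""
    rng = random.Random(seed)
    return [tuple(rng.random() for _ in range(n_fields))
            for _ in range(n_records)]

rows = make_dataset(1_000)  # scaled down for illustration
# 'raw' size: n_records x n_fields x 8 bytes per double
raw_bytes = len(rows) * 5 * struct.calcsize("d")
```

Dividing each storage format’s on-disk size by this raw byte count gives the per-format overhead figures quoted above.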
Runs were performed on a machine with an ext4-formatted Samsung EVO850 1TB SSD, Ubuntu 14.04LTS and, FWIW, a Core i7-480K at 3.7GHz. RAM was 64GB and, therefore, the buffercache was more than large enough to fit all dataset sizes.  One run was used to warm up the cache, and results shown are from a second run.  Note that, particularly in this setting (i.e., reading from memory), many (most?) of the libraries appear to be CPU-bound (due to serialization/deserialization and object construction overheads), not I/O-bound.  I cautiously say “appear to be” since this statement is based on eyeballing “top” output, rather than any serious profiling.
For full disclosure, here’s a dump of the source files and timing data, provided as-is (so, caveat: far from release quality, not intended for reuse, really messy, undocumented, etc etc—some bits need to be run manually through an iPython prompt and/or commented-in/commented-out, don’t ask me which, I don’t remember :). If, however, anyone finds anything stupid there, please do let me know.
First a sanity check, wall clock time vs. dataset size is perfectly linear, as expected:
The next plot shows average wall-clock time (over all dataset sizes) for both cpython and pypy, normalized against that of raw sqlite with cpython:
As usual, I procrastinated several weeks before posting any of this. In the meantime, I added a second EVO850 and migrated from ext4 to btrfs with RAID-0 for data and LZO compression.  Out of curiosity I reran the code.  While at it, I added ZODB to the mix. Here are the results (cpython only, normalized against sqlite on btrfs):
Pytables is, oddly, faster! Â For completeness, here are the speedups observed with striping across two disks, vs a single disk.
Remember that these are (or should be) hot buffercache measurements, so disk I/O bandwidth should not matter, only memory bandwidth.  Not quite sure what is going on here; I don’t believe PyTables uses multiple threads in its C code (and, even if it did, why would the number of threads depend on the… RAID level??).  Maybe some profiling is in order (and, if you have any ideas, please let me know).
Comparing Python implementations. Woah, look at PyPy go!  When it works, it really works.  SQLAlchemy goes from 2.5x slower (using the low-level APIs) or 25x slower (with all the heavyweight ORM machinery) to almost directly competitive with raw sqlite, or 6x slower (a 4x speedup), respectively.  Similarly, manual object re-construction on top of raw sqlite now has negligible overhead.  However, most libraries unfortunately do not (yet) run on PyPy.  More importantly, the frameworks I need for data analysis also do not support PyPy (I’m aware there is a special version of NumPy, but matplotlib, SciPy, etc. are still far from being compatible).  Also, I’m not quite sure why pickles were noticeably slower with PyPy.
Comparing data formats. Sqlite is overall the most competitive option.  This is good; you can never really go wrong with a relational data format, so it should serve as a general basis. PyTables is also impressively fast (it’s pretty much the only option that beats raw sqlite, for this simple table of all-floats).  Finally, I was somewhat surprised that NumPy’s CSV I/O is that slow (yes, it has to deal with all the funky quotation, escapes, and formatting variations, and CSV text is not exactly a high-performance format, but still…).
For the time being, I’ll probably stick with sqlite, but get rid of the SQLAlchemy ORM bits that I’ve been using (or, perhaps, keep them for small datasets). The nice thing is that I can keep my data files and perhaps look for a better abstraction layer than DB-API, but the sqlite “core” itself appears reasonably efficient. Eventually, however, I’d like to have something like the relationship feature of the ORM (but without all the other heavyweight machinery for sessions, syncing, etc), so I can easily persist graph data, with arbitrary node and edge attributes (FWIW, I currently use NetworkX once the data is loaded; I know it’s slow, but it’s very Pythonic and convenient, and I rarely resort to iGraph or other libraries, at least so far; but that’s another story).
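To sketch what “persist graph data with arbitrary node and edge attributes over plain sqlite” could look like, without any ORM machinery: two tables, with attribute dicts JSON-encoded. The schema and function are made-up illustrations, not a finished library:

```python
import json
import sqlite3

def save_graph(con: sqlite3.Connection, nodes: dict, edges: dict) -> None:
    """Persist a NetworkX-style graph: `nodes` maps node id -> attr dict,
    `edges` maps (src, dst) -> attr dict.  Attributes go in as JSON text."""
    con.execute("CREATE TABLE IF NOT EXISTS nodes (id TEXT PRIMARY KEY, attrs TEXT)")
    con.execute("CREATE TABLE IF NOT EXISTS edges (src TEXT, dst TEXT, attrs TEXT)")
    con.executemany("INSERT INTO nodes VALUES (?, ?)",
                    [(n, json.dumps(a)) for n, a in nodes.items()])
    con.executemany("INSERT INTO edges VALUES (?, ?, ?)",
                    [(u, v, json.dumps(a)) for (u, v), a in edges.items()])
    con.commit()

con = sqlite3.connect(":memory:")
save_graph(con,
           nodes={"a": {"color": "red"}, "b": {}},
           edges={("a", "b"): {"weight": 1.5}})
```

Loading back into NetworkX would be a symmetric pair of SELECTs plus `json.loads`; the point is that the relational layer stays thin enough not to reintroduce the ORM overheads measured above.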
This is one of my favorites.  It was one of the quickest to make, but it was used a lot.  My mother has her favorite eyeglasses and is loath to change them.  However, over time, the arm loosened and they would constantly slide down her nose. Tightening the screws didn’t do anything anymore. So, I quickly designed a clip that slides over the frame, and has a tapered nub to apply pressure to the arm (printed in ABS, so it has some flexibility).  Guess you could call it an “eyeglass arm pretensioner attachment”.  She’s been using them for years, and asked for a pack, in case she loses one (printing a set of six takes about 15 minutes; the example in the photo is an early print in black, instead of brown).
Moving on to something I was loath to change: an IKEA cheese plate, which IKEA has long since stopped selling in the US (I guess cheese isn’t such a daily staple here, so it probably didn’t sell well). In fact, I did try to find a replacement, but failed. Unfortunately, one of the times it was dropped, one of the handles broke. Since it’s made out of polypropylene, it was impossible to glue. So, here’s what I did: I traced the outline of the handle with a pen on paper, then scanned it, digitally traced it using InkScape, and inset the inner long edge (so the cover could fit nicely; see photo inset). Saved as DXF, imported into OpenSCAD, and a quick ((scale + extrude) – extrude) expression gave the CAD model for what you see below (outer shell – inner volume that slides onto the remaining handle). Printed, and… perfect fit! We still use this.
Long time ago, we bought a folding shopping cart from Amazon.  These are great, except for one thing: the little plastic clip that holds it in the folded position has a tendency to fall off.  We finally lost the clip on the way to the supermarket.  Unfortunately, you can’t buy just the clip, and without it the cart won’t stay folded.  But, with a 3D printer, the solution is easy enough: measure the wire diameter with calipers, then a union of two cylinders and a cuboid, minus two cylinders and another two cuboids for the insertion cutouts, and… done!  In the photo, I quickly printed an arrow on a label printer, to indicate the side with the slightly wider cutout (easier to insert/remove, the other one is a very tight fit, in hopes that the replacement won’t fly off as often).  Printed in ABS for a little flexibility, works better than the original!
We don’t allow our cat in the bedrooms, but with the doors constantly closed, I sometimes felt like a prisoner.  So I decided to try making a “cat barrier door”.  I wanted this to fit outside the door proper, and be as minimally invasive to the door frame as possible. I used mosquito net frame extrusions and wire mesh to make the “door” (it needed to be strong enough for just a cat, so that was fine).  I used Japanese double hinges to mount it, and 3D printed clips to hold the hinges to the frame (easy: a difference of two cuboids to make a Π-shaped solid that fits over the frame, and is wide enough for the hinge plates).  I also needed a latch, but unfortunately I couldn’t find one that would sit flush enough to the frame.  So, I designed one: two parts make the casing, leaving a hollow channel where a third piece (with the latch tongue) slides up-and-down, tensioned by a small spring.  A quick spray with teflon lubricant made it slide super-nicely.  A pair of cheap neodymium magnets (one on the frame, one on the latch) hold the door closed.  Works great, strong enough for a cat (but humans did accidentally break it a couple of times; no big deal, you can always hit “print” for a replacement).
The barrier door worked fine for the cat for several months.  However, once my daughter got a bit older, she figured out how to open it (but was too young to understand that she should close it again), so the cat would come in.  Hence, I have since removed it, and don’t have any pictures of it mounted.  Instead, here is a render of the CAD models.
We always had hooks over doors, to hang stuff (e.g., towels in the bathroom).  However, our current apartment has doors which are nice and solid but, unfortunately, much wider than most.  So none of the hooks we had would fit, and I couldn’t find anything that would fit in any of the usual places (Target, Home Depot, Amazon, etc).  So… I just printed one.  While at it, I made the CAD model parametric (aka “customizable”), so people could easily adapt it to their door (or to other inventive uses!).  Although a very simple model (ok, it might have taken a little more than an hour to parameterize it, but not much more), it is also my most remixed design.
When my daughter was really young, we got one of those accordion-style play-yard fences.  We used it to separate a section of the living room, with a long, straight stretch of the fence.  Some members of the family were concerned that this long stretch was too wobbly.  So, I quickly whipped up these support legs, which tightly clip onto the fence’s frame.  Obviously they don’t provide structural support if someone were to, e.g., climb the fence (just saying… :), but they did stop the wobble and rattle quite successfully.
An easy “baby safety hack”: when my daughter was less than a year old, she discovered a game: slamming the IKEA Besta sliding doors.  To prevent her from doing that (and pinching her fingers), I quickly measured the aluminium channel dimensions, and made a tab that twists on tightly.  It took a test print and a minor iteration to get it tight enough.  Works great, very unobtrusive.
Yet another “baby safety hack”: a plastic piece that slides into the door lock hole, and has a protrusion long enough to prevent the door from closing.  That was for when my daughter was a few months older than in the previous hack, and had discovered the game of slamming room doors.  I remember that I actually whipped up this CAD model while holding my daughter on my lap (trying to prevent her from slamming the doors), in something like 10 minutes (later, I added a small hole on the handle, so we could tie a string and hang the thing from the door frame).
The above selection is rather random, and far from exhaustive.  Other quick hacks that come to mind, but are not shown above: ethernet switch wall-brackets, embossed name tags for daycare, replacement water bottle cap, a customized shape-sorter toy, a slide-on button cover for a Nexus 7 (to prevent babies from hitting the home button and getting upset :), replacement tube guard-cage for a headphone tube amp, etc.  I’m sure there are more that I forget.
Of course, there are also “hacks” that either took much longer than a few minutes to design (mostly stuff that needs to fit existing parts tightly, which takes a bit of trial-and-error, with a few test prints and re-iterations), or are relatively special-purpose.  For example, various enclosures (for, e.g., 3D printer, home-brew network video recorder, BusPirate, near-field mic), PSU covers, button faceplates, drawer knobs, mitre box, various jigs, OpenBeam cable clips, spool rollers and holders, iPhone dock for my car, adjustable clamp-on iPad stand for stationary exercise equipment, etc.
Finally, there are a number of things that I did not design myself but found on the web.  Some examples that we use all the time are a cool “keychain”, laptop stand, stroller clips, bag clips, solder spool holder, etc.  Then, we naturally also do plastic toys and trinkets (the “novelty” aspect, which has been a hit with my daughter… those cute darn squirrels, for example).  And, of course, I had to use the printer to build… another printer (isn’t that what everyone with a 3D printer uses it for, sooner or later? :)  It’s a modified Kossel, which has been working great for the last year; perhaps some day I’ll post about that “adventure”. :)
TL;DR: I went from the PCB on the left, to the device on the right, without ever leaving home. Design files are available here (caveat: I’m not an EE, but I sometimes play one on the web! :).
In addition to the plastic enclosure (designed and 3D printed at home), I also added a boost converter and a LiPo charge controller, so that the device can run off a LiPo battery and be recharged via a standard micro-USB port.  These days, a computer, the right tools, a fair amount of googling, and some common sense go a long way.  Much of this is made possible by standing on the shoulders of open source, both software (e.g., OpenSCAD and Slic3r) and hardware (e.g., Adafruit’s designs).  Also, CAD and common data formats make it easy to manufacture components, from circuits, to enclosures, to mechanical assemblies (an example of this in another post), with just a few mouse clicks (e.g., with a 3D printer or through online services like OSHPark).  Just, say, five years ago, very little of this would have been as easy as it is today.  Even Jonathan Jaglom, son of Stratasys’s chairman and CEO of Makerbot, seems to recognize this (via Hackaday), although he doesn’t actually say the “o” (for opensource) word.
Measuring things out. Instead of getting off-the-shelf breakout boards and jamming them in a large enclosure, I decided to streamline everything onto a single PCB, which would fit the overall round shape of the W-Ear. First, I needed precise dimensions of the W-Ear PCB.  Some information (microphone and mounting hole locations) is available on the W-Ear website, but I also needed the board outline and component locations to make the add-on LiPo PCB fit as tightly as possible.  Therefore, I scanned the W-Ear PCB on a flatbed scanner, and traced the outlines using Inkscape (an opensource vector drawing application).  After marking the locations of taller components (capacitors, transistor, and LM386 IC), I also drew the add-on board outline, saved it as DXF, and imported it into Eagle.
Designing the voltage regulator and charge controller PCB. Working with the W-Ear PCB imposed some constraints that are somewhat artificial, the most important of which is that the supply voltage needs to be 9V.  The LM386-4 has a minimum supply voltage of 5V, and I also wasn’t sure if the rest of the microphone array circuit would work properly with anything different.  A single-cell LiPo supplies about 3.7V, so a voltage converter was necessary.  I decided to go with the MIC2288, and basically copied the datasheet example circuit (following the component placement guidelines as much as possible).
Next, I needed a charge controller for the LiPo battery.  Adafruit has several, and I chose one of their older designs, based on the MCP73833 IC.  Since this is open source hardware, I could download the schematic, tweak it for my needs (e.g., remove a few headers I didn’t need, change some resistors and thermistors, and switch to an MSOP package so it’s easier to hand-solder), and then lay out my custom PCB.  Isn’t that nice?  In the meantime, I had chosen a couple of LiPo cells off eBay, and had them shipped from China.  Finally, I laid out the PCB, using the traced board outline and leaving empty space for the LiPo.
In the meantime, I also soldered the W-Ear board and printed my charge controller PCB on paper and cut it out, to make sure that the outline was correct and that it would fit snugly around the various components.  After tweaking the outline’s cutouts by a few fractions of a mm here and there, I shipped the design files off to OSHPark, to have a set of three prototype boards made.  Here are the bare boards (including the add-on fix; see below):
Designing the enclosure. While I was waiting for those to arrive (it takes about 10-14 days), I started designing the 3D printed enclosure, using the actual W-Ear PCB and the paper mockup of my PCB.  I made the enclosure’s CAD design parametric (e.g., total height, slack around the board, position and size of microphone, LED, and socket cutouts, etc), so I could easily tweak it.  A couple of test prints later, I was almost done.  The enclosure measures 79mm in diameter (basically constrained by the diameter of the W-Ear PCB), and 19.5mm thick, which is significantly thinner than would have been possible with the originally supplied 9V battery.  I was actually surprised to realize that the total height is constrained by the electrolytic caps, not by my extra PCB + LiPo “sandwich”!  Much better than I had expected.
One thing that bothered me was the huge volume knob that shipped with the W-Ear kit, so I quickly designed and 3D printed a smaller, nicer-looking one. Finally, somewhere at this point, I placed an order for all the necessary SMD components from DigiKey (these arrive quickly, in just a couple of days).
If you haven’t worked with 3D printing before, it can be like magic at first, but for me it’s now almost routine.  Although there are a number of details in designing a CAD model like this, I’m glossing over them.  Here is a render of the final CAD model for all enclosure pieces:
PCB mounting standoffs are part of the enclosure, and the tabs on the back cover (tapered, to make them less likely to break) are meant to hold the LiPo cell in place.
Assembly and initial testing. When everything arrived in the mail, I was ready to put everything together and test.  I assembled the charge controller board using hot air reflow soldering.  If you’re interested, there are several example videos on YouTube; here is one by Dave Jones, demonstrating on much smaller and trickier (QFN instead of MSOP) components than I used.  Everything fit together almost perfectly (except for the battery’s JST connector, which protruded by about 0.5mm and was easy enough to trim).  The “measure twice (or thrice, or more), cut once” mantra paid off, as usual.
Working around ripple issues. The circuit worked correctly the first time, much to my surprise (can you tell I have no EE training, or anything beyond high-school physics when it comes to circuits — e.g., see the redundant caps… :).  Except for one thing, which I had feared: there was too much ripple on the switching regulator’s output, and the W-Ear requires a very clean power supply.  After some googling, it seemed I had two options: (i) design an appropriate output filter, or (ii) add a linear LDO regulator after the switching regulator.  I decided against the first option, for two reasons.  First, it would probably take too much time (days?) and trial-and-error to get a clue about filter design.  Second, I wasn’t entirely sure that, even after all that, I’d end up with a passive low-pass filter with components small enough to fit in the enclosure.  Therefore, I searched DigiKey for an appropriate LDO, and came up with the ADP7102, which has a very high power-supply rejection ratio (PSRR; a term I hadn’t even heard of before :) and could probably serve as a kind of active filter in this case, I guess.  It ain’t cheap, but that wasn’t a concern, since this is a one-off circuit, mainly for fun.
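To get a feel for why a high-PSRR LDO can clean up switcher ripple, here is a back-of-the-envelope calculation.  The numbers below are made up for illustration (check the actual ADP7102 PSRR curve at your switching frequency before trusting anything):

```python
def ripple_after_ldo(ripple_in_mv, psrr_db):
    """Output ripple after an LDO that rejects supply noise by `psrr_db` dB."""
    # PSRR is a voltage ratio expressed in dB:
    #   Vripple_out = Vripple_in * 10**(-PSRR / 20)
    return ripple_in_mv * 10 ** (-psrr_db / 20)

# hypothetical: 50 mV of switcher ripple into 60 dB of rejection
print(ripple_after_ldo(50.0, 60.0))  # -> 0.05 (mV)
```

In other words, every 20 dB of PSRR knocks a factor of 10 off the ripple amplitude, which is why this works as a kind of “active filter”.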
Getting the PCB redone from scratch would cost quite a bit, so I decided to make a tiny add-on board (basically, a breakout for the ADP7102, plus the datasheet-recommended input and output caps), which could be soldered onto the main board with a pin header.  So, instead of paying $29 for another batch of the entire board, I paid only $1.50.  SMD components made the add-on small enough to stay below the top of the LiPo battery.  I designed this tiny board quickly and shipped the files off to OSHPark, once again.  When the boards came back, I assembled them (hot air reflow again), changed the feedback resistor on the switching regulator to increase its output voltage by about 0.2-0.3V (to compensate for the LDO’s dropout), and put everything together.  And it actually worked!  No more hiss and distortion.  Here’s what the final assembly looks like:
Almost everything you see in the picture (except the green printed circuit) was designed and manufactured “at home” (or at least without leaving it)!  Yay for opensource and CAD.
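As an aside, the feedback-resistor tweak mentioned above is just the standard adjustable-regulator arithmetic.  The 1.24 V reference and the resistor values below are illustrative assumptions, not the actual board values (consult the MIC2288 datasheet for the real feedback voltage):

```python
def boost_vout(v_ref, r_top, r_bottom):
    # adjustable switching regulator: Vout = Vref * (1 + R_top / R_bottom)
    return v_ref * (1 + r_top / r_bottom)

# hypothetical divider landing a bit above 9 V, to absorb the LDO dropout
print(boost_vout(1.24, 80.6e3, 12.4e3))  # -> ~9.3 (V)
```

Picking the top resistor from the E96 series is usually enough to nudge the output by a couple hundred millivolts, which is exactly what was needed here.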
There is one more shortcoming in the design: the switching+linear regulator portion is always enabled, and the quiescent current is enough to kill the battery within a few days, even if the W-Ear is switched off.  However, I didn’t want (or, rather, I was afraid?) to touch the W-Ear circuit in any way (e.g., by tapping into its volume potentiometer’s on-off switch).  I can live with this anyway.
The finishing touch was a piece of paracord (cut to length, inserted into the enclosure’s holes for it, then knotted and slightly melted with a lighter to make it stay put), so the finished device could be worn around the neck.  Mission accomplished!
Conclusion. This side-project was completed over time during the summer of 2014. If I had to guess how long it would have taken if I’d worked exclusively on this, I’d say less than a week (excluding the time waiting for PCBs, but including time spent googling, learning, and collecting all necessary information). Is this a finished product, or even production-ready?  No, but it’s a pretty darn convincing prototype (and would have been even more so if I hadn’t been too lazy to apply a coat of XTC-3D and spraypaint; one of these days :). More so if you consider that it was done in a short period of time, by someone who has no formal training in design or EE, largely by re-using opensource designs on the web, and relying on freely available tools!  And all of this without ever leaving home, and without any major investments in equipment!  Not bad.
The overview of SVMs was centered around the observation that the decision function is, eventually, a weighted additive superposition (linear combination) of evaluations of “things that behave like projections in a higher-dimensional space via a non-linear mapping” (kernel functions) over the support vectors (a subset of the training samples, chosen based on the idea of “fat margins”).
Most of the explanations and pictures were based on linear functions, but I wanted to give an idea of what these kernels look like, what their “superposition” looks like, and how kernel parameters change the picture (and may relate to overfitting).  For that I chose radial basis functions.  I found myself doing a lot of handwaving in the process, until I realized that I could whip up an animation.  Following that class, I had 1.5 hours during another midterm, so I did just that (Python with Matplotlib animations, FTW!!).  The result follows.
Here is how the decision boundary changes as the bandwidth becomes narrower:
For large radii, there are fewer support vectors and kernel evaluations cover a large swath of the space.  As the radii shrink, all points become support vectors, and the SVM essentially devolves into a “table model” (i.e., the “model” is the data, and only the data, with no generalization ability whatsoever).
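The “weighted superposition of kernel evaluations” is easy to see in code.  Here is a minimal numpy sketch of an RBF-SVM decision function, with hand-picked (not trained) support vectors and coefficients; increasing gamma (i.e., shrinking the bandwidth) makes each kernel bump cover less of the space, which is exactly the “table model” degeneration described above:

```python
import numpy as np

def rbf(x, svs, gamma):
    # K(x, x_i) = exp(-gamma * ||x - x_i||^2), one value per support vector
    return np.exp(-gamma * np.sum((x - svs) ** 2, axis=-1))

def decision(x, svs, dual_coefs, bias, gamma):
    # f(x) = sum_i (alpha_i * y_i) * K(x, x_i) + b
    return dual_coefs @ rbf(x, svs, gamma) + bias

# two hand-picked support vectors of opposite class (illustrative, not trained)
svs = np.array([[0.0, 0.0], [2.0, 0.0]])
dual_coefs = np.array([1.0, -1.0])  # alpha_i * y_i

# near each support vector, its own kernel bump dominates the sign of f(x)
print(decision(np.array([0.1, 0.0]), svs, dual_coefs, 0.0, gamma=1.0))  # > 0
print(decision(np.array([1.9, 0.0]), svs, dual_coefs, 0.0, gamma=1.0))  # < 0
```

The decision boundary in the animations is just the zero level-set of this `decision` function, evaluated on a grid.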
This decision boundary is the zero-crossing of the decision function, which can also be fully visualized in this case.  One way to understand this is that the non-linear feature mapping “deforms” the 2D plane into a more complex surface (where, however, we can still talk about “projections”, in a way), such that I can still use a plane (z=0) to separate the two classes.  Here is how that surface changes, again as the bandwidth becomes narrower:
Finally, in order to justify that, for this dataset, a really large radius is the appropriate choice, I ran the same experiments with multiple random subsets of the training data and showed that, for large radii, the decision boundaries are almost the same across all subsets, but for smaller radii, they start to diverge significantly.
Here is the source code I used (warning: this is raw and uncut, no cleanup for release!).  One of these days (or, at least, by next year’s class) I’ll get around to making multiple concurrent, alpha-blended animations for different subsets of the training set, to illustrate the last point better (I used static snapshots instead) and also give a nice visual illustration of model testing and ideas behind cross-validation; of course, feel free to play with the code. ;)
Despite the daily buzz around 3D printing, very few studies have looked at the digital content of physical things, and the processes that generate it.  I collected data some time ago, and started off with this visualization, which I wrote about before.  A further initial analysis of the data has some interesting stories to tell.
Exponential growth rates. The total number of things over time (blue) exhibits exponential growth, with a compound doubling time of 6.1 months.  Furthermore, if we consider only remixes (green), the growth rate far outpaces the overall rate, with a compound doubling time of 4.6 months.  Consequently, the relative ratio of remixes is also growing at an exponential pace (red) and, although this obviously cannot continue forever, there is little evidence that the growth rate of remixing is abating (in fact, after the introduction of the Thingiverse Customizer, which is excluded from this plot, the rate has picked up even further).
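For the record, a compound doubling time is just \(\ln 2\) divided by the continuous growth rate.  A quick sanity check, with made-up counts chosen to be consistent with the 6.1-month figure above:

```python
import math

def doubling_time(n0, n1, months):
    # continuous compound rate r from N(t) = N0 * e**(r*t);
    # doubling time is then ln(2) / r
    rate = math.log(n1 / n0) / months
    return math.log(2) / rate

# hypothetical: 4x growth over 12.2 months implies doubling every 6.1 months
print(doubling_time(1000, 4000, 12.2))  # -> 6.1
```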
Popularity: views vs. likes vs. makes.  The following table summarizes the results of least-squares regression on measures of user actions, showing the top-3 best predictive features (\(p < 0.01\), ranked by \(t\)-test scores) with 95% confidence intervals of the corresponding regression coefficients, as well as the bottom-2 worst features.
| Variable | Best predictors | Worst predictors |
|---|---|---|
| \(\mathit{\#Views}\) | \(\mathit{\#Likes}\!: 43.1\text{–}44.6, \mathit{\#DLs}\!: 0.35\text{–}0.38, \mathit{\#Views}'\!: 0.28\text{–}0.31\) | \(\mathit{\#Make}'\, (p=0.48), \mathit{\#Remix}'\, (p=0.06)\) |
| \(\mathit{\#DLs}\) | \(\mathit{\#Likes}\!: 43.1\text{–}44.6, \mathit{\#DLs}\!: 0.35\text{–}0.38, \mathit{\#Views}'\!: 0.28\text{–}0.31\) | \(\mathit{\#Remix}\, (p=0.66), \mathit{\#Remix}'\, (p=0.51)\) |
| \(\mathit{\#Likes}\) | \(\mathit{\#Views}\!: 0.006, \mathit{\#Make}\!: 2.72\text{–}2.83, \mathit{\#Likes}'\!: 0.42\text{–}0.46\) | \(\mathit{\#Remix}'\, (p=0.59), \mathit{\#DLs}'\, (p=0.27)\) |
| \(\mathit{\#Makes}\) | \(\mathit{\#Likes}\!: 0.074\text{–}0.077, \mathit{\bf\#Files}\!: -0.13\text{–}{-0.11}, \mathit{\#Makes}'\!: 0.28\text{–}0.33\) | \(\mathit{\#Remix}'\, (p=0.99), \mathit{\#DLs}'\, (p=0.51)\) |
| \(\mathit{\#Remix}\) | \(\mathit{\#Views}\!: 0.0003, \mathit{\bf\#Remix}'\!: 0.18\text{–}0.27, \mathit{\bf\#Sources}\!: 0.19\text{–}0.39\) | \(\mathit{\bf\#Make}'\, (p=0.71), \mathit{\#DLs}\, (p=0.66)\) |
The relative incidence of user actions depends on the relative effort required to take those actions.  Indeed, we observe that roughly (order of magnitude) 100 views “contribute” one like in our linear models, and roughly 10 likes “contribute” a make.  The first is not particularly surprising.  However, the fact that only 10\(\times\) as many likes “contribute” a make seems to suggest that users are actively seeking things, and have the means and motivation to actually print things that they have liked.
Another intuitive, in retrospect, observation is that the number of files has a negative effect on makes. This provides evidence for the hypothesis that simpler things (consisting of fewer parts) are more likely to be made.
Sublinearities and power-laws. The first figure below shows the number of likes vs. makes, and the second figure shows views vs. likes (both smoothed using exponential-size buckets).  The emerging relationships are that \(\mathit{\#Likes} \propto \mathit{\#Makes}^{0.70}\) and \(\mathit{\#Views} \propto \mathit{\#Likes}^{0.85}\).  Similar relationships have been observed in other domains.  However, if we look at remixes vs. makes, no such pattern emerges, which brings us to a last point.
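Those exponents come from a straight-line fit in log-log space.  A minimal sketch with synthetic data (the real analysis used the bucket-smoothed counts, not raw pairs):

```python
import numpy as np

# synthetic counts following likes = 5 * makes**0.70 exactly
makes = np.array([1.0, 10.0, 100.0, 1000.0])
likes = 5.0 * makes ** 0.70

# a power law y = c * x**k is linear in log-log space:
#   log y = k * log x + log c
k, log_c = np.polyfit(np.log(makes), np.log(likes), 1)
print(k)  # recovers the exponent, ~0.70
```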
Popular vs. Generative.  Perhaps the most surprising observation is that typical measures of general popularity have little relation to whether a thing is remixed or not: (i) makes are, in fact, the worst predictor of number of remixes (table and last figure above); and (ii) in fact, the number of remixes is a bad predictor of almost everything, except of other remixes (table above). This suggests that aspects of a design that make it broadly appealing are distinct from aspects that make it inspiring and, furthermore, agrees with the author’s personal experience that following remix links is more useful when looking for ideas, than when looking for utilitarian or fun things to print.
What next?  As a “bonus”, here is a visualization of the evolution of the largest connected component of the remix graph (with Customizer outputs excluded).  The last frame is essentially the same data as in our interactive visualization.  This video was hacked together using Matplotlib’s basic animation facilities and laid out using a simple breadth-first traversal of the graph.  Not as pretty as it could be, but it still shows an interesting picture.
The kit is great out of the box but, of course, I had to add some of my own tweaks.  First was varnishing, to make it look even prettier: pre-stain conditioner, then three coats of Minwax Red Oak stain, and finally four coats of polyurethane.  Considering this was my second varnishing job ever (and the first in a decade), it went pretty well.
Next, I designed a 3D-printable button cover plate for the monitor control panel, which sits in a cutout under the marquee and above the monitor.  Nothing too fancy, but it gets the job done nicely and makes the kit look even cleaner.
Finally, a minor annoyance was that the Kickstarter version of the kit needed two power supplies: one 12V for the display and audio amp, and another 5V for the Raspberry Pi.  Furthermore, there was no power switch, so turning the cabinet off meant pulling two wall-wart adapters from the wall socket (the newer version uses 5V for everything, but it still has no power switch, I believe).  For some time I had been looking for an excuse to design a microcontroller circuit from scratch, and also play with surface-mount (SMT) components, so I designed a smart power switch board.  I got the PCBs made on OSHPark, wrote the firmware using the Arduino libraries, and flashed the MCU using a Bus Pirate.  Given that my PCB design education is 100% from Google, this went pretty well too.  Now I have a single pushbutton that works ATX-style, and all power is turned off when the Pi shuts itself down.
While at it, I added a physical volume control button, using the extra GPIO pins available on header P5, plus a simple Python daemon running on the Pi.  All hardware and software is available here.
Now for some retro gaming (Keystone Kapers, 1941, Arkanoid… ah, the memories)!
Perhaps the NVR industry is ripe for “disruption”, but I wasn’t willing to wait. Last time I did that (for car stereos) was almost three years ago… and I’m still waiting.  Luckily, an NVR is a much simpler build than a custom car stereo (this was enough for me, thank you :).  There are several low-cost hardware options and ZoneMinder is a great open-source surveillance system that was originally built to scratch an itch (the original author’s power tools were stolen from his garage, and he couldn’t find any reasonably-priced commercial surveillance solutions he liked).  Here is what I got after about a day:
In addition to some familiarity with installing Linux, a 3D printer, and my case design from Thingiverse, you’ll also need:
Total cost comes to $120 if you have some spare parts around, or about $140 if you get everything and add shipping too.  That’s about half the price of just the software licenses for a NAS box, and an order of magnitude cheaper than NVR boxes on the market.  Plus, there are CPU cycles to spare for more cameras, and it leaves the ReadyNAS Atom CPU free to handle its main tasks (file and media serving).
Although I love the Raspberry Pi and already have a couple for various tasks, I went with the Cubieboard since it has a much more powerful CPU (AllWinner A20 dual-core ARM) and built-in SATA, for just $20 more.  Adding a powered USB hub and SATA-to-USB adaptor to a RasPi would probably have cost more (plus require funky wiring solutions); the Cubieboard was mostly plug-and-play.
The A20 can handle all four cameras in “modect” mode (motion detection triggered recording) at 1fps with one alarm zone per camera, without problems.  The load average can be high (between 0.5 and 1.5), probably due to the continuous I/O, but actual utilization per core seems to peak around 20-25% and is typically in the single digits.  Not bad at all for a low-power (10W max) single-board computer!
There are several Linux distributions for the Cubieboard (including Android) and the documentation is a bit messy, so I installed Linux a few times before I settled on Cubian (basically Debian Wheezy, in the spirit of Raspbian), which is great.  It can be installed on either an SD card or the built-in NAND flash (I went with the former).  There are already DEBs for ZoneMinder, so this is a fairly standard Linux install.  The only additional steps were moving the data directories for ZoneMinder and MySQL, as well as temporary files and logs (to minimize flash wear), over to the hard drive; see the brief instructions on Thingiverse.
If you have a 3D printer and some basic Linux skills, perhaps this might save you a few hundred to a few thousand dollars. YMMV with other video formats (e.g., H.264 HD cameras). Let me know if it works for you.
If you haven’t heard of it before, 3D printing refers to a family of manufacturing methods, originally developed for rapid prototyping, the first of which appeared almost three decades ago.  Much like mainframe computers in the 1960s, professional 3D printers cost up to hundreds of thousands of dollars.  Starting with the RepRap project a few years ago, home 3D printers are now becoming available, in the few hundred to a couple of thousand dollar price range.  For now, these are targeted mostly at tinkerers, much closer to an Altair or, at best, an Apple II, than a MacBook.  Despite the hype that currently surrounds 3D printing, empowering average users to turn bits into atoms (and vice versa) will likely have profound effects, similar to those witnessed when content (music, news, books, etc) went digital, as Chris Anderson eloquently argues with his usual, captivating dramatic flair.  Personally, I’m about as excited about this as I was about “big data” (for lack of a better term) around 2006 and mobile around 2008, so I’ll take that as a good sign. :)
One of the key challenges, however, is finding things to print!  This is crucial for 3D printing to really take off.  Learning CAD software and successfully designing 3D objects takes substantial time, effort, and skill.  Affordable 3D scanners (like the ones from Matterform, CADscan, and Makerbot) are beginning to appear.  However, the most common way to find things is via online sharing of designs.  Thingiverse is the most popular online community for “thing” sharing.  Thingiverse items are freely available (usually under Creative Commons licenses), but there is also commercial potential: companies like Shapeways offer both manufacturing (using industrial 3D printers and manual post-processing) and marketing services for “thing” designs.
I’ve become a huge fan of Thingiverse.  You can check out my own user profile to find things that I’ve designed myself, or things that I’ve virtually “collected” because I thought they were really cool or useful (or both). Thingiverse is run by MakerBot, which manufactures and sells 3D printers, and needs to help people find things to print. It’s a social networking site centered around “thing” designs. Consequently, the main entities are people (users) and things, and links/relationships revolve around people creating things, people liking things, people downloading and making things, people virtually collecting things, and so on. Other than people-thing relationships, links can also represent people following other people (a-la Twitter or Facebook), and things remixing other things (more on this soon). Each thing also has a number of associated files (polygon meshes for 3D printing, vector paths for lasercutting, original CAD files—anything that’s needed to make the thing).
The data is quite rich and interesting.  I chose to start with the remix relationships.  When a user uploads a new design, they can optionally enter one or more things that their design “remixes”.  In a sense, a remix is similar to a citation, and it conflates a few related meanings.  It can indicate an original source of inspiration; e.g., I see a design for 3D printable chainmail and decide that I could use a similar link shape and pattern to make a chain link bracelet.  I could design the bracelet from scratch, using just the chainmail idea, or perhaps I could download the original chainmail CAD files (if their creator made them available) and re-use part of the code/design.  A remix could also indicate partial relatedness: I download and make a 3D printer (yes, it’s possible, if you have the time—or, in this case, you can buy it instead) and decide to design a small improvement to a part.  Finally, a remix may indicate use of a component library (e.g., for embossed text, gears, electronic components, and much more).
Remix links can also be created automatically by apps.  Like any good social networking platform, Thingiverse has an API for 3rd party apps.  The most popular Thingiverse app is the Customizer: anyone who can write parametric CAD designs may upload them and allow other users to create custom instances of the general design by choosing specific parameter values (which can be dimensions, angles, text or photos to emboss, etc).  For example, the customizable iPhone case allows you to choose your iPhone model, the case thickness, and the geometric patterns on the back.  Another popular parametric design is the wall plate customizer, which allows you to choose the configuration of various cutouts (for power outlets, switches, Ethernet jacks, etc) and print a custom-fit wallplate.  A parametric design is essentially a simple computer program that describes a solid shape (via constructive solid geometry and extrusion operators).  The Customizer will execute this program and render the solid on a server, generating a new thing, which will automatically have a remix link to the original parametric design.
So let’s get back to the remix relationship.  While I was waiting for my 3D printer to arrive, I spent some time browsing Thingiverse.  I noticed that I was frequently following remix hyperlinks to find related things, but following a trail was getting tedious and I was losing track.  So, I decided to make something that gives a bird’s-eye view of those relationships.  What are people creating, and how are they reusing both ideas and their digital representations?  Last week I hacked together a visualization (using the absolutely amazing D3 libraries) to begin answering this question.  Here is the largest connected component of the remix graph, which consists of about 3,500 things (nodes).  If you think about it, it’s pretty amazing: more than 5% of the things (or at least those in my data) are somehow related to one another.  It may not seem like much at first, but check out the variety of things and you’ll see some pretty unexpected relationships (direct or indirect).
Clicking on the hyperlink or the image above will take you to an interactive visualization (if you’re on an iPad, you may want to grab your laptop for this component; D3 is pretty darn fast, but 3,500 nodes on an iPad is pushing it a bit :).  You can click-drag (on a blank area) to pan, and turn your scroll wheel (or two-finger scroll on a touchpad, or pinch on an iPad) to zoom. Nodes in red are things that a site editor/curator chose to feature on Thingiverse. Each featured thing is prominently displayed on the site’s frontpage for about a day. Graph links (edges) are directed and represent remix relationships (from a source thing to a derived thing).  If you mouse over a node, you’ll see some basic information in a tooltip, and outgoing links (i.e., links to things inspired or derived from that node) will be highlighted in green, whereas incoming links will be highlighted in orange. You can open the corresponding Thingiverse page to check out a thing by clicking on a graph node.  Finally, on the right-hand panel you can tweak a few more visualization parameters, or choose another connected component of the remix graph.
Before moving on to other components, a few remarks on the graph above: Although cycles are conceivable (I see thing X and it inspires me to remix it into thing Y, then the creator of X sees the remix action in his newsfeed, checks out my thing, and incorporates some of my ideas back into X, adding a remix link annotation in the process), it seems that this never happens: the remix graph (or at least the fraction in this visualization, which is substantial) is, in practice, a DAG (directed acyclic graph). Next, many of the large star subgraphs are centered around customizable (parametric) things; for example, the large star on the left is the iPhone case (noted above) and its variations. Most of the remixes are simple instances of the parametric design, but some sport more involved modifications (e.g., cases branded with company logos). However, not all stars fall in this category. For example, the star graph with many red nodes near the bottom left is centered around a 3D scan of Stephen Colbert, made on the set of the show. This has inspired many remixes, into things like ColberT-Rex, or Cowbert. Most of these remixes have one parent node, but some combine more than one 3D model; for example, a cross between Colbert and the Stanford bunny is the Colberabbit, and a cross between Colbert and an octopus is Colberthulu. The original Colbert scan and most of its remixes were featured on Thingiverse’s frontpage (apparently the site editors are huge Colbert fans?).
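The DAG observation is easy to check mechanically once you have the edge list. Here is a minimal sketch in plain Python (the edge lists below are made-up placeholders, not the real Thingiverse data):

```python
from collections import defaultdict

def is_dag(edges):
    """True if the directed graph given as (source, derived) pairs has no cycle.
    Iterative DFS with three colors: unvisited, on the current path, done."""
    graph = defaultdict(list)
    nodes = set()
    for src, dst in edges:
        graph[src].append(dst)
        nodes.update((src, dst))
    WHITE, GRAY, BLACK = 0, 1, 2
    color = dict.fromkeys(nodes, WHITE)
    for start in nodes:
        if color[start] != WHITE:
            continue
        color[start] = GRAY
        stack = [(start, iter(graph[start]))]
        while stack:
            node, children = stack[-1]
            for child in children:
                if color[child] == GRAY:
                    return False  # back edge: a remix cycle
                if color[child] == WHITE:
                    color[child] = GRAY
                    stack.append((child, iter(graph[child])))
                    break
            else:  # all children explored
                color[node] = BLACK
                stack.pop()
    return True
```

For example, `is_dag([("X", "Y"), ("X", "Z"), ("Y", "Z")])` is true, while adding an edge back from Z to X would make it false.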
So, anyway, how about the other connected components? The distribution of component sizes follows a power law (again, click on the image for an interactive plot—singleton components are not included), no surprises here:
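For reference, the component sizes behind a plot like this can be computed with a single union-find pass over the edge list (treated as undirected); a sketch with a placeholder edge list, not the actual crawl data:

```python
from collections import Counter

def component_sizes(edges):
    """Sizes of connected components (edges treated as undirected), largest first."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb  # merge the two components

    return sorted(Counter(find(n) for n in list(parent)).values(), reverse=True)
```

Feeding the resulting size list into a log-log rank/size plot is then a one-liner in any plotting library.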
Components beyond the giant one are also interesting (as always, click on each image for the interactive visualization).  For example, the component on the left below consists of things inspired by a 3D-printable chainmail design, which also includes things like link bracelets, etc.  The component on the right contains various designs for… catapults!
Some components contain pretty useful stuff, such as the one with items for kids’ parties (e.g., coasters, cookie cutters) — on the left. Since many people in the community are tinkerers, there are many 3D-printable parts for… 3D printers! An example is the component on the right, which is centered around the design files for the original MakerBot Replicator, and around it are related items (like covers and other modifications).
Other components contain cool, geeky things, such as the small but well-featured component on the left, with figures and items from the Star Wars universe (including Darth Vader, as well as Yoda, remixed into a “gangsta” and other things). Finally, not all components consist of 3D-printable things. The component on the right has designs for lasercutting plywood so it can be folded, which was remixed into book covers, Kindle covers, and other things:
All this is just a fraction of what’s out there. Thingiverse is also growing at an amazing pace: around March, when I collected some of this data, there were about 60,000 things and now there are over 100,000 things (the latter number is based simply on what appear to be linearly assigned thing IDs). That’s two-thirds growth in four months, or a doubling time of roughly five and a half months; the exponential trend is going strong! This is quite impressive given the small (but fast-growing) size of the home 3D printer market.
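As a back-of-the-envelope check of that growth rate (using the 60,000 and 100,000 figures quoted above, and treating the elapsed time as an even four months):

```python
import math

things_march, things_now = 60_000, 100_000
months = 4  # approximate elapsed time between the two counts

# Assuming constant exponential growth, N(t) = N0 * g^t:
monthly_growth = (things_now / things_march) ** (1 / months)
doubling_time = math.log(2) / math.log(monthly_growth)

print(f"monthly growth factor: {monthly_growth:.3f}")  # about 1.14x per month
print(f"doubling time: {doubling_time:.1f} months")    # about 5.4 months
```

The exact numbers depend on how precisely you pin down the dates, but the order of magnitude is clear.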
Visualizing just the remix aspect of the Thingiverse is a start. For example, another thing I found myself doing when browsing Thingiverse is following indirect same-collection links (rather than direct remix links) to find related items. Once I get over gaping at the graph and all the stuff on Thingiverse (some of which I’ve printed on my Solidoodle), there are a few things to try in terms of data/graph properties as well as in terms of improving the visualization as a tool to browse the Thingiverse and discover interesting and useful things more effectively. If anyone is actually reading this :) and there is something you’d like to see, please chip in a comment.
Postscript: My favorite cluster among those I spotted in the visualization is probably the one related to Colbert (see above), with the Colberabbit (“a godless killing machine”) a particular favorite. I’ll be printing one of those soon. :)
I generally like to make things (I used to say “build” things, but that was misconstrued by some manager/academic types, who apparently have a very different definition of “to build”), whether it’s software, writing, or “hardware”. I usually talk about the first, but I occasionally do the last (much to the dismay of my wife, who has nonetheless been very patient! :). I also like to try new things–I probably care more about the process and experimentation, learning what’s possible and how to do it, than the final product (which is not to say that I don’t care about the final product at all, but it gets boring pretty quickly for me). So, sometime last year I decided to upgrade my car speakers (I also Dynamat-ted all doors, but I didn’t take photos of that adventure; one tip, though: make sure you sit down properly, because after crouching down on tiptoe for almost an entire day, I needed physiotherapy for my heel tendon :) — now my Subaru’s doors sound like a Mercedes when you shut them). However, the new tweeters were much larger than the factory-installed ones, so I took the opportunity (excuse?) to learn fiberglassing and make new tweeter pods.
Let me set the mood by starting with my outfit: I started with the one on the left, but after plenty of PVC dust, fiberglass dust, and acetone fumes, I upgraded to the one on the right. A proper respirator helps a lot, especially if you’re working indoors. And don’t skip the safety glasses (even if you’re wearing prescription glasses, as I found out). Always take the proper safety precautions.
I did not build the entire sail panel from scratch, of course (I didn’t have a 3D printer back then but, even now that I do, decent 3D scanners are still not cheaply available, although that’ll hopefully change soon). I modified an OEM sail panel (I got the cheap ones, which aren’t designed to house factory tweeters, since I was going to cut them up anyway). The tweeter pod was fabricated from a piece of common PVC pipe. Cutting those can be a bit messy, but the Dremel is a fantastic tool. If you’re thinking of fabricating anything and you’re going to buy just one tool, it should be a Dremel. In fact, just stop reading and go over to Amazon and order one now! [No, Bosch is not paying me for this]
I used a cutting disk for the PVC pipe. Go slow to avoid melting the PVC too much and cut the piece a little longer, then sand off the molten bits of PVC (if any). This will give you a nice, clean edge. Make sure to use a sanding block (or put the sandpaper on a flat surface) to get a good edge. I used masking tape to mark the cutting path; getting this straight saves you some sanding. Also, instead of going completely freehand, I like to put both the tool and pipe on a steady surface, and then just slowly roll the pipe until it touches the disk; makes cutting much easier to control. I also notched one edge with a sanding band (which is what is attached to the tool in the picture on the left), ostensibly to route the wires, but that turned out to be unnecessary.
For the OEM sail panels, I used a cutting bit and did freehand cutting (picture on the right).  You don’t need to be terribly precise anyway.  Make sure you move the tool in the right direction, otherwise cutting will be difficult and the bit may kick back (although, even if you gouge the piece by accident, it’s not a big deal at this stage—and then you’ll know which direction to move along :).
The PVC pipe inner diameter turned out to be a tad small, but the Dremel with a sanding band was enough to fix this. Again, I put both the tool and pipe on a steady surface, and just rolled the pipe around the spinning band. You can also use a prop on the outside part of the pipe, to make sure you get a (relatively) consistent depth, if your hand isn’t particularly steady (mine isn’t, but you can always devise a jig or prop to compensate). The first picture below shows the original pipe (I used the two scrap pieces for practice) and the thinned-down pieces are in the sail panels.
As I mentioned earlier, this does not need to be a precise job, since everything will be covered in fiberglass eventually. However, you want to make sure that the tweeter pods (a.k.a. the PVC pipe) are aimed in the right direction. Go down to your car and test, then use a pen to mark the pipe around the edges where it touches the sail panel hole. Then go back and hot-glue the pipe temporarily in place. This should be strong enough to prevent it from moving, so you can go back to your car again and double-check (and triple-check) the aiming.
It’s crucial to get this right, as you obviously cannot re-aim later. So, measure twice (or more)! Also, it helps if your tweeters are already broken in when you do the aiming. Mine were pretty harsh for the first couple of months (to the point that I was wondering how reviews could be so good), but eventually became pretty mellow. I did all this just a couple of weeks after using them in temporary mounts (I thought this would be enough to break them in; apparently I was wrong). Now I’m pretty happy with the tweeters’ sound quality, but I would have aimed slightly differently if I had done this later (still not too bad, though). So, if you don’t mind your car looking like a wreck, take your time breaking them in before you do all this.
Once you have the aiming down, it’s time to give the sail panel its final shape. You do this by stretching something that can absorb the resin over the frame that you’ve essentially created. A common material for jobs like this is speaker grille cloth, which you can get pretty cheaply on eBay or Amazon. Another popular material for bigger surfaces seems to be a fleece blanket. However, for this small job, the speaker grille cloth is perfect: it’s very thin, it’s quite strong, stretches nicely, and follows the contours well. It’s also pretty absorbent (enough for the glue and resin).
You will need to superglue the cloth on the frame, but you’ll need activator to make it work. You can usually find a package that has both the cyanoacrylate glue and activator at any hardware store. By the way, this works great for any hard-to-bond surfaces (not just speaker cloth on plastic :). The bond may not be strong long-term, depending on the materials, but in this case it doesn’t matter (it’s pretty good actually). Make sure to stretch the cloth enough to get rid of any kinks. The good thing about speaker grille cloth is that it’s pretty stretchy, so even if you have kinks after you’ve partially glued some sides, you can still get rid of them by just stretching harder. You may need a bit of practice doing all this, as the activator evaporates quickly, but the glue sets almost instantly if it touches it. So, you may want to test in which order you apply glue and activator, but it’s not very difficult to get right after a couple of trials. Plus, you can always rip it off, scrape the glue, and start over. Once the bond is good, cut off the excess cloth with a sharp craft knife.
Now you’re ready for the actual fiberglassing! First, you apply some resin on the speaker grille cloth, without any fiberglass mat. In my case the first layer I applied was too thin (you can clearly see the speaker cloth), so I went for one more layer before the mat. You want it to be nice and thick (to fully cover the cloth), but not too thick (you can always sand it down, but resin takes forever to sand by hand if you overdo it).
After the first layer has cured, you can move on to the second layer, with the fiberglass mat. First, I sanded down the resin a little bit, and cleaned off the wax release agent (apparently resins contain a small amount?). Then, place pieces of fiberglass mat and dab resin on them with a paintbrush (I used a cheap $1 paintbrush, 1/2in wide—you may want to pick up a couple of those; resin can set fast, especially when you don’t know what you’re doing in the beginning). One thing I found out the hard way: don’t be lazy: use a stick to properly mix the hardener into the resin, not your brush! Otherwise, the brush will absorb the hardener (capillary action), which will have two undesirable effects. First, the resin on your brush will set and render your brush useless. Even worse, the resin that you put on the piece won’t have enough hardener, and it will take forever to cure.
Once the resin sets, one more round of light sanding and cleaning off the release agent. If you have excess resin around the pipe edges, you can also use the sanding band on the Dremel to shape it a bit (as I said before, cured resin is pretty hard and large pieces take forever to sand down by hand). Take it easy and slow with the Dremel though. Also, make sure to wear your respirator (fiberglass dust is particularly fine and gets everywhere).
You can do another mat layer, if you want, although that much strength is probably not needed here. Finally, you’re ready for filling. I used common Bondo body filler from an automotive shop. This thing cures pretty quickly, so make sure you mix just the amount you need. In my case, I thought I’d mix more, but my first batch hardened before I even got to the second pod. Admittedly, I was also pretty slow spreading the filler, since it was my first time. However, filler is pretty easy to sand, so go ahead and make a mess (within reason).
Once the filler sets, you’re ready for the final sanding. It’s crucial that you use a sanding block, to get a nice surface shape (don’t use handheld sandpaper, or you’ll get a funny surface with grooves etc). Do the usual progressively finer grit sanding. I also used wet sandpaper at the final stages. Finally, spray paint everything (I used gray primer, then black paint that matched the OEM trim). Needless to say, don’t do this indoors, unless you want graffiti on your walls (I guess you could use plenty of sheeting and open windows, but it’s just easier to go outdoors). I used a skewer taped to the backside to hold the pieces while spraying. I also covered the foam strips pre-installed on the sail panels with masking tape. Follow the instructions on the paint can for drying times, etc. This was also my first time using canned spray paint, but no surprises here. Just take it slow and don’t start spraying too close (you can always move closer and/or spray on more paint, but not the opposite–however, this is just common sense).
So, after all this, here is the final result. The picture on the left shows the OEM sail panel for factory tweeters and my hand-made, fiberglassed sail panels. The picture on the right shows the sail panel and tweeter installed. If you look closely, you’ll see some small surface imperfections (I think it’s because I didn’t use a sanding block on the last, light pass with fine-grit sandpaper–it was getting late and I was getting tired and impatient), but not too bad for a first fiberglassing job!
Postscript: Of all places, I got excellent fiberglassing advice at… an NSF review panel, from a university professor who turned out to be a machinist and body shop worker (maintaining an antique car collection) in his former life! Wow!
If it’s technically possible to infer my identity (given a long enough period of observation, and enough resources and time to piece the various, possibly inaccurate, pieces of information together), someone (with enough patience and resources) will likely do it. Therefore, as the amount of data about me tends to infinity (which, on the Internet, it probably does), the fraction that I have to hide in order to maintain my privacy tends to one: you have long-term privacy only if you never reveal anything.  There are various ways of not revealing anything.  One is to simply not do it.  Another might be to keep it to yourself and never put it in any digital media.  Yet another might be encrypting the information.
However, not revealing anything isn’t really a solution (if a tree falls in the forest and nobody hears it… the tree has privacy, I guess). There is an alternative, of course: precise access control. Your privacy can be safeguarded by a centralized, trusted gatekeeper that controls all access to data. This leads to something of a paradox: guaranteeing privacy (access control) implies zero privacy from the trusted gatekeeper: they (have to) know and control everything. Many people are still confused about this. For example, a form of this dichotomy can be seen in people’s reactions towards Facebook: on one hand, people complain about giving Facebook complete control and ownership of their data, but they also complain when Facebook essentially gives up that control by making something “public” in one way or another. [Note: there is the valid issue of Facebook changing its promises here, but that’s not my point—people post certain information on Facebook and not on, say, Twitter or the “open web” precisely because they believe that Facebook guarantees them access control which, by the way, is a very tall order, leading to confusion on all sides, as I hope to convince you.]
Although I learned not to worry about what can be inferred about me, I am perhaps somewhat worried about knowing who is accessing my data (and making inferences), and how they are using it. Particularly if this is done by parties that have far more resources and determination than myself. However, who uses my information and how is also another piece of information (data) itself. Although everything is information, there seems to be an asymmetry: when my information is revealed and used, it may be called “intelligence”, but when the information that it was used is revealed, it may be called “whistleblowing” or even “treason”. This asymmetry does not seem to have any technical grounding—one might make valid arguments on political, legal, moral, etc. grounds, but not on technical grounds. Seen in this context, Zuckerberg’s calls for “more transparency” make perfect sense—he’s calling for less asymmetry.
More generally, privacy does not really seem to be a technical problem, much like DRM isn’t really a technical problem.  That privacy can be guaranteed by technical means seems to be a delusion and, perhaps, a dangerous one, because it gives a false sense of security. Privacy is, for the most part, a social, political and legal problem about how data can be used (any and all data!) and by whom. The apparent technical infeasibility of privacy had led me to believe that people will, eventually, get over the idea. After all, privacy is a 200-300 year old concept (at least in the western world; interestingly, Greek did not have a corresponding word until very recently). I may have missed something obvious, however: if privacy is attainable via a centralized, trusted gatekeeper, then perhaps privacy is the “killer app” for centralization and “walled gardens”. “I want full control over your data” is tougher to sell than “I want to protect your privacy”. Which is why Eric Schmidt’s recent backpedaling is somewhat worrying, even if the goal is noble (and there currently isn’t any evidence to believe otherwise).
I don’t think there are any (technical) solutions to privacy. Also, enforcing transparency is perhaps almost as hard as enforcing privacy, although I have slightly more hope for the former—but that’s a separate discussion. Privacy is a cat-and-mouse game, much like “piracy” and DRM. However, our expectations should be tempered by the reality of near-zero-cost transmission, collection, and storage of “infinitely” growing amounts of information, and we should perhaps re-examine existing notions of privacy in this light. I find that many non-technical people are still surprised when I explain the simple example in the opening paragraph, even though they consider it obvious in retrospect.
Personally, I find it safer to just assume that I have no privacy. Saves me the aggravation.
There once was a large family, with many brothers, uncles, and cousins spread over many different places. Each of them led their own lives. The extended family spanned all sorts of lifestyles, from successful businessmen, dignified and well-dressed, to smart but somewhat irresponsible bon viveurs. They lived in many different places and they occasionally exchanged gifts and money, some more frequently than others (admittedly, this part is rather weak in its simplicity, but a single analogy can only be taken so far). But they were getting tired of running to Western Union, paying transaction fees, losing money on currency conversions due to volatility in exchange rates, and so on. Furthermore, some of the more powerful family members had gotten into nasty feuds (world wars).
So, under the leadership of some of the more powerful siblings (Germany and France) they thought: well, we have enough money to go down to an international bank and open a common family account in a solid currency, say, dollars (they in fact created their own currency and bank, perhaps to avoid associations with existing institutions, but it’s probably safe to say that they heavily mirrored those of one of the leading siblings). Then it will be so much easier to do the same things much more efficiently. The richer craftsmen and businessmen among them could send their stuff with less hassle and waste [e.g., paragraph seven], and the poorer ones could gain a bit by wisely using their portion of the funds and an occasional advance withdrawal.
The leading siblings knew how to keep their checkbooks balanced, and it seemed reasonable to assume that these methods were general enough and suitable for everyone. So, after opening the family account with all of them as joint holders, they shook hands and simply agreed to use the money wisely, pretty much in the way that had worked well for the richer and more productive ones (stability and growth pact). Once in a while they might briefly meet and agree on some further rules of how the money should be used, but basically each one of them went their way, living the life they always had, managing their portion of the family funds. One of the more cynical siblings (England) was a bit skeptical about opening a family account while living their separate lives, so it chose to stay out, at least for a while. Times were good for several years, but they didn’t last forever.
The first to get into trouble would be one of the younger cousins (Greece), who generally valued time more than money (he occasionally complains about that himself, but to little effect so far). Using some money from the family account, he did a few renovations to make his home look better and bought some decent clothes. Using the family account to boost his creditworthiness and sporting a sharper new look, he managed to get a credit card with a promotional 0% APR (Euro membership). He even threw a big party that impressed many (Olympic games). But after a few years, the credit card companies came back asking for payment, and he found himself in deeper trouble than before the good times had begun.
Some of the other relatives had also started getting into trouble, even if not all of them had been as irresponsible. But the immediate problem was that cousin. What was the family to do? Other people had started noticing, and were beginning to have some questions. “What kind of family are you? Your cousin deserves what he gets, but did you really think it was that simple to run a family with such a diverse crowd?” Obviously the little cousin should be taught a lesson and become more mature and responsible. But it should also be a lesson that could be repeated on other relatives, if necessary.
One option would be to kick him out (bankruptcy). It might get him to change his ways (or not), but a homeless relative does not make the family look good, even if he’s largely responsible for his predicament (which he is, by the way). And what would happen to the other relatives that weren’t doing that great either? A 0% promotional APR cannot last forever, and it’s not hard to shoot yourself in the foot with it, even if you aren’t irresponsible. Will other relatives head for the door too? If they do, will they come back? And is it possible to neatly untangle the finances, after decades of using a common account? Furthermore, the cousin may start hanging out with “strangers”, some of whom may be of questionable character (IMF, Russia, etc). In fact, keeping him out of undesirable company might have played a role in inviting him to the extended family account in the first place.
Another option would be to bring him and his family into the home of a richer and more dignified family member, force him into a suit, grab him by the hand (or neck), and teach him how to behave like a grown-up under close supervision. But the other members of the household (citizens), who contribute to its finances (pay taxes) and get food and shelter in return (welfare and other benefits) would rightfully protest. “Who is this noisy, scruffy guy in our home? Why do we have to feed him and pay so much attention to him?” The cousin’s family, who also valued time over money (e.g., preferring a relaxing lifestyle on modest means over hard work), was also not very happy. “I just wish we could go down to the beach and spend 2-3 hours enjoying coffee under the sun like we used to. And why is your big cousin telling us what to do anyway?” In addition, it was always likely that other, equally noisy and scruffy distant relatives might show up knocking at the door of the mansion, and demand the same attention. This was certainly more than big cousin had signed up for when opening the family bank account.
Then there is a third option, which does not so much focus on teaching a lesson, but on saving face and postponing the worst trouble. Just give the little cousin a scolding and some pocket money to pay the rent and interest for a few months. At least he wouldn’t be out in the street. And, who knows, he might change his ways on his own in the meantime. Sweeping the mess under the rug is unlikely (although not provably so) to lead to any long-term solution, but it’s the option easiest to swallow by everyone involved.
Anyway, I’ll stop the anthropomorphic analogies here. Using a different analogy, I’ll add that tweaking the knobs (fiscal policy targets) and, perhaps, changing batteries (bail-out loans) won’t do much good in the long run if the machine is basically broken. But it’s hard to fix it if getting down to the cogs and gears that make it work (politics) is taboo, perhaps even more than it used to be (compare Victor Hugo’s vision of the “United States of Europe” more than a century ago, with the Lisbon treaty).
Although it’s a rather overloaded term, you can probably call me a technocrat. As such, Deng Xiaoping’s famous quote (“it doesn’t matter if it’s a black cat or a white cat, it’s a good cat as long as it catches mice”) is basically appealing. Cats competing with each other and against mice sounds like a “natural” situation, so it’s easy to overlook whether it’s the only possible state of affairs. However, if they’re domesticated and not out in the wild, it’s not hard to imagine the mice and both cats colluding to, basically, take it easy. Sometimes what is “natural” should be examined more closely.
Greece is the first to draw wide attention to such questions, but I don’t think it will be the last, nor is it the first mishap along the road to European integration. I’d venture that, unless the EU collapses, everyone will find their place in it. Eventually.
I’ll finish with an annotated graph (original source via metablogging.gr, and public Google spreadsheet with subset of the data), showing Greek public debt (central government) as % of GDP over the past 40 years. I’ll just point out that 1981 looks like a particularly interesting year, for various reasons.
Postscript. It’s often mentioned that “Greece has been in default for 50 years during the past two centuries.” This is true; after independence in 1821, Greece was bankrupt starting at the end of the 19th century under Charilaos Trikoupis, and ending after WWII. During this period, Greece was involved in a number of wars in the Balkans and Asia Minor, growing and shrinking in size a few times. Obviously, this didn’t help financial matters, but I don’t think it bears much similarity to the current situation.
I’ve also been puzzled somewhat about the role of corruption. Obviously, it’s not good and I’m not trying to justify it in any way. On the other hand, it doesn’t seem to be the sole cause of trouble, as is often suggested. Several East Asian countries (notably China, although it’s neither the first nor the only one) have shown progress despite corruption. I don’t have an answer, but it seems to me that, when you steal money, it matters where you steal it from. If I swipe some cash from my little brother’s wallet, it will make my brother poorer and angrier, but it probably won’t bankrupt the household; someone earned that money, even if it wasn’t me. However, if I pocket an advance withdrawal using the credit card our father gave us, it’ll get everyone in trouble, eventually.
Finally, as for 0% APR credit cards, it’s rather different if, say, Bill Gates (US) gets one versus if I get one (not that I’m that irresponsible : ). One of us has deeper pockets and that makes a difference on whether we deserve it, on the kind of trouble we can get in, and even on the moral hazards we face. As long as the card is used wisely for an appropriate period of time, it isn’t necessarily bad. Any comparisons between US and Greece are, at best, premature.
At least in data mining, “fully automatic” is an often unquestioned holy grail. There are certainly several valid reasons for this, for example if you’re trying to scan huge collections of books such as this, or index images from your daily life like this. In this case, you use all the available processing power to make as few errors as possible (i.e., maximize accuracy).
However, if the user is sitting right in front of your program, watching your algorithms and their output, things are a little different. No matter how smart your algorithm is, some errors will occur. This tends to annoy users. In that sense, actively involved users are a liability. However, they can also be an asset: since they’re sitting there anyway, waiting for results, you may as well get them really involved. If you have cheap but intelligent labor ready and willing, use it! The results will be better or, at the very least, no worse. Also, users tend to remember the failures. So, even if end results were similar on average, allowing users to correct failures as early as possible will make them happier.
Instead of making algorithms as smart as possible, the goal now is to make them as fast as possible, so that they produce near-realtime results that don’t have to be perfect; they just shouldn’t be total garbage. When I started playing with the idea for WordSnap, I was thinking how to make the algorithms as smart as possible.  However, for the reasons above, I soon changed tactics.
The rest of this post describes some of the successful design decisions but, more importantly, the failures in the balance between “automatic” and “realtime guidance”. The story begins with the following example image:
Incidentally, this image was the inspiration for WordSnap: I wanted to look up “inimical” but I was too lazy to type. Also, for the record, WordSnap uses camera preview frames, which are semi-planar YUV data at HVGA resolution (480×320). This image is a downsampled (512×384) full-resolution photograph taken with the G1 camera (2048×1536); most experiments here were performed before WordSnap existed in any usable form. Finally, I should point out that OCR isn’t really my area; what I describe below is based on common sense rather than knowledge of prior art, although just before writing this post I did try a quick review of the literature.
A basic operation for OCR is binarization: mapping grayscale intensities between 0 and 255 to just two values: black (0) and white (1).  Only then can we start talking about shapes (lines, words, characters, etc).  One of the most widely used binarization algorithms is Otsu’s method.  It picks a single, global threshold that minimizes the within-class (black/white) variance or, equivalently, maximizes the between-class variance. This is very simple to implement, very fast, and works well for flatbed scans, which have uniform illumination.
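For concreteness, here is a minimal sketch of Otsu’s method (in Python, like the experimental scripts mentioned in the postscript, not WordSnap’s actual code): a single pass over the histogram, tracking the between-class variance.

```python
def otsu_threshold(pixels):
    """Global threshold maximizing between-class variance (Otsu's method).

    `pixels` is a flat iterable of 8-bit grayscale intensities;
    returns t such that the two classes are {p <= t} and {p > t}.
    """
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = sum(hist)
    sum_all = sum(i * hist[i] for i in range(256))

    w_bg = sum_bg = 0          # count and intensity sum of the dark class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        sum_bg += t * hist[t]
        w_fg = total - w_bg
        if w_bg == 0 or w_fg == 0:
            continue           # one class is empty; no valid split
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```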
However, camera images are not uniformly illuminated. The example image may look fine to human eyes, but it turns out that even for this image no global threshold is suitable (click on image for animation showing various global thresholds):
If you looked at the animation carefully, you might have noticed that at some point, at least the word of interest (“inimical”) is correctly binarized in this picture.  However, if the lighting gradient were steeper, this would not be possible. Incidentally, ZXing uses Otsu’s method for binarization, because it is fast. So, if you ever wondered why barcode scanning sometimes fails, now you know.
So, a slightly smarter approach is needed: instead of using one global threshold, the threshold should be determined individually for each pixel (i,j). A natural threshold t(i,j) is the mean intensity μw(i,j) of pixels within a w×w neighborhood around pixel (i,j).  The key operation here is mean filtering: convolving the original image with a w×w matrix with constant entries 1/w².
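In code, the naïve version of this per-pixel mean threshold looks like the following (again a Python sketch for illustration; the border clipping is my assumption):

```python
def local_mean_threshold(img, w):
    """Binarize: a pixel is white (1) iff its intensity exceeds the mean of
    the w x w window centered on it (windows are clipped at the borders).
    `img` is a list of rows of grayscale values; w is an odd window size.
    """
    m, n = len(img), len(img[0])
    r = w // 2
    out = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            s = cnt = 0
            for y in range(max(i - r, 0), min(i + r, m - 1) + 1):
                for x in range(max(j - r, 0), min(j + r, n - 1) + 1):
                    s += img[y][x]
                    cnt += 1
            if img[i][j] > s / cnt:   # t(i,j) = local mean
                out[i][j] = 1
    return out
```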
The problem is that, using pure Java running on Dalvik, mean filtering is prohibitively slow.  First, Dalvik is fully interpreted (no JIT, yet). Furthermore, the fact that Java bytes are always signed doesn’t help: casting to int and masking off the 24 most significant bits almost doubles the running time.
| Method | Dalvik (msec) | JNI (msec) | Speedup |
|---|---|---|---|
| Naïve | 109,882 ± 4,813 | 1,712 ± 261 | 64× |
| Sliding | 2,435 ± 141 | 71 ± 19 | 34× |
JNI to the rescue. The table above shows running times and speedups for two implementations. The naïve approach uses a triple nested loop and has complexity O(w²mn), where m and n are the image height and width, respectively (m = 384, n = 512 in this example). The 1-D equivalent would simply be:
for i = 0 to N-1:
    s = 0
    for j = max(i-r, 0) to min(i+r, N-1):
        s += a[j]
    b[i] = s
where w = 2r+1 is the window size. The second implementation updates the sums incrementally, based on the values of adjacent windows. The complexity now is just O(mn). An interesting aside is the relative performance of two implementations for sliding window sums. The first checks the border conditions inside each iteration:
Initialize s = sum(a[0]..a[r]); b[0] = s
for i = 1 to N-1:
    if i > r: s -= a[i-r-1]
    if i < N-r: s += a[i+r]
    b[i] = s
The second moves the border condition checks outside the loop which, if you think about it for a second, amounts to:
Initialize s = sum(a[0]..a[r]); b[0] = s
for i = 1 to r:
    s += a[i+r]
    b[i] = s
for i = r+1 to N-r-1:
    s -= a[i-r-1]; s += a[i+r]
    b[i] = s
for i = N-r to N-1:
    s -= a[i-r-1]
    b[i] = s
Of these two, the first one is faster, at least on a laptop running Sun’s JVM with JIT (I didn’t time Dalvik or JNI). I’m guessing that the second one messes up loop unrolling, but I haven’t checked.
It turns out that there is a very similar approach in the literature, called Sauvola’s method. Furthermore, there are efficient methods to compute it, using integral images. These are simply the 2-D generalization of partial sums. In 1-D, if partial sums are pre-computed, window sums can be computed in O(1) time using the simple observation that sum(i..j) = sum(1..j) − sum(1..i−1).
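A sketch of integral images and O(1) window sums, in Python for illustration (the extra zero row and column are a common convention that keeps the lookups branch-free):

```python
def integral_image(img):
    """2-D prefix sums: S[i][j] = sum of img[0..i-1][0..j-1]."""
    m, n = len(img), len(img[0])
    S = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        row = 0
        for j in range(n):
            row += img[i][j]
            S[i + 1][j + 1] = S[i][j + 1] + row
    return S

def window_sum(S, i0, j0, i1, j1):
    """Sum of img[i0..i1][j0..j1], inclusive, in O(1) by inclusion-exclusion."""
    return S[i1 + 1][j1 + 1] - S[i0][j1 + 1] - S[i1 + 1][j0] + S[i0][j0]
```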
Sauvola’s method also computes the local variance σw(i,j), and uses a relative threshold t(i,j) = μw(i,j)(1 + λσw(i,j)/127). WordSnap uses the global variance and an additive threshold t(i,j) = μw(i,j) + λσglobal, but after doing a contrast stretch of the original image (i.e., linearly mapping the minimum intensity to 0 and the maximum to 255). Floating point math and 64-bit integer arithmetic are much more expensive, hence the additive threshold. Furthermore, WordSnap does not use integral images because the same runtime can be achieved without the need to allocate a large buffer. Memory allocation on a mobile device is not cheap: the time needed to allocate a 480×320 buffer of 32-bit integers (about 600KB total) varies significantly depending on how much system memory is available, whether the garbage collector is triggered and so on, but on average it’s about half a second on the G1. Even though most buffers can be allocated once, startup time is important for this application: if it takes more than 2-3 seconds to start scanning, the user might as well have typed the result.
Anyway, here is the final result of locally adaptive thresholding:
Conclusion: In this case we needed the slightly smarter approach, so we invested the time to implement it efficiently. WordSnap currently uses a 21×21 neighborhood.  Altogether, binarization takes under 100ms.
Another problem is that the orientation of the text lines may not be aligned with the image edges.  This is called skew and makes recognition much harder.
Initially, I set out to find a way to correct for skew.  After a few searches on Google, I came across the Hough transform.  The idea is simple.  Say you want to detect a curve described by a set of parameters. E.g., for a line, those would be the distance ρ from the origin and the angle θ. For each black pixel, find the parameter values for all possible curves to which this pixel may belong. For a line, that’s all angles θ from 0 to 180 degrees, and all distances ρ from 0 to √(m²+n²).  Then, compute the density distribution of parameter tuples.  If a line (ρ0,θ0) is present in the image, then the parameter density distribution should have a local maximum at (ρ0,θ0).
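The voting procedure can be sketched as follows (Python, for illustration; a sparse dict stands in for the dense accumulator array a real implementation would use, and the parametrization is the standard normal form ρ = x·cos θ + y·sin θ):

```python
import math

def hough_lines(points, angle_step_deg=1):
    """Vote in (theta, rho) space for each black pixel in `points`
    (a list of (x, y) tuples); return the bucket with the most votes,
    i.e., the dominant line, as (theta_deg, rho).
    """
    acc = {}
    for x, y in points:
        for theta_deg in range(0, 180, angle_step_deg):
            theta = math.radians(theta_deg)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            key = (theta_deg, rho)
            acc[key] = acc.get(key, 0) + 1
    return max(acc, key=acc.get)
```

For a horizontal text line at y = 5, the winning bucket comes out near θ = 90°, ρ = 5, as expected.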
If we apply this approach to our example image, the first maximum is detected at an angle of 20 degrees. Here is the image counter-rotated by that amount:
Success!  However, computing the Hough transform is too slow!  Typical implementations bucketize the parameter space. This would require a buffer of about 180×580 32-bit integers (for a 480×320 image), or about 410KB. In addition, it would require trigonometric operations or lookups to find the buckets for each pixel, not to mention counter-rotation. There are obvious optimizations one can try, such as computing histograms at multiple resolutions to progressively prune the parameter space.  Still, the cost implied by back-of-the-envelope calculations put me off from even trying to implement this on the phone. Instead, why not just use the user:
Conclusion: Simple approach with help from user wins, and the computer doesn’t even have to do anything to solve the problem! Incidentally, the guideline width is determined by the size of typical newsprint text at the smallest distance that the G1’s camera can focus.
Next, we need to detect individual words.  The approach WordSnap uses is to dilate the binary image with a rectangular structuring element (7×7 in the following image), and then expand a rectangle (shown in green) until it covers the connected component which, presumably, is one word.
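A rough sketch of this dilate-then-expand step (Python, for illustration; not the optimized implementation):

```python
from collections import deque

def dilate(img, k):
    """Dilate a binary image with a k x k rectangular structuring element:
    a pixel becomes 1 if any pixel in its k x k neighborhood is 1."""
    m, n = len(img), len(img[0])
    r = k // 2
    out = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            if img[i][j]:
                for y in range(max(i - r, 0), min(i + r, m - 1) + 1):
                    for x in range(max(j - r, 0), min(j + r, n - 1) + 1):
                        out[y][x] = 1
    return out

def word_box(img, seed):
    """Bounding box (i0, j0, i1, j1) of the connected component containing
    `seed`, grown by breadth-first search (4-connectivity)."""
    m, n = len(img), len(img[0])
    i0 = i1 = seed[0]
    j0 = j1 = seed[1]
    seen = {seed}
    q = deque([seed])
    while q:
        i, j = q.popleft()
        i0, i1 = min(i0, i), max(i1, i)
        j0, j1 = min(j0, j), max(j1, j)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y, x = i + di, j + dj
            if 0 <= y < m and 0 <= x < n and img[y][x] and (y, x) not in seen:
                seen.add((y, x))
                q.append((y, x))
    return i0, j0, i1, j1
```

Dilation merges nearby characters into one blob, so the box around the blob covers the whole word; this is why the element size matters, as the next paragraph shows.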
However, the size of the structuring element should really depend on the inter-word spacing, which in turn depends on the typeface as well as the distance of the camera from the text.  For example, if we use a 5×5 element, we would get the following:
I briefly toyed with two ideas for font size detection.  The first is to do a Fourier transform.  Presumably the first spatial frequency mode would correspond to inter-word and/or inter-line spacing and the second mode to inter-character spacing. But that assumes we apply Fourier to a “large enough” portion of the image, and things start becoming complicated.  Not to mention computationally expensive.
The second approach (which also appears to be the most common?) is to do hierarchical grouping. First expand rectangles to cover individual letters (or, sometimes, ligatures), then compute a histogram of horizontal distances and re-group into word rectangles, and so on.  This is also non-trivial.
Instead, WordSnap uses a fixed dilation radius.  The implementation is optimized to allow near-realtime annotation of the detected word extent.  This video should give you an idea:
Conclusion: Simple wins again, but this time we have to do something (and let the user help with the rest). But, instead of trying to be smart and find the best parameters given the camera position, we try to be fast: fix the parameters and let the user find the camera position that works given the parameters. WordSnap uses a 5×5 rectangular structuring element, although you can change that to 3×3 or 7×7 in the preferences screen. Altogether, word extent detection takes about 150-200ms, although it could be significantly optimized, if necessary, by using JNI only, instead of a mix of pure Java and JNI calls.
I’m now looking into the possibility of moving OCR into the “live” loop: as you move the camera, the phone shows not only the word extent rectangle, but also the recognized word.  Perhaps as a hyperlink to Google, or along with Google Translate results.  Then I can justifiably use the buzzword of the day, “augmented reality”!  It looks like it might just be possible, but let me get back to you in a week or two.  :)
Postscript: Some of the papers referenced were pointed out to me by Hideaki Goto, who started and maintains WeOCR. Also, skew detection and correction experiments are based on this quick-n-dirty Python script (needs OpenCV and it ain’t pretty!). Update (9/2): Fixed really stupid mistake in parametrization of line.
Politically-correct and totally un-sarcastic as I am, I originally wanted to go with some combination of “principled anarchy”. Now, that was available! Apparently, nobody wanted to touch it with a ten foot pole, not even cybersquatters; which kind of gave me a hint.  Wouldn’t want to, say, end up in a three-letter-agency watchlist, at least not while in the US on an H1B.  They might not share my sense of humor.
So, armed with online thesauri, dictionaries, the internet anagram server, and things like that, I set out on a name quest.  I don’t remember anymore what I tried; “coredump” (which, in case you didn’t know, has “code rump” as an anagram—still available, if you’re interested), “segfault”, “brainfart”, “farout”, and pretty much anything else I could think of: all taken.  Even these names as well as these are taken (thank god!).
At some point I was naïve enough to hope that a Tolkien name would be free.  No luck, of course; anything semi-pronounceable was taken.  You’d have to go as far as, say, “gulduin” (which, by the way, means “magic river” in Elvish) to find something available. Good luck getting people to remember that!  Oh well, at least I had a reason to actually read some of the Silmarillion; if you’ve tried this and you’re not a religiously devoted Tolkien fan, you know what I’m talking about.
After the first week of searching, I think I even got temporarily banned from Yahoo! whois search. In desperation, I finally turned to one of many domain name generators.  I asked omniscient Google to give me one and, as always, it obliged.  By now I had decided that I wanted a name as free of any connotations as possible (say, like Google or Slashdot, not like Facebook or YouTube).  I went through things like “fractors”, “naphead”, “magnarchy”, “aniarchy”, “mallock”, “hexndex”, “squilt”, “terable”, and so on. It’s amazing how several weeks of searching in frustration temper one’s standards of quality. Anyway, one day “bitquill” popped up: neutral, inoffensive, bland, unusual, and a composite which is short and almost pronounceable!  I couldn’t ask for much more, so I registered it.
That, and “clusterhack”.  Sorry.  I couldn’t resist.
Overall, the Android APIs are quite impressive, even though some edges are still rough.  It was reasonably easy to get up to speed, even though my prior experience with mobile application frameworks was zero.  The toughest part was getting used to the heavily event-based programming style, as well as the idea that your code may be interrupted, killed and restarted at any time.
Activity lifecycle. Although Android supports multitasking and concurrency, on a mobile device with limited memory and no swap it’s likely that the O/S will have to kill some or all of your tasks to reclaim resources needed by higher-priority, user-visible processes (e.g., an incoming phone call).  If you have non-persistent or external state, such as open database connections or separate threads that fetch data in the background, things may get a little tricky. Although Android has auxiliary features such as managed cursors and dialogs, you still need to know they exist and use them properly.
However, even things like screen orientation changes are handled by terminating and restarting any affected activities. At first, while spending a couple of hours figuring out why my app was crashing when I opened the keyboard, I bitched about this. Apparently, I wasn’t the only one who was confused. To my surprise, I found that many Android Market apps crash when the screen is rotated.  Some Market apps even come with grave-sounding warnings that, e.g., “the life counter [sic] resets on screen orientation change =/ Will fix for new version.” Luckily, I also found numerous good posts about orientation changes, such as this or this (the series by Mark Murphy is pretty good, by the way), as well as a post on the official blog.
In retrospect, handling orientation changes in this way is a good thing: it forces app developers to be prepared. After I fixed my code to handle orientation changes gracefully, I found that I was also ready to properly handle other sources of interruption: when an incoming call came as I was testing my app, everything worked out beautifully.
Now, whenever I download an app, I perform the following test: I flip the keyboard open while the app executes a background operation, even if I don’t need to type anything.  If the app crashes or gets into an inconsistent state (something that happens surprisingly often), that’s a strong indication that the code is not very robust.
Event handling. For APIs that are so heavily event-based, one of my gripes was that some (but not all) event handlers are based on inheritance rather than delegation. These design choices are probably due to performance reasons that may be specific to Dalvik, the Android VM, which is itself motivated partly by non-technical reasons.
However, inheritance sometimes complicates things. For example, Android supports managed cursors and dialogs via methods in the base Activity class. On more than one occasion I found that managed threads would also be nice.  Implementing this requires hooking into the activity lifecycle events (and has, on occasion, been over-engineered to death). Because there are several Activity subclasses (e.g., ListActivity, PreferenceActivity, etc.), there is no simple way to extend them all. If lifecycle events were handled via delegates, it would be possible to implement a background UI thread manager as, say, an activity decorator that can be added to any activity instance.
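To make the decorator idea concrete, here is a deliberately language-agnostic sketch (in Python rather than Java, with invented names; Android’s actual Activity class exposes no such listener hook, which is exactly the point):

```python
class Activity:
    """Toy stand-in for an activity whose lifecycle events are dispatched
    to registered listeners (delegation) instead of requiring subclassing."""
    def __init__(self):
        self._listeners = []

    def add_lifecycle_listener(self, listener):
        self._listeners.append(listener)

    def on_resume(self):
        for listener in self._listeners:
            listener.on_resume(self)

    def on_pause(self):
        for listener in self._listeners:
            listener.on_pause(self)

class ThreadManager:
    """A reusable decorator attachable to ANY activity instance, regardless
    of its class; with inheritance-only hooks, each Activity subclass would
    need its own copy of this logic."""
    def __init__(self):
        self.running = False

    def on_resume(self, activity):
        self.running = True    # (re)start managed background work

    def on_pause(self, activity):
        self.running = False   # suspend managed background work
```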
The delegation-based event model was introduced in Java 1.1 precisely to address such shortcomings of the inheritance-based model. But, being pragmatic about performance on current mobile devices, I should probably not complain too much.  Still, some API design choices seem a bit arbitrary, perhaps even Microsoft-esque: why would performance be an issue with lifecycle events (which are presumably rare, but handlers use inheritance) but not with click events (which are presumably more frequent, but handlers use delegation)?
Data sync and caching. Another gripe was the lack of syncable content providers, something I’ve mentioned before. Also, content providers aren’t really appropriate for network-hosted data. The requirement that content providers use an integer primary key (row ID) is reasonable for local databases and simplifies the APIs, but requires some book-keeping when that’s not the “natural” primary key.
Ideally, I’d like to see some support for caching remote data on the SD card (which would require gracefully handling card removal, and transparently fetching data either from the cache or the network). Although the core APIs provide all that is necessary to implement this from scratch, it was getting too complicated for my simple “weekend hack” app, so I decided to drop it.
I hope that, in the near future, porting web apps to mobile devices will become easier with the support for offline applications and client-side storage in HTML5, as well as the proposed geolocation APIs (all of which are already part of Google Gears). An application manifest might include “web activities”, translating intents into HTTP POST requests, while granting device access permissions to those activities (e.g., see promising hacks such as OilCan). Porting might then involve little more than writing a new stylesheet. Perhaps that’s where Palm is going with its WebOS, which apparently supports both “native application” and “web application” models, but information is rather thin at the moment.
Epilogue. My first Android app was an interesting learning experience, not only from a technical standpoint (perhaps more on this in another post). I also found that Android is quite stable. I sometimes used my phone for live debugging, forcefully killing threads and processes through ADB.  Let me put it this way: if it weren’t for the RC33 OTA update, my phone would now have an uptime of a few months. For a piece of software that barely existed a year ago, this is impressive.
There is plenty of documentation available, but at times it can take some searching to find the necessary information.  However, since Android is open-source, it’s always possible to consult the source code itself (which is fairly well-written and documented).
Note: This post was mostly written sometime around February. Since then I have not had time to try SDK v1.5, but I believe most points above are still relevant.
After coming back from Seoul, New York seemed even dinkier than the last time I returned from a trip. As I was boarding the plane at Incheon, I picked up a copy of the Wall Street Journal (Asian edition). I had enough time to read almost all of it, as KAL arrived at Narita early, but Continental was six hours late. It might as well have been called “The GM Journal”, since about two thirds of the stories were about GM and Chrysler, and how the US government is trying to save them from doom due to chronic mismanagement and exorbitant legacy costs.
My wife, who has a far more sensitive nose than me, jokes that the first thing you smell upon disembarking the plane is cigarette smoke in Greece, and garlic in Korea. Upon arriving at Newark (or any NYC airport, for that matter), even I can smell the mouldy carpets.  Getting on the subway the next morning, the smell was even worse and the signs of age were everywhere.  I sat down, right across from a poster ad by the NYC Department of Consumer Affairs that read “Debt Stress?  You’re not alone”.  Someone had plastered a makeshift sticker on top, reading “Kill Your Boss”.  After a ride on Metro North, I got into a taxi to work.  It was one of those Ford relics, with a severely dented right side, a cracked windshield and a barely functioning transmission, but still street-legal.  As the cab ended up triple-booked and I was the last one to get off, I got a 35-minute scenic tour through backstreets and pothole-riddled roads before finally arriving at the office.
The experience was enough to make me look up the definition of “developing country” in Wikipedia. Honestly, I don’t get why South Korea is sometimes still listed as such (e.g., in WSJ and, if memory serves me right, in the Economist), while the US isn’t. Something tells me it’s more than GM that needs patching up. Anyway, welcome back home!