Orb-Touch: using deformation as a medium for human-computer interaction

This mini-post explores the use of deformation as a medium for human-computer interaction. The question being asked is the following: can we use the shape of an object, and how it changes over time, to encode information? Here I present a deep learning approach to this problem and show that we can use a balloon to control a computer.

A comparison of robust regression methods based on iteratively reweighted least squares and maximum likelihood estimation

Time series data often have heavy-tailed noise, and frequently a noise variance that changes over time (heteroskedasticity); both show up throughout economics, finance, and engineering, and both can badly distort ordinary least squares. In this post I share a few well-known methods from the robust linear regression literature, using a toy dataset as a running example.
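To make the iteratively reweighted least squares idea concrete, here is a minimal sketch of IRLS with Huber weights on a toy dataset of the kind the post describes. The function name, the Student-t noise, and all parameter choices are my own illustration, not taken from the post:

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50, tol=1e-8):
    """Robust linear regression via iteratively reweighted least squares.

    Huber weights: w_i = 1 if |r_i| <= delta*s, else delta*s / |r_i|,
    where s is a robust (MAD-based) estimate of the residual scale.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS as the starting point
    for _ in range(n_iter):
        r = y - X @ beta
        # robust scale estimate: median absolute deviation, rescaled for normality
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = np.where(np.abs(r) <= delta * s, 1.0, delta * s / np.abs(r))
        # weighted least squares step: minimize sum_i w_i * r_i^2
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# toy data: a line corrupted by heavy-tailed (Student-t, df=2) noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
X = np.column_stack([np.ones_like(x), x])         # intercept + slope design
y = 2.0 + 0.5 * x + rng.standard_t(df=2, size=x.size)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]   # plain least squares
beta_rob = huber_irls(X, y)                       # robust fit
```

Because the Huber weights downweight large residuals, the robust fit is far less sensitive to the heavy-tailed noise than the ordinary least squares fit.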

Cephalopod-inspired mechanical systems

With the exception of computer automation, the design of mechanical systems has not fundamentally changed since the industrial revolution. Even though our philosophy, aesthetics, and tools are far more refined and capable, we still design mechanical systems around bars, linkages, and motors. Similarly, electronic systems have largely been constrained to rigid circuits. These constraints are the central challenge being addressed by two emerging fields: soft robotics and soft electronics. In this post I’ll talk briefly about why non-rigid systems are interesting (and potentially useful), and present results from recent work on this topic that I published in Science.

How to pretrain a checkers engine using a game database and convolutional neural networks

This post explores the effectiveness of convolutional neural networks at learning to play checkers from a database of master-level games. I first became interested in this problem in spring 2016, when DeepMind published its work on AlphaGo in Nature. AlphaGo showed that the enormous complexity of Go (a game tree of roughly 250^150 move sequences) can be pruned down to a searchable subset of moves (using Monte Carlo tree search) by first pretraining a deep neural network on expert human moves, followed by reinforcement learning through self-play. Although checkers is considerably less complex than chess and Go (it was solved back in 2007), its search space is still enormous, with roughly 5 × 10^20 board configurations, which leads to this question: can we prune its search space while maintaining a high level of play? For my final project in Bart Selman’s graduate AI course last semester, I wanted to see how well a neural net could capture strategy from expert checkers players, similar to the supervised learning step that DeepMind used to pretrain AlphaGo. As we’ll see, such a neural net can defeat a few of the popular online engines that use search and heuristic reasoning, as well as humans who play at an intermediate/advanced level. While it typically maintains a clear advantage through 20+ moves, it tends to suffer from the occasional blunder in the mid-to-late game against stronger opponents, as the board becomes sparse and less familiar — which is what we might expect.