This work explores the use of deformation as a medium through which we can interact with computers. The idea may seem strange at first (using the shape of an object and how it changes over time to encode information), but if we consider that a rubber membrane can occupy a continuum of deformed states, it is potentially information-rich. If we accept this basic premise, the next question is how to encode deformation. This is a non-trivial, multi-faceted question, and one that I am currently working on. I recently presented one such approach at the 2016 Northeast Robotics Colloquium. Click here to learn more.
Time series data often have heavy-tailed noise. This is a form of heteroskedasticity, and it appears throughout economics, finance, and engineering. In this post I share a few well-known methods from the robust linear regression literature, using a toy dataset as a running example.
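One standard method from that literature is M-estimation with a Huber loss, fit by iteratively reweighted least squares (IRLS). The sketch below is illustrative only and is not taken from the post itself; the function name, tuning constant, and toy data are my own assumptions.

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50, tol=1e-8):
    """Robust linear regression via iteratively reweighted least squares
    with Huber weights. delta=1.345 is the usual tuning constant giving
    ~95% efficiency under Gaussian noise. (Hypothetical helper, for
    illustration only.)"""
    X1 = np.column_stack([np.ones(len(X)), X])          # add an intercept column
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]        # start from the OLS fit
    for _ in range(n_iter):
        r = y - X1 @ beta
        # Robust scale estimate from the MAD of the residuals.
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        u = r / (s * delta)
        # Huber weights: 1 inside the threshold, downweight outside it.
        w = np.where(np.abs(u) <= 1.0, 1.0, 1.0 / np.abs(u))
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X1 * sw[:, None], y * sw, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Toy data: a clean line y = 2 + 3x plus a handful of heavy-tailed outliers.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5, 100)
y[::10] += rng.standard_t(df=1, size=10) * 20       # Cauchy-scale contamination
beta = huber_irls(x, y)
```

The key design choice is the MAD-based scale estimate: because it is itself robust, a few enormous residuals cannot inflate the threshold and sneak back to full weight.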
With the exception of computer automation, the design of mechanical systems has not fundamentally changed since the industrial revolution. Even though our philosophy, aesthetics, and tools are far more refined and capable, we still design mechanical systems around bars, linkages, and motors. Similarly, electronic systems have largely been constrained to rigid circuits. These constraints are the central challenge being addressed by two emerging fields: soft robotics and soft electronics. In this post I’ll talk briefly about why non-rigid systems are interesting (and potentially useful), and present results from recent work that I published in Science Magazine on this topic.
This post explores the effectiveness of convolutional neural networks at learning to play checkers from a database of master-level games. I first became interested in this problem in spring 2016, when DeepMind published its work on AlphaGo in Nature. AlphaGo showed that the complexity of Go (~250^150 board configurations) can be pruned down to a searchable subset of moves (using Monte Carlo methods) by first pretraining a deep neural network on expert human moves, followed by reinforcement learning through self-play. Although checkers is considerably less complex than chess and Go (it was solved back in 2007), its search space is still enormous at ~5 × 10^20 board configurations, which leads to this question: can we prune its search space while maintaining a high level of play? For my final project in Bart Selman's graduate AI course last semester, I wanted to see how well a neural net could capture strategy from expert checkers players, similar to the supervised learning step that DeepMind used to pretrain AlphaGo. As we'll see, such a neural net can defeat a few of the popular online engines that use search and heuristic reasoning, as well as humans who play at an intermediate/advanced level. While it typically maintains a clear advantage through 20+ moves, it tends to suffer the occasional blunder in the mid-to-late game against stronger opponents, as the board becomes sparse and less familiar, which we might expect.
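Before a convolutional net can learn from master games, each position has to be encoded as an image-like tensor. The sketch below shows one plausible encoding (not necessarily the one used in the project): four binary 8×8 planes for my men, my kings, the opponent's men, and the opponent's kings, indexed by the 32 dark squares checkers is played on. The function name and plane layout are my own assumptions.

```python
import numpy as np

def encode_board(men, kings, opp_men, opp_kings):
    """Map four collections of dark-square indices (0-31) to a (4, 8, 8)
    binary tensor suitable as CNN input. Light squares are never occupied
    in checkers, so they stay zero. (Hypothetical encoding, for
    illustration only.)"""
    planes = np.zeros((4, 8, 8), dtype=np.float32)
    for plane, squares in enumerate([men, kings, opp_men, opp_kings]):
        for sq in squares:
            row = sq // 4                       # four dark squares per row
            col = 2 * (sq % 4) + (1 - row % 2)  # dark squares alternate by row
            planes[plane, row, col] = 1.0
    return planes

# Standard opening position: 12 men per side, no kings yet.
planes = encode_board(men=range(12), kings=[], opp_men=range(20, 32), opp_kings=[])
```

From here, supervised pretraining reduces to a classification problem: given the encoded position, predict which move the master actually played.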