Research

I conduct my current research with the Habanero Extreme Scale Software Research Laboratory at the Georgia Institute of Technology, under Prof. Vivek Sarkar.

Dynamic Deadlock Avoidance

Incorrect usage of synchronization can cause a parallel program to deadlock in ways that are difficult to predict or debug, especially because a deadlocking execution may arise only nondeterministically. Proving the absence of deadlocks statically can be burdensome, or can require excessive restrictions on the kinds of parallelism available to the programmer. Instead, we develop an online policy that 1) guarantees deadlock-freedom if followed, and 2) can be verified at runtime with low overhead. If a program attempts an operation that would cause a deadlock, the event is caught and raised as a runtime exception. The problem of deadlocks can then be addressed with defensive programming, in which the programmer anticipates software bugs and writes code to handle them correctly as they occur.
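
As a minimal sketch of the runtime-exception idea (not the policy from our work), consider a lock wrapper that maintains a wait-for graph and raises instead of blocking whenever waiting would close a cycle. All names here (GuardedLock, DeadlockError) are illustrative, and the sketch ignores races between the cycle check and the blocking acquire.

```python
# Illustrative only: catch would-be deadlocks at acquisition time by
# checking a wait-for graph for a cycle, raising instead of blocking.
import threading

class DeadlockError(RuntimeError):
    pass

_state = threading.Lock()   # guards the bookkeeping below
_holder = {}                # lock -> thread currently holding it
_waits_for = {}             # thread -> thread it is waiting on

class GuardedLock:
    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        me = threading.current_thread()
        with _state:
            owner = _holder.get(self)
            if owner is me:
                raise DeadlockError("lock is not re-entrant")
            if owner is not None:
                # Would waiting on `owner` close a cycle in the wait-for graph?
                t = owner
                while t is not None:
                    if t is me:
                        raise DeadlockError("acquiring this lock would deadlock")
                    t = _waits_for.get(t)
                _waits_for[me] = owner
        self._lock.acquire()
        with _state:
            _waits_for.pop(me, None)
            _holder[self] = me

    def release(self):
        with _state:
            _holder.pop(self, None)
        self._lock.release()
```

Because each thread waits on at most one lock at a time, the cycle check is a single walk along the wait-for chain, which keeps the per-acquisition overhead low.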


My prior research was in automated verification of heap programs, in Georgia Tech's Trustable Programming Group overseen by Dr. Bill Harris, now at Galois, Inc.

Safety Properties in Shape Analysis

This work focused on the automated generation of proofs of safety properties about program heaps. We asked questions about the structural organization of heaps that would traditionally require predicate logic (quantification) to express. A generalization of Hoare logic emerged that allowed us to reason about disparate phases of program execution simultaneously in order to avoid quantification. Rather than describe a single subprogram with pre- and post-conditions, we can describe multiple subprograms at once. This expressivity allows a verification proof to have the same shape as the heap itself! As a result, invariants are carried over unbounded heaps by the proof structure rather than by recursively defined predicates in the assertion language.
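
For contrast, the standard way to describe an unbounded heap in an assertion language is a recursively defined predicate, such as the classic separation-logic list predicate below; our approach aims to carry this kind of invariant in the proof structure instead.

```latex
% The classic recursively defined list predicate (separation-logic style):
% a linked list is either nil, or one heap cell pointing to a smaller list.
\[
\mathsf{list}(x) \;\equiv\; (x = \mathsf{nil}) \,\lor\, \exists y.\; x \mapsto y \ast \mathsf{list}(y)
\]
```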


In the summer of 2016, I collaborated with Prof. Jim Griffioen of the University of Kentucky on the cross-domain problem of bringing high-level programming approaches to software-defined networking.

Multi-author Hierarchical Network Policies

Software-defined networking (SDN) allows a network's routing behavior to be programmed dynamically from a controller. A central authority can define forwarding rules to implement policies, firewalls, address translation, and other network functions. Moreover, these rules can be updated dynamically in response to network events, enabling adaptive policies like stateful firewalls and load balancing.
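
To make the rule-based model concrete, here is a toy match-action flow table in Python. It is not tied to any real controller API, and all names are illustrative; it shows how a controller installs prioritized rules mapping packet fields to actions, and can install new rules at runtime in response to events.

```python
# Toy flow table in the style of SDN match-action rules (no real
# controller API): prioritized rules map packet fields to actions.
FLOW_TABLE = []  # list of (priority, match_dict, action), highest first

def install_rule(match, action, priority=0):
    FLOW_TABLE.append((priority, match, action))
    FLOW_TABLE.sort(key=lambda rule: -rule[0])

def handle_packet(pkt):
    for _, match, action in FLOW_TABLE:
        if all(pkt.get(k) == v for k, v in match.items()):
            return action(pkt)
    return "drop"  # default-deny, like a firewall

# A firewall rule and a forwarding rule:
install_rule({"dst_port": 22}, lambda pkt: "drop", priority=10)
install_rule({"dst_ip": "10.0.0.5"}, lambda pkt: "forward:port3")

print(handle_packet({"dst_ip": "10.0.0.5", "dst_port": 80}))  # forward:port3
print(handle_packet({"dst_ip": "10.0.0.5", "dst_port": 22}))  # drop
```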

We were interested in the advantages of delegating the authorship of the total network policy to local administrators and even tenants of the network. Such an approach required an automated framework in which an IT department could retain ultimate control while still yielding to the contributions of individuals. Our framework used a hierarchical allocation of both permission and responsibility to users. At every point in the hierarchy, the framework answered two questions: what can be done, and what should be done. It enforced permissions so that even a buggy or malicious contribution by an individual could not overstep its bounds. Our hope is that this decentralized approach can allow organizations with large networks to safely crowdsource network governance and achieve the full power of SDN.
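
A hypothetical sketch of the delegation idea, with illustrative names throughout: each author is granted a scope by its parent in the hierarchy and may only install rules within that scope, so an out-of-scope contribution is rejected automatically.

```python
# Hypothetical sketch of hierarchical policy authorship: each author's
# scope (here, a set of destination subnets) must refine its parent's,
# and rules outside the delegated scope are rejected.
class PolicyNode:
    def __init__(self, name, scope, parent=None):
        self.name, self.scope, self.rules = name, scope, []
        if parent is not None:
            assert scope <= parent.scope, "child scope must refine parent's"

    def install(self, subnet, action):
        if subnet not in self.scope:         # permission: "what can be done"
            raise PermissionError(f"{self.name} may not govern {subnet}")
        self.rules.append((subnet, action))  # responsibility: "what should be done"

it_dept = PolicyNode("IT", scope={"10.0.0.0/24", "10.0.1.0/24"})
tenant  = PolicyNode("tenant-A", scope={"10.0.1.0/24"}, parent=it_dept)

tenant.install("10.0.1.0/24", "allow-http")    # within delegated scope: ok
# tenant.install("10.0.0.0/24", "allow-all")   # would raise PermissionError
```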


I had the privilege of working under the supervision of Prof. Mark Moll in the lab of Prof. Lydia E. Kavraki during my time as an undergraduate at Rice University. Here I discuss the two major research projects I pursued with them.

Randomizing Graph Searches for Robustness

Finding the shortest path through a weighted graph is a well-understood task. With a cost heuristic, A* is the way to go. But what if the graph is not reliable? A broken edge or an incorrect weight can make the apparent shortest path a poor choice. So how can we find paths other than the shortest one? And what criteria can we use to evaluate the set of proposed alternatives?

It is desirable that the alternatives be short. We could try the second shortest path, but in a moderately dense graph, it is likely to share almost all its edges with the original. The kth shortest, for large k, is probably better, but systematically checking all these options is expensive. Instead, consider an algorithm that encourages random deviations from known paths: it says, “What if this area of the graph, along the shortest path, is broken? Then what would the shortest path be? Now what about that area? Or both?” and so on, iteratively building a set of increasingly diverse alternatives. Intuitively, this heuristic simulates random breaking of the graph to build an arsenal that we hope will stand up against any specific issue with the actual graph.
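
Here is a simplified sketch of that intuition (not the paper's exact algorithm), assuming distinct source and target nodes: repeatedly "break" a random edge along a known path and re-plan, accumulating a diverse set of short alternatives.

```python
# Simplified sketch of the intuition (not the paper's exact algorithm):
# repeatedly break a random edge along a known path and re-plan,
# accumulating a diverse set of short alternative paths.
import random
import networkx as nx

def diverse_paths(G, source, target, k=5, seed=0):
    rng = random.Random(seed)
    H = G.copy()
    paths = [nx.shortest_path(H, source, target, weight="weight")]
    while len(paths) < k:
        # Pick a random edge on a random known path; pretend it is broken.
        path = rng.choice(paths)
        i = rng.randrange(len(path) - 1)
        u, v = path[i], path[i + 1]
        if H.has_edge(u, v):
            H.remove_edge(u, v)
        try:
            p = nx.shortest_path(H, source, target, weight="weight")
        except nx.NetworkXNoPath:
            break  # graph too broken; stop with what we have
        if p not in paths:
            paths.append(p)
    return paths
```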

This is the core of the contribution in my 2015 ICRA paper, together with applications for robotic motion planning and an analysis of the efficiency and quality of this approach. Also have a look at my code repository for this research.

High-dimensional Motion Planning Algorithms

The last time you carried a tray, or plate, or something else with two hands, did you keep it from spilling? How hard did you have to think in order to hold it level? For a robot that can be a very difficult task—not to execute the motion, but to compute beforehand what the motion should be. Each arm is a kinematic linkage, or series of joints. Each joint has one or more numbers associated with it describing what angle it is currently set to. All the numbers taken together are a point in ℝⁿ, and there is a computation that will tell us where this puts the end effector, or hand, of each arm. Holding a tray level means both hands are at the same height. This is called a constraint, which gives us an equation explaining how the numbers should relate to one another. This is the tricky part: not all of ℝⁿ holds the tray level; instead, the tray is level only on some curved surface, a manifold, with dimension n − 1, since the one constraint equation removes one degree of freedom.
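
A toy planar version of this setup, purely for illustration: two three-joint arms with unit-length links, so a configuration q is a point in ℝ⁶, and holding the tray level is the single equation f(q) = 0, leaving a 5-dimensional manifold.

```python
# Toy planar version: two 3-joint arms with unit links. A configuration
# q lives in R^6 (three angles per arm); the level-tray constraint is the
# single equation f(q) = 0, carving out a 5-dimensional manifold.
import numpy as np

def end_effector_height(angles):
    # Planar forward kinematics: y-coordinate of the last link's tip.
    total, y = 0.0, 0.0
    for a in angles:
        total += a
        y += np.sin(total)
    return y

def f(q):
    left, right = q[:3], q[3:]
    return end_effector_height(left) - end_effector_height(right)

q = np.array([0.3, -0.2, 0.1, 0.25, -0.1, 0.0])
print(f(q))  # nonzero: this configuration tilts the tray slightly
```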

It is hard to visualize a high-dimensional structure like this, but for a computer, the difficulty is not the dimensionality. We would like a formula that explains how to move along the manifold without leaving it, but often no such formula exists. This is problematic for motion planning algorithms that try to find paths by connecting valid robot configurations without hitting any obstacles. A good solution is to look around and pretend the manifold is flat where you are (a hyperplane). Then moving around is just some easy linear algebra! As long as you stay close, you can fix any errors that creep in as the manifold curves away from the plane. But once you get too far away, you have to switch to a different plane—a different linear approximation of the manifold. Each of these planar regions is called a chart, and together the charts make up an atlas of the manifold.
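
The "fix any errors" step can be done with a standard Newton projection using the constraint Jacobian; the sketch below is that textbook technique, not the specific machinery of any one planner, and the function names are illustrative.

```python
# Standard Newton projection: pull a nearby ambient point back onto the
# manifold {q : F(q) = 0} using the constraint Jacobian. Moving within a
# chart is plain linear algebra; this is the correction applied once the
# manifold curves away from the tangent plane.
import numpy as np

def numeric_jacobian(F, q, eps=1e-6):
    F0 = np.atleast_1d(F(q))
    J = np.zeros((F0.size, q.size))
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (np.atleast_1d(F(q + dq)) - F0) / eps
    return J

def project(F, q, tol=1e-9, max_iter=50):
    q = np.asarray(q, dtype=float).copy()
    for _ in range(max_iter):
        r = np.atleast_1d(F(q))
        if np.linalg.norm(r) < tol:
            break
        J = numeric_jacobian(F, q)
        q -= np.linalg.pinv(J) @ r  # minimum-norm Newton step
    return q
```

With the toy constraint from the previous sketch, project(f, q) nudges a tilted configuration back onto the level-tray manifold.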

This chart-based technique has already been combined with one existing algorithm for robotic motion planning. I am currently authoring a paper that takes the idea further by combining it with a host of existing motion planning algorithms. The goal is to produce a powerful tool chest that solves a variety of problems for constrained kinematic linkages, or any high-dimensional system whose constraint equations imply a lower-dimensional manifold.