My current research activities are with the Habanero Extreme Scale Software Research Laboratory at the Georgia Institute of Technology, under Prof. Vivek Sarkar.
Dynamic Deadlock Avoidance
Incorrect usage of synchronization can cause a parallel program to deadlock in ways that are difficult to predict or debug, especially because a deadlocking execution may arise only non-deterministically. Proving the absence of deadlocks statically is difficult and may require tight restrictions on the kinds of parallelism available to the programmer. Instead, I have developed runtime policies that guarantee deadlock freedom when followed and can be verified with low performance overhead. If a program attempts an operation that would cause a deadlock, the event is caught and raised as a runtime exception. The problem of deadlocks can then be addressed with defensive programming, where the programmer anticipates software bugs so that informative alarms can be raised as soon as the bugs occur.
For synchronization using futures, a blocking primitive that represents the eventual return value of a task, I have developed the Transitive Joins policy, which detects all deadlock cycles as they form and eliminates a class of false positive alarms raised by prior work.
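As a simplified illustration of the underlying idea (not the published Transitive Joins algorithm itself), a runtime can maintain a wait-for relation among tasks and refuse any blocking operation that would close a cycle, raising an exception instead of deadlocking:

```python
# Minimal sketch: each blocked task waits on at most one other task
# (the task expected to fulfill the future it is getting). A get()
# that would close a cycle in this relation is rejected at runtime.

class DeadlockError(RuntimeError):
    pass

class Detector:
    def __init__(self):
        self.waits_for = {}  # blocked task -> task it waits on

    def would_deadlock(self, waiter, target):
        # Follow the chain of blocked tasks starting from target;
        # reaching waiter means the requested wait forms a cycle.
        node = target
        while node is not None:
            if node == waiter:
                return True
            node = self.waits_for.get(node)
        return False

    def block(self, waiter, target):
        if self.would_deadlock(waiter, target):
            raise DeadlockError(f"{waiter} -> {target} closes a cycle")
        self.waits_for[waiter] = target

    def unblock(self, waiter):
        self.waits_for.pop(waiter, None)
```

For example, after task A blocks on B and B blocks on C, an attempt by C to block on A is caught and raised as an exception, exactly when the cycle would have formed.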
In the more general case of promises, I defined an ownership semantics that associates each promise with a task. The owner of a promise is responsible for fulfilling it directly or delegating ownership to a new task. A simple algorithm can enforce the rules of ownership to detect a class of non-termination bugs in which the owner of a promise forgets to fulfill it. Such bugs are impossible to detect and attribute at runtime without knowledge of ownership. Moreover, ownership newly enables the definition of a promise deadlock cycle. I have proved the correctness of an algorithm for cycle detection in the presence of concurrent ownership transfers under a weak memory model. This algorithm detects every promise deadlock and raises no false alarms.
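A toy sketch of the ownership discipline (the class and method names here are invented for illustration, not the paper's formalism): every promise has exactly one owning task, which must either fulfill it or transfer ownership before exiting, so an owner that exits still holding an unfulfilled promise is caught immediately.

```python
# Sketch of ownership tracking for promises. The key point: without
# a recorded owner, "somebody forgot to fulfill this promise" has no
# task to blame; with one, the bug is detected when the owner exits.

class OwnershipError(RuntimeError):
    pass

class Promise:
    def __init__(self, owner):
        self.owner = owner
        self.fulfilled = False
        owner.owned.add(self)

    def fulfill(self, task):
        if task is not self.owner:
            raise OwnershipError("only the owner may fulfill")
        self.fulfilled = True
        self.owner.owned.discard(self)

    def transfer(self, new_owner):
        # Delegation: responsibility moves to another task.
        self.owner.owned.discard(self)
        self.owner = new_owner
        new_owner.owned.add(self)

class Task:
    def __init__(self, name):
        self.name = name
        self.owned = set()

    def exit(self):
        # Exiting while still owning unfulfilled promises is the
        # non-termination bug the ownership policy makes detectable.
        if self.owned:
            raise OwnershipError(f"{self.name} exits owing promises")
```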
Caleb Voss, Tiago Cogumbreiro, and Vivek Sarkar, “Transitive Joins: A Sound and Efficient Online Deadlock-Avoidance Policy,” in Proc. 24th Symp. Principles and Practice of Parallel Programming, 2019, pp. 378–390. [pdf] [publication]
Caleb Voss and Vivek Sarkar, “An Ownership Policy and Deadlock Detector for Promises,” in Proc. 26th Symp. Principles and Practice of Parallel Programming, 2021. To appear.
My prior research was in the domain of automated verification of heap programs in Georgia Tech's Trustable Programming Group, overseen by Dr. Bill Harris, now at Galois, Inc.
Safety Properties in Shape Analysis
This work focused on the automated generation of proofs for safety properties about program heaps. We asked questions about the structural organization of heaps which would traditionally require predicate logic (quantification) to express. A generalization of Hoare logic emerged which allowed us to reason about disparate phases of program execution simultaneously in order to avoid quantification. Rather than describe a single subprogram with pre- and post-conditions, we can describe multiple subprograms at once. This expressivity allows a verification proof to have the same shape as the heap itself! As a result, invariants are carried over unbounded heaps by the proof structure rather than by recursively defined predicates in the assertion language.
In the summer of 2016, I collaborated with Prof. Jim Griffioen of the University of Kentucky on the cross-domain problem of high-level approaches to software-defined networking.
Multi-author Hierarchical Network Policies
Software-defined networking (SDN) allows routing behavior to be programmed into a network dynamically from a controller. A central authority can define forwarding rules to implement policies, firewalls, address translation, and other network functions. Moreover, these rules can be updated dynamically in response to network events to allow for adaptive policies like stateful firewalls and load balancing.
We were interested in the advantages of delegating the authorship of the total network policy to local administrators and even tenants of the network. Such an approach required an automated framework in which an IT department can retain ultimate control while still yielding to the contributions of individuals. Our framework used a hierarchical allocation of both permission and responsibility to users. At every point in the hierarchy, the framework answered the questions of what can be done and what should be done. The framework enforced permissions so that even a buggy or malicious contribution by an individual could not overstep its bounds. Our hope is that this decentralized approach can allow organizations with large networks to safely crowdsource network governance and achieve the full power of SDN.
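As a toy illustration of the permission-enforcement idea (the classes and the port-based permission model here are invented for illustration, not our framework's actual design), each author in the hierarchy can only install rules within the slice of traffic delegated to it by its parent:

```python
# Sketch: a hierarchy of policy authors. A delegation can never
# exceed the parent's own authority, so even a buggy or malicious
# contribution cannot overstep its bounds.

class Node:
    def __init__(self, name, allowed_ports, parent=None):
        self.name = name
        self.allowed_ports = set(allowed_ports)
        self.parent = parent
        if parent is not None:
            # Enforce that delegated authority is a subset of the
            # parent's authority.
            if not self.allowed_ports <= parent.allowed_ports:
                raise PermissionError(
                    f"{name} delegated more than {parent.name} holds")

    def install_rule(self, dst_port, action):
        if dst_port not in self.allowed_ports:
            raise PermissionError(
                f"{self.name} may not govern port {dst_port}")
        return (self.name, dst_port, action)
```

For example, a tenant delegated only ports 80 and 443 can install web-traffic rules, but an attempt to touch port 22 is rejected by the framework rather than silently taking effect.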
I have had the privilege to work under the supervision of Prof. Mark Moll in the lab of Prof. Lydia E. Kavraki during my time as an undergraduate at Rice University. Here I discuss the two major research projects I have produced with them.
Randomizing Graph Searches for Robustness
Finding the shortest path through a weighted graph is a well-understood task. With a cost heuristic, A* is the way to go. But what if the graph is not reliable? A broken edge or an incorrect weight can make the apparent shortest path a poor choice. So how can we find paths other than the shortest one? And what criteria can we use to evaluate the set of proposed alternatives?
It is desirable that the alternatives be short. We could try the second shortest path, but in a moderately dense graph, it is likely to share almost all its edges with the original. The kth shortest, for large k, is probably better, but systematically checking all these options is expensive. Instead, consider an algorithm that encourages random deviations from known paths: it says, “What if this area of the graph, along the shortest path, is broken? Then what would the shortest path be? Now what about that area? Or both?” and so on, iteratively building a set of increasingly diverse alternatives. Intuitively, this heuristic simulates random breaking of the graph to build an arsenal that we hope will stand up against any specific issue with the actual graph.
This is the core of the contribution in my 2015 ICRA paper, together with applications for robotic motion planning and an analysis of the efficiency and quality of this approach.
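The intuition above can be sketched as follows (an illustration of the idea, not the algorithm from the paper): repeatedly pretend a random piece of the current shortest path is “broken” by inflating its weight, then re-solve, collecting the distinct routes that emerge.

```python
# Sketch: collect diverse alternative paths by randomly penalizing
# edges along already-found paths and re-running a shortest-path
# search on the perturbed graph.
import heapq
import random

def dijkstra(graph, src, dst):
    # graph: {u: {v: weight}}; returns the shortest path as a list
    # of nodes, or None if dst is unreachable.
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def diverse_paths(graph, src, dst, k, penalty=10.0, tries=50, seed=0):
    rng = random.Random(seed)
    g = {u: dict(nbrs) for u, nbrs in graph.items()}  # mutable copy
    paths = []
    for _ in range(tries):
        if len(paths) >= k:
            break
        p = dijkstra(g, src, dst)
        if p is None:
            break
        if p not in paths:
            paths.append(p)
        # "Break" a random edge along this path so the next search
        # is encouraged to deviate around it.
        i = rng.randrange(len(p) - 1)
        g[p[i]][p[i + 1]] *= penalty
    return paths
```

On a small graph with two routes from a to d, the first iteration finds the cheaper one, the penalty then makes it look broken, and the second iteration deviates onto the alternative.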
High-dimensional Motion Planning Algorithms
The last time you carried a tray, or plate, or something else with two hands, did you keep it from spilling? How hard did you have to think in order to hold it level? For a robot that can be a very difficult task—not to execute the motion, but to compute beforehand what the motion should be. Each arm is a kinematic linkage, or series of joints. Each joint has one or more numbers associated to it explaining what angle it is currently set to. All the numbers taken together are a point in ℝⁿ, and there is a computation that will tell us where this puts the end effector, or hand, of each arm. Holding a tray level means both hands are at the same height. This is called a constraint, which gives us an equation explaining how the numbers should relate to one another. This is the tricky part: not all of ℝⁿ holds the tray level; instead, the tray is level only on some curved surface, a manifold, with dimension n − 1.
It is hard to visualize a high-dimensional structure like this, but for a computer, the difficulty is not the dimensionality. We would like a formula that explains how to move along the manifold without leaving it, but often no such formula exists. This is problematic for motion planning algorithms that try to find paths by connecting valid robot configurations without hitting any obstacles. A good solution is to look around and pretend the manifold is flat where you are (a hyperplane). Then moving around is just some easy linear algebra! As long as you stay close, you can fix any errors that creep in as the manifold curves away from the plane. But once you get too far away, you have to switch to a different plane—a different linear approximation of the manifold. Each of these planar regions is called a chart, and together the charts make up an atlas of the manifold.
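A toy sketch of this step-and-project idea, using the unit circle as a stand-in constraint manifold (the constraint and all names here are invented for illustration): move in the tangent plane at the current point, then use a few Newton steps along the constraint gradient to pull the result back onto the manifold.

```python
# Toy chart-following on the constraint f(x, y) = x^2 + y^2 - 1 = 0.
# The same pattern scales to a robot's configuration space, where f
# encodes "both hands at the same height."
import math

def f(q):
    x, y = q
    return x * x + y * y - 1.0

def grad_f(q):
    x, y = q
    return (2.0 * x, 2.0 * y)

def project(q, tol=1e-10, iters=50):
    # Newton's method: repeatedly step against the constraint
    # gradient until f(q) is numerically zero.
    for _ in range(iters):
        err = f(q)
        if abs(err) < tol:
            break
        gx, gy = grad_f(q)
        norm2 = gx * gx + gy * gy
        q = (q[0] - err * gx / norm2, q[1] - err * gy / norm2)
    return q

def step_on_manifold(q, direction, step=0.1):
    # Move in the tangent plane at q (orthogonal to grad f), which is
    # "easy linear algebra," then fix the error that creeps in as the
    # manifold curves away from the plane.
    gx, gy = grad_f(q)
    tx, ty = -gy, gx  # tangent to the circle at q
    norm = math.hypot(tx, ty)
    q2 = (q[0] + step * direction * tx / norm,
          q[1] + step * direction * ty / norm)
    return project(q2)
```

The projection step is what keeps errors small: the linear move drifts slightly off the circle, and Newton's method snaps it back before the drift accumulates.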
This technique for manifolds has already been combined with one existing algorithm for robotic motion planning. We decoupled the core of the technique and instantiated it using several existing robotic motion planners.