I'm a fifth-year grad student at UMass Amherst working in the Resource-Bounded Reasoning Lab under Dr. Shlomo Zilberstein. My research focuses on problems in automated decision making: I work on metareasoning techniques that enable autonomous systems to optimize their own deliberation, and on computational methods that allow autonomous systems to resolve exceptions that arise during normal operation. Before grad school, I worked as a software developer on Wall Street. Outside of all that, I'm interested in many problems in philosophy, especially free will, mind, metaphysics, and epistemology.
Autonomous systems rarely operate with moral sensibility in the real world. In fact, the standard strategy just involves making many small tweaks to an autonomous system until it produces ethical behavior. For example, a self-driving car with an objective that rewards completing a route can be adjusted to penalize dangerous driving. However, these tweaks often lead to unpredictable behavior that doesn't reflect the values of the system's stakeholders. To address this problem, we offer an approach in which an autonomous system completes its task optimally while complying with an ethical framework.
Autonomous systems use decision-making models that rely on simplifying assumptions to reduce the complexity of the real world. However, as a result of those assumptions, these systems can encounter many different exceptions. Our paper offers a new type of autonomous system, called an introspective autonomous system, that resolves exceptions by interleaving regular decision making with exception handling, guided by a belief over whether any exceptions have been encountered during normal operation.
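To give a flavor of the idea, here's a minimal sketch (not our actual implementation): an agent maintains a Bayesian belief that an exception has occurred and switches to exception handling once that belief is high enough. The likelihood values, observation names, and threshold below are all illustrative.

```python
# Illustrative likelihoods of seeing each observation under exception
# vs. nominal operation (made up for this sketch).
P_OBS_GIVEN_EXCEPTION = {"anomalous": 0.9, "nominal": 0.1}
P_OBS_GIVEN_NO_EXCEPTION = {"anomalous": 0.2, "nominal": 0.8}

def update_belief(belief, observation):
    """Bayes update of P(exception) given one observation."""
    p_exc = P_OBS_GIVEN_EXCEPTION[observation] * belief
    p_nom = P_OBS_GIVEN_NO_EXCEPTION[observation] * (1 - belief)
    return p_exc / (p_exc + p_nom)

def act(belief, threshold=0.8):
    """Interleave normal operation with exception handling via the belief."""
    return "handle_exception" if belief > threshold else "continue_task"

belief = 0.05  # prior probability that an exception has occurred
for obs in ["nominal"] + ["anomalous"] * 5:
    belief = update_belief(belief, obs)
print(act(belief))  # → handle_exception
```

A run of anomalous observations drives the belief up multiplicatively (each one multiplies the odds by 0.9/0.2), so the agent commits to exception handling only once the evidence is strong.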
We develop meta-level control techniques to determine the point at which a robot should stop thinking and start doing. In dynamic environments, a robot rarely has enough time to determine the optimal solution to a decision-making problem. To handle this limitation, a robot can use an anytime algorithm, which is a type of algorithm that slowly improves a solution over time. So, as the robot thinks more and more, the solution slowly gets better and better. Now, here's the question: how long should the robot think for?
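One answer is a toy illustration of the trade-off (not our actual metareasoning technique): model an anytime planner whose solution quality improves with diminishing returns, and stop deliberating once the estimated gain from one more step of thinking no longer outweighs the cost of time. The quality curve and time cost below are invented for the sketch.

```python
TIME_COST = 0.05  # utility lost per step of thinking (illustrative)

def quality(step):
    """Illustrative anytime quality profile with diminishing returns."""
    return 1.0 - 0.5 ** step

def should_stop(step):
    """Myopic stopping rule: stop when the predicted one-step
    improvement in solution quality falls below the cost of time."""
    return quality(step + 1) - quality(step) < TIME_COST

step = 0
while not should_stop(step):
    step += 1
print(step, round(quality(step), 3))  # → 4 0.938
```

With these numbers, the robot stops after four steps: the fifth step would improve the solution by only 0.031, which is less than the 0.05 it costs in time.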
Due to the limitations of their decision-making models, autonomous systems cannot always complete a task successfully. Under certain conditions, they may even behave dangerously in their environments. Ideally, to overcome situations like this, autonomous systems should rely on human assistance when needed. Our recent work proposes a framework that enables autonomous systems to learn through experience to operate at varying levels of autonomy, each of which requires a different level of human involvement.
Autonomous systems often operate in environments where their behavior and the behavior of other agents is governed by a signal. However, if autonomous systems can't observe the signal directly, the consequences can be disastrous. For example, if a self-driving car can't observe a traffic light directly, it could drive into an intersection even when the traffic light is red. In response to this problem, we develop an approach to agent-aware state estimation in which autonomous systems exploit the behavior of other agents in the environment to observe the signal indirectly.
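Here's a simplified sketch of the intuition (a single binary signal with made-up likelihoods, not our actual method): a self-driving car that cannot see its traffic light infers whether the light is red from the behavior of the cars ahead of it at the same intersection.

```python
# Illustrative likelihoods of another car's behavior given our light's
# state (invented for this sketch).
P_BEHAVIOR_GIVEN_RED = {"stopped": 0.9, "moving": 0.1}
P_BEHAVIOR_GIVEN_GREEN = {"stopped": 0.2, "moving": 0.8}

def infer_light(behaviors, prior_red=0.5):
    """Bayes update of P(light is red) from other agents' behavior."""
    p_red = prior_red
    for b in behaviors:
        num = P_BEHAVIOR_GIVEN_RED[b] * p_red
        den = num + P_BEHAVIOR_GIVEN_GREEN[b] * (1 - p_red)
        p_red = num / den
    return p_red

# Three cars ahead of us are all stopped: the light is very likely red.
print(round(infer_light(["stopped", "stopped", "stopped"]), 3))  # → 0.989
```

Each observed agent acts as an indirect, noisy sensor for the hidden signal, so a few consistent observations are enough to become confident about the light's state.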
We explore how to best place vertices of graph snapshots across workers in a distributed dynamic graph database to speed up queries like PageRank and average degree. By extending G*, our own distributed dynamic graph database, we examine how each query performs when the vertices of graph snapshots are placed on a single worker or spread across all of the workers. It turns out that vertex placement has a big impact on query performance.
We present OpenSR, an open source stimulus-response testing framework for creating, managing, and tracking implicit association tests (IATs). Basically, an IAT helps researchers determine whether someone has an implicit bias for or against something. Since most researchers currently have to buy complicated, expensive software to run IATs, we built a free, easy-to-use, customizable alternative that's better than what's already on the market.
We built Deep Jammer, a music generator that learns how to compose classical music by listening to 320 classical piano pieces. By using deep learning, specifically two LSTMs, it can learn the spatial and temporal patterns of classical music. In a survey of over 50 participants, the classical piano pieces composed by Deep Jammer scored a 7.5 rating compared to a piece by Bach that scored an 8.1 rating. Finally, to experiment with transfer learning, we trained Deep Jammer on just twenty jazz piano pieces. While it wasn't perfect, it quickly picked up on the rhythm and sound of jazz.
To learn more about automated decision making, I built Logos, a library that can solve MDPs. Logos offers dynamic programming methods like value iteration and policy iteration as well as reinforcement learning algorithms like Monte Carlo learning and TD learning. I currently use Logos to control a self-driving car in semi-autonomous system simulations and to train a reinforcement learning agent that learns to play tic-tac-toe optimally through self-play.
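As a taste of what solving an MDP looks like, here's a minimal value iteration sketch in the spirit of Logos (the two-state MDP and discount factor below are illustrative, not from the library):

```python
GAMMA = 0.9  # discount factor (illustrative)

# MDP as {state: {action: [(probability, next_state, reward), ...]}}
MDP = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go": [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 1.0)],
           "go": [(1.0, "s0", 0.0)]},
}

def value_iteration(mdp, gamma=GAMMA, epsilon=1e-6):
    """Apply Bellman backups until the value function converges."""
    V = {s: 0.0 for s in mdp}
    while True:
        delta = 0.0
        for s, actions in mdp.items():
            q = [sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                 for outcomes in actions.values()]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < epsilon:
            return V

V = value_iteration(MDP)
```

Staying in `s1` earns a reward of 1 forever, so its value converges to 1/(1 - 0.9) = 10, and the value of `s0` follows from the chance of reaching `s1` by choosing `go`.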
We built an optimal Rubik's Cube solver, Deep Rainbow, that uses IDA*. We used an admissible heuristic based on three disjoint pattern databases: one for the set of edge cubies and two for different sets of corner cubies. We based our solver on Richard Korf's approach. In just a few minutes, Deep Rainbow can find optimal solutions to problems that are twelve steps or less away from the goal.
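To illustrate the search technique at Deep Rainbow's core, here's a compact IDA* sketch on the 8-puzzle with a Manhattan-distance heuristic (an illustrative stand-in for the cube and its pattern databases):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank

def manhattan(state):
    """Admissible heuristic: sum of tile distances from their goal cells."""
    total = 0
    for i, tile in enumerate(state):
        if tile:
            goal = tile - 1
            total += abs(i // 3 - goal // 3) + abs(i % 3 - goal % 3)
    return total

def neighbors(state):
    """All states reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def ida_star(start):
    """Iteratively deepen on f = g + h until the goal is found."""
    bound = manhattan(start)
    path = [start]

    def search(g, bound):
        state = path[-1]
        f = g + manhattan(state)
        if f > bound:
            return f
        if state == GOAL:
            return True
        minimum = float("inf")
        for nxt in neighbors(state):
            if nxt in path:  # avoid cycles along the current path
                continue
            path.append(nxt)
            result = search(g + 1, bound)
            if result is True:
                return True
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = search(0, bound)
        if result is True:
            return len(path) - 1  # moves in an optimal solution
        bound = result  # raise the bound to the smallest f that exceeded it

# A position two moves from the goal:
print(ida_star((1, 2, 3, 4, 5, 6, 0, 7, 8)))  # → 2
```

Because the heuristic never overestimates, the first solution IDA* finds is optimal, and its depth-first iterations use only linear memory, which is what makes the approach viable for the cube's enormous state space.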
We built a chess agent named Shallow Blue that uses minimax with alpha-beta pruning. The evaluation function incorporates many important dimensions of chess, such as piece development, material balance, mobility, and attack range. Luckily, Shallow Blue won a chess tournament against over fifteen teams that built their own chess agents. To add to the fun, I also built a Connect Four agent and a Tic-Tac-Toe agent using a similar method.
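Here's a minimal sketch of minimax with alpha-beta pruning, illustrated on tic-tac-toe with a trivial win/loss/draw evaluation rather than Shallow Blue's chess evaluation function:

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def alphabeta(board, player, alpha=-2, beta=2):
    """Best achievable score for `player`: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == player else -1
    if all(board):
        return 0  # board full: draw
    best = -2
    for i in range(9):
        if not board[i]:
            board[i] = player
            score = -alphabeta(board, "O" if player == "X" else "X",
                               -beta, -alpha)
            board[i] = None
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # prune: the opponent will never allow this line
    return best

# Perfect play from an empty board is a draw:
print(alphabeta([None] * 9, "X"))  # → 0
```

The pruning step is the whole trick: once a move is found that's at least as good as what the opponent can already force elsewhere, the remaining moves at that node can be skipped without changing the result.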
In an awesome class (CMPT 424) taught by Dr. Alan Labouseur, I built SvegOS, a browser-based operating system from scratch. Just like in a Unix operating system, you can manage files (create, read, update, delete, and ls), load a program (load), manage processes (run, ps, and kill), and a lot more. To top it all off, the interface shows the CPU registers, the memory, the hard drive, and the state of every process, so you can follow the operating system as it runs.
In another great class (CMPT 432) taught by Dr. Alan Labouseur, I built Svegliator, a compiler that compiles a C-like language into 6502alan assembly language. The compiler shows the resulting machine code, the concrete syntax tree, the abstract syntax tree, and the symbol table. So you can trace exactly what the compiler is doing, it records each step of the scanner, the parser, the semantic analyzer, and the code generator during compilation.
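To show what the first of those stages does, here's a toy scanner sketch (the token set below is an invented C-like fragment, not Svegliator's actual grammar, and the real compiler is written for the browser rather than in Python):

```python
import re

# Token kinds and their patterns; earlier entries win ties.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("ID", r"[a-zA-Z_]\w*"),
    ("OP", r"[+\-*/=]"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP", r"\s+"),  # whitespace is matched but discarded
]

def scan(source):
    """Turn source text into a list of (kind, lexeme) tokens."""
    pattern = "|".join(f"(?P<{name}>{regex})" for name, regex in TOKEN_SPEC)
    tokens = []
    for match in re.finditer(pattern, source):
        kind = match.lastgroup
        if kind != "SKIP":
            tokens.append((kind, match.group()))
    return tokens

print(scan("x = 3 + 42"))
# → [('ID', 'x'), ('OP', '='), ('NUMBER', '3'), ('OP', '+'), ('NUMBER', '42')]
```

The parser then consumes this token stream to build the concrete syntax tree, which the later stages condense into the abstract syntax tree and, eventually, machine code.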