Yinan Zhang

Ph.D. student at Dartmouth Computer Science

Enthusiast of assemblable structures and their geometry

CONTACT
220 Sudikoff
Dartmouth College
9 Maynard St.
Hanover, NH 03755

Yinan.Zhang.GR@Dartmouth.edu

About Me

I'm a Ph.D. student in Computer Science at Dartmouth College. My primary research area is robotics, with a focus on motion planning, 3D fabrication, the geometry of assemblable structures, and reinforcement learning. My research advisor is Devin Balkcom.

From 2009 to 2013, I studied Software Engineering at Southeast University in Nanjing, China, where I joined the Computer Graphics lab under Dr. Li Yao's supervision and developed a strong interest in computer graphics and research. In September 2013, I was fortunate to be admitted to Dartmouth College to further my study of Computer Science.

Outside of the office, I enjoy fishing, photography, and soccer. Go Barcelona!


Research

I'm interested in locomotion, rigid object manipulation, and assemblable structures. My collaborators include Weifu Wang, Yu-Han Lyu, Emily Whiting, and Haoxiang Li.

Assemblable structures

For many years, my mom bought me all kinds of toy building blocks, including Lego, as birthday gifts. Every week, we would spend a whole day building some imaginary fancy structure. At some point, somehow, either dad or mom would accidentally destroy my work, and I would rebuild a new structure the next week. These toy blocks developed my interest in assemblable structures.

Interlocking assemblable structures. A painful experience, as mentioned above, is that structures built from toy blocks are always eventually destroyed. The computer graphics community has studied interlocking structures for a long time: an interlocking structure has only one or a finite number of keys that must be disassembled before any other block can be taken apart from the structure. Wooden puzzles are a perfect example, and Dr. Song Peng has done excellent work on constructing and solving interlocking structures. Beyond making toy puzzles, I'm more interested in building extremely large-scale interlocking structures, the Empire State Building, for example. [Zhang2016a] designs 9 kinds of blocks with joints; with these blocks and the provided algorithm, we can assemble any voxelized structure using only translations, and the final structure has a finite number of keys.

We later improved the block designs and reduced the number of block types to two. With the new designs, any model can be assembled into an interlocking structure from a small number of blocks of just two kinds. We also proposed a parallel construction scheme to reduce the overall construction time, bringing us one step closer to physically constructing large-scale structures efficiently. Below are a video of a robot arm laying out blocks and a rendered animation of a chair being assembled. Our paper was accepted at WAFR 2018 [Zhang2018b].

Disassembling structures with divisible and atomic components. A friend of mine had a car accident in 2015. He was lucky and was rescued, but in many car accidents, firefighters must cut the damaged vehicle into pieces to free the victim. Ideally, we want to make as few cuts as possible, because fewer cuts mean a faster rescue. In general, we want to explore this problem: given two interlocking objects, one divisible and one atomic, how should we cut the divisible object so that the two can be separated? This is the opposite of grasping or caging, where one wants to keep an object in a certain configuration or set of configurations. [Zhang2016b] and [Zhang2018c] present early explorations of the problem in 2D, giving a lower bound on the number of pieces the divisible part must be cut into, as well as a complete algorithm that guarantees the atomic part can be removed.

Rearranging agents in a small space using global controls. Consider a set of tiny robots sitting in a grid world. We may only control them globally, with four commands: LEFT, RIGHT, UP, and DOWN, and we may also place obstacles in the grid. How many obstacles must be placed, and how many commands must be sent to the robots? How large must the workspace be? [Zhang2017a] explores these questions in a 2D setting; we show that there exists a workspace only a constant factor larger than the number of robots that enables complete rearrangement of a rectangle of robots. A small simulation of the control model is sketched below.
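
The following Python sketch illustrates the global-control model (my simplification for illustration, not the paper's algorithm): every robot moves one cell per command unless blocked by a wall, an obstacle, or a robot that cannot move itself. The grid size, obstacle set, and single-step model are assumptions.

    DIRS = {"LEFT": (-1, 0), "RIGHT": (1, 0), "UP": (0, 1), "DOWN": (0, -1)}

    def step(robots, obstacles, width, height, command):
        """Apply one global command; return the new set of robot positions."""
        dx, dy = DIRS[command]
        # Process robots front-to-back along the motion direction so that a
        # robot blocked only by a robot about to move can follow it.
        order = sorted(robots, key=lambda p: -(p[0] * dx + p[1] * dy))
        new_positions = set(robots)
        for x, y in order:
            nx, ny = x + dx, y + dy
            if (0 <= nx < width and 0 <= ny < height
                    and (nx, ny) not in obstacles
                    and (nx, ny) not in new_positions):
                new_positions.remove((x, y))
                new_positions.add((nx, ny))
        return new_positions

    robots = {(0, 0), (1, 0)}
    obstacles = {(3, 0)}
    robots = step(robots, obstacles, 4, 2, "RIGHT")   # -> {(1, 0), (2, 0)}

Under this model every command acts on all robots simultaneously, so it is the obstacle placement, not individual control, that carries the information needed for rearrangement.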

Motion Planning

Many researchers have explored optimal motion planning in the absence of obstacles, for example for Dubins and Reeds-Shepp steered cars, differential drives, and omni-directional vehicles. Finding optimal trajectories among obstacles, however, remains very difficult. [Balkcom2015] begins to explore the idea that some easy spaces (with large open regions, or large obstacle regions) can be represented compactly, without giving up optimality, using a variable-sized cell decomposition. We show that a form of convexity can be defined that is suitable for configuration spaces equipped with optimal steering methods and their corresponding shortest-path metrics, and that a variant of this convexity, subconvexity, holds locally wherever there is a reachable ball in the free configuration space.
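
As a toy illustration of variable-sized cell decomposition (not the paper's construction, which works in configuration spaces with steering metrics), the Python sketch below recursively subdivides a unit square around an assumed circular obstacle, keeping large cells whole when they are entirely free or entirely occupied:

    def classify(cx, cy, half, ox, oy, r):
        """Label a square cell 'free', 'occupied', or 'mixed' w.r.t. a disk."""
        # Squared distances from the disk center to the nearest and
        # farthest points of the cell.
        nearest = max(abs(ox - cx) - half, 0) ** 2 + max(abs(oy - cy) - half, 0) ** 2
        farthest = (abs(ox - cx) + half) ** 2 + (abs(oy - cy) + half) ** 2
        if nearest >= r * r:
            return "free"
        if farthest <= r * r:
            return "occupied"
        return "mixed"

    def decompose(cx, cy, half, depth, obs=(0.5, 0.5, 0.25)):
        """Return leaf cells (cx, cy, half, label), subdividing only mixed cells."""
        label = classify(cx, cy, half, *obs)
        if label != "mixed" or depth == 0:
            return [(cx, cy, half, label)]
        h = half / 2
        cells = []
        for sx in (-h, h):
            for sy in (-h, h):
                cells += decompose(cx + sx, cy + sy, h, depth - 1, obs)
        return cells

    cells = decompose(0.5, 0.5, 0.5, depth=4)
    print(len(cells), "cells;", sum(c[3] == "free" for c in cells), "free")

Because only mixed cells are refined, large open or fully blocked regions are represented by a few big cells, which is the intuition behind representing "easy" spaces cheaply.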

Reinforcement Learning

In the summer of 2017, I interned at Adobe Research on a project focused on drone control. Traditional controllers such as PID or LQR are easy to implement but rarely optimal. An alternative is to learn a control function with deep reinforcement learning, which should eventually achieve optimality. However, deep reinforcement learning requires a large number of trials and failures before reaching any reasonable result, which makes it impractical for real-world robots, especially those with fragile parts. Inspired by this observation, we assume a pre-existing supervisor policy (PID or LQR, for example) and combine it with a learning policy. By optimizing the combined policy while gradually reducing the supervisor's contribution, we obtain a reinforcement learning algorithm that performs stably throughout training and improves itself to eventually achieve optimality. [Zhang2018a]
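
The blending idea can be sketched in a few lines of Python (an illustrative formulation, not the paper's exact algorithm; pid_action and learned_action are hypothetical stand-ins for the supervisor and the deep RL policy):

    import numpy as np

    def pid_action(state, target, kp=1.0, kd=0.4):
        """Toy PD supervisor: drive position toward target, damp velocity."""
        pos, vel = state
        return kp * (target - pos) - kd * vel

    def learned_action(state, weights):
        """Placeholder learned policy: a linear function of the state."""
        return float(np.dot(weights, state))

    def combined_action(state, target, weights, alpha):
        """Blend supervisor and learner; alpha anneals from 1 toward 0."""
        return alpha * pid_action(state, target) + (1 - alpha) * learned_action(state, weights)

    weights = np.zeros(2)
    for episode in range(1000):
        alpha = max(0.0, 1.0 - episode / 500)   # assumed linear decay schedule
        state = np.array([1.0, 0.0])            # (position, velocity)
        action = combined_action(state, 0.0, weights, alpha)
        # Roll out the environment and update `weights` with any policy-
        # gradient method; omitted here for brevity.

Early in training the supervisor dominates and keeps the system stable; as alpha decays, the learned policy takes over and can improve past the supervisor's performance.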


Teaching Assistant


Experience


Publications