Come find out about the latest research advances being made by our students and faculty.
Listen to our students and faculty talk about their latest research results.
Talk to the students one-on-one about their work.
Check out the cool demos.
The Computer Science Research Symposium is an annual exhibition of the research happening in the computer science department. Meet and talk with the researchers in our department about bleeding-edge techniques!
|15:00 - 15:10||Lorenzo Torresani||Opening Remarks|
|15:10 - 15:40||Tom Cormen||Dense Gray Codes, or Easy Ways to Generate Cyclic and Non-Cyclic Gray Codes For the First $n$ Whole Numbers.|
|15:45 - 16:05||Jack Holland||A new definition of contact provides insight into protein structure prediction.|
|16:10 - 16:30||Zhao Tian||The DarkLight Rises: Visible Light Communication in the Dark|
|16:30 - 17:45||Poster session - Moore Basement Foyer|
|18:00 - 18:20||Deeptak Verma||Developing next-generation biotherapeutics using computational methods.|
|18:25 - 18:45||Karim Ahmed||Network of Experts for Large-Scale Image Categorization.|
|18:50 - 19:10||Tianxing Li||Practical Human Sensing in the Light|
|19:30 - 22:00||Award Ceremony and Dinner||DOC House|
Dense Gray Codes, or Easy Ways to Generate Cyclic and Non-Cyclic Gray Codes For the First $n$ Whole Numbers.
The standard binary reflected Gray code gives a sequence of binary numbers in the range 0 to $n-1$, where $n$ is a power of 2, such that each number in the sequence differs from the preceding number in only one bit. We present two methods to compute Gray codes containing exactly $n$ numbers in the range 0 to $n-1$---that is, a permutation of $\langle 0, 1, \ldots, n-1 \rangle$ in which each number differs from the preceding number in only one bit---where $n$ is unconstrained. The first method produces a Gray code that is not cyclic: the first and last numbers in the sequence differ in more than one bit. The second method produces a cyclic Gray code if $n$ is even, so that the first and last numbers differ in only one bit, at the expense of a slightly more complicated procedure. Both methods are based on the standard binary reflected Gray code and, as in the binary reflected Gray code, each number in the output sequence can be computed in a constant number of word operations given just its index in the sequence. Joint work with Jessica C. Fan ’17.
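As background for the talk above: the standard binary reflected Gray code that both methods build on can be computed per index with a single XOR. A minimal Python sketch of that classical formula (the dense-code constructions themselves are not shown here):

```python
def gray(i):
    """Standard binary reflected Gray code: adjacent codes differ in exactly one bit."""
    return i ^ (i >> 1)

codes = [gray(i) for i in range(8)]
print(codes)  # → [0, 1, 3, 2, 6, 7, 5, 4]

# Consecutive codes (and, for n a power of 2, the wrap-around pair) differ in one bit.
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:] + codes[:1]))
```

Note that this sequence is only guaranteed to exist for $n$ a power of 2; producing such a single-bit-step permutation for unconstrained $n$ is exactly the problem the talk addresses.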
A new definition of contact provides insight into protein structure prediction.
A protein comprises a chain of amino acids specified by DNA. Predicting how these chains fold in the cell based on their constituent sequence of amino acids is a longstanding problem in cellular biology, and the inverse problem (predicting a sequence of amino acids that folds into a specified structure) is equally important in biological engineering. One approach to solving this dual problem is taking advantage of the wealth of data provided by experimentally determined structures. Statistical rules can be derived from a database of these structures and used to predict how likely it is that a given sequence folds into a particular structure (and vice versa). However, the utility of these statistical rules depends on the relevance of the structural features they are based on. We introduce a novel definition of contact between amino acids that captures structural information useful for deriving statistical rules. The predictive value of these rules is tested by applying them to sets of protein structures: in each set, one structure is a real experimental structure, while the rest are decoys that do not reflect the structure that their sequence would fold into in an actual cell. Our new definition of contact and the statistical rules derived from it were tested on commonly used sets of decoys. In many situations, they predict which structure is the real one when state-of-the-art methods do not. This success in decoy discrimination suggests that the rules we have developed are useful aids for evaluating how likely it is that particular sequences will fold into particular structures.
The DarkLight Rises: Visible Light Communication in the Dark.
Visible Light Communication (VLC) emerges as a new wireless communication technology with appealing benefits not present in radio communication. However, current VLC designs commonly require LED lights to emit perceptible light beams, which greatly limits the applicable scenarios of VLC (e.g., on a sunny day when indoor lighting is not needed), and brings high energy overhead and unpleasant visual experiences for mobile devices transmitting data using VLC. We design and develop DarkLight, a new VLC primitive that allows light-based communication to be sustained even when LEDs emit extremely low luminance. The key idea is to encode data into ultra-short, imperceptible light pulses. We tackle challenges in circuit designs, data encoding/decoding schemes, and DarkLight networking, to efficiently generate and reliably detect ultra-short light pulses using off-the-shelf, low-cost LEDs and photodiodes. Our DarkLight prototype supports a 1.3-m distance with a 1.6-Kbps data rate. By loosening VLC's reliance on visible light beams, DarkLight presents an unconventional direction for VLC and fundamentally broadens VLC's application scenarios.
Developing next-generation biotherapeutics using computational methods.
T-cell-driven recognition of non-"self" proteins presents a major obstacle in the development of next-generation biotherapeutics. In order to mitigate the immune response to T cell epitopes within an exogenous protein, we have developed a suite of powerful computational protein design algorithms that globally optimize variants for simultaneous function and low immunogenicity. The talk describes our computational methods and presents experimental results from application to therapeutic candidates, demonstrating our ability to disrupt broadly distributed immunogenic epitopes without compromising protein function. In one such application, we have recently engineered epitope-depleted variants of lysostaphin, a highly potent but immunogenic anti-staphylococcal enzyme. In contrast to the wild-type, our variants maintain low antibody titers and are able to repeatedly rescue humanized mice from challenges with methicillin-resistant Staphylococcus aureus (MRSA). This work provides the first controlled demonstration that depletion of T cell epitopes from a biotherapeutic agent leads to a reduced antibody response and consequently enhanced efficacy in an immune-competent disease model.
Network of Experts for Large-Scale Image Categorization.
We present a tree-structured network architecture for large-scale image classification. The trunk of the network contains convolutional layers optimized over all classes. At a given depth, the trunk splits into separate branches, each dedicated to discriminate a different subset of classes. Each branch acts as an expert classifying a set of categories that are difficult to tell apart, while the trunk provides common knowledge to all experts in the form of shared features. The training of our "network of experts" is completely end-to-end: the partition of categories into disjoint subsets is learned simultaneously with the parameters of the network trunk and the experts are trained jointly by minimizing a single learning objective over all classes. The proposed structure can be built from any existing convolutional neural network (CNN). We demonstrate its generality by adapting 3 popular CNNs for image categorization into the form of networks of experts. Our experiments on CIFAR100 and ImageNet show that in each case our method yields a substantial improvement in accuracy over the base CNN, and gives the best reported result on CIFAR100. Finally, the improvement in accuracy comes at little additional cost: compared to the base network, the training time of our model is about 1.5X and the number of parameters is comparable or in some cases even lower.
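The trunk-and-branch layout described above can be sketched with plain numpy. All sizes here, the split into four experts, and the single shared layer are hypothetical stand-ins; this toy forward pass omits convolutions and training entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 100 classes partitioned into 4 expert branches of 25 each.
d_in, d_trunk, n_branches, classes_per_branch = 32, 64, 4, 25

# Trunk: one shared layer standing in for the convolutional layers common to all classes.
W_trunk = rng.standard_normal((d_in, d_trunk)) * 0.1
# Branches: each expert scores only its own disjoint subset of classes.
W_branch = [rng.standard_normal((d_trunk, classes_per_branch)) * 0.1
            for _ in range(n_branches)]

def forward(x):
    h = np.maximum(x @ W_trunk, 0.0)                     # shared trunk features (ReLU)
    return np.concatenate([h @ W for W in W_branch])     # concatenated expert scores

x = rng.standard_normal(d_in)
print(forward(x).shape)  # (100,)
```

The point of the structure is visible even in this sketch: the experts share the cost of the trunk features, so adding a branch is much cheaper than adding a whole network.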
Practical Human Sensing in the Light.
We present StarLight, an infrastructure-based sensing system that reuses light emitted from ceiling LED panels to reconstruct fine-grained user skeleton postures continuously in real time. It relies on only a few (e.g., 20) photodiodes placed at optimized locations to passively capture low-level visual clues (light blockage information), with neither camera capturing sensitive images, nor on-body devices, nor electromagnetic interference. It then aggregates the blockage information of a large number of light rays from LED panels and identifies best-fit 3D skeleton postures. StarLight greatly advances the prior light-based sensing design by dramatically reducing the number of intrusive sensors, overcoming furniture blockage, and supporting user mobility. We build and deploy StarLight in a 3.6 m x 4.8 m office room, with customized 20 LED panels and 20 photodiodes. Experiments show that StarLight achieves 13.6 degrees mean angular error for five body joints and reconstructs a mobile skeleton at a high frame rate (40 FPS). StarLight enables a new unobtrusive sensing paradigm to augment today's mobile sensing for continuous and accurate behavioral monitoring.
|Srivamshi Pittala||Predictive Modeling of Vaccine Mediated Protection Against HIV/SIV.|
One of the key problems in HIV vaccine research is to measure vaccine-induced protection prior to exposure to the virus. Hence it is important to identify biomarkers, or correlates of protection, which can predict risk of infection. In this work, we describe a data-driven machine learning approach to identify such biomarkers. We use multivariate survival analysis on serological data collected during vaccine trials on non-human primates to predict risk of infection against Simian Immunodeficiency Virus (SIV). We test our approach using cross-validation, learning a unified model to predict risk of infection across different cohorts. We illustrate the predictive capability of our models by comparing the observed and predicted survival rates and risk of infection. We show that such models can identify multiple biomarkers that predict vaccine response.
|Lixing Lian||Out-of-Core Suffix Arrays Using FG.|
Implementing suffix arrays for massive data is challenging since it requires significant I/O operations that are difficult to mitigate. The DC3 algorithm provides one design to implement out-of-core suffix arrays efficiently. This paper demonstrates how to implement out-of-core suffix arrays using FG, a framework for developing out-of-core programs running on clusters that mitigates high-latency operations. It models the computations as networks that consist of asynchronous stages and generates the suffix array for out-of-core data in five FG networks. Experimental results show that the FG-based implementation outperforms other implementations for out-of-core suffix arrays. The FG-based implementation also suggests additional features for FG, such as buffer initialization and thread scheduling.
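For reference, a suffix array is just the lexicographic order of a string's suffixes. The naive in-memory construction below only illustrates the data structure itself; it is not the out-of-core DC3/FG pipeline the poster describes:

```python
def suffix_array(s):
    """Naive suffix array: sort suffix start positions lexicographically.
    O(n^2 log n) worst case; DC3 achieves linear time, and the poster's point
    is doing it out of core, which this sketch does not attempt."""
    return sorted(range(len(s)), key=lambda i: s[i:])

print(suffix_array("banana"))  # → [5, 3, 1, 0, 4, 2]
```

The suffixes of "banana" in sorted order are "a", "ana", "anana", "banana", "na", "nana", which is exactly the index list printed above.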
|Athina Panotopoulou||Perceptual Models of Preference in 3D Printing Direction.|
We introduce a perceptual model for determining 3D printing orientations that minimize the visual impact of artifacts produced by support structures.
|Rawan Alghofaili||Fabricating a Medical Cast Using Upper Extremity Motion Models.|
Due to the simplicity of the application process and the affordability of the material, and despite the uncomfortable experience for the wearer, medical casts have remained largely unchanged for years. Our objective is to minimize the cast's design without compromising immobilization or bone alignment.
|Srinath Ravichandran||Control Variates for Linear Light Sources in Participating Media.|
We present a new variance reduction technique for computing direct illumination from linear light sources or global illumination from virtual ray lights (VRLs) within participating media. We propose analytic approximations for the contribution of linear light sources (or VRLs) to camera rays, and leverage these in a control variates framework to reduce variance. We attain these approximations by identifying a duality between our problem and well-known analytic solutions for area lighting at a shade point. Our technique is unbiased and automatically adapts based on the importance sampling strategy to provide the best possible variance reduction. We compare our results against the prior state of the art which relied only on importance sampling and show that by additionally leveraging our control variates technique, we can provide noticeable variance reduction in a variety of scenes containing homogeneous media with isotropic phase functions.
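The control-variates idea in the abstract can be illustrated on a toy one-dimensional integral. The integrand, its analytic approximation, and the estimated scaling coefficient below are hypothetical stand-ins, not the paper's light-transport quantities:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: f has no convenient closed form, but g approximates it well and
# integrates analytically, mirroring the paper's analytic approximations.
f = lambda x: np.exp(-x) * np.sin(np.pi * x)   # target integrand on [0, 1]
g = lambda x: np.sin(np.pi * x)                # analytic approximation of f
g_mean = 2.0 / np.pi                           # exact integral of g over [0, 1]

x = rng.uniform(0.0, 1.0, 100_000)
fx, gx = f(x), g(x)

# Near-optimal scaling of the control variate, estimated from the samples.
c = np.cov(fx, gx)[0, 1] / gx.var()

plain = fx                          # plain Monte Carlo samples
cv = fx - c * (gx - g_mean)         # control-variates samples: same mean, less variance

print(plain.mean(), plain.var())
print(cv.mean(), cv.var())
```

Both estimators are unbiased for the same integral; because g tracks f closely here, the subtracted term cancels most of the per-sample fluctuation, which is the variance reduction the abstract exploits.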
|Shruti Agarwal||Degrade It!|
We propose to decipher the identity of license plates in low-quality images. This type of low-quality image, the result of a low-resolution still or video camera and low-light recording, appears with frustrating frequency in many investigations. Thus, despite being critical evidence in an investigation, these low-quality recordings are often useless: extracting a high-quality image from a low-quality one ranges from incredibly difficult to impossible. On the other hand, it is incredibly easy to generate a low-quality image from a high-quality one. Because of this asymmetry, we propose to decipher the contents of a low-quality image by starting with a high-quality hypothesis of the image contents, degrading this high-quality image, and then comparing the result to the low-quality image in question. We posit, and preliminary results suggest, that although the contents of a low-quality image cannot generally be extracted directly, there remains distinguishing information in the low-quality image.
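The degrade-and-compare loop described above can be sketched in a few lines. The block-average "degradation" and the random stand-in renderings are hypothetical; a real system would use a calibrated camera model and rendered images of each candidate plate string:

```python
import numpy as np

rng = np.random.default_rng(3)

def degrade(img, factor=8):
    """Toy degradation: block-average downsampling standing in for the camera's
    low-resolution, low-light pipeline (a real model would be calibrated)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical high-quality renderings of candidate plate strings.
candidates = {s: rng.random((64, 128)) for s in ["ABC123", "ABC128", "A8C123"]}
truth = "ABC128"

# The low-quality evidence: in this toy, the true plate's rendering, degraded.
evidence = degrade(candidates[truth])

# Score each hypothesis by degrading it and comparing against the evidence.
scores = {s: np.mean((degrade(img) - evidence) ** 2) for s, img in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # the hypothesis whose degraded version best matches the evidence
```

This runs the easy direction (high-quality to low-quality) once per hypothesis, which is exactly the asymmetry the abstract proposes to exploit.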
|Du Tran||Deep End2End Voxel2Voxel Prediction.|
Over the last few years deep learning methods have emerged as one of the most prominent approaches for video analysis. However, so far their most successful applications have been in the area of video classification and detection, i.e., problems involving the prediction of a single class label or a handful of output variables per video. Furthermore, while deep networks are commonly recognized as the best models to use in these domains, there is a widespread perception that in order to yield successful results they often require time-consuming architecture search, manual tweaking of parameters and computationally intensive pre-processing or post-processing methods. In this paper we challenge these views by presenting a deep 3D convolutional architecture trained end to end to perform voxel-level prediction, i.e., to output a variable at every voxel of the video. Most importantly, we show that the same exact architecture can be used to achieve competitive results on three widely different voxel-prediction tasks: video semantic segmentation, optical flow estimation, and video coloring. The three networks learned on these problems are trained from raw video without any form of preprocessing and their outputs do not require post-processing to achieve outstanding performance. Thus, they offer an efficient alternative to traditional and much more computationally expensive methods in these video domains. Authors: Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri.
|Mohammad Haris Baig||Coupled Depth Learning.|
We propose a method for estimating depth from a single image using a coarse to fine approach. We argue that modelling the fine depth details is easier after a coarse depth map has been computed. We express a global (coarse) depth map of an image as a linear combination of a depth basis learned from training examples. The depth basis captures spatial and statistical regularities and reduces the problem of global depth estimation to the task of predicting the input-specific coefficients in the linear combination. This is formulated as a regression problem from a holistic representation of the image. Crucially, the depth basis and the regression function are coupled and jointly optimized by our learning scheme. We demonstrate that this results in a significant improvement in accuracy compared to direct regression of depth pixel values or approaches learning the depth basis disjointly from the regression function. The global depth estimate is then used as a guidance by a local refinement method that introduces depth details that were not captured at the global level. Experiments on the NYUv2 and KITTI datasets show that our method outperforms the existing state-of-the-art at a considerably lower computational cost for both training and testing.
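The basis decomposition at the heart of the abstract can be sketched on synthetic data. The sizes, the random features, and the alternating-least-squares loop standing in for the paper's joint optimization are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes (not from the paper): depth maps flattened to 200 pixels,
# a basis of 5 atoms, holistic image features of dimension 50.
n_pix, k, d_feat, n_train = 200, 5, 50, 500

feats = rng.standard_normal((n_train, d_feat))
# Synthetic ground truth generated from a hidden low-rank model, for illustration.
depths = feats @ rng.standard_normal((d_feat, k)) @ rng.standard_normal((k, n_pix))

# Coupled idea, heavily simplified: learn regression W and depth basis B so that
# depth ≈ (feats @ W) @ B.T, alternating the two least-squares subproblems.
B = rng.standard_normal((n_pix, k))
for _ in range(50):
    target = depths @ np.linalg.pinv(B.T)                  # desired coefficients
    W = np.linalg.lstsq(feats, target, rcond=None)[0]      # regression update
    coeffs = feats @ W
    B = np.linalg.lstsq(coeffs, depths, rcond=None)[0].T   # basis update

rms = np.sqrt(np.mean((coeffs @ B.T - depths) ** 2))
print(rms, np.sqrt(np.mean(depths ** 2)))  # reconstruction error vs. signal scale
```

The coupling is the key design choice: the basis is updated against the coefficients the regressor can actually produce, rather than being fixed in advance, which is the contrast the abstract draws with learning the basis disjointly.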
|Anup Joshi||Estimating Partition Function with Better Proposal Distribution.|
Abstract not given.
|Chuankai An||Improving Local Search with Open Geographic Data.|
Local search helps users find certain types of business units (restaurants, gas stations, hospitals, etc.) in the surrounding area. However, some merchants might not have much online content (e.g., customer reviews, business descriptions, opening hours, or telephone numbers). This can pose a problem for traditional local search algorithms such as vector-space-based approaches. With this difficulty in mind, we present an approach to local search that incorporates open geographic data. Using publicly available datasets, we are able to uncover patterns that link geographic features and user preferences. From this, we propose a model that integrates geographic parameters to infer user preferences. Through this model and the estimated user preferences, we develop a new framework for "local" (in the geographic sense) search that offsets the absence of context about physical business units. Our initial analysis points to the meaningful integration of open geographic data in local search and suggests several directions for further research.
|Xi Xiong||Customizing Your Wireless Coverage via a 3D Fabricated Reflector.|
Accurate control of wireless signal propagation, and of the resulting wireless coverage, is important in residential, commercial, and industrial environments. However, existing methods require expensive directional antennas, offer limited control flexibility, and present technical barriers for ordinary users. We present a new computational approach that uses a low-cost, 3D-fabricated reflector to steer wireless signals and achieve a desired wireless coverage. Users simply input their environment settings and preferred coverage (e.g., areas where signals should be strengthened or weakened), press the button of a 3D printer to fabricate the reflector, and place it around a Wi-Fi transmitter to achieve the desired coverage. We demonstrate the efficacy of our fabricated reflectors using indoor measurements.
|Rui Wang||CrossCheck: Towards Passive Sensing and Detection of Mental Health Changes in People with Schizophrenia.|
Abstract not given.
|Prashant Anantharaman||Namespace and Cryptographic Complexity in the Smart Grid.|
With the smart grid becoming a reality, there is a massive need for devices to talk to each other. The consumer-side smart grid consists of all the appliances in a smart home, the smart meter, and other data-collection devices. The utility provider sends pricing signals and demand-response signals to the smart meter and requests information about consumption; the smart meter sends consumption details back to its utility provider. Appliances within the consumer-side smart grid could send repair diagnostics or check for software updates. There are pressing reasons to make these communications as secure as possible, to prevent attackers from controlling appliances in smart homes or sending them malicious signals. Hence, we examine an X.509 scheme based on attribute certificates and then design a Macaroons-based scheme for the smart grid. In this poster we describe both schemes in detail and discuss the trade-offs between them.
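The core of a Macaroons-style credential (as in the general Macaroons construction, not necessarily this poster's specific design) is an HMAC chain: each appended caveat re-keys the HMAC with the previous signature, so a holder can attenuate a credential without the root key. A minimal stdlib sketch with hypothetical smart-grid caveats:

```python
import hashlib
import hmac

def mac(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

# Hypothetical root key held by the utility provider.
root_key = b"utility-root-key"

# Mint: the signature chains HMACs over the identifier and each caveat in order.
sig = mac(root_key, "meter-42")
for caveat in ["action = read-consumption", "expires < 2017-06-01"]:
    sig = mac(sig, caveat)

# Verify: the root-key holder recomputes the same chain and compares.
check = mac(root_key, "meter-42")
for caveat in ["action = read-consumption", "expires < 2017-06-01"]:
    check = mac(check, caveat)

print(hmac.compare_digest(sig, check))  # True
```

Because each signature is derived from the previous one, a device can add further restrictions (new caveats) on its own, but cannot remove existing ones without forging the chain, which is what makes the scheme attractive for delegation inside a smart home.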
|Varun Mishra||Sensing Stress Levels.|
Abstract not given.