Shao-Hua Sun
A Ph.D. student in Computer Science
at the University of Southern California


Bio

I am a Ph.D. student in Computer Science at the University of Southern California (USC), where I am an Annenberg Fellow in the Cognitive Learning for Vision and Robotics Lab (CLVR) working with Professor Joseph J. Lim. My research interests span the fields of Deep Learning, Computer Vision, Reinforcement Learning, Meta-learning, and Robotics. In particular, I am interested in developing learning algorithms that empower machines to efficiently master complex tasks and to quickly adapt to novel tasks and environments by leveraging prior knowledge. Before joining USC, I received my B.S. degree from the Department of Electrical Engineering at National Taiwan University (NTU), Taipei, Taiwan. I open-source my research projects as well as implementations of state-of-the-art papers on my GitHub, and I tweet about exciting work on my Twitter.

News

Dec 2018
Our paper Composing Complex Skills by Learning Transition Policies with Proximity Reward Induction has been accepted to ICLR 2019.
Nov 2018
Our paper Toward Multimodal Model-Agnostic Meta-Learning has been accepted to the Meta-Learning Workshop at NeurIPS 2018.
Jul 2018
I gave a talk at ICML 2018 on our paper Neural Program Synthesis from Diverse Demonstration Videos (slides).
Jul 2018
Our paper Multi-view to Novel View: Synthesizing Novel Views with Self-Learned Confidence has been accepted to ECCV 2018.
May 2018
Our paper Neural Program Synthesis from Diverse Demonstration Videos has been accepted to ICML 2018.

Publications

Composing Complex Skills by Learning Transition Policies with Proximity Reward Induction
in International Conference on Learning Representations (ICLR) 2019

Intelligent creatures acquire complex skills by exploiting previously learned skills and learning to transition between them. To empower machines with this ability, we propose transition policies, which effectively connect primitive skills to perform sequential tasks without handcrafted rewards. To train the transition policies effectively, we introduce proximity predictors, which induce rewards gauging proximity to suitable initial states for the next skill. The proposed method is evaluated on a diverse set of continuous control experiments, covering both bipedal locomotion and robotic arm manipulation tasks.
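
To make the mechanism concrete, here is a minimal PyTorch sketch of the idea (my own simplification with illustrative names, not the paper's released code): a proximity predictor scores how close a state is to a good initial state for the next skill, and the change in its output provides a dense reward for training the transition policy.

```python
import torch
import torch.nn as nn

class ProximityPredictor(nn.Module):
    """Scores how close a state is to a good initial state for the next skill."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, state):
        return self.net(state)  # scalar proximity in [0, 1]

def transition_reward(predictor, state, next_state):
    # Dense reward for the transition policy: an increase in predicted
    # proximity encourages moving toward states where the next skill succeeds.
    with torch.no_grad():
        return (predictor(next_state) - predictor(state)).item()
```

For example, with an 11-dimensional state, `transition_reward(ProximityPredictor(11), s, s_next)` yields a scalar reward; in the full method the predictor itself is also learned, from states collected while executing the skills, rather than fixed as here.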

Toward Multimodal Model-Agnostic Meta-Learning
in Meta-Learning Workshop at Neural Information Processing Systems (NeurIPS) 2018

Gradient-based meta-learners such as MAML are able to learn a meta-prior from similar tasks to adapt to novel tasks from the same distribution with few gradient updates. One important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions that they are able to learn from. In this paper, we augment MAML with the capability to identify tasks sampled from a multimodal task distribution and adapt quickly through gradient updates.
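
A simplified, first-order sketch of this idea (hypothetical PyTorch code, not the workshop implementation): a task encoder infers an embedding from the support set, the embedding modulates the network's hidden activations, and standard gradient steps then adapt the modulated network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedRegressor(nn.Module):
    """A small network whose hidden layer is modulated by a task embedding."""
    def __init__(self, in_dim=1, hidden=40, out_dim=1):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        # Encodes a support set (x, y pairs) into one task embedding.
        self.task_encoder = nn.Sequential(
            nn.Linear(in_dim + out_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )

    def embed_task(self, x, y):
        # x: (N, in_dim), y: (N, out_dim) -> (hidden,) task embedding.
        return self.task_encoder(torch.cat([x, y], dim=-1)).mean(dim=0)

    def forward(self, x, tau):
        h = F.relu(self.fc1(x)) * torch.sigmoid(tau)  # FiLM-like scaling
        return self.fc2(h)

def adapt(model, support_x, support_y, lr=0.01, steps=5):
    """First-order, simplified stand-in for a MAML-style inner loop."""
    tau = model.embed_task(support_x, support_y).detach()
    params = [model.fc1.weight, model.fc1.bias,
              model.fc2.weight, model.fc2.bias]
    opt = torch.optim.SGD(params, lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(model(support_x, tau), support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return tau
```

The sigmoid-gated scaling here is one simple choice of modulation; the key point is that the shared initialization is specialized toward the identified task mode before gradient-based adaptation begins.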

Multi-view to Novel View: Synthesizing Novel Views with Self-Learned Confidence
in European Conference on Computer Vision (ECCV) 2018

We address the task of multi-view novel view synthesis, where we are interested in synthesizing a target image with an arbitrary camera pose from given source images. We propose an end-to-end trainable framework consisting of a flow prediction module and a pixel generation module, which together directly leverage information present in the source views and hallucinate missing pixels from statistical priors. We introduce a self-learned confidence aggregation mechanism to merge the predictions produced by the two modules given multi-view source images.
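
The aggregation step itself is compact. Below is a toy sketch (assumed shapes and names, not the paper's code) of merging per-module predictions with self-learned confidence maps via normalized per-pixel weighting:

```python
import torch

def aggregate(predictions, confidences, eps=1e-8):
    """predictions: (K, C, H, W) images from K modules/source views;
    confidences: (K, 1, H, W) non-negative per-pixel confidence maps."""
    weights = confidences / (confidences.sum(dim=0, keepdim=True) + eps)
    return (weights * predictions).sum(dim=0)  # (C, H, W) blended image
```

Here `predictions` could stack the flow module's warped image and the pixel generation module's output, each paired with a learned confidence map; normalizing the confidences makes the blend a per-pixel convex combination.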

Neural Program Synthesis from Diverse Demonstration Videos
in International Conference on Machine Learning (ICML) 2018

Interpreting the decision-making logic in demonstration videos is key to collaborating with and mimicking humans. To empower machines with this ability, we propose a neural program synthesizer that explicitly synthesizes the underlying programs from behaviorally diverse and visually complicated demonstration videos. We introduce a summarizer module as part of our model to improve the network's ability to integrate multiple demonstrations varying in behavior. We also employ a multi-task objective to encourage the model to learn meaningful intermediate representations for end-to-end training.
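
As a structural sketch only (illustrative names and a deliberately simplified decoder, not the released model), the pipeline can be pictured as: encode each demonstration, summarize across demonstrations, then decode program tokens:

```python
import torch
import torch.nn as nn

class ProgramSynthesizer(nn.Module):
    """Encode each demo, summarize across demos, decode program tokens."""
    def __init__(self, feat_dim, hidden, vocab_size):
        super().__init__()
        self.demo_encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.summarizer = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTMCell(hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, demos, max_len=20):
        # demos: (num_demos, T, feat_dim) pre-extracted frame features.
        _, (h, _) = self.demo_encoder(demos)   # h: (1, num_demos, hidden)
        summary, _ = self.summarizer(h)        # integrate across demos
        hx = summary[:, -1]                    # (1, hidden) summary state
        cx = torch.zeros_like(hx)
        inp = torch.zeros_like(hx)             # stand-in start token
        logits = []
        for _ in range(max_len):
            hx, cx = self.decoder(inp, (hx, cx))
            logits.append(self.out(hx))
            inp = hx                           # greedy feed-back sketch
        return torch.stack(logits, dim=1)      # (1, max_len, vocab_size)
```

A call like `ProgramSynthesizer(feat_dim=128, hidden=256, vocab_size=50)(torch.randn(5, 30, 128))` returns token logits of shape `(1, 20, 50)`; a real decoder would condition on embedded previous tokens and attend over the demonstrations.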

Timeline

Summer 2017 - Spring 2018
Research Internship
Snap Inc.
Computer Vision, Deep Learning
Mentor: Dr. Ning Zhang
Spring 2017 - present
Ph.D. student
University of Southern California
Computer Vision, Deep Learning, Reinforcement Learning

Teaching

Spring 2019 CSCI-599 Deep Learning and its Application, Teaching Assistant, USC
Fall 2017 CSCI-599 Deep Learning and its Application, Teaching Assistant, USC

Professional Activities

Conference reviewer: CVPR 2019, ICCV 2019