The recipe for success from the grand winner of the NVIDIA Jetson™ Developer Challenge
Mar 31, 2018
photo: Yu-Ming Chen, grand winner of the NVIDIA Jetson™ Developer Challenge, private archive
Have you ever wondered what’s the recipe for success in a hackathon? We’ve had the opportunity to ask Yu-Ming Chen, the grand winner of the NVIDIA Jetson™ Developer Challenge, a few questions. At 22, he’s a college senior majoring in computer science at National Tsing Hua University (R.O.C.). The project he was responsible for is Sim-to-Real Autonomous Robotic Control. Without any further ado, let’s get to the questions.
Hello Yu-Ming, to begin this interview I’d like to ask: why exactly did you decide to take part in the NVIDIA Challenge?
Actually, a similar challenge called “NVIDIA Embedded Intelligent Robotic Challenge” was held by NVIDIA at the GPU Technology Conference in Taiwan (GTC Taiwan) in 2016. We won the championship in that challenge.
As the winners of the NVIDIA Embedded Intelligent Robotic Challenge in 2016, we were informed in December 2017 that another, larger-scale robotic challenge called the “NVIDIA Jetson Robotic Challenge” (later renamed the NVIDIA Jetson Developer Challenge) was going to be held at GTC 2018 in San Jose. As we have been fascinated with deep learning technologies, computer vision, deep reinforcement learning, and intelligent robotics since participating in the 2016 challenge, we decided to take part in the new challenge in 2018.
So it seems that you’ve had some prior experience working with AI. How long have you been interested in its use? Are there any interesting systems you’ve built before that you would like to mention?
I joined ELSA Lab (at the Department of Computer Science, National Tsing Hua University), which specializes in artificial intelligence (AI), three years ago. Since then, I have been involved in research in the areas of machine learning (ML), deep learning (DL), reinforcement learning (RL), and computer vision (CV). In 2016, we participated in the NVIDIA Embedded Intelligent Robotic Challenge and built a robot for the challenge using the AI technologies mentioned above.
That is quite a resume. Now, coming back to the current matters. How did you go about developing the project for the Jetson Challenge? What’s your story?
It all started last May (in 2017). At that time we were trying to develop an autonomous robot and train it with deep neural networks to perform “obstacle avoidance” and “human following” tasks in the real world. However, we found that it was very inefficient to collect training data for the robot from the real world. In addition, it is pretty dangerous for a fragile robot to be trained in the real world - it may bump into other objects or even human beings.
Hence, after a few weeks of brainstorming, we came up with a new modular architecture, which allows us to train a robot in virtual worlds. We proposed this because training in virtual reality is fast and cost-efficient. With just several desktop computers, we were able to train the control policy of our robots within a single day. With the proposed modular architecture, we are able to transfer the RL agent (or you can simply call it the AI, or the control policy) to the real-world robot.
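To make the modular idea concrete, here is a minimal sketch in Python of how such a split can work: a perception module maps raw images (whether from the simulator or the real camera) to a shared intermediate representation, and the control policy is trained only on that representation, so it transfers across domains unchanged. The classes, thresholds, and toy 4-pixel "images" below are illustrative stand-ins, not the team's actual code.

```python
class Perception:
    """Maps a raw observation (simulated or real) to a domain-invariant
    representation - here a fake per-pixel 'segmentation' that keeps
    only coarse object classes."""
    def __call__(self, image):
        # stand-in rule: bright pixels count as obstacles
        return ["obstacle" if px > 128 else "free" for px in image]

class ControlPolicy:
    """Trained purely on segmentation maps, so it never sees whether
    the pixels came from the simulator or the real world."""
    def act(self, seg_map):
        frac_blocked = seg_map.count("obstacle") / len(seg_map)
        return "turn" if frac_blocked > 0.3 else "forward"

perception, policy = Perception(), ControlPolicy()
sim_image = [200, 30, 40, 50]    # toy simulated frame
real_image = [210, 25, 45, 60]   # toy real frame with different pixel stats

# The same untouched policy handles both domains.
print(policy.act(perception(sim_image)))   # forward
print(policy.act(perception(real_image)))  # forward
```

The key design choice is that only the perception module has to bridge the appearance gap between simulation and reality; the policy itself never needs retraining.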
Since the idea had not been proposed by anyone else, we were not very confident in our architecture at the beginning. However, after a series of rigorous experiments, we convinced ourselves that the new architecture really works. We summarized our proposed ideas and experimental results in a research paper, which has been accepted by IJCAI, a top AI conference.
And what about the process of working on the project itself? Was it all a piece of cake, or did you encounter any problems despite your impressive experience in the field?
The project went on for about a year, and we faced lots of unexpected problems. For example, after training our deep neural networks with a high-end GPU (NVIDIA GeForce GTX 1080 Ti), we needed to migrate our models onto a low-end embedded GPU, the NVIDIA Jetson TX2 board, which is not as powerful as the GTX 1080 Ti. We had to figure out ways to optimize our models and fit them onto the Jetson TX2. The problems we had to deal with included the small memory capacity of the TX2 (only 8 GB, much smaller than the 11 GB of the 1080 Ti). In addition, the execution speed of the TX2 is slower, which further limits how large the deep neural network models can be.
Hence, in order to run the neural network models on the TX2, we tried many kinds of optimization and experimented with a number of different neural network configurations as well. We cut down the number of layers and downsized the input images. After a few months of struggling, we were finally able to build a robot that could successfully complete the challenges.
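A back-of-the-envelope calculation shows why cutting layers and downsizing inputs helps so much on a memory-constrained board. The sketch below (all layer counts, channel widths, and input sizes are made-up illustrations, not the team's actual model) estimates parameters and activation volume for a simple chain of 3x3 convolutions:

```python
def conv_params(c_in, c_out, k=3):
    """Weights + biases of one k x k convolution layer."""
    return c_out * (c_in * k * k + 1)

def footprint(channels, side):
    """Total parameters and total activation elements for a chain of
    3x3 convs, halving the spatial side after each layer."""
    params, acts = 0, 0
    c_in = 3  # RGB input
    for c_out in channels:
        params += conv_params(c_in, c_out)
        acts += c_out * side * side  # activation map of this layer
        c_in, side = c_out, max(side // 2, 1)
    return params, acts

# Hypothetical "desktop" config vs. a trimmed "embedded" config:
full_p, full_a = footprint([64, 128, 256, 512, 512], 224)  # 5 layers, 224x224 input
slim_p, slim_a = footprint([32, 64, 128], 112)             # 3 layers, 112x112 input

print(f"full: {full_p:,} params, {full_a:,} activation values")
print(f"slim: {slim_p:,} params, {slim_a:,} activation values")
```

Halving the input side alone cuts every activation map by 4x, and removing the widest late layers removes most of the parameters, which is exactly the kind of saving needed to fit within the TX2's shared 8 GB.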
Some say that the best way to gain knowledge on a subject is to learn from your own mistakes. With all those difficulties you just mentioned, is there a big takeaway you’d like to share with us?
We acquired a good deal of background knowledge in deep learning, including deep semantic segmentation models and deep reinforcement learning agents. We also gained experience in developing an integrated intelligent robotic system. The most interesting part was that we had the opportunity to develop a virtual-to-real methodology that had not been proposed by anybody else. In order to carry out this new idea, all of our team members worked closely together and frequently exchanged ideas while designing this project.
That’s fascinating! And besides what you just mentioned, did you have any opportunities to further develop your skills?
Participating in this project enabled us to learn lots of professional skills in deep learning and intelligent robotics design. Our system consists of two primary modules: a perception module and a control policy module. To develop the perception module, we needed to survey a number of relevant state-of-the-art semantic segmentation models and adapt them to fit into our system. These models include DeepLab, PSPNet, ICNet, FCN, etc. For the control policy module, we had to develop deep reinforcement learning agents and train them to navigate in virtual environments without hitting any obstacles, as well as to follow a human being. We tried a number of deep reinforcement learning algorithms, including DQN, A3C, DDPG, etc., and designed several different reward functions. It was a tough process, as we were not sure which model and which reward function would best fit our system.
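Reward design for this kind of navigate-and-follow task usually combines a few shaped terms. The function below is a hypothetical illustration of the idea, not the team's actual reward: reward progress toward the target (e.g. the followed human), penalize getting too close to obstacles, and impose a large terminal penalty on collision. All names and weights are assumptions.

```python
def reward(dist_to_target, prev_dist_to_target,
           min_obstacle_dist, collided,
           w_progress=1.0, w_obstacle=0.5, safe_dist=0.5):
    """Shaped reward for obstacle-avoiding, target-following navigation.

    dist_to_target / prev_dist_to_target: distance to the target now
        and at the previous step (progress = the decrease between them).
    min_obstacle_dist: distance to the nearest obstacle this step.
    collided: whether the robot hit something (terminates the episode).
    """
    if collided:
        return -10.0  # large terminal penalty ends the episode
    r = w_progress * (prev_dist_to_target - dist_to_target)  # progress term
    if min_obstacle_dist < safe_dist:
        # linear penalty for entering the unsafe zone around obstacles
        r -= w_obstacle * (safe_dist - min_obstacle_dist)
    return r
```

Tuning the relative weights (`w_progress` vs. `w_obstacle`) is exactly the kind of trial-and-error mentioned above: too much obstacle penalty and the agent refuses to move; too little and it cuts corners into collisions.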
We actually investigated a lot of papers, referred to their implementations on GitHub, and attempted to develop our own. We spent lots of time on system implementation and integration, as we had two different modules to integrate into a whole robotic system. System integration was the most difficult part and took us a significant amount of time. Thanks to this opportunity, we gained lots of integration experience in this competition.
What did you find the most difficult, and what the easiest, during the competition?
Aside from the unexpected issues that occurred during system integration, I think the most difficult part of the challenge was improving the overall performance of our model. Sometimes we felt that optimizing deep neural networks and deep reinforcement learning models is very much like dealing with a black box, prone to failure if the hyper-parameters are not properly selected. Most of the time the unexpected performance is not explainable, requiring us to spend additional time on trial and error. We also spent lots of time studying research papers and GitHub projects, which helped a lot and saved us a significant amount of time in fine-tuning the system.
We think that the easiest part in the competition might be the document preparation phase.
So it’s probably safe to say the formula of hackathons is something you enjoy.
Yes, we do. Going through those unpredictable difficulties during the hackathon and finally completing the challenge brought me a feeling of excitement, achievement, and satisfaction.
And how often do you take part in hackathons? Why?
About once a year. Some challenges may require a long time to complete, while others may only need a few weeks, so it depends on the duration of the hackathon.
I assume you probably plan on taking part in more hackathons in the future, correct?
Definitely. Taking part in hackathons has significantly improved my problem-solving as well as programming skills. I enjoy the feeling of fulfillment and satisfaction that comes with completing the challenges. In addition, I love working with my teammates. It was teamwork that made this championship possible.
Therefore, I will participate in more hackathons in the future, with my teammates.
Do you think that participation in those events will help you in the future?
Yes, it will. Hackathons have helped me gain the ability to analyze problems and solve them one by one. Moreover, they have improved my collaboration and communication skills, which are extremely important and helpful for my future career.
What did you gain thanks to the hackathon?
The hackathon not only enhanced my various skills but also broadened my experience, knowledge, and professional expertise. I would like to thank the hackathon organizers for providing me with such an opportunity and letting me gain the ability to handle the problems I may face in my future career.
And for the last question, let’s finish this interview with a piece of advice. What would you say to those who wish to take part in similar hackathons?
Here are the top three pieces of advice we would give to other competitors.
- Surveying relevant papers and publications in the literature before starting the project may prevent duplicate work.
- Leveraging open-source code (e.g., GitHub repos) will save lots of time.
- System integration may lead to lots of unexpected issues. It is better to reserve extra time for it.
If you would like to participate in a similar hackathon, visit our website with a list of hackathons and challenges. Pick an event that suits your interests and get busy!