Pou-Chun (Frank) Kung
pckung@umich.edu
CV | Github
I am a PhD student at the University of Michigan, Ann Arbor (U-M). I am currently a research assistant at the Ford Center for Autonomous Vehicles (FCAV), advised by Prof. Katie Skinner.
Before that, I worked on the Self-Driving Car Team at the Industrial Technology Research Institute (ITRI), Taiwan, with Prof. Chieh-Chih (Bob) Wang.
I received my M.Sc. in Robotics at National Yang Ming Chiao Tung University (NYCU),
advised by Prof. Chieh-Chih (Bob) Wang
and Prof. Wen-Chieh (Steve) Lin.
Before I started my master's study, I had a wonderful time working with Prof. Nikolay Atanasov at UC San Diego.
Prior to that, I received my B.Sc. in Electrical Engineering from National Sun Yat-sen University (NSYSU).
During my undergraduate study, I was an undergraduate research assistant advised by Prof. Kao-Shing Hwang.
My research interests lie at the intersection of robotics and computer vision, with a focus on robot perception and state estimation.
In particular, I am fascinated with building robust robotic systems and with research combining machine learning and dynamic simultaneous localization and mapping (SLAM).
|
U-M PhD in Robotics Sept. 22 - Now
|
NYCU (NCTU) M.Sc. in Robotics Sept. 19 - Sept. 21
|
UCSD Research Intern Jul. 19 - Sept. 19
|
NSYSU B.Sc. in EE Sept. 15 - Jul. 19
|
News
- [09/2022] Joined the FCAV lab at U-M.
- [03/2022] Joined the Taiwan (R.O.C.) Marine Corps. Semper Fi!
- [01/2022] Joined the Self-Driving Car Team at the Industrial Technology Research Institute (ITRI).
- [01/2022] My paper on radar occupancy prediction was accepted to RA-L '22.
- [09/2021] Received the Phi Tau Phi Scholastic Honor.
- [09/2021] Received my M.Sc. degree from NYCU. Thanks to Bob Wang, Steve Lin, and all my colleagues in PAL.
|
|
Radar Occupancy Prediction with Lidar Supervision while Preserving Long-Range Sensing and Penetrating Capabilities
Pou-Chun Kung, Chieh-Chih Wang, Wen-Chieh Lin
RA-L 2022, Philadelphia Virtual
abstract |
bibtex |
arXiv |
video
Radar shows great potential for autonomous driving by accomplishing long-range sensing under diverse weather conditions. However, radar is also a particularly challenging sensing modality due to radar noise. Recent works have made enormous progress in classifying free and occupied spaces in radar images by leveraging lidar label supervision. However, several issues remain unsolved. First, the sensing distance of the results is limited by the sensing range of the lidar. Second, the performance of the results is degraded by the physical sensing discrepancies between the two sensors. For example, some objects visible to lidar are invisible to radar, and some objects occluded in lidar scans are visible in radar images because of radar's penetrating capability. These sensing differences cause false positives and penetrating-capability degeneration, respectively.
In this paper, we propose training-data preprocessing and polar sliding window inference to address these issues. The data preprocessing reduces the effect of radar-invisible measurements in lidar scans. The polar sliding window inference addresses the limited sensing range by applying a near-range-trained network to the long-range region. Instead of the common Cartesian representation, we propose using a polar representation to reduce the shape dissimilarity between long-range and near-range data. We find that extending a near-range-trained network to long-range inference in polar space yields 4.2 times better IoU than in Cartesian space. Moreover, the polar sliding window inference preserves the radar's penetrating capability by changing the viewpoint of the inference region, which makes some occluded measurements appear non-occluded to a pretrained network.
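The polar-representation idea can be illustrated with a small sketch. The snippet below is not the paper's implementation; it is a minimal NumPy illustration, with hypothetical function names and window sizes, of resampling a Cartesian bird's-eye-view image into a polar (range x angle) grid and sweeping a fixed-size range window over it, so that a network trained on near-range rows could be applied to far-range rows.

```python
import numpy as np

def cartesian_to_polar(img, num_ranges=256, num_angles=256):
    """Resample a Cartesian image (sensor at the image center) into a
    polar (range x angle) grid via nearest-neighbor lookup."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cx, cy)
    r = np.linspace(0.0, max_r, num_ranges)
    a = np.linspace(-np.pi, np.pi, num_angles, endpoint=False)
    rr, aa = np.meshgrid(r, a, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return img[ys, xs]  # shape: (num_ranges, num_angles)

def polar_sliding_windows(polar_img, window=64, stride=32):
    """Yield overlapping range windows of the polar image; in polar
    space, far-range windows keep a shape similar to near-range ones."""
    num_ranges = polar_img.shape[0]
    for start in range(0, num_ranges - window + 1, stride):
        yield start, polar_img[start:start + window]
```

Each yielded window could then be fed to the near-range-trained network, and the per-window predictions stitched back along the range axis.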
@article{kung2021radar,
Author = {Pou-Chun Kung and
Chieh-Chih Wang and
Wen-Chieh Lin},
Title = {Radar Occupancy Prediction with Lidar Supervision while Preserving Long-Range Sensing and Penetrating Capabilities},
Journal = {IEEE Robotics and Automation Letters},
Year = {2022}
}
|
|
Target 3D Shape Estimation from a Low-Cost 4D Radar Module
Chien-Cheng Fang, Pou-Chun Kung, Chieh-Chih Wang, Wen-Chieh Lin
Under submission
abstract |
video
Radar presents a promising alternative to LiDAR in autonomous driving applications, offering several advantages such as the ability to work under diverse weather conditions, direct Doppler velocity estimation, and a relatively low price.
However, a 4D radar outputs a 3D heatmap rather than a precise 3D shape of the surrounding environment. Moreover, the current state-of-the-art radar feature detection method, constant false alarm rate (CFAR), outputs only sparse radar features.
In this paper, we show the potential of turning radar into a 3D shape measurement sensor by demonstrating preliminary results on particular targets.
We apply a 3D U-Net model to the radar data, taking the 3D shape measurements of LiDAR as the training ground truth.
However, since the low-cost 4D radar provides timestamps only at second resolution and does not provide the time interval between two consecutive frames, synchronizing the radar and LiDAR data is particularly challenging. As a result, we collect only static data, including a few classes of target objects and human postures, in our experiments.
We demonstrate a preliminary experiment that estimates the 3D shape of particular targets from the radar module. The results show the great potential of turning radar into a 3D shape measurement sensor once data collection becomes more effective.
|
|
A Normal Distribution Transform-Based Radar Odometry Designed For Scanning and Automotive Radars
Pou-Chun Kung, Chieh-Chih Wang, Wen-Chieh Lin
ICRA 2021, Xi'an Virtual
abstract |
bibtex |
arXiv |
video |
presentation
Existing radar sensors can be classified into automotive and scanning radars. While most radar odometry (RO) methods are designed for only one specific type of radar, our RO method adapts to both scanning and automotive radars. Our RO is simple yet effective: the pipeline consists of thresholding, probabilistic submap building, and NDT-based radar scan matching. The proposed RO has been tested on two public radar datasets, the Oxford Radar RobotCar dataset and the nuScenes dataset, which provide scanning and automotive radar data, respectively. The results show that our approach surpasses the state-of-the-art RO using either automotive or scanning radar, reducing translational error by 51% and 30%, respectively, and rotational error by 17% and 29%, respectively. Moreover, we show that our RO achieves centimeter-level accuracy comparable to lidar odometry, and that automotive and scanning RO have similar accuracy.
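As a rough illustration of the NDT core of such a pipeline (this is a minimal sketch, not the paper's code, and all names and parameters are illustrative), the snippet below builds a probabilistic submap by fitting one 2D Gaussian per grid cell and scores a candidate pose against it; the thresholding step and the pose optimizer that would maximize this score are omitted.

```python
import numpy as np

def build_ndt_map(points, cell_size=2.0):
    """Group 2D points into grid cells and fit a Gaussian (mean plus
    inverse covariance) per cell -- the probabilistic submap."""
    cells = {}
    keys = np.floor(points / cell_size).astype(int)
    for k, p in zip(map(tuple, keys), points):
        cells.setdefault(k, []).append(p)
    ndt = {}
    for k, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 3:
            continue  # too few points to fit a stable Gaussian
        cov = np.cov(pts.T) + 1e-3 * np.eye(2)  # regularized covariance
        ndt[k] = (pts.mean(axis=0), np.linalg.inv(cov))
    return ndt

def ndt_score(ndt, points, pose, cell_size=2.0):
    """Score a scan under a candidate pose (x, y, yaw): sum of Gaussian
    likelihoods of each transformed point in its NDT cell."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    tp = points @ R.T + np.array([x, y])
    score = 0.0
    for p in tp:
        k = tuple(np.floor(p / cell_size).astype(int))
        if k in ndt:
            mu, icov = ndt[k]
            d = p - mu
            score += np.exp(-0.5 * d @ icov @ d)
    return score
```

A scan-matching step would search or optimize over poses to maximize `ndt_score`; a correct pose scores higher than a badly misaligned one.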
@inproceedings{kung2021normal,
Author = {Pou-Chun Kung and
Chieh-Chih Wang and
Wen-Chieh Lin},
Title = {A Normal Distribution Transform-Based Radar Odometry Designed For Scanning and Automotive Radars},
Booktitle = {ICRA},
Year = {2021}
}
|
|
Self-driving Car Sensor Setup and Data Collection
Industrial Technology Research Institute (ITRI) Self-driving Car Team
Supervised by Chieh-Chih Wang
Collaborated with Sheng-Cheng Lee
abstract
Set up sensors on the vehicle and collected data in crowded urban environments.
Sensors:
- Lucid Camera x4
- Velodyne VLS-128 Lidar
- Ouster OS1-128 Lidar
- Baraja Lidar x2
- Navtech CIR504-X Radar
- Automotive Radar (Continental ARS 408-21) x2
|
|
Lidar Odometry with Pose Graph Optimization
UCSD summer internship project
Advised by Prof. Nikolay Atanasov
abstract |
webpage |
video |
slide |
code
In this project, we used a Hokuyo laser scanner with scan matching (ICP) and factor graph optimization (GTSAM) to achieve lidar odometry. We ran our algorithm onboard on an Nvidia TX2 running Ubuntu 18.04. Instead of only matching consecutive lidar scans, we added the estimated transformations between the current and previous scans as constraints to optimize the robot poses.
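The project used GTSAM; as a library-free illustration of the same idea (all edge values below are made up for the example), the sketch optimizes a tiny 2D pose graph with odometry edges from scan matching plus one loop-closure edge, using SciPy's nonlinear least squares. The loop closure pulls the drifting odometry chain back toward a consistent trajectory.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x, edges):
    """x: flattened (x, y, theta) poses; edges: (i, j, dx, dy, dth)
    relative-pose constraints, e.g. from ICP scan matching."""
    poses = x.reshape(-1, 3)
    res = []
    res.extend(poses[0])  # prior: anchor the first pose at the origin
    for i, j, dx, dy, dth in edges:
        xi, yi, thi = poses[i]
        c, s = np.cos(thi), np.sin(thi)
        # predicted motion of pose j expressed in pose i's frame
        px = c * (poses[j, 0] - xi) + s * (poses[j, 1] - yi)
        py = -s * (poses[j, 0] - xi) + c * (poses[j, 1] - yi)
        pth = poses[j, 2] - thi
        res.extend([px - dx, py - dy, pth - dth])
    return np.array(res)

# Three consecutive odometry edges plus one loop-closure edge
edges = [
    (0, 1, 1.0, 0.0, 0.0),
    (1, 2, 1.0, 0.0, 0.0),
    (2, 3, 1.1, 0.0, 0.0),  # drifting odometry measurement
    (0, 3, 3.0, 0.0, 0.0),  # loop closure: pose 3 seen 3 m from pose 0
]
sol = least_squares(residuals, np.zeros(12), args=(edges,))
poses = sol.x.reshape(-1, 3)  # optimized trajectory
```

With equal weights, the optimizer splits the 0.1 m disagreement between the odometry chain and the loop closure across all edges, which is exactly the error-distribution effect pose graph optimization provides.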
|
|
ROS 2.0 ROScube-X product testing and user manual writing
Contract work at ADLINK
|
|
KnightCar: low-cost ROS autonomous platform
Side project
Collaborated with Wei-Zhi Lin at INIKI ELECTRONICS CO., LTD. to commercialize KnightCar
abstract |
product webpage |
video |
code
Extended the Duckiebot with a 2D lidar to achieve 2D SLAM and navigation. Implemented lidar particle filter SLAM, EKF SLAM, AMCL localization, and A* path planning on the robot. More than 200 KnightCars were sold as teaching material at 5 colleges within a year.
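To give a flavor of the navigation stack (this is an illustrative sketch, not the KnightCar code), a minimal A* planner on a 2D occupancy grid might look like:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied),
    4-connected moves, Manhattan-distance heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()  # tie-breaker so heap never compares nodes
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from = {}           # node -> parent, doubles as "closed" set
    g_best = {start: 0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue  # already expanded with a better cost
        came_from[cur] = parent
        if cur == goal:  # reconstruct path by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_best.get(nxt, float("inf"))):
                g_best[nxt] = g + 1
                heapq.heappush(
                    open_set, (g + 1 + h(nxt), next(tie), g + 1, nxt, cur))
    return None  # goal unreachable
```

On a real robot the grid would come from the SLAM map and the returned cell path would be smoothed before being tracked by the controller.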
|
|
ARDrone Indoor 3D Mapping and Navigation with LSD-SLAM
Undergraduate project
Advised by Prof. Kao-Shing Hwang
abstract |
webpage |
video |
code
Implemented LSD-SLAM with a multi-sensor fusion EKF (fusing IMU/sonar) on a quadrotor to achieve autonomous indoor navigation. With this project, I won first place out of 26 teams in the NSYSU College of Engineering Project Competition.
|
|
ARDrone Face Recognition, Classification, and Tracking
Undergraduate project
Advised by Prof. Kao-Shing Hwang
abstract
Implemented face recognition (Haar cascades), classification (InceptionV3), tracking (KLT tracker), and a PID controller on a quadrotor to achieve person tracking.
|
Spring 2021, TA, Human Centric Computing, NYCU
Fall 2020, TA, Self-Driving Cars, NYCU
Spring 2020, TA, Human Centric Computing, NYCU
Spring 2021, Recipient of the Phi Tau Phi Scholastic Honor - Awarded to only 130 students each year (<0.4%)
Fall 2020, NYCU Academic Achievement Award - Ranked 1st in the semester
Spring 2020, NYCU Academic Achievement Award - Ranked 1st in the semester
Fall 2019, NYCU Academic Achievement Award - Ranked 1st in the semester
Fall 2018, NSYSU College of Engineering Project Champion - First place out of 26 teams
Fall 2018, NSYSU Excellent Student Award - Top 3 students in the semester
Spring 2017, NSYSU Excellent Student Award - Top 3 students in the semester
A huge thanks to the template from this.