Haomin Chen

PhD in Computer Science

Johns Hopkins University

Biography

I am an Applied Research Scientist at Ericsson, working on Ericsson Digital Human (EDH) for interpretable video-based language translation. I received my Ph.D. in Computer Science from Johns Hopkins University, with a background in interpretable computer vision systems for medical image analysis, including human-computer interaction, image classification, object detection, and segmentation. I have extensive experience with whole-slide images, CT scans, and X-rays, and I am the first author of a Nature Partner Journal paper. I have strong communication skills and the ability to work on multi-disciplinary teams.

Download my resumé.

Interests
  • Computer Vision
  • Medical Imaging
  • Transparent Systems
Education
  • PhD in Computer Science, 2018-2022

    Johns Hopkins University

  • M.A. in Statistics, 2016-2017

    Columbia University

  • BSc in Physics, 2012-2016

    Fudan University

Skills

Python
PyTorch
TensorFlow
Matlab
C++
R

Experience

Applied Research Scientist
Ericsson
Feb 2023 – Present, Los Angeles

Interpretable Video-based Language Translation:

  • International in-person and virtual real-time communication and entertainment are limited by language barriers. Even with human interpreters, the asynchrony between facial movements and interpreted speech creates a sense of distance and a non-immersive experience.
  • Interpretable video-based language translation translates not only the audio but also the visualized facial movements into the target language, removing the sense of distance between both sides of the communication. I created the largest dataset of videos of speakers' faces from an in-the-wild source, YouTube: 340 hours of video across 10 languages, 15 times larger than the largest public dataset.
Research Intern
Meta
Jun 2022 – Aug 2022, Bethesda, Maryland

3D scene style transfer from a 2D style image by differentiable rendering:

  • Internship performance exceeded mentor and peer expectations in review.
  • Learned style transfer, 3D meshes, and rendering from scratch in one week.
  • Utilized PyTorch3D and nvdiffrast as differentiable renderers to generate 2D views.
  • Optimized texture maps via style transfer between the rendered 2D views and the style image (see the sketch after this list).
  • Preserved per-object style consistency via semantic style transfer.
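
The core loop is texture optimization through a differentiable renderer: render 2D views from the current texture, compare their feature statistics against the style image, and backpropagate into the texture map. Below is a minimal sketch of that idea, not the project's implementation: `render_views` is a dummy stand-in for the actual PyTorch3D/nvdiffrast call, the VGG16 Gram-matrix loss is an assumed style loss, and both tensors are random placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def gram(feat):
    # Gram matrix of a feature map: (B, C, H, W) -> (B, C, C)
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# Frozen VGG16 feature extractor for style statistics.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

texture = torch.rand(1, 3, 256, 256, requires_grad=True)  # learnable UV texture map
style_img = torch.rand(1, 3, 224, 224)                     # placeholder 2D style image

def render_views(tex):
    # Stand-in for a differentiable renderer (PyTorch3D / nvdiffrast):
    # any differentiable mapping from the texture to rendered views works here.
    return F.interpolate(tex, size=(224, 224), mode="bilinear", align_corners=False)

opt = torch.optim.Adam([texture], lr=1e-2)
style_target = gram(vgg(style_img))
for step in range(200):
    opt.zero_grad()
    rendered = render_views(texture)
    loss = F.mse_loss(gram(vgg(rendered)), style_target)  # style (Gram) loss
    loss.backward()                                        # gradients flow back into the texture
    opt.step()
```
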
Applied Research Intern
PingAn
May 2019 – Dec 2019, Bethesda, Maryland

Symmetric learning for Fracture Detection in Pelvic Trauma X-ray:

  • Paper accepted to ECCV 2020 (poster presentation).
  • Mimicked radiologists' practice of comparing regions across the vertical midline for asymmetry via a Siamese network (see the sketch after this list).
  • Aligned Siamese features according to GNN-detected pelvic landmarks.
  • Learned anatomical asymmetry explicitly with a novel pixel-wise contrastive loss.
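
A minimal sketch of the core idea, under simplifying assumptions: a shared encoder processes the X-ray and its horizontal mirror, and the pixel-wise feature distance highlights asymmetric (candidate fracture) regions. The encoder, the `pixelwise_contrastive_loss`, and the mask below are illustrative placeholders; the GNN landmark-based alignment from the project is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseAsymmetry(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Shared encoder applied to both the image and its horizontal flip.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, xray):
        flipped = torch.flip(xray, dims=[3])                   # mirror across the vertical midline
        f_orig = self.encoder(xray)
        f_flip = torch.flip(self.encoder(flipped), dims=[3])   # map features back to original coordinates
        # Pixel-wise feature distance: large values indicate asymmetry (candidate fracture).
        return torch.norm(f_orig - f_flip, dim=1, keepdim=True)

def pixelwise_contrastive_loss(distance, asym_mask, margin=1.0):
    # Hypothetical contrastive loss: pull symmetric (normal) pixels together,
    # push known-asymmetric (fracture) pixels at least `margin` apart.
    pos = (1 - asym_mask) * distance.pow(2)
    neg = asym_mask * F.relu(margin - distance).pow(2)
    return (pos + neg).mean()

# Usage with dummy data:
model = SiameseAsymmetry()
xray = torch.rand(2, 1, 128, 128)
mask = (torch.rand(2, 1, 128, 128) > 0.95).float()  # sparse "asymmetric" pixels
loss = pixelwise_contrastive_loss(model(xray), mask)
loss.backward()
```
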
Applied Research Intern
NVIDIA
May 2018 – Dec 2018, Bethesda, Maryland

Deep Hierarchical Multi-label Classification of Chest X-ray:

  • Paper accepted to MIDL 2019 (oral presentation).
  • Invited extension accepted to the journal Medical Image Analysis.
  • Followed clinical taxonomy to construct a hierarchical multi-label classification scheme.
  • Developed a two-stage training procedure to handle the extreme label imbalance of the dataset.
  • Derived a numerically stable formulation that avoids floating-point underflow when computing the loss (see the sketch after this list).
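
In a hierarchical formulation, the probability of a fine-grained label is a product of conditional probabilities along its path in the taxonomy; multiplying many sigmoids can underflow in floating point, while summing log-sigmoids does not. The sketch below illustrates that log-space trick with a made-up three-node path; it shows the general technique, not the exact published loss.

```python
import torch
import torch.nn.functional as F

# Hypothetical 3-node path in the taxonomy (e.g. finding -> opacity -> consolidation):
# one conditional logit per node on the path, per sample.
logits = torch.randn(4, 3)
targets = torch.randint(0, 2, (4,)).float()   # leaf label per sample

# Naive: multiplying many sigmoids can underflow to 0 in float32.
naive_prob = torch.sigmoid(logits).prod(dim=1)

# Stable: log p(leaf) = sum_i log sigmoid(logit_i), computed entirely in log space.
log_prob = F.logsigmoid(logits).sum(dim=1)

# Binary cross-entropy on the leaf, written against the log-probability;
# the clamp keeps log1p(-p) finite when p is numerically 1.
p = log_prob.exp().clamp(max=1 - 1e-7)
loss = -(targets * log_prob + (1 - targets) * torch.log1p(-p)).mean()
```
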
Applied Research Intern
PingAn
May 2017 – Aug 2017, Shanghai

Lung nodule detection in CT images:

  • Achieved rank 6 out of 2,887 teams in the Skylake competition sponsored by Intel and Alibaba.
  • Applied a 3D U-Net (PyTorch) and Faster R-CNN (Caffe) to detect lung nodules in 1,000 CT scans.
  • Used a fusion method for false-positive reduction (see the sketch after this list).
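
One simple way to fuse two detectors' candidates for false-positive reduction is to keep consensus detections and down-weight unmatched ones. The `fuse_candidates` function below is a hypothetical illustration of such score-level fusion; the candidate format, matching threshold, and weighting are assumptions, not the exact method used in the competition entry.

```python
import numpy as np

def fuse_candidates(dets_a, dets_b, dist_thresh=5.0):
    """Average scores of candidates that agree across detectors; down-weight the rest.

    Each candidate is (x, y, z, radius, score) in millimetres.
    """
    fused, used_b = [], set()
    for ca in dets_a:
        matched = False
        for j, cb in enumerate(dets_b):
            if j in used_b:
                continue
            if np.linalg.norm(np.array(ca[:3]) - np.array(cb[:3])) < dist_thresh:
                fused.append((*ca[:4], (ca[4] + cb[4]) / 2))  # consensus: average the scores
                used_b.add(j)
                matched = True
                break
        if not matched:
            fused.append((*ca[:4], ca[4] * 0.5))              # unmatched: penalize the score
    for j, cb in enumerate(dets_b):
        if j not in used_b:
            fused.append((*cb[:4], cb[4] * 0.5))
    return fused

# Usage with dummy candidates from two detectors:
dets_unet = [(10.0, 20.0, 30.0, 4.0, 0.9), (50.0, 50.0, 50.0, 3.0, 0.4)]
dets_frcnn = [(11.0, 19.0, 31.0, 4.5, 0.8)]
print(fuse_candidates(dets_unet, dets_frcnn))
```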

Recent Publications

(2020). Gene Expression Profile Prediction in Uveal Melanoma Using Deep Learning: A Pilot Study for the Development of an Alternative Survival Prediction Tool. In Ophthalmology Retina.


(2019). Deep hierarchical multi-label classification of chest X-ray images. In MIDL.


Contact