I am an applied scientist on the Smart Home AI team at Amazon. I received my Ph.D. in Computer Science from the National University of Singapore in 2022, supervised by Prof. Roger Zimmermann and Prof. Jiashi Feng. I am interested in computer vision (CV) tasks including scene graph generation, video understanding and 3D CV. I am also interested in and working on Large Language Models, Supervised Fine-Tuning and their applications in CV. I code mostly in Python and PyTorch.
Meng-Jiun Chiou (Applied Scientist)
Taipei City, Taiwan
mengjiun.chiou [at] u.nus.edu
Ph.D. in Computer Science • Aug. 2017 - Jun. 2022
I worked on visual scene understanding and reasoning (especially scene graph generation) with Prof. Roger Zimmermann at the Media Management Research Lab and Prof. Jiashi Feng at the Learning and Vision Lab.
B.Sc. in Electrical and Computer Engineering • Sep. 2012 - Jun. 2016
I graduated from the Department of Electrical and Computer Engineering, National Chiao Tung University (NCTU) with a GPA of 3.9 and multiple scholarships awarded. Note that NCTU is now National Yang Ming Chiao Tung University.
Exchange Student in Info-Comm Engineering • Oct. 2014 - Sep. 2015
I joined the Department of Information and Communication Engineering at the University of Tokyo as a one-year exchange student. Working closely with Prof. Toshihiko Yamasaki, I conducted research on efficient image classification.
Amazon • Jul. 2022 - Present • Taipei, Taiwan
I research and develop cutting-edge computer vision algorithms for intelligent devices on the Smart Home team under Amazon Devices and Services.
TikTok (ByteDance AI Lab) • Oct. 2020 - Jun. 2022 • Singapore
I conducted computer vision research on unbiased scene graph generation and on revealing the biases in single-positive multi-label (SPML) learning methods. I also worked with the Trust & Safety team to improve algorithms for detecting policy-violating videos.
ASUS Intelligent Cloud Services • Jun. 2020 - Oct. 2020 • Singapore
I worked on video-based human-object and human-human interaction detection for their smart retail initiative.
Microsoft • Jul. 2013 - Jun. 2014 • Taipei, Taiwan
As a Microsoft Student Partner, I developed multiple Windows apps, e.g., NHK Reader with 7K+ downloads, and gave Microsoft Tech Talks on software development to college students in Taiwan.
Meng-Jiun Chiou, “Learning Structured Representations of Visual Scenes”, Ph.D. Thesis, National University of Singapore, 2022.
[Thesis]
Meng-Jiun Chiou, Henghui Ding, Hanshu Yan, Changhu Wang, Roger Zimmermann and Jiashi Feng, “Recovering the Unbiased Scene Graphs from the Biased Ones”, in Proceedings of the 29th ACM International Conference on Multimedia (ACMMM'21), 2021.
[arXiv] [Proceeding] [Slides] [Poster] [Video] [Code]
Meng-Jiun Chiou, Chun-Yu Liao, Li-Wei Wang, Roger Zimmermann and Jiashi Feng, “ST-HOI: A Spatial-Temporal Baseline for Human-Object Interaction Detection in Videos”, in Proceedings of the 2021 International Conference on Multimedia Retrieval (ICMR'21) Workshop on Intelligent Cross-Data Analysis and Retrieval, 2021.
[Paper] [Slides] [Video] [Code]
Meng-Jiun Chiou, Roger Zimmermann and Jiashi Feng, “Visual Relationship Detection with Visual-Linguistic Knowledge from Multimodal Representations”, in IEEE Access, vol. 9, pp. 50441-50451, 2021.
[Paper] [Code]
Meng-Jiun Chiou, Zhenguang Liu, Yifang Yin, An-An Liu and Roger Zimmermann, “Zero-Shot Multi-View Indoor Localization via Graph Location Networks”, in Proceedings of the 28th ACM International Conference on Multimedia (ACMMM'20), 2020.
[Paper] [Slides] [Poster] [Video] [Code]
Yifang Yin, Meng-Jiun Chiou, Zhenguang Liu, Harsh Shrivastava, Rajiv Ratn Shah and Roger Zimmermann, “Multi-Level Fusion based Class-aware Attention Model for Weakly Labeled Audio Tagging”, in Proceedings of the 27th ACM International Conference on Multimedia (ACMMM'19), 2019.
[Paper]
Meng-Jiun Chiou, Toshihiko Yamasaki and Kiyoharu Aizawa, “A Fast Table-Based Approach of Bag-of-Features for Large-Scale Image Classification”, in Proceedings of the ITE Annual Convention, The Institute of Image Information and Television Engineers, 2015.
[Paper]