Trong-Tung Nguyen received his B.Sc. degree (among the top 2% of students) in Computer Science from the Advanced Program
in Computer Science at the University of Science, VNU-HCM in 2022, under the supervision of
Prof. Minh-Triet Tran.
During his time at university, he worked at Cinnamon AI as an AI Researcher (2020-2021). He then joined the VinUni-Illinois Smart Health Center
(VinUniversity-UIUC) as a research assistant, working closely with
Dr. Huy-Hieu Pham and Prof. Minh Do on research in AI for healthcare.
At the same time, he also worked as a research assistant on a research project with Monash
University under the supervision of Prof. Wray Buntine.
Trong-Tung Nguyen is actively looking for a fully funded Ph.D. position in Computer Science.
His research interests include applications of Computer Vision, Multimodal Learning, Explainable AI, and
Deep Incremental Learning.
Classifying pill categories from real-world images is crucial
for various smart healthcare applications. Although existing image classification approaches
may achieve good performance on a fixed set of pill
categories, they fail to handle novel pill categories that are
frequently presented to the learning algorithm. A trivial solution is to retrain the model on the novel classes; however, this can lead
to a phenomenon known as catastrophic forgetting, in which the system
forgets what it learned from previous classes. In this paper, we address
this challenge by introducing class incremental learning (CIL) ability to traditional pill image classification systems. Specifically, we propose a novel incremental multi-stream intermediate fusion framework that enables an additional guidance information stream, chosen to
best match the domain of the problem, to be incorporated into various state-of-the-art
CIL methods. Within this framework, we use color-specific information of pill images as the guidance stream and devise an approach,
named "Color Guidance with Multi-stream Intermediate Fusion" (CG-IMIF), for the CIL pill image classification task. We conduct comprehensive experiments on a real-world incremental pill image classification
dataset, VAIPE-PCIL, and find that CG-IMIF consistently
outperforms several state-of-the-art methods by a large margin across different task settings.
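To make the intermediate fusion idea concrete, the snippet below is a minimal, hypothetical PyTorch sketch (not the paper's exact architecture): an image stream and a color-guidance stream (here represented as a color-histogram vector) are encoded separately and concatenated at an intermediate feature level before classification. All module names, dimensions, and the histogram representation are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ColorGuidedIntermediateFusion(nn.Module):
    """Minimal sketch of multi-stream intermediate fusion: an RGB image stream
    and a color-guidance stream are encoded separately, fused at an
    intermediate layer, and then classified. Illustrative only."""

    def __init__(self, num_classes: int, feat_dim: int = 128):
        super().__init__()
        # Image stream: a tiny convolutional encoder (stand-in for a CNN backbone).
        self.image_stream = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Guidance stream: encodes a 64-bin color histogram of the same pill image.
        self.color_stream = nn.Sequential(
            nn.Linear(64, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Intermediate fusion: concatenate the two feature streams, then classify.
        self.fusion_head = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_classes),
        )

    def forward(self, image: torch.Tensor, color_hist: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_stream(image), self.color_stream(color_hist)], dim=1)
        return self.fusion_head(fused)

# Dummy usage: a batch of 4 pill images with their 64-bin color histograms.
model = ColorGuidedIntermediateFusion(num_classes=10)
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 64))
print(logits.shape)  # torch.Size([4, 10])
```

In the full framework, such a fused classifier would be plugged into an existing CIL method (e.g., a rehearsal- or distillation-based approach) rather than trained in isolation.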
Our HCMUS team participated in the challenge,
with our main contribution being an improved classification stage
that strengthens the effectiveness of our previous method from 2020. Our best
run ranked second in the Sports Video Task with an accuracy of 68.8%.
By defining the interaction between humans and objects via body-part regions,
we decompose the second stage into pair matching and relationship prediction,
while the first stage detects humans and objects in the image.
Moreover, we introduce two novel features, body-part-aware features for pair
matching and object affordance features for relationship prediction, to overcome
the current limitations of other methods. Our proposed method achieves
state-of-the-art results among two-stage methods on a common HOI dataset,
PIC HOI-A 2019, with an mAP of 0.6617. Our method can be integrated
into various intelligent visual analysis tasks such as human activity analysis,
life-logging, and visual question answering.
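As a rough illustration of the second stage described above, the sketch below is hypothetical PyTorch code (not our released implementation): given per-pair features produced by the first-stage detector, a pair-matching head scores each human-object pair using body-part features, and a relationship head predicts the interaction class using an object-affordance feature. All feature dimensions, head designs, and names are assumptions for this sketch.

```python
import torch
import torch.nn as nn

class SecondStageHOI(nn.Module):
    """Illustrative second stage: pair matching + relationship prediction."""

    def __init__(self, feat_dim: int = 256, num_relations: int = 10):
        super().__init__()
        # Pair matching: scores whether a (human, object) pair actually interacts,
        # using the human, object, and body-part-aware features.
        self.pair_matcher = nn.Sequential(
            nn.Linear(3 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1),
        )
        # Relationship prediction: classifies the interaction for a matched pair,
        # additionally conditioned on an object-affordance feature.
        self.rel_head = nn.Sequential(
            nn.Linear(3 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_relations),
        )

    def forward(self, human_feat, object_feat, body_part_feat, affordance_feat):
        # Pair matching with body-part-aware features.
        match_logit = self.pair_matcher(
            torch.cat([human_feat, object_feat, body_part_feat], dim=1))
        # Relationship prediction with object-affordance features.
        rel_logits = self.rel_head(
            torch.cat([human_feat, object_feat, affordance_feat], dim=1))
        return match_logit, rel_logits

# Dummy usage: features for 5 candidate human-object pairs from the first stage.
stage2 = SecondStageHOI()
m, r = stage2(torch.randn(5, 256), torch.randn(5, 256),
              torch.randn(5, 256), torch.randn(5, 256))
print(m.shape, r.shape)  # torch.Size([5, 1]) torch.Size([5, 10])
```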
In this paper, we introduce our Intelligent Traffic Analysis Software Kit (iTASK) to tackle
three challenging problems: vehicle flow counting, vehicle re-identification, and abnormal event detection.
We propose an architecture that effectively makes use of a network of citizens to deal with city anomalies.
At the center of our architecture is a neural network that automatically
classifies incoming image data and uses this information to assist anomaly-handling efforts.
Projects
Besides doing research, I have also participated in other projects building interesting applications. Here are some of my highlighted projects.
VAIPE is a project funded by VinIF and carried out by VinUniversity, Hanoi University of Science & Technology (HUST),
the University of Massachusetts Boston (UMass Boston), and the University of South Florida (USF).
The project aims to build an intelligent healthcare system to assist users in collecting, managing,
and analyzing their health-related data. Our system enables users to collect heterogeneous data captured from
multiple sources using a convenient smartphone camera, provides visualizations of analytical and predicted results,
and includes functions to support users, such as reminders of medication schedules and warnings of early disease risks.
VAIPE is AI-assisted and involves original research and development of several key modules. For more information, please visit
our website.
I'm working with my teammate, Truong-Phat Nguyen, on a solution to the ICDAR 2019 Robust Reading Challenge on Scanned Receipts OCR and Information Extraction.
●[Jul. 2022]
I started working as a Research Assistant on a research project on the change point detection problem at Monash University, under the supervision of
Prof. Wray Buntine and Dr. Mahsa Salehi.
●[Apr. 2022]
I received my B.Sc. degree in Computer Science with an Excellent classification (among the top 2% of students, GPA: 3.9/4.0 or 9.24/10.0) from the University of Science, VNU-HCM.
●[Feb. 2022]
I worked as a Teaching Assistant for the course COMP2050 - Artificial Intelligence at VinUniversity, taught by Prof. Wray Buntine.