I am most interested in reasoning about computer programs for practical purposes. Among the many practical applications of program reasoning, I currently focus on program repair, in which correct program behavior is inferred from specifications (e.g., reference programs, test cases, and documentation), and the buggy program is then fixed automatically using techniques such as program synthesis and deep learning.
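As a rough illustration of this setting (not a description of any specific tool of ours), many repair techniques follow a generate-and-validate loop: test cases act as the specification, candidate patches are generated, and each candidate is validated against the tests. The buggy function and the hand-written patch space below are purely hypothetical:

```python
# Generate-and-validate program repair (toy sketch).
# The test suite is the specification; a patch is accepted
# only if it passes every test.

def buggy_abs(x):
    return x if x > 0 else x  # bug: negative inputs are not negated

tests = [(-3, 3), (0, 0), (5, 5)]  # (input, expected output) pairs

# A tiny hand-written patch space; real tools synthesize candidates
# via program synthesis or learned models.
candidates = [
    lambda x: x if x > 0 else x,   # the original buggy behavior
    lambda x: -x,                  # over-generalized guess
    lambda x: x if x > 0 else -x,  # correct fix
]

def repair(candidates, tests):
    for patch in candidates:
        if all(patch(inp) == out for inp, out in tests):
            return patch  # first candidate that satisfies the specification
    return None

fixed = repair(candidates, tests)
```

Note that a patch passing all tests may still be incorrect in general; this overfitting problem is exactly what the patch-validation work mentioned below addresses.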
The dual of program repair is program verification, which checks whether a program satisfies its available specifications. In our lab, we use a range of automated verification techniques, from lightweight ones such as fuzzing to more systematic ones such as symbolic execution.
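To give a flavor of the lightweight end of this spectrum, here is a minimal fuzzing sketch (a generic illustration, not our lab's tooling): random inputs are fed to an implementation and checked against a reference program serving as the specification. Both functions below are hypothetical stand-ins:

```python
import random

def reference_sort(xs):
    # Specification: a trusted reference program.
    return sorted(xs)

def program_under_test(xs):
    # Hypothetical implementation we want to check.
    return sorted(xs)

def fuzz(trials=1000, seed=0):
    """Randomly test the implementation against the reference.

    Returns a counterexample input if a mismatch is found,
    or None if no violation is observed (which, unlike symbolic
    execution, is not a proof of correctness).
    """
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 10))]
        if program_under_test(xs) != reference_sort(xs):
            return xs
    return None
```

Symbolic execution, by contrast, explores program paths with symbolic rather than concrete inputs, trading this kind of cheap random sampling for systematic coverage.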
Below are the research areas I have been working on:
Program verification [TOSEM'22, TOSEM'15,
ISSTA'13, AOSD'13, ASEJ'12, IST'10, ASE'06]
News
September 2023: Our paper LeakPair: Proactive
Repairing of Memory Leaks in Single Page Web Applications received the ACM Distinguished Paper
Award at ASE 2023.
September 2023: Our paper Poracle: Testing Patches Under
Preservation Conditions to Combat the Overfitting Problem of Program Repair has been accepted
and will appear in TOSEM. Congratulations to Elkhan and Mazba!
July 2023: Joint work with Dongsun
Kim on LeakPair: Proactive Repairing of Memory Leaks in Single Page Web
Applications has been accepted to ASE
2023. Congratulations to Arooba Shahoor
and Askar Khamit!
July 2023: Our demo paper BUGSC++: A Highly Usable Real
World Defect Benchmark for C/C++ has been accepted to ASE 2023 Demo. This is joint work with Gabin An, Shin Yoo, Minhyuk Kwon, and Kyunghwa Choi. Our benchmark is available here.
May 2023: Our paper Automated Program Repair from Fuzzing
Perspective has been accepted to ISSTA
2023. Congratulations to YoungJae Kim, Seungheon Han, and Khamit Askar!
April 2023: Our research proposal on
Large-Language-Model-Based Low-Code Platform Development has been accepted for funding
by MSIT.
Feb 2023: Our research proposal on Patch Validation
Technique for Automated Program Repair has been accepted for funding by MSIT.
If you are proficient in Korean, I highly recommend watching these videos [v1, v2]. In essence, I suggest you train your neural learning model (a.k.a. your brain) as much as you can in the classroom, using the natural loss function (a.k.a. trial and error).