Abstract
Predicting potential vulnerabilities before code audits has been widely adopted in academia and industry to minimize maintenance cost. Most previous research is devoted to file- or component-level vulnerability prediction models, which are coarse-grained and may lead to cost-prohibitive, impractical security testing activities. In this paper, we focus on a cost-aware vulnerability prediction model and present VulDigger, a just-in-time, change-level code review tool that digs suspicious changes out of a sea of code changes. Our contributions build on a case study of Mozilla Firefox, for which we construct a large-scale dataset of vulnerability-contributing changes (VCCs) in a semi-automatic fashion. We then build a classification tool on a mixture of established and new metrics drawn from both software defect prediction and vulnerability prediction. The resulting precision is promising (92%), making the tool attractive for an effort-aware software team. We also examine the return on investment by training a regression model to locate the most suspicious changes with fewer lines to inspect. Our findings suggest that this model can pinpoint 31% of all VCCs with only 20% of the effort it would take to audit all changes (55% better than a random predictor). Our approach can serve as an early step in continuous security inspection, as it provides immediate feedback as soon as developers submit changes to their code base.
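As a rough illustration of the effort-aware evaluation described in the abstract, the sketch below ranks changes by predicted risk and measures how many VCCs would be caught within a 20% line-inspection budget. This is not the authors' code; the `Change` record, the ranking by risk density, and the sample data are hypothetical and only stand in for the paper's setup.

```python
# Minimal sketch (assumed, not from the paper) of effort-aware evaluation:
# rank code changes by predicted risk, "inspect" them until 20% of the total
# lines of code (the effort budget) is spent, then report the fraction of
# VCCs caught. A random ordering would find roughly 20% of VCCs at that
# budget; the paper reports 31% (about 55% better).

from dataclasses import dataclass
from typing import List

@dataclass
class Change:
    risk: float      # predicted probability that the change is a VCC
    loc: int         # lines of code touched (proxy for inspection effort)
    is_vcc: bool     # ground-truth label

def vcc_recall_at_effort(changes: List[Change], budget: float = 0.20) -> float:
    """Fraction of VCCs found when inspecting the riskiest changes first,
    stopping once `budget` of the total LOC has been inspected."""
    total_loc = sum(c.loc for c in changes)
    total_vcc = sum(c.is_vcc for c in changes)
    spent, found = 0, 0
    # Rank by risk density (risk per line), a common effort-aware ordering.
    for c in sorted(changes, key=lambda c: c.risk / max(c.loc, 1), reverse=True):
        if spent + c.loc > budget * total_loc:
            break
        spent += c.loc
        found += c.is_vcc
    return found / total_vcc if total_vcc else 0.0

# Hypothetical example data.
sample = [
    Change(risk=0.9, loc=40, is_vcc=True),
    Change(risk=0.7, loc=300, is_vcc=False),
    Change(risk=0.6, loc=25, is_vcc=True),
    Change(risk=0.2, loc=500, is_vcc=False),
    Change(risk=0.1, loc=60, is_vcc=True),
]
print(f"VCC recall at 20% effort: {vcc_recall_at_effort(sample):.2f}")
```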
| Original language | English |
|---|---|
| Article number | 8254428 |
| Pages (from-to) | 1-7 |
| Number of pages | 7 |
| Journal | Proceedings - IEEE Global Communications Conference, GLOBECOM |
| Volume | 2018-January |
| DOIs | |
| State | Published - 2017 |
| Event | 2017 IEEE Global Communications Conference, GLOBECOM 2017, Singapore, Singapore, 4 Dec 2017 → 8 Dec 2017 |