APPROACH TO IMAGE ANALYSIS FOR COMPUTER VISION SYSTEMS
https://doi.org/10.35596/1729-7648-2020-18-2-62-70
Abstract
This paper proposes an approach to semantic image analysis for use in computer vision systems. The aim of the work is to develop and study a method for the automatic construction of a semantic model that formalizes the spatial relationships between objects in an image. A distinctive feature of this model is the detection of salient objects: the construction algorithm analyzes significantly fewer relations between objects, which can greatly reduce both the image processing time and the resources spent on processing. Attention is paid to the selection of a neural network algorithm for object detection in an image as a preliminary stage of model construction. Experiments were conducted on test datasets from the Visual Genome database, developed by researchers at Stanford University for evaluating object detection algorithms, image captioning models, and other image analysis tasks. When assessing the performance of the model, the accuracy of spatial relation recognition was evaluated. Further experiments addressed the interpretation of the resulting model, namely image annotation, i.e. generating a textual description of the image content. The experimental results were compared with similar results obtained on the same dataset with neural-network-based algorithms, both by other researchers and by the author of this paper in earlier work. An improvement of up to 60 % in image captioning quality (according to the METEOR metric) over neural network methods is shown. In addition, the use of this model allows partial cleansing and normalization of data for training neural network architectures, which are widely used in image analysis. The prospects of using this technique in situational monitoring are considered. The disadvantages of this approach are certain simplifications in the construction of the model, which will be addressed in its further development.
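The effect of salient-object filtering on the number of relations the model has to analyze can be sketched as follows. This is a minimal illustration only: the function names, the saliency scores, and the simple top-k selection criterion are assumptions for the sketch, not the author's exact method. With n detected objects there are n·(n-1) ordered (subject, object) pairs; keeping only the k most salient objects shrinks this to k·(k-1).

```python
from itertools import permutations

def relation_pairs(objects, saliency, top_k=3):
    """Keep only the top_k most salient detections, then enumerate the
    ordered (subject, object) pairs whose spatial relations would be
    analyzed during semantic model construction (hypothetical sketch)."""
    salient = sorted(objects, key=lambda o: saliency[o], reverse=True)[:top_k]
    return list(permutations(salient, 2))

# Six detections, but only the three most salient are kept.
objs = ["person", "bicycle", "dog", "tree", "cloud", "curb"]
sal = {"person": 0.9, "bicycle": 0.8, "dog": 0.7,
       "tree": 0.4, "cloud": 0.2, "curb": 0.1}
pairs = relation_pairs(objs, sal)
# Full model: 6*5 = 30 ordered pairs; salient-only: 3*2 = 6.
```

The quadratic growth of the pair count is why pruning non-salient objects before relation analysis can noticeably reduce processing time, as the abstract claims.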
About the Author
N. A. Iskra (Belarus)
Iskra Natalia Alexandrovna, M.Sc., Senior Lecturer at the Department of Electronic Computing Machines
220013, Republic of Belarus, Minsk, P. Brovka str., 6; tel. +375-29-586-93-52
References
1. Liu L., Ouyang W., Wang X., Fieguth P., Chen J., Liu X., Pietikäinen M. Deep learning for generic object detection: A survey. International journal of computer vision. 2019. DOI: 10.1007/s11263-019-01247-4.
2. Müller J., Fregin A., Dietmayer K. Disparity sliding window: object proposals from disparity images. IEEE/RSJ International conference on intelligent robots and systems. New York: IEEE, 2018: 5777-5784. ISBN 978-1-5386-8094-0.
3. Girshick R., Donahue J., Darrell T., Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition. 2014: 580-587. DOI: 10.1109/CVPR.2014.81.
4. Liu W., Anguelov D., Erhan D., Szegedy C., Reed S., Fu C.Y., Berg A.C. SSD: Single shot multibox detector. European conference on computer vision. Springer, Cham, 2016: 21-37. DOI: 10.1007/978-3-319-46448-0_2.
5. Girshick R. Fast R-CNN. Proceedings of the IEEE international conference on computer vision. 2015: 1440-1448. DOI: 10.1109/ICCV.2015.169.
6. Ren S., He K., Girshick R., Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in neural information processing systems. 2015: 91-99. DOI: 10.5555/2969239.2969250.
7. He K., Gkioxari G., Dollár P., Girshick R. Mask R-CNN. Proceedings of the IEEE international conference on computer vision. 2017: 2961-2969. DOI: 10.1109/ICCV.2017.322.
8. Xu D., Zhu Y., Choy C.B., Fei-Fei L. Scene graph generation by iterative message passing. Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 5410-5419. DOI: 10.1109/CVPR.2017.330.
9. Krishna R., Zhu Y., Groth O., Johnson J., Hata K., Kravitz J., Chen S., Kalantidis Y., Li L.J., Shamma D.A., Bernstein M.S. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision. 2017;123(1):32-73. DOI: 10.1007/s11263-016-0981-7.
10. Miller G.A. WordNet: An electronic lexical database. First edition. Cambridge: MIT Press; 1998. ISBN 9780262061971.
11. Yang J., Lu J., Lee S., Batra D., Parikh D. Graph R-CNN for scene graph generation. Proceedings of the European conference on computer vision. 2018: 690-706. DOI: 10.1007/978-3-030-01246-5_41.
12. Borji A., Cheng M.M., Hou Q., Jiang H., Li J. Salient object detection: A survey. Computational visual media. 2019;5(2):117-150. DOI: 10.1007/s41095-019-0149-9.
13. Banerjee S., Lavie A. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization. Michigan: Association for computational linguistics. 2005: 65-72. Anthology ID: W05-0909.
14. Johnson J., Karpathy A., Fei-Fei L. Densecap: Fully convolutional localization networks for dense captioning. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 4565-4574. DOI: 10.1109/CVPR.2016.494.
15. Iskra N., Iskra V. Temporal convolutional and recurrent networks for image captioning. Communications in Computer and Information Science. 2019;1055. Springer, Cham. DOI: 10.1007/978-3-030-35430-5_21.
For citations:
Iskra N.A. APPROACH TO IMAGE ANALYSIS FOR COMPUTER VISION SYSTEMS. Doklady BGUIR. 2020;18(2):62-70. (In Russ.) https://doi.org/10.35596/1729-7648-2020-18-2-62-70