Research Article
Comparative Analysis of the Feature Extraction Performance of Augmented Reality Algorithms

Year 2019, Volume: 40, Issue: 4, 958 - 966, 31.12.2019
https://doi.org/10.17776/csj.638297

Abstract

The performance of the algorithms that extract keypoints and descriptors in augmented reality applications is becoming increasingly important. Criteria such as execution time and correct matching of points carry different weight depending on the type of application. In this paper, the performance of the algorithms used to identify an image through keypoint and descriptor extraction is studied. Within the scope of this research, the number of keypoints and descriptors each algorithm extracts, the algorithm execution time, and the quality of the extracted keypoints and descriptors are considered as the performance metrics. The same data sets were used to obtain the comparison results. In addition to comparisons over a group of well-known augmented reality applications, the best-performing algorithms for different types of application are also suggested. The compared augmented reality algorithms were implemented in the C++ language using the OpenCV library.
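
The abstract states that the compared algorithms were implemented in C++ with OpenCV and evaluated by keypoint count, extraction time, and matching quality. The following is a minimal illustrative sketch, not the authors' implementation, of how such measurements can be collected through OpenCV's common cv::Feature2D interface; the detector choice (ORB here), default parameters, and input file names are assumptions.

```cpp
// Minimal sketch (not the paper's exact code): count keypoints, time
// descriptor extraction, and count cross-checked matches for one detector
// on a reference image and a test image, using OpenCV's C++ API.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 3) {
        std::cerr << "usage: compare <reference_image> <test_image>\n";
        return 1;
    }
    cv::Mat ref  = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    cv::Mat test = cv::imread(argv[2], cv::IMREAD_GRAYSCALE);
    if (ref.empty() || test.empty()) {
        std::cerr << "could not read input images\n";
        return 1;
    }

    // ORB is used only as an example; other detectors exposed through the
    // same cv::Feature2D interface can be swapped in for comparison.
    cv::Ptr<cv::Feature2D> detector = cv::ORB::create();

    std::vector<cv::KeyPoint> kpRef, kpTest;
    cv::Mat descRef, descTest;

    // Time keypoint detection and descriptor extraction for both images.
    int64 t0 = cv::getTickCount();
    detector->detectAndCompute(ref,  cv::noArray(), kpRef,  descRef);
    detector->detectAndCompute(test, cv::noArray(), kpTest, descTest);
    double ms = (cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency();

    // Hamming distance suits binary descriptors such as ORB, BRIEF, and
    // BRISK; NORM_L2 would be used for float descriptors like SIFT/SURF.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descRef, descTest, matches);

    std::cout << "keypoints (ref/test): " << kpRef.size() << " / " << kpTest.size()
              << "\nextraction time: " << ms << " ms"
              << "\ncross-checked matches: " << matches.size() << "\n";
    return 0;
}
```

Replacing cv::ORB::create() with another factory such as cv::BRISK::create(), and adjusting the matcher norm to the descriptor type, repeats the same measurement loop for the other detectors and descriptors cited in the reference list.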

References

  • [1] Graham M., Zook M., and Boulton A., Augmented Reality in Urban Places: contested content and the duplicity of code, Trans. Inst. Br. Geogr., 38-3 (2013) 464-479.
  • [2] Steuer J., Defining Virtual Reality: Dimensions Determining Telepresence, J. Commun., 42-4 (1992) 73-93.
  • [3] If You’re Not Seeing Data You’re Not Seeing, Wired. https://www.wired.com/2009/08/augmented-reality/. Retrieved October 25 (2009).
  • [4] Rublee E., Rabaud V., Konolige K., and Bradski G., ORB: An Efficient Alternative to SIFT or SURF, Proceedings of the IEEE International Conference on Computer Vision, (2011) 2564-2571.
  • [5] Wagner D., Reitmayr G., Mulloni A., Drummond T., and Schmalstieg D., Pose Tracking from Natural Features on Mobile Phones, Proceedings of 7th IEEE and ACM International Symposium on Mixed and Augmented Reality, (2008) 125-134.
  • [6] Wagner D., Schmalstieg D., and Bischof H., Multiple Target Detection and Tracking with Guaranteed Framerates on Mobile Phones, International Symposium on Mixed and Augmented Reality, (2009) 57-64.
  • [7] Wagner D., Mulloni A., Langlotz T., and Schmalstieg D., Real-Time Panoramic Mapping and Tracking on Mobile Phones, Proceedings of IEEE Virtual Reality, (2010) 211-218.
  • [8] Klein G. and Murray D., Parallel Tracking and Mapping on a Camera Phone, Proceedings of 8th IEEE International Symposium on Mixed and Augmented Reality, (2009) 83-86.
  • [9] Ta D. N., Chen W. C., Gelfand N., and Pulli K., SURFTrac: Efficient Tracking and Continuous Object Recognition using Local Feature Descriptors, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (2009) 2937-2944.
  • [10] Takacs G., Chandrasekhar V., Tsai S., Chen D., Grzeszczuk R., and Girod B., Rotation-invariant fast features for large-scale recognition and real-time tracking, Signal Process. Image, 28-4 (2013) 334-34.
  • [11] Rosten E. and Drummond T., Machine learning for high-speed corner detection, European Conference on Computer Vision, (2006) 430-443.
  • [12] Rosten E., Porter R., and Drummond T., Faster and better: A machine learning approach to corner detection, IEEE T. Pattern Anal., 32-1 (2010) 105–119.
  • [13] Trzcinski T., Christoudias M., Lepetit V., and Fua P., Learning Image Descriptors with the Boosting-Trick, Advances in Neural Information Processing Systems, (2012) 1-9.
  • [14] Winder S. and Brown M., Learning Local Image Descriptors, IEEE Conference on Computer Vision and Pattern Recognition, (2007) 1-8.
  • [15] Brown M., Hua G., and Winder S., Discriminative Learning of Local Image Descriptors, IEEE T. Pattern Anal., 33-1 (2011) 43-57.
  • [16] Ke Y. and Sukthankar R., PCA-SIFT: A More Distinctive Representation for Local Image Descriptors, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (2004) 506-514.
  • [17] Simonyan K., Vedaldi A., and Zisserman A., Descriptor Learning Using Convex Optimisation, European Conference on Computer Vision, (2012) 243-256.
  • [18] Lowe D. G., Distinctive Image Features from Scale-Invariant Keypoints, Int. J. Comput. Vis., 60-2 (2004) 91-110.
  • [19] Bay H., Ess A., Tuytelaars T., and Gool L.V., Speeded-Up Robust Features (SURF), Comput. Vis. Image Und., 110-3 (2008) 346-359.
  • [20] Calonder M., Lepetit V., Strecha C., and Fua P., BRIEF: Binary Robust Independent Elementary Features, European Conference on Computer Vision, Heraklion, (2010) 778-792.
  • [21] Harris C. and Stephens M., A combined corner and edge detector, Fourth Alvey Vision Conference, (1988) 147-151.
  • [22] Huang W., Wu L. D., Song H. C., and Wei Y. M., RBRIEF: A Robust Descriptor Based on Random Binary Comparisons, IET Comput. Vis., 7-1 (2013) 29-35.
  • [23] Scale Invariant Feature Transform, Scholarpedia. http://www.scholarpedia.org/article/SIFT. Retrieved October 18, 2013.
  • [24] Scale Invariant Feature Transform (SIFT), VLFeat. http://www.vlfeat.org/api/sift.html. Retrieved October 18, 2013.
  • [25] Schaeffer C., A Comparison of Keypoint Descriptors in the Context of Pedestrian Detection: FREAK vs. SURF vs. BRISK (2013).
  • [26] Alahi A., Ortiz R., and Vandergheynst P., FREAK: Fast Retina Keypoint, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (2012) 510-517.
  • [27] Leutenegger S., Chli M., and Siegwart R. Y., BRISK: Binary Robust Invariant Scalable Keypoints, Proceedings of the IEEE International Conference on Computer Vision, (2011) 2548-2555.
There are 27 references in total.

Details

Primary Language Turkish
Section Engineering Sciences
Authors

Umut Tosun 0000-0002-9900-7987

Publication Date December 31, 2019
Submission Date October 25, 2019
Acceptance Date November 21, 2019
Published Issue Year 2019, Volume: 40, Issue: 4

Cite

APA Tosun, U. (2019). Comparative Analysis of the Feature Extraction Performance of Augmented Reality Algorithms. Cumhuriyet Science Journal, 40(4), 958-966. https://doi.org/10.17776/csj.638297