References
- 김원익, 김창익. 2008. “A New Video Caption Region Detection Technique.” 방송공학회논문지, 13(4): 544-553
- 김종성, 이순탁, 백종환. 2005. “Efficient Face Object Detection for Content-based Video Summarization.” 한국통신학회논문지, 30(7C): 675-686
- 김현희, 김용호, 고수현. 2007. “An Experimental Study on the Effectiveness of Video Abstracts for Extracting the Meaning of Video Materials.” 정보관리학회지, 24(4): 53-72
- 신성윤, 표성배. 2006. “Video Browsing Using an Efficient Scene Change Detection Technique in Telematics.” 한국컴퓨터정보처리논문지, 11(4): 147-154
- 이준용, 문영식. 2003. “A Key Frame Extraction Algorithm Considering Shot Contribution and Distortion Rate.” 전자공학회논문지, 40-CI(3): 11-17
- Browne, P. and A. F. Smeaton. 2005. “Video Retrieval Using Dialogue, Keyframe Similarity and Video Objects.” ICIP 2005 - International Conference on Image Processing, Genova, Italy: 11-14
- Choi, Y. and E. M. Rasmussen. 2002. “User's Relevance Criteria in Image Retrieval in American History.” Information Processing and Management, 38(5): 695-726 https://doi.org/10.1016/S0306-4573(01)00059-0
- Chung, E. K. and J. W. Yoon. 2008. “A Categorical Comparison between User-supplied Tags and Web Search Queries for Images.” Proceedings of the ASIST Annual Meeting. Silver Spring, MD: American Society for Information Science and Technology
- Ding, W. et al. 1999. “Multimodal Surrogates for Video Browsing.” In Proceedings of the Fourth ACM Conference on Digital Libraries (August, Berkeley, CA, USA), ACM: 85-93
- Dufaux, F. 2000. “Key Frame Selection to Represent a Video.” In IEEE Proceedings of International Conference on Image Processing, vol.2: 275-278
- Greisdorf, H. and B. O'Connor. 2002. “Modelling What Users See When They Look at Images: a Cognitive Viewpoint.” Journal of Documentation, 58(1): 6-29 https://doi.org/10.1108/00220410210425386
- Hughes, A. et al. 2003. “Text or Pictures? An Eye-tracking Study of How People View Digital Video Surrogates.” Proceedings of CIVR 2003: 271-280
- Kristin, B. et al. 2006. Audio Surrogation for Digital Video: a Design Framework. UNC School of Information and Library Science (SILS) Technical Report TR 2006-21
- Laine-Hernandez, M. and S. Westman. 2008. “Multifaceted Image Similarity Criteria as Revealed by Sorting Tasks.” Proceedings of the ASIST Annual Meeting. Silver Spring, MD: American Society for Information Science and Technology
- Iyer, H. and C. D. Lewis. 2007. “Prioritization Strategies for Video Storyboard Keyframes.” Journal of the American Society for Information Science and Technology, 58(5): 629-644 https://doi.org/10.1002/asi.20554
- Marchionini, G. and G. Geisler. 2002. “The Open Video Digital Library." D-Lib Magazine. 8(12). [cited 2008.9.25] https://doi.org/10.1045/december2002-marchionini
- Markkula, M. and E. Sormunen. 1998. “Searching for Photos - Journalistic Practices in Pictorial IR.” In: Eakins, J.P., et al. (Eds), The Challenge of Image Retrieval. Newcastle upon Tyne, 5-6 Feb 1998. British Computer Society (BCS), Electronic Workshops in Computing
- Mu, X. and G. Marchionini. 2003. “Enriched Video Semantic Metadata: Authorization, Integration, and Presentation.” Proceedings of the ASIST Annual Meeting: 316-322. Silver Spring, MD: American Society for Information Science and Technology
- Nagasaka, A. and Y. Tanaka. 1992. “Automatic Video Indexing and Full-Video Search for Object Appearances.” Visual Database Systems, Vol.2: 113-127
- Panofsky, E. 1955. Meaning in the Visual Arts: Papers in and on Art History. Doubleday
- Shatford, S. 1986. “Analyzing the Subject of a Picture: a Theoretical Approach.” Cataloging & Classification Quarterly, 6(3): 39-62 https://doi.org/10.1300/J104v06n03_04
- Yang, M. 2005. An Exploration of Users' Video Relevance Criteria. Unpublished Doctoral Dissertation, University of North Carolina at Chapel Hill
- Yang, M. and G. Marchionini. 2004. “Exploring Users' Video Relevance Criteria - A Pilot Study.” Proceedings of the ASIST Annual Meeting: 229-238. Silver Spring, MD: American Society for Information Science and Technology