• Title/Summary/Keyword: Image indexing


Integrating Video Image into Digital Map (동영상과 수치지도의 결합에 관한 연구)

  • Kim, Yong-Il;Pyeon, Mu-Wook
    • Journal of Korean Society for Geospatial Information Science / v.4 no.2 s.8 / pp.161-172 / 1996
  • The objective of this research is to develop a process for integrating video imagery into a digital map. To reach this objective, the work includes the development of a georeferencing technique for video images, the development of a pilot system, and an assessment process. The georeferencing technique for video images consists of DGPS positioning, filtering of abnormal points, map conflation, indexing locations for key frames via time tags, and indexing locations for all frames. Using the proposed building process, we found that the accuracy of capturing test points in the imagery is 92.8% (±2 frames). The eventual meaning of this study is that it shows a new conception of the digital map is possible, one that overcomes the limitations of the existing two-dimensional digital map.
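
The frame-location indexing step can be pictured with a small sketch: DGPS fixes carry time tags, key frames are matched to fixes by their time tags, and locations for the remaining frames are filled in between them. The snippet below is a minimal illustration assuming a constant frame rate and simple linear interpolation; the function name and data layout are hypothetical, not the authors' implementation.

```python
import numpy as np

def index_frame_locations(gps_times, gps_lonlat, frame_rate, n_frames, t0=0.0):
    """Assign a (lon, lat) position to every video frame.

    gps_times  : 1-D array of DGPS fix times in seconds (abnormal points
                 already filtered out).
    gps_lonlat : (N, 2) array of DGPS positions matching gps_times.
    frame_rate : frames per second of the video.
    n_frames   : total number of frames to index.
    t0         : capture time of frame 0 on the same clock as gps_times.
    """
    frame_times = t0 + np.arange(n_frames) / frame_rate          # time tag per frame
    lon = np.interp(frame_times, gps_times, gps_lonlat[:, 0])    # linear interpolation
    lat = np.interp(frame_times, gps_times, gps_lonlat[:, 1])
    return np.column_stack([lon, lat])                           # (n_frames, 2) positions
```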


Content Description on a Mobile Image Sharing Service: Hashtags on Instagram

  • Dorsch, Isabelle
    • Journal of Information Science Theory and Practice / v.6 no.2 / pp.46-61 / 2018
  • The mobile social networking application Instagram is a well-known platform for sharing photos and videos. Since it is folksonomy-oriented, it provides the possibility for image indexing and knowledge representation through the assignment of hashtags to posted content. The purpose of this study is to analyze how Instagram users tag their pictures across different kinds of picture and hashtag categories. For this content analysis, a distinction is made between Food, Pets, Selfies, Friends, Activity, Art, Fashion, Quotes (captioned photos), Landscape, and Architecture image categories, as well as Content-relatedness (ofness, aboutness, and iconology), Emotiveness, Isness, Performativeness, Fakeness, "Insta"-Tags, and Sentences as hashtag categories. Altogether, 14,649 hashtags of 1,000 Instagram images were intellectually analyzed (100 pictures for each image category). The research questions are as follows. RQ1: Are there any differences in the relative frequencies of hashtags across the picture categories? On average, the number of hashtags per picture is 15. The lowest average values were found for the categories Selfie (10.9 tags per picture) and Friends (11.7 tags per picture); the highest for Pet (18.6 tags), Fashion (17.6 tags), and Landscape (16.8 tags). RQ2: Given a picture category, what is the distribution of hashtag categories; and given a hashtag category, what is the distribution of picture categories? 60.20% of all hashtags were classified into the category Content-relatedness. The categories Emotiveness (about 4.38%) and Sentences (0.99%) were far less frequent. RQ3: Is there any association between image categories and hashtag categories? A statistically significant association between hashtag categories and image categories on Instagram exists, as a chi-square test of independence shows. This study enables a first broad overview of the tagging behavior of Instagram users and is not limited to a specific hashtag or picture motif, unlike previous studies.
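
The association claimed in RQ3 rests on a chi-square test of independence over the image-category × hashtag-category contingency table. Below is a minimal sketch of that test with SciPy; the counts are invented toy numbers, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Toy contingency table: rows = image categories, columns = hashtag categories.
# The counts are invented for illustration; the study's real table is 10 x 7.
observed = np.array([
    [120, 10, 30,  5, 2, 40, 3],   # e.g. Food
    [150,  8, 25,  4, 1, 55, 2],   # e.g. Pets
    [ 90, 20, 40, 10, 5, 60, 4],   # e.g. Selfies
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value indicates an association between image and hashtag categories.
```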

An Efficient Object Extraction Scheme for Low Depth-of-Field Images (낮은 피사계 심도 영상에서 관심 물체의 효율적인 추출 방법)

  • Park Jung-Woo;Lee Jae-Ho;Kim Chang-Ick
    • Journal of Korea Multimedia Society / v.9 no.9 / pp.1139-1149 / 2006
  • This paper describes a novel and efficient algorithm that extracts focused objects from still images with low depth-of-field (DOF). The algorithm unfolds into four modules. In the first module, a HOS map, in which the spatial distribution of the high-frequency components is represented, is obtained from the input low-DOF image [1]. The second module finds the object-of-interest (OOI) candidate by using characteristics of the HOS map. Since the candidate region may contain holes, the third module detects and fills them. To obtain the final OOI, the last module removes background pixels from the OOI candidate. The experimental results show that the proposed method is highly useful in various applications, such as image indexing for content-based retrieval from huge image databases, image analysis for digital cameras, and video analysis for virtual reality, immersive video systems, photo-realistic video scene generation, and video indexing systems.
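
The pipeline (high-frequency map, candidate region, hole filling, background removal) can be sketched roughly as below. The HOS map here is approximated by the local energy of a Laplacian response, which is only an illustrative stand-in for the higher-order-statistics map used in the paper; the threshold and window size are arbitrary.

```python
import numpy as np
from scipy import ndimage

def extract_focused_object(gray, block=9, thresh_ratio=0.2):
    """Rough sketch of focused-object extraction from a low-DOF image.

    gray : 2-D grayscale array. The local energy of a Laplacian response
    stands in for the paper's HOS map; parameters are illustrative.
    """
    gray = gray.astype(np.float64)
    highpass = ndimage.laplace(gray)                             # high-frequency components
    energy = ndimage.uniform_filter(highpass ** 2, size=block)   # HOS-like focus map

    mask = energy > thresh_ratio * energy.max()                  # OOI candidate region
    mask = ndimage.binary_fill_holes(mask)                       # fill holes in the candidate
    labels, n = ndimage.label(mask)                              # keep the largest component,
    if n > 1:                                                    # dropping background blobs
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        mask = labels == (1 + int(np.argmax(sizes)))
    return mask
```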


Methods for Video Caption Extraction and Extracted Caption Image Enhancement (영화 비디오 자막 추출 및 추출된 자막 이미지 향상 방법)

  • Kim, So-Myung;Kwak, Sang-Shin;Choi, Yeong-Woo;Chung, Kyu-Sik
    • Journal of KIISE: Software and Applications / v.29 no.4 / pp.235-247 / 2002
  • For efficient indexing and retrieval of digital video data, research on video caption extraction and recognition is required. This paper proposes methods for extracting artificial captions from video data and enhancing their image quality for accurate Hangul and English character recognition. In the proposed methods, we first find the locations of the beginning and ending frames of the same caption contents and combine the multiple frames in each group by a logical operation to remove background noise. During this process, an evaluation is performed to detect integration results that mix different caption images. After the multiple video frames are integrated, four image enhancement techniques are applied: resolution enhancement, contrast enhancement, stroke-based binarization, and morphological smoothing operations. By applying these operations to the video frames, we can improve the image quality even of phonemes with complex strokes. Finding the beginning and ending locations of the frames with the same caption contents can also be used effectively for digital video indexing and browsing. We tested the proposed methods on video caption images containing both Hangul and English characters from cinema, and obtained improved character recognition results.
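
The multi-frame integration step exploits the fact that caption pixels stay fixed across the frames of one caption while the background changes. A minimal sketch of that idea is given below, assuming bright captions and a plain fixed threshold per frame (the paper's stroke-based binarization and the frame-range detection are omitted).

```python
import numpy as np

def integrate_caption_frames(frames, thresh=200):
    """Combine frames that show the same caption into one cleaner image.

    frames : list of 2-D uint8 grayscale arrays spanning one caption's
             beginning-to-ending frame range.
    Caption pixels are assumed bright and stationary; background pixels
    change between frames, so a logical AND of per-frame masks keeps the
    caption strokes and suppresses most background noise.
    """
    masks = [f >= thresh for f in frames]        # per-frame binary masks
    combined = np.logical_and.reduce(masks)      # logical integration across frames
    return combined.astype(np.uint8) * 255       # binary caption image
```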

Metadata Design and Machine Learning-Based Automatic Indexing for Efficient Data Management of Image Archives of Local Governments in South Korea (국내 지자체 사진 기록물의 효율적 관리를 위한 메타데이터 설계 및 기계학습 기반 자동 인덱싱 방법 연구)

  • Kim, InA;Kang, Young-Sun;Lee, Kyu-Chul
    • Journal of Korean Society of Archives and Records Management / v.20 no.2 / pp.67-83 / 2020
  • Many local governments in Korea provide online services through which people can easily access audio-visual archives of events occurring in their area. However, the current way these archives are managed has several problems regarding compatibility with other organizations and convenience of searching, owing to the lack of standard metadata and the low utilization of the information contained in the images. To solve these problems, we propose a metadata design and a machine learning-based automatic indexing technology for the efficient management of the image archives of local governments in Korea. We design metadata items specialized for the image archives of local governments to improve compatibility, and include elements that can represent the basic information and characteristics of the images, enabling efficient management. In addition, the text and objects in images, which carry information reflecting events and categories, are automatically indexed based on machine learning technology, enhancing users' search convenience. Lastly, we developed a program that automatically extracts text and objects from image archives using the proposed method and stores the extracted contents and basic information in the metadata items we designed.
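
One way to picture the combination of designed metadata items and automatic indexing is the record structure below. The field names are a plausible subset invented for illustration, not the paper's schema, and the OCR engine and object detector are passed in as caller-supplied callables rather than tied to any particular library.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ImageRecord:
    """Illustrative metadata items for a local-government image archive.

    The descriptive fields are hypothetical examples; the last two are
    filled automatically by the indexing step.
    """
    identifier: str
    title: str
    event_date: str
    department: str
    extracted_text: List[str] = field(default_factory=list)    # from OCR
    detected_objects: List[str] = field(default_factory=list)  # from object detection

def auto_index(record: ImageRecord, image_path: str,
               ocr: Callable[[str], List[str]],
               detector: Callable[[str], List[str]]) -> ImageRecord:
    """Fill the automatic-indexing fields of a record.

    `ocr` and `detector` stand in for the machine learning components
    described in the paper (text extraction and object detection).
    """
    record.extracted_text = ocr(image_path)
    record.detected_objects = detector(image_path)
    return record
```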

A Region-based Image Retrieval System using Salient Point Extraction and Image Segmentation (영상분할과 특징점 추출을 이용한 영역기반 영상검색 시스템)

  • 이희경;호요성
    • Journal of Broadcast Engineering / v.7 no.3 / pp.262-270 / 2002
  • Although most image indexing schemes are based on global image features, they have limited discrimination capability because they cannot capture local variations of the image. In this paper, we propose a new region-based image retrieval system that can extract important regions of the image using salient point extraction and image segmentation techniques. Our experimental results show that color and texture information computed within such regions provides significantly improved retrieval performance compared to global feature extraction methods.
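
As a rough illustration of why region features can discriminate better than global ones, the sketch below compares a query color histogram against the histogram of a single segmented region rather than of the whole image. The segmentation and salient-point steps themselves are omitted; histogram intersection and the 8-bin quantization are assumptions made for the example.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """L1-normalized, quantized RGB histogram of an (N, 3) pixel array."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=[(0, 256)] * 3)
    return hist.ravel() / max(hist.sum(), 1)

def region_similarity(image, region_mask, query_hist):
    """Histogram intersection between a query and one segmented region.

    image       : (H, W, 3) uint8 array.
    region_mask : boolean (H, W) mask of an important region found by
                  segmentation and salient-point extraction.
    query_hist  : histogram of the query region, from color_histogram().
    """
    region_hist = color_histogram(image[region_mask])
    return float(np.minimum(region_hist, query_hist).sum())  # 1.0 = identical
```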

Memory-Efficient NBNN Image Classification

  • Lee, YoonSeok;Yoon, Sung-Eui
    • Journal of Computing Science and Engineering / v.11 no.1 / pp.1-8 / 2017
  • Naive Bayes nearest neighbor (NBNN) is a simple image classifier based on identifying nearest neighbors. NBNN uses original image descriptors (e.g., SIFT) without vector quantization to preserve the discriminative power of the descriptors and has a powerful generalization characteristic. However, it has a distinct disadvantage: its memory requirement can be prohibitively high while processing a large amount of data. To deal with this problem, we apply a spherical hashing binary code embedding technique to compactly encode the data without significantly losing classification accuracy. We also propose using an inverted index to identify nearest neighbors among the binarized image descriptors. To demonstrate the benefits of our method, we apply it to two existing NBNN techniques with an image dataset. Using a 64-bit code length, we are able to reduce memory consumption by a factor of 16 with higher runtime performance and no significant loss of classification accuracy. This result is achieved by our compact encoding scheme for image descriptors, which loses little of the information in the original descriptors.
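
The NBNN decision rule sums, for each class, the distance from every query descriptor to its nearest neighbor within that class, and picks the class with the smallest sum. The sketch below applies that rule to packed binary codes with Hamming distance; brute-force search stands in for the paper's inverted index, and the data layout is an assumption.

```python
import numpy as np

def hamming(code, codes):
    """Hamming distances between one packed uint8 code (B,) and many (N, B)."""
    return np.unpackbits(np.bitwise_xor(code, codes), axis=1).sum(axis=1)

def nbnn_classify(query_codes, class_codes):
    """Naive Bayes nearest neighbor over binarized image descriptors.

    query_codes : (M, B) packed binary codes of the query image's descriptors.
    class_codes : dict mapping class label -> (N_c, B) packed codes of all
                  training descriptors of that class.
    Brute-force nearest-neighbor search is used here for clarity; the paper
    accelerates this step with an inverted index over the binary codes.
    """
    totals = {}
    for label, codes in class_codes.items():
        total = 0
        for q in query_codes:
            total += hamming(q, codes).min()   # distance to nearest neighbor in the class
        totals[label] = total
    return min(totals, key=totals.get)         # class with the smallest summed distance
```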

Content Based Image Retrieval Using Combined Features of Shape, Color and Relevance Feedback

  • Mussarat, Yasmin;Muhammad, Sharif;Sajjad, Mohsin;Isma, Irum
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.12 / pp.3149-3165 / 2013
  • Content-based image retrieval is increasingly gaining popularity among image repository systems, as images are a major medium of digital communication and information sharing. Identification of image content is done through feature extraction, which is the key operation of a successful content-based image retrieval system. In this paper, a content-based image retrieval system has been developed by combining multiple features: shape, color, and relevance feedback. Shape serves as the primary feature to identify images, whereas color and relevance feedback are used as supporting features to make the system more efficient and accurate. Shape features are estimated through second-derivative, least-squares polynomial, and shape coding methods. Color is estimated through the max-min mean of neighborhood intensities. A new technique has been introduced for relevance feedback that does not burden the user.
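
The color feature named above, the max-min mean of neighborhood intensities, can be read as the average of each pixel's neighborhood maximum and minimum. The sketch below implements that reading with sliding-window filters; the 3×3 window and the exact combination are assumptions, since the abstract does not spell them out.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def max_min_mean(channel, size=3):
    """Per-pixel mean of the neighborhood maximum and minimum intensities.

    channel : 2-D array holding one color channel.
    size    : neighborhood size (the 3x3 window is an assumption).
    """
    channel = channel.astype(np.float64)
    return (maximum_filter(channel, size=size) +
            minimum_filter(channel, size=size)) / 2.0
```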

Extraction of Optimal Interest Points for Shape-based Image Classification (모양 기반 이미지 분류를 위한 최적의 우세점 추출)

  • 조성택;엄기현
    • Journal of KIISE: Databases / v.30 no.4 / pp.362-371 / 2003
  • In this paper, we propose an optimal interest point extraction method to support shape-based image classification and indexing for image databases, by applying a dynamic threshold that reflects the characteristics of the shape contour. The threshold is determined dynamically by comparing the contour length of the original shape with that of the approximated polygon while the algorithm is running. Because our algorithm considers the characteristics of the shape contour, it can minimize the number of interest points. For a contour of n points, the proposed algorithm has O(n log n) computational cost on average to extract m optimal interest points. Experiments were performed on 70 synthetic shapes of 7 different contour types and 1,100 fish shapes. The method shows an average optimization ratio of up to 0.92, a 14% improvement over the fixed-threshold method. The shape features extracted by the proposed method can be used, after normalization, for shape-based image classification, indexing, and similarity search.
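
The length-ratio idea can be sketched as an iterative polygon approximation whose tolerance is tightened until the approximated polygon's length is close enough to the original contour length. The code below uses Ramer-Douglas-Peucker simplification and a simple multiplicative threshold update; both are illustrative stand-ins, and the paper's O(n log n) data structures are omitted.

```python
import numpy as np

def _rdp(points, eps):
    """Ramer-Douglas-Peucker simplification of an open polyline."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    chord_len = float(np.hypot(*chord)) or 1e-12
    # perpendicular distance of every point to the start-end chord
    dists = np.abs(chord[0] * (points[:, 1] - start[1]) -
                   chord[1] * (points[:, 0] - start[0])) / chord_len
    idx = int(np.argmax(dists))
    if dists[idx] <= eps:
        return np.vstack([start, end])
    left = _rdp(points[:idx + 1], eps)
    right = _rdp(points[idx:], eps)
    return np.vstack([left[:-1], right])

def contour_length(points, closed=True):
    pts = np.vstack([points, points[:1]]) if closed else points
    return float(np.sum(np.hypot(*np.diff(pts, axis=0).T)))

def adaptive_interest_points(contour, ratio_target=0.92, eps=4.0, step=1.3):
    """Tighten the approximation threshold until the polygon length is close
    enough to the original contour length (the length-ratio idea of the
    paper; this particular update loop is only an illustration)."""
    contour = np.asarray(contour, dtype=np.float64)
    orig_len = contour_length(contour)
    approx = _rdp(contour, eps)
    while contour_length(approx) / orig_len < ratio_target and eps > 1e-3:
        eps /= step                          # dynamic threshold update
        approx = _rdp(contour, eps)
    return approx                            # polygon vertices = interest points
```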

Feature-Based Image Retrieval using SOM-Based R*-Tree

  • Shin, Min-Hwa;Kwon, Chang-Hee;Bae, Sang-Hyun
    • Proceedings of the KAIS Fall Conference / 2003.11a / pp.223-230 / 2003
  • Feature-based similarity retrieval has become an important research issue in multimedia database systems. The features of multimedia data are useful for discriminating between multimedia objects (e.g., documents, images, video, music scores, etc.). For example, images are represented by their color histograms, texture vectors, and shape descriptors, and are usually high-dimensional data. The performance of conventional multidimensional data structures (e.g., the R-tree family, K-D-B tree, grid file, and TV-tree) tends to deteriorate as the number of dimensions of the feature vectors increases. The R*-tree is the most successful variant of the R-tree. In this paper, we propose a SOM-based R*-tree as a new indexing method for high-dimensional feature vectors. The SOM-based R*-tree combines a SOM and an R*-tree to achieve search performance more scalable to high dimensionalities. Self-Organizing Maps (SOMs) provide a mapping from high-dimensional feature vectors onto a two-dimensional space. The mapping preserves the topology of the feature vectors. The resulting map is called a topological feature map, and it preserves the mutual relationships (similarities) of the input feature vectors, clustering mutually similar feature vectors in neighboring nodes. Each node of the topological feature map holds a codebook vector. A best-matching-image list (BMIL) holds the similar images that are closest to each codebook vector. In a topological feature map, there are empty nodes in which no image is classified. When we build the R*-tree, we use the codebook vectors of the topological feature map and eliminate the empty nodes, which cause unnecessary disk accesses and degrade retrieval performance. We experimentally compare the retrieval time cost of a SOM-based R*-tree with that of a SOM and an R*-tree, using color feature vectors extracted from 40,000 images. The results show that the SOM-based R*-tree outperforms both the SOM and the R*-tree, owing to the reduction in the number of nodes required to build the R*-tree and in the retrieval time cost.
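
The SOM side of the construction, building the best-matching-image list (BMIL) and discarding empty nodes before R*-tree insertion, can be sketched as follows. SOM training and the R*-tree itself are omitted; the code assumes an already trained codebook and uses plain nearest-codebook assignment, which is an illustrative simplification.

```python
import numpy as np

def build_bmil(codebook, features):
    """Group images by their best-matching SOM node (the BMIL of the paper).

    codebook : (K, D) codebook vectors of a trained topological feature map
               (SOM training itself is omitted here).
    features : (N, D) image feature vectors, e.g. color histograms.
    Returns the best-matching-image list per node and the non-empty codebook
    vectors, which are the ones that would be inserted into the R*-tree
    (empty nodes are dropped to avoid unnecessary disk accesses).
    """
    # nearest codebook vector for every image feature (Euclidean distance)
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    best = np.argmin(dists, axis=1)                       # winning node per image

    bmil = {k: np.flatnonzero(best == k) for k in range(len(codebook))}
    non_empty = [k for k, imgs in bmil.items() if len(imgs) > 0]
    return bmil, codebook[non_empty]
```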
