• Title/Summary/Keyword: Text color

198 search results

Detection of Text Candidate Regions using Region Information-based Genetic Algorithm (영역정보기반의 유전자알고리즘을 이용한 텍스트 후보영역 검출)

  • Oh, Jun-Taek;Kim, Wook-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.6
    • /
    • pp.70-77
    • /
    • 2008
  • This paper proposes a new text candidate region detection method that uses a genetic algorithm based on information from segmented regions. Image segmentation consists of classifying the pixels of each color channel and then reclassifying the results region by region to reduce inhomogeneous clusters. The EWFCM (Entropy-based Weighted Fuzzy C-Means) algorithm used to classify the pixels of each color channel is an FCM algorithm improved with spatial information, so it removes meaningless regions such as noise. A region-based reclassification, driven by the similarity between each segmented region of the most inhomogeneous cluster and the other clusters, reduces inhomogeneous clusters more efficiently than pixel- or cluster-based reclassification. Text candidate regions are then detected by a genetic algorithm that uses the energy and variance of the directional edge components together with the number and size of the segmented regions. The region information-based method singles out semantically meaningful text candidate regions more accurately than pixel-based detection, and the results are more useful for subsequent text region recognition. Experiments on segmentation and detection confirmed that the proposed method is superior to existing methods.
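
The abstract does not spell out the EWFCM update rules, but the pixel-classification step can be illustrated with a plain fuzzy C-means sketch in Python; this minimal version omits the entropy weighting and spatial information the paper adds, and all parameters are illustrative assumptions:

```python
import numpy as np

def fuzzy_c_means(values, n_clusters=3, m=2.0, n_iter=50, eps=1e-9):
    """Plain FCM on a 1-D array of channel intensities.
    The paper's EWFCM adds entropy weighting and spatial information,
    both omitted here for brevity."""
    values = values.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(0)
    # random initial membership matrix (pixels x clusters), rows sum to 1
    u = rng.random((values.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ values) / (um.sum(axis=0)[:, None] + eps)
        dist = np.abs(values - centers.T) + eps          # pixel-to-center distances
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    return u.argmax(axis=1), centers.ravel()

# usage: cluster one color channel of an image
channel = np.random.randint(0, 256, size=(64, 64))
labels, centers = fuzzy_c_means(channel.ravel(), n_clusters=3)
```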

The Study on the 「Sun Gi Il Il Bun Wi Sa Si (順氣一日分爲四時)」 of the 「Young Chu (靈樞)」 ("영추.순기일일분위사시(靈樞.順氣一日分爲四時)"에 대한 연구(硏究))

  • Kim, Young-Ha;Ruk, Sang-Won
    • Journal of Korean Medical classics
    • /
    • v.18 no.1 s.28
    • /
    • pp.33-48
    • /
    • 2005
  • The purpose of this study is to translate the 「Sun Gi Il Il Bun Wi Sa Si」 chapter of the 「Young Chu (靈樞)」 into modern language, because the original, written in classical Chinese, is hard to understand. We revised the original text against 7 other classic books and grouped the annotations of 6 annotated editions by similar content. We divided the volume into 3 chapters and added Hangul suffixes to the original text. The Five Types of Changes (五變) in the second chapter refer to the mutual relationships between the Five Viscera and Color, Time, Day, Note, and Taste, so the word order of the second chapter should be unified to follow Color, Time, Day, Note, and Taste. The Five Types of Changes in the third chapter should be revised to the Five Types of Diseases (五病) on the basis of the 「You Kyoung (類經)」.


Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.7
    • /
    • pp.646-654
    • /
    • 2007
  • In recent years, text inserted into TV content has been used increasingly to give viewers better visual understanding. In this paper, video text is defined as a superimposed text region located at the bottom of the video. Video text extraction is the first step in video information retrieval and video indexing. Most previous video text detection and extraction methods are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction suffers from the low resolution of video and complex backgrounds. To address these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates based on corner density, text region determination using labeling, and post-processing. The algorithm is language independent and can be applied to text of various colors. Text region updating between frames is also exploited to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
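
The corner-map and density steps can be sketched with OpenCV's Harris detector; the block size, thresholds, and density window below are illustrative assumptions, not the paper's parameters:

```python
import cv2
import numpy as np

def text_candidate_mask(frame_bgr, corner_thresh=0.01, density_thresh=0.15, win=16):
    """Rough sketch: Harris corner map -> corner density -> candidate mask."""
    gray = np.float32(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY))
    # Step 1: corner map with the Harris detector
    harris = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
    corners = (harris > corner_thresh * harris.max()).astype(np.float32)
    # Step 2: local corner density via an averaging box filter over a win x win window
    density = cv2.boxFilter(corners, ddepth=-1, ksize=(win, win))
    candidates = (density > density_thresh).astype(np.uint8) * 255
    # Step 3: labeling of connected components as candidate text regions
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(candidates)
    # stats[i] = [x, y, width, height, area]; post-processing (step 4) would
    # filter these boxes by size and aspect ratio, which is omitted here.
    return candidates, stats[1:]
```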

A Study on Feature Information Parsing System of Video Image for Multimedia Service (멀티미디어 서비스를 위한 동영상 이미지의 특징정보 분석 시스템에 관한 연구)

  • 이창수;지정규
    • Journal of Information Technology Applications and Management
    • /
    • v.9 no.3
    • /
    • pp.1-12
    • /
    • 2002
  • Owing to rapid development in computer and communication technologies, video is now more widely used than ever in many areas. Current information analysis systems were originally built to process text-based data, so they have problems correctly representing the ambiguity of a video, processing large numbers of annotations, and maintaining the objectivity that such tasks require. We propose an algorithm capable of analyzing large amounts of video efficiently. A video frame is segmented into regions using region growing and region merging. To extract color features, the color is converted from RGB to HSI and matched against representative colors. To extract shape features, we use improved moment invariants (IMI) to address the problems of histogram intersection found in existing IMI and Jain's method. The extracted feature information of the streaming media is then used to find similar frames.
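
As an illustration of the color-sampling step, here is a minimal RGB-to-HSI conversion in Python; this is one common formulation, and the paper does not specify which HSI variant it uses:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an (H, W, 3) RGB image in [0, 255] to HSI.
    H is in degrees [0, 360); S and I are in [0, 1]."""
    rgb = rgb.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + eps)
    # hue from the standard geometric formulation
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b <= g, theta, 360.0 - theta)
    return np.stack([hue, saturation, intensity], axis=-1)
```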


Development of Motion Recognition Platform Using Smart-Phone Tracking and Color Communication (스마트 폰 추적 및 색상 통신을 이용한 동작인식 플랫폼 개발)

  • Oh, Byung-Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.17 no.5
    • /
    • pp.143-150
    • /
    • 2017
  • In this paper, we propose a novel motion recognition platform using smart-phone tracking and color communication. The interface requires only a camera and a personal smart-phone, rather than expensive equipment, to provide motion control. The platform recognizes the user's gestures by tracking the 3D distance and rotation angle of the smart-phone, which essentially acts as a motion controller in the user's hand. A color-coded communication method using RGB color combinations is also included in the interface. Users can conveniently send or receive text data through this function, and the data can be transferred continuously even while the user is performing gestures. We present results of implementing viable content on the proposed motion recognition platform.
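
The abstract does not describe the actual encoding, so the following is only a hypothetical sketch of sending text through RGB color combinations; the 2-bits-per-channel scheme and the four intensity levels are purely illustrative assumptions:

```python
# Hypothetical sketch: encode text bytes as a sequence of RGB colors,
# two bits per channel (6 bits per displayed color).
LEVELS = [0, 85, 170, 255]  # 4 distinguishable intensity levels per channel

def encode_text_to_colors(text):
    bits = ''.join(f'{b:08b}' for b in text.encode('utf-8'))
    bits += '0' * (-len(bits) % 6)  # pad to a multiple of 6 bits
    colors = []
    for i in range(0, len(bits), 6):
        chunk = bits[i:i + 6]
        colors.append(tuple(LEVELS[int(chunk[j:j + 2], 2)] for j in range(0, 6, 2)))
    return colors  # list of (R, G, B) frames to display

def decode_colors_to_text(colors):
    bits = ''
    for r, g, b in colors:
        for v in (r, g, b):
            # snap the observed value to the nearest level, recover its 2 bits
            bits += f'{LEVELS.index(min(LEVELS, key=lambda l: abs(l - v))):02b}'
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))
    return data.rstrip(b'\x00').decode('utf-8', errors='ignore')

print(decode_colors_to_text(encode_text_to_colors("hi")))  # -> "hi"
```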

Designing New Hanbok Products Using Saekdong -Using with CLO 3D- (색동을 활용한 신한복 제품의 디자인 개발 -CLO 3D 프로그램을 활용하여-)

  • Heeyoung Kim
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.46 no.6
    • /
    • pp.945-962
    • /
    • 2022
  • This study examines the use of traditional patterns by new Hanbok brands. A Saekdong print pattern based on previous research was developed and applied to clothing designs. A total of 488 images of printed products from seven new Hanbok brands and 219 images from the collections of the National Folk Museum of Korea were analyzed. Traditional patterns accounted for 47.4% of the printed products of the new Hanbok designs, used in the following descending order: flower patterns, traditional paintings, animals, geometrical designs, Dancheong, text and others, Jogakbo, and Saekdong. Saekdong appeared in the products of three brands, with its color or shape modified. To develop the Saekdong image, five colors (red, yellow, blue, white, and green) were selected, and the ratio of use and the width of each color were determined with reference to previous studies. The average color value was determined through color analysis of the Saekdong collections. Seven items were designed for the print pattern, and four items were added for coordination, making up four styles. The results of this analysis are intended to provide insights into product development using traditional patterns.

Complex Color Model for Efficient Representation of Color-Shape in Content-based Image Retrieval (내용 기반 이미지 검색에서 효율적인 색상-모양 표현을 위한 복소 색상 모델)

  • Choi, Min-Seok
    • Journal of Digital Convergence
    • /
    • v.15 no.4
    • /
    • pp.267-273
    • /
    • 2017
  • With the development of various devices and communication technologies, the production and distribution of multimedia content are increasing exponentially. Retrieving multimedia data such as images and videos requires an approach different from conventional text-based retrieval. Color and shape are key features in content-based image retrieval, which quantifies and analyzes physical features of images and compares them to find similar images. Color and shape have traditionally been used as independent features, even though the two are closely related in cognitive terms. In this paper, a method for describing the spatial distribution of color using a complex color model, which projects three-dimensional color information onto the two-dimensional complex plane, is proposed. Experimental results show that the proposed method can efficiently represent the spatial distribution of colors by frequency-transforming the complex image and reconstructing it from only a few low-frequency coefficients.
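
The paper's exact projection is not given in the abstract, but the general idea can be sketched as: map each pixel's chromatic content to a complex number, take a 2-D FFT, and keep only low-frequency coefficients as the descriptor. The hue/saturation-to-complex mapping and the cutoff below are assumptions for illustration:

```python
import numpy as np

def complex_color_descriptor(hsi, keep=8):
    """Illustrative sketch: encode color as a complex image and keep a
    (keep x keep) block of low-frequency FFT coefficients as a descriptor.
    Assumes hsi is (H, W, 3) with hue in degrees and saturation in [0, 1]."""
    hue_rad = np.deg2rad(hsi[..., 0])
    sat = hsi[..., 1]
    # one possible projection: saturation as magnitude, hue as phase
    z = sat * np.exp(1j * hue_rad)
    spectrum = np.fft.fftshift(np.fft.fft2(z))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    half = keep // 2
    low = spectrum[cy - half:cy + half, cx - half:cx + half]
    return low.ravel()

def descriptor_distance(d1, d2):
    """Compare two descriptors, e.g. for similarity search."""
    return np.linalg.norm(d1 - d2)
```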

Study on News Video Character Extraction and Recognition (뉴스 비디오 자막 추출 및 인식 기법에 관한 연구)

  • 김종열;김성섭;문영식
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.1
    • /
    • pp.10-19
    • /
    • 2003
  • Caption information in news videos is useful for video indexing and retrieval since it usually suggests or implies the contents of the video. In this paper, a new algorithm for extracting and recognizing characters from news video is proposed that requires no a priori knowledge such as font type, color, or character size. In the text region extraction step, to improve the recognition rate for low-resolution videos with complex backgrounds, consecutive frames containing identical text regions are automatically detected and combined into an averaged frame. The averaged frame is projected in the horizontal and vertical directions, and region filling is applied to remove the background. K-means color clustering is then applied to remove the remaining background and produce the final text image. In the character recognition step, simple features such as white runs and zero-one transitions from the center are extracted from unknown characters and compared against a pre-composed character feature set to recognize the characters. Experimental results on various news videos show that the proposed method is superior in terms of caption extraction ability and character recognition rate.
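
The frame-averaging and K-means background-removal steps might look roughly like the sketch below; the number of clusters and the rule for choosing the text cluster (brightest center) are assumptions for illustration, not the paper's method:

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_text_pixels(frames, n_clusters=3):
    """Average frames that share the same caption, then keep the pixels of
    the brightest color cluster as the candidate text (illustrative rule)."""
    averaged = np.mean(np.stack(frames).astype(float), axis=0)   # (H, W, 3)
    h, w, _ = averaged.shape
    pixels = averaged.reshape(-1, 3)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    # assume captions are brighter than the background and pick that cluster
    brightness = km.cluster_centers_.mean(axis=1)
    text_cluster = int(np.argmax(brightness))
    mask = (km.labels_ == text_cluster).reshape(h, w)
    return mask  # boolean text mask for the averaged caption frame
```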