• Title/Summary/Keyword: character input (문자입력)

567 search results, processing time 0.034 seconds

Improved Method of License Plate Detection and Recognition using Synthetic Number Plate (인조 번호판을 이용한 자동차 번호인식 성능 향상 기법)

  • Chang, Il-Sik;Park, Gooman
    • Journal of Broadcast Engineering
    • /
    • v.26 no.4
    • /
    • pp.453-462
    • /
    • 2021
  • A large amount of license plate data is required for car number recognition, and the data needs to be balanced from past license plates to the latest ones. However, it is difficult to obtain real data covering that whole range. To solve this problem, deep-learning license plate recognition studies are being conducted using synthetic license plates. Since synthetic data differ from real data, various data augmentation techniques are used to bridge the gap. Existing augmentation simply used methods such as brightness, rotation, affine transformation, blur, and noise. In this paper, we combine these augmentation methods with a style-transfer method that converts synthetic data into a real-world data style. In addition, real license plate images are noisy when captured from a distance or in dark environments, so recognizing characters directly from such input leads to frequent misrecognition. To improve character recognition, we applied DeblurGANv2 as an image quality enhancement step, increasing the accuracy of license plate recognition. YOLO-V5 was used as the deep-learning model for both license plate detection and license number recognition. To evaluate the synthetic license plate data, we constructed a test set from license plates we collected ourselves. Without style transfer, license plate detection recorded 0.614 mAP; with style transfer, it improved to 0.679 mAP. Likewise, the successful detection rate rose from 0.872 without image enhancement to 0.915 with it, confirming the performance improvement.
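The classical augmentations the abstract lists (brightness, rotation, blur, noise) are simple pixel-level operations. The toy NumPy sketch below (function name and parameter values are illustrative assumptions, not taken from the paper) applies two of them, a brightness shift and additive Gaussian noise, to a synthetic plate-sized image:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, brightness=30, noise_std=5.0):
    """Two classical augmentations: brightness shift + Gaussian noise."""
    out = img.astype(np.float32) + brightness            # brightness shift
    out += rng.normal(0.0, noise_std, size=img.shape)    # sensor-like noise
    return np.clip(out, 0, 255).astype(np.uint8)         # back to valid pixel range

# A toy "synthetic plate": random 32x96 grayscale image.
plate = rng.integers(0, 256, size=(32, 96), dtype=np.uint8)
aug = augment(plate)
```

Style transfer, by contrast, requires a learned model (e.g. a GAN) and cannot be reduced to such per-pixel arithmetic, which is the gap the paper addresses.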

Vehicle License Plate Detection Based on Mathematical Morphology and Symmetry (수리 형태론과 대칭성을 이용한 자동차 번호판 검출)

  • Kim, Jin-Heon;Moon, Je-Hyung;Choi, Tae-Young
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.2
    • /
    • pp.40-47
    • /
    • 2009
  • This paper proposes a method for vehicle license plate detection using mathematical morphology and symmetry. In general, the shape, color, size, and position of license plates are regulated by authorities for better human recognition. Among these properties, the relatively large intensity difference between the letters and the background of the plate, and the symmetry about the plate, are the major discriminating factors for detection. First, the opened image is subtracted from the closed image to intensify the plate region, using a rectangular structuring element whose width equals the distance between two characters. Second, the subtraction image is average-filtered with a mask the size of the plate. Third, the column-maximum graph of the average-filtered image is acquired and the symmetry of the graph is measured at every position. Fourth, the peaks of the average-filtered image are searched. Finally, the plate is assumed to be positioned around the local maximum nearest to the point of highest symmetry. About 1,000 images taken by a speed-enforcement camera were used for the experiment, and the resulting plate detection rate is about 93%.
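The first step, subtracting the opened image from the closed image to highlight the character region, can be illustrated on a single scanline. This minimal NumPy sketch (the paper operates on 2-D images with a rectangular structuring element sized to the inter-character spacing; this 1-D version is an illustrative simplification) shows how close-minus-open responds strongly where intensity alternates between letters and background:

```python
import numpy as np

def dilate(row, w):
    # Grayscale dilation of a 1-D signal with a flat structuring element of width w.
    pad = w // 2
    p = np.pad(row, pad, mode='edge')
    return np.array([p[i:i + w].max() for i in range(len(row))])

def erode(row, w):
    pad = w // 2
    p = np.pad(row, pad, mode='edge')
    return np.array([p[i:i + w].min() for i in range(len(row))])

def close_minus_open(row, w):
    closed = erode(dilate(row, w), w)   # closing = dilation then erosion
    opened = dilate(erode(row, w), w)   # opening = erosion then dilation
    return closed - opened

# Toy scanline: flat background with high-frequency "characters" in the middle.
signal = np.array([10, 10, 200, 10, 200, 10, 200, 10, 10], dtype=np.int32)
resp = close_minus_open(signal, 3)
```

The response is near zero over the flat background and large over the alternating letter/background stripes, which is exactly what makes the plate region stand out for the subsequent averaging and peak search.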

Multi-modal Image Processing for Improving Recognition Accuracy of Text Data in Images (이미지 내의 텍스트 데이터 인식 정확도 향상을 위한 멀티 모달 이미지 처리 프로세스)

  • Park, Jungeun;Joo, Gyeongdon;Kim, Chulyun
    • Database Research
    • /
    • v.34 no.3
    • /
    • pp.148-158
    • /
    • 2018
  • Optical character recognition (OCR) is a technique for extracting and recognizing text from images. It is an important preprocessing step in data analysis since much real-world text information is embedded in images. Many OCR engines achieve high recognition accuracy on images where text is clearly separable from the background, such as black lettering on a white background, but low accuracy when text is not easily separable from a complex background. To improve accuracy on such complex images, the input image must be transformed to make the text more noticeable. In this paper, we propose a method that segments an input image into text lines so that OCR engines can recognize each line more efficiently, and determines the final output by comparing the recognition rates of a CLAHE module and a Two-step module, both of which distinguish text from background regions using image processing techniques. Through thorough experiments against the well-known OCR engines Tesseract and ABBYY, we show that the proposed method has the best recognition accuracy on images with complex backgrounds.
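As a rough illustration of what the CLAHE module contributes, the sketch below performs plain global histogram equalization in NumPy. This is a simplified stand-in: real CLAHE additionally tiles the image and clips the histogram to limit contrast, which this sketch omits. The point is that a narrow intensity range gets stretched over the full 0..255 scale, so text separates from background more easily for the OCR engine:

```python
import numpy as np

def equalize(img):
    # Global histogram equalization -- a simplified stand-in for CLAHE,
    # which additionally tiles the image and clips the histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # first occupied intensity bin
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]                # remap every pixel via the LUT

# A toy low-contrast patch whose intensities span only 100..115.
low_contrast = (np.arange(64).reshape(8, 8) // 4 + 100).astype(np.uint8)
enhanced = equalize(low_contrast)
```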

Metadata extraction using AI and advanced metadata research for web services (AI를 활용한 메타데이터 추출 및 웹서비스용 메타데이터 고도화 연구)

  • Sung Hwan Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.2
    • /
    • pp.499-503
    • /
    • 2024
  • Broadcast programs are delivered not only over the broadcaster's own channels but also to various media such as Internet replay, OTT, and IPTV services. In this setting, it is very important to provide search keywords that represent the characteristics of the content well. Broadcasters mainly rely on manually entering key keywords during production and archiving. This method is insufficient in quantity for securing core metadata and also shows its limits when content is recommended and reused in other media services. This study secures a large amount of metadata by utilizing closed-caption data pre-archived through the DTV closed captioning server developed at EBS. First, core metadata were automatically extracted by applying Google's natural language AI technology. Next, as the core research contribution, we propose a method of finding core metadata that reflects priorities and content characteristics. To obtain differentiated metadata weights, importance was classified by applying the TF-IDF calculation method, and the experiment yielded successful weight data. When combined with future string-similarity studies, the string metadata obtained here becomes the basis for securing sophisticated content-recommendation metadata for content services provided to other media.
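The TF-IDF weighting step can be sketched directly. In the toy example below, the token lists stand in for caption-derived text from three programs; the tokens and the exact TF-IDF variant are illustrative assumptions, not the paper's data or formula:

```python
import math

# Hypothetical token lists standing in for closed-caption text of three programs.
docs = [
    ["evaporation", "rainfall", "water", "cycle", "cloud"],
    ["evaporation", "experiment", "observation", "log"],
    ["rainfall", "probability", "forecast", "broadcast"],
]
N = len(docs)

def tf_idf(term, doc):
    tf = doc.count(term) / len(doc)        # term frequency within the program
    df = sum(term in d for d in docs)      # number of programs containing the term
    return tf * math.log(N / df)           # classic TF-IDF weight

weights = {t: tf_idf(t, docs[0]) for t in docs[0]}
```

Terms unique to one program ("water", "cloud") outweigh terms shared across programs ("evaporation", "rainfall"), which is the differentiation the study exploits to rank candidate keywords.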

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but because they are probabilistic models based on the frequency of each unit in the training set, they cannot model the correlation between input units efficiently. Recently, as deep learning has developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect the dependency between objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set generally contains a huge number of words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit of Korean text. We constructed language models using three or four LSTM layers, trained with stochastic gradient descent and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras on top of Theano.
After preprocessing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters with the following 21st character as the output, giving a total of 1,023,411 input-output pairs, which were divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were run on a system equipped with a 16-core Intel Xeon CPU and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All the optimization algorithms except stochastic gradient descent showed similar validation loss and perplexity, clearly superior to those of stochastic gradient descent, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly better, and even became worse under some conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although the completeness of the generated sentences differed slightly between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which underpin artificial intelligence systems.
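The phoneme (jamo) units such a model consumes can be obtained from precomposed Hangul syllables with the standard Unicode decomposition formula. A minimal sketch (the paper's exact preprocessing may differ):

```python
# Standard Unicode jamo tables: 19 initials, 21 vowels, 27 optional finals.
CHO  = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
JUNG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def to_jamo(text):
    """Decompose precomposed Hangul syllables into a phoneme (jamo) sequence."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                    # inside the Hangul syllable block
            out.append(CHO[code // 588])         # 588 = 21 vowels * 28 finals
            out.append(JUNG[(code % 588) // 28])
            if code % 28:                        # final consonant, if present
                out.append(JONG[code % 28])
        else:
            out.append(ch)                       # spaces, punctuation kept as-is
    return out

jamo = to_jamo("한글")   # -> ㅎ, ㅏ, ㄴ, ㄱ, ㅡ, ㄹ
```

Decomposing to jamo shrinks the model's output vocabulary from tens of thousands of words (or thousands of syllables) to a few dozen symbols, which is the complexity reduction the paper is after.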

Traffic Consideration and Link Capacity Estimation for Integrated Multimedia Network of The Naval Ship (함정용 멀티미디어 통합통신망을 위한 트래픽 및 링크용량 예측)

  • Lee, Chae-Dong;Shin, Woo-Seop;Kim, Suk-Chan
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.49 no.5
    • /
    • pp.99-106
    • /
    • 2012
  • The Korean Navy has been using a voice-oriented ICS to raise the efficiency of naval ship operations. Recently, an integrated multimedia network carrying voice, video, and text has come under consideration by the Korean Navy. As basic research toward establishing such a network, this paper classifies the various networks within a naval ship according to their suitability for integration, and examines the types and characteristics of the multimedia traffic carried on the classified networks. To predict the link capacity of a switch from the number of traffic input sources, we suggest a traffic aggregation model, then calculate the link capacity of the aggregated traffic and analyze the aggregated traffic of a major Korean naval ship.

Design of Large-set Off-line Handwritten Hangul Database Construction (대용량 오프라인 한글 글씨 데이타베이스의 설계)

  • Lee, S.W.;Song, H.H.;Kim, J.S.;Lee, E.J.;Park, H.S.
    • Annual Conference on Human and Language Technology
    • /
    • 1995.10a
    • /
    • pp.131-136
    • /
    • 1995
  • Recently, research on off-line Hangul handwriting recognition, which automates the information-input process by recognizing naturally handwritten Hangul, has been actively conducted. An essential research resource for such work is a large-set off-line Hangul handwriting database. This paper briefly introduces the construction status of the off-line Hangul handwriting database being built as part of the Korean Information Base development project of the Korean Language Engineering Center at the Systems Engineering Research Institute. Construction proceeds in four stages: database design, handwriting data collection, sheet scanning and character-level segmentation, and database verification. In this work, collecting handwriting with diverse variations was treated as the most important design consideration, and considerable time was devoted to the design and verification stages to build a high-quality, consistent database. Finally, a convenient user interface was implemented with HTML on the World Wide Web (WWW) so that users can easily browse Hangul handwriting images and obtain files in a form usable for developing recognition algorithms. Currently, 1,000 handwriting sets for the 520 most frequently used characters among the 2,350 KS C precomposed Hangul characters have been collected, and a gray-scale image database is under construction; over the next two years, handwriting data for the remaining 1,830 characters will be collected to complete the database. The completed database will soon be provided to domestic off-line Hangul handwriting recognition researchers as important experimental data for developing recognition algorithms, and is expected to contribute greatly to the objective performance evaluation of recognition systems and to stimulate domestic research on off-line Hangul handwriting recognition.


Vibrotactile Space Mouse (진동촉각 공간 마우스)

  • Park, Jun-Hyung;Choi, Ye-Rim;Lee, Kwang-Hyung;Back, Jong-Won;Jang, Tae-Jeong
    • 한국HCI학회:학술대회논문집
    • /
    • 2008.02a
    • /
    • pp.337-341
    • /
    • 2008
  • This paper presents a vibrotactile space mouse that uses pin-type vibrotactile display modules and a gyroscope chip. The mouse is a new interface device: not only an input device like an ordinary space mouse, but also a tactile output device. It consists of a space mouse built around a gyroscope chip and vibrotactile display modules developed in our own laboratory. Recently, the development of small, low-power vibrotactile display modules has made vibrotactile displays available in small embedded systems such as wireless mice and mobile devices. Likewise, new sensors such as miniature MEMS gyroscopes enable a small space mouse that can be used in the air rather than on a flat surface. The proposed vibrotactile space mouse recognizes hand motion with the gyroscope chip and transmits the data to a PC over Bluetooth, where an application receives the data and moves the pointer. In addition, a 2-by-3 array of pin-type vibrotactile actuators is mounted on the front side of the mouse where the user's fingers rest; these actuators can represent various information such as the gray scale of an image or Braille patterns for visually impaired persons.


A Computer Access System for the Physically Disabled Using Eye-Tracking and Speech Recognition (아이트래킹 및 음성인식 기술을 활용한 지체장애인 컴퓨터 접근 시스템)

  • Kwak, Seongeun;Kim, Isaac;Sim, Debora;Lee, Seung Hwan;Hwang, Sung Soo
    • Journal of the HCI Society of Korea
    • /
    • v.12 no.4
    • /
    • pp.5-15
    • /
    • 2017
  • Alternative computer access devices are one way for the physically disabled to meet their desire to participate in social activities. Most such devices provide access to computers through the feet or head, but for users with physical disabilities it is not easy to control a mouse this way. In this paper, we propose a computer access system for the physically disabled. The proposed system moves the mouse pointer solely with the user's gaze using eye-tracking technology. The mouse can be clicked through an external button that is relatively easy to press, and characters can be entered easily and quickly through voice recognition. The system also provides detailed functions such as right-click, double-click, drag, an on-screen keyboard, Internet access, and scrolling.

A New Self-Organizing Map based on Kernel Concepts (자가 조직화 지도의 커널 공간 해석에 관한 연구)

  • Cheong Sung-Moon;Kim Ki-Bom;Hong Soon-Jwa
    • The KIPS Transactions:PartB
    • /
    • v.13B no.4 s.107
    • /
    • pp.439-448
    • /
    • 2006
  • Previous recognition/clustering algorithms such as the Kohonen SOM (Self-Organizing Map), MLP (Multi-Layer Perceptron), and SVM (Support Vector Machine) may fail to adapt to unexpected input patterns, and their recognition rates depend highly on the complexity of the training patterns. These weak points can be remedied by lowering the complexity of the original problem without losing its essential characteristics. Among the many ways to lower problem complexity, we chose kernel concepts as our approach. In this paper, the original data are mapped via a kernel into a very high-dimensional (potentially near-infinite-dimensional) feature space. The transferred data are therefore distributed more sparsely than in the original space, which helps raise the recognition rate. The recognition rate is estimated with a new similarity-probing and learning method proposed in this paper. Using the CEDAR database of cursive handwritten digits 0 to 9, we compare the recognition/clustering performance of the proposed kSOM with that of the previous SOM.
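The core kernel idea, measuring distances in an implicit high-dimensional feature space without ever computing the mapping explicitly, can be sketched as follows. The RBF kernel and the best-matching-unit (BMU) search below are illustrative choices; the paper's similarity-probing method is its own construction:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def feature_dist2(x, w):
    # Squared distance in the implicit feature space, via the kernel trick:
    # ||phi(x) - phi(w)||^2 = k(x, x) - 2*k(x, w) + k(w, w)
    return rbf_kernel(x, x) - 2 * rbf_kernel(x, w) + rbf_kernel(w, w)

# BMU search among toy SOM prototypes, done entirely through kernel
# evaluations -- the feature map phi is never computed explicitly.
prototypes = [(0.0, 0.0), (1.0, 1.0), (3.0, 3.0)]
x = (0.9, 1.1)
bmu = min(range(len(prototypes)), key=lambda i: feature_dist2(x, prototypes[i]))
```

Because every distance reduces to kernel evaluations, the "near-infinite-dimensional" feature space the abstract mentions stays implicit, which is what keeps the approach computationally feasible.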