• Title/Summary/Keyword: Automatic Recognition System


Design of Smart Device Assistive Emergency WayFinder Using Vision Based Emergency Exit Sign Detection

  • Lee, Minwoo;Mariappan, Vinayagam;Mfitumukiza, Joseph;Lee, Junghoon;Cho, Juphil;Cha, Jaesang
    • Journal of Satellite, Information and Communications / v.12 no.1 / pp.101-106 / 2017
  • Emergency exit signs are installed in buildings such as shopping malls, hospitals, industrial sites, and government complexes, and in various other places, to mark escape routes and help people escape easily during emergency situations. Under emergency conditions such as smoke, fire, poor lighting, or a crowded stampede, it is difficult for people to recognize the emergency exit signs and emergency doors and escape from the building. This paper proposes automatic emergency exit sign recognition to find the exit direction using a smart device. The proposed approach aims to develop a computer vision based smartphone application that detects emergency exit signs with the smart device camera and guides the escape direction in visible and audible output formats. In this research, a CAMShift object tracking approach is used to detect the emergency exit sign, and the direction information is extracted using a template matching method. The direction information of the exit sign is stored in text format, and the text is then synthesized into an audible acoustic signal using text-to-speech. The synthesized acoustic signal is rendered on the smart device speaker as escape guidance for the user. The results are analyzed and conclusions are drawn from the viewpoints of visual element selection, EXIT sign appearance design, and EXIT sign placement in the building, which are valuable and can serve as a common reference for wayfinder systems.
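
As a rough illustration of the detection pipeline described above (not the authors' code), the sketch below tracks a green exit-sign region with OpenCV's CAMShift on a colour back-projection and then guesses the pointing direction by template matching against caller-supplied arrow templates; the colour range, templates, and initial window are assumptions.

```python
# Minimal sketch: CAMShift tracking of a green exit sign plus template
# matching of the arrow direction. Colour range and templates are assumed.
import cv2
import numpy as np

def find_exit_sign(frame, track_window, arrow_templates):
    """Track the exit sign region and guess the escape direction."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project a green-hue histogram (exit signs are typically green).
    mask = cv2.inRange(hsv, (40, 60, 60), (90, 255, 255))
    hist = cv2.calcHist([hsv], [0], mask, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)

    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, track_window = cv2.CamShift(backproj, track_window, criteria)

    x, y, w, h = track_window
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

    best_dir, best_score = None, -1.0
    for direction, tmpl in arrow_templates.items():   # e.g. {"left": ..., "right": ...}
        if roi.shape[0] < tmpl.shape[0] or roi.shape[1] < tmpl.shape[1]:
            continue
        score = cv2.matchTemplate(roi, tmpl, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_dir, best_score = direction, score
    return best_dir, track_window
```

In the paper, the resulting direction string is then passed to a text-to-speech engine and rendered on the device speaker.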

Digital Image based Real-time Sea Fog Removal Technique using GPU (GPU를 이용한 영상기반 고속 해무제거 기술)

  • Choi, Woon-sik;Lee, Yoon-hyuk;Seo, Young-ho;Choi, Hyun-jun
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.12 / pp.2355-2362 / 2016
  • Sea fog removal is an important issue in both computer vision and image processing. Sea fog or haze removal is widely used in many fields, such as automatic control systems, CCTV, and image recognition. Color image dehazing techniques have been extensively studied, and especially the dark channel prior (DCP) technique has been widely used. This paper proposes a fast and efficient GPU-based dark channel prior implementation to remove sea fog from a single digital image. We implement a basic parallel program and then optimize it to obtain a performance acceleration of more than 250 times. While parallelizing and optimizing the algorithm, we improve parts of the original serial program or the basic parallel program according to the characteristics of the individual steps. The proposed GPU algorithm and implementation results can be used to advantage as pre-processing in many systems, such as safe ship navigation, topographical surveys, and intelligent vehicles.
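
For reference, a minimal CPU sketch of the dark channel prior pipeline that the paper accelerates on the GPU is given below; the patch size, omega, and t0 values are conventional choices, not taken from the paper, and the GPU kernels themselves are not reproduced.

```python
# CPU reference sketch of dark channel prior dehazing (parameters illustrative).
import cv2
import numpy as np

def dehaze_dcp(img, patch=15, omega=0.95, t0=0.1):
    I = img.astype(np.float32) / 255.0
    # Dark channel: per-pixel min over RGB, then a min filter over a patch.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(I.min(axis=2), kernel)
    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = I[idx].mean(axis=0)
    # Transmission estimate and scene radiance recovery.
    t = 1.0 - omega * cv2.erode((I / A).min(axis=2), kernel)
    t = np.clip(t, t0, 1.0)[..., None]
    J = (I - A) / t + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)
```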

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won;Choi, Kyung Sik;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.1 / pp.70-77 / 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warping images obtained through fish-eye lenses mounted on the robot. An omnidirectional image sensor is desirable for real-time view-based recognition by a robot because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, which is obtained by a camera with a reflecting mirror or by combining multiple camera images, is essential because it is difficult to obtain information from the original image. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through fish-eye lenses mounted facing downward. Second, we extract motion vectors using Lucas-Kanade optical flow on the preprocessed images. Third, we estimate the robot position and angle using an ego-motion method based on the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed ego-motion localization algorithm based on fisheye warping images by comparing the position and angle obtained by the proposed algorithm with those measured by a Global Vision Localization System.
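
A minimal sketch of the second step above is shown below; it only illustrates sparse Lucas-Kanade matching between two preprocessed panoramic frames with OpenCV, while the RANSAC vanishing-point and ego-motion estimation of the third step are not reproduced.

```python
# Sparse Lucas-Kanade optical flow between consecutive panoramic frames,
# returning matched point pairs for later ego-motion estimation (sketch only).
import cv2
import numpy as np

def lk_motion_vectors(prev_gray, curr_gray, max_corners=300):
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                             winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
```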

Study on the Development of Auto-classification Algorithm for Ginseng Seedling using SVM (Support Vector Machine) (SVM(Support Vector Machine)을 이용한 묘삼 자동등급 판정 알고리즘 개발에 관한 연구)

  • Oh, Hyun-Keun;Lee, Hoon-Soo;Chung, Sun-Ok;Cho, Byoung-Kwan
    • Journal of Biosystems Engineering / v.36 no.1 / pp.40-47 / 2011
  • An image analysis algorithm for the quality evaluation of ginseng seedlings was investigated. Images of ginseng seedlings were acquired with a color CCD camera and processed with image analysis methods such as binary conversion, labeling, and thinning. The processed images were used to calculate the length and weight of the ginseng seedlings. The length and weight of the samples could be predicted with standard errors of 0.343 mm and 0.0214 g and $R^2$ values of 0.8738 and 0.9835, respectively. For the evaluation of the three quality grades of Gab, Eul, and abnormal ginseng seedlings, features were extracted from the processed images. The features, combined with the ratios of the lengths and areas of the ginseng seedlings, efficiently differentiated the abnormal shapes from the normal ones. The grade levels were evaluated with support vector machine analysis, an efficient pattern recognition method. The quality grade of ginseng seedlings could be evaluated with accuracies of 95% and 97% for training and validation, respectively. The results indicate that color image analysis with a support vector machine algorithm has good potential for the development of an automatic sorting system for ginseng seedlings.
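
The grading step can be pictured with the toy sketch below, which is purely illustrative: the feature vectors and RBF-SVM hyperparameters are made-up placeholders, not the paper's measured data.

```python
# Illustrative SVM grading of seedlings into Gab / Eul / abnormal classes.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature vectors: [length_mm, area_px, length_area_ratio]
X_train = np.array([[112.0, 5400, 0.021], [98.5, 4300, 0.023], [64.2, 6100, 0.011]])
y_train = np.array(["Gab", "Eul", "abnormal"])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print(clf.predict([[105.0, 5000, 0.021]]))
```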

An Automatic Pattern Recognition Algorithm for Identifying the Spatio-temporal Congestion Evolution Patterns in Freeway Historic Data (고속도로 이력데이터에 포함된 정체 시공간 전개 패턴 자동인식 알고리즘 개발)

  • Park, Eun Mi;Oh, Hyun Sun
    • Journal of Korean Society of Transportation / v.32 no.5 / pp.522-530 / 2014
  • Spatio-temporal congestion evolution patterns can be reproduced using the VDS (Vehicle Detection System) historic speed dataset available in TMCs (Traffic Management Centers). Such a dataset provides a pool of spatio-temporally experienced traffic conditions. Traffic flow patterns are known to recur in space and time, and even non-recurrent congestion caused by incidents shows patterns that depend on the incident conditions. This implies that the information should be useful for traffic prediction and traffic management. Traffic flow predictions are generally performed using black-box approaches such as neural networks and genetic algorithms. Black-box approaches are not designed to explain their modeling and reasoning process, nor to estimate the benefits and risks of implementing such a solution, so TMCs are reluctant to employ them even though numerous valuable articles exist. This research proposes a more readily understandable and intuitively appealing data-driven approach and develops an algorithm for identifying congestion patterns for recurrent and non-recurrent congestion management and information provision.
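
One simple way to make the idea of explicit, readable congestion patterns concrete (not the paper's actual algorithm) is to threshold a station-by-time speed matrix and label contiguous congested regions, as in the sketch below; the 40 km/h threshold and the descriptors are assumptions.

```python
# Hedged sketch: mark congested cells in a station x time speed matrix and
# label each contiguous spatio-temporal congestion region.
import numpy as np
from scipy.ndimage import label

def congestion_regions(speed, threshold_kph=40.0):
    """speed: 2-D array, rows = detector stations, cols = time intervals."""
    congested = speed < threshold_kph
    regions, n = label(congested)              # default 4-connectivity
    patterns = []
    for k in range(1, n + 1):
        rows, cols = np.where(regions == k)
        patterns.append({
            "stations": (rows.min(), rows.max()),    # spatial extent
            "intervals": (cols.min(), cols.max()),   # temporal extent
            "cells": len(rows),                      # congestion size
        })
    return patterns
```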

Automatic Title Detection by Spatial Feature and Projection Profile for Document Images (공간 정보와 투영 프로파일을 이용한 문서 영상에서의 타이틀 영역 추출)

  • Park, Hyo-Jin;Kim, Bo-Ram;Kim, Wook-Hyun
    • Journal of the Institute of Convergence Signal Processing / v.11 no.3 / pp.209-214 / 2010
  • This paper proposes an algorithm for segmentation and title detection in document images. The automated title detection method we have developed is composed of two phases: segmentation and title area detection. In the first phase, we segment the document image: the binary map is segmented by a combination of morphological operations and CCA (connected component algorithm). The first phase provides segmented regions that may be detected as title areas in the second phase. Candidate title areas are detected using geometric information, and the title region is then extracted by removing non-title regions. After a classification step that removes non-text regions, projection is performed to detect the title region. Since the largest font in a document is usually used for the title, horizontal projection is performed within the text areas. In summary, we propose a method for segmentation and title detection in various forms of document images using geometric features and projection profile analysis. The proposed system is expected to have various applications, such as document title recognition, multimedia data searching, and real-time image processing.
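
The font-size heuristic in the projection step can be illustrated with the short sketch below (an illustration only, with simplified block handling): each segmented text block is scored by the height of its tallest run of ink rows in a horizontal projection profile, and the tallest one is taken as the title candidate.

```python
# Horizontal projection profile heuristic: the title block tends to contain
# the tallest single text line (largest font).
import numpy as np

def title_candidate(binary_blocks):
    """binary_blocks: list of 2-D 0/1 arrays, one per segmented text block."""
    best, best_height = None, 0
    for i, block in enumerate(binary_blocks):
        profile = block.sum(axis=1) > 0          # horizontal projection: row has ink?
        # Height of the tallest contiguous ink run = height of the largest text line.
        height, run = 0, 0
        for has_ink in profile:
            run = run + 1 if has_ink else 0
            height = max(height, run)
        if height > best_height:
            best, best_height = i, height
    return best
```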

Displacement Measurement of Structure using Multi-View Camera & Photogrammetry (사진측량법과 다시점 카메라를 이용한 구조물의 변위계측)

  • Yeo, Jeong-Hyeon;Yoon, In-Mo;Jeong, Young-Kee
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.1 / pp.1141-1144 / 2005
  • In this paper, we propose an automatic displacement measurement system for testing the stability of structures. Photogrammetry is a method that can measure accurate 3D data from 2D images taken from different locations and is suitable for analyzing and measuring the displacement of structures. This work consists of camera calibration, feature extraction using coded targets and retro-reflective circles, 3D reconstruction, and accuracy analysis. The multi-view cameras used for measuring the displacement of the structure are placed at different locations. Camera calibration computes the trifocal tensor from corresponding points in the images, from which the Euclidean cameras are calculated. In particular, in the feature extraction step, we use a sub-pixel method and pattern recognition to measure accurate 3D locations. A scale bar is used as the reference for measuring accurate world-coordinate values.
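
A hedged sketch of the sub-pixel feature-measurement idea is given below: the centre of a bright retro-reflective circular target is refined with an intensity-weighted centroid inside a rough bounding box. This is one common way to obtain sub-pixel centres and is not taken from the paper.

```python
# Sub-pixel centre of a bright circular target via intensity-weighted centroid.
import numpy as np

def target_center_subpixel(gray, roi):
    x, y, w, h = roi                       # rough bounding box of one target
    patch = gray[y:y + h, x:x + w].astype(np.float64)
    patch -= patch.min()                   # suppress background offset
    total = patch.sum()
    if total == 0:
        return None
    ys, xs = np.mgrid[0:h, 0:w]
    cx = (xs * patch).sum() / total + x    # sub-pixel centroid in image coords
    cy = (ys * patch).sum() / total + y
    return cx, cy
```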


A Study on Speechreading about the Korean 8 Vowels (한국어 8모음 자동 독화에 관한 연구)

  • Lee, Kyong-Ho;Yang, Ryong;Kim, Sun-Ok
    • Journal of the Korea Society of Computer and Information / v.14 no.3 / pp.173-182 / 2009
  • In this paper, we study parameter extraction and the implementation of a speechreading system to recognize the eight Korean vowels. Face features are detected by amplifying and reducing the image values and comparing the values as represented in various color spaces. The eye positions, the nose position, the inner lip boundary, the outer boundary of the upper lip, and the outer tooth line are located as features, and from this analysis the inner-lip area, the height and width of the inner lip, the ratio of the outer tooth line length to the inner mouth area, and the distance between the nose and the outer boundary of the upper lip are used as parameters. A total of 2,400 data samples were gathered and analyzed. Based on this analysis, a neural network was constructed and recognition experiments were performed. In the experiments, five normal subjects were sampled, and the observational error between samples was corrected using a normalization method. The experiments show very encouraging results regarding the usefulness of the parameters.
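
The recognition stage can be sketched roughly as below; the five-dimensional feature layout mirrors the parameters listed above, but the data here are random placeholders and the network size is an assumption, not the authors' configuration.

```python
# Illustrative MLP classifying one of eight Korean vowels from lip parameters.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical 5-D feature: [inner_lip_area, lip_height, lip_width,
#                            tooth_line_ratio, nose_to_upper_lip_dist]
rng = np.random.default_rng(0)
X = rng.normal(size=(2400, 5))                 # placeholder for measured data
y = rng.integers(0, 8, size=2400)              # 8 vowel classes

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
model.fit(X, y)
print(model.predict(X[:3]))
```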

Research to improve the performance of self localization of mobile robot utilizing video information of CCTV (CCTV 영상 정보를 활용한 이동 로봇의 자기 위치 추정 성능 향상을 위한 연구)

  • Park, Jong-Ho;Jeon, Young-Pil;Ryu, Ji-Hyoung;Yu, Dong-Hyun;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.12 / pp.6420-6426 / 2013
  • Demand for automatic monitoring systems using mobile robots in commercial indoor areas is growing, and mobile robot localization and object recognition methods commonly rely on the robot's onboard sensors. However, it is difficult to solve the self-localization problem for an indoor mobile robot using only the robot's own sensors. Therefore, this paper proposes an enhanced and effective self-position estimation method for mobile robots using markers and the CCTV cameras already installed in the building. In particular, after recognizing the square marker of the mobile robot and the object in the input image and confirming the vertices, the feature points of the marker were found and marker recognition was performed. Self-position estimation of the mobile robot was first performed from the geometric relationship of the marker in the image, and a coordinate transformation was applied so that the positions of robots and obstacles were converted to absolute coordinate values based on the CCTV information. Experiments were performed with an actual robot system to verify the proposed self-position estimation method, and the results can be used for convenient self-position estimation of robots in indoor areas.
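
The conversion from CCTV image coordinates to absolute floor coordinates can be illustrated with a fixed homography, as in the sketch below; the reference points and room dimensions are invented for illustration and do not come from the paper.

```python
# Mapping a marker centre detected in the CCTV image to floor-plane coordinates.
import cv2
import numpy as np

# Four reference points: where known floor positions appear in the CCTV image.
img_pts = np.float32([[102, 540], [880, 552], [810, 120], [160, 115]])
floor_pts = np.float32([[0.0, 0.0], [5.0, 0.0], [5.0, 8.0], [0.0, 8.0]])  # metres

H = cv2.getPerspectiveTransform(img_pts, floor_pts)

def robot_floor_position(marker_center_px):
    p = np.float32([[marker_center_px]])            # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]     # (x, y) in metres

print(robot_floor_position((455, 330)))
```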

UbiController: Universal Mobile System for Controlling Appliances in Smart Home Environment (UbiController: 스마트 홈 환경의 가전기기 제어를 위한 통합 모바일 시스템)

  • Yoon, Hyo-Seok;Kim, Hye-Jin;Woo, Woon-Tack;Lee, Sang-Goog
    • Journal of Korea Multimedia Society / v.11 no.8 / pp.1059-1071 / 2008
  • Users in a ubiquitous computing environment can easily access and use a multitude of devices and services anywhere and anytime. The key technology for realizing this scenario is a method of intuitively providing the proper user interface for each device and service. Previous attempts simply provided a designated user interface for each device and service, or provided an abstract user interface to control the common functions of different services. To select a target appliance, either the user directly specified the target device or the system depended on sensors such as RFID tags and readers, limiting the applicable scenarios. In this paper, we present UbiController, which uniquely uses the camera on the mobile device to recognize markers on appliances and acquire the user interface for the control task. UbiController aims to provide automatic discovery of multiple services in the smart home environment and to support both a traditional GUI and a novel camera-based recognition method, as well as intuitive interaction methods for users. We show experiments on the performance of UbiController's discovery and recognition methods, together with user feedback on the interaction methods from a user study.
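
The paper does not spell out its discovery protocol, but a UPnP-style SSDP search is one common way to realize automatic discovery of home appliances; the sketch below broadcasts an M-SEARCH request and collects the responses, purely as an assumed example.

```python
# Assumed example of service discovery via SSDP multicast (not UbiController's
# actual protocol): broadcast an M-SEARCH and collect responses until timeout.
import socket

def discover_services(timeout=2.0):
    msg = ("M-SEARCH * HTTP/1.1\r\n"
           "HOST: 239.255.255.250:1900\r\n"
           'MAN: "ssdp:discover"\r\n'
           "MX: 2\r\n"
           "ST: ssdp:all\r\n\r\n").encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.settimeout(timeout)
    sock.sendto(msg, ("239.255.255.250", 1900))
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            responses.append((addr, data.decode(errors="replace")))
    except socket.timeout:
        pass
    return responses
```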
