• Title/Summary/Keyword: sign detection and recognition


A Recognition and Acquisition Method of Distance Information in Direction Signs for Vehicle Location (차량의 위치 파악을 위한 도로안내표지판 인식과 거리정보 습득 방법)

  • Kim, Hyun-Tae;Jeong, Jin-Seong;Jang, Young-Min;Cho, Sang-Bock
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.70-79 / 2017
  • This study proposes a method to quickly and accurately acquire distance information from direction signs. The proposed method consists of sign recognition, pre-processing to facilitate acquisition of the road sign distance, and acquisition of the distance data. The road sign recognition uses color detection with gamma correction in order to mitigate various noise issues. To facilitate acquisition of the distance data, tilt correction using linear factors and resolution correction using the Fourier transform are applied. To acquire the distance data, a morphological operation is used to highlight the target area, together with labeling and template matching. By acquiring the distance information on the direction sign through this process, the proposed system can output the distance remaining to the next junction. When applied to a real system, the method can process data in real time thanks to its fast calculation speed: the average processing speed was 0.46 seconds per frame, with an accuracy of 0.65 in similarity value.
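The gamma-correction-plus-color-detection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gamma value and the green-channel thresholds are assumptions chosen for the example.

```python
import numpy as np

def gamma_correct(img, gamma=0.7):
    """Gamma-correct an 8-bit RGB image via a lookup table."""
    table = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return table[img]

def green_sign_mask(img):
    """Rough mask for the green background typical of Korean direction signs.

    The channel thresholds are illustrative assumptions.
    """
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (g > 80) & (g > r + 20) & (g > b + 20)

# Example: one strongly green pixel, one gray pixel.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (30, 160, 40)
img[1, 1] = (100, 100, 100)
mask = green_sign_mask(gamma_correct(img))
```

Gamma correction brightens dark frames before thresholding, which is one common way to make color detection less sensitive to illumination noise.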

Numeric Sign Language Interpreting Algorithm Based on Hand Image Processing (영상처리 기반 숫자 수화표현 인식 알고리즘)

  • Gwon, Kyungpil;Yoo, Joonhyuk
    • IEMEK Journal of Embedded Systems and Applications / v.14 no.3 / pp.133-142 / 2019
  • Existing auxiliary communication aids for the hearing-impaired have the inconvenience of requiring additional, expensive sensing devices. This paper presents a hand-image-detection-based algorithm to interpret the sign language of the hearing-impaired. The proposed sign language recognition system exploits only the hand image captured by the camera, without any additional gloves with extra sensors. Based on hand image processing, the system can reliably classify several numeric sign language representations. This work proposes a simple, lightweight classification algorithm that identifies the hand image of the hearing-impaired, enabling communication with others even against a complex background. Experimental results show that the proposed system interprets the numeric sign language well, with an accuracy of 95.6% on average.
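A camera-only pipeline like the one above typically starts by segmenting skin-colored pixels. A minimal sketch using normalized-RGB thresholds is shown below; the threshold values are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def skin_mask(img):
    """Detect skin-colored pixels with normalized-RGB thresholds.

    Normalizing by the channel sum reduces sensitivity to brightness;
    the numeric thresholds here are assumed for illustration.
    """
    rgb = img.astype(float)
    total = rgb.sum(axis=-1) + 1e-6          # avoid divide-by-zero
    r_n = rgb[..., 0] / total
    g_n = rgb[..., 1] / total
    return (r_n > 0.38) & (r_n < 0.60) & (g_n > 0.25) & (g_n < 0.37)

img = np.zeros((1, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 120, 90)   # skin-like pixel
img[0, 1] = (50, 50, 200)    # background pixel
mask = skin_mask(img)
```

The resulting binary mask would then feed a lightweight shape classifier for the numeric gestures.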

Traffic Sign Area Detection System Based on Color Processing Mechanism of Human (인간의 색상처리방식에 기반한 교통 표지판 영역 추출 시스템)

  • Cheoi, Kyung-Joo;Park, Min-Chul
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.63-72 / 2007
  • A traffic sign on the road should be easy to distinguish even from afar and should be recognized in a short time. As the traffic sign provides important information that enhances driver safety, it has to attract human attention over other objects on the road. This paper proposes a new method of detecting the traffic sign area, which uses an attention module on the assumption that, while driving, we direct our gaze at the traffic sign first among other objects. In this paper, we analyze previous psychophysical and physiological studies to determine what kinds of features are used in human object recognition, especially color processing, and with these results we detect the traffic sign area. Various kinds of traffic sign images were tested, and the results showed good quality (97.8% average success rate).
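The color-processing part of such an attention model can be illustrated with opponent color channels (red-green and blue-yellow), the feature channels classically used in biologically inspired saliency models. This is a sketch of the general idea, not the authors' attention module.

```python
import numpy as np

def opponency_saliency(img):
    """Per-pixel color saliency from red-green and blue-yellow opponent channels."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    rg = np.abs(r - g)               # red-green opponency
    by = np.abs(b - (r + g) / 2.0)   # blue-yellow opponency
    return rg + by

img = np.full((2, 2, 3), 100, dtype=np.uint8)   # gray background
img[0, 0] = (220, 30, 30)                        # red, sign-like pixel
sal = opponency_saliency(img)
```

A saturated sign color scores high on both opponent channels, so the sign region stands out against achromatic road scenery.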

Traffic Sign Recognition using SVM and Decision Tree for Poor Driving Environment (SVM과 의사결정트리를 이용한 열악한 환경에서의 교통표지판 인식 알고리즘)

  • Jo, Young-Bae;Na, Won-Seob;Eom, Sung-Je;Jeong, Yong-Jin
    • Journal of IKEEE / v.18 no.4 / pp.485-494 / 2014
  • Traffic Sign Recognition (TSR) is an important element in an Advanced Driver Assistance System (ADAS). However, many TSR studies address only the normal daytime environment, because a sign's distinctive color does not appear in poor conditions such as nighttime, snow, rain, or fog. In this paper, we propose a new machine-learning-based TSR algorithm for poor environments as well as daytime. In poor environments, traditional methods that use the RGB color space do not perform well, so we extract sign characteristics with Histogram of Oriented Gradients (HOG) features and detect signs with a Support Vector Machine (SVM). A detected sign is then recognized by a decision tree based on 25 reference points in a normalized RGB system. The detection rate of the proposed system is 96.4% and the recognition rate is 94% in poor environments. Testing was performed on an Intel i5 processor at 3.4 GHz using Full HD resolution images. The results show that machine-learning-based detection and recognition methods can be used efficiently for TSR even in a poor driving environment.
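The HOG feature extraction step can be sketched as below: gradients are binned into per-cell orientation histograms weighted by gradient magnitude. This minimal version omits the block normalization of a full HOG pipeline, and the SVM training stage, so it is an illustration of the descriptor rather than the paper's detector.

```python
import numpy as np

def hog_features(gray, cell=8, bins=9):
    """Minimal HOG descriptor: per-cell orientation histograms only."""
    gray = gray.astype(float)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]    # horizontal gradient
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]    # vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    feats = []
    h, w = gray.shape
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

patch = np.zeros((16, 16))
patch[:, 8:] = 255          # vertical edge -> strong horizontal gradients
desc = hog_features(patch)  # 2x2 cells x 9 bins = 36 values
```

Because HOG encodes edge structure rather than color, it remains informative when a sign's color is washed out at night or in fog, which is exactly the motivation stated in the abstract.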

A Driving Information Centric Information Processing Technology Development Based on Image Processing (영상처리 기반의 운전자 중심 정보처리 기술 개발)

  • Yang, Seung-Hoon;Hong, Gwang-Soo;Kim, Byung-Gyu
    • Convergence Security Journal / v.12 no.6 / pp.31-37 / 2012
  • Today, the core technology of the automobile is evolving into IT-based convergence system technology. To cope with many kinds of situations and provide convenience for drivers, various IT technologies are being integrated into automobile systems. In this paper, we propose a convergence system, called the Augmented Driving System (ADS), to provide high safety and convenience for drivers based on image information processing. Image data are acquired from an imaging sensor and processed by the proposed methods to estimate the distance to the car ahead and to detect lanes and traffic sign panels. A converged interface technology, combining a camera for gesture recognition and a microphone for speech recognition, is also provided. With this kind of system technology, car accidents can be reduced even when drivers fail to recognize dangerous situations, since the system can recognize the situation or user context and direct attention to the view ahead. Through the experiments, the proposed methods achieved over 90% recognition in traffic sign detection, lane detection, and distance measurement to the car ahead.

Real-Time Cattle Action Recognition for Estrus Detection

  • Heo, Eui-Ju;Ahn, Sung-Jin;Choi, Kang-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.4 / pp.2148-2161 / 2019
  • In this paper, we present a real-time cattle action recognition algorithm that detects the estrus phase of cattle from a live video stream. To classify cattle movement, specifically to detect the mounting action, the most observable sign of the estrus phase, a simple yet effective feature description exploiting motion history images (MHI) is designed. By learning the proposed features within the support vector machine framework, various representative cattle actions, such as mounting, walking, tail wagging, and foot stamping, can be recognized robustly in complex scenes. Thanks to the low complexity of the proposed action recognition algorithm, multiple cattle in three enclosures can be monitored simultaneously using a single fisheye camera. Through extensive experiments with real video streams, we confirmed that the proposed algorithm outperforms a conventional human action recognition algorithm by 18% in recognition accuracy, even with a much smaller-dimensional feature description.
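A motion history image is maintained with a simple per-frame update: pixels where motion is detected are stamped with the current timestamp, and stamps older than a fixed duration decay to zero. The sketch below shows this standard MHI update rule; the frame-differencing threshold and duration are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def update_mhi(mhi, prev, curr, timestamp, duration=10.0, thresh=30):
    """Update a motion history image from two consecutive grayscale frames."""
    motion = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    mhi = np.where(motion, timestamp, mhi)   # stamp moving pixels with "now"
    mhi[mhi < timestamp - duration] = 0      # forget motion older than duration
    return mhi

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[0, 0] = 200                             # one moving pixel
mhi = np.zeros((4, 4), dtype=float)
mhi = update_mhi(mhi, prev, curr, timestamp=1.0)
```

Recent motion thus appears as high values fading over time, and features derived from this image summarize how, and how fast, a region moved.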

Damaged Traffic Sign Recognition using Hopfield Networks and Fuzzy Max-Min Neural Network (홉필드 네트워크와 퍼지 Max-Min 신경망을 이용한 손상된 교통 표지판 인식)

  • Kim, Kwang Baek
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.11 / pp.1630-1636 / 2022
  • Current traffic sign detection methods are hindered by environmental conditions and by the condition of the sign itself. Therefore, in this paper, we propose a method of improving detection performance for damaged traffic signs by utilizing a Hopfield Network and a Fuzzy Max-Min Neural Network. In the proposed method, the characteristics of damaged traffic signs are analyzed and configured as training patterns, which the Fuzzy Max-Min Neural Network uses to initially classify the characteristics of the traffic signs. The initially classified images are then restored using the Hopfield Network. The restored images are classified by the Fuzzy Max-Min Neural Network once again to finally classify and detect the damaged traffic signs. Eight traffic signs with varying degrees of damage were used to evaluate the proposed method, which showed an average improvement of 38.76% in classification performance over the Fuzzy Max-Min Neural Network alone.
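The Hopfield restoration step can be sketched with the classic Hebbian formulation on bipolar (+1/-1) patterns: stored patterns become attractors, so a damaged input settles back to its undamaged version. This is a textbook sketch, not the paper's network (and the fuzzy Max-Min classifier is omitted).

```python
import numpy as np

class Hopfield:
    """Hebbian-trained Hopfield network for bipolar (+1/-1) patterns."""

    def __init__(self, n):
        self.W = np.zeros((n, n))

    def train(self, patterns):
        for p in patterns:
            self.W += np.outer(p, p)     # Hebbian weight update
        np.fill_diagonal(self.W, 0)      # no self-connections

    def recall(self, x, steps=5):
        x = x.copy()
        for _ in range(steps):
            x = np.sign(self.W @ x)      # synchronous state update
        return x

stored = np.array([1, -1, 1, 1, -1, 1, -1, -1])
net = Hopfield(8)
net.train([stored])
noisy = stored.copy()
noisy[0] = -noisy[0]                     # flip one "damaged" bit
restored = net.recall(noisy)
```

In the paper's pipeline, the restored pattern is what gets re-classified, so restoration quality directly drives the reported improvement.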

Lane Detection and Traffic Sign Recognition for an Autonomous RC Toy Car (자율주행 장난감자동차의 차선 및 신호등 인식)

  • Park, Jae-hyun;Lee, Chang Woo
    • Proceedings of the Korea Contents Association Conference / 2016.05a / pp.417-418 / 2016
  • This paper describes an autonomous driving car system that detects lanes and recognizes traffic lights using a toy car. In the proposed system, a toy car is disassembled and fitted with a Raspberry Pi board and an Arduino board, and the car is implemented to drive while recognizing arbitrarily installed lanes and traffic lights. For lane detection, input images are acquired from a Pi camera mounted on top of the car, and the direction of the car is controlled through lane detection in the lower part of the acquired image. In addition, the system detects and recognizes the green and red signals of a traffic light installed above the track.
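The steer-from-the-lower-image-region loop can be sketched as: threshold the lower strip of the frame, find the centroid of lane pixels, and convert its offset from the image center into a steering command. The brightness threshold and offset mapping are illustrative assumptions.

```python
import numpy as np

def steering_offset(gray_strip, thresh=200):
    """Normalized offset of bright lane pixels from the strip's center.

    Returns a value in [-1, 1]; negative steers left, positive right.
    """
    mask = gray_strip > thresh
    if not mask.any():
        return 0.0                      # no lane found: go straight
    cx = mask.nonzero()[1].mean()       # lane centroid column
    mid = (gray_strip.shape[1] - 1) / 2.0
    return (cx - mid) / mid

strip = np.zeros((10, 100), dtype=np.uint8)
strip[:, 80:85] = 255                   # bright lane marking on the right
offset = steering_offset(strip)         # positive -> steer right
```

On the toy car, the Raspberry Pi would compute this offset per frame and send it as a steering command to the Arduino driving the motors.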


A Decision Tree based Real-time Hand Gesture Recognition Method using Kinect

  • Chang, Guochao;Park, Jaewan;Oh, Chimin;Lee, Chilwoo
    • Journal of Korea Multimedia Society / v.16 no.12 / pp.1393-1402 / 2013
  • Hand gestures are one of the most popular communication methods in everyday life. In human-computer interaction applications, hand gesture recognition provides a natural way of communication between humans and computers. There are mainly two methods of hand gesture recognition: glove-based and vision-based. In this paper, we propose a vision-based hand gesture recognition method using Kinect. Using the depth information makes the hand detection process efficient and robust. Finger labeling allows the system to classify poses according to finger names and the relationships between fingers, which makes the classification more effective and accurate. Two kinds of gesture sets can be recognized by our system. According to the experiments, the average accuracy on the American Sign Language (ASL) number gesture set is 94.33%, and that on the general gesture set is 95.01%. Since our system runs in real time and has a high recognition rate, it can be embedded into various applications.
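Depth-based hand detection with Kinect is commonly done by taking the nearest depth band, on the assumption that the user's hand is the closest object to the sensor. The sketch below illustrates that idea; the band width and the assumption that zero marks an invalid reading are ours, not details from the paper.

```python
import numpy as np

def hand_mask_from_depth(depth, band=150):
    """Segment the hand as the nearest depth band in a Kinect depth map.

    Assumes the hand is the closest valid object to the sensor and that
    zeros mark invalid readings; the band width (in mm) is an assumption.
    """
    valid = depth > 0
    nearest = depth[valid].min()
    return valid & (depth < nearest + band)

depth = np.full((4, 4), 2000, dtype=np.uint16)  # background at 2 m
depth[0, 0] = 0                                  # invalid reading
depth[1:3, 1:3] = 600                            # hand at 60 cm
mask = hand_mask_from_depth(depth)
```

The resulting hand region would then be passed to finger labeling and the decision-tree pose classifier described in the abstract.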

Detection Accuracy Improvement of Hand Region using Kinect (키넥트를 이용한 손 영역 검출의 정확도 개선)

  • Kim, Heeae;Lee, Chang Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.11 / pp.2727-2732 / 2014
  • Recently, object tracking and recognition using Microsoft's Kinect have been actively studied. In this environment, human hand detection and tracking is the most basic technique for human-computer interaction. This paper proposes a method of improving the accuracy of the detected hand region's boundary against a cluttered background. To do this, we combine the hand detection results obtained using skin color with the depth image extracted from Kinect. The experimental results show that the proposed method increases the accuracy of hand region detection compared with detecting the hand region from the depth image alone. If applied to a sign language or gesture recognition system, the proposed method is expected to contribute significantly to accuracy improvement.
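At its core, combining the two cues amounts to intersecting the skin-color mask with the depth-based mask, which suppresses skin-colored clutter in the background and depth-near non-skin objects alike. The masks below are synthetic stand-ins for illustration.

```python
import numpy as np

def refine_hand_mask(skin_mask, depth_mask):
    """Keep only pixels that are both skin-colored and in the near-depth band."""
    return skin_mask & depth_mask

skin = np.zeros((3, 3), dtype=bool)
skin[0, :] = True            # skin color: hand plus a skin-toned background object
depth = np.zeros((3, 3), dtype=bool)
depth[:, 0] = True           # near-depth band: hand only
refined = refine_hand_mask(skin, depth)
```

Only the pixel supported by both cues survives, which is how the combination tightens the hand region's boundary in a cluttered scene.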