• Title/Summary/Keyword: Vision recognition


Positioning Method Using a Vehicular Black-Box Camera and a 2D Barcode in an Indoor Parking Lot (스마트폰 카메라와 2차원 바코드를 이용한 실내 주차장 내 측위 방법)

  • Song, Jihyun;Lee, Jae-sung
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.1 / pp.142-152 / 2016
  • GPS cannot be used for indoor positioning, and most techniques currently emerging to overcome this limitation rely on private wireless networks. However, these methods require high installation and maintenance costs, and they are inappropriate for places where precise positioning is needed, such as indoor parking lots. This paper proposes a vehicular indoor positioning method based on QR-code recognition. The method obtains an absolute coordinate by scanning a QR code, and derives the location (a relative coordinate) of a black-box camera by correcting the tilt and roll angles through an affine transformation, a scale transformation, and trigonometric functions. Using the absolute and relative coordinates together, the precise position of the car is estimated. As a result, an average error of 13.79 cm is achieved, which corresponds to only 27.6% of the 50 cm error of recent techniques based on wireless networks.
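As a rough illustration of the QR-code positioning idea described in this abstract, the sketch below decodes a code that is assumed to carry an absolute map coordinate and estimates a coarse camera offset with a plain pinhole model; it omits the paper's affine and tilt/roll corrections, and the calibration constants (FOCAL_PX, QR_SIZE_M) and the coordinate encoding are hypothetical.

```python
# Minimal sketch (not the authors' implementation): decode a QR code that is
# assumed to encode an absolute map coordinate, then estimate a rough relative
# offset of the camera from the code using a simple pinhole model.
import cv2
import numpy as np

FOCAL_PX = 800.0    # assumed focal length in pixels (hypothetical calibration)
QR_SIZE_M = 0.20    # assumed physical side length of the QR code in metres

def locate_from_qr(frame_bgr):
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(frame_bgr)
    if not data or points is None:
        return None
    corners = points.reshape(-1, 2)                    # four corners of the code
    side_px = np.mean([np.linalg.norm(corners[i] - corners[(i + 1) % 4])
                       for i in range(4)])             # mean side length in pixels
    depth_m = FOCAL_PX * QR_SIZE_M / side_px           # distance along the optical axis
    cx = frame_bgr.shape[1] / 2.0
    lateral_m = (corners.mean(axis=0)[0] - cx) * depth_m / FOCAL_PX
    # 'data' is assumed to encode the absolute coordinate, e.g. "12.5,3.0"
    abs_x, abs_y = map(float, data.split(","))
    return (abs_x, abs_y), (lateral_m, depth_m)        # absolute fix + camera offset
```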

Finger-Gesture Recognition Using Concentric-Circle Tracing Algorithm (동심원 추적 알고리즘을 사용한 손가락 동작 인식)

  • Hwang, Dong-Hyun;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.12 / pp.2956-2962 / 2015
  • In this paper, we propose a novel concentric-circle tracing algorithm which recognizes finger shapes and counts the number of extended fingers using a low-cost web camera. The use of an inexpensive web camera improves the algorithm's practicality, and user comfort is enhanced because no additional marker or sensor is required. Besides counting fingers, the algorithm efficiently extracts shape information indicating whether each finger is straight or folded. Experimental results show that finger gestures can be recognized with an average accuracy of 95.48%, confirming that hand gestures are a useful method for HCI input and remote-control commands.
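The sketch below illustrates the general concentric-circle idea, not the paper's algorithm: sample a circle just outside the palm of a binary hand mask and count the foreground arcs that cross it, treating one arc as the wrist. The radius scale and the wrist assumption are arbitrary choices for the example.

```python
# Illustrative sketch of circle-sampling finger counting on a binary hand mask.
import cv2
import numpy as np

def count_fingers(hand_mask, radius_scale=1.6):
    # hand_mask: 8-bit binary image with hand pixels set to 255
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
    _, max_val, _, center = cv2.minMaxLoc(dist)       # palm centre = deepest point
    radius = max_val * radius_scale                   # circle just outside the palm
    angles = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
    xs = (center[0] + radius * np.cos(angles)).astype(int)
    ys = (center[1] + radius * np.sin(angles)).astype(int)
    h, w = hand_mask.shape
    samples = np.zeros(len(angles), dtype=int)
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    samples[inside] = (hand_mask[ys[inside], xs[inside]] > 0).astype(int)
    # each foreground arc crossing the circle produces one rising edge
    rising = np.count_nonzero((samples - np.roll(samples, 1)) == 1)
    return max(rising - 1, 0)                         # subtract one arc for the wrist
```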

Illumination Robust Feature Descriptor Based on Exact Order (조명 변화에 강인한 엄격한 순차 기반의 특징점 기술자)

  • Kim, Bongjoe;Sohn, Kwanghoon
    • Journal of Broadcast Engineering / v.18 no.1 / pp.77-87 / 2013
  • In this paper, we present a novel local image descriptor called the exact order based descriptor (EOD), which is robust to illumination changes and Gaussian noise. Exact orders of an image patch are induced by converting each discrete intensity value into a k-dimensional continuous vector, which resolves the ordering ambiguity among pixels with the same intensity. The EOD is generated from the overall distribution of exact orders in the patch. The proposed descriptor is compared with several state-of-the-art descriptors over a number of images. Experimental results show that the proposed method outperforms many state-of-the-art descriptors in the presence of illumination changes, blur, and viewpoint changes. The proposed method can also be used in many computer vision applications such as face recognition, texture recognition, and image analysis.
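To make the order-based intuition concrete, here is a much simpler rank-transform descriptor than the EOD (it skips the k-dimensional embedding that resolves ties); it only demonstrates why intensity ranks are unchanged by monotonic illumination changes.

```python
# Simplified rank-based patch descriptor; not the EOD itself.
import numpy as np

def rank_descriptor(patch):
    flat = patch.astype(np.float64).ravel()
    order = np.argsort(flat, kind="stable")
    ranks = np.argsort(order).astype(np.float64)   # rank of each pixel in raster order
    return ranks / (flat.size - 1)                 # normalised to [0, 1]

# A gain/offset change (I' = a*I + b with a > 0) preserves the ordering and the
# tie pattern, so the descriptor is unchanged:
patch = np.random.randint(0, 256, (16, 16))
assert np.allclose(rank_descriptor(patch), rank_descriptor(2 * patch + 10))
```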

Object Detection Using Combined Random Fern for RGB-D Image Format (RGB-D 영상 포맷을 위한 결합형 무작위 Fern을 이용한 객체 검출)

  • Lim, Seung-Ouk;Kim, Yu-Seon;Lee, Si-Woong
    • The Journal of the Korea Contents Association / v.16 no.9 / pp.451-459 / 2016
  • While object detection plays a key role in many computer vision applications, it requires extensive computation to remain robust under varying lighting and geometric distortions. Recently, some approaches formulate the problem in a classification framework and show improved performance in object recognition. Among them, the random fern algorithm has drawn a lot of attention because of its simple structure and high recognition rate. However, it suffers performance degradation under illumination changes and added noise, since it computes patch features based only on pixel intensities. In this paper, we propose a combined random fern structure which incorporates depth information into the conventional random fern, reflecting the 3D structure of the patch. In addition, a new object tracker that exploits the combined random fern is introduced. Experiments show that the proposed method provides superior object-detection performance under illumination changes and noisy conditions compared to conventional methods.
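A hedged sketch of a fern-style binary feature that mixes intensity and depth comparisons follows; it is meant only to show the flavor of a combined random fern, with made-up patch sizes and test counts rather than the authors' design.

```python
# Fern-style binary tests over an intensity patch and an aligned depth patch.
import numpy as np

rng = np.random.default_rng(0)

def make_fern(patch_size=32, n_tests=10):
    # each binary test compares two random pixel locations, drawn from either
    # the intensity channel or the depth channel
    pts = rng.integers(0, patch_size, size=(n_tests, 4))
    use_depth = rng.integers(0, 2, size=n_tests).astype(bool)
    return pts, use_depth

def fern_index(intensity_patch, depth_patch, fern):
    pts, use_depth = fern
    idx = 0
    for (y1, x1, y2, x2), on_depth in zip(pts, use_depth):
        src = depth_patch if on_depth else intensity_patch
        idx = (idx << 1) | int(src[y1, x1] > src[y2, x2])
    return idx   # index into a per-class histogram of size 2**n_tests

# Training accumulates, per class, a histogram of fern indices over labelled
# patches; a new patch is classified by multiplying the per-fern posteriors.
```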

A Study on the Implementation of RFID-based Autonomous Navigation System for Robotic Cellular Phone(RCP)

  • Choe, Jae-Il;Choi, Jung-Wook;Oh, Dong-Ik;Kim, Seung-Woo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.457-462 / 2005
  • The industrial and economic importance of the cellular phone (CP) is growing rapidly. Combined with IT technology, the CP is currently one of the most attractive technologies of all, but unless a breakthrough is found, its growth may soon slow down. Robot technology (RT) is considered one of the most promising next-generation technologies. Unlike the industrial robots of the past, today's robots require advanced technologies such as soft computing, human-friendly interfaces, interaction techniques, speech recognition, and object recognition. In this study, we present a new technological concept named the Robotic Cellular Phone (RCP), which combines RT and CP, with the vision of opening a new direction for the advancement of CP, IT, and RT together. RCP consists of three sub-modules, including $RCP^{Mobility}$ and $RCP^{Interaction}$. $RCP^{Mobility}$, the main focus of this paper, is an autonomous navigation system that combines RT mobility with the CP. Through $RCP^{Mobility}$, the CP can be provided with robotic functionalities such as auto-charging and real-world robotic entertainment; eventually, the CP may become a robotic pet for human beings. $RCP^{Mobility}$ consists of various controllers, the two main ones being the trajectory controller and the self-localization controller. While the trajectory controller is responsible for the wheel-based navigation of the RCP, the self-localization controller provides localization information for the moving RCP. With the coordinate information acquired from the RFID-based self-localization controller, the trajectory controller refines the RCP's movement to achieve better navigation. In this paper, a prototype system developed for $RCP^{Mobility}$ is presented. We describe the overall structure of the system and provide experimental results of RCP navigation.
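As a loose illustration of how an RFID fix could correct wheel-odometry drift in a setup like $RCP^{Mobility}$, the snippet below blends a dead-reckoned estimate toward known tag coordinates; the tag map and blending weight are invented for the example and do not reflect the paper's controllers.

```python
# Toy fusion of wheel odometry with RFID position fixes.
import math

TAG_POSITIONS = {"tag_A": (0.0, 0.0), "tag_B": (2.0, 0.0)}  # hypothetical tag map (m)
ALPHA = 0.7   # assumed weight given to an RFID fix when one is available

class Localizer:
    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y, self.heading = x, y, heading

    def update_odometry(self, distance, dheading):
        # dead-reckoning step from wheel encoders (trajectory side)
        self.heading += dheading
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)

    def update_rfid(self, tag_id):
        # pull the dead-reckoned estimate toward the known tag coordinate
        if tag_id in TAG_POSITIONS:
            tx, ty = TAG_POSITIONS[tag_id]
            self.x = (1.0 - ALPHA) * self.x + ALPHA * tx
            self.y = (1.0 - ALPHA) * self.y + ALPHA * ty
```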


A Study on the Development of Pavement Crack Recognition Algorithm Using Artificial Neural Network (신경망 학습 기법을 이용한 도로면 크랙 인식 알고리즘 개발에 관한 연구)

  • Yoo Hyun-Seok;Lee Jeong-Ho;Kim Young-suk;Sung Nak-won
    • Proceedings of the Korean Institute of Construction Engineering and Management / 2004.11a / pp.561-564 / 2004
  • Crack-sealing automation machines have been continually developed since the early 1990s because effective crack sealing can improve safety, quality, and productivity. Detecting a crack network in pavement that includes noise (oil marks, skid marks, previously sealed cracks, and inherent noise) has been considered a challenging problem, and a crack-network mapping and modeling algorithm is required in order to inject sealant accurately along the middle of the crack network. The primary objective of this study is to propose a crack-network mapping and modeling algorithm using a neural network to improve the accuracy of the algorithm used in the APCS. It is anticipated that effective use of the proposed algorithm would reduce the error rate in image processing for detecting, mapping, and modeling crack networks, as well as improve quality and productivity compared to existing vision algorithms.
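The sketch below shows one conventional way to phrase patch-based crack recognition with a small neural network; the patch size, layer sizes, and the idea that the weights come from backpropagation on labelled patches are assumptions, not details taken from the paper.

```python
# Single-hidden-layer patch classifier for crack vs. non-crack (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
PATCH = 8        # assumed patch size in pixels
HIDDEN = 16      # assumed hidden-layer width

W1 = rng.normal(0.0, 0.1, (PATCH * PATCH, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, HIDDEN)
b2 = 0.0

def predict_crack(patch):
    """Probability that an 8x8 grayscale patch contains crack pixels."""
    x = patch.astype(np.float64).ravel() / 255.0
    h = np.tanh(x @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output

# In practice W1, b1, W2, b2 would be learned by backpropagation on labelled
# crack / non-crack patches; sliding the classifier over the pavement image
# then yields a crack map for the mapping-and-modeling stage.
```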


Violent Behavior Detection using Motion Analysis in Surveillance Video (감시 영상에서 움직임 정보 분석을 통한 폭력행위 검출)

  • Kang, Joohyung;Kwak, Sooyeong
    • Journal of Broadcast Engineering / v.20 no.3 / pp.430-439 / 2015
  • The demand for violence-detection techniques that use video analysis to help prevent crime has been increasing recently. Many researchers have studied vision-based behavior recognition, but violent-behavior analysis techniques usually focus on violent scenes in television and movie content. Previously published methods typically use both color (e.g., skin and blood) and motion information to detect violent scenes, because violence in movies usually involves blood. However, color cues such as blood are rarely present in real-world footage and are therefore of little use for violence detection in surveillance videos. In this paper, we propose a method for detecting violent behavior in surveillance videos that uses only motion vectors, such as flow-vector magnitudes and changes in direction, without color information. To evaluate the proposed algorithm, we test it on both the USI dataset and various real-world surveillance videos from YouTube.
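A minimal version of the motion statistics described here might look like the following: dense optical flow between consecutive frames, with mean magnitude and mean direction change thresholded to flag candidate violent segments. The Farneback parameters and thresholds are placeholders, not values from the paper.

```python
# Dense-flow magnitude and direction-change statistics per frame pair.
import cv2
import numpy as np

MAG_THRESH = 6.0   # assumed mean flow-magnitude threshold (pixels/frame)
DIR_THRESH = 1.0   # assumed mean direction-change threshold (radians)

def violence_score(prev_gray, curr_gray, prev_angle=None):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mean_mag = float(mag.mean())
    if prev_angle is None:
        mean_dir_change = 0.0
    else:
        diff = np.abs(ang - prev_angle)
        diff = np.minimum(diff, 2 * np.pi - diff)   # circular difference
        mean_dir_change = float(diff.mean())
    is_violent = mean_mag > MAG_THRESH and mean_dir_change > DIR_THRESH
    return is_violent, ang   # pass 'ang' back in as prev_angle for the next frame
```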

Indoor Location Positioning System for Image Recognition based LBS (영상인식 기반의 위치기반서비스를 위한 실내위치인식 시스템)

  • Kim, Jong-Bae
    • Journal of Korea Spatial Information System Society / v.10 no.2 / pp.49-62 / 2008
  • This paper proposes an indoor location positioning system for image-recognition-based LBS. The proposed system is a vision-based positioning system that implements augmented reality by overlaying the positioning results on the user's view. It recognizes the user's location in image sequences taken by a wearable mobile PC with a camera, using pattern matching and a location model. The system estimates the user's location through image-sequence matching and marker detection, and then recognizes the location with the pre-defined location model. To detect markers in the image sequences, the system applies an adaptive thresholding method, and using the location model for recognition yields more accurate and efficient results. Experimental results show that the proposed system has both the quality and the performance required for indoor location-based services (LBS) for visitors in various environments.
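The marker-detection step can be sketched as adaptive thresholding followed by a search for convex quadrilateral blobs, as below; block size, offset, and area limits are illustrative guesses rather than the paper's settings.

```python
# Adaptive thresholding + quadrilateral candidate search for marker detection.
import cv2
import numpy as np

def find_marker_candidates(gray):
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 7)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for c in contours:
        if cv2.contourArea(c) < 400:          # skip tiny blobs
            continue
        approx = cv2.approxPolyDP(c, 0.05 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            quads.append(approx.reshape(4, 2))
    return quads   # each candidate can then be matched against the location model
```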


A study on stand-alone autonomous mobile robot using mono camera (단일 카메라를 사용한 독립형 자율이동로봇 개발)

  • 정성보;이경복;장동식
    • Journal of the Institute of Convergence Signal Processing / v.4 no.1 / pp.56-63 / 2003
  • This paper introduces a vision-based autonomous mini mobile robot as an approach to building a truly autonomous vehicle. Previous autonomous vehicles have depended on a PC because of the complexity of the hardware design, the difficulty of installation, and the heavy computational load. In this paper, we present an autonomous mobile robot system with accurate steering, quick movement at high speed, and intelligent recognition as a stand-alone system using a mono camera. The proposed system has been implemented on a mini track 25~30 cm wide and about 200 cm long. The test robot can run at an average speed of 32.9 km/h on a straight lane and 22.3 km/h on a curved lane with a 30~40 m radius. This system provides a model of an autonomous mobile robot with a lane-recognition algorithm, making it easier to build a real autonomous vehicle.
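For a flavor of the kind of lane recognition a stand-alone mono-camera robot can run, the sketch below uses Canny edges and a probabilistic Hough transform over the lower half of the frame; it is a generic pipeline, not the algorithm developed in the paper.

```python
# Generic lane-line detection from a single forward-facing camera frame.
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)
    h, w = edges.shape
    edges[: h // 2, :] = 0                            # keep only the lower half (road)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 40,
                            minLineLength=30, maxLineGap=10)
    if lines is None:
        return []
    return [tuple(seg[0]) for seg in lines]           # (x1, y1, x2, y2) segments
```

A steering command could then be derived from the average slope and offset of the returned segments relative to the image centre.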


Virtual Contamination Lane Image and Video Generation Method for the Performance Evaluation of the Lane Departure Warning System (차선 이탈 경고 시스템의 성능 검증을 위한 가상의 오염 차선 이미지 및 비디오 생성 방법)

  • Kwak, Jae-Ho;Kim, Whoi-Yul
    • Transactions of the Korean Society of Automotive Engineers / v.24 no.6 / pp.627-634 / 2016
  • In this paper, an augmented video generation method for evaluating the performance of lane departure warning systems is proposed. The input to our system is a video of a road scene with ordinary clean lanes, and the output video has the same content but with the lanes synthetically contaminated. Two approaches are used to synthesize the contaminated lane image: example-based image synthesis and background-based image synthesis. Example-based synthesis assumes a situation in which contamination is applied on top of the lane, while background-based synthesis represents a lane partially erased by aging. A new contamination pattern generation method using a Gaussian function is also proposed in order to produce contamination of various shapes and sizes. The contaminated lane video is generated by shifting the synthesized image according to the lane movement measured empirically. Our experiments show that the similarity between the generated contaminated lane images and real lane images is over 90%. Furthermore, the reliability of the generated video is verified through an analysis of the change in lane recognition rate; in other words, the recognition rate on the video generated by the proposed method is very similar to that on real contaminated-lane video.
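The Gaussian contamination idea can be sketched as below: build a Gaussian-shaped alpha map and blend a dark "dirt" color onto a clean lane patch. The sigma values, darkness level, and blending rule are illustrative assumptions, not the parameters used in the paper.

```python
# Gaussian contamination pattern blended onto a clean lane patch.
import numpy as np

def gaussian_patch(h, w, sigma_y, sigma_x, amplitude=1.0):
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    g = np.exp(-(((ys - cy) ** 2) / (2 * sigma_y ** 2) +
                 ((xs - cx) ** 2) / (2 * sigma_x ** 2)))
    return amplitude * g                               # alpha map in [0, amplitude]

def contaminate_lane(lane_patch_bgr, sigma_y=8.0, sigma_x=20.0, darkness=120):
    """Darken an HxWx3 clean-lane patch with a Gaussian-shaped contamination blob."""
    h, w = lane_patch_bgr.shape[:2]
    alpha = gaussian_patch(h, w, sigma_y, sigma_x)     # per-pixel blend weight
    dirt = np.full_like(lane_patch_bgr, darkness, dtype=np.float64)
    out = (1.0 - alpha[..., None]) * lane_patch_bgr + alpha[..., None] * dirt
    return out.clip(0, 255).astype(np.uint8)
```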