• Title/Summary/Keyword: vision-based recognition


Fast On-Road Vehicle Detection Using Reduced Multivariate Polynomial Classifier (축소 다변수 다항식 분류기를 이용한 고속 차량 검출 방법)

  • Kim, Joong-Rock;Yu, Sun-Jin;Toh, Kar-Ann;Kim, Do-Hoon;Lee, Sang-Youn
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.8A / pp.639-647 / 2012
  • Vision-based on-road vehicle detection is one of the key techniques in automotive driver assistance systems. However, due to the huge within-class variability in vehicle appearance and environmental changes, it remains a challenging task to develop an accurate and reliable detection system. In general, a vehicle detection system consists of two steps. The candidate locations of vehicles are found in the Hypothesis Generation (HG) step, and the detected locations from the HG step are verified in the Hypothesis Verification (HV) step. Since the final decision is made in the HV step, the HV step is crucial for accurate detection. In this paper, we propose using a reduced multivariate polynomial pattern classifier (RM) for the HV step. Our experimental results show that the RM classifier outperforms the well-known Support Vector Machine (SVM) classifier, particularly in terms of decision speed, which makes it suitable for real-time implementation.
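
A minimal sketch of the general idea behind this kind of classifier, not the authors' exact reduced-polynomial term set: expand the input features into low-order polynomial terms and train by regularized least squares, whose closed-form solution is what makes test-time decisions fast. All names, parameters, and data below are illustrative.

```python
# Illustrative sketch only: a generic low-order polynomial expansion
# trained by ridge-regularized least squares. The closed-form solve is
# what makes test-time decisions fast; the exact RM term set differs.
import numpy as np

def expand(X, order=2):
    """Stack bias, x, x^2, ..., x^order as feature columns."""
    return np.hstack([np.ones((X.shape[0], 1))] +
                     [X ** k for k in range(1, order + 1)])

def fit(X, y, order=2, reg=1e-3):
    """Solve (P'P + reg*I) a = P'y in closed form."""
    P = expand(X, order)
    return np.linalg.solve(P.T @ P + reg * np.eye(P.shape[1]), P.T @ y)

def verify(X, alpha, order=2, thresh=0.5):
    """HV step: accept a vehicle hypothesis when the score is high."""
    return expand(X, order) @ alpha > thresh

# Toy usage on synthetic feature vectors (hypothetical data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(float)
alpha = fit(X, y)
print(verify(X, alpha).mean())
```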

Odor Cognition and Source Tracking of an Intelligent Robot based upon Wireless Sensor Network (센서 네트워크 기반 지능 로봇의 냄새 인식 및 추적)

  • Lee, Jae-Yeon;Kang, Geun-Taek;Lee, Won-Chang
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.1 / pp.49-54 / 2011
  • In this paper, we present a mobile robot that can recognize chemical odors, measure their concentration, and track their source indoors. The mobile robot has an artificial sense of smell: it can classify several gases used in our experiments, such as ammonia, ethanol, and their mixture, with a neural network algorithm, and measure each gas concentration with fuzzy rules. In addition, it can not only navigate to a desired position with its vision system while avoiding obstacles, but also transmit odor information and warning messages obtained from its own operations to other nodes by multi-hop communication in a wireless sensor network. We propose a method of odor classification, concentration measurement, and source tracking for a mobile robot in a wireless sensor network using a hybrid algorithm that combines a vision system and gas sensors. Experimental studies demonstrate the efficiency of the proposed algorithm for odor recognition, concentration measurement, and source tracking.
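
The abstract pairs a neural-network gas classifier with fuzzy rules for concentration. Below is a minimal sketch of the fuzzy-rule side only, assuming a single normalized sensor reading in [0, 1]; the membership breakpoints and ppm levels are invented for illustration, not taken from the paper.

```python
# Illustrative fuzzy rules for gas concentration from one normalized
# sensor reading; breakpoints and ppm levels are invented.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def concentration(reading):
    """Weighted-average defuzzification over LOW/MID/HIGH rules."""
    rules = [  # (membership, representative concentration in ppm)
        (tri(reading, -0.5, 0.0, 0.5), 10.0),   # LOW
        (tri(reading, 0.2, 0.5, 0.8), 50.0),    # MID
        (tri(reading, 0.5, 1.0, 1.5), 100.0),   # HIGH
    ]
    total = sum(m for m, _ in rules)
    return sum(m * c for m, c in rules) / total if total else 0.0

print(concentration(0.65))  # falls between the MID and HIGH levels
```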

A Study on the Implementation of RFID-based Autonomous Navigation System for Robotic Cellular Phone (RCP)

  • Choe, Jae-Il;Choi, Jung-Wook;Oh, Dong-Ik;Kim, Seung-Woo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.457-462 / 2005
  • The industrial and economic importance of the CP (Cellular Phone) is growing rapidly. Combined with IT technology, the CP is currently one of the most attractive technologies for all. However, unless we find a breakthrough in the technology, its growth may slow down soon. RT (Robot Technology) is considered one of the most promising next-generation technologies. Unlike the industrial robots of the past, today's robots require advanced technologies, such as soft computing, a human-friendly interface, interaction techniques, speech recognition, object recognition, and many others. In this study, we present a new technological concept named RCP (Robotic Cellular Phone), which combines RT and CP, with the vision of opening a new direction for the advance of CP, IT, and RT all together. RCP consists of 3 sub-modules: $RCP^{Mobility}$, $RCP^{Interaction}$, and $RCP^{Integration}$. $RCP^{Mobility}$ is the main focus of this paper. It is an autonomous navigation system that combines RT mobility with the CP. Through $RCP^{Mobility}$, we should be able to provide the CP with robotic functionalities such as auto-charging and real-world robotic entertainment. Eventually, the CP may become a robotic pet to human beings. $RCP^{Mobility}$ consists of various controllers, two of the main ones being the trajectory controller and the self-localization controller. While the trajectory controller is responsible for the wheel-based navigation of the RCP, the self-localization controller provides localization information for the moving RCP. With the coordinate information acquired from the RFID-based self-localization controller, the trajectory controller refines the RCP's movement to achieve better navigation. In this paper, a prototype system we developed for $RCP^{Mobility}$ is presented. We describe the overall structure of the system and provide experimental results of the RCP navigation.
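
A rough sketch of the interplay the abstract describes between the two controllers, assuming a differential-drive base: an RFID tag read snaps the dead-reckoned pose to a surveyed coordinate, which the trajectory controller then steers from. The tag database, gains, and poses are hypothetical placeholders.

```python
# Illustrative control loop: an RFID read corrects the dead-reckoned
# pose, then a simple P-controller steers toward the goal.
import math

def rfid_fix(tag_db, tag_id):
    """Self-localization: map a detected tag ID to its surveyed (x, y)."""
    return tag_db.get(tag_id)

def steer(pose, goal, k_turn=1.0, speed=0.2):
    """Trajectory control: P-control on heading error toward the goal."""
    x, y, heading = pose
    desired = math.atan2(goal[1] - y, goal[0] - x)
    err = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    return speed, k_turn * err  # (linear velocity, angular velocity)

tag_db = {"tag_17": (2.0, 3.0)}      # surveyed RFID tag positions
pose = (1.9, 2.8, 0.1)               # dead-reckoned (x, y, heading)
fix = rfid_fix(tag_db, "tag_17")
if fix:                              # snap position to the RFID fix
    pose = (fix[0], fix[1], pose[2])
print(steer(pose, goal=(4.0, 3.0)))
```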


Indoor Location Positioning System for Image Recognition based LBS (영상인식 기반의 위치기반서비스를 위한 실내위치인식 시스템)

  • Kim, Jong-Bae
    • Journal of Korea Spatial Information System Society / v.10 no.2 / pp.49-62 / 2008
  • This paper proposes an indoor location positioning system for image recognition based LBS. The proposed system is a vision-based positioning system that implements augmented reality by overlaying the positioning results on the user's view. The system uses pattern matching and a location model to recognize the user's location from image sequences taken by a wearable mobile PC with a camera. The user's location is estimated by image sequence matching and marker detection methods, and then recognized using the pre-defined location model. To detect markers in image sequences, the proposed system applies an adaptive thresholding method, and by using the location model to recognize a location, more accurate and efficient results can be obtained. Experimental results show that the proposed system has both the quality and the performance to be used as an indoor location-based service (LBS) for visitors in various environments.
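
A minimal sketch of the adaptive-thresholding marker detection stage, assuming OpenCV; the block size, offset, and area filter are tuning choices, not values from the paper.

```python
# Illustrative marker-candidate detector using OpenCV's adaptive
# thresholding; blockSize=31, C=7, and the area filter are assumptions.
import cv2

def detect_marker_candidates(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Adaptive thresholding copes with uneven indoor lighting.
    binary = cv2.adaptiveThreshold(gray, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 7)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for c in contours:
        # Keep large quadrilateral blobs as square-marker candidates.
        approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) > 400:
            quads.append(approx)
    return quads
```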


Virtual Contamination Lane Image and Video Generation Method for the Performance Evaluation of the Lane Departure Warning System (차선 이탈 경고 시스템의 성능 검증을 위한 가상의 오염 차선 이미지 및 비디오 생성 방법)

  • Kwak, Jae-Ho;Kim, Whoi-Yul
    • Transactions of the Korean Society of Automotive Engineers / v.24 no.6 / pp.627-634 / 2016
  • In this paper, an augmented video generation method to evaluate the performance of a lane departure warning system is proposed. In our system, the input is a video containing a road scene with clean lanes, and the output video has the same content except that the lanes are synthesized with a contamination image. Two approaches were used to synthesize the contaminated lane image: example-based image synthesis and background-based image synthesis. Example-based synthesis assumes a situation in which contamination is applied to the lane, while background-based synthesis targets the situation in which the lane has been erased by aging. In this paper, a new contamination pattern generation method using a Gaussian function is also proposed in order to produce contamination of various shapes and sizes. The contaminated lane video can then be generated by shifting the synthesized image by an empirically obtained lane movement amount. Our experiments showed that the similarity between the generated contaminated lane image and a real lane image is over 90%. Furthermore, we can verify the reliability of the video generated by the proposed method through an analysis of the change in lane recognition rate. In other words, the recognition rate on the video generated by the proposed method is very similar to that on real contaminated lane video.
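
A small sketch of generating a contamination pattern with a Gaussian function, as the abstract describes; the axis-aligned form, the sigma values, and the blending scheme are assumptions for illustration.

```python
# Illustrative contamination mask from an axis-aligned 2-D Gaussian;
# shape and size vary with the sigmas, as the abstract describes.
import numpy as np

def gaussian_patch(h, w, sigma_y, sigma_x, amplitude=1.0):
    """2-D Gaussian alpha mask centered on an h-by-w patch."""
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return amplitude * np.exp(-((y - cy) ** 2 / (2 * sigma_y ** 2) +
                                (x - cx) ** 2 / (2 * sigma_x ** 2)))

def blend(lane_rgb, dirt_rgb, alpha):
    """Alpha-blend a contamination texture over the lane region."""
    a = alpha[..., None]  # broadcast over the color channels
    return (1 - a) * lane_rgb + a * dirt_rgb

mask = gaussian_patch(40, 80, sigma_y=8, sigma_x=20)  # elongated smudge
```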

Determination of Bar Code Cross-line Based on Block HOG Clustering (블록 HOG 군집화 기반의 1-D 바코드 크로스라인 결정)

  • Kim, Dong Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.7 / pp.996-1003 / 2022
  • In this paper, we present a new method for determining the scan line and range for vision-based 1-D barcode recognition. We study how to detect valid barcode representative points and directions by applying the DBSCAN clustering method to block HOG (histogram of gradients) features, and how to determine scan lines and barcode cross-lines based on this. Minimum and maximum search techniques are applied to determine the cross-line range of a barcode along the obtained scan line, which works regardless of the barcode size. The technique enables barcode recognition even when only a partial area of the barcode is detected, and does not require rotating the image to read the code after the barcode area is detected. In addition, it is possible to detect barcodes of various sizes. Various experimental results are presented to evaluate the performance of the proposed technique.
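
A sketch of the block-HOG clustering step under stated assumptions: scikit-image's hog and scikit-learn's DBSCAN stand in for the paper's implementation, and the dominant-orientation test, block size, and eps are illustrative values, not the authors'.

```python
# Illustrative block-HOG clustering with scikit-image and scikit-learn.
import numpy as np
from skimage.feature import hog
from sklearn.cluster import DBSCAN

def barcode_block_centers(gray, block=16):
    """Centers of blocks whose gradients pile into one orientation bin,
    as the parallel bars of a 1-D barcode produce."""
    pts = []
    h, w = gray.shape
    for y in range(0, h - block, block):
        for x in range(0, w - block, block):
            f = hog(gray[y:y + block, x:x + block], orientations=9,
                    pixels_per_cell=(block, block),
                    cells_per_block=(1, 1), feature_vector=True)
            if f.max() > 0.5:  # one dominant gradient direction
                pts.append((x + block // 2, y + block // 2))
    return np.array(pts)

def barcode_region(pts):
    """DBSCAN groups block centers; the largest cluster is the barcode.
    A line fitted through these points gives the scan line."""
    labels = DBSCAN(eps=24, min_samples=4).fit_predict(pts)
    if (labels >= 0).sum() == 0:
        return pts[:0]
    keep = labels == np.bincount(labels[labels >= 0]).argmax()
    return pts[keep]
```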

RealBook: A Tangible Electronic Book Based on the Interface of TouchFace-V (RealBook: TouchFace-V 인터페이스 기반 실감형 전자책)

  • Song, Dae-Hyeon;Bae, Ki-Tae;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.551-559 / 2013
  • In this paper, we propose a tangible RealBook based on the TouchFace-V interface, which can recognize multi-touch and hand gestures. TouchFace-V applies projection technology to a flat surface, such as a table, without spatial constraints. The system's configuration addresses the installation, calibration, and portability issues of most existing front-projected vision-based tabletop displays. It supports hand touch and gesture input using computer vision tracking technology, without sensors or traditional input devices. The RealBook combines the advantages of the analog sensibility of printed text with the multimedia effects of an e-book. It also provides digitally created stories whose experiences and environments differ according to the choices users make on the interface of the book. We propose a new concept of electronic book, named RealBook, which differs from existing e-books: built on the TouchFace-V interface, it provides more direct viewing and natural, intuitive interaction through hand touch and gestures.

A Study on the Implementation of RFID-Based Autonomous Navigation System for Robotic Cellular Phone (RCP) (RFID를 이용한 RCP 자율 네비게이션 시스템 구현을 위한 연구)

  • Choe Jae-Il;Choi Jung-Wook;Oh Dong-Ik;Kim Seung-Woo
    • Journal of Institute of Control, Robotics and Systems / v.12 no.5 / pp.480-488 / 2006
  • The industrial and economic importance of the CP (Cellular Phone) is growing rapidly. Combined with IT technology, the CP is one of the most attractive technologies of today. However, unless we find a new breakthrough in the technology, its growth may slow down soon. RT (Robot Technology) is considered one of the most promising next-generation technologies. Unlike the industrial robots of the past, today's robots require advanced features, such as soft computing, a human-friendly interface, interaction techniques, speech recognition, object recognition, and many others. In this paper, we present a new technological concept named RCP (Robotic Cellular Phone), which integrates RT and CP with the vision of opening a combined advancement of CP, IT, and RT. RCP consists of 3 sub-modules: $RCP^{Mobility}$ (RCP Mobility System), $RCP^{Interaction}$, and $RCP^{Integration}$. The main focus of this paper is $RCP^{Mobility}$, which combines an autonomous navigation system of RT mobility with the CP. Through $RCP^{Mobility}$, we are able to provide the CP with robotic functions such as auto-charging and real-world robotic entertainment. Ultimately, the CP may become a robotic pet to human beings. $RCP^{Mobility}$ consists of various controllers, two of the main ones being the trajectory controller and the self-localization controller. While the former is responsible for the wheel-based navigation of the RCP, the latter provides localization information for the moving RCP. With the coordinates acquired from the RFID-based self-localization controller, the trajectory controller refines the RCP's movement to achieve better navigation. In this paper, a prototype of $RCP^{Mobility}$ is presented. We describe the overall structure of the system and provide experimental results on the RCP navigation.

A New Ergonomic Interface System for the Disabled Person (장애인을 위한 새로운 감성 인터페이스 연구)

  • Heo, Hwan;Lee, Ji-Woo;Lee, Won-Oh;Lee, Eui-Chul;Park, Kang-Ryoung
    • Journal of the Ergonomics Society of Korea / v.30 no.1 / pp.229-235 / 2011
  • Objective: To develop a new ergonomic interface system based on a camera vision system that helps the handicapped in a home environment. Background: The proposed interface system enables the handicapped to operate consumer electronics. Method: A wearable device for capturing the eye image using a near-infrared (NIR) camera and illuminators is proposed for tracking eye gaze position (Heo et al., 2011). A frontal viewing camera is attached to the wearable device, which can recognize the consumer electronics to be controlled (Heo et al., 2011). The amount of the user's eye fatigue can be measured based on eye blink rate, and in case the fatigue exceeds a predetermined level, the proposed system automatically changes the gaze-based interface mode into a manual selection mode. Results: The experimental results showed that the gaze estimation error of the proposed method was 1.98 degrees with successful recognition of the object by the frontal viewing camera (Heo et al., 2011). Conclusion: We made a new ergonomic interface system based on gaze tracking and object recognition. Application: The proposed system can be used for helping the handicapped in a home environment.
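
A minimal sketch of the fatigue-triggered fallback from gaze control to manual selection, assuming blink timestamps are already available from the eye camera; the window length and blink-rate limit are invented, not the paper's predetermined level.

```python
# Illustrative fatigue-triggered mode switch; thresholds are invented.
from collections import deque

class ModeSwitcher:
    """Switches from gaze control to manual selection when the blink
    rate, used as a fatigue proxy, exceeds a preset limit."""
    def __init__(self, window_s=30.0, max_blinks_per_min=25.0):
        self.window_s = window_s
        self.limit = max_blinks_per_min
        self.blinks = deque()

    def on_blink(self, t):
        """Record a blink timestamp and drop those outside the window."""
        self.blinks.append(t)
        while self.blinks and t - self.blinks[0] > self.window_s:
            self.blinks.popleft()

    def mode(self):
        rate = len(self.blinks) * 60.0 / self.window_s
        return "manual" if rate > self.limit else "gaze"

sw = ModeSwitcher()
for t in (0.5, 2.0, 3.1, 4.0, 5.2):  # blink timestamps in seconds
    sw.on_blink(t)
print(sw.mode())
```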

Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild (준 지도학습과 여러 개의 딥 뉴럴 네트워크를 사용한 멀티 모달 기반 감정 인식 알고리즘)

  • Kim, Dae Ha;Song, Byung Cheol
    • Journal of Broadcast Engineering / v.23 no.3 / pp.351-360 / 2018
  • Human emotion recognition is a research topic that is receiving continuous attention in the computer vision and artificial intelligence domains. This paper proposes a method for classifying human emotions through multiple neural networks based on multi-modal signals consisting of image, landmark, and audio in a wild environment. The proposed method has the following features. First, the learning performance of the image-based network is greatly improved by employing both multi-task learning and semi-supervised learning using the spatio-temporal characteristics of videos. Second, a model for converting one-dimensional (1D) facial landmark information into two-dimensional (2D) images is newly proposed, and a CNN-LSTM network based on the model is proposed for better emotion recognition. Third, based on the observation that audio signals are often very effective for specific emotions, we propose an audio deep learning mechanism robust to those specific emotions. Finally, so-called emotion adaptive fusion is applied to enable synergy between the multiple networks. The proposed network improves emotion classification performance by appropriately integrating existing supervised and semi-supervised learning networks. In the fifth attempt on the given test set of the EmotiW2017 challenge, the proposed method achieved a classification accuracy of 57.12%.
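
A sketch in the spirit of the emotion adaptive fusion step, not the paper's learned scheme: per-network, per-class weights scale each model's softmax scores before the argmax. The seven-class label set follows EmotiW convention, but the weights and scores below are invented for illustration.

```python
# Illustrative score-level fusion with per-network, per-class weights.
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def fuse(probs_by_net, weights_by_net):
    """probs_by_net: {net: (7,) softmax scores}; weights likewise (7,)."""
    fused = np.zeros(len(EMOTIONS))
    for net, p in probs_by_net.items():
        fused += weights_by_net[net] * p  # per-class weighting
    return EMOTIONS[int(np.argmax(fused))]

probs = {"image":    np.array([.10, .00, .10, .50, .20, .05, .05]),
         "landmark": np.array([.20, .10, .10, .30, .20, .05, .05]),
         "audio":    np.array([.30, .00, .20, .10, .20, .10, .10])}
wts = {"image":    np.full(7, 1.0),
       "landmark": np.full(7, 0.6),
       # the audio network gets extra say on the emotions it handles well
       "audio":    np.array([1.2, 0.5, 1.2, 0.4, 0.6, 1.0, 1.0])}
print(fuse(probs, wts))
```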