• Title/Summary/Keyword: vision-based recognition


Deep Learning-Based Face Recognition through Low-Light Enhancement (딥러닝 기반 저조도 향상 기술을 활용한 얼굴 인식 성능 개선)

  • Changwoo Baek;Kyeongbo Kong
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.5 / pp.243-250 / 2024
  • This study explores enhancing facial recognition performance in low-light environments using deep learning-based low-light enhancement techniques. Facial recognition technology is widely used in edge devices such as smartphones, smart home devices, and security systems, but low-light conditions reduce its accuracy due to degraded image quality and increased noise. We reviewed the latest techniques, including Zero-DCE, Zero-DCE++, and SCI (Self-Calibrated Illumination), and applied them as preprocessing steps for facial recognition on edge devices. Using the K-face dataset, experiments on the Qualcomm QRB5165 platform showed a significant improvement in F1 score, from 0.57 to 0.833 with SCI. Processing times were 0.15 ms for SCI, 0.4 ms for Zero-DCE, and 0.7 ms for Zero-DCE++, all much shorter than the 5 ms of the facial recognition model MobileFaceNet. These results indicate that such techniques can be used effectively on resource-limited edge devices, enhancing facial recognition in low-light conditions for various applications.
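The core idea behind curve-based enhancers such as Zero-DCE can be illustrated with a minimal sketch. The cited methods learn per-pixel enhancement curves with a small network; here the same quadratic curve is applied with a fixed, hand-picked `alpha` (an assumption for illustration, not the trained model), which is enough to show how a dark frame is brightened before being handed to a recognizer:

```python
import numpy as np

def enhance_curve(img, alpha=0.6, iterations=4):
    """Apply the quadratic light-enhancement curve LE(x) = x + alpha*x*(1-x),
    iterated a few times. alpha is a fixed scalar here; Zero-DCE predicts
    a per-pixel alpha map with a small CNN instead."""
    x = img.astype(np.float32) / 255.0          # normalize to [0, 1]
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)           # each pass lifts dark values
    return np.clip(x * 255.0, 0, 255).astype(np.uint8)

# A uniformly dark synthetic frame becomes noticeably brighter.
dark = np.full((4, 4), 30, dtype=np.uint8)
bright = enhance_curve(dark)
```

Because the curve has zero effect at x = 0 and x = 1, it brightens shadows while leaving saturated pixels alone, which is why it works as a cheap preprocessing step on edge devices.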

Vision-Based Finger Action Recognition by Angle Detection and Contour Analysis

  • Lee, Dae-Ho;Lee, Seung-Gwan
    • ETRI Journal / v.33 no.3 / pp.415-422 / 2011
  • In this paper, we present a novel vision-based method of recognizing finger actions for use in electronic appliance interfaces. Human skin is first detected by color and consecutive motion information. Then, fingertips are detected by a novel scale-invariant angle detection based on a variable k-cosine. Fingertip tracking is implemented by detected region-based tracking. By analyzing the contour of the tracked fingertip, fingertip parameters, such as position, thickness, and direction, are calculated. Finger actions, such as moving, clicking, and pointing, are recognized by analyzing these fingertip parameters. Experimental results show that the proposed angle detection can correctly detect fingertips, and that the recognized actions can be used for the interface with electronic appliances.
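The scale-invariant angle detection mentioned above is built on the k-cosine measure: the cosine of the angle formed at a contour point with the points k steps before and after it. A minimal sketch (toy contour, fixed k — the paper varies k for scale invariance) looks like:

```python
import numpy as np

def k_cosine(contour, i, k):
    """Cosine of the angle at contour point i, formed with the points
    k steps before and after it (indices wrap around the closed contour).
    Sharp corners such as fingertips give a cosine close to 1."""
    p = contour[i]
    a = contour[(i - k) % len(contour)] - p
    b = contour[(i + k) % len(contour)] - p
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy L-shaped contour: the point at index 1 is a 90-degree corner.
pts = np.array([[0, 5], [0, 0], [5, 0]], dtype=float)
c = k_cosine(pts, 1, 1)   # vectors (0,5) and (5,0) are perpendicular
```

Sweeping k and keeping the extremal response per point would approximate the variable-k scheme the abstract refers to; thresholding the cosine then picks out fingertip candidates.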

Visual Attention Algorithm for Object Recognition (물체 인식을 위한 시각 주목 알고리즘)

  • Ryu, Gwang-Geun;Lee, Sang-Hoon;Suh, Il-Hong
    • Proceedings of the KIEE Conference / 2006.04a / pp.306-308 / 2006
  • We propose an attention-based object recognition system that recognizes objects quickly and robustly. For this, we calculate visual stimulus degrees and build saliency maps. Through these maps we find the strongly attended part of an image according to its stimulus degree, and local features are extracted there to recognize objects.
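As a rough illustration of the saliency-map idea, a naive center-surround operator (an assumption for illustration — the abstract does not specify the authors' stimulus-degree formulation) can mark the attention-grabbing part of an image:

```python
import numpy as np

def saliency_map(img, surround=5):
    """Crude center-surround saliency: each pixel's absolute deviation
    from the mean of its (2*surround+1)-sized neighborhood. Regions
    that differ strongly from their surround 'pop out'."""
    h, w = img.shape
    img = img.astype(np.float32)
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - surround), min(h, y + surround + 1)
            x0, x1 = max(0, x - surround), min(w, x + surround + 1)
            out[y, x] = abs(img[y, x] - img[y0:y1, x0:x1].mean())
    return out

# A bright blob on a dark background is the most salient region,
# so feature extraction would be restricted to its neighborhood.
frame = np.zeros((20, 20), dtype=np.uint8)
frame[8:12, 8:12] = 255
sal = saliency_map(frame)
peak = np.unravel_index(np.argmax(sal), sal.shape)
```

Restricting feature extraction to the peak's neighborhood is what makes attention-based recognition fast: most of the image is never processed by the expensive recognizer.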


A Crosswalk and Stop Line Recognition System for Autonomous Vehicles (무인 자율 주행 자동차를 위한 횡단보도 및 정지선 인식 시스템)

  • Park, Tae-Jun;Cho, Tai-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.2 / pp.154-160 / 2012
  • Recently, development of technologies for autonomous vehicles has been actively carried out. This paper proposes a computer vision system to recognize lanes, crosswalks, and stop lines for autonomous vehicles. This vision system first recognizes lanes required for autonomous driving using the RANSAC algorithm and the Kalman filter, and changes the viewpoint from the perspective-angle view of the street to the top-view using the fact that the lanes are parallel. Then in the reconstructed top-view image this system recognizes a crosswalk based on its geometrical characteristics and searches for a stop line within a region of interest in front of the recognized crosswalk. Experimental results show excellent performance of the proposed vision system in recognizing lanes, crosswalks, and stop lines.
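The RANSAC step mentioned in the abstract can be sketched in a few lines. This toy version fits a single image-space line and omits the Kalman filtering and top-view reconstruction; the point count, tolerance, and iteration budget are illustrative assumptions:

```python
import random
import numpy as np

def ransac_line(points, iters=200, tol=1.0, seed=0):
    """Fit a line y = m*x + b to noisy points by RANSAC: repeatedly
    sample two points, hypothesize a line through them, and keep the
    hypothesis with the most inliers (points within tol of the line)."""
    rng = random.Random(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(list(points), 2)
        if x1 == x2:                       # skip degenerate vertical pair
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        # distance from each point to the line m*x - y + b = 0
        d = np.abs(m * points[:, 0] - points[:, 1] + b) / np.hypot(m, 1.0)
        n = int((d < tol).sum())
        if n > best_inliers:
            best, best_inliers = (m, b), n
    return best

# Points on y = 2x + 1 plus two gross outliers that RANSAC ignores.
pts = np.array([[x, 2 * x + 1] for x in range(10)] + [[3, 40], [7, -30]], float)
m, b = ransac_line(pts)
```

The robustness to outliers is the reason RANSAC suits lane marking detection: stray edge pixels from shadows or other vehicles simply never accumulate enough inliers to win.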

Corridor Navigation of the Mobile Robot Using Image Based Control

  • Han, Kyu-Bum;Kim, Hae-Young;Baek, Yoon-Su
    • Journal of Mechanical Science and Technology / v.15 no.8 / pp.1097-1107 / 2001
  • In this paper, a wall-following navigation algorithm for a mobile robot using a mono vision system is described. The key points of the mobile robot navigation system are effective acquisition of environmental information and fast recognition of the robot's position. From this information, the mobile robot should be appropriately controlled to follow a desired path. For recognition of the relative position and orientation of the robot with respect to the wall, features of the corridor structure are extracted using the mono vision system; the relative position of the robot from the wall, namely the offset distance and steering angle, is then derived for a simple corridor geometry. To alleviate the computational burden of the image processing, a Kalman filter is used to reduce the search region in the image space for line detection. The robot is then controlled using this information to follow the desired path. The wall-following control scheme, a PD control scheme, is composed of two parts, approaching control and orientation control, and each is performed by the steering and forward-driving motion of the robot. To verify the effectiveness of the proposed algorithm, real-time navigation experiments were performed. The experimental results verify the effectiveness and flexibility of the suggested algorithm in comparison with a purely encoder-guided mobile robot navigation system.
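The Kalman-filter trick described above, predicting where the corridor line will appear so only a narrow band of the image needs to be searched, can be sketched as follows. The constant-velocity state model and the noise values are illustrative assumptions, not the paper's tuned filter:

```python
import numpy as np

# Constant-velocity Kalman filter over the horizontal image position of
# a corridor line. The predicted position and its variance define a
# narrow search window, so line detection scans only a band of columns.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = np.eye(2) * 0.01                     # process noise
R = np.array([[4.0]])                    # measurement noise (pixels^2)

x = np.array([[100.0], [0.0]])           # initial state guess
P = np.eye(2) * 100.0                    # initial uncertainty

for z in [102.0, 104.0, 106.0, 108.0]:   # line drifting right ~2 px/frame
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # the 3-sigma gate around the prediction is the search region
    sigma = float(np.sqrt((H @ P @ H.T + R)[0, 0]))
    lo, hi = x[0, 0] - 3 * sigma, x[0, 0] + 3 * sigma
    # update with the measurement found inside that window
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

pos, vel = x[0, 0], x[1, 0]
```

After a few frames the filter locks onto the drift, and the [lo, hi] gate shrinks from the whole image width to a band a few pixels wide, which is exactly the image-processing saving the abstract describes.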


Real Time Vision System for the Test of Steam Generator in Nuclear Power Plants Based on Fuzzy Membership Function (퍼지 소속 함수에 기초한 원전 증기발생기 검사용 실시간 비젼시스템)

  • 왕한흥
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 1996.10a / pp.107-112 / 1996
  • In this paper, a new approach is proposed for developing an automatic vision system to examine and repair steam generator tubes at a remote distance. In nuclear power plants, workers are reluctant to work inside the steam generator because of the high-radiation environment and the limited working space. It is strongly recommended that examination and maintenance work be done by an automatic system to protect the operator from radiation exposure. Digital signal processors are used to implement real-time recognition and examination of steam generator tubes in the proposed vision system. The performance of the proposed digital vision system is illustrated by experiments on a comparable steam generator model.


Development of a Vision-based Blank Alignment Unit for Press Automation Process (프레스 자동화 공정을 위한 비전 기반 블랭크 정렬 장치 개발)

  • Oh, Jong-Kyu;Kim, Daesik;Kim, Soo-Jong
    • Journal of Institute of Control, Robotics and Systems / v.21 no.1 / pp.65-69 / 2015
  • A vision-based blank alignment unit for a press automation line is introduced in this paper. A press is a machine tool that changes the shape of a blank by applying pressure and is widely used in industries requiring mass production. In traditional press automation lines, a mechanical centering unit consisting of guides and ball bearings is employed to align a blank before a robot inserts it into the press. However, it can only align blanks of limited sizes and shapes. Moreover, it cannot be applied to a process where more than two blanks are inserted simultaneously. To overcome these problems, we developed a press centering unit based on vision sensors for press automation lines. The specification of the vision system is determined by considering information about the blank and the required accuracy. Vision application software with pattern recognition, camera calibration, and monitoring functions is designed to successfully detect multiple blanks. Through real experiments with an industrial robot, we validated that the proposed system is able to align blanks of various sizes and shapes and to successfully detect more than two simultaneously inserted blanks.
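The paper does not detail its pattern-recognition step, but a common way a vision unit recovers a blank's offset and rotation for robot pick correction is from the image moments of a segmented blank mask. The following is a hedged sketch of that generic idea, not the authors' implementation:

```python
import numpy as np

def blank_pose(mask):
    """Position and orientation of a blank from a binary mask via image
    moments: the centroid gives the (x, y) offset, and the principal
    axis of the second-order central moments gives the rotation angle
    in radians."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()          # centroid
    mu11 = ((xs - cx) * (ys - cy)).mean()  # central moments
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return cx, cy, angle

# Axis-aligned rectangular blank: angle ~0, centroid at the middle.
mask = np.zeros((20, 30), dtype=bool)
mask[5:10, 4:24] = True
cx, cy, angle = blank_pose(mask)
```

The robot would then apply the (cx, cy) offset and angle as a pick-pose correction; because the computation is per-connected-component, it extends naturally to two or more blanks in one image.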

A Local Feature-Based Robust Approach for Facial Expression Recognition from Depth Video

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1390-1403 / 2016
  • Facial expression recognition (FER) plays a very significant role in computer vision, pattern recognition, and image processing applications such as human-computer interaction, as it provides rich information about people's emotions. For video-based facial expression recognition, depth cameras can be better candidates than RGB cameras: a person's identity cannot be easily recovered from distance-based depth videos, so depth cameras also resolve some privacy issues that arise with RGB faces. A good FER system relies heavily on the extraction of robust features as well as on the recognition engine. In this work, an efficient novel approach is proposed to recognize facial expressions from time-sequential depth videos. First, efficient Local Binary Pattern (LBP) features are obtained from the time-sequential depth faces; these are then processed by Generalized Discriminant Analysis (GDA) to make the features more robust; finally, the LBP-GDA features are fed into Hidden Markov Models (HMMs) to train and recognize different facial expressions. The proposed depth-based facial expression recognition approach is compared with conventional approaches such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and it outperforms them with better recognition rates.
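The LBP features at the front of this pipeline are simple to compute. A minimal sketch of the basic 8-neighbor code follows; the paper's exact LBP variant, radius, and sampling are not specified in the abstract, so this is the textbook form:

```python
import numpy as np

def lbp_code(patch):
    """8-neighbor Local Binary Pattern code of a 3x3 patch: each
    neighbor contributes a bit that is 1 when it is >= the center,
    read clockwise starting from the top-left."""
    c = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, v in enumerate(neighbors) if v >= c)

# A uniform patch: every neighbor equals the center, so all bits set.
flat = np.full((3, 3), 7)
code = lbp_code(flat)
```

A histogram of these codes over the face (or face regions) forms the feature vector that GDA then projects into a more discriminative space before HMM classification.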

A Study on Touch Recognition Improvement using Contrast Detection Method (대비검출방식을 이용한 터치 인식 개선방법에 관한 연구)

  • Park, Jae-Wan;Song, Dae-Hyeon;Kim, Jong-Gu;Kim, Dong-Min;Lee, Chil-Woo
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.169-172 / 2009
  • In this paper, we propose a method for improving touch recognition on a vision-based touchscreen using an edge mask around the touched object. Because a vision-based touchscreen recognizes touches with a simple threshold, noise arises from the fist or wrist when touching directly with the hand, making correct touch recognition difficult. In the proposed method, we apply morphology operations, extract a surrounding mask for an object approaching the touchscreen, and use the change of contrast within that mask. Using this dynamic information when the screen is touched prevents the noise. The goal of this paper is to recognize a touch correctly when the hand touches the screen.


Development of Robot Vision Technology for Real-Time Recognition of Model of 3D Parts (3D 부품모델 실시간 인식을 위한 로봇 비전기술 개발)

  • Shim, Byoung-Kyun;Choi, Kyung-Sun;Jang, Sung-Cheol;Ahn, Yong-Suk;Han, Sung-Hyun
    • Journal of the Korean Society of Industry Convergence / v.16 no.4 / pp.113-117 / 2013
  • This paper describes a new recognition technology, based on pattern recognition, for non-contact inspection of optical lens slant and precision parts, including the external form of lenses and electronic parts; with this development, defect detection can be achieved for performance verification. The standard specification for surface defects such as scratches is entered directly and registered against the existing reference reflectance data; the error is then calculated by comparing the actually measured reflectance data with the standard reflectance data, and the system is designed to classify products within the allowed error range as normal and products exceeding it as defective. The developed system can measure down to a single pixel, where 1 pixel corresponds to 37 μm × 37 μm (0.1369×10^-4 mm^2), and its fine-measurement accuracy of 1.5×10^-4 mm is verified through experiments demonstrating performance and reliability.