• Title/Summary/Keyword: Camera-based Recognition


Multi-Region based Radial GCN algorithm for Human action Recognition (행동인식을 위한 다중 영역 기반 방사형 GCN 알고리즘)

  • Jang, Han Byul;Lee, Chil Woo
    • Smart Media Journal
    • /
    • v.11 no.1
    • /
    • pp.46-57
    • /
    • 2022
  • In this paper, a multi-region based Radial Graph Convolutional Network (MRGCN) algorithm is described that can perform end-to-end action recognition using the optical flow and gradient of the input image. Because this method does not use skeleton information, which is difficult to acquire and complicated to estimate, it can be used in a general CCTV environment in which only a video camera is available. The novelty of MRGCN is twofold: it expresses the optical flow and gradient of the input image as directional histograms and then converts them into six feature vectors to reduce the computational load, and it uses a newly developed radial network model to hierarchically propagate the deformation and shape changes of the human body in spatio-temporal space. Another important feature is that the data input regions are arranged to overlap each other, so that information is not spatially disconnected among input nodes. In an action recognition evaluation on 30 actions, MRGCN achieved a Top-1 accuracy of 84.78%, which is superior to existing GCN-based action recognition methods that use skeleton data as input.
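    The directional-histogram encoding this abstract describes can be sketched generically (a minimal illustration, not the authors' MRGCN code; the bin count and magnitude weighting are assumptions):

    ```python
    import numpy as np

    def directional_histogram(field_x, field_y, n_bins=8):
        """Quantize a 2D vector field (e.g. optical flow or image gradient)
        into an n_bins directional histogram weighted by magnitude."""
        angles = np.arctan2(field_y, field_x)                # range [-pi, pi]
        mags = np.hypot(field_x, field_y)
        bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
        hist = np.zeros(n_bins)
        np.add.at(hist, bins.ravel(), mags.ravel())          # magnitude-weighted vote
        total = hist.sum()
        return hist / total if total > 0 else hist
    ```

    Computing such a histogram per input region would reduce each region to a short fixed-length feature vector, which is the dimensionality reduction the abstract credits with lowering the computational load.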

Design and development of non-contact locks including face recognition function based on machine learning (머신러닝 기반 안면인식 기능을 포함한 비접촉 잠금장치 설계 및 개발)

  • Yeo Hoon Yoon;Ki Chang Kim;Whi Jin Jo;Hongjun Kim
    • Convergence Security Journal
    • /
    • v.22 no.1
    • /
    • pp.29-38
    • /
    • 2022
  • The importance of epidemic prevention is increasing due to the serious spread of infectious diseases, and prevention efforts need to focus on non-contact technology. Therefore, in this paper, a face recognition door lock that controls access without contact is designed and developed. First, very simple features are combined to detect faces using a Haar-based cascade algorithm. Then the texture of the image is binarized to extract features using LBPH (Local Binary Pattern Histograms). A non-contact door lock system composed of a Raspberry Pi 3B+ board, an ultrasonic sensor, a camera module, a motor, etc. is presented. To verify actual performance and ascertain the impact of light sources, various experiments were conducted. As an experimental result, the maximum recognition rate was about 85.7%.
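    The binarized texture feature at the core of LBPH can be sketched without OpenCV (a minimal illustration of the basic 3x3 Local Binary Pattern histogram; real LBPH additionally splits the face into a grid of regions and concatenates per-region histograms):

    ```python
    import numpy as np

    def lbp_histogram(gray):
        """Basic 3x3 Local Binary Pattern histogram. Each pixel's 8
        neighbours are thresholded against the centre pixel and packed
        into a byte; the 256-bin histogram serves as the descriptor."""
        g = gray.astype(int)
        c = g[1:-1, 1:-1]                       # centre pixels
        # neighbour offsets, clockwise from top-left
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros_like(c)
        for bit, (dy, dx) in enumerate(offs):
            nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
            codes |= ((nb >= c).astype(int) << bit)
        hist, _ = np.histogram(codes, bins=256, range=(0, 256))
        return hist / hist.sum()
    ```

    Matching then reduces to comparing histograms of an enrolled face and a probe face, which is why the method is cheap enough for a Raspberry Pi class board.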

Shape Based Framework for Recognition and Tracking of Texture-free Objects for Submerged Robots in Structured Underwater Environment (수중로봇을 위한 형태를 기반으로 하는 인공표식의 인식 및 추종 알고리즘)

  • Han, Kyung-Min;Choi, Hyun-Taek
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.48 no.6
    • /
    • pp.91-98
    • /
    • 2011
  • This paper proposes an efficient and accurate vision-based recognition and tracking framework for texture-free objects. We approach this problem with a two-phase algorithm: a detection phase and a tracking phase. In the detection phase, the algorithm extracts shape context descriptors that are used to classify objects into predetermined targets of interest. The matching result is then further refined by a minimization technique. In the tracking phase, we use the mean-shift tracking algorithm based on the Bhattacharyya coefficient. In summary, the contributions of our method to underwater robot vision are fourfold: 1) it can deal with camera motion and scale changes of objects in the underwater environment; 2) it is an inexpensive vision-based recognition algorithm; 3) the shape-based method has an advantage over distinct feature-point based methods (e.g. SIFT) in underwater environments with turbidity variation; 4) we made a quantitative comparison of our method with a few other well-known methods. The results are quite promising for the map-based underwater SLAM task, which is the goal of our research.
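    The Bhattacharyya coefficient that drives the mean-shift tracking phase has a one-line definition over normalized histograms (a generic sketch, independent of the paper's particular feature histograms):

    ```python
    import numpy as np

    def bhattacharyya(p, q):
        """Bhattacharyya coefficient between two normalized histograms:
        1.0 means identical distributions, 0.0 means no overlap.
        Mean-shift moves the tracking window toward the candidate
        location that maximizes this similarity to the target model."""
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        return float(np.sum(np.sqrt(p * q)))
    ```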

A Novel Horizontal Disparity Estimation Algorithm Using Stereoscopic Camera Rig

  • Ramesh, Rohit;Shin, Heung-Sub;Jeong, Shin-Il;Chung, Wan-Young
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.1
    • /
    • pp.83-88
    • /
    • 2011
  • Image segmentation is always a challenging task in computer vision as well as in pattern recognition, and nowadays it has great importance in the field of stereo vision. The disparity information extracted from binocular image pairs is essential in fields such as stereoscopic (3D) imaging systems, virtual reality, and 3D graphics. The term 'disparity' denotes the horizontal shift between the left and right camera images. Many methods have been proposed to visualize or estimate the disparity. In this paper, we present a new technique to visualize the horizontal disparity between two stereo images based on an image segmentation method. The process of comparing the left camera image with the right camera image is popularly known as 'stereo matching'. This method has been used in stereo vision for many years and contributes greatly to generating depth and disparity maps; correlation-based stereo matching is most often used to visualize the disparity. For some stereo image pairs it is easy to estimate the horizontal disparity, but for others it is quite difficult to distinguish. Therefore, to visualize the horizontal disparity between any stereo image pair more robustly, a novel stereo-matching algorithm is proposed, named "Quadtree Segmentation of Pixels Disparity Estimation (QSPDE)".
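    For reference, the correlation-based stereo matching that the abstract cites as the common baseline (not the proposed QSPDE algorithm) can be sketched as sliding-window SAD matching; the window size and disparity range below are arbitrary choices:

    ```python
    import numpy as np

    def sad_disparity(left, right, max_disp=8, win=2):
        """Baseline correlation-style stereo matching: for each pixel,
        slide the right-image window horizontally and keep the shift
        with the minimum sum of absolute differences (SAD)."""
        h, w = left.shape
        disp = np.zeros((h, w), dtype=int)
        L = left.astype(int)
        R = right.astype(int)
        for y in range(win, h - win):
            for x in range(win + max_disp, w - win):
                patch = L[y - win:y + win + 1, x - win:x + win + 1]
                best, best_d = None, 0
                for d in range(max_disp + 1):
                    cand = R[y - win:y + win + 1, x - d - win:x - d + win + 1]
                    cost = np.abs(patch - cand).sum()
                    if best is None or cost < best:
                        best, best_d = cost, d
                disp[y, x] = best_d
        return disp
    ```

    The weakness the abstract points at is visible here: in texture-poor regions many shifts give near-identical SAD costs, which is what motivates segmentation-driven approaches such as QSPDE.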

Semi-automatic Camera Calibration Using Quaternions (쿼터니언을 이용한 반자동 카메라 캘리브레이션)

  • Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.2
    • /
    • pp.43-50
    • /
    • 2018
  • The camera is a key element in image-based three-dimensional positioning, and camera calibration, which properly determines the internal characteristics of the camera, is a necessary process that must precede determining the three-dimensional coordinates of an object. In this study, a new methodology is proposed to determine the interior orientation parameters of a camera semi-automatically, without being influenced by the size and shape of the checkerboard used for calibration. The proposed method consists of exterior orientation parameter estimation using quaternions, recognition of the calibration target, and interior orientation parameter determination through bundle block adjustment. After determining the interior orientation parameters using the chessboard calibration target, the three-dimensional position of a small 3D model was determined. In an accuracy evaluation using checkpoints, the horizontal and vertical position errors were about ±0.006 m and ±0.007 m, respectively.
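    A quaternion parameterization of exterior orientation avoids the gimbal-lock singularities of Euler angles; the standard unit-quaternion-to-rotation-matrix conversion that such an estimator relies on is:

    ```python
    import numpy as np

    def quat_to_rot(q):
        """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix.
        The quaternion is normalized first, so any nonzero q is valid."""
        q = np.asarray(q, dtype=float)
        w, x, y, z = q / np.linalg.norm(q)
        return np.array([
            [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
            [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
            [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
        ])
    ```

    Because the rotation is a smooth function of the four quaternion components, bundle block adjustment can update them with ordinary least-squares steps plus renormalization.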

Real-Time Face Recognition System Based on Illumination-insensitive MCT and Frame Consistency (조명변화에 강인한 MCT와 프레임 연관성 기반 실시간 얼굴인식 시스템)

  • Cho, Gwang-Shin;Park, Su-Kyung;Sim, Dong-Gyu;Lee, Soo-Youn
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.3
    • /
    • pp.123-134
    • /
    • 2008
  • In this paper, we propose a real-time face recognition system that is robust under various lighting conditions. The Modified Census Transform (MCT) algorithm, which is insensitive to illumination variations, is employed to extract local structure features. In a practical face recognition system, images acquired through a camera are likely to be blurred, and some of them may be side-face images, which can lead to unacceptable performance. To improve the stability of a practical face recognition system, we propose a real-time algorithm that rejects unsuitable face images and makes use of recognition consistency between successive frames. Experimental results on the Yale database, which has large illumination variations, show that the proposed approach is approximately 20% better than conventional appearance-based approaches. We also found that the proposed real-time method is more stable than existing methods that produce a recognition result for each frame.
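    The illumination insensitivity comes from the transform itself: each 3x3 neighbourhood is compared against its own local mean, so any monotonic brightness shift that preserves the ordering leaves the codes unchanged. A straightforward (unoptimized) sketch of the MCT:

    ```python
    import numpy as np

    def mct(gray):
        """Modified Census Transform: compare all 9 pixels of each 3x3
        neighbourhood against the neighbourhood mean, producing a 9-bit
        code per pixel. Local brightness offsets do not change the code."""
        g = gray.astype(float)
        h, w = g.shape
        out = np.zeros((h - 2, w - 2), dtype=int)
        weights = 1 << np.arange(9)          # bit weights, row-major order
        for y in range(h - 2):
            for x in range(w - 2):
                block = g[y:y + 3, x:x + 3]
                bits = (block.ravel() > block.mean()).astype(int)
                out[y, x] = int((bits * weights).sum())
        return out
    ```

    In the single-block examples below, only the centre pixel exceeds the local mean, so both the dark and the uniformly brightened neighbourhood yield the same code (bit 4, i.e. 16).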

The Development of Interactive Ski-Simulation Motion Recognition System by Physics-Based Analysis (물리 모델 분석을 통한 상호 작용형 스키시뮬레이터 동작인식 시스템 개발)

  • Jin, Moon-Sub;Choi, Chun-Ho;Chung, Kyung-Ryul
    • Transactions of the KSME C: Technology and Education
    • /
    • v.1 no.2
    • /
    • pp.205-210
    • /
    • 2013
  • In this research, we developed a ski-simulation system based on a physics model using Newton's second law of motion. Key parameters of the model, which estimates the skier's trajectory, speed, and acceleration changes due to the skier's control of the ski plate and posture changes, were derived from a field test performed on a real ski slope. The skier's posture and motion were measured by a motion capture system composed of 13 high-speed IR cameras, and the skier's control and pressure distribution on the ski plate were measured by acceleration and pressure sensors attached to the ski plate and ski boots. The developed ski-simulation model analyzes the user's full body and center of mass using a depth camera (Microsoft Kinect) in real time and provides feedback on force, velocity, and acceleration to the user. As a result, through the development of this interactive ski-simulation motion recognition system, we accumulated experience and skills in physics-based models for the development of sports simulators.
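    The kind of Newton's-second-law model the abstract refers to can be sketched in its simplest form, a point mass on a constant slope with Coulomb friction (a hypothetical minimal model for illustration; the paper's actual model includes posture- and edging-dependent terms derived from the field tests):

    ```python
    import math

    def simulate_slide(theta_deg, mu, dt=0.01, t_end=5.0, g=9.81):
        """Forward-Euler integration of a = g*(sin(theta) - mu*cos(theta))
        for a point-mass skier starting at rest on a constant slope.
        Returns (distance travelled, final speed)."""
        theta = math.radians(theta_deg)
        a = g * (math.sin(theta) - mu * math.cos(theta))
        v, s = 0.0, 0.0
        n_steps = int(round(t_end / dt))
        for _ in range(n_steps):
            v += a * dt          # update velocity from acceleration
            s += v * dt          # update position from velocity
        return s, v
    ```

    In the real-time simulator, the acceleration term would be recomputed each frame from the Kinect-estimated posture rather than held constant.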

Dynamic Bayesian Network based Two-Hand Gesture Recognition (동적 베이스망 기반의 양손 제스처 인식)

  • Suk, Heung-Il;Sin, Bong-Kee
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.4
    • /
    • pp.265-279
    • /
    • 2008
  • The idea of using hand gestures for human-computer interaction is not new and has been studied intensively during the last decade, with a significant amount of qualitative progress that has, however, fallen short of expectations. This paper describes a dynamic Bayesian network (DBN) based approach to both two-hand and one-hand gestures. Unlike wired glove-based approaches, the success of camera-based methods depends greatly on the image processing and feature extraction results. So the proposed DBN-based inference is preceded by fail-safe steps of skin extraction and modeling, and motion tracking. Then a new gesture recognition model for a set of both one-hand and two-hand gestures is proposed based on the dynamic Bayesian network framework, which makes it easy to represent the relationships among features and to incorporate new information into a model. In an experiment with ten isolated gestures, we obtained a recognition rate of 99.59% with cross-validation. The proposed model and the related approach are believed to have strong potential for successful application to other related problems such as sign language recognition.
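    The simplest dynamic Bayesian network is a hidden Markov model, and isolated-gesture classification in this family typically scores an observation sequence under each gesture's model and picks the best. The forward algorithm that computes those scores (a generic sketch, not the paper's two-hand DBN structure) is:

    ```python
    import numpy as np

    def forward_log_likelihood(pi, A, B, obs):
        """Scaled forward algorithm for a discrete HMM.
        pi: (N,) initial state probabilities
        A:  (N, N) state transition matrix
        B:  (N, M) observation probabilities
        obs: sequence of observation symbol indices
        Returns log P(obs | model); classification picks the gesture
        model with the highest value."""
        alpha = pi * B[:, obs[0]]
        log_p = np.log(alpha.sum())
        alpha = alpha / alpha.sum()          # rescale to avoid underflow
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]    # propagate then observe
            s = alpha.sum()
            log_p += np.log(s)
            alpha = alpha / s
        return log_p
    ```

    A DBN generalizes this by factoring the hidden state into several interacting variables (e.g. one per hand), which is the relationship-modeling flexibility the abstract highlights.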

Implementation of DID interface using gesture recognition (제스쳐 인식을 이용한 DID 인터페이스 구현)

  • Lee, Sang-Hun;Kim, Dae-Jin;Choi, Hong-Sub
    • Journal of Digital Contents Society
    • /
    • v.13 no.3
    • /
    • pp.343-352
    • /
    • 2012
  • In this paper, we implemented a touchless interface for a DID (Digital Information Display) system using gesture recognition techniques that include both hand motion and hand shape recognition. This touchless interface, requiring no extra attachments, gives the user both easier usage and spatial convenience. For hand motion recognition, two motion parameters, slope and velocity, were measured for direction-based recognition. For hand shape recognition, the hand region is extracted using the YCbCr color model and several image processing methods. These recognition methods are combined to generate various commands, such as next-page, previous-page, screen-up, screen-down, and mouse-click, in order to control the DID system. Finally, experimental results showed a command recognition rate of 93%, which is enough to confirm possible application to commercial products.
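    Direction-based recognition from slope and velocity can be sketched as follows (a hypothetical minimal classifier using the command names from the abstract; the speed threshold and two-point simplification are assumptions, and image coordinates with y increasing downward are assumed):

    ```python
    import math

    def classify_swipe(start, end, dt, min_speed=200.0):
        """Map a hand movement (two tracked positions, elapsed time) to a
        DID command. The dominant axis of the motion's slope selects the
        command; speed (pixels/second) gates out slow, unintended motion."""
        dx, dy = end[0] - start[0], end[1] - start[1]
        speed = math.hypot(dx, dy) / dt
        if speed < min_speed:
            return None                      # too slow: not a gesture
        if abs(dx) >= abs(dy):
            return "next-page" if dx > 0 else "previous-page"
        return "screen-down" if dy > 0 else "screen-up"
    ```

    A separate hand-shape check (e.g. open palm vs. fist from the YCbCr-segmented silhouette) would trigger the mouse-click command.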

Implementation of Pattern Recognition Algorithm Using Line Scan Camera for Recognition of Path and Location of AGV (무인운반차(AGV)의 주행경로 및 위치인식을 위한 라인스캔카메라를 이용한 패턴인식 알고리즘 구현)

  • Kim, Soo Hyun;Lee, Hyung Gyu
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.23 no.1
    • /
    • pp.13-21
    • /
    • 2018
  • AGVS (Automated Guided Vehicle System) is a core technology of logistics automation which automatically moves specific objects or goods within a certain work space. A conventional AGVS generally requires an indoor localization system, and each AGV is equipped with expensive sensors such as laser, magnetic, and inertial sensors for route recognition and automatic navigation; thus a high installation cost is inevitable, and there are many restrictions on route (path) modification or expansion. To address this issue, in this paper we propose a cost-effective and scalable AGV based on a light-weight pattern recognition technique. The proposed pattern recognition technology not only enables autonomous driving by recognizing the route (path), but also provides a technique for determining the location of the AGV itself by recognizing simple (bar-code-like) patterns installed along the route. This significantly reduces the cost of implementing an AGVS and also benefits route modification and expansion. To verify the effectiveness of the proposed technique, we first implement the pattern recognition algorithm on a light-weight MCU (Micro Control Unit), and then verify the results with an MCU-controlled AGV prototype.
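    Recognizing a bar-code-like location tag from a single line-scan row reduces to thresholding and run-length encoding, which is light enough for an MCU. A generic sketch (not the paper's algorithm; the threshold and narrow/wide unit encoding are assumptions):

    ```python
    import numpy as np

    def decode_line_scan(scan, threshold=128):
        """Decode one line-scan camera row into a bar pattern: threshold
        to dark/light, run-length encode, then express each run width in
        units of the narrowest run (narrow = 1, wide = 2, ...)."""
        bits = (np.asarray(scan) < threshold).astype(int)    # 1 = dark bar
        runs, start = [], 0
        for i in range(1, len(bits) + 1):
            if i == len(bits) or bits[i] != bits[start]:
                runs.append((int(bits[start]), i - start))   # (value, width)
                start = i
        unit = min(w for _, w in runs)
        return [(v, round(w / unit)) for v, w in runs]
    ```

    Normalizing widths by the narrowest run makes the decoded pattern invariant to the AGV's speed and to the camera's distance from the floor markings, within limits.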