• Title/Summary/Keyword: 카메라 기반 인식 (camera-based recognition)

Hand gesture based a pet robot control (손 제스처 기반의 애완용 로봇 제어)

  • Park, Se-Hyun; Kim, Tae-Ui; Kwon, Kyung-Su
    • Journal of Korea Society of Industrial Information Systems, v.13 no.4, pp.145-154, 2008
  • In this paper, we propose a pet robot control system that uses hand gesture recognition on image sequences acquired from a camera mounted on the pet robot. The proposed system consists of four steps: hand detection, feature extraction, gesture recognition, and robot control. The hand region is first detected from the input images using a skin color model in the HSI color space and connected component analysis. Next, hand shape and motion features are extracted from the image sequences, and the hand shape is used to classify meaningful gestures. The hand gesture is then recognized using HMMs (hidden Markov models) whose input is the symbol sequence obtained by quantizing the hand motion. Finally, the pet robot is controlled by the command corresponding to the recognized gesture. We defined four commands for controlling the pet robot: sit down, stand up, lie flat, and shake hands, and our experiments show that a user can control the pet robot through the proposed system.
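
The hand-detection step described above (skin-color segmentation followed by connected-component analysis) can be sketched roughly as follows. This is an illustrative approximation using OpenCV's HSV space rather than the paper's HSI model, and the color thresholds are assumptions, not the authors' values.

```python
import cv2
import numpy as np

def detect_hand_region(frame_bgr):
    """Return the bounding box (x, y, w, h) of the largest skin-colored blob."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed skin-tone range; real systems tune this per camera and lighting.
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Connected-component analysis: keep only the largest component.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if num < 2:  # label 0 is the background
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    x, y, w, h = stats[largest, :4]
    return int(x), int(y), int(w), int(h)
```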

Development of a Fall Detection System Using Fish-eye Lens Camera (어안 렌즈 카메라 영상을 이용한 기절동작 인식)

  • So, In-Mi; Han, Dae-Kyung; Kang, Sun-Kyung; Kim, Young-Un; Jong, Sung-tae
    • Journal of the Korea Society of Computer and Information, v.13 no.4, pp.97-103, 2008
  • This study presents a method for recognizing fainting (fall) motions from fish-eye lens images in order to sense emergency situations. A camera with a fish-eye lens mounted at the center of the living-room ceiling supplies the images, and foreground pixels are extracted with an adaptive background modeling method based on a Gaussian mixture model, followed by tracing the outer points of the foreground region and fitting an ellipse to them. During the ellipse tracking, the fish-eye images are converted to perspective images, and changes in size and location together with moving-speed information are extracted to judge whether the observed movement, pause, and posture resemble a fainting motion. The results show that extracting the size and location changes and the moving speed from the converted perspective images yields better recognition rates than using the raw fish-eye images.
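
A rough sketch of the foreground-extraction and ellipse-fitting steps, substituting OpenCV's mixture-of-Gaussians background subtractor for the paper's adaptive background model; the parameters are illustrative assumptions.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16)

def person_ellipse(frame_bgr):
    """Fit an ellipse to the largest foreground blob; returns ((cx, cy), (MA, ma), angle)."""
    fg = subtractor.apply(frame_bgr)
    fg = cv2.medianBlur(fg, 5)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:          # cv2.fitEllipse needs at least 5 points
        return None
    # Changes of this ellipse's size, position and orientation over time are
    # the cues used to decide whether the motion resembles a fall.
    return cv2.fitEllipse(largest)
```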

Landmark Recognition Method based on Geometric Invariant Vectors (기하학적 불변벡터기반 랜드마크 인식방법)

  • Cha, Jeong-Hee
    • Journal of the Korea Society of Computer and Information, v.10 no.3 s.35, pp.173-182, 2005
  • In this paper, we propose a landmark recognition method for localization during navigation that is invariant to the camera viewpoint. The features used in previous research vary with the camera viewpoint, and because of the sheer amount of visual information, extracting visual landmarks for positioning is not an easy task. The proposed method has three stages: feature extraction, learning and recognition, and matching. In the feature extraction stage, we set interest areas in the image, extract corner points within them, and then obtain features that are more accurate and more resistant to noise through statistical analysis of the smaller eigenvalue. In the learning and recognition stage, we form robust feature models by testing whether a feature model consisting of five corner points is invariant to the viewpoint. In the matching stage, we reduce time complexity and find correspondences accurately with a matching method that uses a similarity evaluation function and the Graham search method. In the experiments, we compare and analyze the proposed method against existing methods on various indoor images to demonstrate its superiority.
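
Corner extraction based on the smaller eigenvalue of the local gradient matrix corresponds to the Shi-Tomasi criterion; below is a minimal sketch under that assumption, with an illustrative region of interest and quality threshold.

```python
import cv2
import numpy as np

def extract_corners(gray, roi, max_corners=50):
    """Extract corner points inside an interest area with the min-eigenvalue test."""
    x, y, w, h = roi
    patch = gray[y:y + h, x:x + w]
    corners = cv2.goodFeaturesToTrack(patch, maxCorners=max_corners,
                                      qualityLevel=0.05, minDistance=10,
                                      useHarrisDetector=False)  # Shi-Tomasi criterion
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    # Shift patch coordinates back into full-image coordinates.
    return corners.reshape(-1, 2) + np.array([x, y], dtype=np.float32)
```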

Violence Recognition using Deep CNN for Smart Surveillance Applications (스마트 감시 애플리케이션을 위해 Deep CNN을 이용한 폭력인식)

  • Ullah, Fath U Min; Ullah, Amin; Muhammad, Khan; Lee, Mi Young; Baik, Sung Wook
    • The Journal of Korean Institute of Next Generation Computing, v.14 no.5, pp.53-59, 2018
  • Due to recent developments in computer vision technology, complex actions can be recognized with reasonable accuracy in smart cities. In contrast, violence recognition, such as events involving fights or knives, has received less attention. Visual surveillance can be used to detect fights in streets or in prisons. In this paper, we propose a deep learning-based violence recognition method for surveillance cameras. A convolutional neural network (CNN) model is trained and fine-tuned on available benchmark datasets of fights and knives. When an abnormal event is detected, an alarm can be sent to the nearest police station for immediate action; when the predicted probabilities of the fight and knife classes are both very low, the situation is treated as normal. In the experiments, the proposed method outperformed other state-of-the-art CNN models by a large margin, achieving a maximum accuracy of 99.21%.
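
A hedged sketch of a fine-tuning setup of the kind described above; the paper does not name this architecture, so MobileNetV2, the two-class head, and the low-probability "normal" rule are assumptions for illustration only.

```python
import tensorflow as tf

# Pretrained backbone with a new two-class head (fight / knife).
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                     # train only the new head first
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(...) would run on the benchmark fight/knife datasets.

def classify_frame(frame_batch, threshold=0.5):
    """Label one preprocessed frame; low confidence on both classes means 'normal'."""
    probs = model.predict(frame_batch)[0]
    if probs.max() < threshold:
        return "normal"
    return ("fight", "knife")[int(probs.argmax())]
```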

Physically-Based Objects Interaction in Augmented Reality Environments (물리기반 모델링을 이용한 증강현실에서의 효과적 객체 상호작용)

  • Lee, Min-Kyoung; Kim, Young-J.; Redon, Stephane
    • 한국HCI학회:학술대회논문집, 2007.02a, pp.89-95, 2007
  • In this paper, we propose an augmented reality method in which real and virtual objects interact in a physically realistic and stable way in a marker-based tracking environment, using continuous collision detection and constraint-based rigid-body dynamics modeling. The implemented augmented reality system consists mainly of a part that recognizes and tracks real objects in the augmented reality environment and a part that simulates the physical interactions among all kinds of objects appearing in the augmented scene. Despite the fundamental performance limitation that an ordinary camera used for object tracking can deliver only a small number of discrete frames, we were able to obtain correct collision information between objects using continuous collision detection, and by applying constraint-based rigid-body dynamics simulation on top of this, we could generate stable and realistic physical responses. Despite this tracking latency, the proposed method produced stable and physically convincing interactions between real and virtual objects across the various benchmark scenarios used in this paper.

Autonomous Surveillance-tracking System for Workers Monitoring (작업자 모니터링을 위한 자동 감시추적 시스템)

  • Ko, Jung-Hwan; Lee, Jung-Suk; An, Young-Hwan
    • 전자공학회논문지 IE, v.47 no.2, pp.38-46, 2010
  • In this paper, an autonomous surveillance and tracking system for worker monitoring based on stereo vision is proposed. After analyzing the characteristics of a cross-axis camera system through experiments, an optimized stereo vision system is constructed, and an intelligent worker surveillance and tracking system is implemented on top of it, in which a target worker moving through the environment can be detected and tracked while the worker's stereo location coordinates and moving trajectory in world space are extracted. Experiments on tracking a moving target show that the tracked target's center location maintains very low error ratios of 1.82% and 1.11% on average in the horizontal and vertical directions, respectively, and that the error ratio between the calculated and measured 3D location coordinates of the target person averages only 2.5% for the test scenario. These results suggest the practical feasibility of an intelligent stereo surveillance system for real-time tracking of a moving worker and robust estimation of the target's 3D location coordinates and trajectory in the real world.
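
The 3D location of a tracked worker follows from the standard stereo relation z = f·B/d once a disparity is available. Below is a minimal sketch with assumed focal length, baseline, and block-matching settings; the paper's cross-axis geometry and tracking logic are not reproduced.

```python
import cv2
import numpy as np

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)

def worker_location(left_gray, right_gray, target_px,
                    focal_px=700.0, baseline_m=0.12):
    """Return (X, Y, Z) in metres for the tracked pixel, or None if no disparity."""
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    u, v = target_px                       # tracked target centre in the left image
    d = disparity[v, u]
    if d <= 0:
        return None
    z = focal_px * baseline_m / d          # depth from z = f * B / d
    x = (u - left_gray.shape[1] / 2.0) * z / focal_px
    y = (v - left_gray.shape[0] / 2.0) * z / focal_px
    return np.array([x, y, z])
```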

An Efficient Pedestrian Recognition Method based on PCA Reconstruction and HOG Feature Descriptor (PCA 복원과 HOG 특징 기술자 기반의 효율적인 보행자 인식 방법)

  • Kim, Cheol-Mun; Baek, Yeul-Min; Kim, Whoi-Yul
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.10, pp.162-170, 2013
  • In recent years, interest in and demand for pedestrian protection systems (PPS), which are mounted on vehicles to improve traffic safety, have been increasing. In this paper, we propose a pedestrian candidate window extraction method and a unit-cell histogram based HOG descriptor calculation method. In the candidate window extraction stage, the brightness ratio between a pedestrian and its surrounding region, vertical edge projection, an edge factor, and the PCA reconstruction image are used. Dalal's HOG requires pixel-wise histogram computation with Gaussian weighting and trilinear interpolation over overlapping blocks, whereas our method applies the Gaussian down-weighting and computes the histogram on a per-cell basis and then combines it with adjacent cells, so it can be computed faster than Dalal's method. Our PCA reconstruction error based candidate window extraction efficiently rejects background windows based on the difference around a pedestrian's head and shoulder area. The proposed method improves detection speed over conventional HOG using only the image, without any prior information from camera calibration or a depth map obtained from stereo cameras.
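
A minimal sketch of the two ingredients above: a cell-based HOG descriptor (here via scikit-image rather than the authors' implementation) and PCA reconstruction error for rejecting background windows. The PCA model, window size, and error threshold are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def hog_descriptor(window_gray):
    """Dalal-style layout: 8x8-pixel cells, 2x2-cell blocks, 9 orientation bins."""
    return hog(window_gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def is_pedestrian_candidate(window_gray, pca: PCA, max_error=0.01):
    """Keep windows whose PCA reconstruction error is small (pedestrian-like)."""
    # pca is assumed to be fitted beforehand on flattened pedestrian windows.
    x = window_gray.reshape(1, -1).astype(np.float32) / 255.0
    reconstructed = pca.inverse_transform(pca.transform(x))
    error = float(np.mean((x - reconstructed) ** 2))
    return error < max_error
```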

Driver Assistance System By the Image Based Behavior Pattern Recognition (영상기반 행동패턴 인식에 의한 운전자 보조시스템)

  • Kim, Sangwon; Kim, Jungkyu
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.12, pp.123-129, 2014
  • With the development of various convergence devices, cameras are being used in many types of systems, such as security systems and driver assistance devices, and many people are exposed to these systems. Such a system should therefore be able to recognize human behavior and provide useful functions based on the information obtained from the detected behavior. In this paper, we use a machine learning approach based on 2D images and propose human behavior pattern recognition methods that provide valuable information to support useful functions for the user. The first is "phone call behavior" recognition: if an in-vehicle black-box camera focused on the driver recognizes a phone-call pose, it can warn the driver for safe driving. The second is "looking ahead" recognition for driving safety, for which we propose a decision rule and a method to determine whether the driver is looking ahead. Experimental results show the usefulness of the proposed recognition methods in real time.
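
The "looking ahead" decision rule is not spelled out in the abstract; the following is a hypothetical sketch of such a rule, assuming head yaw and pitch come from an upstream head-pose estimator and using illustrative angular and timing thresholds.

```python
def is_looking_ahead(yaw_deg: float, pitch_deg: float,
                     yaw_limit: float = 20.0, pitch_limit: float = 15.0) -> bool:
    """Return True when the head stays within a forward-facing cone."""
    return abs(yaw_deg) <= yaw_limit and abs(pitch_deg) <= pitch_limit

def warn_if_distracted(frames_not_ahead: int, fps: float = 30.0,
                       max_seconds: float = 2.0) -> bool:
    # Trigger a warning only after the driver looks away for a sustained period.
    return frames_not_ahead / fps > max_seconds
```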

Remote Control System using Face and Gesture Recognition based on Deep Learning (딥러닝 기반의 얼굴과 제스처 인식을 활용한 원격 제어)

  • Hwang, Kitae; Lee, Jae-Moon; Jung, Inhwan
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.20 no.6, pp.115-121, 2020
  • With the spread of IoT technology, various IoT applications using facial recognition are emerging. This paper describes the design and implementation of a remote control system that uses deep learning-based face recognition and hand gesture recognition. In general, an application system using face recognition consists of a part that captures images in real time from a camera, a part that recognizes a face in the image, and a part that utilizes the recognition result. A Raspberry Pi, a single-board computer that can be mounted anywhere, is used to capture images in real time; face recognition software based on TensorFlow's FaceNet model runs on a server computer, and hand gesture recognition software is built on OpenCV. We classified users into three groups, known users, danger users, and unknown users, and designed and implemented an application that opens an automatic door lock only for known users who pass both the face recognition and the hand gesture check.
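
A hedged sketch of the face-matching and door-control logic described above; the FaceNet embedding function, the enrolled-user databases, and the distance threshold are assumptions, and only the decision step is shown.

```python
import numpy as np

# Enrolled embeddings (assumed produced by a FaceNet-style model elsewhere).
known_users = {"alice": np.zeros(128)}     # hypothetical enrolled user
danger_users = {}                          # hypothetical watch-list

def classify_user(embedding, threshold=0.8):
    """Return 'known', 'danger' or 'unknown' from nearest-embedding distance."""
    def nearest(db):
        if not db:
            return None, np.inf
        name = min(db, key=lambda n: np.linalg.norm(embedding - db[n]))
        return name, np.linalg.norm(embedding - db[name])

    known_name, known_dist = nearest(known_users)
    danger_name, danger_dist = nearest(danger_users)
    if danger_dist < threshold and danger_dist <= known_dist:
        return "danger", danger_name
    if known_dist < threshold:
        return "known", known_name
    return "unknown", None

def may_open_door(embedding, gesture_ok):
    """The lock opens only when both the face check and the gesture check pass."""
    label, _ = classify_user(embedding)
    return label == "known" and gesture_ok
```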

Fall Early Response System Using Pose Recognition Technology Based on Skeleton Model (스켈레톤 모델 기반의 포즈 인식 기술을 활용한 낙상 조기 대응 시스템)

  • Woo-hyuk Jung; Geun-jae Lee; Chan-seok Bae; Gyu-ryang Hong; Ji-hyun Kwon; Hongseok Yoo
    • Proceedings of the Korean Society of Computer Information Conference, 2023.07a, pp.479-480, 2023
  • Modern society is experiencing a falling birth rate and rapid aging: the number of social welfare workers in their 20s and 30s is decreasing while the elderly population grows. If an emergency such as a fall occurs to an elderly person without a caregiver, the golden time for rescue may be missed. In this paper, we therefore developed a system that allows elderly-care workers to recognize the situation quickly through real-time monitoring when a fall accident occurs. The MediaPipe pose model is used to capture the movement of the monitored person, and the captured person is tracked by controlling the servo motors of a PTZ camera. Key scenes are saved as images and sent to a web server, and an Arduino board equipped with a heart-rate sensor and a Wi-Fi communication module also transmits data to the web server in real time, so that a dedicated manager can recognize the situation from the images and judge its urgency from the heart rate.
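
A minimal sketch of the MediaPipe Pose step; the PTZ servo control and the Arduino heart-rate link are omitted, and the fall cue (shoulders dropping to hip height) is an assumption rather than the authors' exact rule.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def looks_like_fall(frame_bgr, pose) -> bool:
    """Crude fall cue: shoulders roughly level with the hips in image coordinates."""
    result = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks is None:
        return False
    lm = result.pose_landmarks.landmark
    shoulder_y = (lm[mp_pose.PoseLandmark.LEFT_SHOULDER].y +
                  lm[mp_pose.PoseLandmark.RIGHT_SHOULDER].y) / 2.0
    hip_y = (lm[mp_pose.PoseLandmark.LEFT_HIP].y +
             lm[mp_pose.PoseLandmark.RIGHT_HIP].y) / 2.0
    return abs(shoulder_y - hip_y) < 0.05   # y is normalized, growing downward

# Usage (assumed capture loop): create mp_pose.Pose(static_image_mode=False)
# once and call looks_like_fall(frame, pose) for every camera frame.
```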
