• Title/Summary/Keyword: vision-based recognition

Search Results: 633

Information Processing in Primate Retinal Ganglion

  • Je, Sung-Kwan;Cho, Jae-Hyun;Kim, Gwang-Baek
    • Journal of Information and Communication Convergence Engineering / v.2 no.2 / pp.132-137 / 2004
  • Most current computer vision theories are based on hypotheses that are difficult to apply to the real world, and they merely imitate a coarse form of the human visual system. As a result, they have not produced satisfying results. The human visual system has a mechanism that processes information under memory degradation over time and limited storage space. Starting from research on the human visual system, this study analyzes the mechanism that processes input information as it is transferred from the retina to the ganglion cells. A model of the characteristics of retinal ganglion cells is proposed after considering the structure of the retina and the efficiency of storage space. The MNIST database of handwritten digits is used as the data for this research, with ART2 and SOM as recognizers. The results show that the proposed recognition model differs little from a general recognition model in recognition rate, but storage-space efficiency can be improved by the proposed input-processing mechanism.
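
The recognizers named above, ART2 and SOM, are standard unsupervised models. As a minimal sketch of the SOM half, trained on toy 8-dimensional vectors standing in for digit features (the unit count and learning schedule are invented, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=4, epochs=20, lr=0.5):
    """Train a tiny 1-D self-organizing map on the row vectors in `data`."""
    weights = rng.random((n_units, data.shape[1]))
    for epoch in range(epochs):
        alpha = lr * (1 - epoch / epochs)          # decaying learning rate
        for x in data:
            # best-matching unit: nearest weight vector to the sample
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            for j in range(n_units):
                h = np.exp(-abs(j - bmu))          # neighborhood influence
                weights[j] += alpha * h * (x - weights[j])
    return weights

def som_label(weights, x):
    """Index of the SOM unit a sample maps to."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# Toy data: two well-separated clusters standing in for two digit classes.
cluster_a = rng.normal(0.0, 0.05, size=(20, 8))
cluster_b = rng.normal(1.0, 0.05, size=(20, 8))
data = np.vstack([cluster_a, cluster_b])
w = train_som(data)

# Samples from different clusters should land on different SOM units.
print(som_label(w, cluster_a[0]), som_label(w, cluster_b[0]))
```

After training, the unit index acts as a compact code for the input, which is the storage-efficiency angle the abstract describes.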

Real-time Hand Gesture Recognition System based on Vision for Intelligent Robot Control (지능로봇 제어를 위한 비전기반 실시간 수신호 인식 시스템)

  • Yang, Tae-Kyu;Seo, Yong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.10 / pp.2180-2188 / 2009
  • This paper presents a study of a real-time vision-based hand gesture recognition system for intelligent robot control. We propose a recognition system using the PCA and BP algorithms. Recognition of hand gestures consists of two steps: a preprocessing step using the PCA algorithm and a classification step using the BP algorithm. PCA is a technique for reducing multidimensional data sets to lower dimensions for effective analysis. In our simulation, PCA is applied to calculate feature projection vectors for the image of a given hand. The BP algorithm is capable of parallel distributed processing and expedites processing because of its parallel structure; it recognizes hand gestures in real time through self-learning of trained eigen hand gestures. The proposed combination of PCA and BP shows improved recognition compared to the PCA algorithm alone.
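
The PCA preprocessing step, computing feature projection vectors from image data, can be sketched as follows. The toy data and component count are illustrative stand-ins for the paper's flattened hand images:

```python
import numpy as np

def pca_fit(X, n_components):
    """Compute a PCA basis from row-vector samples X via SVD."""
    mean = X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal axes.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_project(X, mean, components):
    """Project samples onto the principal axes (the feature vectors)."""
    return (X - mean) @ components.T

rng = np.random.default_rng(1)
# Hypothetical stand-in for flattened hand images: 50 samples, 64 pixels each.
X = rng.normal(size=(50, 64))
mean, comps = pca_fit(X, n_components=8)
features = pca_project(X, mean, comps)
print(features.shape)  # (50, 8)
```

The 8-dimensional feature vectors, rather than the raw 64-dimensional images, would then feed the BP (backpropagation) classifier.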

Vision-based Authentication and Registration of Facial Identity in Hospital Information System

  • Bae, Seok-Chan;Lee, Yon-Sik;Choi, Sun-Woong
    • Journal of the Korea Society of Computer and Information / v.24 no.12 / pp.59-65 / 2019
  • A hospital information system covers a wide range of information in the medical profession, from the hospital's overall administrative work to doctors' medical work. In this paper, we propose vision-based authentication and registration of facial identity in a hospital information system using OpenCV. Using the proposed security module program, the hospital information system is designed to enhance security by registering the faces of hospital personnel and to process reception, treatment, and prescription without any secondary leakage of personal information. The implemented security module eliminates the need to print, expose, and recognize the existing sticker-paper tags and wristband-type personal information checked by nurses in the hospital information system. In contrast to the original system, the security module takes an ID and password as input instead, improving privacy and the recognition rate.
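
The register-then-authenticate flow can be sketched as a similarity match over stored face embeddings. Everything here is hypothetical for illustration: the `register`/`authenticate` names, the threshold, and the stub embedding vectors; the paper builds its actual pipeline on OpenCV:

```python
import numpy as np

# Hypothetical in-memory registry; the paper stores registrations
# inside the hospital information system instead.
registry = {}

def register(staff_id, embedding):
    """Store a unit-normalized face embedding at registration time."""
    registry[staff_id] = embedding / np.linalg.norm(embedding)

def authenticate(embedding, threshold=0.8):
    """Return the matching staff id if cosine similarity clears the threshold."""
    q = embedding / np.linalg.norm(embedding)
    best_id, best_sim = None, threshold
    for staff_id, ref in registry.items():
        sim = float(ref @ q)
        if sim > best_sim:
            best_id, best_sim = staff_id, sim
    return best_id

rng = np.random.default_rng(2)
alice = rng.normal(size=128)                 # stub embedding for one staff member
register("nurse_kim", alice)
print(authenticate(alice + rng.normal(scale=0.05, size=128)))  # "nurse_kim"
print(authenticate(rng.normal(size=128)))                      # None
```

A real deployment would derive the 128-dimensional embeddings from detected face images rather than random vectors.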

Improvement of Gesture Recognition using 2-stage HMM (2단계 히든마코프 모델을 이용한 제스쳐의 성능향상 연구)

  • Jung, Hwon-Jae;Park, Hyeonjun;Kim, Donghan
    • Journal of Institute of Control, Robotics and Systems / v.21 no.11 / pp.1034-1037 / 2015
  • In recent years, various methods have been developed in robotics to create an intimate relationship between people and robots. These methods include speech, vision, and biometric recognition as well as gesture-based interaction. These recognition technologies are used in various wearable devices, smartphones, and other electronic devices for convenience. Among them, gesture recognition is the most commonly used and the most appropriate technology for wearable devices. Gesture recognition can be classified as contact or non-contact. This paper proposes contact gesture recognition with IMU and EMG sensors by applying the hidden Markov model (HMM) twice. Several simple behaviors form main gestures through the first-stage HMM, which is the standard HMM process well known in pattern recognition. The sequence of main gestures output by the first-stage HMM then forms higher-order gestures through the second-stage HMM. In this way, more natural and intelligent gestures can be implemented from simple gestures. This advanced process can play a larger role in gesture-recognition-based UX for many wearable and smart devices.
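
The two-stage idea can be sketched with a small Viterbi decoder applied twice: the first pass maps quantized sensor symbols to main gestures, and the second pass treats that main-gesture sequence as the observations of a second HMM whose states are higher-order gestures. All model parameters below are invented for illustration, not the paper's trained models:

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely hidden-state path for an observation sequence (log domain)."""
    logd = np.log(start) + np.log(emit[:, obs[0]])
    back = []
    for o in obs[1:]:
        # scores[i, j]: best path ending in state i, then moving to j emitting o
        scores = logd[:, None] + np.log(trans) + np.log(emit[:, o])[None, :]
        back.append(np.argmax(scores, axis=0))
        logd = np.max(scores, axis=0)
    path = [int(np.argmax(logd))]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Stage 1: quantized IMU/EMG symbols -> main gestures (0: swing, 1: hold).
start = np.array([0.5, 0.5])
trans = np.array([[0.8, 0.2], [0.2, 0.8]])
emit = np.array([[0.9, 0.1], [0.1, 0.9]])
sensor_symbols = [0, 0, 1, 1]
main_path = viterbi(sensor_symbols, start, trans, emit)

# Stage 2: the main-gesture sequence becomes the observation sequence
# for a second HMM whose states are higher-order gestures.
high_path = viterbi(main_path, start, trans, emit)
print(main_path, high_path)
```

With distinct models per stage, a sequence of simple gestures composes into a higher-order gesture, which is the chaining the abstract describes.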

Stereo Vision for Monochromatic Surface Recognition Based on Competitive and Cooperative Neural Network

  • Kang, Hyun-Deok;Jo, Kang-Hyun
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2002.10a / pp.41.2-41 / 2002
  • The stereo correspondence of two retinal images is one of the most difficult problems in stereo vision because the reconstruction of a 3-D scene is a typical ill-posed visual problem. Many problems remain unsolved; one of them is reconstructing the 3-D scene of a monochromatic surface, because there is no clue for establishing a correspondence between the two retinal images. We consider this problem with a two-layered self-organizing neural network that simulates the competitive and cooperative interaction of binocular neurons. A...


Recognition of Multi-sensor based Car Driving Patterns for GeoVision (GeoVision을 위한 멀티 센서 기반 운전 패턴 인식)

  • Song, Chung-Won;Nam, Kwang-Woo;Lee, Chang-Woo
    • Annual Conference of KIPS / 2011.04a / pp.1185-1187 / 2011
  • This paper proposes a multi-sensor-based pattern analysis algorithm for analyzing a driver's driving patterns. Driving patterns are recognized by comparing and analyzing the correlations in the driving data obtained from the sensors. The gravity value acting on the accelerometer and the heading data from the geomagnetic sensor are used to improve the accuracy of recognizing each driving pattern.

Construction Site Scene Understanding: A 2D Image Segmentation and Classification

  • Kim, Hongjo;Park, Sungjae;Ha, Sooji;Kim, Hyoungkwan
    • International Conference on Construction Engineering and Project Management / 2015.10a / pp.333-335 / 2015
  • A computer vision-based scene recognition algorithm is proposed for monitoring construction sites. The system analyzes images acquired from a surveillance camera to separate regions and classify them as building, ground, or hole. The mean shift image segmentation algorithm is tested for separating meaningful regions of construction site images. The system would benefit current monitoring practices because the information extracted from images provides environmental context.
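
The segmentation primitive tested here, mean shift, can be sketched in one dimension on toy pixel intensities; a real implementation runs on joint spatial-color features with a kernel bandwidth, so this is only the mode-seeking core:

```python
import numpy as np

def mean_shift_modes(points, bandwidth=1.0, iters=30):
    """Shift every point toward its local density mode (flat kernel)."""
    points = np.asarray(points, dtype=float)
    shifted = points.copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            # mean of the original points within the bandwidth window
            neighbors = points[np.abs(points - p) <= bandwidth]
            shifted[i] = neighbors.mean()
    return shifted

# Toy 1-D "pixel intensities": a dark region and a bright region.
pixels = [10.0, 11.0, 12.0, 200.0, 201.0, 202.0]
modes = mean_shift_modes(pixels, bandwidth=20.0)
labels = (modes > 100).astype(int)   # cluster by converged mode
print(labels)  # [0 0 0 1 1 1]
```

Points that converge to the same mode belong to the same region, which is how mean shift separates the image into segments to be classified.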


Walking Features Detection for Human Recognition

  • Viet, Nguyen Anh;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.787-795 / 2008
  • Human recognition on camera is an interesting topic in computer vision. While fingerprint and face recognition have become common, gait is considered a new biometric feature for recognition at a distance. In this paper, we propose a gait recognition algorithm based on the knee angle, the distance between the two feet, the walking velocity, and the head direction of a person appearing in the camera view over one gait cycle. A background subtraction method is first used to extract the binary moving object; based on it, we then detect the leg region and the head region and obtain the gait features (leg angle, leg swing amplitude). Another feature, walking speed, can also be computed after a gait cycle finishes. We then compute the errors between the calculated features and the stored features for recognition. The method gives good results in tests on indoor and outdoor scenes in both lateral and oblique views.
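
Two of the listed features, the knee angle and the distance between the feet, reduce to simple geometry once joint positions are located. The 2-D coordinates below are hypothetical stand-ins for points detected from the silhouette:

```python
import numpy as np

def angle_at(joint, a, b):
    """Angle in degrees at `joint`, formed by segments joint->a and joint->b."""
    v1, v2 = np.asarray(a) - joint, np.asarray(b) - joint
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical image coordinates (pixels) for one leg and both feet.
hip, knee, ankle = np.array([0.0, 0.0]), np.array([0.0, 40.0]), np.array([10.0, 75.0])
left_foot, right_foot = np.array([10.0, 80.0]), np.array([55.0, 80.0])

knee_angle = angle_at(knee, hip, ankle)            # ~164 deg: nearly straight leg
feet_distance = float(np.linalg.norm(left_foot - right_foot))
print(round(knee_angle, 1), feet_distance)
```

Tracking these values over one gait cycle yields the per-person feature trajectories that are compared against the stored features for recognition.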


Detection of Low-Level Human Action Change for Reducing Repetitive Tasks in Human Action Recognition (사람 행동 인식에서 반복 감소를 위한 저수준 사람 행동 변화 감지 방법)

  • Noh, Yohwan;Kim, Min-Jung;Lee, DoHoon
    • Journal of Korea Multimedia Society / v.22 no.4 / pp.432-442 / 2019
  • Most current human action recognition methods are based on deep learning, which incurs a very high computational cost. In this paper, we propose an action change detection method to reduce repetitive human action recognition tasks. In reality, simple actions are often repeated, and applying high-cost action recognition methods to repeated actions is time-consuming. The proposed method decides whether the action has changed, and the recognizer is executed only when an action change is detected. The action change detection process is as follows. First, the number of non-zero pixels is extracted from the motion history image to generate one-dimensional time-series data. Second, an action change is detected by comparing the difference between the current trend and the local extremum of the time series against a threshold. In experiments, the proposed method achieved 89% balanced accuracy on action change data and reduced action recognition repetitions by 61%.
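
The detection step can be sketched as follows; the threshold, the simplified extremum-tracking rule, and the simulated counts are illustrative, not the paper's precise criterion:

```python
def detect_changes(nonzero_counts, threshold=50):
    """Flag frames where the motion-history pixel count jumps away from the
    running extremum of the current action's trend by more than `threshold`."""
    changes = []
    extremum = nonzero_counts[0]
    for t, count in enumerate(nonzero_counts[1:], start=1):
        if abs(count - extremum) > threshold:
            changes.append(t)        # action changed: run the recognizer here
            extremum = count         # reset the reference to the new level
        else:
            extremum = max(extremum, count)
    return changes

# Simulated per-frame non-zero pixel counts: one steady action, then a new one.
counts = [100, 105, 102, 108, 300, 305, 301, 303]
print(detect_changes(counts))  # [4]
```

The expensive deep-learning recognizer then runs only at the flagged frames instead of on every frame, which is the source of the reported reduction in repetition.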

An Efficient Deep Learning Based Image Recognition Service System Using AWS Lambda Serverless Computing Technology (AWS Lambda Serverless Computing 기술을 활용한 효율적인 딥러닝 기반 이미지 인식 서비스 시스템)

  • Lee, Hyunchul;Lee, Sungmin;Kim, Kangseok
    • KIPS Transactions on Software and Data Engineering / v.9 no.6 / pp.177-186 / 2020
  • Recent advances in deep learning have improved image recognition performance in computer vision, and serverless computing is emerging as a next-generation cloud computing technology for event-based cloud application development and services. Attempts to combine deep learning and serverless computing for real-world image recognition services are increasing. This paper therefore describes how to develop an efficient deep learning based image recognition service system using serverless computing technology. The proposed system serves large neural network models to users at low cost by using AWS Lambda, a serverless computing service. We also show how to effectively build a serverless computing system that uses a large neural network model by addressing the shortcomings of AWS Lambda: cold start time and capacity limitations. Through experiments, we confirmed that the proposed system is efficient for serving large neural network models, solving the processing-time and capacity limitations as well as reducing cost.
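
A Lambda-style handler for such a service can be sketched as below. The handler name, event shape, and stub model are assumptions for illustration; a real deployment loads a large network, which is where the cold-start and package-size issues the paper addresses arise. Note that code outside the handler runs once per warm container, so the model load is not repaid on every request:

```python
import json

# Hypothetical module-level model load: executed once per warm container,
# not on every invocation. A real service would load a large neural network.
MODEL = {"labels": ["cat", "dog"]}

def predict(image_bytes):
    """Stub inference; a real service would run the loaded network here."""
    return MODEL["labels"][len(image_bytes) % 2]

def handler(event, context):
    """AWS Lambda entry point: receives an event dict, returns a JSON response."""
    image_bytes = event.get("image", "").encode()
    return {"statusCode": 200,
            "body": json.dumps({"label": predict(image_bytes)})}

# Local invocation with a mock event (no AWS needed for the sketch).
print(handler({"image": "abc"}, None))
```

In practice, models too large for the deployment package are the capacity problem the paper tackles; common workarounds load weights from external storage at container start.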