• Title/Summary/Keyword: bio-based visual information processing (생체 기반 시각정보처리)

Search results: 6

A Bio-Inspired Modeling of Visual Information Processing for Action Recognition (생체 기반 시각정보처리 동작인식 모델링)

  • Kim, JinOk
    • KIPS Transactions on Software and Data Engineering / v.3 no.8 / pp.299-308 / 2014
  • Recent literature on information processing has reported research inspired by the remarkable human ability to recognize and categorize highly complex visual patterns such as body motions and facial expressions. Building on this perceptual ability, classifying visual sequences without context information is a crucial task for computer vision, covering both the coding and the retrieval of spatio-temporal patterns. This paper presents a biologically based action recognition model for computer vision, inspired by the visual information processing the human brain performs when recognizing actions in visual sequences. The proposed model employs the neural-field structure of bio-inspired visual perception to detect motion sequences and discriminate visual patterns as the human brain does. Experimental results show that the proposed model not only accounts for several biological properties of visual information processing but is also tolerant of time warping. Furthermore, the model yields more robust temporal evolution of classification than prior action recognition research. The presented model contributes to implementing bio-inspired visual processing systems such as intelligent robot agents.
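The neural-field dynamics such models build on can be sketched as a simple Amari-style lateral-interaction update. The Mexican-hat kernel shape, time constants, and stimulus below are illustrative assumptions for the sketch, not parameters from the paper.

```python
import numpy as np

def neural_field_step(u, stimulus, w, dt=0.1, tau=1.0, h=-1.0):
    """One Euler step of an Amari-type neural field:
    tau * du/dt = -u + h + stimulus + w (*) f(u),
    where f is a threshold firing nonlinearity and (*) is convolution."""
    f = (u > 0).astype(float)                 # binary firing rate
    lateral = np.convolve(f, w, mode="same")  # lateral interaction
    return u + dt / tau * (-u + h + stimulus + lateral)

# Mexican-hat kernel: local excitation, broader surround inhibition
x = np.linspace(-5, 5, 51)
w = 2.0 * np.exp(-x**2 / 2) - 1.2 * np.exp(-x**2 / 8)

# A localized stimulus drives a self-stabilizing activity peak
u = np.full(100, -1.0)
stim = np.zeros(100)
stim[45:55] = 3.0
for _ in range(200):
    u = neural_field_step(u, stim, w)
```

Once the stimulus pushes activity above threshold, local excitation sustains a peak while surround inhibition keeps it localized; this peak dynamics is the mechanism neural-field models use to track a motion pattern over time.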

A Study on the Analysis of Research Trends on the Attention Monitoring of Drivers During Driving Tasks (주행 시 운전자의 운전작업 중 주의집중 모니터링에 대한 연구 동향 분석)

  • Han, Gaeul;Kim, Jongbae
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.383-386 / 2021
  • This paper surveys methods for monitoring whether a driver is attending to the road while driving, analyzes recent research trends, and proposes a way to warn drivers of situations in autonomous vehicles that will require their attention. The survey shows that most methods are based on either visual data or biosignals. Of the two, this study focuses on visual-data-based methods, analyzing approaches that identify driver attention from video collected by cameras installed in the vehicle. We show that monitoring driver attention using HoG (histogram of oriented gradients) features and deep learning on driving video is effective. We further show that the driver-monitoring methods analyzed in this survey can be applied to a driver-inattention warning system for autonomous vehicles.
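The HoG pipeline the surveyed methods rely on can be sketched in a few lines of NumPy. The cell size and bin count below are common defaults, not values taken from the surveyed papers, and real systems typically add block normalization over neighboring cells.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal histogram of oriented gradients: per-cell histograms of
    gradient orientation, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))  # normalize
    return np.concatenate(feats)

# A 64x64 frame yields an 8x8 grid of cells with 9 bins each = 576 features
frame = np.random.default_rng(0).random((64, 64))
f = hog_features(frame)
```

The resulting descriptor is what a downstream classifier or deep network would consume to decide whether the driver's gaze and posture indicate attention.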

Design and Implementation of Healthcare System Based on Non-Contact Biosignal Measurement (비접촉 생체신호 측정 기반 헬스케어 시스템 설계 및 구현)

  • Hong, Seong-Pyo
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.1 / pp.185-190 / 2020
  • Rapid aging is aggravating the shortage of medical facilities and the resulting decline in the quality of public health. To ease the burden of rising medical expenses, advanced medical institutions are expanding remote medical care to lower service costs. U-healthcare detects the physical and chemical changes occurring in the human body, converts them into electrical signals that can be processed, selects only the desired information from the measured signals through analysis and visualization, and feeds the results back to the user as alarms. However, traditional biosignal measurement methods that attach sensors directly to the body can be inconvenient and are often rejected in daily life. A method is therefore needed that continuously measures biometric information without disturbing daily life. In this paper, we propose an IR-UWB-based non-contact, non-constraint respiration measurement system that can continuously monitor biosignals without any inconvenience to daily life.
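Once a non-contact radar channel yields a slow chest-motion signal, the respiration rate can be estimated from its dominant spectral peak. The synthetic 0.25 Hz signal below (15 breaths per minute), the sample rate, and the 0.1-0.7 Hz search band are illustrative assumptions, not details from the paper.

```python
import numpy as np

def respiration_rate_bpm(signal, fs):
    """Estimate breaths per minute as the dominant FFT peak of the
    detrended chest-motion signal, searched in the 0.1-0.7 Hz band."""
    sig = signal - np.mean(signal)
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)   # plausible breathing range
    peak = freqs[band][np.argmax(spec[band])]
    return peak * 60.0

fs = 20.0                                    # 20 Hz sample rate
t = np.arange(0, 60, 1 / fs)                 # one minute of data
chest = (np.sin(2 * np.pi * 0.25 * t)        # 0.25 Hz breathing motion
         + 0.1 * np.random.default_rng(1).standard_normal(t.size))
rate = respiration_rate_bpm(chest, fs)
```

Restricting the peak search to a physiologically plausible band is what makes this robust to low-frequency drift and higher-frequency motion artifacts.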

Face Emotion Recognition using ResNet with Identity-CBAM (Identity-CBAM ResNet 기반 얼굴 감정 식별 모듈)

  • Oh, Gyutea;Kim, Inki;Kim, Beomjun;Gwak, Jeonghwan
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.559-561 / 2022
  • As the AI era unfolds, technologies that recognize and respond to human emotions are advancing rapidly to provide personalized environments. Human emotion can be recognized from the face, voice, body motion, or biosignals, and among these the facial expression is the most intuitive and accessible. This paper therefore proposes an Identity-CBAM module for accurate facial emotion recognition, built from the gates of the Convolutional Block Attention Module (CBAM) together with residual blocks and skip connections. The CBAM gates and residual blocks emphasize the key feature information of each expression, making the model more context-aware, while the skip connections make it robust to vanishing and exploding gradients. Using the AI-HUB composite video dataset for Korean emotion recognition, we classify six classes; applying the Identity-CBAM module improves F1-score by 0.4-2.7% and accuracy by 0.18-2.03% over vanilla ResNet50 and ResNet101. Visualization with Guided Backpropagation and Guided Grad-CAM further confirms that the module captures key feature points more precisely. These results demonstrate that for in-image expression classification, using the Identity-CBAM module is more suitable than using vanilla ResNet50 or ResNet101 alone.
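The channel-attention gate of CBAM that the module builds on can be sketched in NumPy. The shared two-layer MLP weights here are random placeholders, and the identity addition at the end mirrors the skip-connection idea the paper combines with CBAM; this is a sketch of the mechanism, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """CBAM channel gate: a shared MLP over global average- and
    max-pooled descriptors, summed and squashed into per-channel
    weights in (0, 1) that rescale the feature map."""
    avg = x.mean(axis=(1, 2))                    # (C,) average-pooled
    mx = x.max(axis=(1, 2))                      # (C,) max-pooled
    gate = sigmoid(w2 @ np.maximum(w1 @ avg, 0)  # shared MLP, ReLU hidden
                   + w2 @ np.maximum(w1 @ mx, 0))
    return x * gate[:, None, None]               # rescale each channel

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C))            # reduction ratio 2
w2 = rng.standard_normal((C, C // 2))
out = channel_attention(x, w1, w2) + x           # identity (skip) connection
```

CBAM additionally applies a spatial gate after the channel gate; the identity addition keeps an unattenuated path through the block, which is what protects gradients from vanishing.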

Bio-mimetic Recognition of Action Sequence using Unsupervised Learning (비지도 학습을 이용한 생체 모방 동작 인지 기반의 동작 순서 인식)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services / v.15 no.4 / pp.9-20 / 2014
  • Making good predictions about the outcome of one's actions is essential in social interaction and decision-making. This paper proposes a computational model for learning articulated motion patterns for action recognition that mimics the biologically inspired visual perception processing of the human brain. The developed cortical architecture for the unsupervised learning of motion sequences builds on neurophysiological knowledge about cortical sites such as IT, MT, and STS, and on the specific neuronal representations that contribute to articulated motion perception. Experiments show how the model automatically selects significant motion patterns as well as meaningful static snapshot categories from continuous video input. Such key poses correspond to articulated postures, which are used when probing the trained network to impose implied motion perception from static views. We also show how sequence-selective representations are learned in STS by fusing snapshot and motion input, and how learned feedback connections enable predictions about future input sequences. Network simulations demonstrate the computational capacity of the proposed model for motion recognition.
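The unsupervised selection of snapshot categories can be illustrated with a simple winner-take-all competitive-learning rule, in which each frame pulls its nearest prototype toward itself. The toy pose vectors, learning rate, and prototype count below are assumptions for illustration, not the paper's cortical model.

```python
import numpy as np

def learn_prototypes(frames, n_protos, lr=0.1, epochs=20, seed=0):
    """Winner-take-all competitive learning: each frame moves its
    nearest prototype toward itself, yielding unsupervised snapshot
    categories (key poses)."""
    rng = np.random.default_rng(seed)
    protos = frames[rng.choice(len(frames), n_protos, replace=False)].copy()
    for _ in range(epochs):
        for f in frames:
            k = np.argmin(np.linalg.norm(protos - f, axis=1))  # winner
            protos[k] += lr * (f - protos[k])                  # pull winner
    return protos

# Toy "pose" vectors drawn around two distinct postures (near 0 and near 1)
rng = np.random.default_rng(1)
poses = np.vstack([rng.normal(0.0, 0.1, (50, 4)),
                   rng.normal(1.0, 0.1, (50, 4))])
protos = learn_prototypes(poses, n_protos=2)
```

The learned prototypes settle near the two underlying postures, playing the role of the static snapshot categories that the sequence-selective stage then fuses with motion input.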

Grasping a Target Object in Clutter with an Anthropomorphic Robot Hand via RGB-D Vision Intelligence, Target Path Planning and Deep Reinforcement Learning (RGB-D 환경인식 시각 지능, 목표 사물 경로 탐색 및 심층 강화학습에 기반한 사람형 로봇손의 목표 사물 파지)

  • Ryu, Ga Hyeon;Oh, Ji-Heon;Jeong, Jin Gyun;Jung, Hwanseok;Lee, Jin Hyuk;Lopez, Patricio Rivera;Kim, Tae-Seong
    • KIPS Transactions on Software and Data Engineering / v.11 no.9 / pp.363-370 / 2022
  • Grasping a target object among clutter without collision requires machine intelligence: environment recognition, target and obstacle recognition, collision-free path planning, and the grasping intelligence of robot hands. In this work, we implement such a system in both simulation and hardware to grasp a target object without collision. An RGB-D image sensor recognizes the environment and objects. Various path-finding algorithms were implemented and tested to find collision-free paths. Finally, for an anthropomorphic robot hand, grasping intelligence is learned through deep reinforcement learning. In our simulation environment, grasping a target out of five clutter objects showed an average success rate of 78.8% and a collision rate of 34% without path planning, whereas the system combined with path planning showed an average success rate of 94% and an average collision rate of 20%. In our hardware environment, grasping a target out of three clutter objects showed an average success rate of 30% and a collision rate of 97% without path planning, whereas with path planning it showed an average success rate of 90% and an average collision rate of 23%. Our results show that grasping a target object in clutter is feasible with vision intelligence, path planning, and deep reinforcement learning.
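The collision-free path-planning component can be illustrated with breadth-first search on an occupancy grid. The grid, obstacle layout, and 4-connectivity are illustrative assumptions; the paper reports testing several path-finding algorithms rather than this specific one.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest collision-free path on a 4-connected occupancy grid
    (1 = obstacle). Returns the list of cells from start to goal,
    or None if the goal is unreachable."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < h and 0 <= nc < w
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# 0 = free, 1 = clutter objects blocking the straight approach
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 2))
```

Because BFS expands cells in order of distance, the returned path routes around the clutter with the minimum number of moves, which is the property a grasp approach trajectory needs before the learned grasping policy takes over.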