• Title/Summary/Keyword: Human Tracking


Implementation of a Transcutaneous Power Transmission System for Implantable Medical Devices by Resonant Frequency Tracking Method (주파수 추적 방식에 의한 이식형 의료기기용 무선전력전달 장치 구현)

  • Lim, H.G.;Lee, J.W.;Kim, D.W.;Lee, J.H.;Seong, K.W.;Kim, M.N.;Cho, J.H.
    • Journal of Biomedical Engineering Research
    • /
    • v.31 no.5
    • /
    • pp.401-406
    • /
    • 2010
  • Recently, many implantable medical devices have been developed and manufactured in many countries. In these devices, energy is generally supplied by a transcutaneous method to avoid skin penetration by power wires. The most widely used transcutaneous power transmission method relies on electromagnetic coupling between two coils resonating at a specific frequency. However, a transcutaneous power transmitter that drives its electromagnetic coil at a fixed switching frequency may suffer inefficient power transmission and thermal damage caused by undesirable current variation, because the electromagnetic coupling between the primary and secondary coils is very sensitive to the skin thickness at each application site and of each person. To overcome these defects, a transcutaneous power transmitter whose operating frequency is automatically tracked to the resonance frequency of each environment has been designed and implemented. Experiments with different coil surroundings demonstrate that the implemented transmitter automatically tracks the resonance frequency as it shifts with arbitrary changes in skin thickness.
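
The abstract does not describe the tracking circuit itself. As a rough illustration of the idea only, assuming resonance can be found as the switching frequency that maximizes the measured primary-coil current, a minimal hill-climbing sketch in Python (with a simulated measurement and made-up frequencies) could look like this:

    # Hedged sketch of resonance-frequency tracking: not the authors' circuit,
    # just an illustration of locking onto the frequency of maximum coil current.

    def measure_coil_current(freq_hz):
        """Stand-in for a hardware measurement of primary-coil current at a
        given switching frequency, modeled as a simple resonance peak."""
        resonance_hz = 125_000.0          # assumed; shifts with skin thickness
        bandwidth_hz = 4_000.0
        return 1.0 / (1.0 + ((freq_hz - resonance_hz) / bandwidth_hz) ** 2)

    def track_resonance(start_hz=100_000.0, step_hz=500.0, iterations=200):
        """Hill-climb the switching frequency toward maximum coil current."""
        freq = start_hz
        for _ in range(iterations):
            here = measure_coil_current(freq)
            up = measure_coil_current(freq + step_hz)
            down = measure_coil_current(freq - step_hz)
            if up > here and up >= down:
                freq += step_hz
            elif down > here:
                freq -= step_hz
            else:
                break                     # local maximum: treat as resonance
        return freq

    print(f"tracked resonance: {track_resonance():.0f} Hz")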

An Analysis of the Area of Interest Based on the Visiting Intention and Existence of People in Cafe Space (카페공간에서 방문의도성과 대상체의 유무에 따른 관심영역 분석)

  • Kim, Ju-Yeon
    • Korean Institute of Interior Design Journal
    • /
    • v.25 no.5
    • /
    • pp.130-139
    • /
    • 2016
  • To determine how humans move in space, what they want, and through which visual information they act and choose, this study aims to define which parts of a space the gaze prefers. The ultimate goal is to extract data on human visual awareness and preference in space. The study analyzed observation characteristics, perceived through observation frequency and time, according to the purpose of cafe customers, in order to understand how the intention behind visiting the space affects the observation characteristics obtained as information through visual perception. The research methods are as follows. First, the preferred areas of the café space, identified by visual concentration, were analyzed by dividing two images, A and B (separated by the presence or absence of people), into a 12 by 12 grid. Second, the eye-tracking visual path during conscious gaze was analyzed. Third, although sections with higher observation frequency tend to have longer observation time, the areas of interest at grades I (3 sec/180), II (6 sec/360), and III (9 sec/540) showed higher frequency when visiting intention was present. The results of this study are as follows. First, the time range for searching or wandering and the observation characteristics could be estimated from the meaning of observation time by grade within the distribution of sections. Second, in the time distribution by section, observation time occupied a larger share when intention was present. In conclusion, this study examines the correlation of human concentration while gazing at images of space. It is an exploratory study of research methodology, and it aims to develop methodologies and provide basic data for planning attractive spaces in light of the subconscious of consumers, by interpreting gaze data related to concentration.
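
As an illustration of the 12 by 12 grid analysis described above, the following Python sketch (not the authors' tooling; the fixation data, image size, and binning are assumptions) aggregates eye-tracking fixations into per-cell observation frequency and dwell time:

    # Hedged sketch: bin gaze fixations into a 12 x 12 grid of areas of interest.
    import numpy as np

    def grid_aoi_stats(fixations, img_w, img_h, rows=12, cols=12):
        """fixations: iterable of (x_px, y_px, duration_s).
        Returns per-cell fixation counts and total dwell time in seconds."""
        counts = np.zeros((rows, cols), dtype=int)
        dwell = np.zeros((rows, cols), dtype=float)
        for x, y, dur in fixations:
            r = min(int(y / img_h * rows), rows - 1)
            c = min(int(x / img_w * cols), cols - 1)
            counts[r, c] += 1
            dwell[r, c] += dur
        return counts, dwell

    # Toy usage with made-up fixations on a 1200 x 900 pixel image.
    fixations = [(100, 120, 0.25), (105, 130, 0.40), (900, 700, 0.30)]
    counts, dwell = grid_aoi_stats(fixations, img_w=1200, img_h=900)
    print(counts.sum(), round(dwell.sum(), 2))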

Realtime Facial Expression Recognition from Video Sequences Using Optical Flow and Expression HMM (광류와 표정 HMM에 의한 동영상으로부터의 실시간 얼굴표정 인식)

  • Chun, Jun-Chul;Shin, Gi-Han
    • Journal of Internet Computing and Services
    • /
    • v.10 no.4
    • /
    • pp.55-70
    • /
    • 2009
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In that sense, inferring the emotional state of a person from facial expression recognition is an important issue. In this paper, we present a novel approach to recognizing facial expressions from a sequence of input images using emotion-specific HMMs (Hidden Markov Models) and facial motion tracking based on optical flow. Conventionally, in an HMM consisting of basic emotional states, it is considered natural that transitions between emotions must pass through the neutral state. In this work, however, we propose an enhanced transition framework that, in addition to the traditional transition model, allows transitions between emotional states without passing through the neutral state. For localizing facial features in the video sequence we exploit template matching and optical flow. The facial feature displacements traced by the optical flow are used as input parameters to the HMM for facial expression recognition. The experiments show that the proposed framework can effectively recognize facial expressions in real time.
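
A minimal sketch of the classification stage, assuming one HMM per expression trained on sequences of facial-feature displacements (the sequences below are synthetic stand-ins for the optical-flow output) and using the hmmlearn package, which the paper does not specify:

    # Hedged sketch: classify an expression by scoring a displacement sequence
    # against per-expression Gaussian HMMs. Data here are synthetic.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM   # assumed dependency

    rng = np.random.default_rng(0)

    def synthetic_sequence(mean, length=40, dim=4):
        """Fake per-frame displacements (e.g., dx, dy of two tracked features)."""
        return mean + 0.1 * rng.standard_normal((length, dim))

    # Train one HMM per expression on its own sequences.
    train = {"smile": synthetic_sequence(0.5), "surprise": synthetic_sequence(-0.5)}
    models = {}
    for label, seq in train.items():
        m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
        m.fit(seq)
        models[label] = m

    # Classify a new sequence by maximum log-likelihood.
    test_seq = synthetic_sequence(0.5)
    scores = {label: m.score(test_seq) for label, m in models.items()}
    print(max(scores, key=scores.get))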


An Application of AdaBoost Learning Algorithm and Kalman Filter to Hand Detection and Tracking (AdaBoost 학습 알고리즘과 칼만 필터를 이용한 손 영역 탐지 및 추적)

  • Kim, Byeong-Man;Kim, Jun-Woo;Lee, Kwang-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.4 s.36
    • /
    • pp.47-56
    • /
    • 2005
  • With the development of wearable (ubiquitous) computers, traditional interfaces between humans and computers are gradually becoming uncomfortable to use, which leads directly to the need for a new one. In this paper, we study a new interface in which the computer tries to recognize human gestures through a digital camera. Because camera-based hand gesture recognition is affected by the surrounding environment, such as lighting, the detector should not be overly sensitive to such conditions. Recently, Viola's detector has shown favorable results in face detection, where the AdaBoost learning algorithm is used with Haar features computed from the integral image. We apply this method to hand area detection and carry out comparative experiments against the classic method based on skin color. Experimental results show that Viola's detector is more robust than skin-color detection in environments where degradation may occur due to surroundings such as lighting.
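
A hedged sketch of the detect-then-track combination, assuming an OpenCV Haar cascade (the hand cascade file name is a placeholder that would have to be trained or supplied) feeding a constant-velocity Kalman filter; it illustrates the general technique rather than the authors' implementation:

    # Hedged sketch: Haar/AdaBoost detections as measurements for a Kalman filter.
    import numpy as np
    import cv2

    detector = cv2.CascadeClassifier("hand_cascade.xml")    # hypothetical cascade file

    kf = cv2.KalmanFilter(4, 2)                              # state: x, y, vx, vy
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
    kf.errorCovPost = np.eye(4, dtype=np.float32)            # high initial uncertainty

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        prediction = kf.predict()                            # predicted hand center
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = detector.detectMultiScale(gray, 1.1, 4)
        if len(boxes) > 0:
            x, y, w, h = boxes[0]
            center = np.array([[x + w / 2], [y + h / 2]], dtype=np.float32)
            kf.correct(center)                               # update with the detection
        cv2.circle(frame, (int(prediction[0, 0]), int(prediction[1, 0])), 5, (0, 255, 0), -1)
        cv2.imshow("hand tracking sketch", frame)
        if cv2.waitKey(1) == 27:                             # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()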


Tracking Intravenous Adipose-Derived Mesenchymal Stem Cells in a Model of Elastase-Induced Emphysema

  • Kim, You-Sun;Kim, Ji-Young;Shin, Dong-Myung;Huh, Jin Won;Lee, Sei Won;Oh, Yeon-Mok
    • Tuberculosis and Respiratory Diseases
    • /
    • v.77 no.3
    • /
    • pp.116-123
    • /
    • 2014
  • Background: Mesenchymal stem cells (MSCs) obtained from bone marrow or adipose tissue can successfully repair emphysematous animal lungs, emphysema being a characteristic of chronic obstructive pulmonary disease. Here, we describe the cellular distribution of MSCs that were intravenously injected into mice with elastase-induced emphysema, compared with the distribution in control mice without emphysema. Methods: We used fluorescence optical imaging with quantum dots (QDs) to track intravenously injected MSCs. In addition, we used a human Alu sequence-based real-time polymerase chain reaction method to assess the lungs, liver, kidney, and spleen in mice with elastase-induced emphysema and in control mice at 1, 4, 24, 72, and 168 hours after MSC injection. Results: The injected MSCs were detected by QD fluorescence at 1 and 4 hours post-injection, and the human Alu sequence was detected at 1, 4, and 24 hours post-injection in control mice (lungs only). More injected MSCs remained in the lungs of mice with elastase-induced emphysema at 1, 4, and 24 hours after injection than in control lungs without emphysema. Conclusion: Our results show that injected MSCs were observed in the lungs at 1 and 4 hours post-injection and that more MSCs remained in lungs with emphysema.

Detection Accuracy Improvement of Hand Region using Kinect (키넥트를 이용한 손 영역 검출의 정확도 개선)

  • Kim, Heeae;Lee, Chang Woo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.11
    • /
    • pp.2727-2732
    • /
    • 2014
  • Recently, object tracking and recognition using Microsoft's Kinect have been actively studied. In this environment, human hand detection and tracking is the most basic technique for human-computer interaction. This paper proposes a method for improving the accuracy of the detected hand region's boundary against a cluttered background. To do this, we combine the hand detection results obtained using skin color with the depth image extracted from the Kinect. The experimental results show that the proposed method increases the accuracy of hand region detection compared with detecting the hand region from the depth image alone. If the proposed method is applied to a sign language or gesture recognition system, it is expected to contribute much to accuracy improvement.
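
The following Python sketch illustrates the general idea of fusing a depth gate with a skin-color mask; the depth range, color thresholds, and synthetic frames are assumptions rather than values from the paper:

    # Hedged sketch: refine a depth-based hand mask with a skin-color mask.
    import numpy as np
    import cv2

    def hand_mask(bgr_frame, depth_mm, near_mm=400, far_mm=900):
        """Combine a depth-range gate with a rough YCrCb skin-color threshold."""
        depth_gate = ((depth_mm >= near_mm) & (depth_mm <= far_mm)).astype(np.uint8) * 255
        ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
        skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # commonly used skin range
        combined = cv2.bitwise_and(depth_gate, skin)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        return cv2.morphologyEx(combined, cv2.MORPH_OPEN, kernel)  # remove speckle noise

    # Toy usage with random arrays standing in for Kinect color and depth frames.
    color = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    depth = np.random.randint(300, 2000, (480, 640), dtype=np.uint16)
    print(hand_mask(color, depth).shape)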

Object Detection Based on Deep Learning Model for Two Stage Tracking with Pest Behavior Patterns in Soybean (Glycine max (L.) Merr.)

  • Yu-Hyeon Park;Junyong Song;Sang-Gyu Kim ;Tae-Hwan Jun
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2022.10a
    • /
    • pp.89-89
    • /
    • 2022
  • Soybean (Glycine max (L.) Merr.) is a representative food resource. To preserve the integrity of soybean, it is necessary to protect soybean yield and seed quality from the threats of various pests and diseases. Riptortus pedestris is a well-known insect pest that causes the greatest loss of soybean yield in South Korea. This pest not only directly reduces yields but also causes disorders and diseases in plant growth. Unfortunately, no resistant soybean resources have been reported. It is therefore necessary to identify the distribution and movement of Riptortus pedestris at an early stage to reduce the damage it causes. Conventionally, the diagnosis of agronomic traits related to pest outbreaks has been performed by the human eye; because human vision is subjective and inconsistent, this is time-consuming, requires the assistance of specialists, and is labor-intensive. Therefore, the responses and behavior patterns of Riptortus pedestris to the scent of mixture R were visualized with a 3D model from the perspective of artificial intelligence. The movement patterns of Riptortus pedestris were analyzed using time-series image data, and classification was performed through visual analysis based on a deep learning model. In the object tracking, implemented using the YOLO series of models, the movement paths of the pests show a negative reaction to mixture R in the video scenes. As a result of 3D modeling using the x, y, and z axes of the tracked objects, 80% of the subjects showed behavioral patterns consistent with the treatment of mixture R. These studies are also being conducted in the soybean field, and it will be possible to preserve soybean yield by applying a pest control platform at the early stage of soybean growth.
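
The paper's tracking stage builds on YOLO detections; as a hedged illustration of a simple second stage, the sketch below links per-frame bounding boxes (synthetic stand-ins for detector output) into tracks by nearest-centroid association:

    # Hedged sketch: link per-frame detections into tracks by nearest centroid.
    import numpy as np

    class CentroidTracker:
        def __init__(self, max_dist=50.0):
            self.next_id = 0
            self.tracks = {}                  # track id -> last centroid (x, y)
            self.max_dist = max_dist

        def update(self, boxes):
            """boxes: list of (x1, y1, x2, y2). Returns {centroid: track id}."""
            assigned = {}
            for x1, y1, x2, y2 in boxes:
                c = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
                best_id, best_d = None, self.max_dist
                for tid, prev in self.tracks.items():
                    d = float(np.hypot(c[0] - prev[0], c[1] - prev[1]))
                    if d < best_d and tid not in assigned.values():
                        best_id, best_d = tid, d
                if best_id is None:           # no close track: start a new one
                    best_id = self.next_id
                    self.next_id += 1
                self.tracks[best_id] = c
                assigned[c] = best_id
            return assigned

    tracker = CentroidTracker()
    print(tracker.update([(10, 10, 30, 30), (200, 200, 220, 220)]))   # frame 1
    print(tracker.update([(14, 12, 34, 32), (198, 205, 218, 225)]))   # frame 2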


Real-time Finger Gesture Recognition (실시간 손가락 제스처 인식)

  • Park, Jae-Wan;Song, Dae-Hyun;Lee, Chil-Woo
    • 한국HCI학회:학술대회논문집
    • /
    • 2008.02a
    • /
    • pp.847-850
    • /
    • 2008
  • Today, machines are increasingly developed to communicate mutually with humans. In vision-based HCI (Human Computer Interaction) systems, techniques for recognizing and tracking fingers are important. To segment the finger, this paper uses background subtraction, which separates the foreground from the background, so that the finger can be segmented from both limited and cluttered backgrounds. The fingertip is then recognized by template matching against identified fingertip images, and gestures are recognized by comparing the tracked trajectory of the recognized finger with the identified gestures. After obtaining the region of interest, template matching is performed only within that region rather than over the entire subtraction image. The emphasis is therefore placed on reducing processing and response time, and we propose a technique that recognizes gestures more effectively.
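
A rough sketch of the approach as described (background subtraction to obtain a region of interest, then template matching only inside that region), using OpenCV; the template file, thresholds, and structure are assumptions rather than the authors' code:

    # Hedged sketch: background subtraction -> region of interest -> template match.
    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    fingertip_template = cv2.imread("fingertip_template.png", cv2.IMREAD_GRAYSCALE)  # placeholder file

    def find_fingertip(frame_bgr):
        """Return the fingertip position in the frame, or None if not found."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        fg = subtractor.apply(frame_bgr)                     # foreground mask
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        roi = gray[y:y + h, x:x + w]                         # search only inside the ROI
        th, tw = fingertip_template.shape
        if roi.shape[0] < th or roi.shape[1] < tw:
            return None
        scores = cv2.matchTemplate(roi, fingertip_template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(scores)
        if max_val < 0.6:                                    # illustrative threshold
            return None
        return (x + max_loc[0], y + max_loc[1])              # fingertip position in the frame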


Immersive user interfaces for visual telepresence in human-robot interaction (사람과 로봇간 원격작동을 위한 몰입형 사용자 인터페이스)

  • Jang, Su-Hyeong
    • 한국HCI학회:학술대회논문집
    • /
    • 2009.02a
    • /
    • pp.406-410
    • /
    • 2009
  • As studies on more realistic human-robot interfaces are being actively carried out, interest in telepresence, which remotely controls a robot and obtains environmental information through a video display, is increasing. In order to provide natural telepresence services by moving a remote robot, it is necessary to recognize the user's behavior. The recognition of user movements used in previous telepresence systems was difficult and costly to implement, which limited its application to human-robot interaction. In this paper, using the Nintendo Wii controller, which is attracting a lot of attention these days, together with infrared LEDs, we propose an immersive user interface that easily recognizes the user's position and gaze direction and provides remote video information through an HMD.
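
The abstract does not spell out the pose-estimation math; as a hedged sketch of how two infrared LED dots seen by the Wii remote's IR camera (a 1024 x 768 sensor) could yield a head position estimate, with the LED spacing and field of view assumed rather than taken from the paper:

    # Hedged sketch: head yaw and distance from two IR LED dots on the user's head.
    import math

    IR_CAM_W, IR_CAM_H = 1024, 768
    FOV_X_RAD = math.radians(45.0)        # assumed horizontal field of view
    LED_SEPARATION_M = 0.15               # assumed spacing between the two LEDs

    def head_pose(dot1, dot2):
        """dot1, dot2: (x, y) pixel coordinates of the two IR dots.
        Returns (yaw_rad, distance_m) of the head relative to the camera axis."""
        mid_x = (dot1[0] + dot2[0]) / 2.0
        yaw = (mid_x / IR_CAM_W - 0.5) * FOV_X_RAD        # offset from the optical axis
        dx, dy = dot1[0] - dot2[0], dot1[1] - dot2[1]
        sep_rad = math.hypot(dx, dy) / IR_CAM_W * FOV_X_RAD
        distance = LED_SEPARATION_M / (2.0 * math.tan(sep_rad / 2.0))
        return yaw, distance

    print(head_pose((400, 380), (620, 390)))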


2D Human Pose Estimation Using Component-Based Density Propagation (구성요소 기반 확률 전파를 이용한 2D 사람 자세 추정)

  • Cha, Eun-Mi;Lee, Kyoung-Mi
    • 한국HCI학회:학술대회논문집
    • /
    • 2007.02a
    • /
    • pp.725-730
    • /
    • 2007
  • In this paper, each body part needed for human tracking is detected as a separate component, and each component is estimated individually through a human body model that connects them. Among the components of the body, the six parts most necessary for motion tracking are detected and tracked: the head, torso, left arm, right arm, left leg, and right leg. Using the center position and color information of each component, correspondence is established between the previous frame and the current frame, and each component is tracked individually through density (probability) propagation. We propose a method that estimates human motion by combining the tracking results of the individual components through component-based density propagation. Using color information such as skin color in the input image, together with the center position and color information of each body part of the human model, the motion being performed can be estimated through probability propagation. The proposed human motion tracking system was applied to seven movements used in early-childhood movement education: walking, running, hopping, bending, stretching, balancing, and turning. Each component of the proposed human model was detected independently with a high average recognition rate of 96%, and experiments on the seven movements above achieved an average success rate of 88.5%, demonstrating the validity of the proposed method.
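
As an illustration of density propagation for a single component, the following Python sketch (a generic color-weighted particle filter with made-up parameters, not the authors' system) propagates hypotheses for one body part's center from frame to frame:

    # Hedged sketch: particle-filter style density propagation for one component.
    import numpy as np

    rng = np.random.default_rng(1)

    def propagate_component(frame_rgb, particles, ref_color, motion_std=8.0, color_std=30.0):
        """particles: (N, 2) array of (x, y) hypotheses for the component center.
        Returns resampled particles and the weighted mean position."""
        h, w, _ = frame_rgb.shape
        particles = particles + rng.normal(0.0, motion_std, particles.shape)  # diffuse
        particles[:, 0] = np.clip(particles[:, 0], 0, w - 1)
        particles[:, 1] = np.clip(particles[:, 1], 0, h - 1)
        xs, ys = particles[:, 0].astype(int), particles[:, 1].astype(int)
        diffs = frame_rgb[ys, xs].astype(float) - np.asarray(ref_color, dtype=float)
        weights = np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * color_std ** 2))
        total = weights.sum()
        weights = weights / total if total > 0 else np.full(len(particles), 1.0 / len(particles))
        estimate = (particles * weights[:, None]).sum(axis=0)
        idx = rng.choice(len(particles), size=len(particles), p=weights)      # resample
        return particles[idx], estimate

    # Toy usage: one synthetic frame with a skin-colored block near (120, 80).
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    frame[60:100, 100:140] = (205, 170, 140)
    particles = rng.uniform([90, 50], [150, 110], size=(200, 2))
    particles, estimate = propagate_component(frame, particles, ref_color=(205, 170, 140))
    print(estimate)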
