• Title/Summary/Keyword: web-cam

Search results: 96

GeoSensor Data Stream Processing System for u-GIS Computing (u-GIS 컴퓨팅을 위한 GeoSensor 데이터 스트림 처리 시스템)

  • Chung, Weon-Il;Shin, Soong-Sun;Back, Sung-Ha;Lee, Yeon;Lee, Dong-Wook;Kim, Kyung-Bae;Lee, Chung-Ho;Kim, Ju-Wan;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society / v.11 no.1 / pp.9-16 / 2009
  • In ubiquitous spatial computing environments, GeoSensors generate sensor data streams that include spatial information as well as conventional sensor data from RFID, WSN, web cams, digital cameras, CCTV, and telematics units. GeoSensors enable a wide range of ubiquitous USN technologies and services based on geographic information. To serve u-GIS applications built on GeoSensors, sensor data streams coming from GeoSensors over a wide area must be processed efficiently. In this paper, we propose a GeoSensor data stream processing system for u-GIS computing over real-time stream data from GeoSensors with geographic information. The proposed system provides efficient gathering, storing, and continuous query processing of GeoSensor data streams, and also makes it possible to develop diverse u-GIS applications that meet each user's requirements effectively.

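The abstract above does not give implementation details, but continuous query processing over a windowed GeoSensor stream can be sketched roughly as below. The tuple layout, the 10-second tumbling window, and the per-region average query are illustrative assumptions, not the authors' design.

```python
# Hypothetical sketch of windowed continuous-query processing over a GeoSensor
# stream. The tuple layout (sensor_id, lon, lat, temperature, timestamp) and the
# 10-second tumbling window are assumptions for illustration only.
from collections import defaultdict

WINDOW_SEC = 10  # tumbling-window length in seconds

def continuous_avg_by_region(stream):
    """Yield (window_start, region, avg_temperature) each time a window closes."""
    window_start, sums, counts = None, defaultdict(float), defaultdict(int)
    for sensor_id, lon, lat, temp, ts in stream:
        if window_start is None:
            window_start = ts
        if ts - window_start >= WINDOW_SEC:           # window closes: emit results
            for region in sums:
                yield window_start, region, sums[region] / counts[region]
            window_start, sums, counts = ts, defaultdict(float), defaultdict(int)
        region = (int(lon), int(lat))                  # coarse 1-degree grid cell
        sums[region] += temp
        counts[region] += 1

# usage: for row in continuous_avg_by_region(geosensor_stream()): print(row)
```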

Design and Implementation of a Real-Time Emotional Avatar (실시간 감정 표현 아바타의 설계 및 구현)

  • Jung, Il-Hong;Cho, Sae-Hong
    • Journal of Digital Contents Society / v.7 no.4 / pp.235-243 / 2006
  • This paper presents an efficient method for expressing the emotion of an avatar based on facial expression recognition. Instead of changing the avatar's facial expression manually, the method changes it in real time based on recognition of facial patterns captured by a web cam. A tool recognizes the relevant regions of the images captured by the web cam; because it uses a model-based approach, it recognizes the images faster than template-based or network-based approaches. The tool extracts the shape of the user's lips after locating the eyes with the model-based approach. From changes in lip patterns, we define 6 avatar facial expressions based on 13 standard lip patterns, and the avatar switches expressions quickly by loading a pre-defined avatar with the corresponding expression.

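As a rough illustration of the pipeline described above (webcam capture, face/eye localization, mapping lip patterns to a small set of avatar expressions), the sketch below uses OpenCV Haar cascades; the lip-pattern matcher and the expression labels are placeholders, not the authors' model-based recognizer.

```python
# Hypothetical sketch: capture webcam frames, detect the face with an OpenCV Haar
# cascade, and map an (assumed) lip-pattern index to one of six expressions.
import cv2

EXPRESSIONS = ["neutral", "smile", "surprise", "sad", "angry", "laugh"]  # placeholder labels

def classify_lip_pattern(mouth_roi) -> int:
    """Placeholder for the paper's 13-standard-lip-pattern matcher."""
    return 0  # always 'neutral' in this sketch

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        mouth_roi = gray[y + 2 * h // 3 : y + h, x : x + w]   # rough lower-face region
        expression = EXPRESSIONS[classify_lip_pattern(mouth_roi)]
        cv2.putText(frame, expression, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("avatar-expression", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```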

A Design and Implementation of Web based Integrated Nesting System (웹 기반의 네스팅 통합시스템 설계와 구현)

  • Ryu, Gab-Sang;Choi, Jin-Young;Kim, Il-Gon
    • Journal of KIISE: Computer Systems and Theory / v.33 no.1_2 / pp.44-51 / 2006
  • In this paper, we present a network-based integrated nesting system that adopts a concurrent engineering concept. Generally, sheet cutting processes proceed sequentially in a fixed order. In this study, however, the special characteristics of these processes were analyzed and the processes were redesigned in a distributed fashion so that they could be operated in a client/server environment. The necessary sheet material cutting capabilities (part CAD, nesting, part CAM, and the NC post-processor) were implemented so that many workers can handle them all on the Web. Compared to existing commonly used systems, the developed system provides an integrated work environment for cutting automation, allows job collaboration between workers, and has been shown to shorten the standby time for work. By grafting computer system technologies onto the sheet material cutting automation field, this research is expected to contribute to improved factory automation efficiency and productivity.
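
The abstract describes the CAD/nesting/CAM/NC steps as web-accessible client/server services; a minimal sketch of how a client might submit a nesting job to such a service is given below. The endpoint URL and the request/response schema are purely hypothetical.

```python
# Hypothetical client-side sketch: submit part outlines and a sheet size to a
# web-based nesting service and read back the computed placements. The URL and
# the JSON schema are illustrative assumptions only.
import json
from urllib import request

NESTING_URL = "http://example.com/api/nesting"   # placeholder endpoint

job = {
    "sheet": {"width_mm": 2440, "height_mm": 1220},
    "parts": [
        {"id": "P1", "outline": [[0, 0], [300, 0], [300, 200], [0, 200]], "qty": 4},
        {"id": "P2", "outline": [[0, 0], [150, 0], [75, 130]], "qty": 10},
    ],
}

req = request.Request(
    NESTING_URL,
    data=json.dumps(job).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with request.urlopen(req) as resp:               # server returns nested placements
    placements = json.load(resp)
for p in placements.get("placements", []):
    print(p["part_id"], p["x_mm"], p["y_mm"], p["rotation_deg"])
```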

Empathy Evaluation Method Using Micro-movement (인체 미동을 이용한 공감도 평가 방법)

  • Hwang, Sung Teac;Park, SangIn;Won, Myoung Ju;Whang, Mincheol
    • Science of Emotion and Sensibility / v.20 no.1 / pp.67-74 / 2017
  • The goal of this study is to present a quantification method for empathy. Micro-movement technology, a non-contact sensing method, was used to identify empathy level. Participants were first divided into two groups: empathized and not empathized. Upper-body movement data were then collected with a web-cam while participants carried out expression tasks. The data were analyzed in frequency bands of 0.5 Hz, 1 Hz, 3 Hz, 5 Hz, and 15 Hz, and the average movement, its variation, and the synchronization of movement were compared. The results showed lower average movement and variation in the empathized group, and the empathized participants synchronized their movement during the task. This indicates that people concentrate on each other when empathy has been established and show different levels of movement. These findings suggest the possibility of empathy quantification using a non-contact sensing method.
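
A rough sketch of the kind of band-wise analysis described above is shown below: two participants' movement signals are band-pass filtered and their synchronization is estimated by correlation. The sampling rate, band edges, and correlation measure are assumptions for illustration, not the authors' exact procedure.

```python
# Hypothetical sketch: band-pass filter two upper-body movement signals (e.g. frame
# differences from a web-cam) and estimate per-band average movement, variation,
# and synchronization. Sampling rate and band edges are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 30.0                                        # assumed web-cam frame rate (Hz)
BANDS = [(0.25, 0.5), (0.5, 1.0), (1.0, 3.0), (3.0, 5.0), (5.0, 14.0)]

def bandpass(x, lo, hi, fs=FS):
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def compare(movement_a, movement_b):
    """Return (band, mean, std, synchronization) rows for two participants."""
    rows = []
    for lo, hi in BANDS:
        a, b_ = bandpass(movement_a, lo, hi), bandpass(movement_b, lo, hi)
        sync = np.corrcoef(a, b_)[0, 1]          # simple synchronization proxy
        rows.append(((lo, hi), a.mean(), a.std(), sync))
    return rows

# usage with synthetic data:
t = np.arange(0, 60, 1 / FS)
a = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
b = np.sin(2 * np.pi * 1.2 * t + 0.3) + 0.1 * np.random.randn(t.size)
for row in compare(a, b):
    print(row)
```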

A Implementation and Performance Analysis of Emotion Messenger Based on Dynamic Gesture Recognitions using WebCAM (웹캠을 이용한 동적 제스쳐 인식 기반의 감성 메신저 구현 및 성능 분석)

  • Lee, Won-Joo
    • Journal of the Korea Society of Computer and Information / v.15 no.7 / pp.75-81 / 2010
  • In this paper, we propose an emotion messenger that recognizes a user's face or hand gestures with a WebCAM, converts the recognized emotions (joy, anger, grief, happiness) to flash-cones, and transmits them to the counterpart. The messenger consists of a face recognition module, a hand gesture recognition module, and a messenger module. The face recognition module converts the eye and mouth regions to binary images and recognizes a wink, kiss, or yawn according to changes in the shape of the eyes and mouth. The hand gesture recognition module recognizes gawi-bawi-bo (rock-paper-scissors) according to the number of fingers detected. The messenger module converts the wink, kiss, and yawn recognized by the face recognition module and the gawi-bawi-bo recognized by the hand gesture recognition module to flash-cones and transmits them to the counterpart. Through simulation, we confirmed that the CPU share of the emotion messenger is minimized, and that the hand gesture recognition module achieves a higher recognition rate than the face recognition module.
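
The finger-counting step for gawi-bawi-bo described above can be approximated with contour convexity defects in OpenCV; the sketch below is a generic illustration under assumed thresholds, not the paper's recognizer.

```python
# Hypothetical sketch: count extended fingers in a binary hand mask with OpenCV
# convexity defects and map the count to gawi-bawi-bo (rock / scissors / paper).
# The depth threshold and the count-to-gesture mapping are illustrative assumptions.
import cv2
import numpy as np

def count_fingers(hand_mask: np.ndarray) -> int:
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    cnt = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull)
    if defects is None:
        return 0
    gaps = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] > 10000)  # deep gaps between fingers
    return gaps + 1 if gaps > 0 else 0

def to_gesture(fingers: int) -> str:
    if fingers <= 1:
        return "bawi (rock)"
    if fingers <= 3:
        return "gawi (scissors)"
    return "bo (paper)"
```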

Smart Electric Wheelchair using Eye-Tracking (아이트래킹을 이용한 스마트 전동휠체어)

  • Kim, Tae-Sun;Yoon, Seung-Mok;Kim, Tae-Seong;Park, Hyeon-Kyeong;Park, Seong-Hwan;Kim, Woo-Jong;Jeong, Sang-Su;Jang, Young-Sang;Jung, Hyo-Jin;Park, Su-Bin
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.259-260 / 2020
  • This work began with the aim of resolving the problems that physically impaired people, such as the frail and the severely disabled who use conventional electric wheelchairs, experience when using a wheelchair. Although an electric wheelchair is a walking aid, it is exposed without protection to traffic accidents comparable to those involving automobiles, and the right to mobility of severely disabled people is still insufficiently guaranteed. To relieve the inconvenience caused by these problems, this study applies smart electric wheelchair technology based on eye tracking. People whose body movement is limited, for example by Lou Gehrig's disease (ALS), can operate the wheelchair with eye-tracking technology using an Eye-Tracker instead of depending on a wheelchair pushed by a caregiver. Front, rear, left, and right video obtained through a Web-Cam and a Raspberry Pi is shown on a display screen. Using eye tracking, the user then controls the direction of the wheelchair by selecting, with eye movements alone, the forward/backward/left/right UI (User Interface) elements on the display while watching the transmitted video. In addition, to address collisions with pedestrians or obstacles caused by operating mistakes, ultrasonic sensors detect objects or people within a certain distance and trigger a warning indication and warning sound on the display, along with LEDs matched to each sensor position, so that the user is warned of the collision risk and can locate the obstacle. The smart electric wheelchair therefore enables active rather than passive movement, and the ultrasonic sensors make that movement safe.

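The collision warning described above (ultrasonic distance measurement triggering a warning) might look roughly like the sketch below on a Raspberry Pi; the HC-SR04 sensor, the GPIO pin numbers, and the 50 cm threshold are assumptions, since the abstract does not specify them.

```python
# Hypothetical Raspberry Pi sketch: measure distance with an (assumed) HC-SR04
# ultrasonic sensor and print a collision warning when an obstacle is closer than
# an assumed 50 cm threshold. Pin numbers are illustrative only.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24            # assumed BCM pin numbers
WARN_CM = 50.0                 # assumed warning distance

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    GPIO.output(TRIG, True)            # 10 microsecond trigger pulse
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()
    return (stop - start) * 34300 / 2  # speed of sound: 343 m/s

try:
    while True:
        d = distance_cm()
        if d < WARN_CM:
            print(f"WARNING: obstacle at {d:.0f} cm")  # the real system would also beep and light an LED
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```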

Design and Implementation of Web-based Remote Control Laboratory Using Water-level Control of Coupled Tank Apparatus (이중 탱크의 수위제어 기구를 이용한 Web기반 원격 제어 실험실의 설계 및 구현)

  • Hong, Sang-Eun;Park, Sung-Moo;Kim, Yong-Rae;Sung, Jung-Kun;Oh, Sang-Yeol
    • Proceedings of the KAIS Fall Conference / 2010.11a / pp.325-328 / 2010
  • Today's Internet environment provides a foundation for various forms of virtual and remote education, and universities and educational institutions are actively developing new educational tools that take advantage of it. This paper implements a web-based laboratory that lets learners perform experiments without constraints of time and place, enabling repeated practice, and lets them test a variety of control theories on a coupled-tank apparatus, a nonlinear system in which water-level (flow) control can be realized. The overall system allows the learner to choose between SISO and MIMO configurations. Level control can be performed with manual, PID, or fuzzy control, so that learners can study several control theories, and relay auto-tuning was implemented so that learners can check the resulting PID parameters. In addition, learners can watch the experiment in real time through a Web-Cam while running a simulation at the same time for comparison.

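For readers unfamiliar with the PID option mentioned above, a minimal discrete PID water-level loop is sketched below; the toy plant model, gains, and sample time are illustrative assumptions, not the parameters of the actual coupled-tank rig.

```python
# Hypothetical sketch: discrete PID control of a (very simplified) single-tank
# water level. Gains, sample time, and the toy tank model are assumptions only.
DT = 0.1                     # sample time (s)
KP, KI, KD = 2.0, 0.5, 0.1   # assumed PID gains

def simulate(setpoint=0.5, steps=600):
    level, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - level
        integral += err * DT
        derivative = (err - prev_err) / DT
        u = max(0.0, min(1.0, KP * err + KI * integral + KD * derivative))  # pump command, clamped 0..1
        prev_err = err
        # toy tank dynamics: inflow proportional to u, outflow proportional to level
        level += (0.05 * u - 0.02 * level) * DT
    return level

print(f"final level: {simulate():.3f} m")
```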

An Implementation of Device Connection and Layout Recognition Techniques for the Multi-Display Contents Delivery System (멀티 디스플레이 콘텐츠 전송 시스템을 위한 디바이스 연결 및 배치 인식 기법의 구현)

  • Jeon, So-yeon;Lim, Soon-Bum
    • Journal of Korea Multimedia Society / v.19 no.8 / pp.1479-1486 / 2016
  • With the advancement of display devices, multi-screen content display environments are increasingly adopted in the exhibition area. The objectives of this research are to select a communications technology and to design an editor interface for a content delivery system for larger, adaptive multi-display workspaces. The proposed system can discover display devices and obtain their information without additional tools such as markers, and can recognize the device layout with only a web-cam and image processing. The multi-display content delivery system is composed of devices with three roles: display device, editor device, and fixed server. The editor device, which plays the main control role, uses UPnP to discover display devices and receive their information, and extracts designated colors from the captured picture with a tracking library to recognize the physical layout of the display devices. Once device information and physical layout are linked, the content delivery system sends display content to the corresponding display devices through WebSocket. The experimental results show that our device connection and layout recognition techniques can be used for large-space, adaptive multi-display applications.
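
A minimal illustration of the WebSocket delivery step described above is sketched below using Python's `websockets` package; the device addresses and message schema are assumptions, and the UPnP discovery and color-tracking layout steps are omitted.

```python
# Hypothetical sketch: push a piece of content to each discovered display device
# over WebSocket. Device addresses, the message schema, and the layout mapping
# are illustrative assumptions; UPnP discovery and color tracking are omitted.
import asyncio
import json
import websockets

# assumed result of discovery + web-cam layout recognition: device -> grid cell
DISPLAYS = {
    "ws://192.168.0.11:9000": {"row": 0, "col": 0},
    "ws://192.168.0.12:9000": {"row": 0, "col": 1},
}

async def send_content(url, layout, content_url):
    async with websockets.connect(url) as ws:
        await ws.send(json.dumps({"layout": layout, "content": content_url}))

async def main():
    await asyncio.gather(
        *(send_content(url, cell, "http://example.com/scene.png") for url, cell in DISPLAYS.items())
    )

asyncio.run(main())
```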

A Review of Dysmenorrhea Related Articles in Literature of Oriental Medicine (한의학 관련 학회지의 월경통 관련 논문에 대한 종설)

  • Kim, Dong-Il
    • The Journal of Korean Obstetrics and Gynecology / v.21 no.1 / pp.134-149 / 2008
  • Purpose: Dysmenorrhea is a common gynecologic condition among reproductive-age women. It consists of menstrual cramps and other symptoms such as nausea, vomiting, diarrhea, and fatigue. Pain control for this condition is achieved mainly with NSAIDs, but recently CAM therapies, acupuncture, and other treatments for painful menstruation have become widely used in Western countries. However, RCT articles on dysmenorrhea in Korea are not sufficient to lead such global research trends. The purpose of this study is to review treatment methods and other research trends on dysmenorrhea in the dominant Korean Medicine (KM) related journals. Methods: Dysmenorrhea-related articles published during 1996-2007 were searched through web sites and directly in journals of Korean medicine gynecology. Results: Twenty-five articles were found in KM-related journals; two of them were simple case reports and ten were clinical trial papers, but there was no RCT or controlled study. All of them reported a positive effect on dysmenorrhea, but dropout rates were relatively high. Conclusion: KM therapies, including acupuncture and herbal medicine, have some beneficial effect in resolving menstrual cramps, but the KM-related Korean articles do not provide strong objective evidence from the viewpoint of EBM, so continued clinical trials such as RCTs and multicenter trials are needed.


Object-Action and Risk-Situation Recognition Using Moment Change and Object Size's Ratio (모멘트 변화와 객체 크기 비율을 이용한 객체 행동 및 위험상황 인식)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • Journal of Korea Multimedia Society / v.17 no.5 / pp.556-565 / 2014
  • This paper proposes a method to track objects in real-time video transferred from a single web-camera and to recognize risk situations and human actions. The proposed method recognizes basic actions that humans perform in daily life and detects risk situations such as fainting and falling down, so as to distinguish usual actions from risk situations. The method models the background, obtains the difference image between the input image and the modeled background, extracts the human object from the input image, tracks the object's motion, and recognizes human actions. Object tracking uses the moment information of the extracted object, and the features used for recognition are the change in moments and the ratio of the object's size between frames. Four actions are classified, walking, walking diagonally, sitting down, and standing up, among the actions humans most often perform in daily life, and suddenly falling down is classified as a risk situation. To test the proposed method, we applied it to eight participants in web-cam video, classifying human actions and recognizing risk situations. The test results showed a recognition rate of more than 97 percent for each action and 100 percent for risk situations.
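
A rough sketch of the pipeline described above (background modeling, difference image, object moments, width/height ratio for fall detection) is given below with OpenCV; the MOG2 subtractor, area threshold, and aspect-ratio rule are illustrative assumptions, not the authors' exact criteria.

```python
# Hypothetical sketch: background subtraction, largest-contour extraction, centroid
# from image moments, and a bounding-box aspect-ratio test for a possible fall.
# The MOG2 subtractor, area threshold, and aspect-ratio rule are assumptions.
import cv2

backsub = cv2.createBackgroundSubtractorMOG2()
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    person = max(contours, key=cv2.contourArea, default=None)
    if person is not None and cv2.contourArea(person) > 2000:   # ignore small blobs
        m = cv2.moments(person)
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # centroid from moments
        x, y, w, h = cv2.boundingRect(person)
        if w > 1.3 * h:                                           # wider than tall: possible fall
            cv2.putText(frame, "RISK: fall?", (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
        cv2.circle(frame, (cx, cy), 4, (255, 0, 0), -1)
    cv2.imshow("action", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```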