• Title/Summary/Keyword: User Location Recognition


A Name Recognition Based Call-and-Come Service for Home Robots (가정용 로봇의 호출음 등록 및 인식 시스템)

  • Oh, Yoo-Rhee;Yoon, Jae-Sam;Park, Ji-Hun;Kim, Min-A;Kim, Hong-Kook;Kong, Dong-Geon;Myung, Hyun;Bang, Seok-Won
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.360-365
    • /
    • 2008
  • We propose an efficient robot-name registration and recognition method to enable a Call-and-Come service for home robots. In the proposed registration method, the search space is first restricted using monophone-based acoustic models; the registration of robot names is then completed using triphone-based acoustic models within the restricted search space. Next, an utterance-verification parameter is calculated to reduce the acceptance rate of false calls. In addition, the acoustic models are adapted using a distance-speech database to improve the performance of distance speech recognition. Moreover, the location of the user is estimated using a microphone array. Experiments on the registration and recognition of robot names show a word accuracy of 98.3%.
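
The two-pass search described above (a coarse monophone pass to restrict the search space, then a triphone pass inside it) can be sketched generically; the scoring functions and lexicon below are hypothetical stand-ins, not the paper's acoustic models:

```python
def recognize_name(utterance, lexicon, coarse_score, fine_score, beam=3):
    """Two-pass recognition: a cheap coarse pass (monophone-like) prunes the
    lexicon to a small search space, then an expensive fine pass
    (triphone-like) picks the best candidate inside it."""
    # Pass 1: keep only the `beam` best candidates under the coarse score.
    shortlist = sorted(lexicon, key=lambda w: coarse_score(utterance, w),
                       reverse=True)[:beam]
    # Pass 2: rescore the restricted search space with the fine score.
    return max(shortlist, key=lambda w: fine_score(utterance, w))
```

Restricting the fine (expensive) pass to a beam of coarse candidates is what keeps registration fast while retaining triphone-level accuracy.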


WiSee's trend analysis using Wi-Fi (Wi-Fi를 이용한 WiSee의 동향 분석)

  • Han, Seung-Ah;Son, Tae-Hyun;Kim, Hyun-Ho;Lee, Hoon-Jae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.05a
    • /
    • pp.74-77
    • /
    • 2015
  • WiSee is a technique that recognizes user gestures by exploiting Wi-Fi (802.11n/ac) signals. Current motion recognition schemes rely on dedicated devices (e.g., Leap Motion, Kinect), and their recognition range is limited to about 30 cm to 3.5 m; as the range grows, the recognition rate drops, so the user must stay within a limited distance. Because WiSee uses Wi-Fi, gesture recognition is possible anywhere Wi-Fi is available, and its ability to sense through obstacles is an advantage over conventional recognition methods. In this paper, we examine how WiSee operates and review recent trends.


An Implementation of Gaze Direction Recognition System using Difference Image Entropy (차영상 엔트로피를 이용한 시선 인식 시스템의 구현)

  • Lee, Kue-Bum;Chung, Dong-Keun;Hong, Kwang-Seok
    • The KIPS Transactions: Part B
    • /
    • v.16B no.2
    • /
    • pp.93-100
    • /
    • 2009
  • In this paper, we propose a Difference Image Entropy based gaze direction recognition system. The Difference Image Entropy is computed from the histogram of the difference between the current image and reference (or average) images, using histogram levels from -255 to +255 to prevent information loss. Two Difference Image Entropy based methods are considered: 1) computing the Difference Image Entropy between an input image and the average image of 45 images for each gaze location, and recognizing the direction of the user's gaze from it; 2) computing the Difference Image Entropy between an input image and each of the 45 reference images for each gaze location. The average images are created from the 45 images per gaze location after capturing images for the four directions. To evaluate the performance of the proposed system, we compare it against a PCA-based gaze direction system. The recognized directions are left-top, right-top, left-bottom, and right-bottom, and the experiments vary whether the 45 reference images or the average image is used. The experimental results show a recognition rate of 97.00% for the Difference Image Entropy based system versus 95.50% for PCA, i.e., 1.50 percentage points higher for the proposed system.
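
A minimal sketch of the entropy computation described above — the 511 histogram levels cover the -255..+255 range of an 8-bit difference image; the gaze-direction labels and reference images are hypothetical:

```python
import numpy as np

def difference_image_entropy(img, ref):
    """Shannon entropy of the histogram of (img - ref).

    The difference of two 8-bit images lies in -255..+255, so 511 histogram
    levels are used to avoid losing information.
    """
    diff = img.astype(np.int16) - ref.astype(np.int16)
    hist, _ = np.histogram(diff, bins=511, range=(-255.5, 255.5))
    p = hist[hist > 0] / hist.sum()          # drop empty bins
    return float(-(p * np.log2(p)).sum())

def classify_gaze(img, references):
    """Pick the gaze direction whose reference image yields the lowest entropy."""
    return min(references, key=lambda d: difference_image_entropy(img, references[d]))
```

An input identical to a reference produces a single histogram bin and hence zero entropy, which is why the minimum-entropy reference identifies the gaze direction.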

A Study on the Development of Multi-User Virtual Reality Moving Platform Based on Hybrid Sensing (하이브리드 센싱 기반 다중참여형 가상현실 이동 플랫폼 개발에 관한 연구)

  • Jang, Yong Hun;Chang, Min Hyuk;Jung, Ha Hyoung
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.3
    • /
    • pp.355-372
    • /
    • 2021
  • Recently, high-performance HMDs (head-mounted displays) have become wireless owing to the growth of virtual reality technology. Accordingly, environmental constraints on hardware usage are reduced, enabling multiple users to experience virtual reality in a single space simultaneously. Existing multi-user virtual reality platforms track user location and motion with vision sensors and active markers, but immersion decreases when markers overlap or when reflected light causes frequent matching errors. The goal of this study is to develop a multi-user virtual reality moving platform for a single space that resolves these sensing errors and the loss of immersion. To this end, a hybrid sensing technology was developed that converges vision-sensor position tracking, IMU (inertial measurement unit) motion capture, and gesture recognition based on smart gloves. In addition, an integrated safety operation system was developed that ensures the safety of the users and supports multimodal feedback without reducing immersion. A 6 m × 6 m × 2.4 m test bed was configured to verify the effectiveness of the platform with four users.
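
The vision/IMU convergence can be illustrated with a one-dimensional complementary filter — a common fusion scheme, sketched here under the assumption (not stated in the abstract) that drift-free but slow vision fixes correct fast, drifting IMU dead reckoning:

```python
def fuse_positions(vision_fixes, imu_deltas, alpha=0.98):
    """1-D complementary filter: dead-reckon with IMU deltas, then pull the
    estimate toward each vision fix by a factor (1 - alpha)."""
    est = vision_fixes[0]            # initialize from the first vision fix
    track = [est]
    for fix, delta in zip(vision_fixes[1:], imu_deltas):
        predicted = est + delta                      # fast IMU prediction
        est = alpha * predicted + (1 - alpha) * fix  # slow vision correction
        track.append(est)
    return track
```

A high alpha trusts the responsive IMU between frames, while the small vision term keeps markers' occasional errors from accumulating into drift.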

Analysis and Prediction Algorithms on the State of User's Action Using the Hidden Markov Model in a Ubiquitous Home Network System (유비쿼터스 홈 네트워크 시스템에서 은닉 마르코프 모델을 이용한 사용자 행동 상태 분석 및 예측 알고리즘)

  • Shin, Dong-Kyoo;Shin, Dong-Il;Hwang, Gu-Youn;Choi, Jin-Wook
    • Journal of Internet Computing and Services
    • /
    • v.12 no.2
    • /
    • pp.9-17
    • /
    • 2011
  • This paper proposes an algorithm that predicts the state of the user's next actions by applying an HMM (Hidden Markov Model) to user profile data stored in a ubiquitous home network. The HMM recognizes patterns in sequential data, adequately represents the temporal properties implicit in the data, and is a typical model for inferring information from such sequences. The proposed algorithm uses the number of actions performed by the user, together with the location and duration of the actions saved by the "Activity Recognition System", as training data. An objective formulation of the user's interest in an action is proposed by weighting the action, and changes in the state of the next action are predicted by tracking, with the HMM, how the weights change over time. The proposed algorithm helps in constructing realistic ubiquitous home networks.
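
The HMM-based prediction step can be sketched as forward filtering followed by one propagation through the transition matrix; the two-state model in the test is invented for illustration, not the paper's trained model:

```python
import numpy as np

def predict_next_action(trans, emit, observations, prior):
    """Filter the belief over hidden action states with the forward
    algorithm, then predict the most likely state one step ahead."""
    belief = prior * emit[:, observations[0]]
    belief /= belief.sum()
    for obs in observations[1:]:
        belief = (belief @ trans) * emit[:, obs]   # propagate, then condition
        belief /= belief.sum()
    return int(np.argmax(belief @ trans))          # one-step-ahead prediction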

A Study of a Recognition-Based User Multi-Smart Plug System (사용자 인식 기반 멀티-스마트 플러그에 관한 연구)

  • Oh, Jin-Seok;Lee, Hunseok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.12
    • /
    • pp.2976-2983
    • /
    • 2013
  • Interest in reducing standby power is increasing because of electric power shortages. Most electronic devices spend much of their time in standby, not performing their original function, yet still consume considerable power. In many countries, research on smart plugs is being pursued to prevent this standby consumption; however, by the nature of their functions, they are often expensive. Such smart plugs cut standby power using motion detection sensors or learned user patterns, but these features lose their advantage because of motion-sensor malfunctions and the diversity of user patterns. In this study, we develop a multi-smart plug system linked to the Bluetooth function of the user's smartphone: the plug determines the position of the user via Bluetooth and cuts the standby power of the connected electronic apparatus accordingly. It was confirmed that power consumption can be reduced according to the location of the user.
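
The Bluetooth-proximity idea can be sketched as a small decision loop; the RSSI threshold, timeout, and sample stream below are hypothetical, not values from the paper:

```python
def plug_states(rssi_readings, threshold=-70, timeout=3):
    """Per-outlet power decision from a stream of Bluetooth RSSI samples.

    The outlet stays on while the paired phone is heard above `threshold`
    (dBm); after `timeout` consecutive weak or missing samples, standby
    power is cut. A reading of None models a phone that is out of range.
    """
    on, missed, decisions = False, 0, []
    for rssi in rssi_readings:
        if rssi is not None and rssi >= threshold:
            on, missed = True, 0          # phone nearby: keep power on
        else:
            missed += 1
            if missed >= timeout:
                on = False                # phone gone: cut standby power
        decisions.append(on)
    return decisions
```

The timeout debounces brief signal dropouts, avoiding the spurious cut-offs that plague motion-sensor-based plugs.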

Real-Time Recognition Method of Counting Fingers for Natural User Interface

  • Lee, Doyeob;Shin, Dongkyoo;Shin, Dongil
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.5
    • /
    • pp.2363-2374
    • /
    • 2016
  • Communication occurs through verbal elements, which usually involve language, as well as non-verbal elements such as facial expressions, eye contact, and gestures. Among these non-verbal elements, gestures in particular are symbolic representations of physical, vocal, and emotional behaviors: they can be signals toward a target or expressions of internal psychological processes, rather than simply movements of the body or hands. Gestures with such properties have therefore been the focus of much research on new interfaces in the NUI/NUX field. In this paper, we propose a method for detecting the hand region and recognizing the number of raised fingers, based on depth information and the geometric features of the hand, for application to an NUI/NUX. The hand region is detected using depth information provided by the Kinect system, and its contour is extracted with the Suzuki85 algorithm. The number of fingers is then determined by locating fingertips at points of locally maximum distance, comparing the distances of three consecutive contour points to the center of the hand region. The average recognition rate for the number of fingers is 98.6%, and the execution time of the algorithm is 0.065 ms. Although the method is fast and of low complexity, it shows a higher recognition rate and faster recognition speed than other methods. As an application example, the paper describes a Secret Door that recognizes a password from the number of fingers a user holds up.
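
The fingertip test described above (a contour point farther from the palm center than both neighbours) can be sketched in pure Python; the star-shaped contour in the test is synthetic, and the min_ratio cutoff is an assumed stand-in for the paper's distance comparison:

```python
import math

def count_fingers(contour, center, min_ratio=0.7):
    """Count fingertips on a closed hand contour.

    A contour point counts as a fingertip when it is farther from the palm
    center than both of its neighbours (three consecutive contour dots) and
    its distance exceeds min_ratio of the maximum contour-to-center distance.
    """
    dist = [math.dist(p, center) for p in contour]
    cutoff = min_ratio * max(dist)
    tips, n = 0, len(contour)
    for i in range(n):
        if dist[i] > dist[i - 1] and dist[i] > dist[(i + 1) % n] and dist[i] >= cutoff:
            tips += 1
    return tips
```

The cutoff rejects small contour bumps (knuckles, noise) that are local maxima but far shorter than an extended finger.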

A Distributed Activity Recognition Algorithm based on the Hidden Markov Model for u-Lifecare Applications (u-라이프케어를 위한 HMM 기반의 분산 행위 인지 알고리즘)

  • Kim, Hong-Sop;Yim, Geo-Su
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.5
    • /
    • pp.157-165
    • /
    • 2009
  • In this paper, we propose a distributed model that recognizes human ADLs (activities of daily living) occurring in everyday living spaces. We collect and analyze the user's environment, location, and activity information through simple sensors attached to home devices and utensils, and on this basis provide lifecare services by inferring the user's life patterns and health condition. Providing such services, however, requires well-refined activity recognition data; without sufficiently inferred information it is very hard to build an ADL recognition model for high-level situation awareness. Because the sequences generated by the sensors are very helpful for inferring the activities, we utilize them to analyze activity patterns and propose a distributed linear-time inference algorithm. This algorithm is appropriate for recognizing activities in small areas such as a home, office, or hospital. For performance evaluation, we test it on an open dataset from the MIT Media Lab; the recognition results show over 75% accuracy.
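
An HMM decoding pass of the linear-time kind used here can be sketched with the classic log-space Viterbi algorithm; the two activities and sensor events below are invented for illustration, not MIT Media Lab data:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden activity sequence for a sensor-event sequence.

    Runs in time linear in the sequence length (times |states|^2 per step).
    Log probabilities avoid underflow on long sequences.
    """
    score = {s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}
    path = {s: [s] for s in states}
    for event in obs[1:]:
        new_score, new_path = {}, {}
        for s in states:
            # Best predecessor state for reaching s at this step.
            prev = max(states, key=lambda p: score[p] + math.log(trans_p[p][s]))
            new_score[s] = score[prev] + math.log(trans_p[prev][s]) + math.log(emit_p[s][event])
            new_path[s] = path[prev] + [s]
        score, path = new_score, new_path
    return path[max(states, key=lambda s: score[s])]
```

Each sensor event is processed once, which is what makes this kind of inference practical for small areas like a home or office.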

A Fast Algorithm for Korean Text Extraction and Segmentation from Subway Signboard Images Utilizing Smartphone Sensors

  • Milevskiy, Igor;Ha, Jin-Young
    • Journal of Computing Science and Engineering
    • /
    • v.5 no.3
    • /
    • pp.161-166
    • /
    • 2011
  • We present a fast algorithm for Korean text extraction and segmentation from subway signboards using smartphone sensors, designed to minimize computational time and memory usage. The algorithm can serve as the preprocessing steps for optical character recognition (OCR): binarization, text location, and segmentation. An image of a signboard captured by the smartphone camera at an arbitrary angle is rotated by the detected angle, as if it had been taken with the phone held horizontally. Binarization is performed only once, on a subset of connected components instead of the whole image area, yielding a large reduction in computational time. Text location is guided by a marker line the user places over the region of interest in the binarized image via the touch screen. Text segmentation then reuses the connected-component data from the binarization step and cuts the string into individual character images. The resulting data can be used as OCR input, thereby solving the most difficult part of OCR for text in natural-scene images. Experimental results show that our binarization algorithm is 3.5 and 3.7 times faster than the Niblack and Sauvola adaptive-thresholding algorithms, respectively, while achieving better quality than the other methods.
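
For reference, the Sauvola baseline the authors compare against computes a per-pixel threshold T = m * (1 + k * (s / R - 1)) from the local window mean m and standard deviation s; a naive, unoptimized sketch, with the window size and k as assumed defaults:

```python
import numpy as np

def sauvola_binarize(img, window=15, k=0.5, R=128.0):
    """Naive Sauvola adaptive thresholding: T = m * (1 + k * (s / R - 1)).

    Returns True where a pixel exceeds its local threshold (background for
    dark-on-light signboard text). Real implementations use integral images
    instead of recomputing each window.
    """
    img = img.astype(np.float64)
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + window, x:x + window]
            m, s = win.mean(), win.std()
            out[y, x] = img[y, x] > m * (1 + k * (s / R - 1))
    return out
```

Because this recomputes statistics for every pixel's window, it is easy to see why binarizing only a subset of connected components, as the paper does, saves so much time.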

Consideration of Cognitive Effects in Smart Environments for Effective UXD (User eXperience Design) (스마트환경의 효과적인 UXD를 위한 인지작용 고찰)

  • Lee, Chang Wook;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.11 no.2
    • /
    • pp.397-405
    • /
    • 2013
  • With the development of 21st-century technology, wireless Internet technology has rapidly taken root in smart environments. In such environments, the user faces many smart devices and much smart content. This study analyzes smart environments and smart devices and reports on the cognitive effects they have on users. Cognitive effects play a very important role in observing behavior, in technology- and user-centered system design, and in educating users. After a theoretical review of UX (User eXperience) and UXD (User eXperience Design), case analyses examine the technical, visual, and interaction aspects of effective UXD and its cognitive effects. The results show that user experience on the visual side can be grounded in design; that interaction between the user and the device must be provided, for example through sound; and that technologies such as location-based services and speech recognition improve user convenience. This research aims to aid understanding of smart environments and of behavior within them, and to help put effective UXD to use.