• Title/Summary/Keyword: User location Recognition

A Study on Coaching System for Disabled and Elderly People (장애인 및 노령인구를 위해 코칭 시스템에 관한 연구)

  • Shin, Seung-Jung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.6
    • /
    • pp.237-242
    • /
    • 2013
  • There are limitations to using a fitness center frequently, for various reasons such as excessive workload, geographic location, and time constraints. This research focuses on the development of a health care application for aerobic/anaerobic exercise at home, accompanied by a training coach, with the aim of preventing the boredom of exercising alone. The study proposes the realization of a program tool using the Microsoft Kinect, which is readily available in online stores and can recognize the user. In addition, the program is designed to manage movement control, weight, and calories by mapping the recognized user onto a 3D model.
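
A minimal sketch of the movement-and-calorie idea described above, assuming skeleton joint positions are already supplied by some Kinect wrapper (the `get_skeleton_frame` callback is hypothetical) and using a generic MET-based calorie estimate rather than the paper's own model:

```python
# Hedged sketch: accumulate per-joint displacement from Kinect skeleton frames
# and derive a rough calorie figure. get_skeleton_frame() is a hypothetical
# stand-in for whatever Kinect wrapper is used; the MET formula is a generic
# approximation, not the paper's model.
import math
import time

JOINTS = ["head", "left_hand", "right_hand", "left_foot", "right_foot"]

def joint_distance(a, b):
    """Euclidean distance between two (x, y, z) joint positions in metres."""
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

def track_session(get_skeleton_frame, user_weight_kg, duration_s=60, met=5.0):
    """Accumulate joint movement over a session and estimate calories burned."""
    prev = get_skeleton_frame()            # dict: joint name -> (x, y, z)
    total_movement = 0.0
    start = time.time()
    while time.time() - start < duration_s:
        frame = get_skeleton_frame()
        total_movement += sum(joint_distance(frame[j], prev[j]) for j in JOINTS)
        prev = frame
    hours = duration_s / 3600.0
    calories = met * user_weight_kg * hours   # standard MET-based estimate
    return total_movement, calories
```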

Development of a Non-contact Input System Based on User's Gaze-Tracking and Analysis of Input Factors

  • Jiyoung LIM;Seonjae LEE;Junbeom KIM;Yunseo KIM;Hae-Duck Joshua JEONG
    • Korean Journal of Artificial Intelligence
    • /
    • v.11 no.1
    • /
    • pp.9-15
    • /
    • 2023
  • As mobile devices such as smartphones, tablets, and kiosks become increasingly prevalent, there is growing interest in developing alternative input systems in addition to traditional tools such as keyboards and mice. Many people use their own bodies as a pointer to enter simple information on a mobile device. However, body-based contact methods have limitations: psychological factors make contact input unreliable, especially during a pandemic, and they carry the risk of shoulder-surfing attacks. To overcome these limitations, we propose a simple information input system that utilizes gaze-tracking technology to input passwords and control web surfing using only non-contact gaze. The proposed system is designed to recognize input when the user stares at a specific location on the screen in real time, using intelligent gaze-tracking technology. We present an analysis of the relationship between the gaze input box, gaze time, and average input time, and report experimental results on the effects of varying the size of the gaze input box and the gaze time required to achieve 100% accuracy in inputting information. Through this paper, we demonstrate the effectiveness of our system in mitigating the challenges of contact-based input methods and providing a non-contact alternative that is both secure and convenient.
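
The dwell-based selection described above (input registers when the gaze stays inside a box for a set gaze time) can be sketched as follows; the box geometry, the one-second default dwell, and the source of the gaze point are illustrative assumptions, not values from the paper:

```python
# Hedged sketch of dwell-time gaze selection: a box is "typed" once the
# estimated gaze point has stayed inside it for gaze_time seconds.
import time

class GazeInputBox:
    def __init__(self, label, x, y, width, height, gaze_time=1.0):
        self.label = label
        self.rect = (x, y, width, height)
        self.gaze_time = gaze_time        # seconds of dwell required
        self.enter_time = None

    def contains(self, gx, gy):
        x, y, w, h = self.rect
        return x <= gx <= x + w and y <= gy <= y + h

    def update(self, gx, gy, now=None):
        """Return the label once the gaze has dwelt long enough, else None."""
        now = now if now is not None else time.time()
        if self.contains(gx, gy):
            if self.enter_time is None:
                self.enter_time = now
            elif now - self.enter_time >= self.gaze_time:
                self.enter_time = None
                return self.label         # dwell completed -> register input
        else:
            self.enter_time = None        # gaze left the box -> reset timer
        return None
```

In practice each incoming gaze sample would be fed to `update()` on every on-screen box, and the paper's reported trade-off between box size, gaze time, and input time corresponds to tuning `width`, `height`, and `gaze_time` here.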

Development of Cultural Content using a Markerless Tracking-based Augmented Reality (마커리스 트래킹 기반 증강현실을 이용한 문화콘텐츠 개발)

  • Lee, Young cheon
    • Smart Media Journal
    • /
    • v.5 no.4
    • /
    • pp.90-95
    • /
    • 2016
  • Recently, the quality of cultural experiences can be improved through stereoscopic information services provided by the latest mobile information and communication technologies, without relying on the human commentators traditionally used to enhance understanding of cultural heritage. The purpose of this paper is to produce content that introduces cultural heritage using Android-based GPS and augmented reality. We propose a cultural content creation method based on location information, such as the positions of the user and cultural assets, using GPS and markerless-tracking-based augmented reality. Marker detection and markerless tracking technologies are used so that the smartphone can rapidly and accurately recognize the augmented scene according to the state of the cultural heritage. In addition, Google Maps on Android is used to locate the user. The strength of this method is that it can be applied to a variety of subjects, whereas existing methods are limited to certain kinds of augmented reality content.
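
A minimal sketch of the GPS-side trigger implied above: the user's position is compared against registered cultural-asset coordinates and AR content is surfaced when the user is within a chosen radius. The coordinates, radius, and site names are placeholders, not data from the paper:

```python
# Hedged sketch: location-based triggering of AR cultural content.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

HERITAGE_SITES = {
    "example_pagoda": (35.8347, 129.2194),   # illustrative coordinates only
}

def nearby_content(user_lat, user_lon, radius_m=50.0):
    """Return the heritage sites close enough to trigger their AR content."""
    return [name for name, (lat, lon) in HERITAGE_SITES.items()
            if haversine_m(user_lat, user_lon, lat, lon) <= radius_m]
```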

Eye Location Algorithm For Natural Video-Conferencing (화상 회의 인터페이스를 위한 눈 위치 검출)

  • Lee, Jae-Jun;Choi, Jung-Il;Lee, Phill-Kyu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3211-3218
    • /
    • 1997
  • This paper addresses an eye location algorithm, which is an essential component of a human face tracking system for natural video-conferencing. In current video-conferencing systems, users' facial movements are restricted by a fixed camera, which is inconvenient for users. We propose an eye location algorithm for automatic face tracking: the locations of other facial features can be estimated from the eye locations, and the scale of the face in the image can be calculated using the inter-ocular distance. Most previous feature extraction methods for face recognition systems assume that the approximate face region or the location of each facial feature is known. The algorithm proposed in this paper uses no prior information about the given image and is not sensitive to backgrounds or lighting conditions. The proposed algorithm uses the valley representation as the major source of information for locating eyes. Experiments were performed on 213 frames of 17 people and show very encouraging results.
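
A rough sketch of a valley-based eye locator in the spirit of the abstract: locally dark regions are emphasized with a grey-scale closing, thresholded into candidate blobs, and blob pairs are filtered with a crude inter-ocular geometry check. The parameters are illustrative, not the paper's:

```python
# Hedged sketch: valley (dark-region) detection and a simple eye-pair check.
import numpy as np
from scipy import ndimage

def eye_candidates(gray, valley_size=15, thresh=30):
    """Return centroids (row, col) of dark valley blobs in a uint8 grayscale image."""
    closed = ndimage.grey_closing(gray, size=(valley_size, valley_size))
    valley = closed.astype(np.int16) - gray.astype(np.int16)  # large where image is locally dark
    mask = valley > thresh
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

def pick_eye_pair(centroids, min_dx=20, max_dy=10):
    """Pick the first blob pair that is roughly horizontal and well separated."""
    for i, (y1, x1) in enumerate(centroids):
        for y2, x2 in centroids[i + 1:]:
            if abs(x2 - x1) >= min_dx and abs(y2 - y1) <= max_dy:
                return (y1, x1), (y2, x2)
    return None
```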

Near-field Data Exchange by Motion Recognition of mobile phone (모바일 폰의 모션 인식에 의한 근거리 데이터 교환)

  • Hwang, Tae-won;Seo, Jung-hee;Park, Hung-bog
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2017.05a
    • /
    • pp.800-801
    • /
    • 2017
  • Location-based services (LBS) are used in various applications such as emergency support, navigation, positioning, traffic routing, information gathering, and entertainment, owing to the rapid growth of information and communication technologies and mobile phones. In general, locations are represented by coordinates and are related to terrain, and they are of great interest in mobile-based data transmission. This paper proposes a method to exchange contact information with another party by detecting the movement of an individual user's mobile phone, based on a location-based service. The proposed method extracts motion using the acceleration sensor of the mobile phone and transmits location and time information to a server when the motion continues for a predetermined time. The server then attempts to establish a connection between users whose phones detect motion within a short distance of each other. Once the connection between users is made, the encrypted contact information is received from the server. Experimental results show that the proposed method can exchange data while minimizing processing on the handset compared with existing methods.
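
A minimal client-side sketch of the flow described above: sustained motion is detected from accelerometer magnitude, and once it persists for a predetermined time, the location and timestamp are posted to a matching server. The server URL, payload shape, and sensor callbacks are assumptions for illustration:

```python
# Hedged sketch: detect sustained shaking and report location/time to a server.
import json
import math
import time
import urllib.request

SERVER_URL = "http://example.com/match"   # hypothetical matching server
GRAVITY = 9.81

def motion_detected(ax, ay, az, threshold=3.0):
    """True if the acceleration magnitude deviates from gravity by > threshold m/s^2."""
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - GRAVITY) > threshold

def watch_and_report(read_accel, read_location, hold_s=2.0, poll_s=0.05):
    """Post location/time to the server after hold_s seconds of continuous motion."""
    started = None
    while True:
        if motion_detected(*read_accel()):
            started = started or time.time()
            if time.time() - started >= hold_s:
                lat, lon = read_location()
                payload = json.dumps({"lat": lat, "lon": lon, "t": time.time()}).encode()
                req = urllib.request.Request(SERVER_URL, data=payload,
                                             headers={"Content-Type": "application/json"})
                urllib.request.urlopen(req)   # server pairs nearby shaking users
                started = None
        else:
            started = None
        time.sleep(poll_s)
```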

Unauthorized person tracking system in video using CNN-LSTM based location positioning

  • Park, Chan;Kim, Hyungju;Moon, Nammee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.12
    • /
    • pp.77-84
    • /
    • 2021
  • In this paper, we propose a system that uses image data and beacon data to classify people entering a group facility as authorized or unauthorized. Image data collected through an IP camera are processed with YOLOv4 to extract person objects, and beacon signal data (UUID, RSSI) are collected through an application to compose a fingerprinting-based radio map. User location data are extracted after CNN-LSTM-based learning of the beacon data in order to improve location accuracy by compensating for signal instability. The proposed system showed an accuracy of 93.47%. In the future, it can be expected to be fused with access-authentication processes, such as the QR codes adopted during COVID-19, to track people who have not passed through the authentication process.
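
A hedged sketch of a CNN-LSTM positioning model in the spirit of the abstract: sequences of beacon RSSI fingerprints are passed through 1-D convolutions and an LSTM, then regressed to an (x, y) position. The layer sizes, sequence length, and beacon count are illustrative choices, not the paper's architecture:

```python
# Hedged sketch: CNN-LSTM regression from RSSI sequences to a 2-D position.
import tensorflow as tf

N_BEACONS = 8      # assumed number of beacons in the radio map
TIMESTEPS = 20     # assumed length of each RSSI sequence

def build_cnn_lstm():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(32, 3, activation="relu",
                               input_shape=(TIMESTEPS, N_BEACONS)),
        tf.keras.layers.Conv1D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.LSTM(64),          # smooths unstable RSSI over time
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(2),          # predicted (x, y) position
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```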

Wearable Sensing Device Design for Biological Monitoring (생체정보 모니터링을 위한 웨어러블 센싱 디바이스 디자인)

  • Lee, Jee Hyun;Lee, Eun Ji;Kim, Ji Eun;Kim, Yoolee;Cho, Sinwon
    • Journal of the Korean Society of Costume
    • /
    • v.65 no.1
    • /
    • pp.118-135
    • /
    • 2015
  • In recent years, smart clothing has been developed to better detect and monitor the physical movement of patients, so that activities such as location identification and biometric recognition can be performed. However, most sensing devices in smart clothing have been limited to smart sports clothing, and their designs did not consider the physical characteristics and behavior of the wearer. Therefore, this study aimed to create an open protection system by developing a wearable sensing device for health monitoring and location information. For this purpose, the study developed eleven types of wearable sensing designs that could be commercially sold and worn by people whose biological information needs to be constantly monitored, and conducted four tests in order to develop three types of sensing devices for various sensing wear. The purpose of this study was to expand the user range of smart sensing wear, provide a foundation for the development of distinctive wearable sensing devices reflecting the user, and contribute to designs for persons requiring protection.

Remote Control through Tracking of Pupil on Mobile Device (모바일 기기에서 눈동자 추적을 통한 원격 제어)

  • Kim, Su-Sun;Kang, Seok-Hoon;Kim, Seon-Woon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.4
    • /
    • pp.1849-1856
    • /
    • 2012
  • This paper proposes a method to track the center of the pupil and perform remote control through an interface based on commands mapped to pupil movements in a smartphone environment. The proposed method, which provides remote control through eye movement, may be helpful for people with disabilities or for users who want a more convenient input method. Webcam-based approaches, which are representative among previous methods for tracking a user's pupil, have limitations on the distance and angle between the user and the webcam. In contrast, this paper uses a smartphone, which is convenient to carry. The proposed method can perform remote control by tracking the pupil over a wireless network without any restriction on the user's location. Thus, the method can be effectively applied to controlling smart TVs, which must be operated at a distance, as well as to remote control of PCs.
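
A minimal sketch of the command mapping implied above: the detected pupil centre, expressed relative to the eye-region centre, is translated into discrete remote-control commands that could be sent over the network to a PC or smart TV. The dead zone and command names are assumptions, not the paper's interface:

```python
# Hedged sketch: map the pupil offset from the eye centre to a control command.
def pupil_to_command(pupil_x, pupil_y, eye_cx, eye_cy, dead_zone=5):
    """Return a command string from the pupil offset (pixels) within the eye region."""
    dx, dy = pupil_x - eye_cx, pupil_y - eye_cy
    if abs(dx) <= dead_zone and abs(dy) <= dead_zone:
        return "HOLD"                      # pupil roughly centred: no action
    if abs(dx) >= abs(dy):
        return "RIGHT" if dx > 0 else "LEFT"
    return "DOWN" if dy > 0 else "UP"
```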

Touch TT: Scene Text Extractor Using Touchscreen Interface

  • Jung, Je-Hyun;Lee, Seong-Hun;Cho, Min-Su;Kim, Jin-Hyung
    • ETRI Journal
    • /
    • v.33 no.1
    • /
    • pp.78-88
    • /
    • 2011
  • In this paper, we present the Touch Text exTractor (Touch TT), an interactive text segmentation tool for the extraction of scene text from camera-based images. Touch TT provides a natural interface for a user to simply indicate the location of text regions with a simple touchline. Touch TT then automatically estimates the text color and roughly locates the text regions. By inferring text characteristics from the estimated text color and text region, Touch TT can extract text components. Touch TT can also handle partially drawn lines which cover only a small section of text area. The proposed system achieves reasonable accuracy for text extraction from moderately difficult examples from the ICDAR 2003 database and our own database.
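
A minimal sketch of the colour-seeding step described above: pixels under the user's touchline are sampled, their median is taken as the estimated text colour, and pixels within a colour distance of that estimate form a rough text mask. The distance threshold is an illustrative choice, not Touch TT's actual procedure:

```python
# Hedged sketch: estimate text colour from a touchline and build a colour mask.
import numpy as np

def estimate_text_color(image, touchline):
    """Median RGB of the pixels under a touchline given as (row, col) points."""
    samples = np.array([image[r, c] for r, c in touchline], dtype=np.float32)
    return np.median(samples, axis=0)

def text_mask(image, text_color, max_dist=40.0):
    """Binary mask of pixels whose colour is close to the estimated text colour."""
    diff = image.astype(np.float32) - text_color
    return np.linalg.norm(diff, axis=2) <= max_dist
```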

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.19-28
    • /
    • 2012
  • After the internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibition venues such as galleries, museums, and parks. There are also attempts to provide additional services based on the location information of the audience, or to improve and deploy interaction between the subjects and the audience by analyzing people's usage patterns. In order to provide a multimodal interaction service to the audience at an exhibition, it is important to distinguish individuals and trace their location and route. For outdoor location tracking, GPS is widely used: it can obtain the real-time location of fast-moving subjects, so it is one of the key technologies in fields requiring location tracking services. However, because GPS determines location using satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, studies on indoor location tracking are ongoing, using very short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. However, these technologies have the shortcoming that the audience must carry an additional sensor device, and they become difficult and expensive as the density of the target area increases. In addition, a typical exhibition environment has many obstacles for the network, which degrades system performance. Above all, the biggest problem is that interaction methods based on these older technologies cannot provide natural service to users; moreover, because such systems rely on sensor-based recognition, every user must be equipped with a device, so there is a limit to the number of users who can use the system simultaneously. To make up for these shortcomings, this study proposes a technology that obtains exact user location information through location mapping using Wi-Fi and a 3D camera together with the users' smartphones. We use the signal strength of wireless LAN access points to develop an indoor location tracking system at a lower price: an AP is cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the device itself can serve directly as the tracking client. We used the Microsoft Kinect sensor as the 3D camera. Kinect can discriminate depth and human information inside the capture area, so it is appropriate for extracting the user's body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for additional tagging devices and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain the exact location and status information of the users; they are connected to a Camera Client, which calculates the mapping information aligned to each cell and obtains the exact location, status, and pattern information of the audience. The location mapping technique of the Camera Client decreases the error rate of the indoor location service, increases the accuracy of individual discrimination in the area through body-information-based identification, and establishes a foundation for multimodal interaction technology at exhibitions. The calculated data and information enable users to receive appropriate interaction services through the main server.
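
A minimal sketch of the Wi-Fi cell-ID step described above: each access point is associated with an exhibition cell, and the strongest RSSI in a scan decides which cell (and therefore which Camera Client) should handle the user. The BSSIDs and cell names are placeholders, not values from the paper:

```python
# Hedged sketch: cell-ID positioning from a Wi-Fi scan.
AP_TO_CELL = {
    "aa:bb:cc:00:00:01": "cell_A",   # hypothetical AP -> exhibition cell map
    "aa:bb:cc:00:00:02": "cell_B",
    "aa:bb:cc:00:00:03": "cell_C",
}

def locate_cell(scan_results):
    """scan_results: list of (bssid, rssi_dbm); return the cell of the strongest known AP."""
    known = [(rssi, bssid) for bssid, rssi in scan_results if bssid in AP_TO_CELL]
    if not known:
        return None
    _, best_bssid = max(known)       # highest (least negative) RSSI wins
    return AP_TO_CELL[best_bssid]
```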