• Title/Summary/Keyword: People Tracking


Measuring Visual Attention Processing of Virtual Environment Using Eye-Fixation Information

  • Kim, Jong Ha;Kim, Ju Yeon
    • Architectural research
    • /
    • v.22 no.4
    • /
    • pp.155-162
    • /
    • 2020
  • Numerous scholars have explored the modeling, control, and optimization of energy systems in buildings, offering new insights about technology and environments that can advance industry innovation. Eye trackers deliver objective eye-gaze data about visual and attentional processes. Because of its flexibility, accuracy, and efficiency, eye tracking makes it possible to measure rapid eye movement in three-dimensional space (e.g., virtual reality, augmented reality). Because eye movement is an effective modality for digital interaction with a virtual environment, tracking how users scan a visual field and fixate on digital objects can help designers optimize building environments and materials. Although several scholars have conducted virtual reality studies in three-dimensional space, there is no agreed-upon, consistent way to analyze eye tracking data. We conducted eye tracking experiments with objects in three-dimensional space to find an objective way to process quantitative visual data. Applying a 12 × 12 grid framework for eye tracking analysis, we investigated how people gazed at objects in a virtual space while wearing a head-mounted display. The findings provide an empirical basis for a standardized protocol for analyzing eye tracking data in virtual environments.
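
The 12 × 12 grid analysis described in this abstract can be sketched in a few lines: normalized gaze coordinates are binned into grid cells and fixations are counted per cell. This is a minimal illustration, assuming gaze points normalized to [0, 1); the function and variable names are not from the paper.

```python
GRID = 12  # 12 x 12 analysis grid, as in the abstract

def fixation_heatmap(fixations, grid=GRID):
    """Count fixations per grid cell.

    fixations: iterable of (x, y) gaze points normalized to [0, 1].
    Returns a grid x grid list of lists of counts.
    """
    counts = [[0] * grid for _ in range(grid)]
    for x, y in fixations:
        col = min(int(x * grid), grid - 1)  # clamp x == 1.0 into the last cell
        row = min(int(y * grid), grid - 1)
        counts[row][col] += 1
    return counts

# Example: three fixations, two of which land in the same cell.
heat = fixation_heatmap([(0.05, 0.05), (0.06, 0.04), (0.95, 0.5)])
```

A heatmap like this gives each object in the virtual scene a comparable, quantitative attention score, which is the kind of standardized measure the abstract argues for.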

People Counting System using Raspberry Pi

  • Ansari, Md Israfil;Shim, Jaechang
    • Journal of Multimedia Information System
    • /
    • v.4 no.4
    • /
    • pp.239-242
    • /
    • 2017
  • This paper proposes a low-cost method for counting people based on blob detection and blob tracking. Background subtraction is used to detect blobs, and each blob is then classified by its width and height to determine whether it is a person. The system first defines entry and exit areas in the video frame; a person is counted when the midpoint of their blob crosses the defined point. Finally, the total number of people entering and exiting the place is displayed. Experimental results show that the proposed system achieves high accuracy in real time.
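
The counting rule in this abstract — count when a blob's midpoint crosses a defined line — can be sketched as follows. The blob detection itself (background subtraction) is assumed to happen upstream; this illustrative function only watches one midpoint's y-coordinate across frames.

```python
def count_crossings(midpoints_y, line_y):
    """Count entries (downward crossings) and exits (upward crossings).

    midpoints_y: y-coordinate of one blob's midpoint, one value per frame.
    line_y: y-coordinate of the counting line in the frame.
    """
    entries = exits = 0
    for prev, cur in zip(midpoints_y, midpoints_y[1:]):
        if prev < line_y <= cur:      # midpoint moved past the line downward
            entries += 1
        elif cur < line_y <= prev:    # midpoint moved past the line upward
            exits += 1
    return entries, exits

# A blob walks down past the line at y=100, then back up.
entries, exits = count_crossings([80, 95, 105, 110, 90], line_y=100)
```

Comparing consecutive frames rather than single positions makes the count robust to a person lingering on the line.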

A Real-time People Counting Algorithm Using Background Modeling and CNN (배경모델링과 CNN을 이용한 실시간 피플 카운팅 알고리즘)

  • Yang, HunJun;Jang, Hyeok;Jeong, JaeHyup;Lee, Bowon;Jeong, DongSeok
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.3
    • /
    • pp.70-77
    • /
    • 2017
  • Recently, Internet of Things (IoT) and deep learning techniques have affected video surveillance systems in various ways. Surveillance features that detect, track, and classify specific objects in Closed Circuit Television (CCTV) video are becoming more intelligent. This paper presents a real-time algorithm that can run in a PC environment using only a low-power CPU. Traditional tracking algorithms combine background modeling using a Gaussian Mixture Model (GMM), the Hungarian algorithm, and a Kalman filter; they have relatively low complexity but high detection errors. To compensate, deep learning, which can be trained on large amounts of data, was used. In particular, an SRGB (Sequential RGB) 3-layer CNN was applied to tracked objects to emphasize the features of moving people. A performance evaluation comparing the proposed algorithm with existing methods using HOG and SVM showed move-in and move-out error-rate reductions of 7.6% and 9.0%, respectively.
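
The Hungarian-algorithm step mentioned in this abstract associates each new detection with an existing track by minimizing total matching cost. For the small track counts typical of a single camera, the same optimal assignment can be found by brute force, as in this sketch (illustrative names; the GMM background model and Kalman prediction are assumed to run upstream).

```python
from itertools import permutations

def assign_detections(tracks, detections):
    """Return, for each track, the index of its matched detection,
    minimizing the total squared distance (the Hungarian objective,
    solved by brute force for small inputs).

    tracks, detections: equal-length lists of (x, y) centroids.
    """
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    best, best_cost = None, float("inf")
    for perm in permutations(range(len(detections))):
        cost = sum(dist2(tracks[i], detections[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best)

# Two tracks; the detections arrive in swapped order but are matched correctly.
match = assign_detections([(0, 0), (10, 10)], [(11, 9), (1, 0)])
```

Production code would use a true O(n³) Hungarian implementation (e.g. `scipy.optimize.linear_sum_assignment`); the brute-force version above only illustrates the objective being minimized.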

A Study on Eye Tracking Techniques using Wearable Devices (웨어러블향(向) 시선추적 기법에 관한 연구)

  • Jaehyuck Jang;Jiu Jung;Junghoon Park
    • Smart Media Journal
    • /
    • v.12 no.3
    • /
    • pp.19-29
    • /
    • 2023
  • Eye tracking technology is now widespread and demonstrates strong performance in both precision and convenience, suggesting new possibilities for interfaces that do not require touching a screen. It can become a new means of communication for people such as patients with Lou Gehrig's disease, who become paralyzed part by part until they can move only their eyes. Previously, such patients could do little but await death, unable even to communicate with their families. An interface that uses the eyes as a means of communication, however difficult to build, can help them. Dedicated eye tracking systems and equipment do exist on the market, but obstacles such as operational complexity and high prices of over 12 million won ($9,300) hinder universal supply and coverage for these patients. Therefore, this paper proposes a wearable eye tracking device that can be obtained inexpensively to support minorities and vulnerable people, studies eye tracking methods to maximize the potential for future development, and finally presents a way to design and develop a low-cost eye tracking system based on an efficient wearable device.

A Semantic Diagnosis and Tracking System to Prevent the Spread of COVID-19 (COVID-19 확산 방지를 위한 시맨틱 진단 및 추적시스템)

  • Xiang, Sun Yu;Lee, Yong-Ju
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.15 no.3
    • /
    • pp.611-616
    • /
    • 2020
  • In order to prevent the further spread of the COVID-19 virus in big cities, this paper proposes a semantic diagnosis and tracking system based on Linked Data, built on a cluster analysis of the infection situation in Seoul, South Korea. The work consists of three parts: information on infected people in Seoul is collected for cluster analysis, important patient attributes are extracted to build a diagnostic model based on random forest, and a tracking system based on Linked Data is designed and implemented. Experimental results show that the accuracy of the diagnostic model is more than 80%. Moreover, the tracking system is more flexible and open than existing systems and supports semantic queries.
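
The random-forest idea behind the diagnostic model can be sketched as an ensemble of simple classifiers that vote, with the majority deciding the label. In this illustration each "tree" is a hand-written threshold rule on made-up patient attributes (fever, contact, age) — purely hypothetical names, not the features or trained model from the paper.

```python
def predict_forest(trees, patient):
    """Majority vote over an ensemble of tree/stump classifiers.
    Returns 1 (flag for diagnosis) when more than half the trees vote 1."""
    votes = sum(tree(patient) for tree in trees)
    return 1 if votes * 2 > len(trees) else 0

# Hypothetical decision stumps standing in for trained decision trees.
trees = [
    lambda p: 1 if p["fever"] > 37.5 else 0,
    lambda p: 1 if p["contact"] else 0,
    lambda p: 1 if p["age"] > 60 else 0,
]

flagged = predict_forest(trees, {"fever": 38.2, "contact": True, "age": 45})
```

A real implementation would train the trees on bootstrap samples of the collected patient data (e.g. with scikit-learn's `RandomForestClassifier`); the vote-counting logic, however, is exactly as shown.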

Speaker Tracking System Utilizing a Microsoft-Kinect Sensor (Microsoft-Kinect 센서를 활용한 화자추적 시스템)

  • Ban, Tae-Hak;Lee, Sang-Won;Kim, Jae-Min;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.05a
    • /
    • pp.611-613
    • /
    • 2015
  • In multimedia classrooms, lectures are recorded while a camera automatically tracks the speaker. Existing tracking systems either attach a separate sensor to the speaker's body or install sensors at the front of the room; these cause discomfort and produce tracking errors when several people are at the front at the same time. This paper describes an unmanned speaker tracking solution that uses a Microsoft-Kinect sensor to analyze the position and behavior of the speaker (instructor) and drives a PTZ camera and recording system, so that classroom lessons and lectures can be recorded and stored effectively for content production.
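
The core geometry of such a speaker-following setup is simple: given the speaker's position from a depth sensor (lateral offset x and depth z, in meters), compute the pan angle the PTZ camera must turn to keep the speaker centered. This sketch is generic trigonometry under those assumptions; the names and units are illustrative, not from the paper.

```python
import math

def pan_angle_deg(x_offset, depth):
    """Pan angle in degrees to center a target at lateral offset x_offset
    and distance depth from the camera; positive means turn right."""
    return math.degrees(math.atan2(x_offset, depth))

# A target 1 m to the right and 1 m away requires a 45-degree pan.
angle = pan_angle_deg(1.0, 1.0)
```

In practice the angle would be smoothed over frames before being sent to the PTZ controller, so small jitters in the skeleton data do not shake the recording.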


Object Magnification and Voice Command in Gaze Interface for the Upper Limb Disabled (상지장애인을 위한 시선 인터페이스에서의 객체 확대 및 음성 명령 인터페이스 개발)

  • Park, Joo Hyun;Jo, Se-Ran;Lim, Soon-Bum
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.7
    • /
    • pp.903-912
    • /
    • 2021
  • Eye tracking research for people with upper-limb disabilities has shown benefits for device control. However, eye tracking technology alone is not yet sufficient for web interaction. In a previous study, the Eye-Voice interface supplemented gaze tracking with voice commands to solve the problem that existing gaze tracking interfaces cause pointer malfunctions, and a comparison experiment with an existing interface confirmed a reduced pointer malfunction rate. That work also identified the difficulty of pointing at small execution objects in the web environment as another important cause of malfunction. In this study, we propose an interface that automatically magnifies objects so that people with upper-limb disabilities can freely click web content, addressing the difficulty of pointing and clicking caused by the high density and tight arrangement of execution objects on web pages.
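
The trigger for such auto-magnification can be sketched as a density test: if more than one clickable object lies within a small radius of the gaze point, a direct click would be ambiguous, so the region is magnified first. The radius and the point-based object model here are illustrative assumptions, not the paper's implementation.

```python
def should_magnify(gaze, objects, radius=50):
    """Return True when more than one clickable object is near the gaze
    point (coordinates in pixels), i.e. a direct click would be ambiguous."""
    near = [o for o in objects
            if (o[0] - gaze[0]) ** 2 + (o[1] - gaze[1]) ** 2 <= radius ** 2]
    return len(near) > 1

# Three links: two packed close together, one far away.
links = [(100, 100), (120, 105), (400, 300)]
dense = should_magnify((110, 102), links)   # gaze between the two close links
```

When the test fires, the interface would zoom the region around the gaze point so the enlarged targets can be selected reliably.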

Development of a Non-contact Input System Based on User's Gaze-Tracking and Analysis of Input Factors

  • Jiyoung LIM;Seonjae LEE;Junbeom KIM;Yunseo KIM;Hae-Duck Joshua JEONG
    • Korean Journal of Artificial Intelligence
    • /
    • v.11 no.1
    • /
    • pp.9-15
    • /
    • 2023
  • As mobile devices such as smartphones, tablets, and kiosks become increasingly prevalent, there is growing interest in developing alternative input systems beyond traditional tools such as keyboards and mice. Many people use their own bodies as a pointer to enter simple information on a mobile device. However, body-based methods have limitations: psychological factors make contact-based input unappealing, especially during a pandemic, and they are vulnerable to shoulder-surfing attacks. To overcome these limitations, we propose a simple information input system that uses gaze-tracking technology to input passwords and control web surfing with non-contact gaze alone. The proposed system recognizes input when the user stares at a specific location on the screen in real time, using intelligent gaze-tracking technology. We analyze the relationship between the gaze input box, gaze time, and average input time, and report experimental results on how varying the size of the gaze input box and the required gaze time affects achieving 100% input accuracy. These results demonstrate the system's effectiveness in mitigating the challenges of contact-based input methods and in providing a non-contact alternative that is both secure and convenient.
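
The "stare to input" mechanism described here is commonly implemented as dwell-time selection: input is registered once the gaze stays inside an input box for a set number of consecutive samples. This is a minimal sketch under that assumption; the box geometry, sample rate, and threshold are illustrative, not the paper's measured values.

```python
def dwell_select(samples, box, dwell_frames):
    """Return the sample index at which selection fires, or -1 if never.

    samples: gaze points (x, y) captured at a fixed frame rate.
    box: (x0, y0, x1, y1) rectangle of the gaze input box.
    dwell_frames: consecutive in-box samples required to trigger input.
    """
    x0, y0, x1, y1 = box
    streak = 0
    for i, (x, y) in enumerate(samples):
        streak = streak + 1 if x0 <= x <= x1 and y0 <= y <= y1 else 0
        if streak >= dwell_frames:
            return i
    return -1

# The gaze enters the box at the second sample and stays for three frames.
gaze = [(5, 5), (12, 12), (13, 11), (14, 12), (30, 30)]
fired = dwell_select(gaze, box=(10, 10, 20, 20), dwell_frames=3)
```

The trade-off the abstract studies falls directly out of these two parameters: a larger box and shorter dwell speed up input but risk accidental selections, while the reverse slows input but improves accuracy.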

Research on Development of VR Realistic Sign Language Education Content Using Hand Tracking and Conversational AI (Hand Tracking과 대화형 AI를 활용한 VR 실감형 수어 교육 콘텐츠 개발 연구)

  • Jae-Sung Chun;Il-Young Moon
    • Journal of Advanced Navigation Technology
    • /
    • v.28 no.3
    • /
    • pp.369-374
    • /
    • 2024
  • This study aims to improve the accessibility and efficiency of sign language education for both hearing-impaired and hearing people. To this end, we developed immersive VR sign language education content that integrates hand tracking technology and conversational AI. With this content, users can learn sign language in real time and experience direct communication in a virtual environment. The study confirmed that this integrated approach significantly improves immersion in sign language learning and lowers the barriers to learning by giving learners a deeper understanding. This presents a new paradigm for sign language education and shows how technology can change the accessibility and effectiveness of education.

CNN-based People Recognition for Vision Occupancy Sensors (비전 점유센서를 위한 합성곱 신경망 기반 사람 인식)

  • Lee, Seung Soo;Choi, Changyeol;Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.23 no.2
    • /
    • pp.274-282
    • /
    • 2018
  • Most occupancy sensors installed in buildings, households, and so forth are pyroelectric infrared (PIR) sensors. One disadvantage is that a PIR sensor cannot detect a stationary person, because it only senses variations in thermal radiation. To overcome this problem, camera vision sensors have gained interest, with object tracking used to detect stationary persons. However, object tracking has inherent problems such as tracking drift, so recognizing whether a static tracker actually contains a human is an important task. In this paper, we propose a CNN-based human recognition method to determine whether a static tracker contains a human. Experimental results validated that humans and non-humans are classified with an accuracy of about 88% and that the proposed method can be incorporated into practical vision occupancy sensors.
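
The decision logic this abstract describes — keep a tracker as an occupant only if, once it stops moving, a classifier confirms the region contains a person — can be sketched with the CNN replaced by a pluggable callback. The staticness test and all names are illustrative assumptions.

```python
def occupied(track_positions, classify, static_frames=5, eps=2.0):
    """Decide whether one tracked region should count as an occupant.

    track_positions: per-frame (x, y) centroids of the tracker.
    classify: callback returning True if the region looks like a person
              (stands in for the paper's CNN recognizer).
    """
    recent = track_positions[-static_frames:]
    if len(recent) < static_frames:
        return True                      # too little history: keep tracking
    xs = [p[0] for p in recent]
    ys = [p[1] for p in recent]
    is_static = max(xs) - min(xs) <= eps and max(ys) - min(ys) <= eps
    # A moving tracker is trusted; a static one must pass the classifier,
    # which filters out drifted trackers stuck on furniture or shadows.
    return classify() if is_static else True

# A tracker that has stopped moving: kept only if the classifier says human.
result = occupied([(50, 50)] * 6, classify=lambda: True)
```

Gating the (expensive) classifier on staticness keeps the sensor cheap: the CNN runs only on the ambiguous case that defeats PIR sensors, namely a person who has stopped moving.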