• Title/Summary/Keyword: Gaze Data

Search results: 103

Improved Feature Extraction of Hand Movement EEG Signals based on Independent Component Analysis and Spatial Filter

  • Nguyen, Thanh Ha;Park, Seung-Min;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.4
    • /
    • pp.515-520
    • /
    • 2012
  • In a brain-computer interface (BCI) system, the most important part is the classification of human thoughts so they can be translated into commands. The more accurate the classification, the more effective the BCI system. To improve the quality of the BCI system, we propose reducing noise and artifacts in the recorded data before analysis. We used auditory stimuli instead of visual ones to eliminate eye movements, unwanted visual activation, and gaze control. We applied the independent component analysis (ICA) algorithm to purify the sources that constitute the raw signals. One of the best-known spatial filters in the BCI context is common spatial patterns (CSP), which uses covariance matrices to maximize the variance of one class while minimizing that of the other. ICA and CSP thus act as a coarse filter and a refinement stage, respectively, which increases the classification accuracy of linear discriminant analysis (LDA).
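
The processing chain named in this abstract (ICA for artifact reduction, CSP as a spatial filter, LDA as the classifier) can be illustrated with a small sketch. The sketch below uses NumPy/SciPy/scikit-learn on toy data; the channel counts, trial sizes, and the way artifact components would be dropped are assumptions, not the paper's actual settings.

```python
# Hedged sketch of an ICA -> CSP -> LDA pipeline of the kind the abstract
# describes; data shapes and parameters are illustrative assumptions.
import numpy as np
from scipy.linalg import eigh
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def csp_filters(class_a, class_b, n_pairs=3):
    """Compute CSP spatial filters from two lists of (channels x samples) trials."""
    cov_a = np.mean([np.cov(t) for t in class_a], axis=0)
    cov_b = np.mean([np.cov(t) for t in class_b], axis=0)
    # Generalized eigendecomposition: variance of class A relative to A + B.
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(eigvals)
    # Keep filters from both ends of the spectrum (most discriminative).
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T  # shape: (2 * n_pairs, channels)


def log_var_features(trials, W):
    """Project trials through CSP filters and take normalized log-variance features."""
    feats = []
    for t in trials:
        z = W @ t
        var = np.var(z, axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)


# Toy data: 40 trials per class, 14 channels, 256 samples (assumed sizes).
rng = np.random.default_rng(0)
trials_a = [rng.standard_normal((14, 256)) for _ in range(40)]
trials_b = [rng.standard_normal((14, 256)) * 1.2 for _ in range(40)]

# Rough artifact reduction: ICA on the concatenated recording, then back-projection.
ica = FastICA(n_components=14, random_state=0)
concat = np.hstack(trials_a + trials_b)         # channels x total_samples
sources = ica.fit_transform(concat.T).T         # components x total_samples
cleaned = ica.inverse_transform(sources.T).T    # artifact components would be zeroed
                                                # before this step in a real pipeline

W = csp_filters(trials_a, trials_b)
X = np.vstack([log_var_features(trials_a, W), log_var_features(trials_b, W)])
y = np.array([0] * 40 + [1] * 40)

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy on toy data:", clf.score(X, y))
```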

Development of Eye Protection App using Realtime Eye Tracking and Distance Measurement Method (실시간 시선 추적과 거리 측정 기법을 활용한 눈 보호 앱 개발)

  • Lee, Hye-Ran;Lee, Jun Pyo
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2019.07a
    • /
    • pp.223-224
    • /
    • 2019
  • In this paper, we propose an application called "i-eye" that collects and analyzes data obtained from a camera's real-time video to provide general users with information on their actual smartphone usage, optimal screen presentation, and dry-eye risk, enabling them to manage their eye health. The proposed app runs on modern smartphones and is built on three core techniques: eye-gaze tracking, image distance measurement, and eye data analysis.

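One plausible reading of the "image distance measurement" step is a pinhole-camera estimate of viewing distance from the detected face width; the sketch below follows that reading. The Haar cascade, assumed face width, focal length, and warning threshold are illustrative values, not the app's.

```python
# Hedged sketch of a pinhole-model viewing-distance estimate; the constants
# below are assumptions, not values taken from the paper.
import cv2

KNOWN_FACE_WIDTH_CM = 14.0   # rough average adult face width (assumed)
FOCAL_LENGTH_PX = 600.0      # would normally come from calibration (assumed)
SAFE_DISTANCE_CM = 30.0      # warn when closer than this (assumed threshold)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Pinhole model: distance = focal_length * real_width / pixel_width.
        distance_cm = FOCAL_LENGTH_PX * KNOWN_FACE_WIDTH_CM / w
        if distance_cm < SAFE_DISTANCE_CM:
            print(f"Too close to the screen: {distance_cm:.0f} cm")
        else:
            print(f"Viewing distance: {distance_cm:.0f} cm")
cap.release()
```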

From wearing desires to the power of gazing hidden wearable technology algorithm (Based on system design) (웨어러블 기술 알고리즘에 숨겨진 입는 욕망에서부터 시선의 권력까지(시스템 설계 관점에서))

  • Kang, Jangmook
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.4
    • /
    • pp.205-210
    • /
    • 2018
  • This paper examines wearable technology from two perspectives. The first is the desire that permeates clothing into which smart technology is embroidered. The second is a reflection on the gaze directed at the user wearing such clothing. This article is a study of the clothes that will newly appear with wearable technology. This is not merely a technical issue; rather, it concerns a system design that carries human instinct into clothing, so the study crosses into social-scientific territory. The article does not treat data collected from wearables as simple sensing-based data; rather, wearable technology reveals human life activities and emotions. This paper is an attempt to bring the humanities and technology together.

Real-time Multi-device Control System Implementation for Natural User Interactive Platform

  • Kim, Myoung-Jin;Hwang, Tae-min;Chae, Sung-Hun;Kim, Min-Joon;Moon, Yeon-Kug;Kim, SeungJun
    • Journal of Internet Computing and Services
    • /
    • v.23 no.1
    • /
    • pp.19-29
    • /
    • 2022
  • A natural user interface (NUI) provides natural motion-based interaction without requiring a specific device or tool such as a mouse, keyboard, or pen. Recently, as non-contact sensor-based interaction technologies for recognizing human motion, gestures, voice, and gaze have been actively studied, an environment has emerged that can provide more diverse content through various interaction methods than existing approaches. However, as the number of sensor devices rapidly increases, a system using many sensors can suffer from a lack of computational resources. To address this problem, we propose a real-time multi-device control system for a natural interactive platform. In the proposed system, devices are classified into two types: HC devices, such as high-end commercial sensors, and LC devices, such as traditional low-cost monitoring sensors. A dedicated device manager is adopted for each type for efficient control. We demonstrate that the proposed system works properly with user behaviors such as gestures, motions, gaze, and voice.
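
A minimal sketch of the per-type device-manager idea described above follows; the class names, event format, and the frame-skipping policy for low-cost devices are illustrative assumptions (the abstract does not specify how the managers schedule their devices), and each device object is assumed to expose a read() method.

```python
# Hedged sketch: separate managers for high-end (HC) and low-cost (LC) sensors.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class SensorEvent:
    device_id: str
    kind: str          # e.g. "gesture", "gaze", "voice"
    payload: dict


class DeviceManager(ABC):
    """Owns one class of devices and decides how often they are polled."""

    def __init__(self):
        self.devices = []

    def register(self, device):
        self.devices.append(device)

    @abstractmethod
    def poll(self) -> list[SensorEvent]:
        ...


class HCDeviceManager(DeviceManager):
    """High-end commercial sensors: polled every frame, full payload."""

    def poll(self):
        return [d.read() for d in self.devices]


class LCDeviceManager(DeviceManager):
    """Low-cost monitoring sensors: polled less often to save compute."""

    def __init__(self, skip=5):
        super().__init__()
        self.skip = skip
        self._tick = 0

    def poll(self):
        self._tick += 1
        if self._tick % self.skip:
            return []          # skip this frame to keep resource use low
        return [d.read() for d in self.devices]
```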

Design of Korean eye-typing interfaces based on multilevel input system (단계식 입력 체계를 이용한 시선 추적 기반의 한글 입력 인터페이스 설계)

  • Kim, Hojoong;Woo, Sung-kyung;Lee, Kunwoo
    • Journal of the HCI Society of Korea
    • /
    • v.12 no.4
    • /
    • pp.37-44
    • /
    • 2017
  • Eye-typing is a human-computer input method implemented using gaze location data. It is widely used as an input system for people with paralysis because it requires no physical motion other than eye movement. However, an eye-typing interface for Korean characters has not yet been proposed. Thus, this research aims to implement an eye-typing interface optimized for Korean. To begin with, design objectives were established based on two characteristic problems of eye-typing: significant noise and the Midas touch problem. A multilevel input system was introduced to deal with noise, and a button-free area was applied to solve the Midas touch problem. Then, two types of eye-typing interfaces were proposed based on the phonology of Korean, in which each syllable is generated from a combination of several phonemes. Named the consonant-vowel integrated interface and the separated interface, the two designs input Korean in phases through grouped phonemes. Finally, evaluation procedures consisting of comparative experiments against the conventional Double-Korean keyboard interface and an analysis of gaze flow were conducted. As a result, the newly designed interfaces showed potential as practical eye-typing tools.
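
Two of the ideas named in this abstract, dwell-based selection to mitigate the Midas touch problem and multilevel input through grouped phonemes, can be sketched roughly as below. The dwell threshold, the phoneme grouping, and the two-level selection flow are assumptions for illustration, not the interfaces proposed in the paper.

```python
# Hedged sketch: dwell-time selection plus a two-level grouped-phoneme layout.
import time

DWELL_SECONDS = 0.8        # assumed dwell threshold

# Level 1: coarse groups of Korean consonants/vowels (illustrative grouping).
GROUPS = {
    "G1": ["ㄱ", "ㄴ", "ㄷ", "ㄹ"],
    "G2": ["ㅁ", "ㅂ", "ㅅ", "ㅇ"],
    "V1": ["ㅏ", "ㅓ", "ㅗ", "ㅜ"],
}


class DwellSelector:
    """Fires a selection only after the gaze stays on one target long enough."""

    def __init__(self, dwell=DWELL_SECONDS):
        self.dwell = dwell
        self.target = None
        self.since = 0.0

    def update(self, target):
        now = time.monotonic()
        if target != self.target:      # gaze moved: restart the timer
            self.target, self.since = target, now
            return None
        if target is not None and now - self.since >= self.dwell:
            self.target = None         # reset so one dwell = one selection
            return target
        return None


# Usage: a first dwell picks a group (level 1); a second dwell would pick a phoneme.
selector = DwellSelector()
chosen_group = None
for gazed in ["G1", "G1", "G1"]:       # stand-in for a stream of gaze hits
    hit = selector.update(gazed)
    if hit and chosen_group is None:
        chosen_group = hit
        print("group selected:", GROUPS[hit])
    time.sleep(0.5)
```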

A Study on the GEO-Tracking Algorithm of EOTS for the Construction of HILS system (HILS 시스템 구축을 위한 EOTS의 좌표지향 알고리즘 실험에 대한 연구)

  • Gyu-Chan Lee;Jeong-Won Kim;Dong-Gi Kwag
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.1
    • /
    • pp.663-668
    • /
    • 2023
  • Recently, it has become very important to collect information such as enemy positions and facilities. To this end, unmanned aerial vehicles such as multicopters have been actively developed, along with various mission equipment mounted on them. The coordinate-oriented algorithm calculates a gaze angle so that the mission equipment can fix its line of sight on a desired coordinate or position. Flight data and GPS data were collected and simulated in Matlab for the coordinate-oriented algorithm. In the simulation using only the coordinate data, the average Pan-axis angle was about 0.42°, the Tilt-axis angle ranged from 0.003° to 0.43°, and the relatively wide error was about 0.15° on average. Converted into distances in the NE frame, the error distance in the N direction was about 2.23m on average, and the error distance in the E direction was about -1.22m on average. The simulation applying actual flight data showed a result of about 19m@CEP. We therefore studied the inherent error of the coordinate-oriented algorithm in monitoring and information collection, the main task of the EOTS, confirmed that the quantitative target of 30m@CEP at a range of 500m is satisfied, and showed that the desired coordinates can be pointed at.
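
A coordinate-pointing calculation of the kind described (gimbal pan/tilt angles from the platform's position toward a target coordinate) might look like the sketch below. The flat-earth NED conversion and the omission of aircraft attitude compensation are simplifying assumptions and do not reproduce the paper's algorithm.

```python
# Hedged sketch: pan/tilt angles from own position to a target coordinate.
import math

EARTH_RADIUS_M = 6_378_137.0


def lla_to_ned(lat, lon, alt, ref_lat, ref_lon, ref_alt):
    """Small-area flat-earth approximation of local NED offsets in meters."""
    d_north = math.radians(lat - ref_lat) * EARTH_RADIUS_M
    d_east = math.radians(lon - ref_lon) * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    d_down = ref_alt - alt
    return d_north, d_east, d_down


def pan_tilt_to_target(own_lla, target_lla):
    """Pan (azimuth) and tilt (elevation) angles in degrees toward the target."""
    n, e, d = lla_to_ned(*target_lla, *own_lla)
    pan = math.degrees(math.atan2(e, n))               # 0 deg = North, CW positive
    horizontal = math.hypot(n, e)
    tilt = math.degrees(math.atan2(-d, horizontal))    # negative when looking down
    return pan, tilt


# Example: platform at 300 m altitude looking at a ground point to the north-east.
own = (36.0000, 127.0000, 300.0)
target = (36.0010, 127.0010, 50.0)
print(pan_tilt_to_target(own, target))
```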

Measuring Visual Attention Processing of Virtual Environment Using Eye-Fixation Information

  • Kim, Jong Ha;Kim, Ju Yeon
    • Architectural research
    • /
    • v.22 no.4
    • /
    • pp.155-162
    • /
    • 2020
  • Numerous scholars have explored the modeling, control, and optimization of energy systems in buildings, offering new insights about technology and environments that can advance industry innovation. Eye trackers deliver objective eye-gaze data about visual and attentional processes. Due to its flexibility, accuracy, and efficiency in research, eye tracking offers a control scheme that makes it possible to measure rapid eye movement in three-dimensional space (e.g., virtual reality, augmented reality). Because eye movement is an effective modality for digital interaction with a virtual environment, tracking how users scan a visual field and fixate on various digital objects can help designers optimize building environments and materials. Although several scholars have conducted virtual reality studies in three-dimensional space, they have not agreed on a consistent way to analyze eye tracking data. We conducted eye tracking experiments using objects in three-dimensional space to find an objective way to process quantitative visual data. By applying a 12 × 12 grid framework for eye tracking analysis, we investigated how people gazed at objects in a virtual space while wearing a head-mounted display. The findings provide an empirical base for a standardized protocol for analyzing eye tracking data in the context of virtual environments.
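
The 12 × 12 grid analysis described above can be illustrated with a short sketch that bins normalized fixation coordinates into grid cells and ranks cells by fixation duration; the fixation record format and the [0, 1) coordinate convention are assumptions.

```python
# Hedged sketch: bin fixations into a 12 x 12 grid and rank the most-attended cells.
import numpy as np

GRID = 12

# (x, y) fixation coordinates normalized to the rendered view, plus duration (s).
fixations = [
    (0.12, 0.40, 0.25),
    (0.13, 0.41, 0.30),
    (0.80, 0.75, 0.20),
]

counts = np.zeros((GRID, GRID), dtype=int)
durations = np.zeros((GRID, GRID))

for x, y, dur in fixations:
    col = min(int(x * GRID), GRID - 1)   # clamp x == 1.0 into the last column
    row = min(int(y * GRID), GRID - 1)
    counts[row, col] += 1
    durations[row, col] += dur

# Cells ranked by total fixation duration (most attended first).
flat = durations.ravel()
for idx in np.argsort(flat)[::-1][:3]:
    r, c = divmod(idx, GRID)
    print(f"cell ({r}, {c}): {counts[r, c]} fixations, {flat[idx]:.2f} s")
```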

Proposal for AI Video Interview Using Image Data Analysis

  • Park, Jong-Youel;Ko, Chang-Bae
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.2
    • /
    • pp.212-218
    • /
    • 2022
  • The need for AI video interviews arises when recruiting excellent talent in non-face-to-face situations such as those created by Covid-19. A shortcoming of typical AI interviews is that reliability and qualitative factors are difficult to evaluate; in addition, the AI interview is conducted not as a two-way Q&A but as a one-sided Q&A process. This paper aims to fuse the advantages of existing AI interviews and video interviews. When an interview is conducted using AI image analysis technology, the subjective information used to evaluate the interview is supplemented with quantitative analysis data and HR expert data. In this paper, image-based multi-modal AI image analysis, bio-analysis-based HR analysis, and WebRTC-based P2P video communication technologies are applied. The goal is to propose a method in which biological analysis results (gaze, posture, voice, gesture, landmarks) and HR information (opinions or traits based on user propensity) can be processed on a single screen to select the right person for the job.
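
A minimal sketch of the kind of combined record the abstract describes, per-candidate multimodal scores alongside HR reviewer notes flattened for a single-screen view, is shown below; all field names and the 0-to-1 score convention are illustrative assumptions.

```python
# Hedged sketch of a combined candidate record; field names are assumptions.
from dataclasses import dataclass, field, asdict


@dataclass
class MultimodalScores:
    gaze: float        # e.g. fraction of time looking at the camera
    posture: float
    voice: float
    gesture: float
    landmark: float    # facial-landmark-based expression score


@dataclass
class HRNotes:
    reviewer: str
    opinion: str
    traits: list[str] = field(default_factory=list)


@dataclass
class InterviewRecord:
    candidate_id: str
    scores: MultimodalScores
    hr: HRNotes

    def summary(self) -> dict:
        """Flatten everything into one dict for a single-screen view."""
        return {"candidate_id": self.candidate_id,
                **asdict(self.scores),
                "reviewer": self.hr.reviewer,
                "opinion": self.hr.opinion,
                "traits": ", ".join(self.hr.traits)}


record = InterviewRecord(
    candidate_id="C-001",
    scores=MultimodalScores(gaze=0.82, posture=0.74, voice=0.68,
                            gesture=0.71, landmark=0.77),
    hr=HRNotes(reviewer="HR-expert-1", opinion="confident, clear answers",
               traits=["analytical", "collaborative"]),
)
print(record.summary())
```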

Development of Evaluation Metrics for Pedestrian Flow Optimization in a Complex Service Environment Based on Behavior Observation Method

  • Bahn, Sang-Woo;Lee, Chai-Woo;Kwon, Sang-Hyun;Yun, Myung-Hwan
    • Journal of the Ergonomics Society of Korea
    • /
    • v.29 no.4
    • /
    • pp.647-654
    • /
    • 2010
  • In a service environment, the spatial layout is an important factor that has a great impact on customers' behavioral characteristics, including wayfinding and purchasing. Previous studies have shown a gap between marketing, which focuses solely on profitability and satisfaction, and architecture, which looks only at the efficiency of pedestrian flow. To balance this disparity, this study suggests an integrated approach for assessing behavioral patterns in complex service environments. With the objective that complex service environments should increase profitability and efficiency while guaranteeing customer satisfaction, quantitative metrics were developed for evaluation. The metrics were defined to use data from behavior observation, including path tracking, population counting, and gaze analysis, whereas previous studies relied on abstract survey methods prone to sampling errors and data loss. To validate the metrics in a real-world setting, a case study was conducted at 4 train stations in Korea. In the case study, experiments were conducted to gather the required data in all 4 train stations, and their physical layouts were also analyzed. With the results from the case study, a comparative evaluation of the 4 train stations in terms of behavioral efficiency was possible, together with a discussion of the effect of their physical settings.
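
As an illustration of metrics computed from behavior-observation data such as path tracking, the sketch below computes a simple detour ratio for one tracked pedestrian path; this is a generic example in the spirit of the approach, not one of the paper's actual metrics.

```python
# Hedged sketch: walked-path length vs. straight-line distance for one track.
import math


def path_length(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))


def detour_ratio(points):
    """>= 1.0; larger values suggest less efficient wayfinding."""
    walked = path_length(points)
    direct = math.dist(points[0], points[-1])
    return walked / direct if direct > 0 else float("inf")


# One tracked pedestrian path from entrance to platform gate (toy coordinates, meters).
track = [(0.0, 0.0), (5.0, 1.0), (9.0, 6.0), (12.0, 6.5), (20.0, 10.0)]
print(f"walked {path_length(track):.1f} m, detour ratio {detour_ratio(track):.2f}")
```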

Robust Head Pose Estimation for Masked Face Image via Data Augmentation (데이터 증강을 통한 마스크 착용 얼굴 이미지에 강인한 얼굴 자세추정)

  • Han, Kyeongtak;Hong, Sungeun
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.944-947
    • /
    • 2022
  • Due to the coronavirus pandemic, mask wearing has increased worldwide; thus, image analysis of masked face images has become essential. Although head pose estimation can be applied to various face-related applications, including driver attention, face frontalization, and gaze detection, few studies have addressed the performance degradation caused by masked faces. This study proposes a new data augmentation that synthesizes a mask onto the face, depending on the face image size and pose, and shows robust performance on the BIWI benchmark dataset regardless of mask wearing. Since the proposed scheme is not limited to a specific model, it can be utilized in various head pose estimation models.
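
A simplified version of such a mask-synthesis augmentation can be sketched as below: a mask template is scaled to a detected face box and pasted over its lower half. The pose-dependent placement described in the abstract is not reproduced, and the file names and Haar cascade detector are assumptions.

```python
# Hedged sketch: paste a mask PNG over the lower half of each detected face box.
import cv2
import numpy as np

face_img = cv2.imread("face.jpg")                         # assumed input image
mask_rgba = cv2.imread("mask.png", cv2.IMREAD_UNCHANGED)  # assumed RGBA mask template

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Scale the mask to the face width and place it on the lower half of the box.
    mh = h // 2
    mask = cv2.resize(mask_rgba, (w, mh))
    alpha = mask[:, :, 3:] / 255.0                 # per-pixel blending weight
    region = face_img[y + h - mh: y + h, x: x + w]
    region[:] = (alpha * mask[:, :, :3] + (1 - alpha) * region).astype(np.uint8)

cv2.imwrite("face_masked.jpg", face_img)
```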