• Title/Summary/Keyword: gaze data


Gaze-Manipulated Data Augmentation for Gaze Estimation With Diffusion Autoencoders (디퓨전 오토인코더의 시선 조작 데이터 증강을 통한 시선 추적)

  • Kangryun Moon; Younghan Kim; Yongjun Park; Yonggyu Kim
    • Journal of the Korea Computer Graphics Society / v.30 no.3 / pp.51-59 / 2024
  • Collecting a dataset with corresponding labeled gaze vectors is costly in the gaze estimation field. In this paper, we propose a data augmentation method that manipulates the gaze of an original image, which improves the accuracy of the gaze estimation model when the number of given gaze labels is restricted. By conducting multi-class gaze bin classification as an auxiliary task and adjusting the latent variable of the diffusion model, the model semantically edits the gaze of the original image. We manipulate a non-binary attribute, the pitch and yaw of the gaze vector, to a desired range and use the edited images as augmented training data. The improved accuracy of the gaze estimation network under semi-supervised learning validates the effectiveness of our data augmentation, especially when the number of gaze labels is 50k or fewer.
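
The abstract describes shifting the semantic latent of a diffusion autoencoder to change gaze pitch and yaw and reusing the result as training data. The sketch below is a hedged illustration of that latent-editing step only; the diffusion-autoencoder interface (`encode_semantic`, `encode_stochastic`, `decode`) and the pitch/yaw latent directions are assumed placeholders, not the authors' actual code.

```python
# A minimal sketch (not the authors' code) of latent-space gaze editing for
# augmentation. The diffusion-autoencoder interface and the pitch/yaw latent
# directions are hypothetical placeholders.
import torch

def augment_with_gaze_edit(diff_ae, images, pitch_dir, yaw_dir,
                           pitch_shift, yaw_shift):
    """Shift the semantic latent of `images` along learned pitch/yaw
    directions, then decode the edited latent into gaze-edited images."""
    with torch.no_grad():
        z_sem = diff_ae.encode_semantic(images)      # semantic latent code
        x_T = diff_ae.encode_stochastic(images)      # stochastic (noise) code
        z_edit = z_sem + pitch_shift * pitch_dir + yaw_shift * yaw_dir
        edited = diff_ae.decode(z_edit, x_T)         # gaze-edited images
    return edited                                    # extra training samples
```

The edited images would be paired with the original label adjusted by the applied pitch/yaw shift and added to the training set.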

Analysis of the Fashion Shops' Images Applying Gaze Frequency (주시빈도를 적용한 패션숍 파사드 이미지 분석)

  • Yeo, Mi; Oh, Sun Ae
    • Korean Institute of Interior Design Journal / v.22 no.6 / pp.212-219 / 2013
  • This study tracks human gaze over fashion shop facade designs, derives the gaze frequency and gaze time for the gaze points along the path of sight, and seeks to establish the importance and value of facade design through theoretical systematization. To evaluate the data effectively, the study employed a measurement method from physiological psychology, an eye-tracking device. To determine gaze frequency and identify the content to be reflected in the facade, a review of precedent studies and a case study of facade designs were carried out to collect the stimuli used in the eye-tracking experiment, and then the eye-tracking experiment, which traces the movement of the eye (pupil), was performed. In the analysis of gaze frequency, the characteristics of gaze path formation made the characteristics of gaze frequency even clearer. What was notable in the analysis of the average gaze time was that only 8 out of 2,000 areas showed a gaze time of over 1 second, and all the others showed less than 1 second. This indicates that human sight constantly jumps around and 'stays' where there is interest; this study found the average frequency of such 'stays' in facade design. The study presents the major points for adding value to facade design based on scientific measurement and analysis data obtained through visual perception. As such, it is expected to interact positively with marketing by forming a theoretical background that improves the purchasing environment and assists in increasing sales.
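
The analysis rests on two quantities per facade area: gaze frequency and average gaze (dwell) time. A minimal sketch of how these could be computed from fixation records is shown below; the (area, duration) record layout is an assumption for illustration, not the study's software.

```python
# Assumed fixation records of (facade_area, duration_seconds) in viewing order;
# not the study's software.
from collections import defaultdict

def gaze_stats(fixations):
    counts, total_time = defaultdict(int), defaultdict(float)
    for area, duration in fixations:
        counts[area] += 1
        total_time[area] += duration
    return {a: {"frequency": counts[a],
                "mean_gaze_time_s": total_time[a] / counts[a]}
            for a in counts}

print(gaze_stats([("entrance", 0.4), ("display_window", 1.2), ("entrance", 0.3)]))
```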

Basic Study on Selective Visual Search by Eyetracking - Image around the Department Store Space - (시선추적을 이용한 선택적 시각탐색에 대한 기초적 연구 - 백화점매장 공간 이미지를 중심으로 -)

  • Park, Sun-Myung; Kim, Jong-Ha
    • Korean Institute of Interior Design Journal / v.24 no.2 / pp.125-133 / 2015
  • Gaze induction characteristics in space vary depending on the characteristics of spatial components and display. This study analyzed the dominant eye-fixation characteristics of three zones of a department store space. Eye-fixation characteristics depending on spatial components and positional relationships can be defined as follows. First, [**.jpg] was used as the extension when storing the photographed image in pixels during image data processing for analysis, and due to the compressed storage of image data, the image produced with a clear boundary was stored in neutral colors. To remove this problem, the image used in the analysis was re-processed in black and white and stored at the same time in the [**.bmp] format, which has a large capacity. As a result, the error caused by unnecessary colors in the program operation process was corrected. Second, the ratio of gaze to space area can be indicated as the strength of each gaze zone; when analyzing the gaze strength of the three zones, the left store was a zone with a "slightly strong" gaze strength of 102.8, the middle space was a zone with an "extremely weak" gaze strength of 89.6, and the right store was a zone with an "extremely strong" gaze strength of 117.2. Third, the IV section had a strong gaze strength on the middle space and the right store, and the V section showed a markedly strong gaze strength on the left and right stores. This tendency was the same as in the VI section, which had the strongest gaze strength, and there the right store had a slightly stronger gaze strength than the left store.
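
The reported "gaze strength" values (102.8, 89.6, 117.2) suggest an index normalized around 100. The sketch below assumes it is each zone's share of gaze divided by its share of image area, scaled by 100; this formula is an assumption for illustration, not the paper's stated definition.

```python
# Assumed definition: gaze strength = (zone's share of fixations) /
# (zone's share of image area) * 100; not the paper's stated formula.
def gaze_strength(gaze_counts, zone_areas):
    total_gaze = sum(gaze_counts.values())
    total_area = sum(zone_areas.values())
    return {zone: (gaze_counts[zone] / total_gaze)
                  / (zone_areas[zone] / total_area) * 100
            for zone in gaze_counts}

print(gaze_strength({"left_store": 350, "middle_space": 300, "right_store": 400},
                    {"left_store": 1.0, "middle_space": 1.0, "right_store": 1.0}))
```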

Non-intrusive Calibration for User Interaction based Gaze Estimation (사용자 상호작용 기반의 시선 검출을 위한 비강압식 캘리브레이션)

  • Lee, Tae-Gyun; Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation / v.16 no.1 / pp.45-53 / 2020
  • In this paper, we describe a new method for acquiring calibration data from the user interactions that occur continuously during web browsing, and for performing calibration naturally while estimating the user's gaze. The proposed non-intrusive calibration is a tuning process that adapts the pre-trained gaze estimation model to a new user using the obtained data. To achieve this, a generalized CNN model for estimating gaze is trained first, and then the non-intrusive calibration is employed to adapt quickly to new users through online learning. In experiments, the gaze estimation model is calibrated with combinations of various user interactions to compare performance, and improved accuracy is achieved compared to existing methods.
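
As a hedged illustration of the described tuning process, the sketch below treats interaction points (e.g., clicks during browsing) as weak gaze labels and runs a few online fine-tuning steps on a pre-trained model. The model interface, the 2-D screen-coordinate output, and the PyTorch training loop are assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming a PyTorch gaze model that maps an eye image to a
# 2-D screen coordinate. Interaction points collected during browsing are used
# as weak labels for a few online fine-tuning steps (not the authors' code).
import torch
import torch.nn.functional as F

def online_calibrate(model, interaction_samples, lr=1e-4, steps=1):
    """interaction_samples: list of (eye_image_tensor, screen_xy_tensor) pairs
    collected whenever the user clicks or taps while browsing."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        for eye_img, target_xy in interaction_samples:
            pred_xy = model(eye_img.unsqueeze(0))        # predicted gaze point
            loss = F.mse_loss(pred_xy, target_xy.unsqueeze(0))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```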

Gaze Detection Using Two Neural Networks (다중 신경망을 이용한 사용자의 응시 위치 추출)

  • 박강령; 이정준; 이동재; 김재희
    • Proceedings of the IEEK Conference / 1999.06a / pp.587-590 / 1999
  • Gaze detection is to locate the position on a monitor screen where a user is looking. We implement it with a computer vision system that places a camera above a monitor, and the user moves (rotates and/or translates) her face to gaze at different positions on the monitor. Up to now, we have tried several different approaches, and among them the two-neural-network approach described in this paper shows the best result (1.7 inch error for test data including facial rotation; 3.1 inch error for test data including facial rotation and translation).
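
For illustration only, the following sketch shows a small regression network mapping facial-feature measurements to an on-screen gaze position; the paper combines two neural networks, whereas this shows a single hypothetical regressor with assumed feature dimensions.

```python
# Illustrative only: one small MLP regressor from facial-feature measurements
# (e.g., head rotation/translation cues) to a gaze position on the monitor.
# The original work combines two neural networks; feature sizes are assumed.
import torch.nn as nn

class GazePositionNet(nn.Module):
    def __init__(self, n_features=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.Tanh(),
            nn.Linear(32, 2),          # (x, y) position on the monitor
        )

    def forward(self, facial_features):
        return self.net(facial_features)
```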


The Effects of Gaze Direction on the Stability and Coordination of the Lower Limb Joint during Drop-Landing (드롭랜딩 시 시선 방향의 차이가 하지관절의 안정성과 협응에 미치는 영향)

  • Kim, Kewwan; Ahn, Seji
    • Korean Journal of Applied Biomechanics / v.31 no.2 / pp.126-132 / 2021
  • Objective: The purpose of this study was to investigate how three gaze directions (bottom, normal, up) affect the coordination and stability of the lower limb during drop landing. Method: 20 female adults (age: 21.1±1.1 yrs, height: 165.7±6.2 cm, weight: 59.4±5.9 kg) participated in this study. Participants performed a single-leg drop landing task from a 30 cm height at a 20 cm horizontal distance from the force plate. Kinetic and kinematic data were obtained using 8 motion capture cameras and 1 force plate, and leg stiffness, loading rate, and DPSI were calculated. All statistical analyses were computed using the SPSS 25.0 program. A one-way repeated-measures ANOVA was used to compare the differences in the variables between gaze directions. To locate the differences, a Bonferroni post hoc test was applied when significance was observed. Results: The hip flexion angle and ankle plantar flexion angle were significantly smaller when the gaze direction was up. In the kinetic variables, when the gaze direction was up, the loading rate and DPSI were significantly higher than in the other gaze directions. Conclusion: Our results indicated decreased hip and ankle flexion angles and increased loading rate and DPSI when the gaze direction was up. This suggests that differences in visual information can increase the risk of injury to the lower limb during landing.
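
A minimal Python sketch of the reported statistics (one-way repeated-measures ANOVA with Bonferroni-corrected post hoc comparisons) is given below; the long-format data layout and variable names are assumptions, and the original analysis was done in SPSS.

```python
# Assumed long-format DataFrame with columns 'subject', 'gaze' (bottom/normal/up)
# and one dependent variable per analysis; not the study's SPSS workflow.
from itertools import combinations
from scipy import stats
from statsmodels.stats.anova import AnovaRM

def analyze(df, dv="loading_rate"):
    # One-way repeated-measures ANOVA across gaze directions
    print(AnovaRM(df, depvar=dv, subject="subject", within=["gaze"]).fit())
    # Bonferroni-corrected paired t-tests as post hoc comparisons
    pairs = list(combinations(df["gaze"].unique(), 2))
    for a, b in pairs:
        x = df[df["gaze"] == a].sort_values("subject")[dv].values
        y = df[df["gaze"] == b].sort_values("subject")[dv].values
        t, p = stats.ttest_rel(x, y)
        print(f"{a} vs {b}: p_bonferroni = {min(p * len(pairs), 1.0):.4f}")
```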

Pilot Gaze Tracking and ILS Landing Result Analysis using VR HMD based Flight Simulators (VR HMD 시뮬레이터를 활용한 조종사 시선 추적 및 착륙 절차 결과 분석)

  • Jeong, Gu Moon; Lee, Youngjae; Kwag, TaeHo; Lee, Jae-Woo
    • Journal of the Korean Society for Aviation and Aeronautics / v.30 no.1 / pp.44-49 / 2022
  • In this study, pilots holding a commercial pilot license performed precision instrument landing procedures using VR HMD flight simulators. Assuming that the center of the pilot's gaze lies straight ahead, gaze tracking was performed using the 3-D.O.F. head tracking data and the 2-D eye tracking of the VR HMD worn by the pilots. After that, AOIs (Areas of Interest) were set over the instrument panel and the external field of view of the cockpit to analyze how the pilot's gaze was distributed before and after the decision altitude. At the same time, the landing results were analyzed using the Localizer and G/S data as the pilot's precision instrument landing flight data. As a result, the pilots were quantitatively evaluated using the VR HMD simulator by combining the gaze tracking with the resulting landing performance.
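
The sketch below illustrates one way such an AOI analysis could be coded: gaze samples are assigned to rectangular AOIs and counted separately before and after the decision altitude. The AOI rectangles, the sample format, and the 200 ft threshold are assumptions for illustration, not the study's tooling.

```python
# Gaze samples (x, y in normalized view coordinates) are assigned to rectangular
# AOIs and counted separately before/after the decision altitude. The AOI boxes,
# sample format, and 200 ft threshold are assumptions for illustration.
def aoi_share(samples, aois, decision_alt_ft=200.0):
    """samples: dicts with 'x', 'y', 'alt_ft'; aois: name -> (x0, y0, x1, y1)."""
    counts = {"before_DA": {}, "after_DA": {}}
    for s in samples:
        phase = "before_DA" if s["alt_ft"] > decision_alt_ft else "after_DA"
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= s["x"] <= x1 and y0 <= s["y"] <= y1:
                counts[phase][name] = counts[phase].get(name, 0) + 1
                break
    return counts

aois = {"instrument_panel": (0.2, 0.5, 0.8, 1.0),
        "external_view": (0.0, 0.0, 1.0, 0.5)}
```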

A Study on the Aesthetic Identity of Modern Eroticism Fashion from the Perspective of Jacques Lacan's Unconscious Theory -Focusing on Jouissance & Gaze Theory- (자크 라캉 무의식이론의 관점에서 본 현대 에로티시즘 패션의 미적 정체성 연구 -주이상스와 응시론을 중심으로-)

  • Jungwon Yang; Misuk Lee
    • Journal of Fashion Business / v.27 no.2 / pp.124-139 / 2023
  • The purpose of this study is to determine the aesthetic identity of modern eroticism fashion, in which the energy of desire is maximized, through the 'jouissance' and 'gaze' of Jacques Lacan's theory of the unconscious. As the research method, jouissance and gaze, which are deeply related to eroticism in Lacan's theory of the unconscious, were examined by analyzing domestic and foreign monographs and prior research, and the aesthetic identity was then derived. The case analysis was limited to the 2000 S/S to 2022 F/W seasons. Based on prior research, the analysis focused mainly on clothing with the eroticism characteristics of exposure, close contact, see-through, and the conversion of underclothes into outer garments. The results of the study are as follows. First, eroticism, which can be linked to Lacan's types of 'jouissance' with their multiple meanings as the generating point of eroticism, manifested itself as voyeuristic eroticism, fatale eroticism, masochistic eroticism, surplus eroticism, and sacred eroticism. Second, as the eyes of unconscious desire, the visual expression characteristics of the 'gaze' appeared as anamorphosis, trompe l'oeil, and dépaysement. The identity of eroticism derived from Lacan's jouissance and the perspective of the desiring gaze was divided into the voyeuristic desire to gaze, the fatal desire to gaze, the masochistic desire to gaze, the surplus desire to gaze, and the sacred desire to gaze. The results of this study are expected to expand the theoretical horizons of eroticism fashion with a new interpretation of eroticism, combining Lacan's notion of desire, as a repressed and alienated subject within the human unconscious, with the art that expands it.

A Study on the Attention Concentration Properties in Convergent Exploration Situations in Cafe Space - Focusing on Gaze and Brain wave Data Analysis - (카페공간에 대한 수렴적 탐색상황에서의 주의집중 특성의 분석 방법에 관한 연구 - 선택적 주시데이터에 의한 뇌파 데이터 분석을 중심으로 -)

  • Kim, Jong-Ha; Kim, Ju-Yeon; Kim, Sang-Hee
    • Korean Institute of Interior Design Journal / v.25 no.2 / pp.30-40 / 2016
  • This study analyzed the attention concentration tendencies of one (1) subject who actively showed convergent exploratory behavior, through a gaze-brainwave measurement experiment with cafe space images; the research findings are as follows. First, the areas of interest (AOIs) that the subject gazed at with attention and concentration in the cafe space comprise seven areas in total: the counter & menu area, sign area, partition area, image wall area, stairs area, movable furniture area, and built-in furniture area. Second, conscious gaze frequency appeared highest in the counter & menu area, and conscious gaze appeared more in later stages than in the initial stage. Third, the conscious gaze pattern was divided into zones that explored various areas dispersedly (distributed exploratory zones) and zones that explored between particular areas intensively (intensive exploratory zones). Fourth, as a result of analyzing the brainwave attention concentration, it was found that the attention concentration in the prefrontal lobe (Fp1, Fp2) and frontal lobe (F3, F4) rose to a higher level in the 15 to 16 second zone, a time zone in which gazing at the counter & menu area was considered very active. In addition, over the entire experimental time, the attention concentration appeared higher in the initial zone than in the later zone. Finally, as a result of analyzing the changes in activation by brain region of the SMR wave, which is expressed when maintaining arousal and attention concentration, it was found that the right prefrontal lobe and the frontal lobe became activated in the time zones of intensive exploration of the "counter & menu area" and "movable furniture ↔ built-in furniture area", and of the "image wall ↔ partition area" and "counter & menu ↔ sign area".
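
As a hedged sketch of the SMR-wave analysis mentioned in the abstract, the function below estimates power in an assumed 12-15 Hz SMR band for one EEG channel via a Welch power spectral density, which could then be tracked per gaze time zone; it is not the study's processing pipeline.

```python
# Assumed SMR band of 12-15 Hz and a single EEG channel sampled at `fs` Hz;
# not the study's analysis pipeline.
import numpy as np
from scipy.signal import welch

def smr_band_power(eeg_signal, fs, band=(12.0, 15.0)):
    freqs, psd = welch(eeg_signal, fs=fs, nperseg=int(fs) * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])   # integrated band power
```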

Deep Learning-based Gaze Direction Vector Estimation Network Integrated with Eye Landmark Localization (딥 러닝 기반의 눈 랜드마크 위치 검출이 통합된 시선 방향 벡터 추정 네트워크)

  • Joo, Heeyoung; Ko, Min-Soo; Song, Hyok
    • Journal of Broadcast Engineering / v.26 no.6 / pp.748-757 / 2021
  • In this paper, we propose a gaze estimation network in which eye landmark position detection and gaze direction vector estimation are integrated into one deep learning network. The proposed network uses the Stacked Hourglass Network as a backbone structure and is largely composed of three parts: a landmark detector, a feature map extractor, and a gaze direction estimator. The landmark detector estimates the coordinates of 50 eye landmarks, the feature map extractor generates a feature map of the eye image for estimating the gaze direction, and the gaze direction estimator estimates the final gaze direction vector by combining the two outputs. The proposed network was trained using virtual synthetic eye images and landmark coordinate data generated with the UnityEyes dataset, and the MPIIGaze dataset, consisting of real human eye images, was used for performance evaluation. In the experiments, the network achieved a gaze estimation error of 3.9 and an estimation speed of 42 FPS (frames per second).
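
A simplified sketch of the described three-part design (landmark detector, feature-map extractor, gaze direction estimator) sharing one backbone is given below. The layer sizes, the soft-argmax landmark decoding, and the `backbone` module are illustrative assumptions, not the paper's exact Stacked Hourglass configuration.

```python
# Simplified sketch of a landmark + gaze network sharing one backbone.
# Layer sizes, soft-argmax decoding, and `backbone` are assumptions.
import torch
import torch.nn as nn

class GazeWithLandmarks(nn.Module):
    def __init__(self, backbone, feat_dim=256, n_landmarks=50):
        super().__init__()
        self.backbone = backbone                       # e.g., stacked hourglass
        self.landmark_head = nn.Conv2d(feat_dim, n_landmarks, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gaze_head = nn.Linear(feat_dim + n_landmarks * 2, 3)  # 3-D gaze vector

    def forward(self, eye_image):
        feat = self.backbone(eye_image)                # B x C x H x W feature map
        heatmaps = self.landmark_head(feat)            # one heatmap per landmark
        B, L, H, W = heatmaps.shape
        flat = heatmaps.view(B, L, -1).softmax(-1)     # soft-argmax weights
        ys = torch.arange(H, device=flat.device).repeat_interleave(W).float()
        xs = torch.arange(W, device=flat.device).repeat(H).float()
        coords = torch.stack([(flat * xs).sum(-1), (flat * ys).sum(-1)], dim=-1)
        pooled = self.pool(feat).flatten(1)            # global eye feature
        gaze = self.gaze_head(torch.cat([pooled, coords.flatten(1)], dim=1))
        return coords, gaze                            # landmarks and gaze vector
```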