• Title/Summary/Keyword: cue

Search Results: 549

Multi-Modal Wearable Sensor Integration for Daily Activity Pattern Analysis with Gated Multi-Modal Neural Networks (Gated Multi-Modal Neural Networks를 이용한 다중 웨어러블 센서 결합 방법 및 일상 행동 패턴 분석)

  • On, Kyoung-Woon;Kim, Eun-Sol;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.2
    • /
    • pp.104-109
    • /
    • 2017
  • We propose a new machine learning algorithm that analyzes daily activity patterns of users from multi-modal wearable sensor data. The proposed model learns and extracts activity patterns from wearable-device input in real time. Inspired by the cue-integration property of human perception, we construct gated multi-modal neural networks that integrate wearable sensor inputs selectively by means of gate modules. For the experiments, sensory data were collected with multiple wearable devices in restaurant situations. As an experimental result, we first show that the proposed model performs well in terms of prediction accuracy. We then explain how a knowledge schema can be constructed automatically by analyzing the activation patterns in the middle layer of the proposed model.
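
The abstract above describes gate modules that integrate sensor inputs selectively. As a rough illustration only, here is a minimal PyTorch sketch of gated fusion for two wearable-sensor modalities; the layer sizes, module names, and the two-modality setup are our own assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GatedMultiModalFusion(nn.Module):
    """Minimal sketch: each modality is encoded separately, a sigmoid gate
    computed from both encodings weights each modality, and the gated sum
    feeds a shared activity classifier."""

    def __init__(self, dim_a, dim_b, hidden=64, n_classes=5):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())
        self.gate = nn.Sequential(nn.Linear(2 * hidden, 2), nn.Sigmoid())
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x_a, x_b):
        h_a, h_b = self.enc_a(x_a), self.enc_b(x_b)
        g = self.gate(torch.cat([h_a, h_b], dim=-1))   # per-modality gate values
        fused = g[:, :1] * h_a + g[:, 1:] * h_b        # selective integration
        return self.classifier(fused)

# Example: accelerometer features (dim 32) and audio features (dim 16)
model = GatedMultiModalFusion(dim_a=32, dim_b=16)
logits = model(torch.randn(8, 32), torch.randn(8, 16))  # batch of 8 samples
```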

Applying differential techniques for 2D/3D video conversion to the objects grouped by depth information (2D/3D 동영상 변환을 위한 그룹화된 객체별 깊이 정보의 차등 적용 기법)

  • Han, Sung-Ho;Hong, Yeong-Pyo;Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.3
    • /
    • pp.1302-1309
    • /
    • 2012
  • In this paper, we propose applying differential techniques for 2D/3D video conversion to objects grouped by depth information. One problem in converting 2D images to 3D images with pixel-motion tracking is that objects that do not move between adjacent frames yield no depth information. This problem can be solved by applying the relative-height cue only to objects that have no motion information between frames, after splitting the background and objects and extracting depth information from the motion vectors of the objects. With this technique, the entire background and every object obtain their own depth information. The proposed method is used to generate a depth map for producing 3D images with DIBR (Depth Image Based Rendering), and we verified that objects with no movement between frames also received depth information.
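
As a hedged illustration of the pipeline sketched in the abstract (split background and objects, take depth from motion vectors, and fall back to the relative-height cue for static objects), the snippet below assigns a depth value per segmented object; the labeling format, threshold, and depth scaling are assumptions, not the authors' implementation.

```python
import numpy as np

def depth_map_from_motion_and_height(labels, flow, static_thresh=0.5):
    """labels: (H, W) int array, 0 = background, 1..N = grouped objects.
    flow: (H, W, 2) motion vectors between adjacent frames.
    Returns a depth map in [0, 1] (larger = nearer): motion magnitude for
    moving objects, relative-height cue for static ones."""
    h, w = labels.shape
    motion_mag = np.linalg.norm(flow, axis=2)
    depth = np.zeros((h, w), dtype=np.float32)

    for obj_id in range(1, labels.max() + 1):
        mask = labels == obj_id
        if not mask.any():
            continue
        mean_motion = motion_mag[mask].mean()
        if mean_motion > static_thresh:
            # moving object: stronger motion is treated as nearer
            depth[mask] = min(mean_motion / (motion_mag.max() + 1e-6), 1.0)
        else:
            # static object: relative-height cue, lower in the frame = nearer
            depth[mask] = mask.nonzero()[0].mean() / h

    # background gets a gentle height-based depth gradient as well
    rows = np.repeat(np.arange(h)[:, None], w, axis=1) / h
    depth[labels == 0] = rows[labels == 0] * 0.5
    return depth
```

The resulting depth map would then feed a DIBR renderer to synthesize the second view.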

Robust Facial Expression Recognition using PCA Representation (PCA 표상을 이용한 강인한 얼굴 표정 인식)

  • Shin Young-Suk
    • Korean Journal of Cognitive Science
    • /
    • v.16 no.4
    • /
    • pp.323-331
    • /
    • 2005
  • This paper proposes an improved system for recognizing facial expressions in various internal states that is illumination-invariant and requires no detectable cue such as a neutral expression. As a preprocessing step to extract facial expression information, a whitening step was applied. In the whitening step, the mean of the images is set to zero and the variances are equalized to unit variance, which reduces much of the variability due to lighting. After the whitening step, we used facial expression information based on a principal component analysis (PCA) representation that excludes the first principal component. Therefore, it is possible to extract features from facial expression images without a detectable cue such as a neutral expression. From the experimental results, we can also implement various and natural facial expression recognition, because we perform facial expression recognition based on a dimensional model of internal states on images selected randomly from facial expression images corresponding to 83 internal emotional states.

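The preprocessing chain described in the abstract (whitening, then a PCA representation that drops the first principal component) can be sketched in a few lines of NumPy; the number of retained components and the function name are illustrative assumptions.

```python
import numpy as np

def expression_features(images, n_components=30):
    """images: (N, H*W) flattened face images.
    1) whitening: zero mean and unit variance across the image set,
    2) PCA via SVD, 3) drop the first principal component (largely
    illumination), keep the next n_components as expression features."""
    X = images.astype(np.float64)
    X = X - X.mean(axis=0)              # zero mean
    X = X / (X.std(axis=0) + 1e-8)      # equalize variances (whitening step)

    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[1:1 + n_components]  # exclude the 1st principal component
    return X @ components.T              # projected expression features
```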

Effects of e-reviews on purchase intention for cosmetics (온라인 리뷰 탐색이 화장품 구매의도에 미치는 영향)

  • Park, Eun-Joo;Jung, Yu-Jin
    • Korean Journal of Human Ecology
    • /
    • v.22 no.2
    • /
    • pp.343-355
    • /
    • 2013
  • E-reviews (electronic reviews) are generally perceived as trustworthy and credible by consumers, because they are based on the experiences of other consumers who are independent of the marketers. Therefore, consumers may rely on review information as an important cue more than on direct experience or advertising. This paper explored a structural equation model to investigate the relationships among search motives for e-reviews, attributes of e-reviews, trust, and purchase intention for cosmetics. A self-administered questionnaire was developed based on previous research. Data were collected from 300 female university students who had experience purchasing cosmetics on the Internet and were analyzed with AMOS 20.0. Results showed that e-review attributes consisted of three factors: expertise/visuality, quality/functionality, and advertising/design. Utilitarian and hedonic search motives were significantly related to the expertise/visuality attributes of e-reviews and then influenced purchase intention for cosmetics, mediated by trust in e-reviews. However, the quality/functionality attributes related to the utilitarian motive did not have a significant effect on trust in e-reviews or purchase intention for cosmetics. Regardless of search motives and trust in e-reviews, the advertising/design attributes of e-reviews were directly related to purchase intention for cosmetics. As predicted, trust in e-reviews was an important mediating variable in stimulating purchase intention for cosmetics on the Internet. The implications of the findings for research and practice are discussed.
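
The study fits a full structural equation model in AMOS. As a rough, hedged approximation of the mediation structure only (not the authors' model), the ordinary-least-squares regressions below trace the path from search motives through the expertise/visuality attribute and trust to purchase intention; the file name and column names are invented for illustration.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data with one column per construct score
df = pd.read_csv("ereview_survey.csv")

def ols(y, X):
    return sm.OLS(df[y], sm.add_constant(df[X])).fit()

# Path a: search motives -> expertise/visuality attribute of e-reviews
path_a = ols("expertise_visuality", ["utilitarian_motive", "hedonic_motive"])
# Path b: expertise/visuality attribute -> trust in e-reviews
path_b = ols("trust", ["expertise_visuality"])
# Path c: trust and the direct advertising/design attribute -> purchase intention
path_c = ols("purchase_intention", ["trust", "advertising_design"])

for name, fit in [("a", path_a), ("b", path_b), ("c", path_c)]:
    print(f"path {name}:")
    print(fit.params)
```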

Visual Information Selection Mechanism Based on Human Visual Attention (인간의 주의시각에 기반한 시각정보 선택 방법)

  • Cheoi, Kyung-Joo;Park, Min-Chul
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.3
    • /
    • pp.378-391
    • /
    • 2011
  • In this paper, we suggest a novel method of selecting visual information based on the bottom-up visual attention of humans. We propose a new model that improves the accuracy of detecting attention regions by using depth information in addition to low-level spatial features such as color, lightness, orientation, and form, and the temporal feature of motion. Motion is an important cue when deriving temporal saliency, but noise introduced during the input and computation process deteriorates the accuracy of temporal saliency. Our system exploits the results of psychological studies in order to remove this noise from the motion information. Although typical systems have problems determining saliency when several salient regions are partially occluded and/or have almost equal saliency, our system is able to separate such regions with high accuracy. Spatiotemporally separated prominent regions obtained in the first stage are prioritized one by one using depth values in the second stage. Experimental results show that our system can describe salient regions with higher accuracy than previous approaches.
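
A minimal sketch of bottom-up saliency with a depth bias is given below, under our own simplifying assumptions (center-surround differences on lightness plus a smoothed motion map, combined with a normalized depth map); it is not the authors' model, which also uses color, orientation, form, and psychologically motivated noise removal.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_with_depth(lightness, motion_mag, depth, w_depth=0.5):
    """lightness, motion_mag, depth: (H, W) float arrays in [0, 1],
    with larger depth values meaning nearer regions."""
    def center_surround(f):
        # difference between a fine and a coarse Gaussian-blurred version
        return np.abs(gaussian_filter(f, 2) - gaussian_filter(f, 8))

    spatial = center_surround(lightness)
    temporal = gaussian_filter(motion_mag, 2)   # crude noise suppression
    conspicuity = spatial + temporal
    conspicuity /= conspicuity.max() + 1e-8

    # second stage: prioritize competing regions by depth
    return (1 - w_depth) * conspicuity + w_depth * depth * conspicuity
```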

A Surface Modeling Algorithm by Combination of Internal Vertexes in Spatial Grids for Virtual Conceptual Sketch (공간격자의 내부정점 조합에 의한 가상 개념 스케치용 곡면 모델링 알고리즘)

  • Nam, Sang-Hoon;Kim, Hark-Soo;Chai, Young-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.3
    • /
    • pp.217-225
    • /
    • 2009
  • When sketching a conceptual model in 3D space, it is not easy for the designer to perceive the depth cue accurately and to draw a model correctly in a short time. In this paper, multi-stroke-based sketching is adopted not only to reduce the error of the input points but also to substantiate the shape of the conceptual design effectively. The designer can see the drawing result immediately after stroking some curves. The shape can also be modified by stroking curves repeatedly, and the modified shape can be confirmed in real time. However, multi-stroke-based sketching must manage a great amount of input data. Therefore, the drawing space is divided into a limited number of cubical spatial grids, and a movable internal vertex in each spatial grid is implemented and used to define the surface from the multi-strokes. We implemented a spatial sketching system that converts the concept designer's intention into 3D model data efficiently.
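
The grid idea in the abstract can be illustrated with the hedged sketch below: the drawing volume is split into cubical cells, and each cell keeps one movable internal vertex updated from the stroke points that fall inside it. The cell count, bounds handling, and method names are assumptions, not the paper's data structure.

```python
import numpy as np

class StrokeGrid:
    def __init__(self, bounds_min, bounds_max, n_cells=16):
        self.lo = np.asarray(bounds_min, dtype=float)
        self.hi = np.asarray(bounds_max, dtype=float)
        self.n = n_cells
        self.sum = np.zeros((n_cells, n_cells, n_cells, 3))
        self.count = np.zeros((n_cells, n_cells, n_cells), dtype=int)

    def add_stroke(self, points):
        """points: (M, 3) sampled stroke points. Accumulating them per cell
        lets repeated strokes gradually pull each internal vertex toward the
        intended surface."""
        idx = ((points - self.lo) / (self.hi - self.lo) * self.n).astype(int)
        idx = np.clip(idx, 0, self.n - 1)
        for (i, j, k), p in zip(idx, points):
            self.sum[i, j, k] += p
            self.count[i, j, k] += 1

    def internal_vertices(self):
        """Averaged vertex of every cell touched by at least one stroke."""
        occupied = self.count > 0
        return self.sum[occupied] / self.count[occupied][:, None]
```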

Sitting Posture-Based Lighting System to Enhance the Desired Mood

  • Bae, Hyunjoo;Kim, Haechan;Suk, Hyeon-Jeong
    • Journal of the Ergonomics Society of Korea
    • /
    • v.34 no.2
    • /
    • pp.191-198
    • /
    • 2015
  • Objective: As a cue for the desired mood, we attempted to identify types of sitting postures when people are involved in various tasks during their working hours. Background: Physical behaviors in reaction to user contexts have been studied, such as automated posture analysis for detecting a subject's emotion. Sitting postures have high feasibility and can be detected robustly with a sensing chair, especially in an office. Method: First, we attached seven sensors, including six pressure sensors and one distance sensor, to an office chair. In Part 1, we recorded participants' postures while they took part in four different tasks. From the seven sensors, we gathered five sets of data related to the head, the lumbar, the hip, thigh pressure, and the distance between the backrest and the body. We classified them into four postures: leaning forward, upright, upright with lumbar support, and leaning backward. In Part 2, we requested the subjects to take suitable poses for each of the four task types. In this way, we compared the matches between postures and tasks in a natural setting to those in a controlled situation. Results: We derived four types of sitting postures that were mapped onto the different tasks. The comparison yielded no statistical significance between Parts 1 and 2. In addition, there was a significant association between the task types and the posture types. Conclusion: The users' sitting postures were related to different types of tasks. This study demonstrates how human emotion can interact with lighting, as mediated through physical behavior. Application: We developed a posture-based lighting system that manipulates the quality of office lighting and is operated by changes in one's posture. Facilitated by this system, color temperatures ranging between 3,000 K and 7,000 K and illuminance levels ranging between 300 lx and 700 lx were modulated.
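
As a rough sketch only, the snippet below classifies one of the four postures named in the abstract from the chair's sensor readings and picks a lighting preset within the reported ranges; the thresholds, parameter names, and the posture-to-lighting mapping are illustrative assumptions, not the system described in the paper.

```python
# All pressure inputs assumed normalized to [0, 1]; distance in centimeters.
def classify_posture(head, lumbar, hip, thigh, backrest_distance):
    if backrest_distance > 20 and thigh > 0.6:
        return "leaning_forward"
    if lumbar > 0.5 and backrest_distance <= 20:
        return "upright_lumbar_support"
    if head > 0.5 and hip > 0.3:
        return "leaning_backward"
    return "upright"

# Hypothetical presets inside the 3,000-7,000 K and 300-700 lx ranges above
LIGHTING_PRESETS = {
    "leaning_forward":        {"cct_k": 7000, "illuminance_lx": 700},  # focused
    "upright":                {"cct_k": 5000, "illuminance_lx": 500},
    "upright_lumbar_support": {"cct_k": 4000, "illuminance_lx": 400},
    "leaning_backward":       {"cct_k": 3000, "illuminance_lx": 300},  # relaxed
}

posture = classify_posture(head=0.1, lumbar=0.7, hip=0.8, thigh=0.4,
                           backrest_distance=12)
print(posture, LIGHTING_PRESETS[posture])
```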

Sound Source Localization using HRTF database

  • Hwang, Sung-Mok;Park, Young-Jin;Park, Youn-Sik
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.751-755
    • /
    • 2005
  • We propose a sound source localization method using the Head-Related Transfer Function (HRTF), to be implemented on a robot platform. In conventional localization methods, the location of a sound source is estimated from the time delays of wave fronts arriving at each microphone of an array standing in free field. In the case of a human head, this corresponds to the Interaural Time Delay (ITD), which is simply the time delay of incoming sound waves between the two ears. Although ITD is an excellent sound cue for stimulating lateral perception in the horizontal plane, confusion often arises when tracking the sound location from ITD alone, because each sound source and its mirror image about the interaural axis share the same ITD. On the other hand, HRTFs associated with a dummy-head microphone system or a robot platform with several microphones contain not only the proper time delays but also the phase and magnitude distortions caused by diffraction and scattering from the shading object, such as the head and body of the platform. As a result, a set of HRTFs for any given platform provides a substantial amount of information as to the whereabouts of the source, once proper analysis is performed. In this study, we introduce new phase and magnitude criteria to be satisfied by a set of output signals from the microphones in order to find the sound source location in accordance with an HRTF database obtained empirically in an anechoic chamber with the given platform. The suggested method is verified through an experiment in a household environment and compared against the conventional method in performance.

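The exact phase and magnitude criteria are not given in the abstract; the hedged sketch below scores each candidate direction in an HRTF database by comparing the measured left/right spectral ratio against the database ratio in log-magnitude and phase, and returns the best match. The database format and cost function are assumptions.

```python
import numpy as np

def localize_with_hrtf(sig_left, sig_right, hrtf_db):
    """sig_left, sig_right: time-domain microphone signals.
    hrtf_db: dict mapping direction (deg) -> (H_left, H_right), complex
    responses assumed to be sampled on the same rfft frequency grid."""
    L = np.fft.rfft(sig_left)
    R = np.fft.rfft(sig_right)
    measured_ratio = L / (R + 1e-12)

    best_dir, best_cost = None, np.inf
    for direction, (h_l, h_r) in hrtf_db.items():
        db_ratio = (h_l + 1e-12) / (h_r + 1e-12)
        mag_err = np.mean(np.abs(np.log(np.abs(measured_ratio) + 1e-12)
                                 - np.log(np.abs(db_ratio) + 1e-12)))
        phase_err = np.mean(np.abs(np.angle(measured_ratio / db_ratio)))
        cost = mag_err + phase_err
        if cost < best_cost:
            best_dir, best_cost = direction, cost
    return best_dir
```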

Surveying Visitors' Behavior in Chuwangsan National Park (주왕산국립공원의 이용자 행태조사)

  • 김용근;최성식
    • Korean Journal of Environment and Ecology
    • /
    • v.8 no.2
    • /
    • pp.160-166
    • /
    • 1995
  • Visitors to Chuwangsan National Park were surveyed from August 3 to 5, 1994. During this time, 346 visitors were contacted. Of those individuals, 65% were male, 63% of respondents reported that they had gone as far as college, and 48% were in their twenties. 97% of the survey respondents had visited other national parks before. The largest percentage of respondents reported that they visited Chuwangsan National Park to enjoy the natural landscape. By group type, 50% were traveling with their family and 36% with their friends. In terms of activity characteristics, 51% were day-time visitors, and 18% mentioned carrying in their own food. Generally, most respondents were very interested in environmental problems in national parks. The majority of visitors perceived that the environment of Chuwangsan National Park was good enough. Among the six types of normative violations, the major reasons for littering were unintentional violation and releaser-cue violation. Most respondents were not likely to intervene to stop other visitors' depreciative behavior (bystander intervention behavior). In the two dilemmas, the more likely the intention to obey a regulation, the less likely the intention to disobey it, and vice versa.


Computer-generated hologram based on the depth information of active sensor (능동형 센서의 깊이 정보를 이용한 컴퓨터 형성 홀로그램)

  • Kim, Sang-Jin;Kang, Hoon-Jong;Yoo, Ji-Sang;Lee, Seung-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.10 s.352
    • /
    • pp.22-27
    • /
    • 2006
  • In this paper, we propose a method that can generate a computer-generated hologram (CGH) from the depth stream and color image provided by a camera with an active-sensor add-on. Distinguished from existing holographic display systems that use a computer graphics model to generate the CGH, this method utilizes a real camera image that includes depth information for each object captured by the camera, as well as color information. The procedure consists of two steps: acquisition of a depth-annotated image of a real object, and generation of the CGH according to the 3D information extracted from the depth cue. In addition, we display the generated CGH on a holographic display system. In the experimental system, we reconstructed the image encoded in the CGH with a reflective LCD panel that had a pixel pitch of 10.4 µm and a resolution of 1408x1050.
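
As a hedged illustration of point-based CGH computation from a depth-annotated image (not necessarily the algorithm used in the paper), the sketch below treats each non-zero intensity pixel as a point source whose distance comes from the depth map and sums the complex fields at the hologram plane; the wavelength, depth-to-distance mapping, and grayscale input are assumptions.

```python
import numpy as np

def cgh_from_depth(intensity, depth, pitch=10.4e-6, wavelength=532e-9, z0=0.2):
    """intensity, depth: (H, W) arrays; depth in [0, 1] is mapped to distances
    around z0 meters. Returns a phase-only hologram for a panel with the
    given pixel pitch (slow reference loop, for illustration only)."""
    h, w = intensity.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x_plane = (xs - w / 2) * pitch
    y_plane = (ys - h / 2) * pitch
    k = 2 * np.pi / wavelength

    field = np.zeros((h, w), dtype=np.complex128)
    for py, px in zip(*np.nonzero(intensity > 0)):
        z = z0 + depth[py, px] * 0.05                  # depth cue -> distance
        r = np.sqrt((x_plane - (px - w / 2) * pitch) ** 2
                    + (y_plane - (py - h / 2) * pitch) ** 2 + z ** 2)
        field += intensity[py, px] * np.exp(1j * k * r) / r
    return np.angle(field)                             # phase-only CGH
```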