• Title/Summary/Keyword: visual performance

Search results: 2,007

A Study on the Relationship of Visual Aesthetics Design and Performance in the Internet Web Site (인터넷 웹사이트의 시각적 디자인과 성과와의 관계에 관한 연구)

  • Kim, Seung Kyung
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.4 no.4
    • /
    • pp.85-101
    • /
    • 2008
  • This study focuses on visual aesthetic factors in apparel web site design. 103 undergraduates participated in the evaluation of six internet web sites. The evaluations were recorded and analyzed using the inspection method and a questionnaire. The findings of this study can be summarized as follows. First, the result of a factor analysis (SPSS) shows two distinct factors, 'classical aesthetics' and 'expressive aesthetics', both of which describe visual aesthetic design; these findings clarify the conceptual confusion surrounding 'visual aesthetic design'. Second, multiple regression analyses show that 'classical aesthetics' and 'expressive aesthetics' have a positive influence on 'interactivity' and 'web site evaluation'. This research clarifies the concept of 'engagement' of Rosen & Purinton as the 'interactivity' between users and web sites. Finally, this study suggests that 'good design' for internet web sites depends on understanding how to attain the appropriate balance between 'classical aesthetics' and 'expressive aesthetics', based on the target customer.
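The abstract describes a two-step analysis: exploratory factor analysis of questionnaire items followed by multiple regression on the resulting factor scores. A minimal sketch of that kind of pipeline is shown below, using scikit-learn on made-up ratings; the item names, factor labels, and outcome variable are assumptions for illustration, not the study's actual instrument.

```python
# Sketch of a factor-analysis-plus-regression pipeline like the one described
# in the abstract. The ratings, item names, and outcome are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical 7-point ratings: 103 respondents x 8 aesthetic items.
items = pd.DataFrame(rng.integers(1, 8, size=(103, 8)),
                     columns=[f"item_{i}" for i in range(1, 9)])
evaluation = rng.normal(size=103)  # e.g., overall web-site evaluation score

# Step 1: extract two latent factors (e.g., 'classical' vs. 'expressive').
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
factor_scores = fa.fit_transform(items)
loadings = pd.DataFrame(fa.components_.T, index=items.columns,
                        columns=["classical", "expressive"])
print(loadings.round(2))

# Step 2: regress the evaluation outcome on the factor scores.
reg = LinearRegression().fit(factor_scores, evaluation)
print("coefficients:", reg.coef_, "R^2:", reg.score(factor_scores, evaluation))
```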

A Novel Integration Scheme for Audio Visual Speech Recognition

  • Pham, Than Trung;Kim, Jin-Young;Na, Seung-You
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.8
    • /
    • pp.832-842
    • /
    • 2009
  • Automatic speech recognition (ASR) has been successfully applied to many real human-computer interaction (HCI) applications; however, its performance degrades significantly in noisy environments. Audio-visual speech recognition (AVSR), which combines the acoustic signal with lip motion, has recently attracted attention due to its robustness to noise. In this paper, we describe a novel integration scheme for AVSR based on a late integration approach. First, we introduce a robust reliability measure for the audio and visual modalities using both model-based and signal-based information: the model-based source measures the confusability of the vocabulary, while the signal-based source estimates the noise level. Second, the output probabilities of the audio and visual speech recognizers are each normalized before the final integration step, which combines them in the normalized output space using the estimated weights. We evaluate the proposed method on a Korean isolated-word recognition system. The experimental results demonstrate the effectiveness and feasibility of the proposed system compared to conventional systems.
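The core of a late-integration scheme like the one outlined in this abstract is a weighted combination of normalized per-word scores from the two single-modality recognizers. The sketch below illustrates that general idea; the min-max normalization and the fixed reliability weight are placeholder assumptions, not the paper's exact formulation.

```python
# Minimal sketch of late (decision-level) audio-visual fusion: per-word scores
# from each modality are normalized and combined with a reliability weight.
# The normalization and the weight used here are generic placeholders.
import numpy as np

def normalize_scores(log_likelihoods):
    """Map per-word log-likelihoods to a comparable [0, 1] output space."""
    s = np.asarray(log_likelihoods, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse(audio_scores, visual_scores, audio_weight):
    """Weighted late integration: w * audio + (1 - w) * visual."""
    a = normalize_scores(audio_scores)
    v = normalize_scores(visual_scores)
    return audio_weight * a + (1.0 - audio_weight) * v

# Hypothetical scores for a 4-word vocabulary under noisy audio.
audio = [-120.3, -118.9, -119.5, -125.0]   # acoustic log-likelihoods
visual = [-80.2, -75.1, -79.8, -82.4]      # lip-motion log-likelihoods
w = 0.3  # low audio reliability, e.g. estimated from the noise level
combined = fuse(audio, visual, w)
print("recognized word index:", int(np.argmax(combined)))
```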

The Influences of Visual and Hearing Impairment on Activities of Daily Living for the Community Dwelling Elderly (재가노인의 시청각기능장애가 일상생활수행능력에 미치는 영향)

  • Park, Eun-Ok;Kim, Eun-Young;Kim, Hee-Girl;So, Ae-Young;Yi, Ggo-Me;June, Kyung-Ja
    • Research in Community and Public Health Nursing
    • /
    • v.12 no.2
    • /
    • pp.417-427
    • /
    • 2001
  • Purpose: The aim of this study is to identify the influence of visual and hearing impairment on the activities of daily living of community-dwelling elderly. Methods: Data were collected by home-visiting interviewers from 452 older people aged 65 years or older living in the community. The Resident Assessment Instrument MDS-HC (version 2.0) was used for data collection. Descriptive statistics, chi-square tests, and multiple regression were computed with SAS 6.2. Results: 34.7% of the subjects had hearing impairment and 64.3% had visual impairment. Among the IADLs, one half of the subjects were dependent in ordinary housework and meal preparation. In the case of ADLs, 13.9% of subjects were dependent in bathing and 8.9% in personal hygiene. IADL performance differed significantly by both visual and hearing impairment, whereas ADL performance differed significantly only by hearing impairment. When visual and hearing impairment were entered into the regression model, the explained variance increased from 3% to 11%. Conclusions: A large proportion of older people living in the community have visual and hearing impairment. Hearing and vision were confirmed as significant factors influencing the IADL performance of older people. Intervention and support policies for the elderly need to focus on the improvement of visual and hearing impairment.


Evaluation of Visual Performance for Implanted Aspheric Multifocal Intraocular Lens in the Cataract Patients (백내장 환자에서 비구면 다초점 인공수정체 삽입 후 시기능 평가)

  • Kim, Jae-Yoon;Lee, Koon-Ja
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.18 no.3
    • /
    • pp.347-356
    • /
    • 2013
  • Purpose: To evaluate visual acuity and visual performance after implantation of an aspheric multifocal intraocular lens (IOL; ReSTOR® SN6AD3). Methods: Nineteen cataract patients (30 eyes) implanted either unilaterally or bilaterally with the aspheric multifocal IOL participated. Visual acuity (VA) and objective optical performance were evaluated preoperatively and at 1 week, 1 month, and 3 months after surgery. At 3 months post-operation, objective visual performance was measured and compared with 38 eyes of 20 age-matched normal controls. Distance VA was measured with the ETDRS LCD chart, and intermediate and near VA with the Jaeger chart. Objective visual performance was assessed with a double-pass system (Optical Quality Analysis System) at a 4-mm pupil diameter, using the objective scatter index (OSI), modulation transfer function (MTF) cut-off, and Strehl ratio. Results: The uncorrected distance VA, OSI, MTF cut-off, and Strehl ratio improved significantly (p<0.05) until 1 month postoperatively. At 3 months after surgery, the MTF cut-off and Strehl ratio were comparable to those of the normal controls (p=0.063 and p=0.103, respectively), whereas the OSI remained higher than in the controls. Patients implanted with the aspheric multifocal IOL were satisfied with distance and near VA but were unsatisfied with intermediate VA and reported glare and halos. Conclusions: Visual performance stabilizes within 1 month of implantation of the aspheric multifocal IOL and improves to the level of age-matched normal eyes. Patients were satisfied with their overall quality of vision; however, reduced intermediate VA, glare, and halos were reported as complications.
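The objective metrics named in this abstract, the MTF cut-off and the Strehl ratio, are both derived from the point-spread function measured by the double-pass system. The sketch below is an illustrative, simplified computation on synthetic one-dimensional PSFs, not the OQAS device's algorithm; all numbers are made up.

```python
# Illustrative sketch (not the OQAS algorithm): the MTF is the normalized
# magnitude of the Fourier transform of the point-spread function (PSF); the
# MTF cut-off is taken here as the frequency where the MTF falls below 0.01,
# and the Strehl ratio as the peak of the measured PSF relative to a
# diffraction-limited reference PSF of equal energy. All numbers are made up.
import numpy as np

def mtf(psf, dx):
    """One-dimensional MTF from a sampled PSF (sampling step dx, in degrees)."""
    otf = np.fft.rfft(psf / psf.sum())
    freqs = np.fft.rfftfreq(len(psf), d=dx)   # cycles per degree
    return freqs, np.abs(otf) / np.abs(otf[0])

x = np.linspace(-0.5, 0.5, 2048)              # visual angle in degrees
dx = x[1] - x[0]
psf_ideal = np.exp(-(x / 0.004) ** 2)         # narrow "diffraction-limited" PSF
psf_eye = np.exp(-(x / 0.010) ** 2)           # broader measured PSF

freqs, mtf_eye = mtf(psf_eye, dx)
cutoff = freqs[np.argmax(mtf_eye < 0.01)]     # first frequency below 1% contrast
strehl = (psf_eye / psf_eye.sum()).max() / (psf_ideal / psf_ideal.sum()).max()
print(f"MTF cut-off ~ {cutoff:.1f} c/deg, Strehl ratio ~ {strehl:.2f}")
```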

Robust Visual Odometry System for Illumination Variations Using Adaptive Thresholding (적응적 이진화를 이용하여 빛의 변화에 강인한 영상거리계를 통한 위치 추정)

  • Hwang, Yo-Seop;Yu, Ho-Yun;Lee, Jangmyung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.9
    • /
    • pp.738-744
    • /
    • 2016
  • In this paper, a robust visual odometry system has been proposed and implemented for environments with dynamic illumination. The visual odometry system estimates distances to objects from stereo images. It is very difficult to achieve highly accurate and stable estimation because image quality depends strongly on illumination, which is a major disadvantage of visual odometry. Therefore, to address the degraded feature detection caused by illumination variations, an optimal threshold value is determined for image binarization, and an adaptive threshold is used for feature detection. The direction of each feature point and the magnitude of its motion vector, which are not uniform, are utilized as features. Feature detection performance is further improved by the RANSAC algorithm. As a result, the position of a mobile robot is estimated from the feature points. The experimental results demonstrate that the proposed approach has superior performance under illumination variations.
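As a rough sketch of the general approach described here (illumination-adaptive binarization before feature detection, followed by RANSAC rejection of inconsistent matches), the OpenCV-based code below shows one plausible realization; the block size, detector choice (ORB), and RANSAC-on-the-fundamental-matrix step are assumptions, not the authors' exact pipeline.

```python
# Rough sketch of the general idea (not the authors' exact pipeline): binarize
# each frame with an adaptive, illumination-dependent threshold before feature
# detection, then reject inconsistent matches with RANSAC.
import cv2
import numpy as np

def detect_features(gray):
    # Adaptive threshold: the threshold is computed per neighbourhood, so it
    # tracks local illumination instead of using one global value.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, blockSize=31, C=5)
    orb = cv2.ORB_create(nfeatures=1000)
    return orb.detectAndCompute(binary, None)

def match_with_ransac(prev_gray, curr_gray):
    kp1, des1 = detect_features(prev_gray)
    kp2, des2 = detect_features(curr_gray)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC keeps only matches consistent with a single epipolar geometry.
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    keep = inlier_mask.ravel() == 1
    return pts1[keep], pts2[keep]
```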

Cross modal Transfer in Infancy : Transfer from Touch to Vision (유아의 감각양식간 전이 - 촉각에서 시각으로의 전이 -)

  • Hong, Hee Young
    • Korean Journal of Child Studies
    • /
    • v.7 no.1
    • /
    • pp.67-84
    • /
    • 1986
  • The purpose of the present research was to investigate cross-modal transfer, especially tactual-to-visual transfer, in infancy and to study the relation between failure of cross-modal transfer performance and the length of the familiarization period. The subjects were 60 infants, 10 boys and 10 girls at each of three age levels: six, nine, and twelve months. All were normal, healthy, full-term babies, and the mothers' educational attainment was controlled at more than 12 years of schooling. There were two separate experimental conditions, with 30-second and 60-second familiarization periods. Each condition consisted of a tactual familiarization and a visual recognition memory test, and each child was presented with these two sets of cross-modal stimuli in one of the two conditions. The infants' visual responses in the recognition memory test were videotaped for 20 seconds, and visual fixation time to the novel and familiar stimuli was observed throughout the test. The data were analyzed with t-tests, the percentage of total fixation time to the novel stimulus, and ANOVA. The results showed that: 1) Significant differences were found in cross-modal transfer performance from touch to vision among the three age groups; that is, 6- and 9-month-old infants did not show cross-modal transfer in the 30-second condition, but 12-month-old infants did. 2) In all three age groups, no significant differences were found in cross-modal transfer performance between the two conditions.


Design of Visual Surveillance System based on Wireless High Definition Image Transmission Technology (무선 고해상도 영상 전송 기술에 기반한 영상 감시 시스템의 설계)

  • Kim, Won
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.12 no.5
    • /
    • pp.25-30
    • /
    • 2012
  • It is important to detect dangerous objects that are intentionally abandoned in public places. Visual surveillance systems are now required to enhance performance in two ways: high resolution and wireless connectivity. In this study, the design of a visual surveillance system that detects abandoned objects for social-security purposes is newly proposed, based on wireless high-resolution image transmission technology. In addition, to enhance PED and PAT performance, a tracking algorithm is added to the previous visual surveillance software scheme. By implementing the proposed design on a real wireless high-resolution image transmission system, the effectiveness of the overall system is demonstrated with a transmission performance of 4.0 Gbps.
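The abstract does not spell out its detection algorithm, so the sketch below shows one commonly used approach to abandoned-object detection, dual-rate background subtraction with OpenCV, purely as an illustration of the task; the video source, thresholds, and model parameters are hypothetical and not taken from the paper.

```python
# Common textbook approach to abandoned-object detection (not necessarily the
# paper's method): compare a fast-adapting and a slow-adapting background
# model, and flag regions the fast model has absorbed (static) but the slow
# model still marks as foreground.
import cv2

slow_bg = cv2.createBackgroundSubtractorMOG2(history=3000, detectShadows=False)
fast_bg = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    slow_mask = slow_bg.apply(frame)
    fast_mask = fast_bg.apply(frame)
    # Foreground in the slow model but background in the fast model
    # => the object has stopped moving (candidate abandoned object).
    static_mask = cv2.bitwise_and(slow_mask, cv2.bitwise_not(fast_mask))
    static_mask = cv2.medianBlur(static_mask, 5)
    contours, _ = cv2.findContours(static_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:          # ignore small blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("abandoned-object candidates", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```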

A Study on Visual Emotion Classification using Balanced Data Augmentation (균형 잡힌 데이터 증강 기반 영상 감정 분류에 관한 연구)

  • Jeong, Chi Yoon;Kim, Mooseop
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.7
    • /
    • pp.880-889
    • /
    • 2021
  • Recognizing people's emotions from visual information is essential in everyday life, and visual emotion recognition is a popular research domain in computer vision. Visual emotion datasets suffer from severe class imbalance, with most of the data concentrated in a few categories. Existing methods do not consider class imbalance and use accuracy as the performance metric, which is not suitable for evaluating performance on an imbalanced dataset. Therefore, we propose a method for recognizing visual emotion that uses balanced data augmentation to address the class imbalance. The proposed method generates a balanced dataset by adopting random over-sampling and image transformation methods. It also uses the focal loss as the loss function, which mitigates class imbalance by down-weighting well-classified samples. EfficientNet, a state-of-the-art image classification model, is used to recognize visual emotion. We compare the performance of the proposed method with that of conventional methods on a public dataset. The experimental results show that the proposed method increases the F1 score by 40% compared with the method without data augmentation, mitigating class imbalance without loss of classification accuracy.
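The two ingredients named in this abstract, random over-sampling to balance the classes and a focal loss that down-weights well-classified samples, are sketched below in PyTorch; the dataset, batch size, and gamma value are placeholder assumptions, not the paper's configuration.

```python
# Sketch of class-balanced sampling via random over-sampling and a focal loss
# that down-weights well-classified examples. Dataset and parameters are
# placeholders, not the paper's setup.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, WeightedRandomSampler

def focal_loss(logits, targets, gamma=2.0):
    """Cross-entropy scaled by (1 - p_t)^gamma, so easy samples contribute less."""
    log_probs = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_probs, targets, reduction="none")
    p_t = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    return ((1.0 - p_t) ** gamma * ce).mean()

def balanced_loader(dataset, labels, batch_size=32):
    """Over-sample minority classes so every class is drawn roughly equally often."""
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels).float()
    sample_weights = 1.0 / class_counts[labels]
    sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```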

The Effects of Driving Rehabilitation Functional Training on Visual Perception and Driving Reaction Velocity (운전시뮬레이터 훈련이 시 지각 및 운전 반응 속도에 미치는 효과)

  • Lee, Jungsook;Kim, Sungwon
    • Journal of The Korean Society of Integrative Medicine
    • /
    • v.5 no.4
    • /
    • pp.77-81
    • /
    • 2017
  • Purpose : This study examined the effects of driving rehabilitation functional training on visual perception ability and driving reaction velocity. Subjects were given the MVPT-3 test to assess visual perceptual function before and after four weeks of driving rehabilitation functional training, and the TMT tests to assess driving reaction velocity. Methods : Using a driving simulator, driving rehabilitation functional training was performed with men and women in their 20s, 20 minutes per session, twice per week, for one month. Results : For visual perception, the MVPT-3 raw score increased very significantly (p<.01), and the standard score also increased very significantly (p<.01). For reaction velocity, the TMT type-A time decreased very significantly (p<.01), and the TMT type-B time also decreased very significantly (p<.01). Conclusion : Driving rehabilitation functional training was found to be effective for both visual perception and reaction velocity. Consequently, driving rehabilitation functional training can be applied in clinics as a training method for the functional recovery and improvement of patients' visual perceptual function and driving reaction velocity. Various functional programs should be studied in the future.

The Effect of Background Grey Levels on the Visual Perception of Displayed Image on CRT Monitor (CRT 모니터의 배경계조도가 영상의 시각인식에 미치는 영향)

  • 김종효;박광석
    • Journal of Biomedical Engineering Research
    • /
    • v.14 no.1
    • /
    • pp.63-72
    • /
    • 1993
  • In this paper, the effect of background grey levels on the visual perception of a target image displayed on a CRT monitor has been investigated. The purpose of this study is to investigate the efficacy of the CRT monitor as a display medium for image information, especially in the medical imaging field. Three sets of experiments were performed: the first measured the luminance response of the CRT monitor and found the best-fitting equation; the second was a psychophysical experiment measuring the threshold grey-level difference between the target image and the background required for visual discrimination at various background grey levels; and the third developed a visual model that can predict the threshold grey-level differences measured in the psychophysical experiment. The results of the psychophysical experiment show that visual perception performance is significantly degraded in the range of grey levels below 50, which turns out to be due to the small luminance change of the CRT monitor in this range while the human eye is adapted to the relatively bright ambient illumination. A simulation study using the developed visual model also shows that, under general illumination conditions, the dominant factor degrading visual performance is the light reflected from the monitor surface by ambient light.
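The first experiment mentioned in this abstract, measuring the CRT luminance response and finding the best-fitting equation, is typically modeled with a gamma power law. The sketch below fits such a curve with SciPy; the measured luminance values are hypothetical and the power-law form is an assumption, not necessarily the equation the authors selected.

```python
# Fitting a gamma power law to a (hypothetical) CRT luminance response;
# only the fitting procedure is illustrated, the data are made up.
import numpy as np
from scipy.optimize import curve_fit

def gamma_model(grey, L_max, gamma, L_0):
    """Luminance as a power function of normalized grey level plus a black level."""
    return L_max * (grey / 255.0) ** gamma + L_0

grey_levels = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255])
luminance = np.array([0.5, 1.2, 3.8, 8.9, 17.0, 28.5, 44.0, 63.5, 88.0])  # cd/m^2

params, _ = curve_fit(gamma_model, grey_levels, luminance, p0=[90.0, 2.2, 0.5])
L_max, gamma, L_0 = params
print(f"fitted: L = {L_max:.1f} * (g/255)^{gamma:.2f} + {L_0:.2f} cd/m^2")
```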
