• Title/Summary/Keyword: Mobile camera


User Behavior Model Based on Shooting Photograph Interaction for Funology ; Focused on 'PhoDoSee' Kiosk (퍼놀로지를 위한 사진 촬영 인터랙션 기반에서의 사용자 행태 모델 ; '포도씨' 키오스크를 중심으로)

  • Kim, Hanjae;Kwon, Jieun
    • Cartoon and Animation Studies / s.36 / pp.643-667 / 2014
  • Recently, taking photographs has become highly popular among the general public and is supported by various media such as digital cameras, mobile phones, and kiosks. From an emotional point of view, users prefer Funology, which combines fun with hardware technology. Taking photographs attracts user participation and amplifies the effect of a design. The goal of this study is to classify user actions at an electronic kiosk with a digital photography function from the perspective of Funology and to build a user behavior model; from this, a user group model is defined and interaction design guidelines for photo shooting are proposed. First, based on a literature review, the concepts of Funology and user interaction with photo taking are classified into three types. Second, the "Phodosee" kiosk is examined against the Funology design elements categorized beforehand. Then, users' behaviors while interacting with the "Phodosee" kiosk are observed and analyzed using video ethnography from a Funology perspective. Finally, four persona models are suggested based on the observed behaviors: 1) avoiding being photographed, 2) attempting to take a photograph, 3) participating in photo shooting, and 4) leading others to take photographs. In summary, the effects and limitations of Funology design elements in digital photography are discussed, and guidelines are suggested to improve user experience design.

Change Attention-based Vehicle Scratch Detection System (변화 주목 기반 차량 흠집 탐지 시스템)

  • Lee, EunSeong;Lee, DongJun;Park, GunHee;Lee, Woo-Ju;Sim, Donggyu;Oh, Seoung-Jun
    • Journal of Broadcast Engineering / v.27 no.2 / pp.228-239 / 2022
  • In this paper, we propose an unmanned deep learning model for detecting vehicle scratches in car sharing services. Conventional scratch detection consists of two steps: 1) a deep learning module that detects scratches in images taken before and after a rental, and 2) a manual matching process for finding newly generated scratches. To build a fully automatic pipeline, we propose a one-step unmanned scratch detection deep learning model, implemented by applying transfer learning and fine-tuning to a deep learning model that detects changes in satellite images. In the targeted car sharing service, specular reflection greatly affects scratch detection performance, since the brightness of the gloss-treated automobile surface is anisotropic and non-expert users take pictures with ordinary cameras. To reduce detection errors caused by specularly reflected light, we propose a preprocessing step that removes specular reflection components. For data taken with mobile phone cameras, the proposed system provides high matching performance both subjectively and objectively: the change detection metrics precision, recall, F1, and kappa are 67.90%, 74.56%, 71.08%, and 70.18%, respectively.
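
The abstract reports pixel-level change detection scores (precision, recall, F1, and kappa). A minimal sketch of how such metrics are typically computed from a predicted scratch mask and a ground-truth mask is shown below; the function name and mask layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def change_detection_metrics(pred_mask: np.ndarray, gt_mask: np.ndarray) -> dict:
    """Compute precision, recall, F1, and Cohen's kappa for binary change masks.

    pred_mask, gt_mask: boolean arrays of the same shape, True where a pixel
    is marked as a (newly generated) scratch.
    """
    pred = pred_mask.astype(bool).ravel()
    gt = gt_mask.astype(bool).ravel()

    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    n = tp + fp + fn + tn

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    # Cohen's kappa: observed agreement corrected for chance agreement.
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (po - pe) / (1 - pe) if pe < 1 else 0.0

    return {"precision": precision, "recall": recall, "f1": f1, "kappa": kappa}
```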

Research and improvement of image analysis and bar code and QR recognition technology for the development of visually impaired applications (시각장애인 애플리케이션 개발을 위한 이미지 분석과 바코드, QR 인식 기술의 연구 및 개선)

  • MinSeok Cho;MinKi Yoon;MinSu Seo;YoungHoon Hwang;Hyun Woo;WonWhoi Huh
    • The Journal of the Convergence on Culture Technology / v.9 no.6 / pp.861-866 / 2023
  • Individuals with visual impairments face difficulties in accessing accurate information about medical services and medications, making it challenging for them to ensure proper medication intake. While there are healthcare laws addressing this issue, there is a lack of standardized solutions, and not all over-the-counter medications are covered. Therefore, we have designed a mobile application that uses image recognition, barcode scanning, and QR code recognition to provide guidance on how to take over-the-counter medications, filling the existing gaps for visually impaired users. Currently available applications for individuals with visual impairments allow them to access information about medications, but they still require the user to remember which specific medication they are taking, which poses a significant challenge. In this research, we optimize the camera capture environment and the user interface (UI) and user experience (UX) screens for image recognition, ensuring greater accessibility and convenience for visually impaired individuals. By incorporating these findings into the application, we aim to assist visually impaired individuals in learning the correct methods for taking over-the-counter medications.
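
As a point of reference for the barcode and QR recognition step described above, the sketch below shows one common way to decode both symbologies from a single captured frame, assuming OpenCV and the pyzbar wrapper around the ZBar library. It is an illustrative sketch, not the authors' implementation; file names and the function name are placeholders.

```python
import cv2
from pyzbar.pyzbar import decode  # ZBar-based 1-D barcode decoding

def read_codes(image_path: str) -> list:
    """Return decoded QR/barcode payloads found in one camera frame."""
    img = cv2.imread(image_path)
    if img is None:
        return []

    results = []

    # QR codes via OpenCV's built-in detector.
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(img)
    if data:
        results.append(data)

    # 1-D barcodes (e.g. EAN-13 on medication boxes) via ZBar.
    for symbol in decode(img):
        results.append(symbol.data.decode("utf-8"))

    return results
```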

Precision Evaluation of Expressway Incident Detection Based on Dash Cam (차량 내 영상 센서 기반 고속도로 돌발상황 검지 정밀도 평가)

  • Sanggi Nam;Younshik Chung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.6 / pp.114-123 / 2023
  • With the development of computer vision technology, video sensors such as CCTV are increasingly used to detect incidents. However, most incidents are currently detected with fixed imaging equipment, so detection is limited in shaded areas beyond the coverage of that equipment. With the recent development of edge-computing technology, real-time analysis of mobile image information has become possible. The purpose of this study is to evaluate the feasibility of detecting expressway incidents by applying computer vision technology to dash cams. To this end, annotation data were constructed from 4,388 dash cam still frames collected by the Korea Expressway Corporation and analyzed using the YOLO algorithm. The prediction accuracy for all objects exceeded 70%, and the precision for traffic accidents was about 85%. The mAP (mean Average Precision) was 0.769; per-object AP (Average Precision) was highest for traffic accidents at 0.904 and lowest for debris at 0.629.
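
The study evaluates a YOLO detector on dash cam still frames. A minimal inference sketch using the ultralytics package is shown below; the weights file name and the incident class labels are assumptions, since the paper's trained model is not public.

```python
from ultralytics import YOLO

# Assumed: a YOLO model fine-tuned on dash cam incident classes
# (e.g. traffic accident, debris). "incident_yolo.pt" is a placeholder name.
model = YOLO("incident_yolo.pt")

# Run inference on a single dash cam still frame.
results = model("dashcam_frame.jpg", conf=0.25)

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        print(f"{cls_name}: confidence {float(box.conf):.2f}, "
              f"bbox {box.xyxy.tolist()}")
```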

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision and voice-based technologies are commonly utilized for human-robot interaction. But it is widely recognized that the performance of vision and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variances. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction for real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying acceleration signal patterns of the 26 English alphabet letters, which is one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing acceleration signal patterns of 10 handwritten digits; most previous studies dealt with pattern sets of 8~10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To promote discriminative power over complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that classifiers based on trajectories performed 3%~5% better than those using raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly among performers. To tackle this problem, online incremental learning is applied so that the system adapts to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter, we observed that as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern. The algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30~50. Each alphabet was performed 5 times per participant using a Nintendo® Wii™ remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all alphabets was 95.48%. Some alphabets recorded very low recall rates and exhibited very high pairwise confusion rates. Major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices including the iPhone™. The participating children showed improved concentration and reacted actively to the service with our gesture interface. To prove the effectiveness of the gesture interface, the children took a test after experiencing an English teaching service. Those who played with the gesture interface-based robot content scored 10% better than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for enriching real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g., touch screen, vision, and voice.
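
The pruning idea described in the abstract, periodically dropping reference patterns with very low positive or high negative contribution, can be sketched as follows. The data layout, contribution counters, and thresholds are assumptions for illustration; the paper's actual contribution measure is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class ReferencePattern:
    label: str            # gesture class, e.g. the letter "A"
    trajectory: list      # feature vector (motion trajectory)
    positive_hits: int = 0  # times it supported a correct classification
    negative_hits: int = 0  # times it supported a wrong classification

def prune_reference_set(patterns, min_positive=1, max_negative=3):
    """Periodic pruning of an instance-based learner's memory.

    Keeps only reference patterns whose recorded contribution is useful:
    drops those that almost never support correct classifications or that
    frequently cause false positives. Thresholds are illustrative only.
    """
    kept = []
    for p in patterns:
        if p.negative_hits >= max_negative and p.negative_hits > p.positive_hits:
            continue  # harmful: mostly causes confusions
        if p.positive_hits < min_positive and p.negative_hits > 0:
            continue  # useless: never helps, sometimes hurts
        kept.append(p)
    return kept
```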