• Title/Summary/Keyword: Mobile Visual Search


Image Retrieval using Multiple Features on Mobile Platform (모바일 플랫폼에서 다중 특징 기반의 이미지 검색)

  • Lee, Yong-Hwan;Cho, Han-Jin;Lee, June-Hwan
    • Journal of Digital Convergence / v.12 no.6 / pp.237-243 / 2014
  • In this paper, we propose a mobile image retrieval system that utilizes the mobile device's sensor information, runs in a variety of environments, and is implemented on the Android platform. The proposed system introduces a new image descriptor that combines visual features with the EXIF attributes of target JPEG images, together with an image matching algorithm optimized for mobile environments. Experiments performed on the Android platform show that the proposed algorithm yields significantly improved results on a large image database.
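The idea of fusing a visual feature with EXIF attributes into one descriptor can be illustrated with a minimal sketch. Below, a coarse colour histogram stands in for the paper's visual feature and is concatenated with a few numeric EXIF fields; the chosen fields, binning, and layout are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a combined visual + EXIF descriptor (assumed layout).
import numpy as np
from PIL import ExifTags, Image

def _num(value):
    """Best-effort numeric conversion for EXIF values (missing -> 0)."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return 0.0

def combined_descriptor(jpeg_path):
    img = Image.open(jpeg_path).convert("RGB")

    # Visual part: 64-bin histogram per RGB channel, L1-normalised.
    hist = np.asarray(img.resize((256, 256)).histogram(), dtype=float)
    hist = hist.reshape(3, 64, 4).sum(axis=2).ravel()
    visual = hist / (hist.sum() + 1e-9)

    # Metadata part: a few EXIF attributes describing the capture context;
    # tags that are absent simply contribute zeros.
    exif = {ExifTags.TAGS.get(k, k): v for k, v in img.getexif().items()}
    meta = np.array([_num(exif.get("FocalLength")),
                     _num(exif.get("ExposureTime")),
                     _num(exif.get("Orientation"))])

    return np.concatenate([visual, meta])
```

Before matching, the two parts would typically be scaled or weighted so that neither dominates the distance computation.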

The Usability Evaluation of Mobile Phone Interfaces Designed for the Elderly (고령자의 휴대전화기 사용성 평가에 관한 연구)

  • Choi, Ji-Ho;Lee, Seong-Il;Cho, Joo-Eun
    • Journal of the Ergonomics Society of Korea / v.30 no.1 / pp.265-273 / 2011
  • Objective: The aim of this study is to investigate the factors influencing the usability of mobile phones specifically designed for elderly users. Identifying the factors that cause usability problems for elderly users of mobile phones can provide the groundwork for changes aimed at enhancing usability and interface design. Background: As society ages, it has become important to understand the communication behaviors and tendencies of the elderly. The digital divide is largely attributable to a poor understanding of elderly users' usage patterns and a lack of consideration of their characteristics in designing the user interfaces of most ICT devices. Method: A total of 30 elderly users over 65 years of age participated in a usability evaluation experiment and performed seven different tasks using a widely adopted model of a universally designed mobile phone. Their performance was compared with that of a contrast group of 10 younger participants in their 20s. Results: The elderly users had difficulty using the mobile phones, particularly with keypad manipulation among the search, understanding, and manipulation subtasks. Conclusion: Older users appeared to have difficulty with all of the search, recognition, and manipulation subtasks. It is suggested that mobile phone designers give careful consideration to visual interfaces for search tasks and to keypads that allow easier control and input for elderly users. Application: The study is expected to provide guidelines for the universal design of mobile phones and their interfaces to enhance usability for elderly users.

Design and Implementation of Mobile Visual Search Services based on Automatic Image Tagging using Convolutional Neural Network (회선신경망을 이용한 이미지 자동 태깅 기반 모바일 비주얼 검색 서비스 설계 및 구현)

  • Jeon, Jin-Hwan;Lee, Sang-Moon
    • Proceedings of the Korean Society of Computer Information Conference / 2017.01a / pp.49-50 / 2017
  • For search on a PC or mobile device, the classical method of typing keywords with a keyboard or touchpad is still widely used. New search techniques based on voice, images, and gestures are emerging, but their results remain somewhat unsatisfactory because of limitations in the associated search engines. Unlike the keyword-input approach of existing portal search, this paper proposes an Image-to-Text mobile visual search service that uses a CNN technique so that, when the user photographs a target object with the camera of a mobile device such as a smartphone, the captured image plays the same role as a search keyword from the user's point of view.
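The Image-to-Text idea can be sketched with a pretrained ImageNet classifier standing in for the paper's CNN: the top-k predicted labels of the captured photo are treated as text keywords. This is an illustrative assumption, not the authors' network or training data.

```python
# Sketch: turn a photo into search keywords with a pretrained classifier.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT          # stand-in CNN
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()                  # resize, crop, normalise

def image_to_keywords(photo_path, k=3):
    """Return the top-k class labels of the photo, to be used as keywords."""
    img = Image.open(photo_path).convert("RGB")
    with torch.no_grad():
        probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)[0]
    top = probs.topk(k)
    return [weights.meta["categories"][i] for i in top.indices.tolist()]
```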

Modified Speeded Up Robust Features(SURF) for Performance Enhancement of Mobile Visual Search System (모바일 시각 검색 시스템의 성능 향상을 위하여 개선된 Speeded Up Robust Features(SURF) 알고리듬)

  • Seo, Jung-Jin;Yoona, Kyoung-Ro
    • Journal of Broadcast Engineering / v.17 no.2 / pp.388-399 / 2012
  • In this paper, we propose enhanced feature extraction and matching methods for a mobile environment based on modified SURF. We propose three methods to reduce the computational complexity in a mobile environment. The first is to reduce the dimensionality of the SURF descriptor; we compare the performance of the standard 64-dimensional SURF with several lower-dimensional variants. The second is to improve performance using the sign of the trace of the Hessian matrix: feature points are considered for matching only if they have the same sign of the trace, and are otherwise considered unmatched. The last is to find the best distance ratio used to determine matching points; we find this ratio through experiments, and it gives relatively high accuracy. Finally, an existing system based on the standard SURF method is compared with our proposed system based on these three methods. We show that the proposed system reduces response time while preserving reasonably good matching accuracy.
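Two of the three ideas above, the sign-of-the-Hessian-trace pre-check and the distance-ratio test, can be sketched as follows. The descriptors and signs are assumed to come from any SURF-like extractor, and the 0.7 ratio is a placeholder rather than the value found in the paper's experiments.

```python
# Sketch: SURF-style matching with a Laplacian-sign filter and ratio test.
import numpy as np

def match_descriptors(desc_q, sign_q, desc_db, sign_db, ratio=0.7):
    """desc_*: (N, d) descriptor arrays; sign_*: (N,) arrays of +/-1."""
    matches = []
    for i, (d, s) in enumerate(zip(desc_q, sign_q)):
        # Compare only points whose Hessian-trace signs agree, which
        # roughly halves the number of distance computations.
        candidates = np.flatnonzero(sign_db == s)
        if candidates.size < 2:
            continue
        dists = np.linalg.norm(desc_db[candidates] - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        # Distance-ratio test: accept only clearly unambiguous matches.
        if best < ratio * second:
            matches.append((i, candidates[order[0]]))
    return matches
```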

Two-Handed Hangul Input Performance Prediction Model for Mobile Phone (모바일 폰에서의 양 손을 이용한 한글 입력 수행도 예측 모델에 대한 연구)

  • Lee, Joo-Woo;Myung, Ro-Hae
    • Journal of the Ergonomics Society of Korea / v.27 no.4 / pp.73-83 / 2008
  • With the rapid expansion of functions in mobile phones, text input methods have become very important for mobile phone users. Previous studies of text input methods focused on Fitts' law, emphasizing expert behavior with one-handed text input. However, it was observed that 97% of Korean mobile phone users input text with two hands. Therefore, this study was designed to develop a prediction model of the two-handed Hangul text entry method for mobile phones that covers novice users as well as experts. Fitts' law was hypothesized to predict experts' movement time (MT), whereas for novices the Hick-Hyman law for visual search time was hypothesized to be added to MT. The results showed that the prediction model fit the empirical data well for both experts and novices, with error rates below 3%. In conclusion, this prediction model of two-handed Hangul text entry including novice users proved to be an effective model of two-handed Hangul text input behavior for both experts and novices.
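The modelling idea can be summarised in a short sketch: experts' keystroke time follows Fitts' law, while novices add a Hick-Hyman visual-search term. The coefficients below are placeholders, not the paper's fitted values.

```python
# Sketch of the MT prediction: Fitts' law plus an optional Hick-Hyman term.
import math

def fitts_mt(distance, width, a=0.1, b=0.2):
    """Fitts' law: MT = a + b * log2(D / W + 1)."""
    return a + b * math.log2(distance / width + 1)

def hick_hyman_rt(n_choices, c=0.05, d=0.15):
    """Hick-Hyman law: RT = c + d * log2(n + 1)."""
    return c + d * math.log2(n_choices + 1)

def predicted_keystroke_time(distance, width, n_keys, novice=False):
    mt = fitts_mt(distance, width)
    if novice:
        mt += hick_hyman_rt(n_keys)   # novices also visually search the keypad
    return mt
```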

Augmented Reality Technology Implementation Utilizing Web 3.0 Information Services in Disaster Response Situations (재난대응 상황에서 웹 3.0 정보서비스를 활용한 증강현실 기술 구현 방안)

  • Park, Jong-Hong;Shin, Younghwan;Kim, Yongkyun;Chung, Jong-Moon
    • Journal of Internet Computing and Services / v.17 no.4 / pp.61-68 / 2016
  • In this paper, an implementation method of augmented reality (AR) technology using Web 3.0 information services in the field of disaster response is proposed. The structure and characteristics of semantic-web-based Web 3.0 are described, and an AR-based mobile visual search (MVS) applied at disaster sites is presented. Based on Web 3.0 and AR MVS, a semantic web ontology-oriented configuration scheme for disaster-related information and a communication scheme for the information provided by the AR technology are proposed. To deliver disaster-related, customized information to the disaster response site quickly and accurately, a method of leveraging Web 3.0 information services in AR technology is presented.
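A toy sketch of the semantic-web side of the proposal is given below: disaster-related facts are stored as RDF triples and retrieved with a SPARQL query whose results could be overlaid by the AR client. The vocabulary (ex:Shelter, ex:locatedIn, ex:capacity) is invented for illustration and is not the ontology used in the paper.

```python
# Sketch: a tiny RDF graph of disaster facts queried with SPARQL (rdflib).
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/disaster#")
g = Graph()
g.add((EX.shelter1, RDF.type, EX.Shelter))
g.add((EX.shelter1, EX.locatedIn, Literal("Sector 7")))
g.add((EX.shelter1, EX.capacity, Literal(120)))

# Find shelters in the sector recognised by the mobile visual search client.
query = """
    PREFIX ex: <http://example.org/disaster#>
    SELECT ?shelter ?capacity WHERE {
        ?shelter a ex:Shelter ;
                 ex:locatedIn "Sector 7" ;
                 ex:capacity ?capacity .
    }
"""
for row in g.query(query):
    print(row.shelter, row.capacity)
```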

A Study on the Effective Preprocessing Methods for Accelerating Point Cloud Registration

  • Chungsu, Jang;Yongmin, Kim;Taehyun, Kim;Sunyong, Choi;Jinwoo, Koh;Seungkeun, Lee
    • Korean Journal of Remote Sensing / v.39 no.1 / pp.111-127 / 2023
  • In visual SLAM and 3D data modeling, the Iterative Closest Point (ICP) method is a fundamental algorithm used in many technical fields. However, it relies on nearest-neighbor search methods that incur high search times. This paper addresses the problem by applying an effective point cloud refinement method and accelerates the point cloud registration process with an indexing scheme based on spatial decomposition. Experimental results show that the proposed point cloud refinement method helps produce better performance.
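The indexing idea can be illustrated with one ICP iteration in which the nearest-neighbour search is accelerated by a k-d tree, a common spatial-decomposition index. The paper's refinement step, outlier handling, and convergence test are omitted; this is not the authors' implementation.

```python
# Sketch: one ICP step with k-d-tree-accelerated nearest-neighbour search.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One rigid alignment step of source (N,3) onto target (M,3)."""
    tree = cKDTree(target)          # spatial decomposition index
    _, idx = tree.query(source)     # nearest target point per source point
    matched = target[idx]

    # Closed-form rigid transform (Kabsch / SVD) between the paired sets.
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t, R, t
```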

Implementation of Mobile Search Services based on Image Deep-learning (이미지 딥러닝 기반의 모바일 검색 서비스 구현)

  • Song, Jeo;Cho, Jung-Hyun;Kwon, Jin-Gwan;Lee, Sang-Moon
    • Proceedings of the Korean Society of Computer Information Conference / 2017.07a / pp.348-349 / 2017
  • Unlike the keyword-input approach of existing portal search, the method proposed in this paper aims to extract and match search keywords corresponding to a captured image, so that when the user photographs a target object with the camera of a mobile device such as a smartphone, the captured image plays the same role as a search keyword from the user's point of view and the extracted keywords can be used as the query terms for the search.
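The described service flow can be sketched as follows: labels extracted from the captured image by a deep-learning model are joined into a text query and sent to an ordinary keyword search backend. The endpoint URL and the get_image_labels() helper are placeholders, not components of the paper.

```python
# Sketch of the search flow: image -> labels -> keyword query -> backend.
import urllib.parse
import urllib.request

SEARCH_ENDPOINT = "https://example.com/search"   # placeholder backend

def search_by_image(photo_path, get_image_labels, k=3):
    # get_image_labels(path, k) is assumed to return a list of tag strings,
    # e.g. the top-k predictions of a CNN classifier.
    keywords = get_image_labels(photo_path, k)
    query = urllib.parse.urlencode({"q": " ".join(keywords)})
    with urllib.request.urlopen(f"{SEARCH_ENDPOINT}?{query}") as resp:
        return resp.read().decode("utf-8")
```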

Implementation of Object Feature Extraction within Image for Object Tracking (객체 추적을 위한 영상 내의 객체 특징점 추출 알고리즘 구현)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.17 no.3 / pp.113-116 / 2018
  • This paper proposes a mobile image search system that uses the sensor information of a smartphone, runs in a variety of environments, and is implemented on the Android platform. The implemented system introduces a new image descriptor that combines a visual feature (CEDD) with the EXIF attributes of target JPEG images, together with an image matching scheme optimized for the mobile platform. Experimental results show that the proposed method achieves significantly improved search results, with a precision of around 80% on a large image database. Considering performance measures such as processing time and precision, we believe the proposed method can also be used in other application fields.
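Ranked retrieval with this kind of combined descriptor can be sketched as below, comparing the visual (e.g. CEDD) and EXIF parts separately and mixing the two distances with a weight. The 0.8 weight and the split index are illustrative assumptions, not the paper's tuned parameters.

```python
# Sketch: ranked search over combined visual + EXIF descriptors.
import numpy as np

def ranked_search(query_desc, db_descs, visual_dim, alpha=0.8, top_k=10):
    """query_desc: (d,), db_descs: (N, d); first visual_dim entries are visual."""
    qv, qm = query_desc[:visual_dim], query_desc[visual_dim:]
    dv = np.linalg.norm(db_descs[:, :visual_dim] - qv, axis=1)
    dm = np.linalg.norm(db_descs[:, visual_dim:] - qm, axis=1)
    score = alpha * dv + (1 - alpha) * dm          # smaller is better
    return np.argsort(score)[:top_k]
```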

Face Recognition and Notification System for Visually Impaired People (시각장애인을 위한 얼굴 인식 및 알림 시스템)

  • Jin, Yongsik;Lee, Minho
    • IEMEK Journal of Embedded Systems and Applications / v.12 no.1 / pp.35-41 / 2017
  • We propose a face recognition and notification system that transforms visual face information into tactile signals in order to help visually impaired people. The proposed system consists of a glasses-type camera, a mobile computer, and an electronic cane. The glasses-type camera captures the frontal view of the user and sends the image to the mobile computer. The mobile computer starts to search for a human face in the image when obstacles are detected by ultrasonic sensors. If a face is detected, the mobile computer identifies it, using AdaBoost as the detector and compressive sensing as the classifier. After identification, the recognized face information is sent to a controller attached to the cane via Bluetooth. The controller generates motor control signals using pulse width modulation (PWM) according to the recognized face label, and the vibration motor produces vibration patterns to inform the visually impaired person of the face recognition result. The experimental results show that the proposed system is helpful for visually impaired people by providing identification of the person in front of them.
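Part of the pipeline can be sketched with an AdaBoost-based Haar-cascade face detector and a lookup from recognised identity to vibration pattern. The compressive-sensing classifier, the Bluetooth link, and the PWM output on the cane controller are stubbed out as placeholders.

```python
# Sketch: detect faces, classify them, and pick a vibration pattern to send.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Hypothetical mapping from recognised identities to vibration patterns
# (on/off durations in milliseconds) for the cane's motor controller.
VIBRATION_PATTERNS = {"person_A": [200, 100, 200], "unknown": [600]}

def process_frame(frame_bgr, classify_face, send_pattern):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # classify_face stands in for the paper's compressive-sensing classifier.
        label = classify_face(gray[y:y + h, x:x + w])
        pattern = VIBRATION_PATTERNS.get(label, VIBRATION_PATTERNS["unknown"])
        send_pattern(pattern)   # e.g. sent over Bluetooth to the cane controller
```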