• Title/Summary/Keyword: mobile vision system


A Study on the Characteristics of Methods for Experiencing Contents and Network Technologies in the Exhibition space applied with Location Based Service - Focus on T.um as the Public Exhibition Center for a Telecommunication Company - (위치기반서비스(LBS) 적용 전시관의 콘텐츠 체험방식과 기술특성에 관한 연구 - 이동통신 기업홍보관 티움(T.um)을 중심으로 -)

  • Yi, Joo-Hyoung
    • Korean Institute of Interior Design Journal
    • /
    • v.19 no.5
    • /
    • pp.173-181
    • /
    • 2010
  • Opened in November 2008 as the public exhibition center of a telecommunications company, T.um is dedicated to presenting the future ubiquitous technologies and business vision of the company, a leader in the domestic mobile communication business, to prospective global clients and business partners. Since the public opening, over 18,000 visitors from 112 nations have toured T.um, and the public media have reported on the ubiquitous museum constantly. For these reasons, T.um is regarded as a successful public exhibition center. The most distinctive quality of the museum is the Location Based Service (LBS) technology established in the initial construction stage. A visitor anywhere in T.um can be detected by digital devices equipped with GPS systems. The LBS system allows visitors, at their own spots, to receive information on the relevant technologies as well as instructions for operating each content item through smartphones connected over wireless networks. This study focuses on analyzing and defining the special qualities of T.um in terms of technology, to provide basic data for subsequent LBS-based exhibition space projects. Special methods of experiencing contents can be designed in the planning stage by utilizing the network system applied to T.um.

Real-Time Face Tracking Algorithm Robust to Illumination Variations (조명 변화에 강인한 실시간 얼굴 추적 알고리즘)

  • Lee, Yong-Beom;You, Bum-Jae;Lee, Seong-Whan;Kim, Kwang-Bae
    • Proceedings of the KIEE Conference
    • /
    • 2000.07d
    • /
    • pp.3037-3040
    • /
    • 2000
  • Real-time object tracking has emerged as an important component in several application areas, including machine vision, surveillance, human-computer interaction, and image-based control, and various algorithms have been developed over the years. In many cases, however, they have shown limited results in uncontrolled situations such as illumination changes or cluttered backgrounds. In this paper, we present a novel, computationally efficient algorithm for tracking a human face robustly under illumination changes and against cluttered backgrounds. Previous algorithms usually define a color model as a 2D membership function in a color space, without consideration of illumination changes. Our new algorithm, in contrast, constructs a 3D color model by analyzing a large number of images acquired under various illumination conditions. The algorithm is applied to a mobile head-eye robot and tested in various uncontrolled environments. It can track a human face at more than 100 frames per second, excluding image acquisition time.
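The abstract does not give the exact construction of the 3D color model, but the idea of a color membership function learned from images taken under many illumination conditions can be sketched as a normalized 3D histogram over sampled face pixels. All function names, the bin count, and the threshold below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def build_color_model(pixels, bins=8):
    """Build a 3D color membership model from face-pixel samples.

    pixels: (N, 3) uint8 RGB samples gathered under varied illumination.
    Returns a (bins, bins, bins) histogram normalized so the peak is 1.
    """
    idx = (pixels // (256 // bins)).astype(int)
    hist = np.zeros((bins, bins, bins))
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return hist / hist.max()

def face_mask(image, model, bins=8, thresh=0.2):
    """Classify each pixel of an (H, W, 3) image as face / non-face
    by looking up its membership value in the 3D model."""
    idx = (image // (256 // bins)).astype(int)
    member = model[idx[..., 0], idx[..., 1], idx[..., 2]]
    return member >= thresh
```

Because the model is a lookup table, per-pixel classification is a single indexing operation, which is consistent with the real-time frame rates the paper reports.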


Non-Marker Based Mobile Augmented Reality Technology Using Image Recognition (이미지 인식을 이용한 비마커 기반 모바일 증강현실 기법 연구)

  • Jo, Hui-Joon;Kim, Dae-Won
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.4
    • /
    • pp.258-266
    • /
    • 2011
  • AR (Augmented Reality) technology is now easily encountered around us, as its applicable areas have spread into various forms and its usage has become generalized and many-sided. Existing camera-vision-based AR has relied on marker-based methods rather than on information from the real world. Marker-based AR limits the applicable areas and the degree to which a user can immerse in the application. In this paper, we propose an AR method in which objects are recognized from real-world data and the related 3-dimensional contents are displayed, using image processing techniques and the embedded camera of a smart mobile device, without any markers. Object recognition is performed by comparing the input against pre-registered reference images. In this process, we tried to minimize the amount of similarity computation to improve working speed, considering the constraints of smart mobile devices. Additionally, the proposed method performs reciprocal interactions through touch events after the 3-dimensional contents are displayed on the screen, so that a user can then retrieve object-related information through a web browser according to the user's choice. With the described system, we analyzed and compared the degree of object recognition, working speed, and recognition error against existing AR technologies. The experimental results, verified in smart mobile environments, show the proposed method to be an appropriate alternative AR technology.
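The abstract does not specify the similarity measure used against the pre-registered reference images. One plausible low-cost sketch, in the spirit of minimizing computation on a mobile device, is to compare coarse block-averaged thumbnails by normalized correlation; the descriptor size and helper names here are assumptions:

```python
import numpy as np

def thumbnail(gray, size=16):
    """Coarse fixed-size thumbnail by block averaging (a cheap descriptor)."""
    h, w = gray.shape
    ys = np.linspace(0, h, size + 1).astype(int)
    xs = np.linspace(0, w, size + 1).astype(int)
    out = np.empty((size, size))
    for i in range(size):
        for j in range(size):
            out[i, j] = gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    return out

def best_match(query, references, size=16):
    """Return the index of the reference most correlated with the query,
    plus all correlation scores."""
    q = thumbnail(query, size).ravel()
    q = (q - q.mean()) / (q.std() + 1e-9)
    scores = []
    for ref in references:
        r = thumbnail(ref, size).ravel()
        r = (r - r.mean()) / (r.std() + 1e-9)
        scores.append(float(np.dot(q, r) / q.size))
    return int(np.argmax(scores)), scores
```

Each comparison costs one dot product over a few hundred values, so a modest library of reference images can be scanned per frame even on weak hardware.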

Indoor Location Positioning System for Image Recognition based LBS (영상인식 기반의 위치기반서비스를 위한 실내위치인식 시스템)

  • Kim, Jong-Bae
    • Journal of Korea Spatial Information System Society
    • /
    • v.10 no.2
    • /
    • pp.49-62
    • /
    • 2008
  • This paper proposes an indoor location positioning system for image-recognition-based LBS. The proposed system is a vision-based positioning system that implements augmented reality by overlaying the positioning results onto the user's view. It uses pattern matching and a location model to recognize the user's location from images taken by a camera on a wearable mobile PC. The user's location is estimated by image sequence matching and marker detection, and then recognized using the pre-defined location model. To detect markers in the image sequences, the system applies an adaptive thresholding method, and by using the location model to recognize a location, it obtains more accurate and efficient results. Experimental results show that the proposed system has both the quality and the performance to serve as an indoor location-based service (LBS) for visitors in various environments.
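The abstract names adaptive thresholding for marker detection without giving details. A common form is a local-mean threshold, sketched here with an integral image for speed; the block size and offset `c` are assumed values, not the paper's parameters:

```python
import numpy as np

def adaptive_threshold(gray, block=15, c=5):
    """Binarize with a local mean threshold over a block x block window.

    A pixel is foreground (True) when it is at least `c` darker than its
    local mean, which keeps dark marker regions under uneven lighting.
    """
    pad = block // 2
    padded = np.pad(gray.astype(float), pad, mode='edge')
    # Integral image: local window sums in O(1) per pixel.
    ii = padded.cumsum(0).cumsum(1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = gray.shape
    s = (ii[block:block + h, block:block + w] - ii[:h, block:block + w]
         - ii[block:block + h, :w] + ii[:h, :w])
    mean = s / (block * block)
    return gray.astype(float) < (mean - c)
```

Unlike a single global threshold, the local mean adapts to illumination gradients across the frame, which matters for a camera worn by a walking user.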


The Obstacle Avoidance Algorithm of Mobile Robot using Line Histogram Intensity (Line Histogram Intensity를 이용한 이동로봇의 장애물 회피 알고리즘)

  • 류한성;최중경;구본민;박무열;방만식
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.8
    • /
    • pp.1365-1373
    • /
    • 2002
  • In this paper, we present two types of vision algorithm for obstacle avoidance by a mobile robot equipped with a CCD camera. It is a simple algorithm that compares grey levels in the input images. The robot moves according to image processing results and commands from a host PC. The self-controlled mobile robot system consists of a digital signal processor, step motors, an RF module, and a CCD camera; the wireless RF module transmits movement commands between the robot and the host PC. The robot goes straight until it recognizes an obstacle in the input image, which is preprocessed by edge detection, conversion, and thresholding, and it avoids the obstacle once recognized by line histogram intensity. The host PC measures the intensity waveform along vertical lines sampled every 20 pixels, each histogram being taken over the (x, y) pixel values: the first line runs from (0, 0) to (0, 197) and the last from (280, 0) to (280, 197). The algorithm then separates uniform-wave regions from nonuniform-wave regions; the extent of the uniform wave corresponds to the obstacle region. We expect this algorithm to be very useful for obstacle avoidance by mobile robots.
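The line-histogram scheme described, sampling a vertical intensity profile every 20 pixels and labeling low-variance ("uniform wave") lines as the obstacle region, might be sketched as follows; the variance threshold is an assumed parameter:

```python
import numpy as np

def line_profiles(gray, step=20):
    """Vertical intensity profiles sampled every `step` columns."""
    return {x: gray[:, x] for x in range(0, gray.shape[1], step)}

def obstacle_columns(gray, step=20, var_thresh=50.0):
    """Columns whose profile is uniform (low variance), which the paper
    associates with the obstacle region against a textured background."""
    return [x for x, p in line_profiles(gray, step).items()
            if float(np.var(p.astype(float))) < var_thresh]
```

Only a handful of image columns are examined per frame, which keeps the cost low enough for the DSP-class hardware the paper describes.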

A Micro-robotic Platform for Micro/nano Assembly: Development of a Compact Vision-based 3 DOF Absolute Position Sensor (마이크로/나노 핸들링을 위한 마이크로 로보틱 플랫폼: 비전 기반 3자유도 절대위치센서 개발)

  • Lee, Jae-Ha;Breguet, Jean Marc;Clavel, Reymond;Yang, Seung-Han
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.27 no.1
    • /
    • pp.125-133
    • /
    • 2010
  • A versatile micro-robotic platform for micro/nano-scale assembly is in demand in a variety of application areas such as micro-biology and nanotechnology. In the near future, a flexible and compact platform could be used effectively inside a scanning electron microscope chamber. We are developing a platform that consists of miniature mobile robots and a compact positioning stage with multiple degrees of freedom. This paper presents the design and implementation of a low-cost, compact multi-degree-of-freedom position sensor capable of measuring absolute translational and rotational displacement. The proposed sensor is implemented using a CMOS-type image sensor and a target with specific hole patterns. Statistical design of experiments was applied to find an optimal design for the target. Efficient algorithms for image processing and absolute position decoding are discussed. A simple calibration to eliminate the influence of inaccuracy in the fabricated target on measuring performance is also presented. The developed sensor was characterized using a laser interferometer; the sensor system has submicron resolution and an accuracy of ${\pm}4{\mu}m$ over the full travel range. The proposed vision-based sensor is cost-effective and can be used as a compact feedback device in implementing a micro-robotic platform.
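The paper's hole-pattern encoding is not reproduced in this abstract, so the following is an illustrative sketch only: with a target carrying two distinguishable holes, absolute translation and rotation could be decoded from blob centroids. The two-hole scheme and all names are assumptions, not the authors' design:

```python
import numpy as np
from scipy import ndimage

def decode_pose(binary):
    """Estimate (x, y, theta) from a thresholded target image containing
    two marker holes: the larger defines the origin, the smaller the
    orientation axis. Returns pixel coordinates and angle in radians."""
    labels, n = ndimage.label(binary)
    assert n >= 2, "need at least two holes"
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    order = np.argsort(sizes)[::-1] + 1          # labels, largest first
    big = ndimage.center_of_mass(binary, labels, order[0])
    small = ndimage.center_of_mass(binary, labels, order[1])
    theta = np.arctan2(small[0] - big[0], small[1] - big[1])
    return big[1], big[0], theta                 # (x, y, theta)
```

Because centroids average over many pixels, the measurement resolves position below one pixel, which is the usual route to the submicron resolution a vision-based sensor of this kind reports.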

Comparison of LoG and DoG for 3D reconstruction in haptic systems (햅틱스 시스템용 3D 재구성을 위한 LoG 방법과 DoG 방법의 성능 분석)

  • Sung, Mee-Young;Kim, Ki-Kwon
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.6
    • /
    • pp.711-721
    • /
    • 2012
  • The objective of this study is to propose an efficient 3D reconstruction method for developing a stereo-vision-based haptics system that can replace "robotic eyes" and "robotic touch." Haptic rendering of 3D images requires capturing both the depth information and the edge information of stereo images. This paper proposes 3D reconstruction methods that use the LoG (Laplacian of Gaussian) and DoG (Difference of Gaussian) algorithms for edge detection, in addition to a basic 3D depth extraction method, for better haptic rendering. Experiments were performed to evaluate the CPU time and error rates of these methods, and the results lead us to conclude that the DoG method is more efficient for haptic rendering. This paper may contribute to the investigation of effective methods for 3D image reconstruction, such as improving the performance of mobile patrol robots.
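The two edge detectors being compared can be sketched directly; the sigma values and threshold below are illustrative, not the paper's settings:

```python
import numpy as np
from scipy import ndimage

def dog_edges(gray, sigma1=1.0, sigma2=1.6, thresh=2.0):
    """Difference-of-Gaussians edge map: subtract two blurred copies and
    keep pixels where the band-pass response is strong."""
    g1 = ndimage.gaussian_filter(gray.astype(float), sigma1)
    g2 = ndimage.gaussian_filter(gray.astype(float), sigma2)
    return np.abs(g1 - g2) > thresh

def log_edges(gray, sigma=1.0, thresh=2.0):
    """Laplacian-of-Gaussian edge map for comparison."""
    resp = ndimage.gaussian_laplace(gray.astype(float), sigma)
    return np.abs(resp) > thresh
```

DoG approximates LoG but needs only two separable Gaussian blurs and a subtraction, which is one common reason it wins on CPU time, consistent with the paper's conclusion.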

Indoor Localization by Matching of the Types of Vertices (모서리 유형의 정합을 이용한 실내 환경에서의 자기위치검출)

  • Ahn, Hyun-Sik
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.6
    • /
    • pp.65-72
    • /
    • 2009
  • This paper presents a vision-based localization method for indoor mobile robots using the types of vertices extracted from a monocular image. In images captured by the robot's camera, the types of vertices are determined by searching for vertical edges and their branch edges under geometric constraints. To obtain correspondences between the corners of a 2-D map and the vertices in the images, the vertex types and geometric constraints are derived from a geometric analysis. The vertices are matched with the corners by a heuristic method using the types and positions of the vertices and corners. From the matched pairs, nonlinear equations derived from the perspective and rigid transformations are formed, and the pose of the robot is computed by solving these equations with a least-squares optimization technique. Experimental results show that the proposed localization method is effective and applicable to indoor environments.
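The least-squares pose computation from matched corner/vertex pairs is not spelled out in the abstract. A simplified 2-D version that ignores the perspective part is the closed-form rigid alignment (Kabsch/Procrustes) below, offered as an assumption rather than the authors' exact formulation:

```python
import numpy as np

def rigid_pose(map_pts, obs_pts):
    """Least-squares 2D rotation + translation aligning observed corner
    positions to their matched map corners, so map = R @ obs + t."""
    P = np.asarray(obs_pts, float)
    Q = np.asarray(map_pts, float)
    pc, qc = P.mean(0), Q.mean(0)
    H = (P - pc).T @ (Q - qc)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T   # det fix avoids reflections
    t = qc - R @ pc
    theta = np.arctan2(R[1, 0], R[0, 0])
    return R, t, theta
```

A closed-form fit like this is also a good initial guess for the iterative nonlinear least-squares solve the paper actually performs over the perspective equations.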

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.53-69
    • /
    • 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that their performance deteriorates by a large margin in real-world situations due to environmental and user variances. Human users need to be very cooperative to obtain reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are now widely available in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most earlier studies dealt with sets of 8~10 simple, easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition.
To improve discriminative power over the complex alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that classifiers based on trajectories performed 3%~5% better than those using raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing filters and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly among performers. To tackle this problem, online incremental learning is applied to make our system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed that as the number of reference patterns grows, some reference patterns contribute more to false positive classification. We therefore devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern; it is performed periodically to remove reference patterns with a very low positive contribution or a high negative contribution. Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30~50. Each letter was performed 5 times per participant using a Nintendo Wii remote, with the acceleration signal sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall rates and very high pairwise confusion rates; the major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%).
Though W was recalled perfectly, it contributed much to the false positive classification of N. By comparison with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services, implemented on various robot platforms and mobile devices including the iPhone. The participating children exhibited improved concentration and active reaction to the service with our gesture interface. To prove its effectiveness, the children took a test after experiencing an English teaching service; those who played with the gesture-interface-based robot content scored 10% better than those given conventional teaching. We conclude that the accelerometer-based gesture interface is a promising technology for flourishing real-world robot-based services and content, complementing the limits of today's conventional interfaces, e.g., touch screens, vision, and voice.
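The trajectory feature extraction described (smoothing, then deriving a motion trajectory from the acceleration signal) could be sketched as below. The filter width, bias handling, and double integration are assumptions, since the paper's exact filter bank is not reproduced in this abstract:

```python
import numpy as np

def moving_average(signal, k=5):
    """Simple smoothing filter applied independently per axis."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode='same'), 0, signal)

def trajectory(accel, dt=0.01):
    """Turn 3-axis acceleration sampled at 100 Hz into a motion
    trajectory: smooth, remove the mean as a crude bias/gravity term,
    then double-integrate. The trajectory is the classification feature."""
    a = moving_average(np.asarray(accel, float))
    a = a - a.mean(0)
    vel = np.cumsum(a, 0) * dt
    pos = np.cumsum(vel, 0) * dt
    return pos
```

In an instance-based learner, each stored reference pattern would be such a trajectory, and classification would compare a new trajectory against the (periodically pruned) reference set.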

Analysis of the application of image quality assessment method for mobile tunnel scanning system (이동식 터널 스캐닝 시스템의 이미지 품질 평가 기법의 적용성 분석)

  • Chulhee Lee;Dongku Kim;Donggyou Kim
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.26 no.4
    • /
    • pp.365-384
    • /
    • 2024
  • The development of scanning technology is accelerating toward inspection that is safer and more efficient than human-based inspection, and research on automatically detecting facility damage from images collected with computer vision technology is also increasing. The pixel size, quality, and quantity of an image can affect the performance of deep learning or image processing for automatic damage detection. This study is basic research toward acquiring high-quality raw image data and adequate camera performance in a mobile tunnel scanning system for deep-learning-based automatic damage detection, and it proposes a method to quantitatively evaluate image quality. A test chart was attached to a panel device capable of simulating a moving speed of 40 km/h, and an indoor test was performed using the international standard ISO 12233 method. Existing image quality evaluation methods were applied to the images obtained in the indoor experiments. The shutter speed of the camera was found to be closely related to the motion blur that occurs in the images. The modulation transfer function (MTF), one of the image quality evaluation methods, can evaluate image quality objectively and was judged to be consistent with visual observation.
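A minimal sketch of the MTF computation behind slanted-edge style evaluation (differentiate the edge spread function into a line spread function, then take the normalized FFT magnitude), assuming a 1-D ESF has already been extracted from the chart image; the Hann window and the MTF50 helper are illustrative additions, not prescribed by the abstract:

```python
import numpy as np

def mtf_from_esf(esf):
    """MTF from a 1-D edge spread function: ESF -> LSF by differentiation,
    then normalized FFT magnitude (the core of the ISO 12233 approach)."""
    lsf = np.gradient(np.asarray(esf, float))
    lsf = lsf * np.hanning(lsf.size)      # window to reduce leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

def mtf50(mtf):
    """Index of the lowest frequency bin where the MTF falls below 0.5,
    or None if it never does."""
    below = np.where(mtf < 0.5)[0]
    return int(below[0]) if below.size else None
```

Motion blur from a slow shutter widens the LSF, which pulls the MTF50 point toward lower frequencies, giving the objective, observer-consistent measure the study reports.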