• Title/Abstract/Keyword: Visual Object

1,233 search results (processing time: 0.073 s)

로봇의 시각시스템을 위한 물체의 거리 및 크기측정 알고리즘 개발 (Development of a Robot's Visual System for Measuring Distance and Width of Object Algorism)

  • 김회인;김갑순
    • 제어로봇시스템학회논문지 / Vol. 17, No. 2 / pp.88-92 / 2011
  • This paper describes the development of a visual system for robots and of an image-processing algorithm that measures the size of an object and the distance from the robot to the object. Robots are commonly equipped with a single-camera visual system for these measurements, but such systems cannot measure size and distance accurately when the location of the system changes or when the object is not on the ground. We therefore developed a visual system that uses two cameras and a two-degree-of-freedom robot mechanism, developed the corresponding image-processing algorithm for measuring object size and robot-to-object distance, and finally carried out characterization tests of the developed system. The results indicate that the developed system can accurately measure the size of an object and the distance to it.
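The abstract does not give the measurement equations, but for a rectified two-camera setup the distance and width typically follow from similar triangles. A minimal sketch, with hypothetical focal length and baseline values:

```python
def stereo_distance_and_width(x_left, x_right, width_px, f_px, baseline_m):
    """Distance and physical width of an object seen in a rectified
    stereo pair (all parameter values here are hypothetical)."""
    disparity = x_left - x_right          # horizontal shift between views (px)
    z = f_px * baseline_m / disparity     # depth by similar triangles (m)
    width_m = width_px * z / f_px         # back-project the pixel width (m)
    return z, width_m

# f = 800 px, baseline = 0.1 m, disparity = 40 px  ->  z = 2.0 m, width = 0.25 m
z, w = stereo_distance_and_width(420.0, 380.0, 100.0, 800.0, 0.1)
```

The two-degree-of-freedom mechanism described in the paper would additionally correct for the cameras' changing pose, which this sketch omits.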

Local and Global Information Exchange for Enhancing Object Detection and Tracking

  • Lee, Jin-Seok;Cho, Shung-Han;Oh, Seong-Jun;Hong, Sang-Jin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 6, No. 5 / pp.1400-1420 / 2012
  • Object detection and tracking with visual sensors is a critical component of surveillance systems and presents many challenges. This paper addresses enhancing object detection and tracking by combining multiple visual sensors. The proposed enhancement compensates for missed detections based on the partial detections made by multiple visual sensors: when one or more visual sensors detect an object, the detected local positions are transformed into a global object position, and this exchange of local and global information allows a missed local object position to be recovered. However, the exchange may also degrade detection and tracking performance by incorrectly recovering a local object position, an error that is then propagated by false object detections. Furthermore, local object positions corresponding to an identical object can be transformed into nonequivalent global object positions because of detection uncertainty caused by shadows or other artifacts. We improve performance by preventing the propagation of false object detections, and we also present an evaluation method for the final global object position. The proposed method is analyzed and evaluated using case studies.
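The paper's exact transformation and recovery rules are not given in the abstract; as an illustration, a local detection can be mapped into the global frame using the sensor's pose, and a missed detection recovered from the other sensors' reports. All poses and positions below are made-up values:

```python
import numpy as np

def local_to_global(p_local, sensor_pose):
    """Transform a detection from a sensor's local frame to the global
    frame, given the sensor's (x, y, heading) pose."""
    x, y, theta = sensor_pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])          # 2-D rotation by the heading
    return R @ np.asarray(p_local) + np.array([x, y])

def recover_missed_detection(global_positions):
    """Recover a locally missed object as the mean of the global
    positions reported by the sensors that did detect it."""
    return np.mean(global_positions, axis=0)

g1 = local_to_global([1.0, 0.0], (0.0, 0.0, 0.0))        # sensor at the origin
g2 = local_to_global([0.0, -1.0], (0.0, 1.0, np.pi / 2)) # rotated, offset sensor
est = recover_missed_detection([g1, g2])
```

The detection-uncertainty problem the abstract mentions shows up here as `g1` and `g2` disagreeing even though they observe the same object.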

Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 1 / pp.364-380 / 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance: low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency-map weighting is proposed. First, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is effectively supervised, inspired by random-forest ideas, to reduce the randomness of E2LSH. Second, the graph-based visual saliency (GBVS) algorithm is applied to detect the saliency map of each image and to weight the visual words according to the saliency prior. Finally, a saliency-map-weighted visual language model is used to accomplish object classification. Experimental results on the Pascal 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that our method is superior to state-of-the-art object classification methods.
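E2LSH itself is standard: each hash function projects a descriptor onto a random direction and quantizes the result into buckets, so nearby descriptors tend to collide. A minimal sketch (the weakly supervised hash-function selection described above is omitted):

```python
import numpy as np

def e2lsh_hash(v, a, b, w):
    """One E2LSH hash: h(v) = floor((a . v + b) / w), with a drawn from
    N(0, I), b uniform on [0, w), and w the quantization width."""
    return int(np.floor((np.dot(a, v) + b) / w))

rng = np.random.default_rng(0)
dim, w = 128, 4.0                        # SIFT descriptors are 128-D
a = rng.standard_normal(dim)             # random projection direction
b = rng.uniform(0.0, w)                  # random offset
sift = rng.standard_normal(dim)          # stand-in for a real SIFT feature
bucket = e2lsh_hash(sift, a, b, w)       # descriptors sharing a bucket
                                         # form one visual-word cluster
```

In practice many such hash functions are concatenated per table; the paper's contribution is choosing among candidate functions rather than taking them fully at random.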

지능형 이동 로봇에서 강인 물체 인식을 위한 영상 문맥 정보 활용 기법 (Utilization of Visual Context for Robust Object Recognition in Intelligent Mobile Robots)

  • 김성호;김준식;권인소
    • 로봇학회논문지 / Vol. 1, No. 1 / pp.36-45 / 2006
  • In this paper, we introduce visual contexts, in terms of their types and utilization methods, for robust object recognition by intelligent mobile robots. Visual object recognition is one of the core technologies for intelligent robots, and robust techniques are strongly required because there are many sources of visual variation, such as geometric and photometric changes and noise. To meet these requirements, we define spatial context, hierarchical context, and temporal context; the appropriate visual contexts can be selected according to the object recognition domain. We also propose a unified framework that can utilize all of these contexts and validate it in a real working environment. Finally, we discuss future research directions for object recognition technologies in intelligent robots.

위치기반 비주얼 서보잉을 위한 견실한 위치 추적 및 양팔 로봇의 조작작업에의 응용 (Robust Position Tracking for Position-Based Visual Servoing and Its Application to Dual-Arm Task)

  • 김찬오;최성;정주노;양광웅;김홍석
    • 로봇학회논문지 / Vol. 2, No. 2 / pp.129-136 / 2007
  • This paper introduces a position-based robust visual servoing method developed for the operation of a human-like robot with two arms. The proposed method uses the SIFT algorithm for object detection and the CAMSHIFT algorithm for object tracking. While conventional CAMSHIFT has been used mainly for object tracking in a 2D image plane, we extend it to object tracking in 3D space by combining the CAMSHIFT results from the two image planes of a stereo camera. This approach yields robust and dependable results. Once the robot's task is defined based on the extracted 3D information, the robot is commanded to carry it out. We conduct several position-based visual servoing tasks and compare performance under different conditions. The results show that the proposed visual tracking algorithm is simple but very effective for position-based visual servoing.
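For a rectified stereo camera, combining the per-image-plane CAMSHIFT results reduces to triangulating the two tracked centroids. A sketch with hypothetical intrinsics (focal length, principal point, baseline):

```python
def centroids_to_3d(c_left, c_right, f_px, baseline_m, cx, cy):
    """Triangulate the CAMSHIFT centroids tracked in the left and right
    image planes of a rectified stereo camera into one 3-D point
    (camera intrinsics below are hypothetical)."""
    d = c_left[0] - c_right[0]            # horizontal disparity (px)
    z = f_px * baseline_m / d             # depth (m)
    x = (c_left[0] - cx) * z / f_px       # lateral offset from optical axis (m)
    y = (c_left[1] - cy) * z / f_px       # vertical offset (m)
    return x, y, z

# centroid on the optical axis, 32 px disparity  ->  (0, 0, 2.4 m)
x, y, z = centroids_to_3d((320.0, 240.0), (288.0, 240.0), 640.0, 0.12, 320.0, 240.0)
```

The resulting 3-D point is what the dual-arm task definition in the paper would consume.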

Bag of Visual Words Method based on PLSA and Chi-Square Model for Object Category

  • Zhao, Yongwei;Peng, Tianqiang;Li, Bicheng;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 7 / pp.2633-2648 / 2015
  • The problems of visual-word synonymy and ambiguity always exist in conventional bag-of-visual-words (BoVW) object categorization methods. In addition, noisy visual words, the so-called "visual stop-words", degrade the semantic resolution of the visual dictionary. In view of this, a novel bag-of-visual-words method based on PLSA and a chi-square model is proposed for object categorization. First, Probabilistic Latent Semantic Analysis (PLSA) is used to analyze the semantic co-occurrence probability of visual words, infer the latent semantic topics in images, and obtain the latent topic distributions induced by the words. Second, KL divergence is adopted to measure the semantic distance between visual words, which yields semantically related homoionyms. An adaptive soft-assignment strategy is then combined with this to realize a soft mapping between SIFT features and the homoionyms. Finally, the chi-square model is introduced to eliminate the "visual stop-words" and reconstruct the visual vocabulary histograms, and an SVM (Support Vector Machine) is applied to accomplish object classification. Experimental results indicate that the synonymy and ambiguity problems of visual words are effectively overcome, and that both the visual semantic resolution and the object classification performance are substantially improved compared with traditional methods.
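The KL-divergence distance and the soft-assignment step can be sketched as follows; the soft-assignment weighting here is a generic exponential kernel, not necessarily the paper's exact adaptive strategy:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between the latent-topic distributions of two words."""
    p = np.asarray(p, dtype=float) + eps   # eps guards against log(0)
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def soft_assign(distances, sigma=1.0):
    """Soft-assignment weights over candidate homoionyms from their
    semantic distances (illustrative exponential kernel)."""
    w = np.exp(-np.asarray(distances, dtype=float) / sigma)
    return w / w.sum()                     # weights sum to 1

d = kl_divergence([0.7, 0.2, 0.1], [0.7, 0.2, 0.1])   # identical -> 0.0
weights = soft_assign([0.0, 1.0, 2.0])                # closest word dominates
```

A SIFT feature is then spread over several semantically related words with these weights instead of being hard-assigned to one.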

다중 채널 동적 객체 정보 추정을 통한 특징점 기반 Visual SLAM (A New Feature-Based Visual SLAM Using Multi-Channel Dynamic Object Estimation)

  • 박근형;조형기
    • 대한임베디드공학회논문지 / Vol. 19, No. 1 / pp.65-71 / 2024
  • An indirect visual SLAM takes raw image data and exploits geometric information such as keypoints and line edges. SLAM performance may degrade under various environmental changes, and the main problem is caused by dynamic objects, especially in highly crowded environments. In this paper, we propose a robust feature-based visual SLAM, built on ORB-SLAM, that uses multi-channel dynamic object estimation. An optical-flow algorithm and a deep-learning-based object detection algorithm each estimate a different type of dynamic-object information, and the proposed method combines the two to create multi-channel dynamic masks. In this way, information on both actually moving dynamic objects and potentially dynamic objects can be obtained. Finally, dynamic objects included in the masks are removed in the feature extraction stage. As a result, the proposed method obtains more precise camera poses. Its superiority over conventional ORB-SLAM was verified in experiments on the KITTI odometry dataset.
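A minimal sketch of the multi-channel masking idea: one mask from optical flow (actually moving pixels), one from a detector (potentially dynamic classes such as people or cars), and keypoints inside their union are discarded before matching. The masks and keypoints below are toy values:

```python
import numpy as np

# Channel 1: pixels the optical flow flags as actually moving.
flow_mask = np.zeros((4, 4), dtype=bool)
flow_mask[0, :2] = True

# Channel 2: pixels covered by detections of potentially dynamic classes.
det_mask = np.zeros((4, 4), dtype=bool)
det_mask[:, 0] = True

# Multi-channel dynamic mask: union of the two channels.
dynamic_mask = flow_mask | det_mask

# Keep only keypoints that fall outside the dynamic regions.
keypoints = [(0, 0), (0, 3), (2, 2)]
static_kps = [kp for kp in keypoints if not dynamic_mask[kp]]
```

Only `static_kps` would be passed on to ORB feature matching and pose estimation.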

Small Object Segmentation Based on Visual Saliency in Natural Images

  • Manh, Huynh Trung;Lee, Gueesang
    • Journal of Information Processing Systems / Vol. 9, No. 4 / pp.592-601 / 2013
  • Object segmentation is a challenging task in image processing and computer vision. In this paper, we present a visual-attention-based segmentation method for small interesting objects in natural images. Unlike traditional methods, we first search for the region of interest using our novel saliency-based method, which relies mainly on band-pass filtering to obtain the appropriate frequency band. Second, we apply a Gaussian Mixture Model (GMM) to locate the object region. By incorporating visual attention analysis into object segmentation, our approach narrows the search region, so that accuracy is increased and computational complexity is reduced. The experimental results indicate that the proposed approach is efficient for object segmentation in natural images, especially for small objects, and that it significantly outperforms traditional GMM-based segmentation.
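The band-pass filtering step can be approximated as a difference of two blurs: fine-scale smoothing minus coarse-scale smoothing responds most strongly around small, high-contrast regions. A simplified stand-in (box blurs instead of the paper's actual filter):

```python
import numpy as np

def box_blur(img, k):
    """Naive box blur with a (2k+1) x (2k+1) window via edge padding."""
    padded = np.pad(img, k, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += padded[k + dy : k + dy + img.shape[0],
                          k + dx : k + dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

def bandpass_saliency(img, k_small=1, k_large=3):
    """Difference-of-blurs band-pass response: small regions with
    mid-frequency contrast receive high saliency."""
    return np.abs(box_blur(img, k_small) - box_blur(img, k_large))

img = np.zeros((9, 9))
img[4, 4] = 1.0                       # a tiny bright "object"
sal = bandpass_saliency(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)  # lands near (4, 4)
```

The GMM described in the abstract would then model pixel colors only inside the high-saliency region, which is what narrows the search.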

VOQL : 시각적 객체 질의어 (VOQL : A Visual Object Query Language)

  • 김정희;조완섭;이석균;황규영
    • 전자공학회논문지CI / Vol. 38, No. 5 / pp.1-15 / 2001
  • In the design of visual query languages for object-oriented databases, supporting simple and intuitive expression of complex query conditions has become an important research topic. This paper proposes VOQL (Visual Object Query Language), a visual object-oriented query language. Using a visual representation technique that combines graphs and Venn diagrams, VOQL expresses both the schema of an object-oriented database and its queries in a single unified visual notation, and it also expresses object-oriented features such as quantified path expressions, set operators, and inheritance with simple visual notation. The greatest strength of VOQL compared with existing visual object-oriented query languages is that it offers simple, intuitive syntax and semantics together with excellent query expressiveness.

시각센서를 이용한 움직이는 물체의 추적 및 안정된 파지를 위한 알고리즘의 개발 (An Advanced Visual Tracking and Stable Grasping Algorithm for a Moving Object)

  • 차인혁;손영갑;한창수
    • 한국정밀공학회지 / Vol. 15, No. 6 / pp.175-182 / 1998
  • An advanced visual tracking and stable grasping algorithm for a moving object is proposed. The stable grasping points for a moving 2D polygonal object are obtained through a visual tracking system that uses a Kalman filter and an image prediction technique, which improves accuracy and efficiency over other prediction algorithms for object tracking. During visual tracking, the shape predictors construct a parameterized family of shapes, and the grasp planner finds the grasping points of the unknown object through the geometric properties of this parameterized family. The algorithm thus carries out stable grasping and real-time tracking together.
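The Kalman-filter prediction used for tracking is not detailed in the abstract; a standard constant-velocity predict/update cycle for a tracked centroid might look like this (all noise values are illustrative):

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = np.eye(4) * 1e-3                        # process noise covariance
R = np.eye(2) * 1e-1                        # measurement noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle; returns the new state and covariance."""
    x_pred = F @ x                          # predict where the object will be
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                      # innovation (measurement residual)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x = np.array([0.0, 0.0, 1.0, 0.5])          # start at origin, known velocity
P = np.eye(4)
x, P = kalman_step(x, P, np.array([1.0, 0.5]))  # measurement at t = 1
```

The predicted position `F @ x` is what lets the grasp planner act before the next frame arrives.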