• Title/Summary/Keyword: Visual Object

Search Results: 1,237

A Study on Visual Feedback Control of a Dual Arm Robot with Eight Joints

  • Lee, Woo-Song;Kim, Hong-Rae;Kim, Young-Tae;Jung, Dong-Yean;Han, Sung-Hyun
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.610-615
    • /
    • 2005
  • Visual servoing is the fusion of results from many elemental areas, including high-speed image processing, kinematics, dynamics, control theory, and real-time computing. It has much in common with research into active vision and structure from motion, but is quite different from the often-described use of vision in hierarchical task-level robot control systems. In this paper we present a new approach to visual feedback control using image-based visual servoing with stereo vision. To control the position and orientation of a robot with respect to an object, a new technique is proposed using binocular stereo vision. The stereo vision enables us to calculate an exact image Jacobian not only around a desired location but also at other locations. The suggested technique can guide a robot manipulator to the desired location without such a priori knowledge as the relative distance to the desired location or a model of the object, even if the initial positioning error is large. This paper describes a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results and compared with a conventional method on a dual-arm robot made by Samsung Electronics Co., Ltd.
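The abstract's core mechanism — recovering depth from binocular disparity so an exact image Jacobian can be built anywhere in the workspace — can be sketched generically. This is a standard image-based visual servoing (IBVS) formulation for point features, not the paper's exact controller; the gain value and normalized image coordinates are illustrative assumptions.

```python
import numpy as np

def stereo_depth(disparity, focal_len, baseline):
    """Depth from binocular disparity: Z = f * b / d."""
    return focal_len * baseline / disparity

def point_interaction_matrix(x, y, Z):
    """2x6 image Jacobian (interaction matrix) of a normalized
    image point (x, y) at depth Z with respect to camera motion."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity screw driving the feature error to zero:
    v = -gain * pinv(L) @ (s - s*). Depths come from stereo, so no
    prior knowledge of the target distance is needed."""
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

With the features already at their desired locations the commanded velocity is zero, which is the fixed point the control law converges to.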


An Interactive Multi-View Visual Programming Environment for C++ (C++를 위한 대화식 다중 뷰 시각 프로그래밍 환경)

  • Ryu, Cheon-Yeol;Jeong, Geun-Ho;Yu, Jae-U;Song, Hu-Bong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.2 no.5
    • /
    • pp.746-756
    • /
    • 1995
  • This paper describes an interactive visual programming environment that uses multiple views to visualize class structure and the flow of member-function calls in C++. The research defines new visual symbols for classes and, using these symbols, constructs an interactive visual programming environment with various views. The proposed multi-view environment can visualize the representation of classes and the execution relationships between objects in an object-oriented language, making the structure of an object-oriented program easy to understand. It therefore supports easy program development and is useful for the education and training of beginners.


Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion (멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동)

  • Jeong Hyun Choi;In Cheol Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.9
    • /
    • pp.407-418
    • /
    • 2023
  • Multi-Object Goal Visual Navigation (MultiOn) is a visual navigation task in which an agent must visit multiple object goals in an unknown indoor environment in a given order. Existing models for the MultiOn task suffer from the limitation that they cannot utilize an integrated view of multimodal context because they use only a unimodal context map. To overcome this limitation, in this paper we propose a novel deep neural network-based agent model for the MultiOn task. The proposed model, MCFMO, uses a multimodal context map containing visual appearance features, semantic features of environmental objects, and goal object features. Moreover, the proposed model effectively fuses these three heterogeneous features into a global multimodal context map using a point-wise convolutional neural network module. Lastly, the proposed model adopts an auxiliary task learning module that predicts the observation status, goal direction, and goal distance, which guides efficient learning of the navigation policy. Through various quantitative and qualitative experiments using the Habitat-Matterport3D simulation environment and scene dataset, we demonstrate the superiority of the proposed model.
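The point-wise (1x1) convolutional fusion the abstract describes mixes the channel vector at each map cell with a single shared linear map, turning per-modality channels into one fused context map. A minimal numpy sketch, with toy channel counts and random weights standing in for the learned parameters (MCFMO's real module and dimensions are not given in the abstract):

```python
import numpy as np

def pointwise_conv_fuse(feature_maps, weights, bias=None):
    """Fuse per-modality context channels with a 1x1 (point-wise)
    convolution: every spatial cell's channel vector is transformed
    by the same linear map, then passed through a ReLU.
    feature_maps: (H, W, C_in); weights: (C_in, C_out)."""
    H, W, C_in = feature_maps.shape
    fused = feature_maps.reshape(-1, C_in) @ weights
    if bias is not None:
        fused = fused + bias
    return np.maximum(fused, 0.0).reshape(H, W, -1)

# Toy fusion of visual, semantic, and goal channels on an 8x8 map
rng = np.random.default_rng(0)
maps = rng.standard_normal((8, 8, 3))   # one channel per modality
W = rng.standard_normal((3, 4))         # 3 input channels -> 4 fused
fused = pointwise_conv_fuse(maps, W)    # shape (8, 8, 4)
```

Because a 1x1 convolution has no spatial extent, it fuses modalities cheaply without blurring the map's spatial layout.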

Integrated Object Representations in Visual Working Memory Examined by Change Detection and Recall Task Performance (변화탐지와 회상 과제에 기초한 시각작업기억의 통합적 객체 표상 검증)

  • Inae Lee;Joo-Seok Hyun
    • Korean Journal of Cognitive Science
    • /
    • v.35 no.1
    • /
    • pp.1-21
    • /
    • 2024
  • This study investigates the characteristics of visual working memory (VWM) representations by examining two theoretical models: the integrated-object model and the parallel-independent feature storage model. Experiment I involved a change detection task in which participants memorized arrays of orientation bars, colored squares, or both. In the one-feature condition, the memory array consisted of one feature (either orientations or colors), whereas the two-feature condition included both. We found no difference in change detection performance between the conditions, favoring the integrated-object model over the parallel-independent feature storage model. Experiment II employed a recall task with memory arrays of isosceles triangles' orientations, colored squares, or both, and recall performance was compared between the one-feature and two-feature conditions. We again found no clear difference in recall accuracy between the conditions, but analyses of memory precision and guessing responses favored the weak object model over the strong object model. For the ongoing debate surrounding VWM's representational characteristics, these findings highlight the dominance of the integrated-object model over the parallel-independent feature storage model.
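Change detection performance in such designs is commonly summarized with Cowan's K, a capacity estimate computed from hit and false-alarm rates at a given set size. The abstract does not state which measure the authors used, so this is a sketch of the standard formula only:

```python
def cowan_k(hits, misses, false_alarms, correct_rejections, set_size):
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (hit rate - false-alarm rate), where N is the set size."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return set_size * (hit_rate - fa_rate)

# e.g. 80 hits / 20 misses, 10 false alarms / 90 correct rejections,
# set size 4 -> K = 4 * (0.8 - 0.1) = 2.8 items
```

Comparing K (or accuracy) between one-feature and two-feature conditions is what adjudicates between object-based and feature-based storage accounts.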

Visual Object Tracking Fusing CNN and Color Histogram based Tracker and Depth Estimation for Automatic Immersive Audio Mixing

  • Park, Sung-Jun;Islam, Md. Mahbubul;Baek, Joong-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.3
    • /
    • pp.1121-1141
    • /
    • 2020
  • We propose a robust visual object tracking algorithm that fuses a convolutional neural network tracker, trained offline on a large number of video repositories, with a color histogram based tracker to track objects for mixing immersive audio. Our algorithm addresses the problems of occlusion and large movements in the CNN-based GOTURN generic object tracker. The key idea is the offline training of a binary classifier on the color histogram similarity values estimated by both trackers, which selects the appropriate tracker for the target; both trackers are then updated with the predicted bounding box position of the target to continue tracking. Furthermore, a histogram similarity constraint is applied before updating the trackers to maximize tracking accuracy. Finally, we compute the depth (z) of the target object with a prominent unsupervised monocular depth estimation algorithm to obtain the 3D position needed to mix the immersive audio onto that object. The proposed algorithm demonstrates about 2% higher accuracy than the GOTURN algorithm on the VOT2014 tracking benchmark. Additionally, our tracker can also track multiple objects by running the single-object tracker per target, although no results on an MOT benchmark are reported.
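The histogram similarity constraint that gates tracker updates can be illustrated with a Bhattacharyya coefficient between color histograms — a common choice for this purpose, though the abstract does not name the exact similarity measure or threshold, so both are assumptions here:

```python
import numpy as np

def color_histogram(patch, bins=16):
    """Normalized per-channel color histogram of an RGB patch (H, W, 3)."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def bhattacharyya(h1, h2):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return float(np.sum(np.sqrt(h1 * h2)))

def should_update(template_hist, candidate_hist, threshold=0.8):
    """Gate the tracker update: accept the predicted box only while the
    candidate's color distribution stays close to the target template."""
    return bhattacharyya(template_hist, candidate_hist) >= threshold
```

Rejecting updates when similarity drops (e.g. during occlusion) prevents the template from drifting onto the occluder.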

VLSI Architecture for Video Object Boundary Enhancement (비디오객체의 경계향상을 위한 VLSI 구조)

  • Kim, Jinsang-
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.11A
    • /
    • pp.1098-1103
    • /
    • 2005
  • Edge and contour information is highly salient to the human visual system and is central to our perception and recognition. Therefore, if edge information is integrated while extracting video objects, we can generate object boundaries closer to those perceived by the human visual system for multimedia applications such as interaction between video objects, object-based coding, and representation. Most object extraction methods are difficult to implement in real-time systems due to their iterative and complex arithmetic operations. In this paper, we propose a VLSI architecture that integrates edge information when extracting video objects, so that object boundaries are located precisely. The proposed architecture can easily be implemented in hardware because it uses only simple arithmetic operations, and it can be applied to real-time object extraction for object-oriented multimedia applications.
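The "simple arithmetic operations" that make edge extraction hardware-friendly are typically shift-and-add convolutions such as the 3x3 Sobel masks, where the |Gx| + |Gy| magnitude avoids multipliers and square roots entirely. The paper's exact operator is not given in the abstract; this numpy sketch shows the general pattern:

```python
import numpy as np

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with the 3x3 Sobel
    masks. Every term is an add, subtract, or doubling (a 1-bit
    shift), which maps directly to simple VLSI datapaths."""
    img = img.astype(int)
    gx = (img[:-2, 2:] + 2 * img[1:-1, 2:] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[1:-1, :-2] - img[2:, :-2])
    gy = (img[2:, :-2] + 2 * img[2:, 1:-1] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[:-2, 1:-1] - img[:-2, 2:])
    return np.abs(gx) + np.abs(gy)
```

On a vertical step edge the response peaks along the boundary and is zero in flat regions, which is exactly the boundary evidence an object extractor would integrate.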

Object Tracking System Using Kalman Filter (칼만 필터를 이용한 물체 추적 시스템)

  • Xu, Yanan;Ban, Tae-Hak;Yuk, Jung-Soo;Park, Dong-Won;Jung, Hoe-kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.10a
    • /
    • pp.1015-1017
    • /
    • 2013
  • Object tracking is, in general, a challenging problem. Difficulties can arise from abrupt object motion, changing appearance of both the object and the scene, non-rigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location or shape of the object in every frame. This paper describes an object tracking system based on active vision with two cameras. Into the single-camera tracking algorithm we introduce active visual tracking and object lock-on based on the Extended Kalman Filter (EKF), whose state estimate predicts the object's next motion; after tracking is performed at each camera, the individual tracks are fused (combined) to obtain the final system object track.
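The per-camera predict/update cycle can be sketched with a linear constant-velocity Kalman filter (the paper uses the extended variant for its nonlinear camera model; the noise variances below are illustrative assumptions):

```python
import numpy as np

class KalmanTracker:
    """Constant-velocity Kalman filter tracking a 2-D position.
    State x = [px, py, vx, vy]; only position is observed."""
    def __init__(self, dt=1.0, process_var=1e-2, meas_var=1.0):
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt           # position += velocity*dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0          # observe position only
        self.Q = process_var * np.eye(4)
        self.R = meas_var * np.eye(2)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        """Propagate the state -- this predicts the object's next position."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Correct the prediction with a measured position z."""
        y = np.asarray(z, float) - self.H @ self.x     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

Running one such filter per camera and then averaging (or covariance-weighting) the two corrected positions gives the fused system track the abstract describes.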


Voting based Cue Integration for Visual Servoing

  • Cho, Che-Seung;Chung, Byeong-Mook
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.798-802
    • /
    • 2003
  • The robustness and reliability of vision algorithms is the key issue in robotic research and industrial applications. In this paper, robust real-time visual tracking in complex scenes is considered. A common approach to increasing the robustness of a tracking system is to use different models (CAD models etc.) known a priori. Fusion of multiple features also facilitates robust detection and tracking of objects in scenes of realistic complexity. Because voting is very simple and requires little or no model for fusion, voting-based fusion of cues is applied. The algorithm is tested on a 3D Cartesian robot that tracks a toy vehicle moving along a 3D rail, and a Kalman filter is used to estimate the motion parameters, namely the system state vector of a moving object with unknown dynamics. Experimental results show that fusion of cues and motion estimation give the tracking system robust performance.
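Voting-based cue fusion needs no model of how the cues fail: each cue simply proposes a target position, mutually consistent proposals reinforce one another, and outlier cues are outvoted. The paper's exact voting scheme is not specified in the abstract; this is one plausible plurality-voting sketch with an assumed agreement radius:

```python
import numpy as np

def fuse_by_voting(cue_estimates, weights=None, radius=20.0):
    """Plurality voting over target-position estimates from multiple
    cues (e.g. color, edges, motion). Estimates within `radius` pixels
    of one another count as agreeing votes; the weighted average of
    the winning cluster is returned as the fused position."""
    pts = np.asarray(cue_estimates, dtype=float)
    w = np.ones(len(pts)) if weights is None else np.asarray(weights, float)
    # For each estimate, total weight of all cues that agree with it
    support = np.array([
        w[np.linalg.norm(pts - p, axis=1) <= radius].sum() for p in pts
    ])
    winner = pts[support.argmax()]
    cluster = np.linalg.norm(pts - winner, axis=1) <= radius
    return pts[cluster].T @ w[cluster] / w[cluster].sum()
```

A single wildly wrong cue (say, a color match on a distractor) is simply outvoted, which is the robustness property the paper is after.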


A Study on the Transition of the Perspective connected with Visual Modality (시각양식과 관련한 투시도법의 변천에 관한 연구)

  • 곽기표
    • Korean Institute of Interior Design Journal
    • /
    • no.38
    • /
    • pp.48-56
    • /
    • 2003
  • This study aims to trace the transition of perspective in connection with visual modality. Perspective, based on Greek optics and Euclidean geometry and rediscovered in the Renaissance, represents the object according to a particular moment and point of view; it is a principal factor affecting architecture, the form of the city, and spatial organization, and it symbolizes an ideal of its times. It embodied a perception that treats space rationally on the basis of realism and became a visual modality founded on the separation of the seeing subject from the world of the object. The point of view became one with the vanishing point that organizes the image, and for four hundred years after the Renaissance the straight line, the right angle, and the circle were the favored geometric choices in architecture. The fixed point of view of the subject is fundamentally changing and breaking up under the new visual technologies of modern times.

Fuzzy Neural Network-based Visual Servoing : part I (퍼지 신경망을 이용한 시각구동(I))

  • 김태원;서일홍
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.43 no.6
    • /
    • pp.1010-1019
    • /
    • 1994
  • It is shown that there exists a nonlinear mapping which transforms image features and their changes into the desired camera motion without measuring the relative distance between the camera and the object. This nonlinear mapping eliminates several difficulties that occur in computing the inverse of the feature Jacobian, as in the usual feature-based visual feedback control methods. Instead of analytically deriving the closed form of this mapping, a Fuzzy Membership Function-based Neural Network (FMFNN) incorporating a fuzzy-neural interpolating network is used to approximate it. Several FMFNNs are trained to track a moving object throughout the workspace along the line of sight. For an effective implementation of the proposed FMF network, an image feature selection process is investigated. Finally, several numerical examples are presented to show the validity of the proposed visual servoing method.
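The general shape of such a fuzzy-neural mapping — fuzzify the image-feature inputs with Gaussian membership functions, AND the grades across features per rule, then take a normalized weighted sum of rule outputs — can be sketched as a zero-order Takagi-Sugeno network. The FMFNN's actual architecture is not detailed in the abstract, so the structure and all parameters below are illustrative assumptions:

```python
import numpy as np

def gaussian_membership(x, centers, widths):
    """Membership grades of input features against Gaussian membership
    functions; x: (n_feat,), centers/widths: (n_feat, n_rules)."""
    return np.exp(-((x[:, None] - centers) ** 2) / (2.0 * widths ** 2))

def fuzzy_net_output(features, centers, widths, rule_weights):
    """Zero-order Takagi-Sugeno style inference: rule activation is
    the product (fuzzy AND) of per-feature grades, and the output is
    the normalized weighted sum of rule consequents -- here, a camera
    motion command predicted from image-feature changes."""
    mu = gaussian_membership(features, centers, widths)  # (n_feat, n_rules)
    activation = mu.prod(axis=0)                         # (n_rules,)
    return activation @ rule_weights / activation.sum()  # (n_out,)
```

When the input lands exactly on one rule's membership centers, that rule dominates and the output reduces to its consequent weights, which is the interpolating behavior that replaces the inverse feature Jacobian.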
