• Title/Summary/Keyword: Visual Dynamics Model


Visual Dynamics Model for 3D Text Visualization

  • Lim, Sooyeon
    • International Journal of Contents
    • /
    • v.14 no.4
    • /
    • pp.86-91
    • /
    • 2018
  • Text has evolved along with the history of art as a means of communicating human intentions and emotions. In addition, text visualization artworks have been combined with the social forms and contents of new media to produce social messages and related meanings. Recently, in text visualization artworks combined with digital media, the forms of communication with viewers are becoming instant and interactive, and viewers actively participate in creating the artworks through direct engagement. Interactive text visualization, driven by the viewer's interaction, generates external dynamics from the shapes of the text and internal dynamics from its embedded meanings. The purpose of this study is to propose a visual dynamics model for expressing the dynamics of text and to implement a text visualization system based on the model. The system uses the deconstruction of imaged text to create an interactive text visualization that reacts to the gestures of the viewer in real time. Visual transformation synchronized with the intentions of the viewer prevents the text from remaining a mere interpretation of language symbols and extends its various meanings. The visualized text, in its various forms, exhibits visual dynamics whose meaning is interpreted according to the cultural background of the viewer.
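  • The gesture-reactive deconstruction described in this abstract is not specified in detail; as a purely illustrative, hypothetical Python sketch (all names and constants are the editor's assumptions, not the paper's system), the code below treats each glyph of an imaged text as a particle that is pushed away from the viewer's gesture position and then relaxes back toward its original layout.

      import numpy as np

      def deconstruct_text(glyph_centres):
          # Treat each rendered glyph centre as an independent particle (hypothetical representation).
          return np.asarray(glyph_centres, dtype=float)

      def react_to_gesture(particles, home, gesture_xy, repel_radius=80.0, stiffness=0.05):
          # Push particles away from the gesture point, then pull them back toward their home positions.
          offset = particles - np.asarray(gesture_xy, dtype=float)
          dist = np.linalg.norm(offset, axis=1, keepdims=True) + 1e-9
          repulsion = np.where(dist < repel_radius, 0.2 * (repel_radius - dist) * offset / dist, 0.0)
          return particles + repulsion + (home - particles) * stiffness

      # One simulated interaction episode: a gesture near the first glyph deforms the text,
      # after which the glyphs drift back toward their original layout.
      home = deconstruct_text([[0, 0], [20, 0], [40, 0]])
      state = home.copy()
      for _ in range(30):
          state = react_to_gesture(state, home, gesture_xy=(5, 5))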

Robust Visual Tracking for 3-D Moving Object using Kalman Filter

  • 조지승;정병묵
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.1055-1058
    • /
    • 2003
  • The robustness and reliability of vision algorithms are the key issues in robotic research and industrial applications. In this paper, robust real-time visual tracking in a complex scene is considered. A common approach to increasing the robustness of a tracking system is the use of different models (CAD models, etc.) known a priori. Fusion of multiple features also facilitates robust detection and tracking of objects in scenes of realistic complexity. Here, voting-based fusion of cues is adopted; in voting, only a very simple model, or no model at all, is used for fusion. The algorithm is tested on a 3-D Cartesian robot that tracks a toy vehicle moving along a 3-D rail, and a Kalman filter is used to estimate the motion parameters, namely the system state vector of the moving object with unknown dynamics. Experimental results show that a tracking system with fusion of cues and motion estimation performs robustly.
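  • The filter equations are not given in the abstract; as a hedged illustration only, the Python sketch below implements a standard constant-velocity Kalman filter for a 2-D image-plane track (the matrices, frame rate, and noise levels are assumptions, not values from the paper).

      import numpy as np

      dt = 0.04                                     # assumed frame period (25 Hz)
      F = np.block([[np.eye(2), dt * np.eye(2)],    # constant-velocity state transition
                    [np.zeros((2, 2)), np.eye(2)]])
      H = np.hstack([np.eye(2), np.zeros((2, 2))])  # only the position is measured
      Q = 1e-2 * np.eye(4)                          # process noise for the unknown dynamics
      R = 4.0 * np.eye(2)                           # measurement noise of the fused visual cue

      def kalman_step(x, P, z):
          # One predict/update cycle for a fused cue measurement z = (u, v).
          x = F @ x                                 # predict the state
          P = F @ P @ F.T + Q                       # predict the covariance
          S = H @ P @ H.T + R                       # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
          x = x + K @ (z - H @ x)                   # correct with the measurement
          P = (np.eye(4) - K @ H) @ P
          return x, P

      x, P = np.zeros(4), np.eye(4)
      for z in [np.array([1.0, 0.5]), np.array([1.2, 0.6])]:   # toy measurements
          x, P = kalman_step(x, P, z)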


Development of a 3D Graphic Simulator for Assembling Robot

  • 장영희
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 1998.03a
    • /
    • pp.227-232
    • /
    • 1998
  • We developed an off-line graphic simulator that can simulate a robot model in 3D graphics space under Windows 95. A 4-axis SCARA robot was adopted as the target model. Forward kinematics, inverse kinematics, and robot dynamics modeling were included in the developed program. The interface between users and the off-line program system in the Windows 95 graphical user interface environment was also studied. The development language is Microsoft Visual C++, and the OpenGL graphics library by Silicon Graphics, Inc. was utilized for the 3D graphics.
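  • The abstract mentions forward and inverse kinematics for a 4-axis SCARA robot without giving the equations; the Python sketch below shows the standard forward kinematics of a generic RRPR SCARA arm (the link lengths and joint ordering are assumptions, not the modelled robot's actual parameters).

      import numpy as np

      def scara_forward(theta1, theta2, d3, theta4, L1=0.35, L2=0.25):
          # End-effector pose of a generic SCARA arm (lengths in metres, angles in radians).
          x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
          y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
          z = -d3                                    # the prismatic joint moves the tool downward
          yaw = theta1 + theta2 + theta4             # tool rotation about the vertical axis
          return np.array([x, y, z, yaw])

      print(scara_forward(np.pi / 4, -np.pi / 6, 0.05, 0.0))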


Investigation of Visual Perception Under Zen-Meditation Based On Alpha-Dependent F-VEPs

  • Liao, Hsien-Cheng;Liu, Chuan-Yi;Lo, Pei-Chen
    • Journal of Biomedical Engineering Research
    • /
    • v.27 no.6
    • /
    • pp.384-391
    • /
    • 2006
  • Variation of brain dynamics under Zen meditation has been one of our major research interests for years. One issue encountered is the inaccessibility of the actual meditation level or stage as a reference. In this paper, we propose an alternative strategy for investigating the human brain's response to external flash stimuli during the course of Zen meditation. To secure a consistent brain-dynamics condition when applying stimulation, we designed a recording of flash visual evoked potentials (F-VEPs) based on a constant EEG (electroencephalogram) background: the frontal $\alpha$-rhythm-dominated activity that increases significantly during Zen meditation. The flash-light stimulus was thus applied upon emergence of the frontal $\alpha$-rhythm. The alpha-dependent F-VEPs were then employed to inspect the effect of Zen meditation on brain dynamics. Based on the proposed experimental protocol, considerable differences between the experimental and control groups were obtained. Our results show that the amplitudes of P1-N2 and N2-P2 at Cz and Fz increased significantly during meditation, in contrast to the F-VEPs of the control group at rest. We thus suggest that Zen meditation results in an acute response of the primary visual cortex and its associated areas.
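  • The alpha-dependent triggering can be illustrated, though not reproduced, by the Python sketch below: it estimates frontal alpha-band (8-12 Hz) power with Welch's method and fires the flash stimulus only while that power exceeds a threshold. The sampling rate, window length, and threshold are the editor's assumptions, not the study's recording parameters.

      import numpy as np
      from scipy.signal import welch

      FS = 250.0                                     # assumed EEG sampling rate in Hz

      def alpha_power(frontal_eeg):
          # Mean spectral power in the 8-12 Hz band of a short frontal-channel segment.
          f, pxx = welch(frontal_eeg, fs=FS, nperseg=int(FS))
          band = (f >= 8.0) & (f <= 12.0)
          return pxx[band].mean()

      def should_flash(frontal_eeg, threshold=5.0):
          # Trigger the F-VEP flash only while the frontal alpha rhythm dominates.
          return alpha_power(frontal_eeg) > threshold

      # Toy test: a microvolt-scale 10 Hz burst plus noise should trigger the flash.
      rng = np.random.default_rng(0)
      t = np.arange(0.0, 2.0, 1.0 / FS)
      segment = 30.0 * np.sin(2 * np.pi * 10.0 * t) + 5.0 * rng.standard_normal(t.size)
      print(should_flash(segment))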

Voting based Cue Integration for Visual Servoing

  • Cho, Che-Seung;Chung, Byeong-Mook
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.798-802
    • /
    • 2003
  • The robustness and reliability of vision algorithms are the key issues in robotic research and industrial applications. In this paper, robust real-time visual tracking in a complex scene is considered. A common approach to increasing the robustness of a tracking system is to use different models (CAD models, etc.) known a priori. Fusion of multiple features also facilitates robust detection and tracking of objects in scenes of realistic complexity. Because voting requires only a very simple model, or no model at all, for fusion, voting-based fusion of cues is applied. The algorithm is tested on a 3D Cartesian robot that tracks a toy vehicle moving along a 3D rail, and a Kalman filter is used to estimate the motion parameters, namely the system state vector of the moving object with unknown dynamics. Experimental results show that a tracking system with fusion of cues and motion estimation performs robustly.
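  • The exact voting scheme is not given in the abstract; as a hedged illustration of voting-based cue integration, the Python sketch below lets several independent visual cues cast weighted votes for candidate target positions on a coarse image grid and returns the cell with the most support, so a single outlying cue cannot hijack the track.

      import numpy as np

      def vote_for_target(cue_estimates, cue_weights, grid_shape=(48, 64), cell=10):
          # Accumulate weighted votes from each cue's (x, y) estimate on a coarse grid over the image.
          votes = np.zeros(grid_shape)
          for (x, y), w in zip(cue_estimates, cue_weights):
              r, c = int(y // cell), int(x // cell)
              if 0 <= r < grid_shape[0] and 0 <= c < grid_shape[1]:
                  votes[r, c] += w
          r, c = np.unravel_index(np.argmax(votes), votes.shape)
          return (c * cell + cell / 2, r * cell + cell / 2)   # centre of the winning cell

      # Example: colour, edge, and motion cues roughly agree; the outlying cue is outvoted.
      cues = [(212, 130), (218, 134), (215, 138), (40, 300)]
      weights = [1.0, 1.0, 1.0, 0.5]
      print(vote_for_target(cues, weights))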


Development of Uni-Axial Bushing Model for the Vehicle Dynamic Analysis Using the Bouc-Wen Hysteretic Model

  • Ok, Jin-Kyu;Yoo, Wan-Suk;Sohn, Jeong-Hyun
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.14 no.2
    • /
    • pp.158-165
    • /
    • 2006
  • In this paper, a new uni-axial bushing model for vehicle dynamics analysis is proposed. Bushing components of a vehicle suspension system were tested on an MTS machine to capture the nonlinear and hysteretic behavior of typical rubber bushing elements. The test results are used to develop the Bouc-Wen bushing model, which is employed to represent the hysteretic characteristics of the bushing. The ADAMS program is used for the identification process, and the VisualDOC program is used to find the optimal coefficients of the model; a genetic algorithm is employed to carry out the optimal design. A numerical example is presented to verify the performance of the proposed model.
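  • In one common form the Bouc-Wen element gives a restoring force F = a*k*x + (1 - a)*k*z, where the hysteretic variable z evolves as dz/dt = A*dx/dt - beta*|dx/dt|*|z|^(n-1)*z - gamma*(dx/dt)*|z|^n. The Python sketch below integrates this with an explicit Euler step; all parameter values are placeholders, not the coefficients identified in the paper.

      import numpy as np

      def bouc_wen_force(x_hist, dt, k=1.0e5, a=0.6, A=1.0, beta=0.5, gamma=0.5, n=1.5):
          # Restoring force of a uni-axial Bouc-Wen element for a sampled displacement history.
          z, forces, x_prev = 0.0, [], x_hist[0]
          for x in x_hist:
              xdot = (x - x_prev) / dt
              zdot = A * xdot - beta * abs(xdot) * abs(z) ** (n - 1) * z - gamma * xdot * abs(z) ** n
              z += zdot * dt                         # explicit Euler update of the hysteretic variable
              forces.append(a * k * x + (1.0 - a) * k * z)
              x_prev = x
          return np.array(forces)

      t = np.linspace(0.0, 2.0, 2001)
      displacement = 0.002 * np.sin(2 * np.pi * 2.0 * t)   # 2 mm, 2 Hz sinusoidal test input
      force = bouc_wen_force(displacement, dt=t[1] - t[0])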

Using Keystroke Dynamics for Implicit Authentication on Smartphone

  • Do, Son;Hoang, Thang;Luong, Chuyen;Choi, Seungchan;Lee, Dokyeong;Bang, Kihyun;Choi, Deokjai
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.8
    • /
    • pp.968-976
    • /
    • 2014
  • Authentication methods on smartphones should be implicit to users, requiring minimal user interaction. Existing authentication methods (e.g., PINs, passwords, visual patterns) do not effectively address memorability and privacy issues. Behavioral biometrics such as keystroke dynamics and gait can be acquired easily and implicitly using the sensors integrated in a smartphone. We propose a biometric model involving keystroke dynamics for implicit authentication on smartphones. We first design a feature extraction method for keystroke dynamics, and then build a fusion model of keystroke dynamics and gait to improve the authentication performance over a single behavioral biometric. The fusion is performed at both the feature extraction level and the matching score level. Experiments using a linear Support Vector Machine (SVM) classifier reveal that the best results are achieved with score fusion: a recognition rate of approximately 97.86% in identification mode and an error rate of approximately 1.11% in authentication mode.
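  • Score-level fusion can be illustrated with the hedged Python/scikit-learn sketch below: one linear SVM is trained per modality and their decision scores are combined by a weighted sum before thresholding. The feature dimensions, weights, and data are synthetic placeholders, not the paper's dataset or tuning.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)

      def make_modality(n, dim, shift):
          # Synthetic genuine (label 1) vs. impostor (label 0) samples for one modality.
          genuine = rng.normal(shift, 1.0, (n, dim))
          impostor = rng.normal(0.0, 1.0, (n, dim))
          return np.vstack([genuine, impostor]), np.hstack([np.ones(n), np.zeros(n)])

      X_key, y = make_modality(100, 8, 1.5)       # keystroke-dynamics features
      X_gait, _ = make_modality(100, 12, 1.5)     # gait features for the same attempts

      svm_key = SVC(kernel="linear").fit(X_key, y)
      svm_gait = SVC(kernel="linear").fit(X_gait, y)

      # Score-level fusion: weighted sum of the two SVM decision scores, then a single threshold.
      fused = 0.5 * svm_key.decision_function(X_key) + 0.5 * svm_gait.decision_function(X_gait)
      accepted = fused > 0.0
      print((accepted == y.astype(bool)).mean())   # training-set agreement of the fused decision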

A Study on Visual Feedback Control of a Dual Arm Robot with Eight Joints

  • Lee, Woo-Song;Kim, Hong-Rae;Kim, Young-Tae;Jung, Dong-Yean;Han, Sung-Hyun
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.610-615
    • /
    • 2005
  • Visual servoing is the fusion of results from many elemental areas, including high-speed image processing, kinematics, dynamics, control theory, and real-time computing. It has much in common with research into active vision and structure from motion, but is quite different from the often-described use of vision in hierarchical task-level robot control systems. In this paper we present a new approach to visual feedback control using image-based visual servoing with stereo vision. In order to control the position and orientation of a robot with respect to an object, a new technique using binocular stereo vision is proposed. The stereo vision enables us to calculate an exact image Jacobian not only around a desired location but also at other locations. The suggested technique can guide a robot manipulator to the desired location without prior knowledge such as the relative distance to the desired location or a model of the object, even if the initial positioning error is large. This paper describes a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results and compared with a conventional method on a dual-arm robot made by Samsung Electronics Co., Ltd.
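  • The control law itself is not reproduced in the abstract; as a standard textbook illustration rather than the paper's stereo formulation, the Python sketch below stacks the interaction matrix (image Jacobian) of point features and computes the classical command v = -lambda * pinv(L) * (s - s_desired), with the feature depths assumed known for simplicity.

      import numpy as np

      def interaction_matrix(x, y, Z):
          # Interaction matrix of a normalised image point (x, y) observed at depth Z.
          return np.array([
              [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
              [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
          ])

      def ibvs_command(features, desired, depths, gain=0.5):
          # Camera velocity screw (vx, vy, vz, wx, wy, wz) that drives the features to their goals.
          L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
          error = (np.asarray(features) - np.asarray(desired)).ravel()
          return -gain * np.linalg.pinv(L) @ error

      current = [(0.10, 0.05), (-0.08, 0.12), (0.02, -0.07)]
      goal = [(0.00, 0.00), (-0.10, 0.10), (0.00, -0.05)]
      print(ibvs_command(current, goal, depths=[1.2, 1.1, 1.3]))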


Experimental Studies of Vision Based Position Tracking Control of Mobile Robot Using Neural Network

  • Jung, Seul;Jang, Pyung-Soo;Won, Moon-Chul;Hong, Sub
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.9 no.7
    • /
    • pp.515-526
    • /
    • 2003
  • Tutorial contents on the kinematics and dynamics of a wheeled-drive mobile robot are presented. Based on the dynamic model, simulation studies of position tracking of a mobile robot are performed. The control structures of several position control algorithms using visual feedback are proposed and their performances compared. In order to compensate for uncertainties from unknown dynamics and ignored dynamic effects such as slip conditions, neural-network-based position control schemes are proposed. Experiments were conducted, and the results show that the vision-based neural network control scheme turned out to perform best among the proposed schemes.
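  • The kinematic model itself is not stated in the abstract; a common differential-drive (unicycle) model is sketched below in Python, together with a simple proportional go-to-point law standing in for the visual feedback loop. The gains are arbitrary, and the paper's neural-network compensator is deliberately omitted.

      import numpy as np

      def diff_drive_step(pose, v, w, dt):
          # Integrate the unicycle kinematics: pose = (x, y, heading), inputs v (m/s) and w (rad/s).
          x, y, th = pose
          return np.array([x + v * np.cos(th) * dt, y + v * np.sin(th) * dt, th + w * dt])

      def go_to_point(pose, target, k_v=0.8, k_w=2.0):
          # Proportional feedback law driving the robot toward a target position measured by vision.
          dx, dy = target[0] - pose[0], target[1] - pose[1]
          return k_v * np.hypot(dx, dy), k_w * (np.arctan2(dy, dx) - pose[2])

      pose, target, dt = np.array([0.0, 0.0, 0.0]), (1.0, 1.0), 0.02
      for _ in range(500):
          v, w = go_to_point(pose, target)
          pose = diff_drive_step(pose, v, w, dt)
      print(pose)                                   # ends up near the target position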

Development for Tilting Train Dynamics Motion Base

  • Song, Yong-Soo;Shin, Seung-Kwon;Kim, Jung-Seok;Ho, Seong
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1158-1161
    • /
    • 2004
  • This paper describes the construction of a half-sphere-screen driving simulator that can perform six degree-of-freedom (DOF) motions corresponding to a tilting train. The mathematical equations of the tilting-train dynamics are first derived from the 6-DOF bicycle model and incorporated with the bogie, carbody, and suspension subsystems. The equations of motion are then programmed in Visual C++. To realize the simulator functions, a motion platform constructed from six electrically driven actuators is designed, and its kinematics/inverse kinematics analysis is also conducted. Driver operation signals such as carbody angle, accelerator, and tilting positions are measured to trigger the tilting-dynamics calculation and, further, to actuate the cylinders through the motion platform control program. In addition, a digital PID controller is added to achieve stable and accurate displacements of the motion platform. The experiments show that the designed simulator adequately reproduces the special railroad driving situations discussed in this paper.
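  • The digital PID loop mentioned for the motion platform is a standard component; the Python sketch below shows a generic positional-form discrete PID controller of the kind that could regulate one actuator stroke, with the gains and the toy actuator model chosen purely for illustration.

      class DigitalPID:
          # Positional-form discrete PID: u[k] = Kp*e[k] + Ki*sum(e)*dt + Kd*(e[k] - e[k-1])/dt.

          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral, self.prev_error = 0.0, 0.0

          def update(self, setpoint, measured):
              error = setpoint - measured
              self.integral += error * self.dt
              derivative = (error - self.prev_error) / self.dt
              self.prev_error = error
              return self.kp * error + self.ki * self.integral + self.kd * derivative

      # Example: drive a crude first-order actuator model toward a 0.1 m stroke command.
      pid, position, dt = DigitalPID(kp=120.0, ki=30.0, kd=5.0, dt=0.001), 0.0, 0.001
      for _ in range(3000):
          command = pid.update(setpoint=0.1, measured=position)
          position += dt * 0.05 * (command - position)   # stand-in for the electric actuator
      print(position)                                    # approaches the 0.1 m setpoint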
