• Title/Summary/Keyword: Real-time stereo

A Study on Design and Implementation of Speech Recognition System Using ART2 Algorithm

  • Kim, Joeng Hoon;Kim, Dong Han;Jang, Won Il;Lee, Sang Bae
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.2
    • /
    • pp.149-154
    • /
    • 2004
  • In this research, we chose speech recognition as the method for controlling an electric wheelchair by voice alone, and adopted DTW (Dynamic Time Warping), a speaker-dependent technique with a relatively high recognition rate. For real-time operation, however, the system must run with little memory and fast processing, so we introduced VQ (Vector Quantization), widely used as a compression algorithm in speaker-independent recognition, to secure fast recognition with a small memory footprint. Since the recognition rate dropped after applying VQ, we added the ART2 (Adaptive Resonance Theory 2) algorithm as a post-processing step and obtained roughly a 5% improvement in recognition rate. ART2 is applied together with an error range: when the second-best DTW distance minus the best DTW distance is 20 or more, the error range is applied, giving both fast processing and a high recognition rate. Because the system is mounted on a moving platform, it had to be implemented as an embedded system; we selected the TMS320C32 chip, which can perform a large number of calculations relatively fast. To store the large amount of speech data, 128 kbyte of RAM and 64 kbyte of ROM were used, and a 16-bit stereo audio codec was used for speech input, providing relatively accurate data through its high resolution.
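
The pipeline above scores an input utterance against stored templates with DTW and applies ART2 post-processing according to an error-range test on the two best DTW distances. Below is a minimal Python sketch of just the DTW matching and the error-range check; the feature arrays, template names, and decision wording are illustrative assumptions, not the authors' TMS320C32 implementation.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Minimal dynamic-time-warping distance between two feature
    sequences (each an array of shape [frames, features])."""
    n, m = len(seq_a), len(seq_b)
    acc = np.full((n + 1, m + 1), np.inf)   # accumulated cost matrix
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # local distance
            # Standard step pattern: match, insertion, deletion.
            acc[i, j] = cost + min(acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1])
    return acc[n, m]

# Hypothetical usage: pick the template with the smallest DTW distance and
# flag the error-range condition from the abstract (gap of 20 or more
# between the best and second-best distances).
templates = {"forward": np.random.rand(40, 12), "stop": np.random.rand(35, 12)}
utterance = np.random.rand(38, 12)
scores = sorted((dtw_distance(utterance, t), word) for word, t in templates.items())
best, second = scores[0], scores[1]
error_range = (second[0] - best[0]) >= 20.0
print(best[1], "error range applied" if error_range else "direct result")
```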

Real-time Disparity Acquisition Algorithm from Stereoscopic Image and its Hardware Implementation (스테레오 영상으로부터의 실시간 변이정보 획득 알고리듬 및 하드웨어 구현)

  • Shin, Wan-Soo;Choi, Hyun-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.11C
    • /
    • pp.1029-1039
    • /
    • 2009
  • In this paper, existing disparity acquisition algorithms are analyzed and, on that basis, a disparity generation technique is proposed that improves accuracy relative to generation time. It is fundamentally a pixel-by-pixel motion estimation technique and therefore lends itself to high-speed operation, but plain motion estimation suffers from lower accuracy because it relies on the similarity of a fixed matching window regardless of the texture distribution in the image. To raise the accuracy of the disparity, this paper introduces a variable-sized window matching technique, which yields high accuracy both in homogeneous regions and at object edges. A hardware unit generating the disparity image was designed and optimized for processing speed to allow high throughput. The hardware was described in Verilog-HDL and synthesized with the Hynix 0.35 µm CMOS cell library. The designed hardware operated stably at 120 MHz in Cadence NC-Verilog simulation and can process 15 frames per second at this clock frequency.
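
The proposed hardware is built around per-pixel window matching between the left and right images, with the window size adapted to the local texture. As a point of reference only, the sketch below shows the plain fixed-window SAD search that such a matcher refines; the search range and window size are assumptions, and the adaptive-window rule of the paper is not reproduced here.

```python
import numpy as np

def sad_disparity(left, right, max_disp=32, half_win=3):
    """Naive window-based disparity: for each pixel in the left image,
    find the horizontal shift in the right image with the lowest SAD."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half_win, h - half_win):
        for x in range(half_win + max_disp, w - half_win):
            ref = left[y - half_win:y + half_win + 1,
                       x - half_win:x + half_win + 1]
            best_d, best_cost = 0, np.inf
            for d in range(max_disp):
                cand = right[y - half_win:y + half_win + 1,
                             x - d - half_win:x - d + half_win + 1]
                cost = np.abs(ref.astype(int) - cand.astype(int)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d   # winning shift = disparity at (x, y)
    return disp
```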

A Home-Based Remote Rehabilitation System with Motion Recognition for Joint Range of Motion Improvement (관절 가동범위 향상을 위한 원격 모션 인식 재활 시스템)

  • Kim, Kyungah;Chung, Wan-Young
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.20 no.3
    • /
    • pp.151-158
    • /
    • 2019
  • Patients who are disabled for various reasons such as disasters, injuries, or chronic illness, and elderly people whose range of body motion is limited by aging, are advised to take part in rehabilitation programs at hospitals. Typically, however, commuting without assistance is difficult for them because their access outside the home is limited, and from the hospitals' perspective, maintaining the staff to run rehabilitation sessions adds cost. For these reasons, this paper develops a home-based remote rehabilitation system using motion recognition that does not require help from others. The system runs on a personal computer with a stereo camera at home, and the user's motion is monitored in real time through the motion recognition feature. The system tracks the joint range of motion (joint ROM) of particular body parts to check improvement in body function. For demonstration, a total of 4 subjects of various ages and health conditions participated in this project. Their motion data were collected over 3 exercise sessions, each session was repeated 9 times per person, and the results were compared.
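
Joint range of motion is, at its core, the angle at a joint between two tracked body segments. A minimal sketch of that angle computation from three 3D joint positions is given below; the point names (shoulder, elbow, wrist) and coordinates are illustrative assumptions, not the paper's tracking pipeline.

```python
import numpy as np

def joint_angle(p_proximal, p_joint, p_distal):
    """Angle (degrees) at p_joint formed by the segments
    p_joint -> p_proximal and p_joint -> p_distal."""
    v1 = np.asarray(p_proximal, float) - np.asarray(p_joint, float)
    v2 = np.asarray(p_distal, float) - np.asarray(p_joint, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Illustrative elbow flexion from three tracked points (metres).
shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.0, 1.1, 0.05), (0.1, 0.9, 0.3)
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```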

Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object- (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.;Choi, D. Y.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.26 no.1
    • /
    • pp.21-28
    • /
    • 2001
  • Major deficiencies of current automation schemes, including various robots for bioproduction, are the lack of task adaptability and real-time processing, low performance on diverse tasks, lack of robustness of task results, high system cost, failure to earn the operator's trust, and so on. This paper proposes a scheme to overcome these limits on the task abilities of conventional computer-controlled automatic systems: man-machine hybrid automation via tele-operation, capable of handling various bioproduction processes. The scheme has two parts, efficient task sharing between the operator and the CCM (computer-controlled machine) and an efficient interface between the operator and the CCM. To realize the proposed concept, the task of identifying an object and extracting its 3D coordinates was selected. 3D coordinate information was obtained by camera calibration, using the camera as a measurement device. Two stereo images were obtained by moving a single camera a known distance horizontally, normal to the focal axis, and acquiring an image at each location. The transformation matrix for camera calibration was obtained by a least-squares approach using 6 known pairs of points in the 2D image and 3D world space, and the 3D world coordinate was then computed from the two sets of image pixel coordinates with the calibrated transformation matrices. As the interface between the operator and the CCM, a touch-pad screen mounted on the monitor and a remotely captured imaging system were used. The operator indicates an object by touching its captured image on the touch-pad screen; a local image-processing area of a certain size is then specified around the touch, and image processing is performed within that area to extract the desired features of the object. MS Windows based interface software was developed in Visual C++ 6.0 with four modules: remote image acquisition, task command, local image processing, and 3D coordinate extraction. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.
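
The geometry described above, a projection (transformation) matrix fitted by least squares to at least 6 known 2D-3D point pairs and a 3D point recovered from matched pixels in two calibrated views, corresponds to the standard direct linear transform and linear triangulation. The compact Python sketch below is a generic reconstruction of that math under those assumptions, not the authors' Visual C++ modules.

```python
import numpy as np

def calibrate_dlt(world_pts, image_pts):
    """Least-squares estimate of a 3x4 projection matrix from paired
    3D world points and 2D image points (>= 6 pairs), via the
    standard direct linear transform."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, float)
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)          # right singular vector of smallest singular value

def triangulate(P1, P2, uv1, uv2):
    """3D point from matched pixel coordinates in two calibrated views."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([u1 * P1[2] - P1[0], v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0], v2 * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                  # de-homogenize to world coordinates
```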

Supervised Hybrid Control Architecture for Navigation of a Personal Robot

  • Shin, Hyun-Jong;Im, Chang-Jun;Kim, Jin-Oh;Lee, Ho-Gil
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2003.10a
    • /
    • pp.1178-1183
    • /
    • 2003
  • Since personal robots coexist with a person in order to help that person, while adapting to varied human lifestyles and environments, they have to accommodate surroundings that change frequently and differ from home to home. In addition, personal robots may have many different kinematic configurations depending on their capabilities: some may have only a mobile base, while others also have arms and a head. The motivation of this study arises from this ill-defined home environment and varying kinematic configuration, so its goal is a general control architecture for personal robots. Three major architectures exist, deliberative, reactive, and hybrid, but we found that each is applicable only to a defined environment with a fixed kinematic configuration; none can accommodate the two requirements above. As a general solution, we propose a Supervised Hybrid Architecture (SHA), which uses double layers of deliberative and reactive control, distributed control with a modular design of kinematic configurations, and a real-time Linux OS. Deliberative and reactive actions interact through a corresponding arbitrator, and these arbitrators help the robot choose the appropriate architecture for the current situation so that it can successfully perform a given task. The distributed control modules communicate over IEEE 1394 for easy expandability. Using a personal robot platform with a mobile base, two arms, a head, and a pan-tilt stereo eye system, we tested the developed SHA in static as well as dynamic environments, developing decision-making rules that select the appropriate control method for several situations of the navigation task. Examples are given to show its effectiveness.
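
At the heart of the SHA is an arbitrator that, at each control cycle, chooses between the deliberative layer's output and the reactive layer's output according to the current situation. The toy sketch below only illustrates that arbitration pattern; the decision rule and situation flags are invented for illustration and are not the paper's decision-making rules.

```python
from dataclasses import dataclass

@dataclass
class Command:
    v: float   # forward velocity (m/s)
    w: float   # angular velocity (rad/s)

def arbitrate(deliberative_cmd, reactive_cmd, obstacle_close, plan_valid):
    """Toy supervised arbitration: the reactive command wins whenever an
    obstacle is close or the global plan is stale; otherwise the
    deliberative command is followed."""
    if obstacle_close or not plan_valid:
        return reactive_cmd
    return deliberative_cmd

cmd = arbitrate(Command(0.4, 0.0), Command(0.0, 0.6),
                obstacle_close=True, plan_valid=True)
print(cmd)   # -> Command(v=0.0, w=0.6): reactive avoidance takes over
```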

Virtual Reality Using X3DOM (X3DOM을 이용한 가상현실)

  • Chheang, Vuthea;Ryu, Ga-Ae;Jeong, Sangkwon;Lee, Gookhwan;Yoo, Kwan-Hee
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.1
    • /
    • pp.165-170
    • /
    • 2017
  • Web 3D technology can be used to simulate scientific, medical, and engineering experiments and to visualize multimedia. In a web environment, 3D virtual reality can be accessed regardless of operating system, location, and time. Virtual reality (VR) uses three-dimensional, computer-generated realistic images, sound, and other sensations to replicate a real environment or an imaginary setting that a person can explore and interact with; that person is immersed in the virtual environment and is able to manipulate objects or perform a series of actions. Such a virtual environment can be created with X3D, the ISO standard for defining interactive, web-based 3D content and integrating it with multimedia. In this paper, we discuss X3D VR stereo rendering scenes and propose new X3D nodes for HMD VR (head-mounted display virtual reality). The proposed nodes are visualized in the web browser through X3DOM, the X3D framework for the web.

Efficient VLSI Architecture of Full-Image Guided Filter Based on Two-Pass Model (양방향 모델을 적용한 Full-image Guided Filter의 효율적인 VLSI 구조)

  • Lee, Gyeore;Park, Taegeun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.11
    • /
    • pp.1507-1514
    • /
    • 2016
  • Whereas the existing guided filter is computed over a kernel window, the full-image guided filter reflects every pixel of the image in the filtering by using weight propagation in a two-pass model, so the computational complexity can be improved while keeping the characteristics of the guided filter such as edge preservation and smoothing. In this paper, we propose an efficient VLSI architecture for the full-image guided filter, analyzing the data dependency, the data frequency, and the PSNR of the image in order to achieve enough speed for applications such as stereo vision and real-time systems. In addition, the proposed scheduling enables real-time processing by minimizing idle periods in the weight computation. The proposed VLSI architecture achieves a maximum operating frequency of 214 MHz (image size 384x288, 965 fps) with a gate count of 76K (internal memory excluded).
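
The two-pass model referred to above propagates weights across the image in a forward and a backward sweep so that every pixel contributes to every output value. As a rough, generic illustration of such two-pass weight propagation (not the paper's exact filter or its VLSI scheduling), here is a 1D recursive edge-aware filter in Python; the weight formula and parameter values are assumptions.

```python
import numpy as np

def two_pass_filter_1d(signal, guide, sigma_s=10.0, sigma_r=0.1):
    """Generic two-pass (left-to-right, then right-to-left) recursive
    edge-aware filter on a 1D row: each sample blends with its already
    filtered neighbour using a weight that drops across guide edges."""
    base = np.exp(-np.sqrt(2.0) / sigma_s)
    diff = np.abs(np.diff(guide))
    w = base ** (1.0 + diff * (sigma_s / sigma_r))    # propagation weights

    out = signal.astype(float).copy()
    for i in range(1, len(out)):                       # forward pass
        out[i] = (1.0 - w[i - 1]) * out[i] + w[i - 1] * out[i - 1]
    for i in range(len(out) - 2, -1, -1):              # backward pass
        out[i] = (1.0 - w[i]) * out[i] + w[i] * out[i + 1]
    return out

# Noisy step edge: smoothing is strong within flat regions but the edge survives.
row = np.r_[np.zeros(50), np.ones(50)] + 0.05 * np.random.randn(100)
print(two_pass_filter_1d(row, guide=row)[:5])
```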

Vision-based Obstacle Detection using Geometric Analysis (기하학적 해석을 이용한 비전 기반의 장애물 검출)

  • Lee Jong-Shill;Lee Eung-Hyuk;Kim In-Young;Kim Sun-I.
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.43 no.3 s.309
    • /
    • pp.8-15
    • /
    • 2006
  • Obstacle detection is an important task for many mobile robot applications. Methods using stereo vision or optical flow are computationally expensive, so this paper presents a vision-based obstacle detection method that uses only two view images from a single passive camera plus odometry and runs in real time. The proposed method detects obstacles by partial 3D reconstruction from the two views. Processing begins with feature extraction from each input image using Lowe's SIFT (Scale Invariant Feature Transform) and establishes correspondences between features across the input images. Using the extrinsic camera rotation and translation matrices provided by odometry, the 3D positions of these corresponding points are computed by triangulation; the triangulated points form a partial 3D reconstruction of the obstacles. The proposed method has been tested successfully on an indoor mobile robot and is able to detect obstacles within 75 ms.
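
The processing chain above, SIFT matching between two views followed by triangulation with the odometry-supplied rotation and translation, can be sketched with present-day OpenCV calls as below. The OpenCV API is an assumption used only for illustration (the 2006 system predates it), and the intrinsic matrix K and odometry-derived R, t are taken as given.

```python
import cv2
import numpy as np

def reconstruct_points(img1, img2, K, R, t):
    """Match SIFT features across two views and triangulate them, with the
    relative rotation R and translation t assumed to come from odometry
    and K the camera intrinsic matrix."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]          # Lowe's ratio test

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T   # 2xN pixel coords
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first view at origin
    P2 = K @ np.hstack([R, t.reshape(3, 1)])            # second view from odometry
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)      # homogeneous 4xN
    return (X_h[:3] / X_h[3]).T                          # Nx3 world points
```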

Development of the Practical System for the Automated Damage Assessment (재해 피해조사 자동화를 위한 실용시스템 구축)

  • Jin, Kyeonghyeok;Kim, Youngbok;Choi, Woojung;Shim, Jaehyun
    • Journal of Korean Society of societal Security
    • /
    • v.1 no.2
    • /
    • pp.73-78
    • /
    • 2008
  • Recently, large-scale natural disasters such as floods and typhoons caused by climate change have been occurring all over the world and causing severe damage. Among the various efforts to reduce and recover from damage, advanced information technology and remote sensing techniques are now being applied to disaster management. In this study, a real-time automated damage estimation system using information technology and spatial imagery was developed for prompt and accurate estimation of disaster damage. The system automatically estimates the damage to public facilities such as roads, rivers, and bridges from spatial imagery of disaster sites, including ground-based digital images, aerial photos, and satellite images. The damage amounts are analyzed from these images in a Web-GIS based analysis system, so digital damage reports such as disaster information sheets and damage maps can be produced promptly and accurately. This system can be a useful tool for prompt damage estimation and efficient disaster recovery.

Controlling robot by image-based visual servoing with stereo cameras

  • Fan, Jun-Min;Won, Sang-Chul
    • Proceedings of the Korea Society of Information Technology Applications Conference
    • /
    • 2005.11a
    • /
    • pp.229-232
    • /
    • 2005
  • In this paper, an image-based "approach-align-grasp" visual servo control design is proposed for the object grasping problem, based on a stand-alone binocular system. The basic idea is to treat the vision system as a sensor dedicated to the task and included in the servo control loop, and to perform automatic grasping following the classical split of the task into a preparation stage and an execution stage; once the image-based control model is established, the execution stage can be carried out automatically. The proposed visual servo control scheme ensures convergence of the image features to the desired trajectories by means of the image Jacobian matrix, which is proved via Lyapunov stability theory. We also stress the importance of projective-invariant object/gripper alignment: the alignment between two solids in 3D projective space can be represented in a view-invariant way and mapped directly to an image set-point without any knowledge of the camera parameters, so the accuracy of the task is not affected by discrepancies between the Euclidean setups of the preparation and execution stages. From the projective alignment the set-point is computed, and the robot gripper moves to the desired position under the image-based control law; a constant image Jacobian is adopted online. The method described herein integrates vision, robotics, and automatic control, overcoming the disadvantages of differing Euclidean setups and providing a control law for the binocular stand-alone case. Experimental simulation shows that this image-based approach is effective in precisely aligning the robot end-effector with the object.
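
The convergence argument above rests on the classical image-based visual servoing law: the commanded velocity is proportional to the pseudo-inverse of the (here constant) image Jacobian applied to the feature error, v = -λ J⁺ (s - s*). A minimal sketch of that law follows; the Jacobian and feature values are random placeholders, not the paper's binocular model.

```python
import numpy as np

def ibvs_velocity(s, s_star, J, gain=0.5):
    """Classical image-based visual servo law: velocity command that drives
    the feature error e = s - s* toward zero, v = -gain * pinv(J) @ e,
    using a constant image Jacobian J."""
    error = np.asarray(s, float) - np.asarray(s_star, float)
    return -gain * np.linalg.pinv(J) @ error

# Illustrative numbers only: 4 image features controlled by a 3-DOF motion.
J = np.random.randn(4, 3)            # constant Jacobian, as adopted in the paper
s, s_star = np.random.rand(4), np.random.rand(4)
print(ibvs_velocity(s, s_star, J))   # commanded velocity (3-vector)
```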
