• Title/Abstract/Keyword: Vision Platform


불안정판을 이용한 평형감각 훈련시스템 개발 (Development of the Training System for Equilibrium Sense Using the Unstable Platform)

  • 박용군;유미;권대규;홍철운;김남균
    • 한국정밀공학회지 / Vol. 22, No. 8 / pp.192-198 / 2005
  • In this paper, we propose a new training system for improving equilibrium sense using an unstable platform. The equilibrium sense, which provides orientation with respect to gravity, integrates vision, somatosensory, and vestibular function to maintain the balance of the human body. To improve equilibrium sense, we developed training software, such as a block game and a ping-pong game, using Visual C++. The training system consists of an unstable platform, a computer interface, and the software. The unstable platform is a simple elliptical structure containing a tilt sensor, a wireless RF module, and a power supply. To evaluate the effect of balance training, we measured parameters such as the movement time to the target, the duration the cursor was held within the on-screen target, and the error between a sine curve and the acquired data. As a result, the movement time to the target and the duration within the target improved with repeated equilibrium-sense training. We conclude that the system is reliable for evaluating equilibrium sense and could be applied clinically as an effective balance-training system.
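
As a rough illustration of the evaluation step above, the error between a target sine curve and the acquired tilt data can be computed as an RMS value. This is a minimal sketch, not the authors' code; the amplitude and frequency parameters (`amp`, `freq`) are illustrative assumptions:

```python
import numpy as np

def tracking_error(t, measured, amp=1.0, freq=0.5):
    """RMS error between a target sine curve and acquired tilt data."""
    target = amp * np.sin(2 * np.pi * freq * t)  # reference trajectory
    return float(np.sqrt(np.mean((measured - target) ** 2)))

t = np.linspace(0.0, 4.0, 200)
perfect = 1.0 * np.sin(2 * np.pi * 0.5 * t)   # subject follows the curve exactly
offset = perfect + 0.1                        # constant 0.1 tilt bias
print(tracking_error(t, perfect))             # → 0.0
print(round(tracking_error(t, offset), 3))    # → 0.1
```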

이족보행 안전성을 위한 골반기구의 제어시스템 설계 (Control System Design of Pelvis Platform for Biped Walking Stability)

  • 김수현;양태규
    • 제어로봇시스템학회논문지 / Vol. 15, No. 3 / pp.306-314 / 2009
  • The pelvis platform is the mechanical part that diminishes disturbances from the lower body and maintains a balanced posture. When a biped robot walks, many disturbances and irregular vibrations are generated and transmitted to the upper body. Because the upper body and head house important machines and instruments, such as the CPU, controller units, and vision system, the upper part should be isolated from these disturbances and vibrations so that it functions properly and, ultimately, biped stability improves. The platform has three rotational degrees of freedom and maintains a level posture through a feedback control system. Several sensors are fused for more accurate estimation, and the control system, which integrates synchronization and active filtering, is simulated in a virtual environment.

A Real-Time Virtual Re-Convergence Hardware Platform

  • Kim, Jae-Gon;Kim, Jong-Hak;Ham, Hun-Ho;Kim, Jueng-Hun;Park, Chan-Oh;Park, Soon-Suk;Cho, Jun-Dong
    • JSTS:Journal of Semiconductor Technology and Science / Vol. 12, No. 2 / pp.127-138 / 2012
  • In this paper, we propose a real-time virtual re-convergence hardware platform designed to reduce the visual fatigue caused by stereoscopy. Our key idea for reducing visual fatigue is to apply virtual re-convergence based on an optimized disparity map that contains more depth information in the negative-disparity area than in the positive area. Our virtual re-convergence hardware platform, which consists of image rectification, disparity estimation, depth post-processing, and virtual view control, runs in real time at 60 fps on a single Xilinx Virtex-5 FPGA chip.
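
At its core, re-convergence shifts one stereo view horizontally so the zero-disparity (convergence) plane moves, pulling content out of the fatiguing negative-disparity range. A minimal software sketch of that view-shift step (not the paper's FPGA pipeline; in practice the per-frame `shift` would be derived from the disparity map):

```python
import numpy as np

def reconverge(view, shift):
    """Shift one stereo view horizontally by `shift` pixels.

    Shifting a view moves the zero-disparity (convergence) plane of the
    stereo pair; vacated columns are filled with zeros here for simplicity.
    """
    out = np.zeros_like(view)
    if shift > 0:
        out[:, shift:] = view[:, :-shift]   # move content to the right
    elif shift < 0:
        out[:, :shift] = view[:, -shift:]   # move content to the left
    else:
        out[:] = view
    return out

right = np.arange(12).reshape(3, 4)
shifted = reconverge(right, 1)   # each row moves one pixel to the right
```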

실시간 차선 이탈 경고 및 Smart Night Vision을 위한 HDR Camera Platform 구현에 관한 연구 (A Study on Implementation for Real-time Lane Departure Warning System & Smart Night Vision Based on HDR Camera Platform)

  • 박화범;박지오;김영길
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2017년도 춘계학술대회 / pp.123-126 / 2017
  • Recently developed information and communication (IT) technologies have had a major influence on the automotive market as well. In recent years, devices incorporating IT have been installed in vehicles for driver safety and convenience. However, along with the advantage of increased convenience, they have also brought the disadvantage of more traffic accidents caused by driver distraction. To prevent such accidents, safety systems of various types and approaches need to be developed. In this paper, we propose a method for implementing a multi-function camera-based driving-safety system that provides pedestrian and lane-departure warnings without using a radar sensor or stereo video, together with an analysis of the lane-departure-warning software results.


Sentiment Analysis From Images - Comparative Study of SAI-G and SAI-C Models' Performances Using AutoML Vision Service from Google Cloud and Clarifai Platform

  • Marcu, Daniela;Danubianu, Mirela
    • International Journal of Computer Science & Network Security / Vol. 21, No. 9 / pp.179-184 / 2021
  • In our study, we performed sentiment analysis on images. For this purpose, we used 153 images containing people, animals, buildings, landscapes, cakes, and objects, divided into two categories: images suggesting a positive emotion and images suggesting a negative one. To classify the images into these two categories, we created two models. The SAI-G model was created with Google's AutoML Vision service; the SAI-C model was created on the Clarifai platform. The data were labeled in a preprocessing stage, and for the SAI-C model we created the concepts POSITIVE (POZITIV) and NEGATIVE (NEGATIV). To evaluate the performance of the two models, we used a series of evaluation metrics: Precision, Recall, the ROC (Receiver Operating Characteristic) curve, the Precision-Recall curve, the Confusion Matrix, Accuracy Score, and Average Precision. Precision and Recall for the SAI-G model are both 0.875 at a confidence threshold of 0.5, while for the SAI-C model we obtained much lower scores at the same threshold: Precision = 0.727 and Recall = 0.571. The results indicate lower classification performance for the SAI-C model compared with the SAI-G model, with the exception of the Precision for the POSITIVE concept, which is 1.000.
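
The Precision and Recall figures above follow the standard definitions. A small self-contained sketch (with made-up toy labels, not the study's data) shows how such scores are computed from true and predicted concepts:

```python
def precision_recall(y_true, y_pred, positive="POSITIVE"):
    """Precision and recall for one target concept."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy example: 4 positive and 2 negative ground-truth images.
y_true = ["POSITIVE", "POSITIVE", "POSITIVE", "POSITIVE", "NEGATIVE", "NEGATIVE"]
y_pred = ["POSITIVE", "POSITIVE", "NEGATIVE", "POSITIVE", "NEGATIVE", "POSITIVE"]
p, r = precision_recall(y_true, y_pred)
print(p, r)  # → 0.75 0.75
```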

Zynq를 이용한 비전 및 모션 컨트롤러 통합모듈 구현 (Implementation of Integration Module of Vision and Motion Controller using Zynq)

  • 문용선;노상현;이영필
    • 한국전자통신학회논문지 / Vol. 8, No. 1 / pp.159-164 / 2013
  • Recently, many integrated solutions combining vision and motion control, both key elements of automation systems, have been developed. However, most of these solutions either integrate vision processing and motion control over a network or combine them as a two-chip solution on a single module. In this study, we implemented a one-chip solution that integrates a vision and motion controller using the Zynq-7000, a recently released extensible processing platform. For motion control, we adopted EtherCAT, an industrial Ethernet protocol with open-standard Ethernet compatibility that guarantees real-time control while handling large volumes of data.

OnBoard Vision Based Object Tracking Control Stabilization Using PID Controller

  • Mariappan, Vinayagam;Lee, Minwoo;Cho, Juphil;Cha, Jaesang
    • International Journal of Advanced Culture Technology / Vol. 4, No. 4 / pp.81-86 / 2016
  • In this paper, we propose a simple and effective vision-based tracking-controller design for autonomous object tracking with a multicopter. A multicopter-based automatic tracking system is usually unstable when the object moves: the tracking process cannot determine the object's position exactly, so the system cannot immediately follow the object in the direction of its movement and instead searches for it again from its initial, or home, position. In this paper, PID control is used to improve the stability of the tracking system, so that object tracking becomes more stable than before, as reflected in the tracking error. A computer-vision and control strategy is applied to detect a diverse set of moving objects on a Raspberry Pi-based platform, and a software-defined PID controller is designed to control the yaw, throttle, and pitch of the multicopter in real time. Finally, based on a series of experimental results, we conclude that PID control makes the tracking system more stable in real time.
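
A software-defined PID controller of the kind described can be sketched in a few lines. This is the generic textbook form, not the authors' implementation; the gains and the first-order plant in the toy loop below are illustrative assumptions:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy yaw loop: drive the camera's yaw toward the tracked object's bearing.
pid = PID(kp=2.0, ki=0.1, kd=0.05, dt=0.05)
yaw, target = 0.0, 10.0
for _ in range(200):
    u = pid.update(target - yaw)   # control signal from the angle error
    yaw += u * pid.dt              # simple first-order response of the vehicle
print(round(yaw, 2))               # settles near the 10.0-degree target
```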

A Parallel Implementation of Multiple Non-overlapping Cameras for Robot Pose Estimation

  • Ragab, Mohammad Ehab;Elkabbany, Ghada Farouk
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 11 / pp.4103-4117 / 2014
  • Image processing and computer vision algorithms are gaining wider attention in a variety of application areas such as robotics and man-machine interaction. Vision allows the development of more flexible, intelligent, and less intrusive approaches than most other sensor systems. In this work, we determine the location and orientation of a mobile robot, which is crucial for performing its tasks. To operate in real time, the various vision routines need to be sped up. Therefore, we present and evaluate a method for introducing parallelism into the multiple non-overlapping camera pose-estimation algorithm proposed in [1]. In that algorithm, the problem is solved in real time using multiple non-overlapping cameras and the Extended Kalman Filter (EKF). Four cameras, arranged in two back-to-back pairs, are mounted on the platform of a moving robot. An important benefit of using multiple cameras for robot pose estimation is the ability to resolve vision uncertainties such as the bas-relief ambiguity. The proposed method is based on algorithmic skeletons for low, medium, and high levels of parallelization. The analysis shows that using a multiprocessor system enhances system performance by about 87%. In addition, the proposed design is scalable, which is necessary in this application, where the number of features changes repeatedly.

어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM (3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner)

  • 최윤원;최정원;이석규
    • 제어로봇시스템학회논문지 / Vol. 21, No. 7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser-scanner data. The performance of SLAM has been improved by various estimation methods, multi-function sensors, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile-robot applications because RGB-D systems with multiple cameras are large and slow at computing depth information for omni-directional images. In this paper, we used a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a fixed distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm to real maps.

Implementation of a High-speed Template Matching System for Wafer-vision Alignment Using FPGA

  • Jae-Hyuk So;Minjoon Kim
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 8 / pp.2366-2380 / 2024
  • In this study, a high-speed template matching system is proposed for wafer-vision alignment. The proposed system is designed to rapidly locate markers in semiconductor equipment used for wafer-vision alignment. We optimized and implemented a template-matching algorithm for the high-speed processing of high-resolution wafer images. Owing to the simplicity of wafer markers, we removed unnecessary components in the algorithm and designed the system using a field-programmable gate array (FPGA) to implement high-speed processing. The hardware blocks were designed using the Xilinx ZCU104 board, and the pyramid and matching blocks were designed using programmable logic for accelerated operations. To validate the proposed system, we established a verification environment using stage equipment commonly used in industrial settings and reference-software-based validation frameworks. The output results from the FPGA were transmitted to the wafer-alignment controller for system verification. The proposed system reduced the data-processing time by approximately 30% and achieved a level of accuracy in detecting wafer markers that was comparable to that achieved by reference software, with minimal deviation. This system can be used to increase precision and productivity during semiconductor manufacturing processes.
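
The coarse-to-fine idea behind such a pyramid-plus-matching design can be sketched in software: run a full search on a downsampled image first, then refine the best coarse hit in a small full-resolution window. This is an illustrative SSD-based sketch, not the paper's FPGA design; the synthetic marker and image sizes are assumptions:

```python
import numpy as np

def downsample(img):
    """2x2 average pooling: one pyramid level."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w]
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

def ssd_match(img, tpl, rows, cols):
    """Best (row, col) over the candidate positions by sum of squared differences."""
    th, tw = tpl.shape
    best, best_pos = np.inf, None
    for r in rows:
        for c in cols:
            if r < 0 or c < 0 or r + th > img.shape[0] or c + tw > img.shape[1]:
                continue
            d = img[r:r + th, c:c + tw] - tpl
            score = float(np.sum(d * d))
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos

def pyramid_match(img, tpl, window=3):
    """Coarse full search at half resolution, then local full-resolution refinement."""
    ci, ct = downsample(img), downsample(tpl)
    r0, c0 = ssd_match(ci, ct,
                       range(ci.shape[0] - ct.shape[0] + 1),
                       range(ci.shape[1] - ct.shape[1] + 1))
    r0, c0 = 2 * r0, 2 * c0   # map the coarse hit back to full resolution
    return ssd_match(img, tpl,
                     range(r0 - window, r0 + window + 1),
                     range(c0 - window, c0 + window + 1))

# Plant a synthetic 8x8 marker at (20, 33) in a 64x64 wafer image.
rng = np.random.default_rng(0)
tpl = rng.random((8, 8))
img = np.zeros((64, 64))
img[20:28, 33:41] = tpl
print(pyramid_match(img, tpl))  # → (20, 33)
```

The coarse pass cuts the number of candidate positions roughly by a factor of four per level, which is the same trade-off a hardware pyramid block exploits.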