• Title/Abstract/Keyword: Vision based

Search results: 3,487 (processing time: 0.033 s)

Robot and Vision System Interface

  • 김선일;여인택;박찬웅
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / ICROS 1987 Korea Automatic Control Conference (KACC) Proceedings; Korea Institute of Technology, Chungnam; 16-17 Oct. 1987 / pp.101-104 / 1987
  • This paper describes a robot-vision system consisting of a robot, a vision system, a single-board computer, and an IBM-PC. The IBM-PC based system offers great flexibility in expansion for vision system interfacing; easy human interfacing and strong computing ability are further benefits. Interfacing between the components was carried out, and the calibration between the two coordinate systems is studied. The robot language for the robot-vision system was written in the C language. Users can also write job programs in C, with the robot- and vision-related functions residing in a library.

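The coordinate-system calibration mentioned in the abstract above can be illustrated with a small sketch. The following Python snippet is an assumption-laden illustration, not the paper's C implementation: it estimates a 2D affine mapping from image pixels to robot coordinates from a few taught point correspondences using least squares, and all point values are hypothetical.

```python
# Minimal sketch (not the paper's implementation): estimate a 2D affine
# transform that maps vision (pixel) coordinates to robot coordinates from
# a few taught point correspondences, using least squares.
import numpy as np

def calibrate_affine(pixel_pts, robot_pts):
    """Solve robot ~= A @ [u, v, 1] for a 2x3 affine matrix A."""
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    robot_pts = np.asarray(robot_pts, dtype=float)
    ones = np.ones((pixel_pts.shape[0], 1))
    X = np.hstack([pixel_pts, ones])                   # N x 3 design matrix
    A, *_ = np.linalg.lstsq(X, robot_pts, rcond=None)  # 3 x 2 solution
    return A.T                                         # 2 x 3 affine matrix

def pixel_to_robot(A, uv):
    u, v = uv
    return A @ np.array([u, v, 1.0])

if __name__ == "__main__":
    # Hypothetical teach points: pixel coordinates and matching robot XY (mm).
    pixels = [(100, 120), (400, 118), (250, 330), (90, 300)]
    robot  = [(10.0, 15.0), (70.0, 14.5), (40.2, 58.0), (8.1, 51.5)]
    A = calibrate_affine(pixels, robot)
    print("calibration matrix:\n", A)
    print("pixel (250, 200) ->", pixel_to_robot(A, (250, 200)))
```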

Adjustment Algorithms for the Measured Data of Stereo Vision Methods for Measuring the Height of Semiconductor Chips

  • 김영두;조태훈
    • Journal of the Semiconductor & Display Technology / Vol. 10, No. 2 / pp.97-102 / 2011
  • Many 2D vision algorithms have been applied to inspection. However, these 2D algorithms are limited in inspection applications that require 3D information, such as the height of semiconductor chips. Stereo vision is a well-known method for measuring the distance from the camera to the object, but it is difficult to apply directly to inspection because of its measurement error. In this paper, we propose two adjustment methods to reduce the error of the height data measured by stereo vision. The weight-value-based model minimizes the mean squared error, while the average-value-based model uses a simpler concept to reduce the measurement error. The effectiveness of these algorithms is demonstrated through experiments measuring the height of semiconductor chips.
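As a rough illustration of the adjustment idea summarized in the entry above, the sketch below converts disparity to depth with Z = f*B/d and then applies two simple corrections to repeated measurements: a plain average and an inverse-variance weighted average chosen to reduce mean squared error. The focal length, baseline, and noise values are assumptions, and the snippet is not the authors' algorithm.

```python
# Rough illustration under assumed values: depth from stereo disparity,
# followed by an average-based and a weight-based adjustment of repeated
# measurements of the same chip height.
import numpy as np

FOCAL_PX = 1200.0    # assumed focal length in pixels
BASELINE_MM = 40.0   # assumed stereo baseline in mm

def depth_from_disparity(disparity_px):
    return FOCAL_PX * BASELINE_MM / disparity_px

def average_adjust(measurements):
    return float(np.mean(measurements))

def weighted_adjust(measurements, variances):
    # Inverse-variance weighting: low-noise readings contribute more.
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * measurements) / np.sum(w))

if __name__ == "__main__":
    # Hypothetical repeated disparity readings of one chip corner.
    disparities = np.array([60.2, 59.8, 60.5, 61.0])
    variances   = np.array([0.10, 0.05, 0.20, 0.40])   # assumed per-reading noise
    heights = depth_from_disparity(disparities)
    print("raw depths (mm):     ", np.round(heights, 2))
    print("average adjustment:  ", round(average_adjust(heights), 2))
    print("weighted adjustment: ", round(weighted_adjust(heights, variances), 2))
```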

Barriers to Low Vision Services and Challenges Faced by The Providers in Pakistan

  • Javed, Momina;Afghani, Tayyab;Zafar, Kunza
    • Journal of Korean Clinical Health Science / Vol. 3, No. 3 / pp.399-408 / 2015
  • Objective. The study had two objectives: first, to identify the barriers, as perceived by patients and providers, to accessing low vision services, and second, to identify the challenges faced by the main providers. Study design. Structured questionnaire-based interviews of patients and providers. Methodology. To find the barriers to access of low vision services, interviews based on a structured questionnaire were conducted for two patient groups. The first group consisted of 97 visually impaired individuals attending the department of low vision services at Al-Shifa Trust Eye Hospital Rawalpindi, while the second group included 56 visually impaired individuals attending four rehabilitation centers/schools for the blind in Rawalpindi/Islamabad. To identify the barriers as perceived by the main providers of low vision services and the challenges faced by them, interviews based on a structured questionnaire were conducted with 19 low vision service providers. Results. From the patients' point of view, the major barrier to low vision services was the inability to visit the hospital/rehabilitation center alone (29.8% in the hospital group and 33.9% in the rehabilitation centers group), while lack of social support, lack of family support, cost of travelling, long distance, affordability, hesitation in using devices, and lack of satisfaction were other important barriers. From the providers' point of view, the major barrier to uptake of services was the need for repeated follow-ups. Optometrists were the main providers of low vision services, accounting for 47.4% of providers. The major challenge faced by the providers was motivating patients to use low vision devices. Conclusion. According to patients, the major barrier to low vision services is the inability to visit the hospital alone, while according to providers it is the need for repeated follow-up that proves the major barrier to uptake of services. Motivation is the major challenge faced by providers, the majority of whom are optometrists.

Performance Evaluation of Efficient Vision Transformers on Embedded Edge Platforms

  • 이민하;이성재;김태현
    • Journal of the Institute of Embedded Engineering of Korea (IEMEK) / Vol. 18, No. 3 / pp.89-100 / 2023
  • Recently, on-device artificial intelligence (AI) solutions using mobile devices and embedded edge devices have emerged in various fields, such as computer vision, to address network traffic burdens, low-energy operation, and security problems. Although vision transformer deep learning models have outperformed conventional convolutional neural network (CNN) models in computer vision, they require more computation and parameters than CNN models and thus are not directly applicable to embedded edge devices with limited hardware resources. Many researchers have proposed model compression methods or lightweight architectures for vision transformers; however, only a few studies evaluate how such compression techniques affect performance. To address this problem, this paper presents a performance evaluation of vision transformers on embedded platforms. We investigated the behaviors of three vision transformers: DeiT, LeViT, and MobileViT. Each model's performance was evaluated by accuracy and inference time on edge devices using the ImageNet dataset. We assessed the effects of quantization on latency improvement and accuracy degradation by profiling the proportion of response time occupied by major operations. In addition, we evaluated the performance of each model on GPU- and EdgeTPU-based edge devices. In our experiments, LeViT showed the best performance on CPU-based edge devices, DeiT-small showed the highest performance improvement on GPU-based edge devices, and only the MobileViT models showed improvement on EdgeTPU. Summarizing the profiling results, the degree of performance improvement of each vision transformer model depended strongly on the proportion of operations that could be optimized on the target edge device. In summary, to apply vision transformers to on-device AI solutions, both proper operation composition and optimizations specific to the target edge device must be considered.
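A minimal sketch of the kind of compression-and-latency measurement discussed in the entry above is shown below, assuming PyTorch and the timm model zoo are available. It applies post-training dynamic quantization to the Linear layers of a DeiT-small model and times CPU inference; it is not the paper's benchmarking code, and GPU or EdgeTPU deployment would require additional toolchains.

```python
# Sketch: dynamic INT8 quantization of a small vision transformer's Linear
# layers, with a crude CPU latency comparison. Assumes the `timm` package
# and its `deit_small_patch16_224` model are installed.
import time
import torch
import timm

def mean_latency_ms(model, runs=20):
    model.eval()
    x = torch.randn(1, 3, 224, 224)          # dummy ImageNet-sized input
    with torch.no_grad():
        for _ in range(3):                    # warm-up iterations
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs * 1000.0

if __name__ == "__main__":
    fp32 = timm.create_model("deit_small_patch16_224", pretrained=False)
    int8 = torch.quantization.quantize_dynamic(
        fp32, {torch.nn.Linear}, dtype=torch.qint8)
    print(f"FP32 latency: {mean_latency_ms(fp32):.1f} ms")
    print(f"INT8 latency: {mean_latency_ms(int8):.1f} ms")
```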

Low cost, printed P-OLED displays for entry into the flexible display market

  • MacKenzie, J.Devin;Breeden, J.J.;Carter, S.A.;Chen, J.P.;Hinkle, P.;Jones, E.;Kreger, M.A.;Nakazawa, Y.;Roeloffs, R.;Vo, Vung;Wilkinson, M.
    • Korean Information Display Society: Conference Proceedings / Korean Information Display Society 2006, 6th International Meeting on Information Display / pp.641-643 / 2006
  • Add-Vision has developed a low-cost printing technology for P-OLED displays on flexible substrates that meets several essentials for a new technology: (1) functionality, including low DC voltage and a wide color gamut; (2) utilization of inexpensive tools; and (3) performance matching entry applications and markets. AVI's process is based on large-area printing of a combination of doped emissive and air-stable cathode inks, utilizing truly low-cost tools to create printed P-OLEDs.


STEREO VISION-BASED FORWARD OBSTACLE DETECTION

  • Jung, H.G.;Lee, Y.H.;Kim, B.J.;Yoon, P.J.;Kim, J.H.
    • International Journal of Automotive Technology / Vol. 8, No. 4 / pp.493-504 / 2007
  • This paper proposes a stereo vision-based forward obstacle detection and distance measurement method. In general, stereo vision-based obstacle detection methods in automotive applications can be classified into two categories: IPM (Inverse Perspective Mapping)-based and disparity histogram-based. The existing disparity histogram-based method was developed for stop-and-go applications. The proposed method extends its scope to highway applications by 1) replacing the fixed rectangular ROI (Region Of Interest) with a traveling-lane-based ROI, and 2) replacing peak detection with a constant threshold by peak detection using a threshold line and peakness evaluation. To increase the true positive rate while decreasing the false positive rate, multiple candidate peaks are generated and then verified by an edge feature correlation method. Tests with images captured on the highway showed that the proposed method overcomes problems of previous implementations and applies successfully to highway collision warning/avoidance conditions. In addition, comparisons with laser radar showed that vision sensors with a wider FOV (Field Of View) provide faster responses to cutting-in vehicles. Finally, we integrated the proposed method into a longitudinal collision avoidance system. Experimental results showed that braking activated by risk assessment, based on the state of the ego-vehicle and the measured distance to upcoming obstacles, could successfully prevent collisions.
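The disparity-histogram idea in the entry above can be sketched briefly. The snippet below uses assumed camera parameters and thresholds, not the paper's implementation: it histograms disparities inside a region of interest, keeps bins that rise above a linear threshold line, and converts the strongest surviving peak to a distance via Z = f*B/d.

```python
# Simplified disparity-histogram obstacle check with assumed parameters.
import numpy as np

FOCAL_PX = 800.0   # assumed focal length in pixels
BASELINE_M = 0.3   # assumed stereo baseline in metres

def obstacle_distance(disparity_map, roi_mask, max_disp=64,
                      thresh_offset=50.0, thresh_slope=5.0):
    disp = disparity_map[roi_mask > 0]
    disp = disp[disp > 0]                              # ignore invalid pixels
    hist, _ = np.histogram(disp, bins=max_disp, range=(0, max_disp))
    d_axis = np.arange(max_disp)
    threshold_line = thresh_offset + thresh_slope * d_axis  # more pixels expected up close
    peaks = np.where(hist > threshold_line)[0]
    if peaks.size == 0:
        return None                                    # nothing above the threshold line
    d_peak = peaks[np.argmax(hist[peaks])]             # strongest candidate peak
    return FOCAL_PX * BASELINE_M / max(d_peak, 1)

if __name__ == "__main__":
    # Toy disparity map: a 40x40 px "obstacle" at disparity 20 on a zero background.
    disp_map = np.zeros((120, 160), dtype=np.float32)
    disp_map[40:80, 60:100] = 20.0
    roi = np.ones_like(disp_map, dtype=np.uint8)
    print("estimated distance (m):", obstacle_distance(disp_map, roi))
```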

Multilevel Precision-Based Rational Design of Chemical Inhibitors Targeting the Hydrophobic Cleft of Toxoplasma gondii Apical Membrane Antigen 1 (AMA1)

  • Vetrivel, Umashankar;Muralikumar, Shalini;Mahalakshmi, B;K, Lily Therese;HN, Madhavan;Alameen, Mohamed;Thirumudi, Indhuja
    • Genomics & Informatics / Vol. 14, No. 2 / pp.53-61 / 2016
  • Toxoplasma gondii is an intracellular apicomplexan parasite and a causative agent of toxoplasmosis in humans. It causes encephalitis, uveitis, chorioretinitis, and congenital infection. T. gondii invades the host cell by forming a moving junction (MJ) complex. This complex formation is initiated by intermolecular interactions between two secretory parasitic proteins, namely apical membrane antigen 1 (AMA1) and rhoptry neck protein 2 (RON2), and is critically essential for the host invasion process. In this study, we propose two potential leads, NSC95522 and NSC179676, that can efficiently target the AMA1 hydrophobic cleft, a hotspot for targeting MJ complex formation. The proposed leads are the result of an exhaustive conformational-search-based virtual screen with multilevel precision scoring of the docking affinities. These two compounds surpassed all precision levels of docking as well as the stringent post-docking and cumulative molecular dynamics evaluations. Moreover, the backbone flexibility of hotspot residues in the hydrophobic cleft, previously reported to be essential for accommodative binding of RON2 to AMA1, was also highly perturbed by these compounds. Furthermore, binding free energy calculations revealed a significant affinity of these two compounds for AMA1. Machine learning approaches also predicted these two compounds to possess relevant activities. Hence, these two leads, NSC95522 and NSC179676, may prove to be potential inhibitors targeting AMA1-RON2 complex formation towards combating toxoplasmosis.

A VISION SYSTEM IN ROBOTIC WELDING

  • Absi Alfaro, S. C.
    • The Korean Welding and Joining Society: Conference Proceedings / The Korean Welding and Joining Society 2002, Proceedings of the International Welding/Joining Conference-Korea / pp.314-319 / 2002
  • The Automation and Control Group at the University of Brasilia is developing an automatic welding station based on an industrial robot and a controllable welding machine. Several techniques were applied in order to improve the quality of the welding joints. This paper deals with the implementation of a laser-based computer vision system to guide the robotic manipulator during the welding process. Currently the robot is taught to follow a prescribed trajectory, which is recorded and repeated over and over, relying on the repeatability specification from the robot manufacturer. The objective of the computer vision system is to monitor the actual trajectory followed by the welding torch and to evaluate deviations from the desired trajectory. The position errors are then transferred to a control algorithm that actuates the robotic manipulator to cancel the trajectory errors. The computer vision system consists of a CCD camera attached to the welding torch, a laser-emitting diode circuit, a PC-based frame grabber card, and a computer vision algorithm. The laser circuit establishes a sharp luminous reference line whose images are captured through the video camera. The raw image data are then digitized and stored in the frame grabber card for further processing using specifically written algorithms. These image-processing algorithms give the actual welding path, the relative position between the pieces, and the required corrections. Two case studies are considered: the first is the joining of two flat metal pieces, and the second is concerned with joining a cylindrical piece to a flat surface. An implementation of this computer vision system using parallel processing is being studied.

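To make the image-processing step of the entry above concrete, the following sketch (with assumed camera geometry and scale, not the authors' algorithms) finds the laser stripe row in each image column, fits a nominal straight line through it, and takes the column with the largest deviation as the joint, from which a lateral correction for the torch is derived.

```python
# Illustrative laser-stripe seam localisation with assumed geometry.
import numpy as np

MM_PER_PIXEL = 0.1        # assumed lateral scale at the work distance
DESIRED_JOINT_COL = 80    # assumed column where the torch should sit

def joint_lateral_error(gray_image):
    """gray_image: 2D array with a roughly horizontal, bright laser stripe."""
    stripe_rows = np.argmax(gray_image, axis=0).astype(float)  # stripe row per column
    cols = np.arange(gray_image.shape[1])
    slope, intercept = np.polyfit(cols, stripe_rows, 1)         # nominal stripe line
    residual = np.abs(stripe_rows - (slope * cols + intercept))
    joint_col = int(np.argmax(residual))                        # largest break in the line
    return (joint_col - DESIRED_JOINT_COL) * MM_PER_PIXEL, joint_col

if __name__ == "__main__":
    # Toy image: a bright stripe on row 60 with a dip to row 70 at column 95 (the joint).
    img = np.zeros((120, 160), dtype=np.uint8)
    img[60, :] = 255
    img[60, 95] = 0
    img[70, 95] = 255
    error_mm, col = joint_lateral_error(img)
    print(f"joint at column {col}, lateral error {error_mm:+.1f} mm")
```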

Feature Extraction for Vision Based Micromanipulation

  • Jang, Min-Soo;Lee, Seok-Joo;Park, Gwi-Tae
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / ICROS 2002 ICCAS / pp.41.5-41 / 2002
  • This paper presents a feature extraction algorithm for vision-based micromanipulation. In order to guarantee accurate micromanipulation, most micromanipulation systems use a vision sensor. Vision data from an optical microscope or high-magnification lens carry vast amounts of information; however, characteristics of micro images such as emphasized contours, texture, and noise make it difficult to apply macro image processing algorithms to micro images. Grasping-point extraction is a very important task in micromanipulation because inaccurate grasping points can cause breakdown of the micro gripper or loss of micro objects. To solve those problems and extract grasping points for micromanipulation...

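One generic way to obtain opposing grasp points, sketched here only as an illustration of the problem described in the entry above (the paper's own feature-extraction algorithm is not reproduced), is to segment the object, fit a minimum-area rectangle, and grasp across its narrow dimension. The threshold and toy object below are assumptions.

```python
# Generic grasp-point sketch: segment the object, take its minimum-area
# bounding rectangle, and return the midpoints of the two long sides so the
# gripper closes across the narrow dimension. Uses the OpenCV 4.x API.
import cv2
import numpy as np

def grasp_points(gray_image, thresh=128):
    _, mask = cv2.threshold(gray_image, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    obj = max(contours, key=cv2.contourArea)          # assume largest blob is the object
    box = cv2.boxPoints(cv2.minAreaRect(obj))          # 4 corners of the rotated box
    e01 = np.linalg.norm(box[0] - box[1])               # length of edge 0-1 (== edge 2-3)
    e12 = np.linalg.norm(box[1] - box[2])               # length of edge 1-2 (== edge 3-0)
    if e01 >= e12:                                       # edges 0-1 and 2-3 are the long sides
        return (box[0] + box[1]) / 2, (box[2] + box[3]) / 2
    return (box[1] + box[2]) / 2, (box[3] + box[0]) / 2

if __name__ == "__main__":
    # Toy micro image: a bright elongated object on a dark background.
    img = np.zeros((200, 200), dtype=np.uint8)
    cv2.rectangle(img, (60, 90), (150, 110), 255, -1)
    p1, p2 = grasp_points(img)
    print("grasp points:", np.round(p1), np.round(p2))
```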

Light Source Target Detection Algorithm for Vision-based UAV Recovery

  • Won, Dae-Yeon;Tahk, Min-Jea;Roh, Eun-Jung;Shin, Sung-Sik
    • International Journal of Aeronautical and Space Sciences / Vol. 9, No. 2 / pp.114-120 / 2008
  • In the vision-based recovery phase, terminal guidance for the blended-wing UAV requires highly accurate visual information. This paper presents a light source target design and detection algorithm for vision-based UAV recovery. We propose a recovery target design with red and green LEDs; this frame provides the relative position between the target and the UAV. The target detection algorithm includes HSV-based segmentation, morphology, and blob processing, techniques employed to give efficient detection results in day and night net recovery operations. The performance of the proposed target design and detection algorithm is evaluated through ground-based experiments.
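The detection chain named in the abstract above (HSV segmentation, morphology, blob processing) can be sketched with OpenCV as follows; the colour thresholds and minimum blob area are assumptions rather than the authors' tuned values.

```python
# Rough sketch of LED target detection: HSV segmentation, morphological
# opening, and connected-component (blob) centroids. Thresholds are assumed.
import cv2
import numpy as np

def led_centroids(bgr_image, lower_hsv, upper_hsv, min_area=20):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)               # HSV segmentation
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove speckle noise
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)  # blob processing
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

if __name__ == "__main__":
    # Toy frame with one green blob; green hue sits near 60 in OpenCV's 0-179 range.
    frame = np.zeros((120, 160, 3), dtype=np.uint8)
    cv2.circle(frame, (100, 60), 8, (0, 255, 0), -1)            # BGR green "LED"
    green = led_centroids(frame,
                          lower_hsv=np.array([45, 100, 100]),
                          upper_hsv=np.array([75, 255, 255]))
    print("green LED centroids:", [tuple(np.round(c, 1)) for c in green])
```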