• Title/Summary/Keyword: Vision Information


Development of Input Device for Positioning of Multiple DOFs (다자유도 위치설정을 위한 입력장치의 개발)

  • Kim, Dae-Sung;Kim, Jin-Oh
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.8
    • /
    • pp.851-858
    • /
    • 2009
  • In this study, we propose a new input device that uses vision technology for positioning of multiple DOFs. The input device is composed of multiple Tags on a transparent table and a vision camera below the table. The vision camera detects LEDs at the bottom of each Tag to derive its ID, position, and orientation. This information is used to determine the position and orientation of remote target DOFs. The proposed approach is reliable and effective, especially when the corresponding DOFs belong to many independent individuals. We show an application example with a SCARA robot to demonstrate the flexibility and extendability of the approach.
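
The abstract gives no implementation details, so the following is only a minimal OpenCV sketch of how a Tag's LEDs might be detected and converted into a 2D position and orientation; the function name `detect_tag_pose`, the brightness threshold, and the two-LED orientation rule are assumptions, not the authors' method.

```python
import cv2
import numpy as np

def detect_tag_pose(frame_bgr, min_area=20):
    """Rough sketch: find bright LED blobs and derive a 2D pose (assumed pipeline)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)   # LEDs appear saturated
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        m = cv2.moments(c)
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    if len(centers) < 2:
        return None
    pts = np.array(centers)
    position = pts.mean(axis=0)                      # tag position: centroid of LED centers
    dx, dy = pts[1] - pts[0]
    orientation = np.degrees(np.arctan2(dy, dx))     # orientation: angle of one LED pair
    return position, orientation
```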

An Automatic Visual Alignment System for an Exposure System (노광시스템을 위한 자동 정렬 비젼시스템)

  • Cho, Tai-Hoon;Seo, Jae-Yong
    • Journal of the Semiconductor & Display Technology
    • /
    • v.6 no.1 s.18
    • /
    • pp.43-48
    • /
    • 2007
  • For exposure systems, very accurate alignment between the mask and the substrate is indispensable. In this paper, an automatic alignment system using machine vision for exposure systems is described. The machine vision algorithms are described in detail, including extraction of an alignment mark's center position and camera calibration. Methods for extracting the alignment parameters are also presented, along with compensation techniques to reduce alignment time. The alignment system was implemented with a vision system and motion control stages, and its performance has been extensively tested with satisfactory results. The performance evaluation shows an alignment accuracy of 1 μm within a total alignment time of about 2~3 seconds, including stage moving time.
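
As a rough illustration of extracting an alignment mark's center, the sketch below uses normalized cross-correlation template matching and a calibrated pixel scale; `find_mark_center`, `pixel_to_stage`, and all parameters are hypothetical, since the abstract does not specify the algorithms used.

```python
import cv2

def find_mark_center(image_gray, mark_template):
    """Sketch: locate an alignment mark by normalized cross-correlation (assumed method)."""
    result = cv2.matchTemplate(image_gray, mark_template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    h, w = mark_template.shape
    center = (top_left[0] + w / 2.0, top_left[1] + h / 2.0)
    return center, score

def pixel_to_stage(center_px, pixel_size_um, stage_origin_um):
    """Convert the pixel position to stage coordinates using a calibrated scale."""
    return (stage_origin_um[0] + center_px[0] * pixel_size_um,
            stage_origin_um[1] + center_px[1] * pixel_size_um)
```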


Kicks from The Penalty Mark of The Humanoid Robot using Computer Vision (컴퓨터 비전을 이용한 휴머노이드 로봇의 축구 승부차기)

  • Han, Chung-Hui;Lee, Jang-Hae;Jang, Se-In;Park, Choong-Shik;Lee, Ho-Jun;Moon, Seok-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2009.10a
    • /
    • pp.264-267
    • /
    • 2009
  • Existing autonomous humanoid robot penalty-kick systems use both distance sensors and vision sensors. In this paper, we propose a human-like penalty-kick system that uses only a vision sensor. To this end, we adopt a robot assembly configuration in which the vision sensor can move flexibly, together with intelligent 3D spatial analysis. Knowledge representation and inference use NEO, an in-house knowledge processing system, and the overall system, VisionNEO, embeds the OpenCV image processing library into NEO for intelligent processing.
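
The abstract only states that OpenCV is embedded in the NEO system; as an illustrative stand-in for the kind of vision step such a penalty-kick system needs, the sketch below segments a ball by color and returns its image position and apparent radius. The HSV range, the assumption of an orange ball, and the function name `find_ball` are guesses, not the authors' pipeline.

```python
import cv2
import numpy as np

def find_ball(frame_bgr, lower_hsv=(5, 120, 120), upper_hsv=(20, 255, 255)):
    """Sketch: segment an (assumed) orange ball and return its image centre and radius."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    (x, y), r = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    return (x, y), r   # the radius can serve as a crude monocular distance cue
```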


Eye Blink Detection and Alarm System to Reduce Symptoms of Computer Vision Syndrome

  • Atheer K. Alsaif;Abdul Rauf Baig
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.5
    • /
    • pp.193-206
    • /
    • 2023
  • In recent years, with the increased adoption of digital transformation and long hours spent in front of digital devices, clinicians have observed that prolonged use of visual display units (VDUs) can result in a symptom complex defined as computer vision syndrome (CVS). The syndrome has many contributing causes, such as refractive errors, poor computer design, workplace ergonomics, and highly demanding visual tasks. This research focuses on alleviating one CVS symptom: dry eye caused by an infrequent blink rate during prolonged use of a smart device. It also examines the limitations of current tools and explores further use cases in which the solution can be applied according to the needs of each vertical.
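
The abstract does not state how blinks are detected; one common, simple substitute is to watch whether the eyes are visible from frame to frame using OpenCV's bundled Haar cascades and count open-to-closed transitions. The sketch below follows that substitute approach; the cascade choice, thresholds, and the `count_blinks` interface are assumptions, not the authors' tool.

```python
import time
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def count_blinks(capture, duration_s=60.0):
    """Sketch: count blinks as transitions from 'eyes visible' to 'eyes not visible'."""
    blinks, eyes_were_open = 0, False
    start = time.time()
    while time.time() - start < duration_s:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        eyes_open = False
        for (x, y, w, h) in faces:
            eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
            eyes_open = len(eyes) > 0
        if eyes_were_open and not eyes_open:
            blinks += 1            # eyes just closed -> count one blink
        eyes_were_open = eyes_open
    return blinks

# Usage (assumed): if count_blinks(cv2.VideoCapture(0)) falls below a healthy rate, raise an alarm.
```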

From Masked Reconstructions to Disease Diagnostics: A Vision Transformer Approach for Fundus Images (마스크된 복원에서 질병 진단까지: 안저 영상을 위한 비전 트랜스포머 접근법)

  • Toan Duc Nguyen;Gyurin Byun;Hyunseung Choo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.557-560
    • /
    • 2023
  • In this paper, we introduce a pre-training method leveraging the capabilities of the Vision Transformer (ViT) for disease diagnosis in conventional Fundus images. Recognizing the need for effective representation learning in medical images, our method combines the Vision Transformer with a Masked Autoencoder to generate meaningful and pertinent image augmentations. During pre-training, the Masked Autoencoder produces an altered version of the original image, which serves as a positive pair. The Vision Transformer then employs contrastive learning techniques with this image pair to refine its weight parameters. Our experiments demonstrate that this dual-model approach harnesses the strengths of both the ViT and the Masked Autoencoder, resulting in robust and clinically relevant feature embeddings. Preliminary results suggest significant improvements in diagnostic accuracy, underscoring the potential of our methodology in enhancing automated disease diagnosis in fundus imaging.
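
As a toy sketch of the described scheme (a masked reconstruction serving as the positive pair, with contrastive learning on a ViT-style encoder), the PyTorch snippet below uses deliberately tiny stand-in modules; the model sizes, the trivial convolutional "decoder", and the InfoNCE temperature are assumptions, not the authors' architecture.

```python
import torch
import torch.nn.functional as F
from torch import nn

class TinyEncoder(nn.Module):
    """Stand-in for a ViT encoder: patchify with a conv, then transformer layers."""
    def __init__(self, dim=128):
        super().__init__()
        self.patch = nn.Conv2d(3, dim, kernel_size=16, stride=16)   # 16x16 patches
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        tokens = self.patch(x).flatten(2).transpose(1, 2)            # (B, N, dim)
        return self.blocks(tokens).mean(dim=1)                       # global embedding

def mask_patches(x, ratio=0.5, patch=16):
    """Zero out a random subset of patches (crude masking)."""
    b, _, h, w = x.shape
    keep = (torch.rand(b, 1, h // patch, w // patch, device=x.device) > ratio).float()
    return x * keep.repeat_interleave(patch, 2).repeat_interleave(patch, 3)

def info_nce(z1, z2, tau=0.1):
    """Contrastive loss: matching (original, reconstruction) pairs are the positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0), device=z1.device))

# One pre-training step on a dummy batch of fundus-sized crops (assumed 224x224).
encoder, decoder = TinyEncoder(), nn.Conv2d(3, 3, 3, padding=1)   # toy "autoencoder" decoder
x = torch.randn(8, 3, 224, 224)
recon = decoder(mask_patches(x))          # positive pair produced from the masked input
loss = info_nce(encoder(x), encoder(recon))
loss.backward()
```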

A Study on Dynamically Visual System that Vision and Sense of Equilibrium are Fused (시각과 평형각이 융합된 다이나믹한 시각 시스템에 관한 연구)

  • 문용선;정남채
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.5 no.7
    • /
    • pp.1354-1360
    • /
    • 2001
  • The velocity distribution computed from camera images is used as visual information, and the visual velocity of an object obtained from it is fused with the sense of equilibrium (an angular velocity sensor) and evaluated experimentally. To obtain a stable image in an environment subject to external disturbances, the motion of the head (or of the camera itself) caused by those disturbances must be compensated by the motion of the eye. In this paper, we propose a gaze control algorithm that fuses vision and the sense of equilibrium in a disturbed environment, and the experimental results confirm that it yields smaller position deviation than gaze control using vision alone. This is because the motion of the camera platform is compensated by the angular velocity sensor, so the apparent image velocity is reduced and the tracking error decreases accordingly.
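
A minimal sketch of the fused gaze-control idea, assuming a simple control law: the gyro-measured platform rotation is cancelled feedforward, and the residual image slip measured by vision is nulled by feedback. The gain, units, and function name are illustrative only.

```python
import numpy as np

def gaze_rate_command(gyro_rate_dps, image_velocity_px, focal_px, k_vision=0.5):
    """Sketch of fused gaze control (assumed control law):
    cancel platform rotation from the gyro, then null residual image slip from vision."""
    slip_dps = np.degrees(np.arctan2(image_velocity_px, focal_px))  # image slip as an angular rate
    return -gyro_rate_dps - k_vision * slip_dps

# Example: platform turning at 10 deg/s with 4 px/frame residual slip at f = 600 px.
print(gaze_rate_command(10.0, 4.0, 600.0))
```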


Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes (가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발)

  • Jeon, Young-San;Choi, Jongeun;Lee, Jeong Oog
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.11
    • /
    • pp.1098-1102
    • /
    • 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use GPS (Global Positioning System) for localization with acceptable accuracy. However, as GPS is not available in indoor environments, a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we suggest a vision-based SLAM system that uses vision sensors and an AHRS (Attitude Heading Reference System) sensor. Feature points in images captured from the vision sensor are obtained by a GPU (Graphics Processing Unit) based SIFT (Scale-Invariant Feature Transform) algorithm. Those feature points are then combined with attitude information obtained from the AHRS to estimate the position of the small UAV. Based on the location information and color distribution, a Gaussian process model is generated, which serves as the map. The experimental results show that the position of a small unmanned aircraft is estimated properly and that the map of the environment is constructed by the proposed method. Finally, the reliability of the method is verified by comparing the estimated values with the actual values.
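
For the front end of such a system, the sketch below shows CPU SIFT feature extraction with OpenCV and a Gaussian process fit over estimated 2D positions using scikit-learn; the paper uses a GPU SIFT and its own GP formulation, so treat the kernel, noise level, and function names here as assumptions.

```python
import cv2
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def extract_features(image_gray):
    """Sketch: SIFT keypoints/descriptors (CPU version; the paper uses a GPU-based SIFT)."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image_gray, None)
    return keypoints, descriptors

def build_gp_map(positions_xy, observations):
    """Fit a Gaussian process over estimated 2D positions; the posterior mean acts as the map."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-3)
    gp.fit(np.asarray(positions_xy), np.asarray(observations))
    return gp    # query with gp.predict(grid_points) to render the map
```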

Vision-based Reduction of Gyro Drift for Intelligent Vehicles (지능형 운행체를 위한 비전 센서 기반 자이로 드리프트 감소)

  • Kyung, MinGi;Nguyen, Dang Khoi;Kang, Taesam;Min, Dugki;Lee, Jeong-Oog
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.7
    • /
    • pp.627-633
    • /
    • 2015
  • Accurate heading information is crucial for the navigation of intelligent vehicles. In outdoor environments, GPS is usually used for the navigation of vehicles. However, in GPS-denied environments such as dense building areas, tunnels, underground areas and indoor environments, non-GPS solutions are required. Yaw-rates from a single gyro sensor could be one of the solutions. In dealing with gyro sensors, the drift problem should be resolved. HDR (Heuristic Drift Reduction) can reduce the average heading error in straight line movement. However, it shows rather large errors in some moving environments, especially along curved lines. This paper presents a method called VDR (Vision-based Drift Reduction), a system which uses a low-cost vision sensor as compensation for HDR errors.
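
The abstract does not give the VDR equations; one plausible reading is a complementary filter that integrates the gyro yaw-rate and pulls the heading toward a vision-derived estimate when one is available. The sketch below encodes that reading; the blending factor `alpha` and the function signature are assumptions.

```python
def fuse_heading(gyro_yaw_rate_dps, vision_heading_deg, heading_deg, dt, alpha=0.98):
    """Sketch of a complementary filter (assumed form of the vision correction):
    integrate the gyro for short-term accuracy, pull toward the vision heading to bound drift."""
    predicted = heading_deg + gyro_yaw_rate_dps * dt
    if vision_heading_deg is None:           # vision unavailable this step
        return predicted
    return alpha * predicted + (1.0 - alpha) * vision_heading_deg
```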

Study on the Localization Improvement of the Dead Reckoning using the INS Calibrated by the Fusion Sensor Network Information (융합 센서 네트워크 정보로 보정된 관성항법센서를 이용한 추측항법의 위치추정 향상에 관한 연구)

  • Choi, Jae-Young;Kim, Sung-Gaun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.8
    • /
    • pp.744-749
    • /
    • 2012
  • In this paper, we show how to improve the accuracy of a mobile robot's localization by using sensor network information that fuses a machine vision camera, encoders, and an IMU sensor. The IMU heading is measured by a geomagnetic sensor based on the magnetic field, which is constantly affected by its surrounding environment. To increase accuracy, we therefore isolate a template of the ceiling with the vision camera, measure its angle using a pattern matching algorithm, and calibrate the IMU by comparing the measured angle with the IMU value to obtain an offset. The values used to estimate the robot's position (encoder, IMU, and vision-camera angle) are transferred to a host PC over a wireless network, and the host PC estimates the robot's location from all of them. As a result, we obtained more accurate position estimates than when using the IMU sensor alone.
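
One way to obtain the ceiling-based angle described above is to match features between a stored ceiling template and the current frame and read the rotation out of a similarity transform; the ORB matching and `cv2.estimateAffinePartial2D` used below are an assumed substitute for the paper's pattern matching algorithm.

```python
import math
import cv2
import numpy as np

def ceiling_heading_offset(template_gray, frame_gray):
    """Sketch: estimate the rotation between a stored ceiling template and the current view;
    comparing it with the IMU heading yields a calibration offset."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(template_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return math.degrees(math.atan2(M[1, 0], M[0, 0]))   # rotation angle in degrees
```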

Visual Cell OOK Modulation : A Case Study of MIMO CamCom (시각 셀 OOK 변조 : MIMO CamCom 연구 사례)

  • Le, Nam-Tuan;Jang, Yeong Min
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38C no.9
    • /
    • pp.781-786
    • /
    • 2013
  • Multiplexing information over parallel data channels, based on the RF MIMO concept, makes it possible to achieve considerable data rates over large transmission ranges with just a single transmitting element. Visual MIMO multiplexing techniques send independent bit streams from the multiple elements of a light transmitter array, and recording them over a group of camera pixels can further increase the data rate. The proposed system combines computer vision algorithms for tracking with OOK cell-frame modulation. The LED array is controlled to transmit a message as digital information using ON-OFF signaling (ON = bit 1, OFF = bit 0). A camera captures image frames of the array, which are then individually processed and sequentially decoded to retrieve the data. To demodulate the transmission, a motion tracking algorithm implemented with OpenCV (Open Source Computer Vision library) classifies the transmission pattern. One of the main advantages of the proposed architecture is that computer vision (CV) based image analysis can spatially separate the signals and remove interference from ambient light. This presents future challenges and opportunities for mobile communication networking research.
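
Assuming one OOK symbol per captured frame and that each LED's pixel region has already been tracked, demodulation can reduce to thresholding the mean intensity of each region; the sketch below illustrates that reading, with the threshold and data layout chosen arbitrarily.

```python
import numpy as np

def decode_ook(frames, led_rois, threshold=128):
    """Sketch of OOK demodulation (assumed one symbol per captured frame):
    each LED is read as bit 1 if its tracked region is bright, else bit 0."""
    bits_per_led = {led: [] for led in led_rois}
    for frame in frames:                         # frame: 2D grayscale numpy array
        for led, (x, y, w, h) in led_rois.items():
            roi = frame[y:y + h, x:x + w]
            bits_per_led[led].append(1 if roi.mean() > threshold else 0)
    return bits_per_led                          # parallel bit streams, one per LED (MIMO)
```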