• Title/Summary/Keyword: Vision Information


The Visual Distribution Map Based on the Geographic Information System for Ocular Health State (지리정보체계를 이용한 눈 건강수준의 시각적 분포도)

  • Kim, Hyojin;Kim, Hyi Jin;Park, Chang Won;Lee, Eun-Hee;Kim, Hee Ju;Ryu, Jungmook
    • Journal of Korean Ophthalmic Optics Society / v.19 no.4 / pp.493-498 / 2014
  • Purpose: This study utilized the Geographic Information System (GIS), one of the representative methods for visualizing spatial distributions, to show the distribution of visual acuity among middle and high school students across the 16 cities and provinces of Korea. Method: Data from the National Health and Nutrition Examination Survey (NHANES) for 2009 to 2011 were analysed in a population-based cross-sectional design. The subjects were 1,049 students aged 13 to 18 for whom uncorrected visual acuity was recorded: 549 male (52.3%) and 500 female (47.7%). Subjects were grouped by the 16 cities and provinces, the average visual acuity of each region was computed, and regional differences were examined with a spatial analysis method. Results: Average uncorrected visual acuity differed significantly by sex (p=0.001). However, neither the male nor the female students' average visual acuity showed a statistically significant difference by region across the 16 cities and provinces. To display the regional differences in students' visual acuity with a visual distribution method, the GIS was used for mapping. Conclusions: Mapping regional differences in visual acuity with GIS provides a visually effective distribution map.
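
A minimal sketch of the mapping step described above, assuming a hypothetical CSV of per-student records and a Korean provincial boundary file; the pandas/GeoPandas calls are standard, but the file names and column names are illustrative assumptions, not the study's actual data layout.

```python
# Sketch: regional mean uncorrected visual acuity as a GIS choropleth.
# Input files and column names are hypothetical stand-ins for the survey data.
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt

# Per-student survey records: region code, sex, uncorrected visual acuity.
students = pd.read_csv("knhanes_2009_2011_students.csv")  # hypothetical file

# Mean uncorrected acuity per city/province (16 regions).
regional_mean = students.groupby("region", as_index=False)["uncorrected_va"].mean()

# Provincial boundaries (hypothetical shapefile with a matching 'region' field).
provinces = gpd.read_file("korea_provinces.shp")
choropleth = provinces.merge(regional_mean, on="region")

# Shade each region by its mean acuity to obtain the visual distribution map.
ax = choropleth.plot(column="uncorrected_va", cmap="viridis", legend=True)
ax.set_title("Mean uncorrected visual acuity by region")
plt.savefig("va_distribution_map.png", dpi=200)
```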

A Time Synchronization Scheme for Vision/IMU/OBD by GPS (GPS를 활용한 Vision/IMU/OBD 시각동기화 기법)

  • Lim, JoonHoo;Choi, Kwang Ho;Yoo, Won Jae;Kim, La Woo;Lee, Yu Dam;Lee, Hyung Keun
    • Journal of Advanced Navigation Technology / v.21 no.3 / pp.251-257 / 2017
  • Recently, hybrid positioning systems combining GPS, vision sensors, and inertial sensors have drawn much attention for estimating accurate vehicle positions. Since accurate multi-sensor fusion requires efficient time synchronization, this paper proposes an efficient method to obtain time-synchronized measurements from a vision sensor, an inertial sensor, and an OBD device based on GPS time information. In the proposed method, time and position information is obtained from the GPS receiver, attitude information from the inertial sensor, and speed information from the OBD device. The obtained time, position, speed, and attitude information is converted to color values, which are inserted into several corner pixels of the corresponding image frame. An experiment with real measurements was performed to evaluate the feasibility of the proposed method.
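
The paper's core idea, stamping GPS-referenced sensor values into corner pixels of each frame, can be sketched as below. The byte packing and pixel positions are assumptions for illustration only; the abstract does not publish the authors' exact encoding.

```python
# Sketch: stamp GPS time (and other sensor words) into corner pixels of a frame.
# The packing scheme here is illustrative, not the paper's published layout.
import struct
import numpy as np

def encode_words_to_pixels(frame: np.ndarray, words: list) -> np.ndarray:
    """Pack each float as 4 bytes and write them into top-row corner pixels."""
    stamped = frame.copy()
    for i, value in enumerate(words):
        b = struct.pack("<f", value)                # 4 bytes per value
        stamped[0, 2 * i, :3] = (b[0], b[1], b[2])  # one pixel holds 3 bytes
        stamped[0, 2 * i + 1, 0] = b[3]             # neighbor holds the 4th
    return stamped

def decode_words_from_pixels(stamped: np.ndarray, count: int) -> list:
    """Inverse of encode_words_to_pixels."""
    out = []
    for i in range(count):
        p0 = stamped[0, 2 * i, :3]
        b = bytes([p0[0], p0[1], p0[2], stamped[0, 2 * i + 1, 0]])
        out.append(struct.unpack("<f", b)[0])
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)
gps_time, speed_kmh = 123456.75, 42.5              # from GPS receiver and OBD
stamped = encode_words_to_pixels(frame, [gps_time, speed_kmh])
assert decode_words_from_pixels(stamped, 2) == [123456.75, 42.5]
```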

Quantization and Calibration of Color Information From Machine Vision System for Beef Color Grading (소고기 육색 등급 자동 판정을 위한 기계시각 시스템의 칼라 보정 및 정량화)

  • Kim, Jung-Hee;Choi, Sun;Han, Na-Young;Ko, Myung-Jin;Cho, Sung-Ho;Hwang, Heon
    • Journal of Biosystems Engineering / v.32 no.3 / pp.160-165 / 2007
  • This study was conducted to evaluate beef color using a color machine vision system. A machine vision system has the advantage of measuring a larger area than a colorimeter and can also measure other quality factors, such as fat distribution. However, machine vision measurements are affected by the system components. To measure beef color with the machine vision system, the effect of the camera's color-balancing control was tested and a calibration model was developed. A neural network for color calibration, trained on reference color patches, showed a high correlation with a colorimeter in L*a*b* coordinates and adapted well to various measurement environments. The trained network also showed a very high correlation with the colorimeter when measuring beef color.
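
A minimal sketch of the calibration idea, assuming camera RGB readings of reference patches paired with a colorimeter's L*a*b* values; scikit-learn's MLPRegressor stands in for the paper's neural network, and all patch values below are illustrative.

```python
# Sketch: learn a camera-RGB -> L*a*b* calibration from reference color patches.
# MLPRegressor stands in for the paper's network; the data is illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Camera RGB readings of reference patches (rows) and the colorimeter's
# ground-truth L*a*b* values for the same patches.
camera_rgb = np.array([[250, 20, 30], [40, 200, 60], [30, 40, 220],
                       [128, 128, 128], [240, 240, 240], [10, 10, 10]],
                      dtype=float) / 255.0
colorimeter_lab = np.array([[54.3, 80.8, 69.9], [72.6, -70.5, 57.1],
                            [35.0, 60.2, -93.5], [53.6, 0.0, 0.0],
                            [94.8, 0.0, 0.0], [2.8, 0.0, 0.0]])

# Small MLP: one hidden layer suffices for a smooth 3D -> 3D mapping.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(camera_rgb, colorimeter_lab)

# Calibrate a new camera measurement (e.g., a pixel averaged over lean tissue).
beef_pixel_rgb = np.array([[180, 60, 55]], dtype=float) / 255.0
print(model.predict(beef_pixel_rgb))  # estimated L*, a*, b*
```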

Smart Vision Sensor for Satellite Video Surveillance Sensor Network (위성 영상감시 센서망을 위한 스마트 비젼 센서)

  • Kim, Won-Ho;Im, Jae-Yoo
    • Journal of Satellite, Information and Communications / v.10 no.2 / pp.70-74 / 2015
  • In this paper, a satellite-communication-based video surveillance system consisting of ultra-small aperture terminals with small smart vision sensors is proposed. Events such as forest fires, smoke, and intruder movement are detected automatically in the field, and false alarms are minimized by intelligent, highly reliable video-analysis algorithms. The smart vision sensor must satisfy requirements for high confidence, hardware endurance, seamless communication, and easy maintenance. To meet these requirements, a real-time digital signal processor, a camera module, and a satellite transceiver are integrated into a smart-vision-sensor-based ultra-small aperture terminal, and high-performance video analysis and image coding algorithms are embedded. The video-analysis functions and their performance were verified through computer simulation and prototype tests of the vision sensor, confirming the system's practicality.
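
The abstract does not detail the on-terminal analysis, so the sketch below uses plain frame differencing as a stand-in for the kind of motion/intruder detector such a sensor could run; the OpenCV calls are standard, while the thresholds and input source are illustrative.

```python
# Sketch: simple motion-event detector of the kind a field-deployed smart vision
# sensor might run. Frame differencing stands in for the paper's (unpublished)
# video-analysis algorithms; threshold values are illustrative.
import cv2

MIN_EVENT_AREA = 500    # ignore tiny changes to reduce false alarms
DIFF_THRESHOLD = 25     # per-pixel intensity change considered "motion"

cap = cv2.VideoCapture("field_camera.mp4")  # hypothetical input stream
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) >= MIN_EVENT_AREA for c in contours):
        print("event detected -> encode frame and send over satellite link")
    prev_gray = gray
cap.release()
```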

Development of a Vision-based Blank Alignment Unit for Press Automation Process (프레스 자동화 공정을 위한 비전 기반 블랭크 정렬 장치 개발)

  • Oh, Jong-Kyu;Kim, Daesik;Kim, Soo-Jong
    • Journal of Institute of Control, Robotics and Systems / v.21 no.1 / pp.65-69 / 2015
  • A vision-based blank alignment unit for a press automation line is introduced in this paper. A press is a machine tool that changes the shape of a blank by applying pressure and is widely used in industries requiring mass production. In traditional press automation lines, a mechanical centering unit consisting of guides and ball bearings is employed to align a blank before a robot inserts it into the press. However, it can only align blanks of limited sizes and shapes, and it cannot be applied to processes where two or more blanks are inserted simultaneously. To overcome these problems, we developed a vision-based centering unit for press automation lines. The specification of the vision system was determined from information about the blanks and the required accuracy, and vision application software with pattern recognition, camera calibration, and monitoring functions was designed to reliably detect multiple blanks. Through experiments with an industrial robot, we validated that the proposed system can align blanks of various sizes and shapes and successfully detect two or more blanks inserted simultaneously.
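
The paper's pattern-recognition software is not published; as one plausible realization, the sketch below locates a blank by OpenCV template matching and converts the pixel offset into a robot correction. The image files, confidence gate, and mm-per-pixel scale are assumptions for illustration.

```python
# Sketch: locate a blank by template matching, then convert the pixel offset to
# a robot pick correction. File names and the mm-per-pixel scale are assumed.
import cv2

MM_PER_PIXEL = 0.5                       # from camera calibration (assumed)

scene = cv2.imread("press_table.png", cv2.IMREAD_GRAYSCALE)      # hypothetical
template = cv2.imread("blank_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation is robust to mild lighting changes.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)

if score > 0.8:                          # confidence gate (illustrative)
    h, w = template.shape
    center_x = top_left[0] + w / 2.0
    center_y = top_left[1] + h / 2.0
    # Offset of the detected blank from the nominal pick point, in mm.
    nominal = (scene.shape[1] / 2.0, scene.shape[0] / 2.0)
    dx_mm = (center_x - nominal[0]) * MM_PER_PIXEL
    dy_mm = (center_y - nominal[1]) * MM_PER_PIXEL
    print(f"blank offset: dx={dx_mm:.1f} mm, dy={dy_mm:.1f} mm")  # to robot
```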

Calibration for Color Measurement of Lean Tissue and Fat of the Beef

  • Lee, S.H.;Hwang, H.
    • Agricultural and Biosystems Engineering / v.4 no.1 / pp.16-21 / 2003
  • In agriculture, machine vision systems have been widely used to automate inspection processes, especially quality grading. Though machine vision is very effective in quantifying geometric quality factors, it has been deficient in quantifying color information. This study was conducted to evaluate beef color using a machine vision system. Though measuring beef color with machine vision has the advantage of covering the whole lean-tissue area at once, compared with a colorimeter, the measurements are sensitive to system components such as the type of camera and the lighting conditions. The effect of the camera's color-balancing control was investigated, and a color calibration process based on a multi-layer back-propagation (BP) neural network was developed. The calibration network was trained on reference color patches and showed a high correlation with the L*a*b* coordinates of a colorimeter. The proposed calibration process adapted successfully to various measurement environments, such as different types of cameras and light sources, and results comparing it with MLR-based calibration are also presented. The calibration network was successfully applied to measuring beef color. However, it is suggested that the reflectance properties of the calibration reference materials and the test materials should be considered to achieve more accurate color measurement.
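
For the MLR baseline mentioned at the end of the abstract, a least-squares sketch is given below; a quadratic RGB expansion is one common design choice rather than the paper's exact formulation, and the patch values are illustrative.

```python
# Sketch: multiple linear regression (MLR) baseline for RGB -> L*a*b*
# calibration, the comparison method named in the abstract. Data is illustrative.
import numpy as np

def expand(rgb: np.ndarray) -> np.ndarray:
    """Augment RGB with quadratic cross terms and a bias (a common MLR design)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([r, g, b, r * g, g * b, b * r,
                            r**2, g**2, b**2, np.ones(len(rgb))])

camera_rgb = np.array([[0.98, 0.08, 0.12], [0.16, 0.78, 0.24],
                       [0.12, 0.16, 0.86], [0.50, 0.50, 0.50],
                       [0.94, 0.94, 0.94], [0.04, 0.04, 0.04],
                       [0.85, 0.65, 0.10], [0.55, 0.10, 0.60],
                       [0.10, 0.60, 0.60], [0.70, 0.30, 0.30]])
colorimeter_lab = np.array([[53.9, 80.0, 67.0], [71.0, -68.0, 55.0],
                            [34.0, 58.0, -90.0], [53.4, 0.0, 0.0],
                            [93.0, 0.0, 0.0], [4.0, 0.0, 0.0],
                            [70.0, 10.0, 70.0], [40.0, 60.0, -40.0],
                            [58.0, -35.0, -10.0], [45.0, 35.0, 15.0]])

# Least-squares fit of the expanded design matrix to the colorimeter values.
coef, *_ = np.linalg.lstsq(expand(camera_rgb), colorimeter_lab, rcond=None)
lab_pred = expand(np.array([[0.72, 0.25, 0.22]])) @ coef   # lean-tissue pixel
print(lab_pred)   # estimated L*, a*, b*
```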


A completely non-contact recognition system for bridge unit influence line using portable cameras and computer vision

  • Dong, Chuan-Zhi;Bas, Selcuk;Catbas, F. Necati
    • Smart Structures and Systems / v.24 no.5 / pp.617-630 / 2019
  • Currently, most vision-based structural identification research focuses either on estimating structural input (vehicle location) or on estimating structural output (structural displacement and strain responses). Global structural condition assessment using vision-based structural output alone cannot give a normalized response that is independent of vehicle type and load configuration. Combining vision-based structural input with structural output from non-contact sensors overcomes this disadvantage while reducing cost, time, and labor, including cable wiring work. Conventional traffic monitoring sometimes requires closing the bridge to traffic, which may cause other severe problems such as traffic jams and accidents. In this study, a completely non-contact structural identification system is proposed, targeting the identification of the bridge unit influence line (UIL) under operational traffic. Both the structural input (vehicle location) and the structural output (displacement responses) are obtained using only cameras and computer vision techniques, with multiple cameras synchronized by audio signal pattern recognition. The proposed system is verified with a laboratory experiment on a scaled bridge model under a small moving truck load and with a field application on a campus footbridge under a moving golf cart load. The UILs are successfully identified in both cases, and the pedestrian loads estimated with the extracted UIL fall within acceptable ranges.
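
The UIL extraction can be posed as a least-squares problem: each time sample relates the measured displacement to the influence-line ordinate at the vehicle's position. The sketch below is a simplified single-axle discretization with linear interpolation between bridge nodes, not the authors' full formulation; data is synthetic.

```python
# Sketch: identify a unit influence line (UIL) by least squares from
# vision-based vehicle positions x(t) and displacements d(t). Single known
# axle load; a simplification of the paper's formulation.
import numpy as np

def identify_uil(x, d, span, n_nodes, axle_load):
    """Solve A @ uil = d, where row t spreads the axle load over the two
    bridge nodes bracketing the axle position x[t]."""
    A = np.zeros((len(x), n_nodes))
    node_spacing = span / (n_nodes - 1)
    for t, xt in enumerate(x):
        i = min(int(xt / node_spacing), n_nodes - 2)
        frac = xt / node_spacing - i
        A[t, i] = axle_load * (1.0 - frac)      # linear interpolation weights
        A[t, i + 1] = axle_load * frac
    uil, *_ = np.linalg.lstsq(A, d, rcond=None)
    return uil                                   # displacement per unit load

# Synthetic check: a sine-shaped true UIL should be recovered from noisy data.
span, n_nodes = 20.0, 21
x = np.linspace(0.0, span, 400)                  # vehicle track from video
true_uil = np.sin(np.pi * x / span) * 1e-3       # mm-scale ordinates
d = 50.0 * true_uil + np.random.normal(0, 1e-6, x.size)  # 50 kN axle + noise
print(identify_uil(x, d, span, n_nodes, axle_load=50.0)[:5])
```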

Smart Bus System using BLE Beacon and Computer Vision (BLE 비콘과 컴퓨터비전을 적용한 스마트 버스 시스템)

  • You, Minjung;Rhee, Eugene
    • Journal of IKEEE / v.22 no.2 / pp.250-257 / 2018
  • In this paper, a smart bus system is proposed that automates public bus fare payment by applying BLE beacons and computer vision, and that provides bus route information, real-time location information, and a getting-off alarm. The system uses beacons to recognize buses approaching the stop and to identify the bus being boarded; at boarding, payment is processed automatically using the distance from the beacon, the information the beacon provides, and face comparison. After payment, the system provides the user with the route of the boarded bus and the bus's real-time location, and when the user sets an alarm with this information, the alarm is activated when the bus leaves the stop.
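
The distance-from-beacon step typically uses the standard log-distance path-loss model to turn RSSI into a range estimate. The sketch below shows that model gating the payment trigger; the calibrated TX power, path-loss exponent, and 1.5 m threshold are illustrative assumptions, not values from the paper.

```python
# Sketch: estimate distance to a bus-mounted BLE beacon from RSSI with the
# log-distance path-loss model, then gate automatic fare payment on proximity.
# Calibration constants and the proximity threshold are illustrative.
TX_POWER_DBM = -59        # measured RSSI at 1 m (beacon calibration value)
PATH_LOSS_EXPONENT = 2.2  # ~2 in free space, higher around a bus/stop

def rssi_to_distance_m(rssi_dbm: float) -> float:
    """Log-distance path-loss model: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def should_start_payment(rssi_dbm: float, face_matched: bool) -> bool:
    """Trigger payment only when the rider is near the boarding beacon and
    face comparison has confirmed identity."""
    return rssi_to_distance_m(rssi_dbm) < 1.5 and face_matched

print(rssi_to_distance_m(-59))                    # ~1.0 m at calibration point
print(should_start_payment(-62, face_matched=True))  # ~1.4 m -> True
```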

Development of Processing System for Audio-vision System Based on Auditory Input (청각을 이용한 시각 재현 시스템의 개발)

  • Kim, Jung-Hun;Kim, Deok-Kyu;Won, Chul-Ho;Lee, Jong-Min;Lee, Hee-Jung;Lee, Na-Hee;Yoon, Su-Young
    • Journal of Biomedical Engineering Research / v.33 no.1 / pp.25-31 / 2012
  • An audio-vision system was developed for visually impaired people and its usability was verified. The subject group comprised ten normal volunteers with a mean age of 28.8 years and a male-to-female ratio of 7:3. Usability was verified as follows: volunteers first learned to discriminate obstacle distance and up-down position; after this training with the audio-vision system, indoor and outdoor walking tests were performed. The tests were scored on up-down and lateral discrimination, distance recognition, and walking without collision, with each parameter rated from 1 to 5. The score was 93.5 ± SD (range, 86 to 100) out of 100. In this study, we converted visual information to auditory information with the audio-vision system and verified the possibility of applying it to the daily life of visually impaired people.
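
The abstract describes the visual-to-auditory conversion only at a high level; as a stand-in, the sketch below shows one common sonification scheme (column → time, row → pitch, brightness → loudness), similar in spirit to systems like The vOICe. All parameters are illustrative.

```python
# Sketch: convert a grayscale image to a left-to-right audio sweep, a common
# visual-to-auditory mapping (column -> time, row -> pitch, brightness ->
# loudness). A stand-in for the paper's unpublished conversion scheme.
import numpy as np

SAMPLE_RATE = 22050
COLUMN_SECONDS = 0.02            # each image column becomes 20 ms of sound

def sonify(image: np.ndarray) -> np.ndarray:
    """image: (rows, cols) array in [0, 1]; returns a mono waveform."""
    rows, cols = image.shape
    freqs = np.geomspace(3000.0, 300.0, rows)    # top row = high pitch
    n = int(SAMPLE_RATE * COLUMN_SECONDS)
    t = np.arange(n) / SAMPLE_RATE
    chunks = []
    for c in range(cols):                        # sweep left to right
        tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
        chunk = (image[:, c][:, None] * tones).sum(axis=0)
        chunks.append(chunk / max(rows, 1))      # keep amplitude bounded
    return np.concatenate(chunks)

# A bright diagonal edge becomes a steadily falling pitch sweep.
img = np.eye(32)
wave = sonify(img)
print(wave.shape, wave.min(), wave.max())
```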

Collision Avoidance Using Omni Vision SLAM Based on Fisheye Image (어안 이미지 기반의 전방향 영상 SLAM을 이용한 충돌 회피)

  • Choi, Yun Won;Choi, Jeong Won;Im, Sung Gyu;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.210-216 / 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). The method estimates a robot's avoidance path and speed from the locations of obstacles, which are detected using Lucas-Kanade optical flow in images obtained through fisheye cameras mounted on the robot. Conventional methods generate avoidance paths by constructing an artificial force field around obstacles found in the complete map obtained through SLAM, or avoid obstacles using speed commands based on robot modeling and the robot's curved movement path, and recent research has improved these approaches by optimizing the algorithms for actual robots. However, robots that use omni-directional vision SLAM to acquire surrounding information all at once have been studied comparatively little. A robot running the proposed algorithm avoids obstacles along the avoidance path estimated from the map obtained through omni-directional vision SLAM on fisheye images, and then returns to its original path. In particular, it avoids obstacles at various speeds and directions, using acceleration components derived from motion information obtained by analyzing the surroundings of the obstacles. The experimental results confirm the reliability of the avoidance algorithm through comparison between the positions produced by the proposed algorithm and the real positions recorded while avoiding the obstacles.
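
The obstacle-detection front end named in the abstract, Lucas-Kanade optical flow, is available directly in OpenCV; the sketch below tracks features between consecutive (fisheye) frames and flags fast-moving points as obstacle candidates. The flow-magnitude threshold and video source are illustrative, and this omits the SLAM and path-planning stages.

```python
# Sketch: detect moving obstacles with pyramidal Lucas-Kanade optical flow,
# the front end named in the abstract. Threshold and source are illustrative.
import cv2
import numpy as np

cap = cv2.VideoCapture("fisheye_robot_camera.mp4")   # hypothetical stream
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                     points, None)
    good_new = new_points[status.flatten() == 1]
    good_old = points[status.flatten() == 1]
    flow = np.linalg.norm(good_new - good_old, axis=-1)
    if np.any(flow > 5.0):   # large apparent motion -> candidate obstacle
        print("obstacle candidate -> estimate avoidance path and speed")
    prev_gray = gray
    points = good_new.reshape(-1, 1, 2)
cap.release()
```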