• Title/Summary/Keyword: Vision data

End to End Autonomous Driving System using Out-layer Removal (Out-layer를 제거한 End to End 자율주행 시스템)

  • Seung-Hyeok Jeong;Dong-Ho Yun;Sung-Hun Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.1
    • /
    • pp.65-70
    • /
    • 2023
  • In this paper, we propose an autonomous driving system that uses an end-to-end model to reduce lane departure and traffic-light misrecognition in a vision sensor-based system. End-to-end learning can be extended to a variety of environmental conditions. Driving data were collected with a vision sensor-based model car, and two training sets were prepared from them: the original data and the data with out-layers (outliers) removed. Camera images were used as the input and speed and steering values as the output, and the end-to-end model was trained on these data. The reliability of the trained model was verified, and the learned model was then applied to the model car to predict steering angles from image data. The results show that the model trained on the out-layer-removed data outperforms the model trained on the original data.
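
As a rough illustration of the end-to-end setup described in this abstract (a camera image as input, steering and speed as output), the sketch below defines a small convolutional regression network in PyTorch. The architecture, input size, and layer names are assumptions made for illustration, not the network used in the paper; out-layer (outlier) removal would happen upstream, on the recorded (image, steering, speed) samples, before training.

```python
# Minimal sketch of an end-to-end driving model: camera image -> (steering, speed).
# The architecture and sizes are illustrative assumptions, not the paper's network.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional encoder for 3x66x200 camera frames (assumed input size).
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Regression head predicting the two outputs: steering angle and speed.
        self.head = nn.Sequential(
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 2),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = EndToEndDriver()
dummy_frame = torch.randn(1, 3, 66, 200)   # stand-in for one RGB camera frame
steering, speed = model(dummy_frame)[0]    # untrained; for shape illustration only
```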

Effects of the Sensory Impairment on Functioning Levels of the Elderly (노인의 감각장애와 기능상태에 관한 연구)

  • 송미순
    • Journal of Korean Academy of Nursing
    • /
    • v.23 no.4
    • /
    • pp.678-693
    • /
    • 1993
  • The purposes of this study were to describe the level of vision and hearing impairment, depression, and functional capacity among Korean institutionalized elderly and to examine the relationship between sensory impairment, depression, and functional capacity in these people. The final purpose was to test a cognitive function path model using sensory competencies as predictors. A convenience sample of 39 male and 90 female subjects with a mean age of 80.5 were the subjects of this study. The subjects were tested for cognitive function and for vision and hearing impairment. Physical function and social function were measured by observation of designated task performance by the subjects. Their level of depression was measured using a Geriatric Depression Scale administered through an interview. Individual subjective ratings of hearing and vision were marked by the subjects on a ladder scale. The results of the study showed that 48.8% of the subjects had a hearing impairment, 63.5% had a vision impairment, and 36.4% had both a vision and a hearing impairment. The four sensory groups (no sensory impairment, hearing impairment, vision impairment, hearing and vision impairment) were tested for differences in depression, physical function, social behavior, and cognitive function. The only significant difference found was in cognitive function, between the no sensory impairment group and the hearing and vision impairment group (F=3.25, p<.05). Subjective ratings of hearing showed a significant correlation with cognitive function (r=.34, p<.001) and with social behavior (r=.31, p<.001). There was no correlation between subjective vision ratings and cognitive function or social behavior. However, there was a significant correlation between vision and hearing (r=.49, p<.001). There was also a significant negative correlation between age and vision (r=-.21, p<.01) and between age and hearing (r=-.34, p<.001). There was a significant correlation between depression and physical function (r=-.32, p<.001), but there was no correlation between depression and cognitive function or social behavior. Based on the literature review and the results of this study, a path model of sensory competence -> cognitive function -> social behavior was developed and tested: perceived vision and perceived hearing were the exogenous variables, and cognitive function and social behavior were the endogenous variables in the model. The path analysis demonstrated an acceptable fit between the data and the model (GFI=.997, AGFI=.972, $\chi^2$=.72 (p=.396), RMSR=.019). There was a significant direct effect ($\beta$=.38) of perceived hearing on cognitive function and a significant direct effect ($\beta$=.32) of cognitive function on social behavior. The total effect of hearing on social behavior was $\beta$=.32, including the indirect effect ($\beta$=.12). However, perceived vision had little effect ($\beta$=-.08) on cognitive function. The path analysis result confirms that hearing levels influence cognitive function, and that both hearing and cognitive function levels influence social behavior, whereas vision has little effect on cognitive function or on social behavior. For the next study, a combined model of the previously developed environment -> depression -> physical and social function model and the present cognitive function model should be tested to further refine the functional capacity model. There is also a need for a longitudinal study of functional capacity and sensory competence in order to better understand how declining sensory competence influences functional capacity and how it affects increasing dependency and nursing needs in the elderly.
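
Under simplifying assumptions, the tested path model (perceived hearing/vision -> cognitive function -> social behavior) can be sketched as two regressions on standardized variables, with the indirect effect obtained as the product of the two path coefficients. The column names and the use of ordinary least squares below are illustrative assumptions; they are not the estimation procedure used in the study (which reports SEM fit indices such as GFI and AGFI).

```python
# Illustrative path analysis as two OLS regressions on standardized variables.
# Column names (hearing, vision, cognition, social) are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

def standardize(s: pd.Series) -> pd.Series:
    return (s - s.mean()) / s.std()

def path_analysis(df: pd.DataFrame):
    z = df[["hearing", "vision", "cognition", "social"]].apply(standardize)

    # Path 1: cognitive function regressed on perceived hearing and perceived vision.
    m1 = sm.OLS(z["cognition"], sm.add_constant(z[["hearing", "vision"]])).fit()

    # Path 2: social behavior regressed on cognitive function.
    m2 = sm.OLS(z["social"], sm.add_constant(z[["cognition"]])).fit()

    # Indirect effect of hearing on social behavior through cognitive function.
    indirect = m1.params["hearing"] * m2.params["cognition"]
    return m1.params, m2.params, indirect
```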

Lane Following Control of Vision Based Mobile Robot Using Neural Network (신경회로망을 이용한 비전기반 이동로봇의 경로추적제어)

  • Yang Seng-Ho;Shin Suk-Hun;Jang Young-Hak;Ryoo Young-Jae
    • Proceedings of the KIPE Conference
    • /
    • 2004.07a
    • /
    • pp.155-158
    • /
    • 2004
  • This paper describes lane-following control of a vision-based mobile robot that follows a guideline. In the image processing step, the raw image is converted to binary data and the binary data are summed along the vertical axis, so that each column sum serves as a specific parameter extracted from the lane image. These column sums of the image data are used as the inputs of a neural network, whose output is the optimized turn angle of the trained mobile robot. Using the neural network algorithm, it is shown that the mobile robot can move to the target point and follow the guideline quickly and effectively.
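
The pipeline described in this abstract can be sketched as follows: binarize the camera image, sum each column (the vertical-axis summation) to obtain a compact feature vector, and feed that vector to a small neural network whose output is a turn angle. The threshold, image size, and the untrained network below are illustrative assumptions, not the trained controller from the paper.

```python
# Sketch: binarized column sums of a camera frame -> small MLP -> turn angle.
import numpy as np

def column_sum_features(gray_image: np.ndarray, threshold: int = 128) -> np.ndarray:
    binary = (gray_image > threshold).astype(np.float32)   # binarization
    return binary.sum(axis=0) / binary.shape[0]            # normalized vertical sums

class TinyMLP:
    """One-hidden-layer MLP mapping column-sum features to a turn angle (untrained stand-in)."""
    def __init__(self, n_in: int, n_hidden: int = 16, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def predict(self, x: np.ndarray) -> float:
        h = np.tanh(x @ self.w1 + self.b1)
        return float((h @ self.w2 + self.b2).item())       # turn angle, e.g. in degrees

frame = np.random.randint(0, 256, (120, 160))              # stand-in for a camera frame
features = column_sum_features(frame)
angle = TinyMLP(n_in=features.size).predict(features)
```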

A Study of Using the Magnifying Lens to Detect the Detailed 3D Data in the Stereo Vision (양안입체시에서 3차원 정밀 데이터를 얻기 위한 확대경 사용에 관한 연구)

  • Cha, Kuk-Chan
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.10
    • /
    • pp.1296-1303
    • /
    • 2006
  • The range-based method obtains detailed 3D data easily, but the image-based method does not. In this paper, I suggest a new approach for obtaining detailed 3D data from a magnified stereo image. The main idea is to use a magnifying lens, which not only magnifies the object but also increases the depth resolution. The relation between the amplification of the disparity and the increase of the depth resolution is verified mathematically, and a method to improve the original 3D data is suggested.
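
The mathematical relation referred to in this abstract can be sketched from the standard depth-from-disparity formula. The notation below (focal length $f$, baseline $B$, disparity $d$, magnification $m$) is generic, and the derivation is a simplified illustration rather than the paper's exact analysis.

```latex
% Stereo depth from disparity: focal length f, baseline B, disparity d.
Z = \frac{fB}{d}
% With a magnifying lens of magnification m, the measured disparity becomes d' = m d
% (equivalently, the effective focal length becomes m f), so
Z = \frac{mfB}{d'}, \qquad
\left|\frac{\partial Z}{\partial d'}\right| = \frac{mfB}{d'^{2}} = \frac{Z^{2}}{mfB}
% A one-pixel disparity quantization \Delta d therefore corresponds to a depth step
% \Delta Z \approx Z^{2}\,\Delta d / (mfB), roughly m times finer than without the lens.
```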

Development of a Monitoring and Verification Tool for Sensor Fusion (센서융합 검증을 위한 실시간 모니터링 및 검증 도구 개발)

  • Kim, Hyunwoo;Shin, Seunghwan;Bae, Sangjin
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.22 no.3
    • /
    • pp.123-129
    • /
    • 2014
  • SCC (Smart Cruise Control) and AEBS (Autonomous Emergency Braking System) use various types of sensor data, so the reliability of the sensor data must be considered. In this paper, data from a radar and a vision sensor are fused by applying a Bayesian sensor fusion technique to improve the reliability of the sensor data. The paper then presents a sensor fusion verification tool developed to monitor the acquired sensor data and to verify the sensor fusion results efficiently. A parallel computing method was applied to reduce the verification time, and a series of simulation results obtained with this method are discussed in detail.
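
As a minimal sketch of the Bayesian fusion idea for two sensors measuring the same quantity (for example, the distance to a lead vehicle from radar and from the vision sensor), the snippet below fuses two independent Gaussian measurements by inverse-variance weighting. The measurement model and variance values are assumptions for illustration, not the paper's implementation.

```python
# Bayesian fusion of two independent Gaussian measurements of the same quantity.
# The fused mean is the inverse-variance weighted average; variances are assumed.
def fuse_gaussian(mu_radar: float, var_radar: float,
                  mu_vision: float, var_vision: float) -> tuple[float, float]:
    w_r = 1.0 / var_radar
    w_v = 1.0 / var_vision
    mu_fused = (w_r * mu_radar + w_v * mu_vision) / (w_r + w_v)
    var_fused = 1.0 / (w_r + w_v)   # fused uncertainty is smaller than either input
    return mu_fused, var_fused

# Example: radar reports 25.3 m (variance 0.2), vision reports 24.8 m (variance 0.8).
distance, uncertainty = fuse_gaussian(25.3, 0.2, 24.8, 0.8)
```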

A study of using the magnifying lens to detect the detail 3D data (정밀한 3차원 데이터를 얻기 위한 확대경 사용에 관한 연구)

  • Cha, Kuk-Chan
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.3
    • /
    • pp.41-47
    • /
    • 2006
  • The range-based method obtains detailed 3D data easily, but the image-based method does not. In this paper, a new approach that employs a magnifying lens to obtain detailed 3D data is suggested. The magnifying lens amplifies the disparity in the stereo vision system, and this amplification of the disparity increases the resolution of the depth. We verify this fact mathematically and experimentally and suggest a method for improving the original 3D data with the detailed 3D data.
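
To make the disparity-amplification argument concrete, the short numerical check below uses assumed camera parameters (not values from the paper) to show how the depth step corresponding to a one-pixel disparity change shrinks with the magnification factor.

```python
# Numerical illustration of the disparity-amplification effect on depth resolution.
# All parameters are assumed for illustration.
f_px = 800.0        # focal length in pixels
baseline_m = 0.10   # stereo baseline in meters
depth_m = 1.0       # depth of the observed surface point
pixel_step = 1.0    # disparity quantization (one pixel)

def depth_step(magnification: float) -> float:
    """Depth change corresponding to a one-pixel disparity step at depth_m."""
    return depth_m ** 2 * pixel_step / (magnification * f_px * baseline_m)

print(depth_step(1.0))   # ~0.0125 m (12.5 mm) without the magnifying lens
print(depth_step(4.0))   # ~0.0031 m (3.1 mm) with 4x disparity amplification
```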

Road marking classification method based on intensity of 2D Laser Scanner (신호세기를 이용한 2차원 레이저 스캐너 기반 노면표시 분류 기법)

  • Park, Seong-Hyeon;Choi, Jeong-hee;Park, Yong-Wan
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.5
    • /
    • pp.313-323
    • /
    • 2016
  • With the development of autonomous vehicles, there has been active research on advanced driver assistance systems that detect road markings using vision sensors and 3D laser scanners. However, vision sensors have the weakness that detection is difficult under severe illumination variance, such as at night, inside a tunnel, or in a shaded area, and the processing time is long because both vision sensors and 3D laser scanners produce a large amount of data. Accordingly, this paper proposes a road marking detection and classification method that uses a single 2D laser scanner. The method detects and classifies road markings based on accumulated distance data and intensity data acquired with the 2D laser scanner. Experiments using a real autonomous vehicle in a real environment showed that the calculation time decreased in comparison with the 3D laser scanner-based method, demonstrating the possibility of classifying road marking types with a single 2D laser scanner.
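
One way to picture the intensity-based idea is that road paint typically returns a higher intensity than bare asphalt, so each scan's points can be labeled by an intensity threshold and accumulated over successive scans as the vehicle moves. The threshold, data layout, and accumulation by a simple forward offset below are illustrative assumptions, not the paper's calibration or motion model.

```python
# Sketch: label 2D laser scanner returns as marking vs. road by intensity and
# accumulate the labeled points over consecutive scans.
import numpy as np

INTENSITY_THRESHOLD = 120.0   # assumed reflectivity split between paint and asphalt

def label_scan(ranges: np.ndarray, angles: np.ndarray, intensities: np.ndarray) -> np.ndarray:
    """Convert one scan to (x, y, is_marking) points in the sensor frame."""
    x = ranges * np.cos(angles)
    y = ranges * np.sin(angles)
    is_marking = intensities > INTENSITY_THRESHOLD
    return np.stack([x, y, is_marking.astype(float)], axis=1)

def accumulate(scans, forward_offsets):
    """Shift each labeled scan by the distance traveled and stack the results."""
    accumulated = []
    for (ranges, angles, intensities), offset in zip(scans, forward_offsets):
        pts = label_scan(ranges, angles, intensities)
        pts[:, 0] += offset                  # accumulate along the driving direction
        accumulated.append(pts)
    return np.concatenate(accumulated, axis=0)
```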

A Study on Three-Dimensional Model Reconstruction Based on Laser-Vision Technology (레이저 비전 기술을 이용한 물체의 3D 모델 재구성 방법에 관한 연구)

  • Nguyen, Huu Cuong;Lee, Byung Ryong
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.32 no.7
    • /
    • pp.633-641
    • /
    • 2015
  • In this study, we proposed a three-dimensional (3D) scanning system based on a laser-vision technique and a rotary mechanism for automatic 3D model reconstruction. The proposed scanning system consists of a laser projector, a camera, and a turntable. A new and simple method was proposed for laser-camera calibration. 3D point cloud data of the surface of the scanned object were fully collected by integrating the laser profiles extracted from the laser stripe images at the corresponding rotary angles of the rotary mechanism. The obscured laser profile problem was also solved by adding an additional camera at another viewpoint. From the collected 3D point cloud data, the 3D model of the scanned object was reconstructed based on a facet representation. The reconstructed 3D models showed the effectiveness and applicability of the proposed 3D scanning system for 3D model-based applications.
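
The integration step described here, merging laser profiles captured at successive turntable angles into one point cloud, amounts to rotating each profile back into a common frame about the turntable axis. The profile format and the choice of the z-axis as the rotation axis below are assumptions for illustration.

```python
# Sketch: merge laser profiles taken at different turntable angles into one point
# cloud by rotating each profile about the turntable axis (assumed to be z).
import numpy as np

def rotate_z(points: np.ndarray, angle_rad: float) -> np.ndarray:
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

def merge_profiles(profiles, angles_deg):
    """profiles[i]: (N_i, 3) laser profile at turntable angle angles_deg[i] (degrees)."""
    cloud = [rotate_z(p, np.deg2rad(a)) for p, a in zip(profiles, angles_deg)]
    return np.concatenate(cloud, axis=0)
```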

The Relationship between the Segment of Erector Spinae during a Core Stability Exercise according to Visual Control (코어 안정성 훈련 시 시각통제 유무에 따른 척추세움근의 분절 간 상관분석)

  • Yoon, Jung-Gyu
    • Journal of the Korean Society of Physical Medicine
    • /
    • v.8 no.3
    • /
    • pp.417-424
    • /
    • 2013
  • PURPOSE: We investigated the relationship between the segments of the erector spinae during a core stability exercise performed with and without visual control. METHODS: The subjects of this study were 20 healthy students. An 8-channel electromyograph was used to measure the muscle activity of each segment of the erector spinae (cervical, thoracic, and lumbar) during a core stability exercise with and without visual control. The collected data were analyzed using the independent t-test and the Pearson correlation test. RESULTS: The activity of the erector spinae in all segments was higher without vision than with vision. During the core stability exercise with vision, the activity of the right cervical erector spinae increased significantly as the activity of the left thoracic erector spinae increased (r=.555). During the exercise without vision, the activity of the left thoracic erector spinae increased significantly as the activity of the left lumbar erector spinae increased (r=.472). CONCLUSION: There was a positive correlation between the cervical and thoracic segments of the erector spinae during the core stability exercise with vision, and a positive correlation between the thoracic and lumbar segments during the exercise without vision.
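
The reported comparisons (segment activity with versus without vision, and correlations between segments) follow the pattern sketched below with standard statistical tools; the arrays are random placeholders, not the study's EMG measurements.

```python
# Outline of the analysis: independent t-test (with vs. without vision) and
# Pearson correlation between segments. Data are placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
with_vision = rng.normal(40, 5, 20)       # erector spinae activity, eyes open
without_vision = rng.normal(45, 5, 20)    # erector spinae activity, eyes closed
t_stat, p_value = stats.ttest_ind(with_vision, without_vision)

cervical = rng.normal(30, 4, 20)
thoracic = 0.5 * cervical + rng.normal(0, 2, 20)
r, p_r = stats.pearsonr(cervical, thoracic)
```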

Vision-based multipoint measurement systems for structural in-plane and out-of-plane movements including twisting rotation

  • Lee, Jong-Han;Jung, Chi-Young;Choi, Eunsoo;Cheung, Jin-Hwan
    • Smart Structures and Systems
    • /
    • v.20 no.5
    • /
    • pp.563-572
    • /
    • 2017
  • The safety of structures is closely associated with their out-of-plane behavior. In particular, long and slender beam structures are increasingly used in design and construction, so an evaluation of the lateral and torsional behavior of a structure is important for its safety during construction as well as under service conditions. The current contact measurement method using displacement meters cannot measure independent movements directly and also requires caution during installation. Therefore, in this study, a vision-based system was used to measure the in-plane and out-of-plane displacements of a structure. The image processing algorithm was based on reference objects, including multiple targets, in the Lab color space. The captured targets were synchronized using a load indicator connected wirelessly to a data logger system in the server. A laboratory beam test was carried out to compare the displacements and rotation obtained from the proposed vision-based measurement system with those from the current measurement method using string potentiometers. The test results showed that the proposed vision-based measurement system can be applied successfully and easily to evaluating both the in-plane and out-of-plane movements of a beam, including twisting rotation.
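
A simplified sketch of one stage of the described pipeline is locating the colored reference targets in the Lab color space and returning their pixel centroids, from which in-plane and out-of-plane displacements could then be derived. The color bounds and the use of OpenCV below are assumptions for illustration, not the paper's values.

```python
# Sketch: detect colored reference targets in Lab color space and return their centroids.
import cv2
import numpy as np

def find_target_centroids(bgr_frame: np.ndarray,
                          lower_lab=(20, 150, 150),
                          upper_lab=(255, 200, 200)):
    lab = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2LAB)
    mask = cv2.inRange(lab, np.array(lower_lab, np.uint8), np.array(upper_lab, np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    centroids = []
    for contour in contours:
        m = cv2.moments(contour)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids   # target positions; displacement = change between frames
```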