• Title/Summary/Keyword: Vision data

Development of a Lateral Control System for Autonomous Vehicles Using Data Fusion of Vision and IMU Sensors with Field Tests (비전 및 IMU 센서의 정보융합을 이용한 자율주행 자동차의 횡방향 제어시스템 개발 및 실차 실험)

  • Park, Eun Seong;Yu, Chang Ho;Choi, Jae Weon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.3
    • /
    • pp.179-186
    • /
    • 2015
  • In this paper, a novel lateral control system is proposed to improve lane-keeping performance independently of GPS signals. Lane keeping is a key function in realizing unmanned driving systems. To this end, a vision-sensor-based real-time lane detection scheme is developed. Furthermore, the vision data are fused with the real-time steering angle of the test vehicle to improve lane-keeping performance; the fused heading is obtained from an IMU sensor and the vision sensor. The performance of the proposed system was verified by computer simulations and by field tests using a MOHAVE, a commercial vehicle from Kia Motors of Korea.
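As a rough sketch of the kind of vision/IMU heading fusion described above (the abstract does not give the exact algorithm, so the complementary-filter form, the gain `alpha`, and the signal names are assumptions), a gyro-integrated heading can be blended with a vision-derived lane heading:

```python
def fuse_heading(prev_heading, yaw_rate, dt, vision_heading, alpha=0.98):
    """Blend the IMU-integrated heading (high rate, but drifts) with the
    vision-derived heading (absolute, but noisy)."""
    imu_heading = prev_heading + yaw_rate * dt      # integrate the gyro
    return alpha * imu_heading + (1 - alpha) * vision_heading

# Stationary vehicle, gyro with a 0.01 rad/s bias: pure integration drifts
# without bound, while the fused estimate stays bounded near the vision value.
fused, integrated = 0.0, 0.0
for _ in range(1000):                               # 10 s at 100 Hz
    integrated += 0.01 * 0.01                       # gyro-only heading drifts
    fused = fuse_heading(fused, 0.01, 0.01, vision_heading=0.0)
```

The high-pass/low-pass split keeps the gyro's fast response while the vision measurement suppresses its long-term drift.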

A Study on the Forming Failure Inspection of Small and Multi Pipes (소형 다품종 파이프의 실시간 성형불량 검사 시스템에 관한 연구)

  • 김형석;이회명;이병룡;양순용;안경관
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.21 no.11
    • /
    • pp.61-68
    • /
    • 2004
  • Recently, there has been an increasing demand for computer-vision-based inspection and measurement systems as part of factory automation equipment. The existing manual inspection method can inspect only specific samples, has low measuring accuracy, and increases working time; thus, a computer-aided analysis method is needed to improve objectivity and reproducibility. In this paper, a front- and side-profile inspection and data-transfer system is developed using computer vision to inspect three kinds of pipes coming from a forming line. Straight lines and circles are extracted from the profiles obtained by the vision system using the Laplace operator. To reduce inspection time, the Hough transform is combined with a clustering method for straight-line detection, and the center points and diameters of the inner and outer circles are found to determine eccentricity and judge each pipe as good or bad. An inspection system has also been built in which each pipe's data and pass/fail images are stored as files and transferred to a server so that a central office can manage them.

Implementation of Enhanced Vision for an Autonomous Map-based Robot Navigation

  • Roland, Cubahiro;Choi, Donggyu;Kim, Minyoung;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.41-43
    • /
    • 2021
  • Robot Operating System (ROS) has been a prominent and successful framework used in robotics business and academia. However, the framework has long been focused on and limited to navigation of robots and manipulation of objects in the environment. This focus leaves out other important fields such as speech recognition and vision abilities. Our goal is to take advantage of ROS's capacity to integrate additional libraries of programming functions aimed at real-time computer vision with a depth-image camera. In this paper we focus on the implementation of upgraded vision with the help of a depth camera, which provides high-quality data for a much enhanced and more accurate understanding of the environment. The varied data from the cameras are then incorporated into the ROS communication structure for any potential use. For this particular case, the system uses OpenCV libraries to manipulate the data from the camera and provide face-detection capability to the robot while it navigates an indoor environment. The whole system has been implemented and tested on the latest Turtlebot3 and Raspberry Pi 4 hardware.

Development of an Intelligent Control System to Integrate Computer Vision Technology and Big Data of Safety Accidents in Korea

  • KANG, Sung Won;PARK, Sung Yong;SHIN, Jae Kwon;YOO, Wi Sung;SHIN, Yoonseok
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.721-727
    • /
    • 2022
  • Construction safety remains an ongoing concern, and project managers have increasingly been forced to cope with myriad uncertainties related to human operations on construction sites and the lack of a skilled workforce in hazardous circumstances. Various construction fatality monitoring systems have been widely proposed as alternatives to overcome these difficulties and to improve safety management performance. In this study, we propose an intelligent, automatic control system that can proactively protect workers using both the analysis of big data on past safety accidents and the real-time detection of worker non-compliance in using personal protective equipment (PPE) on a construction site. These data are obtained using computer vision technology and data analytics, which are integrated and reinforced by lessons learned from the analysis of big data on safety accidents that occurred in the last 10 years. The system offers data-informed recommendations for high-risk workers and proactively eliminates the possibility of safety accidents. As an illustrative case, we selected a pilot project and applied the proposed system to workers in uncontrolled environments. We expect decreases in workers' PPE non-compliance rates, improvements in compliance rates, reductions in severe fatalities through guidelines customized to each worker, and accelerated achievement of safety performance goals.

Development of Intelligent Robot Vision System for Automatic Inspection of Optical Lens (광학렌즈 자동 검사용 지능형 로봇 비젼 시스템 개발)

  • 정동연;장영희;차보남;한성현
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 2004.04a
    • /
    • pp.247-252
    • /
    • 2004
  • In this research, shape recognition and vision technology for optical lens inspection were developed, and their defect-detection performance, including on the external form of the lens, was verified. The standard specification for surface defects such as scratches is entered directly and stored as reference data; the actual measured data are then compared with the standard reference data, and a product whose error exceeds a preset tolerance is judged defective, while a product within the tolerance is judged good. The developed system can measure down to a single pixel, where one pixel corresponds to 3.7 μm × 3.7 μm, and its measuring accuracy and reliability were verified through experiments.
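The pass/fail rule described above (compare measured data with stored reference data and reject when the error exceeds a tolerance) can be sketched as follows; the profile values and the one-pixel tolerance are illustrative assumptions, not the paper's actual specification:

```python
def inspect(measured, reference, tol_um=3.7):
    """Judge a lens good if every profile point deviates from the reference
    by less than the tolerance (here one pixel, 3.7 um)."""
    errors = [abs(m - r) for m, r in zip(measured, reference)]
    return max(errors) < tol_um, max(errors)

ref  = [100.0, 101.0, 102.5, 101.0, 100.0]   # stored reference profile (um)
good = [100.5, 101.2, 102.0, 101.1, 100.3]   # within tolerance everywhere
bad  = [100.5, 106.0, 102.0, 101.1, 100.3]   # 5 um deviation at point 2
```

Returning the maximum error alongside the verdict lets the operator see how close a borderline part came to the limit.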

High Accuracy Vision-Based Positioning Method at an Intersection

  • Manh, Cuong Nguyen;Lee, Jaesung
    • Journal of information and communication convergence engineering
    • /
    • v.16 no.2
    • /
    • pp.114-124
    • /
    • 2018
  • This paper illustrates a vision-based vehicle positioning method at an intersection to support C-ITS. It removes minor shadows that cause the merging problem by simply eliminating the fractional parts of a quotient image. To separate occlusions, it first performs a distance transform to analyze the contents of a single foreground object and find seeds, each of which represents one vehicle; it then applies the watershed algorithm to find the natural border between two cars. In addition, a general vehicle model and a corresponding space estimation method are proposed. For performance evaluation, the corresponding ground-truth data are read and compared with the vision-based detected data, and two criteria, IOU and DEER, are defined to measure the accuracy of the extracted data. The evaluation shows that the average IOU is 0.65 with a hit ratio of 97%, and that the average DEER is 0.0467, which corresponds to a positioning error of 32.7 centimeters.
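The IOU criterion used in the evaluation is the standard intersection-over-union of a detected box and its ground-truth box:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # zero if no overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

An IOU of 1.0 means the detection coincides exactly with the ground truth; the paper's average of 0.65 indicates substantial but imperfect overlap.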

A computer vision-based approach for crack detection in ultra high performance concrete beams

  • Roya Solhmirzaei;Hadi Salehi;Venkatesh Kodur
    • Computers and Concrete
    • /
    • v.33 no.4
    • /
    • pp.341-348
    • /
    • 2024
  • Ultra-high-performance concrete (UHPC) has received remarkable attention in civil infrastructure due to its unique mechanical characteristics and durability. UHPC is increasingly dominant in essential structural elements, yet its unique properties pose challenges for traditional inspection methods, as damage may not always manifest visibly on the surface. As such, the need for robust inspection techniques for detecting cracks in UHPC members has become imperative, as traditional methods often fall short in providing comprehensive and timely evaluations. In the era of artificial intelligence, computer vision has gained considerable interest as a powerful tool to enhance infrastructure condition assessment with image and video data collected from sensors, cameras, and unmanned aerial vehicles. This paper presents a computer-vision-based approach employing deep learning to detect cracks in UHPC beams, with the aim of addressing the inherent limitations of traditional inspection methods. This work leverages computer vision to discern intricate patterns and anomalies. In particular, a convolutional neural network architecture employing transfer learning is adopted to identify the presence of cracks in the beams. The proposed approach is evaluated with image data collected from full-scale experiments conducted on UHPC beams subjected to flexural and shear loadings. The results indicate the applicability of computer vision and deep learning as intelligent methods to detect major and minor cracks and recognize various damage mechanisms in UHPC members with better efficiency than conventional monitoring methods. Findings from this work pave the way for autonomous infrastructure health monitoring and condition assessment, ensuring early detection in response to evolving structural challenges. By leveraging computer vision, this paper helps usher in a new era of effectiveness in autonomous crack detection, enhancing the resilience and sustainability of UHPC civil infrastructure.
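The patch-wise screening scheme such a crack-detection CNN typically follows can be sketched as below; the classifier here is a stand-in brightness test, not the paper's transfer-learned network, and the patch size and threshold are assumptions:

```python
import numpy as np

def crack_patch(patch, threshold=60):
    """Stand-in classifier: a notable fraction of dark pixels suggests a
    crack. A real system would call the trained CNN here instead."""
    return (patch < threshold).mean() > 0.05

def scan_image(img, size=32):
    """Slide a non-overlapping window over the beam image and return the
    (row, col) origins of patches flagged as cracked."""
    flagged = []
    for y in range(0, img.shape[0] - size + 1, size):
        for x in range(0, img.shape[1] - size + 1, size):
            if crack_patch(img[y:y+size, x:x+size]):
                flagged.append((y, x))
    return flagged

img = np.full((64, 64), 200, dtype=np.uint8)   # bright concrete surface
img[10:20, 0:64] = 30                          # dark horizontal crack
```

The flagged patch coordinates localize the crack within the beam image, which is what makes patch-wise classification useful for condition assessment rather than a single whole-image verdict.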

A Proposal for the Education Vision for Chemical Engineering Field (화학공학분야 교육비전 수립 연구)

  • Lee, Kyu-nyo;Hwang, Ju-young;Yi, Kwang-bok;Han, Su-kyoung;Rhee, Young-woo
    • Journal of Engineering Education Research
    • /
    • v.21 no.6
    • /
    • pp.99-107
    • /
    • 2018
  • The purpose of this study is to establish and propose an educational vision for the field of chemical engineering, in search of its academic identity and future education direction. To achieve this, we investigate the literature and data on the vision, educational goals, and curricula of departments of chemical engineering at domestic and foreign universities, and we conduct a SWOT analysis of internal and external environmental factors. The validity of the proposal was verified through a Delphi survey with Delphi panels, and the vision was developed by revising and improving it based on the opinions of professionals. The vision comprises the value and mission of the discipline, the educational purpose, and the educational goal. The first stage is the value and mission of chemical engineering; the educational purposes and goals are divided into 'Department of Chemical Engineering' and 'Department of Chemical and Biological Engineering'. The educational vision can be applied as follows. First, we expect the vision to be a valuable philosophical and theoretical basis for establishing educational objectives and goals in the field of chemical engineering, and we hope it will serve as a general, top-level education goal. Second, we hope the vision will be used to develop customized visions, educational purposes, and goals that reflect the characteristics of regions, departments, graduates, and educational needs in the field. Finally, we hope these results will be the starting point for discussing the educational vision of chemical engineering departments.

Development of Web Based Mold Discrimination System using the Matching Process for Vision Information and CAD DB (비전정보와 캐드DB 매칭을 통한 웹 기반 금형 판별 시스템 개발)

  • Choi, Jin-Hwa;Jeon, Byung-Cheol;Cho, Myeong-Woo
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.15 no.5
    • /
    • pp.37-43
    • /
    • 2006
  • The goal of this study is the development of a web-based mold discrimination system that matches vision information against a CAD database. Using 2D vision images enables speedy mold discrimination among many database entries. Image processing such as preprocessing and cleaning is performed to obtain a vivid image containing the object information. The web-based system is a program that exchanges messages between a server and a client through an ActiveX control, and the result of mold discrimination is shown in a web browser. For effective feature classification and extraction, the signature method is used to derive meaningful information from the 2D data. The results show the feasibility of the proposed system in matching feature information from vision images against CAD database samples.
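The signature method mentioned above reduces a 2D contour to a 1D descriptor; a common variant (assumed here, since the abstract does not spell out its exact form) is the scale-normalized centroid-distance signature, which can then be compared against stored CAD signatures:

```python
import numpy as np

def signature(contour, n=32):
    """Centroid-distance signature: distances from the shape centroid to the
    contour points, resampled to n points and scale-normalized."""
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)
    idx = np.linspace(0, len(d) - 1, n).astype(int)  # crude resampling
    d = d[idx]
    return d / d.max()                               # scale invariance

# A square sampled at its corners and edge midpoints: the signature
# alternates between corner distance (max) and midpoint distance.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
sig = signature(square, n=8)
```

Matching then reduces to a nearest-neighbor search over the database, e.g. picking the stored mold whose signature has the smallest Euclidean distance to the query signature.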

A Study on Weldability Estimation of Laser Welded Specimens by Vision Sensor (비전 센서를 이용한 레이져 용접물의 용접성 평가에 관한 연구)

  • 엄기원;이세헌;이정익
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1101-1104
    • /
    • 1995
  • After welding fabrication, users may be dissatisfied with the surface and function of the workpiece because of welding defects. To check these defects effectively and without loss of time, a weldability estimation system is urgently needed to assess the quality of the whole specimen. In this study, a laser vision camera captures raw data of the welded specimen's profiles, and vision processing of these data first yields an estimate of the qualitative defects. At the same time, to detect quantitative defects, whole-specimen weldability is estimated by multi-feature pattern recognition, a kind of fuzzy pattern recognition. For user-friendliness, the estimation results are presented as individual profiles, final reports, and visual graphics, so the user can easily judge weldability. Applying this system to welding fabrication contributes to on-line weldability estimation.
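A hedged sketch of multi-feature fuzzy scoring in the spirit of the fuzzy pattern recognition mentioned above; the triangular membership shapes, the feature choices (bead width, penetration), and the target values are all assumptions, not the paper's actual rules:

```python
def tri_membership(x, lo, peak, hi):
    """Triangular fuzzy membership: 1 at the peak, falling to 0 at [lo, hi]."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def weldability(bead_width, penetration):
    """Aggregate per-feature memberships with min (fuzzy AND) into a single
    weldability score in [0, 1]."""
    w = tri_membership(bead_width, 2.0, 3.0, 4.0)    # assumed target: 3 mm
    p = tri_membership(penetration, 0.5, 1.0, 1.5)   # assumed target: 1 mm
    return min(w, p)
```

Using min as the aggregator means one bad feature is enough to pull the whole score down, which matches the intuition that a weld is only as good as its worst measured property.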
