• Title/Abstract/Keyword: Autonomous Machine

202 search results (processing time: 0.025 s)

기계시각과 퍼지 제어를 이용한 경운작업 트랙터의 자율주행 (Autonomous Tractor for Tillage Operation Using Machine Vision and Fuzzy Logic Control)

  • 조성인;최낙진;강인성
    • Journal of Biosystems Engineering / Vol. 25, No. 1 / pp.55-62 / 2000
  • Autonomous farm operation needs to be developed to address safety concerns, labor shortages, operator health, etc. In this research, an autonomous tractor for tillage was investigated using machine vision and a fuzzy logic controller (FLC). Tractor heading and offset were determined by image processing and a geomagnetic sensor. The FLC took the tractor heading and offset as inputs and generated the steering angle for tractor guidance as output. A color CCD camera was used for the image processing. The heading and offset were obtained using the Hough transform of the G-value color images. Fifteen fuzzy rules were used for inferring the tractor steering angle. The tractor was tested in the field, and it was proved that the tillage operation could be done autonomously within 20 cm deviation with the machine vision and the FLC.
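The paper's 15 fuzzy rules and membership functions are not reproduced in the abstract, so the sketch below is only a minimal Mamdani-style stand-in for this kind of steering FLC: triangular memberships over heading error and offset, a 3×3 rule table (an invented assumption, not the paper's rule base), and weighted-average defuzzification.

```python
# Minimal fuzzy steering sketch (illustrative only; the paper's actual
# 15 rules and membership functions are assumptions here).

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms for heading error (deg) and offset (cm): Negative/Zero/Positive.
HEADING = {"N": (-30, -15, 0), "Z": (-15, 0, 15), "P": (0, 15, 30)}
OFFSET = {"N": (-40, -20, 0), "Z": (-20, 0, 20), "P": (0, 20, 40)}
# Rule table: (heading term, offset term) -> crisp steering angle (deg).
RULES = {("N", "N"): 20, ("N", "Z"): 12, ("N", "P"): 0,
         ("Z", "N"): 8, ("Z", "Z"): 0, ("Z", "P"): -8,
         ("P", "N"): 0, ("P", "Z"): -12, ("P", "P"): -20}

def steering_angle(heading, offset):
    """Weighted-average defuzzification over all fired rules."""
    num = den = 0.0
    for (ht, ot), angle in RULES.items():
        w = min(tri(heading, *HEADING[ht]), tri(offset, *OFFSET[ot]))
        num += w * angle
        den += w
    return num / den if den else 0.0
```

For instance, a vehicle centered and aligned (`steering_angle(0, 0)`) yields no steering correction, while a positive offset steers back toward the path.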


RESEARCH ON AUTONOMOUS LAND VEHICLE FOR AGRICULTURE

  • Matsuo, Yosuke;Yukumoto, Isamu
    • 한국농업기계학회: 학술대회논문집 / 1993 Proceedings of International Conference for Agricultural Machinery and Process Engineering / pp.810-819 / 1993
  • An autonomous land vehicle for agriculture (ALVA-II) was developed. A prototype vehicle was made by modifying a commercial tractor. A navigation sensor system with a geo-magnetic sensor performed the autonomous operations of ALVA-II, such as rotary tilling with headland turnings. A navigation sensor system with a machine vision system was also investigated to control ALVA-II following a work boundary.
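The ALVA-II sensor interface is not published in this abstract; as a hedged illustration of the geo-magnetic part of such a navigation system, the sketch below derives a compass heading from two magnetometer axes and a signed heading error for steering (axis conventions are assumptions).

```python
# Illustrative geo-magnetic heading computation (axis convention assumed:
# mag_x points to magnetic north when heading is zero).
import math

def heading_deg(mag_x, mag_y):
    """Heading clockwise from magnetic north, in degrees [0, 360)."""
    return math.degrees(math.atan2(mag_y, mag_x)) % 360.0

def heading_error(current, target):
    """Smallest signed difference target - current, in degrees (-180, 180]."""
    e = (target - current + 180.0) % 360.0 - 180.0
    return 180.0 if e == -180.0 else e
```

The wrap-around handling matters near north: turning from 350° to 10° is a +20° correction, not -340°.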


기계학습 기반의 실시간 이미지 인식 알고리즘의 성능 (Performance of Real-time Image Recognition Algorithm Based on Machine Learning)

  • 선영규;황유민;홍승관;김진영
    • 한국위성정보통신학회논문지 / Vol. 12, No. 3 / pp.69-73 / 2017
  • In this paper, we develop a machine learning-based real-time image recognition algorithm and test its performance. The algorithm recognizes incoming images in real time based on machine-learned image data. To test the performance of the developed algorithm, we applied it to the autonomous vehicle domain and thereby verified its performance.
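The paper's model is not described in this abstract, so the following is only an illustrative stand-in for recognizing incoming frames against machine-learned image data: a nearest-template classifier over flattened grey-level images (the template data is made up).

```python
# Illustrative nearest-template recognizer: each incoming frame (a flat
# grey-level list) gets the label of the closest stored training image.
# This is a stand-in, not the paper's algorithm.

def l2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def recognize(frame, templates):
    """templates: list of (label, image) pairs learned offline."""
    return min(templates, key=lambda t: l2(frame, t[1]))[0]

# Toy 2x2 "images" flattened to length-4 vectors.
templates = [("car", [9, 9, 1, 1]), ("lane", [1, 1, 9, 9])]
```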

Study on the Take-over Performance of Level 3 Autonomous Vehicles Based on Subjective Driving Tendency Questionnaires and Machine Learning Methods

  • Hyunsuk Kim;Woojin Kim;Jungsook Kim;Seung-Jun Lee;Daesub Yoon;Oh-Cheon Kwon;Cheong Hee Park
    • ETRI Journal / Vol. 45, No. 1 / pp.75-92 / 2023
  • Level 3 autonomous vehicles require conditional autonomous driving in which autonomous and manual driving are alternately performed; whether the driver can resume manual driving within a limited time should be examined. This study investigates whether the demographics and subjective driving tendencies of drivers affect take-over performance. We measured and analyzed the reengagement and stabilization time after a take-over request from the autonomous driving system using a vehicle simulator that supports the driver's take-over mechanism. We discovered that the driver's reengagement and stabilization time correlated with the speeding and wild-driving tendencies as well as with driving workload questionnaires. To verify the usefulness of the subjective questionnaire information, we tested whether drivers with slow or fast reengagement and stabilization times can be detected using machine learning techniques, and report the results. We expect to apply these results to training programs for autonomous vehicle users and personalized human-vehicle interfaces for future autonomous vehicles.
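The study's actual classifiers and features are richer than the abstract reveals; as a hedged sketch of detecting slow vs. fast take-over drivers from questionnaire scores, a k-nearest-neighbour vote is one plausible choice (the training data below is invented).

```python
# Illustrative k-NN detector for slow/fast take-over drivers based on
# subjective questionnaire scores (all numbers are made-up examples).

def knn_predict(x, data, k=3):
    """data: list of (features, label); majority vote among k nearest."""
    nearest = sorted(
        data, key=lambda d: sum((a - b) ** 2 for a, b in zip(x, d[0]))
    )[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# (speeding tendency, wild-driving tendency) -> take-over class
train = [((1, 1), "fast"), ((2, 1), "fast"), ((1, 2), "fast"),
         ((5, 4), "slow"), ((4, 5), "slow"), ((5, 5), "slow")]
```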

자율주행 밭농업로봇의 로터리 경작을 고려한 모델 기반 제어 연구 (Study on the Model based Control considering Rotary Tillage of Autonomous Driving Agricultural Robot)

  • 송하준;양견모;오장석;송수환;한종부;서갑호
    • 로봇학회논문지 / Vol. 15, No. 3 / pp.233-239 / 2020
  • The aim of this paper is to develop a modular agricultural robot and an autonomous driving algorithm that can be used in field farming. In practice, it is difficult to develop a controller for an autonomous agricultural robot whose dynamic characteristics change with the installation of machine modules. We therefore developed a model-based control algorithm for the rotary machine connected to the agricultural robot. The autonomous control algorithm consists of path control, velocity control, and orientation control. To verify the developed algorithm, we used analytical techniques, which have the advantage of reducing development time and risk. The model is formulated with multibody dynamics methods for high accuracy, and its parameters were obtained from the design parameters and data measured on the real machine. We then developed a co-simulation combining the multibody dynamics model and the control model using ADAMS and MATLAB Simulink. Using the developed model, we carried out dynamics simulations at several blade rotation speeds.
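The paper's plant is a full ADAMS multibody model; as a much lighter stand-in, the sketch below closes an orientation P-control loop around a kinematic bicycle model, one of the three loops (path, velocity, orientation) the abstract names. The wheelbase, gains, and steering limit are assumed values.

```python
# Orientation control sketch around a kinematic bicycle model
# (a stand-in for the paper's multibody co-simulation; all parameters assumed).
import math

L = 1.5      # wheelbase (m), assumed
DT = 0.05    # control period (s)
K_PSI = 2.0  # orientation P gain, assumed

def step(x, y, psi, v, psi_ref):
    """One control step: steer toward psi_ref, then integrate kinematics."""
    delta = max(-0.5, min(0.5, K_PSI * (psi_ref - psi)))  # steering limit (rad)
    x += v * math.cos(psi) * DT
    y += v * math.sin(psi) * DT
    psi += v / L * math.tan(delta) * DT
    return x, y, psi

# Drive at 1 m/s and regulate heading to 0.3 rad.
x = y = psi = 0.0
for _ in range(200):
    x, y, psi = step(x, y, psi, v=1.0, psi_ref=0.3)
```

After 10 s of simulated time the heading converges to the 0.3 rad reference.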

DGPS와 기계시각을 이용한 자율주행 콤바인의 개발 (Development of Autonomous Combine Using DGPS and Machine Vision)

  • 조성인;박영식;최창현;황헌;김명락
    • Journal of Biosystems Engineering / Vol. 26, No. 1 / pp.29-38 / 2001
  • A navigation system was developed for autonomous guidance of a combine. It consisted of a DGPS, a machine vision system, a gyro sensor, and an ultrasonic sensor. For autonomous operation of the combine, target points were determined first. Secondly, the heading angle and offset were calculated by comparing current positions obtained from the DGPS with the target points. Thirdly, the fuzzy controller decided the steering angle by fuzzy inference over three inputs: heading angle, offset, and distance to the bank around the rice field. Finally, the hydraulic system was actuated for combine steering. In the case of DGPS misbehavior, the machine vision system found the desired travel path. In this way, the combine traveled straight paths to the target point and then turned to the next target point. The gyro sensor was used to check the turning angle. The autonomous combine traveled within 31.11 cm deviation (RMS) on the straight paths and harvested up to 96% of the whole rice field. The field experiments proved the feasibility of autonomous harvesting. Further work should improve the DGPS accuracy by compensating for variations of the combine's attitude caused by the unevenness of the rice field.
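The second step above (comparing DGPS fixes with target points) reduces to plane geometry once positions are in a local metric frame. This hedged sketch computes the heading to the target and the signed lateral offset from the planned straight segment; the local-frame assumption and coordinate convention are mine, not the paper's.

```python
# Illustrative guidance geometry: heading to target and signed lateral
# offset from a straight path segment, in a local planar frame (metres).
import math

def guidance(pos, path_start, path_end):
    """Return (heading of path in deg, signed lateral offset of pos in m)."""
    dx, dy = path_end[0] - path_start[0], path_end[1] - path_start[1]
    heading = math.degrees(math.atan2(dy, dx))
    # Cross product of path direction with vector to the vehicle gives the
    # perpendicular (signed) distance from the path line.
    px, py = pos[0] - path_start[0], pos[1] - path_start[1]
    offset = (dx * py - dy * px) / math.hypot(dx, dy)
    return heading, offset
```

A vehicle 1 m to the left of an eastward path gets heading 0° and offset +1 m, which a fuzzy controller like the one above could consume directly.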


Personal Driving Style based ADAS Customization using Machine Learning for Public Driving Safety

  • Giyoung Hwang;Dongjun Jung;Yunyeong Goh;Jong-Moon Chung
    • 인터넷정보학회논문지 / Vol. 24, No. 1 / pp.39-47 / 2023
  • The development of autonomous driving and Advanced Driver Assistance System (ADAS) technology has grown rapidly in recent years. As most traffic accidents occur due to human error, self-driving vehicles can drastically reduce the number of accidents and crashes that occur on the roads today. Clearly, technical advancements in autonomous driving can lead to improved public driving safety. However, due to the current limitations in technology and the lack of public trust in self-driving cars (and drones), actual use of Autonomous Vehicles (AVs) is still significantly low. According to prior studies, people's acceptance of an AV is mainly determined by trust, and people feel much more comfortable with a personalized ADAS designed around the way they drive. Based on these needs, a customized ADAS that considers each driver's driving style is proposed in this paper. Each driver's behavior is divided into two categories: assertive and defensive. A novel customized ADAS algorithm with high classification accuracy is designed, which classifies each driver by driving style. Each driver's driving data is collected and simulated using CARLA, an open-source autonomous driving simulator. In addition, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) machine learning algorithms are used to optimize the ADAS parameters. The proposed scheme achieves high classification accuracy on time-series driving data. Furthermore, among the vast amount of CARLA-based feature data extracted from the drivers, distinguishable driving features are selected using Support Vector Machine (SVM) technology by comparing their influence on the classification of the two categories. By extracting distinguishable features and eliminating outliers with the SVM, the classification accuracy is significantly improved. Based on this classification, the ADAS sensors can be made more sensitive for assertive drivers, enabling more advanced driving safety support. This is especially important because the current state of the art in autonomous driving is at Level 3 (per the SAE International driving automation standards), which requires advanced functions that assist drivers through ADAS technology.
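The paper trains LSTM/GRU networks on CARLA traces; a recurrent network is too heavy to sketch here, so the following is only a minimal stand-in for the assertive/defensive split: two hand-picked features from a speed trace with invented thresholds. It illustrates the classification target, not the paper's method.

```python
# Minimal stand-in for assertive/defensive driver classification
# (feature choice and thresholds are invented, not from the paper).

def features(speeds):
    """Mean speed and mean absolute acceleration from a speed trace (m/s)."""
    acc = [abs(b - a) for a, b in zip(speeds, speeds[1:])]
    return sum(speeds) / len(speeds), sum(acc) / len(acc)

def classify_driver(speeds, v_thr=20.0, a_thr=1.5):
    """Label a trace assertive if either feature exceeds its threshold."""
    v_mean, a_mean = features(speeds)
    return "assertive" if v_mean > v_thr or a_mean > a_thr else "defensive"
```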

자율 기기를 위한 속도가 제어된 데이터 기반 실시간 스트림 프로세싱 (Rate-Controlled Data-Driven Real-Time Stream Processing for an Autonomous Machine)

  • 노순현;홍성수;김명선
    • 로봇학회논문지 / Vol. 14, No. 4 / pp.340-347 / 2019
  • Due to advances in machine intelligence and increased demand for autonomous machines, the complexity of the underlying software platform is increasing at a rapid pace, overwhelming developers with implementation details. We attempt to ease the burden that falls onto developers by creating a graphical programming framework we named Splash. Splash is designed to provide an effective programming abstraction for autonomous machines that require stream processing. It also enables programmers to specify genuine, end-to-end timing constraints, which the Splash framework automatically monitors for violation. By utilizing the timing constraints, Splash provides three key language semantics: timing semantics, in-order delivery semantics, and rate-controlled data-driven stream processing semantics. Together, these three semantics serve as a conceptual tool that hides low-level details from programmers, allowing developers to focus on the main logic of their applications. In this paper, we introduce the three language semantics in detail and explain their function in association with Splash's language constructs. Furthermore, we present the internal workings of the Splash programming framework and validate its effectiveness via a lane keeping assist system.
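Splash itself is a graphical framework, so no code from it appears here; the stand-alone sketch below only illustrates the three semantics the abstract names: items are delivered in order from a queue, consumed at a fixed rate, and checked against an end-to-end deadline per item (the API shape is my assumption).

```python
# Stand-alone illustration of rate-controlled, in-order stream processing
# with per-item end-to-end deadline monitoring (not the Splash API).
import queue
import time

def run_stream(items, period_s, deadline_s, process):
    """items: (birth_timestamp, payload) pairs. Returns deadline violators."""
    q = queue.Queue()
    for stamped in items:
        q.put(stamped)
    violations = []
    while not q.empty():
        born, payload = q.get()          # in-order delivery semantics
        process(payload)
        if time.monotonic() - born > deadline_s:
            violations.append(payload)   # end-to-end timing constraint violated
        time.sleep(period_s)             # rate-controlled consumption
    return violations

now = time.monotonic()
late = run_stream([(now - 0.5, "old"), (now, "fresh")],
                  period_s=0.01, deadline_s=0.1, process=lambda x: None)
```

A stale item ("old", born 0.5 s ago) trips the 0.1 s deadline monitor, while a fresh one passes.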

An Autonomous Operational Service System for Machine Vision-based Inspection towards Smart Factory of Manufacturing Multi-wire Harnesses

  • Seung Beom, Hong;Kyou Ho, Lee
    • Journal of Information and Communication Convergence Engineering / Vol. 20, No. 4 / pp.317-325 / 2022
  • In this study, we propose a technological system designed to provide machine vision-based automatic inspection and autonomous operation services for the entire product inspection process in wire harness manufacturing. Although the smart factory paradigm is a valuable and necessary goal, small companies may encounter steep barriers to entry. Therefore, the best approach is to move toward it gradually, in stages, starting with relatively simple improvements to manufacturing processes, such as replacing manual quality assurance stages with machine vision-based inspection. We consider the design issues of a system based on the proposed technology, describe an experimental implementation, and evaluate it. The test results show that adopting the proposed machine vision-based automatic inspection and operation service system for multi-wire harness production is justified, and the effectiveness of the proposed technology was verified.
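The paper's inspection criteria are not published in the abstract; as a hypothetical illustration of the pass/fail decision in wire-harness inspection, the sketch below checks the detected wire-colour sequence against the harness specification and counts defect pixels against a tolerance (both criteria are assumptions).

```python
# Hypothetical harness-inspection decision (criteria assumed, not the paper's).

def inspect(measured_colors, spec_colors, defect_pixels, max_defects=5):
    """Pass only if the wire order matches the spec and defects are tolerable."""
    if measured_colors != spec_colors:
        return "FAIL: wiring order"
    if defect_pixels > max_defects:
        return "FAIL: surface defects"
    return "PASS"
```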

머신비젼 기반의 자율주행 차량을 위한 카메라 교정 (Camera Calibration for Machine Vision Based Autonomous Vehicles)

  • 이문규;안택진
    • 제어로봇시스템학회논문지 / Vol. 8, No. 9 / pp.803-811 / 2002
  • Machine vision systems are usually used to identify traffic lanes and then determine the steering angle of an autonomous vehicle in real time. The steering angle is calculated using a geometric model of various parameters, including the orientation, position, and hardware specification of the camera in the machine vision system. To find accurate values of these parameters, camera calibration is required. This paper presents a new camera-calibration algorithm using known traffic lane features: line thickness and lane width. The camera parameters considered are divided into two groups: Group I (the camera orientation, the uncertainty image scale factor, and the focal length) and Group II (the camera position). First, six control points are extracted from an image of two traffic lines, and eight nonlinear equations are generated from the points. The least squares method is used to find estimates for the Group I parameters. Finally, values of the Group II parameters are determined using point correspondences between the image and the corresponding real-world scene. Experimental results prove the feasibility of the proposed algorithm.
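The paper solves eight nonlinear equations for several parameters at once; as a much-reduced stand-in for the same least-squares idea, this sketch recovers only the focal length from known lane widths under a pinhole model, where the problem becomes linear and has a closed-form solution (all numbers and the one-parameter model are assumptions for illustration).

```python
# Reduced calibration sketch: recover focal length f (px) by least squares
# from lane-width measurements, assuming the pinhole relation w = f * W / Z.
# This is an illustration of the least-squares step, not the paper's algorithm.

def fit_focal(samples):
    """samples: (real lane width W in m, depth Z in m, image width w in px).
    w = f * (W/Z) is linear in f, so least squares is closed form:
    f = sum(w * W/Z) / sum((W/Z)^2)."""
    num = sum(w * (W / Z) for W, Z, w in samples)
    den = sum((W / Z) ** 2 for W, Z, _ in samples)
    return num / den

# Synthetic measurements generated with f = 800 px plus small noise.
data = [(3.5, 10.0, 280.1), (3.5, 20.0, 139.9), (3.5, 40.0, 70.05)]
```

With noisy synthetic data the estimate lands within a pixel of the true focal length, which is the behavior the paper's richer eight-equation system generalizes to more parameters.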