• Title/Summary/Keyword: Vision-based Control


A Study on 3D Geospatial Information Model based Influence Factor Management Application in Earthwork Plan (3차원 지형공간정보모델기반 토공사 계획 및 관리에 미치는 영향요인 관리 애플리케이션 연구)

  • Park, Jae-woo;Yun, Won Gun;Kim, Suk Su;Song, Jae Ho
    • Journal of the Korean Society of Industry Convergence / v.22 no.2 / pp.125-135 / 2019
  • In recent years, digital transformation, represented by the "Fourth Industrial Revolution" and the spread of digitalization across all industries, has become a reality. In the construction sector, the Ministry of Land, Infrastructure and Transport announced the Smart Construction 2025 vision in 2018 and established the 'Smart Construction Technology Roadmap', which aims to complete construction automation by 2030. In the construction stage in particular, field monitoring technology using drones is needed to support construction equipment automation and on-site control, and a 3D geospatial information model can serve as a base tool for this. The purpose of this study is to identify the factors affecting earthwork, which accounts for a considerable share of construction time and cost as a single work type, in order to manage changes in site conditions and improve communication between managers and workers during earthwork planning. Based on this, field management procedures and an application were developed.

Recognition of Occupants' Cold Discomfort-Related Actions for Energy-Efficient Buildings

  • Song, Kwonsik;Kang, Kyubyung;Min, Byung-Cheol
    • International conference on construction engineering and project management / 2022.06a / pp.426-432 / 2022
  • HVAC systems play a critical role in reducing energy consumption in buildings. Integrating occupants' thermal comfort evaluation into HVAC control strategies is believed to reduce building energy consumption while minimizing their thermal discomfort. Advanced technologies, such as visual sensors and deep learning, enable the recognition of occupants' discomfort-related actions, thus making it possible to estimate their thermal discomfort. Unfortunately, it remains unclear how accurately a deep learning-based classifier can recognize occupants' discomfort-related actions in a working environment. Therefore, this research evaluates the classification performance of occupants' discomfort-related actions while sitting at a computer desk. To achieve this objective, this study collected RGB video data on nine college students' cold discomfort-related actions and then trained a deep learning-based classifier using the collected data. The classification results are threefold. First, the trained classifier has an average accuracy of 93.9% for classifying six cold discomfort-related actions. Second, each discomfort-related action is recognized with more than 85% accuracy. Third, classification errors are mostly observed among similar discomfort-related actions. These results indicate that using human action data will enable facility managers to estimate occupants' thermal discomfort and, in turn, adjust the operational settings of HVAC systems to improve the energy efficiency of buildings in conjunction with their thermal comfort levels.

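The per-action accuracies reported in the abstract above come from evaluating predictions against ground-truth action labels, typically summarized in a confusion matrix. A minimal sketch of that evaluation step; the matrix values below are invented for illustration and are not the study's data:

```python
import numpy as np

def accuracy_report(cm):
    """Overall and per-class accuracy from a confusion matrix
    (rows = true action, columns = predicted action)."""
    cm = np.asarray(cm, dtype=float)
    overall = np.trace(cm) / cm.sum()
    per_class = np.diag(cm) / cm.sum(axis=1)
    return overall, per_class

# Hypothetical 3-action confusion matrix: most errors fall between the
# two similar actions (rows 0 and 1), mirroring the pattern the study reports.
cm = [[90, 8, 2],
      [6, 92, 2],
      [1, 1, 98]]
overall, per_class = accuracy_report(cm)
```

The off-diagonal mass concentrated between similar actions is what drives the gap between overall accuracy and the weakest per-class accuracy.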

Example of Application of Drone Mapping System based on LiDAR to Highway Construction Site (드론 LiDAR에 기반한 매핑 시스템의 고속도로 건설 현장 적용 사례)

  • Seung-Min Shin;Oh-Soung Kwon;Chang-Woo Ban
    • Journal of the Korean Society of Industry Convergence / v.26 no.6_3 / pp.1325-1332 / 2023
  • Recently, much research based on point cloud data has been conducted to advance innovations such as construction automation in the transportation field and virtual national spatial models. Such data are often measured by remote control, using devices such as UAVs and UGVs, in terrain that is difficult for humans to access. Drones, a type of UAV, are mainly used to acquire point cloud data, but photogrammetry with a vision camera, which takes a long time to create a point cloud map, is difficult to apply at construction sites where the terrain changes periodically and surveying is difficult. In this paper, we developed a point cloud mapping system adopting non-repetitive scanning LiDAR and confirmed its improvements through field application. For accuracy analysis, a point cloud map was created from a 2 minute 40 second flight and about 30 seconds of software post-processing over a 144.5 × 138.8 m site. Comparing distances on structures averaging 4 m in size against actual measurements yielded an average error of 4.3 cm, confirming that the performance is within an error range applicable in the field.

Artificial Intelligence Plant Doctor: Plant Disease Diagnosis Using GPT4-vision

  • Yoeguang Hue;Jea Hyeoung Kim;Gang Lee;Byungheon Choi;Hyun Sim;Jongbum Jeon;Mun-Il Ahn;Yong Kyu Han;Ki-Tae Kim
    • Research in Plant Disease / v.30 no.1 / pp.99-102 / 2024
  • Integrated pest management is essential for controlling plant diseases that reduce crop yields. Rapid diagnosis is crucial for effective management in the event of an outbreak, to identify the cause and minimize damage. Diagnosis methods range from indirect visual observation, which can be subjective and inaccurate, to machine learning and deep learning predictions that may suffer from biased data. Direct molecular-based methods, while accurate, are complex and time-consuming. However, the development of large multimodal models, like GPT-4, combines image recognition with natural language processing for more accurate diagnostic information. This study introduces a GPT-4-based system for diagnosing plant diseases, utilizing a detailed knowledge base with 1,420 host plants, 2,462 pathogens, and 37,467 pesticide instances from the official plant disease and pesticide registries of Korea. The AI plant doctor offers interactive advice on diagnosis, control methods, and pesticide use for diseases in Korea and is accessible at https://pdoc.scnu.ac.kr/.

Comparisons of Single Photo Resection Algorithms for the Determination of Exterior Orientation Parameters (단사진의 외부표정요소 결정을 위한 후방교회법 알고리즘의 비교)

  • Kim, Eui Myoung;Seo, Hong Deok
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.4 / pp.305-315 / 2020
  • The purpose of this study is to compare single photo resection algorithms, which determine the exterior orientation parameters used in fields such as photogrammetry, computer vision, and robotics. To this end, experimental data were generated by simulating terrain based on cameras used in aerial and close-range photogrammetry. In experiments with an aerial camera whose images were taken almost vertically, the exterior orientation parameters could be determined using three ground control points, but the Procrustes algorithm was sensitive to the configuration of the ground control points. In experiments with a close-range amateur camera whose attitude angles change significantly, that algorithm was again sensitive to the control point configuration, and the other algorithms required at least six ground control points. Across both camera types, the cosine law-based spatial resection showed performance similar to that of a traditional photogrammetric algorithm, since it requires few iterations and no explicit initial values.
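As a rough illustration of the six-point case mentioned above, single photo resection can be sketched with a direct linear transform (DLT), a standard baseline that recovers the projection matrix and hence the projection center. The camera parameters and control points below are invented for the example; the paper's specific algorithms (Procrustes, cosine law-based resection) are not reproduced here:

```python
import numpy as np

def dlt_resection(world_pts, image_pts):
    """Estimate the 3x4 projection matrix from >= 6 non-coplanar ground
    control points via the direct linear transform."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)          # defined only up to scale

def camera_center(P):
    """The projection center (part of the exterior orientation) spans
    the null space of P."""
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return C[:3] / C[3]

# Simulated near-vertical photograph: principal distance 1000 px,
# projection center at (5, 5, 50), camera looking straight down.
K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
C_true = np.array([5.0, 5.0, 50.0])
R = np.diag([1.0, -1.0, -1.0])
P_true = K @ np.hstack([R, (-R @ C_true).reshape(3, 1)])

world = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10],
                  [10, 10, 5], [3, 7, 4], [6, 2, 8]], dtype=float)
homog = np.hstack([world, np.ones((7, 1))])
image = [(x[0] / x[2], x[1] / x[2]) for x in (P_true @ homog.T).T]

P_est = dlt_resection(world, image)
C_est = camera_center(P_est)
```

With noise-free simulated points the recovered projection center matches the true one to numerical precision; rotation and translation can then be factored out of the estimated matrix.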

Smart HCI Based on the Informations Fusion of Biosignal and Vision (생체 신호와 비전 정보의 융합을 통한 스마트 휴먼-컴퓨터 인터페이스)

  • Kang, Hee-Su;Shin, Hyun-Chool
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.4 / pp.47-54 / 2010
  • We propose a smart human-computer interface that replaces the conventional mouse. The interface can control the cursor and issue commands using only hand movements, without any handheld device. Four finger motions (left click, right click, hold, drag) are enough to express all mouse functions. Cursor movement control is implemented with image processing. For inference we use the entropy of the EMG signal, Gaussian modeling, and maximum likelihood estimation. For cursor control, color recognition locates the center point of the fingertip from a marker and maps that point onto the cursor. The accuracy of finger movement inference is over 95%, and cursor control works naturally without delay. We implemented the whole system to verify its performance and utility.
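The entropy-plus-maximum-likelihood idea described above can be sketched minimally as follows, with synthetic one-channel EMG windows standing in for real recordings; the window length, histogram bin layout, and signal amplitudes are all invented for the example:

```python
import numpy as np

def window_entropy(window, edges=np.linspace(-1.0, 1.0, 17)):
    """Shannon entropy of the amplitude histogram of one EMG window:
    stronger muscle activation spreads samples over more bins."""
    counts, _ = np.histogram(window, bins=edges)
    p = counts[counts > 0] / len(window)
    return float(-np.sum(p * np.log(p)))

class GaussianMLClassifier:
    """One 1-D Gaussian per motion class over the entropy feature;
    prediction picks the class with maximum likelihood."""
    def fit(self, features, labels):
        self.params = {}
        for c in set(labels):
            f = np.array([f for f, l in zip(features, labels) if l == c])
            self.params[c] = (f.mean(), f.std() + 1e-9)
        return self

    def predict(self, feature):
        def log_lik(c):
            m, s = self.params[c]
            return -np.log(s) - 0.5 * ((feature - m) / s) ** 2
        return max(self.params, key=log_lik)

# Synthetic windows: "rest" is low-amplitude noise, "click" high-amplitude.
rng = np.random.default_rng(0)
feats, labels = [], []
for label, amp in (("rest", 0.02), ("click", 0.30)):
    for _ in range(30):
        feats.append(window_entropy(amp * rng.standard_normal(200)))
        labels.append(label)
clf = GaussianMLClassifier().fit(feats, labels)
```

With fixed bin edges, a low-amplitude window concentrates in the central bins (low entropy) while an active window spreads out (high entropy), so even this one-dimensional feature separates the two synthetic classes cleanly.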

Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • Park, Kang Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.2 / pp.79-88 / 2004
  • Gaze detection locates, by computer vision, the position on a monitor screen where a user is looking. Gaze detection systems have numerous fields of application: they are applicable to man-machine interfaces that help the handicapped use computers, and to view control in three-dimensional simulation programs. In this work, we implement gaze detection with a computer vision system using a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position determined by facial movement is then computed from the normal vector of the plane defined by the computed 3D feature positions. In addition, a trained neural network detects the gaze position from eye movement. Experimentally, we obtain both facial and eye gaze positions on the monitor, with an RMS error of about 4.8 cm between the computed and actual positions.
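The facial gaze computation described above reduces to taking the normal of the plane through the 3D feature points and intersecting it with the monitor plane. A toy sketch, under the simplifying assumption that the monitor is the plane z = 0 in the same coordinate frame; all coordinates are invented:

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D facial feature points."""
    n = np.cross(np.asarray(p2, float) - p1, np.asarray(p3, float) - p1)
    return n / np.linalg.norm(n)

def gaze_point_on_monitor(origin, normal, monitor_z=0.0):
    """Intersect the ray from `origin` along `normal` with the plane z = monitor_z."""
    t = (monitor_z - origin[2]) / normal[2]
    return np.asarray(origin, float)[:2] + t * normal[:2]

# Three feature points on a face plane 50 cm in front of the monitor (z = 0).
p1, p2, p3 = [0.0, 0.0, 50.0], [4.0, 0.0, 50.0], [0.0, 4.0, 50.0]
n = plane_normal(p1, p2, p3)
gaze = gaze_point_on_monitor(np.array([2.0, 2.0, 50.0]), n)
```

For this fronto-parallel face the gaze point lands directly in front of the ray origin; tilting the feature plane shifts the intersection across the screen accordingly.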

Effect of Systems Thinking Based STEAM Education Program on Climate Change Topics (시스템 사고에 기반한 STEAM 교육 프로그램이 기후변화 학습에 미치는 효과)

  • Cho, Kyu-Dohng;Kim, Hyoungbum
    • The Journal of the Korea Contents Association / v.17 no.7 / pp.113-123 / 2017
  • This research reviews systems thinking and STEAM theory while ascertaining the effects of applying a systems thinking-based STEAM program, suited to studying climate change, in the classroom. The program was developed by researchers and experts through a continuing series of expert meetings and was applied to science classes over the course of eight weeks. Its effects were analyzed through students' systems thinking, a STEAM semantics survey, and students' academic achievement. The findings are as follows. First, the test group showed a statistically meaningful difference in systems thinking compared with the control group in four subcategories, 'Systems Analysis', 'Personal Mastery', 'Shared Vision' and 'Team Learning', but not in 'Mental Model'. Second, in the pre- and post-knowledge tests, independent sample t-tests in science, technology, engineering, art, and mathematics showed statistically meaningful differences compared with the control group. Third, in the academic performance test on climate change, the test group achieved higher scores than the control group. In conclusion, the systems thinking-based STEAM program is considered appropriate for enhancing convergent thinking skills based on systems thinking, and is expected to improve creative thinking and problem-solving abilities by offering new ideas on climate change science.

Visual Sensor Design and Environment Modeling for Autonomous Mobile Welding Robots (자율 주행 용접 로봇을 위한 시각 센서 개발과 환경 모델링)

  • Kim, Min-Yeong;Jo, Hyeong-Seok;Kim, Jae-Hun
    • Journal of Institute of Control, Robotics and Systems / v.8 no.9 / pp.776-787 / 2002
  • Automation of the welding process in shipyards is ultimately necessary, since the welding site is spatially enclosed by floors and girders, and welding operators are therefore exposed to hostile working conditions. To solve this problem, a mobile welding robot that can navigate autonomously within the enclosure has been developed. To achieve the welding task in the closed space, the robotic welding system needs a sensor system for work environment recognition and weld seam tracking, together with a specially designed environment recognition strategy. In this paper, a three-dimensional laser vision system is developed based on optical triangulation in order to provide the robot with a 3D map of the work environment. Using this sensor system, a neural network-based spatial filter is designed for extracting the center of the laser stripe and is evaluated in various situations. An environment modeling algorithm is proposed and tested, composed of a laser scanning module for 3D voxel modeling and a plane reconstruction module for mobile robot localization. Finally, an environment recognition strategy is developed so that the mobile welding robot can recognize its work environment efficiently. The sensor system design, the algorithm for sensing the partially structured environment with plane segments, and the recognition strategy are described and discussed in detail with a series of experiments.
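The optical triangulation underlying such a laser vision sensor amounts to intersecting the camera ray through a stripe pixel with the known laser plane. A minimal sketch; the intrinsics and plane values below are invented for illustration and are not from the paper:

```python
import numpy as np

def triangulate_stripe_pixel(pixel, K, plane_n, plane_d):
    """Back-project a laser-stripe pixel to a camera ray and intersect it
    with the laser plane n . X = d (optical triangulation)."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction, camera frame
    t = plane_d / (np.asarray(plane_n, float) @ ray)
    return t * ray                                   # 3D point on the laser plane

# Hypothetical intrinsics and a fronto-parallel laser plane 2 m ahead.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
point = triangulate_stripe_pixel((400.0, 240.0), K, [0.0, 0.0, 1.0], 2.0)
```

Sweeping the extracted stripe centers through this intersection, pixel by pixel and scan pose by scan pose, is what accumulates the 3D voxel map of the work environment.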

A Method for Eliminating Aiming Error of Unguided Anti-Tank Rocket Using Improved Target Tracking (향상된 표적 추적 기법을 이용한 무유도 대전차 로켓의 조준 오차 제거 방법)

  • Song, Jin-Mo;Kim, Tae-Wan;Park, Tai-Sun;Do, Joo-Cheol;Bae, Jong-sue
    • Journal of the Korea Institute of Military Science and Technology / v.21 no.1 / pp.47-60 / 2018
  • In this paper, we propose a method for eliminating the aiming error of an unguided anti-tank rocket using improved target tracking. Since predicted fire is necessary to hit moving targets with unguided rockets, a method was previously proposed to estimate the position and velocity of the target using the fire control system. However, that method has the problem that the hit rate may be lowered by the shooter's aiming error. To solve this problem, we use an image-based target tracking method to correct the error caused by the shooter. We also propose a robust tracking method based on TLD (Tracking Learning Detection) that considers the characteristics of the FCS (Fire Control System) devices. To verify the performance of the proposed algorithm, we measured the target velocity using GPS and compared it with our estimate. The results show that our method is robust to the shooter's aiming error.
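The predicted-fire step described above needs the target's velocity from a short track history. A constant-velocity least-squares sketch; the track values and flight time are invented, and this is only a generic stand-in, not the paper's estimator:

```python
import numpy as np

def estimate_velocity(times, positions):
    """Least-squares constant-velocity fit to tracked target positions.
    Returns (velocity, position at t = 0)."""
    t = np.asarray(times, dtype=float)
    P = np.asarray(positions, dtype=float)        # shape (n, dims)
    A = np.stack([t, np.ones_like(t)], axis=1)
    coef, *_ = np.linalg.lstsq(A, P, rcond=None)
    return coef[0], coef[1]

def predicted_aim_point(position_now, velocity, time_of_flight):
    """Lead the target by its displacement over the rocket's flight time."""
    return position_now + velocity * time_of_flight

# Hypothetical track: target moving at 5 m/s in x, sampled at 10 Hz.
times = np.arange(0.0, 1.0, 0.1)
track = np.stack([5.0 * times + 100.0, np.full_like(times, 20.0)], axis=1)
v, p0 = estimate_velocity(times, track)
aim = predicted_aim_point(track[-1], v, time_of_flight=1.5)
```

Correcting the shooter's aiming error with image-based tracking matters precisely because this lead computation amplifies any bias in the estimated velocity over the flight time.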