• Title/Summary/Keyword: artificial vision

The Power Line Deflection Monitoring System using Panoramic Video Stitching and Deep Learning (딥 러닝과 파노라마 영상 스티칭 기법을 이용한 송전선 늘어짐 모니터링 시스템)

  • Park, Eun-Soo;Kim, Seunghwan;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.25 no.1 / pp.13-24 / 2020
  • In Korea there are about nine million power poles and 1.3 million kilometers of power lines for electric power distribution. Maintaining such a large number of facilities requires considerable manpower and time, and various fault-diagnosis techniques using artificial intelligence have recently been studied. This paper therefore proposes a power line deflection detection system that applies artificial intelligence and computer vision to images captured by a vision system. The proposed system proceeds as follows: (i) detection of transmission towers using an object detection system; (ii) histogram equalization to compensate for degraded image quality in the video data; (iii) panoramic video stitching, since the distance between two transmission towers is generally long and the entire power line must be captured; and (iv) deflection detection using computer vision after applying a power line detection algorithm. Each step is explained and evaluated experimentally.
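
As a rough illustration of steps (ii) and (iii), the sketch below equalizes frame luminance and stitches overlapping views with OpenCV; the file names and stitcher settings are illustrative assumptions, not the paper's implementation.

```python
import cv2

def equalize_frame(frame_bgr):
    """Step (ii): equalize the luminance channel only, preserving color."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

def stitch_views(paths):
    """Step (iii): stitch overlapping views between two towers into a panorama."""
    images = [equalize_frame(cv2.imread(p)) for p in paths]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama

# Hypothetical usage with three overlapping frames along one span:
# pano = stitch_views(["view_left.jpg", "view_mid.jpg", "view_right.jpg"])
# cv2.imwrite("powerline_panorama.jpg", pano)
```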

Design of the Vision Based Head Tracker Using Area of Artificial Mark (인공표식의 면적을 이용하는 영상 기반 헤드 트랙커 설계)

  • 김종훈;이대우;조겸래
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.34 no.7 / pp.63-70 / 2006
  • This paper describes a vision-based head tracker that uses the area of artificial marks. The head tracker estimates translational and rotational motions detected by a web camera, and the motion is recovered through image processing and a neural network. Because of the characteristics of the cockpit, a specific color on the helmet is tracked for translational motion, while rotational motion is tracked via a neural network whose input is the ratio of two differently colored areas on the helmet. Back-propagation and an RBFN (Radial Basis Function Network) are used as the neural network algorithms; both the feedback-based back-propagation network and the statistically based RBFN perform well in tracking a nonlinear system such as head motion. Finally, the paper analyzes and compares their tracking performance.
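
As a rough illustration of the colored-area-ratio input described above, the sketch below measures two HSV color areas on a frame and feeds their ratio to a small back-propagation regressor; the color ranges, training pairs, and network size are illustrative assumptions (an RBFN could be substituted).

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPRegressor  # back-propagation MLP

def color_area(frame_bgr, lower_hsv, upper_hsv):
    """Count pixels falling inside an HSV color range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    return int(cv2.countNonZero(mask))

def area_ratio(frame_bgr):
    """Ratio of two marker colors on the helmet (hypothetical red and blue ranges)."""
    red = color_area(frame_bgr, (0, 120, 70), (10, 255, 255))
    blue = color_area(frame_bgr, (100, 120, 70), (130, 255, 255))
    return red / (blue + 1e-6)

# Hypothetical training pairs: area ratios measured at known head angles (degrees).
ratios = np.array([[0.5], [0.8], [1.0], [1.3], [1.7]])
angles = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(ratios, angles)
# predicted_angle = net.predict([[area_ratio(frame)]])
```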

Development of an intelligent edge computing device equipped with on-device AI vision model (온디바이스 AI 비전 모델이 탑재된 지능형 엣지 컴퓨팅 기기 개발)

  • Kang, Namhi
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.17-22 / 2022
  • In this paper, we design a lightweight embedded device that supports intelligent edge computing and show that it detects objects in images from a camera in real time. The proposed system can be applied to environments without pre-installed infrastructure, such as intelligent video surveillance for industrial sites or military areas, or video security systems mounted on autonomous vehicles such as drones. On-device AI (artificial intelligence) technology is increasingly required for the widespread adoption of intelligent vision recognition systems. Offloading computation from an image acquisition device to a nearby edge device enables fast service with less network and system resources than AI services performed in the cloud. In addition, the approach is expected to be applied safely across industries because it reduces the attack surface exposed to hacking and minimizes the disclosure of sensitive data.
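
As a rough illustration of real-time on-device detection, the sketch below runs a MobileNet-SSD model on frames from a local camera with OpenCV's DNN module; the model files and confidence threshold are assumptions, not the device's actual software stack.

```python
import cv2

# Hypothetical MobileNet-SSD weights deployed on the edge device.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
cap = cv2.VideoCapture(0)  # local camera attached to the device

while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape (1, 1, N, 7): image id, class, score, box
    h, w = frame.shape[:2]
    for i in range(detections.shape[2]):
        score = detections[0, 0, i, 2]
        if score > 0.5:  # keep confident detections only
            box = detections[0, 0, i, 3:7] * [w, h, w, h]
            x1, y1, x2, y2 = box.astype(int)
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                          (0, 255, 0), 2)
    cv2.imshow("edge detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```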

Research on damage detection and assessment of civil engineering structures based on DeepLabV3+ deep learning model

  • Chengyan Song
    • Structural Engineering and Mechanics / v.91 no.5 / pp.443-457 / 2024
  • At present, traditional concrete surface inspection methods based on artificial vision suffer from high cost and safety risks, while existing computer vision methods rely on manually selected features, making them sensitive to environmental changes and difficult to generalize. To solve these problems, this paper introduces deep learning in the field of computer vision to extract structural damage features automatically, with excellent detection speed and strong generalization ability. The main contents of this study are as follows. (1) A method based on the DeepLabV3+ convolutional neural network is proposed for surface detection of post-earthquake structural damage, including concrete cracks, spalling, and exposed steel bars. Key semantic information is extracted by different backbone networks, and datasets containing various types of surface damage are used for training, testing, and evaluation. Intersection-over-union ratios of 54.4%, 44.2%, and 89.9% on the test set demonstrate the network's ability to identify different types of structural surface damage accurately in pixel-level segmentation, highlighting its effectiveness in varied testing scenarios. (2) A semantic segmentation model based on the DeepLabV3+ convolutional neural network is proposed for the detection and evaluation of post-earthquake structural components. Using a dataset that includes building structural components and their damage degrees for training, testing, and evaluation, semantic segmentation detection accuracies of 98.5% and 56.9% were recorded. To provide a comprehensive assessment that considers both false positives and false negatives, the Mean Intersection over Union (Mean IoU) was employed as the primary evaluation metric, so that the network's performance in detecting and evaluating pixel-level damage in post-earthquake structural components is evaluated uniformly across all experiments. By incorporating deep learning, this study offers an innovative solution for accurately identifying post-earthquake damage in civil engineering structures and contributes to empirical research on automated detection and evaluation within structural health monitoring.
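
As a small illustration of the Mean IoU metric used in the evaluations above, the sketch below computes it over toy label maps; the class list and arrays are illustrative only, not the study's data.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes present in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c)
        target_c = (target == c)
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:  # class absent in both maps; skip it
            continue
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy label maps: 0 = background, 1 = crack, 2 = spalling, 3 = exposed rebar
pred   = np.array([[0, 1, 1], [2, 2, 3]])
target = np.array([[0, 1, 0], [2, 3, 3]])
print(mean_iou(pred, target, num_classes=4))
```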

Analysis of Korea's Artificial Intelligence Competitiveness Based on Patent Data: Focusing on Patent Index and Topic Modeling (특허데이터 기반 한국의 인공지능 경쟁력 분석 : 특허지표 및 토픽모델링을 중심으로)

  • Lee, Hyun-Sang;Qiao, Xin;Shin, Sun-Young;Kim, Gyu-Ri;Oh, Se-Hwan
    • Informatization Policy / v.29 no.4 / pp.43-66 / 2022
  • With the development of artificial intelligence technology, worldwide competition for artificial intelligence patents is intensifying. Over the period 2000-2021, artificial intelligence patent applications at the US Patent and Trademark Office increased steadily, and the growth rate has been steeper since the 2010s. An analysis of Korea's artificial intelligence competitiveness through patent indices shows that patent activity, impact, and marketability are superior in areas such as auditory intelligence and visual intelligence. Compared to other countries, however, Korea's artificial intelligence patents are strong overall in activity and marketability but somewhat inferior in technological impact. While noise canceling and voice recognition have recently declined as artificial intelligence topics, growth is expected in areas such as model learning optimization, smart sensors, and autonomous driving. In Korea's case, further effort is required, as patent applications are somewhat lacking in areas such as fraud detection/security and medical vision learning.
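
As a rough illustration of the topic-modeling side of this analysis, the sketch below fits an LDA model over a tiny toy set of patent-like abstracts with scikit-learn; the corpus, topic count, and vectorizer settings are illustrative assumptions, not the study's data or configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in corpus of patent-like abstracts.
abstracts = [
    "noise canceling microphone array for voice recognition",
    "model learning optimization for deep neural networks",
    "smart sensor fusion for autonomous driving perception",
    "fraud detection using anomaly scores on transactions",
]
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top terms per topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```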

A Study on the Automated Payment System for Artificial Intelligence-Based Product Recognition in the Age of Contactless Services

  • Kim, Heeyoung;Hong, Hotak;Ryu, Gihwan;Kim, Dongmin
    • International Journal of Advanced Culture Technology / v.9 no.2 / pp.100-105 / 2021
  • Contactless service is rapidly emerging as a new growth strategy because consumers are reluctant to face-to-face interaction during the global coronavirus disease 2019 (COVID-19) pandemic, and various technologies are being developed to support the fast-growing contactless service market. The restaurant industry in particular is one of the fields most in need of contactless service technologies, and a representative example is the kiosk, which reduces labor costs for restaurant owners and provides psychological comfort and satisfaction to customers. In this paper, we propose a solution for restaurant store operation through an unmanned kiosk that uses state-of-the-art artificial intelligence (AI) image recognition technology. The proposed system should be especially useful for products without barcodes in bakeries, fresh-food sections (fruits, vegetables, etc.), and self-service restaurants on highways, where such products increase labor costs and cause many hassles. The system recognizes products without barcodes using image-based AI algorithms and makes payments automatically. To test feasibility, we built an AI vision system with a commercial camera and conducted an image recognition test by training object detection models on donut images. The system also includes a self-learning mechanism that uses mismatched information collected during operation, allowing recognition performance to be upgraded continuously. We propose a fully automated payment system based on AI vision technology and demonstrate its feasibility through a performance test. The system realizes contactless self-checkout in the restaurant business and improves cost savings in managing human resources.
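
As a rough illustration of a detector-based self-checkout flow like the one described above, the sketch below fine-tunes a small pretrained YOLO model on a hypothetical product dataset and sums prices for detected items; the model choice, dataset file, class names, and prices are assumptions, not the paper's system.

```python
from ultralytics import YOLO

# Fine-tune a small pretrained detector on a hypothetical product dataset
# described by a standard YOLO data file (images plus bounding-box labels).
model = YOLO("yolov8n.pt")
model.train(data="bakery_products.yaml", epochs=50, imgsz=640)

# At the kiosk, detect items on the tray and map them to menu prices.
PRICES = {"plain_donut": 1500, "glazed_donut": 1800}  # hypothetical prices in KRW
results = model.predict("tray_photo.jpg", conf=0.5)
total = 0
for box in results[0].boxes:
    name = results[0].names[int(box.cls)]
    total += PRICES.get(name, 0)
print(f"amount due: {total} KRW")
```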

A Study on Effective Interpretation of AI Model based on Reference (Reference 기반 AI 모델의 효과적인 해석에 관한 연구)

  • Hyun-woo Lee;Tae-hyun Han;Yeong-ji Park;Tae-jin Lee
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.3 / pp.411-425 / 2023
  • Today, AI (artificial intelligence) technology is widely used in many fields, performing classification and regression tasks according to the purpose of use, and research on it continues actively. In the security field in particular, unexpected threats must be detected, and unsupervised anomaly detection techniques, which can detect threats without adding known threat information to the model training process, are a promising approach. However, most prior studies that provide interpretability for AI judgments are designed for supervised learning, so they are difficult to apply to unsupervised models whose learning methods are fundamentally different. In addition, existing vision-centered studies on interpreting AI mechanisms are not suitable for the security field, whose data are not expressed as images. Therefore, this paper uses a technique that provides interpretability for detected anomalies by searching for and comparing optimized references, which correspond to the source of intrusion attacks, and proposes additional reference-based logic to search for the data closest to the real data. Based on real data, the method aims to provide a more intuitive interpretation of anomalies and to promote effective use of anomaly detection models in the security field.
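
As a rough illustration of the reference idea (not the paper's optimization-based search), the sketch below finds the nearest normal sample to a detected anomaly and ranks the features that deviate most from it; the data and feature names are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical traffic features and a small set of normal reference samples.
feature_names = ["packet_rate", "bytes_out", "failed_logins", "dst_ports"]
normal_data = np.array([
    [10.0, 2000.0, 0.0, 3.0],
    [12.0, 2500.0, 1.0, 4.0],
    [ 9.0, 1800.0, 0.0, 2.0],
])
anomaly = np.array([[11.0, 2100.0, 25.0, 60.0]])  # a detected anomaly

# Find the closest normal sample to serve as the reference for interpretation.
nn = NearestNeighbors(n_neighbors=1).fit(normal_data)
_, idx = nn.kneighbors(anomaly)
reference = normal_data[idx[0][0]]

# Rank features by how far the anomaly deviates from its nearest reference.
deviation = np.abs(anomaly[0] - reference)
for i in deviation.argsort()[::-1]:
    print(f"{feature_names[i]}: anomaly={anomaly[0][i]}, reference={reference[i]}")
```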

Training Dataset Generation through Generative AI for Multi-Modal Safety Monitoring in Construction

  • Insoo Jeong;Junghoon Kim;Seungmo Lim;Jeongbin Hwang;Seokho Chi
    • International conference on construction engineering and project management / 2024.07a / pp.455-462 / 2024
  • In the construction industry, known for its dynamic and hazardous environments, there exists a crucial demand for effective safety incident prevention. Traditional approaches to monitoring on-site safety, despite their importance, suffer from being laborious and heavily reliant on subjective, paper-based reports, which results in inefficiencies and fragmented data. Additionally, the incorporation of computer vision technologies for automated safety monitoring encounters a significant obstacle due to the lack of suitable training datasets. This challenge is due to the rare availability of safety accident images or videos and concerns over security and privacy violations. Consequently, this paper explores an innovative method to address the shortage of safety-related datasets in the construction sector by employing generative artificial intelligence (AI), specifically focusing on the Stable Diffusion model. Utilizing real-world construction accident scenarios, this method aims to generate photorealistic images to enrich training datasets for safety surveillance applications using computer vision. By systematically generating accident prompts, employing static prompts in empirical experiments, and compiling datasets with Stable Diffusion, this research bypasses the constraints of conventional data collection techniques in construction safety. The diversity and realism of the produced images hold considerable promise for tasks such as object detection and action recognition, thus improving safety measures. This study proposes future avenues for broadening scenario coverage, refining the prompt generation process, and merging artificial datasets with machine learning models for superior safety monitoring.
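
As a rough illustration of prompt-driven image generation with Stable Diffusion, the sketch below renders a few hypothetical accident prompts with the diffusers library; the model id, prompts, and output names are assumptions, and a GPU is assumed to be available.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical static prompts describing construction accident scenarios.
prompts = [
    "construction worker slipping near an unguarded floor opening, photorealistic",
    "worker without a hard hat standing under a suspended crane load, photorealistic",
]
for i, prompt in enumerate(prompts):
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"synthetic_accident_{i:03d}.png")  # add to the training dataset
```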

Comparison of Play Ability of Soccer Fields with Natural Turfgrass, Artificial Turf and Bare Ground (천연잔디, 인조잔디 및 맨땅 축구장에서 축구 경기력 비교)

  • Lee, Jae-Pil;Park, Hyun-Chul;Kim, Doo-Hwan
    • Asian Journal of Turfgrass Science / v.20 no.2 / pp.203-211 / 2006
  • This study investigated differences in playing ability among soccer fields established with natural turfgrass, artificial turf, and bare ground. The natural turfgrass fields were established with cool-season grass (Kentucky bluegrass 80% + perennial ryegrass 20%) and zoysiagrass, the artificial turf field was constructed with Konigreen DV5000™, and the bare ground was sandy soil. Data such as ball rolling distance and vertical ball rebound were collected at the Sports Science Town of Konkuk University from 2005 to 2006, using a Hummel Air Vision #1 ball certified by the KFA (Korea Football Association) at a ball pressure of 1.0 lb. Ball rolling distance was longest on bare ground (13.6 m), followed by artificial turf (11.4 m), cool-season grass (7.8 m), and zoysiagrass (4.7 m); it decreased with lower frequency of use, stronger rigidity, and higher turfgrass density. Vertical ball rebound was highest on bare ground (1.0 m), followed by artificial turf (0.9 m), cool-season grass (0.6 m), and zoysiagrass (0.4 m), and was likewise lower under conditions of low use frequency, strong rigidity, and high density. Neither ball rolling distance nor vertical ball rebound was greatly affected by years after establishment on the cool-season grass field maintained with intensive culture. However, the zoysiagrass field under low-intensity culture showed longer ball rolling distance and higher vertical ball rebound over time after establishment.

Indoor Location and Pose Estimation Algorithm using Artificial Attached Marker (인공 부착 마커를 활용한 실내 위치 및 자세 추정 알고리즘)

  • Ahn, Byeoung Min;Ko, Yun-Ho;Lee, Ji Hong
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.240-251 / 2016
  • This paper presents a real-time indoor location and pose estimation method that uses simple artificial markers and image analysis for warehouse automation. Conventional indoor localization methods cannot work robustly in warehouses, where severe environmental changes usually occur as stocked goods are moved. To overcome this problem, the proposed framework places artificial markers with different interior patterns at predefined positions on the warehouse floor. The proposed algorithm obtains marker candidate regions from a captured image by a simple binarization and labeling procedure, then extracts the interior pattern information from each candidate region to decide whether it is a true marker. The extracted interior pattern information and the outer boundary of the marker are used to estimate the location and heading angle of the localization system. Experimental results show that the proposed method provides performance almost equivalent to that of a conventional method using an expensive LIDAR sensor and the AMCL algorithm.
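
As a rough illustration of the binarization-and-labeling step described above, the sketch below extracts marker candidate regions with OpenCV; the area thresholds and file name are illustrative assumptions, and decoding the interior pattern is left as a comment.

```python
import cv2

def marker_candidates(image_path, min_area=500, max_area=50000):
    """Binarize the floor image, label connected components, and keep plausible marker regions."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    candidates = []
    for i in range(1, num):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:
            x, y, w, h = stats[i, :4]  # bounding box of the candidate region
            candidates.append(((x, y, w, h), centroids[i]))
    return candidates

# Each candidate's interior pattern would then be decoded to verify that it is
# a true marker and to recover the position and heading angle of the vehicle.
# cands = marker_candidates("warehouse_floor.jpg")
```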