• Title/Abstract/Keyword: Image Augmentation

Search results: 220 items (processing time: 0.026 s)

입구경계층 두께와 경계층 펜스가 터빈 캐스케이드내 열전달 특성에 미치는 영향 (Effects of the Inlet Boundary Layer Thickness and the Boundary Layer Fence on the Heat Transfer Characteristics in a Turbine Cascade)

  • 정지선;정진택
    • 대한기계학회:학술대회논문집 / 대한기계학회 2001년도 춘계학술대회논문집D / pp.765-770 / 2001
  • The objective of the present study is to investigate the effect of various inlet boundary layer thicknesses on the convective heat transfer distribution on a turbine cascade endwall and blade suction surface. In addition, boundary layer fences of a height appropriate to each inlet boundary layer thickness were applied to the turbine cascade endwall in order to reduce the secondary flow and to verify their influence on the heat transfer process within the turbine cascade. Convective heat transfer distributions over the measurement regions were obtained with an image processing system. The results show that heat transfer coefficients on the blade suction surface increased with increasing inlet boundary layer thickness. On the turbine cascade endwall, however, the magnitude of the heat transfer coefficients did not change with inlet boundary layer thickness. The results also show that the boundary layer fence is effective in reducing heat transfer on the suction surface; in the endwall region, on the other hand, the fence brought about an additional increase in heat transfer.


도면증강 객체기반의 건설공사 사전 시공검증시스템 개발 연구 (Development of Pre-construction Verification System using AR-based Drawings Object)

  • 김현승;강인석
    • 토지주택연구 / Vol. 11 No. 3 / pp.93-101 / 2020
  • Recently, as BIM-based construction simulation systems, 4D CAD tools using virtual reality (VR) objects have been applied in construction projects. In such systems, because objects are expressed as VR images, there is a sense of separation from the real environment, which limits their use by field engineers. For this reason, augmented reality (AR) technology is increasingly being applied to reduce this sense of separation and to express realistic objects. This study develops a methodology and a BIM module for a pre-construction verification system using AR technology, in order to increase the practical utility of VR-based BIM objects. To this end, the authors develop an AR-based drawing verification function and a drawing-object-based 4D model augmentation function that increase the practical utility of 2D drawings, and verify the applicability of the system through case analysis. Since VR object-based images appear insufficiently realistic to field engineers, the technology linking AR objects with the 4D model is expected to contribute to expanding the use of 4D CAD systems in construction projects.

Waste Classification by Fine-Tuning Pre-trained CNN and GAN

  • Alsabei, Amani;Alsayed, Ashwaq;Alzahrani, Manar;Al-Shareef, Sarah
    • International Journal of Computer Science & Network Security / Vol. 21 No. 8 / pp.65-70 / 2021
  • Waste accumulation is becoming a significant challenge in most urban areas and, if it continues unchecked, is poised to have severe repercussions on our environment and health. The massive industrialisation of our cities has been accompanied by commensurate waste creation that has become a bottleneck even for established waste management systems. While recycling is a viable solution for waste management, accurately classifying waste material for recycling can be daunting. In this study, transfer learning models were proposed to automatically classify waste into six material categories (cardboard, glass, metal, paper, plastic, and trash). The tested pre-trained models were ResNet50, VGG16, InceptionV3, and Xception. Data augmentation was performed using a Generative Adversarial Network (GAN) with various image-generation percentages. Models based on Xception and VGG16 were found to be more robust. In contrast, models based on ResNet50 and InceptionV3 were sensitive to the added machine-generated images, as their accuracy degraded significantly compared to training with no artificial data.
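The abstract's "various image generation percentages" idea — extending a real training set with a controlled fraction of GAN-generated images — can be sketched as follows. This is an illustrative sketch, not the paper's code; the function name and the idea of sampling synthetic images with replacement are assumptions.

```python
import random

def mix_synthetic(real_images, synthetic_images, pct, seed=0):
    """Return a training set containing all real images plus pct% as many
    machine-generated images, mimicking a variable image-generation
    percentage. Sampling with replacement is an assumption here."""
    n_extra = int(len(real_images) * pct / 100)
    rng = random.Random(seed)
    extra = [rng.choice(synthetic_images) for _ in range(n_extra)]
    return real_images + extra

# 100 real images + 30% synthetic -> 130 training samples
train = mix_synthetic(list(range(100)), ["gan_img"] * 10, pct=30)
print(len(train))  # 130
```

Sweeping `pct` over several values and retraining is how one would reproduce the sensitivity comparison (e.g. ResNet50 degrading as `pct` grows).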

A study on road damage detection for safe driving of autonomous vehicles based on OpenCV and CNN

  • Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication / Vol. 14 No. 2 / pp.47-54 / 2022
  • For the safe driving of autonomous vehicles, road damage detection is very important to lower the potential risk. To ensure safety while an autonomous vehicle is driving, technology that can cope with various obstacles is required. A priority among these is technology that recognizes static obstacles such as poor road conditions, as well as hazards that may be encountered while driving, such as crosswalks, manholes, hollows, and speed bumps. In this paper, we propose a method to measure image similarity and find damaged-road images using OpenCV image processing and a CNN algorithm. To implement this, we trained a CNN model using 280 training images and 70 test images out of 350 images in total. After training, object recognition was tested on 100 images: the average processing speed was 45.9 ms, the average recognition speed was 66.78 ms, and the average object accuracy was 92%. In the future, the driving safety of autonomous vehicles is expected to improve through technology that detects road obstacles encountered while driving.
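The 280/70 figures above are simply a shuffled 80/20 split of 350 images. A minimal sketch of that split (the function name, file-name pattern, and seed are illustrative, not from the paper):

```python
import random

def split_dataset(paths, train_frac=0.8, seed=42):
    """Shuffle image paths and split them into train/test lists; with
    350 images and an 80/20 split this reproduces the 280/70 counts."""
    rng = random.Random(seed)
    shuffled = paths[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    return shuffled[:n_train], shuffled[n_train:]

train, test = split_dataset([f"img_{i}.jpg" for i in range(350)])
print(len(train), len(test))  # 280 70
```

Seeding the shuffle keeps the split reproducible across runs, which matters when comparing model variants on the same held-out images.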

MalDC: Malicious Software Detection and Classification using Machine Learning

  • Moon, Jaewoong;Kim, Subin;Park, Jangyong;Lee, Jieun;Kim, Kyungshin;Song, Jaeseung
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16 No. 5 / pp.1466-1488 / 2022
  • Recently, the importance and necessity of artificial intelligence (AI), especially machine learning, has been emphasized. In fact, studies are actively underway to solve complex and challenging problems through the use of AI systems, such as intelligent CCTVs, intelligent AI security systems, and AI surgical robots. Information security, which involves analyzing and responding to security vulnerabilities in software, is no exception and is recognized as one of the fields where significant results are expected when AI is applied. This is because the frequency of malware incidents is gradually increasing, while the available security technologies are limited by the scarcity of software security experts and source code analysis tools. We conducted a study on MalDC, a technique that converts malware into images and detects and classifies it using machine learning. MalDC showed good performance and was able to analyze and classify different types of malware. MalDC applies a preprocessing step to minimize the noise generated in the image-conversion process and employs an image augmentation technique to reinforce the insufficient dataset, thus improving the accuracy of malware classification. To verify the feasibility of our method, we tested the malware classification technique used by MalDC on a dataset provided by Microsoft and on malware data collected by the Korea Internet & Security Agency (KISA). Consequently, an accuracy of 97% was achieved.
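The core step of "converting malware into images" is commonly done by packing the binary's raw bytes row-by-row into a fixed-width grid of 0–255 grayscale intensities. The sketch below shows that common approach; MalDC's exact preprocessing (width choice, padding, noise reduction) is not specified in the abstract, so everything here is an assumption.

```python
def bytes_to_grayscale(data: bytes, width: int = 256):
    """Pack a binary's raw bytes row-by-row into a width-column grid of
    0-255 intensities, zero-padding the last row. Each byte becomes one
    grayscale pixel."""
    padded = data + b"\x00" * (-len(data) % width)
    return [list(padded[i:i + width]) for i in range(0, len(padded), width)]

# A toy 602-byte "binary" starting with the PE magic 'MZ' -> 3 rows of 256
img = bytes_to_grayscale(b"\x4d\x5a" + bytes(600), width=256)
print(len(img), len(img[0]))  # 3 256
```

The resulting 2-D array can then be fed to any image classifier, and standard image augmentation applies directly to it.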

시각 장애인을 위한 상품 영양 정보 안내 시스템 (Product Nutrition Information System for Visually Impaired People)

  • 정종욱;이제경;김효리;오유수
    • 대한임베디드공학회논문지 / Vol. 18 No. 5 / pp.233-240 / 2023
  • Nutrition information about food is printed on label paper, which is very inconvenient for visually impaired people to read. To address this inconvenience, this paper proposes a product nutrition information guide system for visually impaired people. In the proposed system, the user's image data are input through the UI, and object recognition is carried out by YOLO v5. The system then provides voice guidance on the names and nutrition information of the recognized products. This paper constructs a new dataset that augments image data covering 319 classes of canned goods and late-night snack products using rotation-matrix transforms and pepper-noise and salt-noise techniques. The performance of the YOLO v5n, YOLO v5m, and YOLO v5l models was compared and analyzed through hyperparameter tuning, and the YOLO v5n model was trained on the constructed dataset. This paper also compares and analyzes the performance of the proposed system against that of previous studies.
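Of the augmentations listed above, salt-and-pepper noise is the simplest: a random subset of pixels is flipped to pure white (salt) or pure black (pepper). A minimal sketch on a plain 2-D pixel grid (the function name, noise amount, and seed are illustrative):

```python
import random

def salt_pepper(image, amount=0.05, seed=0):
    """Return a copy of a 2-D grayscale image with roughly `amount` of
    its pixels set to 255 (salt) or 0 (pepper), chosen uniformly."""
    rng = random.Random(seed)
    noisy = [row[:] for row in image]          # copy; leave input intact
    h, w = len(image), len(image[0])
    for _ in range(int(h * w * amount)):
        y, x = rng.randrange(h), rng.randrange(w)
        noisy[y][x] = 255 if rng.random() < 0.5 else 0
    return noisy

clean = [[128] * 10 for _ in range(10)]        # uniform mid-gray image
noisy = salt_pepper(clean, amount=0.1)
```

Applied per channel, the same idea carries over to RGB product photos; the rotation-matrix augmentation would be layered on top in the same pipeline.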

The development of food image detection and recognition model of Korean food for mobile dietary management

  • Park, Seon-Joo;Palvanov, Akmaljon;Lee, Chang-Ho;Jeong, Nanoom;Cho, Young-Im;Lee, Hae-Jeung
    • Nutrition Research and Practice / Vol. 13 No. 6 / pp.521-528 / 2019
  • BACKGROUND/OBJECTIVES: The aim of this study was to develop a Korean food image detection and recognition model for use in mobile devices for accurate estimation of dietary intake. MATERIALS/METHODS: We collected food images by taking pictures or by searching web images and built an image dataset for training a complex recognition model for Korean food. Augmentation techniques were applied to increase the dataset size. The training dataset contained more than 92,000 images categorized into 23 groups of Korean food. All images were down-sampled to a fixed resolution of 150 × 150 and then randomly divided into training and testing groups at a ratio of 3:1, resulting in 69,000 training images and 23,000 test images. We used a Deep Convolutional Neural Network (DCNN) for the complex recognition model and compared the results with those of other large-scale image recognition networks: AlexNet, GoogLeNet, VGG (Very Deep Convolutional Network), and ResNet. RESULTS: Our complex food recognition model, K-foodNet, had higher test accuracy (91.3%) and faster recognition time (0.4 ms) than the other networks. CONCLUSION: The results showed that K-foodNet achieved better performance in detecting and recognizing Korean food than other state-of-the-art models.
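The fixed 150 × 150 input described above implies a down-sampling step before training. A dependency-free sketch using nearest-neighbour sampling on a plain 2-D pixel grid (the paper does not state which resampling filter was used, so nearest-neighbour is an assumption; in practice a library resize would be used):

```python
def resize_nearest(image, size=150):
    """Down-sample a 2-D pixel grid to size x size by nearest-neighbour
    index mapping: output pixel (y, x) copies input pixel
    (y*h//size, x*w//size)."""
    h, w = len(image), len(image[0])
    return [[image[y * h // size][x * w // size] for x in range(size)]
            for y in range(size)]

# 300x300 gradient image -> 150x150
small = resize_nearest([[x for x in range(300)] for _ in range(300)], size=150)
print(len(small), len(small[0]))  # 150 150
```

The 3:1 split then follows directly: 92,000 images × 3/4 = 69,000 for training and 23,000 for testing.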

Design of Mobile Application for Learning Chemistry using Augmented Reality

  • Kim, Jin-Woong;Hur, Jee-Sic;Ha, Min Woo;Kim, Soo Kyun
    • 한국컴퓨터정보학회논문지 / Vol. 27 No. 9 / pp.139-147 / 2022
  • This study aims to develop a mobile application using augmented reality (AR) technology so that newcomers to chemistry can easily acquire the knowledge needed for learning it. The application recognizes a two-dimensional picture, augments the corresponding chemical structure as a three-dimensional object on the user's screen, and simultaneously provides related information from multiple fields, thereby offering a new chemistry-learning experience. To this end, a dedicated system and content were constructed; a login API and a real-time database were used for secure, real-time data management, and image-tracking technology was used for image recognition and 3D object augmentation. The experiments in this study produced meaningful results. Future work will use a chemical-structure data library to load and display data efficiently.

Estimating vegetation index for outdoor free-range pig production using YOLO

  • Sang-Hyon Oh;Hee-Mun Park;Jin-Hyun Park
    • Journal of Animal Science and Technology / Vol. 65 No. 3 / pp.638-651 / 2023
  • The objective of this study was to quantitatively estimate the level of grazing-area damage in outdoor free-range pig production using an unmanned aerial vehicle (UAV) with an RGB image sensor. Ten corn-field images were captured by the UAV over approximately two weeks, during which gestating sows were allowed to graze freely on a corn field measuring 100 × 50 m. The images were corrected to a bird's-eye view, divided into 32 segments, and sequentially input to a YOLOv4 detector to detect the corn according to its condition. The 43 raw training images, selected randomly out of the 320 segmented images, were flipped to create 86 images, which were further augmented by rotating them in 5-degree increments for a total of 6,192 images. These 6,192 images were then augmented again by applying three random color transformations to each image, resulting in 24,768 images. The occupancy rate of corn in the field was estimated efficiently using You Only Look Once (YOLO). Observation began on day 2, and it was evident that almost all the corn had disappeared by the ninth day. When grazing 20 sows on a 50 × 100 m cornfield (250 m2/sow), it appears that the animals should be rotated to other grazing areas to protect the cover crop after at least five days. In agricultural technology, most research using machine and deep learning concerns the detection of fruits and pests, and research on other application fields is needed. In addition, large-scale image data collected by field experts are required as training data for deep learning. When the data required for deep learning are insufficient, extensive data augmentation is required.
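The augmentation arithmetic in the abstract (43 → 86 → 6,192 → 24,768) is worth making explicit: flipping doubles the set, a rotation every 5 degrees multiplies it by 360/5 = 72 (the 0-degree copy included), and keeping each image plus three colour variants multiplies it by 4. A small sketch of that bookkeeping (the function is illustrative, not from the paper):

```python
def augmented_count(n_raw, flip=True, rot_step_deg=5, n_color=3):
    """Count the images produced by the flip -> rotate -> colour-transform
    augmentation chain described in the abstract."""
    n = n_raw * (2 if flip else 1)   # flipping: 43 -> 86
    n *= 360 // rot_step_deg         # one rotation per step: 86 -> 6,192
    n *= 1 + n_color                 # original + colour variants: -> 24,768
    return n

print(augmented_count(43))  # 24768
```

Note the factor of 1 + n_color: the 24,768 total only works out if each rotated image is kept alongside its three colour-transformed copies.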

Automatic Estimation of Tillers and Leaf Numbers in Rice Using Deep Learning for Object Detection

  • Hyeokjin Bak;Ho-young Ban;Sungryul Chang;Dongwon Kwon;Jae-Kyeong Baek;Jung-Il Cho ;Wan-Gyu Sang
    • 한국작물학회:학술대회논문집 / 한국작물학회 2022년도 추계학술대회 / pp.81-81 / 2022
  • Recently, many studies on big-data-based smart farming have been conducted, including research to quantify morphological characteristics from image data of various crops. Rice is one of the most important food crops in the world, and much research has been done to predict and model rice yield. The number of productive tillers per plant is one of the important agronomic traits associated with the grain yield of the rice crop. However, modeling the basic growth characteristics of rice requires accurate measurements. The existing method of measurement by humans is not only labor intensive but also prone to human error, so conversion to digital data is necessary to obtain accurate phenotyping quickly. In this study, we present an image-based method to predict leaf number and evaluate tiller number of individual rice plants using the YOLOv5 deep learning network. We experimented with various networks of the YOLOv5 model and compared them to determine which gave higher prediction accuracy. We also performed data augmentation, a method used to complement small datasets. Based on the numbers of leaves and tillers actually measured in the rice crop, the number of leaves predicted by the model from the image data and an existing regression equation were used to evaluate the number of tillers from the image data.
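Counting leaves from a detector's output reduces to counting the boxes of one class above a confidence threshold. A minimal sketch of that counting step; the `(class, confidence, box)` tuple format and the class names are assumptions, not the YOLOv5 output format the authors used.

```python
def count_objects(detections, cls, conf_thresh=0.5):
    """Count detections of a given class whose confidence clears the
    threshold; with per-plant images this count is the leaf/tiller
    estimate. Each detection is an assumed (class, conf, box) tuple."""
    return sum(1 for c, conf, _box in detections
               if c == cls and conf >= conf_thresh)

# Toy detections for one rice plant image
dets = [("leaf", 0.9, (10, 10, 30, 40)),
        ("leaf", 0.4, (50, 10, 70, 40)),     # below threshold: ignored
        ("tiller", 0.8, (15, 60, 25, 90))]
print(count_objects(dets, "leaf"))  # 1
```

The tiller number would then come from plugging this leaf count into the regression equation the abstract mentions.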
