• Title/Summary/Keyword: Image prediction model


Runway visual range prediction using Convolutional Neural Network with Weather information

  • Ku, SungKwan;Kim, Seungsu;Hong, Seokmin
    • International Journal of Advanced Culture Technology
    • /
    • v.6 no.4
    • /
    • pp.190-194
    • /
    • 2018
  • The runway visual range is one of the important factors that determine whether an airplane can take off or land at a local airport. It is affected by weather conditions such as fog and wind. Pilots and aviation-related workers check local weather forecasts, including the runway visual range, for safe flight. However, several local airfields provide no such forecasts because of practical problems such as the deterioration, breakdown, or high purchase cost of the measurement equipment. To this end, this study proposes a runway visual range prediction model for a local airport by applying a convolutional neural network, an architecture widely used for image/video recognition, image classification, and natural language processing, to the prediction task. The prediction model is built from previous time-series data of wind speed, humidity, temperature, and runway visibility. This paper shows the usefulness of the proposed prediction model by comparing its output with measured data.
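
A minimal sketch of how such a model might be wired up, assuming a PyTorch 1D CNN over a 24-step window of the four variables named in the abstract; the window length and layer sizes are illustrative, not the authors' configuration:

```python
# Sketch: 1D CNN mapping a window of past weather observations
# (wind speed, humidity, temperature, RVR) to the next RVR value.
import torch
import torch.nn as nn

class RVRNet(nn.Module):
    def __init__(self, n_features=4, window=24):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)   # predicted runway visual range

    def forward(self, x):              # x: (batch, n_features, window)
        return self.head(self.conv(x).squeeze(-1))

model = RVRNet()
x = torch.randn(8, 4, 24)              # 8 samples, 4 variables, 24 past steps
print(model(x).shape)                  # torch.Size([8, 1])
```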

Concrete Crack Detection and Visualization Method Using CNN Model (CNN 모델을 활용한 콘크리트 균열 검출 및 시각화 방법)

  • Choi, Ju-hee;Kim, Young-Kwan;Lee, Han-Seung
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2022.04a
    • /
    • pp.73-74
    • /
    • 2022
  • Concrete structures occupy the largest proportion of modern infrastructure, and they frequently suffer from cracking. Existing concrete crack diagnosis methods have limitations in crack evaluation because they rely on expert visual inspection. Therefore, in this study, we design a deep learning model that detects, visualizes, and outputs cracks on the surface of RC structures from image data, using a CNN (Convolutional Neural Network) model that can process two- and three-dimensional data such as video and image data. An experimental study was conducted on an algorithm that automatically detects concrete cracks and visualizes them using a CNN model. For the three deep learning models used for training in this study, the concrete crack prediction accuracy exceeded 90%, and the InceptionV3-based CNN model in particular showed the highest accuracy. The crack detection visualization model showed high prediction accuracy, above 95% on average, for data with a crack width of 0.2 mm or more.
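
A minimal sketch of an InceptionV3-based crack classifier built by transfer learning, which is one plausible reading of the abstract; the binary head, input size, and inference-only setup are assumptions, not the authors' exact training recipe:

```python
# Sketch: InceptionV3 backbone with a two-class (crack / no crack) head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.inception_v3(weights=None, aux_logits=True)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # crack / no crack

backbone.eval()                        # inference mode (aux head unused in eval)
with torch.no_grad():
    patch = torch.randn(1, 3, 299, 299)   # InceptionV3 expects 299x299 input
    logits = backbone(patch)
    print(logits.softmax(dim=1))
```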


Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images (작물 모니터링을 위한 다중 센서 고해상도 위성영상의 시공간 융합 모델의 평가: Sentinel-2 및 RapidEye 영상 융합 실험)

  • Park, Soyeon;Kim, Yeseul;Na, Sang-Il;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.807-821
    • /
    • 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, originally developed for fusing mid- and low-resolution satellite images, for constructing a set of time-series high-resolution images for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated, considering the principle of spatio-temporal fusion. An experiment on the fusion of multi-temporal Sentinel-2 and RapidEye images over agricultural fields was conducted to evaluate prediction performance. Three representative fusion models, the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF), were applied in this comparative experiment. The three spatio-temporal fusion models exhibited different prediction performance in terms of prediction errors and spatial similarity. However, regardless of the model type, the correlation between the coarse-resolution images acquired on the pair date and on the prediction date mattered more for prediction performance than the temporal gap between the pair date and the prediction date. In addition, using the vegetation index itself as the input for spatio-temporal fusion yielded better prediction performance than computing the vegetation index from fused reflectance values, because it alleviates error propagation. These experimental results can serve as basic information both for selecting optimal image pairs and input types and for developing advanced spatio-temporal fusion models for crop monitoring.
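
A minimal sketch of the pair-date selection idea highlighted in the abstract, assuming plain per-pixel correlation between the coarse image on a candidate base date and the coarse image on the prediction date; the criterion and the toy data are illustrative:

```python
# Sketch: choose the base (pair) date whose coarse image best correlates
# with the coarse image acquired on the prediction date.
import numpy as np

def pick_pair_date(coarse_on_pred_date, coarse_on_candidates):
    target = coarse_on_pred_date.ravel()
    corrs = [np.corrcoef(target, c.ravel())[0, 1] for c in coarse_on_candidates]
    return int(np.argmax(corrs)), corrs

rng = np.random.default_rng(0)
pred = rng.random((100, 100))                       # coarse image on prediction date
candidates = [pred + rng.normal(0, s, pred.shape)   # candidate base-date images
              for s in (0.05, 0.3, 0.6)]
best, corrs = pick_pair_date(pred, candidates)
print(best, [round(c, 3) for c in corrs])           # the least-noisy candidate wins
```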

Determining Whether to Enter a Hazardous Area Using Pedestrian Trajectory Prediction Techniques and Improving the Training of Small Models with Knowledge Distillation (보행자 경로 예측 기법을 이용한 위험구역 진입 여부 결정과 Knowledge Distillation을 이용한 작은 모델 학습 개선)

  • Choi, In-Kyu;Lee, Young Han;Song, Hyok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.9
    • /
    • pp.1244-1253
    • /
    • 2021
  • In this paper, we propose a method for predicting in advance whether a pedestrian will enter a hazardous area after the current time, using a pedestrian trajectory prediction technique, together with an efficient simplification of the trajectory prediction network. In addition, we propose applying KD (Knowledge Distillation) to a small network for real-time operation in an embedded environment. Using the correlation between the predicted future path and the hazard zone, we determine whether the pedestrian will enter it, and we apply efficient KD when training the small network to minimize performance degradation. Experiments confirmed that the model with the proposed simplification improved speed by 37.49% compared with the existing model, at the cost of a slight decrease in accuracy. Training a small network with an initial accuracy of 91.43% using KD improved its accuracy to 94.76%.
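
A minimal sketch of a standard knowledge-distillation loss of the kind the abstract describes for training the small network, assuming a softened KL term against the teacher plus a hard cross-entropy term; the temperature, mixing weight, and the enter/not-enter labels are assumptions:

```python
# Sketch: distillation loss = alpha * soft (teacher) term + (1-alpha) * hard term.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, T=4.0, alpha=0.7):
    hard = F.cross_entropy(student_logits, target)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                              # rescale gradients for temperature T
    return alpha * soft + (1.0 - alpha) * hard

student_logits = torch.randn(16, 2, requires_grad=True)  # enter / not enter
teacher_logits = torch.randn(16, 2)
target = torch.randint(0, 2, (16,))
loss = distillation_loss(student_logits, teacher_logits, target)
loss.backward()
print(float(loss))
```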

Development of weight prediction 2D image technology using the surface shape characteristics of strawberry cultivars

  • Yoo, Hyeonchae;Lim, Jongguk;Kim, Giyoung;Kim, Moon Sung;Kang, Jungsook;Seo, Youngwook;Lee, Ah-yeong;Cho, Byoung-Kwan;Hong, Soon-Jung;Mo, Changyeun
    • Korean Journal of Agricultural Science
    • /
    • v.47 no.4
    • /
    • pp.753-767
    • /
    • 2020
  • The commercial value of strawberries is affected by various factors such as shape, size, and color. Among them, size, determined by weight, is one of the main factors deciding the quality grade of strawberries. In this study, an imaging technology was developed to predict the weight of strawberries using the surface shape characteristics of strawberry cultivars. For real-time weight measurement of strawberries in transport, an image measurement system for weight prediction was built with a charge-coupled device (CCD) color camera and a conveyor belt. A strawberry weight prediction algorithm was developed for three cultivars, Maehyang, Sulhyang, and Santa, using the number of pixels in the pulp region as the predictor of strawberry weight. The discrimination accuracy (R²) of the weight prediction models for the Maehyang, Sulhyang, and Santa cultivars was 0.9531, 0.951, and 0.9432, respectively. The discrimination accuracy (R²) and measurement error (RMSE) of the integrated weight prediction model for the three cultivars were 0.958 and 1.454 g, respectively. These results show that 2D imaging technology considering the shape characteristics of strawberries has the potential to predict strawberry weight.
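
A minimal sketch of the pixel-count-to-weight idea, assuming a crude color-threshold segmentation and a per-cultivar linear fit; the threshold and the calibration numbers are synthetic, not the article's data:

```python
# Sketch: count fruit pixels, then map pixel count to weight with a linear model.
import numpy as np

def fruit_pixel_count(image_rgb, red_threshold=120):
    """Very rough segmentation: count pixels whose red channel dominates."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    mask = (r > red_threshold) & (r > g) & (r > b)
    return int(mask.sum())

# Synthetic calibration set: fruit pixel counts vs. measured weights (g).
pixel_counts = np.array([5200, 6100, 7300, 8400, 9800], dtype=float)
weights_g = np.array([12.1, 14.0, 17.2, 19.9, 23.4])
slope, intercept = np.polyfit(pixel_counts, weights_g, 1)   # per-cultivar linear model

# A toy image with a red rectangle standing in for a strawberry.
img = np.zeros((200, 200, 3), dtype=np.uint8)
img[60:140, 50:150] = (200, 40, 60)
count = fruit_pixel_count(img)
print(count, round(slope * count + intercept, 1), "g (predicted)")
```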

Adaptive Prediction for Lossless Image Compression

  • Park, Sang-Ho
    • Proceedings of the Korea Society of Information Technology Applications Conference
    • /
    • 2005.11a
    • /
    • pp.169-172
    • /
    • 2005
  • A genetic-algorithm-based predictor for lossless image compression is proposed. We describe a genetic algorithm that learns a predictive model for lossless image compression. The resulting error (residual) image can be further compressed using entropy coding such as Huffman coding or arithmetic coding. We show that the proposed algorithm is feasible for lossless image compression.
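
A minimal sketch of predictive lossless coding of the kind the abstract relies on: a causal linear predictor produces a residual image that an entropy coder would then compress. The fixed neighbour weights below stand in for coefficients a genetic algorithm would evolve and are an assumption:

```python
# Sketch: predict each pixel from its causal neighbours; encode only residuals.
import numpy as np

def predict_residuals(img, w=(1.0, 1.0, -1.0)):
    """Causal planar predictor: left + top - top-left (weights tunable, e.g. by a GA)."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img, dtype=np.float64)
    pred[1:, 1:] = (w[0] * img[1:, :-1]       # left neighbour
                    + w[1] * img[:-1, 1:]     # top neighbour
                    + w[2] * img[:-1, :-1])   # top-left neighbour
    return img - np.rint(pred).astype(np.int32)

x = np.arange(64)
smooth = ((x[None, :] + 2 * x[:, None]) % 256).astype(np.uint8)  # toy gradient image
res = predict_residuals(smooth)
print(np.abs(res).mean(), smooth.mean())   # residuals are far smaller than raw pixels
```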


Prediction of Etch Profile Uniformity Using Wavelet and Neural Network

  • Park, Won-Sun;Lim, Myo-Taeg;Kim, Byungwhan
    • International Journal of Control, Automation, and Systems
    • /
    • v.2 no.2
    • /
    • pp.256-262
    • /
    • 2004
  • Conventionally, profile non-uniformity has been characterized by relying on an approximated profile with angle or anisotropy. In this study, a new non-uniformity model for the etch profile is presented by applying a discrete wavelet transform to images obtained by scanning electron microscopy (SEM). Prediction models for the wavelet-transformed data are then constructed using a back-propagation neural network. The proposed method was applied to data collected from the etching of tungsten material. Additionally, 7 experiments were conducted to obtain test data. Model performance was evaluated in terms of the average prediction accuracy (APA) and the best prediction accuracy (BPA). To account for randomness in the initial weights, two hundred models were generated for a given set of training factors. The behaviors of the APA and BPA were investigated as a function of the training factors, including the training tolerance, the number of hidden neurons, the initial weight distribution, and the two slopes of the bipolar sigmoid and linear functions. For all variations in training factors, the APA was not consistent with the BPA. The prediction accuracy was optimized using three approaches: the best-model-based approach, the average-model-based approach, and the combined-model-based approach. Despite having the largest APA, the first approach had the smallest BPA of the three approaches.
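
A minimal sketch of pairing a 2D discrete wavelet transform with a back-propagation network, assuming PyWavelets for the transform and a small scikit-learn MLP; the wavelet family, the energy-based feature summary, and the synthetic data are illustrative assumptions:

```python
# Sketch: DWT of a profile image -> sub-band energies -> MLP regression target.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wavelet_features(image, wavelet="haar"):
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    # Summarise each sub-band by its energy so the target stays low-dimensional.
    return np.array([np.square(band).mean() for band in (cA, cH, cV, cD)])

rng = np.random.default_rng(0)
settings = rng.random((20, 3))                  # e.g. power, pressure, flow
targets = np.stack([wavelet_features(rng.random((32, 32)) * s.sum())
                    for s in settings])

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(settings, targets)                    # back-propagation training
print(model.predict(settings[:1]).round(3))
```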

A Review of Mobile Display Image Quality

  • Kim, Youn Jin
    • Information Display
    • /
    • v.15 no.5
    • /
    • pp.22-32
    • /
    • 2014
  • The current research intends to quantify the effect of surround luminance on the shape of the spatial luminance CSF and to propose an image quality evaluation method that is adaptive to both the surround luminance and the spatial frequency of a given stimulus. The proposed image quality method extends a model called SQRI [8]. The non-linear behaviour of the HVS was taken into account by using the CSF. The model can be defined as the square-root integral of the product of the display MTF and the CSF. It is assumed that image quality can be determined by considering the MTF of the imaging system and the CSF of human observers. The CSF term in the original SQRI model was replaced by the surround-adaptive CSF quantified in this study, divided by the Fourier transform of the given stimulus. A few limitations of the current work should be addressed and revised in future studies. First, more accurate model predictions can be achieved when the actual display MTF is measured and used instead of an approximation. A further improvement in the model's prediction accuracy can then be made when the chromatic adaptation of the HVS is taken into account [45-46].
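
A minimal numerical sketch of an SQRI-style metric as the abstract states it (the square-root integral of the product of the display MTF and the CSF); the Gaussian MTF and the CSF shape below are illustrative stand-ins, not the measured curves in the article:

```python
# Sketch: integrate sqrt(MTF(u) * CSF(u)) over log spatial frequency.
import numpy as np

def sqri(mtf, csf, freqs):
    integrand = np.sqrt(np.clip(mtf * csf, 0.0, None)) / freqs   # du/u weighting
    return np.trapz(integrand, freqs) / np.log(2)                # JND-like units

u = np.linspace(0.5, 60.0, 500)        # spatial frequency, cycles per degree
mtf = np.exp(-(u / 20.0) ** 2)         # assumed Gaussian display MTF
csf = 75.0 * u * np.exp(-0.2 * u)      # assumed band-pass CSF shape
print(f"SQRI ~ {sqri(mtf, csf, u):.1f}")
```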

Landmark Selection Using CNN-Based Heat Map for Facial Age Prediction (안면 연령 예측을 위한 CNN기반의 히트 맵을 이용한 랜드마크 선정)

  • Hong, Seok-Mi;Yoo, Hyun
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.7
    • /
    • pp.1-6
    • /
    • 2021
  • The purpose of this study is to improve the performance of an artificial neural network system for facial image analysis through an image landmark selection technique. Landmark selection requires a CNN-based multi-layer ResNet model for classifying the age of facial images. From the trained ResNet model, a heat map is extracted that captures how the output node changes in response to changes in the input pixels. By combining multiple extracted heat maps, facial landmarks related to age classification are created. The importance of each pixel location can then be analyzed through these facial landmarks. In addition, by removing pixels with low weights, a significant amount of input data can be reduced.
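
A minimal sketch of one common way to obtain such an input-sensitivity heat map, assuming the gradient of the top class score with respect to the input of a ResNet backbone; the ResNet18 choice, the random stand-in image, and the keep-threshold are assumptions:

```python
# Sketch: gradient-based heat map over the input, then drop low-importance pixels.
import torch
from torchvision import models

model = models.resnet18(weights=None)
model.eval()

face = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in face image
score = model(face)[0].max()                               # top class score
score.backward()

heat = face.grad.abs().sum(dim=1).squeeze(0)               # (224, 224) heat map
keep = heat > heat.quantile(0.75)                          # keep top-25% pixels
print(heat.shape, int(keep.sum()))
```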

Very short-term rainfall prediction based on radar image learning using deep neural network (심층신경망을 이용한 레이더 영상 학습 기반 초단시간 강우예측)

  • Yoon, Seongsim;Park, Heeseong;Shin, Hongjoon
    • Journal of Korea Water Resources Association
    • /
    • v.53 no.12
    • /
    • pp.1159-1172
    • /
    • 2020
  • This study applied deep convolutional neural networks based on U-Net and SegNet, trained on a long record of weather radar data, to very-short-term rainfall prediction, and the results were compared with and evaluated against a translation model. For training and validation of the deep neural networks, Mt. Gwanak and Mt. Gwangdeoksan radar data were collected from 2010 to 2016 and converted to gray-scale image files in HDF5 format with a 1 km spatial resolution. The deep neural network model was trained to predict precipitation 10 minutes ahead from four consecutive radar images, and a recursive method of repeated forecasting was applied to reach a lead time of 60 minutes with the pretrained model. To evaluate the performance of the deep neural network prediction model, 24 rain cases in 2017 were forecast up to 60 minutes in advance. Evaluating the predictions with the mean absolute error (MAE) and critical success index (CSI) at thresholds of 0.1, 1, and 5 mm/hr, the deep neural network model performed better in terms of MAE at the 0.1 and 1 mm/hr thresholds, and performed better than the translation model in terms of CSI up to a lead time of 50 minutes. In particular, although the deep neural network prediction model generally outperformed the translation model for weak rainfall of 5 mm/hr or less, it had limitations in predicting distinct high-intensity precipitation, as shown by the evaluation at the 5 mm/hr threshold. As the lead time increases, spatial smoothing increases, thereby reducing the accuracy of the rainfall prediction. The translation model turned out to be superior in predicting exceedances of higher intensity thresholds (> 5 mm/hr) because it preserves distinct precipitation characteristics, but its rainfall position tends to be shifted incorrectly. This study is expected to be helpful for the future improvement of radar rainfall prediction models using deep neural networks. In addition, the massive weather radar dataset established in this study will be provided through open repositories for use in subsequent studies.
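
A minimal sketch of the recursive forecasting loop described in the abstract, where a model mapping four consecutive radar frames to the frame 10 minutes ahead is applied six times to reach a 60-minute lead time; the tiny convolutional stand-in replaces the study's U-Net/SegNet and is an assumption:

```python
# Sketch: recursive nowcasting — feed each prediction back into the input window.
import torch
import torch.nn as nn

nowcast = nn.Sequential(                      # stand-in for a trained U-Net/SegNet
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

frames = torch.rand(1, 4, 128, 128)           # last four 10-minute radar images
forecasts = []
with torch.no_grad():
    for step in range(6):                      # 6 x 10 min = 60-minute lead time
        nxt = nowcast(frames)                  # predict the next frame
        forecasts.append(nxt)
        frames = torch.cat([frames[:, 1:], nxt], dim=1)   # slide the window

print(len(forecasts), forecasts[-1].shape)     # 6, torch.Size([1, 1, 128, 128])
```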