• Title/Summary/Keyword: Processing Accuracy

Detection of Zebra-crossing Areas Based on Deep Learning with Combination of SegNet and ResNet (SegNet과 ResNet을 조합한 딥러닝에 기반한 횡단보도 영역 검출)

  • Liang, Han;Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.3 / pp.141-148 / 2021
  • This paper presents a method to detect zebra-crossings using deep learning that combines SegNet and ResNet. For blind pedestrians, a safe crossing system depends on knowing exactly where zebra-crossings are. Deep-learning-based zebra-crossing detection can be a good solution to this problem, and robotic vision-based assistive technologies have sprung up over the past few years, focusing on specific scene objects using monocular detectors. These traditional methods have achieved significant results and enhanced zebra-crossing perception to a large extent, but at the cost of relatively long processing times. Running all detectors jointly incurs long latency and becomes computationally prohibitive on wearable embedded systems. In this paper, we propose a model for fast and stable segmentation of zebra-crossings from captured images. The model is an improved combination of SegNet and ResNet and consists of three steps. First, the input image is subsampled to extract image features, and the convolutional network of ResNet is modified to serve as the new encoder. Second, the abstract features are restored to the original image size through SegNet's original up-sampling network. Finally, the method classifies all pixels and calculates the per-pixel accuracy. The experimental results demonstrate the efficiency of the modified semantic segmentation algorithm at a relatively high computing speed.
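
The encoder-decoder pipeline the abstract describes (a modified ResNet as encoder, SegNet-style up-sampling back to the input resolution, then per-pixel classification) can be illustrated with a minimal PyTorch sketch. This is an assumption-laden outline, not the authors' implementation: the ResNet-18 backbone, layer sizes, and two-class output are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

class ResNetSegNet(nn.Module):
    """Rough sketch: ResNet-18 stages as encoder, simple SegNet-style decoder."""
    def __init__(self, num_classes=2):  # zebra-crossing vs. background (assumed)
        super().__init__()
        resnet = models.resnet18(weights=None)
        # Encoder: reuse the ResNet convolutional stages (downsamples by 32 overall).
        self.encoder = nn.Sequential(
            resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
            resnet.layer1, resnet.layer2, resnet.layer3, resnet.layer4,
        )
        # Decoder: upsample back to the input size (no skip connections in this sketch).
        def up_block(in_ch, out_ch):
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(in_ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
        self.decoder = nn.Sequential(
            up_block(512, 256), up_block(256, 128), up_block(128, 64),
            up_block(64, 32), up_block(32, 16),
            nn.Conv2d(16, num_classes, 1),  # per-pixel class scores
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Per-pixel classification: argmax over the class scores for each pixel.
model = ResNetSegNet()
logits = model(torch.randn(1, 3, 256, 256))  # (1, 2, 256, 256)
pred = logits.argmax(dim=1)                  # (1, 256, 256) class map
```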

Deep Learning Algorithm and Prediction Model Associated with Data Transmission of User-Participating Wearable Devices (사용자 참여형 웨어러블 디바이스 데이터 전송 연계 및 딥러닝 대사증후군 예측 모델)

  • Lee, Hyunsik;Lee, Woongjae;Jeong, Taikyeong
    • Journal of Korea Society of Industrial Information Systems / v.25 no.6 / pp.33-45 / 2020
  • This paper examines how the latest cutting-edge technologies can predict individual diseases in real medical environments, at a time when wearable devices of various types are rapidly proliferating in the healthcare domain. Through a process of collecting, processing, and transmitting data that merges clinical data, genetic data, and life-log data via a user-participating wearable device, it presents how a learning model and a feedback model are connected in a deep neural network environment. In a real clinical-trial setting of medical IT, the effect of a specific gene associated with metabolic syndrome on the disease is measured, and clinical information and life-log data are merged to process heterogeneous data. In other words, the study demonstrates the objective suitability and certainty of a deep neural network over heterogeneous data, and on this basis evaluates performance under noise in a real deep learning environment. In the case of the autoencoder, we show that the accuracy and predicted values measured every 1,000 epochs change approximately linearly as the value of the variable increases.
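
The data-merging step the abstract alludes to, concatenating clinical, genetic, and life-log features per subject before feeding them to a neural network, can be sketched roughly as below. All feature dimensions, the random placeholder data, and the small MLP are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical merged dataset: clinical, genetic, and life-log features per subject.
rng = np.random.default_rng(0)
clinical = rng.normal(size=(500, 10))          # e.g., blood pressure, glucose, lipids
genetic  = rng.integers(0, 3, size=(500, 20))  # e.g., SNP genotypes coded 0/1/2
lifelog  = rng.normal(size=(500, 15))          # e.g., steps, sleep, heart rate from wearables
y = rng.integers(0, 2, size=500)               # metabolic syndrome label (illustrative)

# Merge the heterogeneous sources into one feature matrix and normalize.
X = np.hstack([clinical, genetic, lifelog])
X = StandardScaler().fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```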

Influences on Time and Spatial Characteristics of Soccer Pass Success Rate: A Case Study of the 2018 World Cup in Russia (시간과 공간적 특성에 따른 축구 패스 성공률 분석: 2018 러시아 월드컵 대회 자료를 중심으로)

  • Lee, Seung-Hun;Kim, Young-Hoon
    • Journal of Digital Convergence / v.19 no.1 / pp.475-483 / 2021
  • The purpose of this study is to identify the temporal and spatial characteristics of pass accuracy using secondary processed data and official records collected from video of the 2018 FIFA World Cup Russia. For a total of 128 games, pass success rates by game result, passing time, and passing position were analyzed with a two-way repeated-measures ANOVA. The results showed no difference between winning and losing groups, and no interaction effects between passing time and location. By passing time, the success rate was higher in the first half, peaking in the middle of the first half (15~30 minutes, 79.2%) and the middle of the second half (60~75 minutes, 77.9%). By position, pass success rates were, in descending order, the defense-midfield area (83.9%), the midfield-attack area (81.7%), the defense area (70.6%), and the attack area (61.1%). In conclusion, given the relative competitive strength characteristic of World Cup games, there was no difference in pass success rate between winning and losing teams, and follow-up research analyzing game content, rather than the factors behind winning and losing, is needed.
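
A two-way repeated-measures ANOVA of the kind described (pass success rate by time period and pitch zone, measured repeatedly for each team) could be run, for example, with statsmodels; the column names, factor levels, and randomly generated balanced data below are purely hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Illustrative, balanced data: each team has a pass success rate for every
# combination of time period and pitch zone (column names are hypothetical).
rng = np.random.default_rng(0)
teams   = [f"team_{i}" for i in range(32)]
periods = ["0-15", "15-30", "30-45", "45-60", "60-75", "75-90"]
zones   = ["defense", "defense-midfield", "midfield-attack", "attack"]

rows = [
    {"team": t, "period": p, "zone": z, "success_rate": rng.uniform(60, 85)}
    for t in teams for p in periods for z in zones
]
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA: time period x pitch zone within each team.
res = AnovaRM(df, depvar="success_rate", subject="team",
              within=["period", "zone"]).fit()
print(res)
```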

MLP-based 3D Geotechnical Layer Mapping Using Borehole Database in Seoul, South Korea (MLP 기반의 서울시 3차원 지반공간모델링 연구)

  • Ji, Yoonsoo;Kim, Han-Saem;Lee, Moon-Gyo;Cho, Hyung-Ik;Sun, Chang-Guk
    • Journal of the Korean Geotechnical Society / v.37 no.5 / pp.47-63 / 2021
  • Recently, demand for three-dimensional (3D) underground maps, and for linking and utilizing them from the perspective of digital twins, has been increasing. However, the vastness of national geotechnical survey data and the uncertainty in applying geostatistical techniques pose challenges in modeling regional subsurface geotechnical characteristics. In this study, an optimal learning model based on a multi-layer perceptron (MLP) was constructed for 3D subsurface lithological and geotechnical classification in Seoul, South Korea. First, the geotechnical layers and 3D spatial coordinates of each borehole dataset in the Seoul area were compiled into a geotechnical database in a standardized format, and data pre-processing for machine learning, such as correcting missing values and normalization, was performed. An optimal fitting model was designed through hyperparameter optimization of the MLP model and model performance evaluation, including precision and accuracy tests. Then, a 3D grid network that locally assigns geotechnical layer classes was constructed by applying the MLP-based best-fitting model to each unit lattice. The constructed 3D geotechnical layer map was evaluated by comparison with the results of a geostatistical interpolation technique and with the topsoil properties of the geological map.
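
As a rough illustration of the workflow (hyperparameter search over an MLP on borehole coordinates, then assigning a layer class to every node of a regular 3D grid), a scikit-learn sketch is shown below; the coordinate ranges, label scheme, and parameter grid are invented placeholders rather than the study's settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical borehole samples: 3D coordinates (x, y, elevation) with a
# geotechnical layer label (e.g., 0=fill, 1=alluvium, 2=weathered soil, 3=rock).
rng = np.random.default_rng(0)
X = rng.uniform([190000, 440000, -60], [210000, 460000, 40], size=(2000, 3))
y = np.digitize(X[:, 2], bins=[-30, -10, 10])  # crude depth-based labels for illustration

# Hyperparameter search over the MLP (a small grid as a stand-in for the paper's tuning).
pipe = make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0))
grid = GridSearchCV(pipe, {
    "mlpclassifier__hidden_layer_sizes": [(32,), (64, 32), (128, 64)],
    "mlpclassifier__alpha": [1e-4, 1e-3],
}, cv=5)
grid.fit(X, y)
print("best params:", grid.best_params_, "CV accuracy:", grid.best_score_)

# Assign a layer class to each node of a regular 3D grid with the best-fitting model.
gx, gy, gz = np.meshgrid(np.linspace(190000, 210000, 50),
                         np.linspace(440000, 460000, 50),
                         np.linspace(-60, 40, 20), indexing="ij")
grid_points = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
layer_map = grid.predict(grid_points).reshape(gx.shape)  # 3D geotechnical layer map
```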

Comparison of Korean Classification Models' Korean Essay Score Range Prediction Performance (한국어 학습 모델별 한국어 쓰기 답안지 점수 구간 예측 성능 비교)

  • Cho, Heeryon;Im, Hyeonyeol;Yi, Yumi;Cha, Junwoo
    • KIPS Transactions on Software and Data Engineering / v.11 no.3 / pp.133-140 / 2022
  • We investigate the performance of deep learning-based Korean language models on the task of predicting the score range of Korean essays written by foreign students. We construct a data set containing a total of 304 essays, which discuss the criteria for choosing a job ('job'), the conditions of a happy life ('happ'), the relationship between money and happiness ('econ'), and the definition of success ('succ'). These essays were labeled according to four letter grades (A, B, C, and D), and a total of eleven score range prediction experiments were conducted (five for 'job' essays, five for 'happ' essays, and one for mixed-topic essays). Three deep learning-based Korean language models, KoBERT, KcBERT, and KR-BERT, were fine-tuned using various training data. Moreover, two traditional probabilistic machine learning classifiers, naive Bayes and logistic regression, were also evaluated. The results show that the deep learning-based Korean language models outperformed the two traditional classifiers, with KR-BERT performing best at 55.83% overall average prediction accuracy, a close second being KcBERT (55.77%), followed by KoBERT (54.91%). The naive Bayes and logistic regression classifiers reached 52.52% and 50.28%, respectively. Due to the scarcity of training data and the imbalance in class distribution, the overall prediction performance was not high for any classifier. Moreover, the classifiers' vocabulary did not explicitly capture the error features that would have helped in correctly grading the Korean essays. By overcoming these two limitations, we expect the score range prediction performance to improve.
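
The two traditional baselines mentioned (naive Bayes and logistic regression over essay text) can be reproduced in spirit with a short scikit-learn sketch; the tiny placeholder corpus and the TF-IDF features below are assumptions for illustration, not the study's data or exact feature design.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny placeholder corpus: Korean essays labeled with letter grades A-D.
essays = ["직업을 선택할 때 가장 중요한 것은 ...", "행복한 삶의 조건은 ...",
          "돈과 행복의 관계는 ...", "성공이란 ..."] * 25
grades = ["A", "B", "C", "D"] * 25

for name, clf in [("naive Bayes", MultinomialNB()),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    pipe = make_pipeline(TfidfVectorizer(), clf)       # bag-of-words TF-IDF features
    scores = cross_val_score(pipe, essays, grades, cv=5)
    print(name, "mean CV accuracy:", scores.mean())
```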

Method of ChatBot Implementation Using Bot Framework (봇 프레임워크를 활용한 챗봇 구현 방안)

  • Kim, Ki-Young
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.1 / pp.56-61 / 2022
  • In this paper, we classify and present the AI algorithms and natural language processing methods used in chatbots, and describe a framework that can be used to implement one. A chatbot is a system that presents a conversational user interface, interprets the input string, selects an appropriate answer from learned data, and outputs it. However, generating an appropriate set of answers to a question requires training, as well as hardware with considerable computational power, which limits hands-on practice not only for development companies but also for students learning AI development. Chatbots are now taking over existing routine tasks, so a practical course for understanding and implementing such a system is required. Beyond responding only to standardized data, technologies such as deep learning are applied to learn from unstructured data, with RNNs and Char-CNNs used to increase question-answering accuracy; understanding this theory is necessary to implement a chatbot. In addition, examples of implementing the entire system are presented, using methods suitable for coding education and a platform on which existing developers and students can implement chatbots.
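
Of the two model families mentioned (RNN and Char-CNN), a character-level CNN classifier that selects a canned answer for an input question can be sketched as follows; the toy questions, labels, and layer sizes are assumptions, not the course material or any Bot Framework API.

```python
import numpy as np
import tensorflow as tf

# Character-level CNN sketch: map an input question to one of a few canned
# answers. The questions, labels, and model sizes are illustrative only.
questions = ["배송은 얼마나 걸리나요", "환불은 어떻게 하나요", "영업 시간이 어떻게 되나요"]
labels = np.array([0, 1, 2])            # index of the appropriate answer

# Encode each question as a padded sequence of character ids (0 = padding).
vocab = sorted({ch for q in questions for ch in q})
char_to_id = {ch: i + 1 for i, ch in enumerate(vocab)}
max_len = 30
X = np.zeros((len(questions), max_len), dtype="int32")
for i, q in enumerate(questions):
    for j, ch in enumerate(q[:max_len]):
        X[i, j] = char_to_id[ch]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,)),
    tf.keras.layers.Embedding(len(char_to_id) + 1, 32),
    tf.keras.layers.Conv1D(64, 3, activation="relu"),  # character n-gram features
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(len(questions), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=20, verbose=0)
print(model.predict(X[:1]).argmax())    # index of the selected canned answer
```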

Development of Cloud-Based Medical Image Labeling System and It's Quantitative Analysis of Sarcopenia (클라우드기반 의료영상 라벨링 시스템 개발 및 근감소증 정량 분석)

  • Lee, Chung-Sub;Lim, Dong-Wook;Kim, Ji-Eon;Noh, Si-Hyeong;Yu, Yeong-Ju;Kim, Tae-Hoon;Yoon, Kwon-Ha;Jeong, Chang-Won
    • KIPS Transactions on Computer and Communication Systems / v.11 no.7 / pp.233-240 / 2022
  • Most recent AI research has focused on developing AI models. Lately, however, artificial intelligence research has gradually shifted from model-centric to data-centric, and the importance of training data is receiving a great deal of attention as a result. Preparing training data takes considerable time and effort, accounting for a significant part of the entire process, and the labeling data to be generated also differs depending on the development purpose. Therefore, a tool with various labeling functions is needed to address these unmet needs. In this paper, we describe a labeling system for creating precise labeling data for medical images quickly. To implement it, a semi-automatic method using back projection and GrabCut techniques and an automatic method based on predictions from a machine learning model were implemented. We demonstrate not only the running-time advantage of the proposed system for generating labeling data, but also its superiority through a comparative evaluation of accuracy. In addition, by analyzing an image dataset of about 1,000 patients, meaningful diagnostic indices for men and women are presented for the diagnosis of sarcopenia.
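
The semi-automatic GrabCut step described can be illustrated with OpenCV; the file name, the seed rectangle, and the binary muscle/background labeling below are placeholders, and the actual system also uses back projection and a trained model, which are not shown.

```python
import cv2
import numpy as np

# Semi-automatic labeling sketch with GrabCut: a user-drawn box around the
# region of interest seeds the segmentation (file name and box are placeholders).
img = cv2.imread("ct_slice.png")            # hypothetical CT slice exported as 8-bit color
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
rect = (50, 80, 300, 200)                   # (x, y, w, h) rough region of interest

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as definite or probable foreground become the label mask.
label = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
cv2.imwrite("ct_slice_label.png", label * 255)
```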

The Effect of Ground Heterogeneity on the GPR Signal: Numerical Analysis (지반의 불균질성이 GPR탐사 신호에 미치는 영향에 대한 수치해석적 분석)

  • Lee, Sangyun;Song, Ki-il;Ryu, Heehwan;Kang, Kyungnam
    • Journal of the Korean GEO-environmental Society / v.23 no.8 / pp.29-36 / 2022
  • Subsurface information is becoming crucial in urban areas due to the increase in underground construction. The positions of underground facilities should be identified precisely before excavation work. Geophysical exploration methods such as ground penetrating radar (GPR) can be useful for investigating subsurface facilities. GPR transmits electromagnetic waves into the ground and analyzes the reflected signals to determine the location and depth of subsurface facilities. Unfortunately, the readability of the GPR signal is not favorable. To overcome this deficiency and automate GPR signal processing, deep learning techniques have recently been introduced. The accuracy of a deep learning model can be improved with abundant training data. The ground is inherently heterogeneous, and spatially variable ground properties can affect the GPR signal. However, the effect of ground heterogeneity on the GPR signal has yet to be fully investigated. In this study, ground heterogeneity is simulated based on fractal theory, and GPR simulation is carried out using gprMax. It is found that as the fractal dimension increases beyond 2.0, the error of the fitting parameter decreases significantly, and that the water content should be less than 0.14 to secure the validity of the analysis.
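
One way to simulate a fractal-dimension-controlled heterogeneous ground before handing material properties to a solver such as gprMax is spectral synthesis of a self-affine random field; the sketch below uses one common convention (spectral exponent β = 7 − 2D for a 2D surface) and an assumed water-content range, and is not the study's exact generation procedure (gprMax itself also provides built-in fractal-box commands).

```python
import numpy as np

def fractal_field(n, fractal_dim, rng):
    """Generate an n x n self-affine (fractal) random field by spectral synthesis.

    One common convention for a 2D fractional-Brownian-motion-like surface is a
    power spectrum decaying as k**(-beta) with beta = 7 - 2*D, D the fractal dimension.
    """
    beta = 7.0 - 2.0 * fractal_dim
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    k[0, 0] = 1.0                              # avoid division by zero at the DC component
    amplitude = k ** (-beta / 2.0)
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    field = np.fft.ifft2(amplitude * phase).real
    return (field - field.mean()) / field.std()

rng = np.random.default_rng(0)
field = fractal_field(256, fractal_dim=2.0, rng=rng)

# Map the normalized field to an assumed water-content range (e.g., 0.05-0.14),
# which could then be converted to permittivity with a mixing rule of choice.
w_min, w_max = 0.05, 0.14
water_content = w_min + (w_max - w_min) * (field - field.min()) / (field.max() - field.min())
```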

Combining Conditional Generative Adversarial Network and Regression-based Calibration for Cloud Removal of Optical Imagery (광학 영상의 구름 제거를 위한 조건부 생성적 적대 신경망과 회귀 기반 보정의 결합)

  • Kwak, Geun-Ho;Park, Soyeon;Park, No-Wook
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1357-1369 / 2022
  • Cloud removal is an essential image processing step for any task requiring time-series optical images, such as vegetation monitoring and change detection. This paper presents a two-stage cloud removal method that combines conditional generative adversarial networks (cGANs) with regression-based calibration to construct a cloud-free time-series optical image set. In the first stage, the cGANs generate initial prediction results using quantitative relationships between optical and synthetic aperture radar images. In the second stage, the relationships between the predicted results and the actual values in non-cloud areas are first quantified via random forest-based regression modeling and then used to calibrate the cGAN-based prediction results. The potential of the proposed method was evaluated through a cloud removal experiment using Sentinel-2 and COSMO-SkyMed images in the rice field cultivation area of Gimje. The cGAN model could effectively predict the reflectance values in the cloud-contaminated rice fields where severe changes in physical surface conditions occurred. Moreover, the regression-based calibration in the second stage could improve the prediction accuracy, compared with a regression-based cloud removal method using a supplementary image that is temporally distant from the target image. These experimental results indicate that the proposed method can be effectively applied to restore cloud-contaminated areas when cloud-free optical images are unavailable for environmental monitoring.
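
The second-stage calibration idea (quantify the relation between stage-1 predictions and observations in cloud-free pixels, then correct the predictions) might look roughly like the scikit-learn sketch below; the array shapes, random placeholder values, and per-band random forest setup are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder arrays (H x W x bands): stage-1 cGAN prediction, the observed
# optical image, and a boolean cloud mask.
rng = np.random.default_rng(0)
H, W, B = 128, 128, 4
predicted = rng.random((H, W, B))
observed = predicted * 0.9 + 0.02 + rng.normal(0, 0.01, (H, W, B))
cloud_mask = rng.random((H, W)) < 0.3        # True where the optical image is cloudy

clear = ~cloud_mask
X_clear = predicted[clear]                   # (N_clear, B) predictions in cloud-free pixels
y_clear = observed[clear]                    # (N_clear, B) matching observations

calibrated = predicted.copy()
for b in range(B):                           # one calibration model per spectral band
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(X_clear, y_clear[:, b])
    calibrated[..., b] = rf.predict(predicted.reshape(-1, B)).reshape(H, W)

# Only cloud-contaminated pixels need the calibrated values.
restored = np.where(cloud_mask[..., None], calibrated, observed)
```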

Role of unstructured data on water surface elevation prediction with LSTM: case study on Jamsu Bridge, Korea (LSTM 기법을 활용한 수위 예측 알고리즘 개발 시 비정형자료의 역할에 관한 연구: 잠수교 사례)

  • Lee, Seung Yeon;Yoo, Hyung Ju;Lee, Seung Oh
    • Journal of Korea Water Resources Association / v.54 no.spc1 / pp.1195-1204 / 2021
  • Recently, local torrential rains have become more frequent and severe due to abnormal climate conditions, causing a surge in damage to people and property, including infrastructure along rivers. In this study, a water surface elevation prediction algorithm was developed using LSTM (Long Short-Term Memory), a machine learning technique specialized for time series data, to estimate and prevent flooding of such facilities. The study area is Jamsu Bridge, the study period covers June, July, and August of six years (2015~2020), and the water surface elevation at Jamsu Bridge 3 hours ahead was predicted. The input data set is composed of the water surface elevation at Jamsu Bridge (EL.m), the discharge from Paldang Dam (m³/s), the tide level at Ganghwa Bridge (cm), and the number of tweets in Seoul. Complementary data were constructed using not only the structured data mainly used in previous research but also unstructured data built through word clouds, and the role of the unstructured data was examined by comparing and analyzing results with and without it. When predicting the water surface elevation at Jamsu Bridge, prediction accuracy improved, and it was found that the complementary data could support conservative alerts that reduce casualties. This study concludes that using complementary data was relatively effective in providing safety and convenience for users of riverside infrastructure. In the future, more accurate water surface elevation prediction is expected through additional types of unstructured data or more detailed pre-processing of the input data.
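
A minimal version of the LSTM setup described (a sliding window over the four input series, stage, dam discharge, tide level, and tweet count, predicting the stage 3 hours ahead) is sketched below with Keras; the window length, layer size, and random placeholder series are assumptions rather than the study's configuration.

```python
import numpy as np
import tensorflow as tf

# Placeholder multivariate series: columns are [water level, Paldang Dam
# discharge, Ganghwa Bridge tide level, tweet count]; values are random.
rng = np.random.default_rng(0)
series = rng.random((5000, 4)).astype("float32")

window, horizon = 24, 3          # past 24 records in, prediction 3 steps ahead
X, y = [], []
for t in range(len(series) - window - horizon + 1):
    X.append(series[t:t + window])
    y.append(series[t + window + horizon - 1, 0])   # target: water level column
X, y = np.stack(X), np.array(y)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 4)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),    # predicted water surface elevation
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```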