• Title/Summary/Keyword: Deep-learning algorithm


A study on the improvement of artificial intelligence-based Parking control system to prevent vehicle access with fake license plates (위조번호판 부착 차량 출입 방지를 위한 인공지능 기반의 주차관제시스템 개선 방안)

  • Jang, Sungmin;Iee, Jeongwoo;Park, Jonghyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.57-74
    • /
    • 2022
  • Recently, artificial intelligence parking control systems have improved the recognition rate of vehicle license plates using deep learning, but they cannot identify vehicles carrying fake license plates. Despite this security problem, several institutions are still using the existing systems; for example, in an experiment using a counterfeit license plate, a vehicle successfully entered major government agencies. This paper proposes an improvement over the existing artificial intelligence parking control system to prevent vehicles with fake license plates from entering. Just as the existing system uses license-plate matching as the pass criterion, the proposed method uses the degree of matching of feature points on the front of the vehicle, extracted with the ORB algorithm that captures the distinctive feature points of an image, as a pass criterion. In addition, a procedure for checking whether the vehicle is already inside was added to prevent the entry of a vehicle of the same model carrying a fake license plate. Experimental results showed improved performance in identifying vehicles with fake license plates compared with the existing system. These results confirm that the proposed methods can be applied to existing parking control systems, while preserving the workflow of the original artificial intelligence parking control system, to prevent vehicles with fake license plates from entering.
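
As a rough illustration of the ORB-based matching the abstract describes, the sketch below compares a registered front-view image with a gate capture using OpenCV and uses the proportion of good matches as a pass criterion. The file names, distance cutoff, and pass threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: ORB feature matching between a registered front-view image
# and the image captured at the gate. File names, the distance cutoff, and the
# pass threshold are illustrative assumptions, not values from the paper.
import cv2

registered = cv2.imread("registered_front.jpg", cv2.IMREAD_GRAYSCALE)
captured = cv2.imread("gate_capture.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(registered, None)
kp2, des2 = orb.detectAndCompute(captured, None)

# Brute-force matcher with Hamming distance (ORB uses binary descriptors).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# Use the proportion of "good" matches as a simple pass criterion.
good = [m for m in matches if m.distance < 40]   # distance cutoff is an assumption
match_ratio = len(good) / max(len(kp1), 1)

PASS_THRESHOLD = 0.3                             # illustrative threshold
print("match ratio:", match_ratio,
      "-> pass" if match_ratio >= PASS_THRESHOLD else "-> deny")
```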

Flood Disaster Prediction and Prevention through Hybrid BigData Analysis (하이브리드 빅데이터 분석을 통한 홍수 재해 예측 및 예방)

  • Ki-Yeol Eom;Jai-Hyun Lee
    • The Journal of Bigdata
    • /
    • v.8 no.1
    • /
    • pp.99-109
    • /
    • 2023
  • Recently, not only Korea but the whole world has been experiencing recurring disasters such as typhoons, wildfires, and heavy rains. The property damage caused by typhoons and heavy rain in South Korea alone has exceeded 1 trillion won. These disasters have resulted in significant loss of life and property damage, and recovery takes a considerable amount of time. In addition, the government's contingency funds are insufficient for the current situation. To prevent and respond effectively to these issues, it is necessary to collect and analyze accurate data in real time. However, delays and data loss can occur depending on the environment where the sensors are located, the status of the communication network, and the receiving servers. In this paper, we propose a two-stage hybrid situation analysis and prediction algorithm that can analyze accurately even under such communication network conditions. In the first stage, river and stream water-level data are collected, filtered, and refined from diverse sensors of different types and stored in a big data platform, and an AI rule-based inference algorithm is applied to analyze the crisis alert levels. If the rainfall exceeds a certain threshold but the inferred level remains below the level of interest, the second stage, deep-learning image analysis, is performed to determine the final crisis alert level.
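
The two-stage idea described above can be sketched as follows: a rule-based inference on sensor readings assigns a crisis alert level, and a deep-learning image classifier is invoked only when rainfall is high but the inferred level stays below the level of interest. The thresholds, level names, and the classify_image stub are hypothetical stand-ins, not the paper's rules.

```python
# Minimal sketch of the two-stage hybrid analysis: rule-based inference on sensor
# data, escalating to image-based analysis when rainfall crosses a threshold.
# All thresholds, level names, and the classify_image stub are assumptions.

ALERT_LEVELS = ["normal", "watch", "warning", "severe"]

def rule_based_level(water_level_m: float, rainfall_mm_per_h: float) -> str:
    if water_level_m > 5.0 or rainfall_mm_per_h > 50:
        return "severe"
    if water_level_m > 3.0 or rainfall_mm_per_h > 30:
        return "warning"
    if water_level_m > 2.0 or rainfall_mm_per_h > 20:
        return "watch"
    return "normal"

def classify_image(cctv_frame) -> str:
    # Placeholder for the second-stage deep-learning image classifier.
    return "warning"

def final_alert(water_level_m, rainfall_mm_per_h, cctv_frame=None):
    level = rule_based_level(water_level_m, rainfall_mm_per_h)
    # Escalate to image analysis only when rainfall is high but the rule-based
    # level has not yet reached the level of interest.
    if rainfall_mm_per_h > 30 and level in ("normal", "watch") and cctv_frame is not None:
        level = classify_image(cctv_frame)
    return level

print(final_alert(water_level_m=2.5, rainfall_mm_per_h=35, cctv_frame=object()))
```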

A Study on Evaluating the Possibility of Monitoring Ships of CAS500-1 Images Based on YOLO Algorithm: A Case Study of a Busan New Port and an Oakland Port in California (YOLO 알고리즘 기반 국토위성영상의 선박 모니터링 가능성 평가 연구: 부산 신항과 캘리포니아 오클랜드항을 대상으로)

  • Park, Sangchul;Park, Yeongbin;Jang, Soyeong;Kim, Tae-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1463-1478
    • /
    • 2022
  • Maritime transport accounts for 99.7% of the exports and imports of the Republic of Korea; therefore, developing a vessel monitoring system for efficient operation is of significant interest. Several studies have focused on tracking and monitoring vessel movements based on automatic identification system (AIS) data; however, ships without AIS are difficult to monitor and track. High-resolution optical satellite images can provide the missing layer of information in AIS-based monitoring systems because they can identify non-AIS vessels and small ships over a wide area. Therefore, it is necessary to investigate vessel monitoring and small-vessel classification systems using high-resolution optical satellite images. This study examined the possibility of developing ship monitoring systems using Compact Advanced Satellite 500-1 (CAS500-1) images by first training a deep learning model on satellite image data and then performing detection on other images. To determine the effectiveness of the proposed method, training data were acquired from ships in the Yellow Sea and its major ports, and the detection model was built using the You Only Look Once (YOLO) algorithm. Ship detection performance was evaluated for a domestic port and an international port. Detection results for ships in the anchorage and berth areas were compared with ship classification information obtained from AIS, and accuracies of 85.5% and 70% were achieved with the domestic and international classification models, respectively. The results indicate that high-resolution satellite images can be used to monitor moored ships. The developed approach can potentially be used in vessel tracking and monitoring systems at major ports around the world if the accuracy of the detection model is improved through continuous construction of training data.
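
A minimal sketch of YOLO-based ship detection on a satellite image chip is shown below. The paper only states that a YOLO model was trained on CAS500-1 imagery; the use of the ultralytics package, the weight file, the image path, and the confidence threshold here are assumptions for illustration.

```python
# Minimal sketch of YOLO-based ship detection on a satellite image tile.
# The ultralytics package, weight file, image path, and confidence threshold are
# illustrative assumptions; the paper only states that a YOLO model was trained.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                 # would be replaced by weights trained on CAS500-1 chips
results = model.predict("cas500_tile.png", conf=0.25)

for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{cls_name}: ({x1:.0f}, {y1:.0f}) - ({x2:.0f}, {y2:.0f}), conf={float(box.conf):.2f}")
```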

Construction of a Standard Dataset for Liver Tumors for Testing the Performance and Safety of Artificial Intelligence-Based Clinical Decision Support Systems (인공지능 기반 임상의학 결정 지원 시스템 의료기기의 성능 및 안전성 검증을 위한 간 종양 표준 데이터셋 구축)

  • Seung-seob Kim;Dong Ho Lee;Min Woo Lee;So Yeon Kim;Jaeseung Shin;Jin‑Young Choi;Byoung Wook Choi
    • Journal of the Korean Society of Radiology
    • /
    • v.82 no.5
    • /
    • pp.1196-1206
    • /
    • 2021
  • Purpose To construct a standard dataset of contrast-enhanced CT images of liver tumors to test the performance and safety of artificial intelligence (AI)-based algorithms for clinical decision support systems (CDSSs). Materials and Methods A consensus group of medical experts in gastrointestinal radiology from four national tertiary institutions discussed the conditions to be included in a standard dataset. Seventy-five cases of hepatocellular carcinoma, 75 cases of metastasis, and 30-50 cases of benign lesions were retrieved from each institution, and the final dataset consisted of 300 cases of hepatocellular carcinoma, 300 cases of metastasis, and 183 cases of benign lesions. Only pathologically confirmed cases of hepatocellular carcinomas and metastases were enrolled. The medical experts retrieved the medical records of the patients and manually labeled the CT images. The CT images were saved as Digital Imaging and Communications in Medicine (DICOM) files. Results The medical experts in gastrointestinal radiology constructed the standard dataset of contrast-enhanced CT images for 783 cases of liver tumors. The performance and safety of the AI algorithm can be evaluated by calculating the sensitivity and specificity for detecting and characterizing the lesions. Conclusion The constructed standard dataset can be utilized for evaluating the machine-learning-based AI algorithm for CDSS.
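
A minimal sketch of how an AI algorithm could be scored against such a standard dataset is shown below: per-case binary labels are compared to compute sensitivity and specificity, as the abstract suggests. The toy label arrays are illustrative and not drawn from the dataset.

```python
# Minimal sketch of scoring a detection/characterization algorithm against the
# standard dataset: per-case binary labels -> sensitivity and specificity.
# The label arrays are toy values, not data from the dataset.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# 1 = malignant (HCC/metastasis), 0 = benign; toy example only.
y_true = [1, 1, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
print(sensitivity_specificity(y_true, y_pred))
```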

Development of Intelligent Severity of Atopic Dermatitis Diagnosis Model using Convolutional Neural Network (합성곱 신경망(Convolutional Neural Network)을 활용한 지능형 아토피피부염 중증도 진단 모델 개발)

  • Yoon, Jae-Woong;Chun, Jae-Heon;Bang, Chul-Hwan;Park, Young-Min;Kim, Young-Joo;Oh, Sung-Min;Jung, Joon-Ho;Lee, Suk-Jun;Lee, Ji-Hyun
    • Management & Information Systems Review
    • /
    • v.36 no.4
    • /
    • pp.33-51
    • /
    • 2017
  • With the advent of the Fourth Industrial Revolution and the growing expectations for quality of life that accompany economic growth, demand for high-quality medical services is increasing. Artificial intelligence has been introduced into the medical field, but it is rarely used for chronic skin diseases, which directly affect quality of life. Moreover, for atopic dermatitis, a representative chronic skin disease, it is difficult to make an objective diagnosis of lesion severity. The aim of this study is to establish an intelligent severity recognition model for atopic dermatitis to improve the quality of patients' lives. The following steps were performed. First, image data of patients with atopic dermatitis were collected from the Catholic University of Korea Seoul Saint Mary's Hospital, and refinement and labeling were performed to obtain training and verification data suitable for an objective intelligent severity recognition model. Second, various CNN algorithms were trained and verified to select an image recognition algorithm suitable for the model. Experimental results showed that 'ResNet V1 101' and 'ResNet V2 50' achieved the highest performance, with over 90% accuracy for erythema and excoriation, whereas 'VGG-NET' achieved 89% accuracy on these two lesions, lower than the ResNet models, owing to a lack of training data. The proposed methodology demonstrates that image recognition algorithms perform well not only in general object recognition but also in medical domains that require expert knowledge. In addition, this study is expected to be highly applicable to atopic dermatitis because it uses image data from actual atopic dermatitis patients.
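
As a rough sketch of the kind of CNN severity classifier the study compares, the code below fine-tunes a torchvision ResNet-50 backbone for a small number of severity grades. The number of classes, the choice of ResNet-50, and the dummy batch are assumptions; the paper evaluated ResNet V1 101, ResNet V2 50, and VGG-NET on labeled patient images.

```python
# Minimal sketch of fine-tuning a ResNet backbone to score lesion severity grades.
# The number of severity classes and the use of torchvision's ResNet-50 weights are
# illustrative assumptions; the paper compared ResNet V1 101, ResNet V2 50, and VGG.
import torch
import torch.nn as nn
from torchvision import models

NUM_SEVERITY_CLASSES = 4   # e.g., none / mild / moderate / severe (assumed grading)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_SEVERITY_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_SEVERITY_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```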


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and may cause enormous damage. In particular, IT equipment failures are irregular because of interdependence, which makes it difficult to identify the cause. Previous studies on failure prediction in data centers treated each server as a single, independent state rather than assuming that devices interact. In this study, data center failures were therefore classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within the server. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. In contrast, the cause of failures occurring inside a server is difficult to determine, and adequate prevention has not yet been achieved. A key reason is that server failures do not occur in isolation: a failure can cause failures in other servers or be triggered by failures originating elsewhere. In other words, while existing studies analyzed failures under the assumption that a single server does not affect other servers, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. Failures for each device were sorted in chronological order, and when a failure occurred in one piece of equipment, any failure in another piece of equipment within 5 minutes of that time was defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, the five devices that most frequently failed together within the sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis consists of time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server setting, a Hierarchical Attention Network model structure was used, reflecting the fact that the level of influence on multiple failures differs for each server; this algorithm improves prediction accuracy by giving a server more weight as its impact on the failure increases. The study began with defining the failure types and selecting the analysis targets. In the first experiment, the same collected data were analyzed and compared under a single-server assumption and a multiple-server assumption. The second experiment improved prediction accuracy for complex failures by optimizing the threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to fail. This result supports the hypothesis that servers affect one another. Overall, the study confirmed that prediction performance was superior when multiple interacting servers were assumed rather than a single independent server. In particular, applying the Hierarchical Attention Network, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model for predicting failures occurring in data center servers. The results are expected to help prevent failures in advance.
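
A minimal sketch of the hierarchical idea described above is given below: an LSTM encodes each server's resource time series, and an attention layer weights the servers by their estimated influence before predicting a complex failure. The dimensions, layer sizes, and sigmoid head are illustrative assumptions rather than the authors' exact architecture.

```python
# Minimal sketch: an LSTM encodes each server's resource time series, and an
# attention layer weights servers by their estimated influence before predicting
# whether a complex failure will occur. Sizes and the sigmoid head are assumptions.
import torch
import torch.nn as nn

class ServerAttentionPredictor(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)       # one attention score per server
        self.head = nn.Linear(hidden, 1)       # failure probability for the group

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))
        server_repr = h[-1].reshape(b, s, -1)              # (batch, n_servers, hidden)
        weights = torch.softmax(self.attn(server_repr), 1) # attention over servers
        context = (weights * server_repr).sum(dim=1)       # weighted summary
        return torch.sigmoid(self.head(context)).squeeze(-1), weights.squeeze(-1)

model = ServerAttentionPredictor(n_features=8)
prob, server_weights = model(torch.randn(2, 5, 60, 8))    # 2 samples, 5 servers, 60 steps
print(prob.shape, server_weights.shape)
```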

Radiation Dose Reduction in Digital Mammography by Deep-Learning Algorithm Image Reconstruction: A Preliminary Study (딥러닝 알고리즘을 이용한 저선량 디지털 유방 촬영 영상의 복원: 예비 연구)

  • Su Min Ha;Hak Hee Kim;Eunhee Kang;Bo Kyoung Seo;Nami Choi;Tae Hee Kim;You Jin Ku;Jong Chul Ye
    • Journal of the Korean Society of Radiology
    • /
    • v.83 no.2
    • /
    • pp.344-359
    • /
    • 2022
  • Purpose To develop a denoising convolutional neural network-based image processing technique and investigate its efficacy in diagnosing breast cancer using low-dose mammography imaging. Materials and Methods A total of 6 breast radiologists were included in this prospective study. All radiologists independently evaluated low-dose images for lesion detection and rated them for diagnostic quality using a qualitative scale. After application of the denoising network, the same radiologists evaluated lesion detectability and image quality. For clinical application, a consensus on lesion type and localization on preoperative mammographic examinations of breast cancer patients was reached after discussion. Thereafter, coded low-dose, reconstructed full-dose, and full-dose images were presented and assessed in random order. Results With mastectomy specimens used as a reference, lesions were better perceived on 40% reconstructed full-dose images than on low-dose images. In clinical application, compared with the 40% reconstructed images, full-dose images received higher values for resolution (p < 0.001), diagnostic quality for calcifications (p < 0.001), and diagnostic quality for masses, asymmetry, or architectural distortion (p = 0.037). The 40% reconstructed images showed values comparable to those of 100% full-dose images for overall quality (p = 0.547), lesion visibility (p = 0.120), and contrast (p = 0.083), without significant differences. Conclusion Effective denoising and image reconstruction processing techniques can enable breast cancer diagnosis with substantial radiation dose reduction.
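
A minimal sketch of a denoising network of the kind the study describes is shown below: a small residual CNN is trained to map low-dose patches to their full-dose counterparts. The architecture, patch size, and loss are assumptions for illustration, not the network used in the study.

```python
# Minimal sketch of a denoising CNN for low-dose mammograms: the network is trained
# to map a low-dose (noisy) patch to its full-dose counterpart. Architecture, patch
# size, and loss are illustrative assumptions, not the network used in the study.
import torch
import torch.nn as nn

class SimpleDenoiser(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        # Predict the residual noise and subtract it (residual learning).
        return x - self.net(x)

model = SimpleDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

low_dose = torch.rand(4, 1, 128, 128)          # dummy 40%-dose patches
full_dose = torch.rand(4, 1, 128, 128)         # dummy 100%-dose targets
optimizer.zero_grad()
loss = loss_fn(model(low_dose), full_dose)
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```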

A prediction of the rock mass rating of tunnelling area using artificial neural networks (인공신경망을 이용한 터널구간의 암반분류 예측)

  • Han, Myung-Sik;Yang, In-Jae;Kim, Kwang-Myung
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.4 no.4
    • /
    • pp.277-286
    • /
    • 2002
  • Most of the problems in tunnel construction arise from uncertainty and complexity in the stress conditions and rock strengths ahead of the tunnel excavation. Limitations of investigation technology, the inaccessibility of borehole tests in mountainous areas, and public complaints also restrict our knowledge of the geologic conditions along mountainous tunnel alignments. Nevertheless, extensive, high-quality geophysical exploration data can be acquired deep within the mountain, down to the tunnel alignment, in the case of alternative-design or turnkey projects. An appealing claim for artificial neural networks (ANN) is that they give more trustworthy results by identifying relevant input variables from limited geotechnical information, following biological learning principles. In this study, the error back-propagation algorithm, one of the training techniques for ANNs, is applied to predict Rock Mass Ratings (RMR) for unexplored tunnel sections. To verify the applicability of the model, field data from a 4 km railway tunnel were used as input parameters for RMR prediction, using the patterns learned by error back-propagation. ANN is one of the basic methods for dealing with geotechnical uncertainty and is helpful in addressing data consistency, but it needs some refinement for technical problems, and we hope this study will be further developed in future design work.
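
The error back-propagation procedure the abstract refers to can be sketched with a tiny one-hidden-layer network mapping a few normalized geophysical inputs to a normalized RMR value. The input features, network size, and synthetic training data below are assumptions for illustration only.

```python
# Minimal sketch of an error back-propagation network mapping a few geophysical
# inputs (e.g., seismic velocity, resistivity, depth) to an RMR value. The inputs,
# network size, and training data are illustrative assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 3 normalized input features -> normalized RMR in [0, 1].
X = rng.random((50, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros((1, 1))
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (error back-propagation with squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print("final MSE:", float(np.mean((out - y) ** 2)))
```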


Semantic Query Expansion based on Concept Coverage of a Deep Question Category in QA systems (질의 응답 시스템에서 심층적 질의 카테고리의 개념 커버리지에 기반한 의미적 질의 확장)

  • Kim Hae-Jung;Kang Bo-Yeong;Lee Sang-Jo
    • Journal of KIISE:Databases
    • /
    • v.32 no.3
    • /
    • pp.297-303
    • /
    • 2005
  • When confronted with a query, question answering systems endeavor to extract the most exact answers possible by determining the answer type that fits the key terms used in the query. However, the efficacy of such systems is limited by the fact that the terms used in a query may appear in a syntactic form different from that of the same words in a document. In this paper, we present an efficient semantic query expansion methodology based on a question-category concept list composed of terms that are semantically close to the terms used in a query. The semantically close terms may be hypernyms, synonyms, or terms in a different syntactic category. The proposed system constructs a concept list for each question type and then builds the concept list for each question category using a learning algorithm. In question answering experiments on 42,654 Wall Street Journal documents from the TREC collection, the traditional system achieved an MRR of 0.223, while the proposed system achieved 0.50, clearly outperforming the traditional question answering system. These results suggest the promise of the proposed method.
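
A minimal sketch of semantic query expansion is shown below, using WordNet synonyms and hypernyms through NLTK as a stand-in for the paper's question-category concept lists. It assumes the NLTK WordNet corpus has been downloaded.

```python
# Minimal sketch of semantic query expansion: each query term is expanded with
# WordNet synonyms and hypernyms, standing in for the paper's question-category
# concept lists. Requires the NLTK WordNet corpus (nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

def expand_term(term: str, max_terms: int = 5) -> set:
    expanded = set()
    for syn in wn.synsets(term):
        expanded.update(l.name().replace("_", " ") for l in syn.lemmas())
        for hyper in syn.hypernyms():
            expanded.update(l.name().replace("_", " ") for l in hyper.lemmas())
    expanded.discard(term)
    return set(list(expanded)[:max_terms])

query = ["president", "company"]
for term in query:
    print(term, "->", expand_term(term))
```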

Stock prediction using combination of BERT sentiment Analysis and Macro economy index

  • Jang, Euna;Choi, HoeRyeon;Lee, HongChul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.5
    • /
    • pp.47-56
    • /
    • 2020
  • The stock index is used not only as an economic indicator for a country but also as an indicator for investment decisions, which is why research on predicting the stock index is ongoing. Predicting a stock price index involves technical, fundamental, and psychological factors, and complex combinations of factors must be considered for prediction accuracy. Therefore, a prediction model needs to select and reflect the technical and auxiliary factors that affect stock price fluctuations. Most existing studies forecast using either news information or macroeconomic indicators that drive market fluctuations, or they reflect only a few combinations of indicators. In this paper, we propose an effective combination of news sentiment analysis and various macroeconomic indicators to predict the US Dow Jones Index. More than 93,000 business news articles were crawled from the New York Times over two years, and the sentiment scores obtained with the natural language processing techniques BERT and NLTK, together with macroeconomic indicators including gold prices, oil prices, and five foreign exchange rates affecting the US economy, were fed into the LSTM prediction algorithm, which is known to be well suited to combining numeric and text information. As a result of experimenting with various combinations, the combination of DJI, NLTK, BERT, OIL, GOLD, and EURUSD yielded the smallest MSE value for DJI index prediction.
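
A minimal sketch of the combination step is shown below: daily macroeconomic indicators and news-sentiment scores are concatenated into feature windows and fed to an LSTM that estimates the next-day index value. The feature set, window length, and layer sizes are assumptions for illustration.

```python
# Minimal sketch of the combination idea: daily macro indicators and news-sentiment
# scores are concatenated and fed to an LSTM that predicts the next-day index value.
# Feature set, window length, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

N_FEATURES = 6      # e.g., DJI, NLTK sentiment, BERT sentiment, oil, gold, EURUSD
WINDOW = 20         # 20 trading days per input sequence (assumed)

class IndexPredictor(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, WINDOW, N_FEATURES)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)    # next-day index estimate

model = IndexPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(16, WINDOW, N_FEATURES)        # dummy normalized feature windows
y = torch.randn(16)                            # dummy next-day targets
loss = loss_fn(model(x), y)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print("MSE:", loss.item())
```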