• Title/Summary/Keyword: Deep Learning AI

A Deep Learning Application for Automated Feature Extraction in Transaction-based Machine Learning (트랜잭션 기반 머신러닝에서 특성 추출 자동화를 위한 딥러닝 응용)

  • Woo, Deock-Chae; Moon, Hyun Sil; Kwon, Suhnbeom; Cho, Yoonho
    • Journal of Information Technology Services / v.18 no.2 / pp.143-159 / 2019
  • Machine learning (ML) fits given data to a mathematical model to derive insights or make predictions. In the age of big data, where the amount of available data grows exponentially with the spread of information technology and smart devices, ML achieves high prediction performance by detecting patterns without human bias. Feature engineering, which generates the features that explain the problem to be solved, has a large influence on model performance, and its importance is continually emphasized. Despite this importance, it remains a difficult task, requiring a thorough understanding of the domain and the source data as well as an iterative procedure. We therefore propose methods that apply deep learning to reduce the complexity and difficulty of feature extraction and to improve ML model performance. A key reason for the superior performance of deep learning on complex unstructured data is that it can extract features directly from the source data itself. To bring this advantage to business problems, we propose deep learning based methods that automatically extract features from transaction data or directly predict and classify target variables. In particular, exploiting the structural similarity between transaction data and text data, we applied techniques that perform well in text processing, and we verified the suitability of each method according to the characteristics of the transaction data. Our study not only explores the possibility of automated feature extraction but also provides a benchmark model that reaches a certain level of performance before any manual feature engineering is performed. It is also expected to provide guidelines for choosing a suitable deep learning model based on the business problem and the data characteristics.
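
The abstract's core idea, treating transaction data like text so a network can learn features directly from raw item sequences, can be illustrated with a minimal sketch. This is not the paper's actual model; the vocabulary size, layer widths, pooling choice, and two-class head are assumptions made only for the example.

```python
import torch
import torch.nn as nn

class TransactionEncoder(nn.Module):
    """Treats a transaction as a 'sentence' of item IDs and learns features from it."""
    def __init__(self, num_items=10_000, emb_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(num_items, emb_dim, padding_idx=0)   # item -> vector
        self.conv = nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1)  # local item patterns
        self.head = nn.Linear(128, num_classes)                        # target variable

    def forward(self, item_ids):                 # item_ids: (batch, seq_len)
        x = self.embed(item_ids)                 # (batch, seq_len, emb_dim)
        x = self.conv(x.transpose(1, 2)).relu()  # (batch, 128, seq_len)
        features = x.max(dim=2).values           # max-pool over the transaction
        return self.head(features), features     # prediction + automatically extracted features

# Two padded transactions (item ID 0 = padding); the label could be churn, response, etc.
batch = torch.tensor([[5, 17, 17, 203, 0, 0], [42, 8, 950, 12, 7, 1]])
logits, feats = TransactionEncoder()(batch)
print(logits.shape, feats.shape)                 # torch.Size([2, 2]) torch.Size([2, 128])
```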

Trends in the Use of Artificial Intelligence in Medical Image Analysis (의료영상 분석에서 인공지능 이용 동향)

  • Lee, Gil-Jae; Lee, Tae-Soo
    • Journal of the Korean Society of Radiology / v.16 no.4 / pp.453-462 / 2022
  • In this paper, artificial intelligence (AI) technology used in medical image analysis was examined through a literature review. Searches were conducted on PubMed, ResearchGate, Google, and Cochrane Review using relevant keywords; 114 abstracts were retrieved, and 98 were reviewed after excluding 16 duplicates. In the reviewed literature, AI is applied to classification, localization, disease detection, disease segmentation, and assessing the fit of registered images. The traditional machine learning (ML) workflow, in which features are extracted in advance and the extracted feature values are fed into a neural network, is disappearing; instead, networks are shifting to deep learning (DL) with multiple hidden layers. This appears to be because feature extraction is handled within the DL process itself, enabled by increases in computer memory, faster computation, and the construction of big data. To apply AI-based medical image analysis in clinical care, the role of physicians is important: physicians must be able to interpret and analyze the predictions of AI algorithms. Additional medical education and professional development are needed for practicing physicians to understand AI, and a revised curriculum for medical students also appears necessary.
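
The shift the review describes, from hand-crafted feature extraction feeding a shallow classifier to end-to-end deep learning with multiple hidden layers, can be illustrated with a small sketch. The toy features, layer sizes, and image shapes below are assumptions for illustration only, not drawn from any reviewed study.

```python
import torch
import torch.nn as nn

def handcrafted_features(img):                  # img: (batch, 1, 64, 64) grayscale images
    mean = img.mean(dim=(1, 2, 3))
    std = img.std(dim=(1, 2, 3))
    edges = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean(dim=(1, 2, 3))
    return torch.stack([mean, std, edges], dim=1)   # (batch, 3) hand-crafted features

# Classic ML style: a shallow classifier on top of pre-extracted feature values.
ml_classifier = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 2))

# DL style: raw pixels in, class out; feature extraction happens inside the hidden layers.
dl_classifier = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2),
)

imgs = torch.rand(4, 1, 64, 64)                 # a fake batch of images
print(ml_classifier(handcrafted_features(imgs)).shape)  # torch.Size([4, 2])
print(dl_classifier(imgs).shape)                        # torch.Size([4, 2])
```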

Construction of a Spatio-Temporal Dataset for Deep Learning-Based Precipitation Nowcasting

  • Kim, Wonsu; Jang, Dongmin; Park, Sung Won; Yang, MyungSeok
    • Journal of Information Science Theory and Practice / v.10 no.spc / pp.135-142 / 2022
  • Recently, with the development of data processing technology and the increase in computational power, methods for solving social problems with Artificial Intelligence (AI) are in the spotlight, and AI technologies are replacing and supplementing traditional methods in various fields. In Korea, heavy rain is one of the natural hazards that cause enormous economic damage and casualties every year. Accurate prediction of heavy rainfall over the Korean peninsula is very difficult because of its geographical setting, between the Eurasian continent and the Pacific Ocean at mid-latitude, and the influence of the summer monsoon. To deal with these problems, the Korea Meteorological Administration operates various state-of-the-art observation equipment and a newly developed global atmospheric model system. Nevertheless, precipitation nowcasting still requires a separate extrapolation-based system because of the intrinsic characteristics of numerical weather prediction models. The predictability of existing precipitation nowcasting is reliable in the early stage of a forecast but decreases sharply as the forecast lead time increases. AI technologies that handle the spatio-temporal features of data are therefore expected to contribute greatly to overcoming the limitations of existing precipitation nowcasting systems. In this project, the dataset required to develop, train, and verify deep learning-based precipitation nowcasting models has been constructed in a regularized form. The dataset not only provides variables obtained from multiple sources but also aligns them in a common spatio-temporal specification.
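
A minimal sketch of the kind of spatio-temporal regularization the abstract describes: variables from sources with different time steps and grid sizes are resampled onto one common (time, y, x) grid so that a nowcasting model receives aligned inputs. The time resolution, grid size, sources, and nearest-neighbour resampling are assumptions for illustration, not the project's actual specification.

```python
import numpy as np

COMMON_TIMES = np.arange(0, 60, 10)             # common timeline: every 10 minutes (assumed)
H, W = 128, 128                                 # common spatial grid (assumed)

def align_to_common_grid(times, frames):
    """For each common timestamp, take the nearest source frame and resample it to (H, W)."""
    aligned = np.empty((len(COMMON_TIMES), H, W), dtype=np.float32)
    for i, t in enumerate(COMMON_TIMES):
        src = frames[np.argmin(np.abs(times - t))]            # nearest-time frame
        ys = np.linspace(0, src.shape[0] - 1, H).astype(int)  # nearest-neighbour resampling
        xs = np.linspace(0, src.shape[1] - 1, W).astype(int)
        aligned[i] = src[np.ix_(ys, xs)]
    return aligned

# Two hypothetical sources: radar rain rate every 5 min, a satellite channel every 15 min.
radar = align_to_common_grid(np.arange(0, 60, 5), np.random.rand(12, 480, 480))
satellite = align_to_common_grid(np.arange(0, 60, 15), np.random.rand(4, 200, 200))
sample = np.stack([radar, satellite], axis=1)   # (time, variable, H, W) training tensor
print(sample.shape)                             # (6, 2, 128, 128)
```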

Research on analysis of articleable advertisements and design of extraction method for articleable advertisements using deep learning

  • Seoksoo Kim; Jae-Young Jung
    • Journal of the Korea Society of Computer and Information / v.29 no.6 / pp.13-22 / 2024
  • Article-like advertising has legitimate uses and positive aspects, but because some indiscriminate article-like advertisements deliver exaggerated or disguised information, readers have difficulty distinguishing general articles from article-like advertisements, which leads to misinterpretation and confusion. Since readers continually acquire new information and derive value by applying it at the right time and place, distinguishing accurate general articles from article-like advertisements is all the more important. Information that differentiates the two is therefore needed. For readers of Internet newspapers who struggle to identify accurate information because of such indiscriminate article-like advertisements, this paper presents a system-level solution that incorporates IT and AI technologies: a method designed to extract article-like advertisements by combining a knowledge-based natural language processing approach, which finds and refines advertising keywords, with deep learning technology.
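
A minimal sketch of the two-stage idea described above: a knowledge-based dictionary of advertising keywords scores a text, and that score is combined with a deep learning classifier's output for the final decision. The keyword list, weights, and threshold are illustrative assumptions, and the deep learning score is taken as given rather than computed.

```python
import re

AD_KEYWORDS = {"할인", "이벤트", "출시", "특가", "문의", "구매"}   # assumed advertising-cue dictionary

def keyword_score(article_text: str) -> float:
    """Knowledge-based score: fraction of a few advertising cues found in the text."""
    tokens = re.findall(r"[\w가-힣]+", article_text)
    hits = sum(1 for tok in tokens if tok in AD_KEYWORDS)
    return min(1.0, hits / 5.0)                 # saturate after a few cues

def is_articlelike_ad(article_text: str, dl_score: float, threshold: float = 0.5) -> bool:
    """dl_score stands in for a deep learning classifier's probability (taken as given)."""
    combined = 0.5 * keyword_score(article_text) + 0.5 * dl_score
    return combined >= threshold

text = "신제품 출시 기념 특가 이벤트, 구매 문의는 아래로"
print(keyword_score(text), is_articlelike_ad(text, dl_score=0.7))   # 0.8 True
```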

Deep Learning City: A Big Data Analytics Framework for Smart Cities (딥러닝 시티: 스마트 시티의 빅데이터 분석 프레임워크 제안)

  • Kim, Hwa-Jong
    • Informatization Policy / v.24 no.4 / pp.79-92 / 2017
  • As city functions become more complex and advanced, interest in smart cities is also increasing. Smart cities are cities that effectively solve urban problems such as traffic, safety, welfare, and everyday living issues by utilizing ICT. Recently, many countries have attempted to introduce big data, the Internet of Things, and artificial intelligence into smart cities, but these efforts have not yet developed into comprehensive urban services. In this paper, we review the current status of domestic and overseas smart cities and suggest ways to solve issues of data sharing and service compatibility. To this end, we propose a "Deep Learning City Framework" that incorporates deep learning technology into smart city services, and we propose a new smart city strategy that safely shares spatial and temporal data in cities and converges the learning data of various cities.

Research on Training and Implementation of Deep Learning Models for Web Page Analysis (웹페이지 분석을 위한 딥러닝 모델 학습과 구현에 관한 연구)

  • Jung Hwan Kim; Jae Won Cho; Jin San Kim; Han Jin Lee
    • The Journal of the Convergence on Culture Technology / v.10 no.2 / pp.517-524 / 2024
  • This study trains and implements a deep learning model for the fusion of website creation and artificial intelligence, in the era of the AI revolution that followed the launch of the ChatGPT service. The model was trained on 3,000 collected web page images, processed according to a component and layout classification system. The work proceeded in three stages. First, prior research on AI models was reviewed to select the most appropriate algorithm for the model we intended to implement. Second, suitable web page and paragraph images were collected, categorized, and processed. Third, the deep learning model was trained, and a serving interface was integrated to verify the model's actual outputs. The implemented model is used to detect multiple paragraphs on a web page, analyze the number of lines, elements, and features in each paragraph, and derive meaningful data based on the classification system. This process is expected to evolve toward more precise analysis of web pages, and the development of such precise analysis techniques is anticipated to lay the groundwork for research into AI's capability to automatically generate complete web pages.
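
A minimal sketch of the kind of pipeline the abstract outlines: a small CNN classifies a cropped web page region into component classes and is wrapped in a simple serving function. The class labels, network size, and serve() interface are assumptions for illustration, not the study's actual model or label system.

```python
import torch
import torch.nn as nn

CLASSES = ["header", "navigation", "text_paragraph", "image_block", "footer"]   # assumed labels

model = nn.Sequential(                          # small CNN over a cropped page region
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, len(CLASSES)),
)
model.eval()

def serve(region: torch.Tensor) -> dict:
    """region: (3, H, W) crop of a web page screenshot with values in [0, 1]."""
    with torch.no_grad():
        probs = model(region.unsqueeze(0)).softmax(dim=1).squeeze(0)
    top = int(probs.argmax())
    return {"component": CLASSES[top], "confidence": round(float(probs[top]), 3)}

print(serve(torch.rand(3, 224, 224)))           # e.g. {'component': 'footer', 'confidence': 0.21}
```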

Design and Implementation of a Lightweight On-Device AI-Based Real-time Fault Diagnosis System using Continual Learning (연속학습을 활용한 경량 온-디바이스 AI 기반 실시간 기계 결함 진단 시스템 설계 및 구현)

  • Youngjun Kim; Taewan Kim; Suhyun Kim; Seongjae Lee; Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.3 / pp.151-158 / 2024
  • Although on-device artificial intelligence (AI) has gained attention for diagnosing machine faults in real time, most previous studies did not consider the model retraining and redeployment processes that must be performed in real-world industrial environments. Our study addresses this challenge by proposing an on-device AI-based real-time machine fault diagnosis system that utilizes continual learning. The proposed system comprises a lightweight convolutional neural network (CNN) model, a continual learning algorithm, and a real-time monitoring service. First, we developed a lightweight 1D CNN model to reduce the cost of model deployment and enable real-time inference on a target edge device with limited computing resources. We then compared the performance of five continual learning algorithms on three public bearing fault datasets and selected the most effective algorithm for our system. Finally, we implemented a real-time monitoring service using an open-source data visualization framework. In the comparison of continual learning algorithms, the replay-based algorithms outperformed the regularization-based algorithms, and the experience replay (ER) algorithm had the best diagnostic accuracy. We further tuned the number and length of data samples used for the memory buffer of the ER algorithm to maximize its performance, and confirmed that performance improves when longer data samples are used. Consequently, the proposed system achieved an accuracy of 98.7% while storing only 16.5% of the previous data in the memory buffer. The lightweight CNN model was also able to diagnose the fault type of a single data sample within 3.76 ms on a Raspberry Pi 4B device.
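
The experience replay (ER) mechanism the abstract highlights can be sketched compactly: a small memory buffer keeps a sample of past data, and each training batch mixes current data with replayed samples. The 1D CNN, buffer size, reservoir-sampling policy, and fake vibration data below are assumptions for illustration, not the paper's exact implementation or tuning.

```python
import random
import torch
import torch.nn as nn

model = nn.Sequential(                          # lightweight 1D CNN over vibration windows
    nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(), nn.Linear(16, 4),             # 4 fault classes (assumed)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
buffer, BUFFER_SIZE, seen = [], 200, 0          # replay memory of past (x, y) pairs

def train_step(x, y):                           # x: (batch, 1, length), y: (batch,)
    global seen
    bx, by = x, y
    if buffer:                                  # mix in replayed samples from earlier tasks
        rx, ry = zip(*random.sample(buffer, min(len(buffer), len(x))))
        bx, by = torch.cat([x, torch.stack(rx)]), torch.cat([y, torch.stack(ry)])
    loss = loss_fn(model(bx), by)
    opt.zero_grad(); loss.backward(); opt.step()
    for xi, yi in zip(x, y):                    # reservoir sampling over the incoming data only
        if len(buffer) < BUFFER_SIZE:
            buffer.append((xi.detach(), yi))
        elif random.randrange(seen + 1) < BUFFER_SIZE:
            buffer[random.randrange(BUFFER_SIZE)] = (xi.detach(), yi)
        seen += 1
    return float(loss)

# Stream two "tasks" of fake vibration windows (length 1024) with labels 0-3.
for task in range(2):
    for _ in range(5):
        train_step(torch.randn(8, 1, 1024), torch.randint(0, 4, (8,)))
print(len(buffer))                              # at most BUFFER_SIZE samples retained
```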

A review of Explainable AI Techniques in Medical Imaging (의료영상 분야를 위한 설명가능한 인공지능 기술 리뷰)

  • Lee, DongEon; Park, ChunSu; Kang, Jeong-Woon; Kim, MinWoo
    • Journal of Biomedical Engineering Research / v.43 no.4 / pp.259-270 / 2022
  • Artificial intelligence (AI) has been studied in various fields of medical imaging. State-of-the-art deep learning (DL) techniques currently achieve high diagnostic accuracy and fast computation, yet they are rarely used in real clinical practice because of doubts about the reliability of their results. Most DL models achieve high performance by extracting features from large volumes of data, but increasing model complexity and nonlinearity turn such models into black boxes that offer little accessibility, interpretability, or transparency. As a result, scientific interest in explainable artificial intelligence (XAI) is growing. This study reviews diverse XAI approaches currently exploited in medical imaging. We identify the concepts behind the methods, introduce studies applying them to imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and endoscopy, and finally discuss the limitations and challenges XAI faces in future studies.
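
One common family of XAI methods in imaging, gradient-based saliency, can be sketched briefly: the gradient of the predicted class score with respect to the input highlights the pixels that most influence the prediction. The tiny CNN and random input below are placeholders, not a clinical model, and the specific methods covered by the review may differ.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                          # placeholder classifier, e.g. normal vs. abnormal
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

scan = torch.rand(1, 1, 128, 128, requires_grad=True)   # stand-in for a CT/MRI slice
logits = model(scan)
logits[0, logits.argmax()].backward()           # gradient of the predicted class score
saliency = scan.grad.abs().squeeze()            # (128, 128): per-pixel influence on the prediction
print(saliency.shape, float(saliency.max()))
```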

Improving the Cyber Security over Banking Sector by Detecting the Malicious Attacks Using the Wrapper Stepwise Resnet Classifier

  • Damodharan Kuttiyappan; Rajasekar, V
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.6 / pp.1657-1673 / 2023
  • With the advancement of information technology, criminals exploit many areas of cyberspace to commit cybercrime. To combat cybercrime and cyber threats, banks and financial institutions use artificial intelligence (AI). AI technologies help the banking sector develop and grow in many ways, but transparency and explainability are required to preserve trust. Deep learning can protect data on client behavior and interests, and deep learning techniques can anticipate cyber-attack behavior, allowing for secure banking transactions. The proposed approach is based on a user-centric design that safeguards people's private banking data. First, attack data are generated over banking transactions and routing is performed to configure the nodes. The obtained data are then preprocessed to remove errors, and hierarchical network feature extraction is used to identify abnormal features related to the attack. Finally, user data are protected and malicious attacks on the transmission route are identified using the wrapper stepwise ResNet classifier. The proposed work outperforms other techniques in attack detection and accuracy, and the findings are presented graphically using the Python tool.
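
A minimal sketch of wrapper-style stepwise feature selection, the general technique behind the classifier named above: features are added greedily, keeping whichever addition most improves validation accuracy of the wrapped model. A nearest-centroid rule stands in for the paper's ResNet classifier purely to keep the example self-contained, and the synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))                   # 8 candidate transaction features (synthetic)
y = (X[:, 2] + 0.5 * X[:, 5] > 0).astype(int)   # only features 2 and 5 actually matter here
X_tr, X_va, y_tr, y_va = X[:300], X[300:], y[:300], y[300:]

def accuracy(cols):
    """Validation accuracy of a nearest-centroid stand-in classifier on the chosen columns."""
    c0, c1 = X_tr[y_tr == 0][:, cols].mean(0), X_tr[y_tr == 1][:, cols].mean(0)
    d0 = ((X_va[:, cols] - c0) ** 2).sum(1)
    d1 = ((X_va[:, cols] - c1) ** 2).sum(1)
    return float(((d1 < d0).astype(int) == y_va).mean())

selected, best = [], 0.0
while len(selected) < X.shape[1]:               # greedy forward (stepwise) wrapper selection
    scores = {f: accuracy(selected + [f]) for f in range(X.shape[1]) if f not in selected}
    f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best:                          # stop when no remaining feature helps
        break
    selected, best = selected + [f_best], s_best
print(selected, round(best, 3))                 # expected to pick the informative features first
```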

Design of an IoT Business Model Using Deep Learning AI (딥러닝 인공지능을 활용한 사물인터넷 비즈니스 모델 설계)

  • Lee, Yong-keu; Park, Dae-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.10a / pp.351-354 / 2016
  • The Go match between AlphaGo and Lee Sedol attracted global interest and ended in AlphaGo's victory. The core of AlphaGo is a deep learning system in which the computer learns by itself, and the match is widely seen as having validated the practical use of deep learning based artificial intelligence. Recently, the Korean government passed the IoT Act and has been developing business models to promote the Internet of Things (IoT). This study analyzes the IoT business environment using deep learning AI and constructs specialized business models.
