• Title/Summary/Keyword: Dataset for AI

195 search results

Optimization of Action Recognition based on Slowfast Deep Learning Model using RGB Video Data (RGB 비디오 데이터를 이용한 Slowfast 모델 기반 이상 행동 인식 최적화)

  • Jeong, Jae-Hyeok;Kim, Min-Suk
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1049-1058 / 2022
  • HAR (Human Action Recognition), covering tasks such as anomaly and object detection, has become a trend in research fields that apply Artificial Intelligence (AI) methods to analyze patterns of human action in crime-ridden areas, media services, and industrial facilities. In particular, in real-time systems that use video streaming data, HAR has become an important AI-based research field for application development, and many research directions that build on HAR are currently being developed and improved. In this paper, we propose and analyze a deep-learning-based HAR scheme that is more efficient because it uses an intelligent AI model; such a system can be applied to media services using RGB video streaming data without feature-extraction pre-processing. For the method, we adopt Slowfast, a Deep Neural Network (DNN) model, on an open dataset (HMDB-51 or UCF101) to improve prediction accuracy.
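The Slowfast model mentioned above rests on a dual-pathway idea: a slow pathway sees few frames while a fast pathway sees many. As an illustrative sketch only (the stride values and speed ratio follow the original SlowFast paper, not necessarily this study's settings), the frame sampling can be written as:

```python
import numpy as np

def slowfast_sample(video, alpha=8, fast_stride=2):
    """Split an RGB clip (T, H, W, C) into slow/fast pathway inputs.

    The fast pathway samples frames densely (every `fast_stride` frames);
    the slow pathway samples `alpha` times more sparsely.
    """
    fast = video[::fast_stride]          # high temporal resolution
    slow = video[::fast_stride * alpha]  # low temporal resolution
    return slow, fast

clip = np.zeros((64, 224, 224, 3), dtype=np.float32)  # 64-frame RGB clip
slow, fast = slowfast_sample(clip)
print(slow.shape[0], fast.shape[0])  # 4 32
```

Each pathway then feeds its own convolutional backbone; the sampling itself needs no hand-crafted feature extraction, which is the point the abstract makes about raw RGB input.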

Deep Learning for Remote Sensing Applications (원격탐사활용을 위한 딥러닝기술)

  • Lee, Moung-Jin;Lee, Won-Jin;Lee, Seung-Kuk;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1581-1587 / 2022
  • Recently, deep learning has become increasingly important in remote sensing data processing. Huge amounts of data for artificial intelligence (AI) have been designed and built to develop new remote sensing technologies, and AI models have been trained on these AI training datasets. Artificial intelligence models have developed rapidly, and model accuracy is increasing accordingly. However, model accuracy varies depending on the person who trains the AI model; consequently, experts who can train AI models well are increasingly required. Moreover, deep learning techniques enable us to automate methods for remote sensing applications. Methods whose performance was below about 60% in the past now exceed 90% and are approaching 100%. In this special issue, thirteen papers on how deep learning techniques are used for remote sensing applications are introduced.

Deep Learning-Based, Real-Time, False-Pick Filter for an Onsite Earthquake Early Warning (EEW) System (온사이트 지진조기경보를 위한 딥러닝 기반 실시간 오탐지 제거)

  • Seo, JeongBeom;Lee, JinKoo;Lee, Woodong;Lee, SeokTae;Lee, HoJun;Jeon, Inchan;Park, NamRyoul
    • Journal of the Earthquake Engineering Society of Korea / v.25 no.2 / pp.71-81 / 2021
  • This paper presents a real-time, false-pick filter based on deep learning to reduce false alarms from an onsite Earthquake Early Warning (EEW) system. Most onsite EEW systems use the P-wave to predict the S-wave, so it is essential to properly distinguish P-waves from noise or other seismic phases to avoid false alarms. To reduce the false picks that cause false alarms, this study developed the EEWNet Part 1 'False-Pick Filter' model based on a Convolutional Neural Network (CNN). Specifically, it modified Pick_FP (Lomax et al.) to generate input data, such as the amplitude, velocity, and displacement of three components, from 2 seconds before to 2 seconds after the P-wave arrival, in one-second time steps. The model extracts log-mel power spectrum features from this input data, then classifies P-waves and non-P-waves using these features. The dataset consisted of 3,189,583 samples: 81,394 samples from event data (727 events in the Korean Peninsula, 103 teleseismic events, and 1,734 events in Taiwan) and 3,108,189 samples from continuous data (recorded by seismic stations in South Korea for 27 months from 2018 to 2020). The model was trained with 1,826,357 samples after balancing, then tested on continuous-data samples from the year 2019, filtering more than 99% of the strong false picks that could trigger false alarms. The model was developed as a module for USGS Earthworm and is written in C to operate with minimal computing resources.
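The input windowing the abstract describes (2 seconds before to 2 seconds after the P-arrival, in one-second steps) can be sketched as below; the 100 Hz sampling rate and the (components, samples) array layout are assumptions for illustration, not the study's actual configuration:

```python
import numpy as np

def p_wave_windows(waveform, p_idx, sr=100, before=2, after=2, step=1):
    """Cut a (3, N) three-component trace into 1-second segments spanning
    `before` seconds before to `after` seconds after the P-arrival sample
    index `p_idx`."""
    start = p_idx - before * sr
    segments = []
    for s in range(start, p_idx + after * sr, step * sr):
        segments.append(waveform[:, s:s + step * sr])
    return np.stack(segments)  # (num_steps, 3, sr * step)

trace = np.random.randn(3, 6000)   # 60 s of 3-component data at 100 Hz
wins = p_wave_windows(trace, p_idx=3000)
print(wins.shape)  # (4, 3, 100)
```

Each one-second segment would then be converted to log-mel power spectrum features before classification, per the abstract.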

Detecting Foreign Objects in Chest X-Ray Images using Artificial Intelligence (인공 지능을 이용한 흉부 엑스레이 이미지에서의 이물질 검출)

  • Chang-Hwa Han
    • Journal of the Korean Society of Radiology / v.17 no.6 / pp.873-879 / 2023
  • This study explored the use of artificial intelligence (AI) to detect foreign bodies in chest X-ray images. Medical imaging, especially chest X-rays, plays a crucial role in diagnosing diseases such as pneumonia and lung cancer. With the increase in imaging tests, AI has become an important tool for efficient and fast diagnosis. However, images can contain foreign objects, including everyday items such as buttons and bra wires, which can interfere with accurate readings. In this study, we developed an AI algorithm that accurately identifies these foreign objects, processing the National Institutes of Health chest X-ray dataset with a model based on YOLOv8. The results showed high detection performance, with accuracy, precision, recall, and F1-score all close to 0.91. The study addressed the problem that foreign objects in an image can distort reading results, emphasizing the innovative role of AI in radiology and the accuracy-based reliability that is essential for clinical implementation.
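The precision, recall, and F1-score reported above are standard detection metrics. A minimal sketch of their arithmetic, using hypothetical counts (not the study's actual confusion matrix):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 for a detector, from true-positive,
    false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts chosen only to illustrate the arithmetic.
p, r, f1 = detection_metrics(tp=910, fp=90, fn=90)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.91 0.91 0.91
```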

Applying a Novel Neuroscience Mining (NSM) Method to fNIRS Dataset for Predicting the Business Problem Solving Creativity: Emphasis on Combining CNN, BiLSTM, and Attention Network

  • Kim, Kyu Sung;Kim, Min Gyeong;Lee, Kun Chang
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.1-7 / 2022
  • With the development of artificial intelligence, efforts to incorporate neuroscience mining (NSM) with AI have increased. Neuroscience mining expands on this concept by combining computational neuroscience and business analytics. Using an fNIRS (functional near-infrared spectroscopy)-based experimental dataset, we investigated the potential of NSM in the context of predicting business problem-solving creativity (BPSC). Although BPSC is regarded as an essential business differentiator and a cognitive resource that is difficult to imitate, measuring it is challenging, and within NSM appropriate methods for assessing and predicting BPSC are still in their infancy. We therefore propose a novel NSM method that systematically combines a CNN, a BiLSTM, and an attention network to significantly enhance BPSC prediction performance. We used a dataset containing over 150 thousand fNIRS-measured data points to evaluate the validity of the proposed NSM method. Empirical evidence demonstrates that the proposed NSM method achieves the most robust performance compared with benchmark methods.
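In a CNN-BiLSTM-attention pipeline like the one described, the attention component typically pools a sequence of recurrent hidden states into one context vector. A minimal numpy sketch, where the dimensions and dot-product scoring are illustrative assumptions rather than the authors' exact architecture:

```python
import numpy as np

def attention_pool(H, w):
    """Attention over a sequence of hidden states.

    H: (T, d) sequence features (e.g., BiLSTM outputs over fNIRS time steps)
    w: (d,) scoring vector (learned in a real model)
    Returns a (d,) context vector: a softmax-weighted sum of time steps.
    """
    scores = H @ w                      # (T,) relevance of each time step
    a = np.exp(scores - scores.max())
    a /= a.sum()                        # softmax attention weights
    return a @ H                        # weighted sum over time

rng = np.random.default_rng(0)
H = rng.normal(size=(150, 64))          # 150 time steps, 64-dim features
ctx = attention_pool(H, rng.normal(size=64))
print(ctx.shape)  # (64,)
```

The context vector then feeds a final dense layer that outputs the BPSC prediction.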

Resource Metric Refining Module for AIOps Learning Data in Kubernetes Microservice

  • Jonghwan Park;Jaegi Son;Dongmin Kim
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.6 / pp.1545-1559 / 2023
  • In the cloud environment, microservices are implemented through Kubernetes, and these services can be scaled out or in through the Kubernetes autoscaling function, depending on service requests or resource usage. However, the growing number of nodes and distributed microservices in Kubernetes, together with the unpredictable autoscaling function, makes operations very difficult for system administrators. Artificial Intelligence for IT Operations (AIOps) supports resource management for cloud services through AI and has attracted attention as a solution to these problems. For example, after an AI model learns the metric or log data collected per microservice, failures can be inferred by predicting future resource usage. However, it is difficult to construct datasets for training such models because the many microservices involved in autoscaling generate different metrics or logs at the same timestamp. In this study, we propose a cloud-data refining module and structure that collects metric and log data in a microservice environment implemented with Kubernetes and arranges the data by the computing resources of each service, so that AI models can learn and infer service-specific failures. We obtained Kubernetes-based AIOps learning data through this module and, after training an AI model on the resulting dataset, verified the prediction results by comparing predicted and actual data.
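The core refining step, aligning metrics emitted by many services at mismatched timestamps into per-service rows on a common time grid, can be sketched as follows. The 10-second bucket size, field names, and data layout are illustrative assumptions, not the module's actual design:

```python
from collections import defaultdict

def refine_metrics(samples):
    """Group raw metric samples, emitted by microservices at mismatched
    timestamps, into per-service rows keyed by a coarse common timestamp,
    so a model can train on aligned feature vectors.

    samples: list of (timestamp_sec, service, metric, value)
    Returns: {service: {bucket: {metric: value}}} with 10-second buckets.
    """
    table = defaultdict(lambda: defaultdict(dict))
    for ts, svc, metric, value in samples:
        bucket = ts - ts % 10          # snap to a 10-second grid
        table[svc][bucket][metric] = value
    return table

raw = [
    (1001, "cart", "cpu", 0.42), (1003, "cart", "mem", 0.61),
    (1002, "auth", "cpu", 0.15),
]
t = refine_metrics(raw)
print(t["cart"][1000])  # {'cpu': 0.42, 'mem': 0.61}
```

Two samples from the same service that arrive two seconds apart land in one row, while different services keep separate rows, which is the alignment problem the abstract describes.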

SAR Recognition of Target Variants Using Channel Attention Network without Dimensionality Reduction (차원축소 없는 채널집중 네트워크를 이용한 SAR 변형표적 식별)

  • Park, Ji-Hoon;Choi, Yeo-Reum;Chae, Dae-Young;Lim, Ho
    • Journal of the Korea Institute of Military Science and Technology / v.25 no.3 / pp.219-230 / 2022
  • In implementing a robust automatic target recognition (ATR) system with synthetic aperture radar (SAR) imagery, one of the most important issues is accurate classification of target variants, i.e., the same targets with different serial numbers, configurations, versions, etc. In this paper, a deep learning network with channel attention modules is proposed to address the recognition problem for target variants, building on previous research findings that the channel attention mechanism selectively emphasizes the features useful for target recognition. Unlike other existing attention methods, this paper employs channel attention modules without dimensionality reduction along the channel direction, so that the direct correspondence between feature-map channels is preserved and features valuable for recognizing SAR target variants can be effectively derived. Experiments with the public benchmark dataset demonstrate that the proposed scheme is superior to networks with other existing channel attention modules.
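Channel attention without a channel-reduction bottleneck resembles the ECA-style design: a global average pool followed by a small 1D convolution across the channel axis, so each gate maps one-to-one onto a channel. A numpy sketch under that assumption (the uniform averaging kernel stands in for learned convolution weights):

```python
import numpy as np

def channel_attention_no_reduction(fmap, k=3):
    """Channel attention with no reduction bottleneck: every attention
    weight corresponds directly to one channel.

    fmap: (C, H, W) feature map. A 1D convolution of size k over the
    channel axis captures local cross-channel interaction; a sigmoid
    turns the result into per-channel gates.
    """
    c = fmap.shape[0]
    z = fmap.mean(axis=(1, 2))                    # global average pool: (C,)
    pad = np.pad(z, k // 2, mode="edge")
    kernel = np.full(k, 1.0 / k)                  # stand-in for learned weights
    s = np.array([pad[i:i + k] @ kernel for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-s))               # sigmoid gates: (C,)
    return fmap * gate[:, None, None]             # reweight each channel

out = channel_attention_no_reduction(np.random.rand(16, 8, 8))
print(out.shape)  # (16, 8, 8)
```

A squeeze-and-excitation block, by contrast, compresses the C channels through a smaller hidden layer, which is the dimensionality reduction this paper avoids.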

A Study of AI Impact on the Food Industry

  • Seong Soo CHA
    • The Korean Journal of Food & Health Convergence / v.9 no.4 / pp.19-23 / 2023
  • The integration of ChatGPT, an AI-powered language model, is causing a profound transformation within the food industry, impacting various domains. It offers novel capabilities in recipe creation, personalized dining, menu development, food safety, customer service, and culinary education. ChatGPT's analysis of vast culinary datasets aids chefs in pushing flavor boundaries through innovative ingredient combinations. Its personalization potential caters to dietary preferences and cultural nuances, democratizing culinary knowledge, and it functions as a virtual mentor, empowering enthusiasts to experiment creatively. For personalized dining, ChatGPT's language understanding enables customer interaction and preference-based dish recommendations. In menu development, data-driven insights identify culinary trends, guiding chefs in crafting menus aligned with evolving tastes; it also suggests inventive ingredient pairings, fostering innovation and inclusivity. AI-driven data analysis contributes to quality control, ensuring consistent taste and texture. Food writing and marketing benefit from ChatGPT's content generation, which adapts to diverse strategies and consumer preferences. AI-powered chatbots revolutionize customer service, improving ordering experiences and post-purchase engagement. In culinary education, ChatGPT acts as a virtual mentor, guiding learners through techniques and history; in food safety, data analysis helps prevent contamination and ensure compliance. Overall, ChatGPT reshapes the industry by uniting AI analytics with culinary expertise, enhancing innovation, inclusivity, and efficiency in gastronomy.

Region of Interest Localization for Bone Age Estimation Using Whole-Body Bone Scintigraphy

  • Do, Thanh-Cong;Yang, Hyung Jeong;Kim, Soo Hyung;Lee, Guee Sang;Kang, Sae Ryung;Min, Jung Joon
    • Smart Media Journal / v.10 no.2 / pp.22-29 / 2021
  • In the past decade, deep learning has been applied to various medical image analysis tasks. Skeletal bone age estimation is clinically important as it can help prevent age-related illness and pave the way for new anti-aging therapies. Recent research has applied deep learning techniques to the task of bone age assessment and achieved positive results. In this paper, we propose a bone age prediction method using a deep convolutional neural network. Specifically, we first train a classification model that automatically localizes the most discriminative region of an image and crops it from the original image. The regions of interest are then used as input for a regression model to estimate the age of the patient. The experiments are conducted on a whole-body scintigraphy dataset that was collected by Chonnam National University Hwasun Hospital. The experimental results illustrate the potential of our proposed method, which has a mean absolute error of 3.35 years. Our proposed framework can be used as a robust supporting tool for clinicians to prevent age-related diseases.
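The mean absolute error reported above is simply the average absolute difference between predicted and true ages; for reference, with hypothetical values:

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """MAE between true ages and regression outputs (years)."""
    return np.abs(np.asarray(y_true) - np.asarray(y_pred)).mean()

# Hypothetical ages, only to show the arithmetic behind a reported MAE.
mae = mean_absolute_error([60, 45, 72], [63, 44, 70])
print(mae)  # 2.0
```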

Transfer Learning-based Generated Synthetic Images Identification Model (전이 학습 기반의 생성 이미지 판별 모델 설계)

  • Chaewon Kim;Sungyeon Yoon;Myeongeun Han;Minseo Park
    • The Journal of the Convergence on Culture Technology / v.10 no.2 / pp.465-470 / 2024
  • The advancement of AI-based image generation technology has resulted in the creation of various images, emphasizing the need for technology capable of accurately discerning them. Because the amount of generated image data is limited, this study proposes a model that discriminates generated images using transfer learning in order to achieve high performance with a limited dataset. We apply models pre-trained on the ImageNet dataset directly to the CIFAKE input dataset to reduce training time, then add three hidden layers and one output layer to fine-tune the model. The modeling results revealed that performance improved when the final layers were adjusted. By using transfer learning and then adjusting the layers close to the output layer, accuracy problems caused by small image datasets can be reduced and generated images can be classified.
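The freeze-the-backbone, train-only-the-new-head pattern described above can be sketched without a deep learning framework. The random "backbone", toy labels, and single-layer logistic head below are illustrative stand-ins, not the study's actual ImageNet model, added layers, or CIFAKE data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen "pre-trained" feature extractor: stands in for an ImageNet
# backbone whose weights are NOT updated during transfer learning.
W_frozen = rng.normal(size=(100, 32))
def features(x):                     # x: (n, 100) flattened images
    return np.tanh(x @ W_frozen)

# Trainable head: stands in for the newly added hidden/output layers.
w = np.zeros(32)
X = rng.normal(size=(200, 100))
y = (X[:, 0] > 0).astype(float)      # toy real-vs-generated labels

F = features(X)                      # backbone outputs, computed once
for _ in range(500):                 # train only the head, backbone fixed
    p = 1 / (1 + np.exp(-F @ w))     # sigmoid prediction
    w -= 0.1 * F.T @ (p - y) / len(y)  # logistic-regression gradient step

acc = ((1 / (1 + np.exp(-F @ w)) > 0.5) == y).mean()
print(acc > 0.5)  # the head learns even though the backbone never changes
```

Because the backbone's outputs can be cached, each fine-tuning step only touches the small head, which is where the training-time saving the abstract mentions comes from.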