• Title/Summary/Keyword: Artificial Intelligence


Development of smart car intelligent wheel hub bearing embedded system using predictive diagnosis algorithm

  • Sam-Taek Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.1-8
    • /
    • 2023
  • A defect in a wheel bearing, a major automotive part, can lead to problems such as traffic accidents. Solving this problem requires the development of a system that collects big data and performs monitoring to provide early information on the presence and type of wheel bearing failure through predictive diagnosis and management technology. In this paper, to implement such an intelligent wheel hub bearing maintenance system, we develop an embedded system equipped with sensors for monitoring reliability and soundness and with algorithms for predictive diagnosis. The algorithm acquires vibration signals from acceleration sensors installed in the wheel bearings and can predict and diagnose failures with big data technology through signal processing techniques, fault frequency analysis, and the definition of health characteristic parameters. The implemented algorithm applies a stable signal extraction step that suppresses extraneous frequency components and maximizes the vibration components occurring in the wheel bearings. After noise removal with a filter, an artificial-intelligence-based soundness extraction algorithm is applied, the FFT is computed, the fault frequencies are analyzed, and faults are diagnosed by extracting fault characteristic factors. The performance target of the system was over 12,800 ODR, and the test results confirmed that this target was met.
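The fault-frequency-analysis step described in this abstract can be illustrated with a minimal sketch. The naive DFT below stands in for an optimized FFT library, and the sampling rate, signal, and fault frequency are invented for illustration, not taken from the paper:

```python
import math

def dft_magnitudes(signal, fs):
    """Naive DFT; returns (frequency_hz, magnitude) pairs up to Nyquist.

    A minimal stand-in for the FFT step of fault-frequency analysis;
    a real embedded system would use an optimized FFT routine.
    """
    n = len(signal)
    out = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(-2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(-2 * math.pi * k * t / n) for t in range(n))
        out.append((k * fs / n, math.hypot(re, im)))
    return out

# Synthetic vibration: a 16 Hz "fault" tone sampled at 128 Hz for 1 second.
fs = 128
signal = [math.sin(2 * math.pi * 16 * t / fs) for t in range(fs)]
spectrum = dft_magnitudes(signal, fs)
fault_freq = max(spectrum, key=lambda p: p[1])[0]
print(fault_freq)  # 16.0 -> the spectral peak sits at the injected fault tone
```

In a real diagnosis pipeline the peak frequencies would then be compared against the bearing's characteristic fault frequencies rather than simply taking the global maximum.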

Methods for Quantitative Disassembly and Code Establishment of CBS in BIM for Program and Payment Management (BIM의 공정과 기성 관리 적용을 위한 CBS 수량 분개 및 코드 정립 방안)

  • Hando Kim;Jeongyong Nam;Yongju Kim;Inhye Ryu
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.36 no.6
    • /
    • pp.381-389
    • /
    • 2023
  • One of the crucial components in building information modeling (BIM) is data. To systematically manage these data, various research studies have focused on the creation of object breakdown structures and property sets. Specifically, crucial data for managing programs and payments involves work breakdown structures (WBSs) and cost breakdown structures (CBSs), which are indispensable for mapping BIM objects. Achieving this requires disassembling CBS quantities based on 3D objects and WBS. However, this task is highly tedious owing to the large volume of CBS and divergent coding practices employed by different organizations. Manual processes, such as those based on Excel, become nearly impossible for such extensive tasks. In response to the challenge of computing quantities that are difficult to derive from BIM objects, this study presents methods for disassembling length-based quantities, incorporating significant portions of the bill of quantities (BOQs). The proposed approach recommends suitable CBS by leveraging the accumulated history of WBS-CBS mapping databases. Additionally, it establishes a unified CBS code, facilitating the effective operation of CBS databases.
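The length-based quantity disassembly this abstract describes can be sketched simply: a BOQ quantity is apportioned to 3D objects in proportion to each object's modeled length. The function name, object ids, and quantities below are illustrative, not the paper's method or data:

```python
def disassemble_by_length(total_qty, objects):
    """Split one length-based CBS quantity across BIM objects
    in proportion to each object's modeled length.

    `objects` maps a (hypothetical) object id to its length in metres.
    """
    total_len = sum(objects.values())
    return {oid: round(total_qty * length / total_len, 3)
            for oid, length in objects.items()}

# Hypothetical BOQ item: 120 m of pipework spread over three modeled segments.
shares = disassemble_by_length(120.0, {"seg-A": 10.0, "seg-B": 20.0, "seg-C": 30.0})
print(shares)  # {'seg-A': 20.0, 'seg-B': 40.0, 'seg-C': 60.0}
```

In practice each share would also be tagged with the mapped WBS and CBS codes so that payment progress can be rolled up per object.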

An Exploratory Study of Generative AI Service Quality using LDA Topic Modeling and Comparison with Existing Dimensions (LDA토픽 모델링을 활용한 생성형 AI 챗봇의 탐색적 연구 : 기존 AI 챗봇 서비스 품질 요인과의 비교)

  • YaeEun Ahn;Jungsuk Oh
    • Journal of Service Research and Studies
    • /
    • v.13 no.4
    • /
    • pp.191-205
    • /
    • 2023
  • Artificial Intelligence (AI), especially in the domain of text-generative services, has witnessed a significant surge, with forecasts indicating the AI-as-a-Service (AIaaS) market reaching a valuation of $55.0 billion by 2028. This research set out to explore the quality dimensions characterizing synthetic text media software, with a focus on four key players in the industry: ChatGPT, Writesonic, Jasper, and Anyword. Drawing from a comprehensive dataset of over 4,000 reviews sourced from a software evaluation platform, the study employed the Latent Dirichlet Allocation (LDA) topic modeling technique using the Gensim library, distilling the reviews into 11 distinct topics. Subsequent analysis compared these topics against established AI service quality dimensions, specifically AICSQ and AISAQUAL. Notably, the reviews predominantly emphasized dimensions like availability and efficiency, while others, such as anthropomorphism, which have been underscored in prior literature, were absent. This observation is attributed to the inherent nature of the AI services examined, which lean more towards semantic understanding than direct user interaction. The study acknowledges inherent limitations, mainly potential biases stemming from the single review source and the specific reviewer demographic. Possible future research includes gauging the real-world implications of these quality dimensions on user satisfaction and examining more deeply how individual dimensions might impact overall ratings.

Optimization-based Deep Learning Model to Localize L3 Slice in Whole Body Computerized Tomography Images (컴퓨터 단층촬영 영상에서 3번 요추부 슬라이스 검출을 위한 최적화 기반 딥러닝 모델)

  • Seongwon Chae;Jae-Hyun Jo;Ye-Eun Park;Jin-Hyoung Jeong;Sung Jin Kim;Ahnryul Choi
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.5
    • /
    • pp.331-337
    • /
    • 2023
  • In this paper, we propose a deep learning model that detects the lumbar 3 (L3) slice in CT images to determine the occurrence and degree of sarcopenia. We also propose an optimization technique that uses the oversampling ratio and class weight as design parameters to address the performance degradation caused by the data imbalance between the L3-level and non-L3-level portions of the CT data. To train and test the model, a total of 150 whole-body CT images of 104 prostate cancer patients and 46 bladder cancer patients who visited Gangneung Asan Medical Center were used. The deep learning model was ResNet50, and the design parameters of the optimization technique were selected as six model hyperparameters plus the data augmentation ratio and the class weight. The proposed optimization-based L3 level extraction model reduced the median L3 error by about 1.0 slice compared to the control model, which optimized only five hyperparameters. These results show that accurate L3 slice detection is possible and that the data imbalance problem can be effectively mitigated through oversampling by data augmentation and class weight adjustment.
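The class-weight idea used against the L3 / non-L3 imbalance can be sketched with the common inverse-frequency heuristic w_c = N / (K * n_c). Note this is an illustrative formula, not the paper's approach, which treats the weight as a tunable design parameter:

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights: w_c = N / (K * n_c),
    where N is the sample count, K the number of classes, and
    n_c the count of class c. Rare classes get larger weights."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# Hypothetical imbalance: 10 L3 slices vs. 90 non-L3 slices.
labels = ["L3"] * 10 + ["other"] * 90
weights = class_weights(labels)
print(weights)  # {'L3': 5.0, 'other': 0.555...}
```

Passing such weights to the loss function makes a misclassified L3 slice cost roughly nine times more than a misclassified non-L3 slice here, counteracting the imbalance.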

Detection Fastener Defect using Semi Supervised Learning and Transfer Learning (준지도 학습과 전이 학습을 이용한 선로 체결 장치 결함 검출)

  • Sangmin Lee;Seokmin Han
    • Journal of Internet Computing and Services
    • /
    • v.24 no.6
    • /
    • pp.91-98
    • /
    • 2023
  • Recently, with the development of artificial intelligence, a wide range of industries are being automated and optimized, and in the domestic rail industry there is research on using supervised learning to detect railway defects. However, tracks contain structures other than the rails themselves; the fastener is the device that binds the rail to these structures, and it requires periodic inspection to prevent safety accidents. In this paper, we present a method for reducing labeling cost using semi-supervised learning and a transfer model trained on rail fastener data. We use ResNet50, pretrained on ImageNet, as the backbone network. We first randomly sample training data from the unlabeled pool, label it, and train the model. After the trained model predicts the remaining unlabeled data, the predictions with the highest probability for each class are added to the training data in batches of a predetermined size. Furthermore, we conducted experiments to investigate the influence of the number of initially labeled samples. In the experiments, the model reached 92% accuracy, a performance gap of around 5% compared to fully supervised learning. The proposed method is therefore expected to improve classifier performance using relatively few labels and without an additional labeling process.
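The pseudo-labeling round described in the abstract (promote the most confident predictions of each class into the training set) can be sketched as follows; the tuple layout, sample ids, and confidences are illustrative:

```python
def select_pseudo_labels(predictions, per_class):
    """Pick the most confident unlabeled samples per class.

    `predictions` is a list of (sample_id, predicted_label, confidence)
    tuples from the current model; the top `per_class` predictions of
    each class are promoted to the training set for the next round.
    """
    chosen = []
    labels = {label for _, label, _ in predictions}
    for label in sorted(labels):
        ranked = sorted((p for p in predictions if p[1] == label),
                        key=lambda p: p[2], reverse=True)
        chosen.extend(ranked[:per_class])
    return chosen

preds = [("img1", "defect", 0.99), ("img2", "defect", 0.60),
         ("img3", "normal", 0.95), ("img4", "normal", 0.97)]
selected = select_pseudo_labels(preds, per_class=1)
print(selected)  # [('img1', 'defect', 0.99), ('img4', 'normal', 0.97)]
```

Repeating this loop (train, predict, promote) grows the labeled set without human annotation, which is where the labeling-cost savings come from.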

A Research on Adversarial Example-based Passive Air Defense Method against Object Detectable AI Drone (객체인식 AI적용 드론에 대응할 수 있는 적대적 예제 기반 소극방공 기법 연구)

  • Simun Yuk;Hweerang Park;Taisuk Suh;Youngho Cho
    • Journal of Internet Computing and Services
    • /
    • v.24 no.6
    • /
    • pp.119-125
    • /
    • 2023
  • Through the Ukraine-Russia war, the military importance of drones is being reassessed, and North Korea completed actual verification through a drone provocation toward South Korea in 2022. Furthermore, North Korea is actively integrating artificial intelligence (AI) technology into drones, heightening the threat they pose. In response, the Republic of Korea military has established the Drone Operations Command (DOC) and implemented various drone defense systems. However, there is a concern that capability-enhancement efforts are disproportionately focused on strike systems, making it difficult to effectively counter swarm drone attacks. In particular, Air Force bases located adjacent to urban areas face significant limitations on the use of traditional air defense weapons due to concerns about civilian casualties. Therefore, this study proposes a new passive air defense method that disrupts the object detection capabilities of AI models to enhance the survivability of friendly aircraft against AI-based swarm drones. Using laser-based adversarial examples, the study seeks to degrade the recognition accuracy of the object recognition AI installed on enemy drones. Experiments using synthetic images and precision-reduced models confirmed that the proposed method decreased the recognition accuracy of the object recognition AI from approximately 95% to around 0-15%, validating its effectiveness.

A Study on the Metadata Schema for the Collection of Sensor Data in Weapon Systems (무기체계 CBM+ 적용 및 확대를 위한 무기체계 센서데이터 수집용 메타데이터 스키마 연구)

  • Jinyoung Kim;Hyoung-seop Shim;Jiseong Son;Yun-Young Hwang
    • Journal of Internet Computing and Services
    • /
    • v.24 no.6
    • /
    • pp.161-169
    • /
    • 2023
  • Due to the Fourth Industrial Revolution, innovation in technologies such as artificial intelligence (AI), big data, and cloud computing is accelerating, and data is considered an important asset. With this technological innovation, various efforts are being made to lead innovation in the field of defense science and technology. In March 2023, the Korean government announced the "Defense Innovation 4.0 Plan," which consists of five key points and 16 tasks to foster advanced science and technology forces. The plan includes the establishment of a Condition-Based Maintenance Plus (CBM+) system to improve the operability and availability of weapon systems and to reduce defense costs. Condition Based Maintenance (CBM) aims to secure the reliability and availability of a weapon system by analyzing changes in the equipment's state information to identify signs of failure and defects, and CBM+ adds Remaining Useful Life prediction technology to the existing CBM concept [1]. To establish a CBM+ system for a weapon system, sensors are installed, and sensor data are required to obtain the weapon system's condition information. In this paper, we propose a sensor data metadata schema to efficiently and effectively manage the sensor data collected from sensors installed in various weapon systems.
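A sensor-data metadata record of the kind the abstract calls for might carry fields like the following. The field names and sample values are purely hypothetical, not the schema the paper proposes:

```python
from dataclasses import dataclass, asdict

@dataclass
class SensorDataMetadata:
    """Minimal sketch of a sensor-data metadata record for CBM+
    collection; all field names here are illustrative."""
    sensor_id: str
    weapon_system: str     # hypothetical platform identifier
    measurement: str       # physical quantity, e.g. vibration, temperature
    unit: str
    sampling_rate_hz: float
    collected_at: str      # ISO 8601 timestamp

record = SensorDataMetadata(
    sensor_id="VIB-001",
    weapon_system="engine-assembly-07",
    measurement="vibration",
    unit="m/s^2",
    sampling_rate_hz=1000.0,
    collected_at="2023-11-01T09:30:00Z",
)
print(asdict(record))  # serializable dict, ready for a metadata registry
```

Standardizing such records across weapon systems is what lets condition data from heterogeneous sensors be pooled for failure-sign analysis and remaining-useful-life prediction.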

Comparative Study of Automatic Trading and Buy-and-Hold in the S&P 500 Index Using a Volatility Breakout Strategy (변동성 돌파 전략을 사용한 S&P 500 지수의 자동 거래와 매수 및 보유 비교 연구)

  • Sunghyuck Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.6
    • /
    • pp.57-62
    • /
    • 2023
  • This research is a comparative analysis of trading the U.S. S&P 500 index with the volatility breakout strategy against the buy-and-hold approach. The volatility breakout strategy is a trading method that exploits price movements after periods of relative market stability or consolidation. Specifically, large price movements tend to occur more frequently after periods of low volatility: when a stock moves within a narrow price range for a while and then suddenly rises or falls, it is expected to continue moving in that direction, and traders adopt the volatility breakout strategy to capitalize on these movements. The 'k' value is a multiplier applied to a measure of recent market volatility. One such measure is the Average True Range (ATR), which represents the difference between the highest and lowest prices of recent trading days. The 'k' value plays a crucial role in setting a trader's entry threshold. This study calculated the 'k' value at a general level and compared the resulting returns with the buy-and-hold strategy, finding that algorithmic trading using the volatility breakout strategy achieved slightly higher returns. In the future, we plan to present simulation results for maximizing returns by determining the optimal 'k' value for automated trading of the S&P 500 index using artificial intelligence deep learning techniques.
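The entry rule the abstract describes reduces to one line: buy when the price crosses today's open plus k times the previous day's range. The prices and k value below are invented for illustration, not the study's parameters:

```python
def breakout_threshold(today_open, prev_high, prev_low, k):
    """Volatility breakout entry level: today's open plus k times the
    previous day's high-low range. A trade is entered when the price
    breaks above this level."""
    return today_open + k * (prev_high - prev_low)

# Hypothetical S&P 500 day: open 4500, prior day ranged 4460-4520, k = 0.5.
threshold = breakout_threshold(4500.0, 4520.0, 4460.0, 0.5)
print(threshold)  # 4530.0 -> buy if the index trades above this level
```

A larger k demands a bigger breakout before entering, trading fewer false signals against later entries; tuning that trade-off is what the planned deep-learning optimization of k targets.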

TAGS: Text Augmentation with Generation and Selection (생성-선정을 통한 텍스트 증강 프레임워크)

  • Kim Kyung Min;Dong Hwan Kim;Seongung Jo;Heung-Seon Oh;Myeong-Ha Hwang
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.10
    • /
    • pp.455-460
    • /
    • 2023
  • Text augmentation is a methodology that creates new augmented texts by transforming or generating original texts in order to improve the performance of NLP models. However, existing text augmentation techniques have limitations such as a lack of expressive diversity, semantic distortion, and a limited number of augmented texts. Recently, text augmentation using large language models and few-shot learning has been able to overcome these limitations, but it also carries a risk of noise due to incorrect generation. In this paper, we propose a text augmentation method called TAGS that generates multiple candidate texts and selects the appropriate ones as augmented texts. TAGS generates various expressions using few-shot learning while effectively selecting suitable data even from a small amount of original text by using contrastive learning and similarity comparison. We applied this method to task-oriented chatbot data and achieved a more than sixtyfold increase in quantity. We also analyzed the generated texts and confirmed that they are semantically and expressively diverse compared to the original texts. Moreover, we trained and evaluated a classification model using the augmented texts, which improved performance by more than 0.1915, confirming that the method helps improve actual model performance.
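The generate-then-select idea can be sketched with a simple similarity filter: keep generated candidates that are close enough to the original to preserve meaning but not near-duplicates. Token-set Jaccard similarity and the thresholds here are illustrative simplifications of TAGS's contrastive selection, not the paper's method:

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two sentences."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def select_augmentations(original, candidates, low=0.2, high=0.9):
    """Keep candidates that overlap enough with the original to stay
    on-topic (>= low) but are not near-duplicates (<= high)."""
    return [c for c in candidates
            if low <= jaccard(original, c) <= high]

orig = "book a table for two tonight"
cands = ["book a table for two tonight",          # duplicate -> rejected
         "reserve a table for two this evening",  # paraphrase -> kept
         "what is the weather today"]             # off-topic -> rejected
selected = select_augmentations(orig, cands)
print(selected)  # ['reserve a table for two this evening']
```

The same filter shape applies when the similarity score comes from a learned contrastive embedding instead of token overlap.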

Prediction of Water Storage Rate for Agricultural Reservoirs Using Univariate and Multivariate LSTM Models (단변량 및 다변량 LSTM을 이용한 농업용 저수지의 저수율 예측)

  • Sunguk Joh;Yangwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_4
    • /
    • pp.1125-1134
    • /
    • 2023
  • Of the roughly 17,000 reservoirs in Korea, 13,600 small agricultural reservoirs have no hydrological measurement facilities, making it difficult to predict water storage volume and operate them appropriately. This paper examined univariate and multivariate long short-term memory (LSTM) models to predict the storage rate of agricultural reservoirs using remote sensing and artificial intelligence. The univariate LSTM model used only the water storage rate as an explanatory variable, while the multivariate LSTM model added n-day accumulated precipitation and day of year (DOY). The models were trained on eight years of data (2013 to 2020) for Idong Reservoir, and the predictions of daily water storage in 2021 were validated for accuracy assessment. The univariate model showed root-mean-square errors (RMSE) of 1.04%, 2.52%, and 4.18% for the one-, three-, and five-day predictions, while the multivariate model showed RMSEs of 0.98%, 1.95%, and 2.76%. In addition to the time-series storage rate itself, DOY and the daily and 5-day cumulative precipitation were the most significant explanatory variables for the daily model, suggesting that precipitation affects the daily water storage rate over a temporal range of approximately five days.
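The univariate/multivariate distinction above comes down to how the LSTM input windows are built. The sketch below shows the usual (samples, timesteps, features) windowing; the storage-rate values, precipitation figures, and lookback length are invented for illustration:

```python
def make_windows(series, exog, lookback):
    """Build (X, y) samples for an LSTM from a storage-rate series.

    Each sample stacks `lookback` days of features. The univariate case
    passes exog=None (one feature per timestep); the multivariate case
    appends extra columns such as cumulative precipitation and DOY.
    """
    xs, ys = [], []
    for i in range(len(series) - lookback):
        window = []
        for t in range(i, i + lookback):
            feats = [series[t]]
            if exog is not None:
                feats.extend(exog[t])
            window.append(feats)
        xs.append(window)
        ys.append(series[i + lookback])  # next-day storage rate as target
    return xs, ys

storage = [80.0, 79.5, 78.9, 80.2, 81.0, 80.7]           # storage rate (%)
rain_doy = [[0.0, 1], [5.2, 2], [0.0, 3], [12.1, 4], [0.0, 5], [0.0, 6]]
X, y = make_windows(storage, rain_doy, lookback=3)
print(len(X), len(X[0]), len(X[0][0]))  # 3 samples, 3 timesteps, 3 features
```

Dropping `rain_doy` (exog=None) yields the univariate shape with a single feature per timestep; everything downstream of the windowing is the same LSTM.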