• Title/Summary/Keyword: AI (artificial intelligence)

Search results: 1,998 (processing time: 0.028 seconds)

Methods for Quantitative Disassembly and Code Establishment of CBS in BIM for Program and Payment Management (BIM의 공정과 기성 관리 적용을 위한 CBS 수량 분개 및 코드 정립 방안)

  • Hando Kim;Jeongyong Nam;Yongju Kim;Inhye Ryu
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.36 no.6
    • /
    • pp.381-389
    • /
    • 2023
  • One of the crucial components in building information modeling (BIM) is data. To systematically manage these data, various research studies have focused on the creation of object breakdown structures and property sets. Specifically, crucial data for managing programs and payments involves work breakdown structures (WBSs) and cost breakdown structures (CBSs), which are indispensable for mapping BIM objects. Achieving this requires disassembling CBS quantities based on 3D objects and WBS. However, this task is highly tedious owing to the large volume of CBS and divergent coding practices employed by different organizations. Manual processes, such as those based on Excel, become nearly impossible for such extensive tasks. In response to the challenge of computing quantities that are difficult to derive from BIM objects, this study presents methods for disassembling length-based quantities, incorporating significant portions of the bill of quantities (BOQs). The proposed approach recommends suitable CBS by leveraging the accumulated history of WBS-CBS mapping databases. Additionally, it establishes a unified CBS code, facilitating the effective operation of CBS databases.
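The length-based quantity disassembly described above can be sketched as a simple proportional split of a CBS quantity across WBS-mapped BIM objects. This is an illustrative sketch only; the function name, object IDs, and allocation rule are our assumptions, not the paper's method.

```python
# Hypothetical sketch: disassemble a length-based CBS quantity across
# BIM objects in proportion to each object's modeled length.

def disassemble_by_length(total_quantity, object_lengths):
    """Split a CBS quantity over objects proportionally to their lengths."""
    total_length = sum(object_lengths.values())
    if total_length == 0:
        raise ValueError("object lengths must not sum to zero")
    return {obj_id: total_quantity * length / total_length
            for obj_id, length in object_lengths.items()}

# Example: 120 m of a length-based BOQ item split over three WBS-mapped objects.
shares = disassemble_by_length(120.0, {"P-001": 30.0, "P-002": 50.0, "P-003": 20.0})
```

The same proportional rule could use volume or area as the basis where length is not the governing dimension.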

Detection Fastener Defect using Semi Supervised Learning and Transfer Learning (준지도 학습과 전이 학습을 이용한 선로 체결 장치 결함 검출)

  • Sangmin Lee;Seokmin Han
    • Journal of Internet Computing and Services
    • /
    • v.24 no.6
    • /
    • pp.91-98
    • /
    • 2023
  • Recently, with the development of artificial intelligence, a wide range of industries are being automated and optimized, and domestic rail-industry research has applied supervised learning to detect rail defects. However, tracks contain structures other than rails; the fastener is the device that binds the rail to these structures, and it requires periodic inspection to prevent safety accidents. In this paper, we present a method that reduces labeling cost by combining semi-supervised learning with a transfer-learned model trained on rail fastener data. We use a ResNet50 backbone pretrained on ImageNet. We first sample training data randomly from the unlabeled pool, label it, and train the model. After the trained model predicts the remaining unlabeled data, the predictions with the highest probability in each class are added to the training data in batches of a predetermined size. We also conducted experiments to investigate the influence of the number of initially labeled samples. The model reaches 92% accuracy, a difference of about 5% compared to fully supervised learning. The proposed method is thus expected to improve classifier performance using relatively few labels and no additional manual labeling.
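The pseudo-labeling step described above can be sketched as selecting, per class, the most confident model predictions to promote into the training set. Function and variable names here are illustrative, not the paper's.

```python
# Illustrative sketch of the pseudo-labeling selection step: after the model
# predicts class probabilities for unlabeled samples, the k most confident
# predictions per class are added to the training data.

def select_pseudo_labels(predictions, k):
    """predictions: list of (sample_id, class_label, confidence) tuples.
    Returns the k most confident samples for each class."""
    selected = []
    classes = {c for _, c, _ in predictions}
    for cls in sorted(classes):
        ranked = sorted((p for p in predictions if p[1] == cls),
                        key=lambda p: p[2], reverse=True)
        selected.extend(ranked[:k])
    return selected

preds = [("a", 0, 0.9), ("b", 0, 0.7), ("c", 1, 0.8), ("d", 1, 0.95), ("e", 0, 0.6)]
top = select_pseudo_labels(preds, k=1)
```

In the full loop, the selected samples would be appended to the labeled set and the model retrained, repeating until the unlabeled pool is exhausted.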

Comparative Study of Automatic Trading and Buy-and-Hold in the S&P 500 Index Using a Volatility Breakout Strategy (변동성 돌파 전략을 사용한 S&P 500 지수의 자동 거래와 매수 및 보유 비교 연구)

  • Sunghyuck Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.6
    • /
    • pp.57-62
    • /
    • 2023
  • This research comparatively analyzes a volatility breakout strategy on the U.S. S&P 500 index against the buy-and-hold approach. The volatility breakout strategy is a trading method that exploits price movements after periods of relative market stability or consolidation. Large price movements are observed to occur more frequently after periods of low volatility: when a stock trades within a narrow price range for a while and then suddenly rises or falls, it is expected to continue moving in that direction. To capitalize on these movements, traders adopt the volatility breakout strategy. The 'k' value is a multiplier applied to a measure of recent market volatility. One such measure is the Average True Range (ATR), the average over recent trading days of each day's true range (roughly, the spread between the day's highest and lowest prices). The 'k' value plays a crucial role in setting a trader's entry threshold. This study calculated the 'k' value at a general level and compared the resulting returns with the buy-and-hold strategy, finding that algorithmic trading using the volatility breakout strategy achieved slightly higher returns. In future work, we plan to present simulation results for maximizing returns by determining the optimal 'k' value for automated S&P 500 trading using artificial intelligence deep learning techniques.
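The breakout rule described above reduces to a simple threshold: enter when price exceeds the open plus k times a recent range. A minimal sketch, with illustrative prices and a hypothetical k of 0.5 (the paper does not specify its exact parameters here):

```python
# Minimal sketch of a volatility breakout entry rule: buy when today's price
# exceeds the open plus k times the previous day's high-low range.

def breakout_target(open_price, prev_high, prev_low, k):
    """Entry threshold: open + k * (previous day's high-low range)."""
    return open_price + k * (prev_high - prev_low)

def should_enter(price, open_price, prev_high, prev_low, k=0.5):
    return price >= breakout_target(open_price, prev_high, prev_low, k)

# Example: open 100, previous range 95-105, k = 0.5 -> threshold 105.
target = breakout_target(100.0, 105.0, 95.0, k=0.5)
```

A production version would use the ATR over several days rather than a single day's range, and add exit and position-sizing logic.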

Gain Enhancement of Double Dipole Quasi-Yagi Antenna Using Meanderline Array Structure (미앤더라인 배열 구조를 이용한 이중 다이폴 준-야기 안테나의 이득 향상)

  • Junho Yeo;Jong-Ig Lee
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.4
    • /
    • pp.447-452
    • /
    • 2023
  • In this paper, gain enhancement of a double dipole quasi-Yagi antenna using a meanderline array structure was studied. A 4×1 meanderline array consisting of meanderline conductor-shaped unit cells is located above the second dipole of the double dipole quasi-Yagi antenna. The antenna was designed to have gain over 7 dBi in the frequency range between 1.70 and 2.70 GHz in order to compare its performance with the case using a conventional strip director. The average gain of the double dipole quasi-Yagi antenna with the proposed meanderline array structure was larger than that with the conventional strip director. The antenna using the proposed meanderline array structure was fabricated on an FR4 substrate and its characteristics were compared with the simulation results. Experimental results show that the frequency band for a VSWR less than 2 was 1.55-2.82 GHz, and the frequency band for gain over 7 dBi was measured to be 1.54-2.83 GHz. Compared to the conventional case using a strip director, the frequency bandwidth with gain over 7 dBi increased, and the average gain also slightly increased.
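The measured 7 dBi gain band of 1.54-2.83 GHz can be expressed as a fractional bandwidth. A quick arithmetic check using the standard definition (this calculation is ours, not stated in the paper):

```python
# Fractional bandwidth of the measured gain band (1.54-2.83 GHz),
# using the standard definition 2 * (f_high - f_low) / (f_high + f_low).

def fractional_bandwidth(f_low, f_high):
    return 2 * (f_high - f_low) / (f_high + f_low)

fbw = fractional_bandwidth(1.54, 2.83)  # roughly 0.59, i.e. about 59%
```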

AI-based stuttering automatic classification method: Using a convolutional neural network (인공지능 기반의 말더듬 자동분류 방법: 합성곱신경망(CNN) 활용)

  • Jin Park;Chang Gyun Lee
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.71-80
    • /
    • 2023
  • This study primarily aimed to develop an automated stuttering identification and classification method using artificial intelligence technology. In particular, it aimed to develop a deep learning-based identification model using a convolutional neural network (CNN) for Korean speakers who stutter. Speech data were collected from 9 adults who stutter and 9 normally fluent speakers. The data were automatically segmented at the phrasal level using Google Cloud Speech-to-Text (STT), and labels such as 'fluent', 'blockage', 'prolongation', and 'repetition' were assigned. Mel-frequency cepstral coefficients (MFCCs) and a CNN-based classifier were used for detecting and classifying each type of stuttered disfluency. However, only five instances of prolongation were found, so that type was excluded from the classifier model. The accuracy of the CNN classifier was 0.96, and the F1-scores for classification performance were: 'fluent' 1.00, 'blockage' 0.67, and 'repetition' 0.74. Although the effectiveness of the automatic classifier using CNNs to detect stuttered disfluencies was validated, performance was inadequate especially for the blockage and prolongation types. Consequently, establishing a large speech database that collects data by type of stuttered disfluency was identified as a necessary foundation for improving classification performance.
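The core CNN operation applied to MFCC features is a convolution sliding over the time axis. A toy sketch of that operation on one MFCC coefficient track; the shapes, kernel, and values are illustrative only and do not reproduce the paper's model:

```python
# Illustrative sketch of the basic CNN building block used over MFCC frames:
# a valid-mode 1-D convolution (cross-correlation) along the time axis.

def conv1d(sequence, kernel):
    """Slide the kernel over the sequence; no padding, stride 1."""
    n, k = len(sequence), len(kernel)
    return [sum(sequence[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

# A toy contour of one MFCC coefficient over 6 frames; the [-1, 0, 1] kernel
# responds to rises and falls across frames.
frames = [0.0, 1.0, 3.0, 3.0, 1.0, 0.0]
edges = conv1d(frames, [-1.0, 0.0, 1.0])
```

A real model would stack many such filters over the full 2-D MFCC map, followed by pooling and dense layers.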

Development of a Prediction Model for Fall Patients in the Main Diagnostic S Code Using Artificial Intelligence (인공지능을 이용한 주진단 S코드의 낙상환자 예측모델 개발)

  • Ye-Ji Park;Eun-Mee Choi;So-Hyeon Bang;Jin-Hyoung Jeong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.6
    • /
    • pp.526-532
    • /
    • 2023
  • Falls are fatal accidents that occur more than 420,000 times a year worldwide. To study fall patients, we examined the association between extrinsic injury codes and principal diagnosis S-codes of patients with falls, and developed a model that predicts extrinsic injury codes from fall patients' principal diagnosis S-code data. We received two years of data (2020 and 2021) from Institution A, located in Gangneung City, Gangwon Special Self-Governing Province, extracted only the fall-related extrinsic injury codes W00 to W19, and developed the prediction model using the codes W01, W10, W13, and W18, which had enough principal diagnosis S-codes for model development. 80% of the data were used for training and 20% for testing. The model is a multi-layer perceptron (MLP) with 6 input variables (gender, age, principal diagnosis S-code, surgery, hospitalization, and alcohol consumption), 2 hidden layers with 64 nodes each, and a softmax output layer with 4 nodes corresponding to the extrinsic injury codes W01, W10, W13, and W18. Accuracy was 31.2% after the first training run but reached 87.5% by the 30th, confirming the association between fall extrinsic injury codes and fall patients' principal diagnosis S-codes.
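The architecture described above (6 inputs, two hidden layers of 64 units, 4-way softmax output) can be sketched as a forward pass. The weights below are random and the input encoding is our assumption; this does not reproduce the paper's trained model.

```python
import math
import random

# Toy forward pass matching the described architecture: 6 inputs, two hidden
# layers of 64 ReLU units, and a 4-way softmax output (one node per extrinsic
# injury code W01, W10, W13, W18). Weights are random, for illustration only.

def dense(x, weights, biases, activation=None):
    out = [sum(w * xi for w, xi in zip(row, x)) + b
           for row, b in zip(weights, biases)]
    return [max(0.0, v) for v in out] if activation == "relu" else out

def softmax(z):
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

random.seed(0)
def rand_layer(n_in, n_out):
    return ([[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

w1, b1 = rand_layer(6, 64)
w2, b2 = rand_layer(64, 64)
w3, b3 = rand_layer(64, 4)

# Hypothetical encoded input: gender, age, S-code, surgery, hospitalization, alcohol.
x = [1.0, 65.0, 0.3, 1.0, 1.0, 0.0]
h1 = dense(x, w1, b1, "relu")
h2 = dense(h1, w2, b2, "relu")
probs = softmax(dense(h2, w3, b3))  # probabilities over W01, W10, W13, W18
```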

Safety Verification Techniques of Privacy Policy Using GPT (GPT를 활용한 개인정보 처리방침 안전성 검증 기법)

  • Hye-Yeon Shim;MinSeo Kweun;DaYoung Yoon;JiYoung Seo;Il-Gu Lee
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.2
    • /
    • pp.207-216
    • /
    • 2024
  • As big data accumulated with the 4th Industrial Revolution, personalized services increased rapidly. As a result, the amount of personal information collected by online services has grown, along with users' concerns about personal information leakage and privacy infringement. Online service providers publish privacy policies to address these concerns, but because the policies are long and complex, users find it difficult to identify risk items directly, and the policies are often misused. A method that can automatically check whether a privacy policy is safe is therefore needed. However, conventional blacklist- and machine learning-based verification techniques are difficult to extend and have low accessibility. To solve these problems, this paper proposes a safety verification technique for privacy policies using the GPT-3.5 API, a generative artificial intelligence. Classification can be performed even in a new environment, suggesting that the general public, without expertise, can easily inspect a privacy policy. The experiments measured how accurately, and how quickly, the blacklist-based and GPT-based techniques classify safe and unsafe sentences. The proposed technique showed 10.34% higher accuracy on average than the conventional blacklist-based sentence safety verification technique.
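A sentence-level check of the kind described above can be sketched as: build a classification prompt, send it to the model, and parse the reply into a safe/unsafe label. The prompt wording and label format below are our assumptions, and the actual GPT-3.5 API call is deliberately elided.

```python
# Hypothetical sketch of GPT-based privacy-policy sentence verification:
# prompt construction and reply parsing. The real model call is elided.

def build_prompt(sentence):
    return ("Classify the following privacy-policy sentence as SAFE or UNSAFE "
            "for the user, answering with a single word.\n"
            f"Sentence: {sentence}")

def parse_label(reply):
    """Normalize the model's free-text reply to 'SAFE' or 'UNSAFE'."""
    word = reply.strip().split()[0].upper().rstrip(".")
    if word not in ("SAFE", "UNSAFE"):
        raise ValueError(f"unexpected reply: {reply!r}")
    return word

# reply = <GPT-3.5 API call with build_prompt(...)>  # elided
label = parse_label("unsafe.")  # parsing a hypothetical model reply
```

A full pipeline would split the policy into sentences, classify each, and aggregate the unsafe items into a report for the user.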

Analysis of Users' Sentiments and Needs for ChatGPT through Social Media on Reddit (Reddit 소셜미디어를 활용한 ChatGPT에 대한 사용자의 감정 및 요구 분석)

  • Hye-In Na;Byeong-Hee Lee
    • Journal of Internet Computing and Services
    • /
    • v.25 no.2
    • /
    • pp.79-92
    • /
    • 2024
  • ChatGPT, a representative chatbot built on generative artificial intelligence technology, has proven valuable not only in scientific and technological domains but also across diverse sectors such as society, economy, industry, and culture. This study conducts an explorative analysis of user sentiments and needs for ChatGPT by examining global social media discourse on Reddit. We collected 10,796 comments on Reddit from December 2022 to August 2023 and then employed keyword analysis, sentiment analysis, and need-mining-based topic modeling to derive insights. The analysis reveals several key findings. The most frequently mentioned term in ChatGPT-related comments is "time," indicative of users' emphasis on prompt responses, time efficiency, and enhanced productivity. Users express trust and anticipation toward ChatGPT, yet simultaneously articulate concerns and frustrations regarding its societal impact, including fear and anger. In addition, the topic modeling analysis identifies 14 topics, shedding light on potential user needs. Notably, users exhibit a keen interest in the educational applications of ChatGPT and its societal implications. Our investigation also uncovers various user-driven topics related to ChatGPT, encompassing language models, jobs, information retrieval, healthcare applications, services, gaming, regulations, energy, and ethical concerns. In conclusion, this analysis provides insights into user perspectives, emphasizing the significance of understanding and addressing user needs. The identified application directions offer valuable guidance for enhancing existing products and services or planning the development of new service platforms.
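The keyword-analysis step described above boils down to tokenizing comments, removing stopwords, and counting term frequencies. A minimal sketch; the comments and stopword list are toy examples, not the study's Reddit corpus:

```python
import re
from collections import Counter

# Minimal keyword-frequency sketch: tokenize, drop stopwords, count terms.
STOPWORDS = {"the", "a", "is", "to", "and", "of", "i", "it", "me", "so"}

def keyword_counts(comments):
    tokens = []
    for comment in comments:
        tokens += [t for t in re.findall(r"[a-z']+", comment.lower())
                   if t not in STOPWORDS]
    return Counter(tokens)

counts = keyword_counts([
    "ChatGPT saves me so much time",
    "The response time is impressive",
])
```

On a real corpus, the top entries of such a counter ("time" in the study's case) point directly at what users emphasize.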

User Experience Analysis and Management Based on Text Mining: A Smart Speaker Case (텍스트 마이닝 기반 사용자 경험 분석 및 관리: 스마트 스피커 사례)

  • Dine Yeon;Gayeon Park;Hee-Woong Kim
    • Information Systems Review
    • /
    • v.22 no.2
    • /
    • pp.77-99
    • /
    • 2020
  • A smart speaker is a device that provides an interactive voice-based service through which users can search and use information and content such as music, calendars, weather, and merchandise using artificial intelligence. Since AI technology provides more sophisticated and optimized services as it accumulates data, early smart speaker manufacturers tried to build platforms through aggressive marketing. However, more than one third of owners use their smart speaker less than once a month, and user satisfaction is only 49%. Strengthening the user experience of smart speakers is therefore necessary to acquire a large user base and enable continuous use. This study analyzes the user experience of smart speakers and proposes methods for enhancing it. Based on a two-stage analysis, we propose ways to enhance the user experience for each smart speaker model. Existing research on smart speaker user experience was mainly based on surveys and interviews, whereas this study collected actual review data written by users and interpreted the analysis results through a smart speaker user-experience dimension developed here. Interpreting the text mining results through this dimension has academic significance. Based on the results, we suggest user experience enhancement strategies to smart speaker manufacturers.
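A common first step in mining review text like this is weighting terms by TF-IDF, so that words distinctive to a review outweigh words common to all reviews. An illustrative sketch; the reviews and tokenization are toy examples, not the study's pipeline:

```python
import math
from collections import Counter

# Illustrative TF-IDF sketch for mining user reviews: terms appearing in every
# document get weight 0; terms distinctive to a document get positive weight.

def tf_idf(docs):
    """docs: list of token lists. Returns one {term: tf-idf} map per doc."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

reviews = [["speaker", "music", "great"],
           ["speaker", "voice", "recognition", "poor"]]
weights = tf_idf(reviews)
```

The weighted terms can then be grouped under user-experience dimensions (e.g. usability, content quality) for interpretation.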

Financial Products Recommendation System Using Customer Behavior Information (고객의 투자상품 선호도를 활용한 금융상품 추천시스템 개발)

  • Hyojoong Kim;SeongBeom Kim;Hee-Woong Kim
    • Information Systems Review
    • /
    • v.25 no.1
    • /
    • pp.111-128
    • /
    • 2023
  • With the development of artificial intelligence technology, interest in data-based product preference estimation and personalized recommender systems is increasing. However, an unsuitable recommendation may reduce a customer's purchase intention and, given the characteristics of financial products, even lead to large financial losses. Developing a recommender system that comprehensively reflects customer characteristics and product preferences is therefore very important for business performance and for responding to compliance issues. For financial products, preferences divide clearly according to individual investment propensity and risk aversion, so customized recommendation services should make use of accumulated customer data. Beyond customer behavioral characteristics and transaction history, we incorporate customer demographic information, asset information, and stock holding information to address the recommender system's cold-start problem. This study found that the proposed deep learning-based collaborative filtering model, which derives customers' latent preferences from characteristics such as investment propensity, transaction history, and financial product information in customer transaction logs, performed best. Grounded in customers' financial investment mechanisms, this study is meaningful in developing a service that recommends a high-priority group of products through a recommendation model that derives expected preferences for untraded financial products from financial product transaction data.
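Latent-preference collaborative filtering of the kind described above can be sketched as factorizing a customer-by-product interaction matrix and scoring untraded products by the dot product of latent vectors. The data, dimensions, and plain SGD loop below are toy values; the study's deep learning model, which also folds in demographic and asset features, is not reproduced here.

```python
import random

# Rough matrix-factorization sketch of latent-preference collaborative
# filtering: fit user/item latent vectors to observed interactions, then
# score untraded products by the dot product.

def matrix_factorize(ratings, n_users, n_items, k=2, lr=0.05, epochs=300, seed=0):
    rng = random.Random(seed)
    # Positive initialization suits the positive-only interactions used here.
    U = [[rng.uniform(0.01, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.uniform(0.01, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                uf = U[u][f]  # cache so both updates use the old value
                U[u][f] += lr * err * V[i][f]
                V[i][f] += lr * err * uf
    return U, V

# Observed preferences (customer, product, strength); product 1 is untraded
# by customer 1, so its score is the model's expected preference.
ratings = [(0, 0, 1.0), (0, 1, 1.0), (1, 0, 1.0)]
U, V = matrix_factorize(ratings, n_users=2, n_items=2)
score = sum(U[1][f] * V[1][f] for f in range(2))
```

Ranking such scores across all untraded products yields the high-priority recommendation group the study describes.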