• Title/Summary/Keyword: AI training data

Automatic Adaptation Based Metaverse Virtual Human Interaction (자동 적응 기반 메타버스 가상 휴먼 상호작용 기법)

  • Chung, Jin-Ho;Jo, Dongsik
    • KIPS Transactions on Software and Data Engineering / v.11 no.2 / pp.101-106 / 2022
  • Recently, virtual humans have been widely used in fields such as education, training, and information guidance, and they are expected to be applied to services that interact with remote users in the metaverse. In this paper, we propose a novel method that enables a virtual human to perceive the user's surroundings during interaction. We use an editing authoring tool to map the user's interaction to the virtual human's response. The virtual human recognizes the user's situation based on fuzzy logic and presents the optimal response to the user. With the context-aware interaction method addressed in this paper, the virtual human can provide interaction suited to the surrounding environment through automatic adaptation.
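
The abstract describes selecting a virtual human's response from fuzzy-evaluated context. The sketch below is only an illustration of that idea, not the authors' method; the inputs (user distance and ambient noise), the membership functions, and the rule set are all assumptions.

```python
# Minimal sketch (not the authors' implementation): choosing a virtual
# human's response from fuzzy-evaluated context, with two illustrative
# inputs -- user distance (m) and ambient noise (dB) -- and hand-picked
# membership functions.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def respond(distance_m, noise_db):
    near = tri(distance_m, 0.0, 0.5, 2.0)
    far = tri(distance_m, 1.5, 4.0, 10.0)
    quiet = tri(noise_db, 20, 35, 55)
    loud = tri(noise_db, 45, 70, 90)

    # Fuzzy rules: rule strength = min of antecedent memberships.
    rules = {
        "greet_verbally": min(near, quiet),
        "greet_with_gesture": min(near, loud),   # speech may not be heard
        "wave_from_distance": min(far, quiet),
        "do_nothing": min(far, loud),
    }
    # Pick the response with the strongest activation.
    return max(rules, key=rules.get)

print(respond(distance_m=0.8, noise_db=30))  # -> greet_verbally
```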

Design of weighted federated learning framework based on local model validation

  • Kim, Jung-Jun;Kang, Jeon Seong;Chung, Hyun-Joon;Park, Byung-Hoon
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.13-18 / 2022
  • In this paper, we propose VW-FedAVG (Validation-based Weighted FedAVG), which updates the global model by weighting each participating device's model according to its verified performance. The first method uses a server-side validation structure, validating each local client model on a validation dataset before updating the global model. The second uses a client-side validation structure, in which the validation dataset is distributed evenly to the clients and the global model is updated after validation. Experiments on MNIST and CIFAR-10 image classification under both IID and non-IID data distributions achieved higher accuracy than previous studies.
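
As a rough illustration of the validation-weighted aggregation described above, the sketch below weights each client model by its accuracy on a held-out validation set before averaging; it is not the paper's code, and the toy linear model and data shapes are assumptions.

```python
# Minimal sketch of validation-weighted federated averaging in the spirit of
# VW-FedAVG: client updates are aggregated with weights proportional to each
# client model's accuracy on a held-out validation set.
import numpy as np

def validation_accuracy(weights, X_val, y_val):
    """Toy linear classifier accuracy used as the validation score."""
    logits = X_val @ weights
    return float(np.mean((logits > 0).astype(int) == y_val))

def vw_fedavg(client_weights, X_val, y_val):
    """Weight each client model by its (normalized) validation accuracy."""
    scores = np.array([validation_accuracy(w, X_val, y_val) for w in client_weights])
    if scores.sum() == 0:                 # fall back to plain FedAvg
        scores = np.ones_like(scores)
    alphas = scores / scores.sum()
    return sum(a * w for a, w in zip(alphas, client_weights))

# Illustrative use with random data and three "clients".
rng = np.random.default_rng(0)
X_val, y_val = rng.normal(size=(100, 5)), rng.integers(0, 2, size=100)
clients = [rng.normal(size=5) for _ in range(3)]
global_model = vw_fedavg(clients, X_val, y_val)
```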

A Study on Improvement of Buffer Cache Performance for File I/O in Deep Learning (딥러닝의 파일 입출력을 위한 버퍼캐시 성능 개선 연구)

  • Jeongha Lee;Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.2 / pp.93-98 / 2024
  • With the rapid advances in AI (artificial intelligence) and high-performance computing technologies, deep learning is being used in various fields. Deep learning trains by randomly reading a large amount of data and repeating this process. A large number of files are referenced randomly and repeatedly during deep learning, which shows access characteristics different from traditional workloads with temporal locality. To cope with the caching difficulties caused by deep learning, we propose a new sampling method that reduces the randomness of dataset reading and operates adaptively on existing buffer cache algorithms. We show that the proposed policy reduces the buffer cache miss rate by 16% on average and up to 33% compared to the existing method, and improves execution time by up to 24%.
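
One way to reduce the randomness of dataset reading, as the abstract describes, is to shuffle file indices only within blocks rather than across the whole dataset. The sketch below is an assumption about how such a sampler could look, not the paper's algorithm.

```python
# Minimal sketch (an assumption, not the paper's method) of block-wise
# shuffling: instead of a full random permutation of the dataset each epoch,
# file indices are shuffled within fixed-size blocks, which keeps consecutive
# reads closer together and makes buffer caching more effective.
import random

def block_shuffled_indices(num_files, block_size, seed=None):
    rng = random.Random(seed)
    indices = list(range(num_files))
    blocks = [indices[i:i + block_size] for i in range(0, num_files, block_size)]
    rng.shuffle(blocks)              # randomize block order across the dataset
    for block in blocks:
        rng.shuffle(block)           # randomize order only within each block
    return [i for block in blocks for i in block]

# Example: one epoch's read order over 1,000 files in blocks of 100.
order = block_shuffled_indices(num_files=1000, block_size=100, seed=42)
```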

A Study of Development and Application of an Inland Water Body Training Dataset Using Sentinel-1 SAR Images in Korea (Sentinel-1 SAR 영상을 활용한 국내 내륙 수체 학습 데이터셋 구축 및 알고리즘 적용 연구)

  • Eu-Ru Lee;Hyung-Sup Jung
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1371-1388 / 2023
  • Floods are becoming more severe and frequent due to global warming-induced climate change, and water disasters are increasing in Korea due to severe rainfall and longer wet seasons. This makes preventive measures against climate change and efficient responses to water disasters crucial, and synthetic aperture radar (SAR) satellite imagery can help. This study constructed 1,423 water body training datasets for individual water body regions along the Han and Nakdong river systems to reflect the properties of domestic water bodies observed in Sentinel-1 satellite radar imagery, and documented precise data annotation criteria for a range of situations. After the dataset was processed, water body detection was performed with U-Net, a deep learning model. To validate domestic water body monitoring on a national scale, the trained model was also applied to water body regions not involved in training. The analysis showed that the constructed dataset enabled accurate water body detection (F1-score: 0.987, Intersection over Union [IoU]: 0.955), and other domestic water body regions not used for training and evaluation showed similar accuracy (F1-score: 0.941, IoU: 0.89). Both results indicate that the model detected water bodies accurately in most areas, although small streams and shadowed areas remained problematic. This work should improve the monitoring of water resource changes and disaster damage. Future studies are expected to add datasets with more water body attributes; such databases could help manage and monitor water bodies nationwide and shed light on misclassified regions.
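
The reported metrics (F1-score and IoU) can be computed from binary masks as in the sketch below; the mask shapes and threshold are illustrative assumptions, not the authors' evaluation pipeline.

```python
# Minimal sketch of the evaluation metrics reported in the paper (F1-score and
# Intersection over Union) for a binary water-body mask.
import numpy as np

def f1_and_iou(pred_mask, true_mask):
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()
    fp = np.logical_and(pred, ~true).sum()
    fn = np.logical_and(~pred, true).sum()
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return f1, iou

# Example with random 256x256 masks (stand-ins for U-Net output and labels).
rng = np.random.default_rng(0)
pred = rng.random((256, 256)) > 0.5
true = rng.random((256, 256)) > 0.5
print(f1_and_iou(pred, true))
```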

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management / 2022.06a / pp.1243-1244 / 2022
  • In recent years, the growing interest in off-site construction has led factories to scale up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated component production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges remain for deploying this technology in large-scale field applications. One issue is collecting and transmitting vast amounts of video data. Continuous site monitoring systems rely on real-time video collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate object information with different sizes and scales into a single scene. Objects of various sizes and types (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, with existing object detection algorithms it is difficult to simultaneously detect objects with significant size differences, because doing so requires collecting and training on massive amounts of object image data at various scales. This study therefore developed a large-scale site monitoring system using edge computing and a small-object detection method to solve these problems. Edge computing is a distributed information technology architecture in which image or video data is processed near its originating source rather than on a centralized server or cloud. By running inference on AI computing modules attached to the CCTVs and transmitting only the processed information to the server, excessive network traffic can be reduced. Small-object detection is a method for detecting objects of different sizes by cropping the raw image, with the number of rows and columns for image splitting set according to the target object size. This enables small objects to be detected from cropped and magnified images, and the detected objects can then be mapped back to the original image. For inference, this study used the YOLO-v5 algorithm, which is known for its fast processing speed and widely used for real-time object detection. This method could effectively detect large and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a wide view of a construction site, which the existing algorithms detected inaccurately. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept, and to enable the optimization of safe routes by accumulating workers' paths and inferring risky areas from workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management more effectively.
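
The tile-based small-object detection idea described above (split the frame into rows and columns, detect on each crop, then map boxes back to the original image) could be sketched as follows. This is not the authors' system; `detect` stands in for any detector such as a YOLO-v5 model, and its output format is an assumption.

```python
# Minimal sketch of tile-based small-object detection: the frame is split
# into a grid of crops, a detector is run on each crop, and boxes are shifted
# back to full-image coordinates.
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float, str]  # (x1, y1, x2, y2, label)

def detect_tiled(image_w: int, image_h: int, rows: int, cols: int,
                 detect: Callable[[int, int, int, int], List[Box]]) -> List[Box]:
    """Run `detect` on each tile and map boxes back to full-image coordinates."""
    tile_w, tile_h = image_w // cols, image_h // rows
    results: List[Box] = []
    for r in range(rows):
        for c in range(cols):
            x0, y0 = c * tile_w, r * tile_h
            for (x1, y1, x2, y2, label) in detect(x0, y0, tile_w, tile_h):
                results.append((x0 + x1, y0 + y1, x0 + x2, y0 + y2, label))
    return results

# Example with a dummy detector that "finds" one worker per tile.
dummy = lambda x0, y0, w, h: [(w * 0.4, h * 0.4, w * 0.6, h * 0.6, "worker")]
boxes = detect_tiled(image_w=1920, image_h=1080, rows=2, cols=3, detect=dummy)
```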

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review / v.16 no.3 / pp.161-177 / 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive, since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural network (ANN), and multiclass support vector machine (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs, and the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As a tool for optimizing the kernel parameters and the feature subset selection, we suggest the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results. In particular, the mutation operator prevents GA from falling into local optima, so the globally optimal or near-optimal solution can be found. GA has been widely applied to searching the optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as an optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, i.e., the credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). For each class, 80 percent of the data was used for training and the remaining 20 percent for validation, and to compensate for the small sample size we applied five-fold cross-validation. To examine the competitiveness of the proposed model, we also tested several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM. For MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches, because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial software package that provides GA. The other comparative models were implemented using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the comparative models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings, while the values of the finally selected kernel parameters were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
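
A rough sketch of the GA idea described above (jointly searching RBF kernel parameters and a feature-subset mask for a multiclass SVM) is shown below. It is not GAMSVM itself; the dataset, population size, and GA operators are illustrative assumptions.

```python
# Minimal sketch of GA-based joint optimization of SVM kernel parameters
# (C, gamma) and a feature-subset mask, evaluated by cross-validated accuracy.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)       # stand-in multiclass dataset
rng = np.random.default_rng(0)
n_features = X.shape[1]

def random_individual():
    # genome = [log2(C), log2(gamma), feature-mask bits ...]
    return np.concatenate([rng.uniform(-5, 5, 2), rng.integers(0, 2, n_features)])

def fitness(ind):
    mask = ind[2:].astype(bool)
    if not mask.any():
        return 0.0
    svm = SVC(C=2.0 ** ind[0], gamma=2.0 ** ind[1], kernel="rbf")
    return cross_val_score(svm, X[:, mask], y, cv=5).mean()

population = [random_individual() for _ in range(12)]
for _ in range(10):                                            # generations
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:6]                                       # selection
    children = []
    while len(children) < 6:
        a, b = rng.choice(6, size=2, replace=False)
        cut = int(rng.integers(1, 2 + n_features))
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])  # crossover
        if rng.random() < 0.3:                  # mutation: flip one mask bit
            i = int(rng.integers(2, 2 + n_features))
            child[i] = 1.0 - child[i]
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("best CV accuracy:", round(fitness(best), 3))
```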

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Beyond the stakeholders of a bankrupt company, including managers, employees, creditors, and investors, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to reflect diverse interests and to avoid a sudden total collapse such as the 'Lehman Brothers case' of the global financial crisis. The key variables associated with corporate defaults vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) found a similar shift in the importance of predictive variables for Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for this time-dependent bias with a time series analysis algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and of the deep learning time series algorithm is conducted on validation data that includes the financial crisis period (2007~2008). As a result, we construct a model whose behavior is similar to that on the training data and which shows excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model, trained over nine years, is evaluated and compared using the test data (2009), and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies, and multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable group. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data have the limitations of nonlinear variables, multi-collinearity among variables, and lack of data. The logit model handles nonlinearity, the Lasso regression model mitigates the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and, eventually, toward intertwined AI applications. Although research on corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and also shows better predictive power. With the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into everyday life, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative reference material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
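
As a minimal illustration of the deep learning time series approach discussed above, the sketch below trains a small LSTM classifier on a firm's sequence of annual financial ratios; the dimensions and synthetic data are assumptions, not the paper's setup.

```python
# Minimal sketch of an LSTM classifier over sequences of annual financial
# ratios; the number of ratios, years, and the synthetic data are assumptions.
import torch
import torch.nn as nn

class DefaultLSTM(nn.Module):
    def __init__(self, n_ratios=14, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_ratios, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # probability of default

    def forward(self, x):                  # x: (batch, years, n_ratios)
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)

# Synthetic example: 64 firms, 7 years of history, 14 financial ratios each.
x = torch.randn(64, 7, 14)
y = torch.randint(0, 2, (64,)).float()
model = DefaultLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
for _ in range(5):                         # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```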

Management Automation Technique for Maintaining Performance of Machine Learning-Based Power Grid Condition Prediction Model (기계학습 기반 전력망 상태예측 모델 성능 유지관리 자동화 기법)

  • Lee, Haesung;Lee, Byunsung;Moon, Sangun;Kim, Junhyuk;Lee, Heysun
    • KEPCO Journal on Electric Power and Energy / v.6 no.4 / pp.413-418 / 2020
  • To continue using a machine learning-based power grid condition prediction model in the field, its prediction accuracy must be managed so that performance does not degrade through overfitting to the initial training data. In this paper, we propose an automation technique for maintaining model performance that increases the accuracy and reliability of the prediction model by accounting for power grid state data that constantly change due to various factors, and that enables quality maintenance at a level applicable to the field. The proposed technique models the series of tasks required to maintain the performance of the power grid condition prediction model as a workflow, using workflow management technology, and then automates it to make the work more efficient. In addition, the reliability of the performance results is secured by evaluating the prediction model while considering both the degree of change in the statistical characteristics of the data and the level of generalization of the predictions, which has not been attempted in existing techniques. Through this, the accuracy of the prediction model is maintained at a consistent level, and new prediction models with excellent performance can be developed. As a result, the proposed technique not only solves the problem of performance degradation of the prediction model but also improves the field utilization of the condition prediction model in a complex power grid system.
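
One maintenance cycle of the kind described above (checking both the change in the data's statistical characteristics and the model's current accuracy, then deciding whether to retrain) might look like the sketch below. The drift measure, thresholds, and data are assumptions, not the paper's workflow.

```python
# Minimal sketch of one maintenance cycle: detect drift in the input
# statistics and a drop in prediction accuracy, and trigger retraining when
# either threshold is exceeded.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Simple PSI between the training-time and current feature distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e, _ = np.histogram(expected, bins=edges)
    a, _ = np.histogram(actual, bins=edges)
    e = np.clip(e / e.sum(), 1e-6, None)
    a = np.clip(a / a.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

def maintenance_cycle(train_feature, live_feature, live_accuracy,
                      psi_threshold=0.2, accuracy_floor=0.9):
    drifted = population_stability_index(train_feature, live_feature) > psi_threshold
    degraded = live_accuracy < accuracy_floor
    return "retrain" if (drifted or degraded) else "keep"

rng = np.random.default_rng(0)
action = maintenance_cycle(rng.normal(0, 1, 5000), rng.normal(0.5, 1.2, 5000), 0.87)
print(action)  # -> retrain
```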

Development of real-time defect detection technology for water distribution and sewerage networks (시나리오 기반 상·하수도 관로의 실시간 결함검출 기술 개발)

  • Park, Dong Chae;Choi, Young Hwan
    • Journal of Korea Water Resources Association / v.55 no.spc1 / pp.1177-1185 / 2022
  • The water and sewage system is infrastructure that provides safe and clean water to people. Because water and sewage pipelines are buried underground, detecting system defects is very difficult. For this reason, pipeline diagnosis has been limited to after-the-fact detection, such as reviewing pictures and videos taken by cameras and drones inside the pipelines. Real-time defect detection technology for pipelines is therefore required. Recently, pipeline diagnosis technologies using advanced equipment and artificial intelligence techniques have been developed, but AI-based defect detection requires diverse training data because the types and amounts of defect data affect detection performance. In this study, various defect scenarios are therefore implemented using 3D-printed models to improve detection performance for pipeline defects. The collected images are then pre-processed, including classification by degree of risk and object labeling, and real-time defect detection is performed. The proposed technique can provide real-time feedback during the pipeline defect detection process, minimizing the possibility of missed diagnoses and improving existing water and sewer pipe diagnosis capability.
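
A minimal sketch of a real-time defect detection loop in the spirit of the abstract is shown below; the frame source and detector are stand-ins, and the risk threshold is an assumption, not the proposed system.

```python
# Minimal sketch of a real-time defect-detection loop: frames are read from an
# inspection camera stream, passed to a detector, and any detection above a
# risk threshold is reported immediately.
import time
import numpy as np

def read_frame(step):
    """Stand-in for an inspection camera frame (H x W x 3)."""
    return np.random.default_rng(step).integers(0, 255, (480, 640, 3), dtype=np.uint8)

def detect_defects(frame):
    """Stand-in detector: returns (label, risk score in [0, 1]) pairs."""
    return [("crack", 0.91)] if frame.mean() > 127 else []

def monitor(num_frames=10, risk_threshold=0.8):
    for step in range(num_frames):
        frame = read_frame(step)
        for label, risk in detect_defects(frame):
            if risk >= risk_threshold:
                print(f"frame {step}: {label} detected (risk {risk:.2f})")
        time.sleep(0.01)   # simulate the frame interval

monitor()
```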

A Study on Evaluating the Possibility of Monitoring Ships of CAS500-1 Images Based on YOLO Algorithm: A Case Study of a Busan New Port and an Oakland Port in California (YOLO 알고리즘 기반 국토위성영상의 선박 모니터링 가능성 평가 연구: 부산 신항과 캘리포니아 오클랜드항을 대상으로)

  • Park, Sangchul;Park, Yeongbin;Jang, Soyeong;Kim, Tae-Ho
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1463-1478 / 2022
  • Maritime transport accounts for 99.7% of the exports and imports of the Republic of Korea; therefore, developing a vessel monitoring system for efficient operation is of significant interest. Several studies have focused on tracking and monitoring vessel movements based on automatic identification system (AIS) data; however, ships without AIS are difficult to monitor and track. High-resolution optical satellite images can provide the missing layer of information in AIS-based monitoring systems because they can identify non-AIS vessels and small ships over a wide area. Therefore, it is necessary to investigate vessel monitoring and small vessel classification systems using high-resolution optical satellite images. This study examined the possibility of developing ship monitoring systems using Compact Advanced Satellite 500-1 (CAS500-1) satellite images by first training a deep learning model on satellite image data and then performing detection on other images. To determine the effectiveness of the proposed method, training data were acquired from ships in the Yellow Sea and its major ports, and the detection model was built using the You Only Look Once (YOLO) algorithm. Ship detection performance was evaluated for one domestic and one international port. The detection results for ships in anchorage and berth areas were compared with ship classification information obtained from AIS, and accuracies of 85.5% and 70% were achieved for the domestic and international ports, respectively. The results indicate that high-resolution satellite images can be used to monitor moored ships. The developed approach could be used in vessel tracking and monitoring systems at major ports around the world if the accuracy of the detection model is improved through continuous construction of training data.
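
As an illustration of comparing detections with AIS information, the sketch below counts an AIS-reported ship as recovered when a detection lies within a distance tolerance; the tolerance, coordinates, and matching rule are assumptions, not the study's evaluation code.

```python
# Minimal sketch of checking detected ship positions against AIS-reported
# positions: a detection counts as correct if it lies within a distance
# tolerance of an AIS ship, and accuracy is the fraction of AIS ships recovered.
import numpy as np

def match_detections_to_ais(detections, ais_positions, tolerance_m=50.0):
    """detections, ais_positions: (N, 2) arrays of projected x/y in meters."""
    matched = 0
    for ship in ais_positions:
        distances = np.linalg.norm(detections - ship, axis=1)
        if distances.size and distances.min() <= tolerance_m:
            matched += 1
    return matched / len(ais_positions) if len(ais_positions) else 0.0

rng = np.random.default_rng(1)
ais = rng.uniform(0, 1000, (20, 2))                       # 20 AIS ships
dets = ais[:17] + rng.normal(0, 10, (17, 2))              # 17 detected nearby
print(f"accuracy: {match_detections_to_ais(dets, ais):.1%}")  # roughly 85%
```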