• Title/Summary/Keyword: Machine Learning #2


KB-BERT: Training and Application of Korean Pre-trained Language Model in Financial Domain (KB-BERT: 금융 특화 한국어 사전학습 언어모델과 그 응용)

  • Kim, Donggyu;Lee, Dongwook;Park, Jangwon;Oh, Sungwoo;Kwon, Sungjun;Lee, Inyong;Choi, Dongwon
    • Journal of Intelligence and Information Systems, v.28 no.2, pp.191-206, 2022
  • Recently, utilizing a pre-trained language model (PLM) has become the de facto approach to achieving state-of-the-art performance on various natural language tasks (called downstream tasks), such as sentiment analysis and question answering. However, like any other machine learning method, a PLM tends to depend on the data distribution seen during the training phase and performs worse on unseen (out-of-distribution) domains. For this reason, there have been many efforts to develop domain-specific PLMs for fields such as the medical and legal industries. In this paper, we discuss the training of a finance-specific PLM for the Korean language and its applications. Our finance-specific PLM, KB-BERT, is trained on a carefully curated financial corpus that includes domain-specific documents such as financial reports. We provide extensive performance evaluation results on three natural language tasks: topic classification, sentiment analysis, and question answering. Compared to state-of-the-art Korean PLMs such as KoELECTRA and KLUE-RoBERTa, KB-BERT shows comparable performance on general datasets based on common corpora such as Wikipedia and news articles. Moreover, KB-BERT outperforms the compared models on finance-domain datasets that require finance-specific knowledge to solve the given problems.
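BERT-style pre-training of the kind described above rests on masked language modeling: a fraction of input tokens is hidden and the model learns to recover them from context. A minimal sketch of the standard 80/10/10 masking scheme from the original BERT recipe (pure Python; the token strings, masking rate, and vocabulary are illustrative assumptions, not KB-BERT's actual settings):

```python
import random

def mask_tokens(tokens, vocab, mask_rate=0.15, rng=None):
    """BERT-style MLM masking: select ~15% of positions as prediction
    targets; of those, 80% -> [MASK], 10% -> random token, 10% -> unchanged."""
    rng = rng or random.Random(0)
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            labels[i] = tok                    # the model must predict this token
            r = rng.random()
            if r < 0.8:
                inputs[i] = "[MASK]"
            elif r < 0.9:
                inputs[i] = rng.choice(vocab)  # random replacement
            # else: keep the original token (but it is still a target)
    return inputs, labels

# Hypothetical finance-domain sentence; vocab reused as a stand-in
tokens = "the bank reported quarterly earnings growth".split()
masked, labels = mask_tokens(tokens, vocab=tokens, rng=random.Random(7))
```

A domain-specific corpus changes only the text fed into this step; the masking objective itself is unchanged.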

Landslide Susceptibility Mapping Using Deep Neural Network and Convolutional Neural Network (Deep Neural Network와 Convolutional Neural Network 모델을 이용한 산사태 취약성 매핑)

  • Gong, Sung-Hyun;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing, v.38 no.6_2, pp.1723-1735, 2022
  • Landslides are one of the most prevalent natural disasters, threatening both people and property. Landslides can also cause damage at the national level, so effective prediction and prevention are essential. Research on producing highly accurate landslide susceptibility maps is steadily being conducted, and various models have been applied to landslide susceptibility analysis. Pixel-based machine learning models such as frequency ratio models, logistic regression models, ensemble models, and artificial neural networks have mainly been applied. Recent studies have shown that the kernel-based convolutional neural network (CNN) technique is effective and that the spatial characteristics of the input data have a significant effect on the accuracy of landslide susceptibility mapping. For this reason, the purpose of this study is to analyze landslide vulnerability using a pixel-based deep neural network model and a patch-based convolutional neural network model. The study area was set in Gangwon-do, including Inje, Gangneung, and Pyeongchang, where landslides occurred frequently and caused damage. The landslide-related factors used were slope, curvature, stream power index (SPI), topographic wetness index (TWI), topographic position index (TPI), timber diameter, timber age, lithology, land use, soil depth, soil parent material, lineament density, fault density, normalized difference vegetation index (NDVI), and normalized difference water index (NDWI). These factors were built into a spatial database through data preprocessing, and landslide susceptibility maps were predicted using deep neural network (DNN) and CNN models. The models and susceptibility maps were verified using average precision (AP) and root mean square error (RMSE), and the verification showed that the patch-based CNN model performed 3.4% better than the pixel-based DNN model.
The results of this study can be used to predict landslides and are expected to serve as a scientific basis for establishing land use policies and landslide management policies.
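The pixel-based versus patch-based distinction above comes down to what each model sees: the DNN receives one pixel's factor values, while the CNN receives a small neighborhood around each pixel. A hypothetical sketch of extracting fixed-size patches from a factor raster (pure Python nested lists; the window size and zero-padding rule are illustrative assumptions, not the paper's preprocessing):

```python
def extract_patch(raster, row, col, size=3, fill=0.0):
    """Return a size x size neighborhood centered at (row, col),
    padding with `fill` where the window falls outside the raster."""
    half = size // 2
    patch = []
    for r in range(row - half, row + half + 1):
        prow = []
        for c in range(col - half, col + half + 1):
            if 0 <= r < len(raster) and 0 <= c < len(raster[0]):
                prow.append(raster[r][c])
            else:
                prow.append(fill)  # outside the raster: pad
        patch.append(prow)
    return patch

# Toy single-factor raster (e.g., a slope grid)
raster = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]
```

In the pixel-based setting, the feature vector for `(row, col)` would instead be just `raster[row][col]` stacked across all factor layers.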

Estimation for Ground Air Temperature Using GEO-KOMPSAT-2A and Deep Neural Network (심층신경망과 천리안위성 2A호를 활용한 지상기온 추정에 관한 연구)

  • Taeyoon Eom;Kwangnyun Kim;Yonghan Jo;Keunyong Song;Yunjeong Lee;Yun Gon Lee
    • Korean Journal of Remote Sensing, v.39 no.2, pp.207-221, 2023
  • This study proposes deep neural network models for estimating air temperature from the Level 1B (L1B) datasets of GEO-KOMPSAT-2A (GK-2A). The air temperature at 1.5 m above the ground affects not only daily life but also weather warnings such as those for cold and heat waves. Many studies have estimated air temperature from the land surface temperature (LST) retrieved from satellites, because air temperature is strongly related to LST. However, the LST algorithm, a Level 2 product of GK-2A, works only on clear-sky pixels. To overcome cloud effects, we apply a deep neural network (DNN) model that estimates air temperature from the L1B data, which are radiometrically and geometrically calibrated from the raw satellite data, and compare it with a linear regression model between LST and air temperature. The root mean square error (RMSE) of the estimated air temperature is used to evaluate the models. The in-situ air temperature data from 95 stations totaled 2,496,634 records, of which 42.1% could be paired with LST and 98.4% with L1B. Data from 2020 and 2021 were used for training, and 2022 was used for validation. The DNN model has an input layer taking 16 channels and four hidden fully connected layers. Using the 16 bands of L1B, the DNN achieved an RMSE of 2.22℃, outperforming the baseline model's RMSE of 3.55℃ under clear-sky conditions, and the total RMSE including overcast samples was 3.33℃. This suggests that the DNN is able to overcome cloud effects. However, the model showed different characteristics in the seasonal and hourly analyses, and solar information needs to be appended as an input to make the DNN model general, because the summer and winter seasons showed low coefficients of determination with high standard deviations.
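The RMSE metric used above to compare the DNN against the LST-regression baseline is straightforward to compute from paired predictions and observations. A minimal sketch (pure Python; the temperature values are made up for illustration, not the paper's data):

```python
import math

def rmse(predicted, observed):
    """Root mean square error between two paired sequences."""
    assert len(predicted) == len(observed)
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
    )

# Hypothetical station temperatures (deg C) and model estimates
obs      = [15.0, 18.2, 21.5, 9.8]
dnn_pred = [14.1, 18.9, 20.7, 10.5]
dnn_rmse = rmse(dnn_pred, obs)
```

Reporting RMSE separately for clear-sky and overcast samples, as the study does, is just a matter of splitting the pairs before calling the same function.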

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia Pacific Journal of Information Systems, v.19 no.2, pp.139-155, 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have tended to give high returns to investors, generally by making the best use of information technology, and for this reason many of them are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. However, this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting our recent empirical results using financial data of Korean venture companies listed on the KOSDAQ market of the Korea Exchange. In addition, this paper uses a multi-class SVM to predict the DEA-based efficiency ratings of venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are the more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory. Thus far, the method has shown good performance, especially in its generalizing capacity on classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum margin hyperplane, i.e., the hyperplane giving the maximum separation between classes; the support vectors are the data points closest to this hyperplane. If the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space, that is, the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the estimation of credit ratings. In this study we employed SVM to develop a data mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel function of the SVM. For the multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-classification problems such as the efficiency rating of venture businesses, it is very useful for investors to know the predicted class to within a one-class error when it is difficult to determine the exact class in the actual market. We therefore also present accuracy within a one-class error, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, whatever the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance the variable selection process, the parameter selection of the kernel function, the generalization, and the sample size for the multi-class problem.
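The one-against-one scheme mentioned above trains one binary classifier for every pair of classes, k(k-1)/2 in total, and predicts by majority vote over the pairwise winners. A schematic sketch of the voting step (pure Python; the pairwise "classifiers" here are hypothetical threshold functions on a single efficiency score, stand-ins for trained SVMs):

```python
from itertools import combinations

def one_vs_one_predict(x, classes, pairwise):
    """Majority vote over all class pairs.
    `pairwise[(a, b)](x)` must return the winning class, a or b."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[pairwise[(a, b)](x)] += 1
    # break ties by class order for determinism
    return max(classes, key=lambda c: votes[c])

# Toy stand-ins: rate an "efficiency score" x into grades A > B > C
classes = ["A", "B", "C"]
thresholds = {("A", "B"): 0.66, ("A", "C"): 0.5, ("B", "C"): 0.33}
pairwise = {pair: (lambda x, p=pair, t=t: p[0] if x >= t else p[1])
            for pair, t in thresholds.items()}
```

The Weston-Watkins and Crammer-Singer all-together formulations instead solve one joint optimization over all classes, so this voting loop applies only to the one-against-one variant.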

The impact of functional brain change by transcranial direct current stimulation effects concerning circadian rhythm and chronotype (일주기 리듬과 일주기 유형이 경두개 직류전기자극에 의한 뇌기능 변화에 미치는 영향 탐색)

  • Jung, Dawoon;Yoo, Soomin;Lee, Hyunsoo;Han, Sanghoon
    • Korean Journal of Cognitive Science, v.33 no.1, pp.51-75, 2022
  • Transcranial direct current stimulation (tDCS) is a non-invasive brain stimulation technique that can alter neuronal activity in particular brain regions. Many studies have investigated how tDCS modulates neuronal activity and reorganizes neural networks, but it is difficult to draw conclusions about the effect of brain stimulation because the studies are heterogeneous with respect to stimulation parameters as well as individual differences, and the reported effects are not fully consistent. In particular, few studies have so far investigated the sources of time-dependent variability in the response to brain stimulation. This study investigated individual variability in response to brain stimulation based on circadian rhythm and chronotype. Participants were divided into two groups: morning type and evening type. The experiment was conducted over Zoom, a video conferencing program. Participants were sent the experimental tools (a Muse EEG device, a tDCS device, a cell phone, and a cell phone holder) after the equipment manuals were explained. Participants were required to place the phone in front of a camera so that the experimenter could monitor the EEG data online. Two participants who had difficulty using the experimental devices were tested in a laboratory setting where the experimenter set up the devices. Across all participants, an SVM with leave-one-out cross-validation achieved 98% accuracy in classifying the effects of morning versus evening stimulation. For the morning type, accuracies of 92% and 96% were achieved in classifying the morning and evening stimulation, respectively; for the evening type, the corresponding accuracy was 94%. Feature importance differed between the morning- and evening-stimulation classifications for both the morning and evening types.
The results indicate that the effect of brain stimulation can be explained by brain state and trait. They also suggest that tDCS protocols for a target state should account for individual differences as well as the target state itself.
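Leave-one-out cross-validation, as used above, holds out each sample in turn, trains on the remainder, and reports the fraction of held-out samples classified correctly. A minimal sketch with a 1-nearest-neighbor stand-in classifier (pure Python; the study used an SVM on EEG features, so both the classifier and the toy 1-D data here are illustrative substitutions):

```python
def nn1_predict(train_x, train_y, x):
    """1-nearest-neighbor by squared Euclidean distance."""
    dists = [sum((a - b) ** 2 for a, b in zip(tx, x)) for tx in train_x]
    return train_y[dists.index(min(dists))]

def loocv_accuracy(xs, ys, predict=nn1_predict):
    """Leave-one-out: each sample serves as the test set exactly once."""
    hits = 0
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]   # everything except sample i
        train_y = ys[:i] + ys[i + 1:]
        hits += predict(train_x, train_y, xs[i]) == ys[i]
    return hits / len(xs)

# Toy 1-D features for two stimulation conditions
xs = [(0.0,), (0.1,), (0.9,), (1.0,)]
ys = ["morning", "morning", "evening", "evening"]
```

With only a few participants per group, as in this study, leave-one-out makes maximal use of the data at the cost of training one model per sample.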

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems, v.19 no.4, pp.123-132, 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there have been many research efforts to make use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty becomes especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities using only a single sensor's data, i.e., the smartphone accelerometer data. The approach we take to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions.
Depending on how a set of classes is split into two subsets at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than the others; however, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten activity classes we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window of the last 2 seconds. For experiments comparing the performance of END with that of other methods, accelerometer data were collected every 0.1 second for 2 minutes for each activity from 5 volunteers.
Among the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they lack time window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with some other similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
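A nested dichotomy like the one described above is just a binary tree over class sets: each internal node splits its class set in two, and each leaf holds a single class. A compact sketch of building one random dichotomy tree and reading off its leaves (pure Python; the splits here are purely random rather than learned, and no base classifier is attached, so this shows only the tree-construction step of END):

```python
import random

def build_dichotomy(classes, rng):
    """Recursively split a class set into two non-empty subsets
    until each leaf holds exactly one class."""
    if len(classes) == 1:
        return classes[0]                    # leaf: a single class
    shuffled = classes[:]
    rng.shuffle(shuffled)
    cut = rng.randint(1, len(shuffled) - 1)  # non-empty on both sides
    return (build_dichotomy(shuffled[:cut], rng),
            build_dichotomy(shuffled[cut:], rng))

def leaves(tree):
    """Collect the classes at the leaves of a dichotomy tree."""
    if not isinstance(tree, tuple):
        return [tree]
    return leaves(tree[0]) + leaves(tree[1])

activities = ["Sitting", "Standing", "Walking", "Running", "Falling"]
tree = build_dichotomy(activities, random.Random(42))
```

In the full END method, each internal node would carry a binary classifier (here, a random forest) trained to route samples to one of its two class subsets, and several such random trees would be built and their predictions averaged.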

On Using Near-surface Remote Sensing Observation for Evaluation Gross Primary Productivity and Net Ecosystem CO2 Partitioning (근거리 원격탐사 기법을 이용한 총일차생산량 추정 및 순생태계 CO2 교환량 배분의 정확도 평가에 관하여)

  • Park, Juhan;Kang, Minseok;Cho, Sungsik;Sohn, Seungwon;Kim, Jongho;Kim, Su-Jin;Lim, Jong-Hwan;Kang, Mingu;Shim, Kyo-Moon
    • Korean Journal of Agricultural and Forest Meteorology, v.23 no.4, pp.251-267, 2021
  • Remotely sensed vegetation indices (VIs) are empirically related to gross primary productivity (GPP) at various spatio-temporal scales. The uncertainties in the GPP-VI relationship increase with temporal resolution. Uncertainty also exists in the eddy covariance (EC)-based estimation of GPP, arising from the partitioning of the measured net ecosystem CO2 exchange (NEE) into GPP and ecosystem respiration (RE). For two forest and two agricultural sites, we correlated the EC-derived GPP at various time scales with three near-surface remotely sensed VIs: (1) normalized difference vegetation index (NDVI), (2) enhanced vegetation index (EVI), and (3) near-infrared reflectance of vegetation (NIRv), along with NIRvP (i.e., NIRv multiplied by photosynthetically active radiation, PAR). Among the compared VIs, NIRvP showed the highest correlation with half-hourly and monthly GPP at all sites. NIRvP was then used to test the reliability of GPP derived by two different NEE partitioning methods: (1) the original KoFlux method (GPPOri) and (2) a machine learning-based method (GPPANN). GPPANN showed a higher correlation with NIRvP at the half-hourly time scale, but there was no difference at the daily time scale. The NIRvP-GPP correlation was lower under clear-sky conditions due to the co-limitation of GPP by other environmental conditions such as air temperature, vapor pressure deficit, and soil moisture. However, under cloudy conditions, when photosynthesis is mainly limited by radiation, the use of NIRvP was more promising for testing the credibility of NEE partitioning methods. Despite the necessity of further analyses, the results suggest that NIRvP can be used as a proxy for GPP at high temporal scales. However, for VI-based GPP estimation with high temporal resolution to be meaningful, complex systems-based analysis methods (related to systems thinking and self-organization, going beyond the empirical VI-GPP relationship) should be developed.
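The vegetation indices compared above are simple band arithmetic: NDVI = (NIR − Red)/(NIR + Red), NIRv = NDVI × NIR reflectance, and NIRvP = NIRv × PAR. A minimal sketch (pure Python; the reflectance and PAR values are illustrative, not the sites' measurements):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def nirv(nir, red):
    """Near-infrared reflectance of vegetation: NDVI scaled by NIR."""
    return ndvi(nir, red) * nir

def nirvp(nir, red, par):
    """NIRv multiplied by photosynthetically active radiation (PAR)."""
    return nirv(nir, red) * par

# Illustrative reflectances and PAR
nir_refl, red_refl, par = 0.45, 0.05, 1500.0
index_value = nirvp(nir_refl, red_refl, par)
```

Because PAR enters multiplicatively, NIRvP tracks the radiation limitation of photosynthesis that the abstract identifies as dominant under cloudy conditions.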

Prediction of Spring Flowering Timing in Forested Area in 2023 (산림지역에서의 2023년 봄철 꽃나무 개화시기 예측)

  • Jihee Seo;Sukyung Kim;Hyun Seok Kim;Junghwa Chun;Myoungsoo Won;Keunchang Jang
    • Korean Journal of Agricultural and Forest Meteorology, v.25 no.4, pp.427-435, 2023
  • Changes in flowering time due to weather fluctuations impact plant growth and ecosystem dynamics, so accurate prediction of flowering timing is crucial for effective forest ecosystem management. This study uses process-based models to predict the 2023 flowering timing of five major tree species in Korean forests. The models are developed from nine years (2009-2017) of flowering data for Abeliophyllum distichum, Robinia pseudoacacia, Rhododendron schlippenbachii, Rhododendron yedoense f. poukhanense, and Sorbus commixta, distributed across 28 regions of the country, including mountains. Weather data from the Automatic Mountain Meteorology Observation System (AMOS) and the Korea Meteorological Administration (KMA) are used as model inputs. The Single Triangle Degree Days (STDD) and Growing Degree Days (GDD) models, known for their superior performance, are employed to predict flowering dates. Daily temperature readings at a 1 km spatial resolution are obtained by merging the AMOS and KMA data. To improve prediction accuracy nationwide, random forest machine learning is used to generate region-specific correction coefficients. Applying these coefficients yields minimal prediction errors, particularly for Abeliophyllum distichum, Robinia pseudoacacia, and Rhododendron schlippenbachii, with root mean square errors (RMSEs) of 1.2, 0.6, and 1.2 days, respectively. Model performance is evaluated using ten random sampling tests per species, selecting the model with the highest R2. The models with correction coefficients applied achieve R2 values ranging from 0.07 to 0.7, except for Sorbus commixta, and exhibit a final explanatory power of 0.75-0.9. This study provides valuable insights into seasonal changes in plant phenology and aids in identifying honey harvesting seasons affected by abnormal weather, such as those of Robinia pseudoacacia.
Detailed information on flowering timing for various plant species and regions enhances understanding of the climate-plant phenology relationship.
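The GDD model referenced above accumulates daily heat above a base temperature from a fixed start date and predicts flowering on the day the accumulated sum first crosses a species-specific threshold. A schematic sketch (pure Python; the base temperature, threshold, and warming curve are illustrative assumptions, not the study's fitted parameters):

```python
def gdd_flowering_day(daily_mean_temps, base_temp=5.0, threshold=100.0):
    """Return the 1-based day index on which accumulated growing degree
    days first reach `threshold`, or None if it is never reached."""
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        accumulated += max(0.0, temp - base_temp)  # only heat above base counts
        if accumulated >= threshold:
            return day
    return None

# Illustrative spring warming curve: 2.0 C on day 1, rising 0.5 C per day
temps = [2.0 + 0.5 * d for d in range(120)]
predicted_day = gdd_flowering_day(temps)
```

The region-specific correction coefficients in the study would then adjust predictions like `predicted_day` toward locally observed flowering dates.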

Assessment of climate change impact on aquatic ecology health indices in Han river basin using SWAT and random forest (SWAT 및 random forest를 이용한 기후변화에 따른 한강유역의 수생태계 건강성 지수 영향 평가)

  • Woo, So Young;Jung, Chung Gil;Kim, Jin Uk;Kim, Seong Joon
    • Journal of Korea Water Resources Association, v.51 no.10, pp.863-874, 2018
  • The purpose of this study is to evaluate the impact of future climate change on the aquatic ecology health of the Han River watershed ($34,148km^2$) using SWAT (Soil and Water Assessment Tool) and random forest. Eight years (2008~2015) of spring (April to June) Aquatic ecology Health Indices (AHI), namely the Trophic Diatom Index (TDI), Benthic Macroinvertebrate Index (BMI), and Fish Assessment Index (FAI), scored (0~100) and graded (A~E) by NIER (National Institute of Environmental Research), were used. Comparing the 8-year NIER indices with water quality (T-N, $NH_4$, $NO_3$, T-P, $PO_4$) showed that the deviation of the AHI scores was large at low pollutant concentrations and that the scores were negatively correlated with concentration at high concentrations. Using random forest, a machine learning technique for classification analysis, the grades of the three indices were classified with precision, recall, and F1-score all above 0.81. The future SWAT hydrology and water quality results under the HadGEM3-RA RCP 4.5 and 8.5 scenarios of the Korea Meteorological Administration (KMA) showed that future nitrogen-related water quality, averaged over the watershed, increases by up to 43.2% due to the baseflow increase effect, while phosphorus-related water quality decreases by up to 18.9% due to the surface runoff decrease effect. The future FAI and BMI showed slightly better index grades, while the future TDI showed slightly worse grades. We can infer that the future TDI is more sensitive to nitrogen-related water quality, while the future FAI and BMI respond to phosphorus-related water quality.
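The precision, recall, and F1-score thresholds quoted above come directly from per-class true/false positive and negative counts in a one-vs-rest comparison. A minimal sketch (pure Python; the toy labels are illustrative, not the NIER grades):

```python
def precision_recall_f1(y_true, y_pred, positive):
    """One-vs-rest precision, recall, and F1 for a single class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy grade labels for illustration
y_true = ["A", "A", "B", "B", "C"]
y_pred = ["A", "B", "B", "B", "C"]
```

Reporting "all above 0.81" as the study does means every grade's per-class triple cleared that bar.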

A Comparative Study on Mapping and Filtering Radii of Local Climate Zone in Changwon city using WUDAPT Protocol (WUDAPT 절차를 활용한 창원시의 국지기후대 제작과 필터링 반경에 따른 비교 연구)

  • Tae-Gyeong KIM;Kyung-Hun PARK;Bong-Geun SONG;Seoung-Hyeon KIM;Da-Eun JEONG;Geon-Ung PARK
    • Journal of the Korean Association of Geographic Information Studies, v.27 no.2, pp.78-95, 2024
  • For establishing and comparing environmental plans across various domains that consider climate change and urban issues, it is crucial to build spatial data at the regional scale classified with consistent criteria. This study mapped the Local Climate Zones (LCZ) of Changwon City, where active climate and environmental research is being conducted, using the protocol suggested by the World Urban Database and Access Portal Tools (WUDAPT). Additionally, to address the fragmentation issue, in which some grid cells are classified with different climate characteristics despite lying in regions with homogeneous climate traits, a filtering technique was applied, and the LCZ classification characteristics were compared across filtering radii. Using satellite images, ground reference data, and random forest, a supervised machine learning classification technique, classification maps without filtering and with filtering radii of 1, 2, and 3 were produced, and their accuracies were compared. Furthermore, to compare the LCZ classification characteristics by building type in urban areas, an urban form index used in GIS-based classification methodology was created and compared with the ranges suggested in previous studies. As a result, the overall accuracy was highest with a filtering radius of 1. When comparing the urban form index, the differences between LCZ types were minimal, and most satisfied the ranges from previous studies. However, the study identified a limitation in reflecting building height information, and adding data to complement this is expected to yield higher accuracy. The findings of this study can be used as reference material for creating fundamental spatial data for environmental research related to urban climates in South Korea.
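The filtering step described above replaces each grid cell's LCZ class with the majority class within a given radius, smoothing away isolated misclassified cells. A schematic sketch (pure Python; a square window clipped at the grid boundary, with ties broken by the smaller class id, which is an assumption rather than the WUDAPT tool's documented rule):

```python
from collections import Counter

def majority_filter(grid, radius=1):
    """Replace each cell with the most frequent class inside a
    (2*radius+1)^2 window, clipped to the grid boundary."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            window = [grid[rr][cc]
                      for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                      for cc in range(max(0, c - radius), min(cols, c + radius + 1))]
            # highest count wins; among ties, prefer the smaller class id
            best = max(Counter(window).items(), key=lambda kv: (kv[1], -kv[0]))[0]
            row.append(best)
        out.append(row)
    return out

# Toy LCZ grid with one isolated cell to be smoothed away
lcz = [[1, 1, 1],
       [1, 9, 1],
       [1, 1, 1]]
smoothed = majority_filter(lcz, radius=1)
```

Larger radii smooth more aggressively, which is why the study compares radii of 1, 2, and 3 against the unfiltered map: over-smoothing can erase genuinely small but correct LCZ patches.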