• Title/Summary/Keyword: Financial Big Data


A Study on Big Data Based Non-Face-to-Face Identity Proofing Technology (빅데이터 기반 비대면 본인확인 기술에 대한 연구)

  • Jung, Kwansoo;Yeom, Hee Gyun;Choi, Daeseon
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.6 no.10
    • /
    • pp.421-428
    • /
    • 2017
  • The growth of online financial services and the rapid development of financial technology have created a need for diverse approaches to non-face-to-face identification for registering and authenticating users online. In general, non-face-to-face methods are exposed to more threats than face-to-face methods, so identification policies and technologies that verify users through multiple factors and channels are being studied to mitigate these risks and make non-face-to-face identification more reliable. One such approach is to collect and verify a large amount of personal information about the user. We therefore propose a big-data based non-face-to-face identity proofing method that verifies identity online using a large and varied body of user information. The proposed method also provides an identity information management scheme that collects and verifies only the user information required for the identity verification level demanded by the service. In addition, we propose an identity information sharing model that can provide verified identity information to other service providers so that users can reuse it. Finally, we demonstrate the approach by implementing a system that, through enhanced user verification in the non-face-to-face identity proofing process, verifies and manages only the identity assurance level required by the service. A minimal illustrative sketch of such a level-based attribute check appears below.
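The following is a minimal sketch, not the authors' implementation, of the idea of collecting and verifying only the attributes needed for a service's required assurance level. The level names, attribute sets, and data layout are assumptions for illustration.

```python
# Hypothetical sketch: map each service-defined identity assurance level to the
# minimal set of user attributes that must be collected and verified, so no more
# data is gathered than the requested level requires.

REQUIRED_ATTRIBUTES = {            # assumed example levels and attributes
    "LOW":    {"name", "phone_number"},
    "MEDIUM": {"name", "phone_number", "bank_account"},
    "HIGH":   {"name", "phone_number", "bank_account", "national_id", "device_history"},
}

def attributes_to_collect(assurance_level: str) -> set[str]:
    """Return only the attributes needed for the requested assurance level."""
    return REQUIRED_ATTRIBUTES[assurance_level]

def verify_identity(user_data: dict, assurance_level: str) -> bool:
    """Check that every required attribute is present and marked as verified."""
    required = attributes_to_collect(assurance_level)
    return all(user_data.get(attr, {}).get("verified", False) for attr in required)

if __name__ == "__main__":
    user = {
        "name": {"value": "Hong Gil-dong", "verified": True},
        "phone_number": {"value": "010-0000-0000", "verified": True},
    }
    print(verify_identity(user, "LOW"))     # True
    print(verify_identity(user, "MEDIUM"))  # False: bank_account not yet verified
```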

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big-data and artificial-intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various support systems. Nevertheless, there are still few realized business models based on big-data analysis. Against this background, this paper aims to develop a new business model for the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of the predictive models: Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Research on predicting financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it employs five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of the earlier models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) applied multiple discriminant analysis and a logit model; and Kim and Kim (2001) used artificial neural networks for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we examine the predicted probability of default for each sample case rather than only the classification accuracy of each model over the entire sample. Most of the predictive models in this paper reach a classification accuracy of about 70% on the entire sample: LightGBM shows the highest accuracy of 71.1% and the logit model the lowest of 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 error, which causes more harmful operating losses for the guaranty company. We therefore also compare classification accuracy after splitting the predicted probability of default into ten equal intervals.
When classification accuracy is examined for each interval, the logit model has the highest accuracy, 100%, for the 0~10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90~100% interval. Random Forest, XGBoost, LightGBM, and DNN give more desirable results: they achieve higher accuracy in both the 0~10% and 90~100% intervals, though their accuracy drops to around 50% in the middle of the probability range. Regarding the distribution of samples across the predicted probability of default, both LightGBM and XGBoost place relatively many samples in the 0~10% and 90~100% intervals. Although Random Forest has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost may be the more desirable models because they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their somewhat lower classification accuracy there. Considering the importance of type 2 error and total prediction accuracy, XGBoost and DNN show the best performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Each predictive model nonetheless has a comparative advantage under different evaluation standards; for instance, Random Forest shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble that combines multiple classification models and uses majority voting could maximize overall performance. A hedged sketch of this model comparison and decile-based evaluation follows.
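The sketch below is not the authors' code; it illustrates the comparison described above under the assumption of a tabular dataset with a binary `default` label in a hypothetical file `guarantee_data.csv`. Several classifiers are trained on the same data and their accuracy is inspected within deciles of the predicted default probability.

```python
# Compare classifiers overall and within 10% intervals of predicted default probability.
# (A DNN comparator is omitted here for brevity.)
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

df = pd.read_csv("guarantee_data.csv")          # hypothetical internal dataset
X, y = df.drop(columns=["default"]), df["default"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "Logit": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=42),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
    "LightGBM": LGBMClassifier(),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = (proba >= 0.5).astype(int)
    print(f"{name}: overall accuracy = {(pred == y_te).mean():.3f}")
    # Accuracy and sample count within each 10% interval of predicted probability.
    bins = pd.cut(proba, bins=np.linspace(0, 1, 11), include_lowest=True)
    per_bin = (pd.DataFrame({"bin": bins, "correct": pred == y_te.to_numpy()})
                 .groupby("bin", observed=True)["correct"].agg(["mean", "size"]))
    print(per_bin)
```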

A Study on the Priority of 『Personal Information Safety Measure』 Using AHP Method: Focus on the Differences between Financial Company and Consignee (AHP 기법을 이용한 금융회사 『개인정보의 안전성 확보조치 기준』 우선순위에 관한 연구: 금융회사 위·수탁자 간 인식 차이를 중심으로)

  • KIM, Seyoung;KIM, Inseok
    • The Journal of Society for e-Business Studies
    • /
    • v.24 no.4
    • /
    • pp.31-48
    • /
    • 2019
  • To survive the fourth industrial revolution, companies are devoting considerable attention and effort to personalization services based on the latest technologies such as big data, artificial intelligence, and the Internet of Things, while entrusting the handling of personal information to third parties for reasons of work efficiency, expertise, and cost reduction. In this environment, consignors need a more effective and reasonable basis for inspecting trustees to ensure the safety of the personal information those trustees handle. This study used the AHP technique to derive the importance and priority of each item of the 『Personal Information Safety Assurance Measures』 for financial companies and their trustees, and objectively compared and analyzed the differences in perceived importance between the two groups. Based on these results, the study highlights the difference between the self-inspection of financial institutions and the inspection of trustees, and presents policy grounds and implications for applying differentiated inspection standards that reflect weights appropriate to each purpose. A hedged sketch of the AHP weight calculation is shown below.
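The sketch below illustrates the standard AHP priority calculation referenced above, not the study's survey data: priorities come from the principal eigenvector of a pairwise-comparison matrix and are checked with the consistency ratio. The 3×3 matrix is a made-up example.

```python
# AHP priority weights from a pairwise-comparison matrix, with consistency check.
import numpy as np

# Hypothetical pairwise-comparison matrix for three inspection items.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority weights

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)             # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index
cr = ci / ri                                # consistency ratio; < 0.1 is acceptable

print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```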

Control Effect of Financial Retirement Preparation on Retirement and Senior Life Perception, Retirement Preparation Actions, and Retirement Satisfaction Level (은퇴 및 노후생활 인식, 은퇴준비 행동, 은퇴만족도 사이에서 경제적 은퇴준비 상황의 조절효과)

  • Kim, Nam-Won;Jang, Sun-Chul
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.7
    • /
    • pp.522-531
    • /
    • 2016
  • The extension of life expectancy and the retirement of baby boomers beginning in 2010 have produced a large senior population and heightened concerns about retirement. Studies aimed at easing these concerns have found that perceptions of later life and financial preparation behaviors for retirement influence retirement life both directly and indirectly. This research identified the influence of financial retirement preparation on the related variables and analyzed its moderating (control) effect among retirement and senior life perception, retirement preparation actions, and retirement satisfaction. To do so, from November 2015 to February 2016, 1,500 people aged 20 to 69 were surveyed by mail or in person after being informed of the purpose and process of the research; 83 insufficient responses were eliminated and 1,417 surveys were analyzed. The results show that the better the financial retirement preparation, the more positive the perception of retirement and senior life, the retirement preparation actions, and the retirement satisfaction. This research is significant as basic data for developing general and practical retirement preparation programs. A hedged sketch of this kind of moderation analysis follows.
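The following sketch is illustrative only, not the authors' analysis: a moderating effect such as the one described above is commonly tested with an interaction term in a regression model. The file name and column names are hypothetical survey scale scores.

```python
# Moderation test via an interaction term in OLS regression.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("retirement_survey.csv")   # hypothetical file with scale scores

# retirement_satisfaction ~ perception, moderated by financial_preparation:
# the perception:financial_preparation coefficient captures the moderating effect.
model = smf.ols(
    "retirement_satisfaction ~ perception * financial_preparation", data=df
).fit()
print(model.summary())
```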

A Study on the Application Direction of Financial Industry Metaverse Platform to secure MZ Generation Contact Points (MZ 세대 접점 확보를 위한 금융권 메타버스 플랫폼 활용 방향 연구)

  • Ki-Jung Ryu;Ki-Bum Park;Sungwon Cho;Dongho Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.3
    • /
    • pp.127-137
    • /
    • 2023
  • COVID-19 has affected every sector of society, the economy, and politics, and has also had a huge impact on industry. Non-face-to-face exchange became essential to prevent infection, and the generation that experienced it recognizes the importance of a platform that can be accessed quickly anytime and anywhere; attention has therefore focused on the metaverse, which accommodates this well. Each financial company pursues a differentiated metaverse platform strategy, focusing on new customer services and revenue generation, and also uses the platform as an internal and external communication channel. This paper analyzes the theoretical background of the financial sector metaverse along with domestic and international cases, and describes directions for using financial sector metaverse platforms to secure points of contact with the MZ generation.

Volatility for High Frequency Time Series Toward fGARCH(1,1) as a Functional Model

  • Hwang, Sun Young;Yoon, Jae Eun
    • Quantitative Bio-Science
    • /
    • v.37 no.2
    • /
    • pp.73-79
    • /
    • 2018
  • As high-frequency (HF) time series have become prevalent with real-time big data, volatility computations based on traditional ARCH/GARCH models need further development to suit high-frequency characteristics. This article reviews realized volatility (RV) and multivariate GARCH (MGARCH) for high-frequency volatility computation. As functional, infinite-dimensional models, fARCH and fGARCH are introduced to accommodate ultra-high-frequency (UHF) volatilities. The fARCH and fGARCH models were developed in the recent literature by Hormann et al. [1] and Aue et al. [2], respectively, and our discussion is mainly based on these two key articles. Applications to real domestic UHF financial time series are illustrated. A hedged sketch of the baseline GARCH(1,1) and realized-volatility computations appears below.
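The sketch below shows the two baseline computations the review builds on, not the functional fGARCH model itself: a GARCH(1,1) fit using the `arch` package and a daily realized volatility from intraday returns. The price file and 5-minute sampling are assumptions.

```python
# Baseline volatility computations: daily realized volatility and GARCH(1,1).
import numpy as np
import pandas as pd
from arch import arch_model

# Hypothetical 5-minute prices indexed by timestamp.
prices = pd.read_csv("hf_prices.csv", index_col=0, parse_dates=True)["price"]
ret_5min = np.log(prices).diff().dropna()

# Daily realized volatility: square root of the sum of squared intraday returns.
rv_daily = np.sqrt((ret_5min ** 2).groupby(ret_5min.index.date).sum())
print(rv_daily.head())

# GARCH(1,1) on daily returns (scaled by 100 for numerical stability).
ret_daily = ret_5min.groupby(ret_5min.index.date).sum() * 100
res = arch_model(ret_daily, vol="GARCH", p=1, q=1).fit(disp="off")
print(res.summary())
```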

An Analysis of News Report Characteristics on Archives & Records Management for the Press in Korea: Based on 1999~2018 News Big Data (뉴스 빅데이터를 이용한 우리나라 언론의 기록관리 분야 보도 특성 분석: 1999~2018 뉴스를 중심으로)

  • Han, Seunghee
    • Journal of the Korean Society for Information Management
    • /
    • v.35 no.3
    • /
    • pp.41-75
    • /
    • 2018
  • The purpose of this study is to analyze, through time-series analysis, how the Korean media has covered the topic of archives & records management. From January 1999 to June 2018, 4,680 news articles on archives & records management were extracted from BigKinds. To examine the characteristics of coverage of the topic, the study analyzed differences in press coverage by period, subject, and type of media, and conducted word-frequency-based content analysis and semantic network analysis to investigate the content characteristics of that coverage. The results show differences in both the amount and the content of news coverage by period, subject, and type of media. The amount of coverage began to increase after the Presidential Records Management Act was enacted in 2007, with the largest volume of news reported in 2013; daily newspapers and financial newspapers reported the most. For the first ten years after 1999, news topics formed around issues arising from the application and diffusion of the concept of archives & records management, but since the enactment of the Presidential Records Management Act, archives & records management has become a major element of political and social issues, and a large amount of political and social news has been reported. A hedged sketch of the word-frequency and semantic-network steps follows.
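The following is an illustrative sketch, not the study's code, of the two text-mining steps named above: word-frequency counting and a simple co-occurrence ("semantic") network. In practice Korean text would be tokenized with a morphological analyzer; here tokenized articles are assumed as input.

```python
# Word-frequency analysis and a co-occurrence network over tokenized articles.
from collections import Counter
from itertools import combinations
import networkx as nx

articles = [
    ["기록관리", "대통령기록물", "법", "제정"],       # hypothetical tokenized articles
    ["기록관리", "정치", "이슈", "대통령기록물"],
    ["기록관리", "아카이브", "확산"],
]

# 1) Word-frequency analysis.
freq = Counter(token for doc in articles for token in doc)
print(freq.most_common(5))

# 2) Semantic network: nodes are words, edge weights count within-article co-occurrence.
G = nx.Graph()
for doc in articles:
    for w1, w2 in combinations(sorted(set(doc)), 2):
        prev = G.get_edge_data(w1, w2, {"weight": 0})["weight"]
        G.add_edge(w1, w2, weight=prev + 1)

# Most central words by weighted degree.
print(sorted(G.degree(weight="weight"), key=lambda x: -x[1])[:5])
```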

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Beyond the stakeholders of a bankrupt company, including managers, employees, creditors, and investors, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models; as a result, even large corporations, the so-called chaebol enterprises, went bankrupt. Even afterwards, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated only on certain key variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid sudden total collapses such as the Lehman Brothers case of the global financial crisis. The key variables associated with corporate default also vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakins' (1972) study confirms that the major factors affecting corporate failure have changed, and Grice (2001) likewise examined the importance of predictive variables through the models of Zmijewski (1984) and Ohlson (1980). However, past studies have used static models, and most do not consider changes that occur over time. To construct consistent prediction models, it is therefore necessary to compensate for this time-dependent bias with a time-series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To build a bankruptcy model that remains consistent as conditions change, we first train a time-series deep learning model on pre-crisis data (2000~2006). Parameter tuning of the existing models and of the deep learning time-series algorithm is conducted on validation data that includes the financial crisis period (2007~2008); the resulting model shows patterns similar to those on the training data and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008) with the optimal parameters found during validation. Finally, each corporate default prediction model trained on the nine years of data is evaluated and compared on the test data (2009), demonstrating the usefulness of the deep learning time-series approach. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time-series model built on the three resulting variable bundles is useful for robust corporate default prediction. The definition of bankruptcy is the same as in Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies, and multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time-series algorithms is compared. Corporate data suffer from nonlinear variables, multi-collinearity among variables, and a lack of data: the logit model handles nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time-series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis, and ultimately toward intertwined AI applications. Although research on corporate default prediction models using time-series algorithms is still in its early stages, the deep learning algorithm builds default prediction models much faster than regression analysis and is more effective in predictive power. With the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into the everyday life of their nations and societies, yet deep learning time-series research for the financial industry remains insufficient. This is an initial study on deep learning time-series analysis of corporate defaults, and it is hoped that it will serve as comparative reference material for non-specialists starting to combine financial data with deep learning time-series algorithms. A hedged LSTM sketch of this kind of model appears below.
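The sketch below is not the authors' model; it illustrates an LSTM default classifier over sequences of annual financial ratios with a chronological-style split standing in for the 7/2/1-year scheme described above. Array shapes and file names are assumptions.

```python
# LSTM corporate default classifier over panels of annual financial ratios.
import numpy as np
import tensorflow as tf

# X: (companies, years, ratios); y: default flag (0/1). Hypothetical files.
X = np.load("financial_ratios.npy")
y = np.load("default_labels.npy")

# Simple proportional split standing in for the paper's year-based 7/2/1 split.
n = len(X)
X_tr, y_tr = X[: int(0.7 * n)], y[: int(0.7 * n)]
X_va, y_va = X[int(0.7 * n): int(0.9 * n)], y[int(0.7 * n): int(0.9 * n)]
X_te, y_te = X[int(0.9 * n):], y[int(0.9 * n):]

model = tf.keras.Sequential([
    tf.keras.Input(shape=X.shape[1:]),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X_tr, y_tr, validation_data=(X_va, y_va), epochs=20, batch_size=64)
print(model.evaluate(X_te, y_te, return_dict=True))
```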

Factors Affecting Performances in Organizational Dealer Marketing: A Case Study Using BSC in Chinese Cosmetics Market (조직형 대리점마케팅에서 경영성과에 영향을 미치는 요인: BSC를 통한 중국 화장품 시장 사례연구)

  • An, Bongrak;Lee, Saebom;Suh, Yungho
    • Journal of Korean Society for Quality Management
    • /
    • v.46 no.1
    • /
    • pp.153-168
    • /
    • 2018
  • Purpose: The balanced scorecard (BSC) has been adopted to evaluate factors affecting performance in organizational dealer marketing in the Chinese cosmetics market. The four BSC performance perspectives (learning & growth, internal business processes, customer performance, and financial performance) are employed in our empirical study. Methods: We conducted surveys of dealers in a Chinese cosmetics company and used a total of 463 samples for analysis. Confirmatory factor analysis and structural equation model analysis were performed using AMOS 20.0. Results: This study found that internal business process had a positive relation with customer performance and with learning and growth. Customer performance and learning & growth, in turn, positively affected financial performance. Conclusion: This study makes academic and practical contributions in that the revised BSC model reflects the special aspects of the Chinese cosmetics market and can serve as a guide for companies in that market to understand which factors affect performance. A hedged sketch of an equivalent SEM specification appears below.
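The following sketch is illustrative only and is not the authors' AMOS model: it expresses the structural paths described above in lavaan-style syntax with the Python `semopy` package. The observed indicator names (ibp1..fin3) and the survey file are hypothetical.

```python
# SEM sketch of the BSC path model: internal process -> customer & learning -> finance.
import pandas as pd
import semopy

desc = """
IBP      =~ ibp1 + ibp2 + ibp3
Learning =~ lg1 + lg2 + lg3
Customer =~ cus1 + cus2 + cus3
Finance  =~ fin1 + fin2 + fin3
Customer ~ IBP
Learning ~ IBP
Finance  ~ Customer + Learning
"""

data = pd.read_csv("dealer_survey.csv")   # hypothetical item-level responses
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())                    # path estimates and p-values
```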

Trend Analysis of FinTech and Digital Financial Services using Text Mining (텍스트마이닝을 활용한 핀테크 및 디지털 금융 서비스 트렌드 분석)

  • Kim, Do-Hee;Kim, Min-Jeong
    • Journal of Digital Convergence
    • /
    • v.20 no.3
    • /
    • pp.131-143
    • /
    • 2022
  • Focusing on FinTech keywords, this study analyzes newspaper articles and Twitter data with text mining methods to understand trends in the domestic digital financial services industry. Across the FinTech growth lifecycle, frequency analysis was performed around four important milestones: mobile payment services, internet-only banks, the Data 3 Act, and MyData businesses. Frequency analysis combining the keywords 'China', 'USA', and 'Future' with 'FinTech' was then used to assess the current and future position of the FinTech industry. Next, sentiment analysis was conducted on Twitter data to quantify consumers' expectations of and concerns about FinTech services. By combining frequency analysis and sentiment analysis, the study offers a meaningful perspective and presents strategic directions that the government and companies can use to understand the future FinTech market. A hedged sketch of these two steps follows.
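The sketch below is illustrative only, not the study's pipeline: it shows keyword-frequency counts over news text and a naive lexicon-based sentiment score for tweets. The keyword list, sentiment lexicon, and sample texts are assumptions.

```python
# Keyword-frequency analysis of news text and lexicon-based sentiment scoring of tweets.
from collections import Counter

news_articles = ["핀테크 마이데이터 사업 확대 ...", "인터넷전문은행 핀테크 경쟁 ..."]   # hypothetical
tweets = ["핀테크 서비스 정말 편리하다", "핀테크 보안 불안하다"]                         # hypothetical

keywords = ["핀테크", "마이데이터", "인터넷전문은행", "데이터3법"]
freq = Counter()
for article in news_articles:
    for kw in keywords:
        freq[kw] += article.count(kw)
print(freq.most_common())

# Naive sentiment: +1 per positive term found, -1 per negative term found.
positive, negative = {"편리", "기대", "혁신"}, {"불안", "우려", "피해"}
for tweet in tweets:
    score = sum(w in tweet for w in positive) - sum(w in tweet for w in negative)
    print(tweet, "->", score)
```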