• Title/Summary/Keyword: Use Case Modeling


A Method for Evaluating News Value based on Supply and Demand of Information Using Text Analysis (텍스트 분석을 활용한 정보의 수요 공급 기반 뉴스 가치 평가 방안)

  • Lee, Donghoon;Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.45-67
    • /
    • 2016
  • Given the recent development of smart devices, users are producing, sharing, and acquiring a variety of information via the Internet and social network services (SNSs). Because users tend to use multiple media simultaneously according to their goals and preferences, domestic SNS users use around 2.09 media concurrently on average. Since the information provided by such media is usually textually represented, recent studies have been actively conducting textual analysis in order to understand users more deeply. Earlier studies using textual analysis focused on analyzing a document's contents without substantive consideration of the diverse characteristics of the source medium. However, current studies argue that analytical and interpretive approaches should be applied differently according to the characteristics of a document's source. Documents can be classified into the following types: informative documents for delivering information, expressive documents for expressing emotions and aesthetics, operational documents for inducing the recipient's behavior, and audiovisual media documents for supplementing the above three functions through images and music. Further, documents can be classified according to their contents, which comprise facts, concepts, procedures, principles, rules, stories, opinions, and descriptions. Documents have unique characteristics according to the source media by which they are distributed. In terms of newspapers, only highly trained people tend to write articles for public dissemination. In contrast, with SNSs, various types of users can freely write any message and such messages are distributed in an unpredictable way. Again, in the case of newspapers, each article exists independently and does not tend to have any relation to other articles. However, messages (original tweets) on Twitter, for example, are highly organized and regularly duplicated and repeated through replies and retweets. 
There have been many studies focusing on the different characteristics between newspapers and SNSs. However, it is difficult to find a study that focuses on the difference between the two media from the perspective of supply and demand. We can regard the articles of newspapers as a kind of information supply, whereas messages on various SNSs represent a demand for information. By investigating traditional newspapers and SNSs from the perspective of supply and demand of information, we can explore and explain the information dilemma more clearly. For example, there may be superfluous issues that are heavily reported in newspaper articles despite the fact that users seldom have much interest in these issues. Such overproduced information is not only a waste of media resources but also makes it difficult to find valuable, in-demand information. Further, some issues that are covered by only a few newspapers may be of high interest to SNS users. To alleviate the deleterious effects of information asymmetries, it is necessary to analyze the supply and demand of each information source and, accordingly, provide information flexibly. Such an approach would allow the value of information to be explored and approximated on the basis of the supply-demand balance. Conceptually, this is very similar to the price of goods or services being determined by the supply-demand relationship. Adopting this concept, media companies could focus on the production of highly in-demand issues that are in short supply. In this study, we selected Internet news sites and Twitter as representative media for investigating information supply and demand, respectively. We present the notion of News Value Index (NVI), which evaluates the value of news information in terms of the magnitude of Twitter messages associated with it. In addition, we visualize the change of information value over time using the NVI. We conducted an analysis using 387,014 news articles and 31,674,795 Twitter messages. 
The analysis results revealed an interesting pattern: most issues show a lower NVI than the average across all issues, whereas a few issues show a steadily higher NVI than the average.
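The abstract does not spell out the NVI formula. A minimal sketch of the supply-demand idea, under the (hypothetical) assumption that an issue's NVI is its share of demand (Twitter messages) divided by its share of supply (news articles):

```python
# Sketch of a supply-demand News Value Index (NVI). The exact formula is
# not given in the abstract; here NVI(issue) is assumed to be the issue's
# demand share (tweets) divided by its supply share (articles).
def news_value_index(article_counts, tweet_counts):
    total_articles = sum(article_counts.values())
    total_tweets = sum(tweet_counts.values())
    nvi = {}
    for issue, n_articles in article_counts.items():
        supply_share = n_articles / total_articles
        demand_share = tweet_counts.get(issue, 0) / total_tweets
        nvi[issue] = demand_share / supply_share
    return nvi

# Hypothetical counts: "festival" is lightly covered but heavily tweeted.
articles = {"election": 500, "festival": 50}
tweets = {"election": 4000, "festival": 6000}
scores = news_value_index(articles, tweets)
```

Under this assumption, an issue that is heavily tweeted but rarely reported scores above 1, matching the paper's notion of under-supplied, in-demand information.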

A Study on Greenspace Planning Strategies for Thermal Comfort and Energy Savings (열쾌적성과 에너지절약을 위한 녹지계획 전략 연구)

  • Jo, Hyun-Kil;Ahn, Tae-Won
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.38 no.3
    • /
    • pp.23-32
    • /
    • 2010
  • The purpose of this study is to quantify human energy budgets for different structures of outdoor spatial surfaces affecting thermal comfort, to analyze the impacts of tree shading on building energy savings, and to suggest desirable urban greenspace planning strategies. Concrete paving and grass spaces without tree shading, and compacted-sand spaces with tree shading, were selected to reflect archetypal compositional types of outdoor spatial materials. The study then estimated human energy budgets during static activity for the three space types. The major determinants of the energy budgets were the presence of shading and the albedo and temperature of the base surfaces. The energy budgets for concrete paving and grass spaces without tree shading were $284\;W/m^2$ and $226\;W/m^2$, respectively, and both space types provided considerably poor thermal comfort. It is therefore desirable to construct outdoor resting spaces with evapotranspirational shade trees and natural materials for the base plane. Building energy savings from tree shading for the case of Daegu in the southern region were quantified using computer modeling programs and compared with a previous study of Chuncheon in the middle region. Shade trees planted to the west of a building were most effective for annual savings of heating and cooling energy. Planting shade trees to the south should be avoided, because in both climate regions they increased heating energy use while yielding only low cooling energy savings. A large shade tree to the west or east saved 1~2% of cooling energy across building types and regions. Based on previous studies and these results, strategies including indicators for urban greenspace planning were suggested to improve the thermal comfort of outdoor spaces and to save energy in indoor spaces.
These included thermal comfort in construction materials for outdoor spaces, building energy savings through shading, evapotranspiration and windspeed mitigation by greenspaces, and greenspace areas and volume for air-temperature reductions. In addition, this study explored the application of the strategies to greenspace-related regulations to ensure their effectiveness.

A Case Study on Students' Mathematical Concepts of Algebra, Connections and Attitudes toward Mathematics in a CAS Environment (CAS 그래핑 계산기를 활용한 수학 수업에 관한 사례 연구)

  • Park, Hui-Jeong;Kim, Kyung-Mi;Whang, Woo-Hyung
    • Communications of Mathematical Education
    • /
    • v.25 no.2
    • /
    • pp.403-430
    • /
    • 2011
  • The purpose of the study was to investigate how the use of graphing calculators influences the formation of students' mathematical concepts of algebra, their mathematical connections, and their attitudes toward mathematics. First, graphing calculators give instant feedback, letting students compare their written answers with the calculator's results, which helps them learn equations and linear inequalities on their own. With quadratic inequalities, the calculators helped students correct wrong concepts and understand fundamental ones, and with functions, students could draw graphs more easily, so the difficulty of drawing graphs could no longer hinder their learning of functions. Moreover, students could understand functions intuitively by using graphing calculators and explored math problems voluntarily. As a result, students grasped the function concepts they had considered difficult more quickly and retained those concepts longer. Second, most students initially could not think of the connections among equations, inequalities, and functions, but with the calculators they came to understand those connections more easily. Additionally, students could focus on translating real-life situations into algebraic expressions through modeling without the fear of calculating, which relieved the burden of computation and helped them realize the usefulness of mathematics through solving real-life problems. Third, we identified changes in six students' attitudes through a preliminary and an ex post facto attitude test. Five of the six students came to have a positive attitude toward mathematics, while one came to have a negative attitude. However, all of the students showed a positive attitude toward using graphing calculators in math class.
This is because the visualization capabilities of the graphing calculators helped them understand difficult algebraic concepts, which increased their interest in mathematics and gave them a sense of achievement. Students could also relieve the burden of calculating and gain confidence. In conclusion, using graphing calculators in algebra and function classes has many advantages: forming mathematical concepts, making mathematical connections, and enhancing positive attitudes toward mathematics. Therefore, we need more research on the effects of using calculators, as well as practical classroom materials, instructional models, and assessment tools for graphing calculators. Lastly, we need to make the classroom environment more adequate for using graphing calculators in math classes.

A Study on the Black Box Design using Collective Intelligence Analysis (집단지성 분석법을 활용한 블랙박스 디자인 개발 연구)

  • Lee, Hee young;Hong, Jeong Pyo;Cho, Kwang Soo
    • Science of Emotion and Sensibility
    • /
    • v.21 no.2
    • /
    • pp.101-112
    • /
    • 2018
  • This study was carried out to enhance the competitiveness of blackbox design for domestic and international companies, against the background of the explosive growth of the blackbox market driven by demand for vehicle accident prevention and post-accident handling. In the past, the blackbox market produced products indiscriminately to meet the ever-increasing demands of consumers. We therefore considered a new design method necessary to effectively investigate the needs of rapidly changing consumers. In this study, we aimed to identify the best-selling blackboxes to understand the design flow, and the optimum mounting area for a blackbox, considering the uniqueness of the associated vehicle. Based on discussions with blackbox design experts, we studied the direction of the design and the problems with blackbox use, which were reflected in blackbox development. Through this research, two types of design - a leading blackbox (A type) and a mass-production blackbox (B type) - were proposed for compatibility of the blackbox with the car. The leading blackbox was positioned so that it wrapped around the room-mirror hinge before the screw was fastened, in order to achieve an integrated design. We thereby designed an integrated form and resolved the placement problem of an adhesive blackbox. To blend in with the car, the mass-production blackbox used the same materials and surface processing as the car, and adopted a slide structure that automatically turns off the main body power when removing the SD card, reflecting consumer needs. This study considers evolving consumer needs through a case study and collective intelligence, and deals with implementation of the whole design process through mass production. In this study, we aimed to strengthen the competitiveness of blackbox design based on this design method and its realization.

A Study on the Performance Evaluation of G2B Procurement Process Innovation by Using MAS: Korea G2B KONEPS Case (멀티에이전트시스템(MAS)을 이용한 G2B 조달 프로세스 혁신의 효과평가에 관한 연구 : 나라장터 G2B사례)

  • Seo, Won-Jun;Lee, Dae-Cheor;Lim, Gyoo-Gun
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.157-175
    • /
    • 2012
  • It is difficult to evaluate the performance of process innovation in e-procurement, which has large-scale and complex processes. Existing evaluation methods for measuring the effects of process innovation have mainly been quantitative, statistically analyzing operational data, or qualitative, relying on surveys and interviews. However, these methods have limitations, because the performance evaluation of e-procurement process innovation should consider the interactions among participants who are active either directly or indirectly throughout the processes. This study considers the e-procurement process as a complex system and develops a simulation model based on MAS (Multi-Agent System) to evaluate the effects of e-procurement process innovation. Multi-agent-based simulation allows observing the interaction patterns of objects in a virtual world through the relationships among objects and their behavioral mechanisms, and is especially suitable for complex business problems. In this study, we used NetLogo version 4.1.3, a MAS simulation tool developed at Northwestern University. To do this, we developed an interaction model of agents in a MAS environment: we defined process agents and task agents, and assigned their behavioral characteristics. The developed simulation model was applied to the G2B system (KONEPS: Korea ON-line E-Procurement System) of the Public Procurement Service (PPS) in Korea and used to evaluate the innovation effects of the G2B system. KONEPS is a successfully established e-procurement system launched in 2002, and a representative e-procurement system that integrates the characteristics of e-commerce into government business procurement activities.
KONEPS deserves international recognition, considering its annual transaction volume of 56 billion dollars, its daily exchanges of electronic documents, a user base of 121,000 suppliers and 37,000 public organizations, and cost savings of 4.5 billion dollars. For the simulation, we decomposed the e-procurement process of KONEPS into eight sub-processes: 'process 1: search products and acquisition of proposal', 'process 2: review the methods of contracts and item features', 'process 3: notice of bid', 'process 4: registration and confirmation of qualification', 'process 5: bidding', 'process 6: screening test', 'process 7: contracts', and 'process 8: invoice and payment'. For the parameter settings of the agents' behavior, we collected data from the transactional database of PPS and additional information through a survey. The data used for the simulation were the participants (government organizations, local government organizations, and public institutions), the number of biddings per year, the number of total contracts, the number of shopping mall transactions, the ratio of contracts between bidding and shopping mall, the successful bidding ratio, and the estimated time for each process. The comparison examined the difference in time consumption between 'before the innovation (As-was)' and 'after the innovation (As-is)'. The results showed productivity improvements in all eight sub-processes. When using the G2B system, compared to the conventional method, the decrease ratio of the average number of task processings was 92.7% and the decrease ratio of the average task processing time was 95.4% across the entire process. This study also found that the process innovation effect would be further enhanced if the task process related to 'contracts' were improved. This study shows the usability and possibility of using MAS in process innovation evaluation and its modeling.
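The decrease-ratio metric can be illustrated with a small sketch; the per-subprocess timings below are hypothetical stand-ins for the values the authors estimated from the PPS database and survey:

```python
# Hypothetical per-subprocess processing times (hours) before (As-was)
# and after (As-is) the innovation; the paper's actual parameters were
# estimated from the PPS transaction database and a survey.
as_was = {"notice_of_bid": 48.0, "bidding": 24.0, "contract": 72.0}
as_is = {"notice_of_bid": 2.0, "bidding": 1.0, "contract": 4.0}

def decrease_ratio(before, after):
    """Percentage decrease of total processing time across sub-processes."""
    total_before = sum(before.values())
    total_after = sum(after.values())
    return 100.0 * (total_before - total_after) / total_before

ratio = decrease_ratio(as_was, as_is)  # overall time decrease, in percent
```

With these made-up numbers the overall decrease is about 95%, on the same order as the 95.4% reported in the abstract; the real evaluation runs the agent model rather than summing fixed times.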

Knowledge graph-based knowledge map for efficient expression and inference of associated knowledge (연관지식의 효율적인 표현 및 추론이 가능한 지식그래프 기반 지식지도)

  • Yoo, Keedong
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.49-71
    • /
    • 2021
  • Users who intend to utilize knowledge to actively solve given problems proceed by cross-sectional and sequential exploration of associated knowledge related to each other in terms of certain criteria, such as content relevance. A knowledge map is a diagram or taxonomy giving an overview of the knowledge currently managed in a knowledge base, and it supports users' knowledge exploration based on certain relationships between pieces of knowledge. A knowledge map, therefore, must be expressed in a networked form by linking related knowledge based on certain types of relationships, and should be implemented by deploying proper technologies or tools specialized in defining and inferring them. To this end, this study suggests a methodology for developing a knowledge graph-based knowledge map using a graph DB, which is known to be well suited to expressing and inferring the relationships between entities stored in a knowledge base. The procedures of the proposed methodology are modeling graph data; creating nodes, properties, and relationships; and composing knowledge networks by combining the identified links between knowledge. Among the various graph DBs, Neo4j is used in this study for its high credibility and applicability, demonstrated through wide and various application cases. To examine the validity of the proposed methodology, a knowledge graph-based knowledge map is implemented with the graph DB, and a performance comparison test is performed by applying the previous research's data, to check whether this study's knowledge map yields the same level of performance as the previous one did. The previous research concerned building a process-based knowledge map using ontology technology, which identifies links between related knowledge based on the sequences of tasks producing knowledge or being activated by it.
In other words, since a task not only is activated by knowledge as an input but also produces knowledge as an output, input and output knowledge are linked as a flow by the task. Also, since a business process is composed of affiliated tasks that fulfill the purpose of the process, the knowledge networks within a business process can be derived from the sequences of the tasks composing the process. Therefore, using Neo4j, the processes, tasks, and knowledge under consideration, as well as the relationships among them, are defined as nodes and relationships so that knowledge links can be identified based on the task sequences. The knowledge network resulting from aggregating the identified knowledge links is the knowledge map, equipped with knowledge-graph functionality, and its performance therefore needs to be tested to check whether it meets the level of the previous research's validation results. The performance test examines two aspects, the correctness of the knowledge links and the possibility of inferring new types of knowledge: the former is examined using 7 questions, and the latter is checked by extracting two new types of knowledge. As a result, the knowledge map constructed through the proposed methodology showed the same level of performance as the previous one, and processed knowledge definition as well as knowledge-relationship inference in a more efficient manner. Furthermore, compared to the previous research's ontology-based approach, this study's graph DB-based approach showed more beneficial functionality in intensively managing only the knowledge of interest, dynamically defining knowledge and relationships by reflecting various meanings from situations to purposes, agilely inferring knowledge and relationships through Cypher-based queries, and easily creating new relationships by aggregating existing ones.
This study's artifacts can be applied to implement a user-friendly knowledge exploration function reflecting the user's cognitive process toward associated knowledge, and can further underpin the development of an intelligent knowledge base that expands autonomously through the inference-driven discovery of new knowledge and relationships. Moreover, this study has an immediate effect on implementing the networked knowledge map essential for contemporary users eager to find the proper knowledge to use.
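The task-sequence-based link inference described above can be sketched in plain Python (the paper itself uses Neo4j and Cypher; the task and knowledge names here are hypothetical):

```python
# Sketch of knowledge-link inference from task sequences: a task is
# activated by input knowledge and produces output knowledge, so input
# and output knowledge are linked by that task. The paper does this with
# Neo4j/Cypher; plain Python is used here, and all names are hypothetical.
tasks = [
    # (task, input knowledge, output knowledge)
    ("receive_order", "customer_spec", "order_record"),
    ("plan_production", "order_record", "production_plan"),
    ("schedule_line", "production_plan", "line_schedule"),
]

def knowledge_links(task_list):
    """Directed knowledge->knowledge edges implied by the tasks."""
    return [(k_in, k_out, task) for task, k_in, k_out in task_list]

def derivable(links, start):
    """Transitively infer all knowledge reachable from `start`."""
    found, frontier = set(), {start}
    while frontier:
        node = frontier.pop()
        for src, dst, _ in links:
            if src == node and dst not in found:
                found.add(dst)
                frontier.add(dst)
    return found

links = knowledge_links(tasks)
derived = derivable(links, "customer_spec")
```

The transitive step mirrors what a Cypher variable-length path query would return in the graph DB version: all knowledge downstream of a given item along the task flow.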

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues, such as unemployment, economic crises, and social welfare, that urgently need to be solved in modern society, the existing approach is for researchers to collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Due to the expense, a large number of survey replies is seldom gathered, and in some cases it is hard to find experts dealing with specific social issues. Thus, the sample set is often small and may carry some bias. Furthermore, regarding a given social issue, several experts may reach totally different conclusions because each expert has his or her own subjective point of view and background. In this case, it is considerably hard to figure out what the current social issues are and which of them are really important. To surmount the shortcomings of the current approach, in this paper we develop a prototype system that semi-automatically detects social issue keywords, representing social issues and problems, from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models. The goal of our proposed matching algorithm is to best match paragraphs to each topic.
Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". Even with this label, it is non-trivial to understand what happened to the unemployment problem in our society. In other words, looking only at social keywords, we have no idea of the detailed events occurring in our society. To tackle this matter, we develop a matching algorithm that computes the probability value of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs. In the meantime, using LDA, we extract a set of topics from the documents. Through our matching process, each paragraph is assigned to the topic it best matches, so that each topic ends up with several best-matched paragraphs. Furthermore, suppose there are a topic (e.g., Unemployment Problem) and its best-matched paragraph (e.g., "Up to 300 workers lost their jobs at XXX company in Seoul"). In this case, we can grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly.
Through this prototype system, we have detected various social issues appearing in our society and demonstrated the effectiveness of our proposed methods through experimental results. Note that our proof-of-concept system is also available at http://dslab.snu.ac.kr/demo.html.
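A minimal sketch of the paragraph-to-topic matching idea, scoring a paragraph by the smoothed log-probabilities of its tokens under each LDA topic; Topic1 follows the abstract's running example, while the second topic, the paragraph, and the scoring details are hypothetical simplifications rather than the paper's exact generative model:

```python
import math

# Score a paragraph under a topic by summing log-probabilities of its
# tokens in the topic's term distribution, with a small smoothing value
# for terms the topic does not contain. This is an illustration of the
# matching idea, not the paper's exact model.
def match_score(paragraph_tokens, topic_terms, eps=1e-6):
    return sum(math.log(topic_terms.get(tok, eps)) for tok in paragraph_tokens)

def best_topic(paragraph_tokens, topics):
    """Assign the paragraph to its best-matching topic label."""
    return max(topics, key=lambda label: match_score(paragraph_tokens, topics[label]))

# Topic1 is the abstract's example; the "Welfare" topic is hypothetical.
topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Welfare": {"welfare": 0.5, "pension": 0.3, "subsidy": 0.2},
}
paragraph = ["workers", "layoff", "unemployment"]
label = best_topic(paragraph, topics)
```

Each paragraph thus lands on the topic whose term distribution explains its tokens best, which is how best-matched paragraphs accumulate under each social keyword.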

Flow Resistance and Modeling Rule of Fishing Nets -1. Analysis of Flow Resistance and Its Examination by Data on Plane Nettings- (그물어구의 유수저항과 근형수칙 -1. 유수저항의 해석 및 평면 그물감의 자료에 의한 검토-)

  • KIM Dae-An
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.28 no.2
    • /
    • pp.183-193
    • /
    • 1995
  • Assuming that fishing nets are porous structures that suck water into their mouth and then filter the water out of them, the flow resistance R of nets with wall area S under the velocity v was taken as $R=kSv^2$, and the coefficient k was derived as $$k=c\,Re^{-m}\left(\frac{S_n}{S_m}\right)^{n}\left(\frac{S_m}{S}\right)$$ where $Re$ is the Reynolds number, $S_m$ the area of the net mouth, and $S_n$ the total area of the net projected onto the plane perpendicular to the water flow. The propriety of the above equation and the values of c, m, and n were then investigated using the experimental results on plane nettings carried out hitherto. The values of c and m were fixed at $240\,(kg\cdot sec^2/m^4)$ and 0.1, respectively, when the representative size in $Re$ was taken as the ratio $\lambda$ of the volume of the bars to the area of the meshes, i.e., $$\lambda=\frac{\pi d^2}{2l\,\sin 2\varphi}$$ where d is the diameter of the bars, 2l the mesh size, and $2\varphi$ the angle between two adjacent bars. The value of n was larger than 1.0, namely 1.2, because the wakes occurring at the knots and bars increase the resistance by obstructing the filtration of water through the meshes. In the case in which the influence of $Re$ was negligible, the value of $c\,Re^{-m}$ became a constant distinguished by the range of the attack angle $\theta$ of the netting to the water flow, i.e., $100\,(kg\cdot sec^2/m^4)$ for $45^{\circ}<\theta\leq90^{\circ}$ and $100(S_m/S)^{0.6}\,(kg\cdot sec^2/m^4)$ for $0^{\circ}<\theta\leq45^{\circ}$.
Thus, the coefficient $k\,(kg\cdot sec^2/m^4)$ of plane nettings could be obtained by utilizing the above values with $S_m$ and $S_n$ given respectively by $$S_m=S\,\sin\theta$$ and $$S_n=\frac{d}{l}\cdot\frac{\sqrt{1-\cos^2\varphi\,\cos^2\theta}}{\sin\varphi\,\cos\varphi}\cdot S$$ On the occasion of $\theta=0^{\circ}$, however, k was decided by the roughness of the netting surface and so expressed as $$k=9\left(\frac{d}{l\,\cos\varphi}\right)^{0.8}$$ In these results, however, the values of c and m were regarded as not sufficiently exact because they were obtained from insufficient data, and actual nets have no use for k at $\theta=0^{\circ}$. Therefore, in the case of negligible influence of $Re$, the coefficient $k\,(kg\cdot sec^2/m^4)$ for actual nets can be expressed exactly as $$k=100\left(\frac{S_n}{S_m}\right)^{1.2}\left(\frac{S_m}{S}\right)\quad\text{for}\;45^{\circ}<\theta\leq90^{\circ}$$ and $$k=100\left(\frac{S_n}{S_m}\right)^{1.2}\left(\frac{S_m}{S}\right)^{1.6}\quad\text{for}\;0^{\circ}<\theta\leq45^{\circ}$$
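The final expressions for k and $R=kSv^2$ can be turned into a small calculator. The symbol interpretation (l as bar length, so the mesh size is 2l) is reconstructed from the abstract, and the numeric inputs are hypothetical:

```python
import math

def resistance_coefficient(d, l, phi, theta, S):
    """Coefficient k (kg*sec^2/m^4) of a plane netting, Re influence
    neglected, per the final expressions above. Symbols (reconstructed):
    d bar diameter, l bar length (mesh size = 2l), phi half the angle
    between adjacent bars, theta attack angle, S netting wall area."""
    S_m = S * math.sin(theta)                       # area of the net mouth
    S_n = (d / l) * math.sqrt(1.0 - math.cos(phi) ** 2 * math.cos(theta) ** 2) \
          / (math.sin(phi) * math.cos(phi)) * S     # projected net area
    if theta > math.pi / 4:                         # 45 deg < theta <= 90 deg
        return 100.0 * (S_n / S_m) ** 1.2 * (S_m / S)
    return 100.0 * (S_n / S_m) ** 1.2 * (S_m / S) ** 1.6

def flow_resistance(k, S, v):
    """R = k * S * v^2."""
    return k * S * v ** 2

# Hypothetical netting: d = 2 mm, l = 20 mm, 2*phi = 90 deg, flow normal
# to the net (theta = 90 deg), unit wall area.
k = resistance_coefficient(d=0.002, l=0.02, phi=math.pi / 4, theta=math.pi / 2, S=1.0)
R = flow_resistance(k, S=1.0, v=2.0)
```

At normal incidence $S_m = S$, so only the ratio $S_n/S_m$, i.e. the solidity of the netting, drives k in this sketch.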


Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • In addition to the stakeholders of bankrupt companies, including managers, employees, creditors, and investors, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing various corporate default models. As a result, even large corporations known as 'chaebol enterprises' went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured immediately after the global financial crisis, it focused only on certain main variables such as the 'debt ratio'. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, a total collapse in a single moment. The key variables relevant to corporate defaults vary over time. This is confirmed by comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study, which shows that the major factors affecting corporate failure have changed. In Grice's (2001) study, the changing importance of predictive variables was also found through Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most of them do not consider the changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series analysis algorithm reflecting dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively.
In order to construct a bankruptcy model consistent over time, we first train a time series deep learning model using the data before the financial crisis (2000~2006). The parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data covering the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and exhibits excellent predictive power. After that, each bankruptcy prediction model is retrained by integrating the training and validation data (2000~2008), applying the optimal parameters found in validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over the nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. In the case of corporate data, there are limitations of nonlinear variables, multi-collinearity among variables, and lack of data.
The logit model addresses nonlinearity, the Lasso regression model solves the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis, and finally toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and is more effective in predictive power. Through the Fourth Industrial Revolution, the current government and other governments overseas are working hard to integrate such systems into the everyday life of their nations and societies. Yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults; it is therefore hoped that it will serve as comparative material for non-specialists who begin to combine financial data with deep learning time series algorithms.
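The 7/2/1-year chronological split described above can be sketched as follows; the year ranges follow the abstract, while the data representation is a hypothetical stand-in for the firm-level panels:

```python
# Sketch of the chronological 7/2/1-year split: train on pre-crisis
# years, validate across the crisis, test on the final year. The yearly
# records here are hypothetical placeholders for firm-level data.
def chronological_split(data_by_year, train_years, valid_years, test_years):
    """Split yearly panels without shuffling, preserving time order."""
    def pick(years):
        return [row for y in years for row in data_by_year.get(y, [])]
    return pick(train_years), pick(valid_years), pick(test_years)

data = {year: [f"firm_record_{year}"] for year in range(2000, 2010)}
train, valid, test = chronological_split(
    data,
    train_years=range(2000, 2007),  # 2000-2006: 7 years, pre-crisis
    valid_years=range(2007, 2009),  # 2007-2008: 2 years, crisis period
    test_years=range(2009, 2010),   # 2009: 1 year, held-out evaluation
)
```

Keeping the split strictly chronological, rather than shuffling, is what lets the validation set capture the crisis-period regime shift the study is designed around.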

An Exploratory Study on the Effects of Relational Benefits and Brand Identity : mediating effect of brand identity (관계혜택과 브랜드 동일시의 역할에 관한 탐색적 연구: 브랜드 동일시의 매개역할을 중심으로)

  • Bang, Jounghae;Jung, Jiyeon;Lee, Eunhyung;Kang, Hyunmo
    • Asia Marketing Journal
    • /
    • v.12 no.2
    • /
    • pp.155-175
    • /
    • 2010
Most service industries, including finance and telecommunications, have matured and become saturated. Competition has intensified while the differences among brands have grown smaller, so maintaining good relationships with customers has become critical for service providers. Credit and debit cards show similar patterns. It is important for card issuers to maintain good customer relationships, and they have therefore run marketing programs that provide customized services and make use of membership programs. They not only build and maintain good relationships but also highlight their brands from an emotional angle. For example, KB Card and Hyundai Card use well-known designers' work for their credit card designs, differentiating the designs to emphasize their brand personalities, and BC Card introduced a credit card scented with a perfume of the customer's choice. Even though a credit card is small and rarely shown in public, it has become more important for these companies to touch customers' feelings through brand personality and imagery. This is partly due to changes in consumers' lifestyles: Generation Y is more emotional and more inclined to express itself in varied ways than Generation X, so for these consumers even the credit cards in their wallets should be personalized and well designed. In this light, a well-designed credit card can be seen as an expression of brand identity, where a distinct design for each customer signals the membership group that the customer wants to belong to. These credit card companies also offer special treatment benefits to heavy users; for example, customers who love sports receive special discounts when they use their cards for sports-related products.
This study therefore explored the relationships among relational benefits, brand identification, and loyalty. Many studies have shown independently that relational benefits and brand identification each lead to loyalty, but few have examined all three variables together in one research model. Furthermore, as reviewed above, many companies in the card industry try to associate their brand image with products that fit their customers' lifestyles, while relational benefits still play an important role in their business. Our research model therefore includes relational benefits, brand identification, and loyalty, with a focus on the mediating effect of brand identification. From the relational benefits perspective, only special treatment benefit and confidence benefit are included; social benefit is not applicable to the credit card industry because face-to-face interaction is rare. From the brand identification perspective, personal brand identity and social brand identity are reviewed and included in the model. Overall, the research model posits that the relationships between relational benefits and loyalty are mediated by brand identification: the effects of confidence benefit and special treatment benefit on loyalty are realized when they fit personal and social brand identity. The model therefore hypothesizes paths from confidence benefit to social brand identity and to personal brand identity, and from special treatment benefit to social brand identity and to personal brand identity. Loyalty, in turn, is hypothesized to have positive relationships with personal brand identity and social brand identity.
In addition, confidence benefit is expected to have a direct, positive relationship with loyalty, because confidence benefit has been recognized as a critical factor for good relationships and satisfaction. Data were collected from college students who use either credit or debit cards. College students were regarded as good subjects because they belong to the Generation Y cohort and tend to express themselves more. The initial sample size was 203; after deleting cases with many missing values, 197 data points remained and were used for model testing. Measurement items were drawn from the previous literature and modified for this research. Reliability was tested with Cronbach's α using SPSS 14; all values ranged from .874 to .928, exceeding .7. Confirmatory factor analysis was conducted in AMOS 7.0 to examine the measurement model, which showed good fit: χ2(67)=188.388 (p=.000), GFI=.886, AGFI=.821, CFI=.941, RMSEA=.096. Structural equation modeling, also in AMOS 7.0, was used to analyze the research model, which likewise showed good fit: χ2(68)=188.670 (p=.000), GFI=.886, AGFI=.824, CFI=.942, RMSEA=.095. All hypothesized paths were significant except the path from social brand identity to loyalty. Personal brand identity leads to loyalty, while both confidence benefit and special treatment benefit have positive relationships with personal and social brand identity; confidence benefit also has a direct positive effect on loyalty. The results indicate the following. First, personal brand identity plays an important role in credit/debit card usage; therefore, even for products not easily shown in public, design and emotional appeal can matter when they fit customers' lifestyles.
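The Cronbach's α reliability check reported above follows directly from its textbook definition, α = (k/(k−1))·(1 − Σ item variances / variance of the summed scale). A small self-contained sketch (the data here are illustrative, not the study's survey responses):

```python
# Cronbach's alpha computed from its definition: k items per scale,
# respondents in rows. Values above .7 are conventionally acceptable,
# matching the .874-.928 range reported in the abstract.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Two perfectly correlated items: internal consistency is maximal.
perfect = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
```

Note that α uses sample variances (ddof=1) throughout; mixing population and sample variances would bias the estimate.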
Second, confidence benefit and special treatment benefit have positive effects on personal brand identity; marketers therefore need to associate special treatment and confidence benefits with personal image, personality, and personal identity. Third, this study reconfirmed the importance of confidence and trust. Interestingly, however, social brand identity was not significantly related to loyalty. This may be explained by the composition of the sample: strategies that build social brand identity target high-social-status groups, whereas the college students studied here have not yet established their social status.
