• Title/Summary/Keyword: Recognition System (인식 시스템)


Analysis of dose reduction of surrounding patients in Portable X-ray (Portable X-ray 검사 시 주변 환자 피폭선량 감소 방안 연구)

  • Choe, Deayeon;Ko, Seongjin;Kang, Sesik;Kim, Changsoo;Kim, Junghoon;Kim, Donghyun;Choe, Seokyoon
    • Journal of the Korean Society of Radiology
    • /
    • v.7 no.2
    • /
    • pp.113-120
    • /
    • 2013
  • Nowadays, the medical system for patients is shifting toward patient-centered medical services. As awareness of human rights improves and capitalism expands, the rights and needs of patients are gradually increasing, and hospital procedures are being revised for the convenience of patients. Accordingly, the number of mobile portable examinations is growing. Because portable examinations are increasingly performed in patient rooms, intensive care units, operating rooms, and recovery rooms, neighboring patients are unnecessarily exposed to radiation, and the practice is therefore legally regulated. Under the standards for radiation protective facilities of diagnostic radiological systems, hospitals must specify that "in case the examination is taken outside of the operating room, emergency room, or intensive care unit, portable medical X-ray protective shields should be set up." Some hospitals observe this regulation well, but most do not. In this study, we shielded the area around the collimator and then measured the change in dose before and after shielding with respect to the angle of the portable tube and collimator. We also examined the effect of shielding on dose according to the distance between patients' beds. The results show that the areas around the collimator are affected by shielding: after shielding, about 20% more radiation is blocked than with no shielding. During portable examinations, the exposure dose increases in the order of $0^{\circ}$, $90^{\circ}$, and $45^{\circ}$, and at each angle the dose around the collimator declines after shielding. In addition, the exposure dose at a bed distance of 1 m is lower than at 0.5 m. Considering the shielding effects, placing the beds as far apart as possible is the most effective measure, blocking close to 100% of the dose; shielding the collimator reduces the dose by about 20%, and controlling the angle by roughly 10%. When performing a portable examination, it is best to keep other patients and guardians far enough away to reduce their exposure dose. However, when a bed is fixed and the patient cannot be moved, shielding around the collimator is recommended, and a $90^{\circ}$ angle between the collimator and tube is preferred; if that is not possible, the examination should be taken at $0^{\circ}$, and $45^{\circ}$ should be avoided. Radiation workers should be aware of these results and apply them in practice, and continued research into effective ways of reducing exposure dose and shielding radiation is recommended.
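The bed-distance finding is consistent with the inverse-square law. As a rough illustrative estimate (textbook physics, not a measurement from this study), moving a neighboring bed from 0.5 m to 1 m away from the scatter source reduces the dose to roughly $\left(\frac{0.5}{1}\right)^{2} = 0.25$ of its original value, i.e. about a quarter, before any shielding is considered, which is why increasing the distance is described above as the most effective measure.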

A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung;Bae, Junghwan;Han, Namgi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.69-92
    • /
    • 2015
  • The explosion of social media data has led researchers to apply text-mining techniques to big social media data in a more rigorous manner. Although social media text analysis algorithms have improved, previous approaches still have limitations. In the field of sentiment analysis of social media written in Korean, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies have added grammatical factors to the feature sets used to train the classification model. The other approach applies semantic analysis to sentiment analysis, but it has mainly been applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network algorithms, to capture the broader semantic features that have been underestimated in existing sentiment analysis. The result of adopting the Word2Vec algorithm is compared with the result of co-occurrence analysis to identify the difference between the two approaches. The results show that the Word2Vec algorithm extracts about three times as many emotion-related words for a given keyword as co-occurrence analysis does. This difference stems from Word2Vec's vectorization of semantic features, so the Word2Vec algorithm is able to catch hidden related words that are not found by traditional analysis. In addition, Part-Of-Speech (POS) tagging for Korean is used to detect adjectives as "emotion words." The emotion words extracted from the text are converted into word vectors by the Word2Vec algorithm to find related words, and among these related words, nouns are selected because they are likely to have a causal relationship with the "emotion word" in the sentence. The process of extracting these trigger factors of emotion words is named "Emotion Trigger" in this study. As a case study, the datasets were collected by searching with three keywords - professor, prosecutor, and doctor - because these keywords carry rich public emotion and opinion. Advanced data collection was conducted to select secondary keywords for data gathering. The secondary keywords for each keyword are as follows: professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor), doctor (Shin Hae-chul Sky Hospital, drinking and plastic surgery, rebate), prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (professor: 25,720; doctor: 35,110; prosecutor: 43,225) gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and all programs used in text processing and analysis were written in Java. The contributions of this study are as follows. First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, finding Emotion Triggers can reveal hidden connections to public emotion that existing methods cannot detect. Finally, the approach used in this study can be generalized regardless of the type of text data. The limitation of this study is that it is hard to claim that the words extracted by the Emotion Trigger process have a significantly causal relationship with the emotion words in a sentence. Future work will clarify the causal relationship between emotion words and the words extracted by Emotion Trigger by comparing them with manually tagged relationships. Furthermore, part of the text data used for Emotion Trigger comes from Twitter, which has a number of distinct features that were not dealt with in this study; these features will be considered in further work.
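A minimal Python sketch of the Emotion Trigger idea described above, assuming gensim's Word2Vec implementation and the KoNLPy Okt tagger for Korean POS tagging; the study's own pipeline was written in Java, so the function names, tag labels, and parameters here are illustrative only.

```python
# Sketch of the "Emotion Trigger" idea: POS-tag a Korean corpus, embed it with Word2Vec,
# and keep the nouns most similar to an adjective "emotion word" as candidate triggers.
# Assumes gensim and KoNLPy (Okt) are installed; the paper's own pipeline was written in Java.
from gensim.models import Word2Vec
from konlpy.tag import Okt

tagger = Okt()

def tokenize(doc):
    """POS-tag a Korean document and return (token, tag) pairs."""
    return tagger.pos(doc, norm=True, stem=True)

def train_embeddings(docs, dim=100):
    """Train Word2Vec on POS-tagged tokens encoded as 'token/tag' strings."""
    sentences = [[f"{w}/{t}" for w, t in tokenize(d)] for d in docs]
    return Word2Vec(sentences, vector_size=dim, window=5, min_count=5, sg=1)

def emotion_triggers(model, emotion_word, topn=30):
    """Return nouns most similar to an adjective 'emotion word' -- candidate triggers."""
    neighbors = model.wv.most_similar(f"{emotion_word}/Adjective", topn=topn)
    return [(w.split("/")[0], score) for w, score in neighbors if w.endswith("/Noun")]

# Example usage (hypothetical corpus and emotion word):
# model = train_embeddings(corpus_docs)
# print(emotion_triggers(model, "화나다"))   # nouns co-embedded with "angry"
```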

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images; thus, the model proposed in this study adopts CNN as a binary classifier that predicts stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset. The size of the image in which the graph is drawn is $40(pixels){\times}40(pixels)$, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image is converted into a combination of three matrices in order to express the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph-image dataset into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. In the final step, CNN classifiers are trained using the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters ($5{\times}5{\times}6$ and $5{\times}5{\times}9$) in the convolution layer and a $2{\times}2$ max pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and that for the output layer to the Softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two classes of the dependent variable (i.e. tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
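A minimal Keras sketch of the CNN-FG architecture as described above (two $5{\times}5$ convolutions with 6 and 9 filters, $2{\times}2$ max pooling, hidden layers of 900 and 32 nodes, and a 2-node softmax output); details the abstract does not state, such as the optimizer, loss function, and the exact placement of pooling, are assumptions.

```python
# Sketch of the CNN-FG architecture from the abstract; ordering of pooling layers and
# training hyperparameters are assumptions where the abstract is silent.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_fg(input_shape=(40, 40, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),              # 40x40 RGB fluctuation graph
        layers.Conv2D(6, (5, 5), activation="relu"),  # first convolution: 6 filters of 5x5
        layers.MaxPooling2D((2, 2)),                  # 2x2 max pooling
        layers.Conv2D(9, (5, 5), activation="relu"),  # second convolution: 9 filters of 5x5
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(900, activation="relu"),         # hidden layers of 900 and 32 nodes
        layers.Dense(32, activation="relu"),
        layers.Dense(2, activation="softmax"),        # upward vs. downward movement
    ])
    model.compile(optimizer="adam",                   # optimizer/loss are assumptions
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_cnn_fg()
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=20)
```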

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • Large amounts of data are now available from which the research and business sectors can extract knowledge. These data may take the form of unstructured data such as audio, text, and images and can be analyzed with deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems; in particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through the network to the outputs. Its layered structure is well suited to image classification, comprising convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of the feature maps, and fully connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel itself or of professional models wearing it. Such images may not be effective for training a model that is expected to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset, which captures mobility. This allows the classification model to be trained with far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization, we apply Transfer Learning to our training network. As Transfer Learning in CNN consists of pre-training and fine-tuning stages, we divide training into two steps. First, we pre-train our architecture on the large-scale ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instrumentation, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved great accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network on our own runway image dataset. Because we could not find any previously published runway image dataset, we collected one from Google Image Search, obtaining 2,426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. To the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset; we suggest training the model with images capturing all possible postures, which we denote as mobility, using our own runway apparel image dataset. Moreover, by applying Transfer Learning and using the checkpoints and parameters provided by TensorFlow Slim, we could reduce the time spent training the classifier to about 6 minutes per experiment. This model can be used in many business applications in which the query image may be a runway image, a product image, or a street fashion image. Specifically, runway query images can support a mobile application service during fashion week to facilitate brand search, street-style query images can be classified during fashion editorial tasks to label the brand or style, and website query images can be processed by an e-commerce multi-complex service providing item information or recommending similar items.
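A transfer-learning sketch in the spirit of the approach above: ImageNet pre-trained weights followed by fine-tuning on a small runway-image dataset. Keras's InceptionV3 is used here as a stand-in for the GoogLeNet/TF-Slim checkpoints the paper used, and the 32-brand output head, data pipeline, and hyperparameters are illustrative assumptions.

```python
# Transfer learning sketch: freeze an ImageNet-pretrained backbone, add a classifier head
# for 32 fashion brands, then optionally unfreeze and fine-tune at a lower learning rate.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_BRANDS = 32  # 32 fashion brands in the runway dataset

base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))
base.trainable = False  # pre-training stage: keep ImageNet features frozen

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_BRANDS, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Fine-tuning stage: unfreeze the backbone and continue with a smaller learning rate.
# base.trainable = True
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
#               loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(runway_train_ds, validation_data=runway_val_ds, epochs=10)
```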

The Analysis of the School Foodservice Employees' Knowledge and Performance Degree of HACCP System in Jeju (제주지역 학교급식 조리종사자의 HACCP 관련 지식 및 수행도 분석)

  • Song, Im-Sook;Chae, In-Sook
    • Journal of Nutrition and Health
    • /
    • v.41 no.8
    • /
    • pp.870-886
    • /
    • 2008
  • The purposes of this study were to (a) analyze school foodservice employees' knowledge and performance of the HACCP system and (b) provide basic data for planning strategies for a more systematic HACCP system in school foodservice. The subjects were 91 dieticians (response rate 98.9%) and 270 foodservice employees (response rate 98.2%) at schools in Jeju City, surveyed from October 21 to November 4, 2006. The data were analyzed by descriptive analysis, reliability analysis, t-test, ANOVA (Duncan's multiple range test), and Pearson's correlation coefficients using the SPSS Win program (version 12.0). Regarding training frequency, 84.7% of the dieticians provided sanitary training more than once per week (48.6%) or every day (36.3%), and dieticians who were older, employed full-time, or working at middle schools trained the employees significantly more often. As for training methods, 40.7% of the dieticians used oral presentations and 37.4% used printed materials. Most of the employees (98.1%) had received training, but 39.6% had no regular training experience, and 40.7% responded that they understood the HACCP system well. For the employees' knowledge of the HACCP system, the average was 84.2 points (out of 100); the personal hygiene items scored the highest (92.3 points), whereas the CCP3 items scored the lowest (58.3 points). For the performance of the HACCP system, the average was 4.40 (out of 5); the personal hygiene items scored the highest (4.51), whereas the CCP2 items scored the lowest (4.31). The dieticians' perception of the employees' performance averaged 4.13 (out of 5), significantly lower than the actual performance average of 4.40. Additionally, the employees' knowledge level was positively correlated with their performance, and knowledge of CCP3, CCP4, and personal hygiene significantly influenced HACCP performance. Finally, dieticians need to correctly recognize the employees' performance level and, on that basis, plan sanitary training with appropriate contents and methods to enhance the employees' knowledge and achieve a more systematic HACCP system in school foodservice.
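An illustrative sketch of the correlation step reported above (the study itself used SPSS 12.0): Pearson's r between employees' HACCP knowledge scores and their performance degree. The arrays below are hypothetical placeholders, not the study's data.

```python
# Illustrative only: Pearson correlation between knowledge and performance scores.
import numpy as np
from scipy import stats

knowledge = np.array([84, 92, 58, 76, 88, 95, 70])            # knowledge scores (0-100), hypothetical
performance = np.array([4.4, 4.5, 4.1, 4.3, 4.5, 4.6, 4.2])   # performance degree (1-5), hypothetical

r, p_value = stats.pearsonr(knowledge, performance)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")  # a positive r would match the reported finding
```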

Risk Factor Analysis for Preventing Foodborne Illness in Restaurants and the Development of Food Safety Training Materials (레스토랑 식중독 예방을 위한 위해 요소 규명 및 위생교육 매체 개발)

  • Park, Sung-Hee;Noh, Jae-Min;Chang, Hye-Ja;Kang, Young-Jae;Kwak, Tong-Kyung
    • Korean journal of food and cookery science
    • /
    • v.23 no.5
    • /
    • pp.589-600
    • /
    • 2007
  • Recently, with the rapid expansion of franchise restaurants, ensuring food safety has become essential for restaurant growth, and food safety training and related materials are in increasing demand. In this study, we identified potentially hazardous risk factors for ensuring food safety in restaurants through a food safety monitoring tool, and developed training materials for restaurant employees based on the results. The surveyed restaurants, consisting of 6 Korean restaurants and 1 Japanese restaurant, were located in Seoul. Their average check was 15,500 won, ranging from 9,000 to 23,000 won. Their total space ranged from 297.5 to $1322.4m^2$, and the proportion of kitchen space to total area ranged from 4.4 to 30 percent. The mean score for food safety management performance was 57 out of 100 points, with a range of 51 to 73 points. In the risk factor analysis, the most frequently cited sanitary violations involved handwashing methods/handwashing facility supplies (7.5%), receiving activities (7.5%), checking and recording of frozen/refrigerated food temperatures (0%), holding foods off the floor (0%), washing of fruits and vegetables (42%), planning and supervising facility cleaning and maintenance programs (50%), pest control (13%), and toilets equipped/cleaned (13%). Based on these results, the main points addressed in the hygiene training of restaurant employees comprised 4 principles and 8 concepts. The four principles were personal hygiene, prevention of food contamination, time/temperature control, and refrigerated storage. The eight concepts were: (1) personal hygiene and cleanliness with proper handwashing, (2) approved food sources and receiving management, (3) refrigerator and freezer control, (4) storage management, (5) labeling, (6) prevention of food contamination, (7) cooking and reheating control, and (8) cleaning, sanitation, and plumbing control. Finally, a hygiene training manual and poster leaflets were developed as food safety training materials for restaurant employees.

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information have become important. A facial expression, like an artistic painting, contains a wealth of information and can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed human emotion prediction models and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy, which is inevitable since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g. setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ${\varepsilon}$) to the model prediction. Using SVR, we tried to build a model that can measure the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions while providing appropriate visually stimulating content and extracted features from the data. Next, preprocessing was performed to select statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the '${\varepsilon}$-insensitive loss function' and a 'grid search' technique to find the optimal values of parameters such as C, d, ${\sigma}^2$, and ${\varepsilon}$. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and the sigmoid function was used as the transfer function of the hidden and output nodes. We repeated the experiments while varying the number of nodes in the hidden layer among n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA; however, it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
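A sketch of the SVR-with-grid-search setup described above, using scikit-learn. The parameter grid, the RBF kernel (whose gamma plays the role of the ${\sigma}^2$ term in the abstract), and the feature/target arrays are illustrative assumptions, not the study's actual values.

```python
# Illustrative SVR + grid search over C, gamma, and epsilon, scored by (negative) MAE.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_absolute_error

# X: facial-feature matrix, y: arousal (or valence) level -- random placeholders here.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(297, 10)), rng.normal(size=297)

param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": [1e-3, 1e-2, 1e-1],   # RBF width, analogous to sigma^2 in the abstract
    "epsilon": [0.01, 0.1, 0.5],   # epsilon-insensitive loss threshold
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5,
                      scoring="neg_mean_absolute_error")
search.fit(X, y)

print("best parameters:", search.best_params_)
y_pred = search.best_estimator_.predict(X)
print("in-sample MAE:", mean_absolute_error(y, y_pred))  # a real experiment would evaluate a hold-out set
```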

Clinical and radiographic evaluation of $Neoplan^{(R)}$ implant with a sandblasted and acid-etched surface and external connection (SLA 표면 처리 및 외측 연결형의 국산 임플랜트에 대한 임상적, 방사선학적 평가)

  • An, Hee-Suk;Moon, Hong-Suk;Shim, Jun-Sung;Cho, Kyu-Sung;Lee, Keun-Woo
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.46 no.2
    • /
    • pp.125-136
    • /
    • 2008
  • Statement of problem: Since the concept of osseointegration in dental implants was introduced by Brånemark et al., high long-term success rates have been achieved. Although the use of dental implants has increased dramatically, there are few studies on domestic implants with clinical and objective long-term data. Purpose: The aim of this retrospective study was to provide long-term data on the $Neoplan^{(R)}$ implant, which features a sandblasted and acid-etched surface and an external connection. Material and methods: 96 $Neoplan^{(R)}$ implants placed in 25 patients at Yonsei University Hospital were examined, through clinical and radiographic results over an 18- to 57-month period, to determine the effect of various factors on marginal bone loss. Results: 1. Of the 96 implants placed in 25 patients, two fixtures were lost, yielding a cumulative survival rate of 97.9%. 2. Over the study period, the survival rates were 96.8% in the maxilla and 98.5% in the mandible, and 97.6% in the posterior regions versus 100% in the anterior regions. 3. The mean bone loss during the first year after prosthesis placement and the mean annual bone loss after the first year were significantly higher in men than in women (P<0.05). 4. The group with partial edentulism and no posterior teeth distal to the implant prosthesis showed significantly more bone loss than the group with posterior teeth distal to the implant prosthesis, both for mean bone loss in the first year and after the first year (P<0.05). 5. The mean annual bone loss after the first year was more pronounced in posterior regions than in anterior regions (P<0.05). 6. No significant differences in marginal bone loss were found for the following factors: jaw, type of prosthesis, type of opposing dentition, and submerged/non-submerged implants (P>0.05). Conclusion: On the basis of these results, the factors influencing marginal bone loss were gender, type of edentulism, and location in the arch, while factors such as jaw, type of prosthesis, type of opposing dentition, and submerged/non-submerged placement had no significant effect on bone loss. In the present study, the cumulative survival rate of the $Neoplan^{(R)}$ implant with a sandblasted and acid-etched surface was 97.9% over a period of up to 57 months. Further long-term investigations of this implant system and evaluations of other domestic implant systems are needed in future studies.
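As a quick arithmetic check of the headline figure (derived only from the numbers reported above, not an additional result): two failures among 96 implants give $\frac{96-2}{96} \approx 0.979$, so the reported cumulative survival rate of 97.9% corresponds to 94 surviving fixtures.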

Analysis of Success Cases of InsurTech and Digital Insurance Platform Based on Artificial Intelligence Technologies: Focused on Ping An Insurance Group Ltd. in China (인공지능 기술 기반 인슈어테크와 디지털보험플랫폼 성공사례 분석: 중국 평안보험그룹을 중심으로)

  • Lee, JaeWon;Oh, SangJin
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.71-90
    • /
    • 2020
  • Recently, the global insurance industry has been rapidly pursuing digital transformation through the use of artificial intelligence technologies such as machine learning, natural language processing, and deep learning. As a result, a growing number of foreign insurers have succeeded with AI-based InsurTech and platform businesses, and Ping An Insurance Group Ltd., China's largest private company, is leading China's fourth industrial revolution with remarkable achievements in InsurTech and digital platforms, the result of constant innovation under its corporate keywords of 'finance and technology' and 'finance and ecosystem'. Accordingly, this study analyzed the InsurTech and platform business activities of Ping An Insurance Group Ltd. through the ser-M analysis model to provide strategic implications for revitalizing the AI technology-based businesses of domestic insurers. The ser-M model is a framework that interprets, in an integrated manner, the CEO's vision and leadership (subject), the historical environment of the enterprise (environment), the utilization of various resources (resource), and their unique mechanism relationships (mechanism). The case analysis shows that Ping An Insurance Group Ltd. has achieved cost reduction and improved customer service by digitally innovating its entire business, including sales, underwriting, claims, and loan services, using core artificial intelligence technologies such as face, voice, and facial expression recognition. In addition, it combined online data in China and the vast offline data and insights accumulated by the company with new technologies such as artificial intelligence and big data analysis to build a digital platform that integrates financial services and digital service businesses. Ping An Insurance Group Ltd. has pursued constant innovation, and as of 2019 its sales reached $155 billion, ranking seventh among all companies in the Global 2000 list selected by Forbes Magazine. Analyzing the background of this success from the ser-M perspective, founder Ma Mingzhe quickly grasped the development of digital technology, market competition, and changes in population structure in the era of the fourth industrial revolution, established a new vision, and displayed agile, digital-technology-focused leadership. Based on this strong founder-led leadership in response to environmental changes, the company has successfully driven its InsurTech and platform business through the innovation of internal resources, such as investment in artificial intelligence technology, the recruitment of excellent professionals, and the strengthening of big data capabilities, combined with external absorptive capacity and strategic alliances across various industries. This success story of Ping An Insurance Group Ltd. offers the following implications for domestic insurance companies preparing for digital transformation. First, CEOs of domestic companies also need to recognize the paradigm shift in the industry driven by digital technology and quickly arm themselves with digital-technology-oriented leadership to spearhead the digital transformation of their enterprises. Second, the Korean government should urgently overhaul related laws and systems to further promote the use of data across industries and provide strong support, such as deregulation, tax benefits, and platform provision, to help the domestic insurance industry secure global competitiveness. Third, Korean companies need to make bolder investments in the development of artificial intelligence technology so that the systematic securing of internal and external data, the training of technical personnel, and patent applications can be expanded, and they should quickly establish digital platforms in which diverse customer experiences can be integrated through trained artificial intelligence technology. Finally, since a single case of an overseas insurance company has limits to generalization, we hope that future research will examine various management strategies related to artificial intelligence technology more extensively by analyzing cases from multiple industries or multiple companies or by conducting empirical research.

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet they are also harder to forecast because of the industry's distinctive capital structure and debt-to-equity ratio. Construction companies operate on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half, and they are strongly influenced by the economic cycle, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, places a greater burden on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial statements have been studied for many years in various ways, but these models target companies in general and may not be appropriate for forecasting bankruptcies of construction companies, which carry disproportionately large liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; with this unique capital structure, the criteria used to judge the financial risk of companies in general are difficult to apply effectively to construction firms. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula and classifies the result into three categories, evaluating corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood of bankruptcy; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have applied this technology: pattern recognition, a representative application area of machine learning, analyzes patterns in a company's financial information and judges whether the pattern belongs to the bankruptcy-risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), along with many hybrid studies combining these models. Existing studies using the traditional Z-score technique or machine-learning-based bankruptcy prediction focus on companies in non-specific industries, so industry-specific characteristics are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies by analyzing its performance by company size. We classified construction companies into three groups - large, medium, and small - based on the company's capital and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has more predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
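For reference, the classic 1968 formulation of the Altman Z-score mentioned above is $Z = 1.2X_1 + 1.4X_2 + 3.3X_3 + 0.6X_4 + 1.0X_5$, where $X_1$-$X_5$ are working capital, retained earnings, EBIT, market value of equity, and sales, each scaled by total assets (or, for $X_4$, by total liabilities); scores below roughly 1.81 indicate distress, scores above roughly 2.99 indicate safety, and the "moderate" grey zone lies between. A minimal Python sketch of the experimental idea (not the authors' code) follows, training AdaBoost separately per capital-size group; the column names, size thresholds, and data file are illustrative assumptions.

```python
# Sketch: train AdaBoost per capital-size group and report accuracy for each group.
# Only the overall procedure follows the abstract; data columns and thresholds are assumed.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("construction_financials.csv")   # hypothetical dataset of financial ratios

# Group firms by capital, as in the paper (large / medium / small); thresholds are assumed.
df["size_group"] = pd.cut(df["capital_krw"],
                          bins=[0, 5e9, 5e10, float("inf")],
                          labels=["small", "medium", "large"])

for group, sub in df.groupby("size_group", observed=True):
    X = sub.drop(columns=["bankrupt", "capital_krw", "size_group"])
    y = sub["bankrupt"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=42)
    clf = AdaBoostClassifier(n_estimators=200, random_state=42)  # default decision-stump base learner
    clf.fit(X_tr, y_tr)
    print(group, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```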