• Title/Summary/Keyword: Processing


Analysis and Improvement Strategies for Korea's Cyber Security Systems Regulations and Policies

  • Park, Dong-Kyun;Cho, Sung-Je;Soung, Jea-Hyen
    • Korean Security Journal
    • /
    • no.18
    • /
    • pp.169-190
    • /
    • 2009
  • Today, the rapid advance of scientific technology has brought about fundamental changes in the types and levels of terrorism, and the war against more than one thousand terrorist and criminal organizations around the world has already begun. A method highly likely to be employed by terrorist groups using twenty-first-century state-of-the-art technology is cyber terrorism. In many instances, things that could only be imagined in reality can be made possible in cyberspace. A simple example would be to alter a single letter in the recorded blood type of a target in a health care data system, which could harm individuals and contribute to overturning an opponent's system or regime. The CIH virus crisis of April 26, 1999 had significant implications in various respects. A virus program of just a few lines, written by a Taiwanese college student without any specific objective, ended up spreading widely across the Internet, damaging 30,000 PCs in Korea and causing over 2 billion won in repair and data recovery costs. Despite such risks of cyber terrorism, a great number of Korean sites employ loose security measures; in fact, there are many cases where companies with millions of subscribers operate very lax security systems. Nationwide preparation for cyber terrorism is called for. In this context, this research analyzes the current status of Korea's cyber security systems and laws from a policy perspective and proposes improvement strategies. This research suggests the following solutions. First, the National Cyber Security Management Act should be passed so that it can serve as the effective national cyber security management regulation. With the Act's establishment, a more efficient and proactive response to cyber security management becomes possible within a nationwide cyber security framework, and its relationship with other related laws can be defined. The newly passed National Cyber Security Management Act would eliminate inefficiencies caused by functional redundancies dispersed across individual sectors in current legislation. Second, to ensure efficient nationwide cyber security management, national cyber security standards and models should be proposed, and at the same time a national cyber security management organizational structure should be established to implement national cyber security policies at each government agency and social component. The National Cyber Security Center must serve as the comprehensive collection, analysis, and processing point for information related to national cyber crises, oversee each government agency, and build collaborative relations with the private sector. In addition, a national and comprehensive response system in which both the private and public sectors participate should be set up, for advance detection and prevention of cyber crisis risks and for a consolidated and timely response using national resources in times of crisis.

A Comparative Study of the Standard Uptake Values of the PET Reconstruction Methods; Using Contrast Enhanced CT and Non Contrast Enhanced CT (PET/CT 영상에서 조영제를 사용하지 않은 CT와 조영제를 사용한 CT를 이용한 감쇠보정에 따른 표준화섭취계수의 비교)

  • Lee, Seung-Jae;Park, Hoon-Hee;Ahn, Sha-Ron;Oh, Shin-Hyun;NamKoong, Heuk;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.12 no.3
    • /
    • pp.235-240
    • /
    • 2008
  • Purpose: At the beginning of PET/CT, computed tomography was mainly used only for attenuation correction (AC), but as CT performance has improved, it can provide improved diagnostic information when contrast media are used. Whether contrast media affect AC on PET/CT scans, however, has been controversial. Some published studies show that contrast media can lead to overestimation when the CT data are used for AC; on this view, contrast media may alter the SUV because of the overestimated AC, although without a definite effect on diagnosis. Thus, the effect of contrast media on AC was investigated in this study. Materials and Methods: Patient inclusion criteria required a history of malignancy and performance of an integrated PET/CT scan and a contrast-enhanced CT scan within a one-day period. Thirty oncologic patients who had PET/CT scans from December 2007 to June 2008 underwent staging evaluation and met these criteria. All patients fasted for at least 6 hr before the IV injection of approximately 5.6 MBq/kg (0.15 mCi/kg) of $^{18}F$-FDG and were scanned about 60 min after injection. All patients had a whole-body PET/CT performed without IV contrast media, followed by a contrast-enhanced CT, on the Discovery STe PET/CT scanner. Both CT datasets were used for AC, and PET images were reconstructed after each AC. ROIs were drawn and SUVs were measured. A paired t-test was performed to assess the significance of the difference between the SUVs obtained from the two attenuation-corrected PET images. Results: The mean and maximum standardized uptake values (SUVs) for different regions were averaged over all patients. Comparing the two corrections, most ROIs showed increased SUVs when contrast-enhanced CT was used rather than non-contrast-enhanced CT. All regions showed increased SUVs, and the p values were under 0.05 except for the mean SUV of the heart region. Conclusion: The effect on SUV measurements that occurs when a contrast-enhanced CT is used for attenuation correction could therefore have significant clinical ramifications. Some published studies argue that the percentage change in SUV is too small to determine or modify the clinical management of oncology patients, because the difference is hardly noticeable to the interpreter. Nevertheless, a numerical change clearly occurred, and at the stage of finding a primary lesion even a small change can shift the baseline; regions such as the liver, which showed a greater change than other regions, need particular attention.

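The paired comparison described in the abstract above can be reproduced in outline with a paired t-test on per-patient SUV pairs. The sketch below is a minimal, hypothetical illustration using scipy; the SUV values, array names, and the single-ROI setup are invented placeholders, not data from the study.

```python
# Minimal sketch (hypothetical data): paired t-test comparing SUVs from PET images
# attenuation-corrected with non-contrast CT vs. contrast-enhanced CT.
import numpy as np
from scipy import stats

# One paired measurement per patient for a single ROI, in SUV units (placeholder values).
suv_noncontrast = np.array([2.1, 1.9, 2.4, 2.0, 2.3, 1.8, 2.2, 2.5])
suv_contrast    = np.array([2.3, 2.0, 2.6, 2.1, 2.5, 1.9, 2.4, 2.7])

t_stat, p_value = stats.ttest_rel(suv_contrast, suv_noncontrast)
mean_change = (suv_contrast - suv_noncontrast).mean()

print(f"mean SUV change: {mean_change:+.2f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would indicate a significant difference
```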

Studies on the N-compounds during Chung-Kook-Jang Meju Fermentation (1) -Changes of Soybean Protein during Chung-Kook-Jang Meju Fermentation- (청국장(淸國醬) 메주 발효과정중(醱酵過程中)의 질소화합물(窒素化合物)의 소장(消長)에 관(關)한 연구(硏究)(I)-대두단백질(大豆蛋白質)의 소장(消長)에 관(關)하여-)

  • Park, Ke-In
    • Applied Biological Chemistry
    • /
    • v.15 no.2
    • /
    • pp.93-109
    • /
    • 1972
  • Three lots of Chung-Kook-Jang were prepared using two strains of Bacillus subtilis and one of Bacillus natto. For the four samples taken from each lot at 12 hr intervals, changes in nitrogenous compounds (insoluble protein, water-soluble protein, peptides, free amino acids, and amino and ammonia nitrogen) during Chung-Kook-Jang fermentation were studied, together with changes in moisture, pH, and proteolytic enzyme activity. In addition, the average peptide length of the peptides in one Bacillus subtilis lot was determined by molecular sieving on an ion exchange resin. The results were as follows: 1. The moisture and total nitrogen contents changed little in all samples throughout the fermentation, as would be expected. 2. In all three experimental lots, the pH rose gradually from an initial value of 6.65 to a final 7.5~7.85 during fermentation. Proteolytic enzyme activities, in accordance with this pH change, increased steadily up to 48~60 hr of fermentation and then decreased slightly, probably affected by the high pH. The strongest proteolytic activity was observed in the lot fermented with Bacillus subtilis K-27, isolated by the author. 3. The insoluble protein nitrogen content of the soybeans increased markedly (to 5%) on cooking after 12 hr of steeping in water; during the Chung-Kook-Jang fermentation, however, it decreased to between one half and one tenth of that of the cooked soybeans. 4. The water-soluble protein nitrogen content (5%), in contrast, decreased greatly to 1.0% on cooking but changed little during the subsequent fermentation. 5. The total content (0.25%) of peptide, amino, and ammonia nitrogen (PAA-N) nearly doubled on cooking and rose steadily as fermentation proceeded, finally reaching 4~7% after 72 hr of fermentation. 6. The amounts of free amino acids in the soybeans generally decreased during cooking, and some, such as glutamic acid, were destroyed completely; in the subsequent 72 hr Chung-Kook-Jang fermentation, however, they increased from several to a few hundred fold depending on the amino acid. Valine, present at 220~267 mg% in HCl-hydrolyzed steeped or cooked soybeans, was not detected at all as a free amino acid in any fermented sample. 7. The average peptide length (APL) of all fractions, eluted and collected from a Dowex-50 ion exchange resin column with a fraction collector, was highest for the cooked soybeans and decreased as fermentation proceeded. The APL of the effluent was highest in the 12 hr fermented sample and decreased thereafter with fermentation.

Quality Characteristics of Kiwi Wine and Optimum Malolactic Fermentation Conditions (참다래 와인의 최적 malolactic fermentation 조건과 품질 특성)

  • Kang, Sang-Dong;Ko, Yu-Jin;Kim, Eun-Jung;Son, Yong-Hwi;Kim, Jin-Yong;Seol, Hui-Gyeong;Kim, Ig-Jo;Cho, Hyoun-Kook;Ryu, Chung-Ho
    • Journal of Life Science
    • /
    • v.21 no.4
    • /
    • pp.509-514
    • /
    • 2011
  • Malolactic fermentation (MLF) occurs after the completion of alcoholic fermentation and is mediated by lactic acid bacteria (LAB), mainly Oenococcus oeni. Kiwi wine suffers from higher acidity than commercial grape wine. Therefore, we investigated the optimal MLF conditions for moderating this strong acidity and improving the quality of wine fermented from Kiwi fruit cultivated in Korea. For alcohol fermentation, the industrial wine yeast Saccharomyces cerevisiae KCCM 12650 was used, and LAB known as MLF strains were used to alleviate the wine's acidity. First, the fermentation conditions of the Kiwi fruit were adjusted to various initial pH values (2.5, 3.5, 4.5), fermentation temperatures (20, 25, 30°C), and sugar content (24 °Brix), and after the fermentation period we measured the acidity, pH, and change in organic acid content by the AOAC method and HPLC analysis. The alcohol content of the fermented Kiwi wine was 12.75%, and its total acidity and pH were 0.78% and 3.5, respectively. Total sugar and total polyphenol contents of the Kiwi wine were 38.72 mg/ml and 60.18 mg/ml, respectively. With regard to organic acid content, the control contained 0.63 mg/ml of oxalic acid, 2.99 mg/ml of malic acid, and 0.71 mg/ml of lactic acid, whereas the MLF wine contained 0.69 mg/ml of oxalic acid, 0.06 mg/ml of malic acid, and 3.12 mg/ml of lactic acid. After MLF, the Kiwi wine had lower malic acid and total acidity than the control. The optimum initial pH and fermentation temperature for MLF were 3.5 and 25°C, respectively. These results suggest that establishing optimal MLF conditions could improve the properties of Kiwi wine manufactured in Korea.

A Study on the Investigation of Sanitary Knowledge and Practice Level of School Foodservice Employees in Jeonju (전주지역 학교급식 조리종사자의 위생지식 및 위생관리 수행에 관한 연구)

  • Han, Eun-Hui;Yang, Hyang-Sook;Shon, Hee-Sook;Rho, Jeong-Ok
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.34 no.8
    • /
    • pp.1210-1218
    • /
    • 2005
  • This study investigated the sanitary knowledge and sanitary practice level of school foodservice employees in the Jeonju area. A total of 508 questionnaires were usable, giving a 79.0% response rate. Statistical analysis was carried out with the SPSS 10.0 program. The results were summarized as follows: about 62% of school foodservice employees were 41~50 years old, 84% of them held irregular jobs, and they received sanitation training at least once a month. The school foodservice employees had more knowledge about 'personal hygiene' than about 'equipment and facilities sanitation' or 'foodborne disease and food microorganisms'. Their hygiene practice levels were highest for 'equipment and facilities sanitation' (4.90±0.25), followed by 'foodborne disease and food microorganisms' (4.86±0.30) and 'personal sanitation' (4.79±0.34), and lowest for 'food processing hygiene' (4.70±0.37). In the analysis of the relationship between knowledge and hygiene practice level, the knowledge of school foodservice employees did not influence their hygiene practice level during work.

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition accelerates dramatically and the complexity of change grows, a variety of studies have been conducted to improve firms' short-term performance and enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. The discovery of promising technologies depends on how a firm evaluates the value of technologies, so many evaluation methods have been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. Although this approach provides in-depth analysis and ensures the validity of the results, it is usually costly and time-consuming and is limited to qualitative evaluation. Many studies attempt to forecast the value of technology using patent information in order to overcome the limitations of the expert-opinion approach. Patent-based technology evaluation has served as a valuable assessment approach for technological forecasting because a patent contains a full and practical description of a technology in a uniform structure; furthermore, it provides information that is not divulged in any other source. Although the patent-information approach has contributed to our understanding of the prediction of promising technologies, it has some limitations because predictions have been made from past patent information alone and the interpretations of patent analyses are not consistent. To fill this gap, this study proposes a technology forecasting methodology that integrates the patent-information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, how promising a technology is, is evaluated from three different and complementary dimensions: impact, fusion, and diffusion. The impact of a technology refers to its influence on the development and improvement of future technologies and is also clearly associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies and represents the breadth of search underlying the technology. Fusion can be calculated per technology or per patent, so this study measures two fusion indexes: a fusion index per technology and a fusion index per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (e.g., t-n, t-n-1, t-n-2, ...) as input variables; the output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, final promising technologies are recommended based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promisingness score for each technology. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that produced by multiple regression analysis for the fusion indexes; for the other indexes, however, the mean absolute error of the proposed methodology is slightly higher than that of multiple regression analysis. These unexpected results may be explained, in part, by the small number of patents: since this study uses only patent data in class G06F, the number of sample patents is relatively small, leading to incomplete learning of the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technologies. This study attempts to extend existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis and an artificial intelligence network. It helps managers engaged in technology development planning and policy makers implementing technology policy by providing a quantitative prediction methodology. In addition, this study could help other researchers by providing a deeper understanding of the complex field of technological forecasting.
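As a rough illustration of the second module, the sketch below trains a small backpropagation network that maps lagged index values to the five index values at time t and reports the mean absolute error used for evaluation. It assumes scikit-learn's MLPRegressor as the backpropagation learner; the lag depth, layer size, and randomly generated data are assumptions for illustration, not the authors' configuration.

```python
# Hedged sketch: multi-output backpropagation network predicting the five patent
# indexes at time t from their lagged values (illustrative data and settings).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_tech, n_lags, n_idx = 200, 3, 5            # technologies, lagged periods, indexes

# X: the five indexes at earlier periods, flattened per technology; y: the indexes at time t.
X = rng.random((n_tech, n_lags * n_idx))
y = rng.random((n_tech, n_idx))

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)                              # trained with backpropagation

mae = np.abs(model.predict(X) - y).mean()    # mean absolute error, the paper's evaluation metric
print(f"in-sample MAE: {mae:.3f}")
```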

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such findings as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, which points out that 'the trivial many' produce more value than 'the vital few', has gained popularity in recent times thanks to the tremendous reduction of distribution and inventory costs brought by the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, which transcends geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants in knowledge sharing enhances the efficiency of overall knowledge collaboration is an issue of interest. This study examines the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and extends the analysis to reflect work characteristics. All analyses were conducted on actual behavioral data rather than self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the highest quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants in an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to capture the inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured article status, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary depending on the characteristics of the group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles cite at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect of the Pareto ratio and the inequality of knowledge sharing on collaboration efficiency is more pronounced for more academic tasks in an online community.
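For concreteness, the two focal measures can be computed directly from per-editor contribution counts for an article. The sketch below, with invented edit counts and assumed variable names, shows one way to compute the Pareto ratio (the share of contributions made by the top 20% of contributors) and the Gini coefficient of the contribution distribution.

```python
# Illustrative sketch: Pareto ratio and Gini coefficient of knowledge contribution
# for one article, computed from per-editor edit counts (hypothetical numbers).
import numpy as np

def pareto_ratio(edits):
    """Share of all contributions made by the top 20% of contributors."""
    sorted_desc = np.sort(np.asarray(edits, dtype=float))[::-1]
    top_n = max(1, int(np.ceil(0.2 * len(sorted_desc))))
    return sorted_desc[:top_n].sum() / sorted_desc.sum()

def gini(edits):
    """Gini coefficient of the contribution distribution (0 = perfect equality)."""
    x = np.sort(np.asarray(edits, dtype=float))   # ascending order
    n = len(x)
    cum_share = np.cumsum(x) / x.sum()
    return (n + 1 - 2 * cum_share.sum()) / n

article_edits = [120, 45, 30, 12, 8, 5, 3, 2, 1, 1]   # edits per editor (placeholder)
print(f"Pareto ratio:     {pareto_ratio(article_edits):.2f}")
print(f"Gini coefficient: {gini(article_edits):.2f}")
```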

Analysis of Twitter for 2012 South Korea Presidential Election by Text Mining Techniques (텍스트 마이닝을 이용한 2012년 한국대선 관련 트위터 분석)

  • Bae, Jung-Hwan;Son, Ji-Eun;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.141-156
    • /
    • 2013
  • Social media is a representative form of Web 2.0 that changes users' information behavior by allowing them to produce their own content without any expert skills. In particular, as a new communication medium, it has a profound impact on social change by enabling users to communicate their opinions and thoughts to both the masses and their acquaintances. Social media data play a significant role in the emerging Big Data arena. A variety of research areas, such as social network analysis and opinion mining, have therefore paid attention to discovering meaningful information from the vast amounts of data buried in social media. Social media has recently become a main focus of Information Retrieval and Text Mining because it not only produces massive unstructured textual data in real time but also serves as an influential channel for opinion leading. Most previous studies, however, have adopted broad-brush and limited approaches, which have made it difficult to find and analyze new information. To overcome these limitations, we developed a real-time Twitter trend mining system that captures trends by processing big Twitter stream datasets in real time. The system offers term co-occurrence retrieval, visualization of Twitter users by query, similarity calculation between two users, topic modeling to keep track of changes in topical trends, and mention-based user network analysis. In addition, we conducted a case study on the 2012 Korean presidential election. We collected 1,737,969 tweets containing the candidates' names and election-related terms on Twitter in Korea (http://www.twitter.com/) over one month in 2012 (October 1 to October 31). The case study shows that the system provides useful information and detects societal trends effectively. The system also retrieves the list of terms that co-occur with given query terms. We compare the results of term co-occurrence retrieval using the influential candidates' names, 'Geun Hae Park', 'Jae In Moon', and 'Chul Su Ahn', as query terms. General terms related to the presidential election, such as 'Presidential Election', 'Proclamation in Support', and 'Public opinion poll', appear frequently. The results also show specific terms that differentiate each candidate, such as 'Park Jung Hee' and 'Yuk Young Su' for the query 'Geun Hae Park', 'a single candidacy agreement' and 'Time of voting extension' for the query 'Jae In Moon', and 'a single candidacy agreement' and 'down contract' for the query 'Chul Su Ahn'. Our system not only extracts 10 topics along with their related terms but also shows the topics' dynamic changes over time by employing the multinomial Latent Dirichlet Allocation technique. Each topic can show one of two patterns, a rising tendency or a falling tendency, depending on the change in its probability distribution. To determine the relationship between topic trends on Twitter and social issues in the real world, we compared topic trends with related news articles and found that Twitter can track issues faster than other media such as newspapers. The user network on Twitter differs from those of other social media because of the distinctive way relationships are formed on Twitter: users build relationships by exchanging mentions. We visualized and analyzed the mention-based networks of 136,754 users, using the three candidates' names, 'Geun Hae Park', 'Jae In Moon', and 'Chul Su Ahn', as query terms. The results show that Twitter users mention all candidates' names regardless of their political tendencies. This case study discloses that Twitter can be an effective tool to detect and predict dynamic changes in social issues, and that mention-based user networks can reveal different aspects of user behavior through a network structure unique to Twitter.
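The topic-modeling step described above can be approximated with a standard LDA implementation. The sketch below is a toy example on a few invented English tweet texts using scikit-learn; the study itself fits a multinomial LDA with 10 topics on 1,737,969 Korean tweets, so the corpus, preprocessing, and the three-topic setting here are placeholders.

```python
# Toy sketch: extracting topics from tweets with Latent Dirichlet Allocation.
# The three sample "tweets" and the 3-topic setting are placeholders; the study
# tracks 10 topics over time on its full Korean tweet stream.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "presidential election public opinion poll candidate",
    "single candidacy agreement between candidates announced",
    "voting time extension debated before election day",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(tweets)            # document-term matrix

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

# Top terms per topic; tracking topic proportions over time (e.g., per day) yields
# the rising/falling topical trends described in the abstract.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```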

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae;Kim, Seonghyeon;Tak, Onsik;Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.95-118
    • /
    • 2017
  • Recently, centered on the downtown area, transactions of row housing and multiplex housing have become active, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing is, however, a blind spot for real estate information, which creates a social problem owing to changes in market size and the information asymmetry that follows changes in demand. In addition, the 5 or 25 districts used by the Seoul Metropolitan Government or the Korea Appraisal Board (hereafter, KAB) were established within administrative boundaries and have been used in existing real estate studies. These zones were drawn for urban planning, not as a district classification for real estate research. Building on existing studies, this study found that the spatial structure of Seoul needs to be reset when estimating future housing prices. This study therefore attempted to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing. In other words, the simple division by existing administrative districts has produced inefficiencies, so this study aims to cluster Seoul into new areas for more efficient real estate analysis. A hedonic model was applied to real transaction price data of row and multiplex housing, and the K-Means clustering algorithm was used to cluster the spatial structure of Seoul. This study used real transaction prices of Seoul row and multiplex housing from January 2014 to December 2016 and the official land value of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Data preprocessing followed these steps: removal of underground transactions, price standardization per area, and removal of outlying transaction cases (above 5 and below -5). Through this preprocessing, the data were reduced from 132,707 to 126,759 cases. The R program was used as the data analysis tool. After preprocessing, the data model was constructed. First, K-Means clustering was performed; then a regression analysis was conducted using the hedonic model, and a cosine similarity analysis was carried out. Based on the constructed data model, we clustered Seoul on the basis of longitude and latitude and conducted a comparative analysis with the existing areas. The results indicated that the goodness of fit of the model was above 75% and that the variables used in the hedonic model were significant. In other words, the 5 or 25 districts of the existing administrative areas were divided into 16 districts. This study thus derived a clustering method for row and multiplex housing in Seoul using the K-Means clustering algorithm and a hedonic model that reflects price characteristics. Academic and practical implications are presented, together with the limitations of this study and directions for future research. The academic implication is that clustering by price characteristics improves on the areas used by the Seoul Metropolitan Government, the KAB, and existing real estate research; another is that, whereas apartments have been the main subject of existing real estate research, this study proposes a method of classifying areas in Seoul using public information (i.e., real transaction data from MOLIT) under Government 3.0. The practical implication is that the results can serve as basic data for real estate research on row and multiplex housing; they are also expected to activate research on row and multiplex housing and to increase the accuracy of models of actual transactions. Future research will involve conducting various analyses to overcome the limitations identified here, and deeper study is needed.
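The two modelling steps, spatial K-Means clustering followed by a hedonic regression, can be sketched as below. The DataFrame columns, the toy transaction records, and the use of scikit-learn and statsmodels are illustrative assumptions; the study itself works in R on 126,759 preprocessed MOLIT records and derives 16 clusters.

```python
# Hedged sketch: K-Means clustering of transactions by location, then a hedonic
# regression of price on property attributes plus cluster dummies (toy data).
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.cluster import KMeans

# Placeholder records; the study uses preprocessed MOLIT transaction data for Seoul.
df = pd.DataFrame({
    "longitude":    [126.97, 127.03, 126.91, 127.10, 126.95, 127.06, 126.88, 127.01],
    "latitude":     [37.56, 37.50, 37.58, 37.52, 37.61, 37.48, 37.55, 37.53],
    "price_per_m2": [6.1, 5.4, 4.8, 5.9, 5.2, 6.4, 4.5, 5.7],   # million KRW per m2 (invented)
    "floor_area":   [45.0, 59.0, 33.0, 52.0, 40.0, 66.0, 30.0, 49.0],
    "building_age": [12, 5, 20, 8, 15, 3, 25, 10],
})

# Step 1: cluster transactions into spatial districts from longitude/latitude
# (the study derives 16 clusters; k=2 here only because the toy data are tiny).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
df["cluster"] = kmeans.fit_predict(df[["longitude", "latitude"]])

# Step 2: hedonic regression with cluster dummies; the paper reports a fit above 75%.
model = smf.ols("price_per_m2 ~ floor_area + building_age + C(cluster)", data=df).fit()
print(model.rsquared)
```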

Analysis of Success Cases of InsurTech and Digital Insurance Platform Based on Artificial Intelligence Technologies: Focused on Ping An Insurance Group Ltd. in China (인공지능 기술 기반 인슈어테크와 디지털보험플랫폼 성공사례 분석: 중국 평안보험그룹을 중심으로)

  • Lee, JaeWon;Oh, SangJin
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.71-90
    • /
    • 2020
  • Recently, the global insurance industry has been rapidly pursuing digital transformation through the use of artificial intelligence technologies such as machine learning, natural language processing, and deep learning. As a result, more and more foreign insurers have succeeded with AI technology-based InsurTech and platform businesses. Ping An Insurance Group Ltd., China's largest private company, is leading China's fourth industrial revolution with remarkable achievements in InsurTech and digital platforms, the result of constant innovation under the corporate keywords 'finance and technology' and 'finance and ecosystem'. Accordingly, this study analyzed the InsurTech and platform business activities of Ping An Insurance Group Ltd. through the ser-M analysis model to provide strategic implications for revitalizing the AI technology-based businesses of domestic insurers. The ser-M analysis model is a frame that allows the vision and leadership of the CEO, the historical environment of the enterprise, the utilization of various resources, and the firm's unique mechanisms to be interpreted in an integrated manner in terms of subject, environment, resource, and mechanism. The case analysis shows that Ping An Insurance Group Ltd. achieved cost reduction and improved customer service by digitally innovating its entire business, including sales, underwriting, claims, and loan services, using core AI technologies such as facial, voice, and facial expression recognition. In addition, 'online data in China' and 'the vast offline data and insights accumulated by the company' were combined with new technologies such as artificial intelligence and big data analysis to build a digital platform that integrates financial services and digital service businesses. Ping An Insurance Group Ltd. pursued constant innovation, and as of 2019 its sales reached $155 billion, ranking seventh among all companies in the Global 2000 list compiled by Forbes Magazine. Analyzing the background of this success from the ser-M perspective, founder Ma Mingzhe quickly grasped the development of digital technology, market competition, and changes in population structure in the era of the fourth industrial revolution, established a new vision, and displayed agile, digital technology-focused leadership. Based on the founder's strong leadership in response to environmental changes, the company successfully led InsurTech and platform businesses through innovation of internal resources, such as investment in artificial intelligence technology, recruitment of excellent professionals, and strengthening of big data capabilities, combined with external absorptive capacity and strategic alliances across various industries. This success story of Ping An Insurance Group Ltd. offers the following implications for domestic insurance companies preparing for digital transformation. First, CEOs of domestic companies also need to recognize the paradigm shift in the industry caused by changes in digital technology and quickly arm themselves with digital technology-oriented leadership to spearhead the digital transformation of their enterprises. Second, the Korean government should urgently overhaul related laws and systems to further promote the use of data between different industries and provide drastic support, such as deregulation, tax benefits, and platform provision, to help the domestic insurance industry secure global competitiveness. Third, Korean companies also need to make bolder investments in the development of artificial intelligence technology so that the systematic acquisition of internal and external data, the training of technical personnel, and patent applications can be expanded, and they should quickly establish digital platforms so that diverse customer experiences can be integrated through trained artificial intelligence technology. Finally, since there may be limits to generalization from a single case of an overseas insurance company, it is hoped that future research will examine various management strategies related to artificial intelligence technology more extensively by analyzing cases from multiple industries or multiple companies or by conducting empirical research.