• Title/Summary/Keyword: Mobile System


A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu;Kim, Namgyu;Jung, Inhwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.123-136
    • /
    • 2014
  • Recently, online shopping has further developed as the use of the Internet and a variety of smart mobile devices becomes more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls. Consequently, there is a tendency for increasingly fierce competition among online retailers, and as a result, many Internet shopping malls are making significant attempts to attract online users to their sites. One such attempt is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they insert a specific keyword on an Internet portal site. The price related to each keyword is generally estimated by the keyword's frequency of appearance. However, it is widely accepted that the price of keywords cannot be based solely on their frequency because many keywords may appear frequently but have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on some keywords simply because people frequently use them. Therefore, from the perspective of shopping malls, a specialized process is required to extract meaningful keywords. Further, the demand for automating this extraction process is increasing because of the drive to improve online sales performance. In this study, we propose a methodology that can automatically extract only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as a keyword that is used directly before shopping behaviors. In other words, only search keywords that direct the search results page to shopping-related pages are extracted from among the entire set of search keywords. A comparison is then made between the extracted keywords' rankings and the rankings of the entire set of search keywords. Two types of data are used in our study's experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information. 
The experimental data came from a website-ranking service and the largest portal site in Korea. The original sample dataset contains 150 million transaction logs. First, portal sites are selected and the search keywords used on them are extracted; this extraction requires only simple parsing. The extracted keywords are then ranked by frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal, from which a total of 344,822 search keywords were extracted. Next, using the web browsing history and site information, shopping-related keywords were taken from this full keyword set, yielding 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of the full keyword set with those of the shopping-related keywords. To this end, we extracted 80,298 search keywords from several Internet shopping malls and chose the top 1,000 as the set of true shopping keywords. We then measured precision, recall, and F-scores for both the full keyword set and the shopping-related keywords, where the F-score is the harmonic mean of precision and recall. The precision, recall, and F-score of the shopping-related keywords derived by the proposed methodology were all higher than those of the full keyword set. This study thus proposes a scheme that obtains shopping-related keywords in a relatively simple manner: we can extract them simply by examining transactions whose next visit is a shopping mall. The resulting shopping-related keyword set is expected to be a useful asset for the many shopping malls that participate in keyword marketing. Moreover, the proposed methodology can easily be applied to constructing keyword sets for other specialized domains besides shopping.
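The extraction-and-evaluation logic described above — keep a search keyword only when the next visited page is a shopping site, then score the result by precision, recall, and their harmonic mean — can be sketched as follows; the log format and the shopping-domain list are hypothetical, not from the paper.

```python
# Sketch of the extraction and evaluation steps; the transaction format
# (search_keyword_or_None, visited_domain) and the domain list are assumptions.

SHOPPING_DOMAINS = {"mall.example.com", "shop.example.com"}  # hypothetical list

def extract_shopping_keywords(transactions):
    """Keep a search keyword only when the *next* visit is a shopping site."""
    keywords = set()
    for i in range(len(transactions) - 1):
        keyword, _site = transactions[i]
        _next_kw, next_site = transactions[i + 1]
        if keyword and next_site in SHOPPING_DOMAINS:
            keywords.add(keyword)
    return keywords

def f_score(extracted, true_keywords):
    """Precision, recall, and their harmonic mean (F-score)."""
    hits = len(extracted & true_keywords)
    precision = hits / len(extracted) if extracted else 0.0
    recall = hits / len(true_keywords) if true_keywords else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)
```

The same scoring is applied to both the full keyword set and the extracted subset, so the two F-scores can be compared directly.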

Inhibitory Effects of Ethanolic Extracts from Aster glehni on Xanthine Oxidase and Content Determination of Bioactive Components Using HPLC-UV (섬쑥부쟁이 에탄올 추출물의 잔틴산화효소 저해 효능 및 HPLC-UV를 이용한 유효성분의 함량 분석)

  • Kang, Dong Hyeon;Han, Eun Hye;Jin, Changbae;Kim, Hyoung Ja
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.45 no.11
    • /
    • pp.1610-1616
    • /
    • 2016
  • This study aimed to establish an optimal extraction process and a high-performance liquid chromatography-ultraviolet (HPLC-UV) analytical method for the determination of 3,5-dicaffeoylquinic acid (3,5-DCQA), as part of materials standardization for developing a xanthine oxidase inhibitor as a health functional food. The quantitative determination of 3,5-DCQA as a marker compound was optimized by HPLC analysis on a Luna RP-18 column; the calibration curve showed good linearity, with a correlation coefficient above 0.9999, using a gradient eluent of water (1% acetic acid) and methanol as the mobile phase at a flow rate of 1.0 mL/min and a detection wavelength of 320 nm. After validation for linearity, accuracy, and precision, the HPLC-UV method was successfully applied to quantify the marker compound (3,5-DCQA) in Aster glehni extracts. Ethanolic extracts of A. glehni (AGEs) were prepared by reflux extraction at 70 and 80°C with 30, 50, 70, and 80% ethanol for 3, 4, 5, and 6 h, respectively. Among the AGEs, the 70% ethanol extract at 70°C showed the highest 3,5-DCQA content, 52.59±3.45 mg/100 g A. glehni. Furthermore, the AGEs were analyzed for their inhibitory activities on uric acid production in the xanthine/xanthine oxidase system. The 70% ethanol extract at 70°C showed the most potent inhibitory activity, with IC50 values of 77.01±3.13 to 89.96±3.08 μg/mL. These results suggest that standardizing 3,5-DCQA in AGEs using HPLC-UV analysis would be an acceptable approach for the development of health functional foods.
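The calibration step described above — a least-squares line of detector response against standard concentration, checked for linearity via the correlation coefficient — can be sketched as follows; the concentrations and peak areas are made-up illustration values, not the paper's data.

```python
import numpy as np

# Hypothetical standard series: concentration (ug/mL) vs. HPLC-UV peak area.
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
area = np.array([120.0, 238.0, 601.0, 1195.0, 2402.0])

slope, intercept = np.polyfit(conc, area, 1)   # least-squares calibration line
r = np.corrcoef(conc, area)[0, 1]              # correlation coefficient

def quantify(sample_area):
    """Invert the calibration line to estimate the sample concentration."""
    return (sample_area - intercept) / slope
```

A correlation coefficient this close to 1 is what the abstract's "good linearity of more than 0.9999" refers to; real method validation would additionally check accuracy and precision against spiked standards.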

Estimation of Domestic Greenhouse Gas Emission of Refrigeration and Air Conditioning Sector adapting 2006 IPCC GL Tier 2b Method (국내 냉동 및 냉방부문 온실가스 배출량 산정 - 2006 IPCC GL Tier 2b 적용 -)

  • Shin, Myung-Hwan;Lyu, Young-Sook;Seo, Kyoung-Ae;Lee, Sue-Been;Lim, Cheolsoo;Lee, Sukjo
    • Journal of Climate Change Research
    • /
    • v.3 no.2
    • /
    • pp.117-128
    • /
    • 2012
  • The Government of South Korea has continued its efforts to establish a virtuous circle of economic growth and climate change response, in order to cope effectively with international demands and pressure to commit to greenhouse gas reduction. Domestically, the Korean Government enacted the "Enforcement Decree of the Framework Act on Low Carbon, Green Growth" (April 13, 2010) to implement the national mid-term GHG mitigation goal (a 30% reduction from BAU by 2020), which laid the foundation for phased GHG mitigation by setting sectoral and industrial targets and adopting the GHG and Energy Target Management System. Follow-up measures have also been taken, such as planning and control of mid- and short-term mitigation targets through detailed analysis of each sector's and industry's mitigation potential, and building the infrastructure for periodic, systematic analysis of target management. Accordingly, a more accurate, reliable, and detailed sectoral GHG inventory is required to establish and implement the framework act successfully. Compared with CO₂ emissions, the fluorinated greenhouse gases (HFCs, PFCs, SF₆) in particular lack the research needed to build greenhouse gas inventories, identify emission sources, and collect the applicable activity data. In this study, focusing on the refrigeration and air conditioning sector, which uses fluorinated refrigerants (HFCs), we reviewed a greenhouse gas emission estimation methodology, evaluated its feasibility, and calculated greenhouse gas emissions for mobile air conditioning, fixed air conditioning, household refrigeration equipment, and commercial refrigeration equipment.
In terms of methodology, because country-specific emission factors and the activity-data DB for the industrial sector are not yet sufficiently developed for the refrigeration and air conditioning sector, the 2006 IPCC Guidelines Tier 2b (mass balance approach) rather than Tier 2a (emission factor approach) is deemed appropriate; once more accurate activity data are compiled for each detailed process and sector, Tier 2a (emission factor approach) could also be applied. The estimated 2009 greenhouse gas emissions (CO₂-eq.) from refrigerant use in the refrigeration and air conditioning sector were 1,974,646 tons/year for mobile air conditioners, 1,011,754 tons/year for fixed air conditioners, 4,396 tons/year for household refrigeration units, and 1,263 tons/year for commercial refrigeration equipment, for an estimated total of 2,992,037 tons.
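Converting sector-level refrigerant (HFC) releases into CO₂-equivalent emissions, as reported above, is in essence a multiplication by each gas's global warming potential. The release amounts and the GWP value of 1,430 for HFC-134a below are illustrative assumptions, not the study's activity data.

```python
# Illustrative CO2-equivalent aggregation; the per-sector refrigerant
# releases (tons/year) and the GWP of 1430 for HFC-134a are assumptions.
GWP = {"HFC-134a": 1430}

sector_release_tons = {                 # hypothetical annual releases
    "mobile air conditioning": 1200.0,
    "fixed air conditioning": 700.0,
    "household refrigeration": 3.0,
}

def co2_equivalent(releases, gas="HFC-134a"):
    """Total emissions in tons of CO2-eq: sum(release * GWP of the gas)."""
    return sum(amount * GWP[gas] for amount in releases.values())
```

In a real Tier 2b (mass balance) estimate the per-sector release term itself would be derived from refrigerant sales and equipment charge/retirement balances, not assumed directly as here.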

The Effects of Game User's Social Capital and Information Privacy Concern on SNGReuse Intention and Recommendation Intention Through Flow (게임 이용자의 사회자본과 개인정보제공에 대한 우려가 플로우를 통해 SNG 재이용의도와 추천의도에 미치는 영향)

  • Lee, Ji-Hyeon;Kim, Han-Ku
    • Management & Information Systems Review
    • /
    • v.37 no.4
    • /
    • pp.21-39
    • /
    • 2018
  • Today, Mobile Instant Messaging (MIM) has become a communication means commonly used by many people as smartphone technology has advanced. Among such services, KakaoGame continuously generates large profits through its representative Kakao platform. However, even though the number of KakaoGame users is growing and their characteristics are becoming more diverse, there is little research on the relationship between the characteristics of SNG users and continued game use. Since the social capital that SNG users form with acquaintances creates a sense of belonging, its role is increasingly emphasized in the social network environment. In addition, game users' concerns about information privacy may decrease their trust in a game app and may be perceived as a threat to the game system. Therefore, this study was designed to examine the structural relationships among SNG users' social capital, information privacy concerns, flow, SNG reuse intention, and recommendation intention. The results are as follows. First, participants' bridging social capital had a positive effect on SNG flow, while bonding social capital had a negative effect. In addition, awareness of information privacy concerns had a negative effect on SNG flow, whereas perceived control over information privacy had a positive effect. Lastly, SNG flow had positive effects on both reuse intention and recommendation intention, and reuse intention in turn had a positive effect on recommendation intention. Based on these results, academic and practical implications can be drawn.
First, this study focused on KakaoTalk, which has both the closed and open characteristics of an SNS, and found that SNG users' social capital can influence their behavior through flow experiences in the SNG. Second, this study extends prior research by empirically analyzing the relationship between SNG users' information privacy concerns and SNG flow. Finally, the results can provide practical guidelines for SNG companies developing effective marketing strategies.

Research on the Digital Twin Policy for the Utilization of Administrative Services (행정서비스 활용을 위한 디지털 트윈 정책 연구)

  • Jina Ok;Soonduck Yoo;Hyojin Jung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.3
    • /
    • pp.35-43
    • /
    • 2023
  • The purpose of this study is to examine digital twin policies for the use of administrative services. The study was conducted through a mobile survey of 1,000 participants, with the following results. First, to utilize digital twin technology, appropriate services that can be applied should first be identified from the perspective of Gyeonggi Province. Efforts to identify digital twin services suited to Gyeonggi Province's field work should be prioritized, and these should lead to increased work efficiency. Second, Gyeonggi Province's digital twin administrative services should avoid duplication with central government projects and establish a model that can be linked to and utilized alongside them, driven by current issues in Gyeonggi Province and citizens' demands for administrative services. Third, to develop Gyeonggi Province's digital twin administrative services, a standard model development plan based on participation in pilot projects should be considered. Gyeonggi Province should lead the project as the main agency and promote it through a collaborative project agreement, with a support system for the overall project established through the Gyeonggi Province Digital Twin Advisory Committee. Fourth, relevant regulations and systems for the construction, operation, and management of dedicated departments and administrative services should be established. To realize digital twins in Gyeonggi Province, a dedicated organization that can carry out the various roles of project promotion and operation, as well as legal and institutional improvements, is necessary; designating such an organization requires considering the expansion and reorganization of existing departments and evaluating the operation of newly established ones.
The limitation of this study is that it only surveyed participants from Gyeonggi Province, and it is recommended that future research be conducted nationwide. The expected effect of this study is that it can serve as a foundational resource for applying digital twin services to public work.

Application of Remote Sensing Techniques to Survey and Estimate the Standing-Stock of Floating Debris in the Upper Daecheong Lake (원격탐사 기법 적용을 통한 대청호 상류 유입 부유쓰레기 조사 및 현존량 추정 연구)

  • Youngmin Kim;Seon Woong Jang ;Heung-Min Kim;Tak-Young Kim;Suho Bak
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.589-597
    • /
    • 2023
  • Large quantities of floating debris flowing in from land during heavy rainfall have adverse social, economic, and environmental impacts, but monitoring of where the debris concentrates and how much accumulates is insufficient. In this study, we proposed an efficient monitoring method for floating debris entering the river during heavy rainfall in Daecheong Lake, the largest water supply source in the central region, and applied remote sensing techniques to estimate the standing stock of floating debris. To investigate the status of floating debris in the upper reaches of Daecheong Lake, we used tracking buoys equipped with low-orbit satellite communication terminals to identify movement routes and behavior characteristics, and used a drone to estimate the potential concentration areas and standing stock of floating debris. The location-tracking buoys moved rapidly during periods when the 3-day cumulative rainfall increased by more than 200 to 300 mm. The buoy released at Hotan Bridge traveled the longest distance, moving about 72.8 km in one day with a maximum speed of 5.71 km/h. Calculating the standing stock of floating debris with a drone after heavy rainfall gave 658.8 to 9,165.4 tons, with the largest amount in the Seokhori area. In this study, we were able to identify the main concentrations of floating debris by using location-tracking buoys and drones. Remote sensing-based monitoring methods, which are more mobile and faster than traditional monitoring, are expected to help reduce the cost of collecting and processing the large amounts of floating debris that flow in during future heavy rain periods.
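The buoy speed figures above follow from distance over time between successive satellite fixes. A great-circle (haversine) sketch of that calculation, using made-up coordinates rather than the study's buoy tracks:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes, in km (Earth radius 6371 km)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def speed_kmh(dist_km, hours):
    """Average drift speed between two fixes."""
    return dist_km / hours
```

At the reported 72.8 km per day, this gives an average drift speed of about 3.0 km/h, consistent with peaks of 5.71 km/h during rapid-movement periods.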

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To deliver these services, reduced latency and high reliability are critical for real-time applications, on top of high data rates. 5G has accordingly set targets of a maximum speed of 20 Gbps, a latency of 1 ms, and a connection density of 10⁶ devices/km². In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, low latency and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting indoor use. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure in an emergency during autonomous driving; in such scenarios, the network architecture that handles in-vehicle information is a major determinant of delay.
Since SDNs with conventional centralized structures have difficulty meeting the desired delay level, the optimal size of an SDN for information processing should be studied. SDNs therefore need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely tied to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the SDN's data processing time are all highly correlated with the overall delay. Of these, the RTD is not a significant factor because link speeds are sufficient and it stays below 1 ms, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, its correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the 5G data rate is high enough, we assume that neighbor-vehicle support information reaches the car without errors. Furthermore, we assumed 5G small cells of 50 to 250 m in radius and vehicle speeds of 30 to 200 km/h, in order to examine the network architecture that minimizes delay.
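One delay driver noted above — vehicles crossing small 5G cells very quickly — can be made concrete with simple kinematics: cell dwell time follows from cell diameter and vehicle speed, and it bounds how often cell-level state must be refreshed. The straight-chord-through-center assumption below is an illustrative simplification, not the paper's simulation model.

```python
def dwell_time_s(cell_radius_m, speed_kmh):
    """Seconds a vehicle spends crossing a cell of the given radius,
    assuming a straight path through the cell center."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return (2 * cell_radius_m) / speed_ms

def handovers_per_minute(cell_radius_m, speed_kmh):
    """How often the vehicle changes cells under the same assumption."""
    return 60.0 / dwell_time_s(cell_radius_m, speed_kmh)
```

At the extremes of the simulated range, a car at 200 km/h in a 50 m-radius cell stays only about 1.8 s per cell, while at 30 km/h in a 250 m-radius cell it stays 60 s — a factor of over 30 in how quickly the SDN must update vehicle-related state.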

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.143-163
    • /
    • 2016
  • The demographics of Internet users are the most basic and important sources for target marketing and personalized advertising on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect Internet users' demographics because their activities are in many cases anonymous. Although a marketing department can obtain demographics through online or offline surveys, these approaches are expensive, slow, and prone to false statements. Clickstream data is the record an Internet user leaves behind while visiting websites: as the user clicks anywhere in a webpage, the activity is logged in semi-structured website log files. Such data lets us see what pages users visited, how long they stayed, how often and when they visited, which sites they prefer, what keywords they used to find a site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data, deriving various independent variables likely to correlate with demographics, including search keywords; frequency and intensity by time, day, and month; variety of websites visited; and text from the web pages visited. The demographic attributes predicted also vary across studies, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, have been used to build prediction models. However, this research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated to build the best prediction model.
The objective of this study is to choose the clickstream attributes most likely to correlate with demographics, based on the results of previous research, and then to identify which data mining method is best suited to predicting each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job; drawing on previous research, 64 clickstream attributes are applied to predict them. The overall process of predictive model building consists of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step reduces the dimensionality of the clickstream variables to address the curse of dimensionality and overfitting, using three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable, using SVM, neural networks, and logistic regression. The last step evaluates the alternative models by accuracy and selects the best one. For the experiments, we used clickstream data covering 5 demographic attributes and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross-validation was conducted to enhance the reliability of the experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable. For example, age is best predicted using decision tree-based dimension reduction with a neural network, whereas gender and marital status are predicted most accurately by applying SVM without dimension reduction.
We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used to predict their demographics and can thus be utilized in digital marketing.
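The dimension-reduction-then-model pipeline described above was run in IBM SPSS Modeler; a minimal numpy re-sketch of one branch (PCA reduction followed by a simple classifier on synthetic user profiles) might look as follows. The synthetic data, the nearest-centroid classifier, and all sizes are illustrative stand-ins, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical user profiles: 200 users x 64 clickstream attributes, with a
# synthetic binary label (e.g. gender) weakly encoded in the first attribute.
labels = rng.integers(0, 2, size=200)
profiles = rng.normal(size=(200, 64))
profiles[:, 0] += 2.0 * labels          # inject signal so the classes differ

def pca_reduce(X, k):
    """Project X onto its top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

def nearest_centroid_predict(X_train, y_train, X_test):
    """Assign each test row to the class with the closest mean vector."""
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

Z = pca_reduce(profiles, k=10)                       # 64 -> 10 dimensions
pred = nearest_centroid_predict(Z[:150], labels[:150], Z[150:])
accuracy = (pred == labels[150:]).mean()
```

The study's finding that some attributes (e.g. gender) predict best *without* dimension reduction can be probed in this setup simply by skipping `pca_reduce` and comparing accuracies.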

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • Large amount of data is now available for research and business sectors to extract knowledge from it. This data can be in the form of unstructured data such as audio, text, and image data and can be analyzed by deep learning methodology. Deep learning is now widely used for various estimation, classification, and prediction problems. Especially, fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engine, and automatic product recommendation. The core model of these applications is the image classification using Convolutional Neural Networks (CNN). CNN is made up of neurons which learn parameters such as weights while inputs come through and reach outputs. CNN has layer structure which is best suited for image classification as it is comprised of convolutional layer for generating feature maps, pooling layer for reducing the dimensionality of feature maps, and fully-connected layer for classifying the extracted features. However, most of the classification models have been trained using online product image, which is taken under controlled situation such as apparel image itself or professional model wearing apparel. This image may not be an effective way to train the classification model considering the situation when one might want to classify street fashion image or walking image, which is taken in uncontrolled situation and involves people's movement and unexpected pose. Therefore, we propose to train the model with runway apparel image dataset which captures mobility. This will allow the classification model to be trained with far more variable data and enhance the adaptation with diverse query image. To achieve both convergence and generalization of the model, we apply Transfer Learning on our training network. As Transfer Learning in CNN is composed of pre-training and fine-tuning stages, we divide the training step into two. 
First, we pre-train our architecture with a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it has achieved great accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. Since no publicly available runway dataset could be found, we collected one from Google Image Search, attaining 2,426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. To the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest training the model on images capturing all possible postures, which we denote as mobility, by using our own runway apparel image dataset. Moreover, by applying Transfer Learning and using the checkpoints and parameters provided by TensorFlow-Slim, we could save training time, with each experiment taking about 6 minutes to train the classifier. This model can be used in many business applications where the query image may be a runway image, product image, or street fashion image.
To be specific, runway query image can be used for mobile application service during fashion week to facilitate brand search, street style query image can be classified during fashion editorial task to classify and label the brand or style, and website query image can be processed by e-commerce multi-complex service providing item information or recommending similar item.
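The pre-train/fine-tune split described above can be illustrated at toy scale: keep a "pretrained" feature extractor frozen and train only a new classification head on the target data. The sketch below stands in for the GoogLeNet backbone with a frozen random projection and for the fine-tuned head with a numpy logistic regression; everything here is an illustrative assumption, not the paper's TensorFlow-Slim pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "images": 120 samples x 20 raw features, two apparel classes.
y = rng.integers(0, 2, size=120)
X = rng.normal(size=(120, 20)) + y[:, None] * 0.8

# Frozen "pretrained" feature extractor: a fixed projection that is NOT
# updated during fine-tuning (stand-in for the convolutional backbone).
W_frozen = rng.normal(size=(20, 8)) / np.sqrt(20)
feats = np.tanh(X @ W_frozen)

# Trainable head: logistic regression fit by gradient descent.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    w -= 0.5 * (feats.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

p_final = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
accuracy = ((p_final > 0.5) == y).mean()
```

Because only the 9 head parameters are trained while the extractor stays fixed, training is fast and needs little target data — the same economy the abstract reports (about 6 minutes per fine-tuning experiment).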

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.111-126
    • /
    • 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to competitive environments. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers, because retaining existing customers is far more economical: the acquisition cost of a new customer is known to be five to six times the cost of retaining an existing one. Companies that effectively prevent churn and improve retention rates are also known to benefit not only through increased profitability but also through an improved brand image driven by higher customer satisfaction. Customer churn prediction, long conducted as a sub-area of CRM research, has recently become more important as a big data-based performance marketing theme thanks to advances in business machine learning technology. Until now, research on churn prediction has been most active in sectors such as mobile telecommunications, finance, distribution, and games, which are highly competitive and where churn management is urgent. These studies focused on improving the performance of the churn prediction model itself, for example by comparing the performance of various models, exploring features that are effective in forecasting churn, or developing new ensemble techniques, and they were limited in practical utility because most treated the entire customer base as a single group when developing a predictive model. In short, the main purpose of existing related research was to improve the predictive model itself, and relatively little research has sought to improve the overall churn prediction process.
In practice, customers exhibit different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat all customers as a single group. It is therefore desirable, for effective churn prediction in heterogeneous industries, to segment customers by a classification criterion such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies have indeed subdivided customers using clustering techniques and applied a churn prediction model to each group. Although this can produce better predictions than a single model for the entire customer population, there is still room for improvement, in that clustering is a mechanical, exploratory grouping technique that computes distances over the inputs and does not reflect the strategic intent of the firm, such as its view of loyalty. This study proposes a segment-based churn prediction process (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation), on the assumption that successful churn management is better achieved through improvements in the overall process than through the performance of the model alone. CCP/2DL is a churn prediction process that segments customers along two loyalty dimensions, quantitative and qualitative; performs a secondary grouping of the customer segments according to their churn patterns; and then independently applies heterogeneous churn prediction models to each churn pattern group. Performance comparisons were made with the most commonly applied General churn prediction process and with a Clustering-based churn prediction process to assess the relative merit of the proposed process.
The General churn prediction process used in this study refers to predicting over the customer base as a single group with a machine learning model, using the most common churn prediction method; the Clustering-based churn prediction process first segments customers with clustering techniques and then builds a churn prediction model for each group. In an experiment conducted in cooperation with a global NGO, the proposed CCP/2DL outperformed the other methodologies in predicting churn. This churn prediction process is not only effective for churn prediction but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out other performance marketing activities.
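The segment-then-predict idea behind CCP/2DL can be sketched as follows: place each customer in a quadrant of a quantitative-by-qualitative loyalty grid, then maintain a separate churn model per quadrant. The field names, thresholds, segment labels, and per-segment "model" (a simple churn base rate) below are illustrative assumptions, not the paper's specification.

```python
# Illustrative two-dimensional loyalty segmentation with per-segment
# churn models; all names and thresholds are hypothetical.

def segment(customer, q_threshold=10, s_threshold=0.5):
    """Quadrant from quantitative loyalty (e.g. purchase count) and
    qualitative loyalty (e.g. an engagement/attitude score)."""
    hi_q = customer["purchases"] >= q_threshold
    hi_s = customer["engagement"] >= s_threshold
    return {(True, True): "core", (True, False): "habitual",
            (False, True): "emotional", (False, False): "dormant"}[(hi_q, hi_s)]

def fit_segment_models(customers):
    """Per-segment churn model: here simply each segment's churn base rate;
    a real implementation would train a distinct classifier per segment."""
    groups = {}
    for c in customers:
        groups.setdefault(segment(c), []).append(c["churned"])
    return {seg: sum(v) / len(v) for seg, v in groups.items()}

def predict_churn(customer, models, default=0.5):
    """Route the customer to its segment's model; fall back if unseen."""
    return models.get(segment(customer), default)
```

Replacing the base-rate dictionary with one trained classifier per segment (and a secondary grouping of segments by churn pattern, as CCP/2DL does) keeps the same routing structure while letting each group get the model that fits its behavior.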