• Title/Summary/Keyword: network performance


Olympic Advertisers Win Gold, Experience Stock Price Gains During and After the Games

  • Tomovick, Chuck;Yelkur, Rama
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.1
    • /
    • pp.80-88
    • /
    • 2010
  • There has been considerable research examining the relationship between stockholders' equity and various marketing strategies. These include studies linking stock price performance to advertising, customer service metrics, new product introductions, research and development, celebrity endorsers, brand perception, brand extensions, brand evaluation, company name changes, and sports sponsorships. Another facet of marketing investment which has received heightened scrutiny for its purported influence on stockholder equity is television advertising embedded within specific sporting events such as the Super Bowl. Research indicates that firms which advertise in Super Bowls experience stock price gains. Given this reported relationship between advertising investment and increased shareholder value, for both general and special events, it is surprising that relatively little research attention has been paid to the relationship between advertising in the Olympic Games and its subsequent impact on stockholder equity. While attention has been directed at the effectiveness of sponsoring the Olympic Games, much less focus has been placed on the financial soundness of advertising during the telecasts of these Games. Notable exceptions include Peters (2008), Pfanner (2008), Saini (2008), and Keller Fay Group (2009). This paper presents a study of Olympic advertisers who ran TV ads on NBC in the American telecasts of the 2000, 2004, and 2008 Summer Olympic Games. Five hypotheses were tested:
H1: The stock prices of firms which advertised on American telecasts of the 2008, 2004, and 2000 Olympics (referred to as O-Stocks) will outperform the S&P 500 during this same period of time (i.e., the Monday before the Games through the Friday after the Games).
H2: O-Stocks will outperform the S&P 500 during the medium term, that is, from the Monday before the Games through the end of each Olympic calendar year (December 31st of 2000, 2004, and 2008, respectively).
H3: O-Stocks will outperform the S&P 500 in the longer term, that is, from the Monday before the Games through the midpoint of the following year (June 30th of 2001, 2005, and 2009, respectively).
H4: There will be no difference in the performance of these O-Stocks vs. the S&P 500 in the non-Olympic control periods (i.e., three months earlier in each Olympic year).
H5: The annual revenue of firms which advertised on American telecasts of the 2008, 2004, and 2000 Olympics will be higher for those years than for the same firms in the years preceding each of those three Olympics.
In this study, we recorded stock prices of the companies that advertised during the last three Summer Olympic Games (i.e., Beijing in 2008, Athens in 2004, and Sydney in 2000). We identified these advertisers using Google searches as well as with the help of the television network (i.e., NBC) that hosted the Games; NBC held the American broadcast rights to all three Olympic Games studied. We used Internet sources to verify the parent companies of the brands advertised each year, and found the stock prices of these parent companies using Yahoo! Finance. Only publicly held and traded companies were used in the study. We identified changes in Olympic advertisers' stock prices over the four-week period from the Monday before through the Friday after the Games.
In total, there were 117 advertisers on the U.S. telecasts of the 2008, 2004, and 2000 Olympics. Figure 1 provides a breakdown of those advertisers by industry sector. Results indicate that the stock of the firms that advertised (O-Stocks) outperformed the S&P 500 during the periods of interest and underperformed the S&P 500 during the earlier control periods. These same O-Stocks also outperformed the S&P 500 from the start of the Games through the end of each Olympic year, and for six months beyond that. Price pressure linkage, signaling theory, high-involvement viewers, and corporate activation strategies are believed to contribute to these positive results. Implications for advertisers and researchers are discussed, as are study limitations and future research directions.
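As a rough illustration of the event-window comparison in H1, the sketch below computes buy-and-hold returns for a handful of advertiser stocks against the S&P 500 over the 2008 Games window. It is a minimal sketch assuming the yfinance package for Yahoo! Finance access; the tickers and dates are placeholders, not the study's actual advertiser list.

```python
# Minimal sketch of an event-window return comparison, assuming yfinance.
# Tickers and dates are illustrative, not the study's advertiser list.
import yfinance as yf

o_stocks = ["KO", "MCD", "GE"]                   # hypothetical O-Stock tickers
event_start, event_end = "2008-08-04", "2008-08-29"  # Monday before to Friday after Beijing 2008

prices = yf.download(o_stocks + ["^GSPC"], start=event_start,
                     end=event_end, auto_adjust=False)["Adj Close"]

# Buy-and-hold return over the event window for each series
window_return = prices.iloc[-1] / prices.iloc[0] - 1.0

o_mean = window_return[o_stocks].mean()
benchmark = window_return["^GSPC"]
print(f"Mean O-Stock return: {o_mean:.2%} vs S&P 500: {benchmark:.2%}")
```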

Capacity Comparison of Two Uplink OFDMA Systems Considering Synchronization Error among Multiple Users and Nonlinear Distortion of Amplifiers (사용자간 동기오차와 증폭기의 비선형 왜곡을 동시에 고려한 두 상향링크 OFDMA 기법의 채널용량 비교 분석)

  • Lee, Jin-Hui;Kim, Bong-Seok;Choi, Kwonhue
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.5
    • /
    • pp.258-270
    • /
    • 2014
  • In this paper, we investigate the channel capacity of two uplink OFDMA (Orthogonal Frequency Division Multiple Access) schemes that are robust to access timing offset (TO) among multiple users: ZCZ (Zero Correlation Zone) code time-spread OFDMA and sparse SC-FDMA (Single Carrier Frequency Division Multiple Access). In order to reflect practical conditions, we consider not only access TO among multiple users but also the peak-to-average power ratio (PAPR), one of the hot issues of uplink OFDMA. When access TO exists among multiple users, a user's signal amplified by power control may cause severe interference to the signals of other users. Meanwhile, a signal amplified according to the distance between user and base station may be distorted by the limits of the amplifier, and performance may degrade. In order to achieve the maximum channel capacity, we investigate combinations of transmit power, the so-called adaptive scaling factor (ASF), by numerical simulation. We confirm that the channel capacity with ASF increases compared to the case that considers only distance, i.e., ASF = 1. From the simulation results, at high signal-to-noise ratio (SNR), ZCZ code time-spread OFDMA achieves higher channel capacity than sparse block SC-FDMA. On the other hand, at low SNR, sparse block SC-FDMA performs better than ZCZ time-spread OFDMA.
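To make the ASF idea concrete, the following toy sketch brute-forces per-user transmit scaling factors that maximize two-user sum capacity when timing offset leaks a fraction of each user's power into the other's signal. The channel gains, leakage fraction, and amplifier clipping limit are illustrative assumptions, not the paper's simulation setup.

```python
# Toy adaptive-scaling-factor (ASF) search: grid over per-user transmit
# scaling, maximizing two-user sum capacity under mutual interference.
import numpy as np

noise = 1.0                       # normalized noise power
gains = np.array([1.0, 0.25])     # distance-dependent channel gains (user 1 nearer)
leak = 0.1                        # power fraction leaking between users (access TO)
p_max = 10.0                      # amplifier limit: transmit power is clipped here

def sum_capacity(scale):
    # Received power after amplifier clipping at the transmitter
    p = np.minimum(scale / gains, p_max) * gains
    c = 0.0
    for i in range(2):
        interference = leak * p[1 - i]
        c += np.log2(1.0 + p[i] / (noise + interference))
    return c

grid = np.linspace(0.1, 10.0, 100)
best = max(((a, b) for a in grid for b in grid),
           key=lambda s: sum_capacity(np.array(s)))
print("best ASF pair:", best, "sum capacity:", sum_capacity(np.array(best)))
```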

Knowledge Management Strategy of a Franchise Business : The Case of a Paris Baguette Bakery (프랜차이즈 기업의 지식경영 전략 : 파리바게뜨 사례를 중심으로)

  • Cho, Joon-Sang;Kim, Bo-Yong
    • Journal of Distribution Science
    • /
    • v.10 no.6
    • /
    • pp.39-53
    • /
    • 2012
  • It is widely known that knowledge management plays a facilitating role in upgrading organizational performance. Knowledge management systems (KMS), especially, support the knowledge management process, including the sharing, creation, and use of knowledge within a company, and maximize the value of knowledge resources within an organization. Despite this widely held belief, there are few studies that describe how companies actually develop, share, and practice their knowledge. Companies in the domestic small-franchise sector, which are in the early stages of knowledge management, need to improve their KMS to manage their franchisees effectively. From this perspective, this study uses a qualitative approach to explore the actual process of knowledge management implementation. This article presents a case study of the PB (Paris Baguette) company, the first to build a KMS in the franchise industry. The analysis of the target company confirmed the following. First, the chief executive's support is a critical success factor, and this support can increase the participation of organization members. Second, it is important to build a process and culture that actively creates and leverages information in knowledge management activities; the organizational learning culture should be one in which the creation, learning, and sharing of new knowledge develops continuously. Third, a horizontal network organization is needed to make relationships within the organization more close-knit. Fourth, to connect diverse processes such as knowledge acquisition, storage, and utilization, information technology (IT) capabilities are essential. Indeed, IT can be a powerful tool for improving the quality of work and maximizing the spread and use of knowledge. However, during the construction of an intranet-based KMS, research is required to ensure that the most efficient system is implemented. Finally, proper evaluation and compensation are important success factors: to develop knowledge workers, an appropriate program of promotion and compensation should be established, and building members' confidence in the benefits of knowledge management should be an ongoing activity. The company developed its original KMS to achieve a flexible and proactive organization, and a new KMS to improve organizational and personal capabilities. The PB case shows that there are differences between participants' perceptions and actual performance in managing knowledge; that knowledge management is not a matter of formality but a paradigm that assures the sharing of knowledge; and that IT boosts communication skills, thus creating a mutual relationship that enhances the flow of knowledge and information between people. Knowledge management for building organizational capabilities can be successful when its focus and ways to increase its acceptance are considered. This study suggests guidelines for the major factors that corporate executives of domestic franchises should consider to improve knowledge management.


Comparison of Models for Stock Price Prediction Based on Keyword Search Volume According to the Social Acceptance of Artificial Intelligence (인공지능의 사회적 수용도에 따른 키워드 검색량 기반 주가예측모형 비교연구)

  • Cho, Yujung;Sohn, Kwonsang;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.103-128
    • /
    • 2021
  • Recently, investors' interest and the dissemination of stock-related information have been considered significant factors in explaining stock returns and volume. In addition, for companies that develop, distribute, or utilize innovative new technologies such as artificial intelligence, it is difficult to accurately predict future stock returns and volatility due to macro-environment and market uncertainty. Market uncertainty is recognized as an obstacle to the activation and spread of artificial intelligence technology, so research is needed to mitigate it. Hence, the purpose of this study is to propose a machine learning model that predicts the volatility of a company's stock price by using the internet search volume of artificial intelligence-related technology keywords as a measure of investor interest. To this end, we use VAR (Vector Auto Regression) and the deep neural network LSTM (Long Short-Term Memory) for stock market prediction, and compare stock price prediction performance based on keyword search volume according to the technology's social acceptance stage. In addition, we analyze sub-technologies of artificial intelligence to examine how the search volume of detailed technology keywords changes across acceptance stages and how interest in specific technologies affects stock market forecasts. For this purpose, the terms 'artificial intelligence', 'deep learning', and 'machine learning' were selected as keywords, and we counted how often each keyword appeared weekly in online documents over five years, from January 1, 2015 to December 31, 2019. The stock price and transaction volume data of KOSDAQ-listed companies were also collected and used for the analysis. As a result, we found that keyword search volume for artificial intelligence technology increased as its social acceptance increased. In particular, starting from the AlphaGo shock, search volume for artificial intelligence itself and for detailed technologies such as machine learning and deep learning increased. The prediction models showed high accuracy, and we confirmed that the acceptance stage showing the best prediction performance differed for each keyword. In the stock price prediction based on keyword search volume for each social acceptance stage of the artificial intelligence technologies classified in this study, prediction accuracy was highest in the awareness stage, and accuracy differed according to the keywords used in the prediction model for each stage. Therefore, when constructing a stock price prediction model using technology keywords, the social acceptance of the technology and its sub-technology classification should be considered. The results of this study provide the following implications. First, to predict the return on investment in companies based on innovative technology, it is most important to capture the recognition stage in which public interest rapidly increases in the social acceptance of the technology.
Second, the fact that keyword search volume and the accuracy of the prediction model vary according to the social acceptance of a technology should be considered when developing a decision support system for investment, such as the big-data-based robo-advisors recently introduced by the financial sector.
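A minimal sketch of the VAR baseline described above, assuming statsmodels and a hypothetical weekly CSV of keyword search volume and returns (the file name and column layout are assumptions for illustration):

```python
# Minimal VAR sketch: joint model of weekly keyword search volume and returns.
import pandas as pd
from statsmodels.tsa.api import VAR

# df: weekly observations, e.g. columns ["ai_search_volume", "kosdaq_return"]
# indexed by week; the CSV is a hypothetical placeholder.
df = pd.read_csv("weekly_keyword_and_returns.csv", index_col=0, parse_dates=True)

model = VAR(df)
fitted = model.fit(maxlags=8, ic="aic")      # lag order chosen by AIC
# 4-week-ahead forecast from the most recent k_ar observations
forecast = fitted.forecast(df.values[-fitted.k_ar:], steps=4)
print(fitted.summary())
print(forecast)
```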

A preliminary assessment of high-spatial-resolution satellite rainfall estimation from SAR Sentinel-1 over the central region of South Korea (한반도 중부지역에서의 SAR Sentinel-1 위성강우량 추정에 관한 예비평가)

  • Nguyen, Hoang Hai;Jung, Woosung;Lee, Dalgeun;Shin, Daeyun
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.6
    • /
    • pp.393-404
    • /
    • 2022
  • Reliable terrestrial rainfall observations from satellites at finer spatial resolution are essential for urban hydrological and microscale agricultural demands. Although various traditional "top-down" satellite rainfall products are widely used, they are limited in spatial resolution. This study assesses the potential of a novel "bottom-up" approach for rainfall estimation, the parameterized SM2RAIN model, applied to C-band SAR Sentinel-1 satellite data (SM2RAIN-S1), to generate high-spatial-resolution terrestrial rainfall estimates (0.01° grid/6-day) over Central South Korea. Its performance was evaluated for both spatial and temporal variability against rainfall data from a conventional reanalysis product and a rain gauge network for a 1-year period over two different sub-regions of Central South Korea: the mixed-forest-dominated middle sub-region and the cropland-dominated west coast sub-region. Evaluation results indicated that the SM2RAIN-S1 product can capture general rainfall patterns in Central South Korea and holds potential for high-spatial-resolution rainfall measurement at the local scale over different land covers, while providing less biased rainfall estimates relative to rain gauge observations. Moreover, the SM2RAIN-S1 rainfall product performed better over mixed forests in terms of Pearson's correlation coefficient (R = 0.69), implying the suitability of the 6-day SM2RAIN-S1 data for capturing the temporal dynamics of soil moisture and rainfall in mixed forests. However, in terms of RMSE and Bias, better performance was obtained over croplands rather than mixed forests, indicating that the larger errors induced by high evapotranspiration losses (especially in mixed forests) need to be addressed in further improvement of SM2RAIN.
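The evaluation relies on standard skill metrics. A minimal sketch of how Pearson's R, RMSE, and Bias could be computed for matched satellite/gauge pairs follows; the sample values are made up, not the study's data.

```python
# Skill metrics for satellite rainfall vs. rain gauge observations.
import numpy as np

def evaluate(sat, gauge):
    """Pearson's R, RMSE, and Bias (mean error) of satellite vs. gauge."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    r = np.corrcoef(sat, gauge)[0, 1]
    rmse = np.sqrt(np.mean((sat - gauge) ** 2))
    bias = np.mean(sat - gauge)
    return r, rmse, bias

# Example with made-up 6-day rainfall accumulations (mm)
r, rmse, bias = evaluate([12.0, 3.5, 0.0, 25.1], [10.2, 4.0, 0.3, 28.8])
print(f"R={r:.2f}  RMSE={rmse:.2f} mm  Bias={bias:.2f} mm")
```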

Optimization of Characteristic Change due to Differences in the Electrode Mixing Method (전극 혼합 방식의 차이로 인한 특성 변화 최적화)

  • Jeong-Tae Kim;Carlos Tafara Mpupuni;Beom-Hui Lee;Sun-Yul Ryou
    • Journal of the Korean Electrochemical Society
    • /
    • v.26 no.1
    • /
    • pp.1-10
    • /
    • 2023
  • The cathode, one of the four major components of a lithium secondary battery, is an important component responsible for the energy density of the battery. The mixing of active material, conductive material, and polymer binder is essential in the commonly used wet manufacturing process for cathodes. However, since there is no systematic method for setting cathode mixing conditions, in most cases performance differs depending on the manufacturer. Therefore, LiMn2O4 (LMO) cathodes were prepared using a commonly used THINKY mixer and a homogenizer to optimize the mixing method in the cathode slurry preparation step, and their characteristics were compared. Each mixing condition was run at 2,000 RPM for 7 min, and to isolate the difference due to the mixing method alone, all other experimental conditions (mixing time, material input order, etc.) were kept constant. Between the THINKY-mixer LMO (TLMO) and the homogenizer LMO (HLMO), HLMO has more uniform particle dispersion than TLMO and thus shows higher adhesive strength. The electrochemical evaluation also reveals that the HLMO cathode shows improved performance, with a more stable cycle life, than TLMO. The discharge capacity retention of HLMO at 69 cycles was 88%, about 4.4 times higher than that of TLMO; as for rate capability, HLMO exhibited better capacity retention even at high C-rates of 10, 15, and 20 C, and its capacity recovery at 1 C was higher than that of TLMO. It is postulated that the homogenizer improves the characteristics of the slurry containing the active material, conductive material, and polymer binder: by suppressing the conductive material's strong electrostatic tendency to aggregate, it disperses the conductive material uniformly and creates an electrically conductive network. As a result, surface contact between the active material and the conductive material increases, electrons move more smoothly, changes in lattice volume during charging and discharging are more reversible, and contact resistance between the active material and the conductive material is suppressed.
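For reference, the capacity-retention figures cited above reduce to a ratio against the initial discharge capacity. The sketch below shows that arithmetic with hypothetical cycling data chosen to reproduce the reported 88% vs. roughly 4.4-times-lower retention, not the study's measurements.

```python
# Capacity retention = discharge capacity at cycle n / initial capacity.
# The cycling data below are hypothetical, not the study's measurements.
def retention(discharge_capacities_mAh_g):
    """Percent retention at each cycle relative to the initial capacity."""
    initial = discharge_capacities_mAh_g[0]
    return [c / initial * 100.0 for c in discharge_capacities_mAh_g]

hlmo = [105.0, 102.0, 98.0, 92.4]   # hypothetical HLMO capacities over cycling
tlmo = [104.0, 90.0, 55.0, 20.8]    # hypothetical TLMO capacities over cycling
print(f"HLMO retention: {retention(hlmo)[-1]:.0f}%  "
      f"TLMO retention: {retention(tlmo)[-1]:.0f}%")   # 88% vs 20%, ratio ~4.4
```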

Study on the Discovery and Spread of Local Folk Songs: In the Case of Memil-dorikkaejil-sori (지역민요의 발굴과 확산: 메밀도리깨질소리 사례)

  • Lee, Chang-Sik
    • (The) Research of the performance art and culture
    • /
    • no.40
    • /
    • pp.193-222
    • /
    • 2020
  • This study aims to enable the development of traditional content that carries on the heritage value of Bongpyeong Memil-dorikkaejil-sori, a song sung during dry-field buckwheat farming. The identity of this farming-song heritage is diagnosed, and its historical value and its value as a community asset for transmission are emphasized as cultural-asset value, extending the discussion to the level of traditional cultural industry works. For Bongpyeong Memil-dorikkaejil-sori, the study presents its intrinsic artistic value, its excellence as an educational experience, factors for overcoming the extinction of farming songs, and a direction for promoting storytelling around buckwheat. This breaks from the formalization and old-fashionedness surrounding Bongpyeong Memil-dorikkaejil-sori, focusing instead on symbolizing the agricultural heritage in a modern context and on preserving and spreading the legacy of slash-and-burn farming (hwajeon or budaeki). In terms of methodology concerning awareness, historicity, and creativity, an alternative approach to labor folk songs is provided through a critical examination of Bongpyeong Memil-dorikkaejil-sori and buckwheat songs. By reviewing the field context of intangible cultural asset designation, suggestions are made for activating transmission, including a method of symbolically registering the buckwheat farming heritage. Agricultural cultural heritage that can represent Pyeongchang and Gangwon must be discovered and made into a brand. In addition, the uniqueness of the 'Song Madaengi Traditional Music' madang should be identified and applied as a brand around which people worldwide can reach consensus. A rediscovery of Bongpyeong Memil-dorikkaejil-sori as a farming song is required to establish its sustainable status as multi-purpose cultural content, and a network of professionals for activating folk songs should be provided to enable qualitative substance and spread rather than mere quantitative growth. Regional festivals, especially those of the Pyeongchang area, should be utilized around the development of farming songs to organize the storytelling actively.

Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom;Byeong-Seok Min
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.99-107
    • /
    • 2024
  • Along with economic growth and industrial development, there is increasing demand for the production of various electronic components and devices: semiconductors, SMT components, and electric battery products. However, these products may contain foreign substances introduced during manufacturing, such as iron, aluminum, or plastic, which can lead to serious problems or product malfunction, and even fire in electric vehicles. To solve these problems, it is necessary to determine whether foreign materials are present inside the product, and many tests have been done by means of non-destructive testing methods such as ultrasound or X-ray. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even with X-ray equipment, and noise can also make foreign objects difficult to detect. Moreover, to meet manufacturing speed requirements, the X-ray acquisition time must be reduced, which can result in a very low signal-to-noise ratio (SNR) and lower foreign-material detection accuracy. Therefore, in this paper, we propose a five-step approach to overcome the limitations of low-quality images that make foreign substances hard to detect. First, the global contrast of X-ray images is increased through histogram stretching. Second, to strengthen high-frequency signal content and local contrast, a local contrast enhancement technique is applied. Third, unsharp masking is applied to enhance edge clearness, making objects more visible. Fourth, a Residual Dense Block (RDB) super-resolution method is used for noise reduction and image enhancement. Last, the YOLOv5 algorithm is trained and employed to detect foreign objects. Experimental results show that the proposed method improves performance metrics such as precision by more than 10% compared to the original low-quality images.
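A minimal sketch of the first three enhancement steps using OpenCV follows. CLAHE is assumed here as the local contrast technique (the paper does not name one), and the RDB super-resolution and YOLOv5 stages are omitted for brevity; the file names are placeholders.

```python
# Steps 1-3 of the enhancement pipeline: histogram stretching, local
# contrast enhancement (CLAHE assumed), and unsharp masking, via OpenCV.
import cv2

img = cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE)  # placeholder input

# 1) Global histogram stretching to the full 8-bit range
stretched = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)

# 2) Local contrast enhancement; CLAHE strengthens high-frequency detail
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
local = clahe.apply(stretched)

# 3) Unsharp masking: subtract a blurred copy to sharpen edges
blurred = cv2.GaussianBlur(local, (0, 0), sigmaX=3)
sharp = cv2.addWeighted(local, 1.5, blurred, -0.5, 0)

cv2.imwrite("xray_enhanced.png", sharp)
# Steps 4-5 (RDB super-resolution, YOLOv5 detection) would follow here.
```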

Memory Organization for a Fuzzy Controller

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e., their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e., by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can occur as well: it is possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases, common points among fuzzy sets, i.e., points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, while not restricting the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by: Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 24. The memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each fuzzy set; the fuzzy-set word dimension is 8*5 bits.
Therefore, the memory dimension would have been 128*40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
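The word-length formula reconstructed above can be checked numerically. The sketch below reproduces the abstract's example figures (24-bit words, 128*24 bits for the proposed scheme vs. 128*40 bits for full vectorial storage).

```python
# Worked version of the word-length formula: Length = nfm * (dm(m) + dm(fm)).
# Values follow the example in the abstract.
U = 128          # universe-of-discourse elements (memory rows)
n_sets = 8       # fuzzy sets in the term set
levels = 32      # membership discretization levels

dm_m = 5         # bits per membership value (2**5 = 32 levels)
dm_fm = 3        # bits to index one of 8 membership functions (2**3 = 8)
nfm = 3          # max non-null memberships per universe element

length = nfm * (dm_m + dm_fm)        # 24 bits per word
proposed = U * length                # 128 * 24 = 3072 bits
vectorial = U * n_sets * dm_m        # 128 * 40 = 5120 bits (all values stored)
print(length, proposed, vectorial)   # -> 24 3072 5120
```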


A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.125-140
    • /
    • 2013
  • We call a data set in which the number of records belonging to a certain class far outnumbers the number of records belonging to the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a certain classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records account for the majority class, and 'churn' records account for the minority class. Sensitivity measures the proportion of actual retentions which are correctly identified as such. Specificity measures the proportion of churns which are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to the low value of specificity. Many previous studies on imbalanced data sets employed the 'oversampling' technique, in which members of the minority class are sampled more than those of the majority class in order to make a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity will be decreased. In this research, we developed a hybrid model of support vector machine (SVM), artificial neural network (ANN), and decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model.' The process of construction and prediction of our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. SVM_I model and ANN_I model are constructed using the imbalanced data set, and SVM_B model is constructed using the balanced data set. SVM_I model is superior in sensitivity and SVM_B model is superior in specificity. For a record on which both SVM_I model and SVM_B model make the same prediction, that prediction becomes the final solution. If they make different predictions, the final solution is determined by the discrimination rules obtained by ANN and decision tree. For a record on which SVM_I model and SVM_B model make different predictions, a decision tree model is constructed using the ANN_I output value as input and actual retention or churn as target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ≥ 0.285, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research. The result we present in this research is the structure or framework of our hybrid SVM model, not a specific threshold value such as 0.285; the threshold value in the above discrimination rules can be changed to any value depending on the data. In order to evaluate the performance of our hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. Accuracy of the hybrid SVM model is 91.08%, which is better than that of SVM_I model or SVM_B model. The points worth noticing here are its sensitivity, 95.02%, and specificity, 69.24%. The sensitivity of SVM_I model is 94.65%, and the specificity of SVM_B model is 67.00%. Therefore, the hybrid SVM model developed in this research improves the specificity of SVM_B model while maintaining the sensitivity of SVM_I model.
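A minimal sketch of the hybrid construction described above, assuming scikit-learn. The oversampling scheme, model settings, and the reuse of the 0.285 threshold are illustrative stand-ins, not the paper's exact configuration (which derives its rules from an ANN-plus-decision-tree step).

```python
# Hybrid SVM sketch: SVM_I on imbalanced data, SVM_B on oversampled balanced
# data, with an ANN-based discrimination rule breaking their disagreements.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

def fit_hybrid(X, y):
    """X, y: imbalanced training set; y == 1 marks the minority 'churn' class."""
    minority, majority = X[y == 1], X[y == 0]
    # Oversample the minority class (with replacement) to build the balanced set
    minority_up = resample(minority, n_samples=len(majority), random_state=0)
    Xb = np.vstack([majority, minority_up])
    yb = np.hstack([np.zeros(len(majority)), np.ones(len(minority_up))])

    svm_i = SVC().fit(X, y)       # imbalanced data: tends toward high sensitivity
    svm_b = SVC().fit(Xb, yb)     # balanced data: tends toward high specificity
    ann_i = MLPClassifier(max_iter=1000, random_state=0).fit(X, y)
    return svm_i, svm_b, ann_i

def predict_hybrid(models, X, threshold=0.285):
    """Where the SVMs agree, keep their prediction; otherwise apply the
    ANN_I-output discrimination rule (threshold is data-dependent)."""
    svm_i, svm_b, ann_i = models
    p_i, p_b = svm_i.predict(X), svm_b.predict(X)
    ann_score = ann_i.predict_proba(X)[:, 1]
    fallback = (ann_score >= threshold).astype(int)   # >= threshold -> churn
    return np.where(p_i == p_b, p_i, fallback)
```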