
Analysis of domestic and overseas coastal groundwater management laws and policies (국내외 해안 지하수관리 법·정책 사례 분석)

  • Shim, Young-Gyoo; Chung, Il-Moon; Chang, Sun Woo
    • Journal of Korea Water Resources Association, v.57 no.9, pp.633-643, 2024
  • Many coastal countries have developed and used a wide range of technologies and policy measures to protect freshwater aquifers and groundwater resources from seawater intrusion, and have established and implemented legal and institutional foundations to support them. This study covers coastal states in the eastern United States, the Netherlands, India, and Japan, and its goal is to analyze each country's legal and policy measures for coastal groundwater management. By introducing Jeju Island's groundwater standard level system, we aim to provide a basis for future discussions on groundwater management measures not only for Jeju Island but also for other coastal areas of Korea. The analysis shows that, although coastal groundwater management varies in content and form according to local issues and characteristics around the world, countries pursue the common goals of securing a stable volume of groundwater withdrawal and preventing seawater intrusion, and they attempt to establish optimal management measures, laws, systems, and policies based on several key factors in order to maximize the efficiency of groundwater management. First, considering the hydrogeological characteristics and status of coastal groundwater, a separate special management system is established and implemented within the scope of the national groundwater management system. In addition, preventing and mitigating groundwater level decline by limiting the amount of groundwater withdrawal and preventing seawater intrusion are key policy goals and policy tools, supported by research and development. Finally, it was found that synergy effects are sought by combining various other policy tools and measures.

Exploring the Effects of Corporate Organizational Culture on Financial Performance: Using Text Analysis and Panel Data Approach (기업의 조직문화가 재무성과에 미치는 영향에 대한 연구: 텍스트 분석과 패널 데이터 방법을 이용하여)

  • Hansol Kim; Hyemin Kim; Seung Ik Baek
    • Information Systems Review, v.26 no.1, pp.269-288, 2024
  • The main objective of this study is to empirically explore how organizational culture influences the financial performance of companies. To achieve this, 58 companies included in the KOSPI 200 were selected from JobPlanet, an online job platform in South Korea. To understand the organizational culture of these companies, data were collected and analyzed from 81,067 reviews written on JobPlanet by current and former members of these companies over the nine years from 2014 to 2022. To define the organizational culture of each company based on the review data, this study used well-known text analysis techniques, namely Word2Vec and FastText. By modifying, supplementing, and extending the keywords associated with the five organizational culture values (Innovation, Integrity, Quality, Respect, and Teamwork) defined by Guiso et al. (2015), this study created a new Culture Dictionary. Using this dictionary, the study explored which value-related keywords appear most often in each company's review data, revealing the relative strength of specific cultural values within companies. Going a step further, the study also investigated which cultural values statistically affect financial performance. The results indicated that an organizational culture focusing on innovation and creativity (Innovation) and on customers and the market (Quality) positively influenced Tobin's Q, an indicator of a company's future value and growth. For the profitability indicator, ROA, only the organizational culture emphasizing customers and the market (Quality) showed a statistically significant impact. This study distinguishes itself from traditional survey- and case-analysis-based research on organizational culture by analyzing large-scale text data to explore organizational culture.
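
A minimal sketch of the general idea behind such a culture dictionary: seed keywords for each value are expanded with nearest neighbours from a Word2Vec model trained on review text, and each company is scored by the share of review tokens falling in each value's keyword set. The seed lists, company data, and all names here are illustrative placeholders, not the authors' actual dictionary or code.

```python
# Sketch (not the authors' implementation): Word2Vec-based keyword expansion
# and keyword-frequency scoring per company. All data below is placeholder.
from collections import Counter
from gensim.models import Word2Vec

reviews = {  # company -> list of tokenized reviews (hypothetical)
    "CompanyA": [["innovative", "fast", "creative"], ["customer", "quality"]],
    "CompanyB": [["teamwork", "respect", "colleague"], ["integrity", "honest"]],
}
seed_dictionary = {  # five values of Guiso et al. (2015); seed words are illustrative
    "Innovation": ["innovative", "creative"],
    "Integrity": ["integrity", "honest"],
    "Quality": ["quality", "customer"],
    "Respect": ["respect"],
    "Teamwork": ["teamwork", "collaboration"],
}

corpus = [tokens for docs in reviews.values() for tokens in docs]
model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, workers=1)

# Expand each value's keyword set with its nearest neighbours in embedding space.
culture_dictionary = {}
for value, seeds in seed_dictionary.items():
    expanded = set(seeds)
    for seed in seeds:
        if seed in model.wv:
            expanded.update(w for w, _ in model.wv.most_similar(seed, topn=5))
    culture_dictionary[value] = expanded

# Score each company: share of review tokens that belong to each value's keyword set.
for company, docs in reviews.items():
    counts = Counter(token for tokens in docs for token in tokens)
    total = sum(counts.values())
    scores = {v: sum(counts[w] for w in kws) / total
              for v, kws in culture_dictionary.items()}
    print(company, scores)
```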

Analysis of Research Trends in Journal of Distribution Science (유통과학연구의 연구 동향 분석 : 창간호부터 제8권 제3호까지를 중심으로)

  • Kim, Young-Min; Kim, Young-Ei; Youn, Myoung-Kil
    • Journal of Distribution Science, v.8 no.4, pp.5-15, 2010
  • This study investigated the research trends of the Journal of Distribution Science (JDS), published by KODISA, and drew implications for raising the quality of the journal. Specifically, it classified the scientific system of the distribution field, examined research trends, compared JDS with other scholarly journals in distribution, and derived implications for improving JDS. KODISA published JDS Vol.1 No.1 in 1999, and by Vol.8 No.3 in September 2010 a total of 109 papers had appeared. The study examined subjects, research institutions, the number of authors, methodology, the frequency of papers written in Korean and in English, the participation of Korean and foreign researchers, and the use of references. It also examined JDR of KODIA (Korea Distribution Association), JKDM (The Journal of Korean Distribution & Management), and JDS itself in order to identify directions for development. To investigate the research trends of JDS, main categories were constructed on the basis of the national science and technology standard classification system of MEST (Ministry of Education, Science and Technology), the research area classification of the NRF (National Research Foundation of Korea), the research classification systems of KOREADIMA and KLRA (Korea Logistics Research Association), and the scope of distribution science pursued by KODISA. The distribution economy area was divided into general distribution, distribution economy, distribution, distribution information, and others, and the distribution management area was divided into distribution management, marketing, MD and purchasing, consumer behavior, and others. The findings were as follows. First, of the 109 papers, 47 (43.1%) fell into the distribution economy category and 62 (56.9%) into the distribution management category. Within distribution economy, the active areas were distribution information (14 papers, 12.8%) and distribution economy (9 papers, 8.3%), with research on distribution and distribution information published steadily every year. Within distribution management, distribution management accounted for 25 papers (22.9%) and marketing for 20 papers (18.3%); research on distribution management, marketing, distribution, and distribution information is increasing. Second, authorship was distributed as follows: 55 papers (50.5%) by a professor alone, 12 papers (11.0%) co-authored by professors and practitioners, 9 papers (8.3%) by professors and students, 5 papers (4.6%) by researchers, 5 papers (4.6%) by practitioners, 4 papers (3.7%) by professors, researchers, and practitioners, and 2 papers (1.8%) by students. The share of papers by professors alone is declining, while participation by practitioners, research institutions, and graduate students continues to grow. By number of authors, 43 papers (39.5%) had a single author, 42 (38.5%) had two authors, and 24 (22.0%) had three or more. Third, professors published the most papers in most areas. In the distribution economy category, authors were professors (25 papers, 53.2%), professors and practitioners (7 papers, 14.9%), professors and researchers (6 papers, 12.8%), and professors and students (3 papers, 6.3%). In the distribution management category, authors were professors (30 papers, 48.4%), professors and practitioners (10 papers, 16.1%), and professors and researchers as well as professors and students (6 papers each, 9.7%). Authorship patterns in distribution management were diverse, including professors alone, professors and practitioners, professors and researchers, and researchers and practitioners. Professors mainly studied marketing, MD and purchasing, and consumer behavior, and more active participation by practitioners and researchers is needed. Fourth, as for research methodology, literature research was the most common (45 papers, 41.3%), followed by empirical research based on questionnaire surveys (44 papers, 40.4%). General distribution, distribution economy, distribution, and distribution management mostly relied on literature research, while marketing most often used questionnaire-based empirical research. Fifth, papers in Korean accounted for 92.7% (101 papers) and papers in English for 7.3% (8 papers). Only one English paper was published through 2006, while 7 (11.9%) appeared after 2007, a positive increase. One paper (0.9%) was written by a foreign researcher alone and two papers (1.8%) were co-authored by Korean and foreign researchers, indicating very low participation by foreign researchers. Sixth, a JDS paper cited 27.5 references on average (11.1 domestic and 16.4 foreign), and the average number of JDS self-citations was only 0.4. Papers in distribution economy cited 24.2 references on average (9.4 domestic and 14.8 foreign) with 0.6 JDS self-citations, and papers in distribution management cited 30.0 references on average (12.1 domestic and 17.9 foreign) with 0.3 self-citations. Seventh, regarding the language of papers in similar journals, JDR (Journal of Distribution Research) of KODIA (Korea Distribution Association) published 95 papers in total, 92 in Korean (96.8%) and 3 in English (3.2%). JKDM of KOREADIMA published 132 papers in total, 93 in Korean (70.5%) and 39 in English (29.5%), and since 2008 JKDM has published one English-language issue every year. JDS published 59 papers in total, 52 in Korean (88.1%) and 7 in English (11.9%). Eighth, regarding research methodology in similar journals, JDR used questionnaire-based empirical research in 65 papers (68.4%), literature research in 17 (17.9%), and quantitative analysis in 11 (11.6%). JKDM used a wider variety of methodologies: 60 questionnaire surveys (45.5%), 40 literature researches (30.3%), 21 quantitative analyses (15.9%), 6 system analyses (4.5%), and 5 case studies (3.8%). JDS used 30 questionnaire surveys (50.8%), 15 literature researches (25.4%), 7 case studies (11.9%), and 6 quantitative analyses (10.2%). Ninth, regarding Korean and foreign authorship in similar journals, JDR published 93 papers (97.8%) by Korean researchers, 1 paper by a foreign researcher, and 1 paper co-authored by Korean and foreign researchers, while JKDM had no papers by foreign researchers alone but 13 papers (9.8%) co-authored by Korean and foreign researchers, showing more international participation than the other journals. JDS published 56 papers (94.9%) by Korean researchers, one paper (1.7%) by a foreign researcher alone, and 2 papers (3.4%) co-authored by Korean and foreign researchers. Tenth, regarding references and citations in similar journals, JDR cited 42.5 references on average, consisting of 10.9 domestic (25.7%) and 31.6 foreign (74.3%) references, with about 1.1 self-citations per paper and decreasing. JKDM cited 10.5 Korean references (36.3%) and 18.4 foreign references (63.7%), with no more than 1.1 self-citations; its citation count was 2.9 in 2008 and has decreased continuously since. JDS cited 26.8 references on average, consisting of 10.9 domestic (40.7%) and 15.9 foreign (59.3%) references, and its self-citations were 0.2 until 2009 before rising to 2.1 in 2010. Based on the research trends of JDS and the comparison with similar journals, the following implications are drawn. First, JDS should actively invite foreign contributors in preparation for SSCI. Second, the ratio of papers in English should be greatly increased. Third, a wider range of research methodologies should be accepted to raise the quality of the journal. Fourth, to increase citations, exposure through Google and other web search services should be strengthened so that the journal reaches foreign countries more widely. By acting on these implications, a domestic scholarly journal can become a journal recognized worldwide.
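
A small sketch of how the frequency tables reported above (counts and percentages by category, methodology, and language) could be tabulated with pandas. The paper records below are invented placeholders; the column names are not the journal's actual coding scheme.

```python
# Sketch (illustrative data only): tabulating paper counts and percentage shares
# by category, methodology, and language, as in the frequency analysis above.
import pandas as pd

papers = pd.DataFrame({
    "category": ["distribution management", "distribution economy",
                 "distribution management", "distribution information"],
    "methodology": ["questionnaire survey", "literature research",
                    "literature research", "questionnaire survey"],
    "language": ["Korean", "Korean", "English", "Korean"],
})

def share(series):
    # Count each value and express it as a percentage of all papers.
    counts = series.value_counts()
    return pd.DataFrame({"n": counts, "pct": (100 * counts / len(series)).round(1)})

print(share(papers["category"]))
print(share(papers["methodology"]))
print(share(papers["language"]))
```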


Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae; Cho, Yoonho
    • Journal of Intelligence and Information Systems, v.22 no.3, pp.143-163, 2016
  • The demographics of Internet users are the most basic and important sources for target marketing and personalized advertisements on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although a marketing department can obtain demographics through online or offline surveys, these approaches are expensive, time-consuming, and likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites. As the user clicks anywhere in a webpage, the activity is logged in semi-structured website log files. Such data allow us to see what pages users visited, how long they stayed, how often and when they visited, which sites they prefer, what keywords they used to find a site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data. They derived various independent variables likely to be correlated with demographics, including search keywords, frequency and intensity by time, day, and month, the variety of websites visited, and text information from the web pages visited. The demographic attributes predicted also vary across studies and cover gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, were used to build prediction models. However, this stream of research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated for building the best prediction model. The objective of this study is to choose the clickstream attributes most likely to be correlated with demographics from the results of previous research, and then to identify which data mining method is best suited to predict each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job, and 64 clickstream attributes drawn from previous research are used as predictors. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction of the clickstream variables to mitigate the curse of dimensionality and the overfitting problem, using three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable using SVM, neural networks, and logistic regression. The last step evaluates the alternative models in terms of accuracy and selects the best model. For the experiments, we used clickstream data covering 5 demographic attributes and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross validation was conducted to enhance the reliability of the experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable. For example, age prediction performs best when using decision tree-based dimension reduction with a neural network, whereas gender and marital status are predicted most accurately by applying SVM without dimension reduction. We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used to predict their demographics and thus be utilized for digital marketing.
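
A minimal sketch of the modeling comparison described above (dimension reduction followed by alternative classifiers, evaluated with 5-fold cross-validation), using scikit-learn instead of the study's SPSS Modeler workflow. The feature matrix, labels, and PCA component count are synthetic assumptions, not the study's dataset or settings.

```python
# Sketch (synthetic data): comparing SVM, a neural network, and logistic regression
# for one demographic target, with and without PCA-based dimension reduction,
# under 5-fold cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))      # 64 clickstream attributes (synthetic)
y = rng.integers(0, 2, size=1000)    # e.g. a gender label (synthetic)

models = {
    "SVM (no reduction)": make_pipeline(StandardScaler(), SVC()),
    "PCA + neural net": make_pipeline(StandardScaler(), PCA(n_components=20),
                                      MLPClassifier(max_iter=500)),
    "PCA + logistic regression": make_pipeline(StandardScaler(), PCA(n_components=20),
                                               LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # accuracy per fold
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```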

Health Assessment of the Nakdong River Basin Aquatic Ecosystems Utilizing GIS and Spatial Statistics (GIS 및 공간통계를 활용한 낙동강 유역 수생태계의 건강성 평가)

  • JO, Myung-Hee; SIM, Jun-Seok; LEE, Jae-An; JANG, Sung-Hyun
    • Journal of the Korean Association of Geographic Information Studies, v.18 no.2, pp.174-189, 2015
  • The objective of this study was to reconstruct spatial information from the results of the investigation and evaluation of the health of living organisms, habitat, and water quality at the survey points for the aquatic ecosystem health of the Nakdong River basin, to support rational decision making for the aquatic ecosystem preservation and restoration policies of the basin using spatial analysis techniques, and to present efficient management methods. To analyze the aquatic ecosystem health of the Nakdong River basin, point data were constructed from the position information of each site using the investigation and evaluation results of 250 survey sections. To apply spatial analysis techniques, the data had to be reconstructed into areal data; for this purpose, spatial influence and trends were analyzed using Kriging interpolation (ArcGIS 10.1, Geostatistical Analyst) and converted into areal data. To analyze the spatial distribution characteristics of basin health based on these results, hotspot (Getis-Ord $G^*_i$), LISA (Local Indicator of Spatial Association), and standard deviational ellipse analyses were used. The hotspot analysis showed that the hotspot basins of the biotic indices (TDI, BMI, FAI) were the Andong Dam upstream, Wangpicheon, and the Imha Dam basin, whose biotic index health grades were good, while the coldspot basins were Nakdong River Namhae, the Nakdong River mouth, and the Suyeong River basin. The LISA analysis showed that the exceptional areas were Gahwacheon, the Hapcheon Dam, and the Yeong River upstream basin; these areas had high bio-health indices, but their surrounding basins were low and required management for aquatic ecosystem health. The hotspot basins of the physicochemical factor (BOD) were the Nakdong River downstream basin, the Suyeong River, the Hoeya River, and the Nakdong River Namhae basin, whereas the coldspot basins were the upstream basins of the Nakdong River tributaries, including Andong Dam, Imha Dam, and the Yeong River. The hotspots of the habitat and riverside environment factor (HRI) differed from the hotspots and coldspots of the other factors in the LISA results. In general, the habitat and riverside environment of the Nakdong River mainstream and tributaries, including the Nakdong River upstream, Andong Dam, Imha Dam, and the Hapcheon Dam basin, were in good health. The coldspot basins of the habitat and riverside environment also showed low health indices for the biotic and physicochemical factors, and thus require management of the habitat and riverside environment. The time-series analysis with standard deviational ellipses showed that areas with good aquatic ecosystem health in terms of organisms, habitat, and riverside environment tended to move northward, while the BOD results showed different directions and concentrations by survey year. These aquatic ecosystem health analysis results can provide not only health management information for each survey spot but also information for managing the aquatic ecosystem at the catchment level, for working-level staff as well as for water environment researchers in the future, based on spatial information.
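
A minimal sketch of the Getis-Ord $G^*_i$ hotspot statistic used in the analysis, computed directly from its standard formula on synthetic point values with a fixed-distance spatial weights matrix. This is not the ArcGIS workflow the study used; the coordinates, values, and distance band are invented for illustration.

```python
# Sketch (synthetic values): the Getis-Ord Gi* hotspot statistic for a set of
# points, using binary fixed-distance weights that include the point itself.
import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(50, 2))   # point locations (e.g. survey sections)
x = rng.normal(50, 10, size=50)             # e.g. interpolated health-index values

# Binary weights: neighbours within a fixed distance band, self included (the "star").
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
w = (d <= 3.0).astype(float)

n = len(x)
x_bar, s = x.mean(), x.std(ddof=0)
wx = w @ x
w_sum = w.sum(axis=1)
w_sq_sum = (w ** 2).sum(axis=1)
denom = s * np.sqrt((n * w_sq_sum - w_sum ** 2) / (n - 1))
g_star = (wx - x_bar * w_sum) / denom       # z-valued Gi*; large positive => hotspot

print("Hotspot candidates (Gi* > 1.96):", np.where(g_star > 1.96)[0])
```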

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review, v.16 no.3, pp.161-177, 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assign the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine), is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs, and the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we use a genetic algorithm (GA). GA is known as an efficient and effective search method that simulates biological evolution. By applying genetic operations such as selection, crossover, and mutation, it gradually improves the search results; in particular, the mutation operator prevents GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has popularly been applied to search for optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea and contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, credit rating, was labeled in four classes: 1 (A1), 2 (A2), 3 (A3), and 4 (B and C). For each class, 80 percent of the data were used for training and the remaining 20 percent for validation, and five-fold cross validation was applied to overcome the small sample size. To examine the competitiveness of the proposed model, we also experimented with several comparative models including MDA, MLOGIT, CBR, ANN, and MSVM. For MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial software package for genetic algorithms. The other comparative models were built using statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competing models; moreover, it used fewer independent variables yet achieved higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings, while the finally selected kernel parameter values were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly better than that of the other models, we used the McNemar test. As a result, GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
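
A compact sketch of the GAMSVM idea: a GA chromosome encodes the RBF kernel parameters (C, gamma) and a binary feature-selection mask, and each chromosome is scored by the cross-validated accuracy of a one-against-one multiclass SVM. This is not the paper's LIBSVM/Evolver implementation; the data, chromosome ranges, population size, and GA operators below are simplified assumptions.

```python
# Sketch (synthetic data; not the paper's setup): a small genetic algorithm that
# jointly searches RBF-SVM parameters (C, gamma) and a feature-selection mask,
# with fitness = 5-fold cross-validated accuracy of a one-against-one SVM.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 14))               # 14 financial ratios (synthetic)
y = rng.integers(0, 4, size=300)             # 4 rating classes (synthetic)
N_FEAT, POP, GEN = X.shape[1], 20, 15

def decode(chrom):
    c, g = 10.0 ** chrom[0], 10.0 ** chrom[1]
    mask = chrom[2:] > 0.5                   # bits above 0.5 mean "feature selected"
    return c, g, mask

def fitness(chrom):
    c, g, mask = decode(chrom)
    if not mask.any():
        return 0.0
    clf = SVC(C=c, gamma=g, decision_function_shape="ovo")   # one-against-one
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

# Chromosome: [log10 C in [-2, 3], log10 gamma in [-4, 1], 14 feature bits].
pop = np.column_stack([rng.uniform(-2, 3, POP), rng.uniform(-4, 1, POP),
                       rng.uniform(0, 1, (POP, N_FEAT))])
for _ in range(GEN):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-POP // 2:]]               # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, a.size)                        # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0, 0.1, child.size) * (rng.random(child.size) < 0.1)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("best CV accuracy:", fitness(best), "selected features:", decode(best)[2].nonzero()[0])
```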

Effects of Joining Coalition Loyalty Program : How the Brand affects Brand Loyalty Based on Brand Preference (브랜드 선호에 따라 제휴 로열티 프로그램 가입이 가맹점 브랜드 충성도에 미치는 영향)

  • Rhee, Jin-Hwa
    • Journal of Distribution Research, v.17 no.1, pp.87-115, 2012
  • Introduction: A loyalty program is nowadays one of the most common marketing mechanisms (Lacey & Sneath, 2006; Nunes & Dreze, 2006; Uncles et al., 2003). In recent years, the coalition loyalty program has become more noticeable as one of its advanced forms. In the past, loyalty programs were operated independently by a single product brand or a single retail channel brand. Now, companies using a coalition loyalty program share their programs as one single service, and participating companies continue to benefit from their existing programs as well as from positive spillover effects from the other participating network companies. Instead of earning or spending points at a single retail channel or brand, consumers have more opportunities to use their points and can purchase the products of other participating companies. Issues related to the form of a loyalty program are essentially connected with consumers' perceived convenience of using the program, which matters for distribution companies' strategic marketing plans. Although the coalition loyalty program is a popular corporate marketing strategy, only a few studies have been published on it. Compared with an independent loyalty program, a coalition loyalty program operated by a third-party partnership has the following conditions: companies cannot autonomously modify the structure of the program for their individual benefit, and there is no guarantee that the program will be operated and participated in continuously once the contract is signed. Thus, it is as important to study how a coalition loyalty program affects companies' success, and through what process, as it is to study the effects of independent programs. This study complements the lack of research on coalition loyalty programs. Its purpose is to find out how consumer loyalty toward affiliated brands is affected, and through what causes and mechanisms. Past studies of loyalty programs only provided analyses of variation in performance, whereas this study focuses specifically on the causes of those results. To this end, the study is designed to verify three primary objectives. First, based on the switching-barrier literature (Fornell, 1992; Ping, 1993; Jones et al., 2000) on the causes of loyalty to coalition brands, 'brand attractiveness' and 'brand switching cost' are treated as antecedents, and their effect on changes in 'brand loyalty' is investigated. Second, the study verifies how consumers' perception of and attitude toward a retail brand before joining a coalition loyalty program influence the spillover from program attractiveness and program switching cost to brand attractiveness and brand switching cost after joining. Finally, the study introduces 'prior brand preference' as a variable and examines the relationship between the effects of the coalition loyalty program and the level of prior preference. Hypothesis 1: After joining a coalition loyalty program, a more preferred brand (compared with a less preferred brand) will show a stronger influence of brand attractiveness on brand loyalty. Hypothesis 2: After joining a coalition loyalty program, a less preferred brand (compared with a more preferred brand) will show a stronger influence of brand switching cost on brand loyalty. Hypothesis 3: (1) Brand attractiveness and (2) brand switching cost of a more preferred brand (before joining the coalition loyalty program) will exert more positive effects on (1) program attractiveness and (2) program switching cost of the coalition loyalty program (after joining) than those of a less preferred brand. Hypothesis 4: After joining a coalition loyalty program, (1) brand attractiveness and (2) brand switching cost of a more preferred brand will receive more positive impacts from (1) program attractiveness and (2) program switching cost of the coalition loyalty program than those of a less preferred brand. Hypothesis 5: After joining a coalition loyalty program, (1) brand attractiveness and (2) brand switching cost of a more preferred brand will receive smaller impacts from (1) brand attractiveness and (2) brand switching cost of other brands with different preference levels that joined simultaneously, than those of a less preferred brand. Method: To validate the hypotheses, this study applied an experimental method using a virtual coalition loyalty program scenario with actual brands that consumers have used or can use. The experiment was conducted twice with the same participants. In the first experiment, six coalition brands selected on the basis of prior research were presented, and after choosing a high-preference brand and a low-preference brand, participants rated each brand's attractiveness, switching cost, and loyalty. A one-hour break was provided before the second experiment. In the second experiment, a virtual coalition loyalty program, "SaveBag", was introduced, and participants were informed that "SaveBag" would be a new alliance of the six coalition brands from the first experiment. Brand attractiveness and switching cost for the coalition program were measured, and brand attractiveness and switching cost of the high-preference and low-preference brands were measured in the same way as in the first experiment. Limitations and future research: This study is limited in that the effects of the coalition loyalty program were examined using a virtual scenario rather than actual program data; future studies should compare and analyze CLP panel data to provide more in-depth information. In addition, this study only examined the effectiveness of the coalition loyalty program. Since there are two types of loyalty programs, single and coalition, and the success of a coalition loyalty program depends on brand power in the market and prior customer attitudes, it would be interesting to compare the effects of the two program types in future research.


Impact of Shortly Acquired IPO Firms on ICT Industry Concentration (ICT 산업분야 신생기업의 IPO 이후 인수합병과 산업 집중도에 관한 연구)

  • Chang, YoungBong; Kwon, YoungOk
    • Journal of Intelligence and Information Systems, v.26 no.3, pp.51-69, 2020
  • It is now a stylized fact that a small number of technology firms such as Apple, Alphabet, Microsoft, Amazon, Facebook, and a few others have become larger and dominant players in their industries. Coupled with the rise of these leading firms, we have also observed that a large number of young firms become acquisition targets in their early post-IPO stages. This has resulted in a sharp decline in the number of new entries on public exchanges, even though a series of policy reforms have been promulgated to foster competition through an increase in new entries. Given the observed industry trend in recent decades, a number of studies have reported increased concentration in most developed countries. However, it is less well understood what has caused this increase in industry concentration. In this paper, we uncover the mechanisms by which industries have become concentrated over the last decades by tracing the changes in industry concentration associated with a firm's status change in its early post-IPO stages. To this end, we emphasize the case in which firms are acquired shortly after going public. In particular, with the transition to digital-based economies, it is imperative for incumbent firms to adapt to and keep pace with new ICT and related intelligent systems. For instance, after acquiring a young firm equipped with AI-based solutions, an incumbent firm may respond better to changes in customer tastes and preferences by integrating the acquired AI solutions and analytics skills into multiple business processes. Accordingly, it is not unusual for young ICT firms to become attractive acquisition targets. To examine the role of M&As involving young firms in reshaping the level of industry concentration, we identify a firm's status in its early post-IPO stages over the sample period from 1990 to 2016 as: (i) delisted, (ii) remaining a standalone firm, or (iii) acquired. According to our analysis, firms that have conducted an IPO since the 2000s have been acquired by incumbent firms more quickly than those that went public in earlier periods. We also show a higher acquisition rate for IPO firms in the ICT sector than for their counterparts in other sectors. Our results, based on multinomial logit models, suggest that a large number of IPO firms have been acquired early in their post-IPO lives despite their financial soundness. Specifically, we show that IPO firms are more likely to be acquired than delisted due to financial distress in their early IPO stages when they are more profitable, more mature, or less leveraged. IPO firms with venture capital backing have also become acquisition targets more frequently. As a larger number of firms are acquired shortly after their IPO, our results show increased concentration. While providing limited evidence on the impact of large incumbent firms in explaining the change in industry concentration, our results show that the effect of large firms on industry concentration is pronounced in the ICT sector. This result possibly captures the current trend in which a few tech giants such as Alphabet, Apple, and Facebook continue to increase their market share. In addition, compared with acquisitions of non-ICT firms, the concentration impact of early-stage IPO firms is larger when ICT firms are acquired. Our study makes new contributions. To the best of our knowledge, this is one of only a few studies that link a firm's post-IPO status to associated changes in industry concentration. Although some studies have addressed concentration issues, their primary focus was on market power or proprietary software. In contrast to earlier studies, we are able to uncover the mechanism by which industries have become concentrated by placing emphasis on M&As involving young IPO firms. Interestingly, the concentration impact of IPO firm acquisitions is magnified when a large incumbent firm is involved as the acquirer. This leads us to infer the underlying reasons why industries have become more concentrated in favor of large firms in recent decades. Overall, our study sheds new light on the literature by providing a plausible explanation of why industries have become concentrated.
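
A hedged sketch of the kind of multinomial logit estimation described above, relating a firm's early post-IPO status (delisted, standalone, acquired) to covariates such as profitability, age, leverage, and venture-capital backing. The data, variable names, and coefficients below are synthetic placeholders, not the paper's sample or specification.

```python
# Sketch (synthetic data; variable names illustrative): multinomial logit of
# post-IPO status (0 = delisted, 1 = standalone, 2 = acquired) on firm traits.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "profitability": rng.normal(0.05, 0.1, n),
    "firm_age": rng.integers(1, 30, n),
    "leverage": rng.uniform(0, 1, n),
    "vc_backed": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to the covariates, only so the model runs.
score_acq = 2 * df["profitability"] + 0.5 * df["vc_backed"] - df["leverage"]
df["status"] = np.where(score_acq + rng.normal(0, 1, n) > 0.3, 2,
                        np.where(rng.random(n) < 0.2, 0, 1))

X = sm.add_constant(df[["profitability", "firm_age", "leverage", "vc_backed"]])
model = sm.MNLogit(df["status"], X).fit(disp=False)
print(model.summary())   # coefficients are log-odds relative to the base outcome
```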

Analysis on Factors Influencing Welfare Spending of Local Authority : Implementing the Detailed Data Extracted from the Social Security Information System (지방자치단체 자체 복지사업 지출 영향요인 분석 : 사회보장정보시스템을 통한 접근)

  • Kim, Kyoung-June; Ham, Young-Jin; Lee, Ki-Dong
    • Journal of Intelligence and Information Systems, v.19 no.2, pp.141-156, 2013
  • Research on the welfare services of local governments in Korea has tended to focus on isolated issues such as the disabled, childcare, and aging (Kang, 2004; Jung et al., 2009). Lately, local officials have come to realize that they need more comprehensive welfare services for all residents, not only for the above-mentioned groups, yet studies taking a focused-group approach remain the main research stream for various reasons (Jung et al., 2009; Lee, 2009; Jang, 2011). The Social Security Information System is an information system that comprehensively manages 292 welfare benefits provided by 17 ministries and about 40 thousand welfare services provided by 230 local authorities in Korea; its purpose is to improve the efficiency of the social welfare delivery process. The study of local government expenditure has been on the rise over the last few decades since the restart of local autonomy, but these studies have limitations in data collection. Measurement of a local government's welfare effort (spending) has primarily relied on per-capita expenditures or budgets set aside for welfare. This practice of using a monetary value per capita as a proxy for welfare effort is based on the assumption that expenditure is directly linked to welfare effort (Lee et al., 2007). This expenditure/budget approach commonly uses the total welfare amount or a percentage figure as the dependent variable (Wildavsky, 1985; Lee et al., 2007; Kang, 2000). However, using the actual amount spent or a percentage figure as a dependent variable has limitations: since the budget or expenditure is greatly influenced by the total budget of a local government, relying on such monetary values may inflate or deflate the true welfare effort (Jang, 2012). In addition, government budgets usually contain a large amount of administrative cost, such as salaries for local officials, which is largely unrelated to actual welfare expenditure (Jang, 2011). This paper used local government welfare service data from the detailed data sets linked to the Social Security Information System. Its purpose is to analyze the factors that affected the self-funded social welfare spending of the 230 local authorities in 2012, applying a multiple regression-based model to the pooled financial data from the system. In our research model, we use a local government's ratio of welfare budget to total budget (%) as the measurement of its welfare effort (spending), excluding central government subsidies or support used for local welfare services, because central government support does not truly reflect the welfare effort of a local government. The dependent variable is thus the volume of self-funded welfare spending, and the independent variables fall into three categories: socio-demographic characteristics, the local economy, and the fiscal capacity of the local government. The paper categorized local authorities into three groups, districts, cities, and suburb areas, and used a dummy variable as a control for the local political factor. The analysis demonstrated that the volume of spending on welfare services is commonly influenced by the ratio of welfare budget to total local budget, the infant population, the financial self-reliance ratio, and the level of unemployment. Interestingly, the influential factors differ by the size of the local government. In the analysis of the determinants of self-funded welfare spending, we found significant effects of local government finance characteristics (the degree of financial independence, the financial independence rate, and the ratio of the social welfare budget), of the regional economy (the job opening-to-application ratio), and of population characteristics (the ratio of infants). The results mean that local authorities should adopt differentiated welfare strategies according to their conditions and circumstances. This paper has successfully identified significant factors influencing the welfare spending of local governments in Korea.
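
A minimal sketch of the pooled multiple regression described above: the welfare budget ratio regressed on socio-demographic, local-economy, and fiscal-capacity variables, with a dummy for the type of local government. The data and variable names below are synthetic assumptions, not the study's actual variables from the Social Security Information System.

```python
# Sketch (synthetic data; variable names illustrative): pooled OLS of the
# self-funded welfare budget ratio on socio-demographic, economic and fiscal
# covariates, with a dummy for local-government type.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 230  # number of local authorities
df = pd.DataFrame({
    "welfare_ratio": rng.uniform(5, 40, n),        # welfare budget / total budget (%)
    "infant_pop": rng.uniform(1, 8, n),            # share of infants (%)
    "unemployment": rng.uniform(1, 6, n),          # unemployment rate (%)
    "fiscal_independence": rng.uniform(10, 80, n), # financial independence rate (%)
    "is_district": rng.integers(0, 2, n),          # dummy: district vs. city/suburb
})

model = smf.ols("welfare_ratio ~ infant_pop + unemployment + "
                "fiscal_independence + C(is_district)", data=df).fit()
print(model.summary())
```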

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye; Kim, Min-Seung; Lee, Chan-Ho; Choi, Jung-Hwan; Lee, Jeong-Hee; Sung, Tae-Eung
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.131-145, 2020
  • In line with the trend of industrial innovation, IoT technology applied in a variety of fields is emerging as a key element in the creation of new business models and the provision of user-friendly services when combined with big data. The data accumulated from Internet-of-Things (IoT) devices are being used in many ways to build convenience-oriented smart systems, since they enable customized intelligent services through analysis of user environments and patterns. Recently, such technology has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as solving traffic and crime problems using CCTV. In particular, when planning underground services or establishing a passenger flow control information system to enhance the convenience of citizens and commuters under the congestion of public transportation such as subways and urban railways, it is necessary to comprehensively consider both the ease of securing real-time service data and the stability of security. However, previous studies that utilize image data have limitations in object detection performance under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues because they do not require identification of individuals, and can therefore be used effectively to build intelligent public services for unspecified people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily, with the temperature data measured by the sensors transmitted in real time. The experimental environment for collecting the real-time sensor data was set up at the equally spaced midpoints of a 4×4 grid on the ceiling of subway entrances where the actual passenger flow is high, and the temperature changes of objects entering and leaving the detection spots were measured. The measured data went through preprocessing in which reference values for the 16 areas were set and the differences between the temperatures of the 16 areas and their reference values were calculated per unit of time; this corresponds to a methodology that maximizes the detected movement within the detection area. In addition, the data were scaled up by a factor of 10 to reflect temperature differences by area more sensitively: for example, if the temperature collected from a sensor at a given time was 28.5℃, the value was changed to 285 for the analysis. The data collected from the sensors thus have the characteristics of both time series data and image data with 4×4 resolution. Reflecting the characteristics of the measured and preprocessed data, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), that combines a CNN, which performs well for image classification, with an LSTM, which is especially suitable for analyzing time series data. In this study, the CNN-LSTM algorithm is used to predict the number of passing persons in one of the 4×4 detection areas. We verified the validity of the proposed model through a performance comparison with other artificial intelligence algorithms, namely the Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). The experimental results show that the proposed CNN-LSTM hybrid model has the best predictive performance compared with MLP, LSTM, and RNN-LSTM. By utilizing the proposed devices and models, it is expected that various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response services, can be provided without legal issues concerning personal information. However, the data were collected from only one side of the entrances, and data collected over a short period were used for prediction, so verification of the model's applicability in other environments remains a limitation. In the future, if experimental data are collected in various environments or training data are supplemented with measurements from other sensors, the proposed model is expected to become more reliable.
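
A minimal sketch of a hybrid CNN-LSTM of the kind described above: a small CNN is applied to each 4×4 temperature-difference frame via TimeDistributed layers, and an LSTM runs across the frame sequence to predict a passenger count. The data are synthetic and the layer sizes, sequence length, and loss are assumptions, not the paper's exact architecture or hyperparameters.

```python
# Sketch (synthetic data; layer sizes assumed): Keras CNN-LSTM that maps a
# sequence of 4x4 temperature-difference frames to a passing-person count.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
T = 20                                                       # frames per sequence
X = rng.normal(size=(500, T, 4, 4, 1)).astype("float32")     # temperature-difference frames
y = rng.integers(0, 5, size=500).astype("float32")           # passing-person counts

model = models.Sequential([
    layers.Input(shape=(T, 4, 4, 1)),
    layers.TimeDistributed(layers.Conv2D(8, (2, 2), activation="relu", padding="same")),
    layers.TimeDistributed(layers.Flatten()),                # one feature vector per frame
    layers.LSTM(32),                                         # temporal aggregation
    layers.Dense(1),                                         # regression output: count
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```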