• Title/Summary/Keyword: Open System


A Study on the Revitalization of Tourism Industry through Big Data Analysis (한국관광 실태조사 빅 데이터 분석을 통한 관광산업 활성화 방안 연구)

  • Lee, Jungmi;Liu, Meina;Lim, Gyoo Gun
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.149-169 / 2018
  • Korea is currently accumulating a large amount of data in public institutions under the public data open policy and the "Government 3.0" initiative, and a great deal of data has been accumulated in the tourism field in particular. However, academic discussion that utilizes these tourism data is still limited. Moreover, the openness of data on restaurants, hotels, and online tourism information, as well as the use of SNS big data in tourism, remains limited, so utilization through tourism big data analysis is still low. In this paper, we analyzed the factors influencing foreign tourists' satisfaction with Korea from numerical survey data, using data mining techniques and R programming. We sought ways to revitalize the tourism industry by analyzing about 36,000 records of the "Survey on the actual situation of foreign tourists from 2013 to 2015" conducted by the Korea Culture & Tourism Research Institute. To do this, we analyzed the factors that strongly influence foreign tourists' 'satisfaction', 'revisit intention', and 'recommendation', and then examined the practical influence of those factors. As a first step, we integrated the foreign tourist survey data from 2013 to 2015 stored in the tourist information system, eliminated variables inconsistent with the research purpose, and modified some variables to improve the accuracy of the analysis. We then analyzed the factors affecting the dependent variables using the data mining methods of IBM SPSS Modeler 16.0: decision trees (C5.0, CART, CHAID, QUEST), an artificial neural network, and logistic regression. The seven variables with the greatest effect on each dependent variable were derived. The seven major variables influencing 'overall satisfaction' were sightseeing spot attraction, food satisfaction, accommodation satisfaction, traffic satisfaction, guide service satisfaction, number of visited places, and country; among them, food satisfaction and sightseeing spot attraction had the greatest influence. The seven variables with the greatest influence on 'revisit intention' were country, travel motivation, activity, food satisfaction, best activity, guide service satisfaction, and sightseeing spot attraction; the most influential were food satisfaction and Korean Wave-related travel motivation. Lastly, the seven variables with the greatest influence on 'recommendation intention' were country, sightseeing spot attraction, number of visited places, food satisfaction, activity, tour guide service satisfaction, and cost; the most influential were country, sightseeing spot attraction, and food satisfaction. In addition, to examine the influence of each independent variable more closely, we estimated effect sizes with R. For overall satisfaction, food satisfaction and sightseeing spot attraction had higher coefficients and a greater effect than the other influential variables. For revisit intention, Korean Wave travel motivation had a higher $\beta$ value than the other variables.
Policies that strengthen Korean Wave-related tourist attractions will therefore be needed to turn interest into actual revisits. Lastly, recommendation showed the same pattern as overall satisfaction: sightseeing spot attraction and food satisfaction had higher $\beta$ values than the other variables. From this analysis, we found that 'food satisfaction' and 'sightseeing spot attraction' were the common factors influencing all three dependent variables ('overall satisfaction', 'revisit intention', and 'recommendation') and that they significantly affected satisfaction with travel in Korea. The purpose of this study is to examine how to revitalize inbound tourism in Korea through big data analysis. The results are expected to serve as basic data for analyzing tourism data, establishing effective tourism policy, and developing plans that contribute to the future development of tourism in Korea.
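
A minimal sketch (not the authors' SPSS Modeler / R workflow) of the kind of analysis described above: fit a CART-style decision tree and a logistic regression to survey-style data and rank variable importance. The data here are randomly generated stand-ins for the survey variables named in the abstract.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "sightseeing_attraction": rng.integers(1, 6, n),   # 5-point scale, illustrative
    "food_satisfaction": rng.integers(1, 6, n),
    "accommodation_satisfaction": rng.integers(1, 6, n),
    "traffic_satisfaction": rng.integers(1, 6, n),
    "guide_service_satisfaction": rng.integers(1, 6, n),
    "num_places_visited": rng.integers(1, 10, n),
    "country": rng.choice(["CN", "JP", "US"], n),
})
# Illustrative target: overall satisfaction driven mainly by food and sightseeing scores.
y = ((df["food_satisfaction"] + df["sightseeing_attraction"] + rng.normal(0, 1, n)) > 6).astype(int)
X = pd.get_dummies(df, columns=["country"])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)   # CART-style tree
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Rank candidate drivers, analogous to the paper's "top seven variables" lists.
print(pd.Series(tree.feature_importances_, index=X.columns).sort_values(ascending=False).head(7))
print("tree acc:", tree.score(X_te, y_te), "logit acc:", logit.score(X_te, y_te))
```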

Comparison of Naphthalene Degradation Efficiency and OH Radical Production by the Change of Frequency and Reaction Conditions of Ultrasound (초음파 주파수 및 반응조건 변화에 따른 나프탈렌 분해효율과 OH 라디칼의 발생량 비교)

  • Park, Jong-Sung;Park, So-Young;Oh, Je-Ill;Jeong, Sang-Jo;Lee, Min-Ju;Her, Nam-Guk
    • Journal of Korean Society of Environmental Engineers / v.31 no.2 / pp.79-89 / 2009
  • Naphthalene is a volatile, hydrophobic, and possibly carcinogenic compound known to have severe detrimental effects on aquatic ecosystems. Our research examined the effects of various operating conditions (temperature, pH, initial concentration, and the frequency and type of ultrasound) on the sonochemical degradation of naphthalene and on OH radical production. The MDL (method detection limit) determined by LC/FLD (1200 series, Agilent) with a C-18 reversed-phase column was 0.01 ppm. Naphthalene vapor produced during ultrasound irradiation was below 0.05 ppm. The naphthalene sonodegradation efficiencies measured with the reactor cover open and closed differed by less than 1%. Increasing the reaction temperature from $15^{\circ}C$ to $40^{\circ}C$ reduced the naphthalene degradation efficiency ($15^{\circ}C$: 95% ${\rightarrow}$ $40^{\circ}C$: 85%), while lowering the pH from 12 to 3 increased it (pH 12: 84% ${\rightarrow}$ pH 3: 95.6%). The pseudo first-order rate constant ($k_1$) of naphthalene sonodegradation decreased as the initial naphthalene concentration increased (2.5 ppm: $27.3{\times}10^{-3}\;min^{-1}$ ${\rightarrow}$ 10 ppm: $19.3{\times}10^{-3}\;min^{-1}$). The degradation efficiency of 2.5 ppm naphthalene under 28 kHz ultrasonic irradiation was 1.46 times that under 132 kHz (132 kHz: 56%, 28 kHz: 82.7%), and its $k_1$ increased by 2.3 times (132 kHz: $2.4{\times}10^{-3}\;min^{-1}$, 28 kHz: $5.0{\times}10^{-3}\;min^{-1}$). The $H_2O_2$ concentration measured 10 minutes after exposure to 132 kHz ultrasound was 7.2 times that measured at 28 kHz, but after 90 minutes the difference was only about 10% (the $H_2O_2$ concentration at 28 kHz being 1.1 times that at 132 kHz). The $H_2O_2$ concentrations from 2.5 ppm naphthalene after 90 minutes of sonication at 24 kHz and 132 kHz were lower by 0.05 and 0.1 ppm, respectively, than those measured in irradiated M.Q. water (no naphthalene added). The degradation efficiencies of the horn-type (24 kHz) and bath-type (28 kHz) ultrasound were 87% and 82.7%, respectively, and the corresponding $k_1$ values were $22.8{\times}10^{-3}\;min^{-1}$ and $18.7{\times}10^{-3}\;min^{-1}$. Using the multi-frequency, mixed-type ultrasound system (28 kHz bath type + 24 kHz horn type) simultaneously gave a combined efficiency of 88.1%, while the $H_2O_2$ concentration increased 3.5 times (28 kHz + 24 kHz: 2.37 ppm, 24 kHz: 0.7 ppm). Therefore, the multi-frequency, mixed-type ultrasound system might be used most effectively for removing substances that are easily oxidized by the OH radical.
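
A minimal sketch (assumed, not from the paper) of how a pseudo first-order rate constant $k_1$ like those quoted above can be estimated: for $C(t) = C_0 e^{-k_1 t}$, a linear fit of $\ln(C_0/C)$ against time gives $k_1$ as the slope. The concentrations below are illustrative values, not the paper's measurements.

```python
import numpy as np

t = np.array([0, 15, 30, 45, 60, 90])             # sonication time, min (illustrative)
c = np.array([2.5, 1.7, 1.2, 0.85, 0.60, 0.30])   # naphthalene concentration, ppm (illustrative)

y = np.log(c[0] / c)                 # ln(C0/C)
k1, intercept = np.polyfit(t, y, 1)  # slope = k1 in min^-1
print(f"k1 = {k1 * 1e3:.1f} x 10^-3 min^-1")
```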

A Study on Differences of Contents and Tones of Arguments among Newspapers Using Text Mining Analysis (텍스트 마이닝을 활용한 신문사에 따른 내용 및 논조 차이점 분석)

  • Kam, Miah;Song, Min
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.53-77 / 2012
  • This study analyzes the differences in contents and tones of argument among three major Korean newspapers: the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo. It is commonly accepted that Korean newspapers explicitly deliver their own tone of argument when they cover sensitive issues and topics. This can be problematic if readers are not aware of a newspaper's tone of argument, because both the content and the tone can easily influence them. It is therefore desirable to have a tool that can inform readers of the tone of argument a newspaper takes. This study presents the results of clustering and classification techniques as part of a text mining analysis. We focus on six main subjects in the newspapers, Culture, Politics, International, Editorial-opinion, Eco-business, and National issues, and attempt to identify differences and similarities among the papers. The basic unit of the text mining analysis is a paragraph of a news article. The study uses a keyword-network analysis tool and visualizes relationships among keywords to make the differences easier to see. Newspaper articles were gathered from KINDS, the Korean Integrated News Database System, which preserves articles of the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo and is open to the public. About 3,030 articles from 2008 to 2012 were used. The International, National issues, and Politics sections were gathered with specific issue keywords: 'Nuclear weapon of North Korea' for the International section, '4-major-river' for the National issues section, and 'Tonghap-Jinbo Dang' for the Politics section. All articles from April 2012 to May 2012 in the Eco-business, Culture, and Editorial-opinion sections were also collected. All collected data were edited into paragraphs. We removed stop-words using the Lucene Korean module, calculated keyword co-occurrence counts from the paired co-occurrence list of keywords in each paragraph, and built a co-occurrence matrix from the list. Once the co-occurrence matrix was built, we used the cosine coefficient matrix as input for PFNet (Pathfinder Network). To analyze the three newspapers and find the significant keywords in each paper, we examined the 10 highest-frequency keywords and the keyword networks of the 20 highest-frequency keywords, in order to closely examine the relationships and show a detailed network map among keywords. We used NodeXL software to visualize the PFNet. After drawing the networks, we compared the results with the classification results. Classification was first performed to identify how the tone of argument of each newspaper differs from the others. Then, to analyze tones of argument, all paragraphs were divided into two types, positive tone and negative tone. To identify and classify the tones of all the collected paragraphs and articles, a supervised learning technique was used: the Naïve Bayes classifier algorithm provided in the MALLET package classified all paragraphs in the articles. After classification, precision, recall, and F-value were used to evaluate the classification results.
Based on the results of this study, three subjects, Culture, Eco-business, and Politics, showed differences in contents and tones of argument among the three newspapers. In addition, for the National issues section, the tones of argument on the 4-major-rivers project differed from one another. The three newspapers appear to have their own specific tones of argument in those sections. The keyword networks also showed different shapes for the same period and the same section, which means that the frequently appearing keywords differ and the contents are composed of different keywords. The positive-negative classification showed that it is possible to classify newspapers' tones of argument relative to one another. These results indicate that the approach in this study is promising as a new tool for identifying the different tones of argument of newspapers.
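
A minimal sketch (assumed, not the authors' pipeline) of the co-occurrence step described above: count keyword co-occurrences per paragraph, then compute a cosine coefficient matrix of the kind that could feed a Pathfinder-network (PFNet) visualization. The tokenized paragraphs are toy placeholders, not the collected articles.

```python
import numpy as np
from itertools import combinations

# Toy paragraphs, already tokenized with stop-words removed (illustrative only).
paragraphs = [
    ["river", "project", "budget"],
    ["river", "budget", "environment"],
    ["nuclear", "weapon", "talks"],
]

vocab = sorted({w for p in paragraphs for w in p})
idx = {w: i for i, w in enumerate(vocab)}
co = np.zeros((len(vocab), len(vocab)))

for p in paragraphs:
    for a, b in combinations(sorted(set(p)), 2):   # pairwise co-occurrence within one paragraph
        co[idx[a], idx[b]] += 1
        co[idx[b], idx[a]] += 1

# Cosine coefficient matrix between keyword co-occurrence profiles (PFNet input).
norms = np.linalg.norm(co, axis=1, keepdims=True)
norms[norms == 0] = 1.0                            # guard against isolated keywords
cosine = (co @ co.T) / (norms * norms.T)
print(vocab)
print(np.round(cosine, 2))
```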

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big data and artificial intelligence fields, providing a great opportunity for the public with respect to the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public financial-policy institution in Korea and is strongly committed to backing export companies through various systems. Nevertheless, there are still few cases of business models realized through big data analysis. In this situation, this paper aims to develop a new business model that can be applied to the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare performance among the predictive models: Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis, which is still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and a logit model; and Kim and Kim (2001) utilized artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on examining the predicted probability of default for each sample case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper show a classification accuracy of about 70% on the entire sample: LightGBM shows the highest accuracy of 71.1% and the logit model the lowest of 69%. However, we find that these results are open to multiple interpretations. In the business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy after splitting the predicted probability of default into ten equal intervals.
When we examine the classification accuracy for each interval, the logit model has the highest accuracy of 100% for the 0~10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90~100% interval. On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results: they have higher accuracy for both the 0~10% and 90~100% intervals but lower accuracy around the 50% interval. As for the distribution of samples over the predicted probability of default, both the LightGBM and XGBoost models place relatively large numbers of samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy with a small number of cases, LightGBM or XGBoost could be more desirable models because they classify large numbers of cases into the two extreme intervals of predicted default probability, even allowing for their relatively lower classification accuracy. Considering the importance of type 2 error and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression shows the worst performance. However, each predictive model has a comparative advantage under different evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, we can construct more comprehensive ensemble models that contain multiple machine learning classifiers and use majority voting to maximize overall performance.
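
A minimal sketch (assumed) of the evaluation described above: split the predicted probability of default into ten equal intervals and report classification accuracy and sample counts per interval. The names `y_true`, `p_hat`, `y_test`, `model`, and `X_test` are hypothetical placeholders.

```python
import numpy as np
import pandas as pd

def accuracy_by_decile(y_true, p_hat, threshold=0.5):
    """Classification accuracy and sample count per 10%-wide interval of predicted probability."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(p_hat) >= threshold).astype(int)
    bins = pd.cut(p_hat, bins=np.linspace(0, 1, 11), include_lowest=True)
    out = pd.DataFrame({"bin": bins, "correct": (y_pred == y_true)})
    return out.groupby("bin", observed=False)["correct"].agg(accuracy="mean", n="size")

# Illustrative usage with a fitted classifier's predictions:
# report = accuracy_by_decile(y_test, model.predict_proba(X_test)[:, 1])
# print(report)
```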

Lived experience of mothers who have child with cerebral palsy (뇌성마비아 어머니의 경험)

  • Lee Hwa Za;Kim Yee Soon;Lee Gee Won;Gwan Soo Za;Kang In Soon;An Hea Gyung
    • Child Health Nursing Research / v.2 no.1 / pp.93-111 / 1996
  • The purpose of this study is to identify the lived experience of mothers who have children with cerebral palsy, in order to understand their suffering and to find nursing interventions for disabled children and their mothers. For this purpose, ten mothers who were willing to cooperate with the research were selected from among mothers whose children with cerebral palsy currently use the municipal facilities for children with cerebral palsy. Data were collected from October 4, 1994 to December 31, 1994 by asking the mothers unstructured, open-ended questions; the interviews were tape-recorded with the interviewees' permission to prevent loss of content. The data were analyzed on the basis of the phenomenological approach using Colaizzi's method, and the credibility of the results was confirmed through individual checking with the interviewed mothers. The results of this study are as follows: 1. When a mother is first informed of the diagnosis of cerebral palsy in her child, she often misses the crucial timing for proper treatment because the diagnosis is delivered with the doctor's indifference and an apparently inactive, matter-of-fact attitude. At first she doubts the diagnosis and tries to attribute it to some unknown genetic cause, and then she quickly wants to deny that her child really has cerebral palsy. The reality is too much for her to accept as it is, and she refuses to believe her child is abnormal; she may even attempt to depend on the power of God for a solution. 2. The mother who goes through this kind of experience devotes herself entirely to the treatment and care of the child and ignores her own life and happiness. At the same time, she feels sorry for her other children, believing they do not receive enough care and concern, and she also feels sorry for the sick child when the child's brothers or sisters show special concern out of sympathy. She is saddened and dissatisfied that the child is growing up with a disability and that people around them show inappropriate attitudes. Likewise, she is discontented with her husband's lack of concern about the child's treatment, and she believes the health care system in this society is not fulfilling its purpose. In her distress and anxiety she constantly feels the need for competent consultants, and she is angry that her child is treated as abnormal; she tries to hide the child from other people and keep him or her out of sight, if possible. Although her relationship with her husband is not harmonious, she is happy when he shows affection for the child, and she feels relieved and thankful when relatives do not mention the child's condition. Since the child's overall health is continuously unstable, requiring her constant readiness for an emergency, she feels guilty toward the family about the child's illness, as if it were her own fault to have borne such a child, and she feels morally and financially responsible for the child. Because her life is centered on taking care of the child, she cannot afford to enjoy her own life and happiness.
She is a lonely, fatigued mother without proper relationships with the people around her. With this sense of guilt and responsibility as the mother of a child with an unusual illness, she has no choice but to grieve over a destiny from which she cannot escape. 3. Nevertheless, the mother of a child with cerebral palsy does not easily give up hope of getting her child cured, and she believes that in the long run, though more slowly than she hopes, her child will eventually be cured and become a normal child someday. This hope is sustained by the mother's strong faith, which comes from observing the progress of other, similar children who are getting better. Sometimes she is encouraged in this faith by other mothers who share the same painful experiences, believing that her child will improve even more rapidly than other children with the same condition. Full of hope, she painstakingly waits for the child's healing. Moreover, she plans to have another child, thinking that only the patient's brothers and sisters can truly understand and look after the patient. However, when she notices that other children under treatment are not progressing hopefully, she is distressed by the thought that her own child may never get well, and she worries that the patient's brother or sister might be born with the same disease; she is as discouraged from having another baby as she is encouraged to. She is also troubled by the thought that, if she has another baby, she will be forced to neglect the patient child, especially when she does not have an extra hand or a reliable person to help her care for the patient.


Sequence Stratigraphy of the Yeongweol Group (Cambrian-Ordovician), Taebaeksan Basin, Korea: Paleogeographic Implications (전기고생대 태백산분지 영월층군의 순차층서 연구를 통한 고지리적 추론)

  • Kwon, Y.K.
    • Economic and Environmental Geology / v.45 no.3 / pp.317-333 / 2012
  • The Yeongweol Group is a Lower Paleozoic mixed carbonate-siliciclastic sequence in the Taebaeksan Basin of Korea, and consists of five lithologic formations: Sambangsan, Machari, Wagok, Mungok, and Yeongheung, in ascending order. Sequence stratigraphic interpretation of the group indicates that initial flooding in the Yeongweol area of the Taebaeksan Basin produced the basal siliciclastic-dominated sequences of the Sambangsan Formation during the Middle Cambrian. The accelerated sea-level rise in the late Middle to early Late Cambrian generated a mixed carbonate-siliciclastic slope or deep ramp sequence of shale, grainstone, and breccia intercalations, representing the lower part of the Machari Formation. The continued rise of sea level in the Late Cambrian created substantial accommodation space and activated a subtidal carbonate factory, forming a carbonate-dominated subtidal platform sequence in the middle and upper parts of the Machari Formation. The overlying Wagok Formation might originally have been a ramp carbonate sequence of subtidal ribbon carbonates and marls with conglomerates, deposited during the normal rise of relative sea level in the late Late Cambrian. The formation was affected by unstable dolomitization shortly after deposition, during the relative sea-level fall in the latest Cambrian or earliest Ordovician, and was subsequently dolomitized extensively under deep burial diagenetic conditions. During the Early Ordovician (Tremadocian), the global transgression (viz. Sauk) continued, and subtidal ramp deposition was sustained on the Yeongweol platform, forming the Mungok Formation. The formation is overlain by the peritidal carbonates of the Yeongheung Formation, which were stacked by cyclic sedimentation during the Early to Middle Ordovician (Arenigian to Caradocian). The lithologic change from subtidal ramp to peritidal facies is preserved in the uppermost part of the Mungok Formation. The transition between the Sauk and Tippecanoe sequences is recognized within the middle part of the Yeongheung Formation as a minimum accommodation zone. The global eustatic fall in the earliest Middle Ordovician and the ensuing rise of relative sea level during the Darriwilian to Caradocian produced broadly prograding peritidal carbonates of shallowing-upward cyclic successions within the Yeongheung Formation. The reconstructed relative sea-level curve of the Yeongweol platform is very similar to that of the Taebaek platform. This reveals that the Yeongweol platform experienced the same tectonic movements as the Taebaek platform, and consequently that both platform sequences were probably located, either together or separately, along the margin of the North China platform. The significant differences in lithologic and stratigraphic successions imply that the Yeongweol platform was located far from the Taebaek platform and was not associated with it as a single depositional system. The Yeongweol platform was probably located in relatively open shallow marine environments, whereas the Taebaek platform was part of the restricted embayments. During the late Paleozoic to early Mesozoic amalgamation of the Korean massifs, the Yeongweol platform was probably pushed against the Taebaek platform by complex movements, forming the fragmented platform sequences of the Taebaeksan Basin.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, achieved a huge victory against Lee Sedol. Many people thought machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has drawn attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems; it shows especially good performance in image recognition and in high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is hard to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The methods for applying each deep learning technique were as follows. The CNN algorithm reads adjacent values from a specific position and recognizes their features, but for business data the distance between fields does not matter because the fields are usually independent. In this experiment we therefore set the filter size of the CNN to the number of fields, so that the whole record is learned at once, and added a hidden layer to make the decision based on the additional features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position.
For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. From the experiment we obtained several findings. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models, which is interesting because CNNs performed well in a binary classification problem to which they have rarely been applied, in addition to the fields where their effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
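
A minimal sketch (assumed, not the authors' exact architecture) of the CNN-with-dropout setup described above: a Conv1D layer whose kernel spans all input fields at once, followed by a dense hidden layer and a dropout rate of 0.5. The field count and training call are hypothetical.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

n_fields = 16  # assumed number of input variables (age, occupation, loan status, ...)

model = models.Sequential([
    layers.Input(shape=(n_fields, 1)),                            # each record as a 1-D "sequence" of fields
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),  # kernel covers every field at once
    layers.Flatten(),
    layers.Dense(64, activation="relu"),                          # extra hidden layer for the decision
    layers.Dropout(0.5),                                          # neurons dropped with probability 0.5
    layers.Dense(1, activation="sigmoid"),                        # binary target: opens an account or not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Illustrative training call (X_train, y_train are hypothetical):
# model.fit(X_train.reshape(-1, n_fields, 1), y_train, epochs=20, validation_split=0.2)
```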

A New Exploratory Research on Franchisor's Provision of Exclusive Territories (가맹본부의 배타적 영업지역보호에 대한 탐색적 연구)

  • Lim, Young-Kyun;Lee, Su-Dong;Kim, Ju-Young
    • Journal of Distribution Research / v.17 no.1 / pp.37-63 / 2012
  • In the franchise business, exclusive sales territory (EST in the tables) protection is a very important issue from economic, social, and political points of view. It affects the growth and survival of both franchisor and franchisee and often raises social and political conflicts. When the franchisee is not familiar with the related laws and regulations, the franchisor has a good chance of exploiting this. Exclusive sales territory protection by a manufacturer and its distributors (wholesalers or retailers) means a sales area restriction under which only certain distributors have the right to sell products or services. A distributor who has been granted an exclusive sales territory can protect its own territory but may be prohibited from entering other regions. Even though exclusive sales territory is a critical problem in the franchise business, there is little rigorous research on its causes, results, evaluation, and future direction based on empirical data. This paper addresses the problem not only in terms of logical and nomological validity but also through empirical validation. In pursuing an empirical analysis, we take into account the difficulties of real data collection and of statistical analysis techniques. We use a set of disclosure document data collected by the Korea Fair Trade Commission, instead of the conventional survey method, which is often criticized for its measurement error. Existing theories about exclusive sales territory can be summarized into two groups, as shown in the table below. The first concerns the effectiveness of exclusive sales territory from both the franchisor's and the franchisee's points of view: the outcome can be positive for franchisors but negative for franchisees, and positive in terms of sales but negative in terms of profit, so variables and viewpoints should be set properly. The other concerns the motive or reason why exclusive sales territory is protected; the reasons can be classified into four groups: industry characteristics, franchise system characteristics, the capability to maintain an exclusive sales territory, and strategic decisions. Within these four groups there are more specific variables and theories, as below. Based on these theories, we develop nine hypotheses, which are briefly shown in the last table below together with the results. To validate the hypotheses, data were collected from the government (FTC) homepage, which is an open source. The sample consists of 1,896 franchisors and contains about three years of operating data, from 2006 to 2008. Within the sample, 627 franchisors have an exclusive sales territory protection policy, and those with such a policy are not evenly distributed over the 19 representative industries. Additional data were collected from other government agency homepages, such as Statistics Korea, and we combined data from various secondary sources to create meaningful variables, as shown in the table below. All variables were dichotomized by mean or median split if they were not inherently dichotomous, since each hypothesis is composed of multiple variables and there is no solid statistical technique that incorporates all these conditions to test the hypotheses. This paper uses a simple chi-square test because the hypotheses and theories are built upon quite specific conditions such as industry type, economic condition, company history, and various strategic purposes.
It is almost impossible to find samples that satisfy all of these conditions, and they cannot be manipulated in experimental settings. More advanced statistical techniques work well on clean data without exogenous variables, but not on real, complex data. The chi-square test is applied by grouping samples into four cells on two criteria: whether they use exclusive sales territory protection or not, and whether they satisfy the conditions of each hypothesis. The test then checks whether the proportion of sample franchisors that satisfy the conditions and protect exclusive sales territory significantly exceeds the proportion that satisfy the conditions but do not protect it. In fact, the chi-square test is equivalent to the Poisson regression, which allows more flexible application. As a result, only three hypotheses are accepted. When attitude toward risk is high, so that the royalty fee is determined according to sales performance, EST protection produces poor results, as expected. When the franchisor protects the EST in order to recruit franchisees more easily, EST protection produces better results. Also, when EST protection is intended to improve the efficiency of the franchise system as a whole, it shows better performance: high efficiency is achieved because the EST prevents free riding by franchisees who would exploit others' marketing efforts, encourages proper investments, and distributes franchisees evenly across multiple regions. The other hypotheses are not supported by the significance tests. Exclusive sales territory should be protected from proper motives and administered for mutual benefit. Legal restrictions driven by a government agency like the FTC could be misused and cause misunderstandings, so there needs to be more careful monitoring of real practices and more rigorous study by both academics and practitioners.
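
A minimal sketch (assumed) of the chi-square test described above: franchisors are cross-tabulated by whether they protect an exclusive sales territory and whether they satisfy a hypothesis condition, and the table is tested for independence. The counts are illustrative, not the paper's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

#                      satisfies condition | does not satisfy
table = np.array([[180, 120],   # protects exclusive sales territory
                  [310, 410]])  # does not protect

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
```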


Mineralogy and Geochemistry of the Jeonheung and Oksan Pb-Zn-Cu Deposits, Euiseong Area (의성(義城)지역 전흥(田興) 및 옥산(玉山) 열수(熱水) 연(鉛)-아연(亞鉛)-동(銅) 광상(鑛床)에 관한 광물학적(鑛物學的)·지화학적(地化學的) 연구(硏究))

  • Choi, Seon-Gyu;Lee, Jae-Ho;Yun, Seong-Taek;So, Chil-Sup
    • Economic and Environmental Geology / v.25 no.4 / pp.417-433 / 1992
  • Lead-zinc-copper deposits of the Jeonheung and Oksan mines in the Euiseong area occur as hydrothermal quartz and calcite veins that crosscut Cretaceous sedimentary rocks of the Gyeongsang Basin. The mineralization occurred in three distinct stages (I, II, and III): (I) a quartz-sulfide-sulfosalt-hematite mineralization stage; (II) a barren quartz-fluorite stage; and (III) a barren calcite stage. Stage I ore minerals comprise pyrite, chalcopyrite, sphalerite, galena, and Pb-Ag-Bi-Sb sulfosalts. The mineralogies of the two mines are different, and arsenopyrite, pyrrhotite, tetrahedrite, and iron-rich (up to 21 mole % FeS) sphalerite are restricted to the Oksan mine. K-Ar radiometric dating of sericite indicates that the Pb-Zn-Cu deposits of the Euiseong area were formed during the Late Cretaceous ($62.3{\pm}2.8Ma$), likely associated with subvolcanic activity related to the volcanic complex in the nearby Geumseongsan Caldera and the ubiquitous felsite dykes. Stage I mineralization occurred at temperatures between > $380^{\circ}C$ and $240^{\circ}C$ from fluids with salinities between 6.3 and 0.7 equiv. wt. % NaCl. Chalcopyrite deposition occurred mostly at higher temperatures of > $300^{\circ}C$. Fluid inclusion data indicate that the Pb-Zn-Cu ore mineralization resulted from a complex history of boiling, cooling, and dilution of ore fluids. The mineralization at Jeonheung resulted mainly from cooling and dilution by an influx of cooler meteoric waters, whereas the mineralization at Oksan was largely due to fluid boiling. Evidence of fluid boiling suggests that pressures decreased from about 210 bars to 80 bars, corresponding to a depth of about 900 m in a hydrothermal system that changed from lithostatic (closed) toward hydrostatic (open) conditions. Sulfur isotope compositions of sulfide minerals (${\delta}^{34}S=2.9{\sim}9.6$ per mil) indicate that the ${\delta}^{34}S_{{\Sigma}S}$ value of the ore fluids was ${\approx}8.6$ per mil. This ${\delta}^{34}S_{{\Sigma}S}$ value is likely consistent with igneous sulfur mixed with sulfates (?) in the surrounding sedimentary rocks. Measured and calculated hydrogen and oxygen isotope values of the ore-forming fluids suggest meteoric water dominance, approaching unexchanged meteoric water values. Equilibrium thermodynamic interpretation indicates that the temperature versus $f_{S_2}$ variation of stage I ore fluids differed between the two mines as follows: the $f_{S_2}$ of ore fluids at Jeonheung changed with decreasing temperature, remaining near the pyrite-hematite-magnetite sulfidation curve, whereas that at Oksan changed from the pyrite-pyrrhotite sulfidation state towards the pyrite-hematite-magnetite state. The shift in minerals precipitated during stage I also reflects a concomitant $f_{O_2}$ increase, probably due to mixing of ore fluids with cooler, more oxidizing meteoric waters. Thermodynamic consideration of copper solubility suggests that the ore-forming fluids cooled through boiling at Oksan and through mixing with less-evolved meteoric waters at Jeonheung, and that this cooling was the main cause of copper deposition through the destabilization of copper chloride complexes.


Effects of Customers' Relationship Networks on Organizational Performance: Focusing on Facebook Fan Page (고객 간 관계 네트워크가 조직성과에 미치는 영향: 페이스북 기업 팬페이지를 중심으로)

  • Jeon, Su-Hyeon;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.57-79 / 2016
  • The number of users of social network services (SNS), one of the major social media channels, is steadily increasing. In line with this trend, more companies are taking an interest in this networking platform and investing in it. SNS has received much attention as a tool for spreading the messages a company wants to deliver to its customers and has been recognized as an important channel for relationship marketing. Today's rapidly changing media environment allows companies to approach their customers in various ways. In particular, the rapidly developing social network services provide an environment in which customers can freely talk about products, and for companies they also work as a channel for delivering customized information to customers. To succeed in the online environment, companies need not only to build relationships between themselves and their customers but also to focus on the relationships among customers. In response to the online environment and the continuous development of technology, companies have tirelessly devised novel marketing strategies, and as one-to-one marketing becomes available, it is increasingly important for companies to maintain relationship marketing with their customers. Among the many SNS, Facebook, which many companies use as a communication channel, provides a fan page service for each company that supports its business. A Facebook fan page is a platform on which events, information, and announcements can be shared with customers using text, videos, and pictures. Companies open their own fan pages to publicize their companies and businesses; such pages function as company websites and also have the character of brand communities, such as blogs. As Facebook has become a major communication medium with customers, companies recognize its importance as an effective marketing channel, but they still need to investigate the business performance they gain from using it. Although Facebook fan pages have great potential, including a community function among users that other platforms lack, it is incomplete to regard companies' fan pages simply as communities and analyze them as such. This study explores the relationships among customers through the network of Facebook fan page users. Previous studies of companies' Facebook fan pages focused on finding effective operational directions by analyzing each company's usage. In this study, by contrast, we derive structural network variables by which customer commitment can be measured, applying social network analysis methodology, and empirically investigate the influence of the structural characteristics of the network on companies' business performance. Through each company's Facebook fan page, we extract the network of users who engaged in communication with the company: a one-mode, undirected, binary network in which users are nodes and their relationships through marketing activities are links. From this network we derive the structural variables that can explain the commitment of the customers who pressed "like," made comments, or shared each company's Facebook marketing messages, by calculating density, global clustering coefficient, mean geodesic distance, and diameter.
Using companies' historical performance, such as net income and Tobin's Q, as the outcome variables, this study investigates the influence of these network characteristics on business performance. For this purpose, we collected network data on 54 KOSPI-listed companies that posted more than 100 articles on their Facebook fan pages during the data collection period and derived the network indicators for each company. The performance-related indicators were calculated based on values posted on the DART website of the Financial Supervisory Service. From an academic perspective, this study suggests a new approach, based on social network analysis methodology, for researchers who study the business use of social media channels. From a practical perspective, it proposes more substantive marketing performance measurements for companies that conduct marketing through social media, and it is expected to provide a foundation for establishing smart business strategies using network indicators.
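
A minimal sketch (assumed) of the network indicators mentioned above, computed with NetworkX on a one-mode, undirected, binary network of fan-page users. The edges here are illustrative; in the study a link connects users through their marketing activities on the same fan page.

```python
import networkx as nx

# Toy user-to-user network (illustrative edges only).
G = nx.Graph()
G.add_edges_from([("u1", "u2"), ("u2", "u3"), ("u3", "u1"), ("u3", "u4"), ("u4", "u5")])

density = nx.density(G)
clustering = nx.transitivity(G)                      # global clustering coefficient
mean_geodesic = nx.average_shortest_path_length(G)   # mean geodesic distance (connected network assumed)
diameter = nx.diameter(G)
print(density, clustering, mean_geodesic, diameter)
```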