• Title/Summary/Keyword: Size

Search Result 66,245

A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung;Bae, Junghwan;Han, Namgi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.69-92
    • /
    • 2015
  • The explosion of social media data has led researchers to apply text-mining techniques to analyze big social media data in a more rigorous manner. Although social media text analysis algorithms have improved, previous approaches still have limitations. In the field of sentiment analysis of social media written in Korean, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies have added grammatical factors to the feature sets used to train classification models. The other adopts semantic analysis, but this approach has mainly been applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, a neural-network-based word embedding method, to capture the broader semantic features that were underestimated in existing sentiment analysis. The result of adopting the Word2Vec algorithm is compared with the result of co-occurrence analysis to identify the difference between the two approaches. The results show that Word2Vec extracts about three times as many words expressing emotion about each keyword as co-occurrence analysis does. The difference comes from Word2Vec's vectorization of semantic features; the Word2Vec algorithm can therefore catch hidden related words that traditional analysis does not find. In addition, part-of-speech (POS) tagging for Korean is used to detect adjectives as "emotion words". The emotion words extracted from the text are then converted into word vectors by the Word2Vec algorithm to find related words. Among these related words, nouns are selected because each of them may have a causal relationship with the emotion word in the sentence. The process of extracting these trigger factors of emotion words is named "Emotion Trigger" in this study. As a case study, the datasets used in the study were collected by searching with three keywords: professor, prosecutor, and doctor, because these keywords attract rich public emotion and opinion. Preliminary data collection was conducted to select secondary keywords for data gathering. The secondary keywords used to gather the data for the actual analysis are as follows: professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor), doctor (Shin Hae-chul Sky Hospital, drinking and plastic surgery, rebate), prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (professor: 25,720; doctor: 35,110; prosecutor: 43,225), gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and all programs used in text processing and analysis were written in Java. The contributions of this study are as follows. First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, finding Emotion Triggers can detect hidden connections to public emotion that existing methods cannot. Finally, the approach used in this study can be generalized regardless of the type of text data. The limitation of this study is that it is hard to say that the words extracted by Emotion Trigger processing have a significant causal relationship with the emotion word in a sentence. A future study will clarify the causal relationship between emotion words and the words extracted by Emotion Trigger by comparing them with manually tagged relationships. Furthermore, part of the text data used in Emotion Trigger comes from Twitter, which has a number of distinct features that we did not address in this study; these features will be considered in further work.
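As a rough, non-authoritative illustration of the pipeline described above: the study's own implementation was written in Java, so the gensim-based training, the toy POS-tagged sentences, and the parameters below are assumptions made purely for this sketch.

```python
# Sketch of the "Emotion Trigger" idea: train Word2Vec on POS-tagged Korean
# sentences, treat adjectives as emotion words, and keep the most similar
# nouns as candidate trigger factors. All data and parameters are illustrative.
from gensim.models import Word2Vec

# Each sentence is a list of (token, POS) pairs from a Korean POS tagger (hypothetical output).
tagged_sentences = [
    [("교수", "Noun"), ("연구비", "Noun"), ("유용", "Noun"), ("실망스럽다", "Adjective")],
    [("검사", "Noun"), ("스폰서", "Noun"), ("슬프다", "Adjective")],
]

# 1) Train word vectors on the plain token sequences.
token_sentences = [[tok for tok, _ in sent] for sent in tagged_sentences]
model = Word2Vec(token_sentences, vector_size=100, window=5, min_count=1, sg=1)

# 2) Collect adjectives (candidate emotion words) and nouns (candidate triggers).
emotion_words = {tok for sent in tagged_sentences for tok, pos in sent if pos == "Adjective"}
nouns = {tok for sent in tagged_sentences for tok, pos in sent if pos == "Noun"}

# 3) For each emotion word, report the most similar nouns as its Emotion Triggers.
for emotion_word in emotion_words:
    similar = model.wv.most_similar(emotion_word, topn=10)
    triggers = [(word, round(score, 3)) for word, score in similar if word in nouns]
    print(emotion_word, "->", triggers)
```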

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes applying CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN's strength lies in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The size of the image in which the graph is drawn is 40×40 pixels, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image is converted into a combination of three matrices expressing the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained using the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer. In the pooling layer, a 2×2 max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and the one for the output layer to the Softmax function. To validate our model, CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset using 80% of the total dataset (1,560 samples) and the validation dataset using the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry William's %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), CCI (commodity channel index), and so on. To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models.
Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
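For illustration only, a minimal Keras sketch of a classifier with the layer sizes reported above (40×40×3 inputs, 5×5 convolutions with 6 and 9 filters, 2×2 max pooling, hidden layers of 900 and 32 nodes, a 2-node softmax output). The exact layer ordering, optimizer, and loss are assumptions, since the abstract does not specify them.

```python
# A minimal CNN-FG-style classifier sketch; placement of pooling layers and
# the training configuration are assumptions, not the paper's specification.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(40, 40, 3)),              # 40x40 RGB fluctuation-graph images
    layers.Conv2D(6, (5, 5), activation="relu"),  # 5x5x6 convolution filter
    layers.MaxPooling2D((2, 2)),                  # 2x2 max pooling
    layers.Conv2D(9, (5, 5), activation="relu"),  # 5x5x9 convolution filter
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(900, activation="relu"),         # first hidden layer (900 nodes)
    layers.Dense(32, activation="relu"),          # second hidden layer (32 nodes)
    layers.Dense(2, activation="softmax"),        # upward vs. downward movement
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```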

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, compensating for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it structurally avoids investment risk, so it is stable for managing large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but also learns much faster than traditional boosting methods, and it is frequently used in various fields of data analysis because of these advantages. In this study, we therefore propose a new asset allocation model that combines the risk parity model and the XGBoost machine learning model. This model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates the investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The datasets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staple sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return and obtained a large amount of sample data because of the long test period. Compared with the traditional risk parity model, this experiment recorded improvements in both cumulative return and reduction of estimation errors: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment showed an improvement in portfolio performance obtained by reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the most fundamental question of whether the past characteristics of assets will continue into the future in a changing financial market. 
However, this study not only takes advantage of traditional asset allocation models but also supplements the limitations of traditional methods and increases stability by predicting the risks of assets with the latest algorithm. There are various studies on parameter estimation methods to reduce estimation errors in portfolio optimization; we suggest a new method that reduces estimation errors in an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
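A minimal sketch of the idea described above, assuming a simple feature set, window lengths, and a solver that are not taken from the paper: XGBoost predicts each asset's next-period volatility, the predictions replace historical volatilities in the covariance matrix, and risk parity weights are computed from that covariance.

```python
# Illustrative sketch: XGBoost-predicted volatilities plugged into the
# covariance used for risk parity weights. Features, windows, and the solver
# are assumptions for this example, not the paper's specification.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor
from scipy.optimize import minimize

def predicted_covariance(returns: pd.DataFrame, lookback: int = 1000) -> np.ndarray:
    """Keep historical correlations but replace volatilities with XGBoost forecasts."""
    corr = returns.tail(lookback).corr().values
    predicted_vols = []
    for col in returns.columns:
        r = returns[col]
        feats = pd.concat({f"vol_{w}": r.rolling(w).std() for w in (5, 20, 60)}, axis=1)
        target = r.rolling(20).std().shift(-20)        # next-period realized volatility
        train = pd.concat([feats, target.rename("y")], axis=1).dropna()
        model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
        model.fit(train[feats.columns], train["y"])
        predicted_vols.append(float(model.predict(feats.iloc[[-1]])[0]))
    vol = np.array(predicted_vols)
    return corr * np.outer(vol, vol)                   # predicted covariance matrix

def risk_parity_weights(cov: np.ndarray) -> np.ndarray:
    """Weights whose risk contributions are (approximately) equal."""
    n = cov.shape[0]

    def unequal_risk(w: np.ndarray) -> float:
        rc = w * (cov @ w)                             # each asset's risk contribution
        return float(np.sum((rc - rc.mean()) ** 2))

    result = minimize(
        unequal_risk,
        np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return result.x
```

With these two pieces, each rebalancing step would forecast the covariance from the most recent window and allocate by risk parity, which is the structure the abstract describes.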

THE RELATIONSHIP BETWEEN PARTICLE INJECTION RATE OBSERVED AT GEOSYNCHRONOUS ORBIT AND DST INDEX DURING GEOMAGNETIC STORMS (자기폭풍 기간 중 정지궤도 공간에서의 입자 유입률과 Dst 지수 사이의 상관관계)

  • 문가희;안병호
    • Journal of Astronomy and Space Sciences
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2003
  • To examine the causal relationship between geomagnetic storms and substorms, we investigate the correlation between the dispersionless particle injection rate of proton flux observed from geosynchronous satellites, which is known to be a typical indicator of substorm expansion activity, and the Dst index during magnetic storms. We use geomagnetic storms that occurred during the period 1996~2000 and categorize them into three classes in terms of the minimum value of the Dst index ($Dst_{min}$): intense ($-200 nT \leq Dst_{min} \leq -100 nT$), moderate ($-100 nT \leq Dst_{min} \leq -50 nT$), and small ($-50 nT \leq Dst_{min} \leq -30 nT$) storms. We use the proton flux in the energy range from 50 keV to 670 keV, the major constituent of the ring current particles, observed from the LANL geosynchronous satellites located within the local time sector from 18:00 MLT to 04:00 MLT. We also examine the flux ratio ($f_{max}/f_{ave}$) to estimate the particle energy injection rate into the inner magnetosphere, with $f_{ave}$ and $f_{max}$ being the flux levels during quiet times and at onset, respectively. The total energy injection rate into the inner magnetosphere cannot be estimated from particle measurements by one or two satellites; however, it should be at least proportional to the flux ratio and the injection frequency. Thus we propose a quantity, the "total energy injection parameter (TEIP)", defined as the product of the flux ratio and the injection frequency, as an indicator of the energy injected into the inner magnetosphere. To investigate the phase dependence of the substorm contribution to the development of a magnetic storm, we examine the correlations during the main phase and the recovery phase of the storm separately. Several interesting tendencies are noted, particularly during the main phase. First, the average particle injection frequency tends to increase with storm size, with a correlation coefficient of 0.83. Second, the flux ratio ($f_{max}/f_{ave}$) tends to be higher during large storms; the correlation coefficient between $Dst_{min}$ and the flux ratio is generally high, for example, 0.74 for the 75~113 keV energy channel. Third, it is also worth mentioning that there is a high correlation between the TEIP and $Dst_{min}$, with the highest coefficient (0.80) recorded for the 75~113 keV energy channel, the typical particle energies of the ring current belt. Fourth, particle injection during the recovery phase tends to make storms longer, particularly intense storms. These characteristics observed during the main phase of the magnetic storm indicate that substorm expansion activity is closely associated with the development of magnetic storms.
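In compact form, the quantity defined above can be written as $\mathrm{TEIP} = (f_{max}/f_{ave}) \times \nu_{inj}$, where $\nu_{inj}$ denotes the substorm injection frequency during the storm; the symbol $\nu_{inj}$ is introduced here only for illustration, since the abstract defines TEIP verbally as the product of the flux ratio and the injection frequency.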

Results of Radiation Therapy and Extrafascial Hysterectomy in Bulky Stage IB, IIA-B Carcinoma of the Uterine Cervix (종괴가 큰 병기 IB, IIA-B 자궁경부암에서 방사선치료와 Extrafascial Hysterectomy의 결과)

  • Kim Jin Hee;Lee Ho Jun;Choi Tae Jin;Do Cha Soon;Lee Tae Sung;Kim Ok Bae
    • Radiation Oncology Journal
    • /
    • v.17 no.1
    • /
    • pp.23-29
    • /
    • 1999
  • Purpose: To evaluate the efficacy of radiation therapy and extrafascial hysterectomy in bulky stage IB, IIA-B uterine cervix cancers. Methods and Materials: Twenty-four patients with bulky stage IB and IIA-B carcinoma of the uterine cervix were treated with extrafascial hysterectomy following radiation therapy because of suspected residual disease at the Department of Therapeutic Radiology, Keimyung University Dongsan Hospital, from April 1986 to December 1997. According to the FIGO staging system, there were 7 patients with stage IB, 9 with stage IIA, and 8 with stage IIB; the median age was 45. The pathologic distribution showed 16 patients with squamous cell carcinoma and 8 with adenocarcinoma. Seven patients had tumors less than 5 cm in size and 17 had tumors larger than 5 cm. The mean interval between radiation therapy and extrafascial hysterectomy was 57 days. The radiation therapy consisted of external irradiation to the whole pelvis (180 cGy/fraction, mean 4100 cGy) and a parametrial boost (to a mean total dose of 5000 cGy) with a midline shield (4H 10 cm), followed by intracavitary irradiation up to 7500 cGy to point A (maximum 8500 cGy). The maximum follow-up duration was 107 months and the mean follow-up duration was 42 months. Results: Ten of 24 patients (41.7%) had residual disease found at the time of extrafascial hysterectomy. The five-year overall survival rate (5Y OSR) and five-year disease-free survival rate (5Y DFSR) were 63.6% and 62.5%, respectively. The five-year overall survival rate was 71.4% for stages IB and IIA and 50% for stage IIB. There was a significant difference in 5Y OSR and 5Y DFSR between patients with and without residual disease (negative vs. positive: 83.3% vs. 40% (P=0.01) and 83.3% vs. 36% (P=0.01), respectively). There was a notable tendency toward better survival with adenocarcinoma than with squamous cell carcinoma (adenocarcinoma vs. squamous cell carcinoma: 85.7% vs. 53.3% (P=0.1) and 85.7% vs. 50.9% (P=0.1) for 5Y OSR and 5Y DFSR, respectively). The total dose to point A did not make a significant difference in the survival rate or the existence of residual lesions (<7500 cGy vs. ≥7500 cGy). It was also noted that local failures occurred significantly more frequently in patients with positive residual disease than in those with negative residual disease (5/10 vs. 0/14, p=0.003). There were no treatment-related deaths. Conclusion: There was no improvement in residual disease or overall survival rate in spite of the increased total dose to point A. We conclude that radiation therapy followed by extrafascial hysterectomy may have a beneficial effect on survival for adenocarcinoma of bulky stage IB and IIA-B uterine cervix. We need to confirm this with longer follow-up and a larger number of patients.


Investigation of Study Items for the Patterns of Care Study in the Radiotherapy of Laryngeal Cancer: Preliminary Results (후두암의 방사선치료 Patterns of Care Study를 위한 프로그램 항목 개발: 예비 결과)

  • Chung Woong-Ki;Kim Il-Han;Ahn Sung-Ja;Nam Taek-Keun;Oh Yoon-Kyeong;Song Ju-Young;Nah Byung-Sik;Chung Gyung-Ai;Kwon Hyoung-Cheol;Kim Jung-Soo;Kim Soo-Kon;Kang Jeong-Ku
    • Radiation Oncology Journal
    • /
    • v.21 no.4
    • /
    • pp.299-305
    • /
    • 2003
  • Purpose: In order to develop national guidelines for the standardization of radiotherapy, we are planning to establish a web-based, on-line database system for laryngeal cancer. As a first step, this study was performed to accumulate basic clinical information on laryngeal cancer and to determine the items needed for the database system. Materials and Methods: We analyzed the clinical data of patients who were treated under the diagnosis of laryngeal cancer from January 1998 through December 1999 in the southwest area of Korea. The eligibility criteria were as follows: 18 years or older, currently diagnosed with primary epithelial carcinoma of the larynx, and no history of previous treatment for other cancers or other laryngeal diseases. The items were developed and filled out by radiation oncologists who are members of the Korean Southwest Radiation Oncology Group. SPSS v10.0 software was used for statistical analysis. Results: Data on forty-five patients were collected. The age of the patients ranged from 28 to 88 years (median, 61). Laryngeal cancer occurred predominantly in males (10:1 sex ratio). Twenty-eight patients (62%) had primary cancers in the glottis and 17 (38%) in the supraglottis. Most were diagnosed pathologically as squamous cell carcinoma (44/45, 98%). Twenty-four of 28 glottic cancer patients (86%) had AJCC (American Joint Committee on Cancer) stage I/II disease, but only 50% (8/16) of supraglottic cancer patients did (p=0.02). Most patients (89%) had hoarseness. Indirect laryngoscopy was done in all patients and direct laryngoscopy was performed in 43 (98%) patients. Twenty-one of 28 (75%) glottic cancer cases and 6 of 17 (35%) supraglottic cancer cases were treated with radiation alone. Combined treatment with surgery and radiation was used in 5 (18%) glottic and 8 (47%) supraglottic patients, and chemotherapy and radiation in 2 (7%) glottic and 3 (18%) supraglottic patients. There was no statistically significant difference in the use of combined modality treatments between glottic and supraglottic cancers (p=0.20). In all patients, 6 MV X-rays were used with conventional fractionation. The fraction size was 2 Gy in 80% of glottic cancer patients, compared with 1.8 Gy in 59% of the patients with supraglottic cancers. The mean total doses delivered to primary lesions were 65.98 Gy and 70.15 Gy in glottic and supraglottic patients treated with radiation alone, respectively. Based on the collected data, 12 modules with 90 items were developed for the study of the patterns of care in laryngeal cancer. Conclusion: The study items for laryngeal cancer were developed. In the near future, a web system will be established based on the items investigated, and then a nationwide analysis of laryngeal cancer will be carried out for the standardization and optimization of radiotherapy.

Geological Structures of the Hadong Northern Anorthosite Complex and its surrounding Area in the Jirisan Province, Yeongnam Massif, Korea (영남육괴 지리산지구에서 하동 북부 회장암복합체와 그 주변지역의 지질구조)

  • Lee, Deok-Seon;Kang, Ji-Hoon
    • The Journal of the Petrological Society of Korea
    • /
    • v.21 no.3
    • /
    • pp.287-307
    • /
    • 2012
  • The study area, located in the southeastern part of the Jirisan province of the Yeongnam massif, Korea, consists mainly of the Precambrian Hadong northern anorthosite complex (HNAC), the Jirisan metamorphic rock complex (JMRC), and the Mesozoic granitoids which intrude them. Its tectonic frame trends N-S, unlike the general NE-trending tectonic frame of the Korean Peninsula. This paper investigated the structural characteristics of each deformation phase to clarify the geological structures associated with the N-S-trending tectonic frame built in the HNAC and JMRC. The results indicate that the geological structures of this area were formed through at least three phases of deformation. (1) The $D_1$ deformation formed the $F_1$ sheath or "A"-type folds in the HNAC and JMRC, the $S_{0-1}$ composite foliation, the $S_1$ foliation and the $D_1$ ductile shear zone which are (sub)parallel to the axial plane of the $F_1$ fold, and the $L_1$ stretching lineation which is parallel to the $F_1$ fold axis, owing to large-scale top-to-the-SE shearing on the $S_0$ foliation. (2) The $D_2$ deformation (re)folded the $D_1$ structural elements under an E-W-trending tectonic compression environment, and formed the N-S-trending $F_2$ open, tight, isoclinal, and intrafolial folds together with the $S_{0-1-2}$ composite foliation, the $S_2$ foliation, the $D_2$ ductile shear zone with S-C-C' structure, and the $L_2$ stretching lineation which is (sub)parallel to the axial plane of the $F_2$ fold. The extensive N-S-trending $D_2$ ductile shear zone (Hadong shear zone) developed persistently along the eastern boundary of the HNAC and JMRC, which corresponds to the limb of the $F_2$ fold on a geological map scale. The Hadong shear zone is no less than 1.4 km in width and was formed by mylonitization, which produced the mylonitic structure and the stretching lineation with grain-size reduction during the $F_2$ passive folding. (3) The $D_3$ deformation formed the E-W-trending $F_3$ kink or open folds under an N-S-trending tectonic compression environment and partially rearranged the N-S-trending pre-$D_3$ structural elements into (E)NE or (W)NW directions. The regional trend of the $D_1$ tectonic frame before the $D_2$ deformation would have been NE-SW, unlike the present, and the present N-S-trending tectonic frame in the HNAC and JMRC was formed by the rearrangement of the $D_1$ tectonic frame through $F_2$ active and passive folding. Based on the main intrusion age of the (N)NE-trending basic dykes in the study area, these three deformation events are interpreted to have occurred before the Late Paleozoic.

The Structural Relationships between Control Types over Salespeople, Their Responses, and Job Satisfaction - Mediating Roles of Role Clarity and Self-Efficacy - (영업사원에 대한 통제유형, 반응, 그리고 직무만족 간의 구조적 관계 - 역할명확성과 자기효능감의 매개효과 -)

  • Yoo, Dong-Keun;Lim, Jong-Koo;Lim, Ji-Hoon
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.4
    • /
    • pp.23-49
    • /
    • 2007
  • Salespeople act at the point of MOT (moment of truth) with customers and deliver the enterprise's message to them. They build up relationships with customers and also deliver the customers' message back to the enterprise. Salespeople's activity at the point of MOT and the degree to which customers' needs are satisfied affect the customers' attitude toward the enterprise, brand loyalty, and retention intention, and ultimately influence the enterprise's financial performance. The control of salespeople is one of the most interesting topics in marketing. This research investigates the relationships between the types of control over salespeople (positive/negative outcome control, positive/negative behavior control), job satisfaction, and their mediating variables. The mediating variables in these relationships are outcome/behavior-related role clarity and self-efficacy. The purpose of this study is, more specifically, as follows. First, it investigates how the perception of control types affects role clarity. Second, it examines how the perception of control types influences self-efficacy. Third, it investigates the mediating role of role clarity between the perception of control types and self-efficacy. Fourth, it investigates how role clarity affects self-efficacy and job satisfaction. Finally, it investigates how self-efficacy influences job satisfaction. Data were collected from pharmaceutical industry salespeople and analyzed with SPSS 12.0 and AMOS 6.0. Questionnaires were collected from 400 respondents, and 377 valid questionnaires were analyzed. The results are summarized as follows. First, positive/negative outcome controls had a positive relationship with outcome-related role clarity. Positive behavior control also had a positive effect on behavior-related role clarity, but negative behavior control did not influence behavior-related role clarity. Second, positive outcome control influenced self-efficacy positively, but positive behavior control did not have a positive effect on self-efficacy. In addition, negative outcome control and negative behavior control had a positive effect on self-efficacy through the mediating role of outcome-related and behavior-related role clarity. Third, outcome-related role clarity and behavior-related role clarity influenced self-efficacy positively. Behavior-related role clarity had a positive effect on job satisfaction, but outcome-related role clarity did not influence job satisfaction. Finally, self-efficacy did not have any effect on job satisfaction. The contributions of this study are as follows. First, existing studies have investigated the direct causal relationship between salespeople control types and performance, whereas this study investigates the structural causality between salespeople control types, responses, and performance. Second, this study found the mediating role of outcome-related/behavior-related role clarity between outcome/behavior control and self-efficacy. Finally, the findings of this study add insight to existing studies on the relationship between job satisfaction and self-efficacy. Salespeople's confidence in their tasks influenced job satisfaction positively in existing articles and field studies, but the relationship between these two variables was not significant in this study. This means that the relationship between confidence and job satisfaction can differ according to the salespeople's business. 
That is, the business environment may not be satisfying even if salespeople say that they have ability and confidence in their business; able salespeople who have ability and confidence in their business may still not be satisfied with their job advancement in the company. Therefore, enterprises need to provide training that can establish a business environment satisfying salespeople's expectation level, which will help secure good salespeople. This study has limitations that future studies should address. First, as in existing studies, what is measured here is the control level that salespeople perceive. In actuality, the control level that a manager enforces and the control level that salespeople perceive can be different, so future research should measure control from both the manager's and the salespeople's perspectives to supplement this study. Second, this study considered job satisfaction as the outcome of control, but the financial results of control may be more important than sales performance; this study is also limited in that it did not consider the financial results of control, and further studies on this will be needed. Third, this study may have a further limitation because the investigation was restricted to pharmaceutical salespeople selling to hospitals; it is necessary to conduct investigations in various industries to increase the generalizability of the findings. Fourth, this study considered role clarity and self-efficacy as response variables to control and job satisfaction as the outcome variable of control, but other response and outcome variables could also be considered, for example, business stress as a response variable and financial outcomes or job changes as outcome variables. Finally, supporting research in this area of marketing is limited, and the data were collected through random sampling of a limited size, which restricts the generalization of the findings. This research summarizes the research in this area, notes the differences from previous research, and provides a discussion of its limitations and the need and direction for future research.


Evaluation of the Perception and Satisfaction of Working and Internship Abroad -By Undergraduates Studying in Culinary and Foodservice Departments- (해외 취업 및 인턴쉽에 대한 인식과 만족도에 관한 연구 -조리 및 외식관련 전공자를 대상으로-)

  • Choi, Young-Hee;Kim, Il-Soon;Kim, Soo-Yeun
    • Journal of the East Asian Society of Dietary Life
    • /
    • v.19 no.2
    • /
    • pp.287-294
    • /
    • 2009
  • This study was conducted to evaluate the perception and satisfaction of undergraduates majoring in culinary arts and food service with working and internship abroad. The responses of the participants to 10 questions regarding perception and 13 questions regarding the importance of and satisfaction with working and internship abroad were measured on a 5-point Likert scale. The primary results were as follows: 1) The subjects were composed of 50.9% male and 49.1% female students, of whom 42.1% were employed and 57.9% had experienced an internship abroad. 2) Most students went abroad to gain experience with various foreign cultures, in response to recommendations by the western cuisine department. 3) The items "I wish to conduct my affairs continuously" (M=4.21) and "I have good relationships with my colleagues at work" (M=4.11) received the highest scores from male and female respondents, respectively. 4) Male students considered "cooperation among divisions" (M=4.11), "language skills" (M=4.38), and "kitchen environment" (M=4.34) to be very important, whereas female students believed that "language skills" (M=4.36), "social relationships" (M=4.21), and "wage income" (M=4.18) were most important. Furthermore, male students were most satisfied with "company size" (M=4.28), "kitchen environment" (M=4.21), and "business hours" (M=4.10), while female students were most satisfied with "kitchen environment", "incentive" (M=4.14), and "social relationships" (M=4.11).


Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.125-148
    • /
    • 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields. In addition, text attracts many analysts because the amount of data is very large and it is relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, document classification, which classifies documents into predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which summarizes the main contents of one or several documents, have been actively studied. In particular, text summarization techniques are actively applied in business through news summary services, privacy policy summary services, etc. In addition, much research has been done in academia on the extraction approach, which selectively provides the main elements of the document, and the abstraction approach, which extracts elements of the document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not made much progress compared to automatic text summarization itself. Most existing studies dealing with the quality evaluation of summarization carried out manual summarization of documents, used them as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed with various techniques on the full text, and the result is compared with the reference document, which serves as an ideal summary, to measure the quality of the automatic summarization. Reference documents are provided in two major ways; the most common is manual summarization, in which a person creates an ideal summary by hand. Since this method requires human intervention in preparing the summary, it takes a lot of time and cost, and the evaluation result may differ depending on the subjectivity of the summarizer. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. As a representative attempt, a method has recently been devised to reduce the size of the full text and to measure the similarity between the reduced full text and the automatic summary. In this method, the more frequently a term from the full text appears in the summary, the better the quality of the summary. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a summary judged "good" on frequency alone is not necessarily a good summary in this essential sense. To overcome the limitations of these previous studies of summarization evaluation, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little content is duplicated among the sentences of the summary, and completeness as an element indicating how little of the original content is missing from the summary. 
In this paper, we propose a method for the automatic quality evaluation of text summarization based on the concepts of succinctness and completeness. In order to evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor hotel reviews, the reviews for each hotel were summarized, and the results of experiments evaluating summary quality according to the proposed methodology are presented. The paper also provides a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and proposes a method to perform optimal summarization by changing the sentence-similarity threshold.
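As a rough, assumption-laden sketch of how completeness, succinctness, and their F-score combination could be computed from sentence similarities: the TF-IDF representation, cosine similarity, and the fixed threshold below are illustrative choices for this example, not the paper's exact formulation.

```python
# Illustrative sketch: completeness / succinctness / F-score from pairwise
# sentence similarities. Similarity measure and threshold are assumptions.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def evaluate_summary(original_sents, summary_sents, threshold=0.3):
    vec = TfidfVectorizer().fit(original_sents + summary_sents)
    orig = vec.transform(original_sents)
    summ = vec.transform(summary_sents)
    sim = cosine_similarity(orig, summ)          # original x summary similarities

    # Completeness: fraction of original sentences covered by some summary sentence.
    completeness = float((sim.max(axis=1) >= threshold).mean())

    # Succinctness: 1 minus the fraction of redundant summary sentence pairs.
    pair_sims = cosine_similarity(summ)
    pairs = list(combinations(range(len(summary_sents)), 2))
    redundant = sum(pair_sims[i, j] >= threshold for i, j in pairs)
    succinctness = 1.0 - redundant / len(pairs) if pairs else 1.0

    # Combine the two, which trade off against each other, into an F-score.
    if completeness + succinctness == 0:
        return completeness, succinctness, 0.0
    f_score = 2 * completeness * succinctness / (completeness + succinctness)
    return completeness, succinctness, f_score
```

Raising the similarity threshold makes a summary sentence cover fewer original sentences (lowering completeness) but also counts fewer pairs as redundant (raising succinctness), which is the trade-off the F-score is meant to balance.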