

The Impact of Human Resource Innovativeness, Learning Orientation, and Their Interaction on Innovation Effect and Business Performance : Comparison of Small and Medium-Sized vs. Large-Sized Companies (인적자원의 혁신성, 학습지향성, 이들의 상호작용이 혁신효과 및 사업성과에 미치는 영향 : 중소기업과 대기업의 비교연구)

  • Yoh, Eunah
    • Korean small business review / v.31 no.2 / pp.19-37 / 2009
  • The purpose of this research is to explore differences between small and medium-sized companies and large-sized companies in the impact of human resource innovativeness (HRI), learning orientation (LO), and the HRI-LO interaction on innovation effect and business performance. Although learning orientation has long been considered a key factor influencing business performance, little research has been devoted to the effect of the HRI-LO interaction on innovation effect and business performance. This study investigates whether there is a synergy effect between an innovative human workforce and a learning-oriented corporate culture, beyond the effect of each by itself, in generating good business performance as well as successful new innovations in the market. The research hypotheses were: H1) human resource innovativeness (HRI), learning orientation (LO), and their interaction (HRI-LO interaction) positively affect innovation effect; H2) the effects of HRI, LO, and the HRI-LO interaction on innovation effect differ between large-sized and small and medium-sized companies; H3) HRI, LO, the HRI-LO interaction, and innovation effect positively affect business performance; and H4) the effects of HRI, LO, the HRI-LO interaction, and innovation effect on business performance differ between large-sized and small and medium-sized companies. Data were obtained from 479 practitioners through a web survey, an efficient method for collecting national data across a variety of fields. A single respondent per company was allowed to participate after confirming that he or she had more than five years of work experience in the company. To check whether common source bias existed in the sample, additional data from a convenience sample of 97 companies were gathered through a traditional survey method and used to compare correlations between the research variables of the original sample and the additional sample. Data were divided into two groups according to company size: 352 small and medium-sized companies with fewer than 300 employees and 127 large-sized companies with 300 or more employees. Data were analyzed through t-tests and regression analyses. HRI, the innovativeness of a company's human resources, was measured with 9 items assessing the innovativeness of practitioners in staff, manager, and executive-level positions. LO, the company's effort to encourage employees to develop, share, and utilize knowledge through continuous learning, was measured with 18 items assessing commitment to learning, vision sharing, and open-mindedness. Innovation effect, which assesses the success of new products/services in the market, was measured with 3 items. Business performance was measured by respondents' evaluations of profitability, sales increase, market share, and general business performance relative to other companies in the same field. All items used 6-point Likert scales. Means of the multiple items measuring each construct were used as variables, based on acceptable reliability and validity. To reduce the multicollinearity problems that interaction terms generate in regression analysis, mean-centered data were used for HRI, LO, and innovation effect in the regression analyses.
In the group comparison, large-sized companies were superior to small and medium-sized companies in annual sales, annual net profit, the number of new products/services in the last 3 years, the number of new processes advanced in the last 3 years, and the number of R&D personnel. Large-sized companies also indicated higher levels of HRI, LO, HRI-LO interaction, innovation effect, and business performance than small and medium-sized companies. These results indicate that large-sized companies tend to have more innovative human resources and to invest more in learning orientation than small and medium-sized companies, and therefore tend to achieve more success with new products/services in the market, generating better business performance. To test the research hypotheses, a series of multiple regression analyses was conducted. The regression analysis examining the impact on innovation effect produced two important results: 1) HRI, LO, and the HRI-LO interaction affected innovation effect, and 2) company size had a moderating effect. The impact of HRI on innovation effect was greater in small and medium-sized companies than in large-sized companies, whereas the impact of LO on innovation effect was greater in large-sized companies than in small and medium-sized companies. In other words, an innovative workforce is more important for creating new products/services that succeed in the market in small and medium-sized companies than in large-sized companies, while a learning-oriented culture is more effective for creating successful products/services in large-sized companies than in small and medium-sized companies. Based on these results, research hypotheses 1 and 2 were supported. The regression analysis examining the impact on business performance produced three important results: 1) innovation effect, LO, and the HRI-LO interaction affected business performance, 2) HRI by itself did not have a direct effect on business performance regardless of company size, and 3) company size had a moderating effect. Specifically, the effect of the HRI-LO interaction on business performance was stronger in large-sized companies than in small and medium-sized companies, meaning that the synergy effect of innovative human resources and a learning-oriented culture tends to be stronger as the company grows larger. Based on these results, research hypothesis 3 was partially supported and hypothesis 4 was supported. From the research results, implications for companies were derived. Regardless of company size, companies need to develop a learning-oriented corporate culture together with human resource innovativeness in order to achieve successful development of innovative products and services as well as to improve sales and profits. However, the effectiveness of the HRI-LO interaction varies by company size. Specifically, the synergy effect of HRI-LO was stronger for making new products/services successful in small and medium-sized companies than in large-sized companies, whereas it was more effective for increasing business performance in large-sized companies than in small and medium-sized companies. In small and medium-sized companies, business performance was achieved more through the success of new products/services than through a direct effect of the HRI-LO interaction.
The most meaningful result of this study is the confirmation of the effect of the HRI-LO interaction on innovation effect and business performance, which was often ignored in previous research. It was also found that the innovativeness of the human workforce does not directly generate good business performance; rather, innovative human resources contribute to business performance indirectly by helping develop new products/services that succeed in the market. These findings provide valuable managerial implications, particularly regarding the development of corporate culture and education programs, for small and medium-sized as well as large-sized companies in a variety of fields.
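For readers who want to see the moderated-regression setup concretely (mean-centering HRI and LO before forming the interaction term, then estimating the innovation-effect model within each size group), a minimal sketch in Python follows. The file name, column names, and the per-group estimation shortcut are assumptions for illustration, not the study's actual data or code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per company, multi-item scales already averaged.
df = pd.read_csv("survey.csv")  # assumed columns: hri, lo, innovation_effect, size_group

# Mean-center predictors before building the interaction term to reduce multicollinearity.
for col in ["hri", "lo"]:
    df[col + "_c"] = df[col] - df[col].mean()
df["hri_lo_c"] = df["hri_c"] * df["lo_c"]

# Separate regressions per size group approximate the moderation test by company size.
for size, group in df.groupby("size_group"):
    model = smf.ols("innovation_effect ~ hri_c + lo_c + hri_lo_c", data=group).fit()
    print(size, model.params.round(3).to_dict(), "R2=%.3f" % model.rsquared)
```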

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal / v.14 no.1 / pp.83-98 / 2012
  • In a market where new and used cars compete with each other, we run the risk of obtaining biased estimates of the cross elasticity between them if we focus on only new cars or only used cars. Unfortunately, most previous studies on the automobile industry have focused only on new car models without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked at both new and used car markets at the same time to examine the effect of new car model launches on used car prices, but their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on used car pricing policy in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy and treats new and used cars of different models as all substitutable at the first stage. The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009-June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczo's iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and restores the status quo; the new car then settles down to a lowered market share due to the used car's reaction. The method enables us to find the amount of price discount required to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, leading it to suggest a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). Future research might explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from a typical car dealership, where both new and used cars of the same model are displayed. Because of this fact, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there are dealerships that carry both new and used cars of various models, then the NUB model might fit the data as well as the BNU model. Which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
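A small numerical sketch of the two-stage nested logit structure compared above (car model at the first stage, new versus used at the second) may help; the utilities and dissimilarity parameter below are invented numbers, not the paper's estimates.

```python
import numpy as np

# Hypothetical utilities for two brands, each with a new and a used alternative.
utilities = {"Jetta": {"new": 1.2, "used": 0.8}, "Elantra": {"new": 1.0, "used": 0.9}}
lam = 0.6  # dissimilarity (inclusive value) parameter; must lie in (0, 1] for consistency

# Inclusive value per nest: lam * log(sum(exp(u / lam)))
iv = {b: lam * np.log(sum(np.exp(u / lam) for u in alts.values()))
      for b, alts in utilities.items()}

# Upper level: choice among brands based on inclusive values.
denom = sum(np.exp(v) for v in iv.values())
p_brand = {b: np.exp(v) / denom for b, v in iv.items()}

# Lower level: new vs. used within the chosen brand; joint probability is the product.
for b, alts in utilities.items():
    within = {k: np.exp(u / lam) for k, u in alts.items()}
    s = sum(within.values())
    for k in alts:
        print(b, k, round(p_brand[b] * within[k] / s, 3))
```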


Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim; Kwon, Oh-Byung
    • Asia pacific journal of information systems / v.20 no.2 / pp.63-86 / 2010
  • Personalized services directly and indirectly acquire personal data, in part to provide customers with higher-value services that are context-relevant (for instance, to place and time). As information technologies continue to mature and develop, sensor networks and intelligent software can now obtain context data, which is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information leakage is increasing because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concern over the unrestricted availability of context information. These privacy concerns are consistently regarded as a critical issue for the success of context-aware personalized services. The field of information privacy is growing as an area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous work on the information privacy factors of context-aware applications has at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing; existing studies have focused on only a small subset of these characteristics, so there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify information privacy factors in most studies despite the limits of users' knowledge and experience of context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Consequently, conducting a survey on the assumption that participants have sufficient experience with or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors for information privacy concern in context-aware personalized services and to develop a rank-ordered list of these factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure for the Delphi study proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For these rounds, experts were treated as individuals rather than panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to support this, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor in order to determine the final sub-factors from the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with the greatest potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were largely overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, the study relates the level of importance in the professionals' opinions to the extent to which users have privacy concerns.
The traditional questionnaire method was not selected because users almost completely lack understanding of and experience with the new technologies underlying context-aware personalized services. Regarding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and the sensory network as the most important of the technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology: which technologies are needed, in what sequence, and to acquire which types of users' context information. Most studies, following the development of context-aware technology, focus on which services and systems should be provided and developed by utilizing context information; the results of this study show, however, that in terms of users' privacy it is necessary to pay greater attention to the activities that acquire context information. To follow up on the sub-factor evaluation results, additional studies would be needed on approaches to reducing users' privacy concerns toward technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services built around the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase service success rates and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
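The concordance analysis mentioned above is commonly operationalized with Kendall's coefficient of concordance (W); a minimal sketch follows, assuming a hypothetical experts-by-factors score matrix rather than the panel's actual responses.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Kendall's coefficient of concordance for an (experts x items) matrix of scores."""
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank items within each expert
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical importance scores from 5 experts over 6 privacy-concern factors.
scores = np.array([
    [6, 5, 4, 3, 2, 1],
    [6, 4, 5, 3, 1, 2],
    [5, 6, 4, 2, 3, 1],
    [6, 5, 3, 4, 2, 1],
    [6, 4, 5, 2, 3, 1],
])
print("Kendall's W:", round(kendalls_w(scores), 3))  # values near 1 indicate strong consensus
```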

A Study on Hospital Nurses' Preferred Duty Shift and Duty Hours (병원 간호사의 선호근무시간대에 관한 연구)

  • Lee, Gyeong-Sik; Jeong, Geum-Hui
    • The Korean Nurse / v.36 no.1 / pp.77-96 / 1997
  • The duty shifts of hospital nurses not only affect nurses' physical and mental health but also present various personnel management problems which often result in high turnover rates. In this context a study was carried out from October to November 1995, a period of two months, to find out the status of hospital nurses' duty shift patterns, preferred duty hours, and fixed duty shifts. The study population was 867 RNs working in five general hospitals located in Seoul and its vicinity. A questionnaire developed by the author was used for data collection. The response rate was 85.9 percent, or 745 returns. The SAS program was used for data analysis with the computation of frequencies, percentages, and chi-square tests. The findings of the study are as follows: 1. General characteristics of the study population: 56 percent of respondents were in the under-25 age group and 76.5 percent were single; the predominant proportion of respondents were junior nursing college graduates (92.2%) with less than 5 years of nursing experience in hospitals (65.5%). Regarding their future plans in the nursing profession, nearly 50% responded as uncertain. The reason given for their career plan was predominantly 'personal growth and development' rather than financial reasons. 2. The interval for rotation of duty stations was found to be mostly irregular (56.4%), while others reported weekly (16.1%), monthly (12.9%), and fixed terms (4.6%). 3. The main problems related to duty shifts, particularly those reported by evening and night duty nurses, were 'not enough time for the family,' 'fear of security problems when returning home late at night,' 'lack of leisure time,' 'problems in physical and physiological adjustment,' 'problems in family life,' and 'lack of time for interactions with fellow nurses.' 4. Forty percent of respondents reported '1-2 times' of duty shift rotation, while all others reported '0 times,' '2-3 times,' 'more than 3 times,' etc., which suggests irregularity in duty shift rotations. 5. The majority (62.8%) of the study population was found to favor the rotating system of duty stations. The reasons for favoring the rotation system were the opportunity for 'learning new things and personal development,' 'better human relations,' 'better understanding of various duty stations,' and 'changes in a monotonous routine job.' The proportion disfavoring the rotating system was 34.7 percent, giving reasons such as 'it impedes development of specialization,' 'poor job performance,' and 'stress factors.' Furthermore, respondents made the following comments in relation to the rotation of duty stations: nurses should be given the opportunity to participate in the decision-making process; personal interests and aptitudes should be considered; and rotations should occur at regular intervals or be planned in advance. 6. Regarding future career plans, the older, married group with longer nursing experience appeared more likely to regard nursing as their lifetime career than the younger, single group with shorter nursing experience ($\chi^2=61.19$, $p=.000$; $\chi^2=41.55$, $p=.000$). The reason given for their future career plan, regardless of length of future service, was predominantly 'personal growth and development' rather than financial reasons.
Further analysis showed that the group with shorter career plans claimed 'financial reasons' for their future career more readily than the group who consider nursing their lifetime career ($\chi^2=11.73$, $p=.003$). This finding suggests the need for careful consideration in the personnel management of nursing administration, particularly when dealing with nurses' career development. The majority of respondents preferred the fixed day shift. However, further analysis of those who preferred the evening shift by age and civil status showed that the under-25 group (15.1%) and the single group (13.2%) were more likely to favor the fixed evening shift than the over-25 (6.4%) and married (4.8%) groups. These differences were statistically significant ($\chi^2=14.54$, $p=.000$; $\chi^2=8.75$, $p=.003$). 7. A great majority of respondents (86.9%, or n=647) were found to prefer day shifts. When four different types of duty shifts (Types A, B, C, D) were presented, 55.0 percent of total respondents preferred the A type, the existing one, followed by the D type (22.7%), B type (12.4%), and C type (8.2%). 8. When monetary incentives for the evening (20% of salary) and night shifts (40% of salary) under the existing duty type were presented, the day shift again appeared to be the most preferred, although at a slightly lower rate (66.4% against 86.9%). With the same incentive, the preference rates for evening and night shifts increased from 11.0 to 22.4 percent and from 0.5 to 3.0 percent, respectively. When the age variable was controlled, the under-25 group showed higher rates of preferring the evening and night shifts (31.6%, 4.8%) than the over-25 group (15.5%, 1.3%) (p=.000). Civil status also seemed to affect duty shift preferences: the single group showed a lower rate for day duty (69.0%) against 83.6% of the married group, and higher rates for evening and night duties (27.2%, 15.1%, respectively) against those of the married group (3.8%, 1.8%). These differences were all statistically significant (p=.001). 9. The findings on preferences for the three different types of fixed duty hours, namely B, C, and D (with additional monetary incentives), are as follows, in order of preference: B type (12 hrs a day, 3 days a wk): day shift (64.1%), evening shift (26.1%), night shift (6.5%); C type (12 hrs a day, 4 days a wk): evening shift (49.2%), day shift (32.8%), night shift (11.5%); D type (10 hrs a day, 4 days a wk): similar trend to the B type. The higher preferences for evening and night duties when incentives are given, as shown above, suggest the need to introduce different patterns of duty hours and incentive measures in order to overcome the difficulties in rostering nursing duties. However, the interpretation of the above data, particularly for the C type, needs caution because the total number of respondents is very small (n=61); it requires further in-depth study. In conclusion, the findings suggest that in most hospitals in the country the patterns of nurses' duty hours and shifts have neither been tried with different duty types nor been flexible; the stereotyped rostering system of three shifts and insensitivity to nurses' personal lives seem to prevail.
This study supports the view that irregular and frequent rotations of duty shifts may be contributing factors to most nurses' maladjustment problems in physical and mental health and in personal and family life, which eventually may result in high turnover rates. In order to overcome the increasing problems in the personnel management of hospital nurses, particularly in rostering evening and night duty shifts, which may be related to eventual high turnover rates, the findings of this study strongly suggest the need to introduce new rostering systems, including fixed duties and appropriate incentive measures for the evening and night shifts that most nurses want to avoid. Considering that the nursing care of inpatients is a round-the-clock business, the practice of a nursing duty shift system is inevitable. In this context, based on the findings of this study, the following are recommended: 1. Further in-depth studies on duty shifts and hours should be undertaken to develop appropriate and effective rostering systems for hospital nurses. 2. Appropriate incentive measures for evening and night duty shifts, along with organizational considerations such as trials of preferred duty time bands, duty hours, and fixed duty shifts, should be introduced if good quality of care for patients is to be maintained around the clock. This may require the initiation of systematic research and development activities in the field of hospital nursing administration as part of a permanent system in the hospital. 3. Planned and regular intervals, orientation and training, and professional and personal growth should be considered for the rotation of different duty stations or units. 4. Considering the high degree of preference for the '10 hours a day, 4 days a week' duty type shown in this study, it would be worthwhile to undertake R&D-type studies in large hospital settings.
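The chi-square comparisons reported above can be reproduced mechanically once the contingency tables are known; the sketch below shows the computation on an invented age-group-by-preferred-shift table, not the study's raw frequencies.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table: age group (rows) x preferred shift (day/evening/night).
# Counts are invented for illustration; they are not the study's raw frequencies.
table = [
    [260, 120, 18],   # under 25 years
    [290, 52, 5],     # 25 years or older
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```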


Methodology for Identifying Issues of User Reviews from the Perspective of Evaluation Criteria: Focus on a Hotel Information Site (사용자 리뷰의 평가기준 별 이슈 식별 방법론: 호텔 리뷰 사이트를 중심으로)

  • Byun, Sungho; Lee, Donghoon; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.23-43 / 2016
  • As a result of the growth of Internet data and the rapid development of Internet technology, "big data" analysis has gained prominence as a major approach for evaluating and mining enormous data sets for various purposes. In particular, in recent years people tend to share their experiences of leisure activities while also reviewing others' input concerning those activities. By referring to others' leisure-related experiences, they are able to gather information that might guarantee them better leisure activities in the future. This phenomenon appears across many kinds of leisure activities such as movies, traveling, accommodation, and dining. Apart from blogs and social networking sites, many other websites provide a wealth of information related to leisure activities. Most of these websites provide information on each product in various formats depending on different purposes and perspectives. Generally, they provide the average ratings and detailed reviews of users who actually used the products/services, and these ratings and reviews can support the purchase decisions of potential customers. However, the existing websites offering information on leisure activities provide only a single-level rating and review against a set of evaluation criteria. Therefore, to identify the main issue for each evaluation criterion, as well as the characteristics of the specific elements comprising each criterion, users have to read a large number of reviews. In particular, as most users search for the characteristics of the detailed elements of one or more specific evaluation criteria according to their priorities, they must spend a great deal of time and effort to obtain the desired information by reading more reviews and understanding their contents. Although some websites break down the evaluation criteria and direct the user to input reviews according to the different levels of criteria, the excessive number of input sections makes the whole process inconvenient for users. Further, problems arise if a user does not follow the instructions for the input sections or fills in the wrong input sections. Finally, treating the breakdown of evaluation criteria as a realistic alternative is difficult, because identifying all the detailed criteria for each evaluation criterion is a challenging task. For example, when a review of a certain hotel is written, people tend to write only one-level reviews for components such as accessibility, rooms, services, or food. These may address the most frequently asked questions, such as the distance to the nearest subway station or the condition of the bathroom, but they still lack detailed information on those questions. In addition, if a breakdown of the evaluation criteria is provided along with various input sections, a user might fill in only the evaluation criterion for accessibility, or fill in the wrong information, such as information about rooms, under the evaluation criterion for accessibility. Thus, the reliability of the segmented review would be greatly reduced. In this study, we propose an approach to overcome the limitations of the existing leisure activity information websites, namely, (1) the reliability of reviews for each evaluation criterion and (2) the difficulty of identifying the detailed contents that make up the evaluation criteria.
In our proposed methodology, we first identify the review content and construct a lexicon for each evaluation criterion using the terms that are frequently used for that criterion. Next, the sentences in the review documents containing the terms in the constructed lexicon are decomposed into review units, which are then reorganized by evaluation criterion. Finally, the issues of the constructed review units are derived for each evaluation criterion, and summary results are provided along with the review units themselves. This approach therefore aims to help users save time and effort, because they read only the information relevant to each evaluation criterion rather than the entire text of the reviews. Our proposed methodology is based on topic modeling, which is being actively used in text analysis. Each review is decomposed into sentence units rather than being treated as a single document unit. After decomposition, the review units are reorganized according to each evaluation criterion and then used in the subsequent analysis; this is a major difference from existing topic modeling-based studies. In this paper, we collected 423 reviews from hotel information websites and decomposed them into 4,860 review units, which we reorganized according to six evaluation criteria. By applying these review units in our methodology, the analysis results can be presented and the utility of the proposed methodology demonstrated.
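A toy sketch of the lexicon-matching step described above, splitting reviews into sentence-level units and assigning each unit to the criteria whose lexicon terms it contains; the criterion lexicons and the naive sentence splitter are illustrative assumptions and omit the topic-modeling stage.

```python
import re
from collections import defaultdict

# Hypothetical criterion lexicons; the paper builds these from frequent terms per criterion.
lexicon = {
    "accessibility": {"subway", "station", "distance", "airport"},
    "room":          {"bed", "bathroom", "clean", "view"},
    "service":       {"staff", "check-in", "friendly", "desk"},
}

def split_into_units(review):
    """Decompose a review into sentence-level review units."""
    return [s.strip() for s in re.split(r"[.!?]", review) if s.strip()]

def assign_units(review):
    """Attach each review unit to every criterion whose lexicon terms it contains."""
    buckets = defaultdict(list)
    for unit in split_into_units(review):
        words = set(unit.lower().split())
        for criterion, terms in lexicon.items():
            if words & terms:
                buckets[criterion].append(unit)
    return buckets

example = "The subway station is a short distance away. The bed was clean but the bathroom was small."
for criterion, units in assign_units(example).items():
    print(criterion, "->", units)
```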

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae; Lee, Bomi; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people had thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible paths for a single move is greater than the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems. It shows especially good performance in image recognition, and it also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of the deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used deep learning algorithms and techniques, with that of MLP models, the traditional artificial neural network model. However, since all network design alternatives cannot be tested, owing to the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons in each hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The details of applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values starting from a specific value and recognizes features, but for business data the distance between fields does not matter because the fields are usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields so that it learns the characteristics of the whole record at once, and added a hidden layer to make decisions based on the additional features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of the position of each field.
For the dropout technique, we set neurons to be dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. From the experiment we obtained several findings. First, models using dropout make slightly more conservative predictions than those without dropout and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because the CNN performed well not only in the fields where its effectiveness has been proven but also in binary classification problems to which it has rarely been applied. Third, the LSTM algorithm seems unsuitable for these binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
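As a rough sketch of the kind of CNN variant the study describes (a 1-D convolution whose kernel spans all input fields, one additional hidden layer, dropout with probability 0.5, and F1-based evaluation), the following uses tensorflow.keras on placeholder data; the layer sizes, epochs, and data are assumptions, not the paper's configuration.

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.metrics import f1_score

# Hypothetical preprocessed bank-telemarketing matrix: n_samples x n_fields, binary target.
n_fields = 16
X = np.random.rand(1000, n_fields, 1)   # placeholder data, one "channel" per field value
y = np.random.randint(0, 2, 1000)

# CNN variant: kernel spans all fields at once (as the abstract describes), plus dropout 0.5.
model = models.Sequential([
    layers.Input(shape=(n_fields, 1)),
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X[:800], y[:800], epochs=5, batch_size=32, verbose=0)

pred = (model.predict(X[800:], verbose=0).ravel() > 0.5).astype(int)
print("F1:", round(f1_score(y[800:], pred), 3))  # F1 on the positive class, as in the study
```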

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young; Cha, Jae-Min; Shin, Junguk; Yeom, Choongsub
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.47-67 / 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low, and it can lead to judgment errors of more than 30%. Therefore, an accurate steel plate fault diagnosis system has been continuously required in the industry. To meet this need, this study proposes a new steel plate fault diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification because of its low accuracy, which stems from the fact that only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed steel plate fault diagnosis system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, steel plate fault data are collected and used to establish an individual Mahalanobis space for each reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type Signal-to-Noise (SN) ratio are applied for variable optimization, and the overall SN ratio gain is derived from the SN ratio and SN ratio gain. If the derived overall SN ratio gain is negative, the variable should be removed; a variable with a positive gain may be considered worth keeping. Finally, in the fourth stage, the measurement scale composed of the selected useful variables is reconstructed, and an experimental test is implemented to verify the multi-class classification ability and obtain the classification accuracy. If the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed steel plate fault diagnosis system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate fault dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. As a result, the proposed S-MTS-based steel plate fault diagnosis system shows a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance to be applied in the industry.
In addition, the proposed system can reduce the number of measurement sensors installed in the field because of the variable optimization process. These results show that the proposed system not only performs well on steel plate fault diagnosis but can also reduce operation and maintenance costs. For future work, the system will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
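The core S-MTS idea of one Mahalanobis space per fault class and classification by the smallest distance can be sketched as follows; the variables, reference groups, and scaling are illustrative assumptions, not the study's measurement scale or optimized variable set.

```python
import numpy as np

def mahalanobis_distance(x, reference):
    """Scaled Mahalanobis distance of sample x from one reference (class) group."""
    mean = reference.mean(axis=0)
    std = reference.std(axis=0, ddof=1)
    z_ref = (reference - mean) / std           # standardize with the reference group's statistics
    cov_inv = np.linalg.pinv(np.cov(z_ref, rowvar=False))
    z = (x - mean) / std
    d2 = z @ cov_inv @ z
    return d2 / x.shape[0]                     # divide by the number of variables (common MTS scaling)

# Hypothetical fault classes, each with its own reference group (S-MTS builds one space per class).
rng = np.random.default_rng(0)
references = {c: rng.normal(loc=i, scale=1.0, size=(50, 5))
              for i, c in enumerate(["scratch", "stain", "bump"])}

sample = rng.normal(loc=1.0, scale=1.0, size=5)
distances = {c: mahalanobis_distance(sample, ref) for c, ref in references.items()}
print(min(distances, key=distances.get), distances)  # assign the class with the smallest distance
```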

Factors Affecting Recurrence after Video-assisted Thoracic Surgery for the Treatment of Spontaneous Pneumothorax (자연기흉에 대한 비디오흉강경수술후 재발에 영향을 미치는 요인들)

  • 이송암;김광택;이일현;백만종;최영호;이인성;김형묵;김학제
    • Journal of Chest Surgery / v.32 no.5 / pp.448-455 / 1999
  • Background: Recent developments in video-assisted thoracic surgery (VATS) techniques and endoscopic equipment have expanded the application of video-assisted surgical procedures in the field of thoracic surgery. In particular, VATS will probably become the treatment of choice for spontaneous pneumothorax (SP). There are, however, a high recurrence rate, high cost, and a paucity of long-term results. We report the results of postoperative follow-up and retrospectively analyze the perioperative parameters affecting recurrence. Material and Method: From March 1992 to March 1997, 276 patients with spontaneous pneumothorax underwent 292 VATS procedures. Conversion to open thoracotomy was necessary in eight patients, and these patients were excluded from the study. Result: The sex distribution was 249 males and 31 females. The mean age was 28.1 ± 12.2 years (range, 15 to 69 years). Primary SP accounted for 237 cases (83.5%) and secondary SP for 47 cases (16.5%). The major underlying lung diseases associated with secondary SP were tuberculosis, 27 cases (57.4%), and emphysema, 8 cases (38.3%). Operative indications included ipsilateral recurrence 123 (43.9%), persistent air leak 53 (18.9%), x-ray-visible bleb 40 (14.3%), tension 30 (10.7%), contralateral recurrence 21 (7.5%), uncomplicated first episode 8 (2.9%), bilateral 3 (1.1%), and complicated episode 2 (0.7%). Blebs were visualized in 247 cases (87%), and stapled blebectomy was performed in 244 cases (85.9%). Early postoperative complications occurred in 33 cases (11.6%): 16 prolonged air leaks of more than 5 days (four of these required a second operation, at which missed blebs were found); 5 bleeding; 5 empyema; 2 atelectasis; 1 wound infection. No deaths occurred. The mean operative time was 52.8 ± 23.1 minutes (range, 20 to 165 minutes). The mean duration of chest tube drainage was 5.0 ± 4.5 days (range, 2 to 37 days). The mean duration of hospital stay was 8.2 ± 5.5 days (range, 3 to 43 days). At a mean follow-up of 22.3 ± 18.4 months (range, 1 to 65 months), 12 patients (4.2%) were lost to follow-up. There were 24 recurrences; seven patients underwent a second operation, and in 6 of these patients (85.7%) missed blebs were found. Twelve perioperative parameters (age, sex, site, underlying disease, extent of collapse, operative indication, size of bleb, number of blebs, location of blebs, bleb management, pleural procedure, prolonged postoperative air leak) were analyzed statistically to identify significant predictors of recurrence. The significant predictors of recurrence were underlying disease [17.0% (8/47) vs. 6.8% (16/237), p=0.038], prolonged postoperative air leak [37.5% (6/16) vs. 6.7% (18/268), p=0.001], and pleural procedure [11.4% (19/167) vs. 4.3% (5/117), p=0.034]. Blebectomy showed a lower recurrence rate than non-blebectomy [8.2% (20/244) vs. 10.0% (4/40)], but this difference was not statistically significant (p=0.758). Conclusion: We conclude that careful identification of blebs during VATS is important to reduce recurrence, and that cases in which no bleb is identified and cases of secondary spontaneous pneumothorax are indications for a pleural procedure. VATS is a valid alternative to open procedures for the treatment of spontaneous pneumothorax, with less pain, shorter hospital stay, more rapid return to work, high patient acceptance, fewer scars, and excellent cosmesis. However, given its high recurrence rate and high cost, it is necessary to evaluate long-term results for recurrence and to observe carefully during VATS.


A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu; Park, Hyun-Jung; Park, Jin-Soo
    • Asia pacific journal of information systems / v.17 no.4 / pp.31-59 / 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available in the future, effectively and efficiently ranking search results will become more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure based ranking methods have played an essential role in the World Wide Web (WWW), and many people now recognize their effectiveness and efficiency. On the other hand, since the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph, so link-structure based ranking methods seem highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, which has only one recursive property, a 'refers to' property corresponding to hyperlinks. The Semantic Web, however, encompasses various kinds of classes and properties, and consequently ranking methods used in the WWW should be modified to reflect the complexity of its information space. Previous research has addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's algorithm, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristics of the property linking the two resources. A node with a high objectivity score becomes the object of many RDF triples, and a node with a high subjectivity score becomes the subject of many RDF triples. They developed several kinds of Semantic Web systems to validate their technique and reported experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or overall resources should be described in detail to a certain degree, for their algorithm to work properly. Second, a Tightly-Knit Community (TKC) effect, the phenomenon in which pages that are less important but densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that can solve the problems of the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are meant to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world, and it turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC (Tightly Knit Community) effect and can further shed light on the other limitations noted in previous research. In addition, we propose two ways to incorporate data-type properties, which have not been employed previously even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research and which enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
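To make the property-weighted flow of importance concrete, here is a generic weighted link-analysis sketch over a toy RDF-style graph: user-assigned property weights scale how much importance flows from subject to object, and a damped power iteration produces the scores. The triples, weights, and PageRank-style iteration are illustrative assumptions and only approximate the spirit of the class-oriented algorithm, not its exact formulation.

```python
import numpy as np

# Hypothetical RDF-style triples (subject, property, object) restricted to resources of one class.
triples = [
    ("paperA", "cites", "paperB"),
    ("paperC", "cites", "paperB"),
    ("paperB", "cites", "paperD"),
    ("paperA", "sameVenueAs", "paperC"),
]
# User-supplied property weights, reflecting relative significance within the class.
prop_weight = {"cites": 1.0, "sameVenueAs": 0.3}

nodes = sorted({n for s, _, o in triples for n in (s, o)})
idx = {n: i for i, n in enumerate(nodes)}

# Build a column-stochastic weighted adjacency matrix: importance flows subject -> object.
W = np.zeros((len(nodes), len(nodes)))
for s, p, o in triples:
    W[idx[o], idx[s]] += prop_weight[p]
col_sums = W.sum(axis=0)
W[:, col_sums > 0] /= col_sums[col_sums > 0]

# Damped power iteration, PageRank-style.
d, r = 0.85, np.full(len(nodes), 1.0 / len(nodes))
for _ in range(100):
    r = (1 - d) / len(nodes) + d * (W @ r)
print(dict(zip(nodes, np.round(r / r.sum(), 3))))
```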

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.47-73 / 2020
  • KTX rolling stock is a system consisting of several machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. In the event of a rolling stock failure, the knowledge and experience of the maintainer make a difference in the time and quality of the work needed to solve the problem, and thus in the resulting availability of the vehicle. Although problem solving is generally based on fault manuals, experienced and skilled professionals can diagnose faults and take action quickly by applying personal know-how. Since this knowledge exists in a tacit form, it is difficult to pass it on completely to a successor, and previous studies have developed case-based rolling stock expert systems to turn it into a data-driven resource. Nonetheless, research on the KTX rolling stock most commonly used on main lines, and on systems that extract textual meaning and search for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for newly occurring failures by using the know-how of rolling stock maintenance experts as examples of problem solving. For this purpose, a case base was constructed by collecting the rolling stock failure data generated from 2015 to 2017, and an integrated dictionary was constructed separately from the case base to include the essential terminology and failure codes specific to the railway rolling stock sector. Based on the deployed case base, a new failure is matched against past cases, the top three most similar failure cases are retrieved, and the actual actions taken in those cases are proposed as a diagnostic guide. To compensate for the limitation of keyword-matching case retrieval in previous case-based-reasoning studies on rolling stock failure expert systems, this study applied various dimensionality reduction measures that take into account the semantic relationships among failure descriptions when calculating similarity, and verified their usefulness through experiments. Among the dimensionality reduction techniques, similar cases were retrieved by applying three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, to extract the characteristics of each failure and measure the cosine distance between the vectors. Precision, recall, and F-measure were used to assess the performance of the proposed actions. To compare the dimensionality reduction techniques, an analysis of variance confirmed that the performance differences among the five algorithms, including an algorithm that randomly extracts failure cases with identical failure codes and an algorithm that applies cosine similarity directly to the words, were statistically significant. In addition, optimal settings for practical application were derived by verifying differences in performance depending on the number of dimensions used for dimensionality reduction. The analysis showed that direct cosine similarity performed better than the dimensionality-reduced approaches using Non-negative Matrix Factorization (NMF) and Latent Semantic Analysis (LSA), and the algorithm using Doc2Vec performed best.
Furthermore, in terms of dimensionality reduction, the larger the number of dimensions, up to an appropriate level, the better the performance. Through this study, we confirmed the usefulness of effective methods for extracting the characteristics of data and converting unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are recorded as text. Text mining is being studied for use in many areas, but studies using such text data are still lacking in environments like the one addressed here, with numerous specialized terms and limited access to data. In this regard, it is significant that this study is the first to present an intelligent diagnostic system that suggests actions by retrieving similar cases with text mining techniques that extract the characteristics of a failure, complementing keyword-based case search. It is expected that this will serve as a basic study for developing diagnostic systems that can be used immediately on site.
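As a rough illustration of the retrieval pipeline (vectorize failure text, reduce its dimensionality, rank past cases by cosine similarity, and propose the actions of the top three), the sketch below uses scikit-learn's TF-IDF with TruncatedSVD as an LSA-style stand-in; the case texts, dimension count, and preprocessing are invented and do not reflect the study's data, Doc2Vec variant, or tuned settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical case base of past failure descriptions (the study uses 2015-2017 KTX records).
case_base = [
    "traction motor overheat alarm during acceleration",
    "pantograph contact strip wear detected on inspection",
    "brake cylinder pressure drop in trailer car",
    "door close failure caused by obstacle sensor fault",
]
new_failure = ["door does not close and obstacle sensor warning shown"]

# LSA-style variant: TF-IDF followed by truncated SVD, then cosine similarity in the reduced space.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(case_base + new_failure)
svd = TruncatedSVD(n_components=3, random_state=0)
reduced = svd.fit_transform(tfidf)

sims = cosine_similarity(reduced[-1:], reduced[:-1]).ravel()
top3 = sims.argsort()[::-1][:3]
for rank, i in enumerate(top3, 1):
    print(rank, round(sims[i], 3), case_base[i])  # the actions of these cases would be proposed as the guide
```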