• Title/Summary/Keyword: Importance and Performance Analysis


How to improve the accuracy of recommendation systems: Combining ratings and review texts sentiment scores (평점과 리뷰 텍스트 감성분석을 결합한 추천시스템 향상 방안 연구)

  • Hyun, Jiyeon; Ryu, Sangyi; Lee, Sang-Yong Tom
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.219-239 / 2019
  • As providing customized services to individuals grows more important, research on personalized recommendation systems is being carried out continuously. Collaborative filtering is one of the most popular approaches in academia and industry. However, it has a limitation: recommendations have mostly been based on quantitative information such as users' ratings, which lowers accuracy. To address this problem, many studies have attempted to improve recommendation performance by using information beyond the quantitative ratings; a good example is sentiment analysis of customer review texts. Nevertheless, existing research has not directly combined the results of sentiment analysis with quantitative rating scores in the recommendation system. Therefore, this study aims to reflect the sentiments expressed in reviews in the rating scores. In other words, we propose a new algorithm that converts a user's own review into quantitative information and reflects it directly in the recommendation system. To do this, users' reviews, which are originally qualitative information, had to be quantified. In this study, a sentiment score was calculated through text-mining sentiment analysis. The data consisted of movie reviews, and a domain-specific sentiment dictionary was constructed for them. Regression analysis was used to construct the sentiment dictionary: positive/negative dictionaries were built using Lasso, Ridge, and ElasticNet regression. The accuracy of each constructed dictionary was verified through a confusion matrix: 70% for the Lasso-based dictionary, 79% for the Ridge-based dictionary, and 83% for the ElasticNet-based dictionary (α = 0.3).
Therefore, in this study, the sentiment score of each review is calculated with the ElasticNet-based dictionary and combined with the rating to create a new rating. We show that collaborative filtering that reflects the sentiment scores of user reviews is superior to the traditional method that considers only the existing ratings. To demonstrate this, the proposed algorithm was applied to memory-based user-based collaborative filtering (UBCF), item-based collaborative filtering (IBCF), and the model-based matrix factorization methods SVD and SVD++. For each algorithm, the mean absolute error (MAE) and root mean square error (RMSE) were calculated to compare the system using the combined sentiment-and-rating scores with the system using ratings alone. In MAE, the improvement was 0.059 for UBCF, 0.0862 for IBCF, 0.1012 for SVD, and 0.188 for SVD++; in RMSE, it was 0.0431 for UBCF, 0.0882 for IBCF, 0.1103 for SVD, and 0.1756 for SVD++. These results show that the prediction performance of the ratings reflecting the sentiment scores proposed in this paper is superior to that of the conventional ratings. In other words, collaborative filtering that reflects the sentiment score of user reviews achieves higher accuracy than conventional collaborative filtering that considers only the quantitative score. We then performed a paired t-test to validate that the proposed model is the better approach, and concluded that it is. To overcome the limitation of previous research that judges a user's sentiment only by the quantitative rating score, this study quantified the review so that the user's opinion could be considered in a more refined way in the recommendation system, improving its accuracy.
The findings of this study are expected to have managerial implications for recommendation-system developers who need to consider both quantitative and qualitative information. The way the combined system is constructed in this paper might be used directly by such developers.

Accounting for zero flows in probabilistic distributed hydrological modeling for ephemeral catchment (무유출의 고려를 통한 간헐하천 유역에 확률기반의 격자형 수문모형의 구축)

  • Lee, DongGi; Ahn, Kuk-Hyun
    • Journal of Korea Water Resources Association / v.53 no.6 / pp.437-450 / 2020
  • This study presents a probabilistic distributed hydrological model for ephemeral catchments, where zero flow often occurs due to the distinct climate characteristics of South Korea. The gridded hydrological model is developed by combining the Sacramento Soil Moisture Accounting (SAC-SMA) runoff model with a routing model. In addition, an error model is employed to make the hydrologic model probabilistic: specifically, the hydrologic model is coupled with a censoring error model to properly represent the features of ephemeral catchments. The performance of the censoring error model is evaluated by comparing it with the Gaussian error model, which has commonly been utilized in probabilistic modeling. We first address the necessity of considering ephemeral catchments through a review of the extensive research conducted over the recent decade. Then, the Yongdam Dam catchment is selected as our study area to confirm the usefulness of the hydrologic model developed in this study. Our results indicate that although both models produce reliable results, the censored error model is the more dependable of the two. In addition, the Gaussian model delivers many negative flow values, suggesting that it occasionally produces unrealistic estimates in hydrologic modeling. In an in-depth analysis, we find that the efficiency of the censored error model may increase as the frequency of zero flow increases. Finally, we discuss the importance of utilizing the censored error model when the hydrologic model is applied to ephemeral catchments in South Korea.
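
The contrast between the two error models can be illustrated with a left-censored (Tobit-style) Gaussian log-likelihood, in which observed zero flows contribute the probability mass below the censoring threshold rather than a point density. This is a minimal sketch with hypothetical flow values, not the paper's SAC-SMA implementation.

```python
import numpy as np
from scipy.stats import norm

def censored_loglik(obs, sim, sigma, threshold=0.0):
    """Gaussian error model left-censored at `threshold`: positive flows
    contribute a density term, while zero flows contribute the probability
    that the latent flow lies at or below the threshold."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    zero = obs <= threshold
    ll = np.empty_like(obs)
    ll[~zero] = norm.logpdf(obs[~zero], loc=sim[~zero], scale=sigma)
    ll[zero] = norm.logcdf(threshold, loc=sim[zero], scale=sigma)
    return ll.sum()

# Hypothetical daily flows: two observed zero-flow days and one wet day.
ll = censored_loglik(obs=[0.0, 0.0, 1.0], sim=[0.2, -0.1, 1.0], sigma=0.5)
```

A plain Gaussian error model would have to assign a density to the exact zeros and can simulate negative flows, which is the pathology the abstract reports for the uncensored model.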

A Study on the Effect of User Value on Smartwatch Digital Healthcare Acceptance Intention to Promote Digital Healthcare Venture Start Up (Digital Healthcare 벤처창업 촉진을 위한, 사용자 가치가 Smartwatch Digital Healthcare 수용의도에 미치는 영향 연구)

  • Eekseong Jin; Soyoung Lee
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.2 / pp.35-52 / 2023
  • Recently, as the non-face-to-face environment has developed due to COVID-19 and environmental pollution, the importance of online digital healthcare is increasing, and venture start-ups and activities in areas such as health care, telemedicine, and digital therapeutics are actively under way. This study examined the acceptance of digital healthcare smartwatches with an integrated approach combining the extended unified technology acceptance model (UTAUT2) and behavioral reasoning theory (BRT). This state-of-the-art integrated technology acceptance model for research on innovative technology acceptance was used to identify major factors such as utility expectations, social effects, convenience, price barriers, lack of alternatives, and behavioral intentions. For the study, about 410 responses were collected from ordinary people across the country, ranging in age from teens to 60s, and the hypotheses were verified using structural equation modeling after testing the reliability and validity of the data. SPSS 23 and AMOS 23 were used for the analysis. The results show that personal innovativeness has a significant impact on the reasons for acceptance (use value, social impact, convenience of use), on attitude, and on the reasons for non-acceptance (price barriers, lack of alternatives, and barriers to use). These results are consistent with previous studies confirming the influence of the main values of innovative ICT on user acceptance intention. In addition, the reasons for acceptance had a significant effect on attitude, but the effect of the reasons for non-acceptance was not significant. This suggests that consumers are interested in new ICT products and services but purchase them more carefully and selectively. This study has evolved from acceptance analysis of general-purpose consumer innovation technology to acceptance analysis of consumer value in smartwatch digital healthcare, a new and important area for the future.
Industrially, the findings can contribute to product purchase and marketing. It is hoped that this study will help expand research in the digital healthcare sector, which will play an important role in our lives in the future, and that it will develop into in-depth factors better suited to consumer value through integrated approach models and integrated analysis of consumer acceptance and non-acceptance.


Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan; Han, Nam-Gi; Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data qualifies as Big Data in that it satisfies the conditions of volume (amount of data), velocity (data input and output speed), and variety (diversity of data types). If the trend of an issue can be discovered in SNS Big Data, this information can serve as an important new source for the creation of value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for SNS Big Data analysis. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) a topic keyword set corresponding to a daily ranking; (2) a daily time-series graph of a topic over the duration of a month; (3) the importance of a topic, shown through a treemap based on a scoring system and frequency; and (4) a daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process the various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to process a large amount of real-time data rapidly, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines.
Furthermore, we use MongoDB, a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data-processing performance. In the Age of Big Data, visualization is especially attractive to the Big Data community because it helps analysts examine data easily and clearly. TITS therefore uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to arbitrary data; interaction with the data is easy, and it is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and can surface issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the effectiveness of our issue-detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and based on this we confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim; Kwon, Oh-Byung
    • Asia pacific journal of information systems / v.20 no.2 / pp.63-86 / 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, which is the cornerstone for providing personalized, context-specific services. Yet the danger of leaked personal information is increasing, because the data retrieved by the sensors usually contains private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing: existing studies have focused only on a small subset of them, so there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitation of users' knowledge and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. A survey that assumes the participants have sufficient experience with or understanding of the technologies it presents may therefore not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus highly needed. Hence, the purpose of this paper is to identify a generic set of factors of information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a universal set of information privacy concern factors and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure for Delphi studies proposed by Okoli and Pawlowski.
This involved three general rounds: (1) brainstorming important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. Throughout, experts were treated as individuals, not as panels. Adapting Okoli and Pawlowski's process, we performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors of information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid this, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire took the main factors provided in the first round and fleshed them out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates; final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence; to do so, a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds, was adopted during the survey process. As a result, experts reported that context data collection and the highly identifiable level of identity data are the most important main factor and sub-factor, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the information privacy point of view. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those in existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns. The traditional questionnaire method was not selected because of users' absolute lack of understanding of and experience with the new technology underlying context-aware personalized services. Regarding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and the sensor network as the most important of the technological characteristics of context-aware personalized services.
In the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology: which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, alongside the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following the evaluation of the sub-factors, additional studies would be needed on approaches for reducing users' privacy concerns toward technological characteristics such as the highly identifiable level of identity data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display presenting services to users in context-aware personalized services, in line with the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
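
The concordance analysis over successive Delphi rounds is commonly operationalized as Kendall's coefficient of concordance (W). The sketch below uses a hypothetical expert rank matrix, since the abstract does not report the actual data or formula used.

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance for an (m experts x n items)
    rank matrix, assuming no tied ranks; 1.0 = perfect agreement, 0.0 = none."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    totals = ranks.sum(axis=0)                 # rank sum per item
    s = ((totals - totals.mean()) ** 2).sum()  # spread of rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical experts ranking four privacy-concern factors.
ranks = [[1, 2, 3, 4],
         [1, 3, 2, 4],
         [1, 2, 4, 3]]
w = kendalls_w(ranks)  # ~0.78: fairly strong agreement
```

In a Delphi setting, W is tracked across rounds; rising values indicate the panel is converging toward consensus.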

The Impact of Service Level Management(SLM) Process Maturity on Information Systems Success in Total Outsourcing: An Analytical Case Study (토털 아웃소싱 환경 하에서 IT서비스 수준관리(Service Level Management) 프로세스 성숙도가 정보시스템 성공에 미치는 영향에 관한 분석적 사례연구)

  • Cho, Geun Su; An, Joon Mo; Min, Hyoung Jin
    • Asia pacific journal of information systems / v.23 no.2 / pp.21-39 / 2013
  • As the utilization of information technology and the turbulence of technological change increase in organizations, the adoption of IT outsourcing also grows, as a way to manage IT resources more effectively and efficiently. In this way of managing IT, the service level management (SLM) process becomes critical to deriving success from outsourcing from the viewpoint of end users in the organization. Even though much research on service level management and agreements has been done over recent decades, the performance of the SLM process has not been evaluated in terms of the final objectives of the management efforts, or of success from the end users' viewpoint. This study explores the relationship between SLM maturity and IT outsourcing success from the users' point of view through an analytical case study of four client organizations of an IT outsourcing vendor, a member company of a major Korean conglomerate. To set up a model for the analysis, previous research on SLM process maturity and information systems success is reviewed. In particular, information systems success from the users' point of view is reviewed based on DeLone and McLean's study, which is currently accepted as a comprehensively tested model of information systems success. The model proposed in this study argues that SLM process maturity influences information systems success, evaluated in terms of the information quality, systems quality, service quality, and net effect proposed by DeLone and McLean. SLM process maturity is measured in the planning process, the implementation process, and the operation and evaluation process. Instruments for measuring the factors in the proposed constructs of information systems success and SLM process maturity were collected from previous research and evaluated to secure reliability and validity, utilizing appropriate statistical methods and pilot tests, before conducting the case study.
Four cases from four different companies under one vendor were used for the analysis. All of the cases had been contracted under an SLA (Service Level Agreement) and had implemented ITIL (IT Infrastructure Library), Six Sigma, and BSC (Balanced Scorecard) methods for the last several years, which means that all the client organizations pursued concerted efforts to acquire quality services from IT outsourcing from the organizational and users' points of view. To compare the differences among the four organizations in IT outsourcing success, t-tests and non-parametric analyses were applied to the data set collected from the organizations using the survey instruments. The process maturities of the planning and implementation phases of SLM were found not to influence any dimension of information systems success from the users' point of view; only the SLM maturity in the operation and evaluation phase influenced systems quality. This result seems to run quite against the arguments of IT outsourcing practice in the field, which usually emphasizes the importance of the planning and implementation processes up front in IT outsourcing projects. According to an after-the-fact observation by an expert in one of the participating organizations, the clients' needs and motivations for the outsourcing contracts were already quite familiar to the vendor, a long-term partner under the same conglomerate, so that maturity in the planning and implementation phases appears not to be a differentiating factor for the success of IT outsourcing. This study will be a foundation for future research in the area of IT outsourcing management and success, in particular in service level management.
It could also guide managers in IT outsourcing practice to focus on the SLM process in the operation and evaluation stage, especially for long-term outsourcing contracts in a unique context such as Korean IT outsourcing projects. This study has some limitations in generalization because the sample size is small and the context is confined to a unique environment. For future exploration, survey-based research could be designed and implemented.
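
The group-comparison step mentioned above (t-tests plus non-parametric analysis) can be sketched with SciPy on hypothetical 7-point survey scores for two client organizations; the paper's actual instruments and data are not reproduced here.

```python
# Sketch: comparing users' perceived systems quality between two client
# organizations with an independent-samples t-test and its non-parametric
# counterpart (Mann-Whitney U). Scores are hypothetical 7-point survey items.
from scipy import stats

org_a = [5, 6, 5, 7, 6, 5, 6]
org_b = [4, 4, 5, 3, 4, 5, 4]

t_stat, t_p = stats.ttest_ind(org_a, org_b)
u_stat, u_p = stats.mannwhitneyu(org_a, org_b, alternative="two-sided")
print(f"t-test p={t_p:.4f}, Mann-Whitney p={u_p:.4f}")
```

Reporting both tests, as the study does, guards against the small-sample normality assumption behind the t-test.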


What are the Characteristics and Future Directions of Domestic Angel Investment Research? (국내 엔젤투자 연구의 특징과 향후 방향은 무엇인가?)

  • Min Kim; Byung Chul Choi; Woo Jin Lee
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.6 / pp.57-70 / 2023
  • The investigation delved into 457 pieces of scholarly work, encompassing articles, published theses, and dissertations from the National Research Foundation of Korea, spanning the period from the 1997 IMF financial crisis up to 2022. The materials were sourced using terms such as 'angel investment', 'angel investor', and 'angel investment attraction'. The initial phase involved filtering out redundant entries from the preliminary collection of 267 works, leaving aside pieces whose abstracts did not pertain directly to angel investment. The next stage involved a more rigorous selection process: of the 43 papers earmarked in the preceding cut, only 32 were chosen, the criteria excluding conference presentations, articles that were either not submitted or inconclusive, and those that duplicated content under different titles. The final selection of 32 papers underwent a thorough systematic literature review. These documents, all pertinent to angel investment in South Korea, were scrutinized under five distinct categories: 1) publication year, 2) research themes, 3) research strategies, 4) research participants, and 5) research methods. This process illuminated the existing landscape of angel investment studies within Korea. Moreover, the study pinpointed gaps in the current body of research, offering guidance on future scholarly directions and proposing social scientific theories to further enrich the field. The analysis also seeks to pinpoint which areas require additional exploration to energize angel investment research going forward. Through a comprehensive review of the literature, this research intends to establish future research trajectories and pinpoint areas that remain relatively underexplored in Korea's angel investment research stream.
This study revealed that current research on domestic angel investment is concentrated in several areas: 1) the traits of angel investors, 2) the motivations behind angel investing, 3) startup ventures, 4) relevant institutions and policies, and 5) the various forms of angel investment. It determined that there is a need to broaden the scope of research to help enhance and stimulate the scale of domestic angel investing, including research into performance analysis of angel investments and detailed case studies in the field. Furthermore, the study emphasizes the importance of diversifying research efforts: instead of focusing solely on specific factors such as investment types, startups, accelerators, venture capital, and regulatory frameworks, it calls for research that explores a variety of associated variables, including aspects of crowdfunding and return on investment in the context of angel investing, to ensure a more holistic approach to research in this domain. Specifically, there is a clear need for more detailed studies of the relationships among the dependent variables that influence the outcomes of angel investments. Moreover, it is essential to invigorate both qualitative and quantitative research that examines the theoretical framework from multiple perspectives, analyzing the structure of the variables that affect angel investments and the decisions surrounding them, thereby enriching the theoretical foundation of this field. Finally, we presented a direction for future research by confirming that the effect on the completeness of the business plan varies with entrepreneurs' satisfaction in addition to its components.


An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok; Lee, Suk Joo; Choi, Byounggu
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.79-96 / 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of research has been conducted to improve firms' short-term performance and to enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm a competitive advantage. The discovery of promising technologies depends on how a firm evaluates their value, so many evaluation methods have been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. While this approach provides in-depth analysis and ensures the validity of the results, it is usually cost- and time-ineffective and is limited to qualitative evaluation. Considerable studies attempt to forecast the value of technology by using patent information to overcome the limitations of the expert-opinion approach. Patent-based technology evaluation has served as a valuable assessment approach for technological forecasting because a patent contains a full and practical description of a technology in a uniform structure; furthermore, it provides information that is not divulged in any other source. Although the patent-information approach has contributed to our understanding of how to predict promising technologies, it has some limitations: predictions have been made based on past patent information, and the interpretations of patent analyses are not consistent. To fill this gap, this study proposes a technology forecasting methodology integrating the patent-information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, the promise of technologies is evaluated from three different and complementary dimensions: impact, fusion, and diffusion.
The impact of technologies refers to their influence on the development and improvement of future technologies and is also clearly associated with their monetary value. The fusion of technologies denotes the extent to which a technology fuses different technologies and represents the breadth of search underlying the technology. Fusion can be calculated per technology or per patent, so this study measures two types of fusion index: a fusion index per technology and a fusion index per patent. Finally, the diffusion of technologies denotes their degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (e.g., t-n, t-n-1, t-n-2, …) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, the study recommends final promising technologies based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promising index for each technology. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes, but slightly higher for the other indexes.
These unexpected results may be explained, in part, by the small number of patents. Since this study uses patent data only from class G06F, the number of sample patents is relatively small, leading to learning that is too incomplete to satisfy the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technologies. This study attempts to extend existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial neural network. By providing a quantitative prediction methodology, it can help managers engaged in technology development planning and policy makers implementing technology policy. In addition, this study could help other researchers by providing a deeper understanding of the complex field of technological forecasting.
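The second module, as described, trains a network by backpropagation to predict the five index values at time t from their lagged values. The abstract gives no network architecture or data, so the sketch below is only a minimal illustration under assumed settings: synthetic lagged data, one hidden layer, and plain full-batch gradient descent; the evaluation at the end uses mean absolute error, as in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lagged index data: each row concatenates the five indexes
# (impact, fusion per technology, fusion per patent, diffusion per
# technology, diffusion per patent) at times t-3, t-2, t-1; the target
# is the five index values at time t.
n_samples, n_lags, n_idx = 200, 3, 5
X = rng.random((n_samples, n_lags * n_idx))
true_w = rng.random((n_lags * n_idx, n_idx)) / n_lags   # synthetic dynamics
y = X @ true_w + 0.01 * rng.standard_normal((n_samples, n_idx))

# One hidden layer trained by plain backpropagation.
n_hidden, lr = 10, 0.1
W1 = rng.standard_normal((X.shape[1], n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_idx)) * 0.1
b2 = np.zeros(n_idx)

def forward(x):
    h = np.tanh(x @ W1 + b1)        # hidden activations
    return h, h @ W2 + b2           # linear output layer

losses = []
for epoch in range(500):
    h, pred = forward(X)
    err = pred - y                  # gradient of 0.5*MSE w.r.t. pred
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the error through both layers.
    gW2 = h.T @ err / n_samples
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)   # tanh derivative
    gW1 = X.T @ dh / n_samples
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
mae = float(np.mean(np.abs(pred - y)))  # mean absolute error, as in the study
```

In the study this MAE would be compared against a multiple regression baseline fitted to the same lagged inputs, index by index.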

The Need Analysis for Operating Course-based National Technical Qualification Course of Vocational School Teachers (직업계고 교사의 과정평가형 자격 과정 운영에 대한 교육요구도 분석)

  • Park, Byeong-seon;Yoon, Ji-A;Lee, Chang-hoon
    • 대한공업교육학회지
    • /
    • v.44 no.2
    • /
    • pp.28-46
    • /
    • 2019
  • The purpose of this study is to provide basic data for establishing a support program for operating Course-based National Technical Qualification (CNTQ) courses by examining the educational needs of vocational school teachers operating such courses, and thereby to contribute to the settlement of CNTQ courses in vocational schools. To achieve these purposes, this study identified 27 tasks performed by teachers operating CNTQ courses and surveyed the perceived importance and performance of each task. The findings are as follows. First, the need analysis showed that the first-priority tasks are: selection of qualification fields and confirmation of organization criteria; organization of educational training time by competency unit; organization of subjects and establishment of a competency-unit operating plan by grade and semester; selection of teaching materials; implementation of education and training; establishment of an evaluation plan; implementation of evaluation; re-education and re-evaluation of students with grades under 40%; guidance for paper evaluation; guidance for practical evaluation; and guidance for interview evaluation. Second, 'application of external evaluation' and 'guidance for retaking an exam after failure' are secondary-priority tasks. From these results, the following conclusions were drawn. First, the next CNTQ support program will be more effective if it includes content addressing the first-priority tasks. Second, the priority of educational needs for tasks in the operating-plan stage is commonly high; in particular, the highest rankings indicate that teachers need full support from the very first step of operating a course, and a program in which teachers operating courses in similar qualification fields share their operating experiences is expected to be effective. Third, the priority of educational needs for the external evaluation step also ranked high.
Because the level of difficulty and the form of the practical evaluation output in the external evaluation differ by qualification field, the guidance method must also differ; a support program organized by similar fields is needed.
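The study ranks the 27 tasks by educational need from paired importance and performance ratings, but the abstract does not state which formula it uses. The Borich needs-assessment model is a common choice for this kind of importance-performance survey data, so the sketch below assumes it; the task names and ratings are hypothetical.

```python
# Hypothetical 5-point importance/performance ratings from four teachers
# for three illustrative tasks (the actual 27 tasks' data is not given).
tasks = {
    "task A (operating plan)":      ([5, 5, 4, 5], [2, 3, 2, 3]),
    "task B (teaching materials)":  ([4, 4, 3, 4], [4, 3, 4, 4]),
    "task C (external evaluation)": ([5, 4, 5, 5], [2, 2, 3, 2]),
}

def borich_need(importance, performance):
    """Borich need score: mean over respondents of (I - P) * mean(I)."""
    mean_i = sum(importance) / len(importance)
    gaps = [(i - p) * mean_i for i, p in zip(importance, performance)]
    return sum(gaps) / len(gaps)

# Higher score = larger gap between required and present competence,
# hence higher educational need.
ranked = sorted(tasks, key=lambda t: borich_need(*tasks[t]), reverse=True)
for t in ranked:
    print(f"{t}: {borich_need(*tasks[t]):.2f}")
```

Studies of this type often combine such a ranking with a second criterion (e.g. the Locus for Focus quadrant model) to separate first- and secondary-priority tasks, which would match the two-tier result reported above.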

An Empirical Study on the Factors Affecting RFID Adoption Stage with Organizational Resources (조직의 자원을 고려한 RFID 도입단계별 영향요인에 관한 실증연구)

  • Jang, Sung-Hee;Lee, Dong-Man
    • Asia pacific journal of information systems
    • /
    • v.19 no.3
    • /
    • pp.125-150
    • /
    • 2009
  • RFID (Radio Frequency IDentification) is a wireless recognition technology that can be used to recognize, trace, and identify people, things, and animals using radio frequency (RF). RFID will bring about many changes in manufacturing and distribution, among other areas. In accordance with the increasing importance of RFID techniques, great advances have been made in RFID studies. RFID research initially took the form of literature reviews or case studies; more recently, empirical research has begun to appear. But most existing research on RFID adoption has been restricted to a dichotomous measure of 'adoption vs. non-adoption' or to adoption intention. In short, RFID research is still at an initial stage, and research on RFID performance, integration, and usage remains scarce. The purpose of this study is to investigate which factors are important for RFID adoption and implementation in light of organizational resources. In this study, organizational resources are classified into finance resources and IT knowledge resources. A research model and four hypotheses are set up to identify the relationships among these variables based on theories of technological innovation, adoption stages, and organizational resources. To conduct this study, a survey was carried out from September 27 to October 23, 2008. The questionnaire was completed by managers and workers from physical distribution and manufacturing companies related to RFID in South Korea. Of the 180 surveys collected, 37 that turned out unfit for the study were discarded, and the remaining 143 (adoption stage 89, implementation stage 54) were used for the empirical study. The statistics were analyzed using Excel 2003 and SPSS 12.0. The results of the analysis are as follows.
First, in the adoption stage, perceived benefits, standardization, perceived cost savings, environmental uncertainty, and pressure from rival firms have significant effects on the intent of RFID adoption. In the implementation stage, perceived benefits, standardization, environmental uncertainty, pressure from rival firms, inter-organizational cooperation, and inter-organizational trust have significant effects on the extent of RFID use. In contrast, inter-organizational cooperation and inter-organizational trust did not show much impact on the intent of RFID adoption, while perceived cost savings did not significantly affect the extent of RFID use. Second, in the adoption stage, finance resources negatively moderated the relationship between inter-organizational cooperation and the intent of RFID adoption, and IT knowledge resources negatively moderated the relationship between perceived cost savings and the intent of RFID adoption. Third, in the implementation stage, finance resources moderated the relationship between environmental uncertainty and the extent of RFID use, while IT knowledge resources moderated the relationship between perceived cost savings and the extent of RFID use. Limitations and future research issues can be summarized as follows. First, it is difficult to say that the sample is large enough to be representative of the population. Second, because the sample was drawn from distribution and manufacturing companies only, the analysis may not fully capture the effect on industry as a whole. Third, considering that organizational resources in RFID research require a great deal of further study, this research may be insufficient to fulfill the purpose it initially set out to achieve. Future studies on performance are therefore needed to better understand RFID adoption and implementation at the organizational level.