• Title/Summary/Keyword: Key-Word Network

Search Results: 100

Fast Stream Cipher AA32 for Software Implementation (소프트웨어 구현에 적합한 고속 스트림 암호 AA32)

  • Kim, Gil-Ho; Park, Chang-Soo; Kim, Jong-Nam; Cho, Gyeong-Yeon
    • The Journal of Korean Institute of Communications and Information Sciences, v.35 no.6B, pp.954-961, 2010
  • Stream ciphers have traditionally been considered weaker than block ciphers in terms of security, but faster in execution. However, extensive research on block cipher implementations has closed this gap, and today there is little difference in execution speed compared with AES; a stream cipher that is both secure and fast is therefore urgently needed. In this paper, we propose AA32, a fast stream cipher with 32-bit output composed of an ASR (Arithmetic Shift Register) and simple logical operations. The proposed algorithm is designed to be easy to implement in software. AA32 supports a 128-bit key and operates on word and byte units. It applies a 151-bit ASR as its linear feedback sequencer, and its reduction function has a very simple structure consisting of two major parts that use simple logical operations instead of an S-Box for the non-linear operation. The proposed stream cipher AA32 is faster than SSC2 and Salsa20 and satisfies current security requirements. It is a fast stream cipher algorithm suitable for wireless Internet environments such as mobile phone systems, real-time processing such as DRM (Digital Rights Management), and computationally constrained environments such as WSNs (Wireless Sensor Networks).
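
The abstract gives only the outline of AA32 (a 151-bit shift-register state, a 128-bit key, word and byte operations, and a simple logical reduction producing 32-bit keystream words). The sketch below illustrates that general pattern only; the key schedule, state size, update rule, and constants are invented placeholders, not the published AA32 design.

```python
# Illustrative sketch only: a generic word-oriented keystream generator in the
# style the abstract describes (a shift-register state plus a simple logical
# reduction to one 32-bit word per step). The constants, state size, and update
# rule are placeholders, NOT the published AA32 specification.

MASK32 = 0xFFFFFFFF

class ToyWordStreamCipher:
    def __init__(self, key: bytes):
        assert len(key) == 16  # 128-bit key, as in AA32
        # State: five 32-bit words derived from the key (placeholder key schedule).
        self.state = [int.from_bytes(key[i:i + 4], "little") for i in range(0, 16, 4)]
        self.state.append(0x9E3779B9)  # arbitrary non-zero constant

    def _update(self):
        # Placeholder shift-register step: shift the word array and feed back an
        # arithmetic combination of two taps.
        fb = (self.state[0] + ((self.state[3] >> 7) | (self.state[3] << 25))) & MASK32
        self.state = self.state[1:] + [fb]

    def next_word(self) -> int:
        self._update()
        # Placeholder reduction: simple logical mixing instead of an S-Box.
        return (self.state[0] ^ (self.state[2] & self.state[4])) & MASK32

cipher = ToyWordStreamCipher(b"0123456789abcdef")
keystream = [cipher.next_word() for _ in range(4)]
print([hex(w) for w in keystream])
```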

Trends in the Use of Artificial Intelligence in Medical Image Analysis (의료영상 분석에서 인공지능 이용 동향)

  • Lee, Gil-Jae; Lee, Tae-Soo
    • Journal of the Korean Society of Radiology, v.16 no.4, pp.453-462, 2022
  • In this paper, the artificial intelligence (AI) technology used in the field of medical image analysis was examined through a literature review. Literature searches were conducted on PubMed, ResearchGate, Google, and Cochrane Review using keywords. The searches returned 114 abstracts, of which 98 were reviewed after 16 duplicates were excluded. In the reviewed literature, AI is applied to classification, localization, disease detection, disease segmentation, and the degree of fit of registered images. In machine learning (ML), the practice of extracting features in advance and feeding the extracted feature values into a neural network is disappearing; instead, neural networks appear to be shifting toward deep learning (DL) methods with multiple hidden layers. This is thought to be because feature extraction is handled within the DL process itself, enabled by increased computer memory, faster computation, and the construction of big data. To apply AI-based analysis of medical images to medical care, the role of physicians is important: they must be able to interpret and analyze the predictions of AI algorithms. Additional medical education and professional development is needed for practicing physicians to understand AI, and a revised curriculum for medical school students also appears necessary.

The Method for Real-time Complex Event Detection of Unstructured Big data (비정형 빅데이터의 실시간 복합 이벤트 탐지를 위한 기법)

  • Lee, Jun Heui; Baek, Sung Ha; Lee, Soon Jo; Bae, Hae Young
    • Spatial Information Research, v.20 no.5, pp.99-109, 2012
  • Recently, with the growth of social media and the spread of smartphones, the amount of data has increased considerably through heavy use of SNS (Social Network Services). The concept of big data has emerged accordingly, and many researchers are seeking ways to make the best use of it. To maximize the creative value of the big data held by many companies, it must be combined with existing data, but the physical and logical storage structures of these data sources differ so much that a system is needed to integrate and manage them. MapReduce was developed to process big data and has the advantage of fast processing through distribution, but it is difficult to build and store such a system for every keyword, and because data must be stored before they are searched, real-time processing is hard to achieve. Processing complex events over heterogeneous data without a suitable processing structure also adds cost. To solve this problem, an existing Complex Event Processing (CEP) system can be used: a CEP system takes data from different sources and combines them, which makes complex event processing possible and is especially useful for real-time processing of stream data. Nevertheless, unstructured data based on text from SNS and Internet articles is managed as plain text, so strings must be compared every time a query is processed, which results in poor performance. We therefore enable a complex event processing system to manage unstructured data and process queries quickly, and we extend the complex-data function to give strings a logical schema. This is done by converting string keywords into integer values through filtering with a keyword set. In addition, by using the CEP system to process stream data in memory in real time, we reduce the time spent reading data from disk during query processing.
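
A minimal sketch of the core idea described in this abstract: converting string keywords to integer codes through a keyword-set filter so that event queries compare integers instead of strings. The keyword set, event texts, and function names are hypothetical, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation): map string keywords from
# unstructured text to integer codes with a keyword-set filter, so downstream
# complex-event queries compare integers rather than strings.

KEYWORD_CODES = {"earthquake": 1, "flood": 2, "outage": 3}  # hypothetical keyword set

def encode_event(text: str) -> list[int]:
    """Filter a raw text event against the keyword set and return integer codes."""
    tokens = text.lower().split()
    return [KEYWORD_CODES[t] for t in tokens if t in KEYWORD_CODES]

def matches_query(event_codes: list[int], query_codes: set[int]) -> bool:
    """Integer-only comparison in place of repeated string matching."""
    return bool(query_codes.intersection(event_codes))

stream = ["Power outage reported downtown", "Minor flood near the river"]
query = {KEYWORD_CODES["flood"], KEYWORD_CODES["earthquake"]}
for raw in stream:
    print(raw, "->", matches_query(encode_event(raw), query))
```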

Design of Serendipity Service Based on Near Field Communication Technology (NFC 기반 세렌디피티 시스템 설계)

  • Lee, Kyoung-Jun; Hong, Sung-Woo
    • Journal of Intelligence and Information Systems, v.17 no.4, pp.293-304, 2011
  • The world of ubiquitous computing is one in which we are surrounded by an ever-richer set of networked devices and services, and the mobile phone has become one of its key issues. Mobile phones have permeated our daily lives thoroughly and are the fastest-adopted technology in human history. In Korea, the number of mobile phones registered with telecom companies exceeds the country's population, and last year many times more mobile phones were sold than personal computers. Advanced mobile phone technology is now a central concern across every field of technology. The combination of wireless communication technology (Wi-Fi) and the mobile phone (smartphone) has created a new world of ubiquitous computing in which people can access the network anywhere, at high speed, and easily. In such a world, people cannot expect specific applications to be available for every conceivable combination of information they might wish to accomplish. They want the information they need in an easy and fast way, compared with the world before, where a desktop, a cable connection, limited applications, and limited speed constrained what they could achieve. Instead, many of their interactions will be through highly generic tools that allow end-user discovery, configuration, interconnection, and control of the devices around them. Serendipity is an application of this architecture that helps people obtain the information they are concerned with. The word 'serendipity', introduced to scientific fields in the eighteenth century, means making new discoveries by accident and sagacity. Combined with ubiquitous computing and the smartphone, it can change the way information is obtained. Serendipity may enable professional practitioners to function more effectively in the unpredictable, dynamic environment that shapes the reality of information seeking. This paper designs a serendipity service based on NFC (Near Field Communication) technology. When users of NFC smartphones obtain information and services by touching NFC tags, the serendipity service becomes a core service that gives them unexpected but valuable findings. The paper proposes the architecture, scenario, and interface of the serendipity service using tag touch data, serendipity cases, a serendipity rule base, and a user profile.
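
The closing sentence describes a service built from tag touch data, a serendipity rule base, and a user profile. The toy sketch below shows one way such pieces might fit together; the rule format, profile fields, and suggestions are invented for illustration and are not the paper's actual architecture.

```python
# Illustrative sketch only: how a tag touch, a user profile, and a rule base
# might combine to surface an unexpected but relevant suggestion. The rules and
# profile fields are hypothetical, not the paper's design.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    interests: set = field(default_factory=set)

# Hypothetical serendipity rule base: (tag category, user interest) -> suggestion.
RULES = {
    ("museum_poster", "photography"): "Photo exhibition opening nearby this week",
    ("cafe_table", "jazz"): "Live jazz session here on Friday night",
}

def on_tag_touch(tag_category: str, profile: UserProfile) -> list[str]:
    """Return serendipitous suggestions triggered by an NFC tag touch."""
    return [
        suggestion
        for (category, interest), suggestion in RULES.items()
        if category == tag_category and interest in profile.interests
    ]

user = UserProfile(interests={"jazz", "hiking"})
print(on_tag_touch("cafe_table", user))
```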

Introducing Keyword Bibliographic Coupling Analysis (KBCA) for Identifying the Intellectual Structure (지적구조 규명을 위한 키워드서지결합분석 기법에 관한 연구)

  • Lee, Jae Yun; Chung, EunKyung
    • Journal of the Korean Society for Information Management, v.39 no.1, pp.309-330, 2022
  • Intellectual structure analysis, which quantitatively identifies the structure, characteristics, and sub-domains of a field, has increased rapidly in recent years. Analysis techniques traditionally used for this research include bibliographic coupling analysis, co-citation analysis, co-occurrence analysis, and author bibliographic coupling analysis. This study proposes a novel intellectual structure analysis method, Keyword Bibliographic Coupling Analysis (KBCA). KBCA is a variation of author bibliographic coupling analysis that targets keywords instead of authors: it takes the number of references shared by two keywords as the degree of coupling between them. To demonstrate the proposed technique, a set of 1,366 articles in the field of 'Open Data' was retrieved from the Web of Science, and 63 keywords appearing more than 7 times in this set were selected as core keywords of the open data field. The intellectual structure produced by KBCA with these 63 core keywords identified the main areas of open government and open science along with 10 sub-areas. By contrast, the intellectual structure network from co-occurrence word analysis was found to be less adequate in both its overall structure and its detailed domain structure. This result is attributed to KBCA measuring the relationships between keywords sufficiently through the degree of bibliographic coupling.
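
A minimal sketch of the coupling computation the abstract defines, under the assumption that a keyword's reference set is the union of references cited by the articles carrying that keyword; the articles and references below are toy data, not the Web of Science set.

```python
# Minimal sketch of keyword bibliographic coupling (KBCA) as described above:
# the coupling strength of two keywords is the number of references they share.

from itertools import combinations
from collections import defaultdict

articles = {  # article id -> (keywords, cited references); toy data
    "a1": ({"open data", "open government"}, {"r1", "r2", "r3"}),
    "a2": ({"open science", "open data"}, {"r2", "r4"}),
    "a3": ({"open government"}, {"r1", "r5"}),
}

# Collect the references associated with each keyword.
refs_by_keyword = defaultdict(set)
for keywords, refs in articles.values():
    for kw in keywords:
        refs_by_keyword[kw] |= refs

# Keyword bibliographic coupling strength = number of shared references.
coupling = {
    (k1, k2): len(refs_by_keyword[k1] & refs_by_keyword[k2])
    for k1, k2 in combinations(sorted(refs_by_keyword), 2)
}
for pair, strength in coupling.items():
    print(pair, strength)
```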

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok; Lee, Hyun Jun; Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • As information continues to proliferate, it is becoming increasingly important to select high-quality information that meets users' interests and needs. Rather than treating an information request as a simple string, efforts are being made to better reflect the user's intention in search results, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is generated constantly and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the information flow is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora for different fields with the same algorithm, and extracting good-quality triples is difficult. Second, producing human-labeled text data becomes harder as the scope of knowledge grows and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because the concept of knowledge is ambiguous. To overcome these limitations and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the proposed model to solve the problems of previous research and to enhance its effectiveness. The study is significant in three respects. First, it presents a practical and simple automatic knowledge extraction method that can be applied directly. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data at the sentence level without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. To confirm the usefulness of the proposed model, the empirical study uses experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the test set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized with one-hot encoding. A neural tensor network is then used to train as many score functions as there are stocks. When a new entity from the test set appears, its score can be computed with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the model, we check its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the test set. In the empirical study, the proposed model shows 69.3% hit accuracy on a test set of 2,526 reports, a meaningfully high ratio despite several research constraints. Looking at prediction performance by stock, only three stocks, LG Electronics, KiaMtr, and Mando, show performance far below the average, possibly due to interference from other similar items and the generation of new knowledge. This paper thus proposes a methodology for finding key entities, or combinations of entities, needed to search related information in line with the user's investment intention. Graph data are generated using only the named entity recognition tool and fed into the neural tensor network without a learning corpus or domain word vectors. The empirical test confirms the effectiveness of the proposed model as described above. However, some limitations remain; most notably, the markedly poor performance on only some stocks shows the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to semantically match new text information with related stocks.
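
A minimal sketch of a neural tensor network scoring function in the style commonly attributed to Socher et al. (2013), with one score function per stock and prediction by the highest-scoring function, as the abstract describes. The dimensions, random vectors, and untrained parameters are placeholders; the paper's actual pipeline (KKMA entity extraction, one-hot vectors, trained models) is not reproduced here.

```python
# Minimal sketch of an NTN-style score function; toy dimensions and random data,
# not the authors' trained per-stock models.

import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4  # entity vector dimension, number of tensor slices (assumed values)

class NTNScore:
    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(k, d, d))   # bilinear tensor slices
        self.V = rng.normal(scale=0.1, size=(k, 2 * d))  # standard layer weights
        self.b = np.zeros(k)
        self.u = rng.normal(scale=0.1, size=k)

    def score(self, e1: np.ndarray, e2: np.ndarray) -> float:
        bilinear = np.array([e1 @ self.W[i] @ e2 for i in range(k)])
        linear = self.V @ np.concatenate([e1, e2])
        return float(self.u @ np.tanh(bilinear + linear + self.b))

# One score function per stock; the stock whose function scores a new entity
# highest (paired here with a stock vector) is predicted as the related item.
stock_models = {"STOCK_A": NTNScore(), "STOCK_B": NTNScore()}
stock_vecs = {name: rng.normal(size=d) for name in stock_models}
new_entity = rng.normal(size=d)  # stand-in for a one-hot/embedded entity vector
best = max(stock_models, key=lambda s: stock_models[s].score(new_entity, stock_vecs[s]))
print("predicted stock:", best)
```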

Personalized Recommendation System for IPTV using Ontology and K-medoids (IPTV환경에서 온톨로지와 k-medoids기법을 이용한 개인화 시스템)

  • Yun, Byeong-Dae; Kim, Jong-Woo; Cho, Yong-Seok; Kang, Sang-Gil
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.147-161, 2010
  • As broadcasting and communications converge, communication functions are being joined to TV, and TV viewing has changed in many ways. IPTV (Internet Protocol Television) provides information services, movie content, and broadcasts over the Internet, combining live programs with VOD (video on demand). Delivered over communication networks, it has become a new business issue, and new technical issues have arisen around imaging technology for the service, networking technology that avoids video cuts, security technologies to protect copyright, and so on. Over the IPTV network, users can watch the programs they want whenever they want. However, IPTV makes programs hard to find, whether by search or by menu navigation: menu navigation takes a long time to reach the desired program, search fails when the title, genre, or actors' names are not known, and entering characters with a remote control is cumbersome. A bigger problem is that users are often unaware of the services available to them. Thus, to ease the selection of VOD services in IPTV, a personalized service is proposed that enhances user satisfaction and uses time efficiently. This paper addresses IPTV's shortcomings with a filtering and recommendation system that provides programs suited to each individual and saves time. The proposed recommendation system collects TV program information and, from individual IPTV viewing records, the user's preferred genres and sub-genres, channels, watched programs, and viewing times. Similarities are compared using a TV program ontology, because the distance between programs can be measured through similarity comparison. The TV program ontology we use is extracted from TV-Anytime metadata, which represents semantics, and the ontology expresses contents and features numerically. Vocabulary similarity is determined through WordNet: all words describing a program are expanded to their upper and lower classes for word-similarity decisions, and the average over the program's keywords is taken. The calculated distances are used to group similar programs with the K-medoids partitioning method, which divides objects into groups with similar characteristics. K-medoids selects K representative objects (medoids), assigns each object to its nearest representative, and, after a representative is chosen temporarily, finds the optimal representative through repeated trials while partitioning the initial n objects into K groups; through this process, similar programs are clustered together. When programs are selected through cluster analysis, weights are applied to the recommendation as follows. Within each cluster, programs near the representative object are recommended to the user; the distance formula is the same as the similarity distance and serves as the base figure that ranks the recommended programs. A weight is also computed from the number of watched programs: the more programs a cluster contains, the higher its weight, which is defined as the cluster weight. Through this, programs representative of each group are selected and the final program ranking is determined. However, the group-representative programs contain errors, so weights based on the user's viewing preference are added to determine the final ranking, and the contents the user prefers are recommended accordingly. An experiment based on the proposed method was carried out in a controlled environment, and the results show its superiority over existing approaches.
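
A minimal sketch of K-medoids partitioning over a precomputed distance matrix, the clustering step this abstract describes; the toy Euclidean distances below stand in for the paper's ontology/WordNet-based program similarities.

```python
# Minimal K-medoids sketch over a precomputed program-distance matrix (toy data,
# not the ontology-derived distances used in the paper).

import numpy as np

def k_medoids(dist: np.ndarray, k: int, iters: int = 100, seed: int = 0):
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        # Assign each item to its nearest medoid.
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size == 0:
                continue
            # The new medoid minimizes total distance within its cluster.
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(dist[:, medoids], axis=1)

# Toy symmetric distance matrix for 6 TV programs.
rng = np.random.default_rng(1)
pts = rng.random((6, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
medoids, labels = k_medoids(dist, k=2)
print("medoids:", medoids, "labels:", labels)
```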

Analysis of media trends related to spent nuclear fuel treatment technology using text mining techniques (텍스트마이닝 기법을 활용한 사용후핵연료 건식처리기술 관련 언론 동향 분석)

  • Jeong, Ji-Song; Kim, Ho-Dong
    • Journal of Intelligence and Information Systems, v.27 no.2, pp.33-54, 2021
  • With the fourth industrial revolution and the arrival of the New Normal era due to COVID-19, the importance of non-contact technologies such as artificial intelligence and big data research has been increasing. Convergence research is being conducted in earnest to keep up with these trends, but few studies in the nuclear field have used artificial intelligence and big data technologies such as natural language processing and text mining. This study was conducted to confirm the applicability of data science analysis techniques to nuclear research. Furthermore, identifying trends in the perception of spent nuclear fuel is important because it can inform directions for nuclear industry policy and allow early responses to policy changes. For these reasons, this study analyzes media trends regarding pyroprocessing, a spent nuclear fuel dry treatment technology, and objectively examines changes in media perception of the technology by applying text mining techniques. Text data were collected with Python code from Naver web news articles containing the keywords "pyroprocessing" and "sodium-cooled reactor" to identify changes in perception over time. The analysis period was set from 2007, when the first article was published, to 2020, and the text data were analyzed in detail and in multiple layers using word clouds based on frequency analysis, TF-IDF, and degree centrality calculations. Keyword frequency analysis showed a change in media perception of the technology in the mid-2010s, influenced by the Gyeongju earthquake in 2016 and the new government's energy transition policy in 2017. Trend analysis was therefore conducted on the corresponding periods, and word frequencies, TF-IDF scores, degree centrality values, and semantic network graphs were derived. The results show that before the mid-2010s, media perception of spent nuclear fuel dry treatment technology was diplomatic and positive. Over time, however, the frequency of keywords such as "safety", "reexamination", "disposal", and "disassembly" increased, indicating that the sustainability of the technology is being seriously questioned. Social awareness also changed as the technology, once recognized as political and diplomatic, became ambiguous due to changes in domestic policy. This means that domestic policy changes, such as nuclear power policy, have a greater impact on media perception than the issue of spent nuclear fuel processing technology itself, presumably because nuclear policy is a more widely discussed and public-friendly topic than spent nuclear fuel. To improve social awareness of spent nuclear fuel processing technology, sufficient information about it should be provided, and linking it to nuclear policy issues could also help. The study also highlights the importance of social science research on nuclear power: the social sciences should be applied widely to nuclear engineering, and when national policy changes are taken into account, the nuclear industry can remain sustainable. However, this study is limited in that it applied big data analysis methods only to a narrow research area, the dry treatment technology "pyroprocessing"; there was no clear basis for the causes of the change in social perception; and only news articles were analyzed to determine social perception. If reader comments are also considered in the future and media trend analysis is extended to nuclear power more broadly, more reliable results are expected that can be used efficiently in nuclear policy research. The academic significance of this study is that it confirms the applicability of data science analysis techniques to nuclear research. Furthermore, given the impact of current government energy policies such as nuclear power plant reductions, research on spent fuel treatment technology is being re-evaluated, and key keyword analysis in this field can help orient future research. It is important to consider outside views, not just the safety technology and engineering integrity of nuclear power, and to reconsider whether it is appropriate to discuss nuclear engineering technology only internally. If multidisciplinary research on nuclear power is carried out, reasonable alternatives can be prepared to sustain the nuclear industry.
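
A minimal sketch of the two analysis steps named above, TF-IDF and degree centrality on a keyword co-occurrence network, using a toy corpus in place of the Naver news data used in the study.

```python
# Minimal sketch: TF-IDF over a small toy corpus and degree centrality on a
# keyword co-occurrence network. The documents are placeholders, not the
# collected news articles.

from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
import networkx as nx

docs = [
    "pyroprocessing spent fuel safety review",
    "sodium cooled reactor pyroprocessing research budget",
    "spent fuel disposal safety policy change",
]

# TF-IDF: which terms characterize each document.
vec = TfidfVectorizer()
tfidf = vec.fit_transform(docs)
print(dict(zip(vec.get_feature_names_out(), tfidf.toarray()[0].round(2))))

# Co-occurrence network: link terms appearing in the same document,
# then rank keywords by degree centrality.
g = nx.Graph()
for doc in docs:
    for a, b in combinations(sorted(set(doc.split())), 2):
        g.add_edge(a, b)
print(sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1])[:5])
```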

Various Possibilities of Dispositif Film (디스포지티프 영화의 다양한 가능성)

  • KIM, Chaehee
    • Trans-, v.3, pp.55-86, 2017
  • This study begins from the need to reconceive the film medium and to account for specific tendencies of contemporary films as the post-cinema era arrives. Various movements surround recent films: challenging and experimental works show aesthetics that are difficult to approach through classical analyses of mise-en-scène and montage. I therefore review the dispositif proposed by Martin for films that are puzzling to criticize within the classical conceptual framework, because the concept of the dispositif extends further than mise-en-scène and montage. Dispositif films tend to be non-representational and non-narrative, but not every non-narrative tendency makes a dispositif film; only dispositif films belong to this flow. The dispositif movement has increased dramatically in the contemporary environment grounded in digital technology, but it is not a tendency confined to any particular age: it has been detected in classical films, and the dispositif tendency persisted in the avant-garde films of the 1920s and in some modernist films. For a clear conceptualization of the cinematic dispositif, this study first examines the sources of the dispositif debates now being introduced into film theory, drawing on the theories of Jean-Louis Baudry, Michel Foucault, Agamben, Flusser, and Deleuze. The concept of the dispositif was discussed by several scholars, including Baudry and Foucault, and today it is defined across all of these definitions. However, these various discussions are clearly different from the cinematic dispositif, or dispositif films, that Martin advocates. Martin's concept reminds us of the fundamentals of cinematic aesthetics that have distinguished mise-en-scène from montage, makes it possible to reconsider those concepts, and allows us to see things in a new light or to create new films. The basic implications of the dispositif are the apparatus as device, disposition and arrangement, and the combination of heterogeneous elements. Thus, to define a dispositif film in a word, it is a new 'constraint' consisting of the redisposition and rearrangement of the heterogeneous elements that make up the conditions of classical film. For something to become a new design, changes must occur in the disposition and arrangement of the elements and forces that compose it, and these elements naturally include both internal and external factors. Dispositif films open up a variety of possibilities, such as reflection on archival possibilities and on the role of the director, the re-establishment of an active and creative audience, the rationale of the film medium, and ideological reflection. Films can also 'network' with other media more quickly and easily than any other medium and create a new 'devised' aesthetic style, and the dispositif film that exploits this will be a key keyword for reading films that present the new trends of contemporary cinema. Because the dispositif is so comprehensive and has such broad implications, there are certainly areas that are difficult to render precise; however, this will in turn have a positive effect on the future activation of dispositif studies. The dispositif is difficult to conceptualize sharply, so it can be approached from a wide range of dimensions and has theoretically infinite extensibility. At the beginning and end of twenty-first-century film, the concept of the cinematic dispositif will become a decisive factor in dismantling old film aesthetics.


Consumer's Negative Brand Rumor Acceptance and Rumor Diffusion (소비자의 부정적 브랜드 루머의 수용과 확산)

  • Lee, Won-jun; Lee, Han-Suk
    • Asia Marketing Journal, v.14 no.2, pp.65-96, 2012
  • Brands have received much attention in marketing research. When consumers consume products or services, they are exposed to many brand-related stimuli, including brand personality, brand experience, brand identity, and brand communications. A special kind of new crisis that occasionally confronts companies' brand management today is the brand-related rumor. An important influence on consumers' purchase decisions is the word of mouth spread by other consumers, and most decisions are influenced by others' recommendations; firms therefore have good reason to study and understand consumer-to-consumer communication such as brand rumors. The importance of brand rumors to marketers is increasing as the number of Internet users and SNS (social network service) sites grows, because Internet technology lets people spread rumors without limitations of time, space, or place. Relatively few studies, however, have been published in marketing journals, and little is known about brand rumors in the marketplace. The study of rumor has a long history in all the major social sciences, but very few studies have dealt with the antecedents and consequences of any kind of brand rumor. A rumor is generally described as a story or statement in general circulation without proper confirmation or certainty as to fact; it can also be defined as an unconfirmed proposition passed along from person to person. Rosnow (1991) claimed that rumors are transmitted because people need to explain ambiguous and uncertain events, and talking about them reduces the associated anxiety. Negative rumors in particular are believed to have the potential to devastate a company's reputation and its relations with customers. From the marketer's perspective, negative rumors are considered harmful and extremely difficult to control; they threaten a company's sustainability and sometimes lead to a negative brand image and loss of customers, so there is growing concern that such rumors can damage brands' reputations and lead to financial disaster. In this study we aim to distinguish the antecedents of brand rumor transmission, investigate the effects of brand rumor characteristics on the intention to spread the rumor, and identify the key components of personal acceptance of a brand rumor. Taking a contextualist perspective, we try to unify the traditional psychological and sociological views. In this unified approach we define the characteristics of a brand rumor through five major variables found to influence the intention to spread a rumor: usefulness, source credibility, message credibility, worry, and vividness, which together encompass the multi-level elements of a brand rumor. Product involvement is included as a control variable. For the empirical research, an imaginary Korean 'Kimchi' brand and a related contamination rumor were created, and questionnaires were collected from 178 Korean respondents. Data were collected from college students who had experience with the focal product; college students were regarded as good subjects because they tend to express their opinions in detail. The PLS (partial least squares) method was adopted to analyze the relations between the variables in the model. The most widely adopted causal modeling method is LISREL, but it is poorly suited to relatively small samples and can yield improper solutions in some cases; PLS has been developed to avoid some of these limitations and provide more reliable results. Reliability was tested with SPSS 16: Cronbach's alpha was examined and all values were appropriate, ranging from .802 to .953. Confirmatory factor analysis was then conducted successfully, and structural equation modeling was carried out with smartPLS (ver. 2.0). Overall, the R2 for rumor acceptance is .476 and the R2 for rumor transmission intention is .218, and the overall model shows a satisfactory fit. The empirical results can be summarized as follows. The brand rumor characteristics of source credibility, message credibility, worry, and vividness affect the argument strength of the rumor, and argument strength in turn affects the intention to transmit the rumor. The relationship between perceived usefulness and argument strength, however, is not significant, and the moderating effect of product involvement on the relation between argument strength and word-of-mouth intention is not supported either. The study therefore suggests several managerial and academic implications for corporate crisis management planning, PR, and brand management. The results show marketers that rumor is a critical factor in managing strong brand assets, and for researchers, brand rumor should become an important topic for understanding the relationship between consumer and brand. Recently many brand managers and marketers have taken a short-term view, focusing only on strengthening the positive brand image; this study suggests that effective brand management requires managing negative brand rumors with a long-term view of marketing decisions.
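
A minimal sketch of the reliability check mentioned above: Cronbach's alpha computed from its standard formula over a toy response matrix, not the study's 178-respondent data.

```python
# Minimal sketch of Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / total variance).
# Toy generated responses, not the study's survey data.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(50, 1))                          # shared factor
responses = latent + rng.normal(scale=0.5, size=(50, 4))   # 4 correlated items
print(round(cronbach_alpha(responses), 3))
```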
