• Title/Summary/Keyword: Users Knowledge

Search Results: 1,341

The Effect of Consumers' Value Motives on the Perception of Blog Reviews Credibility: the Moderation Effect of Tie Strength (소비자의 가치 추구 동인이 블로그 리뷰의 신뢰성 지각에 미치는 영향: 유대강도에 따른 조절효과를 중심으로)

  • Chu, Wujin;Roh, Min Jung
    • Asia Marketing Journal / v.13 no.4 / pp.159-189 / 2012
  • What attracts consumers to bloggers' reviews? Consumers are attracted both by a blogger's expertise (i.e., knowledge and experience) and by his or her unbiased manner of delivering information. Expertise and trustworthiness are both virtues of information sources, particularly when there is uncertainty in decision-making. Noting this point, we postulate that consumers' motives determine the relative weights they place on expertise and trustworthiness. In addition, our hypotheses assume that tie strength moderates consumers' expectations of bloggers' expertise and trustworthiness: the expectation of expertise is enhanced for the power-blog user group (weak ties), while the expectation of trustworthiness is elevated for the personal-blog user group (strong ties). Finally, we theorize that the effect of credibility on willingness to accept a review is moderated by tie strength; the predictive power of credibility is more prominent for the personal-blog user group than for the power-blog user group. To test these hypotheses, we conducted a field survey of blog users, collecting retrospective self-report data. The "gourmet shop" was chosen as the target product category, and the data obtained were analyzed by structural equation modeling. The findings provide empirical support for our theoretical predictions. First, we found that the purposive motive, aimed at satisfying instrumental information needs, increases reliance on bloggers' expertise, whereas the interpersonal-connectivity value, aimed at alleviating loneliness, elevates reliance on bloggers' trustworthiness. Second, expertise-based credibility is more prominent for the power-blog user group than for the personal-blog user group. While strong ties attract consumers with trustworthiness based on close emotional bonds, weak ties gain consumers' attention with new, non-redundant information (Levin & Cross, 2004). 
Thus, when the existing knowledge system used in strong ties does not work smoothly for addressing an impending problem, a weak-tie source can be utilized as a handy reference. We can therefore anticipate that power bloggers secure credibility by virtue of their expertise, while personal bloggers trade on their trustworthiness. Our analysis demonstrates that power bloggers appeal more strongly to consumers than do personal bloggers in the area of expertise-based credibility. Finally, the effect of review credibility on willingness to accept a review is higher for the personal-blog user group than for the power-blog user group. The inference that review credibility is a potent predictor of willingness to accept a review is grounded in the analogy that attitude is an effective indicator of purchase intention. However, if memory of an established attitude is blocked, the predictive power of attitude on purchase intention is considerably diminished. Likewise, the effect of credibility on willingness to accept a review can be affected by certain moderators. Inspired by this analogy, we introduced tie strength as a possible moderator and demonstrated that it moderates the effect of credibility on willingness to accept a review. Previously, Levin and Cross (2004) showed that for strong ties the effect on receipt of knowledge is mediated by credibility, whereas for weak ties this mediation is not observed and a direct path is activated. Thus, the predictive power of credibility on behavioral intention, that is, willingness to accept a review, is expected to be higher for strong ties.


Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.93-111 / 2013
  • As Internet and information technology (IT) continue to develop and evolve, the issue of big data has come to the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed and analyzed by conventional information systems; the term also refers to the new technologies designed to effectively extract value from such data. With the widespread dissemination of IT systems, continual efforts have been made in various fields of industry, such as R&D, manufacturing, and finance, to collect and analyze immense quantities of data in order to extract meaningful information and use it to solve various problems. Since IT has converged with various industries in many aspects, digital data are now being generated at a remarkably accelerating rate, while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data currently receiving the most attention include information available within companies, such as information on consumer characteristics, purchase records, logistics information and log information indicating the usage of products and services by consumers, as well as information accumulated outside companies, such as the web search traffic of online users, social network information, and patent information. Among these various types of big data, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes, because consumers search for information on the internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. 
Research that uses this web search traffic information to analyze the information search behavior of online users is now receiving much attention in academia and industry. Studies using web search traffic information can be broadly classified into two fields. The first consists of empirical demonstrations that web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, and so on. The other field focuses on using web search traffic information to observe consumer behavior, for example by identifying the attributes of a product that consumers regard as important or by tracking changes in consumers' expectations; relatively little research has been completed in this field. In particular, to the best of our knowledge, hardly any brand-related studies have attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers input their search words on the web, they may use a single keyword, but they also often input multiple keywords to seek related information (referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand. Web search traffic information shows that the quantity of simultaneous searches using certain keywords increases when the relation between them is closer in the consumer's mind, so the relations among keywords can be derived by collecting this relational data and subjecting it to network analysis. 
Accordingly, this study proposes a method of analyzing how brands are positioned by consumers and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, with a focus on tablet PCs, which belong to an innovative product group.
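The co-occurrence counting behind the simultaneous-search network analysis can be sketched in a few lines. The queries, brand names, and counts below are hypothetical illustrations, not data from the study:

```python
from itertools import combinations
from collections import Counter

# Hypothetical simultaneous-search logs: each entry is the set of
# keywords one user typed in a single query.
search_logs = [
    {"iPad", "Galaxy Tab"},
    {"iPad", "Galaxy Tab"},
    {"iPad", "display"},
    {"Galaxy Tab", "battery"},
    {"iPad", "Galaxy Tab", "price"},
]

# Count how often each pair of keywords is searched together; a higher
# count is read as a closer relation in the consumer's mind.
edge_weights = Counter()
for query in search_logs:
    for a, b in combinations(sorted(query), 2):
        edge_weights[(a, b)] += 1

# The weighted edge list is the input to the network analysis from
# which brand positions and brand-attribute links are derived.
for (a, b), w in edge_weights.most_common():
    print(f"{a} -- {b}: {w}")
```

In a real application the edge list would be built from search traffic statistics (e.g., Google Trends-style data) rather than raw logs, and then visualized as a positioning map.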

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communications technology companies have released their internally developed AI technologies to the public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been conducted, there is a lack of studies that help industry develop or use deep learning open source software. This study thus attempts to derive an adoption strategy through case studies of a deep learning open source framework. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and revealed that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework. 
By organizing the case study analysis results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework work-tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures at the usage stage, companies will increase the number of deep learning research developers, their ability to use the framework, and the supply of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of developers, for example by optimizing the hardware (GPU) environment automatically. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars. 
To implement the identified five success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem; confirming whether the deep learning methodology is the right method; confirming whether the deep learning framework is the right tool; using the deep learning framework in the enterprise; and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. Once these three pre-consideration steps are cleared, the next two steps (using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents do not yet benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. 
Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords with respect to their relevance in the text, without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted with supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, and as a result keyword extraction is limited to terms that appear in the document. Therefore, keyword extraction cannot generate implicit keywords that are not included in a document. According to the experimental results of Turney, about 64% to 90% of author-assigned keywords can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we have adopted the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. 
The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that has no keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first was implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between IVSM-generated keywords and author-assigned keywords. According to our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
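The five-step matching process can be illustrated with a minimal sketch. The keyword sets, term weights, and sample document below are hypothetical; in the real system the weights would come from the training documents:

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Step 1: keyword sets with term weights (hypothetical values).
keyword_sets = {
    "logistics": {"port": 0.9, "shipping": 0.8, "distribution": 0.5},
    "retail":    {"distribution": 0.9, "store": 0.7, "consumer": 0.6},
}

# Steps 2-3: a parsed target document represented by term frequencies.
document = Counter("port shipping port distribution cargo".split())

# Steps 4-5: rank keyword sets by cosine similarity to the document;
# the top-ranked sets supply the assigned keywords.
ranked = sorted(keyword_sets,
                key=lambda k: cosine(keyword_sets[k], document),
                reverse=True)
print(ranked)
```

The "inverse" aspect of IVSM is that keyword sets, rather than documents, form the reference vectors against which a new document is scored.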

A Study on the Website Evaluation and Improvement of Korean Cyber University Websites (국내 사이버대학교 웹사이트 평가 및 개선방안 연구)

  • Moon, Tae-Eun;Moon, Hyung-Nam
    • Journal of Intelligence and Information Systems / v.14 no.2 / pp.137-156 / 2008
  • The purpose of this study is to evaluate domestic cyber university websites, analyzing web usability and web accessibility, in order to show how well cyber universities provide various personalized services and qualitative content to users on their websites. With these results, we hope to contribute to the qualitative development of the websites and to the reliability of distance education. For this purpose, we developed a checklist suitable for distance universities by applying the SM-ABCDE assessment method of Professor Moon Hyung-nam of Sookmyung Women's University, and evaluated the websites of 17 cyber universities across the five aspects of attraction, business, contents, design, and engineering. The study found that Busan Digital University was best in the aspect of attraction, both Kyunghee Cyber University and the Cyber University of Foreign Studies were best in business, Kyunghee University was best in contents, and Hanyang Cyber University was best in design. In the comprehensive evaluation, the order was Kyunghee Cyber University, Busan Cyber University, and Hanyang Cyber University. In the engineering aspect, however, none of the sites observed the regulation on web accessibility. If domestic cyber universities observe the regulation on web accessibility with a long-term view, on the basis of this study, web usability would increase. They could then secure various classes of customers as well as ordinary people, taking their place as educational institutions oriented toward the most advanced educational methods in the knowledge and information society.


Factors Influencing Frequency of Abnormal Peak in the Measurement of HbA1c by HPLC (HPLC법을 이용한 HbA1c 측정시 Abnormal Peak의 빈도와 원인)

  • Kim, Sun-Kyung;Bae, Ae-Young;Choi, Dae-Yong;Kim, Myung-Soo;Yoo, Kwang-Hyun;Ki, Chang-Seok
    • Korean Journal of Clinical Laboratory Science / v.37 no.2 / pp.71-77 / 2005
  • In October 2003 we encountered a specimen containing a hemoglobin variant known to cause interference, HbAS. It was the first case of an Hb variant since Samsung Medical Center began participating in the College of American Pathologists glycohemoglobin surveys in 1997. The purpose of this study is to share our experience with the specimen and to promote understanding of Hb variants and derivatives. We performed cross-checks of HbA1c using two instruments, the TOSOH G7 and the BIO-RAD VARIANT-T (turbo), with automated high-performance liquid chromatography (HPLC) as the analytic method. HPLC provides fractional information on hemoglobin as a two-dimensional graph as well as numeric results. We have been performing a "systematic checking process", through which three specimens suspected of Hb variants and derivatives were found. The College of American Pathologists has noted that it is important for users to be aware of the limitations of their glycohemoglobin method in order to avoid reporting incorrect results due to interference from hemoglobin variants or hemoglobin adducts. Laboratory findings of Hb variants and derivatives are therefore very important, and the experience of qualified technologists with professional knowledge of Hb variants is the most important factor in finding them. Korea is racially homogeneous and is not in an area with a high rate of Hb variants. While 1,024 cases of Hb variants have been found in Japan, there are no specific data on how many cases have been found in Korea. Considering the Hb variant cases in Japan, which is geographically close, it is presumed that there must be various Hb variant cases in Korea as well. If domestic laboratories set up a systematic protocol and build a network to share experience with Hb variants, Korean Hb variants could also be listed on the world's Hb variant list.


Distributed Hashing-based Fast Discovery Scheme for a Publish/Subscribe System with Densely Distributed Participants (참가자가 밀집된 환경에서의 게재/구독을 위한 분산 해쉬 기반의 고속 서비스 탐색 기법)

  • Ahn, Si-Nae;Kang, Kyungran;Cho, Young-Jong;Kim, Nowon
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.12 / pp.1134-1149 / 2013
  • A pub/sub system enables data users to access any necessary data without knowledge of the data producer and without synchronization with the data producer. It is widely used as middleware technology for data-centric services. DDS (Data Distribution Service) is a standard middleware supported by the OMG (Object Management Group), one of the global standardization organizations, and is considered quite useful as standard middleware for US military services. However, it is well known that searching for the Participants and Endpoints in the system takes a considerably long time, especially when the system is booting up. In this paper, we propose a discovery scheme that reduces this latency when the Participants and Endpoints are densely distributed in a small area. We modify the standard DDS discovery process in three ways. First, we integrate the Endpoint discovery process with the Participant discovery process. Second, we reduce the number of connections per Participant during the discovery process by adopting the concept of successors from distributed hashing. Third, instead of UDP, the Participants are connected through TCP to exploit its reliable delivery. We evaluated the performance of our scheme by comparing it with the standard DDS discovery process. The evaluation results show that our scheme achieves considerably lower discovery latency when the Participants and Endpoints are densely distributed in a local network.
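The successor idea borrowed from distributed hashing can be sketched as follows. This is an illustrative sketch, not the authors' DDS implementation; the hash width and participant names are assumptions:

```python
import hashlib

def ring_id(name, bits=16):
    """Map a participant name onto a 2**bits identifier ring (sketch)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << bits)

def successor_links(participants, bits=16):
    """Each participant connects only to its successor on the ring,
    instead of contacting every other peer as in full-mesh discovery."""
    ordered = sorted(participants, key=lambda p: ring_id(p, bits))
    return [(ordered[i], ordered[(i + 1) % len(ordered)])
            for i in range(len(ordered))]

peers = [f"participant-{i}" for i in range(6)]
links = successor_links(peers)
# n successor links in a ring, versus n*(n-1)/2 links in a full mesh.
print(links)
```

The point of the arrangement is that discovery traffic per participant stays constant as the number of densely packed participants grows, which is where the latency reduction comes from.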

A Study on the Role Performance of Collective intelligence as Scaffold in Web-based PBL (웹을 활용한 PBL에서 집단지성의 스캐폴더 역할 연구)

  • Suh, Soon-Shik;Heo, Dong-Hyeon
    • Journal of The Korean Association of Information Education / v.12 no.3 / pp.355-363 / 2008
  • To enhance the effect of problem-based learning, scaffolding is necessary as a learning support strategy. Collective intelligence provides scaffolding in the sense that it integrates users' knowledge, information, experiences, and values; on this basis it determines the direction of behavior, revises that direction continuously, and provides problem-solving methods. Teaching and learning situations emphasize learners' initiative and voluntary, active participation. This study was therefore conducted to find out whether collective intelligence can be an effective and attractive alternative learning strategy. Specifically, it examined how collective intelligence performs the role of scaffold on the Web and what types of scaffolding are provided to learners. According to the results, collective intelligence had a positive effect on learners' learning attitude, confidence, and interest in the affective aspect, but its effect on the cognitive aspect differed according to learners' school year and learning level. Because collective intelligence had a positive effect on learners, we identified the scaffolding types of explanation, suggestion of direction, illustration, and feedback in the cognitive aspect, and positive response and encouragement in the affective aspect.


A Study on the Improvement of In-Home Care Service Quality through Evaluation of Services and Agency by Long-term Care Workers (요양보호사의 기관 및 서비스 평가를 통한 재가노인서비스 품질 향상 방안)

  • Bae, Hwa-Sook;Han, Jeong-Won
    • Journal of Digital Convergence / v.14 no.10 / pp.71-81 / 2016
  • The objective of this study is to suggest methods to improve in-home service quality through service evaluation by long-term care workers. To this end, the general characteristics of 223 long-term care workers, their evaluations of services and agencies, and their retraining needs were surveyed. The survey results lead to the following conclusions. Though long-term care workers are not uneducated, the majority face unstable employment, and the supervision they hope for in producing improved long-term care services was found to center on relationships with service users. Moreover, among the topics needing to be addressed in retraining, much attention was shown to understanding the elderly and their families, health care knowledge about geriatric diseases, and counseling techniques directed toward the affected person and their family. The findings are as follows: enhancing the quality of long-term care requires a structural reassessment, and upgrading the quality of care agencies requires improved methods for raising the awareness of users and their guardians, together with expanded opportunities for education programs on professionalism.

The Atmospheric Factors Affecting User's Satisfaction in Natural Parks (자연공원의 분위기가 이용자의 만족도에 미치는 영향 - 국립공원과 도립공원을 대상으로 -)

  • 장병문;배민기
    • Journal of the Korean Institute of Landscape Architecture / v.30 no.1 / pp.29-43 / 2002
  • The purpose of this paper is to examine the atmospheric factors affecting user satisfaction in natural parks, answering the research question: what are the effects of atmosphere on user satisfaction in natural parks (NPs)? After reviewing the literature on the mechanism of NPs and the use elements in NPs, we constructed a conceptual framework and formulated the hypotheses of this research. We obtained data through a questionnaire that surveyed 508 visitors at 6 of the 73 NPs in Korea in 2001, based on a stratified sampling method. We analyzed the data using descriptive statistics, mean difference tests, Pearson's correlation analysis, and multiple linear regression. We found that 1) the five atmospheric variables affecting user satisfaction, i.e., number of users (NOU), crowding, damage to park resources (DPR), maintenance of park resources and facilities (MPRF), and encounter level (EL), turned out to be statistically significant at the five percent level; the relationships of MPRF, NOU, and EL with user satisfaction are positive, while those of crowding and DPR are negative; 2) in bivariate analysis, the positive relationships between user satisfaction and park resources and MPRF are fairly high and statistically significant, and the higher the values of DPR and crowding, the lower the degree of user satisfaction; 3) in multivariate analysis, the variables NOU, crowding, DPR, EL, and MPRF affecting user satisfaction are statistically significant at the five percent level; and 4) the relative contributions of MPRF, park resources, park facilities, NOU, crowding, DPR, and size of activity space to user satisfaction are respectively 6.00, 4.78, 2.53, 1.83, 1.64, 1.59 and 2.03 times more important than that of EL. Among the atmospheric variables, MPRF is the most important, at 1.26 times that of park resources. 
The research results suggest that devices to increase user satisfaction and user management programs based on this knowledge be developed in the planning and management of natural parks. The approach adopted by this research is valid and useful as evaluation criteria for NPs. It is recommended that more empirical studies of the atmospheric elements affecting user satisfaction, by activity type, activity space, and season, be performed in the future.
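The regression step of such an analysis can be illustrated with a minimal ordinary-least-squares sketch for a single predictor; the visitor scores below are hypothetical, not data from the study, and the full analysis would use several predictors at once:

```python
# Hypothetical data: maintenance scores (MPRF) and satisfaction ratings
# for a handful of visitors.
mprf = [2.0, 3.0, 4.0, 5.0]
satisfaction = [2.5, 3.0, 4.0, 4.5]

n = len(mprf)
mean_x = sum(mprf) / n
mean_y = sum(satisfaction) / n

# Ordinary least squares for one predictor: the slope estimates how
# strongly maintenance quality is associated with satisfaction.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(mprf, satisfaction))
         / sum((x - mean_x) ** 2 for x in mprf))
intercept = mean_y - slope * mean_x
print(slope, intercept)
```

With multiple predictors (NOU, crowding, DPR, EL, MPRF) the same idea is solved as a matrix least-squares problem, and standardized coefficients give the relative contributions reported in the abstract.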