• Title/Summary/Keyword: high fashion

Search Results: 2,481

Clinical Practice Guideline for Endoscopic Resection of Early Gastrointestinal Cancer (조기위장관암 내시경 치료 임상진료지침)

  • Park, Chan Hyuk;Yang, Dong-Hoon;Kim, Jong Wook;Kim, Jie-Hyun;Kim, Ji Hyun;Min, Yang Won;Lee, Si Hyung;Bae, Jung Ho;Chung, Hyunsoo;Choi, Kee Don;Park, Jun Chul;Lee, Hyuk;Kwak, Min-Seob;Kim, Bun;Lee, Hyun Jung;Lee, Hye Seung;Choi, Miyoung;Park, Dong-Ah;Lee, Jong Yeul;Byeon, Jeong-Sik;Park, Chan Guk;Cho, Joo Young;Lee, Soo Teik;Chun, Hoon Jai
    • Journal of Digestive Cancer Research / v.8 no.1 / pp.1-50 / 2020
  • Although surgery was once the standard treatment for early gastrointestinal cancers, endoscopic resection is now a standard treatment for early gastrointestinal cancers without regional lymph node metastasis. High-definition white-light endoscopy, chromoendoscopy, and image-enhanced endoscopy such as narrow-band imaging are performed to assess the margins and depth of early gastrointestinal cancers, in order to delineate resection boundaries and predict the possibility of lymph node metastasis before deciding on endoscopic resection. Endoscopic mucosal resection and/or endoscopic submucosal dissection can be performed to remove early gastrointestinal cancers completely in an en bloc fashion. Histopathological evaluation should be performed carefully to investigate risk factors for lymph node metastasis, such as the depth of cancer invasion and lymphovascular invasion. Additional treatment, such as radical surgery with regional lymphadenectomy, should be considered if the endoscopically resected specimen shows risk factors for lymph node metastasis. This is the first Korean clinical practice guideline for endoscopic resection of early gastrointestinal cancer. The guideline was developed mainly by de novo methods and encompasses the endoscopic management of superficial esophageal squamous cell carcinoma, early gastric cancer, and early colorectal cancer. It will be revised as new data on early gastrointestinal cancer are collected.

Extension Method of Association Rules Using Social Network Analysis (사회연결망 분석을 활용한 연관규칙 확장기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.111-126 / 2017
  • Recommender systems based on association rule mining contribute significantly to sellers' sales by reducing the time consumers spend searching for the products they want. Recommendations based on the frequency of transactions, such as orders, can effectively screen the products that are statistically marketable among multiple products. A product with high sales potential, however, can be omitted from the recommendations if it records an insufficient number of transactions at the beginning of its sale. Products missing from the associated recommendations may lose the chance of exposure to consumers, which leads to a decline in the number of transactions; in turn, diminished transactions create a vicious circle of lost opportunities to be recommended, so initial sales are likely to remain stagnant for a certain period of time. Products that are susceptible to fashion or seasonality, such as clothing, may be greatly affected. This study aimed to expand association rules so that the list of recommendations includes products whose initial transaction frequency is low despite their potential for high sales. The particular purpose is to predict the strength of the direct connection between two unconnected items through the properties of the paths located between them. An association between two items revealed in transactions can be interpreted as an interaction between them, which can be expressed as a link in a social network whose nodes are items. The first step calculates the centralities of the nodes in the middle of the paths that indirectly connect two nodes lacking a direct connection. The next step identifies the number of such paths and the shortest among them. These extracted measures are used as independent variables in a regression analysis to predict the future connection strength between the nodes. The connection strength between the two nodes, defined by the number of nodes between them, is measured after a certain period of time. The regression results confirm that the number of paths between two products, the length of the shortest path, and the number of neighboring items connected to the products are significantly related to their potential connection strength. This study used actual order transaction data collected over three months, from February to April 2016, from an online commerce company. To reduce the analytic complexity, which grows with the scale of the network, the analysis was performed only on miscellaneous goods. Two consecutively purchased items were chosen from each customer's transactions to obtain an antecedent-consequent pair, which secures a link for constructing the social network; the direction of the link was determined by the order in which the goods were purchased. Except for the last ten days of the data collection period, the social network of associated items was built for the extraction of independent variables, and the model predicts which links will be connected in the following ten days from these explanatory variables. Of the 5,711 previously unconnected links, 611 were newly connected during the last ten days. In experiments, the proposed model demonstrated excellent predictive performance: of the 571 links it predicted, 269 were confirmed to have been connected. This is 4.4 times the average of 61 that would be found without any prediction model.
This study is expected to be useful for industries that launch new products quickly with short life cycles, since exposure time is critical for them. It could also be used to detect diseases that are rarely identified in the early stages of medical treatment because of their low incidence. Since the complexity of social network analysis is sensitive to the number of nodes and links that make up the network, this study was conducted on a single category of miscellaneous goods. Future research should consider that this condition may limit the opportunity to detect unexpected associations between products belonging to different categories.
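
As a rough illustration of the path-based link prediction described above, the sketch below builds an item network and scores unconnected pairs from their path properties. It assumes networkx and scikit-learn; the toy purchase pairs, the exact feature set, and the logistic model are illustrative stand-ins, not the paper's actual pipeline.

```python
# Items are nodes; a directed link means the two items were bought consecutively.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

G = nx.DiGraph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"),
                ("B", "D"), ("A", "E"), ("E", "D")])
bc = nx.betweenness_centrality(G)   # centrality of potential intermediate nodes

def pair_features(u, v, cutoff=4):
    """Path properties between a currently unconnected pair (u, v)."""
    paths = list(nx.all_simple_paths(G, u, v, cutoff=cutoff))
    n_paths = len(paths)
    shortest = min((len(p) - 1 for p in paths), default=cutoff + 1)
    mids = {n for p in paths for n in p[1:-1]}          # nodes along the paths
    mid_bc = float(np.mean([bc[n] for n in mids])) if mids else 0.0
    neighbors = G.out_degree(u) + G.in_degree(v)        # adjacent item count
    return [n_paths, shortest, mid_bc, neighbors]

# Label = 1 if the pair became directly connected in the hold-out period
# (labels below are invented for the demo).
pairs, labels = [("A", "D"), ("E", "C"), ("E", "B")], [1, 0, 0]
X = np.array([pair_features(u, v) for u, v in pairs])
model = LogisticRegression().fit(X, labels)
print(model.predict_proba(X)[:, 1])   # predicted future connection strength
```

The paper regresses on a measured connection strength over the hold-out window; a binary "link formed" target is used here only to keep the sketch self-contained.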

The Usefulness of Product Display of Online Store by the Product Type of Usage Situation - Focusing on the moderate effect of the product portability - (사용상황별 제품유형에 따른 온라인 점포 제품디스플레이의 유용성 - 제품 휴대성의 조절효과를 중심으로 -)

  • Lee, Dong-Il;Choi, Seung-Hoon
    • Journal of Distribution Research / v.16 no.2 / pp.1-24 / 2011
  • 1. Introduction: In contrast to the offline purchasing environment, an online store cannot offer the sense of touch or direct visual inspection of its products. The builder of an online shopping mall should therefore provide more concrete and detailed product information (Kim 2008), and Alba et al. (1997) likewise predicted that the quality of the information offered determines post-purchase consumer satisfaction. In practice, many fashion and apparel online shopping malls present pictures of the product worn by a real-person model to enhance the usefulness of product information. Virtual product experience has also been suggested as a way of overcoming online consumers' limited perceptual capability (Jiang & Benbasat 2005). However, adopting and operating virtual reality (VR) tools requires high investment and technical specialty compared with text/picture product information (Shaffer 2006). This can raise the entry barrier to online retailing for small retailers and can demand a high level of perceptual effort from consumers, so an expensive technological solution may negatively affect consumer decision-making processes. Nevertheless, most previous research on online product information provision suggests that VR is the more effective tool.

2. Research Model and Hypotheses: As presented in the research model, the VR effect could be moderated by product type defined by the usage situation. Product types are defined as portable products and installed products, and the information offering types as a still picture of the product, a picture of the product on a real-person model, and VR.

3. Methods and Results: 3.1. Experimental design and measured variables. We designed a 2 (product type) × 3 (product information type) experimental setting and measured dependent variables such as information usefulness, attitude toward the shopping mall, overall product quality, purchase intention, and revisiting intention. Information usefulness and attitude toward the shopping mall were measured by multi-item scales; the reliability test showed a Cronbach's alpha above 0.6 for each variable, ensuring the internal consistency of the items. 3.2. Manipulation check. The main concern of this study is to verify the moderating effect of product type by usage situation, and the check indicated that our experimental manipulation of product type was successful. 3.3. Results. Information type had a significant main effect on only one dependent variable (attitude toward the shopping mall). As predicted, VR showed the highest mean among the information types, so H1 was partially supported; no main effect of product type was found. To evaluate H2 and H3, a two-way ANOVA was conducted. Interaction effects between information type and product type were found on three dependent variables (information usefulness, overall product quality, and purchase intention). As predicted, the picture of the product on a real-person model had the highest mean among the information types for portable products, whereas VR had the highest mean for installed products. Thus, H2 and H3 were supported.

4. Implications: The present study found a moderating effect of product type by usage situation, from which the following managerial implications are drawn. First, information type affects only attitude toward the shopping mall, meaning that VR effects alone are not enough to improve understanding of the product itself; one must therefore consider when and how to use VR tools. Second, interaction effects exist on information usefulness, overall product quality, and purchase intention, suggesting that considering the usage situation helps consumers understand a product and promotes their purchase intention. In conclusion, online retailers must fully consider not only product attributes but also product usage situations if they want to meet the needs of consumers.
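
The interaction tests above rest on a two-way ANOVA over the 2 × 3 design. A minimal sketch with statsmodels follows; the factor levels mirror the design, but the scores are simulated, not the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
product = np.repeat(["portable", "installed"], 90)
info = np.tile(np.repeat(["picture", "model_picture", "VR"], 30), 2)
# Simulated "information usefulness" scores with a built-in interaction.
base = 3.0 + 0.5 * ((product == "portable") & (info == "model_picture")) \
           + 0.5 * ((product == "installed") & (info == "VR"))
usefulness = base + rng.normal(0, 0.6, size=180)
df = pd.DataFrame({"product": product, "info": info, "usefulness": usefulness})

model = smf.ols("usefulness ~ C(product) * C(info)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects and the product x info interaction
```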

  • The Effect of Attributes of Innovation and Perceived Risk on Product Attitudes and Intention to Adopt Smart Wear (스마트 의류의 혁신속성과 지각된 위험이 제품 태도 및 수용의도에 미치는 영향)

    • Ko, Eun-Ju;Sung, Hee-Won;Yoon, Hye-Rim
      • Journal of Global Scholars of Marketing Science / v.18 no.2 / pp.89-111 / 2008
    • Due to the development of digital technology, studies on smart wear integrated into daily life have rapidly increased. However, consumer research on perceptions of and attitudes toward smart clothing is hard to find. The purpose of this study was to identify the innovation characteristics and perceived risks of smart clothing and to analyze the influence of these factors on product attitudes and adoption intention. Specifically, five hypotheses were established. H1: Perceived attributes of smart clothing, except for complexity, would relate positively to product attitude or purchase intention, while complexity would relate negatively. H2: Product attitude would relate positively to purchase intention. H3: Product attitude would mediate between perceived attributes and purchase intention. H4: Perceived risks of smart clothing would relate negatively to perceived attributes, except for complexity, and positively to complexity. H5: Product attitude would mediate between perceived risks and purchase intention. A self-administered questionnaire was developed based on previous studies. After a pretest, the data were collected during September 2006 from university students in Korea, who are relatively sensitive to innovative products. A total of 300 usable questionnaires were analyzed with the SPSS 13.0 program. About 60.3% of respondents were male, with a mean age of 21.3 years. About 59.3% reported that they were aware of smart clothing, but only 9 respondents had purchased it. The mean attitude toward smart clothing and mean purchase intention were 2.96 (SD=.56) and 2.63 (SD=.65), respectively. Factor analysis using principal components with varimax rotation was conducted to identify perceived attribute and perceived risk dimensions. Perceived attributes of smart wear were categorized into relative advantage (including compatibility), observability (including trialability), and complexity. Perceived risks were identified as physical/performance risk, social-psychological risk, time-loss risk, and economic risk. Regression analysis was conducted to test the five hypotheses. Relative advantage and observability were significant predictors of product attitude (adj $R^2$=.223) and purchase intention (adj $R^2$=.221). Complexity showed a negative influence on product attitude. Product attitude was significantly related to purchase intention (adj $R^2$=.692) and partially mediated between perceived attributes and purchase intention (adj $R^2$=.698). Therefore, hypotheses one to three were accepted. To test hypothesis four, the four dimensions of perceived risk and demographic variables (age, gender, monthly household income, awareness of smart clothing, and purchase experience) were entered as independent variables in the regression models. Social-psychological risk, economic risk, and gender (female) significantly predicted relative advantage (adj $R^2$=.276). When perceived observability was the dependent variable, social-psychological risk, time-loss risk, physical/performance risk, and age (younger) were significant, in that order (adj $R^2$=.144). However, physical/performance risk was positively related to observability: the more observable smart clothing seemed, the higher the perceived probability of physical harm or performance problems. Complexity was predicted by product awareness, social-psychological risk, economic risk, and purchase experience, in that order (adj $R^2$=.114).
Product awareness was negatively related to complexity, meaning that a high level of product awareness would reduce the perceived complexity of smart clothing. However, purchase experience was positively related to complexity; it appears that consumers perceive a high level of complexity when they actually use smart clothing in real life. The risk variables were positively related to complexity. That is, to decrease complexity, it is also necessary to minimize anxiety about social-psychological harm or monetary loss. Thus, hypothesis 4 was partially accepted. Finally, in testing hypothesis 5, social-psychological risk and economic risk were significant predictors of product attitude (adj $R^2$=.122) and purchase intention (adj $R^2$=.099), respectively. When the attitude variable was included with the risk variables as independent variables in the regression model predicting purchase intention, only the attitude variable was significant (adj $R^2$=.691). Thus, attitude fully mediated between perceived risks and purchase intention, and hypothesis 5 was accepted. The findings provide guidelines for fashion and electronics businesses that aim to create and strengthen positive attitudes toward smart clothing. Marketers need to consider not only the functional features of smart clothing but also its practical and aesthetic attributes, since appropriateness for social norms or self-image reduces the uncertainty of psychological or social risk, which increases the relative advantage of smart clothing; indeed, social-psychological risk was significantly associated with relative advantage. Economic risk is negatively associated with product attitude as well as purchase intention, suggesting that smart-wear developers have to reflect on the price ranges acceptable to potential adopters. It will also be effective to utilize the findings associated with complexity when marketers in the US plan communication strategies.
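
The mediation tests reported above follow the familiar three-step regression logic (total effect, a-path, then direct effect controlling for the mediator). A minimal sketch with simulated data and illustrative variable names, not the study's measures, is:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
advantage = rng.normal(3, 0.7, n)                     # relative advantage
attitude = 0.5 * advantage + rng.normal(0, 0.5, n)    # product attitude (mediator)
intention = 0.1 * advantage + 0.8 * attitude + rng.normal(0, 0.4, n)
df = pd.DataFrame({"advantage": advantage, "attitude": attitude,
                   "intention": intention})

step1 = smf.ols("intention ~ advantage", df).fit()             # total effect
step2 = smf.ols("attitude ~ advantage", df).fit()              # a path
step3 = smf.ols("intention ~ advantage + attitude", df).fit()  # direct + b path
# Partial mediation: advantage stays significant in step3 but shrinks relative
# to step1; full mediation would drive it to non-significance.
print(step1.params["advantage"], step3.params["advantage"])
```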


    How Enduring Product Involvement and Perceived Risk Affect Consumers' Online Merchant Selection Process: The 'Required Trust Level' Perspective (지속적 관여도 및 인지된 위험이 소비자의 온라인 상인선택 프로세스에 미치는 영향에 관한 연구: 요구신뢰 수준 개념을 중심으로)

    • Hong, Il-Yoo B.;Lee, Jung-Min;Cho, Hwi-Hyung
      • Asia pacific journal of information systems / v.22 no.1 / pp.29-52 / 2012
    • Consumers differ in the way they make a purchase. An audiophile would willingly make a bold yet serious decision to buy a top-of-the-line home theater system, while he is not interested in replacing his two-decade-old shabby car. On the contrary, an automobile enthusiast wouldn't mind spending forty thousand dollars on a new Jaguar convertible, yet cares little about his junky component system. It is product involvement that helps us explain such differences among individuals in purchase style. Product involvement refers to the extent to which a product is perceived to be important to a consumer (Zaichkowsky, 2001). It is an important factor that strongly influences a consumer's purchase decision-making process and has thus been of prime interest to consumer behavior researchers. Furthermore, researchers have found that involvement is closely related to perceived risk (Dholakia, 2001). While abundant research addresses how product involvement relates to overall perceived risk, little attention has been paid to the relationship between involvement and different types of perceived risk in an electronic commerce setting. Given that perceived risk can be a substantial barrier to online purchase (Jarvenpaa, 2000), research addressing this issue will offer useful implications on which specific types of perceived risk an online firm should focus on mitigating if it is to increase sales to the fullest potential. Meanwhile, past research has focused on consumer responses such as information search and dissemination as consequences of involvement, neglecting other behavioral responses like online merchant selection. For example, will a consumer seriously considering the purchase of a pricey Guzzi bag perceive a great degree of risk associated with online buying and therefore choose to buy it from a digital storefront rather than from an online marketplace to mitigate that risk? Will a consumer require greater trust on the part of the online merchant when the perceived risk of online buying is high? We intend to find answers to these research questions through an empirical study. This paper explores the impact of enduring product involvement and perceived risks on the required trust level, and further on online merchant choice. For the purpose of the research, five types or components of perceived risk are taken into consideration: financial, performance, delivery, psychological, and social risks. A research model has been built around the constructs under consideration, and 12 hypotheses have been developed based on the model to examine the relationships between enduring involvement and the five components of perceived risk, between the five components of perceived risk and the required trust level, between enduring involvement and the required trust level, and finally between the required trust level and preference toward an e-tailer. To attain our research objectives, we conducted an empirical analysis consisting of two phases of data collection: a pilot test and a main survey. The pilot test was conducted with 25 college students to ensure that the questionnaire items were clear and straightforward. The main survey was then conducted with 295 college students at a major university over nine days, between December 13, 2010 and December 21, 2010.
The measures employed to test the model included eight constructs: (1) enduring involvement, (2) financial risk, (3) performance risk, (4) delivery risk, (5) psychological risk, (6) social risk, (7) required trust level, and (8) preference toward an e-tailer. The statistical package SPSS 17.0 was used to test the internal consistency of the items within the individual measures; based on the Cronbach's ${\alpha}$ coefficients, the reliability of all the variables is supported. Meanwhile, the Amos 18.0 package was employed to perform a confirmatory factor analysis designed to assess the unidimensionality of the measures. The goodness of fit for the measurement model was satisfactory, and convergent, discriminant, and nomological validity were all statistically supported. The structural equation modeling technique was then used to analyze the individual paths among the research constructs. The results indicated that enduring involvement has significant positive relationships with all five components of perceived risk, while only performance risk is significantly related to the trust level consumers require for purchase. It can be inferred from this finding that product performance problems are most likely to occur when a merchant behaves in an opportunistic manner. Positive relationships were also found between involvement and required trust level, and between required trust level and online merchant choice. Enduring involvement is concerned with the pleasure a consumer derives from a product class and/or the desire for knowledge about the product class, and is thus likely to motivate the consumer to look for ways of mitigating perceived risk by requiring a higher level of trust on the part of the online merchant. Likewise, a consumer requiring a high level of trust in the merchant will choose a digital storefront rather than an e-marketplace, since a digital storefront is believed to be more trustworthy, as it fulfills orders by itself rather than acting as an intermediary. The findings of the present research provide both academic and practical implications. The first academic implication is that enduring product involvement is a strong motivator of consumer responses, especially the selection of a merchant, in the context of electronic shopping. Secondly, academics are advised to note that an individual component or type of perceived risk can serve as an important research construct, since it allows one to pinpoint the specific types of risk that are influenced by antecedents or that influence consequents. Meanwhile, our research provides implications useful for online merchants (both digital storefronts and e-marketplaces). Merchants may develop strategies to attract consumers by managing the perceived performance risk involved in purchase decisions, since it was found to have a significant positive relationship with the level of trust a consumer requires of the merchant. One way to manage performance risk is to thoroughly examine a product before shipping to ensure that it has no deficiencies or flaws.
Secondly, digital storefronts are advised to focus on symbolic goods (e.g., cars, cell phones, fashion outfits, and handbags), in which consumers are relatively more involved, whereas e-marketplaces should put their emphasis on non-symbolic goods (e.g., drinks, books, MP3 players, and bike accessories).
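
The internal-consistency check mentioned above (Cronbach's ${\alpha}$) is straightforward to reproduce. The sketch below computes it from a simulated multi-item measure; the respondent count mirrors the survey, but the scores are invented.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(0, 1, (295, 1))               # shared construct
items = latent + rng.normal(0, 0.6, (295, 4))     # four correlated items
print(round(cronbach_alpha(items), 3))            # comfortably above 0.7 here
```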


    Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

    • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
      • Journal of Intelligence and Information Systems / v.19 no.4 / pp.123-132 / 2013
    • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, and proximity sensor, there has been much research on using these sensors to create valuable applications. Human activity recognition is one such application, motivated by welfare applications such as support for the elderly, measurement of calorie consumption, and analysis of lifestyles and exercise patterns. One challenge in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to realize a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities using only limited information. The difficulty becomes especially severe when the number of different activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier distinguishing ten different activities can be built using data from only a single sensor, the smartphone accelerometer. Our approach to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the tree, the set of all classes is split into two subsets by a binary classifier. At each child node, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class; this tree can be viewed as a nested dichotomy that makes multi-class predictions. Depending on how the set of classes is split at each node, the resulting tree differs, and since some classes can be correlated, a particular tree may perform better than others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and combining their predictions during classification. END is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of a dichotomy, we used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each with a different random subset of features, using bootstrap samples. By combining bagging with random feature-subset selection, a random forest enjoys more diverse ensemble members than simple bagging. Overall, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that handles a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'.
The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 second for 2 minutes per activity from 5 volunteers. Of the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they lack full time-window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with similar activities, END classified all ten activities with a fairly high accuracy of 98.4%. By comparison, the accuracies achieved by a decision tree, a k-nearest-neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
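
The tree-building mechanics described above can be sketched compactly. The following is a minimal, illustrative nested-dichotomy tree using scikit-learn's RandomForestClassifier as the base learner (as in the paper); the demo data are synthetic stand-ins for the windowed accelerometer features, and a full END would average the outputs of several such randomly built trees.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class DichotomyTree:
    """One random nested dichotomy: split the class set in two at each node,
    train a binary random forest per split, multiply probabilities down."""
    def __init__(self, rng):
        self.rng = rng

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        if len(self.classes_) > 1:
            perm = self.rng.permutation(self.classes_)
            left = set(perm[: len(perm) // 2])            # random class split
            side = np.array([1 if c in left else 0 for c in y])
            self.clf = RandomForestClassifier(n_estimators=50,
                                              random_state=0).fit(X, side)
            self.sub = {s: DichotomyTree(self.rng).fit(X[side == s], y[side == s])
                        for s in (0, 1)}
        return self

    def predict_proba(self, X, classes):
        proba = np.zeros((len(X), len(classes)))
        if len(self.classes_) == 1:                       # leaf: one class left
            proba[:, list(classes).index(self.classes_[0])] = 1.0
            return proba
        p1 = self.clf.predict_proba(X)[:, list(self.clf.classes_).index(1)]
        for side, weight in ((1, p1), (0, 1 - p1)):
            proba += weight[:, None] * self.sub[side].predict_proba(X, classes)
        return proba

# Tiny demo with 4 synthetic "activities" in a 4-dimensional feature space.
rng = np.random.default_rng(3)
X = rng.normal(0, 1, (400, 4)) + np.repeat(np.arange(4), 100)[:, None]
y = np.repeat(np.arange(4), 100)
tree = DichotomyTree(rng).fit(X, y)
pred = tree.predict_proba(X, classes=np.arange(4)).argmax(axis=1)
print((pred == y).mean())   # training accuracy of a single dichotomy tree
```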

    A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

    • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
      • Asia pacific journal of information systems / v.21 no.1 / pp.103-122 / 2011
    • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. A set of keywords is thus often considered a condensed version of the whole document and plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents have not benefited from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, extremely tedious and time-consuming and requiring a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process. There are two main approaches to this aim: the keyword assignment approach and the keyword extraction approach. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts; that is, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text, without a prior vocabulary. Automatic keyword generation is treated as a classification task, and keywords are commonly extracted using supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to Turney's experimental results, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article; inversely, 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our own preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we adopted the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword's weight; (2) preprocess and parse a target document that has no keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with high similarity scores. Two keyword generation systems implementing IVSM were built: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between IVSM-generated keywords and author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
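
The five-step assignment loop can be sketched as follows. Here scikit-learn's TfidfVectorizer stands in for the paper's keyword-weighting scheme, and the keyword-to-text training pairs are invented for illustration; the point is only the cosine-similarity ranking between keyword-set vectors and the new document.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative training corpus: each keyword maps to the concatenated text of
# documents previously tagged with it, which yields the keyword-set vectors.
keyword_docs = {
    "logistics": "port shipping container freight supply chain cost",
    "fashion": "apparel clothing style trend brand consumer",
    "health": "hospital patient treatment clinical diagnosis",
}
vec = TfidfVectorizer()
K = vec.fit_transform(keyword_docs.values())   # one vector per keyword set

new_doc = ["shipping freight rates at the container port are rising"]
D = vec.transform(new_doc)                     # term-frequency-based vector
scores = cosine_similarity(D, K)[0]
ranked = sorted(zip(keyword_docs, scores), key=lambda kv: -kv[1])
print(ranked)   # keywords with the highest similarity would be assigned
```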

    Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

    • Park, Hyun-Jung;Shin, Kyung-Shik
      • Journal of Intelligence and Information Systems / v.20 no.3 / pp.19-43 / 2014
    • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with the tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, which transcends geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma arising from the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study examines the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and extends the analysis to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions in an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income within a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up on the assumption that a higher ratio of contributions by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the contributions of less motivated participants are discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has.
The dependent variable of the Cox models is the time from article initiation to promotion to featured-article status, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary with the characteristics of a group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles cite at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect on collaboration efficiency is more pronounced for more academic tasks in an online community.
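
The two focal measures are straightforward to compute from per-editor contribution counts. The sketch below uses invented edit counts; the per-article values computed this way would then enter the Cox models (for example, via a survival-analysis package such as lifelines).

```python
import numpy as np

def pareto_ratio(contribs: np.ndarray) -> float:
    """Share of all contributions made by the top 20% of contributors."""
    sorted_desc = np.sort(contribs)[::-1]
    top = max(1, int(np.ceil(0.2 * len(contribs))))
    return sorted_desc[:top].sum() / contribs.sum()

def gini(contribs: np.ndarray) -> float:
    """Gini coefficient of the contribution distribution (0 = equal)."""
    x = np.sort(contribs).astype(float)       # ascending order
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

edits = np.array([120, 40, 15, 8, 5, 3, 2, 2, 1, 1])   # edits per editor
print(pareto_ratio(edits), gini(edits))
```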

    A Study on the Meaning and Strategy of Keyword Advertising Marketing

    • Park, Nam Goo
      • Journal of Distribution Science / v.8 no.3 / pp.49-56 / 2010
    • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only; all these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising first became active, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising as of 2005. Keyword advertising is the technique of exposing relevant advertisements at the top of search sites when a user searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them; in this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than previous forms in that, instead of the seller discovering customers and running advertisements for them as in TV, radio, or banner advertising, it exposes advertisements to customers who are already visiting. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word and to achieve maximum efficiency at minimum cost. Its strong point is that customers can contact the products in question directly, making it more efficient than advertising in mass media such as TV and radio. Its weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over the advertisement, with the possibility of advertising expenses exceeding profits. Keyword advertising serves as the most appropriate advertising method for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on a metered-rate system: a company pays according to the number of clicks on the searched keyword. This model is representatively adopted by Overture, Google's Adwords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures rather than the number of clicks; the price is fixed per 1,000 exposures, and this model is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted.
The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies that maximize the strong points of keyword advertising and complement its weak points, it is highly likely to turn visitors into prospective customers. Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want; with this in mind, he or she should use multiple keywords when running ads. When first running an ad, the advertiser should give priority to keyword selection, considering how many search engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords that search engine users frequently use carry a high unit cost per click, advertisers without much money at the initial phase should pay attention to detailed keywords suited to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keyword ads are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, arousing little antipathy; but it fails to attract much attention precisely because most keyword advertising is text. Image-embedded advertising is easy to notice because of its images, but it is exposed on the lower part of a web page and clearly recognized as an advertisement, which leads to a low click-through rate; its strong point, however, is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people can easily recognize, it is well advised to use image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on site events and product composition as a means of monitoring visitor behavior in detail. Keyword advertising also allows them to analyze the advertising effect of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors, such as visitor counts, page views, and cookie values. A user's IP address, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data, and since it is almost impossible to analyze them directly, one analyzes them using log-analysis solutions. The generic information that can be extracted from log-analysis tools includes the total number of page views, the average number of page views per day, the number of basic page views, page views per visit, the total number of hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours.
Such data are useful for analyzing the situation and current status of rival companies as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers the chance to purchase the keywords in question once the advertising contract is over. On sites that give priority to established advertisers, an advertiser relying on keywords sensitive to season and timeliness may do well to purchase a vacant advertising slot in advance lest the appropriate timing for advertising be missed. Naver, however, does not give priority to existing advertisers for any keyword advertisements; there, one can preoccupy keywords by entering into a contract after confirming the contract period. This study examines marketing for keyword advertising and presents effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture, whose strong points are its CPC charging model and the registration of advertisements at the top of the most representative portal sites in Korea; these advantages make it the most appropriate medium for small and medium enterprises. However, Overture's CPC method has its weak points too: it is not the one perfect advertising model among search advertisements in the online market. It is therefore absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies that maximize its strengths so as to increase sales and create points of contact with customers.
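
The cost difference between the two pricing models can be made concrete with a back-of-the-envelope comparison; every number below (impressions, click-through rate, unit prices) is an invented illustration, not market data.

```python
# CPC charges per click; CPM charges per 1,000 exposures regardless of clicks.
impressions = 100_000
ctr = 0.02            # assumed click-through rate
cpc = 300             # assumed cost per click, KRW
cpm = 4_000           # assumed cost per 1,000 impressions, KRW

cpc_cost = impressions * ctr * cpc
cpm_cost = impressions / 1_000 * cpm
print(f"CPC: {cpc_cost:,.0f} KRW  CPM: {cpm_cost:,.0f} KRW")
# With these numbers CPC costs more in total but pays only for actual
# visitors, which is why it suits advertisers who value clicks over exposure.
```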


    Pharmacokinetic Study of Isoniazid and Rifampicin in Healthy Korean Volunteers (정상 한국인에서의 Isoniazid와 Rifampicin 약동학 연구)

    • Chung, Man-Pyo;Kim, Ho-Cheol;Suh, Gee-Young;Park, Jeong-Woong;Kim, Ho-Joong;Kwon, O-Jung;Rhee, Chong-H.;Han, Yong-Chol;Park, Hyo-Jung;Kim, Myoung-Min;Choi, Kyung-Eob
      • Tuberculosis and Respiratory Diseases / v.44 no.3 / pp.479-492 / 1997
    • Background: Isoniazid (INH) and rifampicin (RFP) are potent antituberculous drugs that have contributed to the decline of tuberculosis. In Korea, the prescribed doses of INH and RFP have differed from those recommended by the American Thoracic Society; in fact, they were determined by clinical experience rather than on a scientific basis, and there have been few reports on the pharmacokinetic parameters of INH and RFP in healthy Koreans. Method: The oral pharmacokinetics of INH were studied in 22 healthy native Koreans after administration of 300 mg and 400 mg of INH to the same subjects successively, at least 2 weeks apart. After an overnight fast, subjects received the medication, and blood samples were drawn at scheduled times over a 24-hour period; urine was also collected for 24 hours. The pharmacokinetics of RFP were studied in 20 subjects in the same fashion with 450 mg and 600 mg of RFP. Plasma and urinary concentrations of INH and RFP were determined by high-performance liquid chromatography (HPLC). Results: The time to peak serum concentration (Tmax) of INH was $1.05{\pm}0.34\;hrs$ at the 300 mg dose and $0.98{\pm}0.59\;hrs$ at the 400 mg dose, with half-lives of $2.49{\pm}0.88\;hrs$ and $2.80{\pm}0.75\;hrs$, respectively; these did not differ significantly (p > 0.05). The peak serum concentration (Cmax) after administration of 400 mg of INH was $7.14{\pm}1.95mcg/mL$, significantly higher than the Cmax ($4.37{\pm}1.28mcg/mL$) after 300 mg of INH (p < 0.01). Total clearance (CLtot) of INH at the 300 mg dose was $26.76{\pm}11.80mL/hr$; at the 400 mg dose it was $21.09{\pm}8.31mL/hr$, significantly lower than at the 300 mg dose (p < 0.01). While renal clearance (CLr) did not differ between the two doses, nonrenal clearance (CLnr) at the 400 mg dose ($18.18{\pm}8.36mL/hr$) was significantly lower than the CLnr ($23.71{\pm}11.52mL/hr$) at the 300 mg dose (p < 0.01). The Tmax of RFP was $1.11{\pm}0.41\;hrs$ at the 450 mg dose and $1.15{\pm}0.43\;hrs$ at the 600 mg dose, with half-lives of $4.20{\pm}0.73\;hrs$ and $4.95{\pm}2.25\;hrs$, respectively; these did not differ significantly (p > 0.05). The Cmax after administration of 600 mg of RFP was $13.61{\pm}3.43mcg/mL$, significantly higher than the Cmax ($10.12{\pm}2.25mcg/mL$) after 450 mg of RFP (p < 0.01). The CLtot of RFP at the 450 mg dose was $7.60{\pm}1.34mL/hr$; at the 600 mg dose it was $7.05{\pm}1.20mL/hr$, significantly lower than at the 450 mg dose (p < 0.05). While CLr did not differ between the two doses, CLnr at the 600 mg dose ($5.36{\pm}1.20mL/hr$) was significantly lower than the CLnr ($6.19{\pm}1.56mL/hr$) at the 450 mg dose (p < 0.01). Conclusion: Considering Cmax and CLnr, 300 mg of INH and 450 mg of RFP might be sufficient doses for the treatment of tuberculosis in Koreans, but this remains to be confirmed in patients with tuberculosis.
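
The reported parameters follow standard non-compartmental calculations. The sketch below shows the arithmetic on an invented concentration-time profile; the sampling times, concentrations, and the choice of terminal points are illustrative assumptions, not the study's data.

```python
import numpy as np

t = np.array([0.5, 1, 2, 4, 6, 8, 12, 24])                 # hours post-dose
c = np.array([3.1, 4.4, 3.5, 2.0, 1.1, 0.6, 0.2, 0.01])    # mcg/mL (invented)
dose = 300_000                                             # mcg (300 mg INH)

tmax, cmax = t[c.argmax()], c.max()                        # peak and its time
# Terminal half-life from the log-linear slope over the last sampled points.
slope = np.polyfit(t[-4:], np.log(c[-4:]), 1)[0]
half_life = np.log(2) / -slope
# Trapezoidal AUC over the sampled interval; CL/F = dose / AUC for oral dosing.
auc = ((c[1:] + c[:-1]) / 2 * np.diff(t)).sum()            # mcg*h/mL
cl_over_f = dose / auc                                     # apparent clearance, mL/h
print(tmax, cmax, round(half_life, 2), round(cl_over_f))
```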

