• Title/Summary/Keyword: Search

Search Result 17,020, Processing Time 0.051 seconds

Global Cosmetics Trends and Cosmeceuticals for 21st Century Asia (화장품의 세계적인 개발동향과 21세기 아시아인을 위한 기능성 화장품)

  • T.Joseph Lin
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.23 no.1
    • /
    • pp.5-20
    • /
    • 1997
  • War and poverty depress the consumption of cosmetics, while peace and prosperity encourage their proliferation. With the end of World War II, the US, Europe and Japan witnessed rapid growth of their cosmetic industries. The ending of the Cold War has stimulated the growth of the industry in Eastern Europe. Improved economies and mass communication are also responsible for the fast growth of the cosmetic industries in many Asian nations. The rapid development of the cosmetic industry in mainland China over the past decade proves that changing economies and political climates can deeply affect the health of our business. In addition to war, economy, political climate and mass communication, factors such as lifestyle, religion, morality and value concepts can also affect the growth of our industry. Cosmetics are a product of society. As society and the needs of its people change, cosmetics also evolve with respect to their contents, packaging, distribution, marketing concepts, and emphasis. In many ways, cosmetics mirror our society, reflecting social changes. Until the early 70's, cosmetics in the US were primarily developed for white women. The civil rights movement of the 60's gave birth to ethnic cosmetics, and products designed for African-Americans became popular in the 70's and 80's. The consumerism of the 70's led the FDA to tighten cosmetic regulations, forcing manufacturers to disclose ingredients on their labels. The result was the spread of safety-oriented, "hypoallergenic" cosmetics and more selective use of ingredients. The new ingredient labeling law in Europe is also likely to affect the manner in which development chemists choose ingredients for new products. Environmental pollution, too, can affect cosmetics trends. For example, the concern over ozone depletion in the stratosphere has promoted the consumption of suncare products.
Similarly, the popularity of natural cosmetic ingredients, the search for non-animal testing methods, and ecology-conscious cosmetic packaging seen in recent years all reflect the profound influences of our changing world. In the 1980's, a class of efficacy-oriented skin-care products, which the New York Times dubbed "serious" cosmetics, emerged in the US. "Cosmeceuticals" refer to hybrids of cosmetics and pharmaceuticals which have gained importance in the US in the 90's and are quickly spreading world-wide. In spite of regulatory problems, consumer demand and new technologies continue to encourage their development. New classes of cosmeceuticals are emerging to meet the demands of increasingly affluent Asian consumers as we enter the 21st century.

  • PDF

Extension Method of Association Rules Using Social Network Analysis (사회연결망 분석을 활용한 연관규칙 확장기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.111-126
    • /
    • 2017
  • Recommender systems based on association rule mining significantly contribute to sellers' sales by reducing consumers' time to search for products that they want. Recommendations based on the frequency of transactions such as orders can effectively screen out the products that are statistically marketable among multiple products. A product with a high possibility of sales, however, can be omitted from the recommendation if it records an insufficient number of transactions at the beginning of the sale. Products missing from the associated recommendations may lose the chance of exposure to consumers, which leads to a decline in the number of transactions. In turn, diminished transactions may create a vicious circle of lost opportunity to be recommended. Thus, initial sales are likely to remain stagnant for a certain period of time. Products that are susceptible to fashion or seasonality, such as clothing, may be greatly affected. This study was aimed at expanding association rules to include in the list of recommendations those products whose initial transaction frequency is low despite the possibility of high sales. The particular purpose is to predict the strength of the direct connection of two unconnected items through the properties of the paths located between them. An association between two items revealed in transactions can be interpreted as the interaction between them, which can be expressed as a link in a social network whose nodes are items. The first step calculates the centralities of the nodes in the middle of the paths that indirectly connect the two nodes without direct connection. The next step identifies the number of the paths and the shortest among them. These extracts are used as independent variables in the regression analysis to predict future connection strength between the nodes.
The strength of the connection between the two nodes of the model, which is defined by the number of nodes between the two nodes, is measured after a certain period of time. The regression analysis results confirm that the number of paths between the two products, the distance of the shortest path, and the number of neighboring items connected to the products are significantly related to their potential strength. This study used actual order transaction data collected for three months from February to April 2016 from an online commerce company. To reduce the complexity of analytics as the scale of the network grows, the analysis was performed only on miscellaneous goods. Two consecutively purchased items were chosen from each customer's transactions to obtain a pair of antecedent and consequent, which secures a link needed for constituting a social network. The direction of the link was determined by the order in which the goods were purchased. Except for the last ten days of the data collection period, the social network of associated items was built for the extraction of independent variables. The model predicts the number of links to be connected in the next ten days from the explanatory variables. Of the 5,711 previously unconnected links, 611 were newly connected during the last ten days. Through experiments, the proposed model demonstrated excellent predictions. Of the 571 links that the proposed model predicted, 269 were confirmed to have been connected. This is 4.4 times more than the average of 61, which can be found without any prediction model. This study is expected to be useful for industries whose new products launch quickly with short life cycles, since their exposure time is critical. Also, it can be used to detect diseases that are rarely identified in the early stages of medical treatment because of their low incidence.
Since the complexity of the social networking analysis is sensitive to the number of nodes and links that make up the network, this study was conducted in a particular category of miscellaneous goods. Future research should consider that this condition may limit the opportunity to detect unexpected associations between products belonging to different categories of classification.
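The path-based feature extraction described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the graph, item names, and the two features computed here (number of intermediary items and shortest-path length) are hypothetical stand-ins for the paper's path counts and centrality measures.

```python
from collections import defaultdict, deque

def path_features(edges, a, b):
    """For an unconnected item pair (a, b), compute two illustrative
    features: the number of common-neighbor intermediaries (distinct
    2-step paths) and the shortest indirect path length via BFS."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)  # treat co-purchase links as undirected here
    common = len(adj[a] & adj[b])          # items sitting between a and b
    seen, frontier, dist = {a}, deque([(a, 0)]), None
    while frontier:                         # BFS for shortest path length
        node, d = frontier.popleft()
        if node == b:
            dist = d
            break
        for nxt in adj[node] - seen:
            seen.add(nxt)
            frontier.append((nxt, d + 1))
    return common, dist

# toy co-purchase network: A and B are not directly linked,
# but two intermediaries (X, Y) connect them
edges = [("A", "X"), ("X", "B"), ("A", "Y"), ("Y", "B"), ("A", "Z")]
print(path_features(edges, "A", "B"))  # → (2, 2)
```

Features like these, extracted from the network built on the earlier part of the data, would then serve as regressors predicting whether the pair becomes directly connected in the following period.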

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful for them. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide the users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role for document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask the authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents could also benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle; manually assigning keywords to all documents is a daunting task, or even impractical, in that it is extremely tedious and time-consuming, requiring a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given set of vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document.
Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. On the other hand, in the latter approach, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques. Thus, keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems such as Extractor and Kea were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document and, as a result, keyword extraction is limited to terms that appear in the document. Therefore, keyword extraction cannot generate implicit keywords that are not included in a document. According to the experimental results of Turney, about 64% to 90% of keywords assigned by the authors can be found in the full text of an article. Inversely, it also means that 10% to 36% of the keywords assigned by the authors do not appear in the article and thus cannot be generated through keyword extraction algorithms. Our preliminary experiment result also shows that 37% of keywords assigned by the authors are not included in the full text. This is the reason why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, which is a conventional information retrieval model that represents documents and queries by vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets.
The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on the term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: IVSM system for Web-based community service and stand-alone IVSM system. Firstly, the IVSM system is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers, and, indeed, it has been tested through a number of academic papers including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precisions of IVSM applied to Web-based community service and academic journals were 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. Also, IVSM shows comparable performance to Extractor that is a representative system of keyword extraction approach developed by Turney. As electronic documents increase, we expect that IVSM proposed in this paper can be applied to many electronic documents in Web-based community and digital library.
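The five-step assignment procedure lends itself to a compact sketch. The following is a hypothetical illustration of the cosine-similarity ranking step, not the authors' IVSM implementation: in practice the keyword "profiles" would be built and weighted from training documents, whereas here they are toy term lists.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_keywords(doc_terms, keyword_sets, top_k=2):
    """Rank candidate keywords by the similarity of their term profiles
    to the target document, in the spirit of steps (1)-(5) above."""
    doc_vec = Counter(doc_terms)                      # step (3): document vector
    scores = {kw: cosine(doc_vec, Counter(profile))   # step (4): cosine similarity
              for kw, profile in keyword_sets.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]  # step (5)

# toy keyword profiles (hypothetical training data, unweighted)
keyword_sets = {
    "logistics": ["port", "shipping", "cargo", "logistics"],
    "fashion":   ["style", "trend", "outfit"],
}
doc = ["shipping", "cargo", "port", "port"]
print(assign_keywords(doc, keyword_sets, top_k=1))  # → ['logistics']
```

Because the ranking is over keyword sets rather than over terms occurring in the document, this style of assignment can surface implicit keywords that never appear in the target text, which is the property the abstract emphasizes.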

The Irradiated Lung Volume in Tangential Fields for the Treatment of a Breast (유방암의 접선 조사시 피폭 폐용적)

  • Oh Young Taek;Kim Juree;Kang Haejin;Sohn Jeong Hye;Kang Seung Hee;Chun Mison
    • Radiation Oncology Journal
    • /
    • v.15 no.2
    • /
    • pp.137-143
    • /
    • 1997
  • Purpose : Radiation pneumonitis is one of the complications caused by radiation therapy that includes a portion of the lung tissue. The severity of radiation-induced pulmonary dysfunction depends on the irradiated lung volume, total dose, dose rate and underlying pulmonary function. It also depends on whether chemotherapy is done or not. The irradiated lung volume is the most important factor to predict the pulmonary dysfunction in breast cancer patients following radiation therapy. There are some data that show the irradiated lung volume measured from CT scans as a part of treatment planning with the tangential beams, but such data have not been reported in Korea. We planned to evaluate the irradiated lung volume quantitatively using CT scans for the breast tangential field and to search for useful factors that could predict the irradiated lung volume. Materials and Methods : The lung volume was measured for 25 patients with breast cancer irradiated with a tangential field from Jan. 1995 to Aug. 1996. Parameters that can predict the irradiated lung volume included: (1) the perpendicular distance from the posterior tangential edge to the posterior part of the anterior chest wall at the center of the field (CLD); (2) the maximum perpendicular distance from the posterior tangential field edge to the posterior part of the anterior chest wall (MLD); (3) the greatest perpendicular distance from the posterior tangential edge to the posterior part of the anterior chest wall on the CT image at the center of the longitudinal field (GPD); (4) the length of the longitudinal field (L). The irradiated lung volume (RV), the entire volume of both lungs (EV) and the ipsilateral lung volume (IV) were measured using a dose-volume histogram. The relationship between the irradiated lung volume and the predictors was evaluated by regression analysis. Results : The RV is 61-279 cc (mean 170 cc), the RV/EV is 2.9-13.0% (mean 5.8%) and the RV/IV is 4.9-29.0% (mean 12.2%).
The CLD, the MLD and the GPD are 1.9-3.3 cm, 1.9-3.3 cm and 1.4-3.1 cm respectively. No significant relations are found between the irradiated lung volumes (RV, RV/EV, RV/IV) and the parameters (CLD, MLD, GPD, L, CLD×L, MLD×L and GPD×L), given the little variance in the parameters. The RV/IV of the left breast irradiation is significantly larger than that of the right, but the RV/EVs do not show the differences. There was no symptomatic radiation pneumonitis during at least 6 months of follow up. Conclusion : No significant relationship between the irradiated lung volume and the predictors is found, with little variation in the parameters. The irradiated lung volume in the tangential field is less than 10% of the entire lung volume when the CLD is less than 3 cm. The RV/IV of the left tangential field is larger than that of the right, but there were no significant differences in RV/EVs. Symptomatic radiation pneumonitis has not occurred during a minimum of 6 months of follow up.

  • PDF

Recognition and attitude to functional division between physicians and pharmacists of practising physicians and pharmacists in Taegu city (대구시 개원의사와 개국약사의 의약분업에 대한 인식과 태도)

  • Lee, Moo-Sik;Yoon, Nung-Ki;Suh, Suk-Kwon;Park, Jae-Yong
    • Journal of Preventive Medicine and Public Health
    • /
    • v.26 no.1 s.41
    • /
    • pp.1-19
    • /
    • 1993
  • A mail questionnaire was administered to 370 practicing physicians and 388 pharmacists in Taegu city, selected by systematic sampling, to examine utilization states and opinions of pharmacy under the medical care insurance programme and attitudes toward the functional division between physicians and pharmacists, from April to May 1992. Regarding the opinion on the outcome of drug-stores under medical insurance, 71.2 percent of practicing physicians answered failure, but in contrast only 13.4 percent of practicing pharmacists answered failure. Fifty percent of practicing physicians asserted introducing the functional division between physician and pharmacist, while 66.9 percent of practicing pharmacists answered that the drug-store under medical insurance itself is a successful programme. The average daily number of medicine preparations was 32.2 cases. The percentage of utilization of drug-stores under medical insurance relative to the average daily cases of preparing medicine was 20 percent, and the percentage of utilization with a physician's prescription was 0.7 percent. And 58.7 percent of practicing physicians experienced outside-the-institute prescription. Regarding the opinion on the pros and cons of enforcing the functional division between physician and pharmacist, 59.2 percent of practicing physicians preferred pros and 17.7 percent cons, but 38 percent of practicing pharmacists preferred pros and 45.5 percent cons. And pharmacists knew the content of the functional division between physician and pharmacist better than physicians did. As a reason for the pros of enforcing the functional division between physician and pharmacist, practicing physicians emphasized preventing the misuse or abuse of medicine, but practicing pharmacists emphasized displaying the physician's and pharmacist's professional abilities.
And as an opinion on the implementation style of the functional division between physician and pharmacist among pros respondents, practicing physicians favored mandatory enforcement (52.3%), while practicing pharmacists favored a partial, incomplete functional division (81.7%). As the method of prescription if the functional division between physician and pharmacist were enforced, both practicing physicians and pharmacists mostly preferred the generic name (44.0%, 89%), but physicians preferred the brand name (35.3%) secondly. Regarding the reason for not implementing the functional division between physician and pharmacist to date, both physicians and pharmacists answered the problem of business rights between physician and pharmacist, followed by lack of recognition and interest of the people and lack of governmental willingness. Regarding the opinion on the prior conditions for enforcing the functional division between physician and pharmacist, practicing physicians and pharmacists mostly named the uneven distribution of medical facilities and drug-stores between rural and urban areas, the inequality of physician and pharmacist manpower and the problem of manpower demand and supply; in addition, practicing physicians pointed out establishing an attitude of acceptance on the part of pharmacists, while practicing pharmacists favored establishing an attitude of acceptance on the part of physicians, showing different attitudes between physicians and pharmacists. The following conclusions were reached: 1. The current drug-store under medical insurance program yields insufficient outcomes, so we should consider program conversion from the drug-store under medical insurance program to the functional division between physician and pharmacist. 2. There were problems of business rights and conflicts between physician and pharmacist in enforcing the functional division between physician and pharmacist, so the government should search for a plan to resolve the problem and maintain a neutral willingness for the protection of national health.

  • PDF

How Enduring Product Involvement and Perceived Risk Affect Consumers' Online Merchant Selection Process: The 'Required Trust Level' Perspective (지속적 관여도 및 인지된 위험이 소비자의 온라인 상인선택 프로세스에 미치는 영향에 관한 연구: 요구신뢰 수준 개념을 중심으로)

  • Hong, Il-Yoo B.;Lee, Jung-Min;Cho, Hwi-Hyung
    • Asia pacific journal of information systems
    • /
    • v.22 no.1
    • /
    • pp.29-52
    • /
    • 2012
  • Consumers differ in the way they make a purchase. An audio mania would willingly make a bold, yet serious, decision to buy a top-of-the-line home theater system, while he is not interested in replacing his two-decade-old shabby car. On the contrary, an automobile enthusiast wouldn't mind spending forty thousand dollars to buy a new Jaguar convertible, yet cares little about his junky component system. It is product involvement that helps us explain such differences among individuals in the purchase style. Product involvement refers to the extent to which a product is perceived to be important to a consumer (Zaichkowsky, 2001). Product involvement is an important factor that strongly influences consumer's purchase decision-making process, and thus has been of prime interest to consumer behavior researchers. Furthermore, researchers found that involvement is closely related to perceived risk (Dholakia, 2001). While abundant research exists addressing how product involvement relates to overall perceived risk, little attention has been paid to the relationship between involvement and different types of perceived risk in an electronic commerce setting. Given that perceived risk can be a substantial barrier to the online purchase (Jarvenpaa, 2000), research addressing such an issue will offer useful implications on what specific types of perceived risk an online firm should focus on mitigating if it is to increase sales to a fullest potential. Meanwhile, past research has focused on such consumer responses as information search and dissemination as a consequence of involvement, neglecting other behavioral responses like online merchant selection. For one example, will a consumer seriously considering the purchase of a pricey Guzzi bag perceive a great degree of risk associated with online buying and therefore choose to buy it from a digital storefront rather than from an online marketplace to mitigate risk? 
Will a consumer require greater trust on the part of the online merchant when the perceived risk of online buying is rather high? We intend to find answers to these research questions through an empirical study. This paper explores the impact of enduring product involvement and perceived risks on required trust level, and further on online merchant choice. For the purpose of the research, five types or components of perceived risk are taken into consideration, including financial, performance, delivery, psychological, and social risks. A research model has been built around the constructs under consideration, and 12 hypotheses have been developed based on the research model to examine the relationships between enduring involvement and five components of perceived risk, between five components of perceived risk and required trust level, between enduring involvement and required trust level, and finally between required trust level and preference toward an e-tailer. To attain our research objectives, we conducted an empirical analysis consisting of two phases of data collection: a pilot test and main survey. The pilot test was conducted using 25 college students to ensure that the questionnaire items are clear and straightforward. Then the main survey was conducted using 295 college students at a major university for nine days between December 13, 2010 and December 21, 2010. The measures employed to test the model included eight constructs: (1) enduring involvement, (2) financial risk, (3) performance risk, (4) delivery risk, (5) psychological risk, (6) social risk, (7) required trust level, (8) preference toward an e-tailer. The statistical package, SPSS 17.0, was used to test the internal consistency among the items within the individual measures. Based on the Cronbach's α coefficients of the individual measure, the reliability of all the variables is supported. 
Meanwhile, the Amos 18.0 package was employed to perform a confirmatory factor analysis designed to assess the unidimensionality of the measures. The goodness of fit for the measurement model was satisfied. Unidimensionality was tested using convergent, discriminant, and nomological validity. The statistical evidence proved that the three types of validity were all satisfied. Then the structural equation modeling technique was used to analyze the individual paths along the relationships among the research constructs. The results indicated that enduring involvement has significant positive relationships with all five components of perceived risk, while only performance risk is significantly related to the trust level required by consumers for purchase. It can be inferred from the findings that product performance problems are most likely to occur when a merchant behaves in an opportunistic manner. Positive relationships were also found between involvement and required trust level and between required trust level and online merchant choice. Enduring involvement is concerned with the pleasure a consumer derives from a product class and/or with the desire for knowledge of the product class, and thus is likely to motivate the consumer to look for ways of mitigating perceived risk by requiring a higher level of trust on the part of the online merchant. Likewise, a consumer requiring a high level of trust in the merchant will choose a digital storefront rather than an e-marketplace, since a digital storefront is believed to be more trustworthy than an e-marketplace, as it fulfills orders by itself rather than acting as an intermediary. The findings of the present research provide both academic and practical implications. The first academic implication is that enduring product involvement is a strong motivator of consumer responses, especially the selection of a merchant, in the context of electronic shopping. 
Secondly, academicians are advised to pay attention to the finding that an individual component or type of perceived risk can be used as an important research construct, since it would allow one to pinpoint the specific types of risk that are influenced by antecedents or that influence consequents. Meanwhile, our research provides implications useful for online merchants (both online storefronts and e-marketplaces). Merchants may develop strategies to attract consumers by managing perceived performance risk involved in purchase decisions, since it was found to have a significant positive relationship with the level of trust required by a consumer on the part of the merchant. One way to manage performance risk would be to thoroughly examine the product before shipping to ensure that it has no deficiencies or flaws. Secondly, digital storefronts are advised to focus on symbolic goods (e.g., cars, cell phones, fashion outfits, and handbags) in which consumers are relatively more involved than others, whereas e-marketplaces should put their emphasis on non-symbolic goods (e.g., drinks, books, MP3 players, and bike accessories).

  • PDF

The Definition of Outer Space and the Air/Outer Space Boundary Question (우주의 법적 지위와 경계획정 문제)

  • Lee, Young-Jin
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.30 no.2
    • /
    • pp.427-468
    • /
    • 2015
  • To date, we have considered the theoretical views, the standpoints of states and the discourse within the international community, such as the UN Committee on the Peaceful Uses of Outer Space (COPUOS), regarding the Air/Outer Space Boundary Question, which is one of the first issues taken up by UN COPUOS, concerned with marking the starting point of outer space. As mentioned above, discussions in the United Nations and among scholars within each state regarding the delimitation issue often saw a division between those in favor of a functional approach (the functionalists) and those seeking the delineation of a boundary (the spatialists). The spatialists emphasize that the boundary between air and outer space should be delimited because the status of outer space is a type of public domain from which sovereign jurisdiction is excluded, as stated in Article 2 of the Outer Space Treaty. By contrast, Article 1 of the Chicago Convention acknowledges sovereignty over airspace as international customary law, whose binding force exists independently of the Convention. The functionalists, backed initially by the major space powers, which viewed any boundary demarcation as possibly restricting their access to space, whether for peaceful or non-military purposes, considered it insufficient or inadequate to delimit a boundary of outer space without obvious scientific and technological evidence. Over the last more than 50 years there has been great development in the exploration and use of outer space, but a large number of states, including those taking the functionalist view, have maintained a negative attitude. As the element of location is a decisive factor for the choice of the legal regime to be applied, a purely functional approach to the regulation of activities in the space above the Earth does not offer a solution.
It is therefore welcome that clear evidence of a growing recognition of, and national practice concerning, a spatial approach to the problem is gaining support both among a large number of States and among publicists. The search for a solution to the problem of demarcating the two different legal regimes governing the space above Earth has undoubtedly been facilitated, and a number of countries including Russia have already advocated the acceptance of the lowest perigee boundary of outer space at a height of 100 km. As a matter of fact, the lowest perigee at which space objects are still able to continue orbiting the earth has already been imposed as a natural criterion for the delimitation of outer space. This delimitation of outer space has also been evidenced by the constant practice of a large number of States and their tacit consent to space activities accomplished so far at this distance and beyond it. Of course there are still numerous opposing views on the delineation of an outer space boundary by space powers like the U.S.A., England, France and so on. Therefore, first of all, to solve the legal issues faced by the international community in outer space activities, such as the delimitation problem, a positive and peaceful will of international cooperation is needed. From this viewpoint, President John F. Kennedy once described the rationale behind outer space activities in his famous "Moon speech" given at Rice University in 1962. He called upon Americans and all mankind to strive for peaceful cooperation and coexistence in our future outer space activities. And Kennedy explained, "There is no strife, … nor any international conflict in outer space as yet. But its hazards are hostile to us all: Its conquest deserves the best of all mankind, and its opportunity for peaceful cooperation may never come again." 
This speech still offers us, in the contemporary era, ample suggestions for further peaceful cooperation in outer space activities, including the delimitation of outer space.

Differential Diagnosis By Analysis of Pleural Effusion (흉수분석에 의한 질병의 감별진단)

  • Ko, Won-Ki;Lee, Jun-Gu;Jung, Jae-Ho;Park, Mu-Suk;Jeong, Nak-Yeong;Kim, Young-Sam;Yang, Dong-Gyoo;Yoo, Nae-Choon;Ahn, Chul-Min;Kim, Sung-Kyu
    • Tuberculosis and Respiratory Diseases
    • /
    • v.51 no.6
    • /
    • pp.559-569
    • /
    • 2001
  • Background : Pleural effusion is one of the most common clinical manifestations associated with a variety of pulmonary diseases such as malignancy, tuberculosis, and pneumonia. However, there are no laboratory tests that reliably determine the specific cause of a pleural effusion. Therefore, an attempt was made to analyze the various types of pleural effusion and to search for laboratory tests useful for differentiating between the underlying diseases, especially between a malignant and a non-malignant pleural effusion. Methods : 93 patients with a pleural effusion, who visited Severance Hospital from January 1998 to August 1999, were enrolled in this study. Ultrasound-guided thoracentesis was performed, and a confirmatory diagnosis was made by Gram stain, bacterial culture, Ziehl-Neelsen stain, mycobacterial culture, pleural biopsy, and cytology. Results : The male-to-female ratio was 56:37 and the average age was 47.1±21.8 years. There were 16 cases of malignant effusion, 12 cases of para-malignant effusion, 36 cases of tuberculosis, 22 cases of para-pneumonic effusion, and 7 cases of transudate. The LDH2 fraction was significantly higher in the para-malignant effusion group than in the para-pneumonic effusion group (30.6±6.4% vs. 20.2±7.5%, p<0.05), and both the LDH1 and LDH2 fractions were significantly higher in the para-malignant effusion group than in the tuberculosis group (16.4±7.2% vs. 7.6±4.7% and 30.6±6.4% vs. 17.6±6.3%, respectively, p<0.05). The pleural effusion/serum LDH4 fraction ratio was significantly lower in the malignant effusion group than in the tuberculosis group (1.5±0.8 vs. 2.1±0.6, p<0.05). The LDH4 fraction and the pleural effusion/serum LDH4 fraction ratio were significantly lower in the para-malignant effusion group than in the tuberculosis group (17.0±5.8% vs. 23.5±4.6% and 1.3±0.4 vs. 2.1±0.6, respectively, p<0.05). Conclusion : These results suggest that the LDH isoenzyme profile was the only useful biochemical test for the differential diagnosis of these diseases. In particular, the most useful measure for distinguishing a para-malignant from a tuberculous effusion was the pleural effusion/serum LDH4 fraction ratio.
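The discriminating measure the abstract settles on is a simple quotient of isoenzyme fractions. A minimal sketch (not the authors' code; the serum value below is a hypothetical input) of how that ratio would be computed:

```python
# Illustrative sketch: the pleural effusion / serum LDH4 fraction ratio
# used in the abstract to separate para-malignant from tuberculous effusions.
def ldh4_fraction_ratio(effusion_ldh4_pct: float, serum_ldh4_pct: float) -> float:
    """Ratio of the LDH4 isoenzyme fraction in pleural fluid to that in serum."""
    if serum_ldh4_pct <= 0:
        raise ValueError("serum LDH4 fraction must be positive")
    return effusion_ldh4_pct / serum_ldh4_pct

# Group means reported in the abstract: para-malignant ~1.3 +/- 0.4,
# tuberculous ~2.1 +/- 0.6 -- a lower ratio points toward a para-malignant
# effusion. The serum fraction (13.0%) here is made up for illustration.
ratio = ldh4_fraction_ratio(17.0, 13.0)
print(round(ratio, 2))  # → 1.31
```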


Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been recognized in the information society, the use and collection of information have become important. A facial expression, like an artistic painting, contains information that could be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed human emotion prediction models and has applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) or Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy, which is inevitable since MRA can only capture a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and for the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) in order to increase prediction accuracy. SVR is an extension of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that lie close (within a threshold ε) to the model prediction. 
Using SVR, we tried to build a model that can measure the level of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions while providing appropriate visually stimulating contents, and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid-search technique to find the optimal values of the parameters C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events, and we used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal, or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA; however, it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
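The ε-insensitive SVR with a hyperparameter grid search described above can be sketched as follows. This is not the authors' code: the toy data, the grid values, and the RBF `gamma` parameterization (which plays the role of 1/σ² in the abstract's notation) are all illustrative assumptions.

```python
# Minimal sketch of grid-searched ε-insensitive SVR (scikit-learn).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Hypothetical stand-in for the paper's facial-feature / valence data.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = X @ np.array([0.5, -0.2, 0.1, 0.0, 0.3]) + rng.normal(scale=0.1, size=120)

param_grid = {
    "C": [0.1, 1, 10],
    "gamma": [0.01, 0.1, 1],      # RBF kernel width, analogous to 1/σ²
    "epsilon": [0.01, 0.1, 0.5],  # half-width of the ε-insensitive tube
}
# MAE is the comparison measure used in the abstract.
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_mean_absolute_error", cv=5)
search.fit(X, y)
print(search.best_params_)
```

Only training points outside the ε tube become support vectors, which is why the fitted model depends on a subset of the data, as the abstract notes.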

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of studies have been conducted to improve firms' short-term performance and to enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. The discovery of promising technology depends on how a firm evaluates the value of technologies, and many evaluation methods have thus been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. While this approach provides in-depth analysis and ensures the validity of the results, it is usually cost- and time-ineffective and is limited to qualitative evaluation. Considerable research has attempted to forecast the value of technology by using patent information to overcome this limitation. Patent-based technology evaluation has served as a valuable assessment approach in technological forecasting because a patent contains a full and practical description of a technology in a uniform structure, and it provides information that is not divulged in any other source. Although the patent-information-based approach has contributed to our understanding of how to predict promising technologies, it has limitations: prediction is made from past patent information, and the interpretations of patent analyses are not consistent. In order to fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, the promisingness of technologies is evaluated from three different and complementary dimensions: impact, fusion, and diffusion. 
The impact of a technology refers to its influence on the development and improvement of future technologies, and is also clearly associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies, and represents the breadth of search underlying the technology. Fusion can be calculated per technology or per patent, so this study measures two fusion indexes: a fusion index per technology and a fusion index per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (e.g., t-n, t-n-1, t-n-2, …) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, the study recommends the final promising technologies based on the Analytic Hierarchy Process (AHP). AHP provides the relative importance of each index, leading to a final promisingness index for each technology. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes; for the remaining indexes, however, the mean absolute error of the proposed methodology is slightly higher than that of multiple regression analysis. 
These unexpected results may be explained, in part, by the small number of patents: since this study only uses patent data in class G06F, the number of sample patents is relatively small, leading to learning that is incomplete for a complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study attempts to extend existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial neural network. It helps managers who want to plan technology development, and policy makers who want to implement technology policy, by providing a quantitative prediction methodology. In addition, this study could help other researchers by providing a deeper understanding of the complex field of technological forecasting.
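The third module's AHP step can be sketched in a few lines: derive index weights as the principal eigenvector of a pairwise comparison matrix, then rank technologies by the weighted sum of their five index values. This is an assumed illustration, not the paper's implementation; the comparison matrix and index values are made up.

```python
# AHP sketch: priority weights from a reciprocal pairwise comparison matrix,
# then a weighted "promisingness" score per candidate technology.
import numpy as np

# Pairwise comparisons over the 5 indexes: impact, fusion/tech, fusion/patent,
# diffusion/tech, diffusion/patent (Saaty-scale reciprocal matrix, illustrative).
A = np.array([
    [1,   3,   3,   5,   5],
    [1/3, 1,   1,   3,   3],
    [1/3, 1,   1,   3,   3],
    [1/5, 1/3, 1/3, 1,   1],
    [1/5, 1/3, 1/3, 1,   1],
], dtype=float)

# Principal eigenvector, normalized to sum to 1 -> AHP priority weights.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()

# Hypothetical normalized index values for three candidate technologies.
index_values = np.array([
    [0.9, 0.4, 0.5, 0.3, 0.2],
    [0.5, 0.8, 0.7, 0.6, 0.5],
    [0.2, 0.3, 0.2, 0.9, 0.8],
])
scores = index_values @ weights
best = int(np.argmax(scores))
print(weights.round(3), "most promising technology:", best)
```

Consistent with the paper's finding that the impact index matters most, the first technology (strong impact, weaker diffusion) wins under these illustrative weights.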