

A Study on the Part of Speech of 'Perfect' Marker 'YOU(有)' - The part of speech of 'YOU(有)' in the question 'YOUMEIYOU(有没有)+VP?' (완료상표지 '유(有)'의 품사자질 연구 - '유몰유(有没有)+VP?'구조를 중심으로)

  • 최정미;최재영
    • Journal of Sinology and China Studies, v.80, pp.177-197, 2019
  • The question 'YOUMEIYOU(有没有)+VP?' indicating 'Perfect' was mainly used in Southern Chinese dialects, but recently it has also come to be widely used in Mandarin. In the academic world, the appearance of 'YOUMEIYOU(有没有)+VP?' is reported to have been influenced by the Southern Chinese dialects. However, opinions diverge on the part of speech of 'YOU(有)', including 'auxiliary verb', 'adverb', 'verb', and 'particle', and these can be classified into two main theories: the 'auxiliary verb' theory and the 'adverb' theory. In this paper, we considered the validity of these two theories from syntactic, semantic, diachronic, and typological perspectives. The results are as follows. First, based on prototype category theory, we reclassified the typical and atypical syntactic features of 'auxiliary verb' and 'adverb' and examined which of the two the syntactic features of 'YOU(有)' are closer to. As a result, it is reasonable to think that '有(YOU)' is an 'adverb' with atypical syntactic features (it is an 'adverb', but can constitute a positive-negative question, like 'ZAIBUZAI(再不再)', 'CHANGBUCHANG(常不常)', 'CENGBUCENG(曾不曾)', and so on). Second, the semantic feature of 'YOU(有)' in the question 'YOUMEIYOU(有没有)+VP?' indicates an 'Aspect' meaning ('Perfect') and does not indicate the 'Modality' meaning expressed by typical 'auxiliary verbs' (ability, will, deontic, epistemic, etc.). Unlike English, contemporary Mandarin does not have separate 'auxiliary verb' forms (have, be) that represent 'Modality' and 'Aspect'. It is therefore reasonable to consider 'YOU(有)' an 'adverb', because adverbs had carried 'Aspect' meanings (Ceng(曾), Yi(已), Zai(在), Yao(要), etc.) since the pre-Qin(先秦) period, before the appearance of the 'Perfect particle' '了'. Third, in Chinese historical grammar scholarship, the prevailing view is that 'MEIYOU(没有)', which began to appear in front of VP in the Ming(明) dynasty, is a 'negative adverb'. Therefore, if 'MEIYOU(没有)' in the question 'YOUMEIYOU(有没有)+VP?' is regarded as a 'negative adverb', it is not reasonable to regard the remaining '有(YOU)' as an auxiliary verb. Also, there are examples of 'YOU(有)' used as an 'adverb' in the pre-Qin(先秦) period, so it is reasonable to regard '有(YOU)' as an 'adverb'. Fourth, according to linguistic typology, the 'H-POSSESSIVE' verb has been grammaticalized into a 'Perfect' marker in many languages around the world, but the resulting part of speech is not always 'auxiliary verb'; in some languages it can be grammaticalized into an 'adverb'. Therefore, it is reasonable to regard '有(YOU)' as an 'adverb'.

The Effect of Price Discount Rate According to Brand Loyalty on Consumer's Acquisition Value and Transaction Value (브랜드애호도에 따른 가격할인율의 차이가 소비자의 획득가치와 거래가치에 미치는 영향)

  • Kim, Young-Ei;Kim, Jae-Yeong;Shin, Chang-Nag
    • Journal of Global Scholars of Marketing Science, v.17 no.4, pp.247-269, 2007
  • In recent years, one of the major reasons for the fierce competition among firms is that they strive to increase their market shares and customer acquisition rates in the same market with similar and apparently undifferentiated products in terms of quality and perceived benefit. Because of this change in the recent marketing environment, differentiated after-sales service and diversified promotion strategies have become more important for gaining competitive advantage. Price promotion is the favorite strategy that most retailers use to achieve short-term sales increases, induce consumers' brand switching, introduce new products into the market, and so forth. However, if marketers apply or copy an identical price promotion strategy without considering the characteristic differences in product and consumer preference, it will cause serious problems, because a discounted price itself can make people skeptical about product quality, and changes in perceived value may appear differently depending on other factors such as consumer involvement or brand attitude. Previous studies showed that price promotion would certainly increase sales, and that the discounted price compared to the regular price would enhance the consumer's perceived values. On the other hand, a discounted price itself can make people depreciate, or be skeptical about, product quality, and can reduce consumers' positivity bias, because consumers may be unsure whether the current price promotion is the retailer's best offer. Moreover, we cannot say that a discounted price absolutely enhances the consumer's perceived values regardless of product category and purchase situation. That is, the factors that affect consumers' value perceptions and buying behavior are so diverse in reality that the results of studies on the same dependent variable come out differently depending on what variables were used or how the experimental conditions were designed. The majority of previous research on the effect of price-comparison advertising has used consumers' buying behavior as the dependent variable; in order to understand consumers' buying behavior theoretically, an analysis of the value perceptions which influence buying intentions is needed. In addition, previous work did not combine independent variables such as brand loyalty and price discount rate. For this reason, this paper examines the moderating effect of brand loyalty on the relationship between different levels of discount rate and buyers' value perceptions. We also provide theoretical and managerial implications: marketers need to consider variables such as product attributes, brand loyalty, and consumer involvement at the same time, and then establish a differentiated pricing strategy case by case, in order to enhance consumers' perceived values properly. Three research concepts were used in our study, and each was defined based on past research. The perceived acquisition value was defined as the perceived net gains associated with the products or services acquired; that is, the perceived acquisition value of the product is positively influenced by the benefits buyers believe they are getting by acquiring and using the product, and negatively influenced by the money given up to acquire it. The perceived transaction value was defined as the perception of psychological satisfaction or pleasure obtained from taking advantage of the financial terms of the price deal.
Lastly, brand loyalty was defined as a favorable attitude toward a purchased product. A consumer loyal to a brand has an emotional attachment to the brand or firm, whereas repeat purchasers may continue to buy the same brand without any emotional attachment to it. We assumed that if the degree of brand loyalty is high, the perceived acquisition value and the perceived transaction value will increase when a higher discount rate is provided. However, the empirical analysis found no significant differences in values between the two discount rates. This means that the price reduction did not significantly affect consumers' brand choice, because the perceived sacrifice decreased only a little and customers are already satisfied with the product's benefits when brand loyalty is high. From this result, we confirmed that consumers with a high degree of brand loyalty to a specific product are less sensitive to price changes. Thus, using a price promotion strategy merely in the expectation of a sales increase is not advisable; instead of discounting the price, marketers should strengthen consumers' brand loyalty and maintain a skimming strategy. On the contrary, when the degree of brand loyalty is low, the perceived acquisition value and the perceived transaction value decreased significantly when the higher discount rate was provided. Generally, brands that are considered inferior might be able to draw attention away from the quality of the product by making consumers focus more on the sacrifice component of price. But considering that consumers with a low degree of brand loyalty are known to be unsatisfied with the product's benefits and to hold a relatively negative brand attitude, the bigger price reduction offered in the experimental condition of this paper made consumers depreciate the product's quality and benefits even further, and consumers' psychologically perceived sacrifice increased while perceived values decreased accordingly. We infer that, in the case of an inferior brand, a drastic price cut or frequent price promotion may increase consumers' uncertainty about the overall components of the product. Therefore, reinforcing the augmented product, such as after-sales service, delivery, and credit terms, one of the levels constituting the product concept, would be more effective in practice than competing on sale price with products that enjoy high brand loyalty. Although this study examined the moderating effect of brand loyalty on the relationship between different levels of discount rate and buyers' value perceptions, it has several limitations. First, the study was conducted under controlled conditions in which a high-involvement product and two levels of discount rate were applied; with a low-involvement product, the results reported here might have been different, so these results explain only the specific situation studied. Second, the sample was university students in their twenties, so we cannot claim that the results generalize to all age groups. Future research that manipulates the level of discount along with consumer involvement might lead to a more robust understanding of the effects of various discount rates. Also, we used a cellular phone as the product stimulus, so it would be very interesting to analyze the results when the product stimulus is an intangible product such as a service. It could also be valuable to analyze whether the change in perceived value affects consumers' final buying behavior positively or negatively.


Correlation of p53 Protein Overexpression, Gene Mutation with Prognosis in Resected Non-Small Cell Lung Cancer(NSCLC) Patients (비소세포폐암에서 p53유전자의 구조적 이상 및 단백질 발현이 예후에 미치는 영향)

  • Lee, Y.H.;Shin, D.H.;Kim, J.H.;Lim, H.Y.;Chung, K.Y.;Yang, W.I.;Kim, S.K.;Chang, J.;Roh, J.K.;Kim, S.K.;Lee, W.Y.;Kim, B.S.;Kim, B.S.
    • Tuberculosis and Respiratory Diseases, v.41 no.4, pp.339-353, 1994
  • Background: The p53 gene codes for a DNA-binding nuclear phosphoprotein that appears to inhibit the progression of cells from the G1 to the S phase of the cell cycle. Mutations of the p53 gene are common in a wide variety of human cancers, including lung cancer. In lung cancers, point mutations of the p53 gene have been found in all histological types, in approximately 45% of resected NSCLC specimens and even more frequently in SCLC specimens. Mutant forms of the p53 protein have transforming activity and interfere with the cell-cycle regulatory function of the wild-type protein. The majority of p53 gene mutations produce proteins with altered conformation and prolonged half-life; these mutant proteins accumulate in the cell nucleus and can be detected by immunohistochemical staining. However, protein overexpression has been reported in the absence of mutation. p53 protein overexpression or gene mutation is reported to be a poor prognostic factor in breast cancer, but in lung cancer its prognostic significance is controversial. Method: We investigated p53 abnormalities by nucleotide sequencing, polymerase chain reaction-single strand conformation polymorphism (PCR-SSCP), and immunohistochemical staining, and correlated these results with each other and with survival in 75 patients with NSCLC resected with curative intent. Overexpression of the p53 protein was studied immunohistochemically in archival paraffin-embedded tumor samples using the D07 (Novocastra, U.K.) antibody, and was defined as nuclear staining of greater than 25% immunopositive cells in tumors. p53 gene mutations were detected by PCR-SSCP and nucleotide sequencing of exons 5-9 of the p53 gene. Result: 1) Of the 75 patients, 36% (27/75) showed p53 overexpression by immunohistochemical staining. There was no survival difference between positive and negative p53 immunostaining (overall median survival of 26 months and disease-free median survival of 13 months in both groups). 2) By PCR-SSCP, 27.6% (16/58) of the patients showed a mobility shift. There was no significant difference in survival according to mobility shift (overall median survival of 27 months in patients without mobility shift vs 20 months in patients with mobility shift; disease-free median survival of 8 months vs 10 months, respectively). 3) Nucleotide sequences were analyzed in 29 patients, and 34.5% (10/29) had a mutant p53 sequence. Patients with gene mutations showed a tendency toward shortened survival compared with patients without mutation (overall median survival of 22 vs 27 months, disease-free median survival of 10 vs 20 months), but the difference was not statistically significant. 4) The sensitivity and specificity of immunostaining relative to PCR-SSCP were 67.0% and 74.0%, and those of PCR-SSCP relative to nucleotide sequencing were 91.8% and 96.2%, respectively. The concordance rate between immunostaining and PCR-SSCP was 62.5%, and that between PCR-SSCP and nucleotide sequencing was 95.3%. Conclusion: In terms of detecting p53 gene mutations, PCR-SSCP was superior to immunostaining. p53 gene abnormalities, whether overexpression or mutation, were not a significant prognostic factor in NSCLC patients resected with curative intent. However, patients with a mutated p53 gene showed a trend toward early relapse.
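
For readers following the agreement statistics in point 4, here is a minimal illustrative sketch (hypothetical counts, not the study's data) of how sensitivity, specificity, and overall concordance are computed when one assay is scored against another taken as the reference standard:

```python
# Illustrative sketch (hypothetical counts, not the paper's data):
# scoring a test assay against a reference assay on a 2x2 table.

def agreement_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, and overall concordance."""
    sensitivity = tp / (tp + fn)   # reference-positive cases the test also calls positive
    specificity = tn / (tn + fp)   # reference-negative cases the test also calls negative
    concordance = (tp + tn) / (tp + fp + fn + tn)  # cases where the two assays agree
    return sensitivity, specificity, concordance

# Hypothetical example: immunostaining scored against PCR-SSCP as reference.
sens, spec, conc = agreement_metrics(tp=10, fp=7, fn=5, tn=20)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, concordance={conc:.1%}")
```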


Factors Affecting International Transfer Pricing of Multinational Enterprises in Korea (외국인투자기업의 국제이전가격 결정에 영향을 미치는 환경 및 기업요인)

  • Jun, Tae-Young;Byun, Yong-Hwan
    • Korean small business review, v.31 no.2, pp.85-102, 2009
  • With the continued globalization of world markets, transfer pricing has become one of the dominant sources of controversy in international taxation. Transfer pricing is the process by which a multinational corporation calculates a price for goods and services that are transferred to affiliated entities. Consider a Korean electronics enterprise that buys supplies from its own subsidiary located in China. How much the Korean parent company pays its subsidiary will determine how much profit the Chinese unit reports in local taxes. If the parent company pays above normal market prices, it may appear to have a poor profit, even if the group as a whole shows a respectable profit margin. In this way, transfer prices impact the taxable income reported in each country in which the multinational enterprise operates. Its importance lies in the fact that, according to the OECD, around 60% of international trade involves transactions between related parties of multinationals. Multinational enterprises (hereafter MEs) exert much effort into utilizing organizational advantages to make global investments, and they wish to minimize their tax burden, so MEs spend a fortune on economists and accountants to justify transfer prices that suit their tax needs. By contrast, local governments are not prepared to cope with MEs' powerful financial instruments. Tax authorities in each country wish to ensure that the tax base of any ME is divided fairly. Thus, both tax authorities and MEs have a vested interest in the way in which a transfer price is determined, and this is why MEs' international transfer prices are at the center of disputes concerned with taxation. Transfer pricing issues and practices are sometimes difficult for regulators to control because the tax administration does not have enough staff with the knowledge and resources necessary to understand them. The authors examine transfer pricing practices to provide resources useful in designing tax incentives and regulation schemes for policy makers. This study focuses on identifying the relevant business and environmental factors that could influence the international transfer pricing of MEs. In this perspective, we empirically investigate how management's perception of related variables influences the choice of international transfer pricing methods. We believe that this research is particularly useful in the design of tax policy, because it allows a tax administration with a limited budget to concentrate on a few selected factors. Data are composed of questionnaire responses from foreign firms in Korea with investment balances exceeding one million dollars at the end of 2004. We mailed questionnaires to 861 managers in charge of the accounting departments of each company, resulting in 121 valid responses. Seventy-six percent of the sample firms are classified as small and medium sized enterprises with assets below 100 billion Korean won. Reviewing transfer pricing methods, cost-based transfer pricing is the most popular, adopted by 60 firms; the market-based method is used by 31 firms, and 13 firms reported the resale-pricing method. Regarding the nationalities of foreign investors, the Japanese and the Americans constitute most of the sample. Logistic regressions were performed for statistical analysis. The dependent variable is binary: whether the method of international transfer pricing is market-based or cost-based.
This binary classification is founded on the belief that the market-based method is evaluated as the relatively objective way of pricing compared with cost-based methods. Cost-based pricing is assumed to give managers flexibility in transfer pricing decisions; therefore, local regulatory agencies are thought to prefer market-based pricing over cost-based pricing. Independent variables are composed of eight factors: corporate tax rate, tariffs, relations with local tax authorities, tax audit, equity ratios of local investors, volume of internal trade, sales volume, and product life cycle. The first four variables are included in the model because taxation lies at the center of transfer pricing disputes, so identifying the impact of these variables in the Korean business environment is much needed. Equity ratio is included to represent the interest of local partners. Volume of internal trade was sometimes employed in previous research to check the pricing behavior of managers, so we have followed these footsteps in this paper. Product life cycle is used as a surrogate for competition in local markets. Control variables are firm size and nationality of foreign investors. Firm size is controlled for using a dummy variable indicating whether or not the specific firm is small or medium sized, because some researchers report that big firms behave differently from small and medium sized firms in transfer pricing. The other control variable is also a dummy variable indicating whether the investor is American, because some prior studies conclude that the American management style is different in that it limits branch managers' freedom of decision. Reviewing the statistical results, we found that managers prefer the cost-based method over the market-based method as the importance of corporate taxes and tariffs increases. This result means that managers need flexibility to lessen the tax burden when they feel taxes are important. They also prefer the cost-based method as the product life cycle matures, which means that they support subsidiaries in local market competition using cost-based transfer pricing. On the contrary, as the relationship with local tax authorities becomes more important, managers prefer the market-based method, because market-based pricing is a better way to maintain good relations with tax officials. Other variables, such as tax audit, volume of internal transactions, sales volume, and local equity ratio, showed only insignificant influence. Additionally, we replaced the two tax variables (corporate taxes and tariffs) with data showing the top marginal tax rate and mean tariff rates of each country, and performed another regression to see whether the results would differ from the former ones. As a consequence, we found something different on the part of mean tariffs, which show only an insignificant influence on the dependent variable. We suspect that each company in the sample pays tariffs at a specific rate applied only to its own company, which could be located far from mean tariff rates. Therefore, we concluded that we need more detailed data showing the tariffs of each company in order to check the role of this variable. Considering that the present paper has relied heavily on questionnaires, an effort to build a reliable database is needed to enhance research reliability.
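
As an illustration of the estimation setup the abstract describes, the following minimal sketch fits a logistic regression of a binary transfer-pricing choice (1 = market-based, 0 = cost-based) on a few survey-style regressors. The data, variable names, and library choice are all hypothetical, not the authors':

```python
# Minimal sketch (hypothetical data and variable names, not the paper's):
# logistic regression of transfer-pricing method choice on survey factors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 121  # same order of magnitude as the paper's valid responses

# Hypothetical regressors mirroring factors listed in the abstract.
X = np.column_stack([
    rng.normal(size=n),          # perceived importance of corporate tax rate
    rng.normal(size=n),          # perceived importance of tariffs
    rng.normal(size=n),          # importance of relations with tax authorities
    rng.integers(0, 2, size=n),  # dummy: small/medium sized firm
    rng.integers(0, 2, size=n),  # dummy: American investor
])
X = sm.add_constant(X)
y = rng.integers(0, 2, size=n)   # 1 = market-based, 0 = cost-based

model = sm.Logit(y, X).fit(disp=False)
print(model.summary())  # coefficient signs indicate direction of preference
```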

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems, v.20 no.2, pp.63-86, 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of overflowing personal information is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concern over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of a need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing; existing studies have focused on only a small subset of the technical characteristics of context-aware computing, so there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitations of users' knowledge and experience of context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animation, etc. Hence, conducting a survey on the assumption that the participants have sufficient experience or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. The purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to assist them, some of the main factors found in the literature were presented. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor in order to determine the final sub-factors from the candidates; final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns.
The traditional questionnaire method was not selected because, for context-aware personalized services, users were considered to lack understanding of and experience with the new technology. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensory networks as the most important of the technological characteristics of context-aware personalized services. For the creation of a context-aware personalized service, this study demonstrates the importance and relevance of determining an optimal methodology, and of deciding which technologies, and in what sequence, are needed to acquire which types of users' context information. Along with the development of context-aware technology, most studies focus on which services and systems should be provided and developed by utilizing context information. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following the evaluation of the sub-factors, additional studies will be necessary on approaches to reducing users' privacy concerns about technological characteristics such as a highly identifiable level of identical data, diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services oriented toward the anywhere-anytime-any-device concept, have come to be regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
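
The concordance analysis the abstract mentions is commonly implemented as Kendall's coefficient of concordance W, the usual consensus measure for a Delphi ranking round. The following is a minimal sketch with entirely hypothetical expert ranks, not the study's data:

```python
# Minimal sketch (hypothetical data): Kendall's coefficient of concordance W.
# W ranges from 0 (no agreement among experts) to 1 (perfect agreement).
import numpy as np

# rows = experts, columns = factors; entries are ranks assigned by each expert
ranks = np.array([
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
    [1, 2, 4, 3, 5],
])
m, n = ranks.shape                 # m experts, n factors
R = ranks.sum(axis=0)              # column rank sums (used for the mean-rank result too)
S = ((R - R.mean()) ** 2).sum()    # squared deviations of rank sums
W = 12 * S / (m**2 * (n**3 - n))   # Kendall's W, no-ties formula
print(f"Kendall's W = {W:.2f}")    # higher W -> stronger group consensus
```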

Comparison of One-day and Two-day Protocol of $^{11}C$-Acetate and $^{18}F$-FDG Scan in Hepatoma (간암환자에 있어서 $^{11}C$-Acetate와 $^{18}F$-FDG PET/CT 검사의 당일 검사법과 양일 검사법의 비교)

  • Kang, Sin-Chang;Park, Hoon-Hee;Kim, Jung-Yul;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology, v.14 no.2, pp.3-8, 2010
  • Purpose: $^{11}C$-Acetate PET/CT is useful in detecting liver-related lesions in the human body, with a reported sensitivity of 87.3%. On the other hand, $^{18}F$-FDG PET/CT has a sensitivity of 47.3%, and it has been reported that if both $^{18}F$-FDG and $^{11}C$-Acetate PET/CT are carried out together, their cumulative sensitivity approaches 100%. However, the normal uptake of the pancreas and the spleen in $^{11}C$-Acetate PET/CT can influence the $^{18}F$-FDG PET/CT, leading to an inaccurate diagnosis. This research aimed to verify, through comparative analysis of the one-day and two-day protocols, how much influence these two radiopharmaceuticals have on the resulting images. Materials and Methods: This research was based on 46 patients who were diagnosed with liver cancer and underwent PET/CT (35 male and 11 female participants; average age $54{\pm}10.6$ years; age range 29-69 years). The equipment used was the Biograph TruePoint40 PET/CT (Siemens Medical Systems, USA). The 21 participants in the one-day protocol were given the $^{11}C$-Acetate PET/CT first and the $^{18}F$-FDG PET/CT exactly one hour later. The other 25 participants, in the two-day protocol, were given the $^{11}C$-Acetate PET/CT on the first day and the $^{18}F$-FDG PET/CT on the next day. The two groups were then compared by assigning identical regions of interest over the pancreas and the spleen in the $^{18}F$-FDG images and measuring the Standard Uptake Value (SUV). SPSS Ver.17 (SPSS Inc., USA) was used for statistical analysis, with statistical significance assessed by the unpaired t-test. Results: After analyzing the participants' images from each of the two protocols, the mean${\pm}$standard deviation of the SUV under the two-day protocol was as follows: head $1.62{\pm}0.32$ g/mL, body $1.57{\pm}0.37$ g/mL, and tail $1.49{\pm}0.33$ g/mL for the pancreas, and $1.53{\pm}0.28$ g/mL for the spleen. Under the one-day protocol the results were: head $1.65{\pm}0.35$ g/mL, body $1.58{\pm}0.27$ g/mL, tail $1.49{\pm}0.28$ g/mL, and spleen $1.66{\pm}0.29$ g/mL. Conclusion: No statistically significant difference was found between the one-day and two-day protocol SUVs in the pancreas and the spleen at the p<0.05 level, and nothing that could be misread as a false positive was found in the PET/CT image analysis. This research also found that no overestimation of the SUV occurred from the influence of $^{11}C$-Acetate on the $^{18}F$-FDG images when the two tests were carried out in one day, a result supported by the statistical analysis of the measured SUVs. If $^{11}C$-Acetate becomes commercialized in the future, the diagnostic ability for liver diseases can be improved by combining it with $^{18}F$-FDG in a one-day protocol. The tests can thus be accomplished in one day without interference between the two radiopharmaceuticals, which could also reduce waiting time and improve patient satisfaction.
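
Below is a minimal sketch of the unpaired t-test used for the group comparison, with hypothetical SUVs drawn around the reported group means; it is not the study's raw data or SPSS workflow:

```python
# Minimal sketch (hypothetical SUV values, not the study's raw data):
# unpaired t-test comparing mean pancreas-head SUV between protocols.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical SUVs generated around the reported group means and SDs.
one_day = rng.normal(loc=1.65, scale=0.35, size=21)  # one-day protocol, n=21
two_day = rng.normal(loc=1.62, scale=0.32, size=25)  # two-day protocol, n=25

t, p = stats.ttest_ind(one_day, two_day)  # unpaired (independent) t-test
print(f"t = {t:.2f}, p = {p:.3f}")        # p > 0.05 -> no significant difference
```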


Studies on the Effects of Rice Plant on the Changes of Materials in Submerged Paddy Soils (수도재배(水稻栽培)가 답상태토양(畓狀態土壤)의 물질변화(物質變化)에 미치는 영향(影響)에 관(關)한 연구(硏究))

  • Kim, Kwang Sik
    • Korean Journal of Soil Science and Fertilizer, v.7 no.2, pp.71-97, 1974
  • Many studies on the changes of materials in water-logged paddy soil have been reported, but several problems arise in applying them to field soil. The main differences between soil packed in a beaker or column tube and a natural field furrow slice are the presence or absence of rice roots and the effect of water percolation. On the other hand, the mechanisms by which water percolation changes materials in the natural field furrow slice are gradually being understood. The purpose of this experiment was to determine the effect of rice cultivation on the chemical and physical changes of materials in water-logged paddy soil. The results obtained are as follows. 1. The physical and chemical changes in the water-logged paddy soil of the non-planted control plot were nearly the same as in the beaker or column tube experiments, while in the planted plot slightly altered patterns were observed. 2. The relation between the number of tillers and total cations, $Ca^{{+}{+}}$, $Mg^{{+}{+}}$, Fe, and Mn in the leachate showed very high significance. This result showed that the leaching of those cations was promoted by the growth of the rice root. 3. On the other hand, the concentrations of potassium, silica, and phosphorus in the leachates gradually decreased, and $NH_4$-N could not be detected after the stage of active tillering. These facts revealed that such components were absorbed by the rice plant. 4. A highly significant correlation was observed between the number of tillers and the concentrations of total cations, $Ca^{{+}{+}}$, $Fe^{{+}{+}}$, Fe, and Mn in the percolated water, with the exception of $Mg^{{+}{+}}$. This also showed that the rice root promoted the leaching of those cations. 5. The very high significance of the correlation between $HCO_3{^-}$ and the number of tillers indicated that the higher the activity of the rice root, the more the $HCO_3{^-}$ concentration in the leachate increased. 6. The relationship between $HCO_3{^-}$ and total cations, $Ca^{{+}{+}}$, $Mg^{{+}{+}}$, $Fe^{{+}{+}}$, Fe, and Mn appeared very highly significant. $HCO_3{^-}$, a metabolite of the rice root, promoted the leaching of $Ca^{{+}{+}}$, $Mg^{{+}{+}}$, $Fe^{{+}{+}}$, and Mn; these cations may have been leached in the form of bicarbonates. 7. The iron in the leachate was in the form of $Fe^{{+}{+}}$, and the correlation between $Fe^{{+}{+}}$ and $HCO_3{^-}$ was very highly significant, indicating that it was probably leached out as ferrous bicarbonate. 8. In the rhizosphere, ferrous iron decreased gradually, and the glucose concentration was 2 to 3 times as high as in the other parts of the soil. These facts agree with previous reports in which the rhizosphere was oxidized by oxygen excreted from the root and enriched by organic matter excreted from the root and by accumulated root residues. 9. ${\beta}$-Glucosidase and phosphatase activity in the rhizosphere was higher than in the other parts of the soil, which might be attributed to the vigorous activity of microorganisms in the rhizosphere, where the glucose concentration was high. 10. The pH of the leachate from the planted plot was lower than that of the control, and the Eh of the planted soil was elevated in the last stage.
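
The significance statements above rest on correlation tests between tiller number and leachate composition. As a minimal illustration with entirely hypothetical measurements (not the study's data), such a test could be run as follows:

```python
# Minimal sketch (hypothetical measurements): Pearson correlation between
# tiller counts and total cation concentration in the leachate.
import numpy as np
from scipy import stats

tillers = np.array([8, 12, 15, 18, 21, 24, 26, 28])           # tillers per hill
cations = np.array([2.1, 2.9, 3.4, 4.2, 4.8, 5.5, 5.9, 6.6])  # me/L, hypothetical

r, p = stats.pearsonr(tillers, cations)
print(f"r = {r:.2f}, p = {p:.4f}")  # small p -> correlation is significant
```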


Application of Westgard Multi-Rules for Improving Nuclear Medicine Blood Test Quality Control (핵의학 검체검사 정도관리의 개선을 위한 Westgard Multi-Rules의 적용)

  • Jung, Heung-Soo;Bae, Jin-Soo;Shin, Yong-Hwan;Kim, Ji-Young;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology, v.16 no.1, pp.115-118, 2012
  • Purpose: The Levey-Jennings chart flags measurement values that deviate from the tolerance range (mean ${\pm}2SD$ or ${\pm}3SD$). The upgraded Westgard Multi-Rules, on the other hand, are actively recommended as a more efficient, specialized form of internal quality control for hospital certification. To apply the Westgard Multi-Rules in quality control, a credible quality control substance and target value are required. However, as physical examinations commonly use the quality control substances provided within the test kit, calculating the target value presents many difficulties, owing to frequent changes in concentration values and the insufficient credibility of the control substance. This study attempts to improve the professionalism and credibility of quality control by applying the Westgard Multi-Rules and calculating a credible target value using a commercialized quality control substance. Materials and Methods: This study used Immunoassay Plus Control Levels 1, 2, and 3 of Company B as the quality control substance for Total T3, a thyroid test implemented at the relevant hospital. The target value was established as the mean of 295 cases collected over 1 month, excluding values that deviated beyond ${\pm}2SD$, and was entered into the hospital's quality control calculation program. The 12s, 22s, 13s, 2 of 32s, R4s, 41s, $10\bar{x}$, and 7T Westgard Multi-Rules were applied to the Total T3 test, which was conducted 194 times over 20 days in August. Based on the applied rules, this study classified the data into random error and systematic error for analysis. Results: For quality control substances 1, 2, and 3, the target values of Total T3 were established as 84.2 ng/$dl$, 156.7 ng/$dl$, and 242.4 ng/$dl$, with standard deviations of 11.22 ng/$dl$, 14.52 ng/$dl$, and 14.52 ng/$dl$, respectively. Applying the Westgard Multi-Rules against these target values gave the following error-type analysis: for random error, 12s was flagged 48 times, 13s 13 times, and R4s 6 times; for systematic error, 22s was flagged 10 times, 41s 11 times, 2 of 32s 17 times, and $10\bar{x}$ 10 times, while 7T was not triggered. For uncontrollable random error types, the entire experimental process was rechecked and greater emphasis was placed on re-testing. For controllable systematic error types, this study searched for the cause of the error, recorded it in the action form, and reported the information to the internal quality control committee when necessary. Conclusions: This study applied the Westgard Multi-Rules using a commercialized control substance and established target values. As a result, precise analysis of random and systematic error was achieved through the 12s, 22s, 13s, 2 of 32s, R4s, 41s, $10\bar{x}$, and 7T rules, and ideal quality control was achieved through analysis of all data within the ${\pm}3SD$ range. In this regard, a quality control method based on the systematic application of the Westgard Multi-Rules is more effective than the Levey-Jennings chart alone and can maximize error detection.
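
As an illustration of how such rules operate, the sketch below applies a few of the Westgard rules, expressed as z-scores against an established target and SD, to a hypothetical control series. It is a simplified sketch, not the hospital's QC program, and covers only a subset of the eight rules the study used:

```python
# Minimal sketch (hypothetical series, subset of rules): Westgard checks
# on control measurements standardized against the target value and SD.

def westgard_flags(values, target, sd):
    """Return the rules violated by the most recent points in `values`."""
    z = [(v - target) / sd for v in values]
    flags = []
    if abs(z[-1]) > 3:
        flags.append("1_3s")                    # one point beyond +/-3SD (random error)
    if abs(z[-1]) > 2:
        flags.append("1_2s")                    # warning rule: one point beyond +/-2SD
    if len(z) >= 2 and z[-1] > 2 and z[-2] > 2:
        flags.append("2_2s high")               # two consecutive points beyond +2SD (systematic)
    if len(z) >= 2 and z[-1] < -2 and z[-2] < -2:
        flags.append("2_2s low")
    if len(z) >= 2 and abs(z[-1] - z[-2]) > 4:
        flags.append("R_4s")                    # range between consecutive points > 4SD (random)
    if len(z) >= 10 and all(v > 0 for v in z[-10:]):
        flags.append("10_x high")               # ten consecutive points on one side of the mean
    if len(z) >= 10 and all(v < 0 for v in z[-10:]):
        flags.append("10_x low")
    return flags

# Example: Total T3 control level 1, using the target and SD from the abstract;
# the daily control results themselves are hypothetical.
series = [83.0, 85.1, 96.0, 108.5]              # ng/dl
print(westgard_flags(series, target=84.2, sd=11.22))
```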

  • PDF

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems, v.17 no.4, pp.31-59, 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available in the future, effectively and efficiently ranking search results will become ever more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance increases further if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure based ranking methods have been playing an essential role in the World Wide Web (WWW), and their effectiveness and efficiency are now widely recognized. Meanwhile, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph, so link-structure based ranking seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, which has only a single recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, however, encompasses various kinds of classes and properties, and consequently, ranking methods used in the WWW should be modified to reflect the complexity of the information space of the Semantic Web. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to Kleinberg's authority score and hub score, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another, depending on the characteristics of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several kinds of Semantic Web systems to validate their technique and showed experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or overall resources should be described in some detail, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon whereby pages that are less important but densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may obtain a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm that can solve the problems left by the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach adopted by previous research, a user, under our approach, determines the weights of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people in the real world evaluate things, and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on the other limitations posed by previous research. In addition, we propose two ways to incorporate datatype properties, which have not been employed even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had never been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research and which enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
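
For concreteness, the link-analysis baseline the paper builds on can be sketched in a few lines. The following PageRank power iteration on a toy graph is illustrative only; it is not the authors' class-oriented algorithm, and the graph is hypothetical:

```python
# Minimal sketch (illustrative toy graph): PageRank by power iteration,
# the link-structure ranking baseline described in the abstract.
import numpy as np

# Adjacency: links[i] lists the nodes that node i points to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = 4, 0.85                   # number of nodes, damping factor

# Build the column-stochastic transition matrix.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

r = np.full(n, 1.0 / n)          # start from the uniform distribution
for _ in range(100):             # iterate r <- d*M*r + (1-d)/n
    r = d * M @ r + (1 - d) / n
print(np.round(r, 3))            # node 2, referred to by many nodes, ranks highest
```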

Clinical Experiences for Cardiac Myxomas (심장 점액종의 임상적 고찰)

  • Lee, Geun-Dong;Lee, Jae-Won;Jung, Jae-Seung;Jung, Sung-Ho;Je, Hyoung-Gon;Choo, Suk-Jung;Song, Hyun;Chung, Cheol-Hyun
    • Journal of Chest Surgery, v.41 no.6, pp.703-709, 2008
  • Background: Diagnosis and treatment are often successful in the setting of cardiac myxomas. However, cardiac myxomas can lead to catastrophic complications due to intracardiac obstruction and embolism preoperatively, and can recur postoperatively. Material and Method: We retrospectively reviewed the clinical characteristics, surgical treatment, and recurrence data of 85 patients who underwent cardiac myxoma surgery at Asan Medical Center between November 1994 and June 2007. We analyzed the morphologic characteristics of 58 patients with left atrial myxomas and determined the development of functional mitral valve stenosis and systemic embolism by reviewing the results of preoperative echocardiograms to find potential preoperative risk factors. Result: Twenty-seven (31.8%) patients were men and 58 (68.2%) were women. The mean patient age was $54.5{\pm}14.3$ years. Preoperative symptoms included obstructive symptoms in 41 (48.2%) patients, signs of embolism in 19 (22.4%), constitutional symptoms in 8 (9.4%), and no symptoms in 19 (20.0%). Among the 58 patients with left atrial myxomas, the mean maximal tumor diameter was $4.3{\pm}1.8$ cm (range $1.1{\sim}8\;cm$). Twenty-six (44.8%) patients had the prolapsing type, defined as a tumor mobile enough to move down to the mitral annular plane during diastole, and 32 (55.2%) had the villous type, defined as a tumor consisting of multiple fine villous extensions on the surface. Twelve (20.7%) patients had severe functional mitral valve stenosis, and 15 (25.9%) had systemic embolism preoperatively. The incidence of severe functional mitral valve stenosis was significantly higher in patients with the prolapsing type than in those with the non-prolapsing type (p=0.001). The mean maximal tumor diameter in patients with severe functional mitral valve stenosis was $5.1{\pm}1.0\;cm$, significantly larger than that seen in patients without severe functional mitral valve stenosis (p=0.041). The incidence of systemic embolism was significantly higher in patients with the villous type than in those with the smooth type (p=0.006). Postoperative complications were noted in 6 (7.1%) patients, and early mortality in 1 (1.2%). The mean postoperative follow-up duration was $36.2{\pm}37.5$ months, with recurrence in 2 (2.4%) patients during the follow-up period; their disease-free intervals were 48 and 12 months, respectively. Conclusion: Surgical treatment for cardiac myxomas was performed safely, and the long-term prognosis was good. In patients with left atrial myxoma, close attention should be maintained, and surgery should be performed promptly in those with the prolapsing type or a large maximal diameter, in order to prevent severe functional mitral valve stenosis, and in those with the villous type, in order to prevent systemic embolism. Serial echocardiography should be performed in order to detect recurrence.