• Title/Summary/Keyword: heterogeneous system

Search Results: 1,490

Research Direction for Functional Foods Safety (건강기능식품 안전관리 연구방향)

  • Jung, Ki-Hwa
    • Journal of Food Hygiene and Safety
    • /
    • v.25 no.4
    • /
    • pp.410-417
    • /
    • 2010
  • Various functional foods, marketed for their health and functional effects, are distributed in the market. Because these products come in the form of foods, tablets, and capsules, they are likely to be mistaken for drugs. In addition, non-experts may sell them as foods or use them for therapy. Efforts have been made to create health food regulations and to build a regulatory system that improves the current status of functional foods, but these have not yet been communicated to consumers. As a result, problems of circulating functional foods for therapeutic purposes, or of adding illegal medicinal substances to such products, have persisted, and they have been made worse by internet media. The causes of this problem can be categorized into (1) the product itself and (2) its use, but in either case one possible cause is a lack of communication with consumers. Potential problems that can be caused by functional foods include illegal substances, hazardous substances, allergic reactions, considerations when administered to patients, drug interactions, ingredients with purity or concentrations too low to be detected, products requiring metabolic activation, health risks from over- or under-dosing of vitamins and minerals, and products containing alkaloids (Journal of Health Science, 56, Supplement (2010)). Side effects related to functional foods have been increasing because under-qualified functional food companies exaggerate functionality for marketing purposes. KFDA has been informing consumers through its web pages to address the above-mentioned issues, but there is still room for improvement in promoting the proper use of functional foods and avoiding drug interactions. Specifically, institutionalizing the collection of information on approved products and their side effects, establishing reevaluation systems, and standardizing preclinical and clinical tests are becoming urgent. In addition, unified database systems that seamlessly aggregate heterogeneous data from different domains, with user interfaces enabling effective one-stop search, are needed to provide this crucial information.

Improvement in facies discrimination using multiple seismic attributes for permeability modelling of the Athabasca Oil Sands, Canada (캐나다 Athabasca 오일샌드의 투수도 모델링을 위한 다양한 탄성파 속성들을 이용한 상 구분 향상)

  • Kashihara, Koji;Tsuji, Takashi
    • Geophysics and Geophysical Exploration
    • /
    • v.13 no.1
    • /
    • pp.80-87
    • /
    • 2010
  • This study was conducted to develop a reservoir modelling workflow that reproduces the heterogeneous distribution of effective permeability, which impacts the performance of SAGD (Steam Assisted Gravity Drainage), the in-situ bitumen recovery technique used in the Athabasca Oil Sands. Lithologic facies distribution is the main cause of heterogeneity in the bitumen reservoirs of the study area. The target formation consists of sand with mudstone facies in a fluvial-to-estuary channel system, where the mudstone interrupts fluid flow and reduces effective permeability. In this study, the lithologic facies is classified into three classes with different effective-permeability characteristics, depending on the shapes of the mudstones. The reservoir modelling workflow consists of two main modules: facies modelling and permeability modelling. The facies modelling identifies, using a stochastic approach, the three lithologic facies that mainly control the effective permeability. The permeability modelling first populates the mudstone volume fraction and then transforms it into effective permeability. A series of flow simulations applied to mini-models of the lithologic facies provides the transformation functions from mudstone volume fraction to effective permeability. Seismic data contribute to the facies modelling by providing the prior probability of facies, which is incorporated into the facies models through geostatistical techniques. In particular, this study employs a probabilistic neural network utilising multiple seismic attributes for facies prediction, which improves the prior probability of facies. The result of using the improved prior probability in facies modelling is compared with the conventional method using a single seismic attribute to demonstrate the improvement in facies discrimination. Using P-wave velocity in combination with density among the multiple seismic attributes is the essence of the improved facies discrimination. This paper also discusses the sand matrix porosity that makes P-wave velocity differ between the facies in the study area, where the sand matrix porosity is uniquely evaluated using log-derived porosity, P-wave velocity, and photographically predicted mudstone volume.
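A minimal sketch (assumed for illustration, not the authors' workflow) of how a seismic-derived prior probability of facies could be combined with a local likelihood via Bayes' rule when classifying a grid cell; the facies names and all numerical values below are hypothetical.

```python
# Hypothetical sketch: combine a seismic-attribute-derived prior probability
# of facies with a likelihood term (e.g., from well data) via Bayes' rule.
# Facies classes and all numbers below are illustrative assumptions.
facies = ["clean_sand", "sand_with_mud_breccia", "mudstone_interbedded"]

def posterior(prior, likelihood):
    """Normalized Bayes update: P(f|d) is proportional to P(d|f) * P(f)."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Prior for one grid cell from a multi-attribute prediction
# (e.g., a probabilistic neural network using P-wave velocity and density).
prior = [0.55, 0.30, 0.15]
# Likelihood of the locally observed data under each facies (assumed values).
likelihood = [0.20, 0.50, 0.30]

post = posterior(prior, likelihood)
for f, p in zip(facies, post):
    print(f"{f}: {p:.2f}")
```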

A Study on Automatic Classification Model of Documents Based on Korean Standard Industrial Classification (한국표준산업분류를 기준으로 한 문서의 자동 분류 모델에 관한 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.221-241
    • /
    • 2018
  • As we enter the knowledge society, the importance of information as a new form of capital is being emphasized. The importance of information classification is also increasing for the efficient management of the digital information being produced exponentially. In this study, we attempted to automatically classify and provide tailored information that can help companies make technology commercialization decisions. We therefore propose a method to classify information based on the Korean Standard Industrial Classification (KSIC), which indicates the business characteristics of enterprises. The classification of information or documents has largely relied on machine learning, but there is not enough training data categorized on the basis of KSIC. This study therefore applied a method of calculating similarity between documents. Specifically, a method and a model for presenting the most appropriate KSIC code are proposed by collecting the explanatory texts of each KSIC code and calculating their similarity to the target document using the vector space model. IPC data were collected and classified by KSIC, and the methodology was verified by comparing the results with the KSIC-IPC concordance table provided by the Korean Intellectual Property Office. The verification showed the highest agreement when the LT method, a variant of the TF-IDF weighting scheme, was applied; the first-ranked KSIC code matched in 53% of cases, and the cumulative match rate within the top five ranks was 76%. This confirms that the technology, industry, and market information that SMEs need can be classified by KSIC more quantitatively and objectively. In addition, the methods and results provided in this study can serve as basic data to support the qualitative judgment of experts when creating concordance tables between heterogeneous classification systems.
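A minimal sketch (not the authors' implementation) of the vector space approach described above: KSIC code descriptions and a target document are embedded with a sublinear (log) term-frequency TF-IDF weighting, roughly in the spirit of the LT variant mentioned, and ranked by cosine similarity. The sample texts, code identifiers, and the `sublinear_tf` choice are illustrative assumptions.

```python
# Hypothetical sketch: rank KSIC code descriptions by cosine similarity
# to a target document using a log-TF (sublinear) TF-IDF weighting.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative stand-ins for KSIC explanatory texts (real texts are in Korean).
ksic_codes = {
    "C26222": "manufacture of printed circuit boards and electronic components",
    "J62010": "computer programming services and software development",
    "M70121": "research and development in biotechnology",
}
target_doc = "development of embedded software for electronic control boards"

codes, texts = zip(*ksic_codes.items())
vectorizer = TfidfVectorizer(sublinear_tf=True)      # log TF with IDF weighting
doc_matrix = vectorizer.fit_transform(list(texts) + [target_doc])

# Similarity of the target document (last row) to every KSIC description.
sims = cosine_similarity(doc_matrix[-1], doc_matrix[:-1]).ravel()
ranking = sorted(zip(codes, sims), key=lambda x: x[1], reverse=True)
for code, score in ranking[:5]:
    print(f"{code}\t{score:.3f}")
```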

DENTAL TREATMENT IN A PATIENT WITH FUKUYAMA TYPE MUSCULAR DYSTROPHY UNDER TOTAL INTRAVENOUS ANESTHESIA USING PROPOFOL (후쿠야마 선천성 근이영양증 환자의 프로포폴을 이용한 전정맥마취 하 치과치료)

  • Jin, Dallae;Shin, Teo-Jeon;Hyun, Hong-Keun;Kim, Young-Jae;Kim, Jung-Wook;Lee, Sang-Hoon;Kim, Chong-Chul;Jang, Ki-Taeg
    • Journal of the Korean Academy of Pediatric Dentistry
    • /
    • v.40 no.1
    • /
    • pp.66-71
    • /
    • 2013
  • Muscular dystrophy refers to a genetically heterogeneous group of disorders characterized by progressive muscle weakness of variable distribution and severity. Fukuyama type congenital muscular dystrophy (FCMD) is an unusual form of muscular dystrophy with autosomal recessive inheritance; it is clinically characterized by an early age of onset, severe central nervous system involvement, facial muscle weakness, and multiple joint contractures. Patients with muscular dystrophy are susceptible to perioperative respiratory, cardiac, and other complications. Because patients with FCMD have upper airway muscle weakness, general anesthesia is preferred over sedation for maintaining the airway when treating these patients. The development of malignant hyperthermia during general anesthesia in patients with muscular dystrophy is a concern, so total intravenous anesthesia should be used instead of inhaled anesthetics. A 3-year-9-month-old, 13 kg girl with Fukuyama type congenital muscular dystrophy was scheduled for dental treatment under general anesthesia. She had multiple carious lesions, and 14 primary teeth needed caries treatment. Prior to general anesthesia, oral premedication with 9 mg midazolam was given. General anesthesia was induced and maintained with target controlled infusion of propofol at $3{\sim}3.5{\mu}g/mL$. The patient with progressive muscular dystrophy was successfully treated under total intravenous anesthesia with a target controlled infusion of propofol. There were no complications related to anesthesia or dental treatment during or after the operation. This case suggests that target controlled infusion of propofol is a safe and appropriate anesthetic technique for dental treatment in FCMD patients.

Numerical and Experimental Study on the Coal Reaction in an Entrained Flow Gasifier (습식분류층 석탄가스화기 수치해석 및 실험적 연구)

  • Kim, Hey-Suk;Choi, Seung-Hee;Hwang, Min-Jung;Song, Woo-Young;Shin, Mi-Soo;Jang, Dong-Soon;Yun, Sang-June;Choi, Young-Chan;Lee, Gae-Goo
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.2
    • /
    • pp.165-174
    • /
    • 2010
  • The numerical modeling of the coal gasification reactions occurring in an entrained flow coal gasifier is presented in this study. The purpose is to develop a reliable evaluation method for coal gasifiers, not only for basic design but also for further optimization of system operation, using a CFD (Computational Fluid Dynamics) method. The coal gasification reaction consists of a series of processes such as water evaporation, coal devolatilization, heterogeneous char reactions, and gas-phase reactions of the coal off-gas, in a two-phase, turbulent, radiatively participating medium. Both numerical and experimental studies are carried out for the 1.0 ton/day entrained flow coal gasifier installed at the Korea Institute of Energy Research (KIER). The comprehensive computer program in this study is built on a commercial CFD program by implementing several subroutines necessary for the gasification process, including an Eddy-Breakup model combined with a harmonic-mean approach for turbulent reaction. In addition, a Lagrangian approach is adopted for particle trajectories, considering the turbulence effects caused by the non-linearity of the drag force. The developed program is successfully evaluated against experimental data such as profiles of temperature and gaseous species concentrations, together with the cold gas efficiency. Further intensive investigation is made of the size distribution of the pulverized coal particles, the slurry concentration, and the design parameters of the gasifier. These parameters are compared and evaluated against one another through the calculated syngas production rate and cold gas efficiency, and appear to directly affect gasification performance. Considering the complexity of entrained-flow coal gasification, even though the results of this parametric study look physically reasonable and consistent, further modeling efforts together with systematic evaluation against experimental data are necessary to develop a reliable CFD-based design tool.
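For orientation, two standard relations connected to the quantities mentioned above, stated here in common textbook form rather than as the formulation actually used in the paper: a harmonic-mean blending of the kinetic rate with the Eddy-Breakup (turbulent mixing) rate, and the cold gas efficiency as the ratio of syngas to coal heating value.

$$\frac{1}{R_{\mathrm{eff}}} \;=\; \frac{1}{R_{\mathrm{kin}}} + \frac{1}{R_{\mathrm{EBU}}}, \qquad R_{\mathrm{EBU}} \;=\; C_{\mathrm{EBU}}\,\bar{\rho}\,\frac{\varepsilon}{k}\,\min\!\left(Y_{\mathrm{F}},\,\frac{Y_{\mathrm{O}}}{s}\right)$$

$$\eta_{\mathrm{cold\,gas}} \;=\; \frac{\dot{m}_{\mathrm{syngas}}\,\mathrm{LHV}_{\mathrm{syngas}}}{\dot{m}_{\mathrm{coal}}\,\mathrm{LHV}_{\mathrm{coal}}}$$

where $R_{\mathrm{kin}}$ is the Arrhenius-type kinetic rate, $k$ and $\varepsilon$ are the turbulence kinetic energy and its dissipation rate, $Y_{\mathrm{F}}$ and $Y_{\mathrm{O}}$ are the fuel and oxidizer mass fractions, $s$ is the stoichiometric ratio, and LHV denotes the lower heating value.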

Continuous Wet Oxidation of TCE over Supported Metal Oxide Catalysts (금속산화물 담지촉매상에서 연속 습식 TCE 분해반응)

  • Kim, Moon Hyeon;Choo, Kwang-Ho
    • Korean Chemical Engineering Research
    • /
    • v.43 no.2
    • /
    • pp.206-214
    • /
    • 2005
  • Heterogeneously-catalyzed oxidation of aqueous-phase trichloroethylene (TCE) over supported metal oxides has been conducted to establish an approach for eliminating ppm levels of organic compounds in water. A continuous flow reactor system was designed to evaluate the predominant reaction parameters determining the catalytic activity of the catalysts, with wet TCE decomposition as a model reaction. The 5 wt.% $CoO_x/TiO_2$ catalyst exhibited a transient period in its activity versus on-stream time behavior, suggesting that the surface structure of the $CoO_x$ might be altered with on-stream hours; nevertheless, it appears to be the most promising catalyst. Not only was the bare support inactive for the wet decomposition reaction at $36^{\circ}C$, but no TCE removal occurred by adsorption on the $TiO_2$ surface either. The catalytic activity was independent of all particle sizes used, indicating no mass transfer limitation by intraparticle diffusion. Very low TCE conversion was observed for $TiO_2$-supported $NiO_x$ and $CrO_x$ catalysts. The wet oxidation performance of supported Cu and Fe catalysts, prepared by incipient wetness and ion exchange techniques, depended primarily on the kind of metal oxide, in addition to the acidic solid support and the preparation route. The 5 wt.% $FeO_x/TiO_2$ catalyst showed no activity in the oxidation reaction at $36^{\circ}C$, while 1.2 wt.% Fe-MFI was active for the wet decomposition, depending on time on-stream. The noticeable difference in activity between the two catalysts suggests that the Fe oxidation states involved in the catalytic redox cycle during the course of the reaction play a significant role in catalyzing the wet decomposition as well as in maintaining the time-on-stream activity. Based on the results for different $CoO_x$ loadings and reaction temperatures for the decomposition reaction at $36^{\circ}C$ with $CoO_x/TiO_2$, the catalyst possessed an optimal $CoO_x$ amount, at which higher reaction temperatures facilitated the catalytic TCE conversion. Small amounts of the active ingredient could be dissolved by acidic leaching, but such a process gave no appreciable loss of activity of the $CoO_x$ catalyst.
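For reference, the TCE conversion underlying the activity comparisons above is the usual steady-flow definition (a standard relation, not restated in the abstract itself):

$$X_{\mathrm{TCE}} \;=\; \frac{C_{\mathrm{TCE,in}} - C_{\mathrm{TCE,out}}}{C_{\mathrm{TCE,in}}} \times 100\%$$

where $C_{\mathrm{TCE,in}}$ and $C_{\mathrm{TCE,out}}$ are the TCE concentrations at the reactor inlet and outlet.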

Dynamic Changes of Urban Spatial Structure in Seoul: Focusing on a Relative Office Price Gradient (오피스 가격경사계수를 이용한 서울시 도시공간구조 변화 분석)

  • Ryu, Kang Min;Song, Ki Wook
    • Land and Housing Review
    • /
    • v.12 no.3
    • /
    • pp.11-26
    • /
    • 2021
  • With the increasing demand for office space, questions have arisen about how the distribution of office rents changes the urban spatial structure of Seoul. The purpose of this paper is to investigate a relative price gradient and to present a time-series model that can quantitatively explain the dynamic changes in the urban spatial structure. The analysis covered office rents above 3,306 m² in Seoul over the past 10 years, from Q1 2010 to Q4 2019, and a modified repeat sales model was employed. The main findings are briefly summarized as follows. First, according to the estimates of the office price gradient in the three major urban centers of Seoul, the gradient in the CBD remained at a certain level with little change, while those in the GBD and the YBD continued to increase. This result reveals that the urban form of Seoul has shifted from monocentric to polycentric. It also shows that the spatial distribution of companies has gradually accelerated toward decentralized concentration, implying that business networks have become significant. Second, in contrast to small and medium-sized office buildings, which showed no change in the gradient, large office buildings saw an increase in the gradient. The relative price gradients of small and medium-sized buildings were inversely proportional among the CBD, the GBD, and the YBD, implying heterogeneous submarkets in terms of office rent movements. Presumably, these differences among submarkets are attributable to investment attraction, industrial competition, and the creditworthiness and preferences of tenants. The findings are consistent with the hierarchical system identified in the Seoul 2030 Plan as well as with the literature on Seoul's urban form. This research shows that the proposed method, based on the modified repeat sales model, is useful for understanding temporal dynamic changes, and the findings can provide implications for urban growth strategies under rapidly changing market conditions.
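As a sketch only (the abstract does not spell out the authors' modification), a basic repeat sales formulation for rents observed on the same office at periods $s$ and $t$ is shown below; the second line, expressing a relative gradient as the gap between a subcenter's and the CBD's estimated log-rent indices, is an illustrative assumption about how such a gradient could be defined, not the paper's definition.

$$\ln R_{i,t} - \ln R_{i,s} \;=\; \sum_{\tau=1}^{T} \beta_{\tau} D_{i,\tau} + \varepsilon_{i}, \qquad D_{i,\tau} = \begin{cases} -1 & \tau = s \\ +1 & \tau = t \\ 0 & \text{otherwise} \end{cases}$$

$$g_{k,\tau} \;=\; \hat{\beta}^{(k)}_{\tau} - \hat{\beta}^{(\mathrm{CBD})}_{\tau}, \qquad k \in \{\mathrm{GBD},\ \mathrm{YBD}\}$$

where $R_{i,t}$ is the rent of office $i$ at period $t$, the $\beta_{\tau}$ trace the log-rent index estimated separately for each submarket, and $g_{k,\tau}$ compares submarket $k$ against the CBD over time.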

The Impact of the Internet Channel Introduction Depending on the Ownership of the Internet Channel (도입주체에 따른 인터넷경로의 도입효과)

  • Yoo, Weon-Sang
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.1
    • /
    • pp.37-46
    • /
    • 2009
  • The Census Bureau of the Department of Commerce announced in May 2008 that U.S. retail e-commerce sales for 2006 reached $107 billion, up from $87 billion in 2005, an increase of 22 percent. From 2001 to 2006, retail e-sales increased at an average annual growth rate of 25.4 percent. The explosive growth of e-commerce has caused profound changes in marketing channel relationships and structures in many industries. Despite the great potential implications for both academicians and practitioners, there still exists a great deal of uncertainty about the impact of the Internet channel introduction on distribution channel management. The purpose of this study is to investigate how the ownership of the new Internet channel affects the existing channel members and consumers. To explore these research questions, this study conducts well-controlled mathematical experiments to isolate the impact of the Internet channel by comparing the channel system before and after the Internet channel entry. The model consists of a monopolist manufacturer selling its product through a channel system that includes one independent physical store before the entry of an Internet store. The addition of the Internet store to this channel system results in a mixed channel comprised of two different types of channels. The new Internet store can be launched by the independent physical store, such as Bestbuy; in this case, the physical retailer coordinates the two types of stores to maximize the joint profits from the two stores. The Internet store can also be introduced by an independent Internet retailer, such as Amazon; in this case, retail-level competition occurs between the two types of stores. Although the manufacturer sells only one product, consumers view each product-outlet pair as a unique offering. Thus, the introduction of the Internet channel provides two product offerings for consumers. The channel structures analyzed in this study are illustrated in Fig. 1. It is assumed that the manufacturer acts as a Stackelberg leader maximizing its own profits with foresight of the independent retailer's optimal responses, as typically assumed in previous analytical channel studies. As a Stackelberg follower, the independent physical retailer or independent Internet retailer maximizes its own profits, conditional on the manufacturer's wholesale price. The price competition between the two independent retailers is assumed to be a Bertrand-Nash game. For simplicity, the marginal cost is set at zero, as typically assumed in this type of study. In order to explore the research questions above, this study develops a game-theoretic model with the following three key characteristics. First, the model explicitly captures the fact that an Internet channel and a physical store exist in two independent dimensions (one in physical space and the other in cyberspace). This enables the model to demonstrate that the effect of adding an Internet store is different from that of adding another physical store. Second, the model reflects the fact that consumers are heterogeneous in their preferences for using a physical store versus an Internet channel. Third, the model captures the vertical strategic interactions between an upstream manufacturer and a downstream retailer, making it possible to analyze the channel structure issues discussed in this paper. Although numerous previous models capture this vertical dimension of marketing channels, none simultaneously incorporates the three characteristics reflected in this model.
The analysis results are summarized in Table 1. When the new Internet channel is introduced by the existing physical retailer, and the retailer coordinates both types of stores to maximize the joint profits from both, retail prices increase due to a combination of the coordination of retail prices and the wider market coverage. The quantity sold does not increase significantly despite the wider market coverage, because the excessively high retail prices partly offset the market coverage effect. Interestingly, the coordinated total retail profits are lower than the combined retail profits of two competing independent retailers. This implies that when a physical retailer opens an Internet channel, the retailer could be better off managing the two channels separately rather than coordinating them, unless it has foresight of the manufacturer's pricing behavior. It is also found that the introduction of an Internet channel affects the power balance of the channel. Retail competition is strong when an independent Internet store joins a channel with an independent physical retailer, which implies that each retailer in this structure has weak channel power. Due to the intense retail competition, the manufacturer uses its channel power to increase the wholesale price and extract more profit from the total channel profit. The retailers, however, cannot increase retail prices accordingly because of the intense retail-level competition, leading to lower channel power. In this case, consumer welfare increases due to the wider market coverage and the lower retail prices caused by retail competition. The model employed for this study is not designed to capture all the characteristics of the Internet channel. The theoretical model can also be applied to any store that is not geographically constrained, such as TV home shopping or catalog sales via mail. The reasons the model in this study is named "Internet" are as follows: first, the most representative example of a store that is not geographically constrained is the Internet; second, catalog sales usually determine their target markets using pre-specified mailing lists, so in this respect the model used here is closer to the Internet than to catalog sales. However, it would be a desirable future research direction to mathematically and theoretically distinguish the core differences among stores that are not geographically constrained. The model is simplified by a set of assumptions to obtain mathematical tractability. First, this study assumes that price is the only strategic tool for competition; in the real world, various marketing variables can be used, so a more realistic model could incorporate variables such as service levels or operating costs. Second, this study assumes a market with one monopoly manufacturer, so the results should be interpreted carefully in light of this limitation; future research could relax it by introducing manufacturer-level competition. Finally, some of the results are drawn from the assumption that the monopoly manufacturer is the Stackelberg leader. Although this is a standard assumption in game-theoretic studies of this kind, we could gain a deeper understanding and generalize our findings beyond this assumption if the model were analyzed under different game rules.
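For intuition, a minimal numerical sketch of the kind of game structure described above (a Stackelberg manufacturer over Bertrand-competing retailers) is given below. The linear, differentiated demand specification and all parameter values are illustrative assumptions, not the paper's model.

```python
# Hypothetical sketch (not the paper's exact model): a monopolist manufacturer
# (Stackelberg leader) sets wholesale price w; two independent retailers
# (a physical store and an Internet store) then compete in a Bertrand-Nash
# game with linear, differentiated demand. All parameter values are assumed.
import sympy as sp

w, p1, p2 = sp.symbols("w p1 p2", nonnegative=True)
a, b, c = 1, 1, sp.Rational(1, 2)        # illustrative demand parameters, c < b

q1 = a - b * p1 + c * p2                 # demand at the physical store
q2 = a - b * p2 + c * p1                 # demand at the Internet store

# Retailers' best responses (marginal cost normalized to zero).
r1 = sp.solve(sp.diff((p1 - w) * q1, p1), p1)[0]
r2 = sp.solve(sp.diff((p2 - w) * q2, p2), p2)[0]
nash = sp.solve([sp.Eq(p1, r1), sp.Eq(p2, r2)], [p1, p2])

# The manufacturer anticipates the retail equilibrium and chooses w.
total_q = (q1 + q2).subs(nash)
w_star = sp.solve(sp.diff(w * total_q, w), w)[0]

prices = {k: sp.simplify(v.subs(w, w_star)) for k, v in nash.items()}
print("wholesale price:", w_star, "| retail prices:", prices)
```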


A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied in various domains. As more information becomes available in the future, effectively and efficiently ranking search results will become more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page has been estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages; the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking methods have been playing an essential role in the World Wide Web (WWW), and many people now recognize their effectiveness and efficiency. On the other hand, since the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links, similar to the Web graph, so the link-structure-based ranking method seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, with only a recursive 'refers to' property corresponding to hyperlinks. The Semantic Web, by contrast, encompasses various kinds of classes and properties, and consequently, ranking methods used in the WWW should be modified to reflect the complexity of its information space. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's algorithm, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristics of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several kinds of Semantic Web systems to validate their technique and presented experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or overall resources should be described in some detail, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon in which pages that are less important but densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that can solve the problems left by the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by previous research, under our approach a user determines the weight of a property by comparing its relative significance to that of the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are meant to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on other limitations posed by the previous research. In addition, we propose two ways to incorporate datatype properties, which have not been employed previously even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of its ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research and which enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
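A minimal sketch (not the authors' class-oriented algorithm, and not Mukherjea and Bamba's either) of the general idea of link-structure ranking over RDF triples with user-assigned property weights: a weighted hub/authority iteration on a toy graph. The triples and weight values are hypothetical.

```python
# Hypothetical sketch: weighted hub/authority iteration over a toy RDF-style
# graph, where each edge carries a property name and a user-assigned weight,
# in the spirit of link-structure ranking adapted to RDF triples.
from collections import defaultdict

# (subject, property, object) triples with illustrative property weights.
triples = [
    ("paperA", "cites", "paperB"),
    ("paperA", "cites", "paperC"),
    ("paperB", "cites", "paperC"),
    ("authorX", "wrote", "paperA"),
    ("authorX", "wrote", "paperB"),
]
property_weight = {"cites": 1.0, "wrote": 0.5}  # assumed user-chosen weights

nodes = {n for s, _, o in triples for n in (s, o)}
hub = {n: 1.0 for n in nodes}
auth = {n: 1.0 for n in nodes}

for _ in range(50):  # fixed-point iteration, plenty for a toy graph
    # Authority: weighted sum of hub scores of subjects pointing at the node.
    new_auth = defaultdict(float)
    for s, p, o in triples:
        new_auth[o] += property_weight[p] * hub[s]
    # Hub: weighted sum of authority scores of objects the node points to.
    new_hub = defaultdict(float)
    for s, p, o in triples:
        new_hub[s] += property_weight[p] * new_auth[o]
    # Normalize to keep scores bounded.
    a_norm = sum(v * v for v in new_auth.values()) ** 0.5 or 1.0
    h_norm = sum(v * v for v in new_hub.values()) ** 0.5 or 1.0
    auth = {n: new_auth[n] / a_norm for n in nodes}
    hub = {n: new_hub[n] / h_norm for n in nodes}

print(sorted(auth.items(), key=lambda kv: kv[1], reverse=True))
```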

Comparative Analysis of Patterns of Care Study of Radiotherapy for Esophageal Cancer among Three Countries: South Korea, Japan and the United States (한국, 미국, 일본의 식도암 방사선 치료에 대한 PCS($1998{\sim}1999$) 결과의 비교 분석)

  • Hur, Won-Joo;Choi, Young-Min;Kim, Jeung-Kee;Lee, Hyung-Sik;Choi, Seok-Reyol;Kim, Il-Han
    • Radiation Oncology Journal
    • /
    • v.26 no.2
    • /
    • pp.83-90
    • /
    • 2008
  • Purpose: For the first time, a nationwide survey of the Patterns of Care Study (PCS) for the various radiotherapy treatments of esophageal cancer was carried out in South Korea. In order to observe the different parameters, as well as to build a solid cooperative system, we compared the Korean results with those observed in the United States (US) and Japan. Materials and Methods: Two hundred forty-six esophageal cancer patients from 21 institutions were enrolled in the South Korean study; the patients received radiation therapy (RT) from 1998 to 1999. For comparison with the United States, a published study by Suntharalingam, which included 414 patients treated by RT at 59 institutions between 1996 and 1999, was chosen. For comparison with the Japanese data, we chose two different studies. The results published by Gomi were selected as the surgery group, in which 220 esophageal cancer patients from 76 facilities were analyzed; these patients underwent surgery and received RT with or without chemotherapy between 1998 and 2001. The non-surgery group originated from a study by Murakami, in which 385 patients were treated by RT alone or RT with chemotherapy, but no surgery, between 1999 and 2001. Results: The median age of enrolled patients was highest in the Japanese non-surgery group (71 years old). The gender ratio was approximately 9:1 (male:female) in both the Korean and Japanese studies, whereas females made up 23.1% of the study population in the US study. Adenocarcinoma outnumbered squamous cell carcinoma in the US study, whereas squamous cell carcinoma was more prevalent in both the Korean and Japanese studies (Korea 96.3%, Japan 98%). An esophagogram, endoscopy, and chest CT scan were the main diagnostic evaluation modalities in all three countries. The US and Japan used abdominal CT scans more frequently than abdominal ultrasonography. Treatment with radiotherapy alone was used least in the US study (9.5%), compared with the Korean (23.2%) and Japanese (39%) studies. The combination of the three modalities (surgery + RT + chemotherapy) was performed least often in Korea (11.8%) compared with the Japanese (49.5%) and US (32.8%) studies. Chemotherapy (89%) and chemotherapy with concurrent chemoradiotherapy (97%) were used most frequently in the US study. Fluorouracil (5-FU) and cisplatin were the preferred drugs in all three countries. The median radiation dose was 50.4 Gy in the US study, compared with 55.8 Gy in the Korean study regardless of whether an operation was performed; in Japan, however, different median doses were delivered to the surgery (48 Gy) and non-surgery (60 Gy) groups. Conclusion: Although some aspects of the evaluation of esophageal cancer and its various treatment modalities were heterogeneous among the three countries surveyed, we found no remarkable differences in RT dose or technique, including the number of portals and beam energies.