
Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.101-124, 2018
  • Recently, most technologies have developed either through the advancement of a single technology or through interaction with other technologies. In particular, many recent technologies exhibit convergence, arising from the interaction of two or more techniques. Efforts to respond to technological change in advance, by forecasting the promising convergence technologies that will emerge in the near future, are also increasing, and accordingly many researchers have attempted various analyses for forecasting promising convergence technologies. Because a convergence technology inherits the characteristics of several technologies, forecasting promising convergence technologies is much more difficult than forecasting general technologies with high growth potential. Nevertheless, some progress has been made in forecasting promising technologies using big data analysis and social network analysis. Data-driven studies of convergence technology, focused on discovering new convergence technologies and analyzing their trends, are actively conducted, and information about new convergence technologies is now provided far more abundantly than in the past. However, existing methods for analyzing convergence technology have several limitations. First, most studies analyze data through predefined technology classifications. Recent technologies tend to be convergent and thus draw on technologies from various fields, so a new convergence technology may not belong to any predefined class; the existing approach therefore fails to reflect the dynamic change of the convergence phenomenon. Second, to forecast promising convergence technologies, most existing methods rely on general-purpose indicators, which do not fully exploit the specific character of convergence. A new convergence technology depends heavily on the existing technologies from which it originates, and depending on how those technologies change, it may grow into an independent field or disappear rapidly. Traditional, general-purpose indicators do not reflect this principle of convergence: that new technologies emerge from two or more mature technologies, and that grown technologies in turn affect the creation of further technologies. Third, previous studies do not provide objective methods for evaluating the accuracy of models that forecast promising convergence technologies. Because of the complexity of the field, forecasting promising convergence technologies has been studied relatively little, and it is difficult to find an established method for evaluating the accuracy of such models. To activate this research field, it is important to establish a method for objectively verifying and evaluating the accuracy of the model proposed by each study.
To overcome these limitations, we propose a new method for analyzing convergence technologies. First, through topic modeling, we derive a new technology classification based on text content, reflecting the dynamic change of the actual technology market rather than a fixed classification standard. We then identify influence relationships between technologies through the topic-correspondence weights of each document and structure them into a network. We also devise a centrality indicator, potential growth centrality (PGC), which forecasts the future growth of a technology from its centrality in the network; it reflects the convergence characteristics of each technology in terms of technology maturity and the interdependence between technologies. In addition, we propose a method for evaluating the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality by period. We conducted experiments on 13,477 patent documents to evaluate the performance and practical applicability of the proposed method. The results confirm that a forecasting model based on the proposed centrality indicator achieves up to about 2.88 times higher accuracy than a model based on currently used network indicators.
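
Since the abstract walks through a concrete pipeline (topic modeling over patent text, an influence network weighted by topic correspondence, and a growth-oriented centrality), a minimal Python sketch of that shape may help. The toy documents are invented, and the final weighted-degree score is only a hypothetical placeholder for PGC, whose actual formula the abstract does not give.

```python
# Sketch: topic network from patent text and a centrality-style growth score.
# Assumes gensim and networkx; the PGC-like weighting below is a hypothetical
# placeholder, since the paper's exact formula is not given in the abstract.
from gensim import corpora, models
import networkx as nx

docs = [["battery", "electrode", "lithium"],
        ["sensor", "wireless", "battery"],
        ["wireless", "network", "protocol"]]  # toy tokenized patents

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

# Treat each topic as a "technology"; connect topics that co-occur in a
# document, weighting edges by the product of their topic weights.
G = nx.Graph()
for bow in corpus:
    topic_weights = lda.get_document_topics(bow, minimum_probability=0.05)
    for i, (t1, w1) in enumerate(topic_weights):
        for t2, w2 in topic_weights[i + 1:]:
            w = G.get_edge_data(t1, t2, {"weight": 0.0})["weight"] + w1 * w2
            G.add_edge(t1, t2, weight=w)

# Placeholder growth score: normalized weighted degree of each topic.
strength = dict(G.degree(weight="weight"))
total = sum(strength.values()) or 1.0
pgc_like = {t: s / total for t, s in strength.items()}
print(pgc_like)
```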

A Study on Public Interest-based Technology Valuation Models in Water Resources Field (수자원 분야 공익형 기술가치평가 시스템에 대한 연구)

  • Ryu, Seung-Mi;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems, v.24 no.3, pp.177-198, 2018
  • As the character of water resources shifts toward that of a public good, it has become necessary to establish and use a framework for measuring the economic value of water-resource technologies and managing their performance. To date, water technologies have been evaluated through feasibility studies or technology assessments based on net present value (NPV) or benefit-to-cost (B/C) analysis, but no valuation model has been systematized for objectively assessing the economic value of technology-based businesses so that research outcomes can be diffused and fed back. K-water (a government-supported public company in Korea) therefore needs to establish a technology valuation framework suited to the technical characteristics of the water-resource fields it manages, and to verify it on an applied case. The valuation approach examined in this study can be used as a tool to measure and manage the value that a public-interest technology contributes to society. By calculating the value a technology contributes to society as a public resource, K-water can use it as base information for publicizing the benefits achieved or the necessity of cost inputs, and can secure legitimacy for large-scale R&D investment given the characteristics of public technology. K-water can also establish a commercialization strategy for business operation and prepare a basis for calculating the performance of invested R&D costs. In this study, K-water developed a web-based valuation model for public-interest water-resource technologies, built on a technology evaluation system suited to the characteristics of the field. In particular, drawing on the evaluation methodology of Japan's National Institute of Advanced Industrial Science and Technology (AIST) to match expense items to expense accounts based on related benefit items, we propose a proprietary K-water model that combines a cost-benefit approach with free cash flow (FCF) analysis; we then built a pipeline into the K-water research performance management system and verified the model on a practical case, a desalination technology. We analyze the design logic and evaluation process embedded in the web-based valuation system, along with the reference information and database (D/B) logic used by each model to calculate public-interest-based and profit-based technology values within the integrated technology management system. We also review the hybrid evaluation module, which quantifies qualitative evaluation indices reflecting the unique characteristics of water resources, and the visualized user interface (UI) of the web-based evaluation, both of which extend existing web-based technology valuation systems in other fields toward calculating business value from financial data. K-water's model distinguishes between public-interest and profit-type water technologies. Evaluation modules in the profit-type valuation model are designed around the profitability of the technology.
For example, K-water's technology inventory holds a number of profit-oriented technologies, such as water treatment membranes. The public-interest valuation model, in contrast, is designed to evaluate public-interest-oriented technologies such as dams, reflecting the characteristics of public benefits and costs. To examine the appropriateness of the cost-benefit-based public utility valuation model (the K-water-specific model) presented in this study, we applied it to a practical case, calculating a benefit-to-cost analysis for a water-resource technology with a 20-year lifetime. In future work we will further verify the K-water public-utility valuation model for each business model, reflecting various business environments.
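
The abstract's cost-benefit and FCF framing can be illustrated with a minimal discounted-cash-flow sketch. The cash-flow numbers, discount rate, and 20-year horizon below are hypothetical inputs chosen only to show the mechanics; the paper's actual benefit and cost accounts are not given in the abstract.

```python
# Minimal sketch of the cost-benefit / free-cash-flow mechanics mentioned
# above. All numbers are hypothetical; the paper's actual accounts and
# discount rate are not given in the abstract.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

years = 20
annual_benefit = 1_200.0   # hypothetical public benefit per year
annual_cost = 800.0        # hypothetical operating cost per year
initial_rnd = 3_000.0      # hypothetical up-front R&D investment
rate = 0.045               # hypothetical social discount rate

fcf = [-initial_rnd] + [annual_benefit - annual_cost] * years
value = npv(fcf, rate)

pv_benefits = npv([0.0] + [annual_benefit] * years, rate)
pv_costs = npv([initial_rnd] + [annual_cost] * years, rate)
print(f"NPV = {value:.1f}, B/C = {pv_benefits / pv_costs:.2f}")
```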

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.24 no.3, pp.1-19, 2018
  • Large amounts of data are now available from which research and business sectors can extract knowledge. These data may take the form of unstructured data such as audio, text, and images, and can be analyzed with deep learning. Deep learning is now widely used for estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning for apparel recognition, apparel search and retrieval, and automatic product recommendation; the core model in these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through the network to the outputs. Its layer structure is well suited to image classification: convolutional layers generate feature maps, pooling layers reduce the dimensionality of the feature maps, and fully connected layers classify the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, showing either the apparel item by itself or a professional model wearing it. Such images may not train the model effectively when the goal is to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. We therefore propose to train the model on a runway apparel image dataset that captures mobility. This exposes the classification model to far more variable data and improves its adaptation to diverse query images. To achieve both convergence and generalization, we apply transfer learning to our training network. Since transfer learning in CNNs consists of pre-training and fine-tuning stages, we divide training into two steps. First, we pre-train the architecture on a large-scale dataset, ImageNet, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved high accuracy with good efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network on our own runway image dataset. Since no publicly available runway dataset existed, we collected 2,426 images from Google Image Search covering 32 major fashion brands: Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and the proposed model achieved 67.2% accuracy on the final test. To the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset; we suggest training the model on images that capture all possible postures, which we denote as mobility, using our own runway apparel image dataset.
Moreover, by applying transfer learning with the checkpoints and parameters provided by TensorFlow-Slim, we reduced the time spent training the classification model to about 6 minutes per experiment. The model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image. Specifically, runway query images can support a mobile application that facilitates brand search during fashion week; street-style query images can be classified during fashion editorial work to label the brand or style; and website query images can be processed by an e-commerce service that provides item information or recommends similar items.
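
The two-stage transfer-learning recipe above is standard enough to sketch. The paper fine-tunes GoogLeNet with TensorFlow-Slim checkpoints; the sketch below substitutes tf.keras with ImageNet-pretrained InceptionV3 as a stand-in, and the dataset directory `runway_images/` is hypothetical.

```python
# Sketch of the two-stage transfer-learning setup described above, using
# tf.keras with ImageNet-pretrained InceptionV3 as a stand-in for the
# paper's GoogLeNet/TF-Slim pipeline. The directory path is hypothetical;
# the 32-class output size follows the abstract.
import tensorflow as tf

NUM_BRANDS = 32  # 32 fashion brands in the runway dataset

# Stage 1: start from ImageNet-pretrained weights (the "pre-training").
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False  # freeze convolutional features initially

# Stage 2: attach and fine-tune a brand classifier on runway images.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # Inception-style scaling
    base,
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_BRANDS, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory of runway images, one subfolder per brand.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "runway_images/", image_size=(299, 299), batch_size=32)
model.fit(train_ds, epochs=5)
```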

Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon;Jung, Sang Hyung;Kim, Jun Ho;Min, Eun Joo;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.97-117, 2020
  • Product evaluation criteria are indicators describing the attributes or values of products, enabling users or manufacturers to measure and understand them. When companies analyze their products or compare them with competitors', appropriate criteria must be selected for objective evaluation. The criteria should reflect the product features consumers considered when they purchased, used, and evaluated the products. However, current evaluation criteria do not reflect how consumer opinion differs from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics and use them as evaluation criteria. These approaches are still limited, however, in that they produce criteria irrelevant to the product because extracted or improper words are not refined. To overcome this limitation, this research proposes an LDA-k-NN model that extracts candidate criteria words from online reviews using LDA and refines them with a k-nearest-neighbor approach. The proposed approach starts with a preparation phase consisting of six steps. First, review data are collected from e-commerce websites. Most e-commerce websites classify their items into high-level, middle-level, and low-level categories; review data for the preparation phase are gathered from each middle-level category and later collapsed to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews using part-of-speech information from a morphological analysis module. After preprocessing, LDA produces topic words for each topic, and only the nouns among them are chosen as candidate criteria words. The words are then tagged according to their plausibility as criteria for each middle-level category. Next, every tagged word is vectorized with a pre-trained word embedding model. Finally, a k-nearest-neighbor, case-based approach is used to classify each word against the tags. After the preparation phase, the criteria extraction phase is conducted on low-level categories. This phase starts by crawling reviews in the corresponding low-level category. The same preprocessing as in the preparation phase is applied using the morphological analysis module and LDA. Candidate criteria words are extracted as the nouns in the data and vectorized with the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the candidate words with the k-nearest-neighbor approach and the reference proportion of each word in the word set. To evaluate the proposed model, an experiment was conducted with reviews from 11st, one of the largest e-commerce companies in Korea. Review data came from the 'Electronics/Digital' section, one of 11st's high-level categories. Three other models were compared with the suggested model: the actual criteria of 11st; a model that extracts nouns with the morphological analysis module and refines them by word frequency; and a model that extracts nouns from LDA topics and refines them by word frequency. The evaluation task was to predict the evaluation criteria of 10 low-level categories with the suggested model and the three baselines. Criteria words extracted from each model were combined into a single word set used in survey questionnaires. In the survey, respondents chose every item they considered an appropriate criterion for each category, and each model scored when a chosen word had been extracted by that model.
The suggested model scored higher than the other models in 8 out of 10 low-level categories. Paired t-tests on the model scores confirmed that the suggested model performed better in 26 of 30 tests, and it was also the best model in terms of accuracy. This research proposes an evaluation-criteria extraction method that combines topic extraction using LDA with refinement by a k-nearest-neighbor approach, overcoming the limits of previous dictionary-based and frequency-based refinement models. The study can contribute to improving review analysis for deriving business insights in the e-commerce market.
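
A compressed sketch of the extract-then-refine idea follows: LDA proposes candidate nouns, then a k-NN classifier over word vectors keeps only the words that sit near known criteria words. The toy corpus, seed labels, and self-trained Word2Vec vectors are illustrative stand-ins for the paper's Korean reviews, morphological analyzer, and pre-trained embedding model.

```python
# Sketch of the LDA-k-NN idea: LDA proposes candidate words, then a k-NN
# classifier over word vectors keeps words near known criteria words. The
# toy corpus, seed labels, and self-trained vectors are stand-ins for the
# paper's Korean reviews and pre-trained embeddings.
from gensim import corpora
from gensim.models import LdaModel, Word2Vec
from sklearn.neighbors import KNeighborsClassifier

reviews = [["battery", "lasts", "long", "screen", "bright"],
           ["screen", "resolution", "sharp", "battery", "weak"],
           ["delivery", "fast", "seller", "kind", "screen", "nice"]]

# 1) LDA over reviews; candidates = topic words (ideally nouns only).
dic = corpora.Dictionary(reviews)
lda = LdaModel([dic.doc2bow(r) for r in reviews], num_topics=2,
               id2word=dic, random_state=0)
candidates = {w for t in range(2) for w, _ in lda.show_topic(t, topn=5)}

# 2) Word vectors (stand-in for the pre-trained embedding model).
w2v = Word2Vec(reviews, vector_size=20, min_count=1, seed=0)

# 3) k-NN refinement with seed words tagged criterion / not-criterion.
seeds = {"battery": 1, "screen": 1, "resolution": 1,
         "fast": 0, "kind": 0, "nice": 0}
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit([w2v.wv[w] for w in seeds], list(seeds.values()))

criteria = [w for w in candidates
            if w in w2v.wv and knn.predict([w2v.wv[w]])[0] == 1]
print(criteria)
```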

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.1-21, 2020
  • With the rapid development of artificial intelligence, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have found practical applications in many industries. In the field of business intelligence, text mining has been employed to discover new market and technology opportunities and to support rational decision making. Market information such as market size, growth rate, and market share is essential for setting a company's business strategy, and there is continuous demand across fields for market information at the level of specific products. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making specific, appropriate information difficult to obtain. We therefore propose a new methodology that estimates the market sizes of product groups at more detailed levels than previously offered. We apply Word2Vec, a neural-network-based semantic word embedding model, to enable automatic, bottom-up market size estimation from individual companies' product information. The overall process is as follows. First, data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products are summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. After parameter optimization, a vector dimension of 300 and a window size of 15 were used in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product-name dataset to cluster product groups more efficiently: product names similar to KSIC index words were extracted based on cosine similarity, and the market size of each extracted product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods based on sampling or multiple assumptions. In addition, the level of market category can be adjusted easily and efficiently, according to the purpose of the information, by changing the cosine-similarity threshold. Furthermore, the approach has high potential for practical application, since it can meet unmet needs for detailed market-size information in the public and private sectors.
Specifically, it can be used in technology evaluation and commercialization support programs run by government institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the model still needs improvement in accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining Word2Vec with another measure such as Jaccard similarity, and the product-group clustering could be replaced with other unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model proposed here.
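
The core loop described above (embed product names, pull in names whose cosine similarity to a seed term clears a threshold, then sum their sales) fits in a few lines of gensim. The toy product names, the seed word, the 0.6 threshold, and the sales figures below are illustrative assumptions; the paper trains on Statistics Korea microdata with vector size 300 and window size 15.

```python
# Sketch of the bottom-up estimation loop: embed product names, group
# names by cosine similarity to a seed term, and sum their sales. The
# toy names, seed word, threshold, and sales figures are illustrative.
from gensim.models import Word2Vec

product_names = [["led", "lamp"], ["led", "bulb"], ["halogen", "lamp"],
                 ["lamp", "stand"], ["led", "panel", "light"]]
sales = {"lamp": 120.0, "bulb": 80.0, "light": 60.0,
         "stand": 20.0, "panel": 40.0}  # hypothetical sales per name token

model = Word2Vec(product_names, vector_size=300, window=15,
                 min_count=1, seed=0)

seed_word = "lamp"   # stand-in for a KSIC index word
threshold = 0.6      # cosine-similarity cutoff (tunable)

group = {seed_word}
for word, sim in model.wv.most_similar(seed_word, topn=10):
    if sim >= threshold and word in sales:
        group.add(word)

market_size = sum(sales[w] for w in group)
print(f"product group: {sorted(group)}, estimated size: {market_size}")
```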

A Study on Developing Customized Bolus using 3D Printers (3D 프린터를 이용한 Customized Bolus 제작에 관한 연구)

  • Jung, Sang Min;Yang, Jin Ho;Lee, Seung Hyun;Kim, Jin Uk;Yeom, Du Seok
    • The Journal of Korean Society for Radiation Therapy, v.27 no.1, pp.61-71, 2015
  • Purpose: 3D printers create three-dimensional objects from digital models. This characteristic makes it feasible to produce a bolus that minimizes the air gap between skin and bolus in radiotherapy. This study compares the air gap and target dose of a commercial 1 cm bolus with those of a customized bolus produced using a 3D printer. Materials and Methods: A RANDO phantom with a protruding tumor was scanned with a CT simulator. The CT DICOM files were converted to STL files compatible with 3D printing. From these, a customized bolus molding box (maintaining a 1 cm thickness) was fabricated on a 3D printer, and paraffin was melted into it to produce the customized bolus. The air gaps under the customized bolus and the commercial 1 cm bolus were measured, and the resulting differences were used to compare $D_{max}$, $D_{min}$, $D_{mean}$, $D_{95\%}$, and $V_{95\%}$ in treatment plans generated with Eclipse. Results: Production of the customized bolus took about 3 days. The total air-gap volume averaged $3.9\,cm^3$ for the customized bolus versus $29.6\,cm^3$ for the commercial 1 cm bolus, so the 3D-printed customized bolus was more effective in minimizing the air gap. For the 6 MV photon plan, the $D_{max}$, $D_{min}$, $D_{mean}$, $D_{95\%}$, and $V_{95\%}$ of the GTV were 102.8%, 88.1%, 99.1%, 95.0%, and 94.4% with the customized bolus, and 101.4%, 92.0%, 98.2%, 95.2%, and 95.7% with the commercial 1 cm bolus. For the proton plan, the corresponding values were 104.1%, 84.0%, 101.2%, 95.1%, and 99.8% with the customized bolus, and 104.8%, 87.9%, 101.5%, 94.9%, and 99.9% with the commercial 1 cm bolus. Thus there was no significant difference in the treatment plans between the customized and the 1 cm bolus, although normal tissue near the GTV received a relatively lower dose. Conclusion: The customized bolus produced by 3D printing was effective in minimizing the air gap, especially for treatment areas with irregular surfaces. In this phantom, however, the air gap under the commercial bolus was not large enough to change the target dose, whereas in the chest wall a dose decrease attributable to the smaller air gap could be confirmed. Customized bolus production took about 3 days and was quite expensive; commercialization of 3D-printed customized boluses will therefore require low-cost printing materials adequate for bolus use.
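
The plan metrics compared above ($D_{max}$, $D_{min}$, $D_{mean}$, $D_{95\%}$, $V_{95\%}$) are simple statistics over the dose distribution in the target volume. A numpy sketch of how they are typically computed from a dose array follows; the random dose grid is a stand-in for an exported Eclipse dose matrix, which the abstract does not provide.

```python
# Sketch of the plan metrics used above, computed from a target-volume
# dose array. The random grid stands in for an exported Eclipse dose
# matrix; doses are in % of the prescription dose.
import numpy as np

rng = np.random.default_rng(0)
gtv_dose = rng.normal(loc=99.0, scale=3.0, size=5000)  # hypothetical GTV voxel doses

d_max = gtv_dose.max()
d_min = gtv_dose.min()
d_mean = gtv_dose.mean()
# D95%: minimum dose received by the hottest 95% of the volume,
# i.e. the 5th percentile of the voxel-dose distribution.
d_95 = np.percentile(gtv_dose, 5)
# V95%: fraction of the volume receiving at least 95% of prescription.
v_95 = 100.0 * np.mean(gtv_dose >= 95.0)

print(f"Dmax={d_max:.1f}% Dmin={d_min:.1f}% Dmean={d_mean:.1f}% "
      f"D95={d_95:.1f}% V95={v_95:.1f}%")
```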

THE RELATIONSHIP BETWEEN PARTICLE INJECTION RATE OBSERVED AT GEOSYNCHRONOUS ORBIT AND DST INDEX DURING GEOMAGNETIC STORMS (자기폭풍 기간 중 정지궤도 공간에서의 입자 유입률과 Dst 지수 사이의 상관관계)

  • 문가희;안병호
    • Journal of Astronomy and Space Sciences, v.20 no.2, pp.109-122, 2003
  • To examine the causal relationship between geomagnetic storms and substorms, we investigate the correlation during magnetic storms between the Dst index and the dispersionless particle injection rate of proton flux observed by geosynchronous satellites, a typical indicator of substorm expansion activity. We use geomagnetic storms that occurred during 1996~2000 and categorize them into three classes by the minimum value of the Dst index ($Dst_{min}$): intense ($-200nT \leq Dst_{min} \leq -100nT$), moderate ($-100nT \leq Dst_{min} \leq -50nT$), and small ($-50nT \leq Dst_{min} \leq -30nT$) storms. We use proton fluxes in the energy range from 50 keV to 670 keV, the major constituents of the ring-current particles, observed by the LANL geosynchronous satellites located in the local-time sector from 18:00 MLT to 04:00 MLT. We also examine the flux ratio ($f_{max}/f_{ave}$) to estimate the particle energy injection rate into the inner magnetosphere, where $f_{ave}$ and $f_{max}$ are the flux levels during quiet times and at onset, respectively. The total energy injection rate into the inner magnetosphere cannot be estimated from particle measurements by one or two satellites; however, it should be at least proportional to the flux ratio and the injection frequency. We therefore propose a quantity, the "total energy injection parameter" (TEIP), defined as the product of the flux ratio and the injection frequency, as an indicator of the energy injected into the inner magnetosphere. To investigate how the substorm contribution to storm development depends on storm phase, we examine the correlations separately for the main and recovery phases. Several interesting tendencies are noted, particularly during the main phase. First, the average particle injection frequency tends to increase with storm size, with a correlation coefficient of 0.83. Second, the flux ratio ($f_{max}/f_{ave}$) tends to be higher during large storms; the correlation coefficient between $Dst_{min}$ and the flux ratio is generally high, for example 0.74 for the 75~113 keV energy channel. Third, there is a high correlation between the TEIP and $Dst_{min}$, with the highest coefficient (0.80) recorded in the 75~113 keV channel, the typical particle energy of the ring-current belt. Fourth, particle injection during the recovery phase tends to make storms longer, particularly in the case of intense storms. These main-phase characteristics indicate that substorm expansion activity is closely associated with the development of magnetic storms.
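
A minimal numeric sketch of the quantities defined above may help: the flux ratio per storm and the TEIP (flux ratio times injection frequency), correlated against $Dst_{min}$ across storms. All event counts, flux levels, and $Dst_{min}$ values below are hypothetical toy inputs used only to show the computation, not data from the paper.

```python
# Sketch of the flux ratio and TEIP defined above, correlated with
# Dst_min across storms. All numbers are hypothetical toy inputs; the
# paper uses LANL 50-670 keV proton fluxes for 1996-2000 storms.
import numpy as np
from scipy.stats import pearsonr

# Per storm: quiet-time flux, onset flux, injections during main phase,
# main-phase duration (hours), and Dst_min (nT).
storms = [
    # f_ave, f_max, n_injections, duration_h, dst_min
    (1.0e4, 8.0e4, 6, 12.0, -180.0),
    (1.2e4, 5.5e4, 4, 10.0, -120.0),
    (0.9e4, 3.1e4, 3,  9.0,  -70.0),
    (1.1e4, 2.0e4, 2,  8.0,  -40.0),
]

flux_ratio = np.array([f_max / f_ave for f_ave, f_max, *_ in storms])
inj_freq = np.array([n / dur for _, _, n, dur, _ in storms])  # injections/hour
dst_min = np.array([s[-1] for s in storms])

teip = flux_ratio * inj_freq  # total energy injection parameter

r_ratio, _ = pearsonr(dst_min, flux_ratio)
r_teip, _ = pearsonr(dst_min, teip)
print(f"r(Dst_min, flux ratio) = {r_ratio:.2f}, r(Dst_min, TEIP) = {r_teip:.2f}")
```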

Fly Ash Application Effects on CH4 and CO2 Emission in an Incubation Experiment with a Paddy Soil (항온 배양 논토양 조건에서 비산재 처리에 따른 CH4와 CO2 방출 특성)

  • Lim, Sang-Sun;Choi, Woo-Jung;Kim, Han-Yong;Jung, Jae-Woon;Yoon, Kwang-Sik
    • Korean Journal of Soil Science and Fertilizer, v.45 no.5, pp.853-860, 2012
  • To estimate the potential of fly ash for reducing $CH_4$ and $CO_2$ emissions from soil, $CH_4$ and $CO_2$ fluxes were measured from a paddy soil mixed with fly ash at different rates (w/w; 0, 5, and 10%), with and without fertilizer N ($(NH_4)_2SO_4$), in a 60-day laboratory incubation under a water regime changing from wetting to drying via a transition period. The mean $CH_4$ flux over the entire incubation ranged from 0.59 to $1.68\;mg\;CH_4\;m^{-2}\;day^{-1}$, with a lower rate in the N-treated soil due to suppression of $CH_4$ production by $SO_4^{2-}$, which acts as an electron acceptor and decreases electron availability for methanogens. Fly ash application reduced the $CH_4$ flux by 37.5% and 33.0% in soils without and with N addition, respectively, probably because the fine-textured fly ash retards $CH_4$ diffusion through soil pores. In addition, fly ash can remove $CO_2$ via carbonation (the formation of carbonate precipitates); since $CO_2$ is a substrate for the $CO_2$-reduction pathway of $CH_4$ generation, this removal is likely another mechanism by which fly ash reduces the $CH_4$ flux. Meanwhile, the mean $CO_2$ flux over the entire incubation was between 0.64 and $0.90\;g\;CO_2\;m^{-2}\;day^{-1}$, and was lower in the N-treated soil than without N. Because N addition would be expected to increase soil respiration, this result is not straightforward to explain; it may be that our experiment did not capture the substantial amount of $CO_2$ produced by heterotrophs activated by N addition before measurements began. Fly ash application also lowered the $CO_2$ flux, by up to 20% in the soil mixed with 10% fly ash, through $CO_2$ removal by carbonation. Overall, fly ash application at 10% decreased the combined global warming potential of the emitted $CH_4$ and $CO_2$ by about 20%. Our results therefore suggest that fly ash application can serve as a soil management practice to reduce greenhouse gas emissions from paddy soils. Further studies under field conditions with rice cultivation are needed to verify these findings.
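
The combined global warming potential quoted above is a CO2-equivalent sum of the two fluxes. A minimal sketch of that arithmetic follows; the 100-year GWP factor of 25 for $CH_4$ (IPCC AR4) is an assumption, as the abstract does not state which factor was used, and the flux values are illustrative picks from the reported ranges.

```python
# Sketch of the CO2-equivalent arithmetic behind the "~20% lower GWP"
# claim. The CH4 GWP factor of 25 (100-year, IPCC AR4) is an assumption;
# flux values are illustrative picks from the reported ranges.
GWP_CH4 = 25.0  # g CO2-eq per g CH4 over 100 years (assumed factor)

def co2_equivalent(ch4_mg_m2_day, co2_g_m2_day):
    """Combined flux in g CO2-eq m^-2 day^-1."""
    return (ch4_mg_m2_day / 1000.0) * GWP_CH4 + co2_g_m2_day

control = co2_equivalent(ch4_mg_m2_day=1.68, co2_g_m2_day=0.90)
fly_ash_10 = co2_equivalent(ch4_mg_m2_day=1.06, co2_g_m2_day=0.72)

reduction = 100.0 * (1.0 - fly_ash_10 / control)
print(f"GWP reduction with 10% fly ash: {reduction:.0f}%")  # about 20%
```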

Investigation of Study Items for the Patterns of Care Study in the Radiotherapy of Laryngeal Cancer: Preliminary Results (후두암의 방사선치료 Patterns of Care Study를 위한 프로그램 항목 개발: 예비 결과)

  • Chung Woong-Ki;Kim Il-Han;Ahn Sung-Ja;Nam Taek-Keun;Oh Yoon-Kyeong;Song Ju-Young;Nah Byung-Sik;Chung Gyung-Ai;Kwon Hyoung-Cheol;Kim Jung-Soo;Kim Soo-Kon;Kang Jeong-Ku
    • Radiation Oncology Journal, v.21 no.4, pp.299-305, 2003
  • Purpose: In order to develop national guidelines for the standardization of radiotherapy, we are planning to establish a web-based, online database system for laryngeal cancer. As a first step, this study was performed to accumulate basic clinical information on laryngeal cancer and to determine the items needed for the database system. Materials and Methods: We analyzed clinical data on patients treated for laryngeal cancer from January 1998 through December 1999 in the southwest area of Korea. Eligibility criteria were as follows: 18 years or older, currently diagnosed with primary epithelial carcinoma of the larynx, and no history of previous treatment for other cancers or other laryngeal diseases. The study items were developed and filled out by radiation oncologists who are members of the Korean Southwest Radiation Oncology Group. SPSS v10.0 software was used for statistical analysis. Results: Data on forty-five patients were collected. Patient age ranged from 28 to 88 years (median, 61). Laryngeal cancer occurred predominantly in males (10:1 sex ratio). Twenty-eight patients (62%) had primary cancers in the glottis and 17 (38%) in the supraglottis. Most were diagnosed pathologically as squamous cell carcinoma (44/45, 98%). Twenty-four of the 28 glottic cancer patients (86%) had AJCC (American Joint Committee on Cancer) stage I/II disease, compared with 50% (8/16) of supraglottic cancer patients (p=0.02). Most patients (89%) presented with hoarseness. Indirect laryngoscopy was done in all patients, and direct laryngoscopy was performed in 43 (98%) patients. Twenty-one of 28 (75%) glottic cancer cases and 6 of 17 (35%) supraglottic cancer cases were treated with radiation alone. Combined surgery and radiation was used in 5 (18%) glottic and 8 (47%) supraglottic patients, and chemotherapy with radiation in 2 (7%) glottic and 3 (18%) supraglottic patients. There was no statistically significant difference in the use of combined-modality treatment between glottic and supraglottic cancers (p=0.20). In all patients, 6 MV X-rays were used with conventional fractionation. The fraction size was 2 Gy in 80% of glottic cancer patients, compared with 1.8 Gy in 59% of supraglottic cancer patients. The mean total doses delivered to the primary lesions were 65.98 Gy and 70.15 Gy in glottic and supraglottic patients treated with radiation alone, respectively. Based on the collected data, 12 modules with 90 items were developed for the patterns-of-care study of laryngeal cancer. Conclusion: The study items for laryngeal cancer were developed. In the near future, a web system will be established based on the items investigated, and a nationwide analysis of laryngeal cancer will then be conducted for the standardization and optimization of radiotherapy.
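
The stage comparison reported above (stage I/II in 24/28 glottic vs. 8/16 supraglottic patients, p=0.02) can be reproduced in a few lines. The abstract used SPSS, so the scipy call below is a stand-in, and which exact test was applied (chi-square vs. Fisher's exact) is an assumption.

```python
# Reproducing the reported stage comparison (24/28 glottic vs. 8/16
# supraglottic patients with stage I/II). The paper used SPSS; scipy is
# a stand-in here, and which exact test was run is an assumption.
from scipy.stats import fisher_exact

#                 stage I/II   stage III/IV
table = [[24, 4],   # glottic
         [8,  8]]   # supraglottic

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```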

The Effect of Common Features on Consumer Preference for a No-Choice Option: The Moderating Role of Regulatory Focus (재몰유선택적정황하공동특성대우고객희호적영향(在没有选择的情况下共同特性对于顾客喜好的影响): 조절초점적조절작용(调节焦点的调节作用))

  • Park, Jong-Chul;Kim, Kyung-Jin
    • Journal of Global Scholars of Marketing Science, v.20 no.1, pp.89-97, 2010
  • This study investigates the effects of common features on the no-choice option from the perspective of regulatory focus theory. The primary interest is in three factors and their interrelationship: common features, the no-choice option, and regulatory focus. Prior studies have compiled a vast body of research in these areas. First, the "common features effect" has been observed by many noted marketing researchers. Tversky (1972) proposed the seminal theory, the EBA (elimination-by-aspects) model: according to this theory, consumers tend to focus only on unique features during comparison processing, dismissing common features as redundant information. More recently, however, provocative findings have challenged the EBA model by asserting that common features do affect consumer judgment. Chernev (1997) first reported that adding common features mitigates the choice gap by increasing the perceived similarity among alternatives. Later, however, Chernev (2001) argued against his earlier perspective, proposing that common features may impose a cognitive load on consumers, who may then prefer heuristic over systematic processing. This raises a question: do common features affect consumer choice, and if so, how? This study tries to answer that question with respect to the no-choice option and regulatory focus. Second, some researchers hold that the no-choice option is itself a best alternative for consumers, who tend to avoid choosing in the context of difficult trade-offs or mental conflict. Hope for the future may also increase choice of the no-choice option, in a context of optimism or the expectation that a more satisfactory alternative will appear later. Other issues reported in this domain are time pressure, consumer confidence, and the number of alternatives (Dhar and Nowlis 1999; Lin and Wu 2005; Zakay and Tsal 1993). This study casts the no-choice option in yet another perspective: the interactive effect of common features and regulatory focus. Third, regulatory focus theory is a very popular theme in recent marketing research. It suggests that consumers pursue one of two opposing focal goals: promotion or prevention. A promotion focus involves hope, inspiration, achievement, and gain, whereas a prevention focus involves duty, responsibility, safety, and loss aversion. Thus, while consumers with a promotion focus tend to take risks for gain, the same does not hold for a prevention focus. Regulatory focus theory predicts consumers' emotions, creativity, attitudes, memory, performance, and judgment, as documented in a vast body of marketing and psychology articles. Exploring consumer choice and common features through regulatory focus is a somewhat novel viewpoint in this area. These reviews motivate our study of the possible interaction between regulatory focus and common features with a no-choice option. Specifically, adding common features rather than omitting them may increase the no-choice ratio for prevention-focused consumers but decrease it for promotion-focused consumers. The reasoning is that when prevention-focused consumers encounter common features, they may perceive higher similarity among the alternatives, and the resulting conflict among similar options would increase the no-choice ratio.
Promotion-focused consumers, in contrast, may perceive common features as a cue for confirmation bias; their confirmatory processing would make their prior preference more robust, and the no-choice ratio may shrink. This logic was verified in two experiments. The first is a $2 \times 2$ between-subjects design (common features present or absent X regulatory focus) using digital cameras as the stimulus, a product very familiar to the young subjects. The regulatory focus variable was median-split on an eleven-item measure. Common features included zoom, weight, memory, and battery; the other two attributes (pixels and price) were unique features. The results supported our hypothesis: adding common features enhanced the no-choice ratio only for prevention-focused consumers, not for those with a promotion focus, confirming the hypothesized interactive effect of regulatory focus and common features. Prior research had suggested that including common features has an effect on consumer choice; this study shows that the effect differs by consumer segment. The second experiment replicated the first with only two changes: a priming manipulation and a different stimulus. For the promotion-focus condition, subjects wrote an essay using words such as profit, inspiration, pleasure, achievement, development, hedonic, change, and pursuit; for prevention, they used words such as persistence, safety, protection, aversion, loss, responsibility, and stability. The stimulus, a room for rent, had common features (sunshine, facilities, ventilation) and unique features (commuting time and building condition), with attribute levels and valences varied to replicate the first experiment. The hypothesis was again supported, with a significant interaction between regulatory focus and common features. These studies thus show the dual effect of common features on choice of the no-choice option: contradictory as it may sound, adding common features may either enhance or mitigate no-choice. Under a prevention focus, adding common features tends to enhance the no-choice ratio by increasing mental conflict; under a promotion focus, it tends to shrink the ratio, perhaps because of confirmation bias. The research has practical and theoretical implications for marketers, who may need to consider common features carefully in display contexts according to consumer segment (i.e., promotion vs. prevention focus). Theoretically, the results identify a meaningful moderator between common features and no-choice, in that the effect on the no-choice option partly depends on regulatory focus; this variable corresponds to both a chronic and a situationally primed perspective in our hypothesis domain. Finally, in light of some shortcomings of the research, such as overlooked attribute importance, the low no-choice ratio, and external validity issues, we hope this work prompts future studies to explore the little-known world of the no-choice option.
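
For readers who want the shape of the analysis, a minimal sketch of testing the focus-by-common-features interaction on a binary no-choice outcome with a logistic model is below. The simulated responses and the use of statsmodels are illustrative assumptions; the abstract does not specify the paper's statistical procedure.

```python
# Sketch of testing the regulatory-focus x common-features interaction
# on a binary no-choice outcome. Data are simulated to mimic the
# hypothesized pattern; the paper's actual procedure is not stated in
# the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
focus = rng.integers(0, 2, n)    # 0 = promotion, 1 = prevention
common = rng.integers(0, 2, n)   # 0 = common features absent, 1 = present

# Hypothesized pattern: common features raise no-choice odds under
# prevention focus and lower them under promotion focus.
logit = -1.0 + 1.2 * focus * common - 0.6 * (1 - focus) * common
no_choice = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

df = pd.DataFrame({"no_choice": no_choice.astype(int),
                   "focus": focus, "common": common})
model = smf.logit("no_choice ~ focus * common", data=df).fit(disp=0)
print(model.summary().tables[1])  # inspect the focus:common interaction term
```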