• Title/Summary/Keyword: action time (작용시간)


Effect of Byproducts Supplementation by Partially Replacing Soybean Meal to a Total Mixed Ration on Rumen Fermentation Characteristics In Vitro (대두박 대체 부산물 위주의 TMR 사료가 반추위 내 미생물의 In Vitro 발효특성에 미치는 영향)

  • Bae, Gui Seck;Kim, Eun Joong;Song, Tae Ho;Song, Tae Hwa;Park, Tae Il;Choi, Nag Jin;Kwon, Chan Ho;Chang, Moon Baek
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.34 no.2
    • /
    • pp.129-140
    • /
    • 2014
  • This study evaluated the effects of replacing soybean meal in a basal total mixed ration (TMR) with fermented soybean curd, Artemisia princeps Pampanini cv. Sajabal, and spent coffee grounds by-products on rumen microbial fermentation in vitro. Soybean meal in the basal TMR diet (control) was replaced by the following nine treatments (three replicates each): the maximum amount of soybean curd (SC); fermented SC (FSC); 3, 5, and 10% FSC + fermented A. princeps Pampanini cv. Sajabal (1:1, DM basis; FSCS); and 3, 5, and 10% FSC + fermented coffee meal (1:1, DM basis; FSCC). FSC, FSCS, and FSCC were fermented using Lactobacillus acidophilus ATCC 496, Lactobacillus fermentum ATCC 1493, Lactobacillus plantarum KCTC 1048, and Lactobacillus casei IFO 3533. The FSC treatment showed a pH of 6 after 8 h of incubation, the lowest value measured (p<0.05), and the FSCS and FSCC treatments were higher than the SC and FSC treatments after 6 h (p<0.05). Gas production was higher with the 3% FSC and FSCC treatments than with the control after 4-10 h. Dry matter digestibility was increased at 0-12 h by the FSC treatment (p<0.05) and was highest after 24 h with the 10% FSCS treatment. The $NH_3-N$ concentration was lowest after 24 h with the FSC treatment (p<0.05). Microbial protein content was higher in the treatments fermented by the Lactobacillus spp. than in the control and SC treatments (p<0.05). The total concentration of volatile fatty acids (VFAs) was increased at 6-12 h by the FSC treatment (p<0.05), while the highest acetate proportion was observed 24 h after the 5% and 10% FSCS treatments. The propionate proportion of the FSC treatment was the highest among the treatments at 0-10 h (p<0.05). The highest acetate-to-propionate ratio was observed after 12 h with the SC treatment and the lowest after 24 h with the 3% FSCS treatment. Methane ($CH_4$) emission was lower with the A. princeps Pampanini cv. Sajabal and spent coffee grounds treatments than with the control, SC, and FSC treatments. These experiments were designed to replace soybean meal in dairy cow TMR with the SC, FSC, FSCS, and FSCC by-products to improve TMR quality. The condensed tannins contained in the FSCS and FSCC treatments, which reduced $CH_4$ emission in vitro, also decreased rumen microbial fermentation during the early incubation period. Therefore, future experiments with a rumen continuous culture system and in vivo tests are required to optimize the percentages of FSC, FSCS, and FSCC in the TMR diet of dairy cows.

Case Analysis of the Promotion Methodologies in the Smart Exhibition Environment (스마트 전시 환경에서 프로모션 적용 사례 및 분석)

  • Moon, Hyun Sil;Kim, Nam Hee;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.171-183
    • /
    • 2012
  • With the development of technologies, the exhibition industry has received much attention from governments and companies as an important form of marketing activity, and exhibitors have likewise come to regard exhibitions as a new marketing channel. However, the growing size of exhibitions, in net square feet and in number of visitors, naturally creates a competitive environment, so exhibitors have planned and implemented many promotion techniques as marketing tools. In particular, a smart exhibition environment, which lets them provide real-time information to visitors, enables various kinds of promotion. Promotions that ignore visitors' varied needs and preferences, however, lose their original purpose and function: indiscriminate promotions feel like spam to visitors and fail to achieve their goals. What is needed is an STP approach that segments visitors on the right evidence (Segmentation), selects the target visitors (Targeting), and provides them with appropriate services (Positioning). Using an STP strategy in a smart exhibition environment requires considering its characteristics. First, an exhibition is defined as a market event of a specific duration, held at intervals; accordingly, exhibitors must plan different events and promotions for each exhibition. When traditional STP strategies are adopted, a system may therefore have to provide services from insufficient information about existing visitors, and its performance must still be guaranteed. Second, for automatic segmentation, cluster analysis, a widely used data mining technique, can be adopted. In a smart exhibition environment, visitor information is acquired in real time, and services using this information must also be provided in real time. However, many clustering algorithms have scalability problems: they hardly work on large databases and require domain knowledge to determine input parameters. A suitable methodology must therefore be selected and fitted so that real-time services can be provided. Finally, the data available in the smart exhibition environment should be exploited: because there are useful data such as booth visit records and event participation records, an STP strategy for smart exhibitions can rest on behavioral as well as demographic segmentation. In this study, we therefore analyze a case of a promotion methodology with which exhibitors can provide differentiated services to segmented visitors in a smart exhibition environment. First, considering the characteristics of the smart exhibition environment, we identify the evidence for segmentation and fit a clustering methodology for providing real-time services. Although there are many studies on classifying visitors, we adopt a segmentation methodology based on visitors' behavioral traits. Through direct observation, Veron and Levasseur classified visitors into four groups, likening their traits to animals (butterfly, fish, grasshopper, and ant). Because the variables of their classification, such as the number of visits and the average time per visit, can be estimated in a smart exhibition environment, their scheme provides a theoretical and practical basis for our system. Next, we construct a pilot system that automatically selects suitable visitors according to the objectives of a promotion and instantly sends them promotion messages; that is, based on our segmentation methodology, the system automatically selects the visitors that fit the characteristics of each promotion. We applied this system in a real exhibition environment and analyzed the resulting data. As we classify visitors into four types by their behavioral patterns in the exhibition, we provide insights for researchers who build smart exhibition environments, along with promotion strategies fitted to each cluster. First, visitors of the ANT type show a high response rate to all promotion messages except experience promotions: they are attracted by tangible benefits in the exhibition area and dislike promotions that require a long time. In contrast, visitors of the GRASSHOPPER type show a high response rate only to experience promotions. Second, visitors of the FISH type favor coupon and content promotions; although they do not examine exhibits in detail, they prefer to obtain further information such as brochures, so exhibitors who want to convey much information in a limited time should pay particular attention to this type. These promotion strategies are expected to give exhibitors insights when they plan and organize their activities, and to improve their performance.
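The four-type behavioral typology above can be illustrated with a small sketch. The paper does not publish its segmentation code, so the following is a hypothetical rule-based classifier over the two Veron-Levasseur axes (share of booths visited and average dwell time per booth); the function name and thresholds are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): a rule-based classifier over the
# two behavioral axes used by Veron and Levasseur -- share of booths visited
# and average dwell time per booth. Thresholds are illustrative assumptions.

def classify_visitor(booths_visited, total_booths, avg_dwell_min,
                     coverage_cut=0.5, dwell_cut=3.0):
    """Map a visitor's behavioral trace to one of the four animal types."""
    coverage = booths_visited / total_booths
    if coverage >= coverage_cut and avg_dwell_min >= dwell_cut:
        return "ant"          # visits most booths and looks closely
    if coverage >= coverage_cut:
        return "butterfly"    # visits most booths with brief stops
    if avg_dwell_min >= dwell_cut:
        return "grasshopper"  # selective, but lingers at chosen booths
    return "fish"             # few stops, brief glances

print(classify_visitor(40, 50, 5.0))  # ant
print(classify_visitor(5, 50, 0.5))   # fish
```

A promotion engine built on such labels could then route, say, experience promotions only to grasshopper-type visitors, in line with the response rates reported above.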

A STUDY ON THE RELATIONS OF VARIOUS PARTS OF THE PALATE FOR PRIMARY AND PERMANENT DENTITION (유치열과 영구치열의 구개 각부의 관계에 관한 연구)

  • Lee, Yong-Hoon;Yang, Yeon-Mi;Lee, Yong-Hee;Kim, Sang-Hoon;Kim, Jae-Gon;Baik, Byeong-Ju
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.31 no.4
    • /
    • pp.569-578
    • /
    • 2004
  • The purpose of this study was to clarify palatal arch length, width, and height in the primary and permanent dentitions. The samples consisted of normal occlusions in the primary dentition (50 males and 50 females) and in the permanent dentition (50 males and 50 females). Their upper plaster casts were digitized by 3-dimensional laser scanning (3D Scanner, DS4060, LDI, U.S.A.), and the cloud data, polygonization, section curves, loft surfaces, and fit and horizontal planes were used to measure palatal arch length, width, and height (Surfacer 10.0, Imageware, U.S.A.). T-tests were applied for the statistical analysis of the data. The results were as follows: 1. In the measured values, those of the males were higher than those of the females, except for primary anterior palatal height. Between males and females there were statistically significant differences in anterior palatal width (p<0.05) and posterior palatal width (p<0.01) in the primary dentition, and in palatal width (p<0.05), anterior palatal length (p<0.01), and middle and posterior palatal length (p<0.05) in the permanent dentition. 2. In the palatal indices, there were statistically significant differences between males and females in the height-length index (p<0.05) and width-length index (p<0.01) in the primary dentition; in the permanent dentition there was also a statistically significant difference between males and females. 3. Among the measured values, posterior palatal width increased most greatly, followed in descending order by posterior palatal height, anterior palatal width, and anterior palatal length, whereas anterior palatal height and posterior palatal length decreased. 4. Among the palatal indices, the height-length index, the width-length index, and the posterior height-width index increased, while the others decreased.


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, growing demand for big data analysis has driven the vigorous development of related technologies and tools, while the development of IT and the increased penetration of smart devices produce large amounts of data. Data analysis technology is consequently becoming popular, attempts to gain insights through data analysis continue to increase, and big data analysis will only grow more important across industries. Big data analysis has generally been performed by a small number of experts and delivered to those who request it. However, rising interest in big data analysis has spurred computer programming education and the development of many analysis programs; the entry barriers to big data analysis are gradually lowering, analysis technology is spreading, and analysis is increasingly expected to be performed by the demanders themselves. Along with this, interest in various kinds of unstructured data, especially text data, is continually increasing. The emergence of new web platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are utilized in many fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters; it is regarded as very useful in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to many documents, and it creates a scalability problem: processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large set of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method allows topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, because documents can be analyzed in each location without first being combined. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of such a methodology must be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that the proposed methodology can provide results similar to topic modeling on the entire collection, and we propose a reasonable method for comparing the results of the two approaches.
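The local-to-global mapping step can be sketched in outline. The paper does not specify its similarity measure, so this hypothetical example assigns each local topic to the most cosine-similar global (RGS) topic, with topics represented as sparse term-weight dictionaries; all topic names and weights here are illustrative, not from the study.

```python
# Hypothetical sketch of mapping local topics to global (RGS) topics by cosine
# similarity over sparse term-weight vectors. Names and weights are made up.
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_local_to_global(local_topics, global_topics):
    """Assign each local topic to its most similar global topic."""
    return {name: max(global_topics, key=lambda g: cosine(vec, global_topics[g]))
            for name, vec in local_topics.items()}

global_topics = {
    "economy": {"market": 0.5, "stock": 0.4, "bank": 0.3},
    "sports":  {"game": 0.6, "team": 0.4, "score": 0.2},
}
local_topics = {
    "L1": {"stock": 0.7, "bank": 0.2},
    "L2": {"team": 0.5, "game": 0.5},
}
print(map_local_to_global(local_topics, global_topics))
# {'L1': 'economy', 'L2': 'sports'}
```

The same mapping, applied in reverse, gives a crude accuracy check: documents whose local topic maps to a different global topic than the one they received in full-collection modeling count as disagreements.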

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solved the data imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The model can thus provide stable default risk assessment to companies whose risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although prediction of corporate default risk using machine learning has recently been studied actively, model bias issues remain because most studies make predictions with a single model. A stable and reliable valuation methodology is required, given that default risk information is used very widely in the market and sensitivity to differences in default risk is high; strict standards are likewise required for the calculation method. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of future changes in market conditions. This study reduced individual model bias by using stacking ensemble techniques that combine various machine learning models, allowing us to capture the complex nonlinear relationships between default risk and corporate information while preserving the short computation time of machine learning-based default risk prediction models. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to generate forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of their forecasts were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed normality, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts in each pair differed significantly; the forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology by which existing credit rating agencies can adopt machine learning-based default risk prediction, given that traditional credit rating models can also serve as sub-models in calculating the final default probability. The stacking ensemble technique proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used to overcome and improve the limitations of existing machine learning-based models and to increase their practical use.
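The forecasting step described above (training data divided into seven pieces, sub-models trained on the remaining pieces to forecast the held-out piece) can be sketched in outline. This is a minimal illustration of general out-of-fold stacking mechanics, not the authors' implementation; the contiguous fold split and the stand-in "model" (a mean predictor) are simplifying assumptions.

```python
# Minimal sketch of the out-of-fold forecasting step in stacking (illustrative,
# not the authors' code). Each sub-model is trained on the other folds and
# predicts the held-out fold; the resulting columns feed the meta-model.

def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds (assumption: no shuffling)."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def oof_predictions(X, y, models, k=7):
    """One out-of-fold forecast column per sub-model."""
    n = len(X)
    oof = [[None] * len(models) for _ in range(n)]
    for hold in kfold_indices(n, k):
        held = set(hold)
        train = [i for i in range(n) if i not in held]
        for m, make_model in enumerate(models):
            fitted = make_model([X[i] for i in train], [y[i] for i in train])
            for i in hold:
                oof[i][m] = fitted(X[i])
    return oof

# Stand-in sub-model: predicts the mean training label regardless of input.
mean_model = lambda X_tr, y_tr: (lambda x, m=sum(y_tr) / len(y_tr): m)
print(oof_predictions([[1], [2], [3], [4]], [0, 1, 0, 1], [mean_model], k=2))
# [[0.5], [0.5], [0.5], [0.5]]
```

A meta-model (in the study, the stacking layer over Random Forest, MLP, and CNN sub-models) would then be trained on these out-of-fold columns against the true labels.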

The Effects of Diltiazem and Pentoxifylline on Apoptosis of Irradiated Rat Salivary Gland (흰쥐 침샘의 방사선조사시 Apoptosis에 대한 Diltiazem과 Pentoxifylline의 효과)

  • Yang, Kwang-Mo;Suh, Hyun-Suk
    • Radiation Oncology Journal
    • /
    • v.16 no.4
    • /
    • pp.367-375
    • /
    • 1998
  • Purpose: Xerostomia is a complication experienced by almost all patients who undergo radiotherapy for cancers of the head and neck, and studies on its prevention are needed. The acute radiation response of the salivary glands has been characterized as interphase death, or apoptosis, and increased intracellular calcium plays an important role in radiation-induced apoptosis; a calcium channel blocker may therefore prevent radiation-induced apoptosis of the salivary glands. This study was designed to evaluate the effectiveness of diltiazem, a calcium channel blocker, and pentoxifylline, an inhibitor of the inflammatory response, against apoptosis as an acute radiation response in rat salivary glands. Materials and Methods: Sprague-Dawley rats weighing about 200-250 g were divided into five study groups: control, radiation alone, diltiazem with radiation, pentoxifylline with radiation, and diltiazem and pentoxifylline with radiation. Diltiazem and pentoxifylline were injected intraperitoneally at 20 mg/kg and 50 mg/kg, 30 and 20 minutes before irradiation, respectively. Irradiation was given with a 4 MV linear accelerator; 1600 cGy was delivered in a single fraction through a single anterior portal encompassing the entire neck. Twenty-four hours after irradiation, the rats were sacrificed, and the parotid and submandibular glands were removed and stained with hematoxylin and eosin. Apoptosis was quantified by microscopic examination of the stained tissue sections at 200X magnification, and the percentage of apoptotic cells was calculated. Results: In the parotid glands, the percentages of apoptosis with radiation alone, diltiazem with radiation, pentoxifylline with radiation, and diltiazem and pentoxifylline with radiation were 1.72% (8.35/486), 0.64% (2.9/453), 0.23% (1.2/516), and 0.28% (1.1/399), respectively; apoptosis was markedly reduced in the drug-treated groups compared with the radiation-alone group (p<0.05). In the serous cells of the submandibular glands, the corresponding percentages were 1.94% (11/567), 0.34% (1.9/554), 0.28% (1.8/637), and 0.22% (1.3/601), and in the mucous cells of the submandibular glands they were 0.92% (5.1/552), 0.41% (2.5/612), 0.29% (1.3/455), and 0.18% (1.0/562). Apoptosis was markedly reduced in the serous glands (p<0.05), but there was no difference among groups in the mucous glands. Conclusion: These results suggest that radiation-induced apoptosis of the serous cells of the salivary glands may be decreased by diltiazem and pentoxifylline administration.


Comparison and Evaluation of the Effectiveness between Respiratory Gating Method Applying The Flow Mode and Additional Gated Method in PET/CT Scanning. (PET/CT 검사에서 Flow mode를 적용한 Respiratory Gating Method 촬영과 추가 Gating 촬영의 비교 및 유용성 평가)

  • Jang, Donghoon;Kim, Kyunghun;Lee, Jinhyung;Cho, Hyunduk;Park, Sohyun;Park, Youngjae;Lee, Inwon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.21 no.1
    • /
    • pp.54-59
    • /
    • 2017
  • Purpose: The present study assessed the effectiveness of the respiratory gating method applied in the flow mode, which differs from the step-and-go method, together with additional localized respiratory-gated imaging. Materials and Methods: Respiratory-gated imaging in the flow mode was performed on twenty patients with lung cancer (10 with stable respiratory signals and 10 with unstable signals) who underwent torso PET/CT scanning with a Biograph mCT Flow PET/CT at Bundang Seoul University Hospital from June 2016 to September 2016. Additional images of the lungs were obtained using the respiratory gating method. The SUVmax, SUVmean, and tumor volume (cm³) of the non-gating, gating, and additional lung gating images were measured with Syngo.via (Siemens, Germany). A paired t-test was performed with GraphPad Prism 6, and changes in the width of the amplitude range were compared between the two types of gating images. Results: Across all patients, the non-gating images yielded SUVmax = 9.43±3.93, SUVmean = 1.77±0.89, and tumor volume = 4.17±2.41; the gating images yielded SUVmax = 10.08±4.07, SUVmean = 1.75±0.81, and tumor volume = 3.56±2.11; and the additional lung gating images yielded SUVmax = 10.86±4.36, SUVmean = 1.77±0.85, and tumor volume = 3.36±1.98. No statistically significant difference in SUVmean was found between the non-gating and gating images or between the gating and lung gating images (P>0.05), whereas significant differences in SUVmax and tumor volume were found between these groups (P<0.05). The width of the amplitude range was smaller for the lung gating images than for the gating images in 12 of the 20 patients (3 with stable signals, 9 with unstable signals). Conclusion: In PET/CT scanning with the respiratory gating method in the flow mode, lesion movements caused by respiration were corrected; therefore, more accurate measurements of SUVmax and tumor volume could be obtained from the gating images than from the non-gating images in this study. In addition, the width of the amplitude range decreased with the stability of respiration to a more significant degree in the additional lung gating images than in the gating images. Gating images thus provide more useful diagnostic information than non-gating images, and for patients with irregular respiratory signals it may be helpful to perform additional localized scanning if time allows.
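The paired comparison reported above (performed in the study with GraphPad Prism) can be reproduced in outline. This hypothetical sketch computes only the paired t statistic and degrees of freedom from matched measurements; the SUV values below are fabricated for illustration and are not taken from the study.

```python
# Sketch of a paired t statistic for matched measurements (e.g., SUVmax on
# non-gating vs. gating images of the same patients). Illustrative only; the
# sample values are made up, not the study's data.
import math

def paired_t(a, b):
    """Paired t statistic and degrees of freedom for matched samples."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance of differences
    t = mean_d / math.sqrt(var_d / n)  # compare |t| against t distribution, df = n-1
    return t, n - 1

non_gating = [9.4, 10.1, 8.7, 11.2]  # fabricated per-patient SUVmax values
gating = [10.1, 10.9, 9.0, 12.0]
t, df = paired_t(non_gating, gating)
print(round(t, 3), df)  # -5.461 3
```

The p-value then comes from the t distribution with the returned degrees of freedom; statistics packages such as Prism report it directly.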


A Case Study on the Effective Liquid Manure Treatment System in Pig Farms (양돈농가의 돈분뇨 액비화 처리 우수사례 실태조사)

  • Kim, Soo-Ryang;Jeon, Sang-Joon;Hong, In-Gi;Kim, Dong-Kyun;Lee, Myung-Gyu
    • Journal of Animal Environmental Science
    • /
    • v.18 no.2
    • /
    • pp.99-110
    • /
    • 2012
  • The purpose of this study was to collect baseline data for establishing a standard administrative process for liquid fertilizer treatment. From this survey, we identified the key points of each step through cases of effective liquid manure treatment systems on pig farms. The process is divided into six steps: 1. piggery slurry management; 2. solid-liquid separation; 3. liquid fertilizer treatment (aeration); 4. liquid fertilizer treatment (microorganisms, recirculation, and internal return); 5. liquid fertilizer treatment (completion); and 6. land application. Going forward, a standardized liquid manure treatment process needs to be developed based on these six steps.

The Patterns of Garlic and Onion Price Cycles in Korea (마늘.양파의 가격동향(價格動向)과 변동(變動)패턴 분석(分析))

  • Choi, Kyu Seob
    • Current Research on Agriculture and Life Sciences
    • /
    • v.4
    • /
    • pp.141-153
    • /
    • 1986
  • This study documents the cyclical fluctuations of garlic and onion prices at the farm gate level in Korea during 1966-1986. The patterns of these cyclical fluctuations were estimated systematically by removing the seasonal fluctuation, irregular movement, and secular trend from the original price series through the moving average method. The cyclical fluctuations of garlic and onion prices repeated six and seven times, respectively, during the period, and the amplitude coefficient of the cyclical fluctuations has accelerated in recent years. The cyclical fluctuation of onion prices was greater than that of garlic prices.
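The moving-average decomposition described above can be sketched generically. The study's exact procedure is not published, so this illustration uses the standard 2x12 centered moving average for monthly price data: dividing the series by its centered MA strips the secular trend and seasonal fluctuation, leaving the cyclical and irregular components. The function names and window length are assumptions.

```python
# Generic moving-average decomposition sketch (the study's exact procedure is
# not given; the 2x12 window is an assumption suitable for monthly data).

def centered_ma(series, window=12):
    """Centered moving average; for an even window, average two adjacent
    window-length means (the standard 2x12 MA for monthly series)."""
    n, half = len(series), window // 2
    out = [None] * n
    for i in range(half, n - half):
        first = sum(series[i - half:i + half]) / window
        second = sum(series[i - half + 1:i + half + 1]) / window
        out[i] = (first + second) / 2
    return out

def cyclical_component(series, window=12):
    """Price relative to its centered MA: trend and seasonality removed,
    leaving cycle and irregular movement (None where the MA is undefined)."""
    ma = centered_ma(series, window)
    return [p / m if m else None for p, m in zip(series, ma)]

flat = [10.0] * 24                     # a trendless, seasonless toy series
print(cyclical_component(flat)[6:18])  # all 1.0: no cycle remains
```

On a real monthly price series, peaks and troughs of this ratio mark the cycle turning points whose recurrence the study counts.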


A study on the Success Factors and Strategy of Information Technology Investment Based on Intelligent Economic Simulation Modeling (지능형 시뮬레이션 모형을 기반으로 한 정보기술 투자 성과 요인 및 전략 도출에 관한 연구)

  • Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.35-55
    • /
    • 2013
  • Information technology is a critical resource for any company hoping to support and realize its strategic goals, contributing to growth and sustainable development. The selection and strategic use of information technology are imperative for enhanced performance in every aspect of company management, which has led a wide range of companies to invest continuously in information technology. Despite the keen interest of researchers, managers, and policy makers in how information technology contributes to organizational performance, there is uncertainty and debate about the results of information technology investment: researchers and managers cannot easily identify the independent factors that affect investment performance, mainly because many factors, ranging from a company's internal components to its strategies and external customers, are interconnected with it. Using an agent-based simulation technique, this research extracts the factors expected to affect information technology investment performance, simplifies the analysis of their relationships with economic modeling, and examines how performance depends on changes in those factors. In terms of economic modeling, I extend the model in which product quality moderates the relationship between information technology investments and economic performance (Thatcher and Pingry, 2004) by considering the cost of information technology investment and the demand created by product quality enhancement. For quality enhancement and its consequences for demand creation, I apply the concepts of information quality and decision-maker quality (Raghunathan, 1999). These concepts imply that investment in information technology improves the quality of information, which in turn improves decision quality and performance, thus enhancing the level of product or service quality. Additionally, I consider the effect of word of mouth among consumers, which creates new demand for a product or service through an information diffusion effect; this demand creation is analyzed with an agent-based simulation model of the kind widely used for network analyses. The results show that investment in information technology enhances the quality of a company's product or service, which indirectly affects the company's economic performance with regard to factors such as consumer surplus, company profit, and company productivity. Specifically, when a company makes its initial investment in information technology, the resulting increase in product or service quality immediately has a positive effect on consumer surplus, while the investment cost has a negative effect on company productivity and profit. As time goes by, the enhanced quality creates new consumer demand through the information diffusion effect, and this new demand positively affects the company's profit and productivity. In terms of investment strategy, the results also reveal that the selection of information technology should be based on analysis of the service and of the network effect among customers, and that information technology implementation should fit the company's business strategy. Specifically, a company seeking short-term performance enhancement needs a one-shot strategy (making a large investment at one time), whereas a company seeking a long-term sustainable profit structure needs a split strategy (making several small investments at different times). The findings make several contributions to the literature. In terms of methodology, the study integrates economic modeling and simulation in order to overcome the limitations of each. It also demonstrates the mediating effect of product quality on the relationship between information technology and company performance. Finally, it analyzes the effects of information technology investment strategies and of information diffusion among consumers on the investment performance of information technology.
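The word-of-mouth demand creation mechanism can be illustrated with a toy agent-based simulation. This is not the author's model; it is a minimal sketch under simple assumptions (a fixed random contact network and a constant persuasion probability, both hypothetical parameters), showing how adoption spreads period by period through the information diffusion effect.

```python
# Toy agent-based word-of-mouth diffusion (illustrative assumptions, not the
# paper's model): each period, every adopter persuades each non-adopting
# neighbor with probability p_adopt on a fixed random contact network.
import random

def simulate_wom(n_agents=100, n_neighbors=4, p_adopt=0.3,
                 initial=5, periods=10, seed=42):
    """Return the number of adopters at the end of each period."""
    rng = random.Random(seed)
    neighbors = {i: rng.sample([j for j in range(n_agents) if j != i],
                               n_neighbors)
                 for i in range(n_agents)}
    adopted = set(rng.sample(range(n_agents), initial))  # seed adopters
    history = [len(adopted)]
    for _ in range(periods):
        new = {j for i in adopted for j in neighbors[i]
               if j not in adopted and rng.random() < p_adopt}
        adopted |= new
        history.append(len(adopted))
    return history

print(simulate_wom())  # adopter counts per period, nondecreasing
```

Varying when and how much "quality" (here proxied by p_adopt) is raised lets such a model contrast the one-shot and split investment strategies discussed above.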