• Title/Summary/Keyword: Quantify

Optimization and Applicability Verification of Simultaneous Chlorogenic acid and Caffeine Analysis in Health Functional Foods using HPLC-UVD (HPLC-UVD를 이용한 건강기능식품에서 클로로겐산과 카페인 동시분석법 최적화 및 적용성 검증)

  • Hee-Sun Jeong;Se-Yun Lee;Kyu-Heon Kim;Mi-Young Lee;Jung-Ho Choi;Jeong-Sun Ahn;Jae-Myoung Oh;Kwang-Il Kwon;Hye-Young Lee
    • Journal of Food Hygiene and Safety
    • /
    • v.39 no.2
    • /
    • pp.61-71
    • /
    • 2024
  • In this study, we analyzed chlorogenic acid as an indicator component in preparation for the additional listing of green coffee bean extract in the Health Functional Food Code, and optimized a method for the simultaneous analysis of caffeine. We extracted chlorogenic acid and caffeine using 30% methanol, phosphoric acid solution, and acetonitrile containing phosphoric acid, and analyzed them at 330 and 280 nm, respectively, by liquid chromatography. Method validation yielded correlation coefficients (R²) of at least 0.999 within the linear quantitative range. The detection limits for chlorogenic acid and caffeine were 0.5 and 0.2 µg/mL, and the quantification limits were 1.4 and 0.4 µg/mL, respectively. Precision and accuracy were confirmed to be suitable under the AOAC validation guidelines. Finally, we developed a simultaneous chlorogenic acid and caffeine analysis approach, and confirmed that it could quantify both compounds by examining its applicability to each formulation through prototypes and distributed products. In conclusion, the results of this study demonstrate that the standardized analysis can be expected to increase the reliability of quality control for chlorogenic acid-containing health functional foods.

Assessment of Methane Production Rate Based on Factors of Contaminated Sediments (오염퇴적물의 주요 영향인자에 따른 메탄발생 생성률 평가)

  • Dong Hyun Kim;Hyung Jun Park;Young Jun Bang;Seung Oh Lee
    • Journal of Korean Society of Disaster and Security
    • /
    • v.16 no.4
    • /
    • pp.45-59
    • /
    • 2023
  • The global focus on mitigating climate change has traditionally centered on carbon dioxide, but recent attention has shifted towards methane as a crucial factor in climate change adaptation. Natural settings, particularly aquatic environments such as wetlands, reservoirs, and lakes, play a significant role as sources of greenhouse gases. The accumulation of organic contaminants on lake and reservoir beds can lead to the microbial decomposition of sedimentary material, generating greenhouse gases, notably methane, under anaerobic conditions. The escalation of methane emissions in freshwater is attributed to the growing impact of non-point sources, alterations in water bodies for diverse purposes, and the introduction of structures such as river crossings that disrupt natural flow patterns. Furthermore, the effects of climate change, including rising water temperatures and ensuing hydrological and water quality challenges, contribute to an acceleration in methane emissions into the atmosphere. Methane emissions occur through various pathways, with ebullition fluxes, in which methane bubbles form in and are released from bed sediments, recognized as a major mechanism. This study employs Biochemical Methane Potential (BMP) tests to analyze and quantify the factors influencing methane gas emissions. Methane production rates are measured under diverse conditions, including temperature, substrate type (glucose), shear velocity, and sediment properties. Additionally, numerical simulations are conducted to analyze the relationship between fluid shear stress on the sand bed and methane ebullition rates. The findings reveal that biochemical factors significantly influence methane production, whereas shear velocity primarily affects methane ebullition. Sediment properties are identified as influential factors impacting both methane production and ebullition.
Overall, this study establishes empirical relationships between bubble dynamics, the Weber number, and methane emissions, presenting a formula to estimate methane ebullition flux. Future research, incorporating specific conditions such as water depth, effective shear stress beneath the sediment's tensile strength, and organic matter, is expected to contribute to the development of biogeochemical and hydro-environmental impact assessment methods suitable for in-situ applications.
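The abstract ties bubble dynamics to the Weber number but does not reproduce its fitted ebullition formula, so only the standard dimensionless definition (inertia versus surface tension) is sketched here, with illustrative values rather than the study's measurements:

```python
# Generic Weber number: We = rho * u^2 * L / sigma (not the paper's fitted model).
def weber_number(density, velocity, length, surface_tension):
    """Ratio of inertial to surface-tension forces for a bubble or droplet."""
    return density * velocity ** 2 * length / surface_tension

# illustrative case: a 5 mm methane bubble rising at 0.25 m/s through ~20 degC water
we = weber_number(density=998.0, velocity=0.25, length=0.005,
                  surface_tension=0.072)
```

We > 1 indicates that inertia can deform the bubble against surface tension, which is why the number is a natural correlate for ebullition behavior.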

Validation of ECOSTRESS Based Land Surface Temperature and Evapotranspiration (PT-JPL) Data Across Korea (국내에서 ECOSTRESS 지표면 온도 및 증발산(PT-JPL) 자료의 검증)

  • Park, Ki Jin;Kim, Ki Young;Kim, Chan Young;Park, Jong Min
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.44 no.5
    • /
    • pp.637-648
    • /
    • 2024
  • The frequency of extreme weather events such as heavy and extreme rainfall has been increasing due to global climate change. Accordingly, it is essential to quantify hydrometeorological variables for efficient water resource management. Among these variables, Land Surface Temperature (LST) and Evapotranspiration (ET) play key roles in understanding the interaction between the surface and the atmosphere. In Korea, LST and ET are mainly observed through ground-based stations, which are limited in obtaining data from ungauged watersheds and thus hinder estimation of the spatial behavior of LST and ET. Alternatively, remote sensing-based methods have been used to overcome the limitations of ground-based stations. In this study, we evaluated the applicability of the National Aeronautics and Space Administration's (NASA) ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) LST and ET data estimated across Korea (from July 1, 2018 to December 31, 2022). For validation, we utilized NASA's MODerate Resolution Imaging Spectroradiometer (MODIS) data and eddy covariance flux tower observations managed by agencies under the Ministry of Environment of South Korea. Overall, the results indicated that ECOSTRESS-based LSTs showed temporal trends similar (R: 0.47~0.73) to MODIS and ground-based observations. The index of agreement also showed good agreement of ECOSTRESS-based LST with the reference datasets (ranging from 0.82 to 0.91), although it revealed distinct uncertainties depending on the season. The ECOSTRESS-based ET demonstrated the capability to capture the temporal trends observed in MODIS and ground-based ET data, but also exhibited higher Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). This is likely due to the low acquisition rate of the ECOSTRESS data and environmental factors such as the cooling effect of evapotranspiration and overestimation during the morning.
This study suggests conducting additional validation of ECOSTRESS-based LST and ET, particularly in topographical and hydrological aspects. Such validation efforts could enhance the practical application of ECOSTRESS for estimating basin-scale LST and ET in Korea.

Gd(DTPA)²⁻-enhanced and Quantitative MR Imaging in Articular Cartilage (관절연골의 Gd(DTPA)²⁻-조영증강 및 정량적 자기공명영상에 대한 실험적 연구)

  • Eun Choong-Ki;Lee Yeong-Joon;Park Auh-Whan;Park Yeong-Mi;Bae Jae-Ik;Ryu Ji Hwa;Baik Dae-Il;Jung Soo-Jin;Lee Seon-Joo
    • Investigative Magnetic Resonance Imaging
    • /
    • v.8 no.2
    • /
    • pp.100-108
    • /
    • 2004
  • Purpose: Early degeneration of articular cartilage is accompanied by a loss of glycosaminoglycan (GAG) and a consequent change in its integrity. The purpose of this study was to biochemically quantify the loss of GAG, and to evaluate Gd(DTPA)²⁻-enhanced imaging and T1, T2, and rho relaxation maps for detection of early cartilage degeneration. Materials and Methods: A cartilage-bone block measuring 8 mm × 10 mm was acquired from the patella of each of three pigs. Quantitative analysis of cartilage GAG was performed by spectrophotometry using dimethylmethylene blue. Each cartilage block was cultured in one of three media: two culture media (0.2 mg/mL trypsin solution; 1 mM Gd(DTPA)²⁻-mixed trypsin solution) and a control medium (phosphate-buffered saline, PBS). The cartilage blocks were cultured for 5 hours, during which MR images of the blocks were obtained at one-hour intervals (0, 1, 2, 3, 4, and 5 hr). Additional culture was then done for 24 and 48 hours. Both T1-weighted images (TR/TE, 450/22 ms) and a mixed-echo sequence (TR/TE, 760/21-168 ms; 8 echoes) were obtained at all time points using a field of view of 50 mm, slice thickness of 2 mm, and a 256 × 512 matrix. The MRI data were analyzed with pixel-by-pixel comparisons. The cultured cartilage-bone blocks were observed microscopically using hematoxylin & eosin, toluidine blue, alcian blue, and trichrome stains. Results: On quantitative analysis, the GAG concentration in the culture solutions was proportional to the culture duration. The T1 signal of the cartilage-bone block cultured in the Gd(DTPA)²⁻-mixed solution was significantly higher (42% on average, p<0.05) than that of the block cultured in the trypsin solution alone. The T1, T2, and rho relaxation times of the cultured tissue were not significantly correlated with culture duration (p>0.05). However, a focal increase in T1 relaxation time in the superficial and transitional layers of the cartilage was seen in the Gd(DTPA)²⁻-mixed culture. Toluidine blue and alcian blue stains revealed multiple defects in the whole thickness of the cartilage cultured in trypsin media. Conclusion: The quantitative analysis showed gradual loss of GAG proportional to the culture duration. Microimaging of cartilage with Gd(DTPA)²⁻ enhancement and relaxation maps was achieved at a pixel size of 97.9 × 195 µm. Loss of GAG over time was better demonstrated with Gd(DTPA)²⁻-enhanced images than with T1, T2, or rho relaxation maps. Therefore, Gd(DTPA)²⁻-enhanced T1-weighted imaging is superior for detection of early cartilage degeneration.

Econometric Analysis on Factors of Food Demand in the Household : Comparative Study between Korea and Japan (가계 식품수요 요인의 계량분석 - 한국과 일본의 비교 -)

  • Jho, Kwang-Hyun
    • Journal of the Korean Society of Food Culture
    • /
    • v.14 no.4
    • /
    • pp.371-383
    • /
    • 1999
  • This report analyzes food demand in both Korea and Japan by introducing the concept of cohort analysis into the conventional demand model. The research was done to clarify the factors that determine household food demand. The trait of the new model for demand analysis is to consider and quantify the effects on food demand not only of economic factors, such as expenditure and price, but also of non-economic factors, such as the age and birth cohort of the householder. The results of the analysis can be summarized as follows: 1) The comparison of item-wise elasticities of food demand demonstrates that expenditure elasticity is higher in Korea than in Japan and that the expenditure elasticity is -0.1 for cereal and more than 1 for eating-out in both countries. With respect to price elasticity, the absolute values for all items except alcohol and cooked food are higher in Korea than in Japan, and the price elasticities of beverages, dairy products, and fruit are markedly higher in Japan. In this way, both the expenditure and price elasticities of a large number of items are higher in Korea than in Japan, which may be explained by the fact that the level of expenditure is higher in Japan than in Korea. 2) In both Korea and Japan, as the householder grows older, the expenditure on each item increases and the composition of expenditure changes, in ways that may be regarded as due to the age effect. However, there are both similarities and differences in the details of these moves between Korea and Japan. The two countries have this trait in common: younger householder age groups spend more on dairy products and middle age groups spend more on cake than other age groups. In Korea, however, there is a trend for higher age groups to spend more on a large number of items, reflecting the fact that there are more two-generation families in higher age groups. Japan differs from Korea in that expenditure in Japan is more diversified depending upon the age group. For example, in Japan, middle age groups spend more on cake, cereal, high-calorie food like meat, and eating-out, while older age groups spend more on Japanese-style food like fish/shellfish, vegetables/seaweed, and cooked food. 3) The birth cohort effect was also demonstrated. It was introduced under the supposition that the food circumstances under which the householder was born and brought up would determine current expenditure. Thus, the following was made clear: older generations in both countries placed more emphasis upon staple food in their composition of food consumption; the share of livestock products, oils/fats, and externalized food was higher in the food composition of the younger generation; differences in food composition among generations were extremely large in Korea while relatively small in Japan; and Westernization and externalization of the diet increased rapidly with generation changes in Korea, while they increased only gradually in Japan over the same period. 4) The four major factors that impact the long-term change of household food demand are expenditure, price, the age of the householder, and the birth cohort of the householder. Investigations were made as to which factor had the largest impact. As a result, it was found that the price effect was the smallest in both countries, and that the relative importance of the factor-by-factor effects differed between the two countries: in Korea the expenditure effect was greater than the effects of age and birth cohort, while in Japan the effects of non-economic factors such as the age and birth cohort of the householder were greater than those of economic factors such as expenditure.

A STUDY ON THE IONOSPHERE AND THERMOSPHERE INTERACTION BASED ON NCAR-TIEGCM: DEPENDENCE OF THE INTERPLANETARY MAGNETIC FIELD (IMF) ON THE MOMENTUM FORCING IN THE HIGH-LATITUDE LOWER THERMOSPHERE (NCAR-TIEGCM을 이용한 이온권과 열권의 상호작용 연구: 행성간 자기장(IMF)에 따른 고위도 하부 열권의 운동량 강제에 대한 연구)

  • Kwak, Young-Sil;Richmond, Arthur D.;Ahn, Byung-Ho;Won, Young-In
    • Journal of Astronomy and Space Sciences
    • /
    • v.22 no.2
    • /
    • pp.147-174
    • /
    • 2005
  • To understand the physical processes that control high-latitude lower thermospheric dynamics, we quantify the forces that are mainly responsible for maintaining the high-latitude lower thermospheric wind system with the aid of the National Center for Atmospheric Research Thermosphere-Ionosphere Electrodynamics General Circulation Model (NCAR-TIEGCM). Momentum forcing is statistically analyzed in magnetic coordinates, and its behavior with respect to the magnitude and orientation of the interplanetary magnetic field (IMF) is further examined. By subtracting the values with zero IMF from those with non-zero IMF, we obtained the difference winds and forces in the high-latitude lower thermosphere (<180 km). They show a simple structure over the polar cap and auroral regions for positive ($\overline{B}_y$ > 0.8$|\overline{B}_z|$) or negative ($\overline{B}_y$ < -0.8$|\overline{B}_z|$) IMF-$\overline{B}_y$ conditions, with maximum values appearing around -80° magnetic latitude. The difference winds and difference forces for negative and positive $\overline{B}_y$ have opposite signs and similar strengths. For positive ($\overline{B}_z$ > 0.3125$|\overline{B}_y|$) or negative ($\overline{B}_z$ < -0.3125$|\overline{B}_y|$) IMF-$\overline{B}_z$ conditions, the difference winds and difference forces extend to subauroral latitudes. The difference winds and difference forces for negative $\overline{B}_z$ have the opposite sign to those for the positive $\overline{B}_z$ condition, and those for negative $\overline{B}_z$ are stronger than those for positive $\overline{B}_z$, indicating that negative $\overline{B}_z$ has a stronger effect on the winds and momentum forces than does positive $\overline{B}_z$. At higher altitudes (>125 km) the primary forces that determine the variations of the neutral winds are the pressure gradient, Coriolis, and rotational Pedersen ion drag forces; however, at various locations and times significant contributions can be made by the horizontal advection force. On the other hand, at lower altitudes (108-125 km) the pressure gradient, Coriolis, and non-rotational Hall ion drag forces determine the variations of the neutral winds. At the lowest altitudes (<108 km) the flow tends toward geostrophic motion, with a balance between the pressure gradient and Coriolis forces. The northward components of the IMF $\overline{B}_y$-dependent average momentum forces act more significantly on the neutral motion, except for the ion drag. At lower altitudes (108-125 km), for the negative IMF-$\overline{B}_y$ condition, the ion drag force tends to generate a warm clockwise circulation with downward vertical motion associated with adiabatic compression heating in the polar cap region. For the positive IMF-$\overline{B}_y$ condition it tends to generate a cold anticlockwise circulation with upward vertical motion associated with adiabatic expansion cooling in the polar cap region. For negative IMF-$\overline{B}_z$ the ion drag force tends to generate a cold anticlockwise circulation with upward vertical motion in the dawn sector. For positive IMF-$\overline{B}_z$ it tends to generate a warm clockwise circulation with downward vertical motion in the dawn sector.
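The geostrophic balance invoked for the lowest altitudes can be written out explicitly; this is the standard textbook form, not an equation quoted from the paper:

```latex
% Balance of pressure-gradient and Coriolis forces, and the resulting
% geostrophic wind components (f: Coriolis parameter, rho: density, p: pressure)
f\,\hat{z}\times\vec{u}_g = -\frac{1}{\rho}\nabla_h p
\quad\Longrightarrow\quad
u_g = -\frac{1}{\rho f}\,\frac{\partial p}{\partial y},
\qquad
v_g = \frac{1}{\rho f}\,\frac{\partial p}{\partial x}
```

The flow is then parallel to isobars, which is why winds below 108 km organize around the pressure field rather than along its gradient.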

Correlation between High-Resolution CT and Pulmonary Function Tests in Patients with Emphysema (폐기종환자에서 고해상도 CT와 폐기능검사와의 상관관계)

  • Ahn, Joong-Hyun;Park, Jeong-Mee;Ko, Seung-Hyeon;Yoon, Jong-Goo;Kwon, Soon-Seug;Kim, Young-Kyoon;Kim, Kwan-Hyoung;Moon, Hwa-Sik;Park, Sung-Hak;Song, Jeong-Sup
    • Tuberculosis and Respiratory Diseases
    • /
    • v.43 no.3
    • /
    • pp.367-376
    • /
    • 1996
  • Background: The diagnosis of emphysema during life is based on a combination of clinical, functional, and radiographic findings, but this combination is relatively insensitive and nonspecific. The development of rapid, high-resolution third- and fourth-generation CT scanners has enabled us to resolve pulmonary parenchymal abnormalities with great precision. We compared chest HRCT findings with pulmonary function tests and arterial blood gas analysis in pulmonary emphysema patients to test the ability of HRCT to quantify the degree of pulmonary emphysema. Methods: From October 1994 to October 1995, the study group consisted of 20 subjects in whom HRCT of the thorax and pulmonary function studies had been obtained at St. Mary's Hospital. The analysis used scans at preselected anatomic levels and incorporated both lungs. On each HRCT slice the lung parenchyma was assessed for two aspects of emphysema: severity and extent. The five levels were graded and scored separately for the left and right lung, giving a total of 10 lung fields. A combination of severity and extent gave the degree of emphysema. We compared the HRCT quantitation of emphysema, pulmonary function tests, ABGA, CBC, and patient characteristics (age, sex, height, weight, smoking amount, etc.) in the 20 patients. Results: 1) There was a significant inverse correlation between HRCT scores for emphysema and percentage predicted values of DLco (r = -0.68, p < 0.05), DLco/VA (r = -0.49, p < 0.05), FEV1 (r = -0.53, p < 0.05), and FVC (r = -0.47, p < 0.05). 2) There was a significant correlation between the HRCT scores and percentage predicted values of TLC (r = 0.50, p < 0.05) and RV (r = 0.64, p < 0.05). 3) There was a significant inverse correlation between the HRCT scores and PaO2 (r = -0.48, p < 0.05) and a significant correlation with D(A-a)O2 (r = -0.48, p < 0.05), but no significant correlation between the HRCT scores and PaCO2. 4) There was no significant correlation between the HRCT scores and age, sex, height, weight, smoking amount, hemoglobin, hematocrit, or WBC counts. Conclusion: High-resolution CT provides a useful method for early detection and quantification of emphysema during life and correlates significantly with pulmonary function tests and arterial blood gas analysis.

Expression of Decidual Natural Killer (NK) Cells in Women of Recurrent Abortion with Increased Peripheral NK Cells (말초혈액자연살해세포가 증가된 반복유산 환자의 탈락막자연살해세포의 발현)

  • Yeon, Myeong-Jin;Yang, Kwang-Moon;Park, Chan-Woo;Song, In-Ok;Kang, Inn-Soo;Hong, Sung-Ran;Cho, Dong-Hee;Cho, Yong-Kyoon
    • Clinical and Experimental Reproductive Medicine
    • /
    • v.35 no.2
    • /
    • pp.119-129
    • /
    • 2008
  • Objective: The purpose of this study was to quantify the decidual CD56+ and CD16+ NK cell subtype populations and to evaluate the correlation between decidual NK cell expression and peripheral CD56+ NK cell expression in women with a history of recurrent abortion and increased peripheral NK cells. Methods: Twenty-nine women with recurrent abortion and an elevated peripheral CD56+ NK cell percentage who had a chromosomally normal conceptus were included in this study. Thirty-two women with recurrent abortion who had a chromosomally abnormal conceptus were used as controls. The distribution of CD56+ and CD16+ NK cells in decidual tissues, including implantation sites, was examined by immunohistochemical staining, and the degree of staining was interpreted by score and percentage. Results: There was a significant difference in decidual CD56+ NK cell score (43.6±24.5 vs. 23.9±16.3, P=0.001) and CD56+ NK cell percentage (42.1±11.7 vs. 33.9±15.8, P=0.027) between the increased peripheral NK cell group and the control group. However, there was no statistically significant difference in decidual CD16+ NK cell score (18.7±9.5 vs. 13.2±39.4, P=0.108) or CD16+ NK cell percentage (24.7±5.9 vs. 23.4±11.7, P=0.599). There was no significant correlation between decidual NK cell score and peripheral NK cell percentage in the increased peripheral NK cell group (peripheral CD56+ NK cell percentage vs. decidual CD56+ NK cell score, r=-0.016, P=0.932; peripheral CD16+ NK cell percentage vs. decidual CD16+ NK cell score, r=0.008, P=0.968). Conclusion: This study shows that CD56+ decidual NK cells are increased in the decidua of women with a history of recurrent abortion and increased CD56+ peripheral NK cells. There was no significant correlation between the decidual and peripheral NK cell increments in the increased peripheral NK cell group. This study suggests that decidual NK cells may play an important role in the immune mechanism of recurrent abortion.

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e., their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e., by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides a satisfying computational speed and a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e., points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values for any element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each element: the fuzzy-set word dimension is 8 × 5 bits, so the dimension of the memory would have been 128 × 40 bits. Coherently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values at any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
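The memory-size arithmetic above can be reproduced directly: with nfm = 3 non-null memberships per element, 5-bit membership values, and 3-bit function indices, a word is 3 × (5 + 3) = 24 bits, versus 8 × 5 = 40 bits for full vectorial storage of all 8 fuzzy sets.

```python
# Word-length formula from the text: Length = nfm * (dm(m) + dm(fm)).
def word_bits(nfm, value_bits, index_bits):
    """Bits per memory row when only non-null memberships are stored."""
    return nfm * (value_bits + index_bits)

U = 128                            # universe-of-discourse resolution (memory rows)
compact = U * word_bits(3, 5, 3)   # proposed method: 128 * 24 bits
vectorial = U * 8 * 5              # all 8 sets at 5 bits each: 128 * 40 bits
```

The compact scheme saves 40% of the antecedent memory here, and the saving grows as more fuzzy sets are added, since only nfm (not the set count) enters the word length.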

Effects of firm strategies on customer acquisition of Software as a Service (SaaS) providers: A mediating and moderating role of SaaS technology maturity (SaaS 기업의 차별화 및 가격전략이 고객획득성과에 미치는 영향: SaaS 기술성숙도 수준의 매개효과 및 조절효과를 중심으로)

  • Chae, SeongWook;Park, Sungbum
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.151-171
    • /
    • 2014
  • Firms today have sought management effectiveness and efficiency by utilizing information technologies (IT). Numerous firms outsource specific information systems functions to cope with their shortage of information resources or IT experts, or to reduce capital costs. Recently, Software-as-a-Service (SaaS), a new type of information system, has become one of the powerful outsourcing alternatives. SaaS is software deployed as a hosted service and accessed over the internet. It embodies the ideas of on-demand, pay-per-use, utility computing, and is now being applied to support the core competencies of clients in areas ranging from individual productivity to vertical-industry and e-commerce applications. In this study, therefore, we seek to quantify the value that SaaS has for business performance by examining the relationships among firm strategies, SaaS technology maturity, and the business performance of SaaS providers. We begin by drawing on prior literature on SaaS, technology maturity, and firm strategy. SaaS technology maturity is classified into three phases: application service providing (ASP), Web-native application, and Web-service application. Firm strategies are operationalized as the low-cost strategy and the differentiation strategy. Finally, we consider customer acquisition as the business performance measure. The specific objectives of this study are as follows. First, we examine the relationships between customer acquisition performance and both the low-cost strategy and the differentiation strategy of SaaS providers. Second, we investigate the mediating and moderating effects of SaaS technology maturity on those relationships. For this purpose, the study collects data from SaaS providers, and their lines of applications registered in the database of CNK (Commerce net Korea), using a questionnaire administered by a professional research institution. The unit of analysis in this study is the SBU (strategic business unit) within the software provider; a total of 199 SBUs is used for analyzing and testing our hypotheses. With regard to the measurement of firm strategy, we take three measurement items for the differentiation strategy, namely application uniqueness (whether an application aims to differentiate within just one or a small number of target industries), supply channel diversification (whether the SaaS vendor has a diversified supply chain), and the number of specialized experts, and two items for the low-cost strategy, namely subscription fee and initial set-up fee. We employ a hierarchical regression analysis technique for testing the moderation effects of SaaS technology maturity, and follow Baron and Kenny's procedure for determining whether firm strategies affect customer acquisition through technology maturity. The empirical results revealed that, first, when the differentiation strategy is applied to attain business performance such as customer acquisition, the effect of the strategy is moderated by the technology maturity level of the SaaS provider. In other words, securing a higher level of SaaS technology maturity is essential for higher business performance. For instance, given that firms implement application uniqueness or distribution channel diversification as a differentiation strategy, they can acquire more customers when their level of SaaS technology maturity is higher rather than lower. Second, the results indicate that pursuing either the differentiation strategy or the low-cost strategy works effectively for SaaS providers in obtaining customers, which means that continuously differentiating their service from others, or keeping their service fees (subscription fee or initial set-up fee) low, helps their business success in terms of acquiring customers. Lastly, the results show that the level of SaaS technology maturity mediates the relationship between the low-cost strategy and customer acquisition. That is, based on our research design, customers usually perceive the real value of a low subscription fee or initial set-up fee only through the SaaS service provided by the vendor, and this, in turn, affects their decision on whether or not to subscribe.
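The Baron and Kenny mediation procedure referred to above can be summarized in its standard regression form (generic notation, with X a firm strategy, M the SaaS technology maturity, and Y customer acquisition; this is the textbook version, not the study's estimated equations):

```latex
\begin{aligned}
\text{Step 1 (total effect):}\quad & Y = i_1 + cX + \varepsilon_1 \\
\text{Step 2 (X on mediator):}\quad & M = i_2 + aX + \varepsilon_2 \\
\text{Step 3 (joint model):}\quad & Y = i_3 + c'X + bM + \varepsilon_3
\end{aligned}
```

Mediation is supported when a and b are significant and |c'| < |c| (full mediation when c' is near zero); the moderation effect is tested separately in the hierarchical regression by adding an interaction term, Y = i + β₁X + β₂Z + β₃(X × Z) + ε, with Z the maturity level and β₃ the moderation coefficient.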