• Title/Summary/Keyword: formula


The Evaluation of Predose Counts in the GFR Test Using $^{99m}Tc$-DTPA ($^{99m}Tc$-DTPA를 이용한 사구체 여과율 측정에서 주사 전선량계수치의 평가)

  • Yeon, Joon-Ho;Lee, Hyuk;Chi, Yong-Ki;Kim, Soo-Yung;Lee, Kyoo-Bok;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.94-100
    • /
    • 2010
  • Purpose: Kidney function can be evaluated simply with a glomerular filtration rate (GFR) test using $^{99m}Tc$-DTPA. The test is influenced by several parameters, such as the net syringe count, kidney depth, corrected kidney counts, acquisition time, and the characteristics of the gamma camera. In this study we evaluated the predose count according to matrix size in the GFR test using $^{99m}Tc$-DTPA. Materials and Methods: A GE Infinia gamma camera with an LEGP collimator was used, with three matrix sizes ($64{\times}64$, $128{\times}128$, $256{\times}256$) and a zoom factor of 1.0. The radioactivity was increased from 222 (6), 296 (8), 370 (10), and 444 (12) up to 518 MBq (14 mCi), and images were acquired for each matrix size at a distance of 30 cm from the detector. The measured values were then substituted into the GFR formula. Results: For the $64{\times}64$, $128{\times}128$ and $256{\times}256$ matrices, the count rates were 26.8, 34.5, 41.5, 49.1 and 55.3 kcps; 25.3, 33.4, 41.0, 48.4 and 54.3 kcps; and 25.5, 33.7, 40.8, 48.1 and 54.7 kcps, respectively. Total counts over 5 seconds were 134, 172, 208, 245 and 276 kcounts ($64{\times}64$), 127, 172, 205, 242 and 271 kcounts ($128{\times}128$), and 137, 168, 204, 240 and 273 kcounts ($256{\times}256$); total counts over 60 seconds were 1,503, 1,866, 2,093, 2,280 and 2,321 kcounts; 1,511, 1,994, 2,453, 2,890 and 3,244 kcounts; and 1,524, 2,011, 2,439, 2,869 and 3,268 kcounts, respectively. The percentage difference ranged from 0 to 30.02% for the $64{\times}64$ matrix, whereas the maximum values were only 0.60% and 0.69% for $128{\times}128$ and $256{\times}256$, respectively. For the 60-second acquisition, the percentage difference in GFR with the $64{\times}64$ matrix was 6.77% at 222 MBq (6 mCi) and 42.89% at 518 MBq (14 mCi), compared with 0.60% and 0.63% for $128{\times}128$ and $256{\times}256$. Conclusion: There was no large difference in the percentage difference of total counts or in the GFR values acquired with the $128{\times}128$ and $256{\times}256$ matrices. With the $64{\times}64$ matrix, however, an overflow phenomenon appeared once the total count exceeded 1,500 kcounts, varying with the predose radioactivity and the acquisition time. Therefore, the matrix size and the net syringe count must be optimized with the total predose count in mind to obtain accurate GFR results.
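
A rough sketch of the quantities discussed in this abstract is given below: a percentage-difference check between counts acquired at different matrix sizes, and how an undercounted predose (syringe) measurement propagates into a Gates-type GFR estimate. The Gates coefficients and the Tc-99m attenuation factor used here are the commonly cited values, not necessarily the exact formula used in this paper, and all count and depth values are illustrative placeholders.

```python
import math

def percent_difference(a, b):
    """Percentage difference of two count values, relative to their mean."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

def gates_gfr(kidney_net_counts, kidney_depths_cm, net_syringe_counts,
              mu_per_cm=0.153):
    """Gates-style GFR (mL/min) from depth-corrected kidney uptake fraction."""
    corrected = sum(c * math.exp(mu_per_cm * d)
                    for c, d in zip(kidney_net_counts, kidney_depths_cm))
    uptake_pct = 100.0 * corrected / net_syringe_counts
    return 9.8127 * uptake_pct - 6.82825

# 60 s totals at 518 MBq from the abstract: the 64x64 acquisition overflows
# (2,321 kcounts) while 128x128 does not (3,244 kcounts).
print(percent_difference(2321, 3244))
# The same (placeholder) kidney counts give different GFRs depending on whether
# the predose syringe count was measured with or without overflow.
print(gates_gfr([60_000, 55_000], [6.0, 6.5], 3_244_000))
print(gates_gfr([60_000, 55_000], [6.0, 6.5], 2_321_000))
```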

  • PDF

The Effects of the Computer Aided Innovation Capabilities on the R&D Capabilities: Focusing on the SMEs of Korea (Computer Aided Innovation 역량이 연구개발역량에 미치는 효과: 국내 중소기업을 대상으로)

  • Shim, Jae Eok;Byeon, Moo Jang;Moon, Hyo Gon;Oh, Jay In
    • Asia pacific journal of information systems
    • /
    • v.23 no.3
    • /
    • pp.25-53
    • /
    • 2013
  • This study empirically analyzes the effect of Computer Aided Innovation (CAI) on R&D capabilities. A survey was distributed by e-mail and Google Docs to the CTOs of 235 SMEs; 142 surveys were returned (response rate 60.4%), and 119 of these (83.8%) were valid samples used for statistical analysis after excluding non-responses, insincere responses, estimated values, etc. In terms of sample characteristics, companies with sales of less than 50 billion KRW account for 76.5% of the surveyed companies, and companies with fewer than 300 employees account for 83.2%. By business type, partner companies that do business with big companies (hereafter 'partners with big companies') account for 68.1%, while SMEs running their own independent business ('independent SMEs') account for 31.9%. The IT systems held were compared across the two business types, partners with big companies versus independent SMEs: ERP 18.5% versus 34.5%, QMS 11.8% versus 9.2%, PLM (Product Life-cycle Management) 6.7% versus 2.5%, and 3D CAD 47.1% versus 21%. IT system holding and application by independent SMEs appeared very weak compared with partner companies of big companies. The independent variables of this study are the CAI capability factors, IT infra and IT utilization. The dependent variables are the R&D capability factors: organization capability, process capability, HR capability, technology-accumulating capability, and internal/external collaboration capability. The highest average value among the measured variables was 4.24, for the second organization-capability item; the lowest was 3.01, for the IT-infra item asking whether users can easily access and use data and information from other areas when required during new product development, which seems to reflect the generally poor IT infrastructure of SMEs. To check the validity of the measures, factor analysis was performed: seven factors with eigenvalues greater than 1.0 were extracted from the dependent and independent variables, together explaining 71.167% of the total variance. The reliability of each item was then checked with Cronbach's alpha; all factors showed coefficients of at least 0.611 and were judged reliable. Next, correlation analysis between the variables was performed. The R&D capability factors arranged as dependent variables (organization capability, process capability, HR capability, technology-accumulating capability, and internal/external collaboration capability) showed significant correlations, at the 99% confidence level, with both independent variables, IT infra and IT utilization. In addition, the correlation coefficient between each pair of factors is less than 0.8, which supports the validity of the measures; the pair with the highest coefficient, 0.628, was IT utilization and technology-accumulating capability. A regression model was then used, under the hypothesis of a linear relation between the independent and dependent variables, to identify the impact of CAI capability on R&D capabilities.
The explanatory power of IT infra, one of the CAI capability factors, for the dependent variables organization capability, process capability, human resources capability, technology-accumulating capability, and collaboration capability is 10.3%, 7%, 11.9%, 30.9%, and 10.5%, respectively. IT utilization likewise shows generally low explanatory power, with 12.4%, 5.9%, 11.1%, 38.9%, and 13.4% for organization capability, process capability, human resources capability, technology-accumulating capability, and collaboration capability, respectively. Both independent-variable factors, however, show relatively very high explanatory power for technology-accumulating capability. The regression equations composed of the independent and dependent variables are all significant (p<0.005), so the fit of the regression model appears good. When the hypotheses relating the dependent and independent variables are tested, all ten hypothesized relationships are significant in the regression coefficients (p<0.01). As a result of the linear regression analysis between the two independent variables identified in the influence-factor analysis and R&D capability, IT infra and IT utilization, the CAI capability factors, have positive relationships with the dependent R&D capability factors: organization capability, process capability, human resources capability, technology-accumulating capability, and internal/external collaboration capability. They were thus identified as significant factors affecting R&D capability. However, when the moderating variables are considered, a large gap is found compared with the full sample. First, for partner companies with big companies, IT infra as a CAI capability has positive relationships with organization capability, process capability, human resources capability, and technology-accumulating capability among the R&D capabilities, but the relationship with collaboration capability is insignificant. IT utilization, the other CAI capability factor, has positive relationships with organization capability, process capability, human resources capability, and internal/external collaboration capability, just as in the full sample. Next, when independent SMEs are analyzed as a moderating variable, the results differ sharply from those of the full sample or of the partner companies: for IT infra all factors except technology-accumulating capability are rejected, and for IT utilization all factors except technology-accumulating capability and collaboration capability are rejected. Summing up these moderating-variable results, this study draws the following conclusions. First, for big companies or partner companies working with big companies, IT infra and IT utilization have a positive effect on improving R&D capabilities, presumably because most big companies require their partners to build IT infrastructure above a certain level and encourage innovation through IT utilization. Second, across all companies, IT infra and IT utilization as CAI capabilities have a positive effect at least on technology-accumulating capability among the R&D capability factors. The explanatory power for most factors is low, at around 10%, but for technology-accumulating capability it is rather high, around 25.6% to 38.4%, indicating that CAI capability contributes strongly to technology-accumulating capability.
Companies should not regard IT infra and IT utilization merely as product-development tools for the R&D department. Rather, they should use them as a management-innovation strategy tool that drives company-wide management innovation centered on new product development, beyond simply improving technology-accumulating capability within R&D. This suggests that CAI can be a way to improve technology-accumulating capability in R&D and the dynamic capability needed to secure sustainable competitive advantage.
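
A minimal sketch of the reliability check mentioned in this abstract, Cronbach's alpha for a block of Likert-scale survey items, is shown below. The construct name, the number of items, and the simulated responses are placeholders rather than the study's data; the study only reports that all factors reached alphas of at least 0.611.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(119, 1))               # 119 valid responses, one construct
it_infra_items = latent + rng.normal(scale=0.8, size=(119, 4))  # 4 correlated items
print(round(cronbach_alpha(it_infra_items), 3))  # values near 0.6-0.7+ read as acceptable
```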

Research and Development Trends on Omega-3 Fatty Acid Fortified Foodstuffs (오메가 3계 지방산 강화 식품류의 연구개발 동향)

  • 이희애;유익종;이복희
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.26 no.1
    • /
    • pp.161-174
    • /
    • 1997
  • Omega-3 fatty acids have been a major research interest in medical and nutritional science since epidemiologic data on Greenland Eskimos, reported by several researchers, clearly showed fewer per-capita deaths from heart disease and a lower incidence of adult diseases. Linolenic acid (LNA) is an essential fatty acid for humans, as is linoleic acid (LA), because vertebrates lack an enzyme required to incorporate a double bond beyond carbon 9 in the chain. In addition, the ratio of omega-6 to omega-3 fatty acids appears to be important for alleviating heart disease, since LA and LNA compete for the metabolic pathways of eicosanoid synthesis. High consumption of omega-3 fatty acids in seafood may control heart disease by reducing blood cholesterol, triglyceride, VLDL and LDL, by increasing HDL, and by inhibiting plaque development through the formation of antiaggregatory substances such as PGI$_2$, PGI$_3$ and TXA$_3$ metabolized from LNA. Omega-3 fatty acids also play an important role in neuronal development and visual function, which in turn influence learning behavior. Current dietary sources of omega-3 fatty acids are limited mostly to seafood, leafy vegetables, marine oils and some seed oils, and the most appropriate way to provide omega-3 fatty acids is as part of the normal dietary regimen. Because of these beneficial effects, efforts are now being made to increase the intake of omega-3 fatty acids through food-processing technology. Two approaches can be applied: one is to add purified and concentrated omega-3 fatty acids to foods, and the other is to produce foods naturally high in omega-3 fatty acids by raising animals on specially formulated feed designed for the transfer of omega-3 fatty acids. Omega-3-fortified foodstuffs recently manufactured and marketed include pork, milk, cheese, eggs, infant formula and ham. Many of them are already distributed in the domestic food market, but the problem is that nutritional information on the amount of omega-3 fatty acids is not presented on the label, which may cause consumer distrust of those products and result in lower sales volumes. It would be much wiser to consume natural products high in omega-3 fatty acids to promote health with respect to many types of adult disease, rather than processed foods fortified with omega-3 fatty acids.

  • PDF

Clinical Features and the Natural History of Dietary Protein Induced Proctocolitis: a Study on the Elimination of Offending Foods from the Maternal Diet (식품 단백질 유발성 직결장염의 임상 소견과 식이 조절에 관한 연구)

  • Choi, Seon Yun;Park, Moon Ho;Choi, Won Joung;Kang, Una;Oh, Hoon Kyu;Kam, Sin;Hwang, Jin-Bok
    • Pediatric Gastroenterology, Hepatology & Nutrition
    • /
    • v.8 no.1
    • /
    • pp.21-30
    • /
    • 2005
  • Purpose: The aim of this study was to identify the clinical features and natural history of dietary protein induced proctocolitis (DPIPC), to detect the causative foods of DPIPC, and to evaluate the effect of eliminating those foods on the course of the disease. Methods: Data from 30 consecutive patients with DPIPC who were followed for over 6 months between March 2003 and July 2004 were reviewed. The diagnostic criterion for DPIPC was an increased number of eosinophils in the lamina propria (${\geq}60$ per 10 high-power fields). In breast-feeding mothers, 5 highly allergenic food groups were eliminated from the maternal diet for 7 days: dairy products, eggs, nuts and soybean, fish and shellfish, and wheat and buckwheat. We observed the disappearance or reappearance of hematochezia after elimination of, or challenge with, the offending foods. Results: Before diagnosis, infants were breast-fed (93.3%) or formula-fed (6.7%). Mean age at symptom onset was $11.5{\pm}5.1$ (5~24) weeks, and mean age at diagnosis was $17.8{\pm}9.5$ (8~56) weeks. The interval from symptom onset to diagnosis was $6.3{\pm}6.7$ (0~36) weeks. The mean peripheral blood eosinophil count was $478{\pm}320$ (40~1,790)/$mm^3$, and eosinophilia (> $250/mm^3$) was observed in 90.0% of patients. No patient had an increased serum IgE level. Of the 15 patients who underwent sigmoidoscopy, nodular hyperplasia with erosion was observed in 93.3%. Of the 27 patients whose mothers followed the diet eliminating the 5 food groups, hematochezia disappeared in 74.1%. The offending foods were identified as dairy products (37.5%), wheat and buckwheat (27.5%), fish and shellfish (20.0%), nuts and soybean (7.5%) and eggs (7.5%). An unrestricted maternal diet without clinical symptoms in the patient was achieved at $29.4{\pm}8.7$ (9~44) weeks of the patient's age, and an unrestricted infant diet without blood in the stools was achieved at $37.5{\pm}9.7$ (12~56) weeks of age. Conclusion: DPIPC commonly occurs in exclusively breast-fed babies. Eliminating the above 5 highly allergenic food groups from the maternal diet for 7 days enables detection of the offending foods. DPIPC is a transient disorder, and 96.0% of patients can tolerate the offending foods at 12 months of age.

  • PDF

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet they are harder to forecast because of the industry's distinctive capital structure and debt-to-equity ratio. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half of a project. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, can place a greater burden on the banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for many years in various ways, but these models are intended for companies in general and may not be appropriate for forecasting bankruptcies of construction companies, which typically carry disproportionately large liquidity risks. The construction industry is capital-intensive and operates on long timelines with large-scale investment projects, so payback periods are comparatively longer than in other industries; with this unique capital structure, criteria used to judge the financial risk of companies in general are difficult to apply to construction firms. The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the results into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while companies in the "safe" category have a low likelihood of bankruptcy; for companies in the "moderate" category the risk is difficult to forecast. Many of the construction-firm cases in this study fell into the "moderate" category, which made it difficult to forecast their risk. With the development of machine learning, recent studies of corporate bankruptcy forecasting have applied this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed from a company's financial information and then judged as belonging to either the bankruptcy-risk group or the safe group.
The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost) and the Support Vector Machine (SVM), and there are also many hybrid studies combining these models. Existing studies using the traditional Z-score technique or machine-learning-based bankruptcy prediction focus on companies in non-specific industries, so industry-specific characteristics are not considered. In this paper, we confirm that AdaBoost is the most appropriate forecasting model for construction companies when company size is taken into account. We classified construction companies into three groups - large, medium, and small - based on the company's capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
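
The sketch below contrasts the two approaches named in this abstract: the Altman Z-score (using the original 1968 coefficients for publicly traded manufacturers) and an AdaBoost classifier trained on financial ratios. The feature set and synthetic data are illustrative only; the paper's actual ratio set and its capital-based grouping into large, medium and small firms are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Original Altman (1968) Z-score; roughly 'dangerous' below 1.81, 'safe' above 2.99."""
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))                     # placeholder financial ratios
# Synthetic bankruptcy labels loosely tied to the same ratios plus noise.
y = (X @ np.array([1.2, 1.4, 3.3, 0.6, 1.0]) + rng.normal(scale=2, size=500)) < 0
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AdaBoost test accuracy:", round(clf.score(X_te, y_te), 3))
print("Z-score example:", round(altman_z(0.1, 0.2, 0.05, 1.5, 1.1), 2))
```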

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.7 no.1
    • /
    • pp.861-876
    • /
    • 1965
  • During my stay in the Netherlands, I studied the following topics, primarily in relation to the Mokpo Yong-san project which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to construct a unit hydrograph, but here I explain how to derive one from an actual runoff curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph every two hours, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, with an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gage stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gage record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff. I also tried to keep the difference between the calculated discharge and the measured discharge below 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimension. According to the design principles presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid crop and structure damage. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area. To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can be calculated from this average head, the time interval and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir; the calculated water level must be the same as the estimated water level. The mean tide is adequate for determining the sluice dimension, because the spring tide is the worst case and the neap tide the best condition for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase in velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir; the difference in hydraulic head is then no longer negligible and must be taken into account. When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. Critical flow is the point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner: using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (outer water level), the velocity can be calculated with the formula $h = \frac{v^2}{2g}$, and it must be equal to the velocity determined from the current. If there is a difference in velocity, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 times the difference between the lower water level and the crest of the dam, we speak of a "free weir": the flow over the weir is then dependent only on the higher water level and not on the difference between the high and low water levels. When the weir is "submerged", that is, when the higher water level is less than 2/3 times the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, because the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not be more than 3 m/sec. As the calculated maximum velocities are higher than this limit, we must use other construction methods in closing the gap; this can be done by dump-cars from each side or by using a cable way.
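
A small sketch of two computations described in this abstract follows: deriving the unit hydrograph of part 1 by subtracting base flow and dividing the direct-runoff ordinates by the effective rainfall, and the head-to-velocity relation $h = v^2/(2g)$ used in the tidal computation of part 3. Dividing by the effective rainfall depth (rather than the raw hourly intensity) is an assumed convention, and all discharge, base-flow and rainfall numbers are placeholders, not the 1963 Naju storm data.

```python
import math

def unit_hydrograph(observed_q, base_flow, effective_rainfall_cm):
    """observed_q: discharge ordinates (m^3/s) at 2-hour intervals."""
    direct_runoff = [max(q - base_flow, 0.0) for q in observed_q]
    return [dr / effective_rainfall_cm for dr in direct_runoff]

observed = [30, 80, 190, 260, 210, 150, 100, 60, 35]   # m^3/s, placeholder storm
print([round(v, 1) for v in unit_hydrograph(observed, base_flow=30,
                                            effective_rainfall_cm=2.0)])

def gap_velocity_from_head(head_m, g=9.81):
    """Velocity implied by a head difference, from h = v^2 / (2g)."""
    return math.sqrt(2.0 * g * head_m)

print(round(gap_velocity_from_head(0.5), 2), "m/s")    # placeholder head of 0.5 m
```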

  • PDF

Analysis of the ESD and DAP According to the Change of the Cine Imaging Condition of Coronary Angiography and Usefulness of SNR and CNR of the Images: Focusing on the Change of Tube Current (관상동맥 조영술(Coronary Angiography)의 씨네(cine) 촬영조건 변화에 따른 입사표면선량(ESD)과 흡수선량(DAP) 및 영상의 SNR·CNR 유용성 분석: 관전류 변화를 중점으로)

  • Seo, Young Hyun;Song, Jong Nam
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.3
    • /
    • pp.371-379
    • /
    • 2019
  • The purpose of this study was to investigate the effect of changes in the X-ray exposure conditions on the entrance surface dose (ESD) and dose area product (DAP) in the cine imaging of coronary angiography (CAG), and to analyze the usefulness of the condition change for dose and image quality by measuring the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the angiographic images with the ImageJ program. Data were collected from 33 patients (24 males and 9 females) who underwent CAG at this hospital from November 2017 to March 2018. For the imaging conditions and data acquisition, the ESD and DAP of group A, imaged with a high tube current of 397.2 mA, and group B, imaged with a low tube current of 370.7 mA, were obtained retrospectively for comparison and analysis. For the SNR and CNR measurement via ImageJ, the values were derived by substituting the measured data into the respective formulas. The correlations among ESD, DAP, SNR, and CNR according to the change in imaging conditions were analyzed with the SPSS statistical software. The differences between groups A and B in ESD ($A:483.5{\pm}60.1$; $B:464.4{\pm}39.9$) and DAP ($A:84.3{\pm}10.7$; $B:81.5{\pm}7$) were not statistically significant (p>0.05). For the SNR and CNR measured with ImageJ, the SNR ($5.451{\pm}0.529$) and CNR ($0.411{\pm}0.0432$) of the images obtained from left coronary artery (LCA) imaging in group B differed by $0.475{\pm}0.096$ and $-0.048{\pm}0.0$, respectively, from the SNR ($4.976{\pm}0.433$) and CNR ($0.459{\pm}0.0431$) of the LCA images in group A, but these differences were not statistically significant (p>0.05). For the SNR and CNR obtained from right coronary artery (RCA) imaging, the SNR ($4.731{\pm}0.773$) and CNR ($0.354{\pm}0.083$) of group A were higher by $1.491{\pm}0.405$ and $0.188{\pm}0.005$, respectively, than the SNR ($3.24{\pm}0.368$) and CNR ($0.166{\pm}0.033$) of group B; of these, the CNR difference was statistically significant (p<0.05). In the correlation analysis, statistically significant results were found for SNR (LCA) and CNR (LCA); SNR (RCA) and CNR (RCA); ESD and DAP; ESD and exposure time; DAP and CNR (RCA); and DAP and exposure time (p<0.05). From the evaluation of image quality and the dose change, the SNR and CNR increased in the RCA images of CAG acquired with the higher mA. Given that the CNR difference was statistically significant, the image contrast can likely be further improved by increasing the mA in RCA imaging.
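
Below is a hedged sketch of ROI-based SNR and CNR as they are commonly computed in ImageJ-style analyses: SNR as mean signal over background standard deviation, and CNR as the signal-background mean difference over the background standard deviation. The paper does not spell out its exact formulas, so these definitions and the placeholder ROI values are assumptions for illustration only.

```python
import numpy as np

def snr_cnr(signal_roi: np.ndarray, background_roi: np.ndarray):
    """SNR and CNR from pixel values of a signal ROI and a background ROI."""
    signal_mean = signal_roi.mean()
    bg_mean = background_roi.mean()
    bg_std = background_roi.std(ddof=1)
    snr = signal_mean / bg_std
    cnr = (signal_mean - bg_mean) / bg_std
    return snr, cnr

rng = np.random.default_rng(1)
vessel = rng.normal(120, 10, size=(40, 40))       # placeholder contrast-filled vessel ROI
background = rng.normal(100, 12, size=(40, 40))   # placeholder background ROI
print(tuple(round(v, 2) for v in snr_cnr(vessel, background)))
```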

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data are multidimensional time series data, and handling them is difficult because the characteristics of multidimensional data and of time series data must both be considered. When dealing with multidimensional data, the correlation between variables should be considered, and existing probability-based, linear, and distance-based methods degrade because of the so-called curse of dimensionality. In addition, time series data are usually preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis; these techniques further increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field in which statistical methods and regression analysis were used in the early days; currently, there are active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when the data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect anomalies by comparing predicted and actual values; their performance drops when the model is not solid or when the data contain noise or outliers, so they are restricted to training data free of noise and outliers. An autoencoder based on artificial neural networks is trained to reproduce its input as closely as possible. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy probability-distribution or linearity assumptions, and it can be trained without labeled data. However, it still has limitations in identifying local outliers in multidimensional data, and the dimensionality of the data is greatly increased by the characteristics of time series data. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly detection performance by considering local outliers and time series characteristics. First, a Multimodal Autoencoder (MAE) is applied to mitigate the limitation in identifying local outliers in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and images; the different modalities share the autoencoder's bottleneck and learn their correlations. In addition, a Conditional Autoencoder (CAE) is used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time is used as the condition in order to learn periodicity. The proposed CMAE model was verified by comparing it with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance for 41 variables was checked in the proposed model and the comparison models. Reconstruction performance differs by variable; the Memory, Disk, and Network modalities are reconstructed well, with small loss values, in all three autoencoder models.
The Process modality did not show a significant difference across the three models, and the CPU modality showed the best performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance in the proposed model and the comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators the performance ranked in the order CMAE, MAE, and UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. The accuracy of the model also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows require managing extra preprocessing procedures, and the resulting dimensional increase can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
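
A hedged sketch of the conditional-autoencoder idea described above is given below: the metric vector is reconstructed while the time of day is fed to the encoder and decoder as a condition, and the reconstruction error serves as the anomaly score. Layer sizes, the sin/cos time encoding, and the 41-feature placeholder data are illustrative assumptions rather than the authors' architecture, and the multimodal split across Memory, Disk, Network, CPU and Process metrics is not reproduced here.

```python
import math
import torch
import torch.nn as nn

class ConditionalAE(nn.Module):
    def __init__(self, n_features=41, n_cond=2, n_latent=16):
        super().__init__()
        # Both encoder and decoder receive the condition (e.g. encoded hour of day).
        self.encoder = nn.Sequential(
            nn.Linear(n_features + n_cond, 64), nn.ReLU(),
            nn.Linear(64, n_latent), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(n_latent + n_cond, 64), nn.ReLU(),
            nn.Linear(64, n_features))

    def forward(self, x, cond):
        z = self.encoder(torch.cat([x, cond], dim=1))
        return self.decoder(torch.cat([z, cond], dim=1))

def time_condition(hour_of_day):
    """Encode the 24-hour periodicity as sin/cos instead of a raw category."""
    angle = 2.0 * math.pi * hour_of_day / 24.0
    return torch.stack([torch.sin(angle), torch.cos(angle)], dim=1)

model = ConditionalAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 41)                                   # placeholder monitoring metrics
cond = time_condition(torch.randint(0, 24, (256,)).float())
for _ in range(50):                                        # train on "normal" data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x, cond), x)
    loss.backward()
    opt.step()
anomaly_score = ((model(x, cond) - x) ** 2).mean(dim=1)    # flag samples with large error
```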

The Effect of Customer Satisfaction on Corporate Credit Ratings (고객만족이 기업의 신용평가에 미치는 영향)

  • Jeon, In-soo;Chun, Myung-hoon;Yu, Jung-su
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.1-24
    • /
    • 2012
  • Customer satisfaction has become one of companies' major objectives, and indexes that measure and communicate customer satisfaction are generally accepted in business practice. The major issues around a customer satisfaction index (CSI) are three questions: (a) what level of customer satisfaction is tolerable, (b) whether customer satisfaction and company performance have a positive causal relationship, and (c) what to do to improve customer satisfaction. The second issue has recently attracted academic research from several perspectives, and it is the issue addressed in this study. Many researchers, including Anderson, have regarded customer satisfaction as a core asset like brand equity or customer equity, and seek to verify the causality "customer satisfaction → market performance (market share, sales growth rate) → financial performance (operating margin, profitability) → corporate value performance (stock price, credit ratings)" based on the process model of marketing performance. On the other hand, Insoo Jeon and Aeju Jeong (2009) tested this sequential causality on domestic data and, after several hypotheses were rejected, suggested the balance model of marketing performance as an alternative. The objective of this study, based on the existing process model, is to examine the causal relationship between customer satisfaction and corporate value performance. Anderson and Mansi (2009) demonstrated the relationship between the ACSI (American Customer Satisfaction Index) and credit ratings using 2,574 samples from 1994 to 2004, on the assumption that credit ratings can be an indicator of corporate value performance. A similar study (Sangwoon Yoon, 2010) used Korean data but, unlike Anderson and Mansi (2009), did not confirm the relationship between the KCSI (Korean CSI) and credit ratings. These studies are summarized in Table 1. Because the two studies analyzing the relationship between customer satisfaction and credit ratings produced inconsistent results, this study tests the conflicting findings with a research model built around Korean credit ratings. To prove the hypothesis, we suggest the research model as follows. Two important features of this model are the inclusion of important variables from the existing Korean credit rating system and of government support. To control for their influence on credit ratings, we included three important variables from the Korean credit rating system and, for financial institutions including banks, government support. The three financial variables, ROA, ER, and TA, were chosen from among various financial indicators because they are the variables most frequently used in previous studies. The results of the research model are relatively favorable: R2, the F-value and the p-value are .631, 233.15 and .000, respectively, so the explanatory power of the research model as a whole is good and the model is statistically significant. The regression coefficient of the KCSI is .096 and positive (+), with a t-value of 2.220 and a p-value of .0135; the hypothesis is therefore supported. Meanwhile, all other explanatory variables, including ROA, ER, log(TA) and GS_DV, are significant, and each has a positive (+) relationship with CRS.
In particular, the t-value of log(TA) is 23.557, so log(TA) shows a very high level of statistical significance as an explanatory variable for corporate credit ratings. Because financial indicators such as ROA and ER include total assets in their formulas, a multicollinearity problem could be expected; however, indicators such as the VIF and tolerance limits show that there is no statistically significant multicollinearity among the explanatory variables. The KCSI, the main subject of this study, is statistically significant even though its standardized regression coefficient (.055) and t-value (2.220) are relatively low among the explanatory variables. Considering that the other explanatory variables were chosen for their explanatory power from among many indicators in previous studies, the KCSI is validated as one of the significant explanatory variables for the credit rating score, and this result provides new insight into the determinants of credit ratings. However, the KCSI has a smaller impact than the main financial indicators such as log(TA) and ER; it is one of the determinants of credit ratings but does not have an overriding influence. In addition, this study found that customer satisfaction has a more meaningful impact on corporations with small asset size than on those with large asset size, and on service companies than on manufacturers. These findings are consistent with Anderson and Mansi (2009) but differ from Sangwoon Yoon (2010). Although the research model of this study differs somewhat from that of Anderson and Mansi (2009), we can conclude that customer satisfaction has a significant influence on a company's credit ratings in both Korea and the United States. Until now there have been only a few studies on the relationship between customer satisfaction and various business performance measures, some of which supported the relationship and some not. The contribution of this study is that credit ratings are applied as a measure of corporate value performance in addition to stock price. This matters because credit ratings determine the cost of debt, yet they have so far received little attention in marketing research. Based on this study, customer satisfaction appears to be at least partially related to all indicators of corporate business performance. The practical implication for customer satisfaction departments is that active investment in customer satisfaction is worthwhile, because it also contributes to higher credit ratings and other business performance. A suggestion for credit evaluators is that they need to design a new credit rating model that reflects qualitative customer satisfaction as well as existing variables such as ROA, ER and TA.
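
A minimal sketch of the regression and multicollinearity check described above follows: the credit rating score (CRS) regressed on KCSI, ROA, ER, log(TA) and a government-support dummy (GS_DV), followed by variance inflation factors. Only the variable names follow the paper; the data are synthetic placeholders, so the fitted coefficients do not reproduce the reported results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "KCSI":   rng.normal(70, 5, n),
    "ROA":    rng.normal(0.05, 0.02, n),
    "ER":     rng.normal(0.4, 0.1, n),
    "log_TA": rng.normal(13, 1, n),
    "GS_DV":  rng.integers(0, 2, n),
})
# Synthetic credit rating score loosely driven by the same variables plus noise.
df["CRS"] = (0.1 * df.KCSI + 20 * df.ROA + 5 * df.ER + 2 * df.log_TA
             + 3 * df.GS_DV + rng.normal(0, 2, n))

X = sm.add_constant(df[["KCSI", "ROA", "ER", "log_TA", "GS_DV"]])
model = sm.OLS(df["CRS"], X).fit()
print(model.summary().tables[1])              # coefficients, t-values, p-values

vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)                                    # VIFs well below 10 suggest no problem
```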

  • PDF

A Study on the Dimensions, Surface Area and Volume of Grains (곡립(穀粒)의 치수, 표면적(表面積) 및 체적(體積)에 관(關)한 연구(硏究))

  • Park, Jong Min;Kim, Man Soo
    • Korean Journal of Agricultural Science
    • /
    • v.16 no.1
    • /
    • pp.84-101
    • /
    • 1989
  • An accurate measurement of the size, surface area and volume of agricultural products is essential in many engineering operations, such as handling and sorting, and in heat transfer studies on heating and cooling processes. Little information is available on these properties because of the grains' irregular shapes, and very little has been published on rough rice, soybean, barley, and wheat. Physical dimensions of grain, such as length, width, thickness, surface area, and volume, vary with the variety, environmental conditions, temperature, and moisture content; recent research has emphasized the variation of these properties with important factors such as moisture content. The objectives of this study were to determine the physical dimensions (length, width and thickness), surface area and volume of rough rice, soybean, barley, and wheat as functions of moisture content, to investigate the effect of moisture content on these properties, and to develop exponential equations that predict the surface area and volume of the grains as functions of the physical dimensions. The rough rice varieties used in this study were Akibare, Milyang 15, Seomjin, Samkang, Chilseong, and Yongmun; the soybean samples were Jangyeobkong and Hwangkeumkong; the barley samples were Olbori and Salbori; and the wheat samples were Eunpa and Guru. The physical properties of the grain samples were determined at four moisture content levels, with ten or fifteen replications at each moisture level and for each variety. The results of this study are summarized as follows. 1. When the surface area and volume of the 0.0375 m diameter sphere measured in this study were compared with the values calculated from the formula, the percent errors were smallest, 0.65% and 0.77% respectively, at a rotational interval of 15 degrees. 2. The statistical tests (t-tests) of the physical properties between the types of rough rice, and between the varieties of soybean and wheat, indicated significant differences at the 5% level. 3. The physical dimensions varied linearly with moisture content; the ratios of length to thickness (L/T) and of width to thickness (W/T) decreased with increasing moisture content in rough rice but increased in soybean, while no consistent tendency was observed in barley and wheat. In all of the sample grains except Olbori, sphericity decreased with increasing moisture content. 4. Over the experimental moisture levels, the surface area and volume were in the ranges of about $45{\sim}51{\times}10^{-6}m^2$ and $25{\sim}30{\times}10^{-9}m^3$ for Japonica-type rough rice, about $42{\sim}47{\times}10^{-6}m^2$ and $21{\sim}26{\times}10^{-9}m^3$ for Indica${\times}$Japonica-type rough rice, about $188{\sim}200{\times}10^{-6}m^2$ and $277{\sim}300{\times}10^{-9}m^3$ for Jangyeobkong, about $180{\sim}201{\times}10^{-6}m^2$ and $190{\sim}253{\times}10^{-9}m^3$ for Hwangkeumkong, about $60{\sim}69{\times}10^{-6}m^2$ and $36{\sim}45{\times}10^{-9}m^3$ for covered barley, about $47{\sim}60{\times}10^{-6}m^2$ and $22{\sim}28{\times}10^{-9}m^3$ for naked barley, about $51{\sim}20{\times}10^{-6}m^2$ and $23{\sim}31{\times}10^{-9}m^3$ for Eunpa wheat, and about $57{\sim}69{\times}10^{-6}m^2$ and $27{\sim}34{\times}10^{-9}m^3$ for Guru wheat, respectively. 5.
The rate of increase of surface area and volume with increasing moisture content was higher in soybean than in the other sample grains, and in rough rice it was slightly higher for the Japonica type than for the Indica${\times}$Japonica type. 6. Regression equations for the physical dimensions, surface area and volume were developed as functions of moisture content; exponential equations for the surface area and volume were developed as functions of the physical dimensions; and regression equations for the surface area as a function of volume were developed for all grain samples.
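
A short sketch of item 6 above follows: fitting an exponential (power-law) equation for grain surface area as a function of the three axial dimensions by linear least squares on log-transformed data, of the form S = a(LWT)^b. The functional form and the sample numbers are assumptions for illustration and are not the study's fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)
L = rng.uniform(6.5e-3, 7.5e-3, 50)     # length, m (placeholder rough-rice scale)
W = rng.uniform(2.8e-3, 3.4e-3, 50)     # width, m
T = rng.uniform(1.9e-3, 2.3e-3, 50)     # thickness, m
S = 4.5 * (L * W * T) ** 0.66 * np.exp(rng.normal(0, 0.02, 50))  # synthetic areas, m^2

x = np.log(L * W * T)
b, log_a = np.polyfit(x, np.log(S), 1)  # slope b, intercept log(a)
print("a =", round(np.exp(log_a), 3), " b =", round(b, 3))
```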

  • PDF