• Title/Summary/Keyword: Model Tests

Search Results: 7,077

A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon;Shin, Taek-Soo;Lim, Jong-Ho
    • Asia pacific journal of information systems / v.17 no.4 / pp.207-228 / 2007
  • For a long time, the measurement of Information Technology (IT) organizations' activities was limited mainly to financial indicators. However, as the functions of information systems have diversified, a number of studies have explored new measurement methodologies that combine financial measures with new kinds of measures. In particular, recent years have seen research on the IT Balanced Scorecard (BSC), which applies the BSC concept to measuring IT activities. The BSC offers more than the mere integration of non-financial measures into a performance measurement system. Its core rests on the cause-and-effect relationships between measures, which allow prediction of value-chain performance, communication and realization of corporate strategy, and incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance and to test the validity of these hypothesized effects to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy. Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard, and it is equally central to the IT BSC. However, the relationship between information systems and enterprise strategies, as well as the connections among various IT performance measurement indicators, has received little prior study. Ittner et al. [2003] report that 77% of all surveyed companies with an implemented BSC place no or only little emphasis on soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of the BSC. This shortcoming can be explained by one theoretical and one practical reason [Blumenberg and Hinz, 2006].
From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical point of view, modeling corporate causalities is a complex task due to tedious data acquisition and subsequent reliability maintenance. Nevertheless, cause-and-effect relationships are an essential part of BSCs because they differentiate performance measurement systems like the BSC from simple key performance indicator (KPI) lists. KPI lists present an ad-hoc collection of measures to managers but do not allow a comprehensive view of corporate performance; performance measurement systems like the BSC instead model the relationships of the underlying value chain as cause-and-effect relationships. Therefore, to overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are required in both theory and practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships between those CSFs. For this purpose, we define four BSC perspectives for IT organizations, following Van Grembergen's study [2000]. The Future Orientation perspective represents the human and technology resources needed by IT to deliver its services. The Operational Excellence perspective represents the IT processes employed to develop and deliver the applications. The User Orientation perspective represents the user evaluation of IT. The Business Contribution perspective captures the business value of the IT investments. Each of these perspectives has to be translated into corresponding metrics and measures that assess the current situation. Based on previous IT BSC studies and COBIT 4.1, this study suggests 12 CSFs for the IT BSC, comprising 51 KPIs.
We define the cause-and-effect relationships among the BSC CSFs for IT organizations as follows: the Future Orientation perspective has positive effects on the Operational Excellence perspective; the Operational Excellence perspective has positive effects on the User Orientation perspective; and the User Orientation perspective has positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS path modeling) to analyze the multiple IT BSC CSFs. PLS path modeling has properties that make it more appropriate than other techniques, such as multiple regression and LISREL, when analyzing small sample sizes. Its use has been gaining interest among IS researchers in recent years because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC are partially supported.
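The hypothesized four-perspective chain can be illustrated in code. The sketch below simulates composite scores and estimates standardized path coefficients with plain regressions; it is a simplified stand-in for PLS estimation, and all variable names, effect sizes, and the sample size are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # hypothetical sample size

# Simulate composite scores for the four IT BSC perspectives, wired in the
# hypothesized causal chain:
# Future Orientation -> Operational Excellence -> User Orientation -> Business Contribution
future = rng.normal(size=n)
operational = 0.6 * future + rng.normal(scale=0.8, size=n)
user = 0.5 * operational + rng.normal(scale=0.8, size=n)
business = 0.7 * user + rng.normal(scale=0.8, size=n)

def path_coefficient(x, y):
    """Standardized simple-regression (path) coefficient of y on x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.dot(x, y) / len(x))

paths = {
    "Future -> Operational": path_coefficient(future, operational),
    "Operational -> User": path_coefficient(operational, user),
    "User -> Business": path_coefficient(user, business),
}
for name, coef in paths.items():
    print(f"{name}: {coef:+.3f}")
```

A full PLS-PM analysis would additionally iterate between outer (measurement) and inner (structural) weights over the 51 KPIs; the point here is only the directed chain of positive paths.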

Two Dimensional Size Effect on the Compressive Strength of Composite Plates Considering Influence of an Anti-buckling Device (좌굴방지장치 영향을 고려한 복합재 적층판의 압축강도에 대한 이차원 크기 효과)

  • ;;C. Soutis
    • Composites Research / v.15 no.4 / pp.23-31 / 2002
  • The two-dimensional size effect of the specimen gauge section (length × width) was investigated for the compressive behavior of a T300/924 [45/-45/0/90]3s carbon fiber-epoxy laminate. A modified ICSTM compression test fixture was used together with an anti-buckling device to test 3 mm thick specimens with 30 mm × 30 mm, 50 mm × 50 mm, 70 mm × 70 mm, and 90 mm × 90 mm gauge (length × width) sections. In all cases failure was sudden and occurred mainly within the gauge length. Post-failure examination suggests that 0° fiber microbuckling is the critical damage mechanism that causes final failure. This is a matrix-dominated failure mode whose triggering depends very much on initial fiber waviness, which suggests that the manufacturing process and quality may play a significant role in determining the compressive strength. When the anti-buckling device was used, the measured compressive strength was slightly greater than without the device, owing to surface friction between the specimen and the device caused by the pretorque in the device's bolts. Finite element analysis of the device's influence showed that the compressive strength with the anti-buckling device and loaded bolts was about 7% higher than the actual compressive strength. Additionally, compressive tests on specimens with an open hole were performed. The local stress concentration arising from the hole, rather than the stresses in the bulk of the material, dominates the strength of the laminate. The remote failure stress decreases with increasing hole size and specimen width but is generally well above the value one might predict from the elastic stress concentration factor. This suggests that the material is not ideally brittle and that some stress relief occurs around the hole.
X-ray radiography reveals that damage in the form of fiber microbuckling and delamination initiates at the edge of the hole at approximately 80% of the failure load and extends stably under increasing load before becoming unstable at a critical length of 2-3 mm (depending on specimen geometry). This damage growth and failure are analysed with a linear cohesive zone model. Using the independently measured laminate parameters of unnotched compressive strength and in-plane fracture toughness, the model successfully predicts the notched strength as a function of hole size and width.
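The observation that notched strength falls with hole size yet stays above the elastic stress-concentration prediction is the behavior captured by length-scale criteria such as the Whitney-Nuismer point-stress criterion. The sketch below uses the isotropic infinite-plate form purely for illustration (the laminate here is orthotropic, so its concentration factor differs), with a hypothetical characteristic distance:

```python
def notched_strength_ratio(radius_mm, d0_mm):
    """Whitney-Nuismer point-stress criterion for a circular hole in an
    infinite isotropic plate: notched/unnotched strength ratio.
    Tends to 1 as the hole shrinks and to 1/Kt = 1/3 for very large holes."""
    xi = radius_mm / (radius_mm + d0_mm)
    return 2.0 / (2.0 + xi**2 + 3.0 * xi**4)

d0 = 1.0  # characteristic distance in mm (hypothetical)
for r in (1.0, 2.5, 5.0, 10.0):
    print(f"hole radius {r:>4} mm -> sigma_N/sigma_0 = {notched_strength_ratio(r, d0):.3f}")
```

The ratio decreases with hole radius but never reaches the ideally brittle limit of 1/3, mirroring the stress relief around the hole reported above.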

Experimental Study on the Effect of Filter Layers on Pumping Capacity and Well Efficiency in an Unconfined Aquifer (자유면대수층에서 필터층이 취수량 및 우물효율에 미치는 영향에 대한 실험적 연구)

  • Song, Jae-Yong;Lee, Sang-Moo;Choi, Yong-Soo;Jeong, Gyo-Cheol
    • The Journal of Engineering Geology / v.27 no.4 / pp.405-416 / 2017
  • This study evaluated a model unconfined aquifer comprising a sand or gravel layer, a filter layer, a pumping well, and an observation well. The model was employed in step-drawdown tests and then used to assess the permeability of each test tank, after which the optimal yield and well efficiency were calculated. Step-drawdown evaluation of yield for sand-layer filters of equal thickness gave optimal yields of 22.03 L/min for the double filter and 19.71 L/min for the single filter; the double filter's yield was 115.0% that of the single filter. A comparison of double and single filters, each 10 cm thick, showed the double filter to have a maximum yield of 182.7%. Yields for the gravel layer were 73.56 L/min for a double filter and 65.47 L/min for a single filter of the same thickness; the former is 112.3% of the latter. Comparison of double and single filters with 10-cm-thick gravel layers revealed that the double filter had a maximum yield of 160.9%. Results for sand wells showed the double filter to have a maximum efficiency of 70.4% and the single filter a minimum efficiency of 37.1%. Gravel-layer well efficiencies were >66.5% for both double and single filters (each 30 cm thick), but only 22.5% for a 10-cm-thick single filter. This study confirms that permeability improved as the filter material became thicker; it also shows that a double filter has a higher yield and well efficiency than a single filter. These results can be applied to the practical design of wells.
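Well efficiency from step-drawdown data is conventionally computed with Jacob's equation s = BQ + CQ², where BQ is the laminar aquifer loss and CQ² the turbulent well loss. A minimal sketch, with loss coefficients invented only to mimic the double-vs-single-filter contrast reported above:

```python
def well_efficiency(B, C, Q):
    """Jacob step-drawdown model: total drawdown s = B*Q + C*Q**2.
    Efficiency (%) = laminar (aquifer-loss) component / total drawdown."""
    s_laminar = B * Q
    s_total = B * Q + C * Q * Q
    return 100.0 * s_laminar / s_total

# Hypothetical loss coefficients for a double- and a single-filter well
B, C_double, C_single = 0.05, 0.0005, 0.0020
for label, C in (("double filter", C_double), ("single filter", C_single)):
    print(f"{label}: efficiency at Q=50 L/min = {well_efficiency(B, C, 50):.1f}%")
```

A thicker or double filter lowers the well-loss coefficient C, which is exactly what raises the efficiency in this formulation.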

Developing and Applying the Questionnaire to Measure Science Core Competencies Based on the 2015 Revised National Science Curriculum (2015 개정 과학과 교육과정에 기초한 과학과 핵심역량 조사 문항의 개발 및 적용)

  • Ha, Minsu;Park, HyunJu;Kim, Yong-Jin;Kang, Nam-Hwa;Oh, Phil Seok;Kim, Mi-Jum;Min, Jae-Sik;Lee, Yoonhyeong;Han, Hyo-Jeong;Kim, Moogyeong;Ko, Sung-Woo;Son, Mi-Hyun
    • Journal of The Korean Association For Science Education / v.38 no.4 / pp.495-504 / 2018
  • This study was conducted to develop items measuring the scientific core competencies stated in the 2015 revised national science curriculum and to examine the validity and reliability of the newly developed items. Based on the curriculum's descriptions of scientific reasoning, scientific inquiry ability, scientific problem-solving ability, scientific communication ability, and participation/lifelong learning in science, 25 items were developed by five science education experts. To explore the validity and reliability of the developed items, data were collected from 11,348 students in elementary, middle, and high schools nationwide. The content validity, substantive validity, internal structure validity, and generalization validity proposed by Messick (1995) were examined through various statistical tests. The MNSQ analysis showed no misfitting items among the 25. Confirmatory factor analysis using structural equation modeling revealed that the five-factor model was a suitable model. Differential item functioning (DIF) analyses by gender and school level revealed nonconforming DIF values in only two of 175 cases. A multivariate analysis of variance by gender and school level showed significant differences in test scores across school levels and genders, and the interaction effect was also significant. The assessment items of science core competency based on the 2015 revised national science curriculum are thus valid from a psychometric point of view and can be used in the science education field.
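The paper's validity checks are MNSQ fit, confirmatory factor analysis, and DIF; as a simpler, related internal-consistency check, Cronbach's alpha can be computed directly from an item-response matrix. A sketch on simulated Likert-style responses (all data hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency reliability for an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=500)  # one latent competency score per respondent
# five items loading on the same trait, plus item-specific noise
responses = trait[:, None] + rng.normal(scale=0.7, size=(500, 5))
print(f"alpha = {cronbach_alpha(responses):.2f}")
```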

An Empirical Study on How the Moderating Effects of Individual Cultural Characteristics towards a Specific Target Affects User Experience: Based on the Survey Results of Four Types of Digital Device Users in the US, Germany, and Russia (특정 대상에 대한 개인 수준의 문화적 성향이 사용자 경험에 미치는 조절효과에 대한 실증적 연구: 미국, 독일, 러시아의 4개 디지털 기기 사용자를 대상으로)

  • Lee, In-Seong;Choi, Gi-Woong;Kim, So-Lyung;Lee, Ki-Ho;Kim, Jin-Woo
    • Asia pacific journal of information systems / v.19 no.1 / pp.113-145 / 2009
  • Recently, due to the globalization of the IT (Information Technology) market, devices and systems designed in one country are used in other countries as well. This phenomenon is becoming the key driver of increased interest in cross-cultural, or cross-national, research within the IT area. However, as the IT market grows and globalizes, many IT practitioners have difficulty designing and developing devices or systems that provide an optimal user experience, because the user experience of a device or system is affected not only by tangible factors such as language and a country's economic or industrial power but also by invisible, intangible factors. Among these, the cultural characteristics of users from different countries may shape the user experience of a device or system because they affect how users understand and interpret it. In other words, when users evaluate the quality of the overall user experience, each user's cultural characteristics act as a perceptual lens that leads the user to focus on certain elements of the experience. There is therefore a need within the IT field to consider cultural characteristics when designing or developing devices or systems and when planning a localization strategy. In this environment, existing IS studies identify culture with country, emphasize the importance of culture at the national level, and assume that users within the same country share the same cultural characteristics. Under such assumptions, these studies focus on the moderating effects of national-level cultural characteristics within a given theoretical framework.
This approach has been supported by cross-cultural studies by scholars such as Hofstede (1980), which provide numerical results and measurement items for cultural characteristics; using such results and items increases the efficiency of studies. However, national-level culture has limitations in forecasting and explaining individual-level behaviors such as voluntary device or system usage, because individual cultural characteristics are the outcome not only of the national culture but also of the cultures of a race, company, local area, family, and other groups formed through interaction within each group. National or nationally dominant cultural characteristics may therefore fall short in forecasting and explaining the cultural characteristics of an individual. Moreover, past studies in psychology suggest that different cultural characteristics may exist within a single individual depending on the subject being measured or its context. For example, with respect to individualism vs. collectivism, one of the major cultural dimensions, an individual may show collectivistic characteristics with family or friends but individualistic characteristics in the workplace. Acknowledging these limitations of past studies, this study investigated, within the framework of a 'theoretically integrated model of user satisfaction and emotional attachment' developed in a former study, how the effects of different experience elements on emotional attachment or user satisfaction differ depending on the individual cultural characteristics related to system or device usage. To do this, this study hypothesized the moderating effects of the four cultural dimensions suggested by Hofstede (1980) (uncertainty avoidance, individualism vs. collectivism, masculinity vs. femininity, and power distance) within the theoretically integrated model of emotional attachment and user satisfaction. These moderating effects were then tested statistically through surveys of users of four digital devices (mobile phone, MP3 player, LCD TV, and refrigerator) in three countries (the US, Germany, and Russia). To explain and forecast the behavior of device or system users, individual-level cultural characteristics must be measured, and they must be measured separately for each target device or system. Through this suggestion, this study hopes to provide new and useful perspectives for future IS research.
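A moderating effect of the kind hypothesized here is commonly tested as an interaction term in a regression. The sketch below simulates data in which a cultural trait strengthens the effect of experience on satisfaction, then recovers the interaction coefficient by ordinary least squares; all variable names and coefficients are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
experience = rng.normal(size=n)    # perceived quality of an experience element
collectivism = rng.normal(size=n)  # individual-level cultural trait (moderator)

# Satisfaction depends on experience, and the effect strengthens with collectivism
satisfaction = (0.5 * experience + 0.2 * collectivism
                + 0.4 * experience * collectivism
                + rng.normal(scale=0.5, size=n))

# OLS with an interaction (moderation) term: intercept, main effects, product
X = np.column_stack([np.ones(n), experience, collectivism,
                     experience * collectivism])
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
print("interaction coefficient ~", round(float(beta[3]), 2))
```

A nonzero interaction coefficient is what "the cultural trait moderates the experience-satisfaction link" means statistically.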

The Impact of Amalgam Exposure on Urinary Mercury Concentration in Children (어린이의 구강 내 아말감 노출이 요중 수은농도에 미치는 영향)

  • Jeon, Eun-Suk;Jin, Hye-Jung;Kim, Eun-Kyong;Im, Sang-Uk;Song, Keun-Bae;Choi, Youn Hee
    • Journal of dental hygiene science / v.14 no.1 / pp.7-14 / 2014
  • This study aims to evaluate the impact of varying exposure to dental amalgam on urinary mercury levels in children by measuring the number of amalgam-filled teeth and the variation in urinary mercury concentration over a period of 2 years. A total of 317 elementary school children (158 male, 159 female; 1st~4th graders) attending two schools in urban regions participated. At 6-month intervals, four oral examinations were conducted to check for changes in dental caries and the status of dental fillings, and urine tests were conducted along with a questionnaire survey. To identify factors potentially affecting the urinary mercury concentration, the t-test, ANOVA, chi-square test, and a mixed model were used. Across examination time points, deciduous teeth had more amalgam-filled surfaces than resin-filled surfaces, whereas permanent teeth had more resin-filled than amalgam-filled surfaces. A significant association was found between exposure to dental amalgam and urinary mercury levels: subjects with amalgam-filled tooth surfaces showed higher urinary mercury levels than those with no amalgam fillings. In the mixed-model analysis, an increase in the number of amalgam-filled tooth surfaces was found to increase urinary mercury levels. Urinary mercury levels were thus highly associated with exposure to dental amalgam: the more tooth surfaces filled with amalgam, the higher the urinary mercury level. Hence, even a trace of dental amalgam filling can liberate mercury and affect urinary mercury levels. These findings suggest that criteria or measures should be developed to minimize exposure to dental amalgam, and further relevant studies are warranted.
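A mixed model is used here because each child contributes repeated measurements. The intuition can be sketched by simulation: a child-level random intercept cancels under within-child differencing, leaving the fixed effect of amalgam surfaces on mercury. The slope, noise level, and counts below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n_children, n_visits = 200, 4
child_effect = rng.normal(size=n_children)  # random intercept per child

# Amalgam-filled surfaces accumulate over visits (0 or 1 new surface per visit)
increments = rng.integers(0, 2, size=(n_children, n_visits))
surfaces = np.cumsum(increments, axis=1)

true_slope = 0.2  # hypothetical rise in urinary Hg per filled surface
mercury = (child_effect[:, None] + true_slope * surfaces
           + rng.normal(scale=0.1, size=(n_children, n_visits)))

# First differences across visits cancel the child intercept entirely
d_y = np.diff(mercury, axis=1).ravel()
d_s = np.diff(surfaces, axis=1).ravel()
slope = float(np.dot(d_s, d_y) / np.dot(d_s, d_s))
print(f"estimated slope: {slope:.3f}")
```

A full mixed model additionally estimates the variance of the random intercepts; the differencing trick only recovers the fixed slope.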

Does Home Oxygen Therapy Slow Down the Progression of Chronic Obstructive Pulmonary Diseases?

  • Han, Kyu-Tae;Kim, Sun Jung;Park, Eun-Cheol;Yoo, Ki-Bong;Kwon, Jeoung A;Kim, Tae Hyun
    • Journal of Hospice and Palliative Care / v.18 no.2 / pp.128-135 / 2015
  • Purpose: Since the National Health Insurance Service (NHIS) began to cover home oxygen therapy (HOT) in 2006, the new service has been expected to contribute to positive outcomes for patients with chronic obstructive pulmonary disease (COPD). We examined whether the use of HOT has helped slow the progression of COPD. Methods: We examined hospital claims data (N=10,798) for COPD inpatients treated in 2007~2012. We performed chi-square tests to analyze differences in changes to respiratory impairment grades. Multiple logistic regression analysis was used to identify factors associated with the use of HOT. Finally, a generalized linear mixed model was used to examine the association between HOT treatment and changes to respiratory impairment grades. Results: A total of 2,490 patients had grade 1 respiratory impairment, and 8,308 patients had grade 2 or 3. The OR for use of HOT was lower in grade 3 patients than in others (OR: 0.33, 95% CI: 0.30~0.37). For maintenance or mitigation of impairment across all grades, HOT users had a higher OR than non-users (OR: 1.41, 95% CI: 1.23~1.61). Conclusion: HOT was effective in maintaining or mitigating respiratory impairment in COPD patients.
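Odds ratios with 95% confidence intervals like those reported can be computed from a 2×2 table with the standard log-OR formula. A standard-library sketch; the counts below are invented for illustration and merely chosen to land near an OR of 1.41:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table:
        a = exposed & event,    b = exposed & no event
        c = unexposed & event,  d = unexposed & no event"""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: HOT users vs. non-users, maintained/mitigated vs. worsened
or_, lo, hi = odds_ratio_ci(820, 410, 930, 655)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}~{hi:.2f})")
```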

Analysis of Contrast Medium Dilution Rate for changes in Tube Current and SOD, which are Parameters of Lower Limb Angiography Examination (하지 혈관조영검사 시 매개변수인 관전류와 SOD에 변화에 대한 조영제 희석률 분석)

  • Kong, Chang gi;Han, Jae Bok
    • Journal of the Korean Society of Radiology / v.14 no.5 / pp.603-612 / 2020
  • This study examines how tube current (mA) and SOD (source-to-object distance), both parameters of lower-limb angiography, interact with the dilution rate of contrast medium at three concentrations (300, 320, 350) to affect the image. Using a 3 mm vessel-model water phantom custom made to the diameter of peripheral vessels, we measured how changes in tube current, SOD, and contrast medium dilution at each concentration affected SNR and CNR values, using the coefficient of variation (CV < 10) as the consistency criterion. SNR and CNR were measured with Image J 1.50i from the NIH (National Institutes of Health, USA). MPV (mean pixel value) and SD (standard deviation) were obtained by numerically verifying the image signal for the region of interest (ROI) and background on the phantom from DICOM (Digital Imaging and Communications in Medicine) 3.0 files transmitted to PACS. For dilution under changing tube current, comparing 146 mA and 102 mA, the coefficient of variation for both SNR and CNR remained below 10 down to a CM:N/S dilution of 30%:70%, but reached 10 or more at dilutions of 20%:80% to 10%:90%. For dilution by concentration under changing SOD, comparing SODs of 32.5 cm and 22.5 cm, the coefficient of variation for both SNR and CNR likewise remained below 10 down to a CM:N/S dilution of 30%:70% but reached 10 or more at 20%:80% to 10%:90%.
Comparing SODs of 32.5 cm and 12.5 cm gave the same result: the coefficient of variation for both SNR and CNR was below 10 down to a CM:N/S dilution of 30%:70% and was 10 or more at 20%:80% to 10%:90%. In conclusion, for peripheral angiography of the lower extremities and other interventional procedures, setting a low tube current, placing the table as close as possible to the image receptor, and diluting contrast medium of concentration 300 to a CM:N/S ratio of 30%:70% is suggested as the most efficient way to obtain images of appropriate contrast while reducing both the burden on the kidneys and radiation exposure.
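The SNR, CNR, and coefficient-of-variation metrics used in this study can be computed directly from ROI statistics. A standard-library sketch with hypothetical pixel values:

```python
import statistics

def snr(roi_mean, bg_sd):
    """Signal-to-noise ratio: ROI mean pixel value over background SD."""
    return roi_mean / bg_sd

def cnr(roi_mean, bg_mean, bg_sd):
    """Contrast-to-noise ratio: ROI-background contrast over background SD."""
    return (roi_mean - bg_mean) / bg_sd

def coefficient_of_variation(values):
    """CV in percent; this study treats CV < 10 as acceptable consistency."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated SNR measurements for one dilution step
snr_runs = [41.2, 39.8, 40.5, 42.1, 40.9]
print(f"SNR CV = {coefficient_of_variation(snr_runs):.1f}%")
```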

The Factors Influencing the Asthenopia of Emmetropia with Phoria (사위를 가진 정시안의 안정피로에 영향을 미치는 요인)

  • Kim, Jung-Hee;Lee, Dong-Hee
    • Journal of Korean Ophthalmic Optics Society / v.10 no.1 / pp.71-82 / 2005
  • The aim of this study was to provide fundamental data on the factors influencing the asthenopia of emmetropes with phoria and on the alleviation of asthenopia. A total of 348 subjects aged 19 to 30 years who had no strabismus, eye disease, or systemic disease were examined using corrected visual acuity, corrected diopter, stereopsis, and suppression tests from September 2002 to September 2004. We excluded 21 subjects who had amblyopia affecting binocular vision or inaccurate data, leaving 327 subjects. We then individually measured refractive error correction, pupillary distance, optical center distance, phoria, convergence, accommodation, and the AC/A ratio, and assessed asthenopia during binocular vision using a questionnaire. After analyzing the factors affecting asthenopia, we also examined the reductive effect of a prism on asthenopia in the subjects who had it. To determine the factors affecting asthenopia during binocular vision, statistical analyses were carried out using the chi-square test and a multivariate logistic regression model. The results were as follows. For asthenopia during near binocular vision in emmetropes with phoria, lower accommodation and convergence were associated with a significantly higher rate of asthenopia (p<0.001). A lower AC/A ratio was associated with a higher rate of asthenopia, though not significantly, and there was no association between phoria and asthenopia. In the multivariate logistic regression model, lower accommodation and convergence were again associated with a significantly higher rate of asthenopia; esophoria or higher exophoria and a lower-than-normal AC/A ratio were associated with higher rates of asthenopia, though not significantly, and there was no significant association between phoria or AC/A and asthenopia.
Therefore, accommodation and convergence could be predictive factors for asthenopia during near binocular vision. When a prism was used in subjects with asthenopia during near binocular vision, the symptoms were relieved in up to 74.2% of emmetropes with phoria.
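The chi-square test applied here can be sketched for a 2×2 table without any external library. The counts below are hypothetical:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical table: low vs. normal convergence x asthenopia present/absent
stat = chi_square_2x2(60, 40, 45, 182)
print(f"chi-square = {stat:.2f}")  # compare against the 3.84 cutoff (df=1, p=0.05)
```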


Earth Pressure on the Braced Wall in the Composite Ground Depending on the Depth and the Joint Dips of the Base Rocks under the Soil Strata (복합지반 굴착 시 기반암의 깊이와 절리경사에 따라 흙막이벽체에 작용하는 토압)

  • Bae, Sang Su;Lee, Sang Duk
    • Journal of the Korean Geotechnical Society / v.32 no.10 / pp.41-53 / 2016
  • The stability of a braced earth wall in composite ground composed of jointed base rock and soil strata depends on the earth pressure acting on it. In most cases, the earth pressure is calculated by an empirical method in which the base rock is treated as a soil stratum with the shear strength parameters of the rock. In this case the effect of the joint dips of the jointed base rock is ignored, so the calculated earth pressure is smaller than the actual earth pressure. In this study, the magnitude and distribution of the earth pressure acting on a braced wall in composite ground were studied experimentally as functions of the joint dips of the base rock and the ratio of soil strata to base rock. Two-dimensional large-scale model tests were conducted in a large-scale test facility (height 3.0 m, length 3.0 m, width 0.5 m) with 10 supports installed at a scale of 1/14.5. The test ground was prepared with soil-to-rock ratios of 65%:35% and 50%:50% and with joint dips of 0°, 30°, 45°, and 60° for each base rock layer. Finite element analyses were then performed under the same conditions. As a result, the earth pressure on the braced wall increased as the joint dips of the base rock layer became steeper, and the earth pressure at the rock layer increased as the rock ratio became larger. The largest earth pressure was measured when the base rock ratio was 50% (R50) and the joint dip was 60°. Based on these results, a formula for calculating the earth pressure in composite ground could be suggested. The distribution of earth pressure was idealized in a quadrangular form, in which the magnitude and position of the peak earth pressure depended on the rock ratio and the joint dips.
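The empirical approach criticized above typically starts from a classical coefficient such as Rankine's, which ignores joint dip entirely. A sketch of the active earth pressure it yields (soil parameters hypothetical), as a baseline against which the measured increase with joint dip can be compared:

```python
import math

def rankine_ka(phi_deg):
    """Rankine active earth pressure coefficient: Ka = tan^2(45 - phi/2)."""
    return math.tan(math.radians(45.0 - phi_deg / 2.0)) ** 2

def active_pressure(gamma, depth, phi_deg):
    """Active lateral earth pressure at depth z: p = Ka * gamma * z."""
    return rankine_ka(phi_deg) * gamma * depth

gamma = 18.0  # unit weight of soil, kN/m^3 (hypothetical)
phi = 30.0    # internal friction angle, degrees (hypothetical)
for z in (2.0, 5.0, 10.0):
    print(f"z = {z:>4} m -> p = {active_pressure(gamma, z, phi):.1f} kPa")
```

Rankine's distribution is triangular in depth; the quadrangular distribution proposed in this study, with its dip- and rock-ratio-dependent peak, is precisely what this classical baseline cannot capture.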