• Title/Summary/Keyword: Korean validation

Search Results: 5,989

Investigation of Automated Neonatal Hearing Screening for Early Detection of Childhood Hearing Impairment (소아 난청의 조기진단을 위한 신생아 청력 선별검사에 대한 평가)

  • Seo, Jeong Il;Yoo, Si Uk;Gong, Sung Hyeon;Hwang, Gwang Su;Lee, Hyeon Jung;Kim, Joong Pyo;Choi, Hyeon;Lee, Bo Young;Mok, Ji Sun
    • Clinical and Experimental Pediatrics
    • /
    • v.48 no.7
    • /
    • pp.706-710
    • /
    • 2005
  • Purpose : Early diagnosis of congenital hearing loss through neonatal hearing screening minimizes language impairment. This study aimed to determine the frequency of congenital hearing loss in newborns through neonatal hearing screening and to communicate the importance of hearing tests for infants. Methods : From May 20, 2003 to May 19, 2004, newborns underwent an Automated Auditory Brainstem Response test at 35 dB within one month of birth. Infants who passed the first round of screening were classified into the 'pass' group, whereas those who did not were classified into the 'refer' group. Infants who did not pass the screening conducted within one month of birth were retested one month later, and those classified as 'refer' on the retest were referred to a hearing loss clinic for diagnostic confirmation of hearing loss. Results : There was no difference between the 'pass' and 'refer' groups in terms of mode of delivery, birth weight, or gestational age. In the first test, a total of 45 infants were classified into the 'refer' group. Six of the 35 infants who were retested (17%) did not pass the retest, and all six were diagnosed with congenital hearing loss. This corresponds to 0.35% (3.5 per 1,000) of the 1,718 subjects. Conclusion : In our study, congenital hearing loss occurred considerably more frequently than congenital metabolic disorders. Accordingly, neonatal hearing screening is strongly recommended for all newborns.

Study on Basic Requirements of Geoscientific Area for the Deep Geological Repository of Spent Nuclear Fuel in Korea (사용후핵연료 심지층처분장부지 지질환경 기본요건 검토)

  • Bae, Dae-Seok;Koh, Yong-Kwon;Park, Ju-Wan;Park, Jin-Baek;Song, Jong-Soon
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.10 no.1
    • /
    • pp.63-75
    • /
    • 2012
  • This paper presents basic requirements and preferences for the geological environmental conditions of a final deep geological repository for spent nuclear fuel (SNF). It also indicates how these requirements and preferences should be considered prior to the selection of candidate sites for site investigation, as well as for final disposal, in Korea. The results of the study are based on the knowledge and experience of the IAEA and NEA/OECD, as well as of countries with advanced SNF disposal programs. The study discusses and suggests preliminary guidelines for disposal requirements, covering the geological, mechanical, thermal, hydrogeological, chemical, and transport properties of the host rock, together with the long-term geological stability that governs the functions of a multi-barrier disposal system. To determine whether the requirements and preferences for a given parameter are satisfied at different stages of site selection and suitability assessment, quantitative criteria in each area should be formulated credibly through relevant research and development on the deep geological environment during the site screening and selection processes, as well as through specific studies such as the production of safety cases and validation studies using a generic underground research laboratory (URL) in Korea.

Development of A Two-Variable Spatial Leaf Photosynthetic Model of Irwin Mango Grown in Greenhouse (온실재배 어윈 망고의 위치 별 2변수 엽 광합성 모델 개발)

  • Jung, Dae Ho;Shin, Jong Hwa;Cho, Young Yeol;Son, Jung Eek
    • Journal of Bio-Environment Control
    • /
    • v.24 no.3
    • /
    • pp.161-166
    • /
    • 2015
  • To determine adequate levels of light intensity and $CO_2$ concentration for mango grown in greenhouses, quantitative measurements of photosynthetic rates at various leaf positions in the tree are required. The objective of this study was to develop two-variable leaf photosynthetic models of Irwin mango (Mangifera indica L. cv. Irwin) using light intensity and $CO_2$ concentration at different leaf positions. Leaf photosynthetic rates at different positions (top, middle, and bottom) were measured with a leaf photosynthesis analyzer at light intensities of 0, 50, 100, 200, 300, 400, 600, and $800{\mu}mol{\cdot}m^{-2}{\cdot}s^{-1}$ and $CO_2$ concentrations of 100, 400, 800, 1200, and $1600{\mu}mol{\cdot}mol^{-1}$. The two-variable model combined two leaf photosynthetic models expressed as negative exponential functions of light intensity and $CO_2$ concentration, respectively. The photosynthetic rates of top leaves saturated at a light intensity of $400{\mu}mol{\cdot}m^{-2}{\cdot}s^{-1}$, while those of middle and bottom leaves saturated at $200{\mu}mol{\cdot}m^{-2}{\cdot}s^{-1}$. The leaf photosynthetic rates did not reach saturation at a $CO_2$ concentration of $1600{\mu}mol{\cdot}mol^{-1}$. In validation of the model, the estimated photosynthetic rates at the top and bottom leaves agreed with the measured rates better than those at the middle leaves. Using the two-variable model, the optimal light intensity and $CO_2$ concentration for maximizing the photosynthetic rate of Irwin mango grown in greenhouses can be determined.
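
    The abstract names the model family (a product of negative exponential responses to light and CO2) but not the fitted equation or coefficients. The sketch below is a minimal illustration of one such assumed form; the function name, coefficient values, and the multiplicative structure are illustrative assumptions, not the paper's fitted model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def photo_rate(X, a_max, k_i, k_c, r_d):
        """Assumed two-variable leaf photosynthesis model: product of
        negative exponential responses to light (I, umol m-2 s-1) and
        CO2 (C, umol mol-1), minus dark respiration r_d."""
        I, C = X
        return a_max * (1.0 - np.exp(-k_i * I)) * (1.0 - np.exp(-k_c * C)) - r_d

    # Fabricated measurements for one leaf position, CO2 held at 400 umol mol-1.
    I_obs = np.array([0, 50, 100, 200, 300, 400, 600, 800], dtype=float)
    C_obs = np.full_like(I_obs, 400.0)
    A_obs = np.array([-1.0, 2.1, 4.0, 6.5, 7.6, 8.1, 8.3, 8.4])  # made-up rates

    popt, _ = curve_fit(photo_rate, (I_obs, C_obs), A_obs,
                        p0=[10.0, 0.01, 0.001, 1.0])
    print("fitted a_max, k_i, k_c, r_d:", popt)
    ```

    Fitting each leaf position (top, middle, bottom) separately would mirror the study's position-specific models.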

Image Color, Brightness, Saturation Similarity Validation Study of Emotion Computing (이미지 색상, 명도, 채도 감성컴퓨팅의 유사성 검증 연구)

  • Lee, Yean-Ran
    • Cartoon and Animation Studies
    • /
    • s.40
    • /
    • pp.477-496
    • /
    • 2015
  • People recognize different emotions in the same image depending on individual tendencies, and emotion computing, which evaluates emotion recognition objectively, is an active research area. Existing emotion computing research, however, has two main problems: emotion recognition is subjective and therefore imprecise, and the correlations between recognized emotions are unclear. This study therefore tests the regularity of image emotion using a controlled emotion computing system. Its purpose is to quantify recognized emotions by applying an objective image emotion computing method and to compare the computed emotions with those recognized by people, measuring their degree of similarity. The key feature of the image emotion computing system is that it calculates recognized emotions in numeric, digital form. The theoretical background is James A. Russell's Core Affect model, whose key advantage is that it digitizes emotion along two axes: a pleasure axis (X axis, pleasure-displeasure) and a tension axis (Y axis, tension-relaxation). Sixteen representative emotions associated with these axes are applied in the emotion computing: very happy, excited, elated, happy, contented, calm, relaxed, quiet, tired, helpless, depressed, sad, angry, stressed, anxious, and tense. The procedure of this study applies the key color elements of the image, namely hue, brightness, and saturation, as emotion attribute elements in the emotion computing formula. Importance weights are applied to the attribute calculation ratios, yielding a pleasure score (X axis) and a tension score (Y axis). The emotion scores are then expanded around the intersection point of the two axes, and the top five emotions by magnitude are selected as the representative main emotions. In addition, the emotions people recognize in an image are scored against the 16 representative emotions, and their top five are extracted. The main representative emotions from the computing system are compared with those recognized by people, and the degree of similarity is verified by the number of matching representative emotions. The results show an average concordance rate of 51% for the main emotions; that is, on average 2.5 of the five computed emotions matched those recognized by people. This similarity measurement between computed and recognized emotion provides objective criteria for emotion calculation. Future research should refine the importance weights and the emotion formula to improve the concordance rate.
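
    As a rough illustration of the pipeline just described, the sketch below maps an image's average hue, brightness, and saturation to pleasure/tension scores with importance weights and ranks the 16 representative emotions by proximity on the Core Affect plane. All coordinates, weights, and scoring formulas here are invented placeholders, not the paper's calibrated values.

    ```python
    import numpy as np

    # Hypothetical (pleasure, tension) coordinates for the 16 emotions on
    # Russell's Core Affect plane; the paper's calibrated positions differ.
    EMOTIONS = {
        "very happy": (0.9, 0.6), "excited": (0.7, 0.9), "elated": (0.8, 0.7),
        "happy": (0.8, 0.3), "contented": (0.6, -0.2), "calm": (0.4, -0.6),
        "relaxed": (0.5, -0.7), "quiet": (0.2, -0.8), "tired": (-0.3, -0.7),
        "helpless": (-0.6, -0.4), "depressed": (-0.8, -0.3), "sad": (-0.7, 0.1),
        "angry": (-0.8, 0.8), "stressed": (-0.6, 0.7), "anxious": (-0.5, 0.9),
        "tense": (-0.4, 0.8),
    }

    def emotion_scores(hue, brightness, saturation, w=(0.4, 0.3, 0.3)):
        """Map normalized HSV attributes (0..1) to pleasure (X) and tension (Y)
        scores in [-1, 1] using assumed importance weights w."""
        pleasure = (w[0] * (1 - abs(hue - 0.15) * 2)
                    + w[1] * (2 * brightness - 1) + w[2] * (2 * saturation - 1))
        tension = (w[0] * (2 * saturation - 1)
                   + w[1] * (1 - 2 * brightness) + w[2] * (abs(hue - 0.5) * 2 - 1))
        return np.clip(pleasure, -1, 1), np.clip(tension, -1, 1)

    def top_emotions(hue, brightness, saturation, k=5):
        """Rank the 16 emotions by distance to the computed (X, Y) point."""
        x, y = emotion_scores(hue, brightness, saturation)
        dist = {name: np.hypot(x - px, y - py)
                for name, (px, py) in EMOTIONS.items()}
        return sorted(dist, key=dist.get)[:k]

    print(top_emotions(hue=0.1, brightness=0.8, saturation=0.7))
    ```

    Comparing such a computed top-five list against a person's top-five ratings, as the study does, gives the concordance count (out of 5) reported above.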

The Effect of Consumer's Perceptual Characteristics for PB Products on Relational Continuance Intention: Mediated by Brand Trust and Brand Equity (PB상품에 대한 소비자의 지각특성이 관계지속의도에 미치는 영향: 브랜드신뢰 및 브랜드자산을 매개로 한 정책적 접근)

  • Lim, Chaekwan
    • Journal of Distribution Research
    • /
    • v.17 no.5
    • /
    • pp.85-111
    • /
    • 2012
  • Introduction : The purpose of this study was to examine the relationship between consumers' perceptual characteristics and intent of relational continuance for PB (Private Brand) products in discount stores. It was conducted as an empirical study based on a survey. For the empirical study, the factors of PB products perceived by consumers, namely perceived quality, store image, brand image, and perceived value, were derived from preceding studies. The effect of these factors on intent of relational continuance, mediated by brand trust and brand equity of PB products, was examined structurally. Research Model : Based on the theoretical analysis and hypotheses, a Structural Equation Model (SEM) was constructed; the research model is shown in Figure 1. Research Method : This paper is based on a qualitative study of selected literature and empirical data. The survey for the empirical study was carried out on consumers in Gyeonggi and Busan between January 2012 and May 2012. Of 300 distributed surveys, 253 (84.3%) were returned; after excluding omissions and insincere responses, 245 surveys (81.6%) were used as effective samples for the final analysis. Result : First, a reliability analysis was carried out on the instrument; all scales met the lower limit of 0.7 for Cronbach's alpha suggested by Hair et al. (1998). Construct validity was established through exploratory factor analysis with Varimax rotation, which yielded four factors for consumers' perceptual characteristics of PB products, two mediating factors, and one dependent factor. All constructs included in the research framework thus have acceptable validity and reliability. Table 1 shows the factor loading, eigenvalue, explained variance, and Cronbach's alpha for each factor. To assure the validity of the constructs, I implemented Confirmatory Factor Analysis (CFA) using AMOS 20.0. In CFA, the researcher controls the specification of indicators for each factor by hypothesizing that a specific factor loads on the relevant indicators, which makes CFA particularly useful for validating scales that measure specific constructs. The CFA results summarized in Table 2 show that the fit measures of all constructs meet the recommended levels and that all loadings are significant. To test the causal relationships between constructs in the research model, AMOS 20.0, which provides a graphical module for analyzing structural equation models, was used; the results of the hypothesis tests are shown in Table 3. As a result of the empirical study, perceived quality, brand image, and perceived value, as selected attributes of PB products, showed significantly positive (+) effects on brand trust and brand equity. Furthermore, brand trust and brand equity showed significantly positive (+) effects on intent of relational continuance. However, the store image of the discount stores selling the PB products had a positive (+) effect on brand trust and no significant effect on brand equity. Discussion : Based on these results, the relationship between the quality, store image, brand image, and value perceived by consumers for PB products and their intent of relational continuance was structurally verified as being mediated by brand trust and brand equity.
The results imply that, to maximize consumers' intent of relational continuance and to continuously attract repeat purchases, large discount stores need a strategic approach that maximizes brand trust and brand equity for PB products, on top of basic efforts to improve the quality, brand image, and value of the PB products themselves.
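
    Since the reliability test above hinges on Cronbach's alpha exceeding 0.7, a small sketch of the standard formula may help. The item responses here are fabricated; this reproduces only the textbook computation, not the paper's AMOS workflow.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, k_items) array:
        alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Fabricated 5-point Likert responses: 6 respondents x 4 items on one scale.
    sample = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
              [2, 3, 2, 3], [4, 4, 5, 4], [3, 2, 3, 3]]
    print(f"alpha = {cronbach_alpha(sample):.3f}")  # should exceed 0.7 to pass
    ```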


A Control Method for designing Object Interactions in 3D Game (3차원 게임에서 객체들의 상호 작용을 디자인하기 위한 제어 기법)

  • 김기현;김상욱
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.9 no.3
    • /
    • pp.322-331
    • /
    • 2003
  • As the complexity of a 3D game increases with the various factors of the game scenario, controlling the interrelations of the game objects becomes a problem. A game system therefore needs to coordinate the responses of the game objects, and it must control the animation behaviors of the game objects according to the game scenario. To produce realistic game simulations, the system has to include a structure for designing the interactions among the game objects. This paper presents a method for designing a dynamic control mechanism for the interaction of game objects in the game scenario. For this method, we propose a game agent system as a framework based on intelligent agents that make decisions using specific rules. The game agent system is used to manage environment data, to simulate the game objects, to control interactions among game objects, and to support a visual authoring interface through which various interrelations of the game objects can be defined. These techniques can handle the autonomy level of the game objects, the associated collision avoidance methods, and so on. They also enable coherent decision-making by the game objects in response to scene changes. In this paper, rule-based behavior control was designed to guide the simulation of the game objects; the rules are pre-defined by the user through the visual interface for designing their interactions. The Agent State Decision Network, which is composed of visual elements, passes information and infers the current state of the game objects. All of these methods can monitor and check changes in the motion states of game objects in real time. Finally, we present a validation of the control method together with a simple case-study example.
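
    To make the rule-based control concrete, here is a minimal sketch in which user-defined rules map an object's observed situation to its next state. The class names, rule format, and states are illustrative assumptions; the paper's Agent State Decision Network is a visual authoring construct, not this code.

    ```python
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Rule:
        """A pre-defined rule: if condition(percepts) holds, move to next_state."""
        condition: Callable[[Dict], bool]
        next_state: str

    class AgentStateDecider:
        """Toy state decider: evaluates the rules for the current state in
        order and transitions on the first match (assumed semantics)."""
        def __init__(self, initial_state: str):
            self.state = initial_state
            self.rules: Dict[str, List[Rule]] = {}

        def add_rule(self, state: str, rule: Rule):
            self.rules.setdefault(state, []).append(rule)

        def step(self, percepts: Dict) -> str:
            for rule in self.rules.get(self.state, []):
                if rule.condition(percepts):
                    self.state = rule.next_state
                    break
            return self.state

    # Hypothetical usage: an NPC that flees when the player gets too close.
    npc = AgentStateDecider("patrol")
    npc.add_rule("patrol", Rule(lambda p: p["player_dist"] < 5.0, "flee"))
    npc.add_rule("flee", Rule(lambda p: p["player_dist"] > 20.0, "patrol"))
    print(npc.step({"player_dist": 3.0}))   # -> "flee"
    ```

    Stepping every agent once per frame against shared environment data would correspond to the real-time monitoring the paper describes.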

Development of a Detection Model for the Companies Designated as Administrative Issue in KOSDAQ Market (KOSDAQ 시장의 관리종목 지정 탐지 모형 개발)

  • Shin, Dong-In;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.157-176
    • /
    • 2018
  • The purpose of this research is to develop a detection model for companies designated as administrative issue in the KOSDAQ market using financial data. Administrative issue designation flags companies at high risk of delisting and gives them time, under certain restrictions of the Korean stock market, to resolve the grounds for delisting. It acts as an alarm that informs investors and market participants which companies are likely to be delisted and warns them to invest safely. Despite this importance, there are relatively few studies on administrative issue prediction models compared with the many studies on bankruptcy prediction models. This study therefore develops and verifies a detection model for companies designated as administrative issue using financial data of KOSDAQ companies. Logistic regression and decision trees are proposed as the data mining models for detecting administrative issues. According to the results of the analysis, the logistic regression model predicted the designated companies using three variables, ROE (earnings before tax), cash flows/shareholders' equity, and asset turnover ratio, with an overall accuracy of 86% on the validation dataset. The decision tree (Classification and Regression Trees, CART) model applied classification rules using cash flows/total assets and ROA (net income), and its overall accuracy reached 87%. The implications of the financial indicators selected by our logistic regression and decision tree models are as follows. First, ROE (earnings before tax) in the logistic detection model reflects the profit and loss of the business segments that will continue, excluding the revenue and expenses of discontinued businesses; a weakening of this variable therefore means that the competitiveness of the core business is weakening. If a large part of profits comes from one-off gains, the deterioration of business management is very likely to intensify further, so a significant decrease in a KOSDAQ company's ROE makes delisting highly likely. Second, cash flows to shareholders' equity represents the firm's ability to generate cash flow with the financial condition of its subsidiaries excluded. In other words, a weakening of the management capacity of the parent company itself, apart from its subsidiaries' competence, can be a main driver of administrative issue designation. Third, a low asset turnover ratio means that current and non-current assets are used ineffectively by the corporation, or that its asset investment is excessive. If the asset turnover ratio of a KOSDAQ-listed company decreases, corporate activities need to be examined in detail from various perspectives, such as weakening sales or changes in inventories. Cash flows/total assets, the variable selected by the decision tree detection model, is a key indicator of the company's cash condition and its ability to generate cash from operating activities. Cash flow indicates whether a firm can perform its main activities (maintaining its operating capability, repaying debts, paying dividends, and making new investments) without relying on external financial resources; a negative (-) value of this indicator therefore signals serious problems in business activities.
If the cash flow from operating activities of a company is smaller than its net profit, the net profit has not been converted into cash, indicating serious problems in managing the company's trade receivables and inventory assets. It can thus be understood that as cash flows/total assets decreases, the probability of administrative issue designation, and of delisting, increases. In summary, the logistic regression-based detection model developed in this study is driven by indicators of the company's financial activities, including ROE (earnings before tax), whereas the decision tree-based detection model predicts the designation based on the company's cash flows.
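
    The abstract names the selected ratios but not the fitted coefficients or tree splits, so the sketch below only illustrates the two-model setup with scikit-learn on fabricated data; the variable values and the label-generating rule are placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 500
    # Fabricated financial ratios: ROE (pre-tax), cash flows/equity, asset turnover.
    X = rng.normal(size=(n, 3))
    # Fabricated label: designation more likely when all three ratios are low.
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) < -1.5).astype(int)

    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

    logit = LogisticRegression().fit(X_tr, y_tr)
    cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

    print("logit validation accuracy:", logit.score(X_va, y_va))
    print("CART validation accuracy:", cart.score(X_va, y_va))
    ```

    The study's reported 86%/87% accuracies refer to its own KOSDAQ dataset; this toy setup only mirrors the train/validate workflow.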

Improvement of Mid- and Low-flow Estimation Using Variable Nonlinear Catchment Wetness Index (비선형 유역습윤지수를 이용한 평갈수기 유출모의개선)

  • Hyun, Sukhoon;Kang, Boosik;Kim, Jin-Gyeom
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.36 no.5
    • /
    • pp.779-789
    • /
    • 2016
  • Effective rainfall is calculated with soil moisture taken into account: soil moisture is either incorporated into the rainfall-runoff model directly from observed data or calculated indirectly within the model. The rainfall-runoff model used in this study, IHACRES, first computes a catchment wetness index (CWI) that varies with temperature and uses it to estimate precipitation loss. The nonlinear relationship between the CWI and effective rainfall in the Hapcheon Dam watershed was derived and used for long-term runoff calculation. The effects of variable versus constant CWI during calibration and validation were evaluated by flow regime. The results show that the variable CWI is generally more effective than the constant CWI. $R^2$ during high-flow periods was relatively higher than during normal- or low-flow periods, but the difference between the variable and constant CWI cases was insignificant, indicating that high flow is relatively insensitive to the evaporation and soil moisture associated with temperature. On the other hand, the variable CWI gives more desirable results during normal- and low-flow periods, which means that incorporating temperature-dependent evaporation and soil moisture is crucial for long-term continuous runoff simulation. The NSE tends to decrease during high-flow periods with high variability, which is expected because the NSE index is strongly influenced by outliers in the underlying variable. Nevertheless, the overall NSE remains in a satisfactory range above 0.9. Using a variable CWI during normal- and low-flow periods would improve long-term rainfall-runoff simulation.
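
    For readers unfamiliar with IHACRES, one common formulation of its non-linear loss module is sketched below: the catchment wetness index decays with a temperature-dependent time constant, and effective rainfall is rainfall scaled by the index. The parameter values, and this particular variant of the equations, are assumptions for illustration rather than the study's calibrated model.

    ```python
    import numpy as np

    def ihacres_effective_rainfall(rain, temp, c=0.01, tau_w=20.0, f=0.5,
                                   t_ref=20.0):
        """One common IHACRES loss-module form (assumed variant):
        tau_k = tau_w * exp(f * (t_ref - T_k))      temperature-dependent decay
        s_k   = c * r_k + (1 - 1/tau_k) * s_{k-1}   catchment wetness index
        u_k   = r_k * s_k                           effective rainfall
        """
        s = 0.0
        effective = np.zeros_like(rain, dtype=float)
        for k, (r, t) in enumerate(zip(rain, temp)):
            tau = max(tau_w * np.exp(f * (t_ref - t)), 1.0)  # keep tau >= 1
            s = c * r + (1.0 - 1.0 / tau) * s
            effective[k] = r * min(s, 1.0)  # cap index at 1 in this toy sketch
        return effective

    # Fabricated daily rainfall (mm) and temperature (deg C).
    rain = np.array([0.0, 12.0, 5.0, 0.0, 0.0, 30.0, 8.0])
    temp = np.array([22.0, 21.0, 19.0, 25.0, 26.0, 18.0, 20.0])
    print(ihacres_effective_rainfall(rain, temp))
    ```

    The "constant CWI" case in the study corresponds to freezing the temperature dependence (a fixed tau), whereas the variable case lets tau respond to temperature as above.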

Estimation of Near Surface Air Temperature Using MODIS Land Surface Temperature Data and Geostatistics (MODIS 지표면 온도 자료와 지구통계기법을 이용한 지상 기온 추정)

  • Shin, HyuSeok;Chang, Eunmi;Hong, Sungwook
    • Spatial Information Research
    • /
    • v.22 no.1
    • /
    • pp.55-63
    • /
    • 2014
  • Near surface air temperature data, one of the essential factors in hydrology, meteorology, and climatology, have drawn substantial attention from various academic domains and societies. Meteorological observations, however, face strong spatio-temporal constraints owing to the limited number and distribution of stations over the earth's surface. To overcome these limits, many studies have sought to estimate near surface air temperature from satellite image data at regional or continental scales using simple regression methods. Alternatively, we applied various kriging methods, namely ordinary kriging, universal kriging, cokriging, and regression kriging, in search of an optimal estimation method, based on near surface air temperature data observed at automatic weather stations (AWS) in South Korea throughout 2010 (365 days) and MODIS land surface temperature (LST) data (MOD11A1, 365 images). Because of the high spatial heterogeneity, auxiliary data such as land cover and a DEM (digital elevation model) were also analyzed to account for factors that can affect near surface air temperature. Prior to the main estimation, we calculated the root mean square error (RMSE) of the differences between the 365 days of LST and AWS data by season and by land cover. The results show that the coefficient of variation (CV) of the RMSE by season is 0.86, whereas the CV by land cover is 0.00746; seasonal differences between LST and AWS data were thus greater than those by land cover. Seasonal RMSE was lowest in winter (3.72). A linear regression analysis of the relationship among the AWS, LST, and auxiliary data showed that the coefficient of determination was highest in winter (0.818) and lowest in summer (0.078), indicating a significant level of seasonal variation. Based on these results, we applied the kriging techniques to estimate near surface air temperature. Cross-validation of each kriging model gave accuracy measures of 1.71, 1.71, 1.848, and 1.630 for universal kriging, ordinary kriging, cokriging, and regression kriging, respectively. The estimates from regression kriging thus proved to be the most accurate among the compared kriging methods.
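
    As a pointer to how one of the compared methods looks in code, below is a minimal ordinary kriging sketch using the PyKrige library on fabricated station data. The variogram model, grid extent, and temperature values are placeholders, and the study itself compared four kriging variants, some with LST and other auxiliary covariates.

    ```python
    import numpy as np
    from pykrige.ok import OrdinaryKriging

    rng = np.random.default_rng(1)
    # Fabricated AWS station coordinates (lon, lat) and air temperatures (deg C).
    lon = rng.uniform(126.0, 129.5, 40)
    lat = rng.uniform(34.5, 38.5, 40)
    temp = 25.0 - 2.0 * (lat - 34.5) + rng.normal(scale=0.5, size=40)

    ok = OrdinaryKriging(lon, lat, temp, variogram_model="spherical")

    # Predict on a coarse grid; `ss` is the kriging variance at each node.
    grid_lon = np.linspace(126.0, 129.5, 30)
    grid_lat = np.linspace(34.5, 38.5, 30)
    z_pred, ss = ok.execute("grid", grid_lon, grid_lat)
    print(z_pred.shape, float(z_pred.mean()))
    ```

    Regression kriging, the best performer in the study, would first regress air temperature on covariates such as LST and elevation and then krige the regression residuals.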

Evaluation of the Measurement Uncertainty from the Standard Operating Procedures (SOP) of the National Environmental Specimen Bank (국가환경시료은행 생태계 대표시료의 채취 및 분석 표준운영절차에 대한 단계별 측정불확도 평가 연구)

  • Lee, Jongchun;Lee, Jangho;Park, Jong-Hyouk;Lee, Eugene;Shim, Kyuyoung;Kim, Taekyu;Han, Areum;Kim, Myungjin
    • Journal of Environmental Impact Assessment
    • /
    • v.24 no.6
    • /
    • pp.607-618
    • /
    • 2015
  • Five years have passed since the first set of environmental samples was taken in 2011 to represent various ecosystems, so that future generations can look back at the past environment. Those samples have been preserved cryogenically in the National Environmental Specimen Bank (NESB) at the National Institute of Environmental Research. Although a strict regulation (SOP, standard operating procedure) governs the whole sampling procedure to ensure that each sample represents its sampling area, the SOP has not been put to the test for validation. This question needs to be answered to clear any doubts about the representativeness and quality of the samples. In order to address it and verify the sampling practice set out in the SOP, the steps leading to a measurement, from sampling in the field to chemical analysis in the lab, were broken down to evaluate the uncertainty at each level. Of the eight species currently collected for cryogenic preservation in the NESB, pine tree samples from two different sites were selected for this study. Duplicate samples were taken from each site according to the sampling protocol, and duplicate analyses were then carried out on each discrete sample. The uncertainties were evaluated by robust ANOVA; two levels of uncertainty, one from the sampling practice and the other from the analytical process, were then combined to give the measurement uncertainty of a measured concentration of the measurand. As a result, it was confirmed that the sampling practice, not the analytical process, accounts for most of the measurement uncertainty. Based on this top-down approach to measurement uncertainty, the efficient way to ensure the representativeness of a sample is to increase the quantity of each discrete sample making up a composite sample, rather than to increase the number of discrete samples across the site. Furthermore, a cost-effective improvement in the confidence of the measurement can be expected from efforts to lower the sampling uncertainty, not the analytical uncertainty. For a composite sample to represent a sampling area, the variance within the site should be less than the difference from duplicate sampling; for that, the criterion $s^2_{geochem}$ (variance across the site) < $s^2_{samp}$ (variance at the sampling location) was proposed. In light of this criterion, the two representative samples for the two study areas passed the requirement. Conversely, whenever the variance among the sampling locations (i.e., across the site) is larger than the sampling variance, more sample increments must be added within the sampling area until the requirement for representativeness is achieved.
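
    The duplicate design above partitions measurement variance into sampling and analytical components. A minimal classical (non-robust) ANOVA sketch of that partition on fabricated data is given below; the study used robust ANOVA, which down-weights outliers, so this is a simplified stand-in.

    ```python
    import numpy as np

    # Fabricated balanced design: at each location, duplicate samples S1 and S2,
    # each analyzed twice. Columns: [s1a1, s1a2, s2a1, s2a2] (concentrations).
    data = np.array([
        [10.2, 10.4,  9.1,  9.3],
        [12.0, 11.7, 13.1, 12.9],
        [ 8.8,  9.0,  8.1,  8.4],
        [11.5, 11.2, 10.6, 10.9],
    ])

    # Analytical variance from within-sample analysis duplicates:
    # for duplicate differences d, Var(d) = 2*sigma^2, so s2 = sum(d^2)/(2N).
    d_anal = np.concatenate([data[:, 0] - data[:, 1], data[:, 2] - data[:, 3]])
    s2_anal = (d_anal ** 2).sum() / (2 * d_anal.size)

    # Sampling variance from between-sample differences of sample means,
    # corrected for the analytical contribution (each mean averages 2 analyses).
    d_samp = data[:, :2].mean(axis=1) - data[:, 2:].mean(axis=1)
    s2_between = (d_samp ** 2).sum() / (2 * d_samp.size)
    s2_samp = max(s2_between - s2_anal / 2, 0.0)

    s2_meas = s2_samp + s2_anal  # combined measurement variance
    print(f"s2_anal={s2_anal:.3f}, s2_samp={s2_samp:.3f}, s2_meas={s2_meas:.3f}")
    ```

    Comparing s2_samp against the variance among locations across the site implements the representativeness criterion stated in the abstract.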