
Establishment and Application of Molecular Genetic Techniques for Preimplantation Genetic Diagnosis of Osteogenesis Imperfecta (골형성부전증의 착상전 유전진단을 위한 분자유전학적 방법의 조건 확립과 적용)

  • Kim, Min-Jee;Lee, Hyoung-Song;Choi, Hye-Won;Lim, Chun-Kyu;Cho, Jae-Won;Kim, Jin-Young;Song, In-Ok;Kang, Inn-Soo
    • Clinical and Experimental Reproductive Medicine / v.35 no.2 / pp.99-110 / 2008
  • Objectives: Preimplantation genetic diagnosis (PGD) has become an established assisted reproductive technique for couples carrying genetic conditions that may affect their offspring. Osteogenesis imperfecta (OI) is an autosomal dominant disorder of connective tissue characterized by bone fragility and low bone mass. At least 95% of cases are caused by dominant mutations in COL1A1 or COL1A2. In this study, we report our experience and clinical outcomes with 5 PGD cycles for OI in two couples. Methods: Before clinical PGD, we assessed the amplification rate and allele drop-out (ADO) rate of an alkaline lysis and nested PCR protocol, using single lymphocytes from the heterozygous patients as pre-clinical diagnostic tests for OI. We performed 5 cycles of PGD for OI by nested PCR for the causative mutation loci, COL1A1 c.2452G>A and c.3226G>A, in case 1 and case 2, respectively. The PCR products were analyzed by agarose gel electrophoresis, restriction fragment length polymorphism (RFLP) analysis with the HaeIII restriction enzyme in case 1, and direct DNA sequencing. Results: We confirmed the causative mutation loci, COL1A1 c.2452G>A in case 1 and c.3226G>A in case 2. In the pre-clinical tests, the amplification rate was 94.2% and the ADO rate 22.5% in case 1, while in case 2 they were 98.1% and 1.9%, respectively. In case 1, a total of 34 embryos were analyzed and 31 embryos (91.2%) were successfully diagnosed over 3 PGD cycles. Eight of the 19 embryos diagnosed as unaffected were transferred across the 3 cycles; in the third cycle, pregnancy was achieved and a healthy baby was delivered without any complications in July 2005. In case 2, all 19 embryos (100.0%) were successfully diagnosed and 4 of the 11 unaffected embryos were transferred in 2 cycles. Pregnancy was achieved in the second cycle and a healthy baby was delivered in March 2008. The causative locus was confirmed as normal by amniocentesis and postnatal diagnosis. Conclusions: To our knowledge, these two cases are the first successful PGD for OI in Korea. Our experience provides a further demonstration that PGD is a reliable and effective clinical technique and a useful option for couples at high risk of transmitting a genetic disease.
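
As a quick illustration of the figures of merit quoted above, the amplification and ADO rates reduce to simple proportions over single-cell PCR results. The Python sketch below uses hypothetical counts chosen to land near the case 1 values; it is not the study's raw data.

```python
# Hypothetical sketch: computing amplification and allele drop-out (ADO) rates
# from single-cell PCR results, as in the pre-clinical workup described above.
# All counts are illustrative, not the study's raw data.

def pcr_rates(n_cells: int, n_amplified: int, n_both_alleles: int):
    """Return (amplification rate, ADO rate) as percentages.

    n_cells        -- single lymphocytes tested
    n_amplified    -- cells yielding a PCR product
    n_both_alleles -- amplified cells in which BOTH alleles of the
                      heterozygous locus were detected
    """
    amplification_rate = 100.0 * n_amplified / n_cells
    # ADO: one of the two alleles failed to amplify in an amplified cell
    ado_rate = 100.0 * (n_amplified - n_both_alleles) / n_amplified
    return amplification_rate, ado_rate

# Made-up counts roughly matching case 1 (94.2% amplification, ~22% ADO):
amp, ado = pcr_rates(n_cells=120, n_amplified=113, n_both_alleles=88)
print(f"amplification {amp:.1f}%, ADO {ado:.1f}%")
```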

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because the model was trained using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be appropriately derived. The approach can therefore provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although predicting corporate default risk with machine learning has recently been studied actively, model bias remains an issue because most studies make predictions with a single model. Given that a firm's default risk information is very widely used in the market and that sensitivity to differences in default risk is high, a stable and reliable valuation methodology and strict calculation standards are required. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience on credit ratings and of changes in future market conditions. This study reduces the bias of individual models by using stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information while preserving the advantage of machine learning-based default risk prediction models, namely short computation time. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts were constructed between the stacking ensemble model and each individual model. Because the Shapiro-Wilk normality test showed that none of the pairs followed normality, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts making up each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP and CNN models. In addition, this study provides a methodology that allows existing credit rating agencies to adopt machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models when calculating the final default probability. The stacking ensemble technique proposed in this study can also help design models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used as a resource to increase practical adoption by overcoming the limitations of existing machine learning-based models.
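
For concreteness, the sketch below shows a stacking setup and the paired tests described above using scikit-learn and SciPy. The synthetic data, the sub-model choices, and the Merton-style continuous target are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch (not the paper's exact pipeline): a stacking ensemble for
# default-risk regression, plus the normality / nonparametric tests described above.
import numpy as np
from scipy.stats import shapiro, wilcoxon
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))  # stand-in for the 160 financial columns
# Merton-style continuous default risk in (0, 1), synthesized for illustration:
y = 1 / (1 + np.exp(-X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=1000)))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("mlp", MLPRegressor(max_iter=1000, random_state=0))],
    final_estimator=LinearRegression(),
    cv=7,  # mirrors the paper's split of training data into seven pieces
)
stack.fit(X_tr, y_tr)

# Single best model for comparison, then paired tests on the forecast differences:
rf = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
diff = stack.predict(X_te) - rf.predict(X_te)
print("Shapiro-Wilk p:", shapiro(diff).pvalue)  # normality of paired differences
print("Wilcoxon p:", wilcoxon(diff).pvalue)     # nonparametric paired test
```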

An Empirical Study on Motivation Factors and Reward Structure for Users' Creative Contents Generation: Focusing on the Mediating Effect of Commitment (창의적인 UCC 제작에 영향을 미치는 동기 및 보상 체계에 대한 연구: 몰입에 매개 효과를 중심으로)

  • Kim, Jin-Woo;Yang, Seung-Hwa;Lim, Seong-Taek;Lee, In-Seong
    • Asia pacific journal of information systems / v.20 no.1 / pp.141-170 / 2010
  • User created content (UCC) is created and shared by ordinary users online. From the users' perspective, the growth of UCC has expanded the available means of communication, while from the business perspective UCC has created an environment in which an abundant amount of new content can be produced. Despite this outward quantitative growth, however, much UCC does not meet general users' expectations in terms of quality, as can be observed in pirated and user-copied content. The purpose of this research is to investigate effective methods for fostering the production of creative user-generated content. This study proposes two core elements believed to enhance content creativity, namely reward and motivation, as well as a mediating factor, users' commitment, which is expected to bridge increased motivation and content creativity. From this perspective, this research takes an in-depth look at how to construct the dimensions of reward and motivation in UCC services for creative content production, identified in three phases. First, three dimensions of rewards are proposed: the task dimension, the social dimension, and the organizational dimension. Task dimension rewards are related to the inherent characteristics of a task, such as writing blog articles and posting photos. Four concrete ways of providing task-related rewards in UCC environments are suggested in this study: skill variety, task significance, task identity, and autonomy. Social dimension rewards are related to the connected relationships among users. The organizational dimension consists of monetary payoff and recognition from others. Second, two types of motivation are suggested to be affected by the diverse reward schemes: intrinsic and extrinsic motivation. Intrinsic motivation occurs when people create new UCC content for its own sake, whereas extrinsic motivation occurs when people create new content for other purposes, such as fame and money. Third, commitment is suggested to work as an important mediating variable between motivation and content creativity. We believe commitment is especially important in online environments because it has been found to exert a stronger impact on Internet users than other relevant factors do. Two types of commitment are suggested in this study: emotional commitment and continuity commitment. Finally, content creativity is proposed as the final dependent variable. We provide a systematic method to measure the creativity of UCC content based on prior studies of creativity measurement; the method includes expert evaluation of blog pages posted by Internet users. To test the theoretical model, 133 active blog users were recruited to participate in a group discussion as well as a survey. They were asked to fill out a questionnaire on their commitment, motivation, and rewards for creating UCC content. At the same time, their creativity was measured by independent experts using the Torrance Tests of Creative Thinking. Finally, two independent raters visited the participants' blog pages and evaluated their content creativity using the Creative Products Semantic Scale. All the data were compiled and analyzed through structural equation modeling. We first conducted a confirmatory factor analysis to validate the measurement model, and found that the measures used in our study satisfied the requirements of reliability, convergent validity, and discriminant validity. Given that the measurement model was valid and reliable, we proceeded to a structural model analysis. The results indicated that all the variables in our model had more than adequate explanatory power in terms of R-square values. The study results identified several important reward schemes. First of all, skill variety, task significance, task identity, and autonomy were all found to have significant influences on the intrinsic motivation to create UCC content. The relationship with other users was found to strongly influence both intrinsic and extrinsic motivation. Finally, the opportunity to gain recognition for their UCC work had a significant impact on users' extrinsic motivation; contrary to our expectation, however, monetary compensation did not. Commitment was also found to be an important mediating factor between motivation and content creativity in the UCC environment: a fully mediated model had the highest explanatory power compared with the no-mediation and partially mediated models. This paper ends with the implications of these results. First, from a theoretical perspective, this study proposes and empirically validates commitment as an important mediating factor between motivation and content creativity, reflecting the characteristics of online environments in which UCC creation occurs voluntarily. Second, from a practical perspective, this study proposes several concrete reward factors germane to the UCC environment and estimates their effectiveness for content creativity. In addition to the quantitative results on the relative importance of the reward factors, this study also proposes concrete ways to provide these rewards in the UCC environment, based on FGI data collected after the participants finished answering the survey questions. Finally, from a methodological perspective, this study suggests and implements a way to measure UCC content creativity independently of the content generators' creativity, which can be used by future research on UCC creativity. In sum, this study proposes and validates important reward features and their relations to motivation, commitment, and content creativity in the UCC environment, which is believed to be one of the most important factors for the success of UCC and Web 2.0. As such, it can provide significant theoretical as well as practical bases for fostering creativity in UCC content.
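
Since the mediation result is the study's central structural finding, a minimal regression-based sketch of how such a pattern can be checked is given below. The variable names and simulated data are hypothetical, and the study itself used full structural equation modeling rather than this simplified OLS approach.

```python
# A minimal sketch (not the study's SEM) of checking a mediation structure like
# motivation -> commitment -> creativity with OLS regressions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
motivation = rng.normal(size=200)
commitment = 0.6 * motivation + rng.normal(scale=0.8, size=200)  # a-path
creativity = 0.5 * commitment + rng.normal(scale=0.8, size=200)  # b-path

# Full mediation is suggested when motivation's direct effect on creativity
# becomes non-significant once commitment is controlled for.
direct = sm.OLS(creativity, sm.add_constant(motivation)).fit()
mediated = sm.OLS(creativity,
                  sm.add_constant(np.column_stack([motivation, commitment]))).fit()
print(direct.params, direct.pvalues)      # motivation alone: significant
print(mediated.params, mediated.pvalues)  # with mediator: motivation shrinks
```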

Evaluation of Radiation Exposure to Nurse on Nuclear Medicine Examination by Use Radioisotope (방사성 동위원소를 이용한 핵의학과 검사에서 병동 간호사의 방사선 피폭선량 평가)

  • Jeong, Jae Hoon;Lee, Chung Wun;You, Yeon Wook;Seo, Yeong Deok;Choi, Ho Yong;Kim, Yun Cheol;Kim, Yong Geun;Won, Woo Jae
    • The Korean Journal of Nuclear Medicine Technology / v.21 no.1 / pp.44-49 / 2017
  • Purpose: Radiation exposure management has been strictly regulated for radiation workers, but there are only a few studies on the potential radiation exposure of non-radiation workers, especially nurses in general wards. The present study aimed to estimate the total exposure of general-ward nurses due to close contact with patients undergoing nuclear medicine examinations. Materials and Methods: Radiation exposure was determined using thermoluminescent dosimeters (TLD) and optically stimulated luminescence (OSL) dosimeters in 14 general-ward nurses from October 2015 to June 2016. The external radiation rate was measured immediately after injection and after examination at the skin surface and at 50 cm and 1 m distance from 50 patients (PET/CT, 20; bone scan, 20; myocardial SPECT, 10). From these measurements, the effective half-life and the total radiation exposure expected in nurses were calculated. The expected total exposure was then compared with the total exposures actually measured in nurses by TLD and OSL. Results: The mean and maximum radiation exposures of the 14 general-ward nurses were 0.01 and 0.02 mSv, respectively, in each measuring period. The external radiation rate after injection at the skin surface, 0.5 m, and 1 m from the patients was 376.0±25.2, 88.1±8.2, and 29.0±5.8 μSv/hr, respectively, for PET/CT; 206.7±56.6, 23.1±4.4, and 10.1±1.4 μSv/hr for bone scan; and 22.5±2.6, 2.4±0.7, and 0.9±0.2 μSv/hr for myocardial SPECT. After the examination, the external radiation rate at the skin surface, 0.5 m, and 1 m had decreased to 165.3±22.1, 38.7±5.9, and 12.4±2.5 μSv/hr, respectively, for PET/CT; 32.1±8.7, 6.2±1.1, and 2.8±0.6 μSv/hr for bone scan; and 14.0±1.2, 2.1±0.3, and 0.8±0.2 μSv/hr for myocardial SPECT. From these results an effective half-life was calculated, and, at 30 minutes after examination, the time to reach the dose limit in the 'Nuclear Safety Act' was calculated conservatively, without considering the half-life. In order of distance (skin surface, 0.5 m, and 1 m from the patient), it was 7.9, 34.1, and 106.8 hr for PET/CT; 40.4, 199.5, and 451.1 hr for bone scan; and 62.5, 519.3, and 1313.6 hr for myocardial SPECT. Conclusion: The radiation exposure rate may differ slightly depending on the work process and the general-ward environment. The exposure rate was measured at each step of the general examination procedure, which makes our results more reliable. Our results clearly show that the total radiation exposure caused by residual radioactive isotopes in the patient's body was negligible, even compared with natural radiation exposure. In conclusion, general-ward nurses were exposed to far less than the dose limit, and the effect of exposure from contact with patients undergoing nuclear medicine examinations was negligible.
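
The two calculations behind these results, an effective half-life from two dose-rate measurements and a conservative constant-rate time to a dose limit, are easy to reproduce. The sketch below uses the PET/CT skin-surface rates quoted above, with the interval between the two measurements assumed purely for illustration.

```python
# Sketch of the two calculations described above, with hedged example numbers
# (the paper's exact measurement interval is not reproduced here).
import math

def effective_half_life(rate1_uSv_h, rate2_uSv_h, hours_between):
    """Effective half-life (h) from two external dose-rate measurements."""
    return math.log(2) * hours_between / math.log(rate1_uSv_h / rate2_uSv_h)

def hours_to_dose_limit(rate_uSv_h, limit_mSv=1.0):
    """Conservative time to reach a dose limit, ignoring decay (constant rate)."""
    return limit_mSv * 1000.0 / rate_uSv_h

# PET/CT skin-surface rates from the abstract: 376.0 uSv/hr just after
# injection, 165.3 uSv/hr after the examination (interval assumed ~2 h):
print(effective_half_life(376.0, 165.3, 2.0))  # -> ~1.7 h effective half-life
print(hours_to_dose_limit(165.3))              # -> ~6 h to 1 mSv at constant rate
```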

Converting Ieodo Ocean Research Station Wind Speed Observations to Reference Height Data for Real-Time Operational Use (이어도 해양과학기지 풍속 자료의 실시간 운용을 위한 기준 고도 변환 과정)

  • BYUN, DO-SEONG;KIM, HYOWON;LEE, JOOYOUNG;LEE, EUNIL;PARK, KYUNG-AE;WOO, HYE-JIN
    • The Sea: JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.23 no.4 / pp.153-178 / 2018
  • Most operational uses of wind speed data require measurements at, or estimates generated for, the reference height of 10 m above mean sea level (AMSL). On the Ieodo Ocean Research Station (IORS), wind speed is measured by instruments installed on the lighthouse tower of the roof deck at 42.3 m AMSL. This preliminary study indicates how these data can best be converted into synthetic 10 m wind speed data for operational uses via the Korea Hydrographic and Oceanographic Agency (KHOA) website. We tested three well-known conventional empirical neutral wind profile formulas (a power law (PL); a drag coefficient based logarithmic law (DCLL); and a roughness height based logarithmic law (RHLL)), and compared their results to those generated using a well-known, highly tested and validated logarithmic model (LMS) with a stability function (ψν), to assess the potential of each method for accurately synthesizing reference level wind speeds. From these experiments, we conclude that the reliable LMS technique and the RHLL technique are both useful for generating reference wind speed data from IORS observations, since these methods produced very similar results: comparisons between the RHLL and the LMS results showed relatively small bias values (−0.001 m s⁻¹) and Root Mean Square Deviations (RMSD, 0.122 m s⁻¹). We also compared the synthetic wind speed data generated using each of the four neutral wind profile formulas under examination with Advanced SCATterometer (ASCAT) data. Comparisons revealed that the 'LMS without ψν' produced the best results, with only 0.191 m s⁻¹ of bias and 1.111 m s⁻¹ of RMSD. As well as comparing these four different approaches, we also explored potential refinements that could be applied within or through each approach. Firstly, we tested the effect of tidal variations in sea level height on wind speed calculations, through comparison of results generated with and without the adjustment of sea level heights for tidal effects. Tidal adjustment of the sea levels used in reference wind speed calculations resulted in remarkably small bias (<0.0001 m s⁻¹) and RMSD (<0.012 m s⁻¹) values when compared to calculations performed without adjustment, indicating that this tidal effect can be ignored for the purposes of IORS reference wind speed estimates. We also estimated surface roughness heights (z₀) based on RHLL and LMS calculations in order to explore the best parameterization of this factor, with results leading to our recommendation of a new z₀ parameterization derived from observed wind speed data. Lastly, we suggest the necessity of including a suitable, experimentally derived surface drag coefficient and z₀ formulas within conventional wind profile formulas for situations characterized by strong wind (≥33 m s⁻¹) conditions, since without this inclusion the wind adjustment approaches used in this study are only optimal for wind speeds ≤25 m s⁻¹.
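
For reference, a minimal sketch of two of the neutral-profile conversions named above (PL and RHLL) from the 42.3 m observation height to 10 m is shown below; the exponent and roughness height are typical open-ocean textbook values, not the parameters fitted in this study.

```python
# Hedged sketch of two neutral wind-profile conversions discussed above.
# alpha and z0 are common open-sea assumptions, not the paper's fitted values.
import math

Z_OBS = 42.3   # IORS anemometer height (m AMSL)
Z_REF = 10.0   # reference height (m)

def power_law(u_obs, alpha=0.11):
    """Power-law (PL) conversion; alpha ~0.11 is a typical open-sea exponent."""
    return u_obs * (Z_REF / Z_OBS) ** alpha

def log_law(u_obs, z0=2.0e-4):
    """Roughness-height logarithmic law (RHLL); z0 ~0.2 mm over the sea."""
    return u_obs * math.log(Z_REF / z0) / math.log(Z_OBS / z0)

u = 12.0  # measured wind speed at 42.3 m, m/s
print(power_law(u), log_law(u))  # both give slightly lower 10 m speeds
```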

Comparative analysis of Glomerular Filtration Rate measurement and estimated glomerular filtration rate using 99mTc-DTPA in kidney transplant donors. (신장이식 공여자에서 99mTc-DTPA를 이용한 Glomerular Filtration Rate 측정과 추정사구체여과율의 비교분석)

  • Cheon, Jun Hong;Yoo, Nam Ho;Lee, Sun Ho
    • The Korean Journal of Nuclear Medicine Technology / v.25 no.2 / pp.35-40 / 2021
  • Purpose: The glomerular filtration rate (GFR) is an important indicator for the diagnosis, treatment, and follow-up of kidney disease and is also used in healthy individuals for drug dosing and for evaluating kidney function in donors. The gold-standard method measures GFR by continuous infusion of inulin, an extrinsic marker, but this takes a long time and the procedure is complicated, so the estimated glomerular filtration rate (eGFR), based on the serum creatinine concentration, is used instead. However, creatinine is known to be affected by age, gender, muscle mass, etc. eGFR formulas in current use include the Cockcroft-Gault formula, the modification of diet in renal disease (MDRD) formula, and the chronic kidney disease epidemiology collaboration (CKD-EPI) formula for adults; for children, the Schwartz formula is used. Measurement of GFR using 51Cr-EDTA (ethylenediaminetetraacetic acid) or 99mTc-DTPA (diethylenetriaminepentaacetic acid) can replace inulin and is currently in use. We therefore compared the GFR measured using 99mTc-DTPA with the eGFR obtained from the CKD-EPI formula. Materials and Methods: In 200 kidney transplant donors who visited Asan Medical Center (96 males, 104 females; 47.3±12.7 years old), GFR was measured using plasma samples (two-plasma-sample method, TPSM) obtained after intravenous administration of 99mTc-DTPA (0.5 mCi, 18.5 MBq). eGFR was derived using the CKD-EPI formula based on the serum creatinine concentration. Results: The mean GFR measured using 99mTc-DTPA in the 200 kidney transplant donors was 97.27±19.46 ml/min/1.73 m², the mean eGFR from the CKD-EPI formula was 96.84±17.74 ml/min/1.73 m², and the serum creatinine concentration was 0.84±0.39 mg/dL. The regression of 99mTc-DTPA GFR on serum creatinine-based eGFR was Y = 0.5073X + 48.186, with a correlation coefficient of 0.698 (P<0.01); the difference (%) was 1.52±18.28. Conclusion: The correlation between the 99mTc-DTPA GFR and the eGFR derived from serum creatinine was confirmed to be moderate. This is presumably because eGFR is affected by external factors such as age, gender, and muscle mass, and because the formulas were developed for kidney disease patients. Using 99mTc-DTPA, we can provide reliable GFR results for the diagnosis, treatment, and observation of kidney disease and for the kidney evaluation of kidney transplant patients.
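
For readers who want to reproduce the comparison, a sketch of the serum-creatinine-based CKD-EPI (2009) equation is given below; the race coefficient is omitted for brevity, and the example input simply reuses the cohort's mean creatinine, so the output is illustrative only.

```python
# Sketch of the serum-creatinine-based CKD-EPI (2009) eGFR formula compared
# above with the 99mTc-DTPA measurement; race coefficient omitted for brevity.
def ckd_epi_egfr(scr_mg_dl: float, age: int, female: bool) -> float:
    """eGFR in ml/min/1.73 m^2 from serum creatinine (mg/dL)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    return egfr * 1.018 if female else egfr

# e.g. the cohort's mean creatinine (0.84 mg/dL) for a 47-year-old male donor:
print(ckd_epi_egfr(0.84, 47, female=False))  # ~104 ml/min/1.73 m^2
```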

Improvement and Validation of an Analytical Method for Quercetin-3-O-gentiobioside and Isoquercitrin in Abelmoschus esculentus L. Moench (오크라 분말의 Quercetin-3-O-Gentiobioside 및 Isoquercitrin의 분석법 개선 및 검증)

  • Han, Xionggao;Choi, Sun-Il;Men, Xiao;Lee, Se-jeong;Jin, Heegu;Oh, Hyun-Ji;Cho, Sehaeng;Lee, Boo-Yong;Lee, Ok-Hwan
    • Journal of Food Hygiene and Safety / v.37 no.2 / pp.39-45 / 2022
  • This study aimed to improve and validate an analytical method for determining quercetin-3-O-gentiobioside and isoquercitrin in Abelmoschus esculentus L. Moench, for the standardization of ingredients in the development of functional health products. The analytical method was validated based on the ICH (International Conference on Harmonization) guidelines to verify its reliability and validity with respect to specificity, linearity, accuracy, precision, limit of detection, and limit of quantification. In the HPLC method, the peak retention times of the index components in the standard solution and in the A. esculentus L. Moench powder sample were consistent, as were their spectra, confirming specificity. The calibration curves of quercetin-3-O-gentiobioside and isoquercitrin were linear, with correlation coefficients near one (0.9999 and 0.9999), indicating high suitability for the analysis. Powder samples of known concentration were spiked with low, medium, and high concentrations of the standard substances to calculate precision and accuracy. The precision of quercetin-3-O-gentiobioside and isoquercitrin was determined intra-day and inter-day: the intra-day precision was 0.50-1.48% and 0.77-2.87%, and the inter-day precision 0.07-3.37% and 0.58-1.37%, respectively, indicating excellent precision below the 5% level. The intra-day accuracy of quercetin-3-O-gentiobioside and isoquercitrin was 104.87-109.64% and the inter-day accuracy 106.85-109.06%, reflecting a high level of accuracy. The detection limits of quercetin-3-O-gentiobioside and isoquercitrin were 0.24 μg/mL and 0.16 μg/mL, respectively, and the quantitation limits were 0.71 μg/mL and 0.49 μg/mL, confirming that detection was valid at low concentrations as well. The established analytical method was thus shown to perform well in the verification of specificity, linearity, precision, accuracy, detection limit, and quantitation limit. In addition, analysis of A. esculentus L. Moench powder samples using the validated method found 1.49±0.01 mg/g dry weight of quercetin-3-O-gentiobioside and 1.39±0.01 mg/g dry weight of isoquercitrin. This study verifies that the simultaneous analysis of quercetin-3-O-gentiobioside and isoquercitrin, the indicator compounds of A. esculentus L. Moench, is a scientifically reliable and suitable analytical method.
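
The detection and quantitation limits above follow the standard ICH approach, LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the calibration curve and S its slope. A sketch with illustrative calibration data:

```python
# Sketch of the ICH-style limit calculations behind the validation above:
# LOD = 3.3*sigma/S and LOQ = 10*sigma/S. The calibration data are hypothetical.
import numpy as np

conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0])        # standard concentrations, ug/mL
area = np.array([10.2, 20.5, 50.9, 101.8, 203.4])  # HPLC peak areas (made up)

slope, intercept = np.polyfit(conc, area, 1)        # linear calibration curve
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                       # n-2 dof for a linear fit

r = np.corrcoef(conc, area)[0, 1]                   # linearity check
print(f"r = {r:.4f}, LOD = {3.3 * sigma / slope:.3f} ug/mL, "
      f"LOQ = {10 * sigma / slope:.3f} ug/mL")
```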

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems / v.20 no.2 / pp.63-86 / 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information leakage is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, for example over the unrestricted availability of context information. These privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing: existing studies have focused on only a small subset of them, so there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify information privacy factors in most studies, despite the limits of users' knowledge and experience of context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Conducting a survey on the assumption that the participants have sufficient experience or understanding of the technologies shown in the survey may therefore not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of information privacy concern factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure for the Delphi study proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to assist them, some of the main factors found in the literature were presented. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. In analyzing the data, we focused on group consensus rather than individual insistence; to do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identity data were the most important main factor and sub-factor, respectively. Other important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns.
The traditional questionnaire method was not selected because, at the time, users absolutely lacked understanding of and experience with the new technology of context-aware personalized services. In the Delphi process, for understanding users' privacy concerns, the professionals selected context data collection, tracking and recording, and sensory networks as the most important technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, alongside the development of context-aware technology. The results of this study show, however, that in terms of users' privacy it is necessary to pay greater attention to the activities that acquire context information. To build on the evaluation of the sub-factors, additional studies will be necessary on approaches to reducing users' privacy concerns toward technological characteristics such as the highly identifiable level of identity data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services built around the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
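
The concordance analysis mentioned above is typically computed as Kendall's W; a minimal sketch with a hypothetical expert-by-factor rank matrix is given below.

```python
# Minimal sketch of the concordance analysis mentioned above: Kendall's W
# measures agreement among m experts ranking n factors (1 = most important).
# The rank matrix is hypothetical.
import numpy as np

ranks = np.array([   # rows: experts, columns: factors
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
])
m, n = ranks.shape
R = ranks.sum(axis=0)                 # rank sums per factor
S = ((R - R.mean()) ** 2).sum()       # spread of the rank sums
W = 12 * S / (m**2 * (n**3 - n))      # 0 = no agreement, 1 = perfect agreement
print(f"Kendall's W = {W:.3f}")
```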

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow the system to continue operating after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas cannot expand nodes when the stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, or document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance is carried out against a log data processing system that uses only MySQL, demonstrating the proposed system's superiority. Moreover, an optimal chunk size is confirmed through MongoDB log data insertion performance evaluations over various chunk sizes.
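
As a small illustration of the document-oriented storage the MongoDB module relies on, the sketch below inserts schema-free log documents and aggregates counts per log type with pymongo; the host, database, collection, and field names are assumptions, and a running local MongoDB instance is assumed.

```python
# Hedged sketch of the document-oriented storage described above: inserting
# schema-free log documents and aggregating counts per log type with pymongo.
# Host, database, and field names are assumptions, not the paper's deployment.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["unstructured"]

# Schema-free documents: each log can carry different fields without migration.
logs.insert_many([
    {"type": "transaction", "branch": "A01",
     "ts": datetime.now(timezone.utc), "raw": "..."},
    {"type": "login", "branch": "A01",
     "ts": datetime.now(timezone.utc), "raw": "..."},
])

# Per-type counts, the kind of summary the log graph generator module plots:
for row in logs.aggregate([{"$group": {"_id": "$type", "count": {"$sum": 1}}}]):
    print(row)
```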

The Effect of Corporate Association on the Perceived Risk of the Product (소비자의 제품 지각 위험에 대한 기업연상과 효과: 지식과 관여의 조절적 역활을 중심으로)

  • Cho, Hyun-Chul;Kang, Suk-Hou;Kim, Jin-Yong
    • Journal of Global Scholars of Marketing Science / v.18 no.4 / pp.1-32 / 2008
  • Brown and Dacin (1997) investigated the relationship between corporate associations and product evaluations. Their study focused on the effects of associations with a company's corporate ability (CA) and its corporate social responsibility (CSR) on consumers' product evaluations, and found that both CA and CSR associations influenced product evaluations, with CA associations having the stronger effect. Brown and Dacin (1997) noted, however, that there is little research on how corporate associations impact product responses. Accordingly, some researchers have sought variables that moderate or mediate the relationship between corporate associations and product responses. In particular, a few studies have tested the influence of reputation on product-relevant perceived risk, but the effects of the two types of corporate association on product-relevant perceived risk have not yet been identified. The primary goal of this article is to identify and empirically examine variables that moderate the effects of CA and CSR associations on the perceived risk of the product. We adopt the concept of corporate associations proposed by Brown and Dacin (1997): CA associations are those related to the company's expertise in producing and delivering its outputs, and CSR associations reflect the organization's status and activities with respect to its perceived societal obligations. This study defines risk as the uncertainty or potential loss regarding the product and the company that consumers take on in a particular purchase decision or after purchase. Risk is classified into product-relevant performance risk and financial risk: performance risk is the possibility or consequence of a product not functioning at some expected level, and financial risk is the monetary loss one perceives to be incurring if a product does not function at some expected level. With respect to consumer knowledge, expert consumers have extensive experience with or knowledge of the product, whereas novice consumers do not. The model tested in this article is shown in Figure 1: both CA and CSR associations influence performance risk and financial risk, and the effects of CA and CSR are moderated by product category knowledge (product knowledge) and product category involvement (product involvement). The relationships between the corporate associations and product-relevant perceived risk are hypothesized in the following form: for example, Hypothesis 1a (H1a) states that CA association has a positive influence on the performance risk perceived by consumers. Hypotheses identifying variables that moderate the effects of the two types of corporate association on perceived risk are also laid down; one interaction-effect hypothesis, Hypothesis 3a (H3a), states that consumers' knowledge of the product moderates the negative relationship between CA association and product-relevant performance risk. A field experiment was conducted to examine our model. The company tested was not real but fictitious, to ensure internal validity; water purifiers were used as the product.
Four scenarios describing the imaginary company were developed: Type A with both superior CA and superior CSR, Type B with superior CSR and inferior CA, Type C with superior CA and inferior CSR, and Type D with both inferior CA and inferior CSR. The respondents were divided into four groups, and each respondent received a questionnaire containing one of the four scenarios. Data were collected by means of a self-administered questionnaire from respondents chosen by convenience sampling. A total of 300 respondents filled out the questionnaire, of which 207 were used for further analysis. Table 1 indicates that the scales in this study are reliable, with Cronbach's α coefficients ranging from 0.85 to 0.92; the composite reliability is in the range of 0.85 to 0.92, and the average variance extracted is in the 0.72-0.98 range, above the 0.6 threshold. As shown in Table 2, the values for CFI, NNFI, root-mean-square error of approximation (RMSEA), and standardized root-mean-square residual (SRMR) are acceptably close to the standards suggested by Hu and Bentler (1999): .95 for CFI and NNFI, .06 for RMSEA, and .08 for SRMR. We also tested discriminant validity following Fornell and Larcker (1981) and, as shown in Table 2, found strong evidence of discriminant validity between each possible pair of latent constructs in all samples. Given that these overall goodness-of-fit indices were satisfactory, that the model was developed on theoretical bases, and that consistency across samples was high, we proceeded with the previously defined scales. We used moderated hierarchical regression analysis to test the influence of the corporate associations (CA and CSR) on product-relevant perceived risk (performance and financial risks) and to identify the variables moderating the relationship between corporate association and product-relevant performance risk. The dependent variables are performance and financial risk; CA and CSR associations are the independent variables; and the moderating variables are product category knowledge and product category involvement. As expected, the results show that CA association has a statistically significant influence on the perceived risk of the product, but CSR association does not. Product category knowledge and involvement moderate the relationship between CA association and the perceived risk of the product, whereas the effect of CSR association on perceived risk is not moderated by consumers' knowledge and involvement. Given this result, a company should communicate its CA associations to customers more than its CSR associations so that customers feel a reduction in perceived risk. The important theoretical contribution of this research is that the two types of corporate association proposed by Brown and Dacin (1997) and Brown (1998) replicated the difference in effects on product evaluation. According to Hunter (2001), establishing the validity of a particular finding is important, and roughly ten studies are needed before rigorous conclusions can be drawn. A further contribution of this study is the finding that the effects of corporate association on the perceived risk of the product vary with the moderator variables.
In particular, the moderating effect of knowledge on the relationship between corporate association and product-relevant perceived risk had not previously been tested in Korea. As a managerial implication, we suggest that a company should stress its ability to manufacture products well (CA association) more than its fulfillment of social obligations (CSR association). This study suffers from various limitations that imply future research directions. The moderating effects of product category knowledge and involvement on the relationship between corporate association and perceived risk need to be replicated. Future research could also explore whether perceived risk mediates the relationship between corporate association and consumers' product purchases. In addition, to ensure the external validity of the study, it will be necessary to use a real company rather than an artificial one.
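
To make the analysis concrete, the sketch below runs a moderated regression of the kind described above, with the interaction term carrying the moderation test; the variable names and simulated data are hypothetical.

```python
# Illustrative moderated regression in the spirit of the analysis above:
# perceived performance risk on CA association, product knowledge, and their
# interaction. Variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 207  # matches the usable sample size reported above
df = pd.DataFrame({
    "ca": rng.normal(size=n),          # CA association (centered)
    "knowledge": rng.normal(size=n),   # product category knowledge (centered)
})
# Simulated outcome with a negative CA effect moderated by knowledge:
df["perf_risk"] = (-0.4 * df["ca"]
                   - 0.2 * df["ca"] * df["knowledge"]
                   + rng.normal(size=n))

model = smf.ols("perf_risk ~ ca * knowledge", data=df).fit()
print(model.summary().tables[1])  # the ca:knowledge term tests the moderation
```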
