• Title/Summary/Keyword: ACCURACY

A Study on the Fabrication and Comparison of the Phantom for CT Dose Measurements Using 3D Printer (3D프린터를 이용한 CT 선량측정 팬텀 제작 및 비교에 관한 연구)

  • Yoon, Myeong-Seong; Kang, Seong-Hyeon; Hong, Soon-Min; Lee, Youngjin; Han, Dong-Koon
    • Journal of the Korean Society of Radiology, v.12 no.6, pp.737-743, 2018
  • The patient exposure dose test, one of the quality control items for computed tomography, is measured every year and recorded under the rules on the installation and operation of special medical equipment in Article 38 of the Medical Law. The CT-Dose phantom used for dosimetry measures dose accurately but has the disadvantage of a high price. In this study, the existing CT-Dose phantom was therefore reproduced with a 3D printer and compared with the original to examine its usefulness. To replicate the conventional CT-Dose phantom, an FFF-type 3D printer with PLA filament was used, and to calculate the CTDIw value, ion chambers were inserted into the peripheral and central parts, with ten measurements made at each position. The original CT-Dose phantom measured 30.44 ± 0.31 mGy at the periphery and 29.55 ± 0.34 mGy at the center, for a CTDIw of 30.14 ± 0.30 mGy; the phantom fabricated with the 3D printer measured 30.59 ± 0.18 mGy at the periphery and 29.01 ± 0.04 mGy at the center, for a CTDIw of 30.06 ± 0.13 mGy. Analysis with the Mann-Whitney U-test in the SPSS statistical program showed a statistically significant difference for the central values, but no statistically significant differences for the peripheral and CTDIw values. In conclusion, the CT-Dose phantom made with a 3D printer showed dose measurement performance comparable to the existing CT-Dose phantom, confirming the feasibility of low-cost phantom production with a 3D printer.
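
The CTDIw values reported above follow the standard weighted CT dose index definition, CTDIw = (1/3)·CTDI_center + (2/3)·CTDI_periphery. A minimal check of the abstract's figures under that standard definition:

```python
def ctdi_w(center_mGy: float, periphery_mGy: float) -> float:
    """Weighted CT dose index: 1/3 central + 2/3 peripheral (standard definition)."""
    return center_mGy / 3 + 2 * periphery_mGy / 3

# Values taken from the abstract above
print(round(ctdi_w(29.55, 30.44), 2))  # conventional phantom -> 30.14 mGy
print(round(ctdi_w(29.01, 30.59), 2))  # 3D-printed phantom  -> 30.06 mGy
```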

Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan; Seong, Nohyoon
    • Journal of Intelligence and Information Systems, v.24 no.4, pp.33-49, 2018
  • The impact of a TV drama's success on ratings and on channel promotion is very high, and its cultural and business impact has also been demonstrated through the Korean Wave. Early prediction of a blockbuster TV drama is therefore very important from the strategic perspective of the media industry. Previous studies have tried to predict drama ratings and success with various methods, but most made simple predictions based on intuitive factors such as the main actor and the time slot, which limits their predictive power. In this study, we propose a model for predicting the popularity of a drama by analyzing customers' viewing patterns on the basis of various theories. This is not only a theoretical contribution but also a practical one, since the model can be used by actual broadcasting companies. We collected data on 280 TV mini-series dramas broadcast over terrestrial channels during the 10 years from 2003 to 2012. From these data, we selected the 45 most highly ranked and the 45 least highly ranked dramas and analyzed their viewing patterns over 11 steps. The assumptions and conditions for modeling are based on existing studies, on the opinions of actual broadcasters, and on data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) with Euclidean and correlation methods; the sum of these distances is termed similarity in our study. Through this similarity measure, we predicted the success of dramas from the viewers' initial viewing-time pattern distribution over episodes 1~5. To confirm whether the model is sensitive to the choice of measurement method, various distance measures were applied and the model's robustness was checked, and once the model was established, a grid search was used to make it more predictive. Furthermore, we classified viewers who had watched more than 70% of a new drama's total airtime as "passionate viewers" and compared the percentage of passionate viewers between the most highly ranked and the least highly ranked dramas, so that the potential for a blockbuster TV mini-series can be determined. We find that the initial viewing-time pattern is the key factor for predicting blockbuster dramas: our model correctly classified blockbuster dramas with 75.47% accuracy from the initial viewing-time pattern analysis. This paper shows a high prediction rate while suggesting an audience rating method different from existing ones. Broadcasters currently rely heavily on a few famous actors, the so-called star system, and face more severe competition than ever due to the rising production costs of broadcasting programs, a long-term recession, and aggressive investment by comprehensive programming channels and large corporations, leaving everyone in a financially difficult situation. The basic revenue model of these broadcasters is advertising, whose execution is based on audience ratings as a basic index. The drama market carries demand uncertainty that makes forecasting difficult, given the nature of the commodity, while dramas contribute heavily to the financial success of a broadcasting company's content; minimizing the risk of failure is therefore essential.
Analyzing the distribution of early viewing time can thus provide practical help in establishing a company's response strategy (scheduling, marketing, story changes, etc.). We also found that audience behavior is crucial to the success of a program. In this paper, we treat viewing time as a measure of how enthusiastically a program is watched, and the success of a program can be predicted by calculating the loyalty of these passionate viewers. This way of calculating loyalty can also be used to calculate loyalty to various platforms, and for marketing programs such as highlights, script previews, making-of footage, characters, games, and other marketing projects.
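
A minimal sketch of the similarity measure described above, assuming each drama's early viewing pattern is a fixed-length vector of per-step viewing-time shares; the function names and numbers are illustrative, not the study's data:

```python
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two viewing-time pattern vectors."""
    return float(np.linalg.norm(a - b))

def correlation_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - Pearson correlation: small when two patterns move together."""
    return 1.0 - float(np.corrcoef(a, b)[0, 1])

def similarity(pattern: np.ndarray, reference_group: list) -> float:
    """The paper's 'similarity': the sum of distances to a reference group."""
    return sum(euclidean_distance(pattern, ref) for ref in reference_group)

# Illustrative 11-step viewing-time patterns (not actual study data)
hit_patterns = [np.array([0.90, 0.85, 0.82, 0.80, 0.81, 0.79, 0.80, 0.78, 0.80, 0.81, 0.80])]
new_drama    =  np.array([0.88, 0.84, 0.81, 0.80, 0.79, 0.77, 0.79, 0.78, 0.80, 0.80, 0.79])

print(similarity(new_drama, hit_patterns))  # smaller -> closer to the hit group
```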

Exploring the Factors Influencing on the Accuracy of Self-Reported Responses in Affective Assessment of Science (과학과 자기보고식 정의적 영역 평가의 정확성에 영향을 주는 요소 탐색)

  • Chung, Sue-Im; Shin, Donghee
    • Journal of The Korean Association For Science Education, v.39 no.3, pp.363-377, 2019
  • This study reveals aspects of subjectivity in test results, from a science-specific viewpoint, when science-related affective characteristics are assessed through self-report items. A science-specific response was defined as a response arising from a student's recognition of the nature or characteristics of science when his or her concepts or perceptions of science are being measured. We searched for cases in which science-specific responses interfere with the measurement objective or with accurate self-reporting. Errors due to science-specific factors were identified from quantitative data on 649 students in the first and second grades of high school and qualitative data from interviews with 44 students. The view of science and the characteristics of science that students internalize from everyday life and from science learning experiences interact with the items that make up the test tool. As a result, obstacles to accurate self-reporting were found in three aspects: the characteristics of science, personal science experience, and science in the tool. In terms of the characteristics of science, relating to its essential aspects, students respond to items regardless of the construct being measured because of their subjectively held views and perceived characteristics of science. The personal-science-experience factor, representing the learner's side, consists of the student's science motivation, interaction with science experiences, and perception of science and life. Finally, from the instrumental point of view, science in the tool leads to terminological confusion due to the uncertainty of science concepts and ultimately distances responses from accurate self-reports. The implications of the study are as follows: reviewing whether science-specific factors are included, taking precautions to clarify the construct being measured, checking science-specificity factors at the development stage, and making efforts to cross the boundary between everyday science and school science.

Comparative evaluation of marginal and internal fit of metal copings fabricated by various CAD/CAM methods (다양한 CAD/CAM 방식으로 제작한 금속하부구조물 간의 변연 및 내면 적합도 비교 연구)

  • Jeong, Seung-Jin; Cho, Hye-Won; Jung, Ji-Hye; Kim, Jeong-Mi; Kim, Yu-Lee
    • The Journal of Korean Academy of Prosthodontics, v.57 no.3, pp.211-218, 2019
  • Purpose: The purpose of the present study was to compare the accuracy of metal copings fabricated by four different CAD/CAM methods and to evaluate their clinical effectiveness. Materials and methods: A composite resin tooth of the maxillary central incisor was prepared for a metal ceramic crown and a duplicate metal die was fabricated. The metal die was then scanned 12 times with a confocal-microscopy-type oral scanner to obtain STL files. Metal copings with a thickness of 0.5 mm and a cement space of 50 μm were designed in a CAD program. The Co-Cr metal copings were fabricated by the following four methods: wax pattern milling and casting (WM), resin pattern 3D printing and casting (RP), milling and sintering (MS), and selective laser melting (SLM). The silicone replica technique was used to measure marginal and internal discrepancies. The data were statistically analyzed with one-way analysis of variance and the Scheffé post hoc test (α = .05). Results: Mean marginal discrepancy was significantly smaller in Group WM (27.66 ± 9.85 μm) and Group MS (28.88 ± 10.13 μm) than in Group RP (38.09 ± 11.14 μm). Mean cervical discrepancy was significantly smaller in Group MS than in Group RP. Mean axial discrepancy was significantly smaller in Groups WM and MS than in Groups RP and SLM. Mean incisal discrepancy was significantly smaller in Group RP than in all other groups. Conclusion: The marginal and axial discrepancies of the Co-Cr copings fabricated by wax pattern milling and by milling/sintering were smaller than those of the other groups. The marginal, cervical, and axial fit of the Co-Cr copings in all groups was within a clinically acceptable range.
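
A minimal sketch of the statistical comparison described above, run on illustrative per-group discrepancy samples; scipy provides the one-way ANOVA, and the Scheffé post hoc test is assumed to come from the scikit-posthocs package:

```python
import numpy as np
import pandas as pd
from scipy import stats
import scikit_posthocs as sp  # assumed dependency for the Scheffé test

# Illustrative marginal-discrepancy samples per fabrication method (μm),
# generated to resemble the reported group means; not the study's raw data.
rng = np.random.default_rng(42)
groups = {
    "WM":  rng.normal(27.66, 9.85, 12),
    "RP":  rng.normal(38.09, 11.14, 12),
    "MS":  rng.normal(28.88, 10.13, 12),
    "SLM": rng.normal(33.00, 10.00, 12),
}

# One-way ANOVA across the four groups
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Scheffé post hoc pairwise comparisons (alpha = .05)
df = pd.DataFrame([(g, v) for g, vals in groups.items() for v in vals],
                  columns=["group", "value"])
print(sp.posthoc_scheffe(df, val_col="value", group_col="group"))
```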

A Study on Phthalate Analysis of Nail Related Products (네일 관련 제품들의 프탈레이트 분석에 관한 연구)

  • Park, Sin-Hee; Song, Seo-Hyeon; Kim, Hyun-Joo; Cho, Youn-Sik; Kim, Ae-Ran; Kim, Beom-Ho; Hong, Mi-Yeun; Park, Sang-Hyun; Yoon, Mi-Hye
    • Journal of the Society of Cosmetic Scientists of Korea, v.45 no.3, pp.217-224, 2019
  • Phthalates, endocrine-disrupting chemicals, are similar in structure to sex hormones and mainly show reproductive and developmental toxicity. In this study, we analyzed 11 phthalates, including 3 phthalates prohibited in cosmetics and 8 phthalates regulated by the 'Common standards for children's products safety' and the EU cosmetics regulation (EC No. 1223/2009). The phthalate analysis was optimized using GC-MS/MS. In method validation, the method was satisfactory in specificity, linearity, recovery rate, accuracy, and MQL. We then used this method to analyze 82 nail cosmetics and polish products. Although six phthalates (DBP, BBP, DEHP, DPP, DIBP, and DIDP) were detected at concentrations of 1.0~59.8 μg/g, the products complied with Korean cosmetic standards. DIBP and DBP were detected at 1.1~2.6 μg/g in artificial nails, DBP and DEHP at 1.4~2.5 μg/g in nail glue, and DIBP, DBP, and DEHP at 2.5~33.3 μg/g in nail stickers. Although substances such as DBP and DEHP were detected in artificial nails, nail glue, and nail stickers, they complied with the 'Common safety standards for children's products'. DIBP is not a regulated substance in Korea but showed the third-highest detection rate, after DBP (84.6%) and DEHP (63.4%). The concentrations of phthalates detected in nail products are considered safe under current standards, but continuous monitoring and research on non-regulated substances should also be considered.

Detection Ability of Occlusion Object in Deep Learning Algorithm depending on Image Qualities (영상품질별 학습기반 알고리즘 폐색영역 객체 검출 능력 분석)

  • Lee, Jeong-Min; Ham, Geon-Woo; Bae, Kyoung-Ho; Park, Hong-Ki
    • Journal of the Korean Association of Geographic Information Studies, v.22 no.3, pp.82-98, 2019
  • The importance of spatial information is rising rapidly. In particular, 3D spatial information construction and modeling of real-world objects, as in smart cities and digital twins, has become an important core technology. The constructed 3D spatial information is used in various fields such as land management, landscape analysis, environment, and welfare services. Three-dimensional modeling with images achieves high visibility and realism of objects by generating texturing. However, textures inevitably contain occlusion areas caused by physical obstructions such as roadside trees, adjacent objects, vehicles, and banners at the time the images are acquired. Such occlusion areas are a major cause of degraded realism and accuracy in the constructed 3D model. Various studies have been conducted to resolve occlusion areas, and recently deep learning algorithms have been studied for detecting and resolving them. Deep learning requires sufficient training data, and the quality of the collected training data directly affects the performance and results of the learning. Therefore, this study analyzed the ability to detect occlusion areas in images of various qualities in order to verify how deep learning performance and results depend on the quality of the training data. Images containing occlusion-causing objects were generated for each artificially quantified image quality and applied to the implemented deep learning algorithm. The study found that for brightness adjustment, the detection ratio fell to 0.56 for brighter images, and that for pixel-size and artificial-noise adjustments, performance dropped rapidly once images were degraded beyond the middle level relative to the original image. In the F-measure evaluation, the change for noise-controlled image resolution was the largest, at 0.53 points. The occlusion-area detection ability by image quality can serve as a valuable criterion for practical applications of deep learning in the future, and securing a certain level of image quality at acquisition is expected to contribute substantially to such applications.
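
The F-measure cited above is the harmonic mean of precision and recall over detected occlusion regions. A minimal sketch of its computation (the detection counts are illustrative, not from the paper):

```python
def f_measure(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall for detection results."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative detection counts for one image-quality level
print(round(f_measure(tp=56, fp=20, fn=44), 2))  # 0.64
```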

Performance Evaluation of Reconstruction Algorithms for DMIDR (DMIDR 장치의 재구성 알고리즘 별 성능 평가)

  • Kwak, In-Suk; Lee, Hyuk; Moon, Seung-Cheol
    • The Korean Journal of Nuclear Medicine Technology, v.23 no.2, pp.29-37, 2019
  • Purpose: DMIDR (Discovery Molecular Imaging Digital Ready, General Electric Healthcare, USA) is a PET/CT scanner designed to allow application of PSF (point spread function), TOF (time of flight), and the Q.Clear algorithm. In particular, Q.Clear is a reconstruction algorithm that can overcome the limitations of OSEM (ordered subset expectation maximization) and reduce image noise at the voxel level. The aim of this paper is to evaluate the performance of the reconstruction algorithms and to optimize the algorithm combination for accurate SUV (standardized uptake value) measurement and lesion detectability. Materials and Methods: A PET phantom was filled with ¹⁸F-FDG at hot-to-background radioactivity concentration ratios of 2:1, 4:1, and 8:1, and scans were performed using the NEMA protocols. Scan data were reconstructed using combinations of (1) VPFX (VUE Point FX (TOF)), (2) VPHD-S (VUE Point HD + PSF), (3) VPFX-S (TOF + PSF), (4) QCHD-S-400 (VUE Point HD + Q.Clear (β-strength 400) + PSF), (5) QCFX-S-400 (TOF + Q.Clear (β-strength 400) + PSF), (6) QCHD-S-50 (VUE Point HD + Q.Clear (β-strength 50) + PSF), and (7) QCFX-S-50 (TOF + Q.Clear (β-strength 50) + PSF). CR (contrast recovery) and BV (background variability) were compared, and the SNR (signal-to-noise ratio) and RC (recovery coefficient) of counts and SUV were compared, respectively. Results: VPFX-S showed the highest CR value for the 10 and 13 mm spheres, and QCFX-S-50 showed the highest value for spheres larger than 17 mm. In the comparison of BV and SNR, QCFX-S-400 and QCHD-S-400 showed good results. The SUV measurements were proportional to the H/B ratio. RC for SUV was inversely proportional to the H/B ratio, and QCFX-S-50 showed the highest value; in addition, the Q.Clear reconstructions with a β-strength of 400 showed lower values. Conclusion: When a higher β-strength was applied, Q.Clear showed better image quality by reducing noise. Conversely, with a lower β-strength, Q.Clear showed increased sharpness and decreased PVE (partial volume effect), making it possible to measure SUV with a high RC compared with conventional reconstruction conditions. An appropriate choice among these reconstruction algorithms can improve accuracy and lesion detectability; for this reason, the algorithm parameters should be optimized according to the purpose.
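
The CR and BV metrics compared above are commonly defined in NEMA-style image quality analysis as the percent contrast of a hot sphere and the percent standard deviation of the background ROIs; those definitions are assumed here, and the ROI statistics are illustrative:

```python
def contrast_recovery(hot_mean: float, bkg_mean: float, activity_ratio: float) -> float:
    """Percent contrast of a hot sphere (NEMA NU 2-style definition, assumed)."""
    return 100 * (hot_mean / bkg_mean - 1) / (activity_ratio - 1)

def background_variability(bkg_sd: float, bkg_mean: float) -> float:
    """Background variability: percent SD of the background ROIs."""
    return 100 * bkg_sd / bkg_mean

# Illustrative ROI statistics for an 8:1 hot-to-background acquisition
print(round(contrast_recovery(hot_mean=6.2, bkg_mean=1.0, activity_ratio=8.0), 1))  # 74.3 %
print(round(background_variability(bkg_sd=0.05, bkg_mean=1.0), 1))                  # 5.0 %
```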

Effect of abutment superimposition process of dental model scanner on final virtual model (치과용 모형 스캐너의 지대치 중첩 과정이 최종 가상 모형에 미치는 영향)

  • Yu, Beom-Young; Son, Keunbada; Lee, Kyu-Bok
    • The Journal of Korean Academy of Prosthodontics, v.57 no.3, pp.203-210, 2019
  • Purpose: The purpose of this study was to verify the effect of the abutment superimposition process on the final virtual model when scanning single-crown and 3-unit bridge models with a dental model scanner. Materials and methods: Gypsum models for a single crown and a 3-unit bridge were manufactured for evaluation, and working casts with removable dies were made using the Pindex system. A dental model scanner (3Shape E1 scanner) was used to obtain a CAD reference model (CRM) and CAD test models (CTM). The CRM was scanned without removing the abutments after dividing them in the working cast; the CTM was then scanned with the divided abutments separated and superimposed on the CRM (n=20). Finally, three-dimensional analysis software (Geomagic Control X) was used to compute the root mean square (RMS) deviation, and the Mann-Whitney U test was used for statistical analysis (α = .05). Results: The mean RMS for single full-crown abutments was 10.93 μm, and the mean RMS for 3-unit bridge abutments was 6.9 μm; the difference between the two groups was statistically significant (P<.001). In addition, the positive and negative errors averaged 9.83 μm and -6.79 μm for single abutments and 6.22 μm and -3.3 μm for 3-unit bridge abutments, respectively; both were statistically significantly lower for 3-unit bridge abutments (P<.001). Conclusion: Although the number of abutments increased during the scanning of the working cast with removable dies, the error due to the superimposition of abutments did not increase. The error was significantly higher for single abutments, but within the range of clinically acceptable scan accuracy.
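
The RMS figure reported above is the root mean square of the per-point deviations between the superimposed test and reference meshes. A minimal sketch of that computation (the deviation array is illustrative):

```python
import numpy as np

def rms_deviation(deviations_um: np.ndarray) -> float:
    """Root mean square of point-wise deviations between CTM and CRM (μm)."""
    return float(np.sqrt(np.mean(np.square(deviations_um))))

# Illustrative per-point deviations from a mesh comparison (μm)
deviations = np.array([8.0, -12.5, 10.2, -9.7, 11.3, -13.1])
print(round(rms_deviation(deviations), 2))
```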

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun; Lee, Myungjin
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.43-61, 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Thanks to recent interest in the technology and research on various algorithms, the field of artificial intelligence has advanced more than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to let AI agents make decisions using machine-readable, processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. The current purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases support intelligent processing in various AI applications, such as the question answering systems behind smart speakers. However, building a useful knowledge base is time-consuming and still requires much expert effort. In recent years, much knowledge-based AI research and technology has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, the user-created summaries of an article's unifying aspects. This knowledge is created through mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. By generating knowledge from the semi-structured infobox data created by users, DBpedia can expect high reliability in terms of the accuracy of its knowledge. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method of extracting knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into RDF triple structure. Wikipedia infobox structures are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we ran comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through this proposed process, structured knowledge can be obtained by extracting knowledge that follows the ontology schema from text documents. In addition, the methodology can significantly reduce the effort experts spend constructing instances according to the ontology schema.
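
A minimal sketch of the BIO tagging scheme the training data relies on: tokens that begin a value span for an ontology attribute are tagged B-, continuation tokens I-, and all others O. The sentence, attribute name, and resulting triple below are illustrative, not taken from the paper's data set:

```python
# Illustrative BIO-tagged sentence for an infobox-derived attribute (birthPlace).
tokens = ["Einstein", "was", "born", "in", "Ulm", ",", "Germany", "."]
tags   = ["O", "O", "O", "O", "B-birthPlace", "I-birthPlace", "I-birthPlace", "O"]

# Token/tag pairs in the format a sequence labeler (CRF or Bi-LSTM-CRF) trains on
for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")

# The tagged span is then transformed into an RDF-style triple
subject, attribute = "Einstein", "birthPlace"
value = " ".join(t for t, g in zip(tokens, tags) if g.endswith("birthPlace"))
print((subject, attribute, value))  # ('Einstein', 'birthPlace', 'Ulm , Germany')
```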

Simultaneous Multicomponent Analysis of Preservatives in Cosmetics by Gas Chromatography (GC를 이용한 화장품 살균·보존제의 다성분 동시분석법)

  • Cho, Sang Hun; Jung, Hong Rae; Kim, Young Sug; Kim, Yang Hee; Park, Eun Mi; Shin, Sang Woon; Eum, Kyoung Suk; Hong, Se Ra; Kang, Hyo Jeong; Yoon, Mi Hye
    • Journal of the Society of Cosmetic Scientists of Korea, v.45 no.1, pp.69-75, 2019
  • Preservatives in cosmetics are managed through a positive list in Korea. A positive list requires proper quantitative analysis methods, but such methods are still insufficient. In this study, gas chromatography with a flame ionization detector was used to simultaneously analyze 14 preservatives in cosmetics. In method validation, specificity was confirmed, and the calibration curves of the 14 preservatives showed good linearity with correlation coefficients above 0.9997, except for dehydroacetic acid (0.9891). The limits of detection (LOD) and quantification (LOQ) were 0.0001 mg/mL ~ 0.0039 mg/mL and 0.0003 mg/mL ~ 0.0118 mg/mL, respectively, except for dehydroacetic acid (0.0204 mg/mL and 0.0617 mg/mL, respectively). The precision (repeatability) was less than 1.0%, except 7.1% for dehydroacetic acid. The accuracy (% recovery) of the 14 preservatives in cosmetics was 96.9% ~ 109.2%. Finally, this method was applied to 50 cosmetics available on the market. The results showed that the commonly used preservatives were chlorophene, phenoxyethanol, benzyl alcohol, and parabens; however, the amounts of the detected preservatives were within the maximum limits allowed by the KFDA.
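
A minimal sketch of two of the validation quantities reported above, using the common calibration-based conventions LOD = 3.3·σ/S and LOQ = 10·σ/S (σ: residual standard deviation, S: calibration slope) and recovery = measured/spiked × 100. These ICH-style conventions are assumed here, not stated in the abstract, and the numbers are illustrative:

```python
def lod(residual_sd: float, slope: float) -> float:
    """Limit of detection, 3.3·σ/S (common ICH-style convention, assumed)."""
    return 3.3 * residual_sd / slope

def loq(residual_sd: float, slope: float) -> float:
    """Limit of quantification, 10·σ/S."""
    return 10 * residual_sd / slope

def recovery_percent(measured_mg_ml: float, spiked_mg_ml: float) -> float:
    """Accuracy expressed as percent recovery of a spiked standard."""
    return 100 * measured_mg_ml / spiked_mg_ml

# Illustrative calibration statistics for one preservative
print(f"LOD = {lod(0.0003, 1.0):.4f} mg/mL, LOQ = {loq(0.0003, 1.0):.4f} mg/mL")
print(f"Recovery = {recovery_percent(0.0485, 0.0500):.1f}%")
```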