• Title/Summary/Keyword: form-accuracy


Serial Survey on Group A beta-Hemolytic Streptococcal Carrier Rate and Serotyping in Elementary School Children in 1996~1998 (3년간(1996~1998) 초등학생의 A군 연쇄구균 보균율과 혈청학적 분류에 관한 연구)

  • Kim, Ji-Hyun;Kim, Ju-Ye;Kang, Hyeon-Ho;Cha, Sung-Ho;Lee, Young-Hee
    • Pediatric Infection and Vaccine
    • /
    • v.7 no.1
    • /
    • pp.143-151
    • /
    • 2000
  • Purpose : The accuracy of bacteriologic diagnosis of beta-hemolytic streptococcal pharyngotonsillitis depends on the carrier rate in the area where the throat swabs are obtained, and the evaluation of serological T typing as an epidemiologic marker is important for understanding the epidemiology of streptococcal infection. The purpose of this study was to determine the carrier rates of group A streptococcus in normal children from four different areas and to characterize the epidemiologic distribution of the serotypes over 3 years. Method : Throat swabs were obtained from the tonsillar fossae of normal school children in four different areas (Uljin, Seoul, Osan, Kunsan) from March to May 1996, and in Uljin again in April 1997 and April 1998. The samples were plated on 5% sheep blood agar and incubated overnight at 37°C before examination for the presence of beta-hemolytic colonies. All isolated beta-hemolytic streptococci were grouped and serotyped by T agglutination. Results : The carrier rates of beta-hemolytic streptococci and group A streptococci in 1996 were 27.6% and 18.6% at Uljin; 16.4% and 2.7% at Seoul; 33.0% and 26.0% at Osan; and 20.0% and 12.3% at Kunsan, respectively. Among 1,192 normal school children from the 4 areas, we obtained 179 strains of group A streptococci. Fifty-two percent of the strains were typable by T agglutination in 1996. The common T-types in 1996 were NT, T1, T3, T2 at Uljin; T12, T25 at Seoul; NT, T6, T28 at Osan; and T25, T4, NT, T5 at Kunsan, in decreasing order. At Uljin, T1, T3 and T25 accounted for 69% of strains in 1996; T1, T12 and T25 for 70% in 1997; and T12 and T4 for 88% in 1998. Conclusion : Higher carrier rates were found in Uljin and Osan, which have lower population density and scant medical facilities compared with the other areas. We suppose that the low carrier rates elsewhere are likely related to antibiotic overuse or other epidemiologic factors.
The periodic and seasonal serotyping analysis is important in monitoring and understanding the epidemiologic patterns of group A streptococci.


Time Series Analysis of Park Use Behavior Utilizing Big Data - Targeting Olympic Park - (빅데이터를 활용한 공원 이용행태의 시계열분석 - 올림픽공원을 대상으로 -)

  • Woo, Kyung-Sook;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.46 no.2
    • /
    • pp.27-36
    • /
    • 2018
  • This study suggests the necessity of behavior analysis, as changes to a park environment that reflect user desires can be implemented only by grasping the needs of park users. Online data (blog posts) were defined as the basic data of the study. After collecting data in 5-year units, data mining was used to derive the characteristics of time-series behavior, while the significance of the online data was verified through social network analysis. The results of the text mining analysis are as follows. First, primary results included 'walking', 'photography', 'riding bicycles' (inline skates, kickboards, etc.), and 'eating'. Second, in the early period of the collected data, active physical behavior such as exercise was dominant, but passive behaviors such as eating, using a mobile phone, playing games, and drinking coffee have recently appeared as new behavioral characteristics in parks. Third, the factors affecting the behavior of park users are changes in various social conditions, such as the development of the internet and a culture of expressing unique personalities and styles. Fourth, the special behaviors appearing at Olympic Park derived from educational activities, such as cultural activities including watching performances and history lessons. In conclusion, people's lifestyle changes and behavior in a park are influenced by the changing times rather than by the original purpose intended during park planning and design. Therefore, it is necessary to create an environment tailored to users by considering the main behaviors and influencing factors of Olympic Park. Text mining, used as the analytical method, has the merit that past data can be collected; it therefore enables behavior analysis from a long-term viewpoint as well as measurement of new behaviors and values through the derived keywords. In addition, the validity of the online data was verified through social network analysis to increase the legitimacy of the research results. More comprehensive behavior analysis should be carried out by diversifying the types of data collected, and various methods will be needed to verify the accuracy and reliability of large-volume data.
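The keyword-frequency step of the text mining described above can be sketched roughly as follows; the period labels, posts, and keyword list are invented stand-ins for the study's blog corpus, not its actual data or code.

```python
from collections import Counter

# Illustrative keyword list (the study derived its keywords from the data).
BEHAVIOR_KEYWORDS = {"walking", "photography", "cycling", "eating", "coffee"}

def behavior_frequencies(posts_by_period):
    """Count behavior keywords per collection period (e.g. 5-year units)."""
    result = {}
    for period, posts in posts_by_period.items():
        counts = Counter()
        for post in posts:
            for token in post.lower().split():
                if token in BEHAVIOR_KEYWORDS:
                    counts[token] += 1
        result[period] = counts
    return result

# Hypothetical mini-corpus grouped into two collection periods:
posts = {
    "1995-1999": ["walking and cycling in the park", "walking with family"],
    "2015-2019": ["eating snacks and coffee", "photography near the lake"],
}
freq = behavior_frequencies(posts)
```

Comparing the per-period counts is what surfaces the shift from active behaviors (walking, cycling) toward passive ones (eating, coffee) noted in the abstract.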

The Application of Computer Program for Determination of Fluid Properties and P-T Condition from Microthermometric Data on Fluid Inclusions (유체포유물의 생성시 온도-압력 조건과 유체포유물의 물리화학적 특성연구에 있어서의 컴퓨터 프로그램이용)

  • Oh, Chang-Whan;Choi, Sang-Hoon
    • Economic and Environmental Geology
    • /
    • v.26 no.1
    • /
    • pp.21-27
    • /
    • 1993
  • Fluid inclusions have been widely used to study the origin and physicochemical conditions of ore deposits. However, it is difficult to obtain accurate physicochemical data from fluid inclusion studies due to errors in microthermometric data and the complexity of calculating the density and isochore of a fluid inclusion. The computer programs HALWAT, CO2, and CHNACL written by Nicholls and Crawford (1985) partly contributed to improving the accuracy of physicochemical data by handling the complicated equations. These programs are applied here to determine the densities and isochores of fluid inclusions for the Cretaceous Keumhak mine using Choi and So's data (1992) and for the Jurassic Samhwanghak mine using Yun's data (1990). The P-T estimated for the Keumhak mine from the calculated isochores of coexisting fluid inclusions is 230~290°C and 500~800 bar, which matches well with the P-T estimated by Choi and So (280~360°C and 500~800 bar, 1992). However, the P-T for the Samhwanghak mine estimated in this study, by combining the calculated isochores with sulfur isotope geothermometer data from Yun (1990), is about 4~7 kb at 329±50~344±55°C, which is quite different from the P-T estimates by Yun (255~294°C and 1.2~1.9 kb, 1990). This discrepancy is caused by misinterpretation of the homogenization temperature (Th) of the fluid inclusions and by the application of inappropriate isochores. Using the homogenization temperature and/or an inappropriately selected isochore to determine the trapping P-T condition of ore deposits should be avoided, particularly for deposits formed at pressures higher than 1~2 kb.
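The trapping P-T estimate from coexisting inclusions amounts to intersecting their isochores in P-T space. A minimal sketch, assuming each isochore is locally linear (real isochores come from an equation of state; the coefficients below are hypothetical, chosen only so the result lands in the Keumhak range quoted above):

```python
def trapping_pt(a1, b1, a2, b2):
    """Intersect two linear isochores P = a + b*T (P in bar, T in deg C).

    For coexisting fluid inclusions, the trapping condition lies at the
    intersection of their isochores; a linear form is a local approximation.
    """
    if b1 == b2:
        raise ValueError("parallel isochores do not intersect")
    t = (a2 - a1) / (b1 - b2)  # solve a1 + b1*T = a2 + b2*T
    return t, a1 + b1 * t

# Hypothetical coefficients for a steep and a shallow isochore:
t, p = trapping_pt(-1750.0, 10.0, -500.0, 5.0)
```

With these illustrative numbers the intersection falls at roughly 250°C and 750 bar, i.e. inside the 230~290°C / 500~800 bar window reported for the Keumhak mine.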


Analysis of Changes in Elementary Students' Mental Models about the Causes of the Seasonal Change (계절 변화의 원인에 관한 초등학생의 멘탈 모델 변화 과정 분석)

  • Kim, Soon-Mi;Yang, Il-Ho;Lim, Sung-Man
    • Journal of The Korean Association For Science Education
    • /
    • v.33 no.5
    • /
    • pp.893-910
    • /
    • 2013
  • The purpose of this study was to identify changes in elementary school students' mental models of the causes of seasonal change. Over a total of eight sessions, eight sixth graders were asked to describe the causes of seasonal change through pictures, writing, and thinking aloud, using microgenetic research methods, and the changes in their mental models were examined. Participants' verbal and behavioral factors and the contents of the interviews were recorded on video. In addition, a variety of materials were collected, such as field observation charts written by the researcher and mental model records written by the students. A protocol was written by integrating the collected results, then read repeatedly and inductively categorized. The results of this study were as follows. First, participants' mental models of the causes of seasonal change changed along various paths within and across sessions. Mental models that changed in more varied ways moved closer to the scientific model. In addition, students who had correctly established the preconceptions related to seasonal change, such as rotation and revolution, formed mental models consistent with the scientific concept based on new information, whereas students who had not established these preconceptions did not deviate from non-scientific mental models. Second, the prior knowledge, experience, and information that participants held in advance, the accuracy of that prior knowledge, the resolution of inconsistencies between new knowledge and existing mental models, and the activation of mental models through operating models and drawing pictures all affected the changes in mental models. Teachers should provide learners with sufficient experience from which various mental models can be constructed in order to form scientific concepts, and they need to lead learners to feel doubt and resolve it by presenting new teaching material that is inconsistent with their existing mental models.

Bankruptcy Prediction Modeling Using Qualitative Information Based on Big Data Analytics (빅데이터 기반의 정성 정보를 활용한 부도 예측 모형 구축)

  • Jo, Nam-ok;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.33-56
    • /
    • 2016
  • Many researchers have focused on developing bankruptcy prediction models using modeling techniques such as statistical methods, including multiple discriminant analysis (MDA) and logit analysis, or artificial intelligence techniques, including artificial neural networks (ANN), decision trees, and support vector machines (SVM), to secure enhanced performance. Most bankruptcy prediction models in academic studies have used financial ratios as the main input variables. The bankruptcy of firms is associated with both a firm's financial state and the external economic situation. However, the inclusion of qualitative information, such as the economic atmosphere, has not been actively discussed, despite the fact that exploiting only financial ratios has some drawbacks. Accounting information, such as financial ratios, is based on past data, and it is usually determined one year before bankruptcy. Thus, a time lag exists between the point of closing financial statements and the point of credit evaluation. In addition, financial ratios do not capture environmental factors, such as the external economic situation. Therefore, using only financial ratios may be insufficient for constructing a bankruptcy prediction model, because they essentially reflect past corporate internal accounting information while neglecting recent information. Thus, qualitative information must be added to the conventional bankruptcy prediction model to supplement accounting information. Due to the lack of an analytic mechanism for obtaining and processing qualitative information from various information sources, however, previous studies have made only limited use of qualitative information. Recently, big data analytics, such as text mining techniques, have been drawing much attention in academia and industry, with an increasing amount of unstructured text data available on the web. A few previous studies have sought to adopt big data analytics in business prediction modeling.
Nevertheless, the use of qualitative information on the web for business prediction modeling is still deemed to be in the primary stage, restricted to limited applications, such as stock prediction and movie revenue prediction applications. Thus, it is necessary to apply big data analytics techniques, such as text mining, to various business prediction problems, including credit risk evaluation. Analytic methods are required for processing qualitative information represented in unstructured text form due to the complexity of managing and processing unstructured text data. This study proposes a bankruptcy prediction model for Korean small- and medium-sized construction firms using both quantitative information, such as financial ratios, and qualitative information acquired from economic news articles. The performance of the proposed method depends on how well information types are transformed from qualitative into quantitative information that is suitable for incorporating into the bankruptcy prediction model. We employ big data analytics techniques, especially text mining, as a mechanism for processing qualitative information. The sentiment index is provided at the industry level by extracting from a large amount of text data to quantify the external economic atmosphere represented in the media. The proposed method involves keyword-based sentiment analysis using a domain-specific sentiment lexicon to extract sentiment from economic news articles. The generated sentiment lexicon is designed to represent sentiment for the construction business by considering the relationship between the occurring term and the actual situation with respect to the economic condition of the industry rather than the inherent semantics of the term. The experimental results proved that incorporating qualitative information based on big data analytics into the traditional bankruptcy prediction model based on accounting information is effective for enhancing the predictive performance. 
The sentiment variable extracted from economic news articles had an impact on corporate bankruptcy. In particular, a negative sentiment variable improved the accuracy of corporate bankruptcy prediction because the corporate bankruptcy of construction firms is sensitive to poor economic conditions. The bankruptcy prediction model using qualitative information based on big data analytics contributes to the field, in that it reflects not only relatively recent information but also environmental factors, such as external economic conditions.
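The keyword-based sentiment scoring described above can be sketched in a few lines; the lexicon entries and news snippets below are invented examples, not the study's actual construction-industry lexicon or corpus.

```python
# Hypothetical domain-specific sentiment lexicon: terms scored by their
# association with industry conditions rather than inherent word meaning.
LEXICON = {"growth": 1, "recovery": 1, "default": -1, "slump": -1, "unsold": -1}

def sentiment_index(articles):
    """Average lexicon score per matched term over a set of news articles."""
    score, hits = 0, 0
    for article in articles:
        for token in article.lower().split():
            if token in LEXICON:
                score += LEXICON[token]
                hits += 1
    return score / hits if hits else 0.0

# Illustrative economic news snippets:
news = [
    "construction slump deepens as unsold apartments pile up",
    "housing recovery lifts builder sentiment",
]
idx = sentiment_index(news)  # net negative for this sample
```

An index aggregated this way at the industry level would then enter the prediction model as one additional variable alongside the financial ratios.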

A Study of Traffic Incident Flow Characteristics on Korean Highway Using Multi-Regime (Multi-Regime에 의한 돌발상황 시 교통류 분석)

  • Lee, Seon-Ha;Kang, Hee-Chan
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.4 no.1 s.6
    • /
    • pp.43-56
    • /
    • 2005
  • This research examined time series analysis (TSA) of hourly traffic data (occupancy, flow, and speed), statistical models of the observed data on the traffic fundamental diagram, and the expansion of traffic jams across multiple flow regimes. Based on detector data from the Cheonan-Nonsan highway, events in which road capacity drops sharply, such as traffic accidents, can be detected from the change in occupancy immediately after the incident. For congestion-type events, however, the changes in occupancy and mean speed are gradual, so detection by time series analysis of a single traffic index is neither quick nor accurate. Under stable flow, the relationship between occupancy and flow is linear, with very high reliability. In contrast, platooning, marked by wide deviations from drivers' desired speeds, is difficult to express with a statistical model of the speed-occupancy relationship; in this case the speed drops sharply at 6~8% occupancy. For unstable flow, a statistical model is difficult to adopt because the formation-clearance process of a traffic jam must be analyzed in separate parts. Analyzing the formation-clearance process as a two-part division, the flow during an accident transfers to a stopped flow and occupancy increases sharply. When the flow recovers to free flow, the sharply increased occupancy decreases gradually and the flow then increases, as shown by the multi-regime time series analysis. Under recurrent congestion, the flow transfers from impeded free flow to congested flow and then to jammed flow, which is more complicated than the accident case, and the gap in flow between traffic states at the same occupancy is large. This research demonstrates the need for multi-regime division when analyzing traffic flow; future work requires quantitative division criteria and a model for each traffic regime.
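A multi-regime division like the one argued for above can be sketched as a simple rule-based classifier; the thresholds and the linear stable-flow slope are hypothetical, loosely following the ~6-8% occupancy break and the linear occupancy-flow relationship noted in the abstract.

```python
def classify_regime(occupancy_pct, speed_kph):
    """Toy multi-regime classifier; thresholds are illustrative only."""
    if occupancy_pct <= 6.0:
        return "free flow"
    if occupancy_pct <= 8.0 or speed_kph >= 40.0:
        return "impeded/platoon"
    return "congested"

def stable_flow(occupancy_pct, slope=120.0):
    """In the stable regime, flow rises roughly linearly with occupancy;
    the slope is a hypothetical calibration constant (veh/h per % occupancy)."""
    return slope * occupancy_pct
```

Fitting a separate model per regime, rather than one curve over the whole fundamental diagram, is exactly what avoids the large flow gaps at the same occupancy that the single-regime analysis could not explain.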

  • PDF

Estimation of Precipitable Water from the GMS-5 Split Window Data (GMS-5 Split Window 자료를 이용한 가강수량 산출)

  • 손승희;정효상;김금란;이정환
    • Korean Journal of Remote Sensing
    • /
    • v.14 no.1
    • /
    • pp.53-68
    • /
    • 1998
  • Observation of the behavior of hydrometeors in the atmosphere is important for understanding weather and climate. With conventional observations, we can obtain the distribution of water vapor at only a limited number of points on the earth. In this study, precipitable water has been estimated from the split window channel data of GMS-5, based upon the technique developed by Chesters et al. (1983). To retrieve precipitable water, a water vapor absorption parameter depending on the filter function of the sensor was derived using regression analysis between the split window channel data and the radiosonde data observed at the Osan, Pohang, Kwangju, and Cheju stations over 4 months. The 700 hPa air temperature from the Global Spectral Model of the Korea Meteorological Administration (GSM/KMA) was used as the mean air temperature for the single-layer radiation model. The precipitable water retrieved for the period from August 1996 through December 1996 was compared to radiosonde data. The root mean square differences between radiosonde observations and the GMS-5 retrievals range from 0.65 g/cm² to 1.09 g/cm², with a correlation coefficient of 0.46 on an hourly basis. The monthly distribution of precipitable water from GMS-5 shows a reasonably good representation at large scales. Precipitable water is produced 4 times a day at the Korea Meteorological Administration in the form of grid point data with 0.5 degree lat./lon. resolution. The data can be used in objective analysis for numerical weather prediction and to increase the accuracy of humidity analysis, especially under clear sky conditions. The data are also a useful complement to existing data sets for climatological research. However, a higher correlation between radiosonde observations and the GMS-5 retrievals is necessary for operational applications.
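The core of the retrieval calibration is a regression between the split-window brightness temperature difference and radiosonde precipitable water. A minimal least-squares sketch (the matchup numbers are invented; the operational retrieval also folds in the 700 hPa mean air temperature, which this sketch omits):

```python
def fit_split_window(delta_t, pw_obs):
    """Ordinary least-squares fit PW ~ a + b * (T11 - T12), standing in for
    the regression between split-window channel data and radiosonde
    precipitable water observations."""
    n = len(delta_t)
    mx = sum(delta_t) / n
    my = sum(pw_obs) / n
    b = sum((x - mx) * (y - my) for x, y in zip(delta_t, pw_obs)) \
        / sum((x - mx) ** 2 for x in delta_t)
    a = my - b * mx
    return a, b

# Hypothetical matchups: brightness temperature difference (K) vs. PW (g/cm^2):
a, b = fit_split_window([1.0, 2.0, 3.0], [1.5, 2.5, 3.5])
```

Once `a` and `b` are fixed from the matchup data, the same relation is inverted over every clear-sky pixel to produce the gridded precipitable water product.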

Development and Evaluation of Silicon Passive Layer Dosimeter Based Lead-Monoxide for Measuring Skin Dose (피부선량 측정을 위한 Lead-Monoxide 기반의 Silicon Passive layer PbO 선량계 개발 및 평가)

  • Yang, Seung-Woo;Han, Moo-Jae;Jung, Jae-Hoon;Bae, Sang-Il;Moon, Young-Min;Park, Sung-Kwang;Kim, Jin-Young
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.6
    • /
    • pp.781-788
    • /
    • 2021
  • Due to the high sensitivity of skin to radiation, excessive exposure needs to be prevented by accurately measuring the dose delivered to the skin during radiation therapy. Although clinical practice uses dosimeters such as film, OSLD, TLD, and glass dosimeters to measure skin dose, these dosimeters have difficulty performing accurate dosimetry on curved skin. In this study, to solve these problems, we developed a skin dosimeter that can be attached to follow the curvature of the human body and evaluated its response characteristics. The dosimeter was manufactured from lead monoxide (PbO), which has a high atomic number (Z of Pb: 82, Z of O: 8) and density (9.53 g/cm³), together with silicon binders that can bend with the body. In previous PbO dosimeters, performance degradation due to oxidation was prevented with a parylene coating, but parylene is affected by bending, so a new form of passive layer was produced and applied to the skin dosimeter. The dosimeter was characterized by SEM analysis and by reproducibility and linearity measurements at 6 MeV. Observing the dosimeter surface through SEM analysis, the PbO dosimeter with the parylene passive layer developed cracks on the surface when bent, whereas no cracks were observed in the PbO dosimeter with the silicon passive layer. In the reproducibility measurements, the RSD of the silicon passive layer PbO dosimeter was 1.47%, satisfying the evaluation criterion of RSD ≤ 1.5%, and the linearity evaluation showed an R² value of 0.9990, satisfying the evaluation criterion of R² ≥ 0.9990. The silicon passive layer PbO dosimeter thus demonstrated high signal stability, precision, and accuracy in reproducibility and linearity, without cracking due to bending, and was evaluated as applicable for skin dosimetry.
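The two acceptance metrics quoted above, RSD for reproducibility and R² for linearity, can be computed directly from repeated readings and a dose-response series; the sketch below shows the standard formulas (no data from the study is reproduced here).

```python
import math

def rsd_percent(readings):
    """Relative standard deviation (%), the reproducibility criterion."""
    n = len(readings)
    mean = sum(readings) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in readings) / (n - 1))
    return 100.0 * sd / mean

def r_squared(doses, signals):
    """Coefficient of determination of the least-squares dose-response line,
    the linearity criterion."""
    n = len(doses)
    mx, my = sum(doses) / n, sum(signals) / n
    b = sum((x - mx) * (y - my) for x, y in zip(doses, signals)) \
        / sum((x - mx) ** 2 for x in doses)
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(doses, signals))
    ss_tot = sum((y - my) ** 2 for y in signals)
    return 1.0 - ss_res / ss_tot
```

A dosimeter passes when `rsd_percent` of its repeated readings stays at or below 1.5% and `r_squared` of its dose-response series reaches the stated threshold.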

Exploring the Factors Influencing on the Accuracy of Self-Reported Responses in Affective Assessment of Science (과학과 자기보고식 정의적 영역 평가의 정확성에 영향을 주는 요소 탐색)

  • Chung, Sue-Im;Shin, Donghee
    • Journal of The Korean Association For Science Education
    • /
    • v.39 no.3
    • /
    • pp.363-377
    • /
    • 2019
  • This study reveals aspects of subjectivity in test results, in a science-specific sense, when science-related affective characteristics are assessed through self-report items. A science-specific response was defined as a response that appears due to a student's recognition of the nature or characteristics of science when attempting to measure his or her concepts or perceptions about science. We searched for cases where science-specific responses interfere with the measurement objective or with accurate self-reporting. The errors due to science-specific factors were derived from quantitative data from 649 students in the 1st and 2nd grades of high school and qualitative data from 44 interviewed students. The perspective on science and the characteristics of science that students internalize from everyday life and science learning experiences interact with the items that make up the test tool. As a result, obstacles to accurate self-reporting were found in three aspects: the characteristics of science, personal science experience, and science in the tool. In terms of the characteristics of science, relating to its essential aspects, students respond to items regardless of the measured constructs because of their views and the characteristics of science they perceive through subjective recognition. The personal science experience factor, representing the learner side, consists of the student's science motivation, interaction with science experiences, and perception of science and life. Finally, from the instrumental point of view, science in the tool leads to terminological confusion due to the uncertainty of science concepts and ultimately distances responses from accurate self-reporting.
Implications from the results of the study are as follows: review of inclusion of science-specific factors, precaution to clarify the concept of measurement, check of science specificity factors at the development stage, and efforts to cross the boundaries between everyday science and school science.

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of technologies in artificial intelligence has been rapidly accelerating with the Fourth Industrial Revolution, and research related to AI has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, due to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. Increasingly, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of knowledge, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema, trained on Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into RDF triple structure. The structures of Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we ran comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process.
Through this proposed process, it is possible to utilize structured knowledge by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort of the experts to construct instances according to the ontology schema.
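The BIO-tagging step used to build the training data can be sketched as follows; the sentence, the infobox value, and the `country` label are illustrative examples, not the paper's actual classes or relations.

```python
def bio_tag(tokens, value_tokens, label):
    """Assign B-/I-/O tags to a sentence for the first occurrence of an
    infobox value, mirroring how BIO-tagged training data is generated
    from a Wikipedia dump."""
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = "B-" + label           # beginning of the value span
            for j in range(i + 1, i + n):
                tags[j] = "I-" + label       # inside the value span
            break
    return tags

tokens = "Seoul is the capital of South Korea".split()
tags = bio_tag(tokens, ["South", "Korea"], "country")
```

Sequence labelers such as CRF or Bi-LSTM-CRF are then trained on token/tag pairs of this form to recover value spans from sentences that lack an infobox.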