• Title/Summary/Keyword: Object-detection


A Study on Real-Time Defect Detection Using Ultrasound Excited Thermography (초음파 서모그라피를 이용한 실시간 결함 검출에 대한 연구)

  • Cho, Jai-Wan;Seo, Yong-Chil;Jung, Seung-Ho;Jung, Hyun-Kyu;Kim, Seung-Ho
    • Journal of the Korean Society for Nondestructive Testing / v.26 no.4 / pp.211-219 / 2006
  • The UET (ultrasound excited thermography) for the real-time diagnostics of an object employs an infrared camera to image defects of the surface and subsurface, which are locally heated using high-frequency pulsed ultrasonic excitation. The dissipation of high-power ultrasonic energy around the faces of the defects causes an increase in temperature. The defect's image appears as a hot spot (bright IR source) within a dark background field. The UET for nondestructive diagnostics and evaluation is based on the image analysis of the hot spot as a local response to ultrasonically excited heat deposition. In this paper the applicability of UET for fast imaging of defects is described. The ultrasonic energy is injected into the sample through a transducer in the vertical and horizontal directions respectively. The voltage applied to the transducer is measured by a digital oscilloscope, and the waveforms are compared. Measurements were performed on four kinds of materials: a SUS fatigue crack specimen (thickness 14 mm), a PCB plate (1.8 mm), a CFRP plate (3 mm) and an Inconel 600 plate (1 mm). High-power ultrasonic energy with a pulse duration of 250 ms is injected into the samples in the horizontal and vertical directions respectively. The obtained experimental results reveal that the dissipation loss of the ultrasonic energy in the vertical injection is less than that in the horizontal direction. In the case of the PCB and CFRP, the size of the hot spot in the vertical injection is larger than that in the horizontal direction. The duration time of the hot spot in the vertical direction is three times as long as that in the horizontal direction. In the case of the Inconel 600 plate and the SUS sample, the hot spot in the horizontal injection was detected faster than that in the vertical direction.
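The hot-spot imaging step described above (a locally heated defect appearing as a bright spot against a dark background) can be illustrated as a simple threshold-and-label operation on the IR frames. This is only a sketch under assumed inputs; the temperature-rise threshold delta_t and the frame arrays are illustrative, not parameters reported in the paper.

import numpy as np
from scipy import ndimage

def detect_hot_spots(ir_frame, background, delta_t=2.0):
    """Label regions whose temperature rise over the background exceeds delta_t (K)."""
    rise = ir_frame.astype(float) - background.astype(float)  # local heating from ultrasonic excitation
    mask = rise > delta_t                                      # bright IR source within a dark background
    labels, n = ndimage.label(mask)                            # connected components = candidate defects
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))   # hot-spot areas in pixels
    return labels, sizes

Applying this to each frame of a recorded IR sequence would give the hot-spot size and duration that the abstract compares between vertical and horizontal injection.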

Multi-resolution SAR Image-based Agricultural Reservoir Monitoring (농업용 저수지 모니터링을 위한 다해상도 SAR 영상의 활용)

  • Lee, Seulchan;Jeong, Jaehwan;Oh, Seungcheol;Jeong, Hagyu;Choi, Minha
    • Korean Journal of Remote Sensing / v.38 no.5_1 / pp.497-510 / 2022
  • Agricultural reservoirs are essential structures for water supply during dry periods in the Korean peninsula, where water resources are temporally unevenly distributed. For efficient water management, systematic and effective monitoring of medium-small reservoirs is required. Synthetic Aperture Radar (SAR) provides a means of continuous monitoring of such reservoirs, with its capability of all-weather observation. This study aims to evaluate the applicability of SAR in monitoring medium-small reservoirs using Sentinel-1 (10 m resolution) and Capella X-SAR (1 m resolution), at the Chari (CR), Galjeon (GJ), and Dwitgol (DG) reservoirs located in Ulsan, Korea. Water detection results obtained by applying a Z fuzzy function-based threshold (Z-thresh) and Chan-Vese (CV), an object detection-based segmentation algorithm, are quantitatively evaluated against UAV-detected water boundaries (UWB). Accuracy metrics from Z-thresh were 0.87, 0.89, 0.77 (at CR, GJ, DG, respectively) using Sentinel-1 and 0.78, 0.72, 0.81 using Capella, and improvements were observed when CV was applied (Sentinel-1: 0.94, 0.89, 0.84; Capella: 0.92, 0.89, 0.93). Boundaries of the waterbody detected from Capella agreed relatively well with UWB; however, false and missed detections occurred from speckle noise, due to its high resolution. When masked with optical sensor-based supplementary images, improvements of up to 13% were observed. More effective water resource management is expected to be possible with continuous monitoring of available water quantity, once more accurate and precise SAR-based water detection techniques are developed.
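The two detection approaches compared above can be sketched as follows: a Z fuzzy function turns SAR backscatter (in dB) into a water membership that is thresholded, and Chan-Vese segments the image without an explicit threshold. This is a minimal sketch under assumed parameter values (a, b) and input arrays; it is not the paper's calibrated configuration.

import numpy as np
from skimage.segmentation import chan_vese

def z_membership(sigma0_db, a=-22.0, b=-15.0):
    """Z-shaped fuzzy membership: 1 for low backscatter (water), 0 for high backscatter (land)."""
    x = np.clip((sigma0_db - a) / (b - a), 0.0, 1.0)
    return np.where(x < 0.5, 1 - 2 * x**2, 2 * (1 - x)**2)

def water_mask_zthresh(sigma0_db, a=-22.0, b=-15.0):
    """Water mask from the Z fuzzy function: membership above 0.5 is labeled as water."""
    return z_membership(sigma0_db, a, b) > 0.5

def water_mask_cv(sigma0_db):
    """Water mask from Chan-Vese segmentation of the normalized backscatter image."""
    norm = (sigma0_db - sigma0_db.min()) / (np.ptp(sigma0_db) + 1e-9)
    return chan_vese(norm)   # boolean segmentation; may need inversion so that water is True

Speckle filtering and the optical-image masking mentioned above would be applied before and after these steps, respectively.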

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.1-23 / 2018
  • Since the beginning of the 21st century, various high-quality services have emerged with the growth of the internet and information and communication technologies. In particular, the E-commerce industry, in which Amazon and eBay stand out, is growing explosively in scale. As E-commerce grows, customers can easily find what they want to buy while comparing various products, because more products have been registered at online shopping malls. However, a problem has arisen with this growth. As too many products have been registered, it has become difficult for customers to find what they really need in the flood of products. When customers search for desired products with a generalized keyword, too many products come up as results. On the contrary, few products are found if customers type in product details, because concrete product attributes are rarely registered. In this situation, recognizing texts in images automatically with a machine can be a solution. Because the bulk of product details are provided in catalogs in image format, most product information cannot be searched with text inputs in the current text-based searching system. If the information in these images can be converted to text format, customers can search products by product details, which makes shopping more convenient. There are various existing OCR (Optical Character Recognition) programs which can recognize texts in images, but they are hard to apply to catalogs because they have problems recognizing texts in certain circumstances, for example when texts are not big enough or fonts are not consistent. Therefore, this research suggests a way to recognize keywords in catalogs with a deep learning algorithm, the state of the art in the image-recognition area since the 2010s. The Single Shot MultiBox Detector (SSD), a well-regarded model for object-detection performance, can be used with its structure re-designed to take into account the differences between text and objects. However, the SSD model needs a lot of labeled training data, because deep learning algorithms of this kind must be trained by supervised learning. To collect data, one could manually label location and classification information for texts in catalogs. But manual collection raises many problems: some keywords would be missed because humans make mistakes while labeling training data, and collecting the scale of data needed becomes too time-consuming, or too costly if many workers are hired to shorten the time. Furthermore, if some specific keywords need to be trained, finding images that contain those words is also difficult. To solve the data issue, this research developed a program which creates training data automatically. This program can generate catalog-like images containing various keywords and pictures and save the location information of the keywords at the same time. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves. The SSD model recorded an 81.99% recognition rate with 20,000 data items created by the program. Moreover, this research tested the efficiency of the SSD model according to differences in the data, to analyze which features of the data influence the performance of recognizing texts in images.
As a result, it was found that the number of labeled keywords, the addition of overlapping keyword labels, the existence of keywords that are not labeled, the spaces among keywords, and the differences in background images are related to the performance of the SSD model. This test can lead to performance improvements of the SSD model or other deep learning-based text-recognition systems through high-quality data. The SSD model re-designed to recognize texts in images and the program developed for creating training data are expected to contribute to the improvement of searching systems in E-commerce. Suppliers can spend less time registering keywords for products, and customers can search for products using the product details written in the catalog.
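A minimal sketch of the automatic training-data generation idea described above: render keywords onto a catalog-like background image and record each keyword's bounding box as the label for SSD training. The background path, font file, and keyword list are hypothetical placeholders; the paper's actual generator is more elaborate.

import random
from PIL import Image, ImageDraw, ImageFont

def make_sample(background_path, keywords, font_path, out_size=(300, 300)):
    """Return a synthetic catalog-style image and a list of (keyword, bounding box) labels."""
    img = Image.open(background_path).convert("RGB").resize(out_size)
    draw = ImageDraw.Draw(img)
    labels = []
    for word in keywords:
        font = ImageFont.truetype(font_path, size=random.randint(14, 32))  # vary font size
        x = random.randint(0, out_size[0] // 2)
        y = random.randint(0, out_size[1] - 40)
        draw.text((x, y), word, fill=(0, 0, 0), font=font)
        labels.append((word, draw.textbbox((x, y), word, font=font)))      # location label for SSD
    return img, labels

Generating thousands of such samples with varied backgrounds, keyword spacing, and overlapping labels corresponds to the data-difference experiments reported above.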

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People are nowadays creating a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data satisfies the conditions that define Big Data: the amount of data (volume), data input and output speed (velocity), and the variety of data types (variety). If someone intends to discover the trend of an issue in SNS Big Data, this information can be used as an important new source for the creation of new value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time-series graph of a topic for the duration of a month; (3) present the importance of a topic through a treemap based on the score system and frequency; (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, for processing various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, which is an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale up from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, the visualization of Big Data is attractive to the Big Data community because it helps analysts examine such data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to arbitrary data; the interaction between data is easy and useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries, and it is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS).
Based on this, we can confirm the utility of storytelling and time series analysis. Third, we develop a web-based system, and make the system available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
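As an illustration of the daily topic-extraction step, the sketch below fits LDA on one day's tweets and returns the top keywords per topic. The tweet list, vectorizer settings, and topic count are assumptions; the system itself runs Korean morphological analysis (noun extraction, stop-word removal) upstream and stores results in MongoDB.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def daily_topics(tweets, n_topics=10, n_top_words=10):
    """Return one list of top keywords per topic for a single day's tweets."""
    vectorizer = CountVectorizer(max_df=0.95, min_df=2)   # assumes pre-tokenized, noun-extracted text
    doc_term = vectorizer.fit_transform(tweets)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(doc_term)
    vocab = vectorizer.get_feature_names_out()
    return [[vocab[i] for i in topic.argsort()[-n_top_words:][::-1]]
            for topic in lda.components_]

Running this per day and scoring topics by frequency would feed the daily ranking, treemap, and time-series views described above.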

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture which provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the character information of the device ID and gas usage amount by selective optical character recognition based on deep learning technology. In general, there are many types of characters in an image, and optical character recognition technology extracts all character information in an image. But some applications need to ignore characters that are not of interest and focus only on specific types of characters. As an example of such an application, an automatic gasometer reading system only needs to extract the device ID and gas usage amount character information from gasometer images to send bills to users. Character strings that are not of interest, such as device type, manufacturer, manufacturing date, and specification, are not valuable information for the application. Thus, the application has to analyze the region of interest and specific types of characters to extract valuable information only. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character information extraction. We built three neural networks for the application system. The first is a convolutional neural network which detects the regions of interest containing the gas usage amount and device ID character strings, the second is another convolutional neural network which transforms the spatial information of the region of interest into spatial sequential feature vectors, and the third is a bi-directional long short-term memory network which converts the spatial sequential information into character strings using time-series mapping from feature vectors to character strings. In this research, the character strings of interest are the device ID and gas usage amount. The device ID consists of 12 Arabic numeral characters and the gas usage amount consists of 4-5 Arabic numeral characters. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA TESLA V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes reading requests from mobile devices to an input queue with a FIFO (First In First Out) structure. The slave process consists of the three types of deep neural networks which conduct the character recognition process and runs on the NVIDIA GPU module. The slave process continually polls the input queue for recognition requests. If there are requests from the master process in the input queue, the slave process converts the image in the input queue into the device ID character string, gas usage amount character string, and position information of the strings, returns the information to the output queue, and switches to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three types of deep neural networks.
22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets, respectively, for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant). Normal data are clean images, noise means images with noise signals, reflex means images with light reflection in the gasometer region, scale means images with a small object size due to long-distance capturing, and slant means images which are not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount of normal data are 0.960 and 0.864, respectively.
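The master-slave queue structure described above can be sketched with a FIFO input queue, a polling slave worker, and an output queue. The recognize() placeholder stands in for the detection and CRNN networks; the queue contents and returned fields are illustrative assumptions.

import queue
import threading

input_q = queue.Queue()    # FIFO: reading requests pushed by the master process
output_q = queue.Queue()   # results returned to the master process

def recognize(image_bytes):
    """Placeholder for the CNN region detector + CRNN character recognizer on the GPU."""
    return {"device_id": "000000000000", "usage": "00000", "positions": []}

def slave_worker():
    while True:
        image_bytes = input_q.get()            # poll the input queue for a request
        output_q.put(recognize(image_bytes))   # device ID / usage strings and their positions
        input_q.task_done()

threading.Thread(target=slave_worker, daemon=True).start()
input_q.put(b"gasometer image bytes")          # master: enqueue a reading request
result = output_q.get()                        # master: deliver the result to the mobile device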

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.1-25 / 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Furthermore, Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Since it has more practical value in terms of business, ABSA is drawing attention from both academic and industrial organizations. When there is a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables a more specific and effective marketing strategy. In order to perform ABSA, it is necessary to identify which aspect terms or aspect categories are included in the text, and judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). It is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed by one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. As such, an aspect category referred to by one or more specific aspect terms is called an explicit aspect. On the other hand, an aspect category like 'price', which does not have any specific aspect terms but can be indirectly guessed from an emotional word such as 'expensive', is called an implicit aspect. So far, the term 'aspect category' has been used to avoid confusion with 'aspect term'. From now on, we will consider 'aspect category' and 'aspect' as the same concept and use the word 'aspect' for convenience. One thing to note is that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, whereas ACSC treats not only explicit aspects but also implicit aspects. This study seeks to answer the following questions, ignored in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the tokens for aspect categories than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence-pair configuration of the input data? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that provide performance beyond the existing studies without expanding the training dataset were derived.
In addition, it was found that it is more effective to reflect the output vector of the aspect category token than to use only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally provides better performance than NLI, and that the order of the sentence with the aspect category in the QA type is irrelevant to performance. There may be some differences depending on the characteristics of the dataset, but when using NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The new methodology for designing the ACSC model used in this study could be similarly applied to other studies such as ATSC.
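The QA- and NLI-type sentence-pair inputs discussed above can be sketched with the HuggingFace transformers API: the review and an auxiliary sentence built from the aspect category are encoded as a pair and fed to a BERT sequence classifier. The template wording, model name, and three-way label set are illustrative assumptions, not the exact configuration of the paper.

from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

review = "The restaurant is expensive but the food is really fantastic"
aspect = "price"
qa_aux = f"what do you think of the {aspect} of it?"   # QA-type auxiliary sentence
nli_aux = aspect                                        # NLI-type auxiliary sentence

# Sentence-pair encoding; swapping the two arguments (or using nli_aux instead of qa_aux)
# changes which sentence comes first, the ordering and type questions examined above.
inputs = tokenizer(review, qa_aux, return_tensors="pt")
logits = model(**inputs).logits                         # e.g. negative / neutral / positive scores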

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in sentences of Korean and Japanese, which is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, omission of noun phrases makes the quality of information extraction poor. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem that our system deals with is very similar to zero anaphora resolution, which is one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent. An antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in the case of zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage is in charge of detecting the zero anaphor. In the second stage, antecedent search is carried out by considering the candidates. If antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is to make use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research is to perform binary classification for all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected as the antecedent. However, we propose in this paper that antecedent search be viewed as the problem of assigning antecedent indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed in antecedent search in the text. We are the first to suggest this idea. To perform sequence labeling, we suggest using a structural SVM which receives a sequence of noun phrases as input and returns the sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent and the other indicating that it is not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology used for optimization problems. To train and test our system we selected a set of Wikipedia texts and constructed an annotated corpus in which gold-standard answers, such as zero anaphors and their possible antecedents, are provided. Training examples are prepared using the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified. Thus the performance of our system is dependent on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor. This is based on binary classification using a regular SVM. The experiment showed that our system's performance is F1 = 68.58%. This means that a state-of-the-art system can be developed with our technique.
It is expected that future work that enables the system to utilize semantic information can lead to a significant performance improvement.
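A compact sketch of the antecedent-search idea: the noun phrases preceding the zero anaphor form the candidate sequence, and a Pegasos-style subgradient update trains a weight vector whose loss-augmented argmax picks the antecedent position. The per-phrase feature vectors and the 0/1 structured loss are simplifying assumptions; the paper's structural SVM labels the full sequence.

import numpy as np

def loss_augmented_argmax(w, candidates, gold):
    """Highest-scoring candidate position, with +1 loss added to every non-gold position."""
    scores = candidates @ w + (np.arange(len(candidates)) != gold)
    return int(np.argmax(scores))

def pegasos_train(examples, dim, lam=0.01, epochs=10):
    """examples: list of (candidate_matrix of shape [n_phrases, dim], gold antecedent index)."""
    w, t = np.zeros(dim), 0
    for _ in range(epochs):
        for candidates, gold in examples:
            t += 1
            eta = 1.0 / (lam * t)                  # Pegasos step size
            pred = loss_augmented_argmax(w, candidates, gold)
            w *= (1 - eta * lam)                   # subgradient of the L2 regularizer
            if pred != gold:                       # subgradient of the structured hinge loss
                w += eta * (candidates[gold] - candidates[pred])
    return w

def find_antecedent(w, candidates):
    """At test time, the highest-scoring noun phrase is labeled as the antecedent."""
    return int(np.argmax(candidates @ w))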

The Consideration about Heavy Metal Contamination of Room and Worker in a Workshop (공작실에서 실내 및 작업종사자의 중금속 오염도에 관한 고찰)

  • Kim Jeong-Ho;Kim Gha-Jung;Kim Sung-Ki;Bea Suk-Hwan
    • The Journal of Korean Society for Radiation Therapy / v.17 no.2 / pp.87-94 / 2005
  • Purpose : Heavy metals are used when producing blocks in the workshop, and the heavy metal dust and fumes produced in this process pose a risk to humans. The purpose of this study is to assess the seriousness of this heavy metal exposure through measurement and analysis, and to seek solutions. Materials and Methods : An Inductively Coupled Plasma Atomic Emission Spectrometer was used, and the subjects were four workshops in university hospital radiation oncology departments in Daejeon city (measuring bismuth, lead, tin, and cadmium). Air samples were collected by pumping, analyzed in ppb units, and compared. Provisional standards for heavy metals in air were set by calculation from the standard levels of heavy metals in the body and blood. Results : The indoor air quality regulations for underground spaces recommend exposure limits of 3 μg/m³ for lead and 2 μg/m³ for cadmium, and limits of 7 μg/m³ for bismuth and 6 μg/m³ for tin were derived from the standard levels of heavy metals in air, body, and blood. The measured heavy metal levels in the four university hospital workshops in Daejeon city were compared between periods with and without work. When no work was being done, almost all measured levels were below the recommended levels, but during work high levels were observed. The detection ratio also corresponded to the composition ratio of the block. Conclusion : Workers' heavy metal contamination is serious, and basic measures are needed. Hospitals should operate local exhaust ventilation with periodic efficiency checks and provide protective equipment, among other measures. Workers should correctly understand heavy metal contamination and maintain continuous attention to their health. Finally, academic societies should establish standard levels and periodic measurements, and lay the groundwork for periodic special health checkups.


Comparison of a whole blood Interferon-γ assay and a tuberculin skin test for detecting latent tuberculosis infection in children (소아 잠복 결핵 감염 진단에 있어서 투베르쿨린 피부반응 검사와 결핵 특이항원 자극 Interferon-γ 분비능 측정의 비교)

  • Chun, Jin-Kyong;Kim, Chang Ki;Kim, Hyun Sook;Jung, Ghee Young;Linton, John A.;Kim, Ki Hwan;Lee, Taek Jin;Jeon, Ji Hyun;Kim, Dong Soo
    • Clinical and Experimental Pediatrics / v.51 no.9 / pp.971-976 / 2008
  • Purpose : Surveillance for detecting and managing latent tuberculosis infection (LTBI) is a key component of tuberculosis control. The classic surveillance tool, the tuberculin skin test (TST), may have some limitations when used in the Bacillus Calmette-Guérin (BCG)-vaccinated population. The object was to perform a blood test, QuantiFERON®-TB Gold In Tube (QFT-G IT), based on the detection of interferon-γ (IFN-γ) released by T cells in response to Mycobacterium tuberculosis-specific antigens, and to compare the efficacy of this new diagnostic tool for LTBI with that of TST. Methods : For six months, between October 1, 2006 and April 30, 2007, data were collected from 111 patients under 15 years of age at Severance Children's Hospital. TST and QFT-G IT tests were performed with children with or without contact histories of tuberculosis. In addition to these tests, we examined comparative data from 29 adults who had tuberculosis, to detect false negative rates in the QFT-G IT method. Results : Thirty-three children had household contact histories. In this group, 15% and 42% of cases were found to be positive using the QFT-G IT assay and TST, respectively. Agreement was low between these two tests (κ = 0.39). In the adult active tuberculosis group, the QFT-G IT false negative rate defined as a positive culture and a negative QFT-G IT result was 12.5%. Conclusion : In diagnosing LTBI in children, the usefulness of a whole-blood IFN-γ assay employing TB-specific antigens will be revealed only by examining additional longitudinal clinical data; this study serves as a starting point in that process.

A study of Diagnostic Significance of Simultaneous Examination of Proteinuria and Hematuria in the Urinary Mass Screening (집단뇨검사(Urinary mass screening) 방법으로 단백뇨와 혈뇨의 동시검사가 가지는 진단적 가치에 대한 연구)

  • Kim, Young-Kyoun;Lee, Chong-Guk
    • Childhood Kidney Diseases / v.3 no.1 / pp.57-63 / 1999
  • Purpose : To evaluate the diagnostic significance of simultaneous examination of hematuria and proteinuria in the urinary mass screening for early detection of incipient renal diseases. Method and Object : During the period of 4 months from August to December in 1997, we conducted urinary mass screening on first-grade high school students (16-year-old group) nationwide together with the Korean Association of Health (KAH). In the first screening test, Comber-10 N® M dipsticks were used to detect proteinuria, hematuria, pyuria and nitrite simultaneously. A total of 26,508 students (16-year-old group) from 33 high schools across every province in Korea participated in the urinary mass screening. Afterwards, one high school in Seoul was selected to reveal the true incidence of incipient renal diseases, through intensive examinations, among students who showed hematuria in the initial screening. Those who had hematuria and/or proteinuria visited the Paik hospital, and underwent blood tests and ultrasonographic examinations. The results were evaluated. Results : 1) The initial screening revealed that the prevalences of proteinuria, hematuria, pyuria and positive nitrite urine were 0.73%, 2.69%, 0.23% and 0.03%, respectively. 2) The first urinary screening among 875 students from the one high school in Seoul selected for the second test showed that proteinuria, hematuria, pyuria and positive nitrite urine were 0.91%, 4.68%, 0.34% and 0%, respectively. a) A total of 8 of the 875 students showed proteinuria, but one of them had orthostatic proteinuria and the remaining 7 students had transient proteinuria. b) There were 41 students who had hematuria in the initial screening. Among the 33 who complied with the second test, only one student showed asymptomatic isolated hematuria and the remaining students were normal. Conclusion : 1) Because of the high false-positive hematuria rate in the urinary mass screening, it does not seem appropriate to include dipstick hematuria screening in the urinary mass screening. 2) A unified organization is needed, given that such varied results of urinary mass screening have been reported. 3) The positive rates of pyuria and nitrite were so low that the validity of urinary mass screening for urinary tract infection needs further study.
