• Title/Summary/Keyword: Target detection


Diagnosis of Enteropathogens in Children with Acute Gastroenteritis: One Year Prospective Study in a Single Hospital (소아의 급성 위장관염의 원인균 진단: 단일 병원에서 1년간의 전향적 연구)

  • Chang, Ju Young; Choi, Ji Eun; Shin, Sue; Yoon, Jong Hyun
    • Pediatric Gastroenterology, Hepatology & Nutrition / v.9 no.1 / pp.1-13 / 2006
  • Purpose: Acute gastroenteritis is one of the most frequently encountered diseases in children and carries a relatively high admission rate. The aim of this study was to determine the isolation trends of common and emerging pathogens of acute gastroenteritis in children over a 12-month period in a community hospital. Methods: The study group included children who were admitted to Seoul National University Boramae Hospital from April 2003 to March 2004, or who visited its outpatient clinic from April 2003 to July 2003, with presenting features of acute gastroenteritis. Stool specimens were obtained within 2 days of the visit and examined for the following pathogens: rotavirus, adenovirus, Salmonella, Shigella, Vibrio, pathogenic Escherichia coli (E. coli), Campylobacter, and Yersinia species. Viral pathogens were detected with commercial antigen-detection kits; bacterial pathogens were identified by culture on selective media. For pathogenic E. coli, polymerase chain reaction (PCR) was performed with target genes related to the pathogenicity of enterotoxigenic E. coli (ETEC), enteropathogenic E. coli (EPEC), and enterohemorrhagic E. coli (EHEC). Results: 130 hospitalized children and 28 outpatients were included in this study. The majority of the children (>93%) were less than 6 years old. Pathogens were isolated in 47% of inpatients and 43% of outpatients. Rotavirus was the most frequently identified pathogen, accounting for 42.3% of inpatients and 29.6% of outpatients. Nontyphoidal Salmonella was the most commonly isolated bacterial pathogen (3.9%) in hospitalized children. Pathogenic E. coli (EPEC, ETEC) was detected in 2.1% (2/97) of inpatients and 25% (3/12) of outpatients. EHEC, adenovirus, Campylobacter, Yersinia, and Shigella species were not detected in this study. Conclusion: Rotavirus is the most common enteropathogen in children with acute gastroenteritis. Nontyphoidal Salmonella and pathogenic E. coli are important bacterial pathogens. Campylobacter species may not be a commonly detected organism in hospitalized children with acute diarrhea.
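
As a toy illustration of the PCR pathotype-calling step, the sketch below maps marker genes to E. coli pathotypes. The study does not list its exact target genes, so the panel uses commonly cited pathotype markers as an assumption:

```python
# Sketch: calling E. coli pathotypes from PCR target-gene hits.
# The marker panel uses commonly cited pathotype genes (LT/ST toxins for
# ETEC, intimin/BFP for EPEC, Shiga toxins for EHEC); the study's actual
# targets are not listed in the abstract, so treat these as illustrative.

PATHOTYPE_MARKERS = {
    "ETEC": {"eltB", "estA"},   # heat-labile / heat-stable enterotoxins
    "EPEC": {"eaeA", "bfpA"},   # intimin / bundle-forming pilus
    "EHEC": {"stx1", "stx2"},   # Shiga toxins
}

def classify_pathotypes(detected_genes: set[str]) -> list[str]:
    """Return the pathotypes whose marker set overlaps the PCR-positive genes."""
    return [p for p, markers in PATHOTYPE_MARKERS.items()
            if markers & detected_genes]

# Example: a stool isolate PCR-positive for eltB would be flagged as ETEC.
print(classify_pathotypes({"eltB"}))  # ['ETEC']
```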


Development of Species-Specific PCR to Determine the Animal Raw Material (종 특이 프라이머를 이용한 동물성 식품원료의 진위 판별법 개발)

  • Kim, Kyu-Heon; Lee, Ho-Yeon; Kim, Yong-Sang; Kim, Mi-Ra; Jung, Yoo Kyung; Lee, Jae-Hwang; Chang, Hye-Sook; Park, Yong-Chjun; Kim, Sang Yub; Choi, Jang Duck; Jang, Young-Mi
    • Journal of Food Hygiene and Safety / v.29 no.4 / pp.347-355 / 2014
  • In this study, a detection method based on molecular biological techniques was developed to verify the authenticity of animal raw materials. For species discrimination, the cytochrome c oxidase subunit I (COI), cytochrome b (Cytb), and 16S ribosomal RNA (16S rRNA) genes in mitochondrial DNA were targeted. Species-specific primers were designed so that the polymerase chain reaction (PCR) product size was around 200 bp, allowing application to processed products. The 24 target raw materials comprised 2 species of domestic animals, 6 species of poultry, 2 species of freshwater fish, 13 species of marine fish, and 1 species of crustacean. PCR products of 113 bp to 218 bp were confirmed for Rabbit, Fox, Pheasant, Domestic Pigeon, Rufous Turtle Dove, Quail, Tree Sparrow, Barn Swallow, Catfish, Mandarin Fish, Flying Fish, Mallotus villosus, Pacific Herring, Sand Lance, Japanese Anchovy, Small Yellow Croaker, Halibut, Jacopever, Skate Ray, Ray, File Fish, Sea Bass, Sea Urchin, and Lobster raw materials, respectively. In addition, no non-specific PCR products were detected when each species-specific primer set was tested against the other species. The method using the primers developed in this study may be applied to verify the authenticity of animal raw materials in various processed food products.
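
As a rough illustration of how such primer pairs can be validated, the sketch below runs a toy in-silico PCR check: it confirms that both primers bind a template and predicts the product size. All sequences are made-up placeholders, not the study's primers:

```python
# Sketch: in-silico check that a species-specific primer pair yields the
# expected ~200 bp amplicon from a mitochondrial reference sequence.
# Primer and template sequences are placeholders, not the study's.

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def amplicon_size(template: str, fwd: str, rev: str) -> int | None:
    """Return the predicted product size if both primers bind, else None."""
    start = template.find(fwd)                 # forward primer on the plus strand
    site = template.find(revcomp(rev))         # reverse primer binds the minus strand
    if start == -1 or site == -1 or site + len(rev) <= start:
        return None
    return site + len(rev) - start

fwd = "GGATCCTACGATTGAC"
rev_site = "CATTGGCAAGTCGATC"                  # plus-strand site of the reverse primer
template = "ATGC" * 30 + fwd + "ATGC" * 40 + rev_site + "ATGC" * 10
print(amplicon_size(template, fwd, revcomp(rev_site)))  # 192
```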

A Study on the Analysis of Five Artificial Sweeteners in Beverages by HPLC/MS/MS (HPLC/MS/MS를 이용한 음료류 중 인공감미료 동시분석에 관한 연구)

  • Lee, Seong-Bong; Yong, Kum-Chan; Hwang, Sun-Il; Kim, Young-Su; Jung, You-Jung; Seo, Mi-Young; Lee, Chang-Hee; Sung, Jin-Hee; Yoon, Mi-Hye
    • Journal of Food Hygiene and Safety / v.29 no.4 / pp.327-333 / 2014
  • A method for the simultaneous analysis of five artificial sweeteners (sodium saccharin, aspartame, acesulfame-K, sucralose, and cyclamate) in beverage samples was developed using high-performance liquid chromatography/triple quadrupole mass spectrometry (HPLC/MS/MS). The method uses a single-step dilution for sample preparation. Separation was achieved on a C18 column (2.1 × 150 mm, 3.5 μm) with mobile phases A (2% methanol with 1 mM ammonium acetate) and B (95% methanol with 1 mM ammonium acetate) in gradient mode. The target compounds were quantified by external calibration in selected reaction monitoring (SRM) mode. The coefficients of determination of the calibration curves for sodium saccharin, aspartame, acesulfame-K, sucralose, and cyclamate were 0.9957, 0.9991, 0.9943, 0.9982, and 0.9948, respectively. The limits of detection (LODs) and limits of quantitation (LOQs) were in the ranges of 0.001~0.022 mg/L and 0.004~0.073 mg/L, respectively. Recoveries for beverage samples were in the range of 92.76~113.50% with RSD < 10.91%. The method was applied to the determination of the five sweeteners in 102 beverage samples. Three artificial sweeteners (aspartame, acesulfame-K, and sucralose) were detected, in 42 samples; sodium saccharin and cyclamate were not detected in any sample.
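
A minimal sketch of the external-calibration step follows, with made-up peak areas standing in for real SRM data; LOD and LOQ are estimated from the calibration fit using the common 3.3s/slope and 10s/slope rules:

```python
# Sketch: external calibration for one sweetener, with LOD/LOQ estimated
# from the residual standard deviation of the fit. Concentrations and
# peak areas below are illustrative, not the paper's data.
import numpy as np

conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0])            # mg/L standards
area = np.array([102., 515., 1030., 5110., 10250., 50900.])  # SRM peak areas

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
s = resid.std(ddof=2)            # residual standard deviation (n - 2 dof)

lod = 3.3 * s / slope
loq = 10 * s / slope
r2 = 1 - (resid**2).sum() / ((area - area.mean())**2).sum()
print(f"r^2={r2:.4f}  LOD={lod:.3f} mg/L  LOQ={loq:.3f} mg/L")

# Quantify an unknown sample from its measured peak area:
unknown_area = 2450.0
print(f"sample: {(unknown_area - intercept) / slope:.3f} mg/L")
```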

Development of Deep Learning Structure to Improve Quality of Polygonal Containers (다각형 용기의 품질 향상을 위한 딥러닝 구조 개발)

  • Yoon, Suk-Moon; Lee, Seung-Ho
    • Journal of IKEEE / v.25 no.3 / pp.493-500 / 2021
  • In this paper, we propose a deep learning structure to improve the quality of polygonal containers. The structure consists of convolution layers, a bottleneck layer, fully connected layers, and a softmax layer. Each convolution layer obtains a feature map by applying several 3x3 convolution filters to the input image or to the feature map of the previous layer. The bottleneck layer keeps only the most informative features from the extracted feature maps: it reduces the channels with a 1x1 convolution followed by ReLU and then applies a 3x3 convolution with ReLU. A global average pooling operation performed after the bottleneck layer then reduces the spatial size of the feature map. The output is produced through six fully connected layers, and the softmax layer weights the values of the input nodes against the target nodes and converts the result into values between 0 and 1 through an activation function. After training is complete, the recognition process classifies non-circular glass bottles by acquiring an image with a camera, detecting the bottle position, and classifying the bottle with the trained deep learning model, as in the training process. To evaluate the performance of the proposed structure, an experiment was conducted at an authorized testing institute: the good/defective discrimination accuracy was 99%, on par with the world's best reported level. Inspection time averaged 1.7 seconds, within the operating-time requirements of production processes that use non-circular machine-vision systems. Therefore, the effectiveness of the deep learning structure proposed in this paper for improving the quality of polygonal containers was demonstrated.
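
A minimal PyTorch sketch of the described structure follows. Channel widths, the number of fully connected layers, and the class count are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch: 3x3 convolutions, a 1x1 -> 3x3 bottleneck, global average
# pooling, a fully connected head, and a softmax output, as described
# above. All sizes are illustrative.
import torch
import torch.nn as nn

class PolygonalContainerNet(nn.Module):
    def __init__(self, num_classes: int = 2):   # e.g. good / defective
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            # bottleneck: 1x1 conv shrinks channels, then 3x3 conv
            nn.Conv2d(32, 16, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)       # global average pooling
        self.classifier = nn.Sequential(         # fully connected head
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.gap(self.features(x)).flatten(1)
        logits = self.classifier(x)
        return torch.softmax(logits, dim=1)      # values between 0 and 1

# Example: one grayscale 128x128 inspection image.
probs = PolygonalContainerNet()(torch.randn(1, 1, 128, 128))
print(probs.shape)  # torch.Size([1, 2])
```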

A Study on Defense and Attack Model for Cyber Command Control System based Cyber Kill Chain (사이버 킬체인 기반 사이버 지휘통제체계 방어 및 공격 모델 연구)

  • Lee, Jung-Sik; Cho, Sung-Young; Oh, Heang-Rok; Han, Myung-Mook
    • Journal of Internet Computing and Services / v.22 no.1 / pp.41-50 / 2021
  • The cyber kill chain is derived from the kill chain of traditional military terminology, which means "a continuous and cyclical process from detection to destruction of military targets requiring destruction, divided into several distinct actions." The kill chain evolved existing operational procedures to deal effectively with time-critical targets, such as nuclear weapons and missiles, that demand an immediate response because their location changes and their risk increases. It began with the military concept of defeating the attacker's intended purpose by preventing any one stage of the process from functioning. The basic concept of the cyber kill chain is accordingly that an attack performed by a cyber attacker consists of stages, and the attacker achieves the attack goal only when every stage succeeds. From a defense point of view, if a detailed response procedure is prepared for each stage, the chain of attacks is broken and the attack can be neutralized or delayed; conversely, from an attack point of view, if a specific procedure is prepared for each stage, the chain of attacks can succeed and the target can be neutralized. A cyber command and control system applies to both defense and attack: it should present defensive countermeasures that neutralize the enemy's kill chain when defending, and step-by-step procedures that neutralize the enemy when attacking. Therefore, this paper proposes a cyber kill chain model from the perspectives of both defense and attack of the cyber command and control system, and also presents a threat classification/analysis/prediction framework for the cyber command and control system from the defense perspective.
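
A toy sketch of the defensive reading of the kill chain follows: breaking any single stage neutralizes the whole chain. The stage names use the widely cited Lockheed Martin cyber kill chain; the paper's own stage model may differ:

```python
# Sketch: an attack succeeds only if every kill-chain stage completes.
# Blocking any one stage breaks the chain. Stage names follow the
# Lockheed Martin cyber kill chain as an illustrative assumption.

KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command_and_control", "actions_on_objectives",
]

def attack_succeeds(blocked_stages: set[str]) -> bool:
    """The chain holds only when no stage is blocked by a countermeasure."""
    return not any(stage in blocked_stages for stage in KILL_CHAIN)

print(attack_succeeds(set()))          # True: nothing blocked
print(attack_succeeds({"delivery"}))   # False: one broken link stops the chain
```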

Study of Feature Based Algorithm Performance Comparison for Image Matching between Virtual Texture Image and Real Image (가상 텍스쳐 영상과 실촬영 영상간 매칭을 위한 특징점 기반 알고리즘 성능 비교 연구)

  • Lee, Yoo Jin; Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1057-1068 / 2022
  • This paper compares the performance of combinations of feature-based matching algorithms, as a study to confirm whether images taken by a user can be matched with virtual texture images, with the goal of developing mobile-based real-time image positioning technology. Feature-based matching consists of extracting features, calculating descriptors, matching features between the two images, and finally eliminating mismatched features. For the algorithm combinations, we paired the feature-extraction step and the descriptor-calculation step from either the same or different matching algorithms. V-World 3D desktop was used for the virtual indoor texture images. Currently, V-World 3D desktop is reinforced with details such as vertical and horizontal protrusions and dents, and some levels carry textures from real images. Using this, we constructed a dataset with the virtual indoor texture data as reference images and real images shot at the same locations as target images. After constructing the dataset, matching success rate and matching processing time were measured, and on this basis the algorithm combination for matching real images with virtual images was determined. Based on the characteristics of each matching technique, the combined algorithms were applied to the constructed dataset to confirm applicability, and performance was also compared when rotation was additionally considered. The combination of Scale Invariant Feature Transform (SIFT) feature detection and descriptor calculation had the highest matching success rate, but the longest matching processing time. The combination of the Features from Accelerated Segment Test (FAST) feature detector with Oriented FAST and Rotated BRIEF (ORB) descriptor calculation achieved a matching success rate similar to the SIFT-SIFT combination with a short processing time. Furthermore, FAST-ORB remained superior even when a 10° rotation was applied to the dataset. Therefore, the FAST-ORB combination appears suitable for matching virtual texture images with real images.
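
A minimal OpenCV sketch of the best-performing combination reported above (FAST keypoint detection paired with ORB descriptors, matched by a Hamming-distance brute-force matcher) follows; the image file names are placeholders:

```python
# Sketch: FAST keypoints + ORB descriptors + brute-force Hamming matching.
# File names are placeholders for the reference (virtual texture) and
# target (real photo) images.
import cv2

def fast_orb_match(img1, img2, keep: int = 50):
    fast = cv2.FastFeatureDetector_create()
    orb = cv2.ORB_create()
    kp1 = fast.detect(img1, None)
    kp2 = fast.detect(img2, None)
    kp1, des1 = orb.compute(img1, kp1)   # ORB descriptors on FAST keypoints
    kp2, des2 = orb.compute(img2, kp2)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # mutual best match
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches[:keep]      # keep the strongest matches

ref = cv2.imread("virtual_texture.png", cv2.IMREAD_GRAYSCALE)  # reference image
tgt = cv2.imread("real_photo.png", cv2.IMREAD_GRAYSCALE)       # target image
kp1, kp2, matches = fast_orb_match(ref, tgt)
print(f"{len(matches)} matches kept")
```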

Development of Lateral Flow Immunofluorescence Assay Applicable to Lung Cancer (폐암 진단에 적용 가능한 측면 유동 면역 형광 분석법 개발)

  • Supianto, Mulya; Lim, Jungmin; Lee, Hye Jin
    • Applied Chemistry for Engineering / v.33 no.2 / pp.173-178 / 2022
  • A lateral flow immunoassay (LFIA) method using carbon nanodot@silica as the signaling material was developed for analyzing the concentration of retinol-binding protein 4 (RBP4), one of the lung cancer biomarkers. Instead of the antibodies typically used as bioreceptors on nitrocellulose membranes in protein-detection LFIA, aptamers were used, which are more economical, easier to store for a long time, and have strong affinities toward specific target proteins. A 5'-biotin-modified aptamer specific to RBP4 was first reacted with neutravidin, and the mixture was sprayed on the membrane to immobilize the aptamer in the porous membrane through the strong binding affinity between biotin and neutravidin. Carbon nanodot@silica nanoparticles with a blue fluorescent signal, covalently conjugated to the RBP4 antibody, were injected together with RBP4 in a lateral flow manner onto the surface-bound aptamer to form a sandwich complex. Surfactant concentration, ionic strength, and additional blocking reagents in the running buffer were varied to optimize the fluorescent signal from the sandwich complex, which correlates with the RBP4 concentration. A 10 mM Tris (pH 7.4) running buffer containing 150 mM NaCl and 0.05% Tween-20, with 0.6 M ethanolamine as a blocking agent, gave the optimum assay conditions for the carbon nanodot@silica-based LFIA. The results indicate that an aptamer, being more economical and easier to store for a long time, can be used as an alternative to an antibody as the immobilized probe in an LFIA device, which could serve as a point-of-care diagnosis kit for lung cancer.
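
As a hedged sketch of how such a signal-to-concentration relationship might be quantified, the code below fits a four-parameter logistic (4PL) calibration, a common choice for sandwich immunoassays; the data points are illustrative, not the paper's measurements:

```python
# Sketch: 4PL calibration relating LFIA fluorescence to RBP4 concentration.
# Standards and signals below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL: a = lower asymptote, d = upper asymptote, c = inflection, b = slope."""
    return d + (a - d) / (1 + (x / c) ** b)

conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])       # ng/mL standards
signal = np.array([3.0, 8.5, 14.0, 41.0, 62.0, 95.0])   # fluorescence (a.u.)

popt, _ = curve_fit(four_pl, conc, signal, p0=[1, 1, 5, 100], maxfev=10000)
a, b, c, d = popt

# Invert the fitted curve to read an unknown sample off the calibration:
unknown_signal = 30.0
est = c * ((a - d) / (unknown_signal - d) - 1) ** (1 / b)
print(f"estimated RBP4 ~ {est:.2f} ng/mL")
```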

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo; Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.221-241 / 2018
  • Deep learning has been getting attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolutional Neural Network (CNN). A CNN is characterized by dividing the input image into small sections, recognizing partial features, and combining them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been limited to image recognition and natural language processing; their use for business problems is still at an early research stage. If their performance is proven, they can be applied to traditional business problems such as marketing response prediction, fraud transaction detection, bankruptcy prediction, and so on. It is therefore a meaningful experiment to test whether business problems can be solved with deep learning, using the case of online shopping companies, which have big data, relatively easily identifiable customer behavior, and high utilization value. In online shopping especially, the competitive environment is changing rapidly and becoming more intense, so analyzing customer behavior to maximize profit is increasingly important. In this study, we propose a 'CNN model of heterogeneous information integration' as a way to improve the prediction of customer behavior in online shopping enterprises. The model combines structured and unstructured information and learns with a convolutional neural network on top of a multi-layer perceptron structure; to optimize performance, we design and evaluate three architectural components, 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and voice-of-customer (VOC) data from a specific online shopping company in Korea. Data extraction criteria were defined for 47,947 customers who registered at least one VOC in January 2011; for these customers we used their customer profiles, 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month. The experiment proceeds in two stages. In the first stage, we evaluate the three architectural components that affect the performance of the proposed model and select optimal parameters; we then evaluate the performance of the resulting model. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that unstructured information contributes to predicting customer behavior, and that CNNs can be applied to business problems as well as to image recognition and natural language processing. The experiments confirm that the CNN is effective in understanding and interpreting the contextual meaning of the text VOC data, and this empirical research on actual e-commerce data shows that very meaningful information for predicting customer behavior can be extracted from VOC data written in free text directly by customers. Finally, the various experiments provide useful information for future research on parameter selection and model performance.
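
A minimal sketch of the heterogeneous-integration idea follows, assuming illustrative dimensions: a 1-D CNN encodes the VOC text, its pooled features are concatenated with structured customer features, and a small head outputs one of the binary predictions (e.g. churn):

```python
# Sketch: fuse unstructured text (via a 1-D CNN) with structured features
# for one binary customer-behavior prediction. Vocabulary size, embedding
# width, and feature counts are illustrative assumptions.
import torch
import torch.nn as nn

class HeterogeneousCNN(nn.Module):
    def __init__(self, vocab=5000, emb=64, n_structured=20):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, 128, kernel_size=3, padding=1)
        self.head = nn.Sequential(
            nn.Linear(128 + n_structured, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, tokens, structured):
        x = self.embed(tokens).transpose(1, 2)           # (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values   # max-pool over time
        x = torch.cat([x, structured], dim=1)            # fuse both modalities
        return torch.sigmoid(self.head(x))               # P(positive class)

# Example: a batch of 4 customers, 100 VOC tokens + 20 structured features each.
model = HeterogeneousCNN()
p = model(torch.randint(0, 5000, (4, 100)), torch.randn(4, 20))
print(p.shape)  # torch.Size([4, 1])
```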

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan; An, Sangjun; Kim, Mintae; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for housing computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, and the cause is difficult to identify. Previous studies on predicting failures in data centers treated each server as a single, isolated state, without assuming interaction between devices. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, user errors, and so on; since such failures can be prevented in the early stages of data center facility construction, various solutions are already being developed. On the other hand, the causes of failures occurring inside servers are difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: one server's failure can cause failures in other servers, or be triggered by failures elsewhere. In other words, while existing studies analyzed failures on the assumption that servers do not affect one another, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes is defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed together within the sequences were selected, and their simultaneous failures were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that predicts the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning architecture was used, reflecting the fact that the contribution of each server to a complex failure differs; this architecture improves prediction accuracy by giving greater weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was treated first as a single-server state and then as a multi-server state, and the two were compared; the second experiment improved prediction accuracy for complex failures by optimizing the threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred, whereas under the multi-server assumption all five servers were correctly predicted to fail. This result supports the hypothesis that servers affect one another. Overall, prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and presents a model that predicts failures of servers in data centers. The results are expected to help prevent failures in advance.
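
A minimal pandas sketch of the simultaneity rule described above follows; device names are hypothetical, and the 5-minute window is approximated by the gap between consecutive failures:

```python
# Sketch: group failures on different devices into one complex-failure
# event when they occur within 5 minutes of each other. Device and
# failure names are illustrative assumptions.
import pandas as pd

events = pd.DataFrame({
    "device": ["net01", "srv07", "db03", "srv07", "win02"],
    "failure": ["NodeDown", "ServerDown", "DBMSDown", "ServerDown", "WASDown"],
    "time": pd.to_datetime([
        "2020-01-05 02:11", "2020-01-05 02:13", "2020-01-05 02:15",
        "2020-01-07 09:40", "2020-01-07 09:48",
    ]),
}).sort_values("time")

# A new event group starts whenever the gap to the previous failure
# exceeds the 5-minute window.
gap = events["time"].diff() > pd.Timedelta(minutes=5)
events["event_id"] = gap.cumsum()

# Complex failures = groups touching more than one device.
complex_events = events.groupby("event_id").filter(
    lambda g: g["device"].nunique() > 1)
print(complex_events)
```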