• Title/Summary/Keyword: Detection Effectiveness Analysis

Study on Anomaly Detection Method of Improper Foods using Import Food Big data (수입식품 빅데이터를 이용한 부적합식품 탐지 시스템에 관한 연구)

  • Cho, Sanggoo;Choi, Gyunghyun
    • The Journal of Bigdata / v.3 no.2 / pp.19-33 / 2018
  • Owing to the increase in FTAs, food trade, and the diverse preferences of consumers, food imports have grown at a tremendous rate every year. While inspection covers only about 20% of total food imports, the budget and manpower available for the government's import inspection program are reaching their limit. Sudden imported-food incidents can cause enormous social and economic losses. Therefore, a predictive system that forecasts the compliance of imported food and supports preemptive measures would greatly improve the efficiency and effectiveness of import safety control. A huge amount of data has already been accumulated, and processed foods account for 75% of total food imports. Big data analysis and analytical techniques can be applied to extract meaningful information from this large amount of data. Unfortunately, few studies have analyzed imported food and its implications using this big data. In this context, this study applied a variety of machine learning classification algorithms and suggested a data preprocessing method based on the generation of new derived variables to improve model accuracy. In addition, the present study compared the performance of the predictive classification algorithms against general base classifiers. Among the various base classifiers, the Gaussian Naïve Bayes prediction model showed the best performance in detecting and predicting the non-conformity of imported food. In the future, the anomaly detection model based on Gaussian Naïve Bayes is expected to be applied in practice; the predictive model will reduce the burden of import food inspection and increase the non-conformity detection rate, which will greatly improve the efficiency of food import safety control and the speed of import customs clearance.
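
As a rough illustration of the base-classifier approach highlighted above, the sketch below fits a Gaussian Naïve Bayes model to a synthetic import-inspection dataset with one derived variable. The feature names, derived variable, and labels are hypothetical; the paper's actual preprocessing and variables are not reproduced here.

```python
# Minimal sketch of a Gaussian Naive Bayes base classifier on synthetic data.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: declared value, shipment weight, importer history score
X = rng.normal(size=(n, 3))
# Hypothetical derived variable: declared value per unit weight
X = np.hstack([X, X[:, [0]] / (np.abs(X[:, [1]]) + 1.0)])
# Synthetic label: 1 = non-conforming shipment (about 10% of cases)
y = (rng.random(n) < 0.1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), zero_division=0))
```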

A LiDAR-based Visual Sensor System for Automatic Mooring of a Ship (선박 자동계류를 위한 LiDAR기반 시각센서 시스템 개발)

  • Kim, Jin-Man;Nam, Taek-Kun;Kim, Heon-Hui
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.6 / pp.1036-1043 / 2022
  • This paper discusses the development of a visual sensor that can be installed in an automatic mooring device to detect the berthing condition of a vessel. Although the ship's speed is controlled and its position confirmed to prevent accidents during berthing, ship collisions still occur at piers every year, causing great economic and environmental damage. Therefore, it is important to develop a visual system that can quickly obtain information on the speed and position of the vessel to ensure the safety of the berthing vessel. In this study, a visual sensor was developed to observe a ship through images during berthing and to properly check the ship's status under the surrounding environmental conditions. To establish the requirements of the visual sensor, the characteristics of existing sensors were analyzed in terms of the information they provide, namely detection range, real-time capability, accuracy, and precision. Based on this analysis, we developed a 3D visual module that acquires object information in real time by carrying out conceptual designs of a LiDAR (Light Detection And Ranging)-type 3D visual system, a driving mechanism, and a position and force controller for the motion tilting system. Finally, a performance evaluation of the control system and a scan speed test were carried out, and the effectiveness of the developed system was confirmed through experiments.
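
To make the sensing goal concrete, the sketch below estimates a hull's standoff distance and approach speed from two consecutive LiDAR scans. The point clouds, noise level, and scan interval are synthetic assumptions; the paper's actual 3D visual module, tilting mechanism, and controllers are not modeled.

```python
# Hedged sketch: berthing distance and approach speed from two LiDAR scans.
import numpy as np

def hull_distance(points_xy):
    """Closest range (m) from the sensor at the origin to the detected hull points."""
    return np.min(np.linalg.norm(points_xy, axis=1))

scan_interval_s = 0.5                       # assumed time between scans
rng = np.random.default_rng(1)
scan_t0 = rng.normal(loc=[12.0, 0.0], scale=0.05, size=(200, 2))  # synthetic hull points
scan_t1 = scan_t0 - np.array([0.1, 0.0])    # hull has moved about 0.1 m closer

d0, d1 = hull_distance(scan_t0), hull_distance(scan_t1)
approach_speed = (d0 - d1) / scan_interval_s
print(f"distance: {d1:.2f} m, approach speed: {approach_speed:.2f} m/s")
```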

Efficiency and accuracy of artificial intelligence in the radiographic detection of periodontal bone loss: A systematic review

  • Asmhan Tariq;Fatmah Bin Nakhi;Fatema Salah;Gabass Eltayeb;Ghada Jassem Abdulla;Noor Najim;Salma Ahmed Khedr;Sara Elkerdasy;Natheer Al-Rawi;Sausan Alkawas;Marwan Mohammed;Shishir Ram Shetty
    • Imaging Science in Dentistry / v.53 no.3 / pp.193-198 / 2023
  • Purpose: Artificial intelligence (AI) is poised to play a major role in medical diagnostics. Periodontal disease is one of the most common oral diseases. The early diagnosis of periodontal disease is essential for effective treatment and a favorable prognosis. This study aimed to assess the effectiveness of AI in diagnosing periodontal bone loss through radiographic analysis. Materials and Methods: A literature search involving 5 databases (PubMed, ScienceDirect, Scopus, Health and Medical Collection, Dentistry and Oral Sciences) was carried out. A specific combination of keywords was used to obtain the articles. The PRISMA guidelines were used to filter eligible articles. The study design, sample size, type of AI software, and the results of each eligible study were analyzed. The CASP diagnostic study checklist was used to evaluate the evidence strength score. Results: Seven articles were eligible for review according to the PRISMA guidelines. Out of the 7 eligible studies, 4 had strong CASP evidence strength scores (7-8/9). The remaining studies had intermediate CASP evidence strength scores (3.5-6.5/9). The highest area under the curve among the reported studies was 94%, the highest F1 score was 91%, and the highest specificity and sensitivity were 98.1% and 94%, respectively. Conclusion: AI-based detection of periodontal bone loss using radiographs is an efficient method. However, more clinical studies need to be conducted before this method is introduced into routine dental practice.
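
For reference, the sketch below computes the four metrics quoted in the review (AUC, F1 score, sensitivity, specificity) on purely synthetic predictions; the numbers it prints are illustrative and unrelated to the seven reviewed studies.

```python
# Illustrative computation of AUC, F1, sensitivity, and specificity on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                       # 1 = bone loss present (synthetic)
scores = np.clip(y_true * 0.7 + rng.normal(0.3, 0.25, 200), 0, 1)  # synthetic model scores
y_pred = (scores >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC        :", roc_auc_score(y_true, scores))
print("F1         :", f1_score(y_true, y_pred))
print("Sensitivity:", tp / (tp + fn))
print("Specificity:", tn / (tn + fp))
```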

Class Classification and Validation of a Musculoskeletal Risk Factor Dataset for Manufacturing Workers (제조업 노동자 근골격계 부담요인 데이터셋 클래스 분류와 유효성 검증)

  • Young-Jin Kang;;;Jeong, Seok Chan
    • The Journal of Bigdata / v.8 no.1 / pp.49-59 / 2023
  • There are various items in the safety and health standards of the manufacturing industry, but they can be divided into work-related diseases and musculoskeletal diseases according to the standards for sickness and accident victims. Musculoskeletal diseases occur frequently in manufacturing and can lead to decreased labor productivity and weakened competitiveness. In this paper, to detect the musculoskeletal risk factors of manufacturing workers, we defined the musculoskeletal load factor analysis, harmful load working postures, and keypoint matching, and constructed a dataset for Artificial Intelligence (AI) training. To verify the effectiveness of the proposed dataset, AI algorithms such as YOLO, Lite-HRNet, and EfficientNet were used for training and validation. In our experimental results, the human detection accuracy is 99%, the keypoint matching accuracy for the detected person is 88% at AP@0.5, and the accuracy of the working-posture evaluation obtained by integrating the inferred keypoint positions is 72.2% for LEGS, 85.7% for NECK, 81.9% for TRUNK, 79.8% for UPPERARM, and 92.7% for LOWERARM. These results highlight the need for further research on deep learning-based prevention of musculoskeletal diseases.
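
As a simplified illustration of turning inferred keypoints into a working-posture rating, the sketch below grades trunk flexion from two keypoints. The keypoint layout, angle computation, and threshold bands are assumptions for illustration, not the paper's actual evaluation rules or dataset classes.

```python
# Hedged sketch: grading trunk flexion from hip and shoulder keypoints.
import numpy as np

def angle_from_vertical(hip, shoulder):
    """Angle (degrees) of the hip->shoulder segment relative to vertical."""
    v = np.asarray(shoulder, float) - np.asarray(hip, float)
    vertical = np.array([0.0, -1.0])              # image y-axis points downward
    cos = v @ vertical / (np.linalg.norm(v) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def trunk_grade(flexion_deg):
    # Hypothetical ergonomic bands, for illustration only.
    if flexion_deg < 20:
        return "neutral"
    if flexion_deg < 60:
        return "moderate flexion"
    return "severe flexion"

hip, shoulder = (100, 300), (140, 180)            # example pixel coordinates
flexion = angle_from_vertical(hip, shoulder)
print(f"trunk flexion {flexion:.1f} deg -> {trunk_grade(flexion)}")
```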

eBPF-based Container Activity Analysis System (eBPF를 활용한 컨테이너 활동 분석 시스템)

  • Jisu Kim;Jaehyun Nam
    • The Transactions of the Korea Information Processing Society / v.13 no.9 / pp.404-412 / 2024
  • The adoption of cloud environments has revolutionized application deployment and management, with microservices architecture and container technology serving as key enablers of this transformation. However, these advancements have introduced new challenges, particularly the necessity to precisely understand service interactions and conduct detailed analyses of internal processes within complex service environments such as microservices. Traditional monitoring techniques have proven inadequate for analyzing these complex environments, leading to increased interest in eBPF (extended Berkeley Packet Filter) technology as a solution. eBPF is a powerful tool capable of real-time event collection and analysis within the Linux kernel, enabling the monitoring of various events, including file system activities in kernel space. This paper proposes an eBPF-based container activity analysis system, which monitors events occurring in the kernel space of both containers and host systems in real time and analyzes the collected data. Furthermore, this paper conducts a comparative analysis of prominent eBPF-based container monitoring systems (Tetragon, Falco, and Tracee), focusing on aspects such as event detection methods, default policy application, event type identification, and system call blocking and alert generation. Through this evaluation, the paper identifies the strengths and weaknesses of each system and determines the features necessary for effective container process monitoring and restriction. In addition, the proposed system is evaluated in terms of container metadata collection, internal activity monitoring, and system metadata integration, demonstrating the effectiveness and future potential of eBPF-based monitoring systems.
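
As a minimal taste of the kind of kernel-space event collection described above, the sketch below uses the bcc toolkit to trace openat() syscalls and print the opened path (assuming bcc is installed and the script runs as root). It is a host-wide trace only; the paper's container metadata correlation, policies, and system comparison are not reproduced here.

```python
#!/usr/bin/env python3
# Minimal bcc-based sketch: trace openat() syscalls and print the opened path.
from bcc import BPF  # requires the bcc toolkit and root privileges

prog = r"""
TRACEPOINT_PROBE(syscalls, sys_enter_openat) {
    char fname[256];
    bpf_probe_read_user_str(&fname, sizeof(fname), args->filename);
    // trace_pipe output already prefixes each line with the task name and PID
    bpf_trace_printk("openat: %s\n", fname);
    return 0;
}
"""

b = BPF(text=prog)
print("Tracing openat() syscalls... Ctrl-C to stop")
b.trace_print()  # stream formatted lines from the kernel trace pipe
```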

RPCA-GMM for Speaker Identification (화자식별을 위한 강인한 주성분 분석 가우시안 혼합 모델)

  • 이윤정;서창우;강상기;이기용
    • The Journal of the Acoustical Society of Korea / v.22 no.7 / pp.519-527 / 2003
  • Speech is strongly affected by outliers introduced by unexpected conditions such as additive background noise, changes in the speaker's utterance pattern, and voice detection errors. Such outliers may result in severe degradation of speaker recognition performance. In this paper, we propose a Gaussian mixture model based on robust principal component analysis (RPCA-GMM) using M-estimation to address both outliers and the high dimensionality of training feature vectors in speaker identification. First, a new feature vector of reduced dimension is obtained by robust PCA based on M-estimation. The robust PCA projects the original feature vector onto the lower-dimensional linear subspace spanned by the leading eigenvectors of the feature covariance matrix. Second, a GMM with diagonal covariance matrices is trained on these transformed feature vectors. We performed speaker identification experiments to show the effectiveness of the proposed method, comparing the proposed method (RPCA-GMM) with standard PCA and the conventional diagonal-covariance GMM. For every 2% increase in the proportion of outliers, the proposed method maintains almost the same speaker identification rate, degrading by only 0.03%, while the conventional GMM and the PCA-based method degrade by 0.65% and 0.55%, respectively. This shows that our method is more robust to the presence of outliers.
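
The sketch below mirrors the overall pipeline in simplified form: project feature vectors onto a lower-dimensional subspace, fit a diagonal-covariance GMM per speaker, and identify the speaker whose model gives the highest likelihood. Standard PCA stands in for the paper's M-estimation-based robust PCA, and the MFCC-like features are synthetic.

```python
# Simplified PCA + diagonal-covariance GMM speaker identification sketch.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 24-dimensional "MFCC-like" training features for three speakers
train = {spk: rng.normal(loc=spk, size=(500, 24)) for spk in range(3)}

pca = PCA(n_components=12).fit(np.vstack(list(train.values())))
models = {
    spk: GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
            .fit(pca.transform(feats))
    for spk, feats in train.items()
}

test = rng.normal(loc=1, size=(200, 24))      # synthetic utterance from speaker 1
scores = {spk: gmm.score(pca.transform(test)) for spk, gmm in models.items()}
print("identified speaker:", max(scores, key=scores.get))
```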

Coastal Erosion Time-series Analysis of the Littoral Cell GW36 in Gangwon Using Seahawk Airborne Bathymetric LiDAR Data (씨호크 항공수심라이다 데이터를 활용한 연안침식 시계열 분석 - 강원도 표사계 GW36을 중심으로 -)

  • Lee, Jaebin;Kim, Jiyoung;Kim, Gahyun;Hur, Hyunsoo;Wie, Gwangjae
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1527-1539 / 2022
  • As coastal erosion on the east coast accelerates, the need for scientific, quantitative monitoring technology covering a wide area is increasing. The traditional method of observing coastal change has been precision monitoring based on field surveys, but it can only be applied to a small area. The airborne bathymetric Light Detection And Ranging (LiDAR) system is a technology that enables economical surveying of coastal and seabed topography over a wide area. In particular, it has the advantage of constructing topographical data for the intertidal zone, which is a major area of interest in coastal erosion monitoring. In this study, a time-series analysis of coastal and seabed topography acquired in Aug. 2021 and Mar. 2022 over the littoral cell GW36 in Gangwon was performed using the Seahawk Airborne Bathymetric LiDAR (ABL) system. We quantitatively monitored topographical changes by measuring changes in baseline length, shoreline position, and the Digital Terrain Model (DTM). Through this, the effectiveness of the ABL surveying technique for coastal erosion monitoring was confirmed.
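
The sketch below shows the basic DTM-differencing step behind such a time-series comparison: subtract two gridded surfaces, flag cells whose elevation dropped beyond a threshold, and estimate the net volume change. The grids, cell size, and threshold are synthetic assumptions; the actual Seahawk ABL point clouds are not used.

```python
# Hedged sketch: DTM differencing for erosion mapping on synthetic grids.
import numpy as np

cell_size = 1.0                                   # grid resolution in metres (assumed)
rng = np.random.default_rng(0)
dtm_2021 = rng.normal(loc=2.0, scale=0.5, size=(100, 100))              # elevations (m)
dtm_2022 = dtm_2021 - rng.normal(loc=0.05, scale=0.1, size=dtm_2021.shape)

dz = dtm_2022 - dtm_2021                          # negative values indicate lowering
eroded = dz < -0.1                                # change beyond an assumed 0.1 m threshold
volume_change = dz.sum() * cell_size**2           # net volume change (m^3)
print(f"eroded cells: {eroded.sum()}, net volume change: {volume_change:.1f} m^3")
```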

Study on the Possibility of Estimating Surface Soil Moisture Using Sentinel-1 SAR Satellite Imagery Based on Google Earth Engine (Google Earth Engine 기반 Sentinel-1 SAR 위성영상을 이용한 지표 토양수분량 산정 가능성에 관한 연구)

  • Younghyun Cho
    • Korean Journal of Remote Sensing / v.40 no.2 / pp.229-241 / 2024
  • With the advancement of big data processing on cloud platforms, access to, processing of, and analysis of large-volume data such as satellite imagery have improved significantly. In this study, the change detection method, a relatively simple technique for retrieving soil moisture, was applied to the backscattering coefficients of pre-processed Sentinel-1 synthetic aperture radar (SAR) imagery on Google Earth Engine (GEE), one such platform, to estimate surface soil moisture at six observatories within the Yongdam Dam watershed in South Korea for the period 2015 to 2023, as well as the watershed average. A correlation analysis was then conducted between the estimated values and in-situ measurements, along with an examination of the applicability of GEE. The results revealed that the surface soil moisture estimated for the small areas around the individual observatories showed low correlations of 0.1 to 0.3 for both VH and VV polarizations, likely due to the inherent measurement accuracy of the SAR imagery and variations in data characteristics. However, the watershed-average surface soil moisture, derived by extracting the average SAR backscattering coefficient over the entire watershed and applying a moving average to mitigate data uncertainty and variability, showed significantly improved correlations of about 0.5. Although the pre-processed SAR data limit how directly the desired analyses can be carried out, the efficient processing of extensive satellite imagery allows soil moisture to be estimated and evaluated over broad scales, such as long-term watershed averages, which highlights the effectiveness of GEE in handling vast satellite imagery datasets for soil moisture assessment. Based on this, it is anticipated that GEE can be effectively utilized to assess long-term variations in average soil moisture in major dam watersheds, in conjunction with soil moisture observation data from various locations across the country.
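
A minimal sketch of the change-detection scaling described above is given below: relative surface soil moisture is obtained by scaling each backscatter value between the driest and wettest observations in the series, and a short moving average damps speckle and short-term variability. The backscatter series, its units (dB), and the window length are assumptions; the GEE pre-processing chain is not reproduced.

```python
# Minimal change-detection sketch for relative surface soil moisture.
import numpy as np

sigma0_vv_db = np.array([-14.2, -13.0, -12.1, -13.5, -11.8, -10.9, -12.4, -11.2])  # synthetic VV backscatter (dB)

sigma_dry, sigma_wet = sigma0_vv_db.min(), sigma0_vv_db.max()
rel_soil_moisture = (sigma0_vv_db - sigma_dry) / (sigma_wet - sigma_dry)  # 0 (dry) .. 1 (wet)

window = 3                                         # assumed moving-average window
smoothed = np.convolve(rel_soil_moisture, np.ones(window) / window, mode="valid")
print(np.round(smoothed, 2))
```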

Plasma Amino Acid and Urine Organic Acid Analyses in Leigh Syndrome (리증후군에서의 혈장 아미노산 및 소변 유기산 분석)

  • Na, Ji-Hoon;Lee, Hyunjoo;Lee, Hae-in;Huh, Euira;Lee, Young-Mock
    • Journal of The Korean Society of Inherited Metabolic disease / v.22 no.1 / pp.28-36 / 2022
  • Purpose: Detection of abnormal metabolites in plasma amino acid (PAA) and urine organic acid (UOA) analyses has been used to diagnose clinical mitochondrial diseases such as Leigh syndrome. In this study, the diagnostic value and effectiveness of PAA and UOA analyses were reviewed. Methods: This was a retrospective study of patients with Leigh syndrome diagnosed between 2003 and 2018 in a single tertiary care center. Through whole mitochondrial genome sequencing and a nuclear DNA-associated mitochondrial gene panel analysis, 19 patients were found to be positive for mitochondrial DNA (mtDNA) mutation-associated Leigh syndrome and 57 patients were negative. Their PAA and UOA analysis results were then compared. Results: In the comparison of PAA and UOA results between the two groups, no abnormal metabolites showed obvious differences between the mtDNA mutation-positive and mtDNA mutation-negative Leigh syndrome groups. Conclusion: PAA and UOA analyses are inappropriate test methods for diagnosing Leigh syndrome or for screening for mtDNA mutation-associated Leigh syndrome. However, UOA analysis might still be a suitable screening test for Leigh syndrome.

Oil Fluorescence Spectrum Analysis for the Design of Fluorimeter (형광 광도계 설계인자 도출을 위한 기름의 형광 스펙트럼 분석)

  • Oh, Sangwoo;Seo, Dongmin;Ann, Kiyoung;Kim, Jaewoo;Lee, Moonjin;Chun, Taebyung;Seo, Sungkyu
    • Journal of the Korean Society for Marine Environment & Energy / v.18 no.4 / pp.304-309 / 2015
  • To evaluate the degree of contamination caused by oil spill accidents at sea, in-situ sensors based on scientific measurement methods are needed on site. Sensors based on the fluorescence detection principle can provide useful data such as oil concentration. However, these sensors are usually composed of an ultraviolet (UV) light source such as a UV mercury lamp, multiple excitation/emission filters, and an optical sensor, typically a photomultiplier tube (PMT). As a result, the sensing platform is too large to handle in the field during an oil spill, and its total cost is extremely high. To overcome these drawbacks, we designed a compact, cost-effective fluorimeter for oil spill detection. Before the detailed design process, we conducted experiments to measure the excitation and emission spectra of oils using five kinds of crude oil and three kinds of processed oil, and a fluorescence spectrometer was used to analyze the excitation and emission spectra of the oil samples. We compared the spectra and identified the common excitation and emission regions. In the experiments, the average gap between the maximum excitation and emission peak wavelengths was about 50 nm in every case. In the experiments with fixed excitation wavelengths of 365 nm and 405 nm, the emission intensity was weaker than with excitation at 280 nm and 325 nm. Therefore, if light sources with wavelengths of 365 nm or 405 nm are used in the design of the fluorimeter, the optical sensor must be sensitive enough to cover the weak emission intensity. From these results, we can define the important design factors for selecting the effective wavelengths of the light source, the photodetector, and the filters.
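
As a small illustration of the peak analysis described above, the sketch below locates the excitation and emission peaks of a spectrum and reports the gap between them (the abstract reports an average gap of roughly 50 nm). The Gaussian-shaped spectra are synthetic, not measured oil data.

```python
# Illustrative sketch: locating excitation/emission peaks and their wavelength gap.
import numpy as np

wavelengths = np.arange(250, 501)                                  # nm
excitation = np.exp(-0.5 * ((wavelengths - 325) / 15.0) ** 2)      # synthetic excitation spectrum
emission   = np.exp(-0.5 * ((wavelengths - 375) / 20.0) ** 2)      # synthetic emission spectrum

ex_peak = wavelengths[np.argmax(excitation)]
em_peak = wavelengths[np.argmax(emission)]
print(f"excitation peak {ex_peak} nm, emission peak {em_peak} nm, gap {em_peak - ex_peak} nm")
```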