• Title/Summary/Keyword: Pre-Classification


A Non-annotated Recurrent Neural Network Ensemble-based Model for Near-real Time Detection of Erroneous Sea Level Anomaly in Coastal Tide Gauge Observation (비주석 재귀신경망 앙상블 모델을 기반으로 한 조위관측소 해수위의 준실시간 이상값 탐지)

  • LEE, EUN-JOO;KIM, YOUNG-TAEG;KIM, SONG-HAK;JU, HO-JEONG;PARK, JAE-HUN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.26 no.4
    • /
    • pp.307-326
    • /
    • 2021
  • Real-time sea level observations from tide gauges include missing and erroneous values; the latter can be classified as abnormal values by a quality control procedure. Although the 3𝜎 (three standard deviations) rule has generally been applied to eliminate them, it is difficult to apply to sea-level data, where extreme values can occur due to weather events and the like, and erroneous values can exist even within the 3𝜎 range. The artificial intelligence model set designed in this study consists of non-annotated recurrent neural networks and ensemble techniques that do not require pre-labeling of the abnormal values. The developed model can identify an erroneous value within 20 minutes of the tide gauge recording an abnormal sea level. The validated model separates normal and abnormal values well, both in normal conditions and during weather events. It was also confirmed that abnormal values can be detected even in years whose sea level data were not used for training. The artificial neural network algorithm utilized in this study is not limited to coastal sea levels, and hence can be extended to detect erroneous values in various oceanic and atmospheric data.
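The conventional 3𝜎 rule that the abstract contrasts with the neural approach can be sketched in a few lines. The function and the toy sea-level series below are illustrative, not from the paper:

```python
from statistics import mean, stdev

def three_sigma_flags(series):
    """Flag values outside mean +/- 3 * (sample) standard deviation."""
    m, s = mean(series), stdev(series)
    return [abs(x - m) > 3 * s for x in series]

# A single gross outlier inflates sigma enough to hide itself: 9.0 m in a
# ~1 m series still falls inside the 3-sigma band, which illustrates the
# abstract's point that erroneous values can exist even within that range.
levels = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 9.0]
print(three_sigma_flags(levels))  # no value is flagged
```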

Development of A Quantitative Risk Assessment Model by BIM-based Risk Factor Extraction - Focusing on Falling Accidents - (BIM 기반 위험요소 도출을 통한 정량적 위험성 평가 모델 개발 - 떨어짐 사고를 중심으로 -)

  • Go, Huijea;Hyun, Jihun;Lee, Juhee;Ahn, Joseph
    • Korean Journal of Construction Engineering and Management
    • /
    • v.23 no.4
    • /
    • pp.15-25
    • /
    • 2022
  • As the construction industry has the highest incidence and mortality of serious accidents, various efforts are being made in Korea to reduce them. Among these, risk assessment serves as input for accident reduction measures and for the evaluation of risk factors at the construction stage. However, existing risk assessment involves the subjectivity of the assessor and is poorly suited to domestic construction sites. This study established a DB classification system for risk assessment, with the aim of identifying risks early and removing them in advance by quantitatively deriving risk factors using BIM, and presents a methodology for risk assessment using BIM. Removing risks in advance increases the safety of construction workers and reduces additional costs in safety management. In addition, since the approach can be applied to new construction methods, it improves the understanding of project participants and serves as a communication tool. This study proposes a framework for deriving quantitative risks based on BIM, which can serve as a base technology for BIM-based risk assessment in the future.
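One way to picture BIM-based extraction of falling-accident risk factors is a rule that scans model elements for hazardous work heights. The 2 m threshold and the element fields below are hypothetical; the abstract does not state its criteria:

```python
def falling_risk_elements(elements, height_threshold_m=2.0):
    """Flag BIM elements whose work height meets or exceeds a threshold
    as falling-accident risk factors (illustrative rule, not the paper's)."""
    return [e["id"] for e in elements if e["work_height_m"] >= height_threshold_m]

# hypothetical elements extracted from a BIM model
elements = [{"id": "slab-03", "work_height_m": 6.0},
            {"id": "wall-01", "work_height_m": 1.2},
            {"id": "beam-07", "work_height_m": 3.5}]
print(falling_risk_elements(elements))  # -> ['slab-03', 'beam-07']
```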

Classification of Diabetic Retinopathy using Mask R-CNN and Random Forest Method

  • Jung, Younghoon;Kim, Daewon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.12
    • /
    • pp.29-40
    • /
    • 2022
  • In this paper, we studied a system that detects and analyzes the pathological features of diabetic retinopathy using Mask R-CNN, a deep learning technique, together with a Random Forest classifier, and automatically diagnoses the disease. Diabetic retinopathy can be diagnosed from fundus images taken with special equipment, whose brightness, color tone, and contrast may vary depending on the device. An automatic diagnosis system using artificial intelligence can therefore help ophthalmologists make medical judgments. The system detects pathological features such as microvascular perfusion and retinal hemorrhage using the Mask R-CNN technique, and diagnoses normal and abnormal conditions of the eye with a Random Forest classifier after pre-processing. To improve the detection performance of the Mask R-CNN algorithm, image augmentation was performed during training. Dice similarity coefficients and mean accuracy were used as evaluation indicators of detection accuracy. With the Faster R-CNN method as a control group, the Mask R-CNN method in this study achieved an average of 90% accuracy by Dice coefficient and 91% by mean accuracy. When diabetic retinopathy was diagnosed by training a Random Forest classifier on the detected pathological symptoms, the accuracy was 99%.
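The second-stage classification can be illustrated with a toy bagged ensemble of one-feature decision stumps, a minimal stand-in for the Random Forest stage (bootstrap sampling plus majority voting). The per-image features, thresholds, and data below are hypothetical:

```python
import random

def train_stumps(rows, labels, n_stumps=25, seed=0):
    """Train bootstrapped one-feature decision stumps (a toy stand-in
    for a Random Forest: bagging + majority voting, no tree depth)."""
    rng = random.Random(seed)
    stumps, n_feats = [], len(rows[0])
    for _ in range(n_stumps):
        idx = [rng.randrange(len(rows)) for _ in rows]  # bootstrap sample
        f = rng.randrange(n_feats)                      # random feature
        thr = sum(rows[i][f] for i in idx) / len(idx)   # split at the mean
        hi = [labels[i] for i in idx if rows[i][f] > thr]
        lo = [labels[i] for i in idx if rows[i][f] <= thr]
        hi_lab = max(set(hi), key=hi.count) if hi else 0
        lo_lab = max(set(lo), key=lo.count) if lo else 0
        stumps.append((f, thr, hi_lab, lo_lab))
    return stumps

def predict(stumps, row):
    """Majority vote over all stumps."""
    votes = [hi if row[f] > thr else lo for f, thr, hi, lo in stumps]
    return max(set(votes), key=votes.count)

# hypothetical per-image features: (haemorrhage count, lesion area fraction)
X = [(0, 0.00), (1, 0.01), (0, 0.02), (6, 0.30), (8, 0.45), (5, 0.25)]
y = [0, 0, 0, 1, 1, 1]  # 0 = normal, 1 = diabetic retinopathy
model = train_stumps(X, y)
print(predict(model, (7, 0.40)), predict(model, (0, 0.00)))
```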

Modified tunneling technique for root coverage of anterior mandible using minimal soft tissue harvesting and volume-stable collagen matrix: a retrospective study

  • Lee, Yoonsub;Lee, Dajung;Kim, Sungtae;Ku, Young;Rhyu, In-Chul
    • Journal of Periodontal and Implant Science
    • /
    • v.51 no.6
    • /
    • pp.398-408
    • /
    • 2021
  • Purpose: In this study, we aimed to evaluate the clinical validity of the modified tunneling technique using minimal soft tissue harvesting and volume-stable collagen matrix in the anterior mandible. Methods: In total, 27 anterior mandibular teeth and palatal donor sites in 17 patients with ≥1 mm of gingival recession (GR) were analyzed before and after root coverage. For the recipient sites, vertical vestibular incisions were made in the interdental area and a subperiosteal tunnel was created with an elevator. After both sides of the marginal gingiva were tied to one another, a prepared connective tissue graft and volume-stable collagen matrix were inserted through the vestibular vertical incision and were fixed with resorbable suture material. The root coverage results of the recipient site were measured at baseline (T0), 3 weeks (T3), 12 weeks (T12), and the latest visit (Tl). For palatal donor sites, a free gingival graft from a pre-decided area avoiding the main trunk of the greater palatine artery was harvested using a prefabricated surgical template at a depth of 2 mm after de-epithelization using a rotating bur. In each patient, the clinical and volumetric changes at the donor sites between T0 and T3 were measured. Results: During an average follow-up of 14.5 months, teeth with denuded root lengths of 1-3 mm (n=12), 3-6 mm (n=11), and >6 mm (n=2) achieved root coverage of 97.01%±7.65%, 86.70%±5.66%, and 82.53%±1.39%, respectively. Miller classification I (n=12), II (n=10), and III (n=3) teeth showed mean coverage rates of 97.01%±7.65%, 86.91%±5.90%, and 83.19%±1.62%, respectively. At the donor sites, an average defect depth of 1.41 mm (70.5%) recovered in 3 weeks, and the wounds were epithelized completely in all cases. Conclusions: The modified tunneling technique in this study is a promising treatment modality for overcoming GR in the anterior mandible.

BEEF MEAT TRACEABILITY. CAN NIRS HELP?

  • Cozzolino, D.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1246-1246
    • /
    • 2001
  • The quality of meat is highly variable in many properties. This variability originates from both animal production and meat processing. At the pre-slaughter stage, animal factors such as breed, sex and age contribute to this variability; environmental factors include feeding, rearing, transport and conditions just before slaughter (Hildrum et al., 1995). Meat can be presented in a variety of forms, each offering different opportunities for adulteration and contamination. This has imposed great pressure on the food manufacturing industry to guarantee the safety of meat. Tissue and muscle speciation of flesh foods, as well as speciation of animal-derived by-products fed to all classes of domestic animals, are now perhaps the most important uncertainty which the food industry must resolve to allay consumer concern. Recently, there has been a demand for rapid and low-cost methods of direct quality measurement in both food and food ingredients (including high performance liquid chromatography (HPLC), thin layer chromatography (TLC), enzymatic and immunological tests (e.g. the ELISA test) and physical tests) to establish their authenticity and hence guarantee the quality of products manufactured for consumers (Holland et al., 1998). The use of Near Infrared Reflectance Spectroscopy (NIRS) for the rapid, precise and non-destructive analysis of a wide range of organic materials has been comprehensively documented (Osborne et al., 1993). Most of the established methods have involved the development of NIRS calibrations for the quantitative prediction of composition in meat (Ben-Gera and Norris, 1968; Lanza, 1983; Clark and Short, 1994). This was a rational strategy to pursue during the initial stages of its application, given the type of equipment available, the state of development of the emerging discipline of chemometrics and the overwhelming commercial interest in solving such problems (Downey, 1994). One of the advantages of NIRS technology is not only to assess chemical structures through the analysis of the molecular bonds in the near infrared spectrum, but also to build an optical model characteristic of the sample which behaves like the “finger print” of the sample. This opens the possibility of using spectra to determine complex attributes of organic structures, which are related to molecular chromophores, organoleptic scores and sensory characteristics (Hildrum et al., 1994, 1995; Park et al., 1998). In addition, the application of statistical techniques like principal component or discriminant analysis provides the possibility to understand the optical properties of the sample and make a classification without the chemical information. The objectives of the present work were: (1) to examine two methods of sample presentation to the instrument (intact and minced) and (2) to explore the use of principal component analysis (PCA) and Soft Independent Modelling of Class Analogy (SIMCA) to classify muscles by quality attributes. Seventy-eight (n = 78) beef muscles (m. longissimus dorsi) from the Hereford breed of cattle were used. The samples were scanned in a NIRS monochromator instrument (NIR Systems 6500, Silver Spring, MD, USA) in reflectance mode (log 1/R). Both intact and minced presentation to the instrument were explored. Qualitative analysis of the optical information through PCA and SIMCA showed differences between muscles resulting from two different feeding systems.
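The classification-without-chemistry idea can be sketched by projecting spectra onto the first principal component, here computed with a plain power iteration rather than a PCA package. The toy "spectra" for two feeding systems below are illustrative, not NIRS data:

```python
def pc1_scores(spectra, iters=200):
    """Scores of each sample on the first principal component of the
    mean-centered data, found by power iteration on X^T X."""
    n, p = len(spectra), len(spectra[0])
    means = [sum(row[j] for row in spectra) / n for j in range(p)]
    X = [[row[j] - means[j] for j in range(p)] for row in spectra]
    v = [1.0] * p
    for _ in range(iters):
        s = [sum(X[i][j] * v[j] for j in range(p)) for i in range(n)]   # X v
        w = [sum(X[i][j] * s[i] for i in range(n)) for j in range(p)]   # X^T X v
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [sum(X[i][j] * v[j] for j in range(p)) for i in range(n)]

# toy reflectance patterns for two feeding systems (three samples each)
feed_a = [[1.00, 1.10, 1.20, 1.10, 1.00],
          [1.02, 1.12, 1.22, 1.12, 1.02],
          [0.98, 1.08, 1.18, 1.08, 0.98]]
feed_b = [[1.50, 1.62, 1.80, 1.62, 1.50],
          [1.52, 1.64, 1.82, 1.64, 1.52],
          [1.48, 1.60, 1.78, 1.60, 1.48]]
scores = pc1_scores(feed_a + feed_b)
print(scores[:3], scores[3:])  # the two groups separate along PC1
```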


Development of real-time defect detection technology for water distribution and sewerage networks (시나리오 기반 상·하수도 관로의 실시간 결함검출 기술 개발)

  • Park, Dong Chae;Choi, Young Hwan
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.spc1
    • /
    • pp.1177-1185
    • /
    • 2022
  • The water and sewage system is infrastructure that provides safe and clean water to people. In particular, since water and sewage pipelines are buried underground, it is very difficult to detect system defects. For this reason, pipeline diagnosis has been limited to after-the-fact defect detection, such as inspecting pictures and videos taken with cameras and drones inside the pipelines. Therefore, real-time detection technology for pipelines is required. Recently, pipeline diagnosis technology using advanced equipment and artificial intelligence techniques has been developed, but AI-based defect detection requires diverse training data, because the types and amount of defect data affect detection performance. Therefore, in this study, various defect scenarios are implemented using 3D-printed models to improve detection performance for pipeline defects. The collected images then undergo pre-processing, such as classification according to the degree of risk and labeling of objects, and real-time defect detection is performed. The proposed technique can provide real-time feedback in the pipeline defect detection process, minimizing the possibility of missed diagnoses and improving existing water and sewerage pipe diagnosis capability.
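The pre-processing step of classifying collected images "according to the degree of risk" could look like the toy rule below; the defect categories, size thresholds, and grades are hypothetical, since the abstract does not give its criteria:

```python
def risk_class(defect_type, size_mm):
    """Assign a risk grade to a detected pipe defect before labeling
    (illustrative thresholds, not the study's actual scheme)."""
    severe = {"crack", "collapse"}
    if defect_type in severe or size_mm >= 50:
        return "high"
    if size_mm >= 10:
        return "medium"
    return "low"

print(risk_class("crack", 5))     # -> high
print(risk_class("scaling", 20))  # -> medium
print(risk_class("scaling", 3))   # -> low
```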

Hate Speech Detection Using Modified Principal Component Analysis and Enhanced Convolution Neural Network on Twitter Dataset

  • Majed, Alowaidi
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.1
    • /
    • pp.112-119
    • /
    • 2023
  • Traditionally used for networking computers and communications, the Internet has been evolving since its beginning, and it is the backbone of many things on the web, including social media. The concept of social networking, which started in the early 1990s, has also grown along with the internet. Social Networking Sites (SNSs) sprang up and have remained an important element of internet usage, mainly due to the services they provide on the web. Twitter and Facebook have become the primary means by which most individuals keep in touch with others and carry on substantive conversations. These sites allow the posting of photos and videos and support audio and video storage, which can be shared among users. Although attractive, these provisions have also led to problems for these sites, such as the posting of offensive material. Users of SNSs sometimes promote hate through their words or speech, which is difficult to curtail once uploaded. Hence, this article outlines a process for extracting user reviews from the Twitter corpus in order to identify instances of hate speech. Through the use of MPCA (Modified Principal Component Analysis) and an ECNN (Enhanced Convolutional Neural Network), we are able to identify instances of hate speech in text. With the use of NLP (natural language processing), a fully autonomous system for assessing syntax and meaning can be established. There is a strong emphasis on pre-processing, feature extraction, and classification. Normalization cleanses the text by removing extra spaces, punctuation, and stop words. These processed features are then used in feature extraction, where the MPCA algorithm is applied: it takes a set of related features and pulls out the ones that tell us the most about the given dataset. The proposed categorization method is then put forth as a means of detecting instances of hate speech or abusive language. It is argued that the ECNN is superior to other methods for identifying hateful content online: it can take in massive amounts of data and quickly return accurate results, especially for larger datasets. As a result, the proposed MPCA+ECNN algorithm improves not only the F-measure values but also the accuracy, precision, and recall.
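The normalization step described above (removing extra spaces, punctuation, and stop words) can be sketched as follows; the stop-word list here is a small illustrative subset:

```python
import string

# small illustrative subset of English stop words
STOP_WORDS = {"a", "an", "the", "is", "are", "to", "of", "and", "in", "it"}

def normalize(text):
    """Lower-case, strip punctuation, collapse extra whitespace,
    and drop stop words."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    return " ".join(tokens)

print(normalize("The  internet IS the backbone, of  many things!"))
# -> internet backbone many things
```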

Seismic Zonation on Site Responses in Daejeon by Building Geotechnical Information System Based on Spatial GIS Framework (공간 GIS 기반의 지반 정보 시스템 구축을 통한 대전 지역의 부지 응답에 따른 지진재해 구역화)

  • Sun, Chang-Guk
    • Journal of the Korean Geotechnical Society
    • /
    • v.25 no.1
    • /
    • pp.5-19
    • /
    • 2009
  • Most earthquake-induced geotechnical hazards have been caused by site effects related to the amplification of ground motion, which is strongly influenced by local geologic conditions such as soil thickness or bedrock depth and soil stiffness. In this study, an integrated GIS-based information system for geotechnical data, called a geotechnical information system (GTIS), was constructed to establish a regional countermeasure against earthquake-induced hazards in an urban area of Daejeon, a hub of research and development in Korea. To build the GTIS for the area concerned, pre-existing geotechnical data were collected across an extended area including the study area, and site visits were additionally carried out to acquire surface geo-knowledge data. For practical application of the GTIS in estimating site effects in the area concerned, a seismic zoning map of the site period was created and presented as a regional synthetic strategy for predicting earthquake-induced hazards. In addition, seismic zonation for site classification according to the spatial distribution of the site period was performed to determine the site amplification coefficients for seismic design and seismic performance evaluation at any site in the study area. Based on this case study of seismic zonation in Daejeon, it was verified that the GIS-based GTIS is very useful for regional prediction of seismic hazards and for decision support in seismic hazard mitigation.
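The site period mapped in this kind of zonation is commonly computed with the quarter-wavelength rule, T = 4·Σ(hᵢ/Vsᵢ) over the soil layers down to bedrock. The abstract does not state the exact formula it uses, so this standard definition (and the layer profile below) is an assumption:

```python
def site_period(layers):
    """Characteristic site period from the quarter-wavelength rule,
    T = 4 * sum(h_i / Vs_i), for layers given as
    (thickness_m, shear_wave_velocity_m_per_s) tuples."""
    return 4.0 * sum(h / vs for h, vs in layers)

# hypothetical profile: 10 m of soft soil over 20 m of stiffer soil
print(site_period([(10.0, 200.0), (20.0, 400.0)]))  # -> 0.4 s
```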

Detecting Vehicles That Are Illegally Driving on Road Shoulders Using Faster R-CNN (Faster R-CNN을 이용한 갓길 차로 위반 차량 검출)

  • Go, MyungJin;Park, Minju;Yeo, Jiho
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.1
    • /
    • pp.105-122
    • /
    • 2022
  • According to statistics on fatal crashes on expressways over the last 5 years, the fatality rate on road shoulders has been about 3 times as high as on the expressway lanes. This suggests that crashes on road shoulders are especially fatal, and that it is important to prevent traffic crashes by cracking down on vehicles intruding onto the shoulders. Therefore, this study proposed a method to detect vehicles that violate the shoulder lane using Faster R-CNN. Vehicles were detected with Faster R-CNN, and an additional reading module was configured to determine whether a shoulder violation occurred. For experiments and evaluation, GTAV, a simulation game that can reproduce situations similar to the real world, was used. 1,800 training images and 800 evaluation images were processed and generated, and performance under varying threshold values was measured for ZFNet and VGG16. As a result, the detection rate of ZFNet was 99.2% at a threshold of 0.8 and that of VGG16 was 93.9% at a threshold of 0.7, and the average detection speed was 0.0468 seconds for ZFNet and 0.16 seconds for VGG16, so ZFNet's detection rate was about 5 percentage points higher and its speed about 3.4 times faster. These results show that even with a relatively simple network, vehicles violating the shoulder lane can be detected at high speed without pre-processing the input images. This suggests that the algorithm can be used to detect violations of designated lanes if sufficient training datasets based on actual video data are obtained.
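The threshold the study sweeps (0.8 for ZFNet, 0.7 for VGG16) is a confidence cut-off on the detector's output. A minimal sketch of that filtering step, with an illustrative detection format (a real Faster R-CNN head also returns box coordinates):

```python
def filter_detections(detections, threshold):
    """Keep only detections whose confidence score meets the threshold."""
    return [d for d in detections if d["score"] >= threshold]

dets = [{"label": "vehicle", "score": 0.95},
        {"label": "vehicle", "score": 0.72},
        {"label": "vehicle", "score": 0.40}]
print(len(filter_detections(dets, 0.8)))  # -> 1
print(len(filter_detections(dets, 0.7)))  # -> 2
```

Raising the threshold trades missed detections for fewer false alarms, which is why the two backbones peak at different operating points.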

Predicting fetal toxicity of drugs through attention algorithm (Attention 알고리즘 기반 약물의 태아 독성 예측 연구)

  • Jeong, Myeong-hyeon;Yoo, Sun-yong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.273-275
    • /
    • 2022
  • The use of drugs by pregnant women poses a potential risk to the fetus, so it is essential to classify drugs that should be prohibited during pregnancy. However, the fetal toxicity of most drugs has not been identified, as experimental identification takes a great deal of time and cost. In silico approaches such as virtual screening can identify compounds that may present a high risk to the fetus across a wide range of compounds at low cost and in little time. We collected class information for each drug from the hazard classification lists for prescribing drugs in pregnancy published by the governments of Korea and Australia. Using the structural and chemical features of each drug, various machine learning models were constructed to predict the fetal toxicity of drugs, and quantitative performance evaluation was performed for all models. Based on the attention algorithm, important molecular substructures of compounds were identified in the process of predicting fetal toxicity with the proposed model. From the results, we confirmed that drugs with a high risk of fetal toxicity can be predicted by machine learning across a wide range of compounds. This study can be used as a pre-screening tool for fetal toxicity prediction, as it provides key molecular substructures associated with the fetal toxicity of compounds.
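The attention mechanism that ranks substructures by importance reduces, at its core, to a softmax over per-substructure relevance scores, producing weights that sum to one. The scores below are illustrative, not from the paper:

```python
import math

def attention_weights(scores):
    """Softmax over per-substructure relevance scores: the normalized
    weights indicate each substructure's contribution to the prediction."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical scores for three substructures of one compound
w = attention_weights([2.0, 0.5, -1.0])
print(w)
print(max(range(len(w)), key=w.__getitem__))  # most influential substructure -> 0
```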
