• Title/Summary/Keyword: Multi-class


A STUDY ON THE MICROLEAKAGE OF DENTIN BONDING SYSTEMS (상아질 접착제의 미세누출에 관한 연구)

  • Son, Jeong-Min;Choi, Nam-Ki;Kim, Seon-Mi;Yang, Kyu-Ho;Park, Yang, Ji-il
    • Journal of the Korean Academy of Pediatric Dentistry
    • /
    • v.35 no.4
    • /
    • pp.619-627
    • /
    • 2008
  • The objective of this study was to compare the microleakage of five adhesive systems in the enamel and dentin of permanent teeth. Class V cavity preparations with occlusal margins in enamel and gingival margins in dentin were prepared on both the buccal and lingual surfaces of 25 extracted human molar teeth. The tested adhesives were: Adper Scotchbond Multi-Purpose Plus Adhesive (SM), Adper Single Bond 2 (SB), Clearfil SE Bond (SE), Adper Prompt L-Pop (PL) and G-Bond (GB). The results were as follows: 1. At the enamel margins, PL showed the highest leakage value (0.85), followed by SB (0.55), GB (0.50), SM (0.35) and SE (0.25) in decreasing order. The differences between PL and SM and between PL and SE were statistically significant (p < 0.05). 2. At the dentin margins, GB showed the highest leakage value (2.10), followed by SE (1.45), PL (1.40), SB (1.05) and SM (0.70) in decreasing order. The differences between GB and SB and between GB and SM were statistically significant (p < 0.05). 3. Dentin margins showed higher dye penetration rates than enamel margins in all tested material groups, and the differences were statistically significant for SE, PL and GB.


INFLUENCE OF REBONDING PROCEDURES ON MICROLEAKAGE OF COMPOSITE RESIN RESTORATIONS (복합레진 수복 시 재접착 술식이 미세누출에 미치는 영향)

  • Lee, Mi-Ae;Seo, Duck-Kyu;Son, Ho-Hyun;Cho, Byeong-Hoon
    • Restorative Dentistry and Endodontics
    • /
    • v.35 no.3
    • /
    • pp.164-172
    • /
    • 2010
  • During a composite resin restoration, the anticipated contraction gap is usually sealed with a low-viscosity resin after successive polishing, etching, rinsing and drying steps, a procedure known as rebonding. However, the gap might already have been filled with water or debris before the sealing resin is applied. We hypothesized that microleakage would decrease if the rebonding agent were applied before the polishing step, i.e., immediately after curing the composite resin. On the buccal and lingual surfaces of 35 extracted human molar teeth, Class V cavities were prepared with the occlusal margin in enamel and the gingival margin in dentin. They were restored with a hybrid composite resin, Z250 (3M ESPE, USA), using the adhesive Adper Single Bond 2 (3M ESPE). As rebonding agents, BisCover LV (Bisco, USA), ScotchBond Multi-Purpose adhesive (3M ESPE) and an experimental adhesive were applied to the restoration margins either before the polishing step or after the successive polishing and etching steps. The infiltration depth of 2% methylene blue into the margin was measured using an optical stereomicroscope. The correlation between the viscosity of the rebonding agents and microleakage was also evaluated. There were no statistically significant differences in microleakage among the rebonding procedures, among the rebonding agents, or between the margins. However, when the restorations were not rebonded, the microleakage at the gingival margin was significantly higher than in the groups rebonded with the three agents (p < 0.05). This difference was not observed at the occlusal margin. No significant correlation was found between the viscosity of the rebonding agents and microleakage, except for a very weak correlation in the case of rebonding after polishing and etching at the gingival margin.

The Secondary School Education of Geography and the System of Teacher Training in Belgium - Focused on the Case of Francophone Community - (벨지움의 중등학교 지리교육 내용과 교사양성제도 - 프랑코폰 공동체를 사례로 -)

  • Kwak, Chul-Hong
    • Journal of the Korean association of regional geographers
    • /
    • v.6 no.3
    • /
    • pp.101-115
    • /
    • 2000
  • This study examines the secondary school education of geography and the system of teacher training in Belgium, focusing on the case of the Francophone Community. The findings can be summed up as follows. The first two years of secondary school offer two hours per week of 'environment education', which can be categorized as the learning of living geography, in that at this stage students learn how to observe and classify the geographic phenomena of their daily life. The two years of the second stage of secondary school offer one hour of 'world geography', which in practice focuses on Europe and Russia. The two years of the third stage of secondary school offer an advanced course of geography that aims to teach physical and human geography systematically. A remarkable change in geographic education in Belgium is that, in the wake of the Revision Act of secondary school education, textbooks were replaced by teaching manuals adapted by teachers to regional conditions. This may result in a wide gap in geographic achievement depending on the conditions of the educational establishments. Another notable change is that the emphasis of geographic education tends to be placed on the ability to acquire practical geographic knowledge rather than on geographic information itself. It is also a marked tendency that most learning activities in geography class are student-centered and based on inquiry methods. Teachers of the lower secondary schools in Belgium are trained in the School of Education as multi-major teachers, for example as a teacher of biology-chemistry-geography or of history-sociology-geography. Teachers of the higher secondary schools are trained in the Department of Teacher Education in universities as single-major teachers, since they are required to have deeper knowledge in order to teach the advanced geography course in the higher secondary schools. Several policies have been adopted to improve teacher education: many in-service teachers are officially assigned to guide and supervise teacher training, and the faculty members in charge of the teacher training course strive to raise the qualifications of teachers through rigorous discipline.


Wildfire Severity Mapping Using Sentinel Satellite Data Based on Machine Learning Approaches (Sentinel 위성영상과 기계학습을 이용한 국내산불 피해강도 탐지)

  • Sim, Seongmun;Kim, Woohyeok;Lee, Jaese;Kang, Yoojin;Im, Jungho;Kwon, Chunguen;Kim, Sungyong
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_3
    • /
    • pp.1109-1123
    • /
    • 2020
  • In South Korea, where forest is the major land cover class (over 60% of the country), many wildfires occur every year. Wildfires weaken the shear strength of the soil, forming a soil layer that is vulnerable to landslides. It is important to identify the severity of a wildfire as well as the burned area in order to manage the forest sustainably. Although satellite remote sensing has been widely used to map wildfire severity, it is often difficult to determine the severity using only the temporal change of satellite-derived indices such as the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR). In this study, we proposed an approach for determining wildfire severity based on machine learning through the synergistic use of Sentinel-1A Synthetic Aperture Radar (SAR) C-band data and Sentinel-2A Multi Spectral Instrument data. Three wildfire cases (Samcheok in May 2017, Gangreung·Donghae in April 2019, and Gosung·Sokcho in April 2019) were used for developing wildfire severity mapping models with three machine learning algorithms (i.e., Random Forest, Logistic Regression, and Support Vector Machine). The results showed that the Random Forest model yielded the best performance, with an overall accuracy of 82.3%. The cross-site validation conducted to examine the spatiotemporal transferability of the machine learning models showed that the models were highly sensitive to temporal differences between the training and validation sites, especially in the early growing season. This implies that a more robust model with high spatiotemporal transferability can be developed when more wildfire cases from different seasons and areas are added in the future.
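
As a rough illustration of the pixel-wise severity classification described in this abstract, the sketch below trains a Random Forest on hypothetical pre/post-fire difference features; the feature names (dNDVI, dNBR, SAR backscatter changes) and the synthetic data are assumptions standing in for the study's actual Sentinel-1/2 inputs.

```python
# Minimal sketch of a Random Forest severity classifier (not the authors' code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical per-pixel features: optical index differences (dNDVI, dNBR) and
# SAR backscatter changes (dVV, dVH) between pre- and post-fire scenes.
X_train = rng.normal(size=(5000, 4))
y_train = rng.integers(0, 4, size=5000)   # e.g., unburned / low / moderate / high severity
X_test = rng.normal(size=(1000, 4))
y_test = rng.integers(0, 4, size=1000)

rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)

print("Overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))
print("Feature importances:", rf.feature_importances_)  # which inputs drive the severity call
```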

Monitoring and Exposure Assessment of Pesticide Residues in Domestic Agricultural Products (국내 유통 다소비 농산물의 잔류농약 모니터링 및 노출평가)

  • Kang, Namsuk;Kim, Seongcheol;Kang, Yoonjung;Kim, Dohyeong;Jang, Jinwook;Won, Sera;Hyun, Jaehee;Kim, Dongeon;Jeong, Il-Yong;Rhee, Gyuseek;Shin, Yeongmin;Joung, Dong Yun;Kim, Sang Yub;Park, Juyoung;Kwon, Kisung;Ji, Youngae
    • The Korean Journal of Pesticide Science
    • /
    • v.19 no.1
    • /
    • pp.32-40
    • /
    • 2015
  • This study was implemented to evaluate the food safety of residual pesticides in agricultural products of Korea and to serve as a database for the establishment of food policy. A total of 196 pesticides were analyzed in these products using the multi-class pesticide multiresidue methods of the Korean Food Code, with 232 samples of 15 agricultural products collected from 9 regions for this study. In the results, 64 kinds of pesticides were detected in 53 samples; among them, chlorpyrifos and procymidone showed a high frequency of detection. Two samples (chlorpyrifos in perilla leaves and picoxystrobin in peach) exceeded the Maximum Residue Limits (MRLs). The levels of the detected pesticide residues were within safe levels. In addition, an intake assessment for the pesticide residues detected in the multi-residue monitoring, including chlorpyrifos, was carried out. The result showed that the ratio of EDI (estimated daily intake) to ADI (acceptable daily intake) was 0.001~0.902%, which means that the detected pesticide residues were within a safe range and that residual pesticides in agricultural products in Korea are properly controlled.
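
The exposure assessment in this abstract rests on the standard EDI/ADI ratio; the sketch below shows that arithmetic with purely illustrative numbers (the residue level, food intake, body weight, and ADI values are assumptions, not the study's data).

```python
# Minimal sketch of the EDI and %ADI calculation; all numbers are illustrative.

def estimated_daily_intake(residue_mg_per_kg, intake_kg_per_day, body_weight_kg=60.0):
    """EDI (mg/kg bw/day) = residue concentration x daily food intake / body weight."""
    return residue_mg_per_kg * intake_kg_per_day / body_weight_kg

def percent_of_adi(edi, adi_mg_per_kg_bw):
    """Ratio of EDI to ADI expressed as a percentage."""
    return edi / adi_mg_per_kg_bw * 100.0

# Hypothetical example: a residue of 0.05 mg/kg in a commodity eaten at 20 g/day,
# compared against an assumed ADI of 0.01 mg/kg bw/day.
edi = estimated_daily_intake(0.05, 0.020)
print(f"EDI = {edi:.6f} mg/kg bw/day, %ADI = {percent_of_adi(edi, 0.01):.3f}%")
```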

Effect of cavity shape, bond quality and volume on dentin bond strength (와동의 형태, 접착층의 성숙도, 및 와동의 부피가 상아질 접착력에 미치는 영향)

  • Lee, Hyo-Jin;Kim, Jong-Soon;Lee, Shin-Jae;Lim, Bum-Soon;Baek, Seung-Ho;Cho, Byeong-Hoon
    • Restorative Dentistry and Endodontics
    • /
    • v.30 no.6
    • /
    • pp.450-460
    • /
    • 2005
  • The aim of this study was to evaluate the effect of cavity shape, bond quality of the bonding agent, and volume of resin composite on the shrinkage stress developed at the cavity floor. This was done by measuring the shear bond strength with respect to iris material (cavity shape: adhesive-coated dentin as a high C-factor and Teflon-coated metal as a low C-factor), bonding agent (bond quality: Scotchbond Multi-Purpose and Xeno III), and iris hole diameter (volume: 1 mm or 3 mm in diameter × 1.5 mm in thickness). Ninety-six molars were randomly divided into 8 groups (2 × 2 × 2 experimental setup). In order to simulate a Class I cavity, shear bond strength was measured on the flat occlusal dentin surface with irises. The iris hole was filled with Z250 restorative resin composite in a bulk-filling manner. The data were analyzed using three-way ANOVA and the Tukey test, and a fracture mode analysis was also performed. When the cavity had a high C-factor, good bond quality, and large volume, the bond strength decreased significantly. The volume of resin composite restricted within well-bonded cavity walls is therefore suggested to be included in the concept of the C-factor, along with cavity shape and bond quality. Since bond quality and volume can exaggerate the effect of cavity shape on the shrinkage stress developed at the resin-dentin bond, resin composites must be placed using a filling method that minimizes the volume contributing to a high C-factor.
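
For readers unfamiliar with the C-factor mentioned above, the sketch below computes the classical bonded-to-unbonded surface area ratio for the two iris geometries (1 mm and 3 mm diameter, 1.5 mm deep); treating the walls as bonded or unbonded loosely mirrors the adhesive-coated dentin versus Teflon iris conditions, and the accounting here is an illustrative assumption, not the authors' calculation.

```python
# Illustrative C-factor calculation for a cylindrical cavity: bonded / unbonded area.
import math

def c_factor_cylinder(diameter_mm, depth_mm, walls_bonded=True):
    floor = math.pi * (diameter_mm / 2) ** 2   # cavity floor (always bonded here)
    walls = math.pi * diameter_mm * depth_mm   # lateral wall area
    top = floor                                # free (unbonded) resin surface
    bonded = floor + (walls if walls_bonded else 0.0)
    unbonded = top + (0.0 if walls_bonded else walls)
    return bonded / unbonded

for d in (1.0, 3.0):
    print(f"d={d} mm: C-factor {c_factor_cylinder(d, 1.5, True):.1f} (bonded walls), "
          f"{c_factor_cylinder(d, 1.5, False):.2f} (unbonded walls)")
```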

Research on ITB Contract Terms Classification Model for Risk Management in EPC Projects: Deep Learning-Based PLM Ensemble Techniques (EPC 프로젝트의 위험 관리를 위한 ITB 문서 조항 분류 모델 연구: 딥러닝 기반 PLM 앙상블 기법 활용)

  • Hyunsang Lee;Wonseok Lee;Bogeun Jo;Heejun Lee;Sangjin Oh;Sangwoo You;Maru Nam;Hyunsik Lee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.11
    • /
    • pp.471-480
    • /
    • 2023
  • The construction order volume in South Korea grew significantly, from 91.3 trillion won of public orders in 2013 to a total of 212 trillion won in 2021, with particular growth in the private sector. As the size of the domestic and overseas markets grew, the scale and complexity of EPC (Engineering, Procurement, Construction) projects increased, and risk management of project management and ITB (Invitation to Bid) documents became a critical issue. The time granted to construction companies in the bidding process following an EPC project award is limited, and it is extremely challenging to review all the risk terms in the ITB document due to manpower and cost constraints. Previous research attempted to categorize the risk terms in EPC contract documents and detect them with AI, but there were limitations to practical use due to data problems such as the limited availability of labeled data and class imbalance. Therefore, this study aims to develop an AI model that categorizes contract terms in detail based on the FIDIC Yellow 2017 (Fédération Internationale des Ingénieurs-Conseils) standard, rather than defining and classifying risk terms as in previous research. A multi-class text classification capability is necessary because the contract terms that need to be reviewed in detail may vary depending on the scale and type of the project. To enhance the performance of the multi-class text classification model, we developed an ELECTRA PLM (Pre-trained Language Model) capable of efficiently learning the context of text data from the pre-training stage, and conducted a four-step experiment to validate the performance of the model. As a result, the ensemble of the self-developed ITB-ELECTRA model and Legal-BERT achieved the best performance, with a weighted average F1-score of 76% in the classification of 57 contract terms.
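
A soft-voting ensemble of two fine-tuned pre-trained language models, as described above, can be sketched as follows; the checkpoint names are placeholders (the authors' ITB-ELECTRA is not public), and the models below are not fine-tuned on ITB clauses, so this only illustrates the mechanics of averaging class probabilities over 57 clause classes.

```python
# Sketch of probability-averaging ensemble of two PLM classifiers (assumed checkpoints).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_CLAUSES = 57  # FIDIC Yellow Book 2017 clause classes, per the abstract

def load(name):
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=NUM_CLAUSES)
    model.eval()
    return tok, model

electra = load("google/electra-base-discriminator")   # stand-in for ITB-ELECTRA
legalbert = load("nlpaueb/legal-bert-base-uncased")   # Legal-BERT

def ensemble_predict(text):
    probs = []
    for tok, model in (electra, legalbert):
        inputs = tok(text, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs.append(torch.softmax(logits, dim=-1))
    # Average the class probabilities of both models, then take the argmax.
    return torch.argmax(torch.stack(probs).mean(dim=0), dim=-1).item()

print(ensemble_predict("The Contractor shall indemnify the Employer against all claims ..."))
```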

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go, because unlike chess the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has drawn attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and performs especially well in image recognition. It also performs well on high-dimensional data such as voice, images and natural language, for which it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper is the telemarketing response data of a bank in Portugal. It has input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer opened an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, the traditional artificial neural network model. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output channels (filters), and the application conditions of the dropout technique. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than the overall accuracy. The methods for applying each deep learning technique in the experiment were as follows. The CNN algorithm reads adjacent values around a specific value and recognizes local features, but because the fields of business data are usually independent, the distance between fields does not matter. In this experiment, we set the filter size of the CNN to the number of fields so that it learns the characteristics of the whole record at once, and added a hidden layer to make the decision based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed with respect to the first layer in order to reduce the influence of the position of each field.
In the case of the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best model was the MLP model with two hidden layers using dropout. From this experiment we obtained several findings. First, models using dropout make slightly more conservative predictions than those without dropout and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because CNNs perform well in a binary classification problem to which they have rarely been applied, as well as in the fields where their effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long compared to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
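
The CNN-with-dropout configuration described above (a filter spanning all input fields, an extra hidden layer, dropout of 0.5, F1 evaluation) can be sketched roughly as below; the field count and synthetic data are assumptions, not the Portuguese bank telemarketing dataset used in the paper.

```python
# Minimal sketch of a 1D CNN with a full-width filter and dropout for tabular binary data.
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_fields = 16                                   # assumed number of tabular input variables
X = rng.normal(size=(2000, n_fields, 1))        # Conv1D expects (samples, steps, channels)
y = rng.integers(0, 2, size=2000)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_fields, 1)),
    tf.keras.layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),  # reads all fields at once
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),   # extra hidden layer on the extracted features
    tf.keras.layers.Dropout(0.5),                   # drop neurons with probability 0.5
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

pred = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
print("F1 score:", f1_score(y, pred))
```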

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.147-168
    • /
    • 2017
  • There have been many studies on accurate stock market forecasting in academia for a long time, and there are now various forecasting models using various techniques. Recently, many attempts have been made to predict stock indices with machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more useful for short-term prediction and for the application of statistical and mathematical techniques. Most studies using these technical indicators have modeled stock price prediction as a binary classification (rising or falling) of future market movements (usually the next trading day). However, this binary classification has many unfavorable aspects when it comes to predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we predict the stock index by expanding the existing binary scheme into a multi-class system of stock index trends (upward trend, boxed, downward trend). To solve this multi-class classification problem, techniques such as Multinomial Logistic Regression Analysis (MLOGIT), Multiple Discriminant Analysis (MDA) or Artificial Neural Networks (ANN) can be applied; here we propose an optimization model that uses a Genetic Algorithm as a wrapper to improve a Multi-classification Support Vector Machine (MSVM), which has proved to be superior in prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and the selection of training instances (instance selection). To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method is more effective than the conventional multi-class SVM, which has been known to show the best prediction performance so far, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT and CBR. In particular, it was confirmed that instance selection plays a very important role in predicting the stock index trend, and that its contribution to the improvement of the model is larger than that of the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's real KOSPI200 stock index. Our research is primarily aimed at predicting trend segments in order to capture signal acquisition or short-term trend transition points. The experimental data set includes technical indicators such as price and volatility indices (2004-2017) and macroeconomic data (interest rate, exchange rate, S&P 500, etc.) for the KOSPI200 stock index in Korea. Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, took three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). 70% of the data for each class was used for training and the remaining 30% for validation. To verify the performance of the proposed model, several comparative experiments were conducted with MDA, MLOGIT, CBR, ANN and MSVM.
The MSVM adopted the One-Against-One (OAO) approach, which is known to be the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed model, GA-MSVM, performs at a significantly higher level than all comparative models.
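
The GA-as-wrapper idea can be illustrated with a toy sketch: a chromosome encodes which candidate features are kept plus the SVM kernel parameters, and fitness is the validation accuracy of a one-against-one multi-class SVM. Instance selection, which the paper finds most important, is omitted here for brevity, and the synthetic data stands in for the 15 KOSPI200 indicators.

```python
# Toy GA wrapper around a multi-class SVM (not the authors' implementation).
import numpy as np
from sklearn.svm import SVC                     # SVC uses one-against-one for multi-class
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 15))
y = rng.integers(-1, 2, size=600)               # -1 downward, 0 boxed, +1 upward (as in the paper)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(chrom):
    feat_mask = chrom[:15].astype(bool)         # first 15 genes: keep/drop each feature
    if not feat_mask.any():
        return 0.0
    C, gamma = 10 ** chrom[15], 10 ** chrom[16] # last 2 genes: log-scaled kernel parameters
    svm = SVC(C=C, gamma=gamma, kernel="rbf").fit(X_tr[:, feat_mask], y_tr)
    return svm.score(X_val[:, feat_mask], y_val)

def random_chrom():
    return np.concatenate([rng.integers(0, 2, 15).astype(float), rng.uniform(-2, 2, 2)])

# Very small GA loop: truncation selection, uniform crossover, point mutation.
pop = [random_chrom() for _ in range(20)]
for gen in range(10):
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:10], []
    for _ in range(10):
        a, b = rng.choice(10, 2, replace=False)
        mask = rng.integers(0, 2, 17).astype(bool)
        child = np.where(mask, parents[a], parents[b])
        if rng.random() < 0.2:                  # mutate one gene
            i = rng.integers(0, 17)
            child[i] = random_chrom()[i]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("Selected features:", np.flatnonzero(best[:15]), "fitness:", round(fitness(best), 3))
```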

Quantitative Assessment Technology of Small Animal Myocardial Infarction PET Image Using Gaussian Mixture Model (다중가우시안혼합모델을 이용한 소동물 심근경색 PET 영상의 정량적 평가 기술)

  • Woo, Sang-Keun;Lee, Yong-Jin;Lee, Won-Ho;Kim, Min-Hwan;Park, Ji-Ae;Kim, Jin-Su;Kim, Jong-Guk;Kang, Joo-Hyun;Ji, Young-Hoon;Choi, Chang-Woon;Lim, Sang-Moo;Kim, Kyeong-Min
    • Progress in Medical Physics
    • /
    • v.22 no.1
    • /
    • pp.42-51
    • /
    • 2011
  • Nuclear medicine images (SPECT, PET) are widely used tools for the assessment of myocardial viability and perfusion. However, it is difficult to define the myocardial infarct region accurately. The purpose of this study was to investigate a methodological approach for the automatic measurement of rat myocardial infarct size using a polar map with an adaptive threshold. A rat myocardial infarction model was induced by ligation of the left circumflex artery. PET images were obtained after intravenous injection of 37 MBq of ¹⁸F-FDG. After a 60 min uptake period, each animal was scanned for 20 min with ECG gating. PET data were reconstructed using 2D ordered subset expectation maximization (OSEM). To automatically delineate the myocardial contour and generate the polar map, we used QGS software (Cedars-Sinai Medical Center). The reference infarct size was defined as the infarction area percentage of the total left myocardium using TTC staining. We used three threshold methods: a predefined threshold, Otsu's method, and a multiple Gaussian mixture model (MGMM). The predefined threshold method is commonly used in other studies; we applied threshold values from 10% to 90% in steps of 10%. The Otsu algorithm calculates the threshold that maximizes the between-class variance. The MGMM method estimates the distribution of image intensity using multiple Gaussian mixture models (MGMM2, ..., MGMM5) and calculates an adaptive threshold. The infarct size in the polar map was calculated as the percentage of the area below the threshold relative to the total polar map area. The infarct sizes measured using the different threshold methods were evaluated by comparison with the reference infarct size. The mean differences between the polar map defect sizes obtained with the predefined thresholds (20%, 30%, and 40%) and the reference infarct size were 7.04 ± 3.44%, 3.87 ± 2.09% and 2.15 ± 2.07%, respectively. The difference for Otsu's method versus the reference infarct size was 3.56 ± 4.16%, and for the MGMM methods it was 2.29 ± 1.94%. The predefined threshold of 30% showed the smallest mean difference from the reference infarct size. However, MGMM was more accurate than the predefined threshold for cases with a reference infarct size under 10% (MGMM: 0.006%, predefined threshold: 0.59%). In this study, we evaluated myocardial infarct size in the polar map using a multiple Gaussian mixture model. The MGMM method provides an adaptive threshold for each subject and will be useful for automatic measurement of infarct size.
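
The adaptive-threshold idea behind MGMM can be sketched as below: fit a Gaussian mixture to the polar-map intensities, derive a subject-specific threshold from the mixture, and report the fraction of the map below it as the infarct size. The two-component mixture, the posterior-crossing threshold rule, and the synthetic intensities are all simplifying assumptions relative to the paper's MGMM2-MGMM5 models.

```python
# Sketch of a GMM-based adaptive threshold for polar-map infarct sizing (illustrative only).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical polar-map values: a low-uptake (infarct) mode and a normal-uptake mode.
intensities = np.concatenate([rng.normal(25, 6, 300), rng.normal(75, 10, 1700)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
means = gmm.means_.ravel()
low, high = np.sort(means)

# Adaptive threshold: the intensity between the two component means where the posterior
# probability of the low-uptake component drops below 0.5.
grid = np.linspace(low, high, 500).reshape(-1, 1)
post_low = gmm.predict_proba(grid)[:, np.argmin(means)]
threshold = grid[np.argmax(post_low < 0.5)][0]

infarct_fraction = (intensities < threshold).mean() * 100
print(f"Adaptive threshold: {threshold:.1f}, infarct size: {infarct_fraction:.1f}% of polar map")
```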