• Title/Summary/Keyword: optimal structure


Scale and Scope Economies and Prospect for the Korea's Banking Industry (우리나라 은행산업(銀行産業)의 효율성분석(效率性分析)과 제도개선방안(制度改善方案))

  • Jwa, Sung-hee
    • KDI Journal of Economic Policy
    • /
    • v.14 no.2
    • /
    • pp.109-153
    • /
    • 1992
  • This paper estimates a translog cost function for Korea's banking industry and, based on the estimated efficiency indicators for the banking sector, derives implications for the future structure of Korean banking. The Korean banking industry is permitted to operate trust business fully and securities business to a limited extent, even though it is formally subject to a strict, specialized banking system. Securities underwriting and investment are allowed only to a very limited extent: only for stocks and for bonds with maturities longer than three years, and only up to 100 percent of a bank's paid-in capital. Until the end of 1991, the ceiling was 25 percent of the total balance of demand deposits. Banks are, however, prohibited from securities brokerage. While the in-house integration of securities businesses with the traditional deposit and commercial lending business is thus restrictively regulated, Korean banks can enter the securities business by establishing subsidiaries in the industry. This paper therefore estimates cost functions and efficiency indicators, treating the in-house trust and securities investment businesses as important banking activities, for various cases in which the production and intermediation approaches to modelling financial intermediaries are applied separately, and in which the banking businesses of deposit, lending, and securities investment (one group) and the trust businesses (another group) are analyzed both separately and jointly. The estimation results for the various cases are summarized in Table 1 and Table 2. First, the securities businesses exhibit economies of scale as well as economies of scope with traditional banking activities, implying that in-house integration of the banking and securities businesses need not be a suboptimal banking structure. 
Therefore, this result further implies that transforming Korea's banking system from the current specialized system to a universal banking system will not impede improvements in the banking industry's efficiency. Second, the lending businesses turn out to be subject to diseconomies of scale while showing unclear evidence of economies of scope; in sum, this implies a potential efficiency gain from the continued in-house integration of lending. Third, the continued integration of the trust businesses seems to contribute to improving the efficiency of the banking businesses, since the trust businesses exhibit economies of scope. Fourth, deposit services and fee-based activities, such as foreign exchange and credit card businesses, exhibit economies of scale but constant returns to scope, which implies that those businesses could be separated from other banking and trust activities. The recent trend of the credit card business being operated as an independent entity, in Korea as well as in the global banking market, seems consistent with this finding. How, then, should the possibility of separating deposit services from the remaining activities be interpreted? If one insists on a strict definition of commercial banking confined to deposit and commercial lending activities, separating the deposit service would suggest the dissolution, or disappearance, of banking itself. Recently, however, it has been suggested that separating banks' deposit and lending activities, by allowing a depository institution that specializes in deposit taking and invests deposited funds only in the safest securities, such as government securities, to administer deposits, would alleviate the risk of bank runs and thereby improve the safety of the payment system (Robert E. Litan, What Should Banks Do?, Washington, D.C.: The Brookings Institution, 1987). 
In this context, the possibility of separating the deposit activity implies that a new type of depository institution may arise naturally, without compromising the efficiency of the banking businesses, as the banking market grows. Moreover, additional evidence supports this view: deposit taking and the securities business exhibit cost complementarity, while deposit taking and lending are cost substitutes (see Table 2 for the cost complementarity relationships in Korea's banking industry). Finally, Korea's banking industry is found to lack the characteristics of a natural monopoly, so it may not be optimal to encourage mergers and acquisitions in the banking industry solely for the purpose of improving efficiency.
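The scale and scope measures this abstract reports can be illustrated with a small sketch of a two-output translog cost function. The coefficients and function names below are invented for illustration only, not the paper's estimates (those are summarized in its Tables 1 and 2):

```python
import math

# Illustrative two-output translog cost function:
#   ln C = a0 + sum_i a_i ln y_i + 0.5 sum_i sum_j b_ij ln y_i ln y_j
# All coefficients below are made up for this sketch.
A0 = 1.0
A = [0.55, 0.40]            # first-order output coefficients
B = [[0.08, -0.03],
     [-0.03, 0.05]]         # symmetric second-order coefficients

def translog_cost(y):
    """Cost implied by the translog form for an output vector y (y_i > 0)."""
    ln_y = [math.log(v) for v in y]
    ln_c = A0 + sum(a * l for a, l in zip(A, ln_y))
    ln_c += 0.5 * sum(B[i][j] * ln_y[i] * ln_y[j]
                      for i in range(2) for j in range(2))
    return math.exp(ln_c)

def ray_scale_economies(y):
    """Sum of the cost elasticities with respect to each output;
    a value below 1 indicates overall economies of scale."""
    ln_y = [math.log(v) for v in y]
    return sum(A[i] + sum(B[i][j] * ln_y[j] for j in range(2))
               for i in range(2))

def scope_economies(y, eps=1e-3):
    """SCOPE = [C(y1, eps) + C(eps, y2) - C(y1, y2)] / C(y1, y2);
    a positive value suggests economies of scope. The translog form is
    undefined at zero output, so a small eps stands in for 0, and the
    measure is sensitive to that choice."""
    c_joint = translog_cost(y)
    c_split = translog_cost([y[0], eps]) + translog_cost([eps, y[1]])
    return (c_split - c_joint) / c_joint
```

With these toy coefficients the elasticity sum at the unit output point is 0.95, i.e. mild economies of scale; the paper's own conclusions rest on its estimated coefficients, not on these.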


Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology for distinguishing low- from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions with respect to accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels; indeed, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are easy to collect openly and directly affect business: in marketing, real-world information from customers is gathered from websites rather than surveys, and whether a website's posts are positive or negative is reflected in sales. However, many reviews on a website are not always reliable and can be difficult to interpret. Earlier studies in this research area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. Accuracy remains a recognized problem because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review dataset. 
First, for text classification related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting were adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). A CNN can process a sentence in vector form similarly to a bag-of-words model, but it does not consider the sequential nature of the data. An RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve that problem. For comparison, CNN and LSTM were chosen as the simple deep learning models, so that classical machine learning algorithms, CNN, LSTM, and the integrated models could all be analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to understand how well, and why, the models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text. The reasons for combining the two algorithms are as follows. A CNN can automatically extract features for classification by applying convolution layers with massively parallel processing. An LSTM is not capable of such highly parallel processing, but, like faucets, it has input, output, and forget gates that can be opened and closed at the desired time, with the advantage of placing memory blocks on hidden nodes. The LSTM's memory block may not store all the data, but it can handle the long-term dependencies that a CNN cannot. 
Furthermore, when an LSTM is used after the CNN's pooling layer, the network has an end-to-end structure in which spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than the CNN but faster than the LSTM, and it was more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, with the advantage of layer-wise learning through the end-to-end structure. For these reasons, this study seeks to improve the classification accuracy of movie reviews using the integrated CNN-LSTM model.
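As a rough, pure-Python illustration of the forward pass this abstract describes (convolution for local feature extraction, temporal pooling, then gated LSTM steps, then a sigmoid polarity score), here is a toy sketch. It is not the authors' implementation, which would normally use a deep learning framework, and all dimensions and weights are arbitrary:

```python
import math
import random

random.seed(0)

def conv1d(seq, kernels, width=3):
    """Valid 1-D convolution with ReLU over a list of embedding vectors.
    Each kernel is a flat vector of length width * embedding_dim."""
    out = []
    for t in range(len(seq) - width + 1):
        window = [x for vec in seq[t:t + width] for x in vec]  # flatten
        out.append([max(0.0, sum(w * x for w, x in zip(k, window)))
                    for k in kernels])
    return out

def max_pool(feats, pool=2):
    """Non-overlapping temporal max-pooling over the feature maps."""
    n_filters = len(feats[0])
    return [[max(f[c] for f in feats[t:t + pool]) for c in range(n_filters)]
            for t in range(0, len(feats) - pool + 1, pool)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: input (i), forget (f), and output (o) gates
    plus a candidate (c) update, all acting on the concatenation [x; h]."""
    def __init__(self, n_in, n_hidden):
        def mat(rows, cols):
            return [[random.uniform(-0.1, 0.1) for _ in range(cols)]
                    for _ in range(rows)]
        self.W = {g: mat(n_hidden, n_in + n_hidden) for g in "ifoc"}
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = list(x) + list(h)
        def gate(g, squash):
            return [squash(sum(w * v for w, v in zip(row, z)))
                    for row in self.W[g]]
        i, f, o = gate("i", sigmoid), gate("f", sigmoid), gate("o", sigmoid)
        cand = gate("c", math.tanh)
        c_new = [fv * cv + iv * gv for fv, cv, iv, gv in zip(f, c, i, cand)]
        h_new = [ov * math.tanh(cv) for ov, cv in zip(o, c_new)]
        return h_new, c_new

def cnn_lstm_score(embedded_review, kernels, cell):
    """CNN feature extraction, pooling, then LSTM steps over the pooled
    sequence; a sigmoid over the final hidden state gives a score in (0, 1)."""
    pooled = max_pool(conv1d(embedded_review, kernels))
    h, c = [0.0] * cell.n_hidden, [0.0] * cell.n_hidden
    for x in pooled:
        h, c = cell.step(x, h, c)
    return sigmoid(sum(h))
```

The untrained score is meaningless; the sketch only shows how the convolutional features become the LSTM's input sequence in the end-to-end structure.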

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because HDFS (the Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that let the system continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas cannot expand across nodes when rapidly growing data must be distributed to various nodes. NoSQL does not provide the complex computations that relational databases do, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, MongoDB, the representative document-oriented model with a free schema structure, is used in the proposed system: it makes it easy to process unstructured log data through its flexible schema, facilitates node expansion when data grow rapidly, and provides an auto-sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. 
When the log data generated across each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates log-analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions, while the aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority, and an optimal chunk size is confirmed through MongoDB's log-insert performance evaluation for various chunk sizes.
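The routing role of the log collector module can be sketched in plain Python. The type names and the exact split between real-time and batch stores below are hypothetical stand-ins; the paper's system dispatches to actual MySQL and MongoDB back ends:

```python
from collections import defaultdict

# Hypothetical split: which log types need real-time analysis.
REALTIME_TYPES = {"error", "security"}

class LogCollector:
    """Classifies incoming bank log records by type and dispatches them
    to a real-time store (MySQL in the paper) or a batch store (MongoDB)."""
    def __init__(self):
        self.realtime_store = []              # stands in for the MySQL module
        self.batch_store = defaultdict(list)  # stands in for the MongoDB module

    def collect(self, record):
        log_type = record.get("type", "general")
        if log_type in REALTIME_TYPES:
            self.realtime_store.append(record)
        else:
            # MongoDB's schema-free documents let heterogeneous records
            # share one collection per log type.
            self.batch_store[log_type].append(record)

    def aggregate_per_unit_time(self, log_type, unit=60):
        """Counts batch records per time bucket, the aggregation the
        log graph generator module plots per unit time."""
        counts = defaultdict(int)
        for rec in self.batch_store[log_type]:
            counts[rec["ts"] // unit] += 1
        return dict(counts)
```

In the real system the two stores are separate services and the Hadoop-based module consumes the batch side; this sketch only shows the classify-and-dispatch step.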

Luminescence Characterization of SrAl2O4:Ho3+ Green Phosphor Prepared by Spray Pyrolysis (분무열분해법으로 제조된 SrAl2O4:Ho3+ 녹색 형광체의 발광특성)

  • Jung, Kyeong Youl;Kim, Woo Hyun
    • Korean Chemical Engineering Research
    • /
    • v.53 no.5
    • /
    • pp.620-626
    • /
    • 2015
  • $Ho^{3+}$-doped $SrAl_2O_4$ upconversion phosphor powders were synthesized by spray pyrolysis, and their crystallographic and luminescence characteristics were examined while varying the activator concentration and heat-treatment temperature. The effect of organic additives on the crystal structure and luminescent properties was also investigated. $SrAl_2O_4:Ho^{3+}$ powders showed intense green emission due to the $^5F_4/^5S_2{\rightarrow}^5I_8$ transition of $Ho^{3+}$. The optimal $Ho^{3+}$ concentration for the highest luminescence was 0.1%; above this concentration, emission intensities were greatly diminished by concentration quenching due to dipole-dipole interaction between activator ions. From the dependence of emission intensity on the pumping power of a laser diode, it was clear that the upconversion of $SrAl_2O_4:Ho^{3+}$ occurred via ground-state absorption followed by excited-state absorption, involving two near-IR photons. The synthesized powders were predominantly monoclinic, with some hexagonal phase. Increasing the heat-treatment temperature from $1000^{\circ}C$ to $1350^{\circ}C$ enhanced the crystallinity of the monoclinic phase and reduced the hexagonal phase, which nevertheless did not disappear even at $1350^{\circ}C$. When both citric acid (CA) and ethylene glycol (EG) were added to the spray solution, the resulting powders had a pure monoclinic phase without any hexagonal phase and showed greatly enhanced crystallinity. Adding N,N-dimethylformamide (DMF) to the spray solution containing both CA and EG effectively reduced the surface area of the $SrAl_2O_4:Ho^{3+}$ powders. Consequently, the $SrAl_2O_4:Ho^{3+}$ powders prepared with the CA/EG/DMF mixture as the organic additive showed about 168% higher luminescence than the phosphor prepared without organic additives. It was concluded that the increased crystallinity of the high-purity monoclinic phase and the decreased surface area together account for the large enhancement of the upconversion luminescence.

Studies on the Drying Methods of Gangjung Pellets (강정 반데기 건조방법에 관한 연구)

  • 이승아;김창순;김혁일
    • Korean journal of food and cookery science
    • /
    • v.16 no.1
    • /
    • pp.47-56
    • /
    • 2000
  • The purpose of this study was to develop a drying method for Gangjung, a traditional Korean snack, that reduces the drying time and improves the quality of Gangjung. Two drying methods, hot air drying and far-infrared drying, were tested under varying conditions of air velocity (0.4, 1.2, 1.6 m/s), temperature (40, 50, 60°C), and aging. The optimal moisture content of the dried Gangjung pellets was 17%, which was suitable for frying; cracks appeared on the pellet surface at lower moisture contents. Far-infrared drying saved about 20% of the drying time. For both methods, drying at an air velocity of 0.4 m/s tended to give better Gangjung quality than drying at higher air velocities. The expansion volume and texture of Gangjung dried at 40°C were better than under the other temperature conditions, regardless of drying method. Gangjung dried in a single stage without aging was superior to that dried in two stages with an aging process, and single-stage drying saved at least 24 h of drying time. Gangjung dried at high temperature became hard and less brittle in the sensory evaluation. In image analysis, the air-cell distribution in the inner structure of Gangjung became uniform and fine as the drying temperature decreased to 40°C. Overall, Gangjung made from pellets dried by far-infrared drying at 40°C without aging showed the best quality in terms of physical and sensory properties.


A Study on the Block Shear Strength according to the Layer Composition and Adhesive Type of Ply-Lam CLT (Ply-Lam CLT의 층재 구성 및 접착제 종류에 따른 블록전단강도에 관한 연구)

  • CHOI, Gyu Woong;YANG, Seung Min;LEE, Hyun Jae;KIM, Jun Ho;CHOI, Kwang Hyeon;KANG, Seog Goo
    • Journal of the Korean Wood Science and Technology
    • /
    • v.48 no.6
    • /
    • pp.791-806
    • /
    • 2020
  • In this study, block shear strength tests were conducted to compare and analyze the strength and failure modes of glued laminated timber (glulam), CLT, and Ply-lam CLT, engineered wood products mainly used in timber construction. Through this, the Ply-lam CLT manufacturing conditions for optimal production, such as the type of lamina, plywood, adhesive, and layer composition, were investigated. The results are as follows. The block shear tests showed high strength in the order glulam > Ply-lam CLT > CLT. In particular, the shear strength of Ply-lam CLT made as a composite of larch plywood and larch lamina exceeded 7.1 N/mm², the Korean industrial standard for the block shear strength of structural glued laminated timber. There was no difference in shear strength according to the type of adhesive used for bonding glulam, CLT, and Ply-lam CLT. For Ply-lam CLT, however, shear strength differed with the type of lamina and plywood, decreasing in the order Larix kaempferi > mixed light hardwood ≒ Pinus densiflora Sieb. et Zucc. plywood. The optimal configuration of Ply-lam CLT uses larch plywood and larch lamina, and either PRF or PUR adhesive can be selected according to the application. The failure modes by type of wood-based material were shear parallel to grain for glulam, rolling shear for CLT, and both shear parallel to grain and rolling shear for Ply-lam CLT. This is closely related to the shear strength results and indicates that Ply-lam CLT attains higher shear strength than CLT, whose failure is governed by rolling shear.

In vivo Antifungal Activity of Pyrrolnitrin Isolated from Burkholderia cepacia EB215 with Antagonistic Activity Towards Colletotrichum Species (탄저병균에 대하여 길항작용을 보이는 Burkholderia cepacia EB215로부터 분리한 Pyrrolnitrin의 항균활성)

  • Park, Ji-Hyun;Choi, Gyung-Ja;Lee, Seon-Woo;Jang, Kyoung-Soo;Choi, Yong-Ho;Chung, Young-Ryun;Cho, Kwang-Yun;Kim, Jin-Cheol
    • The Korean Journal of Mycology
    • /
    • v.32 no.1
    • /
    • pp.31-38
    • /
    • 2004
  • An endophytic bacterial strain EB215 that was isolated from cucumber (Cucumis sativus) roots displayed a potent in vivo antifungal activity against Colletotrichum species. The strain was identified as Burkholderia cepacia based on its physiological and biochemical characteristics, and 16S rDNA gene sequence. Optimal medium and incubation period for the production of antifungal substances by B. cepacia EB215 were nutrient broth (NB) and 3 days, respectively. An antifungal substance was isolated from the NB cultures of B. cepacia EB215 strain by centrifugation, n-hexane partitioning, silica gel column chromatography, preparative TLC, and in vitro bioassay. Its chemical structure was determined to be pyrrolnitrin by mass and NMR spectral analyses. Pyrrolnitrin showed potent disease control efficacy of more than 90% against pepper anthracnose (Colletotrichum coccodes), cucumber anthracnose (Colletotrichum orbiculare), rice blast (Magnaporthe grisea) and rice sheath blight (Corticium sasaki) even at a low concentration of $11.1\;{\mu}g/ml$. In addition, it effectively controlled the development of tomato gray mold (Botrytis cinerea) and wheat leaf rust (Puccinia recondita) at concentrations over $33.3\;{\mu}g/ml$. However, it had no antifungal activity against Phytophthora infestans on tomato plants. Further studies on the development of microbial fungicide using B. cepacia EB215 are in progress.

Development of Market Growth Pattern Map Based on Growth Model and Self-organizing Map Algorithm: Focusing on ICT products (자기조직화 지도를 활용한 성장모형 기반의 시장 성장패턴 지도 구축: ICT제품을 중심으로)

  • Park, Do-Hyung;Chung, Jaekwon;Chung, Yeo Jin;Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.1-23
    • /
    • 2014
  • Market forecasting aims to estimate the sales volume of a product or service over a specific selling period. For an enterprise, accurate market forecasting assists in timing new product introductions, designing products, and establishing production plans and marketing strategies, enabling a more efficient decision-making process; it also enables governments to organize national budgets efficiently. This study aims to generate market growth curves for ICT (information and communication technology) goods using past time series data; to categorize products showing similar growth patterns; to understand the markets in the industry; and to forecast the future outlook of such products. It suggests a useful and meaningful process (or methodology) for identifying market growth patterns with a quantitative growth model and a data mining algorithm. The methodology is as follows. In the first stage, past time series data are collected for the target products or services of the categorized industry. The data, such as sales volume and domestic consumption for a specific product or service, are collected from the relevant government ministry, the National Statistical Office, and other relevant government organizations. Data that cannot be analyzed directly, due to a lack of past data or the alteration of code names, require pre-processing. In the second stage, an optimal model for market forecasting is selected; the model can vary with the characteristics of each industry. As this study focuses on the ICT industry, in which new technologies appear frequently and change the market structure, the Logistic, Gompertz, and Bass models are selected; a hybrid model that combines different models can also be considered. 
The hybrid model considered in this study estimates the size of the market potential with the Logistic and Gompertz models and then feeds those figures into the Bass model. The third stage is to evaluate which model explains the data most accurately: the parameters are estimated from the collected time series, the models' predicted values are generated, and the root-mean-squared error (RMSE) is calculated; the model with the lowest average RMSE across product types is taken as the best model. In the fourth stage, based on the parameter values estimated by the best model, a market growth pattern map is constructed with a self-organizing map algorithm. The self-organizing map is trained with the growth pattern parameters of all products or services as input data, and the products or services are organized onto an $N{\times}N$ map. The number of clusters increases from 2 to M, depending on the characteristics of the nodes on the map; the clusters are divided into zones, and the clustering that provides the most meaningful explanation is selected. Based on the final selection of clusters, the boundaries between nodes are drawn and the market growth pattern map is completed. The last step is to characterize the clusters and their growth curves: the average of the growth pattern parameters within each cluster is taken as representative, a growth curve is drawn for each cluster, and its characteristics are analyzed; considering the product types in each cluster, their characteristics can also be described qualitatively. We expect that the process and system suggested here can be used as a tool for forecasting demand in the ICT and other industries.
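The RMSE-based model selection in the third stage can be sketched in pure Python. The growth-curve parameters below are arbitrary examples; in the paper they are estimated from the collected time series before the comparison:

```python
import math

def logistic(t, m, a, b):
    """Logistic growth curve: cumulative sales at time t, market potential m."""
    return m / (1.0 + a * math.exp(-b * t))

def gompertz(t, m, a, b):
    """Gompertz growth curve."""
    return m * math.exp(-a * math.exp(-b * t))

def bass(t, m, p, q):
    """Bass diffusion cumulative curve (p: innovation, q: imitation)."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def rmse(actual, predicted):
    """Root-mean-squared error between two equal-length series."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def best_model(ts, sales, candidates):
    """Returns the (name, params) whose curve has the lowest RMSE on the
    series, mirroring the model-selection step described above.
    candidates: list of (name, curve_function, fitted_params)."""
    scored = [(rmse(sales, [f(t, *params) for t in ts]), name, params)
              for name, f, params in candidates]
    return min(scored)[1:]
```

A real pipeline would first fit each model's parameters (e.g., by nonlinear least squares) per product and average the RMSE over all product types before choosing.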

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.23-45
    • /
    • 2020
  • Big data is being created in a wide variety of fields, such as medical care, manufacturing, logistics, sales sites, and SNS, and dataset characteristics are correspondingly diverse. To secure competitiveness, companies need to improve their decision-making capacity using classification algorithms, yet most lack sufficient knowledge of which classification algorithm is appropriate for a specific problem area. In other words, determining which algorithm suits the characteristics of a dataset has been a task requiring expertise and effort, because the relationship between dataset characteristics (called meta-features) and the performance of classification algorithms has not been fully understood; moreover, there has been little research on meta-features reflecting the characteristics of multi-class data. The purpose of this study is therefore to analyze empirically whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. Meta-features of multi-class datasets were grouped into two factors, data structure and data complexity, and seven representative meta-features were selected. Among these, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration index, to replace IR (imbalance ratio), and we developed a new index, the Reverse ReLU Silhouette Score, for the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), Contraceptive Method Choice) were selected, and the class of each dataset was predicted with the algorithms chosen for the study (KNN, logistic regression, naïve Bayes, random forest, and SVM) under 10-fold cross-validation. 
Oversampling from 10% to 100% was applied to each fold, and the meta-features of the dataset were measured: HHI, number of classes, number of features, entropy, Reverse ReLU Silhouette Score, nonlinearity of a linear classifier, and hub score. F1-score was selected as the dependent variable. The results showed that the six meta-features, including the Reverse ReLU Silhouette Score and HHI proposed in this study, have a significant effect on classification performance: (1) the proposed HHI meta-feature was significant for classification performance; (2) the number of features, unlike the number of classes, has a significant and positive effect; (3) the number of classes has a negative effect; (4) entropy has a significant effect; (5) the Reverse ReLU Silhouette Score is also significant at the 0.01 level; and (6) the nonlinearity of linear classifiers has a significant negative effect on classification performance. The analyses by classification algorithm were consistent with these results, except that in the per-algorithm regressions the naïve Bayes algorithm, unlike the others, showed no significant effect for the number of features. This study makes two theoretical contributions: two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and the effects of data characteristics on classification performance were investigated through meta-features. Practically, the results can be utilized in developing systems that recommend a classification algorithm according to the characteristics of a dataset. Because data characteristics differ, many data scientists search for the optimal algorithm by repeatedly testing and tuning algorithm parameters, a process that wastes hardware, cost, time, and manpower; this study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The paper consists of an introduction, related research, the research model, experiments, and a conclusion and discussion.
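Two of the data-structure meta-features used in this study, HHI and entropy of the class distribution, are simple to compute; a minimal sketch follows (the Reverse ReLU Silhouette Score is the authors' own construction and is not reproduced here):

```python
import math
from collections import Counter

def herfindahl_hirschman_index(labels):
    """HHI of a class-label list: the sum of squared class shares.
    Equals 1/k for k perfectly balanced classes and 1.0 for a single
    class, so higher values signal stronger class imbalance."""
    n = len(labels)
    return sum((c / n) ** 2 for c in Counter(labels).values())

def class_entropy(labels):
    """Shannon entropy (in bits) of the class distribution, another of
    the data-structure meta-features listed above."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())
```

For a balanced binary dataset HHI is 0.5 and entropy is 1 bit; a degenerate single-class dataset gives HHI 1.0 and entropy 0.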

THE EFFECT OF ETCHING TIME ON THE PATTERN OF ACID ETCHING ON THE ENAMEL OF PRIMARY TEETH (산부식 시간에 따른 유전치 법랑질의 부식 유형에 관한 연구)

  • Choi, Su-Mi;Choi, Young-Chul;Park, Jae-Hong;Choi, Sung-Chul
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.35 no.3
    • /
    • pp.437-445
    • /
    • 2008
  • The presence of a "prismless" layer on the enamel surface, particularly of deciduous teeth, has been reported by a number of workers. This structure, which appears to lack the normal prism delineations, could interfere with tag formation and hence reduce bonding to such surfaces. The purpose of this study was to investigate the effect of etching time on the quality of acid-etching patterns on primary enamel. The labial surfaces of 32 extracted or exfoliated caries-free primary anterior teeth were used. A 35% phosphoric acid gel was applied only to the cervical region of the labial surface for each etching-time group: 15, 30, 45, and 60 seconds. The surfaces were then washed with water for 20 seconds and air-dried for 20 seconds. The findings were as follows. 1. With 15 seconds of etching, Type 3 patterns accounted for 75%. 2. With 30 and 45 seconds, Type 1 accounted for 38% and Type 2 for 75%. 3. With 60 seconds, Type 1 accounted for 25% and Type 2 for 75%. 4. An etching time of 60 seconds produced a constant, regular etching pattern. 5. There was a significant difference between the groups in the etch patterns achieved (p<0.05). 6. The acid-induced patterns (Types 1 and 2) became more pronounced as the application time increased (p<0.05). $45{\sim}60$ seconds was the optimal etching time for primary enamel.
