
A Template-based Interactive University Timetabling Support System (템플릿 기반의 상호대화형 전공강의시간표 작성지원시스템)

  • Chang, Yong-Sik; Jeong, Ye-Won
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.121-145, 2010
  • University timetabling, which depends on the educational environment of each university, is an NP-hard problem: the amount of computation required to find a solution increases exponentially with the problem size. For many years there have been many studies on university timetabling, motivated by the need to generate timetables automatically for students' convenience and effective lessons, and to allocate subjects, lecturers, and classrooms effectively. Timetables are classified into course timetables and examination timetables; this study focuses on the former. In general, the course timetable for liberal arts is scheduled by the office of academic affairs, while the course timetable for major subjects is scheduled by each department of a university. We found several problems from an analysis of current course timetabling in departments. First, it is time-consuming and inefficient for each department to do the routine and repetitive timetabling work manually. Second, many classes are concentrated into a few time slots in a timetable, which decreases the effectiveness of students' classes. Third, several major subjects may overlap required liberal arts subjects in the same time slots; in this case, students must choose only one of the overlapping subjects. Fourth, many subjects are lectured by the same lecturers every year, and most lecturers prefer the same time slots as in the previous year, so it is helpful for departments to reuse previous timetables. To solve these problems and support effective course timetabling in each department, this study proposes a university timetabling support system with two phases. In the first phase, each department generates a timetable template from the most similar previous timetable case, based on case-based reasoning. In the second phase, the department schedules the timetable through an interactive user interface under the timetabling criteria, based on a rule-based approach. This study illustrates the system with data from Hanshin University. We classified the timetabling criteria into intrinsic and extrinsic criteria. The intrinsic criteria comprise three criteria related to lecturer, class, and classroom, all of which are hard constraints. The extrinsic criteria comprise four criteria related to 'the number of lesson hours' by the lecturer, 'prohibition of lecture allocation to specific day-hours' for committee members, 'the number of subjects in the same day-hour,' and 'the use of common classrooms.' 'The number of lesson hours' by the lecturer includes three criteria: 'minimum number of lesson hours per week,' 'maximum number of lesson hours per week,' and 'maximum number of lesson hours per day.' The extrinsic criteria are also all hard constraints, except for 'minimum number of lesson hours per week,' which is treated as a soft constraint. In addition, we proposed two indices: one for measuring the similarity between the subjects of the current semester and those of previous timetables, and one for evaluating the distribution degree of a scheduled timetable. Similarity is measured by comparing two attributes, subject name and lecturer, between the current semester and a previous semester. The distribution degree index, based on information entropy, indicates how subjects are distributed over the timetable. To show the viability of this study, we implemented a prototype system and performed experiments with real data from Hanshin University. The average similarity of the most similar cases over all departments was 41.72%, which suggests that a timetable template generated from the most similar case is helpful. Sensitivity analysis showed that the distribution degree increases when 'the number of subjects in the same day-hour' is set to more than 90%.
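
The two proposed indices lend themselves to a short sketch. The Python below is a minimal illustration, assuming a subject is a dict with 'name' and 'lecturer' keys and a timetable is a list of (day-hour slot, subject) pairs; the function names and the exact normalization are hypothetical, since the abstract does not give the paper's formulas.

```python
import math
from collections import Counter

def case_similarity(current, previous):
    # Share of current-semester subjects whose (name, lecturer) pair also
    # appears in the previous case; a hypothetical rendering of the paper's
    # two-attribute comparison, not its exact formula.
    prev = {(s["name"], s["lecturer"]) for s in previous}
    hits = sum(1 for s in current if (s["name"], s["lecturer"]) in prev)
    return hits / len(current) if current else 0.0

def distribution_degree(assignments):
    # Entropy of subject counts per day-hour slot, normalized to [0, 1];
    # higher values mean subjects are spread more evenly over the timetable.
    if not assignments:
        return 0.0
    counts = Counter(slot for slot, _subject in assignments)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy
```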

Personalized Recommendation System for IPTV using Ontology and K-medoids (IPTV환경에서 온톨로지와 k-medoids기법을 이용한 개인화 시스템)

  • Yun, Byeong-Dae; Kim, Jong-Woo; Cho, Yong-Seok; Kang, Sang-Gil
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.147-161, 2010
  • With the recent convergence of broadcasting and communication, communication services have been joined to TV, and TV viewing has changed in many ways. IPTV (Internet Protocol Television) provides information services, movie contents, and broadcasts over the Internet, combining live programs with VOD (Video on Demand). Delivered over communication networks, it has become a new business issue, and the service has raised new technical issues such as imaging technology for the service, networking technology without video cuts, and security technologies to protect copyright. Through the IPTV network, users can watch the programs they want whenever they want. However, IPTV makes it difficult to find programs through search or menu navigation. The menu approach takes a long time to reach a desired program, and the search approach fails when the title, genre, or actors' names are unknown; entering letters with a remote control is also cumbersome. The bigger problem is that users are often unaware of the services available to them. Thus, to resolve the difficulty of selecting VOD services in IPTV, a personalized recommendation service is needed, which enhances users' satisfaction and lets them use their time efficiently. This paper provides programs that fit individual users, saving their time, to overcome these shortcomings of IPTV through a filtering and recommendation system. The proposed recommendation system collects TV program information and, from each individual's IPTV viewing records, the user's preferred program genres and detailed genres, channels, watched programs, and viewing times. To find similarities, programs are compared using a TV program ontology, because the distance between programs can be measured by similarity comparison. The TV program ontology we use is extracted from TV-Anytime metadata, which represents semantic information, and the ontology expresses contents and features numerically. Vocabulary similarity is determined through WordNet: all the words describing the programs are expanded into upper and lower classes for word-similarity decisions, and the average over the described keywords is measured. Based on the calculated distances, similar programs are grouped by the K-medoids partitioning method, which divides objects into clusters with similar characteristics. The K-medoids method selects K representative objects, assigns each object to its nearest representative by distance, and forms clusters; when the initial n objects are to be divided into K clusters, the optimal representative objects are found through repeated trials after selecting representatives temporarily, and through this process similar programs are clustered. When selecting programs through cluster analysis, weights are given to the recommendations as follows. When each cluster recommends programs, programs near the representative object are recommended to users; the distance formula is the same as the similarity-distance measure, and it provides the base score that determines the ranking of recommended programs. A weight is also computed from the number of items in each watching list: the more programs a cluster contains, the higher its weight, which we define as the cluster weight. Through this, the sub-TV programs representative of the clusters are selected and the final TV program ranks are determined. However, the cluster-representative TV programs include errors, so weights reflecting TV program viewing preference are added to determine the final ranks, and contents the customer prefers are recommended accordingly. Based on the proposed method, an experiment was carried out in a controlled environment, and it shows the superiority of the proposed method compared to existing approaches.
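
The clustering step can be sketched with a generic K-medoids loop. The Python below is a minimal illustration, assuming program items are hashable IDs and `distance` is the ontology-based dissimilarity described above (e.g., one minus the similarity score); the paper's initialization and weighting details are not reproduced.

```python
import random

def k_medoids(items, distance, k, max_iter=20, seed=0):
    # Partition `items` into k clusters around representative objects
    # (medoids): assign each item to its nearest medoid, then replace each
    # medoid with the member minimizing total within-cluster distance,
    # repeating until the medoids stabilize.
    rng = random.Random(seed)
    medoids = rng.sample(items, k)
    clusters = {}
    for _ in range(max_iter):
        clusters = {m: [] for m in medoids}
        for x in items:
            clusters[min(medoids, key=lambda m: distance(x, m))].append(x)
        new_medoids = [
            min(members, key=lambda c: sum(distance(c, x) for x in members))
            for members in clusters.values() if members
        ]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return clusters
```

A cluster weight proportional to cluster size can then boost programs from heavily watched clusters, along the lines the abstract describes.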

A Study on the Intelligent Quick Response System for Fast Fashion(IQRS-FF) (패스트 패션을 위한 지능형 신속대응시스템(IQRS-FF)에 관한 연구)

  • Park, Hyun-Sung; Park, Kwang-Ho
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.163-179, 2010
  • Recently, the concept of fast fashion is drawing attention as customer needs diversify and supply lead times shorten in the fashion industry. As competition has intensified, how quickly and efficiently to satisfy customer needs is emphasized as one of the critical success factors in the fashion industry. Because fast fashion is inherently susceptible to trends, it is very important for fashion retailers to make quick decisions regarding the items to launch, the quantity based on demand prediction, and the time to respond. These planning decisions must also be executed through the business processes of procurement, production, and logistics in real time. To adapt to this trend, the fashion industry urgently needs support from an intelligent quick response (QR) system, but the traditional functions of QR systems have not been able to fully satisfy the demands of the fast fashion industry. This paper proposes an intelligent quick response system for fast fashion (IQRS-FF). Presented are models for the QR process, QR principles and execution, and QR quantity and timing computation. The IQRS-FF models support decision makers by providing useful information through automated, rule-based algorithms: if the predefined conditions of a rule are satisfied, the actions defined in the rule are taken automatically or reported to the decision makers. In IQRS-FF, QR decisions are made in two stages: pre-season and in-season. In pre-season, master demand prediction is first performed based on macro-level analysis such as the local and global economy, fashion trends, and competitors; the prediction then feeds master production and procurement planning. Checking the availability and delivery of materials for production, decision makers must make reservations or request procurement; for outsourced materials, they must check the availability and capacity of partners. With the master plans, QR performance during the in-season is greatly enhanced, and the decision to select QR items is made with full consideration of the availability of materials in the warehouse as well as the partners' capacity. During in-season, decision makers must find the right time for QR as actual sales occur in stores. They then decide the items for QR based not only on qualitative criteria such as opinions from salespersons but also on quantitative criteria such as sales volume, the recent sales trend, inventory level, the remaining period, the forecast for the remaining period, and competitors' performance. To calculate QR quantity in IQRS-FF, two calculation methods are designed: QR Index based calculation and attribute-similarity based calculation using demographic clusters. In the early period of a new season, the attribute-similarity based QR quantity calculation is preferable because there are not enough historical sales data; QR quantity can be computed by analyzing the sales trends of categories or items with similar attributes. On the other hand, when there is enough information to analyze sales trends or to forecast, the QR Index based calculation method can be used. Having defined the decision-making models for QR, we designed KPIs (Key Performance Indicators) to test the reliability of the models in critical decision making: the difference in sales volume between QR items and non-QR items, the accuracy rate of QR, and the lead time spent on QR decision making. To verify the effectiveness and practicality of the proposed models, a case study was performed for a representative fashion company that recently developed and launched IQRS-FF. The case study shows that the average sales rate of QR items increased by 15%, the difference in sales rate between QR items and non-QR items increased by 10%, the QR accuracy was 70%, and the lead time for QR decreased dramatically from 120 hours to 8 hours.
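
The rule mechanism the abstract describes (predefined condition, automatic action or notification) can be sketched generically. The Python below is a minimal illustration only: the `Rule` structure, the snapshot fields (`recent_trend`, `inventory`, `weekly_forecast`), and the threshold values are all hypothetical, not the paper's actual schema or parameters.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    # When `condition` holds for an item's sales snapshot, `action` fires,
    # either automatically or as a notification to the decision maker.
    name: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

def run_rules(rules, snapshot):
    for rule in rules:
        if rule.condition(snapshot):
            rule.action(snapshot)

# Hypothetical in-season trigger: suggest a QR reorder when the recent sales
# trend is strong and inventory covers under two weeks of forecast demand.
rules = [Rule(
    name="qr-reorder",
    condition=lambda s: s["recent_trend"] > 1.2
                        and s["inventory"] < 2 * s["weekly_forecast"],
    action=lambda s: print(f"QR reorder suggested for {s['item']}"),
)]
run_rules(rules, {"item": "A-123", "recent_trend": 1.5,
                  "inventory": 300, "weekly_forecast": 400})
```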

Development of User Based Recommender System using Social Network for u-Healthcare (사회 네트워크를 이용한 사용자 기반 유헬스케어 서비스 추천 시스템 개발)

  • Kim, Hyea-Kyeong; Choi, Il-Young; Ha, Ki-Mok; Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.181-199, 2010
  • With the rapid progress of population aging and strong interest in health, the demand for new healthcare services is increasing. Until now, healthcare services have provided post-treatment in a face-to-face manner, but related research shows that proactive treatment is more effective for preventing diseases. In particular, existing healthcare services have limitations in preventing and managing metabolic syndrome, a lifestyle disease, because its cause is related to life habits. With the advent of ubiquitous technology, patients with metabolic syndrome can improve life habits such as poor eating and physical inactivity, without the constraints of time and space, through u-healthcare services. Therefore, much research on u-healthcare services focuses on providing personalized healthcare services for preventing and managing metabolic syndrome. For example, Kim et al. (2010) proposed a healthcare model that provides customized calories and nutrition-factor ratios by analyzing the user's food preferences. Lee et al. (2010) suggested a customized diet recommendation service that considers basic information, vital signs, family history of diseases, and food preferences to prevent and manage coronary heart disease. And Kim and Han (2004) demonstrated that web-based nutrition counseling affects the food intake and lipids of patients with hyperlipidemia. However, existing research on u-healthcare services focuses on providing predefined, one-way services, so users tend to lose interest in improving their life habits. To solve this problem, this research suggests a u-healthcare recommender system based on the collaborative filtering principle and a social network. The research follows the principle of collaborative filtering but preserves local networks (consisting of a small group of similar neighbors) for target users to recommend context-aware healthcare services. Our research consists of the following five steps. In the first step, a user profile is created from the usage-history data for life-habit improvement, and a set of users known as neighbors is formed by the degree of similarity between users, calculated with the Pearson correlation coefficient. In the second step, the target user obtains service information from his/her neighbors. In the third step, a top-N recommendation list of services is generated for the target user; in making the list, we use multi-filtering based on the user's psychological context information and body mass index (BMI) for detailed recommendation. In the fourth step, the personal information, i.e., the service usage history, is updated when the target user uses a recommended service. In the final step, the social network is reformed to continually provide qualified recommendations; for example, neighbors may be excluded from the social network if the target user does not like the recommendation list received from them. That is, this step updates each user's neighbors locally and always maintains up-to-date local neighbors to give context-aware recommendations in real time. The characteristics of our research are as follows. First, we develop a u-healthcare recommender system for improving life habits such as poor eating and physical inactivity. Second, the proposed recommender system uses autonomous collaboration, which prevents users from dropping out and losing interest in improving their life habits. Third, the reformation of the social network is automated to maintain recommendation quality. Finally, this research implemented a mobile prototype system using Java and Microsoft Access 2007 to recommend prescribed foods and exercises for chronic disease prevention, as provided by A university medical center. This research intends to prevent chronic illnesses and improve users' lifestyles by providing context-aware, personalized food and exercise services with the help of similar users' experience and knowledge. We expect that users of this system can improve their life habits with the help of a handheld mobile smartphone, because it uses autonomous collaboration to arouse interest in healthcare.
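
The neighbor-formation and top-N steps follow the standard user-based collaborative filtering pattern, which can be sketched briefly. The Python below is a minimal illustration, assuming each user profile maps a service to a usage/preference score; the scoring rule and function names are illustrative, and the paper's psychological-context/BMI multi-filter would prune the resulting list further.

```python
import math

def pearson(u, v):
    # Pearson correlation over the services both users have scores for.
    common = set(u) & set(v)
    if len(common) < 2:
        return 0.0
    mu = sum(u[s] for s in common) / len(common)
    mv = sum(v[s] for s in common) / len(common)
    num = sum((u[s] - mu) * (v[s] - mv) for s in common)
    den = math.sqrt(sum((u[s] - mu) ** 2 for s in common) *
                    sum((v[s] - mv) ** 2 for s in common))
    return num / den if den else 0.0

def top_n(target, profiles, n=5):
    # Score services used by positively correlated neighbors but not yet
    # by the target, weighting each score by the neighbor's similarity.
    scores = {}
    for user, profile in profiles.items():
        if user == target:
            continue
        sim = pearson(profiles[target], profile)
        if sim <= 0:
            continue
        for service, score in profile.items():
            if service not in profiles[target]:
                scores[service] = scores.get(service, 0.0) + sim * score
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

Dropping neighbors whose recommendations the target rejects, as in the final step, amounts to removing them from `profiles` before the next call.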

Finding Weighted Sequential Patterns over Data Streams via a Gap-based Weighting Approach (발생 간격 기반 가중치 부여 기법을 활용한 데이터 스트림에서 가중치 순차패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.55-75, 2010
  • Sequential pattern mining aims to discover interesting sequential patterns in a sequence database, and it is one of the essential data mining tasks widely used in various application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. In general sequential pattern mining, only the generation order of data elements in a sequence is considered, so it can easily find simple sequential patterns but is limited in finding the more interesting sequential patterns widely used in real-world applications. One of the essential research topics that compensates for this limit is weighted sequential pattern mining, in which not only the generation order of data elements but also their weights are considered to obtain more interesting sequential patterns. Recently, in various application fields, data has increasingly taken the form of continuous data streams rather than finite stored data sets, and the database research community has begun focusing its attention on processing over data streams. A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once to analyze the data stream, and the memory usage for analysis should be finitely restricted even though new data elements are continuously generated. Moreover, newly generated data elements should be processed as fast as possible to produce an up-to-date analysis result that can be instantly utilized upon request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis result by allowing some error. Considering the changes in the form of data generated in real-world application fields, much research has been actively performed to find various kinds of knowledge embedded in data streams, mainly focusing on efficient mining of frequent itemsets and sequential patterns over data streams, which have been proven useful in conventional data mining for finite data sets. In addition, mining algorithms have been proposed to efficiently reflect the changes of data streams over time in their mining results. However, they have targeted naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, taking no interest in mining novel interesting patterns that better express the characteristics of the target data streams. Therefore, defining novel interesting patterns and developing a mining method that finds them is a valuable research topic in the field of mining data streams, and such patterns can be effectively used to analyze recent data streams. This paper proposes a gap-based weighting approach for sequential patterns and a method for mining weighted sequential patterns over sequence data streams via this weighting approach. A gap-based weight of a sequential pattern can be computed from the gaps between data elements in the sequential pattern without any predefined weight information. That is, in this approach, the gaps between the data elements of each sequential pattern, as well as their generation orders, are used to obtain the weight of the pattern, which helps to find more interesting and useful sequential patterns. Since most computer application fields now generate data as data streams rather than finite data sets, the proposed method mainly focuses on sequence data streams.
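
To make the gap-based idea concrete, here is a minimal Python sketch. The abstract only says the weight is derived from the gaps between a pattern's elements with no predefined weights, so the specific weighting function below (reciprocal of one plus the total gap) is an assumption for illustration, not the paper's formula.

```python
def gap_weight(positions):
    # Toy gap-based weight for one occurrence of a sequential pattern:
    # `positions` are the indices of the pattern's elements within the
    # supporting sequence. Tighter occurrences (smaller gaps) weigh more.
    gaps = [b - a - 1 for a, b in zip(positions, positions[1:])]
    return 1.0 / (1.0 + sum(gaps))

def weighted_support(occurrences, n_sequences):
    # Sum per-occurrence gap weights (one occurrence per supporting
    # sequence) and normalize by the number of sequences seen so far.
    return sum(gap_weight(p) for p in occurrences) / n_sequences

# A pattern occurring with no gaps gets weight 1.0; gaps reduce it.
print(gap_weight([2, 3, 4]))   # 1.0
print(gap_weight([0, 3, 7]))   # 1 / (1 + 2 + 3) = 1/6
```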

A Case Study on Forecasting Inbound Calls of Motor Insurance Company Using Interactive Data Mining Technique (대화식 데이터 마이닝 기법을 활용한 자동차 보험사의 인입 콜량 예측 사례)

  • Baek, Woong; Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.99-120, 2010
  • Due to customers' widespread, frequent use of non-face-to-face services, there have been many attempts to improve customer satisfaction using the huge amounts of data accumulated through non-face-to-face channels. A call center is usually regarded as one of the most representative non-face-to-face channels, so it is important that a call center has enough agents to offer a high level of customer satisfaction. However, managing too many agents increases the operational costs of a call center through labor costs. Therefore, predicting and calculating the appropriate size of a call center's human resources is one of the most critical success factors of call center management. For this reason, most call centers are currently establishing a WFM (Workforce Management) department to estimate the appropriate number of agents, and they direct much effort toward predicting the volume of inbound calls. In real-world applications, inbound call prediction is usually performed based on the intuition and experience of a domain expert: the expert predicts the call volume by calculating the average calls of certain periods and adjusting the average according to his/her subjective estimation. However, this kind of approach has radical limitations in that the prediction might be strongly affected by the expert's personal experience and competence. It is often the case that one domain expert predicts inbound calls quite differently from another if the two experts have different opinions on selecting influential variables and on priorities among the variables. Moreover, it is almost impossible to logically clarify the process of an expert's subjective prediction. Currently, to overcome the limitations of subjective call prediction, most call centers are adopting a WFMS (Workforce Management System) package in which experts' best practices are systemized. With a WFMS, a user can predict the call volume by calculating the average calls of each day of the week, excluding some eventful days. However, a WFMS costs too much capital during the early stage of system establishment, and it is hard to reflect new information in the system when some factors affecting the call volume have changed. In this paper, we attempt to devise a new model for predicting inbound calls that is not only based on theoretical background but also easily applicable to real-world applications. Our model was mainly developed with the interactive decision tree technique, one of the most popular techniques in data mining. Therefore, we expect that our model can predict inbound calls automatically based on historical data while utilizing experts' domain knowledge during the process of tree construction. To analyze the accuracy of our model, we performed intensive experiments on a real case from one of the largest car insurance companies in Korea. In the case study, the prediction accuracy of the two devised models and of a traditional WFMS are analyzed with respect to various allowable error rates. The experiments reveal that our two data mining-based models outperform the WFMS in predicting the volume of accident calls and fault calls in most experimental situations examined.
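
As a rough illustration of tree-based call-volume prediction, the sketch below fits a plain scikit-learn DecisionTreeRegressor to toy daily-call history. Note this is only a stand-in: the paper's technique is an interactive decision tree in which WFM experts steer split selection, which the off-the-shelf regressor does not provide, and the feature names and numbers here are invented.

```python
# Minimal sketch: automated decision-tree regression on toy call history.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy features per day: [day_of_week, is_holiday, month, yesterday_calls]
X = np.array([
    [0, 0, 1, 950], [1, 0, 1, 1010], [2, 0, 1, 990],
    [3, 0, 1, 970], [4, 0, 1, 1020], [0, 1, 1, 400],
])
y = np.array([1000, 980, 1005, 995, 1030, 420])  # inbound calls that day

model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(model.predict([[1, 0, 1, 1000]]))  # forecast for a regular Tuesday
```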

The Planting and Use of Landscaping Plants in Kangweon-Do (강원도내 조경식물의 배치과 이용)

  • 이기의; 이우철; 박봉우; 조철길
    • Journal of the Korean Institute of Landscape Architecture, v.15 no.3, pp.33-50, 1988
  • This study was executed to find out how to improve the planting and use of plants in Kangweon-Do by surveying planting areas (gardens, parks, streets, schools, etc.) in five cities of the province, and to select available native plants by surveying the main mountains of the province. The results are as follows: 1. The number of species within the survey areas was 319, and the species with very high planting frequency were Hibiscus syriacus, Juniperus chinensis, Buxus microphylla v. koreana, etc. 2. The school trees and flowers of 202 schools in Kangweon-Do numbered 33 and 32 species respectively, and the school tree and flower showing the highest preference were Juniperus chinensis and Forsythia koreana respectively. 3. The flowers and trees designated in 22 cities and counties numbered 14 and 7 species respectively, and the flower and tree with the highest designation frequency were Rhododendron schlippenbachii and Ginkgo biloba respectively. 4. The street trees planted along the main streets in Kangweon-Do comprised 18 species and 84,939 individuals, and Populus alba × glandulosa had the highest ratio among them. 5. As for the composition ratios of plant life forms within the survey areas, deciduous broad-leaved trees had the highest ratio at about 56% and deciduous coniferous trees the lowest at about 1.6%; the ratio of native to exotic species was 43:57. 6. From these results, it was concluded that diversification of planting species, selection of plants suitable to each space, and more general use of native species are needed, so 254 plants native to Kangweon-Do were presented to meet these requirements.

MARGINAL MICROLEAKAGE AND SHEAR BOND STRENGTH OF COMPOSITE RESIN ACCORDING TO TREATMENT METHODS OF ARTIFICIAL SALIVA-CONTAMINATED SURFACE AFTER PRIMING (접착강화제 도포후 인공타액에 오염된 표면의 처리방법에 따른 복합레진의 번연누출과 전단결합강도)

  • Cho, Young-Gon; Ko, Kee-Jong; Lee, Suk-Jong
    • Restorative Dentistry and Endodontics, v.25 no.1, pp.46-55, 2000
  • During the bonding procedure of composite resin, the prepared cavity can be contaminated by saliva. In this study, the marginal microleakage and shear bond strength of a composite resin to primed enamel and dentin treated with artificial saliva (Taliva®) were evaluated. For the marginal microleakage test, Class V cavities were prepared in the buccal surfaces of fifty molars, and the samples were randomly assigned to 5 groups of 10 each. The control group was treated with a bonding system (Scotchbond™ Multi-Purpose Plus) according to the manufacturer's directions, without saliva contamination. The experimental groups were divided into 4 groups and contaminated with artificial saliva for 30 seconds after priming: in experimental group 1, the artificial saliva was dried with compressed air only; in experimental group 2, the artificial saliva was rinsed and dried; in experimental group 3, the cavities were etched with 35% phosphoric acid for 15 seconds after rinsing and drying the artificial saliva; in experimental group 4, the cavities were etched with 35% phosphoric acid for 15 seconds and the primer was reapplied after rinsing and drying the artificial saliva. All cavities were treated with a bonding agent and filled with a composite resin (Z-100™). Specimens were immersed in 0.5% basic fuchsin dye for 24 hours, embedded in transparent acrylic resin, and sectioned buccolingually with a diamond wheel saw; four sections were obtained from each specimen. The degree of marginal leakage was scored under a stereomicroscope, and the scores of the four sections were averaged. The data were analyzed by the Kruskal-Wallis test and Fisher's LSD. For the shear bond strength test, the buccal or occlusal surfaces of one hundred molar teeth were ground with a diamond wheel saw to expose enamel (n=50) or dentin (n=50), and the surfaces were smoothed with a lapping and polishing machine (South Bay Technology Co., U.S.A.). Samples were divided into 5 groups; the treatment of the saliva-contaminated enamel and dentin surfaces was the same as in the marginal microleakage test, and composite resin was bonded via a gelatin capsule. All specimens were stored in distilled water for 48 hours. The shear bond strengths were measured by a universal testing machine (AGS-1000 4D, Shimadzu Co., Japan) at a crosshead speed of 5 mm/minute, and the failure modes of the fracture sites were examined under a stereomicroscope. The data were analyzed by ANOVA and Tukey's studentized range test. The results of this study were as follows: 1. Enamel marginal microleakage showed no significant difference among groups. 2. The dentinal marginal microleakage of the control group and experimental groups 2 and 4 was lower than that of experimental groups 1 and 3 (p<0.05). 3. The shear bond strength to enamel was highest in the control group (20.03±4.47 MPa) and lowest in experimental group 1 (13.28±6.52 MPa); there were significant differences between experimental group 1 and the other groups (p<0.05). 4. The shear bond strength to dentin was higher in the control group (17.87±4.02 MPa) and experimental group 4 (16.38±3.23 MPa) than in the other groups, and was low in experimental group 1 (3.95±2.51 MPa) and experimental group 2 (6.72±2.26 MPa) (p<0.05). 5. The failure mode on enamel was mostly adhesive failure in experimental groups 1 and 3. 6. The failure mode on dentin showed no adhesive failures in the control group but mostly adhesive failures in the experimental groups. In summary, if the primed tooth surface is contaminated with artificial saliva, the primer should be reapplied after re-etching the surface.

EFFECT OF FILM THICKNESS OF RESIN CEMENT ON BONDING EFFICIENCY IN INDIRECT COMPOSITE RESTORATION (레진 시멘트의 film thickness가 간접 복합 레진 수복물의 접착 효율에 미치는 영향에 관한 연구)

  • Lee, Sang-Hyuck; Choi, Gi-Woon; Choi, Kyung-Kyu
    • Restorative Dentistry and Endodontics, v.35 no.2, pp.69-79, 2010
  • The purpose of this study was to evaluate the effect of the film thickness of various resin cements on bonding efficiency in indirect composite restoration through measurement of microtensile bond strength, polymerization shrinkage, flexural strength and modulus, and fractographic FE-SEM analysis. Experimental groups were divided according to film thickness (<50 μm: control, 50 μm: T50, 100 μm: T100, 150 μm: T150) using composite-based resin cements (Variolink II, Duo-Link) and adhesive-based resin cements (Panavia F, Rely X Unicem). The data were analyzed using ANOVA and Duncan's multiple comparison test (p < 0.05). The results were as follows: 1. Variolink II showed higher microtensile bond strength than the adhesive-based resin cements at all film thicknesses (p < 0.05), but Duo-Link did not show a significant difference except in the control group (p > 0.05). 2. The microtensile bond strengths of the composite-based resin cements decreased significantly with increasing film thickness (p < 0.05), but the adhesive-based resin cements showed no significant differences among film thicknesses (p > 0.05). 3. Panavia F showed significantly lower polymerization shrinkage than the other resin cements (p < 0.05). 4. The composite-based resin cements showed significantly higher flexural strength and modulus than the adhesive-based resin cements (p < 0.05). 5. FE-SEM examination showed a uniform adhesive layer and well-developed resin tags in the composite-based resin cements, but an unclear adhesive layer and poorly developed resin tags in the adhesive-based resin cements. In the debonded surface examination, the composite-based resin cements showed mixed failures, but the adhesive-based resin cements showed adhesive failures.

THE INFLUENCE OF CAVITY CONFIGURATION ON THE MICROTENSILE BOND STRENGTH BETWEEN COMPOSITE RESIN AND DENTIN (와동의 형태가 상아질과 복합레진 사이의 미세인장결합강도에 미치는 영향)

  • Kim, Ye-Mi; Park, Jeong-Won; Lee, Chan-Young; Song, Yoon-Jung; Seo, Deok-Kyu; Roh, Byoung-Duck
    • Restorative Dentistry and Endodontics, v.33 no.5, pp.472-480, 2008
  • This study was conducted to evaluate the influence of the C-factor on the bond strength of a 6th-generation self-etching system by measuring the microtensile bond strength of four types of restorations classified by different C-factors at an identical dentin depth. Eighty human molars were divided into four experimental groups with C-factors of 0.25, 2, 3, and 4. Each group was then further divided into four subgroups based on the adhesive and composite resin used. The adhesives used in this study were AQ Bond Plus (Sun Medical, Japan) and Xeno III (DENTSPLY, Germany), and the composite resins were Fantasista (Sun Medical, Japan) and Ceram-X mono (DENTSPLY, Germany). The results were analyzed using one-way ANOVA, Tukey's test, and Pearson's correlation test, and were as follows. 1. There was no significant difference among the C-factor groups, with the exception of the Xeno III and Ceram-X mono groups (p<0.05). 2. There was no significant difference between any of the adhesives and composite resins in the groups with C-factors of 0.25, 2, and 4. 3. There was no correlation between the change in C-factor and microtensile bond strength in the Fantasista groups. It was concluded that the C-factor of a cavity does not have a significant effect on the microtensile bond strength of restorations when cavities of the same dentin depth are restored with composite resin in conjunction with the 6th-generation self-etching system.