• Title/Summary/Keyword: Machine Parts


The Effects of Environmental Dynamism on Supply Chain Commitment in the High-tech Industry: The Roles of Flexibility and Dependence (첨단산업의 환경동태성이 공급체인의 결속에 미치는 영향: 유연성과 의존성의 역할)

  • Kim, Sang-Deok;Ji, Seong-Goo
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.2
    • /
    • pp.31-54
    • /
    • 2007
  • The exchange between buyers and sellers in the industrial market is changing from short-term to long-term relationships. Long-term relationships are governed mainly by formal contracts or informal agreements, but many scholars now assert that controlling a relationship with formal contracts under environmental dynamism is inappropriate. In such cases, partners depend on each other's flexibility or interdependence. The former, flexibility, provides a general frame of reference, order, and standards against which to guide and assess appropriate behavior in dynamic and ambiguous situations, thus motivating the value-oriented performance goals shared between partners. It is based on social sacrifices, which can potentially minimize opportunistic behavior. The latter, interdependence, means that each firm possesses a high level of dependence in a dynamic channel relationship. When interdependence is high in magnitude and symmetric, each firm enjoys a high level of power, and the bonds between the firms should be reasonably strong. Strong shared power is likely to promote commitment because of the common interests, attention, and support found in such channel relationships. This study deals with environmental dynamism in the high-tech industry. Firms in this industry regard successfully coping with environmental change as a key success factor. However, because few studies have dealt with environmental dynamism and supply chain commitment in the high-tech industry, it is very difficult to find effective coping strategies. This paper presents the results of an empirical study on the relationship between environmental dynamism and supply chain commitment in the high-tech industry. We examined the effects of consumer, competitor, and technological dynamism on supply chain commitment. Additionally, we examined the moderating effects of the flexibility and dependence of supply chains.
This study was confined to the type of high-tech industry characterized by rapid technological change and short product lifecycles. Flexibility among the firms of this rapidly growing industry is more important than in any other, so a variety of environmental dynamism can affect a supply chain relationship. The targeted industries were the electronic parts, metal products, computer, electrical machinery, automobile, and medical precision manufacturing industries. Data were collected as follows. During the survey, the researchers obtained the lists of parts suppliers of two internationally competitive mobile phone manufacturers, N and L, and of the suppliers in a business relationship with S company, a semiconductor manufacturer. They were asked to respond to the survey via telephone and e-mail. During February-April 2006, we were able to collect data from 44 companies. The respondents were restricted to direct dealing authorities and subcontractor (supplier) staff with at least three months of dealing experience with a manufacturer (an industrial material buyer). The measurement validation procedures included scale reliability; discriminant and convergent validity were used to validate the measures. The traditionally employed reliability measures, such as Cronbach's alpha, were also used, and all reliabilities were greater than .70. A series of exploratory factor analyses was conducted, and we conducted confirmatory factor analyses to assess the validity of our measurements. A series of chi-square difference tests was conducted to ensure discriminant validity: for each pair of constructs, we estimated two models, an unconstrained model and a constrained model, and compared the two model fits. All these tests supported discriminant validity.
Also, all items loaded significantly on their respective constructs, providing support for convergent validity. We then examined composite reliability and average variance extracted (AVE). The composite reliability of each construct was greater than .70, and the AVE of each construct was greater than .50. According to the multiple regression analysis, customer dynamism had a negative effect and competitor dynamism a positive effect on a supplier's commitment. In addition, flexibility and dependence had significant moderating effects on customer and competitor dynamism. On the other hand, none of the hypotheses about technological dynamism were supported: technological dynamism had no direct effect on a supplier's commitment and was not moderated by the flexibility or dependence of the supply chain. This study contributes as a rare study of environmental dynamism and supply chain commitment in the high-tech industry. In particular, it verified the effects of three sectors of environmental dynamism on a supplier's commitment and empirically tested how those effects were moderated by flexibility and dependence. The results showed that flexibility and interdependence serve to strengthen a supplier's commitment under environmental dynamism in the high-tech industry. Thus, relationship managers in the high-tech industry should make supply chain relationships flexible and interdependent. The limitations of the study are as follows. First, regarding the research setting, the study was conducted in the high-tech industry, in which the direction of change in the power balance of supply chain dyads is usually determined by manufacturers, so the results are difficult to generalize; a future study should control for the power structure between partners. Second, we treated flexibility throughout the paper as positive, but it can also be negative, e.g., violating an agreement or moving in the wrong direction. Therefore, we need to investigate the multi-dimensionality of flexibility in future research.
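The reliability criterion cited above (Cronbach's alpha greater than .70) follows a standard formula; as an illustration only, here is a minimal sketch of that computation on made-up item scores, not the study's data:

```python
def variance(xs):
    # sample variance (n - 1 denominator)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    # items: one list of scores per scale item, same respondents in each
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]  # per-respondent totals
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# hypothetical 3-item scale answered by 5 respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)  # about 0.886, above the .70 threshold
```

Values above .70 are conventionally taken as acceptable scale reliability, which is the threshold the study reports.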

  • PDF

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, compensating for the weaknesses of traditional asset allocation methods and replacing the parts those methods handle poorly. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. Because it structurally avoids investment risk, it is stable in the management of large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only handles billions of examples in limited-memory environments but also learns much faster than traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates the proportions of investments from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.
In this study, we used Korean stock market price data covering a total of 17 years, from 2003 to 2019, for the empirical test of the suggested model. The data sets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return and obtained a large sample thanks to the long test period. Compared with the traditional risk parity model, this experiment recorded improvements in both cumulative return and estimation error: the total cumulative return was 45.748%, about 5% higher than that of the risk parity model, and the estimation errors were reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The experimental results thus showed an improvement in portfolio performance achieved by reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the most fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a state-of-the-art algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; we have suggested a new method that reduces the estimation errors of an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
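The core mechanism, substituting a machine-learning volatility forecast into the risk parity weighting, can be sketched in simplified form. A real implementation would train an XGBoost regressor per asset to produce the forecasts; here hypothetical forecast values stand in, and the naive (correlation-free) risk parity rule weights assets by inverse volatility:

```python
def inverse_vol_weights(vols):
    # naive risk parity: weight each asset by 1/volatility, normalized,
    # so every asset contributes (approximately) equal risk
    inv = [1.0 / v for v in vols]
    s = sum(inv)
    return [x / s for x in inv]

# hypothetical next-period volatility forecasts (e.g. outputs of an
# XGBoost regressor trained on each sector's recent return history)
predicted_vols = {"energy": 0.25, "finance": 0.20, "IT": 0.40, "utility": 0.10}
weights = dict(zip(predicted_vols,
                   inverse_vol_weights(list(predicted_vols.values()))))
# the lowest-volatility sector (utility) receives the largest weight
```

The paper applies the predicted risk inside the covariance estimation step; the correlation-free version above is only the simplest member of the risk parity family, shown to make the substitution of forecast risk for historical risk concrete.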

Essay on Form and Function Design (디자인의 형태와 기능에 관한 연구)

  • 이재국
    • Archives of design research
    • /
    • v.2 no.1
    • /
    • pp.63-97
    • /
    • 1989
  • Nothing in design is more important than form and function, because every design product is created on the basis of them. Form and function existed before the word "design" appeared, and the basic organization of all natural and man-made things rests on their organic relations. These organic relations are the source of vitality that sustains all objects, and the evolution of living creatures has changed their appearances according to natural law and order. Design is no exception. Design is a man-made organic thing that develops in its own way according to its purposed aim and given situations. If so, what is the ultimate goal of design? It goes without saying that the goal is to contribute to the most desirable life of human beings, through a designer devoted to their convenience and well-being. The designer can therefore be called a practitioner of the rich life. This phrase carries many meanings, since the essence of design is improving the quality of life through the man-made things the designer creates. These things exist through the relations between form and function, and they keep their value when they answer to the right purpose. In design, thus, the main concern is how to create valuable things and to use them in the right way, and the subject of study is focused on the designer's outlook on value and the relations between form and function. Christopher Alexander noted the importance of form as follows: the ultimate object of design is form; every design problem begins with an effort to achieve fitness between the form and its context; the form is the solution to the problem, and the context defines the problem. In other words, when we speak of design, the real object of discussion is not form alone, but the ensemble comprising the form and its context.
Good fit is a desirable property of this ensemble which relates to some particular division of the ensemble into form and context. Max Bill maintained how important form is in design: form represents a self-contained concept, and its embodiment in an object results in that object becoming a work of art. Furthermore, this explains why we use form so frequently in a comparative sense for determining whether one thing is less or more beautiful than another, and why the ideal of absolute beauty is always the standard by which we appraise form, and through form, art itself. Hence form has become synonymous with beauty. On the other hand, Laszlo Moholy-Nagy stated the importance of function as follows: function means the task an object is designed to fulfill, and the task instrument shapes the form. Unfortunately, this principle was not appreciated at the time, but through the endeavors of Frank Lloyd Wright and of the Bauhaus group and its many colleagues in Europe, the idea of functionalism became the keynote of the twenties. Functionalism soon became a cheap slogan, however, and its original meaning blurred, so it is necessary to reexamine it in the light of present circumstances. Charles William Eliot expressed his idea on the relations between function and beauty: beauty often results chiefly from fitness; indeed, it is easy to maintain that nothing is fair except what is fit for its uses or functions. If the function of the product of a machine be useful and valuable, and the machine be eminently fit for its function, it conspicuously has the beauty of fitness. A locomotive or a steamship has the same sort of beauty, derived from supreme fitness for its function. As functions vary, so will that beauty vary. However, it is impossible to study form and function as separate beings. Function cannot exist without form, and without function, form is nothing. In other words, form is a function's container, and function is the content in form.
It can be said, therefore, that form and function are indispensable and commensal individuals with coeternal relations. From a different point of view, one is sometimes more emphasized than the other, but in such cases the logic is accepted only on the assumption of recognizing the importance of the other's entity. This fact is borne out by Frank Lloyd Wright's statement that form and function are one. In spite of that, form and function should be considered as independent individuals, because they are too important to be treated as a simple single one. Form and function have properties flexible to the context; in other words, the context plays the role of a barometer that defines form and function, and it also implies every meaning of the surroundings. Thus, design is formed under the influence of situations. Situations are dynamic, like the design process itself, in which a fixed focus can be crippling; moreover, situations govern the making of good design. Judging from this respect, I defined good design in my thesis "An Analytic Research on Design Ethic": good design is to solve the problem in the most proper way in the given situations. Situations are changeable, and so is design. There is no progress without change, but change is not necessarily progress. It is highly desirable that these changes be beneficial to mankind. Our main problem is to be able to discriminate between that which should be discarded and that which should be kept, built upon, and improved. Form and function are no exception. The practical function gives birth to the inevitable form, and the multi-classified function is delivered into varieties of form. All of these depend upon changeable situations. That is precisely the situation of "situation design," the concept of moving from the design of things to the design of the circumstances in which things are used.
From this point of view, the core of form and function depends upon how the designer manages them efficiently in given situations; that is to say, the creative designer plays an important role in fulfilling the purpose. Generally speaking, creativity is the organization of a concept in response to a human need, a solution that is both satisfying and innovative. In order to meet human needs, creative design activities require a special intuitive insight set into motion by purposeful imagination. Therefore, creativity is the most essential quality of every designer. In addition, designers share with other creative people a compulsive ingenuity and a passion for imaginative solutions that meet their criteria for excellence. Ultimately, form and function are a matter belonging to the desire of creative designers who constantly try to bring new things into being. Accordingly, the main purpose of this thesis is to catch every meaning of form and function and to closely analyze their relations, so as to promote understanding and devise practical applications for gradual progress in design. The thesis is composed of four parts: Introduction, Form, Function, and Conclusion. In the Introduction, the purpose and background of the research are presented. In Chapter I, the origin, perception, and classification of form are studied. In Chapter II, the generation, development, and diversification of function are considered. In the Conclusion, some concluding words are mentioned.

  • PDF

A Hybrid Forecasting Framework based on Case-based Reasoning and Artificial Neural Network (사례기반 추론기법과 인공신경망을 이용한 서비스 수요예측 프레임워크)

  • Hwang, Yousub
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.43-57
    • /
    • 2012
  • To enhance competitive advantage in a constantly changing business environment, enterprise management must make the right decisions in many business activities based on both internal and external information, so providing accurate information plays a prominent role in management's decision making. Intuitively, historical data can provide a feasible estimate through forecasting models. If the service department can estimate the service quantity for the next period, it can effectively control the inventory of service-related resources such as people, parts, and other facilities; in addition, the production department can build a load map for improving product quality. Obtaining an accurate service forecast therefore appears to be critical to manufacturing companies. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving average simulation. However, these methods are efficient only for data that are seasonal or cyclical; if the data are influenced by the special characteristics of the product, they are not feasible. In our research, we propose a forecasting framework that predicts the service demand of a manufacturing organization by combining case-based reasoning (CBR) with an unsupervised artificial neural network based clustering analysis (i.e., Self-Organizing Maps; SOM). We believe that this is one of the first attempts at applying unsupervised artificial neural network based machine-learning techniques in the service forecasting domain. Our proposed approach has several appealing features: (1) we applied CBR and SOM in a new forecasting domain, namely service demand forecasting.
(2) We proposed a combined CBR and SOM approach to overcome the limitations of traditional statistical forecasting methods, and we developed a service forecasting tool based on the proposed approach using an unsupervised artificial neural network and case-based reasoning. In this research, we conducted an empirical study on a real digital TV manufacturer (i.e., Company A) and empirically evaluated the proposed approach and tool using real sales and service-related data from that manufacturer. In our empirical experiments, we explored the performance of our proposed service forecasting framework compared with two other service forecasting methods: a traditional CBR-based forecasting model and the existing service forecasting model used by Company A. We ran each service forecast 144 times; each time, input data were randomly sampled for each service forecasting framework. To evaluate the accuracy of the forecasting results, we used Mean Absolute Percentage Error (MAPE) as the primary performance measure in our experiments. We conducted a one-way ANOVA test with the 144 measurements of MAPE for the three different service forecasting approaches. The F-ratio of MAPE for the three approaches is 67.25 and the p-value is 0.000, which means the difference between the MAPEs of the three approaches is significant at the 0.000 level. Since there is a significant difference among the approaches, we conducted Tukey's HSD post hoc test to determine exactly which means of MAPE are significantly different from which others.
In terms of MAPE, Tukey's HSD post hoc test grouped the three service forecasting approaches into three different subsets in the following order: our proposed approach > the traditional CBR-based service forecasting approach > the existing forecasting approach used by Company A. Consequently, our empirical experiments show that our proposed approach outperformed both the traditional CBR-based forecasting model and the existing service forecasting model used by Company A. The rest of this paper is organized as follows. Section 2 provides research background information, including summaries of CBR and SOM. Section 3 presents a hybrid service forecasting framework based on case-based reasoning and Self-Organizing Maps, while the empirical evaluation results are summarized in Section 4. Conclusions and future research directions are discussed in Section 5.
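The MAPE measure used as the primary performance metric above has a standard definition; a minimal sketch on hypothetical actual/forecast pairs (not the study's data):

```python
def mape(actual, forecast):
    # Mean Absolute Percentage Error, in percent:
    # the average of |actual - forecast| / |actual|, scaled by 100
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast)
    )

# hypothetical service demand: actual vs. forecast for four periods
actual = [100, 120, 80, 90]
forecast = [110, 115, 90, 85]
err = mape(actual, forecast)  # roughly 8.06 (percent)
```

Lower MAPE means a more accurate forecast, which is the direction of the comparison reported in the paper.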

EFFECT OF ULTRASONIC VIBRATION ON ENAMEL AND DENTIN BOND STRENGTH AND RESIN INFILTRATION IN ALL-IN-ONE ADHESIVE SYSTEMS (All-in-one 접착제에서 초음파진동이 법랑질과 상아질의 결합강도와 레진침투에 미치는 영향)

  • Lee, Bum-Eui;Jang, Ki-Taeg;Lee, Sang-Hoon;Kim, Chong-Chul;Hahn, Se-Hyun
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.31 no.1
    • /
    • pp.66-78
    • /
    • 2004
  • The objective of this study was to apply a vibration technique to reduce the viscosity of bonding adhesives and thereby compare the bond strength and resin penetration achieved in enamel and dentin with those gained using the conventional technique. For the enamel specimens, thirty teeth were sectioned mesio-distally. The two sectioned parts were assigned to the same adhesive system but different treatments (vibration vs. non-vibration). Each specimen was embedded in a 1-inch inner diameter PVC pipe with acrylic resin. The buccal and lingual surfaces were placed so that the tooth and the embedding medium were at the same level, and the samples were subsequently polished with silicon carbide abrasive papers. Each adhesive system was applied according to its manufacturer's instructions; the vibration groups were additionally vibrated for 15 seconds before curing. For the dentin specimens, except for removing the coronal part and placing the occlusal surface at the mold level, the procedures were the same as for the enamel specimens. Resin composite (Z250, 3M, U.S.A.) was condensed onto the prepared surface in two increments using a mold kit (Ultradent Inc., U.S.A.), and each increment was light-cured for 40 seconds. After 24 hours in tap water at room temperature, the specimens were thermocycled for 1,000 cycles. Shear bond strengths were measured with a universal testing machine (Instron 4465, England). To investigate the infiltration patterns of the adhesive materials, the surfaces of the specimens were examined with a scanning electron microscope. The results were as follows: 1. In enamel, the mean shear bond strengths of the vibration groups (groups 2, 4, 6) were greater than those of the non-vibration groups (groups 1, 3, 5); the differences were statistically significant except for the AQ Bond group. 2. In dentin, the mean shear bond strengths of the vibration groups (groups 2, 4, 6) were greater than those of the non-vibration groups (groups 1, 3, 5).
However, the differences were not statistically significant except for the One-Up Bond F group. 3. Under SEM examination, the vibration groups showed more mineral loss in enamel, and longer resin tags and a greater number of lateral branches in dentin.

  • PDF

A Study on Laboratory Treatment of Metalworking Wastewater Using Ultrafiltration Membrane System and Its Field Application (한외여과막시스템을 이용한 금속가공폐수의 실험실적 처리 및 현장 적용 연구)

  • Bae, Jae Heum;Hwang, In-Gook;Jeon, Sung Duk
    • Korean Chemical Engineering Research
    • /
    • v.43 no.4
    • /
    • pp.487-494
    • /
    • 2005
  • Nowadays, a large amount of wastewater containing metalworking fluids and cleaning agents is generated during the cleaning of machined parts in various industries, including the automobile, machinery and metal, and electronics industries. In this study, aqueous or semi-aqueous cleaning wastewater contaminated with soluble or insoluble oils was treated using an ultrafiltration system. The membrane permeation flux and oil-water separation performance (COD removal efficiency) of the ultrafiltration system, which employed PAN as its membrane material, were measured at various operating conditions while changing the membrane pore size and the soil concentration of the wastewater, and the system's suitability for treating wastewater contaminated with soluble or insoluble oil was examined. As a result, in the case of wastewater contaminated with soluble oil and an aqueous or semi-aqueous cleaning agent, as the membrane pore size increased from 10 kDa to 100 kDa, the membrane permeability increased rapidly while the COD removal efficiency remained almost constant at 90-95%. However, in the case of wastewater contaminated with insoluble oil and an aqueous or semi-aqueous cleaning agent, as the membrane pore size increased from 10 kDa to 100 kDa and the soil concentration of the wastewater increased, the membrane permeability fell rapidly while the COD removal efficiency remained almost constant. These phenomena indicate that because the membrane material is hydrophilic PAN, it blocks insoluble oil, which reduces membrane permeability. Thus, it can be concluded that aqueous or semi-aqueous cleaning solution contaminated with soluble oil can be treated by an ultrafiltration system with a PAN membrane with a pore size of 100 kDa. Based on these experimental results, a pilot plant ultrafiltration system with a 100 kDa PAN membrane was designed, installed, and operated to treat and recycle alkaline cleaning solution contaminated with deep-drawing oil.
In its field application, the ultrafiltration system was able to separate aqueous cleaning solution and soluble oil effectively and recycle them. Furthermore, it increased the life span of the aqueous cleaning solution twelvefold compared with the previous process.

Effect of MRI Contrast Media on PET/MRI (PET/MRI에 있어 MRI 조영제가 PET에 미치는 영향)

  • Kim, Jae Il;Kim, In Soo;Lee, Hong Jae;Kim, Jin Eui
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.19-25
    • /
    • 2014
  • Purpose: Integrated PET/MRI, which has been developed recently, has become very helpful in oncologic, neurologic, and cardiologic nuclear medicine. In PET/MRI, a $\mu$-map is created from special MRI sequences that segment parts of the body for attenuation correction. However, because an MRI contrast agent is needed in order to obtain more MRI information, we evaluated the effect of contrast agent on the SUV of PET images attenuation-corrected by MRI. Materials and Methods: A Biograph mMR (Siemens, Germany) was used as the PET/MRI machine. For the phantom test, 1 mCi of $^{18}$F-FDG was injected into a cylindrical uniformity phantom, and PET data were then acquired for about 10 minutes together with VIBE-DIXON and UTE MRI sequence images for attenuation correction. Then 4 cc of the T1-weighted contrast medium DOTAREM (GUERBET, France) was injected into the same phantom, and PET and MRI data were acquired by the same methods. Using the PET data with the non-contrast and contrast MRI, attenuation-corrected PET images were reconstructed and the differences in SUVs were evaluated. Additionally, to simulate a high density of contrast medium, two 500 cc plastic bottles were used: $^{18}$F-FDG with 5 cc of DOTAREM was injected into the first bottle, only $^{18}$F-FDG into the second, and the SUVs reconstructed by the same methods were evaluated. For the clinical patient study, rectal cancer and pancreatic cancer patients were selected, and we evaluated the SUVs of PET images attenuation-corrected by contrast-enhanced and non-contrast MRI. Results: In the phantom study, although the VIBE-DIXON MRI signal with contrast medium was 433% higher than without, the signal intensities of the $\mu$-map and the attenuation-corrected PET were the same. In the case of a high contrast medium density, image distortion appeared on the $\mu$-map and PET images. In the clinical patient study, the VIBE-DIXON MRI signal at the lesion was increased by 495% by using DOTAREM.
However, there were no significant differences in the $\mu$-map, non-AC PET, or AC PET images whether contrast medium was used or not. In the whole-body PET/MRI study, the %diff between contrast and non-contrast MRAC at the lung, liver, renal cortex, femoral head, myocardium, bladder, and muscle was -4.32%, -2.48%, -8.05%, -3.14%, 2.30%, 1.53%, and 6.49%, respectively. Conclusion: In integrated PET/MRI, a segmentation $\mu$-map method is used to correct the attenuation of the PET signal. Although the MRI signal used for attenuation correction changes when contrast medium is used, the $\mu$-map does not change, and thus the MRAC PET signal does not change either. Therefore, MRI contrast medium does not affect attenuation-corrected PET. Moreover, when designing a PET/MRI protocol flow, the order of the PET and MRI sequences does not matter, and it is possible to compare PET images before and after contrast agent injection.
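The %diff values reported for the organs presumably follow the usual relative-difference definition; a sketch of that computation on a hypothetical organ SUV pair (both the definition and the numbers are assumptions, not the paper's data):

```python
def percent_diff(suv_contrast, suv_noncontrast):
    # percent difference of the contrast-enhanced MRAC SUV relative to
    # the non-contrast MRAC SUV (assumed definition)
    return 100.0 * (suv_contrast - suv_noncontrast) / suv_noncontrast

# hypothetical SUV pair for one organ; a negative value means the
# contrast-corrected SUV came out slightly lower
d = percent_diff(2.39, 2.50)
```

Under this definition, the small positive and negative values in the table above would correspond to SUV shifts of only a few percent in either direction.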

  • PDF

Comparison of adhesive strength of resinous teeth splinting materials according to enamel surface treatment (법랑질 표면 처리방법에 따른 레진계 치아 고정재료의 접착강도 비교)

  • Lee, Ye-Rim;Kim, Soo-Yeon;Kim, Jin-Woo;Park, Se-Hee;Cho, Kyung-Mo
    • Journal of Dental Rehabilitation and Applied Science
    • /
    • v.35 no.2
    • /
    • pp.72-80
    • /
    • 2019
  • Purpose: The purpose of this study is to compare and analyze the shear bond strength and fracture pattern of resin splinting materials under different enamel surface treatments. Materials and Methods: G-FIX and LightFix were used as tooth splinting materials. Twenty bovine mandibular incisors were used to prepare the specimens. The exposed enamel surface was divided into four parts, and each part was treated with 37% phosphoric acid, 37% phosphoric acid + adhesive resin, 37% phosphoric acid + G-Premio Bond, or G-Premio Bond alone. Shear bond strength was measured using a universal testing machine. After measuring the shear bond strength, the fractured surface of the specimen was magnified with a microscope to observe the fracture pattern. Two-way ANOVA was used to verify the interaction between the material and the surface treatment method. One-way ANOVA was used for comparisons among the surface treatment methods for each material, and a post-hoc test was conducted with Scheffe's test. An independent t-test was conducted to compare the shear bond strengths of the two materials under each surface treatment method. All statistics were conducted at the 95% significance level. Results: G-FIX showed similar shear bond strength whether it was applied after acid etching alone or with an additional adhesive resin, whereas LightFix showed the highest shear bond strength when an additional adhesive resin was used after acid etching. In addition, both G-FIX and LightFix showed the lowest shear bond strength when only the self-etching adhesive was applied without additional acid etching. The interaction test revealed an interaction between the resins and the surface treatment methods. Mixed failure was mostly observed in all groups.
Conclusion: When using G-FIX and LightFix, which are tooth-splinting materials, it is considered that sufficient adhesion will be achieved even after applying only acid etching as instructed by the manufacturer.
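The group comparison described above can be sketched as a plain one-way ANOVA F-statistic computation. This is a minimal illustration only: the bond-strength values and group labels below are hypothetical, not the study's data, and the paper itself additionally used two-way ANOVA, Scheffe's post-hoc test, and t-tests.

```python
def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for k groups of measurements."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (treatment effect), k - 1 degrees of freedom.
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (error), n - k degrees of freedom.
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    msb = ssb / (k - 1)
    msw = ssw / (n - k)
    return msb / msw

# Hypothetical shear bond strengths (MPa) for three surface treatments.
etch_only      = [14.2, 15.1, 13.8, 14.9]
etch_adhesive  = [16.0, 15.4, 16.8, 15.9]
self_etch_only = [9.1, 8.7, 9.5, 8.9]

f = one_way_anova_f([etch_only, etch_adhesive, self_etch_only])
```

A large F relative to the F-distribution's critical value at α = 0.05 would, as in the study, indicate that mean bond strength differs across surface treatments.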

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Channels such as social media and SNS now generate enormous amounts of data, and the share of unstructured data represented as text has grown geometrically. Because it is impractical to read all of this text, it is important to access it rapidly and grasp its key points. To meet this need for efficient understanding, many text summarization studies have been proposed for handling and exploiting huge volumes of text. In particular, many recent methods apply machine learning and artificial intelligence algorithms to generate summaries objectively and effectively, so-called "automatic summarization". However, most summarization methods proposed to date construct the summary around the most frequent contents of the original documents. Such summaries tend to omit low-weight subjects that are mentioned less often in the original text. If a summary covers only the major subject, bias occurs and information is lost, making it hard to ascertain every subject the documents contain. This bias can be avoided by summarizing with a balance among the topics a document contains, so that all of its subjects appear, but an unbalanced distribution across those subjects can still remain. To keep the subjects in the summary balanced, it is necessary to consider the proportion of each subject in the original documents and to allocate portions to the subjects evenly, so that even sentences on minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that preserves balance among all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summarization, we use two summary-evaluation concepts, "completeness" and "succinctness".
Completeness means that the summary should fully cover the contents of the original documents; succinctness means that the summary should contain minimal internal duplication. The proposed method has three phases. The first phase constructs subject-term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate how strongly each term is related to each topic. From these weights, the terms most related to each topic can be identified, and the subjects of the documents can be found from topics composed of semantically similar terms. A few terms that represent each subject well are then selected; in this method they are called "seed terms". Because the seed terms alone are too few to describe each subject adequately, additional terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for this word expansion: after training a Word2Vec model, the similarity between any two terms can be derived from their word vectors using cosine similarity, where a higher cosine similarity indicates a stronger relationship between the two terms. Terms with high similarity to the seed terms of each subject are selected, and after filtering these expanded terms the subject dictionary is finally constructed. The second phase allocates a subject to every sentence in the original documents. To grasp the content of each sentence, a frequency analysis is first conducted with the terms that compose the subject dictionaries. A TF-IDF weight for each subject is then calculated, indicating how much each sentence explains each subject. However, raw TF-IDF weights can grow without bound, so the subject weights of each sentence are normalized to values between 0 and 1.
Each sentence is then allocated to the subject with its maximum TF-IDF weight, yielding a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between subject sentences, forming a similarity matrix, and sentences are selected iteratively to generate a summary that fully covers the contents of the original documents while minimizing internal duplication. To evaluate the proposed method, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between summaries from the proposed method and frequency-based summaries verified that the proposed method better retains the balance of subjects originally present in the documents.
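The sentence-allocation and selection phases described above can be sketched as follows. This is a toy illustration under loud assumptions: the subject dictionaries are given directly (the paper derives them via topic modeling plus Word2Vec expansion of seed terms), the corpus is a handful of made-up hotel-review sentences, and Jaccard word overlap stands in for the paper's Sen2Vec cosine similarity.

```python
from collections import Counter
from math import log

# Hypothetical subject dictionaries; in the proposed method these come
# from topic modeling plus Word2Vec expansion of seed terms.
subject_dict = {
    "room":    {"room", "bed", "clean", "bathroom"},
    "food":    {"breakfast", "food", "restaurant", "coffee"},
    "service": {"staff", "service", "friendly", "helpful"},
}

sentences = [
    "the room was clean and the bed was comfortable",
    "breakfast at the restaurant was great with fresh coffee",
    "the staff were friendly and the service was helpful",
    "the bathroom in the room was spotless",
    "food at the restaurant was excellent",
]

def tokens(s):
    return s.split()

# Phase 2: score each sentence against each subject with a TF-IDF-style
# weight over the dictionary terms, normalized to [0, 1] per sentence.
def subject_scores(sentence):
    toks = Counter(tokens(sentence))
    n_docs = len(sentences)
    raw = {}
    for subj, terms in subject_dict.items():
        score = 0.0
        for t in terms:
            if toks[t]:
                df = sum(1 for s in sentences if t in tokens(s))
                score += toks[t] * log(1 + n_docs / df)
        raw[subj] = score
    top = max(raw.values()) or 1.0   # guard against all-zero scores
    return {s: v / top for s, v in raw.items()}

# Allocate each sentence to the subject with the maximum weight.
groups = {s: [] for s in subject_dict}
for sent in sentences:
    scores = subject_scores(sent)
    groups[max(scores, key=scores.get)].append(sent)

# Phase 3 (simplified): draw from each subject group in turn, skipping
# any sentence too similar (Jaccard word overlap) to one already chosen,
# approximating "completeness" plus "succinctness".
def jaccard(a, b):
    sa, sb = set(tokens(a)), set(tokens(b))
    return len(sa & sb) / len(sa | sb)

summary, quota = [], 1   # one sentence per subject for this toy corpus
for subj, sents in groups.items():
    for sent in sents[:quota]:
        if all(jaccard(sent, chosen) < 0.5 for chosen in summary):
            summary.append(sent)
```

Because one sentence is drawn per subject, the resulting summary contains every subject regardless of how rarely each is mentioned, which is the balancing behavior the method aims for; the frequency-based baseline would instead fill the summary from the most-mentioned subject alone.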