• Title/Summary/Keyword: traditional uses

Search results: 975

Analysis of Research on Non-Timber Forest Plants - Based on the Articles Published in the Journal of Korean Forest Society from 1962 to 2013 - (산림과학분야의 산림특용자원식물의 연구 - 한국임학회지에 게재된 논문을 중심으로 -)

  • Lee, Hyunseok; Yi, Jaeseon; An, Chanhoon; Lee, Jeonghoon
    • Journal of Korean Society of Forest Science / v.104 no.3 / pp.337-351 / 2015
  • The articles published in the Journal of Korean Forest Society from Volume 1 (1962) to Volume 102 (2013) were surveyed to analyze research trends on forest plants for special purposes, i.e., edible plants, medicinal plants, feed resources, landscape plants, fiber plants, industrial usage, bee plants, bioenergy/phytoremediation uses, dye materials, and rare/endangered/endemic plants. These research articles were then classified by research content into the following categories: habitat environment, ecology, physiology, propagation, silviculture (including planting and tending), genetics and breeding, identification, pest and disease control, animal-related research, component analysis and extracts, vegetation survey, biotechnology, management, recreation and forest healing, and research review. Among the 2,433 articles published in total, 611 (25.1%) were related to plants for special usage or purposes. The highest publication frequency (14.9%) was found in the field of silviculture, followed by physiology, propagation, identification, and genetics and breeding. By usage, edible plants showed a higher frequency (26.5%) than the others, followed by industrial purpose, bioenergy/phytoremediation usage, landscape plants, medicinal plants, and rare/endangered/endemic plants. Populus species were the most frequently studied, with 62 articles, followed by Castanea crenata (36), Pinus koraiensis (35), Robinia pseudoacacia (20), and Ginkgo biloba (17). Based on this survey and analysis, the following points are suggested: 1) improved evaluation of forest plants as non-wood resources, 2) expanded research topics covering the production, management, and utilization of non-wood forest resources, 3) maintenance of a database of forest plant information and encouragement of cooperative research meeting the needs of other industrial and scientific areas, and 4) promotion of traditional-knowledge-based research on forest plants.

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul; Kim, Kyoung-Jae
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.157-178 / 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper model of credit rating. From a technical perspective, credit rating constitutes a typical multiclass classification problem because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. However, in practice, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many defects, including the difficulty in determining the values of the control parameters and the number of processing elements in the layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared for multiclass classification tasks such as credit rating. Thus, researchers have tried to extend the original SVM to multiclass classification. A variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature, but only a few types of MSVM have been tested in prior studies applying MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of these techniques to a real-world case of credit rating in Korea.
Corporate bond rating is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another, and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that a modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
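
A minimal illustrative sketch of the decomposition idea behind MSVMs, contrasting the One-Against-One and One-Against-All strategies mentioned above. It uses scikit-learn on synthetic, rating-like data; the feature matrix, the five ordered classes, and all parameter values are assumptions for illustration only and are unrelated to the paper's dataset or its DAGSVM/ECOC variants.

```python
# Sketch: one-against-one vs. one-against-all SVM decomposition on synthetic
# "rating" data. All data and parameters are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                        # stand-in for financial ratios
y = np.digitize(X[:, :3].sum(axis=1), bins=[-2, -0.5, 0.5, 2])  # 5 ordered "ratings"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "one-against-one": OneVsOneClassifier(SVC(kernel="rbf", C=1.0, gamma="scale")),
    "one-against-all": OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma="scale")),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, model.predict(X_te)))
```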

Impact of Net-Based Customer Service on Firm Profits and Consumer Welfare (기업의 온라인 고객 서비스가 기업의 수익 및 고객의 후생에 미치는 영향에 관한 연구)

  • Kim, Eun-Jin; Lee, Byung-Tae
    • Asia Pacific Journal of Information Systems / v.17 no.2 / pp.123-137 / 2007
  • The advent of the Internet and related Web technologies has created an easily accessible link between a firm and its customers, and has provided opportunities for a firm to use information technology to support supplementary after-sale services associated with a product or service. It has been widely recognized that supplementary services are as important a source of customer value and competitive advantage as the characteristics of the product itself. Many of these supplementary services are information-based and need not be co-located with the product, so more and more companies are delivering them electronically. Net-based customer service, defined as an Internet-based computerized information system that delivers services to a customer, is therefore the core infrastructure for supplementary service provision. The importance of net-based customer service in delivering supplementary after-sale services associated with a product has been well documented. The strategic advantages of well-implemented net-based customer service are enhanced customer loyalty and higher lock-in of customers, with a resulting reduction in competition and a consequent increase in profits. However, not all customers utilize such net-based customer service. The digital divide is the phenomenon in our society capturing the observation that not all customers have equal access to computers. Socioeconomic factors such as race, gender, and education level are strongly related to Internet accessibility and the ability to use it. This is due to differences in the ability to bear the cost of a computer and differences in self-efficacy in the use of technology, among other reasons. This concept, applied to e-commerce, has been called the "e-commerce divide." High Internet penetration is not eradicating the digital divide and the e-commerce divide as one would hope. In addition, to receive personalized support, a customer must often provide personal information to the firm. This personal information includes not only name and address, but also preference information and perhaps valuation information. However, many recent studies show that consumers may not be willing to share information about themselves due to concerns about online privacy. Because of the e-commerce divide, and because of customers' privacy and security concerns about sharing personal information with firms, only a limited number of customers adopt net-based customer service. This limited level of adoption affects firm profits and customer welfare. We use a game-theoretic model in which the net-based customer service system is modeled as a mechanism for enhancing customer loyalty. We model a market entry scenario where a firm (the incumbent) uses the net-based customer service system to induce loyalty in its customer base. The firm sells one product through traditional retailing channels at a price set for those channels. Another firm (the entrant) enters the market and, having observed the price of the incumbent firm (and having deduced the loyalty levels in the customer base), chooses its price. The profits of the firms and the surplus of the two customer segments (the segment that utilizes net-based customer service and the segment that does not) are analyzed in a Stackelberg leader-follower model of competition between the firms. We find that an increase in adoption of net-based customer service by the customer base is not always desirable for firms.
When net-based customer service has low effectiveness in enhancing customer loyalty, firms prefer a high level of customer adoption, because an increase in the adoption rate decreases competition and increases profits. A firm in an industry where net-based customer service is a highly effective loyalty mechanism, on the other hand, prefers a low level of adoption by customers.
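
To make the market-entry setup concrete, here is a rough numerical sketch of a Stackelberg leader-follower pricing game solved by backward induction on a price grid: a loyal segment (created by net-based customer service) always buys from the incumbent, while the remaining segment buys from the cheaper firm. The demand structure and all parameters (`alpha`, `v`, the grid) are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch of a Stackelberg pricing game: the leader (incumbent) commits
# to a price, the follower (entrant) best-responds, and the leader anticipates
# that response. All parameters are illustrative assumptions.
import numpy as np

alpha = 0.3          # fraction of customers locked in by net-based service
v = 1.0              # customers' willingness to pay
prices = np.linspace(0.01, v, 200)

def entrant_best_response(p_incumbent):
    # The entrant can only win the non-loyal segment, and only by undercutting.
    profits = [p_e * (1 - alpha) if p_e < p_incumbent else 0.0 for p_e in prices]
    return prices[int(np.argmax(profits))]

def incumbent_profit(p_i):
    p_e = entrant_best_response(p_i)
    share = alpha + (1 - alpha) * (p_i <= p_e)   # loyal segment plus, possibly, the rest
    return p_i * share

best_p_i = max(prices, key=incumbent_profit)
print("leader price:", round(float(best_p_i), 3),
      "follower price:", round(float(entrant_best_response(best_p_i)), 3))
```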

Pottery Glaze Making and Its Properties by Using Grain Stem Ash & Vegetables Ash (곡물재와 채소재를 이용한 도자기용 유약제조와 그 특성)

  • Han, Young-Soon; Lee, Byung-Ha
    • Journal of the Korean Ceramic Society / v.41 no.11 / pp.834-841 / 2004
  • The purpose of this study is to investigate the properties of traditional Korean ash glazes made from locally available sources, including 10 kinds of grain stems, 2 kinds of husks (pod, chaff), and 4 kinds of vegetables (spinach, radish leaf and stem, pumpkin leaf and stem, pepper stem), and to develop their practical uses as ash glazes. The test results indicate that these ashes can be classified into four categories. The first group, which includes perilla stem ash, sesame stem ash, black-bean stem ash, and red-bean stem ash, shows a strong milky white color due to its relatively lower $SiO_2$ content and relatively higher CaO and $P_2O_5$ content (10% higher than the others), and its glazes were found to be suitable as opaque glazes, as they show a relatively stable bright greenish color. The second group includes pepper stem ash, spinach ash, pod ash, radish leaf and stem ash, and bean stem ash, and was found to contain an even quantity of every component. Its glazes show a somewhat greenish color because of an especially high MgO content and more than 2% $Fe_2O_3$, and were found to be suitable as the basic glaze for IRABO glaze. The third group, which includes corn stalk ash, white-bean ash, and pumpkin leaf and stem ash, has more $SiO_2$ and $Al_2O_3$ than the other ashes, and it also contains 3~5% $Fe_2O_3$. As a result, this third group shows the greatest change of color and chroma, and was found to be suitable as the basic glaze for Temmoku and black glazes. The fourth group (reed ash, rice straw ash, Indian millet stalk ash, and chaff ash) has as much as 45~82% $SiO_2$ and relatively lower $Fe_2O_3$ and $P_2O_5$ content. This group shows a blue or greenish white color and was found to be suitable as the basic glaze for white glaze.

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It is a method for finding a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention from the machine learning and artificial intelligence fields because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, DT ensemble studies have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensemble studies have not shown performance as remarkable as that of DT ensembles. Recently, several works have reported that the performance of an ensemble can be degraded when multiple classifiers of the ensemble are highly correlated with one another, resulting in a multicollinearity problem that leads to performance degradation of the ensemble. They have also proposed differentiated learning strategies to cope with this performance degradation problem. Hansen and Salamon (1990) argued that a necessary and sufficient condition for the performance enhancement of an ensemble is that the ensemble contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable performance improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, and thus small changes in the training data can yield large changes in the generated classifiers. Therefore, an ensemble of unstable learning algorithms can guarantee some diversity among the classifiers. By contrast, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, and thus the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) presented a performance comparison in bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT. Meanwhile, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with variance inflation factor (VIF) analysis empirically proves that the performance degradation of the ensemble is due to the multicollinearity problem, and it proposes that optimization of the ensemble is needed to cope with such a problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of classifiers in the coverage optimization process. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier.
The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among the classifiers. We use Microsoft Excel and the GA software package called Evolver. Experiments on company failure prediction have shown that CO-NN is effective for the stable performance enhancement of NN ensembles through a choice of classifiers that considers the correlations within the ensemble. Classifiers with a potential multicollinearity problem are removed by the coverage optimization process of CO-NN, and thereby CO-NN has shown higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered in further research. Second, various learning strategies to deal with data noise should be introduced in future research.
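
The paper performs the GA search with Evolver on top of Excel; as a rough open-source sketch of the same coverage-optimization idea, the following code evolves binary chromosomes that select ensemble members, rewarding validation accuracy and penalizing selections whose member outputs show a high variance inflation factor. The simulated member outputs, the VIF threshold of 10, and the GA settings are all illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of GA-based coverage optimization of a classifier ensemble
# with a VIF (multicollinearity) constraint. Data and parameters are toy values.
import numpy as np

rng = np.random.default_rng(1)
n_members, n_val = 12, 300
y_val = rng.integers(0, 2, n_val)
# Simulated, deliberately correlated member probability outputs (stand-ins
# for trained NN classifiers evaluated on a validation set).
shared = rng.normal(size=(n_val, 1))
member_probs = np.clip(0.25 + 0.5 * y_val[:, None]
                       + 0.15 * shared + 0.1 * rng.normal(size=(n_val, n_members)), 0, 1)

def max_vif(probs):
    # VIF_j = 1 / (1 - R^2_j), regressing member j's outputs on the others'.
    vifs = []
    for j in range(probs.shape[1]):
        X = np.column_stack([np.ones(len(probs)), np.delete(probs, j, axis=1)])
        beta, *_ = np.linalg.lstsq(X, probs[:, j], rcond=None)
        r2 = 1 - ((probs[:, j] - X @ beta).var() / probs[:, j].var())
        vifs.append(1.0 / max(1.0 - r2, 1e-6))
    return max(vifs)

def fitness(chrom):
    idx = np.flatnonzero(chrom)
    if len(idx) < 2:
        return -1.0
    vote = (member_probs[:, idx].mean(axis=1) > 0.5).astype(int)
    acc = (vote == y_val).mean()
    # Penalize sub-ensembles whose members are highly collinear (VIF >= 10).
    return acc if max_vif(member_probs[:, idx]) < 10 else acc - 0.5

pop = rng.integers(0, 2, (30, n_members))
for _ in range(40):
    scores = np.array([fitness(c) for c in pop])
    elite = pop[np.argsort(scores)[-10:]]                       # keep the 10 best
    moms = elite[rng.integers(0, 10, 20)]
    dads = elite[rng.integers(0, 10, 20)]
    cut = rng.integers(1, n_members, 20)[:, None]
    kids = np.where(np.arange(n_members) < cut, moms, dads)     # one-point crossover
    kids ^= (rng.random(kids.shape) < 0.05).astype(kids.dtype)  # bit-flip mutation
    pop = np.vstack([elite, kids])

best = max(pop, key=fitness)
print("selected classifiers:", np.flatnonzero(best))
```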

Perfluoropolymer Membranes of Tetrafluoroethylene and 2,2,4-Trifluoro-5-Trifluoromethoxy-1,3-Dioxole

  • Arcella, V.; Colaianna, P.; Brinati, G.; Gordano, A.; Clarizia, G.; Tocci, E.; Drioli, E.
    • Proceedings of the Membrane Society of Korea Conference / 1999.07a / pp.39-42 / 1999
  • Perfluoropolymers offer the ultimate resistance to hostile chemical environments and high service temperatures, attributed to the presence of fluorine in the polymer backbone, i.e., to the high bond energy of the C-F and C-C bonds of fluorocarbons. Copolymers of Tetrafluoroethylene (TFE) and 2,2,4-Trifluoro-5-Trifluoromethoxy-1,3-Dioxole (TTD), commercially known as HYFLON AD, are amorphous perfluoropolymers with a glass transition temperature (Tg) higher than room temperature, showing a thermal decomposition temperature exceeding 400°C. These polymer systems are highly soluble in fluorinated solvents, with low solution viscosities. This property allows the preparation of self-supported and composite membranes with the desired membrane thickness. Symmetric and asymmetric perfluoropolymer membranes made with HYFLON AD have been prepared and evaluated. Porous and non-porous symmetric membranes have been obtained by solvent evaporation under various processing conditions. Asymmetric membranes have been prepared by the wet phase inversion method. Measurements of the contact angle to distilled water have been carried out; Figure 1 compares the experimental results with those of other commercial membranes. Contact angles of about 120° for our amorphous perfluoropolymer membranes demonstrate that they possess a highly hydrophobic character. Measurements of the contact angle to hexadecane have also been carried out to evaluate the organophobic character; results are reported in Figure 2. The observed strong organophobicity leads to excellent fouling resistance and inertness. Porous membranes with pore sizes between 30 and 80 nanometers have shown no permeation of water at pressures as high as 10 bar. However, high permeation of gases such as O2, N2, and CO2, and no selectivities, were observed. Considering the porous structure of the membrane, this behavior was expected. In view of the above properties, useful applications in the field of gas-liquid separations are envisaged for these membranes. A particularly promising application is in the field of membrane contactors, equipment in which membranes are used to improve mass transfer coefficients with respect to traditional extraction and absorption processes. Gas permeation properties have been evaluated for asymmetric membranes and composite symmetric ones. Experimental permselectivity values, obtained at different pressure differences, for various single gases are reported in Tables 1, 2, and 3. The experimental data have been compared with literature data obtained with membranes made from different amorphous perfluoropolymer systems, such as copolymers of Perfluoro-2,2-dimethyl dioxole (PDD) and Tetrafluoroethylene, commercialized by the Du Pont Company under the trade name Teflon AF. An interesting linear relationship between permeability and the glass transition temperature of the polymer constituting the membrane has been observed. The results are discussed in terms of polymer chain structure, which affects the presence of voids at the molecular scale and their size distribution. Molecular Dynamics studies are in progress in order to support the understanding of these results. A modified Theodorou-Suter method provided by the Amorphous Cell module of InsightII/Discover was used to determine the chain packing; a completely amorphous polymer box of about 3.5 nm was considered. Last but not least, the use of amorphous perfluoropolymer membranes appears to be ideal when separation processes have to be performed in hostile environments, i.e.,
high temperatures and aggressive non-aqueous media, such as chemicals and solvents. In these cases Hyflon AD membranes can exploit the outstanding resistance of perfluoropolymers.

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo; Park, Byeonghwa
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.1-13 / 2015
  • As opinion mining in big data applications has been highlighted, a lot of research on unstructured data has been conducted. Much social media on the Internet generates unstructured or semi-structured data every second, often written in the natural or human languages we use in daily life. Many words in human languages have multiple meanings or senses, and as a result it is very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, resulting in incorrect search results that are far from users' intentions. Even though a lot of progress has been made over the last years in enhancing the performance of search engines in order to provide users with appropriate results, there is still much room for improvement. Word sense disambiguation can play a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based approaches. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of examples in existing dictionaries, thereby avoiding expensive sense-tagging processes. It evaluates the effectiveness of the method, based on the Naïve Bayes model, one of the supervised learning algorithms, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary has approximately 57,000 sentences. The Sejong Corpus has about 790,000 sentences tagged with both part-of-speech and senses. For the experiment of this study, the Korean standard unabridged dictionary and the Sejong Corpus were tested both combined and as separate entities, using cross validation. Only nouns, the target subjects in word sense disambiguation, were selected. 93,522 word senses among 265,655 nouns and 56,914 sentences from related proverbs and examples were additionally combined in the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because the Sejong Corpus was tagged based on sense indices defined by the Korean standard unabridged dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named entity dictionary of the Korean morphological analyzer. Using the extended named entity dictionary, term vectors were extracted from the input sentences and term vectors for the sentences were created. Given the extracted term vector and the sense vector model made during the pre-processing stage, the sense-tagged terms were determined by vector space model based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from examples in the Korean standard unabridged dictionary and the Sejong Corpus: the experiment shows that better precision and recall are obtained with the merged corpus. This study suggests that the method can practically enhance the performance of Internet search engines and help us understand the meaning of a sentence more accurately in natural language processing pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm and uses Bayes' theorem. The Naïve Bayes classifier assumes that all senses are independent.
Even though this assumption is not realistic and ignores the correlation between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of all senses in a sentence. Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
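
A toy sketch of the supervised, example-based Naïve Bayes approach described above. The tiny English sense-tagged examples stand in for the merged Korean standard unabridged dictionary / Sejong Corpus, and whitespace tokenization stands in for the Korean morphological analyzer; everything here is an illustrative assumption rather than the authors' pipeline.

```python
# Naïve Bayes word sense disambiguation trained on sense-tagged example
# sentences (toy data standing in for dictionary examples and the Sejong Corpus).
from collections import Counter, defaultdict
import math

# (sense label, example sentence) pairs, e.g. two senses of "bank"
examples = [
    ("bank/finance", "deposit the money at the bank counter"),
    ("bank/finance", "the bank raised its interest rate"),
    ("bank/river",   "fishing from the grassy bank of the river"),
    ("bank/river",   "the river bank eroded after the flood"),
]

sense_counts = Counter(sense for sense, _ in examples)
word_counts = defaultdict(Counter)
vocab = set()
for sense, sent in examples:
    for w in sent.split():          # a real system would use a morphological analyzer
        word_counts[sense][w] += 1
        vocab.add(w)

def disambiguate(sentence):
    words = sentence.split()
    best, best_lp = None, -math.inf
    for sense in sense_counts:
        # log P(sense) + sum_w log P(w | sense), with Laplace smoothing
        lp = math.log(sense_counts[sense] / len(examples))
        total = sum(word_counts[sense].values())
        for w in words:
            lp += math.log((word_counts[sense][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

print(disambiguate("interest paid by the bank"))   # expected: bank/finance
```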

Dynamics of Technology Adoption in Markets Exhibiting Network Effects

  • Hur, Won-Chang
    • Asia Pacific Journal of Information Systems / v.20 no.1 / pp.127-140 / 2010
  • The benefit that a consumer derives from the use of a good often depends on the number of other consumers purchasing the same good or other compatible items. This property, known as network externality, is significant in many IT-related industries. Over the past few decades, network externalities have been recognized in the context of physical networks such as the telephone and railroad industries. Today, as many products are provided as systems consisting of compatible components, an appreciation of network externality is becoming increasingly important. Network externalities have been extensively studied among economists seeking to explain new phenomena resulting from rapid advancements in ICT (Information and Communication Technology). As a result of these efforts, a new body of theories for the 'New Economy' has been proposed. The theoretical bottom-line argument of such theories is that technologies subject to network effects exhibit multiple equilibria and will finally lock into a monopoly with one standard cornering the entire market. They emphasize that such "tippiness" is a typical characteristic of such networked markets, describing how multiple incompatible technologies rarely coexist and how the switch to a single, leading standard occurs suddenly. Moreover, it is argued that this standardization process is path dependent, and the ultimate outcome is unpredictable. With incomplete information about other actors' preferences, there can be excess inertia: consumers only moderately favor the change, and hence are themselves insufficiently motivated to start the bandwagon rolling, but would get on it once it did start to roll. This startup problem can prevent the adoption of any standard at all, even one preferred by everyone. Conversely, excess momentum is another possible outcome, for example, if a sponsoring firm uses low prices during the early periods of diffusion. The aim of this paper is to analyze the dynamics of the adoption process in markets exhibiting network effects by focusing on two factors: switching and agent heterogeneity. Switching is an important factor that should be considered in analyzing the adoption process. An agent's switching invokes switching by other adopters, which brings about a positive feedback process that can significantly complicate the adoption process. Agent heterogeneity also plays an important role in shaping the early development of the adoption process, which has a significant impact on the later development of the process. The effects of these two factors are analyzed by developing an agent-based simulation model. ABM is a computer-based simulation methodology that can offer many advantages over traditional analytical approaches. The model is designed such that agents have diverse preferences regarding technology and are allowed to switch their previous choice. The simulation results showed that the adoption processes in a market exhibiting network effects are significantly affected by the distribution of agents and the occurrence of switching. In particular, it is found that both weak heterogeneity and strong network effects cause agents to start switching early, which expedites the emergence of 'lock-in.' When network effects are strong, agents are easily affected by changes in early market shares. This causes agents to switch earlier and in turn speeds up the market's tipping. The same effect is found in the case of highly homogeneous agents.
When agents are highly homogeneous, the market starts to tip toward one technology rapidly, and its choice is not always consistent with the population's initial inclination. Increased volatility and faster lock-in increase the possibility that the market will reach an unexpected outcome. The primary contribution of this study is the elucidation of the role that parameters characterizing the market play in the development of the lock-in process, and the identification of the conditions under which such unexpected outcomes happen.
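
A compact agent-based sketch of the kind of model described above (illustrative only, not the paper's implementation): agents with heterogeneous intrinsic preferences repeatedly choose between two technologies, utility adds a network-effect term proportional to current market share, and agents may switch at every step. The parameter `sigma` controls heterogeneity and `k` the strength of network effects; both names and all values are assumptions.

```python
# Agent-based sketch: adoption dynamics with network effects, switching, and
# heterogeneous agents. Low sigma (homogeneity) and high k (strong network
# effects) tend to tip the market quickly toward one technology (lock-in).
import numpy as np

def simulate(n_agents=500, steps=50, sigma=0.5, k=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pref = rng.normal(0.0, sigma, n_agents)      # >0 favors A, <0 favors B
    choice = rng.integers(0, 2, n_agents)        # 1 = A, 0 = B (initial adoption)
    for _ in range(steps):
        share_a = choice.mean()
        util_a = pref + k * share_a              # intrinsic preference + network term
        util_b = -pref + k * (1 - share_a)
        choice = (util_a > util_b).astype(int)   # agents may switch every step
    return choice.mean()

for sigma in (0.1, 1.0):
    for k in (0.2, 2.0):
        print(f"sigma={sigma}, k={k} -> final share of A: {simulate(sigma=sigma, k=k):.2f}")
```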

Descartes' proofs for the existence of God (데카르트 신 존재증명의 의의)

  • Kim, Wan-jong
    • Journal of Korean Philosophical Society / v.141 / pp.1-42 / 2017
  • This paper's purpose is to grasp how Descartes demonstrates the proofs of God's existence on the basis of his works, especially the Meditations. To consider these points, I explore the first, second, and third proofs present in his works, and the content related to God. Descartes argues that there is an idea of God within me, and that its cause can only be God; this is the first proof. On this basis, Descartes shows that only God can be the cause of the thinking self who has the idea of God (second proof); both are called cosmological arguments. To establish this, he first states that representative reality, which is different from formal reality, forms a kind of hierarchy, and that the degree of this reality applies equally to cause and effect, and consequently to the cause of my idea or existence (God). In Meditation V, in the third proof, which is called the ontological argument, Descartes examines how a supremely perfect God cannot be separated from God's existence (perfection), just as surely as the certainty of any shape or number, for example a triangle; namely, it is quite evident that God's essence includes his existence. Through these processes I examine the following points: not only is the manner of Descartes' proofs of God's existence itself exposed, but the existence of God, who guarantees the never-doubted cogito ergo sum even while all external things are doubted, is also postulated; the proofs for the existence of God are the ultimate source ensuring the clear and distinct perception of human reason; and Descartes uses reason suited to non-Christians instead of faith suited to Christians, which shows similarities with the traditional views on the one hand, while on the other hand there are some discontinuities in establishing the authority or power of the first philosophical principle to which God is subjected.

A Study on Enhancing Personalization Recommendation Service Performance with CNN-based Review Helpfulness Score Prediction (CNN 기반 리뷰 유용성 점수 예측을 통한 개인화 추천 서비스 성능 향상에 관한 연구)

  • Li, Qinglong; Lee, Byunghyun; Li, Xinzhe; Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.29-56 / 2021
  • Recently, with the rapid growth of the e-commerce market, various types of products have been launched. As a result, many users face information overload problems, which makes the purchasing decision-making process time-consuming. Therefore, the importance of a personalized recommendation service that can provide customized products and services to users is emerging. For example, global companies such as Netflix, Amazon, and Google have introduced personalized recommendation services to support users' purchasing decisions. Accordingly, users' information search costs can be reduced, which can positively affect companies' sales. Existing personalized recommendation research applying the Collaborative Filtering (CF) technique predicts user preference mainly using quantified information. However, recommendation performance may decrease when only quantitative information is used. To address the problems of such existing studies, many studies have used reviews to enhance recommendation performance. However, reviews contain factors that hinder purchasing decisions, such as advertising content, false comments, and meaningless or irrelevant content. Providing a recommendation service based on reviews that include these factors can decrease recommendation performance. Therefore, we proposed a novel recommendation methodology based on CNN review usefulness score prediction to address these problems. The results show that the proposed methodology has better prediction performance than the recommendation method that considers all existing preference ratings. In addition, the results suggest that the performance of traditional CF can be enhanced when information on review usefulness is reflected in the personalized recommendation service.
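
As a rough sketch of the general idea, the following Keras code builds a small 1D CNN that maps a review's token sequence to a helpfulness score in [0, 1]; such predicted scores could then be used to down-weight unhelpful reviews before their ratings enter a collaborative-filtering step. The architecture, vocabulary size, and random toy data are illustrative assumptions, not the authors' exact model or dataset.

```python
# Sketch: a small 1D CNN predicting review helpfulness from token sequences.
# Architecture and toy data are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, Model

vocab_size, max_len = 5000, 100

inputs = layers.Input(shape=(max_len,), dtype="int32")
x = layers.Embedding(vocab_size, 64)(inputs)
x = layers.Conv1D(128, kernel_size=3, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dense(64, activation="relu")(x)
helpfulness = layers.Dense(1, activation="sigmoid")(x)   # score in [0, 1]

model = Model(inputs, helpfulness)
model.compile(optimizer="adam", loss="mse")

# Toy data: random token sequences and helpfulness labels.
X = np.random.randint(1, vocab_size, size=(256, max_len))
y = np.random.rand(256)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)

# Predicted helpfulness could then down-weight unhelpful reviews' ratings
# before they enter a CF similarity computation.
weights = model.predict(X[:5], verbose=0).ravel()
print("example helpfulness weights:", np.round(weights, 2))
```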