
Love and Justice, Are They Compatible? - Focusing on the Theory of Paul Ricœur (사랑과 정의, 양립 가능한가 - 폴 리쾨르 이론을 중심으로 -)

  • Lee, Kyung-lae
    • Cross-Cultural Studies
    • /
    • v.52
    • /
    • pp.53-78
    • /
    • 2018
  • In the moral culture of the West, love and justice are two commands with roots in ancient times. One is the heritage of Hebraism, and the other belongs to the tradition of Hebraism and Hellenism. The two concepts are among the most important virtues required for preserving stability in society. Are these two commands compatible, or do they stand in an exclusive relationship to each other? To ultimately seek their reconciliation, a precise conceptual analysis and understanding of each must come first, given the multi-layered implications of the two concepts. To this end, we first started from the lexical meaning and conducted a conceptual analysis of what these two concepts express. We then examined Paul Ricœur's interpretation of the discourse of love and justice. Finally, we looked at how these two concepts are narrated in literature. Through the literary works of Stendhal, Albert Camus, and Dostoevsky, we saw examples of literary configurations in which they are embodied in life. Through this conceptual analysis, discourse analysis, and narrative analysis of the two concepts, the following conclusions were drawn. Love and justice are not a matter of choice. Through Stendhal's and Albert Camus' novels and their actual debate, we could see the coldness and unreality of a society lacking love, or one troubled by impure love. In addition, in impure paternalism, the risk that the power of love blocks a certain touch of justice was also confirmed. It is therefore necessary, for a healthy future society, to explore the possibility of the coexistence of love and justice. We confirmed the possibility of compatibility in a 'considerate balance' in which 'moral judgment in the situation' is required, as Paul Ricœur expressed it. This ideal situation may be realized when forms of love involving solidarity, mutual care, and compassion with pain, as in Dostoevsky, are combined with the principle of distributive justice. When Albert Camus pursued justice, eventually faced reality, and spoke of the need for mercy, he was making just such a moral judgment in the situation. In the end, love protects justice, and justice contributes to the realization of love. Justice brings super-ethical love down into moral categories, and love enables justice to exert its full force.

Research Trends of Health Recommender Systems (HRS): Applying Citation Network Analysis and GraphSAGE (건강추천시스템(HRS) 연구 동향: 인용네트워크 분석과 GraphSAGE를 활용하여)

  • Haryeom Jang;Jeesoo You;Sung-Byung Yang
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.57-84
    • /
    • 2023
  • With the development of information and communications technology (ICT) and big data technology, anyone can easily obtain and utilize vast amounts of data through the Internet. Therefore, the capability of selecting high-quality data from a large amount of information is becoming more important than the capability of merely collecting it. This trend continues in academia; literature reviews, both systematic and non-systematic, have been conducted in various research fields to construct a healthy knowledge structure by selecting high-quality research from accumulated research materials. Meanwhile, after the COVID-19 pandemic, remote healthcare services, on which social consensus had not yet been reached, have been allowed to a limited extent, and new healthcare services such as health recommender systems (HRS) equipped with artificial intelligence (AI) and big data technologies are in the spotlight. Although, in practice, HRS are considered one of the most important technologies for leading the future healthcare industry, literature reviews on HRS are relatively rare compared to other fields. In addition, although HRS are a field of convergence with a strong interdisciplinary nature, prior literature review studies have mainly applied either systematic or non-systematic review methods; hence, there are limitations in analyzing interactions or dynamic relationships with other research fields. Therefore, in this study, the overall network structure of HRS and surrounding research fields was identified using citation network analysis (CNA). Additionally, in this process, in order to address the problem that the latest papers are underestimated in their citation relationships, the GraphSAGE algorithm was applied. As a result, this study identified 'recommender system', 'wireless & IoT', 'computer vision', and 'text mining' as increasingly important research fields related to HRS research, and confirmed that 'personalization' and 'privacy' are emerging issues in HRS research. The study findings provide both academic and practical insights into identifying the structure of the HRS research community, examining related research trends, and designing future HRS research directions.
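The abstract above relies on GraphSAGE to embed papers in the citation network. The paper does not publish code, so the following is only a minimal sketch of the core idea, a single mean-aggregator layer applied to a toy citation graph; the feature dimensions, random weights, and graph structure are illustrative assumptions.

```python
import numpy as np

def graphsage_mean_layer(features, adjacency, W_self, W_neigh):
    """One GraphSAGE layer with a mean aggregator (illustrative sketch).

    features  : (n_nodes, d_in) node feature matrix
    adjacency : dict mapping node index -> list of neighbor indices
    W_self    : (d_in, d_out) weight for the node's own features
    W_neigh   : (d_in, d_out) weight for the aggregated neighbor features
    """
    n, d_in = features.shape
    d_out = W_self.shape[1]
    out = np.zeros((n, d_out))
    for v in range(n):
        neigh = adjacency.get(v, [])
        # Mean-aggregate neighbor features (zero vector if no neighbors).
        agg = features[neigh].mean(axis=0) if neigh else np.zeros(d_in)
        h = features[v] @ W_self + agg @ W_neigh
        out[v] = np.maximum(h, 0.0)          # ReLU
    # L2-normalize each embedding, as in the original GraphSAGE paper.
    norms = np.linalg.norm(out, axis=1, keepdims=True) + 1e-12
    return out / norms

# Toy citation graph: paper 2 cites papers 0 and 1, paper 3 cites paper 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # e.g. text features of abstracts (assumed)
adj = {2: [0, 1], 3: [2], 0: [], 1: []}
W1 = rng.normal(size=(8, 16)) * 0.1
W2 = rng.normal(size=(8, 16)) * 0.1
embeddings = graphsage_mean_layer(X, adj, W1, W2)
print(embeddings.shape)                      # (4, 16)
```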

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that machines would not be able to defeat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. The deep learning technique is already being applied to many problems. It shows especially good performance in the image recognition field, and also in high-dimensional data areas such as voice, image, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we tried to find out whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for the binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare the performance of the deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They contain input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer intends to open an account or not. In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used algorithms and techniques in deep learning, with that of MLP models, a traditional artificial neural network model. However, since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons in each hidden layer, the number of output channels (filters), and the application conditions of the dropout technique. The F1 score was used to evaluate the performance of the models, to show how well they classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but the distance between business data fields does not matter much because each field is usually independent. In this experiment, we set the filter size of the CNN to the number of fields in order to learn the characteristics of the whole record at once, and added a hidden layer to make decisions based on the additional features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of the position of each field. In the case of the dropout technique, we set neurons to be dropped with a probability of 0.5 for each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using the dropout technique, and the next best model was the MLP model with two hidden layers using the dropout technique. We obtained several findings as the experiment proceeded. First, models using dropout make slightly more conservative predictions than those without dropout and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because the CNN has shown good performance in binary classification problems, to which it has rarely been applied, as well as in fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because the training time is too long compared to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
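As a rough illustration of the kind of model compared in the abstract above, the sketch below builds an MLP with two hidden layers and 0.5 dropout for a binary response task and evaluates it with the F1 score. It is a hedged sketch on synthetic data, not the authors' code; the layer sizes, epochs, and feature count are assumptions.

```python
# Minimal MLP-with-dropout sketch for a binary classification task, scored by F1.
import numpy as np
from sklearn.metrics import f1_score
from tensorflow import keras
from tensorflow.keras import layers

n_features = 20                                      # assumed number of input fields
X_train = np.random.rand(1000, n_features)
y_train = (np.random.rand(1000) > 0.8).astype(int)   # imbalanced binary target (synthetic)
X_test = np.random.rand(300, n_features)
y_test = (np.random.rand(300) > 0.8).astype(int)

model = keras.Sequential([
    keras.Input(shape=(n_features,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),                             # each hidden neuron dropped with p=0.5
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),           # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)

# F1 focuses on how well the class of interest is classified, not overall accuracy.
y_pred = (model.predict(X_test) > 0.5).astype(int).ravel()
print("F1 score:", f1_score(y_test, y_pred))
```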

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. To solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available in the future, effectively and efficiently ranking search results will become even more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page has been estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking has played an essential role in the World Wide Web (WWW), and nowadays many people recognize its effectiveness and efficiency. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, which makes a ranking algorithm for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links, similar to the Web graph, so link-structure-based ranking seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, which has only one recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, in contrast, encompasses various kinds of classes and properties, and consequently ranking methods used in the WWW should be modified to reflect the complexity of its information space. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's algorithm, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several kinds of Semantic Web systems to validate their technique and showed experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes should be high, or overall resources should be described in detail to a certain degree, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important but densely connected obtain higher scores than ones that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm which can solve the problems found in the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by the previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world, and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on other limitations posed by the previous research. In addition, we propose two ways to incorporate data-type properties, which have not been employed previously even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research; this analysis enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
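For readers unfamiliar with the link-analysis baseline this paper modifies, the sketch below runs a weighted PageRank-style power iteration over a tiny RDF-like graph. It only illustrates importance propagation with per-property weights; the graph, weights, and damping factor are assumptions, and it is not the authors' class-oriented algorithm.

```python
# Weighted PageRank-style power iteration over a small RDF-like graph (illustrative only).
import numpy as np

nodes = ["paperA", "paperB", "authorX", "topicY"]
# (subject, object, property_weight) triples; weights reflect how strongly
# a property is assumed to transfer importance (assumed values).
triples = [
    ("paperA", "authorX", 0.5),   # hasAuthor
    ("paperB", "authorX", 0.5),
    ("paperB", "paperA", 1.0),    # cites
    ("paperA", "topicY", 0.3),    # hasTopic
]

idx = {name: i for i, name in enumerate(nodes)}
n = len(nodes)
M = np.zeros((n, n))
for s, o, w in triples:
    M[idx[o], idx[s]] += w        # importance flows from subject to object

# Column-normalize; nodes with no outgoing triples distribute importance uniformly.
col_sums = M.sum(axis=0)
for j in range(n):
    M[:, j] = M[:, j] / col_sums[j] if col_sums[j] > 0 else 1.0 / n

d = 0.85                          # damping factor (assumed)
r = np.full(n, 1.0 / n)
for _ in range(50):               # power iteration
    r = (1 - d) / n + d * M @ r

print(dict(zip(nodes, np.round(r, 3))))
```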

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows, consisting of 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used the default event as the basis for learning about default risk, this study calculated default risk using the market capitalization and stock price volatility of each company based on the Merton model. Through this, it was able to solve the problem of data imbalance due to the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risks of unlisted companies without stock price information can be appropriately derived. Through this, the approach can provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although corporate default risk prediction using machine learning has recently been actively studied, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of the adequacy of evaluation methods, in consideration of past statistical data and experiences on credit ratings and changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that combine various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model and the forecasts of each individual model, pairs between the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed normality, we used the nonparametric Wilcoxon rank-sum test to check whether the two model forecasts constituting each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP model and the CNN model. In addition, this study can provide a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction methodologies, given that traditional credit rating models can also be incorporated as sub-models in calculating the final default probability. Also, the stacking ensemble techniques proposed in this study can help design models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
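The stacking idea described above can be illustrated with a short sketch: base learners produce out-of-fold forecasts that a meta-learner combines. The code below is an assumption-laden stand-in, not the authors' pipeline; it uses synthetic binary labels rather than the Merton-based risk measure, and the model choices, seven-fold split, and feature counts are illustrative.

```python
# Stacking ensemble sketch: Random Forest and MLP base learners, logistic-regression meta-learner.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the 160-column financial dataset described in the paper.
X, y = make_classification(n_samples=2000, n_features=40, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=7,   # seven-fold out-of-fold predictions feed the meta-learner, mirroring the paper's split
)
stack.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```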

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing contents is becoming more important as content generation continues. In the flood of information, attempts are being made to better reflect the user's intention in search results, rather than treating the information request as a simple string. Also, large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, which provide users with satisfaction and convenience. Finance, in particular, is one of the fields expected to show the usefulness and potential of text data analysis, because it constantly generates new information, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, it becomes more difficult for people to produce labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, problem definition for automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. Therefore, in order to overcome the limits described above and improve the semantic performance of stock-related information searching, this study attempts to extract knowledge entities using a neural tensor network and to evaluate the performance of the approach. Unlike other studies, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has three points of significance. First, it presents a practical and simple automatic knowledge extraction method that can be applied in practice. Second, the possibility of performance evaluation is presented through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, accounting for about 55% of the total, are designated as the training set, and the other 45% of reports as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. After that, using the neural tensor network, as many score functions as there are stocks are trained. Thus, when a new entity from the testing set appears, we can calculate its score with every score function, and the stock whose function gives the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its prediction power and determine whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on conducting the research. Looking at the prediction performance for each stock, only 3 stocks, LG ELECTRONICS, KiaMtr, and Mando, show far lower performance than average. This result may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find key entities, or combinations of them, that are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without learning a corpus or word vectors for the field. From the empirical test, we confirm the effectiveness of the presented model as described above. However, some limits remain to be complemented. Representatively, the phenomenon that model performance is especially poor for only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used for the purpose of semantically matching new text information with related stocks.
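The score functions mentioned above follow the neural tensor network formulation. The sketch below shows that scoring function in isolation with random parameters; the dimensions, the one-hot vectors, and the use of a per-stock representation are illustrative assumptions, not the authors' trained model.

```python
# Neural tensor network scoring function (Socher et al. style), shown with random parameters.
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Score a pair of entity vectors with one neural tensor network relation.

    e1, e2 : (d,) entity vectors (one-hot in the paper's setup)
    W      : (k, d, d) relation-specific tensor (k bilinear slices)
    V      : (k, 2d) standard feed-forward weights
    b      : (k,) bias
    u      : (k,) output projection
    """
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(W.shape[0])])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

rng = np.random.default_rng(0)
d, k = 100, 4                      # 100 top entities per stock (as in the paper), 4 slices (assumed)
entity = np.eye(d)[7]              # one-hot vector for an entity found in a report
stock_repr = np.eye(d)[0]          # representation tied to a particular stock (assumed)
params = (rng.normal(scale=0.1, size=(k, d, d)),
          rng.normal(scale=0.1, size=(k, 2 * d)),
          np.zeros(k),
          rng.normal(size=k))

# The stock whose score function returns the highest value would be predicted
# as the item related to the new entity.
print(ntn_score(entity, stock_repr, *params))
```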

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communications technology companies have made their own AI technologies public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, the relationship with the developer community and the artificial intelligence (AI) ecosystem can be strengthened, and users can experiment with, implement, and improve it. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been made, there is a lack of studies that help industry develop or use deep learning open source software. This study thus attempts to derive a strategy for adopting such frameworks through case studies of deep learning open source framework adoption. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two cases of success and one case of failure) and revealed that seven out of eight TOE factors, as well as several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we provided five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. In order for an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures at the stage of using the deep learning framework, companies will increase the number of deep learning research developers, the ability to use the deep learning framework, and the support of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the developers' research efficiency and effectiveness, for example by optimizing the hardware (GPU) environment automatically. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars. To implement the five identified success factors, a step-by-step enterprise procedure for adoption of the deep learning framework was proposed: defining the project problem, confirming whether deep learning is the right methodology, confirming whether the deep learning framework is the right tool, using the deep learning framework within the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether deep learning is the right methodology, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. After these three pre-consideration steps are clear, the next two steps (using the deep learning framework within the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.147-168
    • /
    • 2017
  • There have been many studies on accurate stock market forecasting in academia for a long time, and there are now various forecasting models using various techniques. Recently, many attempts have been made to predict the stock index using machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more useful for short-term transaction prediction and for applying statistical and mathematical techniques. Most studies using these technical indicators have modeled stock price prediction as a binary classification, rising or falling, of stock market fluctuations in the future (usually the next trading day). However, it is also true that this binary classification has many unfavorable aspects for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we try to predict the stock index by expanding the existing binary scheme into a multi-class system of stock index trends (upward trend, boxed, downward trend). Techniques such as Multinomial Logistic Regression Analysis (MLOGIT), Multiple Discriminant Analysis (MDA), or Artificial Neural Networks (ANN) can be used to solve this multi-classification problem; instead, we build on Multi-class Support Vector Machines (MSVM), which have proved superior in prediction performance, and propose an optimization model that uses a Genetic Algorithm as a wrapper to improve their performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and of training instances (instance selection). To verify the performance of the proposed model, we applied the proposed method to real data. The results show that the proposed method is more effective than the conventional multi-class SVM, which has been known to show the best prediction performance so far, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, it was confirmed that instance selection plays a very important role in predicting the stock index trend, and that its improvement effect on the model is greater than that of the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's real KOSPI200 stock index. Our research is primarily aimed at predicting trend segments to capture signal acquisition or short-term trend transition points. The experimental data set includes technical indicators such as price and volatility indices (2004-2017) of the KOSPI200 stock index in Korea and macroeconomic data (interest rate, exchange rate, S&P 500, etc.). Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, trend classification, was classified into three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). 70% of the total data for each class was used for training and the remaining 30% for verification. To verify the performance of the proposed model, several comparative model experiments, including MDA, MLOGIT, CBR, ANN, and MSVM, were conducted. The MSVM adopts the One-Against-One (OAO) approach, which is known as the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed model, GA-MSVM, performs at a significantly higher level than all comparative models.
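To make the wrapper idea concrete, the sketch below runs a tiny genetic algorithm that jointly selects features and training instances for a multi-class SVM (scikit-learn's SVC internally uses the one-against-one scheme adopted in the paper). It is not the authors' GA-MSVM implementation; the population size, generations, mutation rate, and synthetic three-class data are illustrative assumptions, and the kernel parameters could also be encoded in the chromosome as the paper does.

```python
# Genetic-algorithm wrapper sketch: joint feature and instance selection for a multi-class SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
rng = np.random.default_rng(0)
n_feat, n_inst = X_tr.shape[1], X_tr.shape[0]

def fitness(chrom):
    """Chromosome = feature mask (first n_feat bits) + instance mask (remaining bits)."""
    f_mask, i_mask = chrom[:n_feat].astype(bool), chrom[n_feat:].astype(bool)
    if f_mask.sum() == 0 or i_mask.sum() < 30 or len(np.unique(y_tr[i_mask])) < 3:
        return 0.0
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # kernel params could also be GA-encoded
    clf.fit(X_tr[i_mask][:, f_mask], y_tr[i_mask])
    return clf.score(X_val[:, f_mask], y_val)       # validation accuracy as fitness

pop = rng.integers(0, 2, size=(20, n_feat + n_inst))
for gen in range(10):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the best half
    # Single-point crossover between random parent pairs, then bit-flip mutation.
    pairs = rng.integers(0, 10, size=(10, 2))
    points = rng.integers(1, n_feat + n_inst, size=10)
    children = np.array([np.concatenate([parents[a][:p], parents[b][p:]])
                         for (a, b), p in zip(pairs, points)])
    flip = rng.random(children.shape) < 0.05
    children[flip] ^= 1
    pop = np.vstack([parents, children])

print("best validation accuracy:", max(fitness(c) for c in pop))
```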

A Study on the Nutritive Value and Utilization of Powdered Seaweeds (해조의 식용분말화에 관한 연구)

  • Yu, Jong-Yull;Lee, Ki-Yull;Kim, Sook-Hee
    • Journal of Nutrition and Health
    • /
    • v.8 no.1
    • /
    • pp.15-37
    • /
    • 1975
  • I. Subject of the study A study on the nutritive value and utilization of powdered seaweeds. II. Purpose and importance of the study A. In Korea, a food shortage will be inevitable due to the rapidly growing population. It will be a very important study to develop a new food from seaweeds that have not hitherto been used for human consumption. B. Several kinds of seaweeds have been used in Korea mainly as side dishes. However, a properly powdered seaweed can serve as a good supplement or admixture to certain cereal flours. C. By adding the powdered seaweed to cereals, which have long been staple foods in this country, two benefits will be secured: saving of cereals and a change of dietary pattern. III. Objects and scope of the study A. Objects of the study The objects come under four items. 1. To develop a powdered seaweed as a new food from seaweeds which have not been used for human consumption. 2. To evaluate the nutritional quality of the products, chemical composition analysis and an animal feeding experiment will be conducted. 3. Experimental cooking and an acceptability test will be conducted for the powdered products to evaluate their value as foodstuff. 4. A sanitary test and an economic analysis will also be conducted for the powdered products. B. Scope of the study 1. Production of seaweed powders Sargassum fulvellum growing on the eastern coast and Sargassum patens C.A. on the southern coast were used as the materials for the powders. These algae, which have not been used for human consumption, were powdered through processes of washing, drying, pulverization, etc. 2. Nutritional experiments a. Chemical composition Proximate components (water, protein, fat, cellulose, sugar, ash, salt), minerals (calcium, phosphorus, iron, iodine), vitamins (A, B1, B2, niacin, C), and amino acids were analyzed for the seaweed powders. b. Animal feeding experiment 160 weanling rats (80 male and 80 female) were used as experimental animals, divided into 16 groups of 10 rats each. Each group was fed for 12 weeks on a cereal diet (wheat flour, rice powder, barley powder, potato powder, corn flour) supplemented with 5%, 10%, 15%, 20%, or 30% of the seaweed powder. After feeding, growth, feed efficiency ratio, protein efficiency ratio, and organ weights were checked, and urine, feces, and serum analyses were also conducted. 3. Experimental cooking and acceptability test a. Several basic studies were conducted to find the characteristics of the seaweed powder. b. 17 kinds of Korean dishes and 9 kinds of foreign dishes were prepared with cereal flours (wheat, rice, barley, potato, corn) at supplementary levels of 5%, 10%, 15%, 20%, and 30% of the seaweed powder. c. The acceptability test for the dishes was conducted according to Plank's form. 4. Sanitary test The heavy metals (Cd, Pb, As, Hg) in the seaweed powders were determined. 5. Economic analysis The retail price of the seaweed powder was compared with those of other cereals in the market. An economic analysis was also made from the nutritional point of view, calculating the body weight gained in grams per unit price of each feeding diet. IV. Results of the study and suggestions for application A. Chemical composition 1. There is no big difference in proximate components between the powders of Sargassum fulvellum from the eastern coast and Sargassum patens C.A. from the southern coast. Seasonal differences are also not significant. Higher levels of protein, cellulose, ash, and salt were found in the powders compared with common cereal foods. 2. The levels of calcium (Ca) and iron (Fe) in the powders were significantly higher than in common cereal foods, and the powders are also rich in iodine (I). The presence of vitamin A and vitamin C in the powders distinguishes them from cereal foods. Vitamins B1 and B2 are also relatively rich in the powders. Vitamin A in Sargassum fulvellum is high, and the levels of some minerals and vitamins seem to be somewhat influenced by season. 3. In the amino acid composition, methionine, isoleucine, lysine, and valine are the limiting amino acids. The protein qualities of Sargassum fulvellum and Sargassum patens C.A. seem to be almost the same and generally good. A seasonal difference in amino acid composition was found. B. Animal feeding experiment 1. The best growth was found at the 10% supplemental level of the seaweed powder, and a lower growth rate was shown at the 30% level. 2. The 15% supplemental level of the seaweed powder seems to fulfil, to some extent, the mineral requirement of the animals. 3. No changes were found in organ development except that kidney weight decreased as the supplemental level of the seaweed powder increased. 4. There are no significant changes in nitrogen retention, serum cholesterol, serum calcium, or urinary calcium at any supplemental level of the seaweed powder. 5. From the animal feeding experiment it was concluded that supplementation with the seaweed powder at levels of 5%~15% is possible. C. Experimental cooking and acceptability test 1. The seaweed powder was utilized more successfully in foreign dishes than in Korean dishes; higher supplemental levels of seaweed were possible in foreign dishes. 2. Hae-Jo-Kang and Jeon-Byung were better than Song-Pyun, wheat cake, Soo-Je-Bee, and wheat noodle. Hae-Jo-Kang was excellent in quality even at a supplemental level as high as 5%. 3. The higher the level of supplementation used, the stickier the cooked products obtained. Song-Pyun and wheat cake were palatable and lustrous at the 2% supplementation level. 4. For drop cookies, the higher the level of supplementation, the crisper the product obtained, compared with other cookies. 5. Corn cake, thin rice gruel, rice gruel, and potato Jeon-Byung were better in quality than potato Man-Doo and potato noodle. Corn cake, thin rice gruel, and rice gruel were excellent even at a supplementation level as high as 5%. 6. In several cooked products some seaweed odor was perceived at supplementation levels of 3% or more. This may be much diminished by the use of proper condiments. D. Sanitary test There appears to be no heavy metal (Cd, Pb, As, Hg) problem in these seaweed powders when they are used as supplements to cereal flours. E. Economic analysis The price of the seaweed powder is lower than that of other cereals and may become lower still when the seaweed powder is mass-produced in the future. Supplementing cereals with the seaweed powder is also economical by the criterion of animal growth rate. F. It is recommended that these seaweed powders be developed and used as a supplement to cereal flours or as another food material. By doing so, both the saving of cereals and the improvement of individual nutrition will be greatly advanced. It is also recommended that a feeding experiment with humans be conducted in the future.
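The feeding experiment above reports growth in terms of feed efficiency ratio (FER) and protein efficiency ratio (PER). As a small illustration only, the sketch below computes both under their common definitions (weight gain per gram of feed and per gram of protein consumed); the numbers are invented for illustration and are not results from the study.

```python
# Illustrative FER/PER calculation with assumed, non-study numbers.
weight_gain_g = 95.0        # body weight gained over the 12-week period (assumed)
feed_intake_g = 480.0       # total diet consumed (assumed)
protein_intake_g = 82.0     # protein contained in that diet (assumed)

fer = weight_gain_g / feed_intake_g      # feed efficiency ratio (gain per g of feed)
per = weight_gain_g / protein_intake_g   # protein efficiency ratio (gain per g of protein)
print(f"FER = {fer:.2f}, PER = {per:.2f}")
```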


Land-Cover Change Detection of Western DMZ and Vicinity using Spectral Mixture Analysis of Landsat Imagery (선형분광혼합화소분석을 이용한 서부지역 DMZ의 토지피복 변화 탐지)

  • Kim, Sang-Wook
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.9 no.1
    • /
    • pp.158-167
    • /
    • 2006
  • The object of this study is to detect land-cover change in the western DMZ and its vicinity. This was performed as a basic study toward constructing a decision support system for the conservation or sustainable development of the DMZ and its vicinity in the near future. The DMZ is 4 km wide and 250 km long; it is one of the most highly fortified boundaries in the world and also a unique thin green line. Environmentalists want to declare the DMZ a natural reserve and biodiversity zone, but nowadays, with the strengthening of inter-Korean economic cooperation, some developers are trying to construct a new town or an industrial complex inside the DMZ. This study investigates the current environmental conditions, especially deforestation, of the western DMZ by adopting remote sensing and GIS techniques. Land covers were identified through linear spectral mixture analysis (LSMA), which was used to handle the spectral mixture problem of the relatively low spatial resolution of Landsat TM and ETM+ imagery. To analyze the quantitative and spatial change of vegetation cover in the western DMZ, a GIS overlay method was used. In the LSMA, to develop high-quality fraction images, three endmembers, green vegetation (GV), soil, and water, were derived from pure features in the imagery. Over 15 years, from 1987 to 2002, the forest of the western DMZ and its vicinity was devastated and changed to urban area, farmland, or barren land. The northern part of the western DMZ and its vicinity was more deforested than the southern part (52.37 km² of North Korean forest and 39.04 km² of South Korean forest were changed to other land covers). In the North Korean part, forest changed to barren land and farmland, and in the South Korean part, forest changed to farmland and urban area. In particular, in the North Korean part of the DMZ and its vicinity, 56.15 km² of farmland changed to barren land over the 15 years, which shows the failure of the 'Darakbat' (terrace field) project, one of the food production projects in North Korea.
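To illustrate the unmixing step behind the fraction images, the sketch below solves one Landsat pixel for green-vegetation, soil, and water fractions by least squares with a sum-to-one constraint. The endmember spectra and pixel values are invented for illustration and are not measurements from the study; a full LSMA would also typically enforce non-negative fractions.

```python
# Linear spectral unmixing sketch: one mixed pixel, three endmembers, sum-to-one constraint.
import numpy as np

# Columns = assumed endmember reflectance spectra over 6 Landsat TM/ETM+ bands.
endmembers = np.array([
    # GV     soil   water
    [0.03,  0.10,  0.06],
    [0.05,  0.14,  0.05],
    [0.04,  0.18,  0.04],
    [0.45,  0.25,  0.03],
    [0.22,  0.30,  0.02],
    [0.10,  0.28,  0.01],
])
pixel = np.array([0.06, 0.08, 0.09, 0.33, 0.25, 0.17])   # observed mixed pixel (assumed)

# Append the sum-to-one constraint as an extra, heavily weighted equation.
A = np.vstack([endmembers, 1e3 * np.ones((1, 3))])
b = np.append(pixel, 1e3)
fractions, *_ = np.linalg.lstsq(A, b, rcond=None)

print(dict(zip(["green_vegetation", "soil", "water"], np.round(fractions, 3))))
# Fraction images like these, computed per pixel and per date, are what the
# study overlays in GIS to quantify deforestation between 1987 and 2002.
```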
