Territorial Expansion of King Võ (Võ Vương, 1738-1765) in the Mekong Delta: Variation of Tám Thực Chi Kế (strategy of silkworm nibbling) and Dĩ Man Công Man (to strike barbarians by barbarians) on the Way to Building a New World Order (무왕(武王, 1738-1765) 시기 메콩 델타에서의 영토 확장 추이: 제국으로 가는 길, '잠식지계(蠶食之計)'와 '이만공만(以蠻攻蠻)'의 변주)

  • CHOI, Byung Wook
    • The Southeast Asian review
    • /
    • v.27 no.2
    • /
    • pp.37-76
    • /
    • 2017
  • Nguyễn Cư Trinh has two faces in the history of Vietnam's territorial expansion into the Mekong delta. One is his heroic contribution to the Nguyễn family's gaining control over a large part of the Mekong delta. The other is his role in fixing the eyes of readers of Vietnamese history only on the present territory of Vietnam. To these readers, Nguyễn Cư Trinh's achievement of territorial expansion was the final stage of Vietnam's nam tiến. In fact, however, his achievement was partial. This study pays attention to King Võ instead of Nguyễn Cư Trinh in the history of territorial expansion in the Mekong delta. The king's goal was more ambitious, and the ambition was propelled by his dream to build a new world, and its order, in which his new capital, Phú Xuân, was to be the center and his status that of an emperor. To support this assertion, three elements are examined in this article. The first is the nature of Võ Vương's new kingship. The second is the preparation for and background of the military operation in the Mekong Delta. The nature of the new territory is the third element of the discussion. In 1744, six years after his ascending to the throne, Võ Vương declared himself a king. The author points to this event as marking the departure of the southern kingdom from the traditional dynasties based on the Red River delta. Besides, the government system, northern customs, and ways of dressing were abandoned, and new southern modes were adopted. Võ Vương had tributary kingdoms enough, such as Cambodia, Champa, Thủy Xã, Hoả Xã, Vạn Tượng, and Nam Chưởng. Compared with the Lê empire, the number of tributary kingdoms was higher, equivalent to that of the Đại Nam empire of the 19th century. In reality, the author claims, King Võ's real intention was to become an emperor. Though he failed in using the title of emperor, he distinguished himself by styling himself the Heaven King, Thiên Vương. The Cambodian king's attack on thousands of ethnic Cham in Cambodian territory was reason enough for King Võ's military intervention. He considered these Cham men and women his amicable subjects, and he saw them as a branch of the Cham communities in his realm. He declared war against Cambodia in 1750. At the same time he sent a lengthy letter to the Siamese king claiming that Cambodia was his exclusive tributary kingdom. Before he launched a fatal strike on the Mekong delta, which had been the southern part of Cambodia, Võ Vương renovated his capital, Phú Xuân, to the level of a new center of power befitting an empire. Inflation, famine, and economic distortion were also features of this time. But this study pays more attention to the active policy of King Võ as an empire builder than to the economic situation that has conventionally been given as the main reason for King Võ's annexation of the large part of the Mekong delta. 
From the year 1754, at the initiative of Nguyễn Cư Trinh, almost the whole region of the Mekong delta within the current border line was incorporated into the territory of Võ Vương within three years, though the intention of the king was to extend his land to the right side of the Mekong Basin beyond the current border, to areas such as Kampong Cham, Prey Veng, and Svay Rieng. The main reason was Võ Vương's need to expand his territory to match that of his potential empire with its large number of tributary kingdoms. King Võ's strategy was a variation of 'silkworm nibbling' and 'striking barbarians by barbarians.' He ate the land of Lower Cambodia, the region of the Mekong delta, step by step, as a silkworm nibbles mulberry leaves (the general meaning of tám thực), but his final goal was to eat all (another meaning of tám thực) of the Mekong delta, including the three provinces of Cambodia mentioned above. He used the Cham to strike the Cambodians in the process of getting land from the Long An area to Châu Đốc. This is a faithful application of Dĩ Man Công Man (to strike barbarians by barbarians). In addition, he used Chinese refugees led by the Mạc family, or their quasi-kingdom, to gain land in the region of Hà Tiên and its environs from the hands of the Cambodian king. This is another application of Dĩ Man Công Man. In sum, the author proposes a new way of looking at the origin of the imperial world order which emerged during the first half of the 19th century. It was not the result of the long history of Đại Việt empires based on the Red River delta, but the succession of King Võ's new world based on Phú Xuân. The same methods of Dĩ Man Công Man and Tám Thực Chi Kế were still used by Võ's descendants. His grandson Gia Long used 'barbarians' (man) such as the Thai, Khmer, Lao, Chinese, and Europeans to defeat other 'barbarians', the 'Tây Sơn bandits', who included many Chinese pirates, Cham, and other mountain peoples. His great-grandson Minh Mạng constructed a splendid empire. At the same time, however, Minh Mạng kept expanding the size of his empire by eating all of Cambodia and the Cham territories.

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.107-118
    • /
    • 2016
  • The advent of 5G mobile communications, expected in 2020, will provide many services such as Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. There are many requirements for realizing these services: reduced latency, high data rate and reliability, and real-time service. In particular, a high level of reliability and delay sensitivity with an increased data rate are very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to finally derive the technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate for cases such as sporting events or emergencies. The second scenario is support for e-Health, car reliability, etc.; the third scenario is related to VR games with delay sensitivity and real-time techniques. Recently, these groups have been forming agreements on the requirements for such scenarios and the target levels. Various techniques are being studied to satisfy such requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN is being standardized by the ONF and basically refers to a structure that separates signals for the control plane from the packets for the data plane. One of the best examples of low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, the messages to be delivered in the event of an emergency have to be transported in a very short time. This is a typical example requiring high delay sensitivity. 5G has to support the high-reliability and delay-sensitivity requirements of V2X in the field of traffic control. For these reasons, V2X is a major delay-critical application. V2X (vehicle-to-infra/vehicle/nomadic) represents all types of communication methods applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communication. The first is communication between a vehicle and infrastructure (vehicle-to-infrastructure; V2I). The second is communication between a vehicle and another vehicle (vehicle-to-vehicle; V2V). The third is communication between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N). Further types will be added in various fields in the future. Because the SDN structure is under consideration as the next-generation network architecture, the SDN architecture is significant. However, the centralized architecture of SDN can be considered unfavorable for delay-sensitive services, because a centralized architecture needs to communicate with many nodes and provide processing power. Therefore, in the case of emergency V2X communications, delay-related control functions require a tree-supporting structure. For such a scenario, the architecture of the network processing the vehicle information is a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a typical fully centralized SDN structure, research on the optimal size of an SDN for processing information is needed. This study examined the SDN architecture considering the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation on the speed of the car, the cell radius, and the cell tier to derive a range of cells for information transfer in the SDN network. 
In the simulation, because 5G provides a sufficiently high data rate, the information on neighboring-vehicle support to the car was assumed to be delivered without errors. Furthermore, the 5G small cell was assumed to have a cell radius of 50-100 m, and the maximum speed of the vehicle was considered to be 30-200 km/h in order to examine the network architecture that minimizes the delay (a brief dwell-time sketch follows below).
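
The worst case the abstract mentions can be made concrete with a back-of-the-envelope calculation. Below is a minimal sketch, not from the paper, that computes how long a vehicle stays inside one 5G small cell and the implied handover rate, using the abstract's assumed ranges (cell radius 50-100 m, speed 30-200 km/h); the straight-line chord model is our own simplification.

```python
# Hypothetical dwell-time sketch: how long a vehicle remains inside one
# 5G small cell, and the implied handovers per minute. The chord-through-
# the-diameter assumption is ours, not the paper's simulation model.

def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Seconds spent in one cell, assuming a straight drive along a
    diameter-length chord of the circular cell."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return (2 * cell_radius_m) / speed_ms

for radius_m in (50, 100):
    for speed_kmh in (30, 200):
        t = dwell_time_s(radius_m, speed_kmh)
        print(f"radius={radius_m:>3} m, speed={speed_kmh:>3} km/h: "
              f"dwell={t:5.2f} s, handovers/min={60 / t:5.1f}")
```

In the worst case (50 m cells at 200 km/h) the dwell time is only 1.8 s, which is why the emergency-message delay budget, and hence the size of the SDN control domain, is so tight.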

A study on the interaction between visual perception and the body in contemporary painting space (20세기 회화공간에서 시지각과 신체의 상관성에 관한 연구)

  • Lee, Kum-Hee
    • Journal of Science of Art and Design
    • /
    • v.11
    • /
    • pp.109-152
    • /
    • 2007
  • This thesis starts from accepting the criticism and concretely seeks the possibility of visuality, in particular visual physicality or physical visuality, through the expression revealed in painting space. The study aims at stressing the role of the body in visual perception, and in the pictorial expression arising from it, by examining the interaction between visual perception and the body. First of all, it explores perception and the position of the body within the broad historical stream from modernism through minimalism and post-minimalism to later art, in order to trace the interaction between visual perception and the body, that is, the changing intervention of physicality in contemporary art, and connects these with a discourse on perception and the body. As grounds, it draws on the discussions that provided the theoretical background on perception: first the scientific discussions of perceptual physicality by Gestalt psychology within perceptual psychology, and next the work of Rudolf Arnheim, who applied Gestalt psychology chiefly to visual art. These are significant in explaining a perceptual activeness akin to that of M. Merleau-Ponty, the primary interlocutor for the questions of perceptual physicality and physical visuality. Merleau-Ponty set forth ambiguous perception, with the body as its background, as the fundamental basis for perceiving the world, rather than explicitly demonstrated consciousness. As Hal Foster noted, these ideas, as the phenomenological background of minimalism, provided an apt theoretical background for the later art rising against modernist logic. After the 1970s, Frank Stella showed a working method and tendency entirely different from those of his previous period: deconstruction of the frame, decentralized spatial expression, dynamic and mixed expression, and the admission of real space through overlapping were judged to swing toward an endorsement of perceptual physicality. Francis Bacon's painting structure, that is, the figure, the triptych, the aplat, and a method of production by accident, was understood to reflect well Merleau-Ponty's logic of the flesh (chair) and the chiasm (chiasme). This study seeks the possibility of pictorial expression in works that define the question of seeing in connection with physicality, the role of the accumulated body, and the link with real, daily life as the background of the body, and confirms the resulting phase shift.

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content involving their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conducting opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach to opinion mining consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media. Each target medium requires different ways for analysts to gain access: there are open APIs, search tools, DB2DB interfaces, content purchasing, and so on. The second phase is pre-processing to generate useful material for meaningful analysis. If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of the analysis results. The major focus and purpose of this phase are to explain the results of the analysis and help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear, and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the leading company, NS Food, with a 66.5% market share; the firm has kept the No. 1 position in the Korean "Ramen" business for several decades. 
We collected a total of 11,869 pieces of content, including blogs, forum contents, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified content into more detailed categories such as marketing features, environment, reputation, etc. In these phases, we used free software such as the TM, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-colored examples built with the open library packages of the R project. Business actors can detect at a swift glance which areas are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume across categories in a time matrix, where the density of color reflects the level in each time period. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation through a hierarchical structure, since a tree map can present buzz volume and sentiment as one visualized result for a given period. This case study offers real-world business insights from market sensing, demonstrating to practical-minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well (a minimal pipeline sketch follows below).
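
To make the four-phase cycle concrete, here is a minimal Python sketch of the analyze-and-visualize steps on already-qualified content. The authors worked with R packages (TM, KoNLP, ggplot2, plyr); this stand-in uses a toy sentiment lexicon and toy posts, so every lexicon entry, field name, and score below is a hypothetical illustration, not the study's resources.

```python
# Toy opinion-mining pass: score qualified posts with a polarity lexicon,
# then aggregate buzz volume and net sentiment per period (the numbers
# that would feed the volume/sentiment graphs and heat maps).
from collections import defaultdict

LEXICON = {"delicious": 1, "tasty": 1, "love": 1,        # hypothetical
           "bland": -1, "salty": -1, "disappointed": -1}  # entries

posts = [  # cleaned content plus identification info, as in phase two
    {"date": "2014-01", "text": "love this ramen so tasty"},
    {"date": "2014-01", "text": "too salty and bland today"},
    {"date": "2014-02", "text": "delicious as always"},
]

volume = defaultdict(int)      # buzz volume per period
sentiment = defaultdict(int)   # net sentiment per period
for p in posts:
    volume[p["date"]] += 1
    sentiment[p["date"]] += sum(LEXICON.get(w, 0) for w in p["text"].split())

for period in sorted(volume):  # stand-in for the visualization phase
    print(period, "volume:", volume[period],
          "net sentiment:", sentiment[period])
```

A real pipeline would swap the toy lexicon for the domain-specific lexicon the study built with Korean NLP, and render the aggregates as heat maps and tree maps instead of printing them.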

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning them to predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are not only easy to collect openly; they also affect business. In marketing, real-world information from customers is gathered on websites rather than through surveys. Whether a website's posts are positive or negative is reflected in sales through customer response, so companies try to identify this information. However, many reviews on a website are not always good and are difficult to identify. Earlier studies in this research area used the review data of the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set. First, for the text classification algorithms related to sentiment analysis, we adopt popular machine learning algorithms such as naive Bayes (NB), support vector machines (SVM), XGBoost, random forests (RF), and gradient boosting as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. An RNN handles order well because it takes the time information of the data into account, but it suffers from the long-term dependency problem. To solve the problem of long-term dependence, LSTM is used. For comparison, CNN and LSTM were chosen as simple deep learning models; in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between their numerical values and precision to find the optimal combination, and we tried to figure out how well, and why, these models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for the classification automatically by applying a convolution layer and massively parallel processing, whereas LSTM is not capable of highly parallel processing. 
Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the long-term dependency problem. Furthermore, when the LSTM is placed after the CNN's pooling layer, the combined network has an end-to-end structure, so that spatial and temporal features can be designed simultaneously. With the CNN-LSTM combination, 90.33% accuracy was measured; this is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. CNN-LSTM can improve on the weaknesses of each model, with the further advantage of layer-wise learning through the end-to-end structure. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model (a minimal model sketch follows below).
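
The integrated architecture the abstract describes (convolution and pooling feeding an LSTM, end to end) can be sketched in a few lines of Keras. This is a minimal illustration under assumed hyperparameters (vocabulary size, filter counts, sequence length), not the authors' implementation; only the IMDB data set is taken from the abstract.

```python
# Minimal CNN-LSTM sketch: Conv1D extracts local n-gram features in
# parallel, pooling shortens the sequence, and the LSTM models the
# remaining order, giving one end-to-end trainable network.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Embedding(input_dim=20000, output_dim=128),   # word embeddings
    layers.Conv1D(64, 5, activation="relu"),             # local features
    layers.MaxPooling1D(4),                              # shorter sequence
    layers.LSTM(64),                                     # long-range order
    layers.Dense(1, activation="sigmoid"),               # pos / neg
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# The IMDB review set mentioned in the abstract, as packaged with Keras:
(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.imdb.load_data(num_words=20000)
x_tr = tf.keras.preprocessing.sequence.pad_sequences(x_tr, maxlen=400)
x_te = tf.keras.preprocessing.sequence.pad_sequences(x_te, maxlen=400)
# model.fit(x_tr, y_tr, validation_data=(x_te, y_te), epochs=2)
```

Placing the LSTM after the pooling layer is exactly the division of labor the abstract argues for: the convolution stage is massively parallel, while the recurrent stage supplies the memory the CNN lacks.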

Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.1-18
    • /
    • 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data has been generated, shared, and stored. Nowadays, various types of unstructured data such as images, sound, video, and text are distributed through Web media. Therefore, many attempts have been made in recent years to discover new value through the analysis of these unstructured data. Among these types of unstructured data, text is recognized as the most representative medium for users to express and share their opinions on the Web. In this sense, demand for obtaining new insights through text analysis is steadily increasing. Accordingly, text mining is increasingly being used for different purposes in various fields. In particular, issue tracking is being widely studied not only in the academic world but also in industry, because it can be used to extract various issues from text such as news and SNS (social network services) and to analyze the trends of these issues. Conventionally, issue tracking is used to identify major issues sustained over a long period of time through topic modeling and to analyze the detailed distribution of documents involved in each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the entire tracking period, it cannot represent the dynamic mutation process of detailed issues that can be created, merged, divided, and deleted between these periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general issue keywords such as "North Korea" in an analysis over a long period of time. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking. Note that detailed keywords are preferable to general keywords because the former can serve as clues for actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each detailed period and generated an issue flow diagram based on the similarity of each issue between two consecutive periods. The issue transition pattern among categories was analyzed by using the category information of each document. We then applied the proposed methodology to a real case of 53,739 news articles and derived an issue flow diagram from them. We propose the following useful application scenarios for the issue flow diagram presented in the experiment section. First, we can identify an issue that appears actively during a certain period and promptly disappears in the next period. Second, the preceding and following issues of a particular issue can be easily discovered from the issue flow diagram. This implies that our methodology can be used to discover the association between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions was discovered by analyzing the transition patterns of issues through category analysis. We found that a pair of mutually similar categories induces two-way transitions; in contrast, one-way transitions can be recognized as an indicator that issues in a certain category tend to be influenced by issues in another category. For practical application of the proposed methodology, high-quality word and stop-word dictionaries need to be constructed. 
In addition, not only the number of documents but also additional meta-information such as read counts, writing times, and comments of documents should be analyzed. A rigorous performance evaluation or validation of the proposed methodology should be performed in future work (a similarity-linking sketch follows below).
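
The core linking step, comparing issues of consecutive periods by similarity, can be sketched compactly. The snippet below is our own illustration, not the paper's code: issues are represented as keyword-weight dictionaries (toy values), cosine similarity is computed between every cross-period pair, and pairs above an assumed threshold become edges of the issue flow diagram.

```python
# Toy issue-flow linking: an edge is drawn between a period-1 issue and a
# period-2 issue when the cosine similarity of their keyword weights
# exceeds a threshold; missing edges mark issues that appear or vanish.
import math

def cosine(a: dict, b: dict) -> float:
    dot = sum(w * b.get(k, 0.0) for k, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

period1 = {"issue A": {"north": 0.5, "nuclear": 0.4, "test": 0.3},
           "issue B": {"family": 0.6, "reunion": 0.4}}
period2 = {"issue C": {"nuclear": 0.5, "test": 0.4, "sanction": 0.3},
           "issue D": {"election": 0.7, "party": 0.3}}

THRESHOLD = 0.3  # assumed cut-off for drawing a flow edge
edges = [(s, t, round(cosine(ks, kt), 2))
         for s, ks in period1.items()
         for t, kt in period2.items()
         if cosine(ks, kt) >= THRESHOLD]
print(edges)  # here only "issue A" -> "issue C" survives: a persisting issue
```

Because each period is modeled independently, a short-lived issue like "issue B" simply has no outgoing edge, which is precisely the mutation information conventional long-window issue tracking cannot express.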

Improving Usage of the Korea Meteorological Administration's Digital Forecasts in Agriculture: 2. Refining the Distribution of Precipitation Amount (기상청 동네예보의 영농활용도 증진을 위한 방안: 2. 강수량 분포 상세화)

  • Kim, Dae-Jun;Yun, Jin I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.15 no.3
    • /
    • pp.171-177
    • /
    • 2013
  • The purpose of this study is to find a scheme to scale down the KMA (Korea Meteorological Administration) digital precipitation maps to a grid cell resolution comparable to the rural landscape scale in Korea. As a result, we suggest a two-step procedure called RATER (Radar Assisted Topography and Elevation Revision) based on both radar echo data and a mountain precipitation model. In this scheme, the radar reflection intensity at the constant altitude of 1.5 km is first applied to the KMA local analysis and prediction system (KLAPS) 5 km grid cells to obtain 1 km resolution. In the second step, the elevation and topography effect on the basis of a 270 m digital elevation model (DEM), represented by the Parameter-elevation Regressions on Independent Slopes Model (PRISM), is applied to the 1 km resolution data to produce a 270 m precipitation map. An experimental watershed with a catchment area of about 50 km² was selected for evaluating this scheme, and automated rain gauges were deployed at 13 locations with various elevations and slope aspects. 19 cases with 1 mm or more precipitation per day were collected from January to May 2013, and the corresponding KLAPS daily precipitation data were treated with the two-step procedure. In the first step, the 24-hour integrated radar echo data were applied to the KLAPS daily precipitation to produce the 1 km resolution data across the watershed. Estimated precipitation at each 1 km grid cell was then regarded as the real-world precipitation observed at the center location of the grid cell in order to derive the elevation regressions in the PRISM step. We produced digital precipitation maps for all 19 cases by using RATER and extracted the grid cell values corresponding to the 13 points from the maps for comparison with the observed data. For the cases of 10 mm or more observed precipitation, significant improvement was found in the estimated precipitation at all 13 sites with RATER, compared with the untreated KLAPS 5 km data. In particular, the reduction in RMSE was 35% for cases of 30 mm or more observed precipitation (a regression-step sketch follows below).
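
The second RATER step, the PRISM-style elevation regression, can be illustrated with a heavily simplified sketch. Real PRISM also weights stations by slope aspect and topographic facet; the linear fit and all numbers below are toy assumptions, not the study's data.

```python
# Simplified elevation-regression step: fit precipitation against
# elevation from the 1 km radar-adjusted cells, then apply the fit to
# 270 m DEM cells to produce the downscaled precipitation values.
import numpy as np

# 1 km cells: elevation (m) and radar-adjusted daily precipitation (mm)
elev_1km = np.array([120.0, 240.0, 400.0, 610.0, 850.0])
prcp_1km = np.array([12.0, 14.5, 17.0, 21.0, 25.5])

slope, intercept = np.polyfit(elev_1km, prcp_1km, deg=1)  # mm/m and mm

# 270 m DEM cells in the same neighborhood
elev_270m = np.array([150.0, 380.0, 720.0])
prcp_270m = intercept + slope * elev_270m
print(f"fitted lapse rate: {slope * 1000:.1f} mm per km of elevation")
print("downscaled precipitation (mm):", np.round(prcp_270m, 1))
```

The first step plays the role of densifying the 'observations' (each 1 km radar-adjusted cell is treated as a pseudo-gauge at its center), and the regression then carries that signal down to the 270 m landscape scale.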

Enhancing Predictive Accuracy of Collaborative Filtering Algorithms using the Network Analysis of Trust Relationship among Users (사용자 간 신뢰관계 네트워크 분석을 활용한 협업 필터링 알고리즘의 예측 정확도 개선)

  • Choi, Seulbi;Kwahk, Kee-Young;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.113-127
    • /
    • 2016
  • Among the techniques for recommendation, collaborative filtering (CF) is commonly recognized as the most effective for implementing recommender systems. Until now, CF has been popularly studied and adopted in both academic and real-world applications. The basic idea of CF is to create recommendation results by finding correlations between the users of a recommendation system. A CF system compares users based on how similar they are, and recommends products to users by using the evaluation results of other like-minded people for each product. Thus, it is very important to compute evaluation similarities among users in CF, because the recommendation quality depends on them. Typical CF uses users' explicit numeric ratings of items (i.e., quantitative information) when computing the similarities among users. In other words, users' numeric ratings have been the sole source of user preference information in traditional CF. However, user ratings cannot always fully reflect users' actual preferences. According to several studies, users may more actively accommodate the recommendations of reliable others when purchasing goods. Thus, trust relationships can be regarded as an informative source for identifying user preferences with accuracy. Against this background, we propose a new hybrid recommender system that fuses CF and social network analysis (SNA). The proposed system adopts a recommendation algorithm that additionally reflects the results of SNA. In detail, our proposed system is based on conventional memory-based CF, but it is designed to use both users' numeric ratings and the trust relationship information between users when calculating user similarities. For this, our system creates and uses not only a user-item rating matrix but also a user-to-user trust network. As methods for calculating the similarity between users, we propose two alternatives. One is an algorithm that calculates the degree of similarity between users by utilizing in-degree and out-degree centrality, the indices representing central location in a social network; we named these approaches 'Trust CF - All' and 'Trust CF - Conditional'. The other alternative is an algorithm that weights a neighbor's score higher when the target user trusts the neighbor directly or indirectly, where the direct or indirect trust relationship is identified by searching the users' trust network. In this study, we call this approach 'Trust CF - Search'. To validate the applicability of the proposed system, we used experimental data provided by LibRec, crawled from the entire FilmTrust website. It consists of movie ratings and a trust relationship network indicating whom to trust between users. The experimental system was implemented using Microsoft Visual Basic for Applications (VBA) and UCINET 6. To examine the effectiveness of the proposed system, we compared the performance of our proposed method with that of a conventional CF system. The performance of the recommender systems was evaluated by using the average MAE (mean absolute error). The analysis results confirmed that when the in-degree centrality index of the users' trust network was applied without conditions (i.e., Trust CF - All), the accuracy (MAE = 0.565134) was lower than that of conventional CF (MAE = 0.564966). And when the in-degree centrality index was applied only to the users whose out-degree centrality was above a certain threshold value (i.e. 
Trust CF - Conditional), the proposed system improved the accuracy a little (MAE = 0.564909) compared to traditional CF. However, the algorithm searching the users' trust network (i.e., Trust CF - Search) was found to show the best performance (MAE = 0.564846), and the result of a paired-samples t-test showed that Trust CF - Search outperformed conventional CF at the 10% statistical significance level. Our study sheds light on the application of users' trust relationship network information for facilitating electronic commerce by recommending proper items to users (a trust-weighted similarity sketch follows below).
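
The 'Trust CF - Search' idea can be sketched as ordinary rating-based similarity boosted by reachability in the trust network. The snippet below is our own reconstruction, not the authors' VBA/UCINET implementation; the boost factor and the toy ratings and trust edges are assumptions.

```python
# Toy trust-search CF: Pearson similarity from co-rated items, amplified
# when the target user can reach the neighbor (directly or indirectly)
# in the directed trust network.
from collections import deque
import numpy as np

ratings = {"u1": {"i1": 4, "i2": 3, "i3": 5},
           "u2": {"i1": 5, "i2": 2, "i3": 4},
           "u3": {"i1": 2, "i2": 5, "i3": 1}}
trust = {"u1": {"u2"}, "u2": set(), "u3": {"u1"}}  # directed trust edges

def pearson(a: str, b: str) -> float:
    common = sorted(set(ratings[a]) & set(ratings[b]))
    x = np.array([ratings[a][i] for i in common], float)
    y = np.array([ratings[b][i] for i in common], float)
    if len(common) < 2 or x.std() == 0 or y.std() == 0:
        return 0.0
    return float(np.corrcoef(x, y)[0, 1])

def trusts(source: str, target: str) -> bool:
    """True if target is reachable from source in the trust network."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in trust.get(queue.popleft(), ()):
            if nxt == target:
                return True
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

BOOST = 1.2  # assumed amplification for trusted (reachable) neighbors
for v in ("u2", "u3"):
    sim = pearson("u1", v) * (BOOST if trusts("u1", v) else 1.0)
    print(f"adjusted sim(u1, {v}) = {sim:.3f}")
```

Predictions then proceed as in ordinary memory-based CF, except that neighbors the target user trusts, directly or through a chain, carry more weight in the weighted average of ratings.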

Social Network-based Hybrid Collaborative Filtering using Genetic Algorithms (유전자 알고리즘을 활용한 소셜네트워크 기반 하이브리드 협업필터링)

  • Noh, Heeryong;Choi, Seulbi;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.19-38
    • /
    • 2017
  • The collaborative filtering (CF) algorithm has been popularly used for implementing recommender systems, and there have been many prior studies aiming to improve the accuracy of CF. Among them, some recent studies adopt a 'hybrid recommendation approach', which enhances the performance of conventional CF by using additional information. In this research, we propose a new hybrid recommender system that fuses CF with the results of social network analysis on trust and distrust relationship networks among users, in order to enhance prediction accuracy. The proposed algorithm of our study is based on memory-based CF. However, when calculating the similarity between users, our proposed algorithm considers not only the correlation of the users' numeric rating patterns but also the users' in-degree centrality values derived from the trust and distrust relationship networks. Specifically, it is designed to amplify the similarity between a target user and a neighbor when the neighbor has higher in-degree centrality in the trust relationship network, and to attenuate the similarity when the neighbor has higher in-degree centrality in the distrust relationship network. Our proposed algorithm considers four types of user relationships in total: direct trust, indirect trust, direct distrust, and indirect distrust. It uses four adjusting coefficients, which adjust the level of amplification or attenuation for the in-degree centrality values derived from the direct and indirect trust and distrust relationship networks. To determine the optimal adjusting coefficients, genetic algorithms (GA) have been adopted. Against this background, we named our proposed algorithm SNACF-GA (Social Network Analysis-based CF using GA). To validate the performance of SNACF-GA, we used a real-world data set called the 'Extended Epinions dataset', provided by trustlet.org. The data set contains user responses (rating scores and reviews) after purchasing specific items (e.g., cars, movies, music, books) as well as trust/distrust relationship information indicating whom to trust or distrust between users. The experimental system was basically developed using Microsoft Visual Basic for Applications (VBA), but we also used UCINET 6 for calculating the in-degree centrality of the trust/distrust relationship networks. In addition, we used Palisade Software's Evolver, commercial software that implements genetic algorithms. To examine the effectiveness of our proposed system more precisely, we adopted two comparison models. The first comparison model is conventional CF, which uses only users' explicit numeric ratings when calculating the similarities between users; that is, it does not consider trust/distrust relationships between users at all. The second comparison model is SNACF (Social Network Analysis-based CF). SNACF differs from the proposed algorithm SNACF-GA in that it considers only direct trust/distrust relationships and does not use GA optimization. The performances of the proposed algorithm and the comparison models were evaluated by using the average MAE (mean absolute error). The experimental results showed that the optimal adjusting coefficients for direct trust, indirect trust, direct distrust, and indirect distrust were 0, 1.4287, 1.5, and 0.4615, respectively. This implies that distrust relationships between users are more important than trust ones in recommender systems. From the perspective of recommendation accuracy, SNACF-GA (Avg. 
MAE = 0.111943), the proposed algorithm that reflects both direct and indirect trust/distrust relationship information, was found to greatly outperform conventional CF (Avg. MAE = 0.112638). The algorithm also showed better recommendation accuracy than SNACF (Avg. MAE = 0.112209). To confirm whether these differences are statistically significant, we applied paired-samples t-tests. The results showed that the difference between SNACF-GA and conventional CF was statistically significant at the 1% significance level, and the difference between SNACF-GA and SNACF was statistically significant at the 5% level. Our study found that trust/distrust relationships can be important information for improving the performance of recommendation algorithms. In particular, distrust relationship information was found to have a greater impact on the performance improvement of CF. This implies that we need to pay more attention to distrust (negative) relationships, rather than only trust (positive) ones, when tracking and managing social relationships between users (a GA tuning sketch follows below).
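
The GA tuning stage can be sketched with a small, self-contained genetic algorithm standing in for Palisade's Evolver. The fitness function below is a toy surrogate (in the study it would be the average MAE of SNACF run with the candidate coefficients on a validation split), and the population size, mutation scale, and bounds are assumptions.

```python
# Toy GA over the four adjusting coefficients (direct trust, indirect
# trust, direct distrust, indirect distrust): truncation selection,
# uniform crossover, Gaussian mutation, minimizing a surrogate "MAE".
import random

def fitness(c):  # surrogate for validation MAE; lower is better
    dt, it, dd, idd = c
    return (dt - 0.0) ** 2 + (it - 1.43) ** 2 + (dd - 1.5) ** 2 + (idd - 0.46) ** 2

def evolve(pop_size=30, generations=60, bounds=(0.0, 2.0)):
    pop = [[random.uniform(*bounds) for _ in range(4)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                     # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(g) for g in zip(a, b)]  # uniform crossover
            i = random.randrange(4)                        # Gaussian mutation
            child[i] = min(bounds[1], max(bounds[0],
                           child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print("coefficients (dir. trust, ind. trust, dir. distrust, ind. distrust):",
      [round(x, 3) for x in best])
```

With a real SNACF fitness, each evaluation is expensive, which is why a derivative-free search like a GA is a natural fit for these four coupled coefficients.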

Education of Humanistic Tendency of Kerschensteiner (케어션스타이너 교육사상의 인문적 전통)

  • Kim, Deok-Chill
    • Journal of Fisheries and Marine Sciences Education
    • /
    • v.13 no.1
    • /
    • pp.117-131
    • /
    • 2001
  • The character of the educational tradition of Germany can be divided into two aspects: one is the humanistic liberal tendency, and the other the vocational. From the beginning of the twentieth century, however, there have been attempts to unify these two trends. Georg Kerschensteiner was the first important figure to build a comprehensive curriculum toward this goal. For Kerschensteiner, genuine education makes the individual assume his work and role in society and develop them by cultivating insight, will, and power. His view is well expressed in the slogan "Vocational education is the beginning of humanistic education." His goal was to form men of independence and autonomy through vocational education. Kerschensteiner's theory of education is called 'general vocational education', because his vocational education concerns not just technical training for industry but also the general liberal arts. On this point, Kerschensteiner's point of view goes back to Wilhelm von Humboldt, the neo-humanist of the first half of the nineteenth century, and to John Dewey, the pragmatist contemporary of Kerschensteiner. Kerschensteiner was much influenced by Humboldt's concepts of power and individuality, and these concepts came to be embodied as a principle of vocational education in Kerschensteiner. Furthermore, Humboldt's concept of power can be associated with Dewey's theory of reflective thinking. Power, in Humboldt, is what creates spirit, which is connected with the outside world through language. The reflective thinking of Dewey is a process that examines and selects alternative lines of thought in consciousness before acting; this process lets one find the method of problem-solving that results in behaviour. That is the experimental spirit, or pragmatic behaviourism. These theories are condensed into the concept of 'work' in Kerschensteiner. And Kerschensteiner's theory of education, which has both sides, humanistic and vocational, is similar to that of John Dewey. Dewey puts forward the idea that vocational education is the best way to cultivate intelligence and emotion, since intelligence operates best in life. The position of Dewey accords with that of Kerschensteiner, who intended to cover experiences from various fields of society through practice-learning and to draw on knowledge gained outside of school, refuting the misguided trend of education isolated from real life. However, there are some differences between Kerschensteiner and Humboldt or Dewey. While the neo-humanism of Humboldt and the pragmatic education of Dewey put emphasis rather on the liberal arts and the culture of individuality, Kerschensteiner was concerned more with the work and life of the reality of society as a group. Kerschensteiner's concept of utility is related to education for the whole man and to the work of the individual and the nation, as well as the will and power to practice it. The ideal man of utility, for Kerschensteiner, learns perfectly the values and behaviour of society through vocational life and holds a right view of the state, establishing a sound mutual relation between individual and state. Kerschensteiner is regarded as a devotee of 'the state of harmony' or 'the ideal of the state', as he makes the state the criterion for defining the role of the individual. It can be said that Kerschensteiner is not a democrat of the American style, as Dewey is, since he makes much of the value of the nation and the state. 
However, he is a humanist and a democrat on the point of vocational education. His purpose of education is to make whole men through work and vocational education.
