• Title/Summary/Keyword: real value

Search Results: 3,372

A Study on Chinese Traditional Auspicious Fish Pattern Application in Corporate Identity Design (중국 전통 길상 어(魚)문양을 응용한 중국 기업의 아이덴티티 디자인 동향)

  • ZHANG, JINGQIU
    • Cartoon and Animation Studies
    • /
    • s.50
    • /
    • pp.349-382
    • /
    • 2018
  • China is a great civilization formed by the long historical interaction of many ethnic groups. As an important component of this traditional culture, the auspicious pattern has passed through the ideological upheavals of Chinese history and has become an important element that stirs the emotions of the Chinese nation. Built up over the nation's long history, the auspicious pattern is a foundation, even the core, of China's rich traditional cultural resources. It is a way of expressing Chinese history and national emotion; it is an important part of people's living habits, feelings, and cultural background; it carries the value of surname-totem beliefs; and it also functions as a carrier of information, a symbol created to convey conceptual content. Because auspicious patterns are so varied, studying all of them in depth has its limits, so this study focuses on the auspicious fish pattern. The fish is the first auspicious shape created by the Chinese nation, dating back about 6,000 years; its distinctive form and lucky meaning embody the nation's unique inherent culture, and it is an important component of Chinese traditional culture. Research on the traditional fish pattern has concentrated on its historical continuity and on pattern recognition, and has seldom carried its meaning into modern design. By examining the auspicious meaning and the visual form of the fish pattern, this study therefore aims to analyze the real value of the fish pattern in modern corporate identity design.
The study traces the historical development, evolution, and auspicious meaning of the traditional fish pattern in order to analyze its symbolic and cultural meaning at the levels of nation, culture, art, and everyday life. It then uses numerous real examples of corporate identity designs based on the traditional fish pattern to analyze how the pattern works positively for enterprises whose image rests on trust and credibility. In modern Chinese corporate identity design, the auspicious image is reinterpreted in a contemporary way; this is verified through a questionnaire survey of consumers' perceptual knowledge and their recognition of corporate images. From the results, we can infer an improvement in consumer recognition and the potential for further development of the traditional concept. The study concludes that the traditional fish pattern is an important core of modern design.

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.1-8
    • /
    • 2013
  • Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining is one of the most significant of these techniques: it discovers useful patterns in such databases. Frequent pattern mining, one branch of pattern mining, extracts patterns whose frequencies exceed a minimum support threshold; such patterns are called frequent patterns. Traditional frequent pattern mining applies a single minimum support threshold to the whole database. This single-support model implicitly assumes that all items in the database have the same nature. In real-world applications, however, each item can have its own characteristics, so a pattern mining technique that reflects those characteristics is required. In the single-threshold framework, where the natures of items are not considered, the threshold must be set very low to mine patterns containing rare items, which yields too many patterns full of meaningless items; conversely, with too high a threshold, no patterns with rare items can be mined at all. This dilemma is called the rare item problem. Early research proposed approximate approaches that split the data into several groups according to item frequencies, or that grouped related rare items; being approximate, however, these methods cannot find all frequent patterns, including rare frequent patterns. Hence, the multiple minimum supports model was proposed to solve the rare item problem. In this model, each item has its own minimum support threshold, called the MIS (Minimum Item Support), calculated from the item's frequency in the database.
By applying the MIS values, the multiple minimum supports model finds all rare frequent patterns without generating meaningless patterns or losing significant ones. In the single minimum support model, candidate patterns extracted during mining are compared against the one global threshold, so the characteristics of the items composing each candidate are not reflected, and the rare item problem recurs. To address this in the multiple minimum supports model, the minimum MIS value among the items in a candidate pattern is used as that pattern's support threshold, thereby reflecting its items' characteristics. To mine frequent patterns, including rare frequent patterns, efficiently under this concept, tree-based algorithms of the multiple minimum supports model sort items in the tree in MIS-descending order, in contrast to single-support algorithms, which order items by frequency. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple-minimum-supports algorithm outperforms the single-support algorithm but demands more memory for the MIS information; both algorithms show good scalability.
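The minimum-MIS rule the abstract describes can be sketched in a few lines. This is an illustrative sketch only: the MIS formula MIS(i) = max(β·sup(i), LS), the β and LS values, and the item names are assumptions for demonstration, not the paper's algorithm or data.

```python
from collections import Counter

def item_supports(transactions):
    """Relative support of each item across the transaction database."""
    counts = Counter()
    for t in transactions:
        counts.update(set(t))
    n = len(transactions)
    return {item: c / n for item, c in counts.items()}

def mis(support, beta=0.5, ls=0.1):
    # Assumed MIS function: frequent items get a higher threshold,
    # rare items fall back to the floor LS (the "least support").
    return max(beta * support, ls)

def pattern_support(pattern, transactions):
    return sum(1 for t in transactions if set(pattern) <= set(t)) / len(transactions)

def is_frequent(pattern, transactions, beta=0.5, ls=0.1):
    sup = item_supports(transactions)
    # Minimum-MIS rule: a candidate pattern is judged against the
    # smallest MIS among its items, so rare items are not drowned out.
    threshold = min(mis(sup[i], beta, ls) for i in pattern)
    return pattern_support(pattern, transactions) >= threshold

transactions = [
    ["bread", "milk"], ["bread", "milk", "caviar"],
    ["bread", "milk"], ["bread"], ["milk", "caviar"],
]
# ("milk", "caviar") has support 0.4; its threshold is min(0.4, 0.2) = 0.2
print(is_frequent(("milk", "caviar"), transactions))  # True
```

A single global threshold high enough to suppress noise around "bread" and "milk" would discard the rare-but-meaningful "caviar" patterns; the per-item MIS keeps them.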

Essay on Form and Function Design (디자인의 형태와 기능에 관한 연구)

  • 이재국
    • Archives of design research
    • /
    • v.2 no.1
    • /
    • pp.63-97
    • /
    • 1989
  • There is nothing more important in design than form and function, because every design product rests on them. Form and function existed before the word "design" appeared, and the basic organization of all natural and man-made things is based on their organic relations. These organic relations are the source of vitality that sustains all objects, and living creatures have changed their appearance through evolution according to natural law and order. Design is no exception: it is a man-made organic thing that develops in its own way according to its intended aim and given situations. What, then, is the ultimate goal of design? It goes without saying that the goal is to contribute to the most desirable life of human beings, through designers who devote themselves to people's convenience and well-being. The designer may therefore be called a practitioner of the rich life. This phrase carries many meanings, since the essence of design is improving the quality of life through the man-made things the designer creates. These things exist through the relations between form and function, and they keep their value only when they answer to the right purpose. In design, then, the main concern is how to create valuable things and use them in the right way, and this study focuses on the designer's outlook on value and the relations between form and function. Christopher Alexander stressed the importance of form as follows: the ultimate object of design is form; every design problem begins with an effort to achieve fitness between the form and its context; the form is the solution to the problem, and the context defines the problem. In other words, when we speak of design, the real object of discussion is not form alone but the ensemble comprising the form and its context.
Good fit is a desirable property of this ensemble, relative to some particular division of the ensemble into form and context. Max Bill maintained that form is central to design: form represents a self-contained concept, and its embodiment in an object results in that object becoming a work of art. Furthermore, this explains why we so frequently use form in a comparative sense, for determining whether one thing is more or less beautiful than another, and why the ideal of absolute beauty is always the standard by which we appraise form, and through form, art itself. Hence form has become synonymous with beauty. On the other hand, Laszlo Moholy-Nagy stated the importance of function as follows: function means the task an object is designed to fulfill, and the task the instrument performs shapes its form. Unfortunately, this principle was not appreciated at the time; through the endeavors of Frank Lloyd Wright and of the Bauhaus group and its many colleagues in Europe, the idea of functionalism became the keynote of the twenties. Functionalism soon became a cheap slogan, however, and its original meaning blurred; it is necessary to reexamine it in the light of present circumstances. Charles William Eliot expressed his idea on the relation between function and beauty: beauty often results chiefly from fitness; indeed, it is easy to maintain that nothing is fair except what is fit for its uses or functions. If the function of the product of a machine is useful and valuable, and the machine is eminently fit for its function, it conspicuously has the beauty of fitness. A locomotive or a steamship has the same sort of beauty, derived from supreme fitness for its function; as functions vary, so will those beauties vary. However, it is impossible to study form and function as separate beings: function cannot exist without form, and without function, form is nothing. In other words, form is a function's container, and function is the content in form.
It can therefore be said that form and function are indispensable, commensal individuals in a coeternal relation. From a different point of view, one is sometimes emphasized over the other, but even then the logic is accepted only on the assumption that the importance of the other's existence is recognized. This is borne out by Frank Lloyd Wright's saying that form and function are one. Even so, form and function should also be considered as independent individuals, because they are too important to be treated simply as one. Form and function are flexible with respect to context: the context serves as the barometer that defines form and function, and it carries every meaning of the surroundings. Thus design is formed under the influence of situations. Situations are dynamic, like the design process itself, in which a fixed focus can be crippling; moreover, situations govern the making of good design. From this standpoint, I defined good design in my thesis "An Analytic Research on Design Ethic" as follows: good design is to solve the problem in the most proper way for the situation. Situations are changeable, and so is design. There is no progress without change, but change is not necessarily progress; it is highly desirable that changes be beneficial to mankind. Our main problem is to discriminate between what should be discarded and what should be kept, built upon, and improved. Form and function are no exception. The practical function gives birth to the inevitable form, and the multi-classified function is delivered into varieties of form; all of these depend upon changeable situations. That is precisely the idea of "situation design," the concept of moving from the design of things to the design of the circumstances in which things are used.
From this point of view, the core of form and function depends upon how efficiently the designer can manage them in given situations; that is, the designer's creativity plays the decisive role in fulfilling the purpose. Generally speaking, creativity is the organization of a concept in response to a human need, a solution that is both satisfying and innovative. To meet human needs, creative design activity requires a special intuitive insight set in motion by purposeful imagination; creativity is therefore the most essential quality of every designer. In addition, designers share with other creative people a compulsive ingenuity and a passion for imaginative solutions that will meet their criteria for excellence. Ultimately, form and function belong to the desire of creative designers who constantly try to bring new things into being. Accordingly, the main purpose of this thesis is to grasp every meaning of form and function and to analyze their relations closely, in order to promote understanding and devise practical applications for gradual progress in design. The thesis is composed of four parts: Introduction, Form, Function, and Conclusion. In the Introduction, the purpose and background of the research are presented. In Chapter I, the origin, perception, and classification of form are studied. In Chapter II, the generation, development, and diversification of function are considered. In the Conclusion, some concluding words are offered.


Effects of firm strategies on customer acquisition of Software as a Service (SaaS) providers: A mediating and moderating role of SaaS technology maturity (SaaS 기업의 차별화 및 가격전략이 고객획득성과에 미치는 영향: SaaS 기술성숙도 수준의 매개효과 및 조절효과를 중심으로)

  • Chae, SeongWook;Park, Sungbum
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.151-171
    • /
    • 2014
  • Firms today seek management effectiveness and efficiency by utilizing information technology (IT). Numerous firms outsource specific information systems functions to cope with their shortage of information resources or IT experts, or to reduce capital costs. Recently, Software-as-a-Service (SaaS), a new type of information system, has become one of the most powerful outsourcing alternatives. SaaS is software deployed as a hosted service and accessed over the Internet. It embodies the ideas of on-demand, pay-per-use, utility computing, and is now applied to support clients' core competencies in areas ranging from individual productivity to vertical industries and e-commerce. In this study, we therefore seek to quantify the value that SaaS has for business performance by examining the relationships among firm strategies, SaaS technology maturity, and the business performance of SaaS providers. We begin by drawing on prior literature on SaaS, technology maturity, and firm strategy. SaaS technology maturity is classified into three phases: application service providing (ASP), Web-native application, and Web-service application. Firm strategies are operationalized as the low-cost strategy and the differentiation strategy, and customer acquisition is taken as the measure of business performance. The specific objectives of this study are as follows. First, we examine the relationships between customer acquisition performance and both the low-cost and differentiation strategies of SaaS providers. Second, we investigate the mediating and moderating effects of SaaS technology maturity on those relationships. For this purpose, the study collects data from SaaS providers and their lines of applications registered in the CNK (Commerce net Korea) database in Korea, using a questionnaire administered by a professional research institution.
The unit of analysis in this study is the strategic business unit (SBU) within the software provider; a total of 199 SBUs are used to test our hypotheses. To measure firm strategy, we use three items for the differentiation strategy, namely application uniqueness (whether an application aims to differentiate within one or a small number of target industries), supply channel diversification (whether the SaaS vendor has diversified its supply chain), and the number of specialized experts, and two items for the low-cost strategy: subscription fee and initial set-up fee. We employ hierarchical regression analysis to test the moderating effects of SaaS technology maturity and follow Baron and Kenny's procedure to determine whether firm strategies affect customer acquisition through technology maturity. The empirical results reveal, first, that when the differentiation strategy is applied to attain business performance such as customer acquisition, its effect is moderated by the SaaS provider's technology maturity level; in other words, securing a higher level of SaaS technology maturity is essential for higher business performance. For instance, firms that implement application uniqueness or distribution channel diversification as a differentiation strategy acquire more customers when their SaaS technology maturity is higher rather than lower. Second, the results indicate that pursuing either the differentiation strategy or the low-cost strategy effectively helps SaaS providers acquire customers: continuously differentiating their service from others, or lowering their fees (subscription fee or initial set-up fee), contributes to business success in terms of customer acquisition. Lastly, the results show that the level of SaaS technology maturity mediates the relationship between the low-cost strategy and customer acquisition.
That is, under our research design, customers perceive the real value of a low subscription fee or initial set-up fee only through the SaaS service provided by the vendor, and this in turn affects their decision on whether or not to subscribe.
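The moderation test described above (a hierarchical regression where an interaction term is added in a second step) can be sketched on synthetic data. Everything here is illustrative: the variable names, coefficients, and data are invented stand-ins, not the study's survey measures or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 199  # same unit count as the study's SBUs, but purely synthetic data

differentiation = rng.normal(size=n)  # stand-in for e.g. application uniqueness
maturity = rng.normal(size=n)         # stand-in for SaaS technology maturity
# Synthetic outcome with a positive interaction: differentiation pays off
# more at higher maturity, mimicking the kind of moderation effect reported.
acquisition = (0.3 * differentiation + 0.2 * maturity
               + 0.4 * differentiation * maturity
               + rng.normal(scale=0.5, size=n))

def ols(columns, y):
    """Ordinary least squares with an intercept via lstsq."""
    X = np.column_stack([np.ones(len(y)), *columns])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Step 1: main effects only. Step 2: add the interaction term; a significant
# gain in fit for step 2 is the hierarchical-regression evidence of moderation.
step1 = ols([differentiation, maturity], acquisition)
step2 = ols([differentiation, maturity, differentiation * maturity], acquisition)
print(step2[-1])  # interaction coefficient, close to the planted 0.4
```

In practice one would also compare R-squared between the two steps and test the interaction coefficient's significance; the sketch shows only the structural idea.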

Predicting the Direction of the Stock Index by Using a Domain-Specific Sentiment Dictionary (주가지수 방향성 예측을 위한 주제지향 감성사전 구축 방안)

  • Yu, Eunji;Kim, Yoosin;Kim, Namgyu;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.95-110
    • /
    • 2013
  • Recently, the amount of unstructured data being generated through a variety of social media has been increasing rapidly, resulting in the increasing need to collect, store, search for, analyze, and visualize this data. This kind of data cannot be handled appropriately by using the traditional methodologies usually used for analyzing structured data because of its vast volume and unstructured nature. In this situation, many attempts are being made to analyze unstructured data such as text files and log files through various commercial or noncommercial analytical tools. Among the various contemporary issues dealt with in the literature of unstructured text data analysis, the concepts and techniques of opinion mining have been attracting much attention from pioneer researchers and business practitioners. Opinion mining or sentiment analysis refers to a series of processes that analyze participants' opinions, sentiments, evaluations, attitudes, and emotions about selected products, services, organizations, social issues, and so on. In other words, many attempts based on various opinion mining techniques are being made to resolve complicated issues that could not have otherwise been solved by existing traditional approaches. One of the most representative attempts using the opinion mining technique may be the recent research that proposed an intelligent model for predicting the direction of the stock index. This model works mainly on the basis of opinions extracted from an overwhelming number of economic news reports. News content published on various media is obviously a traditional example of unstructured text data. Every day, a large volume of new content is created, digitalized, and subsequently distributed to us via online or offline channels. Many studies have revealed that we make better decisions on political, economic, and social issues by analyzing news and other related information.
In this sense, we expect to predict the fluctuation of stock markets partly by analyzing the relationship between economic news reports and the pattern of stock prices. So far, in the literature on opinion mining, most studies including ours have utilized a sentiment dictionary to elicit sentiment polarity or sentiment value from a large number of documents. A sentiment dictionary consists of pairs of selected words and their sentiment values. Sentiment classifiers refer to the dictionary to formulate the sentiment polarity of words, sentences in a document, and the whole document. However, most traditional approaches have common limitations in that they do not consider the flexibility of sentiment polarity, that is, the sentiment polarity or sentiment value of a word is fixed and cannot be changed in a traditional sentiment dictionary. In the real world, however, the sentiment polarity of a word can vary depending on the time, situation, and purpose of the analysis. It can also be contradictory in nature. The flexibility of sentiment polarity motivated us to conduct this study. In this paper, we have stated that sentiment polarity should be assigned, not merely on the basis of the inherent meaning of a word but on the basis of its ad hoc meaning within a particular context. To implement our idea, we presented an intelligent investment decision-support model based on opinion mining that performs the scrapping and parsing of massive volumes of economic news on the web, tags sentiment words, classifies sentiment polarity of the news, and finally predicts the direction of the next day's stock index. In addition, we applied a domain-specific sentiment dictionary instead of a general purpose one to classify each piece of news as either positive or negative. For the purpose of performance evaluation, we performed intensive experiments and investigated the prediction accuracy of our model. 
For the experiments to predict the direction of the stock index, we gathered and analyzed 1,072 articles about stock markets published by "M" and "E" media between July 2011 and September 2011.
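The dictionary-based classification pipeline the abstract describes can be sketched as follows. The dictionary entries, their sentiment values, and the majority-vote aggregation are illustrative assumptions, not the paper's actual domain-specific dictionary or prediction model.

```python
# Hypothetical domain-specific sentiment dictionary for stock-market news:
# word polarities are tuned to the domain (e.g. "fall" is negative here
# even though it is neutral in general-purpose dictionaries).
SENTIMENT = {"surge": 1.0, "rally": 1.0, "gain": 0.5,
             "fall": -1.0, "plunge": -1.5, "loss": -0.5}

def classify(article, dictionary=SENTIMENT):
    """Sum the sentiment values of dictionary words found in the article."""
    tokens = article.lower().split()
    score = sum(dictionary.get(tok, 0.0) for tok in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def predict_direction(articles):
    """Toy aggregation: majority vote over per-article polarities."""
    votes = [classify(a) for a in articles]
    pos, neg = votes.count("positive"), votes.count("negative")
    return "up" if pos > neg else "down" if neg > pos else "flat"

news = ["Stocks surge on earnings rally",
        "Markets gain on rally hopes",
        "Tech shares fall amid loss fears"]
print(predict_direction(news))  # "up": two positive articles outvote one negative
```

A real system would add tokenization, negation handling, and weighting by article relevance; the sketch only shows how a domain-specific dictionary maps news text to a direction signal.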

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.77-97
    • /
    • 2019
  • The large amount of data that emerges from the hyper-connected environment of the Fourth Industrial Revolution is a major factor distinguishing it from earlier production environments. This environment has a two-sided character: it produces data even while using it, and the data thus produced creates further value. Because of this massive scale, future information systems must process more data than existing systems in quantitative terms; in qualitative terms, they must also be able to extract accurate information from it. In a small information system, a person can understand the system accurately and obtain the necessary information, but in complex systems that are difficult to understand accurately, acquiring the desired information becomes increasingly hard. In other words, more accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient information-system performance can be solved by building a semantic web, which enables varied information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. The military, like most other organizations, has introduced IT, and most of its work is now done through information systems. As existing systems accumulate ever larger amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system gains a large semantic data network through connection with other systems, has a wide range of usable databases, and has the advantage of searching more precisely and quickly through the relationships between predefined concepts.
In this paper, we propose a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, since the full logistics information system, with its large amount of data, had become difficult to use. It extracts pre-specified information from the existing logistics system and displays it as web pages; however, anything beyond the few pre-specified items is difficult to confirm, extending it with additional functions is time-consuming, and it is organized by category with no search function. It therefore has the disadvantage that it can be used easily only by those who already know the system well. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. To construct it, useful functions such as performance-based logistics contract management and a component dictionary were additionally identified and included in the ontology. To confirm that the constructed ontology can support decision making, meaningful analysis functions were implemented, such as calculating aircraft utilization rates and querying performance-based logistics contracts.
In particular, in contrast to past ontology studies, this study builds time-series data whose values change over time, such as the daily state of each aircraft, into the ontology; through the constructed ontology, it is confirmed that utilization rates can be calculated on various criteria, not only the conventional one. In addition, data related to performance-based logistics contracts, introduced as a new maintenance scheme for aircraft and other munitions, can be queried in various ways, and the performance indexes used in such contracts are easy to calculate through the ontology's reasoning and functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, confirming the usability of the constructed ontology. Finally, based on actual part-consumption records, including MTBF data for selected fault-prone items, the failure rate or reliability of each component can be calculated, and from these the mission reliability and system reliability are derived. To confirm the usability of the constructed ontology-based logistics situation management system, we evaluate the proposed system with the Technology Acceptance Model (TAM), a representative model for measuring technology acceptance, and find it more useful and convenient than the existing system.
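The time-series-to-utilization-rate idea can be sketched with a toy triple store standing in for the ontology. The triple shape, predicate name, aircraft identifier, and status values are all invented for illustration; a real system would use an RDF store and SPARQL rather than Python lists.

```python
# Toy triple store: (subject, predicate, object). The "hasStatus" triples
# record the daily state of an aircraft, the kind of time-series data the
# study models in its ontology.
triples = [
    ("aircraft:F16-01", "hasStatus", ("2019-03-01", "operational")),
    ("aircraft:F16-01", "hasStatus", ("2019-03-02", "maintenance")),
    ("aircraft:F16-01", "hasStatus", ("2019-03-03", "operational")),
    ("aircraft:F16-01", "hasStatus", ("2019-03-04", "operational")),
]

def query(store, subject=None, predicate=None):
    """Return objects of all triples matching the given subject/predicate."""
    return [o for s, p, o in store
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)]

def utilization_rate(store, aircraft):
    """Fraction of recorded days on which the aircraft was operational."""
    statuses = query(store, subject=aircraft, predicate="hasStatus")
    ok = sum(1 for _, state in statuses if state == "operational")
    return ok / len(statuses)

print(utilization_rate(triples, "aircraft:F16-01"))  # 0.75 (3 of 4 days)
```

Because the status history lives in the graph as explicit relations, the same data can be re-aggregated under different criteria (per month, per squadron) by changing only the query, which is the flexibility the abstract claims for the ontology-based approach.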

A Study of Feasibility of Dipole-dipole Electric Method to Metallic Ore-deposit Exploration in Korea (국내 금속광 탐사를 위한 쌍극자-쌍극자 전기탐사의 적용성 연구)

  • Min, Dong-Joo;Jung, Hyun-Key;Park, Sam-Gyu;Chon, Hyo-Taek;Kwak, Na-Eun
    • Geophysics and Geophysical Exploration
    • /
    • v.11 no.3
    • /
    • pp.250-262
    • /
    • 2008
  • In order to assess the feasibility of the dipole-dipole electric method for investigating metallic ore deposits, both field data simulation and inversion are carried out for several simplified ore-deposit models. Our interest is in vein-type models, because most ore deposits in Korea (more than 70%) are vein-type. Based on the fact that the width of vein-type ore deposits ranges from tens of centimeters to 2 m, we vary the width and material properties of the vein, and we use a 40 m electrode spacing in our tests. For a vein-type model with too small a width, the low-resistivity zone is not detected, even when the resistivity of the vein is as low as 1/300 of that of the surrounding rock. Considering the wide electrode interval and the cell size used in the inversion, it is natural that the size of the low-resistivity zone is overestimated. We also perform field data simulation and inversion for a vein-type model with surrounding hydrothermal alteration zones, a typical structure in epithermal ore deposits; in this model, the material properties are assumed on the basis of resistivity values directly observed in a mine originating from an epithermal ore deposit. From this simulation, we also note that the high resistivity of the vein does not affect the results when the vein is narrow, which indicates that in field surveys our main target should be the surrounding hydrothermal alteration zones rather than the veins themselves. In summary, when the vein lies deep and the resistivity contrast between the vein and the surrounding rock is not large enough, a surface electric survey cannot detect the low-resistivity zone and may lead to incorrect interpretation of the subsurface structures. Although this work is somewhat simplified, it can serve as a reference for field survey design and field data interpretation.
If we perform field data simulation and inversion for a number of models and provide some references, they will be helpful in real field survey and interpretation.
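For readers unfamiliar with the survey geometry mentioned above, apparent resistivity for a dipole-dipole array is conventionally computed from the measured voltage and injected current via the array's geometric factor. The formula below is the standard textbook expression; the numeric values are illustrative, not from this study's surveys.

```python
import math

def geometric_factor(a, n):
    # Standard dipole-dipole geometric factor:
    #   K = pi * n * (n + 1) * (n + 2) * a
    # where a is the dipole length and n*a the separation between dipoles.
    return math.pi * n * (n + 1) * (n + 2) * a

def apparent_resistivity(a, n, delta_v, current):
    """Apparent resistivity (ohm·m) from measured voltage and current."""
    return geometric_factor(a, n) * delta_v / current

# 40 m electrode spacing as in the study; n, delta_v, current are made up.
rho_a = apparent_resistivity(a=40.0, n=1, delta_v=0.05, current=0.5)
print(rho_a)  # ≈ 75.4 ohm·m
```

Because K grows roughly as n³, the measured voltage falls off quickly with dipole separation, which is one reason deep, narrow conductive veins are hard to resolve from the surface, as the abstract reports.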

Multi-level Analysis of the Antecedents of Knowledge Transfer: Integration of Social Capital Theory and Social Network Theory (지식이전 선행요인에 관한 다차원 분석: 사회적 자본 이론과 사회연결망 이론의 결합)

  • Kang, Minhyung;Hau, Yong Sauk
    • Asia Pacific Journal of Information Systems
    • /
    • v.22 no.3
    • /
    • pp.75-97
    • /
    • 2012
  • Knowledge residing in the heads of employees has always been regarded as one of the most critical resources within a firm. However, many attempts to facilitate knowledge transfer among employees have been unsuccessful because of motivational and cognitive problems between the knowledge source and the recipient. Social capital, which is defined as "the sum of the actual and potential resources embedded within, available through, and derived from the network of relationships possessed by an individual or social unit" [Nahapiet and Ghoshal, 1998], is suggested as a way to resolve these motivational and cognitive problems of knowledge transfer. There are two research streams in social capital theory. One insists that social capital strengthens group solidarity and fosters cooperative behaviors among group members, such as voluntary help to colleagues. Therefore, social capital can motivate an expert to transfer his/her knowledge to a colleague in need without any direct reward. The other stream insists that social capital provides access to various resources that the owner of the social capital does not possess directly. In the knowledge transfer context, an employee with social capital can access and learn a great deal of knowledge from his/her colleagues. Therefore, social capital provides benefits to both the knowledge source and the recipient in different ways. However, prior research on knowledge transfer and social capital is mostly limited to one of the two research streams and covers only the knowledge source's or the knowledge recipient's perspective. Social network theory, which focuses on the structural dimension of social capital, provides a clear explanation of the in-depth mechanisms behind social capital's two different benefits. A 'strong tie' builds up identification, trust, and emotional attachment between the knowledge source and the recipient; therefore, it motivates the knowledge source to transfer his/her knowledge to the recipient.
On the other hand, a 'weak tie' easily expands to 'diverse' knowledge sources because it does not take much effort to maintain. Therefore, the real value of a 'weak tie' comes from the 'diverse network structure,' not the 'weak tie' itself. This implies that the two different perspectives on tie strength can co-exist. For example, an extroverted employee can maintain many 'strong' ties with 'various' colleagues. In this regard, the individual-level structure of one's relationships as well as the dyadic-level relationship should be considered together to provide a holistic view of social capital. In addition, the interaction effect between individual-level characteristics and dyadic-level characteristics can be examined. Based on these arguments, this study poses the following research questions. (1) How does the social capital of the knowledge source and the recipient influence knowledge transfer, respectively? (2) How does the strength of the tie between the knowledge source and the recipient influence knowledge transfer? (3) How does the social capital of the knowledge source and the recipient influence the effect of the strength of the tie between them on knowledge transfer? Based on social capital theory and social network theory, a multi-level research model is developed that considers both the individual-level social capital of the knowledge source and the recipient and the dyadic-level strength of the relationship between them. The 'cross-classified random effect model,' one of the multi-level analysis methods, is adopted to analyze survey responses from 337 R&D employees. The analysis provides several findings. First, among the three dimensions of the knowledge source's social capital, network centrality (i.e., the structural dimension) shows a significant direct effect on knowledge transfer. On the other hand, the knowledge recipient's network centrality is not influential.
Instead, it strengthens the influence of the strength of the tie between the knowledge source and the recipient on knowledge transfer. This means that the knowledge recipient's network centrality does not directly increase knowledge transfer. Instead, by providing access to various knowledge sources, the recipient's network centrality provides the context in which a strong tie between the knowledge source and the recipient leads to effective knowledge transfer. In short, network centrality has an indirect effect on knowledge transfer from the knowledge recipient's perspective, while it has a direct effect from the knowledge source's perspective. This is the most important contribution of this research. In addition, contrary to the research hypothesis, the company tenure of the knowledge recipient negatively influences knowledge transfer. This means that experienced employees do not look for new knowledge and stick to their own knowledge, which is also an interesting result. One possible reason is the hierarchical culture of Korea, such as a fear of losing face in front of subordinates. From a research methodology perspective, the multi-level analysis adopted in this study seems very promising for management research, which often has a multi-level data structure such as employee-team-department-company. In addition, social network analysis is also a promising research approach given the exploding availability of online social network data.
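The key moderation finding, that the recipient's network centrality strengthens the effect of tie strength on knowledge transfer, can be illustrated with simulated dyadic data. This is a toy sketch under assumed coefficients, not the study's cross-classified random effect model or its survey data; the variable names and effect sizes are hypothetical.

```python
import random

def corr(xs, ys):
    """Pearson correlation, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)
rows = []
for _ in range(400):
    tie = random.random()          # dyadic tie strength
    centrality = random.random()   # recipient's network centrality
    # assumed model: centrality amplifies the effect of tie strength
    transfer = 0.2 * tie + 0.8 * tie * centrality + random.gauss(0, 0.05)
    rows.append((tie, centrality, transfer))

high = [(t, y) for t, c, y in rows if c >= 0.5]
low = [(t, y) for t, c, y in rows if c < 0.5]
corr_high = corr([t for t, _ in high], [y for _, y in high])
corr_low = corr([t for t, _ in low], [y for _, y in low])
# tie strength predicts transfer more strongly when recipient centrality is high
```

Splitting the dyads by recipient centrality is a crude stand-in for the interaction term the study estimates, but it makes the moderation pattern visible: the tie-strength/transfer correlation is larger in the high-centrality subgroup.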

Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.1-18
    • /
    • 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data has been generated, shared, and stored. Nowadays, various types of unstructured data such as images, sound, video, and text are distributed through Web media. Therefore, many attempts have been made in recent years to discover new value through the analysis of these unstructured data. Among these types of unstructured data, text is recognized as the most representative medium through which users express and share their opinions on the Web. In this sense, demand for obtaining new insights through text analysis is steadily increasing. Accordingly, text mining is increasingly being used for different purposes in various fields. In particular, issue tracking is being widely studied not only in academia but also in industry, because it can be used to extract various issues from text such as news articles and SNS (Social Network Services) posts and to analyze the trends of these issues. Conventionally, issue tracking is used to identify major issues sustained over a long period of time through topic modeling and to analyze the detailed distribution of documents involved in each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the entire tracking period, it cannot represent the dynamic mutation process of detailed issues that can be created, merged, divided, and deleted between periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general issue keywords such as "North Korea" in an analysis over a long period of time. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking.
Note that detailed keywords are preferable to general keywords because the former can serve as clues for actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each detailed period. We generated an issue flow diagram based on the similarity of each issue between two consecutive periods, and the issue transition pattern among categories was analyzed by using the category information of each document. We then applied the proposed methodology to a real case of 53,739 news articles and derived an issue flow diagram from them. In the experiment section, we propose the following useful application scenarios for the issue flow diagram. First, we can identify an issue that appears actively during a certain period and promptly disappears in the next period. Second, the preceding and following issues of a particular issue can be easily discovered from the issue flow diagram. This implies that our methodology can be used to discover the associations between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions was discovered by analyzing the transition patterns of issues through category analysis. We found that a pair of mutually similar categories induces two-way transitions, whereas a one-way transition can be recognized as an indicator that issues in a certain category tend to be influenced by issues in another category. For practical application of the proposed methodology, high-quality word and stop-word dictionaries need to be constructed. In addition, not only the number of documents but also additional meta-information, such as the read counts, written time, and comments of documents, should be analyzed. A rigorous performance evaluation and validation of the proposed methodology should be performed in future work.
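The inter-period issue-linking step behind the issue flow diagram can be sketched in a few lines: issues from two consecutive periods are connected whenever the cosine similarity of their keyword counts exceeds a threshold. The issue names, keywords, and threshold below are illustrative, not taken from the study.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two keyword-count dictionaries."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def issue_flow_edges(period_a, period_b, threshold=0.3):
    """Link issues of period A to issues of period B when their keyword
    distributions are similar enough; returns (issue_a, issue_b, sim) edges."""
    edges = []
    for name_a, kw_a in period_a.items():
        for name_b, kw_b in period_b.items():
            sim = cosine(Counter(kw_a), Counter(kw_b))
            if sim >= threshold:
                edges.append((name_a, name_b, round(sim, 2)))
    return edges

# Illustrative issues: a persisting issue keeps an edge, an unrelated one does not
p1 = {"nuclear_test": ["nuclear", "test", "north", "korea"]}
p2 = {"nuclear_talks": ["nuclear", "north", "korea", "talks"],
      "stock_market": ["market", "trade", "index"]}
edges = issue_flow_edges(p1, p2)
```

An issue with no outgoing edge in such a diagram is one that "promptly disappears in the next period," while merges and splits show up as issues with multiple incoming or outgoing edges.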

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained by a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. 
The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected by using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting; the prediction accuracy on this second portion was used to determine the fitness value. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models, and the Q-statistic values and average classification accuracies of the base classifiers were investigated.
The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
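The random subspace KNN ensemble described above can be sketched as follows. Here the feature subsets are drawn at random for brevity; in the proposed model a genetic algorithm searches over the (k, feature subset) pairs instead. All data and parameter values below are illustrative, not the study's financial-ratio dataset.

```python
import random
from collections import Counter

def knn_predict(train_X, train_y, x, k, features):
    """Classify x by majority vote of its k nearest training points,
    measuring squared distance only on the given feature subset."""
    dists = sorted(
        (sum((xi[f] - x[f]) ** 2 for f in features), yi)
        for xi, yi in zip(train_X, train_y)
    )
    return Counter(y for _, y in dists[:k]).most_common(1)[0][0]

def random_subspace_knn(train_X, train_y, n_members=5, subspace_size=2,
                        k=3, seed=0):
    """Build an ensemble of KNN base classifiers, each trained on a random
    feature subset; predictions are combined by majority vote. A genetic
    algorithm, as in the abstract, would optimize these choices instead."""
    rng = random.Random(seed)
    n_features = len(train_X[0])
    members = [rng.sample(range(n_features), subspace_size)
               for _ in range(n_members)]

    def predict(x):
        votes = Counter(knn_predict(train_X, train_y, x, k, feats)
                        for feats in members)
        return votes.most_common(1)[0][0]

    return predict

# Illustrative data: class 0 near the origin, class 1 near (1, 1, 1, 1)
train_X = [[0.0, 0.1, 0.0, 0.1], [0.1, 0.0, 0.1, 0.0],
           [0.9, 1.0, 0.9, 1.0], [1.0, 0.9, 1.0, 0.9]]
train_y = [0, 0, 1, 1]
predict = random_subspace_knn(train_X, train_y)
```

Restricting each base classifier to a different feature subset is what diversifies the ensemble; KNN benefits particularly because its predictions are sensitive to the feature space, which is the property the abstract highlights.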