• Title/Summary/Keyword: Domain


Study on the Difference in Intake Rate by Kidney in Accordance with whether the Bladder is Shielded and Injection method in 99mTc-DMSA Renal Scan for Infants (소아 99mTc-DMSA renal scan에서 방광차폐유무와 방사성동위원소 주입방법에 따른 콩팥섭취율 차이에 관한 연구)

  • Park, Jeong Kyun;Cha, Jae Hoon;Kim, Kwang Hyun;An, Jong Ki;Hong, Da Young;Seong, Hyo Jin
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.20 no.2
    • /
    • pp.27-31
    • /
    • 2016
  • Purpose The $^{99m}Tc-DMSA$ renal scan is a test that images the renal parenchyma via the kidney cortex and compares left and right kidney function by computing each kidney's intake ratio of the radiation. Because the kidneys and bladder lie close together in an infant's body, the bladder is included in the examination field. This research proceeded from the presumption that bladder counts would influence the kidney counts during the renal scan. Considering that only a trace amount of radioisotope (RI) is injected in a pediatric examination, the injection method was studied concurrently. Materials and Methods In 34 infants aged 1 to 12 months undergoing a $^{99m}Tc-DMSA$ renal scan, the same 0.5 mCi dose of DMSA was injected and a post-injection image was acquired at the scheduled test time. An additional image was then acquired with the bladder shielded by a circular lead plate for comparison, and the two images were compared using the percentage (Lt. kidney counts + Rt. kidney counts) / total counts, measured with identically sized ROIs (55.2 mm long x 70.0 mm wide). In addition, three RI injection methods were compared: a 3-way stopcock, a heparin cap, and direct injection into the patient. For the 3-way stopcock and heparin cap, an additional 2 cc of saline was flushed through, and the resulting count changes were compared across the methods. Results The image before bladder shielding showed a kidney intake rate of $70.9{\pm}3.18%$, while the image after shielding showed $79.4{\pm}5.19%$, a difference of approximately 6.5~8.5%. By injection method, the 3-way stopcock showed $68.9{\pm}2.80%$ before shielding and $78.1{\pm}5.14%$ after; the heparin cap showed $71.3{\pm}5.14%$ before and $79.8{\pm}3.26%$ after; and direct injection showed $75.1{\pm}4.30%$ before and $82.1{\pm}2.35%$ after, giving kidney intake rates in the order of direct injection, heparin cap, and 3-way stopcock. Conclusion Since a far smaller quantity of radiopharmaceutical is injected for infants than for adults, shielding the bladder, and thereby removing its radiation from the field, yielded higher kidney intake rates than not shielding it. Although securing a blood vessel can be difficult, the direct injection method appears more helpful for acquiring better images, since it showed a higher kidney intake rate than the other methods.
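
The relative intake rate above is a simple count ratio; a minimal sketch in Python, with made-up ROI counts (not values from the study):

```python
# Hypothetical illustration of the relative kidney intake-rate calculation
# described in the abstract; the count values below are invented, not study data.

def kidney_intake_rate(lt_counts: float, rt_counts: float, total_counts: float) -> float:
    """Percentage of total image counts falling within the two kidney ROIs."""
    return 100.0 * (lt_counts + rt_counts) / total_counts

# Example: bladder unshielded vs. shielded (illustrative numbers only).
# Shielding the bladder removes its counts from the total, raising the ratio.
unshielded = kidney_intake_rate(lt_counts=34_000, rt_counts=37_000, total_counts=100_000)
shielded = kidney_intake_rate(lt_counts=34_000, rt_counts=37_000, total_counts=89_000)

print(round(unshielded, 1))  # 71.0
print(round(shielded, 1))    # 79.8
```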


Status and Prospect of Herbicide Resistant Weeds in Rice Field of Korea (한국 논에서 제초제 저항성잡초 발생 현황과 전망)

  • Park, Tae-Seon;Lee, In-Yong;Seong, Ki-Yeong;Cho, Hyeon-Suk;Park, Hong-Kyu;Ko, Jae-Kwon;Kang, Ui-Gum
    • Korean Journal of Weed Science
    • /
    • v.31 no.2
    • /
    • pp.119-133
    • /
    • 2011
  • As of 2010, sulfonylurea (SU)-resistant weeds include seven annual weeds, such as Monochoria vaginalis, Scirpus juncoides and Cyperus difformis, and three perennial weeds, Scirpus planiculmis, Sagittaria pigmaea and Eleocharis acicularis, since the identification of Monochoria korsakowii in a reclaimed rice field in 1998. Echinochloa oryzoides resistant to acetyl-CoA carboxylase (ACCase) and acetolactate synthase (ALS) inhibitors was confirmed in wet-direct-seeded rice fields of the southern provinces of Korea in 2009. At the beginning of the occurrence of SU-resistant weeds, M. vaginalis, S. juncoides and C. difformis each spread rapidly in separate fields; over time, however, these resistant weeds have come to occur simultaneously in the same field. The resistant biotypes by weed species demonstrated about 10- to 1,000-fold resistance, based on $GR_{50}$ (50% growth reduction) values of the SU herbicides tested. The resistant biotype of E. oryzoides was about 14, 8, and 11 times more resistant to cyhalofop-butyl, pyriminobac-methyl, and penoxsulam, respectively, than the susceptible biotype, based on $GR_{50}$ values. In the history of paddy herbicides in Korea, the introduction of SU herbicides, including bensulfuron-methyl and pyrazosulfuron-ethyl, which control many troublesome weeds at low use rates and provide excellent crop safety, gave farmers and many in the herbicide business a refreshing jolt. The products and treated area of SU-containing herbicides increased rapidly, accounting for about 69% and 96%, respectively, in Korea, and by 2003 the top ten herbicides by treated area were all SU-containing herbicides. The concentrated and successive treatment of ACCase and ALS inhibitors to control barnyardgrass in direct-seeded rice led to the resistance of E. oryzoides. SU herbicides such as pyrazosulfuron-ethyl and imazosulfuron, which are effective against barnyardgrass, may also be bound up with the resistance of E. oryzoides. ALS isolated from the resistant biotype of M. korsakowii was less sensitive to the SU herbicides tested than that of the susceptible biotype. The herbicide concentration required for 50% inhibition of ALS activity ($I_{50}$) in the SU-resistant M. korsakowii was 14- to 76-fold higher than in the susceptible biotype. No differences were observed in the rates of [$^{14}C$]bensulfuron uptake and translocation. ALS genes from SU-resistant and susceptible M. vaginalis biotypes revealed a single amino acid substitution of proline (CCT), at the 197th position based on the M. korsakowii ALS sequence numbering, to serine (TCT) in conserved domain A of the gene. Carfentrazone-ethyl and pyrazolate were used mainly to control SU-resistant M. vaginalis until 2006, the early period, in Korea. However, alternative herbicides such as benzobicyclon, which can control several resistant weeds simultaneously, have been developed and are widely used because several resistant weeds now occur together in the same field. The top ten herbicides by treated area in Korea are now occupied by three-way mixture products that include herbicides with alternative modes of action against the resistant weeds. Mefenacet, fentrazamide and cafenstrole showed excellent control of the ACCase- and ALS-inhibitor-resistant biotype when applied within the 2-leaf stage.
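
The fold-resistance figures above (e.g., about 14 times for cyhalofop-butyl) are ratios of $GR_{50}$ doses; a minimal sketch, with illustrative doses rather than the study's data:

```python
# Sketch of the resistance index (R/S ratio) used in the abstract: the GR50
# dose of the resistant biotype divided by that of the susceptible biotype.
# The dose values below are illustrative, not data from the study.

def resistance_index(gr50_resistant: float, gr50_susceptible: float) -> float:
    """Fold-resistance based on the dose giving 50% growth reduction (GR50)."""
    return gr50_resistant / gr50_susceptible

# Example: a biotype needing 140 g ai/ha vs. a susceptible one needing 10 g ai/ha.
print(resistance_index(140.0, 10.0))  # 14.0  (i.e., ~14-fold resistant)
```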

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web, but still end up with too much information. To address this problem of information overload, ranking algorithms have been applied in various domains. As more information becomes available, ranking search results effectively and efficiently will become ever more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated from the number of keywords found in the page, which is subject to manipulation. In contrast, link-analysis methods such as Google's PageRank capitalize on information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages; the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking has played an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. On the other hand, since the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links, similar to the Web graph. As a result, the link-structure-based ranking method seems highly applicable to ranking Semantic Web resources. 
However, the information space of the Semantic Web is more complex than that of WWW. For instance, WWW can be considered as one huge class, i.e., a collection of Web pages, which has only a recursive property, i.e., a 'refers to' property corresponding to the hyperlinks. However, the Semantic Web encompasses various kinds of classes and properties, and consequently, ranking methods used in WWW should be modified to reflect the complexity of the information space in the Semantic Web. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to apply their algorithm to rank the Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score becomes the object of many RDF triples, and a node with a high subjectivity score becomes the subject of many RDF triples. They developed several kinds of Semantic Web systems in order to validate their technique and showed some experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, there remained some limitations which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes should be high, or overall resources should be described in detail, to a certain degree for their algorithm to properly work. 
Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important but densely connected score higher than pages that are more important but sparsely connected, remains problematic. Third, a resource may receive a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm which can solve the problems identified in the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by the previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world, and will turn out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect, and can further shed light on other limitations posed by the previous research. In addition, we propose two ways to incorporate data-type properties, which have previously not been employed even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research; this analysis enabled us to simplify the calculation procedure. 
Finally, we summarize our experimental results and discuss further research issues.
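
The link-analysis idea behind PageRank, as summarized above, can be sketched with a short power iteration; the toy graph and the common 0.85 damping factor below are illustrative assumptions, not part of the paper:

```python
# Minimal power-iteration PageRank over a toy directed graph, illustrating the
# link-structure ranking idea: a node referred to by many (important) nodes
# ends up with a high score.

def pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping each node to the list of nodes it points to."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, outgoing in links.items():
            if not outgoing:  # dangling node: spread its rank evenly
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[node] / len(outgoing)
        rank = new_rank
    return rank

# 'c' is referred to by both other pages, so it ends up most important.
ranks = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
print(max(ranks, key=ranks.get))  # c
```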

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • After the emergence of the Internet, social media, with its highly interactive Web 2.0 applications, has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content involving their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and the content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and insufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conduct opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach to opinion mining consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose target social media. Each target medium requires different ways for analysts to gain access: open APIs, search tools, DB-to-DB interfaces, content purchasing, and so on. The second phase is pre-processing, which generates useful materials for meaningful analysis. 
If we do not remove garbage data, results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase where the cleansed social media content set is to be analyzed. The qualified data set includes not only user-generated contents but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorite, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trends analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of analysis results. The major focus and purpose of this phase are to explain results of analysis and help users to comprehend its meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the leading company, NS Food, with 66.5% of market share; the firm has kept No. 1 position in the Korean "Ramen" business for several decades. We collected a total of 11,869 pieces of contents including blogs, forum contents and news articles. After collecting social media content data, we generated instant noodle business specific language resources for data manipulation and analysis using natural language processing. In addition, we tried to classify contents in more detail categories such as marketing features, environment, reputation, etc. 
In these phases, we used free software such as the tm, KoNLP, ggplot2 and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-colored examples built with open-source R packages. Business actors can detect at a swift glance which areas are weak, strong, positive, negative, quiet or loud. A heat map can show the movement of sentiment or volume across categories and time as a matrix whose color density indicates intensity over time periods. A valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation through a hierarchical structure, since a tree map can present buzz volume and sentiment in a single visualized result for a certain period. This case study offers real-world business insights from market sensing, demonstrating to practical-minded business users how they can use these types of results for timely decision making in response to on-going changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
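
The sentiment-polarity step of the analyzing phase can be sketched as a simple lexicon count; the tiny English lexicon and sample posts below are illustrative stand-ins for the study's Korean-language resources:

```python
# Minimal lexicon-based sentiment-polarity sketch of the kind the opinion
# mining phase describes. The lexicon and posts are invented examples,
# not the study's instant-noodle-specific language resources.

POSITIVE = {"delicious", "great", "love"}
NEGATIVE = {"bland", "bad", "expensive"}

def polarity(text: str) -> int:
    """+1 positive, -1 negative, 0 neutral, by counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

posts = [
    "this ramen is delicious and I love it",
    "too expensive and the broth is bland",
]
print([polarity(p) for p in posts])  # [1, -1]
```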

Studies on the Assumption of the Locations and Formational Characteristics in Yigye-gugok, Mt. Bukhansan (북한산 이계구곡(耳溪九曲)의 위치비정과 집경(集景) 특성)

  • Jung, Woo-Jin;Rho, Jae-Hyun;Lee, Hee-Young
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.35 no.3
    • /
    • pp.41-66
    • /
    • 2017
  • The purpose of this research is to empirically trace the junctures of Yigye-gugok, managed by Gwan-am Hong Gyeong-mo, a grandson of Yigye Hong Yang-ho who originally designed it, while reviewing the features of the gugok's forms and patterns. The results of the research are as follows. 1. Ui-dong was part of the domain of the capital during the Chosun dynasty and remains within the city of Seoul as an administrative zone; likewise, Yigye-gugok carries special meaning as the one and only gugok there. Starting with Mangyeong Waterfall as the $1^{st}$ gok, Yigye continues through the $2^{nd}$ gok of Jeokchwibyeong Rock, the $3^{rd}$ gok of Chanunbong Peak, the $4^{th}$ gok of Jinuigang Rock, the $5^{th}$ gok of Okkyeongdae Rock, the $6^{th}$ gok of Wolyeongdam Pond, the $7^{th}$ gok of Tagyeongam Rock, the $8^{th}$ gok of Myeongoktan Stream, and the $9^{th}$ gok of Jaeganjeong Pavilion. Of these, Mangyeong Waterfall, Chanunbong Peak, and Okkyeongdae Rock are distinct in their locations as much as in their features, while estimated locations were identified for Jinuigang Rock, Wolyeongdam Pond, Myeongoktan Stream, and Jaeganjeong Pavilion. However, Jeokchwibyeong Rock and Tagyeongam Rock each presented multiple candidate locations in close proximity that resemble the documentary literature, so geography, scenery, and sighted objects were considered in judging the most likely location. Through these endeavors, it was possible to identify the route and structure of the gugok over the total distance of 2.1 km running from the $1^{st}$ gok to the $9^{th}$ gok, which is close to Gwanam's description of the gugok as 5 ri (里), or approximately 1.96 km. 2. Set toward the end of the $18^{th}$ century, Yigye-gugok originated from a series of works shaping the space of Hong Yang-ho's tomb into a space for the family. 
Compared with other gugok, Yigye-gugok shows numerous differences from the more general format, such as joining at the $8^{th}$ gok while being laid out from the upper reaches of the water toward the lower. This gives rise to the interpretation that Yigye-gugok was positioned to separate the family's domain from those of the other families in power, thereby claiming Ui-dong. Yet the pattern of possession of the space suggests that placing the $8^{th}$ gok above Mangyeong Waterfall, which represents Ui-dong, was a consequence of centrifugal space-creation efforts. 3. While writings and poetic works about Yigye-gugok were produced in large quantities, and its creators and managers appear to have intended gugok paintings (gugok-do) and letters carved on the rocks among other things, visual media remain greatly lacking in the same respect. The 'Yigye-gugok Daejacheop' specimens of handwriting offer traces of Gwanam's attempts to engrave gakja at the foot of Yigye-gugok. This research was able to ascertain that the 'Yigye-gugok Daejacheop' specimens of handwriting, renowned for Song Shi-yeol's penmanship, were a product of Hong Yang-ho's collections maintained under the auspices of the National Central Museum.

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate a way to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The size of the image in which the graph is drawn is $40(pixels){\times}40(pixels)$, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices. Each image is converted into a combination of three matrices in order to express the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets. We used 80% of the total dataset as the training dataset, and the remaining 20% as the validation dataset. Finally, CNN classifiers are trained using the images of the training dataset. 
Regarding the parameters of CNN-FG, we adopted two convolution filters ($5{\times}5{\times}6$ and $5{\times}5{\times}9$) in the convolution layer. In the pooling layer, a $2{\times}2$ max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and that for the output layer to the Softmax function. To validate our model, CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e. tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset using 80% of the total dataset (1,560 samples), and the validation dataset using the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), CCI (commodity channel index), and so on. To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. 
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
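
The layer sizes in an architecture like the one described can be checked with the usual convolution/pooling arithmetic; this sketch assumes 'valid' (no-padding), stride-1 convolutions and non-overlapping pooling, which the abstract does not state:

```python
# Spatial-size arithmetic for a small CNN on the 40x40-pixel graph images
# described above: two 5x5 convolutions, each followed by 2x2 max pooling.
# Padding/stride choices here are our assumptions, not the paper's.

def conv_out(size: int, kernel: int, stride: int = 1) -> int:
    """Spatial output size of a 'valid' (no-padding) convolution."""
    return (size - kernel) // stride + 1

def pool_out(size: int, window: int = 2) -> int:
    """Spatial output size of non-overlapping max pooling."""
    return size // window

size = 40                 # 40x40 pixel graph image
size = conv_out(size, 5)  # first 5x5 convolution  -> 36x36
size = pool_out(size)     # 2x2 max pooling        -> 18x18
size = conv_out(size, 5)  # second 5x5 convolution -> 14x14
size = pool_out(size)     # 2x2 max pooling        -> 7x7
print(size)  # 7
```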

Organizational Buying Behavior in an Interdependent World (상호의존세계중적조직구매행위(相互依存世界中的组织购买行为))

  • Wind, Yoram;Thomas, Robert J.
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.2
    • /
    • pp.110-122
    • /
    • 2010
  • The emergence of the field of organizational buying behavior in the mid-1960’s with the publication of Industrial Buying and Creative Marketing (1967) set the stage for a new paradigm of thinking about how business was conducted in markets other than those serving ultimate consumers. Whether it is "industrial marketing" or "business-to-business marketing" (B-to-B), organizational buying behavior remains the core differentiating characteristic of this domain of marketing. This paper explores the impact of several dynamic factors that have influenced how organizations relate to one another in a rapidly increasing interdependence, which in turn can impact organizational buying behavior. The paper also raises the question of whether or not the major conceptual models of organizational buying behavior in an interdependent world are still relevant to guide research and managerial thinking, in this dynamic business environment. The paper is structured to explore three questions related to organizational interdependencies: 1. What are the factors and trends driving the emergence of organizational interdependencies? 2. Will the major conceptual models of organizational buying behavior that have developed over the past half century be applicable in a world of interdependent organizations? 3. What are the implications of organizational interdependencies on the research and practice of organizational buying behavior? Consideration of the factors and trends driving organizational interdependencies revealed five critical drivers in the relationships among organizations that can impact their purchasing behavior: Accelerating Globalization, Flattening Networks of Organizations, Disrupting Value Chains, Intensifying Government Involvement, and Continuously Fragmenting Customer Needs. 
These five interlinked drivers of interdependency and their underlying technological advances can alter the relationships within and among organizations that buy products and services to remain competitive in their markets. Viewed in the context of a customer driven marketing strategy, these forces affect three levels of strategy development: (1) evolving customer needs, (2) the resulting product/service/solution offerings to meet these needs, and (3) the organization competencies and processes required to develop and implement the offerings to meet needs. The five drivers of interdependency among organizations do not necessarily operate independently in their impact on how organizations buy. They can interact with each other and become even more potent in their impact on organizational buying behavior. For example, accelerating globalization may influence the emergence of additional networks that further disrupt traditional value chain relationships, thereby changing how organizations purchase products and services. Increased government involvement in business operations in one country may increase costs of doing business and therefore drive firms to seek low cost sources in emerging markets in other countries. This can reduce employment opportunities in one country and increase them in another, further accelerating the pace of globalization. The second major question in the paper is what impact these drivers of interdependencies have had on the core conceptual models of organizational buying behavior. Consider the three enduring conceptual models developed in the Industrial Buying and Creative Marketing and Organizational Buying Behavior books: the organizational buying process, the buying center, and the buying situation. A review of these core models of organizational buying behavior, as originally conceptualized, shows they are still valid and not likely to change with the increasingly intense drivers of interdependency among organizations. 
What will change however is the way in which buyers and sellers interact under conditions of interdependency. For example, increased interdependencies can lead to increased opportunities for collaboration as well as conflict between buying and selling organizations, thereby changing aspects of the buying process. In addition, the importance of communication processes between and among organizations will increase as the role of trust becomes an important criterion for a successful buying relationship. The third question in the paper explored consequences and implications of these interdependencies on organizational buying behavior for practice and research. The following are considered in the paper: the need to increase understanding of network influences on organizational buying behavior, the need to increase understanding of the role of trust and value among organizational participants, the need to improve understanding of how to manage organizational buying in networked environments, the need to increase understanding of customer needs in the value network, and the need to increase understanding of the impact of emerging new business models on organizational buying behavior. In many ways, these needs deriving from increased organizational interdependencies are an extension of the conceptual tradition in organizational buying behavior. In 1977, Nicosia and Wind suggested a focus on inter-organizational over intra-organizational perspectives, a trend that has received considerable momentum since the 1990's. Likewise for managers to survive in an increasingly interdependent world, they will need to better understand the complexities of how organizations relate to one another. The transition from an inter-organizational to an interdependent perspective has begun, and must continue so as to develop an improved understanding of these important relationships. 
A shift to such an interdependent network perspective may require many academicians and practitioners to fundamentally challenge and change the mental models underlying their business and organizational buying behavior models. The focus can no longer be only on the dyadic relations of the buying organization and the selling organization but should involve all the related members of the network, including the network of customers, developers, and other suppliers and intermediaries. Consider, for example, the numerous partner networks initiated by SAP, which involve over 9,000 companies and over a million participants. This evolving, complex, and uncertain reality of interdependencies and dynamic networks requires reconsideration of how purchase decisions are made; as a result, they should be the focus of the next phase of research and theory building among academics and the focus of practical models and experiments undertaken by practitioners. The hope is that such research will take place, not in the isolation of the ivory tower, nor in the confines of the business world, but rather through increased collaboration of academics and practitioners. In conclusion, the consideration of increased interdependence among organizations revealed the continued relevance of the fundamental models of organizational buying behavior. However, to increase the value of these models in an interdependent world, academics and practitioners should improve their understanding of (1) network influences, (2) how to better manage these influences, (3) the role of trust and value among organizational participants, (4) the evolution of customer needs in the value network, and (5) the impact of emerging new business models on organizational buying behavior. To accomplish this, greater collaboration between industry and academia is needed to advance our understanding of organizational buying behavior in an interdependent world.

Seeking a Better Place: Sustainability in the CPG Industry (추심경호적지방(追寻更好的地方): 유포장적소비품적산업적가지속발전(有包装的消费品的产业的可持续发展))

  • Rapert, Molly Inhofe;Newman, Christopher;Park, Seong-Yeon;Lee, Eun-Mi
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.2
    • /
    • pp.199-207
    • /
    • 2010
  • "For us, there is virtually no distinction between being a responsible citizen and a successful business... they are one and the same for Wal-Mart today." ~ Lee Scott, Wal-Mart CEO, after the 2005 Katrina disaster; cited in Green to Gold (Esty and Winston 2006). Lee Scott's statement signaled a new era in sustainability as manufacturers and retailers around the globe watched the world's largest mass merchandiser confirm its intentions with respect to sustainability. For decades, the environmental movement has grown, slowly bleeding over into the corporate world. Companies have been born, products have been created, academic journals have been launched, and government initiatives have been undertaken - all in the pursuit of sustainability (Peattie and Crane 2005). While progress has been admittedly slower than some may desire, the emergence and entrance of environmentally concerned mass merchandisers has done much to help with sustainable efforts. To better understand this movement, we incorporate the perspectives of both executives and consumers involved in the consumer packaged goods (CPG) industry. This research relies on three underlying themes: (1) conceptual and anecdotal evidence suggests that companies undertake sustainability initiatives for a plethora of reasons, (2) the number of sustainability initiatives continues to increase in the consumer packaged goods industries, and (3) it is therefore necessary to explore the role that sustainability plays in the minds of consumers. In light of these themes, surveys were administered to and completed by 143 college students and 101 business executives to assess a number of variables regarding sustainability, including willingness-to-pay, behavioral intentions, attitudes, and preferences. 
Survey results indicate that the top three reasons why executives believe sustainability to be important are (1) the opportunity for profitability, (2) the fulfillment of an obligation to the environment, and (3) a responsibility to customers and shareholders. College students identified the top three reasons as (1) a responsibility to the environment, (2) an indebtedness to future generations, and (3) effective management of resources. While the rationale for supporting sustainability efforts differed between college students and executives, the two groups reported similar responses for the majority of the remaining sustainability issues. Furthermore, when we asked consumers to assess the importance of six key issues (health care, economy, education, crime, government spending, and environment) previously identified as important to consumers by a Gallup Poll, protecting the environment ranked only fourth out of the six (Carlson 2005). While all six of these issues were identified as important, the top three that emerged as most important were (1) improvements in education, (2) the economy, and (3) health care. As the pursuit and incorporation of sustainability continues to evolve, so too will the expected outcomes. New definitions of performance that reflect the social/business benefits as well as the lengthened implementation period are relevant and warranted (Ehrenfeld 2005; Hitchcock and Willard 2006). We identified three primary categories of outcomes based on a literature review of both anecdotal and conceptual expectations of sustainability: (1) improvements in constituent satisfaction, (2) differentiation opportunities, and (3) financial rewards. Within each of these categories, several specific outcomes were identified, resulting in eleven distinct outcomes arising from sustainability initiatives. 
Our survey results indicate that the top five most likely outcomes for companies that pursue sustainability are: (1) green consumers will be more satisfied, (2) company image will be better, (3) corporate responsibility will be enhanced, (4) energy costs will be reduced, and (5) products will be more innovative. Additionally, to better understand the interesting intersection between the environmental "identity" of a consumer and the willingness to manifest that identity with marketplace purchases, we extended prior research developed by Experian Research (2008). Accordingly, respondents were categorized as one of four types of green consumers (Behavioral Greens, Think Greens, Potential Greens, or True Browns) to garner a better understanding of the green consumer and to assist with a more effective interpretation of results. We assessed these consumers' willingness to engage in eco-friendly behavior by evaluating three options: (1) shopping at retailers that support environmental initiatives, (2) paying more for products that protect the environment, and (3) paying higher taxes so the government can support environmental initiatives. Think Greens expressed the greatest willingness to change, followed by Behavioral Greens, Potential Greens, and True Browns. These differences were all significant at p < .01. Turning to further conclusions and implications, we have undertaken a descriptive study that seeks to enhance our understanding of the strategic domain of sustainability. Specifically, this research fills a gap in the literature by comparing and contrasting the sustainability views of business executives and consumers with specific regard to preferences, intentions, willingness-to-pay, behavior, and attitudes. For practitioners, much can be gained from a strategic standpoint. In addition to the many results already reported, respondents also reported being willing to pay more for products that protect the environment. 
Other specific results indicate that female respondents consistently communicate a stronger willingness than males to pay more for these products and to shop at eco-friendly retailers. With this additional information, practitioners have a more specific market segment to target when communicating their sustainability efforts. While this research is only an initial step toward understanding similarities and differences between practitioners and consumers regarding sustainability, it presents original findings that contribute to both practice and research. Future research should be directed toward examining other variables affecting this relationship, as well as other specific industries.

A Hybrid Recommender System based on Collaborative Filtering with Selective Use of Overall and Multicriteria Ratings (종합 평점과 다기준 평점을 선택적으로 활용하는 협업필터링 기반 하이브리드 추천 시스템)

  • Ku, Min Jung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.85-109
    • /
    • 2018
  • A recommender system recommends the items a customer is expected to purchase in the future based on his or her previous purchase behavior. It has served as a tool for realizing one-to-one personalization for e-commerce companies. Traditional recommender systems, especially those based on collaborative filtering (CF), the most popular recommendation algorithm in both academia and industry, are designed to generate the recommendation list using a single criterion: the 'overall rating'. However, this approach has critical limitations in understanding customers' preferences in detail. Recently, to mitigate these limitations, some leading e-commerce companies have begun to collect feedback from their customers in the form of 'multicriteria ratings'. Multicriteria ratings enable companies to understand their customers' preferences from multidimensional viewpoints. Moreover, they are easy to handle and analyze because they are quantitative. However, recommendation using multicriteria ratings also has a limitation: it may omit detailed information on a user's preference because, in most cases, it considers only three to five predetermined criteria. Against this background, this study proposes a novel hybrid recommender system that selectively uses the results from 'traditional CF' and 'CF using multicriteria ratings'. Our proposed system is based on the premise that some people have a holistic preference scheme, whereas others have a composite preference scheme. Thus, our system is designed to use traditional CF with the overall rating for users with holistic preferences, and CF with multicriteria ratings for users with composite preferences. To validate the usefulness of the proposed system, we applied it to a real-world dataset concerning the recommendation of POIs (points of interest). 
Personalized POI recommendation is receiving more attention as the popularity of location-based services such as Yelp and Foursquare increases. The dataset was collected from university students via a Web-based online survey system. Using the survey system, we collected the overall ratings as well as the ratings for each criterion for 48 POIs located near K University in Seoul, South Korea. The criteria include 'food or taste', 'price', and 'service or mood'. As a result, we obtained 2,878 valid ratings from 112 users. Among the 48 items, 38 items (80%) were used as the training dataset, and the remaining 10 items (20%) were used as the validation dataset. To examine the effectiveness of the proposed system (i.e., the hybrid selective model), we compared its performance to that of two comparison models: traditional CF and CF with multicriteria ratings. The performances of the recommender systems were evaluated using two metrics: average MAE (mean absolute error) and precision-in-top-N. Precision-in-top-N represents the percentage of truly high overall ratings among those that the model predicted would be the N most relevant items for each user. The experimental system was developed using Microsoft Visual Basic for Applications (VBA). The experimental results showed that our proposed system (avg. MAE = 0.584) outperformed traditional CF (avg. MAE = 0.591) as well as multicriteria CF (avg. MAE = 0.608). We also found that multicriteria CF performed worse than traditional CF on our dataset, which contradicts the results of most previous studies. This result supports the premise of our study that people have two different types of preference schemes: holistic and composite. Besides MAE, the proposed system outperformed all the comparison models in precision-in-top-3, precision-in-top-5, and precision-in-top-7. 
The results of the paired-samples t-test showed that, in terms of average MAE, our proposed system outperformed traditional CF at the 10% statistical significance level, and multicriteria CF at the 1% statistical significance level. The proposed system sheds light on how to understand and utilize users' preference schemes in the recommender systems domain.
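The selective scheme at the heart of the abstract above can be sketched in a few lines. This is an illustrative toy rather than the authors' implementation (which used VBA): it assumes a simple user-based CF with cosine similarity, assumes the composite prediction averages per-criterion CF predictions, and all function names and the holdout logic are hypothetical.

```python
import numpy as np

def cosine_sim(a, b, mask):
    # similarity computed over co-rated items only
    if not mask.any():
        return 0.0
    a, b = a[mask], b[mask]
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na > 0 and nb > 0 else 0.0

def predict(R, u, i):
    """User-based CF prediction of R[u, i]; unrated cells are np.nan."""
    num = den = 0.0
    for v in range(R.shape[0]):
        if v == u or np.isnan(R[v, i]):
            continue
        mask = ~np.isnan(R[u]) & ~np.isnan(R[v])
        s = cosine_sim(np.nan_to_num(R[u]), np.nan_to_num(R[v]), mask)
        num += s * R[v, i]
        den += abs(s)
    return num / den if den > 0 else np.nan

def select_scheme(R_overall, R_criteria, user, holdout):
    """Label a user 'holistic' or 'composite' by comparing holdout error
    of overall-rating CF vs. averaged multicriteria CF (an assumption)."""
    Ro = R_overall.copy()
    Ro[user, holdout] = np.nan            # hide held-out overall ratings
    Rcs = []
    for Rc in R_criteria:
        Rc = Rc.copy()
        Rc[user, holdout] = np.nan        # hide held-out criterion ratings
        Rcs.append(Rc)
    err_h = err_c = 0.0
    for i in holdout:
        truth = R_overall[user, i]
        err_h += abs(predict(Ro, user, i) - truth)
        composite = np.mean([predict(Rc, user, i) for Rc in Rcs])
        err_c += abs(composite - truth)
    return "holistic" if err_h <= err_c else "composite"
```

Users labeled "holistic" would then be served by traditional CF and users labeled "composite" by multicriteria CF, which is the selective hybrid idea the abstract describes.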
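The evaluation measures named in the abstract (average MAE, precision-in-top-N, and a paired-samples t statistic over per-user errors) can be computed as follows; a minimal sketch in which the relevance cutoff of 4.0 and all function names are my own assumptions, not details from the paper.

```python
import math
import numpy as np

def mae(y_true, y_pred):
    # mean absolute error over held-out ratings
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def precision_in_top_n(y_true, y_pred, n, threshold=4.0):
    """Share of truly high overall ratings (>= threshold, an assumed
    cutoff) among the n items the model ranks highest for a user."""
    top = np.argsort(y_pred)[::-1][:n]    # indices of n highest predictions
    return float(np.mean(np.asarray(y_true)[top] >= threshold))

def paired_t(errors_a, errors_b):
    """Paired-samples t statistic and degrees of freedom for two
    per-user error lists (e.g., per-user MAE of two models)."""
    d = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n), n - 1
```

The resulting t statistic is compared against the Student's t distribution with n - 1 degrees of freedom to obtain the significance levels the abstract reports.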

Optimization of Multiclass Support Vector Machines using Genetic Algorithms: Application to the Prediction of Corporate Credit Ratings (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine)', is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Likewise, the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we adopt the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates the phenomenon of biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results. 
In particular, the mutation operator prevents GA from falling into local optima, so the globally optimal or a near-optimal solution can be found. GA has been widely applied to search for optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we adopt GA as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, along with their credit ratings. Using various statistical methods, including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, i.e., the credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). Eighty percent of the data for each class was used for training, and the remaining 20 percent was used for validation. To overcome the small sample size, we applied five-fold cross-validation to our dataset. In order to examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM. For MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial GA package. The other comparative models were run using various statistical and AI packages, such as SPSS for Windows, NeuroShell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competing models. 
In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. However, the values of the finally selected kernel parameters were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
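A GA loop of the kind described, encoding the kernel parameters and a binary feature-inclusion mask in one chromosome, can be sketched as below. This is an illustrative toy, not the LIBSVM/Evolver implementation from the paper: the chromosome layout, parameter ranges, truncation selection, and the `fitness` callback (which in the real system would wrap cross-validated MSVM accuracy) are all assumptions.

```python
import random

N_FEATURES = 14   # candidate financial ratios, as in the abstract

def random_chromosome():
    # [log2(C), log2(gamma)] followed by a binary feature-inclusion mask
    return ([random.uniform(-5, 15), random.uniform(-15, 3)]
            + [random.randint(0, 1) for _ in range(N_FEATURES)])

def crossover(p1, p2):
    # single-point crossover
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(ch, rate=0.1):
    ch = ch[:]
    for i in range(len(ch)):
        if random.random() < rate:
            if i < 2:
                ch[i] += random.gauss(0, 1)   # perturb a kernel parameter
            else:
                ch[i] = 1 - ch[i]             # flip a feature bit
    return ch

def evolve(fitness, pop_size=20, generations=30):
    # truncation selection: keep the best half, refill by crossover + mutation
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

Because the elite survive each generation, the best fitness never decreases, while mutation keeps injecting diversity to escape local optima, which is the role the abstract attributes to the mutation operator.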
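The McNemar test used for the model comparisons above works on the discordant pairs of two classifiers evaluated on the same cases; a minimal sketch with the standard continuity correction, where the function and argument names are mine:

```python
def mcnemar_statistic(correct_a, correct_b):
    """McNemar chi-square statistic (1 df) with continuity correction.
    correct_a / correct_b: per-case booleans, True = classified correctly."""
    b = sum(1 for x, y in zip(correct_a, correct_b) if x and not y)
    c = sum(1 for x, y in zip(correct_a, correct_b) if y and not x)
    if b + c == 0:
        return 0.0                        # no discordant pairs
    return (abs(b - c) - 1) ** 2 / (b + c)
```

The statistic is compared against chi-square critical values with one degree of freedom: roughly 3.841 for the 5% level and 6.635 for the 1% level reported in the abstract.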