• Title/Summary/Keyword: smart technology

A Study on the Choice of Export Payment Types by Applying the Characteristics of the New Trade & Logistics Environment (신(新)무역물류환경의 특성을 적용한 수출대금 결제유형 선택연구)

  • Chang-bong Kim;Dong-jun Lee
    • Korea Trade Review
    • /
    • v.48 no.4
    • /
    • pp.303-320
    • /
    • 2023
  • Recently, import and export companies have been using T/T remittance and surrender B/L more frequently than L/C when selecting trade payment methods. The new trade and logistics environment is flourishing in the era of the Fourth Industrial Revolution (4IR), and document-based trade transactions are being digitalized as electronic bills of lading and smart contracts are developed. The purpose of this study is to verify whether exporters choose export payment types based on negotiating factors, and to discuss how the characteristics of the new trade and logistics environment apply to that choice. Data for the analysis were collected through surveys, distributed via direct company visits, e-mail, fax, and online questionnaires between February 1, 2023, and April 30, 2023. Of the 2,000 questionnaires distributed, 447 were collected, and 336 were used in the final analysis after excluding 111 deemed unsuitable for the purpose of the study. The results are as follows. First, among the negotiating factors, exporters' product differentiation did not significantly affect the selection of export payment types. Second, the greater the purchasing advantage perceived by exporters, the higher the likelihood of using the post-shipment transfer method. Beyond these results, the study suggests that exporters should consider adopting new payment methods, such as blockchain-based bills of lading and trade finance platforms, to adapt to the evolving trade and logistics environment, and should continue to follow initiatives for digitizing trade documents in response to the challenges posed by paper bills of lading. Future studies should address the lack of social awareness in Korea by building on advanced research conducted abroad.
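
The abstract does not name the statistical model, but relating negotiating factors to a chosen payment type is commonly done with a multinomial logit. The sketch below is a hypothetical illustration of that setup, not the authors' analysis: the predictors, their Likert coding, and the outcome labels are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 336  # number of usable responses reported in the abstract

# Assumed predictors: Likert-scale scores for product differentiation and
# perceived purchasing advantage (both names are illustrative only).
X = rng.integers(1, 8, size=(n, 2)).astype(float)
# Assumed outcome: 0 = L/C, 1 = T/T remittance, 2 = post-shipment transfer.
y = rng.integers(0, 3, size=n)

# lbfgs handles multinomial targets natively; coefficients show how each
# negotiating factor shifts the odds of each payment type.
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.coef_)  # one coefficient row per payment type
```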

Analysis of domestic water usage patterns in Chungcheong using historical data of domestic water usage and climate variables (생활용수 실적자료와 기후 변수를 활용한 충청권역 생활용수 이용량 패턴 분석)

  • Kim, Min Ji;Park, Sung Min;Lee, Kyungju;So, Byung-Jin;Kim, Tae-Woong
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.1
    • /
    • pp.1-8
    • /
    • 2024
  • Persistent droughts due to climate change will intensify water shortage problems in Korea. According to the 1st National Water Management Plan, the shortage of domestic and industrial water is projected to be 0.07 billion m3/year under a 50-year drought event. Long-term prediction of water demand is essential for responding effectively to water shortages. Unlike industrial water, whose monthly usage is relatively constant, domestic water is analyzed on a monthly basis because of its clear monthly usage patterns. We analyzed monthly water usage patterns using usage data from 2017 to 2021 in Chungcheong, South Korea. The monthly water usage rate was calculated by dividing monthly water usage by annual water usage, and the water distribution rate was calculated by considering the correlations between the usage rate and climate variables. Among the methods tested, the division method, which divides the monthly water usage rate by the monthly average temperature, yielded the smallest absolute error. Using this method, we calculated the water distribution rates for the Chungcheong region and then predicted future water usage rates by multiplying the average temperature of the SSP5-8.5 scenario by the water distribution rate. As a result, the average maximum water usage rate increased from 1.16 to 1.29, the average minimum water usage rate decreased from 0.86 to 0.84, the first quartile decreased from 0.95 to 0.93, and the third quartile increased from 1.04 to 1.06. The variability of monthly water usage rates is therefore expected to increase in the future.
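
A minimal sketch of the rate calculations described above. The 12-month numbers are invented, and normalizing monthly usage by the monthly mean (annual usage / 12) is an interpretation chosen so the rates center on 1.0, matching the quartiles reported in the abstract.

```python
import numpy as np

# Illustrative 12-month series (not the study's data).
monthly_usage = np.array([95., 90., 98., 100., 105., 110.,
                          120., 118., 108., 100., 96., 94.])   # e.g. 1000 m3
monthly_temp = np.array([1., 3., 6., 13., 18., 23.,
                         26., 27., 21., 14., 7., 2.])          # deg C

# Monthly usage rate: monthly usage normalized by the monthly mean
# (annual usage / 12), so a rate of 1.0 is an average month.
usage_rate = monthly_usage / (monthly_usage.sum() / 12)

# "Division method": distribution rate = usage rate / monthly mean temperature.
dist_rate = usage_rate / monthly_temp

# Future usage rate under a scenario: scenario temperature x distribution rate.
ssp585_temp = monthly_temp + 2.0  # assumed uniform warming, illustrative only
future_rate = ssp585_temp * dist_rate
print(future_rate.round(3))
```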

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, growing demand for big data analysis has driven vigorous development of related technologies and tools. The development of IT and the rising penetration rate of smart devices are producing large amounts of data, data analysis technology is rapidly becoming popular, and attempts to gain insights through data analysis continue to increase; big data analysis will therefore become more important across industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to those who request it. However, growing interest in big data analysis has spurred computer programming education and the development of many analysis programs, so the entry barriers are gradually lowering, the technology is spreading, and analysis is increasingly expected to be performed by the demanders themselves. Along with this, interest in unstructured data, especially text data, continues to grow. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are used in many fields. Text mining is a concept that embraces various theories and techniques for text analysis, and among the many text mining techniques used for research, topic modeling is one of the most widely applied and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters; it is regarded as very useful because it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the whole collection must be analyzed at once to identify the topic of each document. This makes analysis slow when topic modeling is applied to many documents and creates a scalability problem: processing time increases sharply with the number of analysis objects, which is particularly noticeable when documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied: a large set of documents is divided into sub-units, and topic modeling is repeated on each unit. This enables topic modeling on large document sets with limited system resources, improves processing speed, and can significantly reduce analysis time and cost, because documents can be analyzed in each location without first being combined. Despite these advantages, the approach has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of such a methodology must be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from the global topics needs to be measured.
Because of these difficulties, this approach has not been studied sufficiently compared with other topic modeling research. In this paper, we propose a topic modeling approach that addresses the two problems above. First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. An additional experiment confirms that the proposed methodology can produce results similar to topic modeling on the entire collection, and we also propose a reasonable method for comparing the results of the two approaches.
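
A minimal sketch of the divide-and-conquer idea, assuming LDA as the topic model (the abstract does not commit to a specific algorithm): fit local models on sub-clusters, build the reduced global set (RGS) from delegate documents, fit a model on the RGS, and map local topics to RGS topics. The toy corpus, the delegate selection rule, and cosine similarity as the mapping criterion are illustrative assumptions.

```python
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Tiny illustrative corpus (token lists); the paper used 24,000 news articles.
docs = [["economy", "market", "stock"], ["stock", "trade", "market"],
        ["soccer", "goal", "league"], ["league", "match", "goal"],
        ["election", "vote", "party"], ["party", "policy", "vote"]]
dictionary = Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

def fit_lda(corpus, k):
    model = LdaModel(corpus, num_topics=k, id2word=dictionary, random_state=0)
    return model, model.get_topics()  # topic-word matrix, shape (k, vocab)

# 1) Divide the global set into local sets and fit a model on each.
local_sets = [bow[:3], bow[3:]]
local_models = [fit_lda(c, 2) for c in local_sets]

# 2) Build the reduced global set (RGS) from delegate documents:
#    here, the document most strongly associated with each local topic.
def topic_prob(model, doc, t):
    return dict(model.get_document_topics(doc, minimum_probability=0.0)).get(t, 0.0)

rgs = []
for corpus, (model, _) in zip(local_sets, local_models):
    for t in range(model.num_topics):
        rgs.append(max(corpus, key=lambda d: topic_prob(model, d, t)))

# 3) Fit a model on the RGS and map each local topic to its most similar
#    RGS topic by cosine similarity of topic-word distributions.
rgs_model, rgs_topics = fit_lda(rgs, 3)
for _, topics in local_models:
    sims = (topics @ rgs_topics.T) / (
        np.linalg.norm(topics, axis=1, keepdims=True)
        * np.linalg.norm(rgs_topics, axis=1))
    print("local -> RGS topic mapping:", sims.argmax(axis=1))
```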

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for housing computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in one element of the facility, it may affect not only the relevant equipment but also other connected equipment, causing enormous damage. IT facility failures in particular are irregular because of interdependence between devices, which makes their causes hard to determine. Previous studies predicting failures in data centers treated each server as a single, isolated state, without assuming that devices interact. In this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), with the focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since these can be prevented in the early stages of data center construction, various solutions already exist. By contrast, the causes of failures occurring inside servers are difficult to determine, and adequate prevention has not yet been achieved, precisely because server failures do not occur in isolation: one server's failure can cause failures in other servers, or be triggered by them. In other words, while existing studies analyzed failures under the assumption that servers do not affect one another, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment was used. Four major failure types are considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device were sorted in chronological order, and when another device failed within 5 minutes of a given failure, the two failures were defined as simultaneous. After constructing sequences of devices that failed together, the 5 devices that most frequently failed simultaneously within the sequences were selected, and their simultaneous failures were confirmed through visualization. Since the server resource information collected for failure analysis is time series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that predicts the next state from previous states. In addition, unlike the single-server case, we used the Hierarchical Attention Network structure, considering that the degree of involvement in a complex failure differs by server; this method improves prediction accuracy by giving greater weight to servers with a larger impact on the failure. The study began by defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data was treated once as a single-server state and once as a multi-server state, and the two settings were compared. The second experiment improved prediction accuracy in the complex-server case by optimizing a separate threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted to have no failure even though failures actually occurred; under the multi-server assumption, all five servers were correctly predicted to have failed. These results support the hypothesis that servers affect one another, and confirm that prediction performance is superior when multiple interacting servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network, which assumes that each server's influence differs, improved the analysis, and applying a different threshold per server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and presents a model that predicts failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
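
A minimal PyTorch sketch of the modeling idea: an LSTM encodes each server's metric sequence, and an attention layer weights servers by their estimated influence before the failure prediction. The 5-server setting mirrors the abstract; layer sizes, feature counts, and all other details are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ServerAttentionNet(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # one attention score per server
        self.head = nn.Linear(hidden, 1)   # binary failure prediction

    def forward(self, x):                  # x: (batch, servers, time, features)
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)            # last hidden state per server
        w = torch.softmax(self.attn(h), dim=1) # attention weight per server
        ctx = (w * h).sum(dim=1)               # weighted server context
        return torch.sigmoid(self.head(ctx)).squeeze(-1)

# Illustrative forward pass: 8 samples, 5 servers, 30 time steps, 10 metrics.
model = ServerAttentionNet(n_features=10)
print(model(torch.randn(8, 5, 30, 10)).shape)  # torch.Size([8])
```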

The Ontology Based, the Movie Contents Recommendation Scheme, Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.25-44
    • /
    • 2013
  • Accessing movie content has become easier with the advent of smart TV, IPTV, and web services that can be used to search for and watch movies, and users increasingly search for movies that match their preferences. However, since the amount of available movie content is so large, users need considerable effort and time to find it. Hence, much research has addressed recommending personalized items by analyzing and clustering user preferences and user profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only relations between movie metadata but also relations between metadata and user profiles, and the relations between metadata express similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we chose as the main metadata the genre, actor/actress, keywords, and synopsis, since these influence users' choice of movies; the user model contains the user's demographic information and the relations between the user and movie metadata. In our design, the movie ontology model consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For the knowledge base, we entered individual data for 14,374 movies under each concept of the content ontology model. This movie metadata knowledge base is used to find movies related to the metadata the user is interested in, and it can find similar movies through the relations between concepts. We also propose an architecture for movie recommendation consisting of four components. The first component searches for candidate movies based on the user's demographic information: we group users according to demographic information so that movies can be recommended per group, define rules for assigning users to groups, and generate the query used to search for candidate movies. The second component searches for candidate movies based on user preferences: when choosing a movie, users consider metadata such as genre, actor/actress, synopsis, and keywords, so users input their preferences and the system searches for movies accordingly. Unlike existing movie recommendation systems, the proposed system can find similar movies through the relations between concepts. Each metadata item of the recommended candidate movies carries a weight that is later used to decide the recommendation order. The third component merges the results of the first two components: we calculate a weight for each movie from the weight values of its metadata and sort the movies by this weight. The fourth component analyzes the result of the third component, determines the level of contribution of each metadata type, and applies these contribution weights to the metadata; the result of this step is the final recommendation presented to users. We tested the usability of the proposed scheme with a web application implemented for the experiment using JSP, JavaScript, and the Protégé API.
In the experiment, we collected results from 20 men and women aged 20 to 29, using 7,418 movies with a rating of at least 7.0. We provided each user with Top-5, Top-10, and Top-20 recommended movies, and users chose the movies they found interesting. On average, users chose 2.1 interesting movies in the Top-5, 3.35 in the Top-10, and 6.35 in the Top-20, which is better than the results yielded by using each metadata type alone.
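
A minimal sketch of the merge-and-rank steps (third and fourth components): each candidate movie carries per-metadata weights from the earlier searches, a contribution weight is applied per metadata type, and the final recommendation order sorts by the weighted sum. All values here are invented placeholders for what the ontology queries would return.

```python
# Per-metadata weights for each candidate movie (illustrative values only).
candidates = {
    "Movie A": {"genre": 0.8, "actor": 0.5, "keyword": 0.3},
    "Movie B": {"genre": 0.6, "actor": 0.9, "keyword": 0.1},
    "Movie C": {"genre": 0.4, "actor": 0.2, "keyword": 0.9},
}
# Contribution weight per metadata type (fourth component); assumed values.
contribution = {"genre": 0.5, "actor": 0.3, "keyword": 0.2}

def score(meta):
    # Weighted sum of metadata weights, scaled by each type's contribution.
    return sum(contribution[k] * v for k, v in meta.items())

ranking = sorted(candidates, key=lambda m: score(candidates[m]), reverse=True)
print(ranking)  # recommendation order, highest weighted score first
```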

Effects of Customers' Relationship Networks on Organizational Performance: Focusing on Facebook Fan Page (고객 간 관계 네트워크가 조직성과에 미치는 영향: 페이스북 기업 팬페이지를 중심으로)

  • Jeon, Su-Hyeon;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.57-79
    • /
    • 2016
  • The number of users of social network services (SNS), one of the major social media channels, keeps rising. In line with this trend, more companies are taking an interest in these networking platforms and investing in them. SNS has received much attention as a tool for spreading the messages a company wants to deliver to its customers and is recognized as an important channel for relationship marketing. The radically changing media environment enables companies to approach their customers in various ways. In particular, social network services, which have developed rapidly, provide an environment where customers can talk freely about products, while also serving companies as a channel for delivering customized information. To succeed online, companies need to build not only the relationship between company and customer but also the relationships among customers. In response to the online environment and continuous technological development, companies have tirelessly devised novel marketing strategies; especially as one-to-one marketing has become available, maintaining relationship marketing with customers has become more important. Among the many SNS platforms, Facebook, which many companies use as a communication channel, provides a fan page service for each company to support its business. A Facebook fan page is a platform on which events, information, and announcements can be shared with customers using text, videos, and pictures. Companies open their own fan pages to publicize their businesses; such pages function like company websites and also have the character of brand communities, such as blogs. As Facebook has become a major medium of communication with customers, companies recognize its importance as an effective marketing channel, but they still need to investigate the business performance achieved through it. Although Facebook fan pages have great potential, including a community function among users that other platforms lack, treating companies' fan pages as communities and analyzing them remains an open problem. This study explores the relationships among customers through the network of Facebook fan page users. Previous studies of companies' Facebook fan pages focused on finding effective operational directions by analyzing how companies use them. In this study, by contrast, we derive structural network variables with which customer commitment can be measured, applying social network analysis methodology, and empirically investigate the influence of the structural characteristics of the network on companies' business performance. From each company's Facebook fan page, we extract the network of users who engaged in communication with the company: a one-mode undirected binary network in which users are the nodes and their marketing-related interactions are the links. From this network, we derive structural variables that can explain the customer commitment of users who pressed "like," commented on, or shared each company's Facebook marketing messages, by calculating density, the global clustering coefficient, mean geodesic distance, and diameter.
Using companies' historical performance, such as net income and Tobin's Q, as the outcome variables, this study investigates the influence of these network characteristics on business performance. For this purpose, network data were collected for 54 KOSPI-listed companies that had posted more than 100 articles on their Facebook fan pages during the data collection period, and the network indicators of each company were derived. The performance indicators were calculated from values posted on the DART website of the Financial Supervisory Service. From an academic perspective, this study suggests a new approach, via social network analysis methodology, for researchers studying the business use of social media channels. From a practical perspective, it proposes more substantive measurements of marketing performance to companies conducting marketing through social media, and it is expected to provide a foundation for establishing smart business strategies using network indicators.
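
A minimal sketch of the four network indicators named above, computed with networkx on a toy one-mode undirected fan-page network (nodes are users, edges are co-engagement on a company's posts). The edge list is illustrative only.

```python
import networkx as nx

# Toy fan-page network: users as nodes, shared marketing interactions as edges.
G = nx.Graph()
G.add_edges_from([("u1", "u2"), ("u2", "u3"), ("u3", "u1"),
                  ("u3", "u4"), ("u4", "u5")])

print("density:", nx.density(G))
print("global clustering coefficient:", nx.transitivity(G))
# Mean geodesic distance and diameter are defined on connected graphs.
print("mean geodesic distance:", nx.average_shortest_path_length(G))
print("diameter:", nx.diameter(G))
```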

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintaining ICT infrastructure and preventing failures through anomaly detection is becoming increasingly important. System monitoring data is multidimensional time series data, and handling it requires considering the characteristics of both multidimensional data and time series data. For multidimensional data, correlations between variables must be considered; existing probability-based, linear, and distance-based methods degrade because of the so-called curse of dimensionality. Time series data is typically preprocessed with sliding windows and time series decomposition for autocorrelation analysis, but these techniques further increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active efforts to apply machine learning and artificial neural networks. Statistically based methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula under parametric statistics and detect abnormality by comparing predicted and actual values; their performance drops when the model is not solid or the data contain noise or outliers, and they carry the restriction that training data should be free of noise and outliers. An autoencoder, an artificial neural network trained to reproduce its input as closely as possible, has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy probability-distribution or linearity assumptions, and it can learn without labeled training data. However, it still has limits in identifying local outliers in multidimensional data, and the dimensionality of the input grows greatly with the characteristics of time series data. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to mitigate the limitations of local outlier identification in multidimensional data. Multimodal architectures are commonly used to learn different types of inputs, such as voice and images; the different modals share the autoencoder's bottleneck and thereby learn their correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs usually use categorical variables, but in this study time was used as the condition to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance on 41 variables was examined for the proposed and comparison models. Reconstruction performance differs by variable; reconstruction works well for the Memory, Disk, and Network modals, whose loss values are small in all three autoencoder models.
The Process modal did not show a significant difference across the three models, and the CPU modal showed the best performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance in the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators the ranking was CMAE, MAE, then UAE. In particular, recall was 0.9828 for CMAE, confirming that it detects almost all abnormalities. The model's accuracy improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond performance: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the dimensional increase they cause can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
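
A minimal PyTorch sketch in the spirit of CMAE: one encoder per modal feeds a shared bottleneck, and a time-of-day condition is concatenated at the encoder and decoder so the model can learn daily periodicity; the anomaly score is the total reconstruction error. Layer sizes, the sin/cos time encoding, and the modal widths are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims, cond_dim=2, bottleneck=8):
        super().__init__()
        # One encoder per modal (e.g. CPU, Memory, Disk, Network).
        self.encoders = nn.ModuleList(nn.Linear(d, 16) for d in modal_dims)
        self.to_z = nn.Linear(16 * len(modal_dims) + cond_dim, bottleneck)
        self.decoders = nn.ModuleList(
            nn.Linear(bottleneck + cond_dim, d) for d in modal_dims)

    def forward(self, modals, cond):
        # Modals share the bottleneck, so cross-modal correlation is learned.
        h = torch.cat([torch.relu(e(m)) for e, m in zip(self.encoders, modals)]
                      + [cond], dim=1)
        z = torch.relu(self.to_z(h))
        zc = torch.cat([z, cond], dim=1)  # condition also guides decoding
        return [d(zc) for d in self.decoders]

# Illustrative: 4 modals of different widths; time encoded as sin/cos of hour.
modals = [torch.randn(32, d) for d in (8, 6, 5, 4)]
hour = torch.randint(0, 24, (32, 1)).float()
cond = torch.cat([torch.sin(2 * torch.pi * hour / 24),
                  torch.cos(2 * torch.pi * hour / 24)], dim=1)
recon = CMAE((8, 6, 5, 4))(modals, cond)

# Anomaly score: total reconstruction error across modals.
score = sum(((r - m) ** 2).mean(dim=1) for r, m in zip(recon, modals))
print(score.shape)  # torch.Size([32])
```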

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.111-126
    • /
    • 2020
  • Most industries, exposed to competitive environments, have recently become aware of the importance of customer lifetime value. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers: retaining existing customers is far more economical, and the acquisition cost of a new customer is known to be five to six times the cost of retaining an existing one. Companies that effectively prevent churn and improve retention rates are known not only to increase profitability but also to improve their brand image through higher customer satisfaction. Customer churn prediction, long conducted as a sub-area of CRM research, has recently become more important as a big-data-based performance marketing theme owing to advances in business machine learning. Until now, churn prediction research has been most active in sectors where competition is intense and churn management urgent, such as mobile telecommunications, finance, distribution, and gaming. These studies focused on improving the performance of the churn prediction model itself, for example by comparing the performance of various models, exploring features effective for churn forecasting, or developing new ensemble techniques, and most treated the entire customer base as a single group when developing the predictive model, limiting their practical usefulness. The main purpose of existing research was thus to improve the predictive model itself, and relatively little work has sought to improve the overall churn prediction process. In practice, customers exhibit different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat all customers as a single group. Instead, it is desirable to segment customers according to classification criteria such as loyalty and operate an appropriate churn prediction model for each segment in order to predict churn effectively in heterogeneous industries. Some studies have indeed subdivided customers using clustering techniques and applied a churn prediction model per customer group. Although this can produce better predictions than a single model for the entire customer population, there is still room for improvement, because clustering is a mechanical, exploratory grouping technique that computes distances from the input variables and does not reflect the strategic intent of the firm, such as its view of loyalty. Assuming that successful churn management is achieved more through improvements in the overall process than through the performance of the model itself, this study proposes a segment-based customer churn prediction process based on two-dimensional customer loyalty (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation).
CCP/2DL is a churn prediction process that segments customers along two dimensions of loyalty, quantitative and qualitative, performs secondary grouping of the customer segments according to their churn patterns, and then independently applies heterogeneous churn prediction models to each churn pattern group. To assess the relative merit of the proposed process, its performance was compared with the two most commonly applied alternatives: the general churn prediction process and the clustering-based churn prediction process. The general churn prediction process here refers to predicting churn for the customer base as a single group with a machine learning model, using the most common churn prediction approach; the clustering-based process first segments customers with clustering techniques and then builds a churn prediction model for each group. In an application with a global NGO, the proposed CCP/2DL showed better churn prediction performance than the other methodologies. The process is not only effective for predicting churn but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out related performance marketing activities.
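
A minimal sketch of the CCP/2DL flow under stated assumptions: customers are placed on a 2x2 grid of quantitative vs. qualitative loyalty (here via simple median splits, whereas the paper derives segments from strategic loyalty criteria), and an independent churn model is trained per segment. The features, labels, and choice of classifier are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 5))      # behavioral features (illustrative)
quant = rng.random(n)            # e.g. transaction frequency/amount score
qual = rng.random(n)             # e.g. engagement/attitude score
churn = rng.integers(0, 2, size=n)

# Two-dimensional loyalty segmentation via median splits (assumed rule).
segment = ((quant > np.median(quant)).astype(int) * 2
           + (qual > np.median(qual)).astype(int))

# Train an independent churn model for each loyalty segment.
models = {}
for s in np.unique(segment):
    idx = segment == s
    models[s] = RandomForestClassifier(random_state=0).fit(X[idx], churn[idx])

# Predict churn for each customer with the model of its own segment.
probs = np.array([models[s].predict_proba(x[None])[0, 1]
                  for s, x in zip(segment, X)])
print(probs[:5].round(3))
```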

A Study on the Evaluation of Fertilizer Loss in the Drainage(Waste) Water of Hydroponic Cultivation, Korea (수경재배 유출 배액(폐양액)의 비료 손실량 평가 연구)

  • Jinkwan Son;Sungwook Yun;Jinkyung Kwon;Jihoon Shin;Donghyeon Kang;Minjung Park;Ryugap Lim
    • Journal of Wetlands Research
    • /
    • v.25 no.1
    • /
    • pp.35-47
    • /
    • 2023
  • Facility horticulture and hydroponic cultivation are increasing in Korea, which requires managing the drainage (waste nutrient solution) they generate. In this study, the amount of fertilizer contained in the discharged waste solution was determined and valued at market prices, with the aim of reducing water treatment costs and recycling fertilizer components. The evaluation was based on water quality analyses of the waste solution from major crops, namely tomatoes, paprika, cucumbers, and strawberries; the P component was converted to the equivalent amount of phosphoric acid (P2O5). The nitrogen (N) discharge was calculated as 1,145.90 kg·ha-1 for tomatoes, 920.43 kg·ha-1 for paprika, 804.16 kg·ha-1 for cucumbers, and 405.83 kg·ha-1 for strawberries, and the P2O5 content as 830.65 kg·ha-1 for paprika, 622.32 kg·ha-1 for tomatoes, and 477.67 kg·ha-1 for cucumbers. Other nutrients such as potassium (K), calcium (Ca), magnesium (Mg), iron (Fe), and manganese (Mn) were also found to be discharged. The price per kg of each component, calculated by averaging the prices of fertilizers sold on the market, was evaluated in KRW as: N 860.7, P 2,378.2, K 2,121.7, Ca 981.2, Mg 1,036.3, Fe 126,076.9, Mn 62,322.1, Zn 15,825.0, Cu 31,362.0, B 4,238.0, Mo 149,041.7. The annual fertilizer loss for each crop was then calculated by combining the price per kg, the waste solution concentrations analyzed above, and the average annual discharge volume of hydroponic cultivation. The average value of the fertilizer components across the four hydroponic crops was KRW 5,475,361.1, with tomatoes valued at KRW 6,995,622.3, paprika at KRW 7,384,923.8, cucumbers at KRW 5,091,607.9, and strawberries at KRW 2,429,290.6. If hydroponic drainage were treated on site or reused before discharge rather than released into rivers and handled as a pollutant, it could provide valuable reusable fertilizer components while reducing water treatment costs.
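
A worked example of the valuation arithmetic: fertilizer value = discharge (kg·ha-1 per year) x unit price (KRW/kg), summed over components. The two inputs below are the tomato N and P2O5 figures quoted in the abstract; treating the quoted P price as applying to the P2O5 amount is an assumption, and the paper's full totals also include K, Ca, Mg, and the micronutrients.

```python
# Tomato discharges and market-average unit prices from the abstract.
discharge_kg_ha = {"N": 1145.90, "P2O5": 622.32}   # kg per ha per year
price_krw_kg = {"N": 860.7, "P2O5": 2378.2}        # KRW per kg

# Component value = discharge x unit price.
value = {k: discharge_kg_ha[k] * price_krw_kg[k] for k in discharge_kg_ha}
print(value)                 # per-component value, KRW per ha per year
print(sum(value.values()))   # partial total for N + P2O5 only
```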

Consumer's Negative Brand Rumor Acceptance and Rumor Diffusion (소비자의 부정적 브랜드 루머의 수용과 확산)

  • Lee, Won-jun;Lee, Han-Suk
    • Asia Marketing Journal
    • /
    • v.14 no.2
    • /
    • pp.65-96
    • /
    • 2012
  • Brand has received much attention in marketing research. When consumers consume products or services, they are exposed to many brand-related stimuli: brand personality, brand experience, brand identity, brand communications, and so on. A special kind of crisis occasionally confronting companies' brand management today is the brand-related rumor. An important influence on consumers' purchase decisions is the word of mouth spread by other consumers, and most decisions are influenced by others' recommendations. In light of this influence, firms have good reason to study and understand consumer-to-consumer communication such as brand rumors. The importance of brand rumors to marketers is increasing as the number of internet users and SNS (social network service) sites grows: thanks to internet technology, people can spread rumors without the limits of time and place. Yet relatively few studies have been published in marketing journals, and little is known about brand rumors in the marketplace. The study of rumor has a long history in all the major social sciences, but very few studies have dealt with the antecedents and consequences of brand rumors of any kind. A rumor has generally been described as a story or statement in general circulation without proper confirmation or certainty as to fact, or as an unconfirmed proposition passed along from person to person. Rosnow (1991) claimed that rumors are transmitted because people need to explain ambiguous and uncertain events, and talking about them reduces the associated anxiety. Negative rumors in particular are believed to have the potential to devastate a company's reputation and its relations with customers. From a marketer's perspective, negative rumors are harmful and extremely difficult to control; they threaten a company's sustainability and sometimes lead to a negative brand image and the loss of customers. There is thus growing concern that negative rumors can damage brand reputations and lead to financial disaster. In this study we aimed to identify the antecedents of brand rumor transmission and to investigate the effects of brand rumor characteristics on the intention to spread the rumor. We also identified key components of personal acceptance of brand rumors. Taking a contextualist perspective, we tried to unify the traditional psychological and sociological views. In this unified approach, we defined the characteristics of a brand rumor by five major variables found to influence rumor spread intention: usefulness, source credibility, message credibility, worry, and vividness, which together encompass the multi-level elements of a brand rumor. We also selected product involvement as a control variable. For the empirical research, an imaginary Korean kimchi brand and a related contamination rumor were created, and questionnaires were collected from 178 Korean respondents. Data were collected from college students who had experience with the focal product; college students were regarded as good subjects because they tend to express their opinions in detail. The PLS (partial least squares) method was adopted to analyze the relations between the variables in the model. The most widely adopted causal modeling method is LISREL, but it is poorly suited to relatively small samples and can yield improper solutions in some cases.
PLS was developed to avoid some of these limitations and provide more reliable results. To test reliability, Cronbach's alpha was examined using SPSS 16, and all values were appropriate, ranging between .802 and .953. Confirmatory factor analysis was then conducted successfully, and structural equation modeling was performed on the research model using smartPLS (ver. 2.0). Overall, the R2 of rumor acceptance was .476 and the R2 of rumor transmission intention was .218, and the overall model showed a satisfactory fit. The empirical results can be summarized as follows. The brand rumor characteristics of source credibility, message credibility, worry, and vividness affect the argument strength of the rumor, and argument strength in turn affects rumor spread intention. The relationship between perceived usefulness and argument strength, on the other hand, is not significant, and the moderating effect of product involvement on the relation between argument strength and rumor word-of-mouth intention is not supported either. The study suggests several managerial and academic implications, including implications for corporate crisis management planning, PR, and brand management. The results show marketers that rumor is a critical factor in managing strong brand assets; for researchers, brand rumor should become an important topic for understanding the relationship between consumers and brands. Recently, many brand managers and marketers have taken a short-term view, focusing only on strengthening the positive brand image. This study suggests that effective brand management requires managing negative brand rumors with a long-term view of marketing decisions.
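
A minimal sketch of the reliability check reported above: Cronbach's alpha computed from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). The response matrix is randomly generated, so the printed alpha will be near zero; the study's real scales yielded alphas between .802 and .953.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative responses: 178 respondents (the study's sample size),
# 4 Likert items for one scale.
items = rng.integers(1, 8, size=(178, 4)).astype(float)

k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print(round(alpha, 3))
```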
